qcacmn: Use sched_clock instead of jiffies to calc yield time

The current implementation uses jiffies to calculate the yield time and
yields after 2 jiffies. The problem with this is that we end up yielding
anywhere between 1 and 2 jiffies, since jiffies is incremented at 10 ms
intervals; sometimes we even take more than 2 jiffies to yield. This
prevents the deferrable_timer from being updated if it is running on the
same CPU where hif_napi_poll is executing, which may prevent the CPU
frequencies from being updated and result in fluctuations.

Use the sched_clock kernel API to calculate a precise yield time, and
reduce the yield time to 10 ms.

Some stats:
- How many times did we have to yield because of time
	0.670391061 %
	Total number of completes	1790
	Total number of complete 0	12
- When we yielded, how much did we exceed the time limit(10ms) by (ms)
	Average	1.327444
	Max	3.381667
	Min	0.037709
- How many HTT Messages did we process when we had to yield
	Average	22.41667
	Max	33
	Min	18
- How much time did we take when we had to yield: 10 ms + the delta above

- How much time did we take when we did not yield (ms)
	Average	0.907641
	Max	8.649
	Min	0
- How many HTT Messages did we process when we did not have to yield
	Average	2.193476
	Max	24
	Min	1

Change-Id: I0d42c716ab8941b1de22a456447797c9ba5475c8
CRs-Fixed: 1089902
diff --git a/hif/src/ce/ce_internal.h b/hif/src/ce/ce_internal.h
index ff649a6..d1fe309 100644
--- a/hif/src/ce/ce_internal.h
+++ b/hif/src/ce/ce_internal.h
@@ -139,7 +139,8 @@
 	bool force_break;	/* Flag to indicate whether to
 				 * break out the DPC context */
 
-	qdf_time_t ce_service_yield_time;
+	/* nanosecond timestamp at which to yield control of napi poll */
+	unsigned long long ce_service_yield_time;
 	unsigned int receive_count;	/* count Num Of Receive Buffers
 					 * handled for one interrupt
 					 * DPC routine */
diff --git a/hif/src/ce/ce_service.c b/hif/src/ce/ce_service.c
index 2b84936..4a57a43 100644
--- a/hif/src/ce/ce_service.c
+++ b/hif/src/ce/ce_service.c
@@ -219,8 +219,9 @@
 {
 	bool yield, time_limit_reached, rxpkt_thresh_reached = 0;
 
-	time_limit_reached = qdf_system_time_after_eq(qdf_system_ticks(),
-					ce_state->ce_service_yield_time);
+	time_limit_reached =
+		sched_clock() > ce_state->ce_service_yield_time;
+
 	if (!time_limit_reached)
 		rxpkt_thresh_reached = hif_max_num_receives_reached
 					(scn, ce_state->receive_count);
@@ -1839,7 +1840,6 @@
 						 FAST_RX_SOFTWARE_INDEX_UPDATE,
 						 NULL, NULL, sw_index);
 			dest_ring->sw_index = sw_index;
-
 			ce_fastpath_rx_handle(ce_state, cmpl_msdus,
 					      MSG_FLUSH_NUM, ctrl_addr);
 
@@ -1910,7 +1910,11 @@
 }
 #endif /* WLAN_FEATURE_FASTPATH */
 
-#define CE_PER_ENGINE_SERVICE_MAX_TIME_JIFFIES 2
+/* Maximum time in nanoseconds that the CE per-engine service may run
+ * before yielding; roughly one jiffy (10 ms at HZ=100).
+ */
+#define CE_PER_ENGINE_SERVICE_MAX_YIELD_TIME_NS (10 * 1000 * 1000)
+
 /*
  * Guts of interrupt handler for per-engine interrupts on a particular CE.
  *
@@ -1947,8 +1951,9 @@
 	/* Clear force_break flag and re-initialize receive_count to 0 */
 	CE_state->receive_count = 0;
 	CE_state->force_break = 0;
-	CE_state->ce_service_yield_time = qdf_system_ticks() +
-		CE_PER_ENGINE_SERVICE_MAX_TIME_JIFFIES;
+	CE_state->ce_service_yield_time =
+		sched_clock() +
+		(unsigned long long)CE_PER_ENGINE_SERVICE_MAX_YIELD_TIME_NS;
 
 
 	qdf_spin_lock(&CE_state->ce_index_lock);