drivers: rmnet_shs: Disable RPS for ICMP packets

Previously, ICMP packets would be ignored by SHS, causing RPS to send
them to a random CPU in the configured RPS mask. This could lead to
ICMP packets being sent to the gold cluster in some cases, which
results in worse power and higher latency, as the gold cluster might
have to be woken from power collapse in order to process a single
packet.

This change causes SHS to instead mark the ICMP packet as having a
valid hash while actually leaving the hash value null. RPS interprets
this as an invalid CPU and processes the packet on the current CPU.

From various experiments, rmnet to network stack latency appears to
take on average about ~0.8ms longer for inter-cluster ping processing,
so queuing to the gold cluster is not advised.

Additionally, queuing to a separate silver core in the silver cluster
only increased average latency by about ~0.01ms.

Change-Id: I631061890b1edb03d2e680b7f6d19f310d838ed1
Acked-by: Raul Martinez <mraul@qti.qualcomm.com>
Signed-off-by: Subash Abhinov Kasiviswanathan <subashab@codeaurora.org>