Merge "coresight: csr: add snapshot of CSR driver" into msm-4.8
diff --git a/Documentation/arm/msm/msm_ipc_logging.txt b/Documentation/arm/msm/msm_ipc_logging.txt
new file mode 100644
index 0000000..9d42200
--- /dev/null
+++ b/Documentation/arm/msm/msm_ipc_logging.txt
@@ -0,0 +1,361 @@
+Introduction
+============
+
+This module logs events for any module/driver that takes part in
+Inter Processor Communication (IPC). Some IPC drivers, such as
+message routers and multiplexers, act as passive pipes and need a
+mechanism to log their events. Since such drivers handle a large
+volume of traffic/events, routing their logs through the kernel log
+would render the kernel log unusable by other drivers and would also
+degrade the performance of the IPC drivers. This module helps log
+such high-frequency IPC driver events while keeping the standard
+kernel logging mechanism intact.
+
+Hardware description
+====================
+
+This module does not drive any hardware resource and will only use the
+kernel memory-space to log the events.
+
+Software description
+====================
+
+Design Goals
+------------
+This module is designed to
+ * support logging for drivers handling large amount of
+ traffic/events
+ * define & differentiate events/logs from different drivers
+ * support both id-based and stream-based logging
+ * support extracting the logs from both live target & memory dump
+
+IPC Log Context
+----------------
+
+This module supports logging by multiple drivers. To differentiate
+between the drivers using this logging mechanism, each
+driver is assigned a unique context by this module. Associated with
+each context is its logging space, dynamically allocated from the kernel
+memory-space, so that events logged using one
+context do not interfere with other contexts.
+
+Event Logging
+--------------
+
+Every event is logged as a <Type: Size: Value> combination. The Type
+field identifies the type of the event being logged. The Size field
+holds the size of the log information. The Value field holds the
+actual information being logged. This approach supports both id-based
+and stream-based logging, as well as logging sub-events of an event.
+This module provides helper routines to encode/decode logs to/from
+this format.
+
+Encode Context
+---------------
+
+Encode context is a temporary storage space used by client
+drivers to log events in <Type: Size: Value> format. A client driver
+performs an encode start operation to initialize the encode context
+data structure, then logs its events into the
+encode context. Upon completion of event logging, the client driver
+performs an encode end operation to finalize the encode context data
+structure, which is then
+written into the client driver's IPC Log Context. The maximum
+event log size is defined as 256 bytes.
+
+Log Space
+----------
+
+Each context (Figure 1) has an associated log space, which is dynamically
+allocated from the kernel memory-space. The log space is organized as a list of
+1 or more kernel memory pages. Each page (Figure 2) contains header information
+which is used to differentiate the log kernel page from the other kernel pages.
+
+
+ 0 ---------------------------------
+ | magic_no = 0x25874452 |
+ ---------------------------------
+ | nmagic_no = 0x52784425 |
+ ---------------------------------
+ | version |
+ ---------------------------------
+ | user_version |
+ ---------------------------------
+ | log_id |
+ ---------------------------------
+ | header_size |
+ ---------------------------------
+ | |
+ | |
+ | name [20 chars] |
+ | |
+ | |
+ ---------------------------------
+ | run-time data structures |
+ ---------------------------------
+ Figure 1 - Log Context Structure
+
+
+ 31 0
+ 0 ---------------------------------
+ | magic_no = 0x52784425 |
+ ---------------------------------
+ | nmagic_no = 0xAD87BBDA |
+ ---------------------------------
+ |1| page_num |
+ ---------------------------------
+ | read_offset | write_offset |
+ ---------------------------------
+ | log_id |
+ ---------------------------------
+ | start_time low word |
+ | start_time high word |
+ ---------------------------------
+ | end_time low word |
+ | end_time high word |
+ ---------------------------------
+ | context offset |
+ ---------------------------------
+ | run-time data structures |
+ . . . . .
+ ---------------------------------
+ | |
+ | Log Data |
+ . . .
+ . . .
+ | |
+ --------------------------------- PAGE_SIZE - 1
+ Figure 2 - Log Page Structure
+
+In addition to extracting logs at runtime through DebugFS, IPC Logging has been
+designed to allow extraction of logs from a memory dump. The magic numbers,
+timestamps, and context offset are all added to support the memory-dump
+extraction use case.
+
+Design
+======
+
+Alternate solutions discussed include using the kernel and SMEM logs,
+which are limited in size; consuming them for IPC events renders them
+unusable by other drivers. In addition, kernel logging to the serial
+console slows the drivers down considerably and can sometimes lead to
+an apps-processor watchdog bite.
+
+Power Management
+================
+
+Not-Applicable
+
+SMP/multi-core
+==============
+
+This module uses spinlocks & mutexes to handle multi-core safety.
+
+Security
+========
+
+Not-Applicable
+
+Performance
+===========
+
+Based on experimental data, this logging mechanism is not expected to
+cause significant performance degradation. In the worst case, it can
+cause a 1 - 2 percent degradation in the throughput of the IPC drivers.
+
+Interface
+=========
+
+Exported Data Structures
+------------------------
+struct encode_context {
+ struct tsv_header hdr;
+ char buff[MAX_MSG_SIZE];
+ int offset;
+};
+
+struct decode_context {
+ int output_format;
+ char *buff;
+ int size;
+};
+
+Kernel-Space Interface APIs
+----------------------------
+/*
+ * ipc_log_context_create: Create an IPC log context
+ *
+ * @max_num_pages: Number of pages of logging space required (max. 10)
+ * @mod_name : Name of the directory entry under DEBUGFS
+ * @user_version : Version number of user-defined message formats
+ *
+ * returns reference to context on success, NULL on failure
+ */
+void *ipc_log_context_create(int max_num_pages,
+                             const char *mod_name,
+                             uint16_t user_version);
+
+/*
+ * msg_encode_start: Start encoding a log message
+ *
+ * @ectxt: Temporary storage to hold the encoded message
+ * @type: Root event type defined by the module which is logging
+ */
+void msg_encode_start(struct encode_context *ectxt, uint32_t type);
+
+/*
+ * msg_encode_end: Complete the message encode process
+ *
+ * @ectxt: Temporary storage which holds the encoded message
+ */
+void msg_encode_end(struct encode_context *ectxt);
+
+/*
+ * tsv_timestamp_write: Writes the current timestamp count
+ *
+ * @ectxt: Context initialized by calling msg_encode_start()
+ *
+ * Returns 0 on success, -ve error code on failure
+ */
+int tsv_timestamp_write(struct encode_context *ectxt);
+
+/*
+ * tsv_pointer_write: Writes a data pointer
+ *
+ * @ectxt: Context initialized by calling msg_encode_start()
+ * @pointer: Pointer value to write
+ *
+ * Returns 0 on success, -ve error code on failure
+ */
+int tsv_pointer_write(struct encode_context *ectxt, void *pointer);
+
+/*
+ * tsv_int32_write: Writes a 32-bit integer value
+ *
+ * @ectxt: Context initialized by calling msg_encode_start()
+ * @n: Integer to write
+ *
+ * Returns 0 on success, -ve error code on failure
+ */
+int tsv_int32_write(struct encode_context *ectxt, int32_t n);
+
+/*
+ * tsv_byte_array_write: Writes a byte array
+ *
+ * @ectxt: Context initialized by calling msg_encode_start()
+ * @data: Location of data
+ * @data_size: Size of data to be written
+ *
+ * Returns 0 on success, -ve error code on failure
+ */
+int tsv_byte_array_write(struct encode_context *ectxt,
+ void *data, int data_size);
+
+/*
+ * ipc_log_write: Write the encoded message into the log space
+ *
+ * @ctxt: IPC log context where the message has to be logged into
+ * @ectxt: Temporary storage containing the encoded message
+ */
+void ipc_log_write(unsigned long ctxt, struct encode_context *ectxt);
+
+/*
+ * ipc_log_string: Helper function to log a string
+ *
+ * @dlctxt: IPC Log Context created using ipc_log_context_create()
+ * @fmt: Data specified using format specifiers
+ */
+int ipc_log_string(unsigned long dlctxt, const char *fmt, ...);
+
+/*
+ * tsv_timestamp_read: Reads a timestamp
+ *
+ * @ectxt: Context retrieved by reading from log space
+ * @dctxt: Temporary storage to hold the decoded message
+ * @format: Output format while dumping through DEBUGFS
+ */
+void tsv_timestamp_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format);
+
+/*
+ * tsv_pointer_read: Reads a data pointer
+ *
+ * @ectxt: Context retrieved by reading from log space
+ * @dctxt: Temporary storage to hold the decoded message
+ * @format: Output format while dumping through DEBUGFS
+ */
+void tsv_pointer_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format);
+
+/*
+ * tsv_int32_read: Reads a 32-bit integer value
+ *
+ * @ectxt: Context retrieved by reading from log space
+ * @dctxt: Temporary storage to hold the decoded message
+ * @format: Output format while dumping through DEBUGFS
+ */
+void tsv_int32_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format);
+
+/*
+ * tsv_byte_array_read: Reads a byte array/string
+ *
+ * @ectxt: Context retrieved by reading from log space
+ * @dctxt: Temporary storage to hold the decoded message
+ * @format: Output format while dumping through DEBUGFS
+ */
+void tsv_byte_array_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format);
+
+/*
+ * add_deserialization_func: Register a deserialization function
+ * to unpack the sub-events of a main event
+ *
+ * @ctxt: IPC log context to which the deserialization function has
+ * to be registered
+ * @type: Main/Root event, defined by the module which is logging, to
+ * which this deserialization function has to be registered.
+ * @dfunc: Deserialization function to be registered
+ *
+ * Returns 0 on success, -ve value on failure
+ */
+int add_deserialization_func(unsigned long ctxt, int type,
+ void (*dfunc)(struct encode_context *,
+ struct decode_context *));
+
+Driver parameters
+=================
+
+Not-Applicable
+
+Config options
+==============
+
+Not-Applicable
+
+Dependencies
+============
+
+This module partially depends on CONFIG_DEBUG_FS in order to dump the
+logs through debugfs. If CONFIG_DEBUG_FS is disabled, the above
+mentioned helper functions perform no operation and, where non-void,
+return an appropriate error code. Under such circumstances the logs can
+only be extracted through a memory dump.
+
+User space utilities
+====================
+
+DEBUGFS
+
+Other
+=====
+
+Not-Applicable
+
+Known issues
+============
+
+None
+
+To do
+=====
+
+None
diff --git a/Documentation/arm/msm/msm_ipc_router.txt b/Documentation/arm/msm/msm_ipc_router.txt
new file mode 100644
index 0000000..f8626ec
--- /dev/null
+++ b/Documentation/arm/msm/msm_ipc_router.txt
@@ -0,0 +1,404 @@
+Introduction
+============
+
+Inter Process Communication (IPC) Message Router
+
+IPC Router provides a connectionless message routing service between
+multiple processes in a MSM setup. The communicating processes can
+run either in the same processor or in a different processor within
+the MSM setup. The IPC Router has been designed to
+    1) Route messages of any type
+ 2) Support a broader network of processors
+The IPC Router follows the same protocol as the existing RPC Router
+in the kernel while communicating with its peer IPC Routers.
+
+Hardware description
+====================
+
+The IPC Router doesn't implement any specific hardware driver.
+The IPC Router uses the existing hardware drivers to transport messages
+across to the peer router. IPC Router contains a XPRT interface layer to
+handle the different types of transports/links. This interface layer
+abstracts the underlying transport complexities from the router and
+provides a packet/message interface with the transports.
+
+Software description
+====================
+
+The IPC Router is designed to support a client-server model. The
+clients and servers communicate with one another by exchanging data
+units known as messages. A message is a byte string from 1 to 64K bytes
+long. The IPC Router provides a connectionless message routing service
+between the clients and servers i.e. any client/server can communicate
+with any other client/server in the network of processors.
+
+Network Topology Overview:
+--------------------------
+
+The system is organized as a network of nodes. Each processor in the
+network is the most fundamental element, called a node. The complete
+network is hierarchically structured, i.e. the network is divided into
+tiers and each tier is fully-meshed. The following figure shows an
+example network topology.
+
+
+ ---N1--- ---N4---
+ | | | |
+ | | | |
+ N2----N3-------N5-----N6
+ | |
+ | |
+ ---N7----
+ | |
+ | |
+ N8------N9
+
+Each node in the complete network is identified using a unique node id
+(Nx in the example network). In the example network, nodes N1, N2 & N3
+are fully-meshed to form a tier 1 network. Similarly, nodes N4, N5 & N6
+form another tier 1 network, and nodes N7, N8 & N9 form a third tier 1
+network. These three tier 1 networks are fully-meshed to form a tier 2
+network.
+
+Each transport/link in the network is identified using a unique name/id
+called the XPRT id. This XPRT id is used by the nodes to identify the
+link to be used while sending a message to a specific destination.
+In addition, each transport/link in the network is assigned a link id,
+which identifies the tier to which the link belongs.
+This link marking is used to avoid routing loops while forwarding
+broadcast messages: an incoming message is only forwarded onto an
+egress link whose link id differs from that of the ingress link.
+
+IPC Router Addressing Overview:
+-------------------------------
+
+Each client/server in the network is identified using a unique
+<Node_id:Port_id> combination. Node_id identifies the processor on which
+a client/server is running. Port_id is a unique id within a node. This
+Port_id is assigned by the IPC Router in that node when a client/server
+comes up. The Node_id & Port_id are 32 bits each.
+
+Port_id 0xFFFFFFFE is reserved for Router &
+0xFFFFFFFF is reserved for broadcast messages.
+
+Each server in the network can also be addressed using a service name.
+The service name is of the form <service(32 bits):instance(32 bits)>.
+When a server comes up, it binds itself with a service name. This name
+information along with the <Node_id:Port_id> is broadcast onto the
+entire network.
+
+Control Path:
+-------------
+
+IPC Router uses control messages to communicate and propagate
+system-wide events to the other routers in the system. Some of the
+events include:
+ 1) Node Status
+ 2) Server Status
+ 3) Client Status
+ 4) Flow Control Request/Confirmation
+
+Message Header:
+---------------
+
+IPC Router prepends a header to every message that it exchanges with
+other IPC Routers. The receiving IPC Router uses the header to identify
+the source and destination of the message, the size and type of the
+message, and to handle any flow control requests. The IPC Router
+header format is as follows:
+
+ 0 31
+ -------------------------------------------------
+ | Version |
+ -------------------------------------------------
+ | Message Type |
+ -------------------------------------------------
+ | Source Node ID |
+ -------------------------------------------------
+ | Source Port ID |
+ -------------------------------------------------
+ | Confirm RX |
+ -------------------------------------------------
+ | Payload Length |
+ -------------------------------------------------
+ | Destination Node ID |
+ -------------------------------------------------
+ | Destination Port ID |
+ -------------------------------------------------
+
+Message Header v2(Optimized):
+-----------------------------
+
+The following optimization has been done to the IPC Router header to
+make it concise, align with the system requirement and enable future
+expansion:
+
+ 0 8 16 24 31
+ -----------------------------------------------------------------
+ | Version | Message Type | Control Flag |
+ -----------------------------------------------------------------
+ | Payload Length |
+ -----------------------------------------------------------------
+ | Source Node ID | Source Port ID |
+ -----------------------------------------------------------------
+ | Destination Node ID | Destination Port ID |
+ -----------------------------------------------------------------
+
+Control Flag:
+
+ 0 14 15
+ ------------------------------------------------------------------
+ | Reserved | Opt. Hdr. | Confirm RX |
+ ------------------------------------------------------------------
+
+IPC Router identifies and processes the header depending on the version
+field. The Confirm RX field in message header v1 becomes part of the
+control flag. All the other fields are reduced in size to align with the
+system requirement.
+
+Optional Header:
+----------------
+An optional header bit field is introduced in the control flag to handle
+any unforeseen future requirement that this header cannot handle. When
+that bit is set, an optional header follows the current header. The
+optional header format is as follows:
+
+ 0 8 16 31
+ -----------------------------------------------------------------
+ | Length(words) | Type | Control Flag |
+ -----------------------------------------------------------------
+ | |
+ | Optional Header Contents |
+ | |
+ -----------------------------------------------------------------
+
+Design
+======
+
+The IPC Router is organized into 2 layers:
+ 1) Router Core layer
+ 2) Router - XPRT Interface layer
+
+
+This organization allows the router to abstract the XPRT's complexities
+from the core router functionality. The Router Core layer
+performs the following core functions:
+ 1) Message Routing
+ 2) Distributed name service
+ 3) Flow control between ports
+The Router core layer contains the following important data structures
+to perform the core functions in their respective order:
+ 1) Routing Table
+ 2) Table of Active Servers
+ 3) Table of Active Remote ports
+All these data structures get updated based on the events passed through
+the control path.
+
+
+The Router - XPRT Interface layer hides the underlying transport
+complexities and provides an abstracted packet interface to the
+Router Core layer. The Router - XPRT Interface layer registers itself
+with the Router Core layer upon complete initialization of the XPRT
+and, upon registration, exports the following functionalities to the
+Router Core:
+ 1) Read from the XPRT
+ 2) # of bytes of data available to read
+ 3) Write to the XPRT
+ 4) Space available to write to the XPRT
+ 5) Close the XPRT
+
+
+The user behavioral model of the IPC Router should be
+ 1) Create a port
+ 2) If server, register a name to the port
+ 3) If remote port not known, lookup through the name
+ 4) Send messages over the port to the remote port
+ 5) Receive messages along with the source info from the port
+ 6) Repeat steps 3, 4 & 5 as required
+ 7) If server, unregister the name from the port
+ 8) Close the port
+
+Power Management
+================
+
+IPC Message Router uses wakelocks to ensure that the system does not go
+into suspend mode while there are pending messages to be handled. Once all
+the messages are handled, IPC Message Router releases the wakelocks to
+allow the system to go into suspend mode and comply with the system power
+management requirement.
+
+SMP/multi-core
+==============
+
+The IPC Router uses mutexes & spinlocks to protect the shared data
+structures to be SMP/multi-core safe.
+
+Security
+========
+
+None
+
+Performance
+===========
+
+None
+
+Interface
+=========
+
+Kernel-space APIs:
+------------------
+
+/*
+ * msm_ipc_router_create_port - Create an IPC Router port
+ *
+ * @msm_ipc_port_notify: notification function which will notify events
+ * like READ_DATA_AVAIL, WRITE_DONE etc.
+ * @priv: caller private context pointer, passed to the notify callback.
+ *
+ * @return: a valid port pointer on success, NULL on failure
+ *
+ */
+struct msm_ipc_port *msm_ipc_router_create_port(
+ void (*msm_ipc_port_notify)(unsigned event, void *data,
+ void *addr, void *priv),
+ void *priv)
+
+
+/*
+ * msm_ipc_router_close_port - Close down the port
+ *
+ * @port: Port to be closed
+ *
+ * @return: 0 on success, -ve value on failure
+ *
+ */
+int msm_ipc_router_close_port(struct msm_ipc_port *port)
+
+
+/*
+ * msm_ipc_router_send_to - Send data to a remote port
+ *
+ * @from_port: Source port of the message
+ * @data: Data to be sent
+ * @to_addr: Destination port name or address
+ *
+ * @return: number of bytes sent on success, -ve value on failure
+ *
+ */
+int msm_ipc_router_send_to(struct msm_ipc_port *from_port,
+ struct sk_buff_head *data,
+ struct msm_ipc_addr *to_addr)
+
+
+/*
+ * msm_ipc_router_recv_from - Receive data over a port
+ *
+ * @port: Port from which the data has to be read
+ * @data: Pointer to the data
+ * @src_addr: If valid, filled with the source address of the data
+ * @timeout: Time to wait for the data, if already not present
+ *
+ * @return: number of bytes received on success, -ve value on failure
+ *
+ */
+int msm_ipc_router_recv_from(struct msm_ipc_port *port,
+ struct sk_buff_head **data,
+ struct msm_ipc_addr *src_addr,
+ unsigned long timeout)
+
+/*
+ * msm_ipc_router_register_server - Bind a local port with a service
+ * name
+ *
+ * @server_port: Port to be bound with a service name
+ * @name: Name to bind with the port
+ *
+ * @return: 0 on success, -ve value on failure
+ *
+ */
+int msm_ipc_router_register_server(struct msm_ipc_port *server_port,
+ struct msm_ipc_addr *name)
+
+
+/*
+ * msm_ipc_router_unregister_server - Unbind the local port from its
+ * service name
+ *
+ * @server_port: Port to be unbound from its service name
+ *
+ * @return: 0 on success, -ve value on failure
+ *
+ */
+int msm_ipc_router_unregister_server(struct msm_ipc_port *server_port)
+
+
+/*
+ * msm_ipc_router_lookup_server - Lookup port address for the port name
+ *
+ * @name: Name to be looked up
+ *
+ * @return: Port address corresponding to the service name on success,
+ * NULL on failure
+ *
+ */
+struct msm_ipc_addr *msm_ipc_router_lookup_server(
+ struct msm_ipc_addr *name)
+
+
+User-space APIs:
+----------------
+
+User-space applications/utilities can use the socket APIs to interface
+with the IPC Router. In order to support the socket APIs, IPC Router
+registers a new socket Address/Protocol Family with the kernel
+socket layer. The identity of the new Address/Protocol Family is
+defined using the macros AF_MSM_IPC/PF_MSM_IPC (hardcoded to 38) in
+the include/linux/socket.h file. Since IPC Router supports only
+message-oriented transfer, only SOCK_DGRAM sockets are supported
+by the IPC Router.
+
+Driver parameters
+=================
+
+debug_mask - This module parameter is used to enable/disable Router
+log messages in the kernel logs. This parameter can take any value
+from 0 to 255.
+
+Dependencies
+============
+
+Drivers in this project:
+------------------------
+
+The following drivers are present in this project, listed in
+bottom-up order of the stack.
+
+1a) Router - SMD XPRT Interface driver. This driver interfaces
+the Router with the SMD transport.
+1b) Router - HSIC XPRT Interface driver. This driver interfaces
+the Router with the HSIC_IPC Bridge transport for off-chip communication.
+2) Core Router driver. This driver performs the core functionalities.
+3) Socket - Router Interface driver. This driver enables the socket
+interface to be used with the IPC Router.
+
+In the write/send direction, these drivers interact by invoking the
+exported APIs from the underlying drivers. In the read/receive
+directions, these drivers interact by passing messages/events.
+
+Drivers Needed:
+---------------
+
+ 1) SMD
+ 2) Kernel Socket Layer
+ 3) Platform Driver
+ 4) HSIC IPC Bridge Driver
+
+To do
+=====
+Improvements:
+-------------
+
+The IPC Router is designed to route any message, as long as the system
+follows the network architecture and addressing schemes, but the
+implementation in progress routes only QMI messages. With a few
+additional enhancements, it can route existing RPC messages too.
diff --git a/Documentation/arm/msm/msm_qmi.txt b/Documentation/arm/msm/msm_qmi.txt
new file mode 100644
index 0000000..590b9ab
--- /dev/null
+++ b/Documentation/arm/msm/msm_qmi.txt
@@ -0,0 +1,520 @@
+Introduction
+============
+
+Qualcomm Technologies, Inc. MSM Interface (QMI) is a messaging format
+used to communicate between software components in the modem and other
+peripheral subsystems. This document proposes an architecture for QMI
+messaging in the kernel: a QMI encode/decode library to enable QMI
+message marshaling, and an interface library to send and receive QMI
+messages through the MSM IPC Router.
+
+Hardware description
+====================
+
+QMI is a messaging format used to interface with the components in modem
+and other subsystems. QMI does not drive or manage any hardware resources.
+
+Software description
+====================
+QMI communication is based on a client-server model, where clients and
+servers exchange messages in QMI wire format. A module can act as a client
+of any number of QMI services and a QMI service can serve any number of
+clients.
+
+QMI communication is either of request/response type or of an
+unsolicited event type. A QMI client driver sends a request to a QMI
+service and receives a response. A QMI client driver registers with a
+QMI service to receive indications regarding a system event, and the
+QMI service sends an indication to the client when the event occurs in
+the system.
+
+The wire format of QMI message is as follows:
+
+ ----------------------------------------------------
+ | QMI Header | TLV 0 | TLV 1 | ... | TLV N |
+ ----------------------------------------------------
+
+QMI Header:
+-----------
+ --------------------------------------------------------
+ | Flags | Transaction ID | Message ID | Message Length |
+ --------------------------------------------------------
+
+The flags field is used to indicate the kind of QMI message - request,
+response or indication. The transaction ID is a client specific field
+to uniquely match the QMI request and the response. The message ID is
+also a client specific field to indicate the kind of information present
+in the QMI payload. The message length field holds the size information
+of the QMI payload.
+
+Flags:
+------
+ * 0 - QMI Request
+ * 2 - QMI Response
+ * 4 - QMI Indication
+
+TLV:
+----
+QMI payload is represented using a series of Type, Length and Value fields.
+Each information being passed is encoded into a type, length and value
+combination. The type element identifies the type of information being
+encoded. The length element specifies the length of the information/values
+being encoded. The information can be of a primitive type or a structure
+or an array.
+
+ -------------------------------------------
+ | Type | Length | Value 0 | ... | Value N |
+ -------------------------------------------
+
+QMI Message Marshaling and Transport:
+-------------------------------------
+The QMI encode/decode library is designed to encode kernel C data
+structures into QMI wire format and to decode QMI messages into kernel
+C data structure format. This library provides a single interface to
+transform any data structure into a QMI message and vice-versa.
+
+QMI interface library is designed to send and receive QMI messages over
+IPC Router.
+
+
+ ----------------------------------
+ | Kernel Drivers |
+ ----------------------------------
+ | |
+ | |
+ ----------------- -----------------
+ | QMI Interface |___| QMI Enc/Dec |
+ | Library | | Library |
+ ----------------- -----------------
+ |
+ |
+ -------------------
+ | IPC Message |
+ | Router |
+ -------------------
+ |
+ |
+ -------
+ | SMD |
+ -------
+
+Design
+======
+
+The design goals of this proposed QMI messaging mechanism are:
+ * To enable QMI messaging from within the kernel
+ * To provide a common library to marshal QMI messages
+ * To provide a common interface library to send/receive QMI messages
+ * To support kernel QMI clients which have latency constraints
+
+The reasons behind this design decision are:
+ * To provide a simple QMI marshaling interface to the kernel users
+ * To hide the complexities of QMI message transports
+ * To minimize code redundancy
+
+In order to provide a single encode/decode API, the library expects
+the kernel drivers to pass the:
+ * starting address of the data structure to be encoded/decoded
+ * starting address of the QMI message buffer
+ * a table containing information regarding the data structure to
+ be encoded/decoded
+
+The design is based on the idea that any complex data structure is a
+collection of primary data elements. Hence the information about any
+data structure can be constructed as an array of information about its
+primary data elements. The following structure is defined to describe
+information about a primary data element.
+
+/**
+ * elem_info - Data structure to specify information about an element
+ * in a data structure. An array of this data structure
+ * can be used to specify info about a complex data
+ * structure to be encoded/decoded.
+ * @data_type: Data type of this element
+ * @elem_len: Array length of this element, if an array
+ * @elem_size: Size of a single instance of this data type
+ * @is_array: Array type of this element
+ * @tlv_type: QMI message specific type to identify which element
+ * is present in an incoming message
+ * @offset: To identify the address of the first instance of this
+ * element in the data structure
+ * @ei_array: Array to provide information about the nested structure
+ * within a data structure to be encoded/decoded.
+ */
+struct elem_info {
+ enum elem_type data_type;
+ uint32_t elem_len;
+ uint32_t elem_size;
+ enum array_type is_array;
+ uint8_t tlv_type;
+ uint32_t offset;
+ struct elem_info *ei_array;
+};
+
+Alternate designs discussed include manual encoding/decoding of QMI
+messages. From RPC experience, this approach has mostly been error
+prone, which in turn led to increased development and debugging effort.
+Another approach involved a data-structure-specific marshaling API --
+i.e. every data structure to be encoded/decoded would have a unique
+auto-generated marshaling API. This approach comes with the cost of
+code redundancy and was therefore rejected.
+
+Power Management
+================
+
+N/A
+
+SMP/multi-core
+==============
+
+The QMI encode/decode library does not access any global or shared data
+structures. Hence it does not require any locking mechanisms to ensure
+multi-core safety.
+
+The QMI interface library uses mutexes while accessing shared resources.
+
+Security
+========
+
+N/A
+
+Performance
+===========
+
+This design proposal is meant to support kernel QMI clients which have
+latency constraints. Hence the number and size of QMI messages are
+expected to be kept small, in order to consistently achieve latencies
+of less than 1 ms.
+
+Interface
+=========
+
+Kernel-APIs:
+------------
+
+Encode/Decode Library APIs:
+---------------------------
+
+/**
+ * elem_type - Enum to identify the data type of elements in a data
+ * structure.
+ */
+enum elem_type {
+ QMI_OPT_FLAG = 1,
+ QMI_DATA_LEN,
+ QMI_UNSIGNED_1_BYTE,
+ QMI_UNSIGNED_2_BYTE,
+ QMI_UNSIGNED_4_BYTE,
+ QMI_UNSIGNED_8_BYTE,
+ QMI_SIGNED_2_BYTE_ENUM,
+ QMI_SIGNED_4_BYTE_ENUM,
+ QMI_STRUCT,
+ QMI_END_OF_TYPE_INFO,
+};
+
+/**
+ * array_type - Enum to identify whether an element in a data structure
+ *		is an array and, if so, whether it is a static-length
+ *		array or a variable-length array.
+ */
+enum array_type {
+ NO_ARRAY = 0,
+ STATIC_ARRAY = 1,
+ VAR_LEN_ARRAY = 2,
+};
+
+/**
+ * msg_desc - Describes the main/outer structure to be
+ *	      encoded/decoded.
+ * @msg_id: Message ID to identify the kind of QMI message.
+ * @max_msg_len: Maximum possible length of the QMI message.
+ * @ei_array: Array to provide information about a data structure.
+ */
+struct msg_desc {
+ uint16_t msg_id;
+ int max_msg_len;
+ struct elem_info *ei_array;
+};
+
+/**
+ * qmi_kernel_encode() - Encode to QMI message wire format
+ * @desc: Structure describing the data structure to be encoded.
+ * @out_buf: Buffer to hold the encoded QMI message.
+ * @out_buf_len: Length of the buffer to hold the QMI message.
+ * @in_c_struct: C Structure to be encoded.
+ *
+ * @return: size of encoded message on success,
+ * -ve value on failure.
+ */
+int qmi_kernel_encode(struct msg_desc *desc,
+ void *out_buf, uint32_t out_buf_len,
+ void *in_c_struct);
+
+/**
+ * qmi_kernel_decode() - Decode to C Structure format
+ * @desc: Structure describing the data structure format.
+ * @out_c_struct: Buffer to hold the decoded C structure.
+ * @in_buf:	Buffer containing the QMI message to be decoded.
+ * @in_buf_len: Length of the incoming QMI message.
+ *
+ * @return: 0 on success, -ve value on failure.
+ */
+int qmi_kernel_decode(struct msg_desc *desc, void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len);
+
+Interface Library APIs:
+-----------------------
+
+/**
+ * qmi_svc_event_notifier_register() - Register a notifier block to receive
+ * events regarding a QMI service
+ * @service_id: Service ID to identify the QMI service.
+ * @instance_id: Instance ID to identify the instance of the QMI service.
+ * @nb: Notifier block used to receive the event.
+ *
+ * @return: 0 if successfully registered, < 0 on error.
+ */
+int qmi_svc_event_notifier_register(uint32_t service_id,
+ uint32_t instance_id,
+ struct notifier_block *nb);
+
+/**
+ * qmi_handle_create() - Create a QMI handle
+ * @notify: Callback to notify events on the handle created.
+ * @notify_priv: Private info to be passed along with the notification.
+ *
+ * @return: Valid QMI handle on success, NULL on error.
+ */
+struct qmi_handle *qmi_handle_create(
+ void (*notify)(struct qmi_handle *handle,
+ enum qmi_event_type event, void *notify_priv),
+ void *notify_priv);
+
+/**
+ * qmi_connect_to_service() - Connect the QMI handle with a QMI service
+ * @handle: QMI handle to be connected with the QMI service.
+ * @service_id: Service id to identify the QMI service.
+ * @instance_id: Instance id to identify the instance of the QMI service.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_connect_to_service(struct qmi_handle *handle,
+ uint32_t service_id, uint32_t instance_id);
+
+/**
+ * qmi_register_ind_cb() - Register the indication callback function
+ * @handle: QMI handle with which the function is registered.
+ * @ind_cb: Callback function to be registered.
+ * @ind_cb_priv: Private data to be passed with the indication callback.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_register_ind_cb(struct qmi_handle *handle,
+ void (*ind_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ unsigned int msg_len, void *ind_cb_priv),
+ void *ind_cb_priv);
+
+/**
+ * qmi_send_req_wait() - Send a synchronous QMI request
+ * @handle: QMI handle through which the QMI request is sent.
+ * @req_desc: Structure describing the request data structure.
+ * @req: Buffer containing the request data structure.
+ * @req_len: Length of the request data structure.
+ * @resp_desc: Structure describing the response data structure.
+ * @resp: Buffer to hold the response data structure.
+ * @resp_len: Length of the response data structure.
+ * @timeout_ms: Timeout before a response is received.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_req_wait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ unsigned long timeout_ms);
+
+/**
+ * qmi_send_req_nowait() - Send an asynchronous QMI request
+ * @handle: QMI handle through which the QMI request is sent.
+ * @req_desc: Structure describing the request data structure.
+ * @req: Buffer containing the request data structure.
+ * @req_len: Length of the request data structure.
+ * @resp_desc: Structure describing the response data structure.
+ * @resp: Buffer to hold the response data structure.
+ * @resp_len: Length of the response data structure.
+ * @resp_cb: Callback function to be invoked when the response arrives.
+ * @resp_cb_data: Private information to be passed along with the callback.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_req_nowait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ void (*resp_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ void *resp_cb_data),
+ void *resp_cb_data);
+
+/**
+ * qmi_recv_msg() - Receive the QMI message
+ * @handle: Handle for which the QMI message has to be received.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_recv_msg(struct qmi_handle *handle);
+
+/**
+ * qmi_handle_destroy() - Destroy the QMI handle
+ * @handle: QMI handle to be destroyed.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_handle_destroy(struct qmi_handle *handle);
+
+/**
+ * qmi_svc_event_notifier_unregister() - Unregister service event notifier block
+ * @service_id: Service ID to identify the QMI service.
+ * @instance_id: Instance ID to identify the instance of the QMI service.
+ * @nb: Notifier block registered to receive the events.
+ *
+ * @return: 0 if successfully unregistered, < 0 on error.
+ */
+int qmi_svc_event_notifier_unregister(uint32_t service_id,
+ uint32_t instance_id,
+ struct notifier_block *nb);
+
+/**
+ * qmi_svc_ops_options - Operations and options to be specified when
+ * a service registers.
+ * @version: Version field to identify the ops_options structure.
+ * @service_id: Service ID of the service being registered.
+ * @instance_id: Instance ID of the service being registered.
+ * @connect_cb: Callback when a new client connects with the service.
+ * @disconnect_cb: Callback when the client exits the connection.
+ * @req_desc_cb: Callback to get request structure and its descriptor
+ * for a message id.
+ * @req_cb: Callback to process the request.
+ */
+struct qmi_svc_ops_options {
+ unsigned version;
+ uint32_t service_id;
+ uint32_t instance_id;
+ int (*connect_cb)(struct qmi_handle *handle,
+ struct qmi_svc_clnt *clnt);
+ int (*disconnect_cb)(struct qmi_handle *handle,
+ struct qmi_svc_clnt *clnt);
+ struct msg_desc *(*req_desc_cb)(unsigned int msg_id,
+ void **req,
+ unsigned int req_len);
+ int (*req_cb)(struct qmi_handle *handle,
+ struct qmi_svc_clnt *clnt,
+ void *req_handle,
+ unsigned int msg_id,
+ void *req);
+};
+
+/**
+ * qmi_svc_register() - Register a QMI service with a QMI handle
+ * @handle: QMI handle on which the service has to be registered.
+ * @ops_options: Service specific operations and options.
+ *
+ * @return: 0 if successfully registered, < 0 on error.
+ */
+int qmi_svc_register(struct qmi_handle *handle,
+ void *ops_options);
+
+/**
+ * qmi_send_resp() - Send response to a request
+ * @handle: QMI handle from which the response is sent.
+ * @clnt: Client to which the response is sent.
+ * @req_handle: Request for which the response is sent.
+ * @resp_desc: Descriptor explaining the response structure.
+ * @resp: Pointer to the response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_resp(struct qmi_handle *handle,
+ struct qmi_svc_clnt *clnt,
+ void *req_handle,
+ struct msg_desc *resp_desc,
+ void *resp,
+ unsigned int resp_len);
+
+/**
+ * qmi_send_ind() - Send unsolicited event/indication to a client
+ * @handle: QMI handle from which the indication is sent.
+ * @clnt: Client to which the indication is sent.
+ * @ind_desc: Descriptor explaining the indication structure.
+ * @ind: Pointer to the indication structure.
+ * @ind_len: Length of the indication structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_ind(struct qmi_handle *handle,
+ struct qmi_svc_clnt *clnt,
+ struct msg_desc *ind_desc,
+ void *ind,
+ unsigned int ind_len);
+
+/**
+ * qmi_svc_unregister() - Unregister the service from a QMI handle
+ * @handle: QMI handle from which the service has to be unregistered.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_svc_unregister(struct qmi_handle *handle);
+
+User-space APIs:
+----------------
+This proposal is meant only for kernel QMI clients/services and hence no
+user-space interface is defined as part of this proposal.
+
+Driver parameters
+=================
+
+N/A
+
+Config options
+==============
+
+The QMI encode/decode library will be enabled by default in the kernel.
+It can be disabled using the CONFIG_QMI_ENCDEC kernel config option.
+
+The QMI Interface library will be disabled by default in the kernel,
+since it depends on other components which are disabled by default.
+It can be enabled using the CONFIG_MSM_QMI_INTERFACE kernel config option.
+
+Dependencies
+============
+
+The QMI encode/decode library is a stand-alone module and is not
+dependent on any other kernel modules.
+
+The QMI Interface library depends on QMI Encode/Decode library and
+IPC Message Router.
+
+User space utilities
+====================
+
+N/A
+
+Other
+=====
+
+N/A
+
+Known issues
+============
+
+N/A
+
+To do
+=====
+
+Look into the possibility of making the QMI Interface Library transport
+agnostic. This may involve having kernel drivers register, with the QMI
+Interface Library, the transport to be used for carrying QMI messages.
diff --git a/Documentation/devicetree/bindings/arm/msm/msm_ipc_router.txt b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router.txt
new file mode 100644
index 0000000..256905c
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router.txt
@@ -0,0 +1,16 @@
+Qualcomm Technologies, Inc. IPC Router
+
+Required properties:
+-compatible: should be "qcom,ipc_router"
+-qcom,node-id: unique ID to identify the node in network
+
+Optional properties:
+-qcom,default-peripheral: String property to indicate the default peripheral
+ to communicate
+
+Example:
+ qcom,ipc_router {
+ compatible = "qcom,ipc_router";
+ qcom,node-id = <1>;
+ qcom,default-peripheral = "modem";
+ };
diff --git a/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_glink_xprt.txt b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_glink_xprt.txt
new file mode 100644
index 0000000..9e1d230
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_glink_xprt.txt
@@ -0,0 +1,42 @@
+Qualcomm Technologies, Inc. IPC Router G-Link Transport
+
+Required properties:
+-compatible: should be "qcom,ipc_router_glink_xprt"
+-qcom,ch-name: the G-Link channel name used by the G-Link transport
+-qcom,xprt-remote: string that defines the edge of the transport
+-qcom,glink-xprt: string that describes the underlying physical transport
+-qcom,xprt-linkid:	unique integer to identify the tier to which the link
+			belongs in the network; used to avoid routing loops
+			while forwarding broadcast messages
+-qcom,xprt-version: unique version ID used by G-Link transport header
+
+Optional properties:
+-qcom,fragmented-data: Boolean property to indicate that G-Link transport
+ supports fragmented data
+-qcom,pil-label:	string that defines the remote subsystem name understood
+			by PIL. Absence of this property indicates that
+			subsystem loading through PIL voting is disabled for
+			that subsystem.
+
+Example:
+ qcom,ipc_router_modem_xprt {
+ compatible = "qcom,ipc_router_glink_xprt";
+ qcom,ch-name = "IPCRTR";
+ qcom,xprt-remote = "mpss";
+ qcom,glink-xprt = "smem";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <1>;
+ qcom,fragmented-data;
+ qcom,pil-label = "modem";
+ };
+
+ qcom,ipc_router_q6_xprt {
+ compatible = "qcom,ipc_router_glink_xprt";
+ qcom,ch-name = "IPCRTR";
+ qcom,xprt-remote = "lpass";
+ qcom,glink-xprt = "smem";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <1>;
+ qcom,fragmented-data;
+ qcom,pil-label = "adsp";
+ };
diff --git a/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_hsic_xprt.txt b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_hsic_xprt.txt
new file mode 100644
index 0000000..71d0c0d
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_hsic_xprt.txt
@@ -0,0 +1,19 @@
+Qualcomm Technologies, Inc. IPC Router HSIC Transport
+
+Required properties:
+-compatible: should be "qcom,ipc_router_hsic_xprt"
+-qcom,ch-name: the HSIC channel name used by the HSIC transport
+-qcom,xprt-remote: string that defines the edge of the transport (PIL Name)
+-qcom,xprt-linkid:	unique integer to identify the tier to which the link
+			belongs in the network; used to avoid routing loops
+			while forwarding broadcast messages
+-qcom,xprt-version: unique version ID used by HSIC transport header
+
+Example:
+ qcom,ipc_router_external_modem_xprt {
+ compatible = "qcom,ipc_router_hsic_xprt";
+ qcom,ch-name = "ipc_bridge";
+ qcom,xprt-remote = "external-modem";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <3>;
+ };
diff --git a/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_mhi_xprt.txt b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_mhi_xprt.txt
new file mode 100644
index 0000000..de5ab2c
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_mhi_xprt.txt
@@ -0,0 +1,21 @@
+Qualcomm Technologies, Inc. IPC Router MHI Transport
+
+Required properties:
+-compatible: should be "qcom,ipc_router_mhi_xprt"
+-qcom,out-chan-id: MHI Channel ID for the transmit path
+-qcom,in-chan-id: MHI Channel ID for the receive path
+-qcom,xprt-remote: string that defines the edge of the transport (PIL Name)
+-qcom,xprt-linkid:	unique integer to identify the tier to which the link
+			belongs in the network; used to avoid routing loops
+			while forwarding broadcast messages
+-qcom,xprt-version: unique version ID used by MHI transport header
+
+Example:
+ qcom,ipc_router_external_modem_xprt2 {
+ compatible = "qcom,ipc_router_mhi_xprt";
+ qcom,out-chan-id = <34>;
+ qcom,in-chan-id = <35>;
+ qcom,xprt-remote = "external-modem";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <3>;
+ };
diff --git a/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_smd_xprt.txt b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_smd_xprt.txt
new file mode 100644
index 0000000..1d74447
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/msm/msm_ipc_router_smd_xprt.txt
@@ -0,0 +1,34 @@
+Qualcomm Technologies, Inc. IPC Router SMD Transport
+
+Required properties:
+-compatible: should be "qcom,ipc_router_smd_xprt"
+-qcom,ch-name: the SMD channel name used by the SMD transport
+-qcom,xprt-remote: string that defines the edge of the transport (PIL Name)
+-qcom,xprt-linkid:	unique integer to identify the tier to which the link
+			belongs in the network; used to avoid routing loops
+			while forwarding broadcast messages
+-qcom,xprt-version: unique version ID used by SMD transport header
+
+Optional properties:
+-qcom,fragmented-data: Indicates that the SMD transport supports fragmented data
+-qcom,disable-pil-loading: Disables PIL loading of the remote subsystem
+
+Example:
+ qcom,ipc_router_modem_xprt {
+ compatible = "qcom,ipc_router_smd_xprt";
+ qcom,ch-name = "IPCRTR";
+ qcom,xprt-remote = "modem";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <1>;
+ qcom,fragmented-data;
+ qcom,disable-pil-loading;
+ };
+
+ qcom,ipc_router_q6_xprt {
+ compatible = "qcom,ipc_router_smd_xprt";
+ qcom,ch-name = "IPCRTR";
+ qcom,xprt-remote = "adsp";
+ qcom,xprt-linkid = <1>;
+ qcom,xprt-version = <1>;
+ qcom,fragmented-data;
+ };
diff --git a/arch/arm64/configs/msmskunk-perf_defconfig b/arch/arm64/configs/msmskunk-perf_defconfig
index 4163a7a..d3d2ca8 100644
--- a/arch/arm64/configs/msmskunk-perf_defconfig
+++ b/arch/arm64/configs/msmskunk-perf_defconfig
@@ -197,6 +197,8 @@
CONFIG_CFG80211=y
CONFIG_CFG80211_INTERNAL_REGDB=y
CONFIG_RFKILL=y
+CONFIG_IPC_ROUTER=y
+CONFIG_IPC_ROUTER_SECURITY=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_DMA_CMA=y
CONFIG_ZRAM=y
@@ -330,6 +332,8 @@
CONFIG_QTI_RPMH_API=y
CONFIG_MSM_SMP2P=y
CONFIG_MSM_SMP2P_TEST=y
+CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_QMI_INTERFACE=y
CONFIG_EXTCON=y
CONFIG_IIO=y
CONFIG_PWM=y
@@ -356,6 +360,7 @@
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
# CONFIG_DEBUG_PREEMPT is not set
+CONFIG_IPC_LOGGING=y
CONFIG_DEBUG_ALIGN_RODATA=y
CONFIG_CORESIGHT=y
CONFIG_CORESIGHT_LINK_AND_SINK_TMC=y
@@ -377,3 +382,4 @@
CONFIG_CRYPTO_AES_ARM64_CE_BLK=y
CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y
CONFIG_CRYPTO_CRC32_ARM64=y
+CONFIG_QMI_ENCDEC=y
diff --git a/arch/arm64/configs/msmskunk_defconfig b/arch/arm64/configs/msmskunk_defconfig
index 1d01551..b657821 100644
--- a/arch/arm64/configs/msmskunk_defconfig
+++ b/arch/arm64/configs/msmskunk_defconfig
@@ -204,6 +204,8 @@
CONFIG_CFG80211_INTERNAL_REGDB=y
# CONFIG_CFG80211_CRDA_SUPPORT is not set
CONFIG_RFKILL=y
+CONFIG_IPC_ROUTER=y
+CONFIG_IPC_ROUTER_SECURITY=y
CONFIG_FW_LOADER_USER_HELPER_FALLBACK=y
CONFIG_DMA_CMA=y
CONFIG_ZRAM=y
@@ -341,6 +343,8 @@
CONFIG_QTI_RPMH_API=y
CONFIG_MSM_SMP2P=y
CONFIG_MSM_SMP2P_TEST=y
+CONFIG_MSM_IPC_ROUTER_GLINK_XPRT=y
+CONFIG_MSM_QMI_INTERFACE=y
CONFIG_EXTCON=y
CONFIG_IIO=y
CONFIG_PWM=y
@@ -395,6 +399,7 @@
CONFIG_FAIL_PAGE_ALLOC=y
CONFIG_FAULT_INJECTION_DEBUG_FS=y
CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
+CONFIG_IPC_LOGGING=y
CONFIG_QCOM_RTB=y
CONFIG_QCOM_RTB_SEPARATE_CPUS=y
CONFIG_FUNCTION_TRACER=y
@@ -426,3 +431,4 @@
CONFIG_CRYPTO_AES_ARM64_NEON_BLK=y
CONFIG_CRYPTO_CRC32_ARM64=y
CONFIG_XZ_DEC=y
+CONFIG_QMI_ENCDEC=y
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index f4e6ee6..2f443fa 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -1575,6 +1575,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
struct arm_smmu_cfg *cfg = &smmu_domain->cfg;
bool is_fast = smmu_domain->attributes & (1 << DOMAIN_ATTR_FAST);
+ unsigned long quirks = 0;
bool dynamic;
mutex_lock(&smmu_domain->init_mutex);
@@ -1681,6 +1682,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
if (is_fast)
fmt = ARM_V8L_FAST;
+ if (smmu_domain->attributes & (1 << DOMAIN_ATTR_USE_UPSTREAM_HINT))
+ quirks |= IO_PGTABLE_QUIRK_QCOM_USE_UPSTREAM_HINT;
/* Dynamic domains must set cbndx through domain attribute */
if (!dynamic) {
@@ -1698,6 +1701,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
}
smmu_domain->pgtbl_cfg = (struct io_pgtable_cfg) {
+ .quirks = quirks,
.pgsize_bitmap = smmu->pgsize_bitmap,
.ias = ias,
.oas = oas,
@@ -2579,6 +2583,11 @@ static int arm_smmu_domain_get_attr(struct iommu_domain *domain,
& (1 << DOMAIN_ATTR_FAST));
ret = 0;
break;
+ case DOMAIN_ATTR_USE_UPSTREAM_HINT:
+ *((int *)data) = !!(smmu_domain->attributes &
+ (1 << DOMAIN_ATTR_USE_UPSTREAM_HINT));
+ ret = 0;
+ break;
default:
return -ENODEV;
}
@@ -2706,6 +2715,17 @@ static int arm_smmu_domain_set_attr(struct iommu_domain *domain,
smmu_domain->attributes |= 1 << DOMAIN_ATTR_FAST;
ret = 0;
break;
+ case DOMAIN_ATTR_USE_UPSTREAM_HINT:
+ /* can't be changed while attached */
+ if (smmu_domain->smmu != NULL) {
+ ret = -EBUSY;
+ break;
+ }
+ if (*((int *)data))
+ smmu_domain->attributes |=
+ 1 << DOMAIN_ATTR_USE_UPSTREAM_HINT;
+ ret = 0;
+ break;
default:
ret = -ENODEV;
}
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index 34fd127..f00c142 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -165,9 +165,11 @@
#define ARM_LPAE_MAIR_ATTR_DEVICE 0x04
#define ARM_LPAE_MAIR_ATTR_NC 0x44
#define ARM_LPAE_MAIR_ATTR_WBRWA 0xff
+#define ARM_LPAE_MAIR_ATTR_UPSTREAM 0xf4
#define ARM_LPAE_MAIR_ATTR_IDX_NC 0
#define ARM_LPAE_MAIR_ATTR_IDX_CACHE 1
#define ARM_LPAE_MAIR_ATTR_IDX_DEV 2
+#define ARM_LPAE_MAIR_ATTR_IDX_UPSTREAM 3
/* IOPTE accessors */
#define iopte_deref(pte, d) \
@@ -483,6 +485,9 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
else if (prot & IOMMU_CACHE)
pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
+ else if (prot & IOMMU_USE_UPSTREAM_HINT)
+ pte |= (ARM_LPAE_MAIR_ATTR_IDX_UPSTREAM
+ << ARM_LPAE_PTE_ATTRINDX_SHIFT);
} else {
pte = ARM_LPAE_PTE_HAP_FAULT;
if (prot & IOMMU_READ)
@@ -921,9 +926,6 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
u64 reg;
struct arm_lpae_io_pgtable *data;
- if (cfg->quirks & ~IO_PGTABLE_QUIRK_ARM_NS)
- return NULL;
-
data = arm_lpae_alloc_pgtable(cfg);
if (!data)
return NULL;
@@ -933,6 +935,10 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
reg = (ARM_LPAE_TCR_SH_OS << ARM_LPAE_TCR_SH0_SHIFT) |
(ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_IRGN0_SHIFT) |
(ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_ORGN0_SHIFT);
+	else if (cfg->quirks & IO_PGTABLE_QUIRK_QCOM_USE_UPSTREAM_HINT)
+ reg = (ARM_LPAE_TCR_SH_OS << ARM_LPAE_TCR_SH0_SHIFT) |
+ (ARM_LPAE_TCR_RGN_NC << ARM_LPAE_TCR_IRGN0_SHIFT) |
+ (ARM_LPAE_TCR_RGN_WBWA << ARM_LPAE_TCR_ORGN0_SHIFT);
else
reg = (ARM_LPAE_TCR_SH_OS << ARM_LPAE_TCR_SH0_SHIFT) |
(ARM_LPAE_TCR_RGN_NC << ARM_LPAE_TCR_IRGN0_SHIFT) |
@@ -985,7 +991,9 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
(ARM_LPAE_MAIR_ATTR_WBRWA
<< ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_CACHE)) |
(ARM_LPAE_MAIR_ATTR_DEVICE
- << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV));
+ << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_DEV)) |
+ (ARM_LPAE_MAIR_ATTR_UPSTREAM
+ << ARM_LPAE_MAIR_ATTR_SHIFT(ARM_LPAE_MAIR_ATTR_IDX_UPSTREAM));
cfg->arm_lpae_s1_cfg.mair[0] = reg;
cfg->arm_lpae_s1_cfg.mair[1] = 0;
diff --git a/drivers/iommu/io-pgtable-fast.c b/drivers/iommu/io-pgtable-fast.c
index ad396a4..2c34d10f 100644
--- a/drivers/iommu/io-pgtable-fast.c
+++ b/drivers/iommu/io-pgtable-fast.c
@@ -133,9 +133,11 @@ struct av8l_fast_io_pgtable {
#define AV8L_FAST_MAIR_ATTR_DEVICE 0x04
#define AV8L_FAST_MAIR_ATTR_NC 0x44
#define AV8L_FAST_MAIR_ATTR_WBRWA 0xff
+#define AV8L_FAST_MAIR_ATTR_UPSTREAM 0xf4
#define AV8L_FAST_MAIR_ATTR_IDX_NC 0
#define AV8L_FAST_MAIR_ATTR_IDX_CACHE 1
#define AV8L_FAST_MAIR_ATTR_IDX_DEV 2
+#define AV8L_FAST_MAIR_ATTR_IDX_UPSTREAM 3
#define AV8L_FAST_PAGE_SHIFT 12
@@ -204,6 +206,9 @@ int av8l_fast_map_public(av8l_fast_iopte *ptep, phys_addr_t paddr, size_t size,
else if (prot & IOMMU_CACHE)
pte |= (AV8L_FAST_MAIR_ATTR_IDX_CACHE
<< AV8L_FAST_PTE_ATTRINDX_SHIFT);
+ else if (prot & IOMMU_USE_UPSTREAM_HINT)
+ pte |= (AV8L_FAST_MAIR_ATTR_IDX_UPSTREAM
+ << AV8L_FAST_PTE_ATTRINDX_SHIFT);
if (!(prot & IOMMU_WRITE))
pte |= AV8L_FAST_PTE_AP_RO;
@@ -428,9 +433,14 @@ av8l_fast_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
cfg->pgsize_bitmap = SZ_4K;
/* TCR */
- reg = (AV8L_FAST_TCR_SH_IS << AV8L_FAST_TCR_SH0_SHIFT) |
- (AV8L_FAST_TCR_RGN_NC << AV8L_FAST_TCR_IRGN0_SHIFT) |
- (AV8L_FAST_TCR_RGN_NC << AV8L_FAST_TCR_ORGN0_SHIFT);
+	if (cfg->quirks & IO_PGTABLE_QUIRK_QCOM_USE_UPSTREAM_HINT)
+ reg = (AV8L_FAST_TCR_SH_OS << AV8L_FAST_TCR_SH0_SHIFT) |
+ (AV8L_FAST_TCR_RGN_NC << AV8L_FAST_TCR_IRGN0_SHIFT) |
+ (AV8L_FAST_TCR_RGN_WBWA << AV8L_FAST_TCR_ORGN0_SHIFT);
+ else
+ reg = (AV8L_FAST_TCR_SH_IS << AV8L_FAST_TCR_SH0_SHIFT) |
+ (AV8L_FAST_TCR_RGN_NC << AV8L_FAST_TCR_IRGN0_SHIFT) |
+ (AV8L_FAST_TCR_RGN_NC << AV8L_FAST_TCR_ORGN0_SHIFT);
reg |= AV8L_FAST_TCR_TG0_4K;
@@ -467,7 +477,9 @@ av8l_fast_alloc_pgtable(struct io_pgtable_cfg *cfg, void *cookie)
(AV8L_FAST_MAIR_ATTR_WBRWA
<< AV8L_FAST_MAIR_ATTR_SHIFT(AV8L_FAST_MAIR_ATTR_IDX_CACHE)) |
(AV8L_FAST_MAIR_ATTR_DEVICE
- << AV8L_FAST_MAIR_ATTR_SHIFT(AV8L_FAST_MAIR_ATTR_IDX_DEV));
+ << AV8L_FAST_MAIR_ATTR_SHIFT(AV8L_FAST_MAIR_ATTR_IDX_DEV)) |
+ (AV8L_FAST_MAIR_ATTR_UPSTREAM
+ << AV8L_FAST_MAIR_ATTR_SHIFT(AV8L_FAST_MAIR_ATTR_IDX_UPSTREAM));
cfg->av8l_fast_cfg.mair[0] = reg;
cfg->av8l_fast_cfg.mair[1] = 0;
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index 5512a8b..33e0879 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -74,11 +74,16 @@ struct io_pgtable_cfg {
* PTEs, for Mediatek IOMMUs which treat it as a 33rd address bit
* when the SoC is in "4GB mode" and they can only access the high
* remap of DRAM (0x1_00000000 to 0x1_ffffffff).
+ *
+ * IO_PGTABLE_QUIRK_QCOM_USE_UPSTREAM_HINT: Override the attributes
+ * set in TCR for the page table walker. Use attributes specified
+ * by the upstream hw instead.
*/
#define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
#define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
#define IO_PGTABLE_QUIRK_TLBI_ON_MAP BIT(2)
#define IO_PGTABLE_QUIRK_ARM_MTK_4GB BIT(3)
+ #define IO_PGTABLE_QUIRK_QCOM_USE_UPSTREAM_HINT BIT(4)
unsigned long quirks;
unsigned long pgsize_bitmap;
unsigned int ias;
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa.c b/drivers/platform/msm/ipa/ipa_v3/ipa.c
index 7bc2753..15e7d47 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa.c
@@ -3585,6 +3585,12 @@ static int ipa3_gsi_pre_fw_load_init(void)
return 0;
}
+static void ipa3_uc_is_loaded(void)
+{
+ IPADBG("\n");
+ complete_all(&ipa3_ctx->uc_loaded_completion_obj);
+}
+
static enum gsi_ver ipa3_get_gsi_ver(enum ipa_hw_type ipa_hw_type)
{
enum gsi_ver gsi_ver;
@@ -3637,6 +3643,7 @@ static int ipa3_post_init(const struct ipa3_plat_drv_res *resource_p,
int result;
struct sps_bam_props bam_props = { 0 };
struct gsi_per_props gsi_props;
+ struct ipa3_uc_hdlrs uc_hdlrs = { 0 };
if (ipa3_ctx->transport_prototype == IPA_TRANSPORT_TYPE_GSI) {
memset(&gsi_props, 0, sizeof(gsi_props));
@@ -3713,6 +3720,9 @@ static int ipa3_post_init(const struct ipa3_plat_drv_res *resource_p,
else
IPADBG(":ipa Uc interface init ok\n");
+ uc_hdlrs.ipa_uc_loaded_hdlr = ipa3_uc_is_loaded;
+ ipa3_uc_register_handlers(IPA_HW_FEATURE_COMMON, &uc_hdlrs);
+
result = ipa3_wdi_init();
if (result)
IPAERR(":wdi init failed (%d)\n", -result);
@@ -4336,6 +4346,7 @@ static int ipa3_pre_init(const struct ipa3_plat_drv_res *resource_p,
INIT_LIST_HEAD(&ipa3_ctx->ipa_ready_cb_list);
init_completion(&ipa3_ctx->init_completion_obj);
+ init_completion(&ipa3_ctx->uc_loaded_completion_obj);
/*
* For GSI, we can't register the GSI driver yet, as it expects
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
index f583a36..f3b07f5 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_client.c
@@ -99,6 +99,18 @@ int ipa3_disable_data_path(u32 clnt_hdl)
/* Suspend the pipe */
if (IPA_CLIENT_IS_CONS(ep->client)) {
+		/*
+		 * For the RG10 workaround, the uC needs to be loaded
+		 * before the pipe can be suspended in this case.
+		 */
+ if (ipa3_ctx->apply_rg10_wa && ipa3_uc_state_check()) {
+ IPADBG("uC is not loaded yet, waiting...\n");
+ res = wait_for_completion_timeout(
+ &ipa3_ctx->uc_loaded_completion_obj, 60 * HZ);
+ if (res == 0)
+ IPADBG("timeout waiting for uC to load\n");
+ }
+
memset(&ep_cfg_ctrl, 0, sizeof(struct ipa_ep_cfg_ctrl));
ep_cfg_ctrl.ipa_ep_suspend = true;
ipa3_cfg_ep_ctrl(clnt_hdl, &ep_cfg_ctrl);
diff --git a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
index 461b720..23fb2ae 100644
--- a/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_v3/ipa_i.h
@@ -1230,6 +1230,7 @@ struct ipa3_context {
bool ipa_initialization_complete;
struct list_head ipa_ready_cb_list;
struct completion init_completion_obj;
+ struct completion uc_loaded_completion_obj;
struct ipa3_smp2p_info smp2p_info;
};
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 8e49a3d..c5a84cc 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -314,3 +314,53 @@
processors to do unit testing. Unit tests
are used to verify the local and remote
implementations.
+
+config MSM_IPC_ROUTER_SMD_XPRT
+ depends on MSM_SMD
+ depends on IPC_ROUTER
+ bool "MSM SMD XPRT Layer"
+ help
+	  SMD Transport Layer that enables IPC Router communication within
+	  a System-on-Chip (SoC). When the SMD channels become available,
+	  this layer registers a transport with IPC Router and enables
+	  message exchange.
+
+config MSM_IPC_ROUTER_HSIC_XPRT
+ depends on USB_QCOM_IPC_BRIDGE
+ depends on IPC_ROUTER
+ bool "MSM HSIC XPRT Layer"
+ help
+	  HSIC Transport Layer that enables off-chip communication for
+	  IPC Router. When the HSIC endpoint becomes available, this layer
+	  registers the transport with IPC Router and enables message
+	  exchange.
+
+config MSM_IPC_ROUTER_MHI_XPRT
+ depends on MSM_MHI
+ depends on IPC_ROUTER
+ bool "MSM MHI XPRT Layer"
+ help
+	  MHI Transport Layer that enables off-chip communication for
+	  IPC Router. When the MHI endpoint becomes available, this layer
+	  registers the transport with IPC Router and enables message
+	  exchange.
+
+config MSM_IPC_ROUTER_GLINK_XPRT
+ depends on MSM_GLINK
+ depends on IPC_ROUTER
+ bool "MSM GLINK XPRT Layer"
+ help
+	  GLINK Transport Layer that enables IPC Router communication within
+	  a System-on-Chip (SoC). When the GLINK channels become available,
+	  this layer registers a transport with IPC Router and enables
+	  message exchange.
+
+config MSM_QMI_INTERFACE
+ depends on IPC_ROUTER
+ depends on QMI_ENCDEC
+ bool "MSM QMI Interface Library"
+ help
+	  Library to send and receive QMI messages over IPC Router.
+	  This library provides interface functions for kernel drivers
+	  to perform QMI message marshaling and to transport the
+	  messages over IPC Router.
diff --git a/drivers/soc/qcom/Makefile b/drivers/soc/qcom/Makefile
index 52f1d8c..32f6dcd 100644
--- a/drivers/soc/qcom/Makefile
+++ b/drivers/soc/qcom/Makefile
@@ -34,3 +34,8 @@
obj-$(CONFIG_QTI_SYSTEM_PM) += system_pm.o
obj-$(CONFIG_MSM_SMP2P) += msm_smp2p.o smp2p_debug.o smp2p_sleepstate.o
obj-$(CONFIG_MSM_SMP2P_TEST) += smp2p_loopback.o smp2p_test.o smp2p_spinlock_test.o
+obj-$(CONFIG_MSM_IPC_ROUTER_SMD_XPRT) += ipc_router_smd_xprt.o
+obj-$(CONFIG_MSM_IPC_ROUTER_HSIC_XPRT) += ipc_router_hsic_xprt.o
+obj-$(CONFIG_MSM_IPC_ROUTER_MHI_XPRT) += ipc_router_mhi_xprt.o
+obj-$(CONFIG_MSM_IPC_ROUTER_GLINK_XPRT) += ipc_router_glink_xprt.o
+obj-$(CONFIG_MSM_QMI_INTERFACE) += qmi_interface.o
diff --git a/drivers/soc/qcom/ipc_router_glink_xprt.c b/drivers/soc/qcom/ipc_router_glink_xprt.c
new file mode 100644
index 0000000..9a9d73b
--- /dev/null
+++ b/drivers/soc/qcom/ipc_router_glink_xprt.c
@@ -0,0 +1,871 @@
+/* Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * IPC ROUTER GLINK XPRT module.
+ */
+#define DEBUG
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <linux/of.h>
+#include <linux/ipc_router_xprt.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+
+#include <soc/qcom/glink.h>
+#include <soc/qcom/subsystem_restart.h>
+
+static int ipc_router_glink_xprt_debug_mask;
+module_param_named(debug_mask, ipc_router_glink_xprt_debug_mask,
+ int, 0664);
+
+#if defined(DEBUG)
+#define D(x...) do { \
+if (ipc_router_glink_xprt_debug_mask) \
+ pr_info(x); \
+} while (0)
+#else
+#define D(x...) do { } while (0)
+#endif
+
+#define MIN_FRAG_SZ (IPC_ROUTER_HDR_SIZE + sizeof(union rr_control_msg))
+#define IPC_RTR_XPRT_NAME_LEN (2 * GLINK_NAME_SIZE)
+#define PIL_SUBSYSTEM_NAME_LEN 32
+#define DEFAULT_NUM_INTENTS 5
+#define DEFAULT_RX_INTENT_SIZE 2048
+/**
+ * ipc_router_glink_xprt - IPC Router's GLINK XPRT structure
+ * @list: IPC router's GLINK XPRT list.
+ * @ch_name: GLINK channel name.
+ * @edge: Edge between the local node and the remote node.
+ * @transport: Physical transport name as identified by GLINK.
+ * @pil_edge: Edge name understood by PIL.
+ * @ipc_rtr_xprt_name: XPRT name to be registered with IPC Router.
+ * @xprt: IPC Router XPRT structure to contain XPRT specific info.
+ * @ch_hndl: Opaque channel handle returned by GLINK.
+ * @xprt_wq: Workqueue to queue read & other XPRT related works.
+ * @ss_reset_rwlock: Read-Write lock to protect access to the ss_reset flag.
+ * @ss_reset: flag used to check SSR state.
+ * @pil: pil handle to the remote subsystem
+ * @sft_close_complete: Variable to indicate completion of SSR handling
+ * by IPC Router.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @xprt_option: XPRT specific options to be handled by IPC Router.
+ * @disable_pil_loading: Disable PIL Loading of the subsystem.
+ */
+struct ipc_router_glink_xprt {
+ struct list_head list;
+ char ch_name[GLINK_NAME_SIZE];
+ char edge[GLINK_NAME_SIZE];
+ char transport[GLINK_NAME_SIZE];
+ char pil_edge[PIL_SUBSYSTEM_NAME_LEN];
+ char ipc_rtr_xprt_name[IPC_RTR_XPRT_NAME_LEN];
+ struct msm_ipc_router_xprt xprt;
+ void *ch_hndl;
+ struct workqueue_struct *xprt_wq;
+ struct rw_semaphore ss_reset_rwlock;
+ int ss_reset;
+ void *pil;
+ struct completion sft_close_complete;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+ bool disable_pil_loading;
+};
+
+struct ipc_router_glink_xprt_work {
+ struct ipc_router_glink_xprt *glink_xprtp;
+ struct work_struct work;
+};
+
+struct queue_rx_intent_work {
+ struct ipc_router_glink_xprt *glink_xprtp;
+ size_t intent_size;
+ struct work_struct work;
+};
+
+struct read_work {
+ struct ipc_router_glink_xprt *glink_xprtp;
+ void *iovec;
+ size_t iovec_size;
+ void * (*vbuf_provider)(void *iovec, size_t offset, size_t *size);
+ void * (*pbuf_provider)(void *iovec, size_t offset, size_t *size);
+ struct work_struct work;
+};
+
+static void glink_xprt_read_data(struct work_struct *work);
+static void glink_xprt_open_event(struct work_struct *work);
+static void glink_xprt_close_event(struct work_struct *work);
+
+/**
+ * ipc_router_glink_xprt_config - Configuration info of each GLINK XPRT
+ * @ch_name: Name of the GLINK endpoint exported by the GLINK driver.
+ * @edge: Edge between the local node and remote node.
+ * @transport: Physical transport name as identified by GLINK.
+ * @pil_edge: Edge name understood by PIL.
+ * @ipc_rtr_xprt_name: XPRT name to be registered with IPC Router.
+ * @link_id: Network cluster ID to which this XPRT belongs.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @xprt_option: XPRT specific options to be handled by IPC Router.
+ * @disable_pil_loading: Disable PIL loading of the subsystem.
+ */
+struct ipc_router_glink_xprt_config {
+ char ch_name[GLINK_NAME_SIZE];
+ char edge[GLINK_NAME_SIZE];
+ char transport[GLINK_NAME_SIZE];
+ char ipc_rtr_xprt_name[IPC_RTR_XPRT_NAME_LEN];
+ char pil_edge[PIL_SUBSYSTEM_NAME_LEN];
+ uint32_t link_id;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+ bool disable_pil_loading;
+};
+
+#define MODULE_NAME "ipc_router_glink_xprt"
+static DEFINE_MUTEX(glink_xprt_list_lock_lha1);
+static LIST_HEAD(glink_xprt_list);
+
+static struct workqueue_struct *glink_xprt_wq;
+
+static void glink_xprt_link_state_cb(struct glink_link_state_cb_info *cb_info,
+ void *priv);
+static struct glink_link_info glink_xprt_link_info = {
+ NULL, NULL, glink_xprt_link_state_cb};
+static void *glink_xprt_link_state_notif_handle;
+
+struct xprt_state_work_info {
+ char edge[GLINK_NAME_SIZE];
+ char transport[GLINK_NAME_SIZE];
+ uint32_t link_state;
+ struct work_struct work;
+};
+
+#define OVERFLOW_ADD_UNSIGNED(type, a, b) \
+	(((type)~0 - (a)) < (b))
+
+static void *glink_xprt_vbuf_provider(void *iovec, size_t offset,
+ size_t *buf_size)
+{
+ struct rr_packet *pkt = (struct rr_packet *)iovec;
+ struct sk_buff *skb;
+ size_t temp_size = 0;
+
+ if (unlikely(!pkt || !buf_size))
+ return NULL;
+
+ *buf_size = 0;
+ skb_queue_walk(pkt->pkt_fragment_q, skb) {
+ if (unlikely(OVERFLOW_ADD_UNSIGNED(size_t, temp_size,
+ skb->len)))
+ break;
+
+ temp_size += skb->len;
+ if (offset >= temp_size)
+ continue;
+
+ *buf_size = temp_size - offset;
+ return (void *)skb->data + skb->len - *buf_size;
+ }
+ return NULL;
+}
+
+/**
+ * ipc_router_glink_xprt_set_version() - Set the IPC Router version in transport
+ * @xprt: Reference to the transport structure.
+ * @version: The version to be set in transport.
+ */
+static void ipc_router_glink_xprt_set_version(
+ struct msm_ipc_router_xprt *xprt, unsigned int version)
+{
+ struct ipc_router_glink_xprt *glink_xprtp;
+
+ if (!xprt)
+ return;
+ glink_xprtp = container_of(xprt, struct ipc_router_glink_xprt, xprt);
+ glink_xprtp->xprt_version = version;
+}
+
+static int ipc_router_glink_xprt_get_version(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_glink_xprt *glink_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ glink_xprtp = container_of(xprt, struct ipc_router_glink_xprt, xprt);
+
+ return (int)glink_xprtp->xprt_version;
+}
+
+static int ipc_router_glink_xprt_get_option(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_glink_xprt *glink_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ glink_xprtp = container_of(xprt, struct ipc_router_glink_xprt, xprt);
+
+ return (int)glink_xprtp->xprt_option;
+}
+
+static int ipc_router_glink_xprt_write(void *data, uint32_t len,
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct rr_packet *pkt = (struct rr_packet *)data;
+ struct rr_packet *temp_pkt;
+ int ret;
+ struct ipc_router_glink_xprt *glink_xprtp =
+ container_of(xprt, struct ipc_router_glink_xprt, xprt);
+
+ if (!pkt)
+ return -EINVAL;
+
+ if (!len || pkt->length != len)
+ return -EINVAL;
+
+ temp_pkt = clone_pkt(pkt);
+ if (!temp_pkt) {
+ IPC_RTR_ERR("%s: Error cloning packet while tx\n", __func__);
+ return -ENOMEM;
+ }
+
+ down_read(&glink_xprtp->ss_reset_rwlock);
+ if (glink_xprtp->ss_reset) {
+ release_pkt(temp_pkt);
+ IPC_RTR_ERR("%s: %s chnl reset\n", __func__, xprt->name);
+ ret = -ENETRESET;
+ goto out_write_data;
+ }
+
+ D("%s: Ready to write %d bytes\n", __func__, len);
+ ret = glink_txv(glink_xprtp->ch_hndl, (void *)glink_xprtp,
+ (void *)temp_pkt, len, glink_xprt_vbuf_provider,
+ NULL, true);
+ if (ret < 0) {
+ release_pkt(temp_pkt);
+ IPC_RTR_ERR("%s: Error %d while tx\n", __func__, ret);
+ goto out_write_data;
+ }
+ ret = len;
+ D("%s:%s: TX Complete for %d bytes @ %p\n", __func__,
+ glink_xprtp->ipc_rtr_xprt_name, len, temp_pkt);
+
+out_write_data:
+ up_read(&glink_xprtp->ss_reset_rwlock);
+ return ret;
+}
+
+static int ipc_router_glink_xprt_close(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_glink_xprt *glink_xprtp =
+ container_of(xprt, struct ipc_router_glink_xprt, xprt);
+
+ down_write(&glink_xprtp->ss_reset_rwlock);
+ glink_xprtp->ss_reset = 1;
+ up_write(&glink_xprtp->ss_reset_rwlock);
+ return glink_close(glink_xprtp->ch_hndl);
+}
+
+static void glink_xprt_sft_close_done(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_glink_xprt *glink_xprtp =
+ container_of(xprt, struct ipc_router_glink_xprt, xprt);
+
+ complete_all(&glink_xprtp->sft_close_complete);
+}
+
+static struct rr_packet *glink_xprt_copy_data(struct read_work *rx_work)
+{
+ void *buf, *pbuf, *dest_buf;
+ size_t buf_size;
+ struct rr_packet *pkt;
+ struct sk_buff *skb;
+
+ pkt = create_pkt(NULL);
+ if (!pkt) {
+ IPC_RTR_ERR("%s: Couldn't alloc rr_packet\n", __func__);
+ return NULL;
+ }
+
+ do {
+ buf_size = 0;
+ if (rx_work->vbuf_provider) {
+ buf = rx_work->vbuf_provider(rx_work->iovec,
+ pkt->length, &buf_size);
+ } else {
+ pbuf = rx_work->pbuf_provider(rx_work->iovec,
+ pkt->length, &buf_size);
+ buf = phys_to_virt((unsigned long)pbuf);
+ }
+ if (!buf_size || !buf)
+ break;
+
+ skb = alloc_skb(buf_size, GFP_KERNEL);
+ if (!skb) {
+ IPC_RTR_ERR("%s: Couldn't alloc skb of size %zu\n",
+ __func__, buf_size);
+ release_pkt(pkt);
+ return NULL;
+ }
+ dest_buf = skb_put(skb, buf_size);
+ memcpy(dest_buf, buf, buf_size);
+ skb_queue_tail(pkt->pkt_fragment_q, skb);
+ pkt->length += buf_size;
+ } while (buf && buf_size);
+ return pkt;
+}
+
+static void glink_xprt_read_data(struct work_struct *work)
+{
+ struct rr_packet *pkt;
+ struct read_work *rx_work =
+ container_of(work, struct read_work, work);
+ struct ipc_router_glink_xprt *glink_xprtp = rx_work->glink_xprtp;
+ bool reuse_intent = false;
+
+ down_read(&glink_xprtp->ss_reset_rwlock);
+ if (glink_xprtp->ss_reset) {
+ IPC_RTR_ERR("%s: %s channel reset\n",
+ __func__, glink_xprtp->xprt.name);
+ goto out_read_data;
+ }
+
+ D("%s %zu bytes @ %p\n", __func__, rx_work->iovec_size, rx_work->iovec);
+ if (rx_work->iovec_size <= DEFAULT_RX_INTENT_SIZE)
+ reuse_intent = true;
+
+ pkt = glink_xprt_copy_data(rx_work);
+ if (!pkt) {
+ IPC_RTR_ERR("%s: Error copying data\n", __func__);
+ goto out_read_data;
+ }
+
+ msm_ipc_router_xprt_notify(&glink_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_DATA, pkt);
+ release_pkt(pkt);
+out_read_data:
+ glink_rx_done(glink_xprtp->ch_hndl, rx_work->iovec, reuse_intent);
+ kfree(rx_work);
+ up_read(&glink_xprtp->ss_reset_rwlock);
+}
+
+static void glink_xprt_open_event(struct work_struct *work)
+{
+ struct ipc_router_glink_xprt_work *xprt_work =
+ container_of(work, struct ipc_router_glink_xprt_work, work);
+ struct ipc_router_glink_xprt *glink_xprtp = xprt_work->glink_xprtp;
+ int i;
+
+ msm_ipc_router_xprt_notify(&glink_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_OPEN, NULL);
+ D("%s: Notified IPC Router of %s OPEN\n",
+ __func__, glink_xprtp->xprt.name);
+ for (i = 0; i < DEFAULT_NUM_INTENTS; i++)
+ glink_queue_rx_intent(glink_xprtp->ch_hndl, (void *)glink_xprtp,
+ DEFAULT_RX_INTENT_SIZE);
+ kfree(xprt_work);
+}
+
+static void glink_xprt_close_event(struct work_struct *work)
+{
+ struct ipc_router_glink_xprt_work *xprt_work =
+ container_of(work, struct ipc_router_glink_xprt_work, work);
+ struct ipc_router_glink_xprt *glink_xprtp = xprt_work->glink_xprtp;
+
+ init_completion(&glink_xprtp->sft_close_complete);
+ msm_ipc_router_xprt_notify(&glink_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_CLOSE, NULL);
+ D("%s: Notified IPC Router of %s CLOSE\n",
+ __func__, glink_xprtp->xprt.name);
+ wait_for_completion(&glink_xprtp->sft_close_complete);
+ kfree(xprt_work);
+}
+
+static void glink_xprt_qrx_intent_worker(struct work_struct *work)
+{
+ struct queue_rx_intent_work *qrx_intent_work =
+ container_of(work, struct queue_rx_intent_work, work);
+ struct ipc_router_glink_xprt *glink_xprtp =
+ qrx_intent_work->glink_xprtp;
+
+ glink_queue_rx_intent(glink_xprtp->ch_hndl, (void *)glink_xprtp,
+ qrx_intent_work->intent_size);
+ kfree(qrx_intent_work);
+}
+
+static void msm_ipc_unload_subsystem(struct ipc_router_glink_xprt *glink_xprtp)
+{
+ if (glink_xprtp->pil) {
+ subsystem_put(glink_xprtp->pil);
+ glink_xprtp->pil = NULL;
+ }
+}
+
+static void *msm_ipc_load_subsystem(struct ipc_router_glink_xprt *glink_xprtp)
+{
+ void *pil = NULL;
+
+ if (!glink_xprtp->disable_pil_loading) {
+ pil = subsystem_get(glink_xprtp->pil_edge);
+ if (IS_ERR(pil)) {
+ pr_err("%s: Failed to load %s err = [%ld]\n",
+ __func__, glink_xprtp->pil_edge, PTR_ERR(pil));
+ pil = NULL;
+ }
+ }
+ return pil;
+}
+
+static void glink_xprt_notify_rxv(void *handle, const void *priv,
+ const void *pkt_priv, void *ptr, size_t size,
+ void * (*vbuf_provider)(void *iovec, size_t offset, size_t *size),
+ void * (*pbuf_provider)(void *iovec, size_t offset, size_t *size))
+{
+ struct ipc_router_glink_xprt *glink_xprtp =
+ (struct ipc_router_glink_xprt *)priv;
+ struct read_work *rx_work;
+
+ rx_work = kmalloc(sizeof(*rx_work), GFP_ATOMIC);
+ if (!rx_work) {
+ IPC_RTR_ERR("%s: couldn't allocate read_work\n", __func__);
+ glink_rx_done(glink_xprtp->ch_hndl, ptr, true);
+ return;
+ }
+
+ rx_work->glink_xprtp = glink_xprtp;
+ rx_work->iovec = ptr;
+ rx_work->iovec_size = size;
+ rx_work->vbuf_provider = vbuf_provider;
+ rx_work->pbuf_provider = pbuf_provider;
+ INIT_WORK(&rx_work->work, glink_xprt_read_data);
+ queue_work(glink_xprtp->xprt_wq, &rx_work->work);
+}
+
+static void glink_xprt_notify_tx_done(void *handle, const void *priv,
+ const void *pkt_priv, const void *ptr)
+{
+ struct ipc_router_glink_xprt *glink_xprtp =
+ (struct ipc_router_glink_xprt *)priv;
+ struct rr_packet *temp_pkt = (struct rr_packet *)ptr;
+
+ D("%s:%s: @ %p\n", __func__, glink_xprtp->ipc_rtr_xprt_name, ptr);
+ release_pkt(temp_pkt);
+}
+
+static bool glink_xprt_notify_rx_intent_req(void *handle, const void *priv,
+ size_t sz)
+{
+ struct queue_rx_intent_work *qrx_intent_work;
+ struct ipc_router_glink_xprt *glink_xprtp =
+ (struct ipc_router_glink_xprt *)priv;
+
+ if (sz <= DEFAULT_RX_INTENT_SIZE)
+ return true;
+
+ qrx_intent_work = kmalloc(sizeof(struct queue_rx_intent_work),
+ GFP_ATOMIC);
+ if (!qrx_intent_work) {
+ IPC_RTR_ERR("%s: Couldn't queue rx_intent of %zu bytes\n",
+ __func__, sz);
+ return false;
+ }
+ qrx_intent_work->glink_xprtp = glink_xprtp;
+ qrx_intent_work->intent_size = sz;
+ INIT_WORK(&qrx_intent_work->work, glink_xprt_qrx_intent_worker);
+ queue_work(glink_xprtp->xprt_wq, &qrx_intent_work->work);
+ return true;
+}
+
+static void glink_xprt_notify_state(void *handle, const void *priv,
+ unsigned int event)
+{
+ struct ipc_router_glink_xprt_work *xprt_work;
+ struct ipc_router_glink_xprt *glink_xprtp =
+ (struct ipc_router_glink_xprt *)priv;
+
+ D("%s: %s:%s - State %d\n",
+ __func__, glink_xprtp->edge, glink_xprtp->transport, event);
+ switch (event) {
+ case GLINK_CONNECTED:
+ if (IS_ERR_OR_NULL(glink_xprtp->ch_hndl))
+ glink_xprtp->ch_hndl = handle;
+ down_write(&glink_xprtp->ss_reset_rwlock);
+ glink_xprtp->ss_reset = 0;
+ up_write(&glink_xprtp->ss_reset_rwlock);
+ xprt_work = kmalloc(sizeof(struct ipc_router_glink_xprt_work),
+ GFP_ATOMIC);
+ if (!xprt_work) {
+ IPC_RTR_ERR(
+ "%s: Couldn't notify %d event to IPC Router\n",
+ __func__, event);
+ return;
+ }
+ xprt_work->glink_xprtp = glink_xprtp;
+ INIT_WORK(&xprt_work->work, glink_xprt_open_event);
+ queue_work(glink_xprtp->xprt_wq, &xprt_work->work);
+ break;
+
+ case GLINK_LOCAL_DISCONNECTED:
+ case GLINK_REMOTE_DISCONNECTED:
+ down_write(&glink_xprtp->ss_reset_rwlock);
+ if (glink_xprtp->ss_reset) {
+ up_write(&glink_xprtp->ss_reset_rwlock);
+ break;
+ }
+ glink_xprtp->ss_reset = 1;
+ up_write(&glink_xprtp->ss_reset_rwlock);
+ xprt_work = kmalloc(sizeof(struct ipc_router_glink_xprt_work),
+ GFP_ATOMIC);
+ if (!xprt_work) {
+ IPC_RTR_ERR(
+ "%s: Couldn't notify %d event to IPC Router\n",
+ __func__, event);
+ return;
+ }
+ xprt_work->glink_xprtp = glink_xprtp;
+ INIT_WORK(&xprt_work->work, glink_xprt_close_event);
+ queue_work(glink_xprtp->xprt_wq, &xprt_work->work);
+ break;
+ }
+}
+
+static void glink_xprt_ch_open(struct ipc_router_glink_xprt *glink_xprtp)
+{
+ struct glink_open_config open_cfg = {0};
+
+ if (!IS_ERR_OR_NULL(glink_xprtp->ch_hndl))
+ return;
+
+ open_cfg.transport = glink_xprtp->transport;
+ open_cfg.options |= GLINK_OPT_INITIAL_XPORT;
+ open_cfg.edge = glink_xprtp->edge;
+ open_cfg.name = glink_xprtp->ch_name;
+ open_cfg.notify_rx = NULL;
+ open_cfg.notify_rxv = glink_xprt_notify_rxv;
+ open_cfg.notify_tx_done = glink_xprt_notify_tx_done;
+ open_cfg.notify_state = glink_xprt_notify_state;
+ open_cfg.notify_rx_intent_req = glink_xprt_notify_rx_intent_req;
+ open_cfg.priv = glink_xprtp;
+
+ glink_xprtp->pil = msm_ipc_load_subsystem(glink_xprtp);
+ glink_xprtp->ch_hndl = glink_open(&open_cfg);
+ if (IS_ERR_OR_NULL(glink_xprtp->ch_hndl)) {
+ IPC_RTR_ERR("%s:%s:%s %s: unable to open channel\n",
+ open_cfg.transport, open_cfg.edge,
+ open_cfg.name, __func__);
+ msm_ipc_unload_subsystem(glink_xprtp);
+ }
+}
+
+/**
+ * glink_xprt_link_state_worker() - Function to handle link state updates
+ * @work: Pointer to the work item in the xprt_state_work_info.
+ *
+ * This worker function is scheduled when there is a link state update. Since
+ * this XPRT registers for link state updates on all transports, it receives
+ * link state updates about all transports that get registered in the system.
+ */
+static void glink_xprt_link_state_worker(struct work_struct *work)
+{
+ struct xprt_state_work_info *xs_info =
+ container_of(work, struct xprt_state_work_info, work);
+ struct ipc_router_glink_xprt *glink_xprtp;
+
+ if (xs_info->link_state == GLINK_LINK_STATE_UP) {
+ D("%s: LINK_STATE_UP %s:%s\n",
+ __func__, xs_info->edge, xs_info->transport);
+ mutex_lock(&glink_xprt_list_lock_lha1);
+ list_for_each_entry(glink_xprtp, &glink_xprt_list, list) {
+ if (strcmp(glink_xprtp->edge, xs_info->edge) ||
+ strcmp(glink_xprtp->transport, xs_info->transport))
+ continue;
+ glink_xprt_ch_open(glink_xprtp);
+ }
+ mutex_unlock(&glink_xprt_list_lock_lha1);
+ } else if (xs_info->link_state == GLINK_LINK_STATE_DOWN) {
+ D("%s: LINK_STATE_DOWN %s:%s\n",
+ __func__, xs_info->edge, xs_info->transport);
+ mutex_lock(&glink_xprt_list_lock_lha1);
+ list_for_each_entry(glink_xprtp, &glink_xprt_list, list) {
+ if (strcmp(glink_xprtp->edge, xs_info->edge) ||
+ strcmp(glink_xprtp->transport, xs_info->transport)
+ || IS_ERR_OR_NULL(glink_xprtp->ch_hndl))
+ continue;
+ glink_close(glink_xprtp->ch_hndl);
+ glink_xprtp->ch_hndl = NULL;
+ msm_ipc_unload_subsystem(glink_xprtp);
+ }
+ mutex_unlock(&glink_xprt_list_lock_lha1);
+
+ }
+ kfree(xs_info);
+}
+
+/**
+ * glink_xprt_link_state_cb() - Callback to receive link state updates
+ * @cb_info: Information containing link & its state.
+ * @priv: Private data passed during the link state registration.
+ *
+ * This function is called by the GLINK core to notify the IPC Router
+ * regarding the link state updates. This function is registered with the
+ * GLINK core by IPC Router during glink_register_link_state_cb().
+ */
+static void glink_xprt_link_state_cb(struct glink_link_state_cb_info *cb_info,
+ void *priv)
+{
+ struct xprt_state_work_info *xs_info;
+
+ if (!cb_info)
+ return;
+
+ D("%s: %s:%s\n", __func__, cb_info->edge, cb_info->transport);
+ xs_info = kmalloc(sizeof(*xs_info), GFP_KERNEL);
+ if (!xs_info) {
+ IPC_RTR_ERR("%s: Error allocating xprt state info\n", __func__);
+ return;
+ }
+
+ strlcpy(xs_info->edge, cb_info->edge, GLINK_NAME_SIZE);
+ strlcpy(xs_info->transport, cb_info->transport, GLINK_NAME_SIZE);
+ xs_info->link_state = cb_info->link_state;
+ INIT_WORK(&xs_info->work, glink_xprt_link_state_worker);
+ queue_work(glink_xprt_wq, &xs_info->work);
+}
+
+/**
+ * ipc_router_glink_config_init() - init GLINK xprt configs
+ *
+ * @glink_xprt_config: pointer to GLINK Channel configurations.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called to initialize the GLINK XPRT pointer with
+ * the GLINK XPRT configurations either from device tree or static arrays.
+ */
+static int ipc_router_glink_config_init(
+ struct ipc_router_glink_xprt_config *glink_xprt_config)
+{
+ struct ipc_router_glink_xprt *glink_xprtp;
+ char xprt_wq_name[GLINK_NAME_SIZE];
+
+ glink_xprtp = kzalloc(sizeof(struct ipc_router_glink_xprt), GFP_KERNEL);
+	if (!glink_xprtp) {
+ IPC_RTR_ERR("%s:%s:%s:%s glink_xprtp alloc failed\n",
+ __func__, glink_xprt_config->ch_name,
+ glink_xprt_config->edge,
+ glink_xprt_config->transport);
+ return -ENOMEM;
+ }
+
+ glink_xprtp->xprt.link_id = glink_xprt_config->link_id;
+ glink_xprtp->xprt_version = glink_xprt_config->xprt_version;
+ glink_xprtp->xprt_option = glink_xprt_config->xprt_option;
+ glink_xprtp->disable_pil_loading =
+ glink_xprt_config->disable_pil_loading;
+
+ if (!glink_xprtp->disable_pil_loading)
+ strlcpy(glink_xprtp->pil_edge, glink_xprt_config->pil_edge,
+ PIL_SUBSYSTEM_NAME_LEN);
+ strlcpy(glink_xprtp->ch_name, glink_xprt_config->ch_name,
+ GLINK_NAME_SIZE);
+ strlcpy(glink_xprtp->edge, glink_xprt_config->edge, GLINK_NAME_SIZE);
+ strlcpy(glink_xprtp->transport,
+ glink_xprt_config->transport, GLINK_NAME_SIZE);
+ strlcpy(glink_xprtp->ipc_rtr_xprt_name,
+ glink_xprt_config->ipc_rtr_xprt_name, IPC_RTR_XPRT_NAME_LEN);
+ glink_xprtp->xprt.name = glink_xprtp->ipc_rtr_xprt_name;
+
+ glink_xprtp->xprt.get_version = ipc_router_glink_xprt_get_version;
+ glink_xprtp->xprt.set_version = ipc_router_glink_xprt_set_version;
+ glink_xprtp->xprt.get_option = ipc_router_glink_xprt_get_option;
+ glink_xprtp->xprt.read_avail = NULL;
+ glink_xprtp->xprt.read = NULL;
+ glink_xprtp->xprt.write_avail = NULL;
+ glink_xprtp->xprt.write = ipc_router_glink_xprt_write;
+ glink_xprtp->xprt.close = ipc_router_glink_xprt_close;
+ glink_xprtp->xprt.sft_close_done = glink_xprt_sft_close_done;
+ glink_xprtp->xprt.priv = NULL;
+
+ init_rwsem(&glink_xprtp->ss_reset_rwlock);
+ glink_xprtp->ss_reset = 0;
+
+ scnprintf(xprt_wq_name, GLINK_NAME_SIZE, "%s_%s_%s",
+ glink_xprtp->ch_name, glink_xprtp->edge,
+ glink_xprtp->transport);
+ glink_xprtp->xprt_wq = create_singlethread_workqueue(xprt_wq_name);
+ if (IS_ERR_OR_NULL(glink_xprtp->xprt_wq)) {
+ IPC_RTR_ERR("%s:%s:%s:%s wq alloc failed\n",
+ __func__, glink_xprt_config->ch_name,
+ glink_xprt_config->edge,
+ glink_xprt_config->transport);
+ kfree(glink_xprtp);
+ return -EFAULT;
+ }
+
+ mutex_lock(&glink_xprt_list_lock_lha1);
+ list_add(&glink_xprtp->list, &glink_xprt_list);
+ mutex_unlock(&glink_xprt_list_lock_lha1);
+
+	glink_xprt_link_info.edge = glink_xprtp->edge;
+ glink_xprt_link_state_notif_handle = glink_register_link_state_cb(
+ &glink_xprt_link_info, NULL);
+ return 0;
+}
+
+/**
+ * parse_devicetree() - parse device tree binding
+ *
+ * @node: pointer to device tree node
+ * @glink_xprt_config: pointer to GLINK XPRT configurations
+ *
+ * @return: 0 on success, -ENODEV on failure.
+ */
+static int parse_devicetree(struct device_node *node,
+ struct ipc_router_glink_xprt_config *glink_xprt_config)
+{
+ int ret;
+ int link_id;
+ int version;
+ char *key;
+ const char *ch_name;
+ const char *edge;
+ const char *transport;
+ const char *pil_edge;
+
+ key = "qcom,ch-name";
+ ch_name = of_get_property(node, key, NULL);
+ if (!ch_name)
+ goto error;
+ strlcpy(glink_xprt_config->ch_name, ch_name, GLINK_NAME_SIZE);
+
+ key = "qcom,xprt-remote";
+ edge = of_get_property(node, key, NULL);
+ if (!edge)
+ goto error;
+ strlcpy(glink_xprt_config->edge, edge, GLINK_NAME_SIZE);
+
+ key = "qcom,glink-xprt";
+ transport = of_get_property(node, key, NULL);
+ if (!transport)
+ goto error;
+ strlcpy(glink_xprt_config->transport, transport,
+ GLINK_NAME_SIZE);
+
+ key = "qcom,xprt-linkid";
+ ret = of_property_read_u32(node, key, &link_id);
+ if (ret)
+ goto error;
+ glink_xprt_config->link_id = link_id;
+
+ key = "qcom,xprt-version";
+ ret = of_property_read_u32(node, key, &version);
+ if (ret)
+ goto error;
+ glink_xprt_config->xprt_version = version;
+
+ key = "qcom,fragmented-data";
+ glink_xprt_config->xprt_option = of_property_read_bool(node, key);
+
+ key = "qcom,pil-label";
+ pil_edge = of_get_property(node, key, NULL);
+ if (pil_edge) {
+ strlcpy(glink_xprt_config->pil_edge,
+ pil_edge, PIL_SUBSYSTEM_NAME_LEN);
+ glink_xprt_config->disable_pil_loading = false;
+ } else {
+ glink_xprt_config->disable_pil_loading = true;
+ }
+ scnprintf(glink_xprt_config->ipc_rtr_xprt_name, IPC_RTR_XPRT_NAME_LEN,
+ "%s_%s", edge, ch_name);
+
+ return 0;
+
+error:
+ IPC_RTR_ERR("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
+
+/**
+ * ipc_router_glink_xprt_probe() - Probe a GLINK xprt
+ *
+ * @pdev: Platform device corresponding to GLINK xprt.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying device tree driver registers
+ * a platform device, mapped to a GLINK transport.
+ */
+static int ipc_router_glink_xprt_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct ipc_router_glink_xprt_config glink_xprt_config;
+
+ if (pdev) {
+ if (pdev->dev.of_node) {
+ ret = parse_devicetree(pdev->dev.of_node,
+ &glink_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s: Failed to parse device tree\n",
+ __func__);
+ return ret;
+ }
+
+ ret = ipc_router_glink_config_init(&glink_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s init failed\n", __func__);
+ return ret;
+ }
+ }
+ }
+ return 0;
+}
+
+static const struct of_device_id ipc_router_glink_xprt_match_table[] = {
+ { .compatible = "qcom,ipc_router_glink_xprt" },
+ {},
+};
+
+static struct platform_driver ipc_router_glink_xprt_driver = {
+ .probe = ipc_router_glink_xprt_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = ipc_router_glink_xprt_match_table,
+ },
+};
+
+static int __init ipc_router_glink_xprt_init(void)
+{
+ int rc;
+
+ glink_xprt_wq = create_singlethread_workqueue("glink_xprt_wq");
+ if (IS_ERR_OR_NULL(glink_xprt_wq)) {
+ pr_err("%s: create_singlethread_workqueue failed\n", __func__);
+ return -EFAULT;
+ }
+
+ rc = platform_driver_register(&ipc_router_glink_xprt_driver);
+ if (rc) {
+ IPC_RTR_ERR(
+ "%s: ipc_router_glink_xprt_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+module_init(ipc_router_glink_xprt_init);
+MODULE_DESCRIPTION("IPC Router GLINK XPRT");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/ipc_router_hsic_xprt.c b/drivers/soc/qcom/ipc_router_hsic_xprt.c
new file mode 100644
index 0000000..937c9f7
--- /dev/null
+++ b/drivers/soc/qcom/ipc_router_hsic_xprt.c
@@ -0,0 +1,784 @@
+/* Copyright (c) 2013-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * IPC ROUTER HSIC XPRT module.
+ */
+#define DEBUG
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <linux/of.h>
+#include <linux/ipc_router_xprt.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+#include <soc/qcom/subsystem_restart.h>
+
+#include <mach/ipc_bridge.h>
+
+static int msm_ipc_router_hsic_xprt_debug_mask;
+module_param_named(debug_mask, msm_ipc_router_hsic_xprt_debug_mask,
+ int, 0664);
+
+#if defined(DEBUG)
+#define D(x...) do { \
+if (msm_ipc_router_hsic_xprt_debug_mask) \
+ pr_info(x); \
+} while (0)
+#else
+#define D(x...) do { } while (0)
+#endif
+
+#define NUM_HSIC_XPRTS 1
+#define XPRT_NAME_LEN 32
+
+/**
+ * msm_ipc_router_hsic_xprt - IPC Router's HSIC XPRT structure
+ * @list: IPC router's HSIC XPRTs list.
+ * @ch_name: Name of the HSIC endpoint exported by ipc_bridge driver.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @driver: Platform driver registered by this XPRT.
+ * @xprt: IPC Router XPRT structure to contain HSIC XPRT specific info.
+ * @pdev: Platform device registered by IPC Bridge function driver.
+ * @hsic_xprt_wq: Workqueue to queue read & other XPRT related works.
+ * @read_work: Read Work to perform read operation from HSIC's ipc_bridge.
+ * @in_pkt: Pointer to any partially read packet.
+ * @ss_reset_lock: Lock to protect access to the ss_reset flag.
+ * @ss_reset: flag used to check SSR state.
+ * @sft_close_complete: Variable to indicate completion of SSR handling
+ * by IPC Router.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @xprt_option: XPRT specific options to be handled by IPC Router.
+ */
+struct msm_ipc_router_hsic_xprt {
+ struct list_head list;
+ char ch_name[XPRT_NAME_LEN];
+ char xprt_name[XPRT_NAME_LEN];
+ struct platform_driver driver;
+ struct msm_ipc_router_xprt xprt;
+ struct platform_device *pdev;
+ struct workqueue_struct *hsic_xprt_wq;
+ struct delayed_work read_work;
+ struct rr_packet *in_pkt;
+ struct mutex ss_reset_lock;
+ int ss_reset;
+ struct completion sft_close_complete;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+};
+
+struct msm_ipc_router_hsic_xprt_work {
+ struct msm_ipc_router_xprt *xprt;
+ struct work_struct work;
+};
+
+static void hsic_xprt_read_data(struct work_struct *work);
+
+/**
+ * msm_ipc_router_hsic_xprt_config - Configuration info of each HSIC XPRT
+ * @ch_name: Name of the HSIC endpoint exported by ipc_bridge driver.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @hsic_pdev_id: ID to differentiate among multiple ipc_bridge endpoints.
+ * @link_id: Network cluster ID to which this XPRT belongs.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ */
+struct msm_ipc_router_hsic_xprt_config {
+ char ch_name[XPRT_NAME_LEN];
+ char xprt_name[XPRT_NAME_LEN];
+ int hsic_pdev_id;
+ uint32_t link_id;
+ unsigned int xprt_version;
+};
+
+static struct msm_ipc_router_hsic_xprt_config hsic_xprt_cfg[] = {
+ {"ipc_bridge", "ipc_rtr_ipc_bridge1", 1, 1, 3},
+};
+
+#define MODULE_NAME "ipc_router_hsic_xprt"
+#define IPC_ROUTER_HSIC_XPRT_WAIT_TIMEOUT 3000
+static int ipc_router_hsic_xprt_probe_done;
+static struct delayed_work ipc_router_hsic_xprt_probe_work;
+static DEFINE_MUTEX(hsic_remote_xprt_list_lock_lha1);
+static LIST_HEAD(hsic_remote_xprt_list);
+
+/**
+ * find_hsic_xprt_list() - Find xprt item specific to an HSIC endpoint
+ * @name: Name of the platform device to find in list
+ *
+ * @return: pointer to msm_ipc_router_hsic_xprt if matching endpoint is found,
+ * else NULL.
+ *
+ * This function is used to find a specific xprt item in the global xprt list.
+ */
+static struct msm_ipc_router_hsic_xprt *
+ find_hsic_xprt_list(const char *name)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ mutex_lock(&hsic_remote_xprt_list_lock_lha1);
+ list_for_each_entry(hsic_xprtp, &hsic_remote_xprt_list, list) {
+ if (!strcmp(name, hsic_xprtp->ch_name)) {
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+ return hsic_xprtp;
+ }
+ }
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+ return NULL;
+}
+
+/**
+ * ipc_router_hsic_set_xprt_version() - Set IPC Router header version
+ * in the transport
+ * @xprt: Reference to the transport structure.
+ * @version: The version to be set in transport.
+ */
+static void ipc_router_hsic_set_xprt_version(
+ struct msm_ipc_router_xprt *xprt, unsigned int version)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ if (!xprt)
+ return;
+ hsic_xprtp = container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+ hsic_xprtp->xprt_version = version;
+}
+
+/**
+ * msm_ipc_router_hsic_get_xprt_version() - Get IPC Router header version
+ * supported by the XPRT
+ * @xprt: XPRT for which the version information is required.
+ *
+ * @return: IPC Router header version supported by the XPRT.
+ */
+static int msm_ipc_router_hsic_get_xprt_version(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ hsic_xprtp = container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+
+ return (int)hsic_xprtp->xprt_version;
+}
+
+/**
+ * msm_ipc_router_hsic_get_xprt_option() - Get XPRT options
+ * @xprt: XPRT for which the option information is required.
+ *
+ * @return: Options supported by the XPRT.
+ */
+static int msm_ipc_router_hsic_get_xprt_option(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ hsic_xprtp = container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+
+ return (int)hsic_xprtp->xprt_option;
+}
+
+/**
+ * msm_ipc_router_hsic_remote_write_avail() - Get available write space
+ * @xprt: XPRT for which the available write space information is required.
+ *
+ * @return: Write space in bytes on success, 0 on SSR.
+ */
+static int msm_ipc_router_hsic_remote_write_avail(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_bridge_platform_data *pdata;
+ int write_avail;
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp =
+ container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ if (hsic_xprtp->ss_reset || !hsic_xprtp->pdev) {
+ write_avail = 0;
+ } else {
+ pdata = hsic_xprtp->pdev->dev.platform_data;
+ write_avail = pdata->max_write_size;
+ }
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return write_avail;
+}
+
+/**
+ * msm_ipc_router_hsic_remote_write() - Write to XPRT
+ * @data: Data to be written to the XPRT.
+ * @len: Length of the data to be written.
+ * @xprt: XPRT to which the data has to be written.
+ *
+ * @return: Data Length on success, standard Linux error codes on failure.
+ */
+static int msm_ipc_router_hsic_remote_write(void *data,
+ uint32_t len, struct msm_ipc_router_xprt *xprt)
+{
+ struct rr_packet *pkt = (struct rr_packet *)data;
+ struct sk_buff *skb;
+ struct ipc_bridge_platform_data *pdata;
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+ int ret;
+ uint32_t bytes_written = 0;
+ uint32_t bytes_to_write;
+ unsigned char *tx_data;
+
+ if (!pkt || pkt->length != len || !xprt) {
+ IPC_RTR_ERR("%s: Invalid input parameters\n", __func__);
+ return -EINVAL;
+ }
+
+ hsic_xprtp = container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ if (hsic_xprtp->ss_reset) {
+ IPC_RTR_ERR("%s: Trying to write on a reset link\n", __func__);
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return -ENETRESET;
+ }
+
+ if (!hsic_xprtp->pdev) {
+ IPC_RTR_ERR("%s: Trying to write on a closed link\n", __func__);
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return -ENODEV;
+ }
+
+ pdata = hsic_xprtp->pdev->dev.platform_data;
+ if (!pdata || !pdata->write) {
+ IPC_RTR_ERR("%s: Write on an uninitialized link\n", __func__);
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return -EFAULT;
+ }
+
+ skb = skb_peek(pkt->pkt_fragment_q);
+ if (!skb) {
+ IPC_RTR_ERR("%s SKB is NULL\n", __func__);
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return -EINVAL;
+ }
+ D("%s: About to write %d bytes\n", __func__, len);
+
+ while (bytes_written < len) {
+ bytes_to_write = min_t(uint32_t, (skb->len - bytes_written),
+ pdata->max_write_size);
+ tx_data = skb->data + bytes_written;
+ ret = pdata->write(hsic_xprtp->pdev, tx_data, bytes_to_write);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Error writing data %d\n",
+ __func__, ret);
+ break;
+ }
+ if (ret != bytes_to_write)
+ IPC_RTR_ERR("%s: Partial write %d < %d, retrying...\n",
+ __func__, ret, bytes_to_write);
+ /* Advance by the bytes actually written so that a
+ * partial write is retried on the next iteration.
+ */
+ bytes_written += ret;
+ }
+ if (bytes_written == len) {
+ ret = bytes_written;
+ } else if (ret > 0 && bytes_written != len) {
+ IPC_RTR_ERR("%s: Fault writing data %d != %d\n",
+ __func__, bytes_written, len);
+ ret = -EFAULT;
+ }
+ D("%s: Finished writing %d bytes\n", __func__, len);
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ return ret;
+}
+
+/**
+ * msm_ipc_router_hsic_remote_close() - Close the XPRT
+ * @xprt: XPRT which needs to be closed.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int msm_ipc_router_hsic_remote_close(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+ struct ipc_bridge_platform_data *pdata;
+
+ if (!xprt)
+ return -EINVAL;
+ hsic_xprtp = container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ hsic_xprtp->ss_reset = 1;
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ flush_workqueue(hsic_xprtp->hsic_xprt_wq);
+ destroy_workqueue(hsic_xprtp->hsic_xprt_wq);
+ pdata = hsic_xprtp->pdev->dev.platform_data;
+ if (pdata && pdata->close)
+ pdata->close(hsic_xprtp->pdev);
+ hsic_xprtp->pdev = NULL;
+ return 0;
+}
+
+/**
+ * hsic_xprt_read_data() - Read work to read from the XPRT
+ * @work: Read work to be executed.
+ *
+ * This function is a read work item queued on an XPRT-specific workqueue.
+ * The work parameter contains information regarding the XPRT on which this
+ * read work has to be performed. The work item keeps reading from the HSIC
+ * endpoint, until the endpoint returns an error.
+ */
+static void hsic_xprt_read_data(struct work_struct *work)
+{
+ int bytes_to_read;
+ int bytes_read;
+ int skb_size;
+ struct sk_buff *skb = NULL;
+ struct ipc_bridge_platform_data *pdata;
+ struct delayed_work *rwork = to_delayed_work(work);
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp =
+ container_of(rwork, struct msm_ipc_router_hsic_xprt, read_work);
+
+ while (1) {
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ if (hsic_xprtp->ss_reset) {
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ break;
+ }
+ pdata = hsic_xprtp->pdev->dev.platform_data;
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ while (!hsic_xprtp->in_pkt) {
+ hsic_xprtp->in_pkt = create_pkt(NULL);
+ if (hsic_xprtp->in_pkt)
+ break;
+ IPC_RTR_ERR("%s: packet allocation failure\n",
+ __func__);
+ msleep(100);
+ }
+ D("%s: Allocated rr_packet\n", __func__);
+
+ bytes_to_read = 0;
+ skb_size = pdata->max_read_size;
+ do {
+ do {
+ skb = alloc_skb(skb_size, GFP_KERNEL);
+ if (skb)
+ break;
+ IPC_RTR_ERR("%s: Couldn't alloc SKB\n",
+ __func__);
+ msleep(100);
+ } while (!skb);
+ bytes_read = pdata->read(hsic_xprtp->pdev, skb->data,
+ pdata->max_read_size);
+ if (bytes_read < 0) {
+ IPC_RTR_ERR("%s: Error %d @ read operation\n",
+ __func__, bytes_read);
+ kfree_skb(skb);
+ goto out_read_data;
+ }
+ if (!bytes_to_read) {
+ bytes_to_read = ipc_router_peek_pkt_size(
+ skb->data);
+ if (bytes_to_read < 0) {
+ IPC_RTR_ERR("%s: Invalid size %d\n",
+ __func__, bytes_to_read);
+ kfree_skb(skb);
+ goto out_read_data;
+ }
+ }
+ bytes_to_read -= bytes_read;
+ skb_put(skb, bytes_read);
+ skb_queue_tail(hsic_xprtp->in_pkt->pkt_fragment_q, skb);
+ hsic_xprtp->in_pkt->length += bytes_read;
+ skb_size = min_t(uint32_t, pdata->max_read_size,
+ (uint32_t)bytes_to_read);
+ } while (bytes_to_read > 0);
+
+ D("%s: Packet size read %d\n",
+ __func__, hsic_xprtp->in_pkt->length);
+ msm_ipc_router_xprt_notify(&hsic_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_DATA, (void *)hsic_xprtp->in_pkt);
+ release_pkt(hsic_xprtp->in_pkt);
+ hsic_xprtp->in_pkt = NULL;
+ }
+out_read_data:
+ release_pkt(hsic_xprtp->in_pkt);
+ hsic_xprtp->in_pkt = NULL;
+}
+
+/**
+ * hsic_xprt_sft_close_done() - Completion of XPRT reset
+ * @xprt: XPRT on which the reset operation is complete.
+ *
+ * This function is used by the IPC Router to signal this HSIC XPRT
+ * Abstraction Layer (XAL) that the XPRT reset has been completely handled
+ * by the IPC Router.
+ */
+static void hsic_xprt_sft_close_done(struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp =
+ container_of(xprt, struct msm_ipc_router_hsic_xprt, xprt);
+
+ complete_all(&hsic_xprtp->sft_close_complete);
+}
+
+/**
+ * msm_ipc_router_hsic_remote_remove() - Remove an HSIC endpoint
+ * @pdev: Platform device corresponding to HSIC endpoint.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying ipc_bridge driver unregisters
+ * a platform device, mapped to an HSIC endpoint, during SSR.
+ */
+static int msm_ipc_router_hsic_remote_remove(struct platform_device *pdev)
+{
+ struct ipc_bridge_platform_data *pdata;
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ hsic_xprtp = find_hsic_xprt_list(pdev->name);
+ if (!hsic_xprtp) {
+ IPC_RTR_ERR("%s: No device with name %s\n",
+ __func__, pdev->name);
+ return -ENODEV;
+ }
+
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ hsic_xprtp->ss_reset = 1;
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ flush_workqueue(hsic_xprtp->hsic_xprt_wq);
+ destroy_workqueue(hsic_xprtp->hsic_xprt_wq);
+ init_completion(&hsic_xprtp->sft_close_complete);
+ msm_ipc_router_xprt_notify(&hsic_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_CLOSE, NULL);
+ D("%s: Notified IPC Router of %s CLOSE\n",
+ __func__, hsic_xprtp->xprt.name);
+ wait_for_completion(&hsic_xprtp->sft_close_complete);
+ hsic_xprtp->pdev = NULL;
+ pdata = pdev->dev.platform_data;
+ if (pdata && pdata->close)
+ pdata->close(pdev);
+ return 0;
+}
+
+/**
+ * msm_ipc_router_hsic_remote_probe() - Probe an HSIC endpoint
+ * @pdev: Platform device corresponding to HSIC endpoint.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying ipc_bridge driver registers
+ * a platform device, mapped to an HSIC endpoint.
+ */
+static int msm_ipc_router_hsic_remote_probe(struct platform_device *pdev)
+{
+ int rc;
+ struct ipc_bridge_platform_data *pdata;
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ pdata = pdev->dev.platform_data;
+ if (!pdata || !pdata->open || !pdata->read ||
+ !pdata->write || !pdata->close) {
+ IPC_RTR_ERR("%s: pdata or one of its operations is NULL\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ hsic_xprtp = find_hsic_xprt_list(pdev->name);
+ if (!hsic_xprtp) {
+ IPC_RTR_ERR("%s: No device with name %s\n",
+ __func__, pdev->name);
+ return -ENODEV;
+ }
+
+ hsic_xprtp->hsic_xprt_wq =
+ create_singlethread_workqueue(pdev->name);
+ if (!hsic_xprtp->hsic_xprt_wq) {
+ IPC_RTR_ERR("%s: WQ creation failed for %s\n",
+ __func__, pdev->name);
+ return -EFAULT;
+ }
+
+ rc = pdata->open(pdev);
+ if (rc < 0) {
+ IPC_RTR_ERR("%s: Channel open failed for %s.%d\n",
+ __func__, pdev->name, pdev->id);
+ destroy_workqueue(hsic_xprtp->hsic_xprt_wq);
+ return rc;
+ }
+ hsic_xprtp->pdev = pdev;
+ mutex_lock(&hsic_xprtp->ss_reset_lock);
+ hsic_xprtp->ss_reset = 0;
+ mutex_unlock(&hsic_xprtp->ss_reset_lock);
+ msm_ipc_router_xprt_notify(&hsic_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_OPEN, NULL);
+ D("%s: Notified IPC Router of %s OPEN\n",
+ __func__, hsic_xprtp->xprt.name);
+ queue_delayed_work(hsic_xprtp->hsic_xprt_wq,
+ &hsic_xprtp->read_work, 0);
+ return 0;
+}
+
+/**
+ * msm_ipc_router_hsic_driver_register() - register HSIC XPRT drivers
+ *
+ * @hsic_xprtp: pointer to IPC router hsic xprt structure.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when a new XPRT is added to register platform
+ * drivers for new XPRT.
+ */
+static int msm_ipc_router_hsic_driver_register(
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp)
+{
+ int ret;
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp_item;
+
+ hsic_xprtp_item = find_hsic_xprt_list(hsic_xprtp->ch_name);
+
+ mutex_lock(&hsic_remote_xprt_list_lock_lha1);
+ list_add(&hsic_xprtp->list, &hsic_remote_xprt_list);
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+
+ if (!hsic_xprtp_item) {
+ hsic_xprtp->driver.driver.name = hsic_xprtp->ch_name;
+ hsic_xprtp->driver.driver.owner = THIS_MODULE;
+ hsic_xprtp->driver.probe = msm_ipc_router_hsic_remote_probe;
+ hsic_xprtp->driver.remove = msm_ipc_router_hsic_remote_remove;
+
+ ret = platform_driver_register(&hsic_xprtp->driver);
+ if (ret) {
+ IPC_RTR_ERR(
+ "%s: Failed to register platform driver[%s]\n",
+ __func__, hsic_xprtp->ch_name);
+ return ret;
+ }
+ } else {
+ IPC_RTR_ERR("%s: Driver already registered for %s\n",
+ __func__, hsic_xprtp->ch_name);
+ }
+
+ return 0;
+}
+
+/**
+ * msm_ipc_router_hsic_config_init() - init HSIC xprt configs
+ *
+ * @hsic_xprt_config: pointer to HSIC xprt configurations.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called to initialize the HSIC XPRT pointer with
+ * the HSIC XPRT configurations either from device tree or static arrays.
+ */
+static int msm_ipc_router_hsic_config_init(
+ struct msm_ipc_router_hsic_xprt_config *hsic_xprt_config)
+{
+ struct msm_ipc_router_hsic_xprt *hsic_xprtp;
+
+ hsic_xprtp = kzalloc(sizeof(struct msm_ipc_router_hsic_xprt),
+ GFP_KERNEL);
+ if (!hsic_xprtp) {
+ IPC_RTR_ERR("%s: kzalloc() failed for hsic_xprtp id:%s\n",
+ __func__, hsic_xprt_config->ch_name);
+ return -ENOMEM;
+ }
+
+ hsic_xprtp->xprt.link_id = hsic_xprt_config->link_id;
+ hsic_xprtp->xprt_version = hsic_xprt_config->xprt_version;
+
+ strlcpy(hsic_xprtp->ch_name, hsic_xprt_config->ch_name,
+ XPRT_NAME_LEN);
+
+ strlcpy(hsic_xprtp->xprt_name, hsic_xprt_config->xprt_name,
+ XPRT_NAME_LEN);
+ hsic_xprtp->xprt.name = hsic_xprtp->xprt_name;
+
+ hsic_xprtp->xprt.set_version =
+ ipc_router_hsic_set_xprt_version;
+ hsic_xprtp->xprt.get_version =
+ msm_ipc_router_hsic_get_xprt_version;
+ hsic_xprtp->xprt.get_option =
+ msm_ipc_router_hsic_get_xprt_option;
+ hsic_xprtp->xprt.read_avail = NULL;
+ hsic_xprtp->xprt.read = NULL;
+ hsic_xprtp->xprt.write_avail =
+ msm_ipc_router_hsic_remote_write_avail;
+ hsic_xprtp->xprt.write = msm_ipc_router_hsic_remote_write;
+ hsic_xprtp->xprt.close = msm_ipc_router_hsic_remote_close;
+ hsic_xprtp->xprt.sft_close_done = hsic_xprt_sft_close_done;
+ hsic_xprtp->xprt.priv = NULL;
+
+ hsic_xprtp->in_pkt = NULL;
+ INIT_DELAYED_WORK(&hsic_xprtp->read_work, hsic_xprt_read_data);
+ mutex_init(&hsic_xprtp->ss_reset_lock);
+ hsic_xprtp->ss_reset = 0;
+ hsic_xprtp->xprt_option = 0;
+
+ msm_ipc_router_hsic_driver_register(hsic_xprtp);
+ return 0;
+}
+
+/**
+ * parse_devicetree() - parse device tree binding
+ *
+ * @node: pointer to device tree node
+ * @hsic_xprt_config: pointer to HSIC XPRT configurations
+ *
+ * @return: 0 on success, -ENODEV on failure.
+ */
+static int parse_devicetree(struct device_node *node,
+ struct msm_ipc_router_hsic_xprt_config *hsic_xprt_config)
+{
+ int ret;
+ int link_id;
+ int version;
+ char *key;
+ const char *ch_name;
+ const char *remote_ss;
+
+ key = "qcom,ch-name";
+ ch_name = of_get_property(node, key, NULL);
+ if (!ch_name)
+ goto error;
+ strlcpy(hsic_xprt_config->ch_name, ch_name, XPRT_NAME_LEN);
+
+ key = "qcom,xprt-remote";
+ remote_ss = of_get_property(node, key, NULL);
+ if (!remote_ss)
+ goto error;
+
+ key = "qcom,xprt-linkid";
+ ret = of_property_read_u32(node, key, &link_id);
+ if (ret)
+ goto error;
+ hsic_xprt_config->link_id = link_id;
+
+ key = "qcom,xprt-version";
+ ret = of_property_read_u32(node, key, &version);
+ if (ret)
+ goto error;
+ hsic_xprt_config->xprt_version = version;
+
+ scnprintf(hsic_xprt_config->xprt_name, XPRT_NAME_LEN, "%s_%s",
+ remote_ss, hsic_xprt_config->ch_name);
+
+ return 0;
+
+error:
+ IPC_RTR_ERR("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
+
+/**
+ * msm_ipc_router_hsic_xprt_probe() - Probe an HSIC xprt
+ * @pdev: Platform device corresponding to HSIC xprt.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying device tree driver registers
+ * a platform device, mapped to an HSIC transport.
+ */
+static int msm_ipc_router_hsic_xprt_probe(
+ struct platform_device *pdev)
+{
+ int ret = 0;
+ struct msm_ipc_router_hsic_xprt_config hsic_xprt_config;
+
+ if (pdev && pdev->dev.of_node) {
+ mutex_lock(&hsic_remote_xprt_list_lock_lha1);
+ ipc_router_hsic_xprt_probe_done = 1;
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+
+ ret = parse_devicetree(pdev->dev.of_node,
+ &hsic_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s: Failed to parse device tree\n",
+ __func__);
+ return ret;
+ }
+
+ ret = msm_ipc_router_hsic_config_init(
+ &hsic_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s: init failed\n", __func__);
+ return ret;
+ }
+ }
+ return ret;
+}
+
+/**
+ * ipc_router_hsic_xprt_probe_worker() - probe worker for non-DT configurations
+ *
+ * @work: work item to process
+ *
+ * This function is scheduled via schedule_delayed_work() with a 3 second
+ * delay to check whether the device tree probe has completed. If the
+ * device tree probe has not run, the default configurations are read from
+ * a static array.
+ */
+static void ipc_router_hsic_xprt_probe_worker(struct work_struct *work)
+{
+ int i, ret;
+
+ if (WARN_ON(ARRAY_SIZE(hsic_xprt_cfg) != NUM_HSIC_XPRTS))
+ return;
+
+ mutex_lock(&hsic_remote_xprt_list_lock_lha1);
+ if (!ipc_router_hsic_xprt_probe_done) {
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+ for (i = 0; i < ARRAY_SIZE(hsic_xprt_cfg); i++) {
+ ret = msm_ipc_router_hsic_config_init(
+ &hsic_xprt_cfg[i]);
+ if (ret)
+ IPC_RTR_ERR("%s: init failed for config idx %d\n",
+ __func__, i);
+ }
+ mutex_lock(&hsic_remote_xprt_list_lock_lha1);
+ }
+ mutex_unlock(&hsic_remote_xprt_list_lock_lha1);
+}
+
+static const struct of_device_id msm_ipc_router_hsic_xprt_match_table[] = {
+ { .compatible = "qcom,ipc_router_hsic_xprt" },
+ {},
+};
+
+static struct platform_driver msm_ipc_router_hsic_xprt_driver = {
+ .probe = msm_ipc_router_hsic_xprt_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = msm_ipc_router_hsic_xprt_match_table,
+ },
+};
+
+static int __init msm_ipc_router_hsic_xprt_init(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&msm_ipc_router_hsic_xprt_driver);
+ if (rc) {
+ IPC_RTR_ERR(
+ "%s: msm_ipc_router_hsic_xprt_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ INIT_DELAYED_WORK(&ipc_router_hsic_xprt_probe_work,
+ ipc_router_hsic_xprt_probe_worker);
+ schedule_delayed_work(&ipc_router_hsic_xprt_probe_work,
+ msecs_to_jiffies(IPC_ROUTER_HSIC_XPRT_WAIT_TIMEOUT));
+ return 0;
+}
+
+module_init(msm_ipc_router_hsic_xprt_init);
+MODULE_DESCRIPTION("IPC Router HSIC XPRT");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/ipc_router_mhi_xprt.c b/drivers/soc/qcom/ipc_router_mhi_xprt.c
new file mode 100644
index 0000000..68849f7
--- /dev/null
+++ b/drivers/soc/qcom/ipc_router_mhi_xprt.c
@@ -0,0 +1,1011 @@
+/* Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * IPC ROUTER MHI XPRT module.
+ */
+#include <linux/delay.h>
+#include <linux/ipc_router_xprt.h>
+#include <linux/module.h>
+#include <linux/msm_mhi.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+
+static int ipc_router_mhi_xprt_debug_mask;
+module_param_named(debug_mask, ipc_router_mhi_xprt_debug_mask,
+ int, 0664);
+
+#define D(x...) do { \
+if (ipc_router_mhi_xprt_debug_mask) \
+ pr_info(x); \
+} while (0)
+
+#define NUM_MHI_XPRTS 1
+#define XPRT_NAME_LEN 32
+#define IPC_ROUTER_MHI_XPRT_MAX_PKT_SIZE 0x1000
+#define IPC_ROUTER_MHI_XPRT_NUM_TRBS 10
+
+/**
+ * ipc_router_mhi_addr_map - Struct for virtual address to IPC Router
+ * packet mapping.
+ * @list_node: Address mapping list node used by mhi transport map list.
+ * @virt_addr: The virtual address in mapping.
+ * @pkt: The IPC Router packet for the virtual address
+ */
+struct ipc_router_mhi_addr_map {
+ struct list_head list_node;
+ void *virt_addr;
+ struct rr_packet *pkt;
+};
+
+/**
+ * ipc_router_mhi_channel - MHI Channel related information
+ * @out_chan_id: Out channel ID for use by IPC ROUTER enumerated in MHI driver.
+ * @out_handle: MHI Output channel handle.
+ * @out_clnt_info: IPC Router callbacks/info to be passed to the MHI driver.
+ * @in_chan_id: In channel ID for use by IPC ROUTER enumerated in MHI driver.
+ * @in_handle: MHI Input channel handle.
+ * @in_clnt_info: IPC Router callbacks/info to be passed to the MHI driver.
+ * @state_lock: Lock to protect access to the state information.
+ * @out_chan_enabled: State of the outgoing channel.
+ * @in_chan_enabled: State of the incoming channel.
+ * @bytes_to_rx: Remaining bytes to be received in a packet.
+ * @in_skbq_lock: Lock to protect access to the input skbs queue.
+ * @in_skbq: Queue containing the input buffers.
+ * @max_packet_size: Possible maximum packet size.
+ * @num_trbs: Number of TRBs.
+ * @mhi_xprtp: Pointer to IPC Router MHI XPRT.
+ */
+struct ipc_router_mhi_channel {
+ enum MHI_CLIENT_CHANNEL out_chan_id;
+ struct mhi_client_handle *out_handle;
+ struct mhi_client_info_t out_clnt_info;
+
+ enum MHI_CLIENT_CHANNEL in_chan_id;
+ struct mhi_client_handle *in_handle;
+ struct mhi_client_info_t in_clnt_info;
+
+ struct mutex state_lock;
+ bool out_chan_enabled;
+ bool in_chan_enabled;
+ int bytes_to_rx;
+
+ struct mutex in_skbq_lock;
+ struct sk_buff_head in_skbq;
+ size_t max_packet_size;
+ uint32_t num_trbs;
+ void *mhi_xprtp;
+};
+
+/**
+ * ipc_router_mhi_xprt - IPC Router's MHI XPRT structure
+ * @list: IPC router's MHI XPRTs list.
+ * @ch_hndl: Data Structure to hold MHI Channel information.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @xprt: IPC Router XPRT structure to contain MHI XPRT specific info.
+ * @wq: Workqueue to queue read & other XPRT related works.
+ * @read_work: Read Work to perform read operation from MHI Driver.
+ * @in_pkt: Pointer to any partially read packet.
+ * @write_wait_q: Wait Queue to handle the write events.
+ * @sft_close_complete: Variable to indicate completion of SSR handling
+ * by IPC Router.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @xprt_option: XPRT specific options to be handled by IPC Router.
+ * @tx_addr_map_list_lock: The lock to protect the address mapping list for TX
+ * operations.
+ * @tx_addr_map_list: Virtual address mapping list for TX operations.
+ * @rx_addr_map_list_lock: The lock to protect the address mapping list for RX
+ * operations.
+ * @rx_addr_map_list: Virtual address mapping list for RX operations.
+ */
+struct ipc_router_mhi_xprt {
+ struct list_head list;
+ struct ipc_router_mhi_channel ch_hndl;
+ char xprt_name[XPRT_NAME_LEN];
+ struct msm_ipc_router_xprt xprt;
+ struct workqueue_struct *wq;
+ struct work_struct read_work;
+ struct rr_packet *in_pkt;
+ wait_queue_head_t write_wait_q;
+ struct completion sft_close_complete;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+ struct mutex tx_addr_map_list_lock;
+ struct list_head tx_addr_map_list;
+ struct mutex rx_addr_map_list_lock;
+ struct list_head rx_addr_map_list;
+};
+
+struct ipc_router_mhi_xprt_work {
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+ enum MHI_CLIENT_CHANNEL chan_id;
+ struct work_struct work;
+};
+
+static void mhi_xprt_read_data(struct work_struct *work);
+static void mhi_xprt_enable_event(struct work_struct *work);
+static void mhi_xprt_disable_event(struct work_struct *work);
+
+/**
+ * ipc_router_mhi_xprt_config - Configuration info of each MHI XPRT
+ * @out_chan_id: Out channel ID for use by IPC ROUTER enumerated in MHI driver.
+ * @in_chan_id: In channel ID for use by IPC ROUTER enumerated in MHI driver.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @link_id: Network Cluster ID to which this XPRT belongs.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ */
+struct ipc_router_mhi_xprt_config {
+ enum MHI_CLIENT_CHANNEL out_chan_id;
+ enum MHI_CLIENT_CHANNEL in_chan_id;
+ char xprt_name[XPRT_NAME_LEN];
+ uint32_t link_id;
+ uint32_t xprt_version;
+};
+
+#define MODULE_NAME "ipc_router_mhi_xprt"
+static DEFINE_MUTEX(mhi_xprt_list_lock_lha1);
+static LIST_HEAD(mhi_xprt_list);
+
+/**
+ * ipc_router_mhi_release_pkt() - Release a cloned IPC Router packet
+ * @ref: Reference to the kref object in the IPC Router packet.
+ */
+void ipc_router_mhi_release_pkt(struct kref *ref)
+{
+ struct rr_packet *pkt = container_of(ref, struct rr_packet, ref);
+
+ release_pkt(pkt);
+}
+
+/**
+ * ipc_router_mhi_xprt_find_addr_map() - Search the mapped virtual address
+ * @addr_map_list: The list of address mappings.
+ * @addr_map_list_lock: Reference to the lock that protects the @addr_map_list.
+ * @addr: The virtual address that needs to be found.
+ *
+ * Return: The mapped virtual address if found, NULL otherwise.
+ */
+void *ipc_router_mhi_xprt_find_addr_map(struct list_head *addr_map_list,
+ struct mutex *addr_map_list_lock,
+ void *addr)
+{
+ struct ipc_router_mhi_addr_map *addr_mapping;
+ struct ipc_router_mhi_addr_map *tmp_addr_mapping;
+ void *virt_addr;
+
+ if (!addr_map_list || !addr_map_list_lock)
+ return NULL;
+ mutex_lock(addr_map_list_lock);
+ list_for_each_entry_safe(addr_mapping, tmp_addr_mapping,
+ addr_map_list, list_node) {
+ if (addr_mapping->virt_addr == addr) {
+ virt_addr = addr_mapping->virt_addr;
+ list_del(&addr_mapping->list_node);
+ if (addr_mapping->pkt)
+ kref_put(&addr_mapping->pkt->ref,
+ ipc_router_mhi_release_pkt);
+ kfree(addr_mapping);
+ mutex_unlock(addr_map_list_lock);
+ return virt_addr;
+ }
+ }
+ mutex_unlock(addr_map_list_lock);
+ IPC_RTR_ERR("%s: Virtual address mapping [%p] not found\n",
+ __func__, addr);
+ return NULL;
+}
+
+/**
+ * ipc_router_mhi_xprt_add_addr_map() - Add a virtual address mapping structure
+ * @addr_map_list: The list of address mappings.
+ * @addr_map_list_lock: Reference to the lock that protects the @addr_map_list.
+ * @pkt: The IPC Router packet that contains the virtual address in skbs.
+ * @virt_addr: The virtual address which needs to be added.
+ *
+ * Return: 0 on success, standard Linux error code otherwise.
+ */
+int ipc_router_mhi_xprt_add_addr_map(struct list_head *addr_map_list,
+ struct mutex *addr_map_list_lock,
+ struct rr_packet *pkt, void *virt_addr)
+{
+ struct ipc_router_mhi_addr_map *addr_mapping;
+
+ if (!addr_map_list || !addr_map_list_lock)
+ return -EINVAL;
+ addr_mapping = kmalloc(sizeof(*addr_mapping), GFP_KERNEL);
+ if (!addr_mapping)
+ return -ENOMEM;
+ addr_mapping->virt_addr = virt_addr;
+ addr_mapping->pkt = pkt;
+ mutex_lock(addr_map_list_lock);
+ if (addr_mapping->pkt)
+ kref_get(&addr_mapping->pkt->ref);
+ list_add_tail(&addr_mapping->list_node, addr_map_list);
+ mutex_unlock(addr_map_list_lock);
+ return 0;
+}
+
+/**
+ * mhi_xprt_queue_in_buffers() - Queue input buffers
+ * @mhi_xprtp: MHI XPRT in which the input buffers have to be queued.
+ * @num_trbs: Number of buffers to be queued.
+ *
+ * @return: Number of buffers successfully queued.
+ */
+int mhi_xprt_queue_in_buffers(struct ipc_router_mhi_xprt *mhi_xprtp,
+ uint32_t num_trbs)
+{
+ int i;
+ struct sk_buff *skb;
+ uint32_t buf_size = mhi_xprtp->ch_hndl.max_packet_size;
+ int rc_val = 0;
+
+ for (i = 0; i < num_trbs; i++) {
+ skb = alloc_skb(buf_size, GFP_KERNEL);
+ if (!skb) {
+ IPC_RTR_ERR("%s: Could not allocate %d SKB(s)\n",
+ __func__, (i + 1));
+ break;
+ }
+ if (ipc_router_mhi_xprt_add_addr_map(
+ &mhi_xprtp->rx_addr_map_list,
+ &mhi_xprtp->rx_addr_map_list_lock, NULL,
+ skb->data) < 0) {
+ IPC_RTR_ERR("%s: Could not map %d SKB address\n",
+ __func__, (i + 1));
+ break;
+ }
+ mutex_lock(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ rc_val = mhi_queue_xfer(mhi_xprtp->ch_hndl.in_handle,
+ skb->data, buf_size, MHI_EOT);
+ if (rc_val) {
+ mutex_unlock(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ IPC_RTR_ERR("%s: Failed to queue TRB # %d into MHI\n",
+ __func__, (i + 1));
+ kfree_skb(skb);
+ break;
+ }
+ skb_queue_tail(&mhi_xprtp->ch_hndl.in_skbq, skb);
+ mutex_unlock(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ }
+ return i;
+}
+
+/**
+ * ipc_router_mhi_set_xprt_version() - Set the IPC Router version in transport
+ * @xprt: Reference to the transport structure.
+ * @version: The version to be set in transport.
+ */
+static void ipc_router_mhi_set_xprt_version(struct msm_ipc_router_xprt *xprt,
+ unsigned int version)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+
+ if (!xprt)
+ return;
+ mhi_xprtp = container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+ mhi_xprtp->xprt_version = version;
+}
+
+/**
+ * ipc_router_mhi_get_xprt_version() - Get IPC Router header version
+ * supported by the XPRT
+ * @xprt: XPRT for which the version information is required.
+ *
+ * @return: IPC Router header version supported by the XPRT.
+ */
+static int ipc_router_mhi_get_xprt_version(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ mhi_xprtp = container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ return (int)mhi_xprtp->xprt_version;
+}
+
+/**
+ * ipc_router_mhi_get_xprt_option() - Get XPRT options
+ * @xprt: XPRT for which the option information is required.
+ *
+ * @return: Options supported by the XPRT.
+ */
+static int ipc_router_mhi_get_xprt_option(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ mhi_xprtp = container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ return (int)mhi_xprtp->xprt_option;
+}
+
+/**
+ * ipc_router_mhi_write_avail() - Get available write space
+ * @xprt: XPRT for which the available write space information is required.
+ *
+ * @return: Write space in bytes on success, 0 on SSR.
+ */
+static int ipc_router_mhi_write_avail(struct msm_ipc_router_xprt *xprt)
+{
+ int write_avail;
+ struct ipc_router_mhi_xprt *mhi_xprtp =
+ container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ if (!mhi_xprtp->ch_hndl.out_chan_enabled)
+ write_avail = 0;
+ else
+ write_avail = mhi_get_free_desc(mhi_xprtp->ch_hndl.out_handle) *
+ mhi_xprtp->ch_hndl.max_packet_size;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ return write_avail;
+}
+
+/**
+ * ipc_router_mhi_write_skb() - Write a single SKB onto the XPRT
+ * @mhi_xprtp: XPRT in which the SKB has to be written.
+ * @skb: SKB to be written.
+ *
+ * @return: return number of bytes written on success,
+ * standard Linux error codes on failure.
+ */
+static int ipc_router_mhi_write_skb(struct ipc_router_mhi_xprt *mhi_xprtp,
+ struct sk_buff *skb, struct rr_packet *pkt)
+{
+ size_t sz_to_write = 0;
+ size_t offset = 0;
+ int rc;
+
+ while (offset < skb->len) {
+ wait_event(mhi_xprtp->write_wait_q,
+ mhi_get_free_desc(mhi_xprtp->ch_hndl.out_handle) ||
+ !mhi_xprtp->ch_hndl.out_chan_enabled);
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ if (!mhi_xprtp->ch_hndl.out_chan_enabled) {
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ IPC_RTR_ERR("%s: %s chnl reset\n",
+ __func__, mhi_xprtp->xprt_name);
+ return -ENETRESET;
+ }
+
+ sz_to_write = min((size_t)(skb->len - offset),
+ (size_t)IPC_ROUTER_MHI_XPRT_MAX_PKT_SIZE);
+ if (ipc_router_mhi_xprt_add_addr_map(
+ &mhi_xprtp->tx_addr_map_list,
+ &mhi_xprtp->tx_addr_map_list_lock, pkt,
+ skb->data + offset) < 0) {
+ /* Drop state_lock on the error path to avoid
+ * returning with the mutex held.
+ */
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ IPC_RTR_ERR("%s: Could not map SKB address\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ rc = mhi_queue_xfer(mhi_xprtp->ch_hndl.out_handle,
+ skb->data + offset, sz_to_write,
+ MHI_EOT | MHI_EOB);
+ if (rc) {
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ IPC_RTR_ERR("%s: Error queueing mhi_xfer 0x%zx\n",
+ __func__, sz_to_write);
+ return -EFAULT;
+ }
+ offset += sz_to_write;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ }
+ return skb->len;
+}
+
+/**
+ * ipc_router_mhi_write() - Write to XPRT
+ * @data: Data to be written to the XPRT.
+ * @len: Length of the data to be written.
+ * @xprt: XPRT to which the data has to be written.
+ *
+ * @return: Data Length on success, standard Linux error codes on failure.
+ */
+static int ipc_router_mhi_write(void *data,
+ uint32_t len, struct msm_ipc_router_xprt *xprt)
+{
+ struct rr_packet *pkt = (struct rr_packet *)data;
+ struct sk_buff *ipc_rtr_pkt;
+ struct rr_packet *cloned_pkt;
+ int rc = 0;
+ struct ipc_router_mhi_xprt *mhi_xprtp =
+ container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ if (!pkt)
+ return -EINVAL;
+
+ if (!len || pkt->length != len)
+ return -EINVAL;
+
+ cloned_pkt = clone_pkt(pkt);
+ if (!cloned_pkt) {
+ IPC_RTR_ERR("%s: Error cloning packet for tx\n", __func__);
+ return -ENOMEM;
+ }
+ D("%s: Ready to write %d bytes\n", __func__, len);
+ skb_queue_walk(cloned_pkt->pkt_fragment_q, ipc_rtr_pkt) {
+ rc = ipc_router_mhi_write_skb(mhi_xprtp, ipc_rtr_pkt,
+ cloned_pkt);
+ if (rc < 0) {
+ IPC_RTR_ERR("%s: Error writing SKB %d\n",
+ __func__, rc);
+ break;
+ }
+ }
+
+ kref_put(&cloned_pkt->ref, ipc_router_mhi_release_pkt);
+ if (rc < 0)
+ return rc;
+ else
+ return len;
+}
+
+/**
+ * mhi_xprt_read_data() - Read work to read from the XPRT
+ * @work: Read work to be executed.
+ *
+ * This function is a read work item queued on a XPRT specific workqueue.
+ * The work parameter contains information regarding the XPRT on which this
+ * read work has to be performed. The work item keeps reading from the MHI
+ * endpoint until the endpoint returns an error.
+ */
+static void mhi_xprt_read_data(struct work_struct *work)
+{
+ void *data_addr;
+ ssize_t data_sz;
+ void *skb_data;
+ struct sk_buff *skb;
+ struct ipc_router_mhi_xprt *mhi_xprtp =
+ container_of(work, struct ipc_router_mhi_xprt, read_work);
+ struct mhi_result result;
+ int rc;
+
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ if (!mhi_xprtp->ch_hndl.in_chan_enabled) {
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ if (mhi_xprtp->in_pkt)
+ release_pkt(mhi_xprtp->in_pkt);
+ mhi_xprtp->in_pkt = NULL;
+ mhi_xprtp->ch_hndl.bytes_to_rx = 0;
+ IPC_RTR_ERR("%s: %s channel reset\n",
+ __func__, mhi_xprtp->xprt.name);
+ return;
+ }
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+
+ while (1) {
+ rc = mhi_poll_inbound(mhi_xprtp->ch_hndl.in_handle, &result);
+ if (rc || !result.buf_addr || !result.bytes_xferd) {
+ if (rc != -ENODATA)
+ IPC_RTR_ERR("%s: Poll failed %s:%d:%p:%u\n",
+ __func__, mhi_xprtp->xprt_name, rc,
+ result.buf_addr,
+ (unsigned int) result.bytes_xferd);
+ break;
+ }
+ data_addr = result.buf_addr;
+ data_sz = result.bytes_xferd;
+
+ /* Create a new rr_packet, if first fragment */
+ if (!mhi_xprtp->ch_hndl.bytes_to_rx) {
+ mhi_xprtp->in_pkt = create_pkt(NULL);
+ if (!mhi_xprtp->in_pkt) {
+ IPC_RTR_ERR("%s: Couldn't alloc rr_packet\n",
+ __func__);
+ return;
+ }
+ D("%s: Allocated rr_packet\n", __func__);
+ }
+
+ skb_data = ipc_router_mhi_xprt_find_addr_map(
+ &mhi_xprtp->rx_addr_map_list,
+ &mhi_xprtp->rx_addr_map_list_lock,
+ data_addr);
+
+ if (!skb_data)
+ continue;
+ mutex_lock(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ skb_queue_walk(&mhi_xprtp->ch_hndl.in_skbq, skb) {
+ if (skb->data == skb_data) {
+ skb_unlink(skb, &mhi_xprtp->ch_hndl.in_skbq);
+ break;
+ }
+ }
+ mutex_unlock(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ skb_put(skb, data_sz);
+ skb_queue_tail(mhi_xprtp->in_pkt->pkt_fragment_q, skb);
+ mhi_xprtp->in_pkt->length += data_sz;
+ if (!mhi_xprtp->ch_hndl.bytes_to_rx)
+ mhi_xprtp->ch_hndl.bytes_to_rx =
+ ipc_router_peek_pkt_size(skb_data) - data_sz;
+ else
+ mhi_xprtp->ch_hndl.bytes_to_rx -= data_sz;
+ /* Packet is completely read, so notify to router */
+ if (!mhi_xprtp->ch_hndl.bytes_to_rx) {
+ D("%s: Packet size read %d\n",
+ __func__, mhi_xprtp->in_pkt->length);
+ msm_ipc_router_xprt_notify(&mhi_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_DATA,
+ (void *)mhi_xprtp->in_pkt);
+ release_pkt(mhi_xprtp->in_pkt);
+ mhi_xprtp->in_pkt = NULL;
+ }
+
+ while (mhi_xprt_queue_in_buffers(mhi_xprtp, 1) != 1 &&
+ mhi_xprtp->ch_hndl.in_chan_enabled)
+ msleep(100);
+ }
+}
+
+/**
+ * ipc_router_mhi_close() - Close the XPRT
+ * @xprt: XPRT which needs to be closed.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int ipc_router_mhi_close(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ mhi_xprtp = container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ mhi_xprtp->ch_hndl.out_chan_enabled = false;
+ mhi_xprtp->ch_hndl.in_chan_enabled = false;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ flush_workqueue(mhi_xprtp->wq);
+ mhi_close_channel(mhi_xprtp->ch_hndl.in_handle);
+ mhi_close_channel(mhi_xprtp->ch_hndl.out_handle);
+ return 0;
+}
+
+/**
+ * mhi_xprt_sft_close_done() - Completion of XPRT reset
+ * @xprt: XPRT on which the reset operation is complete.
+ *
+ * This function is used by IPC Router to signal this MHI XPRT Abstraction
+ * Layer (XAL) that the XPRT reset has been completely handled by IPC Router.
+ */
+static void mhi_xprt_sft_close_done(struct msm_ipc_router_xprt *xprt)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp =
+ container_of(xprt, struct ipc_router_mhi_xprt, xprt);
+
+ complete_all(&mhi_xprtp->sft_close_complete);
+}
+
+/**
+ * mhi_xprt_enable_event() - Enable the MHI link for communication
+ * @work: Work containing a reference to the link to be enabled.
+ *
+ * This work is scheduled when the MHI link to the peripheral is up.
+ */
+static void mhi_xprt_enable_event(struct work_struct *work)
+{
+ struct ipc_router_mhi_xprt_work *xprt_work =
+ container_of(work, struct ipc_router_mhi_xprt_work, work);
+ struct ipc_router_mhi_xprt *mhi_xprtp = xprt_work->mhi_xprtp;
+ int rc;
+ bool notify = false;
+
+ if (xprt_work->chan_id == mhi_xprtp->ch_hndl.out_chan_id) {
+ rc = mhi_open_channel(mhi_xprtp->ch_hndl.out_handle);
+ if (rc) {
+ IPC_RTR_ERR("%s Failed to open chan 0x%x, rc %d\n",
+ __func__, mhi_xprtp->ch_hndl.out_chan_id, rc);
+ goto out_enable_event;
+ }
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ mhi_xprtp->ch_hndl.out_chan_enabled = true;
+ notify = mhi_xprtp->ch_hndl.out_chan_enabled &&
+ mhi_xprtp->ch_hndl.in_chan_enabled;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ } else if (xprt_work->chan_id == mhi_xprtp->ch_hndl.in_chan_id) {
+ rc = mhi_open_channel(mhi_xprtp->ch_hndl.in_handle);
+ if (rc) {
+ IPC_RTR_ERR("%s Failed to open chan 0x%x, rc %d\n",
+ __func__, mhi_xprtp->ch_hndl.in_chan_id, rc);
+ goto out_enable_event;
+ }
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ mhi_xprtp->ch_hndl.in_chan_enabled = true;
+ notify = mhi_xprtp->ch_hndl.out_chan_enabled &&
+ mhi_xprtp->ch_hndl.in_chan_enabled;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ }
+
+ /* Register the XPRT before receiving any data */
+ if (notify) {
+ msm_ipc_router_xprt_notify(&mhi_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_OPEN, NULL);
+ D("%s: Notified IPC Router of %s OPEN\n",
+ __func__, mhi_xprtp->xprt.name);
+ }
+
+ if (xprt_work->chan_id != mhi_xprtp->ch_hndl.in_chan_id)
+ goto out_enable_event;
+
+ rc = mhi_xprt_queue_in_buffers(mhi_xprtp, mhi_xprtp->ch_hndl.num_trbs);
+ if (rc > 0)
+ goto out_enable_event;
+
+ IPC_RTR_ERR("%s: Could not queue at least one TRB\n", __func__);
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ mhi_xprtp->ch_hndl.in_chan_enabled = false;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ if (notify)
+ msm_ipc_router_xprt_notify(&mhi_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_CLOSE, NULL);
+ mhi_close_channel(mhi_xprtp->ch_hndl.in_handle);
+out_enable_event:
+ kfree(xprt_work);
+}
+
+/**
+ * mhi_xprt_disable_event() - Disable the MHI link for communication
+ * @work: Work containing a reference to the link to be disabled.
+ *
+ * This work is scheduled when the MHI link to the peripheral is down.
+ */
+static void mhi_xprt_disable_event(struct work_struct *work)
+{
+ struct ipc_router_mhi_xprt_work *xprt_work =
+ container_of(work, struct ipc_router_mhi_xprt_work, work);
+ struct ipc_router_mhi_xprt *mhi_xprtp = xprt_work->mhi_xprtp;
+ bool notify = false;
+
+ if (xprt_work->chan_id == mhi_xprtp->ch_hndl.out_chan_id) {
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ notify = mhi_xprtp->ch_hndl.out_chan_enabled &&
+ mhi_xprtp->ch_hndl.in_chan_enabled;
+ mhi_xprtp->ch_hndl.out_chan_enabled = false;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ wake_up(&mhi_xprtp->write_wait_q);
+ mhi_close_channel(mhi_xprtp->ch_hndl.out_handle);
+ } else if (xprt_work->chan_id == mhi_xprtp->ch_hndl.in_chan_id) {
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ notify = mhi_xprtp->ch_hndl.out_chan_enabled &&
+ mhi_xprtp->ch_hndl.in_chan_enabled;
+ mhi_xprtp->ch_hndl.in_chan_enabled = false;
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ /* Queue a read work to remove any partially read packets */
+ queue_work(mhi_xprtp->wq, &mhi_xprtp->read_work);
+ flush_workqueue(mhi_xprtp->wq);
+ mhi_close_channel(mhi_xprtp->ch_hndl.in_handle);
+ }
+
+ if (notify) {
+ init_completion(&mhi_xprtp->sft_close_complete);
+ msm_ipc_router_xprt_notify(&mhi_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_CLOSE, NULL);
+ D("%s: Notified IPC Router of %s CLOSE\n",
+ __func__, mhi_xprtp->xprt.name);
+ wait_for_completion(&mhi_xprtp->sft_close_complete);
+ }
+ kfree(xprt_work);
+}
+
+/**
+ * mhi_xprt_xfer_event() - Function to handle MHI XFER Callbacks
+ * @cb_info: Information containing xfer callback details.
+ *
+ * This function is called when the MHI generates a XFER event to the
+ * IPC Router. This function is used to handle events like tx/rx.
+ */
+static void mhi_xprt_xfer_event(struct mhi_cb_info *cb_info)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+ void *out_addr;
+
+ mhi_xprtp = (struct ipc_router_mhi_xprt *)(cb_info->result->user_data);
+ if (cb_info->chan == mhi_xprtp->ch_hndl.out_chan_id) {
+ out_addr = cb_info->result->buf_addr;
+ mutex_lock(&mhi_xprtp->ch_hndl.state_lock);
+ ipc_router_mhi_xprt_find_addr_map(&mhi_xprtp->tx_addr_map_list,
+ &mhi_xprtp->tx_addr_map_list_lock,
+ out_addr);
+ wake_up(&mhi_xprtp->write_wait_q);
+ mutex_unlock(&mhi_xprtp->ch_hndl.state_lock);
+ } else if (cb_info->chan == mhi_xprtp->ch_hndl.in_chan_id) {
+ queue_work(mhi_xprtp->wq, &mhi_xprtp->read_work);
+ } else {
+ IPC_RTR_ERR("%s: chan_id %d not part of %s\n",
+ __func__, cb_info->chan, mhi_xprtp->xprt_name);
+ }
+}
+
+/**
+ * ipc_router_mhi_xprt_cb() - Callback to notify events on a channel
+ * @cb_info: Information containing the details of callback.
+ *
+ * This function is called by the MHI driver to notify different events
+ * like successful tx/rx, SSR events etc.
+ */
+static void ipc_router_mhi_xprt_cb(struct mhi_cb_info *cb_info)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+ struct ipc_router_mhi_xprt_work *xprt_work;
+
+ if (cb_info->result == NULL) {
+ IPC_RTR_ERR("%s: Result not available in cb_info\n", __func__);
+ return;
+ }
+
+ mhi_xprtp = (struct ipc_router_mhi_xprt *)(cb_info->result->user_data);
+ switch (cb_info->cb_reason) {
+ case MHI_CB_MHI_ENABLED:
+ case MHI_CB_MHI_DISABLED:
+ xprt_work = kmalloc(sizeof(*xprt_work), GFP_KERNEL);
+ if (!xprt_work) {
+ IPC_RTR_ERR("%s: Couldn't handle %d event on %s\n",
+ __func__, cb_info->cb_reason,
+ mhi_xprtp->xprt_name);
+ return;
+ }
+ xprt_work->mhi_xprtp = mhi_xprtp;
+ xprt_work->chan_id = cb_info->chan;
+ if (cb_info->cb_reason == MHI_CB_MHI_ENABLED)
+ INIT_WORK(&xprt_work->work, mhi_xprt_enable_event);
+ else
+ INIT_WORK(&xprt_work->work, mhi_xprt_disable_event);
+ queue_work(mhi_xprtp->wq, &xprt_work->work);
+ break;
+ case MHI_CB_XFER:
+ mhi_xprt_xfer_event(cb_info);
+ break;
+ default:
+ IPC_RTR_ERR("%s: Invalid cb reason %x\n",
+ __func__, cb_info->cb_reason);
+ }
+}
+
+/**
+ * ipc_router_mhi_driver_register() - register for MHI channels
+ *
+ * @mhi_xprtp: pointer to IPC router mhi xprt structure.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when a new XPRT is added.
+ */
+static int ipc_router_mhi_driver_register(
+ struct ipc_router_mhi_xprt *mhi_xprtp)
+{
+ int rc_status;
+
+ rc_status = mhi_register_channel(&mhi_xprtp->ch_hndl.out_handle,
+ mhi_xprtp->ch_hndl.out_chan_id, 0,
+ &mhi_xprtp->ch_hndl.out_clnt_info,
+ (void *)mhi_xprtp);
+ if (rc_status) {
+ IPC_RTR_ERR("%s: Error %d registering out_chan for %s\n",
+ __func__, rc_status, mhi_xprtp->xprt_name);
+ return -EFAULT;
+ }
+
+ rc_status = mhi_register_channel(&mhi_xprtp->ch_hndl.in_handle,
+ mhi_xprtp->ch_hndl.in_chan_id, 0,
+ &mhi_xprtp->ch_hndl.in_clnt_info,
+ (void *)mhi_xprtp);
+ if (rc_status) {
+ mhi_deregister_channel(mhi_xprtp->ch_hndl.out_handle);
+ IPC_RTR_ERR("%s: Error %d registering in_chan for %s\n",
+ __func__, rc_status, mhi_xprtp->xprt_name);
+ return -EFAULT;
+ }
+ return 0;
+}
+
+/**
+ * ipc_router_mhi_config_init() - init MHI xprt configs
+ *
+ * @mhi_xprt_config: pointer to MHI xprt configurations.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called to initialize the MHI XPRT pointer with
+ * the MHI XPRT configurations from device tree.
+ */
+static int ipc_router_mhi_config_init(
+ struct ipc_router_mhi_xprt_config *mhi_xprt_config)
+{
+ struct ipc_router_mhi_xprt *mhi_xprtp;
+ char wq_name[XPRT_NAME_LEN];
+ int rc;
+
+ mhi_xprtp = kzalloc(sizeof(*mhi_xprtp), GFP_KERNEL);
+ if (!mhi_xprtp) {
+ IPC_RTR_ERR("%s: kzalloc() failed for mhi_xprtp:%s\n",
+ __func__, mhi_xprt_config->xprt_name);
+ return -ENOMEM;
+ }
+
+ scnprintf(wq_name, XPRT_NAME_LEN, "MHI_XPRT%x:%x",
+ mhi_xprt_config->out_chan_id, mhi_xprt_config->in_chan_id);
+ mhi_xprtp->wq = create_singlethread_workqueue(wq_name);
+ if (!mhi_xprtp->wq) {
+ IPC_RTR_ERR("%s: %s create WQ failed\n",
+ __func__, mhi_xprt_config->xprt_name);
+ kfree(mhi_xprtp);
+ return -EFAULT;
+ }
+
+ INIT_WORK(&mhi_xprtp->read_work, mhi_xprt_read_data);
+ init_waitqueue_head(&mhi_xprtp->write_wait_q);
+ mhi_xprtp->xprt_version = mhi_xprt_config->xprt_version;
+ strlcpy(mhi_xprtp->xprt_name, mhi_xprt_config->xprt_name,
+ XPRT_NAME_LEN);
+
+ /* Initialize XPRT operations and parameters registered with IPC RTR */
+ mhi_xprtp->xprt.link_id = mhi_xprt_config->link_id;
+ mhi_xprtp->xprt.name = mhi_xprtp->xprt_name;
+ mhi_xprtp->xprt.get_version = ipc_router_mhi_get_xprt_version;
+ mhi_xprtp->xprt.set_version = ipc_router_mhi_set_xprt_version;
+ mhi_xprtp->xprt.get_option = ipc_router_mhi_get_xprt_option;
+ mhi_xprtp->xprt.read_avail = NULL;
+ mhi_xprtp->xprt.read = NULL;
+ mhi_xprtp->xprt.write_avail = ipc_router_mhi_write_avail;
+ mhi_xprtp->xprt.write = ipc_router_mhi_write;
+ mhi_xprtp->xprt.close = ipc_router_mhi_close;
+ mhi_xprtp->xprt.sft_close_done = mhi_xprt_sft_close_done;
+ mhi_xprtp->xprt.priv = NULL;
+
+ /* Initialize channel handle parameters */
+ mhi_xprtp->ch_hndl.out_chan_id = mhi_xprt_config->out_chan_id;
+ mhi_xprtp->ch_hndl.in_chan_id = mhi_xprt_config->in_chan_id;
+ mhi_xprtp->ch_hndl.out_clnt_info.mhi_client_cb = ipc_router_mhi_xprt_cb;
+ mhi_xprtp->ch_hndl.in_clnt_info.mhi_client_cb = ipc_router_mhi_xprt_cb;
+ mutex_init(&mhi_xprtp->ch_hndl.state_lock);
+ mutex_init(&mhi_xprtp->ch_hndl.in_skbq_lock);
+ skb_queue_head_init(&mhi_xprtp->ch_hndl.in_skbq);
+ mhi_xprtp->ch_hndl.max_packet_size = IPC_ROUTER_MHI_XPRT_MAX_PKT_SIZE;
+ mhi_xprtp->ch_hndl.num_trbs = IPC_ROUTER_MHI_XPRT_NUM_TRBS;
+ mhi_xprtp->ch_hndl.mhi_xprtp = mhi_xprtp;
+ INIT_LIST_HEAD(&mhi_xprtp->tx_addr_map_list);
+ mutex_init(&mhi_xprtp->tx_addr_map_list_lock);
+ INIT_LIST_HEAD(&mhi_xprtp->rx_addr_map_list);
+ mutex_init(&mhi_xprtp->rx_addr_map_list_lock);
+
+ rc = ipc_router_mhi_driver_register(mhi_xprtp);
+ return rc;
+}
+
+/**
+ * parse_devicetree() - parse device tree binding
+ *
+ * @node: pointer to device tree node
+ * @mhi_xprt_config: pointer to MHI XPRT configurations
+ *
+ * @return: 0 on success, -ENODEV on failure.
+ */
+static int parse_devicetree(struct device_node *node,
+ struct ipc_router_mhi_xprt_config *mhi_xprt_config)
+{
+ int rc;
+ uint32_t out_chan_id;
+ uint32_t in_chan_id;
+ const char *remote_ss;
+ uint32_t link_id;
+ uint32_t version;
+ char *key;
+
+ key = "qcom,out-chan-id";
+ rc = of_property_read_u32(node, key, &out_chan_id);
+ if (rc)
+ goto error;
+ mhi_xprt_config->out_chan_id = out_chan_id;
+
+ key = "qcom,in-chan-id";
+ rc = of_property_read_u32(node, key, &in_chan_id);
+ if (rc)
+ goto error;
+ mhi_xprt_config->in_chan_id = in_chan_id;
+
+ key = "qcom,xprt-remote";
+ remote_ss = of_get_property(node, key, NULL);
+ if (!remote_ss)
+ goto error;
+
+ key = "qcom,xprt-linkid";
+ rc = of_property_read_u32(node, key, &link_id);
+ if (rc)
+ goto error;
+ mhi_xprt_config->link_id = link_id;
+
+ key = "qcom,xprt-version";
+ rc = of_property_read_u32(node, key, &version);
+ if (rc)
+ goto error;
+ mhi_xprt_config->xprt_version = version;
+
+ scnprintf(mhi_xprt_config->xprt_name, XPRT_NAME_LEN,
+ "IPCRTR_MHI%x:%x_%s",
+ out_chan_id, in_chan_id, remote_ss);
+
+ return 0;
+error:
+ IPC_RTR_ERR("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
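+/*
+ * Illustrative device tree node this parser expects. The property names
+ * and the compatible string match the binding above; the channel IDs,
+ * remote subsystem name, link ID and version values below are
+ * hypothetical examples only:
+ *
+ *	qcom,ipc_router_mhi_xprt {
+ *		compatible = "qcom,ipc_router_mhi_xprt";
+ *		qcom,out-chan-id = <34>;
+ *		qcom,in-chan-id = <35>;
+ *		qcom,xprt-remote = "external-modem";
+ *		qcom,xprt-linkid = <2>;
+ *		qcom,xprt-version = <3>;
+ *	};
+ */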
+
+/**
+ * ipc_router_mhi_xprt_probe() - Probe an MHI xprt
+ * @pdev: Platform device corresponding to MHI xprt.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying device tree driver registers
+ * a platform device, mapped to an MHI transport.
+ */
+static int ipc_router_mhi_xprt_probe(struct platform_device *pdev)
+{
+ int rc = -ENODEV;
+ struct ipc_router_mhi_xprt_config mhi_xprt_config;
+
+ if (pdev && pdev->dev.of_node) {
+ rc = parse_devicetree(pdev->dev.of_node, &mhi_xprt_config);
+ if (rc) {
+ IPC_RTR_ERR("%s: failed to parse device tree\n",
+ __func__);
+ return rc;
+ }
+
+ rc = ipc_router_mhi_config_init(&mhi_xprt_config);
+ if (rc) {
+ IPC_RTR_ERR("%s: init failed\n", __func__);
+ return rc;
+ }
+ }
+ return rc;
+}
+
+static const struct of_device_id ipc_router_mhi_xprt_match_table[] = {
+ { .compatible = "qcom,ipc_router_mhi_xprt" },
+ {},
+};
+
+static struct platform_driver ipc_router_mhi_xprt_driver = {
+ .probe = ipc_router_mhi_xprt_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = ipc_router_mhi_xprt_match_table,
+ },
+};
+
+static int __init ipc_router_mhi_xprt_init(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&ipc_router_mhi_xprt_driver);
+ if (rc) {
+ IPC_RTR_ERR("%s: ipc_router_mhi_xprt_driver reg. failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+ return 0;
+}
+
+module_init(ipc_router_mhi_xprt_init);
+MODULE_DESCRIPTION("IPC Router MHI XPRT");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/ipc_router_smd_xprt.c b/drivers/soc/qcom/ipc_router_smd_xprt.c
new file mode 100644
index 0000000..513689a
--- /dev/null
+++ b/drivers/soc/qcom/ipc_router_smd_xprt.c
@@ -0,0 +1,867 @@
+/* Copyright (c) 2011-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * IPC ROUTER SMD XPRT module.
+ */
+#define DEBUG
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <linux/of.h>
+#include <linux/ipc_router_xprt.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h>
+#include <linux/sched.h>
+
+#include <soc/qcom/smd.h>
+#include <soc/qcom/smsm.h>
+#include <soc/qcom/subsystem_restart.h>
+
+static int msm_ipc_router_smd_xprt_debug_mask;
+module_param_named(debug_mask, msm_ipc_router_smd_xprt_debug_mask,
+ int, 0664);
+
+#if defined(DEBUG)
+#define D(x...) do { \
+if (msm_ipc_router_smd_xprt_debug_mask) \
+ pr_info(x); \
+} while (0)
+#else
+#define D(x...) do { } while (0)
+#endif
+
+#define MIN_FRAG_SZ (IPC_ROUTER_HDR_SIZE + sizeof(union rr_control_msg))
+
+#define NUM_SMD_XPRTS 4
+#define XPRT_NAME_LEN (SMD_MAX_CH_NAME_LEN + 12)
+
+/**
+ * msm_ipc_router_smd_xprt - IPC Router's SMD XPRT structure
+ * @list: IPC router's SMD XPRTs list.
+ * @ch_name: Name of the SMD channel exported by the SMD driver.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @edge: SMD channel edge.
+ * @driver: Platform driver registered by this XPRT.
+ * @xprt: IPC Router XPRT structure to contain XPRT specific info.
+ * @channel: SMD channel specific info.
+ * @smd_xprt_wq: Workqueue to queue read & other XPRT related works.
+ * @write_avail_wait_q: wait queue for writer thread.
+ * @in_pkt: Pointer to any partially read packet.
+ * @is_partial_in_pkt: Flag indicating a partially read packet is pending.
+ * @read_work: Read Work to perform read operation from SMD.
+ * @ss_reset_lock: Lock to protect access to the ss_reset flag.
+ * @ss_reset: flag used to check SSR state.
+ * @pil: handle to the remote subsystem.
+ * @sft_close_complete: Variable to indicate completion of SSR handling
+ * by IPC Router.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @xprt_option: XPRT specific options to be handled by IPC Router.
+ * @disable_pil_loading: Disable PIL Loading of the subsystem.
+ */
+struct msm_ipc_router_smd_xprt {
+ struct list_head list;
+ char ch_name[SMD_MAX_CH_NAME_LEN];
+ char xprt_name[XPRT_NAME_LEN];
+ uint32_t edge;
+ struct platform_driver driver;
+ struct msm_ipc_router_xprt xprt;
+ smd_channel_t *channel;
+ struct workqueue_struct *smd_xprt_wq;
+ wait_queue_head_t write_avail_wait_q;
+ struct rr_packet *in_pkt;
+ int is_partial_in_pkt;
+ struct delayed_work read_work;
+ spinlock_t ss_reset_lock; /* Subsystem reset lock */
+ int ss_reset;
+ void *pil;
+ struct completion sft_close_complete;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+ bool disable_pil_loading;
+};
+
+struct msm_ipc_router_smd_xprt_work {
+ struct msm_ipc_router_xprt *xprt;
+ struct work_struct work;
+};
+
+static void smd_xprt_read_data(struct work_struct *work);
+static void smd_xprt_open_event(struct work_struct *work);
+static void smd_xprt_close_event(struct work_struct *work);
+
+/**
+ * msm_ipc_router_smd_xprt_config - Configuration info of each SMD XPRT
+ * @ch_name: Name of the SMD endpoint exported by SMD driver.
+ * @xprt_name: Name of the XPRT to be registered with IPC Router.
+ * @edge: ID to differentiate among multiple SMD endpoints.
+ * @link_id: Network cluster ID to which this XPRT belongs.
+ * @xprt_version: IPC Router header version supported by this XPRT.
+ * @disable_pil_loading: Disable PIL Loading of the subsystem.
+ */
+struct msm_ipc_router_smd_xprt_config {
+ char ch_name[SMD_MAX_CH_NAME_LEN];
+ char xprt_name[XPRT_NAME_LEN];
+ uint32_t edge;
+ uint32_t link_id;
+ unsigned int xprt_version;
+ unsigned int xprt_option;
+ bool disable_pil_loading;
+};
+
+struct msm_ipc_router_smd_xprt_config smd_xprt_cfg[] = {
+ {"RPCRPY_CNTL", "ipc_rtr_smd_rpcrpy_cntl", SMD_APPS_MODEM, 1, 1},
+ {"IPCRTR", "ipc_rtr_smd_ipcrtr", SMD_APPS_MODEM, 1, 1},
+ {"IPCRTR", "ipc_rtr_q6_ipcrtr", SMD_APPS_QDSP, 1, 1},
+ {"IPCRTR", "ipc_rtr_wcnss_ipcrtr", SMD_APPS_WCNSS, 1, 1},
+};
+
+#define MODULE_NAME "ipc_router_smd_xprt"
+#define IPC_ROUTER_SMD_XPRT_WAIT_TIMEOUT 3000
+static int ipc_router_smd_xprt_probe_done;
+static struct delayed_work ipc_router_smd_xprt_probe_work;
+static DEFINE_MUTEX(smd_remote_xprt_list_lock_lha1);
+static LIST_HEAD(smd_remote_xprt_list);
+
+static bool is_pil_loading_disabled(uint32_t edge);
+
+/**
+ * ipc_router_smd_set_xprt_version() - Set IPC Router header version
+ * in the transport
+ * @xprt: Reference to the transport structure.
+ * @version: The version to be set in transport.
+ */
+static void ipc_router_smd_set_xprt_version(
+ struct msm_ipc_router_xprt *xprt, unsigned int version)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ if (!xprt)
+ return;
+ smd_xprtp = container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+ smd_xprtp->xprt_version = version;
+}
+
+static int msm_ipc_router_smd_get_xprt_version(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ smd_xprtp = container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ return (int)smd_xprtp->xprt_version;
+}
+
+static int msm_ipc_router_smd_get_xprt_option(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ if (!xprt)
+ return -EINVAL;
+ smd_xprtp = container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ return (int)smd_xprtp->xprt_option;
+}
+
+static int msm_ipc_router_smd_remote_write_avail(
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ return smd_write_avail(smd_xprtp->channel);
+}
+
+static int msm_ipc_router_smd_remote_write(void *data,
+ uint32_t len,
+ struct msm_ipc_router_xprt *xprt)
+{
+ struct rr_packet *pkt = (struct rr_packet *)data;
+ struct sk_buff *ipc_rtr_pkt;
+ int offset, sz_written = 0;
+ int ret, num_retries = 0;
+ unsigned long flags;
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ if (!pkt)
+ return -EINVAL;
+
+ if (!len || pkt->length != len)
+ return -EINVAL;
+
+ do {
+ spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
+ if (smd_xprtp->ss_reset) {
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock,
+ flags);
+ IPC_RTR_ERR("%s: %s chnl reset\n",
+ __func__, xprt->name);
+ return -ENETRESET;
+ }
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock, flags);
+ ret = smd_write_start(smd_xprtp->channel, len);
+ if (ret < 0 && num_retries >= 5) {
+ IPC_RTR_ERR("%s: Error %d @smd_write_start for %s\n",
+ __func__, ret, xprt->name);
+ return ret;
+ } else if (ret < 0) {
+ msleep(50);
+ num_retries++;
+ }
+ } while (ret < 0);
+
+ D("%s: Ready to write %d bytes\n", __func__, len);
+ skb_queue_walk(pkt->pkt_fragment_q, ipc_rtr_pkt) {
+ offset = 0;
+ while (offset < ipc_rtr_pkt->len) {
+ if (!smd_write_segment_avail(smd_xprtp->channel))
+ smd_enable_read_intr(smd_xprtp->channel);
+
+ wait_event(smd_xprtp->write_avail_wait_q,
+ (smd_write_segment_avail(smd_xprtp->channel) ||
+ smd_xprtp->ss_reset));
+ smd_disable_read_intr(smd_xprtp->channel);
+ spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
+ if (smd_xprtp->ss_reset) {
+ spin_unlock_irqrestore(
+ &smd_xprtp->ss_reset_lock, flags);
+ IPC_RTR_ERR("%s: %s chnl reset\n",
+ __func__, xprt->name);
+ return -ENETRESET;
+ }
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock,
+ flags);
+
+ sz_written = smd_write_segment(smd_xprtp->channel,
+ ipc_rtr_pkt->data + offset,
+ (ipc_rtr_pkt->len - offset));
+ offset += sz_written;
+ sz_written = 0;
+ }
+ D("%s: Wrote %d bytes over %s\n",
+ __func__, offset, xprt->name);
+ }
+
+ if (!smd_write_end(smd_xprtp->channel))
+ D("%s: Finished writing\n", __func__);
+ return len;
+}
+
+static int msm_ipc_router_smd_remote_close(struct msm_ipc_router_xprt *xprt)
+{
+ int rc;
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ rc = smd_close(smd_xprtp->channel);
+ if (smd_xprtp->pil) {
+ subsystem_put(smd_xprtp->pil);
+ smd_xprtp->pil = NULL;
+ }
+ return rc;
+}
+
+static void smd_xprt_sft_close_done(struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt, struct msm_ipc_router_smd_xprt, xprt);
+
+ complete_all(&smd_xprtp->sft_close_complete);
+}
+
+static void smd_xprt_read_data(struct work_struct *work)
+{
+ int pkt_size, sz_read, sz;
+ struct sk_buff *ipc_rtr_pkt;
+ void *data;
+ unsigned long flags;
+ struct delayed_work *rwork = to_delayed_work(work);
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(rwork, struct msm_ipc_router_smd_xprt, read_work);
+
+ spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
+ if (smd_xprtp->ss_reset) {
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock, flags);
+ if (smd_xprtp->in_pkt) {
+ release_pkt(smd_xprtp->in_pkt);
+ smd_xprtp->in_pkt = NULL;
+ }
+ smd_xprtp->is_partial_in_pkt = 0;
+ IPC_RTR_ERR("%s: %s channel reset\n",
+ __func__, smd_xprtp->xprt.name);
+ return;
+ }
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock, flags);
+
+ D("%s pkt_size: %d, read_avail: %d\n", __func__,
+ smd_cur_packet_size(smd_xprtp->channel),
+ smd_read_avail(smd_xprtp->channel));
+ while ((pkt_size = smd_cur_packet_size(smd_xprtp->channel)) &&
+ smd_read_avail(smd_xprtp->channel)) {
+ if (!smd_xprtp->is_partial_in_pkt) {
+ smd_xprtp->in_pkt = create_pkt(NULL);
+ if (!smd_xprtp->in_pkt) {
+ IPC_RTR_ERR("%s: Couldn't alloc rr_packet\n",
+ __func__);
+ return;
+ }
+ smd_xprtp->is_partial_in_pkt = 1;
+ D("%s: Allocated rr_packet\n", __func__);
+ }
+
+ if (((pkt_size >= MIN_FRAG_SZ) &&
+ (smd_read_avail(smd_xprtp->channel) < MIN_FRAG_SZ)) ||
+ ((pkt_size < MIN_FRAG_SZ) &&
+ (smd_read_avail(smd_xprtp->channel) < pkt_size)))
+ return;
+
+ sz = smd_read_avail(smd_xprtp->channel);
+ do {
+ ipc_rtr_pkt = alloc_skb(sz, GFP_KERNEL);
+ if (!ipc_rtr_pkt) {
+ if (sz <= (PAGE_SIZE/2)) {
+ queue_delayed_work(
+ smd_xprtp->smd_xprt_wq,
+ &smd_xprtp->read_work,
+ msecs_to_jiffies(100));
+ return;
+ }
+ sz = sz / 2;
+ }
+ } while (!ipc_rtr_pkt);
+
+ D("%s: Allocated the sk_buff of size %d\n", __func__, sz);
+ data = skb_put(ipc_rtr_pkt, sz);
+ sz_read = smd_read(smd_xprtp->channel, data, sz);
+ if (sz_read != sz) {
+ IPC_RTR_ERR("%s: Couldn't read %s completely\n",
+ __func__, smd_xprtp->xprt.name);
+ kfree_skb(ipc_rtr_pkt);
+ release_pkt(smd_xprtp->in_pkt);
+ smd_xprtp->is_partial_in_pkt = 0;
+ return;
+ }
+ skb_queue_tail(smd_xprtp->in_pkt->pkt_fragment_q, ipc_rtr_pkt);
+ smd_xprtp->in_pkt->length += sz_read;
+ if (sz_read != pkt_size)
+ smd_xprtp->is_partial_in_pkt = 1;
+ else
+ smd_xprtp->is_partial_in_pkt = 0;
+
+ if (!smd_xprtp->is_partial_in_pkt) {
+ D("%s: Packet size read %d\n",
+ __func__, smd_xprtp->in_pkt->length);
+ msm_ipc_router_xprt_notify(&smd_xprtp->xprt,
+ IPC_ROUTER_XPRT_EVENT_DATA,
+ (void *)smd_xprtp->in_pkt);
+ release_pkt(smd_xprtp->in_pkt);
+ smd_xprtp->in_pkt = NULL;
+ }
+ }
+}
+
+static void smd_xprt_open_event(struct work_struct *work)
+{
+ struct msm_ipc_router_smd_xprt_work *xprt_work =
+ container_of(work, struct msm_ipc_router_smd_xprt_work, work);
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt_work->xprt,
+ struct msm_ipc_router_smd_xprt, xprt);
+ unsigned long flags;
+
+ spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
+ smd_xprtp->ss_reset = 0;
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock, flags);
+ msm_ipc_router_xprt_notify(xprt_work->xprt,
+ IPC_ROUTER_XPRT_EVENT_OPEN, NULL);
+ D("%s: Notified IPC Router of %s OPEN\n",
+ __func__, xprt_work->xprt->name);
+ kfree(xprt_work);
+}
+
+static void smd_xprt_close_event(struct work_struct *work)
+{
+ struct msm_ipc_router_smd_xprt_work *xprt_work =
+ container_of(work, struct msm_ipc_router_smd_xprt_work, work);
+ struct msm_ipc_router_smd_xprt *smd_xprtp =
+ container_of(xprt_work->xprt,
+ struct msm_ipc_router_smd_xprt, xprt);
+
+ if (smd_xprtp->in_pkt) {
+ release_pkt(smd_xprtp->in_pkt);
+ smd_xprtp->in_pkt = NULL;
+ }
+ smd_xprtp->is_partial_in_pkt = 0;
+ init_completion(&smd_xprtp->sft_close_complete);
+ msm_ipc_router_xprt_notify(xprt_work->xprt,
+ IPC_ROUTER_XPRT_EVENT_CLOSE, NULL);
+ D("%s: Notified IPC Router of %s CLOSE\n",
+ __func__, xprt_work->xprt->name);
+ wait_for_completion(&smd_xprtp->sft_close_complete);
+ kfree(xprt_work);
+}
+
+static void msm_ipc_router_smd_remote_notify(void *_dev, unsigned int event)
+{
+ unsigned long flags;
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+ struct msm_ipc_router_smd_xprt_work *xprt_work;
+
+ smd_xprtp = (struct msm_ipc_router_smd_xprt *)_dev;
+ if (!smd_xprtp)
+ return;
+
+ switch (event) {
+ case SMD_EVENT_DATA:
+ if (smd_read_avail(smd_xprtp->channel))
+ queue_delayed_work(smd_xprtp->smd_xprt_wq,
+ &smd_xprtp->read_work, 0);
+ if (smd_write_segment_avail(smd_xprtp->channel))
+ wake_up(&smd_xprtp->write_avail_wait_q);
+ break;
+
+ case SMD_EVENT_OPEN:
+ xprt_work = kmalloc(sizeof(struct msm_ipc_router_smd_xprt_work),
+ GFP_ATOMIC);
+ if (!xprt_work) {
+ IPC_RTR_ERR(
+ "%s: Couldn't notify %d event to IPC Router\n",
+ __func__, event);
+ return;
+ }
+ xprt_work->xprt = &smd_xprtp->xprt;
+ INIT_WORK(&xprt_work->work, smd_xprt_open_event);
+ queue_work(smd_xprtp->smd_xprt_wq, &xprt_work->work);
+ break;
+
+ case SMD_EVENT_CLOSE:
+ spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
+ smd_xprtp->ss_reset = 1;
+ spin_unlock_irqrestore(&smd_xprtp->ss_reset_lock, flags);
+ wake_up(&smd_xprtp->write_avail_wait_q);
+ xprt_work = kmalloc(sizeof(struct msm_ipc_router_smd_xprt_work),
+ GFP_ATOMIC);
+ if (!xprt_work) {
+ IPC_RTR_ERR(
+ "%s: Couldn't notify %d event to IPC Router\n",
+ __func__, event);
+ return;
+ }
+ xprt_work->xprt = &smd_xprtp->xprt;
+ INIT_WORK(&xprt_work->work, smd_xprt_close_event);
+ queue_work(smd_xprtp->smd_xprt_wq, &xprt_work->work);
+ break;
+ }
+}
+
+static void *msm_ipc_load_subsystem(uint32_t edge)
+{
+ void *pil = NULL;
+ const char *peripheral;
+ bool loading_disabled;
+
+ loading_disabled = is_pil_loading_disabled(edge);
+ peripheral = smd_edge_to_pil_str(edge);
+ if (!IS_ERR_OR_NULL(peripheral) && !loading_disabled) {
+ pil = subsystem_get(peripheral);
+ if (IS_ERR(pil)) {
+ IPC_RTR_ERR("%s: Failed to load %s\n",
+ __func__, peripheral);
+ pil = NULL;
+ }
+ }
+ return pil;
+}
+
+/**
+ * find_smd_xprt_list() - Find the xprt item for a specific SMD endpoint
+ * @pdev: Platform device registered by the underlying SMD driver
+ *
+ * @return: pointer to msm_ipc_router_smd_xprt if a matching endpoint is found,
+ * else NULL.
+ *
+ * This function finds the specific xprt item in the global xprt list.
+ */
+static struct msm_ipc_router_smd_xprt *
+ find_smd_xprt_list(struct platform_device *pdev)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ list_for_each_entry(smd_xprtp, &smd_remote_xprt_list, list) {
+ if (!strcmp(pdev->name, smd_xprtp->ch_name)
+ && (pdev->id == smd_xprtp->edge)) {
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+ return smd_xprtp;
+ }
+ }
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+ return NULL;
+}
+
+/**
+ * is_pil_loading_disabled() - Check if PIL loading of a subsystem is disabled
+ * @edge: Edge that points to the remote subsystem.
+ *
+ * @return: true if disabled, false if enabled.
+ */
+static bool is_pil_loading_disabled(uint32_t edge)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ list_for_each_entry(smd_xprtp, &smd_remote_xprt_list, list) {
+ if (smd_xprtp->edge == edge) {
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+ return smd_xprtp->disable_pil_loading;
+ }
+ }
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+ return true;
+}
+
+/**
+ * msm_ipc_router_smd_remote_probe() - Probe an SMD endpoint
+ *
+ * @pdev: Platform device corresponding to SMD endpoint.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying SMD driver registers
+ * a platform device, mapped to SMD endpoint.
+ */
+static int msm_ipc_router_smd_remote_probe(struct platform_device *pdev)
+{
+ int rc;
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ smd_xprtp = find_smd_xprt_list(pdev);
+ if (!smd_xprtp) {
+ IPC_RTR_ERR("%s: No device with name %s\n",
+ __func__, pdev->name);
+ return -EPROBE_DEFER;
+ }
+ if (strcmp(pdev->name, smd_xprtp->ch_name)
+ || (pdev->id != smd_xprtp->edge)) {
+ IPC_RTR_ERR("%s: Mismatched item name:%s edge:%d\n",
+ __func__, smd_xprtp->ch_name, smd_xprtp->edge);
+ return -ENODEV;
+ }
+ smd_xprtp->smd_xprt_wq =
+ create_singlethread_workqueue(pdev->name);
+ if (!smd_xprtp->smd_xprt_wq) {
+ IPC_RTR_ERR("%s: WQ creation failed for %s\n",
+ __func__, pdev->name);
+ return -ENOMEM;
+ }
+
+ smd_xprtp->pil = msm_ipc_load_subsystem(
+ smd_xprtp->edge);
+ rc = smd_named_open_on_edge(smd_xprtp->ch_name,
+ smd_xprtp->edge,
+ &smd_xprtp->channel,
+ smd_xprtp,
+ msm_ipc_router_smd_remote_notify);
+ if (rc < 0) {
+ IPC_RTR_ERR("%s: Channel open failed for %s\n",
+ __func__, smd_xprtp->ch_name);
+ if (smd_xprtp->pil) {
+ subsystem_put(smd_xprtp->pil);
+ smd_xprtp->pil = NULL;
+ }
+ destroy_workqueue(smd_xprtp->smd_xprt_wq);
+ return rc;
+ }
+
+ smd_disable_read_intr(smd_xprtp->channel);
+
+ smsm_change_state(SMSM_APPS_STATE, 0, SMSM_RPCINIT);
+
+ return 0;
+}
+
+/**
+ * msm_ipc_router_smd_driver_register() - register SMD XPRT drivers
+ *
+ * @smd_xprtp: pointer to IPC Router SMD XPRT structure.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when a new XPRT is added, to register a
+ * platform driver for that XPRT.
+ */
+static int msm_ipc_router_smd_driver_register(
+ struct msm_ipc_router_smd_xprt *smd_xprtp)
+{
+ int ret;
+ struct msm_ipc_router_smd_xprt *item;
+ unsigned int already_registered = 0;
+
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ list_for_each_entry(item, &smd_remote_xprt_list, list) {
+ if (!strcmp(smd_xprtp->ch_name, item->ch_name))
+ already_registered = 1;
+ }
+ list_add(&smd_xprtp->list, &smd_remote_xprt_list);
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+
+ if (!already_registered) {
+ smd_xprtp->driver.driver.name = smd_xprtp->ch_name;
+ smd_xprtp->driver.driver.owner = THIS_MODULE;
+ smd_xprtp->driver.probe = msm_ipc_router_smd_remote_probe;
+
+ ret = platform_driver_register(&smd_xprtp->driver);
+ if (ret) {
+ IPC_RTR_ERR(
+ "%s: Failed to register platform driver [%s]\n",
+ __func__, smd_xprtp->ch_name);
+ return ret;
+ }
+ } else {
+ IPC_RTR_ERR("%s: Driver already registered for %s\n",
+ __func__, smd_xprtp->ch_name);
+ }
+ return 0;
+}
+
+/**
+ * msm_ipc_router_smd_config_init() - init SMD xprt configs
+ *
+ * @smd_xprt_config: pointer to SMD xprt configurations.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called to initialize the SMD XPRT pointer with
+ * the SMD XPRT configurations either from device tree or static arrays.
+ */
+static int msm_ipc_router_smd_config_init(
+ struct msm_ipc_router_smd_xprt_config *smd_xprt_config)
+{
+ struct msm_ipc_router_smd_xprt *smd_xprtp;
+
+ smd_xprtp = kzalloc(sizeof(struct msm_ipc_router_smd_xprt), GFP_KERNEL);
+ if (!smd_xprtp) {
+ IPC_RTR_ERR("%s: kzalloc() failed for smd_xprtp id:%s\n",
+ __func__, smd_xprt_config->ch_name);
+ return -ENOMEM;
+ }
+
+ smd_xprtp->xprt.link_id = smd_xprt_config->link_id;
+ smd_xprtp->xprt_version = smd_xprt_config->xprt_version;
+ smd_xprtp->edge = smd_xprt_config->edge;
+ smd_xprtp->xprt_option = smd_xprt_config->xprt_option;
+ smd_xprtp->disable_pil_loading = smd_xprt_config->disable_pil_loading;
+
+ strlcpy(smd_xprtp->ch_name, smd_xprt_config->ch_name,
+ SMD_MAX_CH_NAME_LEN);
+
+ strlcpy(smd_xprtp->xprt_name, smd_xprt_config->xprt_name,
+ XPRT_NAME_LEN);
+ smd_xprtp->xprt.name = smd_xprtp->xprt_name;
+
+ smd_xprtp->xprt.set_version =
+ ipc_router_smd_set_xprt_version;
+ smd_xprtp->xprt.get_version =
+ msm_ipc_router_smd_get_xprt_version;
+ smd_xprtp->xprt.get_option =
+ msm_ipc_router_smd_get_xprt_option;
+ smd_xprtp->xprt.read_avail = NULL;
+ smd_xprtp->xprt.read = NULL;
+ smd_xprtp->xprt.write_avail =
+ msm_ipc_router_smd_remote_write_avail;
+ smd_xprtp->xprt.write = msm_ipc_router_smd_remote_write;
+ smd_xprtp->xprt.close = msm_ipc_router_smd_remote_close;
+ smd_xprtp->xprt.sft_close_done = smd_xprt_sft_close_done;
+ smd_xprtp->xprt.priv = NULL;
+
+ init_waitqueue_head(&smd_xprtp->write_avail_wait_q);
+ smd_xprtp->in_pkt = NULL;
+ smd_xprtp->is_partial_in_pkt = 0;
+ INIT_DELAYED_WORK(&smd_xprtp->read_work, smd_xprt_read_data);
+ spin_lock_init(&smd_xprtp->ss_reset_lock);
+ smd_xprtp->ss_reset = 0;
+
+ msm_ipc_router_smd_driver_register(smd_xprtp);
+
+ return 0;
+}
+
+/**
+ * parse_devicetree() - parse device tree binding
+ *
+ * @node: pointer to device tree node
+ * @smd_xprt_config: pointer to SMD XPRT configurations
+ *
+ * @return: 0 on success, -ENODEV on failure.
+ */
+static int parse_devicetree(struct device_node *node,
+ struct msm_ipc_router_smd_xprt_config *smd_xprt_config)
+{
+ int ret;
+ int edge;
+ int link_id;
+ int version;
+ char *key;
+ const char *ch_name;
+ const char *remote_ss;
+
+ key = "qcom,ch-name";
+ ch_name = of_get_property(node, key, NULL);
+ if (!ch_name)
+ goto error;
+ strlcpy(smd_xprt_config->ch_name, ch_name, SMD_MAX_CH_NAME_LEN);
+
+ key = "qcom,xprt-remote";
+ remote_ss = of_get_property(node, key, NULL);
+ if (!remote_ss)
+ goto error;
+ edge = smd_remote_ss_to_edge(remote_ss);
+ if (edge < 0)
+ goto error;
+ smd_xprt_config->edge = edge;
+
+ key = "qcom,xprt-linkid";
+ ret = of_property_read_u32(node, key, &link_id);
+ if (ret)
+ goto error;
+ smd_xprt_config->link_id = link_id;
+
+ key = "qcom,xprt-version";
+ ret = of_property_read_u32(node, key, &version);
+ if (ret)
+ goto error;
+ smd_xprt_config->xprt_version = version;
+
+ key = "qcom,fragmented-data";
+ smd_xprt_config->xprt_option = of_property_read_bool(node, key);
+
+ key = "qcom,disable-pil-loading";
+ smd_xprt_config->disable_pil_loading = of_property_read_bool(node, key);
+
+ scnprintf(smd_xprt_config->xprt_name, XPRT_NAME_LEN, "%s_%s",
+ remote_ss, smd_xprt_config->ch_name);
+
+ return 0;
+
+error:
+ IPC_RTR_ERR("%s: missing key: %s\n", __func__, key);
+ return -ENODEV;
+}
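For reference, a hypothetical device tree node matching the properties that parse_devicetree() reads could look like the fragment below. The node name, channel name, and values are illustrative assumptions, not taken from a shipping device tree; only the property keys are defined by the code above.

```dts
ipc_router_smd_xprt_example {
	compatible = "qcom,ipc_router_smd_xprt";
	qcom,ch-name = "IPCRTR";	/* SMD channel name (string, required) */
	qcom,xprt-remote = "modem";	/* mapped to an edge via smd_remote_ss_to_edge() */
	qcom,xprt-linkid = <1>;		/* u32, required */
	qcom,xprt-version = <1>;	/* u32, required */
	qcom,fragmented-data;		/* optional bool -> xprt_option */
	/* qcom,disable-pil-loading;	optional bool */
};
```

With this node, the derived xprt name would be "modem_IPCRTR" (remote subsystem plus channel name, per the scnprintf() above).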
+
+/**
+ * msm_ipc_router_smd_xprt_probe() - Probe an SMD xprt
+ *
+ * @pdev: Platform device corresponding to SMD xprt.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying device tree driver registers
+ * a platform device, mapped to an SMD transport.
+ */
+static int msm_ipc_router_smd_xprt_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct msm_ipc_router_smd_xprt_config smd_xprt_config;
+
+ if (pdev) {
+ if (pdev->dev.of_node) {
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ ipc_router_smd_xprt_probe_done = 1;
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+
+ ret = parse_devicetree(pdev->dev.of_node,
+ &smd_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s: Failed to parse device tree\n",
+ __func__);
+ return ret;
+ }
+
+ ret = msm_ipc_router_smd_config_init(&smd_xprt_config);
+ if (ret) {
+ IPC_RTR_ERR("%s init failed\n", __func__);
+ return ret;
+ }
+ }
+ }
+ return 0;
+}
+
+/**
+ * ipc_router_smd_xprt_probe_worker() - probe worker for non-DT configurations
+ *
+ * @work: work item to process
+ *
+ * This function is called via schedule_delayed_work after 3 seconds and
+ * checks whether the device tree probe has completed. If it has not, the
+ * default configurations are read from the static array.
+ */
+static void ipc_router_smd_xprt_probe_worker(struct work_struct *work)
+{
+ int i, ret;
+
+ if (WARN_ON(ARRAY_SIZE(smd_xprt_cfg) != NUM_SMD_XPRTS))
+ return;
+
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ if (!ipc_router_smd_xprt_probe_done) {
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+ for (i = 0; i < ARRAY_SIZE(smd_xprt_cfg); i++) {
+ ret = msm_ipc_router_smd_config_init(&smd_xprt_cfg[i]);
+ if (ret)
+ IPC_RTR_ERR("%s: init failed, config idx %d\n",
+ __func__, i);
+ }
+ mutex_lock(&smd_remote_xprt_list_lock_lha1);
+ }
+ mutex_unlock(&smd_remote_xprt_list_lock_lha1);
+}
+
+static const struct of_device_id msm_ipc_router_smd_xprt_match_table[] = {
+ { .compatible = "qcom,ipc_router_smd_xprt" },
+ {},
+};
+
+static struct platform_driver msm_ipc_router_smd_xprt_driver = {
+ .probe = msm_ipc_router_smd_xprt_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = msm_ipc_router_smd_xprt_match_table,
+ },
+};
+
+static int __init msm_ipc_router_smd_xprt_init(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&msm_ipc_router_smd_xprt_driver);
+ if (rc) {
+ IPC_RTR_ERR(
+ "%s: msm_ipc_router_smd_xprt_driver register failed %d\n",
+ __func__, rc);
+ return rc;
+ }
+
+ INIT_DELAYED_WORK(&ipc_router_smd_xprt_probe_work,
+ ipc_router_smd_xprt_probe_worker);
+ schedule_delayed_work(&ipc_router_smd_xprt_probe_work,
+ msecs_to_jiffies(IPC_ROUTER_SMD_XPRT_WAIT_TIMEOUT));
+ return 0;
+}
+
+module_init(msm_ipc_router_smd_xprt_init);
+MODULE_DESCRIPTION("IPC Router SMD XPRT");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/qmi_interface.c b/drivers/soc/qcom/qmi_interface.c
new file mode 100644
index 0000000..9c3f9431
--- /dev/null
+++ b/drivers/soc/qcom/qmi_interface.c
@@ -0,0 +1,2232 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/io.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/gfp.h>
+#include <linux/qmi_encdec.h>
+#include <linux/workqueue.h>
+#include <linux/mutex.h>
+#include <linux/hashtable.h>
+#include <linux/ipc_router.h>
+#include <linux/ipc_logging.h>
+
+#include <soc/qcom/msm_qmi_interface.h>
+
+#include "qmi_interface_priv.h"
+
+#define BUILD_INSTANCE_ID(vers, ins) (((vers) & 0xFF) | (((ins) & 0xFF) << 8))
+#define LOOKUP_MASK 0xFFFFFFFF
+#define MAX_WQ_NAME_LEN 20
+#define QMI_REQ_RESP_LOG_PAGES 3
+#define QMI_IND_LOG_PAGES 2
+#define QMI_REQ_RESP_LOG(buf...) \
+do { \
+ if (qmi_req_resp_log_ctx) { \
+ ipc_log_string(qmi_req_resp_log_ctx, buf); \
+ } \
+} while (0)
+
+#define QMI_IND_LOG(buf...) \
+do { \
+ if (qmi_ind_log_ctx) { \
+ ipc_log_string(qmi_ind_log_ctx, buf); \
+ } \
+} while (0)
+
+static LIST_HEAD(svc_event_nb_list);
+static DEFINE_MUTEX(svc_event_nb_list_lock);
+
+struct qmi_notify_event_work {
+ unsigned int event;
+ void *oob_data;
+ size_t oob_data_len;
+ void *priv;
+ struct work_struct work;
+};
+static void qmi_notify_event_worker(struct work_struct *work);
+
+#define HANDLE_HASH_TBL_SZ 1
+static DEFINE_HASHTABLE(handle_hash_tbl, HANDLE_HASH_TBL_SZ);
+static DEFINE_MUTEX(handle_hash_tbl_lock);
+
+struct elem_info qmi_response_type_v01_ei[] = {
+ {
+ .data_type = QMI_SIGNED_2_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(uint16_t),
+ .is_array = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ .offset = offsetof(struct qmi_response_type_v01,
+ result),
+ .ei_array = NULL,
+ },
+ {
+ .data_type = QMI_SIGNED_2_BYTE_ENUM,
+ .elem_len = 1,
+ .elem_size = sizeof(uint16_t),
+ .is_array = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ .offset = offsetof(struct qmi_response_type_v01,
+ error),
+ .ei_array = NULL,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .elem_len = 0,
+ .elem_size = 0,
+ .is_array = NO_ARRAY,
+ .tlv_type = QMI_COMMON_TLV_TYPE,
+ .offset = 0,
+ .ei_array = NULL,
+ },
+};
+
+struct elem_info qmi_error_resp_type_v01_ei[] = {
+ {
+ .data_type = QMI_STRUCT,
+ .elem_len = 1,
+ .elem_size = sizeof(struct qmi_response_type_v01),
+ .is_array = NO_ARRAY,
+ .tlv_type = 0x02,
+ .offset = 0,
+ .ei_array = qmi_response_type_v01_ei,
+ },
+ {
+ .data_type = QMI_EOTI,
+ .elem_len = 0,
+ .elem_size = 0,
+ .is_array = NO_ARRAY,
+ .tlv_type = 0x00,
+ .offset = 0,
+ .ei_array = NULL,
+ },
+};
+
+struct msg_desc err_resp_desc = {
+ .max_msg_len = 7,
+ .msg_id = 0,
+ .ei_array = qmi_error_resp_type_v01_ei,
+};
+
+static DEFINE_MUTEX(qmi_svc_event_notifier_lock);
+static struct msm_ipc_port *qmi_svc_event_notifier_port;
+static struct workqueue_struct *qmi_svc_event_notifier_wq;
+static void qmi_svc_event_notifier_init(void);
+static void qmi_svc_event_worker(struct work_struct *work);
+static struct svc_event_nb *find_svc_event_nb(uint32_t service_id,
+ uint32_t instance_id);
+DECLARE_WORK(qmi_svc_event_work, qmi_svc_event_worker);
+static void svc_resume_tx_worker(struct work_struct *work);
+static void clean_txn_info(struct qmi_handle *handle);
+static void *qmi_req_resp_log_ctx;
+static void *qmi_ind_log_ctx;
+
+/**
+ * qmi_log() - Pass log data to IPC logging framework
+ * @handle: The pointer to the qmi_handle
+ * @cntl_flag: Indicates the type (request/response/indication) of the message
+ * @txn_id: Transaction ID of the message.
+ * @msg_id: Message ID of the incoming/outgoing message.
+ * @msg_len: Total size of the message.
+ *
+ * This function builds the data that is passed on to the IPC logging
+ * framework. The logged data corresponds to the information
+ * that is exchanged between the IPC Router and kernel modules during
+ * request/response/indication transactions.
+ */
+static void qmi_log(struct qmi_handle *handle,
+ unsigned char cntl_flag, uint16_t txn_id,
+ uint16_t msg_id, uint16_t msg_len)
+{
+ uint32_t service_id = 0;
+ const char *ops_type = NULL;
+
+ if (handle->handle_type == QMI_CLIENT_HANDLE) {
+ service_id = handle->dest_service_id;
+ if (cntl_flag == QMI_REQUEST_CONTROL_FLAG)
+ ops_type = "TX";
+ else if (cntl_flag == QMI_INDICATION_CONTROL_FLAG ||
+ cntl_flag == QMI_RESPONSE_CONTROL_FLAG)
+ ops_type = "RX";
+ } else if (handle->handle_type == QMI_SERVICE_HANDLE) {
+ service_id = handle->svc_ops_options->service_id;
+ if (cntl_flag == QMI_REQUEST_CONTROL_FLAG)
+ ops_type = "RX";
+ else if (cntl_flag == QMI_INDICATION_CONTROL_FLAG ||
+ cntl_flag == QMI_RESPONSE_CONTROL_FLAG)
+ ops_type = "TX";
+ }
+
+ /*
+ * IPC Logging format is as below:-
+ * <Type of module>(CLNT or SERV) :
+ * <Operation Type> (TX/RX) :
+ * <Control Flag> (Req/Resp/Ind) :
+ * <Transaction ID> :
+ * <Message ID> :
+ * <Message Length> :
+ * <Service ID> :
+ */
+ if (qmi_req_resp_log_ctx &&
+ ((cntl_flag == QMI_REQUEST_CONTROL_FLAG) ||
+ (cntl_flag == QMI_RESPONSE_CONTROL_FLAG))) {
+ QMI_REQ_RESP_LOG("%s %s CF:%x TI:%x MI:%x ML:%x SvcId: %x",
+ (handle->handle_type == QMI_CLIENT_HANDLE ? "QCCI" : "QCSI"),
+ ops_type, cntl_flag, txn_id, msg_id, msg_len, service_id);
+ } else if (qmi_ind_log_ctx &&
+ (cntl_flag == QMI_INDICATION_CONTROL_FLAG)) {
+ QMI_IND_LOG("%s %s CF:%x TI:%x MI:%x ML:%x SvcId: %x",
+ (handle->handle_type == QMI_CLIENT_HANDLE ? "QCCI" : "QCSI"),
+ ops_type, cntl_flag, txn_id, msg_id, msg_len, service_id);
+ }
+}
+
+/**
+ * add_req_handle() - Create and Add a request handle to the connection
+ * @conn_h: Connection handle over which the request has arrived.
+ * @msg_id: Message ID of the request.
+ * @txn_id: Transaction ID of the request.
+ *
+ * @return: Pointer to request handle on success, NULL on error.
+ *
+ * This function creates a request handle to track the request that arrives
+ * on a connection. This function then adds it to the connection's request
+ * handle list.
+ */
+static struct req_handle *add_req_handle(struct qmi_svc_clnt_conn *conn_h,
+ uint16_t msg_id, uint16_t txn_id)
+{
+ struct req_handle *req_h;
+
+ req_h = kmalloc(sizeof(struct req_handle), GFP_KERNEL);
+ if (!req_h)
+ return NULL;
+
+ req_h->conn_h = conn_h;
+ req_h->msg_id = msg_id;
+ req_h->txn_id = txn_id;
+ list_add_tail(&req_h->list, &conn_h->req_handle_list);
+ return req_h;
+}
+
+/**
+ * verify_req_handle() - Verify the validity of a request handle
+ * @conn_h: Connection handle over which the request has arrived.
+ * @req_h: Request handle to be verified.
+ *
+ * @return: true on success, false on failure.
+ *
+ * This function is used to check if the request handle is present in
+ * the connection handle.
+ */
+static bool verify_req_handle(struct qmi_svc_clnt_conn *conn_h,
+ struct req_handle *req_h)
+{
+ struct req_handle *temp_req_h;
+
+ list_for_each_entry(temp_req_h, &conn_h->req_handle_list, list) {
+ if (temp_req_h == req_h)
+ return true;
+ }
+ return false;
+}
+
+/**
+ * rmv_req_handle() - Remove and destroy the request handle
+ * @req_h: Request handle to be removed and destroyed.
+ *
+ * @return: 0.
+ */
+static int rmv_req_handle(struct req_handle *req_h)
+{
+ list_del(&req_h->list);
+ kfree(req_h);
+ return 0;
+}
+
+/**
+ * add_svc_clnt_conn() - Create and add a connection handle to a service
+ * @handle: QMI handle in which the service is hosted.
+ * @clnt_addr: Address of the client connecting with the service.
+ * @clnt_addr_len: Length of the client address.
+ *
+ * @return: Pointer to connection handle on success, NULL on error.
+ *
+ * This function is used to create a connection handle that binds the service
+ * with a client. This function is called on a service's QMI handle when a
+ * client sends its first message to the service.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static struct qmi_svc_clnt_conn *add_svc_clnt_conn(
+ struct qmi_handle *handle, void *clnt_addr, size_t clnt_addr_len)
+{
+ struct qmi_svc_clnt_conn *conn_h;
+
+ conn_h = kmalloc(sizeof(struct qmi_svc_clnt_conn), GFP_KERNEL);
+ if (!conn_h)
+ return NULL;
+
+ conn_h->clnt_addr = kmalloc(clnt_addr_len, GFP_KERNEL);
+ if (!conn_h->clnt_addr) {
+ kfree(conn_h);
+ return NULL;
+ }
+
+ INIT_LIST_HEAD(&conn_h->list);
+ conn_h->svc_handle = handle;
+ memcpy(conn_h->clnt_addr, clnt_addr, clnt_addr_len);
+ conn_h->clnt_addr_len = clnt_addr_len;
+ INIT_LIST_HEAD(&conn_h->req_handle_list);
+ INIT_DELAYED_WORK(&conn_h->resume_tx_work, svc_resume_tx_worker);
+ INIT_LIST_HEAD(&conn_h->pending_txn_list);
+ mutex_init(&conn_h->pending_txn_lock);
+ list_add_tail(&conn_h->list, &handle->conn_list);
+ return conn_h;
+}
+
+/**
+ * find_svc_clnt_conn() - Find the existence of a client<->service connection
+ * @handle: Service's QMI handle.
+ * @clnt_addr: Address of the client to be present in the connection.
+ * @clnt_addr_len: Length of the client address.
+ *
+ * @return: Pointer to connection handle if the matching connection is found,
+ * NULL if the connection is not found.
+ *
+ * This function is used to find the existence of a client<->service connection
+ * handle in a service's QMI handle. This function tries to match the client
+ * address in the existing connections.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static struct qmi_svc_clnt_conn *find_svc_clnt_conn(
+ struct qmi_handle *handle, void *clnt_addr, size_t clnt_addr_len)
+{
+ struct qmi_svc_clnt_conn *conn_h;
+
+ list_for_each_entry(conn_h, &handle->conn_list, list) {
+ if (!memcmp(conn_h->clnt_addr, clnt_addr, clnt_addr_len))
+ return conn_h;
+ }
+ return NULL;
+}
+
+/**
+ * verify_svc_clnt_conn() - Verify the existence of a connection handle
+ * @handle: Service's QMI handle.
+ * @conn_h: Connection handle to be verified.
+ *
+ * @return: true on success, false on failure.
+ *
+ * This function is used to verify the existence of a connection in the
+ * connection list maintained by the service.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static bool verify_svc_clnt_conn(struct qmi_handle *handle,
+ struct qmi_svc_clnt_conn *conn_h)
+{
+ struct qmi_svc_clnt_conn *temp_conn_h;
+
+ list_for_each_entry(temp_conn_h, &handle->conn_list, list) {
+ if (temp_conn_h == conn_h)
+ return true;
+ }
+ return false;
+}
+
+/**
+ * rmv_svc_clnt_conn() - Remove the connection handle info from the service
+ * @conn_h: Connection handle to be removed.
+ *
+ * This function removes a connection handle from a service's QMI handle.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static void rmv_svc_clnt_conn(struct qmi_svc_clnt_conn *conn_h)
+{
+ struct req_handle *req_h, *temp_req_h;
+ struct qmi_txn *txn_h, *temp_txn_h;
+
+ list_del(&conn_h->list);
+ list_for_each_entry_safe(req_h, temp_req_h,
+ &conn_h->req_handle_list, list)
+ rmv_req_handle(req_h);
+
+ mutex_lock(&conn_h->pending_txn_lock);
+ list_for_each_entry_safe(txn_h, temp_txn_h,
+ &conn_h->pending_txn_list, list) {
+ list_del(&txn_h->list);
+ kfree(txn_h->enc_data);
+ kfree(txn_h);
+ }
+ mutex_unlock(&conn_h->pending_txn_lock);
+ flush_delayed_work(&conn_h->resume_tx_work);
+ kfree(conn_h->clnt_addr);
+ kfree(conn_h);
+}
+
+/**
+ * qmi_event_notify() - Notification function to QMI client/service interface
+ * @event: Type of event that gets notified.
+ * @oob_data: Any out-of-band data associated with event.
+ * @oob_data_len: Length of the out-of-band data, if any.
+ * @priv: Private data.
+ *
+ * This function is called by the underlying transport to notify the QMI
+ * interface regarding any incoming event. This function is registered by
+ * QMI interface when it opens a port/handle with the underlying transport.
+ */
+static void qmi_event_notify(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv)
+{
+ struct qmi_notify_event_work *notify_work;
+ struct qmi_handle *handle;
+ uint32_t key = 0;
+
+ notify_work = kmalloc(sizeof(struct qmi_notify_event_work),
+ GFP_KERNEL);
+ if (!notify_work)
+ return;
+
+ notify_work->event = event;
+ if (oob_data) {
+ notify_work->oob_data = kmalloc(oob_data_len, GFP_KERNEL);
+ if (!notify_work->oob_data) {
+ pr_err("%s: Couldn't allocate oob_data @ %d to %p\n",
+ __func__, event, priv);
+ kfree(notify_work);
+ return;
+ }
+ memcpy(notify_work->oob_data, oob_data, oob_data_len);
+ } else {
+ notify_work->oob_data = NULL;
+ }
+ notify_work->oob_data_len = oob_data_len;
+ notify_work->priv = priv;
+ INIT_WORK(¬ify_work->work, qmi_notify_event_worker);
+
+ mutex_lock(&handle_hash_tbl_lock);
+ hash_for_each_possible(handle_hash_tbl, handle, handle_hash, key) {
+ if (handle == (struct qmi_handle *)priv) {
+ queue_work(handle->handle_wq,
+ ¬ify_work->work);
+ mutex_unlock(&handle_hash_tbl_lock);
+ return;
+ }
+ }
+ mutex_unlock(&handle_hash_tbl_lock);
+ kfree(notify_work->oob_data);
+ kfree(notify_work);
+}
+
+static void qmi_notify_event_worker(struct work_struct *work)
+{
+ struct qmi_notify_event_work *notify_work =
+ container_of(work, struct qmi_notify_event_work, work);
+ struct qmi_handle *handle = (struct qmi_handle *)notify_work->priv;
+ unsigned long flags;
+
+ if (!handle)
+ return;
+
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset) {
+ mutex_unlock(&handle->handle_lock);
+ kfree(notify_work->oob_data);
+ kfree(notify_work);
+ return;
+ }
+
+ switch (notify_work->event) {
+ case IPC_ROUTER_CTRL_CMD_DATA:
+ spin_lock_irqsave(&handle->notify_lock, flags);
+ handle->notify(handle, QMI_RECV_MSG, handle->notify_priv);
+ spin_unlock_irqrestore(&handle->notify_lock, flags);
+ break;
+
+ case IPC_ROUTER_CTRL_CMD_RESUME_TX:
+ if (handle->handle_type == QMI_CLIENT_HANDLE) {
+ queue_delayed_work(handle->handle_wq,
+ &handle->resume_tx_work,
+ msecs_to_jiffies(0));
+ } else if (handle->handle_type == QMI_SERVICE_HANDLE) {
+ struct msm_ipc_addr rtx_addr = {0};
+ struct qmi_svc_clnt_conn *conn_h;
+ union rr_control_msg *msg;
+
+ msg = (union rr_control_msg *)notify_work->oob_data;
+ rtx_addr.addrtype = MSM_IPC_ADDR_ID;
+ rtx_addr.addr.port_addr.node_id = msg->cli.node_id;
+ rtx_addr.addr.port_addr.port_id = msg->cli.port_id;
+ conn_h = find_svc_clnt_conn(handle, &rtx_addr,
+ sizeof(rtx_addr));
+ if (conn_h)
+ queue_delayed_work(handle->handle_wq,
+ &conn_h->resume_tx_work,
+ msecs_to_jiffies(0));
+ }
+ break;
+
+ case IPC_ROUTER_CTRL_CMD_NEW_SERVER:
+ case IPC_ROUTER_CTRL_CMD_REMOVE_SERVER:
+ case IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT:
+ queue_delayed_work(handle->handle_wq,
+ &handle->ctl_work, msecs_to_jiffies(0));
+ break;
+ default:
+ break;
+ }
+ mutex_unlock(&handle->handle_lock);
+ kfree(notify_work->oob_data);
+ kfree(notify_work);
+}
+
+/**
+ * clnt_resume_tx_worker() - Handle the Resume_Tx event
+ * @work: Pointer to the work structure.
+ *
+ * This function handles the resume_tx event for any QMI client that
+ * exists in the kernel space. This function parses the pending_txn_list of
+ * the handle and attempts a send for each transaction in that list.
+ */
+static void clnt_resume_tx_worker(struct work_struct *work)
+{
+ struct delayed_work *rtx_work = to_delayed_work(work);
+ struct qmi_handle *handle =
+ container_of(rtx_work, struct qmi_handle, resume_tx_work);
+ struct qmi_txn *pend_txn, *temp_txn;
+ int ret;
+ uint16_t msg_id;
+
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset)
+ goto out_clnt_handle_rtx;
+
+ list_for_each_entry_safe(pend_txn, temp_txn,
+ &handle->pending_txn_list, list) {
+ ret = msm_ipc_router_send_msg(
+ (struct msm_ipc_port *)handle->src_port,
+ (struct msm_ipc_addr *)handle->dest_info,
+ pend_txn->enc_data, pend_txn->enc_data_len);
+
+ if (ret == -EAGAIN)
+ break;
+ msg_id = ((struct qmi_header *)pend_txn->enc_data)->msg_id;
+ kfree(pend_txn->enc_data);
+ if (ret < 0) {
+ pr_err("%s: Sending transaction %d from port %d failed",
+ __func__, pend_txn->txn_id,
+ ((struct msm_ipc_port *)handle->src_port)->
+ this_port.port_id);
+ if (pend_txn->type == QMI_ASYNC_TXN) {
+ pend_txn->resp_cb(pend_txn->handle,
+ msg_id, pend_txn->resp,
+ pend_txn->resp_cb_data,
+ ret);
+ list_del(&pend_txn->list);
+ kfree(pend_txn);
+ } else if (pend_txn->type == QMI_SYNC_TXN) {
+ pend_txn->send_stat = ret;
+ wake_up(&pend_txn->wait_q);
+ }
+ } else {
+ list_del(&pend_txn->list);
+ list_add_tail(&pend_txn->list, &handle->txn_list);
+ }
+ }
+out_clnt_handle_rtx:
+ mutex_unlock(&handle->handle_lock);
+}
+
+/**
+ * svc_resume_tx_worker() - Handle the Resume_Tx event
+ * @work: Pointer to the work structure.
+ *
+ * This function handles the resume_tx event for any QMI service that
+ * exists in the kernel space. This function parses the pending_txn_list of
+ * the connection handle and attempts a send for each transaction in that list.
+ */
+static void svc_resume_tx_worker(struct work_struct *work)
+{
+ struct delayed_work *rtx_work = to_delayed_work(work);
+ struct qmi_svc_clnt_conn *conn_h =
+ container_of(rtx_work, struct qmi_svc_clnt_conn,
+ resume_tx_work);
+ struct qmi_handle *handle = (struct qmi_handle *)conn_h->svc_handle;
+ struct qmi_txn *pend_txn, *temp_txn;
+ int ret;
+
+ mutex_lock(&conn_h->pending_txn_lock);
+ if (handle->handle_reset)
+ goto out_svc_handle_rtx;
+
+ list_for_each_entry_safe(pend_txn, temp_txn,
+ &conn_h->pending_txn_list, list) {
+ ret = msm_ipc_router_send_msg(
+ (struct msm_ipc_port *)handle->src_port,
+ (struct msm_ipc_addr *)conn_h->clnt_addr,
+ pend_txn->enc_data, pend_txn->enc_data_len);
+
+ if (ret == -EAGAIN)
+ break;
+ if (ret < 0)
+ pr_err("%s: Sending transaction %d from port %d failed",
+ __func__, pend_txn->txn_id,
+ ((struct msm_ipc_port *)handle->src_port)->
+ this_port.port_id);
+ list_del(&pend_txn->list);
+ kfree(pend_txn->enc_data);
+ kfree(pend_txn);
+ }
+out_svc_handle_rtx:
+ mutex_unlock(&conn_h->pending_txn_lock);
+}
+
+/**
+ * handle_rmv_server() - Handle the server exit event
+ * @handle: Client handle on which the server exit event is received.
+ * @ctl_msg: Information about the server that is exiting.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static int handle_rmv_server(struct qmi_handle *handle,
+ union rr_control_msg *ctl_msg)
+{
+ struct msm_ipc_addr *svc_addr;
+ unsigned long flags;
+
+ if (unlikely(!handle->dest_info))
+ return 0;
+
+ svc_addr = (struct msm_ipc_addr *)(handle->dest_info);
+ if (svc_addr->addr.port_addr.node_id == ctl_msg->srv.node_id &&
+ svc_addr->addr.port_addr.port_id == ctl_msg->srv.port_id) {
+ /* Wakeup any threads waiting for the response */
+ handle->handle_reset = 1;
+ clean_txn_info(handle);
+
+ spin_lock_irqsave(&handle->notify_lock, flags);
+ handle->notify(handle, QMI_SERVER_EXIT, handle->notify_priv);
+ spin_unlock_irqrestore(&handle->notify_lock, flags);
+ }
+ return 0;
+}
+
+/**
+ * handle_rmv_client() - Handle the client exit event
+ * @handle: Service handle on which the client exit event is received.
+ * @ctl_msg: Information about the client that is exiting.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static int handle_rmv_client(struct qmi_handle *handle,
+ union rr_control_msg *ctl_msg)
+{
+ struct qmi_svc_clnt_conn *conn_h;
+ struct msm_ipc_addr clnt_addr = {0};
+ unsigned long flags;
+
+ clnt_addr.addrtype = MSM_IPC_ADDR_ID;
+ clnt_addr.addr.port_addr.node_id = ctl_msg->cli.node_id;
+ clnt_addr.addr.port_addr.port_id = ctl_msg->cli.port_id;
+ conn_h = find_svc_clnt_conn(handle, &clnt_addr, sizeof(clnt_addr));
+ if (conn_h) {
+ spin_lock_irqsave(&handle->notify_lock, flags);
+ handle->svc_ops_options->disconnect_cb(handle, conn_h);
+ spin_unlock_irqrestore(&handle->notify_lock, flags);
+ rmv_svc_clnt_conn(conn_h);
+ }
+ return 0;
+}
+
+/**
+ * handle_ctl_msg() - Worker function to handle the control events
+ * @work: Work item to map the QMI handle.
+ *
+ * This function is a worker function to handle the incoming control
+ * events like REMOVE_SERVER/REMOVE_CLIENT. The work item is unique
+ * to a handle and the worker function handles the control events on
+ * a specific handle.
+ */
+static void handle_ctl_msg(struct work_struct *work)
+{
+ struct delayed_work *ctl_work = to_delayed_work(work);
+ struct qmi_handle *handle =
+ container_of(ctl_work, struct qmi_handle, ctl_work);
+ unsigned int ctl_msg_len;
+ union rr_control_msg *ctl_msg = NULL;
+ struct msm_ipc_addr src_addr;
+ int rc;
+
+ mutex_lock(&handle->handle_lock);
+ while (1) {
+ if (handle->handle_reset)
+ break;
+
+ /* Read the messages */
+ rc = msm_ipc_router_read_msg(
+ (struct msm_ipc_port *)(handle->ctl_port),
+ &src_addr, (unsigned char **)&ctl_msg, &ctl_msg_len);
+ if (rc == -ENOMSG)
+ break;
+ if (rc < 0) {
+ pr_err("%s: Read failed %d\n", __func__, rc);
+ break;
+ }
+ if (ctl_msg->cmd == IPC_ROUTER_CTRL_CMD_REMOVE_SERVER &&
+ handle->handle_type == QMI_CLIENT_HANDLE)
+ handle_rmv_server(handle, ctl_msg);
+ else if (ctl_msg->cmd == IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT &&
+ handle->handle_type == QMI_SERVICE_HANDLE)
+ handle_rmv_client(handle, ctl_msg);
+ kfree(ctl_msg);
+ }
+ mutex_unlock(&handle->handle_lock);
+}
+
+struct qmi_handle *qmi_handle_create(
+ void (*notify)(struct qmi_handle *handle,
+ enum qmi_event_type event, void *notify_priv),
+ void *notify_priv)
+{
+ struct qmi_handle *temp_handle;
+ struct msm_ipc_port *port_ptr, *ctl_port_ptr;
+ static uint32_t handle_count;
+ char wq_name[MAX_WQ_NAME_LEN];
+
+ temp_handle = kzalloc(sizeof(struct qmi_handle), GFP_KERNEL);
+ if (!temp_handle)
+ return NULL;
+ mutex_lock(&handle_hash_tbl_lock);
+ handle_count++;
+ scnprintf(wq_name, MAX_WQ_NAME_LEN, "qmi_hndl%08x", handle_count);
+ hash_add(handle_hash_tbl, &temp_handle->handle_hash, 0);
+ temp_handle->handle_wq = create_singlethread_workqueue(wq_name);
+ mutex_unlock(&handle_hash_tbl_lock);
+ if (!temp_handle->handle_wq) {
+ pr_err("%s: Couldn't create workqueue for handle\n", __func__);
+ goto handle_create_err1;
+ }
+
+ /* Initialize common elements */
+ temp_handle->handle_type = QMI_CLIENT_HANDLE;
+ temp_handle->next_txn_id = 1;
+ mutex_init(&temp_handle->handle_lock);
+ spin_lock_init(&temp_handle->notify_lock);
+ temp_handle->notify = notify;
+ temp_handle->notify_priv = notify_priv;
+ init_waitqueue_head(&temp_handle->reset_waitq);
+ INIT_DELAYED_WORK(&temp_handle->resume_tx_work, clnt_resume_tx_worker);
+ INIT_DELAYED_WORK(&temp_handle->ctl_work, handle_ctl_msg);
+
+ /* Initialize client specific elements */
+ INIT_LIST_HEAD(&temp_handle->txn_list);
+ INIT_LIST_HEAD(&temp_handle->pending_txn_list);
+
+ /* Initialize service specific elements */
+ INIT_LIST_HEAD(&temp_handle->conn_list);
+
+ port_ptr = msm_ipc_router_create_port(qmi_event_notify,
+ (void *)temp_handle);
+ if (!port_ptr) {
+ pr_err("%s: IPC router port creation failed\n", __func__);
+ goto handle_create_err2;
+ }
+
+ ctl_port_ptr = msm_ipc_router_create_port(qmi_event_notify,
+ (void *)temp_handle);
+ if (!ctl_port_ptr) {
+ pr_err("%s: IPC router ctl port creation failed\n", __func__);
+ goto handle_create_err3;
+ }
+ msm_ipc_router_bind_control_port(ctl_port_ptr);
+
+ temp_handle->src_port = port_ptr;
+ temp_handle->ctl_port = ctl_port_ptr;
+ return temp_handle;
+
+handle_create_err3:
+ msm_ipc_router_close_port(port_ptr);
+handle_create_err2:
+ destroy_workqueue(temp_handle->handle_wq);
+handle_create_err1:
+ mutex_lock(&handle_hash_tbl_lock);
+ hash_del(&temp_handle->handle_hash);
+ mutex_unlock(&handle_hash_tbl_lock);
+ kfree(temp_handle);
+ return NULL;
+}
+EXPORT_SYMBOL(qmi_handle_create);
+
+static void clean_txn_info(struct qmi_handle *handle)
+{
+ struct qmi_txn *txn_handle, *temp_txn_handle, *pend_txn;
+
+ list_for_each_entry_safe(pend_txn, temp_txn_handle,
+ &handle->pending_txn_list, list) {
+ if (pend_txn->type == QMI_ASYNC_TXN) {
+ list_del(&pend_txn->list);
+ pend_txn->resp_cb(pend_txn->handle,
+ ((struct qmi_header *)
+ pend_txn->enc_data)->msg_id,
+ pend_txn->resp, pend_txn->resp_cb_data,
+ -ENETRESET);
+ kfree(pend_txn->enc_data);
+ kfree(pend_txn);
+ } else if (pend_txn->type == QMI_SYNC_TXN) {
+ kfree(pend_txn->enc_data);
+ wake_up(&pend_txn->wait_q);
+ }
+ }
+ list_for_each_entry_safe(txn_handle, temp_txn_handle,
+ &handle->txn_list, list) {
+ if (txn_handle->type == QMI_ASYNC_TXN) {
+ list_del(&txn_handle->list);
+ kfree(txn_handle);
+ } else if (txn_handle->type == QMI_SYNC_TXN) {
+ wake_up(&txn_handle->wait_q);
+ }
+ }
+}
+
+int qmi_handle_destroy(struct qmi_handle *handle)
+{
+ DEFINE_WAIT(wait);
+
+ if (!handle)
+ return -EINVAL;
+
+ mutex_lock(&handle_hash_tbl_lock);
+ hash_del(&handle->handle_hash);
+ mutex_unlock(&handle_hash_tbl_lock);
+
+ mutex_lock(&handle->handle_lock);
+ handle->handle_reset = 1;
+ clean_txn_info(handle);
+ msm_ipc_router_close_port((struct msm_ipc_port *)(handle->ctl_port));
+ msm_ipc_router_close_port((struct msm_ipc_port *)(handle->src_port));
+ mutex_unlock(&handle->handle_lock);
+ flush_workqueue(handle->handle_wq);
+ destroy_workqueue(handle->handle_wq);
+
+ mutex_lock(&handle->handle_lock);
+ while (!list_empty(&handle->txn_list) ||
+ !list_empty(&handle->pending_txn_list)) {
+ prepare_to_wait(&handle->reset_waitq, &wait,
+ TASK_UNINTERRUPTIBLE);
+ mutex_unlock(&handle->handle_lock);
+ schedule();
+ mutex_lock(&handle->handle_lock);
+ finish_wait(&handle->reset_waitq, &wait);
+ }
+ mutex_unlock(&handle->handle_lock);
+ kfree(handle->dest_info);
+ kfree(handle);
+ return 0;
+}
+EXPORT_SYMBOL(qmi_handle_destroy);
+
+int qmi_register_ind_cb(struct qmi_handle *handle,
+ void (*ind_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ unsigned int msg_len, void *ind_cb_priv),
+ void *ind_cb_priv)
+{
+ if (!handle)
+ return -EINVAL;
+
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset) {
+ mutex_unlock(&handle->handle_lock);
+ return -ENETRESET;
+ }
+
+ handle->ind_cb = ind_cb;
+ handle->ind_cb_priv = ind_cb_priv;
+ mutex_unlock(&handle->handle_lock);
+ return 0;
+}
+EXPORT_SYMBOL(qmi_register_ind_cb);
+
+static int qmi_encode_and_send_req(struct qmi_txn **ret_txn_handle,
+ struct qmi_handle *handle, enum txn_type type,
+ struct msg_desc *req_desc, void *req, unsigned int req_len,
+ struct msg_desc *resp_desc, void *resp, unsigned int resp_len,
+ void (*resp_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ void *resp_cb_data, int stat),
+ void *resp_cb_data)
+{
+ struct qmi_txn *txn_handle;
+ int rc, encoded_req_len;
+ void *encoded_req;
+
+ if (!handle || !handle->dest_info ||
+ !req_desc || !resp_desc || !resp)
+ return -EINVAL;
+
+ if ((!req && req_len) || (!req_len && req))
+ return -EINVAL;
+
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset) {
+ mutex_unlock(&handle->handle_lock);
+ return -ENETRESET;
+ }
+
+ /* Allocate Transaction Info */
+ txn_handle = kzalloc(sizeof(struct qmi_txn), GFP_KERNEL);
+ if (!txn_handle) {
+ mutex_unlock(&handle->handle_lock);
+ return -ENOMEM;
+ }
+ txn_handle->type = type;
+ INIT_LIST_HEAD(&txn_handle->list);
+ init_waitqueue_head(&txn_handle->wait_q);
+
+ /* Cache the parameters passed */
+ txn_handle->handle = handle;
+ txn_handle->resp_desc = resp_desc;
+ txn_handle->resp = resp;
+ txn_handle->resp_len = resp_len;
+ txn_handle->resp_received = 0;
+ txn_handle->resp_cb = resp_cb;
+ txn_handle->resp_cb_data = resp_cb_data;
+ txn_handle->enc_data = NULL;
+ txn_handle->enc_data_len = 0;
+
+ /* Encode the request msg */
+ encoded_req_len = req_desc->max_msg_len + QMI_HEADER_SIZE;
+ encoded_req = kmalloc(encoded_req_len, GFP_KERNEL);
+ if (!encoded_req) {
+ rc = -ENOMEM;
+ goto encode_and_send_req_err1;
+ }
+ rc = qmi_kernel_encode(req_desc,
+ (void *)(encoded_req + QMI_HEADER_SIZE),
+ req_desc->max_msg_len, req);
+ if (rc < 0) {
+ pr_err("%s: Encode Failure %d\n", __func__, rc);
+ goto encode_and_send_req_err2;
+ }
+ encoded_req_len = rc;
+
+ /* Encode the header & Add to the txn_list */
+ if (!handle->next_txn_id)
+ handle->next_txn_id++;
+ txn_handle->txn_id = handle->next_txn_id++;
+ encode_qmi_header(encoded_req, QMI_REQUEST_CONTROL_FLAG,
+ txn_handle->txn_id, req_desc->msg_id,
+ encoded_req_len);
+ encoded_req_len += QMI_HEADER_SIZE;
+
+ /*
+ * Check if this port has transactions queued to its pending list
+ * and if there are any pending transactions then add the current
+ * transaction to the pending list rather than sending it. This avoids
+ * out-of-order message transfers.
+ */
+ if (!list_empty(&handle->pending_txn_list)) {
+ rc = -EAGAIN;
+ goto append_pend_txn;
+ }
+
+ list_add_tail(&txn_handle->list, &handle->txn_list);
+ qmi_log(handle, QMI_REQUEST_CONTROL_FLAG, txn_handle->txn_id,
+ req_desc->msg_id, encoded_req_len);
+ /* Send the request */
+ rc = msm_ipc_router_send_msg((struct msm_ipc_port *)(handle->src_port),
+ (struct msm_ipc_addr *)handle->dest_info,
+ encoded_req, encoded_req_len);
+append_pend_txn:
+ if (rc == -EAGAIN) {
+ txn_handle->enc_data = encoded_req;
+ txn_handle->enc_data_len = encoded_req_len;
+ if (list_empty(&handle->pending_txn_list))
+ list_del(&txn_handle->list);
+ list_add_tail(&txn_handle->list, &handle->pending_txn_list);
+ if (ret_txn_handle)
+ *ret_txn_handle = txn_handle;
+ mutex_unlock(&handle->handle_lock);
+ return 0;
+ }
+ if (rc < 0) {
+ pr_err("%s: send_msg failed %d\n", __func__, rc);
+ goto encode_and_send_req_err3;
+ }
+ mutex_unlock(&handle->handle_lock);
+
+ kfree(encoded_req);
+ if (ret_txn_handle)
+ *ret_txn_handle = txn_handle;
+ return 0;
+
+encode_and_send_req_err3:
+ list_del(&txn_handle->list);
+encode_and_send_req_err2:
+ kfree(encoded_req);
+encode_and_send_req_err1:
+ kfree(txn_handle);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+
+int qmi_send_req_wait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ unsigned long timeout_ms)
+{
+ struct qmi_txn *txn_handle = NULL;
+ int rc;
+
+ /* Encode and send the request */
+ rc = qmi_encode_and_send_req(&txn_handle, handle, QMI_SYNC_TXN,
+ req_desc, req, req_len,
+ resp_desc, resp, resp_len,
+ NULL, NULL);
+ if (rc < 0) {
+ pr_err("%s: Error encoding & sending request: %d\n", __func__, rc);
+ return rc;
+ }
+
+ /* Wait for the response */
+ if (!timeout_ms) {
+ wait_event(txn_handle->wait_q,
+ (txn_handle->resp_received ||
+ handle->handle_reset ||
+ (txn_handle->send_stat < 0)));
+ } else {
+ rc = wait_event_timeout(txn_handle->wait_q,
+ (txn_handle->resp_received ||
+ handle->handle_reset ||
+ (txn_handle->send_stat < 0)),
+ msecs_to_jiffies(timeout_ms));
+ if (rc == 0)
+ rc = -ETIMEDOUT;
+ }
+
+ mutex_lock(&handle->handle_lock);
+ if (!txn_handle->resp_received) {
+ pr_err("%s: Response Wait Error %d\n", __func__, rc);
+ if (handle->handle_reset)
+ rc = -ENETRESET;
+ if (rc >= 0)
+ rc = -EFAULT;
+ if (txn_handle->send_stat < 0)
+ rc = txn_handle->send_stat;
+ goto send_req_wait_err;
+ }
+ rc = 0;
+
+send_req_wait_err:
+ list_del(&txn_handle->list);
+ kfree(txn_handle);
+ wake_up(&handle->reset_waitq);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_send_req_wait);
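+/*
+ * Usage sketch (illustrative only, not part of this patch): a kernel
+ * client would typically create a handle, connect to a service and
+ * issue a synchronous request as below. The service id, version,
+ * instance and the msg_desc/request/response structures are assumed
+ * to come from the IDL-generated header of the specific QMI service.
+ *
+ *	handle = qmi_handle_create(my_notify_cb, NULL);
+ *	rc = qmi_connect_to_service(handle, service_id, service_vers,
+ *				    service_ins);
+ *	if (!rc)
+ *		rc = qmi_send_req_wait(handle, &req_desc, &req,
+ *				       sizeof(req), &resp_desc, &resp,
+ *				       sizeof(resp), 1000);
+ *	qmi_handle_destroy(handle);
+ */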
+
+int qmi_send_req_nowait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ void (*resp_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ void *resp_cb_data, int stat),
+ void *resp_cb_data)
+{
+ return qmi_encode_and_send_req(NULL, handle, QMI_ASYNC_TXN,
+ req_desc, req, req_len,
+ resp_desc, resp, resp_len,
+ resp_cb, resp_cb_data);
+}
+EXPORT_SYMBOL(qmi_send_req_nowait);
+
+/**
+ * qmi_encode_and_send_resp() - Encode and send QMI response
+ * @handle: QMI service handle sending the response.
+ * @conn_h: Connection handle to which the response is sent.
+ * @req_h: Request handle for which the response is sent.
+ * @resp_desc: Message Descriptor describing the response structure.
+ * @resp: Response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function encodes and sends a response message from a service to
+ * a client identified from the connection handle. The request for which
+ * the response is sent is identified from the request handle.
+ *
+ * This function must be called with handle->handle_lock locked.
+ */
+static int qmi_encode_and_send_resp(struct qmi_handle *handle,
+ struct qmi_svc_clnt_conn *conn_h, struct req_handle *req_h,
+ struct msg_desc *resp_desc, void *resp, unsigned int resp_len)
+{
+ struct qmi_txn *txn_handle;
+ uint16_t cntl_flag;
+ int rc;
+ int encoded_resp_len;
+ void *encoded_resp;
+
+ if (handle->handle_reset) {
+ rc = -ENETRESET;
+ goto encode_and_send_resp_err0;
+ }
+
+ if (handle->handle_type != QMI_SERVICE_HANDLE ||
+ !verify_svc_clnt_conn(handle, conn_h) ||
+ (req_h && !verify_req_handle(conn_h, req_h))) {
+ rc = -EINVAL;
+ goto encode_and_send_resp_err0;
+ }
+
+ /* Allocate Transaction Info */
+ txn_handle = kzalloc(sizeof(struct qmi_txn), GFP_KERNEL);
+ if (!txn_handle) {
+ rc = -ENOMEM;
+ goto encode_and_send_resp_err0;
+ }
+ INIT_LIST_HEAD(&txn_handle->list);
+ init_waitqueue_head(&txn_handle->wait_q);
+ txn_handle->handle = handle;
+ txn_handle->enc_data = NULL;
+ txn_handle->enc_data_len = 0;
+
+ /* Encode the response msg */
+ encoded_resp_len = resp_desc->max_msg_len + QMI_HEADER_SIZE;
+ encoded_resp = kmalloc(encoded_resp_len, GFP_KERNEL);
+ if (!encoded_resp) {
+ rc = -ENOMEM;
+ goto encode_and_send_resp_err1;
+ }
+ rc = qmi_kernel_encode(resp_desc,
+ (void *)(encoded_resp + QMI_HEADER_SIZE),
+ resp_desc->max_msg_len, resp);
+ if (rc < 0) {
+ pr_err("%s: Encode Failure %d\n", __func__, rc);
+ goto encode_and_send_resp_err2;
+ }
+ encoded_resp_len = rc;
+
+ /* Encode the header & Add to the txn_list */
+ if (req_h) {
+ txn_handle->txn_id = req_h->txn_id;
+ cntl_flag = QMI_RESPONSE_CONTROL_FLAG;
+ } else {
+ if (!handle->next_txn_id)
+ handle->next_txn_id++;
+ txn_handle->txn_id = handle->next_txn_id++;
+ cntl_flag = QMI_INDICATION_CONTROL_FLAG;
+ }
+ encode_qmi_header(encoded_resp, cntl_flag,
+ txn_handle->txn_id, resp_desc->msg_id,
+ encoded_resp_len);
+ encoded_resp_len += QMI_HEADER_SIZE;
+
+ qmi_log(handle, cntl_flag, txn_handle->txn_id,
+ resp_desc->msg_id, encoded_resp_len);
+ /*
+ * Check if this svc_clnt has transactions queued to its pending list
+ * and if there are any pending transactions then add the current
+ * transaction to the pending list rather than sending it. This avoids
+ * out-of-order message transfers.
+ */
+ mutex_lock(&conn_h->pending_txn_lock);
+ if (list_empty(&conn_h->pending_txn_list))
+ rc = msm_ipc_router_send_msg(
+ (struct msm_ipc_port *)(handle->src_port),
+ (struct msm_ipc_addr *)conn_h->clnt_addr,
+ encoded_resp, encoded_resp_len);
+ else
+ rc = -EAGAIN;
+
+ if (req_h)
+ rmv_req_handle(req_h);
+ if (rc == -EAGAIN) {
+ txn_handle->enc_data = encoded_resp;
+ txn_handle->enc_data_len = encoded_resp_len;
+ list_add_tail(&txn_handle->list, &conn_h->pending_txn_list);
+ mutex_unlock(&conn_h->pending_txn_lock);
+ return 0;
+ }
+ mutex_unlock(&conn_h->pending_txn_lock);
+ if (rc < 0)
+ pr_err("%s: send_msg failed %d\n", __func__, rc);
+encode_and_send_resp_err2:
+ kfree(encoded_resp);
+encode_and_send_resp_err1:
+ kfree(txn_handle);
+encode_and_send_resp_err0:
+ return rc;
+}
+
+/**
+ * qmi_send_resp() - Send response to a request
+ * @handle: QMI handle from which the response is sent.
+ * @conn_handle: Connection handle of the client to which the response is sent.
+ * @req_handle: Request for which the response is sent.
+ * @resp_desc: Descriptor explaining the response structure.
+ * @resp: Pointer to the response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_resp(struct qmi_handle *handle, void *conn_handle,
+ void *req_handle, struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len)
+{
+ int rc;
+ struct qmi_svc_clnt_conn *conn_h;
+ struct req_handle *req_h;
+
+ if (!handle || !conn_handle || !req_handle ||
+ !resp_desc || !resp || !resp_len)
+ return -EINVAL;
+
+ conn_h = (struct qmi_svc_clnt_conn *)conn_handle;
+ req_h = (struct req_handle *)req_handle;
+ mutex_lock(&handle->handle_lock);
+ rc = qmi_encode_and_send_resp(handle, conn_h, req_h,
+ resp_desc, resp, resp_len);
+ if (rc < 0)
+ pr_err("%s: Error encoding and sending response\n", __func__);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_send_resp);
+
+/**
+ * qmi_send_resp_from_cb() - Send response to a request from request_cb
+ * @handle: QMI handle from which the response is sent.
+ * @conn_handle: Connection handle of the client to which the response is sent.
+ * @req_handle: Request for which the response is sent.
+ * @resp_desc: Descriptor explaining the response structure.
+ * @resp: Pointer to the response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_resp_from_cb(struct qmi_handle *handle, void *conn_handle,
+ void *req_handle, struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len)
+{
+ int rc;
+ struct qmi_svc_clnt_conn *conn_h;
+ struct req_handle *req_h;
+
+ if (!handle || !conn_handle || !req_handle ||
+ !resp_desc || !resp || !resp_len)
+ return -EINVAL;
+
+ conn_h = (struct qmi_svc_clnt_conn *)conn_handle;
+ req_h = (struct req_handle *)req_handle;
+ rc = qmi_encode_and_send_resp(handle, conn_h, req_h,
+ resp_desc, resp, resp_len);
+ if (rc < 0)
+ pr_err("%s: Error encoding and sending response\n", __func__);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_send_resp_from_cb);
+
+/**
+ * qmi_send_ind() - Send unsolicited event/indication to a client
+ * @handle: QMI handle from which the indication is sent.
+ * @conn_handle: Connection handle of the client to which the indication is sent.
+ * @ind_desc: Descriptor explaining the indication structure.
+ * @ind: Pointer to the indication structure.
+ * @ind_len: Length of the indication structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_ind(struct qmi_handle *handle, void *conn_handle,
+ struct msg_desc *ind_desc, void *ind, unsigned int ind_len)
+{
+ int rc = 0;
+ struct qmi_svc_clnt_conn *conn_h;
+
+ if (!handle || !conn_handle || !ind_desc)
+ return -EINVAL;
+
+ if ((!ind && ind_len) || (ind && !ind_len))
+ return -EINVAL;
+
+ conn_h = (struct qmi_svc_clnt_conn *)conn_handle;
+ mutex_lock(&handle->handle_lock);
+ rc = qmi_encode_and_send_resp(handle, conn_h, NULL,
+ ind_desc, ind, ind_len);
+ if (rc < 0)
+ pr_err("%s: Error encoding and sending ind.\n", __func__);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_send_ind);
+
+/**
+ * qmi_send_ind_from_cb() - Send indication to a client from registration_cb
+ * @handle: QMI handle from which the indication is sent.
+ * @conn_handle: Connection handle of the client to which the indication is sent.
+ * @ind_desc: Descriptor explaining the indication structure.
+ * @ind: Pointer to the indication structure.
+ * @ind_len: Length of the indication structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_ind_from_cb(struct qmi_handle *handle, void *conn_handle,
+ struct msg_desc *ind_desc, void *ind, unsigned int ind_len)
+{
+ int rc = 0;
+ struct qmi_svc_clnt_conn *conn_h;
+
+ if (!handle || !conn_handle || !ind_desc)
+ return -EINVAL;
+
+ if ((!ind && ind_len) || (ind && !ind_len))
+ return -EINVAL;
+
+ conn_h = (struct qmi_svc_clnt_conn *)conn_handle;
+ rc = qmi_encode_and_send_resp(handle, conn_h, NULL,
+ ind_desc, ind, ind_len);
+ if (rc < 0)
+ pr_err("%s: Error encoding and sending ind.\n", __func__);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_send_ind_from_cb);
+
+/**
+ * translate_err_code() - Translate Linux error codes into QMI error codes
+ * @err: Standard Linux error codes to be translated.
+ *
+ * @return: Return QMI error code.
+ */
+static int translate_err_code(int err)
+{
+ int rc;
+
+ switch (err) {
+ case -ECONNREFUSED:
+ rc = QMI_ERR_CLIENT_IDS_EXHAUSTED_V01;
+ break;
+ case -EBADMSG:
+ rc = QMI_ERR_ENCODING_V01;
+ break;
+ case -ENOMEM:
+ rc = QMI_ERR_NO_MEMORY_V01;
+ break;
+ case -EOPNOTSUPP:
+ rc = QMI_ERR_MALFORMED_MSG_V01;
+ break;
+ case -ENOTSUPP:
+ rc = QMI_ERR_NOT_SUPPORTED_V01;
+ break;
+ default:
+ rc = QMI_ERR_INTERNAL_V01;
+ break;
+ }
+ return rc;
+}
+
+/**
+ * send_err_resp() - Send the error response
+ * @handle: Service handle from which the response is sent.
+ * @conn_h: Client<->Service connection on which the response is sent.
+ * @addr: Client address to which the error response is sent.
+ * @msg_id: Request message id for which the error response is sent.
+ * @txn_id: Request Transaction ID for which the error response is sent.
+ * @err: Error code to be sent.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function is used to send an error response from within the QMI
+ * service interface. This function is called when the service returns
+ * an error to the QMI interface while handling a request.
+ */
+static int send_err_resp(struct qmi_handle *handle,
+ struct qmi_svc_clnt_conn *conn_h, void *addr,
+ uint16_t msg_id, uint16_t txn_id, int err)
+{
+ struct qmi_response_type_v01 err_resp;
+ struct qmi_txn *txn_handle;
+ struct msm_ipc_addr *dest_addr;
+ int rc;
+ int encoded_resp_len;
+ void *encoded_resp;
+
+ if (handle->handle_reset)
+ return -ENETRESET;
+
+ err_resp.result = QMI_RESULT_FAILURE_V01;
+ err_resp.error = translate_err_code(err);
+
+ /* Allocate Transaction Info */
+ txn_handle = kzalloc(sizeof(struct qmi_txn), GFP_KERNEL);
+ if (!txn_handle)
+ return -ENOMEM;
+ INIT_LIST_HEAD(&txn_handle->list);
+ init_waitqueue_head(&txn_handle->wait_q);
+ txn_handle->handle = handle;
+ txn_handle->enc_data = NULL;
+ txn_handle->enc_data_len = 0;
+
+ /* Encode the response msg */
+ encoded_resp_len = err_resp_desc.max_msg_len + QMI_HEADER_SIZE;
+ encoded_resp = kmalloc(encoded_resp_len, GFP_KERNEL);
+ if (!encoded_resp) {
+ rc = -ENOMEM;
+ goto encode_and_send_err_resp_err0;
+ }
+ rc = qmi_kernel_encode(&err_resp_desc,
+ (void *)(encoded_resp + QMI_HEADER_SIZE),
+ err_resp_desc.max_msg_len, &err_resp);
+ if (rc < 0) {
+ pr_err("%s: Encode Failure %d\n", __func__, rc);
+ goto encode_and_send_err_resp_err1;
+ }
+ encoded_resp_len = rc;
+
+ /* Encode the header & Add to the txn_list */
+ txn_handle->txn_id = txn_id;
+ encode_qmi_header(encoded_resp, QMI_RESPONSE_CONTROL_FLAG,
+ txn_handle->txn_id, msg_id,
+ encoded_resp_len);
+ encoded_resp_len += QMI_HEADER_SIZE;
+
+ qmi_log(handle, QMI_RESPONSE_CONTROL_FLAG, txn_id,
+ msg_id, encoded_resp_len);
+ /*
+ * Check if this svc_clnt has transactions queued to its pending list
+ * and if there are any pending transactions then add the current
+ * transaction to the pending list rather than sending it. This avoids
+ * out-of-order message transfers.
+ */
+ if (!conn_h) {
+ dest_addr = (struct msm_ipc_addr *)addr;
+ goto tx_err_resp;
+ }
+
+ mutex_lock(&conn_h->pending_txn_lock);
+ dest_addr = (struct msm_ipc_addr *)conn_h->clnt_addr;
+ if (!list_empty(&conn_h->pending_txn_list)) {
+ rc = -EAGAIN;
+ goto queue_err_resp;
+ }
+tx_err_resp:
+ rc = msm_ipc_router_send_msg(
+ (struct msm_ipc_port *)(handle->src_port),
+ dest_addr, encoded_resp, encoded_resp_len);
+queue_err_resp:
+ if (rc == -EAGAIN && conn_h) {
+ txn_handle->enc_data = encoded_resp;
+ txn_handle->enc_data_len = encoded_resp_len;
+ list_add_tail(&txn_handle->list, &conn_h->pending_txn_list);
+ mutex_unlock(&conn_h->pending_txn_lock);
+ return 0;
+ }
+ if (conn_h)
+ mutex_unlock(&conn_h->pending_txn_lock);
+ if (rc < 0)
+ pr_err("%s: send_msg failed %d\n", __func__, rc);
+encode_and_send_err_resp_err1:
+ kfree(encoded_resp);
+encode_and_send_err_resp_err0:
+ kfree(txn_handle);
+ return rc;
+}
+
+/**
+ * handle_qmi_request() - Handle the QMI request
+ * @handle: QMI service handle on which the request has arrived.
+ * @req_msg: Request message to be handled.
+ * @txn_id: Transaction ID of the request message.
+ * @msg_id: Message ID of the request message.
+ * @msg_len: Message Length of the request message.
+ * @src_addr: Address of the source which sent the request.
+ * @src_addr_len: Length of the source address.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int handle_qmi_request(struct qmi_handle *handle,
+ unsigned char *req_msg, uint16_t txn_id,
+ uint16_t msg_id, uint16_t msg_len,
+ void *src_addr, size_t src_addr_len)
+{
+ struct qmi_svc_clnt_conn *conn_h;
+ struct msg_desc *req_desc = NULL;
+ void *req_struct = NULL;
+ unsigned int req_struct_len = 0;
+ struct req_handle *req_h = NULL;
+ int rc = 0;
+
+ if (handle->handle_type != QMI_SERVICE_HANDLE)
+ return -EOPNOTSUPP;
+
+ conn_h = find_svc_clnt_conn(handle, src_addr, src_addr_len);
+ if (conn_h)
+ goto decode_req;
+
+ /* New client, establish a connection */
+ conn_h = add_svc_clnt_conn(handle, src_addr, src_addr_len);
+ if (!conn_h) {
+ pr_err("%s: Error adding a new conn_h\n", __func__);
+ rc = -ENOMEM;
+ goto out_handle_req;
+ }
+ rc = handle->svc_ops_options->connect_cb(handle, conn_h);
+ if (rc < 0) {
+ pr_err("%s: Error accepting new client\n", __func__);
+ rmv_svc_clnt_conn(conn_h);
+ conn_h = NULL;
+ goto out_handle_req;
+ }
+
+decode_req:
+ if (!msg_len)
+ goto process_req;
+
+ req_struct_len = handle->svc_ops_options->req_desc_cb(msg_id,
+ &req_desc);
+ if (!req_desc || req_struct_len <= 0) {
+ pr_err("%s: Error getting req_desc for msg_id %d\n",
+ __func__, msg_id);
+ rc = -ENOTSUPP;
+ goto out_handle_req;
+ }
+
+ req_struct = kzalloc(req_struct_len, GFP_KERNEL);
+ if (!req_struct) {
+ rc = -ENOMEM;
+ goto out_handle_req;
+ }
+
+ rc = qmi_kernel_decode(req_desc, req_struct,
+ (void *)(req_msg + QMI_HEADER_SIZE), msg_len);
+ if (rc < 0) {
+ pr_err("%s: Error decoding msg_id %d\n", __func__, msg_id);
+ rc = -EBADMSG;
+ goto out_handle_req;
+ }
+
+process_req:
+ req_h = add_req_handle(conn_h, msg_id, txn_id);
+ if (!req_h) {
+ pr_err("%s: Error adding new request handle\n", __func__);
+ rc = -ENOMEM;
+ goto out_handle_req;
+ }
+ rc = handle->svc_ops_options->req_cb(handle, conn_h, req_h,
+ msg_id, req_struct);
+ if (rc < 0) {
+ pr_err("%s: Error while req_cb\n", __func__);
+ /* Check if the error is before or after sending a response */
+ if (verify_req_handle(conn_h, req_h))
+ rmv_req_handle(req_h);
+ else
+ rc = 0;
+ }
+
+out_handle_req:
+ kfree(req_struct);
+ if (rc < 0)
+ send_err_resp(handle, conn_h, src_addr, msg_id, txn_id, rc);
+ return rc;
+}
+
+static struct qmi_txn *find_txn_handle(struct qmi_handle *handle,
+ uint16_t txn_id)
+{
+ struct qmi_txn *txn_handle;
+
+ list_for_each_entry(txn_handle, &handle->txn_list, list) {
+ if (txn_handle->txn_id == txn_id)
+ return txn_handle;
+ }
+ return NULL;
+}
+
+static int handle_qmi_response(struct qmi_handle *handle,
+ unsigned char *resp_msg, uint16_t txn_id,
+ uint16_t msg_id, uint16_t msg_len)
+{
+ struct qmi_txn *txn_handle;
+ int rc;
+
+ /* Find the transaction handle */
+ txn_handle = find_txn_handle(handle, txn_id);
+ if (!txn_handle) {
+ pr_err("%s: Response received for non-existent txn_id %d\n",
+ __func__, txn_id);
+ return 0;
+ }
+
+ /* Decode the message */
+ rc = qmi_kernel_decode(txn_handle->resp_desc, txn_handle->resp,
+ (void *)(resp_msg + QMI_HEADER_SIZE), msg_len);
+ if (rc < 0) {
+ pr_err("%s: Response Decode Failure <%d: %d: %d> rc: %d\n",
+ __func__, txn_id, msg_id, msg_len, rc);
+ wake_up(&txn_handle->wait_q);
+ if (txn_handle->type == QMI_ASYNC_TXN) {
+ list_del(&txn_handle->list);
+ kfree(txn_handle);
+ }
+ return rc;
+ }
+
+ /* Handle async or sync resp */
+ switch (txn_handle->type) {
+ case QMI_SYNC_TXN:
+ txn_handle->resp_received = 1;
+ wake_up(&txn_handle->wait_q);
+ rc = 0;
+ break;
+
+ case QMI_ASYNC_TXN:
+ if (txn_handle->resp_cb)
+ txn_handle->resp_cb(txn_handle->handle, msg_id,
+ txn_handle->resp,
+ txn_handle->resp_cb_data, 0);
+ list_del(&txn_handle->list);
+ kfree(txn_handle);
+ rc = 0;
+ break;
+
+ default:
+ pr_err("%s: Unrecognized transaction type\n", __func__);
+ return -EFAULT;
+ }
+ return rc;
+}
+
+static int handle_qmi_indication(struct qmi_handle *handle, void *msg,
+ unsigned int msg_id, unsigned int msg_len)
+{
+ if (handle->ind_cb)
+ handle->ind_cb(handle, msg_id, msg + QMI_HEADER_SIZE,
+ msg_len, handle->ind_cb_priv);
+ return 0;
+}
+
+int qmi_recv_msg(struct qmi_handle *handle)
+{
+ unsigned int recv_msg_len;
+ unsigned char *recv_msg = NULL;
+ struct msm_ipc_addr src_addr = {0};
+ unsigned char cntl_flag;
+ uint16_t txn_id, msg_id, msg_len;
+ int rc;
+
+ if (!handle)
+ return -EINVAL;
+
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset) {
+ mutex_unlock(&handle->handle_lock);
+ return -ENETRESET;
+ }
+
+ /* Read the messages */
+ rc = msm_ipc_router_read_msg((struct msm_ipc_port *)(handle->src_port),
+ &src_addr, &recv_msg, &recv_msg_len);
+ if (rc == -ENOMSG) {
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+ }
+
+ if (rc < 0) {
+ pr_err("%s: Read failed %d\n", __func__, rc);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+ }
+
+ /* Decode the header & Handle the req, resp, indication message */
+ decode_qmi_header(recv_msg, &cntl_flag, &txn_id, &msg_id, &msg_len);
+
+ qmi_log(handle, cntl_flag, txn_id, msg_id, msg_len);
+ switch (cntl_flag) {
+ case QMI_REQUEST_CONTROL_FLAG:
+ rc = handle_qmi_request(handle, recv_msg, txn_id, msg_id,
+ msg_len, &src_addr, sizeof(src_addr));
+ break;
+
+ case QMI_RESPONSE_CONTROL_FLAG:
+ rc = handle_qmi_response(handle, recv_msg,
+ txn_id, msg_id, msg_len);
+ break;
+
+ case QMI_INDICATION_CONTROL_FLAG:
+ rc = handle_qmi_indication(handle, recv_msg, msg_id, msg_len);
+ break;
+
+ default:
+ rc = -EFAULT;
+ pr_err("%s: Unsupported message type %d\n",
+ __func__, cntl_flag);
+ break;
+ }
+ kfree(recv_msg);
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_recv_msg);
+
+int qmi_connect_to_service(struct qmi_handle *handle,
+ uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins)
+{
+ struct msm_ipc_port_name svc_name;
+ struct msm_ipc_server_info svc_info;
+ struct msm_ipc_addr *svc_dest_addr;
+ int rc;
+ uint32_t instance_id;
+
+ if (!handle)
+ return -EINVAL;
+
+ svc_dest_addr = kzalloc(sizeof(struct msm_ipc_addr),
+ GFP_KERNEL);
+ if (!svc_dest_addr)
+ return -ENOMEM;
+
+ instance_id = BUILD_INSTANCE_ID(service_vers, service_ins);
+ svc_name.service = service_id;
+ svc_name.instance = instance_id;
+
+ rc = msm_ipc_router_lookup_server_name(&svc_name, &svc_info,
+ 1, LOOKUP_MASK);
+ if (rc <= 0) {
+ pr_err("%s: Server %08x:%08x not found\n",
+ __func__, service_id, instance_id);
+ kfree(svc_dest_addr);
+ return -ENODEV;
+ }
+ svc_dest_addr->addrtype = MSM_IPC_ADDR_ID;
+ svc_dest_addr->addr.port_addr.node_id = svc_info.node_id;
+ svc_dest_addr->addr.port_addr.port_id = svc_info.port_id;
+ mutex_lock(&handle->handle_lock);
+ if (handle->handle_reset) {
+ mutex_unlock(&handle->handle_lock);
+ kfree(svc_dest_addr);
+ return -ENETRESET;
+ }
+ handle->dest_info = svc_dest_addr;
+ handle->dest_service_id = service_id;
+ mutex_unlock(&handle->handle_lock);
+
+ return 0;
+}
+EXPORT_SYMBOL(qmi_connect_to_service);
+
+/**
+ * svc_event_add_svc_addr() - Add a specific service address to the list
+ * @event_nb: Reference to the service event structure.
+ * @node_id: Node id of the service address.
+ * @port_id: Port id of the service address.
+ *
+ * Return: 0 on success, standard error code otherwise.
+ *
+ * This function should be called with svc_addr_list_lock locked.
+ */
+static int svc_event_add_svc_addr(struct svc_event_nb *event_nb,
+ uint32_t node_id, uint32_t port_id)
+{
+ struct svc_addr *addr;
+
+ if (!event_nb)
+ return -EINVAL;
+ addr = kmalloc(sizeof(*addr), GFP_KERNEL);
+ if (!addr) {
+ pr_err("%s: Memory allocation failed for address list\n",
+ __func__);
+ return -ENOMEM;
+ }
+ addr->port_addr.node_id = node_id;
+ addr->port_addr.port_id = port_id;
+ list_add_tail(&addr->list_node, &event_nb->svc_addr_list);
+ return 0;
+}
+
+/**
+ * qmi_notify_svc_event_arrive() - Notify the clients about service arrival
+ * @service: Service id for the specific service.
+ * @instance: Instance id for the specific service.
+ * @node_id: Node id of the processor where the service is hosted.
+ * @port_id: Port id of the service port created by IPC Router.
+ *
+ * Return: 0 on Success or standard error code.
+ */
+static int qmi_notify_svc_event_arrive(uint32_t service,
+ uint32_t instance,
+ uint32_t node_id,
+ uint32_t port_id)
+{
+ struct svc_event_nb *temp;
+ unsigned long flags;
+ struct svc_addr *addr;
+ bool already_notified = false;
+
+ mutex_lock(&svc_event_nb_list_lock);
+ temp = find_svc_event_nb(service, instance);
+ if (!temp) {
+ mutex_unlock(&svc_event_nb_list_lock);
+ return -EINVAL;
+ }
+ mutex_unlock(&svc_event_nb_list_lock);
+
+ mutex_lock(&temp->svc_addr_list_lock);
+ list_for_each_entry(addr, &temp->svc_addr_list, list_node)
+ if (addr->port_addr.node_id == node_id &&
+ addr->port_addr.port_id == port_id)
+ already_notified = true;
+ if (!already_notified) {
+ /*
+ * Notify only if the clients are not notified about the
+ * service during registration.
+ */
+ svc_event_add_svc_addr(temp, node_id, port_id);
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ raw_notifier_call_chain(&temp->svc_event_rcvr_list,
+ QMI_SERVER_ARRIVE, NULL);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ }
+ mutex_unlock(&temp->svc_addr_list_lock);
+
+ return 0;
+}
+
+/**
+ * qmi_notify_svc_event_exit() - Notify the clients about service exit
+ * @service: Service id for the specific service.
+ * @instance: Instance id for the specific service.
+ * @node_id: Node id of the processor where the service is hosted.
+ * @port_id: Port id of the service port created by IPC Router.
+ *
+ * Return: 0 on Success or standard error code.
+ */
+static int qmi_notify_svc_event_exit(uint32_t service,
+ uint32_t instance,
+ uint32_t node_id,
+ uint32_t port_id)
+{
+ struct svc_event_nb *temp;
+ unsigned long flags;
+ struct svc_addr *addr;
+ struct svc_addr *temp_addr;
+
+ mutex_lock(&svc_event_nb_list_lock);
+ temp = find_svc_event_nb(service, instance);
+ if (!temp) {
+ mutex_unlock(&svc_event_nb_list_lock);
+ return -EINVAL;
+ }
+ mutex_unlock(&svc_event_nb_list_lock);
+
+ mutex_lock(&temp->svc_addr_list_lock);
+ list_for_each_entry_safe(addr, temp_addr, &temp->svc_addr_list,
+ list_node) {
+ if (addr->port_addr.node_id == node_id &&
+ addr->port_addr.port_id == port_id) {
+ /*
+ * Notify only if an already notified service has
+ * gone down.
+ */
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ raw_notifier_call_chain(&temp->svc_event_rcvr_list,
+ QMI_SERVER_EXIT, NULL);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ list_del(&addr->list_node);
+ kfree(addr);
+ }
+ }
+
+ mutex_unlock(&temp->svc_addr_list_lock);
+
+ return 0;
+}
+
+static struct svc_event_nb *find_svc_event_nb(uint32_t service_id,
+ uint32_t instance_id)
+{
+ struct svc_event_nb *temp;
+
+ list_for_each_entry(temp, &svc_event_nb_list, list) {
+ if (temp->service_id == service_id &&
+ temp->instance_id == instance_id)
+ return temp;
+ }
+ return NULL;
+}
+
+/**
+ * find_and_add_svc_event_nb() - Find/Add a notifier block for specific service
+ * @service_id: Service Id of the service
+ * @instance_id: Instance Id of the service
+ *
+ * Return: Pointer to svc_event_nb structure for the specified service
+ *
+ * This function should only be called after acquiring svc_event_nb_list_lock.
+ */
+static struct svc_event_nb *find_and_add_svc_event_nb(uint32_t service_id,
+ uint32_t instance_id)
+{
+ struct svc_event_nb *temp;
+
+ temp = find_svc_event_nb(service_id, instance_id);
+ if (temp)
+ return temp;
+
+ temp = kzalloc(sizeof(struct svc_event_nb), GFP_KERNEL);
+ if (!temp)
+ return temp;
+
+ spin_lock_init(&temp->nb_lock);
+ temp->service_id = service_id;
+ temp->instance_id = instance_id;
+ INIT_LIST_HEAD(&temp->list);
+ INIT_LIST_HEAD(&temp->svc_addr_list);
+ RAW_INIT_NOTIFIER_HEAD(&temp->svc_event_rcvr_list);
+ mutex_init(&temp->svc_addr_list_lock);
+ list_add_tail(&temp->list, &svc_event_nb_list);
+
+ return temp;
+}
+
+int qmi_svc_event_notifier_register(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb)
+{
+ struct svc_event_nb *temp;
+ unsigned long flags;
+ int ret;
+ int i;
+ int num_servers;
+ uint32_t instance_id;
+ struct msm_ipc_port_name svc_name;
+ struct msm_ipc_server_info *svc_info_arr = NULL;
+
+ mutex_lock(&qmi_svc_event_notifier_lock);
+ if (!qmi_svc_event_notifier_port && !qmi_svc_event_notifier_wq)
+ qmi_svc_event_notifier_init();
+ mutex_unlock(&qmi_svc_event_notifier_lock);
+
+ instance_id = BUILD_INSTANCE_ID(service_vers, service_ins);
+ mutex_lock(&svc_event_nb_list_lock);
+ temp = find_and_add_svc_event_nb(service_id, instance_id);
+ if (!temp) {
+ mutex_unlock(&svc_event_nb_list_lock);
+ return -ENOMEM;
+ }
+ mutex_unlock(&svc_event_nb_list_lock);
+
+ mutex_lock(&temp->svc_addr_list_lock);
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ ret = raw_notifier_chain_register(&temp->svc_event_rcvr_list, nb);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ if (!list_empty(&temp->svc_addr_list)) {
+ /* Notify this client only if some services already exist. */
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ nb->notifier_call(nb, QMI_SERVER_ARRIVE, NULL);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ } else {
+ /*
+ * Check if we have missed a new server event that happened
+ * earlier.
+ */
+ svc_name.service = service_id;
+ svc_name.instance = instance_id;
+ num_servers = msm_ipc_router_lookup_server_name(&svc_name,
+ NULL,
+ 0, LOOKUP_MASK);
+ if (num_servers > 0) {
+ svc_info_arr = kmalloc_array(num_servers,
+ sizeof(*svc_info_arr),
+ GFP_KERNEL);
+ if (!svc_info_arr) {
+ mutex_unlock(&temp->svc_addr_list_lock);
+ return -ENOMEM;
+ }
+ num_servers = msm_ipc_router_lookup_server_name(
+ &svc_name,
+ svc_info_arr,
+ num_servers,
+ LOOKUP_MASK);
+ for (i = 0; i < num_servers; i++)
+ svc_event_add_svc_addr(temp,
+ svc_info_arr[i].node_id,
+ svc_info_arr[i].port_id);
+ kfree(svc_info_arr);
+
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ raw_notifier_call_chain(&temp->svc_event_rcvr_list,
+ QMI_SERVER_ARRIVE, NULL);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ }
+ }
+ mutex_unlock(&temp->svc_addr_list_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL(qmi_svc_event_notifier_register);
+
+int qmi_svc_event_notifier_unregister(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb)
+{
+ int ret;
+ struct svc_event_nb *temp;
+ unsigned long flags;
+ uint32_t instance_id;
+
+ instance_id = BUILD_INSTANCE_ID(service_vers, service_ins);
+ mutex_lock(&svc_event_nb_list_lock);
+ temp = find_svc_event_nb(service_id, instance_id);
+ if (!temp) {
+ mutex_unlock(&svc_event_nb_list_lock);
+ return -EINVAL;
+ }
+
+ spin_lock_irqsave(&temp->nb_lock, flags);
+ ret = raw_notifier_chain_unregister(&temp->svc_event_rcvr_list, nb);
+ spin_unlock_irqrestore(&temp->nb_lock, flags);
+ mutex_unlock(&svc_event_nb_list_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL(qmi_svc_event_notifier_unregister);
+
+/**
+ * qmi_svc_event_worker() - Read control messages over service event port
+ * @work: Reference to the work structure queued.
+ */
+static void qmi_svc_event_worker(struct work_struct *work)
+{
+ union rr_control_msg *ctl_msg = NULL;
+ unsigned int ctl_msg_len;
+ struct msm_ipc_addr src_addr;
+ int ret;
+
+ while (1) {
+ ret = msm_ipc_router_read_msg(qmi_svc_event_notifier_port,
+ &src_addr, (unsigned char **)&ctl_msg, &ctl_msg_len);
+ if (ret == -ENOMSG)
+ break;
+ if (ret < 0) {
+ pr_err("%s: Error receiving control message\n",
+ __func__);
+ break;
+ }
+ if (ctl_msg->cmd == IPC_ROUTER_CTRL_CMD_NEW_SERVER)
+ qmi_notify_svc_event_arrive(ctl_msg->srv.service,
+ ctl_msg->srv.instance,
+ ctl_msg->srv.node_id,
+ ctl_msg->srv.port_id);
+ else if (ctl_msg->cmd == IPC_ROUTER_CTRL_CMD_REMOVE_SERVER)
+ qmi_notify_svc_event_exit(ctl_msg->srv.service,
+ ctl_msg->srv.instance,
+ ctl_msg->srv.node_id,
+ ctl_msg->srv.port_id);
+ kfree(ctl_msg);
+ }
+}
+
+/**
+ * qmi_svc_event_notify() - Callback for any service event posted on the
+ * control port
+ * @event: The event posted on the control port.
+ * @data: Any out-of-band data associated with event.
+ * @odata_len: Length of the out-of-band data, if any.
+ * @priv: Private Data.
+ *
+ * This function is called by the underlying transport to notify the QMI
+ * interface regarding any incoming service related events. It is registered
+ * during service event control port creation.
+ */
+static void qmi_svc_event_notify(unsigned int event, void *data,
+ size_t odata_len, void *priv)
+{
+ if (event == IPC_ROUTER_CTRL_CMD_NEW_SERVER
+ || event == IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT
+ || event == IPC_ROUTER_CTRL_CMD_REMOVE_SERVER)
+ queue_work(qmi_svc_event_notifier_wq, &qmi_svc_event_work);
+}
+
+/**
+ * qmi_svc_event_notifier_init() - Create a control port to get service events
+ *
+ * This function is called during first service notifier registration. It
+ * creates a control port to get notification about server events so that
+ * respective clients can be notified about the events.
+ */
+static void qmi_svc_event_notifier_init(void)
+{
+ qmi_svc_event_notifier_wq = create_singlethread_workqueue(
+ "qmi_svc_event_wq");
+ if (!qmi_svc_event_notifier_wq) {
+ pr_err("%s: ctrl workqueue allocation failed\n", __func__);
+ return;
+ }
+ qmi_svc_event_notifier_port = msm_ipc_router_create_port(
+ qmi_svc_event_notify, NULL);
+ if (!qmi_svc_event_notifier_port) {
+ destroy_workqueue(qmi_svc_event_notifier_wq);
+ pr_err("%s: IPC Router Port creation failed\n", __func__);
+ return;
+ }
+ msm_ipc_router_bind_control_port(qmi_svc_event_notifier_port);
+}
+
+/**
+ * qmi_log_init() - Init function for IPC Logging
+ *
+ * Initialize log contexts for QMI request/response/indications.
+ */
+void qmi_log_init(void)
+{
+ qmi_req_resp_log_ctx =
+ ipc_log_context_create(QMI_REQ_RESP_LOG_PAGES,
+ "kqmi_req_resp", 0);
+ if (!qmi_req_resp_log_ctx)
+ pr_err("%s: Unable to create QMI IPC logging for Req/Resp",
+ __func__);
+ qmi_ind_log_ctx =
+ ipc_log_context_create(QMI_IND_LOG_PAGES, "kqmi_ind", 0);
+ if (!qmi_ind_log_ctx)
+ pr_err("%s: Unable to create QMI IPC logging for Indications",
+ __func__);
+}
+
+/**
+ * qmi_svc_register() - Register a QMI service with a QMI handle
+ * @handle: QMI handle on which the service has to be registered.
+ * @ops_options: Service specific operations and options.
+ *
+ * @return: 0 if successfully registered, < 0 on error.
+ */
+int qmi_svc_register(struct qmi_handle *handle, void *ops_options)
+{
+ struct qmi_svc_ops_options *svc_ops_options;
+ struct msm_ipc_addr svc_name;
+ int rc;
+ uint32_t instance_id;
+
+ svc_ops_options = (struct qmi_svc_ops_options *)ops_options;
+ if (!handle || !svc_ops_options)
+ return -EINVAL;
+
+ /* Check if the required elements of ops_options are filled */
+ if (!svc_ops_options->service_id || !svc_ops_options->service_vers ||
+ !svc_ops_options->connect_cb || !svc_ops_options->disconnect_cb ||
+ !svc_ops_options->req_desc_cb || !svc_ops_options->req_cb)
+ return -EINVAL;
+
+ mutex_lock(&handle->handle_lock);
+ /* Check if another service/client is registered in that handle */
+ if (handle->handle_type == QMI_SERVICE_HANDLE || handle->dest_info) {
+ mutex_unlock(&handle->handle_lock);
+ return -EBUSY;
+ }
+ INIT_LIST_HEAD(&handle->conn_list);
+ mutex_unlock(&handle->handle_lock);
+
+ /*
+ * The handle_lock is released here because the NEW_SERVER message
+ * will arrive on this handle's control port, whose handler acquires
+ * the same mutex. It is also safe to call register_server unlocked.
+ */
+ /* Register the service */
+ instance_id = ((svc_ops_options->service_vers & 0xFF) |
+ ((svc_ops_options->service_ins & 0xFF) << 8));
+ svc_name.addrtype = MSM_IPC_ADDR_NAME;
+ svc_name.addr.port_name.service = svc_ops_options->service_id;
+ svc_name.addr.port_name.instance = instance_id;
+ rc = msm_ipc_router_register_server(
+ (struct msm_ipc_port *)handle->src_port, &svc_name);
+ if (rc < 0) {
+ pr_err("%s: Error %d registering QMI service %08x:%08x\n",
+ __func__, rc, svc_ops_options->service_id,
+ instance_id);
+ return rc;
+ }
+ mutex_lock(&handle->handle_lock);
+ handle->svc_ops_options = svc_ops_options;
+ handle->handle_type = QMI_SERVICE_HANDLE;
+ mutex_unlock(&handle->handle_lock);
+ return rc;
+}
+EXPORT_SYMBOL(qmi_svc_register);
+
+
+/**
+ * qmi_svc_unregister() - Unregister the service from a QMI handle
+ * @handle: QMI handle from which the service has to be unregistered.
+ *
+ * return: 0 on success, < 0 on error.
+ */
+int qmi_svc_unregister(struct qmi_handle *handle)
+{
+ struct qmi_svc_clnt_conn *conn_h, *temp_conn_h;
+
+ if (!handle || handle->handle_type != QMI_SERVICE_HANDLE)
+ return -EINVAL;
+
+ mutex_lock(&handle->handle_lock);
+ handle->handle_type = QMI_CLIENT_HANDLE;
+ mutex_unlock(&handle->handle_lock);
+ /*
+ * The handle_lock is released here because the REMOVE_SERVER message
+ * will arrive on this handle's control port, whose handler acquires
+ * the same mutex. It is also safe to call unregister_server unlocked.
+ */
+ msm_ipc_router_unregister_server(
+ (struct msm_ipc_port *)handle->src_port);
+
+ mutex_lock(&handle->handle_lock);
+ list_for_each_entry_safe(conn_h, temp_conn_h,
+ &handle->conn_list, list)
+ rmv_svc_clnt_conn(conn_h);
+ mutex_unlock(&handle->handle_lock);
+ return 0;
+}
+EXPORT_SYMBOL(qmi_svc_unregister);
+
+static int __init qmi_interface_init(void)
+{
+ qmi_log_init();
+ return 0;
+}
+module_init(qmi_interface_init);
+
+MODULE_DESCRIPTION("MSM QMI Interface");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/soc/qcom/qmi_interface_priv.h b/drivers/soc/qcom/qmi_interface_priv.h
new file mode 100644
index 0000000..ef3e692
--- /dev/null
+++ b/drivers/soc/qcom/qmi_interface_priv.h
@@ -0,0 +1,123 @@
+/* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _QMI_INTERFACE_PRIV_H_
+#define _QMI_INTERFACE_PRIV_H_
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/gfp.h>
+#include <linux/platform_device.h>
+#include <linux/qmi_encdec.h>
+
+#include <soc/qcom/msm_qmi_interface.h>
+
+enum txn_type {
+ QMI_SYNC_TXN = 1,
+ QMI_ASYNC_TXN,
+};
+
+/**
+ * handle_type - Enum to identify QMI handle type
+ */
+enum handle_type {
+ QMI_CLIENT_HANDLE = 1,
+ QMI_SERVICE_HANDLE,
+};
+
+struct qmi_txn {
+ struct list_head list;
+ uint16_t txn_id;
+ enum txn_type type;
+ struct qmi_handle *handle;
+ void *enc_data;
+ unsigned int enc_data_len;
+ struct msg_desc *resp_desc;
+ void *resp;
+ unsigned int resp_len;
+ int resp_received;
+ int send_stat;
+ void (*resp_cb)(struct qmi_handle *handle, unsigned int msg_id,
+ void *msg, void *resp_cb_data, int stat);
+ void *resp_cb_data;
+ wait_queue_head_t wait_q;
+};
+
+/**
+ * svc_addr - Data structure to maintain a list of service addresses.
+ * @list_node: Service address list node used by "svc_addr_list"
+ * @port_addr: Service address in <node_id:port_id>.
+ */
+struct svc_addr {
+ struct list_head list_node;
+ struct msm_ipc_port_addr port_addr;
+};
+
+/**
+ * svc_event_nb - Service event notification structure.
+ * @nb_lock: Spinlock for the notifier block lists.
+ * @service_id: Service id for which list of notifier blocks are maintained.
+ * @instance_id: Instance id for which list of notifier blocks are maintained.
+ * @svc_event_rcvr_list: List of notifier blocks which clients have registered.
+ * @list: Used to chain this structure in a global list.
+ * @svc_addr_list_lock: Lock to protect @svc_addr_list.
+ * @svc_addr_list: List for maintaining all the addresses for a specific
+ * <service_id:instance_id>.
+ */
+struct svc_event_nb {
+ spinlock_t nb_lock;
+ uint32_t service_id;
+ uint32_t instance_id;
+ struct raw_notifier_head svc_event_rcvr_list;
+ struct list_head list;
+ struct mutex svc_addr_list_lock;
+ struct list_head svc_addr_list;
+};
+
+/**
+ * req_handle - Data structure to store request information
+ * @list: Points to req_handle_list maintained per connection.
+ * @conn_h: Connection handle on which the concerned request is received.
+ * @msg_id: Message ID of the request.
+ * @txn_id: Transaction ID of the request.
+ */
+struct req_handle {
+ struct list_head list;
+ struct qmi_svc_clnt_conn *conn_h;
+ uint16_t msg_id;
+ uint16_t txn_id;
+};
+
+/**
+ * qmi_svc_clnt_conn - Data structure to identify client service connection
+ * @list: List to chain up the client connection to the connection list.
+ * @svc_handle: Service side information of the connection.
+ * @clnt_addr: Client side information of the connection.
+ * @clnt_addr_len: Length of the client address.
+ * @req_handle_list: Pending requests in this connection.
+ * @resume_tx_work: Work to resume pending transmissions on this connection.
+ * @pending_txn_list: Pending response/indications awaiting flow control.
+ * @pending_txn_lock: Lock to protect @pending_txn_list.
+ */
+struct qmi_svc_clnt_conn {
+ struct list_head list;
+ void *svc_handle;
+ void *clnt_addr;
+ size_t clnt_addr_len;
+ struct list_head req_handle_list;
+ struct delayed_work resume_tx_work;
+ struct list_head pending_txn_list;
+ struct mutex pending_txn_lock;
+};
+
+#endif
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 2e02f00..0e04308 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -137,6 +137,7 @@ enum iommu_attr {
DOMAIN_ATTR_SECURE_VMID,
DOMAIN_ATTR_FAST,
DOMAIN_ATTR_PGTBL_INFO,
+ DOMAIN_ATTR_USE_UPSTREAM_HINT,
DOMAIN_ATTR_MAX,
};
diff --git a/include/linux/ipc_router.h b/include/linux/ipc_router.h
new file mode 100644
index 0000000..8adf723
--- /dev/null
+++ b/include/linux/ipc_router.h
@@ -0,0 +1,346 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _IPC_ROUTER_H
+#define _IPC_ROUTER_H
+
+#include <linux/types.h>
+#include <linux/socket.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/pm.h>
+#include <linux/msm_ipc.h>
+#include <linux/device.h>
+#include <linux/kref.h>
+
+/* Maximum Wakeup Source Name Size */
+#define MAX_WS_NAME_SZ 32
+
+#define IPC_RTR_ERR(buf, ...) \
+ pr_err("IPC_RTR: " buf, __VA_ARGS__)
+
+/**
+ * enum msm_ipc_router_event - Events that will be generated by IPC Router
+ */
+enum msm_ipc_router_event {
+ IPC_ROUTER_CTRL_CMD_DATA = 1,
+ IPC_ROUTER_CTRL_CMD_HELLO,
+ IPC_ROUTER_CTRL_CMD_BYE,
+ IPC_ROUTER_CTRL_CMD_NEW_SERVER,
+ IPC_ROUTER_CTRL_CMD_REMOVE_SERVER,
+ IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT,
+ IPC_ROUTER_CTRL_CMD_RESUME_TX,
+};
+
+/**
+ * rr_control_msg - Control message structure
+ * @cmd: Command identifier for HELLO message in Version 1.
+ * @hello: Message structure for HELLO message in Version 2.
+ * @srv: Message structure for NEW_SERVER/REMOVE_SERVER events.
+ * @cli: Message structure for REMOVE_CLIENT event.
+ */
+union rr_control_msg {
+ uint32_t cmd;
+ struct {
+ uint32_t cmd;
+ uint32_t checksum;
+ uint32_t versions;
+ uint32_t capability;
+ uint32_t reserved;
+ } hello;
+ struct {
+ uint32_t cmd;
+ uint32_t service;
+ uint32_t instance;
+ uint32_t node_id;
+ uint32_t port_id;
+ } srv;
+ struct {
+ uint32_t cmd;
+ uint32_t node_id;
+ uint32_t port_id;
+ } cli;
+};
+
+struct comm_mode_info {
+ int mode;
+ void *xprt_info;
+};
+
+enum ipc_rtr_af_event_type {
+ IPCRTR_AF_INIT = 1,
+ IPCRTR_AF_DEINIT,
+};
+
+/**
+ * msm_ipc_port - Definition of IPC Router port
+ * @list: List(local/control ports) in which this port is present.
+ * @ref: Reference count for this port.
+ * @this_port: Contains port's node_id and port_id information.
+ * @port_name: Contains service & instance info if the port hosts a service.
+ * @type: Type of the port - Client, Service, Control or Security Config.
+ * @flags: Flags to identify the port state.
+ * @port_lock_lhc3: Lock to protect access to the port information.
+ * @mode_info: Communication mode of the port owner.
+ * @port_rx_q: Receive queue where incoming messages are queued.
+ * @port_rx_q_lock_lhc3: Lock to protect access to the port's rx_q.
+ * @rx_ws_name: Name of the receive wakeup source.
+ * @port_rx_ws: Wakeup source to prevent suspend until the rx_q is empty.
+ * @port_rx_wait_q: Wait queue to wait for the incoming messages.
+ * @restart_state: Flag to hold the restart state information.
+ * @restart_lock: Lock to protect access to the restart_state.
+ * @restart_wait: Wait Queue to wait for any restart events.
+ * @endpoint: Contains the information related to user-space interface.
+ * @notify: Function to notify the incoming events on the port.
+ * @check_send_permissions: Function to check access control from this port.
+ * @num_tx: Number of packets transmitted.
+ * @num_rx: Number of packets received.
+ * @num_tx_bytes: Number of bytes transmitted.
+ * @num_rx_bytes: Number of bytes received.
+ * @priv: Private information registered by the port owner.
+ */
+struct msm_ipc_port {
+ struct list_head list;
+ struct kref ref;
+
+ struct msm_ipc_port_addr this_port;
+ struct msm_ipc_port_name port_name;
+ uint32_t type;
+ unsigned int flags;
+ struct mutex port_lock_lhc3;
+ struct comm_mode_info mode_info;
+
+ struct msm_ipc_port_addr dest_addr;
+ int conn_status;
+
+ struct list_head port_rx_q;
+ struct mutex port_rx_q_lock_lhc3;
+ char rx_ws_name[MAX_WS_NAME_SZ];
+ struct wakeup_source *port_rx_ws;
+ wait_queue_head_t port_rx_wait_q;
+ wait_queue_head_t port_tx_wait_q;
+
+ int restart_state;
+ spinlock_t restart_lock;
+ wait_queue_head_t restart_wait;
+
+ void *rport_info;
+ void *endpoint;
+ void (*notify)(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv);
+ int (*check_send_permissions)(void *data);
+
+ uint32_t num_tx;
+ uint32_t num_rx;
+ unsigned long num_tx_bytes;
+ unsigned long num_rx_bytes;
+ void *priv;
+};
+
+#ifdef CONFIG_IPC_ROUTER
+/**
+ * msm_ipc_router_create_port() - Create an IPC Router port/endpoint
+ * @notify: Callback function to notify any event on the port.
+ * @priv: Private info to be passed when the notification is generated.
+ *
+ * @return: Pointer to the port on success, NULL on error.
+ */
+struct msm_ipc_port *msm_ipc_router_create_port(
+ void (*notify)(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv),
+ void *priv);
+
+/**
+ * msm_ipc_router_bind_control_port() - Bind a port as a control port
+ * @port_ptr: Port which needs to be marked as a control port.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ */
+int msm_ipc_router_bind_control_port(struct msm_ipc_port *port_ptr);
+
+/**
+ * msm_ipc_router_lookup_server_name() - Resolve server address
+ * @srv_name: Name<service:instance> of the server to be resolved.
+ * @srv_info: Buffer to hold the resolved address.
+ * @num_entries_in_array: Number of server info the buffer can hold.
+ * @lookup_mask: Mask to specify the range of instances to be resolved.
+ *
+ * @return: Number of server addresses resolved on success, < 0 on error.
+ */
+int msm_ipc_router_lookup_server_name(struct msm_ipc_port_name *srv_name,
+ struct msm_ipc_server_info *srv_info,
+ int num_entries_in_array,
+ uint32_t lookup_mask);
+
+/**
+ * msm_ipc_router_send_msg() - Send a message/packet
+ * @src: Sender's address/port.
+ * @dest: Destination address.
+ * @data: Pointer to the data to be sent.
+ * @data_len: Length of the data to be sent.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int msm_ipc_router_send_msg(struct msm_ipc_port *src,
+ struct msm_ipc_addr *dest,
+ void *data, unsigned int data_len);
+
+/**
+ * msm_ipc_router_get_curr_pkt_size() - Get the packet size of the first
+ * packet in the rx queue
+ * @port_ptr: Port which owns the rx queue.
+ *
+ * @return: Returns the size of the first packet, if available.
+ * 0 if no packets available, < 0 on error.
+ */
+int msm_ipc_router_get_curr_pkt_size(struct msm_ipc_port *port_ptr);
+
+/**
+ * msm_ipc_router_read_msg() - Read a message/packet
+ * @port_ptr: Receiver's port/address.
+ * @data: Pointer containing the address of the received data.
+ * @src: Address of the sender/source.
+ * @len: Length of the data being read.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int msm_ipc_router_read_msg(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *src,
+ unsigned char **data,
+ unsigned int *len);
+
+/**
+ * msm_ipc_router_close_port() - Close the port
+ * @port_ptr: Pointer to the port to be closed.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int msm_ipc_router_close_port(struct msm_ipc_port *port_ptr);
+
+/**
+ * msm_ipc_router_register_server() - Register a service on a port
+ * @server_port: IPC Router port with which a service is registered.
+ * @name: Service name <service_id:instance_id> that gets registered.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ */
+int msm_ipc_router_register_server(struct msm_ipc_port *server_port,
+ struct msm_ipc_addr *name);
+
+/**
+ * msm_ipc_router_unregister_server() - Unregister a service from a port
+ * @server_port: Port with which a service is already registered.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ */
+int msm_ipc_router_unregister_server(struct msm_ipc_port *server_port);
+
+/**
+ * register_ipcrtr_af_init_notifier() - Register for ipc router socket
+ * address family initialization callback
+ * @nb: Notifier block which will be notified once address family is
+ * initialized.
+ *
+ * Return: 0 on success, standard error code otherwise.
+ */
+int register_ipcrtr_af_init_notifier(struct notifier_block *nb);
+
+/**
+ * unregister_ipcrtr_af_init_notifier() - Unregister for ipc router socket
+ * address family initialization callback
+ * @nb: Notifier block which will be notified once address family is
+ * initialized.
+ *
+ * Return: 0 on success, standard error code otherwise.
+ */
+int unregister_ipcrtr_af_init_notifier(struct notifier_block *nb);
+
+#else
+
+static inline struct msm_ipc_port *msm_ipc_router_create_port(
+ void (*notify)(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv),
+ void *priv)
+{
+ return NULL;
+}
+
+static inline int msm_ipc_router_bind_control_port(
+ struct msm_ipc_port *port_ptr)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_lookup_server_name(
+ struct msm_ipc_port_name *srv_name,
+ struct msm_ipc_server_info *srv_info,
+ int num_entries_in_array,
+ uint32_t lookup_mask)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_send_msg(struct msm_ipc_port *src,
+ struct msm_ipc_addr *dest,
+ void *data, unsigned int data_len)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_get_curr_pkt_size(
+ struct msm_ipc_port *port_ptr)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_read_msg(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *src,
+ unsigned char **data,
+ unsigned int *len)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_close_port(struct msm_ipc_port *port_ptr)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_register_server(
+ struct msm_ipc_port *server_port,
+ struct msm_ipc_addr *name)
+{
+ return -ENODEV;
+}
+
+static inline int msm_ipc_router_unregister_server(
+ struct msm_ipc_port *server_port)
+{
+ return -ENODEV;
+}
+
+static inline int register_ipcrtr_af_init_notifier(struct notifier_block *nb)
+{
+ return -ENODEV;
+}
+
+static inline int unregister_ipcrtr_af_init_notifier(struct notifier_block *nb)
+{
+ return -ENODEV;
+}
+
+#endif
+
+#endif
diff --git a/include/linux/ipc_router_xprt.h b/include/linux/ipc_router_xprt.h
new file mode 100644
index 0000000..e33a10a
--- /dev/null
+++ b/include/linux/ipc_router_xprt.h
@@ -0,0 +1,175 @@
+/* Copyright (c) 2011-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _IPC_ROUTER_XPRT_H
+#define _IPC_ROUTER_XPRT_H
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/platform_device.h>
+#include <linux/msm_ipc.h>
+#include <linux/ipc_router.h>
+#include <linux/kref.h>
+
+#define IPC_ROUTER_XPRT_EVENT_DATA 1
+#define IPC_ROUTER_XPRT_EVENT_OPEN 2
+#define IPC_ROUTER_XPRT_EVENT_CLOSE 3
+
+#define FRAG_PKT_WRITE_ENABLE 0x1
+
+/**
+ * rr_header_v1 - IPC Router header version 1
+ * @version: Version information.
+ * @type: IPC Router Message Type.
+ * @src_node_id: Source Node ID of the message.
+ * @src_port_id: Source Port ID of the message.
+ * @control_flag: Flag to indicate flow control.
+ * @size: Size of the IPC Router payload.
+ * @dst_node_id: Destination Node ID of the message.
+ * @dst_port_id: Destination Port ID of the message.
+ */
+struct rr_header_v1 {
+ uint32_t version;
+ uint32_t type;
+ uint32_t src_node_id;
+ uint32_t src_port_id;
+ uint32_t control_flag;
+ uint32_t size;
+ uint32_t dst_node_id;
+ uint32_t dst_port_id;
+};
+
+/**
+ * rr_header_v2 - IPC Router header version 2
+ * @version: Version information.
+ * @type: IPC Router Message Type.
+ * @control_flag: Flags to indicate flow control, optional header etc.
+ * @opt_len: Combined size of all the optional headers in units of words.
+ * @size: Size of the IPC Router payload.
+ * @src_node_id: Source Node ID of the message.
+ * @src_port_id: Source Port ID of the message.
+ * @dst_node_id: Destination Node ID of the message.
+ * @dst_port_id: Destination Port ID of the message.
+ */
+struct rr_header_v2 {
+ uint8_t version;
+ uint8_t type;
+ uint8_t control_flag;
+ uint8_t opt_len;
+ uint32_t size;
+ uint16_t src_node_id;
+ uint16_t src_port_id;
+ uint16_t dst_node_id;
+ uint16_t dst_port_id;
+} __attribute__((__packed__));
+
+union rr_header {
+ struct rr_header_v1 hdr_v1;
+ struct rr_header_v2 hdr_v2;
+};
+
+/**
+ * rr_opt_hdr - Optional header for IPC Router header version 2
+ * @len: Total length of the optional header.
+ * @data: Pointer to the actual optional header.
+ */
+struct rr_opt_hdr {
+ size_t len;
+ unsigned char *data;
+};
+
+#define IPC_ROUTER_HDR_SIZE sizeof(union rr_header)
+#define IPCR_WORD_SIZE 4
+
+/**
+ * rr_packet - Router to Router packet structure
+ * @list: Pointer to prev & next packets in a port's rx list.
+ * @hdr: Header information extracted from or prepended to a packet.
+ * @opt_hdr: Optional header information.
+ * @pkt_fragment_q: Queue of SKBs containing payload.
+ * @length: Length of data in the chain of SKBs.
+ * @ref: Reference count for the packet.
+ */
+struct rr_packet {
+ struct list_head list;
+ struct rr_header_v1 hdr;
+ struct rr_opt_hdr opt_hdr;
+ struct sk_buff_head *pkt_fragment_q;
+ uint32_t length;
+ struct kref ref;
+};
+
+/**
+ * msm_ipc_router_xprt - Structure to hold XPRT specific information
+ * @name: Name of the XPRT.
+ * @link_id: Network cluster ID to which the XPRT belongs.
+ * @priv: XPRT's private data.
+ * @get_version: Method to get header version supported by the XPRT.
+ * @set_version: Method to set header version in XPRT.
+ * @get_option: Method to get XPRT specific options.
+ * @read_avail: Method to get data size available to be read from the XPRT.
+ * @read: Method to read data from the XPRT.
+ * @write_avail: Method to get write space available in the XPRT.
+ * @write: Method to write data to the XPRT.
+ * @close: Method to close the XPRT.
+ * @sft_close_done: Method to indicate to the XPRT that handling of reset
+ * event is complete.
+ */
+struct msm_ipc_router_xprt {
+ char *name;
+ uint32_t link_id;
+ void *priv;
+
+ int (*get_version)(struct msm_ipc_router_xprt *xprt);
+ int (*get_option)(struct msm_ipc_router_xprt *xprt);
+ void (*set_version)(struct msm_ipc_router_xprt *xprt,
+ unsigned int version);
+ int (*read_avail)(struct msm_ipc_router_xprt *xprt);
+ int (*read)(void *data, uint32_t len,
+ struct msm_ipc_router_xprt *xprt);
+ int (*write_avail)(struct msm_ipc_router_xprt *xprt);
+ int (*write)(void *data, uint32_t len,
+ struct msm_ipc_router_xprt *xprt);
+ int (*close)(struct msm_ipc_router_xprt *xprt);
+ void (*sft_close_done)(struct msm_ipc_router_xprt *xprt);
+};
+
+void msm_ipc_router_xprt_notify(struct msm_ipc_router_xprt *xprt,
+ unsigned int event,
+ void *data);
+
+/**
+ * create_pkt() - Create a Router packet
+ * @data: SKB queue to be contained inside the packet.
+ *
+ * @return: pointer to packet on success, NULL on failure.
+ */
+struct rr_packet *create_pkt(struct sk_buff_head *data);
+struct rr_packet *clone_pkt(struct rr_packet *pkt);
+void release_pkt(struct rr_packet *pkt);
+
+/**
+ * ipc_router_peek_pkt_size() - Peek into the packet header to get potential
+ * packet size
+ * @data: Starting address of the packet which points to router header.
+ *
+ * @returns: potential packet size on success, < 0 on error.
+ *
+ * This function is used by the underlying transport abstraction layer to
+ * peek into the potential packet size of an incoming packet. This information
+ * is used to perform link layer fragmentation and re-assembly.
+ */
+int ipc_router_peek_pkt_size(char *data);
+
+#endif
diff --git a/include/linux/qmi_encdec.h b/include/linux/qmi_encdec.h
new file mode 100644
index 0000000..66c3d84
--- /dev/null
+++ b/include/linux/qmi_encdec.h
@@ -0,0 +1,184 @@
+/* Copyright (c) 2012-2014, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _QMI_ENCDEC_H_
+#define _QMI_ENCDEC_H_
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/gfp.h>
+
+#define QMI_REQUEST_CONTROL_FLAG 0x00
+#define QMI_RESPONSE_CONTROL_FLAG 0x02
+#define QMI_INDICATION_CONTROL_FLAG 0x04
+#define QMI_HEADER_SIZE 7
+
+/**
+ * elem_type - Enum to identify the data type of elements in a data
+ * structure.
+ */
+enum elem_type {
+ QMI_OPT_FLAG = 1,
+ QMI_DATA_LEN,
+ QMI_UNSIGNED_1_BYTE,
+ QMI_UNSIGNED_2_BYTE,
+ QMI_UNSIGNED_4_BYTE,
+ QMI_UNSIGNED_8_BYTE,
+ QMI_SIGNED_2_BYTE_ENUM,
+ QMI_SIGNED_4_BYTE_ENUM,
+ QMI_STRUCT,
+ QMI_STRING,
+ QMI_EOTI,
+};
+
+/**
+ * array_type - Enum to identify whether an element in a data structure is
+ *              an array and, if so, whether it is a static-length or a
+ *              variable-length array.
+ */
+enum array_type {
+ NO_ARRAY = 0,
+ STATIC_ARRAY = 1,
+ VAR_LEN_ARRAY = 2,
+};
+
+/**
+ * elem_info - Data structure to specify information about an element
+ * in a data structure. An array of this data structure
+ * can be used to specify info about a complex data
+ * structure to be encoded/decoded.
+ *
+ * @data_type: Data type of this element.
+ * @elem_len: Array length of this element, if an array.
+ * @elem_size: Size of a single instance of this data type.
+ * @is_array: Array type of this element.
+ * @tlv_type: QMI message specific type to identify which element
+ * is present in an incoming message.
+ * @offset: To identify the address of the first instance of this
+ * element in the data structure.
+ * @ei_array: Array to provide information about the nested structure
+ * within a data structure to be encoded/decoded.
+ */
+struct elem_info {
+ enum elem_type data_type;
+ uint32_t elem_len;
+ uint32_t elem_size;
+ enum array_type is_array;
+ uint8_t tlv_type;
+ uint32_t offset;
+ struct elem_info *ei_array;
+};
+
+/**
+ * msg_desc - Descriptor of the main/outer structure to be
+ *            encoded/decoded.
+ *
+ * @msg_id: Message ID that identifies the QMI message.
+ * @max_msg_len: Maximum possible length of the QMI message.
+ * @ei_array: Array to provide information about a data structure.
+ */
+struct msg_desc {
+ uint16_t msg_id;
+ int max_msg_len;
+ struct elem_info *ei_array;
+};
+
+struct qmi_header {
+ unsigned char cntl_flag;
+ uint16_t txn_id;
+ uint16_t msg_id;
+ uint16_t msg_len;
+} __attribute__((__packed__));
+
+static inline void encode_qmi_header(unsigned char *buf,
+ unsigned char cntl_flag, uint16_t txn_id,
+ uint16_t msg_id, uint16_t msg_len)
+{
+ struct qmi_header *hdr = (struct qmi_header *)buf;
+
+ hdr->cntl_flag = cntl_flag;
+ hdr->txn_id = txn_id;
+ hdr->msg_id = msg_id;
+ hdr->msg_len = msg_len;
+}
+
+static inline void decode_qmi_header(unsigned char *buf,
+ unsigned char *cntl_flag, uint16_t *txn_id,
+ uint16_t *msg_id, uint16_t *msg_len)
+{
+ struct qmi_header *hdr = (struct qmi_header *)buf;
+
+ *cntl_flag = hdr->cntl_flag;
+ *txn_id = hdr->txn_id;
+ *msg_id = hdr->msg_id;
+ *msg_len = hdr->msg_len;
+}
+
+#ifdef CONFIG_QMI_ENCDEC
+/**
+ * qmi_kernel_encode() - Encode to QMI message wire format
+ * @desc: Pointer to structure descriptor.
+ * @out_buf: Buffer to hold the encoded QMI message.
+ * @out_buf_len: Length of the out buffer.
+ * @in_c_struct: C Structure to be encoded.
+ *
+ * @return: size of encoded message on success, < 0 on error.
+ */
+int qmi_kernel_encode(struct msg_desc *desc,
+ void *out_buf, uint32_t out_buf_len,
+ void *in_c_struct);
+
+/**
+ * qmi_kernel_decode() - Decode to C Structure format
+ * @desc: Pointer to structure descriptor.
+ * @out_c_struct: Buffer to hold the decoded C structure.
+ * @in_buf: Buffer containing the QMI message to be decoded.
+ * @in_buf_len: Length of the incoming QMI message.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_kernel_decode(struct msg_desc *desc, void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len);
+
+/**
+ * qmi_verify_max_msg_len() - Verify the maximum length of a QMI message
+ * @desc: Pointer to structure descriptor.
+ *
+ * @return: true if the maximum message length embedded in structure
+ * descriptor matches the calculated value, else false.
+ */
+bool qmi_verify_max_msg_len(struct msg_desc *desc);
+
+#else
+static inline int qmi_kernel_encode(struct msg_desc *desc,
+ void *out_buf, uint32_t out_buf_len,
+ void *in_c_struct)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline int qmi_kernel_decode(struct msg_desc *desc,
+ void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len)
+{
+ return -EOPNOTSUPP;
+}
+
+static inline bool qmi_verify_max_msg_len(struct msg_desc *desc)
+{
+ return false;
+}
+#endif
+
+#endif
diff --git a/include/soc/qcom/msm_qmi_interface.h b/include/soc/qcom/msm_qmi_interface.h
new file mode 100644
index 0000000..349ca2f
--- /dev/null
+++ b/include/soc/qcom/msm_qmi_interface.h
@@ -0,0 +1,501 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _MSM_QMI_INTERFACE_H_
+#define _MSM_QMI_INTERFACE_H_
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/gfp.h>
+#include <linux/qmi_encdec.h>
+#include <linux/workqueue.h>
+
+#define QMI_COMMON_TLV_TYPE 0
+
+enum qmi_event_type {
+ QMI_RECV_MSG = 1,
+ QMI_SERVER_ARRIVE,
+ QMI_SERVER_EXIT,
+};
+
+/**
+ * struct qmi_handle - QMI Handle Data Structure
+ * @handle_hash: Hash Table Node in which this handle is present.
+ * @src_port: Pointer to port used for message exchange.
+ * @ctl_port: Pointer to port used for out-of-band event exchange.
+ * @handle_type: Type of handle (service/client).
+ * @next_txn_id: Transaction ID of the next outgoing request.
+ * @handle_wq: Workqueue to handle any handle-specific events.
+ * @handle_lock: Lock to protect access to elements in the handle.
+ * @notify_lock: Lock to protect and generate notification atomically.
+ * @notify: Function to notify the handle owner of an event.
+ * @notify_priv: Private info to be passed during the notification.
+ * @handle_reset: Flag to hold the reset state of the handle.
+ * @reset_waitq: Wait queue to wait for any reset events.
+ * @ctl_work: Work to handle the out-of-band events for this handle.
+ * @dest_info: Destination to which this handle is connected.
+ * @dest_service_id: Service ID of the service to which the client is connected.
+ * @txn_list: List of transactions waiting for the response.
+ * @ind_cb: Function to notify the handle owner of an indication message.
+ * @ind_cb_priv: Private info to be passed during an indication notification.
+ * @resume_tx_work: Work to resume the tx when the transport is not busy.
+ * @pending_txn_list: List of requests pending tx due to busy transport.
+ * @conn_list: List of connections handled by the service.
+ * @svc_ops_options: Service specific operations and options.
+ */
+struct qmi_handle {
+ struct hlist_node handle_hash;
+ void *src_port;
+ void *ctl_port;
+ unsigned int handle_type;
+ uint16_t next_txn_id;
+ struct workqueue_struct *handle_wq;
+ struct mutex handle_lock;
+ spinlock_t notify_lock;
+ void (*notify)(struct qmi_handle *handle, enum qmi_event_type event,
+ void *notify_priv);
+ void *notify_priv;
+ int handle_reset;
+ wait_queue_head_t reset_waitq;
+ struct delayed_work ctl_work;
+
+ /* Client specific elements */
+ void *dest_info;
+ uint32_t dest_service_id;
+ struct list_head txn_list;
+ void (*ind_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ unsigned int msg_len, void *ind_cb_priv);
+ void *ind_cb_priv;
+ struct delayed_work resume_tx_work;
+ struct list_head pending_txn_list;
+
+ /* Service specific elements */
+ struct list_head conn_list;
+ struct qmi_svc_ops_options *svc_ops_options;
+};
+
+enum qmi_result_type_v01 {
+ /* To force a 32 bit signed enum. Do not change or use*/
+ QMI_RESULT_TYPE_MIN_ENUM_VAL_V01 = INT_MIN,
+ QMI_RESULT_SUCCESS_V01 = 0,
+ QMI_RESULT_FAILURE_V01 = 1,
+ QMI_RESULT_TYPE_MAX_ENUM_VAL_V01 = INT_MAX,
+};
+
+enum qmi_error_type_v01 {
+ /* To force a 32 bit signed enum. Do not change or use*/
+ QMI_ERR_TYPE_MIN_ENUM_VAL_V01 = INT_MIN,
+ QMI_ERR_NONE_V01 = 0x0000,
+ QMI_ERR_MALFORMED_MSG_V01 = 0x0001,
+ QMI_ERR_NO_MEMORY_V01 = 0x0002,
+ QMI_ERR_INTERNAL_V01 = 0x0003,
+ QMI_ERR_CLIENT_IDS_EXHAUSTED_V01 = 0x0005,
+ QMI_ERR_INVALID_ID_V01 = 0x0029,
+ QMI_ERR_ENCODING_V01 = 0x003A,
+ QMI_ERR_INCOMPATIBLE_STATE_V01 = 0x005A,
+ QMI_ERR_NOT_SUPPORTED_V01 = 0x005E,
+ QMI_ERR_TYPE_MAX_ENUM_VAL_V01 = INT_MAX,
+};
+
+struct qmi_response_type_v01 {
+ enum qmi_result_type_v01 result;
+ enum qmi_error_type_v01 error;
+};
+
+/**
+ * qmi_svc_ops_options - Operations and options to be specified when
+ * a service registers.
+ * @version: Version field to identify the ops_options structure.
+ * @service_id: Service ID of the service.
+ * @service_vers: Version to identify the client-service compatibility.
+ * @service_ins: Instance ID registered by the service.
+ * @connect_cb: Callback when a new client connects with the service.
+ * @disconnect_cb: Callback when the client exits the connection.
+ * @req_desc_cb: Callback to get request structure and its descriptor
+ * for a message id.
+ * @req_cb: Callback to process the request.
+ */
+struct qmi_svc_ops_options {
+ unsigned int version;
+ uint32_t service_id;
+ uint32_t service_vers;
+ uint32_t service_ins;
+ int (*connect_cb)(struct qmi_handle *handle,
+ void *conn_handle);
+ int (*disconnect_cb)(struct qmi_handle *handle,
+ void *conn_handle);
+ int (*req_desc_cb)(unsigned int msg_id,
+ struct msg_desc **req_desc);
+ int (*req_cb)(struct qmi_handle *handle,
+ void *conn_handle,
+ void *req_handle,
+ unsigned int msg_id,
+ void *req);
+};
+
+#ifdef CONFIG_MSM_QMI_INTERFACE
+
+/* Element info array describing common qmi response structure */
+extern struct elem_info qmi_response_type_v01_ei[];
+#define get_qmi_response_type_v01_ei() qmi_response_type_v01_ei
+
+/**
+ * qmi_handle_create() - Create a QMI handle
+ * @notify: Callback to notify events on the handle created.
+ * @notify_priv: Private information to be passed along with the notification.
+ *
+ * @return: Valid QMI handle on success, NULL on error.
+ */
+struct qmi_handle *qmi_handle_create(
+ void (*notify)(struct qmi_handle *handle,
+ enum qmi_event_type event, void *notify_priv),
+ void *notify_priv);
+
+/**
+ * qmi_handle_destroy() - Destroy the QMI handle
+ * @handle: QMI handle to be destroyed.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_handle_destroy(struct qmi_handle *handle);
+
+/**
+ * qmi_register_ind_cb() - Register the indication callback function
+ * @handle: QMI handle with which the function is registered.
+ * @ind_cb: Callback function to be registered.
+ * @ind_cb_priv: Private data to be passed with the indication callback.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_register_ind_cb(struct qmi_handle *handle,
+ void (*ind_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ unsigned int msg_len, void *ind_cb_priv),
+ void *ind_cb_priv);
+
+/**
+ * qmi_send_req_wait() - Send a synchronous QMI request
+ * @handle: QMI handle through which the QMI request is sent.
+ * @req_desc: Structure describing the request data structure.
+ * @req: Buffer containing the request data structure.
+ * @req_len: Length of the request data structure.
+ * @resp_desc: Structure describing the response data structure.
+ * @resp: Buffer to hold the response data structure.
+ * @resp_len: Length of the response data structure.
+ * @timeout_ms: Timeout before a response is received.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_req_wait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ unsigned long timeout_ms);
+
+/**
+ * qmi_send_req_nowait() - Send an asynchronous QMI request
+ * @handle: QMI handle through which the QMI request is sent.
+ * @req_desc: Structure describing the request data structure.
+ * @req: Buffer containing the request data structure.
+ * @req_len: Length of the request data structure.
+ * @resp_desc: Structure describing the response data structure.
+ * @resp: Buffer to hold the response data structure.
+ * @resp_len: Length of the response data structure.
+ * @resp_cb: Callback function to be invoked when the response arrives.
+ * @resp_cb_data: Private information to be passed along with the callback.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_req_nowait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ void (*resp_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ void *resp_cb_data,
+ int stat),
+ void *resp_cb_data);
+
+/**
+ * qmi_recv_msg() - Receive the QMI message
+ * @handle: Handle for which the QMI message has to be received.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_recv_msg(struct qmi_handle *handle);
+
+/**
+ * qmi_connect_to_service() - Connect the QMI handle with a QMI service
+ * @handle: QMI handle to be connected with the QMI service.
+ * @service_id: Service id to identify the QMI service.
+ * @service_vers: Version to identify the compatibility.
+ * @service_ins: Instance id to identify the instance of the QMI service.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_connect_to_service(struct qmi_handle *handle,
+ uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins);
+
+/**
+ * qmi_svc_event_notifier_register() - Register a notifier block to receive
+ * events regarding a QMI service
+ * @service_id: Service ID to identify the QMI service.
+ * @service_vers: Version to identify the compatibility.
+ * @service_ins: Instance ID to identify the instance of the QMI service.
+ * @nb: Notifier block used to receive the event.
+ *
+ * @return: 0 if successfully registered, < 0 on error.
+ */
+int qmi_svc_event_notifier_register(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb);
+
+/**
+ * qmi_svc_event_notifier_unregister() - Unregister service event
+ * notifier block
+ * @service_id: Service ID to identify the QMI service.
+ * @service_vers: Version to identify the compatibility.
+ * @service_ins: Instance ID to identify the instance of the QMI service.
+ * @nb: Notifier block registered to receive the events.
+ *
+ * @return: 0 if successfully unregistered, < 0 on error.
+ */
+int qmi_svc_event_notifier_unregister(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb);
+
+/**
+ * qmi_svc_register() - Register a QMI service with a QMI handle
+ * @handle: QMI handle on which the service has to be registered.
+ * @ops_options: Service specific operations and options.
+ *
+ * @return: 0 if successfully registered, < 0 on error.
+ */
+int qmi_svc_register(struct qmi_handle *handle,
+ void *ops_options);
+
+/**
+ * qmi_send_resp() - Send response to a request
+ * @handle: QMI handle from which the response is sent.
+ * @conn_handle: Connection handle of the client to which the response is sent.
+ * @req_handle: Request for which the response is sent.
+ * @resp_desc: Descriptor explaining the response structure.
+ * @resp: Pointer to the response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_resp(struct qmi_handle *handle,
+ void *conn_handle,
+ void *req_handle,
+ struct msg_desc *resp_desc,
+ void *resp,
+ unsigned int resp_len);
+
+/**
+ * qmi_send_resp_from_cb() - Send response to a request from request_cb
+ * @handle: QMI handle from which the response is sent.
+ * @conn_handle: Connection handle of the client to which the response is sent.
+ * @req_handle: Request for which the response is sent.
+ * @resp_desc: Descriptor explaining the response structure.
+ * @resp: Pointer to the response structure.
+ * @resp_len: Length of the response structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_resp_from_cb(struct qmi_handle *handle,
+ void *conn_handle,
+ void *req_handle,
+ struct msg_desc *resp_desc,
+ void *resp,
+ unsigned int resp_len);
+
+/**
+ * qmi_send_ind() - Send unsolicited event/indication to a client
+ * @handle: QMI handle from which the indication is sent.
+ * @conn_handle: Connection handle of the client to which the indication is sent.
+ * @ind_desc: Descriptor explaining the indication structure.
+ * @ind: Pointer to the indication structure.
+ * @ind_len: Length of the indication structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_ind(struct qmi_handle *handle,
+ void *conn_handle,
+ struct msg_desc *ind_desc,
+ void *ind,
+ unsigned int ind_len);
+
+/**
+ * qmi_send_ind_from_cb() - Send indication to a client from registration_cb
+ * @handle: QMI handle from which the indication is sent.
+ * @conn_handle: Connection handle of the client to which the indication is sent.
+ * @ind_desc: Descriptor explaining the indication structure.
+ * @ind: Pointer to the indication structure.
+ * @ind_len: Length of the indication structure.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_send_ind_from_cb(struct qmi_handle *handle,
+ void *conn_handle,
+ struct msg_desc *ind_desc,
+ void *ind,
+ unsigned int ind_len);
+
+/**
+ * qmi_svc_unregister() - Unregister the service from a QMI handle
+ * @handle: QMI handle from which the service has to be unregistered.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_svc_unregister(struct qmi_handle *handle);
+
+#else
+
+#define get_qmi_response_type_v01_ei() NULL
+
+static inline struct qmi_handle *qmi_handle_create(
+ void (*notify)(struct qmi_handle *handle,
+ enum qmi_event_type event, void *notify_priv),
+ void *notify_priv)
+{
+ return NULL;
+}
+
+static inline int qmi_handle_destroy(struct qmi_handle *handle)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_register_ind_cb(struct qmi_handle *handle,
+ void (*ind_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+ unsigned int msg_len, void *ind_cb_priv),
+ void *ind_cb_priv)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_req_wait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ unsigned long timeout_ms)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_req_nowait(struct qmi_handle *handle,
+ struct msg_desc *req_desc,
+ void *req, unsigned int req_len,
+ struct msg_desc *resp_desc,
+ void *resp, unsigned int resp_len,
+ void (*resp_cb)(struct qmi_handle *handle,
+ unsigned int msg_id, void *msg,
+					void *resp_cb_data,
+					int stat),
+ void *resp_cb_data)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_recv_msg(struct qmi_handle *handle)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_connect_to_service(struct qmi_handle *handle,
+ uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_svc_event_notifier_register(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_svc_event_notifier_unregister(uint32_t service_id,
+ uint32_t service_vers,
+ uint32_t service_ins,
+ struct notifier_block *nb)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_svc_register(struct qmi_handle *handle,
+ void *ops_options)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_resp(struct qmi_handle *handle,
+ void *conn_handle,
+ void *req_handle,
+ struct msg_desc *resp_desc,
+ void *resp,
+ unsigned int resp_len)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_resp_from_cb(struct qmi_handle *handle,
+ void *conn_handle,
+ void *req_handle,
+ struct msg_desc *resp_desc,
+ void *resp,
+ unsigned int resp_len)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_ind(struct qmi_handle *handle,
+ void *conn_handle,
+ struct msg_desc *ind_desc,
+ void *ind,
+ unsigned int ind_len)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_send_ind_from_cb(struct qmi_handle *handle,
+ void *conn_handle,
+ struct msg_desc *ind_desc,
+ void *ind,
+ unsigned int ind_len)
+{
+ return -ENODEV;
+}
+
+static inline int qmi_svc_unregister(struct qmi_handle *handle)
+{
+ return -ENODEV;
+}
+
+#endif
+
+#endif
diff --git a/include/uapi/linux/Kbuild b/include/uapi/linux/Kbuild
index 08e84e6..8f09a32 100644
--- a/include/uapi/linux/Kbuild
+++ b/include/uapi/linux/Kbuild
@@ -281,6 +281,7 @@
header-y += msdos_fs.h
header-y += msg.h
header-y += msm_ion.h
+header-y += msm_ipc.h
header-y += msm_rmnet.h
header-y += mtio.h
header-y += nbd.h
diff --git a/include/uapi/linux/msm_ipc.h b/include/uapi/linux/msm_ipc.h
new file mode 100644
index 0000000..29711c0
--- /dev/null
+++ b/include/uapi/linux/msm_ipc.h
@@ -0,0 +1,91 @@
+#ifndef _UAPI_MSM_IPC_H_
+#define _UAPI_MSM_IPC_H_
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+struct msm_ipc_port_addr {
+ uint32_t node_id;
+ uint32_t port_id;
+};
+
+struct msm_ipc_port_name {
+ uint32_t service;
+ uint32_t instance;
+};
+
+struct msm_ipc_addr {
+ unsigned char addrtype;
+ union {
+ struct msm_ipc_port_addr port_addr;
+ struct msm_ipc_port_name port_name;
+ } addr;
+};
+
+#define MSM_IPC_WAIT_FOREVER (~0) /* timeout for permanent subscription */
+
+/*
+ * Socket API
+ */
+
+#ifndef AF_MSM_IPC
+#define AF_MSM_IPC 27
+#endif
+
+#ifndef PF_MSM_IPC
+#define PF_MSM_IPC AF_MSM_IPC
+#endif
+
+#define MSM_IPC_ADDR_NAME 1
+#define MSM_IPC_ADDR_ID 2
+
+struct sockaddr_msm_ipc {
+ unsigned short family;
+ struct msm_ipc_addr address;
+ unsigned char reserved;
+};
+
+struct config_sec_rules_args {
+ int num_group_info;
+ uint32_t service_id;
+ uint32_t instance_id;
+ unsigned int reserved;
+ gid_t group_id[0];
+};
+
+#define IPC_ROUTER_IOCTL_MAGIC (0xC3)
+
+#define IPC_ROUTER_IOCTL_GET_VERSION \
+ _IOR(IPC_ROUTER_IOCTL_MAGIC, 0, unsigned int)
+
+#define IPC_ROUTER_IOCTL_GET_MTU \
+ _IOR(IPC_ROUTER_IOCTL_MAGIC, 1, unsigned int)
+
+#define IPC_ROUTER_IOCTL_LOOKUP_SERVER \
+ _IOWR(IPC_ROUTER_IOCTL_MAGIC, 2, struct sockaddr_msm_ipc)
+
+#define IPC_ROUTER_IOCTL_GET_CURR_PKT_SIZE \
+ _IOR(IPC_ROUTER_IOCTL_MAGIC, 3, unsigned int)
+
+#define IPC_ROUTER_IOCTL_BIND_CONTROL_PORT \
+ _IOR(IPC_ROUTER_IOCTL_MAGIC, 4, unsigned int)
+
+#define IPC_ROUTER_IOCTL_CONFIG_SEC_RULES \
+ _IOR(IPC_ROUTER_IOCTL_MAGIC, 5, struct config_sec_rules_args)
+
+struct msm_ipc_server_info {
+ uint32_t node_id;
+ uint32_t port_id;
+ uint32_t service;
+ uint32_t instance;
+};
+
+struct server_lookup_args {
+ struct msm_ipc_port_name port_name;
+ int num_entries_in_array;
+ int num_entries_found;
+ uint32_t lookup_mask;
+ struct msm_ipc_server_info srv_info[0];
+};
+
+#endif
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 54115b8..4274797 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -84,6 +84,17 @@
Allow the use of ring_buffer_swap_cpu.
Adds a very slight overhead to tracing when enabled.
+config IPC_LOGGING
+ bool "Debug Logging for IPC Drivers"
+ select GENERIC_TRACER
+ help
+	  The IPC Logging driver provides a logging option for IPC drivers.
+	  It provides cyclic-buffer-based logging support in a
+	  driver-specific context. The driver also provides a debugfs
+	  interface to dump the logs live.
+
+ If in doubt, say no.
+
config QCOM_RTB
bool "Register tracing"
help
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index 63ca2d7b..08e5e47 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -72,4 +72,9 @@
obj-$(CONFIG_TRACEPOINT_BENCHMARK) += trace_benchmark.o
obj-$(CONFIG_QCOM_RTB) += msm_rtb.o
+obj-$(CONFIG_IPC_LOGGING) += ipc_logging.o
+ifdef CONFIG_DEBUG_FS
+obj-$(CONFIG_IPC_LOGGING) += ipc_logging_debug.o
+endif
libftrace-y := ftrace.o
+
diff --git a/kernel/trace/ipc_logging.c b/kernel/trace/ipc_logging.c
new file mode 100644
index 0000000..62110a3
--- /dev/null
+++ b/kernel/trace/ipc_logging.c
@@ -0,0 +1,889 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <asm/arch_timer.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/jiffies.h>
+#include <linux/debugfs.h>
+#include <linux/io.h>
+#include <linux/idr.h>
+#include <linux/string.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/delay.h>
+#include <linux/completion.h>
+#include <linux/ipc_logging.h>
+
+#include "ipc_logging_private.h"
+
+#define LOG_PAGE_DATA_SIZE sizeof(((struct ipc_log_page *)0)->data)
+#define LOG_PAGE_FLAG (1 << 31)
+
+static LIST_HEAD(ipc_log_context_list);
+static DEFINE_RWLOCK(context_list_lock_lha1);
+static void *get_deserialization_func(struct ipc_log_context *ilctxt,
+ int type);
+
+static struct ipc_log_page *get_first_page(struct ipc_log_context *ilctxt)
+{
+ struct ipc_log_page_header *p_pghdr;
+ struct ipc_log_page *pg = NULL;
+
+ if (!ilctxt)
+ return NULL;
+ p_pghdr = list_first_entry(&ilctxt->page_list,
+ struct ipc_log_page_header, list);
+ pg = container_of(p_pghdr, struct ipc_log_page, hdr);
+ return pg;
+}
+
+/**
+ * is_nd_read_empty - Returns true if no data is available to read in the log
+ *
+ * @ilctxt: logging context
+ * @returns: 1 if context is empty; 0 if not empty; < 0 for failure
+ *
+ * This is for the debugfs read pointer which allows for a non-destructive read.
+ * There may still be data in the log, but it may have already been read.
+ */
+static int is_nd_read_empty(struct ipc_log_context *ilctxt)
+{
+ if (!ilctxt)
+ return -EINVAL;
+
+ return ((ilctxt->nd_read_page == ilctxt->write_page) &&
+ (ilctxt->nd_read_page->hdr.nd_read_offset ==
+ ilctxt->write_page->hdr.write_offset));
+}
+
+/**
+ * is_read_empty - Returns true if no data is available in the log
+ *
+ * @ilctxt: logging context
+ * @returns: 1 if context is empty; 0 if not empty; < 0 for failure
+ *
+ * This is for the actual log contents. If it is empty, then there
+ * is no data at all in the log.
+ */
+static int is_read_empty(struct ipc_log_context *ilctxt)
+{
+ if (!ilctxt)
+ return -EINVAL;
+
+ return ((ilctxt->read_page == ilctxt->write_page) &&
+ (ilctxt->read_page->hdr.read_offset ==
+ ilctxt->write_page->hdr.write_offset));
+}
+
+/**
+ * is_nd_read_equal_read - Return true if the non-destructive read is equal to
+ * the destructive read
+ *
+ * @ilctxt: logging context
+ * @returns: true if nd read is equal to read; false otherwise
+ */
+static bool is_nd_read_equal_read(struct ipc_log_context *ilctxt)
+{
+ uint16_t read_offset;
+ uint16_t nd_read_offset;
+
+ if (ilctxt->nd_read_page == ilctxt->read_page) {
+ read_offset = ilctxt->read_page->hdr.read_offset;
+ nd_read_offset = ilctxt->nd_read_page->hdr.nd_read_offset;
+
+ if (read_offset == nd_read_offset)
+ return true;
+ }
+
+ return false;
+}
+
+
+static struct ipc_log_page *get_next_page(struct ipc_log_context *ilctxt,
+ struct ipc_log_page *cur_pg)
+{
+ struct ipc_log_page_header *p_pghdr;
+ struct ipc_log_page *pg = NULL;
+
+ if (!ilctxt || !cur_pg)
+ return NULL;
+
+ if (ilctxt->last_page == cur_pg)
+ return ilctxt->first_page;
+
+ p_pghdr = list_first_entry(&cur_pg->hdr.list,
+ struct ipc_log_page_header, list);
+ pg = container_of(p_pghdr, struct ipc_log_page, hdr);
+
+ return pg;
+}
+
+/**
+ * ipc_log_read - do non-destructive read of the log
+ *
+ * @ilctxt: Logging context
+ * @data: Data pointer to receive the data
+ * @data_size: Number of bytes to read (must be <= bytes available in log)
+ *
+ * This read will update a runtime read pointer, but will not affect the actual
+ * contents of the log which allows for reading the logs continuously while
+ * debugging and if the system crashes, then the full logs can still be
+ * extracted.
+ */
+static void ipc_log_read(struct ipc_log_context *ilctxt,
+ void *data, int data_size)
+{
+ int bytes_to_read;
+
+ bytes_to_read = MIN(LOG_PAGE_DATA_SIZE
+ - ilctxt->nd_read_page->hdr.nd_read_offset,
+ data_size);
+
+ memcpy(data, (ilctxt->nd_read_page->data +
+ ilctxt->nd_read_page->hdr.nd_read_offset), bytes_to_read);
+
+ if (bytes_to_read != data_size) {
+ /* not enough space, wrap read to next page */
+ ilctxt->nd_read_page->hdr.nd_read_offset = 0;
+ ilctxt->nd_read_page = get_next_page(ilctxt,
+ ilctxt->nd_read_page);
+ if (WARN_ON(ilctxt->nd_read_page == NULL))
+ return;
+
+ memcpy((data + bytes_to_read),
+ (ilctxt->nd_read_page->data +
+ ilctxt->nd_read_page->hdr.nd_read_offset),
+ (data_size - bytes_to_read));
+ bytes_to_read = (data_size - bytes_to_read);
+ }
+ ilctxt->nd_read_page->hdr.nd_read_offset += bytes_to_read;
+}
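+
+The split copy above is the core of the ring traversal: read what fits in the
+current page, then wrap to the next page for the remainder. A minimal
+userspace sketch of the same logic (the two-page ring, the 8-byte page size,
+and all names below are illustrative stand-ins, not the driver's real types):
+
+```c
+#include <assert.h>
+#include <string.h>
+
+#define PAGE_DATA 8 /* stand-in for LOG_PAGE_DATA_SIZE */
+
+/* Read n bytes starting at offset off in page p of a two-page ring,
+ * wrapping into the next page, mirroring ipc_log_read()'s split copy. */
+static int ring_read(char pages[2][PAGE_DATA], int p, int off,
+		     char *out, int n)
+{
+	/* same MIN() of "bytes left in this page" vs. "bytes wanted" */
+	int first = (PAGE_DATA - off) < n ? (PAGE_DATA - off) : n;
+
+	memcpy(out, &pages[p][off], first);
+	if (first != n) {
+		/* not enough data left in this page: wrap to the next one */
+		p = (p + 1) % 2;
+		memcpy(out + first, &pages[p][0], n - first);
+		return n - first; /* new read offset within the next page */
+	}
+	return off + first;
+}
+
+int main(void)
+{
+	char pages[2][PAGE_DATA] = { "ABCDEFGH", "IJKLMNOP" };
+	char out[6];
+
+	/* a 6-byte read at offset 5 takes "FGH" then wraps for "IJK" */
+	assert(ring_read(pages, 0, 5, out, 6) == 3);
+	assert(memcmp(out, "FGHIJK", 6) == 0);
+	return 0;
+}
+```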
+
+/**
+ * ipc_log_drop - do destructive read of the log
+ *
+ * @ilctxt: Logging context
+ * @data: Data pointer to receive the data (or NULL)
+ * @data_size: Number of bytes to read (must be <= bytes available in log)
+ */
+static void ipc_log_drop(struct ipc_log_context *ilctxt, void *data,
+ int data_size)
+{
+ int bytes_to_read;
+ bool push_nd_read;
+
+ bytes_to_read = MIN(LOG_PAGE_DATA_SIZE
+ - ilctxt->read_page->hdr.read_offset,
+ data_size);
+ if (data)
+ memcpy(data, (ilctxt->read_page->data +
+ ilctxt->read_page->hdr.read_offset), bytes_to_read);
+
+ if (bytes_to_read != data_size) {
+ /* not enough space, wrap read to next page */
+ push_nd_read = is_nd_read_equal_read(ilctxt);
+
+ ilctxt->read_page->hdr.read_offset = 0;
+ if (push_nd_read) {
+ ilctxt->read_page->hdr.nd_read_offset = 0;
+ ilctxt->read_page = get_next_page(ilctxt,
+ ilctxt->read_page);
+ if (WARN_ON(ilctxt->read_page == NULL))
+ return;
+ ilctxt->nd_read_page = ilctxt->read_page;
+ } else {
+ ilctxt->read_page = get_next_page(ilctxt,
+ ilctxt->read_page);
+ if (WARN_ON(ilctxt->read_page == NULL))
+ return;
+ }
+
+ if (data)
+ memcpy((data + bytes_to_read),
+ (ilctxt->read_page->data +
+ ilctxt->read_page->hdr.read_offset),
+ (data_size - bytes_to_read));
+
+ bytes_to_read = (data_size - bytes_to_read);
+ }
+
+ /* update non-destructive read pointer if necessary */
+ push_nd_read = is_nd_read_equal_read(ilctxt);
+ ilctxt->read_page->hdr.read_offset += bytes_to_read;
+ ilctxt->write_avail += data_size;
+
+ if (push_nd_read)
+ ilctxt->nd_read_page->hdr.nd_read_offset += bytes_to_read;
+}
+
+/**
+ * msg_read - Reads a message.
+ *
+ * If a message is read successfully, then the message context
+ * will be set to:
+ * .hdr: message header .size and .type values
+ * .offset: beginning of message data
+ *
+ * @ilctxt: Logging context
+ * @ectxt: Message context
+ *
+ * @returns: 0 - no message available; >0 message size; <0 error
+ */
+static int msg_read(struct ipc_log_context *ilctxt,
+ struct encode_context *ectxt)
+{
+ struct tsv_header hdr;
+
+ if (!ectxt)
+ return -EINVAL;
+
+ if (is_nd_read_empty(ilctxt))
+ return 0;
+
+ ipc_log_read(ilctxt, &hdr, sizeof(hdr));
+ ectxt->hdr.type = hdr.type;
+ ectxt->hdr.size = hdr.size;
+ ectxt->offset = sizeof(hdr);
+ ipc_log_read(ilctxt, (ectxt->buff + ectxt->offset),
+ (int)hdr.size);
+
+ return sizeof(hdr) + (int)hdr.size;
+}
+
+/**
+ * msg_drop - Drops a message.
+ *
+ * @ilctxt: Logging context
+ */
+static void msg_drop(struct ipc_log_context *ilctxt)
+{
+ struct tsv_header hdr;
+
+ if (!is_read_empty(ilctxt)) {
+ ipc_log_drop(ilctxt, &hdr, sizeof(hdr));
+ ipc_log_drop(ilctxt, NULL, (int)hdr.size);
+ }
+}
+
+/*
+ * Commits messages to the FIFO. If the FIFO is full, then enough
+ * messages are dropped to create space for the new message.
+ */
+void ipc_log_write(void *ctxt, struct encode_context *ectxt)
+{
+ struct ipc_log_context *ilctxt = (struct ipc_log_context *)ctxt;
+ int bytes_to_write;
+ unsigned long flags;
+
+ if (!ilctxt || !ectxt) {
+ pr_err("%s: Invalid ipc_log or encode context\n", __func__);
+ return;
+ }
+
+ read_lock_irqsave(&context_list_lock_lha1, flags);
+ spin_lock(&ilctxt->context_lock_lhb1);
+ while (ilctxt->write_avail <= ectxt->offset)
+ msg_drop(ilctxt);
+
+ bytes_to_write = MIN(LOG_PAGE_DATA_SIZE
+ - ilctxt->write_page->hdr.write_offset,
+ ectxt->offset);
+ memcpy((ilctxt->write_page->data +
+ ilctxt->write_page->hdr.write_offset),
+ ectxt->buff, bytes_to_write);
+
+ if (bytes_to_write != ectxt->offset) {
+ uint64_t t_now = sched_clock();
+
+ ilctxt->write_page->hdr.write_offset += bytes_to_write;
+ ilctxt->write_page->hdr.end_time = t_now;
+
+ ilctxt->write_page = get_next_page(ilctxt, ilctxt->write_page);
+ if (WARN_ON(ilctxt->write_page == NULL)) {
+ /* drop both locks before bailing out */
+ spin_unlock(&ilctxt->context_lock_lhb1);
+ read_unlock_irqrestore(&context_list_lock_lha1, flags);
+ return;
+ }
+ ilctxt->write_page->hdr.write_offset = 0;
+ ilctxt->write_page->hdr.start_time = t_now;
+ memcpy((ilctxt->write_page->data +
+ ilctxt->write_page->hdr.write_offset),
+ (ectxt->buff + bytes_to_write),
+ (ectxt->offset - bytes_to_write));
+ bytes_to_write = (ectxt->offset - bytes_to_write);
+ }
+ ilctxt->write_page->hdr.write_offset += bytes_to_write;
+ ilctxt->write_avail -= ectxt->offset;
+ complete(&ilctxt->read_avail);
+ spin_unlock(&ilctxt->context_lock_lhb1);
+ read_unlock_irqrestore(&context_list_lock_lha1, flags);
+}
+EXPORT_SYMBOL(ipc_log_write);
+
+/*
+ * Starts a new message after which you can add serialized data and
+ * then complete the message by calling msg_encode_end().
+ */
+void msg_encode_start(struct encode_context *ectxt, uint32_t type)
+{
+ if (!ectxt) {
+ pr_err("%s: Invalid encode context\n", __func__);
+ return;
+ }
+
+ ectxt->hdr.type = type;
+ ectxt->hdr.size = 0;
+ ectxt->offset = sizeof(ectxt->hdr);
+}
+EXPORT_SYMBOL(msg_encode_start);
+
+/*
+ * Completes the message
+ */
+void msg_encode_end(struct encode_context *ectxt)
+{
+ if (!ectxt) {
+ pr_err("%s: Invalid encode context\n", __func__);
+ return;
+ }
+
+ /* finalize data size */
+ ectxt->hdr.size = ectxt->offset - sizeof(ectxt->hdr);
+ if (WARN_ON(ectxt->hdr.size > MAX_MSG_SIZE))
+ return;
+ memcpy(ectxt->buff, &ectxt->hdr, sizeof(ectxt->hdr));
+}
+EXPORT_SYMBOL(msg_encode_end);
+
+/*
+ * Helper function used to write data to a message context.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ * @data data to write
+ * @size number of bytes of data to write
+ */
+static inline int tsv_write_data(struct encode_context *ectxt,
+ void *data, uint32_t size)
+{
+ if (!ectxt) {
+ pr_err("%s: Invalid encode context\n", __func__);
+ return -EINVAL;
+ }
+ if ((ectxt->offset + size) > MAX_MSG_SIZE) {
+ pr_err("%s: No space to encode further\n", __func__);
+ return -EINVAL;
+ }
+
+ memcpy((void *)(ectxt->buff + ectxt->offset), data, size);
+ ectxt->offset += size;
+ return 0;
+}
+
+/*
+ * Helper function that writes a type to the context.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ * @type primitive type
+ * @size size of primitive in bytes
+ */
+static inline int tsv_write_header(struct encode_context *ectxt,
+ uint32_t type, uint32_t size)
+{
+ struct tsv_header hdr;
+
+ hdr.type = (unsigned char)type;
+ hdr.size = (unsigned char)size;
+ return tsv_write_data(ectxt, &hdr, sizeof(hdr));
+}
+
+/*
+ * Writes the current timestamp count.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ */
+int tsv_timestamp_write(struct encode_context *ectxt)
+{
+ int ret;
+ uint64_t t_now = sched_clock();
+
+ ret = tsv_write_header(ectxt, TSV_TYPE_TIMESTAMP, sizeof(t_now));
+ if (ret)
+ return ret;
+ return tsv_write_data(ectxt, &t_now, sizeof(t_now));
+}
+EXPORT_SYMBOL(tsv_timestamp_write);
+
+/*
+ * Writes the current QTimer timestamp count.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ */
+int tsv_qtimer_write(struct encode_context *ectxt)
+{
+ int ret;
+ uint64_t t_now = arch_counter_get_cntvct();
+
+ ret = tsv_write_header(ectxt, TSV_TYPE_QTIMER, sizeof(t_now));
+ if (ret)
+ return ret;
+ return tsv_write_data(ectxt, &t_now, sizeof(t_now));
+}
+EXPORT_SYMBOL(tsv_qtimer_write);
+
+/*
+ * Writes a data pointer.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ * @pointer pointer value to write
+ */
+int tsv_pointer_write(struct encode_context *ectxt, void *pointer)
+{
+ int ret;
+
+ ret = tsv_write_header(ectxt, TSV_TYPE_POINTER, sizeof(pointer));
+ if (ret)
+ return ret;
+ return tsv_write_data(ectxt, &pointer, sizeof(pointer));
+}
+EXPORT_SYMBOL(tsv_pointer_write);
+
+/*
+ * Writes a 32-bit integer value.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ * @n integer to write
+ */
+int tsv_int32_write(struct encode_context *ectxt, int32_t n)
+{
+ int ret;
+
+ ret = tsv_write_header(ectxt, TSV_TYPE_INT32, sizeof(n));
+ if (ret)
+ return ret;
+ return tsv_write_data(ectxt, &n, sizeof(n));
+}
+EXPORT_SYMBOL(tsv_int32_write);
+
+/*
+ * Writes a byte array.
+ *
+ * @ectxt context initialized by calling msg_encode_start()
+ * @data Beginning address of data
+ * @data_size Size of data to be written
+ */
+int tsv_byte_array_write(struct encode_context *ectxt,
+ void *data, int data_size)
+{
+ int ret;
+
+ ret = tsv_write_header(ectxt, TSV_TYPE_BYTE_ARRAY, data_size);
+ if (ret)
+ return ret;
+ return tsv_write_data(ectxt, data, data_size);
+}
+EXPORT_SYMBOL(tsv_byte_array_write);
+
+/*
+ * Helper function to log a string
+ *
+ * @ilctxt ipc_log_context created using ipc_log_context_create()
+ * @fmt Data specified using format specifiers
+ */
+int ipc_log_string(void *ilctxt, const char *fmt, ...)
+{
+ struct encode_context ectxt;
+ int avail_size, data_size, hdr_size = sizeof(struct tsv_header);
+ va_list arg_list;
+
+ if (!ilctxt)
+ return -EINVAL;
+
+ msg_encode_start(&ectxt, TSV_TYPE_STRING);
+ tsv_timestamp_write(&ectxt);
+ tsv_qtimer_write(&ectxt);
+ avail_size = (MAX_MSG_SIZE - (ectxt.offset + hdr_size));
+ va_start(arg_list, fmt);
+ data_size = vsnprintf((ectxt.buff + ectxt.offset + hdr_size),
+ avail_size, fmt, arg_list);
+ va_end(arg_list);
+ /* vsnprintf() returns the untruncated length; clamp to what was stored */
+ if (data_size >= avail_size)
+ data_size = avail_size - 1;
+ tsv_write_header(&ectxt, TSV_TYPE_BYTE_ARRAY, data_size);
+ ectxt.offset += data_size;
+ msg_encode_end(&ectxt);
+ ipc_log_write(ilctxt, &ectxt);
+ return 0;
+}
+EXPORT_SYMBOL(ipc_log_string);
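+
+Putting the pieces together, each ipc_log_string() call commits one message
+framed as a message header plus timestamp, QTimer, and byte-array elements.
+A sketch of the size accounting, assuming the 2-byte `tsv_header` and the
+8-byte timestamp payloads defined in this file (all names are stand-ins):
+
+```c
+#include <assert.h>
+#include <string.h>
+
+#define HDR 2 /* per-element and per-message header: 1B type + 1B size */
+#define TS  8 /* sched_clock() timestamp payload */
+#define QT  8 /* QTimer counter payload */
+
+/* total bytes ipc_log_string() commits for a formatted string of n chars
+ * (the terminating NUL is written by vsnprintf() but not accounted):
+ * message header + timestamp element + qtimer element + string element */
+static int msg_bytes(int n)
+{
+	return HDR + (HDR + TS) + (HDR + QT) + (HDR + n);
+}
+
+int main(void)
+{
+	/* "hello" (5 chars): 2 + 10 + 10 + 7 = 29 bytes in the log */
+	assert(msg_bytes((int)strlen("hello")) == 29);
+	/* even an empty string costs the fixed 24-byte framing */
+	assert(msg_bytes(0) == 24);
+	return 0;
+}
+```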
+
+/**
+ * ipc_log_extract - Reads and deserializes log
+ *
+ * @ctxt: logging context
+ * @buff: buffer to receive the data
+ * @size: size of the buffer
+ * @returns: 0 if no data read; >0 number of bytes read; < 0 error
+ *
+ * If no data is available to be read, then the ilctxt::read_avail
+ * completion is reinitialized. This allows clients to block
+ * until new log data is saved.
+ */
+int ipc_log_extract(void *ctxt, char *buff, int size)
+{
+ struct encode_context ectxt;
+ struct decode_context dctxt;
+ void (*deserialize_func)(struct encode_context *ectxt,
+ struct decode_context *dctxt);
+ struct ipc_log_context *ilctxt = (struct ipc_log_context *)ctxt;
+ unsigned long flags;
+
+ if (!ilctxt || size < MAX_MSG_DECODED_SIZE)
+ return -EINVAL;
+
+ dctxt.output_format = OUTPUT_DEBUGFS;
+ dctxt.buff = buff;
+ dctxt.size = size;
+ read_lock_irqsave(&context_list_lock_lha1, flags);
+ spin_lock(&ilctxt->context_lock_lhb1);
+ while (dctxt.size >= MAX_MSG_DECODED_SIZE &&
+ !is_nd_read_empty(ilctxt)) {
+ msg_read(ilctxt, &ectxt);
+ deserialize_func = get_deserialization_func(ilctxt,
+ ectxt.hdr.type);
+ spin_unlock(&ilctxt->context_lock_lhb1);
+ read_unlock_irqrestore(&context_list_lock_lha1, flags);
+ if (deserialize_func)
+ deserialize_func(&ectxt, &dctxt);
+ else
+ pr_err("%s: unknown message 0x%x\n",
+ __func__, ectxt.hdr.type);
+ read_lock_irqsave(&context_list_lock_lha1, flags);
+ spin_lock(&ilctxt->context_lock_lhb1);
+ }
+ if ((size - dctxt.size) == 0)
+ reinit_completion(&ilctxt->read_avail);
+ spin_unlock(&ilctxt->context_lock_lhb1);
+ read_unlock_irqrestore(&context_list_lock_lha1, flags);
+ return size - dctxt.size;
+}
+EXPORT_SYMBOL(ipc_log_extract);
+
+/*
+ * Helper function used to read data from a message context.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @data data to read
+ * @size number of bytes of data to read
+ */
+static void tsv_read_data(struct encode_context *ectxt,
+ void *data, uint32_t size)
+{
+ if (WARN_ON((ectxt->offset + size) > MAX_MSG_SIZE))
+ return;
+ memcpy(data, (ectxt->buff + ectxt->offset), size);
+ ectxt->offset += size;
+}
+
+/*
+ * Helper function that reads a type from the context and updates the
+ * context pointers.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @hdr type header
+ */
+static void tsv_read_header(struct encode_context *ectxt,
+ struct tsv_header *hdr)
+{
+ if (WARN_ON((ectxt->offset + sizeof(*hdr)) > MAX_MSG_SIZE))
+ return;
+ memcpy(hdr, (ectxt->buff + ectxt->offset), sizeof(*hdr));
+ ectxt->offset += sizeof(*hdr);
+}
+
+/*
+ * Reads a timestamp.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @dctxt deserialization context
+ * @format output format (appended to %6u.%09lu timestamp format)
+ */
+void tsv_timestamp_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format)
+{
+ struct tsv_header hdr;
+ uint64_t val;
+ unsigned long nanosec_rem;
+
+ tsv_read_header(ectxt, &hdr);
+ if (WARN_ON(hdr.type != TSV_TYPE_TIMESTAMP))
+ return;
+ tsv_read_data(ectxt, &val, sizeof(val));
+ nanosec_rem = do_div(val, 1000000000U);
+ IPC_SPRINTF_DECODE(dctxt, "[%6u.%09lu%s/",
+ (unsigned int)val, nanosec_rem, format);
+}
+EXPORT_SYMBOL(tsv_timestamp_read);
+
+/*
+ * Reads a QTimer timestamp.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @dctxt deserialization context
+ * @format output format (appended to %#18llx timestamp format)
+ */
+void tsv_qtimer_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format)
+{
+ struct tsv_header hdr;
+ uint64_t val;
+
+ tsv_read_header(ectxt, &hdr);
+ if (WARN_ON(hdr.type != TSV_TYPE_QTIMER))
+ return;
+ tsv_read_data(ectxt, &val, sizeof(val));
+
+ /*
+ * This gives 16 hex digits of output. The # prefix prepends
+ * a 0x, and these characters count as part of the number.
+ */
+ IPC_SPRINTF_DECODE(dctxt, "%#18llx]%s", val, format);
+}
+EXPORT_SYMBOL(tsv_qtimer_read);
+
+/*
+ * Reads a data pointer.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @dctxt deserialization context
+ * @format output format
+ */
+void tsv_pointer_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format)
+{
+ struct tsv_header hdr;
+ void *val;
+
+ tsv_read_header(ectxt, &hdr);
+ if (WARN_ON(hdr.type != TSV_TYPE_POINTER))
+ return;
+ tsv_read_data(ectxt, &val, sizeof(val));
+
+ IPC_SPRINTF_DECODE(dctxt, format, val);
+}
+EXPORT_SYMBOL(tsv_pointer_read);
+
+/*
+ * Reads a 32-bit integer value.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @dctxt deserialization context
+ * @format output format
+ */
+int32_t tsv_int32_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format)
+{
+ struct tsv_header hdr;
+ int32_t val;
+
+ tsv_read_header(ectxt, &hdr);
+ if (WARN_ON(hdr.type != TSV_TYPE_INT32))
+ return -EINVAL;
+ tsv_read_data(ectxt, &val, sizeof(val));
+
+ IPC_SPRINTF_DECODE(dctxt, format, val);
+ return val;
+}
+EXPORT_SYMBOL(tsv_int32_read);
+
+/*
+ * Reads a byte array/string.
+ *
+ * @ectxt context initialized by calling msg_read()
+ * @dctxt deserialization context
+ * @format output format
+ */
+void tsv_byte_array_read(struct encode_context *ectxt,
+ struct decode_context *dctxt, const char *format)
+{
+ struct tsv_header hdr;
+
+ tsv_read_header(ectxt, &hdr);
+ if (WARN_ON(hdr.type != TSV_TYPE_BYTE_ARRAY))
+ return;
+ tsv_read_data(ectxt, dctxt->buff, hdr.size);
+ dctxt->buff += hdr.size;
+ dctxt->size -= hdr.size;
+}
+EXPORT_SYMBOL(tsv_byte_array_read);
+
+int add_deserialization_func(void *ctxt, int type,
+ void (*dfunc)(struct encode_context *,
+ struct decode_context *))
+{
+ struct ipc_log_context *ilctxt = (struct ipc_log_context *)ctxt;
+ struct dfunc_info *df_info;
+ unsigned long flags;
+
+ if (!ilctxt || !dfunc)
+ return -EINVAL;
+
+ df_info = kmalloc(sizeof(struct dfunc_info), GFP_KERNEL);
+ if (!df_info)
+ return -ENOMEM;
+
+ read_lock_irqsave(&context_list_lock_lha1, flags);
+ spin_lock(&ilctxt->context_lock_lhb1);
+ df_info->type = type;
+ df_info->dfunc = dfunc;
+ list_add_tail(&df_info->list, &ilctxt->dfunc_info_list);
+ spin_unlock(&ilctxt->context_lock_lhb1);
+ read_unlock_irqrestore(&context_list_lock_lha1, flags);
+ return 0;
+}
+EXPORT_SYMBOL(add_deserialization_func);
+
+static void *get_deserialization_func(struct ipc_log_context *ilctxt,
+ int type)
+{
+ struct dfunc_info *df_info = NULL;
+
+ if (!ilctxt)
+ return NULL;
+
+ list_for_each_entry(df_info, &ilctxt->dfunc_info_list, list) {
+ if (df_info->type == type)
+ return df_info->dfunc;
+ }
+ return NULL;
+}
+
+/**
+ * ipc_log_context_create - Create a debug log context
+ * Should not be called from atomic context
+ *
+ * @max_num_pages: Number of pages of logging space required (max. 10)
+ * @mod_name : Name of the directory entry under DEBUGFS
+ * @user_version : Version number of user-defined message formats
+ *
+ * returns context id on success, NULL on failure
+ */
+void *ipc_log_context_create(int max_num_pages,
+ const char *mod_name, uint16_t user_version)
+{
+ struct ipc_log_context *ctxt;
+ struct ipc_log_page *pg = NULL;
+ int page_cnt;
+ unsigned long flags;
+
+ ctxt = kzalloc(sizeof(struct ipc_log_context), GFP_KERNEL);
+ if (!ctxt)
+ return NULL;
+
+ init_completion(&ctxt->read_avail);
+ INIT_LIST_HEAD(&ctxt->page_list);
+ INIT_LIST_HEAD(&ctxt->dfunc_info_list);
+ spin_lock_init(&ctxt->context_lock_lhb1);
+ for (page_cnt = 0; page_cnt < max_num_pages; page_cnt++) {
+ pg = kzalloc(sizeof(struct ipc_log_page), GFP_KERNEL);
+ if (!pg) {
+ pr_err("%s: cannot create ipc_log_page\n", __func__);
+ goto release_ipc_log_context;
+ }
+ pg->hdr.log_id = (uint64_t)(uintptr_t)ctxt;
+ pg->hdr.page_num = LOG_PAGE_FLAG | page_cnt;
+ pg->hdr.ctx_offset = (int64_t)((uint64_t)(uintptr_t)ctxt -
+ (uint64_t)(uintptr_t)&pg->hdr);
+
+ /* set magic last to signal that page init is complete */
+ pg->hdr.magic = IPC_LOGGING_MAGIC_NUM;
+ pg->hdr.nmagic = ~(IPC_LOGGING_MAGIC_NUM);
+
+ spin_lock_irqsave(&ctxt->context_lock_lhb1, flags);
+ list_add_tail(&pg->hdr.list, &ctxt->page_list);
+ spin_unlock_irqrestore(&ctxt->context_lock_lhb1, flags);
+ }
+
+ ctxt->log_id = (uint64_t)(uintptr_t)ctxt;
+ ctxt->version = IPC_LOG_VERSION;
+ strlcpy(ctxt->name, mod_name, IPC_LOG_MAX_CONTEXT_NAME_LEN);
+ ctxt->user_version = user_version;
+ ctxt->first_page = get_first_page(ctxt);
+ ctxt->last_page = pg;
+ ctxt->write_page = ctxt->first_page;
+ ctxt->read_page = ctxt->first_page;
+ ctxt->nd_read_page = ctxt->first_page;
+ ctxt->write_avail = max_num_pages * LOG_PAGE_DATA_SIZE;
+ ctxt->header_size = sizeof(struct ipc_log_page_header);
+ create_ctx_debugfs(ctxt, mod_name);
+
+ /* set magic last to signal context init is complete */
+ ctxt->magic = IPC_LOG_CONTEXT_MAGIC_NUM;
+ ctxt->nmagic = ~(IPC_LOG_CONTEXT_MAGIC_NUM);
+
+ write_lock_irqsave(&context_list_lock_lha1, flags);
+ list_add_tail(&ctxt->list, &ipc_log_context_list);
+ write_unlock_irqrestore(&context_list_lock_lha1, flags);
+ return (void *)ctxt;
+
+release_ipc_log_context:
+ while (page_cnt-- > 0) {
+ pg = get_first_page(ctxt);
+ list_del(&pg->hdr.list);
+ kfree(pg);
+ }
+ kfree(ctxt);
+ return NULL;
+}
+EXPORT_SYMBOL(ipc_log_context_create);
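+
+The ctx_offset field written above lets a RAM-dump parser walk from any page
+header back to its owning context using only relative addresses. A userspace
+sketch of the round trip (the types and helper names are illustrative
+stand-ins, not the driver's structures):
+
+```c
+#include <assert.h>
+#include <stdint.h>
+
+/* store: signed distance from a page-header address to its context,
+ * as ipc_log_context_create() computes pg->hdr.ctx_offset */
+static int64_t ctx_offset(const void *hdr, const void *ctx)
+{
+	return (int64_t)((uint64_t)(uintptr_t)ctx -
+			 (uint64_t)(uintptr_t)hdr);
+}
+
+/* extract: recover the context address from the page-header address plus
+ * the stored offset, with no absolute pointers needed in the dump */
+static const void *ctx_from_offset(const void *hdr, int64_t off)
+{
+	return (const void *)(uintptr_t)
+		((uint64_t)(uintptr_t)hdr + (uint64_t)off);
+}
+
+int main(void)
+{
+	int context, header; /* stand-ins for the real structures */
+	int64_t off = ctx_offset(&header, &context);
+
+	/* the round trip lands back on the context, wherever it lives */
+	assert(ctx_from_offset(&header, off) == (void *)&context);
+	return 0;
+}
+```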
+
+/*
+ * Destroy debug log context
+ *
+ * @ctxt: debug log context created by calling ipc_log_context_create API.
+ */
+int ipc_log_context_destroy(void *ctxt)
+{
+ struct ipc_log_context *ilctxt = (struct ipc_log_context *)ctxt;
+ struct ipc_log_page *pg = NULL;
+ unsigned long flags;
+
+ if (!ilctxt)
+ return 0;
+
+ while (!list_empty(&ilctxt->page_list)) {
+ pg = get_first_page(ctxt);
+ list_del(&pg->hdr.list);
+ kfree(pg);
+ }
+
+ write_lock_irqsave(&context_list_lock_lha1, flags);
+ list_del(&ilctxt->list);
+ write_unlock_irqrestore(&context_list_lock_lha1, flags);
+
+ debugfs_remove_recursive(ilctxt->dent);
+
+ kfree(ilctxt);
+ return 0;
+}
+EXPORT_SYMBOL(ipc_log_context_destroy);
+
+static int __init ipc_logging_init(void)
+{
+ check_and_create_debugfs();
+ return 0;
+}
+
+module_init(ipc_logging_init);
+
+MODULE_DESCRIPTION("ipc logging");
+MODULE_LICENSE("GPL v2");
diff --git a/kernel/trace/ipc_logging_debug.c b/kernel/trace/ipc_logging_debug.c
new file mode 100644
index 0000000..a545387
--- /dev/null
+++ b/kernel/trace/ipc_logging_debug.c
@@ -0,0 +1,184 @@
+/* Copyright (c) 2012-2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/jiffies.h>
+#include <linux/debugfs.h>
+#include <linux/io.h>
+#include <linux/idr.h>
+#include <linux/string.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/delay.h>
+#include <linux/completion.h>
+#include <linux/ipc_logging.h>
+
+#include "ipc_logging_private.h"
+
+static DEFINE_MUTEX(ipc_log_debugfs_init_lock);
+static struct dentry *root_dent;
+
+static int debug_log(struct ipc_log_context *ilctxt,
+ char *buff, int size, int cont)
+{
+ int i = 0;
+ int ret;
+
+ if (size < MAX_MSG_DECODED_SIZE) {
+ pr_err("%s: buffer size %d < %d\n", __func__, size,
+ MAX_MSG_DECODED_SIZE);
+ return -ENOMEM;
+ }
+ do {
+ i = ipc_log_extract(ilctxt, buff, size - 1);
+ if (cont && i == 0) {
+ ret = wait_for_completion_interruptible(
+ &ilctxt->read_avail);
+ if (ret < 0)
+ return ret;
+ }
+ } while (cont && i == 0);
+
+ return i;
+}
+
+/*
+ * VFS Read operation helper which dispatches the call to the debugfs
+ * read command stored in file->private_data.
+ *
+ * @file File structure
+ * @buff user buffer
+ * @count size of user buffer
+ * @ppos file position to read from (only a value of 0 is accepted)
+ * @cont 1 = continuous mode (don't return 0 to signal end-of-file)
+ *
+ * @returns ==0 end of file
+ * >0 number of bytes read
+ * <0 error
+ */
+static ssize_t debug_read_helper(struct file *file, char __user *buff,
+ size_t count, loff_t *ppos, int cont)
+{
+ struct ipc_log_context *ilctxt = file->private_data;
+ char *buffer;
+ int bsize;
+
+ buffer = kmalloc(count, GFP_KERNEL);
+ if (!buffer)
+ return -ENOMEM;
+
+ bsize = debug_log(ilctxt, buffer, count, cont);
+ if (bsize > 0) {
+ if (copy_to_user(buff, buffer, bsize)) {
+ kfree(buffer);
+ return -EFAULT;
+ }
+ *ppos += bsize;
+ }
+ kfree(buffer);
+ return bsize;
+}
+
+static ssize_t debug_read(struct file *file, char __user *buff,
+ size_t count, loff_t *ppos)
+{
+ return debug_read_helper(file, buff, count, ppos, 0);
+}
+
+static ssize_t debug_read_cont(struct file *file, char __user *buff,
+ size_t count, loff_t *ppos)
+{
+ return debug_read_helper(file, buff, count, ppos, 1);
+}
+
+static int debug_open(struct inode *inode, struct file *file)
+{
+ file->private_data = inode->i_private;
+ return 0;
+}
+
+static const struct file_operations debug_ops = {
+ .read = debug_read,
+ .open = debug_open,
+};
+
+static const struct file_operations debug_ops_cont = {
+ .read = debug_read_cont,
+ .open = debug_open,
+};
+
+static void debug_create(const char *name, mode_t mode,
+ struct dentry *dent,
+ struct ipc_log_context *ilctxt,
+ const struct file_operations *fops)
+{
+ debugfs_create_file(name, mode, dent, ilctxt, fops);
+}
+
+static void dfunc_string(struct encode_context *ectxt,
+ struct decode_context *dctxt)
+{
+ tsv_timestamp_read(ectxt, dctxt, "");
+ tsv_qtimer_read(ectxt, dctxt, " ");
+ tsv_byte_array_read(ectxt, dctxt, "");
+
+ /* add trailing \n if necessary */
+ if (*(dctxt->buff - 1) != '\n') {
+ if (dctxt->size) {
+ ++dctxt->buff;
+ --dctxt->size;
+ }
+ *(dctxt->buff - 1) = '\n';
+ }
+}
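+
+The backfill above operates on a cursor that the TSV readers have already
+advanced one past the decoded text. A standalone sketch of the same logic
+(all names below are stand-ins for dctxt->buff and dctxt->size):
+
+```c
+#include <assert.h>
+#include <string.h>
+
+/* mirror of dfunc_string()'s trailing-newline backfill: buff is the
+ * cursor already advanced past the decoded text, size the bytes left */
+static void append_newline(char **buff, int *size)
+{
+	if (*(*buff - 1) != '\n') {
+		if (*size) {
+			++*buff;
+			--*size;
+		}
+		*(*buff - 1) = '\n';
+	}
+}
+
+int main(void)
+{
+	char out[16];
+	char *cur = out;
+	int left = (int)sizeof(out);
+
+	/* pretend a decoder wrote "abc" and advanced the cursor, as
+	 * tsv_byte_array_read() does with dctxt->buff and dctxt->size */
+	memcpy(cur, "abc", 3);
+	cur += 3;
+	left -= 3;
+
+	append_newline(&cur, &left);
+	assert(memcmp(out, "abc\n", 4) == 0);
+	assert(left == 12);
+	return 0;
+}
+```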
+
+void check_and_create_debugfs(void)
+{
+ mutex_lock(&ipc_log_debugfs_init_lock);
+ if (!root_dent) {
+ root_dent = debugfs_create_dir("ipc_logging", NULL);
+
+ if (IS_ERR(root_dent)) {
+ pr_err("%s: unable to create debugfs %ld\n",
+ __func__, PTR_ERR(root_dent));
+ root_dent = NULL;
+ }
+ }
+ mutex_unlock(&ipc_log_debugfs_init_lock);
+}
+EXPORT_SYMBOL(check_and_create_debugfs);
+
+void create_ctx_debugfs(struct ipc_log_context *ctxt,
+ const char *mod_name)
+{
+ if (!root_dent)
+ check_and_create_debugfs();
+
+ if (root_dent) {
+ ctxt->dent = debugfs_create_dir(mod_name, root_dent);
+ if (!IS_ERR(ctxt->dent)) {
+ debug_create("log", 0444, ctxt->dent,
+ ctxt, &debug_ops);
+ debug_create("log_cont", 0444, ctxt->dent,
+ ctxt, &debug_ops_cont);
+ }
+ }
+ add_deserialization_func((void *)ctxt,
+ TSV_TYPE_STRING, dfunc_string);
+}
+EXPORT_SYMBOL(create_ctx_debugfs);
diff --git a/kernel/trace/ipc_logging_private.h b/kernel/trace/ipc_logging_private.h
new file mode 100644
index 0000000..594027a
--- /dev/null
+++ b/kernel/trace/ipc_logging_private.h
@@ -0,0 +1,165 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#ifndef _IPC_LOGGING_PRIVATE_H
+#define _IPC_LOGGING_PRIVATE_H
+
+#include <linux/ipc_logging.h>
+
+#define IPC_LOG_VERSION 0x0003
+#define IPC_LOG_MAX_CONTEXT_NAME_LEN 32
+
+/**
+ * struct ipc_log_page_header - Individual log page header
+ *
+ * @magic: Magic number (used for log extraction)
+ * @nmagic: Inverse of magic number (used for log extraction)
+ * @page_num: Index of page (0.. N - 1) (note top bit is always set)
+ * @read_offset: Read offset in page
+ * @write_offset: Write offset in page (or 0xFFFF if full)
+ * @log_id: ID of logging context that owns this page
+ * @start_time: Scheduler clock for first write time in page
+ * @end_time: Scheduler clock for last write time in page
+ * @ctx_offset: Signed offset from page to the logging context. Used to
+ * optimize ram-dump extraction.
+ *
+ * @list: Linked list of pages that make up a log
+ * @nd_read_offset: Non-destructive read offset used for debugfs
+ *
+ * The first part of the structure defines data that is used to extract the
+ * logs from a memory dump and elements in this section should not be changed
+ * or re-ordered. New local data structures can be added to the end of the
+ * structure since they will be ignored by the extraction tool.
+ */
+struct ipc_log_page_header {
+ uint32_t magic;
+ uint32_t nmagic;
+ uint32_t page_num;
+ uint16_t read_offset;
+ uint16_t write_offset;
+ uint64_t log_id;
+ uint64_t start_time;
+ uint64_t end_time;
+ int64_t ctx_offset;
+
+ /* add local data structures after this point */
+ struct list_head list;
+ uint16_t nd_read_offset;
+};
+
+/**
+ * struct ipc_log_page - Individual log page
+ *
+ * @hdr: Log page header
+ * @data: Log data
+ *
+ * Each log consists of 1 to N log pages. Data size is adjusted to always fit
+ * the structure into a single kernel page.
+ */
+struct ipc_log_page {
+ struct ipc_log_page_header hdr;
+ char data[PAGE_SIZE - sizeof(struct ipc_log_page_header)];
+};
+
+/**
+ * struct ipc_log_context - main logging context
+ *
+ * @magic: Magic number (used for log extraction)
+ * @nmagic: Inverse of magic number (used for log extraction)
+ * @version: IPC Logging version of log format
+ * @user_version: Version number for user-defined messages
+ * @header_size: Size of the log header which is used to determine the offset
+ * of ipc_log_page::data
+ * @log_id: Log ID (assigned when log is created)
+ * @name: Name of the log used to uniquely identify the log during extraction
+ *
+ * @list: List of log contexts (struct ipc_log_context)
+ * @page_list: List of log pages (struct ipc_log_page)
+ * @first_page: First page in list of logging pages
+ * @last_page: Last page in list of logging pages
+ * @write_page: Current write page
+ * @read_page: Current read page (for internal reads)
+ * @nd_read_page: Current debugfs extraction page (non-destructive)
+ *
+ * @write_avail: Number of bytes available to write in all pages
+ * @dent: Debugfs node for run-time log extraction
+ * @dfunc_info_list: List of deserialization functions
+ * @context_lock_lhb1: Lock for entire structure
+ * @read_avail: Completed when new data is added to the log
+ */
+struct ipc_log_context {
+ uint32_t magic;
+ uint32_t nmagic;
+ uint32_t version;
+ uint16_t user_version;
+ uint16_t header_size;
+ uint64_t log_id;
+ char name[IPC_LOG_MAX_CONTEXT_NAME_LEN];
+
+ /* add local data structures after this point */
+ struct list_head list;
+ struct list_head page_list;
+ struct ipc_log_page *first_page;
+ struct ipc_log_page *last_page;
+ struct ipc_log_page *write_page;
+ struct ipc_log_page *read_page;
+ struct ipc_log_page *nd_read_page;
+
+ uint32_t write_avail;
+ struct dentry *dent;
+ struct list_head dfunc_info_list;
+ spinlock_t context_lock_lhb1;
+ struct completion read_avail;
+};
+
+struct dfunc_info {
+ struct list_head list;
+ int type;
+ void (*dfunc)(struct encode_context *, struct decode_context *);
+};
+
+enum {
+ TSV_TYPE_INVALID,
+ TSV_TYPE_TIMESTAMP,
+ TSV_TYPE_POINTER,
+ TSV_TYPE_INT32,
+ TSV_TYPE_BYTE_ARRAY,
+ TSV_TYPE_QTIMER,
+};
+
+enum {
+ OUTPUT_DEBUGFS,
+};
+
+#define IPC_LOG_CONTEXT_MAGIC_NUM 0x25874452
+#define IPC_LOGGING_MAGIC_NUM 0x52784425
+#define MIN(x, y) ((x) < (y) ? (x) : (y))
+#define IS_MSG_TYPE(x) (((x) > TSV_TYPE_MSG_START) && \
+ ((x) < TSV_TYPE_MSG_END))
+#define MAX_MSG_DECODED_SIZE (MAX_MSG_SIZE*4)
+
+#ifdef CONFIG_DEBUG_FS
+void check_and_create_debugfs(void);
+
+void create_ctx_debugfs(struct ipc_log_context *ctxt,
+ const char *mod_name);
+#else
+static inline void check_and_create_debugfs(void)
+{
+}
+
+static inline void create_ctx_debugfs(struct ipc_log_context *ctxt,
+ const char *mod_name)
+{
+}
+#endif
+
+#endif
diff --git a/lib/Kconfig b/lib/Kconfig
index 260a80e..8b6c41e 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -550,4 +550,20 @@
config SBITMAP
bool
+config QMI_ENCDEC
+ bool "QMI Encode/Decode Library"
+ help
+ Library to encode and decode QMI messages from within
+ the kernel. Kernel drivers use it to encode a C structure
+ into the QMI wire format before sending it over a transport,
+ and to decode a received QMI message back into a C structure.
+
+config QMI_ENCDEC_DEBUG
+ bool "QMI Encode/Decode Library Debug"
+ help
+ Enable debug logging in the QMI Encode/Decode library.
+ This logs information about each element and message as
+ it is encoded and decoded.
+
endmenu
diff --git a/lib/Makefile b/lib/Makefile
index 50144a3..e0eb131 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -230,3 +230,4 @@
UBSAN_SANITIZE_ubsan.o := n
obj-$(CONFIG_SBITMAP) += sbitmap.o
+obj-$(CONFIG_QMI_ENCDEC) += qmi_encdec.o
diff --git a/lib/qmi_encdec.c b/lib/qmi_encdec.c
new file mode 100644
index 0000000..d7221d8
--- /dev/null
+++ b/lib/qmi_encdec.c
@@ -0,0 +1,877 @@
+/* Copyright (c) 2012-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/io.h>
+#include <linux/string.h>
+#include <linux/qmi_encdec.h>
+
+#include "qmi_encdec_priv.h"
+
+#define TLV_LEN_SIZE sizeof(uint16_t)
+#define TLV_TYPE_SIZE sizeof(uint8_t)
+#define OPTIONAL_TLV_TYPE_START 0x10
+
+#ifdef CONFIG_QMI_ENCDEC_DEBUG
+
+#define qmi_encdec_dump(prefix_str, buf, buf_len) do { \
+ const u8 *ptr = buf; \
+ int i, linelen, remaining = buf_len; \
+ int rowsize = 16, groupsize = 1; \
+ unsigned char linebuf[256]; \
+ for (i = 0; i < buf_len; i += rowsize) { \
+ linelen = min(remaining, rowsize); \
+ remaining -= linelen; \
+ hex_dump_to_buffer(ptr + i, linelen, rowsize, groupsize, \
+ linebuf, sizeof(linebuf), false); \
+ pr_debug("%s: %s\n", prefix_str, linebuf); \
+ } \
+} while (0)
+
+#define QMI_ENCODE_LOG_MSG(buf, buf_len) \
+ qmi_encdec_dump("QMI_ENCODE_MSG", buf, buf_len)
+
+#define QMI_DECODE_LOG_MSG(buf, buf_len) \
+ qmi_encdec_dump("QMI_DECODE_MSG", buf, buf_len)
+
+#define QMI_ENCODE_LOG_ELEM(level, elem_len, elem_size, buf) do { \
+ pr_debug("QMI_ENCODE_ELEM lvl: %d, len: %d, size: %d\n", \
+ level, elem_len, elem_size); \
+ qmi_encdec_dump("QMI_ENCODE_ELEM", buf, (elem_len * elem_size)); \
+} while (0)
+
+#define QMI_DECODE_LOG_ELEM(level, elem_len, elem_size, buf) do { \
+ pr_debug("QMI_DECODE_ELEM lvl: %d, len: %d, size: %d\n", \
+ level, elem_len, elem_size); \
+ qmi_encdec_dump("QMI_DECODE_ELEM", buf, (elem_len * elem_size)); \
+} while (0)
+
+#define QMI_ENCODE_LOG_TLV(tlv_type, tlv_len) \
+ pr_debug("QMI_ENCODE_TLV type: %d, len: %d\n", tlv_type, tlv_len)
+
+#define QMI_DECODE_LOG_TLV(tlv_type, tlv_len) \
+ pr_debug("QMI_DECODE_TLV type: %d, len: %d\n", tlv_type, tlv_len)
+
+#else
+
+#define QMI_ENCODE_LOG_MSG(buf, buf_len) { }
+#define QMI_DECODE_LOG_MSG(buf, buf_len) { }
+#define QMI_ENCODE_LOG_ELEM(level, elem_len, elem_size, buf) { }
+#define QMI_DECODE_LOG_ELEM(level, elem_len, elem_size, buf) { }
+#define QMI_ENCODE_LOG_TLV(tlv_type, tlv_len) { }
+#define QMI_DECODE_LOG_TLV(tlv_type, tlv_len) { }
+
+#endif
+
+static int _qmi_kernel_encode(struct elem_info *ei_array,
+ void *out_buf, void *in_c_struct,
+ uint32_t out_buf_len, int enc_level);
+
+static int _qmi_kernel_decode(struct elem_info *ei_array,
+ void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len,
+ int dec_level);
+static struct elem_info *skip_to_next_elem(struct elem_info *ei_array,
+ int level);
+
+/**
+ * qmi_calc_max_msg_len() - Calculate the maximum length of a QMI message
+ * @ei_array: Struct info array describing the structure.
+ * @level: Level to identify the depth of the nested structures.
+ *
+ * @return: expected maximum length of the QMI message or 0 on failure.
+ */
+static int qmi_calc_max_msg_len(struct elem_info *ei_array,
+ int level)
+{
+ int max_msg_len = 0;
+ struct elem_info *temp_ei;
+
+ if (!ei_array)
+ return max_msg_len;
+
+ for (temp_ei = ei_array; temp_ei->data_type != QMI_EOTI; temp_ei++) {
+		/* Optional element's validity flag is not encoded */
+ if (temp_ei->data_type == QMI_OPT_FLAG)
+ continue;
+
+ if (temp_ei->data_type == QMI_DATA_LEN) {
+ max_msg_len += (temp_ei->elem_size == sizeof(uint8_t) ?
+ sizeof(uint8_t) : sizeof(uint16_t));
+ continue;
+ } else if (temp_ei->data_type == QMI_STRUCT) {
+ max_msg_len += (temp_ei->elem_len *
+ qmi_calc_max_msg_len(temp_ei->ei_array,
+ (level + 1)));
+ } else if (temp_ei->data_type == QMI_STRING) {
+ if (level > 1)
+ max_msg_len += temp_ei->elem_len <= U8_MAX ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ max_msg_len += temp_ei->elem_len * temp_ei->elem_size;
+ } else {
+ max_msg_len += (temp_ei->elem_len * temp_ei->elem_size);
+ }
+
+ /*
+		 * Type & length information is not prepended for elements
+		 * in nested structures.
+ */
+ if (level == 1)
+ max_msg_len += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
+ }
+ return max_msg_len;
+}
+
+/**
+ * qmi_calc_min_msg_len() - Calculate the minimum length of a QMI message
+ * @ei_array: Struct info array describing the structure.
+ * @level: Level to identify the depth of the nested structures.
+ *
+ * @return: expected minimum length of the QMI message or 0 on failure.
+ */
+static int qmi_calc_min_msg_len(struct elem_info *ei_array,
+ int level)
+{
+ int min_msg_len = 0;
+ struct elem_info *temp_ei = ei_array;
+
+ if (!ei_array)
+ return min_msg_len;
+
+ while (temp_ei->data_type != QMI_EOTI) {
+ /* Optional elements do not count in minimum length */
+ if (temp_ei->data_type == QMI_OPT_FLAG) {
+ temp_ei = skip_to_next_elem(temp_ei, level);
+ continue;
+ }
+
+ if (temp_ei->data_type == QMI_DATA_LEN) {
+ min_msg_len += (temp_ei->elem_size == sizeof(uint8_t) ?
+ sizeof(uint8_t) : sizeof(uint16_t));
+ temp_ei++;
+ continue;
+ } else if (temp_ei->data_type == QMI_STRUCT) {
+ min_msg_len += qmi_calc_min_msg_len(temp_ei->ei_array,
+ (level + 1));
+ temp_ei++;
+ } else if (temp_ei->data_type == QMI_STRING) {
+ if (level > 1)
+ min_msg_len += temp_ei->elem_len <= U8_MAX ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ min_msg_len += temp_ei->elem_len * temp_ei->elem_size;
+ temp_ei++;
+ } else {
+ min_msg_len += (temp_ei->elem_len * temp_ei->elem_size);
+ temp_ei++;
+ }
+
+ /*
+		 * Type & length information is not prepended for elements
+		 * in nested structures.
+ */
+ if (level == 1)
+ min_msg_len += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
+ }
+ return min_msg_len;
+}
+
+/**
+ * qmi_verify_max_msg_len() - Verify the maximum length of a QMI message
+ * @desc: Pointer to structure descriptor.
+ *
+ * @return: true if the maximum message length embedded in structure
+ * descriptor matches the calculated value, else false.
+ */
+bool qmi_verify_max_msg_len(struct msg_desc *desc)
+{
+ int calc_max_msg_len;
+
+ if (!desc)
+ return false;
+
+ calc_max_msg_len = qmi_calc_max_msg_len(desc->ei_array, 1);
+ if (calc_max_msg_len != desc->max_msg_len) {
+ pr_err("%s: Calc. len %d != Passed len %d\n",
+ __func__, calc_max_msg_len, desc->max_msg_len);
+ return false;
+ }
+ return true;
+}
+
+/**
+ * qmi_kernel_encode() - Encode to QMI message wire format
+ * @desc: Pointer to structure descriptor.
+ * @out_buf: Buffer to hold the encoded QMI message.
+ * @out_buf_len: Length of the out buffer.
+ * @in_c_struct: C Structure to be encoded.
+ *
+ * @return: size of encoded message on success, < 0 for error.
+ */
+int qmi_kernel_encode(struct msg_desc *desc,
+ void *out_buf, uint32_t out_buf_len,
+ void *in_c_struct)
+{
+ int enc_level = 1;
+ int ret, calc_max_msg_len, calc_min_msg_len;
+
+ if (!desc)
+ return -EINVAL;
+
+ /* Check the possibility of a zero length QMI message */
+ if (!in_c_struct) {
+ calc_min_msg_len = qmi_calc_min_msg_len(desc->ei_array, 1);
+ if (calc_min_msg_len) {
+ pr_err("%s: Calc. len %d != 0, but NULL in_c_struct\n",
+ __func__, calc_min_msg_len);
+ return -EINVAL;
+ } else {
+ return 0;
+ }
+ }
+
+ /*
+ * Not a zero-length message. Ensure the output buffer and
+ * element information array are not NULL.
+ */
+ if (!out_buf || !desc->ei_array)
+ return -EINVAL;
+
+ if (desc->max_msg_len < out_buf_len)
+ return -ETOOSMALL;
+
+ ret = _qmi_kernel_encode(desc->ei_array, out_buf,
+ in_c_struct, out_buf_len, enc_level);
+ if (ret == -ETOOSMALL) {
+ calc_max_msg_len = qmi_calc_max_msg_len(desc->ei_array, 1);
+ pr_err("%s: Calc. len %d != Out buf len %d\n",
+ __func__, calc_max_msg_len, out_buf_len);
+ }
+ return ret;
+}
+EXPORT_SYMBOL(qmi_kernel_encode);
+
+/**
+ * qmi_encode_basic_elem() - Encodes elements of basic/primary data type
+ * @buf_dst: Buffer to store the encoded information.
+ * @buf_src: Buffer containing the elements to be encoded.
+ * @elem_len: Number of elements, in the buf_src, to be encoded.
+ * @elem_size: Size of a single instance of the element to be encoded.
+ *
+ * @return: number of bytes of encoded information.
+ *
+ * This function encodes the "elem_len" number of data elements, each of
+ * size "elem_size" bytes from the source buffer "buf_src" and stores the
+ * encoded information in the destination buffer "buf_dst". The elements are
+ * of primary data type which include uint8_t - uint64_t or similar. This
+ * function returns the number of bytes of encoded information.
+ */
+static int qmi_encode_basic_elem(void *buf_dst, void *buf_src,
+ uint32_t elem_len, uint32_t elem_size)
+{
+ uint32_t i, rc = 0;
+
+ for (i = 0; i < elem_len; i++) {
+ QMI_ENCDEC_ENCODE_N_BYTES(buf_dst, buf_src, elem_size);
+ rc += elem_size;
+ }
+
+ return rc;
+}
+
+/**
+ * qmi_encode_struct_elem() - Encodes elements of struct data type
+ * @ei_array: Struct info array describing the struct element.
+ * @buf_dst: Buffer to store the encoded information.
+ * @buf_src: Buffer containing the elements to be encoded.
+ * @elem_len: Number of elements, in the buf_src, to be encoded.
+ * @out_buf_len: Available space in the encode buffer.
+ * @enc_level: Depth of the nested structure from the main structure.
+ *
+ * @return: Number of bytes of encoded information, on success.
+ * < 0 on error.
+ *
+ * This function encodes the "elem_len" number of struct elements, each of
+ * size "ei_array->elem_size" bytes from the source buffer "buf_src" and
+ * stores the encoded information in the destination buffer "buf_dst". The
+ * elements are of struct data type which includes any C structure. This
+ * function returns the number of bytes of encoded information.
+ */
+static int qmi_encode_struct_elem(struct elem_info *ei_array,
+ void *buf_dst, void *buf_src,
+ uint32_t elem_len, uint32_t out_buf_len,
+ int enc_level)
+{
+ int i, rc, encoded_bytes = 0;
+ struct elem_info *temp_ei = ei_array;
+
+ for (i = 0; i < elem_len; i++) {
+ rc = _qmi_kernel_encode(temp_ei->ei_array, buf_dst, buf_src,
+ (out_buf_len - encoded_bytes),
+ enc_level);
+ if (rc < 0) {
+ pr_err("%s: STRUCT Encode failure\n", __func__);
+ return rc;
+ }
+ buf_dst = buf_dst + rc;
+ buf_src = buf_src + temp_ei->elem_size;
+ encoded_bytes += rc;
+ }
+
+ return encoded_bytes;
+}
+
+/**
+ * qmi_encode_string_elem() - Encodes elements of string data type
+ * @ei_array: Struct info array describing the string element.
+ * @buf_dst: Buffer to store the encoded information.
+ * @buf_src: Buffer containing the elements to be encoded.
+ * @out_buf_len: Available space in the encode buffer.
+ * @enc_level: Depth of the string element from the main structure.
+ *
+ * @return: Number of bytes of encoded information, on success.
+ * < 0 on error.
+ *
+ * This function encodes a string element of maximum length "ei_array->elem_len"
+ * bytes from the source buffer "buf_src" and stores the encoded information in
+ * the destination buffer "buf_dst". This function returns the number of bytes
+ * of encoded information.
+ */
+static int qmi_encode_string_elem(struct elem_info *ei_array,
+ void *buf_dst, void *buf_src,
+ uint32_t out_buf_len, int enc_level)
+{
+ int rc;
+ int encoded_bytes = 0;
+ struct elem_info *temp_ei = ei_array;
+ uint32_t string_len = 0;
+ uint32_t string_len_sz = 0;
+
+ string_len = strlen(buf_src);
+ string_len_sz = temp_ei->elem_len <= U8_MAX ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ if (string_len > temp_ei->elem_len) {
+ pr_err("%s: String to be encoded is longer - %d > %d\n",
+ __func__, string_len, temp_ei->elem_len);
+ return -EINVAL;
+ }
+
+ if (enc_level == 1) {
+ if (string_len + TLV_LEN_SIZE + TLV_TYPE_SIZE >
+ out_buf_len) {
+ pr_err("%s: Output len %d > Out Buf len %d\n",
+ __func__, string_len, out_buf_len);
+ return -ETOOSMALL;
+ }
+ } else {
+ if (string_len + string_len_sz > out_buf_len) {
+ pr_err("%s: Output len %d > Out Buf len %d\n",
+ __func__, string_len, out_buf_len);
+ return -ETOOSMALL;
+ }
+ rc = qmi_encode_basic_elem(buf_dst, &string_len,
+ 1, string_len_sz);
+ encoded_bytes += rc;
+ }
+
+ rc = qmi_encode_basic_elem(buf_dst + encoded_bytes, buf_src,
+ string_len, temp_ei->elem_size);
+ encoded_bytes += rc;
+ QMI_ENCODE_LOG_ELEM(enc_level, string_len, temp_ei->elem_size, buf_src);
+ return encoded_bytes;
+}
+
+/**
+ * skip_to_next_elem() - Skip to next element in the structure to be encoded
+ * @ei_array: Struct info describing the element to be skipped.
+ * @level: Depth level of encoding/decoding to identify nested structures.
+ *
+ * @return: Struct info of the next element that can be encoded.
+ *
+ * This function is used while encoding optional elements. If the flag
+ * corresponding to an optional element is not set, then encoding the
+ * optional element can be skipped. This function can be used to perform
+ * that operation.
+ */
+static struct elem_info *skip_to_next_elem(struct elem_info *ei_array,
+ int level)
+{
+ struct elem_info *temp_ei = ei_array;
+ uint8_t tlv_type;
+
+ if (level > 1) {
+ temp_ei = temp_ei + 1;
+ } else {
+ do {
+ tlv_type = temp_ei->tlv_type;
+ temp_ei = temp_ei + 1;
+ } while (tlv_type == temp_ei->tlv_type);
+ }
+
+ return temp_ei;
+}
+
+/**
+ * _qmi_kernel_encode() - Core Encode Function
+ * @ei_array: Struct info array describing the structure to be encoded.
+ * @out_buf: Buffer to hold the encoded QMI message.
+ * @in_c_struct: Pointer to the C structure to be encoded.
+ * @out_buf_len: Available space in the encode buffer.
+ * @enc_level: Encode level to indicate the depth of the nested structure,
+ * within the main structure, being encoded.
+ *
+ * @return: Number of bytes of encoded information, on success.
+ * < 0 on error.
+ */
+static int _qmi_kernel_encode(struct elem_info *ei_array,
+ void *out_buf, void *in_c_struct,
+ uint32_t out_buf_len, int enc_level)
+{
+ struct elem_info *temp_ei = ei_array;
+ uint8_t opt_flag_value = 0;
+ uint32_t data_len_value = 0, data_len_sz;
+ uint8_t *buf_dst = (uint8_t *)out_buf;
+ uint8_t *tlv_pointer;
+ uint32_t tlv_len;
+ uint8_t tlv_type;
+ uint32_t encoded_bytes = 0;
+ void *buf_src;
+ int encode_tlv = 0;
+ int rc;
+
+ tlv_pointer = buf_dst;
+ tlv_len = 0;
+ if (enc_level == 1)
+ buf_dst = buf_dst + (TLV_LEN_SIZE + TLV_TYPE_SIZE);
+
+ while (temp_ei->data_type != QMI_EOTI) {
+ buf_src = in_c_struct + temp_ei->offset;
+ tlv_type = temp_ei->tlv_type;
+
+ if (temp_ei->is_array == NO_ARRAY) {
+ data_len_value = 1;
+ } else if (temp_ei->is_array == STATIC_ARRAY) {
+ data_len_value = temp_ei->elem_len;
+ } else if (data_len_value <= 0 ||
+ temp_ei->elem_len < data_len_value) {
+ pr_err("%s: Invalid data length\n", __func__);
+ return -EINVAL;
+ }
+
+ switch (temp_ei->data_type) {
+ case QMI_OPT_FLAG:
+ rc = qmi_encode_basic_elem(&opt_flag_value, buf_src,
+ 1, sizeof(uint8_t));
+ if (opt_flag_value)
+ temp_ei = temp_ei + 1;
+ else
+ temp_ei = skip_to_next_elem(temp_ei, enc_level);
+ break;
+
+ case QMI_DATA_LEN:
+ memcpy(&data_len_value, buf_src, temp_ei->elem_size);
+ data_len_sz = temp_ei->elem_size == sizeof(uint8_t) ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ /* Check to avoid out of range buffer access */
+ if ((data_len_sz + encoded_bytes + TLV_LEN_SIZE +
+ TLV_TYPE_SIZE) > out_buf_len) {
+ pr_err("%s: Too Small Buffer @DATA_LEN\n",
+ __func__);
+ return -ETOOSMALL;
+ }
+ rc = qmi_encode_basic_elem(buf_dst, &data_len_value,
+ 1, data_len_sz);
+ UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ encoded_bytes, tlv_len, encode_tlv, rc);
+ if (!data_len_value)
+ temp_ei = skip_to_next_elem(temp_ei, enc_level);
+ else
+ encode_tlv = 0;
+ break;
+
+ case QMI_UNSIGNED_1_BYTE:
+ case QMI_UNSIGNED_2_BYTE:
+ case QMI_UNSIGNED_4_BYTE:
+ case QMI_UNSIGNED_8_BYTE:
+ case QMI_SIGNED_2_BYTE_ENUM:
+ case QMI_SIGNED_4_BYTE_ENUM:
+ /* Check to avoid out of range buffer access */
+ if (((data_len_value * temp_ei->elem_size) +
+ encoded_bytes + TLV_LEN_SIZE + TLV_TYPE_SIZE) >
+ out_buf_len) {
+ pr_err("%s: Too Small Buffer @data_type:%d\n",
+ __func__, temp_ei->data_type);
+ return -ETOOSMALL;
+ }
+ rc = qmi_encode_basic_elem(buf_dst, buf_src,
+ data_len_value, temp_ei->elem_size);
+ QMI_ENCODE_LOG_ELEM(enc_level, data_len_value,
+ temp_ei->elem_size, buf_src);
+ UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ encoded_bytes, tlv_len, encode_tlv, rc);
+ break;
+
+ case QMI_STRUCT:
+ rc = qmi_encode_struct_elem(temp_ei, buf_dst, buf_src,
+ data_len_value, (out_buf_len - encoded_bytes),
+ (enc_level + 1));
+ if (rc < 0)
+ return rc;
+ UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ encoded_bytes, tlv_len, encode_tlv, rc);
+ break;
+
+ case QMI_STRING:
+ rc = qmi_encode_string_elem(temp_ei, buf_dst, buf_src,
+ out_buf_len - encoded_bytes, enc_level);
+ if (rc < 0)
+ return rc;
+ UPDATE_ENCODE_VARIABLES(temp_ei, buf_dst,
+ encoded_bytes, tlv_len, encode_tlv, rc);
+ break;
+ default:
+ pr_err("%s: Unrecognized data type\n", __func__);
+ return -EINVAL;
+
+ }
+
+ if (encode_tlv && enc_level == 1) {
+ QMI_ENCDEC_ENCODE_TLV(tlv_type, tlv_len, tlv_pointer);
+ QMI_ENCODE_LOG_TLV(tlv_type, tlv_len);
+ encoded_bytes += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
+ tlv_pointer = buf_dst;
+ tlv_len = 0;
+ buf_dst = buf_dst + TLV_LEN_SIZE + TLV_TYPE_SIZE;
+ encode_tlv = 0;
+ }
+ }
+ QMI_ENCODE_LOG_MSG(out_buf, encoded_bytes);
+ return encoded_bytes;
+}
+
+/**
+ * qmi_kernel_decode() - Decode to C Structure format
+ * @desc: Pointer to structure descriptor.
+ * @out_c_struct: Buffer to hold the decoded C structure.
+ * @in_buf: Buffer containing the QMI message to be decoded.
+ * @in_buf_len: Length of the incoming QMI message.
+ *
+ * @return: 0 on success, < 0 on error.
+ */
+int qmi_kernel_decode(struct msg_desc *desc, void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len)
+{
+ int dec_level = 1;
+ int rc = 0;
+
+ if (!desc || !desc->ei_array)
+ return -EINVAL;
+
+ if (!out_c_struct || !in_buf || !in_buf_len)
+ return -EINVAL;
+
+ if (desc->max_msg_len < in_buf_len)
+ return -EINVAL;
+
+ rc = _qmi_kernel_decode(desc->ei_array, out_c_struct,
+ in_buf, in_buf_len, dec_level);
+ if (rc < 0)
+ return rc;
+ else
+ return 0;
+}
+EXPORT_SYMBOL(qmi_kernel_decode);
+
+/**
+ * qmi_decode_basic_elem() - Decodes elements of basic/primary data type
+ * @buf_dst: Buffer to store the decoded element.
+ * @buf_src: Buffer containing the elements in QMI wire format.
+ * @elem_len: Number of elements to be decoded.
+ * @elem_size: Size of a single instance of the element to be decoded.
+ *
+ * @return: Total size of the decoded data elements, in bytes.
+ *
+ * This function decodes the "elem_len" number of elements in QMI wire format,
+ * each of size "elem_size" bytes from the source buffer "buf_src" and stores
+ * the decoded elements in the destination buffer "buf_dst". The elements are
+ * of primary data type which include uint8_t - uint64_t or similar. This
+ * function returns the number of bytes of decoded information.
+ */
+static int qmi_decode_basic_elem(void *buf_dst, void *buf_src,
+ uint32_t elem_len, uint32_t elem_size)
+{
+ uint32_t i, rc = 0;
+
+ for (i = 0; i < elem_len; i++) {
+ QMI_ENCDEC_DECODE_N_BYTES(buf_dst, buf_src, elem_size);
+ rc += elem_size;
+ }
+
+ return rc;
+}
+
+/**
+ * qmi_decode_struct_elem() - Decodes elements of struct data type
+ * @ei_array: Struct info array describing the struct element.
+ * @buf_dst: Buffer to store the decoded element.
+ * @buf_src: Buffer containing the elements in QMI wire format.
+ * @elem_len: Number of elements to be decoded.
+ * @tlv_len: Total size of the encoded information corresponding to
+ * this struct element.
+ * @dec_level: Depth of the nested structure from the main structure.
+ *
+ * @return: Total size of the decoded data elements, on success.
+ * < 0 on error.
+ *
+ * This function decodes the "elem_len" number of elements in QMI wire format,
+ * each of size "(tlv_len/elem_len)" bytes from the source buffer "buf_src"
+ * and stores the decoded elements in the destination buffer "buf_dst". The
+ * elements are of struct data type which includes any C structure. This
+ * function returns the number of bytes of decoded information.
+ */
+static int qmi_decode_struct_elem(struct elem_info *ei_array, void *buf_dst,
+ void *buf_src, uint32_t elem_len,
+ uint32_t tlv_len, int dec_level)
+{
+ int i, rc, decoded_bytes = 0;
+ struct elem_info *temp_ei = ei_array;
+
+ for (i = 0; i < elem_len && decoded_bytes < tlv_len; i++) {
+ rc = _qmi_kernel_decode(temp_ei->ei_array, buf_dst, buf_src,
+ (tlv_len - decoded_bytes), dec_level);
+ if (rc < 0)
+ return rc;
+ buf_src = buf_src + rc;
+ buf_dst = buf_dst + temp_ei->elem_size;
+ decoded_bytes += rc;
+ }
+
+ if ((dec_level <= 2 && decoded_bytes != tlv_len) ||
+ (dec_level > 2 && (i < elem_len || decoded_bytes > tlv_len))) {
+ pr_err("%s: Fault in decoding: dl(%d), db(%d), tl(%d), i(%d), el(%d)\n",
+ __func__, dec_level, decoded_bytes, tlv_len,
+ i, elem_len);
+ return -EFAULT;
+ }
+ return decoded_bytes;
+}
+
+/**
+ * qmi_decode_string_elem() - Decodes elements of string data type
+ * @ei_array: Struct info array describing the string element.
+ * @buf_dst: Buffer to store the decoded element.
+ * @buf_src: Buffer containing the elements in QMI wire format.
+ * @tlv_len: Total size of the encoded information corresponding to
+ * this string element.
+ * @dec_level: Depth of the string element from the main structure.
+ *
+ * @return: Total size of the decoded data elements, on success.
+ * < 0 on error.
+ *
+ * This function decodes the string element of maximum length
+ * "ei_array->elem_len" from the source buffer "buf_src" and puts it into
+ * the destination buffer "buf_dst". This function returns number of bytes
+ * decoded from the input buffer.
+ */
+static int qmi_decode_string_elem(struct elem_info *ei_array, void *buf_dst,
+ void *buf_src, uint32_t tlv_len,
+ int dec_level)
+{
+ int rc;
+ int decoded_bytes = 0;
+ uint32_t string_len = 0;
+ uint32_t string_len_sz = 0;
+ struct elem_info *temp_ei = ei_array;
+
+ if (dec_level == 1) {
+ string_len = tlv_len;
+ } else {
+ string_len_sz = temp_ei->elem_len <= U8_MAX ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ rc = qmi_decode_basic_elem(&string_len, buf_src,
+ 1, string_len_sz);
+ decoded_bytes += rc;
+ }
+
+ if (string_len > temp_ei->elem_len) {
+ pr_err("%s: String len %d > Max Len %d\n",
+ __func__, string_len, temp_ei->elem_len);
+ return -ETOOSMALL;
+ } else if (string_len > tlv_len) {
+ pr_err("%s: String len %d > Input Buffer Len %d\n",
+ __func__, string_len, tlv_len);
+ return -EFAULT;
+ }
+
+ rc = qmi_decode_basic_elem(buf_dst, buf_src + decoded_bytes,
+ string_len, temp_ei->elem_size);
+ *((char *)buf_dst + string_len) = '\0';
+ decoded_bytes += rc;
+ QMI_DECODE_LOG_ELEM(dec_level, string_len, temp_ei->elem_size, buf_dst);
+ return decoded_bytes;
+}
+
+/**
+ * find_ei() - Find element info corresponding to TLV Type
+ * @ei_array: Struct info array of the message being decoded.
+ * @type: TLV Type of the element being searched.
+ *
+ * @return: Pointer to struct info, if found
+ *
+ * Every element that got encoded in the QMI message will have a type
+ * information associated with it. While decoding the QMI message,
+ * this function is used to find the struct info regarding the element
+ * that corresponds to the type being decoded.
+ */
+static struct elem_info *find_ei(struct elem_info *ei_array,
+ uint32_t type)
+{
+ struct elem_info *temp_ei = ei_array;
+
+ while (temp_ei->data_type != QMI_EOTI) {
+ if (temp_ei->tlv_type == (uint8_t)type)
+ return temp_ei;
+ temp_ei = temp_ei + 1;
+ }
+ return NULL;
+}
+
+/**
+ * _qmi_kernel_decode() - Core Decode Function
+ * @ei_array: Struct info array describing the structure to be decoded.
+ * @out_c_struct: Buffer to hold the decoded C struct
+ * @in_buf: Buffer containing the QMI message to be decoded
+ * @in_buf_len: Length of the QMI message to be decoded
+ * @dec_level: Decode level to indicate the depth of the nested structure,
+ * within the main structure, being decoded
+ *
+ * @return: Number of bytes of decoded information, on success
+ * < 0 on error.
+ */
+static int _qmi_kernel_decode(struct elem_info *ei_array,
+ void *out_c_struct,
+ void *in_buf, uint32_t in_buf_len,
+ int dec_level)
+{
+ struct elem_info *temp_ei = ei_array;
+ uint8_t opt_flag_value = 1;
+ uint32_t data_len_value = 0, data_len_sz = 0;
+ uint8_t *buf_dst = out_c_struct;
+ uint8_t *tlv_pointer;
+ uint32_t tlv_len = 0;
+ uint32_t tlv_type;
+ uint32_t decoded_bytes = 0;
+ void *buf_src = in_buf;
+ int rc;
+
+ QMI_DECODE_LOG_MSG(in_buf, in_buf_len);
+ while (decoded_bytes < in_buf_len) {
+ if (dec_level >= 2 && temp_ei->data_type == QMI_EOTI)
+ return decoded_bytes;
+
+ if (dec_level == 1) {
+ tlv_pointer = buf_src;
+ QMI_ENCDEC_DECODE_TLV(&tlv_type,
+ &tlv_len, tlv_pointer);
+ QMI_DECODE_LOG_TLV(tlv_type, tlv_len);
+ buf_src += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
+ decoded_bytes += (TLV_TYPE_SIZE + TLV_LEN_SIZE);
+ temp_ei = find_ei(ei_array, tlv_type);
+ if (!temp_ei && (tlv_type < OPTIONAL_TLV_TYPE_START)) {
+ pr_err("%s: Inval element info\n", __func__);
+ return -EINVAL;
+ } else if (!temp_ei) {
+ UPDATE_DECODE_VARIABLES(buf_src,
+ decoded_bytes, tlv_len);
+ continue;
+ }
+ } else {
+ /*
+ * No length information for elements in nested
+ * structures. So use remaining decodable buffer space.
+ */
+ tlv_len = in_buf_len - decoded_bytes;
+ }
+
+ buf_dst = out_c_struct + temp_ei->offset;
+ if (temp_ei->data_type == QMI_OPT_FLAG) {
+ memcpy(buf_dst, &opt_flag_value, sizeof(uint8_t));
+ temp_ei = temp_ei + 1;
+ buf_dst = out_c_struct + temp_ei->offset;
+ }
+
+ if (temp_ei->data_type == QMI_DATA_LEN) {
+ data_len_sz = temp_ei->elem_size == sizeof(uint8_t) ?
+ sizeof(uint8_t) : sizeof(uint16_t);
+ rc = qmi_decode_basic_elem(&data_len_value, buf_src,
+ 1, data_len_sz);
+ memcpy(buf_dst, &data_len_value, sizeof(uint32_t));
+ temp_ei = temp_ei + 1;
+ buf_dst = out_c_struct + temp_ei->offset;
+ tlv_len -= data_len_sz;
+ UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
+ }
+
+ if (temp_ei->is_array == NO_ARRAY) {
+ data_len_value = 1;
+ } else if (temp_ei->is_array == STATIC_ARRAY) {
+ data_len_value = temp_ei->elem_len;
+ } else if (data_len_value > temp_ei->elem_len) {
+ pr_err("%s: Data len %d > max spec %d\n",
+ __func__, data_len_value, temp_ei->elem_len);
+ return -ETOOSMALL;
+ }
+
+ switch (temp_ei->data_type) {
+ case QMI_UNSIGNED_1_BYTE:
+ case QMI_UNSIGNED_2_BYTE:
+ case QMI_UNSIGNED_4_BYTE:
+ case QMI_UNSIGNED_8_BYTE:
+ case QMI_SIGNED_2_BYTE_ENUM:
+ case QMI_SIGNED_4_BYTE_ENUM:
+ rc = qmi_decode_basic_elem(buf_dst, buf_src,
+ data_len_value, temp_ei->elem_size);
+ QMI_DECODE_LOG_ELEM(dec_level, data_len_value,
+ temp_ei->elem_size, buf_dst);
+ UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
+ break;
+
+ case QMI_STRUCT:
+ rc = qmi_decode_struct_elem(temp_ei, buf_dst, buf_src,
+ data_len_value, tlv_len, (dec_level + 1));
+ if (rc < 0)
+ return rc;
+ UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
+ break;
+
+ case QMI_STRING:
+ rc = qmi_decode_string_elem(temp_ei, buf_dst, buf_src,
+ tlv_len, dec_level);
+ if (rc < 0)
+ return rc;
+ UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc);
+ break;
+
+ default:
+ pr_err("%s: Unrecognized data type\n", __func__);
+ return -EINVAL;
+ }
+ temp_ei = temp_ei + 1;
+ }
+ return decoded_bytes;
+}
+MODULE_DESCRIPTION("QMI kernel enc/dec");
+MODULE_LICENSE("GPL v2");
diff --git a/lib/qmi_encdec_priv.h b/lib/qmi_encdec_priv.h
new file mode 100644
index 0000000..97fe45b
--- /dev/null
+++ b/lib/qmi_encdec_priv.h
@@ -0,0 +1,66 @@
+/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _QMI_ENCDEC_PRIV_H_
+#define _QMI_ENCDEC_PRIV_H_
+
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/socket.h>
+#include <linux/gfp.h>
+#include <linux/qmi_encdec.h>
+
+#define QMI_ENCDEC_ENCODE_TLV(type, length, p_dst) do { \
+ *p_dst++ = type; \
+ *p_dst++ = ((uint8_t)((length) & 0xFF)); \
+ *p_dst++ = ((uint8_t)(((length) >> 8) & 0xFF)); \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_TLV(p_type, p_length, p_src) do { \
+ *p_type = (uint8_t)*p_src++; \
+ *p_length = (uint8_t)*p_src++; \
+ *p_length |= ((uint8_t)*p_src) << 8; \
+} while (0)
+
+#define QMI_ENCDEC_ENCODE_N_BYTES(p_dst, p_src, size) \
+do { \
+ memcpy(p_dst, p_src, size); \
+ p_dst = (uint8_t *)p_dst + size; \
+ p_src = (uint8_t *)p_src + size; \
+} while (0)
+
+#define QMI_ENCDEC_DECODE_N_BYTES(p_dst, p_src, size) \
+do { \
+ memcpy(p_dst, p_src, size); \
+ p_dst = (uint8_t *)p_dst + size; \
+ p_src = (uint8_t *)p_src + size; \
+} while (0)
+
+#define UPDATE_ENCODE_VARIABLES(temp_si, buf_dst, \
+ encoded_bytes, tlv_len, encode_tlv, rc) \
+do { \
+ buf_dst = (uint8_t *)buf_dst + rc; \
+ encoded_bytes += rc; \
+ tlv_len += rc; \
+ temp_si = temp_si + 1; \
+ encode_tlv = 1; \
+} while (0)
+
+#define UPDATE_DECODE_VARIABLES(buf_src, decoded_bytes, rc) \
+do { \
+ buf_src = (uint8_t *)buf_src + rc; \
+ decoded_bytes += rc; \
+} while (0)
+
+#endif
diff --git a/net/Kconfig b/net/Kconfig
index 8e8a857..cd20118 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -429,6 +429,8 @@
on MAY_USE_DEVLINK to ensure they do not cause link errors when
devlink is a loadable module and the driver using it is built-in.
+source "net/ipc_router/Kconfig"
+
endif # if NET
# Used by archs to tell that they support BPF JIT compiler plus which flavour.
diff --git a/net/Makefile b/net/Makefile
index 245aef0d5..c84a347 100644
--- a/net/Makefile
+++ b/net/Makefile
@@ -82,3 +82,4 @@
obj-$(CONFIG_QRTR) += qrtr/
obj-$(CONFIG_NET_NCSI) += ncsi/
obj-$(CONFIG_RMNET_DATA) += rmnet_data/
+obj-$(CONFIG_IPC_ROUTER) += ipc_router/
diff --git a/net/ipc_router/Kconfig b/net/ipc_router/Kconfig
new file mode 100644
index 0000000..30cd45a
--- /dev/null
+++ b/net/ipc_router/Kconfig
@@ -0,0 +1,25 @@
+#
+# IPC_ROUTER Configuration
+#
+
+menuconfig IPC_ROUTER
+ bool "IPC Router support"
+ help
+ IPC Router provides a connectionless message routing service
+	  between multiple modules within a System-on-Chip (SoC). The
+	  communicating entities can run either on the same processor or
+	  on different processors within the SoC. IPC Router is designed
+	  to route messages of any type and to support a broader network
+	  of processors.
+
+ If in doubt, say N.
+
+config IPC_ROUTER_SECURITY
+ depends on IPC_ROUTER
+ bool "IPC Router Security support"
+ help
+	  This feature enforces security rules configured by a security
+	  script from user-space. Once configured with the security
+	  rules, IPC Router ensures that the sender of a message to a
+	  service belongs to the Linux group configured for that service
+	  by the security script.
diff --git a/net/ipc_router/Makefile b/net/ipc_router/Makefile
new file mode 100644
index 0000000..501688e
--- /dev/null
+++ b/net/ipc_router/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the Linux IPC_ROUTER
+#
+
+obj-$(CONFIG_IPC_ROUTER) := ipc_router_core.o
+obj-$(CONFIG_IPC_ROUTER) += ipc_router_socket.o
+obj-$(CONFIG_IPC_ROUTER_SECURITY) += ipc_router_security.o
diff --git a/net/ipc_router/ipc_router_core.c b/net/ipc_router/ipc_router_core.c
new file mode 100644
index 0000000..cdf372f
--- /dev/null
+++ b/net/ipc_router/ipc_router_core.c
@@ -0,0 +1,4334 @@
+/* Copyright (c) 2011-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/sched.h>
+#include <linux/poll.h>
+#include <linux/pm.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/debugfs.h>
+#include <linux/rwsem.h>
+#include <linux/ipc_logging.h>
+#include <linux/uaccess.h>
+#include <linux/ipc_router.h>
+#include <linux/ipc_router_xprt.h>
+#include <linux/kref.h>
+#include <soc/qcom/subsystem_notif.h>
+#include <soc/qcom/subsystem_restart.h>
+
+#include <asm/byteorder.h>
+
+#include "ipc_router_private.h"
+#include "ipc_router_security.h"
+
+enum {
+ SMEM_LOG = 1U << 0,
+ RTR_DBG = 1U << 1,
+};
+
+static int msm_ipc_router_debug_mask;
+module_param_named(debug_mask, msm_ipc_router_debug_mask,
+ int, 0664);
+#define MODULE_NAME "ipc_router"
+
+#define IPC_RTR_INFO_PAGES 6
+
+#define IPC_RTR_INFO(log_ctx, x...) do { \
+	typeof(log_ctx) _log_ctx = (log_ctx); \
+	if (_log_ctx) \
+		ipc_log_string(_log_ctx, x); \
+	if (msm_ipc_router_debug_mask & RTR_DBG) \
+		pr_info("[IPCRTR] "x); \
+} while (0)
+
+#define IPC_ROUTER_LOG_EVENT_TX 0x01
+#define IPC_ROUTER_LOG_EVENT_RX 0x02
+#define IPC_ROUTER_LOG_EVENT_TX_ERR 0x03
+#define IPC_ROUTER_LOG_EVENT_RX_ERR 0x04
+#define IPC_ROUTER_DUMMY_DEST_NODE 0xFFFFFFFF
+
+#define ipc_port_sk(port) ((struct sock *)(port))
+
+static LIST_HEAD(control_ports);
+static DECLARE_RWSEM(control_ports_lock_lha5);
+
+#define LP_HASH_SIZE 32
+static struct list_head local_ports[LP_HASH_SIZE];
+static DECLARE_RWSEM(local_ports_lock_lhc2);
+
+/* Server info is organized as a hash table. The server's service ID is
+ * used to index into the hash table. The instance IDs of most servers
+ * are 1 or 2, while the service IDs are well distributed, so using the
+ * service ID as the hash key optimizes hash table operations such as
+ * add, lookup, and destroy.
+ */
+#define SRV_HASH_SIZE 32
+static struct list_head server_list[SRV_HASH_SIZE];
+static DECLARE_RWSEM(server_list_lock_lha2);
+
+struct msm_ipc_server {
+ struct list_head list;
+ struct kref ref;
+ struct msm_ipc_port_name name;
+ char pdev_name[32];
+ int next_pdev_id;
+ int synced_sec_rule;
+ struct list_head server_port_list;
+};
+
+struct msm_ipc_server_port {
+ struct list_head list;
+ struct platform_device *pdev;
+ struct msm_ipc_port_addr server_addr;
+ struct msm_ipc_router_xprt_info *xprt_info;
+};
+
+struct msm_ipc_resume_tx_port {
+ struct list_head list;
+ u32 port_id;
+ u32 node_id;
+};
+
+struct ipc_router_conn_info {
+ struct list_head list;
+ u32 port_id;
+};
+
+enum {
+ RESET = 0,
+ VALID = 1,
+};
+
+#define RP_HASH_SIZE 32
+struct msm_ipc_router_remote_port {
+ struct list_head list;
+ struct kref ref;
+ struct mutex rport_lock_lhb2; /* lock for remote port state access */
+ u32 node_id;
+ u32 port_id;
+ int status;
+ u32 tx_quota_cnt;
+ struct list_head resume_tx_port_list;
+ struct list_head conn_info_list;
+ void *sec_rule;
+ struct msm_ipc_server *server;
+};
+
+struct msm_ipc_router_xprt_info {
+ struct list_head list;
+ struct msm_ipc_router_xprt *xprt;
+ u32 remote_node_id;
+ u32 initialized;
+ struct list_head pkt_list;
+ struct wakeup_source ws;
+ struct mutex rx_lock_lhb2; /* lock for xprt rx operations */
+ struct mutex tx_lock_lhb2; /* lock for xprt tx operations */
+ u32 need_len;
+ u32 abort_data_read;
+ struct work_struct read_data;
+ struct workqueue_struct *workqueue;
+ void *log_ctx;
+ struct kref ref;
+ struct completion ref_complete;
+};
+
+#define RT_HASH_SIZE 4
+struct msm_ipc_routing_table_entry {
+ struct list_head list;
+ struct kref ref;
+ u32 node_id;
+ u32 neighbor_node_id;
+ struct list_head remote_port_list[RP_HASH_SIZE];
+ struct msm_ipc_router_xprt_info *xprt_info;
+ struct rw_semaphore lock_lha4;
+ unsigned long num_tx_bytes;
+ unsigned long num_rx_bytes;
+};
+
+#define LOG_CTX_NAME_LEN 32
+struct ipc_rtr_log_ctx {
+ struct list_head list;
+ char log_ctx_name[LOG_CTX_NAME_LEN];
+ void *log_ctx;
+};
+
+static struct list_head routing_table[RT_HASH_SIZE];
+static DECLARE_RWSEM(routing_table_lock_lha3);
+static int routing_table_inited;
+
+static void do_read_data(struct work_struct *work);
+
+static LIST_HEAD(xprt_info_list);
+static DECLARE_RWSEM(xprt_info_list_lock_lha5);
+
+static DEFINE_MUTEX(log_ctx_list_lock_lha0);
+static LIST_HEAD(log_ctx_list);
+static DEFINE_MUTEX(ipc_router_init_lock);
+static bool is_ipc_router_inited;
+static int ipc_router_core_init(void);
+#define IPC_ROUTER_INIT_TIMEOUT (10 * HZ)
+
+static u32 next_port_id;
+static DEFINE_MUTEX(next_port_id_lock_lhc1);
+static struct workqueue_struct *msm_ipc_router_workqueue;
+
+static void *local_log_ctx;
+static void *ipc_router_get_log_ctx(char *sub_name);
+static int process_resume_tx_msg(union rr_control_msg *msg,
+ struct rr_packet *pkt);
+static void ipc_router_reset_conn(struct msm_ipc_router_remote_port *rport_ptr);
+static int ipc_router_get_xprt_info_ref(
+ struct msm_ipc_router_xprt_info *xprt_info);
+static void ipc_router_put_xprt_info_ref(
+ struct msm_ipc_router_xprt_info *xprt_info);
+static void ipc_router_release_xprt_info_ref(struct kref *ref);
+
+struct pil_vote_info {
+ void *pil_handle;
+ struct work_struct load_work;
+ struct work_struct unload_work;
+};
+
+#define PIL_SUBSYSTEM_NAME_LEN 32
+static char default_peripheral[PIL_SUBSYSTEM_NAME_LEN];
+
+enum {
+ DOWN,
+ UP,
+};
+
+static void init_routing_table(void)
+{
+ int i;
+
+ for (i = 0; i < RT_HASH_SIZE; i++)
+ INIT_LIST_HEAD(&routing_table[i]);
+}
+
+/**
+ * ipc_router_calc_checksum() - compute the checksum for the extended HELLO message
+ * @msg: Reference to the IPC Router HELLO message.
+ *
+ * Return: Computed checksum value, 0 if msg is NULL.
+ */
+static u32 ipc_router_calc_checksum(union rr_control_msg *msg)
+{
+ u32 checksum = 0;
+ int i, len;
+ u16 upper_nb;
+ u16 lower_nb;
+ void *hello;
+
+ if (!msg)
+ return checksum;
+ hello = msg;
+ len = sizeof(*msg);
+
+ for (i = 0; i < len / IPCR_WORD_SIZE; i++) {
+ lower_nb = (*((u32 *)hello)) & IPC_ROUTER_CHECKSUM_MASK;
+ upper_nb = ((*((u32 *)hello)) >> 16) &
+ IPC_ROUTER_CHECKSUM_MASK;
+ checksum = checksum + upper_nb + lower_nb;
+ hello = ((u32 *)hello) + 1;
+ }
+ while (checksum > 0xFFFF)
+ checksum = (checksum & IPC_ROUTER_CHECKSUM_MASK) +
+ ((checksum >> 16) & IPC_ROUTER_CHECKSUM_MASK);
+
+ checksum = ~checksum & IPC_ROUTER_CHECKSUM_MASK;
+ return checksum;
+}
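The checksum above is a 16-bit one's-complement style fold: sum the 16-bit halves of every word, fold the carries back in, invert. A minimal user-space sketch of the same reduction (names are illustrative; `IPC_ROUTER_CHECKSUM_MASK` is assumed to be 0xFFFF, consistent with the fold against 0xFFFF above):

```c
#include <assert.h>
#include <stdint.h>

#define CHECKSUM_MASK 0xFFFF	/* stands in for IPC_ROUTER_CHECKSUM_MASK */

/* Sum the 16-bit halves of each 32-bit word, fold the carries back in,
 * and invert -- the same reduction ipc_router_calc_checksum() performs. */
static uint32_t calc_checksum(const uint32_t *words, int nwords)
{
	uint32_t checksum = 0;
	int i;

	for (i = 0; i < nwords; i++)
		checksum += (words[i] & CHECKSUM_MASK) +
			    ((words[i] >> 16) & CHECKSUM_MASK);
	while (checksum > 0xFFFF)
		checksum = (checksum & CHECKSUM_MASK) +
			   ((checksum >> 16) & CHECKSUM_MASK);
	return ~checksum & CHECKSUM_MASK;
}
```

For the word 0x00010002 the halves sum to 3, so the result is ~3 & 0xFFFF = 0xFFFC.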
+
+/**
+ * skb_copy_to_log_buf() - copy the required number of bytes from the skb_queue
+ * @skb_head: skb_queue head that contains the data.
+ * @pl_len: Length of the payload to be copied.
+ * @hdr_offset: Length of the header present in the first skb.
+ * @log_buf: Output buffer into which the payload bytes are copied.
+ *
+ * This function copies the first @pl_len bytes of payload from the skb_queue
+ * into @log_buf so that they can be formatted into a log string.
+ */
+static void skb_copy_to_log_buf(struct sk_buff_head *skb_head,
+ unsigned int pl_len, unsigned int hdr_offset,
+ u64 *log_buf)
+{
+ struct sk_buff *temp_skb;
+ unsigned int copied_len = 0, copy_len = 0;
+ int remaining;
+
+ if (!skb_head) {
+ IPC_RTR_ERR("%s: NULL skb_head\n", __func__);
+ return;
+ }
+ temp_skb = skb_peek(skb_head);
+ if (unlikely(!temp_skb || !temp_skb->data)) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return;
+ }
+
+ remaining = temp_skb->len - hdr_offset;
+ skb_queue_walk(skb_head, temp_skb) {
+ copy_len = remaining < pl_len ? remaining : pl_len;
+		/* log_buf is a u64 pointer; offset it in bytes explicitly */
+		memcpy((void *)log_buf + copied_len,
+		       temp_skb->data + hdr_offset, copy_len);
+ copied_len += copy_len;
+ hdr_offset = 0;
+ if (copied_len == pl_len)
+ break;
+ remaining = pl_len - remaining;
+ }
+}
+
+/**
+ * ipc_router_log_msg() - log all data messages exchanged
+ * @log_ctx: IPC Logging context specific to each transport
+ * @xchng_type: Identifies whether the data is being received or sent.
+ * @data: IPC Router data packet or control message, received or to be sent.
+ * @hdr: Reference to the router header
+ * @port_ptr: Local IPC Router port.
+ * @rport_ptr: Remote IPC Router port
+ *
+ * This function builds the log message that is passed on to the IPC
+ * logging framework. The logged data corresponds to the information
+ * exchanged between the IPC Router and its clients.
+ */
+static void ipc_router_log_msg(void *log_ctx, u32 xchng_type,
+ void *data, struct rr_header_v1 *hdr,
+ struct msm_ipc_port *port_ptr,
+ struct msm_ipc_router_remote_port *rport_ptr)
+{
+ struct sk_buff_head *skb_head = NULL;
+ union rr_control_msg *msg = NULL;
+ struct rr_packet *pkt = NULL;
+ u64 pl_buf = 0;
+ struct sk_buff *skb;
+ u32 buf_len = 8;
+ u32 svc_id = 0;
+ u32 svc_ins = 0;
+ unsigned int hdr_offset = 0;
+ u32 port_type = 0;
+
+ if (!log_ctx || !hdr || !data)
+ return;
+
+ if (hdr->type == IPC_ROUTER_CTRL_CMD_DATA) {
+ pkt = (struct rr_packet *)data;
+ skb_head = pkt->pkt_fragment_q;
+ skb = skb_peek(skb_head);
+ if (!skb || !skb->data) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return;
+ }
+
+ if (skb_queue_len(skb_head) == 1 && skb->len < 8)
+ buf_len = skb->len;
+ if (xchng_type == IPC_ROUTER_LOG_EVENT_TX && hdr->dst_node_id
+ != IPC_ROUTER_NID_LOCAL) {
+ if (hdr->version == IPC_ROUTER_V1)
+ hdr_offset = sizeof(struct rr_header_v1);
+ else if (hdr->version == IPC_ROUTER_V2)
+ hdr_offset = sizeof(struct rr_header_v2);
+ }
+ skb_copy_to_log_buf(skb_head, buf_len, hdr_offset, &pl_buf);
+
+ if (port_ptr && rport_ptr && (port_ptr->type == CLIENT_PORT) &&
+ rport_ptr->server) {
+ svc_id = rport_ptr->server->name.service;
+ svc_ins = rport_ptr->server->name.instance;
+ port_type = CLIENT_PORT;
+ } else if (port_ptr && (port_ptr->type == SERVER_PORT)) {
+ svc_id = port_ptr->port_name.service;
+ svc_ins = port_ptr->port_name.instance;
+ port_type = SERVER_PORT;
+ }
+ IPC_RTR_INFO(log_ctx,
+ "%s %s %s Len:0x%x T:0x%x CF:0x%x SVC:<0x%x:0x%x> SRC:<0x%x:0x%x> DST:<0x%x:0x%x> DATA: %08x %08x",
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ?
+ current->comm : "")),
+ (port_type == CLIENT_PORT ? "CLI" : "SRV"),
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "RX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ? "TX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX_ERR ? "TX_ERR" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX_ERR ? "RX_ERR" :
+ "UNKNOWN")))),
+ hdr->size, hdr->type, hdr->control_flag,
+ svc_id, svc_ins, hdr->src_node_id, hdr->src_port_id,
+ hdr->dst_node_id, hdr->dst_port_id,
+ (unsigned int)pl_buf, (unsigned int)(pl_buf >> 32));
+
+ } else {
+ msg = (union rr_control_msg *)data;
+ if (msg->cmd == IPC_ROUTER_CTRL_CMD_NEW_SERVER ||
+ msg->cmd == IPC_ROUTER_CTRL_CMD_REMOVE_SERVER)
+ IPC_RTR_INFO(log_ctx,
+ "CTL MSG: %s cmd:0x%x SVC:<0x%x:0x%x> ADDR:<0x%x:0x%x>",
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "RX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ? "TX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX_ERR ? "TX_ERR" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX_ERR ? "RX_ERR" :
+ "UNKNOWN")))),
+ msg->cmd, msg->srv.service, msg->srv.instance,
+ msg->srv.node_id, msg->srv.port_id);
+ else if (msg->cmd == IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT ||
+ msg->cmd == IPC_ROUTER_CTRL_CMD_RESUME_TX)
+ IPC_RTR_INFO(log_ctx,
+ "CTL MSG: %s cmd:0x%x ADDR: <0x%x:0x%x>",
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "RX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ? "TX" : "ERR")),
+ msg->cmd, msg->cli.node_id, msg->cli.port_id);
+ else if (msg->cmd == IPC_ROUTER_CTRL_CMD_HELLO && hdr)
+ IPC_RTR_INFO(log_ctx,
+ "CTL MSG %s cmd:0x%x ADDR:0x%x",
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "RX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ? "TX" : "ERR")),
+ msg->cmd, hdr->src_node_id);
+ else
+ IPC_RTR_INFO(log_ctx,
+ "%s UNKNOWN cmd:0x%x",
+ (xchng_type == IPC_ROUTER_LOG_EVENT_RX ? "RX" :
+ (xchng_type == IPC_ROUTER_LOG_EVENT_TX ? "TX" : "ERR")),
+ msg->cmd);
+ }
+}
+
+/* Must be called with routing_table_lock_lha3 locked. */
+static struct msm_ipc_routing_table_entry *lookup_routing_table(
+ u32 node_id)
+{
+ u32 key = (node_id % RT_HASH_SIZE);
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ list_for_each_entry(rt_entry, &routing_table[key], list) {
+ if (rt_entry->node_id == node_id)
+ return rt_entry;
+ }
+ return NULL;
+}
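The routing table above is a fixed array of RT_HASH_SIZE list heads keyed by `node_id % RT_HASH_SIZE`. A user-space sketch of the same bucket-keyed lookup, using a plain singly linked list in place of the kernel's `list_head` (all names here are illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define RT_HASH_SIZE 4

struct rt_entry {
	struct rt_entry *next;
	uint32_t node_id;
};

static struct rt_entry *buckets[RT_HASH_SIZE];

/* Mirror lookup_routing_table(): hash to a bucket, then scan the chain
 * for an exact node_id match. */
static struct rt_entry *lookup(uint32_t node_id)
{
	struct rt_entry *e;

	for (e = buckets[node_id % RT_HASH_SIZE]; e; e = e->next)
		if (e->node_id == node_id)
			return e;
	return NULL;
}
```

Node IDs 3 and 7 hash to the same bucket, so the chain scan (not just the bucket index) decides the match.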
+
+/**
+ * create_routing_table_entry() - Lookup and create a routing table entry
+ * @node_id: Node ID of the routing table entry to be created.
+ * @xprt_info: XPRT through which the node ID is reachable.
+ *
+ * @return: a reference to the routing table entry on success, NULL on failure.
+ */
+static struct msm_ipc_routing_table_entry *create_routing_table_entry(
+ u32 node_id, struct msm_ipc_router_xprt_info *xprt_info)
+{
+ int i;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ u32 key;
+
+ down_write(&routing_table_lock_lha3);
+ rt_entry = lookup_routing_table(node_id);
+ if (rt_entry)
+ goto out_create_rtentry1;
+
+ rt_entry = kmalloc(sizeof(*rt_entry), GFP_KERNEL);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: rt_entry allocation failed for %d\n",
+ __func__, node_id);
+ goto out_create_rtentry2;
+ }
+
+ for (i = 0; i < RP_HASH_SIZE; i++)
+ INIT_LIST_HEAD(&rt_entry->remote_port_list[i]);
+ init_rwsem(&rt_entry->lock_lha4);
+ kref_init(&rt_entry->ref);
+ rt_entry->node_id = node_id;
+ rt_entry->xprt_info = xprt_info;
+ if (xprt_info)
+ rt_entry->neighbor_node_id = xprt_info->remote_node_id;
+
+ key = (node_id % RT_HASH_SIZE);
+ list_add_tail(&rt_entry->list, &routing_table[key]);
+out_create_rtentry1:
+ kref_get(&rt_entry->ref);
+out_create_rtentry2:
+ up_write(&routing_table_lock_lha3);
+ return rt_entry;
+}
+
+/**
+ * ipc_router_get_rtentry_ref() - Get a reference to the routing table entry
+ * @node_id: Node ID of the routing table entry.
+ *
+ * @return: a reference to the routing table entry on success, NULL on failure.
+ *
+ * This function is used to obtain a reference to the routing table entry
+ * corresponding to a node id.
+ */
+static struct msm_ipc_routing_table_entry *ipc_router_get_rtentry_ref(
+ u32 node_id)
+{
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ down_read(&routing_table_lock_lha3);
+ rt_entry = lookup_routing_table(node_id);
+ if (rt_entry)
+ kref_get(&rt_entry->ref);
+ up_read(&routing_table_lock_lha3);
+ return rt_entry;
+}
+
+/**
+ * ipc_router_release_rtentry() - Cleanup and release the routing table entry
+ * @ref: Reference to the entry.
+ *
+ * This function is called when all references to the routing table entry are
+ * released.
+ */
+void ipc_router_release_rtentry(struct kref *ref)
+{
+ struct msm_ipc_routing_table_entry *rt_entry =
+ container_of(ref, struct msm_ipc_routing_table_entry, ref);
+
+ /* All references to a routing entry will be put only under SSR.
+ * As part of SSR, all the internals of the routing table entry
+ * are cleaned. So just free the routing table entry.
+ */
+ kfree(rt_entry);
+}
+
+struct rr_packet *rr_read(struct msm_ipc_router_xprt_info *xprt_info)
+{
+ struct rr_packet *temp_pkt;
+
+ if (!xprt_info)
+ return NULL;
+
+ mutex_lock(&xprt_info->rx_lock_lhb2);
+ if (xprt_info->abort_data_read) {
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+ IPC_RTR_ERR("%s detected SSR & exiting now\n",
+ xprt_info->xprt->name);
+ return NULL;
+ }
+
+ if (list_empty(&xprt_info->pkt_list)) {
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+ return NULL;
+ }
+
+ temp_pkt = list_first_entry(&xprt_info->pkt_list,
+ struct rr_packet, list);
+ list_del(&temp_pkt->list);
+ if (list_empty(&xprt_info->pkt_list))
+ __pm_relax(&xprt_info->ws);
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+ return temp_pkt;
+}
+
+struct rr_packet *clone_pkt(struct rr_packet *pkt)
+{
+ struct rr_packet *cloned_pkt;
+ struct sk_buff *temp_skb, *cloned_skb;
+ struct sk_buff_head *pkt_fragment_q;
+
+ cloned_pkt = kzalloc(sizeof(*cloned_pkt), GFP_KERNEL);
+ if (!cloned_pkt) {
+ IPC_RTR_ERR("%s: failure\n", __func__);
+ return NULL;
+ }
+ memcpy(&cloned_pkt->hdr, &pkt->hdr, sizeof(struct rr_header_v1));
+ if (pkt->opt_hdr.len > 0) {
+ cloned_pkt->opt_hdr.data = kmalloc(pkt->opt_hdr.len,
+ GFP_KERNEL);
+ if (!cloned_pkt->opt_hdr.data) {
+ IPC_RTR_ERR("%s: Memory allocation Failed\n", __func__);
+ } else {
+ cloned_pkt->opt_hdr.len = pkt->opt_hdr.len;
+ memcpy(cloned_pkt->opt_hdr.data, pkt->opt_hdr.data,
+ pkt->opt_hdr.len);
+ }
+ }
+
+ pkt_fragment_q = kmalloc(sizeof(*pkt_fragment_q), GFP_KERNEL);
+ if (!pkt_fragment_q) {
+ IPC_RTR_ERR("%s: pkt_frag_q alloc failure\n", __func__);
+ kfree(cloned_pkt);
+ return NULL;
+ }
+ skb_queue_head_init(pkt_fragment_q);
+ kref_init(&cloned_pkt->ref);
+
+ skb_queue_walk(pkt->pkt_fragment_q, temp_skb) {
+ cloned_skb = skb_clone(temp_skb, GFP_KERNEL);
+ if (!cloned_skb)
+ goto fail_clone;
+ skb_queue_tail(pkt_fragment_q, cloned_skb);
+ }
+ cloned_pkt->pkt_fragment_q = pkt_fragment_q;
+ cloned_pkt->length = pkt->length;
+ return cloned_pkt;
+
+fail_clone:
+ while (!skb_queue_empty(pkt_fragment_q)) {
+ temp_skb = skb_dequeue(pkt_fragment_q);
+ kfree_skb(temp_skb);
+ }
+ kfree(pkt_fragment_q);
+ if (cloned_pkt->opt_hdr.len > 0)
+ kfree(cloned_pkt->opt_hdr.data);
+ kfree(cloned_pkt);
+ return NULL;
+}
+
+/**
+ * create_pkt() - Create a Router packet
+ * @data: SKB queue to be contained inside the packet.
+ *
+ * @return: pointer to packet on success, NULL on failure.
+ */
+struct rr_packet *create_pkt(struct sk_buff_head *data)
+{
+ struct rr_packet *pkt;
+ struct sk_buff *temp_skb;
+
+ pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+ if (!pkt) {
+ IPC_RTR_ERR("%s: failure\n", __func__);
+ return NULL;
+ }
+
+ if (data) {
+ pkt->pkt_fragment_q = data;
+ skb_queue_walk(pkt->pkt_fragment_q, temp_skb)
+ pkt->length += temp_skb->len;
+ } else {
+ pkt->pkt_fragment_q = kmalloc(sizeof(*pkt->pkt_fragment_q),
+ GFP_KERNEL);
+ if (!pkt->pkt_fragment_q) {
+ IPC_RTR_ERR("%s: Couldn't alloc pkt_fragment_q\n",
+ __func__);
+ kfree(pkt);
+ return NULL;
+ }
+ skb_queue_head_init(pkt->pkt_fragment_q);
+ }
+ kref_init(&pkt->ref);
+ return pkt;
+}
+
+void release_pkt(struct rr_packet *pkt)
+{
+ struct sk_buff *temp_skb;
+
+ if (!pkt)
+ return;
+
+ if (!pkt->pkt_fragment_q) {
+ kfree(pkt);
+ return;
+ }
+
+ while (!skb_queue_empty(pkt->pkt_fragment_q)) {
+ temp_skb = skb_dequeue(pkt->pkt_fragment_q);
+ kfree_skb(temp_skb);
+ }
+ kfree(pkt->pkt_fragment_q);
+ if (pkt->opt_hdr.len > 0)
+ kfree(pkt->opt_hdr.data);
+ kfree(pkt);
+}
+
+static struct sk_buff_head *msm_ipc_router_buf_to_skb(void *buf,
+ unsigned int buf_len)
+{
+ struct sk_buff_head *skb_head;
+ struct sk_buff *skb;
+ int first = 1, offset = 0;
+ int skb_size, data_size;
+ void *data;
+ int last = 1;
+ int align_size;
+
+ skb_head = kmalloc(sizeof(*skb_head), GFP_KERNEL);
+ if (!skb_head) {
+		IPC_RTR_ERR("%s: Could not allocate skb_head\n", __func__);
+ return NULL;
+ }
+ skb_queue_head_init(skb_head);
+
+ data_size = buf_len;
+ align_size = ALIGN_SIZE(data_size);
+ while (offset != buf_len) {
+ skb_size = data_size;
+ if (first)
+ skb_size += IPC_ROUTER_HDR_SIZE;
+ if (last)
+ skb_size += align_size;
+
+ skb = alloc_skb(skb_size, GFP_KERNEL);
+ if (!skb) {
+ if (skb_size <= (PAGE_SIZE / 2)) {
+ IPC_RTR_ERR("%s: cannot allocate skb\n",
+ __func__);
+ goto buf_to_skb_error;
+ }
+ data_size = data_size / 2;
+ last = 0;
+ continue;
+ }
+
+ if (first) {
+ skb_reserve(skb, IPC_ROUTER_HDR_SIZE);
+ first = 0;
+ }
+
+ data = skb_put(skb, data_size);
+ memcpy(skb->data, buf + offset, data_size);
+ skb_queue_tail(skb_head, skb);
+ offset += data_size;
+ data_size = buf_len - offset;
+ last = 1;
+ }
+ return skb_head;
+
+buf_to_skb_error:
+ while (!skb_queue_empty(skb_head)) {
+ skb = skb_dequeue(skb_head);
+ kfree_skb(skb);
+ }
+ kfree(skb_head);
+ return NULL;
+}
+
+static void *msm_ipc_router_skb_to_buf(struct sk_buff_head *skb_head,
+ unsigned int len)
+{
+ struct sk_buff *temp;
+ unsigned int offset = 0, buf_len = 0, copy_len;
+ void *buf;
+
+ if (!skb_head) {
+ IPC_RTR_ERR("%s: NULL skb_head\n", __func__);
+ return NULL;
+ }
+
+ temp = skb_peek(skb_head);
+ buf_len = len;
+ buf = kmalloc(buf_len, GFP_KERNEL);
+ if (!buf) {
+ IPC_RTR_ERR("%s: cannot allocate buf\n", __func__);
+ return NULL;
+ }
+ skb_queue_walk(skb_head, temp) {
+ copy_len = buf_len < temp->len ? buf_len : temp->len;
+ memcpy(buf + offset, temp->data, copy_len);
+ offset += copy_len;
+ buf_len -= copy_len;
+ }
+ return buf;
+}
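The copy loop above takes min(remaining length, fragment length) from each fragment until the request is satisfied; the same pattern appears in defragment_pkt() below. A standalone sketch of that loop over a hypothetical fragment list:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for a queue of sk_buff fragments. */
struct frag {
	struct frag *next;
	const char *data;
	size_t len;
};

/* Walk the fragment list, copying at most the remaining requested
 * length from each piece, as the skb_queue_walk() loop above does. */
static size_t frags_to_buf(const struct frag *f, char *buf, size_t len)
{
	size_t off = 0;

	for (; f && len; f = f->next) {
		size_t n = len < f->len ? len : f->len;

		memcpy(buf + off, f->data, n);
		off += n;
		len -= n;
	}
	return off;
}
```

Requesting 5 bytes from fragments "abc" and "defg" consumes the first fragment whole and two bytes of the second.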
+
+void msm_ipc_router_free_skb(struct sk_buff_head *skb_head)
+{
+ struct sk_buff *temp_skb;
+
+ if (!skb_head)
+ return;
+
+ while (!skb_queue_empty(skb_head)) {
+ temp_skb = skb_dequeue(skb_head);
+ kfree_skb(temp_skb);
+ }
+ kfree(skb_head);
+}
+
+/**
+ * extract_optional_header() - Extract the optional header from skb
+ * @pkt: Packet structure into which the header has to be extracted.
+ * @opt_len: The optional header length, in words.
+ *
+ * @return: Length of optional header in bytes if success, zero otherwise.
+ */
+static int extract_optional_header(struct rr_packet *pkt, u8 opt_len)
+{
+ size_t offset = 0, buf_len = 0, copy_len, opt_hdr_len;
+ struct sk_buff *temp;
+ struct sk_buff_head *skb_head;
+
+ opt_hdr_len = opt_len * IPCR_WORD_SIZE;
+ pkt->opt_hdr.data = kmalloc(opt_hdr_len, GFP_KERNEL);
+ if (!pkt->opt_hdr.data) {
+ IPC_RTR_ERR("%s: Memory allocation Failed\n", __func__);
+ return 0;
+ }
+ skb_head = pkt->pkt_fragment_q;
+ buf_len = opt_hdr_len;
+	/* Do not use skb_queue_walk() here: the loop may free the cursor skb. */
+	while (buf_len > 0 && (temp = skb_peek(skb_head)) != NULL) {
+		copy_len = buf_len < temp->len ? buf_len : temp->len;
+		memcpy(pkt->opt_hdr.data + offset, temp->data, copy_len);
+		offset += copy_len;
+		buf_len -= copy_len;
+		skb_pull(temp, copy_len);
+		if (temp->len == 0) {
+			skb_dequeue(skb_head);
+			kfree_skb(temp);
+		}
+	}
+ pkt->opt_hdr.len = opt_hdr_len;
+ return opt_hdr_len;
+}
+
+/**
+ * extract_header_v1() - Extract IPC Router header of version 1
+ * @pkt: Packet structure into which the header has to be extracted.
+ * @skb: SKB from which the header has to be extracted.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int extract_header_v1(struct rr_packet *pkt, struct sk_buff *skb)
+{
+ if (!pkt || !skb) {
+ IPC_RTR_ERR("%s: Invalid pkt or skb\n", __func__);
+ return -EINVAL;
+ }
+
+ memcpy(&pkt->hdr, skb->data, sizeof(struct rr_header_v1));
+ skb_pull(skb, sizeof(struct rr_header_v1));
+ pkt->length -= sizeof(struct rr_header_v1);
+ return 0;
+}
+
+/**
+ * extract_header_v2() - Extract IPC Router header of version 2
+ * @pkt: Packet structure into which the header has to be extracted.
+ * @skb: SKB from which the header has to be extracted.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int extract_header_v2(struct rr_packet *pkt, struct sk_buff *skb)
+{
+ struct rr_header_v2 *hdr;
+ u8 opt_len;
+ size_t opt_hdr_len;
+ size_t total_hdr_size = sizeof(*hdr);
+
+ if (!pkt || !skb) {
+ IPC_RTR_ERR("%s: Invalid pkt or skb\n", __func__);
+ return -EINVAL;
+ }
+
+ hdr = (struct rr_header_v2 *)skb->data;
+ pkt->hdr.version = (u32)hdr->version;
+ pkt->hdr.type = (u32)hdr->type;
+ pkt->hdr.src_node_id = (u32)hdr->src_node_id;
+ pkt->hdr.src_port_id = (u32)hdr->src_port_id;
+ pkt->hdr.size = (u32)hdr->size;
+ pkt->hdr.control_flag = (u32)hdr->control_flag;
+ pkt->hdr.dst_node_id = (u32)hdr->dst_node_id;
+ pkt->hdr.dst_port_id = (u32)hdr->dst_port_id;
+ opt_len = hdr->opt_len;
+ skb_pull(skb, total_hdr_size);
+ if (opt_len > 0) {
+ opt_hdr_len = extract_optional_header(pkt, opt_len);
+ total_hdr_size += opt_hdr_len;
+ }
+ pkt->length -= total_hdr_size;
+ return 0;
+}
+
+/**
+ * extract_header() - Extract IPC Router header
+ * @pkt: Packet from which the header has to be extracted.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function will check if the header version is v1 or v2 and invoke
+ * the corresponding helper function to extract the IPC Router header.
+ */
+static int extract_header(struct rr_packet *pkt)
+{
+ struct sk_buff *temp_skb;
+ int ret;
+
+ if (!pkt) {
+ IPC_RTR_ERR("%s: NULL PKT\n", __func__);
+ return -EINVAL;
+ }
+
+ temp_skb = skb_peek(pkt->pkt_fragment_q);
+ if (!temp_skb || !temp_skb->data) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return -EINVAL;
+ }
+
+ if (temp_skb->data[0] == IPC_ROUTER_V1) {
+ ret = extract_header_v1(pkt, temp_skb);
+ } else if (temp_skb->data[0] == IPC_ROUTER_V2) {
+ ret = extract_header_v2(pkt, temp_skb);
+ } else {
+ IPC_RTR_ERR("%s: Invalid Header version %02x\n",
+ __func__, temp_skb->data[0]);
+ print_hex_dump(KERN_ERR, "Header: ", DUMP_PREFIX_ADDRESS,
+ 16, 1, temp_skb->data, pkt->length, true);
+ return -EINVAL;
+ }
+ return ret;
+}
+
+/**
+ * calc_tx_header_size() - Calculate header size to be reserved in SKB
+ * @pkt: Packet in which the space for header has to be reserved.
+ * @dst_xprt_info: XPRT through which the destination is reachable.
+ *
+ * @return: required header size on success,
+ * standard Linux error codes on failure.
+ *
+ * This function is used to calculate the header size that has to be reserved
+ * in a transmit SKB. The header size is calculated based on the XPRT through
+ * which the destination node is reachable.
+ */
+static int calc_tx_header_size(struct rr_packet *pkt,
+ struct msm_ipc_router_xprt_info *dst_xprt_info)
+{
+ int hdr_size = 0;
+ int xprt_version = 0;
+ struct msm_ipc_router_xprt_info *xprt_info = dst_xprt_info;
+
+ if (!pkt) {
+ IPC_RTR_ERR("%s: NULL PKT\n", __func__);
+ return -EINVAL;
+ }
+
+ if (xprt_info)
+ xprt_version = xprt_info->xprt->get_version(xprt_info->xprt);
+
+ if (xprt_version == IPC_ROUTER_V1) {
+ pkt->hdr.version = IPC_ROUTER_V1;
+ hdr_size = sizeof(struct rr_header_v1);
+ } else if (xprt_version == IPC_ROUTER_V2) {
+ pkt->hdr.version = IPC_ROUTER_V2;
+ hdr_size = sizeof(struct rr_header_v2) + pkt->opt_hdr.len;
+ } else {
+ IPC_RTR_ERR("%s: Invalid xprt_version %d\n",
+ __func__, xprt_version);
+ hdr_size = -EINVAL;
+ }
+
+ return hdr_size;
+}
+
+/**
+ * calc_rx_header_size() - Calculate the RX header size
+ * @xprt_info: XPRT info of the received message.
+ *
+ * @return: valid header size on success, INT_MAX on failure.
+ */
+static int calc_rx_header_size(struct msm_ipc_router_xprt_info *xprt_info)
+{
+ int xprt_version = 0;
+ int hdr_size = INT_MAX;
+
+ if (xprt_info)
+ xprt_version = xprt_info->xprt->get_version(xprt_info->xprt);
+
+ if (xprt_version == IPC_ROUTER_V1)
+ hdr_size = sizeof(struct rr_header_v1);
+ else if (xprt_version == IPC_ROUTER_V2)
+ hdr_size = sizeof(struct rr_header_v2);
+ return hdr_size;
+}
+
+/**
+ * prepend_header_v1() - Prepend IPC Router header of version 1
+ * @pkt: Packet structure which contains the header info to be prepended.
+ * @hdr_size: Size of the header
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int prepend_header_v1(struct rr_packet *pkt, int hdr_size)
+{
+ struct sk_buff *temp_skb;
+ struct rr_header_v1 *hdr;
+
+ if (!pkt || hdr_size <= 0) {
+ IPC_RTR_ERR("%s: Invalid input parameters\n", __func__);
+ return -EINVAL;
+ }
+
+ temp_skb = skb_peek(pkt->pkt_fragment_q);
+ if (!temp_skb || !temp_skb->data) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return -EINVAL;
+ }
+
+ if (skb_headroom(temp_skb) < hdr_size) {
+ temp_skb = alloc_skb(hdr_size, GFP_KERNEL);
+ if (!temp_skb) {
+ IPC_RTR_ERR("%s: Could not allocate SKB of size %d\n",
+ __func__, hdr_size);
+ return -ENOMEM;
+ }
+ skb_reserve(temp_skb, hdr_size);
+ }
+
+ hdr = (struct rr_header_v1 *)skb_push(temp_skb, hdr_size);
+ memcpy(hdr, &pkt->hdr, hdr_size);
+ if (temp_skb != skb_peek(pkt->pkt_fragment_q))
+ skb_queue_head(pkt->pkt_fragment_q, temp_skb);
+ pkt->length += hdr_size;
+ return 0;
+}
+
+/**
+ * prepend_header_v2() - Prepend IPC Router header of version 2
+ * @pkt: Packet structure which contains the header info to be prepended.
+ * @hdr_size: Size of the header
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+static int prepend_header_v2(struct rr_packet *pkt, int hdr_size)
+{
+ struct sk_buff *temp_skb;
+ struct rr_header_v2 *hdr;
+
+ if (!pkt || hdr_size <= 0) {
+ IPC_RTR_ERR("%s: Invalid input parameters\n", __func__);
+ return -EINVAL;
+ }
+
+ temp_skb = skb_peek(pkt->pkt_fragment_q);
+ if (!temp_skb || !temp_skb->data) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return -EINVAL;
+ }
+
+ if (skb_headroom(temp_skb) < hdr_size) {
+ temp_skb = alloc_skb(hdr_size, GFP_KERNEL);
+ if (!temp_skb) {
+ IPC_RTR_ERR("%s: Could not allocate SKB of size %d\n",
+ __func__, hdr_size);
+ return -ENOMEM;
+ }
+ skb_reserve(temp_skb, hdr_size);
+ }
+
+ hdr = (struct rr_header_v2 *)skb_push(temp_skb, hdr_size);
+ hdr->version = (u8)pkt->hdr.version;
+ hdr->type = (u8)pkt->hdr.type;
+ hdr->control_flag = (u8)pkt->hdr.control_flag;
+ hdr->size = (u32)pkt->hdr.size;
+ hdr->src_node_id = (u16)pkt->hdr.src_node_id;
+ hdr->src_port_id = (u16)pkt->hdr.src_port_id;
+ hdr->dst_node_id = (u16)pkt->hdr.dst_node_id;
+ hdr->dst_port_id = (u16)pkt->hdr.dst_port_id;
+ if (pkt->opt_hdr.len > 0) {
+ hdr->opt_len = pkt->opt_hdr.len / IPCR_WORD_SIZE;
+		/* the optional header follows immediately after the fixed header */
+		memcpy(hdr + 1, pkt->opt_hdr.data, pkt->opt_hdr.len);
+ } else {
+ hdr->opt_len = 0;
+ }
+ if (temp_skb != skb_peek(pkt->pkt_fragment_q))
+ skb_queue_head(pkt->pkt_fragment_q, temp_skb);
+ pkt->length += hdr_size;
+ return 0;
+}
+
+/**
+ * prepend_header() - Prepend IPC Router header
+ * @pkt: Packet structure which contains the header info to be prepended.
+ * @xprt_info: XPRT through which the packet is transmitted.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * This function prepends the header to the packet to be transmitted. The
+ * IPC Router header version to be prepended depends on the XPRT through
+ * which the destination is reachable.
+ */
+static int prepend_header(struct rr_packet *pkt,
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ int hdr_size;
+ struct sk_buff *temp_skb;
+
+ if (!pkt) {
+ IPC_RTR_ERR("%s: NULL PKT\n", __func__);
+ return -EINVAL;
+ }
+
+ temp_skb = skb_peek(pkt->pkt_fragment_q);
+ if (!temp_skb || !temp_skb->data) {
+ IPC_RTR_ERR("%s: No SKBs in skb_queue\n", __func__);
+ return -EINVAL;
+ }
+
+ hdr_size = calc_tx_header_size(pkt, xprt_info);
+ if (hdr_size <= 0)
+ return hdr_size;
+
+ if (pkt->hdr.version == IPC_ROUTER_V1)
+ return prepend_header_v1(pkt, hdr_size);
+ else if (pkt->hdr.version == IPC_ROUTER_V2)
+ return prepend_header_v2(pkt, hdr_size);
+ else
+ return -EINVAL;
+}
+
+/**
+ * defragment_pkt() - Defragment and linearize the packet
+ * @pkt: Packet to be linearized.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ *
+ * Some packets contain fragments of data over multiple SKBs. If an XPRT
+ * does not support fragmented writes, linearize the multiple SKBs into one
+ * single SKB.
+ */
+static int defragment_pkt(struct rr_packet *pkt)
+{
+ struct sk_buff *dst_skb, *src_skb, *temp_skb;
+ int offset = 0, buf_len = 0, copy_len;
+ void *buf;
+ int align_size;
+
+ if (!pkt || pkt->length <= 0) {
+ IPC_RTR_ERR("%s: Invalid PKT\n", __func__);
+ return -EINVAL;
+ }
+
+ if (skb_queue_len(pkt->pkt_fragment_q) == 1)
+ return 0;
+
+ align_size = ALIGN_SIZE(pkt->length);
+ dst_skb = alloc_skb(pkt->length + align_size, GFP_KERNEL);
+ if (!dst_skb) {
+ IPC_RTR_ERR("%s: could not allocate one skb of size %d\n",
+ __func__, pkt->length);
+ return -ENOMEM;
+ }
+ buf = skb_put(dst_skb, pkt->length);
+ buf_len = pkt->length;
+
+ skb_queue_walk(pkt->pkt_fragment_q, src_skb) {
+ copy_len = buf_len < src_skb->len ? buf_len : src_skb->len;
+ memcpy(buf + offset, src_skb->data, copy_len);
+ offset += copy_len;
+ buf_len -= copy_len;
+ }
+
+ while (!skb_queue_empty(pkt->pkt_fragment_q)) {
+ temp_skb = skb_dequeue(pkt->pkt_fragment_q);
+ kfree_skb(temp_skb);
+ }
+ skb_queue_tail(pkt->pkt_fragment_q, dst_skb);
+ return 0;
+}
+
+static int post_pkt_to_port(struct msm_ipc_port *port_ptr,
+ struct rr_packet *pkt, int clone)
+{
+ struct rr_packet *temp_pkt = pkt;
+ void (*notify)(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv);
+ void (*data_ready)(struct sock *sk) = NULL;
+ struct sock *sk;
+ u32 pkt_type;
+
+ if (unlikely(!port_ptr || !pkt))
+ return -EINVAL;
+
+ if (clone) {
+ temp_pkt = clone_pkt(pkt);
+ if (!temp_pkt) {
+ IPC_RTR_ERR(
+ "%s: Error cloning packet for port %08x:%08x\n",
+ __func__, port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ return -ENOMEM;
+ }
+ }
+
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ __pm_stay_awake(port_ptr->port_rx_ws);
+ list_add_tail(&temp_pkt->list, &port_ptr->port_rx_q);
+ wake_up(&port_ptr->port_rx_wait_q);
+ notify = port_ptr->notify;
+ pkt_type = temp_pkt->hdr.type;
+ sk = (struct sock *)port_ptr->endpoint;
+ if (sk) {
+ read_lock(&sk->sk_callback_lock);
+ data_ready = sk->sk_data_ready;
+ read_unlock(&sk->sk_callback_lock);
+ }
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ if (notify)
+ notify(pkt_type, NULL, 0, port_ptr->priv);
+ else if (sk && data_ready)
+ data_ready(sk);
+
+ return 0;
+}
+
+/**
+ * ipc_router_peek_pkt_size() - Peek into the packet header to get potential
+ * packet size
+ * @data: Starting address of the packet, which points to the router header.
+ *
+ * @return: potential packet size on success, < 0 on error.
+ *
+ * This function is used by the underlying transport abstraction layer to
+ * peek into the potential packet size of an incoming packet. This information
+ * is used to perform link layer fragmentation and re-assembly.
+ */
+int ipc_router_peek_pkt_size(char *data)
+{
+ int size;
+
+ if (!data) {
+ pr_err("%s: NULL PKT\n", __func__);
+ return -EINVAL;
+ }
+
+ if (data[0] == IPC_ROUTER_V1)
+ size = ((struct rr_header_v1 *)data)->size +
+ sizeof(struct rr_header_v1);
+ else if (data[0] == IPC_ROUTER_V2)
+ size = ((struct rr_header_v2 *)data)->size +
+ ((struct rr_header_v2 *)data)->opt_len * IPCR_WORD_SIZE
+ + sizeof(struct rr_header_v2);
+ else
+ return -EINVAL;
+
+ size += ALIGN_SIZE(size);
+ return size;
+}
+
+static int post_control_ports(struct rr_packet *pkt)
+{
+ struct msm_ipc_port *port_ptr;
+
+ if (!pkt)
+ return -EINVAL;
+
+ down_read(&control_ports_lock_lha5);
+ list_for_each_entry(port_ptr, &control_ports, list)
+ post_pkt_to_port(port_ptr, pkt, 1);
+ up_read(&control_ports_lock_lha5);
+ return 0;
+}
+
+static u32 allocate_port_id(void)
+{
+ u32 port_id = 0, prev_port_id, key;
+ struct msm_ipc_port *port_ptr;
+
+ mutex_lock(&next_port_id_lock_lhc1);
+ prev_port_id = next_port_id;
+ down_read(&local_ports_lock_lhc2);
+ do {
+ next_port_id++;
+ if ((next_port_id & IPC_ROUTER_ADDRESS) == IPC_ROUTER_ADDRESS)
+ next_port_id = 1;
+
+ key = (next_port_id & (LP_HASH_SIZE - 1));
+ if (list_empty(&local_ports[key])) {
+ port_id = next_port_id;
+ break;
+ }
+ list_for_each_entry(port_ptr, &local_ports[key], list) {
+ if (port_ptr->this_port.port_id == next_port_id) {
+ port_id = next_port_id;
+ break;
+ }
+ }
+ if (!port_id) {
+ port_id = next_port_id;
+ break;
+ }
+ port_id = 0;
+ } while (next_port_id != prev_port_id);
+ up_read(&local_ports_lock_lhc2);
+ mutex_unlock(&next_port_id_lock_lhc1);
+
+ return port_id;
+}
+
+void msm_ipc_router_add_local_port(struct msm_ipc_port *port_ptr)
+{
+ u32 key;
+
+ if (!port_ptr)
+ return;
+
+ key = (port_ptr->this_port.port_id & (LP_HASH_SIZE - 1));
+ down_write(&local_ports_lock_lhc2);
+ list_add_tail(&port_ptr->list, &local_ports[key]);
+ up_write(&local_ports_lock_lhc2);
+}
+
+/**
+ * msm_ipc_router_create_raw_port() - Create an IPC Router port
+ * @endpoint: User-space socket information to be cached.
+ * @notify: Function to notify incoming events on the port. It is called
+ *          with the event ID, any out-of-band data associated with the
+ *          event, the size of that out-of-band data and the private data
+ *          registered during port creation.
+ * @priv: Private data to be passed during the event notification.
+ *
+ * @return: Valid pointer to port on success, NULL on failure.
+ *
+ * This function is used to create an IPC Router port. The port is used for
+ * communication locally or outside the subsystem.
+ */
+struct msm_ipc_port *
+msm_ipc_router_create_raw_port(void *endpoint,
+ void (*notify)(unsigned int event,
+ void *oob_data,
+ size_t oob_data_len, void *priv),
+ void *priv)
+{
+ struct msm_ipc_port *port_ptr;
+
+ port_ptr = kzalloc(sizeof(*port_ptr), GFP_KERNEL);
+ if (!port_ptr)
+ return NULL;
+
+ port_ptr->this_port.node_id = IPC_ROUTER_NID_LOCAL;
+ port_ptr->this_port.port_id = allocate_port_id();
+ if (!port_ptr->this_port.port_id) {
+ IPC_RTR_ERR("%s: All port ids are in use\n", __func__);
+ kfree(port_ptr);
+ return NULL;
+ }
+
+ mutex_init(&port_ptr->port_lock_lhc3);
+ INIT_LIST_HEAD(&port_ptr->port_rx_q);
+ mutex_init(&port_ptr->port_rx_q_lock_lhc3);
+ init_waitqueue_head(&port_ptr->port_rx_wait_q);
+ snprintf(port_ptr->rx_ws_name, MAX_WS_NAME_SZ,
+ "ipc%08x_%s",
+ port_ptr->this_port.port_id,
+ current->comm);
+ port_ptr->port_rx_ws = wakeup_source_register(port_ptr->rx_ws_name);
+ if (!port_ptr->port_rx_ws) {
+ kfree(port_ptr);
+ return NULL;
+ }
+ init_waitqueue_head(&port_ptr->port_tx_wait_q);
+ kref_init(&port_ptr->ref);
+
+ port_ptr->endpoint = endpoint;
+ port_ptr->notify = notify;
+ port_ptr->priv = priv;
+
+ msm_ipc_router_add_local_port(port_ptr);
+ if (endpoint)
+ sock_hold(ipc_port_sk(endpoint));
+ return port_ptr;
+}
+
+/**
+ * ipc_router_get_port_ref() - Get a reference to the local port
+ * @port_id: Port ID of the local port for which the reference is to be obtained.
+ *
+ * @return: Reference to the port if found, else NULL.
+ */
+static struct msm_ipc_port *ipc_router_get_port_ref(u32 port_id)
+{
+ int key = (port_id & (LP_HASH_SIZE - 1));
+ struct msm_ipc_port *port_ptr;
+
+ down_read(&local_ports_lock_lhc2);
+ list_for_each_entry(port_ptr, &local_ports[key], list) {
+ if (port_ptr->this_port.port_id == port_id) {
+ kref_get(&port_ptr->ref);
+ up_read(&local_ports_lock_lhc2);
+ return port_ptr;
+ }
+ }
+ up_read(&local_ports_lock_lhc2);
+ return NULL;
+}
+
+/**
+ * ipc_router_release_port() - Cleanup and release the port
+ * @ref: Reference to the port.
+ *
+ * This function is called when all references to the port are released.
+ */
+void ipc_router_release_port(struct kref *ref)
+{
+ struct rr_packet *pkt, *temp_pkt;
+ struct msm_ipc_port *port_ptr =
+ container_of(ref, struct msm_ipc_port, ref);
+
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ list_for_each_entry_safe(pkt, temp_pkt, &port_ptr->port_rx_q, list) {
+ list_del(&pkt->list);
+ release_pkt(pkt);
+ }
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ wakeup_source_unregister(port_ptr->port_rx_ws);
+ if (port_ptr->endpoint)
+ sock_put(ipc_port_sk(port_ptr->endpoint));
+ kfree(port_ptr);
+}
+
+/**
+ * ipc_router_get_rport_ref() - Get reference to the remote port
+ * @node_id: Node ID corresponding to the remote port.
+ * @port_id: Port ID corresponding to the remote port.
+ *
+ * @return: a reference to the remote port on success, NULL on failure.
+ */
+static struct msm_ipc_router_remote_port *ipc_router_get_rport_ref(
+ u32 node_id, u32 port_id)
+{
+ struct msm_ipc_router_remote_port *rport_ptr;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ int key = (port_id & (RP_HASH_SIZE - 1));
+
+ rt_entry = ipc_router_get_rtentry_ref(node_id);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: Node is not up\n", __func__);
+ return NULL;
+ }
+
+ down_read(&rt_entry->lock_lha4);
+ list_for_each_entry(rport_ptr,
+ &rt_entry->remote_port_list[key], list) {
+ if (rport_ptr->port_id == port_id) {
+ kref_get(&rport_ptr->ref);
+ goto out_lookup_rmt_port1;
+ }
+ }
+ rport_ptr = NULL;
+out_lookup_rmt_port1:
+ up_read(&rt_entry->lock_lha4);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ return rport_ptr;
+}
+
+/**
+ * ipc_router_create_rport() - Create a remote port
+ * @node_id: Node ID corresponding to the remote port.
+ * @port_id: Port ID corresponding to the remote port.
+ * @xprt_info: XPRT through which the concerned node is reachable.
+ *
+ * @return: a reference to the remote port on success, NULL on failure.
+ */
+static struct msm_ipc_router_remote_port *ipc_router_create_rport(
+ u32 node_id, u32 port_id,
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ struct msm_ipc_router_remote_port *rport_ptr;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ int key = (port_id & (RP_HASH_SIZE - 1));
+
+ rt_entry = create_routing_table_entry(node_id, xprt_info);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: Node cannot be created\n", __func__);
+ return NULL;
+ }
+
+ down_write(&rt_entry->lock_lha4);
+ list_for_each_entry(rport_ptr,
+ &rt_entry->remote_port_list[key], list) {
+ if (rport_ptr->port_id == port_id)
+ goto out_create_rmt_port1;
+ }
+
+ rport_ptr = kmalloc(sizeof(*rport_ptr), GFP_KERNEL);
+ if (!rport_ptr) {
+ IPC_RTR_ERR("%s: Remote port alloc failed\n", __func__);
+ goto out_create_rmt_port2;
+ }
+ rport_ptr->port_id = port_id;
+ rport_ptr->node_id = node_id;
+ rport_ptr->status = VALID;
+ rport_ptr->sec_rule = NULL;
+ rport_ptr->server = NULL;
+ rport_ptr->tx_quota_cnt = 0;
+ kref_init(&rport_ptr->ref);
+ mutex_init(&rport_ptr->rport_lock_lhb2);
+ INIT_LIST_HEAD(&rport_ptr->resume_tx_port_list);
+ INIT_LIST_HEAD(&rport_ptr->conn_info_list);
+ list_add_tail(&rport_ptr->list,
+ &rt_entry->remote_port_list[key]);
+out_create_rmt_port1:
+ kref_get(&rport_ptr->ref);
+out_create_rmt_port2:
+ up_write(&rt_entry->lock_lha4);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ return rport_ptr;
+}
+
+/**
+ * msm_ipc_router_free_resume_tx_port() - Free the resume_tx ports
+ * @rport_ptr: Pointer to the remote port.
+ *
+ * This function deletes all the resume_tx ports associated with a remote port
+ * and frees the memory allocated to each resume_tx port.
+ *
+ * Must be called with rport_ptr->rport_lock_lhb2 locked.
+ */
+static void msm_ipc_router_free_resume_tx_port(
+ struct msm_ipc_router_remote_port *rport_ptr)
+{
+ struct msm_ipc_resume_tx_port *rtx_port, *tmp_rtx_port;
+
+ list_for_each_entry_safe(rtx_port, tmp_rtx_port,
+ &rport_ptr->resume_tx_port_list, list) {
+ list_del(&rtx_port->list);
+ kfree(rtx_port);
+ }
+}
+
+/**
+ * msm_ipc_router_lookup_resume_tx_port() - Lookup resume_tx port list
+ * @rport_ptr: Remote port whose resume_tx port list needs to be searched.
+ * @port_id: Port ID which needs to be looked from the list.
+ *
+ * @return: 1 if the port_id is found in the list, else 0.
+ *
+ * This function is used to look up a local port in a remote port's
+ * resume_tx list, to ensure that the same port is not added to the
+ * remote port's resume_tx list repeatedly.
+ *
+ * Must be called with rport_ptr->rport_lock_lhb2 locked.
+ */
+static int msm_ipc_router_lookup_resume_tx_port(
+ struct msm_ipc_router_remote_port *rport_ptr, u32 port_id)
+{
+ struct msm_ipc_resume_tx_port *rtx_port;
+
+ list_for_each_entry(rtx_port, &rport_ptr->resume_tx_port_list, list) {
+ if (port_id == rtx_port->port_id)
+ return 1;
+ }
+ return 0;
+}
+
+/**
+ * ipc_router_dummy_write_space() - Dummy write space available callback
+ * @sk: Socket pointer for which the callback is called.
+ */
+void ipc_router_dummy_write_space(struct sock *sk)
+{
+}
+
+/**
+ * post_resume_tx() - Post the resume_tx event
+ * @rport_ptr: Pointer to the remote port
+ * @pkt: The data packet that is received on a resume_tx event.
+ * @msg: Out of band data to be passed to kernel drivers
+ *
+ * This function informs about the reception of the resume_tx message from a
+ * remote port pointed by rport_ptr to all the local ports that are in the
+ * resume_tx_ports_list of this remote port. On posting the information, this
+ * function sequentially deletes each entry in the resume_tx_port_list of the
+ * remote port.
+ *
+ * Must be called with rport_ptr->rport_lock_lhb2 locked.
+ */
+static void post_resume_tx(struct msm_ipc_router_remote_port *rport_ptr,
+ struct rr_packet *pkt, union rr_control_msg *msg)
+{
+ struct msm_ipc_resume_tx_port *rtx_port, *tmp_rtx_port;
+ struct msm_ipc_port *local_port;
+ struct sock *sk;
+ void (*write_space)(struct sock *sk) = NULL;
+
+ list_for_each_entry_safe(rtx_port, tmp_rtx_port,
+ &rport_ptr->resume_tx_port_list, list) {
+ local_port = ipc_router_get_port_ref(rtx_port->port_id);
+ if (local_port && local_port->notify) {
+ wake_up(&local_port->port_tx_wait_q);
+ local_port->notify(IPC_ROUTER_CTRL_CMD_RESUME_TX, msg,
+ sizeof(*msg), local_port->priv);
+ } else if (local_port) {
+ wake_up(&local_port->port_tx_wait_q);
+ sk = ipc_port_sk(local_port->endpoint);
+ if (sk) {
+ read_lock(&sk->sk_callback_lock);
+ write_space = sk->sk_write_space;
+ read_unlock(&sk->sk_callback_lock);
+ }
+ if (write_space &&
+ write_space != ipc_router_dummy_write_space)
+ write_space(sk);
+ else
+ post_pkt_to_port(local_port, pkt, 1);
+ } else {
+ IPC_RTR_ERR("%s: Local Port %d not Found\n",
+ __func__, rtx_port->port_id);
+ }
+ if (local_port)
+ kref_put(&local_port->ref, ipc_router_release_port);
+ list_del(&rtx_port->list);
+ kfree(rtx_port);
+ }
+}
+
+/**
+ * signal_rport_exit() - Signal the local ports of remote port exit
+ * @rport_ptr: Remote port that is exiting.
+ *
+ * This function is used to signal the local ports that are waiting
+ * to resume transmission to a remote port that is exiting.
+ */
+static void signal_rport_exit(struct msm_ipc_router_remote_port *rport_ptr)
+{
+ struct msm_ipc_resume_tx_port *rtx_port, *tmp_rtx_port;
+ struct msm_ipc_port *local_port;
+
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ rport_ptr->status = RESET;
+ list_for_each_entry_safe(rtx_port, tmp_rtx_port,
+ &rport_ptr->resume_tx_port_list, list) {
+ local_port = ipc_router_get_port_ref(rtx_port->port_id);
+ if (local_port) {
+ wake_up(&local_port->port_tx_wait_q);
+ kref_put(&local_port->ref, ipc_router_release_port);
+ }
+ list_del(&rtx_port->list);
+ kfree(rtx_port);
+ }
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+}
+
+/**
+ * ipc_router_release_rport() - Cleanup and release the remote port
+ * @ref: Reference to the remote port.
+ *
+ * This function is called when all references to the remote port are released.
+ */
+static void ipc_router_release_rport(struct kref *ref)
+{
+ struct msm_ipc_router_remote_port *rport_ptr =
+ container_of(ref, struct msm_ipc_router_remote_port, ref);
+
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ msm_ipc_router_free_resume_tx_port(rport_ptr);
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ kfree(rport_ptr);
+}
+
+/**
+ * ipc_router_destroy_rport() - Destroy the remote port
+ * @rport_ptr: Pointer to the remote port to be destroyed.
+ */
+static void ipc_router_destroy_rport(
+ struct msm_ipc_router_remote_port *rport_ptr)
+{
+ u32 node_id;
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ if (!rport_ptr)
+ return;
+
+ node_id = rport_ptr->node_id;
+ rt_entry = ipc_router_get_rtentry_ref(node_id);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: Node %d is not up\n", __func__, node_id);
+ return;
+ }
+ down_write(&rt_entry->lock_lha4);
+ list_del(&rport_ptr->list);
+ up_write(&rt_entry->lock_lha4);
+ signal_rport_exit(rport_ptr);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+}
+
+/**
+ * msm_ipc_router_lookup_server() - Lookup server information
+ * @service: Service ID of the server info to be looked up.
+ * @instance: Instance ID of the server info to be looked up.
+ * @node_id: Node/Processor ID in which the server is hosted.
+ * @port_id: Port ID within the node in which the server is hosted.
+ *
+ * @return: Pointer to the server structure if found, else NULL.
+ *
+ * Note1: Hold server_list_lock_lha2 before calling this function.
+ * Note2: If <node_id:port_id> is <0:0>, then the lookup is restricted
+ * to <service:instance>. Used only when a client wants to send a
+ * message to any QMI server.
+ */
+static struct msm_ipc_server *msm_ipc_router_lookup_server(
+ u32 service,
+ u32 instance,
+ u32 node_id,
+ u32 port_id)
+{
+ struct msm_ipc_server *server;
+ struct msm_ipc_server_port *server_port;
+ int key = (service & (SRV_HASH_SIZE - 1));
+
+ list_for_each_entry(server, &server_list[key], list) {
+ if ((server->name.service != service) ||
+ (server->name.instance != instance))
+ continue;
+ if ((node_id == 0) && (port_id == 0))
+ return server;
+ list_for_each_entry(server_port, &server->server_port_list,
+ list) {
+ if ((server_port->server_addr.node_id == node_id) &&
+ (server_port->server_addr.port_id == port_id))
+ return server;
+ }
+ }
+ return NULL;
+}
+
+/**
+ * ipc_router_get_server_ref() - Get reference to the server
+ * @svc: Service ID for which the reference is required.
+ * @ins: Instance ID for which the reference is required.
+ * @node_id: Node/Processor ID in which the server is hosted.
+ * @port_id: Port ID within the node in which the server is hosted.
+ *
+ * @return: Reference to the server if found, else NULL.
+ */
+static struct msm_ipc_server *ipc_router_get_server_ref(
+ u32 svc, u32 ins, u32 node_id, u32 port_id)
+{
+ struct msm_ipc_server *server;
+
+ down_read(&server_list_lock_lha2);
+ server = msm_ipc_router_lookup_server(svc, ins, node_id, port_id);
+ if (server)
+ kref_get(&server->ref);
+ up_read(&server_list_lock_lha2);
+ return server;
+}
+
+/**
+ * ipc_router_release_server() - Cleanup and release the server
+ * @ref: Reference to the server.
+ *
+ * This function is called when all references to the server are released.
+ */
+static void ipc_router_release_server(struct kref *ref)
+{
+ struct msm_ipc_server *server =
+ container_of(ref, struct msm_ipc_server, ref);
+
+ kfree(server);
+}
+
+/**
+ * msm_ipc_router_create_server() - Add server info to hash table
+ * @service: Service ID of the server info to be created.
+ * @instance: Instance ID of the server info to be created.
+ * @node_id: Node/Processor ID in which the server is hosted.
+ * @port_id: Port ID within the node in which the server is hosted.
+ * @xprt_info: XPRT through which the node hosting the server is reached.
+ *
+ * @return: Pointer to server structure on success, else NULL.
+ *
+ * This function adds the server info to the hash table. If the same
+ * server (i.e. <service_id:instance_id>) is hosted in different nodes,
+ * they are maintained as a list of "server_port" entries under the
+ * "server" structure.
+ */
+static struct msm_ipc_server *msm_ipc_router_create_server(
+ u32 service,
+ u32 instance,
+ u32 node_id,
+ u32 port_id,
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ struct msm_ipc_server *server = NULL;
+ struct msm_ipc_server_port *server_port;
+ struct platform_device *pdev;
+ int key = (service & (SRV_HASH_SIZE - 1));
+
+ down_write(&server_list_lock_lha2);
+ server = msm_ipc_router_lookup_server(service, instance, 0, 0);
+ if (server) {
+ list_for_each_entry(server_port, &server->server_port_list,
+ list) {
+ if ((server_port->server_addr.node_id == node_id) &&
+ (server_port->server_addr.port_id == port_id))
+ goto return_server;
+ }
+ goto create_srv_port;
+ }
+
+ server = kzalloc(sizeof(*server), GFP_KERNEL);
+ if (!server) {
+ up_write(&server_list_lock_lha2);
+ IPC_RTR_ERR("%s: Server allocation failed\n", __func__);
+ return NULL;
+ }
+ server->name.service = service;
+ server->name.instance = instance;
+ server->synced_sec_rule = 0;
+ INIT_LIST_HEAD(&server->server_port_list);
+ kref_init(&server->ref);
+ list_add_tail(&server->list, &server_list[key]);
+ scnprintf(server->pdev_name, sizeof(server->pdev_name),
+ "SVC%08x:%08x", service, instance);
+ server->next_pdev_id = 1;
+
+create_srv_port:
+ server_port = kzalloc(sizeof(*server_port), GFP_KERNEL);
+ pdev = platform_device_alloc(server->pdev_name, server->next_pdev_id);
+ if (!server_port || !pdev) {
+ kfree(server_port);
+ if (pdev)
+ platform_device_put(pdev);
+ if (list_empty(&server->server_port_list)) {
+ list_del(&server->list);
+ kfree(server);
+ }
+ up_write(&server_list_lock_lha2);
+ IPC_RTR_ERR("%s: Server Port allocation failed\n", __func__);
+ return NULL;
+ }
+ server_port->pdev = pdev;
+ server_port->server_addr.node_id = node_id;
+ server_port->server_addr.port_id = port_id;
+ server_port->xprt_info = xprt_info;
+ list_add_tail(&server_port->list, &server->server_port_list);
+ server->next_pdev_id++;
+ platform_device_add(server_port->pdev);
+
+return_server:
+ /* Add a reference so that the caller can put it back */
+ kref_get(&server->ref);
+ up_write(&server_list_lock_lha2);
+ return server;
+}
+
+/**
+ * ipc_router_destroy_server_nolock() - Remove server info from hash table
+ * @server: Server info to be removed.
+ * @node_id: Node/Processor ID in which the server is hosted.
+ * @port_id: Port ID within the node in which the server is hosted.
+ *
+ * This function removes the server_port identified using <node_id:port_id>
+ * from the server structure. If the server_port list under server structure
+ * is empty after removal, then remove the server structure from the server
+ * hash table. This function must be called with server_list_lock_lha2 locked.
+ */
+static void ipc_router_destroy_server_nolock(struct msm_ipc_server *server,
+ u32 node_id, u32 port_id)
+{
+ struct msm_ipc_server_port *server_port;
+ bool server_port_found = false;
+
+ if (!server)
+ return;
+
+ list_for_each_entry(server_port, &server->server_port_list, list) {
+ if ((server_port->server_addr.node_id == node_id) &&
+ (server_port->server_addr.port_id == port_id)) {
+ server_port_found = true;
+ break;
+ }
+ }
+ if (server_port_found && server_port) {
+ platform_device_unregister(server_port->pdev);
+ list_del(&server_port->list);
+ kfree(server_port);
+ }
+ if (list_empty(&server->server_port_list)) {
+ list_del(&server->list);
+ kref_put(&server->ref, ipc_router_release_server);
+ }
+}
+
+/**
+ * ipc_router_destroy_server() - Remove server info from hash table
+ * @server: Server info to be removed.
+ * @node_id: Node/Processor ID in which the server is hosted.
+ * @port_id: Port ID within the node in which the server is hosted.
+ *
+ * This function removes the server_port identified using <node_id:port_id>
+ * from the server structure. If the server_port list under server structure
+ * is empty after removal, then remove the server structure from the server
+ * hash table.
+ */
+static void ipc_router_destroy_server(struct msm_ipc_server *server,
+ u32 node_id, u32 port_id)
+{
+ down_write(&server_list_lock_lha2);
+ ipc_router_destroy_server_nolock(server, node_id, port_id);
+ up_write(&server_list_lock_lha2);
+}
+
+static int ipc_router_send_ctl_msg(
+ struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg,
+ u32 dst_node_id)
+{
+ struct rr_packet *pkt;
+ struct sk_buff *ipc_rtr_pkt;
+ struct rr_header_v1 *hdr;
+ int pkt_size;
+ void *data;
+ int ret = -EINVAL;
+
+ pkt = create_pkt(NULL);
+ if (!pkt) {
+ IPC_RTR_ERR("%s: pkt alloc failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ pkt_size = IPC_ROUTER_HDR_SIZE + sizeof(*msg);
+ ipc_rtr_pkt = alloc_skb(pkt_size, GFP_KERNEL);
+ if (!ipc_rtr_pkt) {
+ IPC_RTR_ERR("%s: ipc_rtr_pkt alloc failed\n", __func__);
+ release_pkt(pkt);
+ return -ENOMEM;
+ }
+
+ skb_reserve(ipc_rtr_pkt, IPC_ROUTER_HDR_SIZE);
+ data = skb_put(ipc_rtr_pkt, sizeof(*msg));
+ memcpy(data, msg, sizeof(*msg));
+ skb_queue_tail(pkt->pkt_fragment_q, ipc_rtr_pkt);
+ pkt->length = sizeof(*msg);
+
+ hdr = &pkt->hdr;
+ hdr->version = IPC_ROUTER_V1;
+ hdr->type = msg->cmd;
+ hdr->src_node_id = IPC_ROUTER_NID_LOCAL;
+ hdr->src_port_id = IPC_ROUTER_ADDRESS;
+ hdr->control_flag = 0;
+ hdr->size = sizeof(*msg);
+ if (hdr->type == IPC_ROUTER_CTRL_CMD_RESUME_TX ||
+ (!xprt_info && dst_node_id == IPC_ROUTER_NID_LOCAL))
+ hdr->dst_node_id = dst_node_id;
+ else if (xprt_info)
+ hdr->dst_node_id = xprt_info->remote_node_id;
+ hdr->dst_port_id = IPC_ROUTER_ADDRESS;
+
+ if (dst_node_id == IPC_ROUTER_NID_LOCAL &&
+ msg->cmd != IPC_ROUTER_CTRL_CMD_RESUME_TX) {
+ ipc_router_log_msg(local_log_ctx, IPC_ROUTER_LOG_EVENT_TX, msg,
+ hdr, NULL, NULL);
+ ret = post_control_ports(pkt);
+ } else if (dst_node_id == IPC_ROUTER_NID_LOCAL &&
+ msg->cmd == IPC_ROUTER_CTRL_CMD_RESUME_TX) {
+ ipc_router_log_msg(local_log_ctx, IPC_ROUTER_LOG_EVENT_TX, msg,
+ hdr, NULL, NULL);
+ ret = process_resume_tx_msg(msg, pkt);
+ } else if (xprt_info && (msg->cmd == IPC_ROUTER_CTRL_CMD_HELLO ||
+ xprt_info->initialized)) {
+ mutex_lock(&xprt_info->tx_lock_lhb2);
+ ipc_router_log_msg(xprt_info->log_ctx, IPC_ROUTER_LOG_EVENT_TX,
+ msg, hdr, NULL, NULL);
+ ret = prepend_header(pkt, xprt_info);
+ if (ret < 0) {
+ mutex_unlock(&xprt_info->tx_lock_lhb2);
+ IPC_RTR_ERR("%s: Prepend Header failed\n", __func__);
+ release_pkt(pkt);
+ return ret;
+ }
+
+ ret = xprt_info->xprt->write(pkt, pkt->length, xprt_info->xprt);
+ mutex_unlock(&xprt_info->tx_lock_lhb2);
+ }
+
+ release_pkt(pkt);
+ return ret;
+}
+
+static int
+msm_ipc_router_send_server_list(u32 node_id,
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ union rr_control_msg ctl;
+ struct msm_ipc_server *server;
+ struct msm_ipc_server_port *server_port;
+ int i;
+
+ if (!xprt_info || !xprt_info->initialized) {
+ IPC_RTR_ERR("%s: Xprt info not initialized\n", __func__);
+ return -EINVAL;
+ }
+
+ memset(&ctl, 0, sizeof(ctl));
+ ctl.cmd = IPC_ROUTER_CTRL_CMD_NEW_SERVER;
+
+ for (i = 0; i < SRV_HASH_SIZE; i++) {
+ list_for_each_entry(server, &server_list[i], list) {
+ ctl.srv.service = server->name.service;
+ ctl.srv.instance = server->name.instance;
+ list_for_each_entry(server_port,
+ &server->server_port_list, list) {
+ if (server_port->server_addr.node_id !=
+ node_id)
+ continue;
+
+ ctl.srv.node_id =
+ server_port->server_addr.node_id;
+ ctl.srv.port_id =
+ server_port->server_addr.port_id;
+ ipc_router_send_ctl_msg
+ (xprt_info, &ctl,
+ IPC_ROUTER_DUMMY_DEST_NODE);
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int broadcast_ctl_msg_locally(union rr_control_msg *msg)
+{
+ return ipc_router_send_ctl_msg(NULL, msg, IPC_ROUTER_NID_LOCAL);
+}
+
+static int broadcast_ctl_msg(union rr_control_msg *ctl)
+{
+ struct msm_ipc_router_xprt_info *xprt_info;
+
+ down_read(&xprt_info_list_lock_lha5);
+ list_for_each_entry(xprt_info, &xprt_info_list, list) {
+ ipc_router_send_ctl_msg(xprt_info, ctl,
+ IPC_ROUTER_DUMMY_DEST_NODE);
+ }
+ up_read(&xprt_info_list_lock_lha5);
+ broadcast_ctl_msg_locally(ctl);
+
+ return 0;
+}
+
+static int relay_ctl_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *ctl)
+{
+ struct msm_ipc_router_xprt_info *fwd_xprt_info;
+
+ if (!xprt_info || !ctl)
+ return -EINVAL;
+
+ down_read(&xprt_info_list_lock_lha5);
+ list_for_each_entry(fwd_xprt_info, &xprt_info_list, list) {
+ if (xprt_info->xprt->link_id != fwd_xprt_info->xprt->link_id)
+ ipc_router_send_ctl_msg(fwd_xprt_info, ctl,
+ IPC_ROUTER_DUMMY_DEST_NODE);
+ }
+ up_read(&xprt_info_list_lock_lha5);
+
+ return 0;
+}
+
+static int forward_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ struct rr_packet *pkt)
+{
+ struct rr_header_v1 *hdr;
+ struct msm_ipc_router_xprt_info *fwd_xprt_info;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ int ret = 0;
+ int fwd_xprt_option;
+
+ if (!xprt_info || !pkt)
+ return -EINVAL;
+
+ hdr = &pkt->hdr;
+ rt_entry = ipc_router_get_rtentry_ref(hdr->dst_node_id);
+ if (!(rt_entry) || !(rt_entry->xprt_info)) {
+ IPC_RTR_ERR("%s: Routing table not initialized\n", __func__);
+ ret = -ENODEV;
+ goto fm_error1;
+ }
+
+ down_read(&rt_entry->lock_lha4);
+ fwd_xprt_info = rt_entry->xprt_info;
+ ret = ipc_router_get_xprt_info_ref(fwd_xprt_info);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Abort invalid xprt\n", __func__);
+ goto fm_error_xprt;
+ }
+ ret = prepend_header(pkt, fwd_xprt_info);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Prepend Header failed\n", __func__);
+ goto fm_error2;
+ }
+ fwd_xprt_option = fwd_xprt_info->xprt->get_option(fwd_xprt_info->xprt);
+ if (!(fwd_xprt_option & FRAG_PKT_WRITE_ENABLE)) {
+ ret = defragment_pkt(pkt);
+ if (ret < 0)
+ goto fm_error2;
+ }
+
+ mutex_lock(&fwd_xprt_info->tx_lock_lhb2);
+ if (xprt_info->remote_node_id == fwd_xprt_info->remote_node_id) {
+ IPC_RTR_ERR("%s: Discarding Command to route back\n", __func__);
+ ret = -EINVAL;
+ goto fm_error3;
+ }
+
+ if (xprt_info->xprt->link_id == fwd_xprt_info->xprt->link_id) {
+ IPC_RTR_ERR("%s: DST in the same cluster\n", __func__);
+ ret = 0;
+ goto fm_error3;
+ }
+ fwd_xprt_info->xprt->write(pkt, pkt->length, fwd_xprt_info->xprt);
+ IPC_RTR_INFO(fwd_xprt_info->log_ctx,
+ "%s %s Len:0x%x T:0x%x CF:0x%x SRC:<0x%x:0x%x> DST:<0x%x:0x%x>\n",
+ "FWD", "TX", hdr->size, hdr->type, hdr->control_flag,
+ hdr->src_node_id, hdr->src_port_id,
+ hdr->dst_node_id, hdr->dst_port_id);
+
+fm_error3:
+ mutex_unlock(&fwd_xprt_info->tx_lock_lhb2);
+fm_error2:
+ ipc_router_put_xprt_info_ref(fwd_xprt_info);
+fm_error_xprt:
+ up_read(&rt_entry->lock_lha4);
+fm_error1:
+ if (rt_entry)
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ return ret;
+}
+
+static int msm_ipc_router_send_remove_client(struct comm_mode_info *mode_info,
+ u32 node_id, u32 port_id)
+{
+ union rr_control_msg msg;
+ struct msm_ipc_router_xprt_info *tmp_xprt_info;
+ int mode;
+ void *xprt_info;
+ int rc = 0;
+
+ if (!mode_info) {
+ IPC_RTR_ERR("%s: NULL mode_info\n", __func__);
+ return -EINVAL;
+ }
+ mode = mode_info->mode;
+ xprt_info = mode_info->xprt_info;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT;
+ msg.cli.node_id = node_id;
+ msg.cli.port_id = port_id;
+
+ if ((mode == SINGLE_LINK_MODE) && xprt_info) {
+ down_read(&xprt_info_list_lock_lha5);
+ list_for_each_entry(tmp_xprt_info, &xprt_info_list, list) {
+ if (tmp_xprt_info != xprt_info)
+ continue;
+ ipc_router_send_ctl_msg(tmp_xprt_info, &msg,
+ IPC_ROUTER_DUMMY_DEST_NODE);
+ break;
+ }
+ up_read(&xprt_info_list_lock_lha5);
+ } else if ((mode == SINGLE_LINK_MODE) && !xprt_info) {
+ broadcast_ctl_msg_locally(&msg);
+ } else if (mode == MULTI_LINK_MODE) {
+ broadcast_ctl_msg(&msg);
+ } else if (mode != NULL_MODE) {
+ IPC_RTR_ERR(
+ "%s: Invalid mode(%d) + xprt_info(%p) for %08x:%08x\n",
+ __func__, mode, xprt_info, node_id, port_id);
+ rc = -EINVAL;
+ }
+ return rc;
+}
+
+static void update_comm_mode_info(struct comm_mode_info *mode_info,
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ if (!mode_info) {
+ IPC_RTR_ERR("%s: NULL mode_info\n", __func__);
+ return;
+ }
+
+ if (mode_info->mode == NULL_MODE) {
+ mode_info->xprt_info = xprt_info;
+ mode_info->mode = SINGLE_LINK_MODE;
+ } else if (mode_info->mode == SINGLE_LINK_MODE &&
+ mode_info->xprt_info != xprt_info) {
+ mode_info->mode = MULTI_LINK_MODE;
+ }
+}
+
+/**
+ * cleanup_rmt_server() - Cleanup server hosted in the remote port
+ * @xprt_info: XPRT through which this cleanup event is handled.
+ * @rport_ptr: Remote port that is being cleaned up.
+ * @server: Server that is hosted in the remote port.
+ */
+static void cleanup_rmt_server(struct msm_ipc_router_xprt_info *xprt_info,
+ struct msm_ipc_router_remote_port *rport_ptr,
+ struct msm_ipc_server *server)
+{
+ union rr_control_msg ctl;
+
+ memset(&ctl, 0, sizeof(ctl));
+ ctl.cmd = IPC_ROUTER_CTRL_CMD_REMOVE_SERVER;
+ ctl.srv.service = server->name.service;
+ ctl.srv.instance = server->name.instance;
+ ctl.srv.node_id = rport_ptr->node_id;
+ ctl.srv.port_id = rport_ptr->port_id;
+ if (xprt_info)
+ relay_ctl_msg(xprt_info, &ctl);
+ broadcast_ctl_msg_locally(&ctl);
+ ipc_router_destroy_server_nolock(server, rport_ptr->node_id,
+ rport_ptr->port_id);
+}
+
+static void cleanup_rmt_ports(struct msm_ipc_router_xprt_info *xprt_info,
+ struct msm_ipc_routing_table_entry *rt_entry)
+{
+ struct msm_ipc_router_remote_port *rport_ptr, *tmp_rport_ptr;
+ struct msm_ipc_server *server;
+ union rr_control_msg ctl;
+ int j;
+
+ memset(&ctl, 0, sizeof(ctl));
+ for (j = 0; j < RP_HASH_SIZE; j++) {
+ list_for_each_entry_safe(rport_ptr, tmp_rport_ptr,
+ &rt_entry->remote_port_list[j], list) {
+ list_del(&rport_ptr->list);
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ server = rport_ptr->server;
+ rport_ptr->server = NULL;
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ ipc_router_reset_conn(rport_ptr);
+ if (server) {
+ cleanup_rmt_server(xprt_info, rport_ptr,
+ server);
+ server = NULL;
+ }
+
+ ctl.cmd = IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT;
+ ctl.cli.node_id = rport_ptr->node_id;
+ ctl.cli.port_id = rport_ptr->port_id;
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+
+ relay_ctl_msg(xprt_info, &ctl);
+ broadcast_ctl_msg_locally(&ctl);
+ }
+ }
+}
+
+static void msm_ipc_cleanup_routing_table(
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ int i;
+ struct msm_ipc_routing_table_entry *rt_entry, *tmp_rt_entry;
+
+ if (!xprt_info) {
+ IPC_RTR_ERR("%s: Invalid xprt_info\n", __func__);
+ return;
+ }
+
+ down_write(&server_list_lock_lha2);
+ down_write(&routing_table_lock_lha3);
+ for (i = 0; i < RT_HASH_SIZE; i++) {
+ list_for_each_entry_safe(rt_entry, tmp_rt_entry,
+ &routing_table[i], list) {
+ down_write(&rt_entry->lock_lha4);
+ if (rt_entry->xprt_info != xprt_info) {
+ up_write(&rt_entry->lock_lha4);
+ continue;
+ }
+ cleanup_rmt_ports(xprt_info, rt_entry);
+ rt_entry->xprt_info = NULL;
+ up_write(&rt_entry->lock_lha4);
+ list_del(&rt_entry->list);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ }
+ }
+ up_write(&routing_table_lock_lha3);
+ up_write(&server_list_lock_lha2);
+}
+
+/**
+ * sync_sec_rule() - Synchronize the security rule into the server structure
+ * @server: Server structure where the rule has to be synchronized.
+ * @rule: Security rule to be synchronized.
+ *
+ * This function is used to update the server structure with the security
+ * rule configured for the <service:instance> corresponding to that server.
+ */
+static void sync_sec_rule(struct msm_ipc_server *server, void *rule)
+{
+ struct msm_ipc_server_port *server_port;
+ struct msm_ipc_router_remote_port *rport_ptr = NULL;
+
+ list_for_each_entry(server_port, &server->server_port_list, list) {
+ rport_ptr = ipc_router_get_rport_ref(
+ server_port->server_addr.node_id,
+ server_port->server_addr.port_id);
+ if (!rport_ptr)
+ continue;
+ rport_ptr->sec_rule = rule;
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ }
+ server->synced_sec_rule = 1;
+}
+
+/**
+ * msm_ipc_sync_sec_rule() - Sync the security rule to the service
+ * @service: Service for which the rule has to be synchronized.
+ * @instance: Instance for which the rule has to be synchronized.
+ * @rule: Security rule to be synchronized.
+ *
+ * This function is used to synchronize the security rule with the server
+ * hash table, if the user-space script configures the rule after the service
+ * has come up. The rule is synchronized to a specific service and,
+ * optionally, to a specific instance.
+ */
+void msm_ipc_sync_sec_rule(u32 service, u32 instance, void *rule)
+{
+ int key = (service & (SRV_HASH_SIZE - 1));
+ struct msm_ipc_server *server;
+
+ down_write(&server_list_lock_lha2);
+ list_for_each_entry(server, &server_list[key], list) {
+ if (server->name.service != service)
+ continue;
+
+ if (server->name.instance != instance &&
+ instance != ALL_INSTANCE)
+ continue;
+
+ /* If the rule applies to all instances and if the specific
+ * instance of a service has a rule synchronized already,
+ * do not apply the rule for that specific instance.
+ */
+ if (instance == ALL_INSTANCE && server->synced_sec_rule)
+ continue;
+
+ sync_sec_rule(server, rule);
+ }
+ up_write(&server_list_lock_lha2);
+}
+
+/**
+ * msm_ipc_sync_default_sec_rule() - Default security rule to all services
+ * @rule: Security rule to be synchronized.
+ *
+ * This function is used to synchronize the security rule with the server
+ * hash table, if the user-space script configures the rule after the service
+ * has come up. The rule synchronized here applies to all services, provided
+ * the service concerned does not have a rule of its own.
+ */
+void msm_ipc_sync_default_sec_rule(void *rule)
+{
+ int key;
+ struct msm_ipc_server *server;
+
+ down_write(&server_list_lock_lha2);
+ for (key = 0; key < SRV_HASH_SIZE; key++) {
+ list_for_each_entry(server, &server_list[key], list) {
+ if (server->synced_sec_rule)
+ continue;
+
+ sync_sec_rule(server, rule);
+ }
+ }
+ up_write(&server_list_lock_lha2);
+}
+
+/**
+ * ipc_router_reset_conn() - Reset the connection to remote port
+ * @rport_ptr: Pointer to the remote port to be disconnected.
+ *
+ * This function is used to reset all the local ports that are connected to
+ * the remote port being passed.
+ */
+static void ipc_router_reset_conn(struct msm_ipc_router_remote_port *rport_ptr)
+{
+ struct msm_ipc_port *port_ptr;
+ struct ipc_router_conn_info *conn_info, *tmp_conn_info;
+
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ list_for_each_entry_safe(conn_info, tmp_conn_info,
+ &rport_ptr->conn_info_list, list) {
+ port_ptr = ipc_router_get_port_ref(conn_info->port_id);
+ if (port_ptr) {
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ port_ptr->conn_status = CONNECTION_RESET;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ wake_up(&port_ptr->port_rx_wait_q);
+ kref_put(&port_ptr->ref, ipc_router_release_port);
+ }
+
+ list_del(&conn_info->list);
+ kfree(conn_info);
+ }
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+}
+
+/**
+ * ipc_router_set_conn() - Set the connection by initializing dest address
+ * @port_ptr: Local port in which the connection has to be set.
+ * @addr: Destination address of the connection.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+int ipc_router_set_conn(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *addr)
+{
+ struct msm_ipc_router_remote_port *rport_ptr;
+ struct ipc_router_conn_info *conn_info;
+
+ if (unlikely(!port_ptr || !addr))
+ return -EINVAL;
+
+ if (addr->addrtype != MSM_IPC_ADDR_ID) {
+ IPC_RTR_ERR("%s: Invalid Address type\n", __func__);
+ return -EINVAL;
+ }
+
+ if (port_ptr->type == SERVER_PORT) {
+ IPC_RTR_ERR("%s: Connection refused on a server port\n",
+ __func__);
+ return -ECONNREFUSED;
+ }
+
+ if (port_ptr->conn_status == CONNECTED) {
+ IPC_RTR_ERR("%s: Port %08x already connected\n",
+ __func__, port_ptr->this_port.port_id);
+ return -EISCONN;
+ }
+
+ conn_info = kzalloc(sizeof(*conn_info), GFP_KERNEL);
+ if (!conn_info) {
+ IPC_RTR_ERR("%s: Error allocating conn_info\n", __func__);
+ return -ENOMEM;
+ }
+ INIT_LIST_HEAD(&conn_info->list);
+ conn_info->port_id = port_ptr->this_port.port_id;
+
+ rport_ptr = ipc_router_get_rport_ref(addr->addr.port_addr.node_id,
+ addr->addr.port_addr.port_id);
+ if (!rport_ptr) {
+ IPC_RTR_ERR("%s: Invalid remote endpoint\n", __func__);
+ kfree(conn_info);
+ return -ENODEV;
+ }
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ list_add_tail(&conn_info->list, &rport_ptr->conn_info_list);
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ memcpy(&port_ptr->dest_addr, &addr->addr.port_addr,
+ sizeof(struct msm_ipc_port_addr));
+ port_ptr->conn_status = CONNECTED;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ return 0;
+}
+
+/**
+ * do_version_negotiation() - perform a version negotiation and set the version
+ * @xprt_info: Pointer to the IPC Router transport info structure.
+ * @msg: Pointer to the IPC Router HELLO message.
+ *
+ * This function performs the version negotiation by verifying the computed
+ * checksum first. If the checksum matches with the magic number, it sets the
+ * negotiated IPC Router version in transport.
+ */
+static void do_version_negotiation(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg)
+{
+ u32 magic;
+ unsigned int version;
+
+ if (!xprt_info)
+ return;
+ magic = ipc_router_calc_checksum(msg);
+ if (magic == IPC_ROUTER_HELLO_MAGIC) {
+ version = fls(msg->hello.versions & IPC_ROUTER_VER_BITMASK) - 1;
+ /* Bits 0 and 31 are reserved for future use */
+ if ((version > 0) &&
+ (version != (sizeof(version) * BITS_PER_BYTE - 1)) &&
+ xprt_info->xprt->set_version)
+ xprt_info->xprt->set_version(xprt_info->xprt, version);
+ }
+}
+
+static int process_hello_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg,
+ struct rr_header_v1 *hdr)
+{
+ int i, rc = 0;
+ union rr_control_msg ctl;
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ if (!hdr)
+ return -EINVAL;
+
+ xprt_info->remote_node_id = hdr->src_node_id;
+ rt_entry = create_routing_table_entry(hdr->src_node_id, xprt_info);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: rt_entry allocation failed\n", __func__);
+ return -ENOMEM;
+ }
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+
+ do_version_negotiation(xprt_info, msg);
+ /* Send a reply HELLO message */
+ memset(&ctl, 0, sizeof(ctl));
+ ctl.hello.cmd = IPC_ROUTER_CTRL_CMD_HELLO;
+ ctl.hello.checksum = IPC_ROUTER_HELLO_MAGIC;
+ ctl.hello.versions = (u32)IPC_ROUTER_VER_BITMASK;
+ ctl.hello.checksum = ipc_router_calc_checksum(&ctl);
+ rc = ipc_router_send_ctl_msg(xprt_info, &ctl,
+ IPC_ROUTER_DUMMY_DEST_NODE);
+ if (rc < 0) {
+ IPC_RTR_ERR("%s: Error sending reply HELLO message\n",
+ __func__);
+ return rc;
+ }
+ xprt_info->initialized = 1;
+
+ /* Send the list of servers from the local node and from nodes
+ * outside the mesh network of which this XPRT is a part.
+ */
+ down_read(&server_list_lock_lha2);
+ down_read(&routing_table_lock_lha3);
+ for (i = 0; i < RT_HASH_SIZE; i++) {
+ list_for_each_entry(rt_entry, &routing_table[i], list) {
+ if ((rt_entry->node_id != IPC_ROUTER_NID_LOCAL) &&
+ (!rt_entry->xprt_info ||
+ (rt_entry->xprt_info->xprt->link_id ==
+ xprt_info->xprt->link_id)))
+ continue;
+ rc = msm_ipc_router_send_server_list(rt_entry->node_id,
+ xprt_info);
+ if (rc < 0) {
+ up_read(&routing_table_lock_lha3);
+ up_read(&server_list_lock_lha2);
+ return rc;
+ }
+ }
+ }
+ up_read(&routing_table_lock_lha3);
+ up_read(&server_list_lock_lha2);
+ return rc;
+}
+
+static int process_resume_tx_msg(union rr_control_msg *msg,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ rport_ptr = ipc_router_get_rport_ref(msg->cli.node_id,
+ msg->cli.port_id);
+ if (!rport_ptr) {
+ IPC_RTR_ERR("%s: Unable to resume client\n", __func__);
+ return -ENODEV;
+ }
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ rport_ptr->tx_quota_cnt = 0;
+ post_resume_tx(rport_ptr, pkt, msg);
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ return 0;
+}
+
+static int process_new_server_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_routing_table_entry *rt_entry;
+ struct msm_ipc_server *server;
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ if (msg->srv.instance == 0) {
+ IPC_RTR_ERR("%s: Server %08x create rejected, version = 0\n",
+ __func__, msg->srv.service);
+ return -EINVAL;
+ }
+
+ rt_entry = ipc_router_get_rtentry_ref(msg->srv.node_id);
+ if (!rt_entry) {
+ rt_entry = create_routing_table_entry(msg->srv.node_id,
+ xprt_info);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: rt_entry allocation failed\n",
+ __func__);
+ return -ENOMEM;
+ }
+ }
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+
+ /* If the service already exists in the table, create_server returns
+ * a reference to it.
+ */
+ rport_ptr = ipc_router_create_rport(msg->srv.node_id,
+ msg->srv.port_id, xprt_info);
+ if (!rport_ptr)
+ return -ENOMEM;
+
+ server = msm_ipc_router_create_server(
+ msg->srv.service, msg->srv.instance,
+ msg->srv.node_id, msg->srv.port_id, xprt_info);
+ if (!server) {
+ IPC_RTR_ERR("%s: Server %08x:%08x Create failed\n",
+ __func__, msg->srv.service, msg->srv.instance);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ ipc_router_destroy_rport(rport_ptr);
+ return -ENOMEM;
+ }
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ rport_ptr->server = server;
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ rport_ptr->sec_rule = msm_ipc_get_security_rule(
+ msg->srv.service, msg->srv.instance);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ kref_put(&server->ref, ipc_router_release_server);
+
+ /* Relay the new server message to other subsystems that do not belong
+ * to the cluster from which this message is received. Notify the
+ * local clients waiting for this service.
+ */
+ relay_ctl_msg(xprt_info, msg);
+ post_control_ports(pkt);
+ return 0;
+}
+
+static int process_rmv_server_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_server *server;
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ server = ipc_router_get_server_ref(msg->srv.service, msg->srv.instance,
+ msg->srv.node_id, msg->srv.port_id);
+ rport_ptr = ipc_router_get_rport_ref(msg->srv.node_id,
+ msg->srv.port_id);
+ if (rport_ptr) {
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ if (rport_ptr->server == server)
+ rport_ptr->server = NULL;
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ }
+
+ if (server) {
+ kref_put(&server->ref, ipc_router_release_server);
+ ipc_router_destroy_server(server, msg->srv.node_id,
+ msg->srv.port_id);
+ /* Relay the new server message to other subsystems that do not
+ * belong to the cluster from which this message is received.
+ * Notify the local clients communicating with the service.
+ */
+ relay_ctl_msg(xprt_info, msg);
+ post_control_ports(pkt);
+ }
+ return 0;
+}
+
+static int process_rmv_client_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ union rr_control_msg *msg,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_router_remote_port *rport_ptr;
+ struct msm_ipc_server *server;
+
+ rport_ptr = ipc_router_get_rport_ref(msg->cli.node_id,
+ msg->cli.port_id);
+ if (rport_ptr) {
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ server = rport_ptr->server;
+ rport_ptr->server = NULL;
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ ipc_router_reset_conn(rport_ptr);
+ down_write(&server_list_lock_lha2);
+ if (server)
+ cleanup_rmt_server(NULL, rport_ptr, server);
+ up_write(&server_list_lock_lha2);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ ipc_router_destroy_rport(rport_ptr);
+ }
+
+ relay_ctl_msg(xprt_info, msg);
+ post_control_ports(pkt);
+ return 0;
+}
+
+static int process_control_msg(struct msm_ipc_router_xprt_info *xprt_info,
+ struct rr_packet *pkt)
+{
+ union rr_control_msg *msg;
+ int rc = 0;
+ struct rr_header_v1 *hdr;
+
+ if (pkt->length != sizeof(*msg)) {
+ IPC_RTR_ERR("%s: r2r msg size %d != %zu\n", __func__,
+ pkt->length, sizeof(*msg));
+ return -EINVAL;
+ }
+
+ hdr = &pkt->hdr;
+ msg = msm_ipc_router_skb_to_buf(pkt->pkt_fragment_q, sizeof(*msg));
+ if (!msg) {
+ IPC_RTR_ERR("%s: Error extracting control msg\n", __func__);
+ return -ENOMEM;
+ }
+
+ ipc_router_log_msg(xprt_info->log_ctx, IPC_ROUTER_LOG_EVENT_RX, msg,
+ hdr, NULL, NULL);
+
+ switch (msg->cmd) {
+ case IPC_ROUTER_CTRL_CMD_HELLO:
+ rc = process_hello_msg(xprt_info, msg, hdr);
+ break;
+ case IPC_ROUTER_CTRL_CMD_RESUME_TX:
+ rc = process_resume_tx_msg(msg, pkt);
+ break;
+ case IPC_ROUTER_CTRL_CMD_NEW_SERVER:
+ rc = process_new_server_msg(xprt_info, msg, pkt);
+ break;
+ case IPC_ROUTER_CTRL_CMD_REMOVE_SERVER:
+ rc = process_rmv_server_msg(xprt_info, msg, pkt);
+ break;
+ case IPC_ROUTER_CTRL_CMD_REMOVE_CLIENT:
+ rc = process_rmv_client_msg(xprt_info, msg, pkt);
+ break;
+ default:
+ rc = -EINVAL;
+ }
+ kfree(msg);
+ return rc;
+}
+
+static void do_read_data(struct work_struct *work)
+{
+ struct rr_header_v1 *hdr;
+ struct rr_packet *pkt = NULL;
+ struct msm_ipc_port *port_ptr;
+ struct msm_ipc_router_remote_port *rport_ptr;
+ int ret;
+
+ struct msm_ipc_router_xprt_info *xprt_info =
+ container_of(work,
+ struct msm_ipc_router_xprt_info,
+ read_data);
+
+ while ((pkt = rr_read(xprt_info)) != NULL) {
+ if (pkt->length < calc_rx_header_size(xprt_info) ||
+ pkt->length > MAX_IPC_PKT_SIZE) {
+ IPC_RTR_ERR("%s: Invalid pkt length %d\n", __func__,
+ pkt->length);
+ goto read_next_pkt1;
+ }
+
+ ret = extract_header(pkt);
+ if (ret < 0)
+ goto read_next_pkt1;
+ hdr = &pkt->hdr;
+
+ if ((hdr->dst_node_id != IPC_ROUTER_NID_LOCAL) &&
+ ((hdr->type == IPC_ROUTER_CTRL_CMD_RESUME_TX) ||
+ (hdr->type == IPC_ROUTER_CTRL_CMD_DATA))) {
+ IPC_RTR_INFO(xprt_info->log_ctx,
+ "%s %s Len:0x%x T:0x%x CF:0x%x SRC:<0x%x:0x%x> DST:<0x%x:0x%x>\n",
+ "FWD", "RX", hdr->size, hdr->type,
+ hdr->control_flag, hdr->src_node_id,
+ hdr->src_port_id, hdr->dst_node_id,
+ hdr->dst_port_id);
+ forward_msg(xprt_info, pkt);
+ goto read_next_pkt1;
+ }
+
+ if (hdr->type != IPC_ROUTER_CTRL_CMD_DATA) {
+ process_control_msg(xprt_info, pkt);
+ goto read_next_pkt1;
+ }
+
+ port_ptr = ipc_router_get_port_ref(hdr->dst_port_id);
+ if (!port_ptr) {
+ IPC_RTR_ERR("%s: No local port id %08x\n", __func__,
+ hdr->dst_port_id);
+ goto read_next_pkt1;
+ }
+
+ rport_ptr = ipc_router_get_rport_ref(hdr->src_node_id,
+ hdr->src_port_id);
+ if (!rport_ptr) {
+ rport_ptr = ipc_router_create_rport(hdr->src_node_id,
+ hdr->src_port_id,
+ xprt_info);
+ if (!rport_ptr) {
+ IPC_RTR_ERR(
+ "%s: Rmt Prt %08x:%08x create failed\n",
+ __func__, hdr->src_node_id,
+ hdr->src_port_id);
+ goto read_next_pkt2;
+ }
+ }
+
+ ipc_router_log_msg(xprt_info->log_ctx, IPC_ROUTER_LOG_EVENT_RX,
+ pkt, hdr, port_ptr, rport_ptr);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ post_pkt_to_port(port_ptr, pkt, 0);
+ kref_put(&port_ptr->ref, ipc_router_release_port);
+ continue;
+read_next_pkt2:
+ kref_put(&port_ptr->ref, ipc_router_release_port);
+read_next_pkt1:
+ release_pkt(pkt);
+ }
+}
+
+int msm_ipc_router_register_server(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *name)
+{
+ struct msm_ipc_server *server;
+ union rr_control_msg ctl;
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ if (!port_ptr || !name)
+ return -EINVAL;
+
+ if (name->addrtype != MSM_IPC_ADDR_NAME)
+ return -EINVAL;
+
+ rport_ptr = ipc_router_create_rport(IPC_ROUTER_NID_LOCAL,
+ port_ptr->this_port.port_id, NULL);
+ if (!rport_ptr) {
+ IPC_RTR_ERR("%s: RPort %08x:%08x creation failed\n", __func__,
+ IPC_ROUTER_NID_LOCAL, port_ptr->this_port.port_id);
+ return -ENOMEM;
+ }
+
+ server = msm_ipc_router_create_server(name->addr.port_name.service,
+ name->addr.port_name.instance,
+ IPC_ROUTER_NID_LOCAL,
+ port_ptr->this_port.port_id,
+ NULL);
+ if (!server) {
+ IPC_RTR_ERR("%s: Server %08x:%08x Create failed\n",
+ __func__, name->addr.port_name.service,
+ name->addr.port_name.instance);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ ipc_router_destroy_rport(rport_ptr);
+ return -ENOMEM;
+ }
+
+ memset(&ctl, 0, sizeof(ctl));
+ ctl.cmd = IPC_ROUTER_CTRL_CMD_NEW_SERVER;
+ ctl.srv.service = server->name.service;
+ ctl.srv.instance = server->name.instance;
+ ctl.srv.node_id = IPC_ROUTER_NID_LOCAL;
+ ctl.srv.port_id = port_ptr->this_port.port_id;
+ broadcast_ctl_msg(&ctl);
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ port_ptr->type = SERVER_PORT;
+ port_ptr->mode_info.mode = MULTI_LINK_MODE;
+ port_ptr->port_name.service = server->name.service;
+ port_ptr->port_name.instance = server->name.instance;
+ port_ptr->rport_info = rport_ptr;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ kref_put(&server->ref, ipc_router_release_server);
+ return 0;
+}
+
+int msm_ipc_router_unregister_server(struct msm_ipc_port *port_ptr)
+{
+ struct msm_ipc_server *server;
+ union rr_control_msg ctl;
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ if (!port_ptr)
+ return -EINVAL;
+
+ if (port_ptr->type != SERVER_PORT) {
+ IPC_RTR_ERR("%s: Trying to unregister a non-server port\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (port_ptr->this_port.node_id != IPC_ROUTER_NID_LOCAL) {
+ IPC_RTR_ERR(
+ "%s: Trying to unregister a remote server locally\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ server = ipc_router_get_server_ref(port_ptr->port_name.service,
+ port_ptr->port_name.instance,
+ port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ if (!server) {
+ IPC_RTR_ERR("%s: Server lookup failed\n", __func__);
+ return -ENODEV;
+ }
+
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ port_ptr->type = CLIENT_PORT;
+ rport_ptr = (struct msm_ipc_router_remote_port *)port_ptr->rport_info;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ if (rport_ptr)
+ ipc_router_reset_conn(rport_ptr);
+ memset(&ctl, 0, sizeof(ctl));
+ ctl.cmd = IPC_ROUTER_CTRL_CMD_REMOVE_SERVER;
+ ctl.srv.service = server->name.service;
+ ctl.srv.instance = server->name.instance;
+ ctl.srv.node_id = IPC_ROUTER_NID_LOCAL;
+ ctl.srv.port_id = port_ptr->this_port.port_id;
+ kref_put(&server->ref, ipc_router_release_server);
+ ipc_router_destroy_server(server, port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ broadcast_ctl_msg(&ctl);
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ port_ptr->type = CLIENT_PORT;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ return 0;
+}
+
+static int loopback_data(struct msm_ipc_port *src,
+ u32 port_id,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_port *port_ptr;
+ struct sk_buff *temp_skb;
+ int align_size;
+
+ if (!pkt) {
+ IPC_RTR_ERR("%s: Invalid pkt pointer\n", __func__);
+ return -EINVAL;
+ }
+
+ temp_skb = skb_peek_tail(pkt->pkt_fragment_q);
+ align_size = ALIGN_SIZE(pkt->length);
+ skb_put(temp_skb, align_size);
+ pkt->length += align_size;
+
+ port_ptr = ipc_router_get_port_ref(port_id);
+ if (!port_ptr) {
+ IPC_RTR_ERR("%s: Local port %d not present\n", __func__,
+ port_id);
+ return -ENODEV;
+ }
+ post_pkt_to_port(port_ptr, pkt, 1);
+ update_comm_mode_info(&src->mode_info, NULL);
+ kref_put(&port_ptr->ref, ipc_router_release_port);
+
+ return pkt->hdr.size;
+}
+
+static int ipc_router_tx_wait(struct msm_ipc_port *src,
+ struct msm_ipc_router_remote_port *rport_ptr,
+ u32 *set_confirm_rx,
+ long timeout)
+{
+ struct msm_ipc_resume_tx_port *resume_tx_port;
+ int ret;
+
+ if (unlikely(!src || !rport_ptr))
+ return -EINVAL;
+
+ for (;;) {
+ mutex_lock(&rport_ptr->rport_lock_lhb2);
+ if (rport_ptr->status == RESET) {
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ IPC_RTR_ERR("%s: RPort %08x:%08x is in reset state\n",
+ __func__, rport_ptr->node_id,
+ rport_ptr->port_id);
+ return -ENETRESET;
+ }
+
+ if (rport_ptr->tx_quota_cnt < IPC_ROUTER_HIGH_RX_QUOTA)
+ break;
+
+ if (msm_ipc_router_lookup_resume_tx_port(
+ rport_ptr, src->this_port.port_id))
+ goto check_timeo;
+
+ resume_tx_port =
+ kzalloc(sizeof(struct msm_ipc_resume_tx_port),
+ GFP_KERNEL);
+ if (!resume_tx_port) {
+ IPC_RTR_ERR("%s: Resume_Tx port allocation failed\n",
+ __func__);
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ return -ENOMEM;
+ }
+ INIT_LIST_HEAD(&resume_tx_port->list);
+ resume_tx_port->port_id = src->this_port.port_id;
+ resume_tx_port->node_id = src->this_port.node_id;
+ list_add_tail(&resume_tx_port->list,
+ &rport_ptr->resume_tx_port_list);
+check_timeo:
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ if (!timeout) {
+ return -EAGAIN;
+ } else if (timeout < 0) {
+ ret =
+ wait_event_interruptible(src->port_tx_wait_q,
+ (rport_ptr->tx_quota_cnt !=
+ IPC_ROUTER_HIGH_RX_QUOTA ||
+ rport_ptr->status == RESET));
+ if (ret)
+ return ret;
+ } else {
+ ret = wait_event_interruptible_timeout(
+ src->port_tx_wait_q,
+ (rport_ptr->tx_quota_cnt !=
+ IPC_ROUTER_HIGH_RX_QUOTA ||
+ rport_ptr->status == RESET),
+ msecs_to_jiffies(timeout));
+ if (ret < 0) {
+ return ret;
+ } else if (ret == 0) {
+ IPC_RTR_ERR("%s: Resume_tx Timeout %08x:%08x\n",
+ __func__, rport_ptr->node_id,
+ rport_ptr->port_id);
+ return -ETIMEDOUT;
+ }
+ }
+ }
+ rport_ptr->tx_quota_cnt++;
+ if (rport_ptr->tx_quota_cnt == IPC_ROUTER_LOW_RX_QUOTA)
+ *set_confirm_rx = 1;
+ mutex_unlock(&rport_ptr->rport_lock_lhb2);
+ return 0;
+}
+
+static int
+msm_ipc_router_write_pkt(struct msm_ipc_port *src,
+ struct msm_ipc_router_remote_port *rport_ptr,
+ struct rr_packet *pkt, long timeout)
+{
+ struct rr_header_v1 *hdr;
+ struct msm_ipc_router_xprt_info *xprt_info;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ struct sk_buff *temp_skb;
+ int xprt_option;
+ int ret;
+ int align_size;
+ u32 set_confirm_rx = 0;
+
+ if (!rport_ptr || !src || !pkt)
+ return -EINVAL;
+
+ hdr = &pkt->hdr;
+ hdr->version = IPC_ROUTER_V1;
+ hdr->type = IPC_ROUTER_CTRL_CMD_DATA;
+ hdr->src_node_id = src->this_port.node_id;
+ hdr->src_port_id = src->this_port.port_id;
+ hdr->size = pkt->length;
+ hdr->control_flag = 0;
+ hdr->dst_node_id = rport_ptr->node_id;
+ hdr->dst_port_id = rport_ptr->port_id;
+
+ ret = ipc_router_tx_wait(src, rport_ptr, &set_confirm_rx, timeout);
+ if (ret < 0)
+ return ret;
+ if (set_confirm_rx)
+ hdr->control_flag |= CONTROL_FLAG_CONFIRM_RX;
+
+ if (hdr->dst_node_id == IPC_ROUTER_NID_LOCAL) {
+ ipc_router_log_msg(local_log_ctx,
+ IPC_ROUTER_LOG_EVENT_TX, pkt, hdr, src,
+ rport_ptr);
+ ret = loopback_data(src, hdr->dst_port_id, pkt);
+ return ret;
+ }
+
+ rt_entry = ipc_router_get_rtentry_ref(hdr->dst_node_id);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: Remote node %d not up\n",
+ __func__, hdr->dst_node_id);
+ return -ENODEV;
+ }
+ down_read(&rt_entry->lock_lha4);
+ xprt_info = rt_entry->xprt_info;
+ ret = ipc_router_get_xprt_info_ref(xprt_info);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Abort invalid xprt\n", __func__);
+ up_read(&rt_entry->lock_lha4);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ return ret;
+ }
+ ret = prepend_header(pkt, xprt_info);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Prepend Header failed\n", __func__);
+ goto out_write_pkt;
+ }
+ xprt_option = xprt_info->xprt->get_option(xprt_info->xprt);
+ if (!(xprt_option & FRAG_PKT_WRITE_ENABLE)) {
+ ret = defragment_pkt(pkt);
+ if (ret < 0)
+ goto out_write_pkt;
+ }
+
+ temp_skb = skb_peek_tail(pkt->pkt_fragment_q);
+ align_size = ALIGN_SIZE(pkt->length);
+ skb_put(temp_skb, align_size);
+ pkt->length += align_size;
+ mutex_lock(&xprt_info->tx_lock_lhb2);
+ ret = xprt_info->xprt->write(pkt, pkt->length, xprt_info->xprt);
+ mutex_unlock(&xprt_info->tx_lock_lhb2);
+out_write_pkt:
+ up_read(&rt_entry->lock_lha4);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Write on XPRT failed\n", __func__);
+ ipc_router_log_msg(xprt_info->log_ctx,
+ IPC_ROUTER_LOG_EVENT_TX_ERR, pkt, hdr, src,
+ rport_ptr);
+
+ ipc_router_put_xprt_info_ref(xprt_info);
+ return ret;
+ }
+ update_comm_mode_info(&src->mode_info, xprt_info);
+ ipc_router_log_msg(xprt_info->log_ctx,
+ IPC_ROUTER_LOG_EVENT_TX, pkt, hdr, src, rport_ptr);
+
+ ipc_router_put_xprt_info_ref(xprt_info);
+ return hdr->size;
+}
+
+int msm_ipc_router_send_to(struct msm_ipc_port *src,
+ struct sk_buff_head *data,
+ struct msm_ipc_addr *dest,
+ long timeout)
+{
+ u32 dst_node_id = 0, dst_port_id = 0;
+ struct msm_ipc_server *server;
+ struct msm_ipc_server_port *server_port;
+ struct msm_ipc_router_remote_port *rport_ptr = NULL;
+ struct msm_ipc_router_remote_port *src_rport_ptr = NULL;
+ struct rr_packet *pkt;
+ int ret;
+
+ if (!src || !data || !dest) {
+ IPC_RTR_ERR("%s: Invalid Parameters\n", __func__);
+ return -EINVAL;
+ }
+
+ /* Resolve address */
+ if (dest->addrtype == MSM_IPC_ADDR_ID) {
+ dst_node_id = dest->addr.port_addr.node_id;
+ dst_port_id = dest->addr.port_addr.port_id;
+ } else if (dest->addrtype == MSM_IPC_ADDR_NAME) {
+ server =
+ ipc_router_get_server_ref(dest->addr.port_name.service,
+ dest->addr.port_name.instance,
+ 0, 0);
+ if (!server) {
+ IPC_RTR_ERR("%s: Destination not reachable\n",
+ __func__);
+ return -ENODEV;
+ }
+ server_port = list_first_entry(&server->server_port_list,
+ struct msm_ipc_server_port,
+ list);
+ dst_node_id = server_port->server_addr.node_id;
+ dst_port_id = server_port->server_addr.port_id;
+ kref_put(&server->ref, ipc_router_release_server);
+ }
+
+ rport_ptr = ipc_router_get_rport_ref(dst_node_id, dst_port_id);
+ if (!rport_ptr) {
+ IPC_RTR_ERR("%s: Remote port not found\n", __func__);
+ return -ENODEV;
+ }
+
+ if (src->check_send_permissions) {
+ ret = src->check_send_permissions(rport_ptr->sec_rule);
+ if (ret <= 0) {
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ IPC_RTR_ERR("%s: permission failure for %s\n",
+ __func__, current->comm);
+ return -EPERM;
+ }
+ }
+
+ if (dst_node_id == IPC_ROUTER_NID_LOCAL && !src->rport_info) {
+ src_rport_ptr = ipc_router_create_rport(IPC_ROUTER_NID_LOCAL,
+ src->this_port.port_id,
+ NULL);
+ if (!src_rport_ptr) {
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ IPC_RTR_ERR("%s: RPort creation failed\n", __func__);
+ return -ENOMEM;
+ }
+ mutex_lock(&src->port_lock_lhc3);
+ src->rport_info = src_rport_ptr;
+ mutex_unlock(&src->port_lock_lhc3);
+ kref_put(&src_rport_ptr->ref, ipc_router_release_rport);
+ }
+
+ pkt = create_pkt(data);
+ if (!pkt) {
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ IPC_RTR_ERR("%s: Pkt creation failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ ret = msm_ipc_router_write_pkt(src, rport_ptr, pkt, timeout);
+ kref_put(&rport_ptr->ref, ipc_router_release_rport);
+ if (ret < 0)
+ pkt->pkt_fragment_q = NULL;
+ release_pkt(pkt);
+
+ return ret;
+}
+
+int msm_ipc_router_send_msg(struct msm_ipc_port *src,
+ struct msm_ipc_addr *dest,
+ void *data, unsigned int data_len)
+{
+ struct sk_buff_head *out_skb_head;
+ int ret;
+
+ out_skb_head = msm_ipc_router_buf_to_skb(data, data_len);
+ if (!out_skb_head) {
+ IPC_RTR_ERR("%s: SKB conversion failed\n", __func__);
+ return -EFAULT;
+ }
+
+ ret = msm_ipc_router_send_to(src, out_skb_head, dest, 0);
+ if (ret < 0) {
+ if (ret != -EAGAIN)
+ IPC_RTR_ERR(
+ "%s: msm_ipc_router_send_to failed - ret: %d\n",
+ __func__, ret);
+ msm_ipc_router_free_skb(out_skb_head);
+ return ret;
+ }
+ return 0;
+}
+
+/**
+ * msm_ipc_router_send_resume_tx() - Send Resume_Tx message
+ * @data: Pointer to received data packet that has confirm_rx bit set
+ *
+ * @return: On success, the number of bytes transferred is returned, else a
+ * standard Linux error code is returned.
+ *
+ * This function sends the Resume_Tx event to the remote node that
+ * sent the data with the confirm_rx field set. In a multi-hop
+ * scenario, it also ensures that the resume_tx event reaches the
+ * correct destination node_id.
+ */
+static int msm_ipc_router_send_resume_tx(void *data)
+{
+ union rr_control_msg msg;
+ struct rr_header_v1 *hdr = (struct rr_header_v1 *)data;
+ struct msm_ipc_routing_table_entry *rt_entry;
+ int ret;
+
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = IPC_ROUTER_CTRL_CMD_RESUME_TX;
+ msg.cli.node_id = hdr->dst_node_id;
+ msg.cli.port_id = hdr->dst_port_id;
+ rt_entry = ipc_router_get_rtentry_ref(hdr->src_node_id);
+ if (!rt_entry) {
+ IPC_RTR_ERR("%s: %d Node is not present", __func__,
+ hdr->src_node_id);
+ return -ENODEV;
+ }
+ ret = ipc_router_get_xprt_info_ref(rt_entry->xprt_info);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Abort invalid xprt\n", __func__);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ return ret;
+ }
+ ret = ipc_router_send_ctl_msg(rt_entry->xprt_info, &msg,
+ hdr->src_node_id);
+ ipc_router_put_xprt_info_ref(rt_entry->xprt_info);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+ if (ret < 0)
+ IPC_RTR_ERR(
+ "%s: Send Resume_Tx Failed SRC_NODE: %d SRC_PORT: %d DEST_NODE: %d",
+ __func__, hdr->dst_node_id, hdr->dst_port_id,
+ hdr->src_node_id);
+
+ return ret;
+}
+
+int msm_ipc_router_read(struct msm_ipc_port *port_ptr,
+ struct rr_packet **read_pkt,
+ size_t buf_len)
+{
+ struct rr_packet *pkt;
+
+ if (!port_ptr || !read_pkt)
+ return -EINVAL;
+
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ if (list_empty(&port_ptr->port_rx_q)) {
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ return -EAGAIN;
+ }
+
+ pkt = list_first_entry(&port_ptr->port_rx_q, struct rr_packet, list);
+ if ((buf_len) && (pkt->hdr.size > buf_len)) {
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ return -ETOOSMALL;
+ }
+ list_del(&pkt->list);
+ if (list_empty(&port_ptr->port_rx_q))
+ __pm_relax(port_ptr->port_rx_ws);
+ *read_pkt = pkt;
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ if (pkt->hdr.control_flag & CONTROL_FLAG_CONFIRM_RX)
+ msm_ipc_router_send_resume_tx(&pkt->hdr);
+
+ return pkt->length;
+}
+
+/**
+ * msm_ipc_router_rx_data_wait() - Wait for new message destined to a local
+ * port.
+ * @port_ptr: Pointer to the local port
+ * @timeout: < 0 timeout indicates infinite wait till a message arrives.
+ * > 0 timeout indicates the wait time.
+ * 0 indicates that we do not wait.
+ * @return: 0 if there are pending messages to read,
+ * standard Linux error code otherwise.
+ *
+ * Checks for the availability of messages that are destined to a local port.
+ * If no messages are present then waits as per @timeout.
+ */
+int msm_ipc_router_rx_data_wait(struct msm_ipc_port *port_ptr, long timeout)
+{
+ int ret = 0;
+
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ while (list_empty(&port_ptr->port_rx_q)) {
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+ if (timeout < 0) {
+ ret = wait_event_interruptible(
+ port_ptr->port_rx_wait_q,
+ !list_empty(&port_ptr->port_rx_q));
+ if (ret)
+ return ret;
+ } else if (timeout > 0) {
+ timeout = wait_event_interruptible_timeout(
+ port_ptr->port_rx_wait_q,
+ !list_empty(&port_ptr->port_rx_q),
+ timeout);
+ if (timeout < 0)
+ return -EFAULT;
+ }
+ if (timeout == 0)
+ return -ENOMSG;
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ }
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+
+ return ret;
+}
+
+/**
+ * msm_ipc_router_recv_from() - Receive messages destined to a local port.
+ * @port_ptr: Pointer to the local port
+ * @pkt : Pointer to the router-to-router packet
+ * @src: Pointer to local port address
+ * @timeout: < 0 timeout indicates infinite wait until a message arrives.
+ *           > 0 timeout indicates the wait time.
+ *           0 indicates that we do not wait.
+ * @return: = Number of bytes read (on a successful read operation).
+ *          = -ENOMSG (if there are no pending messages and timeout is 0).
+ *          = -EINVAL (if either of the arguments, port_ptr or pkt, is invalid).
+ *          = -EFAULT (if there are no pending messages when timeout is > 0
+ *            and wait_event_interruptible_timeout() has returned a negative
+ *            value).
+ *          = -ERESTARTSYS (if there are no pending messages when timeout
+ *            is < 0 and wait_event_interruptible() was interrupted by a
+ *            signal).
+ *
+ * This function reads the messages that are destined for a local port. It
+ * is used by modules that exist within the kernel and use IPC Router for
+ * transport. The function checks whether any messages have already been
+ * received. If so, it reads them; otherwise it waits as per the timeout
+ * value. On a successful read, the return value of the function indicates
+ * the number of bytes that were read.
+ */
+int msm_ipc_router_recv_from(struct msm_ipc_port *port_ptr,
+ struct rr_packet **pkt,
+ struct msm_ipc_addr *src,
+ long timeout)
+{
+ int ret, data_len, align_size;
+ struct sk_buff *temp_skb;
+ struct rr_header_v1 *hdr = NULL;
+
+ if (!port_ptr || !pkt) {
+ IPC_RTR_ERR("%s: Invalid pointers being passed\n", __func__);
+ return -EINVAL;
+ }
+
+ *pkt = NULL;
+
+ ret = msm_ipc_router_rx_data_wait(port_ptr, timeout);
+ if (ret)
+ return ret;
+
+ ret = msm_ipc_router_read(port_ptr, pkt, 0);
+ if (ret <= 0 || !(*pkt))
+ return ret;
+
+ hdr = &((*pkt)->hdr);
+ if (src) {
+ src->addrtype = MSM_IPC_ADDR_ID;
+ src->addr.port_addr.node_id = hdr->src_node_id;
+ src->addr.port_addr.port_id = hdr->src_port_id;
+ }
+
+ data_len = hdr->size;
+ align_size = ALIGN_SIZE(data_len);
+ if (align_size) {
+ temp_skb = skb_peek_tail((*pkt)->pkt_fragment_q);
+ skb_trim(temp_skb, (temp_skb->len - align_size));
+ }
+ return data_len;
+}
+
+int msm_ipc_router_read_msg(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *src,
+ unsigned char **data,
+ unsigned int *len)
+{
+ struct rr_packet *pkt;
+ int ret;
+
+ ret = msm_ipc_router_recv_from(port_ptr, &pkt, src, 0);
+ if (ret < 0) {
+ if (ret != -ENOMSG)
+ IPC_RTR_ERR(
+ "%s: msm_ipc_router_recv_from failed - ret: %d\n",
+ __func__, ret);
+ return ret;
+ }
+
+ *data = msm_ipc_router_skb_to_buf(pkt->pkt_fragment_q, ret);
+ if (!(*data)) {
+ IPC_RTR_ERR("%s: Buf conversion failed\n", __func__);
+ release_pkt(pkt);
+ return -ENOMEM;
+ }
+
+ *len = ret;
+ release_pkt(pkt);
+ return 0;
+}
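+
+/*
+ * Illustrative usage (hypothetical caller, not part of this driver): on a
+ * successful msm_ipc_router_read_msg() call, *data points to a buffer
+ * allocated on the caller's behalf, which the caller is responsible for
+ * freeing with kfree() once done:
+ *
+ *	if (!msm_ipc_router_read_msg(port_ptr, &src, &data, &len)) {
+ *		process_msg(data, len);	// hypothetical consumer
+ *		kfree(data);
+ *	}
+ */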
+
+/**
+ * msm_ipc_router_create_port() - Create an IPC Router port/endpoint
+ * @notify: Callback function to notify any event on the port. It is called
+ *          with the event ID, any out-of-band data associated with the event
+ *          (and its size, if valid), and the private data registered during
+ *          the port creation.
+ * @priv: Private info to be passed when the notification is generated.
+ *
+ * @return: Pointer to the port on success, NULL on error.
+ */
+struct msm_ipc_port *msm_ipc_router_create_port(
+ void (*notify)(unsigned int event, void *oob_data,
+ size_t oob_data_len, void *priv),
+ void *priv)
+{
+ struct msm_ipc_port *port_ptr;
+ int ret;
+
+ ret = ipc_router_core_init();
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Error %d initializing IPC Router\n",
+ __func__, ret);
+ return NULL;
+ }
+
+ port_ptr = msm_ipc_router_create_raw_port(NULL, notify, priv);
+ if (!port_ptr)
+ IPC_RTR_ERR("%s: port_ptr alloc failed\n", __func__);
+
+ return port_ptr;
+}
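+
+/*
+ * Illustrative usage (hypothetical in-kernel client, not part of this
+ * driver): a port created with msm_ipc_router_create_port() is paired with
+ * msm_ipc_router_close_port() when it is no longer needed:
+ *
+ *	static void my_notify(unsigned int event, void *oob_data,
+ *			      size_t oob_data_len, void *priv)
+ *	{
+ *		pr_debug("IPC Router event %u\n", event);
+ *	}
+ *
+ *	port = msm_ipc_router_create_port(my_notify, my_priv);
+ *	if (!port)
+ *		return -ENOMEM;
+ *	...
+ *	msm_ipc_router_close_port(port);
+ */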
+
+int msm_ipc_router_close_port(struct msm_ipc_port *port_ptr)
+{
+ union rr_control_msg msg;
+ struct msm_ipc_server *server;
+ struct msm_ipc_router_remote_port *rport_ptr;
+
+ if (!port_ptr)
+ return -EINVAL;
+
+ if (port_ptr->type == SERVER_PORT || port_ptr->type == CLIENT_PORT) {
+ down_write(&local_ports_lock_lhc2);
+ list_del(&port_ptr->list);
+ up_write(&local_ports_lock_lhc2);
+
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ rport_ptr = (struct msm_ipc_router_remote_port *)
+ port_ptr->rport_info;
+ port_ptr->rport_info = NULL;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ if (rport_ptr) {
+ ipc_router_reset_conn(rport_ptr);
+ ipc_router_destroy_rport(rport_ptr);
+ }
+
+ if (port_ptr->type == SERVER_PORT) {
+ memset(&msg, 0, sizeof(msg));
+ msg.cmd = IPC_ROUTER_CTRL_CMD_REMOVE_SERVER;
+ msg.srv.service = port_ptr->port_name.service;
+ msg.srv.instance = port_ptr->port_name.instance;
+ msg.srv.node_id = port_ptr->this_port.node_id;
+ msg.srv.port_id = port_ptr->this_port.port_id;
+ broadcast_ctl_msg(&msg);
+ }
+
+ /* Server port could have been a client port earlier.
+ * Send REMOVE_CLIENT message in either case.
+ */
+ msm_ipc_router_send_remove_client(&port_ptr->mode_info,
+ port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ } else if (port_ptr->type == CONTROL_PORT) {
+ down_write(&control_ports_lock_lha5);
+ list_del(&port_ptr->list);
+ up_write(&control_ports_lock_lha5);
+ } else if (port_ptr->type == IRSC_PORT) {
+ down_write(&local_ports_lock_lhc2);
+ list_del(&port_ptr->list);
+ up_write(&local_ports_lock_lhc2);
+ signal_irsc_completion();
+ }
+
+ if (port_ptr->type == SERVER_PORT) {
+ server = ipc_router_get_server_ref(
+ port_ptr->port_name.service,
+ port_ptr->port_name.instance,
+ port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ if (server) {
+ kref_put(&server->ref, ipc_router_release_server);
+ ipc_router_destroy_server(server,
+ port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ }
+ }
+
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ rport_ptr = (struct msm_ipc_router_remote_port *)port_ptr->rport_info;
+ port_ptr->rport_info = NULL;
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ if (rport_ptr)
+ ipc_router_destroy_rport(rport_ptr);
+
+ kref_put(&port_ptr->ref, ipc_router_release_port);
+ return 0;
+}
+
+int msm_ipc_router_get_curr_pkt_size(struct msm_ipc_port *port_ptr)
+{
+ struct rr_packet *pkt;
+ int rc = 0;
+
+ if (!port_ptr)
+ return -EINVAL;
+
+ mutex_lock(&port_ptr->port_rx_q_lock_lhc3);
+ if (!list_empty(&port_ptr->port_rx_q)) {
+ pkt = list_first_entry(&port_ptr->port_rx_q, struct rr_packet,
+ list);
+ rc = pkt->hdr.size;
+ }
+ mutex_unlock(&port_ptr->port_rx_q_lock_lhc3);
+
+ return rc;
+}
+
+int msm_ipc_router_bind_control_port(struct msm_ipc_port *port_ptr)
+{
+ if (unlikely(!port_ptr || port_ptr->type != CLIENT_PORT))
+ return -EINVAL;
+
+ down_write(&local_ports_lock_lhc2);
+ list_del(&port_ptr->list);
+ up_write(&local_ports_lock_lhc2);
+ port_ptr->type = CONTROL_PORT;
+ down_write(&control_ports_lock_lha5);
+ list_add_tail(&port_ptr->list, &control_ports);
+ up_write(&control_ports_lock_lha5);
+
+ return 0;
+}
+
+int msm_ipc_router_lookup_server_name(struct msm_ipc_port_name *srv_name,
+ struct msm_ipc_server_info *srv_info,
+ int num_entries_in_array, u32 lookup_mask)
+{
+ struct msm_ipc_server *server;
+ struct msm_ipc_server_port *server_port;
+ int key, i = 0; /* i = number of matching entries found */
+
+ if (!srv_name) {
+ IPC_RTR_ERR("%s: Invalid srv_name\n", __func__);
+ return -EINVAL;
+ }
+
+ if (num_entries_in_array && !srv_info) {
+ IPC_RTR_ERR("%s: srv_info NULL\n", __func__);
+ return -EINVAL;
+ }
+
+ down_read(&server_list_lock_lha2);
+ key = (srv_name->service & (SRV_HASH_SIZE - 1));
+ list_for_each_entry(server, &server_list[key], list) {
+ if ((server->name.service != srv_name->service) ||
+ ((server->name.instance & lookup_mask) !=
+ srv_name->instance))
+ continue;
+
+ list_for_each_entry(server_port, &server->server_port_list,
+ list) {
+ if (i < num_entries_in_array) {
+ srv_info[i].node_id =
+ server_port->server_addr.node_id;
+ srv_info[i].port_id =
+ server_port->server_addr.port_id;
+ srv_info[i].service = server->name.service;
+ srv_info[i].instance = server->name.instance;
+ }
+ i++;
+ }
+ }
+ up_read(&server_list_lock_lha2);
+
+ return i;
+}
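+
+/*
+ * Illustrative usage (hypothetical caller, not part of this driver): since
+ * msm_ipc_router_lookup_server_name() returns the total number of matching
+ * entries even when the array is too small, a caller can probe with
+ * num_entries_in_array == 0 to size the array first:
+ *
+ *	n = msm_ipc_router_lookup_server_name(&name, NULL, 0, ALL_INSTANCE);
+ *	if (n > 0) {
+ *		info = kcalloc(n, sizeof(*info), GFP_KERNEL);
+ *		if (info)
+ *			n = msm_ipc_router_lookup_server_name(&name, info, n,
+ *							      ALL_INSTANCE);
+ *	}
+ */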
+
+int msm_ipc_router_close(void)
+{
+ struct msm_ipc_router_xprt_info *xprt_info, *tmp_xprt_info;
+
+ down_write(&xprt_info_list_lock_lha5);
+ list_for_each_entry_safe(xprt_info, tmp_xprt_info,
+ &xprt_info_list, list) {
+ xprt_info->xprt->close(xprt_info->xprt);
+ list_del(&xprt_info->list);
+ kfree(xprt_info);
+ }
+ up_write(&xprt_info_list_lock_lha5);
+ return 0;
+}
+
+/**
+ * pil_vote_load_worker() - Process vote to load the modem
+ *
+ * @work: Work item to process
+ *
+ * This function is called to process votes to load the modem that have been
+ * queued by msm_ipc_load_default_node().
+ */
+static void pil_vote_load_worker(struct work_struct *work)
+{
+ struct pil_vote_info *vote_info;
+
+ vote_info = container_of(work, struct pil_vote_info, load_work);
+ if (strlen(default_peripheral)) {
+ vote_info->pil_handle = subsystem_get(default_peripheral);
+ if (IS_ERR(vote_info->pil_handle)) {
+ IPC_RTR_ERR("%s: Failed to load %s\n",
+ __func__, default_peripheral);
+ vote_info->pil_handle = NULL;
+ }
+ } else {
+ vote_info->pil_handle = NULL;
+ }
+}
+
+/**
+ * pil_vote_unload_worker() - Process vote to unload the modem
+ *
+ * @work: Work item to process
+ *
+ * This function is called to process votes to unload the modem that have been
+ * queued by msm_ipc_unload_default_node().
+ */
+static void pil_vote_unload_worker(struct work_struct *work)
+{
+ struct pil_vote_info *vote_info;
+
+ vote_info = container_of(work, struct pil_vote_info, unload_work);
+
+ if (vote_info->pil_handle) {
+ subsystem_put(vote_info->pil_handle);
+ vote_info->pil_handle = NULL;
+ }
+ kfree(vote_info);
+}
+
+/**
+ * msm_ipc_load_default_node() - Queue a vote to load the modem.
+ *
+ * @return: PIL vote info structure on success, NULL on failure.
+ *
+ * This function places a work item that loads the modem on the
+ * single-threaded workqueue used for processing PIL votes to load
+ * or unload the modem.
+ */
+void *msm_ipc_load_default_node(void)
+{
+ struct pil_vote_info *vote_info;
+
+ vote_info = kmalloc(sizeof(*vote_info), GFP_KERNEL);
+ if (!vote_info)
+ return vote_info;
+
+ INIT_WORK(&vote_info->load_work, pil_vote_load_worker);
+ queue_work(msm_ipc_router_workqueue, &vote_info->load_work);
+
+ return vote_info;
+}
+
+/**
+ * msm_ipc_unload_default_node() - Queue a vote to unload the modem.
+ *
+ * @pil_vote: PIL vote info structure, containing the PIL handle
+ * and work structure.
+ *
+ * This function places a work item that unloads the modem on the
+ * single-threaded workqueue used for processing PIL votes to load
+ * or unload the modem.
+ */
+void msm_ipc_unload_default_node(void *pil_vote)
+{
+ struct pil_vote_info *vote_info;
+
+ if (pil_vote) {
+ vote_info = (struct pil_vote_info *)pil_vote;
+ INIT_WORK(&vote_info->unload_work, pil_vote_unload_worker);
+ queue_work(msm_ipc_router_workqueue, &vote_info->unload_work);
+ }
+}
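+
+/*
+ * Illustrative usage (hypothetical caller, not part of this driver): each
+ * load vote returned by msm_ipc_load_default_node() is expected to be
+ * balanced by a matching unload vote, which also frees the vote info:
+ *
+ *	void *vote = msm_ipc_load_default_node();
+ *	...
+ *	msm_ipc_unload_default_node(vote);
+ */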
+
+#if defined(CONFIG_DEBUG_FS)
+static void dump_routing_table(struct seq_file *s)
+{
+ int j;
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ seq_printf(s, "%-10s|%-20s|%-10s|\n", "Node Id", "XPRT Name",
+ "Next Hop");
+ seq_puts(s, "----------------------------------------------\n");
+ for (j = 0; j < RT_HASH_SIZE; j++) {
+ down_read(&routing_table_lock_lha3);
+ list_for_each_entry(rt_entry, &routing_table[j], list) {
+ down_read(&rt_entry->lock_lha4);
+ seq_printf(s, "0x%08x|", rt_entry->node_id);
+ if (rt_entry->node_id == IPC_ROUTER_NID_LOCAL)
+ seq_printf(s, "%-20s|0x%08x|\n", "Loopback",
+ rt_entry->node_id);
+ else
+ seq_printf(s, "%-20s|0x%08x|\n",
+ rt_entry->xprt_info->xprt->name,
+ rt_entry->node_id);
+ up_read(&rt_entry->lock_lha4);
+ }
+ up_read(&routing_table_lock_lha3);
+ }
+}
+
+static void dump_xprt_info(struct seq_file *s)
+{
+ struct msm_ipc_router_xprt_info *xprt_info;
+
+ seq_printf(s, "%-20s|%-10s|%-12s|%-15s|\n", "XPRT Name", "Link ID",
+ "Initialized", "Remote Node Id");
+ seq_puts(s, "------------------------------------------------------------\n");
+ down_read(&xprt_info_list_lock_lha5);
+ list_for_each_entry(xprt_info, &xprt_info_list, list)
+ seq_printf(s, "%-20s|0x%08x|%-12s|0x%08x|\n",
+ xprt_info->xprt->name, xprt_info->xprt->link_id,
+ (xprt_info->initialized ? "Y" : "N"),
+ xprt_info->remote_node_id);
+ up_read(&xprt_info_list_lock_lha5);
+}
+
+static void dump_servers(struct seq_file *s)
+{
+ int j;
+ struct msm_ipc_server *server;
+ struct msm_ipc_server_port *server_port;
+
+ seq_printf(s, "%-11s|%-11s|%-11s|%-11s|\n", "Service", "Instance",
+ "Node_id", "Port_id");
+ seq_puts(s, "------------------------------------------------------------\n");
+ down_read(&server_list_lock_lha2);
+ for (j = 0; j < SRV_HASH_SIZE; j++) {
+ list_for_each_entry(server, &server_list[j], list) {
+ list_for_each_entry(server_port,
+ &server->server_port_list,
+ list)
+ seq_printf(s, "0x%08x |0x%08x |0x%08x |0x%08x |\n",
+ server->name.service,
+ server->name.instance,
+ server_port->server_addr.node_id,
+ server_port->server_addr.port_id);
+ }
+ }
+ up_read(&server_list_lock_lha2);
+}
+
+static void dump_remote_ports(struct seq_file *s)
+{
+ int j, k;
+ struct msm_ipc_router_remote_port *rport_ptr;
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ seq_printf(s, "%-11s|%-11s|%-10s|\n", "Node_id", "Port_id",
+ "Quota_cnt");
+ seq_puts(s, "------------------------------------------------------------\n");
+ for (j = 0; j < RT_HASH_SIZE; j++) {
+ down_read(&routing_table_lock_lha3);
+ list_for_each_entry(rt_entry, &routing_table[j], list) {
+ down_read(&rt_entry->lock_lha4);
+ for (k = 0; k < RP_HASH_SIZE; k++) {
+ list_for_each_entry
+ (rport_ptr,
+ &rt_entry->remote_port_list[k],
+ list)
+ seq_printf(s, "0x%08x |0x%08x |0x%08x|\n",
+ rport_ptr->node_id,
+ rport_ptr->port_id,
+ rport_ptr->tx_quota_cnt);
+ }
+ up_read(&rt_entry->lock_lha4);
+ }
+ up_read(&routing_table_lock_lha3);
+ }
+}
+
+static void dump_control_ports(struct seq_file *s)
+{
+ struct msm_ipc_port *port_ptr;
+
+ seq_printf(s, "%-11s|%-11s|\n", "Node_id", "Port_id");
+ seq_puts(s, "------------------------------------------------------------\n");
+ down_read(&control_ports_lock_lha5);
+ list_for_each_entry(port_ptr, &control_ports, list)
+ seq_printf(s, "0x%08x |0x%08x |\n", port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ up_read(&control_ports_lock_lha5);
+}
+
+static void dump_local_ports(struct seq_file *s)
+{
+ int j;
+ struct msm_ipc_port *port_ptr;
+
+ seq_printf(s, "%-11s|%-11s|\n", "Node_id", "Port_id");
+ seq_puts(s, "------------------------------------------------------------\n");
+ down_read(&local_ports_lock_lhc2);
+ for (j = 0; j < LP_HASH_SIZE; j++) {
+ list_for_each_entry(port_ptr, &local_ports[j], list) {
+ mutex_lock(&port_ptr->port_lock_lhc3);
+ seq_printf(s, "0x%08x |0x%08x |\n",
+ port_ptr->this_port.node_id,
+ port_ptr->this_port.port_id);
+ mutex_unlock(&port_ptr->port_lock_lhc3);
+ }
+ }
+ up_read(&local_ports_lock_lhc2);
+}
+
+static int debugfs_show(struct seq_file *s, void *data)
+{
+ void (*show)(struct seq_file *) = s->private;
+
+ show(s);
+ return 0;
+}
+
+static int debug_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, debugfs_show, inode->i_private);
+}
+
+static const struct file_operations debug_ops = {
+ .open = debug_open,
+ .release = single_release,
+ .read = seq_read,
+ .llseek = seq_lseek,
+};
+
+static void debug_create(const char *name, struct dentry *dent,
+ void (*show)(struct seq_file *))
+{
+ debugfs_create_file(name, 0444, dent, show, &debug_ops);
+}
+
+static void debugfs_init(void)
+{
+ struct dentry *dent;
+
+ dent = debugfs_create_dir("msm_ipc_router", NULL);
+ if (IS_ERR(dent))
+ return;
+
+ debug_create("dump_local_ports", dent, dump_local_ports);
+ debug_create("dump_remote_ports", dent, dump_remote_ports);
+ debug_create("dump_control_ports", dent, dump_control_ports);
+ debug_create("dump_servers", dent, dump_servers);
+ debug_create("dump_xprt_info", dent, dump_xprt_info);
+ debug_create("dump_routing_table", dent, dump_routing_table);
+}
+
+#else
+static void debugfs_init(void) {}
+#endif
+
+/**
+ * ipc_router_create_log_ctx() - Create and add the log context based on
+ * transport
+ * @name: subsystem name
+ *
+ * Return: a reference to the log context created
+ *
+ * This function creates an IPC log context based on the transport and adds
+ * it to a global list. The log context can be reused from the list in case
+ * of a subsystem restart.
+ */
+static void *ipc_router_create_log_ctx(char *name)
+{
+ struct ipc_rtr_log_ctx *sub_log_ctx;
+
+ sub_log_ctx = kmalloc(sizeof(*sub_log_ctx), GFP_KERNEL);
+ if (!sub_log_ctx)
+ return NULL;
+ sub_log_ctx->log_ctx = ipc_log_context_create(
+ IPC_RTR_INFO_PAGES, name, 0);
+ if (!sub_log_ctx->log_ctx) {
+ IPC_RTR_ERR("%s: Unable to create IPC logging for [%s]",
+ __func__, name);
+ kfree(sub_log_ctx);
+ return NULL;
+ }
+ strlcpy(sub_log_ctx->log_ctx_name, name, LOG_CTX_NAME_LEN);
+ INIT_LIST_HEAD(&sub_log_ctx->list);
+ list_add_tail(&sub_log_ctx->list, &log_ctx_list);
+ return sub_log_ctx->log_ctx;
+}
+
+static void ipc_router_log_ctx_init(void)
+{
+ mutex_lock(&log_ctx_list_lock_lha0);
+ local_log_ctx = ipc_router_create_log_ctx("local_IPCRTR");
+ mutex_unlock(&log_ctx_list_lock_lha0);
+}
+
+/**
+ * ipc_router_get_log_ctx() - Retrieves the ipc log context based on subsystem
+ * name.
+ * @sub_name: subsystem name
+ *
+ * Return: a reference to the log context
+ */
+static void *ipc_router_get_log_ctx(char *sub_name)
+{
+ void *log_ctx = NULL;
+ struct ipc_rtr_log_ctx *temp_log_ctx;
+
+ mutex_lock(&log_ctx_list_lock_lha0);
+ list_for_each_entry(temp_log_ctx, &log_ctx_list, list)
+ if (!strcmp(temp_log_ctx->log_ctx_name, sub_name)) {
+ log_ctx = temp_log_ctx->log_ctx;
+ mutex_unlock(&log_ctx_list_lock_lha0);
+ return log_ctx;
+ }
+ log_ctx = ipc_router_create_log_ctx(sub_name);
+ mutex_unlock(&log_ctx_list_lock_lha0);
+
+ return log_ctx;
+}
+
+/**
+ * ipc_router_get_xprt_info_ref() - Get a reference to the xprt_info structure
+ * @xprt_info: pointer to the xprt_info.
+ *
+ * @return: Zero on success, -ENODEV on failure.
+ *
+ * This function is used to obtain a reference to the xprt_info structure
+ * corresponding to the requested @xprt_info pointer.
+ */
+static int ipc_router_get_xprt_info_ref(
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ int ret = -ENODEV;
+ struct msm_ipc_router_xprt_info *tmp_xprt_info;
+
+ if (!xprt_info)
+ return 0;
+
+ down_read(&xprt_info_list_lock_lha5);
+ list_for_each_entry(tmp_xprt_info, &xprt_info_list, list) {
+ if (tmp_xprt_info == xprt_info) {
+ kref_get(&xprt_info->ref);
+ ret = 0;
+ break;
+ }
+ }
+ up_read(&xprt_info_list_lock_lha5);
+
+ return ret;
+}
+
+/**
+ * ipc_router_put_xprt_info_ref() - Put a reference to the xprt_info structure
+ * @xprt_info: pointer to the xprt_info.
+ *
+ * This function is used to put the reference to the xprt_info structure
+ * corresponding to the requested @xprt_info pointer.
+ */
+static void ipc_router_put_xprt_info_ref(
+ struct msm_ipc_router_xprt_info *xprt_info)
+{
+ if (xprt_info)
+ kref_put(&xprt_info->ref, ipc_router_release_xprt_info_ref);
+}
+
+/**
+ * ipc_router_release_xprt_info_ref() - release the xprt_info last reference
+ * @ref: Reference to the xprt_info structure.
+ *
+ * This function is called when all references to the xprt_info structure
+ * are released.
+ */
+static void ipc_router_release_xprt_info_ref(struct kref *ref)
+{
+ struct msm_ipc_router_xprt_info *xprt_info =
+ container_of(ref, struct msm_ipc_router_xprt_info, ref);
+
+ complete_all(&xprt_info->ref_complete);
+}
+
+static int msm_ipc_router_add_xprt(struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_xprt_info *xprt_info;
+
+ xprt_info = kmalloc(sizeof(*xprt_info), GFP_KERNEL);
+ if (!xprt_info)
+ return -ENOMEM;
+
+ xprt_info->xprt = xprt;
+ xprt_info->initialized = 0;
+ xprt_info->remote_node_id = -1;
+ INIT_LIST_HEAD(&xprt_info->pkt_list);
+ mutex_init(&xprt_info->rx_lock_lhb2);
+ mutex_init(&xprt_info->tx_lock_lhb2);
+ wakeup_source_init(&xprt_info->ws, xprt->name);
+ xprt_info->need_len = 0;
+ xprt_info->abort_data_read = 0;
+ INIT_WORK(&xprt_info->read_data, do_read_data);
+ INIT_LIST_HEAD(&xprt_info->list);
+ kref_init(&xprt_info->ref);
+ init_completion(&xprt_info->ref_complete);
+
+ xprt_info->workqueue = create_singlethread_workqueue(xprt->name);
+ if (!xprt_info->workqueue) {
+ kfree(xprt_info);
+ return -ENOMEM;
+ }
+
+ xprt_info->log_ctx = ipc_router_get_log_ctx(xprt->name);
+
+ if (!strcmp(xprt->name, "msm_ipc_router_loopback_xprt")) {
+ xprt_info->remote_node_id = IPC_ROUTER_NID_LOCAL;
+ xprt_info->initialized = 1;
+ }
+
+ IPC_RTR_INFO(xprt_info->log_ctx, "Adding xprt: [%s]\n", xprt->name);
+ down_write(&xprt_info_list_lock_lha5);
+ list_add_tail(&xprt_info->list, &xprt_info_list);
+ up_write(&xprt_info_list_lock_lha5);
+
+ down_write(&routing_table_lock_lha3);
+ if (!routing_table_inited) {
+ init_routing_table();
+ routing_table_inited = 1;
+ }
+ up_write(&routing_table_lock_lha3);
+
+ xprt->priv = xprt_info;
+
+ return 0;
+}
+
+static void msm_ipc_router_remove_xprt(struct msm_ipc_router_xprt *xprt)
+{
+ struct msm_ipc_router_xprt_info *xprt_info;
+ struct rr_packet *temp_pkt, *pkt;
+
+ if (xprt && xprt->priv) {
+ xprt_info = xprt->priv;
+
+ IPC_RTR_INFO(xprt_info->log_ctx, "Removing xprt: [%s]\n",
+ xprt->name);
+ mutex_lock(&xprt_info->rx_lock_lhb2);
+ xprt_info->abort_data_read = 1;
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+ flush_workqueue(xprt_info->workqueue);
+ destroy_workqueue(xprt_info->workqueue);
+ mutex_lock(&xprt_info->rx_lock_lhb2);
+ list_for_each_entry_safe(pkt, temp_pkt,
+ &xprt_info->pkt_list, list) {
+ list_del(&pkt->list);
+ release_pkt(pkt);
+ }
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+
+ down_write(&xprt_info_list_lock_lha5);
+ list_del(&xprt_info->list);
+ up_write(&xprt_info_list_lock_lha5);
+
+ msm_ipc_cleanup_routing_table(xprt_info);
+
+ wakeup_source_trash(&xprt_info->ws);
+
+ ipc_router_put_xprt_info_ref(xprt_info);
+ wait_for_completion(&xprt_info->ref_complete);
+
+ xprt->priv = NULL;
+ kfree(xprt_info);
+ }
+}
+
+struct msm_ipc_router_xprt_work {
+ struct msm_ipc_router_xprt *xprt;
+ struct work_struct work;
+};
+
+static void xprt_open_worker(struct work_struct *work)
+{
+ struct msm_ipc_router_xprt_work *xprt_work =
+ container_of(work, struct msm_ipc_router_xprt_work, work);
+
+ msm_ipc_router_add_xprt(xprt_work->xprt);
+ kfree(xprt_work);
+}
+
+static void xprt_close_worker(struct work_struct *work)
+{
+ struct msm_ipc_router_xprt_work *xprt_work =
+ container_of(work, struct msm_ipc_router_xprt_work, work);
+
+ msm_ipc_router_remove_xprt(xprt_work->xprt);
+ xprt_work->xprt->sft_close_done(xprt_work->xprt);
+ kfree(xprt_work);
+}
+
+void msm_ipc_router_xprt_notify(struct msm_ipc_router_xprt *xprt,
+ unsigned int event,
+ void *data)
+{
+ struct msm_ipc_router_xprt_info *xprt_info = xprt->priv;
+ struct msm_ipc_router_xprt_work *xprt_work;
+ struct rr_packet *pkt;
+ int ret;
+
+ ret = ipc_router_core_init();
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Error %d initializing IPC Router\n",
+ __func__, ret);
+ return;
+ }
+
+ switch (event) {
+ case IPC_ROUTER_XPRT_EVENT_OPEN:
+ xprt_work = kmalloc(sizeof(*xprt_work), GFP_ATOMIC);
+ if (xprt_work) {
+ xprt_work->xprt = xprt;
+ INIT_WORK(&xprt_work->work, xprt_open_worker);
+ queue_work(msm_ipc_router_workqueue, &xprt_work->work);
+ } else {
+ IPC_RTR_ERR(
+ "%s: malloc failure - Couldn't notify OPEN event",
+ __func__);
+ }
+ break;
+
+ case IPC_ROUTER_XPRT_EVENT_CLOSE:
+ xprt_work = kmalloc(sizeof(*xprt_work), GFP_ATOMIC);
+ if (xprt_work) {
+ xprt_work->xprt = xprt;
+ INIT_WORK(&xprt_work->work, xprt_close_worker);
+ queue_work(msm_ipc_router_workqueue, &xprt_work->work);
+ } else {
+ IPC_RTR_ERR(
+ "%s: malloc failure - Couldn't notify CLOSE event",
+ __func__);
+ }
+ break;
+ }
+
+ if (!data)
+ return;
+
+ while (!xprt_info) {
+ msleep(100);
+ xprt_info = xprt->priv;
+ }
+
+ pkt = clone_pkt((struct rr_packet *)data);
+ if (!pkt)
+ return;
+
+ mutex_lock(&xprt_info->rx_lock_lhb2);
+ list_add_tail(&pkt->list, &xprt_info->pkt_list);
+ __pm_stay_awake(&xprt_info->ws);
+ mutex_unlock(&xprt_info->rx_lock_lhb2);
+ queue_work(xprt_info->workqueue, &xprt_info->read_data);
+}
+
+/**
+ * parse_devicetree() - parse device tree binding
+ *
+ * @node: pointer to device tree node
+ *
+ * @return: 0 on success, -ENODEV on failure.
+ */
+static int parse_devicetree(struct device_node *node)
+{
+ char *key;
+ const char *peripheral = NULL;
+
+ key = "qcom,default-peripheral";
+ peripheral = of_get_property(node, key, NULL);
+ if (peripheral)
+ strlcpy(default_peripheral, peripheral, PIL_SUBSYSTEM_NAME_LEN);
+
+ return 0;
+}
+
+/**
+ * ipc_router_probe() - Probe the IPC Router
+ *
+ * @pdev: Platform device corresponding to IPC Router.
+ *
+ * @return: 0 on success, standard Linux error codes on error.
+ *
+ * This function is called when the underlying device tree driver registers
+ * a platform device, mapped to IPC Router.
+ */
+static int ipc_router_probe(struct platform_device *pdev)
+{
+ int ret = 0;
+
+ if (pdev && pdev->dev.of_node) {
+ ret = parse_devicetree(pdev->dev.of_node);
+ if (ret)
+ IPC_RTR_ERR("%s: Failed to parse device tree\n",
+ __func__);
+ }
+ return ret;
+}
+
+static const struct of_device_id ipc_router_match_table[] = {
+ { .compatible = "qcom,ipc_router" },
+ {},
+};
+
+static struct platform_driver ipc_router_driver = {
+ .probe = ipc_router_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = ipc_router_match_table,
+ },
+};
+
+/**
+ * ipc_router_core_init() - Initialize all IPC Router core data structures
+ *
+ * Return: 0 on success, standard Linux error code otherwise.
+ *
+ * This function only initializes the core data structures of the IPC Router
+ * module. The remaining initialization is done inside msm_ipc_router_init().
+ */
+static int ipc_router_core_init(void)
+{
+ int i;
+ int ret;
+ struct msm_ipc_routing_table_entry *rt_entry;
+
+ mutex_lock(&ipc_router_init_lock);
+ if (likely(is_ipc_router_inited)) {
+ mutex_unlock(&ipc_router_init_lock);
+ return 0;
+ }
+
+ debugfs_init();
+
+ for (i = 0; i < SRV_HASH_SIZE; i++)
+ INIT_LIST_HEAD(&server_list[i]);
+
+ for (i = 0; i < LP_HASH_SIZE; i++)
+ INIT_LIST_HEAD(&local_ports[i]);
+
+ down_write(&routing_table_lock_lha3);
+ if (!routing_table_inited) {
+ init_routing_table();
+ routing_table_inited = 1;
+ }
+ up_write(&routing_table_lock_lha3);
+ rt_entry = create_routing_table_entry(IPC_ROUTER_NID_LOCAL, NULL);
+ kref_put(&rt_entry->ref, ipc_router_release_rtentry);
+
+ msm_ipc_router_workqueue =
+ create_singlethread_workqueue("msm_ipc_router");
+ if (!msm_ipc_router_workqueue) {
+ mutex_unlock(&ipc_router_init_lock);
+ return -ENOMEM;
+ }
+
+ ret = msm_ipc_router_security_init();
+ if (ret < 0)
+ IPC_RTR_ERR("%s: Security Init failed\n", __func__);
+ else
+ is_ipc_router_inited = true;
+ mutex_unlock(&ipc_router_init_lock);
+
+ return ret;
+}
+
+static int msm_ipc_router_init(void)
+{
+ int ret;
+
+ ret = ipc_router_core_init();
+ if (ret < 0)
+ return ret;
+
+ ret = platform_driver_register(&ipc_router_driver);
+ if (ret)
+ IPC_RTR_ERR(
+ "%s: ipc_router_driver register failed %d\n", __func__, ret);
+
+ ret = msm_ipc_router_init_sockets();
+ if (ret < 0)
+ IPC_RTR_ERR("%s: Init sockets failed\n", __func__);
+
+ ipc_router_log_ctx_init();
+ return ret;
+}
+
+module_init(msm_ipc_router_init);
+MODULE_DESCRIPTION("MSM IPC Router");
+MODULE_LICENSE("GPL v2");
diff --git a/net/ipc_router/ipc_router_private.h b/net/ipc_router/ipc_router_private.h
new file mode 100644
index 0000000..3ec9818
--- /dev/null
+++ b/net/ipc_router/ipc_router_private.h
@@ -0,0 +1,150 @@
+/* Copyright (c) 2011-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _IPC_ROUTER_PRIVATE_H
+#define _IPC_ROUTER_PRIVATE_H
+
+#include <linux/types.h>
+#include <linux/socket.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/list.h>
+#include <linux/platform_device.h>
+#include <linux/msm_ipc.h>
+#include <linux/ipc_router.h>
+#include <linux/ipc_router_xprt.h>
+
+#include <net/sock.h>
+
+/* definitions for the R2R wire protocol */
+#define IPC_ROUTER_V1 1
+/* Ambiguous definition but will enable multiplexing IPC_ROUTER_V2 packets
+ * with an existing alternate transport in user-space, if needed.
+ */
+#define IPC_ROUTER_V2 3
+#define IPC_ROUTER_VER_BITMASK ((BIT(IPC_ROUTER_V1)) | (BIT(IPC_ROUTER_V2)))
+#define IPC_ROUTER_HELLO_MAGIC 0xE110
+#define IPC_ROUTER_CHECKSUM_MASK 0xFFFF
+
+#define IPC_ROUTER_ADDRESS 0x0000FFFF
+
+#define IPC_ROUTER_NID_LOCAL 1
+#define MAX_IPC_PKT_SIZE 66000
+
+#define IPC_ROUTER_LOW_RX_QUOTA 5
+#define IPC_ROUTER_HIGH_RX_QUOTA 10
+
+#define IPC_ROUTER_INFINITY -1
+#define DEFAULT_RCV_TIMEO IPC_ROUTER_INFINITY
+#define DEFAULT_SND_TIMEO IPC_ROUTER_INFINITY
+
+/* Number of pad bytes needed to round x up to a 4-byte boundary,
+ * e.g. ALIGN_SIZE(5) == 3 and ALIGN_SIZE(8) == 0.
+ */
+#define ALIGN_SIZE(x) ((4 - ((x) & 3)) & 3)
+
+#define ALL_SERVICE 0xFFFFFFFF
+#define ALL_INSTANCE 0xFFFFFFFF
+
+#define CONTROL_FLAG_CONFIRM_RX 0x1
+#define CONTROL_FLAG_OPT_HDR 0x2
+
+enum {
+ CLIENT_PORT,
+ SERVER_PORT,
+ CONTROL_PORT,
+ IRSC_PORT,
+};
+
+enum {
+ NULL_MODE,
+ SINGLE_LINK_MODE,
+ MULTI_LINK_MODE,
+};
+
+enum {
+ CONNECTION_RESET = -1,
+ NOT_CONNECTED,
+ CONNECTED,
+};
+
+struct msm_ipc_sock {
+ struct sock sk;
+ struct msm_ipc_port *port;
+ void *default_node_vote_info;
+};
+
+/**
+ * msm_ipc_router_create_raw_port() - Create an IPC Router port
+ * @endpoint: User-space socket information to be cached.
+ * @notify: Function to notify incoming events on the port. It is called
+ *          with the event ID, any out-of-band data associated with the event
+ *          (and its size, if valid), and the private data registered during
+ *          the port creation.
+ * @priv: Private data to be passed during the event notification.
+ *
+ * @return: Valid pointer to port on success, NULL on failure.
+ *
+ * This function is used to create an IPC Router port. The port is used for
+ * communication locally or outside the subsystem.
+ */
+struct msm_ipc_port *
+msm_ipc_router_create_raw_port(void *endpoint,
+ void (*notify)(unsigned int event,
+ void *oob_data,
+ size_t oob_data_len, void *priv),
+ void *priv);
+int msm_ipc_router_send_to(struct msm_ipc_port *src,
+ struct sk_buff_head *data,
+ struct msm_ipc_addr *dest,
+ long timeout);
+int msm_ipc_router_read(struct msm_ipc_port *port_ptr,
+ struct rr_packet **pkt,
+ size_t buf_len);
+
+int msm_ipc_router_recv_from(struct msm_ipc_port *port_ptr,
+ struct rr_packet **pkt,
+ struct msm_ipc_addr *src_addr, long timeout);
+int msm_ipc_router_register_server(struct msm_ipc_port *server_port,
+ struct msm_ipc_addr *name);
+int msm_ipc_router_unregister_server(struct msm_ipc_port *server_port);
+
+int msm_ipc_router_init_sockets(void);
+void msm_ipc_router_exit_sockets(void);
+
+void msm_ipc_sync_sec_rule(u32 service, u32 instance, void *rule);
+
+void msm_ipc_sync_default_sec_rule(void *rule);
+
+int msm_ipc_router_rx_data_wait(struct msm_ipc_port *port_ptr, long timeout);
+
+void msm_ipc_router_free_skb(struct sk_buff_head *skb_head);
+
+/**
+ * ipc_router_set_conn() - Set the connection by initializing dest address
+ * @port_ptr: Local port in which the connection has to be set.
+ * @addr: Destination address of the connection.
+ *
+ * @return: 0 on success, standard Linux error codes on failure.
+ */
+int ipc_router_set_conn(struct msm_ipc_port *port_ptr,
+ struct msm_ipc_addr *addr);
+
+void *msm_ipc_load_default_node(void);
+
+void msm_ipc_unload_default_node(void *pil);
+
+/**
+ * ipc_router_dummy_write_space() - Dummy write space available callback
+ * @sk: Socket pointer for which the callback is called.
+ */
+void ipc_router_dummy_write_space(struct sock *sk);
+
+#endif
diff --git a/net/ipc_router/ipc_router_security.c b/net/ipc_router/ipc_router_security.c
new file mode 100644
index 0000000..1359d64
--- /dev/null
+++ b/net/ipc_router/ipc_router_security.c
@@ -0,0 +1,330 @@
+/* Copyright (c) 2012-2014,2016 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/net.h>
+#include <linux/socket.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/poll.h>
+#include <linux/fcntl.h>
+#include <linux/gfp.h>
+#include <linux/uaccess.h>
+#include <linux/kernel.h>
+#include <linux/msm_ipc.h>
+#include <linux/rwsem.h>
+
+#include <net/sock.h>
+#include "ipc_router_private.h"
+#include "ipc_router_security.h"
+
+#define IRSC_COMPLETION_TIMEOUT_MS 30000
+#define SEC_RULES_HASH_SZ 32
+
+#ifndef SIZE_MAX
+#define SIZE_MAX ((size_t)-1)
+#endif
+
+struct security_rule {
+ struct list_head list;
+ u32 service_id;
+ u32 instance_id;
+ unsigned int reserved;
+ int num_group_info;
+ kgid_t *group_id;
+};
+
+static DECLARE_RWSEM(security_rules_lock_lha4);
+static struct list_head security_rules[SEC_RULES_HASH_SZ];
+static DECLARE_COMPLETION(irsc_completion);
+
+/**
+ * wait_for_irsc_completion() - Wait for IPC Router Security Configuration
+ * (IRSC) to complete
+ */
+void wait_for_irsc_completion(void)
+{
+ unsigned long rem_jiffies;
+
+ do {
+ rem_jiffies = wait_for_completion_timeout
+ (&irsc_completion,
+ msecs_to_jiffies(IRSC_COMPLETION_TIMEOUT_MS));
+ if (rem_jiffies)
+ return;
+ pr_err("%s: waiting for IPC Security Conf.\n", __func__);
+ } while (1);
+}
+
+/**
+ * signal_irsc_completion() - Signal the completion of IRSC
+ */
+void signal_irsc_completion(void)
+{
+ complete_all(&irsc_completion);
+}
+
+/**
+ * check_permissions() - Check whether the process has permissions to
+ * create an interface handle with IPC Router
+ *
+ * @return: true if the process has permissions, else false.
+ */
+int check_permissions(void)
+{
+ int rc = 0;
+
+ if (capable(CAP_NET_RAW) || capable(CAP_NET_BIND_SERVICE))
+ rc = 1;
+ return rc;
+}
+EXPORT_SYMBOL(check_permissions);
+
+/**
+ * msm_ipc_config_sec_rules() - Add a security rule to the database
+ * @arg: Pointer to the buffer containing the rule.
+ *
+ * @return: 0 if successfully added, < 0 for error.
+ *
+ * A security rule is defined using a <Service_ID: Group_ID> tuple. The rule
+ * implies that, in order to send a QMI message to service Service_ID, a
+ * user-space process must belong to the Linux group Group_ID.
+ */
+int msm_ipc_config_sec_rules(void *arg)
+{
+ struct config_sec_rules_args sec_rules_arg;
+ struct security_rule *rule, *temp_rule;
+ int key;
+ size_t kgroup_info_sz;
+ int ret;
+ size_t group_info_sz;
+ gid_t *group_id = NULL;
+ int loop;
+
+ if (!uid_eq(current_euid(), GLOBAL_ROOT_UID))
+ return -EPERM;
+
+ ret = copy_from_user(&sec_rules_arg, (void *)arg,
+ sizeof(sec_rules_arg));
+ if (ret)
+ return -EFAULT;
+
+ if (sec_rules_arg.num_group_info <= 0)
+ return -EINVAL;
+
+ if (sec_rules_arg.num_group_info > (SIZE_MAX / sizeof(gid_t))) {
+ pr_err("%s: Integer Overflow %zu * %d\n", __func__,
+ sizeof(gid_t), sec_rules_arg.num_group_info);
+ return -EINVAL;
+ }
+ group_info_sz = sec_rules_arg.num_group_info * sizeof(gid_t);
+
+ if (sec_rules_arg.num_group_info > (SIZE_MAX / sizeof(kgid_t))) {
+ pr_err("%s: Integer Overflow %zu * %d\n", __func__,
+ sizeof(kgid_t), sec_rules_arg.num_group_info);
+ return -EINVAL;
+ }
+ kgroup_info_sz = sec_rules_arg.num_group_info * sizeof(kgid_t);
+
+ rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+ if (!rule)
+ return -ENOMEM;
+
+ rule->group_id = kzalloc(kgroup_info_sz, GFP_KERNEL);
+ if (!rule->group_id) {
+ kfree(rule);
+ return -ENOMEM;
+ }
+
+ group_id = kzalloc(group_info_sz, GFP_KERNEL);
+ if (!group_id) {
+ kfree(rule->group_id);
+ kfree(rule);
+ return -ENOMEM;
+ }
+
+ rule->service_id = sec_rules_arg.service_id;
+ rule->instance_id = sec_rules_arg.instance_id;
+ rule->reserved = sec_rules_arg.reserved;
+ rule->num_group_info = sec_rules_arg.num_group_info;
+ ret = copy_from_user(group_id, ((void *)(arg + sizeof(sec_rules_arg))),
+ group_info_sz);
+ if (ret) {
+ kfree(group_id);
+ kfree(rule->group_id);
+ kfree(rule);
+ return -EFAULT;
+ }
+ for (loop = 0; loop < rule->num_group_info; loop++)
+ rule->group_id[loop] = make_kgid(current_user_ns(),
+ group_id[loop]);
+ kfree(group_id);
+
+ key = rule->service_id & (SEC_RULES_HASH_SZ - 1);
+ down_write(&security_rules_lock_lha4);
+ if (rule->service_id == ALL_SERVICE) {
+ temp_rule = list_first_entry(&security_rules[key],
+ struct security_rule, list);
+ list_del(&temp_rule->list);
+ kfree(temp_rule->group_id);
+ kfree(temp_rule);
+ }
+ list_add_tail(&rule->list, &security_rules[key]);
+ up_write(&security_rules_lock_lha4);
+
+ if (rule->service_id == ALL_SERVICE)
+ msm_ipc_sync_default_sec_rule((void *)rule);
+ else
+ msm_ipc_sync_sec_rule(rule->service_id, rule->instance_id,
+ (void *)rule);
+
+ return 0;
+}
+EXPORT_SYMBOL(msm_ipc_config_sec_rules);
+
+/**
+ * msm_ipc_add_default_rule() - Add default security rule
+ *
+ * @return: 0 on success, < 0 on error.
+ *
+ * This function ensures basic security when no security rule is defined
+ * for a service. The rule it installs can be overridden by a default
+ * security rule loaded from a user-space script.
+ */
+static int msm_ipc_add_default_rule(void)
+{
+ struct security_rule *rule;
+ int key;
+
+ rule = kzalloc(sizeof(*rule), GFP_KERNEL);
+ if (!rule)
+ return -ENOMEM;
+
+ rule->group_id = kzalloc(sizeof(*rule->group_id), GFP_KERNEL);
+ if (!rule->group_id) {
+ kfree(rule);
+ return -ENOMEM;
+ }
+
+ rule->service_id = ALL_SERVICE;
+ rule->instance_id = ALL_INSTANCE;
+ rule->num_group_info = 1;
+ *rule->group_id = AID_NET_RAW;
+ down_write(&security_rules_lock_lha4);
+ key = (ALL_SERVICE & (SEC_RULES_HASH_SZ - 1));
+ list_add_tail(&rule->list, &security_rules[key]);
+ up_write(&security_rules_lock_lha4);
+ return 0;
+}
+
+/**
+ * msm_ipc_get_security_rule() - Get the security rule corresponding to a
+ * service
+ * @service_id: Service ID for which the rule is to be retrieved.
+ * @instance_id: Instance ID for which the rule is to be retrieved.
+ *
+ * @return: Returns the rule info on success, NULL on error.
+ *
+ * This function is used when the service comes up and gets registered with
+ * the IPC Router.
+ */
+void *msm_ipc_get_security_rule(u32 service_id, u32 instance_id)
+{
+ int key;
+ struct security_rule *rule;
+
+ key = (service_id & (SEC_RULES_HASH_SZ - 1));
+ down_read(&security_rules_lock_lha4);
+ /* Return the rule for a specific <service:instance>, if found. */
+ list_for_each_entry(rule, &security_rules[key], list) {
+ if ((rule->service_id == service_id) &&
+ (rule->instance_id == instance_id)) {
+ up_read(&security_rules_lock_lha4);
+ return (void *)rule;
+ }
+ }
+
+ /* Return the rule for a specific service, if found. */
+ list_for_each_entry(rule, &security_rules[key], list) {
+ if ((rule->service_id == service_id) &&
+ (rule->instance_id == ALL_INSTANCE)) {
+ up_read(&security_rules_lock_lha4);
+ return (void *)rule;
+ }
+ }
+
+ /* Return the default rule, if no rule defined for a service. */
+ key = (ALL_SERVICE & (SEC_RULES_HASH_SZ - 1));
+ list_for_each_entry(rule, &security_rules[key], list) {
+ if ((rule->service_id == ALL_SERVICE) &&
+ (rule->instance_id == ALL_INSTANCE)) {
+ up_read(&security_rules_lock_lha4);
+ return (void *)rule;
+ }
+ }
+ up_read(&security_rules_lock_lha4);
+ return NULL;
+}
+EXPORT_SYMBOL(msm_ipc_get_security_rule);
+
+/**
+ * msm_ipc_check_send_permissions() - Check if the sending process has
+ * permissions specified as per the rule
+ * @data: Security rule to be checked.
+ *
+ * @return: true if the process has permissions, else false.
+ *
+ * This function is used to check if the currently executing process has
+ * permission to send a message to the remote entity. The security rule
+ * corresponding to the remote entity is specified by the "data" parameter.
+ */
+int msm_ipc_check_send_permissions(void *data)
+{
+ int i;
+ struct security_rule *rule = (struct security_rule *)data;
+
+ /* Source/Sender is Root user */
+ if (uid_eq(current_euid(), GLOBAL_ROOT_UID))
+ return 1;
+
+ /* Destination has no rules defined, possibly a client. */
+ if (!rule)
+ return 1;
+
+ for (i = 0; i < rule->num_group_info; i++) {
+ if (!gid_valid(rule->group_id[i]))
+ continue;
+ if (in_egroup_p(rule->group_id[i]))
+ return 1;
+ }
+ return 0;
+}
+EXPORT_SYMBOL(msm_ipc_check_send_permissions);
+
+/**
+ * msm_ipc_router_security_init() - Initialize the security rule database
+ *
+ * @return: 0 if successful, < 0 for error.
+ */
+int msm_ipc_router_security_init(void)
+{
+ int i;
+
+ for (i = 0; i < SEC_RULES_HASH_SZ; i++)
+ INIT_LIST_HEAD(&security_rules[i]);
+
+ msm_ipc_add_default_rule();
+ return 0;
+}
+EXPORT_SYMBOL(msm_ipc_router_security_init);
diff --git a/net/ipc_router/ipc_router_security.h b/net/ipc_router/ipc_router_security.h
new file mode 100644
index 0000000..55e5887
--- /dev/null
+++ b/net/ipc_router/ipc_router_security.h
@@ -0,0 +1,120 @@
+/* Copyright (c) 2012-2014,2016 The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _IPC_ROUTER_SECURITY_H
+#define _IPC_ROUTER_SECURITY_H
+
+#include <linux/types.h>
+#include <linux/socket.h>
+#include <linux/errno.h>
+
+#ifdef CONFIG_IPC_ROUTER_SECURITY
+#include <linux/android_aid.h>
+
+/**
+ * check_permissions() - Check whether the process has permissions to
+ * create an interface handle with IPC Router
+ *
+ * @return: true if the process has permissions, else false.
+ */
+int check_permissions(void);
+
+/**
+ * msm_ipc_config_sec_rules() - Add a security rule to the database
+ * @arg: Pointer to the buffer containing the rule.
+ *
+ * @return: 0 if successfully added, < 0 for error.
+ *
+ * A security rule is defined using a <Service_ID: Group_ID> tuple. The rule
+ * implies that, in order to send a QMI message to service Service_ID, a
+ * user-space process must belong to the Linux group Group_ID.
+ */
+int msm_ipc_config_sec_rules(void *arg);
+
+/**
+ * msm_ipc_get_security_rule() - Get the security rule corresponding to a
+ * service
+ * @service_id: Service ID for which the rule is to be retrieved.
+ * @instance_id: Instance ID for which the rule is to be retrieved.
+ *
+ * @return: Returns the rule info on success, NULL on error.
+ *
+ * This function is used when the service comes up and gets registered with
+ * the IPC Router.
+ */
+void *msm_ipc_get_security_rule(u32 service_id, u32 instance_id);
+
+/**
+ * msm_ipc_check_send_permissions() - Check if the sending process has
+ * permissions specified as per the rule
+ * @data: Security rule to be checked.
+ *
+ * @return: true if the process has permissions, else false.
+ *
+ * This function is used to check if the currently executing process has
+ * permission to send a message to the remote entity. The security rule
+ * corresponding to the remote entity is specified by the "data" parameter.
+ */
+int msm_ipc_check_send_permissions(void *data);
+
+/**
+ * msm_ipc_router_security_init() - Initialize the security rule database
+ *
+ * @return: 0 if successful, < 0 for error.
+ */
+int msm_ipc_router_security_init(void);
+
+/**
+ * wait_for_irsc_completion() - Wait for IPC Router Security Configuration
+ * (IRSC) to complete
+ */
+void wait_for_irsc_completion(void);
+
+/**
+ * signal_irsc_completion() - Signal the completion of IRSC
+ */
+void signal_irsc_completion(void);
+
+#else
+
+static inline int check_permissions(void)
+{
+ return 1;
+}
+
+static inline int msm_ipc_config_sec_rules(void *arg)
+{
+ return -ENODEV;
+}
+
+static inline void *msm_ipc_get_security_rule(u32 service_id,
+ u32 instance_id)
+{
+ return NULL;
+}
+
+static inline int msm_ipc_check_send_permissions(void *data)
+{
+ return 1;
+}
+
+static inline int msm_ipc_router_security_init(void)
+{
+ return 0;
+}
+
+static inline void wait_for_irsc_completion(void) { }
+
+static inline void signal_irsc_completion(void) { }
+
+#endif
+#endif
diff --git a/net/ipc_router/ipc_router_socket.c b/net/ipc_router/ipc_router_socket.c
new file mode 100644
index 0000000..a84fc11
--- /dev/null
+++ b/net/ipc_router/ipc_router_socket.c
@@ -0,0 +1,681 @@
+/* Copyright (c) 2011-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/net.h>
+#include <linux/socket.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/poll.h>
+#include <linux/fcntl.h>
+#include <linux/gfp.h>
+#include <linux/msm_ipc.h>
+#include <linux/sched.h>
+#include <linux/thread_info.h>
+#include <linux/slab.h>
+#include <linux/kmemleak.h>
+#include <linux/ipc_logging.h>
+#include <linux/string.h>
+#include <linux/atomic.h>
+#include <linux/ipc_router.h>
+
+#include <net/sock.h>
+
+#include "ipc_router_private.h"
+#include "ipc_router_security.h"
+
+#define msm_ipc_sk(sk) ((struct msm_ipc_sock *)(sk))
+#define msm_ipc_sk_port(sk) ((struct msm_ipc_port *)(msm_ipc_sk(sk)->port))
+
+#ifndef SIZE_MAX
+#define SIZE_MAX ((size_t)-1)
+#endif
+
+static int sockets_enabled;
+static struct proto msm_ipc_proto;
+static const struct proto_ops msm_ipc_proto_ops;
+static RAW_NOTIFIER_HEAD(ipcrtr_af_init_chain);
+static DEFINE_MUTEX(ipcrtr_af_init_lock);
+
+static struct sk_buff_head *msm_ipc_router_build_msg(struct msghdr *m,
+ size_t total_len)
+{
+ struct sk_buff_head *msg_head;
+ struct sk_buff *msg;
+ int first = 1;
+ int last = 1;
+ size_t data_size = 0;
+ size_t alloc_size, align_size;
+ void *data;
+ size_t total_copied_size = 0, copied_size;
+
+ if (iov_iter_count(&m->msg_iter) == total_len)
+ data_size = total_len;
+
+ if (!data_size)
+ return NULL;
+ align_size = ALIGN_SIZE(data_size);
+
+ msg_head = kmalloc(sizeof(*msg_head), GFP_KERNEL);
+ if (!msg_head) {
+ IPC_RTR_ERR("%s: cannot allocate skb_head\n", __func__);
+ return NULL;
+ }
+ skb_queue_head_init(msg_head);
+
+ while (total_copied_size < total_len) {
+ alloc_size = data_size;
+ if (first)
+ alloc_size += IPC_ROUTER_HDR_SIZE;
+ if (last)
+ alloc_size += align_size;
+
+ msg = alloc_skb(alloc_size, GFP_KERNEL);
+ if (!msg) {
+ if (alloc_size <= (PAGE_SIZE / 2)) {
+ IPC_RTR_ERR("%s: cannot allocate skb\n",
+ __func__);
+ goto msg_build_failure;
+ }
+ data_size = data_size / 2;
+ last = 0;
+ continue;
+ }
+
+ if (first) {
+ skb_reserve(msg, IPC_ROUTER_HDR_SIZE);
+ first = 0;
+ }
+
+ data = skb_put(msg, data_size);
+ copied_size = copy_from_iter(msg->data, data_size,
+ &m->msg_iter);
+ if (copied_size != data_size) {
+ IPC_RTR_ERR("%s: copy_from_iter failed %zu %zu %zu\n",
+ __func__, alloc_size, data_size,
+ copied_size);
+ kfree_skb(msg);
+ goto msg_build_failure;
+ }
+ skb_queue_tail(msg_head, msg);
+ total_copied_size += data_size;
+ data_size = total_len - total_copied_size;
+ last = 1;
+ }
+ return msg_head;
+
+msg_build_failure:
+ while (!skb_queue_empty(msg_head)) {
+ msg = skb_dequeue(msg_head);
+ kfree_skb(msg);
+ }
+ kfree(msg_head);
+ return NULL;
+}
+
+static int msm_ipc_router_extract_msg(struct msghdr *m,
+ struct rr_packet *pkt)
+{
+ struct sockaddr_msm_ipc *addr;
+ struct rr_header_v1 *hdr;
+ struct sk_buff *temp;
+ union rr_control_msg *ctl_msg;
+ int offset = 0, data_len = 0, copy_len, copied_len;
+
+ if (!m || !pkt) {
+ IPC_RTR_ERR("%s: Invalid pointers passed\n", __func__);
+ return -EINVAL;
+ }
+ addr = (struct sockaddr_msm_ipc *)m->msg_name;
+
+ hdr = &pkt->hdr;
+ if (addr && (hdr->type == IPC_ROUTER_CTRL_CMD_RESUME_TX)) {
+ temp = skb_peek(pkt->pkt_fragment_q);
+ ctl_msg = (union rr_control_msg *)(temp->data);
+ addr->family = AF_MSM_IPC;
+ addr->address.addrtype = MSM_IPC_ADDR_ID;
+ addr->address.addr.port_addr.node_id = ctl_msg->cli.node_id;
+ addr->address.addr.port_addr.port_id = ctl_msg->cli.port_id;
+ m->msg_namelen = sizeof(struct sockaddr_msm_ipc);
+ return offset;
+ }
+ if (addr && (hdr->type == IPC_ROUTER_CTRL_CMD_DATA)) {
+ addr->family = AF_MSM_IPC;
+ addr->address.addrtype = MSM_IPC_ADDR_ID;
+ addr->address.addr.port_addr.node_id = hdr->src_node_id;
+ addr->address.addr.port_addr.port_id = hdr->src_port_id;
+ m->msg_namelen = sizeof(struct sockaddr_msm_ipc);
+ }
+
+ data_len = hdr->size;
+ skb_queue_walk(pkt->pkt_fragment_q, temp) {
+ copy_len = data_len < temp->len ? data_len : temp->len;
+ copied_len = copy_to_iter(temp->data, copy_len, &m->msg_iter);
+ if (copy_len != copied_len) {
+ IPC_RTR_ERR("%s: Copy to user failed\n", __func__);
+ return -EFAULT;
+ }
+ offset += copy_len;
+ data_len -= copy_len;
+ }
+ return offset;
+}
+
+static int msm_ipc_router_create(struct net *net,
+ struct socket *sock,
+ int protocol,
+ int kern)
+{
+ struct sock *sk;
+ struct msm_ipc_port *port_ptr;
+
+ if (unlikely(protocol != 0)) {
+ IPC_RTR_ERR("%s: Protocol not supported\n", __func__);
+ return -EPROTONOSUPPORT;
+ }
+
+ switch (sock->type) {
+ case SOCK_DGRAM:
+ break;
+ default:
+ IPC_RTR_ERR("%s: Protocol type not supported\n", __func__);
+ return -EPROTOTYPE;
+ }
+
+ sk = sk_alloc(net, AF_MSM_IPC, GFP_KERNEL, &msm_ipc_proto, kern);
+ if (!sk) {
+ IPC_RTR_ERR("%s: sk_alloc failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ sock->ops = &msm_ipc_proto_ops;
+ sock_init_data(sock, sk);
+ sk->sk_data_ready = NULL;
+ sk->sk_write_space = ipc_router_dummy_write_space;
+ sk->sk_rcvtimeo = DEFAULT_RCV_TIMEO;
+ sk->sk_sndtimeo = DEFAULT_SND_TIMEO;
+
+ port_ptr = msm_ipc_router_create_raw_port(sk, NULL, NULL);
+ if (!port_ptr) {
+ IPC_RTR_ERR("%s: port_ptr alloc failed\n", __func__);
+ sock_put(sk);
+ sock->sk = NULL;
+ return -ENOMEM;
+ }
+
+ port_ptr->check_send_permissions = msm_ipc_check_send_permissions;
+ msm_ipc_sk(sk)->port = port_ptr;
+ msm_ipc_sk(sk)->default_node_vote_info = NULL;
+
+ return 0;
+}
+
+int msm_ipc_router_bind(struct socket *sock, struct sockaddr *uaddr,
+ int uaddr_len)
+{
+ struct sockaddr_msm_ipc *addr = (struct sockaddr_msm_ipc *)uaddr;
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr;
+ int ret;
+
+ if (!sk)
+ return -EINVAL;
+
+ if (!check_permissions()) {
+ IPC_RTR_ERR("%s: %s Do not have permissions\n",
+ __func__, current->comm);
+ return -EPERM;
+ }
+
+ if (!uaddr_len) {
+ IPC_RTR_ERR("%s: Invalid address length\n", __func__);
+ return -EINVAL;
+ }
+
+ if (addr->family != AF_MSM_IPC) {
+ IPC_RTR_ERR("%s: Address family is incorrect\n", __func__);
+ return -EAFNOSUPPORT;
+ }
+
+ if (addr->address.addrtype != MSM_IPC_ADDR_NAME) {
+ IPC_RTR_ERR("%s: Address type is incorrect\n", __func__);
+ return -EINVAL;
+ }
+
+ port_ptr = msm_ipc_sk_port(sk);
+ if (!port_ptr)
+ return -ENODEV;
+
+ if (!msm_ipc_sk(sk)->default_node_vote_info)
+ msm_ipc_sk(sk)->default_node_vote_info =
+ msm_ipc_load_default_node();
+ lock_sock(sk);
+
+ ret = msm_ipc_router_register_server(port_ptr, &addr->address);
+
+ release_sock(sk);
+ return ret;
+}
+
+static int ipc_router_connect(struct socket *sock, struct sockaddr *uaddr,
+ int uaddr_len, int flags)
+{
+ struct sockaddr_msm_ipc *addr = (struct sockaddr_msm_ipc *)uaddr;
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr;
+ int ret;
+
+ if (!sk)
+ return -EINVAL;
+
+ if (uaddr_len <= 0) {
+ IPC_RTR_ERR("%s: Invalid address length\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!addr) {
+ IPC_RTR_ERR("%s: Invalid address\n", __func__);
+ return -EINVAL;
+ }
+
+ if (addr->family != AF_MSM_IPC) {
+ IPC_RTR_ERR("%s: Address family is incorrect\n", __func__);
+ return -EAFNOSUPPORT;
+ }
+
+ port_ptr = msm_ipc_sk_port(sk);
+ if (!port_ptr)
+ return -ENODEV;
+
+ lock_sock(sk);
+ ret = ipc_router_set_conn(port_ptr, &addr->address);
+ release_sock(sk);
+ return ret;
+}
+
+static int msm_ipc_router_sendmsg(struct socket *sock,
+ struct msghdr *m, size_t total_len)
+{
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr = msm_ipc_sk_port(sk);
+ struct sockaddr_msm_ipc *dest = (struct sockaddr_msm_ipc *)m->msg_name;
+ struct sk_buff_head *msg;
+ int ret;
+ struct msm_ipc_addr dest_addr = {0};
+ long timeout;
+
+ if (dest) {
+ if (m->msg_namelen < sizeof(*dest) ||
+ dest->family != AF_MSM_IPC)
+ return -EINVAL;
+ memcpy(&dest_addr, &dest->address, sizeof(dest_addr));
+ } else {
+ if (port_ptr->conn_status == NOT_CONNECTED)
+ return -EDESTADDRREQ;
+ if (port_ptr->conn_status < CONNECTION_RESET)
+ return -ENETRESET;
+ memcpy(&dest_addr.addr.port_addr, &port_ptr->dest_addr,
+ sizeof(struct msm_ipc_port_addr));
+ dest_addr.addrtype = MSM_IPC_ADDR_ID;
+ }
+
+ if (total_len > MAX_IPC_PKT_SIZE)
+ return -EINVAL;
+
+ lock_sock(sk);
+ timeout = sock_sndtimeo(sk, m->msg_flags & MSG_DONTWAIT);
+ msg = msm_ipc_router_build_msg(m, total_len);
+ if (!msg) {
+ IPC_RTR_ERR("%s: Msg build failure\n", __func__);
+ ret = -ENOMEM;
+ goto out_sendmsg;
+ }
+ kmemleak_not_leak(msg);
+
+ if (port_ptr->type == CLIENT_PORT)
+ wait_for_irsc_completion();
+ ret = msm_ipc_router_send_to(port_ptr, msg, &dest_addr, timeout);
+ if (ret != total_len) {
+ if (ret < 0) {
+ if (ret != -EAGAIN)
+ IPC_RTR_ERR("%s: Send_to failure %d\n",
+ __func__, ret);
+ msm_ipc_router_free_skb(msg);
+ } else if (ret >= 0) {
+ ret = -EFAULT;
+ }
+ }
+
+out_sendmsg:
+ release_sock(sk);
+ return ret;
+}
+
+static int msm_ipc_router_recvmsg(struct socket *sock,
+ struct msghdr *m, size_t buf_len, int flags)
+{
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr = msm_ipc_sk_port(sk);
+ struct rr_packet *pkt;
+ long timeout;
+ int ret;
+
+ lock_sock(sk);
+ if (!buf_len) {
+ if (flags & MSG_PEEK)
+ ret = msm_ipc_router_get_curr_pkt_size(port_ptr);
+ else
+ ret = -EINVAL;
+ release_sock(sk);
+ return ret;
+ }
+ timeout = sock_rcvtimeo(sk, flags & MSG_DONTWAIT);
+
+ ret = msm_ipc_router_rx_data_wait(port_ptr, timeout);
+ if (ret) {
+ release_sock(sk);
+ if (ret == -ENOMSG)
+ m->msg_namelen = 0;
+ return ret;
+ }
+
+ ret = msm_ipc_router_read(port_ptr, &pkt, buf_len);
+ if (ret <= 0 || !pkt) {
+ release_sock(sk);
+ return ret;
+ }
+
+ ret = msm_ipc_router_extract_msg(m, pkt);
+ release_pkt(pkt);
+ release_sock(sk);
+ return ret;
+}
+
+static int msm_ipc_router_ioctl(struct socket *sock,
+ unsigned int cmd, unsigned long arg)
+{
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr;
+ struct server_lookup_args server_arg;
+ struct msm_ipc_server_info *srv_info = NULL;
+ unsigned int n;
+ size_t srv_info_sz = 0;
+ int ret;
+
+ if (!sk)
+ return -EINVAL;
+
+ lock_sock(sk);
+ port_ptr = msm_ipc_sk_port(sock->sk);
+ if (!port_ptr) {
+ release_sock(sk);
+ return -EINVAL;
+ }
+
+ switch (cmd) {
+ case IPC_ROUTER_IOCTL_GET_VERSION:
+ n = IPC_ROUTER_V1;
+ ret = put_user(n, (unsigned int *)arg);
+ break;
+
+ case IPC_ROUTER_IOCTL_GET_MTU:
+ n = (MAX_IPC_PKT_SIZE - IPC_ROUTER_HDR_SIZE);
+ ret = put_user(n, (unsigned int *)arg);
+ break;
+
+ case IPC_ROUTER_IOCTL_GET_CURR_PKT_SIZE:
+ ret = msm_ipc_router_get_curr_pkt_size(port_ptr);
+ break;
+
+ case IPC_ROUTER_IOCTL_LOOKUP_SERVER:
+ if (!msm_ipc_sk(sk)->default_node_vote_info)
+ msm_ipc_sk(sk)->default_node_vote_info =
+ msm_ipc_load_default_node();
+
+ ret = copy_from_user(&server_arg, (void *)arg,
+ sizeof(server_arg));
+ if (ret) {
+ ret = -EFAULT;
+ break;
+ }
+
+ if (server_arg.num_entries_in_array < 0) {
+ ret = -EINVAL;
+ break;
+ }
+ if (server_arg.num_entries_in_array) {
+ if (server_arg.num_entries_in_array >
+ (SIZE_MAX / sizeof(*srv_info))) {
+ IPC_RTR_ERR("%s: Integer Overflow %zu * %d\n",
+ __func__, sizeof(*srv_info),
+ server_arg.num_entries_in_array);
+ ret = -EINVAL;
+ break;
+ }
+ srv_info_sz = server_arg.num_entries_in_array *
+ sizeof(*srv_info);
+ srv_info = kmalloc(srv_info_sz, GFP_KERNEL);
+ if (!srv_info) {
+ ret = -ENOMEM;
+ break;
+ }
+ }
+ ret = msm_ipc_router_lookup_server_name
+ (&server_arg.port_name, srv_info,
+ server_arg.num_entries_in_array,
+ server_arg.lookup_mask);
+ if (ret < 0) {
+ IPC_RTR_ERR("%s: Server not found\n", __func__);
+ ret = -ENODEV;
+ kfree(srv_info);
+ break;
+ }
+ server_arg.num_entries_found = ret;
+
+ ret = copy_to_user((void *)arg, &server_arg,
+ sizeof(server_arg));
+
+ n = min(server_arg.num_entries_found,
+ server_arg.num_entries_in_array);
+
+ if (ret == 0 && n) {
+ ret = copy_to_user((void *)(arg + sizeof(server_arg)),
+ srv_info, n * sizeof(*srv_info));
+ }
+
+ if (ret)
+ ret = -EFAULT;
+ kfree(srv_info);
+ break;
+
+ case IPC_ROUTER_IOCTL_BIND_CONTROL_PORT:
+ ret = msm_ipc_router_bind_control_port(port_ptr);
+ break;
+
+ case IPC_ROUTER_IOCTL_CONFIG_SEC_RULES:
+ ret = msm_ipc_config_sec_rules((void *)arg);
+ if (ret != -EPERM)
+ port_ptr->type = IRSC_PORT;
+ break;
+
+ default:
+ ret = -EINVAL;
+ }
+ release_sock(sk);
+ return ret;
+}
+
+static unsigned int msm_ipc_router_poll(struct file *file, struct socket *sock,
+ poll_table *wait)
+{
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr;
+ u32 mask = 0;
+
+ if (!sk)
+ return -EINVAL;
+
+ port_ptr = msm_ipc_sk_port(sk);
+ if (!port_ptr)
+ return -EINVAL;
+
+ poll_wait(file, &port_ptr->port_rx_wait_q, wait);
+
+ if (!list_empty(&port_ptr->port_rx_q))
+ mask |= (POLLRDNORM | POLLIN);
+
+ if (port_ptr->conn_status == CONNECTION_RESET)
+ mask |= (POLLHUP | POLLERR);
+
+ return mask;
+}
+
+static int msm_ipc_router_close(struct socket *sock)
+{
+ struct sock *sk = sock->sk;
+ struct msm_ipc_port *port_ptr = msm_ipc_sk_port(sk);
+ int ret;
+
+ lock_sock(sk);
+ ret = msm_ipc_router_close_port(port_ptr);
+ msm_ipc_unload_default_node(msm_ipc_sk(sk)->default_node_vote_info);
+ release_sock(sk);
+ sock_put(sk);
+ sock->sk = NULL;
+
+ return ret;
+}
+
+/**
+ * register_ipcrtr_af_init_notifier() - Register for ipc router socket
+ * address family initialization callback
+ * @nb: Notifier block which will be notified when address family is
+ * initialized.
+ *
+ * Return: 0 on success, standard error code otherwise.
+ */
+int register_ipcrtr_af_init_notifier(struct notifier_block *nb)
+{
+ int ret;
+
+ if (!nb)
+ return -EINVAL;
+ mutex_lock(&ipcrtr_af_init_lock);
+ if (sockets_enabled)
+ nb->notifier_call(nb, IPCRTR_AF_INIT, NULL);
+ ret = raw_notifier_chain_register(&ipcrtr_af_init_chain, nb);
+ mutex_unlock(&ipcrtr_af_init_lock);
+ return ret;
+}
+EXPORT_SYMBOL(register_ipcrtr_af_init_notifier);
+
+/**
+ * unregister_ipcrtr_af_init_notifier() - Unregister for ipc router socket
+ * address family initialization callback
+ * @nb: Notifier block which will be notified once address family is
+ * initialized.
+ *
+ * Return: 0 on success, standard error code otherwise.
+ */
+int unregister_ipcrtr_af_init_notifier(struct notifier_block *nb)
+{
+ int ret;
+
+ if (!nb)
+ return -EINVAL;
+ ret = raw_notifier_chain_unregister(&ipcrtr_af_init_chain, nb);
+ return ret;
+}
+EXPORT_SYMBOL(unregister_ipcrtr_af_init_notifier);
+
+static const struct net_proto_family msm_ipc_family_ops = {
+ .owner = THIS_MODULE,
+ .family = AF_MSM_IPC,
+ .create = msm_ipc_router_create
+};
+
+static const struct proto_ops msm_ipc_proto_ops = {
+ .family = AF_MSM_IPC,
+ .owner = THIS_MODULE,
+ .release = msm_ipc_router_close,
+ .bind = msm_ipc_router_bind,
+ .connect = ipc_router_connect,
+ .socketpair = sock_no_socketpair,
+ .accept = sock_no_accept,
+ .getname = sock_no_getname,
+ .poll = msm_ipc_router_poll,
+ .ioctl = msm_ipc_router_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = msm_ipc_router_ioctl,
+#endif
+ .listen = sock_no_listen,
+ .shutdown = sock_no_shutdown,
+ .setsockopt = sock_no_setsockopt,
+ .getsockopt = sock_no_getsockopt,
+#ifdef CONFIG_COMPAT
+ .compat_setsockopt = sock_no_setsockopt,
+ .compat_getsockopt = sock_no_getsockopt,
+#endif
+ .sendmsg = msm_ipc_router_sendmsg,
+ .recvmsg = msm_ipc_router_recvmsg,
+ .mmap = sock_no_mmap,
+ .sendpage = sock_no_sendpage,
+};
+
+static struct proto msm_ipc_proto = {
+ .name = "MSM_IPC",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct msm_ipc_sock),
+};
+
+int msm_ipc_router_init_sockets(void)
+{
+ int ret;
+
+ ret = proto_register(&msm_ipc_proto, 1);
+ if (ret) {
+ IPC_RTR_ERR("%s: Failed to register MSM_IPC protocol type\n",
+ __func__);
+ goto out_init_sockets;
+ }
+
+ ret = sock_register(&msm_ipc_family_ops);
+ if (ret) {
+ IPC_RTR_ERR("%s: Failed to register MSM_IPC socket type\n",
+ __func__);
+ proto_unregister(&msm_ipc_proto);
+ goto out_init_sockets;
+ }
+
+ mutex_lock(&ipcrtr_af_init_lock);
+ sockets_enabled = 1;
+ raw_notifier_call_chain(&ipcrtr_af_init_chain,
+ IPCRTR_AF_INIT, NULL);
+ mutex_unlock(&ipcrtr_af_init_lock);
+out_init_sockets:
+ return ret;
+}
+
+void msm_ipc_router_exit_sockets(void)
+{
+ if (!sockets_enabled)
+ return;
+
+ sock_unregister(msm_ipc_family_ops.family);
+ proto_unregister(&msm_ipc_proto);
+ mutex_lock(&ipcrtr_af_init_lock);
+ sockets_enabled = 0;
+ raw_notifier_call_chain(&ipcrtr_af_init_chain,
+ IPCRTR_AF_DEINIT, NULL);
+ mutex_unlock(&ipcrtr_af_init_lock);
+}