Merge "block: urgent: Fix dispatching of URGENT mechanism"
diff --git a/Documentation/ata/ahci_msm.txt b/Documentation/ata/ahci_msm.txt
new file mode 100644
index 0000000..2f84834
--- /dev/null
+++ b/Documentation/ata/ahci_msm.txt
@@ -0,0 +1,322 @@
+Introduction
+============
+The SATA host controller developed for Qualcomm SoCs is used
+to support SATA storage devices that connect to the SoC through a
+standard SATA cable interface. The MSM Advanced Host Controller
+Interface (AHCI) driver interfaces with the generic Linux AHCI driver
+and communicates with the AHCI controller for data movement between
+system memory and SATA devices (persistent storage).
+
+Hardware description
+====================
+Serial Advanced Technology Attachment (SATA) is a communication
+protocol designed to transfer data between a computer and storage
+devices (Hard Disk Drives (HDD), Solid State Drives (SSD), etc.).
+First generation (Gen1) SATA interfaces communicate at a serial
+rate of 1.5Gb/s and use low-voltage differential signaling on
+serial links. With 8b-10b encoding, the effective data throughput
+for a Gen1 interface is 150MB/s.
+
+The SATA host controller in Qualcomm chipsets adheres to the AHCI 1.3
+specification which describes the interface between system software
+and the host controller, as well as the functional behavior needed
+for software to communicate with the SATA host controller.
+
+The SATA PHY hardware macro in Qualcomm chipsets adheres to the
+SATA 3.0 specification with a Gen1 serial interface. It serializes and
+de-serializes the data exchanged with the SATA HDD. The PHY can also
+detect a SATA HDD during hot swap and raise an interrupt to the CPU
+through the AHCI controller to notify software of insertion/removal.
+
+The following figure shows the SATA architecture block diagram as
+implemented in MSM chipsets.
+
+ +---------+
+ |SATA Disk|
+ | Drive |
+ +---------+
+ ^ ^
+ Tx | | Rx
+ v v
++--------------+ +--------------+ +-----------+
+| System Memory| | SATA PHY | | CPU |
++--------------+ +--------------+ +-----------+
+ ^ ^ ^ ^ ^
+ | | | | |
+ | v v | |
+ | +------------------+(Interrupt)|
+ | | SATA CONTROLLER |-----+ |
+ | +------------------+ |
+ | ^ ^ |
+ | | | |
+ v v v v
+ <--------------------------------------------------------->
+< System Fabric (Control and Data) >
+ <--------------------------------------------------------->
+
+Some controller capabilities:
+- Supports 64-bit addressing
+- Supports native command queueing (up to 32 commands)
+- Supports first-party DMA to move data to and from system memory
+- ATA-7 command set compliant
+- Port multiplier support for some chipsets
+- Supports aggressive power management (partial, slumber modes)
+- Supports asynchronous notification
+
+Software description
+====================
+The SATA driver uses the generic interface to read/write data to
+the Hard Disk Drive (HDD). It uses the following components in Linux
+to interface with the generic block layer, which then interfaces
+with the file system or user processes.
+
+1) AHCI platform Driver (includes MSM-specific glue driver)
+2) LIBAHCI
+3) LIBATA
+4) SCSI
+
+The AHCI platform driver registers as a device driver for the platform
+device registered during SoC board initialization. It is responsible
+for platform-specific tasks like PHY configuration, clock
+initialization, and claiming memory resources. It also implements
+certain functionality that deviates from the standard specification.
+
+Library "LIBAHCI" implements software layer functionality described
+in the standard AHCI specification. It interfaces with the LIBATA
+framework to execute the SATA command set. It converts ATA task files
+into AHCI command descriptors and passes them to the controller for
+execution. It handles controller interrupts and sends command
+completion events to the upper layers. It implements a controller-
+specific reset and recovery mechanism in case of errors. It implements
+link power management policies - partial, slumber modes, runtime power
+management and platform power management. It abstracts the low-level
+controller details from the LIBATA framework.
+
+"LIBATA" is a helper library for implementing ATA and SATA command
+protocol as described in standard ATA and SATA specifications. It
+builds read/write requests from SCSI commands and passes them to the
+low-level controller driver (LLD). It handshakes with the SATA
+device using standard commands to understand capabilities and carry
+out device configurations. It interfaces with the SCSI layer to manage
+underlying disks. It manages different devices connected to each host
+port using a port multiplier. Also, it manages the link PHY component,
+the interconnect interface and any external interface (cables, etc.)
+that follow the SATA electrical specification.
+
+The SCSI layer is a helper library for translating generic block layer
+commands to SCSI commands and passing them on to the LIBATA framework.
+It handles generic functionality such as device scanning, media change,
+and hot-plug detection. This layer handles all types of SCSI devices,
+and SATA storage devices are one class of SCSI devices. It also provides
+the IOCTL interface to manage disks from userspace.
+
+The following is the logical code flow:
+
+ +------------------------+
+ | File System (ext4 etc.)|
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | Generic Block Layer |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | SCSI Layer |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | LIBATA library |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | LIBAHCI library |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | AHCI platform driver + |
+ | MSM AHCI glue driver |
+ +------------------------+
+
+Design
+======
+The MSM AHCI driver acts as a glue driver for the Linux
+AHCI controller driver. It provides the following functionality:
+- Registers as a driver for the msm_sata device, which has an AHCI-compliant
+  controller and PHY as resources.
+- Registers an AHCI platform device in the probe function, providing the
+  AHCI platform data described below (a code sketch follows this list).
+- AHCI platform data consists of the following callbacks:
+ - init
+ o PHY resource acquisition
+ o Clock and voltage regulator initialization
+ o PHY calibration
+ - exit
+ o PHY power down
+ o Clock and voltage regulator turn off
+ - suspend
+ - resume
+ o Sequence described in the next section.
+- The Linux AHCI platform driver then probes the AHCI device and
+ initializes it according to the standard AHCI specification.
+- The SATA drive is detected as part of scsi_scan_host() called by
+ LIBAHCI after controller initialization.
+
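+The following is a minimal sketch of this glue pattern. It is illustrative
+only, not the actual driver source; the function and variable names used
+here are assumptions.
+
+	#include <linux/err.h>
+	#include <linux/io.h>
+	#include <linux/module.h>
+	#include <linux/platform_device.h>
+	#include <linux/ahci_platform.h>
+
+	/* init callback: PHY resource acquisition, clock/regulator setup
+	 * and PHY calibration (details omitted in this sketch). */
+	static int msm_ahci_init(struct device *dev, void __iomem *addr)
+	{
+		return 0;
+	}
+
+	/* exit callback: PHY power down, clock/regulator turn off. */
+	static void msm_ahci_exit(struct device *dev)
+	{
+	}
+
+	static struct ahci_platform_data msm_ahci_pdata = {
+		.init = msm_ahci_init,
+		.exit = msm_ahci_exit,
+		/* .suspend/.resume would follow the sequence described
+		 * in the Power Management section below. */
+	};
+
+	static int msm_sata_probe(struct platform_device *pdev)
+	{
+		struct platform_device *ahci;
+
+		/* Register the generic "ahci" platform device and hand it
+		 * the MSM-specific callbacks as platform data. */
+		ahci = platform_device_register_data(&pdev->dev, "ahci", -1,
+						     &msm_ahci_pdata,
+						     sizeof(msm_ahci_pdata));
+		return IS_ERR(ahci) ? PTR_ERR(ahci) : 0;
+	}
+
+	static struct platform_driver msm_sata_driver = {
+		.probe	= msm_sata_probe,
+		.driver	= {
+			.name	= "msm_sata",
+			.owner	= THIS_MODULE,
+		},
+	};
+	module_platform_driver(msm_sata_driver);
+	MODULE_LICENSE("GPL v2");
+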
+Power Management
+================
+Various power modes are supported by this driver.
+
+Platform suspend/resume:
+During suspend:
+- PHY analog blocks are powered down
+- Controller and PHY are kept in Power-on-Reset (POR) mode
+- Clocks and voltage regulators are gated
+
+During resume:
+- Clocks and voltage regulators are ungated
+- PHY is powered up and calibrated to functional mode
+- Controller is re-initialized to process commands.
+
+Runtime suspend/resume:
+- Execute the same steps as in platform suspend/resume.
+- Runtime suspend/resume is disabled by default due to regressions
+  in hot-plug detection (specification limitation). Users can
+  enable runtime power management with the following shell commands:
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo auto > ./power/control
+ # echo auto > ./ata1/power/control
+ # echo auto > ./ata1/host0/target0:0:0/0:0:0:0/power/control
+
+  Note: The device will be runtime-suspended only when the user unmounts
+ all the partitions.
+
+Link power management (defined by AHCI 1.3 specification):
+- Automatic low power mode transitions are supported.
+- AHCI supports two power modes: partial and slumber.
+- Software uses Interface Communication Control (ICC) bits in the AHCI
+ register space to enable automatic partial/slumber state.
+- Partial mode:
+ - Software asserts automatic partial mode when the use
+ case demands low latency resume.
+  - Upon receiving the partial mode signal, the PHY disables byte clocks
+    and re-enables them during resume, which keeps resume latency low.
+- Slumber mode:
+ - Software asserts automatic slumber mode when the use
+ case demands low power consumption and can withstand
+ high resume latencies.
+ - Upon receiving slumber mode signal, PHY disables byte
+ clocks and some internal circuitry. Upon resume PHY
+ enables byte clocks and reacquires the PLL lock.
+- Once the software enables partial/slumber modes, the transition
+  into these modes is automatic and is handled by hardware without
+ software intervention while the controller is idle with no outstanding
+ commands to process.
+
+- The Linux AHCI link power management defines three modes:
+ - max_performance (default mode)
+    Does not allow partial/slumber transitions when the host is idle.
+ - medium_power (Partial mode)
+    The following shell commands enable this mode:
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo medium_power > ./ata1/host0/scsi_host/host0/link_power_management_policy
+
+ - min_power (Slumber mode)
+    The following shell commands enable this mode:
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo min_power > ./ata1/host0/scsi_host/host0/link_power_management_policy
+
+SMP/multi-core
+==============
+The MSM AHCI driver hooks only the init, exit, suspend, and resume
+callbacks into the AHCI driver. These callbacks are serialized by
+design, and hence the driver is inherently SMP safe.
+
+Security
+========
+None.
+
+Performance
+===========
+The theoretical performance with Gen1 SATA PHY is 150MB/s (8b/10b encoding).
+The performance is dependent on various factors, mainly:
+- Capabilities of the external SATA hard disk connected to the MSM SATA port
+- Various system bus frequencies and system loads
+- System memory capabilities
+- Benchmark test applications that collect performance numbers
+
+One example of the maximum performance achieved in a specific system
+configuration follows:
+
+Benchmark: Iozone sequential performance
+Block size: 128K
+File size: 1GB
+Platform: APQ8064 V2 CDP
+CPU Governor: Performance
+
+SanDisk SSD (i100 64GB):
+Read - 135MB/s
+Write - 125MB/s
+
+Western Digital HDD (WD20EURS 2TB):
+Read - 121MB/s
+Write - 98MB/s
+
+Interface
+=========
+The MSM AHCI controller driver provides function pointers as the
+required interface to the Linux AHCI controller driver. The main
+routines implemented are init, exit, suspend, and resume for handling
+MSM-specific initialization, freeing of resources on exit, and
+MSM-specific power management tweaks during suspend power collapse.
+
+Driver parameters
+=================
+None.
+
+Config options
+==============
+Config option SATA_AHCI_MSM in drivers/ata/Kconfig enables this driver.
+
+Dependencies
+============
+The MSM AHCI controller driver depends on the Linux AHCI driver, the Linux
+ATA framework, the Linux SCSI framework and the Linux generic block layer.
+
+While configuring the kernel, the following options should be set:
+
+- CONFIG_BLOCK
+- CONFIG_SCSI
+- CONFIG_ATA
+- CONFIG_SATA_AHCI_PLATFORM
+
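+For example, a kernel configuration enabling this driver might contain the
+following fragment (a sketch; the exact dependency list may vary between
+kernel versions):
+
+	CONFIG_BLOCK=y
+	CONFIG_SCSI=y
+	CONFIG_ATA=y
+	CONFIG_SATA_AHCI_PLATFORM=y
+	CONFIG_SATA_AHCI_MSM=y
+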
+User space utilities
+====================
+Any user space component that can mount a block device can be used to
+read/write data into persistent storage. However, at the time of this
+writing there are no utilities that the author is aware of that can manage
+the hardware from userspace.
+
+Other
+=====
+None.
+
+Known issues
+============
+None.
+
+To do
+=====
+- Device tree support.
+- MSM bus frequency voting support.
diff --git a/Documentation/cpu-freq/governors.txt b/Documentation/cpu-freq/governors.txt
index 6aed1ce..b4ae5e6 100644
--- a/Documentation/cpu-freq/governors.txt
+++ b/Documentation/cpu-freq/governors.txt
@@ -198,57 +198,74 @@
The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
-usage, similar to "ondemand" and "conservative" governors. However,
-the governor is more aggressive about scaling the CPU speed up in
-response to CPU-intensive activity.
-
-Sampling the CPU load every X ms can lead to under-powering the CPU
-for X ms, leading to dropped frames, stuttering UI, etc. Instead of
-sampling the cpu at a specified rate, the interactive governor will
-check whether to scale the cpu frequency up soon after coming out of
-idle. When the cpu comes out of idle, a timer is configured to fire
-within 1-2 ticks. If the cpu is very busy between exiting idle and
-when the timer fires then we assume the cpu is underpowered and ramp
-to MAX speed.
-
-If the cpu was not sufficiently busy to immediately ramp to MAX speed,
-then governor evaluates the cpu load since the last speed adjustment,
-choosing the highest value between that longer-term load or the
-short-term load since idle exit to determine the cpu speed to ramp to.
+usage, similar to "ondemand" and "conservative" governors, but with a
+different set of configurable behaviors.
The tuneable values for this governor are:
+target_loads: CPU load values used to adjust speed to influence the
+current CPU load toward that value. In general, the lower the target
+load, the more often the governor will raise CPU speeds to bring load
+below the target. The format is a single target load, optionally
+followed by pairs of CPU speeds and CPU loads to target at or above
+those speeds. Colons can be used between the speeds and associated
+target loads for readability. For example:
+
+ 85 1000000:90 1700000:99
+
+targets a CPU load of 85% below 1GHz, 90% at or above 1GHz up to
+1.7GHz, and 99% at 1.7GHz and above. If speeds are
+specified, they must appear in ascending order. Higher target load
+values are typically specified for higher speeds, that is, target load
+values also usually appear in an ascending order. The default is
+target load 90% for all speeds.
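+
+For example, assuming the interactive governor exposes its global tunables
+under /sys/devices/system/cpu/cpufreq/interactive (the exact path may differ
+between kernels), the example policy above could be applied with:
+
+	# cd /sys/devices/system/cpu/cpufreq/interactive
+	# echo "85 1000000:90 1700000:99" > target_loads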
+
min_sample_time: The minimum amount of time to spend at the current
-frequency before ramping down. This is to ensure that the governor has
-seen enough historic cpu load data to determine the appropriate
-workload. Default is 80000 uS.
+frequency before ramping down. Default is 80000 uS.
hispeed_freq: An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
-then speed may be bumped higher. Default is maximum speed.
+then speed may be bumped higher. Default is the maximum speed
+allowed by the policy at governor initialization time.
-go_hispeed_load: The CPU load at which to ramp to the intermediate "hi
-speed". Default is 85%.
+go_hispeed_load: The CPU load at which to ramp to hispeed_freq.
+Default is 99%.
-above_hispeed_delay: Once speed is set to hispeed_freq, wait for this
-long before bumping speed higher in response to continued high load.
+above_hispeed_delay: When speed is at or above hispeed_freq, wait for
+this long before raising speed in response to continued high load.
Default is 20000 uS.
-timer_rate: Sample rate for reevaluating cpu load when the system is
-not idle. Default is 20000 uS.
+timer_rate: Sample rate for reevaluating CPU load when the CPU is not
+idle. A deferrable timer is used, such that the CPU will not be woken
+from idle to service this timer until something else needs to run.
+(The maximum time to allow deferring this timer when not running at
+minimum speed is configurable via timer_slack.) Default is 20000 uS.
-input_boost: If non-zero, boost speed of all CPUs to hispeed_freq on
-touchscreen activity. Default is 0.
+timer_slack: Maximum additional time to defer handling the governor
+sampling timer beyond timer_rate when running at speeds above the
+minimum. For platforms that consume additional power at idle when
+CPUs are running at speeds greater than minimum, this places an upper
+bound on how long the timer will be deferred prior to re-evaluating
+load and dropping speed. For example, if timer_rate is 20000uS and
+timer_slack is 10000uS then timers will be deferred for up to 30msec
+when not at lowest speed. A value of -1 means defer timers
+indefinitely at all speeds. Default is 80000 uS.
boost: If non-zero, immediately boost speed of all CPUs to at least
hispeed_freq until zero is written to this attribute. If zero, allow
CPU speeds to drop below hispeed_freq according to load as usual.
+Default is zero.
-boostpulse: Immediately boost speed of all CPUs to hispeed_freq for
-min_sample_time, after which speeds are allowed to drop below
+boostpulse: On each write, immediately boost speed of all CPUs to
+hispeed_freq for at least the period of time specified by
+boostpulse_duration, after which speeds are allowed to drop below
hispeed_freq according to load as usual.
+boostpulse_duration: Length of time to hold CPU speed at hispeed_freq
+on a write to boostpulse, before allowing speed to drop according to
+load as usual. Default is 80000 uS.
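+
+For example, assuming the same global tunable location as noted for
+target_loads above, a single boost pulse could be triggered with:
+
+	# cd /sys/devices/system/cpu/cpufreq/interactive
+	# echo 1 > boostpulse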
+
3. The Governor Interface in the CPUfreq Core
=============================================
diff --git a/Documentation/devicetree/bindings/arm/arch_timer.txt b/Documentation/devicetree/bindings/arm/arch_timer.txt
index eb3986e..7851d53 100644
--- a/Documentation/devicetree/bindings/arm/arch_timer.txt
+++ b/Documentation/devicetree/bindings/arm/arch_timer.txt
@@ -6,7 +6,7 @@
The timer is attached to a GIC to deliver its two per-processor
interrupts (one for the secure mode, one for the non-secure mode).
-** Timer node properties:
+** CP15 Timer node properties:
- compatible : Should be "arm,armv7-timer"
@@ -21,3 +21,52 @@
interrupts = <1 13 0xf08 1 14 0xf08>;
clock-frequency = <100000000>;
};
+
+** Memory mapped timer node properties:
+
+- compatible : Should at least contain "arm,armv7-timer-mem".
+
+- clock-frequency : The frequency of the main counter, in Hz. Optional.
+
+- reg : The control frame base address.
+
+Note that #address-cells, #size-cells, and ranges shall be present to ensure
+the CPU can address the frame's registers.
+
+Each timer node has up to 8 frame sub-nodes with the following properties:
+
+- frame-number: 0 to 7.
+
+- interrupts : Interrupt list for physical and virtual timers in that order.
+ The virtual timer interrupt is optional.
+
+- reg : The first and second view base addresses in that order. The second view
+ base address is optional.
+
+- status : "disabled" indicates the frame is not available for use. Optional.
+
+Example:
+
+ timer@f0000000 {
+ compatible = "arm,armv7-timer-mem";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ reg = <0xf0000000 0x1000>;
+ clock-frequency = <50000000>;
+
+ frame0@f0001000 {
+			frame-number = <0>;
+ interrupts = <0 13 0x8>,
+ <0 14 0x8>;
+ reg = <0xf0001000 0x1000>,
+ <0xf0002000 0x1000>;
+ };
+
+ frame1@f0003000 {
+			frame-number = <1>;
+ interrupts = <0 15 0x8>;
+ reg = <0xf0003000 0x1000>;
+ status = "disabled";
+ };
+ };
diff --git a/Documentation/devicetree/bindings/arm/msm/acpuclock/acpuclock-a7.txt b/Documentation/devicetree/bindings/arm/msm/acpuclock/acpuclock-a7.txt
index 7a15f6a..a77d435 100644
--- a/Documentation/devicetree/bindings/arm/msm/acpuclock/acpuclock-a7.txt
+++ b/Documentation/devicetree/bindings/arm/msm/acpuclock/acpuclock-a7.txt
@@ -10,7 +10,6 @@
- reg-names: name of the bases for the above registers. "rcg_base"
is expected.
- a7_cpu-supply: regulator to supply a7 cpu
-- a7_mem-supply: regulator to supply a7 l2 cache
Example:
qcom,acpuclk@f9011050 {
@@ -18,5 +17,4 @@
reg = <0xf9011050 0x8>;
reg-names = "rcg_base";
a7_cpu-supply = <&pm8026_s2>;
- a7_mem-supply = <&pm8026_l3>;
};
diff --git a/Documentation/devicetree/bindings/arm/msm/pm-8x60.txt b/Documentation/devicetree/bindings/arm/msm/pm-8x60.txt
index c741514..82e7e2a 100644
--- a/Documentation/devicetree/bindings/arm/msm/pm-8x60.txt
+++ b/Documentation/devicetree/bindings/arm/msm/pm-8x60.txt
@@ -25,6 +25,7 @@
- qcom,saw-turns-off-pll: Version of SAW2.1 or can turn off the HFPLL, when
doing power collapse and so the core need to switch to Global PLL before
PC.
+- qcom,pc-resets-timer: Indicates that the timer gets reset during power collapse.
Example:
diff --git a/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt b/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
index 88d69e0..86d863c 100644
--- a/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
+++ b/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
@@ -5,12 +5,26 @@
Required properties:
- compatible: Should be "qca,ar3002"
- qca,bt-reset-gpio: GPIO pin to bring BT Controller out of reset
+ - qca,bt-vdd-io-supply: Bluetooth VDD IO regulator handle
+ - qca,bt-vdd-pa-supply: Bluetooth VDD PA regulator handle
Optional properties:
- None
+ - qca,bt-vdd-ldo-supply: Bluetooth VDD LDO regulator handle. Kept under optional
+   properties because some chipsets do not require an LDO or may be supplied from
+   the same VDD IO.
+ - qca,bt-chip-pwd-supply: Chip power-down supply. Required when the Bluetooth
+   module and other modules such as WiFi co-exist in a single chip and share a
+   common GPIO to bring the chip out of reset.
+ - qca,bt-vdd-io-voltage-level: min and max voltages for the vdd io regulator
+ - qca,bt-vdd-pa-voltage-level: min and max voltages for the vdd pa regulator
+ - qca,bt-vdd-ldo-voltage-level: min and max voltages for the vdd ldo regulator
Example:
bt-ar3002 {
compatible = "qca,ar3002";
qca,bt-reset-gpio = <&pm8941_gpios 34 0>;
+ qca,bt-vdd-io-supply = <&pm8941_s3>;
+ qca,bt-vdd-pa-supply = <&pm8941_l19>;
+ qca,bt-chip-pwd-supply = <&ath_chip_pwd_l>;
+        qca,bt-vdd-io-voltage-level = <1800000 1800000>;
+        qca,bt-vdd-pa-voltage-level = <2900000 2900000>;
};
diff --git a/Documentation/devicetree/bindings/cache/msm_cache_erp.txt b/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
index 400b299..8d00cc2 100644
--- a/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
+++ b/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
@@ -7,6 +7,17 @@
- interrupt-names: Should contain the interrupt names "l1_irq" and
"l2_irq"
+Optional properties:
+- reg: A set of I/O regions to be dumped in the event of a hardware fault being
+  detected. If this property is present, the "reg-names" property must be
+ present as well.
+- reg-names: Human-readable names assigned to the I/O regions defined by the
+ "reg" property. The names can be completely arbitrary, since they are
+ intended to be human-read during failure analysis, and because the set of I/O
+ regions of interest may vary with the overall system design. This property
+ shall only be present if the "reg" property is present, and must contain as
+ many elements as the "reg" property.
+
Example:
qcom,cache_erp {
compatible = "qcom,cache_erp";
@@ -14,3 +25,17 @@
interrupt-names = "l1_irq", "l2_irq";
};
+Example with "reg" property defined:
+ qcom,cache_erp@f9012000 {
+ reg = <0xf9012000 0x80>,
+ <0xf9089000 0x80>,
+ <0xf9099000 0x80>;
+
+ reg-names = "l2_saw",
+ "krait0_saw",
+ "krait1_saw";
+
+ compatible = "qcom,cache_erp";
+ interrupts = <1 9 0>, <0 2 0>;
+ interrupt-names = "l1_irq", "l2_irq";
+ };
diff --git a/Documentation/devicetree/bindings/fb/mdss-dsi-ctrl.txt b/Documentation/devicetree/bindings/fb/mdss-dsi-ctrl.txt
index 5c426f2..ce4972a 100644
--- a/Documentation/devicetree/bindings/fb/mdss-dsi-ctrl.txt
+++ b/Documentation/devicetree/bindings/fb/mdss-dsi-ctrl.txt
@@ -17,12 +17,6 @@
- label: A string used to describe the controller used.
- qcom,supply-names: A list of strings that lists the names of the
regulator supplies.
-- qcom,supply-type: A list of strings that list the type of supply(ies)
- mentioned above. This list maps in the order of
- the supply names listed above.
- regulator = supply with controlled output
- switch = supply without controlled output. i.e.
- voltage switch
- qcom,supply-min-voltage-level: A list that specifies minimum voltage level
of supply(ies) mentioned above. This list maps
in the order of the supply names listed above.
@@ -44,7 +38,6 @@
vddio-supply = <&pm8226_l8>;
vdda-supply = <&pm8226_l4>;
qcom,supply-names = "vdd", "vddio", "vdda";
- qcom,supply-type = "regulator", "regulator", "regulator";
qcom,supply-min-voltage-level = <2800000 1800000 1200000>;
qcom,supply-max-voltage-level = <2800000 1800000 1200000>;
qcom,supply-peak-current = <150000 100000 100000>;
diff --git a/Documentation/devicetree/bindings/fb/mdss-mdp.txt b/Documentation/devicetree/bindings/fb/mdss-mdp.txt
index 0422b57..9bc949e 100644
--- a/Documentation/devicetree/bindings/fb/mdss-mdp.txt
+++ b/Documentation/devicetree/bindings/fb/mdss-mdp.txt
@@ -108,6 +108,21 @@
- qcom,mdss-rot-block-size: The size of a memory block (in pixels) to be used
by the rotator. If this property is not specified,
then a default value of 128 pixels would be used.
+- qcom,mdss-has-bwc: Boolean property to indicate the presence of bandwidth
+ compression feature in the rotator.
+- qcom,mdss-has-decimation: Boolean property to indicate the presence of
+ decimation feature in fetch.
+- qcom,mdss-ad-off: Array of offset addresses for the available
+ Assertive Display (AD) blocks. These offsets
+ are calculated from the register "mdp_phys"
+ defined in reg property. The number of AD
+ offsets should be less than or equal to the
+ number of mixers driving interfaces defined in
+ property: qcom,mdss-mixer-intf-off. Assumes
+ that AD blocks are aligned with the mixer
+ offsets as well (i.e. the first mixer offset
+ corresponds to the same pathway as the first
+ AD offset).
Optional subnodes:
Child nodes representing the frame buffer virtual devices.
@@ -115,6 +130,9 @@
Subnode properties:
- compatible : Must be "qcom,mdss-fb"
- cell-index : Index representing frame buffer
+- qcom,mdss-mixer-swap: A boolean property that indicates if the mixer muxes
+ need to be swapped based on the target panel.
+ By default the property is not defined.
@@ -141,6 +159,8 @@
qcom,mdss-pipe-dma-fetch-id = <10 13>;
qcom,mdss-smp-data = <22 4096>;
qcom,mdss-rot-block-size = <64>;
+ qcom,mdss-has-bwc;
+ qcom,mdss-has-decimation;
qcom,mdss-ctl-off = <0x00000600 0x00000700 0x00000800
0x00000900 0x0000A00>;
@@ -157,6 +177,7 @@
mdss_fb0: qcom,mdss_fb_primary {
cell-index = <0>;
compatible = "qcom,mdss-fb";
+ qcom,mdss-mixer-swap;
};
};
diff --git a/Documentation/devicetree/bindings/fb/msm-hdmi-tx.txt b/Documentation/devicetree/bindings/fb/msm-hdmi-tx.txt
index a2b66f7..8579ec0 100644
--- a/Documentation/devicetree/bindings/fb/msm-hdmi-tx.txt
+++ b/Documentation/devicetree/bindings/fb/msm-hdmi-tx.txt
@@ -12,15 +12,12 @@
- core-vcc-supply: phandle to the HDMI vcc regulator device tree node.
- qcom,hdmi-tx-supply-names: a list of strings that map in order
to the list of supplies.
-- qcom,hdmi-tx-supply-type: a type of supply(ies) mentioned above.
- 0 = supply with controlled output
- 1 = supply without controlled output. i.e. voltage switch
- qcom,hdmi-tx-min-voltage-level: specifies minimum voltage level
of supply(ies) mentioned above.
- qcom,hdmi-tx-max-voltage-level: specifies maximum voltage level
of supply(ies) mentioned above.
-- qcom,hdmi-tx-op-mode: specifies optimum operating mode of
- supply(ies) mentioned above.
+- qcom,hdmi-tx-peak-current: specifies the peak current that will be
+ drawn from the supply(ies) mentioned above.
- qcom,hdmi-tx-cec: gpio for Consumer Electronics Control (cec) line.
- qcom,hdmi-tx-ddc-clk: gpio for Display Data Channel (ddc) clock line.
@@ -56,10 +53,9 @@
core-vdda-supply = <&pm8941_l12>;
core-vcc-supply = <&pm8941_s3>;
qcom,hdmi-tx-supply-names = "hpd-gdsc", "hpd-5v", "core-vdda", "core-vcc";
- qcom,hdmi-tx-supply-type = <1 1 0 0>;
qcom,hdmi-tx-min-voltage-level = <0 0 1800000 1800000>;
qcom,hdmi-tx-max-voltage-level = <0 0 1800000 1800000>;
- qcom,hdmi-tx-op-mode = <0 0 1800000 0>;
+ qcom,hdmi-tx-peak-current = <0 0 1800000 0>;
qcom,hdmi-tx-cec = <&msmgpio 31 0>;
qcom,hdmi-tx-ddc-clk = <&msmgpio 32 0>;
diff --git a/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt b/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
index fbe8ffa..418447d 100644
--- a/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
+++ b/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
@@ -16,7 +16,10 @@
- interrupt-names : Should contain "eoc-int-en-set".
- qcom,adc-bit-resolution : Bit resolution of the ADC.
- qcom,adc-vdd-reference : Voltage reference used by the ADC.
-- qcom,rsense : Internal rsense resistor used for current measurements.
+
+Optional properties:
+- qcom,rsense : Use this property when an external rsense should be used
+  for current calculation; specify the value in nano-ohms.
Channel node
NOTE: Atleast one Channel node is required.
diff --git a/Documentation/devicetree/bindings/leds/leds-qpnp.txt b/Documentation/devicetree/bindings/leds/leds-qpnp.txt
index 4f31f07..96d95da 100644
--- a/Documentation/devicetree/bindings/leds/leds-qpnp.txt
+++ b/Documentation/devicetree/bindings/leds/leds-qpnp.txt
@@ -4,7 +4,7 @@
controlling LEDs that are part of PMIC on Qualcomm reference
platforms. The PMIC is connected to Host processor via
SPMI bus. This driver supports various LED modules such as
-WLED (white LED), RGB LED and flash LED.
+Keypad backlight, WLED (white LED), RGB LED and flash LED.
Each LED module is represented as a node of "leds-qpnp". This
node will further contain the type of LED supported and its
@@ -68,8 +68,42 @@
- qcom,default-state: default state of the led, should be "on" or "off"
- qcom,turn-off-delay-ms: delay in millisecond for turning off the led when its default-state is "on". Value is being ignored in case default-state is "off".
+MPP LED is an LED controlled through a Multi Purpose Pin.
+
+Optional properties for MPP LED:
+- linux,default-trigger: trigger the led from external modules such as display
+- qcom,default-state: default state of the led, should be "on" or "off"
+- qcom,source-sel: select power source, default 1 (enabled)
+- qcom,mode-ctrl: select operation mode, default 0x60 = Mode Sink
+
+Keypad backlight is a backlight source for buttons. It supports four rows
+and the required rows are enabled by specifying values in the properties.
+
+Required properties for keypad backlight:
+- qcom,mode: mode the led should operate in, options 0 = PWM, 1 = LPG
+- qcom,pwm-channel: pwm channel the led will operate on
+- qcom,pwm-us: time the pwm device will modulate at (us)
+- qcom,row-src-sel-val: select source for rows. One bit is used for each row.
+ Specify 0 for vph_pwr and 1 for vbst for each row.
+- qcom,row-scan-val: select rows for scanning
+- qcom,row-scan-en: row scan enable
+
Example:
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+			linux,default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+
qcom,leds@d000 {
status = "okay";
qcom,rgb_pwm {
@@ -151,3 +185,19 @@
linux,name = "led:wled_backlight";
};
};
+
+ qcom,leds@e200 {
+ status = "okay";
+ qcom,kpdbl {
+ label = "kpdbl";
+ linux,name = "button-backlight";
+ qcom,mode = <0>;
+ qcom,pwm-channel = <8>;
+ qcom,pwm-us = <1000>;
+ qcom,id = <7>;
+ qcom,max-current = <20>;
+ qcom,row-src-sel-val = <0x00>;
+ qcom,row-scan-en = <0x01>;
+ qcom,row-scan-val = <0x01>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/pil/pil-pronto.txt b/Documentation/devicetree/bindings/pil/pil-pronto.txt
index 199862f..85ccc5d 100644
--- a/Documentation/devicetree/bindings/pil/pil-pronto.txt
+++ b/Documentation/devicetree/bindings/pil/pil-pronto.txt
@@ -14,6 +14,7 @@
- vdd_pronto_pll-supply: regulator to supply pronto pll.
- qcom,firmware-name: Base name of the firmware image. Ex. "wcnss"
- qcom,gpio-err-fatal: GPIO used by the wcnss to indicate error fatal to the Apps.
+- qcom,gpio-err-ready: GPIO used by the wcnss to indicate error ready to the Apps.
- qcom,gpio-proxy-unvote: GPIO used by the wcnss to trigger proxy unvoting in
the Apps
- qcom,gpio-force-stop: GPIO used by the Apps to force the wcnss to shutdown.
@@ -32,6 +33,7 @@
/* GPIO input from wcnss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_4_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_4_in 1 0>;
qcom,proxy-unvote = <&smp2pgpio_ssr_smp2p_4_in 2 0>;
/* GPIO output to wcnss */
diff --git a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
index 70f8b55..4cbff52 100644
--- a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
+++ b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
@@ -14,6 +14,9 @@
- interrupts: The lpass watchdog interrupt
- vdd_cx-supply: Reference to the regulator that supplies the vdd_cx domain.
- qcom,firmware-name: Base name of the firmware image. Ex. "lpass"
+- qcom,gpio-err-fatal: GPIO used by the lpass to indicate error fatal to the apps.
+- qcom,gpio-force-stop: GPIO used by the apps to force the lpass to shutdown.
+- qcom,gpio-proxy-unvote: GPIO used by the lpass to indicate apps clock is ready.
Optional properties:
- vdd_pll-supply: Reference to the regulator that supplies the PLL's rail.
@@ -29,4 +32,11 @@
interrupts = <0 194 1>;
vdd_cx-supply = <&pm8841_s2>;
qcom,firmware-name = "lpass";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
diff --git a/Documentation/devicetree/bindings/pil/pil-q6v5-mss.txt b/Documentation/devicetree/bindings/pil/pil-q6v5-mss.txt
index 2d20794..c3d929c 100644
--- a/Documentation/devicetree/bindings/pil/pil-q6v5-mss.txt
+++ b/Documentation/devicetree/bindings/pil/pil-q6v5-mss.txt
@@ -17,6 +17,7 @@
- vdd_mx-supply: Reference to the regulator that supplies the memory rail.
- qcom,firmware-name: Base name of the firmware image. Ex. "mdsp"
- qcom,gpio-err-fatal: GPIO used by the modem to indicate error fatal to the apps.
+- qcom,gpio-err-ready: GPIO used by the modem to indicate error ready to the apps.
- qcom,gpio-proxy-unvote: GPIO used by the modem to trigger proxy unvoting in
the apps.
- qcom,gpio-force-stop: GPIO used by the apps to force the modem to shutdown.
@@ -49,6 +50,7 @@
/* GPIO inputs from mss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_1_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_1_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_1_in 2 0>;
/* GPIO output to mss */
diff --git a/Documentation/devicetree/bindings/power/qpnp-charger.txt b/Documentation/devicetree/bindings/power/qpnp-charger.txt
index f5465a4..fced0d7 100644
--- a/Documentation/devicetree/bindings/power/qpnp-charger.txt
+++ b/Documentation/devicetree/bindings/power/qpnp-charger.txt
@@ -7,12 +7,12 @@
Each of these peripherals are implemented as subnodes in the example at the
end of this file.
-- qcom,chg-chgr: Supports charging control and status
+- qcom,chgr: Supports charging control and status
reporting.
-- qcom,chg-bat-if: Battery status reporting such as presence,
+- qcom,bat-if: Battery status reporting such as presence,
temperature reporting and voltage collapse
protection.
-- qcom,chg-buck: Charger buck configuration and status
+- qcom,buck: Charger buck configuration and status
reporting with regards to several regulation
loops such as vdd, ibat etc.
- qcom,usb-chgpth: USB charge path detection and input current
@@ -23,38 +23,39 @@
settings, comparator override features etc.
Parent node required properties:
-- qcom,chg-vddmax-mv: Target voltage of battery in mV.
-- qcom,chg-vddsafe-mv: Maximum Vdd voltage in mV.
-- qcom,chg-vinmin-mv: Minimum input voltage in mV.
-- qcom,chg-ibatmax-ma: Maximum battery charge current in mA
-- qcom,chg-ibatsafe-ma: Safety battery current setting
-- qcom,chg-thermal-mitigation: Array of ibatmax values for different
+- qcom,vddmax-mv: Target voltage of battery in mV.
+- qcom,vddsafe-mv: Maximum Vdd voltage in mV.
+- qcom,vinmin-mv: Minimum input voltage in mV.
+- qcom,ibatmax-ma: Maximum battery charge current in mA
+- qcom,ibatsafe-ma: Safety battery current setting
+- qcom,thermal-mitigation: Array of ibatmax values for different
system thermal mitigation level.
Parent node optional properties:
-- qcom,chg-ibatterm-ma: Current at which charging is terminated when
+- qcom,ibatterm-ma: Current at which charging is terminated when
the analog end of charge option is selected.
-- qcom,chg-maxinput-usb-ma: Maximum input current USB.
-- qcom,chg-maxinput-dc-ma: Maximum input current DC.
-- qcom,chg-vbatdet-delta-mv: Battery charging resume delta.
-- qcom,chg-charging-disabled: Set this property to disable charging
+- qcom,maxinput-usb-ma: Maximum input current USB.
+- qcom,maxinput-dc-ma: Maximum input current DC.
+- qcom,vbatdet-delta-mv: Battery charging resume delta.
+- qcom,charging-disabled: Set this property to disable charging
by default. This can then be overriden
writing the the module parameter
"charging_disabled".
-- qcom,chg-use-default-batt-values: Set this flag to force reporting of
+- qcom,use-default-batt-values: Set this flag to force reporting of
battery temperature of 250 decidegree
Celsius, state of charge to be 50%
and disable charging.
-- qcom,chg-warm-bat-decidegc: Warm battery temperature in decidegC.
-- qcom,chg-cool-bat-decidegc: Cool battery temperature in decidegC.
+- qcom,warm-bat-decidegc: Warm battery temperature in decidegC.
+- qcom,cool-bat-decidegc: Cool battery temperature in decidegC.
Note that if both warm and cool battery
temperatures are set, the corresponding
ibatmax and bat-mv properties are
required to be set.
-- qcom,chg-ibatmax-cool-ma: Maximum cool battery charge current.
-- qcom,chg-ibatmax-warm-ma: Maximum warm battery charge current.
-- qcom,chg-warm-bat-mv: Warm temperature battery target voltage.
-- qcom,chg-cool-bat-mv: Cool temperature battery target voltage.
+- qcom,ibatmax-cool-ma: Maximum cool battery charge current.
+- qcom,ibatmax-warm-ma: Maximum warm battery charge current.
+- qcom,warm-bat-mv: Warm temperature battery target voltage.
+- qcom,cool-bat-mv: Cool temperature battery target voltage.
+- qcom,tchg-mins: Maximum total software initialized charge time.
Sub node required structure:
- A qcom,chg node must be a child of an SPMI node that has specified
@@ -77,13 +78,13 @@
qcom,usb-chgpth:
- usbin-valid
- qcom,chg-chgr:
+ qcom,chgr:
- chg-done
- chg-failed
The following interrupts are available:
- qcom,chg-chgr:
+ qcom,chgr:
- chg-done: Triggers on charge completion.
- chg-failed: Notifies of charge failures.
- fast-chg-on: Notifies of fast charging state.
@@ -98,7 +99,7 @@
setting, can be used as
battery alarm.
- qcom,chg-buck:
+ qcom,buck:
- vdd-loop: VDD loop change interrupt.
- ibat-loop: Ibat loop change interrupt.
- ichg-loop: Charge current loop change.
@@ -107,7 +108,7 @@
- vref-ov: Reference overvoltage interrupt.
- vbat-ov: Battery overvoltage interrupt.
- qcom,chg-bat-if:
+ qcom,bat-if:
- psi: PMIC serial interface interrupt.
- vcp-on: Voltage collapse protection
status interrupt.
@@ -140,22 +141,22 @@
#address-cells = <1>;
#size-cells = <1>;
- qcom,chg-vddmax-mv = <4200>;
- qcom,chg-vddsafe-mv = <4200>;
- qcom,chg-vinmin-mv = <4200>;
- qcom,chg-ibatmax-ma = <1500>;
- qcom,chg-ibatterm-ma = <200>;
- qcom,chg-ibatsafe-ma = <1500>;
- qcom,chg-thermal-mitigation = <1500 700 600 325>;
- qcom,chg-cool-bat-degc = <10>;
- qcom,chg-cool-bat-mv = <4100>;
- qcom,chg-ibatmax-warm-ma = <350>;
- qcom,chg-warm-bat-degc = <45>;
- qcom,chg-warm-bat-mv = <4100>;
- qcom,chg-ibatmax-cool-ma = <350>;
- qcom,chg-vbatdet-delta-mv = <60>;
+ qcom,vddmax-mv = <4200>;
+ qcom,vddsafe-mv = <4200>;
+ qcom,vinmin-mv = <4200>;
+ qcom,ibatmax-ma = <1500>;
+ qcom,ibatterm-ma = <200>;
+ qcom,ibatsafe-ma = <1500>;
+ qcom,thermal-mitigation = <1500 700 600 325>;
+ qcom,cool-bat-degc = <10>;
+ qcom,cool-bat-mv = <4100>;
+ qcom,ibatmax-warm-ma = <350>;
+ qcom,warm-bat-degc = <45>;
+ qcom,warm-bat-mv = <4100>;
+ qcom,ibatmax-cool-ma = <350>;
+ qcom,vbatdet-delta-mv = <60>;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
reg = <0x1000 0x100>;
interrupts = <0x0 0x10 0x0>,
<0x0 0x10 0x1>,
@@ -176,7 +177,7 @@
"vbat-det-lo";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
reg = <0x1100 0x100>;
interrupts = <0x0 0x11 0x0>,
<0x0 0x11 0x1>,
@@ -195,7 +196,7 @@
"vbat-ov";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
reg = <0x1200 0x100>;
interrupts = <0x0 0x12 0x0>,
<0x0 0x12 0x1>,
@@ -210,7 +211,7 @@
"batt-pres";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
reg = <0x1300 0x100>;
interrupts = <0 0x13 0x0>,
<0 0x13 0x1>,
@@ -221,7 +222,7 @@
"chg-gone";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
reg = <0x1400 0x100>;
interrupts = <0x0 0x14 0x0>,
<0x0 0x14 0x1>;
@@ -230,7 +231,7 @@
"coarse-det-dc";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
reg = <0x1500 0x100>;
interrupts = <0x0 0x15 0x0>,
<0x0 0x15 0x1>;
@@ -239,7 +240,7 @@
"boost-pwr-ok";
};
- qcom,chg-misc@1600 {
+ qcom,misc@1600 {
reg = <0x1600 0x100>;
};
};
diff --git a/Documentation/devicetree/bindings/regulator/gdsc-regulator.txt b/Documentation/devicetree/bindings/regulator/gdsc-regulator.txt
index eb62ea1..f2cfe34 100644
--- a/Documentation/devicetree/bindings/regulator/gdsc-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/gdsc-regulator.txt
@@ -11,9 +11,10 @@
Optional properties:
- parent-supply: phandle to the parent supply/regulator node
- - qcom,retain-mems: For Oxili GDSCs only: Presence currently denotes a hardware
- requirement to assert the forced memory retention signals
- in the core's clock branch control register.
+ - qcom,clock-names: List of string names for core clocks
+ - qcom,retain-mems: Presence denotes a hardware requirement to leave the
+ forced memory retention signals in the core's clock
+ branch control register asserted.
Example:
gdsc_oxili_gx: qcom,gdsc@fd8c4024 {
@@ -21,4 +22,5 @@
regulator-name = "gdsc_oxili_gx";
parent-supply = <&pm8841_s4>;
reg = <0xfd8c4024 0x4>;
+ qcom,clock-names = "core_clk";
};
diff --git a/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt b/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
index 2116888..041928d 100644
--- a/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
@@ -16,12 +16,17 @@
the spmi-slave-container property
Optional properties:
+- interrupts: List of interrupts used by the regulator.
+- interrupt-names: List of strings defining the names of the
+ interrupts in the 'interrupts' property 1-to-1.
+ Supported values are "ocp" for voltage switch
+ type regulators. If an OCP interrupt is
+ specified, then the voltage switch will be
+ toggled off and back on when OCP triggers in
+ order to handle high in-rush current.
- qcom,system-load: Load in uA present on regulator that is not
captured by any consumer request
- qcom,enable-time: Time in us to delay after enabling the regulator
-- qcom,ocp-enable-time: Time to delay in us between enabling a switch and
- subsequently enabling over current protection
- (OCP) for the switch
- qcom,auto-mode-enable: 1 = Enable automatic hardware selection of
regulator mode (HPM vs LPM); not available on
boost type regulators
@@ -30,11 +35,18 @@
so that it acts like a switch and simply outputs
its input voltage
0 = Do not enable bypass mode
-- qcom,ocp-enable: 1 = Enable over current protection (OCP) for
- voltage switch type regulators so that they
- latch off automatically when over current is
- detected
+- qcom,ocp-enable: 1 = Allow over current protection (OCP) to be
+ enabled for voltage switch type regulators so
+ that they latch off automatically when over
+ current is detected. OCP is enabled when in
+ HPM or auto mode.
0 = Disable OCP
+- qcom,ocp-max-retries: Maximum number of times to try toggling a voltage
+ switch off and back on as a result of
+ consecutive over current events.
+- qcom,ocp-retry-delay: Time to delay in milliseconds between each
+ voltage switch toggle after an over current
+ event takes place.
- qcom,pull-down-enable: 1 = Enable output pull down resistor when the
regulator is disabled
0 = Disable pull down resistor
@@ -76,6 +88,16 @@
1 = 0.25 uA
2 = 0.55 uA
3 = 0.75 uA
+- qcom,hpm-enable: 1 = Enable high power mode (HPM), also referred
+ to as NPM. HPM consumes more ground current
+ than LPM, but it can source significantly higher
+ load current. HPM is not available on boost
+ type regulators. For voltage switch type
+ regulators, HPM implies that over current
+ protection and soft start are active all the
+ time. This configuration can be overwritten
+ by changing the regulator's mode dynamically.
+ 0 = Do not enable HPM
- qcom,force-type: Override the type and subtype register values. Useful for some
regulators that have invalid types advertised by the hardware.
The format is two unsigned integers of the form <type subtype>.
diff --git a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
index fe3d62f..9f6cb16 100644
--- a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
+++ b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
@@ -443,21 +443,27 @@
Each entry is a pair of strings, the first being the connection's sink,
the second being the connection's source.
- qcom,cdc-mclk-gpios : GPIO on which mclk signal is comming.
-- taiko-mclk-clk : phandle to PMIC8941 clkdiv1 node.
- qcom,taiko-mclk-clk-freq : Taiko mclk Freq in Hz. currently only 9600000Hz
is supported.
- qcom,prim-auxpcm-gpio-clk : GPIO on which Primary AUXPCM clk signal is coming.
- qcom,prim-auxpcm-gpio-sync : GPIO on which Primary AUXPCM SYNC signal is coming.
- qcom,prim-auxpcm-gpio-din : GPIO on which Primary AUXPCM DIN signal is coming.
- qcom,prim-auxpcm-gpio-dout : GPIO on which Primary AUXPCM DOUT signal is coming.
+- qcom,prim-auxpcm-gpio-set : set of GPIO lines used for Primary AUXPCM port
+ Possible Values:
+ prim-gpio-prim : Primary AUXPCM shares GPIOs with Primary MI2S
+ prim-gpio-tert : Primary AUXPCM shares GPIOs with Tertiary MI2S
- qcom,sec-auxpcm-gpio-clk : GPIO on which Secondary AUXPCM clk signal is coming.
- qcom,sec-auxpcm-gpio-sync : GPIO on which Secondary AUXPCM SYNC signal is coming.
- qcom,sec-auxpcm-gpio-din : GPIO on which Secondary AUXPCM DIN signal is coming.
- qcom,sec-auxpcm-gpio-dout : GPIO on which Secondary AUXPCM DOUT signal is coming.
- qcom,us-euro-gpios : GPIO on which gnd/mic swap signal is coming.
-
Optional properties:
- qcom,hdmi-audio-rx: specifies if HDMI audio support is enabled or not.
+- qcom,ext-ult-spk-amp-gpio : GPIO for enabling the speaker path amplifier.
+
+- qcom,ext-ult-lo-amp-gpio: GPIO to enable external ultrasound lineout
+ amplifier.
Example:
@@ -495,16 +501,17 @@
"MIC BIAS4 External", "Digital Mic6";
qcom,cdc-mclk-gpios = <&pm8941_gpios 15 0>;
- taiko-mclk-clk = <&pm8941_clkdiv1>;
qcom,taiko-mclk-clk-freq = <9600000>;
qcom,us-euro-gpios = <&pm8941_gpios 20 0>;
qcom,hdmi-audio-rx;
+ qcom,ext-ult-lo-amp-gpio = <&pm8941_gpios 6 0>;
qcom,prim-auxpcm-gpio-clk = <&msmgpio 65 0>;
qcom,prim-auxpcm-gpio-sync = <&msmgpio 66 0>;
qcom,prim-auxpcm-gpio-din = <&msmgpio 67 0>;
qcom,prim-auxpcm-gpio-dout = <&msmgpio 68 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-prim";
qcom,sec-auxpcm-gpio-clk = <&msmgpio 79 0>;
qcom,sec-auxpcm-gpio-sync = <&msmgpio 80 0>;
qcom,sec-auxpcm-gpio-din = <&msmgpio 81 0>;
diff --git a/Documentation/devicetree/bindings/sound/taiko_codec.txt b/Documentation/devicetree/bindings/sound/taiko_codec.txt
index ffea58f..777933a 100644
--- a/Documentation/devicetree/bindings/sound/taiko_codec.txt
+++ b/Documentation/devicetree/bindings/sound/taiko_codec.txt
@@ -35,6 +35,10 @@
- qcom,cdc-vddcx-2-voltage: cx-2 supply's voltage level min and max in mV.
- qcom,cdc-vddcx-2-current: cx-2 supply's max current in mA.
+ - qcom,cdc-static-supplies: List of supplies to be enabled prior to codec
+                              hardware probe. Supplies in this list will
+                              stay enabled.
+
- qcom,cdc-micbias-ldoh-v - LDOH output in volts (should be 1.95 V and 3.00 V).
- qcom,cdc-micbias-cfilt1-mv - cfilt1 output voltage in milli volts.
@@ -67,6 +71,11 @@
values for 9.6MHZ mclk can be 2400000 Hz, 3200000 Hz
and 4800000 Hz. The values for 12.288MHz mclk can be
3072200 Hz, 4096000 Hz and 6144000 Hz.
+
+ - qcom,cdc-on-demand-supplies: List of supplies which can be enabled
+ dynamically.
+ Supplies in this list are off by default.
+
Example:
taiko_codec {
@@ -103,6 +112,16 @@
qcom,cdc-vddcx-2-voltage = <1225000 1225000>;
qcom,cdc-vddcx-2-current = <5000>;
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-vdd-tx-h",
+ "cdc-vdd-rx-h",
+ "cdc-vddpx-1",
+ "cdc-vdd-a-1p2v",
+ "cdc-vddcx-1",
+ "cdc-vddcx-2";
+
+	qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
+
qcom,cdc-micbias-ldoh-v = <0x3>;
qcom,cdc-micbias-cfilt1-mv = <1800>;
qcom,cdc-micbias-cfilt2-mv = <2700>;
@@ -155,6 +174,10 @@
- qcom,cdc-vddcx-2-voltage: cx-2 supply's voltage level min and max in mV.
- qcom,cdc-vddcx-2-current: cx-2 supply's max current in mA.
+ - qcom,cdc-static-supplies: List of supplies to be enabled prior to codec
+                              hardware probe. Supplies in this list will
+                              stay enabled.
+
- qcom,cdc-micbias-ldoh-v - LDOH output in volts (should be 1.95 V and 3.00 V).
- qcom,cdc-micbias-cfilt1-mv - cfilt1 output voltage in milli volts.
@@ -179,6 +202,20 @@
- qcom,cdc-mclk-clk-rate - Specifies the master clock rate in Hz required for
codec.
+Optional properties:
+
+ - cdc-vdd-spkdrv-supply: phandle of spkdrv supply's regulator device tree node.
+ - qcom,cdc-vdd-spkdrv-voltage: spkdrv supply voltage level min and max in mV.
+ - qcom,cdc-vdd-spkdrv-current: spkdrv supply max current in mA.
+
+ - qcom,cdc-on-demand-supplies: List of supplies which can be enabled
+ dynamically.
+ Supplies in this list are off by default.
+
Example:
i2c@f9925000 {
cell-index = <3>;
@@ -228,6 +265,16 @@
qcom,cdc-vddcx-2-voltage = <1200000 1200000>;
qcom,cdc-vddcx-2-current = <10000>;
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-vdd-tx-h",
+ "cdc-vdd-rx-h",
+ "cdc-vddpx-1",
+ "cdc-vdd-a-1p2v",
+ "cdc-vddcx-1",
+ "cdc-vddcx-2";
+
+		qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
+
qcom,cdc-micbias-ldoh-v = <0x3>;
qcom,cdc-micbias-cfilt1-mv = <1800>;
qcom,cdc-micbias-cfilt2-mv = <2700>;
diff --git a/Documentation/devicetree/bindings/thermal/tsens.txt b/Documentation/devicetree/bindings/thermal/tsens.txt
index 1388b7d..9b0f97b 100644
--- a/Documentation/devicetree/bindings/thermal/tsens.txt
+++ b/Documentation/devicetree/bindings/thermal/tsens.txt
@@ -43,6 +43,9 @@
no need to re-initialize them. The control registers are also
under a secure domain which can prevent them from being initialized
locally.
+- qcom,sensor-id : If this property is present, map the TSENS sensors based on
+  the remote sensors that are enabled in hardware. Ensure the mapping does not
+  exceed the number of supported sensors.
Example:
tsens@fc4a8000 {
diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
index fd5b93e..6d54f7e 100644
--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt
@@ -12,6 +12,7 @@
Optional properties:
- tx-fifo-resize: determines if the FIFO *has* to be reallocated.
+ - host-only-mode: if present then dwc3 should be run in HOST only mode.
This is usually a subnode to DWC3 glue to which it is connected.
diff --git a/Documentation/devicetree/bindings/usb/msm-hsusb.txt b/Documentation/devicetree/bindings/usb/msm-hsusb.txt
index 6d06e99..de1577d 100644
--- a/Documentation/devicetree/bindings/usb/msm-hsusb.txt
+++ b/Documentation/devicetree/bindings/usb/msm-hsusb.txt
@@ -61,6 +61,9 @@
Used for allowing USB to respond for remote wakup.
- qcom,hsusb-otg-delay-lpm: If present then USB core will wait one second
after disconnect before entering low power mode.
+- qcom,hsusb-otg-delay-lpm-hndshk-on-disconnect: If present then USB core will
+ wait for the handshake with the IPA to complete before entering low
+ power mode.
- <supply-name>-supply: handle to the regulator device tree node.
Optional "supply-name" is "vbus_otg" to supply vbus in host mode.
- qcom,vdd-voltage-level: This property must be a list of three integer
@@ -121,6 +124,8 @@
- qcom,pool-64-bit-align: If present then the pool's memory will be aligned
to 64 bits
- qcom,enable_hbm: if present host bus manager is enabled.
+- qcom,disable-park-mode: if present park mode is disabled. Park mode enables executing
+ up to 3 usb packets from each QH.
Example MSM HSUSB EHCI controller device node :
ehci: qcom,ehci-host@f9a55000 {
@@ -145,6 +150,8 @@
update USB PID and serial numbers used by bootloader in DLOAD mode.
- qcom,android-usb-swfi-latency : value to be used by device to vote
for DMA latency in microsecs.
+- qcom,android-usb-cdrom : if this property is present then the device creates
+  a new LUN as a CD-ROM.
Example Android USB device node :
android_usb@fc42b0c8 {
diff --git a/Documentation/devicetree/bindings/usb/msm-ssusb.txt b/Documentation/devicetree/bindings/usb/msm-ssusb.txt
index 5391734..282257c 100644
--- a/Documentation/devicetree/bindings/usb/msm-ssusb.txt
+++ b/Documentation/devicetree/bindings/usb/msm-ssusb.txt
@@ -45,6 +45,8 @@
bits 6-12 PARAMETER_OVERRIDE_B
bits 13-19 PARAMETER_OVERRIDE_C
bits 20-25 PARAMETER_OVERRIDE_D
+- qcom,skip-charger-detection: If present then charger detection using BC1.2
+  is not supported and the attached host is always assumed to be an SDP.
Sub nodes:
- Sub node for "DWC3- USB3 controller".
diff --git a/Documentation/devicetree/bindings/wcnss/wcnss-wlan.txt b/Documentation/devicetree/bindings/wcnss/wcnss-wlan.txt
index e394b56..c130b26 100644
--- a/Documentation/devicetree/bindings/wcnss/wcnss-wlan.txt
+++ b/Documentation/devicetree/bindings/wcnss/wcnss-wlan.txt
@@ -7,8 +7,8 @@
Required properties:
- compatible: "wcnss_wlan"
-- reg: offset and length of the register set for the device. The pair
- corresponds to PRONTO.
+- reg: physical address and length of the register set for the device.
+- reg-names: "wcnss_mmio", "wcnss_fiq"
- interupts: Pronto to Apps interrupts for tx done and rx pending.
- qcom,pronto-vddmx-supply: regulator to supply pronto pll.
- qcom,pronto-vddcx-supply: regulator to supply WLAN/BT/FM digital module.
@@ -25,8 +25,9 @@
qcom,wcnss-wlan@fb000000 {
compatible = "qcom,wcnss_wlan";
- reg = <0xfb000000 0x280000>;
- reg-names = "wcnss_mmio";
+ reg = <0xfb000000 0x280000>,
+ <0xf9011008 0x04>;
+ reg-names = "wcnss_mmio", "wcnss_fiq";
interrupts = <0 145 0 0 146 0>;
interrupt-names = "wcnss_wlantx_irq", "wcnss_wlanrx_irq";
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 17a44b3..8226e43 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -213,6 +213,9 @@
config ARCH_MTD_XIP
bool
+config ARCH_WANT_KMAP_ATOMIC_FLUSH
+ bool
+
config VECTORS_BASE
hex
default 0xffff0000 if MMU || CPU_HIGH_VECTOR
diff --git a/arch/arm/boot/dts/apq8074-v2-liquid.dts b/arch/arm/boot/dts/apq8074-v2-liquid.dts
new file mode 100644
index 0000000..4ec1cdd
--- /dev/null
+++ b/arch/arm/boot/dts/apq8074-v2-liquid.dts
@@ -0,0 +1,34 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+/include/ "apq8074-v2.dtsi"
+/include/ "msm8974-liquid.dtsi"
+
+/ {
+ model = "Qualcomm APQ 8074v2 LIQUID";
+ compatible = "qcom,apq8074-liquid", "qcom,apq8074", "qcom,liquid";
+ qcom,msm-id = <184 9 0x20000>;
+};
+
+&usb3 {
+ interrupt-parent = <&usb3>;
+ interrupts = <0 1>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0x0 0xffffffff>;
+ interrupt-map = <0x0 0 &intc 0 133 0
+ 0x0 1 &spmi_bus 0x0 0x0 0x9 0x0>;
+ interrupt-names = "hs_phy_irq", "pmic_id_irq";
+
+ qcom,misc-ref = <&pm8941_misc>;
+};
diff --git a/arch/arm/boot/dts/apq8074-v2.dtsi b/arch/arm/boot/dts/apq8074-v2.dtsi
new file mode 100644
index 0000000..3b65236
--- /dev/null
+++ b/arch/arm/boot/dts/apq8074-v2.dtsi
@@ -0,0 +1,48 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * As a general rule, only version-specific property overrides should be placed
+ * inside this file. However, device definitions should be placed inside the
+ * msm8974.dtsi file.
+ */
+
+/include/ "msm8974-v2.dtsi"
+
+/ {
+ qcom,qseecom@a700000 {
+ compatible = "qcom,qseecom";
+ reg = <0x0a700000 0x500000>;
+ reg-names = "secapp-region";
+ qcom,disk-encrypt-pipe-pair = <2>;
+ qcom,hlos-ce-hw-instance = <1>;
+ qcom,qsee-ce-hw-instance = <0>;
+ qcom,msm-bus,name = "qseecom-noc";
+ qcom,msm-bus,num-cases = <4>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <55 512 0 0>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>;
+ };
+};
+
+&memory_hole {
+ qcom,memblock-remove = <0x0a700000 0x5800000>; /* Address and size of the hole */
+};
+
+&qseecom {
+ status = "disabled";
+};
+
diff --git a/arch/arm/boot/dts/msmzinc-ion.dtsi b/arch/arm/boot/dts/apq8084-ion.dtsi
similarity index 100%
rename from arch/arm/boot/dts/msmzinc-ion.dtsi
rename to arch/arm/boot/dts/apq8084-ion.dtsi
diff --git a/arch/arm/boot/dts/apq8084-regulator.dtsi b/arch/arm/boot/dts/apq8084-regulator.dtsi
new file mode 100644
index 0000000..00991b3
--- /dev/null
+++ b/arch/arm/boot/dts/apq8084-regulator.dtsi
@@ -0,0 +1,327 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+
+/* QPNP controlled regulators: */
+
+&spmi_bus {
+ qcom,pma8084@1 {
+ pma8084_s1: regulator@1400 {
+ regulator-min-microvolt = <1050000>;
+ regulator-max-microvolt = <1050000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <100000>;
+ status = "okay";
+ };
+
+ /* PMA8084 S2 + S12 = 2 phase VDD_CX supply */
+ pma8084_s2: regulator@1700 {
+ regulator-min-microvolt = <1050000>;
+ regulator-max-microvolt = <1050000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <100000>;
+ status = "okay";
+ };
+
+ pma8084_s3: regulator@1a00 {
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <1300000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <100000>;
+ status = "okay";
+ };
+
+ pma8084_s4: regulator@1d00 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <100000>;
+ status = "okay";
+ };
+
+ pma8084_s5: regulator@2000 {
+ regulator-min-microvolt = <2150000>;
+ regulator-max-microvolt = <2150000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ /* PMA8084 S6 + S7 = 2 phase VDD_GFX supply */
+ pma8084_s6: regulator@2300 {
+ regulator-min-microvolt = <815000>;
+ regulator-max-microvolt = <900000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ /* PMA8084 S8 + S9 + S10 + S11 = 4 phase VDD_APC supply */
+ pma8084_s8: regulator@2900 {
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1100000>;
+ qcom,enable-time = <500>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <100000>;
+ status = "okay";
+ };
+
+ /* Output of PMA8084 L1 and L11 is tied together. */
+ pma8084_l1: regulator@4000 {
+ parent-supply = <&pma8084_s3>;
+ regulator-min-microvolt = <1225000>;
+ regulator-max-microvolt = <1225000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ regulator-always-on;
+ qcom,system-load = <10000>;
+ status = "okay";
+ };
+
+ pma8084_l2: regulator@4100 {
+ parent-supply = <&pma8084_s3>;
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l3: regulator@4200 {
+ parent-supply = <&pma8084_s3>;
+ regulator-min-microvolt = <1225000>;
+ regulator-max-microvolt = <1225000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l4: regulator@4300 {
+ parent-supply = <&pma8084_s3>;
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l6: regulator@4500 {
+ parent-supply = <&pma8084_s5>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l8: regulator@4700 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l9: regulator@4800 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l10: regulator@4900 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l12: regulator@4b00 {
+ parent-supply = <&pma8084_s5>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l13: regulator@4c00 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <2950000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l14: regulator@4d00 {
+ parent-supply = <&pma8084_s5>;
+ regulator-min-microvolt = <1225000>;
+ regulator-max-microvolt = <1225000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l15: regulator@4e00 {
+ parent-supply = <&pma8084_s5>;
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l16: regulator@4f00 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l17: regulator@5000 {
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l18: regulator@5100 {
+ regulator-min-microvolt = <2850000>;
+ regulator-max-microvolt = <2850000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l19: regulator@5200 {
+ regulator-min-microvolt = <2950000>;
+ regulator-max-microvolt = <2950000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l20: regulator@5300 {
+ regulator-min-microvolt = <2950000>;
+ regulator-max-microvolt = <2950000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l21: regulator@5400 {
+ regulator-min-microvolt = <2950000>;
+ regulator-max-microvolt = <2950000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l22: regulator@5500 {
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l23: regulator@5600 {
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <2700000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l24: regulator@5700 {
+ regulator-min-microvolt = <3075000>;
+ regulator-max-microvolt = <3075000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l25: regulator@5800 {
+ regulator-min-microvolt = <2100000>;
+ regulator-max-microvolt = <2100000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l26: regulator@5900 {
+ parent-supply = <&pma8084_s5>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_l27: regulator@5a00 {
+ parent-supply = <&pma8084_s3>;
+ regulator-min-microvolt = <1050000>;
+ regulator-max-microvolt = <1050000>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_lvs1: regulator@8000 {
+ parent-supply = <&pma8084_s4>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_lvs2: regulator@8100 {
+ parent-supply = <&pma8084_s4>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_lvs3: regulator@8200 {
+ parent-supply = <&pma8084_s4>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_lvs4: regulator@8300 {
+ parent-supply = <&pma8084_s4>;
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+
+ pma8084_mvs1: regulator@8400 {
+ qcom,enable-time = <200>;
+ qcom,pull-down-enable = <1>;
+ status = "okay";
+ };
+ };
+};
+
diff --git a/arch/arm/boot/dts/apq8084-sim.dts b/arch/arm/boot/dts/apq8084-sim.dts
new file mode 100644
index 0000000..ebcca1b
--- /dev/null
+++ b/arch/arm/boot/dts/apq8084-sim.dts
@@ -0,0 +1,171 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+/include/ "apq8084.dtsi"
+
+/ {
+ model = "Qualcomm APQ 8084 Simulator";
+ compatible = "qcom,apq8084-sim", "qcom,apq8084", "qcom,sim";
+ qcom,msm-id = <178 0 0>;
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ uart0: serial@f991f000 {
+ status = "ok";
+ };
+};
+
+&sdcc1 {
+ qcom,vdd-always-on;
+ qcom,vdd-lpm-sup;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <800 500000>;
+
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-voltage-level = <1800000 1800000>;
+ qcom,vdd-io-current-level = <250 154000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
+ qcom,sup-voltages = <2950 2950>;
+ qcom,nonremovable;
+ qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
+
+ status = "ok";
+};
+
+&sdcc2 {
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <9000 800000>;
+ qcom,vdd-io-voltage-level = <1800000 2950000>;
+ qcom,vdd-io-current-level = <6 22000>;
+ qcom,vdd-io-lpm-sup;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
+ qcom,sup-voltages = <2950 2950>;
+ qcom,xpc;
+ qcom,bus-speed-mode = "SDR12", "SDR25", "SDR50", "DDR50", "SDR104";
+ qcom,current-limit = <800>;
+
+ status = "ok";
+};
+
+&pma8084_gpios {
+ gpio@c000 { /* GPIO 1 */
+ };
+
+ gpio@c100 { /* GPIO 2 */
+ };
+
+ gpio@c200 { /* GPIO 3 */
+ };
+
+ gpio@c300 { /* GPIO 4 */
+ };
+
+ gpio@c400 { /* GPIO 5 */
+ };
+
+ gpio@c500 { /* GPIO 6 */
+ };
+
+ gpio@c600 { /* GPIO 7 */
+ };
+
+ gpio@c700 { /* GPIO 8 */
+ };
+
+ gpio@c800 { /* GPIO 9 */
+ };
+
+ gpio@c900 { /* GPIO 10 */
+ };
+
+ gpio@ca00 { /* GPIO 11 */
+ };
+
+ gpio@cb00 { /* GPIO 12 */
+ };
+
+ gpio@cc00 { /* GPIO 13 */
+ };
+
+ gpio@cd00 { /* GPIO 14 */
+ };
+
+ gpio@ce00 { /* GPIO 15 */
+ };
+
+ gpio@cf00 { /* GPIO 16 */
+ };
+
+ gpio@d000 { /* GPIO 17 */
+ };
+
+ gpio@d100 { /* GPIO 18 */
+ };
+
+ gpio@d200 { /* GPIO 19 */
+ };
+
+ gpio@d300 { /* GPIO 20 */
+ };
+
+ gpio@d400 { /* GPIO 21 */
+ };
+
+ gpio@d500 { /* GPIO 22 */
+ };
+};
+
+&pma8084_mpps {
+ mpp@a000 { /* MPP 1 */
+ };
+
+ mpp@a100 { /* MPP 2 */
+ };
+
+ mpp@a200 { /* MPP 3 */
+ };
+
+ mpp@a300 { /* MPP 4 */
+ };
+
+ mpp@a400 { /* MPP 5 */
+ };
+
+ mpp@a500 { /* MPP 6 */
+ };
+
+ mpp@a600 { /* MPP 7 */
+ };
+
+ mpp@a700 { /* MPP 8 */
+ };
+};
+
+&usb3 {
+ qcom,skip-charger-detection;
+};
diff --git a/arch/arm/boot/dts/apq8084.dtsi b/arch/arm/boot/dts/apq8084.dtsi
new file mode 100644
index 0000000..95b1c8f
--- /dev/null
+++ b/arch/arm/boot/dts/apq8084.dtsi
@@ -0,0 +1,163 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/include/ "skeleton.dtsi"
+/include/ "apq8084-ion.dtsi"
+
+/ {
+ model = "Qualcomm APQ 8084";
+ compatible = "qcom,apq8084";
+ interrupt-parent = <&intc>;
+
+ intc: interrupt-controller@f9000000 {
+ compatible = "qcom,msm-qgic2";
+ interrupt-controller;
+ #interrupt-cells = <3>;
+ reg = <0xF9000000 0x1000>,
+ <0xF9002000 0x1000>;
+ };
+
+ msmgpio: gpio@fd510000 {
+ compatible = "qcom,msm-gpio";
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ reg = <0xfd510000 0x4000>;
+ ngpio = <146>;
+ interrupts = <0 208 0>;
+ qcom,direct-connect-irqs = <8>;
+ };
+
+ timer {
+ compatible = "arm,armv7-timer";
+ interrupts = <1 2 0 1 3 0>;
+ clock-frequency = <19200000>;
+ };
+
+ serial@f991f000 {
+ compatible = "qcom,msm-lsuart-v14";
+ reg = <0xf991f000 0x1000>;
+ interrupts = <0 109 0>;
+ status = "disabled";
+ };
+
+ qcom,cache_erp {
+ compatible = "qcom,cache_erp";
+ interrupts = <1 9 0>, <0 2 0>;
+ interrupt-names = "l1_irq", "l2_irq";
+ };
+
+ qcom,cache_dump {
+ compatible = "qcom,cache_dump";
+ qcom,l1-dump-size = <0x100000>;
+ qcom,l2-dump-size = <0x500000>;
+ qcom,memory-reservation-type = "EBI1";
+ qcom,memory-reservation-size = <0x600000>; /* 6M EBI1 buffer */
+ };
+
+ rpm_bus: qcom,rpm-smd {
+ compatible = "qcom,rpm-smd";
+ rpm-channel-name = "rpm_requests";
+ rpm-channel-type = <15>; /* SMD_APPS_RPM */
+ rpm-standalone;
+ };
+
+ qcom,msm-imem@fe805000 {
+ compatible = "qcom,msm-imem";
+ reg = <0xfe805000 0x1000>; /* Address and size of IMEM */
+ };
+
+ qcom,msm-rtb {
+ compatible = "qcom,msm-rtb";
+ qcom,memory-reservation-type = "EBI1";
+ qcom,memory-reservation-size = <0x100000>; /* 1M EBI1 buffer */
+ };
+
+ sdcc1: qcom,sdcc@f9824000 {
+ cell-index = <1>; /* SDC1 eMMC slot */
+ compatible = "qcom,msm-sdcc";
+ reg = <0xf9824000 0x800>;
+ reg-names = "core_mem";
+ interrupts = <0 123 0>;
+ interrupt-names = "core_irq";
+
+ qcom,bus-width = <8>;
+ status = "disabled";
+ };
+
+ sdcc2: qcom,sdcc@f98a4000 {
+ cell-index = <2>; /* SDC2 SD card slot */
+ compatible = "qcom,msm-sdcc";
+ reg = <0xf98a4000 0x800>;
+ reg-names = "core_mem";
+ interrupts = <0 125 0>;
+ interrupt-names = "core_irq";
+
+
+ qcom,bus-width = <4>;
+ status = "disabled";
+ };
+
+ spmi_bus: qcom,spmi@fc4c0000 {
+ cell-index = <0>;
+ compatible = "qcom,spmi-pmic-arb";
+ reg-names = "core", "intr", "cnfg";
+ reg = <0xfc4cf000 0x1000>,
+ <0xfc4cb000 0x1000>,
+ <0xfc4ca000 0x1000>;
+ /* 190,ee0_krait_hlos_spmi_periph_irq */
+ /* 187,channel_0_krait_hlos_trans_done_irq */
+ interrupts = <0 190 0>, <0 187 0>;
+ qcom,not-wakeup;
+ qcom,pmic-arb-ee = <0>;
+ qcom,pmic-arb-channel = <0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupt-controller;
+ #interrupt-cells = <3>;
+ };
+
+ usb3: qcom,ssusb@f9200000 {
+ compatible = "qcom,dwc-usb3-msm";
+ reg = <0xf9200000 0xfc000>,
+ <0xfd4ab000 0x4>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ interrupts = <0 133 0>;
+ interrupt-names = "hs_phy_irq";
+ ssusb_vdd_dig-supply = <&pma8084_s1>;
+ SSUSB_1p8-supply = <&pma8084_l6>;
+ hsusb_vdd_dig-supply = <&pma8084_s1>;
+ HSUSB_1p8-supply = <&pma8084_l6>;
+ HSUSB_3p3-supply = <&pma8084_l24>;
+ qcom,dwc-usb3-msm-dbm-eps = <4>;
+ qcom,vdd-voltage-level = <0 900000 1050000>;
+
+ dwc3@f9200000 {
+ compatible = "synopsys,dwc3";
+ reg = <0xf9200000 0xfc000>;
+ interrupt-parent = <&intc>;
+ interrupts = <0 131 0>, <0 179 0>;
+ interrupt-names = "irq", "otg_irq";
+ tx-fifo-resize;
+ };
+ };
+
+ android_usb {
+ compatible = "qcom,android-usb";
+ };
+};
+
+/include/ "msm-pma8084.dtsi"
+/include/ "apq8084-regulator.dtsi"
diff --git a/arch/arm/boot/dts/dsi-panel-nt35590-720p-cmd.dtsi b/arch/arm/boot/dts/dsi-panel-nt35590-720p-cmd.dtsi
new file mode 100644
index 0000000..7942567
--- /dev/null
+++ b/arch/arm/boot/dts/dsi-panel-nt35590-720p-cmd.dtsi
@@ -0,0 +1,530 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/ {
+ qcom,mdss_dsi_nt35590_720p_cmd {
+ compatible = "qcom,mdss-dsi-panel";
+ label = "nt35590 720p command mode dsi panel";
+ status = "disable";
+ qcom,dsi-ctrl-phandle = <&mdss_dsi0>;
+ qcom,rst-gpio = <&msmgpio 25 0>;
+ qcom,te-gpio = <&msmgpio 24 0>;
+ qcom,mdss-pan-res = <720 1280>;
+ qcom,mdss-pan-bpp = <24>;
+ qcom,mdss-pan-dest = "display_1";
+ qcom,mdss-pan-porch-values = <164 8 140 1 1 6>;
+ qcom,mdss-pan-underflow-clr = <0xff>;
+ qcom,mdss-pan-bl-ctrl = "bl_ctrl_wled";
+ qcom,mdss-pan-bl-levels = <1 4095>;
+ qcom,mdss-pan-dsi-mode = <1>;
+ qcom,mdss-vsync-enable = <1>;
+ qcom,mdss-hw-vsync-mode = <1>;
+ qcom,mdss-pan-dsi-h-pulse-mode = <1>;
+ qcom,mdss-pan-dsi-h-power-stop = <0 0 0>;
+ qcom,mdss-pan-dsi-bllp-power-stop = <1 1>;
+ qcom,mdss-pan-dsi-traffic-mode = <2>;
+ qcom,mdss-pan-dsi-dst-format = <8>;
+ qcom,mdss-pan-insert-dcs-cmd = <1>;
+ qcom,mdss-pan-wr-mem-continue = <0x3c>;
+ qcom,mdss-pan-wr-mem-start = <0x2c>;
+ qcom,mdss-pan-te-sel = <1>;
+ qcom,mdss-pan-dsi-vc = <0>;
+ qcom,mdss-pan-dsi-rgb-swap = <0>;
+ qcom,mdss-pan-dsi-data-lanes = <1 1 1 1>; /* 4 lanes */
+ qcom,mdss-pan-dsi-dlane-swap = <0>;
+ qcom,mdss-pan-dsi-t-clk = <0x2c 0x20>;
+ qcom,mdss-pan-dsi-stream = <0>;
+ qcom,mdss-pan-dsi-mdp-tr = <0x0>;
+ qcom,mdss-pan-dsi-dma-tr = <0x04>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
+ qcom,panel-phy-regulatorSettings = [07 09 03 00 /* Regulator settings */
+ 20 00 01];
+ qcom,panel-phy-timingSettings = [7d 25 1d 00 37 33
+ 22 27 1e 03 04 00];
+ qcom,panel-phy-strengthCtrl = [ff 06];
+ qcom,panel-phy-bistCtrl = [00 00 b1 ff /* BIST Ctrl settings */
+ 00 00];
+ qcom,panel-phy-laneConfig = [00 00 00 00 00 00 00 01 97 /* lane0 config */
+ 00 00 00 00 05 00 00 01 97 /* lane1 config */
+ 00 00 00 00 0a 00 00 01 97 /* lane2 config */
+ 00 00 00 00 0f 00 00 01 97 /* lane3 config */
+ 00 c0 00 00 00 00 00 01 bb]; /* Clk ln config */
+ qcom,panel-on-cmds = [29 01 00 00 00 02 FF EE
+ 29 01 00 00 00 02 26 08
+ 29 01 00 00 00 02 26 00
+ 29 01 00 00 10 02 FF 00
+ 29 01 00 00 00 02 BA 03
+ 29 01 00 00 00 02 C2 08
+ 29 01 00 00 00 02 FF 01
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 00 4A
+ 29 01 00 00 00 02 01 33
+ 29 01 00 00 00 02 02 53
+ 29 01 00 00 00 02 03 55
+ 29 01 00 00 00 02 04 55
+ 29 01 00 00 00 02 05 33
+ 29 01 00 00 00 02 06 22
+ 29 01 00 00 00 02 08 56
+ 29 01 00 00 00 02 09 8F
+ 29 01 00 00 00 02 36 73
+ 29 01 00 00 00 02 0B 9F
+ 29 01 00 00 00 02 0C 9F
+ 29 01 00 00 00 02 0D 2F
+ 29 01 00 00 00 02 0E 24
+ 29 01 00 00 00 02 11 83
+ 29 01 00 00 00 02 12 03
+ 29 01 00 00 00 02 71 2C
+ 29 01 00 00 00 02 6F 03
+ 29 01 00 00 00 02 0F 0A
+ 29 01 00 00 00 02 FF 05
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 01 00
+ 29 01 00 00 00 02 02 8B
+ 29 01 00 00 00 02 03 82
+ 29 01 00 00 00 02 04 82
+ 29 01 00 00 00 02 05 30
+ 29 01 00 00 00 02 06 33
+ 29 01 00 00 00 02 07 01
+ 29 01 00 00 00 02 08 00
+ 29 01 00 00 00 02 09 46
+ 29 01 00 00 00 02 0A 46
+ 29 01 00 00 00 02 0D 0B
+ 29 01 00 00 00 02 0E 1D
+ 29 01 00 00 00 02 0F 08
+ 29 01 00 00 00 02 10 53
+ 29 01 00 00 00 02 11 00
+ 29 01 00 00 00 02 12 00
+ 29 01 00 00 00 02 14 01
+ 29 01 00 00 00 02 15 00
+ 29 01 00 00 00 02 16 05
+ 29 01 00 00 00 02 17 00
+ 29 01 00 00 00 02 19 7F
+ 29 01 00 00 00 02 1A FF
+ 29 01 00 00 00 02 1B 0F
+ 29 01 00 00 00 02 1C 00
+ 29 01 00 00 00 02 1D 00
+ 29 01 00 00 00 02 1E 00
+ 29 01 00 00 00 02 1F 07
+ 29 01 00 00 00 02 20 00
+ 29 01 00 00 00 02 21 06
+ 29 01 00 00 00 02 22 55
+ 29 01 00 00 00 02 23 4D
+ 29 01 00 00 00 02 2D 02
+ 29 01 00 00 00 02 28 01
+ 29 01 00 00 00 02 2F 02
+ 29 01 00 00 00 02 83 01
+ 29 01 00 00 00 02 9E 58
+ 29 01 00 00 00 02 9F 6A
+ 29 01 00 00 00 02 A0 01
+ 29 01 00 00 00 02 A2 10
+ 29 01 00 00 00 02 BB 0A
+ 29 01 00 00 00 02 BC 0A
+ 29 01 00 00 00 02 32 08
+ 29 01 00 00 00 02 33 B8
+ 29 01 00 00 00 02 36 01
+ 29 01 00 00 00 02 37 00
+ 29 01 00 00 00 02 43 00
+ 29 01 00 00 00 02 4B 21
+ 29 01 00 00 00 02 4C 03
+ 29 01 00 00 00 02 50 21
+ 29 01 00 00 00 02 51 03
+ 29 01 00 00 00 02 58 21
+ 29 01 00 00 00 02 59 03
+ 29 01 00 00 00 02 5D 21
+ 29 01 00 00 00 02 5E 03
+ 29 01 00 00 00 02 6C 00
+ 29 01 00 00 00 02 6D 00
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 FF 01
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 75 00
+ 29 01 00 00 00 02 76 7D
+ 29 01 00 00 00 02 77 00
+ 29 01 00 00 00 02 78 8A
+ 29 01 00 00 00 02 79 00
+ 29 01 00 00 00 02 7A 9C
+ 29 01 00 00 00 02 7B 00
+ 29 01 00 00 00 02 7C B1
+ 29 01 00 00 00 02 7D 00
+ 29 01 00 00 00 02 7E BF
+ 29 01 00 00 00 02 7F 00
+ 29 01 00 00 00 02 80 CF
+ 29 01 00 00 00 02 81 00
+ 29 01 00 00 00 02 82 DD
+ 29 01 00 00 00 02 83 00
+ 29 01 00 00 00 02 84 E8
+ 29 01 00 00 00 02 85 00
+ 29 01 00 00 00 02 86 F2
+ 29 01 00 00 00 02 87 01
+ 29 01 00 00 00 02 88 1F
+ 29 01 00 00 00 02 89 01
+ 29 01 00 00 00 02 8A 41
+ 29 01 00 00 00 02 8B 01
+ 29 01 00 00 00 02 8C 78
+ 29 01 00 00 00 02 8D 01
+ 29 01 00 00 00 02 8E A5
+ 29 01 00 00 00 02 8F 01
+ 29 01 00 00 00 02 90 EE
+ 29 01 00 00 00 02 91 02
+ 29 01 00 00 00 02 92 29
+ 29 01 00 00 00 02 93 02
+ 29 01 00 00 00 02 94 2A
+ 29 01 00 00 00 02 95 02
+ 29 01 00 00 00 02 96 5D
+ 29 01 00 00 00 02 97 02
+ 29 01 00 00 00 02 98 93
+ 29 01 00 00 00 02 99 02
+ 29 01 00 00 00 02 9A B8
+ 29 01 00 00 00 02 9B 02
+ 29 01 00 00 00 02 9C E7
+ 29 01 00 00 00 02 9D 03
+ 29 01 00 00 00 02 9E 07
+ 29 01 00 00 00 02 9F 03
+ 29 01 00 00 00 02 A0 37
+ 29 01 00 00 00 02 A2 03
+ 29 01 00 00 00 02 A3 46
+ 29 01 00 00 00 02 A4 03
+ 29 01 00 00 00 02 A5 56
+ 29 01 00 00 00 02 A6 03
+ 29 01 00 00 00 02 A7 66
+ 29 01 00 00 00 02 A9 03
+ 29 01 00 00 00 02 AA 7A
+ 29 01 00 00 00 02 AB 03
+ 29 01 00 00 00 02 AC 93
+ 29 01 00 00 00 02 AD 03
+ 29 01 00 00 00 02 AE A3
+ 29 01 00 00 00 02 AF 03
+ 29 01 00 00 00 02 B0 B4
+ 29 01 00 00 00 02 B1 03
+ 29 01 00 00 00 02 B2 CB
+ 29 01 00 00 00 02 B3 00
+ 29 01 00 00 00 02 B4 7D
+ 29 01 00 00 00 02 B5 00
+ 29 01 00 00 00 02 B6 8A
+ 29 01 00 00 00 02 B7 00
+ 29 01 00 00 00 02 B8 9C
+ 29 01 00 00 00 02 B9 00
+ 29 01 00 00 00 02 BA B1
+ 29 01 00 00 00 02 BB 00
+ 29 01 00 00 00 02 BC BF
+ 29 01 00 00 00 02 BD 00
+ 29 01 00 00 00 02 BE CF
+ 29 01 00 00 00 02 BF 00
+ 29 01 00 00 00 02 C0 DD
+ 29 01 00 00 00 02 C1 00
+ 29 01 00 00 00 02 C2 E8
+ 29 01 00 00 00 02 C3 00
+ 29 01 00 00 00 02 C4 F2
+ 29 01 00 00 00 02 C5 01
+ 29 01 00 00 00 02 C6 1F
+ 29 01 00 00 00 02 C7 01
+ 29 01 00 00 00 02 C8 41
+ 29 01 00 00 00 02 C9 01
+ 29 01 00 00 00 02 CA 78
+ 29 01 00 00 00 02 CB 01
+ 29 01 00 00 00 02 CC A5
+ 29 01 00 00 00 02 CD 01
+ 29 01 00 00 00 02 CE EE
+ 29 01 00 00 00 02 CF 02
+ 29 01 00 00 00 02 D0 29
+ 29 01 00 00 00 02 D1 02
+ 29 01 00 00 00 02 D2 2A
+ 29 01 00 00 00 02 D3 02
+ 29 01 00 00 00 02 D4 5D
+ 29 01 00 00 00 02 D5 02
+ 29 01 00 00 00 02 D6 93
+ 29 01 00 00 00 02 D7 02
+ 29 01 00 00 00 02 D8 B8
+ 29 01 00 00 00 02 D9 02
+ 29 01 00 00 00 02 DA E7
+ 29 01 00 00 00 02 DB 03
+ 29 01 00 00 00 02 DC 07
+ 29 01 00 00 00 02 DD 03
+ 29 01 00 00 00 02 DE 37
+ 29 01 00 00 00 02 DF 03
+ 29 01 00 00 00 02 E0 46
+ 29 01 00 00 00 02 E1 03
+ 29 01 00 00 00 02 E2 56
+ 29 01 00 00 00 02 E3 03
+ 29 01 00 00 00 02 E4 66
+ 29 01 00 00 00 02 E5 03
+ 29 01 00 00 00 02 E6 7A
+ 29 01 00 00 00 02 E7 03
+ 29 01 00 00 00 02 E8 93
+ 29 01 00 00 00 02 E9 03
+ 29 01 00 00 00 02 EA A3
+ 29 01 00 00 00 02 EB 03
+ 29 01 00 00 00 02 EC B4
+ 29 01 00 00 00 02 ED 03
+ 29 01 00 00 00 02 EE CB
+ 29 01 00 00 00 02 EF 00
+ 29 01 00 00 00 02 F0 ED
+ 29 01 00 00 00 02 F1 00
+ 29 01 00 00 00 02 F2 F3
+ 29 01 00 00 00 02 F3 00
+ 29 01 00 00 00 02 F4 FE
+ 29 01 00 00 00 02 F5 01
+ 29 01 00 00 00 02 F6 09
+ 29 01 00 00 00 02 F7 01
+ 29 01 00 00 00 02 F8 13
+ 29 01 00 00 00 02 F9 01
+ 29 01 00 00 00 02 FA 1D
+ 29 01 00 00 00 02 FF 02
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 00 01
+ 29 01 00 00 00 02 01 26
+ 29 01 00 00 00 02 02 01
+ 29 01 00 00 00 02 03 2F
+ 29 01 00 00 00 02 04 01
+ 29 01 00 00 00 02 05 37
+ 29 01 00 00 00 02 06 01
+ 29 01 00 00 00 02 07 56
+ 29 01 00 00 00 02 08 01
+ 29 01 00 00 00 02 09 70
+ 29 01 00 00 00 02 0A 01
+ 29 01 00 00 00 02 0B 9D
+ 29 01 00 00 00 02 0C 01
+ 29 01 00 00 00 02 0D C2
+ 29 01 00 00 00 02 0E 01
+ 29 01 00 00 00 02 0F FF
+ 29 01 00 00 00 02 10 02
+ 29 01 00 00 00 02 11 31
+ 29 01 00 00 00 02 12 02
+ 29 01 00 00 00 02 13 32
+ 29 01 00 00 00 02 14 02
+ 29 01 00 00 00 02 15 60
+ 29 01 00 00 00 02 16 02
+ 29 01 00 00 00 02 17 94
+ 29 01 00 00 00 02 18 02
+ 29 01 00 00 00 02 19 B5
+ 29 01 00 00 00 02 1A 02
+ 29 01 00 00 00 02 1B E3
+ 29 01 00 00 00 02 1C 03
+ 29 01 00 00 00 02 1D 03
+ 29 01 00 00 00 02 1E 03
+ 29 01 00 00 00 02 1F 2D
+ 29 01 00 00 00 02 20 03
+ 29 01 00 00 00 02 21 3A
+ 29 01 00 00 00 02 22 03
+ 29 01 00 00 00 02 23 48
+ 29 01 00 00 00 02 24 03
+ 29 01 00 00 00 02 25 57
+ 29 01 00 00 00 02 26 03
+ 29 01 00 00 00 02 27 68
+ 29 01 00 00 00 02 28 03
+ 29 01 00 00 00 02 29 7B
+ 29 01 00 00 00 02 2A 03
+ 29 01 00 00 00 02 2B 90
+ 29 01 00 00 00 02 2D 03
+ 29 01 00 00 00 02 2F A0
+ 29 01 00 00 00 02 30 03
+ 29 01 00 00 00 02 31 CB
+ 29 01 00 00 00 02 32 00
+ 29 01 00 00 00 02 33 ED
+ 29 01 00 00 00 02 34 00
+ 29 01 00 00 00 02 35 F3
+ 29 01 00 00 00 02 36 00
+ 29 01 00 00 00 02 37 FE
+ 29 01 00 00 00 02 38 01
+ 29 01 00 00 00 02 39 09
+ 29 01 00 00 00 02 3A 01
+ 29 01 00 00 00 02 3B 13
+ 29 01 00 00 00 02 3D 01
+ 29 01 00 00 00 02 3F 1D
+ 29 01 00 00 00 02 40 01
+ 29 01 00 00 00 02 41 26
+ 29 01 00 00 00 02 42 01
+ 29 01 00 00 00 02 43 2F
+ 29 01 00 00 00 02 44 01
+ 29 01 00 00 00 02 45 37
+ 29 01 00 00 00 02 46 01
+ 29 01 00 00 00 02 47 56
+ 29 01 00 00 00 02 48 01
+ 29 01 00 00 00 02 49 70
+ 29 01 00 00 00 02 4A 01
+ 29 01 00 00 00 02 4B 9D
+ 29 01 00 00 00 02 4C 01
+ 29 01 00 00 00 02 4D C2
+ 29 01 00 00 00 02 4E 01
+ 29 01 00 00 00 02 4F FF
+ 29 01 00 00 00 02 50 02
+ 29 01 00 00 00 02 51 31
+ 29 01 00 00 00 02 52 02
+ 29 01 00 00 00 02 53 32
+ 29 01 00 00 00 02 54 02
+ 29 01 00 00 00 02 55 60
+ 29 01 00 00 00 02 56 02
+ 29 01 00 00 00 02 58 94
+ 29 01 00 00 00 02 59 02
+ 29 01 00 00 00 02 5A B5
+ 29 01 00 00 00 02 5B 02
+ 29 01 00 00 00 02 5C E3
+ 29 01 00 00 00 02 5D 03
+ 29 01 00 00 00 02 5E 03
+ 29 01 00 00 00 02 5F 03
+ 29 01 00 00 00 02 60 2D
+ 29 01 00 00 00 02 61 03
+ 29 01 00 00 00 02 62 3A
+ 29 01 00 00 00 02 63 03
+ 29 01 00 00 00 02 64 48
+ 29 01 00 00 00 02 65 03
+ 29 01 00 00 00 02 66 57
+ 29 01 00 00 00 02 67 03
+ 29 01 00 00 00 02 68 68
+ 29 01 00 00 00 02 69 03
+ 29 01 00 00 00 02 6A 7B
+ 29 01 00 00 00 02 6B 03
+ 29 01 00 00 00 02 6C 90
+ 29 01 00 00 00 02 6D 03
+ 29 01 00 00 00 02 6E A0
+ 29 01 00 00 00 02 6F 03
+ 29 01 00 00 00 02 70 CB
+ 29 01 00 00 00 02 71 00
+ 29 01 00 00 00 02 72 19
+ 29 01 00 00 00 02 73 00
+ 29 01 00 00 00 02 74 36
+ 29 01 00 00 00 02 75 00
+ 29 01 00 00 00 02 76 55
+ 29 01 00 00 00 02 77 00
+ 29 01 00 00 00 02 78 70
+ 29 01 00 00 00 02 79 00
+ 29 01 00 00 00 02 7A 83
+ 29 01 00 00 00 02 7B 00
+ 29 01 00 00 00 02 7C 99
+ 29 01 00 00 00 02 7D 00
+ 29 01 00 00 00 02 7E A8
+ 29 01 00 00 00 02 7F 00
+ 29 01 00 00 00 02 80 B7
+ 29 01 00 00 00 02 81 00
+ 29 01 00 00 00 02 82 C5
+ 29 01 00 00 00 02 83 00
+ 29 01 00 00 00 02 84 F7
+ 29 01 00 00 00 02 85 01
+ 29 01 00 00 00 02 86 1E
+ 29 01 00 00 00 02 87 01
+ 29 01 00 00 00 02 88 60
+ 29 01 00 00 00 02 89 01
+ 29 01 00 00 00 02 8A 95
+ 29 01 00 00 00 02 8B 01
+ 29 01 00 00 00 02 8C E1
+ 29 01 00 00 00 02 8D 02
+ 29 01 00 00 00 02 8E 20
+ 29 01 00 00 00 02 8F 02
+ 29 01 00 00 00 02 90 23
+ 29 01 00 00 00 02 91 02
+ 29 01 00 00 00 02 92 59
+ 29 01 00 00 00 02 93 02
+ 29 01 00 00 00 02 94 94
+ 29 01 00 00 00 02 95 02
+ 29 01 00 00 00 02 96 B4
+ 29 01 00 00 00 02 97 02
+ 29 01 00 00 00 02 98 E1
+ 29 01 00 00 00 02 99 03
+ 29 01 00 00 00 02 9A 01
+ 29 01 00 00 00 02 9B 03
+ 29 01 00 00 00 02 9C 28
+ 29 01 00 00 00 02 9D 03
+ 29 01 00 00 00 02 9E 30
+ 29 01 00 00 00 02 9F 03
+ 29 01 00 00 00 02 A0 37
+ 29 01 00 00 00 02 A2 03
+ 29 01 00 00 00 02 A3 3B
+ 29 01 00 00 00 02 A4 03
+ 29 01 00 00 00 02 A5 40
+ 29 01 00 00 00 02 A6 03
+ 29 01 00 00 00 02 A7 50
+ 29 01 00 00 00 02 A9 03
+ 29 01 00 00 00 02 AA 6D
+ 29 01 00 00 00 02 AB 03
+ 29 01 00 00 00 02 AC 80
+ 29 01 00 00 00 02 AD 03
+ 29 01 00 00 00 02 AE CB
+ 29 01 00 00 00 02 AF 00
+ 29 01 00 00 00 02 B0 19
+ 29 01 00 00 00 02 B1 00
+ 29 01 00 00 00 02 B2 36
+ 29 01 00 00 00 02 B3 00
+ 29 01 00 00 00 02 B4 55
+ 29 01 00 00 00 02 B5 00
+ 29 01 00 00 00 02 B6 70
+ 29 01 00 00 00 02 B7 00
+ 29 01 00 00 00 02 B8 83
+ 29 01 00 00 00 02 B9 00
+ 29 01 00 00 00 02 BA 99
+ 29 01 00 00 00 02 BB 00
+ 29 01 00 00 00 02 BC A8
+ 29 01 00 00 00 02 BD 00
+ 29 01 00 00 00 02 BE B7
+ 29 01 00 00 00 02 BF 00
+ 29 01 00 00 00 02 C0 C5
+ 29 01 00 00 00 02 C1 00
+ 29 01 00 00 00 02 C2 F7
+ 29 01 00 00 00 02 C3 01
+ 29 01 00 00 00 02 C4 1E
+ 29 01 00 00 00 02 C5 01
+ 29 01 00 00 00 02 C6 60
+ 29 01 00 00 00 02 C7 01
+ 29 01 00 00 00 02 C8 95
+ 29 01 00 00 00 02 C9 01
+ 29 01 00 00 00 02 CA E1
+ 29 01 00 00 00 02 CB 02
+ 29 01 00 00 00 02 CC 20
+ 29 01 00 00 00 02 CD 02
+ 29 01 00 00 00 02 CE 23
+ 29 01 00 00 00 02 CF 02
+ 29 01 00 00 00 02 D0 59
+ 29 01 00 00 00 02 D1 02
+ 29 01 00 00 00 02 D2 94
+ 29 01 00 00 00 02 D3 02
+ 29 01 00 00 00 02 D4 B4
+ 29 01 00 00 00 02 D5 02
+ 29 01 00 00 00 02 D6 E1
+ 29 01 00 00 00 02 D7 03
+ 29 01 00 00 00 02 D8 01
+ 29 01 00 00 00 02 D9 03
+ 29 01 00 00 00 02 DA 28
+ 29 01 00 00 00 02 DB 03
+ 29 01 00 00 00 02 DC 30
+ 29 01 00 00 00 02 DD 03
+ 29 01 00 00 00 02 DE 37
+ 29 01 00 00 00 02 DF 03
+ 29 01 00 00 00 02 E0 3B
+ 29 01 00 00 00 02 E1 03
+ 29 01 00 00 00 02 E2 40
+ 29 01 00 00 00 02 E3 03
+ 29 01 00 00 00 02 E4 50
+ 29 01 00 00 00 02 E5 03
+ 29 01 00 00 00 02 E6 6D
+ 29 01 00 00 00 02 E7 03
+ 29 01 00 00 00 02 E8 80
+ 29 01 00 00 00 02 E9 03
+ 29 01 00 00 00 02 EA CB
+ 29 01 00 00 00 02 FF 01
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 FF 02
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 FF 04
+ 29 01 00 00 00 02 FB 01
+ 29 01 00 00 00 02 FF 00
+ 29 01 00 00 64 02 11 00
+ 29 01 00 00 00 02 FF EE
+ 29 01 00 00 00 02 12 50
+ 29 01 00 00 00 02 13 02
+ 29 01 00 00 00 02 6A 60
+ 29 01 00 00 00 02 FF 00
+ 29 01 00 00 78 02 29 00];
+ qcom,on-cmds-dsi-state = "DSI_LP_MODE";
+ qcom,panel-off-cmds = [05 01 00 00 32 02 28 00
+ 05 01 00 00 78 02 10 00];
+ qcom,off-cmds-dsi-state = "DSI_HS_MODE";
+ };
+};
diff --git a/arch/arm/boot/dts/dsi-panel-nt35590-720p-video.dtsi b/arch/arm/boot/dts/dsi-panel-nt35590-720p-video.dtsi
index c52b7dd..c0c9107 100644
--- a/arch/arm/boot/dts/dsi-panel-nt35590-720p-video.dtsi
+++ b/arch/arm/boot/dts/dsi-panel-nt35590-720p-video.dtsi
@@ -38,7 +38,7 @@
qcom,mdss-pan-dsi-stream = <0>;
qcom,mdss-pan-dsi-mdp-tr = <0x0>;
qcom,mdss-pan-dsi-dma-tr = <0x04>;
- qcom,mdss-pan-frame-rate = <60>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
qcom,panel-phy-regulatorSettings = [07 09 03 00 /* Regualotor settings */
20 00 01];
qcom,panel-phy-timingSettings = [7d 25 1d 00 37 33
diff --git a/arch/arm/boot/dts/dsi-panel-orise-720p-video.dtsi b/arch/arm/boot/dts/dsi-panel-orise-720p-video.dtsi
index 7bd95e7..448d357 100644
--- a/arch/arm/boot/dts/dsi-panel-orise-720p-video.dtsi
+++ b/arch/arm/boot/dts/dsi-panel-orise-720p-video.dtsi
@@ -36,7 +36,7 @@
qcom,mdss-pan-dsi-stream = <0>;
qcom,mdss-pan-dsi-mdp-tr = <0x0>;
qcom,mdss-pan-dsi-dma-tr = <0x04>;
- qcom,mdss-pan-frame-rate = <60>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
qcom,panel-phy-regulatorSettings = [03 01 01 00 /* Regualotor settings */
20 00 01];
qcom,panel-phy-timingSettings = [69 29 1f 00 55 55
diff --git a/arch/arm/boot/dts/dsi-panel-sim-video.dtsi b/arch/arm/boot/dts/dsi-panel-sim-video.dtsi
index 98074c8..9a734a0 100644
--- a/arch/arm/boot/dts/dsi-panel-sim-video.dtsi
+++ b/arch/arm/boot/dts/dsi-panel-sim-video.dtsi
@@ -37,7 +37,7 @@
qcom,mdss-pan-dsi-stream = <0>;
qcom,mdss-pan-dsi-mdp-tr = <0x04>;
qcom,mdss-pan-dsi-dma-tr = <0x04>;
- qcom,mdss-pan-frame-rate = <60>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
qcom,panel-on-cmds = [32 01 00 00 00 02 00 00];
qcom,on-cmds-dsi-state = "DSI_LP_MODE";
qcom,panel-off-cmds = [22 01 00 00 00 02 00 00];
diff --git a/arch/arm/boot/dts/dsi-panel-toshiba-720p-video.dtsi b/arch/arm/boot/dts/dsi-panel-toshiba-720p-video.dtsi
index 42f6033..2937cde 100644
--- a/arch/arm/boot/dts/dsi-panel-toshiba-720p-video.dtsi
+++ b/arch/arm/boot/dts/dsi-panel-toshiba-720p-video.dtsi
@@ -40,7 +40,7 @@
qcom,mdss-pan-dsi-stream = <0>;
qcom,mdss-pan-dsi-mdp-tr = <0x0>;
qcom,mdss-pan-dsi-dma-tr = <0x04>;
- qcom,mdss-pan-frame-rate = <60>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
qcom,panel-phy-regulatorSettings = [07 09 03 00 /* Regualotor settings */
20 00 01];
qcom,panel-phy-timingSettings = [b0 23 1b 00 94 93
diff --git a/arch/arm/boot/dts/dsi-v2-panel-truly-wvga-video.dtsi b/arch/arm/boot/dts/dsi-v2-panel-truly-wvga-video.dtsi
new file mode 100644
index 0000000..a3718aa
--- /dev/null
+++ b/arch/arm/boot/dts/dsi-v2-panel-truly-wvga-video.dtsi
@@ -0,0 +1,120 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/ {
+ qcom,dsi_v2_truly_wvga_video {
+ compatible = "qcom,dsi-panel-v2";
+ label = "Truly WVGA video mode dsi panel";
+ qcom,dsi-ctrl-phandle = <&mdss_dsi0>;
+ qcom,rst-gpio = <&msmgpio 41 0>;
+ qcom,mode-selection-gpio = <&msmgpio 7 0>;
+ vdda-supply = <&pm8110_l19>;
+ vddio-supply = <&pm8110_l14>;
+ qcom,mdss-pan-res = <480 800>;
+ qcom,mdss-pan-bpp = <24>;
+ qcom,mdss-pan-dest = "display_1";
+ qcom,mdss-pan-porch-values = <40 8 160 10 2 12>;
+ qcom,mdss-pan-underflow-clr = <0xff>;
+ qcom,mdss-pan-bl-levels = <1 255>;
+ qcom,mdss-pan-bl-ctrl = "bl_ctrl_wled";
+ qcom,mdss-pan-dsi-mode = <0>;
+ qcom,mdss-pan-dsi-h-pulse-mode = <0>;
+ qcom,mdss-pan-dsi-h-power-stop = <0 0 0>;
+ qcom,mdss-pan-dsi-bllp-power-stop = <1 1>;
+ qcom,mdss-pan-dsi-traffic-mode = <1>;
+ qcom,mdss-pan-dsi-dst-format = <3>;
+ qcom,mdss-pan-dsi-vc = <0>;
+ qcom,mdss-pan-dsi-rgb-swap = <0>;
+ qcom,mdss-pan-dsi-data-lanes = <1 1 0 0>;
+ qcom,mdss-pan-dsi-dlane-swap = <0>;
+ qcom,mdss-pan-dsi-t-clk = <0x1b 0x04>;
+ qcom,mdss-pan-dsi-stream = <0>;
+ qcom,mdss-pan-dsi-mdp-tr = <0x0>; /* todo */
+ qcom,mdss-pan-dsi-dma-tr = <0x04>;
+ qcom,mdss-pan-dsi-frame-rate = <60>;
+ qcom,panel-phy-regulatorSettings = [09 08 05 00 20 03];
+ qcom,panel-phy-timingSettings = [5D 12 0C 00 33 38
+ 10 16 1E 03 04 00];
+ qcom,panel-phy-strengthCtrl = [ff 06];
+ qcom,panel-phy-bistCtrl = [03 03 00 00 0f 00];
+ qcom,panel-phy-laneConfig =
+ [80 45 00 00 01 66 /* lane0 */
+ 80 45 00 00 01 66 /* lane1 */
+ 80 45 00 00 01 66 /* lane2 */
+ 80 45 00 00 01 66 /* lane3 */
+ 40 67 00 00 01 88]; /* Clk */
+
+ qcom,on-cmds-dsi-state = "DSI_LP_MODE";
+ qcom,panel-on-cmds = [
+ 05 01 00 00 01 02
+ 01 00
+ 23 01 00 00 01 02
+ b0 04
+ 29 01 00 00 01 03
+ b3 02 00
+ 23 01 00 00 01 02
+ bd 00
+ 29 01 00 00 01 03
+ c0 18 66
+ 29 01 00 00 01 10
+ c1 23 31 99 21 20 00 30 28 0c 0c
+ 00 00 00 21 01
+ 29 01 00 00 01 07
+ c2 10 06 06 01 03 00
+ 29 01 00 00 01 19
+ c8 04 10 18 20 2e 46 3c 28 1f 18
+ 10 04 04 10 18 20 2e 46 3c 28 1f 18 10 04
+ 29 01 00 00 01 19
+ c9 04 10 18 20 2e 46 3c 28 1f 18
+ 10 04 04 10 18 20 2e 46 3c 28 1f 18 10 04
+ 29 01 00 00 01 19
+ ca 04 10 18 20 2e 46 3c 28 1f 18
+ 10 04 04 10 18 20 2e 46 3c 28 1f 18 10 04
+ 29 01 00 00 01 11
+ d0 29 03 ce a6 00 43 20 10 01 00
+ 01 01 00 03 01 00
+ 29 01 00 00 01 08
+ d1 18 0C 23 03 75 02 50
+ 23 01 00 00 01 02
+ d3 11
+ 29 01 00 00 01 03
+ d5 2a 2a
+ 29 01 00 00 01 03
+ de 01 41
+ 23 01 00 00 01 02
+ e6 51
+ 23 01 00 00 01 02
+ fa 03
+ 23 01 00 00 64 02
+ d6 28
+ 39 01 00 00 01 05
+ 2a 00 00 01 df
+ 39 01 00 00 01 05
+ 2b 00 00 03 1f
+ 15 01 00 00 01 02
+ 35 00
+ 39 01 00 00 01 03
+ 44 00 50
+ 15 01 00 00 01 02
+ 36 c1
+ 15 01 00 00 01 02
+ 3a 77
+ 05 01 00 00 96 02
+ 11 00
+ 05 01 00 00 64 02
+ 29 00
+ ];
+ qcom,panel-off-cmds = [05 01 00 00 32 02 28 00
+ 05 01 00 00 78 02 10 00];
+ qcom,off-cmds-dsi-state = "DSI_LP_MODE";
+ };
+};
diff --git a/arch/arm/boot/dts/mpq8092-regulator.dtsi b/arch/arm/boot/dts/mpq8092-regulator.dtsi
index b724a3d..e6866e5 100644
--- a/arch/arm/boot/dts/mpq8092-regulator.dtsi
+++ b/arch/arm/boot/dts/mpq8092-regulator.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -15,9 +15,9 @@
&spmi_bus {
- qcom,pm8644@1 {
+ qcom,pma8084@1 {
- pm8644_s3: regulator@1a00 {
+ pma8084_s3: regulator@1a00 {
regulator-min-microvolt = <1350000>;
regulator-max-microvolt = <1350000>;
qcom,enable-time = <500>;
@@ -27,7 +27,7 @@
status = "okay";
};
- pm8644_s4: regulator@1d00 {
+ pma8084_s4: regulator@1d00 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
qcom,enable-time = <500>;
@@ -37,7 +37,7 @@
status = "okay";
};
- pm8644_s5: regulator@2000 {
+ pma8084_s5: regulator@2000 {
regulator-min-microvolt = <2150000>;
regulator-max-microvolt = <2150000>;
qcom,enable-time = <500>;
@@ -45,7 +45,7 @@
status = "okay";
};
- pm8644_s6: regulator@2300 {
+ pma8084_s6: regulator@2300 {
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
qcom,enable-time = <500>;
@@ -53,7 +53,7 @@
status = "okay";
};
- pm8644_s7: regulator@2600 {
+ pma8084_s7: regulator@2600 {
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
qcom,enable-time = <500>;
@@ -61,7 +61,7 @@
status = "okay";
};
- pm8644_s8: regulator@2900 {
+ pma8084_s8: regulator@2900 {
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
qcom,enable-time = <500>;
@@ -69,8 +69,8 @@
status = "okay";
};
- pm8644_l1: regulator@4000 {
- parent-supply = <&pm8644_s3>;
+ pma8084_l1: regulator@4000 {
+ parent-supply = <&pma8084_s3>;
regulator-min-microvolt = <1225000>;
regulator-max-microvolt = <1225000>;
qcom,enable-time = <200>;
@@ -80,8 +80,8 @@
status = "okay";
};
- pm8644_l2: regulator@4100 {
- parent-supply = <&pm8644_s3>;
+ pma8084_l2: regulator@4100 {
+ parent-supply = <&pma8084_s3>;
regulator-min-microvolt = <900000>;
regulator-max-microvolt = <900000>;
qcom,enable-time = <200>;
@@ -89,8 +89,8 @@
status = "okay";
};
- pm8644_l3: regulator@4200 {
- parent-supply = <&pm8644_s3>;
+ pma8084_l3: regulator@4200 {
+ parent-supply = <&pma8084_s3>;
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
qcom,enable-time = <200>;
@@ -98,8 +98,8 @@
status = "okay";
};
- pm8644_l4: regulator@4300 {
- parent-supply = <&pm8644_s3>;
+ pma8084_l4: regulator@4300 {
+ parent-supply = <&pma8084_s3>;
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1000000>;
qcom,enable-time = <200>;
@@ -107,8 +107,8 @@
status = "okay";
};
- pm8644_l6: regulator@4500 {
- parent-supply = <&pm8644_s5>;
+ pma8084_l6: regulator@4500 {
+ parent-supply = <&pma8084_s5>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
qcom,enable-time = <200>;
@@ -116,7 +116,7 @@
status = "okay";
};
- pm8644_l8: regulator@4700 {
+ pma8084_l8: regulator@4700 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
qcom,enable-time = <200>;
@@ -124,7 +124,7 @@
status = "okay";
};
- pm8644_l9: regulator@4800 {
+ pma8084_l9: regulator@4800 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
qcom,enable-time = <200>;
@@ -132,7 +132,7 @@
status = "okay";
};
- pm8644_l10: regulator@4900 {
+ pma8084_l10: regulator@4900 {
regulator-min-microvolt = <2000000>;
regulator-max-microvolt = <2000000>;
qcom,enable-time = <200>;
@@ -140,8 +140,8 @@
status = "okay";
};
- pm8644_l11: regulator@4a00 {
- parent-supply = <&pm8644_s3>;
+ pma8084_l11: regulator@4a00 {
+ parent-supply = <&pma8084_s3>;
regulator-min-microvolt = <1300000>;
regulator-max-microvolt = <1300000>;
qcom,enable-time = <200>;
@@ -149,8 +149,8 @@
status = "okay";
};
- pm8644_l12: regulator@4b00 {
- parent-supply = <&pm8644_s5>;
+ pma8084_l12: regulator@4b00 {
+ parent-supply = <&pma8084_s5>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
qcom,enable-time = <200>;
@@ -158,7 +158,7 @@
status = "okay";
};
- pm8644_l13: regulator@4c00 {
+ pma8084_l13: regulator@4c00 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
qcom,enable-time = <200>;
@@ -166,8 +166,8 @@
status = "okay";
};
- pm8644_l14: regulator@4d00 {
- parent-supply = <&pm8644_s5>;
+ pma8084_l14: regulator@4d00 {
+ parent-supply = <&pma8084_s5>;
regulator-min-microvolt = <1000000>;
regulator-max-microvolt = <1000000>;
qcom,enable-time = <200>;
@@ -175,8 +175,8 @@
status = "okay";
};
- pm8644_l15: regulator@4e00 {
- parent-supply = <&pm8644_s5>;
+ pma8084_l15: regulator@4e00 {
+ parent-supply = <&pma8084_s5>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
qcom,enable-time = <200>;
@@ -184,8 +184,8 @@
status = "okay";
};
- pm8644_l16: regulator@4f00 {
- parent-supply = <&pm8644_s4>;
+ pma8084_l16: regulator@4f00 {
+ parent-supply = <&pma8084_s4>;
regulator-min-microvolt = <750000>;
regulator-max-microvolt = <750000>;
qcom,enable-time = <200>;
@@ -193,7 +193,7 @@
status = "okay";
};
- pm8644_l17: regulator@5000 {
+ pma8084_l17: regulator@5000 {
regulator-min-microvolt = <3150000>;
regulator-max-microvolt = <3150000>;
qcom,enable-time = <200>;
@@ -203,7 +203,7 @@
status = "okay";
};
- pm8644_l18: regulator@5100 {
+ pma8084_l18: regulator@5100 {
regulator-min-microvolt = <2850000>;
regulator-max-microvolt = <2850000>;
qcom,enable-time = <200>;
@@ -211,8 +211,8 @@
status = "okay";
};
- pm8644_l19: regulator@5200 {
- parent-supply = <&pm8644_s4>;
+ pma8084_l19: regulator@5200 {
+ parent-supply = <&pma8084_s4>;
regulator-min-microvolt = <1500000>;
regulator-max-microvolt = <1500000>;
qcom,enable-time = <200>;
@@ -220,7 +220,7 @@
status = "okay";
};
- pm8644_l20: regulator@5300 {
+ pma8084_l20: regulator@5300 {
regulator-min-microvolt = <2950000>;
regulator-max-microvolt = <2950000>;
qcom,enable-time = <200>;
@@ -228,7 +228,7 @@
status = "okay";
};
- pm8644_l21: regulator@5400 {
+ pma8084_l21: regulator@5400 {
regulator-min-microvolt = <2950000>;
regulator-max-microvolt = <2950000>;
qcom,enable-time = <200>;
@@ -236,7 +236,7 @@
status = "okay";
};
- pm8644_l22: regulator@5500 {
+ pma8084_l22: regulator@5500 {
regulator-min-microvolt = <2500000>;
regulator-max-microvolt = <2500000>;
qcom,enable-time = <200>;
@@ -244,7 +244,7 @@
status = "okay";
};
- pm8644_l23: regulator@5600 {
+ pma8084_l23: regulator@5600 {
regulator-min-microvolt = <3000000>;
regulator-max-microvolt = <3000000>;
qcom,enable-time = <200>;
@@ -252,7 +252,7 @@
status = "okay";
};
- pm8644_l24: regulator@5700 {
+ pma8084_l24: regulator@5700 {
regulator-min-microvolt = <3075000>;
regulator-max-microvolt = <3075000>;
qcom,enable-time = <200>;
@@ -260,27 +260,21 @@
status = "okay";
};
- pm8644_lvs1: regulator@8000 {
- parent-supply = <&pm8644_s4>;
+ pma8084_lvs1: regulator@8000 {
+ parent-supply = <&pma8084_s4>;
qcom,enable-time = <200>;
qcom,pull-down-enable = <1>;
status = "okay";
};
- pm8644_lvs2: regulator@8100 {
- parent-supply = <&pm8644_s4>;
+ pma8084_lvs2: regulator@8100 {
+ parent-supply = <&pma8084_s4>;
qcom,enable-time = <200>;
qcom,pull-down-enable = <1>;
status = "okay";
};
- pm8644_mvs1: regulator@8200 {
- qcom,enable-time = <200>;
- qcom,pull-down-enable = <1>;
- status = "okay";
- };
-
- pm8644_mvs2: regulator@8300 {
+ pma8084_mvs1: regulator@8400 {
qcom,enable-time = <200>;
qcom,pull-down-enable = <1>;
status = "okay";
diff --git a/arch/arm/boot/dts/mpq8092-sim.dts b/arch/arm/boot/dts/mpq8092-sim.dts
index 0cbfa33..fc07c59 100644
--- a/arch/arm/boot/dts/mpq8092-sim.dts
+++ b/arch/arm/boot/dts/mpq8092-sim.dts
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -28,7 +28,7 @@
};
};
-&pm8644_gpios {
+&pma8084_gpios {
gpio@c000 { /* GPIO 1 */
};
@@ -94,72 +94,9 @@
gpio@d500 { /* GPIO 22 */
};
-
- gpio@d600 { /* GPIO 23 */
- };
-
- gpio@d700 { /* GPIO 24 */
- };
-
- gpio@d800 { /* GPIO 25 */
- };
-
- gpio@d900 { /* GPIO 26 */
- };
-
- gpio@da00 { /* GPIO 27 */
- };
-
- gpio@db00 { /* GPIO 28 */
- };
-
- gpio@dc00 { /* GPIO 29 */
- };
-
- gpio@dd00 { /* GPIO 30 */
- };
-
- gpio@de00 { /* GPIO 31 */
- };
-
- gpio@df00 { /* GPIO 32 */
- };
-
- gpio@e000 { /* GPIO 33 */
- };
-
- gpio@e100 { /* GPIO 34 */
- };
-
- gpio@e200 { /* GPIO 35 */
- };
-
- gpio@e300 { /* GPIO 36 */
- };
-
- gpio@e400 { /* GPIO 37 */
- };
-
- gpio@e500 { /* GPIO 38 */
- };
-
- gpio@e600 { /* GPIO 39 */
- };
-
- gpio@e700 { /* GPIO 40 */
- };
-
- gpio@e800 { /* GPIO 41 */
- };
-
- gpio@e900 { /* GPIO 42 */
- };
-
- gpio@ea00 { /* GPIO 43 */
- };
};
-&pm8644_mpps {
+&pma8084_mpps {
mpp@a000 { /* MPP 1 */
};
@@ -177,4 +114,10 @@
mpp@a500 { /* MPP 6 */
};
+
+ mpp@a600 { /* MPP 7 */
+ };
+
+ mpp@a700 { /* MPP 8 */
+ };
};
diff --git a/arch/arm/boot/dts/mpq8092.dtsi b/arch/arm/boot/dts/mpq8092.dtsi
index 75f168d..c92dad9 100644
--- a/arch/arm/boot/dts/mpq8092.dtsi
+++ b/arch/arm/boot/dts/mpq8092.dtsi
@@ -85,7 +85,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -105,7 +105,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -151,5 +151,5 @@
status = "ok";
};
-/include/ "msm-pm8644.dtsi"
+/include/ "msm-pma8084.dtsi"
/include/ "mpq8092-regulator.dtsi"
diff --git a/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi b/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi
new file mode 100644
index 0000000..0de72b0
--- /dev/null
+++ b/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi
@@ -0,0 +1,381 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&rpm_bus {
+ rpm-regulator-smpa1 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <1>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s1 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-smpa3 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <3>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s3 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s3";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-smpa4 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <4>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s4 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s4";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa1 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <1>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l1 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l1";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa2 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <2>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l2 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l2";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa3 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <3>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l3 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa4 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <4>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l4 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l4";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa5 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <5>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l5 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l5";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa6 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <6>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l6 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l6";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa7 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <7>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l7 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l7";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa8 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <8>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <5000>;
+ status = "disabled";
+
+ regulator-l8 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l8";
+ qcom,set = <1>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa9 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <9>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l9 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l9";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa10 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <10>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l10 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l10";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa12 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <12>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l12 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l12";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa14 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <14>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l14 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l14";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa15 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <15>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l15 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l15";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa16 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <16>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l16 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l16";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa17 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <17>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l17 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l17";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa18 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <18>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l18 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l18";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa19 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <19>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l19 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l19";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa20 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <20>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <5000>;
+ status = "disabled";
+
+ regulator-l20 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l20";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa21 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <21>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l21 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l21";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa22 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <22>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l22 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l22";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/msm-pm8110.dtsi b/arch/arm/boot/dts/msm-pm8110.dtsi
index 28766cf..391c564 100644
--- a/arch/arm/boot/dts/msm-pm8110.dtsi
+++ b/arch/arm/boot/dts/msm-pm8110.dtsi
@@ -22,6 +22,160 @@
#address-cells = <1>;
#size-cells = <1>;
+ pm8110_chg: qcom,charger {
+ spmi-dev-container;
+ compatible = "qcom,qpnp-charger";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ status = "disabled";
+
+ qcom,vddmax-mv = <4200>;
+ qcom,vddsafe-mv = <4200>;
+ qcom,vinmin-mv = <4200>;
+ qcom,vbatdet-mv = <4100>;
+ qcom,ibatmax-ma = <1500>;
+ qcom,ibatterm-ma = <200>;
+ qcom,ibatsafe-ma = <1500>;
+ qcom,thermal-mitigation = <1500 700 600 325>;
+ qcom,vbatdet-delta-mv = <350>;
+ qcom,tchg-mins = <150>;
+
+ qcom,chgr@1000 {
+ status = "disabled";
+ reg = <0x1000 0x100>;
+ interrupts = <0x0 0x10 0x0>,
+ <0x0 0x10 0x1>,
+ <0x0 0x10 0x2>,
+ <0x0 0x10 0x3>,
+ <0x0 0x10 0x4>,
+ <0x0 0x10 0x5>,
+ <0x0 0x10 0x6>,
+ <0x0 0x10 0x7>;
+
+ interrupt-names = "vbat-det-lo",
+ "vbat-det-hi",
+ "chgwdog",
+ "state-change",
+ "trkl-chg-on",
+ "fast-chg-on",
+ "chg-failed",
+ "chg-done";
+ };
+
+ qcom,buck@1100 {
+ status = "disabled";
+ reg = <0x1100 0x100>;
+ interrupts = <0x0 0x11 0x0>,
+ <0x0 0x11 0x1>,
+ <0x0 0x11 0x2>,
+ <0x0 0x11 0x3>,
+ <0x0 0x11 0x4>,
+ <0x0 0x11 0x5>,
+ <0x0 0x11 0x6>;
+
+ interrupt-names = "vbat-ov",
+ "vreg-ov",
+ "overtemp",
+ "vchg-loop",
+ "ichg-loop",
+ "ibat-loop",
+ "vdd-loop";
+ };
+
+ qcom,bat-if@1200 {
+ status = "disabled";
+ reg = <0x1200 0x100>;
+ interrupts = <0x0 0x12 0x0>,
+ <0x0 0x12 0x1>,
+ <0x0 0x12 0x2>,
+ <0x0 0x12 0x3>,
+ <0x0 0x12 0x4>;
+
+ interrupt-names = "batt-pres",
+ "bat-temp-ok",
+ "bat-fet-on",
+ "vcp-on",
+ "psi";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "disabled";
+ reg = <0x1300 0x100>;
+ interrupts = <0 0x13 0x0>,
+ <0 0x13 0x1>,
+ <0x0 0x13 0x2>;
+
+ interrupt-names = "coarse-det-usb",
+ "usbin-valid",
+ "chg-gone";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "disabled";
+ reg = <0x1600 0x100>;
+ };
+ };
+
+ pm8110_gpios: gpios {
+ spmi-dev-container;
+ compatible = "qcom,qpnp-pin";
+ gpio-controller;
+ #gpio-cells = <2>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ label = "pm8110-gpio";
+
+ gpio@c000 {
+ reg = <0xc000 0x100>;
+ qcom,pin-num = <1>;
+ };
+
+ gpio@c100 {
+ reg = <0xc100 0x100>;
+ qcom,pin-num = <2>;
+ };
+
+ gpio@c200 {
+ reg = <0xc200 0x100>;
+ qcom,pin-num = <3>;
+ };
+
+ gpio@c300 {
+ reg = <0xc300 0x100>;
+ qcom,pin-num = <4>;
+ };
+ };
+
+ pm8110_mpps: mpps {
+ spmi-dev-container;
+ compatible = "qcom,qpnp-pin";
+ gpio-controller;
+ #gpio-cells = <2>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ label = "pm8110-mpp";
+
+ mpp@a000 {
+ reg = <0xa000 0x100>;
+ qcom,pin-num = <1>;
+ };
+
+ mpp@a100 {
+ reg = <0xa100 0x100>;
+ qcom,pin-num = <2>;
+ };
+
+ mpp@a200 {
+ reg = <0xa200 0x100>;
+ qcom,pin-num = <3>;
+ };
+
+ mpp@a300 {
+ reg = <0xa300 0x100>;
+ qcom,pin-num = <4>;
+ };
+ };
+
pm8110_vadc: vadc@3100 {
compatible = "qcom,qpnp-vadc";
reg = <0x3100 0x100>;
@@ -75,7 +229,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
@@ -106,6 +259,18 @@
interrupts = <0x0 0x61 0x1>;
};
};
+
+ qcom,leds@a100 {
+ compatible = "qcom,leds-qpnp";
+ reg = <0xa100 0x100>;
+ label = "mpp";
+ };
+
+ qcom,leds@a200 {
+ compatible = "qcom,leds-qpnp";
+ reg = <0xa200 0x100>;
+ label = "mpp";
+ };
};
qcom,pm8110@1 {
@@ -333,5 +498,12 @@
reg = <0x5500 0x100>;
status = "disabled";
};
+
+ qcom,vibrator@c000 {
+ compatible = "qcom,qpnp-vibrator";
+ reg = <0xc000 0x100>;
+ label = "vibrator";
+ status = "disabled";
+ };
};
};
diff --git a/arch/arm/boot/dts/msm-pm8226.dtsi b/arch/arm/boot/dts/msm-pm8226.dtsi
index f702abb..41920d5 100644
--- a/arch/arm/boot/dts/msm-pm8226.dtsi
+++ b/arch/arm/boot/dts/msm-pm8226.dtsi
@@ -56,16 +56,17 @@
#size-cells = <1>;
status = "disabled";
- qcom,chg-vddmax-mv = <4200>;
- qcom,chg-vddsafe-mv = <4200>;
- qcom,chg-vinmin-mv = <4200>;
- qcom,chg-vbatdet-mv = <4100>;
- qcom,chg-ibatmax-ma = <1500>;
- qcom,chg-ibatterm-ma = <200>;
- qcom,chg-ibatsafe-ma = <1500>;
- qcom,chg-thermal-mitigation = <1500 700 600 325>;
+ qcom,vddmax-mv = <4200>;
+ qcom,vddsafe-mv = <4200>;
+ qcom,vinmin-mv = <4200>;
+ qcom,vbatdet-delta-mv = <150>;
+ qcom,ibatmax-ma = <1500>;
+ qcom,ibatterm-ma = <200>;
+ qcom,ibatsafe-ma = <1500>;
+ qcom,thermal-mitigation = <1500 700 600 325>;
+ qcom,tchg-mins = <150>;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "disabled";
reg = <0x1000 0x100>;
interrupts = <0x0 0x10 0x0>,
@@ -87,7 +88,7 @@
"chg-done";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "disabled";
reg = <0x1100 0x100>;
interrupts = <0x0 0x11 0x0>,
@@ -107,7 +108,7 @@
"vdd-loop";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
status = "disabled";
reg = <0x1200 0x100>;
interrupts = <0x0 0x12 0x0>,
@@ -124,7 +125,7 @@
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "disabled";
reg = <0x1300 0x100>;
interrupts = <0 0x13 0x0>,
@@ -136,7 +137,7 @@
"chg-gone";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "disabled";
reg = <0x1500 0x100>;
interrupts = <0x0 0x15 0x0>,
@@ -367,7 +368,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
@@ -686,6 +686,54 @@
label = "wled";
};
+ pwm@b100 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb100 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <0>;
+ };
+
+ pwm@b200 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb200 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <1>;
+ };
+
+ pwm@b300 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb300 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <2>;
+ };
+
+ pwm@b400 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb400 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <3>;
+ };
+
+ pwm@b500 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb500 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <4>;
+ };
+
+ pwm@b600 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb600 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <5>;
+ };
+
regulator@8000 {
regulator-name = "8226_lvs1";
reg = <0x8000 0x100>;
diff --git a/arch/arm/boot/dts/msm-pm8941.dtsi b/arch/arm/boot/dts/msm-pm8941.dtsi
index 1881311..eadd5d3 100644
--- a/arch/arm/boot/dts/msm-pm8941.dtsi
+++ b/arch/arm/boot/dts/msm-pm8941.dtsi
@@ -171,21 +171,22 @@
#size-cells = <1>;
status = "disabled";
- qcom,chg-vddmax-mv = <4200>;
- qcom,chg-vddsafe-mv = <4200>;
- qcom,chg-vinmin-mv = <4200>;
- qcom,chg-ibatmax-ma = <1500>;
- qcom,chg-ibatsafe-ma = <1500>;
- qcom,chg-thermal-mitigation = <1500 700 600 325>;
- qcom,chg-cool-bat-decidegc = <100>;
- qcom,chg-cool-bat-mv = <4100>;
- qcom,chg-ibatmax-warm-ma = <350>;
- qcom,chg-warm-bat-decidegc = <450>;
- qcom,chg-warm-bat-mv = <4100>;
- qcom,chg-ibatmax-cool-ma = <350>;
- qcom,chg-vbatdet-delta-mv = <350>;
+ qcom,vddmax-mv = <4200>;
+ qcom,vddsafe-mv = <4200>;
+ qcom,vinmin-mv = <4200>;
+ qcom,ibatmax-ma = <1500>;
+ qcom,ibatsafe-ma = <1500>;
+ qcom,thermal-mitigation = <1500 700 600 325>;
+ qcom,cool-bat-decidegc = <100>;
+ qcom,cool-bat-mv = <4100>;
+ qcom,ibatmax-warm-ma = <350>;
+ qcom,warm-bat-decidegc = <450>;
+ qcom,warm-bat-mv = <4100>;
+ qcom,ibatmax-cool-ma = <350>;
+ qcom,vbatdet-delta-mv = <350>;
+ qcom,tchg-mins = <150>;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "disabled";
reg = <0x1000 0x100>;
interrupts = <0x0 0x10 0x0>,
@@ -207,7 +208,7 @@
"chg-done";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "disabled";
reg = <0x1100 0x100>;
interrupts = <0x0 0x11 0x0>,
@@ -227,7 +228,7 @@
"vdd-loop";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
status = "disabled";
reg = <0x1200 0x100>;
interrupts = <0x0 0x12 0x0>,
@@ -244,7 +245,7 @@
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "disabled";
reg = <0x1300 0x100>;
interrupts = <0 0x13 0x0>,
@@ -256,7 +257,7 @@
"chg-gone";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
status = "disabled";
reg = <0x1400 0x100>;
interrupts = <0x0 0x14 0x0>,
@@ -266,7 +267,7 @@
"dcin-valid";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "disabled";
reg = <0x1500 0x100>;
interrupts = <0x0 0x15 0x0>,
@@ -790,7 +791,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
@@ -849,7 +849,7 @@
qcom,decimation = <0>;
qcom,pre-div-channel-scaling = <0>;
qcom,calibration-type = "absolute";
- qcom,scale-function = <1>;
+ qcom,scale-function = <3>;
qcom,hw-settle-time = <0>;
qcom,fast-avg-setup = <3>;
qcom,btm-channel-number = <0x70>;
diff --git a/arch/arm/boot/dts/msm-pm8644.dtsi b/arch/arm/boot/dts/msm-pma8084.dtsi
similarity index 76%
rename from arch/arm/boot/dts/msm-pm8644.dtsi
rename to arch/arm/boot/dts/msm-pma8084.dtsi
index 17a6b0b..30525aa 100644
--- a/arch/arm/boot/dts/msm-pm8644.dtsi
+++ b/arch/arm/boot/dts/msm-pma8084.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -16,20 +16,20 @@
interrupt-controller;
#interrupt-cells = <3>;
- qcom,pm8644@0 {
+ qcom,pma8084@0 {
spmi-slave-container;
reg = <0x0>;
#address-cells = <1>;
#size-cells = <1>;
- pm8644_gpios: gpios {
+ pma8084_gpios: gpios {
spmi-dev-container;
compatible = "qcom,qpnp-pin";
gpio-controller;
#gpio-cells = <2>;
#address-cells = <1>;
#size-cells = <1>;
- label = "pm8644-gpio";
+ label = "pma8084-gpio";
gpio@c000 {
reg = <0xc000 0x100>;
@@ -140,121 +140,16 @@
reg = <0xd500 0x100>;
qcom,pin-num = <22>;
};
-
- gpio@d600 {
- reg = <0xd600 0x100>;
- qcom,pin-num = <23>;
- };
-
- gpio@d700 {
- reg = <0xd700 0x100>;
- qcom,pin-num = <24>;
- };
-
- gpio@d800 {
- reg = <0xd800 0x100>;
- qcom,pin-num = <25>;
- };
-
- gpio@d900 {
- reg = <0xd900 0x100>;
- qcom,pin-num = <26>;
- };
-
- gpio@da00 {
- reg = <0xda00 0x100>;
- qcom,pin-num = <27>;
- };
-
- gpio@db00 {
- reg = <0xdb00 0x100>;
- qcom,pin-num = <28>;
- };
-
- gpio@dc00 {
- reg = <0xdc00 0x100>;
- qcom,pin-num = <29>;
- };
-
- gpio@dd00 {
- reg = <0xdd00 0x100>;
- qcom,pin-num = <30>;
- };
-
- gpio@de00 {
- reg = <0xde00 0x100>;
- qcom,pin-num = <31>;
- };
-
- gpio@df00 {
- reg = <0xdf00 0x100>;
- qcom,pin-num = <32>;
- };
-
- gpio@e000 {
- reg = <0xe000 0x100>;
- qcom,pin-num = <33>;
- };
-
- gpio@e100 {
- reg = <0xe100 0x100>;
- qcom,pin-num = <34>;
- };
-
- gpio@e200 {
- reg = <0xe200 0x100>;
- qcom,pin-num = <35>;
- };
-
- gpio@e300 {
- reg = <0xe300 0x100>;
- qcom,pin-num = <36>;
- };
-
- gpio@e400 {
- reg = <0xe400 0x100>;
- qcom,pin-num = <37>;
- };
-
- gpio@e500 {
- reg = <0xe500 0x100>;
- qcom,pin-num = <38>;
- };
-
- gpio@e600 {
- reg = <0xe600 0x100>;
- qcom,pin-num = <39>;
- };
-
- gpio@e700 {
- reg = <0xe700 0x100>;
- qcom,pin-num = <40>;
- };
-
- gpio@e800 {
- reg = <0xe800 0x100>;
- qcom,pin-num = <41>;
- };
-
- gpio@e900 {
- reg = <0xe900 0x100>;
- qcom,pin-num = <42>;
- };
-
- gpio@ea00 {
- reg = <0xea00 0x100>;
- qcom,pin-num = <43>;
- };
};
- pm8644_mpps: mpps {
+ pma8084_mpps: mpps {
spmi-dev-container;
compatible = "qcom,qpnp-pin";
gpio-controller;
#gpio-cells = <2>;
#address-cells = <1>;
#size-cells = <1>;
- label = "pm8644-mpp";
+ label = "pma8084-mpp";
mpp@a000 {
reg = <0xa000 0x100>;
@@ -285,17 +180,27 @@
reg = <0xa500 0x100>;
qcom,pin-num = <6>;
};
+
+ mpp@a600 {
+ reg = <0xa600 0x100>;
+ qcom,pin-num = <7>;
+ };
+
+ mpp@a700 {
+ reg = <0xa700 0x100>;
+ qcom,pin-num = <8>;
+ };
};
};
- qcom,pm8644@1 {
+ qcom,pma8084@1 {
spmi-slave-container;
reg = <0x1>;
#address-cells = <1>;
#size-cells = <1>;
regulator@1400 {
- regulator-name = "8644_s1";
+ regulator-name = "8084_s1";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -315,7 +220,7 @@
};
regulator@1700 {
- regulator-name = "8644_s2";
+ regulator-name = "8084_s2";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -335,7 +240,7 @@
};
regulator@1a00 {
- regulator-name = "8644_s3";
+ regulator-name = "8084_s3";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -355,7 +260,7 @@
};
regulator@1d00 {
- regulator-name = "8644_s4";
+ regulator-name = "8084_s4";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -375,7 +280,7 @@
};
regulator@2000 {
- regulator-name = "8644_s5";
+ regulator-name = "8084_s5";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -395,7 +300,7 @@
};
regulator@2300 {
- regulator-name = "8644_s6";
+ regulator-name = "8084_s6";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -415,7 +320,7 @@
};
regulator@2600 {
- regulator-name = "8644_s7";
+ regulator-name = "8084_s7";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -435,7 +340,7 @@
};
regulator@2900 {
- regulator-name = "8644_s8";
+ regulator-name = "8084_s8";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -455,7 +360,7 @@
};
regulator@2c00 {
- regulator-name = "8644_s9";
+ regulator-name = "8084_s9";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -475,7 +380,7 @@
};
regulator@2f00 {
- regulator-name = "8644_s10";
+ regulator-name = "8084_s10";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -495,7 +400,7 @@
};
regulator@3200 {
- regulator-name = "8644_s11";
+ regulator-name = "8084_s11";
spmi-dev-container;
#address-cells = <1>;
#size-cells = <1>;
@@ -514,209 +419,248 @@
};
};
+ regulator@3500 {
+ regulator-name = "8084_s12";
+ spmi-dev-container;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "qcom,qpnp-regulator";
+ reg = <0x3500 0x300>;
+ status = "disabled";
+
+ qcom,ctl@3500 {
+ reg = <0x3500 0x100>;
+ };
+ qcom,ps@3600 {
+ reg = <0x3600 0x100>;
+ };
+ qcom,freq@3700 {
+ reg = <0x3700 0x100>;
+ };
+ };
+
regulator@4000 {
- regulator-name = "8644_l1";
+ regulator-name = "8084_l1";
reg = <0x4000 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4100 {
- regulator-name = "8644_l2";
+ regulator-name = "8084_l2";
reg = <0x4100 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4200 {
- regulator-name = "8644_l3";
+ regulator-name = "8084_l3";
reg = <0x4200 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4300 {
- regulator-name = "8644_l4";
+ regulator-name = "8084_l4";
reg = <0x4300 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4400 {
- regulator-name = "8644_l5";
+ regulator-name = "8084_l5";
reg = <0x4400 0x100>;
compatible = "qcom,qpnp-regulator";
- qcom,force-type = <0x04 0x10>;
status = "disabled";
};
regulator@4500 {
- regulator-name = "8644_l6";
+ regulator-name = "8084_l6";
reg = <0x4500 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4600 {
- regulator-name = "8644_l7";
+ regulator-name = "8084_l7";
reg = <0x4600 0x100>;
compatible = "qcom,qpnp-regulator";
- qcom,force-type = <0x04 0x10>;
status = "disabled";
};
regulator@4700 {
- regulator-name = "8644_l8";
+ regulator-name = "8084_l8";
reg = <0x4700 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4800 {
- regulator-name = "8644_l9";
+ regulator-name = "8084_l9";
reg = <0x4800 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4900 {
- regulator-name = "8644_l10";
+ regulator-name = "8084_l10";
reg = <0x4900 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4a00 {
- regulator-name = "8644_l11";
+ regulator-name = "8084_l11";
reg = <0x4a00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4b00 {
- regulator-name = "8644_l12";
+ regulator-name = "8084_l12";
reg = <0x4b00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4c00 {
- regulator-name = "8644_l13";
+ regulator-name = "8084_l13";
reg = <0x4c00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4d00 {
- regulator-name = "8644_l14";
+ regulator-name = "8084_l14";
reg = <0x4d00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4e00 {
- regulator-name = "8644_l15";
+ regulator-name = "8084_l15";
reg = <0x4e00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@4f00 {
- regulator-name = "8644_l16";
+ regulator-name = "8084_l16";
reg = <0x4f00 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5000 {
- regulator-name = "8644_l17";
+ regulator-name = "8084_l17";
reg = <0x5000 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5100 {
- regulator-name = "8644_l18";
+ regulator-name = "8084_l18";
reg = <0x5100 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5200 {
- regulator-name = "8644_l19";
+ regulator-name = "8084_l19";
reg = <0x5200 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5300 {
- regulator-name = "8644_l20";
+ regulator-name = "8084_l20";
reg = <0x5300 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5400 {
- regulator-name = "8644_l21";
+ regulator-name = "8084_l21";
reg = <0x5400 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5500 {
- regulator-name = "8644_l22";
+ regulator-name = "8084_l22";
reg = <0x5500 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5600 {
- regulator-name = "8644_l23";
+ regulator-name = "8084_l23";
reg = <0x5600 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5700 {
- regulator-name = "8644_l24";
+ regulator-name = "8084_l24";
reg = <0x5700 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@5800 {
- regulator-name = "8644_l25";
+ regulator-name = "8084_l25";
reg = <0x5800 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
+ regulator@5900 {
+ regulator-name = "8084_l26";
+ reg = <0x5900 0x100>;
+ compatible = "qcom,qpnp-regulator";
+ status = "disabled";
+ };
+
+ regulator@5a00 {
+ regulator-name = "8084_l27";
+ reg = <0x5a00 0x100>;
+ compatible = "qcom,qpnp-regulator";
+ status = "disabled";
+ };
+
regulator@8000 {
- regulator-name = "8644_lvs1";
+ regulator-name = "8084_lvs1";
reg = <0x8000 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@8100 {
- regulator-name = "8644_lvs2";
+ regulator-name = "8084_lvs2";
reg = <0x8100 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@8200 {
- regulator-name = "8644_mvs1";
+ regulator-name = "8084_lvs3";
reg = <0x8200 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
regulator@8300 {
- regulator-name = "8644_mvs2";
+ regulator-name = "8084_lvs4";
reg = <0x8300 0x100>;
compatible = "qcom,qpnp-regulator";
status = "disabled";
};
+
+ regulator@8400 {
+ regulator-name = "8084_mvs1";
+ reg = <0x8400 0x100>;
+ compatible = "qcom,qpnp-regulator";
+ status = "disabled";
+ };
};
};
diff --git a/arch/arm/boot/dts/msm8226-cdp.dts b/arch/arm/boot/dts/msm8226-cdp.dts
index bf2ab4f..b203540 100644
--- a/arch/arm/boot/dts/msm8226-cdp.dts
+++ b/arch/arm/boot/dts/msm8226-cdp.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -131,7 +131,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -140,6 +140,30 @@
qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
qcom,nonremovable;
+ status = "disabled";
+};
+
+&sdhc_1 {
+ vdd-supply = <&pm8226_l17>;
+ qcom,vdd-always-on;
+ qcom,vdd-lpm-sup;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <800 500000>;
+
+ vdd-io-supply = <&pm8226_l6>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-voltage-level = <1800000 1800000>;
+ qcom,vdd-io-current-level = <250 154000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+ qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
+ qcom,nonremovable;
+
status = "ok";
};
@@ -156,7 +180,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -177,6 +201,38 @@
interrupt-names = "core_irq", "bam_irq", "status_irq";
cd-gpios = <&msmgpio 38 0x1>;
+ status = "disabled";
+};
+
+&sdhc_2 {
+ vdd-supply = <&pm8226_l18>;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <9000 800000>;
+
+ vdd-io-supply = <&pm8226_l21>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-lpm-sup;
+ qcom,vdd-io-voltage-level = <1800000 2950000>;
+ qcom,vdd-io-current-level = <6 22000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+
+ #address-cells = <0>;
+ interrupt-parent = <&sdhc_2>;
+ interrupts = <0 1 2>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0xffffffff>;
+ interrupt-map = <0 &intc 0 125 0
+ 1 &intc 0 221 0
+ 2 &msmgpio 38 0x3>;
+ interrupt-names = "hc_irq", "pwr_irq", "status_irq";
+ cd-gpios = <&msmgpio 38 0x1>;
+
status = "ok";
};
@@ -210,7 +266,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -220,7 +276,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
@@ -272,6 +328,6 @@
};
&pm8226_chg {
- qcom,chg-charging-disabled;
- qcom,chg-use-default-batt-values;
+ qcom,charging-disabled;
+ qcom,use-default-batt-values;
};
diff --git a/arch/arm/boot/dts/msm8226-ion.dtsi b/arch/arm/boot/dts/msm8226-ion.dtsi
index 9cef5a9..9ada271 100644
--- a/arch/arm/boot/dts/msm8226-ion.dtsi
+++ b/arch/arm/boot/dts/msm8226-ion.dtsi
@@ -38,9 +38,7 @@
qcom,ion-heap@27 { /* QSECOM HEAP */
compatible = "qcom,msm-ion-reserve";
reg = <27>;
- qcom,heap-align = <0x1000>;
- qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
- qcom,memory-reservation-size = <0x780000>;
+ linux,contiguous-region = <&qsecom_mem>;
};
qcom,ion-heap@28 { /* AUDIO HEAP */
@@ -50,5 +48,19 @@
qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
qcom,memory-reservation-size = <0x314000>;
};
+ qcom,ion-heap@23 { /* OTHER PIL HEAP */
+ compatible = "qcom,msm-ion-reserve";
+ reg = <23>;
+ qcom,heap-align = <0x1000>;
+ qcom,memory-fixed = <0x06400000 0x2000000>;
+ };
+ qcom,ion-heap@26 { /* MODEM PIL HEAP */
+ compatible = "qcom,msm-ion-reserve";
+ reg = <26>;
+ qcom,heap-align = <0x1000>;
+ qcom,memory-fixed = <0x08400000 0x4E00000>;
+
+ };
+
};
};
diff --git a/arch/arm/boot/dts/msm8226-mdss.dtsi b/arch/arm/boot/dts/msm8226-mdss.dtsi
index 7ab76f1..5aa39d3 100644
--- a/arch/arm/boot/dts/msm8226-mdss.dtsi
+++ b/arch/arm/boot/dts/msm8226-mdss.dtsi
@@ -65,7 +65,6 @@
vddio-supply = <&pm8226_l8>;
vdda-supply = <&pm8226_l4>;
qcom,supply-names = "vdd", "vddio", "vdda";
- qcom,supply-type = "regulator", "regulator", "regulator";
qcom,supply-min-voltage-level = <2800000 1800000 1200000>;
qcom,supply-max-voltage-level = <2800000 1800000 1200000>;
qcom,supply-peak-current = <150000 100000 100000>;
diff --git a/arch/arm/boot/dts/msm8226-mtp.dts b/arch/arm/boot/dts/msm8226-mtp.dts
index b0a4a3d..1f8a773 100644
--- a/arch/arm/boot/dts/msm8226-mtp.dts
+++ b/arch/arm/boot/dts/msm8226-mtp.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -123,7 +123,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -132,6 +132,30 @@
qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
qcom,nonremovable;
+ status = "disabled";
+};
+
+&sdhc_1 {
+ vdd-supply = <&pm8226_l17>;
+ qcom,vdd-always-on;
+ qcom,vdd-lpm-sup;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <800 500000>;
+
+ vdd-io-supply = <&pm8226_l6>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-voltage-level = <1800000 1800000>;
+ qcom,vdd-io-current-level = <250 154000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+ qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
+ qcom,nonremovable;
+
status = "ok";
};
@@ -148,7 +172,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -166,6 +190,38 @@
interrupt-names = "core_irq", "bam_irq", "status_irq";
cd-gpios = <&msmgpio 38 0x1>;
+ status = "disabled";
+};
+
+&sdhc_2 {
+ vdd-supply = <&pm8226_l18>;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <9000 800000>;
+
+ vdd-io-supply = <&pm8226_l21>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-lpm-sup;
+ qcom,vdd-io-voltage-level = <1800000 2950000>;
+ qcom,vdd-io-current-level = <6 22000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+
+ #address-cells = <0>;
+ interrupt-parent = <&sdhc_2>;
+ interrupts = <0 1 2>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0xffffffff>;
+ interrupt-map = <0 &intc 0 125 0
+ 1 &intc 0 221 0
+ 2 &msmgpio 38 0x3>;
+ interrupt-names = "hc_irq", "pwr_irq", "status_irq";
+ cd-gpios = <&msmgpio 38 0x1>;
+
status = "ok";
};
@@ -203,7 +259,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -213,7 +269,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
@@ -303,3 +359,7 @@
&pm8226_bms {
status = "ok";
};
+
+&pm8226_chg {
+ qcom,charging-disabled;
+};
diff --git a/arch/arm/boot/dts/msm8226-pm.dtsi b/arch/arm/boot/dts/msm8226-pm.dtsi
index 2613e11..b6fbd5b 100644
--- a/arch/arm/boot/dts/msm8226-pm.dtsi
+++ b/arch/arm/boot/dts/msm8226-pm.dtsi
@@ -114,7 +114,7 @@
qcom,type = <0x61706d73>; /* "smpa" */
qcom,id = <0x01>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <5>; /* Super Turbo */
+ qcom,init-value = <3>; /* SVS SOC */
};
qcom,lpm-resources@1 {
@@ -123,7 +123,7 @@
qcom,type = <0x616F646C>; /* "ldoa" */
qcom,id = <0x03>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <3>; /* Active */
+ qcom,init-value = <3>; /* SVS SOC */
};
qcom,lpm-resources@2 {
@@ -366,6 +366,7 @@
reg = <0xfe805664 0x40>;
qcom,pc-mode = "tz_l2_int";
qcom,use-sync-timer;
+ qcom,pc-resets-timer;
};
qcom,rpm-log@fc19dc00 {
@@ -379,9 +380,9 @@
qcom,offset-page-indices = <56>;
};
- qcom,rpm-stats@0xfc19dbd0{
+ qcom,rpm-stats@fc19dba0 {
compatible = "qcom,rpm-stats";
- reg = <0xfc19dbd0 0x1000>;
+ reg = <0xfc19dba0 0x1000>;
reg-names = "phys_addr_base";
qcom,sleep-stats-version = <2>;
};
diff --git a/arch/arm/boot/dts/msm8226-qrd.dts b/arch/arm/boot/dts/msm8226-qrd.dts
index 7d4f0d5..660fb3e 100644
--- a/arch/arm/boot/dts/msm8226-qrd.dts
+++ b/arch/arm/boot/dts/msm8226-qrd.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -123,7 +123,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -132,6 +132,30 @@
qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
qcom,nonremovable;
+ status = "disabled";
+};
+
+&sdhc_1 {
+ vdd-supply = <&pm8226_l17>;
+ qcom,vdd-always-on;
+ qcom,vdd-lpm-sup;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <800 500000>;
+
+ vdd-io-supply = <&pm8226_l6>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-voltage-level = <1800000 1800000>;
+ qcom,vdd-io-current-level = <250 154000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+ qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
+ qcom,nonremovable;
+
status = "ok";
};
@@ -148,7 +172,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -169,6 +193,38 @@
interrupt-names = "core_irq", "bam_irq", "status_irq";
cd-gpios = <&msmgpio 38 0x1>;
+ status = "disabled";
+};
+
+&sdhc_2 {
+ vdd-supply = <&pm8226_l18>;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <9000 800000>;
+
+ vdd-io-supply = <&pm8226_l21>;
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-lpm-sup;
+ qcom,vdd-io-voltage-level = <1800000 2950000>;
+ qcom,vdd-io-current-level = <6 22000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
+
+ #address-cells = <0>;
+ interrupt-parent = <&sdhc_2>;
+ interrupts = <0 1 2>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0xffffffff>;
+ interrupt-map = <0 &intc 0 125 0
+ 1 &intc 0 221 0
+ 2 &msmgpio 38 0x3>;
+ interrupt-names = "hc_irq", "pwr_irq", "status_irq";
+ cd-gpios = <&msmgpio 38 0x1>;
+
status = "ok";
};
@@ -213,7 +269,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -223,7 +279,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
diff --git a/arch/arm/boot/dts/msm8226-regulator.dtsi b/arch/arm/boot/dts/msm8226-regulator.dtsi
index 70731d2..4551f03 100644
--- a/arch/arm/boot/dts/msm8226-regulator.dtsi
+++ b/arch/arm/boot/dts/msm8226-regulator.dtsi
@@ -67,7 +67,7 @@
regulator-min-microvolt = <1>;
regulator-max-microvolt = <7>;
qcom,use-voltage-corner;
- qcom,consumer-supplies = "vdd_dig", "", "vdd_sr2_dig", "";
+ qcom,consumer-supplies = "vdd_dig", "";
};
pm8226_s1_corner_ao: regulator-s1-corner-ao {
compatible = "qcom,rpm-regulator-smd";
@@ -76,6 +76,7 @@
regulator-min-microvolt = <1>;
regulator-max-microvolt = <7>;
qcom,use-voltage-corner;
+ qcom,consumer-supplies = "vdd_sr2_dig", "";
};
};
diff --git a/arch/arm/boot/dts/msm8226-sim.dts b/arch/arm/boot/dts/msm8226-sim.dts
index a86dc48..00c0e2e 100644
--- a/arch/arm/boot/dts/msm8226-sim.dts
+++ b/arch/arm/boot/dts/msm8226-sim.dts
@@ -36,7 +36,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
vdd-supply = <&pm8226_l17>;
@@ -62,7 +62,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
diff --git a/arch/arm/boot/dts/msm8226-smp2p.dtsi b/arch/arm/boot/dts/msm8226-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8226-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8226-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8226.dtsi b/arch/arm/boot/dts/msm8226.dtsi
index 16a7177..cdb79d6 100644
--- a/arch/arm/boot/dts/msm8226.dtsi
+++ b/arch/arm/boot/dts/msm8226.dtsi
@@ -49,6 +49,8 @@
aliases {
spi0 = &spi_0;
+ sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
+ sdhc2 = &sdhc_2; /* SDC2 SD card slot */
};
memory {
@@ -57,6 +59,19 @@
reg = <0 0x3800000>;
label = "secure_mem";
};
+
+ qsecom_mem: qsecom_region {
+ linux,contiguous-region;
+ reg = <0 0x780000>;
+ label = "qsecom_mem";
+ };
+
+ };
+
+ qcom,mpm2-sleep-counter@fc4a3000 {
+ compatible = "qcom,mpm2-sleep-counter";
+ reg = <0xfc4a3000 0x1000>;
+ clock-frequency = <32768>;
};
timer {
@@ -65,6 +80,65 @@
clock-frequency = <19200000>;
};
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
+ clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 8 0x4>,
+ <0 7 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+ };
+
qcom,vidc@fdc00000 {
compatible = "qcom,msm-vidc";
reg = <0xfdc00000 0xff000>;
@@ -168,6 +242,8 @@
HSUSB_3p3-supply = <&pm8226_l20>;
qcom,vdd-voltage-level = <1 5 7>;
+ qcom,hsusb-otg-phy-init-seq =
+ <0x44 0x80 0x68 0x81 0x24 0x82 0x13 0x83 0xffffffff>;
qcom,hsusb-otg-phy-type = <2>;
qcom,hsusb-otg-mode = <1>;
qcom,hsusb-otg-otg-control = <2>;
@@ -185,6 +261,7 @@
android_usb@fe8050c8 {
compatible = "qcom,android-usb";
reg = <0xfe8050c8 0xc8>;
+ qcom,android-usb-cdrom;
};
wcd9xxx_intc: wcd9xxx-irq {
@@ -234,6 +311,12 @@
qcom,cdc-vdd-cx-voltage = <1200000 1200000>;
qcom,cdc-vdd-cx-current = <10000>;
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-vdd-h",
+ "cdc-vdd-px",
+ "cdc-vdd-a-1p2v",
+ "cdc-vdd-cx";
+
qcom,cdc-micbias-ldoh-v = <0x3>;
qcom,cdc-micbias-cfilt1-mv = <1800>;
qcom,cdc-micbias-cfilt2-mv = <1800>;
@@ -412,8 +495,9 @@
qcom,wcnss-wlan@fb000000 {
compatible = "qcom,wcnss_wlan";
- reg = <0xfb000000 0x280000>;
- reg-names = "wcnss_mmio";
+ reg = <0xfb000000 0x280000>,
+ <0xf9011008 0x04>;
+ reg-names = "wcnss_mmio", "wcnss_fiq";
interrupts = <0 145 0 0 146 0>;
interrupt-names = "wcnss_wlantx_irq", "wcnss_wlanrx_irq";
@@ -448,7 +532,7 @@
qcom,smem@fa00000 {
compatible = "qcom,smem";
- reg = <0xfa00000 0x200000>,
+ reg = <0xfa00000 0x100000>,
<0xf9011000 0x1000>,
<0xfc428000 0x4000>;
reg-names = "smem", "irq-reg-base", "aux-mem1";
@@ -535,6 +619,18 @@
status = "disabled";
};
+ sdhc_1: sdhci@f9824900 {
+ compatible = "qcom,sdhci-msm";
+ reg = <0xf9824900 0x11c>, <0xf9824000 0x800>;
+ reg-names = "hc_mem", "core_mem";
+
+ interrupts = <0 123 0>, <0 138 0>;
+ interrupt-names = "hc_irq", "pwr_irq";
+
+ qcom,bus-width = <8>;
+ status = "disabled";
+ };
+
sdcc2: qcom,sdcc@f98a4000 {
cell-index = <2>; /* SDC2 SD card slot */
compatible = "qcom,msm-sdcc";
@@ -550,6 +646,18 @@
status = "disabled";
};
+ sdhc_2: sdhci@f98a4900 {
+ compatible = "qcom,sdhci-msm";
+ reg = <0xf98a4900 0x11c>, <0xf98a4000 0x800>;
+ reg-names = "hc_mem", "core_mem";
+
+ interrupts = <0 125 0>, <0 221 0>;
+ interrupt-names = "hc_irq", "pwr_irq";
+
+ qcom,bus-width = <4>;
+ status = "disabled";
+ };
+
spmi_bus: qcom,spmi@fc4c0000 {
cell-index = <0>;
compatible = "qcom,spmi-pmic-arb";
@@ -585,7 +693,7 @@
reg = <0xf9927000 0x1000>;
interrupt-names = "qup_err_intr";
interrupts = <0 99 0>;
- qcom,i2c-bus-freq = <100000>;
+ qcom,i2c-bus-freq = <384000>;
qcom,i2c-src-freq = <19200000>;
};
@@ -594,7 +702,6 @@
reg = <0xf9011050 0x8>;
reg-names = "rcg_base";
a7_cpu-supply = <&apc_vreg_corner>;
- a7_mem-supply = <&pm8226_l3>;
};
qcom,ocmem@fdd00000 {
@@ -643,6 +750,7 @@
/* GPIO inputs from wcnss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_4_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_4_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_4_in 2 0>;
/* GPIO output to wcnss */
@@ -663,6 +771,13 @@
interrupts = <0 162 1>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
qcom,mss@fc880000 {
@@ -688,6 +803,7 @@
/* GPIO inputs from mss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_1_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_1_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_1_in 2 0>;
/* GPIO output to mss */
@@ -696,7 +812,7 @@
qcom,msm-mem-hole {
compatible = "qcom,msm-mem-hole";
- qcom,memblock-remove = <0x8400000 0x7b00000>; /* Address and Size of Hole */
+ qcom,memblock-remove = <0x6400000 0x9b00000>; /* Address and Size of Hole */
};
tsens: tsens@fc4a8000 {
@@ -816,6 +932,42 @@
<55 512 3936000 393600>,
<55 512 3936000 393600>;
};
+
+ qcom,qcrypto@fd404000 {
+ compatible = "qcom,qcrypto";
+ reg = <0xfd400000 0x20000>,
+ <0xfd404000 0x8000>;
+ reg-names = "crypto-base","crypto-bam-base";
+ interrupts = <0 207 0>;
+ qcom,bam-pipe-pair = <2>;
+ qcom,ce-hw-instance = <0>;
+ qcom,ce-hw-shared;
+ qcom,msm-bus,name = "qcrypto-noc";
+ qcom,msm-bus,num-cases = <2>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <56 512 0 0>,
+ <56 512 3936000 393600>;
+ };
+
+ qcom,qcedev@fd400000 {
+ compatible = "qcom,qcedev";
+ reg = <0xfd400000 0x20000>,
+ <0xfd404000 0x8000>;
+ reg-names = "crypto-base","crypto-bam-base";
+ interrupts = <0 207 0>;
+ qcom,bam-pipe-pair = <1>;
+ qcom,ce-hw-instance = <0>;
+ qcom,ce-hw-shared;
+ qcom,msm-bus,name = "qcedev-noc";
+ qcom,msm-bus,num-cases = <2>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <56 512 0 0>,
+ <56 512 3936000 393600>;
+ };
};
&gdsc_venus {
@@ -950,23 +1102,23 @@
&pm8226_chg {
status = "ok";
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "ok";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "ok";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
status = "ok";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "ok";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "ok";
};
diff --git a/arch/arm/boot/dts/msm8610-cdp.dts b/arch/arm/boot/dts/msm8610-cdp.dts
index 08da115..d6d41cc 100644
--- a/arch/arm/boot/dts/msm8610-cdp.dts
+++ b/arch/arm/boot/dts/msm8610-cdp.dts
@@ -13,15 +13,198 @@
/dts-v1/;
/include/ "msm8610.dtsi"
+/include/ "dsi-v2-panel-truly-wvga-video.dtsi"
/ {
model = "Qualcomm MSM 8610 CDP";
compatible = "qcom,msm8610-cdp", "qcom,msm8610", "qcom,cdp";
- qcom,msm-id = <147 1 0>, <165 1 0>;
+ qcom,msm-id = <147 1 0>, <165 1 0>, <161 1 0>, <162 1 0>,
+ <163 1 0>, <164 1 0>, <166 1 0>;
serial@f991e000 {
status = "ok";
};
+
+ i2c@f9923000{
+ atmel_mxt_ts@4a {
+ compatible = "atmel,mxt-ts";
+ reg = <0x4a>;
+ interrupt-parent = <&msmgpio>;
+ interrupts = <1 0x2>;
+ vdd_ana-supply = <&pm8110_l19>;
+ vcc_i2c-supply = <&pm8110_l14>;
+ atmel,reset-gpio = <&msmgpio 0 0x00>;
+ atmel,irq-gpio = <&msmgpio 1 0x00>;
+ atmel,panel-coords = <0 0 508 880>;
+ atmel,display-coords = <0 0 480 800>;
+ atmel,i2c-pull-up;
+ atmel,no-force-update;
+ atmel,cfg_1 {
+ atmel,family-id = <0x81>;
+ atmel,variant-id = <0x15>;
+ atmel,version = <0x11>;
+ atmel,build = <0xaa>;
+ atmel,config = [
+ /* Object 6, Instance = 0 */
+ 00 00 00 00 00 00
+ /* Object 38, Instance = 0 */
+ 1D 01 00 0C 04 0D 00 00
+ /* Object 7, Instance = 0 */
+ 20 08 32
+ /* Object 8, Instance = 0 */
+ 19 00 14 14 FF 00 FF 00 00 00
+ /* Object 9, Instance = 0 */
+ 83 00 00 13 0B 00 20 32 01 03
+ 00 32 05 30 0A 05 0A 00 70 03
+ FC 01 00 36 2F 2C 00 00 40 00
+ 00 0A 00 00 02
+ /* Object 18, Instance = 0 */
+ 00 00
+ /* Object 19, Instance = 0 */
+ 00 00 00 00 00 00
+ /* Object 25, Instance = 0 */
+ 03 00 18 79 A8 61
+ /* Object 58, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ 00
+ /* Object 42, Instance = 0 */
+ 00 00 00 00 00 00 00 00
+ /* Object 46, Instance = 0 */
+ 04 03 08 10 00 00 00 00 00
+ /* Object 47, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ /* Object 48, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00
+ /* Object 55, Instance = 0 */
+ 00 00 00 00
+ ];
+ };
+ };
+ };
+};
+
+&i2c_cdc {
+ msm8x10_wcd_codec@0d{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x0d>;
+ cdc-vdda-cp-supply = <&pm8110_s4>;
+ qcom,cdc-vdda-cp-voltage = <2150000 2150000>;
+ qcom,cdc-vdda-cp-current = <650000>;
+
+ cdc-vdda-h-supply = <&pm8110_l6>;
+ qcom,cdc-vdda-h-voltage = <1800000 1800000>;
+ qcom,cdc-vdda-h-current = <250000>;
+
+ cdc-vdd-px-supply = <&pm8110_l6>;
+ qcom,cdc-vdd-px-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-px-current = <10000>;
+
+ cdc-vdd-1p2v-supply = <&pm8110_l4>;
+ qcom,cdc-vdd-1p2v-voltage = <1200000 1200000>;
+ qcom,cdc-vdd-1p2v-current = <5000>;
+
+ cdc-vdd-mic-bias-supply = <&pm8110_l20>;
+ qcom,cdc-vdd-mic-bias-voltage = <3075000 3075000>;
+ qcom,cdc-vdd-mic-bias-current = <25000>;
+
+ qcom,cdc-micbias-cfilt-sel = <0x0>;
+ qcom,cdc-micbias-cfilt-mv = <1800000>;
+ qcom,cdc-mclk-clk-rate = <12288000>;
+ };
+
+ msm8x10_wcd_codec@77{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x77>;
+ };
+
+ msm8x10_wcd_codec@66{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x66>;
+ };
+
+ msm8x10_wcd_codec@55{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x55>;
+ };
+};
+
+&spmi_bus {
+ qcom,pm8110@0 {
+ qcom,leds@a100 {
+ status = "okay";
+ qcom,led_mpp_2 {
+ label = "mpp";
+ linux,name = "button-backlight";
+ linux-default-trigger = "hr-trigger";
+ qcom,default-state = "off";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x60>;
+ };
+ };
+
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+ linux-default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+ };
+
+ gpio_keys {
+ compatible = "gpio-keys";
+ input-name = "gpio-keys";
+
+ camera_snapshot {
+ label = "camera_snapshot";
+ gpios = <&msmgpio 73 0x1>;
+ linux,input-type = <1>;
+ linux,code = <0x2fe>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+
+ camera_focus {
+ label = "camera_focus";
+ gpios = <&msmgpio 74 0x1>;
+ linux,input-type = <1>;
+ linux,code = <0x210>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+
+ vol_up {
+ label = "volume_up";
+ gpios = <&msmgpio 72 0x1>;
+ linux,input-type = <1>;
+ linux,code = <115>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+ };
+};
+
+&spmi_bus {
+ qcom,pm8110@1 {
+ qcom,vibrator@c000 {
+ status = "okay";
+ qcom,vib-timeout-ms = <15000>;
+ qcom,vib-vtg-level-mV = <3100>;
+ };
+ };
};
&sdhc_1 {
@@ -79,3 +262,55 @@
status = "ok";
};
+
+&pm8110_chg {
+ status = "ok";
+ qcom,charging-disabled;
+ qcom,use-default-batt-values;
+
+ qcom,chgr@1000 {
+ status = "ok";
+ };
+
+ qcom,buck@1100 {
+ status = "ok";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "ok";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "ok";
+ };
+};
+
+&pm8110_gpios {
+ gpio@c000 { /* GPIO 1 */
+ };
+
+ gpio@c100 { /* GPIO 2 */
+ };
+
+ gpio@c200 { /* GPIO 3 */
+ };
+
+ gpio@c300 { /* GPIO 4 */
+ };
+};
+
+&pm8110_mpps {
+ mpp@a000 { /* MPP 1 */
+ };
+
+ mpp@a100 { /* MPP 2 */
+ status = "disabled";
+ };
+
+ mpp@a200 { /* MPP 3 */
+ status = "disabled";
+ };
+
+ mpp@a300 { /* MPP 4 */
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-gpu.dtsi b/arch/arm/boot/dts/msm8610-gpu.dtsi
index 5e57430..5580f73 100644
--- a/arch/arm/boot/dts/msm8610-gpu.dtsi
+++ b/arch/arm/boot/dts/msm8610-gpu.dtsi
@@ -27,8 +27,9 @@
qcom,idle-timeout = <8>; /* <HZ/12> */
qcom,nap-allowed = <1>;
qcom,strtstp-sleepwake;
- qcom,clk-map = <0x000001E>; /* KGSL_CLK_CORE |
- KGSL_CLK_IFACE | KGSL_CLK_MEM | KGSL_CLK_MEM_IFACE */
+ qcom,clk-map = <0x000005E>; /* KGSL_CLK_CORE |
+ KGSL_CLK_IFACE | KGSL_CLK_MEM | KGSL_CLK_MEM_IFACE |
+ KGSL_CLK_ALT_MEM_IFACE */
/* Bus Scale Settings */
qcom,msm-bus,name = "grp3d";
diff --git a/arch/arm/boot/dts/msm8610-ion.dtsi b/arch/arm/boot/dts/msm8610-ion.dtsi
index 848a6f5..7d7d8fd 100644
--- a/arch/arm/boot/dts/msm8610-ion.dtsi
+++ b/arch/arm/boot/dts/msm8610-ion.dtsi
@@ -31,17 +31,7 @@
qcom,ion-heap@27 { /* QSECOM HEAP */
compatible = "qcom,msm-ion-reserve";
reg = <27>;
- qcom,heap-align = <0x1000>;
- qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
- qcom,memory-reservation-size = <0x100000>;
- };
-
- qcom,ion-heap@28 { /* AUDIO HEAP */
- compatible = "qcom,msm-ion-reserve";
- reg = <28>;
- qcom,heap-align = <0x1000>;
- qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
- qcom,memory-reservation-size = <0x314000>;
+ linux,contiguous-region = <&qsecom_mem>;
};
};
};
diff --git a/arch/arm/boot/dts/msm8610-mdss.dtsi b/arch/arm/boot/dts/msm8610-mdss.dtsi
new file mode 100644
index 0000000..42fa149
--- /dev/null
+++ b/arch/arm/boot/dts/msm8610-mdss.dtsi
@@ -0,0 +1,37 @@
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/ {
+ qcom,mdss_mdp@fd900000 {
+ compatible = "qcom,mdss_mdp3";
+ reg = <0xfd900000 0x100000>;
+ reg-names = "mdp_phys";
+ interrupts = <0 72 0>;
+
+ mdss_fb0: qcom,mdss_fb_primary {
+ cell-index = <0>;
+ compatible = "qcom,mdss-fb";
+ qcom,memory-reservation-type = "EBI1";
+ qcom,memory-reservation-size = <0x800000>;
+ };
+ };
+
+ mdss_dsi0: qcom,mdss_dsi@fdd00000 {
+ compatible = "qcom,msm-dsi-v2";
+ label = "MDSS DSI CTRL->0";
+ cell-index = <0>;
+ reg = <0xfdd00000 0x100000>;
+ interrupts = <0 30 0>;
+ vdda-supply = <&pm8110_l4>;
+ qcom,mdss-fb-map = <&mdss_fb0>;
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-mtp.dts b/arch/arm/boot/dts/msm8610-mtp.dts
index e3eed72..06a2321 100644
--- a/arch/arm/boot/dts/msm8610-mtp.dts
+++ b/arch/arm/boot/dts/msm8610-mtp.dts
@@ -13,15 +13,198 @@
/dts-v1/;
/include/ "msm8610.dtsi"
+/include/ "dsi-v2-panel-truly-wvga-video.dtsi"
/ {
model = "Qualcomm MSM 8610 MTP";
compatible = "qcom,msm8610-mtp", "qcom,msm8610", "qcom,mtp";
- qcom,msm-id = <147 8 0>, <165 8 0>;
+ qcom,msm-id = <147 8 0>, <165 8 0>, <161 8 0>, <162 8 0>,
+ <163 8 0>, <164 8 0>, <166 8 0>;
- serial@f991f000 {
+ serial@f991e000 {
status = "ok";
};
+
+ i2c@f9923000{
+ atmel_mxt_ts@4a {
+ compatible = "atmel,mxt-ts";
+ reg = <0x4a>;
+ interrupt-parent = <&msmgpio>;
+ interrupts = <1 0x2>;
+ vdd_ana-supply = <&pm8110_l19>;
+ vcc_i2c-supply = <&pm8110_l14>;
+ atmel,reset-gpio = <&msmgpio 0 0x00>;
+ atmel,irq-gpio = <&msmgpio 1 0x00>;
+ atmel,panel-coords = <0 0 508 880>;
+ atmel,display-coords = <0 0 480 800>;
+ atmel,i2c-pull-up;
+ atmel,no-force-update;
+ atmel,cfg_1 {
+ atmel,family-id = <0x81>;
+ atmel,variant-id = <0x15>;
+ atmel,version = <0x11>;
+ atmel,build = <0xaa>;
+ atmel,config = [
+ /* Object 6, Instance = 0 */
+ 00 00 00 00 00 00
+ /* Object 38, Instance = 0 */
+ 1D 01 00 0C 04 0D 00 00
+ /* Object 7, Instance = 0 */
+ 20 08 32
+ /* Object 8, Instance = 0 */
+ 19 00 14 14 FF 00 FF 00 00 00
+ /* Object 9, Instance = 0 */
+ 83 00 00 13 0B 00 20 32 01 03
+ 00 32 05 30 0A 05 0A 00 70 03
+ FC 01 00 36 2F 2C 00 00 40 00
+ 00 0A 00 00 02
+ /* Object 18, Instance = 0 */
+ 00 00
+ /* Object 19, Instance = 0 */
+ 00 00 00 00 00 00
+ /* Object 25, Instance = 0 */
+ 03 00 18 79 A8 61
+ /* Object 58, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ 00
+ /* Object 42, Instance = 0 */
+ 00 00 00 00 00 00 00 00
+ /* Object 46, Instance = 0 */
+ 04 03 08 10 00 00 00 00 00
+ /* Object 47, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ /* Object 48, Instance = 0 */
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00 00 00 00 00 00 00
+ 00 00 00 00
+ /* Object 55, Instance = 0 */
+ 00 00 00 00
+ ];
+ };
+ };
+ };
+};
+
+&i2c_cdc {
+ msm8x10_wcd_codec@0d{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x0d>;
+ cdc-vdda-cp-supply = <&pm8110_s4>;
+ qcom,cdc-vdda-cp-voltage = <2150000 2150000>;
+ qcom,cdc-vdda-cp-current = <650000>;
+
+ cdc-vdda-h-supply = <&pm8110_l6>;
+ qcom,cdc-vdda-h-voltage = <1800000 1800000>;
+ qcom,cdc-vdda-h-current = <250000>;
+
+ cdc-vdd-px-supply = <&pm8110_l6>;
+ qcom,cdc-vdd-px-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-px-current = <10000>;
+
+ cdc-vdd-1p2v-supply = <&pm8110_l4>;
+ qcom,cdc-vdd-1p2v-voltage = <1200000 1200000>;
+ qcom,cdc-vdd-1p2v-current = <5000>;
+
+ cdc-vdd-mic-bias-supply = <&pm8110_l20>;
+ qcom,cdc-vdd-mic-bias-voltage = <3075000 3075000>;
+ qcom,cdc-vdd-mic-bias-current = <25000>;
+
+ qcom,cdc-micbias-cfilt-sel = <0x0>;
+ qcom,cdc-micbias-cfilt-mv = <1800000>;
+ qcom,cdc-mclk-clk-rate = <12288000>;
+ };
+
+ msm8x10_wcd_codec@77{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x77>;
+ };
+
+ msm8x10_wcd_codec@66{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x66>;
+ };
+
+ msm8x10_wcd_codec@55{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x55>;
+ };
+};
+
+&spmi_bus {
+ qcom,pm8110@0 {
+ qcom,leds@a100 {
+ status = "okay";
+ qcom,led_mpp_2 {
+ label = "mpp";
+ linux,name = "button-backlight";
+ linux-default-trigger = "hr-trigger";
+ qcom,default-state = "off";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x60>;
+ };
+ };
+
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+ linux-default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+ };
+
+ gpio_keys {
+ compatible = "gpio-keys";
+ input-name = "gpio-keys";
+
+ camera_snapshot {
+ label = "camera_snapshot";
+ gpios = <&msmgpio 73 0x1>;
+ linux,input-type = <1>;
+ linux,code = <0x2fe>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+
+ camera_focus {
+ label = "camera_focus";
+ gpios = <&msmgpio 74 0x1>;
+ linux,input-type = <1>;
+ linux,code = <0x210>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+
+ vol_up {
+ label = "volume_up";
+ gpios = <&msmgpio 72 0x1>;
+ linux,input-type = <1>;
+ linux,code = <115>;
+ gpio-key,wakeup;
+ debounce-interval = <15>;
+ };
+ };
+};
+
+&spmi_bus {
+ qcom,pm8110@1 {
+ qcom,vibrator@c000 {
+ status = "okay";
+ qcom,vib-timeout-ms = <15000>;
+ qcom,vib-vtg-level-mV = <3100>;
+ };
+ };
};
&sdhc_1 {
@@ -79,3 +262,58 @@
status = "ok";
};
+
+&pm8110_chg {
+ status = "ok";
+ qcom,charging-disabled;
+
+ qcom,chgr@1000 {
+ status = "ok";
+ };
+
+ qcom,buck@1100 {
+ status = "ok";
+ };
+
+ qcom,bat-if@1200 {
+ status = "ok";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "ok";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "ok";
+ };
+};
+
+&pm8110_gpios {
+ gpio@c000 { /* GPIO 1 */
+ };
+
+ gpio@c100 { /* GPIO 2 */
+ };
+
+ gpio@c200 { /* GPIO 3 */
+ };
+
+ gpio@c300 { /* GPIO 4 */
+ };
+};
+
+&pm8110_mpps {
+ mpp@a000 { /* MPP 1 */
+ };
+
+ mpp@a100 { /* MPP 2 */
+ status = "disabled";
+ };
+
+ mpp@a200 { /* MPP 3 */
+ status = "disabled";
+ };
+
+ mpp@a300 { /* MPP 4 */
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-pm.dtsi b/arch/arm/boot/dts/msm8610-pm.dtsi
index 08a3758..baae269 100644
--- a/arch/arm/boot/dts/msm8610-pm.dtsi
+++ b/arch/arm/boot/dts/msm8610-pm.dtsi
@@ -368,6 +368,7 @@
reg = <0xfe805664 0x40>;
qcom,pc-mode = "tz_l2_int";
qcom,use-sync-timer;
+ qcom,pc-resets-timer;
};
qcom,rpm-log@fc19dc00 {
@@ -381,9 +382,9 @@
qcom,offset-page-indices = <56>;
};
- qcom,rpm-stats@0xfc19dbd0{
+ qcom,rpm-stats@fc19dba0 {
compatible = "qcom,rpm-stats";
- reg = <0xfc19dbd0 0x1000>;
+ reg = <0xfc19dba0 0x1000>;
reg-names = "phys_addr_base";
qcom,sleep-stats-version = <2>;
};
diff --git a/arch/arm/boot/dts/msm8610-regulator.dtsi b/arch/arm/boot/dts/msm8610-regulator.dtsi
index d50902c..f5d01e0 100644
--- a/arch/arm/boot/dts/msm8610-regulator.dtsi
+++ b/arch/arm/boot/dts/msm8610-regulator.dtsi
@@ -10,19 +10,6 @@
* GNU General Public License for more details.
*/
- /* Stub Regulators */
-
-/ {
- pm8110_s1_corner: regulator-s1-corner {
- compatible = "qcom,stub-regulator";
- regulator-name = "8110_s1_corner";
- qcom,hpm-min-load = <100000>;
- regulator-min-microvolt = <1>;
- regulator-max-microvolt = <7>;
- qcom,consumer-supplies = "vdd_dig", "", "vdd_sr2_dig", "";
- };
-};
-
/* SPM controlled regulators */
&spmi_bus {
@@ -60,195 +47,275 @@
};
};
-/* QPNP controlled regulators: */
+/* RPM controlled regulators: */
-&spmi_bus {
+&rpm_bus {
- qcom,pm8110@1 {
-
- pm8110_s1: regulator@1400 {
+ rpm-regulator-smpa1 {
+ status = "okay";
+ pm8110_s1: regulator-s1 {
status = "okay";
- regulator-min-microvolt = <1150000>;
- regulator-max-microvolt = <1150000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1275000>;
};
- pm8110_s3: regulator@1a00 {
- status = "okay";
- regulator-min-microvolt = <1350000>;
+ pm8110_s1_corner: regulator-s1-corner {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1_corner";
+ qcom,set = <3>;
+ regulator-min-microvolt = <1>;
+ regulator-max-microvolt = <7>;
+ qcom,use-voltage-corner;
+ qcom,consumer-supplies = "vdd_dig", "";
+ };
+
+ pm8110_s1_corner_ao: regulator-s1-corner-ao {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1_corner_ao";
+ qcom,set = <1>;
+ regulator-min-microvolt = <1>;
+ regulator-max-microvolt = <7>;
+ qcom,use-voltage-corner;
+ qcom,consumer-supplies = "vdd_sr2_dig", "";
+ };
+ };
+
+ rpm-regulator-smpa3 {
+ status = "okay";
+ pm8110_s3: regulator-s3 {
+ regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1350000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
- };
-
- pm8110_s4: regulator@1d00 {
+ qcom,init-voltage = <1200000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-smpa4 {
+ status = "okay";
+ pm8110_s4: regulator-s4 {
regulator-min-microvolt = <2150000>;
regulator-max-microvolt = <2150000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
- };
-
- pm8110_l1: regulator@4000 {
+ qcom,init-voltage = <2150000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa1 {
+ status = "okay";
+ pm8110_l1: regulator-l1 {
regulator-min-microvolt = <1225000>;
regulator-max-microvolt = <1225000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l2: regulator@4100 {
+ qcom,init-voltage = <1225000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa2 {
+ status = "okay";
+ pm8110_l2: regulator-l2 {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
+ qcom,init-voltage = <1200000>;
+ status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa3 {
+ status = "okay";
+ pm8110_l3: regulator-l3 {
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
+ status = "okay";
};
- pm8110_l3: regulator@4200 {
+ pm8110_l3_ao: regulator-l3-ao {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3_ao";
+ qcom,set = <1>;
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
- regulator-min-microvolt = <1150000>;
- regulator-max-microvolt = <1150000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
};
- pm8110_l4: regulator@4300 {
+ pm8110_l3_so: regulator-l3-so {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3_so";
+ qcom,set = <2>;
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
+ qcom,init-voltage = <750000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa4 {
+ status = "okay";
+ pm8110_l4: regulator-l4 {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l5: regulator@4400 {
+ qcom,init-voltage = <1200000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa5 {
+ status = "okay";
+ pm8110_l5: regulator-l5 {
regulator-min-microvolt = <1300000>;
regulator-max-microvolt = <1300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l6: regulator@4500 {
+ qcom,init-voltage = <1300000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa6 {
+ status = "okay";
+ pm8110_l6: regulator-l6 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
- };
-
- pm8110_l7: regulator@4600 {
+ qcom,init-voltage = <1800000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa7 {
+ status = "okay";
+ pm8110_l7: regulator-l7 {
regulator-min-microvolt = <2050000>;
regulator-max-microvolt = <2050000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l8: regulator@4700 {
+ qcom,init-voltage = <2050000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa8 {
+ status = "okay";
+ pm8110_l8: regulator-l8 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l9: regulator@4800 {
+ qcom,init-voltage = <1800000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa9 {
+ status = "okay";
+ pm8110_l9: regulator-l9 {
regulator-min-microvolt = <2050000>;
regulator-max-microvolt = <2050000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l10: regulator@4900 {
+ qcom,init-voltage = <2050000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa10 {
+ status = "okay";
+ pm8110_l10: regulator-l10 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
+ qcom,init-voltage = <1800000>;
+ status = "okay";
qcom,consumer-supplies = "vdd_sr2_pll", "";
};
+ };
- pm8110_l12: regulator@4b00 {
- status = "okay";
+ rpm-regulator-ldoa12 {
+ status = "okay";
+ pm8110_l12: regulator-l12 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l14: regulator@4d00 {
+ qcom,init-voltage = <3300000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa14 {
+ status = "okay";
+ pm8110_l14: regulator-l14 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l15: regulator@4e00 {
+ qcom,init-voltage = <1800000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa15 {
+ status = "okay";
+ pm8110_l15: regulator-l15 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l16: regulator@4f00 {
+ qcom,init-voltage = <3300000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa16 {
+ status = "okay";
+ pm8110_l16: regulator-l16 {
regulator-min-microvolt = <3000000>;
regulator-max-microvolt = <3000000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l17: regulator@5000 {
+ qcom,init-voltage = <3000000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa17 {
+ status = "okay";
+ pm8110_l17: regulator-l17 {
regulator-min-microvolt = <2900000>;
regulator-max-microvolt = <2900000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l18: regulator@5100 {
+ qcom,init-voltage = <2900000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa18 {
+ status = "okay";
+ pm8110_l18: regulator-l18 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l19: regulator@5200 {
+ qcom,init-voltage = <2950000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa19 {
+ status = "okay";
+ pm8110_l19: regulator-l19 {
regulator-min-microvolt = <2850000>;
regulator-max-microvolt = <2850000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l20: regulator@5300 {
+ qcom,init-voltage = <2850000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa20 {
+ status = "okay";
+ pm8110_l20: regulator-l20 {
regulator-min-microvolt = <3075000>;
regulator-max-microvolt = <3075000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l21: regulator@5400 {
+ qcom,init-voltage = <3075000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa21 {
+ status = "okay";
+ pm8110_l21: regulator-l21 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l22: regulator@5500 {
+ qcom,init-voltage = <2950000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa22 {
+ status = "okay";
+ pm8110_l22: regulator-l22 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
+ qcom,init-voltage = <3300000>;
+ status = "okay";
};
};
};
diff --git a/arch/arm/boot/dts/msm8610-sim.dts b/arch/arm/boot/dts/msm8610-sim.dts
index 1838b94..fc94cc9 100644
--- a/arch/arm/boot/dts/msm8610-sim.dts
+++ b/arch/arm/boot/dts/msm8610-sim.dts
@@ -23,3 +23,48 @@
status = "ok";
};
};
+
+&i2c_cdc {
+ msm8x10_wcd_codec@0d{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x0d>;
+ cdc-vdda-cp-supply = <&pm8110_s4>;
+ qcom,cdc-vdda-cp-voltage = <2150000 2150000>;
+ qcom,cdc-vdda-cp-current = <650000>;
+
+ cdc-vdda-h-supply = <&pm8110_l6>;
+ qcom,cdc-vdda-h-voltage = <1800000 1800000>;
+ qcom,cdc-vdda-h-current = <250000>;
+
+ cdc-vdd-px-supply = <&pm8110_l6>;
+ qcom,cdc-vdd-px-voltage = <1800000 1800000>;
+ qcom,cdc-vdd-px-current = <10000>;
+
+ cdc-vdd-1p2v-supply = <&pm8110_l4>;
+ qcom,cdc-vdd-1p2v-voltage = <1200000 1200000>;
+ qcom,cdc-vdd-1p2v-current = <5000>;
+
+ cdc-vdd-mic-bias-supply = <&pm8110_l20>;
+ qcom,cdc-vdd-mic-bias-voltage = <3075000 3075000>;
+ qcom,cdc-vdd-mic-bias-current = <25000>;
+
+ qcom,cdc-micbias-cfilt-sel = <0x0>;
+ qcom,cdc-micbias-cfilt-mv = <1800000>;
+ qcom,cdc-mclk-clk-rate = <12288000>;
+ };
+
+ msm8x10_wcd_codec@77{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x77>;
+ };
+
+ msm8x10_wcd_codec@66{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x66>;
+ };
+
+ msm8x10_wcd_codec@55{
+ compatible = "qcom,msm8x10-wcd-i2c";
+ reg = <0x55>;
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-smp2p.dtsi b/arch/arm/boot/dts/msm8610-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8610-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8610-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8610.dtsi b/arch/arm/boot/dts/msm8610.dtsi
index 2013bb8..2cbd76e 100644
--- a/arch/arm/boot/dts/msm8610.dtsi
+++ b/arch/arm/boot/dts/msm8610.dtsi
@@ -19,6 +19,7 @@
/include/ "msm8610-pm.dtsi"
/include/ "msm8610-smp2p.dtsi"
/include/ "msm8610-bus.dtsi"
+/include/ "msm8610-mdss.dtsi"
/ {
model = "Qualcomm MSM 8610";
@@ -45,18 +46,92 @@
qcom,direct-connect-irqs = <8>;
};
+ memory {
+
+ qsecom_mem: qsecom_region {
+ linux,contiguous-region;
+ reg = <0 0x100000>;
+ label = "qsecom_mem";
+ };
+
+ };
+
aliases {
- spi0 = &spi_0;
sdhc1 = &sdhc_1; /* SDC1 eMMC slot */
sdhc2 = &sdhc_2; /* SDC2 SD card slot */
};
+ qcom,mpm2-sleep-counter@fc4a3000 {
+ compatible = "qcom,mpm2-sleep-counter";
+ reg = <0xfc4a3000 0x1000>;
+ clock-frequency = <32768>;
+ };
+
timer {
compatible = "arm,armv7-timer";
interrupts = <1 2 0 1 3 0>;
clock-frequency = <19200000>;
};
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
+ clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 8 0x4>,
+ <0 7 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+ };
+
qcom,msm-adsp-loader {
compatible = "qcom,adsp-loader";
qcom,adsp-state = <0>;
@@ -89,7 +164,7 @@
qcom,buffer-type-tz-usage-map = <0x1 0x1>,
<0x1fe 0x2>;
qcom,hfi = "q6";
- qcom,max-hw-load = <97200>; /* FWVGA @ 30 * 2 */
+ qcom,max-hw-load = <108000>; /* 720p @ 30 * 1 */
};
qcom,usbbam@f9a44000 {
@@ -174,7 +249,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -206,7 +281,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -250,7 +325,7 @@
qcom,smem@d900000 {
compatible = "qcom,smem";
- reg = <0xd900000 0x200000>,
+ reg = <0xd900000 0x100000>,
<0xf9011000 0x1000>,
<0xfc428000 0x4000>;
reg-names = "smem", "irq-reg-base", "aux-mem1";
@@ -342,7 +417,6 @@
reg = <0xf9011050 0x8>;
reg-names = "rcg_base";
a7_cpu-supply = <&apc_vreg_corner>;
- a7_mem-supply = <&pm8110_l3>;
};
spmi_bus: qcom,spmi@fc4c0000 {
@@ -355,11 +429,52 @@
/* 190,ee0_krait_hlos_spmi_periph_irq */
/* 187,channel_0_krait_hlos_trans_done_irq */
interrupts = <0 190 0>, <0 187 0>;
- qcom,not-wakeup;
qcom,pmic-arb-ee = <0>;
qcom,pmic-arb-channel = <0>;
};
+ i2c@f9923000 { /* BLSP-1 QUP-1 */
+ cell-index = <1>;
+ compatible = "qcom,i2c-qup";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg-names = "qup_phys_addr";
+ reg = <0xf9923000 0x1000>;
+ interrupt-names = "qup_err_intr";
+ interrupts = <0 95 0>;
+ qcom,i2c-bus-freq = <100000>;
+ qcom,i2c-src-freq = <19200000>;
+ qcom,sda-gpio = <&msmgpio 2 0>;
+ qcom,scl-gpio = <&msmgpio 3 0>;
+ };
+
+ i2c_cdc: i2c@f9927000 { /* BLSP1 QUP5 */
+ cell-index = <5>;
+ compatible = "qcom,i2c-qup";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg-names = "qup_phys_addr";
+ reg = <0xf9927000 0x1000>;
+ interrupt-names = "qup_err_intr";
+ interrupts = <0 99 0>;
+ qcom,i2c-bus-freq = <100000>;
+ };
+
+ i2c@f9928000 { /* BLSP1 QUP6 */
+ cell-index = <6>;
+ compatible = "qcom,i2c-qup";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg-names = "qup_phys_addr";
+ reg = <0xf9928000 0x1000>;
+ interrupt-names = "qup_err_intr";
+ interrupts = <0 100 0>;
+ qcom,i2c-bus-freq = <100000>;
+ qcom,i2c-src-freq = <19200000>;
+ qcom,sda-gpio = <&msmgpio 16 0>;
+ qcom,scl-gpio = <&msmgpio 17 0>;
+ };
+
i2c@f9925000 { /* BLSP-1 QUP-3 */
cell-index = <0>;
compatible = "qcom,i2c-qup";
@@ -372,30 +487,6 @@
qcom,i2c-bus-freq = <100000>;
};
-
- spi_0: spi@f9923000 { /* BLSP1 QUP1 */
- compatible = "qcom,spi-qup-v2";
- #address-cells = <1>;
- #size-cells = <0>;
- reg-names = "spi_physical", "spi_bam_physical";
- reg = <0xf9923000 0x1000>,
- <0xf9904000 0xF000>;
- interrupt-names = "spi_irq", "spi_bam_irq";
- interrupts = <0 95 0>, <0 238 0>;
- spi-max-frequency = <19200000>;
-
- gpios = <&msmgpio 3 0>, /* CLK */
- <&msmgpio 1 0>, /* MISO */
- <&msmgpio 0 0>; /* MOSI */
- cs-gpios = <&msmgpio 2 0>;
-
- qcom,infinite-mode = <0>;
- qcom,use-bam;
- qcom,ver-reg-exists;
- qcom,bam-consumer-pipe-index = <12>;
- qcom,bam-producer-pipe-index = <13>;
- };
-
qcom,pronto@fb21b000 {
compatible = "qcom,pil-pronto";
reg = <0xfb21b000 0x3000>,
@@ -409,6 +500,7 @@
/* GPIO inputs from wcnss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_4_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_4_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_4_in 2 0>;
/* GPIO output to wcnss */
@@ -426,6 +518,13 @@
qcom,msm-pcm {
compatible = "qcom,msm-pcm-dsp";
+ qcom,msm-pcm-dsp-id = <0>;
+ };
+
+ qcom,msm-pcm-low-latency {
+ compatible = "qcom,msm-pcm-dsp";
+ qcom,msm-pcm-dsp-id = <1>;
+ qcom,msm-pcm-low-latency;
};
qcom,msm-pcm-routing {
@@ -465,15 +564,15 @@
qcom,msm-dai-q6-mi2s-prim {
compatible = "qcom,msm-dai-q6-mi2s";
qcom,msm-dai-q6-mi2s-dev-id = <0>;
- qcom,msm-mi2s-rx-lines = <1>;
- qcom,msm-mi2s-tx-lines = <0>;
+ qcom,msm-mi2s-rx-lines = <0>;
+ qcom,msm-mi2s-tx-lines = <3>;
};
qcom,msm-dai-q6-mi2s-sec {
compatible = "qcom,msm-dai-q6-mi2s";
qcom,msm-dai-q6-mi2s-dev-id = <1>;
- qcom,msm-mi2s-rx-lines = <0>;
- qcom,msm-mi2s-tx-lines = <3>;
+ qcom,msm-mi2s-rx-lines = <3>;
+ qcom,msm-mi2s-tx-lines = <0>;
};
};
@@ -518,6 +617,21 @@
compatible = "qcom,msm-dai-q6-dev";
qcom,msm-dai-q6-dev-id = <240>;
};
+
+ qcom,msm-dai-q6-incall-record-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32771>;
+ };
+
+ qcom,msm-dai-q6-incall-record-tx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32772>;
+ };
+
+ qcom,msm-dai-q6-incall-music-rx {
+ compatible = "qcom,msm-dai-q6-dev";
+ qcom,msm-dai-q6-dev-id = <32773>;
+ };
};
qcom,msm-pcm-hostless {
@@ -526,8 +640,9 @@
qcom,wcnss-wlan@fb000000 {
compatible = "qcom,wcnss_wlan";
- reg = <0xfb000000 0x280000>;
- reg-names = "wcnss_mmio";
+ reg = <0xfb000000 0x280000>,
+ <0xf9011008 0x04>;
+ reg-names = "wcnss_mmio", "wcnss_fiq";
interrupts = <0 145 0>, <0 146 0>;
interrupt-names = "wcnss_wlantx_irq", "wcnss_wlanrx_irq";
@@ -565,6 +680,7 @@
/* GPIO inputs from mss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_1_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_1_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_1_in 2 0>;
/* GPIO output to mss */
@@ -580,6 +696,13 @@
interrupts = <0 162 1>;
vdd_cx-supply = <&pm8110_s1_corner>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
tsens: tsens@fc4a8000 {
@@ -593,6 +716,7 @@
qcom,calib-mode = "fuse_map3";
qcom,calibration-less-mode;
qcom,tsens-local-init;
+ qcom,sensor-id = <0 5>;
};
qcom,msm-thermal {
@@ -615,6 +739,69 @@
reg = <0xfc834000 0x7000>;
interrupts = <0 29 1>;
};
+
+ qcom,qseecom@7B00000 {
+ compatible = "qcom,qseecom";
+ reg = <0x7B00000 0x500000>;
+ reg-names = "secapp-region";
+ qcom,disk-encrypt-pipe-pair = <2>;
+ qcom,hlos-ce-hw-instance = <0>;
+ qcom,qsee-ce-hw-instance = <0>;
+ qcom,msm-bus,name = "qseecom-noc";
+ qcom,msm-bus,num-cases = <4>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <55 512 0 0>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>;
+ };
+
+ qcom,msm-rng@f9bff000 {
+ compatible = "qcom,msm-rng";
+ reg = <0xf9bff000 0x200>;
+ qcom,msm-rng-iface-clk;
+ };
+
+ qcom,msm-rtb {
+ compatible = "qcom,msm-rtb";
+ qcom,memory-reservation-type = "EBI1";
+ qcom,memory-reservation-size = <0x100000>; /* 1M EBI1 buffer */
+ };
+
+ jtag_mm0: jtagmm@fc34c000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34c000 0x1000>,
+ <0xfc340000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm1: jtagmm@fc34d000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34d000 0x1000>,
+ <0xfc342000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm2: jtagmm@fc34e000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34e000 0x1000>,
+ <0xfc344000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm3: jtagmm@fc34f000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34f000 0x1000>,
+ <0xfc346000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ qcom,tz-log@fe805720 {
+ compatible = "qcom,tz-log";
+ reg = <0x0fe805720 0x1000>;
+ };
};
&gdsc_vfe {
@@ -651,6 +838,7 @@
/include/ "msm8610-iommu-domains.dtsi"
+/include/ "msm-pm8110-rpm-regulator.dtsi"
/include/ "msm-pm8110.dtsi"
/include/ "msm8610-regulator.dtsi"
@@ -670,7 +858,7 @@
label = "vchg_sns";
reg = <2>;
qcom,decimation = <0>;
- qcom,pre-div-channel-scaling = <3>;
+ qcom,pre-div-channel-scaling = <2>;
qcom,calibration-type = "absolute";
qcom,scale-function = <0>;
qcom,hw-settle-time = <0>;
diff --git a/arch/arm/boot/dts/msm8974-bus.dtsi b/arch/arm/boot/dts/msm8974-bus.dtsi
index bb4b48e..3e0ef04 100644
--- a/arch/arm/boot/dts/msm8974-bus.dtsi
+++ b/arch/arm/boot/dts/msm8974-bus.dtsi
@@ -1349,30 +1349,18 @@
qcom,hw-sel = "NoC";
};
- mas-video-p0-ocmem {
+ mas-video-ocmem {
cell-id = <68>;
- label = "mas-video-p0-ocmem";
- qcom,masterp = <3>;
+ label = "mas-video-ocmem";
+ qcom,masterp = <3 4>;
qcom,tier = <2>;
qcom,perm-mode = "Fixed";
qcom,mode = "Fixed";
- qcom,qport = <2>;
+ qcom,qport = <2 3>;
qcom,mas-hw-id = <15>;
qcom,hw-sel = "NoC";
};
- mas-video-p1-ocmem {
- cell-id = <69>;
- label = "mas-video-p1-ocmem";
- qcom,masterp = <4>;
- qcom,tier = <2>;
- qcom,perm-mode = "Fixed";
- qcom,mode = "Fixed";
- qcom,qport = <3>;
- qcom,mas-hw-id = <16>;
- qcom,hw-sel = "NoC";
- };
-
mas-vfe-ocmem {
cell-id = <70>;
label = "mas-vfe-ocmem";
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
index df0db7e..b574a31 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
@@ -111,7 +111,7 @@
reg = <0x6c 0x0>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
index f58c1e2..748d5f7 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
@@ -112,7 +112,7 @@
reg = <0x6c>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
index 767a705..53f6e9e 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
@@ -113,7 +113,7 @@
reg = <0x6c>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera.dtsi b/arch/arm/boot/dts/msm8974-camera.dtsi
index 3a78a15..94a28f7 100644
--- a/arch/arm/boot/dts/msm8974-camera.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera.dtsi
@@ -23,8 +23,9 @@
qcom,csiphy@fda0ac00 {
cell-index = <0>;
compatible = "qcom,csiphy";
- reg = <0xfda0ac00 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0ac00 0x200>,
+ <0xfda00030 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 78 0>;
interrupt-names = "csiphy";
};
@@ -32,8 +33,9 @@
qcom,csiphy@fda0b000 {
cell-index = <1>;
compatible = "qcom,csiphy";
- reg = <0xfda0b000 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0b000 0x200>,
+ <0xfda00038 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 79 0>;
interrupt-names = "csiphy";
};
@@ -41,8 +43,9 @@
qcom,csiphy@fda0b400 {
cell-index = <2>;
compatible = "qcom,csiphy";
- reg = <0xfda0b400 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0b400 0x200>,
+ <0xfda00040 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 80 0>;
interrupt-names = "csiphy";
};
@@ -94,8 +97,9 @@
qcom,ispif@fda0A000 {
cell-index = <0>;
compatible = "qcom,ispif";
- reg = <0xfda0A000 0x500>;
- reg-names = "ispif";
+ reg = <0xfda0A000 0x500>,
+ <0xfda00020 0x10>;
+ reg-names = "ispif", "csi_clk_mux";
interrupts = <0 55 0>;
interrupt-names = "ispif";
};
diff --git a/arch/arm/boot/dts/msm8974-cdp.dtsi b/arch/arm/boot/dts/msm8974-cdp.dtsi
index 91d3bc0..9cfc5fd 100644
--- a/arch/arm/boot/dts/msm8974-cdp.dtsi
+++ b/arch/arm/boot/dts/msm8974-cdp.dtsi
@@ -235,6 +235,19 @@
<85 512 40000 160000>;
};
+ wlan0: qca,wlan {
+ compatible = "qca,ar6004-hsic";
+ qcom,msm-bus,name = "wlan";
+ qcom,msm-bus,num-cases = <5>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <85 512 0 0>,
+ <85 512 40000 160000>,
+ <85 512 40000 320000>,
+ <85 512 40000 480000>,
+ <85 512 40000 640000>;
+ };
};
&spmi_bus {
@@ -331,7 +344,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -361,11 +374,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The drive strength recommendation for the clock line from the hardware team
+ * is 10 mA. But since the driver has been using the below values from the
+ * start without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&uart7 {
status = "ok";
qcom,tx-gpio = <&msmgpio 41 0x00>;
@@ -381,23 +414,23 @@
&pm8941_chg {
status = "ok";
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "ok";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "ok";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "ok";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
status = "ok";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "ok";
};
@@ -677,5 +710,12 @@
qcom,cdc-micbias1-ext-cap;
qcom,cdc-micbias3-ext-cap;
qcom,cdc-micbias4-ext-cap;
+
+ /* If boost isn't available, vph_pwr_vreg can be used instead */
+ cdc-vdd-spkdrv-supply = <&pm8941_boost>;
+ qcom,cdc-vdd-spkdrv-voltage = <5000000 5000000>;
+ qcom,cdc-vdd-spkdrv-current = <1250000>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
diff --git a/arch/arm/boot/dts/msm8974-fluid.dtsi b/arch/arm/boot/dts/msm8974-fluid.dtsi
index de370e7..7f46a54 100644
--- a/arch/arm/boot/dts/msm8974-fluid.dtsi
+++ b/arch/arm/boot/dts/msm8974-fluid.dtsi
@@ -202,7 +202,34 @@
sound {
qcom,model = "msm8974-taiko-fluid-snd-card";
+ qcom,audio-routing =
+ "RX_BIAS", "MCLK",
+ "LDO_H", "MCLK",
+ "AMIC1", "MIC BIAS1 Internal1",
+ "MIC BIAS1 Internal1", "Handset Mic",
+ "AMIC2", "MIC BIAS2 External",
+ "MIC BIAS2 External", "Headset Mic",
+ "AMIC3", "MIC BIAS2 External",
+ "MIC BIAS2 External", "ANCRight Headset Mic",
+ "AMIC4", "MIC BIAS2 External",
+ "MIC BIAS2 External", "ANCLeft Headset Mic",
+ "DMIC1", "MIC BIAS1 External",
+ "MIC BIAS1 External", "Digital Mic1",
+ "DMIC2", "MIC BIAS1 External",
+ "MIC BIAS1 External", "Digital Mic2",
+ "DMIC3", "MIC BIAS3 External",
+ "MIC BIAS3 External", "Digital Mic3",
+ "DMIC4", "MIC BIAS3 External",
+ "MIC BIAS3 External", "Digital Mic4",
+ "DMIC5", "MIC BIAS4 External",
+ "MIC BIAS4 External", "Digital Mic5",
+ "DMIC6", "MIC BIAS4 External",
+ "MIC BIAS4 External", "Digital Mic6",
+ "Lineout_1 amp", "LINEOUT1",
+ "Lineout_3 amp", "LINEOUT3";
+
qcom,hdmi-audio-rx;
+ qcom,ext-ult-lo-amp-gpio = <&pm8941_gpios 6 0>;
};
};
@@ -212,6 +239,13 @@
qcom,cdc-micbias2-ext-cap;
qcom,cdc-micbias3-ext-cap;
qcom,cdc-micbias4-ext-cap;
+
+ /* If boost isn't available, vph_pwr_vreg can be used instead */
+ cdc-vdd-spkdrv-supply = <&pm8941_boost>;
+ qcom,cdc-vdd-spkdrv-voltage = <5000000 5000000>;
+ qcom,cdc-vdd-spkdrv-current = <1250000>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
@@ -308,7 +342,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -338,11 +372,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The drive strength recommendation for the clock line from the hardware team
+ * is 10 mA. But since the driver has been using the below values from the
+ * start without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&usb3 {
qcom,otg-capability;
};
@@ -353,29 +407,29 @@
&pm8941_chg {
status = "ok";
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "ok";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "ok";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
status = "ok";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "ok";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
status = "ok";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "ok";
};
diff --git a/arch/arm/boot/dts/msm8974-ion.dtsi b/arch/arm/boot/dts/msm8974-ion.dtsi
index 31afd9c..cfe39fc 100644
--- a/arch/arm/boot/dts/msm8974-ion.dtsi
+++ b/arch/arm/boot/dts/msm8974-ion.dtsi
@@ -45,9 +45,7 @@
qcom,ion-heap@27 { /* QSECOM HEAP */
compatible = "qcom,msm-ion-reserve";
reg = <27>;
- qcom,heap-align = <0x1000>;
- qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
- qcom,memory-reservation-size = <0x1100000>;
+ linux,contiguous-region = <&qsecom_mem>;
};
qcom,ion-heap@28 { /* AUDIO HEAP */
diff --git a/arch/arm/boot/dts/msm8974-liquid.dtsi b/arch/arm/boot/dts/msm8974-liquid.dtsi
index 0ae20a0..d8a090b 100644
--- a/arch/arm/boot/dts/msm8974-liquid.dtsi
+++ b/arch/arm/boot/dts/msm8974-liquid.dtsi
@@ -318,6 +318,7 @@
"Lineout_3 amp", "LINEOUT3",
"Lineout_2 amp", "LINEOUT2",
"Lineout_4 amp", "LINEOUT4",
+ "SPK_ultrasound amp", "SPK_OUT",
"AMIC1", "MIC BIAS4 External",
"MIC BIAS4 External", "Analog Mic4",
"AMIC2", "MIC BIAS2 External",
@@ -346,7 +347,14 @@
qcom,ext-spk-amp-supply = <&ext_5v>;
qcom,ext-spk-amp-gpio = <&pm8841_mpps 1 0>;
qcom,dock-plug-det-irq = <&pm8841_mpps 2 0>;
+ qcom,ext-ult-spk-amp-gpio = <&pm8941_gpios 6 0>;
qcom,hdmi-audio-rx;
+
+ qcom,prim-auxpcm-gpio-clk = <&msmgpio 74 0>;
+ qcom,prim-auxpcm-gpio-sync = <&msmgpio 75 0>;
+ qcom,prim-auxpcm-gpio-din = <&msmgpio 76 0>;
+ qcom,prim-auxpcm-gpio-dout = <&msmgpio 77 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-tert";
};
hsic_hub {
@@ -443,6 +451,14 @@
};
gpio@c500 { /* GPIO 6 */
+ /* ULTRASOUND_EN_1 PA AB enable */
+ qcom,mode = <1>; /* DIG_OUT */
+ qcom,output-type = <0>; /* CMOS */
+ qcom,pull = <4>; /* PULL_DOWN */
+ qcom,vin-sel = <0>; /* VPH */
+ qcom,out-strength = <2>; /* STRENGTH_MED */
+ qcom,src-sel = <0>; /* CONSTANT */
+ qcom,master-en = <1>;
};
gpio@c600 { /* GPIO 7 */
@@ -695,10 +711,24 @@
};
};
+&vph_pwr_vreg {
+ status = "ok";
+};
+
&slim_msm {
taiko_codec {
qcom,cdc-micbias2-ext-cap;
qcom,cdc-micbias3-ext-cap;
+
+ /*
+ * Liquid has an external spkrdrv supply. Give a dummy supply to
+ * make the codec driver happy.
+ */
+ cdc-vdd-spkdrv-supply = <&vph_pwr_vreg>;
+ qcom,cdc-vdd-spkdrv-voltage = <0 0>;
+ qcom,cdc-vdd-spkdrv-current = <0>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
@@ -736,25 +766,25 @@
&pm8941_chg {
status = "ok";
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "ok";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "ok";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "ok";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
status = "ok";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "ok";
};
@@ -786,7 +816,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -805,7 +835,27 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+
+/* The drive strength recommendation for the clock line from the hardware team
+ * is 10 mA. But since the driver has been using the below values from the
+ * start without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
diff --git a/arch/arm/boot/dts/msm8974-mdss.dtsi b/arch/arm/boot/dts/msm8974-mdss.dtsi
index 88641f9..86f8141 100644
--- a/arch/arm/boot/dts/msm8974-mdss.dtsi
+++ b/arch/arm/boot/dts/msm8974-mdss.dtsi
@@ -111,10 +111,9 @@
core-vdda-supply = <&pm8941_l12>;
core-vcc-supply = <&pm8941_s3>;
qcom,hdmi-tx-supply-names = "hpd-gdsc", "hpd-5v", "core-vdda", "core-vcc";
- qcom,hdmi-tx-supply-type = <1 1 0 0>;
qcom,hdmi-tx-min-voltage-level = <0 0 1800000 1800000>;
qcom,hdmi-tx-max-voltage-level = <0 0 1800000 1800000>;
- qcom,hdmi-tx-op-mode = <0 0 1800000 0>;
+ qcom,hdmi-tx-peak-current = <0 0 1800000 0>;
qcom,hdmi-tx-cec = <&msmgpio 31 0>;
qcom,hdmi-tx-ddc-clk = <&msmgpio 32 0>;
diff --git a/arch/arm/boot/dts/msm8974-mtp.dtsi b/arch/arm/boot/dts/msm8974-mtp.dtsi
index a81fc20..ca5f663 100644
--- a/arch/arm/boot/dts/msm8974-mtp.dtsi
+++ b/arch/arm/boot/dts/msm8974-mtp.dtsi
@@ -283,7 +283,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -313,11 +313,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The drive strength recommendation for the clock line from the hardware team
+ * is 10 mA. But since the driver has been using the below values from the
+ * start without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&usb_otg {
qcom,hsusb-otg-otg-control = <2>;
};
@@ -336,29 +356,29 @@
&pm8941_chg {
status = "ok";
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
- qcom,chg-chgr@1000 {
+ qcom,chgr@1000 {
status = "ok";
};
- qcom,chg-buck@1100 {
+ qcom,buck@1100 {
status = "ok";
};
- qcom,chg-bat-if@1200 {
+ qcom,bat-if@1200 {
status = "ok";
};
- qcom,chg-usb-chgpth@1300 {
+ qcom,usb-chgpth@1300 {
status = "ok";
};
- qcom,chg-dc-chgpth@1400 {
+ qcom,dc-chgpth@1400 {
status = "ok";
};
- qcom,chg-boost@1500 {
+ qcom,boost@1500 {
status = "ok";
};
diff --git a/arch/arm/boot/dts/msm8974-regulator.dtsi b/arch/arm/boot/dts/msm8974-regulator.dtsi
index 49450f3..d1b3334 100644
--- a/arch/arm/boot/dts/msm8974-regulator.dtsi
+++ b/arch/arm/boot/dts/msm8974-regulator.dtsi
@@ -26,15 +26,33 @@
pm8941_mvs1: regulator@8300 {
parent-supply = <&pm8941_boost>;
- qcom,enable-time = <200>;
+ qcom,enable-time = <1000>;
qcom,pull-down-enable = <1>;
+ interrupts = <0x1 0x83 0x2>;
+ interrupt-names = "ocp";
+ qcom,ocp-enable = <1>;
+ qcom,ocp-max-retries = <10>;
+ qcom,ocp-retry-delay = <30>;
+ qcom,soft-start-enable = <1>;
+ qcom,vs-soft-start-strength = <0>;
+ qcom,hpm-enable = <1>;
+ qcom,auto-mode-enable = <0>;
status = "okay";
};
pm8941_mvs2: regulator@8400 {
parent-supply = <&pm8941_boost>;
- qcom,enable-time = <200>;
+ qcom,enable-time = <1000>;
qcom,pull-down-enable = <1>;
+ interrupts = <0x1 0x84 0x2>;
+ interrupt-names = "ocp";
+ qcom,ocp-enable = <1>;
+ qcom,ocp-max-retries = <10>;
+ qcom,ocp-retry-delay = <30>;
+ qcom,soft-start-enable = <1>;
+ qcom,vs-soft-start-strength = <0>;
+ qcom,hpm-enable = <1>;
+ qcom,auto-mode-enable = <0>;
status = "okay";
};
};
@@ -448,7 +466,7 @@
#address-cells = <1>;
#size-cells = <1>;
ranges;
- qcom,pfm-threshold = <376975>;
+ qcom,pfm-threshold = <73>;
krait0_vreg: regulator@f9088000 {
compatible = "qcom,krait-regulator";
diff --git a/arch/arm/boot/dts/msm8974-smp2p.dtsi b/arch/arm/boot/dts/msm8974-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8974-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8974-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8974-v1-fluid.dts b/arch/arm/boot/dts/msm8974-v1-fluid.dts
index 8f2ef31..8ab24df 100644
--- a/arch/arm/boot/dts/msm8974-v1-fluid.dts
+++ b/arch/arm/boot/dts/msm8974-v1-fluid.dts
@@ -23,7 +23,7 @@
};
&pm8941_chg {
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
};
&sdcc1 {
diff --git a/arch/arm/boot/dts/msm8974-v1-mtp.dts b/arch/arm/boot/dts/msm8974-v1-mtp.dts
index 6cb9f09..09ea84b 100644
--- a/arch/arm/boot/dts/msm8974-v1-mtp.dts
+++ b/arch/arm/boot/dts/msm8974-v1-mtp.dts
@@ -22,5 +22,5 @@
};
&pm8941_chg {
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
};
diff --git a/arch/arm/boot/dts/msm8974-v1-pm.dtsi b/arch/arm/boot/dts/msm8974-v1-pm.dtsi
index ec6b14a..f9c0920 100644
--- a/arch/arm/boot/dts/msm8974-v1-pm.dtsi
+++ b/arch/arm/boot/dts/msm8974-v1-pm.dtsi
@@ -142,7 +142,7 @@
qcom,type = <0x62706d73>; /* "smpb" */
qcom,id = <0x02>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <5>; /* Super Turbo */
+ qcom,init-value = <6>; /* Super Turbo */
};
qcom,lpm-resources@1 {
diff --git a/arch/arm/boot/dts/msm8974-v2-pm.dtsi b/arch/arm/boot/dts/msm8974-v2-pm.dtsi
index 41837c1..5a1c047 100644
--- a/arch/arm/boot/dts/msm8974-v2-pm.dtsi
+++ b/arch/arm/boot/dts/msm8974-v2-pm.dtsi
@@ -142,7 +142,7 @@
qcom,type = <0x62706d73>; /* "smpb" */
qcom,id = <0x02>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <5>; /* Super Turbo */
+ qcom,init-value = <6>; /* Super Turbo */
};
qcom,lpm-resources@1 {
diff --git a/arch/arm/boot/dts/msm8974-v2.dtsi b/arch/arm/boot/dts/msm8974-v2.dtsi
index 777d26c..def7b8c 100644
--- a/arch/arm/boot/dts/msm8974-v2.dtsi
+++ b/arch/arm/boot/dts/msm8974-v2.dtsi
@@ -63,6 +63,9 @@
qcom,mdss-intf-off = <0x00012500 0x00012700
0x00012900 0x00012b00>;
qcom,mdss-pingpong-off = <0x00012D00 0x00012E00 0x00012F00>;
+ qcom,mdss-has-bwc;
+ qcom,mdss-has-decimation;
+ qcom,mdss-ad-off = <0x0013100 0x00013300>;
};
&mdss_hdmi_tx {
diff --git a/arch/arm/boot/dts/msm8974.dtsi b/arch/arm/boot/dts/msm8974.dtsi
index 273c677..a0a7cbe 100644
--- a/arch/arm/boot/dts/msm8974.dtsi
+++ b/arch/arm/boot/dts/msm8974.dtsi
@@ -38,7 +38,7 @@
secure_mem: secure_region {
linux,contiguous-region;
- reg = <0 0x7800000>;
+ reg = <0 0xFC00000>;
label = "secure_mem";
};
@@ -47,6 +47,13 @@
reg = <0 0x2000000>;
label = "adsp_mem";
};
+
+ qsecom_mem: qsecom_region {
+ linux,contiguous-region;
+ reg = <0 0x1100000>;
+ label = "qseecom_mem";
+ };
+
};
intc: interrupt-controller@F9000000 {
@@ -84,6 +91,66 @@
clock-frequency = <19200000>;
};
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
+ clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 8 0x4>,
+ <0 7 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+ };
+
+
qcom,mpm2-sleep-counter@fc4a3000 {
compatible = "qcom,mpm2-sleep-counter";
reg = <0xfc4a3000 0x1000>;
@@ -186,7 +253,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
@@ -231,7 +298,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
@@ -573,6 +640,14 @@
qcom,cdc-vddcx-2-voltage = <1225000 1225000>;
qcom,cdc-vddcx-2-current = <10000>;
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-vdd-tx-h",
+ "cdc-vdd-rx-h",
+ "cdc-vddpx-1",
+ "cdc-vdd-a-1p2v",
+ "cdc-vddcx-1",
+ "cdc-vddcx-2";
+
qcom,cdc-micbias-ldoh-v = <0x3>;
qcom,cdc-micbias-cfilt1-mv = <1800>;
qcom,cdc-micbias-cfilt2-mv = <2700>;
@@ -618,12 +693,12 @@
"MIC BIAS4 External", "Digital Mic6";
qcom,cdc-mclk-gpios = <&pm8941_gpios 15 0>;
- taiko-mclk-clk = <&pm8941_clkdiv1>;
qcom,taiko-mclk-clk-freq = <9600000>;
qcom,prim-auxpcm-gpio-clk = <&msmgpio 65 0>;
qcom,prim-auxpcm-gpio-sync = <&msmgpio 66 0>;
qcom,prim-auxpcm-gpio-din = <&msmgpio 67 0>;
qcom,prim-auxpcm-gpio-dout = <&msmgpio 68 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-prim";
qcom,sec-auxpcm-gpio-clk = <&msmgpio 79 0>;
qcom,sec-auxpcm-gpio-sync = <&msmgpio 80 0>;
qcom,sec-auxpcm-gpio-din = <&msmgpio 81 0>;
@@ -657,6 +732,22 @@
qcom,i2c-src-freq = <50000000>;
};
+ i2c_1: i2c@f9923000 {
+ cell-index = <1>;
+ compatible = "qcom,i2c-qup";
+ reg = <0xf9923000 0x1000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg-names = "qup_phys_addr";
+ interrupts = <0 95 0>;
+ interrupt-names = "qup_err_intr";
+ qcom,i2c-bus-freq = <100000>;
+ qcom,i2c-src-freq = <19200000>;
+ qcom,scl-gpio = <&msmgpio 3 0>;
+ qcom,sda-gpio = <&msmgpio 2 0>;
+ status = "disabled";
+ };
+
i2c_2: i2c@f9924000 {
cell-index = <2>;
compatible = "qcom,i2c-qup";
@@ -766,6 +857,13 @@
interrupts = <0 162 1>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
qcom,msm-adsp-loader {
@@ -1027,6 +1125,7 @@
/* GPIO inputs from mss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_1_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_1_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_1_in 2 0>;
/* GPIO output to mss */
@@ -1046,6 +1145,7 @@
/* GPIO inputs from wcnss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_4_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_4_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_4_in 2 0>;
/* GPIO output to wcnss */
@@ -1058,8 +1158,9 @@
qcom,wcnss-wlan@fb000000 {
compatible = "qcom,wcnss_wlan";
- reg = <0xfb000000 0x280000>;
- reg-names = "wcnss_mmio";
+ reg = <0xfb000000 0x280000>,
+ <0xf9011008 0x04>;
+ reg-names = "wcnss_mmio", "wcnss_fiq";
interrupts = <0 145 0 0 146 0>;
interrupt-names = "wcnss_wlantx_irq", "wcnss_wlanrx_irq";
@@ -1123,7 +1224,7 @@
reg = <0xf9bff000 0x200>;
};
- qcom,qseecom@fe806000 {
+ qseecom: qcom,qseecom@7f00000 {
compatible = "qcom,qseecom";
reg = <0x7f00000 0x500000>;
reg-names = "secapp-region";
@@ -1164,7 +1265,27 @@
qcom,firmware-name = "venus";
};
- qcom,cache_erp {
+ qcom,cache_erp@f9012000 {
+ reg = <0xf9012000 0x80>,
+ <0xf9089000 0x80>,
+ <0xf9099000 0x80>,
+ <0xf90a9000 0x80>,
+ <0xf90b9000 0x80>,
+ <0xf9088000 0x40>,
+ <0xf9098000 0x40>,
+ <0xf90a8000 0x40>,
+ <0xf90b8000 0x40>;
+
+ reg-names = "l2_saw",
+ "krait0_saw",
+ "krait1_saw",
+ "krait2_saw",
+ "krait3_saw",
+ "krait0_acs",
+ "krait1_acs",
+ "krait2_acs",
+ "krait3_acs";
+
compatible = "qcom,cache_erp";
interrupts = <1 9 0>, <0 2 0>;
interrupt-names = "l1_irq", "l2_irq";
@@ -1295,8 +1416,8 @@
qcom,core-control-mask = <0xe>;
qcom,vdd-restriction-temp = <5>;
qcom,vdd-restriction-temp-hysteresis = <10>;
- qcom,pmic-sw-mode-temp = <90>;
- qcom,pmic-sw-mode-temp-hysteresis = <70>;
+ qcom,pmic-sw-mode-temp = <85>;
+ qcom,pmic-sw-mode-temp-hysteresis = <75>;
qcom,pmic-sw-mode-regs = "vdd_dig";
vdd_dig-supply = <&pm8841_s2_floor_corner>;
vdd_gfx-supply = <&pm8841_s4_floor_corner>;
@@ -1322,7 +1443,7 @@
qcom,rx-ring-size = <64>;
};
- qcom,msm-mem-hole {
+ memory_hole: qcom,msm-mem-hole {
compatible = "qcom,msm-mem-hole";
qcom,memblock-remove = <0x7f00000 0x8000000>; /* Address and Size of Hole */
};
@@ -1448,22 +1569,28 @@
};
&gdsc_venus {
+ qcom,clock-names = "core_clk";
status = "ok";
};
&gdsc_mdss {
+ qcom,clock-names = "core_clk", "lut_clk";
status = "ok";
};
&gdsc_jpeg {
+ qcom,clock-names = "core0_clk", "core1_clk", "core2_clk";
status = "ok";
};
&gdsc_vfe {
+ qcom,clock-names = "core0_clk", "core1_clk", "csi0_clk", "csi1_clk",
+ "cpp_clk";
status = "ok";
};
&gdsc_oxili_gx {
+ qcom,clock-names = "core_clk";
qcom,retain-mems;
status = "ok";
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts b/arch/arm/boot/dts/msm9625-cdp.dtsi
similarity index 79%
rename from arch/arm/boot/dts/msm9625-v2-1-cdp.dts
rename to arch/arm/boot/dts/msm9625-cdp.dtsi
index da07100..1f9cbb0 100644
--- a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-cdp.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,17 +10,10 @@
* GNU General Public License for more details.
*/
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
+/include/ "msm9625-display.dtsi"
+/include/ "qpic-panel-ili-qvga.dtsi"
/ {
- model = "Qualcomm MSM 9625V2.1 CDP";
- compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
- qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
- <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
- <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
-
i2c@f9925000 {
charger@57 {
compatible = "summit,smb137c";
@@ -42,10 +35,18 @@
wlan0: qca,wlan {
cell-index = <0>;
- compatible = "qca,ar6004-sdio";
+ compatible = "qca,ar6004-hsic";
qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
+ qca,vdd-io-supply = <&pm8019_l11>;
+ };
+
+ qca,wlan_ar6003 {
+ cell-index = <0>;
+ compatible = "qca,ar6003-sdio";
+ qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
+ qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
+ qca,vdd-io-supply = <&pm8019_l11>;
};
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts b/arch/arm/boot/dts/msm9625-mtp.dtsi
similarity index 73%
copy from arch/arm/boot/dts/msm9625-v2-1-cdp.dts
copy to arch/arm/boot/dts/msm9625-mtp.dtsi
index da07100..cc0bf5e 100644
--- a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-mtp.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,17 +10,7 @@
* GNU General Public License for more details.
*/
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
-
/ {
- model = "Qualcomm MSM 9625V2.1 CDP";
- compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
- qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
- <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
- <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
-
i2c@f9925000 {
charger@57 {
compatible = "summit,smb137c";
@@ -42,10 +32,18 @@
wlan0: qca,wlan {
cell-index = <0>;
- compatible = "qca,ar6004-sdio";
+ compatible = "qca,ar6004-hsic";
qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
+ qca,vdd-io-supply = <&pm8019_l11>;
+ };
+
+ qca,wlan_ar6003 {
+ cell-index = <0>;
+ compatible = "qca,ar6003-sdio";
+ qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
+ qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
+ qca,vdd-io-supply = <&pm8019_l11>;
};
};
@@ -89,11 +87,23 @@
};
mpp@a300 { /* MPP 4 */
+ /* VADC channel 19 */
+ qcom,mode = <4>;
+ qcom,ain-route = <3>; /* AMUX 8 */
+ qcom,master-en = <1>;
+ qcom,src-sel = <0>; /* Function constant */
+ qcom,invert = <1>;
};
mpp@a400 { /* MPP 5 */
};
mpp@a500 { /* MPP 6 */
+ /* VADC channel 21 */
+ qcom,mode = <4>;
+ qcom,ain-route = <1>; /* AMUX 6 */
+ qcom,master-en = <1>;
+ qcom,src-sel = <0>; /* Function constant */
+ qcom,invert = <1>;
};
};
diff --git a/arch/arm/boot/dts/msm9625-v1-cdp.dts b/arch/arm/boot/dts/msm9625-v1-cdp.dts
index cf17c69..d7537eb 100644
--- a/arch/arm/boot/dts/msm9625-v1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-v1-cdp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v1.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
model = "Qualcomm MSM 9625V1 CDP";
@@ -20,88 +21,4 @@
qcom,msm-id = <134 1 0>, <152 1 0>, <149 1 0>, <150 1 0>,
<151 1 0>, <148 1 0>, <173 1 0>, <174 1 0>,
<175 1 0>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v1-mtp.dts b/arch/arm/boot/dts/msm9625-v1-mtp.dts
index 24aa3af..a70ec1a 100644
--- a/arch/arm/boot/dts/msm9625-v1-mtp.dts
+++ b/arch/arm/boot/dts/msm9625-v1-mtp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v1.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
model = "Qualcomm MSM 9625V1 MTP";
@@ -20,100 +21,4 @@
qcom,msm-id = <134 7 0>, <152 7 0>, <149 7 0>, <150 7 0>,
<151 7 0>, <148 7 0>, <173 7 0>, <174 7 0>,
<175 7 0>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- /* VADC channel 19 */
- qcom,mode = <4>;
- qcom,ain-route = <3>; /* AMUX 8 */
- qcom,master-en = <1>;
- qcom,src-sel = <0>; /* Function constant */
- qcom,invert = <1>;
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- /* VADC channel 21 */
- qcom,mode = <4>;
- qcom,ain-route = <1>; /* AMUX 6 */
- qcom,master-en = <1>;
- qcom,src-sel = <0>; /* Function constant */
- qcom,invert = <1>;
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v1.dtsi b/arch/arm/boot/dts/msm9625-v1.dtsi
index ad95601..de88ff1 100644
--- a/arch/arm/boot/dts/msm9625-v1.dtsi
+++ b/arch/arm/boot/dts/msm9625-v1.dtsi
@@ -37,6 +37,10 @@
};
};
+&hsic_host {
+ qcom,disable-park-mode;
+};
+
&ipa_hw {
qcom,ipa-hw-ver = <1>; /* IPA h-w revision */
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-mtp.dts b/arch/arm/boot/dts/msm9625-v2-1-mtp.dts
deleted file mode 100644
index 1e0f3c0..0000000
--- a/arch/arm/boot/dts/msm9625-v2-1-mtp.dts
+++ /dev/null
@@ -1,99 +0,0 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
-
-/ {
- model = "Qualcomm MSM 9625V2.1 MTP";
- compatible = "qcom,msm9625-mtp", "qcom,msm9625", "qcom,mtp";
- qcom,msm-id = <134 7 0x20001>, <152 7 0x20001>, <149 7 0x20001>,
- <150 7 0x20001>, <151 7 0x20001>, <148 7 0x20001>,
- <173 7 0x20001>, <174 7 0x20001>, <175 7 0x20001>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
-};
diff --git a/arch/arm/boot/dts/msm9625-v2-cdp.dts b/arch/arm/boot/dts/msm9625-v2-cdp.dts
index 660bdbd..9fbe5ec 100644
--- a/arch/arm/boot/dts/msm9625-v2-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-v2-cdp.dts
@@ -13,8 +13,7 @@
/dts-v1/;
/include/ "msm9625-v2.dtsi"
-/include/ "msm9625-display.dtsi"
-/include/ "qpic-panel-ili-qvga.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
model = "Qualcomm MSM 9625V2 CDP";
@@ -22,88 +21,4 @@
qcom,msm-id = <134 1 0x20000>, <152 1 0x20000>, <149 1 0x20000>,
<150 1 0x20000>, <151 1 0x20000>, <148 1 0x20000>,
<173 1 0x20000>, <174 1 0x20000>, <175 1 0x20000>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-hsic";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v2-mtp.dts b/arch/arm/boot/dts/msm9625-v2-mtp.dts
index c9e54be..5324e2c 100644
--- a/arch/arm/boot/dts/msm9625-v2-mtp.dts
+++ b/arch/arm/boot/dts/msm9625-v2-mtp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v2.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
model = "Qualcomm MSM 9625V2 MTP";
diff --git a/arch/arm/boot/dts/msmzinc-sim.dts b/arch/arm/boot/dts/msm9625-v2.1-cdp.dts
similarity index 61%
copy from arch/arm/boot/dts/msmzinc-sim.dts
copy to arch/arm/boot/dts/msm9625-v2.1-cdp.dts
index e410344..b643593 100644
--- a/arch/arm/boot/dts/msmzinc-sim.dts
+++ b/arch/arm/boot/dts/msm9625-v2.1-cdp.dts
@@ -12,18 +12,13 @@
/dts-v1/;
-/include/ "msmzinc.dtsi"
+/include/ "msm9625-v2.1.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
- model = "Qualcomm MSM ZINC Simulator";
- compatible = "qcom,msmzinc-sim", "qcom,msmzinc", "qcom,sim";
- qcom,msm-id = <178 0 0>;
-
- aliases {
- serial0 = &uart0;
- };
-
- uart0: serial@f991f000 {
- status = "ok";
- };
+ model = "Qualcomm MSM 9625V2.1 CDP";
+ compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
+ qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
+ <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
+ <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
};
diff --git a/arch/arm/boot/dts/msmzinc-sim.dts b/arch/arm/boot/dts/msm9625-v2.1-mtp.dts
similarity index 61%
copy from arch/arm/boot/dts/msmzinc-sim.dts
copy to arch/arm/boot/dts/msm9625-v2.1-mtp.dts
index e410344..8bbcc0d 100644
--- a/arch/arm/boot/dts/msmzinc-sim.dts
+++ b/arch/arm/boot/dts/msm9625-v2.1-mtp.dts
@@ -12,18 +12,13 @@
/dts-v1/;
-/include/ "msmzinc.dtsi"
+/include/ "msm9625-v2.1.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
- model = "Qualcomm MSM ZINC Simulator";
- compatible = "qcom,msmzinc-sim", "qcom,msmzinc", "qcom,sim";
- qcom,msm-id = <178 0 0>;
-
- aliases {
- serial0 = &uart0;
- };
-
- uart0: serial@f991f000 {
- status = "ok";
- };
+ model = "Qualcomm MSM 9625V2.1 MTP";
+ compatible = "qcom,msm9625-mtp", "qcom,msm9625", "qcom,mtp";
+ qcom,msm-id = <134 7 0x20001>, <152 7 0x20001>, <149 7 0x20001>,
+ <150 7 0x20001>, <151 7 0x20001>, <148 7 0x20001>,
+ <173 7 0x20001>, <174 7 0x20001>, <175 7 0x20001>;
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1.dtsi b/arch/arm/boot/dts/msm9625-v2.1.dtsi
similarity index 100%
rename from arch/arm/boot/dts/msm9625-v2-1.dtsi
rename to arch/arm/boot/dts/msm9625-v2.1.dtsi
diff --git a/arch/arm/boot/dts/msm9625-v2.dtsi b/arch/arm/boot/dts/msm9625-v2.dtsi
index 3ce6844..81d8e00 100644
--- a/arch/arm/boot/dts/msm9625-v2.dtsi
+++ b/arch/arm/boot/dts/msm9625-v2.dtsi
@@ -35,6 +35,10 @@
qcom,ipa-hw-ver = <2>; /* IPA h-w revision */
};
+&hsic_host {
+ qcom,disable-park-mode;
+};
+
&sfpb_spinlock {
status = "disable";
};
diff --git a/arch/arm/boot/dts/msm9625.dtsi b/arch/arm/boot/dts/msm9625.dtsi
index 0e98aa8..3e2eab3 100644
--- a/arch/arm/boot/dts/msm9625.dtsi
+++ b/arch/arm/boot/dts/msm9625.dtsi
@@ -58,12 +58,70 @@
clock-frequency = <32768>;
};
- timer: msm-qtimer@f9021000 {
- compatible = "arm,armv7-timer";
- reg = <0xF9021000 0x1000>;
- interrupts = <0 7 0>;
- irq-is-not-percpu;
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 7 0x4>,
+ <0 6 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 8 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9029000 {
+ frame-number = <7>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9029000 0x1000>;
+ status = "disabled";
+ };
};
qcom,sps@f9980000 {
@@ -97,7 +155,7 @@
qcom,hsusb-otg-disable-reset;
qcom,hsusb-otg-lpm-on-dev-suspend;
qcom,hsusb-otg-clk-always-on-workaround;
- qcom,hsusb-otg-delay-lpm;
+ qcom,hsusb-otg-delay-lpm-hndshk-on-disconnect;
qcom,msm-bus,name = "usb2";
qcom,msm-bus,num-cases = <2>;
@@ -147,6 +205,7 @@
qcom,src-bam-pipe-index = <1>;
qcom,data-fifo-size = <0x8000>;
qcom,descriptor-fifo-size = <0x2000>;
+ qcom,reset-bam-on-connect;
};
qcom,pipe1 {
label = "hsusb-ipa-in-0";
@@ -159,6 +218,7 @@
qcom,dst-bam-pipe-index = <0>;
qcom,data-fifo-size = <0x8000>;
qcom,descriptor-fifo-size = <0x2000>;
+ qcom,reset-bam-on-connect;
};
qcom,pipe2 {
label = "hsusb-qdss-in-0";
@@ -334,7 +394,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>;
qcom,pad-pull-off = <0x0 0x3 0x3>;
- qcom,pad-drv-on = <0x7 0x4 0x4>;
+ qcom,pad-drv-on = <0x4 0x4 0x4>;
qcom,pad-drv-off = <0x0 0x0 0x0>;
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -505,6 +565,14 @@
qcom,cdc-vddcx-2-voltage = <1200000 1200000>;
qcom,cdc-vddcx-2-current = <10000>;
+ qcom,cdc-static-supplies = "cdc-vdd-buck",
+ "cdc-vdd-tx-h",
+ "cdc-vdd-rx-h",
+ "cdc-vddpx-1",
+ "cdc-vdd-a-1p2v",
+ "cdc-vddcx-1",
+ "cdc-vddcx-2";
+
qcom,cdc-micbias-ldoh-v = <0x3>;
qcom,cdc-micbias-cfilt1-mv = <1800>;
qcom,cdc-micbias-cfilt2-mv = <2700>;
@@ -689,15 +757,16 @@
/* GPIO inputs from mss */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_1_in 0 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_1_in 1 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_1_in 2 0>;
/* GPIO output to mss */
qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_1_out 0 0>;
};
- qcom,smem@fa00000 {
+ qcom,smem@0 {
compatible = "qcom,smem";
- reg = <0xfa00000 0x200000>,
+ reg = <0x0 0x100000>,
<0xf9011000 0x1000>,
<0xfc428000 0x4000>;
reg-names = "smem", "irq-reg-base", "aux-mem1";
diff --git a/arch/arm/boot/dts/msmkrypton.dtsi b/arch/arm/boot/dts/msmkrypton.dtsi
index db61dab..3f51659 100644
--- a/arch/arm/boot/dts/msmkrypton.dtsi
+++ b/arch/arm/boot/dts/msmkrypton.dtsi
@@ -37,12 +37,70 @@
qcom,direct-connect-irqs = <8>;
};
- timer: msm-qtimer@f9021000 {
- compatible = "arm,armv7-timer";
- reg = <0xf9021000 0x1000>;
- interrupts = <0 7 0>;
- irq-is-not-percpu;
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 7 0x4>,
+ <0 6 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 8 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9029000 {
+ frame-number = <7>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9029000 0x1000>;
+ status = "disabled";
+ };
};
uartdm3: serial@f991f000 {
diff --git a/arch/arm/boot/dts/msmzinc.dtsi b/arch/arm/boot/dts/msmzinc.dtsi
deleted file mode 100644
index 642597d..0000000
--- a/arch/arm/boot/dts/msmzinc.dtsi
+++ /dev/null
@@ -1,86 +0,0 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-/include/ "skeleton.dtsi"
-/include/ "msmzinc-ion.dtsi"
-
-/ {
- model = "Qualcomm MSM ZINC";
- compatible = "qcom,msmzinc";
- interrupt-parent = <&intc>;
-
- intc: interrupt-controller@f9000000 {
- compatible = "qcom,msm-qgic2";
- interrupt-controller;
- #interrupt-cells = <3>;
- reg = <0xF9000000 0x1000>,
- <0xF9002000 0x1000>;
- };
-
- msmgpio: gpio@fd510000 {
- compatible = "qcom,msm-gpio";
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-controller;
- #interrupt-cells = <2>;
- reg = <0xfd510000 0x4000>;
- ngpio = <146>;
- interrupts = <0 208 0>;
- qcom,direct-connect-irqs = <8>;
- };
-
- timer {
- compatible = "arm,armv7-timer";
- interrupts = <1 2 0 1 3 0>;
- clock-frequency = <19200000>;
- };
-
- serial@f991f000 {
- compatible = "qcom,msm-lsuart-v14";
- reg = <0xf991f000 0x1000>;
- interrupts = <0 109 0>;
- status = "disabled";
- };
-
- qcom,cache_erp {
- compatible = "qcom,cache_erp";
- interrupts = <1 9 0>, <0 2 0>;
- interrupt-names = "l1_irq", "l2_irq";
- };
-
- qcom,cache_dump {
- compatible = "qcom,cache_dump";
- qcom,l1-dump-size = <0x100000>;
- qcom,l2-dump-size = <0x500000>;
- qcom,memory-reservation-type = "EBI1";
- qcom,memory-reservation-size = <0x600000>; /* 6M EBI1 buffer */
- };
-
- rpm_bus: qcom,rpm-smd {
- compatible = "qcom,rpm-smd";
- rpm-channel-name = "rpm_requests";
- rpm-channel-type = <15>; /* SMD_APPS_RPM */
- rpm-standalone;
- };
-
- qcom,msm-imem@fe805000 {
- compatible = "qcom,msm-imem";
- reg = <0xfe805000 0x1000>; /* Address and size of IMEM */
- };
-
- qcom,msm-rtb {
- compatible = "qcom,msm-rtb";
- qcom,memory-reservation-type = "EBI1";
- qcom,memory-reservation-size = <0x100000>; /* 1M EBI1 buffer */
- };
-
-};
diff --git a/arch/arm/configs/msmzinc_defconfig b/arch/arm/configs/apq8084_defconfig
similarity index 95%
rename from arch/arm/configs/msmzinc_defconfig
rename to arch/arm/configs/apq8084_defconfig
index d0ea87a..a1fa53c 100644
--- a/arch/arm/configs/msmzinc_defconfig
+++ b/arch/arm/configs/apq8084_defconfig
@@ -34,7 +34,7 @@
CONFIG_EFI_PARTITION=y
CONFIG_IOSCHED_TEST=y
CONFIG_ARCH_MSM=y
-CONFIG_ARCH_MSMZINC=y
+CONFIG_ARCH_APQ8084=y
CONFIG_MSM_KRAIT_TBB_ABORT_HANDLER=y
CONFIG_MSM_RPM_SMD=y
# CONFIG_MSM_STACKED_MEMORY is not set
@@ -74,8 +74,6 @@
CONFIG_MSM_L2_ERP_PORT_PANIC=y
CONFIG_MSM_L2_ERP_1BIT_PANIC=y
CONFIG_MSM_L2_ERP_2BIT_PANIC=y
-CONFIG_MSM_CACHE_DUMP=y
-CONFIG_MSM_CACHE_DUMP_ON_PANIC=y
CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
CONFIG_ARM_LPAE=y
CONFIG_NO_HZ=y
@@ -284,6 +282,7 @@
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
CONFIG_FB_MSM_MDSS=y
@@ -313,6 +312,23 @@
CONFIG_USB_STORAGE_KARMA=y
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_UNSAFE_RESUME=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_BLOCK_MINORS=32
+# CONFIG_MMC_BLOCK_BOUNCE is not set
+CONFIG_MMC_TEST=m
+CONFIG_MMC_BLOCK_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_MSM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_DWC3_MSM=y
+CONFIG_USB_G_ANDROID=y
+CONFIG_DIAG_CHAR=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_BACKLIGHT=y
diff --git a/arch/arm/configs/msmzinc_defconfig b/arch/arm/configs/msm8610-perf_defconfig
similarity index 67%
copy from arch/arm/configs/msmzinc_defconfig
copy to arch/arm/configs/msm8610-perf_defconfig
index d0ea87a..5660dec 100644
--- a/arch/arm/configs/msmzinc_defconfig
+++ b/arch/arm/configs/msm8610-perf_defconfig
@@ -1,5 +1,6 @@
# CONFIG_ARM_PATCH_PHYS_VIRT is not set
CONFIG_EXPERIMENTAL=y
+CONFIG_LOCALVERSION="-perf"
CONFIG_SYSVIPC=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_IKCONFIG=y
@@ -14,6 +15,7 @@
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_IPC_NS is not set
+# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
@@ -22,85 +24,70 @@
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
-CONFIG_ASHMEM=y
CONFIG_EMBEDDED=y
+# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
-CONFIG_KPROBES=y
+CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
-CONFIG_IOSCHED_TEST=y
CONFIG_ARCH_MSM=y
-CONFIG_ARCH_MSMZINC=y
-CONFIG_MSM_KRAIT_TBB_ABORT_HANDLER=y
-CONFIG_MSM_RPM_SMD=y
+CONFIG_ARCH_MSM8610=y
+CONFIG_ARCH_MSM8226=y
# CONFIG_MSM_STACKED_MEMORY is not set
CONFIG_CPU_HAS_L2_PMU=y
# CONFIG_MSM_FIQ_SUPPORT is not set
# CONFIG_MSM_PROC_COMM is not set
CONFIG_MSM_SMD=y
CONFIG_MSM_SMD_PKG4=y
+CONFIG_MSM_BAM_DMUX=y
+CONFIG_MSM_SMP2P=y
+CONFIG_MSM_SMP2P_TEST=y
CONFIG_MSM_IPC_LOGGING=y
CONFIG_MSM_IPC_ROUTER=y
CONFIG_MSM_IPC_ROUTER_SMD_XPRT=y
-CONFIG_MSM_IPC_ROUTER_SECURITY=y
-# CONFIG_MSM_HW3D is not set
-CONFIG_MSM_RPM_REGULATOR_SMD=y
+CONFIG_MSM_QMI_INTERFACE=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_SYSMON_COMM=y
+CONFIG_MSM_PIL_LPASS_QDSP6V5=y
+CONFIG_MSM_PIL_MSS_QDSP6V5=y
+CONFIG_MSM_PIL_VENUS=y
+CONFIG_MSM_PIL_PRONTO=y
+CONFIG_MSM_TZ_LOG=y
CONFIG_MSM_DIRECT_SCLK_ACCESS=y
CONFIG_MSM_WATCHDOG_V2=y
CONFIG_MSM_MEMORY_DUMP=y
CONFIG_MSM_DLOAD_MODE=y
-CONFIG_MSM_RUN_QUEUE_STATS=y
-CONFIG_MSM_SPM_V2=y
-CONFIG_MSM_L2_SPM=y
-CONFIG_MSM_MULTIMEDIA_USE_ION=y
+CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
+CONFIG_MSM_ADSP_LOADER=m
CONFIG_MSM_OCMEM=y
CONFIG_MSM_OCMEM_LOCAL_POWER_CTRL=y
CONFIG_MSM_OCMEM_DEBUG=y
CONFIG_MSM_OCMEM_NONSECURE=y
+CONFIG_MSM_OCMEM_POWER_DISABLE=y
CONFIG_SENSORS_ADSP=y
CONFIG_MSM_RTB=y
CONFIG_MSM_RTB_SEPARATE_CPUS=y
-CONFIG_MSM_CACHE_ERP=y
-CONFIG_MSM_L1_ERR_PANIC=y
-CONFIG_MSM_L1_RECOV_ERR_PANIC=y
-CONFIG_MSM_L1_ERR_LOG=y
-CONFIG_MSM_L2_ERP_PRINT_ACCESS_ERRORS=y
-CONFIG_MSM_L2_ERP_PORT_PANIC=y
-CONFIG_MSM_L2_ERP_1BIT_PANIC=y
-CONFIG_MSM_L2_ERP_2BIT_PANIC=y
-CONFIG_MSM_CACHE_DUMP=y
-CONFIG_MSM_CACHE_DUMP_ON_PANIC=y
-CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
-CONFIG_ARM_LPAE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
-# CONFIG_SMP_ON_UP is not set
CONFIG_SCHED_MC=y
CONFIG_ARM_ARCH_TIMER=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
-CONFIG_VMALLOC_RESERVE=0x19000000
-CONFIG_CC_STACKPROTECTOR=y
-CONFIG_CP_ACCESS=y
CONFIG_USE_OF=y
-CONFIG_CPU_FREQ=y
-CONFIG_CPU_FREQ_GOV_POWERSAVE=y
-CONFIG_CPU_FREQ_GOV_USERSPACE=y
-CONFIG_CPU_FREQ_GOV_ONDEMAND=y
-CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-CONFIG_WAKELOCK=y
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
CONFIG_PM_RUNTIME=y
CONFIG_NET=y
CONFIG_PACKET=y
@@ -180,6 +167,10 @@
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
@@ -191,151 +182,163 @@
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_BRIDGE=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_CLS_FW=y
-CONFIG_NET_CLS_U32=y
-CONFIG_CLS_U32_MARK=y
-CONFIG_NET_CLS_FLOW=y
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_CMP=y
-CONFIG_NET_EMATCH_NBYTE=y
-CONFIG_NET_EMATCH_U32=y
-CONFIG_NET_EMATCH_META=y
-CONFIG_NET_EMATCH_TEXT=y
-CONFIG_NET_CLS_ACT=y
+CONFIG_BT=y
+CONFIG_BT_RFCOMM=y
+CONFIG_BT_RFCOMM_TTY=y
+CONFIG_BT_BNEP=y
+CONFIG_BT_BNEP_MC_FILTER=y
+CONFIG_BT_BNEP_PROTO_FILTER=y
+CONFIG_BT_HIDP=y
+CONFIG_BT_HCISMD=y
CONFIG_CFG80211=y
-CONFIG_RFKILL=y
-CONFIG_GENLOCK=y
-CONFIG_GENLOCK_MISCDEVICE=y
-CONFIG_SYNC=y
-CONFIG_SW_SYNC=y
+CONFIG_NL80211_TESTMODE=y
CONFIG_CMA=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
-CONFIG_HAPTIC_ISA1200=y
-CONFIG_USB_HSIC_SMSC_HUB=y
-CONFIG_TI_DRV2667=y
-CONFIG_SCSI=y
-CONFIG_SCSI_TGT=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_CHR_DEV_SCH=y
-CONFIG_SCSI_MULTI_LUN=y
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_TUN=y
-CONFIG_KS8851=m
# CONFIG_MSM_RMNET is not set
-CONFIG_SLIP=y
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_MODE_SLIP6=y
-CONFIG_USB_USBNET=y
+CONFIG_MSM_RMNET_BAM=y
+CONFIG_WCNSS_CORE=y
+CONFIG_WCNSS_CORE_PRONTO=y
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_EVBUG=m
CONFIG_KEYBOARD_GPIO=y
-CONFIG_INPUT_JOYSTICK=y
CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ATMEL_MXT=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI4_DEV=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=m
CONFIG_SERIAL_MSM_HSL=y
CONFIG_SERIAL_MSM_HSL_CONSOLE=y
+CONFIG_DIAG_CHAR=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MSM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_QUP=y
CONFIG_SPI=y
+CONFIG_SPI_QUP=y
CONFIG_SPI_SPIDEV=m
CONFIG_SPMI=y
+CONFIG_MSM_BUS_SCALING=y
CONFIG_SPMI_MSM_PMIC_ARB=y
CONFIG_MSM_QPNP_INT=y
+CONFIG_SLIMBUS_MSM_NGD=y
CONFIG_DEBUG_GPIO=y
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_QPNP_PIN=y
-CONFIG_GPIO_QPNP_PIN_DEBUG=y
CONFIG_POWER_SUPPLY=y
-CONFIG_SMB350_CHARGER=y
-CONFIG_BATTERY_BQ28400=y
CONFIG_QPNP_CHARGER=y
-CONFIG_BATTERY_BCL=y
-CONFIG_SENSORS_EPM_ADC=y
+CONFIG_QPNP_BMS=y
CONFIG_SENSORS_QPNP_ADC_VOLTAGE=y
CONFIG_SENSORS_QPNP_ADC_CURRENT=y
CONFIG_THERMAL=y
CONFIG_THERMAL_TSENS8974=y
CONFIG_THERMAL_MONITOR=y
-CONFIG_THERMAL_QPNP=y
CONFIG_THERMAL_QPNP_ADC_TM=y
-CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_WCD9306_CODEC=y
CONFIG_REGULATOR_STUB=y
CONFIG_REGULATOR_QPNP=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_DEV=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+# CONFIG_MSM_CAMERA is not set
+CONFIG_OV8825=y
+CONFIG_MSM_CAMERA_SENSOR=y
+CONFIG_MSM_CPP=y
+CONFIG_MSM_CCI=y
+CONFIG_MSM_CSI30_HEADER=y
+CONFIG_MSM_CSIPHY=y
+CONFIG_MSM_CSID=y
+CONFIG_MSM_ISPIF=y
+CONFIG_MSMB_CAMERA=y
+CONFIG_OV9724=y
+CONFIG_MSMB_JPEG=y
+CONFIG_SWITCH=y
+CONFIG_MSM_WFD=y
+CONFIG_MSM_VIDC_V4L2=y
+CONFIG_VIDEOBUF2_MSM_MEM=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_RADIO_IRIS=y
+CONFIG_RADIO_IRIS_TRANSPORT=m
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
CONFIG_FB_MSM_MDSS=y
CONFIG_FB_MSM_MDSS_WRITEBACK=y
-CONFIG_FB_MSM_MDSS_HDMI_PANEL=y
-CONFIG_FB_MSM_MDSS_HDMI_MHL_SII8334=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_GENERIC is not set
-CONFIG_USB=y
-CONFIG_USB_SUSPEND=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_MSM=y
-CONFIG_USB_EHCI_MSM_HSIC=y
-CONFIG_USB_ACM=y
-CONFIG_USB_STORAGE=y
-CONFIG_USB_STORAGE_DATAFAB=y
-CONFIG_USB_STORAGE_FREECOM=y
-CONFIG_USB_STORAGE_ISD200=y
-CONFIG_USB_STORAGE_USBAT=y
-CONFIG_USB_STORAGE_SDDR09=y
-CONFIG_USB_STORAGE_SDDR55=y
-CONFIG_USB_STORAGE_JUMPSHOT=y
-CONFIG_USB_STORAGE_ALAUDA=y
-CONFIG_USB_STORAGE_ONETOUCH=y
-CONFIG_USB_STORAGE_KARMA=y
-CONFIG_USB_STORAGE_CYPRESS_ATACB=y
-CONFIG_USB_STORAGE_ENE_UB6250=y
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_SOC=y
+CONFIG_SND_SOC_MSM8226=y
+CONFIG_SND_SOC_MSM8X10=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_DEBUG_FILES=y
+CONFIG_USB_GADGET_DEBUG_FS=y
+CONFIG_USB_CI13XXX_MSM=y
+CONFIG_USB_G_ANDROID=y
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_UNSAFE_RESUME=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_EMBEDDED_SDIO=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_TEST=m
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_MSM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_MSM_SPS_SUPPORT=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_BACKLIGHT=y
-CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
-CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_DRV_MSM is not set
CONFIG_RTC_DRV_QPNP=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
+CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
-CONFIG_MSM_SSBI=y
CONFIG_SPS=y
-CONFIG_SPS_SUPPORT_BAMDMA=y
+CONFIG_USB_BAM=y
CONFIG_SPS_SUPPORT_NDP_BAM=y
CONFIG_QPNP_PWM=y
CONFIG_QPNP_POWER_ON=y
-CONFIG_QPNP_CLKDIV=y
CONFIG_MSM_IOMMU=y
-CONFIG_IOMMU_PGTABLES_L2=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_TMC=y
+CONFIG_CORESIGHT_TPIU=y
+CONFIG_CORESIGHT_FUNNEL=y
+CONFIG_CORESIGHT_REPLICATOR=y
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_ETM=y
+CONFIG_CORESIGHT_EVENT=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
@@ -344,42 +347,23 @@
CONFIG_FUSE_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
-CONFIG_PSTORE=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
-CONFIG_LOCKUP_DETECTOR=y
-# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
-CONFIG_DEBUG_KMEMLEAK=y
-CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
-CONFIG_DEBUG_SPINLOCK=y
-CONFIG_DEBUG_MUTEXES=y
-CONFIG_DEBUG_ATOMIC_SLEEP=y
-CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_MEMORY_INIT=y
-CONFIG_DEBUG_LIST=y
-CONFIG_FAULT_INJECTION=y
-CONFIG_FAIL_PAGE_ALLOC=y
-CONFIG_FAULT_INJECTION_DEBUG_FS=y
-CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
-CONFIG_DEBUG_PAGEALLOC=y
-CONFIG_CPU_FREQ_SWITCH_PROFILER=y
+CONFIG_ENABLE_DEFAULT_TRACERS=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_USER=y
-CONFIG_DEBUG_LL=y
-CONFIG_EARLY_PRINTK=y
-CONFIG_PID_IN_CONTEXTIDR=y
CONFIG_KEYS=y
CONFIG_CRYPTO_MD4=y
-CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_TWOFISH=y
-CONFIG_CRYPTO_DEV_QCRYPTO=m
-CONFIG_CRYPTO_DEV_QCE=m
-CONFIG_CRYPTO_DEV_QCEDEV=m
+# CONFIG_CRYPTO_HW is not set
CONFIG_CRC_CCITT=y
+CONFIG_QPNP_VIBRATOR=y
+CONFIG_QSEECOM=y
\ No newline at end of file
diff --git a/arch/arm/configs/msm8610_defconfig b/arch/arm/configs/msm8610_defconfig
index 7e835f6..09b038c 100644
--- a/arch/arm/configs/msm8610_defconfig
+++ b/arch/arm/configs/msm8610_defconfig
@@ -59,7 +59,6 @@
CONFIG_MSM_WATCHDOG_V2=y
CONFIG_MSM_MEMORY_DUMP=y
CONFIG_MSM_DLOAD_MODE=y
-CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
CONFIG_MSM_ADSP_LOADER=m
CONFIG_MSM_OCMEM=y
CONFIG_MSM_OCMEM_LOCAL_POWER_CTRL=y
@@ -69,6 +68,7 @@
CONFIG_SENSORS_ADSP=y
CONFIG_MSM_RTB=y
CONFIG_MSM_RTB_SEPARATE_CPUS=y
+CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
@@ -214,6 +214,7 @@
CONFIG_INPUT_EVBUG=m
CONFIG_KEYBOARD_GPIO=y
CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_ATMEL_MXT=y
CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI4_DEV=y
CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE=y
@@ -278,7 +279,6 @@
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
CONFIG_FB=y
-CONFIG_FB_VIRTUAL=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
CONFIG_FB_MSM_MDSS=y
@@ -329,7 +329,6 @@
CONFIG_QPNP_PWM=y
CONFIG_QPNP_POWER_ON=y
CONFIG_MSM_IOMMU=y
-CONFIG_MSM_IOMMU_PMON=y
CONFIG_CORESIGHT=y
CONFIG_CORESIGHT_TMC=y
CONFIG_CORESIGHT_TPIU=y
@@ -378,3 +377,7 @@
CONFIG_CRC_CCITT=y
CONFIG_QPNP_VIBRATOR=y
CONFIG_QSEECOM=y
+CONFIG_CRYPTO_HW=y
+CONFIG_CRYPTO_DEV_QCRYPTO=m
+CONFIG_CRYPTO_DEV_QCE=y
+CONFIG_CRYPTO_DEV_QCEDEV=m
diff --git a/arch/arm/configs/msm8974-perf_defconfig b/arch/arm/configs/msm8974-perf_defconfig
index f67cb0d..dc84558 100644
--- a/arch/arm/configs/msm8974-perf_defconfig
+++ b/arch/arm/configs/msm8974-perf_defconfig
@@ -25,6 +25,7 @@
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
CONFIG_EMBEDDED=y
+# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
@@ -98,6 +99,7 @@
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
@@ -238,6 +240,7 @@
CONFIG_CMA=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_TSPP=m
CONFIG_HAPTIC_ISA1200=y
CONFIG_QSEECOM=y
CONFIG_QPNP_MISC=y
@@ -320,6 +323,7 @@
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_DVB_CORE=m
# CONFIG_MSM_CAMERA is not set
CONFIG_MT9M114=y
CONFIG_OV2720=y
@@ -337,6 +341,8 @@
CONFIG_MSMB_JPEG=y
CONFIG_MSM_VIDC_V4L2=y
CONFIG_MSM_WFD=y
+CONFIG_DVB_MPQ=m
+CONFIG_DVB_MPQ_DEMUX=m
CONFIG_VIDEOBUF2_MSM_MEM=y
CONFIG_USB_VIDEO_CLASS=y
CONFIG_V4L_PLATFORM_DRIVERS=y
@@ -345,6 +351,7 @@
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
+CONFIG_KGSL_PER_PROCESS_PAGE_TABLE=y
CONFIG_FB=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
@@ -386,6 +393,8 @@
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_QCOM_DIAG_BRIDGE=y
+CONFIG_USB_QCOM_KS_BRIDGE=y
CONFIG_USB_GADGET=y
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_CI13XXX_MSM=y
diff --git a/arch/arm/configs/msm8974_defconfig b/arch/arm/configs/msm8974_defconfig
index b5e67fd..fd8a639 100644
--- a/arch/arm/configs/msm8974_defconfig
+++ b/arch/arm/configs/msm8974_defconfig
@@ -85,6 +85,7 @@
CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
CONFIG_MSM_UARTDM_Core_v14=y
CONFIG_MSM_BOOT_STATS=y
+CONFIG_MSM_XPU_ERR_FATAL=y
CONFIG_STRICT_MEMORY_RWX=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
@@ -102,6 +103,7 @@
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
@@ -354,6 +356,7 @@
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
+CONFIG_KGSL_PER_PROCESS_PAGE_TABLE=y
CONFIG_FB=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
@@ -395,6 +398,8 @@
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_QCOM_DIAG_BRIDGE=y
+CONFIG_USB_QCOM_KS_BRIDGE=y
CONFIG_USB_GADGET=y
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_CI13XXX_MSM=y
diff --git a/arch/arm/configs/msm9625-perf_defconfig b/arch/arm/configs/msm9625-perf_defconfig
index 42acd99..1e0134a 100644
--- a/arch/arm/configs/msm9625-perf_defconfig
+++ b/arch/arm/configs/msm9625-perf_defconfig
@@ -25,6 +25,7 @@
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
CONFIG_EMBEDDED=y
+# CONFIG_SLUB_DEBUG is not set
CONFIG_PROFILING=y
CONFIG_OPROFILE=m
CONFIG_MODULES=y
@@ -230,9 +231,6 @@
CONFIG_REGULATOR_QPNP=y
CONFIG_ION=y
CONFIG_ION_MSM=y
-CONFIG_FB=y
-CONFIG_FB_MSM=y
-CONFIG_FB_MSM_QPIC_PANEL_DETECT=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
diff --git a/arch/arm/configs/msm9625_defconfig b/arch/arm/configs/msm9625_defconfig
index 041e89a..f7c3bff 100644
--- a/arch/arm/configs/msm9625_defconfig
+++ b/arch/arm/configs/msm9625_defconfig
@@ -231,9 +231,6 @@
CONFIG_REGULATOR_QPNP=y
CONFIG_ION=y
CONFIG_ION_MSM=y
-CONFIG_FB=y
-CONFIG_FB_MSM=y
-CONFIG_FB_MSM_QPIC_PANEL_DETECT=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index c1295c4..70d82ec 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -119,16 +119,16 @@
*/
extern void __iomem *__arm_ioremap_pfn_caller(unsigned long, unsigned long,
size_t, unsigned int, void *);
-extern void __iomem *__arm_ioremap_caller(unsigned long, size_t, unsigned int,
+extern void __iomem *__arm_ioremap_caller(phys_addr_t, size_t, unsigned int,
void *);
extern void __iomem *__arm_ioremap_pfn(unsigned long, unsigned long, size_t, unsigned int);
-extern void __iomem *__arm_ioremap(unsigned long, size_t, unsigned int);
-extern void __iomem *__arm_ioremap_exec(unsigned long, size_t, bool cached);
+extern void __iomem *__arm_ioremap(phys_addr_t, size_t, unsigned int);
+extern void __iomem *__arm_ioremap_exec(phys_addr_t, size_t, bool cached);
extern void __iounmap(volatile void __iomem *addr);
extern void __arm_iounmap(volatile void __iomem *addr);
-extern void __iomem * (*arch_ioremap_caller)(unsigned long, size_t,
+extern void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
unsigned int, void *);
extern void (*arch_iounmap)(volatile void __iomem *);
diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index 84560dc..7b6f42a 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -313,6 +313,7 @@
* We provide our own arch_get_unmapped_area to cope with VIPT caches.
*/
#define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
/*
* remap a physical page `pfn' of size `size' with page protection `prot'
diff --git a/arch/arm/include/asm/processor.h b/arch/arm/include/asm/processor.h
index 8e2bb29..07209d7 100644
--- a/arch/arm/include/asm/processor.h
+++ b/arch/arm/include/asm/processor.h
@@ -126,6 +126,8 @@
#endif
+#define HAVE_ARCH_PICK_MMAP_LAYOUT
+
#endif
#endif /* __ASM_ARM_PROCESSOR_H */
diff --git a/arch/arm/kernel/arch_timer.c b/arch/arm/kernel/arch_timer.c
index 366debb..ac4c7a3 100644
--- a/arch/arm/kernel/arch_timer.c
+++ b/arch/arm/kernel/arch_timer.c
@@ -23,6 +23,7 @@
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/export.h>
+#include <linux/slab.h>
#include <asm/cputype.h>
#include <asm/delay.h>
@@ -33,46 +34,13 @@
#include <asm/system_info.h>
static unsigned long arch_timer_rate;
+static int arch_timer_spi;
static int arch_timer_ppi;
static int arch_timer_ppi2;
-static int is_irq_percpu;
static struct clock_event_device __percpu **arch_timer_evt;
static void __iomem *timer_base;
-static u32 timer_reg_read_cp15(int reg);
-static void timer_reg_write_cp15(int reg, u32 val);
-static inline cycle_t counter_get_cntpct_cp15(void);
-static inline cycle_t counter_get_cntvct_cp15(void);
-
-static u32 timer_reg_read_mem(int reg);
-static void timer_reg_write_mem(int reg, u32 val);
-static inline cycle_t counter_get_cntpct_mem(void);
-static inline cycle_t counter_get_cntvct_mem(void);
-
-struct arch_timer_operations {
- void (*reg_write)(int, u32);
- u32 (*reg_read)(int);
- cycle_t (*get_cntpct)(void);
- cycle_t (*get_cntvct)(void);
-};
-
-static struct arch_timer_operations arch_timer_ops_cp15 = {
- .reg_read = &timer_reg_read_cp15,
- .reg_write = &timer_reg_write_cp15,
- .get_cntpct = &counter_get_cntpct_cp15,
- .get_cntvct = &counter_get_cntvct_cp15,
-};
-
-static struct arch_timer_operations arch_timer_ops_mem = {
- .reg_read = &timer_reg_read_mem,
- .reg_write = &timer_reg_write_mem,
- .get_cntpct = &counter_get_cntpct_mem,
- .get_cntvct = &counter_get_cntvct_mem,
-};
-
-static struct arch_timer_operations *arch_specific_timer = &arch_timer_ops_cp15;
-
static struct delay_timer arch_delay_timer;
/*
@@ -97,7 +65,7 @@
#define QTIMER_CNTP_TVAL_REG 0x028
#define QTIMER_CNTV_TVAL_REG 0x038
-static void timer_reg_write_mem(int reg, u32 val)
+static inline void timer_reg_write_mem(int reg, u32 val)
{
switch (reg) {
case ARCH_TIMER_REG_CTRL:
@@ -109,7 +77,7 @@
}
}
-static void timer_reg_write_cp15(int reg, u32 val)
+static inline void timer_reg_write_cp15(int reg, u32 val)
{
switch (reg) {
case ARCH_TIMER_REG_CTRL:
@@ -123,7 +91,15 @@
isb();
}
-static u32 timer_reg_read_mem(int reg)
+static inline void arch_timer_reg_write(int cp15, int reg, u32 val)
+{
+ if (cp15)
+ timer_reg_write_cp15(reg, val);
+ else
+ timer_reg_write_mem(reg, val);
+}
+
+static inline u32 timer_reg_read_mem(int reg)
{
u32 val;
@@ -144,7 +120,7 @@
return val;
}
-static u32 timer_reg_read_cp15(int reg)
+static inline u32 timer_reg_read_cp15(int reg)
{
u32 val;
@@ -165,17 +141,23 @@
return val;
}
-static irqreturn_t arch_timer_handler(int irq, void *dev_id)
+static inline u32 arch_timer_reg_read(int cp15, int reg)
{
- struct clock_event_device *evt;
+ if (cp15)
+ return timer_reg_read_cp15(reg);
+ else
+ return timer_reg_read_mem(reg);
+}
+
+static inline irqreturn_t arch_timer_handler(int cp15,
+ struct clock_event_device *evt)
+{
unsigned long ctrl;
- ctrl = arch_specific_timer->reg_read(ARCH_TIMER_REG_CTRL);
+ ctrl = arch_timer_reg_read(cp15, ARCH_TIMER_REG_CTRL);
if (ctrl & ARCH_TIMER_CTRL_IT_STAT) {
ctrl |= ARCH_TIMER_CTRL_IT_MASK;
- arch_specific_timer->reg_write(ARCH_TIMER_REG_CTRL,
- ctrl);
- evt = *__this_cpu_ptr(arch_timer_evt);
+ arch_timer_reg_write(cp15, ARCH_TIMER_REG_CTRL, ctrl);
evt->event_handler(evt);
return IRQ_HANDLED;
}
@@ -183,16 +165,18 @@
return IRQ_NONE;
}
-static void arch_timer_disable(void)
+static irqreturn_t arch_timer_handler_cp15(int irq, void *dev_id)
{
- unsigned long ctrl;
-
- ctrl = arch_specific_timer->reg_read(ARCH_TIMER_REG_CTRL);
- ctrl &= ~ARCH_TIMER_CTRL_ENABLE;
- arch_specific_timer->reg_write(ARCH_TIMER_REG_CTRL, ctrl);
+ struct clock_event_device *evt = *(struct clock_event_device **)dev_id;
+ return arch_timer_handler(1, evt);
}
-static void arch_timer_set_mode(enum clock_event_mode mode,
+static irqreturn_t arch_timer_handler_mem(int irq, void *dev_id)
+{
+ return arch_timer_handler(0, dev_id);
+}
+
+static inline void arch_timer_set_mode(int cp15, enum clock_event_mode mode,
struct clock_event_device *clk)
{
unsigned long ctrl;
@@ -200,46 +184,72 @@
switch (mode) {
case CLOCK_EVT_MODE_UNUSED:
case CLOCK_EVT_MODE_SHUTDOWN:
- arch_timer_disable();
+ ctrl = arch_timer_reg_read(cp15, ARCH_TIMER_REG_CTRL);
+ ctrl &= ~ARCH_TIMER_CTRL_ENABLE;
+ arch_timer_reg_write(cp15, ARCH_TIMER_REG_CTRL, ctrl);
break;
case CLOCK_EVT_MODE_ONESHOT:
- ctrl = arch_specific_timer->reg_read(ARCH_TIMER_REG_CTRL);
+ ctrl = arch_timer_reg_read(cp15, ARCH_TIMER_REG_CTRL);
ctrl |= ARCH_TIMER_CTRL_ENABLE;
- arch_specific_timer->reg_write(ARCH_TIMER_REG_CTRL, ctrl);
+ arch_timer_reg_write(cp15, ARCH_TIMER_REG_CTRL, ctrl);
default:
break;
}
}
-static int arch_timer_set_next_event(unsigned long evt,
+static void arch_timer_set_mode_cp15(enum clock_event_mode mode,
+ struct clock_event_device *clk)
+{
+ arch_timer_set_mode(1, mode, clk);
+}
+
+static void arch_timer_set_mode_mem(enum clock_event_mode mode,
+ struct clock_event_device *clk)
+{
+ arch_timer_set_mode(0, mode, clk);
+}
+
+static int arch_timer_set_next_event(int cp15, unsigned long evt,
struct clock_event_device *unused)
{
unsigned long ctrl;
- ctrl = arch_specific_timer->reg_read(ARCH_TIMER_REG_CTRL);
+ ctrl = arch_timer_reg_read(cp15, ARCH_TIMER_REG_CTRL);
ctrl &= ~ARCH_TIMER_CTRL_IT_MASK;
- arch_specific_timer->reg_write(ARCH_TIMER_REG_CTRL, ctrl);
- arch_specific_timer->reg_write(ARCH_TIMER_REG_TVAL, evt);
+ arch_timer_reg_write(cp15, ARCH_TIMER_REG_CTRL, ctrl);
+ arch_timer_reg_write(cp15, ARCH_TIMER_REG_TVAL, evt);
return 0;
}
+static int arch_timer_set_next_event_cp15(unsigned long evt,
+ struct clock_event_device *unused)
+{
+ return arch_timer_set_next_event(1, evt, unused);
+}
+
+static int arch_timer_set_next_event_mem(unsigned long evt,
+ struct clock_event_device *unused)
+{
+ return arch_timer_set_next_event(0, evt, unused);
+}
+
static int __cpuinit arch_timer_setup(struct clock_event_device *clk)
{
/* setup clock event only once for CPU 0 */
if (!smp_processor_id() && clk->irq == arch_timer_ppi)
return 0;
- /* Be safe... */
- arch_timer_disable();
-
- clk->features = CLOCK_EVT_FEAT_ONESHOT;
+ clk->features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_C3STOP;
clk->name = "arch_sys_timer";
clk->rating = 450;
- clk->set_mode = arch_timer_set_mode;
- clk->set_next_event = arch_timer_set_next_event;
+ clk->set_mode = arch_timer_set_mode_cp15;
+ clk->set_next_event = arch_timer_set_next_event_cp15;
clk->irq = arch_timer_ppi;
+ /* Be safe... */
+ clk->set_mode(CLOCK_EVT_MODE_SHUTDOWN, clk);
+
clockevents_config_and_register(clk, arch_timer_rate,
0xf, 0x7fffffff);
@@ -264,8 +274,8 @@
unsigned long freq;
if (arch_timer_rate == 0) {
- arch_specific_timer->reg_write(ARCH_TIMER_REG_CTRL, 0);
- freq = arch_specific_timer->reg_read(ARCH_TIMER_REG_FREQ);
+ arch_timer_reg_write(1, ARCH_TIMER_REG_CTRL, 0);
+ freq = arch_timer_reg_read(1, ARCH_TIMER_REG_FREQ);
/* Check the timer frequency. */
if (freq == 0) {
@@ -323,9 +333,12 @@
return ((cycle_t) cvalh << 32) | cvall;
}
+static cycle_t (*get_cntpct_func)(void) = counter_get_cntpct_cp15;
+static cycle_t (*get_cntvct_func)(void) = counter_get_cntvct_cp15;
+
cycle_t arch_counter_get_cntpct(void)
{
- return arch_specific_timer->get_cntpct();
+ return get_cntpct_func();
}
EXPORT_SYMBOL(arch_counter_get_cntpct);
@@ -351,7 +364,7 @@
{
cycle_t cntvct;
- cntvct = arch_specific_timer->get_cntvct();
+ cntvct = get_cntvct_func();
/*
* The sched_clock infrastructure only knows about counters
@@ -373,7 +386,7 @@
disable_percpu_irq(clk->irq);
if (arch_timer_ppi2)
disable_percpu_irq(arch_timer_ppi2);
- arch_timer_set_mode(CLOCK_EVT_MODE_UNUSED, clk);
+ clk->set_mode(CLOCK_EVT_MODE_UNUSED, clk);
}
static struct local_timer_ops arch_timer_ops __cpuinitdata = {
@@ -383,13 +396,23 @@
static struct clock_event_device arch_timer_global_evt;
+static void __init arch_timer_counter_init(void)
+{
+ clocksource_register_hz(&clocksource_counter, arch_timer_rate);
+
+ setup_sched_clock(arch_timer_update_sched_clock, 32, arch_timer_rate);
+
+ /* Use the architected timer for the delay loop. */
+ arch_delay_timer.read_current_timer = &arch_timer_read_current_timer;
+ arch_delay_timer.freq = arch_timer_rate;
+ register_current_timer_delay(&arch_delay_timer);
+}
+
static int __init arch_timer_common_register(void)
{
int err;
- if (timer_base)
- arch_specific_timer = &arch_timer_ops_mem;
- else if (!local_timer_is_architected())
+ if (!local_timer_is_architected())
return -ENXIO;
err = arch_timer_available();
@@ -400,16 +423,8 @@
if (!arch_timer_evt)
return -ENOMEM;
- clocksource_register_hz(&clocksource_counter, arch_timer_rate);
-
- setup_sched_clock(arch_timer_update_sched_clock, 32, arch_timer_rate);
-
- if (is_irq_percpu)
- err = request_percpu_irq(arch_timer_ppi, arch_timer_handler,
- "arch_timer", arch_timer_evt);
- else
- err = request_irq(arch_timer_ppi, arch_timer_handler, 0,
- "arch_timer", arch_timer_evt);
+ err = request_percpu_irq(arch_timer_ppi, arch_timer_handler_cp15,
+ "arch_timer", arch_timer_evt);
if (err) {
pr_err("arch_timer: can't register interrupt %d (%d)\n",
arch_timer_ppi, err);
@@ -417,13 +432,9 @@
}
if (arch_timer_ppi2) {
- if (is_irq_percpu)
- err = request_percpu_irq(arch_timer_ppi2,
- arch_timer_handler, "arch_timer",
- arch_timer_evt);
- else
- err = request_irq(arch_timer_ppi2, arch_timer_handler,
- 0, "arch_timer", arch_timer_evt);
+ err = request_percpu_irq(arch_timer_ppi2,
+ arch_timer_handler_cp15,
+ "arch_timer", arch_timer_evt);
if (err) {
pr_err("arch_timer: can't register interrupt %d (%d)\n",
arch_timer_ppi2, err);
@@ -447,10 +458,6 @@
if (err)
goto out_free_irq;
- /* Use the architected timer for the delay loop. */
- arch_delay_timer.read_current_timer = &arch_timer_read_current_timer;
- arch_delay_timer.freq = arch_timer_rate;
- register_current_timer_delay(&arch_delay_timer);
return 0;
out_free_irq:
@@ -464,6 +471,34 @@
return err;
}
+static int __init arch_timer_mem_register(void)
+{
+ int err;
+ struct clock_event_device *clk;
+
+ clk = kzalloc(sizeof(*clk), GFP_KERNEL);
+ if (!clk)
+ return -ENOMEM;
+
+ clk->features = CLOCK_EVT_FEAT_ONESHOT;
+ clk->name = "arch_mem_timer";
+ clk->rating = 400;
+ clk->set_mode = arch_timer_set_mode_mem;
+ clk->set_next_event = arch_timer_set_next_event_mem;
+ clk->irq = arch_timer_spi;
+ clk->cpumask = cpu_all_mask;
+
+ clk->set_mode(CLOCK_EVT_MODE_SHUTDOWN, clk);
+
+ clockevents_config_and_register(clk, arch_timer_rate,
+ 0xf, 0x7fffffff);
+
+ err = request_irq(arch_timer_spi, arch_timer_handler_mem, 0,
+ "arch_timer", clk);
+
+ return err;
+}
+
int __init arch_timer_register(struct arch_timer *at)
{
if (at->res[0].start <= 0 || !(at->res[0].flags & IORESOURCE_IRQ))
@@ -493,48 +528,86 @@
{},
};
+static const struct of_device_id arch_timer_mem_of_match[] __initconst = {
+ { .compatible = "arm,armv7-timer-mem", },
+ {},
+};
+
int __init arch_timer_of_register(void)
{
- struct device_node *np;
+ struct device_node *np, *frame;
u32 freq;
int ret;
+ int has_cp15 = false, has_mem = false;
np = of_find_matching_node(NULL, arch_timer_of_match);
- if (!np) {
- pr_err("arch_timer: can't find DT node\n");
- return -ENODEV;
+ if (np) {
+ has_cp15 = true;
+ /*
+ * Try to determine the frequency from the device tree
+ */
+ if (!of_property_read_u32(np, "clock-frequency", &freq))
+ arch_timer_rate = freq;
+
+ ret = irq_of_parse_and_map(np, 0);
+ if (ret <= 0) {
+ pr_err("arch_timer: interrupt not specified in timer node\n");
+ return -ENODEV;
+ }
+ arch_timer_ppi = ret;
+ ret = irq_of_parse_and_map(np, 1);
+ if (ret > 0)
+ arch_timer_ppi2 = ret;
+
+ ret = arch_timer_common_register();
+ if (ret)
+ return ret;
}
- /* Try to determine the frequency from the device tree or CNTFRQ */
- if (!of_property_read_u32(np, "clock-frequency", &freq))
- arch_timer_rate = freq;
+ np = of_find_matching_node(NULL, arch_timer_mem_of_match);
+ if (np) {
+ has_mem = true;
- ret = irq_of_parse_and_map(np, 0);
- if (ret <= 0) {
- pr_err("arch_timer: interrupt not specified in timer node\n");
- return -ENODEV;
- }
+ if (!has_cp15) {
+ get_cntpct_func = counter_get_cntpct_mem;
+ get_cntvct_func = counter_get_cntvct_mem;
+ }
+ /*
+ * Try to determine the frequency from the device tree
+ */
+ if (!of_property_read_u32(np, "clock-frequency", &freq))
+ arch_timer_rate = freq;
- if (of_get_address(np, 0, NULL, NULL)) {
- timer_base = of_iomap(np, 0);
+ frame = of_get_next_child(np, NULL);
+ if (!frame) {
+ pr_err("arch_timer: no child frame\n");
+ return -EINVAL;
+ }
+
+ timer_base = of_iomap(frame, 0);
if (!timer_base) {
pr_err("arch_timer: cant map timer base\n");
return -ENOMEM;
}
+
+ arch_timer_spi = irq_of_parse_and_map(frame, 0);
+ if (!arch_timer_spi) {
+ pr_err("arch_timer: no physical timer irq\n");
+ return -EINVAL;
+ }
+
+ ret = arch_timer_mem_register();
+ if (ret)
+ return ret;
}
- if (of_get_property(np, "irq-is-not-percpu", NULL))
- is_irq_percpu = 0;
- else
- is_irq_percpu = 1;
+ if (!has_cp15 && !has_mem) {
+ pr_err("arch_timer: can't find DT node\n");
+ return -ENODEV;
+ }
- arch_timer_ppi = ret;
- ret = irq_of_parse_and_map(np, 1);
- if (ret > 0)
- arch_timer_ppi2 = ret;
- pr_info("arch_timer: found %s irqs %d %d\n",
- np->name, arch_timer_ppi, arch_timer_ppi2);
+ arch_timer_counter_init();
- return arch_timer_common_register();
+ return 0;
}
#endif
diff --git a/arch/arm/mach-msm/Kconfig b/arch/arm/mach-msm/Kconfig
index c510889..0eecffd 100644
--- a/arch/arm/mach-msm/Kconfig
+++ b/arch/arm/mach-msm/Kconfig
@@ -284,9 +284,10 @@
select MSM_ULTRASOUND_B
select MSM_LPM_TEST
select MSM_RPM_LOG
+ select ARCH_WANT_KMAP_ATOMIC_FLUSH
-config ARCH_MSMZINC
- bool "MSMZINC"
+config ARCH_APQ8084
+ bool "APQ8084"
select ARCH_MSM_KRAITMP
select GPIO_MSM_V3
select ARM_GIC
@@ -302,6 +303,7 @@
select REGULATOR
select ARM_HAS_SG_CHAIN
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
+ select ARCH_WANT_KMAP_ATOMIC_FLUSH
config ARCH_MPQ8092
bool "MPQ8092"
@@ -416,6 +418,7 @@
select MAY_HAVE_SPARSE_IRQ
select SPARSE_IRQ
select MULTI_IRQ_HANDLER
+ select ARM_TICKET_LOCKS
select GPIO_MSM_V3
select MSM_GPIOMUX
select MSM_NATIVE_RESTART
@@ -435,6 +438,7 @@
select CPU_FREQ
select CPU_FREQ_GOV_USERSPACE
select CPU_FREQ_GOV_ONDEMAND
+ select CPU_FREQ_GOV_POWERSAVE
select MSM_PIL
select MSM_RUN_QUEUE_STATS
select ARM_HAS_SG_CHAIN
@@ -456,6 +460,7 @@
select MAY_HAVE_SPARSE_IRQ
select SPARSE_IRQ
select MULTI_IRQ_HANDLER
+ select ARM_TICKET_LOCKS
select GPIO_MSM_V3
select MSM_GPIOMUX
select MSM_NATIVE_RESTART
@@ -485,6 +490,7 @@
select MSM_CPR_REGULATOR
select MSM_RPM_LOG
select MSM_RPM_STATS_LOG
+ select ARCH_WANT_KMAP_ATOMIC_FLUSH
endmenu
choice
@@ -1083,13 +1089,13 @@
default "0x80200000" if ARCH_MSM8960
default "0x80200000" if ARCH_MSM8930
default "0x00000000" if ARCH_MSM8974
- default "0x00000000" if ARCH_MSMZINC
+ default "0x00000000" if ARCH_APQ8084
default "0x00000000" if ARCH_MPQ8092
default "0x00000000" if ARCH_MSM8226
default "0x00000000" if ARCH_MSM8610
default "0x10000000" if ARCH_FSM9XXX
default "0x00200000" if ARCH_MSM9625
- default "0x00200000" if ARCH_MSMKRYPTON
+ default "0x00000000" if ARCH_MSMKRYPTON
default "0x00200000" if !MSM_STACKED_MEMORY
default "0x00000000" if ARCH_QSD8X50 && MSM_SOC_REV_A
default "0x20000000" if ARCH_QSD8X50
@@ -1109,14 +1115,14 @@
config KERNEL_PMEM_SMI_REGION
bool "Enable in-kernel PMEM region for SMI"
default y if ARCH_MSM8X60
- depends on ANDROID_PMEM && ((ARCH_QSD8X50 && !PMEM_GPU0) || (ARCH_MSM8X60 && !VCM))
+ depends on (ARCH_QSD8X50 && !PMEM_GPU0) || (ARCH_MSM8X60 && !VCM)
help
Enable the in-kernel PMEM allocator to use SMI memory.
config PMEM_GPU0
bool "Enable PMEM GPU0 region"
default y
- depends on ARCH_QSD8X50 && ANDROID_PMEM
+ depends on ARCH_QSD8X50
help
Enable the PMEM GPU0 device on SMI Memory.
@@ -1236,13 +1242,21 @@
Say Y here if you want the debug print routines to direct
their output to the serial port on MPQ8092 devices.
- config DEBUG_MSMZINC_UART
- bool "Kernel low-level debugging messages via MSMZINC UART"
- depends on ARCH_MSMZINC
+ config DEBUG_APQ8084_UART
+ bool "Kernel low-level debugging messages via APQ8084 UART"
+ depends on ARCH_APQ8084
select MSM_HAS_DEBUG_UART_HS_V14
help
Say Y here if you want the debug print routines to direct
- their output to the serial port on MSMZINC devices.
+ their output to the serial port on APQ8084 devices.
+
+ config DEBUG_MSM9625_UART
+ bool "Kernel low-level debugging messages via MSM9625 UART"
+ depends on ARCH_MSM9625
+ select MSM_HAS_DEBUG_UART_HS_V14
+ help
+ Say Y here if you want the debug print routines to direct
+ their output to the serial port on MSM9625 devices.
endchoice
choice
@@ -1844,7 +1858,6 @@
config MSM_HW3D
tristate "MSM Hardware 3D Register Driver"
- depends on ANDROID_PMEM
help
Provides access to registers needed by the userspace OpenGL|ES
library.
@@ -1852,7 +1865,6 @@
config MSM_ADSP
depends on (ARCH_MSM7X01A || ARCH_MSM7X25 || ARCH_MSM7X27)
tristate "MSM ADSP driver"
- depends on ANDROID_PMEM
default y
help
Provides access to registers needed by the userspace aDSP library.
@@ -1878,7 +1890,7 @@
config MSM7KV2_AUDIO
bool "MSM7K v2 audio"
- depends on (ARCH_MSM7X30 && ANDROID_PMEM)
+ depends on ARCH_MSM7X30
default y
help
Enables QDSP5V2-based audio drivers for audio playbacks and
@@ -1894,14 +1906,14 @@
config MSM_QDSP6
tristate "QDSP6 support"
- depends on ARCH_QSD8X50 && ANDROID_PMEM
+ depends on ARCH_QSD8X50
default y
help
Enable support for qdsp6. This provides audio and video functionality.
config MSM8X60_AUDIO
tristate "MSM8X60 audio support"
- depends on ARCH_MSM8X60 && ANDROID_PMEM
+ depends on ARCH_MSM8X60
default y
help
Enable support for qdsp6v2. This provides audio functionality.
@@ -1960,7 +1972,7 @@
config QSD_AUDIO
bool "QSD audio"
- depends on ARCH_MSM_SCORPION && MSM_DALRPC && ANDROID_PMEM && !MSM_SMP
+ depends on ARCH_MSM_SCORPION && MSM_DALRPC && !MSM_SMP
default y
help
Provides PCM, MP3, and AAC audio playback.
diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index 7c78395..8efc000 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -4,10 +4,9 @@
ifndef CONFIG_ARM_ARCH_TIMER
obj-y += timer.o
endif
-obj-y += clock.o clock-voter.o clock-dummy.o
+obj-y += clock.o clock-voter.o clock-dummy.o clock-generic.o
obj-y += modem_notifier.o
obj-$(CONFIG_USE_OF) += board-dt.o
-obj-$(CONFIG_CPU_FREQ_MSM) += cpufreq.o
obj-$(CONFIG_DEBUG_FS) += nohlt.o clock-debug.o
obj-$(CONFIG_KEXEC) += msm_kexec.o
@@ -120,7 +119,7 @@
ifndef CONFIG_ARCH_MSM9625
ifndef CONFIG_ARCH_MPQ8092
ifndef CONFIG_ARCH_MSM8610
-ifndef CONFIG_ARCH_MSMZINC
+ifndef CONFIG_ARCH_APQ8084
ifndef CONFIG_ARCH_MSMKRYPTON
obj-y += nand_partitions.o
endif
@@ -297,7 +296,7 @@
obj-$(CONFIG_MACH_MPQ8064_DTV) += board-8064-all.o board-8064-regulator.o
obj-$(CONFIG_ARCH_MSM9615) += board-9615.o devices-9615.o board-9615-regulator.o board-9615-gpiomux.o board-9615-storage.o board-9615-display.o
obj-$(CONFIG_ARCH_MSM9615) += clock-local.o clock-9615.o acpuclock-9615.o clock-rpm.o clock-pll.o
-obj-$(CONFIG_ARCH_MSMZINC) += board-zinc.o board-zinc-gpiomux.o
+obj-$(CONFIG_ARCH_APQ8084) += board-8084.o board-8084-gpiomux.o
obj-$(CONFIG_ARCH_MSM8974) += board-8974.o board-8974-gpiomux.o
obj-$(CONFIG_ARCH_MSM8974) += acpuclock-8974.o
obj-$(CONFIG_ARCH_MSM8974) += clock-local2.o clock-pll.o clock-8974.o clock-rpm.o clock-voter.o clock-mdss-8974.o
@@ -316,6 +315,7 @@
obj-$(CONFIG_ARCH_MSM8226) += acpuclock-8226.o acpuclock-cortex.o
obj-$(CONFIG_ARCH_MSM8610) += board-8610.o board-8610-gpiomux.o
obj-$(CONFIG_ARCH_MSM8610) += clock-local2.o clock-pll.o clock-8610.o clock-rpm.o clock-voter.o
+obj-$(CONFIG_ARCH_MSM8610) += clock-dsi-8610.o
obj-$(CONFIG_MACH_SAPPHIRE) += board-sapphire.o board-sapphire-gpio.o
obj-$(CONFIG_MACH_SAPPHIRE) += board-sapphire-keypad.o board-sapphire-panel.o
@@ -375,7 +375,7 @@
obj-$(CONFIG_ARCH_MPQ8092) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_ARCH_MSM8226) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_ARCH_MSM8610) += gpiomux-v2.o gpiomux.o
-obj-$(CONFIG_ARCH_MSMZINC) += gpiomux-v2.o gpiomux.o
+obj-$(CONFIG_ARCH_APQ8084) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_MSM_SLEEP_STATS_DEVICE) += idle_stats_device.o
obj-$(CONFIG_MSM_DCVS) += msm_dcvs_scm.o msm_dcvs.o msm_mpdecision.o
@@ -428,3 +428,4 @@
obj-$(CONFIG_ARCH_MSM8974) += msm_mpmctr.o
obj-$(CONFIG_MSM_CPR_REGULATOR) += cpr-regulator.o
+obj-$(CONFIG_CPU_FREQ_MSM) += cpufreq.o
diff --git a/arch/arm/mach-msm/Makefile.boot b/arch/arm/mach-msm/Makefile.boot
index f20f6ae..d57709d 100644
--- a/arch/arm/mach-msm/Makefile.boot
+++ b/arch/arm/mach-msm/Makefile.boot
@@ -57,13 +57,14 @@
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-fluid.dtb
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-liquid.dtb
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-mtp.dtb
+ dtb-$(CONFIG_ARCH_MSM8974) += apq8074-v2-liquid.dtb
-# MSMZINC
- zreladdr-$(CONFIG_ARCH_MSMZINC) := 0x00008000
- dtb-$(CONFIG_ARCH_MSMZINC) += msmzinc-sim.dtb
+# APQ8084
+ zreladdr-$(CONFIG_ARCH_APQ8084) := 0x00008000
+ dtb-$(CONFIG_ARCH_APQ8084) += apq8084-sim.dtb
# MSMKRYPTON
- zreladdr-$(CONFIG_ARCH_MSMKRYPTON) := 0x00208000
+ zreladdr-$(CONFIG_ARCH_MSMKRYPTON) := 0x00008000
dtb-$(CONFIG_ARCH_MSMKRYPTON) += msmkrypton-sim.dtb
# MSM9615
@@ -76,8 +77,8 @@
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v1-rumi.dtb
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-cdp.dtb
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-mtp.dtb
- dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-1-mtp.dtb
- dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-1-cdp.dtb
+ dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2.1-mtp.dtb
+ dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2.1-cdp.dtb
# MSM8226
zreladdr-$(CONFIG_ARCH_MSM8226) := 0x00008000
diff --git a/arch/arm/mach-msm/acpuclock-8064.c b/arch/arm/mach-msm/acpuclock-8064.c
index a0727b7..8262946 100644
--- a/arch/arm/mach-msm/acpuclock-8064.c
+++ b/arch/arm/mach-msm/acpuclock-8064.c
@@ -131,7 +131,6 @@
[12] = { { 1026000, HFPLL, 1, 0x26 }, 1150000, 1150000, 5 },
[13] = { { 1080000, HFPLL, 1, 0x28 }, 1150000, 1150000, 5 },
[14] = { { 1134000, HFPLL, 1, 0x2A }, 1150000, 1150000, 5 },
- [15] = { { 1188000, HFPLL, 1, 0x2C }, 1150000, 1150000, 5 },
{ }
};
@@ -149,15 +148,15 @@
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1100000 },
{ 0, { 972000, HFPLL, 1, 0x24 }, L2(5), 1125000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1125000 },
- { 0, { 1080000, HFPLL, 1, 0x28 }, L2(15), 1175000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1175000 },
- { 0, { 1188000, HFPLL, 1, 0x2C }, L2(15), 1200000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1200000 },
- { 0, { 1296000, HFPLL, 1, 0x30 }, L2(15), 1225000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1225000 },
- { 0, { 1404000, HFPLL, 1, 0x34 }, L2(15), 1237500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1237500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1250000 },
+ { 0, { 1080000, HFPLL, 1, 0x28 }, L2(14), 1175000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1175000 },
+ { 0, { 1188000, HFPLL, 1, 0x2C }, L2(14), 1200000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1200000 },
+ { 0, { 1296000, HFPLL, 1, 0x30 }, L2(14), 1225000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1225000 },
+ { 0, { 1404000, HFPLL, 1, 0x34 }, L2(14), 1237500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1237500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1250000 },
{ 0, { 0 } }
};
@@ -175,15 +174,15 @@
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1050000 },
{ 0, { 972000, HFPLL, 1, 0x24 }, L2(5), 1075000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1075000 },
- { 0, { 1080000, HFPLL, 1, 0x28 }, L2(15), 1125000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1125000 },
- { 0, { 1188000, HFPLL, 1, 0x2C }, L2(15), 1150000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1150000 },
- { 0, { 1296000, HFPLL, 1, 0x30 }, L2(15), 1175000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1175000 },
- { 0, { 1404000, HFPLL, 1, 0x34 }, L2(15), 1187500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1187500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1200000 },
+ { 0, { 1080000, HFPLL, 1, 0x28 }, L2(14), 1125000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1125000 },
+ { 0, { 1188000, HFPLL, 1, 0x2C }, L2(14), 1150000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1150000 },
+ { 0, { 1296000, HFPLL, 1, 0x30 }, L2(14), 1175000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1175000 },
+ { 0, { 1404000, HFPLL, 1, 0x34 }, L2(14), 1187500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1187500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1200000 },
{ 0, { 0 } }
};
@@ -201,15 +200,15 @@
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1000000 },
{ 0, { 972000, HFPLL, 1, 0x24 }, L2(5), 1025000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1025000 },
- { 0, { 1080000, HFPLL, 1, 0x28 }, L2(15), 1075000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1075000 },
- { 0, { 1188000, HFPLL, 1, 0x2C }, L2(15), 1100000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1100000 },
- { 0, { 1296000, HFPLL, 1, 0x30 }, L2(15), 1125000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1125000 },
- { 0, { 1404000, HFPLL, 1, 0x34 }, L2(15), 1137500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1137500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1150000 },
+ { 0, { 1080000, HFPLL, 1, 0x28 }, L2(14), 1075000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1075000 },
+ { 0, { 1188000, HFPLL, 1, 0x2C }, L2(14), 1100000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1100000 },
+ { 0, { 1296000, HFPLL, 1, 0x30 }, L2(14), 1125000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1125000 },
+ { 0, { 1404000, HFPLL, 1, 0x34 }, L2(14), 1137500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1137500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1150000 },
{ 0, { 0 } }
};
@@ -227,15 +226,15 @@
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 975000 },
{ 0, { 972000, HFPLL, 1, 0x24 }, L2(5), 1000000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1000000 },
- { 0, { 1080000, HFPLL, 1, 0x28 }, L2(15), 1050000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1050000 },
- { 0, { 1188000, HFPLL, 1, 0x2C }, L2(15), 1075000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1075000 },
- { 0, { 1296000, HFPLL, 1, 0x30 }, L2(15), 1100000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1100000 },
- { 0, { 1404000, HFPLL, 1, 0x34 }, L2(15), 1112500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1112500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1125000 },
+ { 0, { 1080000, HFPLL, 1, 0x28 }, L2(14), 1050000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1050000 },
+ { 0, { 1188000, HFPLL, 1, 0x2C }, L2(14), 1075000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1075000 },
+ { 0, { 1296000, HFPLL, 1, 0x30 }, L2(14), 1100000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1100000 },
+ { 0, { 1404000, HFPLL, 1, 0x34 }, L2(14), 1112500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1112500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1125000 },
{ 0, { 0 } }
};
@@ -247,11 +246,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 1000000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1025000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1037500 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1075000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1087500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1125000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1150000 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1162500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1075000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1087500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1125000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1150000 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1162500 },
{ 0, { 0 } }
};
@@ -263,11 +262,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 975000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1000000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1012500 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1037500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1050000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1087500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1112500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1125000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1037500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1050000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1087500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1112500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1125000 },
{ 0, { 0 } }
};
@@ -279,11 +278,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 937500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 950000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 975000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1000000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1012500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1037500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1075000 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1087500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1000000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1012500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1037500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1075000 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1087500 },
{ 0, { 0 } }
};
@@ -295,11 +294,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 900000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 925000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 950000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 975000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 987500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1000000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1037500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1050000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 975000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 987500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1000000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1037500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1050000 },
{ 0, { 0 } }
};
@@ -311,11 +310,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 950000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 962500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 975000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1000000 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1012500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 950000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 962500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 975000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1000000 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1012500 },
{ 0, { 0 } }
};
@@ -327,11 +326,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 987500 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 1000000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 987500 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 1000000 },
{ 0, { 0 } }
};
@@ -343,11 +342,11 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 975000 },
- { 1, { 1512000, HFPLL, 1, 0x38 }, L2(15), 987500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 975000 },
+ { 1, { 1512000, HFPLL, 1, 0x38 }, L2(14), 987500 },
{ 0, { 0 } }
};
@@ -359,13 +358,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 1000000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1025000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1037500 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1075000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1087500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1125000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1150000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1175000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1225000 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1250000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1075000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1087500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1125000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1150000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1175000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1225000 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1250000 },
{ 0, { 0 } }
};
@@ -377,13 +376,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 975000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 1000000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1012500 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1037500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1050000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1087500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1112500 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1150000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1187500 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1200000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1037500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1050000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1087500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1112500 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1150000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1187500 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1200000 },
{ 0, { 0 } }
};
@@ -395,13 +394,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 937500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 950000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 975000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1000000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1012500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1037500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1075000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1100000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1137500 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1162500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1000000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1012500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1037500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1075000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1100000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1137500 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1162500 },
{ 0, { 0 } }
};
@@ -413,13 +412,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 900000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 925000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 950000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 975000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 987500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1000000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1037500 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1062500 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1100000 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1125000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 975000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 987500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1000000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1037500 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1062500 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1100000 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1125000 },
{ 0, { 0 } }
};
@@ -431,13 +430,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 950000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 962500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 975000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1000000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1037500 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1075000 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1100000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 950000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 962500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 975000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1000000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1037500 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1075000 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1100000 },
{ 0, { 0 } }
};
@@ -449,13 +448,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 987500 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1012500 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1050000 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1075000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 987500 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1012500 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1050000 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1075000 },
{ 0, { 0 } }
};
@@ -467,13 +466,13 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 975000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1000000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1025000 },
- { 1, { 1728000, HFPLL, 1, 0x40 }, L2(15), 1050000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 975000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1000000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1025000 },
+ { 1, { 1728000, HFPLL, 1, 0x40 }, L2(14), 1050000 },
{ 0, { 0 } }
};
@@ -485,14 +484,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 962500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 975000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 1000000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1025000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1037500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1062500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1100000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1125000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1175000 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1225000 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1287500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1025000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1037500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1062500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1100000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1125000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1175000 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1225000 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1287500 },
{ 0, { 0 } }
};
@@ -504,14 +503,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 937500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 950000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 975000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 1000000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 1012500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1037500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1075000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1100000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1137500 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1187500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1250000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 1000000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 1012500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1037500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1075000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1100000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1137500 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1187500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1250000 },
{ 0, { 0 } }
};
@@ -523,14 +522,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 912500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 925000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 950000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 975000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 987500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1012500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1050000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1075000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1112500 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1162500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1212500 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 975000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 987500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1012500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1050000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1075000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1112500 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1162500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1212500 },
{ 0, { 0 } }
};
@@ -542,14 +541,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 900000 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 912500 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 937500 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 962500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 975000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 1000000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1025000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1050000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1087500 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1137500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1175000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 962500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 975000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 1000000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1025000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1050000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1087500 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1137500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1175000 },
{ 0, { 0 } }
};
@@ -561,14 +560,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 950000 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 962500 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 975000 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 1000000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1037500 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1075000 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1112500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1150000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 950000 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 962500 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 975000 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 1000000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1037500 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1075000 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1112500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1150000 },
{ 0, { 0 } }
};
@@ -580,14 +579,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 987500 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1012500 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1050000 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1087500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1125000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 987500 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1012500 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1050000 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1087500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1125000 },
{ 0, { 0 } }
};
@@ -599,14 +598,14 @@
{ 1, { 810000, HFPLL, 1, 0x1E }, L2(5), 887500 },
{ 1, { 918000, HFPLL, 1, 0x22 }, L2(5), 900000 },
{ 1, { 1026000, HFPLL, 1, 0x26 }, L2(5), 925000 },
- { 1, { 1134000, HFPLL, 1, 0x2A }, L2(15), 937500 },
- { 1, { 1242000, HFPLL, 1, 0x2E }, L2(15), 950000 },
- { 1, { 1350000, HFPLL, 1, 0x32 }, L2(15), 962500 },
- { 1, { 1458000, HFPLL, 1, 0x36 }, L2(15), 975000 },
- { 1, { 1566000, HFPLL, 1, 0x3A }, L2(15), 1000000 },
- { 1, { 1674000, HFPLL, 1, 0x3E }, L2(15), 1025000 },
- { 1, { 1782000, HFPLL, 1, 0x42 }, L2(15), 1062500 },
- { 1, { 1890000, HFPLL, 1, 0x46 }, L2(15), 1100000 },
+ { 1, { 1134000, HFPLL, 1, 0x2A }, L2(14), 937500 },
+ { 1, { 1242000, HFPLL, 1, 0x2E }, L2(14), 950000 },
+ { 1, { 1350000, HFPLL, 1, 0x32 }, L2(14), 962500 },
+ { 1, { 1458000, HFPLL, 1, 0x36 }, L2(14), 975000 },
+ { 1, { 1566000, HFPLL, 1, 0x3A }, L2(14), 1000000 },
+ { 1, { 1674000, HFPLL, 1, 0x3E }, L2(14), 1025000 },
+ { 1, { 1782000, HFPLL, 1, 0x42 }, L2(14), 1062500 },
+ { 1, { 1890000, HFPLL, 1, 0x46 }, L2(14), 1100000 },
{ 0, { 0 } }
};
diff --git a/arch/arm/mach-msm/acpuclock-8226.c b/arch/arm/mach-msm/acpuclock-8226.c
index 799d629..a6f772d 100644
--- a/arch/arm/mach-msm/acpuclock-8226.c
+++ b/arch/arm/mach-msm/acpuclock-8226.c
@@ -26,12 +26,13 @@
#include <mach/msm_bus.h>
#include <mach/msm_bus_board.h>
#include <mach/rpm-regulator-smd.h>
+#include <mach/socinfo.h>
#include "acpuclock-cortex.h"
#define RCG_CONFIG_UPDATE_BIT BIT(0)
-static struct msm_bus_paths bw_level_tbl[] = {
+static struct msm_bus_paths bw_level_tbl_8226[] = {
[0] = BW_MBPS(152), /* At least 19 MHz on bus. */
[1] = BW_MBPS(300), /* At least 37.5 MHz on bus. */
[2] = BW_MBPS(400), /* At least 50 MHz on bus. */
@@ -42,9 +43,18 @@
[7] = BW_MBPS(4264), /* At least 533 MHz on bus. */
};
+static struct msm_bus_paths bw_level_tbl_8610[] = {
+ [0] = BW_MBPS(152), /* At least 19 MHz on bus. */
+ [1] = BW_MBPS(300), /* At least 37.5 MHz on bus. */
+ [2] = BW_MBPS(400), /* At least 50 MHz on bus. */
+ [3] = BW_MBPS(800), /* At least 100 MHz on bus. */
+ [4] = BW_MBPS(1600), /* At least 200 MHz on bus. */
+ [5] = BW_MBPS(2128), /* At least 266 MHz on bus. */
+};
+
static struct msm_bus_scale_pdata bus_client_pdata = {
- .usecase = bw_level_tbl,
- .num_usecases = ARRAY_SIZE(bw_level_tbl),
+ .usecase = bw_level_tbl_8226,
+ .num_usecases = ARRAY_SIZE(bw_level_tbl_8226),
.active_only = 1,
.name = "acpuclock",
};
@@ -54,23 +64,32 @@
* 2) Update bus bandwidth
* 3) Depending on Frodo version, may need minimum of LVL_NOM
*/
-static struct clkctl_acpu_speed acpu_freq_tbl[] = {
- { 0, 19200, CXO, 0, 0, CPR_CORNER_SVS, 1150000, 0 },
- { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 1150000, 4 },
- { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 1150000, 4 },
- { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 1150000, 6 },
- { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 1150000, 7 },
- { 0, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 1150000, 7 },
- { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 1150000, 7 },
+static struct clkctl_acpu_speed acpu_freq_tbl_8226[] = {
+ { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 0, 4 },
+ { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 0, 4 },
+ { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 0, 6 },
+ { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 0, 7 },
+ { 1, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 7 },
+ { 1, 1094400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 7 },
+ { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 7 },
+ { 0 }
+};
+
+static struct clkctl_acpu_speed acpu_freq_tbl_8610[] = {
+ { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 0, 3 },
+ { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 0, 3 },
+ { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 0, 4 },
+ { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 0, 4 },
+ { 0, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 5 },
+ { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 5 },
{ 0 }
};
static struct acpuclk_drv_data drv_data = {
- .freq_tbl = acpu_freq_tbl,
+ .freq_tbl = acpu_freq_tbl_8226,
.current_speed = &(struct clkctl_acpu_speed){ 0 },
.bus_scale = &bus_client_pdata,
.vdd_max_cpu = CPR_CORNER_TURBO,
- .vdd_max_mem = 1150000,
.src_clocks = {
[PLL0].name = "gpll0",
[ACPUPLL].name = "a7sspll",
@@ -109,12 +128,6 @@
return PTR_ERR(drv_data.vdd_cpu);
}
- drv_data.vdd_mem = devm_regulator_get(&pdev->dev, "a7_mem");
- if (IS_ERR(drv_data.vdd_mem)) {
- dev_err(&pdev->dev, "regulator for %s get failed\n", "a7_mem");
- return PTR_ERR(drv_data.vdd_mem);
- }
-
for (i = 0; i < NUM_SRC; i++) {
if (!drv_data.src_clocks[i].name)
continue;
@@ -146,8 +159,18 @@
},
};
+void msm8610_acpu_init(void)
+{
+ drv_data.bus_scale->usecase = bw_level_tbl_8610;
+ drv_data.bus_scale->num_usecases = ARRAY_SIZE(bw_level_tbl_8610);
+ drv_data.freq_tbl = acpu_freq_tbl_8610;
+}
+
static int __init acpuclk_a7_init(void)
{
+ if (cpu_is_msm8610())
+ msm8610_acpu_init();
+
return platform_driver_probe(&acpuclk_a7_driver, acpuclk_a7_probe);
}
device_initcall(acpuclk_a7_init);
diff --git a/arch/arm/mach-msm/acpuclock-8960ab.c b/arch/arm/mach-msm/acpuclock-8960ab.c
index 38658a2..0fa2cde 100644
--- a/arch/arm/mach-msm/acpuclock-8960ab.c
+++ b/arch/arm/mach-msm/acpuclock-8960ab.c
@@ -109,12 +109,12 @@
static struct acpu_level freq_tbl_PVS0[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 950000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 950000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 975000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 1000000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 1025000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1050000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1075000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 975000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 1025000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1050000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1075000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1100000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1125000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1150000, AVS(0x0) },
@@ -127,12 +127,12 @@
static struct acpu_level freq_tbl_PVS1[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 925000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 925000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 950000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 975000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 1000000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1025000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1050000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 975000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1025000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1050000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1075000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1100000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1125000, AVS(0x0) },
@@ -145,12 +145,12 @@
static struct acpu_level freq_tbl_PVS2[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 900000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 900000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 925000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 950000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 975000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1000000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1025000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 950000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 975000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1025000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1050000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1075000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1100000, AVS(0x0) },
@@ -163,12 +163,12 @@
static struct acpu_level freq_tbl_PVS3[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 900000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 900000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 900000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 925000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 950000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 975000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1000000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 925000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 950000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 975000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1000000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1025000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1050000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1075000, AVS(0x0) },
@@ -181,12 +181,12 @@
static struct acpu_level freq_tbl_PVS4[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 875000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 875000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 900000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 925000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 950000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 975000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 900000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 925000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 975000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1000000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1025000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1050000, AVS(0x0) },
@@ -199,12 +199,12 @@
static struct acpu_level freq_tbl_PVS5[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 875000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 875000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 875000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 900000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 925000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 950000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 875000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 900000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 950000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 975000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1000000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1025000, AVS(0x0) },
@@ -217,12 +217,12 @@
static struct acpu_level freq_tbl_PVS6[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 850000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 850000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 850000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 850000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 875000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 900000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 925000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 850000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 850000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 850000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 875000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 925000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 950000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 975000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1000000, AVS(0x0) },
diff --git a/arch/arm/mach-msm/acpuclock-cortex.c b/arch/arm/mach-msm/acpuclock-cortex.c
index 74ca145..0c80a56 100644
--- a/arch/arm/mach-msm/acpuclock-cortex.c
+++ b/arch/arm/mach-msm/acpuclock-cortex.c
@@ -66,11 +66,17 @@
{
int rc = 0;
- /* Increase vdd_mem before vdd_cpu. vdd_mem should be >= vdd_cpu. */
- rc = regulator_set_voltage(priv->vdd_mem, vdd_mem, priv->vdd_max_mem);
- if (rc) {
- pr_err("vdd_mem increase failed (%d)\n", rc);
- return rc;
+ if (priv->vdd_mem) {
+ /*
+ * Increase vdd_mem before vdd_cpu. vdd_mem should
+ * be >= vdd_cpu.
+ */
+ rc = regulator_set_voltage(priv->vdd_mem, vdd_mem,
+ priv->vdd_max_mem);
+ if (rc) {
+ pr_err("vdd_mem increase failed (%d)\n", rc);
+ return rc;
+ }
}
rc = regulator_set_voltage(priv->vdd_cpu, vdd_cpu, priv->vdd_max_cpu);
@@ -92,6 +98,9 @@
return;
}
+ if (!priv->vdd_mem)
+ return;
+
/* Decrease vdd_mem after vdd_cpu. vdd_mem should be >= vdd_cpu. */
ret = regulator_set_voltage(priv->vdd_mem, vdd_mem, priv->vdd_max_mem);
if (ret)
@@ -129,12 +138,26 @@
pr_warn("acpu rcg didn't update its configuration\n");
}
-/*
- * This function can be called in both atomic and nonatomic context.
- * Since regulator APIS can sleep, we cannot always use the clk prepare
- * unprepare API.
- */
-static int set_speed(struct clkctl_acpu_speed *tgt_s, bool atomic)
+static int set_speed_atomic(struct clkctl_acpu_speed *tgt_s)
+{
+ struct clkctl_acpu_speed *strt_s = priv->current_speed;
+ struct clk *strt = priv->src_clocks[strt_s->src].clk;
+ struct clk *tgt = priv->src_clocks[tgt_s->src].clk;
+ int rc = 0;
+
+ WARN(strt_s->src == ACPUPLL && tgt_s->src == ACPUPLL,
+ "can't reprogram ACPUPLL during atomic context\n");
+ rc = clk_enable(tgt);
+ if (rc)
+ return rc;
+
+ select_clk_source_div(priv, tgt_s);
+ clk_disable(strt);
+
+ return rc;
+}
+
+static int set_speed(struct clkctl_acpu_speed *tgt_s)
{
int rc = 0;
unsigned int tgt_freq_hz = tgt_s->khz * 1000;
@@ -148,19 +171,13 @@
select_clk_source_div(priv, cxo_s);
/* Re-program acpu pll */
- if (atomic)
- clk_disable(tgt);
- else
- clk_disable_unprepare(tgt);
+ clk_disable_unprepare(tgt);
rc = clk_set_rate(tgt, tgt_freq_hz);
if (rc)
pr_err("Failed to set ACPU PLL to %u\n", tgt_freq_hz);
- if (atomic)
- BUG_ON(clk_enable(tgt));
- else
- BUG_ON(clk_prepare_enable(tgt));
+ BUG_ON(clk_prepare_enable(tgt));
/* Switch back to acpu pll */
select_clk_source_div(priv, tgt_s);
@@ -172,10 +189,7 @@
return rc;
}
- if (atomic)
- rc = clk_enable(tgt);
- else
- rc = clk_prepare_enable(tgt);
+ rc = clk_prepare_enable(tgt);
if (rc) {
pr_err("ACPU PLL enable failed\n");
@@ -184,16 +198,10 @@
select_clk_source_div(priv, tgt_s);
- if (atomic)
- clk_disable(strt);
- else
- clk_disable_unprepare(strt);
+ clk_disable_unprepare(strt);
} else {
- if (atomic)
- rc = clk_enable(tgt);
- else
- rc = clk_prepare_enable(tgt);
+ rc = clk_prepare_enable(tgt);
if (rc) {
pr_err("%s enable failed\n",
@@ -203,10 +211,7 @@
select_clk_source_div(priv, tgt_s);
- if (atomic)
- clk_disable(strt);
- else
- clk_disable_unprepare(strt);
+ clk_disable_unprepare(strt);
}
@@ -250,9 +255,9 @@
/* Switch CPU speed. Flag indicates atomic context */
if (reason == SETRATE_CPUFREQ || reason == SETRATE_INIT)
- rc = set_speed(tgt_s, false);
+ rc = set_speed(tgt_s);
else
- rc = set_speed(tgt_s, true);
+ rc = set_speed_atomic(tgt_s);
if (rc)
goto out;
@@ -347,10 +352,12 @@
if (rc)
goto err_vdd;
- rc = regulator_enable(priv->vdd_mem);
- if (rc) {
- dev_err(&pdev->dev, "regulator_enable for mem failed\n");
- goto err_vdd;
+ if (priv->vdd_mem) {
+ rc = regulator_enable(priv->vdd_mem);
+ if (rc) {
+ dev_err(&pdev->dev, "regulator_enable for mem failed\n");
+ goto err_vdd;
+ }
}
rc = regulator_enable(priv->vdd_cpu);
@@ -373,7 +380,8 @@
return 0;
err_vdd_cpu:
- regulator_disable(priv->vdd_mem);
+ if (priv->vdd_mem)
+ regulator_disable(priv->vdd_mem);
err_vdd:
return rc;
}
diff --git a/arch/arm/mach-msm/bam_dmux.c b/arch/arm/mach-msm/bam_dmux.c
index a0644e6..bed794b 100644
--- a/arch/arm/mach-msm/bam_dmux.c
+++ b/arch/arm/mach-msm/bam_dmux.c
@@ -913,8 +913,10 @@
if (!bam_is_connected) {
read_unlock(&ul_wakeup_lock);
ul_wakeup();
- if (unlikely(in_global_reset == 1))
+ if (unlikely(in_global_reset == 1)) {
+ kfree(hdr);
return -EFAULT;
+ }
read_lock(&ul_wakeup_lock);
notify_all(BAM_DMUX_UL_CONNECTED, (unsigned long)(NULL));
}
@@ -1384,12 +1386,11 @@
struct list_head *temp;
struct outside_notify_func *func;
+ BAM_DMUX_LOG("%s: event=%d, data=%lu\n", __func__, event, data);
+
for (i = 0; i < BAM_DMUX_NUM_CHANNELS; ++i) {
- if (bam_ch_is_open(i)) {
+ if (bam_ch_is_open(i))
bam_ch[i].notify(bam_ch[i].priv, event, data);
- BAM_DMUX_LOG("%s: cid=%d, event=%d, data=%lu\n",
- __func__, i, event, data);
- }
}
__list_for_each(temp, &bam_other_notify_funcs) {
@@ -1756,10 +1757,13 @@
/* in_ssr documentation/assumptions found in restart_notifier_cb */
if (!power_management_only_mode) {
if (likely(!in_ssr)) {
+ BAM_DMUX_LOG("%s: disconnect tx\n", __func__);
sps_disconnect(bam_tx_pipe);
+ BAM_DMUX_LOG("%s: disconnect rx\n", __func__);
sps_disconnect(bam_rx_pipe);
__memzero(rx_desc_mem_buf.base, rx_desc_mem_buf.size);
__memzero(tx_desc_mem_buf.base, tx_desc_mem_buf.size);
+ BAM_DMUX_LOG("%s: device reset\n", __func__);
sps_device_reset(a2_device_handle);
} else {
ssr_skipped_disconnect = 1;
diff --git a/arch/arm/mach-msm/board-8064-gpiomux.c b/arch/arm/mach-msm/board-8064-gpiomux.c
index 0f88287..ede53bc 100644
--- a/arch/arm/mach-msm/board-8064-gpiomux.c
+++ b/arch/arm/mach-msm/board-8064-gpiomux.c
@@ -968,7 +968,6 @@
.pull = GPIOMUX_PULL_DOWN,
};
-
static struct gpiomux_setting ap2mdm_soft_reset_cfg = {
.func = GPIOMUX_FUNC_GPIO,
.drv = GPIOMUX_DRV_4MA,
diff --git a/arch/arm/mach-msm/board-8064.c b/arch/arm/mach-msm/board-8064.c
index 0fe94d5..f969e31 100644
--- a/arch/arm/mach-msm/board-8064.c
+++ b/arch/arm/mach-msm/board-8064.c
@@ -437,7 +437,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
@@ -3585,6 +3585,18 @@
if (socinfo_get_pmic_model() == PMIC_MODEL_PM8917)
apq8064_pm8917_pdata_fixup();
platform_device_register(&msm_gpio_device);
+ if (cpu_is_apq8064ab())
+ apq8064ab_update_krait_spm();
+ if (cpu_is_krait_v3()) {
+ struct msm_pm_init_data_type *pdata =
+ msm8064_pm_8x60.dev.platform_data;
+ pdata->retention_calls_tz = false;
+ apq8064ab_update_retention_spm();
+ }
+ platform_device_register(&msm8064_pm_8x60);
+
+ msm_spm_init(msm_spm_data, ARRAY_SIZE(msm_spm_data));
+ msm_spm_l2_init(msm_spm_l2_data);
msm_tsens_early_init(&apq_tsens_pdata);
msm_thermal_init(&msm_thermal_pdata);
if (socinfo_init() < 0)
@@ -3699,18 +3711,6 @@
apq8064_init_dsps();
platform_device_register(&msm_8960_riva);
}
- if (cpu_is_apq8064ab())
- apq8064ab_update_krait_spm();
- if (cpu_is_krait_v3()) {
- struct msm_pm_init_data_type *pdata =
- msm8064_pm_8x60.dev.platform_data;
- pdata->retention_calls_tz = false;
- apq8064ab_update_retention_spm();
- }
- platform_device_register(&msm8064_pm_8x60);
-
- msm_spm_init(msm_spm_data, ARRAY_SIZE(msm_spm_data));
- msm_spm_l2_init(msm_spm_l2_data);
BUG_ON(msm_pm_boot_init(&msm_pm_boot_pdata));
apq8064_epm_adc_init();
}
diff --git a/arch/arm/mach-msm/board-zinc-gpiomux.c b/arch/arm/mach-msm/board-8084-gpiomux.c
similarity index 94%
rename from arch/arm/mach-msm/board-zinc-gpiomux.c
rename to arch/arm/mach-msm/board-8084-gpiomux.c
index ac4daa8..8d5bb49 100644
--- a/arch/arm/mach-msm/board-zinc-gpiomux.c
+++ b/arch/arm/mach-msm/board-8084-gpiomux.c
@@ -17,7 +17,7 @@
#include <mach/board.h>
#include <mach/gpiomux.h>
-void __init msmzinc_init_gpiomux(void)
+void __init apq8084_init_gpiomux(void)
{
int rc;
diff --git a/arch/arm/mach-msm/board-8084.c b/arch/arm/mach-msm/board-8084.c
new file mode 100644
index 0000000..0a13c56
--- /dev/null
+++ b/arch/arm/mach-msm/board-8084.c
@@ -0,0 +1,148 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/memory.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/map.h>
+#include <asm/mach/arch.h>
+#include <mach/board.h>
+#include <mach/gpiomux.h>
+#include <mach/msm_iomap.h>
+#include <mach/msm_memtypes.h>
+#include <mach/msm_smd.h>
+#include <mach/restart.h>
+#include <mach/socinfo.h>
+#include <mach/clk-provider.h>
+#include "board-dt.h"
+#include "clock.h"
+#include "devices.h"
+#include "platsmp.h"
+
+static struct memtype_reserve apq8084_reserve_table[] __initdata = {
+ [MEMTYPE_SMI] = {
+ },
+ [MEMTYPE_EBI0] = {
+ .flags = MEMTYPE_FLAGS_1M_ALIGN,
+ },
+ [MEMTYPE_EBI1] = {
+ .flags = MEMTYPE_FLAGS_1M_ALIGN,
+ },
+};
+
+static int apq8084_paddr_to_memtype(phys_addr_t paddr)
+{
+ return MEMTYPE_EBI1;
+}
+
+static struct reserve_info apq8084_reserve_info __initdata = {
+ .memtype_reserve_table = apq8084_reserve_table,
+ .paddr_to_memtype = apq8084_paddr_to_memtype,
+};
+
+static struct of_dev_auxdata apq8084_auxdata_lookup[] __initdata = {
+ OF_DEV_AUXDATA("qcom,msm-sdcc", 0xF9824000, \
+ "msm_sdcc.1", NULL),
+ OF_DEV_AUXDATA("qcom,msm-sdcc", 0xF98A4000, \
+ "msm_sdcc.2", NULL),
+ {}
+};
+
+void __init apq8084_reserve(void)
+{
+ reserve_info = &apq8084_reserve_info;
+ of_scan_flat_dt(dt_scan_for_memory_reserve, apq8084_reserve_table);
+ msm_reserve();
+}
+
+static void __init apq8084_early_memory(void)
+{
+ reserve_info = &apq8084_reserve_info;
+ of_scan_flat_dt(dt_scan_for_memory_hole, apq8084_reserve_table);
+}
+
+static struct clk_lookup msm_clocks_dummy[] = {
+ CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("core_clk", SDC1_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("iface_clk", SDC1_P_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("core_clk", SDC2_CLK, "msm_sdcc.2", OFF),
+ CLK_DUMMY("iface_clk", SDC2_P_CLK, "msm_sdcc.2", OFF),
+ CLK_DUMMY("xo", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("core_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("iface_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("sleep_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("sleep_a_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("utmi_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("ref_clk", NULL, "f9200000.qcom,ssusb", OFF),
+};
+
+static struct clock_init_data msm_dummy_clock_init_data __initdata = {
+ .table = msm_clocks_dummy,
+ .size = ARRAY_SIZE(msm_clocks_dummy),
+};
+
+/*
+ * Used to satisfy dependencies for devices that need to be
+ * run early or in a particular order. Most likely your device doesn't fall
+ * into this category, and thus the driver should not be added here.
+ * EPROBE_DEFER can satisfy most dependency problems.
+ */
+void __init apq8084_add_drivers(void)
+{
+ msm_smd_init();
+ msm_clock_init(&msm_dummy_clock_init_data);
+}
+
+static void __init apq8084_map_io(void)
+{
+ msm_map_8084_io();
+}
+
+void __init apq8084_init(void)
+{
+ struct of_dev_auxdata *adata = apq8084_auxdata_lookup;
+
+ if (socinfo_init() < 0)
+ pr_err("%s: socinfo_init() failed\n", __func__);
+
+ apq8084_init_gpiomux();
+ of_platform_populate(NULL, of_default_bus_match_table, adata, NULL);
+ apq8084_add_drivers();
+}
+
+void __init apq8084_init_very_early(void)
+{
+ apq8084_early_memory();
+}
+
+static const char *apq8084_dt_match[] __initconst = {
+ "qcom,apq8084",
+ NULL
+};
+
+DT_MACHINE_START(APQ8084_DT, "Qualcomm APQ 8084 (Flattened Device Tree)")
+ .map_io = apq8084_map_io,
+ .init_irq = msm_dt_init_irq,
+ .init_machine = apq8084_init,
+ .handle_irq = gic_handle_irq,
+ .timer = &msm_dt_timer,
+ .dt_compat = apq8084_dt_match,
+ .reserve = apq8084_reserve,
+ .init_very_early = apq8084_init_very_early,
+ .restart = msm_restart,
+ .smp = &msm8974_smp_ops,
+MACHINE_END
diff --git a/arch/arm/mach-msm/board-8226.c b/arch/arm/mach-msm/board-8226.c
index 3582914..a892e32 100644
--- a/arch/arm/mach-msm/board-8226.c
+++ b/arch/arm/mach-msm/board-8226.c
@@ -73,6 +73,10 @@
"msm_sdcc.1", NULL),
OF_DEV_AUXDATA("qcom,msm-sdcc", 0xF98A4000, \
"msm_sdcc.2", NULL),
+ OF_DEV_AUXDATA("qcom,sdhci-msm", 0xF9824900, \
+ "msm_sdcc.1", NULL),
+ OF_DEV_AUXDATA("qcom,sdhci-msm", 0xF98A4900, \
+ "msm_sdcc.2", NULL),
{}
};
diff --git a/arch/arm/mach-msm/board-8610-gpiomux.c b/arch/arm/mach-msm/board-8610-gpiomux.c
index 4b435de..593e2b1 100644
--- a/arch/arm/mach-msm/board-8610-gpiomux.c
+++ b/arch/arm/mach-msm/board-8610-gpiomux.c
@@ -23,14 +23,32 @@
.pull = GPIOMUX_PULL_NONE,
};
-static struct gpiomux_setting gpio_spi_config = {
+static struct gpiomux_setting gpio_cam_i2c_config = {
.func = GPIOMUX_FUNC_1,
- .drv = GPIOMUX_DRV_6MA,
+ .drv = GPIOMUX_DRV_2MA,
.pull = GPIOMUX_PULL_NONE,
};
-static struct gpiomux_setting gpio_spi_cs_config = {
- .func = GPIOMUX_FUNC_1,
+static struct gpiomux_setting atmel_int_act_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_UP,
+};
+
+static struct gpiomux_setting atmel_int_sus_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_2MA,
+ .pull = GPIOMUX_PULL_DOWN,
+};
+
+static struct gpiomux_setting atmel_reset_act_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_6MA,
+ .pull = GPIOMUX_PULL_UP,
+};
+
+static struct gpiomux_setting atmel_reset_sus_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
.drv = GPIOMUX_DRV_6MA,
.pull = GPIOMUX_PULL_DOWN,
};
@@ -60,6 +78,18 @@
.pull = GPIOMUX_PULL_DOWN,
};
+static struct gpiomux_setting gpio_keys_active = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_2MA,
+ .pull = GPIOMUX_PULL_UP,
+};
+
+static struct gpiomux_setting gpio_keys_suspend = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_2MA,
+ .pull = GPIOMUX_PULL_NONE,
+};
+
static struct msm_gpiomux_config msm_lcd_configs[] __initdata = {
{
.gpio = 41,
@@ -79,6 +109,18 @@
static struct msm_gpiomux_config msm_blsp_configs[] __initdata = {
{
+ .gpio = 2, /* BLSP1 QUP1 I2C_SDA */
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &gpio_i2c_config,
+ },
+ },
+ {
+ .gpio = 3, /* BLSP1 QUP1 I2C_SCL */
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &gpio_i2c_config,
+ },
+ },
+ {
.gpio = 10, /* BLSP1 QUP3 I2C_SDA */
.settings = {
[GPIOMUX_SUSPENDED] = &gpio_i2c_config,
@@ -91,27 +133,32 @@
},
},
{
- .gpio = 0, /* BLSP1 QUP1 SPI_DATA_MOSI */
+ .gpio = 16, /* BLSP1 QUP6 I2C_SDA */
.settings = {
- [GPIOMUX_SUSPENDED] = &gpio_spi_config,
+ [GPIOMUX_SUSPENDED] = &gpio_cam_i2c_config,
},
},
{
- .gpio = 1, /* BLSP1 QUP1 SPI_DATA_MISO */
+ .gpio = 17, /* BLSP1 QUP6 I2C_SCL */
.settings = {
- [GPIOMUX_SUSPENDED] = &gpio_spi_config,
+ [GPIOMUX_SUSPENDED] = &gpio_cam_i2c_config,
+ },
+ },
+};
+
+static struct msm_gpiomux_config msm_atmel_configs[] __initdata = {
+ {
+ .gpio = 0, /* TOUCH RESET */
+ .settings = {
+ [GPIOMUX_ACTIVE] = &atmel_reset_act_cfg,
+ [GPIOMUX_SUSPENDED] = &atmel_reset_sus_cfg,
},
},
{
- .gpio = 3, /* BLSP1 QUP1 SPI_CLK */
+ .gpio = 1, /* TOUCH INT */
.settings = {
- [GPIOMUX_SUSPENDED] = &gpio_spi_config,
- },
- },
- {
- .gpio = 2, /* BLSP1 QUP1 SPI_CS1 */
- .settings = {
- [GPIOMUX_SUSPENDED] = &gpio_spi_cs_config,
+ [GPIOMUX_ACTIVE] = &atmel_int_act_cfg,
+ [GPIOMUX_SUSPENDED] = &atmel_int_sus_cfg,
},
},
};
@@ -154,6 +201,54 @@
},
};
+static struct msm_gpiomux_config msm_keypad_configs[] __initdata = {
+ {
+ .gpio = 72,
+ .settings = {
+ [GPIOMUX_ACTIVE] = &gpio_keys_active,
+ [GPIOMUX_SUSPENDED] = &gpio_keys_suspend,
+ },
+ },
+ {
+ .gpio = 73,
+ .settings = {
+ [GPIOMUX_ACTIVE] = &gpio_keys_active,
+ [GPIOMUX_SUSPENDED] = &gpio_keys_suspend,
+ },
+ },
+ {
+ .gpio = 74,
+ .settings = {
+ [GPIOMUX_ACTIVE] = &gpio_keys_active,
+ [GPIOMUX_SUSPENDED] = &gpio_keys_suspend,
+ },
+ },
+};
+
+static struct gpiomux_setting sd_card_det_active_config = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_2MA,
+ .pull = GPIOMUX_PULL_NONE,
+ .dir = GPIOMUX_IN,
+};
+
+static struct gpiomux_setting sd_card_det_suspend_config = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_2MA,
+ .pull = GPIOMUX_PULL_UP,
+ .dir = GPIOMUX_IN,
+};
+
+static struct msm_gpiomux_config sd_card_det[] __initdata = {
+ {
+ .gpio = 42,
+ .settings = {
+ [GPIOMUX_ACTIVE] = &sd_card_det_active_config,
+ [GPIOMUX_SUSPENDED] = &sd_card_det_suspend_config,
+ },
+ },
+};
+
void __init msm8610_init_gpiomux(void)
{
int rc;
@@ -165,7 +260,12 @@
}
msm_gpiomux_install(msm_blsp_configs, ARRAY_SIZE(msm_blsp_configs));
+ msm_gpiomux_install(msm_atmel_configs,
+ ARRAY_SIZE(msm_atmel_configs));
msm_gpiomux_install(wcnss_5wire_interface,
ARRAY_SIZE(wcnss_5wire_interface));
msm_gpiomux_install(msm_lcd_configs, ARRAY_SIZE(msm_lcd_configs));
+ msm_gpiomux_install(msm_keypad_configs,
+ ARRAY_SIZE(msm_keypad_configs));
+ msm_gpiomux_install(sd_card_det, ARRAY_SIZE(sd_card_det));
}
diff --git a/arch/arm/mach-msm/board-8610.c b/arch/arm/mach-msm/board-8610.c
index 67334d5..2cd7134 100644
--- a/arch/arm/mach-msm/board-8610.c
+++ b/arch/arm/mach-msm/board-8610.c
@@ -44,6 +44,7 @@
#include <mach/clk-provider.h>
#include <mach/msm_smd.h>
#include <mach/rpm-smd.h>
+#include <mach/rpm-regulator-smd.h>
#include <linux/msm_thermal.h>
#include "board-dt.h"
#include "clock.h"
@@ -105,6 +106,7 @@
msm_rpm_driver_init();
msm_lpmrs_module_init();
msm_spm_device_init();
+ rpm_regulator_smd_driver_init();
qpnp_regulator_init();
tsens_tm_init_driver();
msm_thermal_device_init();
diff --git a/arch/arm/mach-msm/board-8930-regulator-pm8038.c b/arch/arm/mach-msm/board-8930-regulator-pm8038.c
index c34394e..8ed93ea 100644
--- a/arch/arm/mach-msm/board-8930-regulator-pm8038.c
+++ b/arch/arm/mach-msm/board-8930-regulator-pm8038.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -95,7 +95,7 @@
REGULATOR_SUPPLY("VDDIO_CDC", "sitar1p1-slim"),
REGULATOR_SUPPLY("CDC_VDDA_TX", "sitar1p1-slim"),
REGULATOR_SUPPLY("CDC_VDDA_RX", "sitar1p1-slim"),
- REGULATOR_SUPPLY("vddp", "0-0048"),
+ REGULATOR_SUPPLY("vcc_i2c", "0-0048"),
REGULATOR_SUPPLY("mhl_iovcc18", "0-0039"),
REGULATOR_SUPPLY("vdd-io", "spi0.0"),
REGULATOR_SUPPLY("vdd-phy", "spi0.0"),
@@ -209,7 +209,6 @@
REGULATOR_SUPPLY("8038_lvs2", NULL),
REGULATOR_SUPPLY("vcc_i2c", "3-004a"),
REGULATOR_SUPPLY("vcc_i2c", "3-0024"),
- REGULATOR_SUPPLY("vcc_i2c", "0-0048"),
REGULATOR_SUPPLY("vddio", "12-0018"),
REGULATOR_SUPPLY("vlogic", "12-0068"),
};
diff --git a/arch/arm/mach-msm/board-8930-regulator-pm8917.c b/arch/arm/mach-msm/board-8930-regulator-pm8917.c
index 8f853a4..cdc419f 100644
--- a/arch/arm/mach-msm/board-8930-regulator-pm8917.c
+++ b/arch/arm/mach-msm/board-8930-regulator-pm8917.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -194,7 +194,7 @@
REGULATOR_SUPPLY("VDDIO_CDC", "sitar1p1-slim"),
REGULATOR_SUPPLY("CDC_VDDA_TX", "sitar1p1-slim"),
REGULATOR_SUPPLY("CDC_VDDA_RX", "sitar1p1-slim"),
- REGULATOR_SUPPLY("vddp", "0-0048"),
+ REGULATOR_SUPPLY("vcc_i2c", "0-0048"),
REGULATOR_SUPPLY("mhl_iovcc18", "0-0039"),
REGULATOR_SUPPLY("CDC_VDD_CP", "sitar-slim"),
REGULATOR_SUPPLY("CDC_VDD_CP", "sitar1p1-slim"),
@@ -233,7 +233,6 @@
REGULATOR_SUPPLY("8917_lvs4", NULL),
REGULATOR_SUPPLY("vcc_i2c", "3-004a"),
REGULATOR_SUPPLY("vcc_i2c", "3-0024"),
- REGULATOR_SUPPLY("vcc_i2c", "0-0048"),
REGULATOR_SUPPLY("vddio", "12-0018"),
REGULATOR_SUPPLY("vlogic", "12-0068"),
};
diff --git a/arch/arm/mach-msm/board-8930.c b/arch/arm/mach-msm/board-8930.c
index 8be128c..ccea956 100644
--- a/arch/arm/mach-msm/board-8930.c
+++ b/arch/arm/mach-msm/board-8930.c
@@ -477,7 +477,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
@@ -1718,12 +1718,6 @@
static struct isa1200_regulator isa1200_reg_data[] = {
{
- .name = "vddp",
- .min_uV = ISA_I2C_VTG_MIN_UV,
- .max_uV = ISA_I2C_VTG_MAX_UV,
- .load_uA = ISA_I2C_CURR_UA,
- },
- {
.name = "vcc_i2c",
.min_uV = ISA_I2C_VTG_MIN_UV,
.max_uV = ISA_I2C_VTG_MAX_UV,
@@ -2356,6 +2350,7 @@
&msm_pcm_hostless,
&msm_multi_ch_pcm,
&msm_lowlatency_pcm,
+ &msm_fm_loopback,
};
static void __init msm8930_i2c_init(void)
diff --git a/arch/arm/mach-msm/board-8960.c b/arch/arm/mach-msm/board-8960.c
index c5fc418..5d96389 100644
--- a/arch/arm/mach-msm/board-8960.c
+++ b/arch/arm/mach-msm/board-8960.c
@@ -532,7 +532,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
diff --git a/arch/arm/mach-msm/board-8974-gpiomux.c b/arch/arm/mach-msm/board-8974-gpiomux.c
index 3b92171..76dbaef 100644
--- a/arch/arm/mach-msm/board-8974-gpiomux.c
+++ b/arch/arm/mach-msm/board-8974-gpiomux.c
@@ -20,6 +20,98 @@
#define KS8851_IRQ_GPIO 94
+static struct gpiomux_setting ap2mdm_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_NONE,
+};
+
+static struct gpiomux_setting mdm2ap_status_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_DOWN,
+ .dir = GPIOMUX_IN,
+};
+
+static struct gpiomux_setting mdm2ap_errfatal_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_DOWN,
+ .dir = GPIOMUX_IN,
+};
+
+static struct gpiomux_setting mdm2ap_pblrdy = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_NONE,
+ .dir = GPIOMUX_IN,
+};
+
+static struct gpiomux_setting ap2mdm_soft_reset_cfg = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_NONE,
+};
+
+static struct gpiomux_setting ap2mdm_wakeup = {
+ .func = GPIOMUX_FUNC_GPIO,
+ .drv = GPIOMUX_DRV_8MA,
+ .pull = GPIOMUX_PULL_DOWN,
+};
+
+static struct msm_gpiomux_config mdm_configs[] __initdata = {
+ /* AP2MDM_STATUS */
+ {
+ .gpio = 105,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &ap2mdm_cfg,
+ }
+ },
+ /* MDM2AP_STATUS */
+ {
+ .gpio = 46,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &mdm2ap_status_cfg,
+ }
+ },
+ /* MDM2AP_ERRFATAL */
+ {
+ .gpio = 82,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &mdm2ap_errfatal_cfg,
+ }
+ },
+ /* AP2MDM_ERRFATAL */
+ {
+ .gpio = 106,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &ap2mdm_cfg,
+ }
+ },
+ /* AP2MDM_SOFT_RESET, aka AP2MDM_PON_RESET_N */
+ {
+ .gpio = 24,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &ap2mdm_soft_reset_cfg,
+ }
+ },
+ /* AP2MDM_WAKEUP */
+ {
+ .gpio = 104,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &ap2mdm_wakeup,
+ }
+ },
+	/* MDM2AP_PBL_READY */
+ {
+ .gpio = 80,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &mdm2ap_pblrdy,
+ }
+ },
+};
+
static struct gpiomux_setting gpio_uart_config = {
.func = GPIOMUX_FUNC_2,
.drv = GPIOMUX_DRV_16MA,
@@ -1079,6 +1171,23 @@
static void msm_gpiomux_sdc4_install(void) {}
#endif /* CONFIG_MMC_MSM_SDC4_SUPPORT */
+static struct msm_gpiomux_config apq8074_dragonboard_ts_config[] __initdata = {
+ {
+ /* BLSP1 QUP I2C_DATA */
+ .gpio = 2,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &gpio_i2c_config,
+ },
+ },
+ {
+ /* BLSP1 QUP I2C_CLK */
+ .gpio = 3,
+ .settings = {
+ [GPIOMUX_SUSPENDED] = &gpio_i2c_config,
+ },
+ },
+};
+
void __init msm_8974_init_gpiomux(void)
{
int rc;
@@ -1097,8 +1206,9 @@
ARRAY_SIZE(msm_blsp2_uart7_configs));
msm_gpiomux_install(wcnss_5wire_interface,
ARRAY_SIZE(wcnss_5wire_interface));
- msm_gpiomux_install_nowrite(ath_gpio_configs,
- ARRAY_SIZE(ath_gpio_configs));
+ if (of_board_is_liquid())
+ msm_gpiomux_install_nowrite(ath_gpio_configs,
+ ARRAY_SIZE(ath_gpio_configs));
msm_gpiomux_install(msm8974_slimbus_config,
ARRAY_SIZE(msm8974_slimbus_config));
@@ -1139,4 +1249,12 @@
if (of_board_is_rumi())
msm_gpiomux_install(msm_rumi_blsp_configs,
ARRAY_SIZE(msm_rumi_blsp_configs));
+
+ if (socinfo_get_platform_subtype() == PLATFORM_SUBTYPE_MDM)
+ msm_gpiomux_install(mdm_configs,
+ ARRAY_SIZE(mdm_configs));
+
+ if (of_board_is_dragonboard() && machine_is_apq8074())
+ msm_gpiomux_install(apq8074_dragonboard_ts_config,
+ ARRAY_SIZE(apq8074_dragonboard_ts_config));
}
diff --git a/arch/arm/mach-msm/board-8974.c b/arch/arm/mach-msm/board-8974.c
index cfa1628..3eed219 100644
--- a/arch/arm/mach-msm/board-8974.c
+++ b/arch/arm/mach-msm/board-8974.c
@@ -174,6 +174,7 @@
static const char *msm8974_dt_match[] __initconst = {
"qcom,msm8974",
+ "qcom,apq8074",
NULL
};
diff --git a/arch/arm/mach-msm/board-9625-gpiomux.c b/arch/arm/mach-msm/board-9625-gpiomux.c
index 75aaaec..a6ac986 100644
--- a/arch/arm/mach-msm/board-9625-gpiomux.c
+++ b/arch/arm/mach-msm/board-9625-gpiomux.c
@@ -276,6 +276,7 @@
},
};
+#ifdef CONFIG_FB_MSM_QPIC
static struct gpiomux_setting qpic_lcdc_a_d = {
.func = GPIOMUX_FUNC_1,
.drv = GPIOMUX_DRV_10MA,
@@ -327,6 +328,17 @@
},
};
+static void msm9625_disp_init_gpiomux(void)
+{
+ msm_gpiomux_install(msm9625_qpic_lcdc_configs,
+ ARRAY_SIZE(msm9625_qpic_lcdc_configs));
+}
+#else
+static void msm9625_disp_init_gpiomux(void)
+{
+}
+#endif /* CONFIG_FB_MSM_QPIC */
+
void __init msm9625_init_gpiomux(void)
{
int rc;
@@ -347,7 +359,5 @@
ARRAY_SIZE(mdm9625_cdc_reset_config));
msm_gpiomux_install(sdc2_card_det_config,
ARRAY_SIZE(sdc2_card_det_config));
- msm_gpiomux_install(msm9625_qpic_lcdc_configs,
- ARRAY_SIZE(msm9625_qpic_lcdc_configs));
-
+ msm9625_disp_init_gpiomux();
}
diff --git a/arch/arm/mach-msm/board-zinc.c b/arch/arm/mach-msm/board-zinc.c
deleted file mode 100644
index 444444f..0000000
--- a/arch/arm/mach-msm/board-zinc.c
+++ /dev/null
@@ -1,127 +0,0 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/err.h>
-#include <linux/kernel.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/memory.h>
-#include <asm/hardware/gic.h>
-#include <asm/mach/map.h>
-#include <asm/mach/arch.h>
-#include <mach/board.h>
-#include <mach/gpiomux.h>
-#include <mach/msm_iomap.h>
-#include <mach/msm_memtypes.h>
-#include <mach/msm_smd.h>
-#include <mach/restart.h>
-#include <mach/socinfo.h>
-#include <mach/clk-provider.h>
-#include "board-dt.h"
-#include "clock.h"
-#include "devices.h"
-#include "platsmp.h"
-
-static struct memtype_reserve msmzinc_reserve_table[] __initdata = {
- [MEMTYPE_SMI] = {
- },
- [MEMTYPE_EBI0] = {
- .flags = MEMTYPE_FLAGS_1M_ALIGN,
- },
- [MEMTYPE_EBI1] = {
- .flags = MEMTYPE_FLAGS_1M_ALIGN,
- },
-};
-
-static int msmzinc_paddr_to_memtype(phys_addr_t paddr)
-{
- return MEMTYPE_EBI1;
-}
-
-static struct reserve_info msmzinc_reserve_info __initdata = {
- .memtype_reserve_table = msmzinc_reserve_table,
- .paddr_to_memtype = msmzinc_paddr_to_memtype,
-};
-
-void __init msmzinc_reserve(void)
-{
- reserve_info = &msmzinc_reserve_info;
- of_scan_flat_dt(dt_scan_for_memory_reserve, msmzinc_reserve_table);
- msm_reserve();
-}
-
-static void __init msmzinc_early_memory(void)
-{
- reserve_info = &msmzinc_reserve_info;
- of_scan_flat_dt(dt_scan_for_memory_hole, msmzinc_reserve_table);
-}
-
-static struct clk_lookup msm_clocks_dummy[] = {
- CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
- CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
-};
-
-static struct clock_init_data msm_dummy_clock_init_data __initdata = {
- .table = msm_clocks_dummy,
- .size = ARRAY_SIZE(msm_clocks_dummy),
-};
-
-/*
- * Used to satisfy dependencies for devices that need to be
- * run early or in a particular order. Most likely your device doesn't fall
- * into this category, and thus the driver should not be added here. The
- * EPROBE_DEFER can satisfy most dependency problems.
- */
-void __init msmzinc_add_drivers(void)
-{
- msm_smd_init();
- msm_clock_init(&msm_dummy_clock_init_data);
-}
-
-static void __init msmzinc_map_io(void)
-{
- msm_map_zinc_io();
-}
-
-void __init msmzinc_init(void)
-{
- if (socinfo_init() < 0)
- pr_err("%s: socinfo_init() failed\n", __func__);
-
- msmzinc_init_gpiomux();
- of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
- msmzinc_add_drivers();
-}
-
-void __init msmzinc_init_very_early(void)
-{
- msmzinc_early_memory();
-}
-
-static const char *msmzinc_dt_match[] __initconst = {
- "qcom,msmzinc",
- NULL
-};
-
-DT_MACHINE_START(MSMZINC_DT, "Qualcomm MSM ZINC (Flattened Device Tree)")
- .map_io = msmzinc_map_io,
- .init_irq = msm_dt_init_irq,
- .init_machine = msmzinc_init,
- .handle_irq = gic_handle_irq,
- .timer = &msm_dt_timer,
- .dt_compat = msmzinc_dt_match,
- .reserve = msmzinc_reserve,
- .init_very_early = msmzinc_init_very_early,
- .restart = msm_restart,
- .smp = &msm8974_smp_ops,
-MACHINE_END
diff --git a/arch/arm/mach-msm/cache_erp.c b/arch/arm/mach-msm/cache_erp.c
index ddea91c..f52bc28 100644
--- a/arch/arm/mach-msm/cache_erp.c
+++ b/arch/arm/mach-msm/cache_erp.c
@@ -123,11 +123,18 @@
unsigned int mplxrexnok;
};
+struct msm_erp_dump_region {
+ struct resource *res;
+ void __iomem *va;
+};
+
static DEFINE_PER_CPU(struct msm_l1_err_stats, msm_l1_erp_stats);
static struct msm_l2_err_stats msm_l2_erp_stats;
static int l1_erp_irq, l2_erp_irq;
static struct proc_dir_entry *procfs_entry;
+static int num_dump_regions;
+static struct msm_erp_dump_region *dump_regions;
#ifdef CONFIG_MSM_L1_ERR_LOG
static struct proc_dir_entry *procfs_log_entry;
@@ -211,6 +218,22 @@
return len;
}
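+/*
+ * Dump the raw contents of each optional DT-described region with
+ * print_hex_dump() so they are captured alongside the L1/L2 error report.
+ */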
+static int msm_erp_dump_regions(void)
+{
+ int i = 0;
+ struct msm_erp_dump_region *r;
+
+ for (i = 0; i < num_dump_regions; i++) {
+ r = &dump_regions[i];
+
+ pr_alert("%s %pR:\n", r->res->name, r->res);
+ print_hex_dump(KERN_ALERT, "", DUMP_PREFIX_OFFSET, 32, 4, r->va,
+ resource_size(r->res), 0);
+ }
+
+ return 0;
+}
+
#ifdef CONFIG_MSM_L1_ERR_LOG
static int proc_read_log(char *page, char **start, off_t off, int count,
int *eof, void *data)
@@ -267,6 +290,7 @@
pr_alert("\tCESR = 0x%08x\n", cesr);
pr_alert("\tCPU speed = %lu\n", acpuclk_get_rate(cpu));
pr_alert("\tMIDR = 0x%08x\n", read_cpuid_id());
+ msm_erp_dump_regions();
}
if (cesr & CESR_DCTPE) {
@@ -425,6 +449,9 @@
if (port_error && print_alert)
ERP_PORT_ERR("L2 master port error detected");
+ if (soft_error && print_alert)
+ msm_erp_dump_regions();
+
if (soft_error && !unrecoverable)
ERP_1BIT_ERR("L2 single-bit error detected");
@@ -464,6 +491,37 @@
.notifier_call = cache_erp_cpu_callback,
};
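+/*
+ * Map the memory regions named in the device tree ("reg-names"). The
+ * property is optional; when it is absent, no extra regions are dumped
+ * on a cache error.
+ */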
+static int msm_erp_read_dump_regions(struct platform_device *pdev)
+{
+ int i;
+ struct device_node *np = pdev->dev.of_node;
+ struct resource *res;
+
+ num_dump_regions = of_property_count_strings(np, "reg-names");
+
+ if (num_dump_regions <= 0) {
+ num_dump_regions = 0;
+ return 0; /* Not an error - this is an optional property */
+ }
+
+ dump_regions = devm_kzalloc(&pdev->dev,
+ sizeof(*dump_regions) * num_dump_regions,
+ GFP_KERNEL);
+ if (!dump_regions)
+ return -ENOMEM;
+
+ for (i = 0; i < num_dump_regions; i++) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ dump_regions[i].res = res;
+ dump_regions[i].va = devm_ioremap(&pdev->dev, res->start,
+ resource_size(res));
+ if (!dump_regions[i].va)
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
static int msm_cache_erp_probe(struct platform_device *pdev)
{
struct resource *r;
@@ -511,6 +569,11 @@
goto fail_l2;
}
+ ret = msm_erp_read_dump_regions(pdev);
+
+ if (ret)
+ goto fail_l2;
+
get_online_cpus();
register_hotcpu_notifier(&cache_erp_cpu_notifier);
for_each_cpu(cpu, cpu_online_mask)
diff --git a/arch/arm/mach-msm/clock-8226.c b/arch/arm/mach-msm/clock-8226.c
index 5f9eafd..486842f 100644
--- a/arch/arm/mach-msm/clock-8226.c
+++ b/arch/arm/mach-msm/clock-8226.c
@@ -169,7 +169,7 @@
VDD_DIG_NUM
};
-static const int *vdd_corner[] = {
+static int *vdd_corner[] = {
[VDD_DIG_NONE] = VDD_UV(RPM_REGULATOR_CORNER_NONE),
[VDD_DIG_LOW] = VDD_UV(RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_DIG_NOMINAL] = VDD_UV(RPM_REGULATOR_CORNER_NORMAL),
@@ -1709,13 +1709,19 @@
F_MMSS( 100000000, gpll0, 6, 0, 0),
F_MMSS( 109090000, gpll0, 5.5, 0, 0),
F_MMSS( 133330000, gpll0, 4.5, 0, 0),
+ F_MMSS( 150000000, gpll0, 4, 0, 0),
F_MMSS( 200000000, gpll0, 3, 0, 0),
F_MMSS( 228570000, mmpll0_pll, 3.5, 0, 0),
F_MMSS( 266670000, mmpll0_pll, 3, 0, 0),
F_MMSS( 320000000, mmpll0_pll, 2.5, 0, 0),
+ F_MMSS( 400000000, mmpll0_pll, 2, 0, 0),
F_END
};
+static unsigned long camss_vfe_vfe0_fmax_v2[VDD_DIG_NUM] = {
+ 150000000, 320000000, 400000000,
+};
+
static struct rcg_clk vfe0_clk_src = {
.cmd_rcgr_reg = VFE0_CMD_RCGR,
.set_rate = set_rate_hid,
@@ -1972,11 +1978,17 @@
static struct clk_freq_tbl ftbl_camss_vfe_cpp_clk[] = {
F_MMSS( 133330000, gpll0, 4.5, 0, 0),
+ F_MMSS( 150000000, gpll0, 4, 0, 0),
F_MMSS( 266670000, mmpll0_pll, 3, 0, 0),
F_MMSS( 320000000, mmpll0_pll, 2.5, 0, 0),
+ F_MMSS( 400000000, mmpll0_pll, 2, 0, 0),
F_END
};
+static unsigned long camss_vfe_cpp_fmax_v2[VDD_DIG_NUM] = {
+ 150000000, 320000000, 400000000,
+};
+
static struct rcg_clk cpp_clk_src = {
.cmd_rcgr_reg = CPP_CMD_RCGR,
.set_rate = set_rate_hid,
@@ -2697,7 +2709,6 @@
.base = &virt_bases[LPASS_BASE],
.c = {
.dbg_name = "q6ss_xo_clk",
- .parent = &xo.c,
.ops = &clk_ops_branch,
CLK_INIT(q6ss_xo_clk.c),
},
@@ -2742,7 +2753,7 @@
VDD_SR2_PLL_NUM
};
-static const int *vdd_sr2_levels[] = {
+static int *vdd_sr2_levels[] = {
[VDD_SR2_PLL_OFF] = VDD_UV(0, RPM_REGULATOR_CORNER_NONE),
[VDD_SR2_PLL_SVS] = VDD_UV(1800000, RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_SR2_PLL_NOM] = VDD_UV(1800000, RPM_REGULATOR_CORNER_NORMAL),
@@ -2755,6 +2766,7 @@
F_APCS_PLL( 384000000, 20, 0x0, 0x1, 0x0, 0x0, 0x0),
F_APCS_PLL( 787200000, 41, 0x0, 0x1, 0x0, 0x0, 0x0),
F_APCS_PLL( 998400000, 52, 0x0, 0x1, 0x0, 0x0, 0x0),
+ F_APCS_PLL(1094400000, 57, 0x0, 0x1, 0x0, 0x0, 0x0),
F_APCS_PLL(1190400000, 62, 0x0, 0x1, 0x0, 0x0, 0x0),
PLL_F_END
};
@@ -2811,8 +2823,8 @@
static DEFINE_CLK_VOTER(pnoc_sps_clk, &pnoc_clk.c, LONG_MAX);
-static DEFINE_CLK_VOTER(qseecom_ce1_clk_src, &ce1_clk_src.c, LONG_MAX);
-static DEFINE_CLK_VOTER(scm_ce1_clk_src, &ce1_clk_src.c, LONG_MAX);
+static DEFINE_CLK_VOTER(qseecom_ce1_clk_src, &ce1_clk_src.c, 100000000);
+static DEFINE_CLK_VOTER(scm_ce1_clk_src, &ce1_clk_src.c, 100000000);
static DEFINE_CLK_BRANCH_VOTER(cxo_otg_clk, &xo.c);
static DEFINE_CLK_BRANCH_VOTER(cxo_pil_lpass_clk, &xo.c);
@@ -3060,7 +3072,7 @@
/* WCNSS CLOCKS */
CLK_LOOKUP("xo", cxo_wlan_clk.c, "fb000000.qcom,wcnss-wlan"),
- CLK_LOOKUP("rf_clk", cxo_a2.c, "fb000000.qcom,wcnss-wlan"),
+ CLK_LOOKUP("rf_clk", cxo_a1.c, "fb000000.qcom,wcnss-wlan"),
/* BUS DRIVER */
CLK_LOOKUP("bus_clk", cnoc_msmbus_clk.c, "msm_config_noc"),
@@ -3328,6 +3340,8 @@
CLK_LOOKUP("csi1_rdi_clk", camss_csi1rdi_clk.c, "fda08400.qcom,csid"),
/* ISPIF clocks */
+ CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
+ "fda0a000.qcom,ispif"),
CLK_LOOKUP("camss_vfe_vfe_clk", camss_vfe_vfe0_clk.c,
"fda0a000.qcom,ispif"),
CLK_LOOKUP("camss_csi_vfe_clk", camss_csi_vfe0_clk.c,
@@ -3415,6 +3429,18 @@
CLK_LOOKUP("osr_clk", div_clk1.c, "msm-dai-q6-dev.16390"),
CLK_LOOKUP("osr_clk", div_clk1.c, "msm-dai-q6-dev.16391"),
+ /* Add QCEDEV clocks */
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "fd400000.qcom,qcedev"),
+
+ /* Add QCRYPTO clocks */
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "fd404000.qcom,qcrypto"),
+
};
static struct clk_lookup msm_clocks_8226_rumi[] = {
@@ -3544,6 +3570,12 @@
reg_init();
+ /* v2 specific changes */
+ if (SOCINFO_VERSION_MAJOR(socinfo_get_version()) == 2) {
+ cpp_clk_src.c.fmax = camss_vfe_cpp_fmax_v2;
+ vfe0_clk_src.c.fmax = camss_vfe_vfe0_fmax_v2;
+ }
+
/*
* MDSS needs the ahb clock and needs to init before we register the
* lookup table.
diff --git a/arch/arm/mach-msm/clock-8610.c b/arch/arm/mach-msm/clock-8610.c
index 4a716fc..2683d19 100644
--- a/arch/arm/mach-msm/clock-8610.c
+++ b/arch/arm/mach-msm/clock-8610.c
@@ -30,6 +30,7 @@
#include "clock-rpm.h"
#include "clock-voter.h"
#include "clock.h"
+#include "clock-dsi-8610.h"
enum {
GCC_BASE,
@@ -143,6 +144,7 @@
#define CE1_AHB_CBCR 0x104C
#define COPSS_SMMU_AHB_CBCR 0x015C
#define LPSS_SMMU_AHB_CBCR 0x0158
+#define BIMC_SMMU_CBCR 0x1120
#define LPASS_Q6_AXI_CBCR 0x11C0
#define APCS_GPLL_ENA_VOTE 0x1480
#define APCS_CLOCK_BRANCH_ENA_VOTE 0x1484
@@ -254,6 +256,7 @@
#define MMSS_MMSSNOC_AXI_CBCR 0x506C
#define BIMC_GFX_BCR 0x5090
#define BIMC_GFX_CBCR 0x5094
+#define MMSS_CAMSS_MISC 0x3718
#define AUDIO_CORE_GDSCR 0x7000
#define SPDM_BCR 0x1000
@@ -431,7 +434,7 @@
VDD_DIG_NUM
};
-static const int *vdd_corner[] = {
+static int *vdd_corner[] = {
[VDD_DIG_NONE] = VDD_UV(RPM_REGULATOR_CORNER_NONE),
[VDD_DIG_LOW] = VDD_UV(RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_DIG_NOMINAL] = VDD_UV(RPM_REGULATOR_CORNER_NORMAL),
@@ -461,9 +464,9 @@
#define D0_ID 1
#define D1_ID 2
-#define A0_ID 3
-#define A1_ID 4
-#define A2_ID 5
+#define A0_ID 4
+#define A1_ID 5
+#define A2_ID 6
#define DIFF_CLK_ID 7
#define DIV_CLK_ID 11
@@ -530,7 +533,7 @@
VDD_SR2_PLL_NUM
};
-static const int *vdd_sr2_levels[] = {
+static int *vdd_sr2_levels[] = {
[VDD_SR2_PLL_OFF] = VDD_UV(0, RPM_REGULATOR_CORNER_NONE),
[VDD_SR2_PLL_SVS] = VDD_UV(1800000, RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_SR2_PLL_NOM] = VDD_UV(1800000, RPM_REGULATOR_CORNER_NORMAL),
@@ -1500,6 +1503,17 @@
},
};
+static struct branch_clk gcc_bimc_smmu_clk = {
+ .cbcr_reg = BIMC_SMMU_CBCR,
+ .has_sibling = 0,
+ .base = &virt_bases[GCC_BASE],
+ .c = {
+ .dbg_name = "gcc_bimc_smmu_clk",
+ .ops = &clk_ops_branch,
+ CLK_INIT(gcc_bimc_smmu_clk.c),
+ },
+};
+
static struct clk_freq_tbl ftbl_csi0_1_clk[] = {
F_MM(100000000, gpll0, 6, 0, 0),
F_MM(200000000, mmpll0, 4, 0, 0),
@@ -1548,21 +1562,85 @@
static DEFINE_CLK_VOTER(mdp_axi_clk_src, &axi_clk_src.c, 200000000);
static DEFINE_CLK_VOTER(mmssnoc_axi_clk_src, &axi_clk_src.c, 200000000);
-static struct clk_freq_tbl ftbl_dsi_pclk_clk[] = {
- F_MDSS( 50000000, dsipll, 10, 0, 0),
- F_MDSS(103330000, dsipll, 9, 0, 0),
- F_END,
+static struct clk_ops dsi_byte_clk_src_ops;
+static struct clk_ops dsi_pixel_clk_src_ops;
+static struct clk_ops dsi_dsi_clk_src_ops;
+
+static struct dsi_pll_vco_clk dsi_vco = {
+ .vco_clk_min = 600000000,
+ .vco_clk_max = 1200000000,
+ .pref_div_ratio = 26,
+ .c = {
+ .parent = &gcc_xo_clk_src.c,
+ .dbg_name = "dsi_vco",
+ .ops = &clk_ops_dsi_vco,
+ CLK_INIT(dsi_vco.c),
+ },
};
+static struct clk dsi_pll_byte = {
+ .parent = &dsi_vco.c,
+ .dbg_name = "dsi_pll_byte",
+ .ops = &clk_ops_dsi_byteclk,
+ CLK_INIT(dsi_pll_byte),
+};
+
+static struct clk dsi_pll_pixel = {
+ .parent = &dsi_vco.c,
+ .dbg_name = "dsi_pll_pixel",
+ .ops = &clk_ops_dsi_dsiclk,
+ CLK_INIT(dsi_pll_pixel),
+};
+
+static struct clk_freq_tbl pixel_freq_tbl[] = {
+ {
+ .src_clk = &dsi_pll_pixel,
+ .div_src_val = BVAL(10, 8, dsipll_mm_source_val),
+ },
+ F_END
+};
+
+#define CFG_RCGR_DIV_MASK BM(4, 0)
+
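+/*
+ * The CFG_RCGR divider field holds a half-integer divider encoded as
+ * (2 * divider - 1). For example (hypothetical rates), a 500 MHz source
+ * and a 125 MHz request give div = (2 * 500 / 125) - 1 = 7, i.e. an
+ * effective divide-by-4 of the PLL output.
+ */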
+static int set_rate_pixel_byte_clk(struct clk *clk, unsigned long rate)
+{
+ struct rcg_clk *rcg = to_rcg_clk(clk);
+ struct clk *pll = clk->parent;
+ unsigned long source_rate, div;
+ struct clk_freq_tbl *cur_freq = rcg->current_freq;
+ int rc;
+
+ if (rate == 0)
+ return clk_set_rate(pll, 0);
+
+ source_rate = clk_round_rate(pll, rate);
+ if (!source_rate || ((2 * source_rate) % rate))
+ return -EINVAL;
+
+ div = ((2 * source_rate)/rate) - 1;
+ if (div > CFG_RCGR_DIV_MASK)
+ return -EINVAL;
+
+ rc = clk_set_rate(pll, source_rate);
+ if (rc)
+ return rc;
+
+ cur_freq->div_src_val &= ~CFG_RCGR_DIV_MASK;
+ cur_freq->div_src_val |= BVAL(4, 0, div);
+ rcg->set_rate(rcg, cur_freq);
+
+ return 0;
+}
+
static struct rcg_clk dsi_pclk_clk_src = {
.cmd_rcgr_reg = DSI_PCLK_CMD_RCGR,
.set_rate = set_rate_mnd,
- .freq_tbl = ftbl_dsi_pclk_clk,
- .current_freq = &rcg_dummy_freq,
+ .current_freq = pixel_freq_tbl,
.base = &virt_bases[MMSS_BASE],
.c = {
+ .parent = &dsi_pll_pixel,
.dbg_name = "dsi_pclk_clk_src",
- .ops = &clk_ops_rcg_mnd,
+ .ops = &dsi_pixel_clk_src_ops,
VDD_DIG_FMAX_MAP2(LOW, 50000000, NOMINAL, 103330000),
CLK_INIT(dsi_pclk_clk_src.c),
},
@@ -1674,41 +1752,60 @@
},
};
-static struct clk_freq_tbl ftbl_dsi_clk[] = {
- F_MDSS(155000000, dsipll, 6, 0, 0),
- F_MDSS(310000000, dsipll, 3, 0, 0),
- F_END,
+/*
+ * The DSI clock will always use a divider of 1. However, we still
+ * need to set the right voltage and source.
+ */
+static int set_rate_dsi_clk(struct clk *clk, unsigned long rate)
+{
+ struct rcg_clk *rcg = to_rcg_clk(clk);
+ struct clk_freq_tbl *cur_freq = rcg->current_freq;
+
+ rcg->set_rate(rcg, cur_freq);
+
+ return 0;
+}
+
+static struct clk_freq_tbl dsi_freq_tbl[] = {
+ {
+ .src_clk = &dsi_pll_pixel,
+ .div_src_val = BVAL(4, 0, 0) |
+ BVAL(10, 8, dsipll_mm_source_val),
+ },
+ F_END
};
static struct rcg_clk dsi_clk_src = {
.cmd_rcgr_reg = DSI_CMD_RCGR,
.set_rate = set_rate_mnd,
- .freq_tbl = ftbl_dsi_clk,
- .current_freq = &rcg_dummy_freq,
+ .current_freq = dsi_freq_tbl,
.base = &virt_bases[MMSS_BASE],
.c = {
+ .parent = &dsi_pll_pixel,
.dbg_name = "dsi_clk_src",
- .ops = &clk_ops_rcg_mnd,
+ .ops = &dsi_dsi_clk_src_ops,
VDD_DIG_FMAX_MAP2(LOW, 155000000, NOMINAL, 310000000),
CLK_INIT(dsi_clk_src.c),
},
};
-static struct clk_freq_tbl ftbl_dsi_byte_clk[] = {
- F_MDSS( 62500000, dsipll, 12, 0, 0),
- F_MDSS(125000000, dsipll, 6, 0, 0),
- F_END,
+static struct clk_freq_tbl byte_freq_tbl[] = {
+ {
+ .src_clk = &dsi_pll_byte,
+ .div_src_val = BVAL(10, 8, dsipll_mm_source_val),
+ },
+ F_END
};
static struct rcg_clk dsi_byte_clk_src = {
.cmd_rcgr_reg = DSI_BYTE_CMD_RCGR,
.set_rate = set_rate_hid,
- .freq_tbl = ftbl_dsi_byte_clk,
- .current_freq = &rcg_dummy_freq,
+ .current_freq = byte_freq_tbl,
.base = &virt_bases[MMSS_BASE],
.c = {
+ .parent = &dsi_pll_byte,
.dbg_name = "dsi_byte_clk_src",
- .ops = &clk_ops_rcg,
+ .ops = &dsi_byte_clk_src_ops,
VDD_DIG_FMAX_MAP2(LOW, 62500000, NOMINAL, 125000000),
CLK_INIT(dsi_byte_clk_src.c),
},
@@ -1795,6 +1892,8 @@
.dbg_name = "bimc_gfx_clk",
.ops = &clk_ops_branch,
CLK_INIT(bimc_gfx_clk.c),
+ /* FIXME: Remove once kgsl votes on the depends clock. */
+ .depends = &gcc_bimc_smmu_clk.c,
},
};
@@ -1918,6 +2017,115 @@
},
};
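+/*
+ * CAMSS PHY/PIX/RDI output muxes: each mux has an enable bit and a
+ * select bit in MMSS_CAMSS_MISC that picks between the CSI0 and CSI1
+ * source clocks listed in its sources[] table.
+ */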
+static struct cam_mux_clk csi0phy_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(11),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(9),
+ .sources = (struct mux_source[]) {
+ { &csi0phy_clk.c, 0 },
+ { &csi1phy_clk.c, BIT(9) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi0phy_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(csi0phy_cam_mux_clk.c),
+ },
+};
+
+static struct cam_mux_clk csi1phy_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(10),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(8),
+ .sources = (struct mux_source[]) {
+ { &csi0phy_clk.c, 0 },
+ { &csi1phy_clk.c, BIT(8) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi1phy_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(csi1phy_cam_mux_clk.c),
+ },
+};
+
+static struct cam_mux_clk csi0pix_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(7),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(3),
+ .sources = (struct mux_source[]) {
+ { &csi0pix_clk.c, 0 },
+ { &csi1pix_clk.c, BIT(3) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi0pix_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(csi0pix_cam_mux_clk.c),
+ },
+};
+
+static struct cam_mux_clk rdi2_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(6),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(2),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(2) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi2_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(rdi2_cam_mux_clk.c),
+ },
+};
+
+static struct cam_mux_clk rdi1_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(5),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(1),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(1) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi1_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(rdi1_cam_mux_clk.c),
+ },
+};
+
+static struct cam_mux_clk rdi0_cam_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(4),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(0),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(0) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi0_cam_mux_clk",
+ .ops = &clk_ops_cam_mux,
+ CLK_INIT(rdi0_cam_mux_clk.c),
+ },
+};
+
static struct branch_clk csi_ahb_clk = {
.cbcr_reg = CSI_AHB_CBCR,
.has_sibling = 1,
@@ -1990,7 +2198,6 @@
static struct branch_clk dsi_pclk_clk = {
.cbcr_reg = DSI_PCLK_CBCR,
- .has_sibling = 1,
.base = &virt_bases[MMSS_BASE],
.c = {
.parent = &dsi_pclk_clk_src.c,
@@ -2231,7 +2438,6 @@
.bcr_reg = LPASS_Q6SS_BCR,
.base = &virt_bases[LPASS_BASE],
.c = {
- .parent = &gcc_xo_clk_src.c,
.dbg_name = "q6ss_xo_clk",
.ops = &clk_ops_branch,
CLK_INIT(q6ss_xo_clk.c),
@@ -2290,6 +2496,7 @@
{ &gcc_ce1_ahb_clk.c, GCC_BASE, 0x013a},
{ &gcc_xo_clk_src.c, GCC_BASE, 0x0149},
{ &bimc_clk.c, GCC_BASE, 0x0154},
+ { &gcc_bimc_smmu_clk.c, GCC_BASE, 0x015e},
{ &gcc_lpass_q6_axi_clk.c, GCC_BASE, 0x0160},
{ &mmssnoc_ahb_clk.c, MMSS_BASE, 0x0001},
@@ -2550,6 +2757,8 @@
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991f000.serial"),
CLK_LOOKUP("core_clk", gcc_blsp1_uart3_apps_clk.c, "f991f000.serial"),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991e000.serial"),
+ CLK_LOOKUP("core_clk", gcc_blsp1_uart2_apps_clk.c, "f991e000.serial"),
CLK_LOOKUP("dfab_clk", pnoc_sps_clk.c, "msm_sps"),
CLK_LOOKUP("bus_clk", pnoc_qseecom_clk.c, "qseecom"),
@@ -2603,6 +2812,10 @@
CLK_LOOKUP("core_clk", qdss_clk.c, "fc352000.cti"),
CLK_LOOKUP("core_clk", qdss_clk.c, "fc353000.cti"),
CLK_LOOKUP("core_clk", qdss_clk.c, "fc354000.cti"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34c000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34d000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34e000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34f000.jtagmm"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc326000.tmc"),
@@ -2632,6 +2845,10 @@
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc352000.cti"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc353000.cti"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc354000.cti"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34c000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34d000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34e000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34f000.jtagmm"),
@@ -2655,19 +2872,21 @@
CLK_LOOKUP("core_clk_src", sdcc1_apps_clk_src.c, ""),
CLK_LOOKUP("core_clk_src", sdcc2_apps_clk_src.c, ""),
CLK_LOOKUP("core_clk_src", usb_hs_system_clk_src.c, ""),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9923000.i2c"),
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9925000.i2c"),
- CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9923000.spi"),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, ""),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup1_spi_apps_clk.c, "f9923000.spi"),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9927000.i2c"),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, "f9923000.i2c"),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup1_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_i2c_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup3_i2c_apps_clk.c, "f9925000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup3_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup4_i2c_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup4_spi_apps_clk.c, ""),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup5_i2c_apps_clk.c, ""),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup5_i2c_apps_clk.c, "f9927000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup5_spi_apps_clk.c, ""),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup6_i2c_apps_clk.c, ""),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9928000.i2c"),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup6_i2c_apps_clk.c, "f9928000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup6_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_uart1_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_uart2_apps_clk.c, ""),
@@ -2689,7 +2908,7 @@
CLK_LOOKUP("core_clk", gcc_mss_q6_bimc_axi_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_pdm2_clk.c, ""),
CLK_LOOKUP("iface_clk", gcc_pdm_ahb_clk.c, ""),
- CLK_LOOKUP("iface_clk", gcc_prng_ahb_clk.c, ""),
+ CLK_LOOKUP("iface_clk", gcc_prng_ahb_clk.c, "f9bff000.qcom,msm-rng"),
CLK_LOOKUP("iface_clk", gcc_sdcc1_ahb_clk.c, "msm_sdcc.1"),
CLK_LOOKUP("core_clk", gcc_sdcc1_apps_clk.c, "msm_sdcc.1"),
CLK_LOOKUP("iface_clk", gcc_sdcc2_ahb_clk.c, "msm_sdcc.2"),
@@ -2749,10 +2968,19 @@
CLK_LOOKUP("core_clk", vfe_ahb_clk.c, ""),
CLK_LOOKUP("core_clk", vfe_axi_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi0pix_cam_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi0phy_cam_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi1phy_cam_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi2_cam_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi1_cam_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi0_cam_mux_clk.c, ""),
+
CLK_LOOKUP("core_clk", oxili_gfx3d_clk.c, "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("iface_clk", oxili_ahb_clk.c, "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("mem_iface_clk", bimc_gfx_clk.c, "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("mem_clk", gmem_gfx3d_clk.c, "fdc00000.qcom,kgsl-3d0"),
+ CLK_LOOKUP("alt_mem_iface_clk", gcc_bimc_smmu_clk.c,
+ "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("iface_clk", vfe_ahb_clk.c, "fd890000.qcom,iommu"),
CLK_LOOKUP("core_clk", vfe_axi_clk.c, "fd890000.qcom,iommu"),
@@ -2762,6 +2990,7 @@
CLK_LOOKUP("core_clk", mdp_axi_clk.c, "fd870000.qcom,iommu"),
CLK_LOOKUP("iface_clk", oxili_ahb_clk.c, "fd880000.qcom,iommu"),
CLK_LOOKUP("core_clk", bimc_gfx_clk.c, "fd880000.qcom,iommu"),
+ CLK_LOOKUP("alt_core_clk", gcc_bimc_smmu_clk.c, "fd880000.qcom,iommu"),
CLK_LOOKUP("iface_clk", gcc_lpss_smmu_ahb_clk.c, "fd000000.qcom,iommu"),
CLK_LOOKUP("core_clk", gcc_lpass_q6_axi_clk.c, "fd000000.qcom,iommu"),
CLK_LOOKUP("iface_clk", gcc_copss_smmu_ahb_clk.c,
@@ -2784,7 +3013,7 @@
CLK_LOOKUP("measure_clk", l2_m_clk, ""),
CLK_LOOKUP("xo", gcc_xo_clk_src.c, "fb000000.qcom,wcnss-wlan"),
- CLK_LOOKUP("rf_clk", cxo_a2.c, "fb000000.qcom,wcnss-wlan"),
+ CLK_LOOKUP("rf_clk", cxo_a1.c, "fb000000.qcom,wcnss-wlan"),
CLK_LOOKUP("iface_clk", mdp_ahb_clk.c, "fd900000.qcom,mdss_mdp"),
CLK_LOOKUP("core_clk", mdp_axi_clk.c, "fd900000.qcom,mdss_mdp"),
@@ -2796,6 +3025,17 @@
CLK_LOOKUP("byte_clk", dsi_byte_clk.c, "fdd00000.qcom,mdss_dsi"),
CLK_LOOKUP("esc_clk", dsi_esc_clk.c, "fdd00000.qcom,mdss_dsi"),
CLK_LOOKUP("pixel_clk", dsi_pclk_clk.c, "fdd00000.qcom,mdss_dsi"),
+
+ /* QSEECOM Clocks */
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "qseecom"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "qseecom"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "qseecom"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "qseecom"),
+
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "scm"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "scm"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "scm"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "scm"),
};
static struct clk_lookup msm_clocks_8610_rumi[] = {
@@ -2919,6 +3159,26 @@
clk_set_rate(&mclk1_clk_src.c, mclk1_clk_src.freq_tbl[0].freq_hz);
}
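+/*
+ * Route the DSI byte/pixel/dsi RCGs to the DSI PHY PLL outputs: start
+ * from the generic RCG ops and override set_rate/handoff so the rates
+ * follow the PLL instead of a fixed frequency table.
+ */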
+static void dsi_init(void)
+{
+ dsi_byte_clk_src_ops = clk_ops_rcg;
+ dsi_byte_clk_src_ops.set_rate = set_rate_pixel_byte_clk;
+ dsi_byte_clk_src_ops.handoff = byte_rcg_handoff;
+ dsi_byte_clk_src_ops.get_parent = NULL;
+
+ dsi_dsi_clk_src_ops = clk_ops_rcg_mnd;
+ dsi_dsi_clk_src_ops.set_rate = set_rate_dsi_clk;
+ dsi_dsi_clk_src_ops.handoff = pixel_rcg_handoff;
+ dsi_dsi_clk_src_ops.get_parent = NULL;
+
+ dsi_pixel_clk_src_ops = clk_ops_rcg_mnd;
+ dsi_pixel_clk_src_ops.set_rate = set_rate_pixel_byte_clk;
+ dsi_pixel_clk_src_ops.handoff = pixel_rcg_handoff;
+ dsi_pixel_clk_src_ops.get_parent = NULL;
+
+ dsi_clk_ctrl_init(&dsi_ahb_clk.c);
+}
+
#define GCC_CC_PHYS 0xFC400000
#define GCC_CC_SIZE SZ_16K
@@ -2978,6 +3238,8 @@
reg_init();
+ dsi_init();
+
/* Maintain the max nominal frequency on the MMSSNOC AHB bus. */
clk_set_rate(&mmssnoc_ahb_a_clk.c, 40000000);
clk_prepare_enable(&mmssnoc_ahb_a_clk.c);
diff --git a/arch/arm/mach-msm/clock-8960.c b/arch/arm/mach-msm/clock-8960.c
index 509443d..9ee4476 100644
--- a/arch/arm/mach-msm/clock-8960.c
+++ b/arch/arm/mach-msm/clock-8960.c
@@ -5419,11 +5419,6 @@
CLK_LOOKUP("csi_phy_clk", csi0_phy_clk.c, "msm_csid.0"),
CLK_LOOKUP("csi_phy_clk", csi1_phy_clk.c, "msm_csid.1"),
CLK_LOOKUP("csi_phy_clk", csi2_phy_clk.c, "msm_csid.2"),
- CLK_LOOKUP("csi_pix_clk", csi_pix_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_pix1_clk", csi_pix1_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi_clk", csi_rdi_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi1_clk", csi_rdi1_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi2_clk", csi_rdi2_clk.c, "msm_ispif.0"),
CLK_LOOKUP("csiphy_timer_src_clk",
csiphy_timer_src_clk.c, "msm_csiphy.0"),
CLK_LOOKUP("csiphy_timer_src_clk",
@@ -6411,79 +6406,14 @@
.main_output_mask = BIT(23),
};
-static void __init reg_init(void)
+
+static void __init init_mm_cc(void)
{
- void __iomem *imem_reg;
-
- /* Deassert MM SW_RESET_ALL signal. */
- writel_relaxed(0, SW_RESET_ALL_REG);
-
/*
- * Some bits are only used on 8960 or 8064 or 8930 and are marked as
- * reserved bits on the other SoCs. Writing to these reserved bits
- * should have no effect.
- */
- /*
- * Initialize MM AHB registers: Enable the FPB clock and disable HW
- * gating on 8627 and 8930ab for all clocks. Also set VFE_AHB's
- * FORCE_CORE_ON bit to prevent its memory from being collapsed when
- * the clock is halted. The sleep and wake-up delays are set to safe
- * values.
- */
- if (cpu_is_msm8627() || cpu_is_msm8930ab()) {
- rmwreg(0x00000003, AHB_EN_REG, 0x6C000103);
- writel_relaxed(0x000007F9, AHB_EN2_REG);
- } else {
- rmwreg(0x40000000, AHB_EN_REG, 0x6C000103);
- writel_relaxed(0x3C7097F9, AHB_EN2_REG);
- }
-
- if (soc_class_is_apq8064())
- rmwreg(0x00000000, AHB_EN3_REG, 0x00000001);
-
- /* Deassert all locally-owned MM AHB resets. */
- rmwreg(0, SW_RESET_AHB_REG, 0xFFF7DFFF);
- rmwreg(0, SW_RESET_AHB2_REG, 0x0000000F);
-
- /* Initialize MM AXI registers: Enable HW gating for all clocks that
- * support it. Also set FORCE_CORE_ON bits, and any sleep and wake-up
- * delays to safe values. */
- if ((cpu_is_msm8960() &&
- SOCINFO_VERSION_MAJOR(socinfo_get_version()) < 3) ||
- cpu_is_msm8627() || cpu_is_msm8930ab()) {
- rmwreg(0x000007F9, MAXI_EN_REG, 0x0803FFFF);
- rmwreg(0x3027FCFF, MAXI_EN2_REG, 0x3A3FFFFF);
- } else {
- rmwreg(0x0003AFF9, MAXI_EN_REG, 0x0803FFFF);
- rmwreg(0x3A27FCFF, MAXI_EN2_REG, 0x3A3FFFFF);
- }
-
- rmwreg(0x0027FCFF, MAXI_EN3_REG, 0x003FFFFF);
- rmwreg(0x0027FCFF, MAXI_EN4_REG, 0x017FFFFF);
-
- if (soc_class_is_apq8064())
- rmwreg(0x019FECFF, MAXI_EN5_REG, 0x01FFEFFF);
- if (cpu_is_msm8930() || cpu_is_msm8930aa() || cpu_is_msm8627() ||
- cpu_is_msm8930ab())
- rmwreg(0x000004FF, MAXI_EN5_REG, 0x00000FFF);
- if (cpu_is_msm8960ab())
- rmwreg(0x009FE000, MAXI_EN5_REG, 0x01FFE000);
-
- if (cpu_is_msm8627() || cpu_is_msm8930ab())
- rmwreg(0x000003C7, SAXI_EN_REG, 0x00003FFF);
- else
- rmwreg(0x00003C38, SAXI_EN_REG, 0x00003FFF);
-
- /* Enable IMEM's clk_on signal */
- imem_reg = ioremap(0x04b00040, 4);
- if (imem_reg) {
- writel_relaxed(0x3, imem_reg);
- iounmap(imem_reg);
- }
-
- /* Initialize MM CC registers: Set MM FORCE_CORE_ON bits so that core
+ * Initialize MM CC registers: Set MM FORCE_CORE_ON bits so that core
* memories retain state even when not clocked. Also, set sleep and
- * wake-up delays to safe values. */
+ * wake-up delays to safe values.
+ */
rmwreg(0x00000000, CSI0_CC_REG, 0x00000410);
rmwreg(0x00000000, CSI1_CC_REG, 0x00000410);
rmwreg(0x80FF0000, DSI1_BYTE_CC_REG, 0xE0FF0010);
@@ -6498,28 +6428,72 @@
rmwreg(0x80FF0000, VFE_CC_REG, 0xE0FF4010);
rmwreg(0x800000FF, VFE_CC2_REG, 0xE00000FF);
rmwreg(0x80FF0000, VPE_CC_REG, 0xE0FF0010);
- if (cpu_is_msm8960ab() || cpu_is_msm8960() || soc_class_is_apq8064()) {
- rmwreg(0x80FF0000, DSI2_BYTE_CC_REG, 0xE0FF0010);
- rmwreg(0x80FF0000, DSI2_PIXEL_CC_REG, 0xE0FF0010);
- rmwreg(0x80FF0000, JPEGD_CC_REG, 0xE0FF0010);
- }
- if (cpu_is_msm8960ab())
- rmwreg(0x00000001, DSI2_PIXEL_CC2_REG, 0x00000001);
+}
- if (cpu_is_msm8960() || cpu_is_msm8930() || cpu_is_msm8930aa() ||
- cpu_is_msm8627() || cpu_is_msm8930ab())
- rmwreg(0x80FF0000, TV_CC_REG, 0xE1FFC010);
- if (cpu_is_msm8960ab())
- rmwreg(0x00000000, TV_CC_REG, 0x00004010);
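+/*
+ * Briefly map the IMEM clock control register at @phys and assert its
+ * clk_on bits.
+ */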
+static void __init enable_imem_clk(unsigned long phys)
+{
+ void __iomem *imem_reg;
- if (cpu_is_msm8960()) {
- rmwreg(0x80FF0000, GFX2D0_CC_REG, 0xE0FF0010);
- rmwreg(0x80FF0000, GFX2D1_CC_REG, 0xE0FF0010);
+ /* Enable IMEM's clk_on signal */
+ imem_reg = ioremap(phys, SZ_4);
+ if (imem_reg) {
+ writel_relaxed(0x3, imem_reg);
+ iounmap(imem_reg);
}
- if (soc_class_is_apq8064()) {
- rmwreg(0x00000000, TV_CC_REG, 0x00004010);
- rmwreg(0x80FF0000, VCAP_CC_REG, 0xE0FF1010);
+}
+
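+/*
+ * Per-SoC multimedia register initialization; each variant is called
+ * from the matching *_clock_pre_init() below.
+ */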
+static void __init reg_init_8930(void)
+{
+ /* MM-AHB default values */
+ u32 en_reg = 0x40000000, en2_reg = 0x3C7097F9;
+ /* MM-AXI default values */
+ u32 aen_reg = 0x0003AFF9, aen2_reg = 0x3A27FCFF,
+ saen_reg = 0x00003C38;
+
+ /* Deassert MM SW_RESET_ALL signal. */
+ writel_relaxed(0, SW_RESET_ALL_REG);
+
+ /*
+ * Initialize MM AHB registers: Enable the FPB clock and disable HW
+ * gating on 8627 and 8930ab for all clocks. Also set VFE_AHB's
+ * FORCE_CORE_ON bit to prevent its memory from being collapsed when
+ * the clock is halted. The sleep and wake-up delays are set to safe
+ * values.
+ */
+ if (cpu_is_msm8627() || cpu_is_msm8930ab()) {
+ en_reg = 0x00000003;
+ en2_reg = 0x000007F9;
}
+ rmwreg(en_reg, AHB_EN_REG, 0x6C000103);
+ writel_relaxed(en2_reg, AHB_EN2_REG);
+
+ /* Deassert all locally-owned MM AHB resets. */
+ rmwreg(0, SW_RESET_AHB_REG, 0xFFF7DFFF);
+ rmwreg(0, SW_RESET_AHB2_REG, 0x0000000F);
+
+ /*
+ * Initialize MM AXI registers: Enable HW gating for all clocks that
+ * support it. Also set FORCE_CORE_ON bits, and any sleep and wake-up
+ * delays to safe values.
+ */
+ if (cpu_is_msm8627() || cpu_is_msm8930ab()) {
+ aen_reg = 0x000007F9;
+ aen2_reg = 0x3027FCFF;
+ }
+ rmwreg(aen_reg, MAXI_EN_REG, 0x0803FFFF);
+ rmwreg(aen2_reg, MAXI_EN2_REG, 0x3A3FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN3_REG, 0x003FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN4_REG, 0x017FFFFF);
+ rmwreg(0x000004FF, MAXI_EN5_REG, 0x00000FFF);
+ if (cpu_is_msm8627() || cpu_is_msm8930ab())
+ saen_reg = 0x000003C7;
+ rmwreg(saen_reg, SAXI_EN_REG, 0x00003FFF);
+
+ enable_imem_clk(0x04b00040);
+
+ init_mm_cc();
+ rmwreg(0x80FF0000, TV_CC_REG, 0xE1FFC010);
/*
* Initialize USB_HS_HCLK_FS registers: Set FORCE_C_ON bits so that
@@ -6527,10 +6501,6 @@
* and wake-up value to max.
*/
rmwreg(0x0000004F, USB_HS1_HCLK_FS_REG, 0x0000007F);
- if (soc_class_is_apq8064()) {
- rmwreg(0x0000004F, USB_HS3_HCLK_FS_REG, 0x0000007F);
- rmwreg(0x0000004F, USB_HS4_HCLK_FS_REG, 0x0000007F);
- }
/* De-assert MM AXI resets to all hardware blocks. */
writel_relaxed(0, SW_RESET_AXI_REG);
@@ -6543,88 +6513,231 @@
writel_relaxed(BIT(11), TSSC_CLK_CTL_REG);
writel_relaxed(BIT(15), PDM_CLK_NS_REG);
- /* Source SLIMBus xo src from slimbus reference clock */
- if (cpu_is_msm8960ab() || cpu_is_msm8960())
- writel_relaxed(0x3, SLIMBUS_XO_SRC_CLK_CTL_REG);
-
- /* Source the dsi_byte_clks from the DSI PHY PLLs */
+ /* Source the dsi1_byte_clks/dsi1_esc_clk from the DSI PHY PLLs */
rmwreg(0x1, DSI1_BYTE_NS_REG, 0x7);
- if (cpu_is_msm8960ab() || cpu_is_msm8960() || soc_class_is_apq8064())
- rmwreg(0x2, DSI2_BYTE_NS_REG, 0x7);
-
- /* Source the dsi1_esc_clk from the DSI1 PHY PLLs */
rmwreg(0x1, DSI1_ESC_NS_REG, 0x7);
/*
+ * Change PLL15 configuration based on the SoC we're running on.
+ *
+	 * Default pll15 l, m, n values for 8930/8930aa/8627
+ */
+ pll15_config.l = 0x21 | BVAL(31, 7, 0x600);
+ pll15_config.m = 0x1;
+ pll15_config.n = 0x3;
+
+ if (cpu_is_msm8930ab()) {
+ pll15_config.l = 0x25 | BVAL(31, 7, 0x600);
+ pll15_config.m = 0x25;
+ pll15_config.n = 0x3E7;
+ }
+ configure_sr_pll(&pll15_config, &pll15_regs, 0);
+
+ /* Disable AUX and BIST outputs */
+ writel_relaxed(0, MM_PLL3_TEST_CTL_REG);
+}
+
+static void __init reg_init_8064(void)
+{
+ u32 is_pll_enabled;
+
+ /* Deassert MM SW_RESET_ALL signal. */
+ writel_relaxed(0, SW_RESET_ALL_REG);
+
+ /*
+	 * Initialize MM AHB registers. Set VFE_AHB's FORCE_CORE_ON bit to
+	 * prevent its memory from being collapsed when the clock is halted.
+	 * The sleep and wake-up delays are set to safe values.
+ */
+ rmwreg(0x40000000, AHB_EN_REG, 0x6C000103);
+ writel_relaxed(0x3C7097F9, AHB_EN2_REG);
+ rmwreg(0x00000000, AHB_EN3_REG, 0x00000001);
+
+ /* Deassert all locally-owned MM AHB resets. */
+ rmwreg(0, SW_RESET_AHB_REG, 0xFFF7DFFF);
+ rmwreg(0, SW_RESET_AHB2_REG, 0x0000000F);
+
+ /*
+ * Initialize MM AXI registers: Enable HW gating for all clocks that
+ * support it. Also set FORCE_CORE_ON bits, and any sleep and wake-up
+ * delays to safe values.
+ */
+ rmwreg(0x0003AFF9, MAXI_EN_REG, 0x0803FFFF);
+ rmwreg(0x3A27FCFF, MAXI_EN2_REG, 0x3A3FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN3_REG, 0x003FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN4_REG, 0x017FFFFF);
+ rmwreg(0x019FECFF, MAXI_EN5_REG, 0x01FFEFFF);
+ rmwreg(0x00003C38, SAXI_EN_REG, 0x00003FFF);
+
+ enable_imem_clk(0x04b00040);
+
+ init_mm_cc();
+ rmwreg(0x80FF0000, DSI2_BYTE_CC_REG, 0xE0FF0010);
+ rmwreg(0x80FF0000, DSI2_PIXEL_CC_REG, 0xE0FF0010);
+ rmwreg(0x80FF0000, JPEGD_CC_REG, 0xE0FF0010);
+ rmwreg(0x00000000, TV_CC_REG, 0x00004010);
+ rmwreg(0x80FF0000, VCAP_CC_REG, 0xE0FF1010);
+
+ /*
+ * Initialize USB_HS_HCLK_FS registers: Set FORCE_C_ON bits so that
+	 * the core remains active during the halt state of the clk. Also, set sleep
+ * and wake-up value to max.
+ */
+ rmwreg(0x0000004F, USB_HS1_HCLK_FS_REG, 0x0000007F);
+ rmwreg(0x0000004F, USB_HS3_HCLK_FS_REG, 0x0000007F);
+ rmwreg(0x0000004F, USB_HS4_HCLK_FS_REG, 0x0000007F);
+
+ /* De-assert MM AXI resets to all hardware blocks. */
+ writel_relaxed(0, SW_RESET_AXI_REG);
+
+ /* Deassert all MM core resets. */
+ writel_relaxed(0, SW_RESET_CORE_REG);
+ writel_relaxed(0, SW_RESET_CORE2_REG);
+
+ /* Enable TSSC and PDM PXO sources. */
+ writel_relaxed(BIT(11), TSSC_CLK_CTL_REG);
+ writel_relaxed(BIT(15), PDM_CLK_NS_REG);
+
+ /* Source the dsi1_byte_clks/dsi1_esc_clk from the DSI PHY PLLs */
+ rmwreg(0x1, DSI1_BYTE_NS_REG, 0x7);
+ rmwreg(0x1, DSI1_ESC_NS_REG, 0x7);
+
+ /* Source the dsi2_byte_clks from the DSI PHY PLLs */
+ rmwreg(0x2, DSI2_BYTE_NS_REG, 0x7);
+
+ /*
* Source the sata_phy_ref_clk from PXO and set predivider of
* sata_pmalive_clk to 1.
*/
- if (soc_class_is_apq8064()) {
- rmwreg(0, SATA_PHY_REF_CLK_CTL_REG, 0x1);
- rmwreg(0, SATA_PMALIVE_CLK_CTL_REG, 0x3);
- }
+ rmwreg(0, SATA_PHY_REF_CLK_CTL_REG, 0x1);
+ rmwreg(0, SATA_PMALIVE_CLK_CTL_REG, 0x3);
/*
* TODO: Programming below PLLs and prng_clk is temporary and
* needs to be removed after bootloaders program them.
*/
- if (soc_class_is_apq8064()) {
- u32 is_pll_enabled;
- /* Program pxo_src_clk to source from PXO */
- rmwreg(0x1, PXO_SRC_CLK_CTL_REG, 0x7);
+ /* Program pxo_src_clk to source from PXO */
+ rmwreg(0x1, PXO_SRC_CLK_CTL_REG, 0x7);
- /* Check if PLL14 is active */
- is_pll_enabled = readl_relaxed(BB_PLL14_STATUS_REG) & BIT(16);
- if (!is_pll_enabled)
- /* Ref clk = 27MHz and program pll14 to 480MHz */
- configure_sr_pll(&pll14_config, &pll14_regs, 1);
+ /* Check if PLL14 is active */
+ is_pll_enabled = readl_relaxed(BB_PLL14_STATUS_REG) & BIT(16);
+ if (!is_pll_enabled)
+ /* Ref clk = 27MHz and program pll14 to 480MHz */
+ configure_sr_pll(&pll14_config, &pll14_regs, 1);
- /* Check if PLL4 is active */
- is_pll_enabled = readl_relaxed(LCC_PLL0_STATUS_REG) & BIT(16);
- if (!is_pll_enabled)
- /* Ref clk = 27MHz and program pll4 to 393.2160MHz */
- configure_sr_pll(&pll4_config_393, &pll4_regs, 1);
+ /* Check if PLL4 is active */
+ is_pll_enabled = readl_relaxed(LCC_PLL0_STATUS_REG) & BIT(16);
+ if (!is_pll_enabled)
+ /* Ref clk = 27MHz and program pll4 to 393.2160MHz */
+ configure_sr_pll(&pll4_config_393, &pll4_regs, 1);
- /* Enable PLL4 source on the LPASS Primary PLL Mux */
- writel_relaxed(0x1, LCC_PRI_PLL_CLK_CTL_REG);
+ /* Enable PLL4 source on the LPASS Primary PLL Mux */
+ writel_relaxed(0x1, LCC_PRI_PLL_CLK_CTL_REG);
- /* Program prng_clk to 64MHz if it isn't configured */
- if (!readl_relaxed(PRNG_CLK_NS_REG))
- writel_relaxed(0x2B, PRNG_CLK_NS_REG);
- }
+ /* Program prng_clk to 64MHz if it isn't configured */
+ if (!readl_relaxed(PRNG_CLK_NS_REG))
+ writel_relaxed(0x2B, PRNG_CLK_NS_REG);
- if (cpu_is_apq8064() || cpu_is_apq8064aa()) {
- /* Program PLL15 to 975MHz with ref clk = 27MHz */
- configure_sr_pll(&pll15_config, &pll15_regs, 0);
- } else if (cpu_is_apq8064ab()) {
+ if (cpu_is_apq8064ab()) {
/* Program PLL15 to 900MHZ */
pll15_config.l = 0x21 | BVAL(31, 7, 0x620);
pll15_config.m = 0x1;
pll15_config.n = 0x3;
- configure_sr_pll(&pll15_config, &pll15_regs, 0);
- } else if (cpu_is_msm8960ab()) {
- pll3_clk.c.rate = 880000000;
- configure_sr_pll(&pll3_config, &pll3_regs, 0);
+ }
+ /*
+	 * By default, program PLL15 to 975MHz with ref clk = 27MHz.
+	 * In the case of apq8064ab, PLL15 is set to 900MHz.
+ */
+ configure_sr_pll(&pll15_config, &pll15_regs, 0);
+}
+
+static void __init reg_init_8960(void)
+{
+ u32 aen_reg = 0x0003AFF9, aen2_reg = 0x3A27FCFF;
+
+ /* Deassert MM SW_RESET_ALL signal. */
+ writel_relaxed(0, SW_RESET_ALL_REG);
+
+ /*
+	 * Initialize MM AHB registers. Set VFE_AHB's FORCE_CORE_ON bit to
+	 * prevent its memory from being collapsed when the clock is halted.
+	 * The sleep and wake-up delays are set to safe values.
+ */
+ rmwreg(0x40000000, AHB_EN_REG, 0x6C000103);
+ writel_relaxed(0x3C7097F9, AHB_EN2_REG);
+
+ /* Deassert all locally-owned MM AHB resets. */
+ rmwreg(0, SW_RESET_AHB_REG, 0xFFF7DFFF);
+ rmwreg(0, SW_RESET_AHB2_REG, 0x0000000F);
+
+ /*
+ * Initialize MM AXI registers: Enable HW gating for all clocks that
+ * support it. Also set FORCE_CORE_ON bits, and any sleep and wake-up
+ * delays to safe values.
+ */
+ if (cpu_is_msm8960() &&
+ SOCINFO_VERSION_MAJOR(socinfo_get_version()) < 3) {
+ aen_reg = 0x000007F9;
+ aen2_reg = 0x3027FCFF;
+ }
+ rmwreg(aen_reg, MAXI_EN_REG, 0x0803FFFF);
+ rmwreg(aen2_reg, MAXI_EN2_REG, 0x3A3FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN3_REG, 0x003FFFFF);
+ rmwreg(0x0027FCFF, MAXI_EN4_REG, 0x017FFFFF);
+ if (cpu_is_msm8960ab())
+ rmwreg(0x009FE000, MAXI_EN5_REG, 0x01FFE000);
+ rmwreg(0x00003C38, SAXI_EN_REG, 0x00003FFF);
+
+ enable_imem_clk(0x04b00040);
+
+ init_mm_cc();
+ rmwreg(0x80FF0000, DSI2_BYTE_CC_REG, 0xE0FF0010);
+ rmwreg(0x80FF0000, DSI2_PIXEL_CC_REG, 0xE0FF0010);
+ rmwreg(0x80FF0000, JPEGD_CC_REG, 0xE0FF0010);
+ if (cpu_is_msm8960ab()) {
+ rmwreg(0x00000001, DSI2_PIXEL_CC2_REG, 0x00000001);
+ rmwreg(0x00000000, TV_CC_REG, 0x00004010);
+ }
+ if (cpu_is_msm8960()) {
+ rmwreg(0x80FF0000, TV_CC_REG, 0xE1FFC010);
+ rmwreg(0x80FF0000, GFX2D0_CC_REG, 0xE0FF0010);
+ rmwreg(0x80FF0000, GFX2D1_CC_REG, 0xE0FF0010);
}
/*
- * Change PLL15 configuration based on the SoC we're running on.
+ * Initialize USB_HS_HCLK_FS registers: Set FORCE_C_ON bits so that
+	 * the core remains active during the halt state of the clk. Also, set sleep
+ * and wake-up value to max.
*/
- if (cpu_is_msm8930() || cpu_is_msm8930aa() || cpu_is_msm8627()) {
- pll15_config.l = 0x21 | BVAL(31, 7, 0x600);
- pll15_config.m = 0x1;
- pll15_config.n = 0x3;
- configure_sr_pll(&pll15_config, &pll15_regs, 0);
- /* Disable AUX and BIST outputs */
- writel_relaxed(0, MM_PLL3_TEST_CTL_REG);
- } else if (cpu_is_msm8930ab()) {
- pll15_config.l = 0x25 | BVAL(31, 7, 0x600);
- pll15_config.m = 0x25;
- pll15_config.n = 0x3E7;
- configure_sr_pll(&pll15_config, &pll15_regs, 0);
- /* Disable AUX and BIST outputs */
- writel_relaxed(0, MM_PLL3_TEST_CTL_REG);
+ rmwreg(0x0000004F, USB_HS1_HCLK_FS_REG, 0x0000007F);
+
+ /* De-assert MM AXI resets to all hardware blocks. */
+ writel_relaxed(0, SW_RESET_AXI_REG);
+
+ /* Deassert all MM core resets. */
+ writel_relaxed(0, SW_RESET_CORE_REG);
+ writel_relaxed(0, SW_RESET_CORE2_REG);
+
+ /* Enable TSSC and PDM PXO sources. */
+ writel_relaxed(BIT(11), TSSC_CLK_CTL_REG);
+ writel_relaxed(BIT(15), PDM_CLK_NS_REG);
+
+ /* Source the dsi1_byte_clks/dsi1_esc_clk from the DSI PHY PLLs */
+ rmwreg(0x1, DSI1_BYTE_NS_REG, 0x7);
+ rmwreg(0x1, DSI1_ESC_NS_REG, 0x7);
+
+ /* Source SLIMBus xo src from slimbus reference clock */
+ writel_relaxed(0x3, SLIMBUS_XO_SRC_CLK_CTL_REG);
+
+ /* Source the dsi2_byte_clks from the DSI PHY PLLs */
+ rmwreg(0x2, DSI2_BYTE_NS_REG, 0x7);
+
+ if (cpu_is_msm8960ab()) {
+ pll3_clk.c.rate = 880000000;
+ configure_sr_pll(&pll3_config, &pll3_regs, 0);
}
}
@@ -6650,13 +6763,15 @@
}
struct clock_init_data msm8960_clock_init_data __initdata;
+
static void __init msm8960_clock_pre_init(void)
{
- /* Initialize clock registers. */
- reg_init();
+ struct clk_lookup *clk_lkup;
+ size_t clk_size;
+ struct clk_freq_tbl *tbl;
- if (soc_class_is_apq8064())
- vdd_sr2_hdmi_pll.set_vdd = set_vdd_sr2_hdmi_pll_8064;
+ /* Initialize clock registers. */
+ reg_init_8960();
/* Detect PLL4 programmed for alternate 491.52MHz clock plan. */
if (readl_relaxed(LCC_PLL0_L_VAL_REG) == 0x12) {
@@ -6670,73 +6785,120 @@
pcm_clk.freq_tbl = clk_tbl_pcm_492;
}
- if (cpu_is_msm8960() || cpu_is_msm8960ab())
- memcpy(msm_clocks_8960, msm_clocks_8960_common,
- sizeof(msm_clocks_8960_common));
- if (cpu_is_msm8960ab()) {
- gfx3d_clk.freq_tbl = clk_tbl_gfx3d_8960ab;
- mdp_clk.c.fmax = fmax_mdp_8960ab;
-
- gfx3d_clk.c.fmax = select_gfx_fmax_plan(fmax_gfx3d_8960ab,
- ARRAY_SIZE(fmax_gfx3d_8960ab));
-
- memcpy(msm_clocks_8960 + ARRAY_SIZE(msm_clocks_8960_common),
- msm_clocks_8960ab_only, sizeof(msm_clocks_8960ab_only));
- msm8960_clock_init_data.size -=
- ARRAY_SIZE(msm_clocks_8960_only);
-
- gmem_axi_clk.c.depends = &gfx3d_axi_clk.c;
- } else if (cpu_is_msm8960()) {
- gfx3d_clk.freq_tbl = clk_tbl_gfx3d_8960;
- memcpy(msm_clocks_8960 + ARRAY_SIZE(msm_clocks_8960_common),
- msm_clocks_8960_only, sizeof(msm_clocks_8960_only));
+ if (cpu_is_msm8960()) {
+ tbl = clk_tbl_gfx3d_8960;
+ clk_lkup = msm_clocks_8960_only;
+ clk_size = sizeof(msm_clocks_8960_only);
msm8960_clock_init_data.size -=
ARRAY_SIZE(msm_clocks_8960ab_only);
}
- /*
- * Change the freq tables for and voltage requirements for
- * clocks which differ between chips.
- */
- if (cpu_is_apq8064() || cpu_is_apq8064aa())
- gfx3d_clk.c.fmax = fmax_gfx3d_8064;
+
+ if (cpu_is_msm8960ab()) {
+ mdp_clk.c.fmax = fmax_mdp_8960ab;
+ gmem_axi_clk.c.depends = &gfx3d_axi_clk.c;
+ tbl = clk_tbl_gfx3d_8960ab;
+ gfx3d_clk.c.fmax = select_gfx_fmax_plan(fmax_gfx3d_8960ab,
+ ARRAY_SIZE(fmax_gfx3d_8960ab));
+
+ clk_lkup = msm_clocks_8960ab_only;
+ clk_size = sizeof(msm_clocks_8960ab_only);
+ msm8960_clock_init_data.size -=
+ ARRAY_SIZE(msm_clocks_8960_only);
+ }
+
+ gfx3d_clk.freq_tbl = tbl;
+
+ memcpy(msm_clocks_8960, msm_clocks_8960_common,
+ sizeof(msm_clocks_8960_common));
+ memcpy(msm_clocks_8960 + ARRAY_SIZE(msm_clocks_8960_common),
+ clk_lkup, clk_size);
+
+ if ((readl_relaxed(PRNG_CLK_NS_REG) & 0x7F) == 0x2B)
+ prng_clk.freq_tbl = clk_tbl_prng_64;
+
+ clk_ops_local_pll.enable = sr_pll_clk_enable;
+}
+
+static void __init msm8064_clock_pre_init(void)
+{
+ unsigned long *fmax = fmax_gfx3d_8064;
+
+ /* Initialize clock registers. */
+ reg_init_8064();
+
+ vdd_sr2_hdmi_pll.set_vdd = set_vdd_sr2_hdmi_pll_8064;
+
+ /* Detect PLL4 programmed for alternate 491.52MHz clock plan. */
+ if (readl_relaxed(LCC_PLL0_L_VAL_REG) == 0x12) {
+ pll4_clk.c.rate = 491520000;
+ audio_slimbus_clk.freq_tbl = clk_tbl_aif_osr_492;
+ mi2s_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ codec_i2s_mic_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ spare_i2s_mic_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ codec_i2s_spkr_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ spare_i2s_spkr_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ pcm_clk.freq_tbl = clk_tbl_pcm_492;
+ }
if (cpu_is_apq8064ab())
- gfx3d_clk.c.fmax = fmax_gfx3d_8064ab;
+ fmax = fmax_gfx3d_8064ab;
if ((cpu_is_apq8064() &&
SOCINFO_VERSION_MAJOR(socinfo_get_version()) == 2) ||
cpu_is_apq8064ab() || cpu_is_apq8064aa()) {
-
vcodec_clk.c.fmax = fmax_vcodec_8064v2;
ce3_src_clk.c.fmax = fmax_ce3_8064v2;
sdc1_clk.c.fmax = fmax_sdc1_8064v2;
}
- if (soc_class_is_apq8064()) {
- ijpeg_clk.c.fmax = fmax_ijpeg_8064;
- mdp_clk.c.fmax = fmax_mdp_8064;
- tv_src_clk.c.fmax = fmax_tv_src_8064;
- vfe_clk.c.fmax = fmax_vfe_8064;
- gmem_axi_clk.c.depends = &gfx3d_axi_clk.c;
+
+ gfx3d_clk.c.fmax = fmax;
+ ijpeg_clk.c.fmax = fmax_ijpeg_8064;
+ mdp_clk.c.fmax = fmax_mdp_8064;
+ tv_src_clk.c.fmax = fmax_tv_src_8064;
+ vfe_clk.c.fmax = fmax_vfe_8064;
+
+ gmem_axi_clk.c.depends = &gfx3d_axi_clk.c;
+
+ if ((readl_relaxed(PRNG_CLK_NS_REG) & 0x7F) == 0x2B)
+ prng_clk.freq_tbl = clk_tbl_prng_64;
+
+ clk_ops_local_pll.enable = sr_pll_clk_enable;
+}
+
+static void __init __msm8930_clock_pre_init(void)
+{
+ unsigned long rate = 900000000;
+ unsigned long *fmax = fmax_gfx3d_8930;
+
+ /* Initialize clock registers. */
+ reg_init_8930();
+
+ /* Detect PLL4 programmed for alternate 491.52MHz clock plan. */
+ if (readl_relaxed(LCC_PLL0_L_VAL_REG) == 0x12) {
+ pll4_clk.c.rate = 491520000;
+ audio_slimbus_clk.freq_tbl = clk_tbl_aif_osr_492;
+ mi2s_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ codec_i2s_mic_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ spare_i2s_mic_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ codec_i2s_spkr_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ spare_i2s_spkr_osr_clk.freq_tbl = clk_tbl_aif_osr_492;
+ pcm_clk.freq_tbl = clk_tbl_pcm_492;
}
- /*
- * Change the freq tables and voltage requirements for
- * clocks which differ between 8960 and 8930.
- */
- if (cpu_is_msm8930() || cpu_is_msm8627())
- gfx3d_clk.c.fmax = fmax_gfx3d_8930;
- else if (cpu_is_msm8930aa())
- gfx3d_clk.c.fmax = fmax_gfx3d_8930aa;
- if (cpu_is_msm8930() || cpu_is_msm8930aa() || cpu_is_msm8627()) {
- pll15_clk.c.rate = 900000000;
- gmem_axi_clk.c.depends = &gfx3d_axi_clk_8930.c;
- } else if (cpu_is_msm8930ab()) {
+ if (cpu_is_msm8930aa())
+ fmax = fmax_gfx3d_8930aa;
+
+ if (cpu_is_msm8930ab()) {
+ rate = 1000000000;
+ fmax = fmax_gfx3d_8930ab;
gfx3d_clk.freq_tbl = clk_tbl_gfx3d_8930ab;
- pll15_clk.c.rate = 1000000000;
- gfx3d_clk.c.fmax = fmax_gfx3d_8930ab;
- gmem_axi_clk.c.depends = &gfx3d_axi_clk_8930.c;
vcodec_clk.c.fmax = fmax_vcodec_8930ab;
}
+
+ pll15_clk.c.rate = rate;
+ gfx3d_clk.c.fmax = fmax;
+ gmem_axi_clk.c.depends = &gfx3d_axi_clk_8930.c;
+
if ((readl_relaxed(PRNG_CLK_NS_REG) & 0x7F) == 0x2B)
prng_clk.freq_tbl = clk_tbl_prng_64;
@@ -6751,7 +6913,7 @@
rpm_vreg_dig_8930 = RPM_VREG_ID_PM8917_VDD_DIG_CORNER;
vdd_sr2_hdmi_pll.set_vdd = set_vdd_sr2_hdmi_pll_8930_pm8917;
- msm8960_clock_pre_init();
+ __msm8930_clock_pre_init();
}
static void __init msm8930_clock_pre_init(void)
@@ -6759,10 +6921,10 @@
vdd_dig.set_vdd = set_vdd_dig_8930;
vdd_sr2_hdmi_pll.set_vdd = set_vdd_sr2_hdmi_pll_8930;
- msm8960_clock_pre_init();
+ __msm8930_clock_pre_init();
}
-static void __init msm8960_clock_post_init(void)
+static void __init common_clock_post_init(void)
{
/* Keep PXO on whenever APPS cpu is active */
clk_prepare_enable(&pxo_a_clk.c);
@@ -6782,18 +6944,12 @@
clk_set_rate(&tsif_ref_clk.c, 105000);
clk_set_rate(&tssc_clk.c, 27000000);
clk_set_rate(&usb_hs1_xcvr_clk.c, 60000000);
- if (soc_class_is_apq8064()) {
- clk_set_rate(&usb_hs3_xcvr_clk.c, 60000000);
- clk_set_rate(&usb_hs4_xcvr_clk.c, 60000000);
- }
clk_set_rate(&usb_fs1_src_clk.c, 60000000);
- if (cpu_is_msm8960ab() || cpu_is_msm8960() || cpu_is_msm8930() ||
- cpu_is_msm8930aa() || cpu_is_msm8627() || cpu_is_msm8930ab())
- clk_set_rate(&usb_fs2_src_clk.c, 60000000);
clk_set_rate(&usb_hsic_xcvr_fs_clk.c, 60000000);
clk_set_rate(&usb_hsic_hsic_src_clk.c, 480000000);
clk_set_rate(&usb_hsic_hsio_cal_clk.c, 9000000);
clk_set_rate(&usb_hsic_system_clk.c, 60000000);
+
/*
* Set the CSI rates to a safe default to avoid warnings when
* switching csi pix and rdi clocks.
@@ -6822,6 +6978,19 @@
clk_prepare_enable(&sfab_tmr_a_clk.c);
}
+static void __init msm8960_clock_post_init(void)
+{
+ common_clock_post_init();
+ clk_set_rate(&usb_fs2_src_clk.c, 60000000);
+}
+
+static void __init msm8064_clock_post_init(void)
+{
+ common_clock_post_init();
+ clk_set_rate(&usb_hs3_xcvr_clk.c, 60000000);
+ clk_set_rate(&usb_hs4_xcvr_clk.c, 60000000);
+}
+
static int __init msm8960_clock_late_init(void)
{
int rc;
@@ -6864,8 +7033,8 @@
struct clock_init_data apq8064_clock_init_data __initdata = {
.table = msm_clocks_8064,
.size = ARRAY_SIZE(msm_clocks_8064),
- .pre_init = msm8960_clock_pre_init,
- .post_init = msm8960_clock_post_init,
+ .pre_init = msm8064_clock_pre_init,
+ .post_init = msm8064_clock_post_init,
.late_init = msm8960_clock_late_init,
};
diff --git a/arch/arm/mach-msm/clock-8974.c b/arch/arm/mach-msm/clock-8974.c
index c42da92..f5af727 100644
--- a/arch/arm/mach-msm/clock-8974.c
+++ b/arch/arm/mach-msm/clock-8974.c
@@ -638,7 +638,7 @@
VDD_DIG_NUM
};
-static const int *vdd_corner[] = {
+static int *vdd_corner[] = {
[VDD_DIG_NONE] = VDD_UV(RPM_REGULATOR_CORNER_NONE),
[VDD_DIG_LOW] = VDD_UV(RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_DIG_NOMINAL] = VDD_UV(RPM_REGULATOR_CORNER_NORMAL),
@@ -920,6 +920,7 @@
};
static struct clk_freq_tbl ftbl_gcc_blsp1_2_qup1_6_i2c_apps_clk[] = {
+ F(19200000, cxo, 1, 0, 0),
F(50000000, gpll0, 12, 0, 0),
F_END
};
@@ -4757,7 +4758,8 @@
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991f000.serial"),
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9924000.i2c"),
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991e000.serial"),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, ""),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, "f9923000.i2c"),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9923000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_i2c_apps_clk.c, "f9924000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup1_spi_apps_clk.c, "f9923000.spi"),
@@ -4823,6 +4825,11 @@
CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "qseecom"),
CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "qseecom"),
+ CLK_LOOKUP("ce_drv_core_clk", gcc_ce2_clk.c, "qseecom"),
+ CLK_LOOKUP("ce_drv_iface_clk", gcc_ce2_ahb_clk.c, "qseecom"),
+ CLK_LOOKUP("ce_drv_bus_clk", gcc_ce2_axi_clk.c, "qseecom"),
+ CLK_LOOKUP("ce_drv_core_clk_src", ce2_clk_src.c, "qseecom"),
+
CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "scm"),
CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "scm"),
CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "scm"),
@@ -4888,6 +4895,7 @@
CLK_LOOKUP("alt_iface_clk", mdss_hdmi_ahb_clk.c,
"fd922100.qcom,hdmi_tx"),
CLK_LOOKUP("core_clk", mdss_hdmi_clk.c, "fd922100.qcom,hdmi_tx"),
+ CLK_LOOKUP("mdp_core_clk", mdss_mdp_clk.c, "fd922100.qcom,hdmi_tx"),
CLK_LOOKUP("extp_clk", mdss_extpclk_clk.c, "fd922100.qcom,hdmi_tx"),
CLK_LOOKUP("core_clk", mdss_mdp_clk.c, "mdp.0"),
CLK_LOOKUP("lut_clk", mdss_mdp_lut_clk.c, "mdp.0"),
@@ -4941,77 +4949,78 @@
"fda0b400.qcom,csiphy"),
CLK_LOOKUP("csiphy_timer_clk", camss_phy2_csi2phytimer_clk.c,
"fda0b400.qcom,csiphy"),
+
/* CSID clocks */
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08000.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08000.qcom,csid"),
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi0_ahb_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi0_clk_src.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi0phy_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi0_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi0pix_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi0rdi_clk.c,
+ "fda08000.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08400.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_ahb_clk", camss_csi1_ahb_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_src_clk", csi1_clk_src.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_phy_clk", camss_csi1phy_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_clk", camss_csi1_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_pix_clk", camss_csi1pix_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_rdi_clk", camss_csi1rdi_clk.c, "fda08400.qcom,csid"),
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi1_ahb_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi1_clk_src.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi1phy_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi1_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi1pix_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi1rdi_clk.c,
+ "fda08400.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08800.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_ahb_clk", camss_csi2_ahb_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_src_clk", csi2_clk_src.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_phy_clk", camss_csi2phy_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_clk", camss_csi2_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_pix_clk", camss_csi2pix_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_rdi_clk", camss_csi2rdi_clk.c, "fda08800.qcom,csid"),
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi2_ahb_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi2_clk_src.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi2phy_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi2_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi2pix_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi2rdi_clk.c,
+ "fda08800.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08c00.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_ahb_clk", camss_csi3_ahb_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_src_clk", csi3_clk_src.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_phy_clk", camss_csi3phy_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_clk", camss_csi3_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_pix_clk", camss_csi3pix_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_rdi_clk", camss_csi3rdi_clk.c, "fda08c00.qcom,csid"),
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi3_ahb_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi3_clk_src.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi3phy_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi3_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi3pix_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi3rdi_clk.c,
+ "fda08c00.qcom,csid"),
/* ISPIF clocks */
- CLK_LOOKUP("camss_vfe_vfe_clk", camss_vfe_vfe0_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_csi_vfe_clk", camss_csi_vfe0_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_vfe_vfe_clk1", camss_vfe_vfe1_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_csi_vfe_clk1", camss_csi_vfe1_clk.c,
+ CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
"fda0a000.qcom,ispif"),
/*VFE clocks*/
@@ -5114,6 +5123,17 @@
CLK_LOOKUP("bus_clk", venus0_axi_clk.c, "fdc00000.qcom,vidc"),
CLK_LOOKUP("mem_clk", venus0_ocmemnoc_clk.c, "fdc00000.qcom,vidc"),
+ CLK_LOOKUP("core_clk", venus0_vcodec0_clk.c, "fd8c1024.qcom,gdsc"),
+ CLK_LOOKUP("core_clk", mdss_mdp_clk.c, "fd8c2304.qcom,gdsc"),
+ CLK_LOOKUP("lut_clk", mdss_mdp_lut_clk.c, "fd8c2304.qcom,gdsc"),
+ CLK_LOOKUP("core0_clk", camss_jpeg_jpeg0_clk.c, "fd8c35a4.qcom,gdsc"),
+ CLK_LOOKUP("core1_clk", camss_jpeg_jpeg1_clk.c, "fd8c35a4.qcom,gdsc"),
+ CLK_LOOKUP("core2_clk", camss_jpeg_jpeg2_clk.c, "fd8c35a4.qcom,gdsc"),
+ CLK_LOOKUP("core0_clk", camss_vfe_vfe0_clk.c, "fd8c36a4.qcom,gdsc"),
+ CLK_LOOKUP("core1_clk", camss_vfe_vfe1_clk.c, "fd8c36a4.qcom,gdsc"),
+ CLK_LOOKUP("csi0_clk", camss_csi_vfe0_clk.c, "fd8c36a4.qcom,gdsc"),
+ CLK_LOOKUP("csi1_clk", camss_csi_vfe1_clk.c, "fd8c36a4.qcom,gdsc"),
+ CLK_LOOKUP("cpp_clk", camss_vfe_cpp_clk.c, "fd8c36a4.qcom,gdsc"),
CLK_LOOKUP("core_clk", oxili_gfx3d_clk.c, "fd8c4024.qcom,gdsc"),
/* LPASS clocks */
diff --git a/arch/arm/mach-msm/clock-9625.c b/arch/arm/mach-msm/clock-9625.c
index 0d97401..65176b4 100644
--- a/arch/arm/mach-msm/clock-9625.c
+++ b/arch/arm/mach-msm/clock-9625.c
@@ -280,7 +280,7 @@
VDD_DIG_NUM
};
-static const int *vdd_corner[] = {
+static int *vdd_corner[] = {
[VDD_DIG_NONE] = VDD_UV(RPM_REGULATOR_CORNER_NONE),
[VDD_DIG_LOW] = VDD_UV(RPM_REGULATOR_CORNER_SVS_SOC),
[VDD_DIG_NOMINAL] = VDD_UV(RPM_REGULATOR_CORNER_NORMAL),
@@ -1620,11 +1620,6 @@
{&gcc_sdcc2_ahb_clk.c, GCC_BASE, 0x0071},
{&gcc_ce1_clk.c, GCC_BASE, 0x0138},
{&gcc_sys_noc_ipa_axi_clk.c, GCC_BASE, 0x0007},
- {&gcc_ipa_clk.c, GCC_BASE, 0x01E0},
- {&gcc_ipa_cnoc_clk.c, GCC_BASE, 0x01E1},
- {&gcc_ipa_sleep_clk.c, GCC_BASE, 0x01E2},
- {&gcc_qpic_clk.c, GCC_BASE, 0x01D8},
- {&gcc_qpic_ahb_clk.c, GCC_BASE, 0x01D9},
{&a5_m_clk, APCS_BASE, 0x3},
@@ -2018,12 +2013,6 @@
*/
clk_prepare_enable(&cxo_a_clk_src.c);
- /*
- * TODO: This call is to prevent sending 0Hz to rpm to turn off pnoc.
- * Needs to remove this after vote of pnoc from sdcc driver is ready.
- */
- clk_prepare_enable(&pnoc_msmbus_a_clk.c);
-
/* Set rates for single-rate clocks. */
clk_set_rate(&usb_hs_system_clk_src.c,
usb_hs_system_clk_src.freq_tbl[0].freq_hz);
diff --git a/arch/arm/mach-msm/clock-debug.c b/arch/arm/mach-msm/clock-debug.c
index d540f2b..fe43b72 100644
--- a/arch/arm/mach-msm/clock-debug.c
+++ b/arch/arm/mach-msm/clock-debug.c
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2007 Google, Inc.
- * Copyright (c) 2007-2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2007-2013, The Linux Foundation. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -21,10 +21,23 @@
#include <linux/clk.h>
#include <linux/list.h>
#include <linux/clkdev.h>
+#include <linux/uaccess.h>
#include <mach/clk-provider.h>
#include "clock.h"
+static LIST_HEAD(clk_list);
+static DEFINE_SPINLOCK(clk_list_lock);
+
+static struct dentry *debugfs_base;
+static u32 debug_suspend;
+
+struct clk_table {
+ struct list_head node;
+ struct clk_lookup *clocks;
+ size_t num_clocks;
+};
+
static int clock_debug_rate_set(void *data, u64 val)
{
struct clk *clock = data;
@@ -218,35 +231,68 @@
.release = seq_release,
};
-static int clock_parent_show(struct seq_file *m, void *unused)
+static ssize_t clock_parent_read(struct file *filp, char __user *ubuf,
+ size_t cnt, loff_t *ppos)
{
- struct clk *clock = m->private;
- struct clk *parent = clk_get_parent(clock);
+ struct clk *clock = filp->private_data;
+ struct clk *p = clock->parent;
+ char name[256] = {0};
- seq_printf(m, "%s\n", (parent ? parent->dbg_name : "None"));
+ snprintf(name, sizeof(name), "%s\n", p ? p->dbg_name : "None");
- return 0;
+ return simple_read_from_buffer(ubuf, cnt, ppos, name, strlen(name));
}
-static int clock_parent_open(struct inode *inode, struct file *file)
+
+static ssize_t clock_parent_write(struct file *filp,
+ const char __user *ubuf, size_t cnt, loff_t *ppos)
{
- return single_open(file, clock_parent_show, inode->i_private);
+ struct clk *clock = filp->private_data;
+ char buf[256];
+ char *cmp;
+ unsigned long flags;
+ struct clk_table *table;
+ int i, ret;
+ struct clk *parent = NULL;
+
+ cnt = min(cnt, sizeof(buf) - 1);
+ if (copy_from_user(&buf, ubuf, cnt))
+ return -EFAULT;
+ buf[cnt] = '\0';
+ cmp = strstrip(buf);
+
+ spin_lock_irqsave(&clk_list_lock, flags);
+ list_for_each_entry(table, &clk_list, node) {
+ for (i = 0; i < table->num_clocks; i++)
+ if (!strcmp(cmp, table->clocks[i].clk->dbg_name)) {
+ parent = table->clocks[i].clk;
+ break;
+ }
+ if (parent)
+ break;
+ }
+
+ if (!parent) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ spin_unlock_irqrestore(&clk_list_lock, flags);
+ ret = clk_set_parent(clock, parent);
+ if (ret)
+ return ret;
+
+ return cnt;
+err:
+ spin_unlock_irqrestore(&clk_list_lock, flags);
+ return ret;
}
+
static const struct file_operations clock_parent_fops = {
- .open = clock_parent_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-static struct dentry *debugfs_base;
-static u32 debug_suspend;
-
-struct clk_table {
- struct list_head node;
- struct clk_lookup *clocks;
- size_t num_clocks;
+ .open = simple_open,
+ .read = clock_parent_read,
+ .write = clock_parent_write,
};
static int clock_debug_add(struct clk *clock)
@@ -305,8 +351,6 @@
debugfs_remove_recursive(clk_dir);
return -ENOMEM;
}
-static LIST_HEAD(clk_list);
-static DEFINE_SPINLOCK(clk_list_lock);
/**
* clock_debug_register() - Add additional clocks to clock debugfs hierarchy
diff --git a/arch/arm/mach-msm/clock-dsi-8610.c b/arch/arm/mach-msm/clock-dsi-8610.c
new file mode 100644
index 0000000..44b332e
--- /dev/null
+++ b/arch/arm/mach-msm/clock-dsi-8610.c
@@ -0,0 +1,408 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/io.h>
+#include <linux/err.h>
+#include <linux/delay.h>
+#include <linux/string.h>
+#include <linux/iopoll.h>
+#include <linux/clk.h>
+
+#include <asm/processor.h>
+#include <mach/msm_iomap.h>
+#include <mach/clk-provider.h>
+
+#include "clock-dsi-8610.h"
+
+#define DSI_PHY_PHYS 0xFDD00000
+#define DSI_PHY_SIZE 0x00100000
+
+#define DSI_CTRL 0x0000
+#define DSI_DSIPHY_PLL_CTRL_0 0x0200
+#define DSI_DSIPHY_PLL_CTRL_1 0x0204
+#define DSI_DSIPHY_PLL_CTRL_2 0x0208
+#define DSI_DSIPHY_PLL_CTRL_3 0x020C
+#define DSI_DSIPHY_PLL_RDY 0x0280
+#define DSI_DSIPHY_PLL_CTRL_8 0x0220
+#define DSI_DSIPHY_PLL_CTRL_9 0x0224
+#define DSI_DSIPHY_PLL_CTRL_10 0x0228
+
+#define DSI_BPP 3
+#define DSI_PLL_RDY_BIT 0x01
+#define DSI_PLL_RDY_LOOP_COUNT 80000
+#define DSI_MAX_DIVIDER 256
+
+static unsigned char *dsi_base;
+static struct clk *dsi_ahb_clk;
+
+int __init dsi_clk_ctrl_init(struct clk *ahb_clk)
+{
+ dsi_base = ioremap(DSI_PHY_PHYS, DSI_PHY_SIZE);
+ if (!dsi_base) {
+ pr_err("unable to remap dsi base\n");
+ return -ENODEV;
+ }
+
+ dsi_ahb_clk = ahb_clk;
+ return 0;
+}
+
+static int dsi_pll_vco_enable(struct clk *c)
+{
+ u32 status;
+ int i = 0, ret = 0;
+
+ ret = clk_enable(dsi_ahb_clk);
+ if (ret) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return ret;
+ }
+
+ writel_relaxed(0x01, dsi_base + DSI_DSIPHY_PLL_CTRL_0);
+
+ do {
+ status = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_RDY);
+ } while (!(status & DSI_PLL_RDY_BIT) && (i++ < DSI_PLL_RDY_LOOP_COUNT));
+
+ if (!(status & DSI_PLL_RDY_BIT)) {
+ pr_err("DSI PLL not ready, polling time out!\n");
+ ret = -ETIMEDOUT;
+ }
+
+ clk_disable(dsi_ahb_clk);
+
+ return ret;
+}
+
+static void dsi_pll_vco_disable(struct clk *c)
+{
+ int ret;
+
+ ret = clk_enable(dsi_ahb_clk);
+ if (ret) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return;
+ }
+
+ writel_relaxed(0x00, dsi_base + DSI_DSIPHY_PLL_CTRL_0);
+ clk_disable(dsi_ahb_clk);
+}
+
+static int dsi_pll_vco_set_rate(struct clk *c, unsigned long rate)
+{
+ int ret;
+ u32 temp, val;
+ unsigned long fb_divider;
+ struct clk *parent = c->parent;
+ struct dsi_pll_vco_clk *vco_clk =
+ container_of(c, struct dsi_pll_vco_clk, c);
+
+ if (!rate)
+ return 0;
+
+ ret = clk_prepare_enable(dsi_ahb_clk);
+ if (ret) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return ret;
+ }
+
+ temp = rate / 10;
+ val = parent->rate / 10;
+ fb_divider = (temp * vco_clk->pref_div_ratio) / val;
+ fb_divider = fb_divider / 2 - 1;
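+ /*
+ * Illustration (hypothetical numbers): with a 19.2 MHz reference clock,
+ * pref_div_ratio = 1 and a requested VCO rate of 768 MHz, the
+ * intermediate fb_divider works out to 40, so the value written to
+ * DSI_DSIPHY_PLL_CTRL_1 below is 40 / 2 - 1 = 19 and
+ * DSI_DSIPHY_PLL_CTRL_3 gets pref_div_ratio - 1 = 0.
+ */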
+
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_1);
+ val = (temp & 0xFFFFFF00) | (fb_divider & 0xFF);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_1);
+
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_2);
+ val = (temp & 0xFFFFFFF8) | ((fb_divider >> 8) & 0x07);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_2);
+
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_3);
+ val = (temp & 0xFFFFFFC0) | (vco_clk->pref_div_ratio - 1);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_3);
+
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return 0;
+}
+
+/* rate is the bit clk rate */
+static long dsi_pll_vco_round_rate(struct clk *c, unsigned long rate)
+{
+ long vco_rate;
+ struct dsi_pll_vco_clk *vco_clk =
+ container_of(c, struct dsi_pll_vco_clk, c);
+
+ vco_rate = rate;
+ if (rate < vco_clk->vco_clk_min)
+ vco_rate = vco_clk->vco_clk_min;
+ else if (rate > vco_clk->vco_clk_max)
+ vco_rate = vco_clk->vco_clk_max;
+
+ return vco_rate;
+}
+
+static unsigned long dsi_pll_vco_get_rate(struct clk *c)
+{
+ u32 fb_divider, ref_divider, vco_rate;
+ u32 temp, status;
+ struct clk *parent = c->parent;
+
+ status = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_RDY);
+ if (status & DSI_PLL_RDY_BIT) {
+ fb_divider = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_1);
+ fb_divider &= 0xFF;
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_2) & 0x07;
+ fb_divider = (temp << 8) | fb_divider;
+ fb_divider += 1;
+
+ ref_divider = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_3);
+ ref_divider &= 0x3F;
+ ref_divider += 1;
+
+ vco_rate = (parent->rate / ref_divider) * fb_divider;
+ } else {
+ vco_rate = 0;
+ }
+ return vco_rate;
+}
+
+static enum handoff dsi_pll_vco_handoff(struct clk *c)
+{
+ u32 status;
+
+ if (clk_prepare_enable(dsi_ahb_clk)) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return HANDOFF_DISABLED_CLK;
+ }
+
+ status = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_0);
+ if (!(status & DSI_PLL_RDY_BIT)) {
+ pr_err("DSI PLL not ready\n");
+ clk_disable_unprepare(dsi_ahb_clk);
+ return HANDOFF_DISABLED_CLK;
+ }
+
+ c->rate = dsi_pll_vco_get_rate(c);
+
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return HANDOFF_ENABLED_CLK;
+}
+
+static int dsi_byteclk_set_rate(struct clk *c, unsigned long rate)
+{
+ int div, ret;
+ long vco_rate;
+ unsigned long bitclk_rate;
+ u32 temp, val;
+ struct clk *parent = clk_get_parent(c);
+
+ if (rate == 0) {
+ ret = clk_set_rate(parent, 0);
+ return ret;
+ }
+
+ bitclk_rate = rate * 8;
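+ /*
+ * Illustration (hypothetical numbers): a 52 MHz byte clock implies a
+ * 416 MHz bit clock; the loop below asks the parent VCO to round
+ * 416 MHz, 832 MHz, and so on, stopping at the first divider whose
+ * product the VCO can produce exactly.
+ */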
+ for (div = 1; div < DSI_MAX_DIVIDER; div++) {
+ vco_rate = clk_round_rate(parent, bitclk_rate * div);
+
+ if (vco_rate == bitclk_rate * div)
+ break;
+
+ if (vco_rate < bitclk_rate * div)
+ return -EINVAL;
+ }
+
+ if (vco_rate != bitclk_rate * div)
+ return -EINVAL;
+
+ ret = clk_set_rate(parent, vco_rate);
+ if (ret) {
+ pr_err("fail to set vco rate\n");
+ return ret;
+ }
+
+ ret = clk_prepare_enable(dsi_ahb_clk);
+ if (ret) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return ret;
+ }
+
+ /* set the bit clk divider */
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_8);
+ val = (temp & 0xFFFFFFF0) | (div - 1);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_8);
+
+ /* set the byte clk divider */
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_9);
+ val = (temp & 0xFFFFFF00) | (vco_rate / rate - 1);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_9);
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return 0;
+}
+
+static long dsi_byteclk_round_rate(struct clk *c, unsigned long rate)
+{
+ int div;
+ long vco_rate;
+ unsigned long bitclk_rate;
+ struct clk *parent = clk_get_parent(c);
+
+ if (rate == 0)
+ return -EINVAL;
+
+ bitclk_rate = rate * 8;
+ for (div = 1; div < DSI_MAX_DIVIDER; div++) {
+ vco_rate = clk_round_rate(parent, bitclk_rate * div);
+ if (vco_rate == bitclk_rate * div)
+ break;
+ if (vco_rate < bitclk_rate * div)
+ return -EINVAL;
+ }
+
+ if (vco_rate != bitclk_rate * div)
+ return -EINVAL;
+
+ return rate;
+}
+
+static enum handoff dsi_byteclk_handoff(struct clk *c)
+{
+ struct clk *parent = clk_get_parent(c);
+ unsigned long vco_rate = clk_get_rate(parent);
+ u32 out_div2;
+
+ if (vco_rate == 0)
+ return HANDOFF_DISABLED_CLK;
+
+ if (clk_prepare_enable(dsi_ahb_clk)) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return HANDOFF_DISABLED_CLK;
+ }
+
+ out_div2 = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_9);
+ out_div2 &= 0xFF;
+ c->rate = vco_rate / (out_div2 + 1);
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return HANDOFF_ENABLED_CLK;
+}
+
+static int dsi_dsiclk_set_rate(struct clk *c, unsigned long rate)
+{
+ u32 temp, val;
+ int ret;
+ struct clk *parent = clk_get_parent(c);
+ unsigned long vco_rate = clk_get_rate(parent);
+
+ if (rate == 0)
+ return 0;
+
+ if (vco_rate % rate != 0) {
+ pr_err("dsiclk_set_rate invalid rate\n");
+ return -EINVAL;
+ }
+
+ ret = clk_prepare_enable(dsi_ahb_clk);
+ if (ret) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return ret;
+ }
+ temp = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_10);
+ val = (temp & 0xFFFFFF00) | (vco_rate / rate - 1);
+ writel_relaxed(val, dsi_base + DSI_DSIPHY_PLL_CTRL_10);
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return 0;
+}
+
+static long dsi_dsiclk_round_rate(struct clk *c, unsigned long rate)
+{
+ /* rate is the pixel clk rate, translate into dsi clk rate */
+ struct clk *parent = clk_get_parent(c);
+ unsigned long vco_rate = clk_get_rate(parent);
+
+ rate *= DSI_BPP;
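+ /*
+ * Illustration (hypothetical numbers): with DSI_BPP = 3, a 78 MHz pixel
+ * clock translates to a 234 MHz dsi clock, which must divide the VCO
+ * rate exactly for the request to be accepted.
+ */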
+
+ if (vco_rate < rate)
+ return -EINVAL;
+
+ if (vco_rate % rate != 0)
+ return -EINVAL;
+
+ return rate;
+}
+
+static enum handoff dsi_dsiclk_handoff(struct clk *c)
+{
+ struct clk *parent = clk_get_parent(c);
+ unsigned long vco_rate = clk_get_rate(parent);
+ u32 out_div3;
+
+ if (vco_rate == 0)
+ return HANDOFF_DISABLED_CLK;
+
+ if (clk_prepare_enable(dsi_ahb_clk)) {
+ pr_err("fail to enable dsi ahb clk\n");
+ return HANDOFF_DISABLED_CLK;
+ }
+
+ out_div3 = readl_relaxed(dsi_base + DSI_DSIPHY_PLL_CTRL_10);
+ out_div3 &= 0xFF;
+ c->rate = vco_rate / (out_div3 + 1);
+ clk_disable_unprepare(dsi_ahb_clk);
+
+ return HANDOFF_ENABLED_CLK;
+}
+
+int dsi_prepare(struct clk *clk)
+{
+ return clk_prepare(dsi_ahb_clk);
+}
+
+void dsi_unprepare(struct clk *clk)
+{
+ clk_unprepare(dsi_ahb_clk);
+}
+
+struct clk_ops clk_ops_dsi_dsiclk = {
+ .prepare = dsi_prepare,
+ .unprepare = dsi_unprepare,
+ .set_rate = dsi_dsiclk_set_rate,
+ .round_rate = dsi_dsiclk_round_rate,
+ .handoff = dsi_dsiclk_handoff,
+};
+
+struct clk_ops clk_ops_dsi_byteclk = {
+ .prepare = dsi_prepare,
+ .unprepare = dsi_unprepare,
+ .set_rate = dsi_byteclk_set_rate,
+ .round_rate = dsi_byteclk_round_rate,
+ .handoff = dsi_byteclk_handoff,
+};
+
+struct clk_ops clk_ops_dsi_vco = {
+ .prepare = dsi_prepare,
+ .unprepare = dsi_unprepare,
+ .enable = dsi_pll_vco_enable,
+ .disable = dsi_pll_vco_disable,
+ .set_rate = dsi_pll_vco_set_rate,
+ .round_rate = dsi_pll_vco_round_rate,
+ .handoff = dsi_pll_vco_handoff,
+};
diff --git a/arch/arm/mach-msm/clock-dsi-8610.h b/arch/arm/mach-msm/clock-dsi-8610.h
new file mode 100644
index 0000000..4c09790
--- /dev/null
+++ b/arch/arm/mach-msm/clock-dsi-8610.h
@@ -0,0 +1,32 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __ARCH_ARM_MACH_MSM_CLOCK_DSI_8610
+#define __ARCH_ARM_MACH_MSM_CLOCK_DSI_8610
+
+#include <mach/clk-provider.h>
+
+struct dsi_pll_vco_clk {
+ const unsigned long vco_clk_min;
+ const unsigned long vco_clk_max;
+ const unsigned pref_div_ratio;
+ int factor;
+ struct clk c;
+};
+
+extern struct clk_ops clk_ops_dsi_vco;
+extern struct clk_ops clk_ops_dsi_byteclk;
+extern struct clk_ops clk_ops_dsi_dsiclk;
+
+int dsi_clk_ctrl_init(struct clk *ahb_clk);
+
+#endif
diff --git a/arch/arm/mach-msm/clock-generic.c b/arch/arm/mach-msm/clock-generic.c
new file mode 100644
index 0000000..4d74533
--- /dev/null
+++ b/arch/arm/mach-msm/clock-generic.c
@@ -0,0 +1,405 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+
+#include <linux/clk.h>
+#include <mach/clk-provider.h>
+#include <mach/clock-generic.h>
+
+/* ==================== Mux clock ==================== */
+
+static int parent_to_src_sel(struct mux_clk *mux, struct clk *p)
+{
+ int i;
+
+ for (i = 0; i < mux->num_parents; i++) {
+ if (mux->parents[i].src == p)
+ return mux->parents[i].sel;
+ }
+
+ return -EINVAL;
+}
+
+static int mux_set_parent(struct clk *c, struct clk *p)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ int sel = parent_to_src_sel(mux, p);
+ struct clk *old_parent;
+ int rc = 0;
+ unsigned long flags;
+
+ if (sel < 0)
+ return sel;
+
+ rc = __clk_pre_reparent(c, p, &flags);
+ if (rc)
+ goto out;
+
+ rc = mux->ops->set_mux_sel(mux, sel);
+ if (rc)
+ goto set_fail;
+
+ old_parent = c->parent;
+ c->parent = p;
+ __clk_post_reparent(c, old_parent, &flags);
+
+ return 0;
+
+set_fail:
+ __clk_post_reparent(c, p, &flags);
+out:
+ return rc;
+}
+
+static long mux_round_rate(struct clk *c, unsigned long rate)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ int i;
+ long prate, max_prate = 0, rrate = LONG_MAX;
+
+ for (i = 0; i < mux->num_parents; i++) {
+ prate = clk_round_rate(mux->parents[i].src, rate);
+ if (prate < rate) {
+ max_prate = max(prate, max_prate);
+ continue;
+ }
+
+ rrate = min(rrate, prate);
+ }
+ if (rrate == LONG_MAX)
+ rrate = max_prate;
+
+ return rrate ? rrate : -EINVAL;
+}
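+
+/*
+ * Illustration (hypothetical parents): if one parent tops out at 300 MHz
+ * and another's nearest rate is 600 MHz, a request for 400 MHz rounds up
+ * to 600 MHz, while a request for 700 MHz falls back to the best rate
+ * available below it (600 MHz).
+ */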
+
+static int mux_set_rate(struct clk *c, unsigned long rate)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ struct clk *new_parent = NULL;
+ int rc = 0, i;
+ unsigned long new_par_curr_rate;
+
+ for (i = 0; i < mux->num_parents; i++) {
+ if (clk_round_rate(mux->parents[i].src, rate) == rate) {
+ new_parent = mux->parents[i].src;
+ break;
+ }
+ }
+ if (new_parent == NULL)
+ return -EINVAL;
+
+ /*
+ * Switch to safe parent since the old and new parent might be the
+ * same and the parent might temporarily turn off while switching
+ * rates.
+ */
+ if (mux->safe_sel >= 0)
+ rc = mux->ops->set_mux_sel(mux, mux->safe_sel);
+ if (rc)
+ return rc;
+
+ new_par_curr_rate = clk_get_rate(new_parent);
+ rc = clk_set_rate(new_parent, rate);
+ if (rc)
+ goto set_rate_fail;
+
+ rc = mux_set_parent(c, new_parent);
+ if (rc)
+ goto set_par_fail;
+
+ return 0;
+
+set_par_fail:
+ clk_set_rate(new_parent, new_par_curr_rate);
+set_rate_fail:
+ WARN(mux->ops->set_mux_sel(mux, parent_to_src_sel(mux, c->parent)),
+ "Set rate failed for %s. Also in bad state!\n", c->dbg_name);
+ return rc;
+}
+
+static int mux_enable(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ if (mux->ops->enable)
+ return mux->ops->enable(mux);
+ return 0;
+}
+
+static void mux_disable(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ if (mux->ops->disable)
+ return mux->ops->disable(mux);
+}
+
+static struct clk *mux_get_parent(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ int sel = mux->ops->get_mux_sel(mux);
+ int i;
+
+ for (i = 0; i < mux->num_parents; i++) {
+ if (mux->parents[i].sel == sel)
+ return mux->parents[i].src;
+ }
+
+ /* Unfamiliar parent. */
+ return NULL;
+}
+
+static enum handoff mux_handoff(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+
+ c->rate = clk_get_rate(c->parent);
+ mux->safe_sel = parent_to_src_sel(mux, mux->safe_parent);
+
+ if (mux->en_mask && mux->ops && mux->ops->is_enabled)
+ return mux->ops->is_enabled(mux)
+ ? HANDOFF_ENABLED_CLK
+ : HANDOFF_DISABLED_CLK;
+
+ /*
+ * If this function returns 'enabled' even when the clock downstream
+ * of this clock is disabled, then handoff code will unnecessarily
+ * enable the current parent of this clock. If this function always
+ * returns 'disabled' and a clock downstream is on, the clock handoff
+ * code will bump up the ref count for this clock and its current
+ * parent as necessary. So, clocks without an actual HW gate can
+ * always return disabled.
+ */
+ return HANDOFF_DISABLED_CLK;
+}
+
+struct clk_ops clk_ops_gen_mux = {
+ .enable = mux_enable,
+ .disable = mux_disable,
+ .set_parent = mux_set_parent,
+ .round_rate = mux_round_rate,
+ .set_rate = mux_set_rate,
+ .handoff = mux_handoff,
+ .get_parent = mux_get_parent,
+};
+
+
+/* ==================== Divider clock ==================== */
+
+static long __div_round_rate(struct clk *c, unsigned long rate, int *best_div)
+{
+ struct div_clk *d = to_div_clk(c);
+ unsigned int div, min_div, max_div;
+ long p_rrate, rrate = LONG_MAX;
+
+ rate = max(rate, 1UL);
+
+ if (!d->ops || !d->ops->set_div)
+ min_div = max_div = d->div;
+ else {
+ min_div = max(d->min_div, 1U);
+ max_div = min(d->max_div, (unsigned int) (LONG_MAX / rate));
+ }
+
+ for (div = min_div; div <= max_div; div++) {
+ p_rrate = clk_round_rate(c->parent, rate * div);
+ if (p_rrate < 0)
+ break;
+
+ p_rrate /= div;
+ /*
+ * Trying higher dividers is only going to ask the parent for
+ * a higher rate. If it can't even output a rate higher than
+ * the one we request for this divider, the parent is not
+ * going to be able to output an even higher rate required
+ * for a higher divider. So, stop trying higher dividers.
+ */
+ if (p_rrate < rate) {
+ if (rrate == LONG_MAX) {
+ rrate = p_rrate;
+ if (best_div)
+ *best_div = div;
+ }
+ break;
+ }
+ if (p_rrate < rrate) {
+ rrate = p_rrate;
+ if (best_div)
+ *best_div = div;
+ }
+
+ if (rrate <= rate + d->rate_margin)
+ break;
+ }
+
+ if (rrate == LONG_MAX)
+ return -EINVAL;
+
+ return rrate;
+}
+
+static long div_round_rate(struct clk *c, unsigned long rate)
+{
+ return __div_round_rate(c, rate, NULL);
+}
+
+static int div_set_rate(struct clk *c, unsigned long rate)
+{
+ struct div_clk *d = to_div_clk(c);
+ int div, rc = 0;
+ long rrate, old_prate;
+
+ rrate = __div_round_rate(c, rate, &div);
+ if (rrate != rate)
+ return -EINVAL;
+
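+ /*
+ * The divider is increased before the parent rate changes and decreased
+ * only afterwards, which keeps the output from overshooting the higher
+ * of the old and new rates while the switch is in progress.
+ */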
+ if (div > d->div)
+ rc = d->ops->set_div(d, div);
+ if (rc)
+ return rc;
+
+ old_prate = clk_get_rate(c->parent);
+ rc = clk_set_rate(c->parent, rate * div);
+ if (rc)
+ goto set_rate_fail;
+
+ if (div < d->div)
+ rc = d->ops->set_div(d, div);
+ if (rc)
+ goto div_dec_fail;
+
+ d->div = div;
+
+ return 0;
+
+div_dec_fail:
+ WARN(clk_set_rate(c->parent, old_prate),
+ "Set rate failed for %s. Also in bad state!\n", c->dbg_name);
+set_rate_fail:
+ if (div > d->div)
+ WARN(d->ops->set_div(d, d->div),
+ "Set rate failed for %s. Also in bad state!\n",
+ c->dbg_name);
+ return rc;
+}
+
+static int div_enable(struct clk *c)
+{
+ struct div_clk *d = to_div_clk(c);
+ if (d->ops->enable)
+ return d->ops->enable(d);
+ return 0;
+}
+
+static void div_disable(struct clk *c)
+{
+ struct div_clk *d = to_div_clk(c);
+ if (d->ops->disable)
+ return d->ops->disable(d);
+}
+
+static enum handoff div_handoff(struct clk *c)
+{
+ struct div_clk *d = to_div_clk(c);
+
+ if (d->ops->get_div)
+ d->div = max(d->ops->get_div(d), 1);
+ d->div = max(d->div, 1U);
+ c->rate = clk_get_rate(c->parent) / d->div;
+
+ if (d->en_mask && d->ops && d->ops->is_enabled)
+ return d->ops->is_enabled(d)
+ ? HANDOFF_ENABLED_CLK
+ : HANDOFF_DISABLED_CLK;
+
+ /*
+ * If this function returns 'enabled' even when the clock downstream
+ * of this clock is disabled, then handoff code will unnecessarily
+ * enable the current parent of this clock. If this function always
+ * returns 'disabled' and a clock downstream is on, the clock handoff
+ * code will bump up the ref count for this clock and its current
+ * parent as necessary. So, clocks without an actual HW gate can
+ * always return disabled.
+ */
+ return HANDOFF_DISABLED_CLK;
+}
+
+struct clk_ops clk_ops_div = {
+ .enable = div_enable,
+ .disable = div_disable,
+ .round_rate = div_round_rate,
+ .set_rate = div_set_rate,
+ .handoff = div_handoff,
+};
+
+static long __slave_div_round_rate(struct clk *c, unsigned long rate,
+ int *best_div)
+{
+ struct div_clk *d = to_div_clk(c);
+ unsigned int div, min_div, max_div;
+ long p_rate;
+
+ rate = max(rate, 1UL);
+
+ if (!d->ops || !d->ops->set_div)
+ min_div = max_div = d->div;
+ else {
+ min_div = d->min_div;
+ max_div = d->max_div;
+ }
+
+ p_rate = clk_get_rate(c->parent);
+ div = p_rate / rate;
+ div = max(div, min_div);
+ div = min(div, max_div);
+ if (best_div)
+ *best_div = div;
+
+ return p_rate / div;
+}
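+
+/*
+ * Illustration (hypothetical numbers): with the parent fixed at 600 MHz
+ * and an unrestricted divider range, a request for 150 MHz selects
+ * div = 4; a request for 170 MHz selects div = 3 and reports 200 MHz,
+ * since a slave divider never adjusts its parent's rate.
+ */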
+
+static long slave_div_round_rate(struct clk *c, unsigned long rate)
+{
+ return __slave_div_round_rate(c, rate, NULL);
+}
+
+static int slave_div_set_rate(struct clk *c, unsigned long rate)
+{
+ struct div_clk *d = to_div_clk(c);
+ int div, rc = 0;
+ long rrate;
+
+ rrate = __slave_div_round_rate(c, rate, &div);
+ if (rrate != rate)
+ return -EINVAL;
+
+ if (div == d->div)
+ return 0;
+
+ if (d->ops->set_div)
+ rc = d->ops->set_div(d, div);
+ if (rc)
+ return rc;
+
+ d->div = div;
+
+ return 0;
+}
+
+struct clk_ops clk_ops_slave_div = {
+ .enable = div_enable,
+ .disable = div_disable,
+ .round_rate = slave_div_round_rate,
+ .set_rate = slave_div_set_rate,
+ .handoff = div_handoff,
+};
diff --git a/arch/arm/mach-msm/clock-local2.c b/arch/arm/mach-msm/clock-local2.c
index 8c2121f..e67d973 100644
--- a/arch/arm/mach-msm/clock-local2.c
+++ b/arch/arm/mach-msm/clock-local2.c
@@ -476,10 +476,7 @@
if (branch->max_div)
return branch->c.rate;
- if (!branch->has_sibling)
- return clk_get_rate(c->parent);
-
- return 0;
+ return clk_get_rate(c->parent);
}
static int branch_clk_list_rate(struct clk *c, unsigned n)
@@ -596,6 +593,13 @@
ret = -EINVAL;
}
writel_relaxed(cbcr_val, CBCR_REG(branch));
+ /*
+ * 8974v2.2 has a requirement that writes to set bits 13 and 14 are
+ * separated by at least 2 bus cycles. Cover one of these cycles by
+ * performing an extra write here. The other cycle is covered by the
+ * read-modify-write design of this function.
+ */
+ writel_relaxed(cbcr_val, CBCR_REG(branch));
spin_unlock_irqrestore(&local_clock_reg_lock, irq_flags);
/* Make sure write is issued before returning. */
@@ -663,7 +667,7 @@
return HANDOFF_ENABLED_CLK;
}
-static enum handoff byte_rcg_handoff(struct clk *clk)
+enum handoff byte_rcg_handoff(struct clk *clk)
{
struct rcg_clk *rcg = to_rcg_clk(clk);
u32 div_val;
@@ -714,7 +718,7 @@
return 0;
}
-static enum handoff pixel_rcg_handoff(struct clk *clk)
+enum handoff pixel_rcg_handoff(struct clk *clk)
{
struct rcg_clk *rcg = to_rcg_clk(clk);
u32 div_val, mval, nval, cfg_regval;
@@ -805,6 +809,155 @@
}
+#define ENABLE_REG(x) (*(x)->base + (x)->enable_reg)
+#define SELECT_REG(x) (*(x)->base + (x)->select_reg)
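+/*
+ * 'base' holds the address of the pointer to the ioremapped register
+ * block, so these macros resolve to the virtual address of the enable
+ * and select registers of a given cam mux clock.
+ */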
+
+/*
+ * mux clock functions
+ */
+static void cam_mux_clk_halt_check(void)
+{
+ /* Ensure that the delay starts after the mux disable/enable. */
+ mb();
+ udelay(HALT_CHECK_DELAY_US);
+}
+
+static int cam_mux_clk_enable(struct clk *c)
+{
+ unsigned long flags;
+ u32 regval;
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+
+ spin_lock_irqsave(&local_clock_reg_lock, flags);
+ regval = readl_relaxed(ENABLE_REG(mux));
+ regval |= mux->enable_mask;
+ writel_relaxed(regval, ENABLE_REG(mux));
+ spin_unlock_irqrestore(&local_clock_reg_lock, flags);
+
+ /* Wait for clock to enable before continuing. */
+ cam_mux_clk_halt_check();
+
+ return 0;
+}
+
+static void cam_mux_clk_disable(struct clk *c)
+{
+ unsigned long flags;
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+ u32 regval;
+
+ spin_lock_irqsave(&local_clock_reg_lock, flags);
+ regval = readl_relaxed(ENABLE_REG(mux));
+ regval &= ~mux->enable_mask;
+ writel_relaxed(regval, ENABLE_REG(mux));
+ spin_unlock_irqrestore(&local_clock_reg_lock, flags);
+
+ /* Wait for clock to disable before continuing. */
+ cam_mux_clk_halt_check();
+}
+
+static int mux_source_switch(struct cam_mux_clk *mux, struct mux_source *dest)
+{
+ unsigned long flags;
+ u32 regval;
+ int ret = 0;
+
+ ret = __clk_pre_reparent(&mux->c, dest->clk, &flags);
+ if (ret)
+ goto out;
+
+ regval = readl_relaxed(SELECT_REG(mux));
+ regval &= ~mux->select_mask;
+ regval |= dest->select_val;
+ writel_relaxed(regval, SELECT_REG(mux));
+
+ /* Make sure switch request goes through before proceeding. */
+ mb();
+
+ __clk_post_reparent(&mux->c, mux->c.parent, &flags);
+out:
+ return ret;
+}
+
+static int cam_mux_clk_set_parent(struct clk *c, struct clk *parent)
+{
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+ struct mux_source *dest = NULL;
+ int ret;
+
+ if (!mux->sources || !parent)
+ return -EPERM;
+
+ dest = mux->sources;
+
+ while (dest->clk) {
+ if (dest->clk == parent)
+ break;
+ dest++;
+ }
+
+ if (!dest->clk)
+ return -EPERM;
+
+ ret = mux_source_switch(mux, dest);
+ if (ret)
+ return ret;
+
+ mux->c.rate = clk_get_rate(dest->clk);
+
+ return 0;
+}
+
+static enum handoff cam_mux_clk_handoff(struct clk *c)
+{
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+ u32 mask = mux->enable_mask;
+ u32 regval = readl_relaxed(ENABLE_REG(mux));
+
+ c->rate = clk_get_rate(c->parent);
+
+ if (mask == (regval & mask))
+ return HANDOFF_ENABLED_CLK;
+
+ return HANDOFF_DISABLED_CLK;
+}
+
+static struct clk *cam_mux_clk_get_parent(struct clk *c)
+{
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+ struct mux_source *parent = NULL;
+ u32 regval = readl_relaxed(SELECT_REG(mux));
+
+ if (!mux->sources)
+ return ERR_PTR(-EPERM);
+
+ parent = mux->sources;
+
+ while (parent->clk) {
+ if ((regval & mux->select_mask) == parent->select_val)
+ return parent->clk;
+
+ parent++;
+ }
+
+ return ERR_PTR(-EPERM);
+}
+
+static int cam_mux_clk_list_rate(struct clk *c, unsigned n)
+{
+ struct cam_mux_clk *mux = to_cam_mux_clk(c);
+ int i;
+
+ for (i = 0; i < n; i++)
+ if (!mux->sources[i].clk)
+ break;
+
+ if (!mux->sources[i].clk)
+ return -ENXIO;
+
+ return clk_get_rate(mux->sources[i].clk);
+}
+
struct clk_ops clk_ops_empty;
struct clk_ops clk_ops_rcg = {
@@ -868,3 +1021,14 @@
.reset = local_vote_clk_reset,
.handoff = local_vote_clk_handoff,
};
+
+struct clk_ops clk_ops_cam_mux = {
+ .enable = cam_mux_clk_enable,
+ .disable = cam_mux_clk_disable,
+ .set_parent = cam_mux_clk_set_parent,
+ .get_parent = cam_mux_clk_get_parent,
+ .handoff = cam_mux_clk_handoff,
+ .list_rate = cam_mux_clk_list_rate,
+};
diff --git a/arch/arm/mach-msm/clock-local2.h b/arch/arm/mach-msm/clock-local2.h
index 7ac7bd3..f307a2f 100644
--- a/arch/arm/mach-msm/clock-local2.h
+++ b/arch/arm/mach-msm/clock-local2.h
@@ -157,6 +157,38 @@
return container_of(clk, struct measure_clk, c);
}
+struct mux_source {
+ struct clk *const clk;
+ const u32 select_val;
+};
+
+/**
+ * struct cam_mux_clk - camera source mux clock
+ * @c: clk
+ * @enable_reg: register that contains the enable bit(s) for the mux
+ * @select_reg: register that contains the source selection bits for the mux
+ * @enable_mask: mask that enables the mux
+ * @select_mask: mask for the source selection bits
+ * @sources: list of mux sources
+ * @base: pointer to base address of ioremapped registers.
+ */
+struct cam_mux_clk {
+ struct clk c;
+ const u32 enable_reg;
+ const u32 select_reg;
+ const u32 enable_mask;
+ const u32 select_mask;
+
+ struct mux_source *sources;
+
+ void *const __iomem *base;
+};
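+
+/*
+ * A minimal sketch (hypothetical wiring, offsets and masks) of how a
+ * two-input camera mux could be described with this structure; the rest
+ * of the struct clk initialization is elided:
+ *
+ *	static struct mux_source example_sources[] = {
+ *		{ .clk = &csi0_clk_src.c, .select_val = 0 },
+ *		{ .clk = &csi1_clk_src.c, .select_val = 1 },
+ *		{ }
+ *	};
+ *
+ *	static struct cam_mux_clk example_cam_mux = {
+ *		.enable_reg = 0x3c20,
+ *		.select_reg = 0x3c20,
+ *		.enable_mask = BIT(5),
+ *		.select_mask = BIT(0),
+ *		.sources = example_sources,
+ *		.base = &virt_base,
+ *		.c = {
+ *			.dbg_name = "example_cam_mux",
+ *			.ops = &clk_ops_cam_mux,
+ *		},
+ *	};
+ */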
+
+static inline struct cam_mux_clk *to_cam_mux_clk(struct clk *clk)
+{
+ return container_of(clk, struct cam_mux_clk, c);
+}
+
/*
* Generic set-rate implementations
*/
@@ -168,6 +200,7 @@
*/
extern spinlock_t local_clock_reg_lock;
+extern struct clk_ops clk_ops_cam_mux;
extern struct clk_ops clk_ops_empty;
extern struct clk_ops clk_ops_rcg;
extern struct clk_ops clk_ops_rcg_mnd;
@@ -177,6 +210,9 @@
extern struct clk_ops clk_ops_byte;
extern struct clk_ops clk_ops_pixel;
+enum handoff pixel_rcg_handoff(struct clk *clk);
+enum handoff byte_rcg_handoff(struct clk *clk);
+
/*
* Clock definition macros
*/
diff --git a/arch/arm/mach-msm/clock-mdss-8226.c b/arch/arm/mach-msm/clock-mdss-8226.c
index f2c8d58..edfaf90 100644
--- a/arch/arm/mach-msm/clock-mdss-8226.c
+++ b/arch/arm/mach-msm/clock-mdss-8226.c
@@ -71,7 +71,6 @@
{
u32 status;
- clk_prepare_enable(mdss_dsi_ahb_clk);
/* poll for PLL ready status */
if (readl_poll_timeout_noirq((mdss_dsi_base + 0x02c0),
status,
@@ -83,7 +82,6 @@
} else {
pll_initialized = 1;
}
- clk_disable_unprepare(mdss_dsi_ahb_clk);
return pll_initialized;
}
@@ -177,28 +175,141 @@
return ret;
}
-static void mdss_dsi_uniphy_pll_lock_detect_setting(void)
-{
- REG_W(0x04, mdss_dsi_base + 0x0264); /* LKDetect CFG2 */
- udelay(100);
- REG_W(0x05, mdss_dsi_base + 0x0264); /* LKDetect CFG2 */
- udelay(500);
-}
-
static void mdss_dsi_uniphy_pll_sw_reset(void)
{
+ /*
+ * Add hardware recommended delays after toggling the
+ * software reset bit off and back on.
+ */
REG_W(0x01, mdss_dsi_base + 0x0268); /* PLL TEST CFG */
- udelay(1);
+ udelay(300);
REG_W(0x00, mdss_dsi_base + 0x0268); /* PLL TEST CFG */
+ udelay(300);
+}
+
+static void mdss_dsi_pll_enable_casem(void)
+{
+ int i;
+
+ /*
+ * Add hardware recommended delays between register writes for
+ * the updates to take effect. These delays are necessary for the
+ * PLL to successfully lock.
+ */
+ REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1000);
+
+ for (i = 0; (i < 3) && !mdss_dsi_check_pll_lock(); i++) {
+ REG_W(0x07, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1000);
+ }
+
+ if (pll_initialized)
+ pr_debug("%s: PLL Locked after %d attempts\n", __func__, i);
+ else
+ pr_debug("%s: PLL failed to lock\n", __func__);
+}
+
+static void mdss_dsi_pll_enable_casef1(void)
+{
+ /*
+ * Add hardware recommended delays between register writes for
+ * the updates to take effect. These delays are necessary for the
+ * PLL to successfully lock.
+ */
+ REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0d, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1000);
+
+ if (mdss_dsi_check_pll_lock())
+ pr_debug("%s: PLL Locked\n", __func__);
+ else
+ pr_debug("%s: PLL failed to lock\n", __func__);
+}
+
+static void mdss_dsi_pll_enable_cased(void)
+{
+ /*
+ * Add hardware recommended delays between register writes for
+ * the updates to take effect. These delays are necessary for the
+ * PLL to successfully lock.
+ */
+ REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
udelay(1);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+ REG_W(0x07, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+ REG_W(0x07, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1);
+
+ if (mdss_dsi_check_pll_lock())
+ pr_debug("%s: PLL Locked\n", __func__);
+ else
+ pr_debug("%s: PLL failed to lock\n", __func__);
+}
+
+static void mdss_dsi_pll_enable_casec(void)
+{
+ /*
+ * Add hardware recommended delays between register writes for
+ * the updates to take effect. These delays are necessary for the
+ * PLL to successfully lock.
+ */
+ REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1000);
+
+ if (mdss_dsi_check_pll_lock())
+ pr_debug("%s: PLL Locked\n", __func__);
+ else
+ pr_debug("%s: PLL failed to lock\n", __func__);
+}
+
+static void mdss_dsi_pll_enable_casee(void)
+{
+ /*
+ * Add hardware recommended delays between register writes for
+ * the updates to take effect. These delays are necessary for the
+ * PLL to successfully lock.
+ */
+ REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(200);
+ REG_W(0x0d, mdss_dsi_base + 0x0220); /* GLB CFG */
+ REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
+ udelay(1000);
+
+ if (mdss_dsi_check_pll_lock())
+ pr_debug("%s: PLL Locked\n", __func__);
+ else
+ pr_debug("%s: PLL failed to lock\n", __func__);
}
static int __mdss_dsi_pll_enable(struct clk *c)
{
- u32 status;
- u32 max_reads, timeout_us;
- int i;
-
if (!pll_initialized) {
if (dsi_pll_rate)
__mdss_dsi_pll_byte_set_rate(c, dsi_pll_rate);
@@ -207,56 +318,45 @@
__func__);
}
+ /*
+ * Try all PLL power-up sequences one-by-one until
+ * PLL lock is detected
+ */
mdss_dsi_uniphy_pll_sw_reset();
- /* PLL power up */
- /* Add HW recommended delay between
- register writes for the update to propagate */
- REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(20);
- REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(100);
- REG_W(0x0d, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(20);
- REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(200);
+ mdss_dsi_pll_enable_casem();
+ if (pll_initialized)
+ goto pll_locked;
- for (i = 0; i < 3; i++) {
- mdss_dsi_uniphy_pll_lock_detect_setting();
- /* poll for PLL ready status */
- max_reads = 5;
- timeout_us = 100;
- if (readl_poll_timeout_noirq((mdss_dsi_base + 0x02c0),
- status,
- ((status & 0x01) == 1),
- max_reads, timeout_us)) {
- pr_debug("%s: DSI PLL status=%x failed to Lock\n",
- __func__, status);
- pr_debug("%s:Trying to power UP PLL again\n",
- __func__);
- } else
- break;
+ mdss_dsi_uniphy_pll_sw_reset();
+ mdss_dsi_pll_enable_cased();
+ if (pll_initialized)
+ goto pll_locked;
- mdss_dsi_uniphy_pll_sw_reset();
- udelay(1000);
- /* Add HW recommended delay between
- register writes for the update to propagate */
- REG_W(0x01, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(20);
- REG_W(0x05, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(100);
- REG_W(0x0d, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(20);
- REG_W(0x0f, mdss_dsi_base + 0x0220); /* GLB CFG */
- udelay(200);
- }
+ mdss_dsi_uniphy_pll_sw_reset();
+ mdss_dsi_pll_enable_cased();
+ if (pll_initialized)
+ goto pll_locked;
- if ((status & 0x01) != 1) {
- pr_err("%s: DSI PLL status=%x failed to Lock\n",
- __func__, status);
- return -EINVAL;
- }
+ mdss_dsi_uniphy_pll_sw_reset();
+ mdss_dsi_pll_enable_casef1();
+ if (pll_initialized)
+ goto pll_locked;
- pr_debug("%s: **** PLL Lock success\n", __func__);
+ mdss_dsi_uniphy_pll_sw_reset();
+ mdss_dsi_pll_enable_casec();
+ if (pll_initialized)
+ goto pll_locked;
+
+ mdss_dsi_uniphy_pll_sw_reset();
+ mdss_dsi_pll_enable_casee();
+ if (pll_initialized)
+ goto pll_locked;
+
+ pr_err("%s: DSI PLL failed to Lock\n", __func__);
+ return -EINVAL;
+
+pll_locked:
+ pr_debug("%s: PLL Lock success\n", __func__);
return 0;
}
@@ -264,7 +364,7 @@
static void __mdss_dsi_pll_disable(void)
{
writel_relaxed(0x00, mdss_dsi_base + 0x0220); /* GLB CFG */
- pr_debug("%s: **** disable pll Initialize\n", __func__);
+ pr_debug("%s: PLL disabled\n", __func__);
pll_initialized = 0;
}
@@ -305,13 +405,17 @@
/* todo: Adjust these values appropriately */
static enum handoff mdss_dsi_pll_byte_handoff(struct clk *c)
{
- if (mdss_gdsc_enabled() && mdss_dsi_check_pll_lock()) {
- c->rate = 59000000;
- dsi_pll_rate = 59000000;
- pll_byte_clk_rate = 59000000;
- pll_pclk_rate = 117000000;
- dsipll_refcount++;
- return HANDOFF_ENABLED_CLK;
+ if (mdss_gdsc_enabled()) {
+ clk_prepare_enable(mdss_dsi_ahb_clk);
+ if (mdss_dsi_check_pll_lock()) {
+ c->rate = 59000000;
+ dsi_pll_rate = 59000000;
+ pll_byte_clk_rate = 59000000;
+ pll_pclk_rate = 117000000;
+ dsipll_refcount++;
+ return HANDOFF_ENABLED_CLK;
+ }
+ clk_disable_unprepare(mdss_dsi_ahb_clk);
}
return HANDOFF_DISABLED_CLK;
@@ -320,10 +424,14 @@
/* todo: Adjust these values appropriately */
static enum handoff mdss_dsi_pll_pixel_handoff(struct clk *c)
{
- if (mdss_gdsc_enabled() && mdss_dsi_check_pll_lock()) {
- c->rate = 117000000;
- dsipll_refcount++;
- return HANDOFF_ENABLED_CLK;
+ if (mdss_gdsc_enabled()) {
+ clk_prepare_enable(mdss_dsi_ahb_clk);
+ if (mdss_dsi_check_pll_lock()) {
+ c->rate = 117000000;
+ dsipll_refcount++;
+ return HANDOFF_ENABLED_CLK;
+ }
+ clk_disable_unprepare(mdss_dsi_ahb_clk);
}
return HANDOFF_DISABLED_CLK;
diff --git a/arch/arm/mach-msm/clock-rpm.c b/arch/arm/mach-msm/clock-rpm.c
index ee91a34..3870e2b 100644
--- a/arch/arm/mach-msm/clock-rpm.c
+++ b/arch/arm/mach-msm/clock-rpm.c
@@ -53,12 +53,8 @@
if (rc < 0)
return rc;
- if (!r->branch) {
- r->last_set_khz = iv.value;
- if (!r->active_only)
- r->last_set_sleep_khz = iv.value;
+ if (!r->branch)
r->c.rate = iv.value * r->factor;
- }
return 0;
}
@@ -78,12 +74,8 @@
static int clk_rpmrs_handoff_smd(struct rpm_clk *r)
{
- if (!r->branch) {
- r->last_set_khz = INT_MAX;
- if (!r->active_only)
- r->last_set_sleep_khz = INT_MAX;
- r->c.rate = 1 * r->factor;
- }
+ if (!r->branch)
+ r->c.rate = INT_MAX;
return 0;
}
@@ -113,6 +105,22 @@
static DEFINE_MUTEX(rpm_clock_lock);
+static void to_active_sleep_khz(struct rpm_clk *r, unsigned long rate,
+ unsigned long *active_khz, unsigned long *sleep_khz)
+{
+ /* Convert the rate (hz) to khz */
+ *active_khz = DIV_ROUND_UP(rate, r->factor);
+
+ /*
+ * Active-only clocks don't care what the rate is during sleep. So,
+ * they vote for zero.
+ */
+ if (r->active_only)
+ *sleep_khz = 0;
+ else
+ *sleep_khz = *active_khz;
+}
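+
+/*
+ * Illustration: with factor = 1000, a 19200000 Hz request maps to
+ * active_khz = 19200; sleep_khz is also 19200 for a normal clock but 0
+ * for an active_only clock, so active-only votes contribute nothing to
+ * the sleep-rate aggregation.
+ */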
+
static int rpm_clk_prepare(struct clk *clk)
{
struct rpm_clk *r = to_rpm_clk(clk);
@@ -124,18 +132,16 @@
mutex_lock(&rpm_clock_lock);
- this_khz = r->last_set_khz;
+ to_active_sleep_khz(r, r->c.rate, &this_khz, &this_sleep_khz);
+
/* Don't send requests to the RPM if the rate has not been set. */
if (this_khz == 0)
goto out;
- this_sleep_khz = r->last_set_sleep_khz;
-
/* Take peer clock's rate into account only if it's enabled. */
- if (peer->enabled) {
- peer_khz = peer->last_set_khz;
- peer_sleep_khz = peer->last_set_sleep_khz;
- }
+ if (peer->enabled)
+ to_active_sleep_khz(peer, peer->c.rate,
+ &peer_khz, &peer_sleep_khz);
value = max(this_khz, peer_khz);
if (r->branch)
@@ -171,17 +177,16 @@
mutex_lock(&rpm_clock_lock);
- if (r->last_set_khz) {
+ if (r->c.rate) {
uint32_t value;
struct rpm_clk *peer = r->peer;
unsigned long peer_khz = 0, peer_sleep_khz = 0;
int rc;
/* Take peer clock's rate into account only if it's enabled. */
- if (peer->enabled) {
- peer_khz = peer->last_set_khz;
- peer_sleep_khz = peer->last_set_sleep_khz;
- }
+ if (peer->enabled)
+ to_active_sleep_khz(peer, peer->c.rate,
+ &peer_khz, &peer_sleep_khz);
value = r->branch ? !!peer_khz : peer_khz;
rc = clk_rpmrs_set_rate_active(r, value);
@@ -204,27 +209,19 @@
unsigned long this_khz, this_sleep_khz;
int rc = 0;
- this_khz = DIV_ROUND_UP(rate, r->factor);
-
mutex_lock(&rpm_clock_lock);
- /* Active-only clocks don't care what the rate is during sleep. So,
- * they vote for zero. */
- if (r->active_only)
- this_sleep_khz = 0;
- else
- this_sleep_khz = this_khz;
-
if (r->enabled) {
uint32_t value;
struct rpm_clk *peer = r->peer;
unsigned long peer_khz = 0, peer_sleep_khz = 0;
+ to_active_sleep_khz(r, rate, &this_khz, &this_sleep_khz);
+
/* Take peer clock's rate into account only if it's enabled. */
- if (peer->enabled) {
- peer_khz = peer->last_set_khz;
- peer_sleep_khz = peer->last_set_sleep_khz;
- }
+ if (peer->enabled)
+ to_active_sleep_khz(peer, peer->c.rate,
+ &peer_khz, &peer_sleep_khz);
value = max(this_khz, peer_khz);
rc = clk_rpmrs_set_rate_active(r, value);
@@ -234,10 +231,6 @@
value = max(this_sleep_khz, peer_sleep_khz);
rc = clk_rpmrs_set_rate_sleep(r, value);
}
- if (!rc) {
- r->last_set_khz = this_khz;
- r->last_set_sleep_khz = this_sleep_khz;
- }
out:
mutex_unlock(&rpm_clock_lock);
diff --git a/arch/arm/mach-msm/clock-rpm.h b/arch/arm/mach-msm/clock-rpm.h
index 8d328e3..b20c3d6 100644
--- a/arch/arm/mach-msm/clock-rpm.h
+++ b/arch/arm/mach-msm/clock-rpm.h
@@ -37,9 +37,6 @@
const int rpm_clk_id;
const int rpm_status_id;
const bool active_only;
- unsigned last_set_khz;
- /* 0 if active_only. Otherwise, same as last_set_khz. */
- unsigned last_set_sleep_khz;
bool enabled;
bool branch; /* true: RPM only accepts 1 for ON and 0 for OFF */
unsigned factor;
@@ -107,8 +104,6 @@
.rpm_status_id = (stat_id), \
.rpm_key = (key), \
.peer = &active, \
- .last_set_khz = ((r) / 1000), \
- .last_set_sleep_khz = ((r) / 1000), \
.factor = 1000, \
.branch = true, \
.rpmrs_data = (rpmrsdata),\
@@ -125,7 +120,6 @@
.rpm_status_id = (stat_id), \
.rpm_key = (key), \
.peer = &name, \
- .last_set_khz = ((r) / 1000), \
.active_only = true, \
.factor = 1000, \
.branch = true, \
diff --git a/arch/arm/mach-msm/clock.c b/arch/arm/mach-msm/clock.c
index 044fc2c..2e12e27 100644
--- a/arch/arm/mach-msm/clock.c
+++ b/arch/arm/mach-msm/clock.c
@@ -62,7 +62,7 @@
{
int level, rc = 0, i;
struct regulator **r = vdd_class->regulator;
- const int **vdd_uv = vdd_class->vdd_uv;
+ int **vdd_uv = vdd_class->vdd_uv;
int max_level = vdd_class->num_levels - 1;
for (level = max_level; level > 0; level--)
diff --git a/arch/arm/mach-msm/cpufreq.c b/arch/arm/mach-msm/cpufreq.c
index d02ab66..d5084e4 100644
--- a/arch/arm/mach-msm/cpufreq.c
+++ b/arch/arm/mach-msm/cpufreq.c
@@ -3,7 +3,7 @@
* MSM architecture cpufreq driver
*
* Copyright (C) 2007 Google, Inc.
- * Copyright (c) 2007-2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2007-2013, The Linux Foundation. All rights reserved.
* Author: Mike A. Chan <mikechan@google.com>
*
* This software is licensed under the terms of the GNU General Public
@@ -261,6 +261,7 @@
{
int cur_freq;
int index;
+ int ret = 0;
struct cpufreq_frequency_table *table;
struct cpufreq_work_struct *cpu_work = NULL;
@@ -296,19 +297,16 @@
policy->cpu, cur_freq);
return -EINVAL;
}
-
- if (cur_freq != table[index].frequency) {
- int ret = 0;
- ret = acpuclk_set_rate(policy->cpu, table[index].frequency,
- SETRATE_CPUFREQ);
- if (ret)
- return ret;
- pr_info("cpufreq: cpu%d init at %d switching to %d\n",
- policy->cpu, cur_freq, table[index].frequency);
- cur_freq = table[index].frequency;
- }
-
- policy->cur = cur_freq;
+ /*
+ * Call set_cpu_freq unconditionally so that when cpu is set to
+ * online, frequency limit will always be updated.
+ */
+ ret = set_cpu_freq(policy, table[index].frequency);
+ if (ret)
+ return ret;
+ pr_debug("cpufreq: cpu%d init at %d switching to %d\n",
+ policy->cpu, cur_freq, table[index].frequency);
+ policy->cur = table[index].frequency;
policy->cpuinfo.transition_latency =
acpuclk_get_switch_time() * NSEC_PER_USEC;
@@ -410,4 +408,4 @@
return cpufreq_register_driver(&msm_cpufreq_driver);
}
-late_initcall(msm_cpufreq_register);
+device_initcall(msm_cpufreq_register);
diff --git a/arch/arm/mach-msm/devices-8960.c b/arch/arm/mach-msm/devices-8960.c
index 837aef3..cb8ffc1 100644
--- a/arch/arm/mach-msm/devices-8960.c
+++ b/arch/arm/mach-msm/devices-8960.c
@@ -2510,6 +2510,11 @@
.id = -1,
};
+struct platform_device msm_fm_loopback = {
+ .name = "msm-pcm-loopback",
+ .id = -1,
+};
+
static struct fs_driver_data gfx2d0_fs_data = {
.clks = (struct fs_clk_data[]){
{ .name = "core_clk" },
diff --git a/arch/arm/mach-msm/devices.h b/arch/arm/mach-msm/devices.h
index 327c11d..eb12383 100644
--- a/arch/arm/mach-msm/devices.h
+++ b/arch/arm/mach-msm/devices.h
@@ -260,6 +260,7 @@
extern struct platform_device msm_i2s_cpudai4;
extern struct platform_device msm_i2s_cpudai5;
extern struct platform_device msm_cpudai_stub;
+extern struct platform_device msm_fm_loopback;
extern struct platform_device msm_pil_q6v3;
extern struct platform_device msm_pil_modem;
diff --git a/arch/arm/mach-msm/gdsc.c b/arch/arm/mach-msm/gdsc.c
index 6665d66..30a034e 100644
--- a/arch/arm/mach-msm/gdsc.c
+++ b/arch/arm/mach-msm/gdsc.c
@@ -22,6 +22,7 @@
#include <linux/regulator/driver.h>
#include <linux/regulator/machine.h>
#include <linux/regulator/of_regulator.h>
+#include <linux/slab.h>
#include <linux/clk.h>
#include <mach/clk.h>
@@ -44,7 +45,9 @@
struct regulator_dev *rdev;
struct regulator_desc rdesc;
void __iomem *gdscr;
- struct clk *core_clk;
+ struct clk **clocks;
+ int clock_count;
+ bool toggle_mems;
};
static int gdsc_is_enabled(struct regulator_dev *rdev)
@@ -58,7 +61,7 @@
{
struct gdsc *sc = rdev_get_drvdata(rdev);
uint32_t regval;
- int ret;
+ int i, ret;
regval = readl_relaxed(sc->gdscr);
regval &= ~SW_COLLAPSE_MASK;
@@ -71,9 +74,18 @@
return ret;
}
+ if (sc->toggle_mems) {
+ for (i = 0; i < sc->clock_count; i++) {
+ clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_MEM);
+ clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_PERIPH);
+ }
+ }
+
/*
* If clocks to this power domain were already on, they will take an
* additional 4 clock cycles to re-enable after the rail is enabled.
+ * Delay to account for this. A delay is also needed to ensure clocks
+ * are not enabled within 400ns of enabling power to the memories.
*/
udelay(1);
@@ -84,7 +96,7 @@
{
struct gdsc *sc = rdev_get_drvdata(rdev);
uint32_t regval;
- int ret;
+ int i, ret;
regval = readl_relaxed(sc->gdscr);
regval |= SW_COLLAPSE_MASK;
@@ -95,6 +107,13 @@
if (ret)
dev_err(&rdev->dev, "%s disable timed out\n", sc->rdesc.name);
+ if (sc->toggle_mems) {
+ for (i = 0; i < sc->clock_count; i++) {
+ clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_MEM);
+ clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_PERIPH);
+ }
+ }
+
return ret;
}
@@ -112,7 +131,7 @@
struct gdsc *sc;
uint32_t regval;
bool retain_mems;
- int ret;
+ int i, ret;
sc = devm_kzalloc(&pdev->dev, sizeof(struct gdsc), GFP_KERNEL);
if (sc == NULL)
@@ -137,6 +156,34 @@
if (sc->gdscr == NULL)
return -ENOMEM;
+ sc->clock_count = of_property_count_strings(pdev->dev.of_node,
+ "qcom,clock-names");
+ if (sc->clock_count == -EINVAL) {
+ sc->clock_count = 0;
+ } else if (IS_ERR_VALUE(sc->clock_count)) {
+ dev_err(&pdev->dev, "Failed to get clock names\n");
+ return -EINVAL;
+ }
+
+ sc->clocks = devm_kzalloc(&pdev->dev,
+ sizeof(struct clk *) * sc->clock_count, GFP_KERNEL);
+ if (!sc->clocks)
+ return -ENOMEM;
+ for (i = 0; i < sc->clock_count; i++) {
+ const char *clock_name;
+ of_property_read_string_index(pdev->dev.of_node,
+ "qcom,clock-names", i,
+ &clock_name);
+ sc->clocks[i] = devm_clk_get(&pdev->dev, clock_name);
+ if (IS_ERR(sc->clocks[i])) {
+ int rc = PTR_ERR(sc->clocks[i]);
+ if (rc != -EPROBE_DEFER)
+ dev_err(&pdev->dev, "Failed to get %s\n",
+ clock_name);
+ return rc;
+ }
+ }
+
sc->rdesc.id = atomic_inc_return(&gdsc_count);
sc->rdesc.ops = &gdsc_ops;
sc->rdesc.type = REGULATOR_VOLTAGE;
@@ -157,13 +204,16 @@
retain_mems = of_property_read_bool(pdev->dev.of_node,
"qcom,retain-mems");
- if (retain_mems) {
- sc->core_clk = devm_clk_get(&pdev->dev, "core_clk");
- if (IS_ERR(sc->core_clk))
- return PTR_ERR(sc->core_clk);
- clk_set_flags(sc->core_clk, CLKFLAG_RETAIN_MEM);
- clk_set_flags(sc->core_clk, CLKFLAG_RETAIN_PERIPH);
+ for (i = 0; i < sc->clock_count; i++) {
+ if (retain_mems || (regval & PWR_ON_MASK)) {
+ clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_MEM);
+ clk_set_flags(sc->clocks[i], CLKFLAG_RETAIN_PERIPH);
+ } else {
+ clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_MEM);
+ clk_set_flags(sc->clocks[i], CLKFLAG_NORETAIN_PERIPH);
+ }
}
+ sc->toggle_mems = !retain_mems;
sc->rdev = regulator_register(&sc->rdesc, &pdev->dev, init_data, sc,
pdev->dev.of_node);
diff --git a/arch/arm/mach-msm/include/mach/board.h b/arch/arm/mach-msm/include/mach/board.h
index 72f5051..22f74c8 100644
--- a/arch/arm/mach-msm/include/mach/board.h
+++ b/arch/arm/mach-msm/include/mach/board.h
@@ -600,7 +600,7 @@
void msm_map_msm7x30_io(void);
void msm_map_fsm9xxx_io(void);
void msm_map_8974_io(void);
-void msm_map_zinc_io(void);
+void msm_map_8084_io(void);
void msm_map_msmkrypton_io(void);
void msm_map_msm8625_io(void);
void msm_map_msm9625_io(void);
@@ -610,7 +610,7 @@
void msm_8974_reserve(void);
void msm_8974_very_early(void);
void msm_8974_init_gpiomux(void);
-void msmzinc_init_gpiomux(void);
+void apq8084_init_gpiomux(void);
void msm9625_init_gpiomux(void);
void msmkrypton_init_gpiomux(void);
void msm_map_mpq8092_io(void);
@@ -653,7 +653,7 @@
void msm_snddev_tx_route_config(void);
void msm_snddev_tx_route_deconfig(void);
-extern unsigned int msm_shared_ram_phys; /* defined in arch/arm/mach-msm/io.c */
+extern phys_addr_t msm_shared_ram_phys; /* defined in arch/arm/mach-msm/io.c */
#endif
diff --git a/arch/arm/mach-msm/include/mach/clk-provider.h b/arch/arm/mach-msm/include/mach/clk-provider.h
index 2a33228..b358b53 100644
--- a/arch/arm/mach-msm/include/mach/clk-provider.h
+++ b/arch/arm/mach-msm/include/mach/clk-provider.h
@@ -59,7 +59,7 @@
struct regulator **regulator;
int num_regulators;
int (*set_vdd)(struct clk_vdd_class *v_class, int level);
- const int **vdd_uv;
+ int **vdd_uv;
int *level_votes;
int num_levels;
unsigned long cur_level;
diff --git a/arch/arm/mach-msm/include/mach/clock-generic.h b/arch/arm/mach-msm/include/mach/clock-generic.h
new file mode 100644
index 0000000..0f689f1
--- /dev/null
+++ b/arch/arm/mach-msm/include/mach/clock-generic.h
@@ -0,0 +1,129 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __MACH_CLOCK_GENERIC_H
+#define __MACH_CLOCK_GENERIC_H
+
+#include <mach/clk-provider.h>
+
+/* ==================== Mux clock ==================== */
+
+struct clk_src {
+ struct clk *src;
+ int sel;
+};
+
+struct mux_clk;
+
+struct clk_mux_ops {
+ int (*set_mux_sel)(struct mux_clk *clk, int sel);
+ int (*get_mux_sel)(struct mux_clk *clk);
+
+ /* Optional */
+ bool (*is_enabled)(struct mux_clk *clk);
+ int (*enable)(struct mux_clk *clk);
+ void (*disable)(struct mux_clk *clk);
+};
+
+#define MUX_SRC_LIST(...) \
+ .parents = (struct clk_src[]){__VA_ARGS__}, \
+ .num_parents = ARRAY_SIZE(((struct clk_src[]){__VA_ARGS__}))
+
+struct mux_clk {
+ /* Parents in decreasing order of preference for obtaining rates. */
+ struct clk_src *parents;
+ int num_parents;
+ struct clk *safe_parent;
+ int safe_sel;
+ struct clk_mux_ops *ops;
+
+ /* Fields not used by helper function. */
+ void *const __iomem *base;
+ u32 offset;
+ u32 mask;
+ u32 shift;
+ u32 en_mask;
+ void *priv;
+
+ struct clk c;
+};
+
+static inline struct mux_clk *to_mux_clk(struct clk *c)
+{
+ return container_of(c, struct mux_clk, c);
+}
+
+extern struct clk_ops clk_ops_gen_mux;
+
+/* ==================== Divider clock ==================== */
+
+struct div_clk;
+
+struct clk_div_ops {
+ int (*set_div)(struct div_clk *clk, int div);
+ int (*get_div)(struct div_clk *clk);
+
+ /* Optional */
+ bool (*is_enabled)(struct div_clk *clk);
+ int (*enable)(struct div_clk *clk);
+ void (*disable)(struct div_clk *clk);
+};
+
+struct div_clk {
+ unsigned int div;
+ unsigned int min_div;
+ unsigned int max_div;
+ unsigned long rate_margin;
+ struct clk_div_ops *ops;
+
+ /* Fields not used by helper function. */
+ void *const __iomem *base;
+ u32 offset;
+ u32 mask;
+ u32 shift;
+ u32 en_mask;
+ void *priv;
+ struct clk c;
+};
+
+static inline struct div_clk *to_div_clk(struct clk *c)
+{
+ return container_of(c, struct div_clk, c);
+}
+
+extern struct clk_ops clk_ops_div;
+extern struct clk_ops clk_ops_slave_div;
+
+#define DEFINE_FIXED_DIV_CLK(clk_name, _div, _parent) \
+static struct div_clk clk_name = { \
+ .div = _div, \
+ .c = { \
+ .parent = _parent, \
+ .dbg_name = #clk_name, \
+ .ops = &clk_ops_div, \
+ CLK_INIT(clk_name.c), \
+ } \
+}
+
+#define DEFINE_FIXED_SLAVE_DIV_CLK(clk_name, _div, _parent) \
+static struct div_clk clk_name = { \
+ .div = _div, \
+ .c = { \
+ .parent = _parent, \
+ .dbg_name = #clk_name, \
+ .ops = &clk_ops_slave_div, \
+ CLK_INIT(clk_name.c), \
+ } \
+}
+
+#endif
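
As an aside on the divider helper introduced above: a clock driver can declare a fixed divider on top of an existing parent with the DEFINE_FIXED_DIV_CLK() macro. The snippet below is an editor's sketch, not part of the patch; the parent clock name and the divide ratio are placeholders.

#include <mach/clock-generic.h>

/* Hypothetical parent clock defined elsewhere in the clock driver. */
extern struct clk pll0_clk;

/*
 * Fixed divide-by-4 clock derived from pll0_clk. The macro fills in
 * .div, .parent and .dbg_name and wires the clock to clk_ops_div, so
 * clk_get_rate() on pll0_div4_clk.c reports the parent rate / 4.
 */
DEFINE_FIXED_DIV_CLK(pll0_div4_clk, 4, &pll0_clk);
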
diff --git a/arch/arm/mach-msm/include/mach/ecm_ipa.h b/arch/arm/mach-msm/include/mach/ecm_ipa.h
index 008a659..f6afb2a 100644
--- a/arch/arm/mach-msm/include/mach/ecm_ipa.h
+++ b/arch/arm/mach-msm/include/mach/ecm_ipa.h
@@ -24,15 +24,35 @@
enum ipa_dp_evt_type evt,
unsigned long data);
+/*
+ * struct ecm_ipa_params - parameters for ecm_ipa initialization API
+ *
+ * @ecm_ipa_rx_dp_notify: ecm_ipa will set this callback (out parameter).
+ * this callback shall be supplied for ipa_connect upon pipe
+ * connection (USB->IPA); once the IPA driver receives data packets
+ * from the USB pipe destined for Apps, this callback will be called.
+ * @ecm_ipa_tx_dp_notify: ecm_ipa will set this callback (out parameter).
+ * this callback shall be supplied for ipa_connect upon pipe
+ * connection (IPA->USB); once the IPA driver sends packets destined
+ * for USB, the IPA BAM will notify of Tx completion.
+ * @private: ecm_ipa will set this pointer (out parameter).
+ * This pointer will hold the network device for later interaction
+ * with ecm_ipa APIs
+ * @host_ethaddr: host Ethernet address in network order
+ * @device_ethaddr: device Ethernet address in network order
+ */
+struct ecm_ipa_params {
+ ecm_ipa_callback ecm_ipa_rx_dp_notify;
+ ecm_ipa_callback ecm_ipa_tx_dp_notify;
+ u8 host_ethaddr[ETH_ALEN];
+ u8 device_ethaddr[ETH_ALEN];
+ void *private;
+};
+
#ifdef CONFIG_ECM_IPA
-int ecm_ipa_init(ecm_ipa_callback * ecm_ipa_rx_dp_notify,
- ecm_ipa_callback * ecm_ipa_tx_dp_notify,
- void **priv);
-
-int ecm_ipa_configure(u8 host_ethaddr[], u8 device_ethaddr[],
- void *priv);
+int ecm_ipa_init(struct ecm_ipa_params *params);
int ecm_ipa_connect(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl,
void *priv);
@@ -43,15 +63,7 @@
#else /* CONFIG_ECM_IPA*/
-static inline int ecm_ipa_init(ecm_ipa_callback *ecm_ipa_rx_dp_notify,
- ecm_ipa_callback *ecm_ipa_tx_dp_notify,
- void **priv)
-{
- return 0;
-}
-
-static inline int ecm_ipa_configure(u8 host_ethaddr[], u8 device_ethaddr[],
- void *priv)
+static inline int ecm_ipa_init(struct ecm_ipa_params *params)
{
return 0;
}
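
For context on the reworked ecm_ipa interface above: a caller now fills a single ecm_ipa_params structure rather than passing the callbacks and Ethernet addresses separately. The sketch below is illustrative only; the MAC addresses, pipe handles and function name are assumptions, not part of the patch.

#include <linux/types.h>
#include <mach/ecm_ipa.h>

static int example_ecm_ipa_bringup(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl)
{
	struct ecm_ipa_params params = {
		/* Placeholder locally administered MAC addresses. */
		.host_ethaddr   = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01},
		.device_ethaddr = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02},
	};
	int ret;

	/* ecm_ipa fills in the rx/tx notify callbacks and 'private'. */
	ret = ecm_ipa_init(&params);
	if (ret)
		return ret;

	/*
	 * The returned callbacks are meant to be handed to ipa_connect()
	 * when the USB<->IPA pipes are set up; once the pipes exist the
	 * data path is enabled with ecm_ipa_connect().
	 */
	return ecm_ipa_connect(usb_to_ipa_hdl, ipa_to_usb_hdl,
			       params.private);
}
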
diff --git a/arch/arm/mach-msm/include/mach/iommu.h b/arch/arm/mach-msm/include/mach/iommu.h
index f750dc8..23d204a 100644
--- a/arch/arm/mach-msm/include/mach/iommu.h
+++ b/arch/arm/mach-msm/include/mach/iommu.h
@@ -129,6 +129,7 @@
* @iommu_power_off: Turn off power to unit
* @iommu_clk_on: Turn on clks to unit
* @iommu_clk_off: Turn off clks to unit
+ * @iommu_lock_initialize: Initialize the remote lock
* @iommu_lock_acquire: Acquire any locks needed
* @iommu_lock_release: Release locks needed
*/
@@ -137,6 +138,7 @@
void (*iommu_power_off)(struct msm_iommu_drvdata *);
int (*iommu_clk_on)(struct msm_iommu_drvdata *);
void (*iommu_clk_off)(struct msm_iommu_drvdata *);
+ void * (*iommu_lock_initialize)(void);
void (*iommu_lock_acquire)(void);
void (*iommu_lock_release)(void);
};
diff --git a/arch/arm/mach-msm/include/mach/iommu_domains.h b/arch/arm/mach-msm/include/mach/iommu_domains.h
index 9a89508..7157d12 100644
--- a/arch/arm/mach-msm/include/mach/iommu_domains.h
+++ b/arch/arm/mach-msm/include/mach/iommu_domains.h
@@ -122,6 +122,7 @@
unsigned long size);
extern int msm_register_domain(struct msm_iova_layout *layout);
+extern int msm_unregister_domain(struct iommu_domain *domain);
#else
static inline void msm_iommu_set_client_name(struct iommu_domain *domain,
@@ -195,6 +196,11 @@
{
return -ENODEV;
}
+
+static inline int msm_unregister_domain(struct iommu_domain *domain)
+{
+ return -ENODEV;
+}
#endif
#endif
diff --git a/arch/arm/mach-msm/include/mach/iommu_perfmon.h b/arch/arm/mach-msm/include/mach/iommu_perfmon.h
index dcae83b..dc4671c 100644
--- a/arch/arm/mach-msm/include/mach/iommu_perfmon.h
+++ b/arch/arm/mach-msm/include/mach/iommu_perfmon.h
@@ -63,6 +63,7 @@
* @iommu_dev: pointer to iommu device
* @ops: iommu access operations pointer.
* @hw_ops: iommu pm hw access operations pointer.
+ * @always_on: 1 if iommu is always on, 0 otherwise.
*/
struct iommu_info {
const char *iommu_name;
@@ -71,6 +72,7 @@
struct device *iommu_dev;
struct iommu_access_ops *ops;
struct iommu_pm_hw_ops *hw_ops;
+ unsigned int always_on;
};
/**
diff --git a/arch/arm/mach-msm/include/mach/ipa.h b/arch/arm/mach-msm/include/mach/ipa.h
index cc03c48..db5e126 100644
--- a/arch/arm/mach-msm/include/mach/ipa.h
+++ b/arch/arm/mach-msm/include/mach/ipa.h
@@ -464,6 +464,13 @@
int ipa_disconnect(u32 clnt_hdl);
/*
+ * Resume / Suspend
+ */
+int ipa_resume(u32 clnt_hdl);
+
+int ipa_suspend(u32 clnt_hdl);
+
+/*
* Configuration
*/
int ipa_cfg_ep(u32 clnt_hdl, const struct ipa_ep_cfg *ipa_ep_cfg);
@@ -557,17 +564,6 @@
int ipa_set_single_ndp_per_mbim(bool enable);
/*
- * rmnet bridge
- */
-int rmnet_bridge_init(void);
-
-int rmnet_bridge_disconnect(void);
-
-int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id);
-
-/*
* SW bridge (between IPA and A2)
*/
int ipa_bridge_setup(enum ipa_bridge_dir dir, enum ipa_bridge_type type,
@@ -594,6 +590,8 @@
*/
int ipa_rm_create_resource(struct ipa_rm_create_params *create_params);
+int ipa_rm_delete_resource(enum ipa_rm_resource_name resource_name);
+
int ipa_rm_register(enum ipa_rm_resource_name resource_name,
struct ipa_rm_register_params *reg_params);
@@ -705,6 +703,19 @@
}
/*
+ * Resume / Suspend
+ */
+static inline int ipa_resume(u32 clnt_hdl)
+{
+ return -EPERM;
+}
+
+static inline int ipa_suspend(u32 clnt_hdl)
+{
+ return -EPERM;
+}
+
+/*
* Configuration
*/
static inline int ipa_cfg_ep(u32 clnt_hdl,
@@ -917,26 +928,6 @@
}
/*
- * rmnet bridge
- */
-static inline int rmnet_bridge_init(void)
-{
- return -EPERM;
-}
-
-static inline int rmnet_bridge_disconnect(void)
-{
- return -EPERM;
-}
-
-static inline int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id)
-{
- return -EPERM;
-}
-
-/*
* SW bridge (between IPA and A2)
*/
static inline int ipa_bridge_setup(enum ipa_bridge_dir dir,
@@ -986,6 +977,12 @@
return -EPERM;
}
+static inline int ipa_rm_delete_resource(
+ enum ipa_rm_resource_name resource_name)
+{
+ return -EPERM;
+}
+
static inline int ipa_rm_register(enum ipa_rm_resource_name resource_name,
struct ipa_rm_register_params *reg_params)
{
diff --git a/arch/arm/mach-msm/include/mach/kgsl.h b/arch/arm/mach-msm/include/mach/kgsl.h
index b68aff8..349dbe7 100644
--- a/arch/arm/mach-msm/include/mach/kgsl.h
+++ b/arch/arm/mach-msm/include/mach/kgsl.h
@@ -20,6 +20,7 @@
#define KGSL_CLK_MEM 0x00000008
#define KGSL_CLK_MEM_IFACE 0x00000010
#define KGSL_CLK_AXI 0x00000020
+#define KGSL_CLK_ALT_MEM_IFACE 0x00000040
#define KGSL_MAX_PWRLEVELS 10
@@ -50,9 +51,19 @@
enum kgsl_iommu_context_id ctx_id;
};
+/*
+ * struct kgsl_device_iommu_data - Struct holding iommu context data obtained
+ * from dtsi file
+ * @iommu_ctxs: Pointer to array of struct holding context name and id
+ * @iommu_ctx_count: Number of contexts defined in the dtsi file
+ * @iommu_halt_enable: Indicates if smmu halt h/w feature is supported
+ * @physstart: Start of iommu registers physical address
+ * @physend: End of iommu registers physical address
+ */
struct kgsl_device_iommu_data {
const struct kgsl_iommu_ctx *iommu_ctxs;
int iommu_ctx_count;
+ int iommu_halt_enable;
unsigned int physstart;
unsigned int physend;
};
diff --git a/arch/arm/mach-msm/include/mach/memory.h b/arch/arm/mach-msm/include/mach/memory.h
index 56c4afd..6119a3c 100644
--- a/arch/arm/mach-msm/include/mach/memory.h
+++ b/arch/arm/mach-msm/include/mach/memory.h
@@ -70,7 +70,7 @@
#ifndef __ASSEMBLY__
void *allocate_contiguous_ebi(unsigned long, unsigned long, int);
-unsigned long allocate_contiguous_ebi_nomap(unsigned long, unsigned long);
+phys_addr_t allocate_contiguous_ebi_nomap(unsigned long, unsigned long);
void clean_and_invalidate_caches(unsigned long, unsigned long, unsigned long);
void clean_caches(unsigned long, unsigned long, unsigned long);
void invalidate_caches(unsigned long, unsigned long, unsigned long);
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-zinc.h b/arch/arm/mach-msm/include/mach/msm_iomap-8084.h
similarity index 74%
rename from arch/arm/mach-msm/include/mach/msm_iomap-zinc.h
rename to arch/arm/mach-msm/include/mach/msm_iomap-8084.h
index 0a33055..43f1de0 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-zinc.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-8084.h
@@ -11,8 +11,8 @@
*/
-#ifndef __ASM_ARCH_MSM_IOMAP_zinc_H
-#define __ASM_ARCH_MSM_IOMAP_zinc_H
+#ifndef __ASM_ARCH_MSM_IOMAP_8084_H
+#define __ASM_ARCH_MSM_IOMAP_8084_H
/* Physical base address and size of peripherals.
* Ordered by the virtual base addresses they will be mapped at.
@@ -23,15 +23,15 @@
*
*/
-#define MSMZINC_SHARED_RAM_PHYS 0x0FA00000
+#define APQ8084_SHARED_RAM_PHYS 0x0FA00000
-#define MSMZINC_QGIC_DIST_PHYS 0xF9000000
-#define MSMZINC_QGIC_DIST_SIZE SZ_4K
+#define APQ8084_QGIC_DIST_PHYS 0xF9000000
+#define APQ8084_QGIC_DIST_SIZE SZ_4K
-#define MSMZINC_TLMM_PHYS 0xFD510000
-#define MSMZINC_TLMM_SIZE SZ_16K
+#define APQ8084_TLMM_PHYS 0xFD510000
+#define APQ8084_TLMM_SIZE SZ_16K
-#ifdef CONFIG_DEBUG_MSMZINC_UART
+#ifdef CONFIG_DEBUG_APQ8084_UART
#define MSM_DEBUG_UART_BASE IOMEM(0xFA71E000)
#define MSM_DEBUG_UART_PHYS 0xF991E000
#endif
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-8610.h b/arch/arm/mach-msm/include/mach/msm_iomap-8610.h
index 2a62460..18e448d 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-8610.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-8610.h
@@ -24,6 +24,9 @@
#define MSM8610_MSM_SHARED_RAM_PHYS 0x0D900000
+#define MSM8610_QGIC_DIST_PHYS 0xF9000000
+#define MSM8610_QGIC_DIST_SIZE SZ_4K
+
#define MSM8610_APCS_GCC_PHYS 0xF9011000
#define MSM8610_APCS_GCC_SIZE SZ_4K
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-9625.h b/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
index 9a8bfc1..31b19b3 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
@@ -38,8 +38,8 @@
#define MSM9625_MPM2_PSHOLD_SIZE SZ_4K
#ifdef CONFIG_DEBUG_MSM9625_UART
-#define MSM_DEBUG_UART_BASE IOMEM(0xFA71E000)
-#define MSM_DEBUG_UART_PHYS 0xF991E000
+#define MSM_DEBUG_UART_BASE IOMEM(0xFA71F000)
+#define MSM_DEBUG_UART_PHYS 0xF991F000
#endif
#endif
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap.h b/arch/arm/mach-msm/include/mach/msm_iomap.h
index f27eb36..8f48e94 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap.h
@@ -115,7 +115,8 @@
#if defined(CONFIG_ARCH_MSM9615) || defined(CONFIG_ARCH_MSM7X27) \
- || defined(CONFIG_ARCH_MSM7X30) || defined(CONFIG_ARCH_MSM9625)
+ || defined(CONFIG_ARCH_MSM7X30) || defined(CONFIG_ARCH_MSM9625) \
+ || defined(CONFIG_ARCH_MSM8610) || defined(CONFIG_ARCH_MSM8226)
#define MSM_SHARED_RAM_SIZE SZ_1M
#else
#define MSM_SHARED_RAM_SIZE SZ_2M
@@ -129,7 +130,7 @@
#include "msm_iomap-8064.h"
#include "msm_iomap-9615.h"
#include "msm_iomap-8974.h"
-#include "msm_iomap-zinc.h"
+#include "msm_iomap-8084.h"
#include "msm_iomap-9625.h"
#include "msm_iomap-8092.h"
#include "msm_iomap-8226.h"
diff --git a/arch/arm/mach-msm/include/mach/msm_smd.h b/arch/arm/mach-msm/include/mach/msm_smd.h
index a8c7bb7..d155c6f 100644
--- a/arch/arm/mach-msm/include/mach/msm_smd.h
+++ b/arch/arm/mach-msm/include/mach/msm_smd.h
@@ -279,6 +279,17 @@
*/
int smd_write_end(smd_channel_t *ch);
+/**
+ * smd_write_segment_avail() - available write space for packet transactions
+ * @ch: channel to write packet to
+ * @returns: number of bytes available to write, or -ENODEV for invalid ch
+ *
+ * This is a version of smd_write_avail() intended for use with packet
+ * transactions. This version correctly accounts for any internal reserved
+ * space at all stages of the transaction.
+ */
+int smd_write_segment_avail(smd_channel_t *ch);
+
/*
* Returns a pointer to the subsystem name or NULL if no
* subsystem name is available.
@@ -441,6 +452,11 @@
return -ENODEV;
}
+static inline int smd_write_segment_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
static inline const char *smd_edge_to_subsystem(uint32_t type)
{
return NULL;
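
The smd_write_segment_avail() helper documented above replaces smd_write_avail() inside packet transactions, as the IPC router SMD transport does later in this series. A minimal sketch of the wait pattern, assuming a caller-owned wait queue and reset flag (both hypothetical names):

#include <linux/wait.h>
#include <mach/msm_smd.h>

/*
 * Sketch: block until the current packet segment has room to write.
 * write_wait_q and ss_reset belong to the caller and are expected to
 * be woken/updated from the channel's SMD event callback.
 */
static void example_wait_for_segment_space(smd_channel_t *ch,
					   wait_queue_head_t *write_wait_q,
					   int *ss_reset)
{
	if (!smd_write_segment_avail(ch))
		smd_enable_read_intr(ch);

	wait_event(*write_wait_q,
		   smd_write_segment_avail(ch) || *ss_reset);

	smd_disable_read_intr(ch);
}
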
diff --git a/arch/arm/mach-msm/include/mach/msm_smsm.h b/arch/arm/mach-msm/include/mach/msm_smsm.h
index d983ce5..81a6399 100644
--- a/arch/arm/mach-msm/include/mach/msm_smsm.h
+++ b/arch/arm/mach-msm/include/mach/msm_smsm.h
@@ -256,6 +256,18 @@
int smsm_check_for_modem_crash(void);
void *smem_find(unsigned id, unsigned size);
void *smem_get_entry(unsigned id, unsigned *size);
+
+/**
+ * smem_virt_to_phys() - Convert SMEM address to physical address.
+ *
+ * @smem_address: Virtual address returned by smem_alloc()/smem_alloc2()
+ * @returns: Physical address (or NULL if there is a failure)
+ *
+ * This function should only be used if an SMEM item needs to be handed
+ * off to a DMA engine.
+ */
+phys_addr_t smem_virt_to_phys(void *smem_address);
+
#else
static inline void *smem_alloc(unsigned id, unsigned size)
{
@@ -339,5 +351,9 @@
{
return NULL;
}
+static inline phys_addr_t smem_virt_to_phys(void *smem_address)
+{
+	return 0;
+}
#endif
#endif
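
As the comment above notes, smem_virt_to_phys() is only meant for handing an SMEM item to a DMA engine. A short sketch under that assumption; the item ID and size are supplied by the caller:

#include <mach/msm_smsm.h>

/*
 * Sketch: allocate an SMEM item and obtain the physical address a DMA
 * engine would need. Returns 0 if the item cannot be allocated.
 */
static phys_addr_t example_smem_dma_addr(unsigned id, unsigned size)
{
	void *vaddr = smem_alloc(id, size);

	if (!vaddr)
		return 0;

	return smem_virt_to_phys(vaddr);
}
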
diff --git a/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h b/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
index 91e4ef1..ea345fb 100644
--- a/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
+++ b/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
@@ -53,19 +53,10 @@
uint8_t model_ID[128];
};
-#define ADSP_CMD_SET_DOLBY_MANUFACTURER_ID 0x00012918
-
-struct adsp_dolby_manufacturer_id {
- struct apr_hdr hdr;
- int manufacturer_id;
-};
-
int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps);
uint32_t core_get_adsp_version(void);
uint32_t core_set_dts_model_id(uint32_t id_size, uint8_t *id);
-uint32_t core_set_dolby_manufacturer_id(int manufacturer_id);
-
#endif /* __Q6CORE_H__ */
diff --git a/arch/arm/mach-msm/include/mach/qseecomi.h b/arch/arm/mach-msm/include/mach/qseecomi.h
index 20dc851..e889242 100644
--- a/arch/arm/mach-msm/include/mach/qseecomi.h
+++ b/arch/arm/mach-msm/include/mach/qseecomi.h
@@ -18,6 +18,14 @@
#define QSEECOM_KEY_ID_SIZE 32
+#define QSEOS_RESULT_FAIL_LOAD_KS -48
+#define QSEOS_RESULT_FAIL_SAVE_KS -49
+#define QSEOS_RESULT_FAIL_MAX_KEYS -50
+#define QSEOS_RESULT_FAIL_KEY_ID_EXISTS -51
+#define QSEOS_RESULT_FAIL_KEY_ID_DNE -52
+#define QSEOS_RESULT_FAIL_KS_OP -53
+#define QSEOS_RESULT_FAIL_CE_PIPE_INVALID -54
+
enum qseecom_command_scm_resp_type {
QSEOS_APP_ID = 0xEE01,
QSEOS_LISTENER_ID
@@ -49,6 +57,22 @@
QSEOS_RESULT_FAILURE = 0xFFFFFFFF
};
+/* Key Management requests */
+enum qseecom_qceos_key_gen_cmd_id {
+ QSEOS_GENERATE_KEY = 0x11,
+ QSEOS_DELETE_KEY,
+ QSEOS_MAX_KEY_COUNT,
+ QSEOS_SET_KEY,
+ QSEOS_KEY_CMD_MAX = 0xEFFFFFFF
+};
+
+enum qseecom_pipe_type {
+ QSEOS_PIPE_ENC = 0,
+ QSEOS_PIPE_ENC_XTS,
+ QSEOS_PIPE_AUTH,
+ QSEOS_PIPE_ENUM_FILL = 0x7FFFFFFF
+};
+
__packed struct qsee_apps_region_info_ireq {
uint32_t qsee_cmd_id;
uint32_t addr;
@@ -143,29 +167,24 @@
unsigned int rsp_len; /* in/out */
};
-/* Key Management requests */
-enum qseecom_qceos_key_gen_cmd_id {
- QSEOS_GENERATE_KEY = 0x02,
- QSEOS_SET_KEY,
- QSEOS_DELETE_KEY,
- QSEOS_MAX_KEY_COUNT,
- QSEOS_KEY_CMD_MAX = 0xEFFFFFFF
-};
-
__packed struct qseecom_key_generate_ireq {
+ uint32_t qsee_command_id;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
};
__packed struct qseecom_key_select_ireq {
+ uint32_t qsee_command_id;
uint32_t ce;
uint32_t pipe;
+ uint32_t pipe_type;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
unsigned char hash[QSEECOM_HASH_SIZE];
};
__packed struct qseecom_key_delete_ireq {
+ uint32_t qsee_command_id;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
};
diff --git a/arch/arm/mach-msm/include/mach/scm.h b/arch/arm/mach-msm/include/mach/scm.h
index 4258dbd..4186603 100644
--- a/arch/arm/mach-msm/include/mach/scm.h
+++ b/arch/arm/mach-msm/include/mach/scm.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -22,8 +22,8 @@
#define SCM_SVC_FUSE 0x8
#define SCM_SVC_PWR 0x9
#define SCM_SVC_MP 0xC
-#define SCM_SVC_CRYPTO 0xA
#define SCM_SVC_DCVS 0xD
+#define SCM_SVC_ES 0x10
#define SCM_SVC_TZSCHEDULER 0xFC
#ifdef CONFIG_MSM_SCM
@@ -32,6 +32,7 @@
extern s32 scm_call_atomic1(u32 svc, u32 cmd, u32 arg1);
extern s32 scm_call_atomic2(u32 svc, u32 cmd, u32 arg1, u32 arg2);
+extern s32 scm_call_atomic3(u32 svc, u32 cmd, u32 arg1, u32 arg2, u32 arg3);
extern s32 scm_call_atomic4_3(u32 svc, u32 cmd, u32 arg1, u32 arg2, u32 arg3,
u32 arg4, u32 *ret1, u32 *ret2);
@@ -59,6 +60,12 @@
return 0;
}
+static inline s32 scm_call_atomic3(u32 svc, u32 cmd, u32 arg1, u32 arg2,
+ u32 arg3)
+{
+ return 0;
+}
+
static inline s32 scm_call_atomic4_3(u32 svc, u32 cmd, u32 arg1, u32 arg2,
u32 arg3, u32 arg4, u32 *ret1, u32 *ret2)
{
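
The scm_call_atomic3() declaration added above follows the existing one- and two-argument atomic SCM call pattern. A hedged sketch using the new SCM_SVC_ES service; the command ID is a placeholder, not a real TZ command:

#include <mach/scm.h>

#define EXAMPLE_ES_CMD	0x1	/* placeholder command id */

/* Sketch: issue a three-argument atomic call to the ES service. */
static s32 example_es_scm_call(u32 arg1, u32 arg2, u32 arg3)
{
	return scm_call_atomic3(SCM_SVC_ES, EXAMPLE_ES_CMD,
				arg1, arg2, arg3);
}
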
diff --git a/arch/arm/mach-msm/include/mach/socinfo.h b/arch/arm/mach-msm/include/mach/socinfo.h
index 7c9882e..64531f0 100644
--- a/arch/arm/mach-msm/include/mach/socinfo.h
+++ b/arch/arm/mach-msm/include/mach/socinfo.h
@@ -36,18 +36,21 @@
#define of_board_is_rumi() of_machine_is_compatible("qcom,rumi")
#define of_board_is_fluid() of_machine_is_compatible("qcom,fluid")
#define of_board_is_liquid() of_machine_is_compatible("qcom,liquid")
+#define of_board_is_dragonboard() \
+ of_machine_is_compatible("qcom,dragonboard")
#define machine_is_msm8974() of_machine_is_compatible("qcom,msm8974")
#define machine_is_msm9625() of_machine_is_compatible("qcom,msm9625")
#define machine_is_msm8610() of_machine_is_compatible("qcom,msm8610")
#define machine_is_msm8226() of_machine_is_compatible("qcom,msm8226")
+#define machine_is_apq8074() of_machine_is_compatible("qcom,apq8074")
#define early_machine_is_msm8610() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msm8610")
#define early_machine_is_mpq8092() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,mpq8092")
-#define early_machine_is_msmzinc() \
- of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmzinc")
+#define early_machine_is_apq8084() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,apq8084")
#define early_machine_is_msmkrypton() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmkrypton")
#else
@@ -55,18 +58,21 @@
#define of_board_is_rumi() 0
#define of_board_is_fluid() 0
#define of_board_is_liquid() 0
+#define of_board_is_dragonboard() 0
#define machine_is_msm8974() 0
#define machine_is_msm9625() 0
#define machine_is_msm8610() 0
#define machine_is_msm8226() 0
+#define machine_is_apq8074() 0
#define early_machine_is_msm8610() 0
#define early_machine_is_mpq8092() 0
-#define early_machine_is_msmzinc() 0
+#define early_machine_is_apq8084() 0
#define early_machine_is_msmkrypton() 0
#endif
+#define PLATFORM_SUBTYPE_MDM 1
#define PLATFORM_SUBTYPE_SGLTE 6
enum msm_cpu {
@@ -102,7 +108,7 @@
MSM_CPU_8226,
MSM_CPU_8610,
MSM_CPU_8625Q,
- MSM_CPU_ZINC,
+ MSM_CPU_8084,
MSM_CPU_KRYPTON,
};
diff --git a/arch/arm/mach-msm/include/mach/subsystem_restart.h b/arch/arm/mach-msm/include/mach/subsystem_restart.h
index 67f643e..4d4703f 100644
--- a/arch/arm/mach-msm/include/mach/subsystem_restart.h
+++ b/arch/arm/mach-msm/include/mach/subsystem_restart.h
@@ -55,6 +55,7 @@
int (*powerup)(const struct subsys_desc *desc);
void (*crash_shutdown)(const struct subsys_desc *desc);
int (*ramdump)(int, const struct subsys_desc *desc);
+ unsigned int err_ready_irq;
};
#if defined(CONFIG_MSM_SUBSYSTEM_RESTART)
diff --git a/arch/arm/mach-msm/include/mach/usb_bam.h b/arch/arm/mach-msm/include/mach/usb_bam.h
index 68313c5..da7c039 100644
--- a/arch/arm/mach-msm/include/mach/usb_bam.h
+++ b/arch/arm/mach-msm/include/mach/usb_bam.h
@@ -42,6 +42,12 @@
OCI_MEM, /* Shared memory among peripherals */
};
+enum usb_bam_event_type {
+ USB_BAM_EVENT_WAKEUP_PIPE = 0, /* Wake a pipe */
+	USB_BAM_EVENT_WAKEUP,	   /* Wake a bam (first pipe woken) */
+ USB_BAM_EVENT_INACTIVITY, /* Inactivity on all pipes */
+};
+
struct usb_bam_connect_ipa_params {
u8 src_idx;
u8 dst_idx;
@@ -57,16 +63,20 @@
void *priv;
void (*notify)(void *priv, enum ipa_dp_evt_type evt,
unsigned long data);
+ int (*activity_notify)(void *priv);
+ int (*inactivity_notify)(void *priv);
};
/**
* struct usb_bam_event_info: suspend/resume event information.
+* @type: usb bam event type.
* @event: holds event data.
* @callback: suspend/resume callback.
* @param: port num (for suspend) or NULL (for resume).
* @event_w: holds work queue parameters.
*/
struct usb_bam_event_info {
+ enum usb_bam_event_type type;
struct sps_register_event event;
int (*callback)(void *);
void *param;
@@ -89,8 +99,12 @@
* @desc_fifo_size: descriptor fifo size.
* @data_mem_buf: data fifo buffer.
* @desc_mem_buf: descriptor fifo buffer.
-* @wake_event: event for wakeup.
+* @event: event for wakeup.
* @enabled: true if pipe is enabled.
+* @priv: private data passed to the activity_notify
+* or inactivity_notify callbacks.
+* @activity_notify: callback to invoke on activity on one of the in pipes.
+* @inactivity_notify: callback to invoke on inactivity on all pipes.
*/
struct usb_bam_pipe_connect {
const char *name;
@@ -109,8 +123,11 @@
u32 desc_fifo_size;
struct sps_mem_buffer data_mem_buf;
struct sps_mem_buffer desc_mem_buf;
- struct usb_bam_event_info wake_event;
+ struct usb_bam_event_info event;
bool enabled;
+ void *priv;
+ int (*activity_notify)(void *priv);
+ int (*inactivity_notify)(void *priv);
};
/**
@@ -154,7 +171,10 @@
/**
* Connect USB-to-IPA SPS connection.
*
- * This function returns the allocated pipes number and clnt handles.
+ * This function returns the allocated pipe numbers and client
+ * handles. It assumes that the user first connects producer pipes
+ * and only then consumer pipes, since that is the correct
+ * sequence for the handshake with the IPA.
*
* @ipa_params - in/out parameters
*
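
To show how the new activity_notify/inactivity_notify hooks are meant to be consumed, the sketch below fills usb_bam_connect_ipa_params with the client callbacks before the connect call. The pipe indices and callback bodies are placeholders; only the field names come from the patch.

#include <mach/usb_bam.h>

static int example_activity_cb(void *priv)
{
	/* e.g. vote for bus/clock resources, restart an inactivity timer */
	return 0;
}

static int example_inactivity_cb(void *priv)
{
	/* e.g. schedule suspend of the USB<->IPA path */
	return 0;
}

static void example_fill_ipa_params(struct usb_bam_connect_ipa_params *p,
				    void *client_ctx)
{
	p->src_idx = 0;			/* placeholder pipe indices */
	p->dst_idx = 1;
	p->priv = client_ctx;
	p->activity_notify = example_activity_cb;
	p->inactivity_notify = example_inactivity_cb;
}
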
diff --git a/arch/arm/mach-msm/io.c b/arch/arm/mach-msm/io.c
index ecac4a5..1c4a317 100644
--- a/arch/arm/mach-msm/io.c
+++ b/arch/arm/mach-msm/io.c
@@ -45,7 +45,7 @@
/* msm_shared_ram_phys default value of 0x00100000 is the most common value
* and should work as-is for any target without stacked memory.
*/
-unsigned int msm_shared_ram_phys = 0x00100000;
+phys_addr_t msm_shared_ram_phys = 0x00100000;
static void __init msm_map_io(struct map_desc *io_desc, int size)
{
@@ -321,27 +321,27 @@
}
#endif /* CONFIG_ARCH_MSM8974 */
-#ifdef CONFIG_ARCH_MSMZINC
-static struct map_desc msm_zinc_io_desc[] __initdata = {
- MSM_CHIP_DEVICE(QGIC_DIST, MSMZINC),
- MSM_CHIP_DEVICE(TLMM, MSMZINC),
+#ifdef CONFIG_ARCH_APQ8084
+static struct map_desc msm_8084_io_desc[] __initdata = {
+ MSM_CHIP_DEVICE(QGIC_DIST, APQ8084),
+ MSM_CHIP_DEVICE(TLMM, APQ8084),
{
.virtual = (unsigned long) MSM_SHARED_RAM_BASE,
.length = MSM_SHARED_RAM_SIZE,
.type = MT_DEVICE,
},
-#ifdef CONFIG_DEBUG_MSMZINC_UART
+#ifdef CONFIG_DEBUG_APQ8084_UART
MSM_DEVICE(DEBUG_UART),
#endif
};
-void __init msm_map_zinc_io(void)
+void __init msm_map_8084_io(void)
{
- msm_shared_ram_phys = MSMZINC_SHARED_RAM_PHYS;
- msm_map_io(msm_zinc_io_desc, ARRAY_SIZE(msm_zinc_io_desc));
+ msm_shared_ram_phys = APQ8084_SHARED_RAM_PHYS;
+ msm_map_io(msm_8084_io_desc, ARRAY_SIZE(msm_8084_io_desc));
of_scan_flat_dt(msm_scan_dt_map_imem, NULL);
}
-#endif /* CONFIG_ARCH_MSMZINC */
+#endif /* CONFIG_ARCH_APQ8084 */
#ifdef CONFIG_ARCH_MSM7X30
static struct map_desc msm7x30_io_desc[] __initdata = {
@@ -574,6 +574,7 @@
#ifdef CONFIG_ARCH_MSM8610
static struct map_desc msm8610_io_desc[] __initdata = {
+ MSM_CHIP_DEVICE(QGIC_DIST, MSM8610),
MSM_CHIP_DEVICE(APCS_GCC, MSM8610),
MSM_CHIP_DEVICE(TLMM, MSM8610),
MSM_CHIP_DEVICE(MPM2_PSHOLD, MSM8610),
diff --git a/arch/arm/mach-msm/iommu_domains.c b/arch/arm/mach-msm/iommu_domains.c
index 18562a3..f4ac37e 100644
--- a/arch/arm/mach-msm/iommu_domains.c
+++ b/arch/arm/mach-msm/iommu_domains.c
@@ -21,6 +21,7 @@
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_device.h>
+#include <linux/idr.h>
#include <asm/sizes.h>
#include <asm/page.h>
#include <mach/iommu.h>
@@ -36,9 +37,14 @@
int domain_num;
};
+struct msm_iommu_data_entry {
+ struct list_head list;
+ void *data;
+};
+
static struct rb_root domain_root;
DEFINE_MUTEX(domain_mutex);
-static atomic_t domain_nums = ATOMIC_INIT(-1);
+static DEFINE_IDA(domain_nums);
void msm_iommu_set_client_name(struct iommu_domain *domain, char const *name)
{
@@ -273,6 +279,27 @@
return 0;
}
+static int remove_domain(struct iommu_domain *domain)
+{
+ struct rb_root *root = &domain_root;
+ struct rb_node *n;
+ struct msm_iova_data *node;
+ int ret = -EINVAL;
+
+ mutex_lock(&domain_mutex);
+
+ for (n = rb_first(root); n; n = rb_next(n)) {
+ node = rb_entry(n, struct msm_iova_data, node);
+ if (node->domain == domain) {
+ rb_erase(&node->node, &domain_root);
+ ret = 0;
+ break;
+ }
+ }
+ mutex_unlock(&domain_mutex);
+ return ret;
+}
+
struct iommu_domain *msm_get_iommu_domain(int domain_num)
{
struct msm_iova_data *data;
@@ -307,6 +334,27 @@
}
EXPORT_SYMBOL(msm_find_domain_no);
+static struct msm_iova_data *msm_domain_to_iova_data(struct iommu_domain
+ const *domain)
+{
+ struct rb_root *root = &domain_root;
+ struct rb_node *n;
+ struct msm_iova_data *node;
+ struct msm_iova_data *iova_data = ERR_PTR(-EINVAL);
+
+ mutex_lock(&domain_mutex);
+
+ for (n = rb_first(root); n; n = rb_next(n)) {
+ node = rb_entry(n, struct msm_iova_data, node);
+ if (node->domain == domain) {
+ iova_data = node;
+ break;
+ }
+ }
+ mutex_unlock(&domain_mutex);
+ return iova_data;
+}
+
int msm_allocate_iova_address(unsigned int iommu_domain,
unsigned int partition_no,
unsigned long size,
@@ -330,7 +378,9 @@
if (!pool->gpool)
return -EINVAL;
+ mutex_lock(&pool->pool_mutex);
va = gen_pool_alloc_aligned(pool->gpool, size, ilog2(align));
+ mutex_unlock(&pool->pool_mutex);
if (va) {
pool->free -= size;
/* Offset because genpool can't handle 0 addresses */
@@ -375,7 +425,9 @@
if (pool->paddr == 0)
iova += SZ_4K;
+ mutex_lock(&pool->pool_mutex);
gen_pool_free(pool->gpool, iova, size);
+ mutex_unlock(&pool->pool_mutex);
}
int msm_register_domain(struct msm_iova_layout *layout)
@@ -393,11 +445,11 @@
if (!data)
return -ENOMEM;
- pools = kmalloc(sizeof(struct mem_pool) * layout->npartitions,
+ pools = kzalloc(sizeof(struct mem_pool) * layout->npartitions,
GFP_KERNEL);
if (!pools)
- goto out;
+ goto free_data;
for (i = 0; i < layout->npartitions; i++) {
if (layout->partitions[i].size == 0)
@@ -410,6 +462,7 @@
pools[i].paddr = layout->partitions[i].start;
pools[i].size = layout->partitions[i].size;
+ mutex_init(&pools[i].pool_mutex);
/*
* genalloc can't handle a pool starting at address 0.
@@ -435,10 +488,13 @@
data->pools = pools;
data->npools = layout->npartitions;
- data->domain_num = atomic_inc_return(&domain_nums);
+ data->domain_num = ida_simple_get(&domain_nums, 0, 0, GFP_KERNEL);
+ if (data->domain_num < 0)
+ goto free_pools;
+
data->domain = iommu_domain_alloc(bus, layout->domain_flags);
if (!data->domain)
- goto out;
+ goto free_domain_num;
msm_iommu_set_client_name(data->domain, layout->client_name);
@@ -446,13 +502,50 @@
return data->domain_num;
-out:
+free_domain_num:
+ ida_simple_remove(&domain_nums, data->domain_num);
+
+free_pools:
+ for (i = 0; i < layout->npartitions; i++) {
+ if (pools[i].gpool)
+ gen_pool_destroy(pools[i].gpool);
+ }
+ kfree(pools);
+free_data:
kfree(data);
return -EINVAL;
}
EXPORT_SYMBOL(msm_register_domain);
+int msm_unregister_domain(struct iommu_domain *domain)
+{
+ unsigned int i;
+ struct msm_iova_data *data = msm_domain_to_iova_data(domain);
+
+ if (IS_ERR_OR_NULL(data)) {
+ pr_err("%s: Could not find iova_data\n", __func__);
+ return -EINVAL;
+ }
+
+ if (remove_domain(data->domain)) {
+ pr_err("%s: Domain not found. Failed to remove domain\n",
+ __func__);
+ }
+
+ iommu_domain_free(domain);
+
+ ida_simple_remove(&domain_nums, data->domain_num);
+
+ for (i = 0; i < data->npools; ++i)
+ gen_pool_destroy(data->pools[i].gpool);
+
+ kfree(data->pools);
+ kfree(data);
+ return 0;
+}
+EXPORT_SYMBOL(msm_unregister_domain);
+
static int find_and_add_contexts(struct iommu_group *group,
const struct device_node *node,
unsigned int num_contexts)
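
With msm_unregister_domain() in place, a dynamically registered domain can now be torn down, releasing its IDA number, genpools and iommu_domain. A sketch of the register/lookup/unregister lifecycle; the layout field names are taken from the code above, while the partition struct name, IOVA range and client name are assumptions.

#include <asm/sizes.h>
#include <linux/iommu.h>
#include <mach/iommu_domains.h>

static int example_domain_lifecycle(void)
{
	struct msm_iova_partition partition = {
		.start = SZ_4K,		/* placeholder IOVA range */
		.size  = SZ_1M,
	};
	struct msm_iova_layout layout = {
		.partitions  = &partition,
		.npartitions = 1,
		.client_name = "example_client",
	};
	struct iommu_domain *domain;
	int domain_num;

	domain_num = msm_register_domain(&layout);
	if (domain_num < 0)
		return domain_num;

	domain = msm_get_iommu_domain(domain_num);

	/* ... map and use buffers against 'domain' ... */

	return msm_unregister_domain(domain);
}
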
diff --git a/arch/arm/mach-msm/ipc_router.c b/arch/arm/mach-msm/ipc_router.c
index d81dbb4..c31c969 100644
--- a/arch/arm/mach-msm/ipc_router.c
+++ b/arch/arm/mach-msm/ipc_router.c
@@ -133,15 +133,21 @@
struct msm_ipc_router_xprt_info *xprt_info;
};
+struct msm_ipc_resume_tx_port {
+ struct list_head list;
+ uint32_t port_id;
+ uint32_t node_id;
+};
+
#define RP_HASH_SIZE 32
struct msm_ipc_router_remote_port {
struct list_head list;
uint32_t node_id;
uint32_t port_id;
uint32_t restart_state;
- wait_queue_head_t quota_wait;
uint32_t tx_quota_cnt;
struct mutex quota_lock;
+ struct list_head resume_tx_port_list;
void *sec_rule;
};
@@ -657,8 +663,8 @@
rport_ptr->restart_state = RESTART_NORMAL;
rport_ptr->sec_rule = NULL;
rport_ptr->tx_quota_cnt = 0;
- init_waitqueue_head(&rport_ptr->quota_wait);
mutex_init(&rport_ptr->quota_lock);
+ INIT_LIST_HEAD(&rport_ptr->resume_tx_port_list);
list_add_tail(&rport_ptr->list,
&rt_entry->remote_port_list[key]);
mutex_unlock(&rt_entry->lock);
@@ -666,6 +672,97 @@
return rport_ptr;
}
+/**
+ * msm_ipc_router_free_resume_tx_port() - Free the resume_tx ports
+ * @rport_ptr: Pointer to the remote port.
+ *
+ * This function deletes all the resume_tx ports associated with a remote port
+ * and frees the memory allocated to each resume_tx port.
+ *
+ * Must be called with rport_ptr->quota_lock locked.
+ */
+static void msm_ipc_router_free_resume_tx_port(
+ struct msm_ipc_router_remote_port *rport_ptr)
+{
+ struct msm_ipc_resume_tx_port *rtx_port, *tmp_rtx_port;
+
+ list_for_each_entry_safe(rtx_port, tmp_rtx_port,
+ &rport_ptr->resume_tx_port_list, list) {
+ list_del(&rtx_port->list);
+ kfree(rtx_port);
+ }
+}
+
+/**
+ * msm_ipc_router_lookup_resume_tx_port() - Lookup resume_tx port list
+ * @rport_ptr: Remote port whose resume_tx port list needs to be searched.
+ * @port_id: Port ID to look up in the list.
+ *
+ * Return 1 if the port_id is found in the list, else 0.
+ *
+ * This function is used to look up a local port in a remote port's
+ * resume_tx list and to ensure that the same port is not added to the
+ * remote port's resume_tx list repeatedly.
+ *
+ * Must be called with rport_ptr->quota_lock locked.
+ */
+static int msm_ipc_router_lookup_resume_tx_port(
+ struct msm_ipc_router_remote_port *rport_ptr, uint32_t port_id)
+{
+ struct msm_ipc_resume_tx_port *rtx_port;
+
+ list_for_each_entry(rtx_port, &rport_ptr->resume_tx_port_list, list) {
+ if (port_id == rtx_port->port_id)
+ return 1;
+ }
+ return 0;
+}
+
+/**
+ * post_resume_tx() - Post the resume_tx event
+ * @rport_ptr: Pointer to the remote port
+ * @pkt: The data packet received with the resume_tx event
+ *
+ * This function notifies all the local ports on the resume_tx_port_list of
+ * the remote port pointed to by rport_ptr that a resume_tx message has been
+ * received from that remote port. After posting the notification, this
+ * function sequentially deletes each entry in the resume_tx_port_list of
+ * the remote port.
+ *
+ * Must be called with rport_ptr->quota_lock locked.
+ */
+static void post_resume_tx(struct msm_ipc_router_remote_port *rport_ptr,
+ struct rr_packet *pkt)
+{
+ struct msm_ipc_resume_tx_port *rtx_port, *tmp_rtx_port;
+ struct msm_ipc_port *local_port;
+ struct rr_packet *cloned_pkt;
+
+ list_for_each_entry_safe(rtx_port, tmp_rtx_port,
+ &rport_ptr->resume_tx_port_list, list) {
+ mutex_lock(&local_ports_lock);
+ local_port =
+ msm_ipc_router_lookup_local_port(rtx_port->port_id);
+ if (local_port) {
+ cloned_pkt = clone_pkt(pkt);
+ if (cloned_pkt) {
+ mutex_lock(&local_port->port_rx_q_lock);
+ list_add_tail(&cloned_pkt->list,
+ &local_port->port_rx_q);
+ wake_up(&local_port->port_rx_wait_q);
+ mutex_unlock(&local_port->port_rx_q_lock);
+ } else {
+ pr_err("%s: Clone_pkt failed for %08x:%08x\n",
+ __func__, local_port->this_port.node_id,
+ local_port->this_port.port_id);
+ }
+ }
+ mutex_unlock(&local_ports_lock);
+ list_del(&rtx_port->list);
+ kfree(rtx_port);
+ }
+}
+
static void msm_ipc_router_destroy_remote_port(
struct msm_ipc_router_remote_port *rport_ptr)
{
@@ -683,7 +780,9 @@
pr_err("%s: Node %d is not up\n", __func__, node_id);
return;
}
-
+ mutex_lock(&rport_ptr->quota_lock);
+ msm_ipc_router_free_resume_tx_port(rport_ptr);
+ mutex_unlock(&rport_ptr->quota_lock);
mutex_lock(&rt_entry->lock);
list_del(&rport_ptr->list);
kfree(rport_ptr);
@@ -1158,7 +1257,7 @@
}
mutex_lock(&rport_ptr->quota_lock);
rport_ptr->restart_state = RESTART_PEND;
- wake_up(&rport_ptr->quota_wait);
+ msm_ipc_router_free_resume_tx_port(rport_ptr);
mutex_unlock(&rport_ptr->quota_lock);
return;
}
@@ -1240,7 +1339,8 @@
continue;
mutex_lock(&rport_ptr->quota_lock);
rport_ptr->restart_state = RESTART_PEND;
- wake_up(&rport_ptr->quota_wait);
+ msm_ipc_router_free_resume_tx_port(
+ rport_ptr);
mutex_unlock(&rport_ptr->quota_lock);
ctl.cli.node_id = rport_ptr->node_id;
ctl.cli.port_id = rport_ptr->port_id;
@@ -1531,8 +1631,8 @@
}
mutex_lock(&rport_ptr->quota_lock);
rport_ptr->tx_quota_cnt = 0;
+ post_resume_tx(rport_ptr, pkt);
mutex_unlock(&rport_ptr->quota_lock);
- wake_up(&rport_ptr->quota_wait);
break;
case IPC_ROUTER_CTRL_CMD_NEW_SERVER:
@@ -1587,9 +1687,11 @@
if (!rport_ptr)
pr_err("%s: Remote port create "
"failed\n", __func__);
- rport_ptr->sec_rule =
- msm_ipc_get_security_rule(
- msg->srv.service, msg->srv.instance);
+ else
+ rport_ptr->sec_rule =
+ msm_ipc_get_security_rule(
+ msg->srv.service,
+ msg->srv.instance);
}
wake_up(&newserver_wait);
}
@@ -1890,6 +1992,7 @@
head_skb = skb_peek(pkt->pkt_fragment_q);
if (!head_skb) {
pr_err("%s: pkt_fragment_q is empty\n", __func__);
+ release_pkt(pkt);
return -EINVAL;
}
hdr = (struct rr_header *)skb_push(head_skb, IPC_ROUTER_HDR_SIZE);
@@ -1936,8 +2039,8 @@
struct rr_header *hdr;
struct msm_ipc_router_xprt_info *xprt_info;
struct msm_ipc_routing_table_entry *rt_entry;
+ struct msm_ipc_resume_tx_port *resume_tx_port;
int ret;
- DEFINE_WAIT(__wait);
if (!rport_ptr || !src || !pkt)
return -EINVAL;
@@ -1962,29 +2065,33 @@
hdr->dst_port_id = rport_ptr->port_id;
pkt->length += IPC_ROUTER_HDR_SIZE;
- for (;;) {
- prepare_to_wait(&rport_ptr->quota_wait, &__wait,
- TASK_INTERRUPTIBLE);
- mutex_lock(&rport_ptr->quota_lock);
- if (rport_ptr->restart_state != RESTART_NORMAL)
- break;
- if (rport_ptr->tx_quota_cnt <
- IPC_ROUTER_DEFAULT_RX_QUOTA)
- break;
- if (signal_pending(current))
- break;
- mutex_unlock(&rport_ptr->quota_lock);
- schedule();
- }
- finish_wait(&rport_ptr->quota_wait, &__wait);
-
+ mutex_lock(&rport_ptr->quota_lock);
if (rport_ptr->restart_state != RESTART_NORMAL) {
mutex_unlock(&rport_ptr->quota_lock);
return -ENETRESET;
}
- if (signal_pending(current)) {
+ if (rport_ptr->tx_quota_cnt == IPC_ROUTER_DEFAULT_RX_QUOTA) {
+ if (msm_ipc_router_lookup_resume_tx_port(
+ rport_ptr, src->this_port.port_id)) {
+ mutex_unlock(&rport_ptr->quota_lock);
+ return -EAGAIN;
+ }
+ resume_tx_port =
+ kzalloc(sizeof(struct msm_ipc_resume_tx_port),
+ GFP_KERNEL);
+ if (!resume_tx_port) {
+ pr_err("%s: Resume_Tx port allocation failed\n",
+ __func__);
+ mutex_unlock(&rport_ptr->quota_lock);
+ return -ENOMEM;
+ }
+ INIT_LIST_HEAD(&resume_tx_port->list);
+ resume_tx_port->port_id = src->this_port.port_id;
+ resume_tx_port->node_id = src->this_port.node_id;
+ list_add_tail(&resume_tx_port->list,
+ &rport_ptr->resume_tx_port_list);
mutex_unlock(&rport_ptr->quota_lock);
- return -ERESTARTSYS;
+ return -EAGAIN;
}
rport_ptr->tx_quota_cnt++;
if (rport_ptr->tx_quota_cnt == IPC_ROUTER_DEFAULT_RX_QUOTA)
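
The quota handling above no longer blocks the sender: when the remote port's RX quota is exhausted, the write path queues the local port on resume_tx_port_list and returns -EAGAIN, and post_resume_tx() later wakes the port when the RESUME_TX control message arrives. The retry loop this implies, sketched with hypothetical helper names (the real callers are the router and socket layers):

/*
 * Sketch of the caller-side retry behaviour implied by the resume_tx
 * rework. example_router_send() and example_wait_for_resume_tx() are
 * illustrative stand-ins, not functions added by this patch.
 */
static int example_send_with_resume_tx(struct msm_ipc_port *port,
				       struct sk_buff_head *msg)
{
	int ret;

	for (;;) {
		ret = example_router_send(port, msg);
		if (ret != -EAGAIN)
			return ret;

		/*
		 * -EAGAIN: the destination's RX quota is exhausted and this
		 * port is now on its resume_tx_port_list. Block until the
		 * RESUME_TX control message is posted to this port's RX
		 * queue, then try again.
		 */
		example_wait_for_resume_tx(port);
	}
}
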
diff --git a/arch/arm/mach-msm/ipc_router_smd_xprt.c b/arch/arm/mach-msm/ipc_router_smd_xprt.c
index 88ab8e0..b2ec816 100644
--- a/arch/arm/mach-msm/ipc_router_smd_xprt.c
+++ b/arch/arm/mach-msm/ipc_router_smd_xprt.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -146,11 +146,11 @@
skb_queue_walk(pkt->pkt_fragment_q, ipc_rtr_pkt) {
offset = 0;
while (offset < ipc_rtr_pkt->len) {
- if (!smd_write_avail(smd_xprtp->channel))
+ if (!smd_write_segment_avail(smd_xprtp->channel))
smd_enable_read_intr(smd_xprtp->channel);
wait_event(smd_xprtp->write_avail_wait_q,
- (smd_write_avail(smd_xprtp->channel) ||
+ (smd_write_segment_avail(smd_xprtp->channel) ||
smd_xprtp->ss_reset));
smd_disable_read_intr(smd_xprtp->channel);
spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
@@ -175,11 +175,11 @@
}
if (align_sz) {
- if (smd_write_avail(smd_xprtp->channel) < align_sz)
+ if (smd_write_segment_avail(smd_xprtp->channel) < align_sz)
smd_enable_read_intr(smd_xprtp->channel);
wait_event(smd_xprtp->write_avail_wait_q,
- ((smd_write_avail(smd_xprtp->channel) >=
+ ((smd_write_segment_avail(smd_xprtp->channel) >=
align_sz) || smd_xprtp->ss_reset));
smd_disable_read_intr(smd_xprtp->channel);
spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
@@ -357,7 +357,7 @@
if (smd_read_avail(smd_xprtp->channel))
queue_delayed_work(smd_xprtp->smd_xprt_wq,
&smd_xprtp->read_work, 0);
- if (smd_write_avail(smd_xprtp->channel))
+ if (smd_write_segment_avail(smd_xprtp->channel))
wake_up(&smd_xprtp->write_avail_wait_q);
break;
diff --git a/arch/arm/mach-msm/ipc_socket.c b/arch/arm/mach-msm/ipc_socket.c
index c0422a1..64d8fed 100644
--- a/arch/arm/mach-msm/ipc_socket.c
+++ b/arch/arm/mach-msm/ipc_socket.c
@@ -197,6 +197,7 @@
struct sockaddr_msm_ipc *addr;
struct rr_header *hdr;
struct sk_buff *temp;
+ union rr_control_msg *ctl_msg;
int offset = 0, data_len = 0, copy_len;
if (!m || !msg_head) {
@@ -207,6 +208,16 @@
temp = skb_peek(msg_head);
hdr = (struct rr_header *)(temp->data);
+ if (addr && (hdr->type == IPC_ROUTER_CTRL_CMD_RESUME_TX)) {
+ skb_pull(temp, IPC_ROUTER_HDR_SIZE);
+ ctl_msg = (union rr_control_msg *)(temp->data);
+ addr->family = AF_MSM_IPC;
+ addr->address.addrtype = MSM_IPC_ADDR_ID;
+ addr->address.addr.port_addr.node_id = ctl_msg->cli.node_id;
+ addr->address.addr.port_addr.port_id = ctl_msg->cli.port_id;
+ m->msg_namelen = sizeof(struct sockaddr_msm_ipc);
+ return offset;
+ }
if (addr && (hdr->src_port_id != IPC_ROUTER_ADDRESS)) {
addr->family = AF_MSM_IPC;
addr->address.addrtype = MSM_IPC_ADDR_ID;
@@ -367,7 +378,8 @@
if (port_ptr->type == CLIENT_PORT)
wait_for_irsc_completion();
ipc_buf = skb_peek(msg);
- msm_ipc_router_ipc_log(IPC_SEND, ipc_buf, port_ptr);
+ if (ipc_buf)
+ msm_ipc_router_ipc_log(IPC_SEND, ipc_buf, port_ptr);
ret = msm_ipc_router_send_to(port_ptr, msg, &dest->address);
if (ret == (IPC_ROUTER_HDR_SIZE + total_len))
ret = total_len;
@@ -414,8 +426,10 @@
return -EFAULT;
}
- if (timeout == 0)
+ if (timeout == 0) {
+ m->msg_namelen = 0;
return 0;
+ }
lock_sock(sk);
mutex_lock(&port_ptr->port_rx_q_lock);
}
@@ -429,7 +443,8 @@
ret = msm_ipc_router_extract_msg(m, msg);
ipc_buf = skb_peek(msg);
- msm_ipc_router_ipc_log(IPC_RECV, ipc_buf, port_ptr);
+ if (ipc_buf)
+ msm_ipc_router_ipc_log(IPC_RECV, ipc_buf, port_ptr);
msm_ipc_router_release_msg(msg);
msg = NULL;
release_sock(sk);
diff --git a/arch/arm/mach-msm/krait-regulator.c b/arch/arm/mach-msm/krait-regulator.c
index b931ee6..9014f37 100644
--- a/arch/arm/mach-msm/krait-regulator.c
+++ b/arch/arm/mach-msm/krait-regulator.c
@@ -133,6 +133,7 @@
#define LDO_DELTA_MIN 10000
#define LDO_DELTA_MAX 100000
+#define MSM_L2_SAW_PHYS 0xf9012000
/**
* struct pmic_gang_vreg -
* @name: the string used to represent the gang
@@ -377,6 +378,21 @@
return online_total;
}
+static int get_total_load(struct krait_power_vreg *from)
+{
+ int load_total = 0;
+ struct krait_power_vreg *kvreg;
+ struct pmic_gang_vreg *pvreg = from->pvreg;
+
+ list_for_each_entry(kvreg, &pvreg->krait_power_vregs, link) {
+ if (!kvreg->online)
+ continue;
+ load_total += kvreg->load;
+ }
+
+ return load_total;
+}
+
static bool enable_phase_management(struct pmic_gang_vreg *pvreg)
{
struct krait_power_vreg *kvreg;
@@ -395,7 +411,8 @@
#define ONE_PHASE_COEFF 1000000
#define TWO_PHASE_COEFF 2000000
-#define PHASE_SETTLING_TIME_US 10
+#define PWM_SETTLING_TIME_US 50
+#define PHASE_SETTLING_TIME_US 50
static unsigned int pmic_gang_set_phases(struct krait_power_vreg *from,
int coeff_total)
{
@@ -403,6 +420,9 @@
int phase_count;
int rc = 0;
int n_online = num_online(pvreg);
+ int load_total;
+
+ load_total = get_total_load(from);
if (pvreg->manage_phases == false) {
if (enable_phase_management(pvreg))
@@ -412,12 +432,12 @@
}
/* First check if the coeff is low for PFM mode */
- if (coeff_total < pvreg->pfm_threshold && n_online == 1) {
+ if (load_total <= pvreg->pfm_threshold && n_online == 1) {
if (!pvreg->pfm_mode) {
rc = msm_spm_enable_fts_lpm(PMIC_FTS_MODE_PFM);
if (rc) {
- pr_err("%s PFM en failed coeff_t %d rc = %d\n",
- from->name, coeff_total, rc);
+ pr_err("%s PFM en failed load_t %d rc = %d\n",
+ from->name, load_total, rc);
return rc;
} else {
pvreg->pfm_mode = true;
@@ -435,6 +455,7 @@
return rc;
} else {
pvreg->pfm_mode = false;
+ udelay(PWM_SETTLING_TIME_US);
}
}
@@ -530,9 +551,15 @@
if (kvreg->mode == HS_MODE)
return 0;
/* enable bhs */
+ krait_masked_write(kvreg, APC_PWR_GATE_CTL, BHS_EN_MASK, BHS_EN_MASK);
+ /* complete the above write before the delay */
+ mb();
+ /* wait for the bhs to settle */
+ udelay(BHS_SETTLING_DELAY_US);
+
+ /* Turn on BHS segments */
krait_masked_write(kvreg, APC_PWR_GATE_CTL,
- BHS_SEG_EN_MASK | BHS_EN_MASK,
- BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS | BHS_EN_MASK);
+ BHS_SEG_EN_MASK, BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS);
/* complete the above write before the delay */
mb();
@@ -1309,35 +1336,53 @@
void secondary_cpu_hs_init(void *base_ptr)
{
+ uint32_t reg_val;
+ void *l2_saw_base;
+
/* Turn on the BHS, turn off LDO Bypass and power down LDO */
- writel_relaxed(
- BHS_CNT_DEFAULT << BHS_CNT_BIT_POS
+ reg_val = BHS_CNT_DEFAULT << BHS_CNT_BIT_POS
| LDO_PWR_DWN_MASK
| CLK_SRC_DEFAULT << CLK_SRC_SEL_BIT_POS
- | BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS
- | BHS_EN_MASK,
- base_ptr + APC_PWR_GATE_CTL);
+ | BHS_EN_MASK;
+ writel_relaxed(reg_val, base_ptr + APC_PWR_GATE_CTL);
/* complete the above write before the delay */
mb();
+ /* wait for the bhs to settle */
+ udelay(BHS_SETTLING_DELAY_US);
- /*
- * wait for the bhs to settle
- */
+ /* Turn on BHS segments */
+ reg_val |= BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS;
+ writel_relaxed(reg_val, base_ptr + APC_PWR_GATE_CTL);
+
+ /* complete the above write before the delay */
+ mb();
+ /* wait for the bhs to settle */
udelay(BHS_SETTLING_DELAY_US);
/* Finally turn on the bypass so that BHS supplies power */
- writel_relaxed(
- BHS_CNT_DEFAULT << BHS_CNT_BIT_POS
- | LDO_PWR_DWN_MASK
- | CLK_SRC_DEFAULT << CLK_SRC_SEL_BIT_POS
- | LDO_BYP_MASK
- | BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS
- | BHS_EN_MASK,
- base_ptr + APC_PWR_GATE_CTL);
+ reg_val |= LDO_BYP_MASK;
+ writel_relaxed(reg_val, base_ptr + APC_PWR_GATE_CTL);
+
+ if (the_gang && the_gang->manage_phases)
+ return;
+
+ /*
+ * If the driver has not yet started to manage phases then enable
+ * max phases.
+ */
+ l2_saw_base = ioremap_nocache(MSM_L2_SAW_PHYS, SZ_4K);
+ if (!l2_saw_base) {
+ __WARN();
+ return;
+ }
+ writel_relaxed(0x10003, l2_saw_base + 0x1c);
+ mb();
+ udelay(PHASE_SETTLING_TIME_US);
+
+ iounmap(l2_saw_base);
}
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("KRAIT POWER regulator driver");
-MODULE_VERSION("1.0");
MODULE_ALIAS("platform:"KRAIT_REGULATOR_DRIVER_NAME);
diff --git a/arch/arm/mach-msm/memory.c b/arch/arm/mach-msm/memory.c
index 5f85ee9..1680993 100644
--- a/arch/arm/mach-msm/memory.c
+++ b/arch/arm/mach-msm/memory.c
@@ -234,7 +234,7 @@
}
EXPORT_SYMBOL(allocate_contiguous_ebi);
-unsigned long allocate_contiguous_ebi_nomap(unsigned long size,
+phys_addr_t allocate_contiguous_ebi_nomap(unsigned long size,
unsigned long align)
{
return _allocate_contiguous_memory_nomap(size, get_ebi_memtype(),
diff --git a/arch/arm/mach-msm/mpm-of.c b/arch/arm/mach-msm/mpm-of.c
index fcd7cef..5c654b0 100644
--- a/arch/arm/mach-msm/mpm-of.c
+++ b/arch/arm/mach-msm/mpm-of.c
@@ -377,7 +377,7 @@
if (!msm_mpm_is_initialized())
return -EINVAL;
- if (pin > MSM_MPM_NR_MPM_IRQS)
+ if (pin >= MSM_MPM_NR_MPM_IRQS)
return -EINVAL;
spin_lock_irqsave(&msm_mpm_lock, flags);
@@ -767,7 +767,7 @@
return;
failed_malloc:
- for (i = 0; i < MSM_MPM_NR_MPM_IRQS; i++) {
+ for (i = 0; i < MSM_MPM_NR_IRQ_DOMAINS; i++) {
mpm_of_map[i].chip->irq_mask = NULL;
mpm_of_map[i].chip->irq_unmask = NULL;
mpm_of_map[i].chip->irq_disable = NULL;
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_arb.c b/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
index 4adfe4d..5002a7d 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -314,7 +314,7 @@
struct msm_bus_inode_info *info;
int next_pnode;
int64_t add_bw = req_bw - curr_bw;
- unsigned bwsum = 0;
+ uint64_t bwsum = 0;
uint64_t req_clk_hz, curr_clk_hz, bwsum_hz;
int *master_tiers;
struct msm_bus_fabric_device *fabdev = msm_bus_get_fabric_device
@@ -398,7 +398,7 @@
/* Update Bandwidth */
fabdev->algo->update_bw(fabdev, hop, info, add_bw,
master_tiers, ctx);
- bwsum = (uint16_t)*hop->link_info.sel_bw;
+ bwsum = *hop->link_info.sel_bw;
/* Update Fabric clocks */
curr_clk_hz = BW_TO_CLK_FREQ_HZ(hop->node_info->buswidth,
curr_clk);
@@ -574,6 +574,10 @@
curr = client->curr;
pdata = client->pdata;
+ if (!pdata) {
+ MSM_BUS_ERR("Null pdata passed to update-request\n");
+ return -ENXIO;
+ }
if (index >= pdata->num_usecases) {
MSM_BUS_ERR("Client %u passed invalid index: %d\n",
@@ -718,7 +722,7 @@
{
int i, src, pnode, index;
struct msm_bus_client *client = (struct msm_bus_client *)(cl);
- if (IS_ERR(client)) {
+ if (IS_ERR_OR_NULL(client)) {
MSM_BUS_ERR("msm_bus_scale_reset_pnodes error\n");
return;
}
@@ -739,7 +743,7 @@
void msm_bus_scale_unregister_client(uint32_t cl)
{
struct msm_bus_client *client = (struct msm_bus_client *)(cl);
- if (IS_ERR(client) || (!client))
+ if (IS_ERR_OR_NULL(client))
return;
if (client->curr != 0)
msm_bus_scale_client_update_request(cl, 0);
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
index 3c8348d..cd6693e 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
@@ -477,9 +477,11 @@
#define M_MODE_ADDR(b, n) \
(M_REG_BASE(b) + (0x4000 * (n)) + 0x00000210)
enum bimc_m_mode {
- M_MODE_RMSK = 0xf0000001,
+ M_MODE_RMSK = 0xf0000011,
M_MODE_WR_GATHER_BEATS_BMSK = 0xf0000000,
M_MODE_WR_GATHER_BEATS_SHFT = 0x1c,
+ M_MODE_NARROW_WR_BMSK = 0x10,
+ M_MODE_NARROW_WR_SHFT = 0x4,
M_MODE_ORDERING_MODEL_BMSK = 0x1,
M_MODE_ORDERING_MODEL_SHFT = 0x0,
};
@@ -1526,10 +1528,10 @@
reg_val = readl_relaxed(M_PRIOLVL_OVERRIDE_ADDR(binfo->
base, mas_index)) & M_PRIOLVL_OVERRIDE_RMSK;
val = qmode->fixed.prio_level <<
- M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_SHFT;
+ M_PRIOLVL_OVERRIDE_SHFT;
writel_relaxed(((reg_val &
- ~(M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_BMSK)) | (val
- & M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_BMSK)),
+ ~(M_PRIOLVL_OVERRIDE_BMSK)) | (val
+ & M_PRIOLVL_OVERRIDE_BMSK)),
M_PRIOLVL_OVERRIDE_ADDR(binfo->base, mas_index));
reg_val = readl_relaxed(M_RD_CMD_OVERRIDE_ADDR(binfo->
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_board_8064.c b/arch/arm/mach-msm/msm_bus/msm_bus_board_8064.c
index b45efad..ff3dc21 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_board_8064.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_board_8064.c
@@ -437,10 +437,7 @@
},
};
-static int mport_mdp[] = {
- MSM_BUS_MASTER_PORT_MDP_PORT0,
- MSM_BUS_MASTER_PORT_MDP_PORT1,
-};
+static int mport_mdp[] = {MSM_BUS_MASTER_PORT_MDP_PORT0,};
static int mport_mdp1[] = {MSM_BUS_MASTER_PORT_MDP_PORT1,};
static int mport_rotator[] = {MSM_BUS_MASTER_PORT_ROTATOR,};
static int mport_graphics_3d_port0[] = {MSM_BUS_MASTER_PORT_GRAPHICS_3D_PORT0,};
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_board_8930.c b/arch/arm/mach-msm/msm_bus/msm_bus_board_8930.c
index 91d106e..af51355 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_board_8930.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_board_8930.c
@@ -377,10 +377,7 @@
},
};
-static int mport_mdp[] = {
- MSM_BUS_MASTER_PORT_MDP_PORT0,
- MSM_BUS_MASTER_PORT_MDP_PORT1,
-};
+static int mport_mdp[] = {MSM_BUS_MASTER_PORT_MDP_PORT0,};
static int mport_mdp1[] = {MSM_BUS_MASTER_PORT_MDP_PORT1,};
static int mport_rotator[] = {MSM_BUS_MASTER_PORT_ROTATOR,};
static int mport_graphics_3d[] = {MSM_BUS_MASTER_PORT_GRAPHICS_3D,};
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_board_8960.c b/arch/arm/mach-msm/msm_bus/msm_bus_board_8960.c
index 158dee3..295b91b 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_board_8960.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_board_8960.c
@@ -440,10 +440,7 @@
},
};
-static int mport_mdp[] = {
- MSM_BUS_MASTER_PORT_MDP_PORT0,
- MSM_BUS_MASTER_PORT_MDP_PORT1,
-};
+static int mport_mdp[] = {MSM_BUS_MASTER_PORT_MDP_PORT0,};
static int mport_mdp1[] = {MSM_BUS_MASTER_PORT_MDP_PORT1,};
static int mport_rotator[] = {MSM_BUS_MASTER_PORT_ROTATOR,};
static int mport_graphics_3d[] = {MSM_BUS_MASTER_PORT_GRAPHICS_3D,};
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_core.h b/arch/arm/mach-msm/msm_bus/msm_bus_core.h
index fd2dbb5..98419a4 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_core.h
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_core.h
@@ -36,7 +36,7 @@
(((slv >= MSM_BUS_SLAVE_FIRST) && (slv <= MSM_BUS_SLAVE_LAST)) ? 1 : 0)
#define INTERLEAVED_BW(fab_pdata, bw, ports) \
- ((fab_pdata->il_flag) ? msm_bus_div64((bw), (ports)) : (bw))
+ ((fab_pdata->il_flag) ? msm_bus_div64((ports), (bw)) : (bw))
#define INTERLEAVED_VAL(fab_pdata, n) \
((fab_pdata->il_flag) ? (n) : 1)
diff --git a/arch/arm/mach-msm/msm_watchdog.c b/arch/arm/mach-msm/msm_watchdog.c
index b1c8b30..dcfe13c 100644
--- a/arch/arm/mach-msm/msm_watchdog.c
+++ b/arch/arm/mach-msm/msm_watchdog.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -23,6 +23,7 @@
#include <linux/suspend.h>
#include <linux/percpu.h>
#include <linux/interrupt.h>
+#include <linux/reboot.h>
#include <asm/fiq.h>
#include <asm/hardware/gic.h>
#include <mach/msm_iomap.h>
@@ -64,6 +65,12 @@
module_param(enable, int, 0);
/*
+ * Watchdog bark reboot timeout in seconds.
+ * Can be specified on the kernel command line.
+ */
+static int reboot_bark_timeout = 22;
+module_param(reboot_bark_timeout, int, 0644);
+/*
* If the watchdog is enabled at bootup (enable=1),
* the runtime_disable sysfs node at
* /sys/module/msm_watchdog/runtime_disable
@@ -154,6 +161,27 @@
.notifier_call = panic_wdog_handler,
};
+#define get_sclk_hz(t_ms) ((t_ms / 1000) * WDT_HZ)
+#define get_reboot_bark_timeout(t_s) ((t_s * MSEC_PER_SEC) < bark_time ? \
+ get_sclk_hz(bark_time) : get_sclk_hz(t_s * MSEC_PER_SEC))
+
+static int msm_watchdog_reboot_notifier(struct notifier_block *this,
+ unsigned long code, void *unused)
+{
+
+ u64 timeout = get_reboot_bark_timeout(reboot_bark_timeout);
+ __raw_writel(timeout, msm_wdt_base + WDT_BARK_TIME);
+ __raw_writel(timeout + 3 * WDT_HZ,
+ msm_wdt_base + WDT_BITE_TIME);
+ __raw_writel(1, msm_wdt_base + WDT_RST);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block msm_reboot_notifier = {
+ .notifier_call = msm_watchdog_reboot_notifier,
+};
+
struct wdog_disable_work_data {
struct work_struct work;
struct completion complete;
@@ -177,6 +205,7 @@
}
enable = 0;
atomic_notifier_chain_unregister(&panic_notifier_list, &panic_blk);
+ unregister_reboot_notifier(&msm_reboot_notifier);
cancel_delayed_work(&dogwork_struct);
/* may be suspended after the first write above */
__raw_writel(0, msm_wdt_base + WDT_EN);
@@ -373,6 +402,10 @@
atomic_notifier_chain_register(&panic_notifier_list,
&panic_blk);
+ ret = register_reboot_notifier(&msm_reboot_notifier);
+ if (ret)
+ pr_err("Failed to register reboot notifier\n");
+
__raw_writel(1, msm_wdt_base + WDT_EN);
__raw_writel(1, msm_wdt_base + WDT_RST);
last_pet = sched_clock();
@@ -395,6 +428,11 @@
}
bark_time = pdata->bark_time;
+ /* reboot_bark_timeout (in seconds) might have been supplied as a
+ * module parameter.
+ */
+ if ((reboot_bark_timeout * MSEC_PER_SEC) < bark_time)
+ reboot_bark_timeout = (bark_time / MSEC_PER_SEC);
has_vic = pdata->has_vic;
if (!pdata->has_secure) {
appsbark = 1;
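
The msm_watchdog.c change above reprograms the bark and bite timers from a
reboot notifier, taking the larger of reboot_bark_timeout and the platform
bark_time and converting it to sleep-clock ticks. A minimal standalone sketch
of that arithmetic, assuming WDT_HZ is the 32768 Hz sleep clock and using a
hypothetical bark_time of 11000 ms (not a value from this patch):

	/* Sketch only: mirrors get_sclk_hz()/get_reboot_bark_timeout() above. */
	#include <stdio.h>

	#define WDT_HZ        32768U          /* assumed sleep-clock rate */
	#define MSEC_PER_SEC  1000U
	#define get_sclk_hz(t_ms)  (((t_ms) / 1000U) * WDT_HZ)

	int main(void)
	{
		unsigned int reboot_bark_timeout = 22;   /* seconds, default above */
		unsigned int bark_time = 11000;          /* ms, hypothetical value */
		unsigned int t_ms = reboot_bark_timeout * MSEC_PER_SEC;
		unsigned int bark = (t_ms < bark_time) ? get_sclk_hz(bark_time)
						       : get_sclk_hz(t_ms);

		/* the bite fires 3 seconds (3 * WDT_HZ ticks) after the bark */
		printf("bark = %u ticks, bite = %u ticks\n", bark, bark + 3 * WDT_HZ);
		return 0;
	}
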
diff --git a/arch/arm/mach-msm/ocmem_core.c b/arch/arm/mach-msm/ocmem_core.c
index 4ea1de9..4ed4eda 100644
--- a/arch/arm/mach-msm/ocmem_core.c
+++ b/arch/arm/mach-msm/ocmem_core.c
@@ -97,9 +97,12 @@
#define REGION_SLEEP_PERI_ON 0x00007777
#define REGION_DEFAULT_OFF REGION_SLEEP_NO_RETENTION
-#define REGION_DEFAULT_ON REGION_NORMAL_PASSTHROUGH
+#define REGION_DEFAULT_ON REGION_FORCE_CORE_ON
#define REGION_DEFAULT_RETENTION REGION_SLEEP_PERI_OFF
+#define REGION_STAGING_SET(x) (REGION_SLEEP_PERI_OFF & (0xF << (x * 0x4)))
+#define REGION_ON_SET(x) (REGION_DEFAULT_OFF & (0xF << (x * 0x4)))
+
enum rpm_macro_state {
rpm_macro_off = 0x0,
rpm_macro_retain,
@@ -195,6 +198,51 @@
return state;
}
+#ifndef CONFIG_MSM_OCMEM_POWER_DISABLE
+static int commit_region_staging(unsigned region_num, unsigned start_m,
+ unsigned new_state)
+{
+ int rc = -1;
+ unsigned state = 0x0;
+ unsigned read_state = 0x0;
+ unsigned curr_state = 0x0;
+
+ if (region_num >= num_regions)
+ return -EINVAL;
+
+ if (rpm_power_control)
+ return 0;
+ else {
+ if (new_state != REGION_DEFAULT_OFF) {
+ curr_state = read_region_state(region_num);
+ state = curr_state | REGION_STAGING_SET(start_m);
+ rc = ocmem_write(state,
+ ocmem_base + PSCGC_CTL_n(region_num));
+ /* Barrier to commit the region state */
+ mb();
+ read_state = read_region_state(region_num);
+ if (new_state == REGION_DEFAULT_ON) {
+ curr_state = read_region_state(region_num);
+ state = curr_state ^ REGION_ON_SET(start_m);
+ rc = ocmem_write(state,
+ ocmem_base + PSCGC_CTL_n(region_num));
+ /* Barrier to commit the region state */
+ mb();
+ read_state = read_region_state(region_num);
+ }
+ } else {
+ curr_state = read_region_state(region_num);
+ state = curr_state ^ REGION_STAGING_SET(start_m);
+ rc = ocmem_write(state,
+ ocmem_base + PSCGC_CTL_n(region_num));
+ /* Barrier to commit the region state */
+ mb();
+ read_state = read_region_state(region_num);
+ }
+ }
+ return 0;
+}
+#endif
/* Returns the current state of a OCMEM macro that belongs to a region */
static int read_macro_state(unsigned region_num, unsigned macro_num)
@@ -948,9 +996,11 @@
apply_macro_vote(id, i, j,
hw_macro_state(new_state));
aggregate_macro_state(i, j);
+ commit_region_staging(i, j, new_state);
}
aggregate_region_state(i);
- commit_region_state(i);
+ if (rpm_power_control)
+ commit_region_state(i);
len -= region_size;
/* If we voted ON/retain the banks must never be OFF */
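
The REGION_STAGING_SET()/REGION_ON_SET() helpers added to ocmem_core.c above
select one 4-bit field per memory macro out of a region-wide power-state word.
A small sketch of that nibble selection, reusing the REGION_SLEEP_PERI_ON
pattern visible above purely for illustration:

	#include <stdio.h>

	#define REGION_SLEEP_PERI_ON 0x00007777
	/* Same shape as REGION_STAGING_SET(x): keep only macro x's nibble. */
	#define NIBBLE_FOR_MACRO(pattern, x) ((pattern) & (0xF << ((x) * 4)))

	int main(void)
	{
		unsigned int x;

		/* One nibble per macro: 0x0007, 0x0070, 0x0700, 0x7000 */
		for (x = 0; x < 4; x++)
			printf("macro %u -> 0x%04x\n", x,
			       NIBBLE_FOR_MACRO(REGION_SLEEP_PERI_ON, x));
		return 0;
	}
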
diff --git a/arch/arm/mach-msm/ocmem_sched.c b/arch/arm/mach-msm/ocmem_sched.c
index 8afe695..a14b960 100644
--- a/arch/arm/mach-msm/ocmem_sched.c
+++ b/arch/arm/mach-msm/ocmem_sched.c
@@ -1515,6 +1515,18 @@
pr_err("ocmem: Failed to unmap %p\n", req);
goto free_fail;
}
+ /* Turn off the memory */
+ if (req->req_sz != 0) {
+
+ offset = phys_to_offset(req->req_start);
+ rc = ocmem_memory_off(req->owner, offset,
+ req->req_sz);
+
+ if (rc < 0) {
+ pr_err("Failed to switch OFF memory macros\n");
+ goto free_fail;
+ }
+ }
rc = do_free(req);
if (rc < 0) {
@@ -1525,21 +1537,19 @@
pr_debug("request %p was already shrunk to 0\n", req);
}
- /* Turn off the memory */
- if (req->req_sz != 0) {
+ if (!TEST_STATE(req, R_FREE)) {
+ /* Turn off the memory */
+ if (req->req_sz != 0) {
- offset = phys_to_offset(req->req_start);
+ offset = phys_to_offset(req->req_start);
+ rc = ocmem_memory_off(req->owner, offset, req->req_sz);
- rc = ocmem_memory_off(req->owner, offset, req->req_sz);
-
- if (rc < 0) {
- pr_err("Failed to switch OFF memory macros\n");
- goto free_fail;
+ if (rc < 0) {
+ pr_err("Failed to switch OFF memory macros\n");
+ goto free_fail;
+ }
}
- }
-
- if (!TEST_STATE(req, R_FREE)) {
/* free the allocation */
rc = do_free(req);
if (rc < 0)
diff --git a/arch/arm/mach-msm/peripheral-loader.c b/arch/arm/mach-msm/peripheral-loader.c
index 056da7d..475e8a1 100644
--- a/arch/arm/mach-msm/peripheral-loader.c
+++ b/arch/arm/mach-msm/peripheral-loader.c
@@ -290,10 +290,12 @@
static void pil_dump_segs(const struct pil_priv *priv)
{
struct pil_seg *seg;
+ phys_addr_t seg_h_paddr;
list_for_each_entry(seg, &priv->segs, list) {
- pil_info(priv->desc, "%d: %#08zx %#08lx\n", seg->num,
- seg->paddr, seg->paddr + seg->sz);
+ seg_h_paddr = seg->paddr + seg->sz;
+ pil_info(priv->desc, "%d: %pa %pa\n", seg->num,
+ &seg->paddr, &seg_h_paddr);
}
}
@@ -322,7 +324,7 @@
return 0;
}
}
- pil_err(priv->desc, "entry address %08zx not within range\n", entry);
+ pil_err(priv->desc, "entry address %pa not within range\n", &entry);
pil_dump_segs(priv);
return -EADDRNOTAVAIL;
}
@@ -489,7 +491,8 @@
static int pil_load_seg(struct pil_desc *desc, struct pil_seg *seg)
{
- int ret = 0, count, paddr;
+ int ret = 0, count;
+ phys_addr_t paddr;
char fw_name[30];
const struct firmware *fw = NULL;
const u8 *data;
diff --git a/arch/arm/mach-msm/peripheral-loader.h b/arch/arm/mach-msm/peripheral-loader.h
index ff10fe5..5aeeaf3 100644
--- a/arch/arm/mach-msm/peripheral-loader.h
+++ b/arch/arm/mach-msm/peripheral-loader.h
@@ -55,7 +55,8 @@
int (*init_image)(struct pil_desc *pil, const u8 *metadata,
size_t size);
int (*mem_setup)(struct pil_desc *pil, phys_addr_t addr, size_t size);
- int (*verify_blob)(struct pil_desc *pil, u32 phy_addr, size_t size);
+ int (*verify_blob)(struct pil_desc *pil, phys_addr_t phy_addr,
+ size_t size);
int (*proxy_vote)(struct pil_desc *pil);
int (*auth_and_reset)(struct pil_desc *pil);
void (*proxy_unvote)(struct pil_desc *pil);
diff --git a/arch/arm/mach-msm/pil-gss.c b/arch/arm/mach-msm/pil-gss.c
index d44add6..65f86bc 100644
--- a/arch/arm/mach-msm/pil-gss.c
+++ b/arch/arm/mach-msm/pil-gss.c
@@ -203,7 +203,7 @@
{
struct gss_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
void __iomem *cbase = drv->cbase;
int ret;
diff --git a/arch/arm/mach-msm/pil-modem.c b/arch/arm/mach-msm/pil-modem.c
index 30f480a..8398206 100644
--- a/arch/arm/mach-msm/pil-modem.c
+++ b/arch/arm/mach-msm/pil-modem.c
@@ -93,7 +93,7 @@
{
u32 reg;
const struct modem_data *drv = dev_get_drvdata(pil->dev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Put modem AHB0,1,2 clocks into reset */
writel_relaxed(BIT(0) | BIT(1), drv->cbase + MAHB0_SFAB_PORT_RESET);
diff --git a/arch/arm/mach-msm/pil-pronto.c b/arch/arm/mach-msm/pil-pronto.c
index 0df8739..b186a4d 100644
--- a/arch/arm/mach-msm/pil-pronto.c
+++ b/arch/arm/mach-msm/pil-pronto.c
@@ -123,7 +123,7 @@
int rc;
struct pronto_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Deassert reset to subsystem and wait for propagation */
reg = readl_relaxed(drv->reset_base);
@@ -515,6 +515,17 @@
drv->subsys_desc.start = pronto_start;
drv->subsys_desc.stop = pronto_stop;
+ ret = of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-err-ready", 0);
+ if (ret < 0)
+ return ret;
+
+ ret = gpio_to_irq(ret);
+ if (ret < 0)
+ return ret;
+
+ drv->subsys_desc.err_ready_irq = ret;
+
INIT_DELAYED_WORK(&drv->cancel_vote_work, wcnss_post_bootup);
drv->subsys = subsys_register(&drv->subsys_desc);
diff --git a/arch/arm/mach-msm/pil-q6v3.c b/arch/arm/mach-msm/pil-q6v3.c
index 0575a3d..a369878 100644
--- a/arch/arm/mach-msm/pil-q6v3.c
+++ b/arch/arm/mach-msm/pil-q6v3.c
@@ -116,7 +116,7 @@
{
u32 reg;
struct q6v3_data *drv = dev_get_drvdata(pil->dev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Put Q6 into reset */
reg = readl_relaxed(drv->cbase + LCC_Q6_FUNC);
diff --git a/arch/arm/mach-msm/pil-q6v4.c b/arch/arm/mach-msm/pil-q6v4.c
index 29d14dd..51f7aa2 100644
--- a/arch/arm/mach-msm/pil-q6v4.c
+++ b/arch/arm/mach-msm/pil-q6v4.c
@@ -130,7 +130,7 @@
{
u32 reg, err;
const struct q6v4_data *drv = pil_to_q6v4_data(pil);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Enable Q6 ACLK */
writel_relaxed(0x10, drv->aclk_reg);
diff --git a/arch/arm/mach-msm/pil-q6v5-lpass.c b/arch/arm/mach-msm/pil-q6v5-lpass.c
index 3b2bbf3..04c1be3 100644
--- a/arch/arm/mach-msm/pil-q6v5-lpass.c
+++ b/arch/arm/mach-msm/pil-q6v5-lpass.c
@@ -22,6 +22,7 @@
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/sysfs.h>
+#include <linux/of_gpio.h>
#include <mach/clk.h>
#include <mach/subsystem_restart.h>
@@ -50,6 +51,8 @@
void *wcnss_notif_hdle;
void *modem_notif_hdle;
int crash_shutdown;
+ unsigned int err_fatal_irq;
+ int force_stop_gpio;
};
#define subsys_to_drv(d) container_of(d, struct lpass_data, subsys_desc)
@@ -124,7 +127,7 @@
static int pil_lpass_reset(struct pil_desc *pil)
{
struct q6v5_data *drv = container_of(pil, struct q6v5_data, desc);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
int ret;
/* Deassert reset to subsystem and wait for propagation */
@@ -259,20 +262,19 @@
restart_adsp(drv);
}
-static void adsp_smsm_state_cb(void *data, uint32_t old_state,
- uint32_t new_state)
+static irqreturn_t adsp_err_fatal_intr_handler(int irq, void *dev_id)
{
- struct lpass_data *drv = data;
+ struct lpass_data *drv = dev_id;
- /* Ignore if we're the one that set SMSM_RESET */
+ /* Ignore if we're the one that set the force stop bit in the outbound
+ * entry
+ */
if (drv->crash_shutdown)
- return;
+ return IRQ_HANDLED;
- if (new_state & SMSM_RESET) {
- pr_err("%s: ADSP SMSM state changed to SMSM_RESET, new_state = %#x, old_state = %#x\n",
- __func__, new_state, old_state);
- restart_adsp(drv);
- }
+ pr_err("Fatal error on the ADSP!\n");
+ restart_adsp(drv);
+ return IRQ_HANDLED;
}
#define SCM_Q6_NMI_CMD 0x1
@@ -356,6 +358,7 @@
struct lpass_data *drv = subsys_to_lpass(subsys);
drv->crash_shutdown = 1;
+ gpio_set_value(drv->force_stop_gpio, 1);
send_q6_nmi();
}
@@ -388,7 +391,7 @@
struct q6v5_data *q6;
struct pil_desc *desc;
struct resource *res;
- int ret;
+ int ret, gpio_clk_ready;
drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
if (!drv)
@@ -399,6 +402,23 @@
if (drv->wdog_irq < 0)
return drv->wdog_irq;
+ ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-err-fatal", 0));
+ if (ret < 0)
+ return ret;
+ drv->err_fatal_irq = ret;
+
+ ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-proxy-unvote", 0));
+ if (ret < 0)
+ return ret;
+ gpio_clk_ready = ret;
+
+ drv->force_stop_gpio = of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-force-stop", 0);
+ if (drv->force_stop_gpio < 0)
+ return drv->force_stop_gpio;
+
q6 = pil_q6v5_init(pdev);
if (IS_ERR(q6))
return PTR_ERR(q6);
@@ -407,6 +427,7 @@
desc = &q6->desc;
desc->owner = THIS_MODULE;
desc->proxy_timeout = PROXY_TIMEOUT_MS;
+ desc->proxy_unvote_irq = gpio_clk_ready;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "restart_reg");
q6->restart_reg = devm_request_and_ioremap(&pdev->dev, res);
@@ -470,10 +491,12 @@
if (ret)
goto err_irq;
- ret = smsm_state_cb_register(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
- if (ret < 0)
- goto err_smsm;
+ ret = devm_request_irq(&pdev->dev, drv->err_fatal_irq,
+ adsp_err_fatal_intr_handler,
+ IRQF_TRIGGER_RISING,
+ dev_name(&pdev->dev), drv);
+ if (ret)
+ goto err_irq;
drv->wcnss_notif_hdle = subsys_notif_register_notifier("wcnss", &wnb);
if (IS_ERR(drv->wcnss_notif_hdle)) {
@@ -507,9 +530,6 @@
err_notif_modem:
subsys_notif_unregister_notifier(drv->wcnss_notif_hdle, &wnb);
err_notif_wcnss:
- smsm_state_cb_deregister(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
-err_smsm:
err_irq:
subsys_unregister(drv->subsys);
err_subsys:
@@ -524,8 +544,6 @@
struct lpass_data *drv = platform_get_drvdata(pdev);
subsys_notif_unregister_notifier(drv->wcnss_notif_hdle, &wnb);
subsys_notif_unregister_notifier(drv->modem_notif_hdle, &mnb);
- smsm_state_cb_deregister(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
subsys_unregister(drv->subsys);
destroy_ramdump_device(drv->ramdump_dev);
pil_desc_release(&drv->q6->desc);
diff --git a/arch/arm/mach-msm/pil-q6v5-mss.c b/arch/arm/mach-msm/pil-q6v5-mss.c
index 8f7d262..3ce1283 100644
--- a/arch/arm/mach-msm/pil-q6v5-mss.c
+++ b/arch/arm/mach-msm/pil-q6v5-mss.c
@@ -243,7 +243,7 @@
struct q6v5_data *drv = container_of(pil, struct q6v5_data, desc);
struct platform_device *pdev = to_platform_device(pil->dev);
struct mba_data *mba = platform_get_drvdata(pdev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
int ret;
/*
@@ -402,7 +402,7 @@
return ret;
}
-static int pil_mba_verify_blob(struct pil_desc *pil, u32 phy_addr,
+static int pil_mba_verify_blob(struct pil_desc *pil, phys_addr_t phy_addr,
size_t size)
{
struct mba_data *drv = dev_get_drvdata(pil->dev);
@@ -608,8 +608,15 @@
if (ret)
return ret;
ret = pil_boot(&drv->desc);
- if (ret)
+ if (ret) {
pil_shutdown(&drv->q6->desc);
+ /*
+ * We know now that the unvote interrupt is not coming.
+ * Remove the proxy votes immediately.
+ */
+ if (drv->q6->desc.proxy_unvote_irq)
+ pil_q6v5_mss_remove_proxy_votes(&drv->q6->desc);
+ }
return ret;
}
@@ -643,6 +650,17 @@
drv->subsys_desc.start = mss_start;
drv->subsys_desc.stop = mss_stop;
+ ret = of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-err-ready", 0);
+ if (ret < 0)
+ return ret;
+
+ ret = gpio_to_irq(ret);
+ if (ret < 0)
+ return ret;
+
+ drv->subsys_desc.err_ready_irq = ret;
+
drv->subsys = subsys_register(&drv->subsys_desc);
if (IS_ERR(drv->subsys)) {
ret = PTR_ERR(drv->subsys);
diff --git a/arch/arm/mach-msm/pil-q6v5.c b/arch/arm/mach-msm/pil-q6v5.c
index 0263faf..c6add8f 100644
--- a/arch/arm/mach-msm/pil-q6v5.c
+++ b/arch/arm/mach-msm/pil-q6v5.c
@@ -46,7 +46,9 @@
#define Q6SS_CLK_ENA BIT(1)
/* QDSP6SS_PWR_CTL */
-#define Q6SS_L2DATA_SLP_NRET_N (BIT(0)|BIT(1)|BIT(2))
+#define Q6SS_L2DATA_SLP_NRET_N_0 BIT(0)
+#define Q6SS_L2DATA_SLP_NRET_N_1 BIT(1)
+#define Q6SS_L2DATA_SLP_NRET_N_2 BIT(2)
#define Q6SS_L2TAG_SLP_NRET_N BIT(16)
#define Q6SS_ETB_SLP_NRET_N BIT(17)
#define Q6SS_L2DATA_STBY_N BIT(18)
@@ -160,7 +162,8 @@
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
/* Turn off Q6 memories */
- val &= ~(Q6SS_L2DATA_SLP_NRET_N | Q6SS_SLP_RET_N |
+ val &= ~(Q6SS_L2DATA_SLP_NRET_N_0 | Q6SS_L2DATA_SLP_NRET_N_1 |
+ Q6SS_L2DATA_SLP_NRET_N_2 | Q6SS_SLP_RET_N |
Q6SS_L2TAG_SLP_NRET_N | Q6SS_ETB_SLP_NRET_N |
Q6SS_L2DATA_STBY_N);
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
@@ -194,11 +197,19 @@
mb();
udelay(1);
- /* Turn on memories */
+ /*
+ * Turn on memories. L2 banks should be done individually
+ * to minimize inrush current.
+ */
val = readl_relaxed(drv->reg_base + QDSP6SS_PWR_CTL);
- val |= Q6SS_L2DATA_SLP_NRET_N | Q6SS_SLP_RET_N |
- Q6SS_L2TAG_SLP_NRET_N | Q6SS_ETB_SLP_NRET_N |
- Q6SS_L2DATA_STBY_N;
+ val |= Q6SS_SLP_RET_N | Q6SS_L2TAG_SLP_NRET_N |
+ Q6SS_ETB_SLP_NRET_N | Q6SS_L2DATA_STBY_N;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_2;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_1;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_0;
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
/* Remove IO clamp */
diff --git a/arch/arm/mach-msm/pil-riva.c b/arch/arm/mach-msm/pil-riva.c
index a2665b4..d72b848 100644
--- a/arch/arm/mach-msm/pil-riva.c
+++ b/arch/arm/mach-msm/pil-riva.c
@@ -134,7 +134,7 @@
u32 reg, sel;
struct riva_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
void __iomem *cbase = drv->cbase;
bool use_cxo = cxo_is_needed(drv);
diff --git a/arch/arm/mach-msm/pm-8x60.c b/arch/arm/mach-msm/pm-8x60.c
index cd1eaaf..4fca346 100644
--- a/arch/arm/mach-msm/pm-8x60.c
+++ b/arch/arm/mach-msm/pm-8x60.c
@@ -29,6 +29,7 @@
#include <linux/platform_device.h>
#include <linux/of_platform.h>
#include <linux/regulator/krait-regulator.h>
+#include <linux/cpu.h>
#include <mach/msm_iomap.h>
#include <mach/socinfo.h>
#include <mach/system.h>
@@ -56,7 +57,6 @@
#include <mach/event_timer.h>
#define CREATE_TRACE_POINTS
#include "trace_msm_low_power.h"
-
#define SCM_L2_RETENTION (0x2)
#define SCM_CMD_TERMINATE_PC (0x2)
@@ -64,7 +64,6 @@
(container_of(attr, struct msm_pm_kobj_attribute, ka)->cpu)
#define SCLK_HZ (32768)
-#define MSM_PM_SLEEP_TICK_LIMIT (0x6DDD000)
#define NUM_OF_COUNTERS 3
#define MAX_BUF_SIZE 512
@@ -127,9 +126,9 @@
static bool msm_pm_use_sync_timer;
static struct msm_pm_cp15_save_data cp15_data;
static bool msm_pm_retention_calls_tz;
-static uint32_t msm_pm_max_sleep_time;
static bool msm_no_ramp_down_pc;
static struct msm_pm_sleep_status_data *msm_pm_slp_sts;
+static bool msm_pm_pc_reset_timer;
static int msm_pm_get_pc_mode(struct device_node *node,
const char *key, uint32_t *pc_mode_val)
@@ -404,39 +403,6 @@
return;
}
-/*
- * Convert time from nanoseconds to slow clock ticks, then cap it to the
- * specified limit
- */
-static int64_t msm_pm_convert_and_cap_time(int64_t time_ns, int64_t limit)
-{
- do_div(time_ns, NSEC_PER_SEC / SCLK_HZ);
- return (time_ns > limit) ? limit : time_ns;
-}
-
-/*
- * Set the sleep time for suspend. 0 means infinite sleep time.
- */
-void msm_pm_set_max_sleep_time(int64_t max_sleep_time_ns)
-{
- if (max_sleep_time_ns == 0) {
- msm_pm_max_sleep_time = 0;
- } else {
- msm_pm_max_sleep_time =
- (uint32_t)msm_pm_convert_and_cap_time(
- max_sleep_time_ns, MSM_PM_SLEEP_TICK_LIMIT);
-
- if (msm_pm_max_sleep_time == 0)
- msm_pm_max_sleep_time = 1;
- }
-
- if (MSM_PM_DEBUG_SUSPEND & msm_pm_debug_mask)
- pr_info("%s: Requested %lld ns Giving %u sclk ticks\n",
- __func__, max_sleep_time_ns,
- msm_pm_max_sleep_time);
-}
-EXPORT_SYMBOL(msm_pm_set_max_sleep_time);
-
static void msm_pm_save_cpu_reg(void)
{
int i;
@@ -525,9 +491,14 @@
if (MSM_PM_DEBUG_RESET_VECTOR & msm_pm_debug_mask)
pr_info("CPU%u: %s: program vector to %p\n",
cpu, __func__, entry);
+ if (from_idle && msm_pm_pc_reset_timer)
+ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu);
collapsed = msm_pm_collapse();
+ if (from_idle && msm_pm_pc_reset_timer)
+ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
+
msm_pm_boot_config_after_pc(cpu);
if (collapsed) {
@@ -863,9 +834,8 @@
int exit_stat = -1;
enum msm_pm_sleep_mode sleep_mode;
void *msm_pm_idle_rs_limits = NULL;
- int sleep_delay = 1;
+ uint32_t sleep_delay = 1;
int ret = -ENODEV;
- int64_t timer_expiration = 0;
int notify_rpm = false;
bool timer_halted = false;
@@ -885,10 +855,8 @@
if (sleep_mode == MSM_PM_SLEEP_MODE_POWER_COLLAPSE) {
notify_rpm = true;
- timer_expiration = msm_pm_timer_enter_idle();
+ sleep_delay = (uint32_t)msm_pm_timer_enter_idle();
- sleep_delay = (uint32_t) msm_pm_convert_and_cap_time(
- timer_expiration, MSM_PM_SLEEP_TICK_LIMIT);
if (sleep_delay == 0) /* 0 would mean infinite time */
sleep_delay = 1;
}
@@ -1075,6 +1043,7 @@
void *rs_limits = NULL;
int ret = -ENODEV;
uint32_t power;
+ uint32_t msm_pm_max_sleep_time = 0;
if (MSM_PM_DEBUG_SUSPEND & msm_pm_debug_mask)
pr_info("%s: power collapse\n", __func__);
@@ -1084,8 +1053,8 @@
if (msm_pm_sleep_time_override > 0) {
int64_t ns = NSEC_PER_SEC *
(int64_t) msm_pm_sleep_time_override;
- msm_pm_set_max_sleep_time(ns);
- msm_pm_sleep_time_override = 0;
+ do_div(ns, NSEC_PER_SEC / SCLK_HZ);
+ msm_pm_max_sleep_time = (uint32_t) ns;
}
if (pm_sleep_ops.lowest_limits)
@@ -1277,6 +1246,36 @@
}
core_initcall(msm_pm_setup_saved_state);
+static void setup_broadcast_timer(void *arg)
+{
+ unsigned long reason = (unsigned long)arg;
+ int cpu = smp_processor_id();
+
+ reason = reason ?
+ CLOCK_EVT_NOTIFY_BROADCAST_ON : CLOCK_EVT_NOTIFY_BROADCAST_OFF;
+
+ clockevents_notify(reason, &cpu);
+}
+
+static int setup_broadcast_cpuhp_notify(struct notifier_block *n,
+ unsigned long action, void *hcpu)
+{
+ int hotcpu = (unsigned long)hcpu;
+
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_ONLINE:
+ smp_call_function_single(hotcpu, setup_broadcast_timer,
+ (void *)true, 1);
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block setup_broadcast_notifier = {
+ .notifier_call = setup_broadcast_cpuhp_notify,
+};
+
static int __init msm_pm_init(void)
{
enum msm_pm_time_stats_id enable_stats[] = {
@@ -1292,6 +1291,14 @@
hrtimer_init(&pm_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
msm_cpuidle_init();
+ if (msm_pm_pc_reset_timer) {
+ get_cpu();
+ smp_call_function_many(cpu_online_mask, setup_broadcast_timer,
+ (void *)true, 1);
+ put_cpu();
+ register_cpu_notifier(&setup_broadcast_notifier);
+ }
+
return 0;
}
@@ -1361,7 +1368,7 @@
if (!data)
return -EINVAL;
- if (!bufu || count < 0)
+ if (!bufu)
return -EINVAL;
if (!access_ok(VERIFY_WRITE, bufu, count))
@@ -1476,6 +1483,10 @@
key = "qcom,saw-turns-off-pll";
msm_no_ramp_down_pc = of_property_read_bool(pdev->dev.of_node,
key);
+
+ key = "qcom,pc-resets-timer";
+ msm_pm_pc_reset_timer = of_property_read_bool(
+ pdev->dev.of_node, key);
}
if (pdata_local.cp15_data.reg_data &&
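
In the pm-8x60.c suspend path above, the user-supplied sleep-time override is
converted from nanoseconds to sleep-clock ticks with
do_div(ns, NSEC_PER_SEC / SCLK_HZ). A self-contained sketch of the same
conversion, assuming SCLK_HZ = 32768 as defined earlier in that file and a
hypothetical 10-second override:

	#include <stdio.h>
	#include <stdint.h>

	#define SCLK_HZ       32768ULL
	#define NSEC_PER_SEC  1000000000ULL

	int main(void)
	{
		uint64_t override_s = 10;                 /* hypothetical override */
		uint64_t ns = override_s * NSEC_PER_SEC;

		/* Equivalent of do_div(ns, NSEC_PER_SEC / SCLK_HZ) */
		uint64_t ticks = ns / (NSEC_PER_SEC / SCLK_HZ);

		printf("%llu s -> %llu sclk ticks\n",
		       (unsigned long long)override_s, (unsigned long long)ticks);
		return 0;
	}
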
diff --git a/arch/arm/mach-msm/pm-data.c b/arch/arm/mach-msm/pm-data.c
index 249032f..f41c569 100644
--- a/arch/arm/mach-msm/pm-data.c
+++ b/arch/arm/mach-msm/pm-data.c
@@ -125,4 +125,11 @@
.idle_enabled = 1,
.suspend_enabled = 0,
},
+
+ [MSM_PM_MODE(3, MSM_PM_SLEEP_MODE_NR)] = {
+ .idle_supported = 0,
+ .suspend_supported = 0,
+ .idle_enabled = 0,
+ .suspend_enabled = 0,
+ },
};
diff --git a/arch/arm/mach-msm/qdsp6v2/Makefile b/arch/arm/mach-msm/qdsp6v2/Makefile
index 88e2894..6bd3efb 100644
--- a/arch/arm/mach-msm/qdsp6v2/Makefile
+++ b/arch/arm/mach-msm/qdsp6v2/Makefile
@@ -12,7 +12,7 @@
obj-$(CONFIG_FB_MSM_HDMI_MSM_PANEL) += lpa_if_hdmi.o
endif
obj-$(CONFIG_MSM_QDSP6_APR) += apr.o apr_v1.o apr_tal.o q6core.o dsp_debug.o
-obj-$(CONFIG_MSM_QDSP6_APRV2) += apr.o apr_v2.o apr_tal.o q6core.o dsp_debug.o
+obj-$(CONFIG_MSM_QDSP6_APRV2) += apr.o apr_v2.o apr_tal.o dsp_debug.o
ifdef CONFIG_ARCH_MSM9615
obj-y += audio_acdb.o
obj-y += rtac.o
diff --git a/arch/arm/mach-msm/qdsp6v2/q6core.c b/arch/arm/mach-msm/qdsp6v2/q6core.c
index 6594b08..fd699df 100644
--- a/arch/arm/mach-msm/qdsp6v2/q6core.c
+++ b/arch/arm/mach-msm/qdsp6v2/q6core.c
@@ -69,12 +69,12 @@
switch (payload1[0]) {
case ADSP_CMD_SET_POWER_COLLAPSE_STATE:
- pr_info("Cmd = ADSP_CMD_SET_POWER_COLLAPSE_STATE"
- " status[0x%x]\n", payload1[1]);
+ pr_info("Cmd = ADSP_CMD_SET_POWER_COLLAPSE_STATE status[0x%x]\n",
+ payload1[1]);
break;
case ADSP_CMD_REMOTE_BUS_BW_REQUEST:
- pr_info("%s: cmd = ADSP_CMD_REMOTE_BUS_BW_REQUEST"
- " status = 0x%x\n", __func__, payload1[1]);
+ pr_info("%s: cmd = ADSP_CMD_REMOTE_BUS_BW_REQUEST status = 0x%x\n",
+ __func__, payload1[1]);
bus_bw_resp_received = 1;
wake_up(&bus_bw_req_wait);
@@ -160,10 +160,9 @@
core_handle_q = apr_register("ADSP", "CORE",
aprv2_core_fn_q, 0xFFFFFFFF, NULL);
}
- pr_info("Open_q %p\n", core_handle_q);
- if (core_handle_q == NULL) {
+ pr_debug("Open_q %p\n", core_handle_q);
+ if (core_handle_q == NULL)
pr_err("%s: Unable to register CORE\n", __func__);
- }
}
int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps)
@@ -352,7 +351,7 @@
pc.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
pc.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE,
- sizeof(uint32_t));;
+ sizeof(uint32_t));
pc.hdr.src_port = 0;
pc.hdr.dest_port = 0;
pc.hdr.token = 0;
@@ -413,32 +412,6 @@
return rc;
}
-uint32_t core_set_dolby_manufacturer_id(int manufacturer_id)
-{
- struct adsp_dolby_manufacturer_id payload;
- int rc = 0;
- pr_debug("%s manufacturer_id :%d\n", __func__, manufacturer_id);
- core_open();
- if (core_handle_q) {
- payload.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
- APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
- payload.hdr.pkt_size =
- sizeof(struct adsp_dolby_manufacturer_id);
- payload.hdr.src_port = 0;
- payload.hdr.dest_port = 0;
- payload.hdr.token = 0;
- payload.hdr.opcode = ADSP_CMD_SET_DOLBY_MANUFACTURER_ID;
- payload.manufacturer_id = manufacturer_id;
- pr_debug("Send Dolby security opcode=%x manufacturer ID = %d\n",
- payload.hdr.opcode, payload.manufacturer_id);
- rc = apr_send_pkt(core_handle_q, (uint32_t *)&payload);
- if (rc < 0)
- pr_err("%s: SET_DOLBY_MANUFACTURER_ID failed op[0x%x]rc[%d]\n",
- __func__, payload.hdr.opcode, rc);
- }
- return rc;
-}
-
static const struct file_operations apr_debug_fops = {
.write = apr_debug_write,
.open = apr_debug_open,
diff --git a/arch/arm/mach-msm/qdsp6v2/ultrasound/version_b/q6usm_b.c b/arch/arm/mach-msm/qdsp6v2/ultrasound/version_b/q6usm_b.c
index ff7ba33..a3af3e78 100644
--- a/arch/arm/mach-msm/qdsp6v2/ultrasound/version_b/q6usm_b.c
+++ b/arch/arm/mach-msm/qdsp6v2/ultrasound/version_b/q6usm_b.c
@@ -17,7 +17,7 @@
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/msm_audio.h>
-#include <sound/apr_audio.h>
+#include <sound/apr_audio-v2.h>
#include <mach/qdsp6v2/apr_us_b.h>
#include "q6usm.h"
diff --git a/arch/arm/mach-msm/remote_spinlock.c b/arch/arm/mach-msm/remote_spinlock.c
index 94923a0..62e3e05 100644
--- a/arch/arm/mach-msm/remote_spinlock.c
+++ b/arch/arm/mach-msm/remote_spinlock.c
@@ -143,6 +143,7 @@
}
/* end dekkers implementation ----------------------------------------------- */
+#ifndef CONFIG_THUMB2_KERNEL
/* swp implementation ------------------------------------------------------- */
static void __raw_remote_swp_spin_lock(raw_remote_spinlock_t *lock)
{
@@ -194,6 +195,7 @@
: "cc");
}
/* end swp implementation --------------------------------------------------- */
+#endif
/* ldrex implementation ----------------------------------------------------- */
static char *ldrex_compatible_string = "qcom,ipc-spinlock-ldrex";
@@ -431,6 +433,7 @@
current_ops.owner = __raw_remote_dek_spin_owner;
is_hw_lock_type = 0;
break;
+#ifndef CONFIG_THUMB2_KERNEL
case SWP_MODE:
current_ops.lock = __raw_remote_swp_spin_lock;
current_ops.unlock = __raw_remote_swp_spin_unlock;
@@ -439,6 +442,7 @@
current_ops.owner = __raw_remote_gen_spin_owner;
is_hw_lock_type = 0;
break;
+#endif
case LDREX_MODE:
current_ops.lock = __raw_remote_ex_spin_lock;
current_ops.unlock = __raw_remote_ex_spin_unlock;
diff --git a/arch/arm/mach-msm/rpm-smd.c b/arch/arm/mach-msm/rpm-smd.c
index b84ade9..6ed80f6 100644
--- a/arch/arm/mach-msm/rpm-smd.c
+++ b/arch/arm/mach-msm/rpm-smd.c
@@ -673,7 +673,7 @@
static struct msm_rpm_wait_data *msm_rpm_get_entry_from_msg_id(uint32_t msg_id)
{
struct list_head *ptr;
- struct msm_rpm_wait_data *elem;
+ struct msm_rpm_wait_data *elem = NULL;
unsigned long flags;
spin_lock_irqsave(&msm_rpm_list_lock, flags);
@@ -739,7 +739,7 @@
static void msm_rpm_process_ack(uint32_t msg_id, int errno)
{
struct list_head *ptr;
- struct msm_rpm_wait_data *elem;
+ struct msm_rpm_wait_data *elem = NULL;
unsigned long flags;
spin_lock_irqsave(&msm_rpm_list_lock, flags);
diff --git a/arch/arm/mach-msm/rpm.c b/arch/arm/mach-msm/rpm.c
index 5128b44..f9ac00f 100644
--- a/arch/arm/mach-msm/rpm.c
+++ b/arch/arm/mach-msm/rpm.c
@@ -310,7 +310,7 @@
unsigned long flags;
uint32_t ctx_mask = msm_rpm_get_ctx_mask(ctx);
uint32_t ctx_mask_ack = 0;
- uint32_t sel_masks_ack[SEL_MASK_SIZE];
+ uint32_t sel_masks_ack[SEL_MASK_SIZE] = {0};
int i;
msm_rpm_request_irq_mode.req = req;
@@ -369,7 +369,7 @@
unsigned long flags;
uint32_t ctx_mask = msm_rpm_get_ctx_mask(ctx);
uint32_t ctx_mask_ack = 0;
- uint32_t sel_masks_ack[SEL_MASK_SIZE];
+ uint32_t sel_masks_ack[SEL_MASK_SIZE] = {0};
struct irq_chip *irq_chip, *err_chip;
int i;
diff --git a/arch/arm/mach-msm/rpm_log.c b/arch/arm/mach-msm/rpm_log.c
index a2c74a5..58e8588 100644
--- a/arch/arm/mach-msm/rpm_log.c
+++ b/arch/arm/mach-msm/rpm_log.c
@@ -203,14 +203,17 @@
struct msm_rpm_log_buffer *buf;
buf = file->private_data;
- pdata = buf->pdata;
- if (!pdata)
- return -EINVAL;
+
if (!buf)
return -ENOMEM;
+
+ pdata = buf->pdata;
+
+ if (!pdata)
+ return -EINVAL;
if (!buf->data)
return -ENOMEM;
- if (!bufu || count < 0)
+ if (!bufu || count == 0)
return -EINVAL;
if (!access_ok(VERIFY_WRITE, bufu, count))
return -EFAULT;
diff --git a/arch/arm/mach-msm/scm.c b/arch/arm/mach-msm/scm.c
index d070efa..6e05177 100644
--- a/arch/arm/mach-msm/scm.c
+++ b/arch/arm/mach-msm/scm.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -347,6 +347,43 @@
}
EXPORT_SYMBOL(scm_call_atomic2);
+/**
+ * scm_call_atomic3() - Send an atomic SCM command with three arguments
+ * @svc_id: service identifier
+ * @cmd_id: command identifier
+ * @arg1: first argument
+ * @arg2: second argument
+ * @arg3: third argument
+ *
+ * This shall only be used with commands that are guaranteed to be
+ * uninterruptible, atomic and SMP safe.
+ */
+s32 scm_call_atomic3(u32 svc, u32 cmd, u32 arg1, u32 arg2, u32 arg3)
+{
+ int context_id;
+ register u32 r0 asm("r0") = SCM_ATOMIC(svc, cmd, 3);
+ register u32 r1 asm("r1") = (u32)&context_id;
+ register u32 r2 asm("r2") = arg1;
+ register u32 r3 asm("r3") = arg2;
+ register u32 r4 asm("r4") = arg3;
+
+ asm volatile(
+ __asmeq("%0", "r0")
+ __asmeq("%1", "r0")
+ __asmeq("%2", "r1")
+ __asmeq("%3", "r2")
+ __asmeq("%4", "r3")
+ __asmeq("%5", "r4")
+#ifdef REQUIRES_SEC
+ ".arch_extension sec\n"
+#endif
+ "smc #0 @ switch to secure world\n"
+ : "=r" (r0)
+ : "r" (r0), "r" (r1), "r" (r2), "r" (r3), "r" (r4));
+ return r0;
+}
+EXPORT_SYMBOL(scm_call_atomic3);
+
s32 scm_call_atomic4_3(u32 svc, u32 cmd, u32 arg1, u32 arg2,
u32 arg3, u32 arg4, u32 *ret1, u32 *ret2)
{
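
scm_call_atomic3() added above follows the same convention as the existing
two-argument variant: SCM_ATOMIC() packs the service and command identifiers
into r0 and the three arguments travel in r2-r4. A hedged usage sketch,
assuming <mach/scm.h> and <linux/kernel.h> are included; the service and
command IDs here are placeholders, not real secure-world commands:

	/* Illustrative caller only; EXAMPLE_SVC/EXAMPLE_CMD are hypothetical. */
	#define EXAMPLE_SVC 0x1
	#define EXAMPLE_CMD 0x4

	static int example_secure_poke(u32 addr, u32 val, u32 mask)
	{
		s32 ret = scm_call_atomic3(EXAMPLE_SVC, EXAMPLE_CMD, addr, val, mask);

		if (ret < 0)
			pr_err("%s: scm_call_atomic3 failed: %d\n", __func__, ret);
		return ret;
	}
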
diff --git a/arch/arm/mach-msm/smd.c b/arch/arm/mach-msm/smd.c
index 8a9042e..3590e6b 100644
--- a/arch/arm/mach-msm/smd.c
+++ b/arch/arm/mach-msm/smd.c
@@ -34,7 +34,6 @@
#include <linux/kfifo.h>
#include <linux/wakelock.h>
#include <linux/notifier.h>
-#include <linux/sort.h>
#include <linux/suspend.h>
#include <linux/of.h>
#include <linux/of_irq.h>
@@ -47,6 +46,7 @@
#include <mach/proc_comm.h>
#include <mach/msm_ipc_logging.h>
#include <mach/ramdump.h>
+#include <mach/board.h>
#include <asm/cacheflush.h>
@@ -76,6 +76,7 @@
#define SMSM_SNAPSHOT_CNT 64
#define SMSM_SNAPSHOT_SIZE ((SMSM_NUM_ENTRIES + 1) * 4)
#define RSPIN_INIT_WAIT_MS 1000
+#define SMD_FIFO_FULL_RESERVE 4
uint32_t SMSM_NUM_ENTRIES = 8;
uint32_t SMSM_NUM_HOSTS = 3;
@@ -183,7 +184,7 @@
static struct smem_area *smem_areas;
static struct ramdump_segment *smem_ramdump_segments;
static void *smem_ramdump_dev;
-static void *smem_range_check(phys_addr_t base, unsigned offset);
+static void *smem_phys_to_virt(phys_addr_t base, unsigned offset);
static void *smd_dev;
struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS];
@@ -243,6 +244,17 @@
#define SMx_POWER_INFO(x...) do { } while (0)
#endif
+/**
+ * OVERFLOW_ADD_UNSIGNED() - check for unsigned overflow
+ *
+ * @type: type to check for overflow
+ * @a: left value to use
+ * @b: right value to use
+ * @returns: true if a + b will result in overflow; false otherwise
+ */
+#define OVERFLOW_ADD_UNSIGNED(type, a, b) \
+ (((type)~0 - (a)) < (b) ? true : false)
+
static unsigned last_heap_free = 0xffffffff;
static inline void smd_write_intr(unsigned int val,
@@ -1053,8 +1065,16 @@
/* how many bytes we are free to write */
static int smd_stream_write_avail(struct smd_channel *ch)
{
- return ch->fifo_mask - ((ch->half_ch->get_head(ch->send) -
- ch->half_ch->get_tail(ch->send)) & ch->fifo_mask);
+ int bytes_avail;
+
+ bytes_avail = ch->fifo_mask - ((ch->half_ch->get_head(ch->send) -
+ ch->half_ch->get_tail(ch->send)) & ch->fifo_mask) + 1;
+
+ if (bytes_avail < SMD_FIFO_FULL_RESERVE)
+ bytes_avail = 0;
+ else
+ bytes_avail -= SMD_FIFO_FULL_RESERVE;
+ return bytes_avail;
}
static int smd_packet_read_avail(struct smd_channel *ch)
@@ -1176,7 +1196,18 @@
}
}
-/* provide a pointer and length to next free space in the fifo */
+/**
+ * ch_write_buffer() - Provide a pointer and length for the next segment of
+ * free space in the FIFO.
+ * @ch: channel
+ * @ptr: Address to pointer for the next segment write
+ * @returns: Maximum size that can be written until the FIFO is either full
+ * or the end of the FIFO has been reached.
+ *
+ * The returned pointer and length are passed to memcpy, so the next segment is
+ * defined as either the space available between the read index (tail) and the
+ * write index (head) or the space available to the end of the FIFO.
+ */
static unsigned ch_write_buffer(struct smd_channel *ch, void **ptr)
{
unsigned head = ch->half_ch->get_head(ch->send);
@@ -1184,10 +1215,11 @@
*ptr = (void *) (ch->send_data + head);
if (head < tail) {
- return tail - head - 1;
+ return tail - head - SMD_FIFO_FULL_RESERVE;
} else {
- if (tail == 0)
- return ch->fifo_size - head - 1;
+ if (tail < SMD_FIFO_FULL_RESERVE)
+ return ch->fifo_size + tail - head
+ - SMD_FIFO_FULL_RESERVE;
else
return ch->fifo_size - head;
}
@@ -2111,6 +2143,29 @@
}
EXPORT_SYMBOL(smd_write_end);
+int smd_write_segment_avail(smd_channel_t *ch)
+{
+ int n;
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (!ch->is_pkt_ch) {
+ pr_err("%s: non-packet channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ n = smd_stream_write_avail(ch);
+
+ /* pkt hdr already written, no need to reserve space for it */
+ if (ch->pending_pkt_sz)
+ return n;
+
+ return n > SMD_HEADER_SIZE ? n - SMD_HEADER_SIZE : 0;
+}
+EXPORT_SYMBOL(smd_write_segment_avail);
+
int smd_read(smd_channel_t *ch, void *data, int len)
{
if (!ch) {
@@ -2356,37 +2411,107 @@
/* -------------------------------------------------------------------------- */
-/*
- * Shared Memory Range Check
- *
- * Takes a physical address and an offset and checks if the resulting physical
- * address would fit into one of the aux smem regions. If so, returns the
- * corresponding virtual address. Otherwise returns NULL. Expects the array
- * of smem regions to be in ascending physical address order.
+/**
+ * smem_phys_to_virt() - Convert a physical base and offset to virtual address
*
* @base: physical base address to check
* @offset: offset from the base to get the final address
+ * @returns: virtual SMEM address; NULL for failure
+ *
+ * Takes a physical address and an offset and checks if the resulting physical
+ * address would fit into one of the smem regions. If so, returns the
+ * corresponding virtual address. Otherwise returns NULL.
*/
-static void *smem_range_check(phys_addr_t base, unsigned offset)
+static void *smem_phys_to_virt(phys_addr_t base, unsigned offset)
{
int i;
phys_addr_t phys_addr;
resource_size_t size;
+ if (OVERFLOW_ADD_UNSIGNED(phys_addr_t, base, offset))
+ return NULL;
+
+ if (!smem_areas) {
+ /*
+ * Early boot - no area configuration yet, so default
+ * to using the main memory region.
+ *
+ * To remove the MSM_SHARED_RAM_BASE and the static
+ * mapping of SMEM in the future, add dump_stack()
+ * to identify the early callers of smem_get_entry()
+ * (which calls this function) and replace those calls
+ * with a new function that knows how to lookup the
+ * SMEM base address before SMEM has been probed.
+ */
+ phys_addr = msm_shared_ram_phys;
+ size = MSM_SHARED_RAM_SIZE;
+
+ if (base >= phys_addr && base + offset < phys_addr + size) {
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)MSM_SHARED_RAM_BASE, offset)) {
+ pr_err("%s: overflow %p %x\n", __func__,
+ MSM_SHARED_RAM_BASE, offset);
+ return NULL;
+ }
+
+ return MSM_SHARED_RAM_BASE + offset;
+ } else {
+ return NULL;
+ }
+ }
for (i = 0; i < num_smem_areas; ++i) {
phys_addr = smem_areas[i].phys_addr;
size = smem_areas[i].size;
- if (base < phys_addr)
- return NULL;
- if (base > phys_addr + size)
+
+ if (base < phys_addr || base + offset >= phys_addr + size)
continue;
- if (base >= phys_addr && base + offset < phys_addr + size)
- return smem_areas[i].virt_addr + offset;
+
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas[i].virt_addr, offset)) {
+ pr_err("%s: overflow %p %x\n", __func__,
+ smem_areas[i].virt_addr, offset);
+ return NULL;
+ }
+
+ return smem_areas[i].virt_addr + offset;
}
return NULL;
}
+/**
+ * smem_virt_to_phys() - Convert SMEM address to physical address.
+ *
+ * @smem_address: Address of SMEM item (returned by smem_alloc(), etc)
+ * @returns: Physical address (or NULL if there is a failure)
+ *
+ * This function should only be used if an SMEM item needs to be handed
+ * off to a DMA engine.
+ */
+phys_addr_t smem_virt_to_phys(void *smem_address)
+{
+ phys_addr_t phys_addr = 0;
+ int i;
+ void *vend;
+
+ if (!smem_areas)
+ return phys_addr;
+
+ for (i = 0; i < num_smem_areas; ++i) {
+ vend = (void *)(smem_areas[i].virt_addr + smem_areas[i].size);
+
+ if (smem_address >= smem_areas[i].virt_addr &&
+ smem_address < vend) {
+ phys_addr = smem_address - smem_areas[i].virt_addr;
+ phys_addr += smem_areas[i].phys_addr;
+ break;
+ }
+ }
+
+ return phys_addr;
+}
+EXPORT_SYMBOL(smem_virt_to_phys);
+
/* smem_alloc returns the pointer to smem item if it is already allocated.
* Otherwise, it returns NULL.
*/
@@ -2460,14 +2585,15 @@
remote_spin_lock_irqsave(&remote_spinlock, flags);
/* toc is in device memory and cannot be speculatively accessed */
if (toc[id].allocated) {
+ phys_addr_t phys_base;
+
*size = toc[id].size;
barrier();
- if (!(toc[id].reserved & BASE_ADDR_MASK))
- ret = (void *) (MSM_SHARED_RAM_BASE + toc[id].offset);
- else
- ret = smem_range_check(
- toc[id].reserved & BASE_ADDR_MASK,
- toc[id].offset);
+
+ phys_base = toc[id].reserved & BASE_ADDR_MASK;
+ if (!phys_base)
+ phys_base = (phys_addr_t)msm_shared_ram_phys;
+ ret = smem_phys_to_virt(phys_base, toc[id].offset);
} else {
*size = 0;
}
@@ -3420,14 +3546,6 @@
return ret;
}
-int sort_cmp_func(const void *a, const void *b)
-{
- struct smem_area *left = (struct smem_area *)(a);
- struct smem_area *right = (struct smem_area *)(b);
-
- return left->phys_addr - right->phys_addr;
-}
-
int smd_core_platform_init(struct platform_device *pdev)
{
int i;
@@ -3438,7 +3556,8 @@
struct smd_subsystem_config *cfg;
int err_ret = 0;
struct smd_smem_regions *smd_smem_areas;
- int smem_idx = 0;
+ struct smem_area *smem_areas_tmp = NULL;
+ int smem_idx;
smd_platform_data = pdev->dev.platform_data;
num_ss = smd_platform_data->num_ss_configs;
@@ -3449,37 +3568,54 @@
smd_ssr_config->disable_smsm_reset_handshake;
smd_smem_areas = smd_platform_data->smd_smem_areas;
- if (smd_smem_areas) {
- num_smem_areas = smd_platform_data->num_smem_areas;
- smem_areas = kmalloc(sizeof(struct smem_area) * num_smem_areas,
- GFP_KERNEL);
- if (!smem_areas) {
- pr_err("%s: smem_areas kmalloc failed\n", __func__);
+ num_smem_areas = smd_platform_data->num_smem_areas + 1;
+
+ /* Initialize main SMEM region */
+ smem_areas_tmp = kmalloc_array(num_smem_areas, sizeof(struct smem_area),
+ GFP_KERNEL);
+ if (!smem_areas_tmp) {
+ pr_err("%s: smem_areas kmalloc failed\n", __func__);
+ err_ret = -ENOMEM;
+ goto smem_areas_alloc_fail;
+ }
+
+ smem_areas_tmp[0].phys_addr = msm_shared_ram_phys;
+ smem_areas_tmp[0].size = MSM_SHARED_RAM_SIZE;
+ smem_areas_tmp[0].virt_addr = MSM_SHARED_RAM_BASE;
+
+ /* Configure auxiliary SMEM regions */
+ for (smem_idx = 1; smem_idx < num_smem_areas; ++smem_idx) {
+ smem_areas_tmp[smem_idx].phys_addr =
+ smd_smem_areas[smem_idx].phys_addr;
+ smem_areas_tmp[smem_idx].size =
+ smd_smem_areas[smem_idx].size;
+ smem_areas_tmp[smem_idx].virt_addr = ioremap_nocache(
+ (unsigned long)(smem_areas_tmp[smem_idx].phys_addr),
+ smem_areas_tmp[smem_idx].size);
+ if (!smem_areas_tmp[smem_idx].virt_addr) {
+ pr_err("%s: ioremap_nocache() of addr: %pa size: %pa\n",
+ __func__,
+ &smem_areas_tmp[smem_idx].phys_addr,
+ &smem_areas_tmp[smem_idx].size);
err_ret = -ENOMEM;
- goto smem_areas_alloc_fail;
+ goto smem_failed;
}
- for (smem_idx = 0; smem_idx < num_smem_areas; ++smem_idx) {
- smem_areas[smem_idx].phys_addr =
- smd_smem_areas[smem_idx].phys_addr;
- smem_areas[smem_idx].size =
- smd_smem_areas[smem_idx].size;
- smem_areas[smem_idx].virt_addr = ioremap_nocache(
- (unsigned long)(smem_areas[smem_idx].phys_addr),
- smem_areas[smem_idx].size);
- if (!smem_areas[smem_idx].virt_addr) {
- pr_err("%s: ioremap_nocache() of addr: %pa size: %pa\n",
- __func__,
- &smem_areas[smem_idx].phys_addr,
- &smem_areas[smem_idx].size);
- err_ret = -ENOMEM;
- ++smem_idx;
- goto smem_failed;
- }
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas_tmp[smem_idx].virt_addr,
+ smem_areas_tmp[smem_idx].size)) {
+ pr_err("%s: invalid virtual address block %i: %p:%pa\n",
+ __func__, smem_idx,
+ smem_areas_tmp[smem_idx].virt_addr,
+ &smem_areas_tmp[smem_idx].size);
+ ++smem_idx;
+ err_ret = -EINVAL;
+ goto smem_failed;
}
- sort(smem_areas, num_smem_areas,
- sizeof(struct smem_area),
- sort_cmp_func, NULL);
+
+ SMD_DBG("%s: %d = %pa %pa", __func__, smem_idx,
+ &smd_smem_areas[smem_idx].phys_addr,
+ &smd_smem_areas[smem_idx].size);
}
for (i = 0; i < num_ss; i++) {
@@ -3523,8 +3659,9 @@
cfg->subsys_name, SMD_MAX_CH_NAME_LEN);
}
-
SMD_INFO("smd_core_platform_init() done\n");
+
+ smem_areas = smem_areas_tmp;
return 0;
intr_failed:
@@ -3542,9 +3679,12 @@
);
}
smem_failed:
- for (smem_idx = smem_idx - 1; smem_idx >= 0; --smem_idx)
- iounmap(smem_areas[smem_idx].virt_addr);
- kfree(smem_areas);
+ for (smem_idx = smem_idx - 1; smem_idx >= 1; --smem_idx)
+ iounmap(smem_areas_tmp[smem_idx].virt_addr);
+
+ num_smem_areas = 0;
+ kfree(smem_areas_tmp);
+
smem_areas_alloc_fail:
return err_ret;
}
@@ -3718,12 +3858,14 @@
resource_size_t aux_mem_size;
int temp_string_size = 11; /* max 3 digit count */
char temp_string[temp_string_size];
- int count;
struct device_node *node;
int ret;
const char *compatible;
- struct ramdump_segment *ramdump_segments_tmp;
+ struct ramdump_segment *ramdump_segments_tmp = NULL;
+ struct smem_area *smem_areas_tmp = NULL;
+ int smem_idx = 0;
int subnode_num = 0;
+ int i;
resource_size_t irq_out_size;
disable_smsm_reset_handshake = 1;
@@ -3743,101 +3885,115 @@
}
SMD_DBG("%s: %s = %p", __func__, key, irq_out_base);
- count = 1;
+ num_smem_areas = 1;
while (1) {
- scnprintf(temp_string, temp_string_size, "aux-mem%d", count);
+ scnprintf(temp_string, temp_string_size, "aux-mem%d",
+ num_smem_areas);
r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
temp_string);
if (!r)
break;
++num_smem_areas;
- ++count;
- if (count > 999) {
+ if (num_smem_areas > 999) {
pr_err("%s: max num aux mem regions reached\n",
__func__);
break;
}
}
- /* initialize SSR ramdump regions */
+ /* Initialize main SMEM region and SSR ramdump region */
key = "smem";
r = platform_get_resource_byname(pdev, IORESOURCE_MEM, key);
if (!r) {
pr_err("%s: missing '%s'\n", __func__, key);
return -ENODEV;
}
- ramdump_segments_tmp = kmalloc_array(num_smem_areas + 1,
- sizeof(struct ramdump_segment), GFP_KERNEL);
+ smem_areas_tmp = kmalloc_array(num_smem_areas, sizeof(struct smem_area),
+ GFP_KERNEL);
+ if (!smem_areas_tmp) {
+ pr_err("%s: smem areas kmalloc failed\n", __func__);
+ ret = -ENOMEM;
+ goto free_smem_areas;
+ }
+
+ ramdump_segments_tmp = kmalloc_array(num_smem_areas,
+ sizeof(struct ramdump_segment), GFP_KERNEL);
if (!ramdump_segments_tmp) {
pr_err("%s: ramdump segment kmalloc failed\n", __func__);
ret = -ENOMEM;
goto free_smem_areas;
}
- ramdump_segments_tmp[0].address = r->start;
- ramdump_segments_tmp[0].size = resource_size(r);
- if (num_smem_areas) {
+ smem_areas_tmp[smem_idx].phys_addr = r->start;
+ smem_areas_tmp[smem_idx].size = resource_size(r);
+ smem_areas_tmp[smem_idx].virt_addr = MSM_SHARED_RAM_BASE;
- smem_areas = kmalloc(sizeof(struct smem_area) * num_smem_areas,
- GFP_KERNEL);
+ ramdump_segments_tmp[smem_idx].address = r->start;
+ ramdump_segments_tmp[smem_idx].size = resource_size(r);
+ ++smem_idx;
- if (!smem_areas) {
- pr_err("%s: smem areas kmalloc failed\n", __func__);
+ /* Configure auxiliary SMEM regions */
+ while (1) {
+ scnprintf(temp_string, temp_string_size, "aux-mem%d",
+ smem_idx);
+ r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ temp_string);
+ if (!r)
+ break;
+ aux_mem_base = r->start;
+ aux_mem_size = resource_size(r);
+
+ ramdump_segments_tmp[smem_idx].address = aux_mem_base;
+ ramdump_segments_tmp[smem_idx].size = aux_mem_size;
+
+ smem_areas_tmp[smem_idx].phys_addr = aux_mem_base;
+ smem_areas_tmp[smem_idx].size = aux_mem_size;
+ smem_areas_tmp[smem_idx].virt_addr = ioremap_nocache(
+ (unsigned long)(smem_areas_tmp[smem_idx].phys_addr),
+ smem_areas_tmp[smem_idx].size);
+ SMD_DBG("%s: %s = %pa %pa -> %p", __func__, temp_string,
+ &aux_mem_base, &aux_mem_size,
+ smem_areas_tmp[smem_idx].virt_addr);
+
+ if (!smem_areas_tmp[smem_idx].virt_addr) {
+ pr_err("%s: ioremap_nocache() of addr:%pa size: %pa\n",
+ __func__,
+ &smem_areas_tmp[smem_idx].phys_addr,
+ &smem_areas_tmp[smem_idx].size);
ret = -ENOMEM;
goto free_smem_areas;
}
- count = 1;
- while (1) {
- scnprintf(temp_string, temp_string_size, "aux-mem%d",
- count);
- r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
- temp_string);
- if (!r)
- break;
- aux_mem_base = r->start;
- aux_mem_size = resource_size(r);
- /*
- * Add to ram-dumps segments.
- * ramdump_segments_tmp[0] is the main SMEM region,
- * so auxiliary segments are indexed by count
- * instead of count - 1.
- */
- ramdump_segments_tmp[count].address = aux_mem_base;
- ramdump_segments_tmp[count].size = aux_mem_size;
-
- SMD_DBG("%s: %s = %pa %pa", __func__, temp_string,
- &aux_mem_base, &aux_mem_size);
- smem_areas[count - 1].phys_addr = aux_mem_base;
- smem_areas[count - 1].size = aux_mem_size;
- smem_areas[count - 1].virt_addr = ioremap_nocache(
- (unsigned long)(smem_areas[count-1].phys_addr),
- smem_areas[count - 1].size);
- if (!smem_areas[count - 1].virt_addr) {
- pr_err("%s: ioremap_nocache() of addr:%pa size: %pa\n",
- __func__,
- &smem_areas[count - 1].phys_addr,
- &smem_areas[count - 1].size);
- ret = -ENOMEM;
- goto free_smem_areas;
- }
-
- ++count;
- if (count > 999) {
- pr_err("%s: max num aux mem regions reached\n",
- __func__);
- break;
- }
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas_tmp[smem_idx].virt_addr,
+ smem_areas_tmp[smem_idx].size)) {
+ pr_err("%s: invalid virtual address block %i: %p:%pa\n",
+ __func__, smem_idx,
+ smem_areas_tmp[smem_idx].virt_addr,
+ &smem_areas_tmp[smem_idx].size);
+ ++smem_idx;
+ ret = -EINVAL;
+ goto free_smem_areas;
}
- sort(smem_areas, num_smem_areas,
- sizeof(struct smem_area),
- sort_cmp_func, NULL);
+
+ ++smem_idx;
+ if (smem_idx > 999) {
+ pr_err("%s: max num aux mem regions reached\n",
+ __func__);
+ break;
+ }
}
for_each_child_of_node(pdev->dev.of_node, node) {
compatible = of_get_property(node, "compatible", NULL);
+ if (!compatible) {
+ pr_err("%s: invalid child node: compatible null\n",
+ __func__);
+ ret = -ENODEV;
+ goto rollback_subnodes;
+ }
if (!strcmp(compatible, "qcom,smd")) {
ret = parse_smd_devicetree(node, irq_out_base);
if (ret)
@@ -3855,15 +4011,16 @@
++subnode_num;
}
+ smem_areas = smem_areas_tmp;
smem_ramdump_segments = ramdump_segments_tmp;
return 0;
rollback_subnodes:
- count = 0;
+ i = 0;
for_each_child_of_node(pdev->dev.of_node, node) {
- if (count >= subnode_num)
+ if (i >= subnode_num)
break;
- ++count;
+ ++i;
compatible = of_get_property(node, "compatible", NULL);
if (!strcmp(compatible, "qcom,smd"))
unparse_smd_devicetree(node);
@@ -3871,10 +4028,12 @@
unparse_smsm_devicetree(node);
}
free_smem_areas:
+ for (smem_idx = smem_idx - 1; smem_idx >= 1; --smem_idx)
+ iounmap(smem_areas_tmp[smem_idx].virt_addr);
+
num_smem_areas = 0;
kfree(ramdump_segments_tmp);
- kfree(smem_areas);
- smem_areas = NULL;
+ kfree(smem_areas_tmp);
return ret;
}
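
The smd.c changes above keep SMD_FIFO_FULL_RESERVE bytes of the ring free so
the write index can never fully catch up with the read index, and the new
smd_write_segment_avail() additionally subtracts the packet header when no
packet is pending. A minimal sketch of the free-space arithmetic, assuming a
power-of-two FIFO where fifo_mask = fifo_size - 1:

	#include <stdio.h>

	#define SMD_FIFO_FULL_RESERVE 4

	/* Mirrors the reworked smd_stream_write_avail() above. */
	static int write_avail(unsigned int head, unsigned int tail,
			       unsigned int fifo_mask)
	{
		int avail = fifo_mask - ((head - tail) & fifo_mask) + 1;

		return (avail < SMD_FIFO_FULL_RESERVE) ? 0
						       : avail - SMD_FIFO_FULL_RESERVE;
	}

	int main(void)
	{
		/* 1024-byte FIFO: writer is 100 bytes ahead of the reader */
		printf("%d bytes writable\n", write_avail(100, 0, 1023));
		/* Nearly full: only the reserve remains, so nothing is writable */
		printf("%d bytes writable\n", write_avail(1020, 0, 1023));
		return 0;
	}
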
diff --git a/arch/arm/mach-msm/smd_pkt.c b/arch/arm/mach-msm/smd_pkt.c
index 7eb9ead..20a6165 100644
--- a/arch/arm/mach-msm/smd_pkt.c
+++ b/arch/arm/mach-msm/smd_pkt.c
@@ -41,7 +41,7 @@
#ifdef CONFIG_ARCH_FSM9XXX
#define NUM_SMD_PKT_PORTS 4
#else
-#define NUM_SMD_PKT_PORTS 27
+#define NUM_SMD_PKT_PORTS 28
#endif
#define PDRIVER_NAME_MAX_SIZE 32
@@ -509,7 +509,7 @@
do {
prepare_to_wait(&smd_pkt_devp->ch_write_wait_queue,
&write_wait, TASK_UNINTERRUPTIBLE);
- if (!smd_write_avail(smd_pkt_devp->ch) &&
+ if (!smd_write_segment_avail(smd_pkt_devp->ch) &&
!smd_pkt_devp->has_reset) {
smd_enable_read_intr(smd_pkt_devp->ch);
schedule();
@@ -631,7 +631,7 @@
return;
}
- sz = smd_write_avail(smd_pkt_devp->ch);
+ sz = smd_write_segment_avail(smd_pkt_devp->ch);
if (sz) {
D_WRITE("%s: %d bytes write space in smd_pkt_dev id:%d\n",
__func__, sz, smd_pkt_devp->i);
@@ -729,6 +729,7 @@
"smd_test_framework",
"smd_logging_0",
"smd_data_0",
+ "apr",
"smd_pkt_loopback",
};
@@ -759,6 +760,7 @@
"TESTFRAMEWORK",
"LOGGING",
"DATA",
+ "apr",
"LOOPBACK",
};
@@ -789,6 +791,7 @@
SMD_APPS_QDSP,
SMD_APPS_QDSP,
SMD_APPS_QDSP,
+ SMD_APPS_QDSP,
SMD_APPS_MODEM,
};
#endif
diff --git a/arch/arm/mach-msm/socinfo.c b/arch/arm/mach-msm/socinfo.c
index 83f7a1d..12a3ceb 100644
--- a/arch/arm/mach-msm/socinfo.c
+++ b/arch/arm/mach-msm/socinfo.c
@@ -328,7 +328,12 @@
/* 8610 IDs */
[147] = MSM_CPU_8610,
+ [161] = MSM_CPU_8610,
+ [162] = MSM_CPU_8610,
+ [163] = MSM_CPU_8610,
+ [164] = MSM_CPU_8610,
[165] = MSM_CPU_8610,
+ [166] = MSM_CPU_8610,
/* 8064AB IDs */
[153] = MSM_CPU_8064AB,
@@ -348,8 +353,8 @@
/* 8064AA IDs */
[172] = MSM_CPU_8064AA,
- /* zinc IDs */
- [178] = MSM_CPU_ZINC,
+ /* 8084 IDs */
+ [178] = MSM_CPU_8084,
/* krypton IDs */
[187] = MSM_CPU_KRYPTON,
@@ -853,9 +858,9 @@
dummy_socinfo.id = 146;
strlcpy(dummy_socinfo.build_id, "mpq8092 - ",
sizeof(dummy_socinfo.build_id));
- } else if (early_machine_is_msmzinc()) {
+ } else if (early_machine_is_apq8084()) {
dummy_socinfo.id = 178;
- strlcpy(dummy_socinfo.build_id, "msmzinc - ",
+ strlcpy(dummy_socinfo.build_id, "apq8084 - ",
sizeof(dummy_socinfo.build_id));
} else if (early_machine_is_msmkrypton()) {
dummy_socinfo.id = 187;
diff --git a/arch/arm/mach-msm/spm_devices.c b/arch/arm/mach-msm/spm_devices.c
index 233c5a5..8e94b3a 100644
--- a/arch/arm/mach-msm/spm_devices.c
+++ b/arch/arm/mach-msm/spm_devices.c
@@ -74,8 +74,9 @@
info.cpu = cpu;
info.vlevel = vlevel;
+ info.err = -ENODEV;
- if (cpu_online(cpu)) {
+ if ((smp_processor_id() != cpu) && cpu_online(cpu)) {
/**
* We do not want to set the voltage of another core from
* this core, as its possible that we may race the vdd change
@@ -403,21 +404,28 @@
memset(&modes, 0,
(MSM_SPM_MODE_NR - 2) * sizeof(struct msm_spm_seq_entry));
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res)
- goto fail;
-
- spm_data.reg_base_addr = devm_ioremap(&pdev->dev, res->start,
- resource_size(res));
- if (!spm_data.reg_base_addr)
- return -ENOMEM;
-
key = "qcom,core-id";
ret = of_property_read_u32(node, key, &val);
if (ret)
goto fail;
cpu = val;
+ /*
+ * Devices with id 0..NR_CPUS are SPMs for apps cores
+ * Device with id 0xFFFF is for L2 SPM.
+ */
+ if (cpu >= 0 && cpu < num_possible_cpus()) {
+ mode_of_data = of_cpu_modes;
+ num_modes = ARRAY_SIZE(of_cpu_modes);
+ dev = &per_cpu(msm_cpu_spm_device, cpu);
+
+ } else if (cpu == 0xffff) {
+ mode_of_data = of_l2_modes;
+ num_modes = ARRAY_SIZE(of_l2_modes);
+ dev = &msm_spm_l2_device;
+ } else
+ return ret;
+
key = "qcom,saw2-ver-reg";
ret = of_property_read_u32(node, key, &val);
if (ret)
@@ -428,21 +436,17 @@
ret = of_property_read_u32(node, key, &val);
if (!ret)
spm_data.vctl_timeout_us = val;
+ else if (cpu == 0xffff)
+ goto fail;
- /*
- * Device with id 0..NR_CPUS are SPM for apps cores
- * Device with id 0xFFFF is for L2 SPM.
- */
- if (cpu >= 0 && cpu < num_possible_cpus()) {
- mode_of_data = of_cpu_modes;
- num_modes = ARRAY_SIZE(of_cpu_modes);
- dev = &per_cpu(msm_cpu_spm_device, cpu);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ goto fail;
- } else {
- mode_of_data = of_l2_modes;
- num_modes = ARRAY_SIZE(of_l2_modes);
- dev = &msm_spm_l2_device;
- }
+ spm_data.reg_base_addr = devm_ioremap(&pdev->dev, res->start,
+ resource_size(res));
+ if (!spm_data.reg_base_addr)
+ return -ENOMEM;
spm_data.vctl_port = -1;
spm_data.phase_port = -1;
diff --git a/arch/arm/mach-msm/subsystem_restart.c b/arch/arm/mach-msm/subsystem_restart.c
index 5fe7a29..fb44e16 100644
--- a/arch/arm/mach-msm/subsystem_restart.c
+++ b/arch/arm/mach-msm/subsystem_restart.c
@@ -31,6 +31,7 @@
#include <linux/idr.h>
#include <linux/debugfs.h>
#include <linux/miscdevice.h>
+#include <linux/interrupt.h>
#include <asm/current.h>
@@ -132,6 +133,7 @@
* @restart_order: order of other devices this devices restarts with
* @dentry: debugfs directory for this device
* @do_ramdump_on_put: ramdump on subsystem_put() if true
+ * @err_ready: completion signalled when the subsystem reports error ready
*/
struct subsys_device {
struct subsys_desc *desc;
@@ -154,6 +156,7 @@
bool do_ramdump_on_put;
struct miscdevice misc_dev;
char miscdevice_name[32];
+ struct completion err_ready;
};
static struct subsys_device *to_subsys(struct device *d)
@@ -412,6 +415,21 @@
}
}
+static int wait_for_err_ready(struct subsys_device *subsys)
+{
+ int ret;
+
+ if (!subsys->desc->err_ready_irq)
+ return 0;
+
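+ /* Block until the error-ready IRQ completes err_ready, or time out after 10s */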
+ ret = wait_for_completion_timeout(&subsys->err_ready,
+ msecs_to_jiffies(10000));
+ if (!ret)
+ return -ETIMEDOUT;
+
+ return 0;
+}
+
static void subsystem_shutdown(struct subsys_device *dev, void *data)
{
const char *name = dev->desc->name;
@@ -436,10 +454,17 @@
static void subsystem_powerup(struct subsys_device *dev, void *data)
{
const char *name = dev->desc->name;
+ int ret;
pr_info("[%p]: Powering up %s\n", current, name);
+ init_completion(&dev->err_ready);
if (dev->desc->powerup(dev->desc) < 0)
- panic("[%p]: Failed to powerup %s!", current, name);
+ panic("[%p]: Powerup error: %s!", current, name);
+
+ ret = wait_for_err_ready(dev);
+ if (ret)
+ panic("[%p]: Timed out waiting for error ready: %s!",
+ current, name);
subsys_set_state(dev, SUBSYS_ONLINE);
}
@@ -465,8 +490,18 @@
{
int ret;
+ init_completion(&subsys->err_ready);
ret = subsys->desc->start(subsys->desc);
- if (!ret)
+ if (ret)
+ return ret;
+
+ ret = wait_for_err_ready(subsys);
+ if (ret)
+ /* pil-boot succeeded, but we need to shut down
+ * the device because error ready timed out.
+ */
+ subsys->desc->stop(subsys->desc);
+ else
subsys_set_state(subsys, SUBSYS_ONLINE);
return ret;
@@ -895,6 +930,14 @@
ida_simple_remove(&subsys_ida, subsys->id);
kfree(subsys);
}
+static irqreturn_t subsys_err_ready_intr_handler(int irq, void *subsys)
+{
+ struct subsys_device *subsys_dev = subsys;
+ pr_info("Error ready interrupt occured for %s\n",
+ subsys_dev->desc->name);
+ complete(&subsys_dev->err_ready);
+ return IRQ_HANDLED;
+}
static int subsys_misc_device_add(struct subsys_device *subsys_dev)
{
@@ -971,8 +1014,24 @@
goto err_register;
}
+ if (subsys->desc->err_ready_irq) {
+ ret = devm_request_irq(&subsys->dev,
+ subsys->desc->err_ready_irq,
+ subsys_err_ready_intr_handler,
+ IRQF_TRIGGER_RISING,
+ "error_ready_interrupt", subsys);
+ if (ret < 0) {
+ dev_err(&subsys->dev,
+ "[%s]: Unable to register err ready handler\n",
+ subsys->desc->name);
+ goto err_misc_device;
+ }
+ }
+
return subsys;
+err_misc_device:
+ subsys_misc_device_remove(subsys);
err_register:
subsys_debugfs_remove(subsys);
err_debugfs:
diff --git a/arch/arm/mm/highmem.c b/arch/arm/mm/highmem.c
index 21b9e1b..d6f9ee8 100644
--- a/arch/arm/mm/highmem.c
+++ b/arch/arm/mm/highmem.c
@@ -10,6 +10,7 @@
* published by the Free Software Foundation.
*/
+#include <linux/cpu.h>
#include <linux/module.h>
#include <linux/highmem.h>
#include <linux/interrupt.h>
@@ -135,3 +136,58 @@
return pte_page(get_top_pte(vaddr));
}
+
+#ifdef CONFIG_ARCH_WANT_KMAP_ATOMIC_FLUSH
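+/*
+ * Tear down any kmap_atomic fixmap entries above the current index for the
+ * given CPU so that stale highmem mappings are not left in the top PTEs.
+ */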
+static void kmap_remove_unused_cpu(int cpu)
+{
+ int start_idx, idx, type;
+
+ pagefault_disable();
+ type = kmap_atomic_idx();
+ start_idx = type + 1 + KM_TYPE_NR * cpu;
+
+ for (idx = start_idx; idx < KM_TYPE_NR + KM_TYPE_NR * cpu; idx++) {
+ unsigned long vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+ pte_t ptep;
+
+ ptep = get_top_pte(vaddr);
+ if (ptep)
+ set_top_pte(vaddr, __pte(0));
+ }
+ pagefault_enable();
+}
+
+static void kmap_remove_unused(void *unused)
+{
+ kmap_remove_unused_cpu(smp_processor_id());
+}
+
+void kmap_atomic_flush_unused(void)
+{
+ on_each_cpu(kmap_remove_unused, NULL, 1);
+}
+
+static int hotplug_kmap_atomic_callback(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+{
+ switch (action & (~CPU_TASKS_FROZEN)) {
+ case CPU_DYING:
+ kmap_remove_unused_cpu((int)hcpu);
+ break;
+ default:
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block hotplug_kmap_atomic_notifier = {
+ .notifier_call = hotplug_kmap_atomic_callback,
+};
+
+static int __init init_kmap_atomic(void)
+{
+ return register_hotcpu_notifier(&hotplug_kmap_atomic_notifier);
+}
+early_initcall(init_kmap_atomic);
+#endif
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 8df41e2..07b6f6c 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -280,11 +280,11 @@
return (void __iomem *) (offset + addr);
}
-void __iomem *__arm_ioremap_caller(unsigned long phys_addr, size_t size,
+void __iomem *__arm_ioremap_caller(phys_addr_t phys_addr, size_t size,
unsigned int mtype, void *caller)
{
- unsigned long last_addr;
- unsigned long offset = phys_addr & ~PAGE_MASK;
+ phys_addr_t last_addr;
+ phys_addr_t offset = phys_addr & ~PAGE_MASK;
unsigned long pfn = __phys_to_pfn(phys_addr);
/*
@@ -316,12 +316,12 @@
}
EXPORT_SYMBOL(__arm_ioremap_pfn);
-void __iomem * (*arch_ioremap_caller)(unsigned long, size_t,
+void __iomem * (*arch_ioremap_caller)(phys_addr_t, size_t,
unsigned int, void *) =
__arm_ioremap_caller;
void __iomem *
-__arm_ioremap(unsigned long phys_addr, size_t size, unsigned int mtype)
+__arm_ioremap(phys_addr_t phys_addr, size_t size, unsigned int mtype)
{
return arch_ioremap_caller(phys_addr, size, mtype,
__builtin_return_address(0));
@@ -336,7 +336,7 @@
* CONFIG_GENERIC_ALLOCATOR for allocating external memory.
*/
void __iomem *
-__arm_ioremap_exec(unsigned long phys_addr, size_t size, bool cached)
+__arm_ioremap_exec(phys_addr_t phys_addr, size_t size, bool cached)
{
unsigned int mtype;
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 06262c5..ce8cb19 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -11,10 +11,49 @@
#include <linux/random.h>
#include <asm/cachetype.h>
+static inline unsigned long COLOUR_ALIGN_DOWN(unsigned long addr,
+ unsigned long pgoff)
+{
+ unsigned long base = addr & ~(SHMLBA-1);
+ unsigned long off = (pgoff << PAGE_SHIFT) & (SHMLBA-1);
+
+ if (base + off <= addr)
+ return base + off;
+
+ return base - off;
+}
+
#define COLOUR_ALIGN(addr,pgoff) \
((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \
(((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
+/* gap between mmap and stack */
+#define MIN_GAP (128*1024*1024UL)
+#define MAX_GAP ((TASK_SIZE)/6*5)
+
+static int mmap_is_legacy(void)
+{
+ if (current->personality & ADDR_COMPAT_LAYOUT)
+ return 1;
+
+ if (rlimit(RLIMIT_STACK) == RLIM_INFINITY)
+ return 1;
+
+ return sysctl_legacy_va_layout;
+}
+
+static unsigned long mmap_base(unsigned long rnd)
+{
+ unsigned long gap = rlimit(RLIMIT_STACK);
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+ return PAGE_ALIGN(TASK_SIZE - gap - rnd);
+}
+
/*
* We need to ensure that shared mappings are correctly aligned to
* avoid aliasing issues with VIPT caches. We need to ensure that
@@ -68,13 +107,9 @@
if (len > mm->cached_hole_size) {
start_addr = addr = mm->free_area_cache;
} else {
- start_addr = addr = TASK_UNMAPPED_BASE;
+ start_addr = addr = mm->mmap_base;
mm->cached_hole_size = 0;
}
- /* 8 bits of randomness in 20 address space bits */
- if ((current->flags & PF_RANDOMIZE) &&
- !(current->personality & ADDR_NO_RANDOMIZE))
- addr += (get_random_int() % (1 << 8)) << PAGE_SHIFT;
full_search:
if (do_align)
@@ -111,6 +146,134 @@
}
}
+unsigned long
+arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
+ const unsigned long len, const unsigned long pgoff,
+ const unsigned long flags)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ unsigned long addr = addr0;
+ int do_align = 0;
+ int aliasing = cache_is_vipt_aliasing();
+
+ /*
+ * We only need to do colour alignment if either the I or D
+ * caches alias.
+ */
+ if (aliasing)
+ do_align = filp || (flags & MAP_SHARED);
+
+ /* requested length too big for entire address space */
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ if (flags & MAP_FIXED) {
+ if (aliasing && flags & MAP_SHARED &&
+ (addr - (pgoff << PAGE_SHIFT)) & (SHMLBA - 1))
+ return -EINVAL;
+ return addr;
+ }
+
+ /* requesting a specific address */
+ if (addr) {
+ if (do_align)
+ addr = COLOUR_ALIGN(addr, pgoff);
+ else
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+
+ /* check if free_area_cache is useful for us */
+ if (len <= mm->cached_hole_size) {
+ mm->cached_hole_size = 0;
+ mm->free_area_cache = mm->mmap_base;
+ }
+
+ /* either no address requested or can't fit in requested address hole */
+ addr = mm->free_area_cache;
+ if (do_align) {
+ unsigned long base = COLOUR_ALIGN_DOWN(addr - len, pgoff);
+ addr = base + len;
+ }
+
+ /* make sure it can fit in the remaining address space */
+ if (addr > len) {
+ vma = find_vma(mm, addr-len);
+ if (!vma || addr <= vma->vm_start)
+ /* remember the address as a hint for next time */
+ return (mm->free_area_cache = addr-len);
+ }
+
+ if (mm->mmap_base < len)
+ goto bottomup;
+
+ addr = mm->mmap_base - len;
+ if (do_align)
+ addr = COLOUR_ALIGN_DOWN(addr, pgoff);
+
+ do {
+ /*
+ * Lookup failure means no vma is above this address,
+ * else if new region fits below vma->vm_start,
+ * return with success:
+ */
+ vma = find_vma(mm, addr);
+ if (!vma || addr+len <= vma->vm_start)
+ /* remember the address as a hint for next time */
+ return (mm->free_area_cache = addr);
+
+ /* remember the largest hole we saw so far */
+ if (addr + mm->cached_hole_size < vma->vm_start)
+ mm->cached_hole_size = vma->vm_start - addr;
+
+ /* try just below the current vma->vm_start */
+ addr = vma->vm_start - len;
+ if (do_align)
+ addr = COLOUR_ALIGN_DOWN(addr, pgoff);
+ } while (len < vma->vm_start);
+
+bottomup:
+ /*
+ * A failed mmap() very likely causes application failure,
+ * so fall back to the bottom-up function here. This scenario
+ * can happen with large stack limits and large mmap()
+ * allocations.
+ */
+ mm->cached_hole_size = ~0UL;
+ mm->free_area_cache = TASK_UNMAPPED_BASE;
+ addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
+ /*
+ * Restore the topdown base:
+ */
+ mm->free_area_cache = mm->mmap_base;
+ mm->cached_hole_size = ~0UL;
+
+ return addr;
+}
+
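+/*
+ * Pick the legacy bottom-up layout when requested (ADDR_COMPAT_LAYOUT,
+ * unlimited stack or sysctl); otherwise place mmap_base below the stack
+ * gap and allocate top-down. ASLR adds up to 8 bits of page-shifted noise.
+ */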
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+ unsigned long random_factor = 0UL;
+
+ /* 8 bits of randomness in 20 address space bits */
+ if ((current->flags & PF_RANDOMIZE) &&
+ !(current->personality & ADDR_NO_RANDOMIZE))
+ random_factor = (get_random_int() % (1 << 8)) << PAGE_SHIFT;
+
+ if (mmap_is_legacy()) {
+ mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
+ mm->get_unmapped_area = arch_get_unmapped_area;
+ mm->unmap_area = arch_unmap_area;
+ } else {
+ mm->mmap_base = mmap_base(random_factor);
+ mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ mm->unmap_area = arch_unmap_area_topdown;
+ }
+}
/*
* You really shouldn't be using read() or write() on /dev/mem. This
diff --git a/block/row-iosched.c b/block/row-iosched.c
index c8ba344..e71f6af 100644
--- a/block/row-iosched.c
+++ b/block/row-iosched.c
@@ -200,9 +200,9 @@
struct request *pending_urgent_rq;
int last_served_ioprio_class;
-#define ROW_REG_STARVATION_TOLLERANCE 50
+#define ROW_REG_STARVATION_TOLLERANCE 5000
struct starvation_data reg_prio_starvation;
-#define ROW_LOW_STARVATION_TOLLERANCE 1000
+#define ROW_LOW_STARVATION_TOLLERANCE 10000
struct starvation_data low_prio_starvation;
unsigned int cycle_flags;
diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
index ed91480..45c9023 100644
--- a/drivers/base/dma-contiguous.c
+++ b/drivers/base/dma-contiguous.c
@@ -227,7 +227,7 @@
pr_info("Found %s, memory base %lx, size %ld MiB\n", uname,
(unsigned long)base, (unsigned long)size / SZ_1M);
- dma_contiguous_reserve_area(size, &base, 0, name);
+ dma_contiguous_reserve_area(size, &base, MEMBLOCK_ALLOC_ANYWHERE, name);
return 0;
}
diff --git a/drivers/bluetooth/bluetooth-power.c b/drivers/bluetooth/bluetooth-power.c
index d7c69db..13c9080 100644
--- a/drivers/bluetooth/bluetooth-power.c
+++ b/drivers/bluetooth/bluetooth-power.c
@@ -23,38 +23,227 @@
#include <linux/gpio.h>
#include <linux/of_gpio.h>
#include <linux/delay.h>
+#include <linux/bluetooth-power.h>
+#include <linux/slab.h>
+#include <linux/regulator/consumer.h>
-static struct of_device_id ar3002_match_table[] = {
+#define BT_PWR_DBG(fmt, arg...) pr_debug("%s: " fmt "\n" , __func__ , ## arg)
+#define BT_PWR_INFO(fmt, arg...) pr_info("%s: " fmt "\n" , __func__ , ## arg)
+#define BT_PWR_ERR(fmt, arg...) pr_err("%s: " fmt "\n" , __func__ , ## arg)
+
+
+static struct of_device_id bt_power_match_table[] = {
{ .compatible = "qca,ar3002" },
{}
};
-static int bt_reset_gpio;
-
+static struct bluetooth_power_platform_data *bt_power_pdata;
+static struct platform_device *btpdev;
static bool previous;
-static int bluetooth_power(int on)
+static int bt_vreg_init(struct bt_power_vreg_data *vreg)
{
- int rc;
+ int rc = 0;
+ struct device *dev = &btpdev->dev;
- pr_debug("%s bt_gpio= %d\n", __func__, bt_reset_gpio);
+ BT_PWR_DBG("vreg_get for : %s", vreg->name);
+
+ /* Get the regulator handle */
+ vreg->reg = regulator_get(dev, vreg->name);
+ if (IS_ERR(vreg->reg)) {
+ rc = PTR_ERR(vreg->reg);
+ pr_err("%s: regulator_get(%s) failed. rc=%d\n",
+ __func__, vreg->name, rc);
+ goto out;
+ }
+
+ if ((regulator_count_voltages(vreg->reg) > 0)
+ && (vreg->low_vol_level) && (vreg->high_vol_level))
+ vreg->set_voltage_sup = 1;
+
+out:
+ return rc;
+}
+
+static int bt_vreg_enable(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("vreg_en for : %s", vreg->name);
+
+ if (!vreg->is_enabled) {
+ if (vreg->set_voltage_sup) {
+ rc = regulator_set_voltage(vreg->reg,
+ vreg->low_vol_level,
+ vreg->high_vol_level);
+ if (rc < 0) {
+ BT_PWR_ERR("vreg_set_vol(%s) failed rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ }
+
+ rc = regulator_enable(vreg->reg);
+ if (rc < 0) {
+ BT_PWR_ERR("regulator_enable(%s) failed. rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ vreg->is_enabled = true;
+ }
+out:
+ return rc;
+}
+
+static int bt_vreg_disable(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ if (!vreg)
+ return rc;
+
+ BT_PWR_DBG("vreg_disable for : %s", vreg->name);
+
+ if (vreg->is_enabled) {
+ rc = regulator_disable(vreg->reg);
+ if (rc < 0) {
+ BT_PWR_ERR("regulator_disable(%s) failed. rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ vreg->is_enabled = false;
+
+ if (vreg->set_voltage_sup) {
+ /* Set the min voltage to 0 */
+ rc = regulator_set_voltage(vreg->reg,
+ 0,
+ vreg->high_vol_level);
+ if (rc < 0) {
+ BT_PWR_ERR("vreg_set_vol(%s) failed rc=%d\n",
+ vreg->name, rc);
+ goto out;
+
+ }
+ }
+ }
+out:
+ return rc;
+}
+
+static int bt_configure_vreg(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("config %s", vreg->name);
+
+ /* Get the regulator handle for vreg */
+ if (!(vreg->reg)) {
+ rc = bt_vreg_init(vreg);
+ if (rc < 0)
+ return rc;
+ }
+ rc = bt_vreg_enable(vreg);
+
+ return rc;
+}
+
+static int bt_configure_gpios(int on)
+{
+ int rc = 0;
+ int bt_reset_gpio = bt_power_pdata->bt_gpio_sys_rst;
+
+ BT_PWR_DBG("%s bt_gpio= %d on: %d", __func__, bt_reset_gpio, on);
+
if (on) {
+ rc = gpio_request(bt_reset_gpio, "bt_sys_rst_n");
+ if (rc) {
+ BT_PWR_ERR("unable to request gpio %d (%d)\n",
+ bt_reset_gpio, rc);
+ return rc;
+ }
+
+ rc = gpio_direction_output(bt_reset_gpio, 0);
+ if (rc) {
+ BT_PWR_ERR("Unable to set direction\n");
+ return rc;
+ }
+
rc = gpio_direction_output(bt_reset_gpio, 1);
if (rc) {
- pr_err("%s: Unable to set direction\n", __func__);
+ BT_PWR_ERR("Unable to set direction\n");
return rc;
}
msleep(100);
} else {
gpio_set_value(bt_reset_gpio, 0);
+
rc = gpio_direction_input(bt_reset_gpio);
- if (rc) {
- pr_err("%s: Unable to set direction\n", __func__);
- return rc;
- }
+ if (rc)
+ BT_PWR_ERR("Unable to set direction\n");
+
msleep(100);
}
- return 0;
+ return rc;
+}
+
+static int bluetooth_power(int on)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("on: %d", on);
+
+ if (on) {
+ if (bt_power_pdata->bt_vdd_io) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_io);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddio config failed");
+ goto out;
+ }
+ }
+ if (bt_power_pdata->bt_vdd_ldo) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_ldo);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddldo config failed");
+ goto vdd_ldo_fail;
+ }
+ }
+ if (bt_power_pdata->bt_vdd_pa) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_pa);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddpa config failed");
+ goto vdd_pa_fail;
+ }
+ }
+ if (bt_power_pdata->bt_chip_pwd) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_chip_pwd);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddldo config failed");
+ goto chip_pwd_fail;
+ }
+ }
+ if (bt_power_pdata->bt_gpio_sys_rst) {
+ rc = bt_configure_gpios(on);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power gpio config failed");
+ goto gpio_fail;
+ }
+ }
+ } else {
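+ /*
+ * Power-off path: shares the error labels below so resources are
+ * released in the reverse order of how they were enabled.
+ */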
+ bt_configure_gpios(on);
+gpio_fail:
+ if (bt_power_pdata->bt_gpio_sys_rst)
+ gpio_free(bt_power_pdata->bt_gpio_sys_rst);
+ bt_vreg_disable(bt_power_pdata->bt_chip_pwd);
+chip_pwd_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_pa);
+vdd_pa_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_ldo);
+vdd_ldo_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_io);
+ }
+
+out:
+ return rc;
}
static int bluetooth_toggle_radio(void *data, bool blocked)
@@ -62,7 +251,9 @@
int ret = 0;
int (*power_control)(int enable);
- power_control = data;
+ power_control =
+ ((struct bluetooth_power_platform_data *)data)->bt_power_setup;
+
if (previous != blocked)
ret = (*power_control)(!blocked);
if (!ret)
@@ -117,47 +308,146 @@
platform_set_drvdata(pdev, NULL);
}
+#define MAX_PROP_SIZE 32
+static int bt_dt_parse_vreg_info(struct device *dev,
+ struct bt_power_vreg_data **vreg_data, const char *vreg_name)
+{
+ int len, ret = 0;
+ const __be32 *prop;
+ char prop_name[MAX_PROP_SIZE];
+ struct bt_power_vreg_data *vreg;
+ struct device_node *np = dev->of_node;
+
+ BT_PWR_DBG("vreg dev tree parse for %s", vreg_name);
+
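+ /* Allocate a vreg entry only if the "<name>-supply" phandle exists in DT */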
+ snprintf(prop_name, MAX_PROP_SIZE, "%s-supply", vreg_name);
+ if (of_parse_phandle(np, prop_name, 0)) {
+ vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL);
+ if (!vreg) {
+ dev_err(dev, "No memory for vreg: %s\n", vreg_name);
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ vreg->name = vreg_name;
+
+ snprintf(prop_name, MAX_PROP_SIZE,
+ "qcom,%s-voltage-level", vreg_name);
+ prop = of_get_property(np, prop_name, &len);
+ if (!prop || (len != (2 * sizeof(__be32)))) {
+ dev_warn(dev, "%s %s property\n",
+ prop ? "invalid format" : "no", prop_name);
+ } else {
+ vreg->low_vol_level = be32_to_cpup(&prop[0]);
+ vreg->high_vol_level = be32_to_cpup(&prop[1]);
+ }
+
+ *vreg_data = vreg;
+ BT_PWR_DBG("%s: vol=[%d %d]uV\n",
+ vreg->name, vreg->low_vol_level,
+ vreg->high_vol_level);
+ } else
+ BT_PWR_INFO("%s: is not provided in device tree", vreg_name);
+
+err:
+ return ret;
+}
+
+static int bt_power_populate_dt_pinfo(struct platform_device *pdev)
+{
+ int rc;
+
+ BT_PWR_DBG("");
+
+ if (!bt_power_pdata)
+ return -ENOMEM;
+
+ if (pdev->dev.of_node) {
+ bt_power_pdata->bt_gpio_sys_rst =
+ of_get_named_gpio(pdev->dev.of_node,
+ "qca,bt-reset-gpio", 0);
+ if (bt_power_pdata->bt_gpio_sys_rst < 0) {
+ BT_PWR_ERR("bt-reset-gpio not provided in device tree");
+ return bt_power_pdata->bt_gpio_sys_rst;
+ }
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_io,
+ "qca,bt-vdd-io");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_pa,
+ "qca,bt-vdd-pa");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_ldo,
+ "qca,bt-vdd-ldo");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_chip_pwd,
+ "qca,bt-chip-pwd");
+ if (rc < 0)
+ return rc;
+
+ }
+
+ bt_power_pdata->bt_power_setup = bluetooth_power;
+
+ return 0;
+}
+
static int __devinit bt_power_probe(struct platform_device *pdev)
{
int ret = 0;
dev_dbg(&pdev->dev, "%s\n", __func__);
- if (!pdev->dev.platform_data) {
- /* Update the platform data if the
- device node exists as part of device tree.*/
- if (pdev->dev.of_node) {
- pdev->dev.platform_data = bluetooth_power;
- } else {
- dev_err(&pdev->dev, "device node not set\n");
- return -ENOSYS;
- }
+ bt_power_pdata =
+ kzalloc(sizeof(struct bluetooth_power_platform_data),
+ GFP_KERNEL);
+
+ if (!bt_power_pdata) {
+ BT_PWR_ERR("Failed to allocate memory");
+ return -ENOMEM;
}
+
if (pdev->dev.of_node) {
- bt_reset_gpio = of_get_named_gpio(pdev->dev.of_node,
- "qca,bt-reset-gpio", 0);
- if (bt_reset_gpio < 0) {
- pr_err("bt-reset-gpio not available");
- return bt_reset_gpio;
+ ret = bt_power_populate_dt_pinfo(pdev);
+ if (ret < 0) {
+ BT_PWR_ERR("Failed to populate device tree info");
+ goto free_pdata;
}
+ pdev->dev.platform_data = bt_power_pdata;
+ } else if (pdev->dev.platform_data) {
+ /* Optional data set to default if not provided */
+ if (!((struct bluetooth_power_platform_data *)
+ (pdev->dev.platform_data))->bt_power_setup)
+ ((struct bluetooth_power_platform_data *)
+ (pdev->dev.platform_data))->bt_power_setup =
+ bluetooth_power;
+
+ memcpy(bt_power_pdata, pdev->dev.platform_data,
+ sizeof(struct bluetooth_power_platform_data));
+ } else {
+ BT_PWR_ERR("Failed to get platform data");
+ goto free_pdata;
}
- ret = gpio_request(bt_reset_gpio, "bt sys_rst_n");
- if (ret) {
- pr_err("%s: unable to request gpio %d (%d)\n",
- __func__, bt_reset_gpio, ret);
- return ret;
- }
+ if (bluetooth_power_rfkill_probe(pdev) < 0)
+ goto free_pdata;
- /* When booting up, de-assert BT reset pin */
- ret = gpio_direction_output(bt_reset_gpio, 0);
- if (ret) {
- pr_err("%s: Unable to set direction\n", __func__);
- return ret;
- }
+ btpdev = pdev;
- ret = bluetooth_power_rfkill_probe(pdev);
+ return 0;
+free_pdata:
+ kfree(bt_power_pdata);
return ret;
}
@@ -167,6 +457,11 @@
bluetooth_power_rfkill_remove(pdev);
+ if (bt_power_pdata->bt_chip_pwd->reg)
+ regulator_put(bt_power_pdata->bt_chip_pwd->reg);
+
+ kfree(bt_power_pdata);
+
return 0;
}
@@ -176,7 +471,7 @@
.driver = {
.name = "bt_power",
.owner = THIS_MODULE,
- .of_match_table = ar3002_match_table,
+ .of_match_table = bt_power_match_table,
},
};
diff --git a/drivers/bluetooth/hci_ath.c b/drivers/bluetooth/hci_ath.c
index 2557983..0383d8f 100644
--- a/drivers/bluetooth/hci_ath.c
+++ b/drivers/bluetooth/hci_ath.c
@@ -39,6 +39,8 @@
#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <linux/of_gpio.h>
+#include <linux/proc_fs.h>
+
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
@@ -47,8 +49,10 @@
#include <mach/msm_serial_hs.h>
#endif
-unsigned int enableuartsleep = 1;
-module_param(enableuartsleep, uint, 0644);
+static int enableuartsleep = 1;
+module_param(enableuartsleep, int, 0644);
+MODULE_PARM_DESC(enableuartsleep, "Enable Atheros Sleep Protocol");
+
/*
* Global variables
*/
@@ -62,6 +66,9 @@
/** Global state flags */
static unsigned long flags;
+/** To Check LPM is enabled */
+static bool is_lpm_enabled;
+
/** Workqueue to respond to change in hostwake line */
static void wakeup_host_work(struct work_struct *work);
@@ -72,6 +79,8 @@
/** Lock for state transitions */
static spinlock_t rw_lock;
+#define PROC_DIR "bluetooth/sleep"
+
#define POLARITY_LOW 0
#define POLARITY_HIGH 1
@@ -80,8 +89,11 @@
unsigned ext_wake; /* wake up device */
unsigned host_wake_irq;
int irq_polarity;
+ struct uart_port *uport;
};
+struct work_struct ws_sleep;
+
/* 1 second timeout */
#define TX_TIMER_INTERVAL 1
@@ -99,23 +111,24 @@
struct sk_buff_head txq;
struct work_struct ctxtsw;
- struct work_struct ws_sleep;
};
-static void hsuart_serial_clock_on(struct tty_struct *tty)
+static void hsuart_serial_clock_on(struct uart_port *port)
{
- struct uart_state *state = tty->driver_data;
- struct uart_port *port = state->uart_port;
BT_DBG("");
- msm_hs_request_clock_on(port);
+ if (port)
+ msm_hs_request_clock_on(port);
+ else
+ BT_INFO("Uart has not voted for Clock ON");
}
-static void hsuart_serial_clock_off(struct tty_struct *tty)
+static void hsuart_serial_clock_off(struct uart_port *port)
{
- struct uart_state *state = tty->driver_data;
- struct uart_port *port = state->uart_port;
BT_DBG("");
- msm_hs_request_clock_off(port);
+ if (port)
+ msm_hs_request_clock_off(port);
+ else
+ BT_INFO("Uart has not voted for Clock OFF");
}
static void modify_timer_task(void)
@@ -127,31 +140,31 @@
}
-static int ath_wakeup_ar3k(struct tty_struct *tty)
+static int ath_wakeup_ar3k(void)
{
int status = 0;
if (test_bit(BT_TXEXPIRED, &flags)) {
- hsuart_serial_clock_on(tty);
- BT_INFO("wakeup device\n");
+ hsuart_serial_clock_on(bsi->uport);
+ BT_DBG("wakeup device\n");
gpio_set_value(bsi->ext_wake, 0);
msleep(20);
gpio_set_value(bsi->ext_wake, 1);
}
- modify_timer_task();
+ if (!is_lpm_enabled)
+ modify_timer_task();
return status;
}
static void wakeup_host_work(struct work_struct *work)
{
- struct ath_struct *ath =
- container_of(work, struct ath_struct, ws_sleep);
- BT_INFO("wake up host");
+ BT_DBG("wake up host");
if (test_bit(BT_SLEEPENABLE, &flags)) {
if (test_bit(BT_TXEXPIRED, &flags))
- hsuart_serial_clock_on(ath->hu->tty);
+ hsuart_serial_clock_on(bsi->uport);
}
- modify_timer_task();
+ if (!is_lpm_enabled)
+ modify_timer_task();
}
static void ath_hci_uart_work(struct work_struct *work)
@@ -159,16 +172,14 @@
int status;
struct ath_struct *ath;
struct hci_uart *hu;
- struct tty_struct *tty;
ath = container_of(work, struct ath_struct, ctxtsw);
hu = ath->hu;
- tty = hu->tty;
/* verify and wake up controller */
if (test_bit(BT_SLEEPENABLE, &flags))
- status = ath_wakeup_ar3k(tty);
+ status = ath_wakeup_ar3k();
/* Ready to send Data */
clear_bit(HCI_UART_SENDING, &hu->tx_state);
hci_uart_tx_wakeup(hu);
@@ -176,15 +187,15 @@
static irqreturn_t bluesleep_hostwake_isr(int irq, void *dev_id)
{
- /* schedule a tasklet to handle the change in the host wake line */
- struct ath_struct *ath = (struct ath_struct *)dev_id;
-
- schedule_work(&ath->ws_sleep);
+ /* schedule work on the global shared workqueue to handle
+ * the change in the host wake line
+ */
+ schedule_work(&ws_sleep);
return IRQ_HANDLED;
}
-static int ath_bluesleep_gpio_config(struct ath_struct *ath, int on)
+static int ath_bluesleep_gpio_config(int on)
{
int ret = 0;
@@ -232,16 +243,16 @@
/* Initialize timer */
init_timer(&tx_timer);
tx_timer.function = bluesleep_tx_timer_expire;
- tx_timer.data = (u_long)ath->hu;
+ tx_timer.data = 0;
if (bsi->irq_polarity == POLARITY_LOW) {
ret = request_irq(bsi->host_wake_irq, bluesleep_hostwake_isr,
IRQF_DISABLED | IRQF_TRIGGER_FALLING,
- "bluetooth hostwake", (void *)ath);
+ "bluetooth hostwake", NULL);
} else {
ret = request_irq(bsi->host_wake_irq, bluesleep_hostwake_isr,
IRQF_DISABLED | IRQF_TRIGGER_RISING,
- "bluetooth hostwake", (void *)ath);
+ "bluetooth hostwake", NULL);
}
if (ret < 0) {
BT_ERR("Couldn't acquire BT_HOST_WAKE IRQ");
@@ -257,7 +268,7 @@
return 0;
free_host_wake_irq:
- free_irq(bsi->host_wake_irq, (void *)ath);
+ free_irq(bsi->host_wake_irq, NULL);
delete_timer:
del_timer(&tx_timer);
gpio_ext_wake:
@@ -268,26 +279,76 @@
return ret;
}
+static int ath_lpm_start(void)
+{
+ BT_DBG("Start LPM mode");
+
+ if (!bsi) {
+ BT_ERR("HCIATH3K bluesleep info does not exist");
+ return -EIO;
+ }
+
+ bsi->uport = msm_hs_get_uart_port(0);
+ if (!bsi->uport) {
+ BT_ERR("UART Port is not available");
+ return -ENODEV;
+ }
+
+ INIT_WORK(&ws_sleep, wakeup_host_work);
+
+ if (ath_bluesleep_gpio_config(1) < 0) {
+ BT_ERR("HCIATH3K GPIO Config failed");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int ath_lpm_stop(void)
+{
+ BT_DBG("Stop LPM mode");
+ cancel_work_sync(&ws_sleep);
+
+ if (bsi) {
+ bsi->uport = NULL;
+ ath_bluesleep_gpio_config(0);
+ }
+
+ return 0;
+}
+
/* Initialize protocol */
static int ath_open(struct hci_uart *hu)
{
struct ath_struct *ath;
+ struct uart_state *state;
BT_DBG("hu %p, bsi %p", hu, bsi);
- if (!bsi)
+ if (!bsi) {
+ BT_ERR("HCIATH3K bluesleep info does not exist");
return -EIO;
+ }
ath = kzalloc(sizeof(*ath), GFP_ATOMIC);
- if (!ath)
+ if (!ath) {
+ BT_ERR("HCIATH3K Memory not enough to init driver");
return -ENOMEM;
+ }
skb_queue_head_init(&ath->txq);
hu->priv = ath;
ath->hu = hu;
+ state = hu->tty->driver_data;
- if (ath_bluesleep_gpio_config(ath, 1) < 0) {
+ if (!state) {
+ BT_ERR("HCIATH3K tty driver data does not exist");
+ return -ENXIO;
+ }
+ bsi->uport = state->uart_port;
+
+ if (ath_bluesleep_gpio_config(1) < 0) {
BT_ERR("HCIATH3K GPIO Config failed");
hu->priv = NULL;
kfree(ath);
@@ -300,7 +361,7 @@
modify_timer_task();
}
INIT_WORK(&ath->ctxtsw, ath_hci_uart_work);
- INIT_WORK(&ath->ws_sleep, wakeup_host_work);
+ INIT_WORK(&ws_sleep, wakeup_host_work);
return 0;
}
@@ -327,12 +388,13 @@
cancel_work_sync(&ath->ctxtsw);
- cancel_work_sync(&ath->ws_sleep);
+ cancel_work_sync(&ws_sleep);
if (bsi)
- ath_bluesleep_gpio_config(ath, 0);
+ ath_bluesleep_gpio_config(0);
hu->priv = NULL;
+ bsi->uport = NULL;
kfree(ath);
return 0;
@@ -423,14 +485,13 @@
static void bluesleep_tx_timer_expire(unsigned long data)
{
- struct hci_uart *hu = (struct hci_uart *) data;
if (!test_bit(BT_SLEEPENABLE, &flags))
return;
BT_INFO("Tx timer expired\n");
set_bit(BT_TXEXPIRED, &flags);
- hsuart_serial_clock_off(hu->tty);
+ hsuart_serial_clock_off(bsi->uport);
}
static struct hci_uart_proto athp = {
@@ -443,6 +504,88 @@
.flush = ath_flush,
};
+static int lpm_enabled;
+
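+/*
+ * "ath_lpm" module parameter: 1 configures the sleep GPIOs and enables LPM,
+ * 0 tears the LPM state down again.
+ */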
+static int bluesleep_lpm_set(const char *val, const struct kernel_param *kp)
+{
+ int ret;
+
+ ret = param_set_int(val, kp);
+
+ if (ret) {
+ BT_ERR("HCIATH3K: lpm enable parameter set failed");
+ return ret;
+ }
+
+ BT_DBG("lpm : %d", lpm_enabled);
+
+ if ((lpm_enabled == 0) && is_lpm_enabled) {
+ ath_lpm_stop();
+ clear_bit(BT_SLEEPENABLE, &flags);
+ is_lpm_enabled = false;
+ } else if ((lpm_enabled == 1) && !is_lpm_enabled) {
+ if (ath_lpm_start() < 0) {
+ BT_ERR("HCIATH3K LPM mode failed");
+ return -EIO;
+ }
+ set_bit(BT_SLEEPENABLE, &flags);
+ is_lpm_enabled = true;
+ } else {
+ BT_ERR("HCIATH3K invalid lpm value");
+ return -EINVAL;
+ }
+ return 0;
+
+}
+
+static struct kernel_param_ops bluesleep_lpm_ops = {
+ .set = bluesleep_lpm_set,
+ .get = param_get_int,
+};
+
+module_param_cb(ath_lpm, &bluesleep_lpm_ops,
+ &lpm_enabled, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(ath_lpm, "Enable Atheros LPM sleep Protocol");
+
+static int lpm_btwrite;
+
+static int bluesleep_lpm_btwrite(const char *val, const struct kernel_param *kp)
+{
+ int ret;
+
+ ret = param_set_int(val, kp);
+
+ if (ret) {
+ BT_ERR("HCIATH3K: lpm btwrite parameter set failed");
+ return ret;
+ }
+
+ BT_DBG("btwrite : %d", lpm_btwrite);
+ if (is_lpm_enabled) {
+ if (lpm_btwrite == 0) {
+ /* Setting TXEXPIRED bit to make it
+ * compatible with current solution */
+ set_bit(BT_TXEXPIRED, &flags);
+ hsuart_serial_clock_off(bsi->uport);
+ } else if (lpm_btwrite == 1) {
+ ath_wakeup_ar3k();
+ clear_bit(BT_TXEXPIRED, &flags);
+ } else {
+ BT_ERR("HCIATH3K invalid btwrite value");
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+
+static struct kernel_param_ops bluesleep_lpm_btwrite_ops = {
+ .set = bluesleep_lpm_btwrite,
+ .get = param_get_int,
+};
+
+module_param_cb(ath_btwrite, &bluesleep_lpm_btwrite_ops,
+ &lpm_btwrite, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(ath_lpm, "Assert/Deassert the sleep");
static int bluesleep_populate_dt_pinfo(struct platform_device *pdev)
{
@@ -581,5 +724,6 @@
int __exit ath_deinit(void)
{
platform_driver_unregister(&bluesleep_driver);
+
return hci_uart_unregister_proto(&athp);
}
diff --git a/drivers/char/adsprpc.c b/drivers/char/adsprpc.c
index d78327f..51578e0 100644
--- a/drivers/char/adsprpc.c
+++ b/drivers/char/adsprpc.c
@@ -527,11 +527,9 @@
{
struct fastrpc_apps *me = &gfa;
- if (me->chan)
- (void)smd_close(me->chan);
+ smd_close(me->chan);
context_list_dtor(&me->clst);
- if (me->iclient)
- ion_client_destroy(me->iclient);
+ ion_client_destroy(me->iclient);
me->iclient = 0;
me->chan = 0;
}
@@ -584,25 +582,32 @@
INIT_HLIST_HEAD(&me->htbl[i]);
VERIFY(err, 0 == context_list_ctor(&me->clst, SZ_4K));
if (err)
- goto bail;
+ goto context_list_bail;
me->iclient = msm_ion_client_create(ION_HEAP_CARVEOUT_MASK,
DEVICE_NAME);
VERIFY(err, 0 == IS_ERR_OR_NULL(me->iclient));
if (err)
- goto bail;
+ goto ion_bail;
VERIFY(err, 0 == smd_named_open_on_edge(FASTRPC_SMD_GUID,
SMD_APPS_QDSP, &me->chan,
me, smd_event_handler));
if (err)
- goto bail;
+ goto smd_bail;
VERIFY(err, 0 != wait_for_completion_timeout(&me->work,
RPC_TIMEOUT));
if (err)
- goto bail;
+ goto completion_bail;
}
- bail:
- if (err)
- fastrpc_deinit();
+
+ return 0;
+
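+/* Error labels unwind in the reverse order of initialization */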
+completion_bail:
+ smd_close(me->chan);
+smd_bail:
+ ion_client_destroy(me->iclient);
+ion_bail:
+ context_list_dtor(&me->clst);
+context_list_bail:
return err;
}
@@ -1090,35 +1095,37 @@
VERIFY(err, 0 == fastrpc_init());
if (err)
- goto bail;
+ goto fastrpc_bail;
VERIFY(err, 0 == alloc_chrdev_region(&me->dev_no, 0, 1, DEVICE_NAME));
if (err)
- goto bail;
+ goto alloc_chrdev_bail;
cdev_init(&me->cdev, &fops);
me->cdev.owner = THIS_MODULE;
VERIFY(err, 0 == cdev_add(&me->cdev, MKDEV(MAJOR(me->dev_no), 0), 1));
if (err)
- goto bail;
+ goto cdev_init_bail;
me->class = class_create(THIS_MODULE, "chardrv");
VERIFY(err, !IS_ERR(me->class));
if (err)
- goto bail;
+ goto class_create_bail;
me->dev = device_create(me->class, NULL, MKDEV(MAJOR(me->dev_no), 0),
NULL, DEVICE_NAME);
VERIFY(err, !IS_ERR(me->dev));
if (err)
- goto bail;
+ goto device_create_bail;
pr_info("'created /dev/%s c %d 0'\n", DEVICE_NAME, MAJOR(me->dev_no));
- bail:
- if (err) {
- if (me->dev_no)
- unregister_chrdev_region(me->dev_no, 1);
- if (me->class)
- class_destroy(me->class);
- if (me->cdev.owner)
- cdev_del(&me->cdev);
- fastrpc_deinit();
- }
+
+ return 0;
+
+device_create_bail:
+ class_destroy(me->class);
+class_create_bail:
+ cdev_del(&me->cdev);
+cdev_init_bail:
+ unregister_chrdev_region(me->dev_no, 1);
+alloc_chrdev_bail:
+ fastrpc_deinit();
+fastrpc_bail:
return err;
}
diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c
index 6d28042..682d876 100644
--- a/drivers/char/diag/diag_dci.c
+++ b/drivers/char/diag/diag_dci.c
@@ -1090,6 +1090,12 @@
diag_smd_destructor(&driver->smd_dci[i]);
platform_driver_unregister(&msm_diag_dci_driver);
+
+ if (driver->dci_client_tbl) {
+ for (i = 0; i < MAX_DCI_CLIENTS; i++)
+ kfree(driver->dci_client_tbl[i].dci_data);
+ }
+
kfree(driver->req_tracking_tbl);
kfree(driver->dci_client_tbl);
kfree(driver->apps_dci_buf);
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
index 9e36e1e..04c3300 100644
--- a/drivers/char/diag/diag_masks.c
+++ b/drivers/char/diag/diag_masks.c
@@ -481,6 +481,7 @@
driver->feature_mask->ctrl_pkt_data_len = 4 + FEATURE_MASK_LEN_BYTES;
driver->feature_mask->feature_mask_len = FEATURE_MASK_LEN_BYTES;
memcpy(buf, driver->feature_mask, header_size);
+ feature_byte |= F_DIAG_INT_FEATURE_MASK;
feature_byte |= APPS_RESPOND_LOG_ON_DEMAND;
memcpy(buf+header_size, &feature_byte, FEATURE_MASK_LEN_BYTES);
diff --git a/drivers/char/diag/diagchar.h b/drivers/char/diag/diagchar.h
index 6f37608..684f11d 100644
--- a/drivers/char/diag/diagchar.h
+++ b/drivers/char/diag/diagchar.h
@@ -37,6 +37,7 @@
#define HDLC_OUT_BUF_SIZE 8192
#define POOL_TYPE_COPY 1
#define POOL_TYPE_HDLC 2
+#define POOL_TYPE_USER 3
#define POOL_TYPE_WRITE_STRUCT 4
#define POOL_TYPE_HSIC 5
#define POOL_TYPE_HSIC_2 6
@@ -55,7 +56,7 @@
#define MSG_MASK_SIZE 10000
#define LOG_MASK_SIZE 8000
#define EVENT_MASK_SIZE 1000
-#define USER_SPACE_DATA 8000
+#define USER_SPACE_DATA 8192
#define PKT_SIZE 4096
#define MAX_EQUIP_ID 15
#define DIAG_CTRL_MSG_LOG_MASK 9
@@ -234,16 +235,20 @@
unsigned int poolsize;
unsigned int itemsize_hdlc;
unsigned int poolsize_hdlc;
+ unsigned int itemsize_user;
+ unsigned int poolsize_user;
unsigned int itemsize_write_struct;
unsigned int poolsize_write_struct;
unsigned int debug_flag;
/* State for the mempool for the char driver */
mempool_t *diagpool;
mempool_t *diag_hdlc_pool;
+ mempool_t *diag_user_pool;
mempool_t *diag_write_struct_pool;
struct mutex diagmem_mutex;
int count;
int count_hdlc_pool;
+ int count_user_pool;
int count_write_struct_pool;
int used;
/* Buffers for masks */
@@ -259,7 +264,6 @@
struct diag_smd_info smd_dci[NUM_SMD_DCI_CHANNELS];
unsigned char *usb_buf_out;
unsigned char *apps_rsp_buf;
- unsigned char *user_space_data;
/* buffer for updating mask to peripherals */
unsigned char *buf_msg_mask_update;
unsigned char *buf_log_mask_update;
@@ -276,6 +280,8 @@
struct usb_diag_ch *legacy_ch;
struct work_struct diag_proc_hdlc_work;
struct work_struct diag_read_work;
+ struct work_struct diag_usb_connect_work;
+ struct work_struct diag_usb_disconnect_work;
#endif
struct workqueue_struct *diag_wq;
struct work_struct diag_drain_work;
@@ -312,6 +318,7 @@
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
spinlock_t hsic_ready_spinlock;
/* common for all bridges */
+ struct work_struct diag_connect_work;
struct work_struct diag_disconnect_work;
/* SGLTE variables */
int lcid;
diff --git a/drivers/char/diag/diagchar_core.c b/drivers/char/diag/diagchar_core.c
index a0c32f5..7022a6f 100644
--- a/drivers/char/diag/diagchar_core.c
+++ b/drivers/char/diag/diagchar_core.c
@@ -56,13 +56,16 @@
/* The following variables can be specified by module options */
/* for copy buffer */
static unsigned int itemsize = 4096; /*Size of item in the mempool */
-static unsigned int poolsize = 10; /*Number of items in the mempool */
+static unsigned int poolsize = 12; /*Number of items in the mempool */
/* for hdlc buffer */
static unsigned int itemsize_hdlc = 8192; /*Size of item in the mempool */
-static unsigned int poolsize_hdlc = 8; /*Number of items in the mempool */
+static unsigned int poolsize_hdlc = 10; /*Number of items in the mempool */
+/* for user buffer */
+static unsigned int itemsize_user = 8192; /*Size of item in the mempool */
+static unsigned int poolsize_user = 8; /*Number of items in the mempool */
/* for write structure buffer */
static unsigned int itemsize_write_struct = 20; /*Size of item in the mempool */
-static unsigned int poolsize_write_struct = 8; /* Num of items in the mempool */
+static unsigned int poolsize_write_struct = 10;/* Num of items in the mempool */
/* This is the max number of user-space clients supported at initialization*/
static unsigned int max_clients = 15;
static unsigned int threshold_client_limit = 30;
@@ -781,10 +784,26 @@
{
int i, temp, success = -EINVAL, status;
int temp_realtime_mode = driver->real_time_mode;
+ int requested_mode = (int)ioarg;
+
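+ /* Reject any logging mode outside the known set before switching */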
+ switch (requested_mode) {
+ case USB_MODE:
+ case MEMORY_DEVICE_MODE:
+ case NO_LOGGING_MODE:
+ case UART_MODE:
+ case SOCKET_MODE:
+ case CALLBACK_MODE:
+ case MEMORY_DEVICE_MODE_NRT:
+ break;
+ default:
+ pr_err("diag: In %s, request to switch to invalid mode: %d\n",
+ __func__, requested_mode);
+ return -EINVAL;
+ }
mutex_lock(&driver->diagchar_mutex);
temp = driver->logging_mode;
- driver->logging_mode = (int)ioarg;
+ driver->logging_mode = requested_mode;
if (driver->logging_mode == MEMORY_DEVICE_MODE_NRT) {
diag_send_diag_mode_update(MODE_NONREALTIME);
@@ -1013,6 +1032,8 @@
current->tgid)
driver->req_tracking_tbl[i].pid = 0;
driver->dci_client_tbl[result].client = NULL;
+ kfree(driver->dci_client_tbl[result].dci_data);
+ driver->dci_client_tbl[result].dci_data = NULL;
driver->num_dci_client--;
}
mutex_unlock(&driver->dci_mutex);
@@ -1362,6 +1383,7 @@
struct diag_send_desc_type send = { NULL, NULL, DIAG_STATE_START, 0 };
struct diag_hdlc_dest_type enc = { NULL, NULL, 0 };
void *buf_copy = NULL;
+ void *user_space_data = NULL;
unsigned int payload_size;
index = 0;
@@ -1388,31 +1410,50 @@
}
#endif /* DIAG over USB */
if (pkt_type == DCI_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
- payload_size);
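+ /* Use a per-request buffer from the USER mempool instead of a shared static buffer */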
+ user_space_data = diagmem_alloc(driver, payload_size,
+ POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4, payload_size);
if (err) {
pr_alert("diag: copy failed for DCI data\n");
return DIAG_DCI_SEND_DATA_FAIL;
}
- err = diag_process_dci_transaction(driver->user_space_data,
+ err = diag_process_dci_transaction(user_space_data,
payload_size);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return err;
}
if (pkt_type == CALLBACK_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
- payload_size);
- if (err) {
+ if (payload_size > itemsize) {
+ pr_err("diag: Dropping packet, packet payload size crosses 4KB limit. Current payload size %d\n",
+ payload_size);
+ driver->dropped_count++;
+ return -EBADMSG;
+ }
+
+ buf_copy = diagmem_alloc(driver, payload_size, POOL_TYPE_COPY);
+ if (!buf_copy) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+
+ err = copy_from_user(buf_copy, buf + 4, payload_size);
+ if (err) {
pr_err("diag: copy failed for user space data\n");
return -EIO;
}
/* Check for proc_type */
- remote_proc = diag_get_remote(*(int *)driver->user_space_data);
+ remote_proc = diag_get_remote(*(int *)buf_copy);
if (!remote_proc) {
wait_event_interruptible(driver->wait_q,
(driver->in_busy_pktdata == 0));
- return diag_process_apps_pkt(driver->user_space_data,
- payload_size);
+ ret = diag_process_apps_pkt(buf_copy, payload_size);
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ return ret;
}
/* The packet is for the remote processor */
token_offset = 4;
@@ -1420,8 +1461,8 @@
buf += 4;
/* Perform HDLC encoding on incoming data */
send.state = DIAG_STATE_START;
- send.pkt = (void *)(driver->user_space_data + token_offset);
- send.last = (void *)(driver->user_space_data + token_offset -
+ send.pkt = (void *)(buf_copy + token_offset);
+ send.last = (void *)(buf_copy + token_offset -
1 + payload_size);
send.terminate = 1;
@@ -1503,21 +1544,30 @@
}
}
#endif
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
diagmem_free(driver, buf_hdlc, POOL_TYPE_HDLC);
+ buf_copy = NULL;
buf_hdlc = NULL;
driver->used = 0;
mutex_unlock(&driver->diagchar_mutex);
return ret;
}
if (pkt_type == USER_SPACE_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
+ user_space_data = diagmem_alloc(driver, payload_size,
+ POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4,
payload_size);
if (err) {
pr_err("diag: copy failed for user space data\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return -EIO;
}
/* Check for proc_type */
- remote_proc = diag_get_remote(*(int *)driver->user_space_data);
+ remote_proc = diag_get_remote(*(int *)user_space_data);
if (remote_proc) {
token_offset = 4;
@@ -1527,9 +1577,11 @@
/* Check masks for On-Device logging */
if (driver->mask_check) {
- if (!mask_request_validate(driver->user_space_data +
+ if (!mask_request_validate(user_space_data +
token_offset)) {
pr_alert("diag: mask request Invalid\n");
+ diagmem_free(driver, user_space_data,
+ POOL_TYPE_USER);
return -EFAULT;
}
}
@@ -1537,7 +1589,7 @@
#ifdef DIAG_DEBUG
pr_debug("diag: user space data %d\n", payload_size);
for (i = 0; i < payload_size; i++)
- pr_debug("\t %x", *((driver->user_space_data
+ pr_debug("\t %x", *((user_space_data
+ token_offset)+i));
#endif
#ifdef CONFIG_DIAG_SDIO_PIPE
@@ -1548,7 +1600,7 @@
payload_size));
if (driver->sdio_ch && (payload_size > 0)) {
sdio_write(driver->sdio_ch, (void *)
- (driver->user_space_data + token_offset),
+ (user_space_data + token_offset),
payload_size);
}
}
@@ -1578,8 +1630,8 @@
diag_hsic[index].in_busy_hsic_read_on_device =
0;
err = diag_bridge_write(index,
- driver->user_space_data +
- token_offset, payload_size);
+ user_space_data + token_offset,
+ payload_size);
if (err) {
pr_err("diag: err sending mask to MDM: %d\n",
err);
@@ -1600,11 +1652,13 @@
&& driver->lcid) {
if (payload_size > 0) {
err = msm_smux_write(driver->lcid, NULL,
- driver->user_space_data + token_offset,
+ user_space_data + token_offset,
payload_size);
if (err) {
pr_err("diag:send mask to MDM err %d",
err);
+ diagmem_free(driver, user_space_data,
+ POOL_TYPE_USER);
return err;
}
}
@@ -1613,8 +1667,8 @@
/* send masks to 8k now */
if (!remote_proc)
diag_process_hdlc((void *)
- (driver->user_space_data + token_offset),
- payload_size);
+ (user_space_data + token_offset), payload_size);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return 0;
}
@@ -1885,6 +1939,11 @@
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+static void diag_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect_bridge(1);
+}
+
static void diag_disconnect_work_fn(struct work_struct *w)
{
diagfwd_disconnect_bridge(1);
@@ -1944,6 +2003,8 @@
driver->poolsize = poolsize;
driver->itemsize_hdlc = itemsize_hdlc;
driver->poolsize_hdlc = poolsize_hdlc;
+ driver->itemsize_user = itemsize_user;
+ driver->poolsize_user = poolsize_user;
driver->itemsize_write_struct = itemsize_write_struct;
driver->poolsize_write_struct = poolsize_write_struct;
driver->num_clients = max_clients;
@@ -1969,6 +2030,8 @@
pr_err("diag: could not register HSIC device, ret: %d\n",
ret);
diagfwd_bridge_init(SMUX);
+ INIT_WORK(&(driver->diag_connect_work),
+ diag_connect_work_fn);
INIT_WORK(&(driver->diag_disconnect_work),
diag_disconnect_work_fn);
#endif
@@ -2008,6 +2071,7 @@
diagchar_cleanup();
diagfwd_exit();
diagfwd_cntl_exit();
+ diag_dci_exit();
diag_masks_exit();
diag_sdio_fn(EXIT);
diagfwd_bridge_fn(EXIT);
@@ -2022,6 +2086,7 @@
diagmem_exit(driver, POOL_TYPE_ALL);
diagfwd_exit();
diagfwd_cntl_exit();
+ diag_dci_exit();
diag_masks_exit();
diag_sdio_fn(EXIT);
diagfwd_bridge_fn(EXIT);
diff --git a/drivers/char/diag/diagfwd.c b/drivers/char/diag/diagfwd.c
index 5b929d7..151e304 100644
--- a/drivers/char/diag/diagfwd.c
+++ b/drivers/char/diag/diagfwd.c
@@ -402,7 +402,7 @@
return;
}
if (pkt_len > r) {
- pr_err("diag: In %s, SMD sending partial pkt %d %d %d %d %d %d\n",
+ pr_debug("diag: In %s, SMD sending partial pkt %d %d %d %d %d %d\n",
__func__, pkt_len, r, total_recd, loop_count,
smd_info->peripheral, smd_info->type);
}
@@ -693,8 +693,7 @@
diag_update_sleeping_process(entry.process_id, PKT_TYPE);
} else {
if (len > 0) {
- if ((entry.client_id >= 0) &&
- (entry.client_id < NUM_SMD_DATA_CHANNELS)) {
+ if (entry.client_id < NUM_SMD_DATA_CHANNELS) {
int index = entry.client_id;
if (driver->smd_data[index].ch) {
if ((index == MODEM_DATA) &&
@@ -907,94 +906,186 @@
/* bld time masks */
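+ /* Clamp each requested range to the size of its build-time mask array */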
switch (ssid_first) {
case MSG_SSID_0:
+ if (ssid_range > sizeof(msg_bld_masks_0)) {
+ pr_warning("diag: truncating ssid range for ssid 0");
+ ssid_range = sizeof(msg_bld_masks_0);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_0[i/4];
break;
case MSG_SSID_1:
+ if (ssid_range > sizeof(msg_bld_masks_1)) {
+ pr_warning("diag: truncating ssid range for ssid 1");
+ ssid_range = sizeof(msg_bld_masks_1);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_1[i/4];
break;
case MSG_SSID_2:
+ if (ssid_range > sizeof(msg_bld_masks_2)) {
+ pr_warning("diag: truncating ssid range for ssid 2");
+ ssid_range = sizeof(msg_bld_masks_2);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_2[i/4];
break;
case MSG_SSID_3:
+ if (ssid_range > sizeof(msg_bld_masks_3)) {
+ pr_warning("diag: truncating ssid range for ssid 3");
+ ssid_range = sizeof(msg_bld_masks_3);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_3[i/4];
break;
case MSG_SSID_4:
+ if (ssid_range > sizeof(msg_bld_masks_4)) {
+ pr_warning("diag: truncating ssid range for ssid 4");
+ ssid_range = sizeof(msg_bld_masks_4);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_4[i/4];
break;
case MSG_SSID_5:
+ if (ssid_range > sizeof(msg_bld_masks_5)) {
+ pr_warning("diag: truncating ssid range for ssid 5");
+ ssid_range = sizeof(msg_bld_masks_5);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_5[i/4];
break;
case MSG_SSID_6:
+ if (ssid_range > sizeof(msg_bld_masks_6)) {
+ pr_warning("diag: truncating ssid range for ssid 6");
+ ssid_range = sizeof(msg_bld_masks_6);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_6[i/4];
break;
case MSG_SSID_7:
+ if (ssid_range > sizeof(msg_bld_masks_7)) {
+ pr_warning("diag: truncating ssid range for ssid 7");
+ ssid_range = sizeof(msg_bld_masks_7);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_7[i/4];
break;
case MSG_SSID_8:
+ if (ssid_range > sizeof(msg_bld_masks_8)) {
+ pr_warning("diag: truncating ssid range for ssid 8");
+ ssid_range = sizeof(msg_bld_masks_8);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_8[i/4];
break;
case MSG_SSID_9:
+ if (ssid_range > sizeof(msg_bld_masks_9)) {
+ pr_warning("diag: truncating ssid range for ssid 9");
+ ssid_range = sizeof(msg_bld_masks_9);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_9[i/4];
break;
case MSG_SSID_10:
+ if (ssid_range > sizeof(msg_bld_masks_10)) {
+ pr_warning("diag: truncating ssid range for ssid 10");
+ ssid_range = sizeof(msg_bld_masks_10);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_10[i/4];
break;
case MSG_SSID_11:
+ if (ssid_range > sizeof(msg_bld_masks_11)) {
+ pr_warning("diag: truncating ssid range for ssid 11");
+ ssid_range = sizeof(msg_bld_masks_11);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_11[i/4];
break;
case MSG_SSID_12:
+ if (ssid_range > sizeof(msg_bld_masks_12)) {
+ pr_warning("diag: truncating ssid range for ssid 12");
+ ssid_range = sizeof(msg_bld_masks_12);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_12[i/4];
break;
case MSG_SSID_13:
+ if (ssid_range > sizeof(msg_bld_masks_13)) {
+ pr_warning("diag: truncating ssid range for ssid 13");
+ ssid_range = sizeof(msg_bld_masks_13);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_13[i/4];
break;
case MSG_SSID_14:
+ if (ssid_range > sizeof(msg_bld_masks_14)) {
+ pr_warning("diag: truncating ssid range for ssid 14");
+ ssid_range = sizeof(msg_bld_masks_14);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_14[i/4];
break;
case MSG_SSID_15:
+ if (ssid_range > sizeof(msg_bld_masks_15)) {
+ pr_warning("diag: truncating ssid range for ssid 15");
+ ssid_range = sizeof(msg_bld_masks_15);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_15[i/4];
break;
case MSG_SSID_16:
+ if (ssid_range > sizeof(msg_bld_masks_16)) {
+ pr_warning("diag: truncating ssid range for ssid 16");
+ ssid_range = sizeof(msg_bld_masks_16);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_16[i/4];
break;
case MSG_SSID_17:
+ if (ssid_range > sizeof(msg_bld_masks_17)) {
+ pr_warning("diag: truncating ssid range for ssid 17");
+ ssid_range = sizeof(msg_bld_masks_17);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_17[i/4];
break;
case MSG_SSID_18:
+ if (ssid_range > sizeof(msg_bld_masks_18)) {
+ pr_warning("diag: truncating ssid range for ssid 18");
+ ssid_range = sizeof(msg_bld_masks_18);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_18[i/4];
break;
case MSG_SSID_19:
+ if (ssid_range > sizeof(msg_bld_masks_19)) {
+ pr_warning("diag: truncating ssid range for ssid 19");
+ ssid_range = sizeof(msg_bld_masks_19);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_19[i/4];
break;
case MSG_SSID_20:
+ if (ssid_range > sizeof(msg_bld_masks_20)) {
+ pr_warning("diag: truncating ssid range for ssid 20");
+ ssid_range = sizeof(msg_bld_masks_20);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_20[i/4];
break;
case MSG_SSID_21:
+ if (ssid_range > sizeof(msg_bld_masks_21)) {
+ pr_warning("diag: truncating ssid range for ssid 21");
+ ssid_range = sizeof(msg_bld_masks_21);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_21[i/4];
break;
case MSG_SSID_22:
+ if (ssid_range > sizeof(msg_bld_masks_22)) {
+ pr_warning("diag: truncating ssid range for ssid 22");
+ ssid_range = sizeof(msg_bld_masks_22);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_22[i/4];
break;
@@ -1191,6 +1282,16 @@
#define N_LEGACY_WRITE (driver->poolsize + 6)
#define N_LEGACY_READ 1
+static void diag_usb_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect();
+}
+
+static void diag_usb_disconnect_work_fn(struct work_struct *w)
+{
+ diagfwd_disconnect();
+}
+
int diagfwd_connect(void)
{
int err;
@@ -1357,10 +1458,12 @@
{
switch (event) {
case USB_DIAG_CONNECT:
- diagfwd_connect();
+ queue_work(driver->diag_wq,
+ &driver->diag_usb_connect_work);
break;
case USB_DIAG_DISCONNECT:
- diagfwd_disconnect();
+ queue_work(driver->diag_wq,
+ &driver->diag_usb_disconnect_work);
break;
case USB_DIAG_READ_DONE:
diagfwd_read_complete(d_req);
@@ -1692,11 +1795,6 @@
&& (driver->hdlc_buf = kzalloc(HDLC_MAX, GFP_KERNEL)) == NULL)
goto err;
kmemleak_not_leak(driver->hdlc_buf);
- if (driver->user_space_data == NULL)
- driver->user_space_data = kzalloc(USER_SPACE_DATA, GFP_KERNEL);
- if (driver->user_space_data == NULL)
- goto err;
- kmemleak_not_leak(driver->user_space_data);
if (driver->client_map == NULL &&
(driver->client_map = kzalloc
((driver->num_clients) * sizeof(struct diag_client_map),
@@ -1741,6 +1839,10 @@
}
driver->diag_wq = create_singlethread_workqueue("diag_wq");
#ifdef CONFIG_DIAG_OVER_USB
+ INIT_WORK(&(driver->diag_usb_connect_work),
+ diag_usb_connect_work_fn);
+ INIT_WORK(&(driver->diag_usb_disconnect_work),
+ diag_usb_disconnect_work_fn);
INIT_WORK(&(driver->diag_proc_hdlc_work), diag_process_hdlc_fn);
INIT_WORK(&(driver->diag_read_work), diag_read_work_fn);
driver->legacy_ch = usb_diag_open(DIAG_LEGACY, driver,
@@ -1771,7 +1873,6 @@
kfree(driver->pkt_buf);
kfree(driver->usb_read_ptr);
kfree(driver->apps_rsp_buf);
- kfree(driver->user_space_data);
if (driver->diag_wq)
destroy_workqueue(driver->diag_wq);
}
@@ -1804,6 +1905,5 @@
kfree(driver->pkt_buf);
kfree(driver->usb_read_ptr);
kfree(driver->apps_rsp_buf);
- kfree(driver->user_space_data);
destroy_workqueue(driver->diag_wq);
}
diff --git a/drivers/char/diag/diagfwd_bridge.c b/drivers/char/diag/diagfwd_bridge.c
index 475f5ba..8c07219b 100644
--- a/drivers/char/diag/diagfwd_bridge.c
+++ b/drivers/char/diag/diagfwd_bridge.c
@@ -233,7 +233,8 @@
switch (event) {
case USB_DIAG_CONNECT:
- diagfwd_connect_bridge(1);
+ queue_work(driver->diag_wq,
+ &driver->diag_connect_work);
break;
case USB_DIAG_DISCONNECT:
queue_work(driver->diag_wq,
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
index 7cd1866..9c2c691 100644
--- a/drivers/char/diag/diagfwd_cntl.h
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -30,6 +30,9 @@
/* Send Diag F3 mask */
#define DIAG_CTRL_MSG_F3_MASK_V2 11
+/* Denotes that we support sending/receiving the feature mask */
+#define F_DIAG_INT_FEATURE_MASK 0x01
+
struct cmd_code_range {
uint16_t cmd_code_lo;
uint16_t cmd_code_hi;
diff --git a/drivers/char/diag/diagmem.c b/drivers/char/diag/diagmem.c
index 0cd8267..bd339e2 100644
--- a/drivers/char/diag/diagmem.c
+++ b/drivers/char/diag/diagmem.c
@@ -45,6 +45,15 @@
GFP_ATOMIC);
}
}
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool) {
+ if (driver->count_user_pool < driver->poolsize_user) {
+ atomic_add(1,
+ (atomic_t *)&driver->count_user_pool);
+ buf = mempool_alloc(driver->diag_user_pool,
+ GFP_ATOMIC);
+ }
+ }
} else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
if (driver->diag_write_struct_pool) {
if (driver->count_write_struct_pool <
@@ -98,8 +107,9 @@
mempool_destroy(driver->diagpool);
driver->diagpool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy COPY mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy COPY mempool");
+ }
}
if (driver->diag_hdlc_pool) {
@@ -107,8 +117,19 @@
mempool_destroy(driver->diag_hdlc_pool);
driver->diag_hdlc_pool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy HDLC mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy HDLC mempool");
+ }
+ }
+
+ if (driver->diag_user_pool) {
+ if (driver->count_user_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_user_pool);
+ driver->diag_user_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type ==
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy USER mempool");
+ }
}
if (driver->diag_write_struct_pool) {
@@ -119,8 +140,9 @@
mempool_destroy(driver->diag_write_struct_pool);
driver->diag_write_struct_pool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy STRUCT mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy STRUCT mempool");
+ }
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
for (index = 0; index < MAX_HSIC_CH; index++) {
@@ -163,16 +185,25 @@
mempool_free(buf, driver->diagpool);
atomic_add(-1, (atomic_t *)&driver->count);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "mempool memory which is already free %d", driver->count);
+ pr_err("diag: Attempt to free up DIAG driver mempool memory which is already free %d",
+ driver->count);
} else if (pool_type == POOL_TYPE_HDLC) {
if (driver->diag_hdlc_pool != NULL &&
driver->count_hdlc_pool > 0) {
mempool_free(buf, driver->diag_hdlc_pool);
atomic_add(-1, (atomic_t *)&driver->count_hdlc_pool);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "HDLC mempool which is already free %d ", driver->count_hdlc_pool);
+ pr_err("diag: Attempt to free up DIAG driver HDLC mempool which is already free %d ",
+ driver->count_hdlc_pool);
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool != NULL &&
+ driver->count_user_pool > 0) {
+ mempool_free(buf, driver->diag_user_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_user_pool);
+ } else {
+ pr_err("diag: Attempt to free up DIAG driver USER mempool which is already free %d ",
+ driver->count_user_pool);
+ }
} else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
if (driver->diag_write_struct_pool != NULL &&
driver->count_write_struct_pool > 0) {
@@ -180,9 +211,8 @@
atomic_add(-1,
(atomic_t *)&driver->count_write_struct_pool);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "USB structure mempool which is already free %d ",
- driver->count_write_struct_pool);
+ pr_err("diag: Attempt to free up DIAG driver USB structure mempool which is already free %d ",
+ driver->count_write_struct_pool);
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
} else if (pool_type == POOL_TYPE_HSIC ||
pool_type == POOL_TYPE_HSIC_2) {
@@ -229,18 +259,25 @@
driver->diag_hdlc_pool = mempool_create_kmalloc_pool(
driver->poolsize_hdlc, driver->itemsize_hdlc);
+ if (driver->count_user_pool == 0)
+ driver->diag_user_pool = mempool_create_kmalloc_pool(
+ driver->poolsize_user, driver->itemsize_user);
+
if (driver->count_write_struct_pool == 0)
driver->diag_write_struct_pool = mempool_create_kmalloc_pool(
driver->poolsize_write_struct, driver->itemsize_write_struct);
if (!driver->diagpool)
- printk(KERN_INFO "Cannot allocate diag mempool\n");
+ pr_err("diag: Cannot allocate diag mempool\n");
if (!driver->diag_hdlc_pool)
- printk(KERN_INFO "Cannot allocate diag HDLC mempool\n");
+ pr_err("diag: Cannot allocate diag HDLC mempool\n");
+
+ if (!driver->diag_user_pool)
+ pr_err("diag: Cannot allocate diag USER mempool\n");
if (!driver->diag_write_struct_pool)
- printk(KERN_INFO "Cannot allocate diag USB struct mempool\n");
+ pr_err("diag: Cannot allocate diag USB struct mempool\n");
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
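
The new USER pool follows the same counted-mempool pattern as the existing COPY and HDLC pools: a kmalloc-backed mempool plus a counter that caps outstanding buffers at the configured pool size. A condensed sketch of that pattern, using an atomic_t counter and illustrative names rather than the driver's exact fields (the driver itself keeps a plain int and casts it to atomic_t):

#include <linux/mempool.h>
#include <linux/slab.h>
#include <linux/atomic.h>
#include <linux/errno.h>

struct diag_pool {
	mempool_t *pool;
	atomic_t count;		/* outstanding buffers */
	int poolsize;		/* max outstanding buffers */
};

static int diag_pool_init(struct diag_pool *p, int poolsize, size_t itemsize)
{
	p->poolsize = poolsize;
	atomic_set(&p->count, 0);
	p->pool = mempool_create_kmalloc_pool(poolsize, itemsize);
	return p->pool ? 0 : -ENOMEM;
}

static void *diag_pool_alloc(struct diag_pool *p)
{
	void *buf;

	if (!p->pool || atomic_read(&p->count) >= p->poolsize)
		return NULL;	/* cap reached, caller retries later */
	buf = mempool_alloc(p->pool, GFP_ATOMIC);
	if (buf)
		atomic_inc(&p->count);
	return buf;
}

static void diag_pool_free(struct diag_pool *p, void *buf)
{
	if (p->pool && atomic_read(&p->count) > 0) {
		mempool_free(buf, p->pool);
		atomic_dec(&p->count);
	}
}
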
diff --git a/drivers/char/msm_rotator.c b/drivers/char/msm_rotator.c
index e946b42..9a43ea4 100644
--- a/drivers/char/msm_rotator.c
+++ b/drivers/char/msm_rotator.c
@@ -181,8 +181,7 @@
pr_err("ion_import_dma_buf() failed\n");
return PTR_ERR(*pihdl);
}
- pr_debug("%s(): ion_hdl %p, ion_fd %d\n", __func__, *pihdl,
- ion_share_dma_buf(msm_rotator_dev->client, *pihdl));
+ pr_debug("%s(): ion_hdl %p, ion_fd %d\n", __func__, *pihdl, mem_id);
if (rot_iommu_split_domain) {
if (secure) {
diff --git a/drivers/coresight/coresight-csr.c b/drivers/coresight/coresight-csr.c
index 988d1c9..1c2ab25 100644
--- a/drivers/coresight/coresight-csr.c
+++ b/drivers/coresight/coresight-csr.c
@@ -102,7 +102,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_enable_bam_to_usb);
+EXPORT_SYMBOL(msm_qdss_csr_enable_bam_to_usb);
void msm_qdss_csr_disable_bam_to_usb(void)
{
@@ -117,7 +117,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_disable_bam_to_usb);
+EXPORT_SYMBOL(msm_qdss_csr_disable_bam_to_usb);
void msm_qdss_csr_disable_flush(void)
{
@@ -132,7 +132,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_disable_flush);
+EXPORT_SYMBOL(msm_qdss_csr_disable_flush);
static int __devinit csr_probe(struct platform_device *pdev)
{
diff --git a/drivers/coresight/coresight-etm.c b/drivers/coresight/coresight-etm.c
index 2777769..5a5c0cf 100644
--- a/drivers/coresight/coresight-etm.c
+++ b/drivers/coresight/coresight-etm.c
@@ -276,23 +276,32 @@
}
/*
- * Memory mapped writes to clear os lock are not supported on Krait v1, v2
- * and OS lock must be unlocked before any memory mapped access, otherwise
- * memory mapped reads/writes will be invalid.
+ * Unlock OS lock to allow memory mapped access on Krait and in general
+ * so that ETMSR[1] can be polled while clearing the ETMCR[10] prog bit
+ * since ETMSR[1] is set when prog bit is set or OS lock is set.
*/
static void etm_os_unlock(void *info)
{
struct etm_drvdata *drvdata = (struct etm_drvdata *) info;
- ETM_UNLOCK(drvdata);
+ /*
+ * Memory mapped writes to clear os lock are not supported on Krait v1,
+ * v2 and OS lock must be unlocked before any memory mapped access,
+ * otherwise memory mapped reads/writes will be invalid.
+ */
if (cpu_is_krait()) {
etm_writel_cp14(0x0, ETMOSLAR);
+ /* ensure os lock is unlocked before we return */
isb();
- } else if (etm_os_lock_present(drvdata)) {
- etm_writel(drvdata, 0x0, ETMOSLAR);
- mb();
+ } else {
+ ETM_UNLOCK(drvdata);
+ if (etm_os_lock_present(drvdata)) {
+ etm_writel(drvdata, 0x0, ETMOSLAR);
+ /* ensure os lock is unlocked before we return */
+ mb();
+ }
+ ETM_LOCK(drvdata);
}
- ETM_LOCK(drvdata);
}
/*
@@ -1876,11 +1885,26 @@
void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
+ static bool clk_disable[NR_CPUS];
+ int ret;
if (!etmdrvdata[cpu])
goto out;
switch (action & (~CPU_TASKS_FROZEN)) {
+ case CPU_UP_PREPARE:
+ if (!etmdrvdata[cpu]->os_unlock) {
+ ret = clk_prepare_enable(etmdrvdata[cpu]->clk);
+ if (ret) {
+ dev_err(etmdrvdata[cpu]->dev,
+ "ETM clk enable during hotplug failed"
+ "for cpu: %d, ret: %d\n", cpu, ret);
+ return notifier_from_errno(ret);
+ }
+ clk_disable[cpu] = true;
+ }
+ break;
+
case CPU_STARTING:
spin_lock(&etmdrvdata[cpu]->spinlock);
if (!etmdrvdata[cpu]->os_unlock) {
@@ -1894,6 +1918,11 @@
break;
case CPU_ONLINE:
+ if (clk_disable[cpu]) {
+ clk_disable_unprepare(etmdrvdata[cpu]->clk);
+ clk_disable[cpu] = false;
+ }
+
if (etmdrvdata[cpu]->boot_enable &&
!etmdrvdata[cpu]->sticky_enable)
coresight_enable(etmdrvdata[cpu]->csdev);
@@ -1903,6 +1932,13 @@
__etm_store_pcsave(etmdrvdata[cpu], 1);
break;
+ case CPU_UP_CANCELED:
+ if (clk_disable[cpu]) {
+ clk_disable_unprepare(etmdrvdata[cpu]->clk);
+ clk_disable[cpu] = false;
+ }
+ break;
+
case CPU_DYING:
spin_lock(&etmdrvdata[cpu]->spinlock);
if (etmdrvdata[cpu]->enable && etmdrvdata[cpu]->round_robin)
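
The added CPU_UP_PREPARE / CPU_ONLINE / CPU_UP_CANCELED cases keep the ETM clock enabled only across the window in which CPU_STARTING may need to unlock the OS lock, and they balance every clk_prepare_enable() with exactly one clk_disable_unprepare(). A reduced sketch of that pairing (get_percpu_clk() is a hypothetical helper; error handling is trimmed):

#include <linux/cpu.h>
#include <linux/clk.h>
#include <linux/notifier.h>

/* Sketch: per-CPU bookkeeping for a clock that must stay on while the
 * CPU_STARTING callback touches the hardware. */
static bool clk_was_enabled[NR_CPUS];

static int example_cpu_callback(struct notifier_block *nb,
				unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;
	struct clk *clk = get_percpu_clk(cpu);	/* hypothetical helper */

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_UP_PREPARE:		/* process context, may sleep */
		if (clk_prepare_enable(clk))
			return notifier_from_errno(-EIO);
		clk_was_enabled[cpu] = true;
		break;
	case CPU_ONLINE:		/* bring-up finished */
	case CPU_UP_CANCELED:		/* bring-up aborted */
		if (clk_was_enabled[cpu]) {
			clk_disable_unprepare(clk);
			clk_was_enabled[cpu] = false;
		}
		break;
	}
	return NOTIFY_OK;
}
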
diff --git a/drivers/coresight/coresight-stm.c b/drivers/coresight/coresight-stm.c
index 87cf63a..7d4dabe 100644
--- a/drivers/coresight/coresight-stm.c
+++ b/drivers/coresight/coresight-stm.c
@@ -595,7 +595,7 @@
return __stm_trace(options, entity_id, proto_id, data, size);
}
-EXPORT_SYMBOL_GPL(stm_trace);
+EXPORT_SYMBOL(stm_trace);
static ssize_t stm_write(struct file *file, const char __user *data,
size_t size, loff_t *ppos)
diff --git a/drivers/coresight/coresight.c b/drivers/coresight/coresight.c
index aef3d26..e237fb7 100644
--- a/drivers/coresight/coresight.c
+++ b/drivers/coresight/coresight.c
@@ -368,7 +368,7 @@
pr_err("coresight: enable failed\n");
return ret;
}
-EXPORT_SYMBOL_GPL(coresight_enable);
+EXPORT_SYMBOL(coresight_enable);
void coresight_disable(struct coresight_device *csdev)
{
@@ -391,7 +391,7 @@
out:
up(&coresight_mutex);
}
-EXPORT_SYMBOL_GPL(coresight_disable);
+EXPORT_SYMBOL(coresight_disable);
void coresight_abort(void)
{
@@ -415,7 +415,7 @@
out:
up(&coresight_mutex);
}
-EXPORT_SYMBOL_GPL(coresight_abort);
+EXPORT_SYMBOL(coresight_abort);
static ssize_t coresight_show_type(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -681,7 +681,7 @@
err_kzalloc_csdev:
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(coresight_register);
+EXPORT_SYMBOL(coresight_register);
void coresight_unregister(struct coresight_device *csdev)
{
@@ -693,7 +693,7 @@
put_device(&csdev->dev);
}
}
-EXPORT_SYMBOL_GPL(coresight_unregister);
+EXPORT_SYMBOL(coresight_unregister);
static int __init coresight_init(void)
{
diff --git a/drivers/coresight/of_coresight.c b/drivers/coresight/of_coresight.c
index 1eccd09..8b8c0d62 100644
--- a/drivers/coresight/of_coresight.c
+++ b/drivers/coresight/of_coresight.c
@@ -97,7 +97,7 @@
"coresight-default-sink");
return pdata;
}
-EXPORT_SYMBOL_GPL(of_get_coresight_platform_data);
+EXPORT_SYMBOL(of_get_coresight_platform_data);
struct coresight_cti_data *of_get_coresight_cti_data(
struct device *dev, struct device_node *node)
diff --git a/drivers/cpufreq/cpufreq_interactive.c b/drivers/cpufreq/cpufreq_interactive.c
index 63cdc68..7d1952c 100644
--- a/drivers/cpufreq/cpufreq_interactive.c
+++ b/drivers/cpufreq/cpufreq_interactive.c
@@ -20,97 +20,94 @@
#include <linux/cpumask.h>
#include <linux/cpufreq.h>
#include <linux/module.h>
-#include <linux/mutex.h>
+#include <linux/moduleparam.h>
+#include <linux/rwsem.h>
#include <linux/sched.h>
#include <linux/tick.h>
#include <linux/time.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
-#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/input.h>
#include <asm/cputime.h>
#define CREATE_TRACE_POINTS
#include <trace/events/cpufreq_interactive.h>
-static atomic_t active_count = ATOMIC_INIT(0);
+static int active_count;
struct cpufreq_interactive_cpuinfo {
struct timer_list cpu_timer;
- int timer_idlecancel;
+ struct timer_list cpu_slack_timer;
+ spinlock_t load_lock; /* protects the next 4 fields */
u64 time_in_idle;
- u64 idle_exit_time;
- u64 timer_run_time;
- int idling;
- u64 target_set_time;
- u64 target_set_time_in_idle;
+ u64 time_in_idle_timestamp;
+ u64 cputime_speedadj;
+ u64 cputime_speedadj_timestamp;
struct cpufreq_policy *policy;
struct cpufreq_frequency_table *freq_table;
unsigned int target_freq;
unsigned int floor_freq;
u64 floor_validate_time;
u64 hispeed_validate_time;
+ struct rw_semaphore enable_sem;
int governor_enabled;
};
static DEFINE_PER_CPU(struct cpufreq_interactive_cpuinfo, cpuinfo);
-/* Workqueues handle frequency scaling */
-static struct task_struct *up_task;
-static struct workqueue_struct *down_wq;
-static struct work_struct freq_scale_down_work;
-static cpumask_t up_cpumask;
-static spinlock_t up_cpumask_lock;
-static cpumask_t down_cpumask;
-static spinlock_t down_cpumask_lock;
-static struct mutex set_speed_lock;
+/* realtime thread handles frequency scaling */
+static struct task_struct *speedchange_task;
+static cpumask_t speedchange_cpumask;
+static spinlock_t speedchange_cpumask_lock;
+static struct mutex gov_lock;
/* Hi speed to bump to from lo speed when load burst (default max) */
-static u64 hispeed_freq;
+static unsigned int hispeed_freq;
/* Go to hi speed when CPU load at or above this value. */
-#define DEFAULT_GO_HISPEED_LOAD 85
-static unsigned long go_hispeed_load;
+#define DEFAULT_GO_HISPEED_LOAD 99
+static unsigned long go_hispeed_load = DEFAULT_GO_HISPEED_LOAD;
+
+/* Target load. Lower values result in higher CPU speeds. */
+#define DEFAULT_TARGET_LOAD 90
+static unsigned int default_target_loads[] = {DEFAULT_TARGET_LOAD};
+static spinlock_t target_loads_lock;
+static unsigned int *target_loads = default_target_loads;
+static int ntarget_loads = ARRAY_SIZE(default_target_loads);
/*
* The minimum amount of time to spend at a frequency before we can ramp down.
*/
#define DEFAULT_MIN_SAMPLE_TIME (80 * USEC_PER_MSEC)
-static unsigned long min_sample_time;
+static unsigned long min_sample_time = DEFAULT_MIN_SAMPLE_TIME;
/*
* The sample rate of the timer used to increase frequency
*/
#define DEFAULT_TIMER_RATE (20 * USEC_PER_MSEC)
-static unsigned long timer_rate;
+static unsigned long timer_rate = DEFAULT_TIMER_RATE;
/*
* Wait this long before raising speed above hispeed, by default a single
* timer interval.
*/
#define DEFAULT_ABOVE_HISPEED_DELAY DEFAULT_TIMER_RATE
-static unsigned long above_hispeed_delay_val;
+static unsigned long above_hispeed_delay_val = DEFAULT_ABOVE_HISPEED_DELAY;
-/*
- * Boost pulse to hispeed on touchscreen input.
- */
-
-static int input_boost_val;
-
-struct cpufreq_interactive_inputopen {
- struct input_handle *handle;
- struct work_struct inputopen_work;
-};
-
-static struct cpufreq_interactive_inputopen inputopen;
-
-/*
- * Non-zero means longer-term speed boost active.
- */
-
+/* Non-zero means indefinite speed boost active */
static int boost_val;
+/* Duration of a boost pulse in usecs */
+static int boostpulse_duration_val = DEFAULT_MIN_SAMPLE_TIME;
+/* End time of boost pulse in ktime converted to usecs */
+static u64 boostpulse_endtime;
+
+/*
+ * Max additional time to wait in idle, beyond timer_rate, at speeds above
+ * minimum before wakeup to reduce speed, or -1 if unnecessary.
+ */
+#define DEFAULT_TIMER_SLACK (4 * DEFAULT_TIMER_RATE)
+static int timer_slack_val = DEFAULT_TIMER_SLACK;
static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
unsigned int event);
@@ -125,104 +122,210 @@
.owner = THIS_MODULE,
};
-static void cpufreq_interactive_timer(unsigned long data)
+static void cpufreq_interactive_timer_resched(
+ struct cpufreq_interactive_cpuinfo *pcpu)
{
- unsigned int delta_idle;
- unsigned int delta_time;
- int cpu_load;
- int load_since_change;
- u64 time_in_idle;
- u64 idle_exit_time;
- struct cpufreq_interactive_cpuinfo *pcpu =
- &per_cpu(cpuinfo, data);
- u64 now_idle;
- unsigned int new_freq;
- unsigned int index;
+ unsigned long expires = jiffies + usecs_to_jiffies(timer_rate);
unsigned long flags;
- smp_rmb();
+ mod_timer_pinned(&pcpu->cpu_timer, expires);
+ if (timer_slack_val >= 0 && pcpu->target_freq > pcpu->policy->min) {
+ expires += usecs_to_jiffies(timer_slack_val);
+ mod_timer_pinned(&pcpu->cpu_slack_timer, expires);
+ }
+ spin_lock_irqsave(&pcpu->load_lock, flags);
+ pcpu->time_in_idle =
+ get_cpu_idle_time_us(smp_processor_id(),
+ &pcpu->time_in_idle_timestamp);
+ pcpu->cputime_speedadj = 0;
+ pcpu->cputime_speedadj_timestamp = pcpu->time_in_idle_timestamp;
+ spin_unlock_irqrestore(&pcpu->load_lock, flags);
+}
+
+static unsigned int freq_to_targetload(unsigned int freq)
+{
+ int i;
+ unsigned int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+
+ for (i = 0; i < ntarget_loads - 1 && freq >= target_loads[i+1]; i += 2)
+ ;
+
+ ret = target_loads[i];
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return ret;
+}
+
+/*
+ * If increasing frequencies never map to a lower target load then
+ * choose_freq() will find the minimum frequency that does not exceed its
+ * target load given the current load.
+ */
+
+static unsigned int choose_freq(
+ struct cpufreq_interactive_cpuinfo *pcpu, unsigned int loadadjfreq)
+{
+ unsigned int freq = pcpu->policy->cur;
+ unsigned int prevfreq, freqmin, freqmax;
+ unsigned int tl;
+ int index;
+
+ freqmin = 0;
+ freqmax = UINT_MAX;
+
+ do {
+ prevfreq = freq;
+ tl = freq_to_targetload(freq);
+
+ /*
+ * Find the lowest frequency where the computed load is less
+ * than or equal to the target load.
+ */
+
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table, loadadjfreq / tl,
+ CPUFREQ_RELATION_L, &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ if (freq > prevfreq) {
+ /* The previous frequency is too low. */
+ freqmin = prevfreq;
+
+ if (freq >= freqmax) {
+ /*
+ * Find the highest frequency that is less
+ * than freqmax.
+ */
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table,
+ freqmax - 1, CPUFREQ_RELATION_H,
+ &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ if (freq == freqmin) {
+ /*
+ * The first frequency below freqmax
+ * has already been found to be too
+ * low. freqmax is the lowest speed
+ * we found that is fast enough.
+ */
+ freq = freqmax;
+ break;
+ }
+ }
+ } else if (freq < prevfreq) {
+ /* The previous frequency is high enough. */
+ freqmax = prevfreq;
+
+ if (freq <= freqmin) {
+ /*
+ * Find the lowest frequency that is higher
+ * than freqmin.
+ */
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table,
+ freqmin + 1, CPUFREQ_RELATION_L,
+ &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ /*
+ * If freqmax is the first frequency above
+ * freqmin then we have already found that
+ * this speed is fast enough.
+ */
+ if (freq == freqmax)
+ break;
+ }
+ }
+
+ /* If same frequency chosen as previous then done. */
+ } while (freq != prevfreq);
+
+ return freq;
+}
+
+static u64 update_load(int cpu)
+{
+ struct cpufreq_interactive_cpuinfo *pcpu = &per_cpu(cpuinfo, cpu);
+ u64 now;
+ u64 now_idle;
+ unsigned int delta_idle;
+ unsigned int delta_time;
+ u64 active_time;
+
+ now_idle = get_cpu_idle_time_us(cpu, &now);
+ delta_idle = (unsigned int)(now_idle - pcpu->time_in_idle);
+ delta_time = (unsigned int)(now - pcpu->time_in_idle_timestamp);
+ active_time = delta_time - delta_idle;
+ pcpu->cputime_speedadj += active_time * pcpu->policy->cur;
+
+ pcpu->time_in_idle = now_idle;
+ pcpu->time_in_idle_timestamp = now;
+ return now;
+}
+
+static void cpufreq_interactive_timer(unsigned long data)
+{
+ u64 now;
+ unsigned int delta_time;
+ u64 cputime_speedadj;
+ int cpu_load;
+ struct cpufreq_interactive_cpuinfo *pcpu =
+ &per_cpu(cpuinfo, data);
+ unsigned int new_freq;
+ unsigned int loadadjfreq;
+ unsigned int index;
+ unsigned long flags;
+ bool boosted;
+
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return;
if (!pcpu->governor_enabled)
goto exit;
- /*
- * Once pcpu->timer_run_time is updated to >= pcpu->idle_exit_time,
- * this lets idle exit know the current idle time sample has
- * been processed, and idle exit can generate a new sample and
- * re-arm the timer. This prevents a concurrent idle
- * exit on that CPU from writing a new set of info at the same time
- * the timer function runs (the timer function can't use that info
- * until more time passes).
- */
- time_in_idle = pcpu->time_in_idle;
- idle_exit_time = pcpu->idle_exit_time;
- now_idle = get_cpu_idle_time_us(data, &pcpu->timer_run_time);
- smp_wmb();
+ spin_lock_irqsave(&pcpu->load_lock, flags);
+ now = update_load(data);
+ delta_time = (unsigned int)(now - pcpu->cputime_speedadj_timestamp);
+ cputime_speedadj = pcpu->cputime_speedadj;
+ spin_unlock_irqrestore(&pcpu->load_lock, flags);
- /* If we raced with cancelling a timer, skip. */
- if (!idle_exit_time)
- goto exit;
-
- delta_idle = (unsigned int)(now_idle - time_in_idle);
- delta_time = (unsigned int)(pcpu->timer_run_time - idle_exit_time);
-
- /*
- * If timer ran less than 1ms after short-term sample started, retry.
- */
- if (delta_time < 1000)
+ if (WARN_ON_ONCE(!delta_time))
goto rearm;
- if (delta_idle > delta_time)
- cpu_load = 0;
- else
- cpu_load = 100 * (delta_time - delta_idle) / delta_time;
+ do_div(cputime_speedadj, delta_time);
+ loadadjfreq = (unsigned int)cputime_speedadj * 100;
+ cpu_load = loadadjfreq / pcpu->target_freq;
+ boosted = boost_val || now < boostpulse_endtime;
- delta_idle = (unsigned int)(now_idle - pcpu->target_set_time_in_idle);
- delta_time = (unsigned int)(pcpu->timer_run_time -
- pcpu->target_set_time);
-
- if ((delta_time == 0) || (delta_idle > delta_time))
- load_since_change = 0;
- else
- load_since_change =
- 100 * (delta_time - delta_idle) / delta_time;
-
- /*
- * Choose greater of short-term load (since last idle timer
- * started or timer function re-armed itself) or long-term load
- * (since last frequency change).
- */
- if (load_since_change > cpu_load)
- cpu_load = load_since_change;
-
- if (cpu_load >= go_hispeed_load || boost_val) {
- if (pcpu->target_freq <= pcpu->policy->min) {
+ if (cpu_load >= go_hispeed_load || boosted) {
+ if (pcpu->target_freq < hispeed_freq) {
new_freq = hispeed_freq;
} else {
- new_freq = pcpu->policy->max * cpu_load / 100;
+ new_freq = choose_freq(pcpu, loadadjfreq);
if (new_freq < hispeed_freq)
new_freq = hispeed_freq;
-
- if (pcpu->target_freq == hispeed_freq &&
- new_freq > hispeed_freq &&
- pcpu->timer_run_time - pcpu->hispeed_validate_time
- < above_hispeed_delay_val) {
- trace_cpufreq_interactive_notyet(data, cpu_load,
- pcpu->target_freq,
- new_freq);
- goto rearm;
- }
}
} else {
- new_freq = pcpu->policy->max * cpu_load / 100;
+ new_freq = choose_freq(pcpu, loadadjfreq);
}
- if (new_freq <= hispeed_freq)
- pcpu->hispeed_validate_time = pcpu->timer_run_time;
+ if (pcpu->target_freq >= hispeed_freq &&
+ new_freq > pcpu->target_freq &&
+ now - pcpu->hispeed_validate_time < above_hispeed_delay_val) {
+ trace_cpufreq_interactive_notyet(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
+ goto rearm;
+ }
+
+ pcpu->hispeed_validate_time = now;
if (cpufreq_frequency_table_target(pcpu->policy, pcpu->freq_table,
- new_freq, CPUFREQ_RELATION_H,
+ new_freq, CPUFREQ_RELATION_L,
&index)) {
pr_warn_once("timer %d: cpufreq_frequency_table_target error\n",
(int) data);
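
target_loads is kept as a flat array of alternating load and frequency entries (an initial load, then freq:load pairs), which is why freq_to_targetload() above steps through it two slots at a time. A small standalone sketch of the lookup, with made-up example values:

#include <stdio.h>

/* Flattened "load freq:load freq:load" table, e.g. the string
 * "85 1100000:90 1400000:99" written via the target_loads attribute.
 * Even indexes hold loads, odd indexes hold the frequency at which
 * the next load takes effect. */
static unsigned int target_loads[] = { 85, 1100000, 90, 1400000, 99 };
static int ntarget_loads = sizeof(target_loads) / sizeof(target_loads[0]);

static unsigned int freq_to_targetload(unsigned int freq)
{
	int i;

	/* Walk the frequency boundaries (odd slots) until freq is below one. */
	for (i = 0; i < ntarget_loads - 1 && freq >= target_loads[i + 1]; i += 2)
		;
	return target_loads[i];
}

int main(void)
{
	/* 900 MHz -> 85, 1.2 GHz -> 90, 1.5 GHz -> 99 */
	printf("%u %u %u\n", freq_to_targetload(900000),
	       freq_to_targetload(1200000), freq_to_targetload(1500000));
	return 0;
}
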
@@ -236,41 +339,42 @@
* floor frequency for the minimum sample time since last validated.
*/
if (new_freq < pcpu->floor_freq) {
- if (pcpu->timer_run_time - pcpu->floor_validate_time
- < min_sample_time) {
- trace_cpufreq_interactive_notyet(data, cpu_load,
- pcpu->target_freq, new_freq);
+ if (now - pcpu->floor_validate_time < min_sample_time) {
+ trace_cpufreq_interactive_notyet(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
goto rearm;
}
}
- pcpu->floor_freq = new_freq;
- pcpu->floor_validate_time = pcpu->timer_run_time;
+ /*
+ * Update the timestamp for checking whether speed has been held at
+ * or above the selected frequency for a minimum of min_sample_time,
+ * if not boosted to hispeed_freq. If boosted to hispeed_freq then we
+ * allow the speed to drop as soon as the boostpulse duration expires
+ * (or the indefinite boost is turned off).
+ */
+
+ if (!boosted || new_freq > hispeed_freq) {
+ pcpu->floor_freq = new_freq;
+ pcpu->floor_validate_time = now;
+ }
if (pcpu->target_freq == new_freq) {
- trace_cpufreq_interactive_already(data, cpu_load,
- pcpu->target_freq, new_freq);
+ trace_cpufreq_interactive_already(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
goto rearm_if_notmax;
}
trace_cpufreq_interactive_target(data, cpu_load, pcpu->target_freq,
- new_freq);
- pcpu->target_set_time_in_idle = now_idle;
- pcpu->target_set_time = pcpu->timer_run_time;
+ pcpu->policy->cur, new_freq);
- if (new_freq < pcpu->target_freq) {
- pcpu->target_freq = new_freq;
- spin_lock_irqsave(&down_cpumask_lock, flags);
- cpumask_set_cpu(data, &down_cpumask);
- spin_unlock_irqrestore(&down_cpumask_lock, flags);
- queue_work(down_wq, &freq_scale_down_work);
- } else {
- pcpu->target_freq = new_freq;
- spin_lock_irqsave(&up_cpumask_lock, flags);
- cpumask_set_cpu(data, &up_cpumask);
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
- wake_up_process(up_task);
- }
+ pcpu->target_freq = new_freq;
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
+ cpumask_set_cpu(data, &speedchange_cpumask);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
+ wake_up_process(speedchange_task);
rearm_if_notmax:
/*
@@ -281,28 +385,11 @@
goto exit;
rearm:
- if (!timer_pending(&pcpu->cpu_timer)) {
- /*
- * If already at min: if that CPU is idle, don't set timer.
- * Else cancel the timer if that CPU goes idle. We don't
- * need to re-evaluate speed until the next idle exit.
- */
- if (pcpu->target_freq == pcpu->policy->min) {
- smp_rmb();
-
- if (pcpu->idling)
- goto exit;
-
- pcpu->timer_idlecancel = 1;
- }
-
- pcpu->time_in_idle = get_cpu_idle_time_us(
- data, &pcpu->idle_exit_time);
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- }
+ if (!timer_pending(&pcpu->cpu_timer))
+ cpufreq_interactive_timer_resched(pcpu);
exit:
+ up_read(&pcpu->enable_sem);
return;
}
@@ -312,15 +399,16 @@
&per_cpu(cpuinfo, smp_processor_id());
int pending;
- if (!pcpu->governor_enabled)
+ if (!down_read_trylock(&pcpu->enable_sem))
return;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return;
+ }
- pcpu->idling = 1;
- smp_wmb();
pending = timer_pending(&pcpu->cpu_timer);
if (pcpu->target_freq != pcpu->policy->min) {
-#ifdef CONFIG_SMP
/*
* Entering idle while not at lowest speed. On some
* platforms this can hold the other CPU(s) at that speed
@@ -329,33 +417,11 @@
* min indefinitely. This should probably be a quirk of
* the CPUFreq driver.
*/
- if (!pending) {
- pcpu->time_in_idle = get_cpu_idle_time_us(
- smp_processor_id(), &pcpu->idle_exit_time);
- pcpu->timer_idlecancel = 0;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- }
-#endif
- } else {
- /*
- * If at min speed and entering idle after load has
- * already been evaluated, and a timer has been set just in
- * case the CPU suddenly goes busy, cancel that timer. The
- * CPU didn't go busy; we'll recheck things upon idle exit.
- */
- if (pending && pcpu->timer_idlecancel) {
- del_timer(&pcpu->cpu_timer);
- /*
- * Ensure last timer run time is after current idle
- * sample start time, so next idle exit will always
- * start a new idle sampling period.
- */
- pcpu->idle_exit_time = 0;
- pcpu->timer_idlecancel = 0;
- }
+ if (!pending)
+ cpufreq_interactive_timer_resched(pcpu);
}
+ up_read(&pcpu->enable_sem);
}
static void cpufreq_interactive_idle_end(void)
@@ -363,34 +429,26 @@
struct cpufreq_interactive_cpuinfo *pcpu =
&per_cpu(cpuinfo, smp_processor_id());
- pcpu->idling = 0;
- smp_wmb();
-
- /*
- * Arm the timer for 1-2 ticks later if not already, and if the timer
- * function has already processed the previous load sampling
- * interval. (If the timer is not pending but has not processed
- * the previous interval, it is probably racing with us on another
- * CPU. Let it compute load based on the previous sample and then
- * re-arm the timer for another interval when it's done, rather
- * than updating the interval start time to be "now", which doesn't
- * give the timer function enough time to make a decision on this
- * run.)
- */
- if (timer_pending(&pcpu->cpu_timer) == 0 &&
- pcpu->timer_run_time >= pcpu->idle_exit_time &&
- pcpu->governor_enabled) {
- pcpu->time_in_idle =
- get_cpu_idle_time_us(smp_processor_id(),
- &pcpu->idle_exit_time);
- pcpu->timer_idlecancel = 0;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return;
}
+ /* Arm the timer for 1-2 ticks later if not already. */
+ if (!timer_pending(&pcpu->cpu_timer)) {
+ cpufreq_interactive_timer_resched(pcpu);
+ } else if (time_after_eq(jiffies, pcpu->cpu_timer.expires)) {
+ del_timer(&pcpu->cpu_timer);
+ del_timer(&pcpu->cpu_slack_timer);
+ cpufreq_interactive_timer(smp_processor_id());
+ }
+
+ up_read(&pcpu->enable_sem);
}
-static int cpufreq_interactive_up_task(void *data)
+static int cpufreq_interactive_speedchange_task(void *data)
{
unsigned int cpu;
cpumask_t tmp_mask;
@@ -399,34 +457,35 @@
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
- if (cpumask_empty(&up_cpumask)) {
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ if (cpumask_empty(&speedchange_cpumask)) {
+ spin_unlock_irqrestore(&speedchange_cpumask_lock,
+ flags);
schedule();
if (kthread_should_stop())
break;
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
}
set_current_state(TASK_RUNNING);
- tmp_mask = up_cpumask;
- cpumask_clear(&up_cpumask);
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ tmp_mask = speedchange_cpumask;
+ cpumask_clear(&speedchange_cpumask);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
for_each_cpu(cpu, &tmp_mask) {
unsigned int j;
unsigned int max_freq = 0;
pcpu = &per_cpu(cpuinfo, cpu);
- smp_rmb();
-
- if (!pcpu->governor_enabled)
+ if (!down_read_trylock(&pcpu->enable_sem))
continue;
-
- mutex_lock(&set_speed_lock);
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ continue;
+ }
for_each_cpu(j, pcpu->policy->cpus) {
struct cpufreq_interactive_cpuinfo *pjcpu =
@@ -440,57 +499,17 @@
__cpufreq_driver_target(pcpu->policy,
max_freq,
CPUFREQ_RELATION_H);
- mutex_unlock(&set_speed_lock);
- trace_cpufreq_interactive_up(cpu, pcpu->target_freq,
+ trace_cpufreq_interactive_setspeed(cpu,
+ pcpu->target_freq,
pcpu->policy->cur);
+
+ up_read(&pcpu->enable_sem);
}
}
return 0;
}
-static void cpufreq_interactive_freq_down(struct work_struct *work)
-{
- unsigned int cpu;
- cpumask_t tmp_mask;
- unsigned long flags;
- struct cpufreq_interactive_cpuinfo *pcpu;
-
- spin_lock_irqsave(&down_cpumask_lock, flags);
- tmp_mask = down_cpumask;
- cpumask_clear(&down_cpumask);
- spin_unlock_irqrestore(&down_cpumask_lock, flags);
-
- for_each_cpu(cpu, &tmp_mask) {
- unsigned int j;
- unsigned int max_freq = 0;
-
- pcpu = &per_cpu(cpuinfo, cpu);
- smp_rmb();
-
- if (!pcpu->governor_enabled)
- continue;
-
- mutex_lock(&set_speed_lock);
-
- for_each_cpu(j, pcpu->policy->cpus) {
- struct cpufreq_interactive_cpuinfo *pjcpu =
- &per_cpu(cpuinfo, j);
-
- if (pjcpu->target_freq > max_freq)
- max_freq = pjcpu->target_freq;
- }
-
- if (max_freq != pcpu->policy->cur)
- __cpufreq_driver_target(pcpu->policy, max_freq,
- CPUFREQ_RELATION_H);
-
- mutex_unlock(&set_speed_lock);
- trace_cpufreq_interactive_down(cpu, pcpu->target_freq,
- pcpu->policy->cur);
- }
-}
-
static void cpufreq_interactive_boost(void)
{
int i;
@@ -498,17 +517,16 @@
unsigned long flags;
struct cpufreq_interactive_cpuinfo *pcpu;
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
for_each_online_cpu(i) {
pcpu = &per_cpu(cpuinfo, i);
if (pcpu->target_freq < hispeed_freq) {
pcpu->target_freq = hispeed_freq;
- cpumask_set_cpu(i, &up_cpumask);
- pcpu->target_set_time_in_idle =
- get_cpu_idle_time_us(i, &pcpu->target_set_time);
- pcpu->hispeed_validate_time = pcpu->target_set_time;
+ cpumask_set_cpu(i, &speedchange_cpumask);
+ pcpu->hispeed_validate_time =
+ ktime_to_us(ktime_get());
anyboost = 1;
}
@@ -521,106 +539,126 @@
pcpu->floor_validate_time = ktime_to_us(ktime_get());
}
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
if (anyboost)
- wake_up_process(up_task);
+ wake_up_process(speedchange_task);
}
-/*
- * Pulsed boost on input event raises CPUs to hispeed_freq and lets
- * usual algorithm of min_sample_time decide when to allow speed
- * to drop.
- */
-
-static void cpufreq_interactive_input_event(struct input_handle *handle,
- unsigned int type,
- unsigned int code, int value)
+static int cpufreq_interactive_notifier(
+ struct notifier_block *nb, unsigned long val, void *data)
{
- if (input_boost_val && type == EV_SYN && code == SYN_REPORT) {
- trace_cpufreq_interactive_boost("input");
- cpufreq_interactive_boost();
+ struct cpufreq_freqs *freq = data;
+ struct cpufreq_interactive_cpuinfo *pcpu;
+ int cpu;
+ unsigned long flags;
+
+ if (val == CPUFREQ_POSTCHANGE) {
+ pcpu = &per_cpu(cpuinfo, freq->cpu);
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return 0;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return 0;
+ }
+
+ for_each_cpu(cpu, pcpu->policy->cpus) {
+ struct cpufreq_interactive_cpuinfo *pjcpu =
+ &per_cpu(cpuinfo, cpu);
+ spin_lock_irqsave(&pjcpu->load_lock, flags);
+ update_load(cpu);
+ spin_unlock_irqrestore(&pjcpu->load_lock, flags);
+ }
+
+ up_read(&pcpu->enable_sem);
}
-}
-
-static void cpufreq_interactive_input_open(struct work_struct *w)
-{
- struct cpufreq_interactive_inputopen *io =
- container_of(w, struct cpufreq_interactive_inputopen,
- inputopen_work);
- int error;
-
- error = input_open_device(io->handle);
- if (error)
- input_unregister_handle(io->handle);
-}
-
-static int cpufreq_interactive_input_connect(struct input_handler *handler,
- struct input_dev *dev,
- const struct input_device_id *id)
-{
- struct input_handle *handle;
- int error;
-
- pr_info("%s: connect to %s\n", __func__, dev->name);
- handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL);
- if (!handle)
- return -ENOMEM;
-
- handle->dev = dev;
- handle->handler = handler;
- handle->name = "cpufreq_interactive";
-
- error = input_register_handle(handle);
- if (error)
- goto err;
-
- inputopen.handle = handle;
- queue_work(down_wq, &inputopen.inputopen_work);
return 0;
-err:
- kfree(handle);
- return error;
}
-static void cpufreq_interactive_input_disconnect(struct input_handle *handle)
+static struct notifier_block cpufreq_notifier_block = {
+ .notifier_call = cpufreq_interactive_notifier,
+};
+
+static ssize_t show_target_loads(
+ struct kobject *kobj, struct attribute *attr, char *buf)
{
- input_close_device(handle);
- input_unregister_handle(handle);
- kfree(handle);
+ int i;
+ ssize_t ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+
+ for (i = 0; i < ntarget_loads; i++)
+ ret += sprintf(buf + ret, "%u%s", target_loads[i],
+ i & 0x1 ? ":" : " ");
+
+ ret += sprintf(buf + ret, "\n");
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return ret;
}
-static const struct input_device_id cpufreq_interactive_ids[] = {
- {
- .flags = INPUT_DEVICE_ID_MATCH_EVBIT |
- INPUT_DEVICE_ID_MATCH_ABSBIT,
- .evbit = { BIT_MASK(EV_ABS) },
- .absbit = { [BIT_WORD(ABS_MT_POSITION_X)] =
- BIT_MASK(ABS_MT_POSITION_X) |
- BIT_MASK(ABS_MT_POSITION_Y) },
- }, /* multi-touch touchscreen */
- {
- .flags = INPUT_DEVICE_ID_MATCH_KEYBIT |
- INPUT_DEVICE_ID_MATCH_ABSBIT,
- .keybit = { [BIT_WORD(BTN_TOUCH)] = BIT_MASK(BTN_TOUCH) },
- .absbit = { [BIT_WORD(ABS_X)] =
- BIT_MASK(ABS_X) | BIT_MASK(ABS_Y) },
- }, /* touchpad */
- { },
-};
+static ssize_t store_target_loads(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
+{
+ int ret;
+ const char *cp;
+ unsigned int *new_target_loads = NULL;
+ int ntokens = 1;
+ int i;
+ unsigned long flags;
-static struct input_handler cpufreq_interactive_input_handler = {
- .event = cpufreq_interactive_input_event,
- .connect = cpufreq_interactive_input_connect,
- .disconnect = cpufreq_interactive_input_disconnect,
- .name = "cpufreq_interactive",
- .id_table = cpufreq_interactive_ids,
-};
+ cp = buf;
+ while ((cp = strpbrk(cp + 1, " :")))
+ ntokens++;
+
+ if (!(ntokens & 0x1))
+ goto err_inval;
+
+ new_target_loads = kmalloc(ntokens * sizeof(unsigned int), GFP_KERNEL);
+ if (!new_target_loads) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ cp = buf;
+ i = 0;
+ while (i < ntokens) {
+ if (sscanf(cp, "%u", &new_target_loads[i++]) != 1)
+ goto err_inval;
+
+ cp = strpbrk(cp, " :");
+ if (!cp)
+ break;
+ cp++;
+ }
+
+ if (i != ntokens)
+ goto err_inval;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+ if (target_loads != default_target_loads)
+ kfree(target_loads);
+ target_loads = new_target_loads;
+ ntarget_loads = ntokens;
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return count;
+
+err_inval:
+ ret = -EINVAL;
+err:
+ kfree(new_target_loads);
+ return ret;
+}
+
+static struct global_attr target_loads_attr =
+ __ATTR(target_loads, S_IRUGO | S_IWUSR,
+ show_target_loads, store_target_loads);
static ssize_t show_hispeed_freq(struct kobject *kobj,
struct attribute *attr, char *buf)
{
- return sprintf(buf, "%llu\n", hispeed_freq);
+ return sprintf(buf, "%u\n", hispeed_freq);
}
static ssize_t store_hispeed_freq(struct kobject *kobj,
@@ -628,9 +666,9 @@
size_t count)
{
int ret;
- u64 val;
+ unsigned long val;
- ret = strict_strtoull(buf, 0, &val);
+ ret = strict_strtoul(buf, 0, &val);
if (ret < 0)
return ret;
hispeed_freq = val;
@@ -729,26 +767,28 @@
static struct global_attr timer_rate_attr = __ATTR(timer_rate, 0644,
show_timer_rate, store_timer_rate);
-static ssize_t show_input_boost(struct kobject *kobj, struct attribute *attr,
- char *buf)
+static ssize_t show_timer_slack(
+ struct kobject *kobj, struct attribute *attr, char *buf)
{
- return sprintf(buf, "%u\n", input_boost_val);
+ return sprintf(buf, "%d\n", timer_slack_val);
}
-static ssize_t store_input_boost(struct kobject *kobj, struct attribute *attr,
- const char *buf, size_t count)
+static ssize_t store_timer_slack(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
{
int ret;
unsigned long val;
- ret = strict_strtoul(buf, 0, &val);
+ ret = kstrtol(buf, 10, &val);
if (ret < 0)
return ret;
- input_boost_val = val;
+
+ timer_slack_val = val;
return count;
}
-define_one_global_rw(input_boost);
+define_one_global_rw(timer_slack);
static ssize_t show_boost(struct kobject *kobj, struct attribute *attr,
char *buf)
@@ -790,6 +830,7 @@
if (ret < 0)
return ret;
+ boostpulse_endtime = ktime_to_us(ktime_get()) + boostpulse_duration_val;
trace_cpufreq_interactive_boost("pulse");
cpufreq_interactive_boost();
return count;
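
boostpulse now records an end time rather than relying only on the min_sample_time floor: the sampling timer treats the CPU as boosted while the current time is still before boostpulse_endtime, or while the indefinite boost flag is set. A compressed sketch of that gating (userspace-style types, illustrative only):

#include <stdbool.h>
#include <stdint.h>

static int boost_val;                    /* indefinite boost on/off */
static uint64_t boostpulse_endtime;      /* end of last pulse, usecs */
static int boostpulse_duration_val = 80000;  /* default mirrors min_sample_time (80 ms) */

/* Called when userspace writes the boostpulse attribute. */
static void boostpulse(uint64_t now_us)
{
	boostpulse_endtime = now_us + boostpulse_duration_val;
}

/* Evaluated in the sampling timer. */
static bool boosted(uint64_t now_us)
{
	return boost_val || now_us < boostpulse_endtime;
}
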
@@ -798,15 +839,40 @@
static struct global_attr boostpulse =
__ATTR(boostpulse, 0200, NULL, store_boostpulse);
+static ssize_t show_boostpulse_duration(
+ struct kobject *kobj, struct attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n", boostpulse_duration_val);
+}
+
+static ssize_t store_boostpulse_duration(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
+{
+ int ret;
+ unsigned long val;
+
+ ret = kstrtoul(buf, 0, &val);
+ if (ret < 0)
+ return ret;
+
+ boostpulse_duration_val = val;
+ return count;
+}
+
+define_one_global_rw(boostpulse_duration);
+
static struct attribute *interactive_attributes[] = {
+ &target_loads_attr.attr,
&hispeed_freq_attr.attr,
&go_hispeed_load_attr.attr,
&above_hispeed_delay.attr,
&min_sample_time_attr.attr,
&timer_rate_attr.attr,
- &input_boost.attr,
+ &timer_slack.attr,
&boost.attr,
&boostpulse.attr,
+ &boostpulse_duration.attr,
NULL,
};
@@ -815,102 +881,6 @@
.name = "interactive",
};
-static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
- unsigned int event)
-{
- int rc;
- unsigned int j;
- struct cpufreq_interactive_cpuinfo *pcpu;
- struct cpufreq_frequency_table *freq_table;
-
- switch (event) {
- case CPUFREQ_GOV_START:
- if (!cpu_online(policy->cpu))
- return -EINVAL;
-
- freq_table =
- cpufreq_frequency_get_table(policy->cpu);
-
- for_each_cpu(j, policy->cpus) {
- pcpu = &per_cpu(cpuinfo, j);
- pcpu->policy = policy;
- pcpu->target_freq = policy->cur;
- pcpu->freq_table = freq_table;
- pcpu->target_set_time_in_idle =
- get_cpu_idle_time_us(j,
- &pcpu->target_set_time);
- pcpu->floor_freq = pcpu->target_freq;
- pcpu->floor_validate_time =
- pcpu->target_set_time;
- pcpu->hispeed_validate_time =
- pcpu->target_set_time;
- pcpu->governor_enabled = 1;
- pcpu->idle_exit_time = pcpu->target_set_time;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- smp_wmb();
- }
-
- if (!hispeed_freq)
- hispeed_freq = policy->max;
-
- /*
- * Do not register the idle hook and create sysfs
- * entries if we have already done so.
- */
- if (atomic_inc_return(&active_count) > 1)
- return 0;
-
- rc = sysfs_create_group(cpufreq_global_kobject,
- &interactive_attr_group);
- if (rc)
- return rc;
-
- rc = input_register_handler(&cpufreq_interactive_input_handler);
- if (rc)
- pr_warn("%s: failed to register input handler\n",
- __func__);
-
- break;
-
- case CPUFREQ_GOV_STOP:
- for_each_cpu(j, policy->cpus) {
- pcpu = &per_cpu(cpuinfo, j);
- pcpu->governor_enabled = 0;
- smp_wmb();
- del_timer_sync(&pcpu->cpu_timer);
-
- /*
- * Reset idle exit time since we may cancel the timer
- * before it can run after the last idle exit time,
- * to avoid tripping the check in idle exit for a timer
- * that is trying to run.
- */
- pcpu->idle_exit_time = 0;
- }
-
- flush_work(&freq_scale_down_work);
- if (atomic_dec_return(&active_count) > 0)
- return 0;
-
- input_unregister_handler(&cpufreq_interactive_input_handler);
- sysfs_remove_group(cpufreq_global_kobject,
- &interactive_attr_group);
-
- break;
-
- case CPUFREQ_GOV_LIMITS:
- if (policy->max < policy->cur)
- __cpufreq_driver_target(policy,
- policy->max, CPUFREQ_RELATION_H);
- else if (policy->min > policy->cur)
- __cpufreq_driver_target(policy,
- policy->min, CPUFREQ_RELATION_L);
- break;
- }
- return 0;
-}
-
static int cpufreq_interactive_idle_notifier(struct notifier_block *nb,
unsigned long val,
void *data)
@@ -931,57 +901,148 @@
.notifier_call = cpufreq_interactive_idle_notifier,
};
+static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
+ unsigned int event)
+{
+ int rc;
+ unsigned int j;
+ struct cpufreq_interactive_cpuinfo *pcpu;
+ struct cpufreq_frequency_table *freq_table;
+
+ switch (event) {
+ case CPUFREQ_GOV_START:
+ if (!cpu_online(policy->cpu))
+ return -EINVAL;
+
+ mutex_lock(&gov_lock);
+
+ freq_table =
+ cpufreq_frequency_get_table(policy->cpu);
+ if (!hispeed_freq)
+ hispeed_freq = policy->max;
+
+ for_each_cpu(j, policy->cpus) {
+ unsigned long expires;
+
+ pcpu = &per_cpu(cpuinfo, j);
+ pcpu->policy = policy;
+ pcpu->target_freq = policy->cur;
+ pcpu->freq_table = freq_table;
+ pcpu->floor_freq = pcpu->target_freq;
+ pcpu->floor_validate_time =
+ ktime_to_us(ktime_get());
+ pcpu->hispeed_validate_time =
+ pcpu->floor_validate_time;
+ down_write(&pcpu->enable_sem);
+ expires = jiffies + usecs_to_jiffies(timer_rate);
+ pcpu->cpu_timer.expires = expires;
+ add_timer_on(&pcpu->cpu_timer, j);
+ if (timer_slack_val >= 0) {
+ expires += usecs_to_jiffies(timer_slack_val);
+ pcpu->cpu_slack_timer.expires = expires;
+ add_timer_on(&pcpu->cpu_slack_timer, j);
+ }
+ pcpu->governor_enabled = 1;
+ up_write(&pcpu->enable_sem);
+ }
+
+ /*
+ * Do not register the idle hook and create sysfs
+ * entries if we have already done so.
+ */
+ if (++active_count > 1) {
+ mutex_unlock(&gov_lock);
+ return 0;
+ }
+
+ rc = sysfs_create_group(cpufreq_global_kobject,
+ &interactive_attr_group);
+ if (rc) {
+ mutex_unlock(&gov_lock);
+ return rc;
+ }
+
+ idle_notifier_register(&cpufreq_interactive_idle_nb);
+ cpufreq_register_notifier(
+ &cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ mutex_unlock(&gov_lock);
+ break;
+
+ case CPUFREQ_GOV_STOP:
+ mutex_lock(&gov_lock);
+ for_each_cpu(j, policy->cpus) {
+ pcpu = &per_cpu(cpuinfo, j);
+ down_write(&pcpu->enable_sem);
+ pcpu->governor_enabled = 0;
+ del_timer_sync(&pcpu->cpu_timer);
+ del_timer_sync(&pcpu->cpu_slack_timer);
+ up_write(&pcpu->enable_sem);
+ }
+
+ if (--active_count > 0) {
+ mutex_unlock(&gov_lock);
+ return 0;
+ }
+
+ cpufreq_unregister_notifier(
+ &cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ idle_notifier_unregister(&cpufreq_interactive_idle_nb);
+ sysfs_remove_group(cpufreq_global_kobject,
+ &interactive_attr_group);
+ mutex_unlock(&gov_lock);
+
+ break;
+
+ case CPUFREQ_GOV_LIMITS:
+ if (policy->max < policy->cur)
+ __cpufreq_driver_target(policy,
+ policy->max, CPUFREQ_RELATION_H);
+ else if (policy->min > policy->cur)
+ __cpufreq_driver_target(policy,
+ policy->min, CPUFREQ_RELATION_L);
+ break;
+ }
+ return 0;
+}
+
+static void cpufreq_interactive_nop_timer(unsigned long data)
+{
+}
+
static int __init cpufreq_interactive_init(void)
{
unsigned int i;
struct cpufreq_interactive_cpuinfo *pcpu;
struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
- go_hispeed_load = DEFAULT_GO_HISPEED_LOAD;
- min_sample_time = DEFAULT_MIN_SAMPLE_TIME;
- above_hispeed_delay_val = DEFAULT_ABOVE_HISPEED_DELAY;
- timer_rate = DEFAULT_TIMER_RATE;
-
/* Initialize per-cpu timers */
for_each_possible_cpu(i) {
pcpu = &per_cpu(cpuinfo, i);
- init_timer(&pcpu->cpu_timer);
+ init_timer_deferrable(&pcpu->cpu_timer);
pcpu->cpu_timer.function = cpufreq_interactive_timer;
pcpu->cpu_timer.data = i;
+ init_timer(&pcpu->cpu_slack_timer);
+ pcpu->cpu_slack_timer.function = cpufreq_interactive_nop_timer;
+ spin_lock_init(&pcpu->load_lock);
+ init_rwsem(&pcpu->enable_sem);
}
- up_task = kthread_create(cpufreq_interactive_up_task, NULL,
- "kinteractiveup");
- if (IS_ERR(up_task))
- return PTR_ERR(up_task);
+ spin_lock_init(&target_loads_lock);
+ spin_lock_init(&speedchange_cpumask_lock);
+ mutex_init(&gov_lock);
+ speedchange_task =
+ kthread_create(cpufreq_interactive_speedchange_task, NULL,
+ "cfinteractive");
+ if (IS_ERR(speedchange_task))
+ return PTR_ERR(speedchange_task);
- sched_setscheduler_nocheck(up_task, SCHED_FIFO, &param);
- get_task_struct(up_task);
+ sched_setscheduler_nocheck(speedchange_task, SCHED_FIFO, &param);
+ get_task_struct(speedchange_task);
- /* No rescuer thread, bind to CPU queuing the work for possibly
- warm cache (probably doesn't matter much). */
- down_wq = alloc_workqueue("knteractive_down", 0, 1);
+ /* NB: wake up so the thread does not look hung to the freezer */
+ wake_up_process(speedchange_task);
- if (!down_wq)
- goto err_freeuptask;
-
- INIT_WORK(&freq_scale_down_work,
- cpufreq_interactive_freq_down);
-
- spin_lock_init(&up_cpumask_lock);
- spin_lock_init(&down_cpumask_lock);
- mutex_init(&set_speed_lock);
-
- /* Kick the kthread to idle */
- wake_up_process(up_task);
-
- idle_notifier_register(&cpufreq_interactive_idle_nb);
- INIT_WORK(&inputopen.inputopen_work, cpufreq_interactive_input_open);
return cpufreq_register_governor(&cpufreq_gov_interactive);
-
-err_freeuptask:
- put_task_struct(up_task);
- return -ENOMEM;
}
#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE
@@ -993,9 +1054,8 @@
static void __exit cpufreq_interactive_exit(void)
{
cpufreq_unregister_governor(&cpufreq_gov_interactive);
- kthread_stop(up_task);
- put_task_struct(up_task);
- destroy_workqueue(down_wq);
+ kthread_stop(speedchange_task);
+ put_task_struct(speedchange_task);
}
module_exit(cpufreq_interactive_exit);
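
Across the rewritten governor, the per-CPU enable_sem rw-semaphore replaces the old governor_enabled flag plus memory barriers: the timer, idle and frequency-notifier paths take it for reading with down_read_trylock() and bail out if the governor is being disabled, while GOV_START/GOV_STOP take it for writing around the flag and timer setup/teardown. A reduced sketch of the guard, with illustrative names:

#include <linux/rwsem.h>
#include <linux/timer.h>

struct gov_cpu {
	struct rw_semaphore enable_sem;
	struct timer_list cpu_timer;
	int governor_enabled;
};

/* Hot path (timer/idle/notifier): never blocks, and skips the work if
 * the governor is being torn down on this CPU. */
static void gov_sample(struct gov_cpu *c)
{
	if (!down_read_trylock(&c->enable_sem))
		return;
	if (c->governor_enabled) {
		/* ... evaluate load and pick a frequency ... */
	}
	up_read(&c->enable_sem);
}

/* GOV_STOP path: flips the flag and kills the timer under the write
 * lock, so no new sampling can start once this returns. */
static void gov_disable(struct gov_cpu *c)
{
	down_write(&c->enable_sem);
	c->governor_enabled = 0;
	del_timer_sync(&c->cpu_timer);
	up_write(&c->enable_sem);
}
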
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index c758b3a..99ace44 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -306,7 +306,7 @@
config CRYPTO_DEV_QCE
tristate "Qualcomm Crypto Engine (QCE) module"
select CRYPTO_DEV_QCE40 if ARCH_MSM8960 || ARCH_MSM9615
- select CRYPTO_DEV_QCE50 if ARCH_MSM8974 || ARCH_MSM9625
+ select CRYPTO_DEV_QCE50 if ARCH_MSM8974 || ARCH_MSM9625 || ARCH_MSM8226
default n
help
This driver supports Qualcomm Crypto Engine in MSM7x30, MSM8660
diff --git a/drivers/crypto/msm/qce.c b/drivers/crypto/msm/qce.c
index 24cf30a..7778477 100644
--- a/drivers/crypto/msm/qce.c
+++ b/drivers/crypto/msm/qce.c
@@ -2203,6 +2203,18 @@
}
EXPORT_SYMBOL(qce_process_sha_req);
+int qce_enable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_enable_clk);
+
+int qce_disable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_disable_clk);
+
/*
* crypto engine open function.
*/
diff --git a/drivers/crypto/msm/qce.h b/drivers/crypto/msm/qce.h
index 3ff84cf..51a74b6 100644
--- a/drivers/crypto/msm/qce.h
+++ b/drivers/crypto/msm/qce.h
@@ -160,5 +160,7 @@
int qce_ablk_cipher_req(void *handle, struct qce_req *req);
int qce_hw_support(void *handle, struct ce_hw_support *support);
int qce_process_sha_req(void *handle, struct qce_sha_req *s_req);
+int qce_enable_clk(void *handle);
+int qce_disable_clk(void *handle);
#endif /* __CRYPTO_MSM_QCE_H */
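
qce_enable_clk() and qce_disable_clk() are added to qce.c and qce40.c as stubs that simply return 0, presumably so that front-end callers can bracket engine use with the same clock calls on every qce back end, including the ones whose clocks stay on. A hedged sketch of the expected caller pattern (the request function and its body are illustrative, not from this patch):

/* From qce.h: both return 0 on success. */
int qce_enable_clk(void *handle);
int qce_disable_clk(void *handle);

/* Illustrative caller: keep the engine clocked only for the duration
 * of one request. */
static int example_submit_request(void *qce_handle)
{
	int rc = qce_enable_clk(qce_handle);

	if (rc)
		return rc;
	/* ... build and issue the crypto request here ... */
	qce_disable_clk(qce_handle);
	return 0;
}
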
diff --git a/drivers/crypto/msm/qce40.c b/drivers/crypto/msm/qce40.c
index 7b0964d..5249917 100644
--- a/drivers/crypto/msm/qce40.c
+++ b/drivers/crypto/msm/qce40.c
@@ -2426,6 +2426,18 @@
}
EXPORT_SYMBOL(qce_process_sha_req);
+int qce_enable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_enable_clk);
+
+int qce_disable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_disable_clk);
+
/* crypto engine open function. */
void *qce_open(struct platform_device *pdev, int *rc)
{
diff --git a/drivers/crypto/msm/qce50.c b/drivers/crypto/msm/qce50.c
index 739a753..58fa5c9 100644
--- a/drivers/crypto/msm/qce50.c
+++ b/drivers/crypto/msm/qce50.c
@@ -39,7 +39,7 @@
#include "qcryptohw_50.h"
#define CRYPTO_CONFIG_RESET 0xE001F
-#define QCE_MAX_NUM_DSCR 0x400
+#define QCE_MAX_NUM_DSCR 0x500
#define QCE_SECTOR_SIZE 0x200
static DEFINE_MUTEX(bam_register_cnt);
@@ -62,6 +62,7 @@
dma_addr_t coh_pmem; /* Allocated coherent physical memory */
int memsize; /* Memory allocated */
int is_shared; /* CE HW is shared */
+ bool support_cmd_dscr;
void __iomem *iobase; /* Virtual io base of CE HW */
unsigned int phy_iobase; /* Physical io base of CE HW */
@@ -84,6 +85,7 @@
int dir;
void *areq;
enum qce_cipher_mode_enum mode;
+ struct qce_ce_cfg_reg_setting reg;
struct ce_sps_data ce_sps;
};
@@ -494,25 +496,17 @@
for (i = 0; i < noncelen32; i++, pce++)
pce->data = nonce32[i];
- /* TBD NEW FEATURE partial AES CCM pkt support set last bit */
- auth_cfg |= ((1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST));
+ if (creq->authklen == AES128_KEY_SIZE)
+ auth_cfg = pce_dev->reg.auth_cfg_aes_ccm_128;
+ else {
+ if (creq->authklen == AES256_KEY_SIZE)
+ auth_cfg = pce_dev->reg.auth_cfg_aes_ccm_256;
+ }
if (creq->dir == QCE_ENCRYPT)
auth_cfg |= (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
else
auth_cfg |= (CRYPTO_AUTH_POS_AFTER << CRYPTO_AUTH_POS);
auth_cfg |= ((creq->authsize - 1) << CRYPTO_AUTH_SIZE);
- auth_cfg |= (CRYPTO_AUTH_MODE_CCM << CRYPTO_AUTH_MODE);
- if (creq->authklen == AES128_KEY_SIZE)
- auth_cfg |= (CRYPTO_AUTH_KEY_SZ_AES128 <<
- CRYPTO_AUTH_KEY_SIZE);
- else {
- if (creq->authklen == AES256_KEY_SIZE)
- auth_cfg |= (CRYPTO_AUTH_KEY_SZ_AES256 <<
- CRYPTO_AUTH_KEY_SIZE);
- }
- auth_cfg |= (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG);
- auth_cfg |= ((MAX_NONCE/sizeof(uint32_t)) <<
- CRYPTO_AUTH_NONCE_NUM_WORDS);
if (use_hw_key == true) {
auth_cfg |= (1 << CRYPTO_USE_HW_KEY_AUTH);
@@ -542,21 +536,37 @@
}
switch (creq->mode) {
case QCE_MODE_ECB:
- encr_cfg |= (CRYPTO_ENCR_MODE_ECB << CRYPTO_ENCR_MODE);
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ecb_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ecb_256;
break;
case QCE_MODE_CBC:
- encr_cfg |= (CRYPTO_ENCR_MODE_CBC << CRYPTO_ENCR_MODE);
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_cbc_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_cbc_256;
break;
case QCE_MODE_XTS:
- encr_cfg |= (CRYPTO_ENCR_MODE_XTS << CRYPTO_ENCR_MODE);
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_xts_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_xts_256;
break;
case QCE_MODE_CCM:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ccm_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ccm_256;
encr_cfg |= (CRYPTO_ENCR_MODE_CCM << CRYPTO_ENCR_MODE) |
(CRYPTO_LAST_CCM_XFR << CRYPTO_LAST_CCM);
break;
case QCE_MODE_CTR:
default:
- encr_cfg |= (CRYPTO_ENCR_MODE_CTR << CRYPTO_ENCR_MODE);
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ctr_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ctr_256;
break;
}
pce_dev->mode = creq->mode;
@@ -600,13 +610,15 @@
uint32_t xtsklen =
creq->encklen/(2 * sizeof(uint32_t));
- _byte_stream_to_net_words(xtskey32, (creq->enckey +
- creq->encklen/2), creq->encklen/2);
- /* write xts encr key */
- pce = cmdlistinfo->encr_xts_key;
- for (i = 0; i < xtsklen; i++, pce++)
- pce->data = xtskey32[i];
-
+ if ((use_hw_key == false) && (use_pipe_key == false)) {
+ _byte_stream_to_net_words(xtskey32,
+ (creq->enckey + creq->encklen/2),
+ creq->encklen/2);
+ /* write xts encr key */
+ pce = cmdlistinfo->encr_xts_key;
+ for (i = 0; i < xtsklen; i++, pce++)
+ pce->data = xtskey32[i];
+ }
/* write xts du size */
pce = cmdlistinfo->encr_xts_du_size;
if (use_pipe_key == true)
@@ -642,26 +654,13 @@
if (creq->op == QCE_REQ_ABLK_CIPHER_NO_KEY) {
encr_cfg |= (CRYPTO_ENCR_KEY_SZ_AES128 <<
CRYPTO_ENCR_KEY_SZ);
- encr_cfg |= CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG;
} else {
if (use_hw_key == false) {
/* write encr key */
pce = cmdlistinfo->encr_key;
for (i = 0; i < enck_size_in_word; i++, pce++)
pce->data = enckey32[i];
- switch (key_size) {
- case AES128_KEY_SIZE:
- encr_cfg |= (CRYPTO_ENCR_KEY_SZ_AES128
- << CRYPTO_ENCR_KEY_SZ);
- break;
- case AES256_KEY_SIZE:
- default:
- encr_cfg |= (CRYPTO_ENCR_KEY_SZ_AES256
- << CRYPTO_ENCR_KEY_SZ);
- break;
- } /* end of switch (creq->encklen) */
}
- encr_cfg |= CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG;
} /* else of if (creq->op == QCE_REQ_ABLK_CIPHER_NO_KEY) */
break;
} /* end of switch (creq->mode) */
@@ -706,10 +705,499 @@
return 0;
};
+static int _ce_setup_hash_direct(struct qce_device *pce_dev,
+ struct qce_sha_req *sreq)
+{
+ uint32_t auth32[SHA256_DIGEST_SIZE / sizeof(uint32_t)];
+ uint32_t diglen;
+ bool use_hw_key = false;
+ int i;
+ uint32_t mackey32[SHA_HMAC_KEY_SIZE/sizeof(uint32_t)] = {
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
+ bool sha1 = false;
+ uint32_t auth_cfg = 0;
+
+ writel_relaxed(pce_dev->reg.crypto_cfg_be, (pce_dev->iobase +
+ CRYPTO_CONFIG_REG));
+ /*
+ * Ensure previous instructions (setting the CONFIG register)
+ * have completed before starting to set the other config registers.
+ * This is to ensure the configurations are done in correct endian-ness
+ * as set in the CONFIG registers
+ */
+ mb();
+
+ if (sreq->alg == QCE_HASH_AES_CMAC) {
+ /* write seg_cfg */
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_AUTH_SEG_CFG_REG);
+ /* write seg_cfg */
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_ENCR_SEG_CFG_REG);
+ /* write seg_cfg */
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_ENCR_SEG_SIZE_REG);
+
+ /* Clear auth_ivn, auth_keyn registers */
+ for (i = 0; i < 16; i++) {
+ writel_relaxed(0, (pce_dev->iobase +
+ (CRYPTO_AUTH_IV0_REG + i*sizeof(uint32_t))));
+ writel_relaxed(0, (pce_dev->iobase +
+ (CRYPTO_AUTH_KEY0_REG + i*sizeof(uint32_t))));
+ }
+ /* write auth_bytecnt 0/1/2/3, start with 0 */
+ for (i = 0; i < 4; i++)
+ writel_relaxed(0, pce_dev->iobase +
+ CRYPTO_AUTH_BYTECNT0_REG +
+ i * sizeof(uint32_t));
+
+ if (sreq->authklen == AES128_KEY_SIZE)
+ auth_cfg = pce_dev->reg.auth_cfg_cmac_128;
+ else
+ auth_cfg = pce_dev->reg.auth_cfg_cmac_256;
+ }
+
+ if ((sreq->alg == QCE_HASH_SHA1_HMAC) ||
+ (sreq->alg == QCE_HASH_SHA256_HMAC) ||
+ (sreq->alg == QCE_HASH_AES_CMAC)) {
+ uint32_t authk_size_in_word = sreq->authklen/sizeof(uint32_t);
+
+ _byte_stream_to_net_words(mackey32, sreq->authkey,
+ sreq->authklen);
+
+ /* check for null key. If null, use hw key */
+ for (i = 0; i < authk_size_in_word; i++) {
+ if (mackey32[i] != 0)
+ break;
+ }
+
+ if (i == authk_size_in_word)
+ use_hw_key = true;
+ else
+ /* write auth key */
+ for (i = 0; i < authk_size_in_word; i++)
+ writel_relaxed(mackey32[i], (pce_dev->iobase +
+ (CRYPTO_AUTH_KEY0_REG +
+ i*sizeof(uint32_t))));
+ }
+
+ if (sreq->alg == QCE_HASH_AES_CMAC)
+ goto go_proc;
+
+ /* if not the last, the size has to be on the block boundary */
+ if (sreq->last_blk == 0 && (sreq->size % SHA256_BLOCK_SIZE))
+ return -EIO;
+
+ switch (sreq->alg) {
+ case QCE_HASH_SHA1:
+ auth_cfg = pce_dev->reg.auth_cfg_sha1;
+ diglen = SHA1_DIGEST_SIZE;
+ sha1 = true;
+ break;
+ case QCE_HASH_SHA1_HMAC:
+ auth_cfg = pce_dev->reg.auth_cfg_hmac_sha1;
+ diglen = SHA1_DIGEST_SIZE;
+ sha1 = true;
+ break;
+ case QCE_HASH_SHA256:
+ auth_cfg = pce_dev->reg.auth_cfg_sha256;
+ diglen = SHA256_DIGEST_SIZE;
+ break;
+ case QCE_HASH_SHA256_HMAC:
+ auth_cfg = pce_dev->reg.auth_cfg_hmac_sha256;
+ diglen = SHA256_DIGEST_SIZE;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /* write 20/32 bytes, 5/8 words into auth_iv for SHA1/SHA256 */
+ if (sreq->first_blk) {
+ if (sha1) {
+ for (i = 0; i < 5; i++)
+ auth32[i] = _std_init_vector_sha1[i];
+ } else {
+ for (i = 0; i < 8; i++)
+ auth32[i] = _std_init_vector_sha256[i];
+ }
+ } else {
+ _byte_stream_to_net_words(auth32, sreq->digest, diglen);
+ }
+
+ /* Set auth_ivn, auth_keyn registers */
+ for (i = 0; i < 5; i++)
+ writel_relaxed(auth32[i], (pce_dev->iobase +
+ (CRYPTO_AUTH_IV0_REG + i*sizeof(uint32_t))));
+
+ if ((sreq->alg == QCE_HASH_SHA256) ||
+ (sreq->alg == QCE_HASH_SHA256_HMAC)) {
+ for (i = 5; i < 8; i++)
+ writel_relaxed(auth32[i], (pce_dev->iobase +
+ (CRYPTO_AUTH_IV0_REG + i*sizeof(uint32_t))));
+ }
+
+
+ /* write auth_bytecnt 0/1/2/3, start with 0 */
+ for (i = 0; i < 2; i++)
+ writel_relaxed(sreq->auth_data[i], pce_dev->iobase +
+ CRYPTO_AUTH_BYTECNT0_REG +
+ i * sizeof(uint32_t));
+
+ /* Set/reset last bit in CFG register */
+ if (sreq->last_blk)
+ auth_cfg |= 1 << CRYPTO_LAST;
+ else
+ auth_cfg &= ~(1 << CRYPTO_LAST);
+ if (sreq->first_blk)
+ auth_cfg |= 1 << CRYPTO_FIRST;
+ else
+ auth_cfg &= ~(1 << CRYPTO_FIRST);
+go_proc:
+ /* write seg_cfg */
+ writel_relaxed(auth_cfg, pce_dev->iobase + CRYPTO_AUTH_SEG_CFG_REG);
+ /* write auth seg_size */
+ writel_relaxed(sreq->size, pce_dev->iobase + CRYPTO_AUTH_SEG_SIZE_REG);
+
+ /* write auth_seg_start */
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_AUTH_SEG_START_REG);
+
+ /* reset encr seg_cfg */
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_ENCR_SEG_CFG_REG);
+
+ /* write seg_size */
+ writel_relaxed(sreq->size, pce_dev->iobase + CRYPTO_SEG_SIZE_REG);
+
+ writel_relaxed(pce_dev->reg.crypto_cfg_le, (pce_dev->iobase +
+ CRYPTO_CONFIG_REG));
+ /* issue go to crypto */
+ if (use_hw_key == false)
+ writel_relaxed(((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
+ pce_dev->iobase + CRYPTO_GOPROC_REG);
+ else
+ writel_relaxed(((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
+ pce_dev->iobase + CRYPTO_GOPROC_QC_KEY_REG);
+ /*
+ * Ensure previous instructions (setting the GO register)
+ * were completed before issuing a DMA transfer request
+ */
+ mb();
+ return 0;
+}
+
+static int _ce_setup_cipher_direct(struct qce_device *pce_dev,
+ struct qce_req *creq, uint32_t totallen_in, uint32_t coffset)
+{
+ uint32_t enckey32[(MAX_CIPHER_KEY_SIZE * 2)/sizeof(uint32_t)] = {
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
+ uint32_t enciv32[MAX_IV_LENGTH / sizeof(uint32_t)] = {
+ 0, 0, 0, 0};
+ uint32_t enck_size_in_word = 0;
+ uint32_t key_size;
+ bool use_hw_key = false;
+ bool use_pipe_key = false;
+ uint32_t encr_cfg = 0;
+ uint32_t ivsize = creq->ivsize;
+ int i;
+
+ writel_relaxed(pce_dev->reg.crypto_cfg_be, (pce_dev->iobase +
+ CRYPTO_CONFIG_REG));
+ /*
+ * Ensure previous instructions (setting the CONFIG register)
+ * were completed before starting to set the other config registers.
+ * This ensures the configuration is done with the endianness
+ * selected in the CONFIG register.
+ */
+ mb();
+
+ if (creq->mode == QCE_MODE_XTS)
+ key_size = creq->encklen/2;
+ else
+ key_size = creq->encklen;
+
+ _byte_stream_to_net_words(enckey32, creq->enckey, key_size);
+
+ /* check for null key. If null, use hw key */
+ enck_size_in_word = key_size/sizeof(uint32_t);
+ for (i = 0; i < enck_size_in_word; i++) {
+ if (enckey32[i] != 0)
+ break;
+ }
+ if (i == enck_size_in_word)
+ use_hw_key = true;
+
+ if (use_hw_key == false) {
+ for (i = 0; i < enck_size_in_word; i++) {
+ if (enckey32[i] != 0xFFFFFFFF)
+ break;
+ }
+ if (i == enck_size_in_word)
+ use_pipe_key = true;
+ }
+
+ if ((creq->op == QCE_REQ_AEAD) && (creq->mode == QCE_MODE_CCM)) {
+ uint32_t authklen32 = creq->encklen/sizeof(uint32_t);
+ uint32_t noncelen32 = MAX_NONCE/sizeof(uint32_t);
+ uint32_t nonce32[MAX_NONCE/sizeof(uint32_t)] = {0, 0, 0, 0};
+ uint32_t auth_cfg = 0;
+
+ /* Clear auth_ivn, auth_keyn registers */
+ for (i = 0; i < 16; i++) {
+ writel_relaxed(0, (pce_dev->iobase +
+ (CRYPTO_AUTH_IV0_REG + i*sizeof(uint32_t))));
+ writel_relaxed(0, (pce_dev->iobase +
+ (CRYPTO_AUTH_KEY0_REG + i*sizeof(uint32_t))));
+ }
+ /* write auth_bytecnt 0/1/2/3, start with 0 */
+ for (i = 0; i < 4; i++)
+ writel_relaxed(0, pce_dev->iobase +
+ CRYPTO_AUTH_BYTECNT0_REG +
+ i * sizeof(uint32_t));
+ /* write nonce */
+ _byte_stream_to_net_words(nonce32, creq->nonce, MAX_NONCE);
+ for (i = 0; i < noncelen32; i++)
+ writel_relaxed(nonce32[i], pce_dev->iobase +
+ CRYPTO_AUTH_INFO_NONCE0_REG +
+ (i*sizeof(uint32_t)));
+
+ if (creq->authklen == AES128_KEY_SIZE)
+ auth_cfg = pce_dev->reg.auth_cfg_aes_ccm_128;
+ else {
+ if (creq->authklen == AES256_KEY_SIZE)
+ auth_cfg = pce_dev->reg.auth_cfg_aes_ccm_256;
+ }
+ if (creq->dir == QCE_ENCRYPT)
+ auth_cfg |= (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ else
+ auth_cfg |= (CRYPTO_AUTH_POS_AFTER << CRYPTO_AUTH_POS);
+ auth_cfg |= ((creq->authsize - 1) << CRYPTO_AUTH_SIZE);
+
+ if (use_hw_key == true) {
+ auth_cfg |= (1 << CRYPTO_USE_HW_KEY_AUTH);
+ } else {
+ auth_cfg &= ~(1 << CRYPTO_USE_HW_KEY_AUTH);
+ /* write auth key */
+ for (i = 0; i < authklen32; i++)
+ writel_relaxed(enckey32[i], pce_dev->iobase +
+ CRYPTO_AUTH_KEY0_REG + (i*sizeof(uint32_t)));
+ }
+ writel_relaxed(auth_cfg, pce_dev->iobase +
+ CRYPTO_AUTH_SEG_CFG_REG);
+ if (creq->dir == QCE_ENCRYPT)
+ writel_relaxed(totallen_in, pce_dev->iobase +
+ CRYPTO_AUTH_SEG_SIZE_REG);
+ else
+ writel_relaxed((totallen_in - creq->authsize),
+ pce_dev->iobase + CRYPTO_AUTH_SEG_SIZE_REG);
+ writel_relaxed(0, pce_dev->iobase + CRYPTO_AUTH_SEG_START_REG);
+ } else {
+ if (creq->op != QCE_REQ_AEAD)
+ writel_relaxed(0, pce_dev->iobase +
+ CRYPTO_AUTH_SEG_CFG_REG);
+ }
+ /*
+ * Ensure previous instructions (write to all AUTH registers)
+ * were completed before accessing a register that is not
+ * in the same 1K range.
+ */
+ mb();
+ switch (creq->mode) {
+ case QCE_MODE_ECB:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ecb_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ecb_256;
+ break;
+ case QCE_MODE_CBC:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_cbc_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_cbc_256;
+ break;
+ case QCE_MODE_XTS:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_xts_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_xts_256;
+ break;
+ case QCE_MODE_CCM:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ccm_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ccm_256;
+ break;
+ case QCE_MODE_CTR:
+ default:
+ if (key_size == AES128_KEY_SIZE)
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ctr_128;
+ else
+ encr_cfg = pce_dev->reg.encr_cfg_aes_ctr_256;
+ break;
+ }
+ pce_dev->mode = creq->mode;
+
+ switch (creq->alg) {
+ case CIPHER_ALG_DES:
+ if (creq->mode != QCE_MODE_ECB) {
+ encr_cfg = pce_dev->reg.encr_cfg_des_cbc;
+ _byte_stream_to_net_words(enciv32, creq->iv, ivsize);
+ writel_relaxed(enciv32[0], pce_dev->iobase +
+ CRYPTO_CNTR0_IV0_REG);
+ writel_relaxed(enciv32[1], pce_dev->iobase +
+ CRYPTO_CNTR1_IV1_REG);
+ } else {
+ encr_cfg = pce_dev->reg.encr_cfg_des_ecb;
+ }
+ if (use_hw_key == false) {
+ writel_relaxed(enckey32[0], pce_dev->iobase +
+ CRYPTO_ENCR_KEY0_REG);
+ writel_relaxed(enckey32[1], pce_dev->iobase +
+ CRYPTO_ENCR_KEY1_REG);
+ }
+ break;
+ case CIPHER_ALG_3DES:
+ if (creq->mode != QCE_MODE_ECB) {
+ _byte_stream_to_net_words(enciv32, creq->iv, ivsize);
+ writel_relaxed(enciv32[0], pce_dev->iobase +
+ CRYPTO_CNTR0_IV0_REG);
+ writel_relaxed(enciv32[1], pce_dev->iobase +
+ CRYPTO_CNTR1_IV1_REG);
+ encr_cfg = pce_dev->reg.encr_cfg_3des_cbc;
+ } else {
+ encr_cfg = pce_dev->reg.encr_cfg_3des_ecb;
+ }
+ if (use_hw_key == false) {
+ /* write encr key */
+ for (i = 0; i < 6; i++)
+ writel_relaxed(enckey32[i], (pce_dev->iobase +
+ (CRYPTO_ENCR_KEY0_REG + i * sizeof(uint32_t))));
+ }
+ break;
+ case CIPHER_ALG_AES:
+ default:
+ if (creq->mode == QCE_MODE_XTS) {
+ uint32_t xtskey32[MAX_CIPHER_KEY_SIZE/sizeof(uint32_t)]
+ = {0, 0, 0, 0, 0, 0, 0, 0};
+ uint32_t xtsklen =
+ creq->encklen/(2 * sizeof(uint32_t));
+
+ if ((use_hw_key == false) && (use_pipe_key == false)) {
+ _byte_stream_to_net_words(xtskey32,
+ (creq->enckey + creq->encklen/2),
+ creq->encklen/2);
+ /* write xts encr key */
+ for (i = 0; i < xtsklen; i++)
+ writel_relaxed(xtskey32[i],
+ pce_dev->iobase +
+ CRYPTO_ENCR_XTS_KEY0_REG +
+ (i * sizeof(uint32_t)));
+ }
+ /* write xts du size */
+ if (use_pipe_key == true)
+ writel_relaxed(min((uint32_t)QCE_SECTOR_SIZE,
+ creq->cryptlen),
+ pce_dev->iobase +
+ CRYPTO_ENCR_XTS_DU_SIZE_REG);
+ else
+ writel_relaxed(creq->cryptlen,
+ pce_dev->iobase +
+ CRYPTO_ENCR_XTS_DU_SIZE_REG);
+ }
+ if (creq->mode != QCE_MODE_ECB) {
+ if (creq->mode == QCE_MODE_XTS)
+ _byte_stream_swap_to_net_words(enciv32,
+ creq->iv, ivsize);
+ else
+ _byte_stream_to_net_words(enciv32, creq->iv,
+ ivsize);
+
+ /* write encr cntr iv */
+ for (i = 0; i <= 3; i++)
+ writel_relaxed(enciv32[i], pce_dev->iobase +
+ CRYPTO_CNTR0_IV0_REG +
+ (i * sizeof(uint32_t)));
+
+ if (creq->mode == QCE_MODE_CCM) {
+ /* write cntr iv for ccm */
+ for (i = 0; i <= 3; i++)
+ writel_relaxed(enciv32[i],
+ pce_dev->iobase +
+ CRYPTO_ENCR_CCM_INT_CNTR0_REG +
+ (i * sizeof(uint32_t)));
+ /* update cntr_iv[3] by one */
+ writel_relaxed((enciv32[3] + 1),
+ pce_dev->iobase +
+ CRYPTO_CNTR0_IV0_REG +
+ (3 * sizeof(uint32_t)));
+ }
+ }
+
+ if (creq->op == QCE_REQ_ABLK_CIPHER_NO_KEY) {
+ encr_cfg |= (CRYPTO_ENCR_KEY_SZ_AES128 <<
+ CRYPTO_ENCR_KEY_SZ);
+ } else {
+ if ((use_hw_key == false) && (use_pipe_key == false)) {
+ for (i = 0; i < enck_size_in_word; i++)
+ writel_relaxed(enckey32[i],
+ pce_dev->iobase +
+ CRYPTO_ENCR_KEY0_REG +
+ (i * sizeof(uint32_t)));
+ }
+ } /* else of if (creq->op == QCE_REQ_ABLK_CIPHER_NO_KEY) */
+ break;
+ } /* end of switch (creq->alg) */
+
+ if (use_pipe_key)
+ encr_cfg |= (CRYPTO_USE_PIPE_KEY_ENCR_ENABLED
+ << CRYPTO_USE_PIPE_KEY_ENCR);
+
+ /* set direction and key options in encr seg cfg */
+ encr_cfg |= ((creq->dir == QCE_ENCRYPT) ? 1 : 0) << CRYPTO_ENCODE;
+ if (use_hw_key == true)
+ encr_cfg |= (CRYPTO_USE_HW_KEY << CRYPTO_USE_HW_KEY_ENCR);
+ else
+ encr_cfg &= ~(CRYPTO_USE_HW_KEY << CRYPTO_USE_HW_KEY_ENCR);
+ /* write encr seg cfg */
+ writel_relaxed(encr_cfg, pce_dev->iobase + CRYPTO_ENCR_SEG_CFG_REG);
+
+ /* write encr seg size */
+ if ((creq->mode == QCE_MODE_CCM) && (creq->dir == QCE_DECRYPT))
+ writel_relaxed((creq->cryptlen + creq->authsize),
+ pce_dev->iobase + CRYPTO_ENCR_SEG_SIZE_REG);
+ else
+ writel_relaxed(creq->cryptlen,
+ pce_dev->iobase + CRYPTO_ENCR_SEG_SIZE_REG);
+
+ /* write encr seg start */
+ writel_relaxed((coffset & 0xffff),
+ pce_dev->iobase + CRYPTO_ENCR_SEG_START_REG);
+ /* write counter mask */
+ writel_relaxed(0xffffffff,
+ pce_dev->iobase + CRYPTO_CNTR_MASK_REG);
+
+ /* write seg size */
+ writel_relaxed(totallen_in, pce_dev->iobase + CRYPTO_SEG_SIZE_REG);
+
+ writel_relaxed(pce_dev->reg.crypto_cfg_le, (pce_dev->iobase +
+ CRYPTO_CONFIG_REG));
+ /* issue go to crypto */
+ if (use_hw_key == false)
+ writel_relaxed(((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
+ pce_dev->iobase + CRYPTO_GOPROC_REG);
+ else
+ writel_relaxed(((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
+ pce_dev->iobase + CRYPTO_GOPROC_QC_KEY_REG);
+ /*
+ * Ensure previous instructions (setting the GO register)
+ * were completed before issuing a DMA transfer request
+ */
+ mb();
+ return 0;
+};
+
static int _qce_unlock_other_pipes(struct qce_device *pce_dev)
{
int rc = 0;
+ if (pce_dev->support_cmd_dscr == false)
+ return rc;
+
pce_dev->ce_sps.consumer.event.callback = NULL;
rc = sps_transfer_one(pce_dev->ce_sps.consumer.pipe,
GET_PHYS_ADDR(pce_dev->ce_sps.cmdlistptr.unlock_all_pipes.cmdlist),
@@ -919,17 +1407,23 @@
iovec->flags |= flag;
}
-static void _qce_sps_add_data(uint32_t addr, uint32_t len,
+static int _qce_sps_add_data(uint32_t addr, uint32_t len,
struct sps_transfer *sps_bam_pipe)
{
struct sps_iovec *iovec = sps_bam_pipe->iovec +
sps_bam_pipe->iovec_count;
+ if (sps_bam_pipe->iovec_count == QCE_MAX_NUM_DSCR) {
+ pr_err("Num of descrptor %d exceed max (%d)",
+ sps_bam_pipe->iovec_count, (uint32_t)QCE_MAX_NUM_DSCR);
+ return -ENOMEM;
+ }
if (len) {
iovec->size = len;
iovec->addr = addr;
iovec->flags = 0;
sps_bam_pipe->iovec_count++;
}
+ return 0;
}
static int _qce_sps_add_sg_data(struct qce_device *pce_dev,
@@ -947,6 +1441,12 @@
if (pce_dev->ce_sps.minor_version == 0)
len = ALIGN(len, pce_dev->ce_sps.ce_burst_size);
while (len > 0) {
+ if (sps_bam_pipe->iovec_count == QCE_MAX_NUM_DSCR) {
+ pr_err("Num of descrptor %d exceed max (%d)",
+ sps_bam_pipe->iovec_count,
+ (uint32_t)QCE_MAX_NUM_DSCR);
+ return -ENOMEM;
+ }
if (len > SPS_MAX_PKT_SIZE) {
data_cnt = SPS_MAX_PKT_SIZE;
iovec->size = data_cnt;
@@ -1204,7 +1704,14 @@
bam.summing_threshold = 64;
 /* SPS driver will handle the crypto BAM IRQ */
bam.irq = (u32)pce_dev->ce_sps.bam_irq;
- bam.manage = SPS_BAM_MGR_LOCAL;
+ /*
+ * Set flag to indicate whether BAM global device control is
+ * managed remotely or locally.
+ */
+ if (pce_dev->support_cmd_dscr == false)
+ bam.manage = SPS_BAM_MGR_DEVICE_REMOTE;
+ else
+ bam.manage = SPS_BAM_MGR_LOCAL;
bam.ee = 1;
pr_debug("bam physical base=0x%x\n", (u32)bam.phys_addr);
@@ -1325,19 +1832,6 @@
}
};
-static void _aead_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void _sha_sps_producer_callback(struct sps_event_notify *notify)
{
struct qce_device *pce_dev = (struct qce_device *)
@@ -1353,19 +1847,6 @@
_sha_complete(pce_dev);
};
-static void _sha_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void _ablk_cipher_sps_producer_callback(struct sps_event_notify *notify)
{
struct qce_device *pce_dev = (struct qce_device *)
@@ -1400,19 +1881,6 @@
}
};
-static void _ablk_cipher_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void qce_add_cmd_element(struct qce_device *pdev,
struct sps_command_element **cmd_ptr, u32 addr,
u32 data, struct sps_command_element **populate)
@@ -1438,20 +1906,11 @@
uint32_t key_reg = 0;
uint32_t xts_key_reg = 0;
uint32_t iv_reg = 0;
- uint32_t crypto_cfg = 0;
- uint32_t beats = (pdev->ce_sps.ce_burst_size >> 3) - 1;
- uint32_t pipe_pair = pdev->ce_sps.pipe_pair_index;
*pvaddr = (unsigned char *) ALIGN(((unsigned int)(*pvaddr)),
pdev->ce_sps.ce_burst_size);
ce_vaddr = (struct sps_command_element *)(*pvaddr);
ce_vaddr_start = (uint32_t)(*pvaddr);
- crypto_cfg = (beats << CRYPTO_REQ_SIZE) |
- BIT(CRYPTO_MASK_DOUT_INTR) |
- BIT(CRYPTO_MASK_DIN_INTR) |
- BIT(CRYPTO_MASK_OP_DONE_INTR) |
- (0 << CRYPTO_HIGH_SPD_EN_N) |
- (pipe_pair << CRYPTO_PIPE_SET_SELECT);
/*
* Designate chunks of the allocated memory to various
* command list pointers related to AES cipher operations defined
@@ -1464,11 +1923,10 @@
cmdlistptr->cipher_aes_128_cbc_ctr.cmdlist =
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_128_cbc_ctr);
-
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES128 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG);
+ if (mode == QCE_MODE_CBC)
+ encr_cfg = pdev->reg.encr_cfg_aes_cbc_128;
+ else
+ encr_cfg = pdev->reg.encr_cfg_aes_ctr_128;
iv_reg = 4;
key_reg = 4;
xts_key_reg = 0;
@@ -1477,10 +1935,10 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_256_cbc_ctr);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES256 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG);
+ if (mode == QCE_MODE_CBC)
+ encr_cfg = pdev->reg.encr_cfg_aes_cbc_256;
+ else
+ encr_cfg = pdev->reg.encr_cfg_aes_ctr_256;
iv_reg = 4;
key_reg = 8;
xts_key_reg = 0;
@@ -1492,12 +1950,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_128_ecb);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES128 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_ECB <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_aes_ecb_128;
iv_reg = 0;
key_reg = 4;
xts_key_reg = 0;
@@ -1506,12 +1959,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_256_ecb);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES256 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_ECB <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_aes_ecb_256;
iv_reg = 0;
key_reg = 8;
xts_key_reg = 0;
@@ -1523,12 +1971,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_128_xts);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES128 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_XTS <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_aes_xts_128;
iv_reg = 4;
key_reg = 4;
xts_key_reg = 4;
@@ -1537,12 +1980,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_aes_256_xts);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES256 <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_XTS <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_aes_xts_256;
iv_reg = 4;
key_reg = 8;
xts_key_reg = 8;
@@ -1555,8 +1993,8 @@
break;
}
- qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG, crypto_cfg,
- &pcl_info->crypto_cfg);
+ qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_SEG_SIZE_REG, 0,
&pcl_info->seg_size);
@@ -1610,7 +2048,7 @@
0, &pcl_info->auth_seg_size);
}
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- (crypto_cfg | CRYPTO_LITTLE_ENDIAN_MASK), NULL);
+ pdev->reg.crypto_cfg_le, NULL);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_GOPROC_REG,
((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
@@ -1635,20 +2073,11 @@
uint32_t encr_cfg = 0;
uint32_t key_reg = 0;
uint32_t iv_reg = 0;
- uint32_t crypto_cfg = 0;
- uint32_t beats = (pdev->ce_sps.ce_burst_size >> 3) - 1;
- uint32_t pipe_pair = pdev->ce_sps.pipe_pair_index;
*pvaddr = (unsigned char *) ALIGN(((unsigned int)(*pvaddr)),
pdev->ce_sps.ce_burst_size);
ce_vaddr = (struct sps_command_element *)(*pvaddr);
ce_vaddr_start = (uint32_t)(*pvaddr);
- crypto_cfg = (beats << CRYPTO_REQ_SIZE) |
- BIT(CRYPTO_MASK_DOUT_INTR) |
- BIT(CRYPTO_MASK_DIN_INTR) |
- BIT(CRYPTO_MASK_OP_DONE_INTR) |
- (0 << CRYPTO_HIGH_SPD_EN_N) |
- (pipe_pair << CRYPTO_PIPE_SET_SELECT);
/*
* Designate chunks of the allocated memory to various
@@ -1662,12 +2091,8 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_des_cbc);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_DES <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_DES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_CBC <<
- CRYPTO_ENCR_MODE);
+
+ encr_cfg = pdev->reg.encr_cfg_des_cbc;
iv_reg = 2;
key_reg = 2;
} else {
@@ -1675,12 +2100,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_des_ecb);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_DES <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_DES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_ECB <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_des_ecb;
iv_reg = 0;
key_reg = 2;
}
@@ -1691,12 +2111,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_3des_cbc);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_3DES <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_DES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_CBC <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_3des_cbc;
iv_reg = 2;
key_reg = 6;
} else {
@@ -1704,12 +2119,7 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->cipher_3des_ecb);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_3DES <<
- CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_DES <<
- CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_ECB <<
- CRYPTO_ENCR_MODE);
+ encr_cfg = pdev->reg.encr_cfg_3des_ecb;
iv_reg = 0;
key_reg = 6;
}
@@ -1720,8 +2130,8 @@
break;
}
- qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG, crypto_cfg,
- &pcl_info->crypto_cfg);
+ qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_SEG_SIZE_REG, 0,
&pcl_info->seg_size);
@@ -1754,8 +2164,7 @@
0, &pcl_info->auth_seg_size);
}
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- (crypto_cfg | CRYPTO_LITTLE_ENDIAN_MASK),
- NULL);
+ pdev->reg.crypto_cfg_le, NULL);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_GOPROC_REG,
((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
@@ -1779,20 +2188,12 @@
uint32_t key_reg = 0;
uint32_t auth_cfg = 0;
uint32_t iv_reg = 0;
- uint32_t crypto_cfg = 0;
- uint32_t beats = (pdev->ce_sps.ce_burst_size >> 3) - 1;
- uint32_t pipe_pair = pdev->ce_sps.pipe_pair_index;
*pvaddr = (unsigned char *) ALIGN(((unsigned int)(*pvaddr)),
pdev->ce_sps.ce_burst_size);
ce_vaddr_start = (uint32_t)(*pvaddr);
ce_vaddr = (struct sps_command_element *)(*pvaddr);
- crypto_cfg = (beats << CRYPTO_REQ_SIZE) |
- BIT(CRYPTO_MASK_DOUT_INTR) |
- BIT(CRYPTO_MASK_DIN_INTR) |
- BIT(CRYPTO_MASK_OP_DONE_INTR) |
- (0 << CRYPTO_HIGH_SPD_EN_N) |
- (pipe_pair << CRYPTO_PIPE_SET_SELECT);
+
/*
* Designate chunks of the allocated memory to various
* command list pointers related to authentication operations
@@ -1803,13 +2204,10 @@
cmdlistptr->auth_sha1.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_sha1);
- auth_cfg = (CRYPTO_AUTH_MODE_HASH << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_sha1;
iv_reg = 5;
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_AUTH_SEG_CFG_REG,
0, NULL);
@@ -1818,13 +2216,10 @@
cmdlistptr->auth_sha256.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_sha256);
- auth_cfg = (CRYPTO_AUTH_MODE_HASH << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_SHA256 << CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_sha256;
iv_reg = 8;
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
/* 1 dummy write */
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_SIZE_REG,
0, NULL);
@@ -1835,14 +2230,11 @@
cmdlistptr->auth_sha1_hmac.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_sha1_hmac);
- auth_cfg = (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_hmac_sha1;
key_reg = 16;
iv_reg = 5;
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_AUTH_SEG_CFG_REG,
0, NULL);
break;
@@ -1850,16 +2242,11 @@
cmdlistptr->aead_sha1_hmac.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->aead_sha1_hmac);
- auth_cfg = (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS) |
- (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST);
-
+ auth_cfg = pdev->reg.auth_cfg_aead_sha1_hmac;
key_reg = 16;
iv_reg = 5;
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
/* 1 dummy write */
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_SIZE_REG,
0, NULL);
@@ -1870,15 +2257,12 @@
cmdlistptr->auth_sha256_hmac.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_sha256_hmac);
- auth_cfg = (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_SHA256 << CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_hmac_sha256;
key_reg = 16;
iv_reg = 8;
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
/* 1 dummy write */
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_SIZE_REG,
0, NULL);
@@ -1891,32 +2275,18 @@
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_aes_128_cmac);
- auth_cfg = (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
- (CRYPTO_AUTH_MODE_CMAC << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_ENUM_16_BYTES <<
- CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_KEY_SZ_AES128 <<
- CRYPTO_AUTH_KEY_SIZE) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_cmac_128;
key_reg = 4;
} else {
cmdlistptr->auth_aes_256_cmac.cmdlist =
(uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->auth_aes_256_cmac);
- auth_cfg = (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST)|
- (CRYPTO_AUTH_MODE_CMAC << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_SIZE_ENUM_16_BYTES <<
- CRYPTO_AUTH_SIZE) |
- (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_KEY_SZ_AES256 <<
- CRYPTO_AUTH_KEY_SIZE) |
- (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+ auth_cfg = pdev->reg.auth_cfg_cmac_256;
key_reg = 8;
}
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
/* 1 dummy write */
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_SIZE_REG,
0, NULL);
@@ -1973,8 +2343,7 @@
0, NULL);
}
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- (crypto_cfg | CRYPTO_LITTLE_ENDIAN_MASK),
- NULL);
+ pdev->reg.crypto_cfg_le, NULL);
if (alg != QCE_AEAD_SHA1_HMAC)
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_GOPROC_REG,
@@ -1998,20 +2367,12 @@
uint32_t encr_cfg = 0;
uint32_t auth_cfg = 0;
uint32_t key_reg = 0;
- uint32_t crypto_cfg = 0;
- uint32_t beats = (pdev->ce_sps.ce_burst_size >> 3) - 1;
- uint32_t pipe_pair = pdev->ce_sps.pipe_pair_index;
*pvaddr = (unsigned char *) ALIGN(((unsigned int)(*pvaddr)),
pdev->ce_sps.ce_burst_size);
ce_vaddr_start = (uint32_t)(*pvaddr);
ce_vaddr = (struct sps_command_element *)(*pvaddr);
- crypto_cfg = (beats << CRYPTO_REQ_SIZE) |
- BIT(CRYPTO_MASK_DOUT_INTR) |
- BIT(CRYPTO_MASK_DIN_INTR) |
- BIT(CRYPTO_MASK_OP_DONE_INTR) |
- (0 << CRYPTO_HIGH_SPD_EN_N) |
- (pipe_pair << CRYPTO_PIPE_SET_SELECT);
+
/*
* Designate chunks of the allocated memory to various
* command list pointers related to aead operations
@@ -2021,36 +2382,21 @@
cmdlistptr->aead_aes_128_ccm.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->aead_aes_128_ccm);
- auth_cfg = (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
- (CRYPTO_AUTH_MODE_CCM << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_KEY_SZ_AES128 << CRYPTO_AUTH_KEY_SIZE);
- auth_cfg &= ~(1 << CRYPTO_USE_HW_KEY_AUTH);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
- ((CRYPTO_ENCR_MODE_CCM << CRYPTO_ENCR_MODE));
+ auth_cfg = pdev->reg.auth_cfg_aes_ccm_128;
+ encr_cfg = pdev->reg.encr_cfg_aes_ccm_128;
key_reg = 4;
} else {
cmdlistptr->aead_aes_256_ccm.cmdlist = (uint32_t)ce_vaddr;
pcl_info = &(cmdlistptr->aead_aes_256_ccm);
- auth_cfg = (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
- (CRYPTO_AUTH_MODE_CCM << CRYPTO_AUTH_MODE)|
- (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
- (CRYPTO_AUTH_KEY_SZ_AES256 << CRYPTO_AUTH_KEY_SIZE) |
- ((MAX_NONCE/sizeof(uint32_t)) <<
- CRYPTO_AUTH_NONCE_NUM_WORDS);
- auth_cfg &= ~(1 << CRYPTO_USE_HW_KEY_AUTH);
- encr_cfg = (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
- (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
- (CRYPTO_ENCR_MODE_CCM << CRYPTO_ENCR_MODE) |
- (CRYPTO_LAST_CCM_XFR << CRYPTO_LAST_CCM);
+ auth_cfg = pdev->reg.auth_cfg_aes_ccm_256;
+ encr_cfg = pdev->reg.encr_cfg_aes_ccm_256;
key_reg = 8;
}
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- crypto_cfg, &pcl_info->crypto_cfg);
+ pdev->reg.crypto_cfg_be, &pcl_info->crypto_cfg);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_SIZE_REG, 0, NULL);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_ENCR_SEG_CFG_REG, 0, NULL);
@@ -2120,8 +2466,7 @@
0, NULL);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_CONFIG_REG,
- (crypto_cfg | CRYPTO_LITTLE_ENDIAN_MASK),
- NULL);
+ pdev->reg.crypto_cfg_le, NULL);
qce_add_cmd_element(pdev, &ce_vaddr, CRYPTO_GOPROC_REG,
((1 << CRYPTO_GO) | (1 << CRYPTO_RESULTS_DUMP)),
@@ -2224,7 +2569,8 @@
(uint32_t)GET_PHYS_ADDR(vaddr);
vaddr += QCE_MAX_NUM_DSCR * sizeof(struct sps_iovec);
- qce_setup_cmdlistptrs(pce_dev, &vaddr);
+ if (pce_dev->support_cmd_dscr)
+ qce_setup_cmdlistptrs(pce_dev, &vaddr);
vaddr = (unsigned char *) ALIGN(((unsigned int)vaddr),
pce_dev->ce_sps.ce_burst_size);
pce_dev->ce_sps.result_dump = (uint32_t)vaddr;
@@ -2234,6 +2580,160 @@
return 0;
}
+static int qce_init_ce_cfg_val(struct qce_device *pce_dev)
+{
+ uint32_t beats = (pce_dev->ce_sps.ce_burst_size >> 3) - 1;
+ uint32_t pipe_pair = pce_dev->ce_sps.pipe_pair_index;
+
+ pce_dev->reg.crypto_cfg_be = (beats << CRYPTO_REQ_SIZE) |
+ BIT(CRYPTO_MASK_DOUT_INTR) | BIT(CRYPTO_MASK_DIN_INTR) |
+ BIT(CRYPTO_MASK_OP_DONE_INTR) | (0 << CRYPTO_HIGH_SPD_EN_N) |
+ (pipe_pair << CRYPTO_PIPE_SET_SELECT);
+
+ pce_dev->reg.crypto_cfg_le =
+ (pce_dev->reg.crypto_cfg_be | CRYPTO_LITTLE_ENDIAN_MASK);
+
+ /* Initialize encr_cfg register for AES alg */
+ pce_dev->reg.encr_cfg_aes_cbc_128 =
+ (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CBC << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_cbc_256 =
+ (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CBC << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_ctr_128 =
+ (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CTR << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_ctr_256 =
+ (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CTR << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_xts_128 =
+ (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_XTS << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_xts_256 =
+ (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_XTS << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_ecb_128 =
+ (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_ECB << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_ecb_256 =
+ (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_ECB << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_aes_ccm_128 =
+ (CRYPTO_ENCR_KEY_SZ_AES128 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ ((CRYPTO_ENCR_MODE_CCM << CRYPTO_ENCR_MODE));
+
+ pce_dev->reg.encr_cfg_aes_ccm_256 =
+ (CRYPTO_ENCR_KEY_SZ_AES256 << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_AES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CCM << CRYPTO_ENCR_MODE) |
+ (CRYPTO_LAST_CCM_XFR << CRYPTO_LAST_CCM);
+
+ /* Initialize encr_cfg register for DES alg */
+ pce_dev->reg.encr_cfg_des_ecb =
+ (CRYPTO_ENCR_KEY_SZ_DES << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_DES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_ECB << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_des_cbc =
+ (CRYPTO_ENCR_KEY_SZ_DES << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_DES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CBC << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_3des_ecb =
+ (CRYPTO_ENCR_KEY_SZ_3DES << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_DES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_ECB << CRYPTO_ENCR_MODE);
+
+ pce_dev->reg.encr_cfg_3des_cbc =
+ (CRYPTO_ENCR_KEY_SZ_3DES << CRYPTO_ENCR_KEY_SZ) |
+ (CRYPTO_ENCR_ALG_DES << CRYPTO_ENCR_ALG) |
+ (CRYPTO_ENCR_MODE_CBC << CRYPTO_ENCR_MODE);
+
+ /* Initialize auth_cfg register for CMAC alg */
+ pce_dev->reg.auth_cfg_cmac_128 =
+ (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
+ (CRYPTO_AUTH_MODE_CMAC << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_ENUM_16_BYTES << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_KEY_SZ_AES128 << CRYPTO_AUTH_KEY_SIZE);
+
+ pce_dev->reg.auth_cfg_cmac_256 =
+ (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
+ (CRYPTO_AUTH_MODE_CMAC << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_ENUM_16_BYTES << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_KEY_SZ_AES256 << CRYPTO_AUTH_KEY_SIZE);
+
+ /* Initialize auth_cfg register for HMAC alg */
+ pce_dev->reg.auth_cfg_hmac_sha1 =
+ (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+
+ pce_dev->reg.auth_cfg_hmac_sha256 =
+ (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_SHA256 << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+
+ /* Initialize auth_cfg register for SHA1/256 alg */
+ pce_dev->reg.auth_cfg_sha1 =
+ (CRYPTO_AUTH_MODE_HASH << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+
+ pce_dev->reg.auth_cfg_sha256 =
+ (CRYPTO_AUTH_MODE_HASH << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_SHA256 << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS);
+
+ /* Initialize auth_cfg register for AEAD alg */
+ pce_dev->reg.auth_cfg_aead_sha1_hmac =
+ (CRYPTO_AUTH_MODE_HMAC << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_SIZE_SHA1 << CRYPTO_AUTH_SIZE) |
+ (CRYPTO_AUTH_ALG_SHA << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_POS_BEFORE << CRYPTO_AUTH_POS) |
+ (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST);
+
+ pce_dev->reg.auth_cfg_aes_ccm_128 =
+ (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
+ (CRYPTO_AUTH_MODE_CCM << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_KEY_SZ_AES128 << CRYPTO_AUTH_KEY_SIZE) |
+ ((MAX_NONCE/sizeof(uint32_t)) << CRYPTO_AUTH_NONCE_NUM_WORDS);
+ pce_dev->reg.auth_cfg_aes_ccm_128 &= ~(1 << CRYPTO_USE_HW_KEY_AUTH);
+
+ pce_dev->reg.auth_cfg_aes_ccm_256 =
+ (1 << CRYPTO_LAST) | (1 << CRYPTO_FIRST) |
+ (CRYPTO_AUTH_MODE_CCM << CRYPTO_AUTH_MODE)|
+ (CRYPTO_AUTH_ALG_AES << CRYPTO_AUTH_ALG) |
+ (CRYPTO_AUTH_KEY_SZ_AES256 << CRYPTO_AUTH_KEY_SIZE) |
+ ((MAX_NONCE/sizeof(uint32_t)) << CRYPTO_AUTH_NONCE_NUM_WORDS);
+ pce_dev->reg.auth_cfg_aes_ccm_256 &= ~(1 << CRYPTO_USE_HW_KEY_AUTH);
+
+ return 0;
+}
+
int qce_aead_sha1_hmac_setup(struct qce_req *creq, struct crypto_aead *aead,
struct qce_cmdlist_info *cmdlistinfo)
{
@@ -2328,10 +2828,16 @@
pce_dev->dst_nents = pce_dev->src_nents;
}
- _ce_get_cipher_cmdlistinfo(pce_dev, q_req, &cmdlistinfo);
- /* set up crypto device */
- rc = _ce_setup_cipher(pce_dev, q_req, totallen_in,
- areq->assoclen + ivsize, cmdlistinfo);
+ if (pce_dev->support_cmd_dscr) {
+ _ce_get_cipher_cmdlistinfo(pce_dev, q_req, &cmdlistinfo);
+ /* set up crypto device */
+ rc = _ce_setup_cipher(pce_dev, q_req, totallen_in,
+ areq->assoclen + ivsize, cmdlistinfo);
+ } else {
+ /* set up crypto device */
+ rc = _ce_setup_cipher_direct(pce_dev, q_req, totallen_in,
+ areq->assoclen + ivsize);
+ }
if (rc < 0)
goto bad;
@@ -2359,31 +2865,24 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback = _aead_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
- _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
+ if (pce_dev->support_cmd_dscr)
+ _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
if (pce_dev->ce_sps.minor_version == 0) {
- _qce_sps_add_sg_data(pce_dev, areq->src, totallen_in,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, totallen_in,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_sg_data(pce_dev, areq->dst, out_len +
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, out_len +
areq->assoclen + hw_pad_out,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (totallen_in > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
@@ -2391,42 +2890,52 @@
SPS_O_DESC_DONE;
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
- _qce_sps_add_data(GET_PHYS_ADDR(
+ if (_qce_sps_add_data(GET_PHYS_ADDR(
pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
}
} else {
- _qce_sps_add_sg_data(pce_dev, areq->assoc, areq->assoclen,
- &pce_dev->ce_sps.in_transfer);
- _qce_sps_add_data((uint32_t)pce_dev->phy_iv_in, ivsize,
- &pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->cryptlen,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->assoc, areq->assoclen,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
+ if (_qce_sps_add_data((uint32_t)pce_dev->phy_iv_in, ivsize,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->cryptlen,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
/* Pass through to ignore associated (+iv, if applicable) data*/
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
(ivsize + areq->assoclen),
- &pce_dev->ce_sps.out_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->dst, out_len,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, out_len,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
/* Pass through to ignore hw_pad (padding of the MAC data) */
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
- hw_pad_out, &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
+ hw_pad_out, &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (totallen_in > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
- _qce_sps_add_data(
+ if (_qce_sps_add_data(
GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
@@ -2470,7 +2979,6 @@
pce_dev->src_nents = 0;
pce_dev->dst_nents = 0;
- _ce_get_cipher_cmdlistinfo(pce_dev, c_req, &cmdlistinfo);
/* cipher input */
pce_dev->src_nents = count_sg(areq->src, areq->nbytes);
@@ -2489,15 +2997,19 @@
pce_dev->dir = c_req->dir;
if ((pce_dev->ce_sps.minor_version == 0) && (c_req->dir == QCE_DECRYPT)
&& (c_req->mode == QCE_MODE_CBC)) {
- struct ablkcipher_request *areq =
- (struct ablkcipher_request *)pce_dev->areq;
memcpy(pce_dev->dec_iv, (unsigned char *)sg_virt(areq->src) +
areq->src->length - 16,
NUM_OF_CRYPTO_CNTR_IV_REG * CRYPTO_REG_SIZE);
}
/* set up crypto device */
- rc = _ce_setup_cipher(pce_dev, c_req, areq->nbytes, 0, cmdlistinfo);
+ if (pce_dev->support_cmd_dscr) {
+ _ce_get_cipher_cmdlistinfo(pce_dev, c_req, &cmdlistinfo);
+ rc = _ce_setup_cipher(pce_dev, c_req, areq->nbytes, 0,
+ cmdlistinfo);
+ } else {
+ rc = _ce_setup_cipher_direct(pce_dev, c_req, areq->nbytes, 0);
+ }
if (rc < 0)
goto bad;
@@ -2515,37 +3027,30 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback =
- _ablk_cipher_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
-
- _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
+ if (pce_dev->support_cmd_dscr)
+ _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_sg_data(pce_dev, areq->dst, areq->nbytes,
- &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, areq->nbytes,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (areq->nbytes > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
- CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
+ CRYPTO_RESULT_DUMP_SIZE,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
}
@@ -2579,10 +3084,15 @@
struct qce_cmdlist_info *cmdlistinfo = NULL;
pce_dev->src_nents = count_sg(sreq->src, sreq->size);
- _ce_get_hash_cmdlistinfo(pce_dev, sreq, &cmdlistinfo);
qce_dma_map_sg(pce_dev->pdev, sreq->src, pce_dev->src_nents,
DMA_TO_DEVICE);
- rc = _ce_setup_hash(pce_dev, sreq, cmdlistinfo);
+
+ if (pce_dev->support_cmd_dscr) {
+ _ce_get_hash_cmdlistinfo(pce_dev, sreq, &cmdlistinfo);
+ rc = _ce_setup_hash(pce_dev, sreq, cmdlistinfo);
+ } else {
+ rc = _ce_setup_hash_direct(pce_dev, sreq);
+ }
if (rc < 0)
goto bad;
@@ -2598,29 +3108,21 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
-
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback = _sha_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
- _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
+ if (pce_dev->support_cmd_dscr)
+ _qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
+ if (_qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer, SPS_IOVEC_FLAG_INT);
rc = _qce_sps_transfer(pce_dev);
if (rc)
@@ -2643,7 +3145,6 @@
pce_dev->is_shared = of_property_read_bool((&pdev->dev)->of_node,
"qcom,ce-hw-shared");
-
if (of_property_read_u32((&pdev->dev)->of_node,
"qcom,bam-pipe-pair",
&pce_dev->ce_sps.pipe_pair_index)) {
@@ -2678,7 +3179,7 @@
pce_dev->ce_sps.bam_mem = resource->start;
pce_dev->ce_sps.bam_iobase = ioremap_nocache(resource->start,
resource_size(resource));
- if (!pce_dev->iobase) {
+ if (!pce_dev->ce_sps.bam_iobase) {
rc = -ENOMEM;
pr_err("Can not map BAM io memory\n");
goto err_getting_bam_info;
@@ -2691,7 +3192,6 @@
pr_warn("ce_bam_phy_reg_base=0x%x ", pce_dev->ce_sps.bam_mem);
pr_warn("ce_bam_virt_reg_base=0x%x\n",
(uint32_t)pce_dev->ce_sps.bam_iobase);
-
resource = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (resource) {
pce_dev->ce_sps.bam_irq = resource->start;
@@ -2800,7 +3300,7 @@
}
}
-static int __qce_enable_clk(void *handle)
+int qce_enable_clk(void *handle)
{
struct qce_device *pce_dev = (struct qce_device *) handle;
int rc = 0;
@@ -2813,6 +3313,7 @@
return rc;
}
}
+
/* Enable CE clk */
if (pce_dev->ce_clk != NULL) {
rc = clk_prepare_enable(pce_dev->ce_clk);
@@ -2834,8 +3335,9 @@
}
return rc;
}
+EXPORT_SYMBOL(qce_enable_clk);
-static int __qce_disable_clk(void *handle)
+int qce_disable_clk(void *handle)
{
struct qce_device *pce_dev = (struct qce_device *) handle;
int rc = 0;
@@ -2849,11 +3351,13 @@
return rc;
}
+EXPORT_SYMBOL(qce_disable_clk);
/* crypto engine open function. */
void *qce_open(struct platform_device *pdev, int *rc)
{
struct qce_device *pce_dev;
+ uint32_t bam_cfg = 0;
pce_dev = kzalloc(sizeof(struct qce_device), GFP_KERNEL);
if (!pce_dev) {
@@ -2886,19 +3390,27 @@
if (*rc)
goto err_mem;
- *rc = __qce_enable_clk(pce_dev);
+ *rc = qce_enable_clk(pce_dev);
if (*rc)
goto err;
if (_probe_ce_engine(pce_dev)) {
*rc = -ENXIO;
- __qce_disable_clk(pce_dev);
goto err;
}
*rc = 0;
+
+ bam_cfg = readl_relaxed(pce_dev->ce_sps.bam_iobase +
+ CRYPTO_BAM_CNFG_BITS_REG);
+ pce_dev->support_cmd_dscr = (bam_cfg & CRYPTO_BAM_CD_ENABLE_MASK) ?
+ true : false;
+ qce_init_ce_cfg_val(pce_dev);
qce_setup_ce_sps_data(pce_dev);
qce_sps_init(pce_dev);
+
+ qce_disable_clk(pce_dev);
+
return pce_dev;
err:
__qce_deinit_clk(pce_dev);
@@ -2926,16 +3438,18 @@
if (handle == NULL)
return -ENODEV;
+ qce_enable_clk(pce_dev);
+ qce_sps_exit(pce_dev);
+
if (pce_dev->iobase)
iounmap(pce_dev->iobase);
if (pce_dev->coh_vmem)
dma_free_coherent(pce_dev->pdev, pce_dev->memsize,
pce_dev->coh_vmem, pce_dev->coh_pmem);
- __qce_disable_clk(pce_dev);
+ qce_disable_clk(pce_dev);
__qce_deinit_clk(pce_dev);
- qce_sps_exit(pce_dev);
kfree(handle);
return 0;
diff --git a/drivers/crypto/msm/qce50.h b/drivers/crypto/msm/qce50.h
index f5123df..38515fb 100644
--- a/drivers/crypto/msm/qce50.h
+++ b/drivers/crypto/msm/qce50.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -109,6 +109,45 @@
struct qce_cmdlist_info unlock_all_pipes;
};
+struct qce_ce_cfg_reg_setting {
+ uint32_t crypto_cfg_be;
+ uint32_t crypto_cfg_le;
+
+ uint32_t encr_cfg_aes_cbc_128;
+ uint32_t encr_cfg_aes_cbc_256;
+
+ uint32_t encr_cfg_aes_ecb_128;
+ uint32_t encr_cfg_aes_ecb_256;
+
+ uint32_t encr_cfg_aes_xts_128;
+ uint32_t encr_cfg_aes_xts_256;
+
+ uint32_t encr_cfg_aes_ctr_128;
+ uint32_t encr_cfg_aes_ctr_256;
+
+ uint32_t encr_cfg_aes_ccm_128;
+ uint32_t encr_cfg_aes_ccm_256;
+
+ uint32_t encr_cfg_des_cbc;
+ uint32_t encr_cfg_des_ecb;
+
+ uint32_t encr_cfg_3des_cbc;
+ uint32_t encr_cfg_3des_ecb;
+
+ uint32_t auth_cfg_cmac_128;
+ uint32_t auth_cfg_cmac_256;
+
+ uint32_t auth_cfg_sha1;
+ uint32_t auth_cfg_sha256;
+
+ uint32_t auth_cfg_hmac_sha1;
+ uint32_t auth_cfg_hmac_sha256;
+
+ uint32_t auth_cfg_aes_ccm_128;
+ uint32_t auth_cfg_aes_ccm_256;
+ uint32_t auth_cfg_aead_sha1_hmac;
+
+};
/* DM data structure with buffers, commandlists & commmand pointer lists */
struct ce_sps_data {
diff --git a/drivers/crypto/msm/qcedev.c b/drivers/crypto/msm/qcedev.c
index 8cc42df..a09bb42 100644
--- a/drivers/crypto/msm/qcedev.c
+++ b/drivers/crypto/msm/qcedev.c
@@ -98,7 +98,7 @@
};
static DEFINE_MUTEX(send_cmd_lock);
-static DEFINE_MUTEX(sent_bw_req);
+static DEFINE_MUTEX(qcedev_sent_bw_req);
/**********************************************************************
* Register ourselves as a misc device to be able to access the dev driver
* from userspace. */
@@ -177,25 +177,51 @@
{
int ret = 0;
- mutex_lock(&sent_bw_req);
+ mutex_lock(&qcedev_sent_bw_req);
if (high_bw_req) {
- if (podev->high_bw_req_count == 0)
+ if (podev->high_bw_req_count == 0) {
+ ret = qce_enable_clk(podev->qce);
+ if (ret) {
+ pr_err("%s Unable enable clk\n", __func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
ret = msm_bus_scale_client_update_request(
podev->bus_scale_handle, 1);
- if (ret)
- pr_err("%s Unable to set to high bandwidth\n",
+ if (ret) {
+ pr_err("%s Unable to set to high bandwidth\n",
__func__);
+ ret = qce_disable_clk(podev->qce);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ }
podev->high_bw_req_count++;
} else {
- if (podev->high_bw_req_count == 1)
+ if (podev->high_bw_req_count == 1) {
ret = msm_bus_scale_client_update_request(
podev->bus_scale_handle, 0);
- if (ret)
- pr_err("%s Unable to set to low bandwidth\n",
+ if (ret) {
+ pr_err("%s Unable to set to low bandwidth\n",
__func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ ret = qce_disable_clk(podev->qce);
+ if (ret) {
+ pr_err("%s Unable disable clk\n", __func__);
+ ret = msm_bus_scale_client_update_request(
+ podev->bus_scale_handle, 1);
+ if (ret)
+ pr_err("%s Unable to set to high bandwidth\n",
+ __func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ }
podev->high_bw_req_count--;
}
- mutex_unlock(&sent_bw_req);
+ mutex_unlock(&qcedev_sent_bw_req);
}
@@ -297,10 +323,10 @@
u32 qcedev_sha_fail;
};
-static struct qcedev_stat _qcedev_stat[MAX_QCE_DEVICE];
+static struct qcedev_stat _qcedev_stat;
static struct dentry *_debug_dent;
static char _debug_read_buf[DEBUG_MAX_RW_BUF];
-static int _debug_qcedev[MAX_QCE_DEVICE];
+static int _debug_qcedev;
static struct qcedev_control *qcedev_minor_to_control(unsigned n)
{
@@ -667,7 +693,7 @@
if (ret)
qcedev_areq->err = -EIO;
- pstat = &_qcedev_stat[podev->pdev->id];
+ pstat = &_qcedev_stat;
if (qcedev_areq->op_type == QCEDEV_CRYPTO_OPER_CIPHER) {
switch (qcedev_areq->cipher_op_req.op) {
case QCEDEV_OPER_DEC:
@@ -1533,34 +1559,54 @@
static int qcedev_check_cipher_key(struct qcedev_cipher_op_req *req,
struct qcedev_control *podev)
{
+
+ if (req->encklen < 0) {
+ pr_err("%s: Invalid key size: %d\n", __func__, req->encklen);
+ return -EINVAL;
+ }
/* if intending to use HW key make sure key fields are set
* correctly and HW key is indeed supported in target
*/
if (req->encklen == 0) {
int i;
- for (i = 0; i < QCEDEV_MAX_KEY_SIZE; i++)
- if (req->enckey[i])
+ for (i = 0; i < QCEDEV_MAX_KEY_SIZE; i++) {
+ if (req->enckey[i]) {
+ pr_err("%s: Invalid key: non-zero key input\n",
+ __func__);
goto error;
+ }
+ }
if ((req->op != QCEDEV_OPER_ENC_NO_KEY) &&
(req->op != QCEDEV_OPER_DEC_NO_KEY))
- if (!podev->platform_support.hw_key_support)
+ if (!podev->platform_support.hw_key_support) {
+ pr_err("%s: Invalid op %d\n", __func__,
+ (uint32_t)req->op);
goto error;
+ }
} else {
if (req->encklen == QCEDEV_AES_KEY_192) {
- if (!podev->ce_support.aes_key_192)
+ if (!podev->ce_support.aes_key_192) {
+ pr_err("%s: AES-192 not supported\n", __func__);
goto error;
+ }
} else {
/* if not using HW key make sure key
* length is valid
*/
if ((req->mode == QCEDEV_AES_MODE_XTS)) {
- if (!((req->encklen == QCEDEV_AES_KEY_128*2) ||
- (req->encklen == QCEDEV_AES_KEY_256*2)))
+ if ((req->encklen != QCEDEV_AES_KEY_128*2) &&
+ (req->encklen != QCEDEV_AES_KEY_256*2)) {
+ pr_err("%s: unsupported key size: %d\n",
+ __func__, req->encklen);
goto error;
+ }
} else {
- if (!((req->encklen == QCEDEV_AES_KEY_128) ||
- (req->encklen == QCEDEV_AES_KEY_256)))
+ if ((req->encklen != QCEDEV_AES_KEY_128) &&
+ (req->encklen != QCEDEV_AES_KEY_256)) {
+ pr_err("%s: unsupported key size %d\n",
+ __func__, req->encklen);
goto error;
+ }
}
}
}
@@ -1576,32 +1622,48 @@
pr_err("%s: Use of PMEM is not supported\n", __func__);
goto error;
}
- if ((req->entries == 0) || (req->data_len == 0))
+ if ((req->entries == 0) || (req->data_len == 0) ||
+ (req->entries > QCEDEV_MAX_BUFFERS)) {
+ pr_err("%s: Invalid cipher length/entries\n", __func__);
goto error;
+ }
if ((req->alg >= QCEDEV_ALG_LAST) ||
- (req->mode >= QCEDEV_AES_DES_MODE_LAST))
+ (req->mode >= QCEDEV_AES_DES_MODE_LAST)) {
+ pr_err("%s: Invalid algorithm %d\n", __func__,
+ (uint32_t)req->alg);
goto error;
-
- if ((req->mode == QCEDEV_AES_MODE_XTS) && (!podev->ce_support.aes_xts))
- goto error;
-
- if (req->alg == QCEDEV_ALG_AES)
+ }
+ if ((req->mode == QCEDEV_AES_MODE_XTS) &&
+ (!podev->ce_support.aes_xts)) {
+ pr_err("%s: XTS algorithm is not supported\n", __func__);
+ goto error;
+ }
+ if (req->alg == QCEDEV_ALG_AES) {
if (qcedev_check_cipher_key(req, podev))
- goto error;
+ goto error;
+
+ }
/* if using a byteoffset, make sure it is CTR mode using vbuf */
if (req->byteoffset) {
- if (req->mode != QCEDEV_AES_MODE_CTR)
+ if (req->mode != QCEDEV_AES_MODE_CTR) {
+ pr_err("%s: Operation on byte offset not supported\n",
+ __func__);
goto error;
+ }
}
 /* Ensure zero ivlen for ECB mode */
- if (req->ivlen != 0) {
+ if (req->ivlen > 0) {
if ((req->mode == QCEDEV_AES_MODE_ECB) ||
- (req->mode == QCEDEV_DES_MODE_ECB))
+ (req->mode == QCEDEV_DES_MODE_ECB)) {
+ pr_err("%s: Expecting a zero length IV\n", __func__);
goto error;
+ }
} else {
if ((req->mode != QCEDEV_AES_MODE_ECB) &&
- (req->mode != QCEDEV_DES_MODE_ECB))
+ (req->mode != QCEDEV_DES_MODE_ECB)) {
+ pr_err("%s: Expecting a non-zero ength IV\n", __func__);
goto error;
+ }
}
return 0;
@@ -1614,20 +1676,42 @@
struct qcedev_control *podev)
{
if ((req->alg == QCEDEV_ALG_AES_CMAC) &&
- (!podev->ce_support.cmac))
+ (!podev->ce_support.cmac)) {
+ pr_err("%s: CMAC not supported\n", __func__);
goto sha_error;
-
- if ((req->entries == 0) || (req->data_len == 0))
+ }
+ if ((req->entries == 0) || (req->data_len == 0) ||
+ (req->entries > QCEDEV_MAX_BUFFERS)) {
+ pr_err("%s: Invalid data length (%d)/ num entries (%d)\n",
+ __func__, req->data_len, req->entries);
goto sha_error;
+ }
- if (req->alg >= QCEDEV_ALG_SHA_ALG_LAST)
+ if (req->alg >= QCEDEV_ALG_SHA_ALG_LAST) {
+ pr_err("%s: Invalid algorithm (%d)\n", __func__, req->alg);
goto sha_error;
-
+ }
if ((req->alg == QCEDEV_ALG_SHA1_HMAC) ||
 (req->alg == QCEDEV_ALG_SHA256_HMAC)) {
- if (req->authklen == 0)
+ if (req->authkey == NULL) {
+ pr_err("%s: Invalid authkey pointer\n", __func__);
goto sha_error;
+ }
+ if (req->authklen <= 0) {
+ pr_err("%s: Invalid authkey length (%d)\n",
+ __func__, req->authklen);
+ goto sha_error;
+ }
}
+
+ if (req->alg == QCEDEV_ALG_AES_CMAC) {
+ if ((req->authklen != QCEDEV_AES_KEY_128) &&
+ (req->authklen != QCEDEV_AES_KEY_256)) {
+ pr_err("%s: unsupported key length\n", __func__);
+ goto sha_error;
+ }
+ }
+
return 0;
sha_error:
return -EINVAL;
@@ -1655,7 +1739,7 @@
return -ENOTTY;
init_completion(&qcedev_areq.complete);
- pstat = &_qcedev_stat[podev->pdev->id];
+ pstat = &_qcedev_stat;
switch (cmd) {
case QCEDEV_IOCTL_LOCK_CE:
@@ -1854,6 +1938,14 @@
podev->platform_support.hw_key_support = 0;
podev->platform_support.bus_scale_table = NULL;
podev->platform_support.sha_hmac = 1;
+
+ if (podev->ce_support.is_shared == false) {
+ podev->platform_support.bus_scale_table =
+ (struct msm_bus_scale_pdata *)
+ msm_bus_cl_get_pdata(pdev);
+ if (!podev->platform_support.bus_scale_table)
+ pr_err("bus_scale_table is NULL\n");
+ }
} else {
platform_support =
(struct msm_ce_hw_support *)pdev->dev.platform_data;
@@ -1934,11 +2026,7 @@
struct qcedev_stat *pstat;
int len = 0;
- if (id < 0) {
- pr_err("Crypto id is %d, cannot be negative\n", id);
- return len;
- }
- pstat = &_qcedev_stat[id];
+ pstat = &_qcedev_stat;
len = snprintf(_debug_read_buf, DEBUG_MAX_RW_BUF - 1,
"\nQualcomm QCE dev driver %d Statistics:\n",
id + 1);
@@ -1984,10 +2072,7 @@
static ssize_t _debug_stats_write(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
-
- int qcedev = *((int *) file->private_data);
-
- memset((char *)&_qcedev_stat[qcedev], 0, sizeof(struct qcedev_stat));
+ memset((char *)&_qcedev_stat, 0, sizeof(struct qcedev_stat));
return count;
};
@@ -2001,7 +2086,6 @@
{
int rc;
char name[DEBUG_MAX_FNAME];
- int i;
struct dentry *dent;
_debug_dent = debugfs_create_dir("qcedev", NULL);
@@ -2011,17 +2095,15 @@
return PTR_ERR(_debug_dent);
}
- for (i = 0; i < MAX_QCE_DEVICE; i++) {
- snprintf(name, DEBUG_MAX_FNAME-1, "stats-%d", i+1);
- _debug_qcedev[i] = i;
- dent = debugfs_create_file(name, 0644, _debug_dent,
- &_debug_qcedev[i], &_debug_stats_ops);
- if (dent == NULL) {
- pr_err("qcedev debugfs_create_file fail, error %ld\n",
- PTR_ERR(dent));
- rc = PTR_ERR(dent);
- goto err;
- }
+ snprintf(name, DEBUG_MAX_FNAME-1, "stats-%d", 1);
+ _debug_qcedev = 0;
+ dent = debugfs_create_file(name, 0644, _debug_dent,
+ &_debug_qcedev, &_debug_stats_ops);
+ if (dent == NULL) {
+ pr_err("qcedev debugfs_create_file fail, error %ld\n",
+ PTR_ERR(dent));
+ rc = PTR_ERR(dent);
+ goto err;
}
return 0;
err:
diff --git a/drivers/crypto/msm/qcrypto.c b/drivers/crypto/msm/qcrypto.c
index 85c25c7..d1cb933 100644
--- a/drivers/crypto/msm/qcrypto.c
+++ b/drivers/crypto/msm/qcrypto.c
@@ -42,7 +42,6 @@
#include "qce.h"
-#define MAX_CRYPTO_DEVICE 3
#define DEBUG_MAX_FNAME 16
#define DEBUG_MAX_RW_BUF 1024
@@ -53,6 +52,8 @@
u32 aead_sha1_des_dec;
u32 aead_sha1_3des_enc;
u32 aead_sha1_3des_dec;
+ u32 aead_ccm_aes_enc;
+ u32 aead_ccm_aes_dec;
u32 aead_op_success;
u32 aead_op_fail;
u32 ablk_cipher_aes_enc;
@@ -72,7 +73,7 @@
u32 sha_hmac_op_success;
u32 sha_hmac_op_fail;
};
-static struct crypto_stat _qcrypto_stat[MAX_CRYPTO_DEVICE];
+static struct crypto_stat _qcrypto_stat;
static struct dentry *_debug_dent;
static char _debug_read_buf[DEBUG_MAX_RW_BUF];
@@ -121,7 +122,7 @@
#define NUM_RETRY 1000
#define CE_BUSY 55
-static DEFINE_MUTEX(sent_bw_req);
+static DEFINE_MUTEX(qcrypto_sent_bw_req);
static int qcrypto_scm_cmd(int resource, int cmd, int *response)
{
@@ -346,25 +347,51 @@
{
int ret = 0;
- mutex_lock(&sent_bw_req);
+ mutex_lock(&qcrypto_sent_bw_req);
if (high_bw_req) {
- if (cp->high_bw_req_count == 0)
+ if (cp->high_bw_req_count == 0) {
+ ret = qce_enable_clk(cp->qce);
+ if (ret) {
+ pr_err("%s Unable enable clk\n", __func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
ret = msm_bus_scale_client_update_request(
- cp->bus_scale_handle, 1);
- if (ret)
- pr_err("%s Unable to set to high bandwidth\n",
+ cp->bus_scale_handle, 1);
+ if (ret) {
+ pr_err("%s Unable to set to high bandwidth\n",
__func__);
+ qce_disable_clk(cp->qce);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ }
cp->high_bw_req_count++;
} else {
- if (cp->high_bw_req_count == 1)
+ if (cp->high_bw_req_count == 1) {
ret = msm_bus_scale_client_update_request(
- cp->bus_scale_handle, 0);
- if (ret)
- pr_err("%s Unable to set to low bandwidth\n",
+ cp->bus_scale_handle, 0);
+ if (ret) {
+ pr_err("%s Unable to set to low bandwidth\n",
__func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ ret = qce_disable_clk(cp->qce);
+ if (ret) {
+ pr_err("%s Unable disable clk\n", __func__);
+ ret = msm_bus_scale_client_update_request(
+ cp->bus_scale_handle, 1);
+ if (ret)
+ pr_err("%s Unable to set to high bandwidth\n",
+ __func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ }
cp->high_bw_req_count--;
}
- mutex_unlock(&sent_bw_req);
+ mutex_unlock(&qcrypto_sent_bw_req);
}
static int _start_qcrypto_process(struct crypto_priv *cp);
@@ -379,6 +406,39 @@
return i;
}
+size_t qcrypto_sg_copy_from_buffer(struct scatterlist *sgl, unsigned int nents,
+ void *buf, size_t buflen)
+{
+ int i;
+ size_t offset, len;
+
+ for (i = 0, offset = 0; i < nents; ++i) {
+ len = sg_copy_from_buffer(sgl, 1, buf, buflen);
+ buf += len;
+ buflen -= len;
+ offset += len;
+ sgl = scatterwalk_sg_next(sgl);
+ }
+
+ return offset;
+}
+
+size_t qcrypto_sg_copy_to_buffer(struct scatterlist *sgl, unsigned int nents,
+ void *buf, size_t buflen)
+{
+ int i;
+ size_t offset, len;
+
+ for (i = 0, offset = 0; i < nents; ++i) {
+ len = sg_copy_to_buffer(sgl, 1, buf, buflen);
+ buf += len;
+ buflen -= len;
+ offset += len;
+ sgl = scatterwalk_sg_next(sgl);
+ }
+
+ return offset;
+}
static struct qcrypto_alg *_qcrypto_sha_alg_alloc(struct crypto_priv *cp,
struct ahash_alg *template)
{
@@ -555,11 +615,7 @@
struct crypto_stat *pstat;
int len = 0;
- if (id < 0) {
- pr_err("Crypto id is %d, cannot be negative\n", id);
- return len;
- }
- pstat = &_qcrypto_stat[id];
+ pstat = &_qcrypto_stat;
len = snprintf(_debug_read_buf, DEBUG_MAX_RW_BUF - 1,
"\nQualcomm crypto accelerator %d Statistics:\n",
id + 1);
@@ -615,6 +671,13 @@
pstat->aead_sha1_3des_dec);
len += snprintf(_debug_read_buf + len, DEBUG_MAX_RW_BUF - len - 1,
+ " AEAD CCM-AES encryption : %d\n",
+ pstat->aead_ccm_aes_enc);
+ len += snprintf(_debug_read_buf + len, DEBUG_MAX_RW_BUF - len - 1,
+ " AEAD CCM-AES decryption : %d\n",
+ pstat->aead_ccm_aes_dec);
+
+ len += snprintf(_debug_read_buf + len, DEBUG_MAX_RW_BUF - len - 1,
" AEAD operation success : %d\n",
pstat->aead_op_success);
len += snprintf(_debug_read_buf + len, DEBUG_MAX_RW_BUF - len - 1,
@@ -832,7 +895,7 @@
uint32_t diglen = crypto_ahash_digestsize(ahash);
uint32_t *auth32 = (uint32_t *)authdata;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
#ifdef QCRYPTO_DEBUG
dev_info(&cp->pdev->dev, "_qce_ahash_complete: %p ret %d\n",
@@ -890,7 +953,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
#ifdef QCRYPTO_DEBUG
dev_info(&cp->pdev->dev, "_qce_ablk_cipher_complete: %p ret %d\n",
@@ -917,7 +980,7 @@
areq->dst = rctx->orig_dst;
num_sg = qcrypto_count_sg(areq->dst, areq->nbytes);
- bytes = sg_copy_from_buffer(areq->dst, num_sg,
+ bytes = qcrypto_sg_copy_from_buffer(areq->dst, num_sg,
rctx->data, areq->nbytes);
if (bytes != areq->nbytes)
pr_warn("bytes copied=0x%x bytes to copy= 0x%x", bytes,
@@ -941,7 +1004,7 @@
struct qcrypto_cipher_req_ctx *rctx;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(areq);
@@ -962,7 +1025,7 @@
nbytes = areq->cryptlen -
crypto_aead_authsize(aead);
num_sg = qcrypto_count_sg(areq->dst, nbytes);
- bytes = sg_copy_from_buffer(areq->dst, num_sg,
+ bytes = qcrypto_sg_copy_from_buffer(areq->dst, num_sg,
((char *)rctx->data + areq->assoclen),
nbytes);
if (bytes != nbytes)
@@ -1092,7 +1155,7 @@
qreq->assoclen = ALIGN((alen + len), 16);
num_sg = qcrypto_count_sg(sg, alen);
- bytes = sg_copy_to_buffer(sg, num_sg, adata, alen);
+ bytes = qcrypto_sg_copy_to_buffer(sg, num_sg, adata, alen);
if (bytes != alen)
pr_warn("bytes copied=0x%x bytes to copy= 0x%x", bytes, alen);
@@ -1119,7 +1182,7 @@
rctx->orig_src = req->src;
rctx->orig_dst = req->dst;
- rctx->data = kzalloc((req->nbytes + 64), GFP_KERNEL);
+ rctx->data = kzalloc((req->nbytes + 64), GFP_ATOMIC);
if (rctx->data == NULL) {
pr_err("Mem Alloc fail rctx->data, err %ld for 0x%x\n",
@@ -1127,7 +1190,7 @@
return -ENOMEM;
}
num_sg = qcrypto_count_sg(req->src, req->nbytes);
- bytes = sg_copy_to_buffer(req->src, num_sg, rctx->data,
+ bytes = qcrypto_sg_copy_to_buffer(req->src, num_sg, rctx->data,
req->nbytes);
if (bytes != req->nbytes)
pr_warn("bytes copied=0x%x bytes to copy= 0x%x", bytes,
@@ -1268,7 +1331,7 @@
rctx->orig_src = req->src;
rctx->orig_dst = req->dst;
rctx->data = kzalloc((req->cryptlen + qreq.assoclen +
- qreq.authsize + 64*2), GFP_KERNEL);
+ qreq.authsize + 64*2), GFP_ATOMIC);
if (rctx->data == NULL) {
pr_err("Mem Alloc fail rctx->data, err %ld\n",
PTR_ERR(rctx->data));
@@ -1279,7 +1342,7 @@
memcpy((char *)rctx->data, qreq.assoc, qreq.assoclen);
num_sg = qcrypto_count_sg(req->src, req->cryptlen);
- bytes = sg_copy_to_buffer(req->src, num_sg,
+ bytes = qcrypto_sg_copy_to_buffer(req->src, num_sg,
rctx->data + qreq.assoclen , req->cryptlen);
if (bytes != req->cryptlen)
pr_warn("bytes copied=0x%x bytes to copy= 0x%x",
@@ -1332,7 +1395,7 @@
int ret = 0;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
again:
spin_lock_irqsave(&cp->lock, flags);
@@ -1408,7 +1471,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1432,7 +1495,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1456,7 +1519,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1480,7 +1543,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1507,7 +1570,7 @@
(ctx->auth_key_len != AES_KEYSIZE_256))
return -EINVAL;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -1516,7 +1579,7 @@
rctx->mode = QCE_MODE_CCM;
rctx->iv = req->iv;
- pstat->aead_sha1_aes_enc++;
+ pstat->aead_ccm_aes_enc++;
return _qcrypto_queue_req(cp, &req->base);
}
@@ -1527,7 +1590,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1548,7 +1611,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1569,7 +1632,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1590,7 +1653,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1611,7 +1674,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1635,7 +1698,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1660,7 +1723,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1686,7 +1749,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1707,7 +1770,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1728,7 +1791,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1749,7 +1812,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1770,7 +1833,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
BUG_ON(crypto_tfm_alg_type(req->base.tfm) !=
CRYPTO_ALG_TYPE_ABLKCIPHER);
@@ -1798,7 +1861,7 @@
(ctx->auth_key_len != AES_KEYSIZE_256))
return -EINVAL;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -1807,7 +1870,7 @@
rctx->mode = QCE_MODE_CCM;
rctx->iv = req->iv;
- pstat->aead_sha1_aes_dec++;
+ pstat->aead_ccm_aes_dec++;
return _qcrypto_queue_req(cp, &req->base);
}
@@ -1913,7 +1976,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
#ifdef QCRYPTO_DEBUG
dev_info(&cp->pdev->dev, "_qcrypto_aead_encrypt_aes_cbc: %p\n", req);
@@ -1937,7 +2000,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
#ifdef QCRYPTO_DEBUG
dev_info(&cp->pdev->dev, "_qcrypto_aead_decrypt_aes_cbc: %p\n", req);
@@ -1962,7 +2025,7 @@
struct qcrypto_cipher_req_ctx *rctx;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(areq);
rctx->aead = 1;
@@ -1986,7 +2049,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2006,7 +2069,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2031,7 +2094,7 @@
struct qcrypto_cipher_req_ctx *rctx;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(areq);
rctx->aead = 1;
@@ -2055,7 +2118,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2075,7 +2138,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2097,7 +2160,7 @@
struct qcrypto_cipher_req_ctx *rctx;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(areq);
rctx->aead = 1;
@@ -2120,7 +2183,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2140,7 +2203,7 @@
struct crypto_priv *cp = ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(req);
rctx->aead = 1;
@@ -2162,7 +2225,7 @@
struct qcrypto_cipher_req_ctx *rctx;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
rctx = aead_request_ctx(areq);
rctx->aead = 1;
@@ -2194,10 +2257,9 @@
static int _sha1_init(struct ahash_request *req)
{
struct qcrypto_sha_ctx *sha_ctx = crypto_tfm_ctx(req->base.tfm);
- struct crypto_priv *cp = sha_ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
_sha_init(sha_ctx);
sha_ctx->alg = QCE_HASH_SHA1;
@@ -2215,10 +2277,9 @@
static int _sha256_init(struct ahash_request *req)
{
struct qcrypto_sha_ctx *sha_ctx = crypto_tfm_ctx(req->base.tfm);
- struct crypto_priv *cp = sha_ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
_sha_init(sha_ctx);
sha_ctx->alg = QCE_HASH_SHA256;
@@ -2321,7 +2382,7 @@
srctx = ahash_request_ctx(req);
srctx->orig_src = req->src;
- srctx->data = kzalloc((req->nbytes + 64), GFP_KERNEL);
+ srctx->data = kzalloc((req->nbytes + 64), GFP_ATOMIC);
if (srctx->data == NULL) {
pr_err("Mem Alloc fail rctx->data, err %ld for 0x%x\n",
PTR_ERR(srctx->data), (req->nbytes + 64));
@@ -2329,7 +2390,8 @@
}
num_sg = qcrypto_count_sg(req->src, req->nbytes);
- bytes = sg_copy_to_buffer(req->src, num_sg, srctx->data, req->nbytes);
+ bytes = qcrypto_sg_copy_to_buffer(req->src, num_sg, srctx->data,
+ req->nbytes);
if (bytes != req->nbytes)
pr_warn("bytes copied=0x%x bytes to copy= 0x%x", bytes,
req->nbytes);
@@ -2364,7 +2426,7 @@
if (total <= sha_block_size) {
k_src = &sha_ctx->trailing_buf[sha_ctx->trailing_buf_len];
num_sg = qcrypto_count_sg(req->src, len);
- bytes = sg_copy_to_buffer(req->src, num_sg, k_src, len);
+ bytes = qcrypto_sg_copy_to_buffer(req->src, num_sg, k_src, len);
sha_ctx->trailing_buf_len = total;
if (sha_ctx->alg == QCE_HASH_SHA1)
@@ -2405,13 +2467,13 @@
if (sha_ctx->trailing_buf_len) {
if (cp->ce_support.aligned_only) {
sha_ctx->sg = kzalloc(sizeof(struct scatterlist),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (sha_ctx->sg == NULL) {
pr_err("MemAlloc fail sha_ctx->sg, error %ld\n",
PTR_ERR(sha_ctx->sg));
return -ENOMEM;
}
- rctx->data2 = kzalloc((req->nbytes + 64), GFP_KERNEL);
+ rctx->data2 = kzalloc((req->nbytes + 64), GFP_ATOMIC);
if (rctx->data2 == NULL) {
pr_err("Mem Alloc fail srctx->data2, err %ld\n",
PTR_ERR(rctx->data2));
@@ -2432,7 +2494,7 @@
} else {
sg_mark_end(sg_last);
sha_ctx->sg = kzalloc(2 * (sizeof(struct scatterlist)),
- GFP_KERNEL);
+ GFP_ATOMIC);
if (sha_ctx->sg == NULL) {
pr_err("MEMalloc fail sha_ctx->sg, error %ld\n",
PTR_ERR(sha_ctx->sg));
@@ -2662,7 +2724,7 @@
struct crypto_stat *pstat;
int ret = 0;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
pstat->sha1_hmac_digest++;
_sha_init(sha_ctx);
@@ -2689,7 +2751,7 @@
struct crypto_stat *pstat;
int ret = 0;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
pstat->sha256_hmac_digest++;
_sha_init(sha_ctx);
@@ -2827,10 +2889,9 @@
static int _sha1_hmac_digest(struct ahash_request *req)
{
struct qcrypto_sha_ctx *sha_ctx = crypto_tfm_ctx(req->base.tfm);
- struct crypto_priv *cp = sha_ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
pstat->sha1_hmac_digest++;
_sha_init(sha_ctx);
@@ -2845,10 +2906,9 @@
static int _sha256_hmac_digest(struct ahash_request *req)
{
struct qcrypto_sha_ctx *sha_ctx = crypto_tfm_ctx(req->base.tfm);
- struct crypto_priv *cp = sha_ctx->cp;
struct crypto_stat *pstat;
- pstat = &_qcrypto_stat[cp->pdev->id];
+ pstat = &_qcrypto_stat;
pstat->sha256_hmac_digest++;
_sha_init(sha_ctx);
@@ -3301,12 +3361,6 @@
int i;
struct msm_ce_hw_support *platform_support;
- if (pdev->id >= MAX_CRYPTO_DEVICE) {
- pr_err("%s: device id %d exceeds allowed %d\n",
- __func__, pdev->id, MAX_CRYPTO_DEVICE);
- return -ENOENT;
- }
-
cp = kzalloc(sizeof(*cp), GFP_KERNEL);
if (!cp) {
pr_err("qcrypto Memory allocation of q_alg FAIL, error %ld\n",
@@ -3336,6 +3390,14 @@
cp->platform_support.hw_key_support = 0;
cp->platform_support.bus_scale_table = NULL;
cp->platform_support.sha_hmac = 1;
+
+ if (cp->ce_support.is_shared == false) {
+ cp->platform_support.bus_scale_table =
+ (struct msm_bus_scale_pdata *)
+ msm_bus_cl_get_pdata(pdev);
+ if (!cp->platform_support.bus_scale_table)
+ pr_warn("bus_scale_table is NULL\n");
+ }
} else {
platform_support =
(struct msm_ce_hw_support *)pdev->dev.platform_data;
@@ -3538,7 +3600,7 @@
},
};
-static int _debug_qcrypto[MAX_CRYPTO_DEVICE];
+static int _debug_qcrypto;
static int _debug_stats_open(struct inode *inode, struct file *file)
{
@@ -3565,9 +3627,7 @@
size_t count, loff_t *ppos)
{
- int qcrypto = *((int *) file->private_data);
-
- memset((char *)&_qcrypto_stat[qcrypto], 0, sizeof(struct crypto_stat));
+ memset((char *)&_qcrypto_stat, 0, sizeof(struct crypto_stat));
return count;
};
@@ -3581,7 +3641,6 @@
{
int rc;
char name[DEBUG_MAX_FNAME];
- int i;
struct dentry *dent;
_debug_dent = debugfs_create_dir("qcrypto", NULL);
@@ -3591,17 +3650,15 @@
return PTR_ERR(_debug_dent);
}
- for (i = 0; i < MAX_CRYPTO_DEVICE; i++) {
- snprintf(name, DEBUG_MAX_FNAME-1, "stats-%d", i+1);
- _debug_qcrypto[i] = i;
- dent = debugfs_create_file(name, 0644, _debug_dent,
- &_debug_qcrypto[i], &_debug_stats_ops);
- if (dent == NULL) {
- pr_err("qcrypto debugfs_create_file fail, error %ld\n",
- PTR_ERR(dent));
- rc = PTR_ERR(dent);
- goto err;
- }
+ snprintf(name, DEBUG_MAX_FNAME-1, "stats-%d", 1);
+ _debug_qcrypto = 0;
+ dent = debugfs_create_file(name, 0644, _debug_dent,
+ &_debug_qcrypto, &_debug_stats_ops);
+ if (dent == NULL) {
+ pr_err("qcrypto debugfs_create_file fail, error %ld\n",
+ PTR_ERR(dent));
+ rc = PTR_ERR(dent);
+ goto err;
}
return 0;
err:
diff --git a/drivers/crypto/msm/qcryptohw_50.h b/drivers/crypto/msm/qcryptohw_50.h
index 245d737..2210dc8 100644
--- a/drivers/crypto/msm/qcryptohw_50.h
+++ b/drivers/crypto/msm/qcryptohw_50.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -14,6 +14,10 @@
#define _DRIVERS_CRYPTO_MSM_QCRYPTOHW_50_H_
+#define CRYPTO_BAM_CNFG_BITS_REG 0x0007C
+#define CRYPTO_BAM_CD_ENABLE 27
+#define CRYPTO_BAM_CD_ENABLE_MASK (1 << CRYPTO_BAM_CD_ENABLE)
+
#define QCE_AUTH_REG_BYTE_COUNT 4
#define CRYPTO_VERSION_REG 0x1A000
@@ -28,7 +32,7 @@
#define CRYPTO_DATA_OUT3_REG 0x1A02C
#define CRYPTO_STATUS_REG 0x1A100
-#define CRYPTO_STATUS2_REG 0x1A100
+#define CRYPTO_STATUS2_REG 0x1A104
#define CRYPTO_ENGINES_AVAIL 0x1A108
#define CRYPTO_FIFO_SIZES_REG 0x1A10C
@@ -64,9 +68,9 @@
#define CRYPTO_ENCR_PIPE0_KEY2_REG 0x1E008
#define CRYPTO_ENCR_PIPE0_KEY3_REG 0x1E00C
#define CRYPTO_ENCR_PIPE0_KEY4_REG 0x1E010
-#define CRYPTO_ENCR_PIPE0_KEY5_REG 0x1E004
-#define CRYPTO_ENCR_PIPE0_KEY6_REG 0x1E008
-#define CRYPTO_ENCR_PIPE0_KEY7_REG 0x1E00C
+#define CRYPTO_ENCR_PIPE0_KEY5_REG 0x1E014
+#define CRYPTO_ENCR_PIPE0_KEY6_REG 0x1E018
+#define CRYPTO_ENCR_PIPE0_KEY7_REG 0x1E01C
#define CRYPTO_ENCR_PIPE1_KEY0_REG 0x1E020
#define CRYPTO_ENCR_PIPE1_KEY1_REG 0x1E024
@@ -185,56 +189,56 @@
#define CRYPTO_AUTH_PIPE0_KEY14_REG 0x1E838
#define CRYPTO_AUTH_PIPE0_KEY15_REG 0x1E83C
-#define CRYPTO_AUTH_PIPE1_KEY0_REG 0x1E800
-#define CRYPTO_AUTH_PIPE1_KEY1_REG 0x1E804
-#define CRYPTO_AUTH_PIPE1_KEY2_REG 0x1E808
-#define CRYPTO_AUTH_PIPE1_KEY3_REG 0x1E80C
-#define CRYPTO_AUTH_PIPE1_KEY4_REG 0x1E810
-#define CRYPTO_AUTH_PIPE1_KEY5_REG 0x1E814
-#define CRYPTO_AUTH_PIPE1_KEY6_REG 0x1E818
-#define CRYPTO_AUTH_PIPE1_KEY7_REG 0x1E81C
-#define CRYPTO_AUTH_PIPE1_KEY8_REG 0x1E820
-#define CRYPTO_AUTH_PIPE1_KEY9_REG 0x1E824
-#define CRYPTO_AUTH_PIPE1_KEY10_REG 0x1E828
-#define CRYPTO_AUTH_PIPE1_KEY11_REG 0x1E82C
-#define CRYPTO_AUTH_PIPE1_KEY12_REG 0x1E830
-#define CRYPTO_AUTH_PIPE1_KEY13_REG 0x1E834
-#define CRYPTO_AUTH_PIPE1_KEY14_REG 0x1E838
-#define CRYPTO_AUTH_PIPE1_KEY15_REG 0x1E83C
+#define CRYPTO_AUTH_PIPE1_KEY0_REG 0x1E880
+#define CRYPTO_AUTH_PIPE1_KEY1_REG 0x1E884
+#define CRYPTO_AUTH_PIPE1_KEY2_REG 0x1E888
+#define CRYPTO_AUTH_PIPE1_KEY3_REG 0x1E88C
+#define CRYPTO_AUTH_PIPE1_KEY4_REG 0x1E890
+#define CRYPTO_AUTH_PIPE1_KEY5_REG 0x1E894
+#define CRYPTO_AUTH_PIPE1_KEY6_REG 0x1E898
+#define CRYPTO_AUTH_PIPE1_KEY7_REG 0x1E89C
+#define CRYPTO_AUTH_PIPE1_KEY8_REG 0x1E8A0
+#define CRYPTO_AUTH_PIPE1_KEY9_REG 0x1E8A4
+#define CRYPTO_AUTH_PIPE1_KEY10_REG 0x1E8A8
+#define CRYPTO_AUTH_PIPE1_KEY11_REG 0x1E8AC
+#define CRYPTO_AUTH_PIPE1_KEY12_REG 0x1E8B0
+#define CRYPTO_AUTH_PIPE1_KEY13_REG 0x1E8B4
+#define CRYPTO_AUTH_PIPE1_KEY14_REG 0x1E8B8
+#define CRYPTO_AUTH_PIPE1_KEY15_REG 0x1E8BC
-#define CRYPTO_AUTH_PIPE2_KEY0_REG 0x1E840
-#define CRYPTO_AUTH_PIPE2_KEY1_REG 0x1E844
-#define CRYPTO_AUTH_PIPE2_KEY2_REG 0x1E848
-#define CRYPTO_AUTH_PIPE2_KEY3_REG 0x1E84C
-#define CRYPTO_AUTH_PIPE2_KEY4_REG 0x1E850
-#define CRYPTO_AUTH_PIPE2_KEY5_REG 0x1E854
-#define CRYPTO_AUTH_PIPE2_KEY6_REG 0x1E858
-#define CRYPTO_AUTH_PIPE2_KEY7_REG 0x1E85C
-#define CRYPTO_AUTH_PIPE2_KEY8_REG 0x1E860
-#define CRYPTO_AUTH_PIPE2_KEY9_REG 0x1E864
-#define CRYPTO_AUTH_PIPE2_KEY10_REG 0x1E868
-#define CRYPTO_AUTH_PIPE2_KEY11_REG 0x1E86C
-#define CRYPTO_AUTH_PIPE2_KEY12_REG 0x1E870
-#define CRYPTO_AUTH_PIPE2_KEY13_REG 0x1E874
-#define CRYPTO_AUTH_PIPE2_KEY14_REG 0x1E878
-#define CRYPTO_AUTH_PIPE2_KEY15_REG 0x1E87C
+#define CRYPTO_AUTH_PIPE2_KEY0_REG 0x1E900
+#define CRYPTO_AUTH_PIPE2_KEY1_REG 0x1E904
+#define CRYPTO_AUTH_PIPE2_KEY2_REG 0x1E908
+#define CRYPTO_AUTH_PIPE2_KEY3_REG 0x1E90C
+#define CRYPTO_AUTH_PIPE2_KEY4_REG 0x1E910
+#define CRYPTO_AUTH_PIPE2_KEY5_REG 0x1E914
+#define CRYPTO_AUTH_PIPE2_KEY6_REG 0x1E918
+#define CRYPTO_AUTH_PIPE2_KEY7_REG 0x1E91C
+#define CRYPTO_AUTH_PIPE2_KEY8_REG 0x1E920
+#define CRYPTO_AUTH_PIPE2_KEY9_REG 0x1E924
+#define CRYPTO_AUTH_PIPE2_KEY10_REG 0x1E928
+#define CRYPTO_AUTH_PIPE2_KEY11_REG 0x1E92C
+#define CRYPTO_AUTH_PIPE2_KEY12_REG 0x1E930
+#define CRYPTO_AUTH_PIPE2_KEY13_REG 0x1E934
+#define CRYPTO_AUTH_PIPE2_KEY14_REG 0x1E938
+#define CRYPTO_AUTH_PIPE2_KEY15_REG 0x1E93C
-#define CRYPTO_AUTH_PIPE3_KEY0_REG 0x1E880
-#define CRYPTO_AUTH_PIPE3_KEY1_REG 0x1E884
-#define CRYPTO_AUTH_PIPE3_KEY2_REG 0x1E888
-#define CRYPTO_AUTH_PIPE3_KEY3_REG 0x1E88C
-#define CRYPTO_AUTH_PIPE3_KEY4_REG 0x1E890
-#define CRYPTO_AUTH_PIPE3_KEY5_REG 0x1E894
-#define CRYPTO_AUTH_PIPE3_KEY6_REG 0x1E898
-#define CRYPTO_AUTH_PIPE3_KEY7_REG 0x1E89C
-#define CRYPTO_AUTH_PIPE3_KEY8_REG 0x1E8A0
-#define CRYPTO_AUTH_PIPE3_KEY9_REG 0x1E8A4
-#define CRYPTO_AUTH_PIPE3_KEY10_REG 0x1E8A8
-#define CRYPTO_AUTH_PIPE3_KEY11_REG 0x1E8AC
-#define CRYPTO_AUTH_PIPE3_KEY12_REG 0x1E8B0
-#define CRYPTO_AUTH_PIPE3_KEY13_REG 0x1E8B4
-#define CRYPTO_AUTH_PIPE3_KEY14_REG 0x1E8B8
-#define CRYPTO_AUTH_PIPE3_KEY15_REG 0x1E8BC
+#define CRYPTO_AUTH_PIPE3_KEY0_REG 0x1E980
+#define CRYPTO_AUTH_PIPE3_KEY1_REG 0x1E984
+#define CRYPTO_AUTH_PIPE3_KEY2_REG 0x1E988
+#define CRYPTO_AUTH_PIPE3_KEY3_REG 0x1E98C
+#define CRYPTO_AUTH_PIPE3_KEY4_REG 0x1E990
+#define CRYPTO_AUTH_PIPE3_KEY5_REG 0x1E994
+#define CRYPTO_AUTH_PIPE3_KEY6_REG 0x1E998
+#define CRYPTO_AUTH_PIPE3_KEY7_REG 0x1E99C
+#define CRYPTO_AUTH_PIPE3_KEY8_REG 0x1E9A0
+#define CRYPTO_AUTH_PIPE3_KEY9_REG 0x1E9A4
+#define CRYPTO_AUTH_PIPE3_KEY10_REG 0x1E9A8
+#define CRYPTO_AUTH_PIPE3_KEY11_REG 0x1E9AC
+#define CRYPTO_AUTH_PIPE3_KEY12_REG 0x1E9B0
+#define CRYPTO_AUTH_PIPE3_KEY13_REG 0x1E9B4
+#define CRYPTO_AUTH_PIPE3_KEY14_REG 0x1E9B8
+#define CRYPTO_AUTH_PIPE3_KEY15_REG 0x1E9BC
#define CRYPTO_AUTH_IV0_REG 0x1A310
@@ -421,6 +425,7 @@
#define CRYPTO_AUTH_ALG_AES 2
#define CRYPTO_AUTH_ALG_KASUMI 3
#define CRYPTO_AUTH_ALG_SNOW3G 4
+#define CRYPTO_AUTH_ALG_ZUC 5
/* encr_xts_du_size reg */
#define CRYPTO_ENCR_XTS_DU_SIZE 0 /* bit 19-0 */
@@ -471,33 +476,49 @@
#define CRYPTO_ENCR_KEY_SZ_3DES 1
#define CRYPTO_ENCR_KEY_SZ_AES128 0
#define CRYPTO_ENCR_KEY_SZ_AES256 2
-#define CRYPTO_ENCR_KEY_SZ_UEA1 0
-#define CRYPTO_ENCR_KEY_SZ_UEA2 1
#define CRYPTO_ENCR_ALG 0 /* bit 2-0 */
#define CRYPTO_ENCR_ALG_MASK (7 << CRYPTO_ENCR_ALG)
#define CRYPTO_ENCR_ALG_NONE 0
#define CRYPTO_ENCR_ALG_DES 1
#define CRYPTO_ENCR_ALG_AES 2
-#define CRYPTO_ENCR_ALG_KASUMI 3
+#define CRYPTO_ENCR_ALG_KASUMI 4
#define CRYPTO_ENCR_ALG_SNOW_3G 5
+#define CRYPTO_ENCR_ALG_ZUC 6
/* goproc reg */
#define CRYPTO_GO 0
#define CRYPTO_CLR_CNTXT 1
#define CRYPTO_RESULTS_DUMP 2
+/* F8 definition of CRYPTO_ENCR_CNTR1_IV1 REG */
+#define CRYPTO_CNTR1_IV1_REG_F8_PKT_CNT 16 /* bit 31 - 16 */
+#define CRYPTO_CNTR1_IV1_REG_F8_PKT_CNT_MASK \
+ (0xffff << CRYPTO_CNTR1_IV1_REG_F8_PKT_CNT)
+
+#define CRYPTO_CNTR1_IV1_REG_F8_BEARER 0 /* bit 4 - 0 */
+#define CRYPTO_CNTR1_IV1_REG_F8_BEARER_MASK \
+ (0x1f << CRYPTO_CNTR1_IV1_REG_F8_BEARER)
+
+/* F9 definition of CRYPTO_AUTH_IV4 REG */
+#define CRYPTO_AUTH_IV4_REG_F9_VALID_BIS 0 /* bit 2 - 0 */
+#define CRYPTO_AUTH_IV4_REG_F9_VALID_BIS_MASK \
+ (0x7 << CRYPTO_AUTH_IV4_REG_F9_VALID_BIS)
/* engines_avail */
#define CRYPTO_ENCR_AES_SEL 0
-#define CRYPTO_DES_SEL 3
-#define CRYPTO_ENCR_SNOW3G_SEL 4
-#define CRYPTO_ENCR_KASUMI_SEL 5
-#define CRYPTO_SHA_SEL 6
-#define CRYPTO_SHA512_SEL 7
-#define CRYPTO_AUTH_AES_SEL 8
-#define CRYPTO_AUTH_SNOW3G_SEL 9
-#define CRYPTO_AUTH_KASUMI_SEL 10
-#define CRYPTO_BAM_SEL 11
-
+#define CRYPTO_DES_SEL 1
+#define CRYPTO_ENCR_SNOW3G_SEL 2
+#define CRYPTO_ENCR_KASUMI_SEL 3
+#define CRYPTO_SHA_SEL 4
+#define CRYPTO_SHA512_SEL 5
+#define CRYPTO_AUTH_AES_SEL 6
+#define CRYPTO_AUTH_SNOW3G_SEL 7
+#define CRYPTO_AUTH_KASUMI_SEL 8
+#define CRYPTO_BAM_PIPE_SETS 9 /* bit 12 - 9 */
+#define CRYPTO_AXI_WR_BEATS 13 /* bit 18 - 13 */
+#define CRYPTO_AXI_RD_BEATS 19 /* bit 24 - 19 */
+#define CRYPTO_ENCR_ZUC_SEL 26
+#define CRYPTO_AUTH_ZUC_SEL 27
+#define CRYPTO_ZUC_ENABLE 28
#endif /* _DRIVERS_CRYPTO_MSM_QCRYPTOHW_50_H_ */
diff --git a/drivers/gpu/ion/Kconfig b/drivers/gpu/ion/Kconfig
index 39133b5..5bb254b 100644
--- a/drivers/gpu/ion/Kconfig
+++ b/drivers/gpu/ion/Kconfig
@@ -16,12 +16,3 @@
depends on ARCH_MSM && ION
help
Choose this option if you wish to use ion on an MSM target.
-
-config ION_LEAK_CHECK
- bool "Check for leaked Ion buffers (debugging)"
- depends on ION
- help
- Choose this option if you wish to enable checking for leaked
- ion buffers at runtime. Choosing this option will also add a
- debugfs node under the ion directory that can be used to
- enable/disable the leak checking.
diff --git a/drivers/gpu/ion/Makefile b/drivers/gpu/ion/Makefile
index f4f9a92..0e460c8 100644
--- a/drivers/gpu/ion/Makefile
+++ b/drivers/gpu/ion/Makefile
@@ -1,4 +1,5 @@
-obj-$(CONFIG_ION) += ion.o ion_heap.o ion_system_heap.o ion_carveout_heap.o ion_iommu_heap.o ion_cp_heap.o ion_removed_heap.o
+obj-$(CONFIG_ION) += ion.o ion_heap.o ion_page_pool.o ion_system_heap.o \
+ ion_carveout_heap.o ion_chunk_heap.o
obj-$(CONFIG_CMA) += ion_cma_heap.o ion_cma_secure_heap.o
obj-$(CONFIG_ION_TEGRA) += tegra/
-obj-$(CONFIG_ION_MSM) += msm/
+obj-$(CONFIG_ION_MSM) += ion_iommu_heap.o ion_cp_heap.o ion_removed_heap.o msm/
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index a01ef3f..250b387 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -1,4 +1,5 @@
/*
+
* drivers/gpu/ion/ion.c
*
* Copyright (C) 2011 Google, Inc.
@@ -15,18 +16,21 @@
*
*/
-#include <linux/module.h>
#include <linux/device.h>
#include <linux/file.h>
+#include <linux/freezer.h>
#include <linux/fs.h>
#include <linux/anon_inodes.h>
#include <linux/ion.h>
+#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/memblock.h>
#include <linux/miscdevice.h>
+#include <linux/export.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/rbtree.h>
+#include <linux/rtmutex.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
@@ -37,22 +41,23 @@
#include <trace/events/kmem.h>
-#include <mach/iommu_domains.h>
#include "ion_priv.h"
/**
* struct ion_device - the metadata of the ion device node
* @dev: the actual misc device
- * @buffers: an rb tree of all the existing buffers
- * @lock: lock protecting the buffers & heaps trees
+ * @buffers: an rb tree of all the existing buffers
+ * @buffer_lock: lock protecting the tree of buffers
+ * @lock: rwsem protecting the tree of heaps and clients
* @heaps: list of all the heaps in the system
* @user_clients: list of all the clients created from userspace
*/
struct ion_device {
struct miscdevice dev;
struct rb_root buffers;
- struct mutex lock;
- struct rb_root heaps;
+ struct mutex buffer_lock;
+ struct rw_semaphore lock;
+ struct plist_head heaps;
long (*custom_ioctl) (struct ion_client *client, unsigned int cmd,
unsigned long arg);
struct rb_root clients;
@@ -65,7 +70,6 @@
* @dev: backpointer to ion device
* @handles: an rb tree of all the handles in this client
* @lock: lock protecting the tree of handles
- * @heap_mask: mask of all supported heaps
* @name: used for debugging
* @task: used for debugging
*
@@ -78,7 +82,6 @@
struct ion_device *dev;
struct rb_root handles;
struct mutex lock;
- unsigned int heap_mask;
char *name;
struct task_struct *task;
pid_t pid;
@@ -103,7 +106,6 @@
struct ion_buffer *buffer;
struct rb_node node;
unsigned int kmap_cnt;
- unsigned int iommu_map_cnt;
};
bool ion_buffer_fault_user_mappings(struct ion_buffer *buffer)
@@ -112,6 +114,11 @@
!(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC));
}
+bool ion_buffer_cached(struct ion_buffer *buffer)
+{
+ return !!(buffer->flags & ION_FLAG_CACHED);
+}
+
/* this function should only be called while dev->lock is held */
static void ion_buffer_add(struct ion_device *dev,
struct ion_buffer *buffer)
@@ -140,6 +147,7 @@
static int ion_buffer_alloc_dirty(struct ion_buffer *buffer);
+static bool ion_heap_drain_freelist(struct ion_heap *heap);
/* this function should only be called while dev->lock is held */
static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
struct ion_device *dev,
@@ -161,9 +169,16 @@
kref_init(&buffer->ref);
ret = heap->ops->allocate(heap, buffer, len, align, flags);
+
if (ret) {
- kfree(buffer);
- return ERR_PTR(ret);
+ if (!(heap->flags & ION_HEAP_FLAG_DEFER_FREE))
+ goto err2;
+
+ ion_heap_drain_freelist(heap);
+ ret = heap->ops->allocate(heap, buffer, len, align,
+ flags);
+ if (ret)
+ goto err2;
}
buffer->dev = dev;
@@ -208,12 +223,15 @@
if (sg_dma_address(sg) == 0)
sg_dma_address(sg) = sg_phys(sg);
}
+ mutex_lock(&dev->buffer_lock);
ion_buffer_add(dev, buffer);
+ mutex_unlock(&dev->buffer_lock);
return buffer;
err:
heap->ops->unmap_dma(heap, buffer);
heap->ops->free(buffer);
+err2:
kfree(buffer);
return ERR_PTR(ret);
}
@@ -224,25 +242,39 @@
buffer->heap->ops->unsecure_buffer(buffer, 1);
}
-static void ion_buffer_destroy(struct kref *kref)
+static void _ion_buffer_destroy(struct ion_buffer *buffer)
{
- struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
- struct ion_device *dev = buffer->dev;
-
if (WARN_ON(buffer->kmap_cnt > 0))
buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
buffer->heap->ops->unmap_dma(buffer->heap, buffer);
ion_delayed_unsecure(buffer);
buffer->heap->ops->free(buffer);
- mutex_lock(&dev->lock);
- rb_erase(&buffer->node, &dev->buffers);
- mutex_unlock(&dev->lock);
if (buffer->flags & ION_FLAG_CACHED)
kfree(buffer->dirty);
kfree(buffer);
}
+static void ion_buffer_destroy(struct kref *kref)
+{
+ struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
+ struct ion_heap *heap = buffer->heap;
+ struct ion_device *dev = buffer->dev;
+
+ mutex_lock(&dev->buffer_lock);
+ rb_erase(&buffer->node, &dev->buffers);
+ mutex_unlock(&dev->buffer_lock);
+
+ if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+ rt_mutex_lock(&heap->lock);
+ list_add(&buffer->list, &heap->free_list);
+ rt_mutex_unlock(&heap->lock);
+ wake_up(&heap->waitqueue);
+ return;
+ }
+ _ion_buffer_destroy(buffer);
+}
+
static void ion_buffer_get(struct ion_buffer *buffer)
{
kref_get(&buffer->ref);
@@ -253,6 +285,37 @@
return kref_put(&buffer->ref, ion_buffer_destroy);
}
+static void ion_buffer_add_to_handle(struct ion_buffer *buffer)
+{
+ mutex_lock(&buffer->lock);
+ buffer->handle_count++;
+ mutex_unlock(&buffer->lock);
+}
+
+static void ion_buffer_remove_from_handle(struct ion_buffer *buffer)
+{
+ /*
+ * when a buffer is removed from a handle, if it is not in
+ * any other handles, copy the taskcomm and the pid of the
+ * process it's being removed from into the buffer. At this
+ * point there will be no way to track what processes this buffer is
+ * being used by, it only exists as a dma_buf file descriptor.
+ * The taskcomm and pid can provide a debug hint as to where this fd
+ * is in the system
+ */
+ mutex_lock(&buffer->lock);
+ buffer->handle_count--;
+ BUG_ON(buffer->handle_count < 0);
+ if (!buffer->handle_count) {
+ struct task_struct *task;
+
+ task = current->group_leader;
+ get_task_comm(buffer->task_comm, task);
+ buffer->pid = task_pid_nr(task);
+ }
+ mutex_unlock(&buffer->lock);
+}
+
static struct ion_handle *ion_handle_create(struct ion_client *client,
struct ion_buffer *buffer)
{
@@ -265,6 +328,7 @@
rb_init_node(&handle->node);
handle->client = client;
ion_buffer_get(buffer);
+ ion_buffer_add_to_handle(buffer);
handle->buffer = buffer;
return handle;
@@ -286,7 +350,9 @@
if (!RB_EMPTY_NODE(&handle->node))
rb_erase(&handle->node, &client->handles);
+ ion_buffer_remove_from_handle(buffer);
ion_buffer_put(buffer);
+
kfree(handle);
}
@@ -359,13 +425,13 @@
}
struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
- size_t align, unsigned int heap_mask,
+ size_t align, unsigned int heap_id_mask,
unsigned int flags)
{
- struct rb_node *n;
struct ion_handle *handle;
struct ion_device *dev = client->dev;
struct ion_buffer *buffer = NULL;
+ struct ion_heap *heap;
unsigned long secure_allocation = flags & ION_FLAG_SECURE;
const unsigned int MAX_DBG_STR_LEN = 64;
char dbg_str[MAX_DBG_STR_LEN];
@@ -381,8 +447,8 @@
*/
flags |= ION_FLAG_CACHED_NEEDS_SYNC;
- pr_debug("%s: len %d align %d heap_mask %u flags %x\n", __func__, len,
- align, heap_mask, flags);
+ pr_debug("%s: len %d align %d heap_id_mask %u flags %x\n", __func__,
+ len, align, heap_id_mask, flags);
/*
* traverse the list of heaps available in this system in priority
* order. If the heap type is supported by the client, and matches the
@@ -394,29 +460,26 @@
len = PAGE_ALIGN(len);
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
- /* if the client doesn't support this heap type */
- if (!((1 << heap->type) & client->heap_mask))
- continue;
- /* if the caller didn't specify this heap type */
- if (!((1 << heap->id) & heap_mask))
+ down_read(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
+ /* if the caller didn't specify this heap id */
+ if (!((1 << heap->id) & heap_id_mask))
continue;
/* Do not allow un-secure heap if secure is specified */
if (secure_allocation &&
!ion_heap_allow_secure_allocation(heap->type))
continue;
trace_ion_alloc_buffer_start(client->name, heap->name, len,
- heap_mask, flags);
+ heap_id_mask, flags);
buffer = ion_buffer_create(heap, dev, len, align, flags);
trace_ion_alloc_buffer_end(client->name, heap->name, len,
- heap_mask, flags);
+ heap_id_mask, flags);
if (!IS_ERR_OR_NULL(buffer))
break;
trace_ion_alloc_buffer_fallback(client->name, heap->name, len,
- heap_mask, flags, PTR_ERR(buffer));
+ heap_id_mask, flags,
+ PTR_ERR(buffer));
if (dbg_str_idx < MAX_DBG_STR_LEN) {
unsigned int len_left = MAX_DBG_STR_LEN-dbg_str_idx-1;
int ret_value = snprintf(&dbg_str[dbg_str_idx],
@@ -433,21 +496,21 @@
}
}
}
- mutex_unlock(&dev->lock);
+ up_read(&dev->lock);
if (buffer == NULL) {
trace_ion_alloc_buffer_fail(client->name, dbg_str, len,
- heap_mask, flags, -ENODEV);
+ heap_id_mask, flags, -ENODEV);
return ERR_PTR(-ENODEV);
}
if (IS_ERR(buffer)) {
trace_ion_alloc_buffer_fail(client->name, dbg_str, len,
- heap_mask, flags, PTR_ERR(buffer));
+ heap_id_mask, flags,
+ PTR_ERR(buffer));
pr_debug("ION is unable to allocate 0x%x bytes (alignment: "
- "0x%x) from heap(s) %sfor client %s with heap "
- "mask 0x%x\n",
- len, align, dbg_str, client->name, client->heap_mask);
+ "0x%x) from heap(s) %sfor client %s\n",
+ len, align, dbg_str, client->name);
return ERR_PTR(PTR_ERR(buffer));
}
@@ -479,8 +542,8 @@
mutex_lock(&client->lock);
valid_handle = ion_handle_validate(client, handle);
if (!valid_handle) {
- mutex_unlock(&client->lock);
WARN(1, "%s: invalid handle passed to free.\n", __func__);
+ mutex_unlock(&client->lock);
return;
}
ion_handle_put(handle);
@@ -612,15 +675,14 @@
struct ion_client *client = s->private;
struct rb_node *n;
- seq_printf(s, "%16.16s: %16.16s : %16.16s : %12.12s : %12.12s : %s\n",
+ seq_printf(s, "%16.16s: %16.16s : %16.16s : %12.12s\n",
"heap_name", "size_in_bytes", "handle refcount",
- "buffer", "physical", "[domain,partition] - virt");
+ "buffer");
mutex_lock(&client->lock);
for (n = rb_first(&client->handles); n; n = rb_next(n)) {
struct ion_handle *handle = rb_entry(n, struct ion_handle,
node);
- enum ion_heap_type type = handle->buffer->heap->type;
seq_printf(s, "%16.16s: %16x : %16d : %12p",
handle->buffer->heap->name,
@@ -628,17 +690,9 @@
atomic_read(&handle->ref.refcount),
handle->buffer);
- if (type == ION_HEAP_TYPE_SYSTEM_CONTIG ||
- type == ION_HEAP_TYPE_CARVEOUT ||
- type == (enum ion_heap_type) ION_HEAP_TYPE_CP)
- seq_printf(s, " : %12pa", &handle->buffer->priv_phys);
- else
- seq_printf(s, " : %12s", "N/A");
-
seq_printf(s, "\n");
}
mutex_unlock(&client->lock);
-
return 0;
}
@@ -655,7 +709,6 @@
};
struct ion_client *ion_client_create(struct ion_device *dev,
- unsigned int heap_mask,
const char *name)
{
struct ion_client *client;
@@ -705,11 +758,10 @@
strlcpy(client->name, name, name_len+1);
}
- client->heap_mask = heap_mask;
client->task = task;
client->pid = pid;
- mutex_lock(&dev->lock);
+ down_write(&dev->lock);
p = &dev->clients.rb_node;
while (*p) {
parent = *p;
@@ -727,96 +779,16 @@
client->debug_root = debugfs_create_file(name, 0664,
dev->debug_root, client,
&debug_client_fops);
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return client;
}
-
-/**
- * ion_mark_dangling_buffers_locked() - Mark dangling buffers
- * @dev: the ion device whose buffers will be searched
- *
- * Sets marked=1 for all known buffers associated with `dev' that no
- * longer have a handle pointing to them. dev->lock should be held
- * across a call to this function (and should only be unlocked after
- * checking for marked buffers).
- */
-static void ion_mark_dangling_buffers_locked(struct ion_device *dev)
-{
- struct rb_node *n, *n2;
- /* mark all buffers as 1 */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- buf->marked = 1;
- }
-
- /* now see which buffers we can access */
- for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
- struct ion_client *client = rb_entry(n, struct ion_client,
- node);
-
- mutex_lock(&client->lock);
- for (n2 = rb_first(&client->handles); n2; n2 = rb_next(n2)) {
- struct ion_handle *handle
- = rb_entry(n2, struct ion_handle, node);
-
- handle->buffer->marked = 0;
-
- }
- mutex_unlock(&client->lock);
-
- }
-}
-
-#ifdef CONFIG_ION_LEAK_CHECK
-static u32 ion_debug_check_leaks_on_destroy;
-
-static int ion_check_for_and_print_leaks(struct ion_device *dev)
-{
- struct rb_node *n;
- int num_leaks = 0;
-
- if (!ion_debug_check_leaks_on_destroy)
- return 0;
-
- /* check for leaked buffers (those that no longer have a
- * handle pointing to them) */
- ion_mark_dangling_buffers_locked(dev);
-
- /* Anyone still marked as a 1 means a leaked handle somewhere */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- if (buf->marked == 1) {
- pr_info("Leaked ion buffer at %p\n", buf);
- num_leaks++;
- }
- }
- return num_leaks;
-}
-static void setup_ion_leak_check(struct dentry *debug_root)
-{
- debugfs_create_bool("check_leaks_on_destroy", 0664, debug_root,
- &ion_debug_check_leaks_on_destroy);
-}
-#else
-static int ion_check_for_and_print_leaks(struct ion_device *dev)
-{
- return 0;
-}
-static void setup_ion_leak_check(struct dentry *debug_root)
-{
-}
-#endif
+EXPORT_SYMBOL(ion_client_create);
void ion_client_destroy(struct ion_client *client)
{
struct ion_device *dev = client->dev;
struct rb_node *n;
- int num_leaks;
pr_debug("%s: %d\n", __func__, __LINE__);
while ((n = rb_first(&client->handles))) {
@@ -824,25 +796,13 @@
node);
ion_handle_destroy(&handle->ref);
}
- mutex_lock(&dev->lock);
+ down_write(&dev->lock);
if (client->task)
put_task_struct(client->task);
rb_erase(&client->node, &dev->clients);
debugfs_remove_recursive(client->debug_root);
- num_leaks = ion_check_for_and_print_leaks(dev);
-
- mutex_unlock(&dev->lock);
-
- if (num_leaks) {
- struct task_struct *current_task = current;
- char current_task_name[TASK_COMM_LEN];
- get_task_comm(current_task_name, current_task);
- WARN(1, "%s: Detected %d leaked ion buffer%s.\n",
- __func__, num_leaks, num_leaks == 1 ? "" : "s");
- pr_info("task name at time of leak: %s, pid: %d\n",
- current_task_name, current_task->pid);
- }
+ up_write(&dev->lock);
kfree(client->name);
kfree(client);
@@ -1115,7 +1075,7 @@
static void *ion_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
{
struct ion_buffer *buffer = dmabuf->priv;
- return buffer->vaddr + offset;
+ return buffer->vaddr + offset * PAGE_SIZE;
}
static void ion_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
@@ -1171,19 +1131,19 @@
.kunmap = ion_dma_buf_kunmap,
};
-int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle)
+struct dma_buf *ion_share_dma_buf(struct ion_client *client,
+ struct ion_handle *handle)
{
struct ion_buffer *buffer;
struct dma_buf *dmabuf;
bool valid_handle;
- int fd;
mutex_lock(&client->lock);
valid_handle = ion_handle_validate(client, handle);
mutex_unlock(&client->lock);
if (!valid_handle) {
WARN(1, "%s: invalid handle passed to share.\n", __func__);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
buffer = handle->buffer;
@@ -1191,15 +1151,29 @@
dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR);
if (IS_ERR(dmabuf)) {
ion_buffer_put(buffer);
- return PTR_ERR(dmabuf);
+ return dmabuf;
}
+
+ return dmabuf;
+}
+EXPORT_SYMBOL(ion_share_dma_buf);
+
+int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handle)
+{
+ struct dma_buf *dmabuf;
+ int fd;
+
+ dmabuf = ion_share_dma_buf(client, handle);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
fd = dma_buf_fd(dmabuf, O_CLOEXEC);
if (fd < 0)
dma_buf_put(dmabuf);
return fd;
}
-EXPORT_SYMBOL(ion_share_dma_buf);
+EXPORT_SYMBOL(ion_share_dma_buf_fd);
struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd)
{
@@ -1301,14 +1275,15 @@
ion_free(client, data.handle);
break;
}
- case ION_IOC_MAP:
case ION_IOC_SHARE:
+ case ION_IOC_MAP:
{
struct ion_fd_data data;
if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
return -EFAULT;
- data.fd = ion_share_dma_buf(client, data.handle);
+ data.fd = ion_share_dma_buf_fd(client, data.handle);
+
if (copy_to_user((void __user *)arg, &data, sizeof(data)))
return -EFAULT;
if (data.fd < 0)
@@ -1388,7 +1363,7 @@
pr_debug("%s: %d\n", __func__, __LINE__);
snprintf(debug_name, 64, "%u", task_pid_nr(current->group_leader));
- client = ion_client_create(dev, -1, debug_name);
+ client = ion_client_create(dev, debug_name);
if (IS_ERR_OR_NULL(client))
return PTR_ERR(client);
file->private_data = client;
@@ -1404,7 +1379,7 @@
};
static size_t ion_debug_heap_total(struct ion_client *client,
- enum ion_heap_ids id)
+ unsigned int id)
{
size_t size = 0;
struct rb_node *n;
@@ -1570,9 +1545,11 @@
struct ion_heap *heap = s->private;
struct ion_device *dev = heap->dev;
struct rb_node *n;
+ size_t total_size = 0;
+ size_t total_orphaned_size = 0;
- mutex_lock(&dev->lock);
seq_printf(s, "%16.s %16.s %16.s\n", "client", "pid", "size");
+ seq_printf(s, "----------------------------------------------------\n");
for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
struct ion_client *client = rb_entry(n, struct ion_client,
@@ -1591,8 +1568,34 @@
client->pid, size);
}
}
+ seq_printf(s, "----------------------------------------------------\n");
+ seq_printf(s, "orphaned allocations (info is from last known client):"
+ "\n");
+ mutex_lock(&dev->buffer_lock);
+ for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
+ struct ion_buffer *buffer = rb_entry(n, struct ion_buffer,
+ node);
+ if (buffer->heap->id != heap->id)
+ continue;
+ total_size += buffer->size;
+ if (!buffer->handle_count) {
+ seq_printf(s, "%16.s %16u %16u %d %d\n", buffer->task_comm,
+ buffer->pid, buffer->size, buffer->kmap_cnt,
+ atomic_read(&buffer->ref.refcount));
+ total_orphaned_size += buffer->size;
+ }
+ }
+ mutex_unlock(&dev->buffer_lock);
+ seq_printf(s, "----------------------------------------------------\n");
+ seq_printf(s, "%16.s %16u\n", "total orphaned",
+ total_orphaned_size);
+ seq_printf(s, "%16.s %16u\n", "total ", total_size);
+ seq_printf(s, "----------------------------------------------------\n");
+
+ if (heap->debug_show)
+ heap->debug_show(heap, s, unused);
+
ion_heap_print_debug(s, heap);
- mutex_unlock(&dev->lock);
return 0;
}
@@ -1608,40 +1611,90 @@
.release = single_release,
};
+static size_t ion_heap_free_list_is_empty(struct ion_heap *heap)
+{
+ bool is_empty;
+
+ rt_mutex_lock(&heap->lock);
+ is_empty = list_empty(&heap->free_list);
+ rt_mutex_unlock(&heap->lock);
+
+ return is_empty;
+}
+
+static int ion_heap_deferred_free(void *data)
+{
+ struct ion_heap *heap = data;
+
+ while (true) {
+ struct ion_buffer *buffer;
+
+ wait_event_freezable(heap->waitqueue,
+ !ion_heap_free_list_is_empty(heap));
+
+ rt_mutex_lock(&heap->lock);
+ if (list_empty(&heap->free_list)) {
+ rt_mutex_unlock(&heap->lock);
+ continue;
+ }
+ buffer = list_first_entry(&heap->free_list, struct ion_buffer,
+ list);
+ list_del(&buffer->list);
+ rt_mutex_unlock(&heap->lock);
+ _ion_buffer_destroy(buffer);
+ }
+
+ return 0;
+}
+
+static bool ion_heap_drain_freelist(struct ion_heap *heap)
+{
+ struct ion_buffer *buffer, *tmp;
+
+ if (ion_heap_free_list_is_empty(heap))
+ return false;
+ rt_mutex_lock(&heap->lock);
+ list_for_each_entry_safe(buffer, tmp, &heap->free_list, list) {
+ list_del(&buffer->list);
+ _ion_buffer_destroy(buffer);
+ }
+ BUG_ON(!list_empty(&heap->free_list));
+ rt_mutex_unlock(&heap->lock);
+
+
+ return true;
+}
+
void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap)
{
- struct rb_node **p = &dev->heaps.rb_node;
- struct rb_node *parent = NULL;
- struct ion_heap *entry;
+ struct sched_param param = { .sched_priority = 0 };
if (!heap->ops->allocate || !heap->ops->free || !heap->ops->map_dma ||
!heap->ops->unmap_dma)
pr_err("%s: can not add heap with invalid ops struct.\n",
__func__);
- heap->dev = dev;
- mutex_lock(&dev->lock);
- while (*p) {
- parent = *p;
- entry = rb_entry(parent, struct ion_heap, node);
-
- if (heap->id < entry->id) {
- p = &(*p)->rb_left;
- } else if (heap->id > entry->id ) {
- p = &(*p)->rb_right;
- } else {
- pr_err("%s: can not insert multiple heaps with "
- "id %d\n", __func__, heap->id);
- goto end;
- }
+ if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+ INIT_LIST_HEAD(&heap->free_list);
+ rt_mutex_init(&heap->lock);
+ init_waitqueue_head(&heap->waitqueue);
+ heap->task = kthread_run(ion_heap_deferred_free, heap,
+ "%s", heap->name);
+ sched_setscheduler(heap->task, SCHED_IDLE, ¶m);
+ if (IS_ERR(heap->task))
+ pr_err("%s: creating thread for deferred free failed\n",
+ __func__);
}
- rb_link_node(&heap->node, parent, p);
- rb_insert_color(&heap->node, &dev->heaps);
+ heap->dev = dev;
+ down_write(&dev->lock);
+ /* use negative heap->id to reverse the priority -- when traversing
+ the list later attempt higher id numbers first */
+ plist_node_init(&heap->node, -heap->id);
+ plist_add(&heap->node, &dev->heaps);
debugfs_create_file(heap->name, 0664, dev->debug_root, heap,
&debug_heap_fops);
-end:
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
}
int ion_secure_handle(struct ion_client *client, struct ion_handle *handle,
@@ -1714,16 +1767,15 @@
int ion_secure_heap(struct ion_device *dev, int heap_id, int version,
void *data)
{
- struct rb_node *n;
int ret_val = 0;
+ struct ion_heap *heap;
/*
* traverse the list of heaps available in this system
* and find the heap that is specified.
*/
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
+ down_write(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
if (!ion_heap_allow_heap_secure(heap->type))
continue;
if (ION_HEAP(heap->id) != heap_id)
@@ -1734,7 +1786,7 @@
ret_val = -EINVAL;
break;
}
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return ret_val;
}
EXPORT_SYMBOL(ion_secure_heap);
@@ -1742,16 +1794,15 @@
int ion_unsecure_heap(struct ion_device *dev, int heap_id, int version,
void *data)
{
- struct rb_node *n;
int ret_val = 0;
+ struct ion_heap *heap;
/*
* traverse the list of heaps available in this system
* and find the heap that is specified.
*/
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
+ down_write(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
if (!ion_heap_allow_heap_secure(heap->type))
continue;
if (ION_HEAP(heap->id) != heap_id)
@@ -1762,50 +1813,11 @@
ret_val = -EINVAL;
break;
}
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return ret_val;
}
EXPORT_SYMBOL(ion_unsecure_heap);
-static int ion_debug_leak_show(struct seq_file *s, void *unused)
-{
- struct ion_device *dev = s->private;
- struct rb_node *n;
-
- seq_printf(s, "%16.s %16.s %16.s %16.s\n", "buffer", "heap", "size",
- "ref cnt");
-
- mutex_lock(&dev->lock);
- ion_mark_dangling_buffers_locked(dev);
-
- /* Anyone still marked as a 1 means a leaked handle somewhere */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- if (buf->marked == 1)
- seq_printf(s, "%16.x %16.s %16.x %16.d\n",
- (int)buf, buf->heap->name, buf->size,
- atomic_read(&buf->ref.refcount));
- }
- mutex_unlock(&dev->lock);
- return 0;
-}
-
-static int ion_debug_leak_open(struct inode *inode, struct file *file)
-{
- return single_open(file, ion_debug_leak_show, inode->i_private);
-}
-
-static const struct file_operations debug_leak_fops = {
- .open = ion_debug_leak_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
-};
-
-
-
struct ion_device *ion_device_create(long (*custom_ioctl)
(struct ion_client *client,
unsigned int cmd,
@@ -1834,13 +1846,10 @@
idev->custom_ioctl = custom_ioctl;
idev->buffers = RB_ROOT;
- mutex_init(&idev->lock);
- idev->heaps = RB_ROOT;
+ mutex_init(&idev->buffer_lock);
+ init_rwsem(&idev->lock);
+ plist_head_init(&idev->heaps);
idev->clients = RB_ROOT;
- debugfs_create_file("check_leaked_fds", 0664, idev->debug_root, idev,
- &debug_leak_fops);
-
- setup_ion_leak_check(idev->debug_root);
return idev;
}
@@ -1853,16 +1862,35 @@
void __init ion_reserve(struct ion_platform_data *data)
{
- int i, ret;
+ int i;
for (i = 0; i < data->nr; i++) {
if (data->heaps[i].size == 0)
continue;
- ret = memblock_reserve(data->heaps[i].base,
- data->heaps[i].size);
- if (ret)
- pr_err("memblock reserve of %x@%pa failed\n",
- data->heaps[i].size,
- &data->heaps[i].base);
+
+ if (data->heaps[i].base == 0) {
+ phys_addr_t paddr;
+ paddr = memblock_alloc_base(data->heaps[i].size,
+ data->heaps[i].align,
+ MEMBLOCK_ALLOC_ANYWHERE);
+ if (!paddr) {
+ pr_err("%s: error allocating memblock for "
+ "heap %d\n",
+ __func__, i);
+ continue;
+ }
+ data->heaps[i].base = paddr;
+ } else {
+ int ret = memblock_reserve(data->heaps[i].base,
+ data->heaps[i].size);
+ if (ret)
+ pr_err("memblock reserve of %x@%pa failed\n",
+ data->heaps[i].size,
+ &data->heaps[i].base);
+ }
+ pr_info("%s: %s reserved base %pa size %d\n", __func__,
+ data->heaps[i].name,
+ &data->heaps[i].base,
+ data->heaps[i].size);
}
}
diff --git a/drivers/gpu/ion/ion_carveout_heap.c b/drivers/gpu/ion/ion_carveout_heap.c
index 0dd3054..08921299 100644
--- a/drivers/gpu/ion/ion_carveout_heap.c
+++ b/drivers/gpu/ion/ion_carveout_heap.c
@@ -37,7 +37,6 @@
ion_phys_addr_t base;
unsigned long allocated_bytes;
unsigned long total_size;
- unsigned int has_outer_cache;
};
ion_phys_addr_t ion_carveout_allocate(struct ion_heap *heap,
@@ -254,7 +253,6 @@
carveout_heap->heap.type = ION_HEAP_TYPE_CARVEOUT;
carveout_heap->allocated_bytes = 0;
carveout_heap->total_size = heap_data->size;
- carveout_heap->has_outer_cache = heap_data->has_outer_cache;
return &carveout_heap->heap;
}
diff --git a/drivers/gpu/ion/ion_chunk_heap.c b/drivers/gpu/ion/ion_chunk_heap.c
new file mode 100644
index 0000000..b76f898
--- /dev/null
+++ b/drivers/gpu/ion/ion_chunk_heap.c
@@ -0,0 +1,180 @@
+/*
+ * drivers/gpu/ion/ion_chunk_heap.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+//#include <linux/spinlock.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/genalloc.h>
+#include <linux/io.h>
+#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "ion_priv.h"
+
+#include <asm/mach/map.h>
+
+struct ion_chunk_heap {
+ struct ion_heap heap;
+ struct gen_pool *pool;
+ ion_phys_addr_t base;
+ unsigned long chunk_size;
+ unsigned long size;
+ unsigned long allocated;
+};
+
+static int ion_chunk_heap_allocate(struct ion_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long size, unsigned long align,
+ unsigned long flags)
+{
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+ struct sg_table *table;
+ struct scatterlist *sg;
+ int ret, i;
+ unsigned long num_chunks;
+
+ if (ion_buffer_fault_user_mappings(buffer))
+ return -ENOMEM;
+
+ num_chunks = ALIGN(size, chunk_heap->chunk_size) /
+ chunk_heap->chunk_size;
+ buffer->size = num_chunks * chunk_heap->chunk_size;
+
+ if (buffer->size > chunk_heap->size - chunk_heap->allocated)
+ return -ENOMEM;
+
+ table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
+ if (!table)
+ return -ENOMEM;
+ ret = sg_alloc_table(table, num_chunks, GFP_KERNEL);
+ if (ret) {
+ kfree(table);
+ return ret;
+ }
+
+ sg = table->sgl;
+ for (i = 0; i < num_chunks; i++) {
+ unsigned long paddr = gen_pool_alloc(chunk_heap->pool,
+ chunk_heap->chunk_size);
+ if (!paddr)
+ goto err;
+ sg_set_page(sg, phys_to_page(paddr), chunk_heap->chunk_size, 0);
+ sg = sg_next(sg);
+ }
+
+ buffer->priv_virt = table;
+ chunk_heap->allocated += buffer->size;
+ return 0;
+err:
+ sg = table->sgl;
+ for (i -= 1; i >= 0; i--) {
+ gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+ sg_dma_len(sg));
+ sg = sg_next(sg);
+ }
+ sg_free_table(table);
+ kfree(table);
+ return -ENOMEM;
+}
+
+static void ion_chunk_heap_free(struct ion_buffer *buffer)
+{
+ struct ion_heap *heap = buffer->heap;
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+ struct sg_table *table = buffer->priv_virt;
+ struct scatterlist *sg;
+ int i;
+
+ ion_heap_buffer_zero(buffer);
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ if (ion_buffer_cached(buffer))
+ dma_sync_sg_for_device(NULL, sg, 1, DMA_BIDIRECTIONAL);
+ gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+ sg_dma_len(sg));
+ }
+ chunk_heap->allocated -= buffer->size;
+ sg_free_table(table);
+ kfree(table);
+}
+
+struct sg_table *ion_chunk_heap_map_dma(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ return buffer->priv_virt;
+}
+
+void ion_chunk_heap_unmap_dma(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ return;
+}
+
+static struct ion_heap_ops chunk_heap_ops = {
+ .allocate = ion_chunk_heap_allocate,
+ .free = ion_chunk_heap_free,
+ .map_dma = ion_chunk_heap_map_dma,
+ .unmap_dma = ion_chunk_heap_unmap_dma,
+ .map_user = ion_heap_map_user,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
+};
+
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
+{
+ struct ion_chunk_heap *chunk_heap;
+ struct scatterlist sg;
+
+ chunk_heap = kzalloc(sizeof(struct ion_chunk_heap), GFP_KERNEL);
+ if (!chunk_heap)
+ return ERR_PTR(-ENOMEM);
+
+ chunk_heap->chunk_size = (unsigned long)heap_data->priv;
+ chunk_heap->pool = gen_pool_create(get_order(chunk_heap->chunk_size) +
+ PAGE_SHIFT, -1);
+ if (!chunk_heap->pool) {
+ kfree(chunk_heap);
+ return ERR_PTR(-ENOMEM);
+ }
+ chunk_heap->base = heap_data->base;
+ chunk_heap->size = heap_data->size;
+ chunk_heap->allocated = 0;
+
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, phys_to_page(heap_data->base), heap_data->size, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+ gen_pool_add(chunk_heap->pool, chunk_heap->base, heap_data->size, -1);
+ chunk_heap->heap.ops = &chunk_heap_ops;
+ chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
+ chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+ pr_info("%s: base %pa size %zd align %pa\n", __func__,
+ &chunk_heap->base, heap_data->size, &heap_data->align);
+
+ return &chunk_heap->heap;
+}
+
+void ion_chunk_heap_destroy(struct ion_heap *heap)
+{
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+
+ gen_pool_destroy(chunk_heap->pool);
+ kfree(chunk_heap);
+ chunk_heap = NULL;
+}
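The chunk heap added above carves a reserved, physically contiguous region into
fixed-size chunks; the chunk size is passed through the platform heap's priv
field and the region itself through base/size. A minimal, hypothetical
platform-data entry is sketched below -- the id, name, base and sizes are
illustration values only, not part of this patch:

    static struct ion_platform_heap example_chunk_heap = {
    	.type = ION_HEAP_TYPE_CHUNK,
    	.id   = 25,			/* hypothetical heap id */
    	.name = "example_chunk_heap",
    	.base = 0x80000000,		/* hypothetical reserved base */
    	.size = SZ_16M,			/* carveout size */
    	.priv = (void *)SZ_64K,		/* chunk size, read back in ion_chunk_heap_create() */
    };

ion_heap_create(), extended later in this patch with an ION_HEAP_TYPE_CHUNK
case, would route such an entry to ion_chunk_heap_create(), which sizes its
gen_pool to the chunk size and hands out whole chunks, one per scatterlist
entry.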
diff --git a/drivers/gpu/ion/ion_cma_heap.c b/drivers/gpu/ion/ion_cma_heap.c
index f64ad4d..193f4d4 100644
--- a/drivers/gpu/ion/ion_cma_heap.c
+++ b/drivers/gpu/ion/ion_cma_heap.c
@@ -47,7 +47,7 @@
int ion_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size)
{
- struct page *page = virt_to_page(cpu_addr);
+ struct page *page = phys_to_page(handle);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
diff --git a/drivers/gpu/ion/ion_cma_secure_heap.c b/drivers/gpu/ion/ion_cma_secure_heap.c
index d622a51..e1b3eea 100644
--- a/drivers/gpu/ion/ion_cma_secure_heap.c
+++ b/drivers/gpu/ion/ion_cma_secure_heap.c
@@ -52,7 +52,7 @@
int ion_secure_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size)
{
- struct page *page = virt_to_page(cpu_addr);
+ struct page *page = phys_to_page(handle);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
diff --git a/drivers/gpu/ion/ion_heap.c b/drivers/gpu/ion/ion_heap.c
index 510b9ce..3d37541 100644
--- a/drivers/gpu/ion/ion_heap.c
+++ b/drivers/gpu/ion/ion_heap.c
@@ -17,8 +17,120 @@
#include <linux/err.h>
#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/vmalloc.h>
#include "ion_priv.h"
+void *ion_heap_map_kernel(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ struct scatterlist *sg;
+ int i, j;
+ void *vaddr;
+ pgprot_t pgprot;
+ struct sg_table *table = buffer->sg_table;
+ int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+ struct page **pages = vmalloc(sizeof(struct page *) * npages);
+ struct page **tmp = pages;
+
+ if (!pages)
+ return 0;
+
+ if (buffer->flags & ION_FLAG_CACHED)
+ pgprot = PAGE_KERNEL;
+ else
+ pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ int npages_this_entry = PAGE_ALIGN(sg_dma_len(sg)) / PAGE_SIZE;
+ struct page *page = sg_page(sg);
+ BUG_ON(i >= npages);
+ for (j = 0; j < npages_this_entry; j++) {
+ *(tmp++) = page++;
+ }
+ }
+ vaddr = vmap(pages, npages, VM_MAP, pgprot);
+ vfree(pages);
+
+ return vaddr;
+}
+
+void ion_heap_unmap_kernel(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ vunmap(buffer->vaddr);
+}
+
+int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
+ struct vm_area_struct *vma)
+{
+ struct sg_table *table = buffer->sg_table;
+ unsigned long addr = vma->vm_start;
+ unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
+ struct scatterlist *sg;
+ int i;
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ struct page *page = sg_page(sg);
+ unsigned long remainder = vma->vm_end - addr;
+ unsigned long len = sg_dma_len(sg);
+
+ if (offset >= sg_dma_len(sg)) {
+ offset -= sg_dma_len(sg);
+ continue;
+ } else if (offset) {
+ page += offset / PAGE_SIZE;
+ len = sg_dma_len(sg) - offset;
+ offset = 0;
+ }
+ len = min(len, remainder);
+ remap_pfn_range(vma, addr, page_to_pfn(page), len,
+ vma->vm_page_prot);
+ addr += len;
+ if (addr >= vma->vm_end)
+ return 0;
+ }
+ return 0;
+}
+
+int ion_heap_buffer_zero(struct ion_buffer *buffer)
+{
+ struct sg_table *table = buffer->sg_table;
+ pgprot_t pgprot;
+ struct scatterlist *sg;
+ struct vm_struct *vm_struct;
+ int i, j, ret = 0;
+
+ if (buffer->flags & ION_FLAG_CACHED)
+ pgprot = PAGE_KERNEL;
+ else
+ pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+ vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
+ if (!vm_struct)
+ return -ENOMEM;
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ struct page *page = sg_page(sg);
+ unsigned long len = sg_dma_len(sg);
+
+ for (j = 0; j < len / PAGE_SIZE; j++) {
+ struct page *sub_page = page + j;
+ struct page **pages = &sub_page;
+ ret = map_vm_area(vm_struct, pgprot, &pages);
+ if (ret)
+ goto end;
+ memset(vm_struct->addr, 0, PAGE_SIZE);
+ unmap_kernel_range((unsigned long)vm_struct->addr,
+ PAGE_SIZE);
+ }
+ }
+end:
+ free_vm_area(vm_struct);
+ return ret;
+}
+
struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
{
struct ion_heap *heap = NULL;
@@ -33,6 +145,9 @@
case ION_HEAP_TYPE_CARVEOUT:
heap = ion_carveout_heap_create(heap_data);
break;
+ case ION_HEAP_TYPE_CHUNK:
+ heap = ion_chunk_heap_create(heap_data);
+ break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap_data->type);
@@ -67,6 +182,9 @@
case ION_HEAP_TYPE_CARVEOUT:
ion_carveout_heap_destroy(heap);
break;
+ case ION_HEAP_TYPE_CHUNK:
+ ion_chunk_heap_destroy(heap);
+ break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap->type);
diff --git a/drivers/gpu/ion/ion_iommu_heap.c b/drivers/gpu/ion/ion_iommu_heap.c
index ca29016..bc9bddd 100644
--- a/drivers/gpu/ion/ion_iommu_heap.c
+++ b/drivers/gpu/ion/ion_iommu_heap.c
@@ -27,6 +27,7 @@
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <mach/iommu_domains.h>
+#include <trace/events/kmem.h>
struct ion_iommu_heap {
struct ion_heap heap;
@@ -83,9 +84,13 @@
} else {
gfp |= GFP_KERNEL;
}
+ trace_alloc_pages_iommu_start(gfp, orders[i]);
page = alloc_pages(gfp, orders[i]);
- if (!page)
+ trace_alloc_pages_iommu_end(gfp, orders[i]);
+ if (!page) {
+ trace_alloc_pages_iommu_fail(gfp, orders[i]);
continue;
+ }
info = kmalloc(sizeof(struct page_info), GFP_KERNEL);
info->page = page;
@@ -111,7 +116,7 @@
int j;
void *ptr = NULL;
unsigned int npages_to_vmap, total_pages, num_large_pages = 0;
- long size_remaining = PAGE_ALIGN(size);
+ unsigned long size_remaining = PAGE_ALIGN(size);
unsigned int max_order = ION_IS_CACHED(flags) ? 0 : orders[0];
data = kmalloc(sizeof(*data), GFP_KERNEL);
diff --git a/drivers/gpu/ion/ion_page_pool.c b/drivers/gpu/ion/ion_page_pool.c
new file mode 100644
index 0000000..e8b5489
--- /dev/null
+++ b/drivers/gpu/ion/ion_page_pool.c
@@ -0,0 +1,282 @@
+/*
+ * drivers/gpu/ion/ion_page_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/fs.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/shrinker.h>
+#include "ion_priv.h"
+
+/* #define DEBUG_PAGE_POOL_SHRINKER */
+
+static struct plist_head pools = PLIST_HEAD_INIT(pools);
+static struct shrinker shrinker;
+
+struct ion_page_pool_item {
+ struct page *page;
+ struct list_head list;
+};
+
+static void *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
+{
+ struct page *page = alloc_pages(pool->gfp_mask, pool->order);
+ struct scatterlist sg;
+
+ if (!page)
+ return NULL;
+
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, page, PAGE_SIZE << pool->order, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+
+ return page;
+}
+
+static void ion_page_pool_free_pages(struct ion_page_pool *pool,
+ struct page *page)
+{
+ __free_pages(page, pool->order);
+}
+
+static int ion_page_pool_add(struct ion_page_pool *pool, struct page *page)
+{
+ struct ion_page_pool_item *item;
+
+ item = kmalloc(sizeof(struct ion_page_pool_item), GFP_KERNEL);
+ if (!item)
+ return -ENOMEM;
+
+ mutex_lock(&pool->mutex);
+ item->page = page;
+ if (PageHighMem(page)) {
+ list_add_tail(&item->list, &pool->high_items);
+ pool->high_count++;
+ } else {
+ list_add_tail(&item->list, &pool->low_items);
+ pool->low_count++;
+ }
+ mutex_unlock(&pool->mutex);
+ return 0;
+}
+
+static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high)
+{
+ struct ion_page_pool_item *item;
+ struct page *page;
+
+ if (high) {
+ BUG_ON(!pool->high_count);
+ item = list_first_entry(&pool->high_items,
+ struct ion_page_pool_item, list);
+ pool->high_count--;
+ } else {
+ BUG_ON(!pool->low_count);
+ item = list_first_entry(&pool->low_items,
+ struct ion_page_pool_item, list);
+ pool->low_count--;
+ }
+
+ list_del(&item->list);
+ page = item->page;
+ kfree(item);
+ return page;
+}
+
+void *ion_page_pool_alloc(struct ion_page_pool *pool)
+{
+ struct page *page = NULL;
+
+ BUG_ON(!pool);
+
+ mutex_lock(&pool->mutex);
+ if (pool->high_count)
+ page = ion_page_pool_remove(pool, true);
+ else if (pool->low_count)
+ page = ion_page_pool_remove(pool, false);
+ mutex_unlock(&pool->mutex);
+
+ if (!page)
+ page = ion_page_pool_alloc_pages(pool);
+
+ return page;
+}
+
+void ion_page_pool_free(struct ion_page_pool *pool, struct page* page)
+{
+ int ret;
+
+ ret = ion_page_pool_add(pool, page);
+ if (ret)
+ ion_page_pool_free_pages(pool, page);
+}
+
+#ifdef DEBUG_PAGE_POOL_SHRINKER
+static int debug_drop_pools_set(void *data, u64 val)
+{
+ struct shrink_control sc;
+ int objs;
+
+ sc.gfp_mask = -1;
+ sc.nr_to_scan = 0;
+
+ if (!val)
+ return 0;
+
+ objs = shrinker.shrink(&shrinker, &sc);
+ sc.nr_to_scan = objs;
+
+ shrinker.shrink(&shrinker, &sc);
+ return 0;
+}
+
+static int debug_drop_pools_get(void *data, u64 *val)
+{
+ struct shrink_control sc;
+ int objs;
+
+ sc.gfp_mask = -1;
+ sc.nr_to_scan = 0;
+
+ objs = shrinker.shrink(&shrinker, &sc);
+ *val = objs;
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debug_drop_pools_fops, debug_drop_pools_get,
+ debug_drop_pools_set, "%llu\n");
+
+static int debug_grow_pools_set(void *data, u64 val)
+{
+ struct ion_page_pool *pool;
+ struct page *page;
+
+ plist_for_each_entry(pool, &pools, list) {
+ if (val != pool->list.prio)
+ continue;
+ page = ion_page_pool_alloc_pages(pool);
+ if (page)
+ ion_page_pool_add(pool, page);
+ }
+
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debug_grow_pools_fops, debug_drop_pools_get,
+ debug_grow_pools_set, "%llu\n");
+#endif
+
+static int ion_page_pool_total(bool high)
+{
+ struct ion_page_pool *pool;
+ int total = 0;
+
+ plist_for_each_entry(pool, &pools, list) {
+ total += high ? (pool->high_count + pool->low_count) *
+ (1 << pool->order) :
+ pool->low_count * (1 << pool->order);
+ }
+ return total;
+}
+
+static int ion_page_pool_shrink(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct ion_page_pool *pool;
+ int nr_freed = 0;
+ int i;
+ bool high = false;
+ int nr_to_scan = sc->nr_to_scan;
+
+ if (sc->gfp_mask & __GFP_HIGHMEM)
+ high = true;
+
+ if (nr_to_scan == 0)
+ return ion_page_pool_total(high);
+
+ plist_for_each_entry(pool, &pools, list) {
+ for (i = 0; i < nr_to_scan; i++) {
+ struct page *page;
+
+ mutex_lock(&pool->mutex);
+ if (high && pool->high_count) {
+ page = ion_page_pool_remove(pool, true);
+ } else if (pool->low_count) {
+ page = ion_page_pool_remove(pool, false);
+ } else {
+ mutex_unlock(&pool->mutex);
+ break;
+ }
+ mutex_unlock(&pool->mutex);
+ ion_page_pool_free_pages(pool, page);
+ nr_freed += (1 << pool->order);
+ }
+ nr_to_scan -= i;
+ }
+
+ return ion_page_pool_total(high);
+}
+
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order)
+{
+ struct ion_page_pool *pool = kmalloc(sizeof(struct ion_page_pool),
+ GFP_KERNEL);
+ if (!pool)
+ return NULL;
+ pool->high_count = 0;
+ pool->low_count = 0;
+ INIT_LIST_HEAD(&pool->low_items);
+ INIT_LIST_HEAD(&pool->high_items);
+ pool->gfp_mask = gfp_mask;
+ pool->order = order;
+ mutex_init(&pool->mutex);
+ plist_node_init(&pool->list, order);
+ plist_add(&pool->list, &pools);
+
+ return pool;
+}
+
+void ion_page_pool_destroy(struct ion_page_pool *pool)
+{
+ plist_del(&pool->list, &pools);
+ kfree(pool);
+}
+
+static int __init ion_page_pool_init(void)
+{
+ shrinker.shrink = ion_page_pool_shrink;
+ shrinker.seeks = DEFAULT_SEEKS;
+ shrinker.batch = 0;
+ register_shrinker(&shrinker);
+#ifdef DEBUG_PAGE_POOL_SHRINKER
+ debugfs_create_file("ion_pools_shrink", 0644, NULL, NULL,
+ &debug_drop_pools_fops);
+ debugfs_create_file("ion_pools_grow", 0644, NULL, NULL,
+ &debug_grow_pools_fops);
+#endif
+ return 0;
+}
+
+static void __exit ion_page_pool_exit(void)
+{
+ unregister_shrinker(&shrinker);
+}
+
+module_init(ion_page_pool_init);
+module_exit(ion_page_pool_exit);
diff --git a/drivers/gpu/ion/ion_priv.h b/drivers/gpu/ion/ion_priv.h
index 71527ae..e3fbbda 100644
--- a/drivers/gpu/ion/ion_priv.h
+++ b/drivers/gpu/ion/ion_priv.h
@@ -18,14 +18,17 @@
#ifndef _ION_PRIV_H
#define _ION_PRIV_H
+#include <linux/ion.h>
#include <linux/kref.h>
#include <linux/mm_types.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
-#include <linux/ion.h>
#include <linux/seq_file.h>
#include "msm_ion_priv.h"
+#include <linux/sched.h>
+#include <linux/shrinker.h>
+#include <linux/types.h>
struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
@@ -46,10 +49,22 @@
 * @vaddr: the kernel mapping if kmap_cnt is not zero
* @dmap_cnt: number of times the buffer is mapped for dma
* @sg_table: the sg table for the buffer if dmap_cnt is not zero
+ * @dirty: bitmask representing which pages of this buffer have
+ * been dirtied by the cpu and need cache maintenance
+ * before dma
+ * @vmas: list of vma's mapping this buffer
+ * @handle_count: count of handles referencing this buffer
+ * @task_comm: taskcomm of last client to reference this buffer in a
+ * handle, used for debugging
+ * @pid: pid of last client to reference this buffer in a
+ * handle, used for debugging
*/
struct ion_buffer {
struct kref ref;
- struct rb_node node;
+ union {
+ struct rb_node node;
+ struct list_head list;
+ };
struct ion_device *dev;
struct ion_heap *heap;
unsigned long flags;
@@ -65,7 +80,10 @@
struct sg_table *sg_table;
unsigned long *dirty;
struct list_head vmas;
- int marked;
+ /* used to track orphaned buffers */
+ int handle_count;
+ char task_comm[TASK_COMM_LEN];
+ pid_t pid;
};
/**
@@ -106,16 +124,28 @@
};
/**
+ * heap flags - flags between the heaps and core ion code
+ */
+#define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
+
+/**
* struct ion_heap - represents a heap in the system
* @node: rb node to put the heap on the device's tree of heaps
* @dev: back pointer to the ion_device
* @type: type of heap
* @ops: ops struct as above
+ * @flags: heap flags (e.g. ION_HEAP_FLAG_DEFER_FREE)
* @id: id of heap, also indicates priority of this heap when
* allocating. These are specified by platform data and
* MUST be unique
* @name: used for debugging
* @priv: private heap data
+ * @free_list: free list head if deferred free is used
+ * @lock: protects the free list
+ * @waitqueue: queue to wait on from deferred free thread
+ * @task: task struct of deferred free thread
+ * @debug_show: called when heap debug file is read to add any
+ * heap specific debug info to output
*
* Represents a pool of memory from which buffers can be made. In some
* systems the only heap is regular system memory allocated via vmalloc.
@@ -123,16 +153,30 @@
* that are allocated from a specially reserved heap.
*/
struct ion_heap {
- struct rb_node node;
+ struct plist_node node;
struct ion_device *dev;
enum ion_heap_type type;
struct ion_heap_ops *ops;
- int id;
+ unsigned long flags;
+ unsigned int id;
const char *name;
void *priv;
+ struct list_head free_list;
+ struct rt_mutex lock;
+ wait_queue_head_t waitqueue;
+ struct task_struct *task;
+ int (*debug_show)(struct ion_heap *heap, struct seq_file *, void *);
};
/**
+ * ion_buffer_cached - this ion buffer is cached
+ * @buffer: buffer
+ *
+ * indicates whether this ion buffer is cached
+ */
+bool ion_buffer_cached(struct ion_buffer *buffer);
+
+/**
* ion_buffer_fault_user_mappings - fault in user mappings of this buffer
* @buffer: buffer
*
@@ -166,6 +210,17 @@
void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap);
/**
+ * some helpers for common operations on buffers using the sg_table
+ * and vaddr fields
+ */
+void *ion_heap_map_kernel(struct ion_heap *, struct ion_buffer *);
+void ion_heap_unmap_kernel(struct ion_heap *, struct ion_buffer *);
+int ion_heap_map_user(struct ion_heap *, struct ion_buffer *,
+ struct vm_area_struct *);
+int ion_heap_buffer_zero(struct ion_buffer *buffer);
+
+
+/**
* functions for creating and destroying the built in ion heaps.
* architectures can add their own custom architecture specific
* heaps as appropriate.
@@ -173,7 +228,6 @@
struct ion_heap *ion_heap_create(struct ion_platform_heap *);
void ion_heap_destroy(struct ion_heap *);
-
struct ion_heap *ion_system_heap_create(struct ion_platform_heap *);
void ion_system_heap_destroy(struct ion_heap *);
@@ -183,6 +237,8 @@
struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *);
void ion_carveout_heap_destroy(struct ion_heap *);
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *);
+void ion_chunk_heap_destroy(struct ion_heap *);
/**
* kernel api to allocate/free from carveout -- used when carveout is
* used to back an architecture specific custom heap
@@ -191,11 +247,57 @@
unsigned long align);
void ion_carveout_free(struct ion_heap *heap, ion_phys_addr_t addr,
unsigned long size);
-
/**
* The carveout heap returns physical addresses, since 0 may be a valid
* physical address, this is used to indicate allocation failed
*/
#define ION_CARVEOUT_ALLOCATE_FAIL -1
+/**
+ * functions for creating and destroying a heap pool -- allows you
+ * to keep a pool of pre-allocated memory to use from your heap. Keeping
+ * a pool of memory that is ready for dma, i.e. any cached mappings have
+ * been invalidated from the cache, provides a significant performance
+ * benefit on many systems */
+
+/**
+ * struct ion_page_pool - pagepool struct
+ * @high_count: number of highmem items in the pool
+ * @low_count: number of lowmem items in the pool
+ * @high_items: list of highmem items
+ * @low_items: list of lowmem items
+ * @shrinker: a shrinker for the items
+ * @mutex: lock protecting this struct, especially the counts and
+ * item lists
+ * @alloc: function to be used to allocate pages when the pool
+ * is empty
+ * @free: function to be used to free pages back to the system
+ * when the shrinker fires
+ * @gfp_mask: gfp_mask to use for allocations
+ * @order: order of pages in the pool
+ * @list: plist node for list of pools
+ *
+ * Allows you to keep a pool of pre-allocated pages to use from your heap.
+ * Keeping a pool of pages that is ready for dma, i.e. any cached mappings
+ * have been invalidated from the cache, provides a significant performance
+ * benefit on many systems; a usage sketch follows this header.
+ */
+struct ion_page_pool {
+ int high_count;
+ int low_count;
+ struct list_head high_items;
+ struct list_head low_items;
+ struct mutex mutex;
+ void *(*alloc)(struct ion_page_pool *pool);
+ void (*free)(struct ion_page_pool *pool, struct page *page);
+ gfp_t gfp_mask;
+ unsigned int order;
+ struct plist_node list;
+};
+
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order);
+void ion_page_pool_destroy(struct ion_page_pool *);
+void *ion_page_pool_alloc(struct ion_page_pool *);
+void ion_page_pool_free(struct ion_page_pool *, struct page *);
+
#endif /* _ION_PRIV_H */
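The pool API declared above is meant to be driven from a heap the way
ion_system_heap does later in this patch: pages are taken with
ion_page_pool_alloc(), which reuses a cached page when one is available and
falls back to alloc_pages() otherwise, and are handed back with
ion_page_pool_free(), leaving the registered shrinker to trim pooled pages
under memory pressure. A stripped-down sketch of that pattern follows; the
example_* names are made up for illustration and error handling is omitted:

    static struct ion_page_pool *example_pool;

    static int example_pool_setup(void)
    {
    	/* order-0, zeroed, highmem-capable pages (mirrors low_order_gfp_flags) */
    	example_pool = ion_page_pool_create(GFP_HIGHUSER | __GFP_ZERO |
    					    __GFP_NOWARN, 0);
    	return example_pool ? 0 : -ENOMEM;
    }

    static struct page *example_get_page(void)
    {
    	/* pooled page if one is cached, freshly allocated page otherwise */
    	return ion_page_pool_alloc(example_pool);
    }

    static void example_put_page(struct page *page)
    {
    	/* keeps the page around for reuse; the shrinker frees it on demand */
    	ion_page_pool_free(example_pool, page);
    }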
diff --git a/drivers/gpu/ion/ion_system_heap.c b/drivers/gpu/ion/ion_system_heap.c
index af7f04b..fb6dc2d 100644
--- a/drivers/gpu/ion/ion_system_heap.c
+++ b/drivers/gpu/ion/ion_system_heap.c
@@ -22,42 +22,122 @@
#include <linux/ion.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
+#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
-#include <linux/seq_file.h>
#include "ion_priv.h"
-#include <mach/memory.h>
-#include <asm/cacheflush.h>
-#include <linux/msm_ion.h>
#include <linux/dma-mapping.h>
+#include <trace/events/kmem.h>
-static atomic_t system_heap_allocated;
-static atomic_t system_contig_heap_allocated;
+static unsigned int high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO |
+ __GFP_NOWARN | __GFP_NORETRY |
+ __GFP_NO_KSWAPD) & ~__GFP_WAIT;
+static unsigned int low_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO |
+ __GFP_NOWARN);
+static const unsigned int orders[] = {8, 4, 0};
+static const int num_orders = ARRAY_SIZE(orders);
+static int order_to_index(unsigned int order)
+{
+ int i;
+ for (i = 0; i < num_orders; i++)
+ if (order == orders[i])
+ return i;
+ BUG();
+ return -1;
+}
+
+static unsigned int order_to_size(int order)
+{
+ return PAGE_SIZE << order;
+}
+
+struct ion_system_heap {
+ struct ion_heap heap;
+ struct ion_page_pool **pools;
+};
struct page_info {
struct page *page;
- unsigned long order;
+ unsigned int order;
struct list_head list;
};
-static struct page_info *alloc_largest_available(unsigned long size,
- bool split_pages)
+static struct page *alloc_buffer_page(struct ion_system_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long order)
{
- static unsigned int orders[] = {8, 4, 0};
+ bool cached = ion_buffer_cached(buffer);
+ bool split_pages = ion_buffer_fault_user_mappings(buffer);
+ struct ion_page_pool *pool = heap->pools[order_to_index(order)];
+ struct page *page;
+
+ if (!cached) {
+ page = ion_page_pool_alloc(pool);
+ } else {
+ struct scatterlist sg;
+ gfp_t gfp_flags = low_order_gfp_flags;
+
+ if (order > 4)
+ gfp_flags = high_order_gfp_flags;
+ trace_alloc_pages_sys_start(gfp_flags, order);
+ page = alloc_pages(gfp_flags, order);
+ trace_alloc_pages_sys_end(gfp_flags, order);
+ if (!page) {
+ trace_alloc_pages_sys_fail(gfp_flags, order);
+ return 0;
+ }
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, page, PAGE_SIZE << order, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+ }
+ if (!page)
+ return 0;
+
+ if (split_pages)
+ split_page(page, order);
+ return page;
+}
+
+static void free_buffer_page(struct ion_system_heap *heap,
+ struct ion_buffer *buffer, struct page *page,
+ unsigned int order)
+{
+ bool cached = ion_buffer_cached(buffer);
+ bool split_pages = ion_buffer_fault_user_mappings(buffer);
+ int i;
+
+ if (!cached) {
+ struct ion_page_pool *pool = heap->pools[order_to_index(order)];
+ ion_page_pool_free(pool, page);
+ } else if (split_pages) {
+ for (i = 0; i < (1 << order); i++)
+ __free_page(page + i);
+ } else {
+ __free_pages(page, order);
+ }
+}
+
+
+static struct page_info *alloc_largest_available(struct ion_system_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long size,
+ unsigned int max_order)
+{
struct page *page;
struct page_info *info;
int i;
- for (i = 0; i < ARRAY_SIZE(orders); i++) {
- if (size < (1 << orders[i]) * PAGE_SIZE)
+ for (i = 0; i < num_orders; i++) {
+ if (size < order_to_size(orders[i]))
continue;
- page = alloc_pages(GFP_HIGHUSER | __GFP_ZERO |
- __GFP_NOWARN | __GFP_NORETRY, orders[i]);
+ if (max_order < orders[i])
+ continue;
+
+ page = alloc_buffer_page(heap, buffer, orders[i]);
if (!page)
continue;
- if (split_pages)
- split_page(page, orders[i]);
- info = kmap(page);
+
+ info = kmalloc(sizeof(struct page_info), GFP_KERNEL);
info->page = page;
info->order = orders[i];
return info;
@@ -70,23 +150,27 @@
unsigned long size, unsigned long align,
unsigned long flags)
{
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
struct sg_table *table;
struct scatterlist *sg;
int ret;
struct list_head pages;
struct page_info *info, *tmp_info;
int i = 0;
- long size_remaining = PAGE_ALIGN(size);
+ unsigned long size_remaining = PAGE_ALIGN(size);
+ unsigned int max_order = orders[0];
bool split_pages = ion_buffer_fault_user_mappings(buffer);
-
INIT_LIST_HEAD(&pages);
while (size_remaining > 0) {
- info = alloc_largest_available(size_remaining, split_pages);
+ info = alloc_largest_available(sys_heap, buffer, size_remaining, max_order);
if (!info)
goto err;
list_add_tail(&info->list, &pages);
size_remaining -= (1 << info->order) * PAGE_SIZE;
+ max_order = info->order;
i++;
}
@@ -106,7 +190,6 @@
sg = table->sgl;
list_for_each_entry_safe(info, tmp_info, &pages, list) {
struct page *page = info->page;
-
if (split_pages) {
for (i = 0; i < (1 << info->order); i++) {
sg_set_page(sg, page + i, PAGE_SIZE, 0);
@@ -118,42 +201,43 @@
sg = sg_next(sg);
}
list_del(&info->list);
- kunmap(page);
+ kfree(info);
}
- dma_sync_sg_for_device(NULL, table->sgl, table->nents,
- DMA_BIDIRECTIONAL);
-
buffer->priv_virt = table;
- atomic_add(size, &system_heap_allocated);
return 0;
err1:
kfree(table);
err:
list_for_each_entry(info, &pages, list) {
- if (split_pages)
- for (i = 0; i < (1 << info->order); i++)
- __free_page(info->page + i);
- else
- __free_pages(info->page, info->order);
-
- kunmap(info->page);
+ free_buffer_page(sys_heap, buffer, info->page, info->order);
+ kfree(info);
}
return -ENOMEM;
}
void ion_system_heap_free(struct ion_buffer *buffer)
{
- int i;
+ struct ion_heap *heap = buffer->heap;
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
+ struct sg_table *table = buffer->sg_table;
+ bool cached = ion_buffer_cached(buffer);
struct scatterlist *sg;
- struct sg_table *table = buffer->priv_virt;
+ LIST_HEAD(pages);
+ int i;
+
+ /* uncached pages come from the page pools, zero them before returning
+ for security purposes (other allocations are zeroed at alloc time) */
+ if (!cached)
+ ion_heap_buffer_zero(buffer);
for_each_sg(table->sgl, sg, table->nents, i)
- __free_pages(sg_page(sg), get_order(sg_dma_len(sg)));
- if (buffer->sg_table)
- sg_free_table(buffer->sg_table);
- kfree(buffer->sg_table);
- atomic_sub(buffer->size, &system_heap_allocated);
+ free_buffer_page(sys_heap, buffer, sg_page(sg),
+ get_order(sg_dma_len(sg)));
+ sg_free_table(table);
+ kfree(table);
}
struct sg_table *ion_system_heap_map_dma(struct ion_heap *heap,
@@ -168,105 +252,85 @@
return;
}
-void *ion_system_heap_map_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- struct scatterlist *sg;
- int i, j;
- void *vaddr;
- pgprot_t pgprot;
- struct sg_table *table = buffer->priv_virt;
- int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
- struct page **pages = kzalloc(sizeof(struct page *) * npages,
- GFP_KERNEL);
- struct page **tmp = pages;
-
- if (buffer->flags & ION_FLAG_CACHED)
- pgprot = PAGE_KERNEL;
- else
- pgprot = pgprot_writecombine(PAGE_KERNEL);
-
- for_each_sg(table->sgl, sg, table->nents, i) {
- int npages_this_entry = PAGE_ALIGN(sg_dma_len(sg)) / PAGE_SIZE;
- struct page *page = sg_page(sg);
- BUG_ON(i >= npages);
- for (j = 0; j < npages_this_entry; j++) {
- *(tmp++) = page++;
- }
- }
- vaddr = vmap(pages, npages, VM_MAP, pgprot);
- kfree(pages);
-
- return vaddr;
-}
-
-void ion_system_heap_unmap_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- vunmap(buffer->vaddr);
-}
-
-int ion_system_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
- struct vm_area_struct *vma)
-{
- struct sg_table *table = buffer->priv_virt;
- unsigned long addr = vma->vm_start;
- unsigned long offset = vma->vm_pgoff;
- struct scatterlist *sg;
- int i;
-
- if (!ION_IS_CACHED(buffer->flags)) {
- pr_err("%s: cannot map system heap uncached\n", __func__);
- return -EINVAL;
- }
-
- for_each_sg(table->sgl, sg, table->nents, i) {
- if (offset) {
- offset--;
- continue;
- }
- remap_pfn_range(vma, addr, page_to_pfn(sg_page(sg)),
- sg_dma_len(sg), vma->vm_page_prot);
- addr += sg_dma_len(sg);
- }
- return 0;
-}
-
-static int ion_system_print_debug(struct ion_heap *heap, struct seq_file *s,
- const struct rb_root *unused)
-{
- seq_printf(s, "total bytes currently allocated: %lx\n",
- (unsigned long) atomic_read(&system_heap_allocated));
-
- return 0;
-}
-
-static struct ion_heap_ops vmalloc_ops = {
+static struct ion_heap_ops system_heap_ops = {
.allocate = ion_system_heap_allocate,
.free = ion_system_heap_free,
.map_dma = ion_system_heap_map_dma,
.unmap_dma = ion_system_heap_unmap_dma,
- .map_kernel = ion_system_heap_map_kernel,
- .unmap_kernel = ion_system_heap_unmap_kernel,
- .map_user = ion_system_heap_map_user,
- .print_debug = ion_system_print_debug,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
+ .map_user = ion_heap_map_user,
};
-struct ion_heap *ion_system_heap_create(struct ion_platform_heap *pheap)
+static int ion_system_heap_debug_show(struct ion_heap *heap, struct seq_file *s,
+ void *unused)
{
- struct ion_heap *heap;
- heap = kzalloc(sizeof(struct ion_heap), GFP_KERNEL);
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
+ int i;
+ for (i = 0; i < num_orders; i++) {
+ struct ion_page_pool *pool = sys_heap->pools[i];
+ seq_printf(s, "%d order %u highmem pages in pool = %lu total\n",
+ pool->high_count, pool->order,
+ (1 << pool->order) * PAGE_SIZE * pool->high_count);
+ seq_printf(s, "%d order %u lowmem pages in pool = %lu total\n",
+ pool->low_count, pool->order,
+ (1 << pool->order) * PAGE_SIZE * pool->low_count);
+ }
+ return 0;
+}
+
+struct ion_heap *ion_system_heap_create(struct ion_platform_heap *unused)
+{
+ struct ion_system_heap *heap;
+ int i;
+
+ heap = kzalloc(sizeof(struct ion_system_heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
- heap->ops = &vmalloc_ops;
- heap->type = ION_HEAP_TYPE_SYSTEM;
- return heap;
+ heap->heap.ops = &system_heap_ops;
+ heap->heap.type = ION_HEAP_TYPE_SYSTEM;
+ heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+ heap->pools = kzalloc(sizeof(struct ion_page_pool *) * num_orders,
+ GFP_KERNEL);
+ if (!heap->pools)
+ goto err_alloc_pools;
+ for (i = 0; i < num_orders; i++) {
+ struct ion_page_pool *pool;
+ gfp_t gfp_flags = low_order_gfp_flags;
+
+ if (orders[i] > 4)
+ gfp_flags = high_order_gfp_flags;
+ pool = ion_page_pool_create(gfp_flags, orders[i]);
+ if (!pool)
+ goto err_create_pool;
+ heap->pools[i] = pool;
+ }
+ heap->heap.debug_show = ion_system_heap_debug_show;
+ return &heap->heap;
+err_create_pool:
+ for (i = 0; i < num_orders; i++)
+ if (heap->pools[i])
+ ion_page_pool_destroy(heap->pools[i]);
+ kfree(heap->pools);
+err_alloc_pools:
+ kfree(heap);
+ return ERR_PTR(-ENOMEM);
}
void ion_system_heap_destroy(struct ion_heap *heap)
{
- kfree(heap);
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
+ int i;
+
+ for (i = 0; i < num_orders; i++)
+ ion_page_pool_destroy(sys_heap->pools[i]);
+ kfree(sys_heap->pools);
+ kfree(sys_heap);
}
static int ion_system_contig_heap_allocate(struct ion_heap *heap,
@@ -278,14 +342,12 @@
buffer->priv_virt = kzalloc(len, GFP_KERNEL);
if (!buffer->priv_virt)
return -ENOMEM;
- atomic_add(len, &system_contig_heap_allocated);
return 0;
}
void ion_system_contig_heap_free(struct ion_buffer *buffer)
{
kfree(buffer->priv_virt);
- atomic_sub(buffer->size, &system_contig_heap_allocated);
}
static int ion_system_contig_heap_phys(struct ion_heap *heap,
@@ -323,57 +385,18 @@
kfree(buffer->sg_table);
}
-int ion_system_contig_heap_map_user(struct ion_heap *heap,
- struct ion_buffer *buffer,
- struct vm_area_struct *vma)
-{
- unsigned long pfn = __phys_to_pfn(virt_to_phys(buffer->priv_virt));
-
- if (ION_IS_CACHED(buffer->flags))
- return remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot);
- else {
- pr_err("%s: cannot map system heap uncached\n", __func__);
- return -EINVAL;
- }
-}
-
-static int ion_system_contig_print_debug(struct ion_heap *heap,
- struct seq_file *s,
- const struct rb_root *unused)
-{
- seq_printf(s, "total bytes currently allocated: %lx\n",
- (unsigned long) atomic_read(&system_contig_heap_allocated));
-
- return 0;
-}
-
-void *ion_system_contig_heap_map_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- return buffer->priv_virt;
-}
-
-void ion_system_contig_heap_unmap_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- return;
-}
-
static struct ion_heap_ops kmalloc_ops = {
.allocate = ion_system_contig_heap_allocate,
.free = ion_system_contig_heap_free,
.phys = ion_system_contig_heap_phys,
.map_dma = ion_system_contig_heap_map_dma,
.unmap_dma = ion_system_contig_heap_unmap_dma,
- .map_kernel = ion_system_contig_heap_map_kernel,
- .unmap_kernel = ion_system_contig_heap_unmap_kernel,
- .map_user = ion_system_contig_heap_map_user,
- .print_debug = ion_system_contig_print_debug,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
+ .map_user = ion_heap_map_user,
};
-struct ion_heap *ion_system_contig_heap_create(struct ion_platform_heap *pheap)
+struct ion_heap *ion_system_contig_heap_create(struct ion_platform_heap *unused)
{
struct ion_heap *heap;
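The reworked system heap allocator above walks the orders {8, 4, 0} from
largest to smallest and, after each chunk it obtains, caps later attempts at
that chunk's order (max_order = info->order), so later iterations never retry
an order larger than the last one that succeeded. As a rough worked example
(assuming 4 KB pages): a 1152 KB request is satisfied by one order-8 chunk
(1 MB) followed by two order-4 chunks (64 KB each); had the order-8 attempt
failed, the same call would have fallen through to order 4 and every later
chunk would have been capped at order 4 or below.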
diff --git a/drivers/gpu/ion/msm/ion_cp_common.c b/drivers/gpu/ion/msm/ion_cp_common.c
index 7d54cfa..58eca24 100644
--- a/drivers/gpu/ion/msm/ion_cp_common.c
+++ b/drivers/gpu/ion/msm/ion_cp_common.c
@@ -15,6 +15,7 @@
#include <linux/memory_alloc.h>
#include <linux/types.h>
#include <mach/scm.h>
+#include <linux/highmem.h>
#include "../ion_priv.h"
#include "ion_cp_common.h"
@@ -157,6 +158,8 @@
request.chunks.chunk_list_size = nchunks;
request.chunks.chunk_size = chunk_size;
+ kmap_flush_unused();
+ kmap_atomic_flush_unused();
return scm_call(SCM_SVC_MP, MEM_PROTECT_LOCK_ID2,
&request, sizeof(request), &resp, sizeof(resp));
diff --git a/drivers/gpu/ion/msm/ion_iommu_map.c b/drivers/gpu/ion/msm/ion_iommu_map.c
index ae4ae37..c8fb3c1 100644
--- a/drivers/gpu/ion/msm/ion_iommu_map.c
+++ b/drivers/gpu/ion/msm/ion_iommu_map.c
@@ -11,6 +11,7 @@
*
*/
+#include <linux/dma-buf.h>
#include <linux/export.h>
#include <linux/iommu.h>
#include <linux/ion.h>
@@ -69,6 +70,7 @@
struct sg_table *table;
unsigned long size;
struct mutex lock;
+ struct dma_buf *dbuf;
};
static struct rb_root iommu_root;
@@ -85,9 +87,9 @@
parent = *p;
entry = rb_entry(parent, struct ion_iommu_meta, node);
- if (meta->handle < entry->handle) {
+ if (meta->table < entry->table) {
p = &(*p)->rb_left;
- } else if (meta->handle > entry->handle) {
+ } else if (meta->table > entry->table) {
p = &(*p)->rb_right;
} else {
pr_err("%s: handle %p already exists\n", __func__,
@@ -101,7 +103,7 @@
}
-static struct ion_iommu_meta *ion_iommu_meta_lookup(struct ion_handle *handle)
+static struct ion_iommu_meta *ion_iommu_meta_lookup(struct sg_table *table)
{
struct rb_root *root = &iommu_root;
struct rb_node **p = &root->rb_node;
@@ -112,9 +114,9 @@
parent = *p;
entry = rb_entry(parent, struct ion_iommu_meta, node);
- if (handle < entry->handle)
+ if (table < entry->table)
p = &(*p)->rb_left;
- else if (handle > entry->handle)
+ else if (table > entry->table)
p = &(*p)->rb_right;
else
return entry;
@@ -206,8 +208,8 @@
* biggest entry. To take advantage of bigger mapping sizes both the
* VA and PA addresses have to be aligned to the biggest size.
*/
- if (table->sgl->length > align)
- align = table->sgl->length;
+ if (sg_dma_len(table->sgl) > align)
+ align = sg_dma_len(table->sgl);
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
@@ -319,7 +321,8 @@
return ERR_PTR(ret);
}
-static struct ion_iommu_meta *ion_iommu_meta_create(struct ion_handle *handle,
+static struct ion_iommu_meta *ion_iommu_meta_create(struct ion_client *client,
+ struct ion_handle *handle,
struct sg_table *table,
unsigned long size)
{
@@ -333,6 +336,7 @@
meta->handle = handle;
meta->table = table;
meta->size = size;
+ meta->dbuf = ion_share_dma_buf(client, handle);
kref_init(&meta->ref);
mutex_init(&meta->lock);
ion_iommu_meta_add(meta);
@@ -347,6 +351,7 @@
rb_erase(&meta->node, &iommu_root);
+ dma_buf_put(meta->dbuf);
kfree(meta);
}
@@ -427,13 +432,13 @@
}
mutex_lock(&msm_iommu_map_mutex);
- iommu_meta = ion_iommu_meta_lookup(handle);
+ iommu_meta = ion_iommu_meta_lookup(table);
if (!iommu_meta)
- iommu_meta = ion_iommu_meta_create(handle, table, size);
+ iommu_meta = ion_iommu_meta_create(client, handle, table, size);
else
kref_get(&iommu_meta->ref);
-
+ BUG_ON(iommu_meta->size != size);
mutex_unlock(&msm_iommu_map_mutex);
iommu_map = ion_iommu_lookup(iommu_meta, domain_num, partition_num);
@@ -493,6 +498,7 @@
{
struct ion_iommu_map *iommu_map;
struct ion_iommu_meta *meta;
+ struct sg_table *table;
if (IS_ERR_OR_NULL(client)) {
pr_err("%s: client pointer is invalid\n", __func__);
@@ -503,9 +509,10 @@
return;
}
+ table = ion_sg_table(client, handle);
mutex_lock(&msm_iommu_map_mutex);
- meta = ion_iommu_meta_lookup(handle);
+ meta = ion_iommu_meta_lookup(table);
if (!meta) {
WARN(1, "%s: (%d,%d) was never mapped for %p\n", __func__,
domain_num, partition_num, handle);
diff --git a/drivers/gpu/ion/msm/msm_ion.c b/drivers/gpu/ion/msm/msm_ion.c
index 9259de2..4a45313 100644
--- a/drivers/gpu/ion/msm/msm_ion.c
+++ b/drivers/gpu/ion/msm/msm_ion.c
@@ -89,7 +89,7 @@
},
{
.id = ION_QSECOM_HEAP_ID,
- .type = ION_HEAP_TYPE_CARVEOUT,
+ .type = ION_HEAP_TYPE_DMA,
.name = ION_QSECOM_HEAP_NAME,
},
{
@@ -128,7 +128,7 @@
struct ion_client *msm_ion_client_create(unsigned int heap_mask,
const char *name)
{
- return ion_client_create(idev, heap_mask, name);
+ return ion_client_create(idev, name);
}
EXPORT_SYMBOL(msm_ion_client_create);
diff --git a/drivers/gpu/msm/a3xx_reg.h b/drivers/gpu/msm/a3xx_reg.h
index c768bb7..5f435f3 100644
--- a/drivers/gpu/msm/a3xx_reg.h
+++ b/drivers/gpu/msm/a3xx_reg.h
@@ -179,6 +179,7 @@
#define A3XX_CP_MEQ_ADDR 0x1DA
#define A3XX_CP_MEQ_DATA 0x1DB
#define A3XX_CP_PERFCOUNTER_SELECT 0x445
+#define A3XX_CP_WFI_PEND_CTR 0x01F5
#define A3XX_CP_HW_FAULT 0x45C
#define A3XX_CP_AHB_FAULT 0x54D
#define A3XX_CP_PROTECT_CTRL 0x45E
diff --git a/drivers/gpu/msm/adreno.c b/drivers/gpu/msm/adreno.c
index 5589ff0..e6c345b 100644
--- a/drivers/gpu/msm/adreno.c
+++ b/drivers/gpu/msm/adreno.c
@@ -30,7 +30,6 @@
#include "kgsl_cffdump.h"
#include "kgsl_sharedmem.h"
#include "kgsl_iommu.h"
-#include "kgsl_trace.h"
#include "adreno.h"
#include "adreno_pm4types.h"
@@ -532,6 +531,7 @@
result = adreno_dev->gpudev->irq_handler(adreno_dev);
+ device->pwrctrl.irq_last = 1;
if (device->requested_state == KGSL_STATE_NONE) {
if (device->pwrctrl.nap_allowed == true) {
kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
@@ -607,41 +607,15 @@
return result;
}
-static void adreno_iommu_setstate(struct kgsl_device *device,
- unsigned int context_id,
- uint32_t flags)
+static unsigned int _adreno_iommu_setstate_v0(struct kgsl_device *device,
+ unsigned int *cmds_orig,
+ unsigned int pt_val,
+ int num_iommu_units, uint32_t flags)
{
- unsigned int pt_val, reg_pt_val;
- unsigned int link[230];
- unsigned int *cmds = &link[0];
- int sizedwords = 0;
+ unsigned int reg_pt_val;
+ unsigned int *cmds = cmds_orig;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
- int num_iommu_units, i;
- struct kgsl_context *context;
- struct adreno_context *adreno_ctx = NULL;
-
- /*
- * If we're idle and we don't need to use the GPU to save context
- * state, use the CPU instead of the GPU to reprogram the
- * iommu for simplicity's sake.
- */
- if (!adreno_dev->drawctxt_active || device->ftbl->isidle(device))
- return kgsl_mmu_device_setstate(&device->mmu, flags);
-
- num_iommu_units = kgsl_mmu_get_num_iommu_units(&device->mmu);
-
- context = idr_find(&device->context_idr, context_id);
- if (context == NULL)
- return;
- adreno_ctx = context->devctxt;
-
- if (kgsl_mmu_enable_clk(&device->mmu,
- KGSL_IOMMU_CONTEXT_USER))
- return;
-
- cmds += __adreno_add_idle_indirect_cmds(cmds,
- device->mmu.setstate_memory.gpuaddr +
- KGSL_IOMMU_SETSTATE_NOP_OFFSET);
+ int i;
if (cpu_is_msm8960())
cmds += adreno_add_change_mh_phys_limit_cmds(cmds, 0xFFFFF000,
@@ -658,8 +632,6 @@
/* Acquire GPU-CPU sync Lock here */
cmds += kgsl_mmu_sync_lock(&device->mmu, cmds);
- pt_val = kgsl_mmu_get_pt_base_addr(&device->mmu,
- device->mmu.hwpagetable);
if (flags & KGSL_MMUFLAGS_PTUPDATE) {
/*
 * We need to perform the following operations for all
@@ -736,25 +708,168 @@
cmds += adreno_add_idle_cmds(adreno_dev, cmds);
- sizedwords += (cmds - &link[0]);
- if (sizedwords) {
- /* invalidate all base pointers */
- *cmds++ = cp_type3_packet(CP_INVALIDATE_STATE, 1);
- *cmds++ = 0x7fff;
- sizedwords += 2;
- /* This returns the per context timestamp but we need to
- * use the global timestamp for iommu clock disablement */
- adreno_ringbuffer_issuecmds(device, adreno_ctx,
- KGSL_CMD_FLAGS_PMODE,
- &link[0], sizedwords);
- kgsl_mmu_disable_clk_on_ts(&device->mmu,
- adreno_dev->ringbuffer.timestamp[KGSL_MEMSTORE_GLOBAL], true);
+ return cmds - cmds_orig;
+}
+
+static unsigned int _adreno_iommu_setstate_v1(struct kgsl_device *device,
+ unsigned int *cmds_orig,
+ unsigned int pt_val,
+ int num_iommu_units, uint32_t flags)
+{
+ unsigned int reg_pt_val;
+ unsigned int *cmds = cmds_orig;
+ int i;
+ unsigned int ttbr0, tlbiall, tlbstatus, tlbsync, mmu_ctrl;
+
+ for (i = 0; i < num_iommu_units; i++) {
+ reg_pt_val = (pt_val + kgsl_mmu_get_pt_lsb(&device->mmu,
+ i, KGSL_IOMMU_CONTEXT_USER));
+ if (flags & KGSL_MMUFLAGS_PTUPDATE) {
+ mmu_ctrl = kgsl_mmu_get_reg_ahbaddr(
+ &device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL) >> 2;
+
+ ttbr0 = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TTBR0) >> 2;
+
+ if (kgsl_mmu_hw_halt_supported(&device->mmu, i)) {
+ *cmds++ = cp_type3_packet(CP_WAIT_FOR_IDLE, 1);
+ *cmds++ = 0;
+ /*
+ * glue commands together until next
+ * WAIT_FOR_ME
+ */
+ cmds += adreno_wait_reg_eq(cmds,
+ A3XX_CP_WFI_PEND_CTR, 1, 0xFFFFFFFF, 0xF);
+
+ /* set the iommu lock bit */
+ *cmds++ = cp_type3_packet(CP_REG_RMW, 3);
+ *cmds++ = mmu_ctrl;
+ /* AND to unmask the lock bit */
+ *cmds++ =
+ ~(KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT);
+ /* OR to set the IOMMU lock bit */
+ *cmds++ =
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT;
+ /* wait for smmu to lock */
+ cmds += adreno_wait_reg_eq(cmds, mmu_ctrl,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE, 0xF);
+ }
+ /* set ttbr0 */
+ *cmds++ = cp_type0_packet(ttbr0, 1);
+ *cmds++ = reg_pt_val;
+ if (kgsl_mmu_hw_halt_supported(&device->mmu, i)) {
+ /* unlock the IOMMU lock */
+ *cmds++ = cp_type3_packet(CP_REG_RMW, 3);
+ *cmds++ = mmu_ctrl;
+ /* AND to unmask the lock bit */
+ *cmds++ =
+ ~(KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT);
+ /* OR with 0 so lock bit is unset */
+ *cmds++ = 0;
+ /* release all commands with wait_for_me */
+ *cmds++ = cp_type3_packet(CP_WAIT_FOR_ME, 1);
+ *cmds++ = 0;
+ }
+ }
+ if (flags & KGSL_MMUFLAGS_TLBFLUSH) {
+ tlbiall = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBIALL) >> 2;
+ *cmds++ = cp_type0_packet(tlbiall, 1);
+ *cmds++ = 1;
+
+ tlbsync = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBSYNC) >> 2;
+ *cmds++ = cp_type0_packet(tlbsync, 1);
+ *cmds++ = 0;
+
+ tlbstatus = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBSTATUS) >> 2;
+ cmds += adreno_wait_reg_eq(cmds, tlbstatus, 0,
+ KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE, 0xF);
+ }
}
+ return cmds - cmds_orig;
+}
+
+static void adreno_iommu_setstate(struct kgsl_device *device,
+ unsigned int context_id,
+ uint32_t flags)
+{
+ unsigned int pt_val;
+ unsigned int link[230];
+ unsigned int *cmds = &link[0];
+ int sizedwords = 0;
+ struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
+ int num_iommu_units;
+ struct kgsl_context *context;
+ struct adreno_context *adreno_ctx = NULL;
+ struct adreno_ringbuffer *rb = &adreno_dev->ringbuffer;
+
+ if (!adreno_dev->drawctxt_active ||
+ KGSL_STATE_ACTIVE != device->state) {
+ kgsl_mmu_device_setstate(&device->mmu, flags);
+ return;
+ }
+ num_iommu_units = kgsl_mmu_get_num_iommu_units(&device->mmu);
+
+ context = idr_find(&device->context_idr, context_id);
+ if (context == NULL)
+ return;
+
+ kgsl_context_get(context);
+
+ adreno_ctx = context->devctxt;
+
+ if (kgsl_mmu_enable_clk(&device->mmu,
+ KGSL_IOMMU_CONTEXT_USER))
+ return;
+
+ pt_val = kgsl_mmu_get_pt_base_addr(&device->mmu,
+ device->mmu.hwpagetable);
+
+ cmds += __adreno_add_idle_indirect_cmds(cmds,
+ device->mmu.setstate_memory.gpuaddr +
+ KGSL_IOMMU_SETSTATE_NOP_OFFSET);
+
+ if (msm_soc_version_supports_iommu_v0())
+ cmds += _adreno_iommu_setstate_v0(device, cmds, pt_val,
+ num_iommu_units, flags);
+ else
+ cmds += _adreno_iommu_setstate_v1(device, cmds, pt_val,
+ num_iommu_units, flags);
+
+ sizedwords += (cmds - &link[0]);
+ if (sizedwords == 0) {
+ KGSL_DRV_ERR(device, "no commands generated\n");
+ BUG();
+ }
+ /* invalidate all base pointers */
+ *cmds++ = cp_type3_packet(CP_INVALIDATE_STATE, 1);
+ *cmds++ = 0x7fff;
+ sizedwords += 2;
if (sizedwords > (ARRAY_SIZE(link))) {
KGSL_DRV_ERR(device, "Temp command buffer overflow\n");
BUG();
}
+ /*
+ * This returns the per context timestamp but we need to
+ * use the global timestamp for iommu clock disablement
+ */
+ adreno_ringbuffer_issuecmds(device, adreno_ctx, KGSL_CMD_FLAGS_PMODE,
+ &link[0], sizedwords);
+
+ kgsl_mmu_disable_clk_on_ts(&device->mmu,
+ rb->timestamp[KGSL_MEMSTORE_GLOBAL], true);
+
+ kgsl_context_put(context);
}
static void adreno_gpummu_setstate(struct kgsl_device *device,
@@ -1284,6 +1399,8 @@
data->physstart = reg_val[0];
data->physend = data->physstart + reg_val[1] - 1;
+ data->iommu_halt_enable = of_property_read_bool(node,
+ "qcom,iommu-enable-halt");
data->iommu_ctx_count = 0;
@@ -2187,6 +2304,7 @@
context->wait_on_invalid_ts = false;
if (!(adreno_context->flags & CTXT_FLAGS_PER_CONTEXT_TS)) {
+ ft_data->status = 1;
KGSL_FT_ERR(device, "Fault tolerance not supported\n");
goto play_good_cmds;
}
@@ -2205,7 +2323,7 @@
}
/* Check if we detected a long running IB, if false return */
- if (adreno_dev->long_ib) {
+ if ((adreno_context) && (adreno_dev->long_ib)) {
long_ib = _adreno_check_long_ib(device);
if (!long_ib) {
adreno_context->flags &= ~CTXT_FLAGS_GPU_HANG;
@@ -2221,6 +2339,7 @@
/* If long IB detected do not attempt replay of bad cmds */
if (long_ib) {
+ ft_data->status = 1;
_adreno_debug_ft_info(device, ft_data);
goto play_good_cmds;
}
@@ -2782,7 +2901,7 @@
if (device->state == KGSL_STATE_ACTIVE) {
/* Is the ring buffer is empty? */
GSL_RB_GET_READPTR(rb, &rb->rptr);
- if (!device->active_cnt && (rb->rptr == rb->wptr)) {
+ if (rb->rptr == rb->wptr) {
/*
* Are there interrupts pending? If so then pretend we
 * are not idle - this avoids the possibility that we go
@@ -2952,7 +3071,7 @@
if (!in_interrupt())
kgsl_pre_hwaccess(device);
- trace_kgsl_regwrite(device, offsetwords, value);
+ kgsl_trace_regwrite(device, offsetwords, value);
kgsl_cffdump_regwrite(device->id, offsetwords << 2, value);
reg = (unsigned int *)(device->reg_virt + (offsetwords << 2));
@@ -3040,7 +3159,7 @@
if (context && device->state != KGSL_STATE_SLUMBER) {
adreno_ringbuffer_issuecmds(device, context->devctxt,
- KGSL_CMD_FLAGS_NONE, NULL, 0);
+ KGSL_CMD_FLAGS_GET_INT, NULL, 0);
}
}
@@ -3596,15 +3715,20 @@
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
- unsigned int cycles;
+ unsigned int cycles = 0;
- /* Get the busy cycles counted since the counter was last reset */
- /* Calling this function also resets and restarts the counter */
+ /*
+ * Get the busy cycles counted since the counter was last reset.
+ * If we're not currently active, there shouldn't have been
+ * any cycles since the last time this function was called.
+ */
+ if (device->state == KGSL_STATE_ACTIVE)
+ cycles = adreno_dev->gpudev->busy_cycles(adreno_dev);
- cycles = adreno_dev->gpudev->busy_cycles(adreno_dev);
-
- /* In order to calculate idle you have to have run the algorithm *
- * at least once to get a start time. */
+ /*
+ * In order to calculate idle you have to have run the algorithm
+ * at least once to get a start time.
+ */
if (pwr->time != 0) {
s64 tmp = ktime_to_us(ktime_get());
stats->total_time = tmp - pwr->time;
diff --git a/drivers/gpu/msm/adreno.h b/drivers/gpu/msm/adreno.h
index 90d6027..fa892b9 100644
--- a/drivers/gpu/msm/adreno.h
+++ b/drivers/gpu/msm/adreno.h
@@ -34,6 +34,7 @@
#define KGSL_CMD_FLAGS_NONE 0x00000000
#define KGSL_CMD_FLAGS_PMODE 0x00000001
#define KGSL_CMD_FLAGS_INTERNAL_ISSUE 0x00000002
+#define KGSL_CMD_FLAGS_GET_INT 0x00000004
#define KGSL_CMD_FLAGS_EOF 0x00000100
/* Command identifiers */
@@ -506,4 +507,25 @@
return cmds - start;
}
+/*
+ * adreno_wait_reg_eq() - Add a CP_WAIT_REG_EQ command
+ * @cmds: Pointer to memory where commands are to be added
+ * @addr: Register address to poll for
+ * @val: Value to poll for
+ * @mask: The value against which register value is masked
+ * @interval: wait interval
+ */
+static inline int adreno_wait_reg_eq(unsigned int *cmds, unsigned int addr,
+ unsigned int val, unsigned int mask,
+ unsigned int interval)
+{
+ unsigned int *start = cmds;
+ *cmds++ = cp_type3_packet(CP_WAIT_REG_EQ, 4);
+ *cmds++ = addr;
+ *cmds++ = val;
+ *cmds++ = mask;
+ *cmds++ = interval;
+ return cmds - start;
+}
+
#endif /*__ADRENO_H */
diff --git a/drivers/gpu/msm/adreno_a3xx.c b/drivers/gpu/msm/adreno_a3xx.c
index be5c786..a4b3121 100644
--- a/drivers/gpu/msm/adreno_a3xx.c
+++ b/drivers/gpu/msm/adreno_a3xx.c
@@ -2758,7 +2758,7 @@
{
unsigned int in, out, bit, sel;
- if (countable > 0x7f)
+ if (counter > 1 || countable > 0x7f)
return;
adreno_regread(device, A3XX_VBIF_PERF_CNT_EN, &in);
@@ -2767,7 +2767,7 @@
if (counter == 0) {
bit = VBIF_PERF_CNT_0;
sel = (sel & ~VBIF_PERF_CNT_0_SEL_MASK) | countable;
- } else if (counter == 1) {
+ } else {
bit = VBIF_PERF_CNT_1;
sel = (sel & ~VBIF_PERF_CNT_1_SEL_MASK)
| (countable << VBIF_PERF_CNT_1_SEL);
@@ -2821,10 +2821,10 @@
unsigned int val = 0;
struct a3xx_perfcounter_register *reg;
- if (group > ARRAY_SIZE(a3xx_perfcounter_reglist))
+ if (group >= ARRAY_SIZE(a3xx_perfcounter_reglist))
return;
- if (counter > a3xx_perfcounter_reglist[group].count)
+ if (counter >= a3xx_perfcounter_reglist[group].count)
return;
/* Special cases */
@@ -2858,7 +2858,10 @@
unsigned int lo = 0, hi = 0;
unsigned int val;
- if (group > ARRAY_SIZE(a3xx_perfcounter_reglist))
+ if (group >= ARRAY_SIZE(a3xx_perfcounter_reglist))
+ return 0;
+
+ if (counter >= a3xx_perfcounter_reglist[group].count)
return 0;
reg = &(a3xx_perfcounter_reglist[group].regs[counter]);
diff --git a/drivers/gpu/msm/adreno_debugfs.c b/drivers/gpu/msm/adreno_debugfs.c
index ef599e9..980ff13 100644
--- a/drivers/gpu/msm/adreno_debugfs.c
+++ b/drivers/gpu/msm/adreno_debugfs.c
@@ -93,4 +93,7 @@
adreno_dev->ft_pf_policy = KGSL_FT_PAGEFAULT_DEFAULT_POLICY;
debugfs_create_u32("ft_pagefault_policy", 0644, device->d_debugfs,
&adreno_dev->ft_pf_policy);
+
+ debugfs_create_u32("active_cnt", 0444, device->d_debugfs,
+ &device->active_cnt);
}
diff --git a/drivers/gpu/msm/adreno_postmortem.c b/drivers/gpu/msm/adreno_postmortem.c
index 5396196..2249907 100644
--- a/drivers/gpu/msm/adreno_postmortem.c
+++ b/drivers/gpu/msm/adreno_postmortem.c
@@ -69,6 +69,8 @@
{CP_SET_PROTECTED_MODE, "ST_PRT_M"},
{CP_SET_SHADER_BASES, "ST_SHD_B"},
{CP_WAIT_FOR_IDLE, "WAIT4IDL"},
+ {CP_WAIT_FOR_ME, "WAIT4ME"},
+ {CP_WAIT_REG_EQ, "WAITRGEQ"},
};
static const struct pm_id_name pm3_nop_values[] = {
@@ -854,7 +856,12 @@
(num_iommu_units && this_cmd ==
kgsl_mmu_get_reg_gpuaddr(&device->mmu, 0,
KGSL_IOMMU_CONTEXT_USER,
- KGSL_IOMMU_CTX_TTBR0))) {
+ KGSL_IOMMU_CTX_TTBR0)) ||
+ (num_iommu_units && this_cmd == cp_type0_packet(
+ kgsl_mmu_get_reg_ahbaddr(
+ &device->mmu, 0,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TTBR0), 1))) {
KGSL_LOG_DUMP(device, "Current pagetable: %x\t"
"pagetable base: %x\n",
kgsl_mmu_get_ptname_from_ptbase(&device->mmu,
diff --git a/drivers/gpu/msm/adreno_ringbuffer.c b/drivers/gpu/msm/adreno_ringbuffer.c
index a4bb4fa..61ea916 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.c
+++ b/drivers/gpu/msm/adreno_ringbuffer.c
@@ -18,7 +18,6 @@
#include "kgsl.h"
#include "kgsl_sharedmem.h"
#include "kgsl_cffdump.h"
-#include "kgsl_trace.h"
#include "adreno.h"
#include "adreno_pm4types.h"
@@ -544,13 +543,15 @@
/*
* if the context was not created with per context timestamp
* support, we must use the global timestamp since issueibcmds
- * will be returning that one.
+ * will be returning that one. Internal submissions likewise use the
+ * global timestamp.
*/
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS)
+ if ((context && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS)) &&
+ !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE))
context_id = context->id;
- if ((context && context->flags & CTXT_FLAGS_USER_GENERATED_TS) &&
- (!(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE))) {
+ if ((context && (context->flags & CTXT_FLAGS_USER_GENERATED_TS)) &&
+ !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
if (timestamp_cmp(rb->timestamp[context_id],
timestamp) >= 0) {
KGSL_DRV_ERR(rb->device,
@@ -574,6 +575,11 @@
/* Add CP_COND_EXEC commands to generate CP_INTERRUPT */
total_sizedwords += context ? 13 : 0;
+ if ((context) && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS) &&
+ (flags & (KGSL_CMD_FLAGS_INTERNAL_ISSUE |
+ KGSL_CMD_FLAGS_GET_INT)))
+ total_sizedwords += 2;
+
if (adreno_is_a3xx(adreno_dev))
total_sizedwords += 7;
@@ -584,11 +590,9 @@
total_sizedwords += 3; /* sop timestamp */
total_sizedwords += 4; /* eop timestamp */
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS &&
- !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
+ if (KGSL_MEMSTORE_GLOBAL != context_id)
total_sizedwords += 3; /* global timestamp without cache
* flush for non-zero context */
- }
if (adreno_is_a20x(adreno_dev))
total_sizedwords += 2; /* CACHE_FLUSH */
@@ -619,12 +623,12 @@
/* always increment the global timestamp. once. */
rb->timestamp[KGSL_MEMSTORE_GLOBAL]++;
- /* Do not update context's timestamp for internal submissions */
- if (context && !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
- if (context_id == KGSL_MEMSTORE_GLOBAL)
- rb->timestamp[context->id] =
- rb->timestamp[KGSL_MEMSTORE_GLOBAL];
- else if (context->flags & CTXT_FLAGS_USER_GENERATED_TS)
+ /*
+ * If global timestamp then we are not using per context ts for
+ * this submission
+ */
+ if (context_id != KGSL_MEMSTORE_GLOBAL) {
+ if (context->flags & CTXT_FLAGS_USER_GENERATED_TS)
rb->timestamp[context_id] = timestamp;
else
rb->timestamp[context_id]++;
@@ -695,9 +699,7 @@
KGSL_MEMSTORE_OFFSET(context_id, eoptimestamp)));
GSL_RB_WRITE(ringcmds, rcmd_gpu, timestamp);
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS
- && !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
-
+ if (KGSL_MEMSTORE_GLOBAL != context_id) {
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_MEM_WRITE, 2));
GSL_RB_WRITE(ringcmds, rcmd_gpu, (gpuaddr +
@@ -749,6 +751,19 @@
GSL_RB_WRITE(ringcmds, rcmd_gpu, CP_INT_CNTL__RB_INT_MASK);
}
+ /*
+ * If per context timestamps are enabled and any of the kgsl
+ * internal commands want INT to be generated trigger the INT
+ */
+ if ((context) && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS) &&
+ (flags & (KGSL_CMD_FLAGS_INTERNAL_ISSUE |
+ KGSL_CMD_FLAGS_GET_INT))) {
+ GSL_RB_WRITE(ringcmds, rcmd_gpu,
+ cp_type3_packet(CP_INTERRUPT, 1));
+ GSL_RB_WRITE(ringcmds, rcmd_gpu,
+ CP_INT_CNTL__RB_INT_MASK);
+ }
+
if (adreno_is_a3xx(adreno_dev)) {
/* Dummy set-constant to trigger context rollover */
GSL_RB_WRITE(ringcmds, rcmd_gpu,
@@ -1107,8 +1122,10 @@
ret = 0;
done:
- trace_kgsl_issueibcmds(device, context->id, ibdesc, numibs,
- *timestamp, flags, ret, drawctxt->type);
+ device->pwrctrl.irq_last = 0;
+ kgsl_trace_issueibcmds(device, context ? context->id : 0, ibdesc,
+ numibs, *timestamp, flags, ret,
+ drawctxt ? drawctxt->type : 0);
kfree(link);
return ret;
diff --git a/drivers/gpu/msm/adreno_snapshot.c b/drivers/gpu/msm/adreno_snapshot.c
index a76ed87..967e4ab 100644
--- a/drivers/gpu/msm/adreno_snapshot.c
+++ b/drivers/gpu/msm/adreno_snapshot.c
@@ -785,13 +785,13 @@
struct kgsl_memdesc *memdesc =
adreno_find_ctxtmem(device, ptbase, ibaddr,
- ibsize);
+ ibsize << 2);
/* IOMMU uses a NOP IB placed in setsate memory */
if (NULL == memdesc)
if (kgsl_gpuaddr_in_memdesc(
&device->mmu.setstate_memory,
- ibaddr, ibsize))
+ ibaddr, ibsize << 2))
memdesc = &device->mmu.setstate_memory;
/*
* The IB from CP_IB1_BASE and the IBs for legacy
diff --git a/drivers/gpu/msm/kgsl.c b/drivers/gpu/msm/kgsl.c
index 53ef392..7cd9943 100644
--- a/drivers/gpu/msm/kgsl.c
+++ b/drivers/gpu/msm/kgsl.c
@@ -53,6 +53,46 @@
static struct ion_client *kgsl_ion_client;
+/**
+ * kgsl_trace_issueibcmds() - Call trace_issueibcmds by proxy
+ * @device: KGSL device
+ * @id: ID of the context submitting the command
+ * @ibdesc: Pointer to the list of IB descriptors
+ * @numibs: Number of IBs in the list
+ * @timestamp: Timestamp assigned to the command batch
+ * @flags: Flags sent by the user
+ * @result: Result of the submission attempt
+ * @type: Type of context issuing the command
+ *
+ * Wrap the issueibcmds ftrace hook into a function that can be called from the
+ * GPU specific modules.
+ */
+void kgsl_trace_issueibcmds(struct kgsl_device *device, int id,
+ struct kgsl_ibdesc *ibdesc, int numibs,
+ unsigned int timestamp, unsigned int flags,
+ int result, unsigned int type)
+{
+ trace_kgsl_issueibcmds(device, id, ibdesc, numibs,
+ timestamp, flags, result, type);
+}
+EXPORT_SYMBOL(kgsl_trace_issueibcmds);
+
+/**
+ * kgsl_trace_regwrite() - Call the regwrite ftrace hook by proxy
+ * @device: KGSL device
+ * @offset: dword offset of the register being written
+ * @value: Value of the register being written
+ *
+ * Wrap the regwrite ftrace hook into a function that can be called from the
+ * GPU specific modules.
+ */
+void kgsl_trace_regwrite(struct kgsl_device *device, unsigned int offset,
+ unsigned int value)
+{
+ trace_kgsl_regwrite(device, offset, value);
+}
+EXPORT_SYMBOL(kgsl_trace_regwrite);
+
int kgsl_memfree_hist_init(void)
{
void *base;
@@ -104,10 +144,13 @@
* @ptbase - the pagetable base of the object
* @gpuaddr - the GPU address of the object
* @size - Size of the region to search
+ *
+ * Caller must kgsl_mem_entry_put() the returned entry when finished using it.
*/
-struct kgsl_mem_entry *kgsl_get_mem_entry(struct kgsl_device *device,
- unsigned int ptbase, unsigned int gpuaddr, unsigned int size)
+struct kgsl_mem_entry * __must_check
+kgsl_get_mem_entry(struct kgsl_device *device, unsigned int ptbase,
+ unsigned int gpuaddr, unsigned int size)
{
struct kgsl_process_private *priv;
struct kgsl_mem_entry *entry;
@@ -117,15 +160,12 @@
list_for_each_entry(priv, &kgsl_driver.process_list, list) {
if (!kgsl_mmu_pt_equal(&device->mmu, priv->pagetable, ptbase))
continue;
- spin_lock(&priv->mem_lock);
entry = kgsl_sharedmem_find_region(priv, gpuaddr, size);
if (entry) {
- spin_unlock(&priv->mem_lock);
mutex_unlock(&kgsl_driver.process_mutex);
return entry;
}
- spin_unlock(&priv->mem_lock);
}
mutex_unlock(&kgsl_driver.process_mutex);
@@ -413,27 +453,6 @@
kfree(context);
}
-static void kgsl_check_idle_locked(struct kgsl_device *device)
-{
- if (device->pwrctrl.nap_allowed == true &&
- device->state == KGSL_STATE_ACTIVE &&
- device->requested_state == KGSL_STATE_NONE) {
- kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
- kgsl_pwrscale_idle(device);
- if (kgsl_pwrctrl_sleep(device) != 0)
- mod_timer(&device->idle_timer,
- jiffies +
- device->pwrctrl.interval_timeout);
- }
-}
-
-static void kgsl_check_idle(struct kgsl_device *device)
-{
- mutex_lock(&device->mutex);
- kgsl_check_idle_locked(device);
- mutex_unlock(&device->mutex);
-}
-
struct kgsl_device *kgsl_get_device(int dev_idx)
{
int i;
@@ -496,13 +515,12 @@
policy_saved = device->pwrscale.policy;
device->pwrscale.policy = NULL;
kgsl_pwrctrl_request_state(device, KGSL_STATE_SUSPEND);
- /* Make sure no user process is waiting for a timestamp *
- * before supending */
- if (device->active_cnt != 0) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->suspend_gate);
- mutex_lock(&device->mutex);
- }
+ /*
+ * Make sure no user process is waiting for a timestamp
+ * before suspending.
+ */
+ kgsl_active_count_wait(device);
+
/* Don't let the timer wake us during suspended sleep. */
del_timer_sync(&device->idle_timer);
switch (device->state) {
@@ -513,6 +531,8 @@
device->ftbl->idle(device);
case KGSL_STATE_NAP:
case KGSL_STATE_SLEEP:
+ /* make sure power is on to stop the device */
+ kgsl_pwrctrl_enable(device);
/* Get the completion ready to be waited upon. */
INIT_COMPLETION(device->hwaccess_gate);
device->ftbl->suspend_context(device);
@@ -632,9 +652,16 @@
device->pwrctrl.restore_slumber = false;
if (device->pwrscale.policy == NULL)
kgsl_pwrctrl_pwrlevel_change(device, KGSL_PWRLEVEL_TURBO);
- kgsl_pwrctrl_wake(device);
+ if (kgsl_pwrctrl_wake(device) != 0)
+ return;
+ /*
+ * We don't have a way to go directly from
+ * a deeper sleep state to NAP, which is
+ * the desired state here.
+ */
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ kgsl_pwrctrl_sleep(device);
mutex_unlock(&device->mutex);
- kgsl_check_idle(device);
KGSL_PWR_WARN(device, "late resume end\n");
}
EXPORT_SYMBOL(kgsl_late_resume_driver);
@@ -745,7 +772,7 @@
filep->private_data = NULL;
mutex_lock(&device->mutex);
- kgsl_check_suspended(device);
+ kgsl_active_count_get(device);
while (1) {
context = idr_get_next(&device->context_idr, &next);
@@ -767,10 +794,17 @@
device->open_count--;
if (device->open_count == 0) {
+ BUG_ON(device->active_cnt > 1);
result = device->ftbl->stop(device);
kgsl_pwrctrl_set_state(device, KGSL_STATE_INIT);
+ /*
+ * active_cnt special case: we just stopped the device,
+ * so no need to use kgsl_active_count_put()
+ */
+ device->active_cnt--;
+ } else {
+ kgsl_active_count_put(device);
}
-
mutex_unlock(&device->mutex);
kfree(dev_priv);
@@ -816,9 +850,14 @@
filep->private_data = dev_priv;
mutex_lock(&device->mutex);
- kgsl_check_suspended(device);
if (device->open_count == 0) {
+ /*
+ * active_cnt special case: we are starting up for the first
+ * time, so use this sequence instead of the kgsl_pwrctrl_wake()
+ * which will be called by kgsl_active_count_get().
+ */
+ device->active_cnt++;
kgsl_sharedmem_set(&device->memstore, 0, 0,
device->memstore.size);
@@ -831,6 +870,7 @@
goto err_freedevpriv;
kgsl_pwrctrl_set_state(device, KGSL_STATE_ACTIVE);
+ kgsl_active_count_put(device);
}
device->open_count++;
mutex_unlock(&device->mutex);
@@ -856,10 +896,15 @@
mutex_lock(&device->mutex);
device->open_count--;
if (device->open_count == 0) {
+ /* make sure power is on to stop the device */
+ kgsl_pwrctrl_enable(device);
result = device->ftbl->stop(device);
kgsl_pwrctrl_set_state(device, KGSL_STATE_INIT);
}
err_freedevpriv:
+ /* only the first open takes an active count */
+ if (device->open_count == 0)
+ device->active_cnt--;
mutex_unlock(&device->mutex);
filep->private_data = NULL;
kfree(dev_priv);
@@ -868,8 +913,17 @@
return result;
}
-/*call with private->mem_lock locked */
-struct kgsl_mem_entry *
+/**
+ * kgsl_sharedmem_find_region() - Find a gpu memory allocation
+ *
+ * @private: private data for the process to check.
+ * @gpuaddr: start address of the region
+ * @size: size of the region
+ *
+ * Find a gpu allocation. Caller must kgsl_mem_entry_put()
+ * the returned entry when finished using it.
+ */
+struct kgsl_mem_entry * __must_check
kgsl_sharedmem_find_region(struct kgsl_process_private *private,
unsigned int gpuaddr, size_t size)
{
@@ -878,46 +932,57 @@
if (!kgsl_mmu_gpuaddr_in_range(private->pagetable, gpuaddr))
return NULL;
+ spin_lock(&private->mem_lock);
while (node != NULL) {
struct kgsl_mem_entry *entry;
entry = rb_entry(node, struct kgsl_mem_entry, node);
-
- if (kgsl_gpuaddr_in_memdesc(&entry->memdesc, gpuaddr, size))
+ if (kgsl_gpuaddr_in_memdesc(&entry->memdesc, gpuaddr, size)) {
+ kgsl_mem_entry_get(entry);
+ spin_unlock(&private->mem_lock);
return entry;
-
+ }
if (gpuaddr < entry->memdesc.gpuaddr)
node = node->rb_left;
else if (gpuaddr >=
(entry->memdesc.gpuaddr + entry->memdesc.size))
node = node->rb_right;
else {
+ spin_unlock(&private->mem_lock);
return NULL;
}
}
+ spin_unlock(&private->mem_lock);
return NULL;
}
EXPORT_SYMBOL(kgsl_sharedmem_find_region);
-/*call with private->mem_lock locked */
-static inline struct kgsl_mem_entry *
+/**
+ * kgsl_sharedmem_find() - Find a gpu memory allocation
+ *
+ * @private: private data for the process to check.
+ * @gpuaddr: start address of the region
+ *
+ * Find a gpu allocation. Caller must kgsl_mem_entry_put()
+ * the returned entry when finished using it.
+ */
+static inline struct kgsl_mem_entry * __must_check
kgsl_sharedmem_find(struct kgsl_process_private *private, unsigned int gpuaddr)
{
return kgsl_sharedmem_find_region(private, gpuaddr, 1);
}
/**
- * kgsl_sharedmem_region_empty - Check if an addression region is empty
+ * kgsl_sharedmem_region_empty() - Check if an address region is empty
*
* @private: private data for the process to check.
* @gpuaddr: start address of the region
* @size: length of the region.
*
* Checks that there are no existing allocations within an address
- * region. Note that unlike other kgsl_sharedmem* search functions,
- * this one manages locking on its own.
+ * region.
*/
int
kgsl_sharedmem_region_empty(struct kgsl_process_private *private,
@@ -961,19 +1026,24 @@
}
/**
- * kgsl_sharedmem_find_id - find a memory entry by id
+ * kgsl_sharedmem_find_id() - find a memory entry by id
* @process: the owning process
* @id: id to find
*
* @returns - the mem_entry or NULL
+ *
+ * Caller must kgsl_mem_entry_put() the returned entry when finished
+ * using it.
*/
-static inline struct kgsl_mem_entry *
+static inline struct kgsl_mem_entry * __must_check
kgsl_sharedmem_find_id(struct kgsl_process_private *process, unsigned int id)
{
struct kgsl_mem_entry *entry;
rcu_read_lock();
entry = idr_find(&process->mem_idr, id);
+ if (entry)
+ kgsl_mem_entry_get(entry);
rcu_read_unlock();
return entry;
@@ -1073,10 +1143,6 @@
struct kgsl_device *device = dev_priv->device;
unsigned int context_id = context ? context->id : KGSL_MEMSTORE_GLOBAL;
- /* Set the active count so that suspend doesn't do the wrong thing */
-
- device->active_cnt++;
-
trace_kgsl_waittimestamp_entry(device, context_id,
kgsl_readtimestamp(device, context,
KGSL_TIMESTAMP_RETIRED),
@@ -1090,9 +1156,6 @@
KGSL_TIMESTAMP_RETIRED),
result);
- /* Fire off any pending suspend operations that are in flight */
- kgsl_active_count_put(dev_priv->device);
-
return result;
}
@@ -1276,15 +1339,12 @@
struct kgsl_device *device = dev_priv->device;
unsigned int context_id = context ? context->id : KGSL_MEMSTORE_GLOBAL;
- spin_lock(&dev_priv->process_priv->mem_lock);
entry = kgsl_sharedmem_find(dev_priv->process_priv, gpuaddr);
- spin_unlock(&dev_priv->process_priv->mem_lock);
if (!entry) {
KGSL_DRV_ERR(dev_priv->device,
"invalid gpuaddr %08x\n", gpuaddr);
- result = -EINVAL;
- goto done;
+ return -EINVAL;
}
trace_kgsl_mem_timestamp_queue(device, entry, context_id,
kgsl_readtimestamp(device, context,
@@ -1292,7 +1352,7 @@
timestamp);
result = kgsl_add_event(dev_priv->device, context_id, timestamp,
kgsl_freemem_event_cb, entry, dev_priv);
-done:
+ kgsl_mem_entry_put(entry);
return result;
}
@@ -1378,15 +1438,13 @@
struct kgsl_process_private *private = dev_priv->process_priv;
struct kgsl_mem_entry *entry = NULL;
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find(private, param->gpuaddr);
- spin_unlock(&private->mem_lock);
-
if (!entry) {
KGSL_MEM_INFO(dev_priv->device, "invalid gpuaddr %08x\n",
param->gpuaddr);
return -EINVAL;
}
+
trace_kgsl_mem_free(entry);
kgsl_memfree_hist_set_event(entry->priv->pid,
@@ -1395,6 +1453,7 @@
entry->memdesc.flags);
kgsl_mem_entry_detach_process(entry);
+ kgsl_mem_entry_put(entry);
return 0;
}
@@ -1419,6 +1478,7 @@
entry->memdesc.flags);
kgsl_mem_entry_detach_process(entry);
+ kgsl_mem_entry_put(entry);
return 0;
}
@@ -1804,6 +1864,9 @@
if (!kgsl_mmu_use_cpu_map(private->pagetable->mmu))
entry->memdesc.flags &= ~KGSL_MEMFLAGS_USE_CPU_MAP;
+ if (kgsl_mmu_get_mmutype() == KGSL_MMU_TYPE_IOMMU)
+ entry->memdesc.priv |= KGSL_MEMDESC_GUARD_PAGE;
+
switch (memtype) {
case KGSL_USER_MEM_TYPE_PMEM:
if (param->fd == 0 || param->len == 0)
@@ -1887,7 +1950,6 @@
trace_kgsl_mem_map(entry, param->fd);
- kgsl_check_idle(dev_priv->device);
return result;
error_unmap:
@@ -1907,7 +1969,6 @@
}
error:
kfree(entry);
- kgsl_check_idle(dev_priv->device);
return result;
}
@@ -1954,6 +2015,7 @@
struct kgsl_gpumem_sync_cache *param = data;
struct kgsl_process_private *private = dev_priv->process_priv;
struct kgsl_mem_entry *entry = NULL;
+ long ret;
if (param->id != 0) {
entry = kgsl_sharedmem_find_id(private, param->id);
@@ -1963,9 +2025,7 @@
return -EINVAL;
}
} else if (param->gpuaddr != 0) {
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find(private, param->gpuaddr);
- spin_unlock(&private->mem_lock);
if (entry == NULL) {
KGSL_MEM_INFO(dev_priv->device,
"can't find gpuaddr %x\n",
@@ -1976,7 +2036,9 @@
return -EINVAL;
}
- return _kgsl_gpumem_sync_cache(entry, param->op);
+ ret = _kgsl_gpumem_sync_cache(entry, param->op);
+ kgsl_mem_entry_put(entry);
+ return ret;
}
/* Legacy cache function, does a flush (clean + invalidate) */
@@ -1988,10 +2050,9 @@
struct kgsl_sharedmem_free *param = data;
struct kgsl_process_private *private = dev_priv->process_priv;
struct kgsl_mem_entry *entry = NULL;
+ long ret;
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find(private, param->gpuaddr);
- spin_unlock(&private->mem_lock);
if (entry == NULL) {
KGSL_MEM_INFO(dev_priv->device,
"can't find gpuaddr %x\n",
@@ -1999,7 +2060,9 @@
return -EINVAL;
}
- return _kgsl_gpumem_sync_cache(entry, KGSL_GPUMEM_CACHE_FLUSH);
+ ret = _kgsl_gpumem_sync_cache(entry, KGSL_GPUMEM_CACHE_FLUSH);
+ kgsl_mem_entry_put(entry);
+ return ret;
}
/*
@@ -2028,6 +2091,9 @@
if (entry == NULL)
return -ENOMEM;
+ if (kgsl_mmu_get_mmutype() == KGSL_MMU_TYPE_IOMMU)
+ entry->memdesc.priv |= KGSL_MEMDESC_GUARD_PAGE;
+
result = kgsl_allocate_user(&entry->memdesc, private->pagetable, size,
flags);
if (result != 0)
@@ -2035,7 +2101,6 @@
entry->memtype = KGSL_MEM_ENTRY_KERNEL;
- kgsl_check_idle(dev_priv->device);
*ret_entry = entry;
return result;
err:
@@ -2138,9 +2203,7 @@
return -EINVAL;
}
} else if (param->gpuaddr != 0) {
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find(private, param->gpuaddr);
- spin_unlock(&private->mem_lock);
if (entry == NULL) {
KGSL_MEM_INFO(dev_priv->device,
"can't find gpuaddr %lx\n",
@@ -2156,6 +2219,8 @@
param->size = entry->memdesc.size;
param->mmapsize = kgsl_memdesc_mmapsize(&entry->memdesc);
param->useraddr = entry->memdesc.useraddr;
+
+ kgsl_mem_entry_put(entry);
return result;
}
@@ -2167,14 +2232,14 @@
struct kgsl_process_private *private = dev_priv->process_priv;
struct kgsl_mem_entry *entry = NULL;
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find_region(private, param->gpuaddr, param->len);
- if (entry)
- kgsl_cffdump_syncmem(dev_priv, &entry->memdesc, param->gpuaddr,
- param->len, true);
- else
- result = -EINVAL;
- spin_unlock(&private->mem_lock);
+ if (!entry)
+ return -EINVAL;
+
+ kgsl_cffdump_syncmem(dev_priv, &entry->memdesc, param->gpuaddr,
+ param->len, true);
+
+ kgsl_mem_entry_put(entry);
return result;
}
@@ -2370,7 +2435,7 @@
kgsl_ioctl_cff_user_event, 0),
KGSL_IOCTL_FUNC(IOCTL_KGSL_TIMESTAMP_EVENT,
kgsl_ioctl_timestamp_event,
- KGSL_IOCTL_LOCK),
+ KGSL_IOCTL_LOCK | KGSL_IOCTL_WAKE),
KGSL_IOCTL_FUNC(IOCTL_KGSL_SETPROPERTY,
kgsl_ioctl_device_setproperty,
KGSL_IOCTL_LOCK | KGSL_IOCTL_WAKE),
@@ -2462,14 +2527,19 @@
if (lock) {
mutex_lock(&dev_priv->device->mutex);
- if (use_hw)
- kgsl_check_suspended(dev_priv->device);
+ if (use_hw) {
+ ret = kgsl_active_count_get(dev_priv->device);
+ if (ret < 0)
+ goto unlock;
+ }
}
ret = func(dev_priv, cmd, uptr);
+unlock:
if (lock) {
- kgsl_check_idle_locked(dev_priv->device);
+ if (use_hw)
+ kgsl_active_count_put(dev_priv->device);
mutex_unlock(&dev_priv->device->mutex);
}
@@ -2561,21 +2631,17 @@
struct kgsl_mem_entry **out_entry, unsigned long pgoff,
unsigned long len)
{
- int ret = -EINVAL;
+ int ret = 0;
struct kgsl_mem_entry *entry;
entry = kgsl_sharedmem_find_id(private, pgoff);
if (entry == NULL) {
- spin_lock(&private->mem_lock);
entry = kgsl_sharedmem_find(private, pgoff << PAGE_SHIFT);
- spin_unlock(&private->mem_lock);
}
if (!entry)
return -EINVAL;
- kgsl_mem_entry_get(entry);
-
if (!entry->memdesc.ops ||
!entry->memdesc.ops->vmflags ||
!entry->memdesc.ops->vmfault) {
@@ -2600,12 +2666,18 @@
return ret;
}
+static inline bool
+mmap_range_valid(unsigned long addr, unsigned long len)
+{
+ return (addr + len) > addr && (addr + len) < TASK_SIZE;
+}
+
static unsigned long
kgsl_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
{
- unsigned long ret = 0;
+ unsigned long ret = 0, orig_len = len;
unsigned long vma_offset = pgoff << PAGE_SHIFT;
struct kgsl_device_private *dev_priv = file->private_data;
struct kgsl_process_private *private = dev_priv->process_priv;
@@ -2650,10 +2722,26 @@
if (align)
len += 1 << align;
+
+ if (!mmap_range_valid(addr, len))
+ addr = 0;
do {
ret = get_unmapped_area(NULL, addr, len, pgoff, flags);
- if (IS_ERR_VALUE(ret))
+ if (IS_ERR_VALUE(ret)) {
+ /*
+ * If we are really fragmented, there may not be room
+ * for the alignment padding, so try again without it.
+ */
+ if (!retry && (ret == (unsigned long)-ENOMEM)
+ && (align > PAGE_SHIFT)) {
+ align = PAGE_SHIFT;
+ addr = 0;
+ len = orig_len;
+ retry = 1;
+ continue;
+ }
break;
+ }
if (align)
ret = ALIGN(ret, (1 << align));
@@ -2675,13 +2763,13 @@
* the whole address space at least once by wrapping
* back around once.
*/
- if (!retry && (addr + len >= TASK_SIZE)) {
+ if (!retry && !mmap_range_valid(addr, len)) {
addr = 0;
retry = 1;
} else {
ret = -EBUSY;
}
- } while (addr + len < TASK_SIZE);
+ } while (mmap_range_valid(addr, len));
if (IS_ERR_VALUE(ret))
KGSL_MEM_INFO(device,
@@ -2706,6 +2794,10 @@
if (vma_offset == device->memstore.gpuaddr)
return kgsl_mmap_memstore(device, vma);
+ /*
+ * The reference count on the entry that we get from
+ * get_mmap_entry() will be held until kgsl_gpumem_vm_close().
+ */
ret = get_mmap_entry(private, &entry, vma->vm_pgoff,
vma->vm_end - vma->vm_start);
if (ret)
@@ -2755,10 +2847,6 @@
int sglen = entry->memdesc.sglen;
unsigned long addr = vma->vm_start;
- /* don't map in the guard page, it should always fault */
- if (kgsl_memdesc_has_guard_page(&entry->memdesc))
- sglen--;
-
for_each_sg(entry->memdesc.sg, s, sglen, i) {
int j;
for (j = 0; j < (sg_dma_len(s) >> PAGE_SHIFT); j++) {
@@ -2775,7 +2863,6 @@
entry->memdesc.useraddr = vma->vm_start;
trace_kgsl_mem_mmap(entry);
-
return 0;
}
@@ -3032,11 +3119,7 @@
/* For a manual dump, make sure that the system is idle */
if (manual) {
- if (device->active_cnt != 0) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->suspend_gate);
- mutex_lock(&device->mutex);
- }
+ kgsl_active_count_wait(device);
if (device->state == KGSL_STATE_ACTIVE)
kgsl_idle(device);
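The kgsl.c hunks above convert every kgsl_sharedmem_find*() lookup into a
reference-counted get that the caller must balance with
kgsl_mem_entry_put(). A minimal, self-contained sketch of that discipline,
with entry_find/entry_put as illustrative stand-ins rather than the
driver's API:

struct entry { int refcount; };

static struct entry *entry_find(struct entry *candidate)
{
        if (candidate)
                candidate->refcount++;          /* a successful lookup hands out a reference */
        return candidate;
}

static void entry_put(struct entry *e)
{
        if (e && --e->refcount == 0) {
                /* last reference dropped: this is where the object would be freed */
        }
}

static int use_entry(struct entry *candidate)
{
        struct entry *e = entry_find(candidate);

        if (!e)
                return -1;                      /* ~ the -EINVAL paths above */
        /* ... operate on e while the reference is held ... */
        entry_put(e);                           /* mirrors kgsl_mem_entry_put() */
        return 0;
}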
diff --git a/drivers/gpu/msm/kgsl.h b/drivers/gpu/msm/kgsl.h
index c568db5..abe9100 100644
--- a/drivers/gpu/msm/kgsl.h
+++ b/drivers/gpu/msm/kgsl.h
@@ -226,6 +226,14 @@
void kgsl_early_suspend_driver(struct early_suspend *h);
void kgsl_late_resume_driver(struct early_suspend *h);
+void kgsl_trace_regwrite(struct kgsl_device *device, unsigned int offset,
+ unsigned int value);
+
+void kgsl_trace_issueibcmds(struct kgsl_device *device, int id,
+ struct kgsl_ibdesc *ibdesc, int numibs,
+ unsigned int timestamp, unsigned int flags,
+ int result, unsigned int type);
+
#ifdef CONFIG_MSM_KGSL_DRM
extern int kgsl_drm_init(struct platform_device *dev);
extern void kgsl_drm_exit(void);
diff --git a/drivers/gpu/msm/kgsl_cffdump.c b/drivers/gpu/msm/kgsl_cffdump.c
index 6dc2ccc..ef2a19a 100644
--- a/drivers/gpu/msm/kgsl_cffdump.c
+++ b/drivers/gpu/msm/kgsl_cffdump.c
@@ -363,9 +363,11 @@
if (KGSL_MMU_TYPE_IOMMU == kgsl_mmu_get_mmutype()) {
kgsl_cffdump_memory_base(device->id,
- kgsl_mmu_get_base_addr(&device->mmu),
- kgsl_mmu_get_ptsize(&device->mmu) +
- KGSL_IOMMU_GLOBAL_MEM_SIZE, adreno_dev->gmem_size);
+ KGSL_PAGETABLE_BASE,
+ KGSL_IOMMU_GLOBAL_MEM_BASE +
+ KGSL_IOMMU_GLOBAL_MEM_SIZE -
+ KGSL_PAGETABLE_BASE,
+ adreno_dev->gmem_size);
} else {
kgsl_cffdump_memory_base(device->id,
kgsl_mmu_get_base_addr(&device->mmu),
diff --git a/drivers/gpu/msm/kgsl_device.h b/drivers/gpu/msm/kgsl_device.h
index 0d11660..ac82820 100644
--- a/drivers/gpu/msm/kgsl_device.h
+++ b/drivers/gpu/msm/kgsl_device.h
@@ -454,23 +454,4 @@
kref_put(&context->refcount, kgsl_context_destroy);
}
-/**
- * kgsl_active_count_put - Decrease the device active count
- * @device: Pointer to a KGSL device
- *
- * Decrease the active count for the KGSL device and trigger the suspend_gate
- * completion if it hits zero
- */
-static inline void
-kgsl_active_count_put(struct kgsl_device *device)
-{
- if (device->active_cnt == 1)
- INIT_COMPLETION(device->suspend_gate);
-
- device->active_cnt--;
-
- if (device->active_cnt == 0)
- complete(&device->suspend_gate);
-}
-
#endif /* __KGSL_DEVICE_H */
diff --git a/drivers/gpu/msm/kgsl_events.c b/drivers/gpu/msm/kgsl_events.c
index 9e9c0da..d872783 100644
--- a/drivers/gpu/msm/kgsl_events.c
+++ b/drivers/gpu/msm/kgsl_events.c
@@ -51,6 +51,7 @@
void (*cb)(struct kgsl_device *, void *, u32, u32), void *priv,
void *owner)
{
+ int ret;
struct kgsl_event *event;
unsigned int cur_ts;
struct kgsl_context *context = NULL;
@@ -82,6 +83,16 @@
if (event == NULL)
return -ENOMEM;
+ /*
+ * Increase the active count on the device to avoid going into power
+ * saving modes while events are pending
+ */
+ ret = kgsl_active_count_get_light(device);
+ if (ret < 0) {
+ kfree(event);
+ return ret;
+ }
+
event->context = context;
event->timestamp = ts;
event->priv = priv;
@@ -112,13 +123,6 @@
} else
_add_event_to_list(&device->events, event);
- /*
- * Increase the active count on the device to avoid going into power
- * saving modes while events are pending
- */
-
- device->active_cnt++;
-
queue_work(device->work_queue, &device->ts_expired_ws);
return 0;
}
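The kgsl_events.c hunk above takes the power reference before the event is
linked onto any list, so a failure can still be unwound with a plain free.
A hedged userspace-style sketch of the same ordering; fake_event,
can_get_ref and the malloc/free pairing stand in for the kernel
allocations and kgsl_active_count_get_light():

#include <stdlib.h>

struct fake_event { int timestamp; };

static int add_event_sketch(int can_get_ref, int *active_cnt,
                            struct fake_event **out)
{
        struct fake_event *event = malloc(sizeof(*event));

        if (!event)
                return -1;              /* ~ -ENOMEM */
        if (!can_get_ref) {             /* ~ kgsl_active_count_get_light() < 0 */
                free(event);            /* nothing is published yet, a plain free unwinds */
                return -1;
        }
        (*active_cnt)++;                /* the pending event now pins the device awake */
        event->timestamp = 0;
        *out = event;                   /* only now hand the event out / queue it */
        return 0;
}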
diff --git a/drivers/gpu/msm/kgsl_gpummu.c b/drivers/gpu/msm/kgsl_gpummu.c
index 5cc0dff..6f139b9 100644
--- a/drivers/gpu/msm/kgsl_gpummu.c
+++ b/drivers/gpu/msm/kgsl_gpummu.c
@@ -615,7 +615,7 @@
{
unsigned int numpages;
unsigned int pte, ptefirst, ptelast, superpte;
- unsigned int range = kgsl_sg_size(memdesc->sg, memdesc->sglen);
+ unsigned int range = memdesc->size;
struct kgsl_gpummu_pt *gpummu_pt = pt->priv;
/* All GPU addresses as assigned are page aligned, but some
@@ -759,7 +759,9 @@
.mmu_disable_clk_on_ts = NULL,
.mmu_get_pt_lsb = NULL,
.mmu_get_reg_gpuaddr = NULL,
+ .mmu_get_reg_ahbaddr = NULL,
.mmu_get_num_iommu_units = kgsl_gpummu_get_num_iommu_units,
+ .mmu_hw_halt_supported = NULL,
};
struct kgsl_mmu_pt_ops gpummu_pt_ops = {
diff --git a/drivers/gpu/msm/kgsl_iommu.c b/drivers/gpu/msm/kgsl_iommu.c
index 739fcff..3057723 100644
--- a/drivers/gpu/msm/kgsl_iommu.c
+++ b/drivers/gpu/msm/kgsl_iommu.c
@@ -37,33 +37,60 @@
static struct kgsl_iommu_register_list kgsl_iommuv0_reg[KGSL_IOMMU_REG_MAX] = {
- { 0, 0, 0 }, /* GLOBAL_BASE */
- { 0x10, 0x0003FFFF, 14 }, /* TTBR0 */
- { 0x14, 0x0003FFFF, 14 }, /* TTBR1 */
- { 0x20, 0, 0 }, /* FSR */
- { 0x800, 0, 0 }, /* TLBIALL */
- { 0x820, 0, 0 }, /* RESUME */
- { 0x03C, 0, 0 }, /* TLBLKCR */
- { 0x818, 0, 0 }, /* V2PUR */
- { 0x2C, 0, 0 }, /* FSYNR0 */
- { 0x2C, 0, 0 }, /* FSYNR0 */
+ { 0, 0 }, /* GLOBAL_BASE */
+ { 0x10, 1 }, /* TTBR0 */
+ { 0x14, 1 }, /* TTBR1 */
+ { 0x20, 1 }, /* FSR */
+ { 0x800, 1 }, /* TLBIALL */
+ { 0x820, 1 }, /* RESUME */
+ { 0x03C, 1 }, /* TLBLKCR */
+ { 0x818, 1 }, /* V2PUR */
+ { 0x2C, 1 }, /* FSYNR0 */
+ { 0x30, 1 }, /* FSYNR1 */
+ { 0, 0 }, /* TLBSYNC, not in v0 */
+ { 0, 0 }, /* TLBSTATUS, not in v0 */
+ { 0, 0 } /* IMPLDEF_MICRO_MMU_CTRL, not in v0 */
};
static struct kgsl_iommu_register_list kgsl_iommuv1_reg[KGSL_IOMMU_REG_MAX] = {
- { 0, 0, 0 }, /* GLOBAL_BASE */
- { 0x20, 0x00FFFFFF, 14 }, /* TTBR0 */
- { 0x28, 0x00FFFFFF, 14 }, /* TTBR1 */
- { 0x58, 0, 0 }, /* FSR */
- { 0x618, 0, 0 }, /* TLBIALL */
- { 0x008, 0, 0 }, /* RESUME */
- { 0, 0, 0 }, /* TLBLKCR */
- { 0, 0, 0 }, /* V2PUR */
- { 0x68, 0, 0 }, /* FSYNR0 */
- { 0x6C, 0, 0 } /* FSYNR1 */
+ { 0, 0 }, /* GLOBAL_BASE */
+ { 0x20, 1 }, /* TTBR0 */
+ { 0x28, 1 }, /* TTBR1 */
+ { 0x58, 1 }, /* FSR */
+ { 0x618, 1 }, /* TLBIALL */
+ { 0x008, 1 }, /* RESUME */
+ { 0, 0 }, /* TLBLKCR not in V1 */
+ { 0, 0 }, /* V2PUR not in V1 */
+ { 0x68, 0 }, /* FSYNR0 */
+ { 0x6C, 0 }, /* FSYNR1 */
+ { 0x7F0, 1 }, /* TLBSYNC */
+ { 0x7F4, 1 }, /* TLBSTATUS */
+ { 0x2000, 0 } /* IMPLDEF_MICRO_MMU_CTRL */
};
+static struct iommu_access_ops *iommu_access_ops;
+
+static void _iommu_lock(void)
+{
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_acquire)
+ iommu_access_ops->iommu_lock_acquire();
+}
+
+static void _iommu_unlock(void)
+{
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_release)
+ iommu_access_ops->iommu_lock_release();
+}
+
struct remote_iommu_petersons_spinlock kgsl_iommu_sync_lock_vars;
+/*
+ * One page allocation for a guard region to protect against over-zealous
+ * GPU pre-fetch
+ */
+
+static struct page *kgsl_guard_page;
+
static int get_iommu_unit(struct device *dev, struct kgsl_mmu **mmu_out,
struct kgsl_iommu_unit **iommu_unit_out)
{
@@ -248,6 +275,8 @@
void *base = kgsl_driver.memfree_hist.base_hist_rb;
struct kgsl_memfree_hist_elem *wptr;
struct kgsl_memfree_hist_elem *p;
+ char name[32];
+ memset(name, 0, sizeof(name));
mutex_lock(&kgsl_driver.memfree_hist_mutex);
wptr = kgsl_driver.memfree_hist.wptr;
@@ -257,12 +286,15 @@
if (addr >= p->gpuaddr &&
addr < (p->gpuaddr + p->size)) {
+ kgsl_get_memory_usage(name, sizeof(name) - 1,
+ p->flags);
KGSL_LOG_DUMP(iommu_dev->kgsldev,
"---- premature free ----\n");
KGSL_LOG_DUMP(iommu_dev->kgsldev,
- "[%8.8X-%8.8X] was already freed by pid %d\n",
+ "[%8.8X-%8.8X] (%s) was already freed by pid %d\n",
p->gpuaddr,
p->gpuaddr + p->size,
+ name,
p->pid);
}
p++;
@@ -327,22 +359,17 @@
KGSL_IOMMU_V1_FSYNR0_WNR_SHIFT)) ? 1 : 0);
pid = kgsl_mmu_get_ptname_from_ptbase(mmu, ptbase);
- KGSL_MEM_CRIT(iommu_dev->kgsldev,
- "GPU PAGE FAULT: addr = %lX pid = %d\n", addr, pid);
- KGSL_MEM_CRIT(iommu_dev->kgsldev,
- "context = %d FSR = %X FSYNR0 = %X FSYNR1 = %X(%s fault)\n",
- iommu_dev->ctx_id, fsr, fsynr0, fsynr1,
- write ? "write" : "read");
if (adreno_dev->ft_pf_policy & KGSL_FT_PAGEFAULT_LOG_ONE_PER_PAGE)
no_page_fault_log = kgsl_mmu_log_fault_addr(mmu, ptbase, addr);
if (!no_page_fault_log) {
KGSL_MEM_CRIT(iommu_dev->kgsldev,
- "GPU PAGE FAULT: addr = %lX pid = %d\n",
- addr, kgsl_mmu_get_ptname_from_ptbase(mmu, ptbase));
- KGSL_MEM_CRIT(iommu_dev->kgsldev, "context = %d FSR = %X\n",
- iommu_dev->ctx_id, fsr);
+ "GPU PAGE FAULT: addr = %lX pid = %d\n", addr, pid);
+ KGSL_MEM_CRIT(iommu_dev->kgsldev,
+ "context = %d FSR = %X FSYNR0 = %X FSYNR1 = %X(%s fault)\n",
+ iommu_dev->ctx_id, fsr, fsynr0, fsynr1,
+ write ? "write" : "read");
_check_if_freed(iommu_dev, addr, pid);
@@ -370,18 +397,20 @@
kgsl_sharedmem_readl(&device->memstore, &curr_context_id,
KGSL_MEMSTORE_OFFSET(KGSL_MEMSTORE_GLOBAL, current_context));
context = idr_find(&device->context_idr, curr_context_id);
- if (context != NULL)
+ if (context != NULL) {
curr_context = context->devctxt;
- kgsl_sharedmem_readl(&device->memstore, &curr_global_ts,
- KGSL_MEMSTORE_OFFSET(KGSL_MEMSTORE_GLOBAL, eoptimestamp));
+ kgsl_sharedmem_readl(&device->memstore, &curr_global_ts,
+ KGSL_MEMSTORE_OFFSET(KGSL_MEMSTORE_GLOBAL,
+ eoptimestamp));
- /*
- * Store pagefault's timestamp in adreno context,
- * this information will be used in GFT
- */
- curr_context->pagefault = 1;
- curr_context->pagefault_ts = curr_global_ts;
+ /*
+ * Store pagefault's timestamp in adreno context,
+ * this information will be used in GFT
+ */
+ curr_context->pagefault = 1;
+ curr_context->pagefault_ts = curr_global_ts;
+ }
trace_kgsl_mmu_pagefault(iommu_dev->kgsldev, addr,
kgsl_mmu_get_ptname_from_ptbase(mmu, ptbase),
@@ -582,18 +611,13 @@
struct kgsl_pagetable *pt,
unsigned int pt_base)
{
- struct kgsl_iommu *iommu = mmu->priv;
struct kgsl_iommu_pt *iommu_pt = pt ? pt->priv : NULL;
unsigned int domain_ptbase = iommu_pt ?
iommu_get_pt_base_addr(iommu_pt->domain) : 0;
/* Only compare the valid address bits of the pt_base */
- domain_ptbase &=
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ domain_ptbase &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
- pt_base &=
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ pt_base &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
return domain_ptbase && pt_base &&
(domain_ptbase == pt_base);
@@ -609,8 +633,10 @@
{
struct kgsl_iommu_pt *iommu_pt = pt->priv;
if (iommu_pt->domain)
- iommu_domain_free(iommu_pt->domain);
+ msm_unregister_domain(iommu_pt->domain);
+
kfree(iommu_pt);
+ iommu_pt = NULL;
}
/*
@@ -649,15 +675,18 @@
domain_num = msm_register_domain(&kgsl_layout);
if (domain_num >= 0) {
iommu_pt->domain = msm_get_iommu_domain(domain_num);
- iommu_set_fault_handler(iommu_pt->domain,
- kgsl_iommu_fault_handler, NULL);
- } else {
- KGSL_CORE_ERR("Failed to create iommu domain\n");
- kfree(iommu_pt);
- return NULL;
+
+ if (iommu_pt->domain) {
+ iommu_set_fault_handler(iommu_pt->domain,
+ kgsl_iommu_fault_handler, NULL);
+
+ return iommu_pt;
+ }
}
- return iommu_pt;
+ KGSL_CORE_ERR("Failed to create iommu domain\n");
+ kfree(iommu_pt);
+ return NULL;
}
/*
@@ -889,11 +918,14 @@
if (iommu->sync_lock_initialized)
return status;
- /* Get the physical address of the Lock variables */
- lock_phy_addr = (msm_iommu_lock_initialize()
+ iommu_access_ops = get_iommu_access_ops_v0();
+
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_initialize)
+ lock_phy_addr = (iommu_access_ops->iommu_lock_initialize()
- MSM_SHARED_RAM_BASE + msm_shared_ram_phys);
if (!lock_phy_addr) {
+ iommu_access_ops = NULL;
KGSL_DRV_ERR(mmu->device,
"GPU CPU sync lock is not supported by kernel\n");
return -ENXIO;
@@ -911,8 +943,10 @@
iommu->sync_lock_desc.physaddr,
iommu->sync_lock_desc.size);
- if (status)
+ if (status) {
+ iommu_access_ops = NULL;
return status;
+ }
/* Flag Sync Lock is Initialized */
iommu->sync_lock_initialized = 1;
@@ -1092,6 +1126,9 @@
iommu_unit->reg_map.size);
if (ret)
goto err;
+
+ iommu_unit->iommu_halt_enable = data.iommu_halt_enable;
+ iommu_unit->ahb_base = data.physstart - mmu->device->reg_phys;
}
iommu->unit_count = pdata_dev->iommu_count;
return ret;
@@ -1118,11 +1155,9 @@
static unsigned int kgsl_iommu_get_pt_base_addr(struct kgsl_mmu *mmu,
struct kgsl_pagetable *pt)
{
- struct kgsl_iommu *iommu = mmu->priv;
struct kgsl_iommu_pt *iommu_pt = pt->priv;
return iommu_get_pt_base_addr(iommu_pt->domain) &
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
}
/*
@@ -1241,6 +1276,30 @@
}
+/*
+ * kgsl_iommu_get_reg_ahbaddr - Returns the ahb address of the register
+ * @mmu - Pointer to mmu structure
+ * @iommu_unit - The iommu unit for which base address is requested
+ * @ctx_id - The context ID of the IOMMU ctx
+ * @reg - The register for which address is required
+ *
+ * Return - The AHB address of the register, which can be used in a type0 packet
+ */
+static unsigned int kgsl_iommu_get_reg_ahbaddr(struct kgsl_mmu *mmu,
+ int iommu_unit, int ctx_id,
+ enum kgsl_iommu_reg_map reg)
+{
+ struct kgsl_iommu *iommu = mmu->priv;
+
+ if (iommu->iommu_reg_list[reg].ctx_reg)
+ return iommu->iommu_units[iommu_unit].ahb_base +
+ iommu->iommu_reg_list[reg].reg_offset +
+ (ctx_id << KGSL_IOMMU_CTX_SHIFT) + iommu->ctx_offset;
+ else
+ return iommu->iommu_units[iommu_unit].ahb_base +
+ iommu->iommu_reg_list[reg].reg_offset;
+}
+
static int kgsl_iommu_init(struct kgsl_mmu *mmu)
{
/*
@@ -1265,13 +1324,15 @@
status = kgsl_set_register_map(mmu);
if (status)
goto done;
- status = kgsl_iommu_init_sync_lock(mmu);
- if (status)
- goto done;
- /* We presently do not support per-process for IOMMU-v1 */
+ /*
+ * IOMMU-v1 requires hardware halt support to do in stream
+ * pagetable switching. This check assumes that if there are
+ * multiple units, they will be matching hardware.
+ */
mmu->pt_per_process = KGSL_MMU_USE_PER_PROCESS_PT &&
- msm_soc_version_supports_iommu_v0();
+ (msm_soc_version_supports_iommu_v0() ||
+ iommu->iommu_units[0].iommu_halt_enable);
/*
* For IOMMU per-process pagetables, the allocatable range
@@ -1294,6 +1355,9 @@
mmu->use_cpu_map = false;
}
+ status = kgsl_iommu_init_sync_lock(mmu);
+ if (status)
+ goto done;
iommu->iommu_reg_list = kgsl_iommuv0_reg;
iommu->ctx_offset = KGSL_IOMMU_CTX_OFFSET_V0;
@@ -1321,6 +1385,15 @@
iommu_ops.mmu_cleanup_pt = kgsl_iommu_cleanup_regs;
}
+ if (kgsl_guard_page == NULL) {
+ kgsl_guard_page = alloc_page(GFP_KERNEL | __GFP_ZERO |
+ __GFP_HIGHMEM);
+ if (kgsl_guard_page == NULL) {
+ status = -ENOMEM;
+ goto done;
+ }
+ }
+
dev_info(mmu->device->dev, "|%s| MMU type set for device is IOMMU\n",
__func__);
done:
@@ -1535,7 +1608,7 @@
* changing pagetables we can use this lsb value of the pagetable w/o
* having to read it again
*/
- msm_iommu_lock();
+ _iommu_lock();
for (i = 0; i < iommu->unit_count; i++) {
struct kgsl_iommu_unit *iommu_unit = &iommu->iommu_units[i];
for (j = 0; j < iommu_unit->dev_count; j++) {
@@ -1547,7 +1620,7 @@
}
}
kgsl_iommu_lock_rb_in_tlb(mmu);
- msm_iommu_unlock();
+ _iommu_unlock();
/* For complete CFF */
kgsl_cffdump_setmem(mmu->setstate_memory.gpuaddr +
@@ -1571,7 +1644,7 @@
unsigned int *tlb_flags)
{
int ret;
- unsigned int range = kgsl_sg_size(memdesc->sg, memdesc->sglen);
+ unsigned int range = memdesc->size;
struct kgsl_iommu_pt *iommu_pt = pt->priv;
/* All GPU addresses as assigned are page aligned, but some
@@ -1583,6 +1656,9 @@
if (range == 0 || gpuaddr == 0)
return 0;
+ if (kgsl_memdesc_has_guard_page(memdesc))
+ range += PAGE_SIZE;
+
ret = iommu_unmap_range(iommu_pt->domain, gpuaddr, range);
if (ret)
KGSL_CORE_ERR("iommu_unmap_range(%p, %x, %d) failed "
@@ -1607,14 +1683,10 @@
int ret;
unsigned int iommu_virt_addr;
struct kgsl_iommu_pt *iommu_pt = pt->priv;
- int size = kgsl_sg_size(memdesc->sg, memdesc->sglen);
+ int size = memdesc->size;
BUG_ON(NULL == iommu_pt);
- /* if there's a guard page, we'll map it read only below */
- if ((protflags & IOMMU_WRITE) && kgsl_memdesc_has_guard_page(memdesc))
- size -= PAGE_SIZE;
-
iommu_virt_addr = memdesc->gpuaddr;
ret = iommu_map_range(iommu_pt->domain, iommu_virt_addr, memdesc->sg,
@@ -1625,16 +1697,14 @@
protflags, ret);
return ret;
}
- if ((protflags & IOMMU_WRITE) && kgsl_memdesc_has_guard_page(memdesc)) {
- struct scatterlist *sg = &memdesc->sg[memdesc->sglen - 1];
-
+ if (kgsl_memdesc_has_guard_page(memdesc)) {
ret = iommu_map(iommu_pt->domain, iommu_virt_addr + size,
- kgsl_get_sg_pa(sg), PAGE_SIZE,
+ page_to_phys(kgsl_guard_page), PAGE_SIZE,
protflags & ~IOMMU_WRITE);
if (ret) {
- KGSL_CORE_ERR("iommu_map(%p, %x, %x, %x) err: %d\n",
+ KGSL_CORE_ERR("iommu_map(%p, %x, guard, %x) err: %d\n",
iommu_pt->domain, iommu_virt_addr + size,
- kgsl_get_sg_pa(sg), protflags & ~IOMMU_WRITE,
+ protflags & ~IOMMU_WRITE,
ret);
/* cleanup the partial mapping */
iommu_unmap_range(iommu_pt->domain, iommu_virt_addr,
@@ -1667,12 +1737,12 @@
for (j = 0; j < iommu_unit->dev_count; j++) {
if (iommu_unit->dev[j].fault) {
kgsl_iommu_enable_clk(mmu, j);
- msm_iommu_lock();
+ _iommu_lock();
KGSL_IOMMU_SET_CTX_REG(iommu,
iommu_unit,
iommu_unit->dev[j].ctx_id,
RESUME, 1);
- msm_iommu_unlock();
+ _iommu_unlock();
iommu_unit->dev[j].fault = 0;
}
}
@@ -1714,6 +1784,11 @@
kfree(iommu);
+ if (kgsl_guard_page != NULL) {
+ __free_page(kgsl_guard_page);
+ kgsl_guard_page = NULL;
+ }
+
return 0;
}
@@ -1732,9 +1807,7 @@
KGSL_IOMMU_CONTEXT_USER,
TTBR0);
kgsl_iommu_disable_clk_on_ts(mmu, 0, false);
- return pt_base &
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ return pt_base & KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
}
/*
@@ -1764,15 +1837,14 @@
return;
}
/* Mask off the lsb of the pt base address since lsb will not change */
- pt_base &= (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ pt_base &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
/* For v0 SMMU GPU needs to be idle for tlb invalidate as well */
if (msm_soc_version_supports_iommu_v0())
kgsl_idle(mmu->device);
/* Acquire GPU-CPU sync Lock here */
- msm_iommu_lock();
+ _iommu_lock();
if (flags & KGSL_MMUFLAGS_PTUPDATE) {
if (!msm_soc_version_supports_iommu_v0())
@@ -1795,15 +1867,42 @@
}
/* Flush tlb */
if (flags & KGSL_MMUFLAGS_TLBFLUSH) {
+ unsigned long wait_for_flush;
for (i = 0; i < iommu->unit_count; i++) {
KGSL_IOMMU_SET_CTX_REG(iommu, (&iommu->iommu_units[i]),
KGSL_IOMMU_CONTEXT_USER, TLBIALL, 1);
mb();
+ /*
+ * Wait for the flush to complete by polling the flush
+ * status bit of the TLBSTATUS register for no more than
+ * 2s. After 2s just exit; at that point the SMMU h/w
+ * may be stuck and will eventually cause the GPU to hang
+ * or bring the system down.
+ */
+ if (!msm_soc_version_supports_iommu_v0()) {
+ wait_for_flush = jiffies +
+ msecs_to_jiffies(2000);
+ KGSL_IOMMU_SET_CTX_REG(iommu,
+ (&iommu->iommu_units[i]),
+ KGSL_IOMMU_CONTEXT_USER, TLBSYNC, 0);
+ while (KGSL_IOMMU_GET_CTX_REG(iommu,
+ (&iommu->iommu_units[i]),
+ KGSL_IOMMU_CONTEXT_USER, TLBSTATUS) &
+ (KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE)) {
+ if (time_after(jiffies,
+ wait_for_flush)) {
+ KGSL_DRV_ERR(mmu->device,
+ "Wait limit reached for IOMMU tlb flush\n");
+ break;
+ }
+ cpu_relax();
+ }
+ }
}
}
/* Release GPU-CPU sync Lock here */
- msm_iommu_unlock();
+ _iommu_unlock();
/* Disable smmu clock */
kgsl_iommu_disable_clk_on_ts(mmu, 0, false);
@@ -1816,8 +1915,7 @@
* @ctx_id - The context ID of the IOMMU ctx
* @reg - The register for which address is required
*
- * Return - The number of iommu units which is also the number of register
- * mapped descriptor arrays which the out parameter will have
+ * Return - The GPU address of the register, which can be used in a type3 packet
*/
static unsigned int kgsl_iommu_get_reg_gpuaddr(struct kgsl_mmu *mmu,
int iommu_unit, int ctx_id, int reg)
@@ -1826,10 +1924,25 @@
if (KGSL_IOMMU_GLOBAL_BASE == reg)
return iommu->iommu_units[iommu_unit].reg_map.gpuaddr;
- else
+
+ if (iommu->iommu_reg_list[reg].ctx_reg)
return iommu->iommu_units[iommu_unit].reg_map.gpuaddr +
iommu->iommu_reg_list[reg].reg_offset +
(ctx_id << KGSL_IOMMU_CTX_SHIFT) + iommu->ctx_offset;
+ else
+ return iommu->iommu_units[iommu_unit].reg_map.gpuaddr +
+ iommu->iommu_reg_list[reg].reg_offset;
+}
+/*
+ * kgsl_iommu_hw_halt_supported - Returns whether IOMMU halt command is
+ * supported
+ * @mmu - Pointer to mmu structure
+ * @iommu_unit - The iommu unit for which the property is requested
+ */
+static int kgsl_iommu_hw_halt_supported(struct kgsl_mmu *mmu, int iommu_unit)
+{
+ struct kgsl_iommu *iommu = mmu->priv;
+ return iommu->iommu_units[iommu_unit].iommu_halt_enable;
}
static int kgsl_iommu_get_num_iommu_units(struct kgsl_mmu *mmu)
@@ -1851,9 +1964,11 @@
.mmu_disable_clk_on_ts = kgsl_iommu_disable_clk_on_ts,
.mmu_get_pt_lsb = kgsl_iommu_get_pt_lsb,
.mmu_get_reg_gpuaddr = kgsl_iommu_get_reg_gpuaddr,
+ .mmu_get_reg_ahbaddr = kgsl_iommu_get_reg_ahbaddr,
.mmu_get_num_iommu_units = kgsl_iommu_get_num_iommu_units,
.mmu_pt_equal = kgsl_iommu_pt_equal,
.mmu_get_pt_base_addr = kgsl_iommu_get_pt_base_addr,
+ .mmu_hw_halt_supported = kgsl_iommu_hw_halt_supported,
/* These callbacks will be set on some chipsets */
.mmu_setup_pt = NULL,
.mmu_cleanup_pt = NULL,
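The TLB-flush path added above issues TLBSYNC and then polls TLBSTATUS
with a hard deadline so a stuck SMMU cannot wedge the caller forever. A
simplified sketch of that bounded poll, assuming a spin budget in place of
the jiffies deadline (wait_for_flush and max_spins are illustrative names
only):

static int wait_for_flush(volatile unsigned int *tlbstatus,
                          unsigned int active_bit, unsigned long max_spins)
{
        unsigned long spins = 0;

        while (*tlbstatus & active_bit) {
                if (++spins > max_spins)
                        return -1;      /* give up: a stuck SMMU must not wedge the caller */
                /* a cpu_relax()-style pause would go here */
        }
        return 0;                       /* flush completed */
}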
diff --git a/drivers/gpu/msm/kgsl_iommu.h b/drivers/gpu/msm/kgsl_iommu.h
index c09bc4b..b1b83c0 100644
--- a/drivers/gpu/msm/kgsl_iommu.h
+++ b/drivers/gpu/msm/kgsl_iommu.h
@@ -46,6 +46,16 @@
#define KGSL_IOMMU_V1_FSYNR0_WNR_MASK 0x00000001
#define KGSL_IOMMU_V1_FSYNR0_WNR_SHIFT 4
+/* TTBR0 register fields */
+#define KGSL_IOMMU_CTX_TTBR0_ADDR_MASK 0xFFFFC000
+
+/* TLBSTATUS register fields */
+#define KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE BIT(0)
+
+/* IMPLDEF_MICRO_MMU_CTRL register fields */
+#define KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT BIT(2)
+#define KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE BIT(3)
+
enum kgsl_iommu_reg_map {
KGSL_IOMMU_GLOBAL_BASE = 0,
KGSL_IOMMU_CTX_TTBR0,
@@ -57,15 +67,31 @@
KGSL_IOMMU_CTX_V2PUR,
KGSL_IOMMU_CTX_FSYNR0,
KGSL_IOMMU_CTX_FSYNR1,
+ KGSL_IOMMU_CTX_TLBSYNC,
+ KGSL_IOMMU_CTX_TLBSTATUS,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL,
KGSL_IOMMU_REG_MAX
};
struct kgsl_iommu_register_list {
unsigned int reg_offset;
- unsigned int reg_mask;
- unsigned int reg_shift;
+ int ctx_reg;
};
+#ifdef CONFIG_MSM_IOMMU
+extern struct iommu_access_ops iommu_access_ops_v0;
+
+static inline struct iommu_access_ops *get_iommu_access_ops_v0(void)
+{
+ return &iommu_access_ops_v0;
+}
+#else
+static inline struct iommu_access_ops *get_iommu_access_ops_v0(void)
+{
+ return NULL;
+}
+#endif
+
/*
* Max number of iommu units that the gpu core can have
* On APQ8064, KGSL can control a maximum of 2 IOMMU units.
@@ -91,10 +117,8 @@
iommu->ctx_offset)
/* Gets the lsb value of pagetable */
-#define KGSL_IOMMMU_PT_LSB(iommu, pt_val) \
- (pt_val & \
- ~(iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask << \
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift))
+#define KGSL_IOMMMU_PT_LSB(iommu, pt_val) \
+ (pt_val & ~(KGSL_IOMMU_CTX_TTBR0_ADDR_MASK))
/* offset at which a nop command is placed in setstate_memory */
#define KGSL_IOMMU_SETSTATE_NOP_OFFSET 1024
@@ -130,11 +154,18 @@
* @dev_count: Number of IOMMU contexts that are valid in the previous field
* @reg_map: Memory descriptor which holds the mapped address of this IOMMU
* units register range
+ * @ahb_base: The base address from which IOMMU registers can be accessed over
+ * the AHB bus
+ * @iommu_halt_enable: Valid only on IOMMU-v1; when set, indicates that the
+ * iommu unit supports halting the IOMMU, which can be enabled while
+ * programming the IOMMU registers for synchronization
*/
struct kgsl_iommu_unit {
struct kgsl_iommu_device dev[KGSL_IOMMU_MAX_DEVS_PER_UNIT];
unsigned int dev_count;
struct kgsl_memdesc reg_map;
+ unsigned int ahb_base;
+ int iommu_halt_enable;
};
/*
diff --git a/drivers/gpu/msm/kgsl_mmu.c b/drivers/gpu/msm/kgsl_mmu.c
index 4e95373..3ebfdcd 100644
--- a/drivers/gpu/msm/kgsl_mmu.c
+++ b/drivers/gpu/msm/kgsl_mmu.c
@@ -610,6 +610,7 @@
* kgsl_pwrctrl_irq() is called
*/
}
+EXPORT_SYMBOL(kgsl_mh_start);
int
kgsl_mmu_map(struct kgsl_pagetable *pagetable,
@@ -639,7 +640,10 @@
}
}
- size = kgsl_sg_size(memdesc->sg, memdesc->sglen);
+ /* Add space for the guard page when allocating the mmu VA. */
+ size = memdesc->size;
+ if (kgsl_memdesc_has_guard_page(memdesc))
+ size += PAGE_SIZE;
pool = pagetable->pool;
@@ -737,7 +741,10 @@
return 0;
}
- size = kgsl_sg_size(memdesc->sg, memdesc->sglen);
+ /* Add space for the guard page when freeing the mmu VA. */
+ size = memdesc->size;
+ if (kgsl_memdesc_has_guard_page(memdesc))
+ size += PAGE_SIZE;
start_addr = memdesc->gpuaddr;
end_addr = (memdesc->gpuaddr + size);
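Both kgsl_mmu_map() and kgsl_mmu_unmap() above now size the virtual range
from memdesc->size plus one extra page when a guard page is attached. The
helper below is a sketch of that sizing rule only; SKETCH_PAGE_SIZE stands
in for the kernel's PAGE_SIZE:

#define SKETCH_PAGE_SIZE 4096u          /* stand-in for the kernel's PAGE_SIZE */

static unsigned int mapped_va_size(unsigned int buf_size, int has_guard_page)
{
        unsigned int size = buf_size;

        if (has_guard_page)
                size += SKETCH_PAGE_SIZE;  /* one extra page of VA for the guard page */
        return size;
}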
diff --git a/drivers/gpu/msm/kgsl_mmu.h b/drivers/gpu/msm/kgsl_mmu.h
index d7d9516..ef5b0f4 100644
--- a/drivers/gpu/msm/kgsl_mmu.h
+++ b/drivers/gpu/msm/kgsl_mmu.h
@@ -14,7 +14,7 @@
#define __KGSL_MMU_H
#include <mach/iommu.h>
-
+#include "kgsl_iommu.h"
/*
* These defines control the address range for allocations that
* are mapped into all pagetables.
@@ -150,6 +150,9 @@
enum kgsl_iommu_context_id ctx_id);
unsigned int (*mmu_get_reg_gpuaddr)(struct kgsl_mmu *mmu,
int iommu_unit_num, int ctx_id, int reg);
+ unsigned int (*mmu_get_reg_ahbaddr)(struct kgsl_mmu *mmu,
+ int iommu_unit_num, int ctx_id,
+ enum kgsl_iommu_reg_map reg);
int (*mmu_get_num_iommu_units)(struct kgsl_mmu *mmu);
int (*mmu_pt_equal) (struct kgsl_mmu *mmu,
struct kgsl_pagetable *pt,
@@ -165,6 +168,7 @@
(struct kgsl_mmu *mmu, unsigned int *cmds);
unsigned int (*mmu_sync_unlock)
(struct kgsl_mmu *mmu, unsigned int *cmds);
+ int (*mmu_hw_halt_supported)(struct kgsl_mmu *mmu, int iommu_unit_num);
};
struct kgsl_mmu_pt_ops {
@@ -337,6 +341,29 @@
return 0;
}
+/*
+ * kgsl_mmu_get_reg_ahbaddr() - Calls the mmu specific function pointer to
+ * return the address that GPU can use to access register
+ * @mmu: Pointer to the device mmu
+ * @iommu_unit_num: There can be multiple iommu units used for graphics.
+ * This parameter is an index to the iommu unit being used
+ * @ctx_id: The context id within the iommu unit
+ * @reg: Register whose address is to be returned
+ *
+ * Returns the AHB address of reg, or 0 if not supported
+ */
+static inline unsigned int kgsl_mmu_get_reg_ahbaddr(struct kgsl_mmu *mmu,
+ int iommu_unit_num,
+ int ctx_id,
+ enum kgsl_iommu_reg_map reg)
+{
+ if (mmu->mmu_ops && mmu->mmu_ops->mmu_get_reg_ahbaddr)
+ return mmu->mmu_ops->mmu_get_reg_ahbaddr(mmu, iommu_unit_num,
+ ctx_id, reg);
+ else
+ return 0;
+}
+
static inline int kgsl_mmu_get_num_iommu_units(struct kgsl_mmu *mmu)
{
if (mmu->mmu_ops && mmu->mmu_ops->mmu_get_num_iommu_units)
@@ -346,6 +373,22 @@
}
/*
+ * kgsl_mmu_hw_halt_supported() - Runtime check for iommu hw halt
+ * @mmu: the mmu
+ *
+ * Returns non-zero if the iommu supports hw halt,
+ * 0 if not.
+ */
+static inline int kgsl_mmu_hw_halt_supported(struct kgsl_mmu *mmu,
+ int iommu_unit_num)
+{
+ if (mmu->mmu_ops && mmu->mmu_ops->mmu_hw_halt_supported)
+ return mmu->mmu_ops->mmu_hw_halt_supported(mmu, iommu_unit_num);
+ else
+ return 0;
+}
+
+/*
* kgsl_mmu_is_perprocess() - Runtime check for per-process
* pagetables.
* @mmu: the mmu
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.c b/drivers/gpu/msm/kgsl_pwrctrl.c
index 2f8d93e..b124257 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.c
+++ b/drivers/gpu/msm/kgsl_pwrctrl.c
@@ -18,6 +18,7 @@
#include <mach/msm_iomap.h>
#include <mach/msm_bus.h>
#include <linux/ktime.h>
+#include <linux/delay.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
@@ -33,6 +34,16 @@
#define UPDATE_BUSY_VAL 1000000
#define UPDATE_BUSY 50
+/*
+ * Expected delay for post-interrupt processing on A3xx.
+ * The delay may be longer; gradually increase the delay
+ * to compensate. If the GPU isn't done by the max delay,
+ * it's working on something other than just the final
+ * command sequence so stop waiting for it to be idle.
+ */
+#define INIT_UDELAY 200
+#define MAX_UDELAY 2000
+
struct clk_pair {
const char *name;
uint map;
@@ -59,6 +70,10 @@
.name = "mem_iface_clk",
.map = KGSL_CLK_MEM_IFACE,
},
+ {
+ .name = "alt_mem_iface_clk",
+ .map = KGSL_CLK_ALT_MEM_IFACE,
+ },
};
/* Update the elapsed time at a particular clock level
@@ -1055,8 +1070,18 @@
pwr->power_flags = 0;
}
+/**
+ * kgsl_idle_check() - Work function for GPU interrupts and idle timeouts.
+ * @device: The device
+ *
+ * This function is called for work that is queued by the interrupt
+ * handler or the idle timer. It attempts to transition to a clocks
+ * off state if the active_cnt is 0 and the hardware is idle.
+ */
void kgsl_idle_check(struct work_struct *work)
{
+ int delay = INIT_UDELAY;
+ int requested_state;
struct kgsl_device *device = container_of(work, struct kgsl_device,
idle_check_ws);
WARN_ON(device == NULL);
@@ -1064,21 +1089,52 @@
return;
mutex_lock(&device->mutex);
- if (device->state & (KGSL_STATE_ACTIVE | KGSL_STATE_NAP)) {
- kgsl_pwrscale_idle(device);
- if (kgsl_pwrctrl_sleep(device) != 0) {
+ kgsl_pwrscale_idle(device);
+
+ if (device->state == KGSL_STATE_ACTIVE
+ || device->state == KGSL_STATE_NAP) {
+ /*
+ * If no user is explicitly trying to use the GPU
+ * (active_cnt is zero), then loop with increasing delay,
+ * waiting for the GPU to become idle.
+ */
+ while (!device->active_cnt && delay < MAX_UDELAY) {
+ requested_state = device->requested_state;
+ if (!kgsl_pwrctrl_sleep(device))
+ break;
+ /*
+ * If no new commands have been issued since the
+ * last interrupt, stay in this loop waiting for
+ * the GPU to become idle.
+ */
+ if (!device->pwrctrl.irq_last)
+ break;
+ kgsl_pwrctrl_request_state(device, requested_state);
+ mutex_unlock(&device->mutex);
+ udelay(delay);
+ delay *= 2;
+ mutex_lock(&device->mutex);
+ }
+
+
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
+ if (device->state == KGSL_STATE_ACTIVE) {
mod_timer(&device->idle_timer,
jiffies +
device->pwrctrl.interval_timeout);
- /* If the GPU has been too busy to sleep, make sure *
- * that is acurately reflected in the % busy numbers. */
+ /*
+ * If the GPU has been too busy to sleep, make sure
+ * that is accurately reflected in the % busy numbers.
+ */
device->pwrctrl.clk_stats.no_nap_cnt++;
if (device->pwrctrl.clk_stats.no_nap_cnt >
UPDATE_BUSY) {
kgsl_pwrctrl_busy_time(device, true);
device->pwrctrl.clk_stats.no_nap_cnt = 0;
}
+ } else {
+ device->pwrctrl.irq_last = 0;
}
} else if (device->state & (KGSL_STATE_HUNG |
KGSL_STATE_DUMP_AND_FT)) {
@@ -1087,6 +1143,7 @@
mutex_unlock(&device->mutex);
}
+EXPORT_SYMBOL(kgsl_idle_check);
void kgsl_timer(unsigned long data)
{
@@ -1104,54 +1161,26 @@
}
}
+
+/**
+ * kgsl_pre_hwaccess - Enforce preconditions for touching registers
+ * @device: The device
+ *
+ * This function ensures that the correct lock is held and that the GPU
+ * clock is on immediately before a register is read or written. Note
+ * that this function does not check active_cnt because the registers
+ * must be accessed during device start and stop, when the active_cnt
+ * may legitimately be 0.
+ */
void kgsl_pre_hwaccess(struct kgsl_device *device)
{
+ /* In order to touch a register you must hold the device mutex...*/
BUG_ON(!mutex_is_locked(&device->mutex));
- switch (device->state) {
- case KGSL_STATE_ACTIVE:
- return;
- case KGSL_STATE_NAP:
- case KGSL_STATE_SLEEP:
- case KGSL_STATE_SLUMBER:
- kgsl_pwrctrl_wake(device);
- break;
- case KGSL_STATE_SUSPEND:
- kgsl_check_suspended(device);
- break;
- case KGSL_STATE_INIT:
- case KGSL_STATE_HUNG:
- case KGSL_STATE_DUMP_AND_FT:
- if (test_bit(KGSL_PWRFLAGS_CLK_ON,
- &device->pwrctrl.power_flags))
- break;
- else
- KGSL_PWR_ERR(device,
- "hw access while clocks off from state %d\n",
- device->state);
- break;
- default:
- KGSL_PWR_ERR(device, "hw access while in unknown state %d\n",
- device->state);
- break;
- }
+ /* and have the clock on! */
+ BUG_ON(!test_bit(KGSL_PWRFLAGS_CLK_ON, &device->pwrctrl.power_flags));
}
EXPORT_SYMBOL(kgsl_pre_hwaccess);
-void kgsl_check_suspended(struct kgsl_device *device)
-{
- if (device->requested_state == KGSL_STATE_SUSPEND ||
- device->state == KGSL_STATE_SUSPEND) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->hwaccess_gate);
- mutex_lock(&device->mutex);
- } else if (device->state == KGSL_STATE_DUMP_AND_FT) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->ft_gate);
- mutex_lock(&device->mutex);
- } else if (device->state == KGSL_STATE_SLUMBER)
- kgsl_pwrctrl_wake(device);
-}
-
static int
_nap(struct kgsl_device *device)
{
@@ -1230,6 +1259,8 @@
case KGSL_STATE_NAP:
case KGSL_STATE_SLEEP:
del_timer_sync(&device->idle_timer);
+ /* make sure power is on to stop the device*/
+ kgsl_pwrctrl_enable(device);
device->ftbl->suspend_context(device);
device->ftbl->stop(device);
_sleep_accounting(device);
@@ -1278,9 +1309,9 @@
/******************************************************************/
/* Caller must hold the device mutex. */
-void kgsl_pwrctrl_wake(struct kgsl_device *device)
+int kgsl_pwrctrl_wake(struct kgsl_device *device)
{
- int status;
+ int status = 0;
unsigned int context_id;
unsigned int state = device->state;
unsigned int ts_processed = 0xdeaddead;
@@ -1329,8 +1360,10 @@
KGSL_PWR_WARN(device, "unhandled state %s\n",
kgsl_pwrstate_to_str(device->state));
kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
+ status = -EINVAL;
break;
}
+ return status;
}
EXPORT_SYMBOL(kgsl_pwrctrl_wake);
@@ -1396,3 +1429,124 @@
}
EXPORT_SYMBOL(kgsl_pwrstate_to_str);
+
+/**
+ * kgsl_active_count_get() - Increase the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Increase the active count for the KGSL device and turn on
+ * clocks if this is the first reference. Code paths that need
+ * to touch the hardware or wait for the hardware to complete
+ * an operation must hold an active count reference until they
+ * are finished. An error code will be returned if waking the
+ * device fails. The device mutex must be held while *calling
+ * this function.
+ */
+int kgsl_active_count_get(struct kgsl_device *device)
+{
+ int ret = 0;
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->active_cnt == 0) {
+ if (device->requested_state == KGSL_STATE_SUSPEND ||
+ device->state == KGSL_STATE_SUSPEND) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->hwaccess_gate);
+ mutex_lock(&device->mutex);
+ } else if (device->state == KGSL_STATE_DUMP_AND_FT) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->ft_gate);
+ mutex_lock(&device->mutex);
+ }
+ ret = kgsl_pwrctrl_wake(device);
+ }
+ if (ret == 0)
+ device->active_cnt++;
+ return ret;
+}
+EXPORT_SYMBOL(kgsl_active_count_get);
+
+/**
+ * kgsl_active_count_get_light() - Increase the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Increase the active count for the KGSL device WITHOUT
+ * turning on the clocks. Currently this is only used for creating
+ * kgsl_events. The device mutex must be held while calling this function.
+ */
+int kgsl_active_count_get_light(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->state != KGSL_STATE_ACTIVE) {
+ dev_WARN_ONCE(device->dev, 1, "device in unexpected state %s\n",
+ kgsl_pwrstate_to_str(device->state));
+ return -EINVAL;
+ }
+
+ if (device->active_cnt == 0) {
+ dev_WARN_ONCE(device->dev, 1, "active count is 0!\n");
+ return -EINVAL;
+ }
+
+ device->active_cnt++;
+ return 0;
+}
+EXPORT_SYMBOL(kgsl_active_count_get_light);
+
+/**
+ * kgsl_active_count_put() - Decrease the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Decrease the active count for the KGSL device and turn off
+ * clocks if there are no remaining references. This function will
+ * transition the device to NAP if there are no other pending state
+ * changes. It also completes the suspend gate. The device mutex must
+ * be held while calling this function.
+ */
+void kgsl_active_count_put(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+ BUG_ON(device->active_cnt == 0);
+
+ kgsl_pwrscale_idle(device);
+ if (device->active_cnt > 1) {
+ device->active_cnt--;
+ return;
+ }
+
+ INIT_COMPLETION(device->suspend_gate);
+
+ if (device->pwrctrl.nap_allowed == true &&
+ (device->state == KGSL_STATE_ACTIVE &&
+ device->requested_state == KGSL_STATE_NONE)) {
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ if (kgsl_pwrctrl_sleep(device) && device->pwrctrl.irq_last) {
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ queue_work(device->work_queue, &device->idle_check_ws);
+ }
+ }
+ device->active_cnt--;
+
+ if (device->active_cnt == 0)
+ complete(&device->suspend_gate);
+}
+EXPORT_SYMBOL(kgsl_active_count_put);
+
+/**
+ * kgsl_active_count_wait() - Wait for activity to finish.
+ * @device: Pointer to a KGSL device
+ *
+ * Block until all active_cnt users put() their reference.
+ */
+void kgsl_active_count_wait(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->active_cnt != 0) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->suspend_gate);
+ mutex_lock(&device->mutex);
+ }
+}
+EXPORT_SYMBOL(kgsl_active_count_wait);
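kgsl_idle_check() above retries kgsl_pwrctrl_sleep() with an exponentially
growing udelay between INIT_UDELAY and MAX_UDELAY. A stand-alone sketch of
that backoff loop, with try_sleep() standing in for the sleep attempt and
the real udelay()/mutex handling omitted:

static int wait_for_idle_backoff(int (*try_sleep)(void *), void *dev,
                                 unsigned int init_udelay, unsigned int max_udelay)
{
        unsigned int delay = init_udelay;

        while (delay < max_udelay) {
                if (try_sleep(dev) == 0)
                        return 0;       /* hardware went idle, clocks can drop */
                /* the real code calls udelay(delay) with the mutex released */
                delay *= 2;             /* back off: give the GPU progressively longer */
        }
        return -1;                      /* still busy: leave it running, the timer retries */
}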
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.h b/drivers/gpu/msm/kgsl_pwrctrl.h
index ced52e1..b3e8702 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.h
+++ b/drivers/gpu/msm/kgsl_pwrctrl.h
@@ -23,7 +23,7 @@
#define KGSL_PWRLEVEL_NOMINAL 1
#define KGSL_PWRLEVEL_LAST_OFFSET 2
-#define KGSL_MAX_CLKS 5
+#define KGSL_MAX_CLKS 6
struct platform_device;
@@ -91,6 +91,7 @@
struct pm_qos_request pm_qos_req_dma;
unsigned int pm_qos_latency;
unsigned int step_mul;
+ unsigned int irq_last;
};
void kgsl_pwrctrl_irq(struct kgsl_device *device, int state);
@@ -99,9 +100,8 @@
void kgsl_timer(unsigned long data);
void kgsl_idle_check(struct work_struct *work);
void kgsl_pre_hwaccess(struct kgsl_device *device);
-void kgsl_check_suspended(struct kgsl_device *device);
int kgsl_pwrctrl_sleep(struct kgsl_device *device);
-void kgsl_pwrctrl_wake(struct kgsl_device *device);
+int kgsl_pwrctrl_wake(struct kgsl_device *device);
void kgsl_pwrctrl_pwrlevel_change(struct kgsl_device *device,
unsigned int level);
int kgsl_pwrctrl_init_sysfs(struct kgsl_device *device);
@@ -115,4 +115,10 @@
void kgsl_pwrctrl_set_state(struct kgsl_device *device, unsigned int state);
void kgsl_pwrctrl_request_state(struct kgsl_device *device, unsigned int state);
+
+int kgsl_active_count_get(struct kgsl_device *device);
+int kgsl_active_count_get_light(struct kgsl_device *device);
+void kgsl_active_count_put(struct kgsl_device *device);
+void kgsl_active_count_wait(struct kgsl_device *device);
+
#endif /* __KGSL_PWRCTRL_H */
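The kgsl_active_count_*() declarations above replace the old open-coded
active_cnt manipulation. Reduced to a sketch with stand-in names
(count_sketch, active_get, active_put), the contract is: the first get
powers the device up, the last put lets it idle and releases anyone
blocked in kgsl_active_count_wait().

struct count_sketch { int active_cnt; int powered; int suspend_gate_open; };

static int active_get(struct count_sketch *d)
{
        if (d->active_cnt == 0)
                d->powered = 1;            /* first user: wake the device */
        d->active_cnt++;
        return 0;
}

static void active_put(struct count_sketch *d)
{
        if (--d->active_cnt == 0) {
                d->powered = 0;            /* last user: allow NAP/sleep */
                d->suspend_gate_open = 1;  /* ~ complete(&device->suspend_gate) */
        }
}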
diff --git a/drivers/gpu/msm/kgsl_pwrscale.c b/drivers/gpu/msm/kgsl_pwrscale.c
index 02ada38..afef62e 100644
--- a/drivers/gpu/msm/kgsl_pwrscale.c
+++ b/drivers/gpu/msm/kgsl_pwrscale.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -240,6 +240,7 @@
device->pwrscale.policy->busy(device,
&device->pwrscale);
}
+EXPORT_SYMBOL(kgsl_pwrscale_busy);
void kgsl_pwrscale_idle(struct kgsl_device *device)
{
diff --git a/drivers/gpu/msm/kgsl_pwrscale_trustzone.c b/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
index 9b2ac70..5d5d5b1 100644
--- a/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
+++ b/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
@@ -31,6 +31,7 @@
unsigned int no_switch_cnt;
unsigned int skip_cnt;
struct kgsl_power_stats bin;
+ unsigned int idle_dcvs;
};
spinlock_t tz_lock;
@@ -47,24 +48,32 @@
#define SKIP_COUNTER 500
#define TZ_RESET_ID 0x3
#define TZ_UPDATE_ID 0x4
+#define TZ_INIT_ID 0x6
-#ifdef CONFIG_MSM_SCM
/* Trap into the TrustZone, and call funcs there. */
-static int __secure_tz_entry(u32 cmd, u32 val, u32 id)
+static int __secure_tz_entry2(u32 cmd, u32 val1, u32 val2)
{
int ret;
spin_lock(&tz_lock);
+ /* sync memory before sending the commands to tz */
__iowmb();
- ret = scm_call_atomic2(SCM_SVC_IO, cmd, val, id);
+ ret = scm_call_atomic2(SCM_SVC_IO, cmd, val1, val2);
spin_unlock(&tz_lock);
return ret;
}
-#else
-static int __secure_tz_entry(u32 cmd, u32 val, u32 id)
+
+static int __secure_tz_entry3(u32 cmd, u32 val1, u32 val2,
+ u32 val3)
{
- return 0;
+ int ret;
+ spin_lock(&tz_lock);
+ /* sync memory before sending the commands to tz */
+ __iowmb();
+ ret = scm_call_atomic3(SCM_SVC_IO, cmd, val1, val2,
+ val3);
+ spin_unlock(&tz_lock);
+ return ret;
}
-#endif /* CONFIG_MSM_SCM */
static ssize_t tz_governor_show(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
@@ -172,11 +181,21 @@
*/
if (priv->bin.busy_time > CEILING) {
val = -1;
- } else {
+ } else if (priv->idle_dcvs) {
idle = priv->bin.total_time - priv->bin.busy_time;
idle = (idle > 0) ? idle : 0;
- val = __secure_tz_entry(TZ_UPDATE_ID, idle, device->id);
+ val = __secure_tz_entry2(TZ_UPDATE_ID, idle, device->id);
+ } else {
+ if (pwr->step_mul > 1)
+ val = __secure_tz_entry3(TZ_UPDATE_ID,
+ (pwr->active_pwrlevel + 1)/2,
+ priv->bin.total_time, priv->bin.busy_time);
+ else
+ val = __secure_tz_entry3(TZ_UPDATE_ID,
+ pwr->active_pwrlevel,
+ priv->bin.total_time, priv->bin.busy_time);
}
+
priv->bin.total_time = 0;
priv->bin.busy_time = 0;
@@ -201,7 +220,7 @@
{
struct tz_priv *priv = pwrscale->priv;
- __secure_tz_entry(TZ_RESET_ID, 0, device->id);
+ __secure_tz_entry2(TZ_RESET_ID, 0, 0);
priv->no_switch_cnt = 0;
priv->bin.total_time = 0;
priv->bin.busy_time = 0;
@@ -210,16 +229,32 @@
#ifdef CONFIG_MSM_SCM
static int tz_init(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
+ int i = 0, j = 1, ret = 0;
struct tz_priv *priv;
+ struct kgsl_pwrctrl *pwr = &device->pwrctrl;
+ unsigned int tz_pwrlevels[KGSL_MAX_PWRLEVELS + 1];
priv = pwrscale->priv = kzalloc(sizeof(struct tz_priv), GFP_KERNEL);
if (pwrscale->priv == NULL)
return -ENOMEM;
-
+ priv->idle_dcvs = 0;
priv->governor = TZ_GOVERNOR_ONDEMAND;
spin_lock_init(&tz_lock);
kgsl_pwrscale_policy_add_files(device, pwrscale, &tz_attr_group);
-
+ for (i = 0; i < pwr->num_pwrlevels - 1; i++) {
+ if (i == 0)
+ tz_pwrlevels[j] = pwr->pwrlevels[i].gpu_freq;
+ else if (pwr->pwrlevels[i].gpu_freq !=
+ pwr->pwrlevels[i - 1].gpu_freq) {
+ j++;
+ tz_pwrlevels[j] = pwr->pwrlevels[i].gpu_freq;
+ }
+ }
+ tz_pwrlevels[0] = j;
+ ret = scm_call(SCM_SVC_DCVS, TZ_INIT_ID, tz_pwrlevels,
+ sizeof(tz_pwrlevels), NULL, 0);
+ if (ret)
+ priv->idle_dcvs = 1;
return 0;
}
#else
diff --git a/drivers/gpu/msm/kgsl_sharedmem.c b/drivers/gpu/msm/kgsl_sharedmem.c
index 595f78f..b9bc432 100644
--- a/drivers/gpu/msm/kgsl_sharedmem.c
+++ b/drivers/gpu/msm/kgsl_sharedmem.c
@@ -65,14 +65,6 @@
mem_entry_max_show), \
}
-
-/*
- * One page allocation for a guard region to protect against over-zealous
- * GPU pre-fetch
- */
-
-static struct page *kgsl_guard_page;
-
/**
* Given a kobj, find the process structure attached to it
*/
@@ -364,10 +356,6 @@
struct scatterlist *sg;
int sglen = memdesc->sglen;
- /* Don't free the guard page if it was used */
- if (memdesc->priv & KGSL_MEMDESC_GUARD_PAGE)
- sglen--;
-
kgsl_driver.stats.page_alloc -= memdesc->size;
if (memdesc->hostptr) {
@@ -402,10 +390,6 @@
int sglen = memdesc->sglen;
int i, count = 0;
- /* Don't map the guard page if it exists */
- if (memdesc->priv & KGSL_MEMDESC_GUARD_PAGE)
- sglen--;
-
/* create a list of pages to call vmap */
pages = vmalloc(npages * sizeof(struct page *));
if (!pages) {
@@ -564,14 +548,6 @@
sglen_alloc = PAGE_ALIGN(size) >> PAGE_SHIFT;
- /*
- * Add guard page to the end of the allocation when the
- * IOMMU is in use.
- */
-
- if (kgsl_mmu_get_mmutype() == KGSL_MMU_TYPE_IOMMU)
- sglen_alloc++;
-
memdesc->size = size;
memdesc->pagetable = pagetable;
memdesc->ops = &kgsl_page_alloc_ops;
@@ -607,16 +583,22 @@
while (len > 0) {
struct page *page;
- unsigned int gfp_mask = GFP_KERNEL | __GFP_HIGHMEM |
- __GFP_NOWARN | __GFP_NORETRY;
+ unsigned int gfp_mask = __GFP_HIGHMEM;
int j;
/* don't waste space at the end of the allocation*/
if (len < page_size)
page_size = PAGE_SIZE;
+ /*
+ * Don't do some of the more aggressive memory recovery
+ * techniques for large order allocations
+ */
if (page_size != PAGE_SIZE)
- gfp_mask |= __GFP_COMP;
+ gfp_mask |= __GFP_COMP | __GFP_NORETRY |
+ __GFP_NO_KSWAPD | __GFP_NOWARN;
+ else
+ gfp_mask |= GFP_KERNEL | __GFP_NORETRY;
page = alloc_pages(gfp_mask, get_order(page_size));
@@ -641,26 +623,6 @@
len -= page_size;
}
- /* Add the guard page to the end of the sglist */
-
- if (kgsl_mmu_get_mmutype() == KGSL_MMU_TYPE_IOMMU) {
- /*
- * It doesn't matter if we use GFP_ZERO here, this never
- * gets mapped, and we only allocate it once in the life
- * of the system
- */
-
- if (kgsl_guard_page == NULL)
- kgsl_guard_page = alloc_page(GFP_KERNEL | __GFP_ZERO |
- __GFP_HIGHMEM);
-
- if (kgsl_guard_page != NULL) {
- sg_set_page(&memdesc->sg[sglen++], kgsl_guard_page,
- PAGE_SIZE, 0);
- memdesc->priv |= KGSL_MEMDESC_GUARD_PAGE;
- }
- }
-
memdesc->sglen = sglen;
/*
diff --git a/drivers/gpu/msm/kgsl_sharedmem.h b/drivers/gpu/msm/kgsl_sharedmem.h
index 279490f..14ae0dc 100644
--- a/drivers/gpu/msm/kgsl_sharedmem.h
+++ b/drivers/gpu/msm/kgsl_sharedmem.h
@@ -295,15 +295,4 @@
return ret;
}
-static inline int kgsl_sg_size(struct scatterlist *sg, int sglen)
-{
- int i, size = 0;
- struct scatterlist *s;
-
- for_each_sg(sg, s, sglen, i) {
- size += s->length;
- }
-
- return size;
-}
#endif /* __KGSL_SHAREDMEM_H */
diff --git a/drivers/gpu/msm/kgsl_snapshot.c b/drivers/gpu/msm/kgsl_snapshot.c
index 4c9c744..4165690 100644
--- a/drivers/gpu/msm/kgsl_snapshot.c
+++ b/drivers/gpu/msm/kgsl_snapshot.c
@@ -325,6 +325,7 @@
return 0;
}
+EXPORT_SYMBOL(kgsl_snapshot_have_object);
/* kgsl_snapshot_get_object - Mark a GPU buffer to be frozen
* @device - the device that is being snapshotted
@@ -344,6 +345,7 @@
struct kgsl_mem_entry *entry;
struct kgsl_snapshot_object *obj;
int offset;
+ int ret = -EINVAL;
entry = kgsl_get_mem_entry(device, ptbase, gpuaddr, size);
@@ -357,7 +359,7 @@
if (entry->memtype != KGSL_MEM_ENTRY_KERNEL) {
KGSL_DRV_ERR(device,
"Only internal GPU buffers can be frozen\n");
- return -EINVAL;
+ goto err_put;
}
/*
@@ -380,7 +382,7 @@
if (size + offset > entry->memdesc.size) {
KGSL_DRV_ERR(device, "Invalid size for GPU buffer %8.8X\n",
gpuaddr);
- return -EINVAL;
+ goto err_put;
}
/* If the buffer is already on the list, skip it */
@@ -389,27 +391,24 @@
/* If the size is different, use the bigger size */
if (obj->size < size)
obj->size = size;
-
- return 0;
+ ret = 0;
+ goto err_put;
}
}
if (kgsl_memdesc_map(&entry->memdesc) == NULL) {
KGSL_DRV_ERR(device, "Unable to map GPU buffer %X\n",
gpuaddr);
- return -EINVAL;
+ goto err_put;
}
obj = kzalloc(sizeof(*obj), GFP_KERNEL);
if (obj == NULL) {
KGSL_DRV_ERR(device, "Unable to allocate memory\n");
- return -EINVAL;
+ goto err_put;
}
- /* Ref count the mem entry */
- kgsl_mem_entry_get(entry);
-
obj->type = type;
obj->entry = entry;
obj->gpuaddr = gpuaddr;
@@ -427,12 +426,15 @@
* 0 so it doesn't get counted twice
*/
- if (entry->memdesc.priv & KGSL_MEMDESC_FROZEN)
- return 0;
+ ret = (entry->memdesc.priv & KGSL_MEMDESC_FROZEN) ? 0
+ : entry->memdesc.size;
entry->memdesc.priv |= KGSL_MEMDESC_FROZEN;
- return entry->memdesc.size;
+ return ret;
+err_put:
+ kgsl_mem_entry_put(entry);
+ return ret;
}
EXPORT_SYMBOL(kgsl_snapshot_get_object);
diff --git a/drivers/gpu/msm/kgsl_sync.c b/drivers/gpu/msm/kgsl_sync.c
index 813305a..aad8ef1 100644
--- a/drivers/gpu/msm/kgsl_sync.c
+++ b/drivers/gpu/msm/kgsl_sync.c
@@ -168,7 +168,6 @@
fail_event:
fail_copy_fd:
/* clean up sync_fence_install */
- sync_fence_put(fence);
put_unused_fd(priv.fence_fd);
fail_fd:
/* clean up sync_fence_create */
diff --git a/drivers/gpu/msm/z180.c b/drivers/gpu/msm/z180.c
index a07959b..49265fc 100644
--- a/drivers/gpu/msm/z180.c
+++ b/drivers/gpu/msm/z180.c
@@ -17,7 +17,6 @@
#include "kgsl.h"
#include "kgsl_cffdump.h"
#include "kgsl_sharedmem.h"
-#include "kgsl_trace.h"
#include "z180.h"
#include "z180_reg.h"
@@ -485,7 +484,7 @@
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, 0);
error:
- trace_kgsl_issueibcmds(device, context->id, ibdesc, numibs,
+ kgsl_trace_issueibcmds(device, context->id, ibdesc, numibs,
*timestamp, ctrl, result, 0);
return (int)result;
diff --git a/drivers/gpu/msm/z180_postmortem.c b/drivers/gpu/msm/z180_postmortem.c
index c1e5f07..55b8faa 100644
--- a/drivers/gpu/msm/z180_postmortem.c
+++ b/drivers/gpu/msm/z180_postmortem.c
@@ -168,6 +168,7 @@
KGSL_LOG_DUMP(device,
"Could not map IB to kernel memory, Ringbuffer Slot: %d\n",
rb_slot_num);
+ kgsl_mem_entry_put(entry);
continue;
}
@@ -190,6 +191,7 @@
linebuf);
}
KGSL_LOG_DUMP(device, "IB Dump Finished\n");
+ kgsl_mem_entry_put(entry);
}
}
}
diff --git a/drivers/hwmon/qpnp-adc-common.c b/drivers/hwmon/qpnp-adc-common.c
index 1458bc5..8e350f0 100644
--- a/drivers/hwmon/qpnp-adc-common.c
+++ b/drivers/hwmon/qpnp-adc-common.c
@@ -130,6 +130,60 @@
{790, 203}
};
+static const struct qpnp_vadc_map_pt adcmap_qrd_btm_threshold[] = {
+ {-200, 1672},
+ {-180, 1656},
+ {-160, 1639},
+ {-140, 1620},
+ {-120, 1599},
+ {-100, 1577},
+ {-80, 1553},
+ {-60, 1527},
+ {-40, 1550},
+ {-20, 1471},
+ {0, 1440},
+ {20, 1408},
+ {40, 1374},
+ {60, 1339},
+ {80, 1303},
+ {100, 1266},
+ {120, 1228},
+ {140, 1190},
+ {160, 1150},
+ {180, 1111},
+ {200, 1071},
+ {220, 1032},
+ {240, 992},
+ {260, 953},
+ {280, 915},
+ {300, 877},
+ {320, 841},
+ {340, 805},
+ {360, 770},
+ {380, 736},
+ {400, 704},
+ {420, 673},
+ {440, 643},
+ {460, 614},
+ {480, 587},
+ {500, 561},
+ {520, 536},
+ {540, 513},
+ {560, 491},
+ {580, 470},
+ {600, 450},
+ {620, 431},
+ {640, 414},
+ {660, 397},
+ {680, 382},
+ {700, 367},
+ {720, 353},
+ {740, 340},
+ {760, 328},
+ {780, 317},
+ {800, 306},
+};
+
/* Voltage to temperature */
static const struct qpnp_vadc_map_pt adcmap_100k_104ef_104fb[] = {
{1758, -40},
@@ -373,7 +427,7 @@
{
struct qpnp_vadc_linear_graph btm_param;
int64_t low_output = 0, high_output = 0;
- int rc = 0;
+ int rc = 0, sign = 0;
rc = qpnp_get_vadc_gain_and_offset(&btm_param, CALIB_ABSOLUTE);
if (rc < 0) {
@@ -384,19 +438,36 @@
/* Convert to Kelvin and account for voltage to be written as 2mV/K */
low_output = (param->low_temp + KELVINMIL_DEGMIL) * 2;
/* Convert to voltage threshold */
- low_output *= btm_param.dy;
- do_div(low_output, btm_param.adc_vref);
+ low_output = (low_output - QPNP_ADC_625_UV) * btm_param.dy;
+ if (low_output < 0) {
+ sign = 1;
+ low_output = -low_output;
+ }
+ do_div(low_output, QPNP_ADC_625_UV);
+ if (sign)
+ low_output = -low_output;
low_output += btm_param.adc_gnd;
+ sign = 0;
/* Convert to Kelvin and account for voltage to be written as 2mV/K */
high_output = (param->high_temp + KELVINMIL_DEGMIL) * 2;
/* Convert to voltage threshold */
- high_output *= btm_param.dy;
- do_div(high_output, btm_param.adc_vref);
+ high_output = (high_output - QPNP_ADC_625_UV) * btm_param.dy;
+ if (high_output < 0) {
+ sign = 1;
+ high_output = -high_output;
+ }
+ do_div(high_output, QPNP_ADC_625_UV);
+ if (sign)
+ high_output = -high_output;
high_output += btm_param.adc_gnd;
- *low_threshold = low_output;
- *high_threshold = high_output;
+ *low_threshold = (uint32_t) low_output;
+ *high_threshold = (uint32_t) high_output;
+ pr_debug("high_temp:%d, low_temp:%d\n", param->high_temp,
+ param->low_temp);
+ pr_debug("adc_code_high:%x, adc_code_low:%x\n", *high_threshold,
+ *low_threshold);
return 0;
}
@@ -446,6 +517,24 @@
}
EXPORT_SYMBOL(qpnp_adc_scale_batt_therm);
+int32_t qpnp_adc_scale_qrd_batt_therm(int32_t adc_code,
+ const struct qpnp_adc_properties *adc_properties,
+ const struct qpnp_vadc_chan_properties *chan_properties,
+ struct qpnp_vadc_result *adc_chan_result)
+{
+ int64_t bat_voltage = 0;
+
+ bat_voltage = qpnp_adc_scale_ratiometric_calib(adc_code,
+ adc_properties, chan_properties);
+
+ return qpnp_adc_map_temp_voltage(
+ adcmap_qrd_btm_threshold,
+ ARRAY_SIZE(adcmap_qrd_btm_threshold),
+ bat_voltage,
+ &adc_chan_result->physical);
+}
+EXPORT_SYMBOL(qpnp_adc_scale_qrd_batt_therm);
+
int32_t qpnp_adc_scale_therm_pu1(int32_t adc_code,
const struct qpnp_adc_properties *adc_properties,
const struct qpnp_vadc_chan_properties *chan_properties,
@@ -637,21 +726,35 @@
uint32_t *low_threshold, uint32_t *high_threshold)
{
struct qpnp_vadc_linear_graph vbatt_param;
- int rc = 0;
+ int rc = 0, sign = 0;
+ int64_t low_thr = 0, high_thr = 0;
rc = qpnp_get_vadc_gain_and_offset(&vbatt_param, CALIB_ABSOLUTE);
if (rc < 0)
return rc;
- *low_threshold = (((param->low_thr/3) - QPNP_ADC_625_UV) *
+ low_thr = (((param->low_thr/3) - QPNP_ADC_625_UV) *
vbatt_param.dy);
- do_div(*low_threshold, QPNP_ADC_625_UV);
- *low_threshold += vbatt_param.adc_gnd;
+ if (low_thr < 0) {
+ sign = 1;
+ low_thr = -low_thr;
+ }
+ do_div(low_thr, QPNP_ADC_625_UV);
+ if (sign)
+ low_thr = -low_thr;
+ *low_threshold = low_thr + vbatt_param.adc_gnd;
- *high_threshold = (((param->high_thr/3) - QPNP_ADC_625_UV) *
+ sign = 0;
+ high_thr = (((param->high_thr/3) - QPNP_ADC_625_UV) *
vbatt_param.dy);
- do_div(*high_threshold, QPNP_ADC_625_UV);
- *high_threshold += vbatt_param.adc_gnd;
+ if (high_thr < 0) {
+ sign = 1;
+ high_thr = -high_thr;
+ }
+ do_div(high_thr, QPNP_ADC_625_UV);
+ if (sign)
+ high_thr = -high_thr;
+ *high_threshold = high_thr + vbatt_param.adc_gnd;
pr_debug("high_volt:%d, low_volt:%d\n", param->high_thr,
param->low_thr);
@@ -876,8 +979,6 @@
init_completion(&adc_qpnp->adc_rslt_completion);
- mutex_init(&adc_qpnp->adc_lock);
-
return 0;
}
EXPORT_SYMBOL(qpnp_adc_get_devicetree_data);
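The negate/do_div/negate sequences now used in both the BTM and VBAT threshold calculations exist because the kernel's do_div() operates on an unsigned 64-bit dividend and cannot take a negative value, so the sign has to be stripped before the division and restored afterwards. A small helper capturing that idiom (illustrative only, not part of this patch):

    static int64_t qpnp_adc_signed_div(int64_t dividend, uint32_t divisor)
    {
            int negative = 0;

            if (dividend < 0) {
                    negative = 1;
                    dividend = -dividend;
            }
            /* do_div() divides in place and expects a non-negative value */
            do_div(dividend, divisor);

            return negative ? -dividend : dividend;
    }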
diff --git a/drivers/hwmon/qpnp-adc-current.c b/drivers/hwmon/qpnp-adc-current.c
index 0b02a34..275291f 100644
--- a/drivers/hwmon/qpnp-adc-current.c
+++ b/drivers/hwmon/qpnp-adc-current.c
@@ -129,6 +129,7 @@
struct qpnp_iadc_drv {
struct qpnp_adc_drv *adc;
int32_t rsense;
+ bool external_rsense;
struct device *iadc_hwmon;
bool iadc_initialized;
int64_t die_temp_calib_offset;
@@ -543,6 +544,9 @@
if (!iadc || !iadc->iadc_initialized)
return -EPROBE_DEFER;
+ if (iadc->external_rsense)
+ *rsense = iadc->rsense;
+
rc = qpnp_iadc_read_reg(QPNP_IADC_NOMINAL_RSENSE, &rslt_rsense);
if (rc < 0) {
pr_err("qpnp adc rsense read failed with %d\n", rc);
@@ -571,15 +575,21 @@
{
struct qpnp_iadc_drv *iadc = qpnp_iadc;
struct qpnp_vadc_result result_pmic_therm;
+ int64_t die_temp_offset;
int rc = 0;
rc = qpnp_vadc_read(DIE_TEMP, &result_pmic_therm);
if (rc < 0)
return rc;
- if (((uint64_t) (result_pmic_therm.physical -
- iadc->die_temp_calib_offset))
- > QPNP_IADC_DIE_TEMP_CALIB_OFFSET) {
+ die_temp_offset = result_pmic_therm.physical -
+ iadc->die_temp_calib_offset;
+ if (die_temp_offset < 0)
+ die_temp_offset = -die_temp_offset;
+
+ if (die_temp_offset > QPNP_IADC_DIE_TEMP_CALIB_OFFSET) {
+ iadc->die_temp_calib_offset =
+ result_pmic_therm.physical;
rc = qpnp_iadc_calibrate_for_trim();
if (rc)
pr_err("periodic IADC calibration failed\n");
@@ -820,11 +830,15 @@
goto fail;
}
+ mutex_init(&iadc->adc->adc_lock);
+
rc = of_property_read_u32(node, "qcom,rsense",
&iadc->rsense);
- if (rc) {
- pr_err("Invalid rsens reference property\n");
- goto fail;
+ if (rc)
+ pr_debug("Defaulting to internal rsense\n");
+ else {
+ pr_debug("Use external rsense\n");
+ iadc->external_rsense = true;
}
rc = devm_request_irq(&spmi->dev, iadc->adc->adc_irq_eoc,
diff --git a/drivers/hwmon/qpnp-adc-voltage.c b/drivers/hwmon/qpnp-adc-voltage.c
index d296a47..edb1d66 100644
--- a/drivers/hwmon/qpnp-adc-voltage.c
+++ b/drivers/hwmon/qpnp-adc-voltage.c
@@ -112,6 +112,7 @@
[SCALE_XOTHERM] = {qpnp_adc_tdkntcg_therm},
[SCALE_THERM_100K_PULLUP] = {qpnp_adc_scale_therm_pu2},
[SCALE_THERM_150K_PULLUP] = {qpnp_adc_scale_therm_pu1},
+ [SCALE_QRD_BATT_THERM] = {qpnp_adc_scale_qrd_batt_therm},
};
static int32_t qpnp_vadc_read_reg(int16_t reg, u8 *data)
@@ -431,12 +432,32 @@
return 0;
}
-static uint32_t qpnp_vadc_calib_device(void)
+static void qpnp_vadc_625mv_channel_sel(uint32_t *ref_channel_sel)
+{
+ struct qpnp_vadc_drv *vadc = qpnp_vadc;
+ uint32_t dt_index = 0;
+
+ /* Check if the buffered 625mV channel exists */
+ while ((vadc->adc->adc_channels[dt_index].channel_num
+ != SPARE1) && (dt_index < vadc->max_channels_available))
+ dt_index++;
+
+ if (dt_index >= vadc->max_channels_available) {
+ pr_debug("Use default 625mV ref channel\n");
+ *ref_channel_sel = REF_625MV;
+ } else {
+ pr_debug("Use buffered 625mV ref channel\n");
+ *ref_channel_sel = SPARE1;
+ }
+}
+
+static int32_t qpnp_vadc_calib_device(void)
{
struct qpnp_vadc_drv *vadc = qpnp_vadc;
struct qpnp_adc_amux_properties conv;
int rc, calib_read_1, calib_read_2, count = 0;
u8 status1 = 0;
+ uint32_t ref_channel_sel = 0;
conv.amux_channel = REF_125V;
conv.decimation = DECIMATION_TYPE2;
@@ -470,7 +491,8 @@
goto calib_fail;
}
- conv.amux_channel = REF_625MV;
+ qpnp_vadc_625mv_channel_sel(&ref_channel_sel);
+ conv.amux_channel = ref_channel_sel;
conv.decimation = DECIMATION_TYPE2;
conv.mode_sel = ADC_OP_NORMAL_MODE << QPNP_VADC_OP_MODE_SHIFT;
conv.hw_settle_time = ADC_CHANNEL_HW_SETTLE_DELAY_0US;
@@ -647,6 +669,7 @@
{
struct qpnp_vadc_drv *vadc = qpnp_vadc;
int rc = 0, scale_type, amux_prescaling, dt_index = 0;
+ uint32_t ref_channel;
if (!vadc || !vadc->vadc_initialized)
return -EPROBE_DEFER;
@@ -666,6 +689,11 @@
vadc->vadc_init_calib = true;
}
+ if (channel == REF_625MV) {
+ qpnp_vadc_625mv_channel_sel(&ref_channel);
+ channel = ref_channel;
+ }
+
vadc->adc->amux_prop->amux_channel = channel;
while ((vadc->adc->adc_channels[dt_index].channel_num
@@ -982,6 +1010,7 @@
dev_err(&spmi->dev, "failed to read device tree\n");
goto fail;
}
+ mutex_init(&vadc->adc->adc_lock);
rc = devm_request_irq(&spmi->dev, vadc->adc->adc_irq_eoc,
qpnp_vadc_isr, IRQF_TRIGGER_RISING,
diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
index 0ea230a..29b269a 100644
--- a/drivers/input/touchscreen/atmel_mxt_ts.c
+++ b/drivers/input/touchscreen/atmel_mxt_ts.c
@@ -128,6 +128,7 @@
#define MXT_SPT_DIGITIZER_T43 43
#define MXT_SPT_MESSAGECOUNT_T44 44
#define MXT_SPT_CTECONFIG_T46 46
+#define MXT_SPT_EXTRANOISESUPCTRLS_T58 58
#define MXT_SPT_TIMER_T61 61
/* MXT_GEN_COMMAND_T6 field */
@@ -396,6 +397,7 @@
atomic_t st_enabled;
atomic_t st_pending_irqs;
struct completion st_completion;
+ struct completion st_powerdown;
#endif
};
@@ -432,6 +434,7 @@
case MXT_SPT_USERDATA_T38:
case MXT_SPT_DIGITIZER_T43:
case MXT_SPT_CTECONFIG_T46:
+ case MXT_SPT_EXTRANOISESUPCTRLS_T58:
case MXT_SPT_TIMER_T61:
case MXT_PROCI_ADAPTIVETHRESHOLD_T55:
return true;
@@ -469,6 +472,7 @@
case MXT_SPT_USERDATA_T38:
case MXT_SPT_DIGITIZER_T43:
case MXT_SPT_CTECONFIG_T46:
+ case MXT_SPT_EXTRANOISESUPCTRLS_T58:
case MXT_SPT_TIMER_T61:
case MXT_PROCI_ADAPTIVETHRESHOLD_T55:
return true;
@@ -998,8 +1002,8 @@
static irqreturn_t mxt_filter_interrupt(struct mxt_data *data)
{
if (atomic_read(&data->st_enabled)) {
- atomic_cmpxchg(&data->st_pending_irqs, 0, 1);
- complete(&data->st_completion);
+ if (atomic_cmpxchg(&data->st_pending_irqs, 0, 1) == 0)
+ complete(&data->st_completion);
return IRQ_HANDLED;
}
return IRQ_NONE;
@@ -1993,6 +1997,7 @@
atomic_set(&data->st_enabled, 0);
complete(&data->st_completion);
mxt_interrupt(data->client->irq, data);
+ complete(&data->st_powerdown);
break;
case 1:
if (atomic_read(&data->st_enabled)) {
@@ -2005,6 +2010,8 @@
err = -EIO;
break;
}
+ INIT_COMPLETION(data->st_completion);
+ INIT_COMPLETION(data->st_powerdown);
atomic_set(&data->st_pending_irqs, 0);
atomic_set(&data->st_enabled, 1);
break;
@@ -2032,7 +2039,7 @@
return err;
if (atomic_cmpxchg(&data->st_pending_irqs, 1, 0) != 1)
- return -EBADF;
+ return -EINVAL;
return scnprintf(buf, PAGE_SIZE, "%u", 1);
}
@@ -2061,9 +2068,25 @@
.attrs = mxt_attrs,
};
+
+#if defined(CONFIG_SECURE_TOUCH)
+static void mxt_secure_touch_stop(struct mxt_data *data)
+{
+ if (atomic_read(&data->st_enabled)) {
+ complete(&data->st_completion);
+ wait_for_completion_interruptible(&data->st_powerdown);
+ }
+}
+#else
+static void mxt_secure_touch_stop(struct mxt_data *data)
+{
+}
+#endif
+
static int mxt_start(struct mxt_data *data)
{
int error;
+ mxt_secure_touch_stop(data);
/* restore the old power state values and reenable touch */
error = __mxt_write_reg(data->client, data->t7_start_addr,
@@ -2081,6 +2104,7 @@
{
int error;
u8 t7_data[T7_DATA_SIZE] = {0};
+ mxt_secure_touch_stop(data);
error = __mxt_write_reg(data->client, data->t7_start_addr,
T7_DATA_SIZE, t7_data);
@@ -2462,6 +2486,7 @@
/* calibrate */
if (data->pdata->need_calibration) {
+ mxt_secure_touch_stop(data);
error = mxt_write_object(data, MXT_GEN_COMMAND_T6,
MXT_COMMAND_CALIBRATE, 1);
if (error < 0)
@@ -2797,12 +2822,13 @@
#endif
#if defined(CONFIG_SECURE_TOUCH)
-static void __devinit secure_touch_init(struct mxt_data *data)
+static void __devinit mxt_secure_touch_init(struct mxt_data *data)
{
init_completion(&data->st_completion);
+ init_completion(&data->st_powerdown);
}
#else
-static void __devinit secure_touch_init(struct mxt_data *data)
+static void __devinit mxt_secure_touch_init(struct mxt_data *data)
{
}
#endif
@@ -2999,7 +3025,7 @@
mxt_debugfs_init(data);
- secure_touch_init(data);
+ mxt_secure_touch_init(data);
return 0;
diff --git a/drivers/input/touchscreen/synaptics_i2c_rmi4.c b/drivers/input/touchscreen/synaptics_i2c_rmi4.c
index 417ef83..426c7e7 100644
--- a/drivers/input/touchscreen/synaptics_i2c_rmi4.c
+++ b/drivers/input/touchscreen/synaptics_i2c_rmi4.c
@@ -34,6 +34,9 @@
#define DRIVER_NAME "synaptics_rmi4_i2c"
#define INPUT_PHYS_NAME "synaptics_rmi4_i2c/input0"
+
+#define RESET_DELAY 100
+
#define TYPE_B_PROTOCOL
#define NO_0D_WHILE_2D
@@ -68,6 +71,16 @@
#define NO_SLEEP_OFF (0 << 2)
#define NO_SLEEP_ON (1 << 2)
+enum device_status {
+ STATUS_NO_ERROR = 0x00,
+ STATUS_RESET_OCCURED = 0x01,
+ STATUS_INVALID_CONFIG = 0x02,
+ STATUS_DEVICE_FAILURE = 0x03,
+ STATUS_CONFIG_CRC_FAILURE = 0x04,
+ STATUS_FIRMWARE_CRC_FAILURE = 0x05,
+ STATUS_CRC_IN_PROGRESS = 0x06
+};
+
#define RMI4_VTG_MIN_UV 2700000
#define RMI4_VTG_MAX_UV 3300000
#define RMI4_ACTIVE_LOAD_UA 15000
@@ -79,7 +92,6 @@
#define RMI4_I2C_LPM_LOAD_UA 10
#define RMI4_GPIO_SLEEP_LOW_US 10000
-#define RMI4_GPIO_WAIT_HIGH_MS 25
static int synaptics_rmi4_i2c_read(struct synaptics_rmi4_data *rmi4_data,
unsigned short addr, unsigned char *data,
@@ -91,6 +103,10 @@
static int synaptics_rmi4_reset_device(struct synaptics_rmi4_data *rmi4_data);
+#ifdef CONFIG_PM
+static int synaptics_rmi4_suspend(struct device *dev);
+
+static int synaptics_rmi4_resume(struct device *dev);
#ifdef CONFIG_HAS_EARLYSUSPEND
static ssize_t synaptics_rmi4_full_pm_cycle_show(struct device *dev,
struct device_attribute *attr, char *buf);
@@ -101,10 +117,7 @@
static void synaptics_rmi4_early_suspend(struct early_suspend *h);
static void synaptics_rmi4_late_resume(struct early_suspend *h);
-
-static int synaptics_rmi4_suspend(struct device *dev);
-
-static int synaptics_rmi4_resume(struct device *dev);
+#endif
#endif
static ssize_t synaptics_rmi4_f01_reset_store(struct device *dev,
@@ -1069,17 +1082,25 @@
bool enable)
{
int retval = 0;
- unsigned char intr_status;
+ unsigned char *intr_status;
if (enable) {
if (rmi4_data->irq_enabled)
return retval;
+ intr_status = kzalloc(rmi4_data->num_of_intr_regs, GFP_KERNEL);
+ if (!intr_status) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "%s: Failed to alloc memory\n",
+ __func__);
+ return -ENOMEM;
+ }
/* Clear interrupts first */
retval = synaptics_rmi4_i2c_read(rmi4_data,
rmi4_data->f01_data_base_addr + 1,
- &intr_status,
+ intr_status,
rmi4_data->num_of_intr_regs);
+ kfree(intr_status);
if (retval < 0)
return retval;
@@ -1482,6 +1503,16 @@
if (retval < 0)
return retval;
+ while (status.status_code == STATUS_CRC_IN_PROGRESS) {
+ msleep(1);
+ retval = synaptics_rmi4_i2c_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ status.data,
+ sizeof(status.data));
+ if (retval < 0)
+ return retval;
+ }
+
if (status.flash_prog == 1) {
pr_notice("%s: In flash prog mode, status = 0x%02x\n",
__func__,
@@ -1645,7 +1676,7 @@
return retval;
}
- msleep(100);
+ msleep(RESET_DELAY);
return retval;
};
@@ -1816,7 +1847,7 @@
RMI4_VTG_MIN_UV, RMI4_VTG_MAX_UV);
if (retval) {
dev_err(&rmi4_data->i2c_client->dev,
- "regulator set_vtg failed retval=%d\n",
+ "regulator set_vtg failed retval =%d\n",
retval);
goto err_set_vtg_vdd;
}
@@ -1839,7 +1870,7 @@
RMI4_I2C_VTG_MIN_UV, RMI4_I2C_VTG_MAX_UV);
if (retval) {
dev_err(&rmi4_data->i2c_client->dev,
- "reg set i2c vtg failed retval=%d\n",
+ "reg set i2c vtg failed retval =%d\n",
retval);
goto err_set_vtg_i2c;
}
@@ -2110,7 +2141,7 @@
gpio_set_value(platform_data->reset_gpio, 0);
usleep(RMI4_GPIO_SLEEP_LOW_US);
gpio_set_value(platform_data->reset_gpio, 1);
- msleep(RMI4_GPIO_WAIT_HIGH_MS);
+ msleep(RESET_DELAY);
} else
synaptics_rmi4_reset_command(rmi4_data);
@@ -2470,6 +2501,75 @@
}
#endif
+static int synaptics_rmi4_regulator_lpm(struct synaptics_rmi4_data *rmi4_data,
+ bool on)
+{
+ int retval;
+
+ if (on == false)
+ goto regulator_hpm;
+
+ retval = reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_LPM_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_ana set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_lpm;
+ }
+
+ if (rmi4_data->board->i2c_pull_up) {
+ retval = reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_i2c set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_lpm;
+ }
+ }
+
+ return 0;
+
+regulator_hpm:
+
+ retval = reg_set_optimum_mode_check(rmi4_data->vdd,
+ RMI4_ACTIVE_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_ana set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_hpm;
+ }
+
+ if (rmi4_data->board->i2c_pull_up) {
+ retval = reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_i2c set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_hpm;
+ }
+ }
+
+ return 0;
+
+fail_regulator_lpm:
+ reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_ACTIVE_LOAD_UA);
+ if (rmi4_data->board->i2c_pull_up)
+ reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+
+ return retval;
+
+fail_regulator_hpm:
+ reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_LPM_LOAD_UA);
+ if (rmi4_data->board->i2c_pull_up)
+ reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LPM_LOAD_UA);
+ return retval;
+}
+
/**
* synaptics_rmi4_suspend()
*
@@ -2483,6 +2583,7 @@
static int synaptics_rmi4_suspend(struct device *dev)
{
struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ int retval;
if (!rmi4_data->sensor_sleep) {
rmi4_data->touch_stopped = true;
@@ -2491,6 +2592,12 @@
synaptics_rmi4_sensor_sleep(rmi4_data);
}
+ retval = synaptics_rmi4_regulator_lpm(rmi4_data, true);
+ if (retval < 0) {
+ dev_err(dev, "failed to enter low power mode\n");
+ return retval;
+ }
+
return 0;
}
@@ -2507,6 +2614,13 @@
static int synaptics_rmi4_resume(struct device *dev)
{
struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ int retval;
+
+ retval = synaptics_rmi4_regulator_lpm(rmi4_data, false);
+ if (retval < 0) {
+ dev_err(dev, "failed to enter active power mode\n");
+ return retval;
+ }
synaptics_rmi4_sensor_wake(rmi4_data);
rmi4_data->touch_stopped = false;
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f35f0e7..aa69475 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -16,7 +16,7 @@
# MSM IOMMU support
config MSM_IOMMU
bool "MSM IOMMU Support"
- depends on ARCH_MSM8X60 || ARCH_MSM8960 || ARCH_APQ8064 || ARCH_MSM8974 || ARCH_MPQ8092 || ARCH_MSM8610 || ARCH_MSM8226 || ARCH_MSMZINC
+ depends on ARCH_MSM8X60 || ARCH_MSM8960 || ARCH_APQ8064 || ARCH_MSM8974 || ARCH_MPQ8092 || ARCH_MSM8610 || ARCH_MSM8226 || ARCH_APQ8084
select IOMMU_API
help
Support for the IOMMUs found on certain Qualcomm SOCs.
diff --git a/drivers/iommu/msm_iommu-v0.c b/drivers/iommu/msm_iommu-v0.c
index c0a4720..b1960c6 100644
--- a/drivers/iommu/msm_iommu-v0.c
+++ b/drivers/iommu/msm_iommu-v0.c
@@ -55,6 +55,9 @@
.name = "msm_iommu_sec_bus",
};
+static int msm_iommu_unmap_range(struct iommu_domain *domain, unsigned int va,
+ unsigned int len);
+
static inline void clean_pte(unsigned long *start, unsigned long *end,
int redirect)
{
@@ -167,6 +170,11 @@
/* No need to do anything. IOMMUv0 is always on. */
}
+static void *_iommu_lock_initialize(void)
+{
+ return msm_iommu_lock_initialize();
+}
+
static void _iommu_lock_acquire(void)
{
msm_iommu_lock();
@@ -182,6 +190,7 @@
.iommu_power_off = __disable_regulators,
.iommu_clk_on = __enable_clocks,
.iommu_clk_off = __disable_clocks,
+ .iommu_lock_initialize = _iommu_lock_initialize,
.iommu_lock_acquire = _iommu_lock_acquire,
.iommu_lock_release = _iommu_lock_release,
};
@@ -953,6 +962,7 @@
int prot)
{
unsigned int pa;
+ unsigned int start_va = va;
unsigned int offset = 0;
unsigned long *fl_table;
unsigned long *fl_pte;
@@ -1026,12 +1036,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
continue;
}
@@ -1085,12 +1089,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
}
@@ -1103,6 +1101,8 @@
__flush_iotlb(domain);
fail:
mutex_unlock(&msm_iommu_lock);
+ if (ret && offset > 0)
+ msm_iommu_unmap_range(domain, start_va, offset);
return ret;
}
diff --git a/drivers/iommu/msm_iommu-v1.c b/drivers/iommu/msm_iommu-v1.c
index 8a26003..8e68beb 100644
--- a/drivers/iommu/msm_iommu-v1.c
+++ b/drivers/iommu/msm_iommu-v1.c
@@ -804,6 +804,19 @@
ctx_drvdata = dev_get_drvdata(&pdev->dev);
BUG_ON(!ctx_drvdata);
+ if (!drvdata->ctx_attach_count) {
+ pr_err("Unexpected IOMMU page fault!\n");
+ pr_err("name = %s\n", drvdata->name);
+ pr_err("Power is OFF. Unable to read page fault information\n");
+ /*
+ * We cannot determine which context bank caused the issue so
+ * we just return handled here to ensure IRQ handler code is
+ * happy
+ */
+ ret = IRQ_HANDLED;
+ goto fail;
+ }
+
ret = __enable_clocks(drvdata);
if (ret) {
ret = IRQ_NONE;
diff --git a/drivers/iommu/msm_iommu_dev-v0.c b/drivers/iommu/msm_iommu_dev-v0.c
index 549800f..7ae0b21 100644
--- a/drivers/iommu/msm_iommu_dev-v0.c
+++ b/drivers/iommu/msm_iommu_dev-v0.c
@@ -414,6 +414,7 @@
pmon_info->iommu.ops = &iommu_access_ops_v0;
pmon_info->iommu.hw_ops = iommu_pm_get_hw_ops_v0();
pmon_info->iommu.iommu_name = drvdata->name;
+ pmon_info->iommu.always_on = 1;
ret = msm_iommu_pm_iommu_register(pmon_info);
if (ret) {
pr_err("%s iommu register fail\n",
diff --git a/drivers/iommu/msm_iommu_pagetable.c b/drivers/iommu/msm_iommu_pagetable.c
index b62bb76..b871a5a 100644
--- a/drivers/iommu/msm_iommu_pagetable.c
+++ b/drivers/iommu/msm_iommu_pagetable.c
@@ -21,6 +21,7 @@
#include <mach/iommu.h>
#include <mach/msm_iommu_priv.h>
+#include <trace/events/kmem.h>
#include "msm_iommu_pagetable.h"
/* Sharability attributes of MSM IOMMU mappings */
@@ -431,6 +432,7 @@
struct scatterlist *sg, unsigned int len, int prot)
{
phys_addr_t pa;
+ unsigned int start_va = va;
unsigned int offset = 0;
unsigned long *fl_pte;
unsigned long fl_offset;
@@ -470,6 +472,8 @@
chunk_size = SZ_1M;
/* 64k or 4k determined later */
+ trace_iommu_map_range(va, pa, sg->length, chunk_size);
+
/* for 1M and 16M, only first level entries are required */
if (chunk_size >= SZ_1M) {
if (chunk_size == SZ_16M) {
@@ -495,12 +499,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
continue;
}
@@ -534,6 +532,9 @@
else
chunk_size = SZ_4K;
+ trace_iommu_map_range(va, pa, sg->length,
+ chunk_size);
+
if (chunk_size == SZ_4K) {
sl_4k(&sl_table[sl_offset], pa, pgprot4k);
sl_offset++;
@@ -553,12 +554,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
}
@@ -569,6 +564,9 @@
}
fail:
+ if (ret && offset > 0)
+ msm_iommu_pagetable_unmap_range(pt, start_va, offset);
+
return ret;
}
diff --git a/drivers/iommu/msm_iommu_perfmon.c b/drivers/iommu/msm_iommu_perfmon.c
index fee8a4a..a11d794 100644
--- a/drivers/iommu/msm_iommu_perfmon.c
+++ b/drivers/iommu/msm_iommu_perfmon.c
@@ -90,6 +90,19 @@
return pos;
}
+static int iommu_pm_event_class_supported(struct iommu_pmon *pmon,
+ int event_class)
+{
+ unsigned int nevent_cls = pmon->nevent_cls_supported;
+ unsigned int i;
+
+ for (i = 0; i < nevent_cls; ++i) {
+ if (event_class == pmon->event_cls_supported[i])
+ return event_class;
+ }
+ return MSM_IOMMU_PMU_NO_EVENT_CLASS;
+}
+
static const char *iommu_pm_find_event_class_name(int event_class)
{
size_t array_len;
@@ -113,7 +126,8 @@
return event_class_name;
}
-static int iommu_pm_find_event_class(const char *event_class_name)
+static int iommu_pm_find_event_class(struct iommu_pmon *pmon,
+ const char *event_class_name)
{
size_t array_len;
struct event_class *ptr;
@@ -134,6 +148,7 @@
}
out:
+ event_class = iommu_pm_event_class_supported(pmon, event_class);
return event_class;
}
@@ -389,11 +404,11 @@
rv = kstrtol(buf, 10, &value);
if (!rv) {
counter->current_event_class =
- iommu_pm_find_event_class(
+ iommu_pm_find_event_class(pmon,
iommu_pm_find_event_class_name(value));
} else {
counter->current_event_class =
- iommu_pm_find_event_class(buf);
+ iommu_pm_find_event_class(pmon, buf);
} }
if (current_event_class != counter->current_event_class)
@@ -488,14 +503,17 @@
rv = kstrtoul(buf, 10, &cmd);
if (!rv && (cmd < 2)) {
if (pmon->enabled == 1 && cmd == 0) {
- if (pmon->iommu_attach_count > 0)
+ if (pmon->iommu.always_on ||
+ pmon->iommu_attach_count > 0)
iommu_pm_off(pmon);
} else if (pmon->enabled == 0 && cmd == 1) {
/* We can only turn on perf. monitoring if
- * iommu is attached. Delay turning on perf.
- * monitoring until we are attached.
+ * iommu is attached (if not always on).
+ * Delay turning on perf. monitoring until
+ * we are attached.
*/
- if (pmon->iommu_attach_count > 0)
+ if (pmon->iommu.always_on ||
+ pmon->iommu_attach_count > 0)
iommu_pm_on(pmon);
else
pmon->enabled = 1;
@@ -788,9 +806,9 @@
++pmon->iommu_attach_count;
if (pmon->iommu_attach_count == 1) {
/* If perf. mon was enabled before we attached we do
- * the actual after we attach.
+ * the actual enabling after we attach.
*/
- if (pmon->enabled)
+ if (pmon->enabled && !pmon->iommu.always_on)
iommu_pm_on(pmon);
}
mutex_unlock(&pmon->lock);
@@ -805,9 +823,9 @@
mutex_lock(&pmon->lock);
if (pmon->iommu_attach_count == 1) {
/* If perf. mon is still enabled we have to disable
- * before we do the detach.
+ * before we do the detach if iommu is not always on.
*/
- if (pmon->enabled)
+ if (pmon->enabled && !pmon->iommu.always_on)
iommu_pm_off(pmon);
}
BUG_ON(pmon->iommu_attach_count == 0);
diff --git a/drivers/leds/leds-qpnp.c b/drivers/leds/leds-qpnp.c
index e88e574..5d5ff43 100644
--- a/drivers/leds/leds-qpnp.c
+++ b/drivers/leds/leds-qpnp.c
@@ -159,6 +159,41 @@
#define PWM_LUT_MAX_SIZE 63
#define RGB_LED_DISABLE 0x00
+#define MPP_MAX_LEVEL LED_FULL
+#define LED_MPP_MODE_CTRL(base) (base + 0x40)
+#define LED_MPP_VIN_CTRL(base) (base + 0x41)
+#define LED_MPP_EN_CTRL(base) (base + 0x46)
+#define LED_MPP_SINK_CTRL(base) (base + 0x4C)
+
+#define LED_MPP_CURRENT_DEFAULT 10
+#define LED_MPP_SOURCE_SEL_DEFAULT LED_MPP_MODE_ENABLE
+
+#define LED_MPP_SINK_MASK 0x07
+#define LED_MPP_MODE_MASK 0x7F
+#define LED_MPP_EN_MASK 0x80
+
+#define LED_MPP_MODE_SINK (0x06 << 4)
+#define LED_MPP_MODE_ENABLE 0x01
+#define LED_MPP_MODE_OUTPUT 0x10
+#define LED_MPP_MODE_DISABLE 0x00
+#define LED_MPP_EN_ENABLE 0x80
+#define LED_MPP_EN_DISABLE 0x00
+
+#define MPP_SOURCE_DTEST1 0x08
+
+#define KPDBL_MAX_LEVEL LED_FULL
+#define KPDBL_ROW_SRC_SEL(base) (base + 0x40)
+#define KPDBL_ENABLE(base) (base + 0x46)
+#define KPDBL_ROW_SRC(base) (base + 0xE5)
+
+#define KPDBL_ROW_SRC_SEL_VAL_MASK 0x0F
+#define KPDBL_ROW_SCAN_EN_MASK 0x80
+#define KPDBL_ROW_SCAN_VAL_MASK 0x0F
+#define KPDBL_ROW_SCAN_EN_SHIFT 7
+#define KPDBL_MODULE_EN 0x80
+#define KPDBL_MODULE_DIS 0x00
+#define KPDBL_MODULE_EN_MASK 0x80
+
/**
* enum qpnp_leds - QPNP supported led ids
* @QPNP_ID_WLED - White led backlight
@@ -170,6 +205,8 @@
QPNP_ID_RGB_RED,
QPNP_ID_RGB_GREEN,
QPNP_ID_RGB_BLUE,
+ QPNP_ID_LED_MPP,
+ QPNP_ID_KPDBL,
QPNP_ID_MAX,
};
@@ -215,9 +252,9 @@
DELAY_128us,
};
-enum rgb_mode {
- RGB_MODE_PWM = 0,
- RGB_MODE_LPG,
+enum led_mode {
+ PWM_MODE = 0,
+ LPG_MODE,
};
static u8 wled_debug_regs[] = {
@@ -240,6 +277,15 @@
static u8 rgb_pwm_debug_regs[] = {
0x45, 0x46, 0x47,
};
+
+static u8 mpp_debug_regs[] = {
+ 0x40, 0x41, 0x42, 0x45, 0x46, 0x4c,
+};
+
+static u8 kpdbl_debug_regs[] = {
+ 0x40, 0x46, 0xb1, 0xb3, 0xb4, 0xe5,
+};
+
/**
* wled_config_data - wled configuration data
* @num_strings - number of wled strings supported
@@ -264,6 +310,16 @@
};
/**
+ * mpp_config_data - mpp configuration data
+ * @current_setting - current setting, 5mA-40mA in 5mA increments
+ * @source_sel - MPP source select (defaults to mode-enable)
+ * @mode_ctrl - MPP mode control (defaults to current sink)
+ */
+struct mpp_config_data {
+ u8 current_setting;
+ u8 source_sel;
+ u8 mode_ctrl;
+};
+
+/**
* flash_config_data - flash configuration data
* @current_prgm - current to be programmed, scaled by max level
* @clamp_curr - clamp current to use
@@ -294,6 +350,25 @@
};
/**
+ * kpdbl_config_data - kpdbl configuration data
+ * @pwm_device - pwm device
+ * @pwm_channel - pwm channel to be configured for led
+ * @pwm_period_us - period for pwm, in us
+ * @row_src_sel_val - select source, 0 for vph_pwr and 1 for vbst
+ * @row_scan_en - enable row scan
+ * @row_scan_val - map to enable needed rows
+ */
+struct kpdbl_config_data {
+ struct pwm_device *pwm_dev;
+ int pwm_channel;
+ u32 pwm_period_us;
+ u32 row_src_sel_val;
+ u32 row_scan_en;
+ u32 row_scan_val;
+ u8 mode;
+};
+
+/**
* rgb_config_data - rgb configuration data
* @lut_params - lut parameters to be used by pwm driver
* @pwm_device - pwm device
@@ -335,7 +410,9 @@
spinlock_t lock;
struct wled_config_data *wled_cfg;
struct flash_config_data *flash_cfg;
+ struct kpdbl_config_data *kpdbl_cfg;
struct rgb_config_data *rgb_cfg;
+ struct mpp_config_data *mpp_cfg;
int max_current;
bool default_on;
int turn_off_delay_ms;
@@ -376,7 +453,8 @@
led->spmi_dev->sid,
led->base + regs[i],
&val, sizeof(val));
- pr_debug("0x%x = 0x%x\n", led->base + regs[i], val);
+ pr_debug("%s: 0x%x = 0x%x\n", led->cdev.name,
+ led->base + regs[i], val);
}
pr_debug("===== %s LED register dump end =====\n", led->cdev.name);
}
@@ -458,6 +536,68 @@
return 0;
}
+static int qpnp_mpp_set(struct qpnp_led_data *led)
+{
+ int rc, val;
+
+ if (led->cdev.brightness) {
+ val = (led->cdev.brightness * LED_MPP_SINK_MASK) / LED_FULL;
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_SINK_CTRL(led->base),
+ LED_MPP_SINK_MASK, val);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable reg\n");
+ return rc;
+ }
+
+ val = led->mpp_cfg->source_sel | led->mpp_cfg->mode_ctrl;
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_MODE_CTRL(led->base), LED_MPP_MODE_MASK,
+ val);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led mode reg\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_EN_CTRL(led->base), LED_MPP_EN_MASK,
+ LED_MPP_EN_ENABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable " \
+ "reg\n");
+ return rc;
+ }
+ } else {
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_MODE_CTRL(led->base),
+ LED_MPP_MODE_MASK,
+ LED_MPP_MODE_DISABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led mode reg\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_EN_CTRL(led->base),
+ LED_MPP_EN_MASK,
+ LED_MPP_EN_DISABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable reg\n");
+ return rc;
+ }
+ }
+
+ qpnp_dump_regs(led, mpp_debug_regs, ARRAY_SIZE(mpp_debug_regs));
+
+ return 0;
+}
+
static int qpnp_flash_set(struct qpnp_led_data *led)
{
int rc;
@@ -670,20 +810,57 @@
return 0;
}
+static int qpnp_kpdbl_set(struct qpnp_led_data *led)
+{
+ int duty_us;
+ int rc;
+
+ if (led->cdev.brightness) {
+ rc = qpnp_led_masked_write(led, KPDBL_ENABLE(led->base),
+ KPDBL_MODULE_EN_MASK, KPDBL_MODULE_EN);
+ duty_us = (led->kpdbl_cfg->pwm_period_us *
+ led->cdev.brightness) / KPDBL_MAX_LEVEL;
+ rc = pwm_config(led->kpdbl_cfg->pwm_dev, duty_us,
+ led->kpdbl_cfg->pwm_period_us);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev, "pwm config failed\n");
+ return rc;
+ }
+ rc = pwm_enable(led->kpdbl_cfg->pwm_dev);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev, "pwm enable failed\n");
+ return rc;
+ }
+ } else {
+ pwm_disable(led->kpdbl_cfg->pwm_dev);
+ rc = qpnp_led_masked_write(led, KPDBL_ENABLE(led->base),
+ KPDBL_MODULE_EN_MASK, KPDBL_MODULE_DIS);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable reg\n");
+ return rc;
+ }
+ }
+
+ qpnp_dump_regs(led, kpdbl_debug_regs, ARRAY_SIZE(kpdbl_debug_regs));
+
+ return 0;
+}
+
static int qpnp_rgb_set(struct qpnp_led_data *led)
{
int duty_us;
int rc;
if (led->cdev.brightness) {
- if (led->rgb_cfg->mode == RGB_MODE_PWM) {
+ if (led->rgb_cfg->mode == PWM_MODE) {
duty_us = (led->rgb_cfg->pwm_period_us *
led->cdev.brightness) / LED_FULL;
rc = pwm_config(led->rgb_cfg->pwm_dev, duty_us,
led->rgb_cfg->pwm_period_us);
if (rc < 0) {
- dev_err(&led->spmi_dev->dev, "Failed to " \
- "configure pwm for new values\n");
+ dev_err(&led->spmi_dev->dev,
+ "pwm config failed\n");
return rc;
}
}
@@ -696,6 +873,10 @@
return rc;
}
rc = pwm_enable(led->rgb_cfg->pwm_dev);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev, "pwm enable failed\n");
+ return rc;
+ }
} else {
pwm_disable(led->rgb_cfg->pwm_dev);
rc = qpnp_led_masked_write(led,
@@ -750,6 +931,17 @@
dev_err(&led->spmi_dev->dev,
"RGB set brightness failed (%d)\n", rc);
break;
+ case QPNP_ID_LED_MPP:
+ rc = qpnp_mpp_set(led);
+ if (rc < 0)
+ dev_err(&led->spmi_dev->dev,
+ "MPP set brightness failed (%d)\n", rc);
+ case QPNP_ID_KPDBL:
+ rc = qpnp_kpdbl_set(led);
+ if (rc < 0)
+ dev_err(&led->spmi_dev->dev,
+ "KPDBL set brightness failed (%d)\n", rc);
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
break;
@@ -772,6 +964,12 @@
case QPNP_ID_RGB_BLUE:
led->cdev.max_brightness = RGB_MAX_LEVEL;
break;
+ case QPNP_ID_LED_MPP:
+ led->cdev.max_brightness = MPP_MAX_LEVEL;
+ break;
+ case QPNP_ID_KPDBL:
+ led->cdev.max_brightness = KPDBL_MAX_LEVEL;
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
return -EINVAL;
@@ -1117,6 +1315,81 @@
return 0;
}
+static int __devinit qpnp_kpdbl_init(struct qpnp_led_data *led)
+{
+ int rc;
+ u8 val;
+
+ /* enable row source select */
+ rc = qpnp_led_masked_write(led, KPDBL_ROW_SRC_SEL(led->base),
+ KPDBL_ROW_SRC_SEL_VAL_MASK, led->kpdbl_cfg->row_src_sel_val);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Enable row src sel write failed(%d)\n", rc);
+ return rc;
+ }
+
+ /* row source */
+ rc = spmi_ext_register_readl(led->spmi_dev->ctrl, led->spmi_dev->sid,
+ KPDBL_ROW_SRC(led->base), &val, 1);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Unable to read from addr=%x, rc(%d)\n",
+ KPDBL_ROW_SRC(led->base), rc);
+ return rc;
+ }
+
+ val &= ~KPDBL_ROW_SCAN_VAL_MASK;
+ val |= led->kpdbl_cfg->row_scan_val;
+
+ led->kpdbl_cfg->row_scan_en <<= KPDBL_ROW_SCAN_EN_SHIFT;
+ val &= ~KPDBL_ROW_SCAN_EN_MASK;
+ val |= led->kpdbl_cfg->row_scan_en;
+
+ rc = spmi_ext_register_writel(led->spmi_dev->ctrl, led->spmi_dev->sid,
+ KPDBL_ROW_SRC(led->base), &val, 1);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Unable to write to addr=%x, rc(%d)\n",
+ KPDBL_ROW_SRC(led->base), rc);
+ return rc;
+ }
+
+ /* enable module */
+ rc = qpnp_led_masked_write(led, KPDBL_ENABLE(led->base),
+ KPDBL_MODULE_EN_MASK, KPDBL_MODULE_EN);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Enable module write failed(%d)\n", rc);
+ return rc;
+ }
+
+ if (led->kpdbl_cfg->pwm_channel != -1) {
+ led->kpdbl_cfg->pwm_dev =
+ pwm_request(led->kpdbl_cfg->pwm_channel,
+ led->cdev.name);
+
+ if (IS_ERR_OR_NULL(led->kpdbl_cfg->pwm_dev)) {
+ dev_err(&led->spmi_dev->dev,
+ "could not acquire PWM Channel %d, " \
+ "error %ld\n",
+ led->kpdbl_cfg->pwm_channel,
+ PTR_ERR(led->kpdbl_cfg->pwm_dev));
+ led->kpdbl_cfg->pwm_dev = NULL;
+ return -ENODEV;
+ }
+ } else {
+ dev_err(&led->spmi_dev->dev,
+ "Invalid PWM channel\n");
+ return -EINVAL;
+ }
+
+ /* dump kpdbl registers */
+ qpnp_dump_regs(led, kpdbl_debug_regs, ARRAY_SIZE(kpdbl_debug_regs));
+
+ return 0;
+}
+
static int __devinit qpnp_rgb_init(struct qpnp_led_data *led)
{
int rc, start_idx, idx_len;
@@ -1144,7 +1417,7 @@
return -ENODEV;
}
- if (led->rgb_cfg->mode == RGB_MODE_LPG) {
+ if (led->rgb_cfg->mode == LPG_MODE) {
start_idx =
led->rgb_cfg->duty_cycles->start_idx;
idx_len =
@@ -1187,7 +1460,7 @@
static int __devinit qpnp_led_initialize(struct qpnp_led_data *led)
{
- int rc;
+ int rc = 0;
switch (led->id) {
case QPNP_ID_WLED:
@@ -1211,12 +1484,20 @@
dev_err(&led->spmi_dev->dev,
"RGB initialize failed(%d)\n", rc);
break;
+ case QPNP_ID_LED_MPP:
+ break;
+ case QPNP_ID_KPDBL:
+ rc = qpnp_kpdbl_init(led);
+ if (rc)
+ dev_err(&led->spmi_dev->dev,
+ "KPDBL initialize failed(%d)\n", rc);
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
return -EINVAL;
}
- return 0;
+ return rc;
}
static int __devinit qpnp_get_common_configs(struct qpnp_led_data *led,
@@ -1401,6 +1682,63 @@
return 0;
}
+static int __devinit qpnp_get_config_kpdbl(struct qpnp_led_data *led,
+ struct device_node *node)
+{
+ int rc;
+ u32 val;
+
+ led->kpdbl_cfg = devm_kzalloc(&led->spmi_dev->dev,
+ sizeof(struct kpdbl_config_data), GFP_KERNEL);
+ if (!led->kpdbl_cfg) {
+ dev_err(&led->spmi_dev->dev, "Unable to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ rc = of_property_read_u32(node, "qcom,mode", &val);
+ if (!rc)
+ led->kpdbl_cfg->mode = (u8) val;
+ else
+ return rc;
+
+ if (led->kpdbl_cfg->mode == LPG_MODE) {
+ dev_err(&led->spmi_dev->dev, "LPG mode not supported\n");
+ return -EINVAL;
+ }
+
+ rc = of_property_read_u32(node, "qcom,pwm-channel", &val);
+ if (!rc)
+ led->kpdbl_cfg->pwm_channel = (u8) val;
+ else
+ return rc;
+
+ rc = of_property_read_u32(node, "qcom,pwm-us", &val);
+ if (!rc)
+ led->kpdbl_cfg->pwm_period_us = val;
+ else
+ return rc;
+
+ rc = of_property_read_u32(node, "qcom,row-src-sel-val", &val);
+ if (!rc)
+ led->kpdbl_cfg->row_src_sel_val = val;
+ else
+ return rc;
+
+ rc = of_property_read_u32(node, "qcom,row-scan-val", &val);
+ if (!rc)
+ led->kpdbl_cfg->row_scan_val = val;
+ else
+ return rc;
+
+ rc = of_property_read_u32(node, "qcom,row-scan-en", &val);
+ if (!rc)
+ led->kpdbl_cfg->row_scan_en = val;
+ else
+ return rc;
+
+ return 0;
+}
+
static int __devinit qpnp_get_config_rgb(struct qpnp_led_data *led,
struct device_node *node)
{
@@ -1433,11 +1771,11 @@
rc = of_property_read_u32(node, "qcom,pwm-channel", &val);
if (!rc)
- led->rgb_cfg->pwm_channel = (u8) val;
+ led->rgb_cfg->pwm_channel = val;
else
return rc;
- if (led->rgb_cfg->mode == RGB_MODE_PWM) {
+ if (led->rgb_cfg->mode == PWM_MODE) {
rc = of_property_read_u32(node, "qcom,pwm-us", &val);
if (!rc)
led->rgb_cfg->pwm_period_us = val;
@@ -1445,7 +1783,7 @@
return rc;
}
- if (led->rgb_cfg->mode == RGB_MODE_LPG) {
+ if (led->rgb_cfg->mode == LPG_MODE) {
led->rgb_cfg->duty_cycles =
devm_kzalloc(&led->spmi_dev->dev,
sizeof(struct pwm_duty_cycles), GFP_KERNEL);
@@ -1495,22 +1833,22 @@
rc = of_property_read_u32(node, "qcom,start-idx", &val);
if (!rc) {
- led->rgb_cfg->lut_params.start_idx = (u8) val;
- led->rgb_cfg->duty_cycles->start_idx = (u8) val;
+ led->rgb_cfg->lut_params.start_idx = val;
+ led->rgb_cfg->duty_cycles->start_idx = val;
} else
return rc;
led->rgb_cfg->lut_params.lut_pause_hi = 0;
rc = of_property_read_u32(node, "qcom,pause-hi", &val);
if (!rc)
- led->rgb_cfg->lut_params.lut_pause_hi = (u8) val;
+ led->rgb_cfg->lut_params.lut_pause_hi = val;
else if (rc != -EINVAL)
return rc;
led->rgb_cfg->lut_params.lut_pause_lo = 0;
rc = of_property_read_u32(node, "qcom,pause-lo", &val);
if (!rc)
- led->rgb_cfg->lut_params.lut_pause_lo = (u8) val;
+ led->rgb_cfg->lut_params.lut_pause_lo = val;
else if (rc != -EINVAL)
return rc;
@@ -1518,14 +1856,14 @@
QPNP_LUT_RAMP_STEP_DEFAULT;
rc = of_property_read_u32(node, "qcom,ramp-step-ms", &val);
if (!rc)
- led->rgb_cfg->lut_params.ramp_step_ms = (u8) val;
+ led->rgb_cfg->lut_params.ramp_step_ms = val;
else if (rc != -EINVAL)
return rc;
led->rgb_cfg->lut_params.flags = QPNP_LED_PWM_FLAGS;
rc = of_property_read_u32(node, "qcom,lut-flags", &val);
if (!rc)
- led->rgb_cfg->lut_params.flags = (u8) val;
+ led->rgb_cfg->lut_params.flags = val;
else if (rc != -EINVAL)
return rc;
@@ -1536,6 +1874,43 @@
return 0;
}
+static int __devinit qpnp_get_config_mpp(struct qpnp_led_data *led,
+ struct device_node *node)
+{
+ int rc;
+ u32 val;
+
+ led->mpp_cfg = devm_kzalloc(&led->spmi_dev->dev,
+ sizeof(struct mpp_config_data), GFP_KERNEL);
+ if (!led->mpp_cfg) {
+ dev_err(&led->spmi_dev->dev, "Unable to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ led->mpp_cfg->current_setting = LED_MPP_CURRENT_DEFAULT;
+ rc = of_property_read_u32(node, "qcom,current-setting", &val);
+ if (!rc)
+ led->mpp_cfg->current_setting = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ led->mpp_cfg->source_sel = LED_MPP_SOURCE_SEL_DEFAULT;
+ rc = of_property_read_u32(node, "qcom,source-sel", &val);
+ if (!rc)
+ led->mpp_cfg->source_sel = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ led->mpp_cfg->mode_ctrl = LED_MPP_MODE_SINK;
+ rc = of_property_read_u32(node, "qcom,mode-ctrl", &val);
+ if (!rc)
+ led->mpp_cfg->mode_ctrl = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ return 0;
+}
+
static int __devinit qpnp_leds_probe(struct spmi_device *spmi)
{
struct qpnp_led_data *led, *led_array;
@@ -1638,6 +2013,19 @@
"Unable to read rgb config data\n");
goto fail_id_check;
}
+ } else if (strncmp(led_label, "mpp", sizeof("mpp")) == 0) {
+ rc = qpnp_get_config_mpp(led, temp);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev,
+ "Unable to read mpp config data\n");
+ }
+ } else if (strncmp(led_label, "kpdbl", sizeof("kpdbl")) == 0) {
+ rc = qpnp_get_config_kpdbl(led, temp);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev,
+ "Unable to read kpdbl config data\n");
+ goto fail_id_check;
+ }
} else {
dev_err(&led->spmi_dev->dev, "No LED matching label\n");
rc = -EINVAL;
diff --git a/drivers/media/dvb/dvb-core/demux.h b/drivers/media/dvb/dvb-core/demux.h
index f0b9b05..fcade49 100644
--- a/drivers/media/dvb/dvb-core/demux.h
+++ b/drivers/media/dvb/dvb-core/demux.h
@@ -127,6 +127,7 @@
u32 cont_err_counter;
u32 ts_packets_num;
u32 ts_dropped_bytes;
+ u64 stc;
} buf;
struct {
@@ -245,7 +246,10 @@
struct dmx_secure_mode *sec_mode);
int (*oob_command) (struct dmx_ts_feed *feed,
struct dmx_oob_command *cmd);
-
+ int (*ts_insertion_init)(struct dmx_ts_feed *feed);
+ int (*ts_insertion_terminate)(struct dmx_ts_feed *feed);
+ int (*ts_insertion_insert_buffer)(struct dmx_ts_feed *feed,
+ char *data, size_t size);
};
/*--------------------------------------------------------------------------*/
diff --git a/drivers/media/dvb/dvb-core/dmxdev.c b/drivers/media/dvb/dvb-core/dmxdev.c
index ca71c06..7347b37 100644
--- a/drivers/media/dvb/dvb-core/dmxdev.c
+++ b/drivers/media/dvb/dvb-core/dmxdev.c
@@ -34,6 +34,8 @@
#include <linux/uaccess.h>
#include <linux/debugfs.h>
#include <linux/seq_file.h>
+#include <linux/timer.h>
+#include <linux/jiffies.h>
#include "dmxdev.h"
static int debug;
@@ -416,7 +418,33 @@
return 0;
}
-static ssize_t dvb_dmxdev_buffer_read(struct dvb_ringbuffer *src,
+static inline int dvb_dmxdev_check_data(struct dmxdev_filter *filter,
+ struct dvb_ringbuffer *src)
+{
+ int data_status_change;
+
+ if (filter)
+ if (mutex_lock_interruptible(&filter->mutex))
+ return -ERESTARTSYS;
+
+ if (!src->data ||
+ !dvb_ringbuffer_empty(src) ||
+ src->error ||
+ (filter &&
+ (filter->state != DMXDEV_STATE_GO) &&
+ (filter->state != DMXDEV_STATE_DONE)))
+ data_status_change = 1;
+ else
+ data_status_change = 0;
+
+ if (filter)
+ mutex_unlock(&filter->mutex);
+
+ return data_status_change;
+}
+
+static ssize_t dvb_dmxdev_buffer_read(struct dmxdev_filter *filter,
+ struct dvb_ringbuffer *src,
int non_blocking, char __user *buf,
size_t count, loff_t *ppos)
{
@@ -439,9 +467,26 @@
break;
}
- ret = wait_event_interruptible(src->queue, (!src->data) ||
- !dvb_ringbuffer_empty(src) ||
- (src->error != 0));
+ if (filter) {
+ if ((filter->state == DMXDEV_STATE_DONE) &&
+ dvb_ringbuffer_empty(src))
+ break;
+
+ mutex_unlock(&filter->mutex);
+ }
+
+ ret = wait_event_interruptible(src->queue,
+ dvb_dmxdev_check_data(filter, src));
+
+ if (filter) {
+ if (mutex_lock_interruptible(&filter->mutex))
+ return -ERESTARTSYS;
+
+ if ((filter->state != DMXDEV_STATE_GO) &&
+ (filter->state != DMXDEV_STATE_DONE))
+ return -ENODEV;
+ }
+
if (ret < 0)
break;
@@ -1108,9 +1153,9 @@
if (dmxdev->exit)
return -ENODEV;
- res = dvb_dmxdev_buffer_read(&dmxdev->dvr_buffer,
- file->f_flags & O_NONBLOCK,
- buf, count, ppos);
+ res = dvb_dmxdev_buffer_read(NULL, &dmxdev->dvr_buffer,
+ file->f_flags & O_NONBLOCK,
+ buf, count, ppos);
if (res > 0) {
dvb_dmxdev_notify_data_read(dmxdev->dvr_feed, res);
@@ -1751,6 +1796,213 @@
return 0;
}
+static void dvb_dmxdev_ts_insertion_timer(unsigned long data)
+{
+ struct ts_insertion_buffer *ts_buffer =
+ (struct ts_insertion_buffer *)data;
+
+ if (ts_buffer && !ts_buffer->abort)
+ schedule_work(&ts_buffer->work);
+}
+
+static void dvb_dmxdev_ts_insertion_work(struct work_struct *worker)
+{
+ struct ts_insertion_buffer *ts_buffer =
+ container_of(worker, struct ts_insertion_buffer, work);
+ struct dmxdev_feed *feed;
+ size_t free_bytes;
+ struct dmx_ts_feed *ts;
+
+ mutex_lock(&ts_buffer->dmxdevfilter->mutex);
+
+ if (ts_buffer->abort ||
+ (ts_buffer->dmxdevfilter->state != DMXDEV_STATE_GO)) {
+ mutex_unlock(&ts_buffer->dmxdevfilter->mutex);
+ return;
+ }
+
+ feed = list_first_entry(&ts_buffer->dmxdevfilter->feed.ts,
+ struct dmxdev_feed, next);
+ ts = feed->ts;
+ free_bytes = dvb_ringbuffer_free(&ts_buffer->dmxdevfilter->buffer);
+
+ mutex_unlock(&ts_buffer->dmxdevfilter->mutex);
+
+ if (ts_buffer->size < free_bytes)
+ ts->ts_insertion_insert_buffer(ts,
+ ts_buffer->buffer, ts_buffer->size);
+
+ if (ts_buffer->repetition_time)
+ mod_timer(&ts_buffer->timer, jiffies +
+ msecs_to_jiffies(ts_buffer->repetition_time));
+}
+
+static void dvb_dmxdev_queue_ts_insertion(
+ struct ts_insertion_buffer *ts_buffer)
+{
+ size_t tsp_size;
+
+ if (ts_buffer->dmxdevfilter->dmx_tsp_format == DMX_TSP_FORMAT_188)
+ tsp_size = 188;
+ else
+ tsp_size = 192;
+
+ if (ts_buffer->size % tsp_size) {
+ printk(KERN_ERR "%s: Wrong buffer alignment, size=%d, tsp_size=%d\n",
+ __func__, ts_buffer->size, tsp_size);
+ return;
+ }
+
+ ts_buffer->abort = 0;
+ schedule_work(&ts_buffer->work);
+}
+
+static void dvb_dmxdev_cancel_ts_insertion(
+ struct ts_insertion_buffer *ts_buffer)
+{
+ /*
+ * This function assumes it is called with the demux filter's
+ * mutex held. Since the work item also takes the filter's
+ * mutex, the mutex must be released before waiting for the
+ * work to finish; otherwise the work could never complete and
+ * we would deadlock.
+ */
+ if (!mutex_is_locked(&ts_buffer->dmxdevfilter->mutex)) {
+ printk(KERN_ERR "%s: mutex is not locked!\n", __func__);
+ return;
+ }
+
+ /*
+ * Stop the work first: while it runs it may re-arm the timer,
+ * but with the abort flag set the timer callback will not
+ * re-schedule the work, so the timer can then be deleted safely.
+ */
+ ts_buffer->abort = 1;
+
+ mutex_unlock(&ts_buffer->dmxdevfilter->mutex);
+ cancel_work_sync(&ts_buffer->work);
+ del_timer_sync(&ts_buffer->timer);
+ mutex_lock(&ts_buffer->dmxdevfilter->mutex);
+}
+
+static int dvb_dmxdev_set_ts_insertion(struct dmxdev_filter *dmxdevfilter,
+ struct dmx_set_ts_insertion *params)
+{
+ int ret = 0;
+ int first_buffer;
+ struct dmxdev_feed *feed;
+ struct ts_insertion_buffer *ts_buffer;
+
+ if (!params ||
+ !params->size ||
+ !(dmxdevfilter->dev->capabilities & DMXDEV_CAP_TS_INSERTION) ||
+ (dmxdevfilter->state < DMXDEV_STATE_SET) ||
+ (dmxdevfilter->type != DMXDEV_TYPE_PES) ||
+ ((dmxdevfilter->params.pes.output != DMX_OUT_TS_TAP) &&
+ (dmxdevfilter->params.pes.output != DMX_OUT_TSDEMUX_TAP)))
+ return -EINVAL;
+
+ ts_buffer = vmalloc(sizeof(struct ts_insertion_buffer));
+ if (!ts_buffer)
+ return -ENOMEM;
+
+ ts_buffer->buffer = vmalloc(params->size);
+ if (!ts_buffer->buffer) {
+ vfree(ts_buffer);
+ return -ENOMEM;
+ }
+
+ if (copy_from_user(ts_buffer->buffer,
+ params->ts_packets, params->size)) {
+ vfree(ts_buffer->buffer);
+ vfree(ts_buffer);
+ return -EFAULT;
+ }
+
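+ /* Enforce the minimum allowed repetition period */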
+ if (params->repetition_time &&
+ params->repetition_time < DMX_MIN_INSERTION_REPETITION_TIME)
+ params->repetition_time = DMX_MIN_INSERTION_REPETITION_TIME;
+
+ ts_buffer->size = params->size;
+ ts_buffer->identifier = params->identifier;
+ ts_buffer->repetition_time = params->repetition_time;
+ ts_buffer->dmxdevfilter = dmxdevfilter;
+ init_timer(&ts_buffer->timer);
+ ts_buffer->timer.function = dvb_dmxdev_ts_insertion_timer;
+ ts_buffer->timer.data = (unsigned long)ts_buffer;
+ ts_buffer->timer.expires = 0xffffffffL;
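+ /* The timer is armed later via mod_timer() from the insertion work */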
+ INIT_WORK(&ts_buffer->work, dvb_dmxdev_ts_insertion_work);
+
+ first_buffer = list_empty(&dmxdevfilter->insertion_buffers);
+ list_add_tail(&ts_buffer->next, &dmxdevfilter->insertion_buffers);
+
+ if (dmxdevfilter->state != DMXDEV_STATE_GO)
+ return 0;
+
+ feed = list_first_entry(&dmxdevfilter->feed.ts,
+ struct dmxdev_feed, next);
+
+ if (first_buffer && feed->ts->ts_insertion_init)
+ ret = feed->ts->ts_insertion_init(feed->ts);
+
+ if (!ret) {
+ dvb_dmxdev_queue_ts_insertion(ts_buffer);
+ } else {
+ list_del(&ts_buffer->next);
+ vfree(ts_buffer->buffer);
+ vfree(ts_buffer);
+ }
+
+ return ret;
+}
+
+static int dvb_dmxdev_abort_ts_insertion(struct dmxdev_filter *dmxdevfilter,
+ struct dmx_abort_ts_insertion *params)
+{
+ int ret = 0;
+ int found_buffer;
+ struct dmxdev_feed *feed;
+ struct ts_insertion_buffer *ts_buffer, *tmp;
+
+ if (!params ||
+ !(dmxdevfilter->dev->capabilities & DMXDEV_CAP_TS_INSERTION) ||
+ (dmxdevfilter->state < DMXDEV_STATE_SET) ||
+ (dmxdevfilter->type != DMXDEV_TYPE_PES) ||
+ ((dmxdevfilter->params.pes.output != DMX_OUT_TS_TAP) &&
+ (dmxdevfilter->params.pes.output != DMX_OUT_TSDEMUX_TAP)))
+ return -EINVAL;
+
+ found_buffer = 0;
+ list_for_each_entry_safe(ts_buffer, tmp,
+ &dmxdevfilter->insertion_buffers, next) {
+ if (ts_buffer->identifier == params->identifier) {
+ list_del(&ts_buffer->next);
+ found_buffer = 1;
+ break;
+ }
+ }
+
+ if (!found_buffer)
+ return -EINVAL;
+
+ if (dmxdevfilter->state == DMXDEV_STATE_GO) {
+ dvb_dmxdev_cancel_ts_insertion(ts_buffer);
+ if (list_empty(&dmxdevfilter->insertion_buffers)) {
+ feed = list_first_entry(&dmxdevfilter->feed.ts,
+ struct dmxdev_feed, next);
+ if (feed->ts->ts_insertion_terminate)
+ ret = feed->ts->ts_insertion_terminate(
+ feed->ts);
+ }
+ }
+
+ vfree(ts_buffer->buffer);
+ vfree(ts_buffer);
+
+ return ret;
+}
+
static int dvb_dmxdev_ts_fullness_callback(struct dmx_ts_feed *filter,
int required_space)
{
@@ -2006,6 +2258,9 @@
dvb_ringbuffer_flush(&dmxdevfilter->buffer);
dvb_dmxdev_notify_data_read(dmxdevfilter, flush_len);
dmxdevfilter->buffer.error = 0;
+ } else if (event->type == DMX_EVENT_SECTION_TIMEOUT) {
+ /* clear buffer error now that user was notified */
+ dmxdevfilter->buffer.error = 0;
}
/*
@@ -2036,10 +2291,13 @@
static void dvb_dmxdev_filter_timeout(unsigned long data)
{
struct dmxdev_filter *dmxdevfilter = (struct dmxdev_filter *)data;
+ struct dmx_filter_event event;
dmxdevfilter->buffer.error = -ETIMEDOUT;
spin_lock_irq(&dmxdevfilter->dev->lock);
dmxdevfilter->state = DMXDEV_STATE_TIMEDOUT;
+ event.type = DMX_EVENT_SECTION_TIMEOUT;
+ dvb_dmxdev_add_event(&dmxdevfilter->events, &event);
spin_unlock_irq(&dmxdevfilter->dev->lock);
wake_up_all(&dmxdevfilter->buffer.queue);
}
@@ -2406,6 +2664,7 @@
event.params.es_data.pts = dmx_data_ready->buf.pts;
event.params.es_data.dts_valid = dmx_data_ready->buf.dts_exists;
event.params.es_data.dts = dmx_data_ready->buf.dts;
+ event.params.es_data.stc = dmx_data_ready->buf.stc;
event.params.es_data.transport_error_indicator_counter =
dmx_data_ready->buf.tei_counter;
event.params.es_data.continuity_error_counter =
@@ -2611,6 +2870,7 @@
{
struct dmxdev_feed *feed;
struct dmx_demux *demux;
+ struct ts_insertion_buffer *ts_buffer;
if (dmxdevfilter->state < DMXDEV_STATE_GO)
return 0;
@@ -2630,6 +2890,18 @@
case DMXDEV_TYPE_PES:
dvb_dmxdev_feed_stop(dmxdevfilter);
demux = dmxdevfilter->dev->demux;
+
+ if (!list_empty(&dmxdevfilter->insertion_buffers)) {
+ feed = list_first_entry(&dmxdevfilter->feed.ts,
+ struct dmxdev_feed, next);
+
+ list_for_each_entry(ts_buffer,
+ &dmxdevfilter->insertion_buffers, next)
+ dvb_dmxdev_cancel_ts_insertion(ts_buffer);
+ if (feed->ts->ts_insertion_terminate)
+ feed->ts->ts_insertion_terminate(feed->ts);
+ }
+
list_for_each_entry(feed, &dmxdevfilter->feed.ts, next) {
demux->release_ts_feed(demux, feed->ts);
feed->ts = NULL;
@@ -2970,6 +3242,29 @@
}
dvb_dmxdev_filter_state_set(filter, DMXDEV_STATE_GO);
+
+ if ((filter->type == DMXDEV_TYPE_PES) &&
+ !list_empty(&filter->insertion_buffers)) {
+ struct ts_insertion_buffer *ts_buffer;
+
+ feed = list_first_entry(&filter->feed.ts,
+ struct dmxdev_feed, next);
+
+ ret = 0;
+ if (feed->ts->ts_insertion_init)
+ ret = feed->ts->ts_insertion_init(feed->ts);
+ if (!ret) {
+ list_for_each_entry(ts_buffer,
+ &filter->insertion_buffers, next)
+ dvb_dmxdev_queue_ts_insertion(
+ ts_buffer);
+ } else {
+ printk(KERN_ERR
+ "%s: ts_insertion_init failed, err %d\n",
+ __func__, ret);
+ }
+ }
+
return 0;
}
@@ -3016,6 +3311,8 @@
dvb_dmxdev_filter_state_set(dmxdevfilter, DMXDEV_STATE_ALLOCATED);
init_timer(&dmxdevfilter->timer);
+ INIT_LIST_HEAD(&dmxdevfilter->insertion_buffers);
+
dmxdevfilter->dmx_tsp_format = DMX_TSP_FORMAT_188;
dvbdev->users++;
@@ -3026,6 +3323,8 @@
static int dvb_dmxdev_filter_free(struct dmxdev *dmxdev,
struct dmxdev_filter *dmxdevfilter)
{
+ struct ts_insertion_buffer *ts_buffer, *tmp;
+
mutex_lock(&dmxdev->mutex);
mutex_lock(&dmxdevfilter->mutex);
@@ -3033,6 +3332,13 @@
dvb_dmxdev_filter_reset(dmxdevfilter);
+ list_for_each_entry_safe(ts_buffer, tmp,
+ &dmxdevfilter->insertion_buffers, next) {
+ list_del(&ts_buffer->next);
+ vfree(ts_buffer->buffer);
+ vfree(ts_buffer);
+ }
+
if (dmxdevfilter->buffer.data) {
void *mem = dmxdevfilter->buffer.data;
@@ -3097,12 +3403,20 @@
static int dvb_dmxdev_remove_pid(struct dmxdev *dmxdev,
struct dmxdev_filter *filter, u16 pid)
{
+ int feed_count;
struct dmxdev_feed *feed, *tmp;
if ((filter->type != DMXDEV_TYPE_PES) ||
(filter->state < DMXDEV_STATE_SET))
return -EINVAL;
+ feed_count = 0;
+ list_for_each_entry(tmp, &filter->feed.ts, next)
+ feed_count++;
+
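+ /* Do not allow removing the only remaining PID of the filter */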
+ if (feed_count <= 1)
+ return -EINVAL;
+
list_for_each_entry_safe(feed, tmp, &filter->feed.ts, next) {
if (feed->pid == pid) {
if (feed->ts != NULL) {
@@ -3282,7 +3596,7 @@
hcount = 3 + dfil->todo;
if (hcount > count)
hcount = count;
- result = dvb_dmxdev_buffer_read(&dfil->buffer,
+ result = dvb_dmxdev_buffer_read(dfil, &dfil->buffer,
file->f_flags & O_NONBLOCK,
buf, hcount, ppos);
if (result < 0) {
@@ -3303,7 +3617,7 @@
}
if (count > dfil->todo)
count = dfil->todo;
- result = dvb_dmxdev_buffer_read(&dfil->buffer,
+ result = dvb_dmxdev_buffer_read(dfil, &dfil->buffer,
file->f_flags & O_NONBLOCK,
buf, count, ppos);
if (result < 0)
@@ -3332,9 +3646,10 @@
if (dmxdevfilter->type == DMXDEV_TYPE_SEC)
ret = dvb_dmxdev_read_sec(dmxdevfilter, file, buf, count, ppos);
else
- ret = dvb_dmxdev_buffer_read(&dmxdevfilter->buffer,
- file->f_flags & O_NONBLOCK,
- buf, count, ppos);
+ ret = dvb_dmxdev_buffer_read(dmxdevfilter,
+ &dmxdevfilter->buffer,
+ file->f_flags & O_NONBLOCK,
+ buf, count, ppos);
if (ret > 0) {
dvb_dmxdev_notify_data_read(dmxdevfilter, ret);
@@ -3624,6 +3939,24 @@
mutex_unlock(&dmxdevfilter->mutex);
break;
+ case DMX_SET_TS_INSERTION:
+ if (mutex_lock_interruptible(&dmxdevfilter->mutex)) {
+ mutex_unlock(&dmxdev->mutex);
+ return -ERESTARTSYS;
+ }
+ ret = dvb_dmxdev_set_ts_insertion(dmxdevfilter, parg);
+ mutex_unlock(&dmxdevfilter->mutex);
+ break;
+
+ case DMX_ABORT_TS_INSERTION:
+ if (mutex_lock_interruptible(&dmxdevfilter->mutex)) {
+ mutex_unlock(&dmxdev->mutex);
+ return -ERESTARTSYS;
+ }
+ ret = dvb_dmxdev_abort_ts_insertion(dmxdevfilter, parg);
+ mutex_unlock(&dmxdevfilter->mutex);
+ break;
+
default:
ret = -EINVAL;
break;
diff --git a/drivers/media/dvb/dvb-core/dmxdev.h b/drivers/media/dvb/dvb-core/dmxdev.h
index 2ed99ae..d8cd982 100644
--- a/drivers/media/dvb/dvb-core/dmxdev.h
+++ b/drivers/media/dvb/dvb-core/dmxdev.h
@@ -114,6 +114,35 @@
struct dmx_filter_event queue[DMX_EVENT_QUEUE_SIZE];
};
+#define DMX_MIN_INSERTION_REPETITION_TIME 25 /* in msec */
+struct ts_insertion_buffer {
+ /* work scheduled for insertion of this buffer */
+ struct work_struct work;
+
+ struct list_head next;
+
+ /* buffer holding TS packets for insertion */
+ char *buffer;
+
+ /* buffer size */
+ size_t size;
+
+ /* buffer ID from user */
+ u32 identifier;
+
+ /* repetition time for the buffer insertion */
+ u32 repetition_time;
+
+ /* timer used for insertion of the buffer */
+ struct timer_list timer;
+
+ /* the recording filter to which this buffer belongs */
+ struct dmxdev_filter *dmxdevfilter;
+
+ /* indication whether insertion should be aborted */
+ int abort;
+};
+
struct dmxdev_filter {
union {
struct dmx_section_filter *sec;
@@ -145,6 +174,9 @@
enum dmx_tsp_format_t dmx_tsp_format;
u32 rec_chunk_size;
+ /* list of buffers used for insertion (struct ts_insertion_buffer) */
+ struct list_head insertion_buffers;
+
/* End-of-stream indication has been received */
int eos_state;
@@ -170,6 +202,8 @@
#define DMXDEV_CAP_PULL_MODE 0x02
#define DMXDEV_CAP_INDEXING 0x04
#define DMXDEV_CAP_EXTERNAL_BUFFS_ONLY 0x08
+#define DMXDEV_CAP_TS_INSERTION 0x10
+
enum dmx_playback_mode_t playback_mode;
dmx_source_t source;
diff --git a/drivers/media/dvb/dvb-core/dvb_demux.c b/drivers/media/dvb/dvb-core/dvb_demux.c
index 3f3d222..939d591 100644
--- a/drivers/media/dvb/dvb-core/dvb_demux.c
+++ b/drivers/media/dvb/dvb-core/dvb_demux.c
@@ -2331,6 +2331,25 @@
return ret;
}
+static int dvbdmx_ts_insertion_insert_buffer(struct dmx_ts_feed *ts_feed,
+ char *data, size_t size)
+{
+ struct dvb_demux_feed *feed = (struct dvb_demux_feed *)ts_feed;
+ struct dvb_demux *demux = feed->demux;
+
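+ /* Feed the inserted packets through the regular TS callback, but only while the feed is filtering */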
+ spin_lock(&demux->lock);
+ if (!ts_feed->is_filtering) {
+ spin_unlock(&demux->lock);
+ return 0;
+ }
+
+ feed->cb.ts(data, size, NULL, 0, ts_feed, DMX_OK);
+
+ spin_unlock(&demux->lock);
+
+ return 0;
+}
+
static int dmx_ts_set_tsp_out_format(
struct dmx_ts_feed *ts_feed,
enum dmx_tsp_format_t tsp_format)
@@ -2401,6 +2420,10 @@
(*ts_feed)->notify_data_read = NULL;
(*ts_feed)->set_secure_mode = dmx_ts_set_secure_mode;
(*ts_feed)->oob_command = dvbdmx_ts_feed_oob_cmd;
+ (*ts_feed)->ts_insertion_init = NULL;
+ (*ts_feed)->ts_insertion_terminate = NULL;
+ (*ts_feed)->ts_insertion_insert_buffer =
+ dvbdmx_ts_insertion_insert_buffer;
if (!(feed->filter = dvb_dmx_filter_alloc(demux))) {
feed->state = DMX_STATE_FREE;
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
index 84f7307..9370fc9 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -142,7 +142,7 @@
buf_p = msm_gemini_hw_pingpong_active_buffer(&we_pingpong_buf);
if (buf_p) {
buf_p->framedone_len = msm_gemini_hw_encode_output_size();
- GMN_DBG("%s:%d] framedone_len %d\n", __func__, __LINE__,
+ pr_debug("%s:%d] framedone_len %d\n", __func__, __LINE__,
buf_p->framedone_len);
}
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
index 0cbb101..79c533e 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010,2013 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -297,6 +297,8 @@
struct msm_gemini_hw_cmd *hw_cmd_p;
+ pr_debug("%s:%d] pingpong index %d", __func__, __LINE__,
+ pingpong_index);
if (pingpong_index == 0) {
hw_cmd_p = &hw_cmd_we_ping_update[0];
@@ -486,40 +488,38 @@
return is_copy_to_user;
}
-void msm_gemini_hw_region_dump(int size)
+#ifdef MSM_GMN_DBG_DUMP
+void msm_gemini_io_dump(int size)
{
- uint32_t *p;
- uint8_t *p8;
-
- if (size > gemini_region_size)
- GMN_PR_ERR("%s:%d] wrong region dump size\n",
- __func__, __LINE__);
-
- p = (uint32_t *) gemini_region_base;
- while (size >= 16) {
- GMN_DBG("0x%08X] %08X %08X %08X %08X\n",
- gemini_region_size - size,
- readl(p), readl(p+1), readl(p+2), readl(p+3));
- p += 4;
- size -= 16;
- }
-
- if (size > 0) {
- uint32_t d;
- GMN_DBG("0x%08X] ", gemini_region_size - size);
- while (size >= 4) {
- GMN_DBG("%08X ", readl(p++));
- size -= 4;
+ char line_str[128], *p_str;
+ void __iomem *addr = gemini_region_base;
+ int i;
+ u32 *p = (u32 *) addr;
+ u32 data;
+ pr_info("%s: %p %d reg_size %d\n", __func__, addr, size,
+ gemini_region_size);
+ line_str[0] = '\0';
+ p_str = line_str;
+ for (i = 0; i < size/4; i++) {
+ if (i % 4 == 0) {
+ snprintf(p_str, 12, "%08x: ", (u32) p);
+ p_str += 10;
}
-
- d = readl(p);
- p8 = (uint8_t *) &d;
- while (size) {
- GMN_DBG("%02X", *p8++);
- size--;
+ data = readl_relaxed(p++);
+ snprintf(p_str, 12, "%08x ", data);
+ p_str += 9;
+ if ((i + 1) % 4 == 0) {
+ pr_info("%s\n", line_str);
+ line_str[0] = '\0';
+ p_str = line_str;
}
-
- GMN_DBG("\n");
}
+ if (line_str[0] != '\0')
+ pr_info("%s\n", line_str);
}
+#else
+void msm_gemini_io_dump(int size)
+{
+}
+#endif
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
index 1c8de19..aa6c4aa1 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -95,7 +95,7 @@
int msm_gemini_hw_wait(struct msm_gemini_hw_cmd *hw_cmd_p, int m_us);
void msm_gemini_hw_delay(struct msm_gemini_hw_cmd *hw_cmd_p, int m_us);
int msm_gemini_hw_exec_cmds(struct msm_gemini_hw_cmd *hw_cmd_p, int m_cmds);
-void msm_gemini_hw_region_dump(int size);
+void msm_gemini_io_dump(int size);
#define MSM_GEMINI_PIPELINE_CLK_128MHZ 128 /* 8MP 128MHz */
#define MSM_GEMINI_PIPELINE_CLK_140MHZ 140 /* 9MP 140MHz */
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
index ea13d68..2fe6038 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010, 2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -171,6 +171,6 @@
#define HWIO_JPEG_IRQ_STATUS_RMSK 0xffffffff
#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_ADDR (GEMINI_REG_BASE + 0x00000034)
-#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_RMSK 0xffffff
+#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_RMSK 0xffffffff
#endif /* MSM_GEMINI_HW_REG_H */
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
index ed2222a..50c7284 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -16,6 +16,8 @@
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <media/msm_gemini.h>
+#include <mach/msm_bus.h>
+#include <mach/msm_bus_board.h>
#include "msm_gemini_sync.h"
#include "msm_gemini_core.h"
#include "msm_gemini_platform.h"
@@ -23,6 +25,9 @@
static int release_buf;
+/* size is based on 4k page size */
+static const int g_max_out_size = 0x7ff000;
+
/*************** queue helper ****************/
inline void msm_gemini_q_init(char const *name, struct msm_gemini_q *q_p)
{
@@ -180,7 +185,7 @@
{
int rc = 0;
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
+ pr_debug("%s:%d] buf_in %p", __func__, __LINE__, buf_in);
if (buf_in) {
buf_in->vbuf.framedone_len = buf_in->framedone_len;
@@ -266,19 +271,88 @@
/*************** output queue ****************/
+int msm_gemini_get_out_buffer(struct msm_gemini_device *pgmn_dev,
+ struct msm_gemini_hw_buf *p_outbuf)
+{
+ int buf_size = 0;
+ int bytes_remaining = 0;
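+ /* Carve the next fragment out of the single user-supplied output buffer */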
+ if (pgmn_dev->out_offset >= pgmn_dev->out_buf.y_len) {
+ GMN_PR_ERR("%s:%d] no more buffers", __func__, __LINE__);
+ return -EINVAL;
+ }
+ bytes_remaining = pgmn_dev->out_buf.y_len - pgmn_dev->out_offset;
+ buf_size = min(bytes_remaining, pgmn_dev->max_out_size);
+
+ pgmn_dev->out_frag_cnt++;
+ pr_debug("%s:%d] buf_size[%d] %d", __func__, __LINE__,
+ pgmn_dev->out_frag_cnt, buf_size);
+ p_outbuf->y_len = buf_size;
+ p_outbuf->y_buffer_addr = pgmn_dev->out_buf.y_buffer_addr +
+ pgmn_dev->out_offset;
+ pgmn_dev->out_offset += buf_size;
+ return 0;
+}
+
+int msm_gemini_outmode_single_we_pingpong_irq(
+ struct msm_gemini_device *pgmn_dev,
+ struct msm_gemini_core_buf *buf_in)
+{
+ int rc = 0;
+ struct msm_gemini_core_buf out_buf;
+ int frame_done = buf_in &&
+ buf_in->vbuf.type == MSM_GEMINI_EVT_FRAMEDONE;
+ pr_debug("%s:%d] framedone %d", __func__, __LINE__, frame_done);
+ if (!pgmn_dev->out_buf_set) {
+ pr_err("%s:%d] output buffer not set",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
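+ /* On frame done hand the output buffer back to the return queue; otherwise program the next fragment into the write engine */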
+ if (frame_done) {
+ /* send the buffer back */
+ pgmn_dev->out_buf.vbuf.framedone_len = buf_in->framedone_len;
+ pgmn_dev->out_buf.vbuf.type = MSM_GEMINI_EVT_FRAMEDONE;
+ rc = msm_gemini_q_in_buf(&pgmn_dev->output_rtn_q,
+ &pgmn_dev->out_buf);
+ if (rc) {
+ pr_err("%s:%d] cannot queue the output buffer",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ rc = msm_gemini_q_wakeup(&pgmn_dev->output_rtn_q);
+ /*
+ * reset the output buffer since the ownership is
+ * transferred to the rtn queue
+ */
+ if (!rc)
+ pgmn_dev->out_buf_set = 0;
+ } else {
+ /* configure ping/pong */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc)
+ msm_gemini_core_we_buf_reset(&out_buf);
+ else
+ msm_gemini_core_we_buf_update(&out_buf);
+ }
+ return rc;
+}
+
int msm_gemini_we_pingpong_irq(struct msm_gemini_device *pgmn_dev,
struct msm_gemini_core_buf *buf_in)
{
int rc = 0;
struct msm_gemini_core_buf *buf_out;
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
+ pr_debug("%s:%d] Enter mode %d", __func__, __LINE__,
+ pgmn_dev->out_mode);
+
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_SINGLE)
+ return msm_gemini_outmode_single_we_pingpong_irq(pgmn_dev,
+ buf_in);
+
if (buf_in) {
- GMN_DBG("%s:%d] 0x%08x %d\n", __func__, __LINE__,
+ pr_debug("%s:%d] 0x%08x %d\n", __func__, __LINE__,
(int) buf_in->y_buffer_addr, buf_in->y_len);
rc = msm_gemini_q_in_buf(&pgmn_dev->output_rtn_q, buf_in);
} else {
- GMN_DBG("%s:%d] no output return buffer\n", __func__,
+ pr_debug("%s:%d] no output return buffer\n", __func__,
__LINE__);
rc = -1;
return rc;
@@ -291,7 +365,7 @@
kfree(buf_out);
} else {
msm_gemini_core_we_buf_reset(buf_in);
- GMN_DBG("%s:%d] no output buffer\n", __func__, __LINE__);
+ pr_debug("%s:%d] no output buffer\n", __func__, __LINE__);
rc = -2;
}
@@ -339,6 +413,43 @@
return 0;
}
+int msm_gemini_set_output_buf(struct msm_gemini_device *pgmn_dev,
+ void __user *arg)
+{
+ struct msm_gemini_buf buf_cmd;
+
+ if (pgmn_dev->out_buf_set) {
+ pr_err("%s:%d] output buffer already provided",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&buf_cmd, arg, sizeof(struct msm_gemini_buf))) {
+ pr_err("%s:%d] failed\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+
+ GMN_DBG("%s:%d] output addr 0x%08x len %d", __func__, __LINE__,
+ (int) buf_cmd.vaddr,
+ buf_cmd.y_len);
+
+ pgmn_dev->out_buf.y_buffer_addr = msm_gemini_platform_v2p(
+ buf_cmd.fd,
+ buf_cmd.y_len,
+ &pgmn_dev->out_buf.file,
+ &pgmn_dev->out_buf.handle);
+ if (!pgmn_dev->out_buf.y_buffer_addr) {
+ pr_err("%s:%d] cannot map the output address",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ pgmn_dev->out_buf.y_len = buf_cmd.y_len;
+ pgmn_dev->out_buf.vbuf = buf_cmd;
+ pgmn_dev->out_buf_set = 1;
+
+ return 0;
+}
+
int msm_gemini_output_buf_enqueue(struct msm_gemini_device *pgmn_dev,
void __user *arg)
{
@@ -456,6 +567,7 @@
struct msm_gemini_core_buf *buf_p;
struct msm_gemini_buf buf_cmd;
int rc = 0;
+ struct msm_bus_scale_pdata *p_bus_scale_data = NULL;
if (copy_from_user(&buf_cmd, arg, sizeof(struct msm_gemini_buf))) {
GMN_PR_ERR("%s:%d] failed\n", __func__, __LINE__);
@@ -484,9 +596,9 @@
return rc;
}
} else {
- buf_p->y_buffer_addr = msm_gemini_platform_v2p(buf_cmd.fd,
- buf_cmd.y_len + buf_cmd.cbcr_len, &buf_p->file,
- &buf_p->handle) + buf_cmd.offset + buf_cmd.y_off;
+ buf_p->y_buffer_addr = msm_gemini_platform_v2p(buf_cmd.fd,
+ buf_cmd.y_len + buf_cmd.cbcr_len, &buf_p->file,
+ &buf_p->handle) + buf_cmd.offset + buf_cmd.y_off;
}
buf_p->y_len = buf_cmd.y_len;
@@ -504,6 +616,30 @@
return -1;
}
buf_p->vbuf = buf_cmd;
+ buf_p->vbuf.type = MSM_GEMINI_EVT_RESET;
+
+ /* Set bus vectors */
+ p_bus_scale_data = (struct msm_bus_scale_pdata *)
+ pgmn_dev->pdev->dev.platform_data;
+ if (pgmn_dev->bus_perf_client &&
+ (MSM_GMN_OUTMODE_SINGLE == pgmn_dev->out_mode)) {
+ int rc;
+ struct msm_bus_paths *path = &(p_bus_scale_data->usecase[1]);
+ GMN_DBG("%s:%d] Update bus bandwidth", __func__, __LINE__);
+ if (pgmn_dev->op_mode & MSM_GEMINI_MODE_OFFLINE_ENCODE) {
+ path->vectors[0].ab = (buf_p->y_len + buf_p->cbcr_len) *
+ 15 * 2;
+ path->vectors[0].ib = path->vectors[0].ab;
+ path->vectors[1].ab = 0;
+ path->vectors[1].ib = 0;
+ }
+ rc = msm_bus_scale_client_update_request(
+ pgmn_dev->bus_perf_client, 1);
+ if (rc < 0) {
+ GMN_PR_ERR("%s:%d] update_request fails %d",
+ __func__, __LINE__, rc);
+ }
+ }
msm_gemini_q_in(&pgmn_dev->input_buf_q, buf_p);
@@ -545,6 +681,9 @@
int __msm_gemini_open(struct msm_gemini_device *pgmn_dev)
{
int rc;
+ struct msm_bus_scale_pdata *p_bus_scale_data =
+ (struct msm_bus_scale_pdata *)pgmn_dev->pdev->dev.
+ platform_data;
mutex_lock(&pgmn_dev->lock);
if (pgmn_dev->open_count) {
@@ -576,7 +715,23 @@
msm_gemini_q_cleanup(&pgmn_dev->input_rtn_q);
msm_gemini_q_cleanup(&pgmn_dev->input_buf_q);
msm_gemini_core_init();
+ pgmn_dev->out_mode = MSM_GMN_OUTMODE_FRAGMENTED;
+ pgmn_dev->out_buf_set = 0;
+ pgmn_dev->out_offset = 0;
+ pgmn_dev->max_out_size = g_max_out_size;
+ pgmn_dev->out_frag_cnt = 0;
+ pgmn_dev->bus_perf_client = 0;
+ if (p_bus_scale_data) {
+ GMN_DBG("%s:%d] register bus client", __func__, __LINE__);
+ pgmn_dev->bus_perf_client =
+ msm_bus_scale_register_client(p_bus_scale_data);
+ if (!pgmn_dev->bus_perf_client) {
+ GMN_PR_ERR("%s:%d] bus client register failed",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+ }
GMN_DBG("%s:%d] success\n", __func__, __LINE__);
return rc;
}
@@ -593,13 +748,23 @@
pgmn_dev->open_count--;
mutex_unlock(&pgmn_dev->lock);
- msm_gemini_core_release(release_buf);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED) {
+ msm_gemini_core_release(release_buf);
+ } else if (pgmn_dev->out_buf_set) {
+ msm_gemini_platform_p2v(pgmn_dev->out_buf.file,
+ &pgmn_dev->out_buf.handle);
+ }
msm_gemini_q_cleanup(&pgmn_dev->evt_q);
msm_gemini_q_cleanup(&pgmn_dev->output_rtn_q);
msm_gemini_outbuf_q_cleanup(&pgmn_dev->output_buf_q);
msm_gemini_q_cleanup(&pgmn_dev->input_rtn_q);
msm_gemini_outbuf_q_cleanup(&pgmn_dev->input_buf_q);
+ if (pgmn_dev->bus_perf_client) {
+ msm_bus_scale_unregister_client(pgmn_dev->bus_perf_client);
+ pgmn_dev->bus_perf_client = 0;
+ }
+
if (pgmn_dev->open_count)
GMN_PR_ERR(KERN_ERR "%s: multiple opens\n", __func__);
@@ -699,29 +864,63 @@
}
}
- for (i = 0; i < 2; i++) {
- buf_out_free[i] = msm_gemini_q_out(&pgmn_dev->output_buf_q);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED) {
+ for (i = 0; i < 2; i++) {
+ buf_out_free[i] =
+ msm_gemini_q_out(&pgmn_dev->output_buf_q);
- if (buf_out_free[i]) {
- msm_gemini_core_we_buf_update(buf_out_free[i]);
- } else if (i == 1) {
- /* set the pong to same address as ping */
- buf_out_free[0]->y_len >>= 1;
- buf_out_free[0]->y_buffer_addr +=
- buf_out_free[0]->y_len;
- msm_gemini_core_we_buf_update(buf_out_free[0]);
- /* since ping and pong are same buf release only once*/
- release_buf = 0;
- } else {
- GMN_DBG("%s:%d] no output buffer\n",
- __func__, __LINE__);
- break;
+ if (buf_out_free[i]) {
+ msm_gemini_core_we_buf_update(buf_out_free[i]);
+ } else if (i == 1) {
+ /* set the pong to same address as ping */
+ buf_out_free[0]->y_len >>= 1;
+ buf_out_free[0]->y_buffer_addr +=
+ buf_out_free[0]->y_len;
+ msm_gemini_core_we_buf_update(buf_out_free[0]);
+ /*
+ * since ping and pong are same buf
+ * release only once
+ */
+ release_buf = 0;
+ } else {
+ GMN_DBG("%s:%d] no output buffer\n",
+ __func__, __LINE__);
+ break;
+ }
}
+ for (i = 0; i < 2; i++)
+ kfree(buf_out_free[i]);
+ } else {
+ struct msm_gemini_core_buf out_buf;
+ /*
+ * Since the same buffer is fragmented, p2v need not be
+ * called for all the buffers
+ */
+ release_buf = 0;
+ if (!pgmn_dev->out_buf_set) {
+ GMN_PR_ERR("%s:%d] output buffer not set",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ /* configure ping */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc) {
+ GMN_PR_ERR("%s:%d] no output buffer for ping",
+ __func__, __LINE__);
+ return rc;
+ }
+ msm_gemini_core_we_buf_update(&out_buf);
+ /* configure pong */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc) {
+ GMN_DBG("%s:%d] no output buffer for pong",
+ __func__, __LINE__);
+ /* fall through to configure same buffer */
+ }
+ msm_gemini_core_we_buf_update(&out_buf);
+ msm_gemini_io_dump(0x150);
}
- for (i = 0; i < 2; i++)
- kfree(buf_out_free[i]);
-
rc = msm_gemini_ioctl_hw_cmds(pgmn_dev, arg);
GMN_DBG("%s:%d]\n", __func__, __LINE__);
return rc;
@@ -746,12 +945,22 @@
return rc;
}
-int msm_gemini_ioctl_test_dump_region(struct msm_gemini_device *pgmn_dev,
- unsigned long arg)
+int msm_gemini_ioctl_set_outmode(struct msm_gemini_device *pgmn_dev,
+ void __user *arg)
{
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
- msm_gemini_hw_region_dump(arg);
- return 0;
+ int rc = 0;
+ enum msm_gmn_out_mode mode;
+
+ if (copy_from_user(&mode, arg, sizeof(mode))) {
+ GMN_PR_ERR("%s:%d] failed\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+ GMN_DBG("%s:%d] mode %d", __func__, __LINE__, mode);
+
+ if ((mode == MSM_GMN_OUTMODE_FRAGMENTED)
+ || (mode == MSM_GMN_OUTMODE_SINGLE))
+ pgmn_dev->out_mode = mode;
+ return rc;
}
long __msm_gemini_ioctl(struct msm_gemini_device *pgmn_dev,
@@ -790,8 +999,12 @@
break;
case MSM_GMN_IOCTL_OUTPUT_BUF_ENQUEUE:
- rc = msm_gemini_output_buf_enqueue(pgmn_dev,
- (void __user *) arg);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED)
+ rc = msm_gemini_output_buf_enqueue(pgmn_dev,
+ (void __user *) arg);
+ else
+ rc = msm_gemini_set_output_buf(pgmn_dev,
+ (void __user *) arg);
break;
case MSM_GMN_IOCTL_OUTPUT_GET:
@@ -818,8 +1031,8 @@
rc = msm_gemini_ioctl_hw_cmds(pgmn_dev, (void __user *) arg);
break;
- case MSM_GMN_IOCTL_TEST_DUMP_REGION:
- rc = msm_gemini_ioctl_test_dump_region(pgmn_dev, arg);
+ case MSM_GMN_IOCTL_SET_MODE:
+ rc = msm_gemini_ioctl_set_outmode(pgmn_dev, (void __user *)arg);
break;
default:
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
index d1a43e1..88e9615 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010,2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -74,6 +74,16 @@
struct msm_gemini_q input_buf_q;
struct v4l2_subdev subdev;
+ enum msm_gmn_out_mode out_mode;
+
+ /* single out mode parameters */
+ struct msm_gemini_hw_buf out_buf;
+ int out_offset;
+ int out_buf_set;
+ int max_out_size;
+ int out_frag_cnt;
+
+ uint32_t bus_perf_client;
};
int __msm_gemini_open(struct msm_gemini_device *pgmn_dev);
diff --git a/drivers/media/platform/msm/camera_v2/camera/camera.c b/drivers/media/platform/msm/camera_v2/camera/camera.c
index 71087d9..4579cee 100644
--- a/drivers/media/platform/msm/camera_v2/camera/camera.c
+++ b/drivers/media/platform/msm/camera_v2/camera/camera.c
@@ -595,13 +595,18 @@
struct v4l2_event event;
struct msm_video_device *pvdev = video_drvdata(filep);
struct camera_v4l2_private *sp = fh_to_private(filep->private_data);
-
BUG_ON(!pvdev);
atomic_sub_return(1, &pvdev->opened);
if (atomic_read(&pvdev->opened) == 0) {
+ camera_pack_event(filep, MSM_CAMERA_SET_PARM,
+ MSM_CAMERA_PRIV_DEL_STREAM, -1, &event);
+
+ /* Donot wait, imaging server may have crashed */
+ msm_post_event(&event, MSM_POST_EVT_TIMEOUT);
+
camera_pack_event(filep, MSM_CAMERA_DEL_SESSION, 0, -1, &event);
/* Donot wait, imaging server may have crashed */
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
index 22e8400..59858b5 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
@@ -256,6 +256,13 @@
buf_info->state ==
MSM_ISP_BUFFER_STATE_INITIALIZED)
continue;
+
+ if (!BUF_SRC(bufq->stream_id)) {
+ if (buf_info->state == MSM_ISP_BUFFER_STATE_DEQUEUED ||
+ buf_info->state == MSM_ISP_BUFFER_STATE_DIVERTED)
+ buf_mgr->vb2_ops->put_buf(buf_info->vb2_buf,
+ bufq->session_id, bufq->stream_id);
+ }
msm_isp_unprepare_v4l2_buf(buf_mgr, buf_info);
}
return 0;
@@ -281,16 +288,21 @@
list_for_each_entry(temp_buf_info,
&bufq->share_head, share_list) {
if (!temp_buf_info->buf_used[id]) {
- *buf_info = temp_buf_info;
temp_buf_info->buf_used[id] = 1;
temp_buf_info->buf_get_count++;
if (temp_buf_info->buf_get_count ==
bufq->buf_client_count)
list_del_init(
&temp_buf_info->share_list);
+ if (temp_buf_info->buf_reuse_flag) {
+ kfree(temp_buf_info);
+ } else {
+ *buf_info = temp_buf_info;
+ rc = 0;
+ }
spin_unlock_irqrestore(
&bufq->bufq_lock, flags);
- return 0;
+ return rc;
}
}
}
@@ -322,21 +334,31 @@
}
if (!(*buf_info)) {
- spin_unlock_irqrestore(&bufq->bufq_lock, flags);
- return rc;
- }
-
- (*buf_info)->state = MSM_ISP_BUFFER_STATE_DEQUEUED;
- if (bufq->buf_type == ISP_SHARE_BUF) {
- memset((*buf_info)->buf_used, 0,
- sizeof(uint8_t) * bufq->buf_client_count);
- (*buf_info)->buf_used[id] = 1;
- (*buf_info)->buf_get_count = 1;
- (*buf_info)->buf_put_count = 0;
- list_add_tail(&(*buf_info)->share_list, &bufq->share_head);
+ if (bufq->buf_type == ISP_SHARE_BUF) {
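+ /* No free buffer: leave a placeholder on the share list so the other client of the shared queue skips it as well */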
+ temp_buf_info = kzalloc(
+ sizeof(struct msm_isp_buffer), GFP_ATOMIC);
+ if (temp_buf_info) {
+ temp_buf_info->buf_reuse_flag = 1;
+ temp_buf_info->buf_used[id] = 1;
+ temp_buf_info->buf_get_count = 1;
+ list_add_tail(&temp_buf_info->share_list,
+ &bufq->share_head);
+ }
+ }
+ } else {
+ (*buf_info)->state = MSM_ISP_BUFFER_STATE_DEQUEUED;
+ if (bufq->buf_type == ISP_SHARE_BUF) {
+ memset((*buf_info)->buf_used, 0,
+ sizeof(uint8_t) * bufq->buf_client_count);
+ (*buf_info)->buf_used[id] = 1;
+ (*buf_info)->buf_get_count = 1;
+ (*buf_info)->buf_put_count = 0;
+ (*buf_info)->buf_reuse_flag = 0;
+ list_add_tail(&(*buf_info)->share_list,
+ &bufq->share_head);
+ }
+ rc = 0;
}
spin_unlock_irqrestore(&bufq->bufq_lock, flags);
- return 0;
+ return rc;
}
static int msm_isp_put_buf(struct msm_isp_buf_mgr *buf_mgr,
@@ -363,7 +385,6 @@
switch (buf_info->state) {
case MSM_ISP_BUFFER_STATE_PREPARED:
case MSM_ISP_BUFFER_STATE_DEQUEUED:
- case MSM_ISP_BUFFER_STATE_DISPATCHED:
case MSM_ISP_BUFFER_STATE_DIVERTED:
if (BUF_SRC(bufq->stream_id))
list_add_tail(&buf_info->list, &bufq->head);
@@ -373,6 +394,13 @@
buf_info->state = MSM_ISP_BUFFER_STATE_QUEUED;
rc = 0;
break;
+ case MSM_ISP_BUFFER_STATE_DISPATCHED:
+ buf_info->state = MSM_ISP_BUFFER_STATE_QUEUED;
+ rc = 0;
+ break;
+ case MSM_ISP_BUFFER_STATE_QUEUED:
+ rc = 0;
+ break;
default:
pr_err("%s: incorrect state = %d",
__func__, buf_info->state);
@@ -648,22 +676,35 @@
}
}
-static int msm_isp_attach_ctx(struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx)
+static void msm_isp_register_ctx(struct msm_isp_buf_mgr *buf_mgr,
+ struct device **iommu_ctx, int num_iommu_ctx)
{
- int rc;
- rc = iommu_attach_device(buf_mgr->iommu_domain, iommu_ctx);
- if (rc) {
- pr_err("%s: Iommu attach error\n", __func__);
- return -EINVAL;
+ int i;
+ buf_mgr->num_iommu_ctx = num_iommu_ctx;
+ for (i = 0; i < num_iommu_ctx; i++)
+ buf_mgr->iommu_ctx[i] = iommu_ctx[i];
+}
+
+static int msm_isp_attach_ctx(struct msm_isp_buf_mgr *buf_mgr)
+{
+ int rc, i;
+ for (i = 0; i < buf_mgr->num_iommu_ctx; i++) {
+ rc = iommu_attach_device(buf_mgr->iommu_domain,
+ buf_mgr->iommu_ctx[i]);
+ if (rc) {
+ pr_err("%s: Iommu attach error\n", __func__);
+ return -EINVAL;
+ }
}
return 0;
}
-static void msm_isp_detach_ctx(struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx)
+static void msm_isp_detach_ctx(struct msm_isp_buf_mgr *buf_mgr)
{
- iommu_detach_device(buf_mgr->iommu_domain, iommu_ctx);
+ int i;
+ for (i = 0; i < buf_mgr->num_iommu_ctx; i++)
+ iommu_detach_device(buf_mgr->iommu_domain,
+ buf_mgr->iommu_ctx[i]);
}
static int msm_isp_init_isp_buf_mgr(
@@ -680,6 +721,7 @@
}
CDBG("%s: E\n", __func__);
+ msm_isp_attach_ctx(buf_mgr);
buf_mgr->num_buf_q = num_buf_q;
buf_mgr->bufq =
kzalloc(sizeof(struct msm_isp_bufq) * num_buf_q,
@@ -704,6 +746,7 @@
ion_client_destroy(buf_mgr->client);
kfree(buf_mgr->bufq);
buf_mgr->num_buf_q = 0;
+ msm_isp_detach_ctx(buf_mgr);
return 0;
}
@@ -740,8 +783,7 @@
.flush_buf = msm_isp_flush_buf,
.buf_done = msm_isp_buf_done,
.buf_divert = msm_isp_buf_divert,
- .attach_ctx = msm_isp_attach_ctx,
- .detach_ctx = msm_isp_detach_ctx,
+ .register_ctx = msm_isp_register_ctx,
.buf_mgr_init = msm_isp_init_isp_buf_mgr,
.buf_mgr_deinit = msm_isp_deinit_isp_buf_mgr,
};
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
index d4e7c88..fda1a57 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
@@ -65,6 +65,7 @@
uint8_t buf_used[ISP_SHARE_BUF_CLIENT];
uint8_t buf_get_count;
uint8_t buf_put_count;
+ uint8_t buf_reuse_flag;
};
struct msm_isp_bufq {
@@ -111,10 +112,8 @@
int (*buf_divert) (struct msm_isp_buf_mgr *buf_mgr,
uint32_t bufq_handle, uint32_t buf_index,
struct timeval *tv, uint32_t frame_id);
- int (*attach_ctx) (struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx);
- void (*detach_ctx) (struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx);
+ void (*register_ctx) (struct msm_isp_buf_mgr *buf_mgr,
+ struct device **iommu_ctx, int num_iommu_ctx);
int (*buf_mgr_init) (struct msm_isp_buf_mgr *buf_mgr,
const char *ctx_name, uint16_t num_buf_q);
int (*buf_mgr_deinit) (struct msm_isp_buf_mgr *buf_mgr);
@@ -136,6 +135,9 @@
/*IOMMU specific*/
int iommu_domain_num;
struct iommu_domain *iommu_domain;
+
+ int num_iommu_ctx;
+ struct device *iommu_ctx[2];
};
int msm_isp_create_isp_buf_mgr(struct msm_isp_buf_mgr *buf_mgr,
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
index 447c752..b31b3f1 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
@@ -138,6 +138,8 @@
kfree(vfe_dev);
return -EINVAL;
}
+ vfe_dev->buf_mgr->ops->register_ctx(vfe_dev->buf_mgr,
+ &vfe_dev->iommu_ctx[0], vfe_dev->hw_info->num_iommu_ctx);
vfe_dev->vfe_open_cnt = 0;
end:
return rc;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
index 1b762ea..7bc2b7d 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
@@ -148,7 +148,8 @@
struct msm_vfe_stats_stream *stream_info);
void (*clear_framedrop) (struct vfe_device *vfe_dev,
struct msm_vfe_stats_stream *stream_info);
- void (*cfg_comp_mask) (struct vfe_device *vfe_dev);
+ void (*cfg_comp_mask) (struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable);
void (*cfg_wm_irq_mask) (struct vfe_device *vfe_dev,
struct msm_vfe_stats_stream *stream_info);
void (*clear_wm_irq_mask) (struct vfe_device *vfe_dev,
@@ -294,6 +295,7 @@
composite_info[MAX_NUM_COMPOSITE_MASK];
uint8_t num_used_composite_mask;
uint32_t stream_update;
+ enum msm_isp_camif_update_state pipeline_update;
struct msm_vfe_src_info src_info[VFE_SRC_MAX];
uint16_t stream_handle_cnt;
unsigned long event_mask;
@@ -312,6 +314,7 @@
STATS_ACTIVE,
STATS_START_PENDING,
STATS_STOP_PENDING,
+ STATS_STARTING,
STATS_STOPPING,
};
@@ -319,9 +322,11 @@
uint32_t session_id;
uint32_t stream_id;
uint32_t stream_handle;
+ uint32_t composite_flag;
enum msm_isp_stats_type stats_type;
enum msm_vfe_stats_state state;
uint32_t framedrop_pattern;
+ uint32_t framedrop_period;
uint32_t irq_subsample_pattern;
uint32_t buffer_offset;
@@ -331,11 +336,10 @@
struct msm_vfe_stats_shared_data {
struct msm_vfe_stats_stream stream_info[MSM_ISP_STATS_MAX];
- enum msm_vfe_stats_pipeline_policy stats_pipeline_policy;
- uint32_t comp_framedrop_pattern;
- uint32_t comp_irq_subsample_pattern;
uint8_t num_active_stream;
+ atomic_t stats_comp_mask;
uint16_t stream_handle_cnt;
+ atomic_t stats_update;
};
struct msm_vfe_tasklet_queue_cmd {
@@ -380,6 +384,7 @@
struct completion reset_complete;
struct completion halt_complete;
struct completion stream_config_complete;
+ struct completion stats_config_complete;
struct mutex realtime_mutex;
struct mutex core_mutex;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
index d4f6a07..679c5cb 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
@@ -114,7 +114,7 @@
msm_camera_io_w(0x07FFFFFF, vfe_dev->vfe_base + 0xC);
/* BUS_CFG */
msm_camera_io_w(0x00000001, vfe_dev->vfe_base + 0x3C);
- msm_camera_io_w(0x00000025, vfe_dev->vfe_base + 0x1C);
+ msm_camera_io_w(0x01000025, vfe_dev->vfe_base + 0x1C);
msm_camera_io_w_mb(0x1DFFFFFF, vfe_dev->vfe_base + 0x20);
msm_camera_io_w(0xFFFFFFFF, vfe_dev->vfe_base + 0x24);
msm_camera_io_w_mb(0x1FFFFFFF, vfe_dev->vfe_base + 0x28);
@@ -142,14 +142,13 @@
return;
if (irq_status0 & BIT(0)) {
- ISP_DBG("%s: PIX0 frame id: %lu\n", __func__,
- vfe_dev->axi_data.src_info[VFE_PIX_0].frame_id);
- msm_isp_sof_notify(vfe_dev, VFE_PIX_0, ts);
ISP_DBG("%s: SOF IRQ\n", __func__);
if (vfe_dev->axi_data.src_info[VFE_PIX_0].raw_stream_count > 0
&& vfe_dev->axi_data.src_info[VFE_PIX_0].
pix_stream_count == 0) {
msm_isp_sof_notify(vfe_dev, VFE_PIX_0, ts);
+ if (vfe_dev->axi_data.stream_update)
+ msm_isp_axi_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
}
}
@@ -304,6 +303,8 @@
if (vfe_dev->axi_data.stream_update)
msm_isp_axi_stream_update(vfe_dev);
+ if (atomic_read(&vfe_dev->stats_data.stats_update))
+ msm_isp_stats_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
msm_isp_update_error_frame_count(vfe_dev);
@@ -767,7 +768,8 @@
}
}
-static void msm_vfe32_stats_cfg_comp_mask(struct vfe_device *vfe_dev)
+static void msm_vfe32_stats_cfg_comp_mask(struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable)
{
return;
}
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
index 256d136..731056b 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
@@ -266,7 +266,7 @@
msm_camera_io_w(0xC001FF7F, vfe_dev->vfe_base + 0x974);
/* BUS_CFG */
msm_camera_io_w(0x10000001, vfe_dev->vfe_base + 0x50);
- msm_camera_io_w(0x800000F3, vfe_dev->vfe_base + 0x28);
+ msm_camera_io_w(0xE00000F3, vfe_dev->vfe_base + 0x28);
msm_camera_io_w_mb(0xFEFFFFFF, vfe_dev->vfe_base + 0x2C);
msm_camera_io_w(0xFFFFFFFF, vfe_dev->vfe_base + 0x30);
msm_camera_io_w_mb(0xFEFFFFFF, vfe_dev->vfe_base + 0x34);
@@ -299,6 +299,8 @@
&& vfe_dev->axi_data.src_info[VFE_PIX_0].
pix_stream_count == 0) {
msm_isp_sof_notify(vfe_dev, VFE_PIX_0, ts);
+ if (vfe_dev->axi_data.stream_update)
+ msm_isp_axi_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
}
}
@@ -466,6 +468,8 @@
if (vfe_dev->axi_data.stream_update)
msm_isp_axi_stream_update(vfe_dev);
+ if (atomic_read(&vfe_dev->stats_data.stats_update))
+ msm_isp_stats_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
msm_isp_update_error_frame_count(vfe_dev);
@@ -1003,12 +1007,16 @@
}
}
-static void msm_vfe40_stats_cfg_comp_mask(struct vfe_device *vfe_dev)
+static void msm_vfe40_stats_cfg_comp_mask(struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable)
{
- if (vfe_dev->stats_data.stats_pipeline_policy == STATS_COMP_ALL)
- msm_camera_io_w(0x00FF0000, vfe_dev->vfe_base + 0x44);
+ uint32_t comp_mask;
+ comp_mask = msm_camera_io_r(vfe_dev->vfe_base + 0x44) >> 16;
+ if (enable)
+ comp_mask |= stats_mask;
else
- msm_camera_io_w(0x00000000, vfe_dev->vfe_base + 0x44);
+ comp_mask &= ~stats_mask;
+ msm_camera_io_w(comp_mask << 16, vfe_dev->vfe_base + 0x44);
}
static void msm_vfe40_stats_cfg_wm_irq_mask(
@@ -1039,9 +1047,10 @@
uint32_t stats_base = VFE40_STATS_BASE(stats_idx);
/*WR_ADDR_CFG*/
- msm_camera_io_w(0x7C, vfe_dev->vfe_base + stats_base + 0x8);
+ msm_camera_io_w(stream_info->framedrop_period << 2,
+ vfe_dev->vfe_base + stats_base + 0x8);
/*WR_IRQ_FRAMEDROP_PATTERN*/
- msm_camera_io_w(0xFFFFFFFF,
+ msm_camera_io_w(stream_info->framedrop_pattern,
vfe_dev->vfe_base + stats_base + 0x10);
/*WR_IRQ_SUBSAMPLE_PATTERN*/
msm_camera_io_w(0xFFFFFFFF,
@@ -1189,11 +1198,9 @@
goto vfe_no_resource;
}
- if (vfe_dev->pdev->id == 0)
- vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe0");
- else if (vfe_dev->pdev->id == 1)
- vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe1");
- if (!vfe_dev->iommu_ctx[0]) {
+ vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe0");
+ vfe_dev->iommu_ctx[1] = msm_iommu_get_ctx("vfe1");
+ if (!vfe_dev->iommu_ctx[0] || !vfe_dev->iommu_ctx[1]) {
pr_err("%s: cannot get iommu_ctx\n", __func__);
rc = -ENODEV;
goto vfe_no_resource;
@@ -1245,7 +1252,7 @@
};
struct msm_vfe_hardware_info vfe40_hw_info = {
- .num_iommu_ctx = 1,
+ .num_iommu_ctx = 2,
.vfe_clk_idx = VFE40_CLK_IDX,
.vfe_ops = {
.irq_ops = {
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
index 477985d..13160ee 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
@@ -389,31 +389,6 @@
msm_isp_send_event(vfe_dev, ISP_EVENT_SOF, &sof_event);
}
-uint32_t msm_isp_get_framedrop_period(
- enum msm_vfe_frame_skip_pattern frame_skip_pattern)
-{
- switch (frame_skip_pattern) {
- case NO_SKIP:
- case EVERY_2FRAME:
- case EVERY_3FRAME:
- case EVERY_4FRAME:
- case EVERY_5FRAME:
- case EVERY_6FRAME:
- case EVERY_7FRAME:
- case EVERY_8FRAME:
- return frame_skip_pattern + 1;
- case EVERY_16FRAME:
- return 16;
- break;
- case EVERY_32FRAME:
- return 32;
- break;
- default:
- return 1;
- }
- return 1;
-}
-
void msm_isp_calculate_framedrop(
struct msm_vfe_axi_shared_data *axi_data,
struct msm_vfe_axi_stream_request_cmd *stream_cfg_cmd)
@@ -424,7 +399,10 @@
uint32_t framedrop_period = msm_isp_get_framedrop_period(
stream_cfg_cmd->frame_skip_pattern);
- stream_info->framedrop_pattern = 0x1;
+ if (stream_cfg_cmd->frame_skip_pattern == SKIP_ALL)
+ stream_info->framedrop_pattern = 0x0;
+ else
+ stream_info->framedrop_pattern = 0x1;
stream_info->framedrop_period = framedrop_period - 1;
if (stream_cfg_cmd->init_frame_drop < framedrop_period) {
@@ -611,6 +589,13 @@
ACTIVE : INACTIVE;
}
}
+
+ if (vfe_dev->axi_data.pipeline_update == DISABLE_CAMIF) {
+ vfe_dev->hw_info->vfe_ops.stats_ops.
+ enable_module(vfe_dev, 0xFF, 0);
+ vfe_dev->axi_data.pipeline_update = NO_UPDATE;
+ }
+
vfe_dev->axi_data.stream_update--;
if (vfe_dev->axi_data.stream_update == 0)
complete(&vfe_dev->stream_config_complete);
@@ -724,35 +709,60 @@
}
}
-enum msm_isp_camif_update_state
- msm_isp_update_camif_output_count(
- struct vfe_device *vfe_dev,
+static enum msm_isp_camif_update_state
+ msm_isp_get_camif_update_state(struct vfe_device *vfe_dev,
struct msm_vfe_axi_stream_cfg_cmd *stream_cfg_cmd)
{
int i;
struct msm_vfe_axi_stream *stream_info;
struct msm_vfe_axi_shared_data *axi_data = &vfe_dev->axi_data;
- uint8_t cur_pix_count = axi_data->src_info[VFE_PIX_0].
- pix_stream_count;
- uint8_t cur_raw_count = axi_data->src_info[VFE_PIX_0].
- raw_stream_count;
- uint8_t pix_stream_cnt = 0;
+ uint8_t pix_stream_cnt = 0, cur_pix_stream_cnt;
+ cur_pix_stream_cnt =
+ axi_data->src_info[VFE_PIX_0].pix_stream_count +
+ axi_data->src_info[VFE_PIX_0].raw_stream_count;
for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
stream_info =
&axi_data->stream_info[
HANDLE_TO_IDX(stream_cfg_cmd->stream_handle[i])];
if (stream_info->stream_src < RDI_INTF_0)
pix_stream_cnt++;
+ }
+
+ if (pix_stream_cnt) {
+ if (cur_pix_stream_cnt == 0 && pix_stream_cnt &&
+ stream_cfg_cmd->cmd == START_STREAM)
+ return ENABLE_CAMIF;
+ else if (cur_pix_stream_cnt &&
+ (cur_pix_stream_cnt - pix_stream_cnt) == 0 &&
+ stream_cfg_cmd->cmd == STOP_STREAM)
+ return DISABLE_CAMIF;
+ }
+ return NO_UPDATE;
+}
+
+static void msm_isp_update_camif_output_count(
+ struct vfe_device *vfe_dev,
+ struct msm_vfe_axi_stream_cfg_cmd *stream_cfg_cmd)
+{
+ int i;
+ struct msm_vfe_axi_stream *stream_info;
+ struct msm_vfe_axi_shared_data *axi_data = &vfe_dev->axi_data;
+ for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
+ stream_info =
+ &axi_data->stream_info[
+ HANDLE_TO_IDX(stream_cfg_cmd->stream_handle[i])];
+ if (stream_info->stream_src >= RDI_INTF_0)
+ return;
if (stream_info->stream_src == PIX_ENCODER ||
- stream_info->stream_src == PIX_VIEWFINDER) {
+ stream_info->stream_src == PIX_VIEWFINDER ||
+ stream_info->stream_src == IDEAL_RAW) {
if (stream_cfg_cmd->cmd == START_STREAM)
vfe_dev->axi_data.src_info[VFE_PIX_0].
pix_stream_count++;
else
vfe_dev->axi_data.src_info[VFE_PIX_0].
pix_stream_count--;
- } else if (stream_info->stream_src == CAMIF_RAW ||
- stream_info->stream_src == IDEAL_RAW) {
+ } else if (stream_info->stream_src == CAMIF_RAW) {
if (stream_cfg_cmd->cmd == START_STREAM)
vfe_dev->axi_data.src_info[VFE_PIX_0].
raw_stream_count++;
@@ -761,25 +771,6 @@
raw_stream_count--;
}
}
- if (pix_stream_cnt) {
- if ((cur_pix_count + cur_raw_count == 0) &&
- (axi_data->src_info[VFE_PIX_0].
- pix_stream_count +
- axi_data->src_info[VFE_PIX_0].
- raw_stream_count != 0)) {
- return ENABLE_CAMIF;
- }
-
- if ((cur_pix_count + cur_raw_count != 0) &&
- (axi_data->src_info[VFE_PIX_0].
- pix_stream_count +
- axi_data->src_info[VFE_PIX_0].
- raw_stream_count == 0)) {
- return DISABLE_CAMIF;
- }
- }
-
- return NO_UPDATE;
}
void msm_camera_io_dump_2(void __iomem *addr, int size)
@@ -849,12 +840,14 @@
return rc;
}
-static int msm_isp_axi_wait_for_cfg_done(struct vfe_device *vfe_dev)
+static int msm_isp_axi_wait_for_cfg_done(struct vfe_device *vfe_dev,
+ enum msm_isp_camif_update_state camif_update)
{
int rc;
unsigned long flags;
spin_lock_irqsave(&vfe_dev->shared_data_lock, flags);
init_completion(&vfe_dev->stream_config_complete);
+ vfe_dev->axi_data.pipeline_update = camif_update;
vfe_dev->axi_data.stream_update = 2;
spin_unlock_irqrestore(&vfe_dev->shared_data_lock, flags);
rc = wait_for_completion_interruptible_timeout(
@@ -902,6 +895,20 @@
return rc;
}
+static void msm_isp_deinit_stream_ping_pong_reg(
+ struct vfe_device *vfe_dev,
+ struct msm_vfe_axi_stream *stream_info)
+{
+ int i;
+ for (i = 0; i < 2; i++) {
+ struct msm_isp_buffer *buf;
+ buf = stream_info->buf[i];
+ if (buf)
+ vfe_dev->buf_mgr->ops->put_buf(vfe_dev->buf_mgr,
+ buf->bufq_handle, buf->buf_idx);
+ }
+}
+
static void msm_isp_get_stream_wm_mask(
struct msm_vfe_axi_stream *stream_info,
uint32_t *wm_reload_mask)
@@ -953,12 +960,13 @@
vfe_dev->hw_info->vfe_ops.axi_ops.reload_wm(vfe_dev, wm_reload_mask);
vfe_dev->hw_info->vfe_ops.core_ops.reg_update(vfe_dev);
+ msm_isp_update_camif_output_count(vfe_dev, stream_cfg_cmd);
if (camif_update == ENABLE_CAMIF)
vfe_dev->hw_info->vfe_ops.core_ops.
update_camif_state(vfe_dev, camif_update);
if (wait_for_complete)
- rc = msm_isp_axi_wait_for_cfg_done(vfe_dev);
+ rc = msm_isp_axi_wait_for_cfg_done(vfe_dev, camif_update);
return rc;
}
@@ -976,7 +984,7 @@
stream_info->state = STOP_PENDING;
}
- rc = msm_isp_axi_wait_for_cfg_done(vfe_dev);
+ rc = msm_isp_axi_wait_for_cfg_done(vfe_dev, camif_update);
if (rc < 0) {
pr_err("%s: wait for config done failed\n", __func__);
return rc;
@@ -985,6 +993,13 @@
if (camif_update == DISABLE_CAMIF)
vfe_dev->hw_info->vfe_ops.core_ops.
update_camif_state(vfe_dev, DISABLE_CAMIF);
+ msm_isp_update_camif_output_count(vfe_dev, stream_cfg_cmd);
+
+ for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
+ stream_info = &axi_data->stream_info[
+ HANDLE_TO_IDX(stream_cfg_cmd->stream_handle[i])];
+ msm_isp_deinit_stream_ping_pong_reg(vfe_dev, stream_info);
+ }
return rc;
}
@@ -1006,8 +1021,7 @@
/*Configure UB*/
vfe_dev->hw_info->vfe_ops.axi_ops.cfg_ub(vfe_dev);
}
- camif_update =
- msm_isp_update_camif_output_count(vfe_dev, stream_cfg_cmd);
+ camif_update = msm_isp_get_camif_update_state(vfe_dev, stream_cfg_cmd);
if (stream_cfg_cmd->cmd == START_STREAM)
rc = msm_isp_start_axi_stream(
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
index c47209f..ce71235 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
@@ -10,6 +10,7 @@
* GNU General Public License for more details.
*/
#include <linux/io.h>
+#include <linux/atomic.h>
#include <media/v4l2-subdev.h>
#include "msm_isp_util.h"
#include "msm_isp_stats_util.h"
@@ -67,6 +68,7 @@
struct msm_isp_buffer *done_buf;
struct msm_vfe_stats_stream *stream_info = NULL;
uint32_t pingpong_status;
+ uint32_t comp_stats_type_mask = 0;
uint32_t stats_comp_mask = 0, stats_irq_mask = 0;
stats_comp_mask = vfe_dev->hw_info->vfe_ops.stats_ops.
get_comp_mask(irq_status0, irq_status1);
@@ -76,13 +78,17 @@
return;
ISP_DBG("%s: status: 0x%x\n", __func__, irq_status0);
- if (vfe_dev->stats_data.stats_pipeline_policy == STATS_COMP_ALL) {
- if (!stats_comp_mask)
- return;
- stats_irq_mask = 0xFFFFFFFF;
- }
+ if (!stats_comp_mask)
+ stats_irq_mask &=
+ ~atomic_read(&vfe_dev->stats_data.stats_comp_mask);
+ else
+ stats_irq_mask |=
+ atomic_read(&vfe_dev->stats_data.stats_comp_mask);
memset(&buf_event, 0, sizeof(struct msm_isp_event_data));
+ buf_event.timestamp = ts->event_time;
+ buf_event.frame_id =
+ vfe_dev->axi_data.src_info[VFE_PIX_0].frame_id;
pingpong_status = vfe_dev->hw_info->
vfe_ops.stats_ops.get_pingpong_status(vfe_dev);
@@ -98,22 +104,32 @@
done_buf->bufq_handle, done_buf->buf_idx,
&ts->buf_time, vfe_dev->axi_data.
src_info[VFE_PIX_0].frame_id);
- if (rc == 0) {
- stats_event->stats_mask |=
+ if (rc != 0)
+ continue;
+
+ stats_event->stats_buf_idxs[stream_info->stats_type] =
+ done_buf->buf_idx;
+ if (!stream_info->composite_flag) {
+ stats_event->stats_mask =
1 << stream_info->stats_type;
- stats_event->stats_buf_idxs[
- stream_info->stats_type] =
- done_buf->buf_idx;
+ ISP_DBG("%s: stats event frame id: 0x%x\n",
+ __func__, buf_event.frame_id);
+ msm_isp_send_event(vfe_dev,
+ ISP_EVENT_STATS_NOTIFY +
+ stream_info->stats_type, &buf_event);
+ } else {
+ comp_stats_type_mask |=
+ 1 << stream_info->stats_type;
}
}
}
- if (stats_event->stats_mask) {
- buf_event.timestamp = ts->event_time;
- buf_event.frame_id =
- vfe_dev->axi_data.src_info[VFE_PIX_0].frame_id;
- msm_isp_send_event(vfe_dev, ISP_EVENT_STATS_NOTIFY +
- stream_info->stats_type, &buf_event);
+ if (comp_stats_type_mask) {
+ ISP_DBG("%s: composite stats event frame id: 0x%x mask: 0x%x\n",
+ __func__, buf_event.frame_id, comp_stats_type_mask);
+ stats_event->stats_mask = comp_stats_type_mask;
+ msm_isp_send_event(vfe_dev,
+ ISP_EVENT_COMP_STATS_NOTIFY, &buf_event);
}
}
@@ -140,36 +156,19 @@
return rc;
}
- if (stats_data->stats_pipeline_policy != STATS_COMP_ALL) {
- if (stream_req_cmd->framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid framedrop pattern\n", __func__);
- return rc;
- }
+ if (stream_req_cmd->framedrop_pattern >= MAX_SKIP) {
+ pr_err("%s: Invalid framedrop pattern\n", __func__);
+ return rc;
+ }
- if (stream_req_cmd->irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid irq subsample pattern\n", __func__);
- return rc;
- }
- } else {
- if (stats_data->comp_framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp framedrop pattern\n",
- __func__);
- return rc;
- }
-
- if (stats_data->comp_irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp irq subsample pattern\n",
- __func__);
- return rc;
- }
- stream_req_cmd->framedrop_pattern =
- vfe_dev->stats_data.comp_framedrop_pattern;
- stream_req_cmd->irq_subsample_pattern =
- vfe_dev->stats_data.comp_irq_subsample_pattern;
+ if (stream_req_cmd->irq_subsample_pattern >= MAX_SKIP) {
+ pr_err("%s: Invalid irq subsample pattern\n", __func__);
+ return rc;
}
stream_info->session_id = stream_req_cmd->session_id;
stream_info->stream_id = stream_req_cmd->stream_id;
+ stream_info->composite_flag = stream_req_cmd->composite_flag;
stream_info->stats_type = stream_req_cmd->stats_type;
stream_info->buffer_offset = stream_req_cmd->buffer_offset;
stream_info->framedrop_pattern = stream_req_cmd->framedrop_pattern;
@@ -193,6 +192,7 @@
struct msm_vfe_stats_stream_request_cmd *stream_req_cmd = arg;
struct msm_vfe_stats_stream *stream_info = NULL;
struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ uint32_t framedrop_period;
uint32_t stats_idx;
rc = msm_isp_stats_create_stream(vfe_dev, stream_req_cmd);
@@ -204,31 +204,16 @@
stats_idx = STATS_IDX(stream_req_cmd->stream_handle);
stream_info = &stats_data->stream_info[stats_idx];
- switch (stream_info->framedrop_pattern) {
- case NO_SKIP:
- stream_info->framedrop_pattern = VFE_NO_DROP;
- break;
- case EVERY_2FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_2FRAME;
- break;
- case EVERY_4FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_4FRAME;
- break;
- case EVERY_8FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_8FRAME;
- break;
- case EVERY_16FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_16FRAME;
- break;
- case EVERY_32FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_32FRAME;
- break;
- default:
- stream_info->framedrop_pattern = VFE_NO_DROP;
- break;
- }
+ framedrop_period = msm_isp_get_framedrop_period(
+ stream_req_cmd->framedrop_pattern);
- if (stats_data->stats_pipeline_policy == STATS_COMP_NONE)
+ if (stream_req_cmd->framedrop_pattern == SKIP_ALL)
+ stream_info->framedrop_pattern = 0x0;
+ else
+ stream_info->framedrop_pattern = 0x1;
+ stream_info->framedrop_period = framedrop_period - 1;
+
+ if (!stream_info->composite_flag)
vfe_dev->hw_info->vfe_ops.stats_ops.
cfg_wm_irq_mask(vfe_dev, stream_info);
@@ -257,7 +242,7 @@
rc = msm_isp_cfg_stats_stream(vfe_dev, &stream_cfg_cmd);
}
- if (stats_data->stats_pipeline_policy == STATS_COMP_NONE)
+ if (!stream_info->composite_flag)
vfe_dev->hw_info->vfe_ops.stats_ops.
clear_wm_irq_mask(vfe_dev, stream_info);
@@ -266,18 +251,159 @@
return 0;
}
-int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg)
+static int msm_isp_init_stats_ping_pong_reg(
+ struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream *stream_info)
+{
+ int rc = 0;
+ stream_info->bufq_handle =
+ vfe_dev->buf_mgr->ops->get_bufq_handle(
+ vfe_dev->buf_mgr, stream_info->session_id,
+ stream_info->stream_id);
+ if (stream_info->bufq_handle == 0) {
+ pr_err("%s: no buf configured for stream: 0x%x\n",
+ __func__, stream_info->stream_handle);
+ return -EINVAL;
+ }
+
+ rc = msm_isp_stats_cfg_ping_pong_address(vfe_dev,
+ stream_info, VFE_PING_FLAG, NULL);
+ if (rc < 0) {
+ pr_err("%s: No free buffer for ping\n", __func__);
+ return rc;
+ }
+ rc = msm_isp_stats_cfg_ping_pong_address(vfe_dev,
+ stream_info, VFE_PONG_FLAG, NULL);
+ if (rc < 0) {
+ pr_err("%s: No free buffer for pong\n", __func__);
+ return rc;
+ }
+ return rc;
+}
+
+static void msm_isp_deinit_stats_ping_pong_reg(
+ struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream *stream_info)
+{
+ int i;
+ struct msm_isp_buffer *buf;
+ for (i = 0; i < 2; i++) {
+ buf = stream_info->buf[i];
+ if (buf)
+ vfe_dev->buf_mgr->ops->put_buf(vfe_dev->buf_mgr,
+ buf->bufq_handle, buf->buf_idx);
+ }
+}
+
+void msm_isp_stats_stream_update(struct vfe_device *vfe_dev)
+{
+ int i;
+ uint32_t stats_mask = 0, comp_stats_mask = 0;
+ uint32_t enable = 0;
+ struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ for (i = 0; i < vfe_dev->hw_info->stats_hw_info->num_stats_type; i++) {
+ if (stats_data->stream_info[i].state == STATS_START_PENDING ||
+ stats_data->stream_info[i].state ==
+ STATS_STOP_PENDING) {
+ stats_mask |= i;
+ enable = stats_data->stream_info[i].state ==
+ STATS_START_PENDING ? 1 : 0;
+ stats_data->stream_info[i].state =
+ stats_data->stream_info[i].state ==
+ STATS_START_PENDING ?
+ STATS_STARTING : STATS_STOPPING;
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, BIT(i), enable);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, BIT(i), enable);
+ } else if (stats_data->stream_info[i].state == STATS_STARTING ||
+ stats_data->stream_info[i].state == STATS_STOPPING) {
+ if (stats_data->stream_info[i].composite_flag)
+ comp_stats_mask |= i;
+ if (stats_data->stream_info[i].state == STATS_STARTING)
+ atomic_add(BIT(i),
+ &stats_data->stats_comp_mask);
+ else
+ atomic_sub(BIT(i),
+ &stats_data->stats_comp_mask);
+ stats_data->stream_info[i].state =
+ stats_data->stream_info[i].state ==
+ STATS_STARTING ? STATS_ACTIVE : STATS_INACTIVE;
+ }
+ }
+ atomic_sub(1, &stats_data->stats_update);
+ if (!atomic_read(&stats_data->stats_update))
+ complete(&vfe_dev->stats_config_complete);
+}
+
+static int msm_isp_stats_wait_for_cfg_done(struct vfe_device *vfe_dev)
+{
+ int rc;
+ init_completion(&vfe_dev->stats_config_complete);
+ atomic_set(&vfe_dev->stats_data.stats_update, 2);
+ rc = wait_for_completion_interruptible_timeout(
+ &vfe_dev->stats_config_complete,
+ msecs_to_jiffies(500));
+ if (rc == 0) {
+ pr_err("%s: wait timeout\n", __func__);
+ rc = -1;
+ } else {
+ rc = 0;
+ }
+ return rc;
+}
+
+static int msm_isp_start_stats_stream(struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd)
{
int i, rc = 0;
- struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd = arg;
+ uint32_t stats_mask = 0, comp_stats_mask = 0, idx;
struct msm_vfe_stats_stream *stream_info;
struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
- int idx;
- uint32_t stats_mask = 0;
+ for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
+ idx = STATS_IDX(stream_cfg_cmd->stream_handle[i]);
+ stream_info = &stats_data->stream_info[idx];
+ if (stream_info->stream_handle !=
+ stream_cfg_cmd->stream_handle[i]) {
+ pr_err("%s: Invalid stream handle: 0x%x received\n",
+ __func__, stream_cfg_cmd->stream_handle[i]);
+ continue;
+ }
+ rc = msm_isp_init_stats_ping_pong_reg(vfe_dev, stream_info);
+ if (rc < 0) {
+ pr_err("%s: No buffer for stream%d\n", __func__, idx);
+ return rc;
+ }
- if (stats_data->num_active_stream == 0)
- vfe_dev->hw_info->vfe_ops.stats_ops.cfg_ub(vfe_dev);
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active)
+ stream_info->state = STATS_START_PENDING;
+ else
+ stream_info->state = STATS_ACTIVE;
+ stats_data->num_active_stream++;
+ stats_mask |= 1 << idx;
+ if (stream_info->composite_flag)
+ comp_stats_mask |= 1 << idx;
+ }
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active) {
+ rc = msm_isp_stats_wait_for_cfg_done(vfe_dev);
+ } else {
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ atomic_add(comp_stats_mask, &stats_data->stats_comp_mask);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, comp_stats_mask, 1);
+ }
+ return rc;
+}
+
+static int msm_isp_stop_stats_stream(struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd)
+{
+ int i, rc = 0;
+ uint32_t stats_mask = 0, comp_stats_mask = 0, idx;
+ struct msm_vfe_stats_stream *stream_info;
+ struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
idx = STATS_IDX(stream_cfg_cmd->stream_handle[i]);
stream_info = &stats_data->stream_info[idx];
@@ -288,71 +414,45 @@
continue;
}
- if (stream_cfg_cmd->enable) {
- stream_info->bufq_handle =
- vfe_dev->buf_mgr->ops->get_bufq_handle(
- vfe_dev->buf_mgr, stream_info->session_id,
- stream_info->stream_id);
- if (stream_info->bufq_handle == 0) {
- pr_err("%s: no buf configured for stream: 0x%x\n",
- __func__,
- stream_info->stream_handle);
- return -EINVAL;
- }
-
- msm_isp_stats_cfg_ping_pong_address(vfe_dev,
- stream_info, VFE_PING_FLAG, NULL);
- msm_isp_stats_cfg_ping_pong_address(vfe_dev,
- stream_info, VFE_PONG_FLAG, NULL);
- stream_info->state = STATS_START_PENDING;
- stats_data->num_active_stream++;
- } else {
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active)
stream_info->state = STATS_STOP_PENDING;
- stats_data->num_active_stream--;
- }
+ else
+ stream_info->state = STATS_INACTIVE;
+
+ stats_data->num_active_stream--;
stats_mask |= 1 << idx;
+ if (stream_info->composite_flag)
+ comp_stats_mask |= 1 << idx;
}
- vfe_dev->hw_info->vfe_ops.stats_ops.
- enable_module(vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active) {
+ rc = msm_isp_stats_wait_for_cfg_done(vfe_dev);
+ } else {
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ atomic_sub(comp_stats_mask, &stats_data->stats_comp_mask);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, comp_stats_mask, 0);
+ }
+
+ for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
+ idx = STATS_IDX(stream_cfg_cmd->stream_handle[i]);
+ stream_info = &stats_data->stream_info[idx];
+ msm_isp_deinit_stats_ping_pong_reg(vfe_dev, stream_info);
+ }
return rc;
}
-int msm_isp_cfg_stats_comp_policy(struct vfe_device *vfe_dev, void *arg)
+int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg)
{
- int rc = -1;
- struct msm_vfe_stats_comp_policy_cfg *policy_cfg_cmd = arg;
- struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ int rc = 0;
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd = arg;
+ if (vfe_dev->stats_data.num_active_stream == 0)
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_ub(vfe_dev);
- if (stats_data->num_active_stream != 0) {
- pr_err("%s: Cannot update policy when there are active streams\n",
- __func__);
- return rc;
- }
+ if (stream_cfg_cmd->enable)
+ rc = msm_isp_start_stats_stream(vfe_dev, stream_cfg_cmd);
+ else
+ rc = msm_isp_stop_stats_stream(vfe_dev, stream_cfg_cmd);
- if (policy_cfg_cmd->stats_pipeline_policy >= MAX_STATS_POLICY) {
- pr_err("%s: Invalid stats composite policy\n", __func__);
- return rc;
- }
-
- if (policy_cfg_cmd->comp_framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp framedrop pattern\n", __func__);
- return rc;
- }
-
- if (policy_cfg_cmd->comp_irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp irq subsample pattern\n", __func__);
- return rc;
- }
-
- stats_data->stats_pipeline_policy =
- policy_cfg_cmd->stats_pipeline_policy;
- stats_data->comp_framedrop_pattern =
- policy_cfg_cmd->comp_framedrop_pattern;
- stats_data->comp_irq_subsample_pattern =
- policy_cfg_cmd->comp_irq_subsample_pattern;
-
- vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(vfe_dev);
-
- return 0;
+ return rc;
}
-
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
index 13e1fd6..7b4c4b4 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
@@ -18,8 +18,8 @@
void msm_isp_process_stats_irq(struct vfe_device *vfe_dev,
uint32_t irq_status0, uint32_t irq_status1,
struct msm_isp_timestamp *ts);
+void msm_isp_stats_stream_update(struct vfe_device *vfe_dev);
int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg);
int msm_isp_release_stats_stream(struct vfe_device *vfe_dev, void *arg);
int msm_isp_request_stats_stream(struct vfe_device *vfe_dev, void *arg);
-int msm_isp_cfg_stats_comp_policy(struct vfe_device *vfe_dev, void *arg);
#endif /* __MSM_ISP_STATS_UTIL_H__ */
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
index 9fd87f3..613ad86 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
@@ -12,6 +12,7 @@
#include <linux/mutex.h>
#include <linux/io.h>
#include <media/v4l2-subdev.h>
+#include <linux/ratelimit.h>
#include "msm.h"
#include "msm_isp_util.h"
@@ -121,7 +122,7 @@
path->vectors[0].ab = MSM_ISP_MIN_AB;
path->vectors[0].ib = MSM_ISP_MIN_IB;
for (i = 0; i < MAX_ISP_CLIENT; i++) {
- if (isp_bandwidth_mgr.client_info[client].active) {
+ if (isp_bandwidth_mgr.client_info[i].active) {
path->vectors[0].ab +=
isp_bandwidth_mgr.client_info[i].ab;
path->vectors[0].ib +=
@@ -154,6 +155,33 @@
mutex_unlock(&bandwidth_mgr_mutex);
}
+uint32_t msm_isp_get_framedrop_period(
+ enum msm_vfe_frame_skip_pattern frame_skip_pattern)
+{
+ switch (frame_skip_pattern) {
+ case NO_SKIP:
+ case EVERY_2FRAME:
+ case EVERY_3FRAME:
+ case EVERY_4FRAME:
+ case EVERY_5FRAME:
+ case EVERY_6FRAME:
+ case EVERY_7FRAME:
+ case EVERY_8FRAME:
+ return frame_skip_pattern + 1;
+ case EVERY_16FRAME:
+ return 16;
+ break;
+ case EVERY_32FRAME:
+ return 32;
+ break;
+ case SKIP_ALL:
+ return 1;
+ default:
+ return 1;
+ }
+ return 1;
+}
+
static inline void msm_isp_get_timestamp(struct msm_isp_timestamp *time_stamp)
{
struct timespec ts;
@@ -355,11 +383,6 @@
rc = msm_isp_cfg_stats_stream(vfe_dev, arg);
mutex_unlock(&vfe_dev->core_mutex);
break;
- case VIDIOC_MSM_ISP_CFG_STATS_COMP_POLICY:
- mutex_lock(&vfe_dev->core_mutex);
- rc = msm_isp_cfg_stats_comp_policy(vfe_dev, arg);
- mutex_unlock(&vfe_dev->core_mutex);
- break;
case VIDIOC_MSM_ISP_UPDATE_STREAM:
mutex_lock(&vfe_dev->core_mutex);
rc = msm_isp_update_axi_stream(vfe_dev, arg);
@@ -686,6 +709,11 @@
{
int i;
struct msm_vfe_error_info *error_info = &vfe_dev->error_info;
+ static DEFINE_RATELIMIT_STATE(rs,
+ DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
+ static DEFINE_RATELIMIT_STATE(rs_stats,
+ DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
+
if (error_info->error_count == 1 ||
!(error_info->info_dump_frame_count % 100)) {
vfe_dev->hw_info->vfe_ops.core_ops.
@@ -695,7 +723,8 @@
error_info->camif_status = 0;
error_info->violation_status = 0;
for (i = 0; i < MAX_NUM_STREAM; i++) {
- if (error_info->stream_framedrop_count[i] != 0) {
+ if (error_info->stream_framedrop_count[i] != 0 &&
+ __ratelimit(&rs)) {
pr_err("%s: Stream[%d]: dropped %d frames\n",
__func__, i,
error_info->stream_framedrop_count[i]);
@@ -703,7 +732,8 @@
}
}
for (i = 0; i < MSM_ISP_STATS_MAX; i++) {
- if (error_info->stats_framedrop_count[i] != 0) {
+ if (error_info->stats_framedrop_count[i] != 0 &&
+ __ratelimit(&rs_stats)) {
pr_err("%s: Stats stream[%d]: dropped %d frames\n",
__func__, i,
error_info->stats_framedrop_count[i]);
@@ -750,7 +780,8 @@
spin_lock_irqsave(&vfe_dev->tasklet_lock, flags);
queue_cmd = &vfe_dev->tasklet_queue_cmd[vfe_dev->taskletq_idx];
if (queue_cmd->cmd_used) {
- pr_err("%s: Tasklet queue overflow\n", __func__);
+ pr_err_ratelimited("%s: Tasklet queue overflow: %d\n",
+ __func__, vfe_dev->pdev->id);
list_del(&queue_cmd->list);
} else {
atomic_add(1, &vfe_dev->irq_cnt);
@@ -818,7 +849,6 @@
int msm_isp_open_node(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- uint32_t i;
struct vfe_device *vfe_dev = v4l2_get_subdevdata(sd);
long rc;
ISP_DBG("%s\n", __func__);
@@ -851,9 +881,6 @@
vfe_dev->hw_info->vfe_ops.core_ops.init_hw_reg(vfe_dev);
- for (i = 0; i < vfe_dev->hw_info->num_iommu_ctx; i++)
- vfe_dev->buf_mgr->ops->attach_ctx(vfe_dev->buf_mgr,
- vfe_dev->iommu_ctx[i]);
vfe_dev->buf_mgr->ops->buf_mgr_init(vfe_dev->buf_mgr, "msm_isp", 28);
memset(&vfe_dev->axi_data, 0, sizeof(struct msm_vfe_axi_shared_data));
@@ -870,7 +897,6 @@
int msm_isp_close_node(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- int i;
long rc;
struct vfe_device *vfe_dev = v4l2_get_subdevdata(sd);
ISP_DBG("%s\n", __func__);
@@ -888,11 +914,6 @@
pr_err("%s: halt timeout\n", __func__);
vfe_dev->buf_mgr->ops->buf_mgr_deinit(vfe_dev->buf_mgr);
-
- for (i = vfe_dev->hw_info->num_iommu_ctx - 1; i >= 0; i--)
- vfe_dev->buf_mgr->ops->detach_ctx(vfe_dev->buf_mgr,
- vfe_dev->iommu_ctx[i]);
-
vfe_dev->hw_info->vfe_ops.core_ops.release_hw(vfe_dev);
vfe_dev->vfe_open_cnt--;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
index 7934f26..34b9859 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
@@ -43,6 +43,9 @@
struct msm_isp_bandwidth_info client_info[MAX_ISP_CLIENT];
};
+uint32_t msm_isp_get_framedrop_period(
+ enum msm_vfe_frame_skip_pattern frame_skip_pattern);
+
int msm_isp_init_bandwidth_mgr(enum msm_isp_hw_client client);
int msm_isp_update_bandwidth(enum msm_isp_hw_client client,
uint64_t ab, uint64_t ib);
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
index f209330..962c079 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
@@ -11,7 +11,6 @@
*/
#include <linux/delay.h>
-#include <linux/clk.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/jiffies.h>
@@ -60,181 +59,25 @@
false : true;
}
-static struct msm_cam_clk_info ispif_8960_clk_info[] = {
- {"csi_pix_clk", 0},
- {"csi_rdi_clk", 0},
- {"csi_pix1_clk", 0},
- {"csi_rdi1_clk", 0},
- {"csi_rdi2_clk", 0},
+static struct msm_cam_clk_info ispif_8974_ahb_clk_info[] = {
+ {"ispif_ahb_clk", -1},
};
-static struct msm_cam_clk_info ispif_8974_clk_info_vfe0[] = {
- {"camss_vfe_vfe_clk", -1},
- {"camss_csi_vfe_clk", -1},
-};
-
-static struct msm_cam_clk_info ispif_8974_clk_info_vfe1[] = {
- {"camss_vfe_vfe_clk1", -1},
- {"camss_csi_vfe_clk1", -1},
-};
-
-static int msm_ispif_clk_enable_one(struct ispif_device *ispif,
- enum msm_ispif_vfe_intf vfe_intf, int enable)
+static int msm_ispif_clk_ahb_enable(struct ispif_device *ispif, int enable)
{
int rc = 0;
- if (enable)
- pr_debug("enable clk for VFE%d\n", vfe_intf);
- else
- pr_debug("disable clk for VFE%d\n", vfe_intf);
-
- if (ispif->csid_version < CSID_VERSION_V2) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev, ispif_8960_clk_info,
- ispif->ispif_clk[vfe_intf], 2, enable);
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d\n",
- __func__, rc);
- goto end;
- }
- } else if (ispif->csid_version == CSID_VERSION_V2) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev, ispif_8960_clk_info,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8960_clk_info),
- enable);
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d\n",
- __func__, rc);
- goto end;
- }
- } else if (ispif->csid_version >= CSID_VERSION_V3) {
- if (vfe_intf == VFE0) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev,
- ispif_8974_clk_info_vfe0,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8974_clk_info_vfe0), enable);
- } else {
- rc = msm_cam_clk_enable(&ispif->pdev->dev,
- ispif_8974_clk_info_vfe1,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8974_clk_info_vfe1), enable);
- }
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d, vfeid = %d\n",
- __func__, rc, vfe_intf);
- goto end;
- }
- } else {
- pr_err("%s: unsupported version=%d\n", __func__,
- ispif->csid_version);
- goto end;
+ if (ispif->csid_version < CSID_VERSION_V3) {
+ /* Older ISPIF versions don't need the AHB clock */
+ return 0;
}
-end:
- return rc;
-}
-
-static int msm_ispif_clk_enable(struct ispif_device *ispif,
- struct msm_ispif_param_data *params, int enable)
-{
- int rc = 0;
- int i, j;
- uint32_t vfe_intf_mask = 0;
-
- for (i = 0; i < params->num; i++) {
- if (vfe_intf_mask & (1 << params->entries[i].vfe_intf))
- continue;
- rc = msm_ispif_clk_enable_one(ispif,
- params->entries[i].vfe_intf, 1);
- if (rc < 0 && enable) {
- pr_err("%s: unable to enable clocks for VFE %d",
- __func__, params->entries[i].vfe_intf);
- for (j = 0; j < i; j++) {
- /* if VFE clock is not enabled do
- * not disable the clock */
- if (!(vfe_intf_mask & (1 <<
- params->entries[i].vfe_intf)))
- continue;
- msm_ispif_clk_enable_one(ispif,
- params->entries[j].vfe_intf, 0);
- /* remove the VFE ID from the mask */
- vfe_intf_mask &=
- ~(1 << params->entries[i].vfe_intf);
- }
- break;
- }
- vfe_intf_mask |= 1 << params->entries[i].vfe_intf;
- }
- return rc;
-}
-
-static int msm_ispif_intf_reset(struct ispif_device *ispif,
- struct msm_ispif_param_data *params)
-{
-
- int i, rc = 0;
- enum msm_ispif_intftype intf_type;
- int vfe_intf = 0;
- uint32_t data = 0;
-
- for (i = 0; i < params->num; i++) {
- data = STROBED_RST_EN;
- vfe_intf = params->entries[i].vfe_intf;
- intf_type = params->entries[i].intftype;
- ispif->sof_count[params->entries[i].vfe_intf].
- sof_cnt[intf_type] = 0;
-
- switch (intf_type) {
- case PIX0:
- data |= (PIX_0_VFE_RST_STB | PIX_0_CSID_RST_STB);
- break;
- case RDI0:
- data |= (RDI_0_VFE_RST_STB | RDI_0_CSID_RST_STB);
- break;
- case PIX1:
- data |= (PIX_1_VFE_RST_STB | PIX_1_CSID_RST_STB);
- break;
- case RDI1:
- data |= (RDI_1_VFE_RST_STB | RDI_1_CSID_RST_STB);
- break;
- case RDI2:
- data |= (RDI_2_VFE_RST_STB | RDI_2_CSID_RST_STB);
- break;
- default:
- rc = -EINVAL;
- break;
- }
- if (data > 0x1) {
- unsigned long jiffes = msecs_to_jiffies(500);
- long lrc = 0;
- unsigned long flags;
-
- spin_lock_irqsave(
- &ispif->auto_complete_lock, flags);
- ispif->wait_timeout[vfe_intf] = 0;
- init_completion(&ispif->reset_complete[vfe_intf]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
-
- if (vfe_intf == VFE0)
- msm_camera_io_w(data, ispif->base +
- ISPIF_RST_CMD_ADDR);
- else
- msm_camera_io_w(data, ispif->base +
- ISPIF_RST_CMD_1_ADDR);
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[vfe_intf], jiffes);
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfe_id = %d\n",
- __func__, lrc, vfe_intf);
- rc = -EIO;
-
- spin_lock_irqsave(
- &ispif->auto_complete_lock, flags);
- ispif->wait_timeout[vfe_intf] = 1;
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
- }
+ rc = msm_cam_clk_enable(&ispif->pdev->dev,
+ ispif_8974_ahb_clk_info, &ispif->ahb_clk,
+ ARRAY_SIZE(ispif_8974_ahb_clk_info), enable);
+ if (rc < 0) {
+ pr_err("%s: cannot enable clock, error = %d",
+ __func__, rc);
}
return rc;
@@ -243,63 +86,48 @@
static int msm_ispif_reset(struct ispif_device *ispif)
{
int rc = 0;
- long lrc = 0;
- unsigned long jiffes = msecs_to_jiffies(500);
- unsigned long flags;
-
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE0] = 0;
- init_completion(&ispif->reset_complete[VFE0]);
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- ispif->wait_timeout[VFE1] = 0;
- init_completion(&ispif->reset_complete[VFE1]);
- }
- spin_unlock_irqrestore(&ispif->auto_complete_lock, flags);
+ int i;
BUG_ON(!ispif);
memset(ispif->sof_count, 0, sizeof(ispif->sof_count));
- msm_camera_io_w(ISPIF_RST_CMD_MASK, ispif->base + ISPIF_RST_CMD_ADDR);
+ for (i = 0; i < ispif->vfe_info.num_vfe; i++) {
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[VFE0], jiffes);
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_CTRL_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_1(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_2(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_0(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_1(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_2(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INPUT_SEL(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INTF_CMD_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INTF_CMD_1(i));
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfeid = %d\n",
- __func__, lrc, VFE0);
- rc = -EIO;
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CID_MASK(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CID_MASK(i, 1));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 1));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 2));
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE0] = 1;
- spin_unlock_irqrestore(&ispif->auto_complete_lock, flags);
-
- goto end;
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CROP(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CROP(i, 1));
}
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- msm_camera_io_w_mb(ISPIF_RST_CMD_1_MASK, ispif->base +
- ISPIF_RST_CMD_1_ADDR);
+ msm_camera_io_w_mb(ISPIF_IRQ_GLOBAL_CLEAR_CMD, ispif->base +
+ ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[VFE1], jiffes);
-
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfeid = %d\n",
- __func__, lrc, VFE1);
- rc = -EIO;
-
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE1] = 1;
- spin_unlock_irqrestore(&ispif->auto_complete_lock,
- flags);
- }
-
- }
-
-end:
return rc;
}
@@ -315,7 +143,6 @@
static void msm_ispif_sel_csid_core(struct ispif_device *ispif,
uint8_t intftype, uint8_t csid, uint8_t vfe_intf)
{
- int rc = 0;
uint32_t data;
BUG_ON(!ispif);
@@ -325,19 +152,6 @@
return;
}
- if (ispif->csid_version <= CSID_VERSION_V2) {
- if (ispif->ispif_clk[vfe_intf][intftype] == NULL) {
- CDBG("%s: ispif NULL clk\n", __func__);
- return;
- }
-
- rc = clk_set_rate(ispif->ispif_clk[vfe_intf][intftype], csid);
- if (rc) {
- pr_err("%s: clk_set_rate failed %d\n", __func__, rc);
- return;
- }
- }
-
data = msm_camera_io_r(ispif->base + ISPIF_VFE_m_INPUT_SEL(vfe_intf));
switch (intftype) {
case PIX0:
@@ -361,9 +175,9 @@
data |= (csid << 20);
break;
}
- if (data)
- msm_camera_io_w_mb(data, ispif->base +
- ISPIF_VFE_m_INPUT_SEL(vfe_intf));
+
+ msm_camera_io_w_mb(data, ispif->base +
+ ISPIF_VFE_m_INPUT_SEL(vfe_intf));
}
static void msm_ispif_enable_crop(struct ispif_device *ispif,
@@ -480,6 +294,58 @@
return rc;
}
+static void msm_ispif_select_clk_mux(struct ispif_device *ispif,
+ uint8_t intftype, uint8_t csid, uint8_t vfe_intf)
+{
+ uint32_t data = 0;
+
+ switch (intftype) {
+ case PIX0:
+ data = msm_camera_io_r(ispif->clk_mux_base);
+ data &= ~(0xf << (vfe_intf * 8));
+ data |= (csid << (vfe_intf * 8));
+ msm_camera_io_w(data, ispif->clk_mux_base);
+ break;
+
+ case RDI0:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (vfe_intf * 12));
+ data |= (csid << (vfe_intf * 12));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+
+ case PIX1:
+ data = msm_camera_io_r(ispif->clk_mux_base);
+ data &= ~(0xf0 << (vfe_intf * 8));
+ data |= (csid << (4 + (vfe_intf * 8)));
+ msm_camera_io_w(data, ispif->clk_mux_base);
+ break;
+
+ case RDI1:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (4 + (vfe_intf * 12)));
+ data |= (csid << (4 + (vfe_intf * 12)));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+
+ case RDI2:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (8 + (vfe_intf * 12)));
+ data |= (csid << (8 + (vfe_intf * 12)));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+ }
+ CDBG("%s intftype %d data %x\n", __func__, intftype, data);
+ mb();
+ return;
+}
+
static uint16_t msm_ispif_get_cids_mask_from_cfg(
struct msm_ispif_params_entry *entry)
{
@@ -512,11 +378,6 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
for (i = 0; i < params->num; i++) {
vfe_intf = params->entries[i].vfe_intf;
if (!msm_ispif_is_intf_valid(ispif->csid_version,
@@ -549,6 +410,10 @@
return -EINVAL;
}
+ if (ispif->csid_version >= CSID_VERSION_V3)
+ msm_ispif_select_clk_mux(ispif, intftype,
+ params->entries[i].csid, vfe_intf);
+
rc = msm_ispif_validate_intf_status(ispif, intftype, vfe_intf);
if (rc) {
pr_err("%s:validate_intf_status failed, rc = %d\n",
@@ -591,8 +456,6 @@
msm_camera_io_w_mb(ISPIF_IRQ_GLOBAL_CLEAR_CMD, ispif->base +
ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
- msm_ispif_clk_enable(ispif, params, 0);
-
return rc;
}
@@ -685,7 +548,7 @@
static int msm_ispif_start_frame_boundary(struct ispif_device *ispif,
struct msm_ispif_param_data *params)
{
- int rc;
+ int rc = 0;
if (ispif->ispif_state != ISPIF_POWER_UP) {
pr_err("%s: ispif invalid state %d\n", __func__,
@@ -694,23 +557,8 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
-
- rc = msm_ispif_intf_reset(ispif, params);
- if (rc) {
- pr_err("%s: msm_ispif_intf_reset failed. rc=%d\n",
- __func__, rc);
- goto end;
- }
-
msm_ispif_intf_cmd(ispif, ISPIF_INTF_CMD_ENABLE_FRAME_BOUNDARY, params);
-end:
- msm_ispif_clk_enable(ispif, params, 0);
return rc;
}
@@ -733,12 +581,6 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
-
for (i = 0; i < params->num; i++) {
if (!msm_ispif_is_intf_valid(ispif->csid_version,
params->entries[i].vfe_intf)) {
@@ -790,8 +632,6 @@
}
end:
- msm_ispif_clk_enable(ispif, params, 0);
-
return rc;
}
@@ -862,15 +702,6 @@
ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
if (out[VFE0].ispifIrqStatus0 & ISPIF_IRQ_STATUS_MASK) {
- if (out[VFE0].ispifIrqStatus0 & RESET_DONE_IRQ) {
- unsigned long flags;
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- if (ispif->wait_timeout[VFE0] == 0)
- complete(&ispif->reset_complete[VFE0]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
-
if (out[VFE0].ispifIrqStatus0 & PIX_INTF_0_OVERFLOW_IRQ)
pr_err("%s: VFE0 pix0 overflow.\n", __func__);
@@ -886,15 +717,6 @@
ispif_process_irq(ispif, out, VFE0);
}
if (ispif->vfe_info.num_vfe > 1) {
- if (out[VFE1].ispifIrqStatus0 & RESET_DONE_IRQ) {
- unsigned long flags;
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- if (ispif->wait_timeout[VFE1] == 0)
- complete(&ispif->reset_complete[VFE1]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
-
if (out[VFE1].ispifIrqStatus0 & PIX_INTF_0_OVERFLOW_IRQ)
pr_err("%s: VFE1 pix0 overflow.\n", __func__);
@@ -949,19 +771,20 @@
memset(ispif->sof_count, 0, sizeof(ispif->sof_count));
ispif->csid_version = csid_version;
- rc = msm_ispif_clk_enable_one(ispif, VFE0, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks for VFE0", __func__);
- goto error_clk0;
- }
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- rc = msm_ispif_clk_enable_one(ispif, VFE1, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks for VFE1",
- __func__);
- goto error_clk1;
+ if (ispif->csid_version >= CSID_VERSION_V3) {
+ if (!ispif->clk_mux_mem || !ispif->clk_mux_io) {
+ pr_err("%s csi clk mux mem %p io %p\n", __func__,
+ ispif->clk_mux_mem, ispif->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+ ispif->clk_mux_base = ioremap(ispif->clk_mux_mem->start,
+ resource_size(ispif->clk_mux_mem));
+ if (!ispif->clk_mux_base) {
+ pr_err("%s: clk_mux_mem ioremap failed\n", __func__);
+ rc = -ENOMEM;
+ return rc;
}
}
@@ -979,31 +802,30 @@
goto error_irq;
}
+ rc = msm_ispif_clk_ahb_enable(ispif, 1);
+ if (rc) {
+ pr_err("%s: ahb_clk enable failed", __func__);
+ goto error_ahb;
+ }
+
rc = msm_ispif_reset(ispif);
if (rc == 0) {
ispif->ispif_state = ISPIF_POWER_UP;
CDBG("%s: power up done\n", __func__);
goto end;
}
+
+error_ahb:
free_irq(ispif->irq->start, ispif);
error_irq:
iounmap(ispif->base);
end:
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1)
- msm_ispif_clk_enable_one(ispif, VFE1, 0);
-
-error_clk1:
- msm_ispif_clk_enable_one(ispif, VFE0, 0);
-
-error_clk0:
return rc;
}
static void msm_ispif_release(struct ispif_device *ispif)
{
- int i;
BUG_ON(!ispif);
if (ispif->ispif_state != ISPIF_POWER_UP) {
@@ -1012,18 +834,16 @@
return;
}
- for (i = 0; i < ispif->vfe_info.num_vfe; i++)
- msm_ispif_clk_enable_one(ispif, i, 1);
-
/* make sure no streaming going on */
msm_ispif_reset(ispif);
+ msm_ispif_clk_ahb_enable(ispif, 0);
+
free_irq(ispif->irq->start, ispif);
iounmap(ispif->base);
- for (i = 0; i < ispif->vfe_info.num_vfe; i++)
- msm_ispif_clk_enable_one(ispif, i, 0);
+ iounmap(ispif->clk_mux_base);
ispif->ispif_state = ISPIF_POWER_DOWN;
}
@@ -1143,7 +963,6 @@
{
int rc;
struct ispif_device *ispif;
- int i;
ispif = kzalloc(sizeof(struct ispif_device), GFP_KERNEL);
if (!ispif) {
@@ -1197,13 +1016,20 @@
rc = -EBUSY;
goto error;
}
+ ispif->clk_mux_mem = platform_get_resource_byname(pdev,
+ IORESOURCE_MEM, "csi_clk_mux");
+ if (ispif->clk_mux_mem) {
+ ispif->clk_mux_io = request_mem_region(
+ ispif->clk_mux_mem->start,
+ resource_size(ispif->clk_mux_mem),
+ ispif->clk_mux_mem->name);
+ if (!ispif->clk_mux_io)
+ pr_err("%s: no valid csi_mux region\n", __func__);
+ }
ispif->pdev = pdev;
ispif->ispif_state = ISPIF_POWER_DOWN;
ispif->open_cnt = 0;
- spin_lock_init(&ispif->auto_complete_lock);
- for (i = 0; i < VFE_MAX; i++)
- ispif->wait_timeout[i] = 0;
return 0;
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
index 2c77292..faa32aa 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
@@ -42,21 +42,21 @@
struct platform_device *pdev;
struct msm_sd_subdev msm_sd;
struct resource *mem;
+ struct resource *clk_mux_mem;
struct resource *irq;
struct resource *io;
+ struct resource *clk_mux_io;
void __iomem *base;
+ void __iomem *clk_mux_base;
struct mutex mutex;
uint8_t start_ack_pending;
- struct completion reset_complete[VFE_MAX];
- spinlock_t auto_complete_lock;
- uint8_t wait_timeout[VFE_MAX];
uint32_t csid_version;
int enb_dump_reg;
uint32_t open_cnt;
struct ispif_sof_count sof_count[VFE_MAX];
struct ispif_intf_cmd applied_intf_cmd[VFE_MAX];
enum msm_ispif_state_t ispif_state;
- struct clk *ispif_clk[VFE_MAX][INTF_MAX];
struct msm_ispif_vfe_info vfe_info;
+ struct clk *ahb_clk;
};
#endif
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
index 9f8b2fa..6396486 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
@@ -50,6 +50,8 @@
+/* CSID CLK MUX SEL REGISTERS */
+#define ISPIF_RDI_CLK_MUX_SEL_ADDR 0x8
/*ISPIF RESET BITS*/
#define VFE_CLK_DOMAIN_RST BIT(31)
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
index 5e61a4d..c805c3d 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
@@ -46,6 +46,9 @@
#define ISPIF_VFE_m_RDI_INTF_n_STATUS(m, n) (0x2D0 + ISPIF_VFE(m) + 4*(n))
#define ISPIF_VFE_m_3D_DESKEW_SIZE(m) (0x2E4 + ISPIF_VFE(m))
+/* CSID CLK MUX SEL REGISTERS */
+#define ISPIF_RDI_CLK_MUX_SEL_ADDR 0x8
+
/*ISPIF RESET BITS*/
#define VFE_CLK_DOMAIN_RST BIT(31)
#define PIX_1_CLK_DOMAIN_RST BIT(30)
diff --git a/drivers/media/platform/msm/camera_v2/msm.c b/drivers/media/platform/msm/camera_v2/msm.c
index 9f1c81a..be9f613 100644
--- a/drivers/media/platform/msm/camera_v2/msm.c
+++ b/drivers/media/platform/msm/camera_v2/msm.c
@@ -661,6 +661,7 @@
rc = -ETIMEDOUT;
}
if (rc < 0) {
+ pr_err("%s: rc = %d\n", __func__, rc);
mutex_unlock(&session->lock);
return rc;
}
diff --git a/drivers/media/platform/msm/camera_v2/msm_buf_mgr/msm_generic_buf_mgr.c b/drivers/media/platform/msm/camera_v2/msm_buf_mgr/msm_generic_buf_mgr.c
index 35210a0..b0ff832 100644
--- a/drivers/media/platform/msm/camera_v2/msm_buf_mgr/msm_generic_buf_mgr.c
+++ b/drivers/media/platform/msm/camera_v2/msm_buf_mgr/msm_generic_buf_mgr.c
@@ -35,7 +35,7 @@
new_entry->vb2_buf = buf_mngr_dev->vb2_ops.get_buf(buf_info->session_id,
buf_info->stream_id);
if (!new_entry->vb2_buf) {
- pr_err("%s:Get buf is null\n", __func__);
+ pr_debug("%s:Get buf is null\n", __func__);
kfree(new_entry);
return -EINVAL;
}
diff --git a/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c b/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
index 1203d17..5a174f5 100644
--- a/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
+++ b/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
@@ -588,7 +588,7 @@
pr_err("%s: Bandwidth registration Failed!\n", __func__);
goto bus_scale_register_failed;
}
- msm_isp_update_bandwidth(ISP_CPP, 981345600, 1600020000);
+ msm_isp_update_bandwidth(ISP_CPP, 981345600, 1066680000);
if (cpp_dev->fs_cpp == NULL) {
cpp_dev->fs_cpp =
@@ -860,7 +860,7 @@
rc = v4l2_subdev_call(cpp_dev->buf_mgr_subdev, core, ioctl,
buff_mgr_ops, buff_mgr_info);
if (rc < 0)
- pr_err("%s: line %d rc = %d\n", __func__, __LINE__, rc);
+ pr_debug("%s: line %d rc = %d\n", __func__, __LINE__, rc);
return rc;
}
@@ -1010,7 +1010,7 @@
&buff_mgr_info);
if (rc < 0) {
rc = -EAGAIN;
- pr_err("error getting buffer rc:%d\n", rc);
+ pr_debug("error getting buffer rc:%d\n", rc);
goto ERROR2;
}
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
index fbd4a2e..33eaa69 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
@@ -185,49 +185,15 @@
{"csi_pclk", -1},
};
-static struct msm_cam_clk_info csid0_8974_clk_info[] = {
+static struct msm_cam_clk_info csid_8974_clk_info[] = {
{"camss_top_ahb_clk", -1},
{"ispif_ahb_clk", -1},
- {"csi0_ahb_clk", -1},
- {"csi0_src_clk", 200000000},
- {"csi0_clk", -1},
- {"csi0_phy_clk", -1},
- {"csi0_pix_clk", -1},
- {"csi0_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid1_8974_clk_info[] = {
- {"csi1_ahb_clk", -1},
- {"csi1_src_clk", 200000000},
- {"csi1_clk", -1},
- {"csi1_phy_clk", -1},
- {"csi1_pix_clk", -1},
- {"csi1_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid2_8974_clk_info[] = {
- {"csi2_ahb_clk", -1},
- {"csi2_src_clk", 200000000},
- {"csi2_clk", -1},
- {"csi2_phy_clk", -1},
- {"csi2_pix_clk", -1},
- {"csi2_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid3_8974_clk_info[] = {
- {"csi3_ahb_clk", -1},
- {"csi3_src_clk", 200000000},
- {"csi3_clk", -1},
- {"csi3_phy_clk", -1},
- {"csi3_pix_clk", -1},
- {"csi3_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_setting csid_8974_clk_info[] = {
- {&csid0_8974_clk_info[0], ARRAY_SIZE(csid0_8974_clk_info)},
- {&csid1_8974_clk_info[0], ARRAY_SIZE(csid1_8974_clk_info)},
- {&csid2_8974_clk_info[0], ARRAY_SIZE(csid2_8974_clk_info)},
- {&csid3_8974_clk_info[0], ARRAY_SIZE(csid3_8974_clk_info)},
+ {"csi_ahb_clk", -1},
+ {"csi_src_clk", 200000000},
+ {"csi_clk", -1},
+ {"csi_phy_clk", -1},
+ {"csi_pix_clk", -1},
+ {"csi_rdi_clk", -1},
};
static struct camera_vreg_t csid_8960_vreg_info[] = {
@@ -241,7 +207,6 @@
static int msm_csid_init(struct csid_device *csid_dev, uint32_t *csid_version)
{
int rc = 0;
- uint8_t core_id = 0;
if (!csid_version) {
pr_err("%s:%d csid_version NULL\n", __func__, __LINE__);
@@ -306,26 +271,14 @@
}
rc = msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 1);
+ csid_8974_clk_info, csid_dev->csid_clk,
+ ARRAY_SIZE(csid_8974_clk_info), 1);
if (rc < 0) {
pr_err("%s: clock enable failed\n", __func__);
- goto csid0_clk_enable_failed;
- }
- core_id = csid_dev->pdev->id;
- if (core_id) {
- rc = msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[core_id].clk_info,
- csid_dev->csid_clk,
- csid_8974_clk_info[core_id].num_clk_info, 1);
- if (rc < 0) {
- pr_err("%s: clock enable failed\n",
- __func__);
- goto clk_enable_failed;
- }
+ goto clk_enable_failed;
}
}
-
+ CDBG("%s:%d called\n", __func__, __LINE__);
csid_dev->hw_version =
msm_camera_io_r(csid_dev->base + CSID_HW_VERSION_ADDR);
CDBG("%s:%d called csid_dev->hw_version %x\n", __func__, __LINE__,
@@ -341,12 +294,6 @@
return rc;
clk_enable_failed:
- if (CSID_VERSION >= CSID_VERSION_V3) {
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 0);
- }
-csid0_clk_enable_failed:
if (CSID_VERSION <= CSID_VERSION_V2) {
msm_camera_enable_vreg(&csid_dev->pdev->dev,
csid_8960_vreg_info, ARRAY_SIZE(csid_8960_vreg_info),
@@ -375,7 +322,6 @@
static int msm_csid_release(struct csid_device *csid_dev)
{
uint32_t irq;
- uint8_t core_id = 0;
if (csid_dev->csid_state != CSID_POWER_UP) {
pr_err("%s: csid invalid state %d\n", __func__,
@@ -401,16 +347,8 @@
csid_8960_vreg_info, ARRAY_SIZE(csid_8960_vreg_info),
NULL, 0, &csid_dev->csi_vdd, 0);
} else if (csid_dev->hw_version >= CSID_VERSION_V3) {
- core_id = csid_dev->pdev->id;
- if (core_id)
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[core_id].clk_info,
- csid_dev->csid_clk,
- csid_8974_clk_info[core_id].num_clk_info, 0);
-
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 0);
+ msm_cam_clk_enable(&csid_dev->pdev->dev, csid_8974_clk_info,
+ csid_dev->csid_clk, ARRAY_SIZE(csid_8974_clk_info), 0);
msm_camera_enable_vreg(&csid_dev->pdev->dev,
csid_vreg_info, ARRAY_SIZE(csid_vreg_info),
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
index 7ae1392..fd4db79 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
@@ -38,7 +38,6 @@
uint32_t hw_version;
enum msm_csid_state_t csid_state;
- struct clk *csid0_clk[11];
struct clk *csid_clk[11];
};
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h b/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
index e5093f8..ba964a2 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
@@ -14,7 +14,7 @@
#define MSM_CSIPHY_HWREG_H
/*MIPI CSI PHY registers*/
-#define MIPI_CSIPHY_HW_VERSION_ADDR 0x180
+#define MIPI_CSIPHY_HW_VERSION_ADDR 0x17C
#define MIPI_CSIPHY_LNn_CFG1_ADDR 0x0
#define MIPI_CSIPHY_LNn_CFG2_ADDR 0x4
#define MIPI_CSIPHY_LNn_CFG3_ADDR 0x8
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
index df3ee60..7d3a1fc 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
@@ -43,14 +43,15 @@
uint8_t lane_cnt = 0;
uint16_t lane_mask = 0;
void __iomem *csiphybase;
+ uint8_t csiphy_id = csiphy_dev->pdev->id;
csiphybase = csiphy_dev->base;
if (!csiphybase) {
pr_err("%s: csiphybase NULL\n", __func__);
return -EINVAL;
}
- csiphy_dev->lane_mask[csiphy_dev->pdev->id] |= csiphy_params->lane_mask;
- lane_mask = csiphy_dev->lane_mask[csiphy_dev->pdev->id];
+ csiphy_dev->lane_mask[csiphy_id] |= csiphy_params->lane_mask;
+ lane_mask = csiphy_dev->lane_mask[csiphy_id];
lane_cnt = csiphy_params->lane_cnt;
if (csiphy_params->lane_cnt < 1 || csiphy_params->lane_cnt > 4) {
pr_err("%s: unsupported lane cnt %d\n",
@@ -58,11 +59,28 @@
return rc;
}
- CDBG("%s csiphy_params, mask = %x, cnt = %d, settle cnt = %x\n",
+ CDBG("%s csiphy_params, mask = %x cnt = %d settle cnt = %x csid %d\n",
__func__,
csiphy_params->lane_mask,
csiphy_params->lane_cnt,
- csiphy_params->settle_cnt);
+ csiphy_params->settle_cnt,
+ csiphy_params->csid_core);
+
+ if (csiphy_dev->hw_version >= CSIPHY_VERSION_V3) {
+ val = msm_camera_io_r(csiphy_dev->clk_mux_base);
+ if (csiphy_params->combo_mode &&
+ (csiphy_params->lane_mask & 0x18)) {
+ val &= ~0xf0;
+ val |= csiphy_params->csid_core << 4;
+ } else {
+ val &= ~0xf;
+ val |= csiphy_params->csid_core;
+ }
+ msm_camera_io_w(val, csiphy_dev->clk_mux_base);
+ CDBG("%s clk mux addr %p val 0x%x\n", __func__,
+ csiphy_dev->clk_mux_base, val);
+ mb();
+ }
msm_camera_io_w(0x1, csiphybase + MIPI_CSIPHY_GLBL_T_INIT_CFG0_ADDR);
msm_camera_io_w(0x1, csiphybase + MIPI_CSIPHY_T_WAKEUP_CFG0_ADDR);
@@ -204,6 +222,22 @@
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 1);
} else {
+ if (!csiphy_dev->clk_mux_mem || !csiphy_dev->clk_mux_io) {
+ pr_err("%s clk mux mem %p io %p\n", __func__,
+ csiphy_dev->clk_mux_mem,
+ csiphy_dev->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+ csiphy_dev->clk_mux_base = ioremap(
+ csiphy_dev->clk_mux_mem->start,
+ resource_size(csiphy_dev->clk_mux_mem));
+ if (!csiphy_dev->clk_mux_base) {
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ rc = -ENOMEM;
+ return rc;
+ }
+
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
@@ -269,12 +303,31 @@
}
CDBG("%s:%d called\n", __func__, __LINE__);
+
if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 1);
} else {
+
+ if (!csiphy_dev->clk_mux_mem || !csiphy_dev->clk_mux_io) {
+ pr_err("%s clk mux mem %p io %p\n", __func__,
+ csiphy_dev->clk_mux_mem,
+ csiphy_dev->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ csiphy_dev->clk_mux_base = ioremap(
+ csiphy_dev->clk_mux_mem->start,
+ resource_size(csiphy_dev->clk_mux_mem));
+ if (!csiphy_dev->clk_mux_base) {
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ rc = -ENOMEM;
+ return rc;
+ }
+
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
@@ -359,14 +412,16 @@
disable_irq(csiphy_dev->irq->start);
- if (CSIPHY_VERSION < CSIPHY_VERSION_V3)
+ if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 0);
- else
+ } else {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8974_clk_info), 0);
+ iounmap(csiphy_dev->clk_mux_base);
+ }
iounmap(csiphy_dev->base);
csiphy_dev->base = NULL;
@@ -426,20 +481,23 @@
msm_camera_io_w(0x0, csiphy_dev->base + MIPI_CSIPHY_LNCK_CFG2_ADDR);
msm_camera_io_w(0x0, csiphy_dev->base + MIPI_CSIPHY_GLBL_PWR_CFG_ADDR);
- if (CSIPHY_VERSION < CSIPHY_VERSION_V3)
+ if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 0);
- else
+ } else {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8974_clk_info), 0);
+ iounmap(csiphy_dev->clk_mux_base);
+ }
iounmap(csiphy_dev->base);
csiphy_dev->base = NULL;
csiphy_dev->csiphy_state = CSIPHY_POWER_DOWN;
return 0;
}
+
#endif
static long msm_csiphy_cmd(struct csiphy_device *csiphy_dev, void *arg)
@@ -588,6 +646,17 @@
}
disable_irq(new_csiphy_dev->irq->start);
+ new_csiphy_dev->clk_mux_mem = platform_get_resource_byname(pdev,
+ IORESOURCE_MEM, "csiphy_clk_mux");
+ if (new_csiphy_dev->clk_mux_mem) {
+ new_csiphy_dev->clk_mux_io = request_mem_region(
+ new_csiphy_dev->clk_mux_mem->start,
+ resource_size(new_csiphy_dev->clk_mux_mem),
+ new_csiphy_dev->clk_mux_mem->name);
+ if (!new_csiphy_dev->clk_mux_io)
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ }
+
new_csiphy_dev->pdev = pdev;
new_csiphy_dev->msm_sd.sd.internal_ops = &msm_csiphy_internal_ops;
new_csiphy_dev->msm_sd.sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
index e19be34..a11b958 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
@@ -32,9 +32,12 @@
struct msm_sd_subdev msm_sd;
struct v4l2_subdev subdev;
struct resource *mem;
+ struct resource *clk_mux_mem;
struct resource *irq;
struct resource *io;
+ struct resource *clk_mux_io;
void __iomem *base;
+ void __iomem *clk_mux_base;
struct mutex mutex;
uint32_t hw_version;
enum msm_csiphy_state_t csiphy_state;
diff --git a/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_flash.h b/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_flash.h
index 76aa695..ac697fb 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_flash.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_flash.h
@@ -40,7 +40,7 @@
struct msm_flash_fn_t *func_tbl;
const char *led_trigger_name[MAX_LED_TRIGGERS];
struct led_trigger *led_trigger[MAX_LED_TRIGGERS];
- uint32_t max_current[MAX_LED_TRIGGERS];
+ uint32_t op_current[MAX_LED_TRIGGERS];
void *data;
};
diff --git a/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_trigger.c b/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_trigger.c
index 1a75a5a..c6f1f72 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_trigger.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/flash/msm_led_trigger.c
@@ -59,11 +59,11 @@
case MSM_CAMERA_LED_LOW:
led_trigger_event(fctrl->led_trigger[0],
- fctrl->max_current[0] / 2);
+ fctrl->op_current[0] / 2);
break;
case MSM_CAMERA_LED_HIGH:
- led_trigger_event(fctrl->led_trigger[0], fctrl->max_current[0]);
+ led_trigger_event(fctrl->led_trigger[0], fctrl->op_current[0]);
break;
case MSM_CAMERA_LED_INIT:
@@ -144,7 +144,7 @@
CDBG("default trigger %s\n", fctrl.led_trigger_name[i]);
rc = of_property_read_u32(flash_src_node,
- "qcom,max-current", &fctrl.max_current[i]);
+ "qcom,current", &fctrl.op_current[i]);
if (rc < 0) {
pr_err("failed rc %d\n", rc);
of_node_put(flash_src_node);
@@ -153,7 +153,7 @@
of_node_put(flash_src_node);
- CDBG("max_current[%d] %d\n", i, fctrl.max_current[i]);
+ CDBG("max_current[%d] %d\n", i, fctrl.op_current[i]);
led_trigger_register_simple(fctrl.led_trigger_name[i],
&fctrl.led_trigger[i]);
diff --git a/drivers/media/platform/msm/camera_v2/sensor/io/msm_camera_io_util.c b/drivers/media/platform/msm/camera_v2/sensor/io/msm_camera_io_util.c
index 7dbbc03..e30e798 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/io/msm_camera_io_util.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/io/msm_camera_io_util.c
@@ -122,7 +122,7 @@
rc = PTR_ERR(clk_ptr[i]);
goto cam_clk_get_err;
}
- if (clk_info[i].clk_rate >= 0) {
+ if (clk_info[i].clk_rate > 0) {
rc = clk_set_rate(clk_ptr[i],
clk_info[i].clk_rate);
if (rc < 0) {
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
index 431d35d..b56378a 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
@@ -25,6 +25,9 @@
/* Length of mandatory fields that must exist in header of video PES */
#define PES_MANDATORY_FIELDS_LEN 9
+/* Index of first byte in TS packet holding STC */
+#define STC_LOCATION_IDX 188
+
#define MAX_PES_LENGTH (SZ_64K)
#define MAX_TS_PACKETS_FOR_SDMX_PROCESS (500)
@@ -1085,7 +1088,7 @@
feed_data->buffer_desc.desc[0].read_ptr = 0;
feed_data->buffer_desc.desc[0].write_ptr = 0;
feed_data->buffer_desc.desc[0].handle =
- ion_share_dma_buf(client, temp_handle);
+ ion_share_dma_buf_fd(client, temp_handle);
if (IS_ERR_VALUE(feed_data->buffer_desc.desc[0].handle)) {
MPQ_DVB_ERR_PRINT(
"%s: FAILED to share payload buffer %d\n",
@@ -2202,6 +2205,14 @@
size_t len = 0;
struct dmx_pts_dts_info *pts_dts;
+ if (meta_data->packet_type == DMX_PES_PACKET) {
+ pts_dts = &meta_data->info.pes.pts_dts_info;
+ data->buf.stc = meta_data->info.pes.stc;
+ } else {
+ pts_dts = &meta_data->info.framing.pts_dts_info;
+ data->buf.stc = meta_data->info.framing.stc;
+ }
+
pts_dts = meta_data->packet_type == DMX_PES_PACKET ?
&meta_data->info.pes.pts_dts_info :
&meta_data->info.framing.pts_dts_info;
@@ -2313,6 +2324,7 @@
packet.raw_data_offset = feed_data->frame_offset;
meta_data.info.framing.pattern_type =
feed_data->last_framing_match_type;
+ meta_data.info.framing.stc = feed_data->last_framing_match_stc;
mpq_streambuffer_get_buffer_handle(stream_buffer,
0, /* current write buffer handle */
@@ -2385,6 +2397,7 @@
mpq_dmx_save_pts_dts(feed_data);
meta_data.packet_type = DMX_PES_PACKET;
+ meta_data.info.pes.stc = feed_data->prev_stc;
mpq_dmx_update_decoder_stat(mpq_demux);
@@ -2411,7 +2424,8 @@
static int mpq_dmx_process_video_packet_framing(
struct dvb_demux_feed *feed,
- const u8 *buf)
+ const u8 *buf,
+ u64 curr_stc)
{
int bytes_avail;
u32 ts_payload_offset;
@@ -2596,7 +2610,8 @@
* pass data to decoder only after sequence header
* or equivalent is found. Otherwise the data is dropped.
*/
- if (!(feed_data->found_sequence_header_pattern)) {
+ if (!feed_data->found_sequence_header_pattern) {
+ feed_data->prev_stc = curr_stc;
spin_unlock(&feed_data->video_buffer_lock);
return 0;
}
@@ -2699,6 +2714,12 @@
framing_res.info[i].type;
feed_data->last_pattern_offset =
framing_res.info[i].offset;
+ if (framing_res.info[i].used_prefix_size)
+ feed_data->last_framing_match_stc =
+ feed_data->prev_stc;
+ else
+ feed_data->last_framing_match_stc =
+ curr_stc;
continue;
}
/*
@@ -2744,6 +2765,8 @@
packet.raw_data_offset = feed_data->frame_offset;
meta_data.info.framing.pattern_type =
feed_data->last_framing_match_type;
+ meta_data.info.framing.stc =
+ feed_data->last_framing_match_stc;
mpq_streambuffer_get_buffer_handle(
stream_buffer,
@@ -2784,8 +2807,13 @@
framing_res.info[i].type;
feed_data->last_pattern_offset =
framing_res.info[i].offset;
+ if (framing_res.info[i].used_prefix_size)
+ feed_data->last_framing_match_stc = feed_data->prev_stc;
+ else
+ feed_data->last_framing_match_stc = curr_stc;
}
+ feed_data->prev_stc = curr_stc;
feed_data->first_prefix_size = 0;
if (pending_data_len) {
@@ -2810,7 +2838,8 @@
static int mpq_dmx_process_video_packet_no_framing(
struct dvb_demux_feed *feed,
- const u8 *buf)
+ const u8 *buf,
+ u64 curr_stc)
{
int bytes_avail;
u32 ts_payload_offset;
@@ -2886,6 +2915,7 @@
mpq_dmx_save_pts_dts(feed_data);
meta_data.packet_type = DMX_PES_PACKET;
+ meta_data.info.pes.stc = feed_data->prev_stc;
mpq_dmx_update_decoder_stat(mpq_demux);
@@ -2928,6 +2958,8 @@
} else {
feed->pusi_seen = 1;
}
+
+ feed_data->prev_stc = curr_stc;
}
/*
@@ -3077,10 +3109,25 @@
struct dvb_demux_feed *feed,
const u8 *buf)
{
+ u64 curr_stc;
+ struct mpq_demux *mpq_demux = feed->demux->priv;
+
+ if ((mpq_demux->source >= DMX_SOURCE_DVR0) &&
+ (mpq_demux->demux.tsp_format != DMX_TSP_FORMAT_192_TAIL)) {
+ curr_stc = 0;
+ } else {
+ curr_stc = buf[STC_LOCATION_IDX + 2] << 16;
+ curr_stc += buf[STC_LOCATION_IDX + 1] << 8;
+ curr_stc += buf[STC_LOCATION_IDX];
+ curr_stc *= 256; /* convert from 105.47 KHZ to 27MHz */
+ }
+
if (mpq_dmx_info.decoder_framing)
- return mpq_dmx_process_video_packet_no_framing(feed, buf);
+ return mpq_dmx_process_video_packet_no_framing(feed, buf,
+ curr_stc);
else
- return mpq_dmx_process_video_packet_framing(feed, buf);
+ return mpq_dmx_process_video_packet_framing(feed, buf,
+ curr_stc);
}
EXPORT_SYMBOL(mpq_dmx_process_video_packet);
@@ -3151,9 +3198,9 @@
(mpq_demux->demux.tsp_format != DMX_TSP_FORMAT_192_TAIL)) {
stc = 0;
} else {
- stc = buf[190] << 16;
- stc += buf[189] << 8;
- stc += buf[188];
+ stc = buf[STC_LOCATION_IDX + 2] << 16;
+ stc += buf[STC_LOCATION_IDX + 1] << 8;
+ stc += buf[STC_LOCATION_IDX];
stc *= 256; /* convert from 105.47 KHZ to 27MHz */
}
@@ -4190,6 +4237,9 @@
pes_header = (struct pes_packet_header *)
&metadata_buf[pes_header_offset];
meta_data.packet_type = DMX_PES_PACKET;
+ /* TODO - set to real STC when SDMX supports it */
+ meta_data.info.pes.stc = 0;
+
if (pes_header->pts_dts_flag & 0x2) {
meta_data.info.pes.pts_dts_info.pts_exist = 1;
meta_data.info.pes.pts_dts_info.pts =
@@ -4544,7 +4594,7 @@
}
MPQ_DVB_DBG_PRINT(
- "\n\n%s: Before SDMX_process: input read_offset=%u, fill count=%u\n",
+ "%s: Before SDMX_process: input read_offset=%u, fill count=%u\n",
__func__, read_offset, fill_count);
process_start_time = current_kernel_time();
@@ -4587,14 +4637,19 @@
int mpq_sdmx_process(struct mpq_demux *mpq_demux,
struct sdmx_buff_descr *input,
u32 fill_count,
- u32 read_offset)
+ u32 read_offset,
+ size_t tsp_size)
{
int ret;
int todo;
int total_bytes_read = 0;
- int limit = mpq_sdmx_proc_limit * mpq_demux->demux.ts_packet_size;
+ int limit = mpq_sdmx_proc_limit * tsp_size;
- while (fill_count >= mpq_demux->demux.ts_packet_size) {
+ MPQ_DVB_DBG_PRINT(
+ "\n\n%s: read_offset=%u, fill_count=%u, tsp_size=%u\n",
+ __func__, read_offset, fill_count, tsp_size);
+
+ while (fill_count >= tsp_size) {
todo = fill_count > limit ? limit : fill_count;
ret = mpq_sdmx_process_buffer(mpq_demux, input, todo,
read_offset);
@@ -4643,7 +4698,8 @@
}
read_offset = mpq_demux->demux.dmx.dvr_input.ringbuff->pread;
- return mpq_sdmx_process(mpq_demux, &buf_desc, count, read_offset);
+ return mpq_sdmx_process(mpq_demux, &buf_desc, count,
+ read_offset, mpq_demux->demux.ts_packet_size);
}
int mpq_dmx_write(struct dmx_demux *demux, const char *buf, size_t count)
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
index 80b5428..ca7c15a 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
@@ -244,6 +244,8 @@
* reported for this frame.
* @last_framing_match_type: Used for saving the type of
* the previous pattern match found in this video feed.
+ * @last_framing_match_stc: Used for saving the STC attached to the TS packet
+ * of the previous pattern match found in this video feed.
* @found_sequence_header_pattern: Flag used to note that an MPEG-2
* Sequence Header, H.264 SPS or VC-1 Sequence Header pattern
* (whichever is relevant according to the video standard) had already
@@ -272,6 +274,7 @@
* buffer space.
* @last_pkt_index: used to save the last streambuffer packet index reported in
* a new elementary stream data event.
+ * @prev_stc: STC attached to the previous video TS packet
*/
struct mpq_video_feed_info {
struct mpq_streambuffer *video_buffer;
@@ -289,6 +292,7 @@
u32 last_pattern_offset;
u32 pending_pattern_len;
u64 last_framing_match_type;
+ u64 last_framing_match_stc;
int found_sequence_header_pattern;
struct dvb_dmx_video_prefix_size_masks prefix_size;
u32 first_prefix_size;
@@ -303,6 +307,7 @@
u32 ts_packets_num;
u32 ts_dropped_bytes;
int last_pkt_index;
+ u64 prev_stc;
};
/**
@@ -716,13 +721,15 @@
* @input: input buffer descriptor
* @fill_count: number of data bytes in input buffer that can be read
* @read_offset: offset in buffer for reading
+ * @tsp_size: size of a single TS packet
*
* Return number of bytes read or error code
*/
int mpq_sdmx_process(struct mpq_demux *mpq_demux,
struct sdmx_buff_descr *input,
u32 fill_count,
- u32 read_offset);
+ u32 read_offset,
+ size_t tsp_size);
/**
* mpq_sdmx_loaded - Returns 1 if secure demux application is loaded,
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tsif.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tsif.c
index 8855e85..29369de 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tsif.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tsif.c
@@ -703,7 +703,8 @@
mpq_demux->dmxdev.capabilities =
DMXDEV_CAP_DUPLEX |
DMXDEV_CAP_PULL_MODE |
- DMXDEV_CAP_INDEXING;
+ DMXDEV_CAP_INDEXING |
+ DMXDEV_CAP_TS_INSERTION;
mpq_demux->dmxdev.demux->set_source = mpq_dmx_set_source;
mpq_demux->dmxdev.demux->get_stc = mpq_tsif_dmx_get_stc;
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
index 193141a..5e14d0c 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
@@ -382,7 +382,8 @@
buff_current_addr_phys - buff_start_addr_phys);
mpq_sdmx_process(mpq_demux, &input, aggregate_len,
- buff_current_addr_phys - buff_start_addr_phys);
+ buff_current_addr_phys - buff_start_addr_phys,
+ TSPP_RAW_TTS_SIZE);
}
for (i = 0; i < aggregate_count; i++)
@@ -1758,7 +1759,8 @@
mpq_demux->dmxdev.capabilities =
DMXDEV_CAP_DUPLEX |
DMXDEV_CAP_PULL_MODE |
- DMXDEV_CAP_INDEXING;
+ DMXDEV_CAP_INDEXING |
+ DMXDEV_CAP_TS_INSERTION;
mpq_demux->dmxdev.demux->set_source = mpq_dmx_set_source;
mpq_demux->dmxdev.demux->get_stc = mpq_tspp_dmx_get_stc;
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v2.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v2.c
index c306488..60e3cb4 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v2.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v2.c
@@ -143,7 +143,8 @@
mpq_demux->dmxdev.capabilities =
DMXDEV_CAP_DUPLEX |
DMXDEV_CAP_PULL_MODE |
- DMXDEV_CAP_INDEXING;
+ DMXDEV_CAP_INDEXING |
+ DMXDEV_CAP_TS_INSERTION;
mpq_demux->dmxdev.demux->set_source = mpq_dmx_set_source;
mpq_demux->dmxdev.demux->get_caps = mpq_tspp_dmx_get_caps;
diff --git a/drivers/media/platform/msm/dvb/include/mpq_adapter.h b/drivers/media/platform/msm/dvb/include/mpq_adapter.h
index 19abbbe..a2ade18 100644
--- a/drivers/media/platform/msm/dvb/include/mpq_adapter.h
+++ b/drivers/media/platform/msm/dvb/include/mpq_adapter.h
@@ -64,11 +64,17 @@
/** PTS/DTS information */
struct dmx_pts_dts_info pts_dts_info;
+
+ /** STC value attached to first TS packet holding the pattern */
+ u64 stc;
};
struct dmx_pes_packet_info {
/** PTS/DTS information */
struct dmx_pts_dts_info pts_dts_info;
+
+ /** STC value attached to first TS packet holding the PES */
+ u64 stc;
};
struct dmx_marker_info {
diff --git a/drivers/media/platform/msm/vidc/hfi_packetization.c b/drivers/media/platform/msm/vidc/hfi_packetization.c
index ef3e698..7fc8810 100644
--- a/drivers/media/platform/msm/vidc/hfi_packetization.c
+++ b/drivers/media/platform/msm/vidc/hfi_packetization.c
@@ -1057,6 +1057,51 @@
pkt->size += sizeof(u32) + sizeof(struct hfi_quantization);
break;
}
+ case HAL_PARAM_VENC_SESSION_QP_RANGE:
+ {
+ struct hfi_quantization_range *hfi;
+ struct hfi_quantization_range *hal_range =
+ (struct hfi_quantization_range *) pdata;
+ u32 min_qp, max_qp;
+
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_SESSION_QP_RANGE;
+ hfi = (struct hfi_quantization_range *)
+ &pkt->rg_property_data[1];
+
+ min_qp = hal_range->min_qp;
+ max_qp = hal_range->max_qp;
+
+ /* We'll be packing in the qp, so make sure we
+ * won't be losing data when masking */
+ if (min_qp > 0xff || max_qp > 0xff) {
+ dprintk(VIDC_ERR, "qp value out of range\n");
+ rc = -ERANGE;
+ break;
+ }
+
+ /* When creating the packet, pack the qp value as
+ * 0xiippbb, where ii = qp range for I-frames,
+ * pp = qp range for P-frames, etc. */
+ hfi->min_qp = min_qp | min_qp << 8 | min_qp << 16;
+ hfi->max_qp = max_qp | max_qp << 8 | max_qp << 16;
+ hfi->layer_id = hal_range->layer_id;
+
+ pkt->size += sizeof(u32) +
+ sizeof(struct hfi_quantization_range);
+ break;
+ }
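A minimal standalone sketch of the 0xiippbb packing described in the comment above; the QP numbers are hypothetical and the byte layout is taken from the comment, not from firmware documentation:

    #include <stdint.h>
    #include <stdio.h>

    /* Replicate an 8-bit QP bound into the I-, P- and B-frame byte lanes,
     * the same way hfi->min_qp / hfi->max_qp are filled above. */
    static uint32_t pack_qp(uint32_t qp)
    {
        return qp | qp << 8 | qp << 16;  /* 0x00iippbb, all lanes equal */
    }

    int main(void)
    {
        uint32_t min_qp = 10, max_qp = 40;  /* hypothetical session range */

        if (min_qp > 0xff || max_qp > 0xff)
            return 1;  /* would lose data when masking */
        printf("min=0x%06x max=0x%06x\n", pack_qp(min_qp), pack_qp(max_qp));
        return 0;
    }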
+ case HAL_PARAM_VENC_MAX_NUM_B_FRAMES:
+ {
+ struct hfi_max_num_b_frames *hfi;
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_MAX_NUM_B_FRAMES;
+ hfi = (struct hfi_max_num_b_frames *) &pkt->rg_property_data[1];
+ memcpy(hfi, (struct hfi_max_num_b_frames *) pdata,
+ sizeof(struct hfi_max_num_b_frames));
+ pkt->size += sizeof(u32) + sizeof(struct hfi_max_num_b_frames);
+ break;
+ }
case HAL_CONFIG_VENC_INTRA_PERIOD:
{
struct hfi_intra_period *hfi;
@@ -1203,6 +1248,16 @@
}
case HAL_CONFIG_VPE_DEINTERLACE:
break;
+ case HAL_PARAM_VENC_H264_GENERATE_AUDNAL:
+ {
+ struct hfi_enable *hfi;
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_H264_GENERATE_AUDNAL;
+ hfi = (struct hfi_enable *) &pkt->rg_property_data[1];
+ hfi->enable = ((struct hal_enable *) pdata)->enable;
+ pkt->size += sizeof(u32) + sizeof(struct hfi_enable);
+ break;
+ }
/* FOLLOWING PROPERTIES ARE NOT IMPLEMENTED IN CORE YET */
case HAL_CONFIG_BUFFER_REQUIREMENTS:
case HAL_CONFIG_PRIORITY:
diff --git a/drivers/media/platform/msm/vidc/hfi_response_handler.c b/drivers/media/platform/msm/vidc/hfi_response_handler.c
index 91fb514..43a3dad 100644
--- a/drivers/media/platform/msm/vidc/hfi_response_handler.c
+++ b/drivers/media/platform/msm/vidc/hfi_response_handler.c
@@ -174,8 +174,8 @@
switch (pkt->event_id) {
case HFI_EVENT_SYS_ERROR:
- dprintk(VIDC_ERR, "HFI_EVENT_SYS_ERROR: %d\n",
- pkt->event_data1);
+ dprintk(VIDC_ERR, "HFI_EVENT_SYS_ERROR: %d, 0x%x\n",
+ pkt->event_data1, pkt->event_data2);
hfi_process_sys_error(callback, device_id);
break;
case HFI_EVENT_SESSION_ERROR:
diff --git a/drivers/media/platform/msm/vidc/msm_vdec.c b/drivers/media/platform/msm/vidc/msm_vdec.c
index f458a0a..1611a09 100644
--- a/drivers/media/platform/msm/vidc/msm_vdec.c
+++ b/drivers/media/platform/msm/vidc/msm_vdec.c
@@ -1143,6 +1143,11 @@
goto exit;
}
rc = msm_comm_try_state(inst, MSM_VIDC_CLOSE_DONE);
+ /* Clients rely on this event for joining poll thread.
+ * This event should be returned even if firmware has
+ * failed to respond */
+ dqevent.type = V4L2_EVENT_MSM_VIDC_CLOSE_DONE;
+ v4l2_event_queue_fh(&inst->event_handler, &dqevent);
break;
default:
dprintk(VIDC_ERR, "Unknown Decoder Command\n");
diff --git a/drivers/media/platform/msm/vidc/msm_venc.c b/drivers/media/platform/msm/vidc/msm_venc.c
index da97c7a..66a3b6e 100644
--- a/drivers/media/platform/msm/vidc/msm_venc.c
+++ b/drivers/media/platform/msm/vidc/msm_venc.c
@@ -20,6 +20,7 @@
#define MSM_VENC_DVC_NAME "msm_venc_8974"
#define MIN_NUM_OUTPUT_BUFFERS 4
+#define MIN_NUM_CAPTURE_BUFFERS 4
#define MIN_BIT_RATE 64000
#define MAX_BIT_RATE 160000000
#define DEFAULT_BIT_RATE 64000
@@ -35,6 +36,7 @@
#define P_FRAME_QP 28
#define B_FRAME_QP 30
#define MAX_INTRA_REFRESH_MBS 300
+#define MAX_NUM_B_FRAMES 4
#define L_MODE V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED_AT_SLICE_BOUNDARY
#define CODING V4L2_MPEG_VIDEO_MPEG4_PROFILE_ADVANCED_CODING_EFFICIENCY
@@ -419,6 +421,30 @@
.cluster = MSM_VENC_CTRL_CLUSTER_QP,
},
{
+ .id = V4L2_CID_MPEG_VIDEO_H264_MIN_QP,
+ .name = "H264 Minimum QP",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 1,
+ .maximum = 51,
+ .default_value = 1,
+ .step = 1,
+ .menu_skip_mask = 0,
+ .qmenu = NULL,
+ .cluster = MSM_VENC_CTRL_CLUSTER_QP,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_MAX_QP,
+ .name = "H264 Maximum QP",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 1,
+ .maximum = 51,
+ .default_value = 51,
+ .step = 1,
+ .menu_skip_mask = 0,
+ .qmenu = NULL,
+ .cluster = MSM_VENC_CTRL_CLUSTER_QP,
+ },
+ {
.id = V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE,
.name = "Slice Mode",
.type = V4L2_CTRL_TYPE_MENU,
@@ -641,6 +667,15 @@
V4L2_MPEG_VIDC_VIDEO_H264_VUI_TIMING_INFO_DISABLED,
.cluster = MSM_VENC_CTRL_CLUSTER_TIMING,
},
+ {
+ .id = V4L2_CID_MPEG_VIDC_VIDEO_H264_AU_DELIMITER,
+ .name = "H264 AU Delimiter",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED,
+ .maximum = V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_ENABLED,
+ .default_value =
+ V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED,
+ },
};
#define NUM_CTRLS ARRAY_SIZE(msm_venc_ctrls)
@@ -728,7 +763,7 @@
struct v4l2_ctrl *ctrl = NULL;
u32 extradata = 0;
if (!q || !q->drv_priv) {
- dprintk(VIDC_ERR, "Invalid input, q = %p\n", q);
+ dprintk(VIDC_ERR, "Invalid input\n");
return -EINVAL;
}
inst = q->drv_priv;
@@ -743,16 +778,21 @@
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
*num_planes = 1;
buff_req = get_buff_req_buffer(inst, HAL_BUFFER_OUTPUT);
- *num_buffers = buff_req->buffer_count_actual =
+ if (buff_req) {
+ *num_buffers = buff_req->buffer_count_actual =
max(*num_buffers, buff_req->buffer_count_actual);
- if (*num_buffers > VIDEO_MAX_FRAME) {
- dprintk(VIDC_ERR,
+ }
+ if (*num_buffers < MIN_NUM_CAPTURE_BUFFERS)
+ *num_buffers = MIN_NUM_CAPTURE_BUFFERS;
+
+ if (*num_buffers > VIDEO_MAX_FRAME) {
+ dprintk(VIDC_ERR,
"Failed : No of slices requested = %d"\
" Max supported slices = %d",
*num_buffers, VIDEO_MAX_FRAME);
- rc = -EINVAL;
- break;
- }
+ rc = -EINVAL;
+ break;
+ }
ctrl = v4l2_ctrl_find(&inst->ctrl_handler,
V4L2_CID_MPEG_VIDC_VIDEO_EXTRADATA);
if (ctrl)
@@ -1129,6 +1169,7 @@
struct hal_h264_db_control h264_db_control;
struct hal_enable enable;
struct hal_h264_vui_timing_info vui_timing_info;
+ struct hal_quantization_range qp_range;
u32 property_id = 0, property_val = 0;
void *pdata = NULL;
struct v4l2_ctrl *temp_ctrl = NULL;
@@ -1203,11 +1244,24 @@
break;
case V4L2_CID_MPEG_VIDC_VIDEO_NUM_B_FRAMES:
temp_ctrl = TRY_GET_CTRL(V4L2_CID_MPEG_VIDC_VIDEO_NUM_P_FRAMES);
-
- property_id =
- HAL_CONFIG_VENC_INTRA_PERIOD;
intra_period.bframes = ctrl->val;
intra_period.pframes = temp_ctrl->val;
+ if (intra_period.bframes) {
+ u32 max_num_b_frames = MAX_NUM_B_FRAMES;
+ property_id =
+ HAL_PARAM_VENC_MAX_NUM_B_FRAMES;
+ pdata = &max_num_b_frames;
+ rc = call_hfi_op(hdev, session_set_property,
+ (void *)inst->session, property_id, pdata);
+ if (rc) {
+ dprintk(VIDC_ERR,
+ "Failed : Setprop MAX_NUM_B_FRAMES"
+ "%d", rc);
+ break;
+ }
+ }
+ property_id =
+ HAL_CONFIG_VENC_INTRA_PERIOD;
pdata = &intra_period;
break;
case V4L2_CID_MPEG_VIDC_VIDEO_REQUEST_IFRAME:
@@ -1458,6 +1512,44 @@
pdata = &quantization;
break;
}
+ case V4L2_CID_MPEG_VIDEO_H264_MIN_QP: {
+ struct v4l2_ctrl *qp_max;
+
+ qp_max = TRY_GET_CTRL(V4L2_CID_MPEG_VIDEO_H264_MAX_QP);
+ if (ctrl->val >= qp_max->val) {
+ dprintk(VIDC_ERR, "Bad range: Min QP (%d) > Max QP(%d)",
+ ctrl->val, qp_max->val);
+ rc = -ERANGE;
+ break;
+ }
+
+ property_id = HAL_PARAM_VENC_SESSION_QP_RANGE;
+ qp_range.layer_id = 0;
+ qp_range.max_qp = qp_max->val;
+ qp_range.min_qp = ctrl->val;
+
+ pdata = &qp_range;
+ break;
+ }
+ case V4L2_CID_MPEG_VIDEO_H264_MAX_QP: {
+ struct v4l2_ctrl *qp_min;
+
+ qp_min = TRY_GET_CTRL(V4L2_CID_MPEG_VIDEO_H264_MIN_QP);
+ if (ctrl->val <= qp_min->val) {
+ dprintk(VIDC_ERR, "Bad range: Max QP (%d) < Min QP(%d)",
+ ctrl->val, qp_min->val);
+ rc = -ERANGE;
+ break;
+ }
+
+ property_id = HAL_PARAM_VENC_SESSION_QP_RANGE;
+ qp_range.layer_id = 0;
+ qp_range.max_qp = ctrl->val;
+ qp_range.min_qp = qp_min->val;
+
+ pdata = &qp_range;
+ break;
+ }
case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE: {
int temp = 0;
@@ -1680,6 +1772,23 @@
pdata = &vui_timing_info;
break;
}
+ case V4L2_CID_MPEG_VIDC_VIDEO_H264_AU_DELIMITER:
+ property_id = HAL_PARAM_VENC_H264_GENERATE_AUDNAL;
+
+ switch (ctrl->val) {
+ case V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED:
+ enable.enable = 0;
+ break;
+ case V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_ENABLED:
+ enable.enable = 1;
+ break;
+ default:
+ rc = -ENOTSUPP;
+ break;
+ }
+
+ pdata = &enable;
+ break;
default:
rc = -ENOTSUPP;
break;
@@ -1713,10 +1822,13 @@
for (c = 0; c < ctrl->ncontrols; ++c) {
if (ctrl->cluster[c]->is_new) {
- rc = try_set_ctrl(inst, ctrl->cluster[c]);
+ struct v4l2_ctrl *temp = ctrl->cluster[c];
+
+ rc = try_set_ctrl(inst, temp);
if (rc) {
- dprintk(VIDC_ERR, "Failed setting %x",
- ctrl->cluster[c]->id);
+ dprintk(VIDC_ERR, "Failed setting %s (%x)",
+ v4l2_ctrl_get_name(temp->id),
+ temp->id);
break;
}
}
@@ -1793,6 +1905,11 @@
dprintk(VIDC_ERR, "Failed to release persist buf:%d\n",
rc);
rc = msm_comm_try_state(inst, MSM_VIDC_CLOSE_DONE);
+ /* Clients rely on this event for joining poll thread.
+ * This event should be returned even if firmware has
+ * failed to respond */
+ dqevent.type = V4L2_EVENT_MSM_VIDC_CLOSE_DONE;
+ v4l2_event_queue_fh(&inst->event_handler, &dqevent);
break;
}
if (rc)
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_common.c b/drivers/media/platform/msm/vidc/msm_vidc_common.c
index dc08c64..eb5a03c 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_common.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_common.c
@@ -22,7 +22,7 @@
#include "msm_smem.h"
#include "msm_vidc_debug.h"
-#define HW_RESPONSE_TIMEOUT 200
+#define HW_RESPONSE_TIMEOUT 1000
#define IS_ALREADY_IN_STATE(__p, __d) ({\
int __rc = (__p >= __d);\
@@ -582,7 +582,6 @@
if (response) {
inst = (struct msm_vidc_inst *)response->session_id;
signal_session_msg_receipt(cmd, inst);
- queue_v4l2_event(inst, V4L2_EVENT_MSM_VIDC_CLOSE_DONE);
inst->session = NULL;
show_stats(inst);
} else {
@@ -730,6 +729,8 @@
vb->v4l2_buf.flags |= V4L2_BUF_FLAG_KEYFRAME;
if (fill_buf_done->flags1 & HAL_BUFFERFLAG_EOSEQ)
vb->v4l2_buf.flags |= V4L2_QCOM_BUF_FLAG_EOSEQ;
+ if (fill_buf_done->flags1 & HAL_BUFFERFLAG_DECODEONLY)
+ vb->v4l2_buf.flags |= V4L2_QCOM_BUF_FLAG_DECODEONLY;
switch (fill_buf_done->picture_type) {
case HAL_PICTURE_IDR:
case HAL_PICTURE_I:
@@ -1282,7 +1283,6 @@
}
hdev = inst->core->device;
-
if (IS_ALREADY_IN_STATE(flipped_state, MSM_VIDC_LOAD_RESOURCES)) {
dprintk(VIDC_INFO, "inst: %p is already in state: %d\n",
inst, inst->state);
@@ -1668,6 +1668,7 @@
core->state == VIDC_CORE_INVALID) {
dprintk(VIDC_ERR,
"Core is in bad state can't change the state");
+ rc = -EINVAL;
goto exit;
}
flipped_state = get_flipped_state(inst->state, state);
@@ -1901,8 +1902,14 @@
dprintk(VIDC_ERR, "%s invalid parameters", __func__);
return -EINVAL;
}
- hdev = inst->core->device;
+ if (inst->state == MSM_VIDC_CORE_INVALID ||
+ inst->core->state == VIDC_CORE_INVALID) {
+ dprintk(VIDC_ERR,
+ "Core is in bad state can't query get_bufreqs()");
+ return -EINVAL;
+ }
+ hdev = inst->core->device;
mutex_lock(&inst->sync_lock);
if (inst->state < MSM_VIDC_OPEN_DONE || inst->state >= MSM_VIDC_CLOSE) {
dprintk(VIDC_ERR,
@@ -2388,6 +2395,7 @@
if (inst->capability.capability_set) {
if (msm_vp8_low_tier &&
+ inst->core->hfi_type == VIDC_HFI_VENUS &&
inst->fmts[OUTPUT_PORT]->fourcc == V4L2_PIX_FMT_VP8) {
capability->width.max = DEFAULT_WIDTH;
capability->height.max = DEFAULT_HEIGHT;
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_debug.c b/drivers/media/platform/msm/vidc/msm_vidc_debug.c
index 3208df9..ae1e9b7 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_debug.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_debug.c
@@ -198,6 +198,7 @@
write_str(&dbg_buf, "core: 0x%p\n", inst->core);
write_str(&dbg_buf, "height: %d\n", inst->prop.height);
write_str(&dbg_buf, "width: %d\n", inst->prop.width);
+ write_str(&dbg_buf, "fps: %d\n", inst->prop.fps);
write_str(&dbg_buf, "state: %d\n", inst->state);
write_str(&dbg_buf, "-----------Formats-------------\n");
for (i = 0; i < MAX_PORT_NUM; i++) {
diff --git a/drivers/media/platform/msm/vidc/venus_hfi.c b/drivers/media/platform/msm/vidc/venus_hfi.c
index 8031c74..b96a6dc 100644
--- a/drivers/media/platform/msm/vidc/venus_hfi.c
+++ b/drivers/media/platform/msm/vidc/venus_hfi.c
@@ -219,6 +219,9 @@
}
queue->qhdr_write_idx = new_write_idx;
*rx_req_is_set = (1 == queue->qhdr_rx_req) ? 1 : 0;
+ /* Memory barrier to make sure data is written before an
+ * interrupt is raised on venus. */
+ mb();
dprintk(VIDC_DBG, "Out : ");
return 0;
}
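The barrier above orders the queue payload and write index ahead of the interrupt to the firmware. A minimal standalone sketch of that producer-side ordering, with C11 atomics standing in for mb() and a stub in place of the Venus doorbell:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define Q_WORDS 64

    struct shared_q {
        uint32_t data[Q_WORDS];
        _Atomic uint32_t write_idx;  /* producer-owned, in words */
    };

    static void noop_doorbell(void) { }

    /* Publish the packet, then the index, then ring the doorbell. */
    static void q_write(struct shared_q *q, const uint32_t *pkt, uint32_t words,
                        void (*ring_doorbell)(void))
    {
        uint32_t w = atomic_load_explicit(&q->write_idx, memory_order_relaxed);

        memcpy(&q->data[w], pkt, words * sizeof(*pkt));  /* assume no wrap */

        /* Release + fence play the role of mb(): the payload is visible
         * before the new write index, which is visible before the doorbell. */
        atomic_store_explicit(&q->write_idx, w + words, memory_order_release);
        atomic_thread_fence(memory_order_seq_cst);
        ring_doorbell();  /* placeholder for the interrupt raised to Venus */
    }

    int main(void)
    {
        static struct shared_q q;
        uint32_t pkt[4] = { 1, 2, 3, 4 };

        q_write(&q, pkt, 4, noop_doorbell);
        return 0;
    }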
@@ -282,6 +285,7 @@
u32 packet_size_in_words, new_read_idx;
u32 *read_ptr;
struct vidc_iface_q_info *qinfo;
+ int rc = 0;
if (!info || !packet || !pb_tx_req_is_set) {
dprintk(VIDC_ERR, "Invalid Params");
@@ -293,6 +297,9 @@
dprintk(VIDC_WARN, "Queues have already been freed\n");
return -EINVAL;
}
+ /* Memory barrier to make sure data is valid before
+ * reading it. */
+ mb();
queue = (struct hfi_queue_header *) qinfo->q_hdr;
if (!queue) {
@@ -317,17 +324,27 @@
new_read_idx = queue->qhdr_read_idx + packet_size_in_words;
dprintk(VIDC_DBG, "Read Ptr: %d", (u32) new_read_idx);
- if (new_read_idx < queue->qhdr_q_size) {
- memcpy(packet, read_ptr,
- packet_size_in_words << 2);
- } else {
- new_read_idx -= queue->qhdr_q_size;
- memcpy(packet, read_ptr,
+ if (((packet_size_in_words << 2) <= VIDC_IFACEQ_MED_PKT_SIZE)
+ && queue->qhdr_read_idx <= queue->qhdr_q_size) {
+ if (new_read_idx < queue->qhdr_q_size) {
+ memcpy(packet, read_ptr,
+ packet_size_in_words << 2);
+ } else {
+ new_read_idx -= queue->qhdr_q_size;
+ memcpy(packet, read_ptr,
(packet_size_in_words - new_read_idx) << 2);
- memcpy(packet + ((packet_size_in_words -
- new_read_idx) << 2),
- (u8 *)qinfo->q_array.align_virtual_addr,
- new_read_idx << 2);
+ memcpy(packet + ((packet_size_in_words -
+ new_read_idx) << 2),
+ (u8 *)qinfo->q_array.align_virtual_addr,
+ new_read_idx << 2);
+ }
+ } else {
+ dprintk(VIDC_WARN,
+ "BAD packet received, read_idx: 0x%x, pkt_size: %d\n",
+ queue->qhdr_read_idx, packet_size_in_words << 2);
+ dprintk(VIDC_WARN, "Dropping this packet\n");
+ new_read_idx = queue->qhdr_write_idx;
+ rc = -ENODATA;
}
queue->qhdr_read_idx = new_read_idx;
@@ -340,7 +357,7 @@
*pb_tx_req_is_set = (1 == queue->qhdr_tx_req) ? 1 : 0;
venus_hfi_hal_sim_modify_msg_packet(packet);
dprintk(VIDC_DBG, "Out : ");
- return 0;
+ return rc;
}
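A hedged standalone sketch of the defensive read added above: check the packet size against a sane maximum and the read index against the queue size before copying, and on a bogus header drop everything pending by jumping the read index to the write index. Sizes and layout here are illustrative only:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_PKT_BYTES 4096  /* stand-in for VIDC_IFACEQ_MED_PKT_SIZE */

    struct ring {
        uint8_t *buf;
        uint32_t size;       /* bytes */
        uint32_t read_idx;   /* bytes */
        uint32_t write_idx;  /* bytes */
    };

    /* Returns bytes copied, 0 if empty, or -1 when the packet header is
     * bogus (in which case the pending data is discarded, as above). */
    static int ring_read_packet(struct ring *r, uint8_t *out, uint32_t out_len)
    {
        uint32_t pkt_bytes;

        if (r->read_idx == r->write_idx)
            return 0;
        memcpy(&pkt_bytes, &r->buf[r->read_idx], sizeof(pkt_bytes));

        if (pkt_bytes > MAX_PKT_BYTES || pkt_bytes > out_len ||
            r->read_idx > r->size) {
            fprintf(stderr, "bad packet, read_idx=%u pkt_size=%u\n",
                    r->read_idx, pkt_bytes);
            r->read_idx = r->write_idx;  /* drop everything pending */
            return -1;
        }

        /* Copy, handling wrap-around of the circular buffer. */
        if (r->read_idx + pkt_bytes <= r->size) {
            memcpy(out, &r->buf[r->read_idx], pkt_bytes);
        } else {
            uint32_t first = r->size - r->read_idx;

            memcpy(out, &r->buf[r->read_idx], first);
            memcpy(out + first, r->buf, pkt_bytes - first);
        }
        r->read_idx = (r->read_idx + pkt_bytes) % r->size;
        return (int)pkt_bytes;
    }

    int main(void)
    {
        uint8_t storage[64] = { 0 }, out[64];
        struct ring r = { storage, sizeof(storage), 0, 8 };
        uint32_t hdr = 8;  /* first packet claims to be 8 bytes long */

        memcpy(storage, &hdr, sizeof(hdr));
        printf("read %d bytes\n", ring_read_packet(&r, out, sizeof(out)));
        return 0;
    }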
static int venus_hfi_alloc(void *mem, void *clnt, u32 size, u32 align,
@@ -547,7 +564,7 @@
q_info = &device->iface_queues[VIDC_IFACEQ_CMDQ_IDX];
if (!q_info) {
dprintk(VIDC_ERR, "cannot write to shared Q's");
- goto err_q_write;
+ goto err_q_null;
}
mutex_lock(&device->clock_lock);
result = venus_hfi_clk_gating_off(device);
@@ -572,8 +589,9 @@
dprintk(VIDC_ERR, "venus_hfi_iface_cmdq_write:queue_full");
}
err_q_write:
- mutex_unlock(&device->write_lock);
mutex_unlock(&device->clock_lock);
+err_q_null:
+ mutex_unlock(&device->write_lock);
return result;
}
@@ -592,7 +610,7 @@
q_array.align_virtual_addr == 0) {
dprintk(VIDC_ERR, "cannot read from shared MSG Q's");
rc = -ENODATA;
- goto read_error;
+ goto read_error_null;
}
q_info = &device->iface_queues[VIDC_IFACEQ_MSGQ_IDX];
mutex_lock(&device->clock_lock);
@@ -614,8 +632,9 @@
rc = -ENODATA;
}
read_error:
- mutex_unlock(&device->read_lock);
mutex_unlock(&device->clock_lock);
+read_error_null:
+ mutex_unlock(&device->read_lock);
return rc;
}
@@ -634,7 +653,7 @@
q_array.align_virtual_addr == 0) {
dprintk(VIDC_ERR, "cannot read from shared DBG Q's");
rc = -ENODATA;
- goto dbg_error;
+ goto dbg_error_null;
}
mutex_lock(&device->clock_lock);
rc = venus_hfi_clk_gating_off(device);
@@ -656,8 +675,9 @@
rc = -ENODATA;
}
dbg_error:
- mutex_unlock(&device->read_lock);
mutex_unlock(&device->clock_lock);
+dbg_error_null:
+ mutex_unlock(&device->read_lock);
return rc;
}
@@ -1069,9 +1089,8 @@
if (!(dev->intr_status & VIDC_WRAPPER_INTR_STATUS_A2HWD_BMSK))
disable_irq_nosync(dev->hal_data->irq);
dev->intr_status = 0;
- venus_hfi_interface_queues_release(dev);
+ mutex_unlock(&dev->clock_lock);
}
- mutex_unlock(&dev->clock_lock);
dprintk(VIDC_INFO, "HAL exited\n");
return 0;
}
@@ -1867,21 +1886,46 @@
if (((ctrl_status & VIDC_CPU_CS_SCIACMDARG0_HFI_CTRL_INIT_IDLE_MSG_BMSK)
!= 0) && !rc)
venus_hfi_clk_gating_on(device);
- mutex_unlock(&device->write_lock);
mutex_unlock(&device->clock_lock);
+ mutex_unlock(&device->write_lock);
return rc;
}
+static void venus_hfi_process_msg_event_notify(
+ struct venus_hfi_device *device, void *packet)
+{
+ struct hfi_sfr_struct *vsfr = NULL;
+ struct hfi_msg_event_notify_packet *event_pkt;
+ struct vidc_hal_msg_pkt_hdr *msg_hdr;
+ msg_hdr = (struct vidc_hal_msg_pkt_hdr *)packet;
+ event_pkt =
+ (struct hfi_msg_event_notify_packet *)msg_hdr;
+ if (event_pkt && event_pkt->event_id ==
+ HFI_EVENT_SYS_ERROR) {
+ vsfr = (struct hfi_sfr_struct *)
+ device->sfr.align_virtual_addr;
+ if (vsfr)
+ dprintk(VIDC_ERR, "SFR Message from FW : %s",
+ vsfr->rg_data);
+ }
+}
static void venus_hfi_response_handler(struct venus_hfi_device *device)
{
u8 packet[VIDC_IFACEQ_MED_PKT_SIZE];
u32 rc = 0;
+ struct hfi_sfr_struct *vsfr = NULL;
dprintk(VIDC_INFO, "#####venus_hfi_response_handler#####\n");
if (device) {
if ((device->intr_status &
VIDC_WRAPPER_INTR_CLEAR_A2HWD_BMSK)) {
dprintk(VIDC_ERR, "Received: Watchdog timeout %s",
__func__);
+ vsfr = (struct hfi_sfr_struct *)
+ device->sfr.align_virtual_addr;
+ if (vsfr)
+ dprintk(VIDC_ERR,
+ "SFR Message from FW : %s",
+ vsfr->rg_data);
venus_hfi_process_sys_watchdog_timeout(device);
}
@@ -1889,6 +1933,9 @@
rc = hfi_process_msg_packet(device->callback,
device->device_id,
(struct vidc_hal_msg_pkt_hdr *) packet);
+ if (rc == HFI_MSG_EVENT_NOTIFY)
+ venus_hfi_process_msg_event_notify(
+ device, (void *)packet);
}
while (!venus_hfi_iface_dbgq_read(device, packet)) {
struct hfi_msg_sys_debug_packet *pkt =
@@ -2749,9 +2796,11 @@
return;
}
if (device->resources.fw.cookie) {
+ flush_workqueue(device->vidc_workq);
venus_hfi_disable_clks(device);
- venus_hfi_iommu_detach(device);
subsystem_put(device->resources.fw.cookie);
+ venus_hfi_interface_queues_release(device);
+ venus_hfi_iommu_detach(device);
device->resources.fw.cookie = NULL;
}
}
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi.h b/drivers/media/platform/msm/vidc/vidc_hfi.h
index 075b391..1311752 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi.h
@@ -113,8 +113,6 @@
#define HFI_PROPERTY_SYS_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + 0x0000)
-#define HFI_PROPERTY_SYS_IDLE_INDICATOR \
- (HFI_PROPERTY_SYS_OX_START + 0x001)
#define HFI_PROPERTY_PARAM_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + 0x1000)
@@ -333,7 +331,6 @@
#define HFI_MSG_SYS_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + HFI_MSG_START_OFFSET + 0x0000)
-#define HFI_MSG_SYS_IDLE (HFI_MSG_SYS_OX_START + 0x1)
#define HFI_MSG_SYS_PING_ACK (HFI_MSG_SYS_OX_START + 0x2)
#define HFI_MSG_SYS_PROPERTY_INFO (HFI_MSG_SYS_OX_START + 0x3)
#define HFI_MSG_SYS_SESSION_ABORT_DONE (HFI_MSG_SYS_OX_START + 0x4)
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi_api.h b/drivers/media/platform/msm/vidc/vidc_hfi_api.h
index 3729c3a..3fbfec4 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi_api.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi_api.h
@@ -130,6 +130,7 @@
HAL_PARAM_VENC_H264_DEBLOCK_CONTROL,
HAL_PARAM_VENC_TEMPORAL_SPATIAL_TRADEOFF,
HAL_PARAM_VENC_SESSION_QP,
+ HAL_PARAM_VENC_SESSION_QP_RANGE,
HAL_CONFIG_VENC_INTRA_PERIOD,
HAL_CONFIG_VENC_IDR_PERIOD,
HAL_CONFIG_VPE_OPERATIONS,
@@ -168,6 +169,8 @@
HAL_PARAM_VENC_H264_ENTROPY_CABAC_MODEL,
HAL_CONFIG_VENC_MAX_BITRATE,
HAL_PARAM_VENC_H264_VUI_TIMING_INFO,
+ HAL_PARAM_VENC_H264_GENERATE_AUDNAL,
+ HAL_PARAM_VENC_MAX_NUM_B_FRAMES,
};
enum hal_domain {
@@ -595,6 +598,12 @@
u32 layer_id;
};
+struct hal_quantization_range {
+ u32 min_qp;
+ u32 max_qp;
+ u32 layer_id;
+};
+
struct hal_intra_period {
u32 pframes;
u32 bframes;
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi_helper.h b/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
index 2d0c3bd..6234dba 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
@@ -195,6 +195,8 @@
(HFI_PROPERTY_SYS_COMMON_START + 0x002)
#define HFI_PROPERTY_SYS_CONFIG_VCODEC_CLKFREQ \
(HFI_PROPERTY_SYS_COMMON_START + 0x003)
+#define HFI_PROPERTY_SYS_IDLE_INDICATOR \
+ (HFI_PROPERTY_SYS_COMMON_START + 0x004)
#define HFI_PROPERTY_PARAM_COMMON_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_COMMON_OFFSET + 0x1000)
@@ -304,7 +306,8 @@
(HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x01E)
#define HFI_PROPERTY_PARAM_VENC_VC1_PERF_CFG \
(HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x01F)
-
+#define HFI_PROPERTY_PARAM_VENC_MAX_NUM_B_FRAMES \
+ (HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x020)
#define HFI_PROPERTY_CONFIG_VENC_COMMON_START \
(HFI_DOMAIN_BASE_VENC + HFI_ARCH_COMMON_OFFSET + 0x6000)
#define HFI_PROPERTY_CONFIG_VENC_TARGET_BITRATE \
@@ -424,6 +427,10 @@
u32 idr_period;
};
+struct hfi_max_num_b_frames {
+ u32 max_num_b_frames;
+};
+
struct hfi_intra_period {
u32 pframes;
u32 bframes;
@@ -692,6 +699,7 @@
#define HFI_MSG_SYS_DEBUG (HFI_MSG_SYS_COMMON_START + 0x4)
#define HFI_MSG_SYS_SESSION_INIT_DONE (HFI_MSG_SYS_COMMON_START + 0x6)
#define HFI_MSG_SYS_SESSION_END_DONE (HFI_MSG_SYS_COMMON_START + 0x7)
+#define HFI_MSG_SYS_IDLE (HFI_MSG_SYS_COMMON_START + 0x8)
#define HFI_MSG_SESSION_COMMON_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_COMMON_OFFSET + \
diff --git a/drivers/media/platform/msm/wfd/enc-mfc-subdev.c b/drivers/media/platform/msm/wfd/enc-mfc-subdev.c
index ceb0149..26e24da 100644
--- a/drivers/media/platform/msm/wfd/enc-mfc-subdev.c
+++ b/drivers/media/platform/msm/wfd/enc-mfc-subdev.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -20,6 +20,7 @@
#include <media/msm/vcd_property.h>
#include <linux/time.h>
#include <linux/ktime.h>
+#include <linux/slab.h>
#define VID_ENC_MAX_ENCODER_CLIENTS 1
#define MAX_NUM_CTRLS 20
diff --git a/drivers/media/platform/msm/wfd/enc-venus-subdev.c b/drivers/media/platform/msm/wfd/enc-venus-subdev.c
index 4f7fb44..722aa8f 100644
--- a/drivers/media/platform/msm/wfd/enc-venus-subdev.c
+++ b/drivers/media/platform/msm/wfd/enc-venus-subdev.c
@@ -18,6 +18,7 @@
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/wait.h>
+#include <linux/slab.h>
#include <mach/iommu_domains.h>
#include <media/msm_vidc.h>
#include <media/v4l2-subdev.h>
diff --git a/drivers/media/platform/msm/wfd/mdp-4-subdev.c b/drivers/media/platform/msm/wfd/mdp-4-subdev.c
index d2ecd22..2d79622 100644
--- a/drivers/media/platform/msm/wfd/mdp-4-subdev.c
+++ b/drivers/media/platform/msm/wfd/mdp-4-subdev.c
@@ -11,7 +11,7 @@
*
*/
#include <linux/msm_mdp.h>
-#include <linux/switch.h>
+#include <linux/slab.h>
#include <mach/iommu_domains.h>
#include <media/videobuf2-core.h>
#include "enc-subdev.h"
@@ -24,7 +24,6 @@
u32 width;
bool secure;
bool uses_iommu_split_domain;
- struct switch_dev sdev;
};
int mdp_init(struct v4l2_subdev *sd, u32 val)
@@ -56,13 +55,7 @@
rc = -ENODEV;
goto mdp_open_fail;
}
- inst->sdev.name = "wfd";
- /* Register wfd node to switch driver */
- rc = switch_dev_register(&inst->sdev);
- if (rc) {
- WFD_MSG_ERR("WFD switch registration failed\n");
- goto mdp_open_fail;
- }
+
msm_fb_writeback_init(fbi);
inst->mdp = fbi;
inst->secure = mops->secure;
@@ -92,8 +85,6 @@
rc = -ENODEV;
goto exit;
}
- switch_set_state(&inst->sdev, true);
- WFD_MSG_DBG("wfd state switched to %d\n", inst->sdev.state);
}
exit:
return rc;
@@ -110,8 +101,6 @@
return rc;
}
fbi = (struct fb_info *)inst->mdp;
- switch_set_state(&inst->sdev, false);
- WFD_MSG_DBG("wfd state switched to %d\n", inst->sdev.state);
}
return 0;
}
@@ -123,8 +112,6 @@
fbi = (struct fb_info *)inst->mdp;
msm_fb_writeback_terminate(fbi);
kfree(inst);
- /* Unregister wfd node from switch driver */
- switch_dev_unregister(&inst->sdev);
}
return 0;
}
diff --git a/drivers/media/platform/msm/wfd/mdp-5-subdev.c b/drivers/media/platform/msm/wfd/mdp-5-subdev.c
index 55386b9..6399117 100644
--- a/drivers/media/platform/msm/wfd/mdp-5-subdev.c
+++ b/drivers/media/platform/msm/wfd/mdp-5-subdev.c
@@ -11,7 +11,7 @@
*
*/
#include <linux/msm_mdp.h>
-#include <linux/switch.h>
+#include <linux/slab.h>
#include <mach/iommu_domains.h>
#include <media/videobuf2-core.h>
#include "enc-subdev.h"
@@ -23,7 +23,6 @@
u32 height;
u32 width;
bool secure;
- struct switch_dev sdev;
};
static int mdp_secure(struct v4l2_subdev *sd, void *arg);
@@ -57,13 +56,6 @@
rc = -ENODEV;
goto mdp_open_fail;
}
- inst->sdev.name = "wfd";
- /* Register wfd node to switch driver */
- rc = switch_dev_register(&inst->sdev);
- if (rc) {
- WFD_MSG_ERR("WFD switch registration failed\n");
- goto mdp_open_fail;
- }
msm_fb_writeback_init(fbi);
@@ -81,7 +73,6 @@
mops->cookie = inst;
return 0;
mdp_secure_fail:
- switch_dev_unregister(&inst->sdev);
msm_fb_writeback_terminate(inst->mdp);
mdp_open_fail:
kfree(inst);
@@ -105,8 +96,6 @@
rc = -ENODEV;
goto exit;
}
- switch_set_state(&inst->sdev, true);
- WFD_MSG_DBG("wfd state switched to %d\n", inst->sdev.state);
}
exit:
return rc;
@@ -123,9 +112,8 @@
WFD_MSG_ERR("Failed to stop writeback mode\n");
return rc;
}
+
fbi = (struct fb_info *)inst->mdp;
- switch_set_state(&inst->sdev, false);
- WFD_MSG_DBG("wfd state switched to %d\n", inst->sdev.state);
}
return 0;
}
@@ -139,8 +127,6 @@
if (inst->secure)
msm_fb_writeback_set_secure(inst->mdp, false);
msm_fb_writeback_terminate(fbi);
- /* Unregister wfd node from switch driver */
- switch_dev_unregister(&inst->sdev);
kfree(inst);
}
return 0;
diff --git a/drivers/media/platform/msm/wfd/mdp-dummy-subdev.c b/drivers/media/platform/msm/wfd/mdp-dummy-subdev.c
index 2242c76..10ba3c3 100644
--- a/drivers/media/platform/msm/wfd/mdp-dummy-subdev.c
+++ b/drivers/media/platform/msm/wfd/mdp-dummy-subdev.c
@@ -12,6 +12,7 @@
*/
#include <linux/list.h>
#include <linux/msm_mdp.h>
+#include <linux/slab.h>
#include <media/videobuf2-core.h>
#include "enc-subdev.h"
diff --git a/drivers/media/platform/msm/wfd/vsg-subdev.c b/drivers/media/platform/msm/wfd/vsg-subdev.c
index 90b1957..e0a46cc 100644
--- a/drivers/media/platform/msm/wfd/vsg-subdev.c
+++ b/drivers/media/platform/msm/wfd/vsg-subdev.c
@@ -13,6 +13,7 @@
#include <linux/hrtimer.h>
#include <linux/time.h>
#include <linux/list.h>
+#include <linux/slab.h>
#include <media/videobuf2-core.h>
#include "enc-subdev.h"
#include "vsg-subdev.h"
diff --git a/drivers/media/platform/msm/wfd/wfd-ioctl.c b/drivers/media/platform/msm/wfd/wfd-ioctl.c
index af3cd69..d44792c 100644
--- a/drivers/media/platform/msm/wfd/wfd-ioctl.c
+++ b/drivers/media/platform/msm/wfd/wfd-ioctl.c
@@ -14,14 +14,14 @@
#include <linux/types.h>
#include <linux/list.h>
#include <linux/ioctl.h>
-#include <linux/spinlock.h>
+#include <linux/mutex.h>
#include <linux/init.h>
#include <linux/version.h>
#include <linux/platform_device.h>
-
#include <linux/sched.h>
#include <linux/kthread.h>
#include <linux/time.h>
+#include <linux/slab.h>
#include <mach/board.h>
#include <media/v4l2-dev.h>
@@ -78,7 +78,8 @@
struct wfd_inst {
struct vb2_queue vid_bufq;
- spinlock_t inst_lock;
+ struct mutex lock;
+ struct mutex vb2_lock;
u32 buf_count;
struct task_struct *mdp_task;
void *mdp_inst;
@@ -114,7 +115,6 @@
{
struct file *priv_data = (struct file *)(q->drv_priv);
struct wfd_inst *inst = file_to_inst(priv_data);
- unsigned long flags;
int i;
WFD_MSG_DBG("In %s\n", __func__);
@@ -122,12 +122,12 @@
return -EINVAL;
*num_planes = 1;
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
for (i = 0; i < *num_planes; ++i) {
sizes[i] = inst->out_buf_size;
alloc_ctxs[i] = inst;
}
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return 0;
}
@@ -257,16 +257,15 @@
struct mem_region *enc_mregion, *mdp_mregion;
struct mem_region_pair *mpair;
int rc;
- unsigned long flags;
struct mdp_buf_info mdp_buf = {0};
struct mem_region_map mmap_context = {0};
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
if (inst->input_bufs_allocated) {
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return 0;
}
inst->input_bufs_allocated = true;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
for (i = 0; i < VENC_INPUT_BUFFERS; ++i) {
mpair = kzalloc(sizeof(*mpair), GFP_KERNEL);
@@ -409,15 +408,14 @@
{
struct list_head *ptr, *next;
struct mem_region_pair *mpair;
- unsigned long flags;
int rc = 0;
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
if (!inst->input_bufs_allocated) {
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return;
}
inst->input_bufs_allocated = false;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
if (!list_empty(&inst->input_mem_list)) {
list_for_each_safe(ptr, next,
&inst->input_mem_list) {
@@ -470,8 +468,7 @@
{
struct mem_info_entry *temp;
struct mem_info *ret = NULL;
- unsigned long flags;
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
if (!list_empty(&inst->minfo_list)) {
list_for_each_entry(temp, &inst->minfo_list, list) {
if (temp && temp->userptr == userptr) {
@@ -480,7 +477,7 @@
}
}
}
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return ret;
}
@@ -489,8 +486,7 @@
{
struct list_head *ptr, *next;
struct mem_info_entry *temp;
- unsigned long flags;
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
if (!list_empty(&inst->minfo_list)) {
list_for_each_safe(ptr, next,
&inst->minfo_list) {
@@ -502,7 +498,7 @@
}
}
}
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
}
static void wfd_unregister_out_buf(struct wfd_inst *inst,
struct mem_info *minfo)
@@ -806,7 +802,6 @@
struct v4l2_format *fmt)
{
struct wfd_inst *inst = file_to_inst(filp);
- unsigned long flags;
if (!fmt) {
WFD_MSG_ERR("Invalid argument\n");
return -EINVAL;
@@ -815,13 +810,13 @@
WFD_MSG_ERR("Only V4L2_BUF_TYPE_VIDEO_CAPTURE is supported\n");
return -EINVAL;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
fmt->fmt.pix.width = inst->width;
fmt->fmt.pix.height = inst->height;
fmt->fmt.pix.pixelformat = inst->pixelformat;
fmt->fmt.pix.sizeimage = inst->out_buf_size;
fmt->fmt.pix.priv = 0;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return 0;
}
@@ -832,7 +827,6 @@
struct wfd_inst *inst = file_to_inst(filp);
struct wfd_device *wfd_dev = video_drvdata(filp);
struct mdp_prop prop;
- unsigned long flags;
struct bufreq breq;
if (!fmt) {
WFD_MSG_ERR("Invalid argument\n");
@@ -864,13 +858,13 @@
WFD_MSG_ERR("Failed to set buffer reqs on encoder\n");
return rc;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
inst->input_buf_size = breq.size;
inst->out_buf_size = fmt->fmt.pix.sizeimage;
prop.height = inst->height = fmt->fmt.pix.height;
prop.width = inst->width = fmt->fmt.pix.width;
prop.inst = inst->mdp_inst;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
rc = v4l2_subdev_call(&wfd_dev->mdp_sdev, core, ioctl, MDP_SET_PROP,
(void *)&prop);
if (rc)
@@ -882,7 +876,6 @@
{
struct wfd_inst *inst = file_to_inst(filp);
struct wfd_device *wfd_dev = video_drvdata(filp);
- unsigned long flags;
int rc = 0;
if (b->type != V4L2_CAP_VIDEO_CAPTURE ||
@@ -897,9 +890,9 @@
WFD_MSG_ERR("Failed to get buf reqs from encoder\n");
return rc;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
inst->buf_count = b->count;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
rc = vb2_reqbufs(&inst->vid_bufq, b);
return rc;
}
@@ -908,7 +901,6 @@
{
struct mem_info_entry *minfo_entry;
struct mem_info *minfo;
- unsigned long flags;
if (!b || !inst || !b->reserved) {
WFD_MSG_ERR("Invalid arguments\n");
return -EINVAL;
@@ -924,9 +916,9 @@
return -EINVAL;
}
minfo_entry->userptr = b->m.userptr;
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
list_add_tail(&minfo_entry->list, &inst->minfo_list);
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
} else
WFD_MSG_DBG("Buffer already registered\n");
@@ -948,7 +940,10 @@
return rc;
}
+ mutex_lock(&inst->vb2_lock);
rc = vb2_qbuf(&inst->vid_bufq, b);
+ mutex_unlock(&inst->vb2_lock);
+
if (rc)
WFD_MSG_ERR("Failed to queue buffer\n");
else
@@ -962,16 +957,15 @@
{
int rc = 0;
struct wfd_inst *inst = file_to_inst(filp);
- unsigned long flags;
if (i != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
WFD_MSG_ERR("stream on for buffer type = %d is not "
"supported.\n", i);
return -EINVAL;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
inst->streamoff = false;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
rc = vb2_streamon(&inst->vid_bufq, i);
if (rc) {
@@ -988,7 +982,6 @@
enum v4l2_buf_type i)
{
struct wfd_inst *inst = file_to_inst(filp);
- unsigned long flags;
if (i != V4L2_BUF_TYPE_VIDEO_CAPTURE) {
WFD_MSG_ERR("stream off for buffer type = %d is not "
@@ -996,16 +989,17 @@
return -EINVAL;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
if (inst->streamoff) {
WFD_MSG_ERR("Module is already in streamoff state\n");
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
return -EINVAL;
}
inst->streamoff = true;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
WFD_MSG_DBG("Calling videobuf_streamoff\n");
vb2_streamoff(&inst->vid_bufq, i);
+ wake_up(&inst->event_handler.wait);
return 0;
}
static int wfdioc_dqbuf(struct file *filp, void *fh,
@@ -1015,7 +1009,10 @@
int rc;
WFD_MSG_DBG("Waiting to dequeue buffer\n");
- rc = vb2_dqbuf(&inst->vid_bufq, b, 0);
+
+ /* XXX: If we switch to non-blocking mode in the future,
+ * we'll need to lock this with vb2_lock */
+ rc = vb2_dqbuf(&inst->vid_bufq, b, false /* blocking */);
if (rc)
WFD_MSG_ERR("Failed to dequeue buffer\n");
@@ -1232,7 +1229,6 @@
};
static int wfd_set_default_properties(struct file *filp)
{
- unsigned long flags;
struct v4l2_format fmt;
struct v4l2_control ctrl;
struct wfd_inst *inst = file_to_inst(filp);
@@ -1240,13 +1236,13 @@
WFD_MSG_ERR("Invalid argument\n");
return -EINVAL;
}
- spin_lock_irqsave(&inst->inst_lock, flags);
+ mutex_lock(&inst->lock);
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.height = inst->height = DEFAULT_WFD_HEIGHT;
fmt.fmt.pix.width = inst->width = DEFAULT_WFD_WIDTH;
fmt.fmt.pix.pixelformat = inst->pixelformat
= V4L2_PIX_FMT_H264;
- spin_unlock_irqrestore(&inst->inst_lock, flags);
+ mutex_unlock(&inst->lock);
wfdioc_s_fmt(filp, filp->private_data, &fmt);
ctrl.id = V4L2_CID_MPEG_VIDEO_HEADER_MODE;
@@ -1257,8 +1253,13 @@
static void venc_op_buffer_done(void *cookie, u32 status,
struct vb2_buffer *buf)
{
+ struct file *filp = cookie;
+ struct wfd_inst *inst = file_to_inst(filp);
+
WFD_MSG_DBG("yay!! got callback\n");
+ mutex_lock(&inst->vb2_lock);
vb2_buffer_done(buf, VB2_BUF_STATE_DONE);
+ mutex_unlock(&inst->vb2_lock);
}
static void venc_ip_buffer_done(void *cookie, u32 status,
@@ -1429,7 +1430,8 @@
goto err_mdp_open;
}
filp->private_data = &inst->event_handler;
- spin_lock_init(&inst->inst_lock);
+ mutex_init(&inst->lock);
+ mutex_init(&inst->vb2_lock);
INIT_LIST_HEAD(&inst->input_mem_list);
INIT_LIST_HEAD(&inst->minfo_list);
@@ -1530,6 +1532,8 @@
wfd_stats_deinit(&inst->stats);
v4l2_fh_del(&inst->event_handler);
+ mutex_destroy(&inst->lock);
+ mutex_destroy(&inst->vb2_lock);
kfree(inst);
}
@@ -1545,12 +1549,19 @@
unsigned int wfd_poll(struct file *filp, struct poll_table_struct *pt)
{
struct wfd_inst *inst = file_to_inst(filp);
- unsigned int flags = 0;
+ unsigned long flags = 0;
+ bool streamoff = false;
poll_wait(filp, &inst->event_handler.wait, pt);
+ mutex_lock(&inst->lock);
+ streamoff = inst->streamoff;
+ mutex_unlock(&inst->lock);
+
if (v4l2_event_pending(&inst->event_handler))
flags |= POLLPRI;
+ if (streamoff)
+ flags |= POLLERR;
return flags;
}
diff --git a/drivers/media/radio/radio-iris.c b/drivers/media/radio/radio-iris.c
index 153552d..d673713 100644
--- a/drivers/media/radio/radio-iris.c
+++ b/drivers/media/radio/radio-iris.c
@@ -3790,6 +3790,27 @@
if (retval < 0)
FMDERR("set CF0 Threshold failed\n");
break;
+ case V4L2_CID_PRIVATE_RXREPEATCOUNT:
+ rd.mode = RDS_PS0_XFR_MODE;
+ rd.length = RDS_PS0_LEN;
+ rd.param_len = 0;
+ rd.param = 0;
+
+ retval = hci_def_data_read(&rd, radio->fm_hdev);
+ if (retval < 0) {
+ FMDERR("default data read failed for PS0 %x", retval);
+ return retval;
+ }
+ wrd.mode = RDS_PS0_XFR_MODE;
+ wrd.length = RDS_PS0_LEN;
+ memcpy(&wrd.data, &radio->default_data.data,
+ radio->default_data.ret_data_len);
+ wrd.data[RX_REPEATE_BYTE_OFFSET] = ctrl->value;
+
+ retval = hci_def_data_write(&wrd, radio->fm_hdev);
+ if (retval < 0)
+ FMDERR("set RxRePeat count failed\n");
+ break;
default:
retval = -EINVAL;
}
diff --git a/drivers/mfd/wcd9xxx-core.c b/drivers/mfd/wcd9xxx-core.c
index e011d8f..046faac 100644
--- a/drivers/mfd/wcd9xxx-core.c
+++ b/drivers/mfd/wcd9xxx-core.c
@@ -34,8 +34,6 @@
#define SLIMBUS_PRESENT_TIMEOUT 100
#define MAX_WCD9XXX_DEVICE 4
-#define TABLA_I2C_MODE 0x03
-#define SITAR_I2C_MODE 0x01
#define CODEC_DT_MAX_PROP_SIZE 40
#define WCD9XXX_I2C_GSBI_SLAVE_ID "3-000d"
#define WCD9XXX_I2C_TOP_SLAVE_ADDR 0x0d
@@ -59,18 +57,9 @@
int mod_id;
};
-static char *taiko_supplies[] = {
- WCD9XXX_SUPPLY_BUCK_NAME, "cdc-vdd-tx-h", "cdc-vdd-rx-h", "cdc-vddpx-1",
- "cdc-vdd-a-1p2v", "cdc-vddcx-1", "cdc-vddcx-2",
-};
-
-static char *tapan_supplies[] = {
- WCD9XXX_SUPPLY_BUCK_NAME, "cdc-vdd-h", "cdc-vdd-px",
- "cdc-vdd-a-1p2v", "cdc-vdd-cx"
-};
-
static int wcd9xxx_dt_parse_vreg_info(struct device *dev,
- struct wcd9xxx_regulator *vreg, const char *vreg_name);
+ struct wcd9xxx_regulator *vreg,
+ const char *vreg_name, bool ondemand);
static int wcd9xxx_dt_parse_micbias_info(struct device *dev,
struct wcd9xxx_micbias_setting *micbias);
static struct wcd9xxx_pdata *wcd9xxx_populate_dt_pdata(struct device *dev);
@@ -292,49 +281,60 @@
},
};
-static struct wcd9xx_codec_type {
- u8 byte[4];
- struct mfd_cell *dev;
- int size;
- int num_irqs;
- int version; /* -1 to retrive version from chip version register */
- enum wcd9xxx_slim_slave_addr_type slim_slave_type;
-} wcd9xxx_codecs[] = {
+
+enum wcd9xxx_chipid_major {
+ TABLA_MAJOR = cpu_to_le16(0x100),
+ SITAR_MAJOR = cpu_to_le16(0x101),
+ TAIKO_MAJOR = cpu_to_le16(0x102),
+ TAPAN_MAJOR = cpu_to_le16(0x103),
+};
+
+static const struct wcd9xxx_codec_type wcd9xxx_codecs[] = {
{
- {0x2, 0x0, 0x0, 0x1}, tabla_devs, ARRAY_SIZE(tabla_devs),
- TABLA_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA
+ TABLA_MAJOR, cpu_to_le16(0x1), tabla1x_devs,
+ ARRAY_SIZE(tabla1x_devs), TABLA_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA, 0x03,
},
{
- {0x1, 0x0, 0x0, 0x1}, tabla1x_devs, ARRAY_SIZE(tabla1x_devs),
- TABLA_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA
- },
- { /* wcd9320 version 1 */
- {0x0, 0x0, 0x2, 0x1}, taiko_devs, ARRAY_SIZE(taiko_devs),
- TAIKO_NUM_IRQS, 1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO
- },
- { /* wcd9320 version 2 */
- {0x1, 0x0, 0x2, 0x1}, taiko_devs, ARRAY_SIZE(taiko_devs),
- TAIKO_NUM_IRQS, 2, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO
+ TABLA_MAJOR, cpu_to_le16(0x2), tabla_devs,
+ ARRAY_SIZE(tabla_devs), TABLA_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA, 0x03
},
{
- {0x0, 0x0, 0x3, 0x1}, tapan_devs, ARRAY_SIZE(tapan_devs),
- TAPAN_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO
+ /* Sitar version 1 has the same major chip id as Tabla */
+ TABLA_MAJOR, cpu_to_le16(0x0), sitar_devs,
+ ARRAY_SIZE(sitar_devs), SITAR_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA, 0x01
},
{
- {0x1, 0x0, 0x3, 0x1}, tapan_devs, ARRAY_SIZE(tapan_devs),
- TAPAN_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO
+ SITAR_MAJOR, cpu_to_le16(0x1), sitar_devs,
+ ARRAY_SIZE(sitar_devs), SITAR_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA, 0x01
},
{
- {0x0, 0x0, 0x0, 0x1}, sitar_devs, ARRAY_SIZE(sitar_devs),
- SITAR_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA
+ SITAR_MAJOR, cpu_to_le16(0x2), sitar_devs,
+ ARRAY_SIZE(sitar_devs), SITAR_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA, 0x01
},
{
- {0x1, 0x0, 0x1, 0x1}, sitar_devs, ARRAY_SIZE(sitar_devs),
- SITAR_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA
+ TAIKO_MAJOR, cpu_to_le16(0x0), taiko_devs,
+ ARRAY_SIZE(taiko_devs), TAIKO_NUM_IRQS, 1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO, 0x01
},
{
- {0x2, 0x0, 0x1, 0x1}, sitar_devs, ARRAY_SIZE(sitar_devs),
- SITAR_NUM_IRQS, -1, WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA
+ TAIKO_MAJOR, cpu_to_le16(0x1), taiko_devs,
+ ARRAY_SIZE(taiko_devs), TAIKO_NUM_IRQS, 2,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO, 0x01
+ },
+ {
+ TAPAN_MAJOR, cpu_to_le16(0x0), tapan_devs,
+ ARRAY_SIZE(tapan_devs), TAPAN_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO, 0x03
+ },
+ {
+ TAPAN_MAJOR, cpu_to_le16(0x1), tapan_devs,
+ ARRAY_SIZE(tapan_devs), TAPAN_NUM_IRQS, -1,
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TAIKO, 0x03
},
};
@@ -384,65 +384,80 @@
wcd9xxx->reset_gpio = 0;
}
}
-static int wcd9xxx_check_codec_type(struct wcd9xxx *wcd9xxx,
- struct mfd_cell **wcd9xxx_dev,
- int *wcd9xxx_dev_size,
- int *wcd9xxx_dev_num_irqs)
+
+static const struct wcd9xxx_codec_type
+*wcd9xxx_check_codec_type(struct wcd9xxx *wcd9xxx, u8 *version)
{
- int i;
- int ret;
- i = WCD9XXX_A_CHIP_ID_BYTE_0;
- while (i <= WCD9XXX_A_CHIP_ID_BYTE_3) {
- ret = wcd9xxx_reg_read(wcd9xxx, i);
- if (ret < 0)
- goto exit;
- wcd9xxx->idbyte[i-WCD9XXX_A_CHIP_ID_BYTE_0] = (u8)ret;
- pr_debug("%s: wcd9xx read = %x, byte = %x\n", __func__, ret,
- i);
- i++;
+ int i, rc;
+ const struct wcd9xxx_codec_type *c, *d = NULL;
+
+ rc = wcd9xxx_bulk_read(wcd9xxx, WCD9XXX_A_CHIP_ID_BYTE_0,
+ sizeof(wcd9xxx->id_minor),
+ (u8 *)&wcd9xxx->id_minor);
+ if (rc < 0)
+ goto exit;
+
+ rc = wcd9xxx_bulk_read(wcd9xxx, WCD9XXX_A_CHIP_ID_BYTE_2,
+ sizeof(wcd9xxx->id_major),
+ (u8 *)&wcd9xxx->id_major);
+ if (rc < 0)
+ goto exit;
+ dev_dbg(wcd9xxx->dev, "%s: wcd9xxx chip id major 0x%x, minor 0x%x\n",
+ __func__, wcd9xxx->id_major, wcd9xxx->id_minor);
+
+ for (i = 0, c = &wcd9xxx_codecs[0]; i < ARRAY_SIZE(wcd9xxx_codecs);
+ i++, c++) {
+ if (c->id_major == wcd9xxx->id_major) {
+ if (c->id_minor == wcd9xxx->id_minor) {
+ d = c;
+ dev_dbg(wcd9xxx->dev,
+ "%s: exact match %s\n", __func__,
+ d->dev->name);
+ break;
+ } else if (!d) {
+ d = c;
+ } else {
+ if ((d->id_minor < c->id_minor) ||
+ (d->id_minor == c->id_minor &&
+ d->version < c->version))
+ d = c;
+ }
+ dev_dbg(wcd9xxx->dev,
+ "%s: best match %s, major 0x%x, minor 0x%x\n",
+ __func__, d->dev->name, d->id_major,
+ d->id_minor);
+ }
}
- /* Read codec version */
- ret = wcd9xxx_reg_read(wcd9xxx, WCD9XXX_A_CHIP_VERSION);
- if (ret < 0)
- goto exit;
- wcd9xxx->version = (u8)ret & 0x1F;
- i = 0;
- while (i < ARRAY_SIZE(wcd9xxx_codecs)) {
- if ((wcd9xxx_codecs[i].byte[0] == wcd9xxx->idbyte[0]) &&
- (wcd9xxx_codecs[i].byte[1] == wcd9xxx->idbyte[1]) &&
- (wcd9xxx_codecs[i].byte[2] == wcd9xxx->idbyte[2]) &&
- (wcd9xxx_codecs[i].byte[3] == wcd9xxx->idbyte[3])) {
- pr_info("%s: codec is %s", __func__,
- wcd9xxx_codecs[i].dev->name);
- *wcd9xxx_dev = wcd9xxx_codecs[i].dev;
- *wcd9xxx_dev_size = wcd9xxx_codecs[i].size;
- *wcd9xxx_dev_num_irqs = wcd9xxx_codecs[i].num_irqs;
- wcd9xxx->slim_slave_type =
- wcd9xxx_codecs[i].slim_slave_type;
- if (wcd9xxx_codecs[i].version > -1)
- wcd9xxx->version = wcd9xxx_codecs[i].version;
- break;
+ if (!d) {
+ dev_warn(wcd9xxx->dev,
+ "%s: driver for id major 0x%x, minor 0x%x not found\n",
+ __func__, wcd9xxx->id_major, wcd9xxx->id_minor);
+ } else {
+ if (d->version > -1) {
+ *version = d->version;
+ } else {
+ rc = wcd9xxx_reg_read(wcd9xxx, WCD9XXX_A_CHIP_VERSION);
+ if (rc < 0) {
+ d = NULL;
+ goto exit;
+ }
+ *version = (u8)rc & 0x1F;
}
- i++;
+ dev_info(wcd9xxx->dev,
+ "%s: detected %s, major 0x%x, minor 0x%x, ver 0x%x\n",
+ __func__, d->dev->name, d->id_major, d->id_minor,
+ *version);
}
- if (*wcd9xxx_dev == NULL || *wcd9xxx_dev_size == 0)
- ret = -ENODEV;
- pr_info("%s: Read codec idbytes & version\n"
- "byte_0[%08x] byte_1[%08x] byte_2[%08x]\n"
- " byte_3[%08x] version = %x\n", __func__,
- wcd9xxx->idbyte[0], wcd9xxx->idbyte[1],
- wcd9xxx->idbyte[2], wcd9xxx->idbyte[3],
- wcd9xxx->version);
exit:
- return ret;
+ return d;
}
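A hedged standalone sketch of the lookup strategy above: prefer an exact major/minor match, otherwise fall back to the highest-versioned entry with the same major id. The table contents are made up; only the selection logic mirrors wcd9xxx_check_codec_type():

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct codec_type {
        uint16_t id_major;
        uint16_t id_minor;
        int version;        /* -1 means "read version from the chip" */
        const char *name;
    };

    static const struct codec_type codecs[] = {
        { 0x102, 0x0,  1, "taiko-v1" },
        { 0x102, 0x1,  2, "taiko-v2" },
        { 0x103, 0x0, -1, "tapan"    },
    };

    static const struct codec_type *find_codec(uint16_t major, uint16_t minor)
    {
        const struct codec_type *best = NULL;
        size_t i;

        for (i = 0; i < sizeof(codecs) / sizeof(codecs[0]); i++) {
            const struct codec_type *c = &codecs[i];

            if (c->id_major != major)
                continue;
            if (c->id_minor == minor)
                return c;  /* exact match wins */
            if (!best || best->id_minor < c->id_minor ||
                (best->id_minor == c->id_minor && best->version < c->version))
                best = c;  /* otherwise keep the best candidate so far */
        }
        return best;
    }

    int main(void)
    {
        const struct codec_type *c = find_codec(0x102, 0x7);  /* unknown minor */

        printf("%s\n", c ? c->name : "no driver for this chip");
        return 0;
    }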
static int wcd9xxx_device_init(struct wcd9xxx *wcd9xxx)
{
int ret;
- struct mfd_cell *wcd9xxx_dev = NULL;
- int wcd9xxx_dev_size = 0;
+ u8 version;
+ const struct wcd9xxx_codec_type *found;
mutex_init(&wcd9xxx->io_lock);
mutex_init(&wcd9xxx->xfer_lock);
@@ -458,10 +473,14 @@
wcd9xxx_bring_up(wcd9xxx);
- ret = wcd9xxx_check_codec_type(wcd9xxx, &wcd9xxx_dev, &wcd9xxx_dev_size,
- &wcd9xxx->num_irqs);
- if (ret < 0)
+ found = wcd9xxx_check_codec_type(wcd9xxx, &version);
+ if (!found) {
+ ret = -ENODEV;
goto err_irq;
+ } else {
+ wcd9xxx->codec_type = found;
+ wcd9xxx->version = version;
+ }
if (wcd9xxx->irq != -1) {
ret = wcd9xxx_irq_init(wcd9xxx);
@@ -471,7 +490,7 @@
}
}
- ret = mfd_add_devices(wcd9xxx->dev, -1, wcd9xxx_dev, wcd9xxx_dev_size,
+ ret = mfd_add_devices(wcd9xxx->dev, -1, found->dev, found->size,
NULL, 0);
if (ret != 0) {
dev_err(wcd9xxx->dev, "Failed to add children: %d\n", ret);
@@ -605,8 +624,8 @@
};
#endif
-static int wcd9xxx_enable_supplies(struct wcd9xxx *wcd9xxx,
- struct wcd9xxx_pdata *pdata)
+static int wcd9xxx_init_supplies(struct wcd9xxx *wcd9xxx,
+ struct wcd9xxx_pdata *pdata)
{
int ret;
int i;
@@ -620,7 +639,7 @@
wcd9xxx->num_of_supplies = 0;
- if (ARRAY_SIZE(pdata->regulator) > MAX_REGULATOR) {
+ if (ARRAY_SIZE(pdata->regulator) > WCD9XXX_MAX_REGULATOR) {
pr_err("%s: Array Size out of bound\n", __func__);
ret = -EINVAL;
goto err;
@@ -642,8 +661,12 @@
}
for (i = 0; i < wcd9xxx->num_of_supplies; i++) {
+ if (regulator_count_voltages(wcd9xxx->supplies[i].consumer) <=
+ 0)
+ continue;
ret = regulator_set_voltage(wcd9xxx->supplies[i].consumer,
- pdata->regulator[i].min_uV, pdata->regulator[i].max_uV);
+ pdata->regulator[i].min_uV,
+ pdata->regulator[i].max_uV);
if (ret) {
pr_err("%s: Setting regulator voltage failed for "
"regulator %s err = %d\n", __func__,
@@ -652,30 +675,19 @@
}
ret = regulator_set_optimum_mode(wcd9xxx->supplies[i].consumer,
- pdata->regulator[i].optimum_uA);
+ pdata->regulator[i].optimum_uA);
if (ret < 0) {
pr_err("%s: Setting regulator optimum mode failed for "
"regulator %s err = %d\n", __func__,
wcd9xxx->supplies[i].supply, ret);
goto err_get;
+ } else {
+ ret = 0;
}
}
- ret = regulator_bulk_enable(wcd9xxx->num_of_supplies,
- wcd9xxx->supplies);
- if (ret != 0) {
- dev_err(wcd9xxx->dev, "Failed to enable supplies: err = %d\n",
- ret);
- goto err_configure;
- }
return ret;
-err_configure:
- for (i = 0; i < wcd9xxx->num_of_supplies; i++) {
- regulator_set_voltage(wcd9xxx->supplies[i].consumer, 0,
- pdata->regulator[i].max_uV);
- regulator_set_optimum_mode(wcd9xxx->supplies[i].consumer, 0);
- }
err_get:
regulator_bulk_free(wcd9xxx->num_of_supplies, wcd9xxx->supplies);
err_supplies:
@@ -684,6 +696,33 @@
return ret;
}
+static int wcd9xxx_enable_static_supplies(struct wcd9xxx *wcd9xxx,
+ struct wcd9xxx_pdata *pdata)
+{
+ int i;
+ int ret = 0;
+
+ for (i = 0; i < wcd9xxx->num_of_supplies; i++) {
+ if (pdata->regulator[i].ondemand)
+ continue;
+ ret = regulator_enable(wcd9xxx->supplies[i].consumer);
+ if (ret) {
+ pr_err("%s: Failed to enable %s\n", __func__,
+ wcd9xxx->supplies[i].supply);
+ break;
+ } else {
+ pr_debug("%s: Enabled regulator %s\n", __func__,
+ wcd9xxx->supplies[i].supply);
+ }
+ }
+
+ while (ret && i--)
+ if (!pdata->regulator[i].ondemand)
+ regulator_disable(wcd9xxx->supplies[i].consumer);
+
+ return ret;
+}
+
static void wcd9xxx_disable_supplies(struct wcd9xxx *wcd9xxx,
struct wcd9xxx_pdata *pdata)
{
@@ -692,8 +731,11 @@
regulator_bulk_disable(wcd9xxx->num_of_supplies,
wcd9xxx->supplies);
for (i = 0; i < wcd9xxx->num_of_supplies; i++) {
+ if (regulator_count_voltages(wcd9xxx->supplies[i].consumer) <=
+ 0)
+ continue;
regulator_set_voltage(wcd9xxx->supplies[i].consumer, 0,
- pdata->regulator[i].max_uV);
+ pdata->regulator[i].max_uV);
regulator_set_optimum_mode(wcd9xxx->supplies[i].consumer, 0);
}
regulator_bulk_free(wcd9xxx->num_of_supplies, wcd9xxx->supplies);
@@ -856,7 +898,6 @@
struct wcd9xxx_pdata *pdata = NULL;
int val = 0;
int ret = 0;
- int i2c_mode = 0;
int wcd9xx_index = 0;
struct device *dev;
@@ -913,18 +954,26 @@
wcd9xxx->slim_device_bootup = true;
if (client->dev.of_node)
wcd9xxx->mclk_rate = pdata->mclk_rate;
- ret = wcd9xxx_enable_supplies(wcd9xxx, pdata);
+
+ ret = wcd9xxx_init_supplies(wcd9xxx, pdata);
if (ret) {
pr_err("%s: Fail to enable Codec supplies\n",
__func__);
goto err_codec;
}
+ ret = wcd9xxx_enable_static_supplies(wcd9xxx, pdata);
+ if (ret) {
+ pr_err("%s: Fail to enable Codec pre-reset supplies\n",
+ __func__);
+ goto err_codec;
+ }
usleep_range(5, 5);
+
ret = wcd9xxx_reset(wcd9xxx);
if (ret) {
pr_err("%s: Resetting Codec failed\n", __func__);
- goto err_supplies;
+ goto err_supplies;
}
ret = wcd9xxx_i2c_get_client_index(client, &wcd9xx_index);
@@ -948,18 +997,14 @@
goto err_device_init;
}
- if ((wcd9xxx->idbyte[0] == 0x2) || (wcd9xxx->idbyte[0] == 0x1))
- i2c_mode = TABLA_I2C_MODE;
- else if (wcd9xxx->idbyte[0] == 0x0)
- i2c_mode = SITAR_I2C_MODE;
-
ret = wcd9xxx_read(wcd9xxx, WCD9XXX_A_CHIP_STATUS, 1, &val, 0);
+ if (ret < 0)
+ pr_err("%s: failed to read the wcd9xxx status (%d)\n",
+ __func__, ret);
+ if (val != wcd9xxx->codec_type->i2c_chip_status)
+ pr_err("%s: unknown chip status 0x%x\n", __func__, val);
- if ((ret < 0) || (val != i2c_mode))
- pr_err("failed to read the wcd9xxx status ret = %d\n",
- ret);
-
- wcd9xxx_intf = WCD9XXX_INTERFACE_TYPE_I2C;
+ wcd9xxx_intf = WCD9XXX_INTERFACE_TYPE_I2C;
return ret;
} else
@@ -988,7 +1033,9 @@
}
static int wcd9xxx_dt_parse_vreg_info(struct device *dev,
- struct wcd9xxx_regulator *vreg, const char *vreg_name)
+ struct wcd9xxx_regulator *vreg,
+ const char *vreg_name,
+ bool ondemand)
{
int len, ret = 0;
const __be32 *prop;
@@ -1006,6 +1053,7 @@
return -ENODEV;
}
vreg->name = vreg_name;
+ vreg->ondemand = ondemand;
snprintf(prop_name, CODEC_DT_MAX_PROP_SIZE,
"qcom,%s-voltage", vreg_name);
@@ -1014,7 +1062,7 @@
if (!prop || (len != (2 * sizeof(__be32)))) {
dev_err(dev, "%s %s property\n",
prop ? "invalid format" : "no", prop_name);
- return -ENODEV;
+ return -EINVAL;
} else {
vreg->min_uV = be32_to_cpup(&prop[0]);
vreg->max_uV = be32_to_cpup(&prop[1]);
@@ -1027,12 +1075,12 @@
if (ret) {
dev_err(dev, "Looking up %s property in node %s failed",
prop_name, dev->of_node->full_name);
- return -ENODEV;
+ return -EFAULT;
}
vreg->optimum_uA = prop_val;
- dev_info(dev, "%s: vol=[%d %d]uV, curr=[%d]uA\n", vreg->name,
- vreg->min_uV, vreg->max_uV, vreg->optimum_uA);
+ dev_info(dev, "%s: vol=[%d %d]uV, curr=[%d]uA, ond %d\n", vreg->name,
+ vreg->min_uV, vreg->max_uV, vreg->optimum_uA, vreg->ondemand);
return 0;
}
@@ -1150,40 +1198,66 @@
static struct wcd9xxx_pdata *wcd9xxx_populate_dt_pdata(struct device *dev)
{
struct wcd9xxx_pdata *pdata;
- int ret, i;
- char **codec_supplies;
- u32 num_of_supplies = 0;
+ int ret, static_cnt, ond_cnt, idx, i;
+ const char *name = NULL;
u32 mclk_rate = 0;
u32 dmic_sample_rate = 0;
+ const char *static_prop_name = "qcom,cdc-static-supplies";
+ const char *ond_prop_name = "qcom,cdc-on-demand-supplies";
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata) {
dev_err(dev, "could not allocate memory for platform data\n");
return NULL;
}
- if (!strcmp(dev_name(dev), "taiko-slim-pgd") ||
- (!strcmp(dev_name(dev), WCD9XXX_I2C_GSBI_SLAVE_ID))) {
- codec_supplies = taiko_supplies;
- num_of_supplies = ARRAY_SIZE(taiko_supplies);
- } else if (!strcmp(dev_name(dev), "tapan-slim-pgd")) {
- codec_supplies = tapan_supplies;
- num_of_supplies = ARRAY_SIZE(tapan_supplies);
- } else {
- dev_err(dev, "%s unsupported device %s\n",
- __func__, dev_name(dev));
+
+ static_cnt = of_property_count_strings(dev->of_node, static_prop_name);
+ if (IS_ERR_VALUE(static_cnt)) {
+ dev_err(dev, "%s: Failed to get static supplies %d\n", __func__,
+ static_cnt);
goto err;
}
- if (num_of_supplies > ARRAY_SIZE(pdata->regulator)) {
+ /* On-demand supply list is an optional property */
+ ond_cnt = of_property_count_strings(dev->of_node, ond_prop_name);
+ if (IS_ERR_VALUE(ond_cnt))
+ ond_cnt = 0;
+
+ BUG_ON(static_cnt <= 0 || ond_cnt < 0);
+ if ((static_cnt + ond_cnt) > ARRAY_SIZE(pdata->regulator)) {
dev_err(dev, "%s: Num of supplies %u > max supported %u\n",
- __func__, num_of_supplies, ARRAY_SIZE(pdata->regulator));
-
+ __func__, static_cnt + ond_cnt, ARRAY_SIZE(pdata->regulator));
goto err;
}
- for (i = 0; i < num_of_supplies; i++) {
- ret = wcd9xxx_dt_parse_vreg_info(dev, &pdata->regulator[i],
- codec_supplies[i]);
+ for (idx = 0; idx < static_cnt; idx++) {
+ ret = of_property_read_string_index(dev->of_node,
+ static_prop_name, idx,
+ &name);
+ if (ret) {
+ dev_err(dev, "%s: of read string %s idx %d error %d\n",
+ __func__, static_prop_name, idx, ret);
+ goto err;
+ }
+
+ dev_dbg(dev, "%s: Found static cdc supply %s\n", __func__,
+ name);
+ ret = wcd9xxx_dt_parse_vreg_info(dev, &pdata->regulator[idx],
+ name, false);
+ if (ret)
+ goto err;
+ }
+
+ for (i = 0; i < ond_cnt; i++, idx++) {
+ ret = of_property_read_string_index(dev->of_node, ond_prop_name,
+ i, &name);
+ if (ret)
+ goto err;
+
+ dev_dbg(dev, "%s: Found on-demand cdc supply %s\n", __func__,
+ name);
+ ret = wcd9xxx_dt_parse_vreg_info(dev, &pdata->regulator[idx],
+ name, true);
if (ret)
goto err;
}
@@ -1325,9 +1399,17 @@
wcd9xxx->mclk_rate = pdata->mclk_rate;
wcd9xxx->slim_device_bootup = true;
- ret = wcd9xxx_enable_supplies(wcd9xxx, pdata);
- if (ret)
+ ret = wcd9xxx_init_supplies(wcd9xxx, pdata);
+ if (ret) {
+ pr_err("%s: Fail to init Codec supplies %d\n", __func__, ret);
goto err_codec;
+ }
+ ret = wcd9xxx_enable_static_supplies(wcd9xxx, pdata);
+ if (ret) {
+ pr_err("%s: Fail to enable Codec pre-reset supplies\n",
+ __func__);
+ goto err_codec;
+ }
usleep_range(5, 5);
ret = wcd9xxx_reset(wcd9xxx);
diff --git a/drivers/mfd/wcd9xxx-irq.c b/drivers/mfd/wcd9xxx-irq.c
index f2c3959..5efd905 100644
--- a/drivers/mfd/wcd9xxx-irq.c
+++ b/drivers/mfd/wcd9xxx-irq.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -25,6 +25,7 @@
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
+#include <linux/ratelimit.h>
#include <mach/cpuidle.h>
#define BYTE_BIT_MASK(nr) (1UL << ((nr) % BITS_PER_BYTE))
@@ -218,16 +219,19 @@
static int wcd9xxx_num_irq_regs(const struct wcd9xxx *wcd9xxx)
{
- return (wcd9xxx->num_irqs / 8) + ((wcd9xxx->num_irqs % 8) ? 1 : 0);
+ return (wcd9xxx->codec_type->num_irqs / 8) +
+ ((wcd9xxx->codec_type->num_irqs % 8) ? 1 : 0);
}
static irqreturn_t wcd9xxx_irq_thread(int irq, void *data)
{
int ret;
int i;
+ char linebuf[128];
struct wcd9xxx *wcd9xxx = data;
int num_irq_regs = wcd9xxx_num_irq_regs(wcd9xxx);
- u8 status[num_irq_regs];
+ u8 status[num_irq_regs], status1[num_irq_regs];
+ static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 1);
if (unlikely(wcd9xxx_lock_sleep(wcd9xxx) == false)) {
dev_err(wcd9xxx->dev, "Failed to hold suspend\n");
@@ -249,12 +253,17 @@
for (i = 0; i < num_irq_regs; i++)
status[i] &= ~wcd9xxx->irq_masks_cur[i];
+ memcpy(status1, status, sizeof(status1));
+
/* Find out which interrupt was triggered and call that interrupt's
* handler function
*/
if (status[BIT_BYTE(WCD9XXX_IRQ_SLIMBUS)] &
- BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS))
+ BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS)) {
wcd9xxx_irq_dispatch(wcd9xxx, WCD9XXX_IRQ_SLIMBUS);
+ status1[BIT_BYTE(WCD9XXX_IRQ_SLIMBUS)] &=
+ ~BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS);
+ }
/* Since codec has only one hardware irq line which is shared by
* codec's different internal interrupts, so it's possible master irq
@@ -263,12 +272,43 @@
* machine's order */
for (i = WCD9XXX_IRQ_MBHC_INSERTION;
i >= WCD9XXX_IRQ_MBHC_REMOVAL; i--) {
- if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i))
+ if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i)) {
wcd9xxx_irq_dispatch(wcd9xxx, i);
+ status1[BIT_BYTE(i)] &= ~BYTE_BIT_MASK(i);
+ }
}
- for (i = WCD9XXX_IRQ_BG_PRECHARGE; i < wcd9xxx->num_irqs; i++) {
- if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i))
+ for (i = WCD9XXX_IRQ_BG_PRECHARGE; i < wcd9xxx->codec_type->num_irqs;
+ i++) {
+ if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i)) {
wcd9xxx_irq_dispatch(wcd9xxx, i);
+ status1[BIT_BYTE(i)] &= ~BYTE_BIT_MASK(i);
+ }
+ }
+
+ /*
+ * As a failsafe, if an unhandled irq is found, clear it to prevent an
+ * interrupt storm.
+ * Note that we can only call an irq unhandled when no irq at all was
+ * handled by the nested handlers, since Taiko supports qdsp as the
+ * destination for a few irqs. Therefore the driver shouldn't clear
+ * pending irqs when some were handled and others were not.
+ */
+ if (unlikely(!memcmp(status, status1, sizeof(status)))) {
+ if (__ratelimit(&ratelimit)) {
+ pr_warn("%s: Unhandled irq found\n", __func__);
+ hex_dump_to_buffer(status, sizeof(status), 16, 1,
+ linebuf, sizeof(linebuf), false);
+ pr_warn("%s: status0 : %s\n", __func__, linebuf);
+ hex_dump_to_buffer(status1, sizeof(status1), 16, 1,
+ linebuf, sizeof(linebuf), false);
+ pr_warn("%s: status1 : %s\n", __func__, linebuf);
+ }
+
+ memset(status, 0xff, num_irq_regs);
+ wcd9xxx_bulk_write(wcd9xxx, WCD9XXX_A_INTR_CLEAR0,
+ num_irq_regs, status);
+ if (wcd9xxx_get_intf_type() == WCD9XXX_INTERFACE_TYPE_I2C)
+ wcd9xxx_reg_write(wcd9xxx, WCD9XXX_A_INTR_MODE, 0x02);
}
wcd9xxx_unlock_sleep(wcd9xxx);
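
For reference, a standalone sketch (plain C, hypothetical helper names) of the bookkeeping introduced above: status holds the pending bits read from the codec, status1 starts as a copy and loses one bit per dispatched interrupt, and identical buffers afterwards mean nothing was handled, which triggers the register-clearing failsafe.

#include <stdbool.h>
#include <string.h>

/* mirrors BIT_BYTE()/BYTE_BIT_MASK() from wcd9xxx-irq.c: 8 irq bits per status register */
#define STATUS_BYTE(nr)         ((nr) / 8)
#define STATUS_MASK(nr)         (1u << ((nr) % 8))

/* clear the dispatched irq's bit in the working copy */
static void mark_handled(unsigned char *status1, int irq)
{
        status1[STATUS_BYTE(irq)] &= ~STATUS_MASK(irq);
}

/* true when no dispatch cleared a bit, i.e. every pending irq went unhandled */
static bool all_irqs_unhandled(const unsigned char *status,
                               const unsigned char *status1, size_t nregs)
{
        return memcmp(status, status1, nregs) == 0;
}
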
@@ -301,7 +341,7 @@
pr_debug("%s: enter\n", __func__);
- for (irq = 0; irq < wcd9xxx->num_irqs; irq++) {
+ for (irq = 0; irq < wcd9xxx->codec_type->num_irqs; irq++) {
/* Map OF irq */
virq = wcd9xxx_map_irq(wcd9xxx, irq);
pr_debug("%s: irq %d -> %d\n", __func__, irq, virq);
@@ -365,7 +405,7 @@
/* mask all the interrupts */
memset(irq_level, 0, wcd9xxx_num_irq_regs(wcd9xxx));
- for (i = 0; i < wcd9xxx->num_irqs; i++) {
+ for (i = 0; i < wcd9xxx->codec_type->num_irqs; i++) {
wcd9xxx->irq_masks_cur[BIT_BYTE(i)] |= BYTE_BIT_MASK(i);
wcd9xxx->irq_masks_cache[BIT_BYTE(i)] |= BYTE_BIT_MASK(i);
irq_level[BIT_BYTE(i)] |=
diff --git a/drivers/mfd/wcd9xxx-slimslave.c b/drivers/mfd/wcd9xxx-slimslave.c
index f2d71b6..81262b58 100644
--- a/drivers/mfd/wcd9xxx-slimslave.c
+++ b/drivers/mfd/wcd9xxx-slimslave.c
@@ -31,7 +31,8 @@
static int wcd9xxx_configure_ports(struct wcd9xxx *wcd9xxx)
{
- if (wcd9xxx->slim_slave_type == WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA) {
+ if (wcd9xxx->codec_type->slim_slave_type ==
+ WCD9XXX_SLIM_SLAVE_ADDR_TYPE_TABLA) {
sh_ch.rx_port_ch_reg_base = 0x180;
sh_ch.port_rx_cfg_reg_base = 0x040;
sh_ch.port_tx_cfg_reg_base = 0x040;
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 2573a16..a7932ba 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -51,10 +51,6 @@
To compile this driver as a module, choose M here: the
module will be called ad525x_dpot-spi.
-config ANDROID_PMEM
- bool "Android pmem allocator"
- default y
-
config ATMEL_PWM
tristate "Atmel AT32/AT91 PWM support"
depends on HAVE_CLK
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 327d1ec..760f768 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -19,7 +19,6 @@
obj-$(CONFIG_SENSORS_BH1780) += bh1780gli.o
obj-$(CONFIG_SENSORS_BH1770) += bh1770glc.o
obj-$(CONFIG_SENSORS_APDS990X) += apds990x.o
-obj-$(CONFIG_ANDROID_PMEM) += pmem.o
obj-$(CONFIG_SGI_IOC4) += ioc4.o
obj-$(CONFIG_ENCLOSURE_SERVICES) += enclosure.o
obj-$(CONFIG_KGDB_TESTS) += kgdbts.o
diff --git a/drivers/misc/pmem.c b/drivers/misc/pmem.c
deleted file mode 100644
index e91db7a..0000000
--- a/drivers/misc/pmem.c
+++ /dev/null
@@ -1,2961 +0,0 @@
-/* drivers/android/pmem.c
- *
- * Copyright (C) 2007 Google, Inc.
- * Copyright (c) 2009-2012, The Linux Foundation. All rights reserved.
- *
- * This software is licensed under the terms of the GNU General Public
- * License version 2, as published by the Free Software Foundation, and
- * may be copied, distributed, and modified under those terms.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- */
-
-#include <linux/export.h>
-#include <linux/miscdevice.h>
-#include <linux/platform_device.h>
-#include <linux/fs.h>
-#include <linux/file.h>
-#include <linux/fmem.h>
-#include <linux/mm.h>
-#include <linux/list.h>
-#include <linux/debugfs.h>
-#include <linux/android_pmem.h>
-#include <linux/mempolicy.h>
-#include <linux/sched.h>
-#include <linux/kobject.h>
-#include <linux/pm_runtime.h>
-#include <linux/memory_alloc.h>
-#include <linux/vmalloc.h>
-#include <linux/io.h>
-#include <linux/mm_types.h>
-#include <asm/io.h>
-#include <asm/uaccess.h>
-#include <asm/cacheflush.h>
-#include <asm/sizes.h>
-#include <asm/mach/map.h>
-#include <asm/page.h>
-
-#define PMEM_MAX_DEVICES (10)
-
-#define PMEM_MAX_ORDER (128)
-#define PMEM_MIN_ALLOC PAGE_SIZE
-
-#define PMEM_INITIAL_NUM_BITMAP_ALLOCATIONS (64)
-
-#define PMEM_32BIT_WORD_ORDER (5)
-#define PMEM_BITS_PER_WORD_MASK (BITS_PER_LONG - 1)
-
-#ifdef CONFIG_ANDROID_PMEM_DEBUG
-#define PMEM_DEBUG 1
-#else
-#define PMEM_DEBUG 0
-#endif
-
-#define SYSTEM_ALLOC_RETRY 10
-
-/* indicates that a reference to this file has been taken via get_pmem_file,
- * the file should not be released until put_pmem_file is called */
-#define PMEM_FLAGS_BUSY 0x1
-/* indicates that this is a suballocation of a larger master range */
-#define PMEM_FLAGS_CONNECTED 0x1 << 1
-/* indicates this is a master and not a sub allocation and that it is mmaped */
-#define PMEM_FLAGS_MASTERMAP 0x1 << 2
-/* submap and unsubmap flags indicate:
- * 00: subregion has never been mmaped
- * 10: subregion has been mmaped, reference to the mm was taken
- * 11: subregion has been released, reference to the mm still held
- * 01: subregion has been released, reference to the mm has been released
- */
-#define PMEM_FLAGS_SUBMAP 0x1 << 3
-#define PMEM_FLAGS_UNSUBMAP 0x1 << 4
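
An illustrative decode of the four states described in the comment above (assumes the two defines just introduced; this helper is not in the original driver):

static const char *submap_state(unsigned int flags)
{
        int sub = !!(flags & PMEM_FLAGS_SUBMAP);
        int unsub = !!(flags & PMEM_FLAGS_UNSUBMAP);

        if (!sub && !unsub)
                return "never mmaped";
        if (sub && !unsub)
                return "mmaped, mm reference taken";
        if (sub && unsub)
                return "released, mm reference still held";
        return "released, mm reference dropped";        /* !sub && unsub */
}
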
-
-struct pmem_data {
- /* in alloc mode: an index into the bitmap
- * in no_alloc mode: the size of the allocation */
- int index;
- /* see flags above for descriptions */
- unsigned int flags;
- /* protects this data field, if the mm_mmap sem will be held at the
- * same time as this sem, the mm sem must be taken first (as this is
- * the order for vma_open and vma_close ops) */
- struct rw_semaphore sem;
- /* info about the mmaping process */
- struct vm_area_struct *vma;
- /* task struct of the mapping process */
- struct task_struct *task;
- /* process id of the mapping process */
- pid_t pid;
- /* file descriptor of the master */
- int master_fd;
- /* file struct of the master */
- struct file *master_file;
- /* a list of currently available regions if this is a suballocation */
- struct list_head region_list;
- /* a linked list of data so we can access them for debugging */
- struct list_head list;
-#if PMEM_DEBUG
- int ref;
-#endif
-};
-
-struct pmem_bits {
- unsigned allocated:1; /* 1 if allocated, 0 if free */
- unsigned order:7; /* size of the region in pmem space */
-};
-
-struct pmem_region_node {
- struct pmem_region region;
- struct list_head list;
-};
-
-#define PMEM_DEBUG_MSGS 0
-#if PMEM_DEBUG_MSGS
-#define DLOG(fmt,args...) \
- do { pr_debug("[%s:%s:%d] "fmt, __FILE__, __func__, __LINE__, \
- ##args); } \
- while (0)
-#else
-#define DLOG(x...) do {} while (0)
-#endif
-
-enum pmem_align {
- PMEM_ALIGN_4K,
- PMEM_ALIGN_1M,
-};
-
-#define PMEM_NAME_SIZE 16
-
-struct alloc_list {
- void *addr; /* physical addr of allocation */
- void *aaddr; /* aligned physical addr */
- unsigned int size; /* total size of allocation */
- unsigned char __iomem *vaddr; /* Virtual addr */
- struct list_head allocs;
-};
-
-struct pmem_info {
- struct miscdevice dev;
- /* physical start address of the remapped pmem space */
- unsigned long base;
- /* virtual start address of the remapped pmem space */
- unsigned char __iomem *vbase;
- /* total size of the pmem space */
- unsigned long size;
- /* number of entries in the pmem space */
- unsigned long num_entries;
- /* pfn of the garbage page in memory */
- unsigned long garbage_pfn;
- /* which memory type (i.e. SMI, EBI1) this PMEM device is backed by */
- unsigned memory_type;
-
- char name[PMEM_NAME_SIZE];
-
- /* index of the garbage page in the pmem space */
- int garbage_index;
- /* reserved virtual address range */
- struct vm_struct *area;
-
- enum pmem_allocator_type allocator_type;
-
- int (*allocate)(const int,
- const unsigned long,
- const unsigned int);
- int (*free)(int, int);
- int (*free_space)(int, struct pmem_freespace *);
- unsigned long (*len)(int, struct pmem_data *);
- unsigned long (*start_addr)(int, struct pmem_data *);
-
- /* actual size of memory element, e.g.: (4 << 10) is 4K */
- unsigned int quantum;
-
- /* indicates maps of this region should be cached, if a mix of
- * cached and uncached is desired, set this and open the device with
- * O_SYNC to get an uncached region */
- unsigned cached;
- unsigned buffered;
- union {
- struct {
- /* in all_or_nothing allocator mode the first mapper
- * gets the whole space and sets this flag */
- unsigned allocated;
- } all_or_nothing;
-
- struct {
- /* the buddy allocator bitmap for the region
- * indicating which entries are allocated and which
- * are free.
- */
-
- struct pmem_bits *buddy_bitmap;
- } buddy_bestfit;
-
- struct {
- unsigned int bitmap_free; /* # of zero bits/quanta */
- uint32_t *bitmap;
- int32_t bitmap_allocs;
- struct {
- short bit;
- unsigned short quanta;
- } *bitm_alloc;
- } bitmap;
-
- struct {
- unsigned long used; /* Bytes currently allocated */
- struct list_head alist; /* List of allocations */
- } system_mem;
- } allocator;
-
- int id;
- struct kobject kobj;
-
- /* for debugging, creates a list of pmem file structs, the
- * data_list_mutex should be taken before pmem_data->sem if both are
- * needed */
- struct mutex data_list_mutex;
- struct list_head data_list;
- /* arena_mutex protects the global allocation arena
- *
- * IF YOU TAKE BOTH LOCKS TAKE THEM IN THIS ORDER:
- * down(pmem_data->sem) => mutex_lock(arena_mutex)
- */
- struct mutex arena_mutex;
-
- long (*ioctl)(struct file *, unsigned int, unsigned long);
- int (*release)(struct inode *, struct file *);
- /* reference count of allocations */
- atomic_t allocation_cnt;
- /*
- * request function for a region when the allocation count goes
- * from 0 -> 1
- */
- int (*mem_request)(void *);
- /*
- * release function for a region when the allocation count goes
- * from 1 -> 0
- */
- int (*mem_release)(void *);
- /*
- * private data for the request/release callback
- */
- void *region_data;
- /*
- * map and unmap as needed
- */
- int map_on_demand;
- /*
- * memory will be reused through fmem
- */
- int reusable;
-};
-#define to_pmem_info_id(a) (container_of(a, struct pmem_info, kobj)->id)
-
-static void ioremap_pmem(int id);
-static void pmem_put_region(int id);
-static int pmem_get_region(int id);
-
-static struct pmem_info pmem[PMEM_MAX_DEVICES];
-static int id_count;
-
-#define PMEM_SYSFS_DIR_NAME "pmem_regions" /* under /sys/kernel/ */
-static struct kset *pmem_kset;
-
-#define PMEM_IS_FREE_BUDDY(id, index) \
- (!(pmem[id].allocator.buddy_bestfit.buddy_bitmap[index].allocated))
-#define PMEM_BUDDY_ORDER(id, index) \
- (pmem[id].allocator.buddy_bestfit.buddy_bitmap[index].order)
-#define PMEM_BUDDY_INDEX(id, index) \
- (index ^ (1 << PMEM_BUDDY_ORDER(id, index)))
-#define PMEM_BUDDY_NEXT_INDEX(id, index) \
- (index + (1 << PMEM_BUDDY_ORDER(id, index)))
-#define PMEM_OFFSET(index) (index * pmem[id].quantum)
-#define PMEM_START_ADDR(id, index) \
- (PMEM_OFFSET(index) + pmem[id].base)
-#define PMEM_BUDDY_LEN(id, index) \
- ((1 << PMEM_BUDDY_ORDER(id, index)) * pmem[id].quantum)
-#define PMEM_END_ADDR(id, index) \
- (PMEM_START_ADDR(id, index) + PMEM_LEN(id, index))
-#define PMEM_START_VADDR(id, index) \
- (PMEM_OFFSET(id, index) + pmem[id].vbase)
-#define PMEM_END_VADDR(id, index) \
- (PMEM_START_VADDR(id, index) + PMEM_LEN(id, index))
-#define PMEM_REVOKED(data) (data->flags & PMEM_FLAGS_REVOKED)
-#define PMEM_IS_PAGE_ALIGNED(addr) (!((addr) & (~PAGE_MASK)))
-#define PMEM_IS_SUBMAP(data) \
- ((data->flags & PMEM_FLAGS_SUBMAP) && \
- (!(data->flags & PMEM_FLAGS_UNSUBMAP)))
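
As a worked example of the address macros above (values hypothetical): with a 4 KB quantum and a base of 0x80000000, the slot at index 16 starts at 0x80010000, and an order-3 buddy block spans 8 quanta, i.e. 32 KB.

#include <assert.h>

#define QUANTUM                 4096UL
#define BASE                    0x80000000UL
/* PMEM_OFFSET/PMEM_START_ADDR/PMEM_BUDDY_LEN with the per-device id folded out */
#define OFFSET(index)           ((index) * QUANTUM)
#define START_ADDR(index)       (OFFSET(index) + BASE)
#define BUDDY_LEN(order)        ((1UL << (order)) * QUANTUM)

int main(void)
{
        assert(START_ADDR(16) == 0x80010000UL);
        assert(BUDDY_LEN(3) == 32768UL);        /* 8 quanta = 32 KB */
        return 0;
}
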
-
-static int pmem_release(struct inode *, struct file *);
-static int pmem_mmap(struct file *, struct vm_area_struct *);
-static int pmem_open(struct inode *, struct file *);
-static long pmem_ioctl(struct file *, unsigned int, unsigned long);
-
-struct file_operations pmem_fops = {
- .release = pmem_release,
- .mmap = pmem_mmap,
- .open = pmem_open,
- .unlocked_ioctl = pmem_ioctl,
-};
-
-#define PMEM_ATTR(_name, _mode, _show, _store) { \
- .attr = {.name = __stringify(_name), .mode = _mode }, \
- .show = _show, \
- .store = _store, \
-}
-
-struct pmem_attr {
- struct attribute attr;
- ssize_t(*show) (const int id, char * const);
- ssize_t(*store) (const int id, const char * const, const size_t count);
-};
-#define to_pmem_attr(a) container_of(a, struct pmem_attr, attr)
-
-#define RW_PMEM_ATTR(name) \
-static struct pmem_attr pmem_attr_## name = \
- PMEM_ATTR(name, S_IRUGO | S_IWUSR, show_pmem_## name, store_pmem_## name)
-
-#define RO_PMEM_ATTR(name) \
-static struct pmem_attr pmem_attr_## name = \
- PMEM_ATTR(name, S_IRUGO, show_pmem_## name, NULL)
-
-#define WO_PMEM_ATTR(name) \
-static struct pmem_attr pmem_attr_## name = \
- PMEM_ATTR(name, S_IWUSR, NULL, store_pmem_## name)
-
-static ssize_t show_pmem(struct kobject *kobj,
- struct attribute *attr,
- char *buf)
-{
- struct pmem_attr *a = to_pmem_attr(attr);
- return a->show ? a->show(to_pmem_info_id(kobj), buf) : -EIO;
-}
-
-static ssize_t store_pmem(struct kobject *kobj, struct attribute *attr,
- const char *buf, size_t count)
-{
- struct pmem_attr *a = to_pmem_attr(attr);
- return a->store ? a->store(to_pmem_info_id(kobj), buf, count) : -EIO;
-}
-
-static struct sysfs_ops pmem_ops = {
- .show = show_pmem,
- .store = store_pmem,
-};
-
-static ssize_t show_pmem_base(int id, char *buf)
-{
- return scnprintf(buf, PAGE_SIZE, "%lu(%#lx)\n",
- pmem[id].base, pmem[id].base);
-}
-RO_PMEM_ATTR(base);
-
-static ssize_t show_pmem_size(int id, char *buf)
-{
- return scnprintf(buf, PAGE_SIZE, "%lu(%#lx)\n",
- pmem[id].size, pmem[id].size);
-}
-RO_PMEM_ATTR(size);
-
-static ssize_t show_pmem_allocator_type(int id, char *buf)
-{
- switch (pmem[id].allocator_type) {
- case PMEM_ALLOCATORTYPE_ALLORNOTHING:
- return scnprintf(buf, PAGE_SIZE, "%s\n", "All or Nothing");
- case PMEM_ALLOCATORTYPE_BUDDYBESTFIT:
- return scnprintf(buf, PAGE_SIZE, "%s\n", "Buddy Bestfit");
- case PMEM_ALLOCATORTYPE_BITMAP:
- return scnprintf(buf, PAGE_SIZE, "%s\n", "Bitmap");
- case PMEM_ALLOCATORTYPE_SYSTEM:
- return scnprintf(buf, PAGE_SIZE, "%s\n", "System heap");
- default:
- return scnprintf(buf, PAGE_SIZE,
- "??? Invalid allocator type (%d) for this region! "
- "Something isn't right.\n",
- pmem[id].allocator_type);
- }
-}
-RO_PMEM_ATTR(allocator_type);
-
-static ssize_t show_pmem_mapped_regions(int id, char *buf)
-{
- struct list_head *elt;
- int ret;
-
- ret = scnprintf(buf, PAGE_SIZE,
- "pid #: mapped regions (offset, len) (offset,len)...\n");
-
- mutex_lock(&pmem[id].data_list_mutex);
- list_for_each(elt, &pmem[id].data_list) {
- struct pmem_data *data =
- list_entry(elt, struct pmem_data, list);
- struct list_head *elt2;
-
- down_read(&data->sem);
- ret += scnprintf(buf + ret, PAGE_SIZE - ret, "pid %u:",
- data->pid);
- list_for_each(elt2, &data->region_list) {
- struct pmem_region_node *region_node = list_entry(elt2,
- struct pmem_region_node,
- list);
- ret += scnprintf(buf + ret, PAGE_SIZE - ret,
- "(%lx,%lx) ",
- region_node->region.offset,
- region_node->region.len);
- }
- up_read(&data->sem);
- ret += scnprintf(buf + ret, PAGE_SIZE - ret, "\n");
- }
- mutex_unlock(&pmem[id].data_list_mutex);
- return ret;
-}
-RO_PMEM_ATTR(mapped_regions);
-
-#define PMEM_COMMON_SYSFS_ATTRS \
- &pmem_attr_base.attr, \
- &pmem_attr_size.attr, \
- &pmem_attr_allocator_type.attr, \
- &pmem_attr_mapped_regions.attr
-
-
-static ssize_t show_pmem_allocated(int id, char *buf)
-{
- ssize_t ret;
-
- mutex_lock(&pmem[id].arena_mutex);
- ret = scnprintf(buf, PAGE_SIZE, "%s\n",
- pmem[id].allocator.all_or_nothing.allocated ?
- "is allocated" : "is NOT allocated");
- mutex_unlock(&pmem[id].arena_mutex);
- return ret;
-}
-RO_PMEM_ATTR(allocated);
-
-static struct attribute *pmem_allornothing_attrs[] = {
- PMEM_COMMON_SYSFS_ATTRS,
-
- &pmem_attr_allocated.attr,
-
- NULL
-};
-
-static struct kobj_type pmem_allornothing_ktype = {
- .sysfs_ops = &pmem_ops,
- .default_attrs = pmem_allornothing_attrs,
-};
-
-static ssize_t show_pmem_total_entries(int id, char *buf)
-{
- return scnprintf(buf, PAGE_SIZE, "%lu\n", pmem[id].num_entries);
-}
-RO_PMEM_ATTR(total_entries);
-
-static ssize_t show_pmem_quantum_size(int id, char *buf)
-{
- return scnprintf(buf, PAGE_SIZE, "%u (%#x)\n",
- pmem[id].quantum, pmem[id].quantum);
-}
-RO_PMEM_ATTR(quantum_size);
-
-static ssize_t show_pmem_buddy_bitmap_dump(int id, char *buf)
-{
- int ret, i;
-
- mutex_lock(&pmem[id].data_list_mutex);
- ret = scnprintf(buf, PAGE_SIZE, "index\torder\tlength\tallocated\n");
-
- for (i = 0; i < pmem[id].num_entries && (PAGE_SIZE - ret);
- i = PMEM_BUDDY_NEXT_INDEX(id, i))
- ret += scnprintf(buf + ret, PAGE_SIZE - ret, "%d\t%d\t%d\t%d\n",
- i, PMEM_BUDDY_ORDER(id, i),
- PMEM_BUDDY_LEN(id, i),
- !PMEM_IS_FREE_BUDDY(id, i));
-
- mutex_unlock(&pmem[id].data_list_mutex);
- return ret;
-}
-RO_PMEM_ATTR(buddy_bitmap_dump);
-
-#define PMEM_BITMAP_BUDDY_BESTFIT_COMMON_SYSFS_ATTRS \
- &pmem_attr_quantum_size.attr, \
- &pmem_attr_total_entries.attr
-
-static struct attribute *pmem_buddy_bestfit_attrs[] = {
- PMEM_COMMON_SYSFS_ATTRS,
-
- PMEM_BITMAP_BUDDY_BESTFIT_COMMON_SYSFS_ATTRS,
-
- &pmem_attr_buddy_bitmap_dump.attr,
-
- NULL
-};
-
-static struct kobj_type pmem_buddy_bestfit_ktype = {
- .sysfs_ops = &pmem_ops,
- .default_attrs = pmem_buddy_bestfit_attrs,
-};
-
-static ssize_t show_pmem_free_quanta(int id, char *buf)
-{
- ssize_t ret;
-
- mutex_lock(&pmem[id].arena_mutex);
- ret = scnprintf(buf, PAGE_SIZE, "%u\n",
- pmem[id].allocator.bitmap.bitmap_free);
- mutex_unlock(&pmem[id].arena_mutex);
- return ret;
-}
-RO_PMEM_ATTR(free_quanta);
-
-static ssize_t show_pmem_bits_allocated(int id, char *buf)
-{
- ssize_t ret;
- unsigned int i;
-
- mutex_lock(&pmem[id].arena_mutex);
-
- ret = scnprintf(buf, PAGE_SIZE,
- "id: %d\nbitnum\tindex\tquanta allocated\n", id);
-
- for (i = 0; i < pmem[id].allocator.bitmap.bitmap_allocs; i++)
- if (pmem[id].allocator.bitmap.bitm_alloc[i].bit != -1)
- ret += scnprintf(buf + ret, PAGE_SIZE - ret,
- "%u\t%u\t%u\n",
- i,
- pmem[id].allocator.bitmap.bitm_alloc[i].bit,
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta
- );
-
- mutex_unlock(&pmem[id].arena_mutex);
- return ret;
-}
-RO_PMEM_ATTR(bits_allocated);
-
-static struct attribute *pmem_bitmap_attrs[] = {
- PMEM_COMMON_SYSFS_ATTRS,
-
- PMEM_BITMAP_BUDDY_BESTFIT_COMMON_SYSFS_ATTRS,
-
- &pmem_attr_free_quanta.attr,
- &pmem_attr_bits_allocated.attr,
-
- NULL
-};
-
-static struct attribute *pmem_system_attrs[] = {
- PMEM_COMMON_SYSFS_ATTRS,
-
- NULL
-};
-
-static struct kobj_type pmem_bitmap_ktype = {
- .sysfs_ops = &pmem_ops,
- .default_attrs = pmem_bitmap_attrs,
-};
-
-static struct kobj_type pmem_system_ktype = {
- .sysfs_ops = &pmem_ops,
- .default_attrs = pmem_system_attrs,
-};
-
-static int pmem_allocate_from_id(const int id, const unsigned long size,
- const unsigned int align)
-{
- int ret;
- ret = pmem_get_region(id);
-
- if (ret)
- return -1;
-
- ret = pmem[id].allocate(id, size, align);
-
- if (ret < 0)
- pmem_put_region(id);
-
- return ret;
-}
-
-static int pmem_free_from_id(const int id, const int index)
-{
- pmem_put_region(id);
- return pmem[id].free(id, index);
-}
-
-static int pmem_get_region(int id)
-{
- /* Must be called with arena mutex locked */
- atomic_inc(&pmem[id].allocation_cnt);
- if (!pmem[id].vbase) {
- DLOG("PMEMDEBUG: mapping for %s", pmem[id].name);
- if (pmem[id].mem_request) {
- int ret = pmem[id].mem_request(pmem[id].region_data);
- if (ret) {
- atomic_dec(&pmem[id].allocation_cnt);
- return 1;
- }
- }
- ioremap_pmem(id);
- }
-
- if (pmem[id].vbase) {
- return 0;
- } else {
- if (pmem[id].mem_release)
- pmem[id].mem_release(pmem[id].region_data);
- atomic_dec(&pmem[id].allocation_cnt);
- return 1;
- }
-}
-
-static void pmem_put_region(int id)
-{
- /* Must be called with arena mutex locked */
- if (atomic_dec_and_test(&pmem[id].allocation_cnt)) {
- DLOG("PMEMDEBUG: unmapping for %s", pmem[id].name);
- BUG_ON(!pmem[id].vbase);
- if (pmem[id].map_on_demand) {
- /* unmap_kernel_range() flushes the caches
- * and removes the page table entries
- */
- unmap_kernel_range((unsigned long)pmem[id].vbase,
- pmem[id].size);
- pmem[id].vbase = NULL;
- if (pmem[id].mem_release) {
- int ret = pmem[id].mem_release(
- pmem[id].region_data);
- WARN(ret, "mem_release failed");
- }
-
- }
- }
-}
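
pmem_get_region()/pmem_put_region() above implement a map-on-demand reference count: the first user of a region triggers mem_request() plus ioremap, and the last one unmaps and releases it. A stripped-down sketch of the same idea (plain C; map_region()/unmap_region() are hypothetical stand-ins):

#include <stddef.h>

/* hypothetical stand-ins for mem_request()+ioremap_pmem() and
 * unmap_kernel_range()+mem_release() */
void *map_region(void);
void unmap_region(void *vbase);

struct region {
        int refcnt;             /* the driver uses atomic_t; plain int keeps the sketch short */
        void *vbase;            /* non-NULL while the region is mapped */
};

static int region_get(struct region *r)
{
        r->refcnt++;
        if (!r->vbase)
                r->vbase = map_region();
        if (r->vbase)
                return 0;
        r->refcnt--;            /* mapping failed: drop the reference again */
        return -1;
}

static void region_put(struct region *r)
{
        if (--r->refcnt == 0 && r->vbase) {
                unmap_region(r->vbase);
                r->vbase = NULL;
        }
}
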
-
-static int get_id(struct file *file)
-{
- return MINOR(file->f_dentry->d_inode->i_rdev);
-}
-
-static char *get_name(struct file *file)
-{
- int id = get_id(file);
- return pmem[id].name;
-}
-
-static int is_pmem_file(struct file *file)
-{
- int id;
-
- if (unlikely(!file || !file->f_dentry || !file->f_dentry->d_inode))
- return 0;
-
- id = get_id(file);
- return (unlikely(id >= PMEM_MAX_DEVICES ||
- file->f_dentry->d_inode->i_rdev !=
- MKDEV(MISC_MAJOR, pmem[id].dev.minor))) ? 0 : 1;
-}
-
-static int has_allocation(struct file *file)
-{
- /* must be called with at least read lock held on
- * ((struct pmem_data *)(file->private_data))->sem which
- * means that file is guaranteed not to be NULL upon entry!!
- * check is_pmem_file first if not accessed via pmem_file_ops */
- struct pmem_data *pdata = file->private_data;
- return pdata && pdata->index != -1;
-}
-
-static int is_master_owner(struct file *file)
-{
- struct file *master_file;
- struct pmem_data *data = file->private_data;
- int put_needed, ret = 0;
-
- if (!has_allocation(file))
- return 0;
- if (PMEM_FLAGS_MASTERMAP & data->flags)
- return 1;
- master_file = fget_light(data->master_fd, &put_needed);
- if (master_file && data->master_file == master_file)
- ret = 1;
- if (master_file)
- fput_light(master_file, put_needed);
- return ret;
-}
-
-static int pmem_free_all_or_nothing(int id, int index)
-{
- /* caller should hold the lock on arena_mutex! */
- DLOG("index %d\n", index);
-
- pmem[id].allocator.all_or_nothing.allocated = 0;
- return 0;
-}
-
-static int pmem_free_space_all_or_nothing(int id,
- struct pmem_freespace *fs)
-{
- /* caller should hold the lock on arena_mutex! */
- fs->total = (unsigned long)
- pmem[id].allocator.all_or_nothing.allocated == 0 ?
- pmem[id].size : 0;
-
- fs->largest = fs->total;
- return 0;
-}
-
-
-static int pmem_free_buddy_bestfit(int id, int index)
-{
- /* caller should hold the lock on arena_mutex! */
- int curr = index;
- DLOG("index %d\n", index);
-
-
- /* clean up the bitmap, merging any buddies */
- pmem[id].allocator.buddy_bestfit.buddy_bitmap[curr].allocated = 0;
- /* find a slot's buddy: Buddy# = Slot# ^ (1 << order)
- * if the buddy is also free merge them
- * repeat until the buddy is not free or end of the bitmap is reached
- */
- do {
- int buddy = PMEM_BUDDY_INDEX(id, curr);
- if (buddy < pmem[id].num_entries &&
- PMEM_IS_FREE_BUDDY(id, buddy) &&
- PMEM_BUDDY_ORDER(id, buddy) ==
- PMEM_BUDDY_ORDER(id, curr)) {
- PMEM_BUDDY_ORDER(id, buddy)++;
- PMEM_BUDDY_ORDER(id, curr)++;
- curr = min(buddy, curr);
- } else {
- break;
- }
- } while (curr < pmem[id].num_entries);
-
- return 0;
-}
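
The merge loop above relies on the standard binary-buddy identity: a slot's buddy sits at index ^ (1 << order), and two free buddies of equal order collapse into one block of order + 1. A small standalone illustration:

#include <stdio.h>

/* buddy of the slot at 'index' whose block covers 2^order quanta */
static int buddy_index(int index, int order)
{
        return index ^ (1 << order);
}

int main(void)
{
        printf("buddy of 4 (order 2) = %d\n", buddy_index(4, 2));       /* 0  */
        printf("buddy of 8 (order 2) = %d\n", buddy_index(8, 2));       /* 12 */
        return 0;
}
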
-
-
-static int pmem_free_space_buddy_bestfit(int id,
- struct pmem_freespace *fs)
-{
- /* caller should hold the lock on arena_mutex! */
- int curr;
- unsigned long size;
- fs->total = 0;
- fs->largest = 0;
-
- for (curr = 0; curr < pmem[id].num_entries;
- curr = PMEM_BUDDY_NEXT_INDEX(id, curr)) {
- if (PMEM_IS_FREE_BUDDY(id, curr)) {
- size = PMEM_BUDDY_LEN(id, curr);
- if (size > fs->largest)
- fs->largest = size;
- fs->total += size;
- }
- }
- return 0;
-}
-
-
-static inline uint32_t start_mask(int bit_start)
-{
- return (uint32_t)(~0) << (bit_start & PMEM_BITS_PER_WORD_MASK);
-}
-
-static inline uint32_t end_mask(int bit_end)
-{
- return (uint32_t)(~0) >>
- ((BITS_PER_LONG - bit_end) & PMEM_BITS_PER_WORD_MASK);
-}
-
-static inline int compute_total_words(int bit_end, int word_index)
-{
- return ((bit_end + BITS_PER_LONG - 1) >>
- PMEM_32BIT_WORD_ORDER) - word_index;
-}
-
-static void bitmap_bits_clear_all(uint32_t *bitp, int bit_start, int bit_end)
-{
- int word_index = bit_start >> PMEM_32BIT_WORD_ORDER, total_words;
-
- total_words = compute_total_words(bit_end, word_index);
- if (total_words > 0) {
- if (total_words == 1) {
- bitp[word_index] &=
- ~(start_mask(bit_start) & end_mask(bit_end));
- } else {
- bitp[word_index++] &= ~start_mask(bit_start);
- if (total_words > 2) {
- int total_bytes;
-
- total_words -= 2;
- total_bytes = total_words << 2;
-
- memset(&bitp[word_index], 0, total_bytes);
- word_index += total_words;
- }
- bitp[word_index] &= ~end_mask(bit_end);
- }
- }
-}
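
start_mask() and end_mask() bracket a half-open [bit_start, bit_end) range inside 32-bit words, and ANDing them covers the single-word case. A standalone check of the arithmetic (assumes 32-bit words, as PMEM_32BIT_WORD_ORDER does):

#include <assert.h>
#include <stdint.h>

static uint32_t smask(int bit_start) { return (uint32_t)~0 << (bit_start & 31); }
static uint32_t emask(int bit_end)   { return (uint32_t)~0 >> ((32 - bit_end) & 31); }

int main(void)
{
        /* bits 3..7 of a word (bit_end is exclusive) */
        assert((smask(3) & emask(8)) == 0xF8u);
        /* a full word: both masks become all ones */
        assert((smask(0) & emask(32)) == 0xFFFFFFFFu);
        return 0;
}
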
-
-static int pmem_free_bitmap(int id, int bitnum)
-{
- /* caller should hold the lock on arena_mutex! */
- int i;
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-
- DLOG("bitnum %d\n", bitnum);
-
- for (i = 0; i < pmem[id].allocator.bitmap.bitmap_allocs; i++) {
- const int curr_bit =
- pmem[id].allocator.bitmap.bitm_alloc[i].bit;
-
- if (curr_bit == bitnum) {
- const int curr_quanta =
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta;
-
- bitmap_bits_clear_all(pmem[id].allocator.bitmap.bitmap,
- curr_bit, curr_bit + curr_quanta);
- pmem[id].allocator.bitmap.bitmap_free += curr_quanta;
- pmem[id].allocator.bitmap.bitm_alloc[i].bit = -1;
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta = 0;
- return 0;
- }
- }
- printk(KERN_ALERT "pmem: %s: Attempt to free unallocated index %d, id"
- " %d, pid %d(%s)\n", __func__, bitnum, id, current->pid,
- get_task_comm(currtask_name, current));
-
- return -1;
-}
-
-static int pmem_free_system(int id, int index)
-{
- /* caller should hold the lock on arena_mutex! */
- struct alloc_list *item;
-
- DLOG("index %d\n", index);
- if (index != 0)
- item = (struct alloc_list *)index;
- else
- return 0;
-
- if (item->vaddr != NULL) {
- iounmap(item->vaddr);
- kfree(__va(item->addr));
- list_del(&item->allocs);
- kfree(item);
- }
-
- return 0;
-}
-
-static int pmem_free_space_bitmap(int id, struct pmem_freespace *fs)
-{
- int i, j;
- int max_allocs = pmem[id].allocator.bitmap.bitmap_allocs;
- int alloc_start = 0;
- int next_alloc;
- unsigned long size = 0;
-
- fs->total = 0;
- fs->largest = 0;
-
- for (i = 0; i < max_allocs; i++) {
-
- int alloc_quanta = 0;
- int alloc_idx = 0;
- next_alloc = pmem[id].num_entries;
-
- /* Look for the lowest bit where next allocation starts */
- for (j = 0; j < max_allocs; j++) {
- const int curr_alloc = pmem[id].allocator.
- bitmap.bitm_alloc[j].bit;
- if (curr_alloc != -1) {
- if (alloc_start == curr_alloc)
- alloc_idx = j;
- if (alloc_start >= curr_alloc)
- continue;
- if (curr_alloc < next_alloc)
- next_alloc = curr_alloc;
- }
- }
- alloc_quanta = pmem[id].allocator.bitmap.
- bitm_alloc[alloc_idx].quanta;
- size = (next_alloc - (alloc_start + alloc_quanta)) *
- pmem[id].quantum;
-
- if (size > fs->largest)
- fs->largest = size;
- fs->total += size;
-
- if (next_alloc == pmem[id].num_entries)
- break;
- else
- alloc_start = next_alloc;
- }
-
- return 0;
-}
-
-static int pmem_free_space_system(int id, struct pmem_freespace *fs)
-{
- fs->total = pmem[id].size;
- fs->largest = pmem[id].size;
-
- return 0;
-}
-
-static void pmem_revoke(struct file *file, struct pmem_data *data);
-
-static int pmem_release(struct inode *inode, struct file *file)
-{
- struct pmem_data *data = file->private_data;
- struct pmem_region_node *region_node;
- struct list_head *elt, *elt2;
- int id = get_id(file), ret = 0;
-
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("releasing memory pid %u(%s) file %p(%ld) dev %s(id: %d)\n",
- current->pid, get_task_comm(currtask_name, current),
- file, file_count(file), get_name(file), id);
- mutex_lock(&pmem[id].data_list_mutex);
- /* if this file is a master, revoke all the memory in the connected
- * files */
- if (PMEM_FLAGS_MASTERMAP & data->flags) {
- list_for_each(elt, &pmem[id].data_list) {
- struct pmem_data *sub_data =
- list_entry(elt, struct pmem_data, list);
- int is_master;
-
- down_read(&sub_data->sem);
- is_master = (PMEM_IS_SUBMAP(sub_data) &&
- file == sub_data->master_file);
- up_read(&sub_data->sem);
-
- if (is_master)
- pmem_revoke(file, sub_data);
- }
- }
- list_del(&data->list);
- mutex_unlock(&pmem[id].data_list_mutex);
-
- down_write(&data->sem);
-
- /* if it is not a connected file and it has an allocation, free it */
- if (!(PMEM_FLAGS_CONNECTED & data->flags) && has_allocation(file)) {
- mutex_lock(&pmem[id].arena_mutex);
- ret = pmem_free_from_id(id, data->index);
- mutex_unlock(&pmem[id].arena_mutex);
- }
-
- /* if this file is a submap (mapped, connected file), downref the
- * task struct */
- if (PMEM_FLAGS_SUBMAP & data->flags)
- if (data->task) {
- put_task_struct(data->task);
- data->task = NULL;
- }
-
- file->private_data = NULL;
-
- list_for_each_safe(elt, elt2, &data->region_list) {
- region_node = list_entry(elt, struct pmem_region_node, list);
- list_del(elt);
- kfree(region_node);
- }
- BUG_ON(!list_empty(&data->region_list));
-
- up_write(&data->sem);
- kfree(data);
- if (pmem[id].release)
- ret = pmem[id].release(inode, file);
-
- return ret;
-}
-
-static int pmem_open(struct inode *inode, struct file *file)
-{
- struct pmem_data *data;
- int id = get_id(file);
- int ret = 0;
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
-
- DLOG("pid %u(%s) file %p(%ld) dev %s(id: %d)\n",
- current->pid, get_task_comm(currtask_name, current),
- file, file_count(file), get_name(file), id);
- data = kmalloc(sizeof(struct pmem_data), GFP_KERNEL);
- if (!data) {
- printk(KERN_ALERT "pmem: %s: unable to allocate memory for "
- "pmem metadata.", __func__);
- return -1;
- }
- data->flags = 0;
- data->index = -1;
- data->task = NULL;
- data->vma = NULL;
- data->pid = 0;
- data->master_file = NULL;
-#if PMEM_DEBUG
- data->ref = 0;
-#endif
- INIT_LIST_HEAD(&data->region_list);
- init_rwsem(&data->sem);
-
- file->private_data = data;
- INIT_LIST_HEAD(&data->list);
-
- mutex_lock(&pmem[id].data_list_mutex);
- list_add(&data->list, &pmem[id].data_list);
- mutex_unlock(&pmem[id].data_list_mutex);
- return ret;
-}
-
-static unsigned long pmem_order(unsigned long len, int id)
-{
- int i;
-
- len = (len + pmem[id].quantum - 1)/pmem[id].quantum;
- len--;
- for (i = 0; i < sizeof(len)*8; i++)
- if (len >> i == 0)
- break;
- return i;
-}
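
pmem_order() rounds the request up to whole quanta and returns the smallest order whose block (1 << order quanta) covers it. For example, with a 4 KB quantum a 10,000-byte request needs 3 quanta and gets order 2, i.e. a 16 KB block. A standalone restatement:

#include <assert.h>

/* same computation as pmem_order(): smallest order with (1 << order) >= quanta */
static unsigned long order_for(unsigned long len, unsigned long quantum)
{
        unsigned long n = (len + quantum - 1) / quantum;        /* quanta needed */
        unsigned long i;

        n--;
        for (i = 0; i < sizeof(n) * 8; i++)
                if ((n >> i) == 0)
                        break;
        return i;
}

int main(void)
{
        assert(order_for(10000, 4096) == 2);    /* 3 quanta -> order 2 (16 KB block) */
        assert(order_for(4096, 4096) == 0);     /* exactly one quantum */
        return 0;
}
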
-
-static int pmem_allocator_all_or_nothing(const int id,
- const unsigned long len,
- const unsigned int align)
-{
- /* caller should hold the lock on arena_mutex! */
- DLOG("all or nothing\n");
- if ((len > pmem[id].size) ||
- pmem[id].allocator.all_or_nothing.allocated)
- return -1;
- pmem[id].allocator.all_or_nothing.allocated = 1;
- return len;
-}
-
-static int pmem_allocator_buddy_bestfit(const int id,
- const unsigned long len,
- unsigned int align)
-{
- /* caller should hold the lock on arena_mutex! */
- int curr;
- int best_fit = -1;
- unsigned long order;
-
- DLOG("buddy bestfit\n");
- order = pmem_order(len, id);
- if (order > PMEM_MAX_ORDER)
- goto out;
-
- DLOG("order %lx\n", order);
-
- /* Look through the bitmap.
- * If a free slot of the correct order is found, use it.
- * Otherwise, use the best fit (smallest with size > order) slot.
- */
- for (curr = 0;
- curr < pmem[id].num_entries;
- curr = PMEM_BUDDY_NEXT_INDEX(id, curr))
- if (PMEM_IS_FREE_BUDDY(id, curr)) {
- if (PMEM_BUDDY_ORDER(id, curr) ==
- (unsigned char)order) {
- /* set the not free bit and clear others */
- best_fit = curr;
- break;
- }
- if (PMEM_BUDDY_ORDER(id, curr) >
- (unsigned char)order &&
- (best_fit < 0 ||
- PMEM_BUDDY_ORDER(id, curr) <
- PMEM_BUDDY_ORDER(id, best_fit)))
- best_fit = curr;
- }
-
- /* if best_fit < 0, there are no suitable slots; return an error */
- if (best_fit < 0) {
-#if PMEM_DEBUG
- printk(KERN_ALERT "pmem: %s: no space left to allocate!\n",
- __func__);
-#endif
- goto out;
- }
-
- /* now partition the best fit:
- * split the slot into 2 buddies of order - 1
- * repeat until the slot is of the correct order
- */
- while (PMEM_BUDDY_ORDER(id, best_fit) > (unsigned char)order) {
- int buddy;
- PMEM_BUDDY_ORDER(id, best_fit) -= 1;
- buddy = PMEM_BUDDY_INDEX(id, best_fit);
- PMEM_BUDDY_ORDER(id, buddy) = PMEM_BUDDY_ORDER(id, best_fit);
- }
- pmem[id].allocator.buddy_bestfit.buddy_bitmap[best_fit].allocated = 1;
-out:
- return best_fit;
-}
-
-
-static inline unsigned long paddr_from_bit(const int id, const int bitnum)
-{
- return pmem[id].base + pmem[id].quantum * bitnum;
-}
-
-static inline unsigned long bit_from_paddr(const int id,
- const unsigned long paddr)
-{
- return (paddr - pmem[id].base) / pmem[id].quantum;
-}
-
-static void bitmap_bits_set_all(uint32_t *bitp, int bit_start, int bit_end)
-{
- int word_index = bit_start >> PMEM_32BIT_WORD_ORDER, total_words;
-
- total_words = compute_total_words(bit_end, word_index);
- if (total_words > 0) {
- if (total_words == 1) {
- bitp[word_index] |=
- (start_mask(bit_start) & end_mask(bit_end));
- } else {
- bitp[word_index++] |= start_mask(bit_start);
- if (total_words > 2) {
- int total_bytes;
-
- total_words -= 2;
- total_bytes = total_words << 2;
-
- memset(&bitp[word_index], ~0, total_bytes);
- word_index += total_words;
- }
- bitp[word_index] |= end_mask(bit_end);
- }
- }
-}
-
-static int
-bitmap_allocate_contiguous(uint32_t *bitp, int num_bits_to_alloc,
- int total_bits, int spacing, int start_bit)
-{
- int bit_start, last_bit, word_index;
-
- if (num_bits_to_alloc <= 0)
- return -1;
-
- for (bit_start = start_bit; ;
- bit_start = ((last_bit +
- (word_index << PMEM_32BIT_WORD_ORDER) + spacing - 1)
- & ~(spacing - 1)) + start_bit) {
- int bit_end = bit_start + num_bits_to_alloc, total_words;
-
- if (bit_end > total_bits)
- return -1; /* out of contiguous memory */
-
- word_index = bit_start >> PMEM_32BIT_WORD_ORDER;
- total_words = compute_total_words(bit_end, word_index);
-
- if (total_words <= 0)
- return -1;
-
- if (total_words == 1) {
- last_bit = fls(bitp[word_index] &
- (start_mask(bit_start) &
- end_mask(bit_end)));
- if (last_bit)
- continue;
- } else {
- int end_word = word_index + (total_words - 1);
- last_bit =
- fls(bitp[word_index] & start_mask(bit_start));
- if (last_bit)
- continue;
-
- for (word_index++;
- word_index < end_word;
- word_index++) {
- last_bit = fls(bitp[word_index]);
- if (last_bit)
- break;
- }
- if (last_bit)
- continue;
-
- last_bit = fls(bitp[word_index] & end_mask(bit_end));
- if (last_bit)
- continue;
- }
- bitmap_bits_set_all(bitp, bit_start, bit_end);
- return bit_start;
- }
- return -1;
-}
-
-static int reserve_quanta(const unsigned int quanta_needed,
- const int id,
- unsigned int align)
-{
- /* alignment should be a valid power of 2 */
- int ret = -1, start_bit = 0, spacing = 1;
-
- /* Sanity check */
- if (quanta_needed > pmem[id].allocator.bitmap.bitmap_free) {
-#if PMEM_DEBUG
- printk(KERN_ALERT "pmem: %s: request (%d) too big for"
- " available free (%d)\n", __func__, quanta_needed,
- pmem[id].allocator.bitmap.bitmap_free);
-#endif
- return -1;
- }
-
- start_bit = bit_from_paddr(id,
- (pmem[id].base + align - 1) & ~(align - 1));
- if (start_bit <= -1) {
-#if PMEM_DEBUG
- printk(KERN_ALERT
- "pmem: %s: bit_from_paddr fails for"
- " %u alignment.\n", __func__, align);
-#endif
- return -1;
- }
- spacing = align / pmem[id].quantum;
- spacing = spacing > 1 ? spacing : 1;
-
- ret = bitmap_allocate_contiguous(pmem[id].allocator.bitmap.bitmap,
- quanta_needed,
- (pmem[id].size + pmem[id].quantum - 1) / pmem[id].quantum,
- spacing,
- start_bit);
-
-#if PMEM_DEBUG
- if (ret < 0)
- printk(KERN_ALERT "pmem: %s: not enough contiguous bits free "
- "in bitmap! Region memory is either too fragmented or"
- " request is too large for available memory.\n",
- __func__);
-#endif
-
- return ret;
-}
-
-static int pmem_allocator_bitmap(const int id,
- const unsigned long len,
- const unsigned int align)
-{
- /* caller should hold the lock on arena_mutex! */
- int bitnum, i;
- unsigned int quanta_needed;
-
- DLOG("bitmap id %d, len %ld, align %u\n", id, len, align);
- if (!pmem[id].allocator.bitmap.bitm_alloc) {
-#if PMEM_DEBUG
- printk(KERN_ALERT "pmem: bitm_alloc not present! id: %d\n",
- id);
-#endif
- return -1;
- }
-
- quanta_needed = (len + pmem[id].quantum - 1) / pmem[id].quantum;
- DLOG("quantum size %u quanta needed %u free %u id %d\n",
- pmem[id].quantum, quanta_needed,
- pmem[id].allocator.bitmap.bitmap_free, id);
-
- if (pmem[id].allocator.bitmap.bitmap_free < quanta_needed) {
-#if PMEM_DEBUG
- printk(KERN_ALERT "pmem: memory allocation failure. "
- "PMEM memory region exhausted, id %d."
- " Unable to comply with allocation request.\n", id);
-#endif
- return -1;
- }
-
- bitnum = reserve_quanta(quanta_needed, id, align);
- if (bitnum == -1)
- goto leave;
-
- for (i = 0;
- i < pmem[id].allocator.bitmap.bitmap_allocs &&
- pmem[id].allocator.bitmap.bitm_alloc[i].bit != -1;
- i++)
- ;
-
- if (i >= pmem[id].allocator.bitmap.bitmap_allocs) {
- void *temp;
- int32_t new_bitmap_allocs =
- pmem[id].allocator.bitmap.bitmap_allocs << 1;
- int j;
-
- if (!new_bitmap_allocs) { /* failed sanity check!! */
-#if PMEM_DEBUG
- pr_alert("pmem: bitmap_allocs number"
- " wrapped around to zero! Something "
- "is VERY wrong.\n");
-#endif
- return -1;
- }
-
- if (new_bitmap_allocs > pmem[id].num_entries) {
- /* failed sanity check!! */
-#if PMEM_DEBUG
- pr_alert("pmem: required bitmap_allocs"
- " number exceeds maximum entries possible"
- " for current quanta\n");
-#endif
- return -1;
- }
-
- temp = krealloc(pmem[id].allocator.bitmap.bitm_alloc,
- new_bitmap_allocs *
- sizeof(*pmem[id].allocator.bitmap.bitm_alloc),
- GFP_KERNEL);
- if (!temp) {
-#if PMEM_DEBUG
- pr_alert("pmem: can't realloc bitmap_allocs,"
- "id %d, current num bitmap allocs %d\n",
- id, pmem[id].allocator.bitmap.bitmap_allocs);
-#endif
- return -1;
- }
- pmem[id].allocator.bitmap.bitmap_allocs = new_bitmap_allocs;
- pmem[id].allocator.bitmap.bitm_alloc = temp;
-
- for (j = i; j < new_bitmap_allocs; j++) {
- pmem[id].allocator.bitmap.bitm_alloc[j].bit = -1;
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta = 0;
- }
-
- DLOG("increased # of allocated regions to %d for id %d\n",
- pmem[id].allocator.bitmap.bitmap_allocs, id);
- }
-
- DLOG("bitnum %d, bitm_alloc index %d\n", bitnum, i);
-
- pmem[id].allocator.bitmap.bitmap_free -= quanta_needed;
- pmem[id].allocator.bitmap.bitm_alloc[i].bit = bitnum;
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta = quanta_needed;
-leave:
- return bitnum;
-}
-
-static int pmem_allocator_system(const int id,
- const unsigned long len,
- const unsigned int align)
-{
- /* caller should hold the lock on arena_mutex! */
- struct alloc_list *list;
- unsigned long aligned_len;
- int count = SYSTEM_ALLOC_RETRY;
- void *buf;
-
- DLOG("system id %d, len %ld, align %u\n", id, len, align);
-
- if ((pmem[id].allocator.system_mem.used + len) > pmem[id].size) {
- DLOG("requested size would be larger than quota\n");
- return -1;
- }
-
- /* Handle alignment */
- aligned_len = len + align;
-
- /* Attempt allocation */
- list = kmalloc(sizeof(struct alloc_list), GFP_KERNEL);
- if (list == NULL) {
- printk(KERN_ERR "pmem: failed to allocate system metadata\n");
- return -1;
- }
- list->vaddr = NULL;
-
- buf = NULL;
- while ((buf == NULL) && count--) {
- buf = kmalloc((aligned_len), GFP_KERNEL);
- if (buf == NULL) {
- DLOG("pmem: kmalloc %d temporarily failed len= %ld\n",
- count, aligned_len);
- }
- }
- if (!buf) {
- printk(KERN_CRIT "pmem: kmalloc failed for id= %d len= %ld\n",
- id, aligned_len);
- kfree(list);
- return -1;
- }
- list->size = aligned_len;
- list->addr = (void *)__pa(buf);
- list->aaddr = (void *)(((unsigned int)(list->addr) + (align - 1)) &
- ~(align - 1));
-
- if (!pmem[id].cached)
- list->vaddr = ioremap(__pa(buf), aligned_len);
- else
- list->vaddr = ioremap_cached(__pa(buf), aligned_len);
-
- INIT_LIST_HEAD(&list->allocs);
- list_add(&list->allocs, &pmem[id].allocator.system_mem.alist);
-
- return (int)list;
-}
-
-static pgprot_t pmem_phys_mem_access_prot(struct file *file, pgprot_t vma_prot)
-{
- int id = get_id(file);
-#ifdef pgprot_writecombine
- if (pmem[id].cached == 0 || file->f_flags & O_SYNC)
- /* on ARMv6 and ARMv7 this expands to Normal Noncached */
- return pgprot_writecombine(vma_prot);
-#endif
-#ifdef pgprot_ext_buffered
- else if (pmem[id].buffered)
- return pgprot_ext_buffered(vma_prot);
-#endif
- return vma_prot;
-}
-
-static unsigned long pmem_start_addr_all_or_nothing(int id,
- struct pmem_data *data)
-{
- return PMEM_START_ADDR(id, 0);
-}
-
-static unsigned long pmem_start_addr_buddy_bestfit(int id,
- struct pmem_data *data)
-{
- return PMEM_START_ADDR(id, data->index);
-}
-
-static unsigned long pmem_start_addr_bitmap(int id, struct pmem_data *data)
-{
- return data->index * pmem[id].quantum + pmem[id].base;
-}
-
-static unsigned long pmem_start_addr_system(int id, struct pmem_data *data)
-{
- return (unsigned long)(((struct alloc_list *)(data->index))->aaddr);
-}
-
-static void *pmem_start_vaddr(int id, struct pmem_data *data)
-{
- if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_SYSTEM)
- return ((struct alloc_list *)(data->index))->vaddr;
- else
- return pmem[id].start_addr(id, data) - pmem[id].base + pmem[id].vbase;
-}
-
-static unsigned long pmem_len_all_or_nothing(int id, struct pmem_data *data)
-{
- return data->index;
-}
-
-static unsigned long pmem_len_buddy_bestfit(int id, struct pmem_data *data)
-{
- return PMEM_BUDDY_LEN(id, data->index);
-}
-
-static unsigned long pmem_len_bitmap(int id, struct pmem_data *data)
-{
- int i;
- unsigned long ret = 0;
-
- mutex_lock(&pmem[id].arena_mutex);
-
- for (i = 0; i < pmem[id].allocator.bitmap.bitmap_allocs; i++)
- if (pmem[id].allocator.bitmap.bitm_alloc[i].bit ==
- data->index) {
- ret = pmem[id].allocator.bitmap.bitm_alloc[i].quanta *
- pmem[id].quantum;
- break;
- }
-
- mutex_unlock(&pmem[id].arena_mutex);
-#if PMEM_DEBUG
- if (i >= pmem[id].allocator.bitmap.bitmap_allocs)
- pr_alert("pmem: %s: can't find bitnum %d in "
- "alloc'd array!\n", __func__, data->index);
-#endif
- return ret;
-}
-
-static unsigned long pmem_len_system(int id, struct pmem_data *data)
-{
- unsigned long ret = 0;
-
- mutex_lock(&pmem[id].arena_mutex);
-
- ret = ((struct alloc_list *)data->index)->size;
- mutex_unlock(&pmem[id].arena_mutex);
-
- return ret;
-}
-
-static int pmem_map_garbage(int id, struct vm_area_struct *vma,
- struct pmem_data *data, unsigned long offset,
- unsigned long len)
-{
- int i, garbage_pages = len >> PAGE_SHIFT;
-
- vma->vm_flags |= VM_IO | VM_RESERVED | VM_PFNMAP | VM_SHARED | VM_WRITE;
- for (i = 0; i < garbage_pages; i++) {
- if (vm_insert_pfn(vma, vma->vm_start + offset + (i * PAGE_SIZE),
- pmem[id].garbage_pfn))
- return -EAGAIN;
- }
- return 0;
-}
-
-static int pmem_unmap_pfn_range(int id, struct vm_area_struct *vma,
- struct pmem_data *data, unsigned long offset,
- unsigned long len)
-{
- int garbage_pages;
- DLOG("unmap offset %lx len %lx\n", offset, len);
-
- BUG_ON(!PMEM_IS_PAGE_ALIGNED(len));
-
- garbage_pages = len >> PAGE_SHIFT;
- zap_page_range(vma, vma->vm_start + offset, len, NULL);
- pmem_map_garbage(id, vma, data, offset, len);
- return 0;
-}
-
-static int pmem_map_pfn_range(int id, struct vm_area_struct *vma,
- struct pmem_data *data, unsigned long offset,
- unsigned long len)
-{
- int ret;
- DLOG("map offset %lx len %lx\n", offset, len);
- BUG_ON(!PMEM_IS_PAGE_ALIGNED(vma->vm_start));
- BUG_ON(!PMEM_IS_PAGE_ALIGNED(vma->vm_end));
- BUG_ON(!PMEM_IS_PAGE_ALIGNED(len));
- BUG_ON(!PMEM_IS_PAGE_ALIGNED(offset));
-
- ret = io_remap_pfn_range(vma, vma->vm_start + offset,
- (pmem[id].start_addr(id, data) + offset) >> PAGE_SHIFT,
- len, vma->vm_page_prot);
- if (ret) {
-#if PMEM_DEBUG
- pr_alert("pmem: %s: io_remap_pfn_range fails with "
- "return value: %d!\n", __func__, ret);
-#endif
-
- ret = -EAGAIN;
- }
- return ret;
-}
-
-static int pmem_remap_pfn_range(int id, struct vm_area_struct *vma,
- struct pmem_data *data, unsigned long offset,
- unsigned long len)
-{
- /* hold the mm sem for the vma you are modifying when you call this */
- BUG_ON(!vma);
- zap_page_range(vma, vma->vm_start + offset, len, NULL);
- return pmem_map_pfn_range(id, vma, data, offset, len);
-}
-
-static void pmem_vma_open(struct vm_area_struct *vma)
-{
- struct file *file = vma->vm_file;
- struct pmem_data *data = file->private_data;
- int id = get_id(file);
-
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("Dev %s(id: %d) pid %u(%s) ppid %u file %p count %ld\n",
- get_name(file), id, current->pid,
- get_task_comm(currtask_name, current),
- current->parent->pid, file, file_count(file));
- /* this should never be called as we don't support copying pmem
- * ranges via fork */
- down_read(&data->sem);
- BUG_ON(!has_allocation(file));
- /* remap the garbage pages, forkers don't get access to the data */
- pmem_unmap_pfn_range(id, vma, data, 0, vma->vm_start - vma->vm_end);
- up_read(&data->sem);
-}
-
-static void pmem_vma_close(struct vm_area_struct *vma)
-{
- struct file *file = vma->vm_file;
- struct pmem_data *data = file->private_data;
-
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("Dev %s(id: %d) pid %u(%s) ppid %u file %p count %ld\n",
- get_name(file), get_id(file), current->pid,
- get_task_comm(currtask_name, current),
- current->parent->pid, file, file_count(file));
-
- if (unlikely(!is_pmem_file(file))) {
- pr_warning("pmem: something is very wrong, you are "
- "closing a vm backing an allocation that doesn't "
- "exist!\n");
- return;
- }
-
- down_write(&data->sem);
- if (unlikely(!has_allocation(file))) {
- up_write(&data->sem);
- pr_warning("pmem: something is very wrong, you are "
- "closing a vm backing an allocation that doesn't "
- "exist!\n");
- return;
- }
- if (data->vma == vma) {
- data->vma = NULL;
- if ((data->flags & PMEM_FLAGS_CONNECTED) &&
- (data->flags & PMEM_FLAGS_SUBMAP))
- data->flags |= PMEM_FLAGS_UNSUBMAP;
- }
- /* the kernel is going to free this vma now anyway */
- up_write(&data->sem);
-}
-
-static struct vm_operations_struct vm_ops = {
- .open = pmem_vma_open,
- .close = pmem_vma_close,
-};
-
-static int pmem_mmap(struct file *file, struct vm_area_struct *vma)
-{
- struct pmem_data *data = file->private_data;
- int index = -1;
- unsigned long vma_size = vma->vm_end - vma->vm_start;
- int ret = 0, id = get_id(file);
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
-
- if (!data) {
- pr_err("pmem: Invalid file descriptor, no private data\n");
- return -EINVAL;
- }
- DLOG("pid %u(%s) mmap vma_size %lu on dev %s(id: %d)\n", current->pid,
- get_task_comm(currtask_name, current), vma_size,
- get_name(file), id);
- if (vma->vm_pgoff || !PMEM_IS_PAGE_ALIGNED(vma_size)) {
-#if PMEM_DEBUG
- pr_err("pmem: mmaps must be at offset zero, aligned"
- " and a multiple of pages_size.\n");
-#endif
- return -EINVAL;
- }
-
- down_write(&data->sem);
- /* check this file isn't already mmaped, for submaps check this file
- * has never been mmaped */
- if ((data->flags & PMEM_FLAGS_SUBMAP) ||
- (data->flags & PMEM_FLAGS_UNSUBMAP)) {
-#if PMEM_DEBUG
- pr_err("pmem: you can only mmap a pmem file once, "
- "this file is already mmaped. %x\n", data->flags);
-#endif
- ret = -EINVAL;
- goto error;
- }
- /* if file->private_data == unalloced, alloc*/
- if (data->index == -1) {
- mutex_lock(&pmem[id].arena_mutex);
- index = pmem_allocate_from_id(id,
- vma->vm_end - vma->vm_start,
- SZ_4K);
- mutex_unlock(&pmem[id].arena_mutex);
- /* either no space was available or an error occurred */
- if (index == -1) {
- pr_err("pmem: mmap unable to allocate memory"
- "on %s\n", get_name(file));
- ret = -ENOMEM;
- goto error;
- }
- /* store the index of a successful allocation */
- data->index = index;
- }
-
- if (pmem[id].len(id, data) < vma_size) {
-#if PMEM_DEBUG
- pr_err("pmem: mmap size [%lu] does not match"
- " size of backing region [%lu].\n", vma_size,
- pmem[id].len(id, data));
-#endif
- ret = -EINVAL;
- goto error;
- }
-
- vma->vm_pgoff = pmem[id].start_addr(id, data) >> PAGE_SHIFT;
-
- vma->vm_page_prot = pmem_phys_mem_access_prot(file, vma->vm_page_prot);
-
- if (data->flags & PMEM_FLAGS_CONNECTED) {
- struct pmem_region_node *region_node;
- struct list_head *elt;
- if (pmem_map_garbage(id, vma, data, 0, vma_size)) {
- pr_alert("pmem: mmap failed in kernel!\n");
- ret = -EAGAIN;
- goto error;
- }
- list_for_each(elt, &data->region_list) {
- region_node = list_entry(elt, struct pmem_region_node,
- list);
- DLOG("remapping file: %p %lx %lx\n", file,
- region_node->region.offset,
- region_node->region.len);
- if (pmem_remap_pfn_range(id, vma, data,
- region_node->region.offset,
- region_node->region.len)) {
- ret = -EAGAIN;
- goto error;
- }
- }
- data->flags |= PMEM_FLAGS_SUBMAP;
- get_task_struct(current->group_leader);
- data->task = current->group_leader;
- data->vma = vma;
-#if PMEM_DEBUG
- data->pid = current->pid;
-#endif
- DLOG("submmapped file %p vma %p pid %u\n", file, vma,
- current->pid);
- } else {
- if (pmem_map_pfn_range(id, vma, data, 0, vma_size)) {
- pr_err("pmem: mmap failed in kernel!\n");
- ret = -EAGAIN;
- goto error;
- }
- data->flags |= PMEM_FLAGS_MASTERMAP;
- data->pid = current->pid;
- }
- vma->vm_ops = &vm_ops;
-error:
- up_write(&data->sem);
- return ret;
-}
-
-/* the following are the api for accessing pmem regions by other drivers
- * from inside the kernel */
-int get_pmem_user_addr(struct file *file, unsigned long *start,
- unsigned long *len)
-{
- int ret = -1;
-
- if (is_pmem_file(file)) {
- struct pmem_data *data = file->private_data;
-
- down_read(&data->sem);
- if (has_allocation(file)) {
- if (data->vma) {
- *start = data->vma->vm_start;
- *len = data->vma->vm_end - data->vma->vm_start;
- } else {
- *start = *len = 0;
-#if PMEM_DEBUG
- pr_err("pmem: %s: no vma present.\n",
- __func__);
-#endif
- }
- ret = 0;
- }
- up_read(&data->sem);
- }
-
-#if PMEM_DEBUG
- if (ret)
- pr_err("pmem: %s: requested pmem data from invalid"
- "file.\n", __func__);
-#endif
- return ret;
-}
-
-int get_pmem_addr(struct file *file, unsigned long *start,
- unsigned long *vstart, unsigned long *len)
-{
- int ret = -1;
-
- if (is_pmem_file(file)) {
- struct pmem_data *data = file->private_data;
-
- down_read(&data->sem);
- if (has_allocation(file)) {
- int id = get_id(file);
-
- *start = pmem[id].start_addr(id, data);
- *len = pmem[id].len(id, data);
- *vstart = (unsigned long)
- pmem_start_vaddr(id, data);
- up_read(&data->sem);
-#if PMEM_DEBUG
- down_write(&data->sem);
- data->ref++;
- up_write(&data->sem);
-#endif
- DLOG("returning start %#lx len %lu "
- "vstart %#lx\n",
- *start, *len, *vstart);
- ret = 0;
- } else {
- up_read(&data->sem);
- }
- }
- return ret;
-}
-
-int get_pmem_file(unsigned int fd, unsigned long *start, unsigned long *vstart,
- unsigned long *len, struct file **filp)
-{
- int ret = -1;
- struct file *file = fget(fd);
-
- if (unlikely(file == NULL)) {
- pr_err("pmem: %s: requested data from file "
- "descriptor that doesn't exist.\n", __func__);
- } else {
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("filp %p rdev %d pid %u(%s) file %p(%ld)"
- " dev %s(id: %d)\n", filp,
- file->f_dentry->d_inode->i_rdev,
- current->pid, get_task_comm(currtask_name, current),
- file, file_count(file), get_name(file), get_id(file));
-
- if (!get_pmem_addr(file, start, vstart, len)) {
- if (filp)
- *filp = file;
- ret = 0;
- } else {
- fput(file);
- }
- }
- return ret;
-}
-EXPORT_SYMBOL(get_pmem_file);
-
-int get_pmem_fd(int fd, unsigned long *start, unsigned long *len)
-{
- unsigned long vstart;
- return get_pmem_file(fd, start, &vstart, len, NULL);
-}
-EXPORT_SYMBOL(get_pmem_fd);
-
-void put_pmem_file(struct file *file)
-{
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("rdev %d pid %u(%s) file %p(%ld)" " dev %s(id: %d)\n",
- file->f_dentry->d_inode->i_rdev, current->pid,
- get_task_comm(currtask_name, current), file,
- file_count(file), get_name(file), get_id(file));
- if (is_pmem_file(file)) {
-#if PMEM_DEBUG
- struct pmem_data *data = file->private_data;
-
- down_write(&data->sem);
- if (!data->ref--) {
- data->ref++;
- pr_alert("pmem: pmem_put > pmem_get %s "
- "(pid %d)\n",
- pmem[get_id(file)].dev.name, data->pid);
- BUG();
- }
- up_write(&data->sem);
-#endif
- fput(file);
- }
-}
-EXPORT_SYMBOL(put_pmem_file);
-
-void put_pmem_fd(int fd)
-{
- int put_needed;
- struct file *file = fget_light(fd, &put_needed);
-
- if (file) {
- put_pmem_file(file);
- fput_light(file, put_needed);
- }
-}
-
-void flush_pmem_fd(int fd, unsigned long offset, unsigned long len)
-{
- int fput_needed;
- struct file *file = fget_light(fd, &fput_needed);
-
- if (file) {
- flush_pmem_file(file, offset, len);
- fput_light(file, fput_needed);
- }
-}
-
-void flush_pmem_file(struct file *file, unsigned long offset, unsigned long len)
-{
- struct pmem_data *data;
- int id;
- void *vaddr;
- struct pmem_region_node *region_node;
- struct list_head *elt;
- void *flush_start, *flush_end;
-#ifdef CONFIG_OUTER_CACHE
- unsigned long phy_start, phy_end;
-#endif
- if (!is_pmem_file(file))
- return;
-
- id = get_id(file);
- if (!pmem[id].cached)
- return;
-
- /* is_pmem_file fails if !file */
- data = file->private_data;
-
- down_read(&data->sem);
- if (!has_allocation(file))
- goto end;
-
- vaddr = pmem_start_vaddr(id, data);
-
- if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_SYSTEM) {
- dmac_flush_range(vaddr,
- (void *)((unsigned long)vaddr +
- ((struct alloc_list *)(data->index))->size));
-#ifdef CONFIG_OUTER_CACHE
- phy_start = pmem_start_addr_system(id, data);
-
- phy_end = phy_start +
- ((struct alloc_list *)(data->index))->size;
-
- outer_flush_range(phy_start, phy_end);
-#endif
- goto end;
- }
- /* if this isn't a submmapped file, flush the whole thing */
- if (unlikely(!(data->flags & PMEM_FLAGS_CONNECTED))) {
- dmac_flush_range(vaddr, vaddr + pmem[id].len(id, data));
-#ifdef CONFIG_OUTER_CACHE
- phy_start = (unsigned long)vaddr -
- (unsigned long)pmem[id].vbase + pmem[id].base;
-
- phy_end = phy_start + pmem[id].len(id, data);
-
- outer_flush_range(phy_start, phy_end);
-#endif
- goto end;
- }
- /* otherwise, flush the region of the file we are drawing */
- list_for_each(elt, &data->region_list) {
- region_node = list_entry(elt, struct pmem_region_node, list);
- if ((offset >= region_node->region.offset) &&
- ((offset + len) <= (region_node->region.offset +
- region_node->region.len))) {
- flush_start = vaddr + region_node->region.offset;
- flush_end = flush_start + region_node->region.len;
- dmac_flush_range(flush_start, flush_end);
-#ifdef CONFIG_OUTER_CACHE
-
- phy_start = (unsigned long)flush_start -
- (unsigned long)pmem[id].vbase + pmem[id].base;
-
- phy_end = phy_start + region_node->region.len;
-
- outer_flush_range(phy_start, phy_end);
-#endif
- break;
- }
- }
-end:
- up_read(&data->sem);
-}
-
-int pmem_cache_maint(struct file *file, unsigned int cmd,
- struct pmem_addr *pmem_addr)
-{
- struct pmem_data *data;
- int id;
- unsigned long vaddr, paddr, length, offset,
- pmem_len, pmem_start_addr;
-
- /* Called from kernel-space so file may be NULL */
- if (!file)
- return -EBADF;
-
- /*
- * check that the vaddr passed for flushing is valid
- * so that you don't crash the kernel
- */
- if (!pmem_addr->vaddr)
- return -EINVAL;
-
- data = file->private_data;
- id = get_id(file);
-
- if (!pmem[id].cached)
- return 0;
-
- offset = pmem_addr->offset;
- length = pmem_addr->length;
-
- down_read(&data->sem);
- if (!has_allocation(file)) {
- up_read(&data->sem);
- return -EINVAL;
- }
- pmem_len = pmem[id].len(id, data);
- pmem_start_addr = pmem[id].start_addr(id, data);
- up_read(&data->sem);
-
- if (offset + length > pmem_len)
- return -EINVAL;
-
- vaddr = pmem_addr->vaddr;
- paddr = pmem_start_addr + offset;
-
- DLOG("pmem cache maint on dev %s(id: %d)"
- "(vaddr %lx paddr %lx len %lu bytes)\n",
- get_name(file), id, vaddr, paddr, length);
- if (cmd == PMEM_CLEAN_INV_CACHES)
- clean_and_invalidate_caches(vaddr,
- length, paddr);
- else if (cmd == PMEM_CLEAN_CACHES)
- clean_caches(vaddr, length, paddr);
- else if (cmd == PMEM_INV_CACHES)
- invalidate_caches(vaddr, length, paddr);
-
- return 0;
-}
-EXPORT_SYMBOL(pmem_cache_maint);
-
-static int pmem_connect(unsigned long connect, struct file *file)
-{
- int ret = 0, put_needed;
- struct file *src_file;
-
- if (!file) {
- pr_err("pmem: %s: NULL file pointer passed in, "
- "bailing out!\n", __func__);
- ret = -EINVAL;
- goto leave;
- }
-
- src_file = fget_light(connect, &put_needed);
-
- if (!src_file) {
- pr_err("pmem: %s: src file not found!\n", __func__);
- ret = -EBADF;
- goto leave;
- }
-
- if (src_file == file) { /* degenerative case, operator error */
- pr_err("pmem: %s: src_file and passed in file are "
- "the same; refusing to connect to self!\n", __func__);
- ret = -EINVAL;
- goto put_src_file;
- }
-
- if (unlikely(!is_pmem_file(src_file))) {
- pr_err("pmem: %s: src file is not a pmem file!\n",
- __func__);
- ret = -EINVAL;
- goto put_src_file;
- } else {
- struct pmem_data *src_data = src_file->private_data;
-
- if (!src_data) {
- pr_err("pmem: %s: src file pointer has no"
- "private data, bailing out!\n", __func__);
- ret = -EINVAL;
- goto put_src_file;
- }
-
- down_read(&src_data->sem);
-
- if (unlikely(!has_allocation(src_file))) {
- up_read(&src_data->sem);
- pr_err("pmem: %s: src file has no allocation!\n",
- __func__);
- ret = -EINVAL;
- } else {
- struct pmem_data *data;
- int src_index = src_data->index;
-
- up_read(&src_data->sem);
-
- data = file->private_data;
- if (!data) {
- pr_err("pmem: %s: passed in file "
- "pointer has no private data, bailing"
- " out!\n", __func__);
- ret = -EINVAL;
- goto put_src_file;
- }
-
- down_write(&data->sem);
- if (has_allocation(file) &&
- (data->index != src_index)) {
- up_write(&data->sem);
-
- pr_err("pmem: %s: file is already "
- "mapped but doesn't match this "
- "src_file!\n", __func__);
- ret = -EINVAL;
- } else {
- data->index = src_index;
- data->flags |= PMEM_FLAGS_CONNECTED;
- data->master_fd = connect;
- data->master_file = src_file;
-
- up_write(&data->sem);
-
- DLOG("connect %p to %p\n", file, src_file);
- }
- }
- }
-put_src_file:
- fput_light(src_file, put_needed);
-leave:
- return ret;
-}
-
-static void pmem_unlock_data_and_mm(struct pmem_data *data,
- struct mm_struct *mm)
-{
- up_write(&data->sem);
- if (mm != NULL) {
- up_write(&mm->mmap_sem);
- mmput(mm);
- }
-}
-
-static int pmem_lock_data_and_mm(struct file *file, struct pmem_data *data,
- struct mm_struct **locked_mm)
-{
- int ret = 0;
- struct mm_struct *mm = NULL;
-#if PMEM_DEBUG_MSGS
- char currtask_name[FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
- DLOG("pid %u(%s) file %p(%ld)\n",
- current->pid, get_task_comm(currtask_name, current),
- file, file_count(file));
-
- *locked_mm = NULL;
-lock_mm:
- down_read(&data->sem);
- if (PMEM_IS_SUBMAP(data)) {
- mm = get_task_mm(data->task);
- if (!mm) {
- up_read(&data->sem);
-#if PMEM_DEBUG
- pr_alert("pmem: can't remap - task is gone!\n");
-#endif
- return -1;
- }
- }
- up_read(&data->sem);
-
- if (mm)
- down_write(&mm->mmap_sem);
-
- down_write(&data->sem);
- /* check that the file didn't get mmaped before we could take the
- * data sem, this should be safe b/c you can only submap each file
- * once */
- if (PMEM_IS_SUBMAP(data) && !mm) {
- pmem_unlock_data_and_mm(data, mm);
- DLOG("mapping contention, repeating mmap op\n");
- goto lock_mm;
- }
- /* now check that vma.mm is still there, it could have been
- * deleted by vma_close before we could get the data->sem */
- if ((data->flags & PMEM_FLAGS_UNSUBMAP) && (mm != NULL)) {
- /* might as well release this */
- if (data->flags & PMEM_FLAGS_SUBMAP) {
- put_task_struct(data->task);
- data->task = NULL;
- /* lower the submap flag to show the mm is gone */
- data->flags &= ~(PMEM_FLAGS_SUBMAP);
- }
- pmem_unlock_data_and_mm(data, mm);
-#if PMEM_DEBUG
- pr_alert("pmem: vma.mm went away!\n");
-#endif
- return -1;
- }
- *locked_mm = mm;
- return ret;
-}
-
-int pmem_remap(struct pmem_region *region, struct file *file,
- unsigned operation)
-{
- int ret;
- struct pmem_region_node *region_node;
- struct mm_struct *mm = NULL;
- struct list_head *elt, *elt2;
- int id = get_id(file);
- struct pmem_data *data;
-
- DLOG("operation %#x, region offset %ld, region len %ld\n",
- operation, region->offset, region->len);
-
- if (!is_pmem_file(file)) {
-#if PMEM_DEBUG
- pr_err("pmem: remap request for non-pmem file descriptor\n");
-#endif
- return -EINVAL;
- }
-
- /* is_pmem_file fails if !file */
- data = file->private_data;
-
- /* pmem region must be aligned on a page boundry */
- if (unlikely(!PMEM_IS_PAGE_ALIGNED(region->offset) ||
- !PMEM_IS_PAGE_ALIGNED(region->len))) {
-#if PMEM_DEBUG
- pr_err("pmem: request for unaligned pmem"
- "suballocation %lx %lx\n",
- region->offset, region->len);
-#endif
- return -EINVAL;
- }
-
- /* if userspace requests a region of len 0, there's nothing to do */
- if (region->len == 0)
- return 0;
-
- /* lock the mm and data */
- ret = pmem_lock_data_and_mm(file, data, &mm);
- if (ret)
- return 0;
-
- /* only the owner of the master file can remap the client fds
- * that back in it */
- if (!is_master_owner(file)) {
-#if PMEM_DEBUG
- pr_err("pmem: remap requested from non-master process\n");
-#endif
- ret = -EINVAL;
- goto err;
- }
-
- /* check that the requested range is within the src allocation */
- if (unlikely((region->offset > pmem[id].len(id, data)) ||
- (region->len > pmem[id].len(id, data)) ||
- (region->offset + region->len > pmem[id].len(id, data)))) {
-#if PMEM_DEBUG
- pr_err("pmem: suballoc doesn't fit in src_file!\n");
-#endif
- ret = -EINVAL;
- goto err;
- }
-
- if (operation == PMEM_MAP) {
- region_node = kmalloc(sizeof(struct pmem_region_node),
- GFP_KERNEL);
- if (!region_node) {
- ret = -ENOMEM;
-#if PMEM_DEBUG
- pr_alert("pmem: No space to allocate remap metadata!");
-#endif
- goto err;
- }
- region_node->region = *region;
- list_add(&region_node->list, &data->region_list);
- } else if (operation == PMEM_UNMAP) {
- int found = 0;
- list_for_each_safe(elt, elt2, &data->region_list) {
- region_node = list_entry(elt, struct pmem_region_node,
- list);
- if (region->len == 0 ||
- (region_node->region.offset == region->offset &&
- region_node->region.len == region->len)) {
- list_del(elt);
- kfree(region_node);
- found = 1;
- }
- }
- if (!found) {
-#if PMEM_DEBUG
- pr_err("pmem: Unmap region does not map any"
- " mapped region!");
-#endif
- ret = -EINVAL;
- goto err;
- }
- }
-
- if (data->vma && PMEM_IS_SUBMAP(data)) {
- if (operation == PMEM_MAP)
- ret = pmem_remap_pfn_range(id, data->vma, data,
- region->offset, region->len);
- else if (operation == PMEM_UNMAP)
- ret = pmem_unmap_pfn_range(id, data->vma, data,
- region->offset, region->len);
- }
-
-err:
- pmem_unlock_data_and_mm(data, mm);
- return ret;
-}
-
-static void pmem_revoke(struct file *file, struct pmem_data *data)
-{
- struct pmem_region_node *region_node;
- struct list_head *elt, *elt2;
- struct mm_struct *mm = NULL;
- int id = get_id(file);
- int ret = 0;
-
- data->master_file = NULL;
- ret = pmem_lock_data_and_mm(file, data, &mm);
- /* if lock_data_and_mm fails either the task that mapped the fd, or
- * the vma that mapped it have already gone away, nothing more
- * needs to be done */
- if (ret)
- return;
- /* unmap everything */
- /* delete the regions and region list nothing is mapped any more */
- if (data->vma)
- list_for_each_safe(elt, elt2, &data->region_list) {
- region_node = list_entry(elt, struct pmem_region_node,
- list);
- pmem_unmap_pfn_range(id, data->vma, data,
- region_node->region.offset,
- region_node->region.len);
- list_del(elt);
- kfree(region_node);
- }
- /* delete the master file */
- pmem_unlock_data_and_mm(data, mm);
-}
-
-static void pmem_get_size(struct pmem_region *region, struct file *file)
-{
- /* called via ioctl file op, so file guaranteed to be not NULL */
- struct pmem_data *data = file->private_data;
- int id = get_id(file);
-
- down_read(&data->sem);
- if (!has_allocation(file)) {
- region->offset = 0;
- region->len = 0;
- } else {
- region->offset = pmem[id].start_addr(id, data);
- region->len = pmem[id].len(id, data);
- }
- up_read(&data->sem);
- DLOG("offset 0x%lx len 0x%lx\n", region->offset, region->len);
-}
-
-
-static long pmem_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
-{
- /* called from user space as file op, so file guaranteed to be not
- * NULL
- */
- struct pmem_data *data = file->private_data;
- int id = get_id(file);
-#if PMEM_DEBUG_MSGS
- char currtask_name[
- FIELD_SIZEOF(struct task_struct, comm) + 1];
-#endif
-
- DLOG("pid %u(%s) file %p(%ld) cmd %#x, dev %s(id: %d)\n",
- current->pid, get_task_comm(currtask_name, current),
- file, file_count(file), cmd, get_name(file), id);
-
- switch (cmd) {
- case PMEM_GET_PHYS:
- {
- struct pmem_region region;
-
- DLOG("get_phys\n");
- down_read(&data->sem);
- if (!has_allocation(file)) {
- region.offset = 0;
- region.len = 0;
- } else {
- region.offset = pmem[id].start_addr(id, data);
- region.len = pmem[id].len(id, data);
- }
- up_read(&data->sem);
-
- if (copy_to_user((void __user *)arg, &region,
- sizeof(struct pmem_region)))
- return -EFAULT;
-
- DLOG("pmem: successful request for "
- "physical address of pmem region id %d, "
- "offset 0x%lx, len 0x%lx\n",
- id, region.offset, region.len);
-
- break;
- }
- case PMEM_MAP:
- {
- struct pmem_region region;
- DLOG("map\n");
- if (copy_from_user(&region, (void __user *)arg,
- sizeof(struct pmem_region)))
- return -EFAULT;
- return pmem_remap(&region, file, PMEM_MAP);
- }
- break;
- case PMEM_UNMAP:
- {
- struct pmem_region region;
- DLOG("unmap\n");
- if (copy_from_user(&region, (void __user *)arg,
- sizeof(struct pmem_region)))
- return -EFAULT;
- return pmem_remap(&region, file, PMEM_UNMAP);
- break;
- }
- case PMEM_GET_SIZE:
- {
- struct pmem_region region;
- DLOG("get_size\n");
- pmem_get_size(&region, file);
- if (copy_to_user((void __user *)arg, &region,
- sizeof(struct pmem_region)))
- return -EFAULT;
- break;
- }
- case PMEM_GET_TOTAL_SIZE:
- {
- struct pmem_region region;
- DLOG("get total size\n");
- region.offset = 0;
- get_id(file);
- region.len = pmem[id].size;
- if (copy_to_user((void __user *)arg, &region,
- sizeof(struct pmem_region)))
- return -EFAULT;
- break;
- }
- case PMEM_GET_FREE_SPACE:
- {
- struct pmem_freespace fs;
- DLOG("get freespace on %s(id: %d)\n",
- get_name(file), id);
-
- mutex_lock(&pmem[id].arena_mutex);
- pmem[id].free_space(id, &fs);
- mutex_unlock(&pmem[id].arena_mutex);
-
- DLOG("%s(id: %d) total free %lu, largest %lu\n",
- get_name(file), id, fs.total, fs.largest);
-
- if (copy_to_user((void __user *)arg, &fs,
- sizeof(struct pmem_freespace)))
- return -EFAULT;
- break;
- }
-
- case PMEM_ALLOCATE:
- {
- int ret = 0;
- DLOG("allocate, id %d\n", id);
- down_write(&data->sem);
- if (has_allocation(file)) {
- pr_err("pmem: Existing allocation found on "
- "this file descrpitor\n");
- up_write(&data->sem);
- return -EINVAL;
- }
-
- mutex_lock(&pmem[id].arena_mutex);
- data->index = pmem_allocate_from_id(id,
- arg,
- SZ_4K);
- mutex_unlock(&pmem[id].arena_mutex);
- ret = data->index == -1 ? -ENOMEM :
- data->index;
- up_write(&data->sem);
- return ret;
- }
- case PMEM_ALLOCATE_ALIGNED:
- {
- struct pmem_allocation alloc;
- int ret = 0;
-
- if (copy_from_user(&alloc, (void __user *)arg,
- sizeof(struct pmem_allocation)))
- return -EFAULT;
- DLOG("allocate id align %d %u\n", id, alloc.align);
- down_write(&data->sem);
- if (has_allocation(file)) {
- pr_err("pmem: Existing allocation found on "
- "this file descrpitor\n");
- up_write(&data->sem);
- return -EINVAL;
- }
-
- if (alloc.align & (alloc.align - 1)) {
- pr_err("pmem: Alignment is not a power of 2\n");
- return -EINVAL;
- }
-
- if (alloc.align != SZ_4K &&
- (pmem[id].allocator_type !=
- PMEM_ALLOCATORTYPE_BITMAP)) {
- pr_err("pmem: Non 4k alignment requires bitmap"
- " allocator on %s\n", pmem[id].name);
- return -EINVAL;
- }
-
- if (alloc.align > SZ_1M ||
- alloc.align < SZ_4K) {
- pr_err("pmem: Invalid Alignment (%u) "
- "specified\n", alloc.align);
- return -EINVAL;
- }
-
- mutex_lock(&pmem[id].arena_mutex);
- data->index = pmem_allocate_from_id(id,
- alloc.size,
- alloc.align);
- mutex_unlock(&pmem[id].arena_mutex);
- ret = data->index == -1 ? -ENOMEM :
- data->index;
- up_write(&data->sem);
- return ret;
- }
- case PMEM_CONNECT:
- DLOG("connect\n");
- return pmem_connect(arg, file);
- case PMEM_CLEAN_INV_CACHES:
- case PMEM_CLEAN_CACHES:
- case PMEM_INV_CACHES:
- {
- struct pmem_addr pmem_addr;
-
- if (copy_from_user(&pmem_addr, (void __user *)arg,
- sizeof(struct pmem_addr)))
- return -EFAULT;
-
- return pmem_cache_maint(file, cmd, &pmem_addr);
- }
- default:
- if (pmem[id].ioctl)
- return pmem[id].ioctl(file, cmd, arg);
-
- DLOG("ioctl invalid (%#x)\n", cmd);
- return -EINVAL;
- }
- return 0;
-}
-
-static void ioremap_pmem(int id)
-{
- unsigned long addr;
- const struct mem_type *type;
-
- DLOG("PMEMDEBUG: ioremaping for %s\n", pmem[id].name);
- if (pmem[id].map_on_demand) {
- addr = (unsigned long)pmem[id].area->addr;
- if (pmem[id].cached)
- type = get_mem_type(MT_DEVICE_CACHED);
- else
- type = get_mem_type(MT_DEVICE);
- DLOG("PMEMDEBUG: Remap phys %lx to virt %lx on %s\n",
- pmem[id].base, addr, pmem[id].name);
- if (ioremap_pages(addr, pmem[id].base, pmem[id].size, type)) {
- pr_err("pmem: Failed to map pages\n");
- BUG();
- }
- pmem[id].vbase = pmem[id].area->addr;
- /* Flush the cache after installing page table entries to avoid
- * aliasing when these pages are remapped to user space.
- */
- flush_cache_vmap(addr, addr + pmem[id].size);
- } else {
- if (pmem[id].cached)
- pmem[id].vbase = ioremap_cached(pmem[id].base,
- pmem[id].size);
- #ifdef ioremap_ext_buffered
- else if (pmem[id].buffered)
- pmem[id].vbase = ioremap_ext_buffered(pmem[id].base,
- pmem[id].size);
- #endif
- else
- pmem[id].vbase = ioremap(pmem[id].base, pmem[id].size);
- }
-}
-
-int pmem_setup(struct android_pmem_platform_data *pdata,
- long (*ioctl)(struct file *, unsigned int, unsigned long),
- int (*release)(struct inode *, struct file *))
-{
- int i, index = 0, id;
- struct vm_struct *pmem_vma = NULL;
- struct page *page;
-
- if (id_count >= PMEM_MAX_DEVICES) {
- pr_alert("pmem: %s: unable to register driver(%s) - no more "
- "devices available!\n", __func__, pdata->name);
- goto err_no_mem;
- }
-
- if (!pdata->size) {
- pr_alert("pmem: %s: unable to register pmem driver(%s) - zero "
- "size passed in!\n", __func__, pdata->name);
- goto err_no_mem;
- }
-
- id = id_count++;
-
- pmem[id].id = id;
-
- if (pmem[id].allocate) {
- pr_alert("pmem: %s: unable to register pmem driver - "
- "duplicate registration of %s!\n",
- __func__, pdata->name);
- goto err_no_mem;
- }
-
- pmem[id].allocator_type = pdata->allocator_type;
-
- /* 'quantum' is a "hidden" variable that defaults to 0 in the board
- * files */
- pmem[id].quantum = pdata->quantum ?: PMEM_MIN_ALLOC;
- if (pmem[id].quantum < PMEM_MIN_ALLOC ||
- !is_power_of_2(pmem[id].quantum)) {
- pr_alert("pmem: %s: unable to register pmem driver %s - "
- "invalid quantum value (%#x)!\n",
- __func__, pdata->name, pmem[id].quantum);
- goto err_reset_pmem_info;
- }
-
- if (pdata->size % pmem[id].quantum) {
- /* bad alignment for size! */
- pr_alert("pmem: %s: Unable to register driver %s - "
- "memory region size (%#lx) is not a multiple of "
- "quantum size(%#x)!\n", __func__, pdata->name,
- pdata->size, pmem[id].quantum);
- goto err_reset_pmem_info;
- }
-
- pmem[id].cached = pdata->cached;
- pmem[id].buffered = pdata->buffered;
- pmem[id].size = pdata->size;
- pmem[id].memory_type = pdata->memory_type;
- strlcpy(pmem[id].name, pdata->name, PMEM_NAME_SIZE);
-
- pmem[id].num_entries = pmem[id].size / pmem[id].quantum;
-
- memset(&pmem[id].kobj, 0, sizeof(pmem[0].kobj));
- pmem[id].kobj.kset = pmem_kset;
-
- switch (pmem[id].allocator_type) {
- case PMEM_ALLOCATORTYPE_ALLORNOTHING:
- pmem[id].allocate = pmem_allocator_all_or_nothing;
- pmem[id].free = pmem_free_all_or_nothing;
- pmem[id].free_space = pmem_free_space_all_or_nothing;
- pmem[id].len = pmem_len_all_or_nothing;
- pmem[id].start_addr = pmem_start_addr_all_or_nothing;
- pmem[id].num_entries = 1;
- pmem[id].quantum = pmem[id].size;
- pmem[id].allocator.all_or_nothing.allocated = 0;
-
- if (kobject_init_and_add(&pmem[id].kobj,
- &pmem_allornothing_ktype, NULL,
- "%s", pdata->name))
- goto out_put_kobj;
-
- break;
-
- case PMEM_ALLOCATORTYPE_BUDDYBESTFIT:
- pmem[id].allocator.buddy_bestfit.buddy_bitmap = kmalloc(
- pmem[id].num_entries * sizeof(struct pmem_bits),
- GFP_KERNEL);
- if (!pmem[id].allocator.buddy_bestfit.buddy_bitmap)
- goto err_reset_pmem_info;
-
- memset(pmem[id].allocator.buddy_bestfit.buddy_bitmap, 0,
- sizeof(struct pmem_bits) * pmem[id].num_entries);
-
- for (i = sizeof(pmem[id].num_entries) * 8 - 1; i >= 0; i--)
- if ((pmem[id].num_entries) & 1<<i) {
- PMEM_BUDDY_ORDER(id, index) = i;
- index = PMEM_BUDDY_NEXT_INDEX(id, index);
- }
- pmem[id].allocate = pmem_allocator_buddy_bestfit;
- pmem[id].free = pmem_free_buddy_bestfit;
- pmem[id].free_space = pmem_free_space_buddy_bestfit;
- pmem[id].len = pmem_len_buddy_bestfit;
- pmem[id].start_addr = pmem_start_addr_buddy_bestfit;
- if (kobject_init_and_add(&pmem[id].kobj,
- &pmem_buddy_bestfit_ktype, NULL,
- "%s", pdata->name))
- goto out_put_kobj;
-
- break;
-
- case PMEM_ALLOCATORTYPE_BITMAP: /* 0, default if not explicit */
- pmem[id].allocator.bitmap.bitm_alloc = kmalloc(
- PMEM_INITIAL_NUM_BITMAP_ALLOCATIONS *
- sizeof(*pmem[id].allocator.bitmap.bitm_alloc),
- GFP_KERNEL);
- if (!pmem[id].allocator.bitmap.bitm_alloc) {
- pr_alert("pmem: %s: Unable to register pmem "
- "driver %s - can't allocate "
- "bitm_alloc!\n",
- __func__, pdata->name);
- goto err_reset_pmem_info;
- }
-
- if (kobject_init_and_add(&pmem[id].kobj,
- &pmem_bitmap_ktype, NULL,
- "%s", pdata->name))
- goto out_put_kobj;
-
- for (i = 0; i < PMEM_INITIAL_NUM_BITMAP_ALLOCATIONS; i++) {
- pmem[id].allocator.bitmap.bitm_alloc[i].bit = -1;
- pmem[id].allocator.bitmap.bitm_alloc[i].quanta = 0;
- }
-
- pmem[id].allocator.bitmap.bitmap_allocs =
- PMEM_INITIAL_NUM_BITMAP_ALLOCATIONS;
-
- pmem[id].allocator.bitmap.bitmap =
- kcalloc((pmem[id].num_entries + 31) / 32,
- sizeof(unsigned int), GFP_KERNEL);
- if (!pmem[id].allocator.bitmap.bitmap) {
- pr_alert("pmem: %s: Unable to register pmem "
- "driver - can't allocate bitmap!\n",
- __func__);
- goto err_cant_register_device;
- }
- pmem[id].allocator.bitmap.bitmap_free = pmem[id].num_entries;
-
- pmem[id].allocate = pmem_allocator_bitmap;
- pmem[id].free = pmem_free_bitmap;
- pmem[id].free_space = pmem_free_space_bitmap;
- pmem[id].len = pmem_len_bitmap;
- pmem[id].start_addr = pmem_start_addr_bitmap;
-
- DLOG("bitmap allocator id %d (%s), num_entries %u, raw size "
- "%lu, quanta size %u\n",
- id, pdata->name, pmem[id].allocator.bitmap.bitmap_free,
- pmem[id].size, pmem[id].quantum);
- break;
-
- case PMEM_ALLOCATORTYPE_SYSTEM:
-
- INIT_LIST_HEAD(&pmem[id].allocator.system_mem.alist);
-
- pmem[id].allocator.system_mem.used = 0;
- pmem[id].vbase = NULL;
-
- if (kobject_init_and_add(&pmem[id].kobj,
- &pmem_system_ktype, NULL,
- "%s", pdata->name))
- goto out_put_kobj;
-
- pmem[id].allocate = pmem_allocator_system;
- pmem[id].free = pmem_free_system;
- pmem[id].free_space = pmem_free_space_system;
- pmem[id].len = pmem_len_system;
- pmem[id].start_addr = pmem_start_addr_system;
- pmem[id].num_entries = 0;
- pmem[id].quantum = PAGE_SIZE;
-
- DLOG("system allocator id %d (%s), raw size %lu\n",
- id, pdata->name, pmem[id].size);
- break;
-
- default:
- pr_alert("Invalid allocator type (%d) for pmem driver\n",
- pdata->allocator_type);
- goto err_reset_pmem_info;
- }
-
- pmem[id].ioctl = ioctl;
- pmem[id].release = release;
- mutex_init(&pmem[id].arena_mutex);
- mutex_init(&pmem[id].data_list_mutex);
- INIT_LIST_HEAD(&pmem[id].data_list);
-
- pmem[id].dev.name = pdata->name;
- pmem[id].dev.minor = id;
- pmem[id].dev.fops = &pmem_fops;
- pmem[id].reusable = pdata->reusable;
- pr_info("pmem: Initializing %s as %s\n",
- pdata->name, pdata->cached ? "cached" : "non-cached");
-
- if (misc_register(&pmem[id].dev)) {
- pr_alert("Unable to register pmem driver!\n");
- goto err_cant_register_device;
- }
-
- if (!pmem[id].reusable) {
- pmem[id].base = allocate_contiguous_memory_nomap(pmem[id].size,
- pmem[id].memory_type, PAGE_SIZE);
- if (!pmem[id].base) {
- pr_err("pmem: Cannot allocate from reserved memory for %s\n",
- pdata->name);
- goto err_misc_deregister;
- }
- }
-
- /* reusable pmem requires map on demand */
- pmem[id].map_on_demand = pdata->map_on_demand || pdata->reusable;
- if (pmem[id].map_on_demand) {
- if (pmem[id].reusable) {
- const struct fmem_data *fmem_info = fmem_get_info();
- pmem[id].area = fmem_info->area;
- pmem[id].base = fmem_info->phys;
- } else {
- pmem_vma = get_vm_area(pmem[id].size, VM_IOREMAP);
- if (!pmem_vma) {
- pr_err("pmem: Failed to allocate virtual space for "
- "%s\n", pdata->name);
- goto err_free;
- }
- pr_err("pmem: Reserving virtual address range %lx - %lx for"
- " %s\n", (unsigned long) pmem_vma->addr,
- (unsigned long) pmem_vma->addr + pmem[id].size,
- pdata->name);
- pmem[id].area = pmem_vma;
- }
- } else
- pmem[id].area = NULL;
-
- page = alloc_page(GFP_KERNEL);
- if (!page) {
- pr_err("pmem: Failed to allocate page for %s\n", pdata->name);
- goto cleanup_vm;
- }
- pmem[id].garbage_pfn = page_to_pfn(page);
- atomic_set(&pmem[id].allocation_cnt, 0);
-
- if (pdata->setup_region)
- pmem[id].region_data = pdata->setup_region();
-
- if (pdata->request_region)
- pmem[id].mem_request = pdata->request_region;
-
- if (pdata->release_region)
- pmem[id].mem_release = pdata->release_region;
-
- pr_info("allocating %lu bytes at %lx physical for %s\n",
- pmem[id].size, pmem[id].base, pmem[id].name);
-
- return 0;
-
-cleanup_vm:
- if (!pmem[id].reusable)
- remove_vm_area(pmem_vma);
-err_free:
- if (!pmem[id].reusable)
- free_contiguous_memory_by_paddr(pmem[id].base);
-err_misc_deregister:
- misc_deregister(&pmem[id].dev);
-err_cant_register_device:
-out_put_kobj:
- kobject_put(&pmem[id].kobj);
- if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_BUDDYBESTFIT)
- kfree(pmem[id].allocator.buddy_bestfit.buddy_bitmap);
- else if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_BITMAP) {
- kfree(pmem[id].allocator.bitmap.bitmap);
- kfree(pmem[id].allocator.bitmap.bitm_alloc);
- }
-err_reset_pmem_info:
- pmem[id].allocate = 0;
- pmem[id].dev.minor = -1;
-err_no_mem:
- return -1;
-}
-
-static int pmem_probe(struct platform_device *pdev)
-{
- struct android_pmem_platform_data *pdata;
-
- if (!pdev || !pdev->dev.platform_data) {
- pr_alert("Unable to probe pmem!\n");
- return -1;
- }
- pdata = pdev->dev.platform_data;
-
- pm_runtime_set_active(&pdev->dev);
- pm_runtime_enable(&pdev->dev);
-
- return pmem_setup(pdata, NULL, NULL);
-}
-
-static int pmem_remove(struct platform_device *pdev)
-{
- int id = pdev->id;
- __free_page(pfn_to_page(pmem[id].garbage_pfn));
- pm_runtime_disable(&pdev->dev);
- if (pmem[id].vbase)
- iounmap(pmem[id].vbase);
- if (pmem[id].map_on_demand && !pmem[id].reusable && pmem[id].area)
- free_vm_area(pmem[id].area);
- if (pmem[id].base)
- free_contiguous_memory_by_paddr(pmem[id].base);
- kobject_put(&pmem[id].kobj);
- if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_BUDDYBESTFIT)
- kfree(pmem[id].allocator.buddy_bestfit.buddy_bitmap);
- else if (pmem[id].allocator_type == PMEM_ALLOCATORTYPE_BITMAP) {
- kfree(pmem[id].allocator.bitmap.bitmap);
- kfree(pmem[id].allocator.bitmap.bitm_alloc);
- }
- misc_deregister(&pmem[id].dev);
- return 0;
-}
-
-static int pmem_runtime_suspend(struct device *dev)
-{
- dev_dbg(dev, "pm_runtime: suspending...\n");
- return 0;
-}
-
-static int pmem_runtime_resume(struct device *dev)
-{
- dev_dbg(dev, "pm_runtime: resuming...\n");
- return 0;
-}
-
-static const struct dev_pm_ops pmem_dev_pm_ops = {
- .runtime_suspend = pmem_runtime_suspend,
- .runtime_resume = pmem_runtime_resume,
-};
-
-static struct platform_driver pmem_driver = {
- .probe = pmem_probe,
- .remove = pmem_remove,
- .driver = { .name = "android_pmem",
- .pm = &pmem_dev_pm_ops,
- }
-};
-
-
-static int __init pmem_init(void)
-{
- /* create /sys/kernel/<PMEM_SYSFS_DIR_NAME> directory */
- pmem_kset = kset_create_and_add(PMEM_SYSFS_DIR_NAME,
- NULL, kernel_kobj);
- if (!pmem_kset) {
- pr_err("pmem(%s):kset_create_and_add fail\n", __func__);
- return -ENOMEM;
- }
-
- return platform_driver_register(&pmem_driver);
-}
-
-static void __exit pmem_exit(void)
-{
- platform_driver_unregister(&pmem_driver);
-}
-
-module_init(pmem_init);
-module_exit(pmem_exit);
-
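
As background for the removal above: the cache-flush path in the deleted pmem driver only flushed a sub-mapped region when the requested window [offset, offset + len) fell entirely inside that region, and otherwise fell back to flushing the whole allocation. Below is a minimal stand-alone sketch of that containment test; the struct and function names are illustrative, not the driver's own.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the driver's pmem_region: a sub-mapping window. */
struct region {
    unsigned long offset;
    unsigned long len;
};

/* True when [offset, offset + len) lies entirely inside the region,
 * mirroring the per-region check in the removed flush_pmem_file() loop. */
static bool region_contains(const struct region *r,
                            unsigned long offset, unsigned long len)
{
    return offset >= r->offset &&
           offset + len <= r->offset + r->len;
}

int main(void)
{
    struct region r = { .offset = 4096, .len = 8192 };

    printf("%d\n", region_contains(&r, 4096, 4096));  /* 1: fully inside */
    printf("%d\n", region_contains(&r, 8192, 8192));  /* 0: runs past the end */
    return 0;
}
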
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index c02eecf..fcadc30 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -1,5 +1,3 @@
-
-
/*Qualcomm Secure Execution Environment Communicator (QSEECOM) driver
*
* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
@@ -53,6 +51,9 @@
#define QSEE_VERSION_01 0x401000
#define QSEE_VERSION_02 0x402000
#define QSEE_VERSION_03 0x403000
+#define QSEE_VERSION_04 0x404000
+#define QSEE_VERSION_05 0x405000
+
#define QSEOS_CHECK_VERSION_CMD 0x00001803
@@ -62,6 +63,12 @@
#define QSEECOM_MAX_SG_ENTRY 512
#define QSEECOM_DISK_ENCRYTPION_KEY_ID 0
+/* Save partition image hash for authentication check */
+#define SCM_SAVE_PARTITION_HASH_ID 0x01
+
+/* Check if enterprise security is activate */
+#define SCM_IS_ACTIVATED_ID 0x02
+
enum qseecom_clk_definitions {
CLK_DFAB = 0,
CLK_SFPB,
@@ -74,6 +81,11 @@
QSEECOM_GENERIC,
};
+enum qseecom_ce_hw_instance {
+ CLK_QSEE = 0,
+ CLK_CE_DRV,
+};
+
static struct class *driver_class;
static dev_t qseecom_device_no;
static struct cdev qseecom_cdev;
@@ -85,6 +97,7 @@
static DEFINE_MUTEX(qsee_bw_mutex);
static DEFINE_MUTEX(app_access_lock);
+static DEFINE_MUTEX(clk_access_lock);
struct qseecom_registered_listener_list {
struct list_head list;
@@ -117,10 +130,12 @@
};
struct qseecom_clk {
+ enum qseecom_ce_hw_instance instance;
struct clk *ce_core_clk;
struct clk *ce_clk;
struct clk *ce_core_src_clk;
struct clk *ce_bus_clk;
+ uint32_t clk_access_cnt;
};
struct qseecom_control {
@@ -148,6 +163,7 @@
uint32_t qsee_perf_client;
struct qseecom_clk qsee;
+ struct qseecom_clk ce_drv;
};
struct qseecom_client_handle {
@@ -551,13 +567,14 @@
if (wait_event_freezable(qseecom.send_resp_wq,
__qseecom_listener_has_sent_rsp(data))) {
pr_warning("Interrupted: exiting send_cmd loop\n");
- return -ERESTARTSYS;
+ ret = -ERESTARTSYS;
}
- if (data->abort) {
- pr_err("Aborting listener service %d\n",
- data->listener.id);
- rc = -ENODEV;
+ if ((data->abort) || (ret == -ERESTARTSYS)) {
+ pr_err("Abort clnt %d waiting on lstnr svc %d, ret %d",
+ data->client.app_id, lstnr, ret);
+ if (data->abort)
+ rc = -ENODEV;
send_data_rsp.status = QSEOS_RESULT_FAILURE;
} else {
send_data_rsp.status = QSEOS_RESULT_SUCCESS;
@@ -658,7 +675,7 @@
app_id = ret;
if (app_id) {
- pr_warn("App id %d (%s) already exists\n", app_id,
+ pr_debug("App id %d (%s) already exists\n", app_id,
(char *)(req.app_name));
spin_lock_irqsave(&qseecom.registered_app_list_lock, flags);
list_for_each_entry(entry,
@@ -816,7 +833,7 @@
break;
} else {
ptr_app->ref_cnt--;
- pr_warn("Can't unload app(%d) inuse\n",
+ pr_debug("Can't unload app(%d) inuse\n",
ptr_app->app_id);
break;
}
@@ -1633,7 +1650,6 @@
data->abort = 0;
data->type = QSEECOM_CLIENT_APP;
data->released = false;
- data->client.app_id = ret;
data->client.sb_length = size;
data->client.user_virt_sb_base = 0;
data->client.ihandle = NULL;
@@ -1678,7 +1694,7 @@
*handle = NULL;
return -EINVAL;
}
-
+ data->client.app_id = ret;
if (ret > 0) {
pr_warn("App id %d for [%s] app exists\n", ret,
(char *)app_ireq.app_name);
@@ -1891,12 +1907,23 @@
return 0;
}
-static int __qseecom_enable_clk(void)
+static int __qseecom_enable_clk(enum qseecom_ce_hw_instance ce)
{
int rc = 0;
struct qseecom_clk *qclk;
+ if (ce == CLK_QSEE)
qclk = &qseecom.qsee;
+ else
+ qclk = &qseecom.ce_drv;
+
+ mutex_lock(&clk_access_lock);
+ if (qclk->clk_access_cnt > 0) {
+ qclk->clk_access_cnt++;
+ mutex_unlock(&clk_access_lock);
+ return rc;
+ }
+
/* Enable CE core clk */
rc = clk_prepare_enable(qclk->ce_core_clk);
if (rc) {
@@ -1915,6 +1942,8 @@
pr_err("Unable to enable/prepare CE bus clk\n");
goto ce_bus_clk_err;
}
+ qclk->clk_access_cnt++;
+ mutex_unlock(&clk_access_lock);
return 0;
ce_bus_clk_err:
@@ -1922,20 +1951,30 @@
ce_clk_err:
clk_disable_unprepare(qclk->ce_core_clk);
err:
+ mutex_unlock(&clk_access_lock);
return -EIO;
}
-static void __qseecom_disable_clk(void)
+static void __qseecom_disable_clk(enum qseecom_ce_hw_instance ce)
{
struct qseecom_clk *qclk;
- qclk = &qseecom.qsee;
- if (qclk->ce_clk != NULL)
- clk_disable_unprepare(qclk->ce_clk);
- if (qclk->ce_core_clk != NULL)
- clk_disable_unprepare(qclk->ce_core_clk);
- if (qclk->ce_bus_clk != NULL)
- clk_disable_unprepare(qclk->ce_bus_clk);
+ if (ce == CLK_QSEE)
+ qclk = &qseecom.qsee;
+ else
+ qclk = &qseecom.ce_drv;
+
+ mutex_lock(&clk_access_lock);
+ if (qclk->clk_access_cnt == 1) {
+ if (qclk->ce_clk != NULL)
+ clk_disable_unprepare(qclk->ce_clk);
+ if (qclk->ce_core_clk != NULL)
+ clk_disable_unprepare(qclk->ce_core_clk);
+ if (qclk->ce_bus_clk != NULL)
+ clk_disable_unprepare(qclk->ce_bus_clk);
+ }
+ qclk->clk_access_cnt--;
+ mutex_unlock(&clk_access_lock);
}
static int qsee_vote_for_clock(struct qseecom_dev_handle *data,
@@ -1957,14 +1996,14 @@
qseecom.qsee_perf_client, 3);
else {
if (qclk->ce_core_src_clk != NULL)
- ret = __qseecom_enable_clk();
+ ret = __qseecom_enable_clk(CLK_QSEE);
if (!ret) {
ret =
msm_bus_scale_client_update_request(
qseecom.qsee_perf_client, 1);
if ((ret) &&
(qclk->ce_core_src_clk != NULL))
- __qseecom_disable_clk();
+ __qseecom_disable_clk(CLK_QSEE);
}
}
if (ret)
@@ -1988,14 +2027,14 @@
qseecom.qsee_perf_client, 3);
else {
if (qclk->ce_core_src_clk != NULL)
- ret = __qseecom_enable_clk();
+ ret = __qseecom_enable_clk(CLK_QSEE);
if (!ret) {
ret =
msm_bus_scale_client_update_request(
qseecom.qsee_perf_client, 2);
if ((ret) &&
(qclk->ce_core_src_clk != NULL))
- __qseecom_disable_clk();
+ __qseecom_disable_clk(CLK_QSEE);
}
}
@@ -2046,7 +2085,7 @@
ret = msm_bus_scale_client_update_request(
qseecom.qsee_perf_client, 0);
if ((!ret) && (qclk->ce_core_src_clk != NULL))
- __qseecom_disable_clk();
+ __qseecom_disable_clk(CLK_QSEE);
}
if (ret)
pr_err("SFPB Bandwidth req fail (%d)\n",
@@ -2076,7 +2115,7 @@
ret = msm_bus_scale_client_update_request(
qseecom.qsee_perf_client, 0);
if ((!ret) && (qclk->ce_core_src_clk != NULL))
- __qseecom_disable_clk();
+ __qseecom_disable_clk(CLK_QSEE);
}
if (ret)
pr_err("SFPB Bandwidth req fail (%d)\n",
@@ -2278,7 +2317,7 @@
pr_err(" scm call to check if app is loaded failed");
return ret; /* scm call failed */
} else if (ret > 0) {
- pr_warn("App id %d (%s) already exists\n", ret,
+ pr_debug("App id %d (%s) already exists\n", ret,
(char *)(req.app_name));
spin_lock_irqsave(&qseecom.registered_app_list_lock, flags);
list_for_each_entry(entry,
@@ -2344,22 +2383,35 @@
memcpy(ireq.key_id, key_id, QSEECOM_KEY_ID_SIZE);
ireq.flags = flags;
+ ireq.qsee_command_id = QSEOS_GENERATE_KEY;
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_GENERATE_KEY,
+ __qseecom_enable_clk(CLK_QSEE);
+
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_generate_ireq),
&resp, sizeof(resp));
if (ret) {
pr_err("scm call to generate key failed : %d\n", ret);
+ __qseecom_disable_clk(CLK_QSEE);
return ret;
}
switch (resp.result) {
case QSEOS_RESULT_SUCCESS:
break;
+ case QSEOS_RESULT_FAIL_KEY_ID_EXISTS:
+ break;
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
- if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ if (ret) {
+ if (resp.result == QSEOS_RESULT_FAIL_KEY_ID_EXISTS) {
+ pr_warn("process_incomplete_cmd return Key ID exits.\n");
+ ret = 0;
+ } else {
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
+ }
+ }
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2367,6 +2419,7 @@
ret = -EINVAL;
break;
}
+ __qseecom_disable_clk(CLK_QSEE);
return ret;
}
@@ -2385,12 +2438,16 @@
memcpy(ireq.key_id, key_id, QSEECOM_KEY_ID_SIZE);
ireq.flags = flags;
+ ireq.qsee_command_id = QSEOS_DELETE_KEY;
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_DELETE_KEY,
+ __qseecom_enable_clk(CLK_QSEE);
+
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_delete_ireq),
&resp, sizeof(struct qseecom_command_scm_resp));
if (ret) {
pr_err("scm call to delete key failed : %d\n", ret);
+ __qseecom_disable_clk(CLK_QSEE);
return ret;
}
@@ -2400,7 +2457,8 @@
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2409,6 +2467,7 @@
ret = -EINVAL;
break;
}
+ __qseecom_disable_clk(CLK_QSEE);
return ret;
}
@@ -2424,11 +2483,21 @@
pr_err("Error:: unsupported usage %d\n", usage);
return -EFAULT;
}
+
+ if (qseecom.qsee.instance == qseecom.ce_drv.instance)
+ __qseecom_enable_clk(CLK_QSEE);
+ else
+ __qseecom_enable_clk(CLK_CE_DRV);
+
memcpy(ireq.key_id, set_key_para->key_id, QSEECOM_KEY_ID_SIZE);
+ ireq.qsee_command_id = QSEOS_SET_KEY;
ireq.ce = set_key_para->ce_hw;
ireq.pipe = set_key_para->pipe;
ireq.flags = set_key_para->flags;
+ /* set PIPE_ENC */
+ ireq.pipe_type = QSEOS_PIPE_ENC;
+
if (set_key_para->set_clear_key_flag ==
QSEECOM_SET_CE_KEY_CMD)
memcpy((void *)ireq.hash, (void *)set_key_para->hash32,
@@ -2436,11 +2505,22 @@
else
memset((void *)ireq.hash, 0, QSEECOM_HASH_SIZE);
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_SET_KEY,
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_select_ireq),
&resp, sizeof(struct qseecom_command_scm_resp));
if (ret) {
- pr_err("scm call to set key failed : %d\n", ret);
+ pr_err("scm call to set QSEOS_PIPE_ENC key failed : %d\n", ret);
+ return ret;
+ }
+
+ /* set PIPE_ENC_XTS */
+ ireq.pipe_type = QSEOS_PIPE_ENC_XTS;
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
+ &ireq, sizeof(struct qseecom_key_select_ireq),
+ &resp, sizeof(struct qseecom_command_scm_resp));
+ if (ret) {
+ pr_err("scm call to set QSEOS_PIPE_ENC_XTS key failed : %d\n",
+ ret);
return ret;
}
@@ -2450,7 +2530,8 @@
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2459,6 +2540,11 @@
break;
}
+ if (qseecom.qsee.instance == qseecom.ce_drv.instance)
+ __qseecom_disable_clk(CLK_QSEE);
+ else
+ __qseecom_disable_clk(CLK_CE_DRV);
+
return ret;
}
@@ -2570,6 +2656,70 @@
return ret;
}
+static int qseecom_is_es_activated(void __user *argp)
+{
+ struct qseecom_is_es_activated_req req;
+ int ret;
+ int resp_buf;
+
+ if (qseecom.qsee_version < QSEE_VERSION_04) {
+ pr_err("invalid qsee version");
+ return -ENODEV;
+ }
+
+ if (argp == NULL) {
+ pr_err("arg is null");
+ return -EINVAL;
+ }
+
+ ret = scm_call(SCM_SVC_ES, SCM_IS_ACTIVATED_ID, NULL, 0,
+ (void *) &resp_buf, sizeof(resp_buf));
+ if (ret) {
+ pr_err("scm_call failed");
+ return ret;
+ }
+
+ req.is_activated = resp_buf;
+ ret = copy_to_user(argp, &req, sizeof(req));
+ if (ret) {
+ pr_err("copy_to_user failed");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int qseecom_save_partition_hash(void __user *argp)
+{
+ struct qseecom_save_partition_hash_req req;
+ int ret;
+
+ if (qseecom.qsee_version < QSEE_VERSION_04) {
+ pr_err("invalid qsee version ");
+ return -ENODEV;
+ }
+
+ if (argp == NULL) {
+ pr_err("arg is null");
+ return -EINVAL;
+ }
+
+ ret = copy_from_user(&req, argp, sizeof(req));
+ if (ret) {
+ pr_err("copy_from_user failed");
+ return ret;
+ }
+
+ ret = scm_call(SCM_SVC_ES, SCM_SAVE_PARTITION_HASH_ID,
+ (void *) &req, sizeof(req), NULL, 0);
+ if (ret) {
+ pr_err("scm_call failed");
+ return ret;
+ }
+
+ return 0;
+}
+
static long qseecom_ioctl(struct file *file, unsigned cmd,
unsigned long arg)
{
@@ -2760,6 +2910,11 @@
break;
}
case QSEECOM_IOCTL_CREATE_KEY_REQ: {
+ if (qseecom.qsee_version < QSEE_VERSION_05) {
+ pr_err("Create Key feature not supported in qsee version %u\n",
+ qseecom.qsee_version);
+ return -EINVAL;
+ }
data->released = true;
mutex_lock(&app_access_lock);
atomic_inc(&data->ioctl_count);
@@ -2772,6 +2927,11 @@
break;
}
case QSEECOM_IOCTL_WIPE_KEY_REQ: {
+ if (qseecom.qsee_version < QSEE_VERSION_05) {
+ pr_err("Wipe Key feature not supported in qsee version %u\n",
+ qseecom.qsee_version);
+ return -EINVAL;
+ }
data->released = true;
mutex_lock(&app_access_lock);
atomic_inc(&data->ioctl_count);
@@ -2782,6 +2942,24 @@
mutex_unlock(&app_access_lock);
break;
}
+ case QSEECOM_IOCTL_SAVE_PARTITION_HASH_REQ: {
+ data->released = true;
+ mutex_lock(&app_access_lock);
+ atomic_inc(&data->ioctl_count);
+ ret = qseecom_save_partition_hash(argp);
+ atomic_dec(&data->ioctl_count);
+ mutex_unlock(&app_access_lock);
+ break;
+ }
+ case QSEECOM_IOCTL_IS_ES_ACTIVATED_REQ: {
+ data->released = true;
+ mutex_lock(&app_access_lock);
+ atomic_inc(&data->ioctl_count);
+ ret = qseecom_is_es_activated(argp);
+ atomic_dec(&data->ioctl_count);
+ mutex_unlock(&app_access_lock);
+ break;
+ }
default:
return -EINVAL;
}
@@ -2877,18 +3055,43 @@
.release = qseecom_release
};
-static int __qseecom_init_clk(void)
+static int __qseecom_init_clk(enum qseecom_ce_hw_instance ce)
{
int rc = 0;
struct device *pdev;
struct qseecom_clk *qclk;
+ char *core_clk_src = NULL;
+ char *core_clk = NULL;
+ char *iface_clk = NULL;
+ char *bus_clk = NULL;
- qclk = &qseecom.qsee;
-
+ switch (ce) {
+ case CLK_QSEE: {
+ core_clk_src = "core_clk_src";
+ core_clk = "core_clk";
+ iface_clk = "iface_clk";
+ bus_clk = "bus_clk";
+ qclk = &qseecom.qsee;
+ qclk->instance = CLK_QSEE;
+ break;
+ };
+ case CLK_CE_DRV: {
+ core_clk_src = "ce_drv_core_clk_src";
+ core_clk = "ce_drv_core_clk";
+ iface_clk = "ce_drv_iface_clk";
+ bus_clk = "ce_drv_bus_clk";
+ qclk = &qseecom.ce_drv;
+ qclk->instance = CLK_CE_DRV;
+ break;
+ };
+ default:
+ pr_err("Invalid ce hw instance: %d!\n", ce);
+ return -EIO;
+ }
pdev = qseecom.pdev;
- /* Get CE3 src core clk. */
- qclk->ce_core_src_clk = clk_get(pdev, "core_clk_src");
+ /* Get CE3 src core clk. */
+ qclk->ce_core_src_clk = clk_get(pdev, core_clk_src);
if (!IS_ERR(qclk->ce_core_src_clk)) {
/* Set the core src clk @100Mhz */
rc = clk_set_rate(qclk->ce_core_src_clk, QSEE_CE_CLK_100MHZ);
@@ -2903,7 +3106,7 @@
}
/* Get CE core clk */
- qclk->ce_core_clk = clk_get(pdev, "core_clk");
+ qclk->ce_core_clk = clk_get(pdev, core_clk);
if (IS_ERR(qclk->ce_core_clk)) {
rc = PTR_ERR(qclk->ce_core_clk);
pr_err("Unable to get CE core clk\n");
@@ -2913,7 +3116,7 @@
}
/* Get CE Interface clk */
- qclk->ce_clk = clk_get(pdev, "iface_clk");
+ qclk->ce_clk = clk_get(pdev, iface_clk);
if (IS_ERR(qclk->ce_clk)) {
rc = PTR_ERR(qclk->ce_clk);
pr_err("Unable to get CE interface clk\n");
@@ -2924,7 +3127,7 @@
}
/* Get CE AXI clk */
- qclk->ce_bus_clk = clk_get(pdev, "bus_clk");
+ qclk->ce_bus_clk = clk_get(pdev, bus_clk);
if (IS_ERR(qclk->ce_bus_clk)) {
rc = PTR_ERR(qclk->ce_bus_clk);
pr_err("Unable to get CE BUS interface clk\n");
@@ -2937,11 +3140,14 @@
return rc;
}
-static void __qseecom_deinit_clk(void)
+static void __qseecom_deinit_clk(enum qseecom_ce_hw_instance ce)
{
struct qseecom_clk *qclk;
- qclk = &qseecom.qsee;
+ if (ce == CLK_QSEE)
+ qclk = &qseecom.qsee;
+ else
+ qclk = &qseecom.ce_drv;
if (qclk->ce_clk != NULL) {
clk_put(qclk->ce_clk);
@@ -2979,6 +3185,11 @@
qseecom.qsee.ce_core_src_clk = NULL;
qseecom.qsee.ce_bus_clk = NULL;
+ qseecom.ce_drv.ce_core_clk = NULL;
+ qseecom.ce_drv.ce_clk = NULL;
+ qseecom.ce_drv.ce_core_src_clk = NULL;
+ qseecom.ce_drv.ce_bus_clk = NULL;
+
rc = alloc_chrdev_region(&qseecom_device_no, 0, 1, QSEECOM_DEV);
if (rc < 0) {
pr_err("alloc_chrdev_region failed %d\n", rc);
@@ -3053,6 +3264,7 @@
/* register client for bus scaling */
if (pdev->dev.of_node) {
+
if (of_property_read_u32((&pdev->dev)->of_node,
"qcom,disk-encrypt-pipe-pair",
&qseecom.ce_info.disk_encrypt_pipe)) {
@@ -3089,10 +3301,29 @@
qseecom.ce_info.hlos_ce_hw_instance);
}
- ret = __qseecom_init_clk();
+ qseecom.qsee.instance = qseecom.ce_info.qsee_ce_hw_instance;
+ qseecom.ce_drv.instance = qseecom.ce_info.hlos_ce_hw_instance;
+
+ ret = __qseecom_init_clk(CLK_QSEE);
if (ret)
goto err;
+ if (qseecom.qsee.instance != qseecom.ce_drv.instance) {
+ ret = __qseecom_init_clk(CLK_CE_DRV);
+ if (ret) {
+ __qseecom_deinit_clk(CLK_QSEE);
+ goto err;
+ }
+ } else {
+ struct qseecom_clk *qclk;
+
+ qclk = &qseecom.qsee;
+ qseecom.ce_drv.ce_core_clk = qclk->ce_core_clk;
+ qseecom.ce_drv.ce_clk = qclk->ce_clk;
+ qseecom.ce_drv.ce_core_src_clk = qclk->ce_core_src_clk;
+ qseecom.ce_drv.ce_bus_clk = qclk->ce_bus_clk;
+ }
+
qseecom_platform_support = (struct msm_bus_scale_pdata *)
msm_bus_cl_get_pdata(pdev);
if (qseecom.qsee_version >= (QSEE_VERSION_02)) {
@@ -3203,9 +3434,11 @@
msm_bus_scale_client_update_request(qseecom.qsee_perf_client,
0);
/* register client for bus scaling */
- if (pdev->dev.of_node)
- __qseecom_deinit_clk();
-
+ if (pdev->dev.of_node) {
+ __qseecom_deinit_clk(CLK_QSEE);
+ if (qseecom.qsee.instance != qseecom.ce_drv.instance)
+ __qseecom_deinit_clk(CLK_CE_DRV);
+ }
return ret;
};
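
The qseecom change above turns the bare clock enable/disable helpers into reference-counted ones guarded by clk_access_lock, so the CE clocks are only touched on the first enable and the last disable even when the QSEE and CE_DRV instances share the same hardware. A minimal user-space sketch of that refcounting pattern, using a pthread mutex and illustrative names in place of the kernel clock API:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t clk_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned int clk_refcnt;

/* Illustrative stand-ins for clk_prepare_enable()/clk_disable_unprepare(). */
static void hw_clk_on(void)  { puts("hw clock on");  }
static void hw_clk_off(void) { puts("hw clock off"); }

static void clk_get_ref(void)
{
    pthread_mutex_lock(&clk_lock);
    if (clk_refcnt++ == 0)   /* only the first user powers the clock */
        hw_clk_on();
    pthread_mutex_unlock(&clk_lock);
}

static void clk_put_ref(void)
{
    pthread_mutex_lock(&clk_lock);
    if (clk_refcnt > 0 && --clk_refcnt == 0)   /* last user powers it down */
        hw_clk_off();
    pthread_mutex_unlock(&clk_lock);
}

int main(void)
{
    clk_get_ref();   /* hw clock on   */
    clk_get_ref();   /* refcount only */
    clk_put_ref();   /* refcount only */
    clk_put_ref();   /* hw clock off  */
    return 0;
}
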
diff --git a/drivers/misc/tspp.c b/drivers/misc/tspp.c
index dbb4f5e..73cae32 100644
--- a/drivers/misc/tspp.c
+++ b/drivers/misc/tspp.c
@@ -1571,6 +1571,7 @@
int id;
int table_idx;
u32 val;
+ unsigned long flags;
struct sps_connect *config;
struct tspp_device *pdev;
@@ -1591,6 +1592,15 @@
if (!channel->used)
return 0;
+ /*
+ * Need to protect access to used and waiting fields, as they are
+ * used by the tasklet which is invoked from interrupt context
+ */
+ spin_lock_irqsave(&pdev->spinlock, flags);
+ channel->used = 0;
+ channel->waiting = NULL;
+ spin_unlock_irqrestore(&pdev->spinlock, flags);
+
if (channel->expiration_period_ms)
del_timer(&channel->expiration_timer);
@@ -1644,9 +1654,7 @@
channel->buffer_count = 0;
channel->data = NULL;
channel->read = NULL;
- channel->waiting = NULL;
channel->locked = NULL;
- channel->used = 0;
if (tspp_channels_in_use(pdev) == 0) {
wake_unlock(&pdev->wake_lock);
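
The tspp fix moves the clearing of channel->used and channel->waiting under spin_lock_irqsave() because the tasklet reads those fields from softirq context. The compact kernel-module sketch below only illustrates that locking discipline; the module, the chan structure and its fields are made up for the example, and it assumes the classic three-argument DECLARE_TASKLET() of this kernel generation.

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>

/* Illustrative channel state shared between process context and a tasklet. */
static struct {
    int used;
    void *waiting;
} chan = { .used = 1 };

static DEFINE_SPINLOCK(chan_lock);

static void chan_tasklet_fn(unsigned long data)
{
    unsigned long flags;

    /* The tasklet reads the same fields, so it takes the same lock. */
    spin_lock_irqsave(&chan_lock, flags);
    if (chan.used && chan.waiting)
        pr_info("channel busy\n");
    spin_unlock_irqrestore(&chan_lock, flags);
}

static DECLARE_TASKLET(chan_tasklet, chan_tasklet_fn, 0);

static int __init demo_init(void)
{
    unsigned long flags;

    tasklet_schedule(&chan_tasklet);

    /* Process context tears the channel down under the same lock,
     * mirroring the tspp_close_channel() change above. */
    spin_lock_irqsave(&chan_lock, flags);
    chan.used = 0;
    chan.waiting = NULL;
    spin_unlock_irqrestore(&chan_lock, flags);
    return 0;
}

static void __exit demo_exit(void)
{
    tasklet_kill(&chan_tasklet);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
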
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 5b7f08f..21e65b9 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -67,12 +67,19 @@
(rq_data_dir(req) == WRITE))
#define PACKED_CMD_VER 0x01
#define PACKED_CMD_WR 0x02
+#define PACKED_TRIGGER_MAX_ELEMENTS 5000
#define MMC_BLK_UPDATE_STOP_REASON(stats, reason) \
do { \
if (stats->enabled) \
stats->pack_stop_reason[reason]++; \
} while (0)
+#define PCKD_TRGR_INIT_MEAN_POTEN 17
+#define PCKD_TRGR_POTEN_LOWER_BOUND 5
+#define PCKD_TRGR_URGENT_PENALTY 2
+#define PCKD_TRGR_LOWER_BOUND 5
+#define PCKD_TRGR_PRECISION_MULTIPLIER 100
+
static DEFINE_MUTEX(block_mutex);
/*
@@ -221,6 +228,7 @@
md = mmc_blk_get(dev_to_disk(dev));
card = md->queue.card;
+ mmc_rpm_hold(card->host, &card->dev);
mmc_claim_host(card->host);
ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP,
@@ -233,6 +241,7 @@
card->ext_csd.boot_ro_lock |= EXT_CSD_BOOT_WP_B_PWR_WP_EN;
mmc_release_host(card->host);
+ mmc_rpm_release(card->host, &card->dev);
if (!ret) {
pr_info("%s: Locking boot partition ro until next power on\n",
@@ -595,7 +604,7 @@
md = mmc_blk_get(bdev->bd_disk);
if (!md) {
err = -EINVAL;
- goto cmd_done;
+ goto blk_err;
}
card = md->queue.card;
@@ -700,6 +709,7 @@
cmd_done:
mmc_blk_put(md);
+blk_err:
kfree(idata->buf);
kfree(idata);
return err;
@@ -1862,6 +1872,80 @@
}
EXPORT_SYMBOL(mmc_blk_disable_wr_packing);
+static int get_packed_trigger(int potential, struct mmc_card *card,
+ struct request *req, int curr_trigger)
+{
+ static int num_mean_elements = 1;
+ static unsigned long mean_potential = PCKD_TRGR_INIT_MEAN_POTEN;
+ unsigned int trigger = curr_trigger;
+ unsigned int pckd_trgr_upper_bound = card->ext_csd.max_packed_writes;
+
+ /* scale down the upper bound to 75% */
+ pckd_trgr_upper_bound = (pckd_trgr_upper_bound * 3) / 4;
+
+ /*
+ * since the most common calls for this function are with small
+ * potential write values and since we don't want these calls to affect
+ * the packed trigger, set a lower bound and ignore calls with
+ * potential lower than that bound
+ */
+ if (potential <= PCKD_TRGR_POTEN_LOWER_BOUND)
+ return trigger;
+
+ /*
+ * this is to prevent integer overflow in the following calculation:
+ * once every PACKED_TRIGGER_MAX_ELEMENTS reset the algorithm
+ */
+ if (num_mean_elements > PACKED_TRIGGER_MAX_ELEMENTS) {
+ num_mean_elements = 1;
+ mean_potential = PCKD_TRGR_INIT_MEAN_POTEN;
+ }
+
+ /*
+ * get next mean value based on previous mean value and current
+ * potential packed writes. Calculation is as follows:
+ * mean_pot[i+1] =
+ * ((mean_pot[i] * num_mean_elem) + potential)/(num_mean_elem + 1)
+ */
+ mean_potential *= num_mean_elements;
+ /*
+ * add num_mean_elements so that the division of two integers doesn't
+ * lower mean_potential too much
+ */
+ if (potential > mean_potential)
+ mean_potential += num_mean_elements;
+ mean_potential += potential;
+ /* this is for gaining more precision when dividing two integers */
+ mean_potential *= PCKD_TRGR_PRECISION_MULTIPLIER;
+ /* this completes the mean calculation */
+ mean_potential /= ++num_mean_elements;
+ mean_potential /= PCKD_TRGR_PRECISION_MULTIPLIER;
+
+ /*
+ * if current potential packed writes is greater than the mean potential
+ * then the heuristic is that the following workload will contain many
+ * write requests, therefore we lower the packed trigger. In the
+ * opposite case we want to increase the trigger in order to get less
+ * packing events.
+ */
+ if (potential >= mean_potential)
+ trigger = (trigger <= PCKD_TRGR_LOWER_BOUND) ?
+ PCKD_TRGR_LOWER_BOUND : trigger - 1;
+ else
+ trigger = (trigger >= pckd_trgr_upper_bound) ?
+ pckd_trgr_upper_bound : trigger + 1;
+
+ /*
+ * an urgent read request indicates a packed list being interrupted
+ * by this read, therefore we aim for less packing, hence the trigger
+ * gets increased
+ */
+ if (req && (req->cmd_flags & REQ_URGENT) && (rq_data_dir(req) == READ))
+ trigger += PCKD_TRGR_URGENT_PENALTY;
+
+ return trigger;
+}
+
static void mmc_blk_write_packing_control(struct mmc_queue *mq,
struct request *req)
{
@@ -1889,6 +1973,10 @@
if (mq->num_of_potential_packed_wr_reqs >
mq->num_wr_reqs_to_start_packing)
mq->wr_packing_enabled = true;
+ mq->num_wr_reqs_to_start_packing =
+ get_packed_trigger(mq->num_of_potential_packed_wr_reqs,
+ mq->card, req,
+ mq->num_wr_reqs_to_start_packing);
mq->num_of_potential_packed_wr_reqs = 0;
return;
}
@@ -1897,6 +1985,12 @@
if (data_dir == READ) {
mmc_blk_disable_wr_packing(mq);
+ mq->num_wr_reqs_to_start_packing =
+ get_packed_trigger(mq->num_of_potential_packed_wr_reqs,
+ mq->card, req,
+ mq->num_wr_reqs_to_start_packing);
+ mq->num_of_potential_packed_wr_reqs = 0;
+ mq->wr_packing_enabled = false;
return;
} else if (data_dir == WRITE) {
mq->num_of_potential_packed_wr_reqs++;
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 73a1b41..9fc599b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -2864,6 +2864,40 @@
return ret;
}
+static int mmc_clk_update_freq(struct mmc_host *host,
+ unsigned long freq, enum mmc_load state)
+{
+ int err = 0;
+
+ if (host->ops->notify_load) {
+ err = host->ops->notify_load(host, state);
+ if (err)
+ goto out;
+ }
+
+ if (freq != host->clk_scaling.curr_freq) {
+ if (!mmc_is_vaild_state_for_clk_scaling(host)) {
+ err = -EAGAIN;
+ goto error;
+ }
+
+ err = host->bus_ops->change_bus_speed(host, &freq);
+ if (!err)
+ host->clk_scaling.curr_freq = freq;
+ else
+ pr_err("%s: %s: failed (%d) at freq=%lu\n",
+ mmc_hostname(host), __func__, err, freq);
+ }
+error:
+ if (err) {
+ /* restore previous state */
+ if (host->ops->notify_load)
+ host->ops->notify_load(host, host->clk_scaling.state);
+ }
+out:
+ return err;
+}
+
/**
* mmc_clk_scaling() - clock scaling decision algorithm
* @host: pointer to mmc host structure
@@ -2889,6 +2923,7 @@
unsigned int up_threshold = host->clk_scaling.up_threshold;
unsigned int down_threshold = host->clk_scaling.down_threshold;
bool queue_scale_down_work = false;
+ enum mmc_load state;
if (!card || !host->bus_ops || !host->bus_ops->change_bus_speed) {
pr_err("%s: %s: invalid entry\n", mmc_hostname(host), __func__);
@@ -2916,6 +2951,7 @@
busy_time_ms = host->clk_scaling.busy_time_us / USEC_PER_MSEC;
freq = host->clk_scaling.curr_freq;
+ state = host->clk_scaling.state;
/*
* Note that the max. and min. frequency should be based
@@ -2924,28 +2960,24 @@
*/
if ((busy_time_ms * 100 > total_time_ms * up_threshold)) {
freq = mmc_get_max_frequency(host);
+ state = MMC_LOAD_HIGH;
} else if ((busy_time_ms * 100 < total_time_ms * down_threshold)) {
if (!from_wq)
queue_scale_down_work = true;
freq = mmc_get_min_frequency(host);
+ state = MMC_LOAD_LOW;
}
- if (freq != host->clk_scaling.curr_freq) {
+ if (state != host->clk_scaling.state) {
if (!queue_scale_down_work) {
if (!from_wq)
cancel_delayed_work_sync(
&host->clk_scaling.work);
-
- if (!mmc_is_vaild_state_for_clk_scaling(host))
- goto bypass_scaling;
-
- err = host->bus_ops->change_bus_speed(host, &freq);
+ err = mmc_clk_update_freq(host, freq, state);
if (!err)
- host->clk_scaling.curr_freq = freq;
- else
- pr_err("%s: %s: failed (%d) at freq=%lu\n",
- mmc_hostname(host), __func__, err,
- freq);
+ host->clk_scaling.state = state;
+ else if (err == -EAGAIN)
+ goto no_reset_stats;
} else {
/*
* We hold claim host while queueing the scale down
@@ -2954,13 +2986,12 @@
*/
queue_delayed_work(system_nrt_wq,
&host->clk_scaling.work, 1);
- host->clk_scaling.in_progress = false;
- goto out;
+ goto no_reset_stats;
}
}
mmc_reset_clk_scale_stats(host);
-bypass_scaling:
+no_reset_stats:
host->clk_scaling.in_progress = false;
out:
return;
@@ -3007,6 +3038,9 @@
INIT_DELAYED_WORK(&host->clk_scaling.work, mmc_clk_scale_work);
host->clk_scaling.curr_freq = mmc_get_max_frequency(host);
+ if (host->ops->notify_load)
+ host->ops->notify_load(host, MMC_LOAD_HIGH);
+ host->clk_scaling.state = MMC_LOAD_HIGH;
mmc_reset_clk_scale_stats(host);
host->clk_scaling.enable = true;
host->clk_scaling.initialized = true;
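
The clock-scaling rework above derives a target load state from the ratio of busy time to total time in the sampling window: above up_threshold percent the host scales to its maximum frequency, below down_threshold percent it scales to the minimum, and in between nothing changes. A small stand-alone sketch of that decision, with illustrative names:

#include <stdio.h>

enum load { LOAD_LOW, LOAD_HIGH, LOAD_KEEP };

/* Threshold decision mirroring mmc_clk_scaling(): scale up when the bus was
 * busy more than up_threshold percent of the window, scale down when it was
 * busy less than down_threshold percent, otherwise keep the current state. */
static enum load pick_load(unsigned long busy_ms, unsigned long total_ms,
                           unsigned int up_threshold, unsigned int down_threshold)
{
    if (busy_ms * 100 > total_ms * up_threshold)
        return LOAD_HIGH;
    if (busy_ms * 100 < total_ms * down_threshold)
        return LOAD_LOW;
    return LOAD_KEEP;
}

int main(void)
{
    /* e.g. a 100 ms window with 75/25 thresholds */
    printf("%d\n", pick_load(80, 100, 75, 25));   /* LOAD_HIGH */
    printf("%d\n", pick_load(10, 100, 75, 25));   /* LOAD_LOW  */
    printf("%d\n", pick_load(50, 100, 75, 25));   /* LOAD_KEEP */
    return 0;
}
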
diff --git a/drivers/mmc/core/debugfs.c b/drivers/mmc/core/debugfs.c
index 8b7e0bd..903decf 100644
--- a/drivers/mmc/core/debugfs.c
+++ b/drivers/mmc/core/debugfs.c
@@ -176,9 +176,11 @@
if (val > host->f_max)
return -EINVAL;
+ mmc_rpm_hold(host, &host->class_dev);
mmc_claim_host(host);
mmc_set_clock(host, (unsigned int) val);
mmc_release_host(host);
+ mmc_rpm_release(host, &host->class_dev);
return 0;
}
@@ -208,6 +210,7 @@
if (!host || (val < host->f_min))
goto out;
+ mmc_rpm_hold(host, &host->class_dev);
mmc_claim_host(host);
if (host->bus_ops && host->bus_ops->change_bus_speed) {
old_freq = host->f_max;
@@ -219,6 +222,7 @@
host->f_max = old_freq;
}
mmc_release_host(host);
+ mmc_rpm_release(host, &host->class_dev);
out:
return err;
}
@@ -286,6 +290,7 @@
u32 status;
int ret;
+ mmc_rpm_hold(card->host, &card->dev);
mmc_claim_host(card->host);
ret = mmc_send_status(data, &status);
@@ -293,6 +298,7 @@
*val = status;
mmc_release_host(card->host);
+ mmc_rpm_release(card->host, &card->dev);
return ret;
}
@@ -319,9 +325,11 @@
goto out_free;
}
+ mmc_rpm_hold(card->host, &card->dev);
mmc_claim_host(card->host);
err = mmc_send_ext_csd(card, ext_csd);
mmc_release_host(card->host);
+ mmc_rpm_release(card->host, &card->dev);
if (err)
goto out_free;
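
Each debugfs handler above now brackets its mmc_claim_host()/mmc_release_host() pair with mmc_rpm_hold()/mmc_rpm_release(), so the controller is runtime-resumed before the host is claimed and the runtime-PM vote is dropped only after it is released. A trivial sketch of that nesting order, with stub functions standing in for the real helpers:

#include <stdio.h>

/* Illustrative stubs; only the nesting order matters here. */
static void rpm_hold(void)     { puts("rpm_hold");     }
static void claim_host(void)   { puts("claim_host");   }
static void release_host(void) { puts("release_host"); }
static void rpm_release(void)  { puts("rpm_release");  }

static void debugfs_op(void)
{
    rpm_hold();      /* make sure the controller is powered first */
    claim_host();
    puts("  ... issue command ...");
    release_host();
    rpm_release();   /* drop the runtime-PM vote last */
}

int main(void)
{
    debugfs_op();
    return 0;
}
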
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index c0a4cef..6c03bfc 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -466,12 +466,8 @@
goto err;
if (value && !mmc_can_scale_clk(host)) {
- if (mmc_card_ddr_mode(host->card) ||
- mmc_card_hs200(host->card) ||
- mmc_card_uhs(host->card)) {
- host->caps2 |= MMC_CAP2_CLK_SCALE;
- mmc_init_clk_scaling(host);
- }
+ host->caps2 |= MMC_CAP2_CLK_SCALE;
+ mmc_init_clk_scaling(host);
if (!mmc_can_scale_clk(host)) {
host->caps2 &= ~MMC_CAP2_CLK_SCALE;
@@ -487,6 +483,10 @@
if (host->bus_ops->change_bus_speed(host, &freq))
goto err;
}
+ if (host->ops->notify_load &&
+ host->ops->notify_load(host, MMC_LOAD_HIGH))
+ goto err;
+ host->clk_scaling.state = MMC_LOAD_HIGH;
host->clk_scaling.initialized = false;
}
retval = count;
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 8a866cf..89f8c91 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -1734,9 +1734,7 @@
if (err)
goto remove_card;
- /* Initialize clock scaling only for high frequency modes */
- if (mmc_card_hs200(host->card) || mmc_card_ddr_mode(host->card))
- mmc_init_clk_scaling(host);
+ mmc_init_clk_scaling(host);
return 0;
diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
index 60e0640..ddf9a87 100644
--- a/drivers/mmc/core/sd.c
+++ b/drivers/mmc/core/sd.c
@@ -1415,9 +1415,7 @@
if (err)
goto remove_card;
- /* Initialize clock scaling only for high frequency modes */
- if (mmc_card_uhs(host->card))
- mmc_init_clk_scaling(host);
+ mmc_init_clk_scaling(host);
return 0;
diff --git a/drivers/mmc/host/msm_sdcc.c b/drivers/mmc/host/msm_sdcc.c
index 9c30cd1..caf5fe4 100644
--- a/drivers/mmc/host/msm_sdcc.c
+++ b/drivers/mmc/host/msm_sdcc.c
@@ -958,7 +958,7 @@
struct msmsdcc_nc_dmadata *nc;
dmov_box *box;
uint32_t rows;
- unsigned int n;
+ int n;
int i, err = 0, box_cmd_cnt = 0;
struct scatterlist *sg = data->sg;
unsigned int len, offset;
@@ -4393,6 +4393,39 @@
return data_cnt;
}
+static int msmsdcc_notify_load(struct mmc_host *mmc, enum mmc_load state)
+{
+ int err = 0;
+ unsigned long rate;
+ struct msmsdcc_host *host = mmc_priv(mmc);
+
+ if (IS_ERR_OR_NULL(host->bus_clk))
+ goto out;
+
+ switch (state) {
+ case MMC_LOAD_HIGH:
+ rate = MSMSDCC_BUS_VOTE_MAX_RATE;
+ break;
+ case MMC_LOAD_LOW:
+ rate = MSMSDCC_BUS_VOTE_MIN_RATE;
+ break;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+
+ if (rate != host->bus_clk_rate) {
+ err = clk_set_rate(host->bus_clk, rate);
+ if (err)
+ pr_err("%s: %s: bus clk set rate %lu Hz err %d\n",
+ mmc_hostname(mmc), __func__, rate, err);
+ else
+ host->bus_clk_rate = rate;
+ }
+out:
+ return err;
+}
+
static const struct mmc_host_ops msmsdcc_ops = {
.enable = msmsdcc_enable,
.disable = msmsdcc_disable,
@@ -4407,6 +4440,7 @@
.hw_reset = msmsdcc_hw_reset,
.stop_request = msmsdcc_stop_request,
.get_xfer_remain = msmsdcc_get_xfer_remain,
+ .notify_load = msmsdcc_notify_load,
};
static void msmsdcc_enable_status_gpio(struct msmsdcc_host *host)
@@ -4683,10 +4717,10 @@
}
/* Now save the sps pipe handle */
ep->pipe_handle = sps_pipe_handle;
- pr_debug("%s: %s, success !!! %s: pipe_handle=0x%x,"
- " desc_fifo.phys_base=0x%x\n", mmc_hostname(host->mmc),
+ pr_debug("%s: %s, success !!! %s: pipe_handle=0x%x,"\
+ " desc_fifo.phys_base=%pa\n", mmc_hostname(host->mmc),
__func__, is_producer ? "READ" : "WRITE",
- (u32)sps_pipe_handle, sps_config->desc.phys_base);
+ (u32)sps_pipe_handle, &sps_config->desc.phys_base);
goto out;
reg_event_err:
@@ -4929,11 +4963,8 @@
host->bam_base = ioremap(host->bam_memres->start,
resource_size(host->bam_memres));
if (!host->bam_base) {
- pr_err("%s: BAM ioremap() failed!!! phys_addr=0x%x,"
- " size=0x%x", mmc_hostname(host->mmc),
- host->bam_memres->start,
- (host->bam_memres->end -
- host->bam_memres->start));
+ pr_err("%s: BAM ioremap() failed!!! resource: %pr\n",
+ mmc_hostname(host->mmc), host->bam_memres);
rc = -ENOMEM;
goto out;
}
@@ -4954,7 +4985,7 @@
*/
bam.summing_threshold = SPS_MIN_XFER_SIZE;
 /* SPS driver will handle the SDCC BAM IRQ */
- bam.irq = (u32)host->bam_irqres->start;
+ bam.irq = host->bam_irqres->start;
bam.manage = SPS_BAM_MGR_LOCAL;
bam.callback = msmsdcc_sps_bam_global_irq_cb;
bam.user = (void *)host;
@@ -4990,10 +5021,8 @@
if (rc)
goto cons_conn_err;
- pr_info("%s: Qualcomm MSM SDCC-BAM at 0x%016llx irq %d\n",
- mmc_hostname(host->mmc),
- (unsigned long long)host->bam_memres->start,
- (unsigned int)host->bam_irqres->start);
+ pr_info("%s: Qualcomm MSM SDCC-BAM at %pr %pr\n",
+ mmc_hostname(host->mmc), host->bam_memres, host->bam_irqres);
goto out;
cons_conn_err:
@@ -5180,15 +5209,16 @@
}
static void msmsdcc_print_regs(const char *name, void __iomem *base,
- u32 phys_base, unsigned int no_of_regs)
+ resource_size_t phys_base,
+ unsigned int no_of_regs)
{
unsigned int i;
if (!base)
return;
- pr_err("===== %s: Register Dumps @phys_base=0x%x, @virt_base=0x%x"
- " =====\n", name, phys_base, (u32)base);
+ pr_err("===== %s: Register Dumps @phys_base=%pa, @virt_base=0x%x"\
+ " =====\n", name, &phys_base, (u32)base);
for (i = 0; i < no_of_regs; i = i + 4) {
pr_err("Reg=0x%.2x: 0x%.8x, 0x%.8x, 0x%.8x, 0x%.8x\n", i*4,
(u32)readl_relaxed(base + i*4),
@@ -5979,12 +6009,13 @@
host->bus_clk = clk_get(&pdev->dev, "bus_clk");
if (!IS_ERR_OR_NULL(host->bus_clk)) {
/* Vote for max. clk rate for max. performance */
- ret = clk_set_rate(host->bus_clk, INT_MAX);
+ ret = clk_set_rate(host->bus_clk, MSMSDCC_BUS_VOTE_MAX_RATE);
if (ret)
goto bus_clk_put;
ret = clk_prepare_enable(host->bus_clk);
if (ret)
goto bus_clk_put;
+ host->bus_clk_rate = MSMSDCC_BUS_VOTE_MAX_RATE;
}
/*
@@ -6275,10 +6306,8 @@
mmc->clk_scaling.polling_delay_ms = 100;
mmc->caps2 |= MMC_CAP2_CLK_SCALE;
- pr_info("%s: Qualcomm MSM SDCC-core at 0x%016llx irq %d,%d dma %d"
- " dmacrcri %d\n", mmc_hostname(mmc),
- (unsigned long long)core_memres->start,
- (unsigned int) core_irqres->start,
+ pr_info("%s: Qualcomm MSM SDCC-core %pr %pr,%d dma %d dmacrcri %d\n",
+ mmc_hostname(mmc), core_memres, core_irqres,
(unsigned int) plat->status_irq, host->dma.channel,
host->dma.crci);
@@ -6300,11 +6329,11 @@
if (is_dma_mode(host) && host->dma.channel != -1
&& host->dma.crci != -1) {
- pr_info("%s: DM non-cached buffer at %p, dma_addr 0x%.8x\n",
- mmc_hostname(mmc), host->dma.nc, host->dma.nc_busaddr);
- pr_info("%s: DM cmd busaddr 0x%.8x, cmdptr busaddr 0x%.8x\n",
- mmc_hostname(mmc), host->dma.cmd_busaddr,
- host->dma.cmdptr_busaddr);
+ pr_info("%s: DM non-cached buffer at %p, dma_addr: %pa\n",
+ mmc_hostname(mmc), host->dma.nc, &host->dma.nc_busaddr);
+ pr_info("%s: DM cmd busaddr: %pa, cmdptr busaddr: %pa\n",
+ mmc_hostname(mmc), &host->dma.cmd_busaddr,
+ &host->dma.cmdptr_busaddr);
} else if (is_sps_mode(host)) {
pr_info("%s: SPS-BAM data transfer mode available\n",
mmc_hostname(mmc));
diff --git a/drivers/mmc/host/msm_sdcc.h b/drivers/mmc/host/msm_sdcc.h
index 4ed2d96..bcfde57 100644
--- a/drivers/mmc/host/msm_sdcc.h
+++ b/drivers/mmc/host/msm_sdcc.h
@@ -260,6 +260,12 @@
#define MMC_MAX_DMA_CMDS (MAX_NR_SG_DMA_PIO * (MMC_MAX_REQ_SIZE / \
MMC_MAX_DMA_BOX_LENGTH))
+/*
+ * Peripheral bus clock scaling vote rates
+ */
+#define MSMSDCC_BUS_VOTE_MAX_RATE 64000000 /* Hz */
+#define MSMSDCC_BUS_VOTE_MIN_RATE 32000000 /* Hz */
+
struct clk;
struct msmsdcc_nc_dmadata {
@@ -360,6 +366,7 @@
struct clk *clk; /* main MMC bus clock */
struct clk *pclk; /* SDCC peripheral bus clock */
struct clk *bus_clk; /* SDCC bus voter clock */
+ unsigned long bus_clk_rate; /* peripheral bus clk rate */
atomic_t clks_on; /* set if clocks are enabled */
unsigned int eject; /* eject state */
diff --git a/drivers/mmc/host/msm_sdcc_dml.c b/drivers/mmc/host/msm_sdcc_dml.c
index 91ab7e3..2562436 100644
--- a/drivers/mmc/host/msm_sdcc_dml.c
+++ b/drivers/mmc/host/msm_sdcc_dml.c
@@ -166,17 +166,13 @@
host->dml_base = ioremap(host->dml_memres->start,
resource_size(host->dml_memres));
if (!host->dml_base) {
- pr_err("%s: DML ioremap() failed!!! phys_addr=0x%x,"
- " size=0x%x", mmc_hostname(host->mmc),
- host->dml_memres->start,
- (host->dml_memres->end -
- host->dml_memres->start));
+ pr_err("%s: DML ioremap() failed!!! %pr\n",
+ mmc_hostname(host->mmc), host->dml_memres);
rc = -ENOMEM;
goto out;
}
- pr_info("%s: Qualcomm MSM SDCC-DML at 0x%016llx\n",
- mmc_hostname(host->mmc),
- (unsigned long long)host->dml_memres->start);
+ pr_info("%s: Qualcomm MSM SDCC-DML %pr\n",
+ mmc_hostname(host->mmc), host->dml_memres);
}
dml_base = host->dml_base;
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index 6b392b9..89730b0 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -38,6 +38,7 @@
#include <linux/dma-mapping.h>
#include <mach/gpio.h>
#include <mach/msm_bus.h>
+#include <linux/iopoll.h>
#include "sdhci-pltfm.h"
@@ -81,6 +82,31 @@
#define CORE_CLK_PWRSAVE (1 << 1)
#define CORE_IO_PAD_PWR_SWITCH (1 << 16)
+#define CORE_MCI_DATA_CTRL 0x2C
+#define CORE_MCI_DPSM_ENABLE (1 << 0)
+
+#define CORE_TESTBUS_CONFIG 0x0CC
+#define CORE_TESTBUS_ENA (1 << 3)
+#define CORE_TESTBUS_SEL2 (1 << 4)
+
+/*
+ * Waiting until end of potential AHB access for data:
+ * 16 AHB cycles (160ns for 100MHz and 320ns for 50MHz) +
+ * delay on AHB (2us) = maximum 2.32us
+ * Taking x10 times margin
+ */
+#define CORE_AHB_DATA_DELAY_US 23
+/* Waiting until end of potential AHB access for descriptor:
+ * Single (1 AHB cycle) + delay on AHB bus = max 2us
+ * INCR4 (4 AHB cycles) + delay on AHB bus = max 2us
+ * Single (1 AHB cycle) + delay on AHB bus = max 2us
+ * Total 8 us delay with margin
+ */
+#define CORE_AHB_DESC_DELAY_US 8
+
+#define CORE_SDCC_DEBUG_REG 0x124
+#define CORE_DEBUG_REG_AHB_HTRANS (3 << 12)
+
/* 8KB descriptors */
#define SDHCI_MSM_MAX_SEGMENTS (1 << 13)
#define SDHCI_MSM_MMC_CLK_GATE_DELAY 200 /* msecs */
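To make the derivation of the two delay constants above explicit (this only restates the numbers already given in the comments): one AHB cycle at 50 MHz is 20 ns, so 16 cycles take 320 ns; adding the 2 us worst-case AHB delay gives 2.32 us, and the x10 safety margin gives 23.2 us, hence CORE_AHB_DATA_DELAY_US = 23. For the descriptor fetch, the three bursts (Single + INCR4 + Single) are each bounded by about 2 us including AHB delay, roughly 6 us in total, padded with margin to CORE_AHB_DESC_DELAY_US = 8.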
@@ -2044,6 +2070,51 @@
return 0;
}
+/*
+ * sdhci_msm_disable_data_xfer - disable an ongoing AHB bus data transfer
+ *
+ * Write 0 to bit 0 in MCI_DATA_CTL (offset 0x2C) - clearing the TxActive bit
+ * via the legacy registers. This stops the current burst and prevents the
+ * next one from starting.
+ *
+ * Then poll bits 13:12 of CORE_SDCC_DEBUG_REG (offset 0x124) until they are
+ * 0, within a CORE_AHB_DATA_DELAY_US timeout, to validate that the AHB burst
+ * completed and a new one did not start.
+ *
+ * Finally, wait CORE_AHB_DESC_DELAY_US while the AHB finishes the descriptor fetch.
+ */
+static void sdhci_msm_disable_data_xfer(struct sdhci_host *host)
+{
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_msm_host *msm_host = pltfm_host->priv;
+ u32 value;
+ int ret;
+
+ value = readl_relaxed(msm_host->core_mem + CORE_MCI_DATA_CTRL);
+ value &= ~(u32)CORE_MCI_DPSM_ENABLE;
+ writel_relaxed(value, msm_host->core_mem + CORE_MCI_DATA_CTRL);
+
+ /* Enable the test bus for device slot */
+ writel_relaxed(CORE_TESTBUS_ENA | CORE_TESTBUS_SEL2,
+ msm_host->core_mem + CORE_TESTBUS_CONFIG);
+
+ ret = readl_poll_timeout_noirq(msm_host->core_mem
+ + CORE_SDCC_DEBUG_REG, value,
+ !(value & CORE_DEBUG_REG_AHB_HTRANS),
+ CORE_AHB_DATA_DELAY_US, 1);
+ if (ret) {
+ pr_err("%s: %s: can't stop ongoing AHB bus access by ADMA\n",
+ mmc_hostname(host->mmc), __func__);
+ BUG();
+ }
+ /* Disable the test bus for device slot */
+ value = readl_relaxed(msm_host->core_mem + CORE_TESTBUS_CONFIG);
+ value &= ~CORE_TESTBUS_ENA;
+ writel_relaxed(value, msm_host->core_mem + CORE_TESTBUS_CONFIG);
+
+ udelay(CORE_AHB_DESC_DELAY_US);
+}
+
static struct sdhci_ops sdhci_msm_ops = {
.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
.check_power_status = sdhci_msm_check_power_status,
@@ -2054,6 +2125,7 @@
.platform_bus_voting = sdhci_msm_bus_voting,
.get_min_clock = sdhci_msm_get_min_clock,
.get_max_clock = sdhci_msm_get_max_clock,
+ .disable_data_xfer = sdhci_msm_disable_data_xfer,
};
static int __devinit sdhci_msm_probe(struct platform_device *pdev)
@@ -2064,7 +2136,8 @@
struct resource *core_memres = NULL;
int ret = 0, dead = 0;
u32 vdd_max_current;
- u32 host_version;
+ u16 host_version;
+ u32 pwr;
pr_debug("%s: Enter %s\n", dev_name(&pdev->dev), __func__);
msm_host = devm_kzalloc(&pdev->dev, sizeof(struct sdhci_msm_host),
@@ -2173,7 +2246,21 @@
}
/* Set SW_RST bit in POWER register (Offset 0x0) */
- writel_relaxed(CORE_SW_RST, msm_host->core_mem + CORE_POWER);
+ writel_relaxed(readl_relaxed(msm_host->core_mem + CORE_POWER) |
+ CORE_SW_RST, msm_host->core_mem + CORE_POWER);
+ /*
+ * SW reset can take up to 10 HCLK + 15 MCLK cycles.
+ * Calculated at the minimum clock rates (hclk = 27 MHz,
+ * mclk = 400 KHz) this comes to ~40us. Poll for at
+ * most 1ms for reset completion.
+ */
+ ret = readl_poll_timeout(msm_host->core_mem + CORE_POWER,
+ pwr, !(pwr & CORE_SW_RST), 100, 10);
+
+ if (ret) {
+ dev_err(&pdev->dev, "reset failed (%d)\n", ret);
+ goto vreg_deinit;
+ }
/* Set HC_MODE_EN bit in HC_MODE register */
writel_relaxed(HC_MODE_EN, (msm_host->core_mem + CORE_HC_MODE));
@@ -2188,8 +2275,11 @@
host->quirks2 |= SDHCI_QUIRK2_ALWAYS_USE_BASE_CLOCK;
host->quirks2 |= SDHCI_QUIRK2_IGNORE_CMDCRC_FOR_TUNING;
host->quirks2 |= SDHCI_QUIRK2_USE_MAX_DISCARD_SIZE;
+ host->quirks2 |= SDHCI_QUIRK2_IGNORE_DATATOUT_FOR_R1BCMD;
+ host->quirks2 |= SDHCI_QUIRK2_BROKEN_PRESET_VALUE;
+ host->quirks2 |= SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT;
- host_version = readl_relaxed((host->ioaddr + SDHCI_HOST_VERSION));
+ host_version = readw_relaxed((host->ioaddr + SDHCI_HOST_VERSION));
dev_dbg(&pdev->dev, "Host Version: 0x%x Vendor Version 0x%x\n",
host_version, ((host_version & SDHCI_VENDOR_VER_MASK) >>
SDHCI_VENDOR_VER_SHIFT));
@@ -2259,6 +2349,7 @@
msm_host->mmc->caps2 |= MMC_CAP2_CACHE_CTRL;
msm_host->mmc->caps2 |= MMC_CAP2_POWEROFF_NOTIFY;
msm_host->mmc->caps2 |= MMC_CAP2_CLK_SCALE;
+ msm_host->mmc->caps2 |= MMC_CAP2_STOP_REQUEST;
if (msm_host->pdata->nonremovable)
msm_host->mmc->caps |= MMC_CAP_NONREMOVABLE;
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 904c255..3efea77 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -741,10 +741,12 @@
break;
}
- if (count >= 0xF) {
- DBG("%s: Too large timeout 0x%x requested for CMD%d!\n",
- mmc_hostname(host->mmc), count, cmd->opcode);
- count = 0xE;
+ if (!(host->quirks2 & SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT)) {
+ if (count >= 0xF) {
+ DBG("%s: Too large timeout 0x%x requested for CMD%d!\n",
+ mmc_hostname(host->mmc), count, cmd->opcode);
+ count = 0xE;
+ }
}
return count;
@@ -2089,6 +2091,9 @@
if (host->version < SDHCI_SPEC_300)
return;
+ if (host->quirks2 & SDHCI_QUIRK2_BROKEN_PRESET_VALUE)
+ return;
+
spin_lock_irqsave(&host->lock, flags);
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
@@ -2119,6 +2124,53 @@
sdhci_runtime_pm_put(host);
}
+static int sdhci_stop_request(struct mmc_host *mmc)
+{
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+ struct mmc_data *data;
+
+ spin_lock_irqsave(&host->lock, flags);
+ if (!host->mrq || !host->data)
+ goto out;
+
+ data = host->data;
+
+ if (host->ops->disable_data_xfer)
+ host->ops->disable_data_xfer(host);
+
+ sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
+
+ if (host->flags & SDHCI_REQ_USE_DMA) {
+ if (host->flags & SDHCI_USE_ADMA) {
+ sdhci_adma_table_post(host, data);
+ } else {
+ if (!data->host_cookie)
+ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
+ data->sg_len,
+ (data->flags & MMC_DATA_READ) ?
+ DMA_FROM_DEVICE : DMA_TO_DEVICE);
+ }
+ }
+ del_timer(&host->timer);
+ host->mrq = NULL;
+ host->cmd = NULL;
+ host->data = NULL;
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+ return 0;
+}
+
+static unsigned int sdhci_get_xfer_remain(struct mmc_host *mmc)
+{
+ struct sdhci_host *host = mmc_priv(mmc);
+ u32 present_state = 0;
+
+ present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
+
+ return present_state & SDHCI_DOING_WRITE;
+}
+
static const struct mmc_host_ops sdhci_ops = {
.pre_req = sdhci_pre_req,
.post_req = sdhci_post_req,
@@ -2132,6 +2184,8 @@
.enable_preset_value = sdhci_enable_preset_value,
.enable = sdhci_enable,
.disable = sdhci_disable,
+ .stop_request = sdhci_stop_request,
+ .get_xfer_remain = sdhci_get_xfer_remain,
};
/*****************************************************************************\
@@ -2416,6 +2470,9 @@
sdhci_finish_command(host);
return;
}
+ if (host->quirks2 &
+ SDHCI_QUIRK2_IGNORE_DATATOUT_FOR_R1BCMD)
+ return;
}
pr_err("%s: Got data interrupt 0x%08x even "
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index c6bef8a..a3d8442 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -286,6 +286,7 @@
void (*toggle_cdr)(struct sdhci_host *host, bool enable);
unsigned int (*get_max_segments)(void);
void (*platform_bus_voting)(struct sdhci_host *host, u32 enable);
+ void (*disable_data_xfer)(struct sdhci_host *host);
};
#ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
diff --git a/drivers/mtd/devices/msm_qpic_nand.c b/drivers/mtd/devices/msm_qpic_nand.c
index 570c257..0b04bbf 100644
--- a/drivers/mtd/devices/msm_qpic_nand.c
+++ b/drivers/mtd/devices/msm_qpic_nand.c
@@ -691,9 +691,9 @@
/* Lookup the 'APPS' partition's first page address */
for (i = 0; i < FLASH_PTABLE_MAX_PARTS_V4; i++) {
- if (!strncmp("apps", ptable.part_entry[i].name,
- strlen(ptable.part_entry[i].name))) {
- page_address = ptable.part_entry[i].offset << 6;
+ if (!strncmp("apps", mtd_part[i].name,
+ strlen(mtd_part[i].name))) {
+ page_address = mtd_part[i].offset << 6;
break;
}
}
@@ -1458,7 +1458,7 @@
int ret;
struct mtd_oob_ops ops;
- ops.mode = MTD_OPS_PLACE_OOB;
+ ops.mode = MTD_OPS_AUTO_OOB;
ops.len = len;
ops.retlen = 0;
ops.ooblen = 0;
@@ -1643,7 +1643,7 @@
int ret;
struct mtd_oob_ops ops;
- ops.mode = MTD_OPS_PLACE_OOB;
+ ops.mode = MTD_OPS_AUTO_OOB;
ops.len = len;
ops.retlen = 0;
ops.ooblen = 0;
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 4e12bb7..147e378 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -1127,6 +1127,33 @@
}
#endif
+static inline unsigned long get_vm_size(struct vm_area_struct *vma)
+{
+ return vma->vm_end - vma->vm_start;
+}
+
+static inline resource_size_t get_vm_offset(struct vm_area_struct *vma)
+{
+ return (resource_size_t) vma->vm_pgoff << PAGE_SHIFT;
+}
+
+/*
+ * Set a new vm offset.
+ *
+ * Verify that the incoming offset really works as a page offset,
+ * and that the offset and size fit in a resource_size_t.
+ */
+static inline int set_vm_offset(struct vm_area_struct *vma, resource_size_t off)
+{
+ pgoff_t pgoff = off >> PAGE_SHIFT;
+ if (off != (resource_size_t) pgoff << PAGE_SHIFT)
+ return -EINVAL;
+ if (off + get_vm_size(vma) - 1 < off)
+ return -EINVAL;
+ vma->vm_pgoff = pgoff;
+ return 0;
+}
+
/*
* set up a mapping for shared memory segments
*/
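A concrete (hypothetical) example of what the new checks above catch: if resource_size_t is 32 bits wide, an offset of 0xFFFFF000 combined with a VMA of 0x2000 bytes makes off + get_vm_size(vma) - 1 wrap around below off, so set_vm_offset() returns -EINVAL rather than letting mtdchar_mmap() map the wrong physical range; the same reasoning applies to the vma_len + off overflow test in the mmap path below, and an offset whose pgoff does not round-trip through PAGE_SHIFT is likewise refused.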
@@ -1136,20 +1163,29 @@
struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd;
struct map_info *map = mtd->priv;
- unsigned long start;
- unsigned long off;
- u32 len;
+ resource_size_t start, off;
+ unsigned long len, vma_len;
if (mtd->type == MTD_RAM || mtd->type == MTD_ROM) {
- off = vma->vm_pgoff << PAGE_SHIFT;
+ off = get_vm_offset(vma);
start = map->phys;
len = PAGE_ALIGN((start & ~PAGE_MASK) + map->size);
start &= PAGE_MASK;
- if ((vma->vm_end - vma->vm_start + off) > len)
+ vma_len = get_vm_size(vma);
+
+ /* Overflow in off+len? */
+ if (vma_len + off < off)
+ return -EINVAL;
+ /* Does it fit in the mapping? */
+ if (vma_len + off > len)
return -EINVAL;
off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
+ /* Did that overflow? */
+ if (off < start)
+ return -EINVAL;
+ if (set_vm_offset(vma, off) < 0)
+ return -EINVAL;
vma->vm_flags |= VM_IO | VM_RESERVED;
#ifdef pgprot_noncached
diff --git a/drivers/net/ethernet/msm/ecm_ipa.c b/drivers/net/ethernet/msm/ecm_ipa.c
index 114b23d..3ba3f98 100644
--- a/drivers/net/ethernet/msm/ecm_ipa.c
+++ b/drivers/net/ethernet/msm/ecm_ipa.c
@@ -22,13 +22,13 @@
#include <mach/ecm_ipa.h>
#define DRIVER_NAME "ecm_ipa"
-#define DRIVER_VERSION "20-Mar-2013"
#define ECM_IPA_IPV4_HDR_NAME "ecm_eth_ipv4"
#define ECM_IPA_IPV6_HDR_NAME "ecm_eth_ipv6"
#define IPA_TO_USB_CLIENT IPA_CLIENT_USB_CONS
#define INACTIVITY_MSEC_DELAY 100
#define DEFAULT_OUTSTANDING_HIGH 64
#define DEFAULT_OUTSTANDING_LOW 32
+#define DEBUGFS_TEMP_BUF_SIZE 4
#define ECM_IPA_ERROR(fmt, args...) \
pr_err(DRIVER_NAME "@%s@%d@ctx:%s: "\
@@ -49,17 +49,11 @@
/**
* struct ecm_ipa_dev - main driver context parameters
* @net: network interface struct implemented by this driver
- * @folder: debugfs folder for various debuging switches
+ * @directory: debugfs directory for various debugging switches
* @tx_enable: flag that enable/disable Tx path to continue to IPA
* @rx_enable: flag that enable/disable Rx path to continue to IPA
* @rm_enable: flag that enable/disable Resource manager request prior to Tx
* @dma_enable: flag that allow on-the-fly DMA mode for IPA
- * @tx_file: saved debugfs entry to allow cleanup
- * @rx_file: saved debugfs entry to allow cleanup
- * @rm_file: saved debugfs entry to allow cleanup
- * @outstanding_high_file saved debugfs entry to allow cleanup
- * @outstanding_low_file saved debugfs entry to allow cleanup
- * @dma_file: saved debugfs entry to allow cleanup
* @eth_ipv4_hdr_hdl: saved handle for ipv4 header-insertion table
* @eth_ipv6_hdr_hdl: saved handle for ipv6 header-insertion table
* @usb_to_ipa_hdl: save handle for IPA pipe operations
@@ -71,17 +65,11 @@
*/
struct ecm_ipa_dev {
struct net_device *net;
- bool tx_enable;
- bool rx_enable;
- bool rm_enable;
+ u32 tx_enable;
+ u32 rx_enable;
+ u32 rm_enable;
bool dma_enable;
- struct dentry *folder;
- struct dentry *tx_file;
- struct dentry *rx_file;
- struct dentry *rm_file;
- struct dentry *outstanding_high_file;
- struct dentry *outstanding_low_file;
- struct dentry *dma_file;
+ struct dentry *directory;
uint32_t eth_ipv4_hdr_hdl;
uint32_t eth_ipv6_hdr_hdl;
u32 usb_to_ipa_hdl;
@@ -92,25 +80,22 @@
};
/**
- * struct ecm_ipa_ctx - saved pointer for the std ecm network device
+ * struct ecm_ipa_ctx - saved pointer for the ecm_ipa network device
 * which allows ecm_ipa to be a singleton
*/
static struct ecm_ipa_dev *ecm_ipa_ctx;
-static int ecm_ipa_ep_registers_cfg(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl);
-static int ecm_ipa_set_device_ethernet_addr(
- u8 *dev_ethaddr, u8 device_ethaddr[]);
-static void ecm_ipa_packet_receive_notify(void *priv,
- enum ipa_dp_evt_type evt,
- unsigned long data);
-static void ecm_ipa_tx_complete_notify(void *priv,
- enum ipa_dp_evt_type evt,
- unsigned long data);
-static int ecm_ipa_ep_registers_dma_cfg(u32 usb_to_ipa_hdl);
static int ecm_ipa_open(struct net_device *net);
+static void ecm_ipa_packet_receive_notify(void *priv,
+ enum ipa_dp_evt_type evt, unsigned long data);
+static void ecm_ipa_tx_complete_notify(void *priv,
+ enum ipa_dp_evt_type evt, unsigned long data);
static int ecm_ipa_stop(struct net_device *net);
-static netdev_tx_t ecm_ipa_start_xmit(struct sk_buff *skb,
- struct net_device *net);
+static int ecm_ipa_rules_cfg(struct ecm_ipa_dev *dev,
+ const void *dst_mac, const void *src_mac);
+static void ecm_ipa_rules_destroy(struct ecm_ipa_dev *dev);
+static int ecm_ipa_register_properties(void);
+static void ecm_ipa_deregister_properties(void);
static void ecm_ipa_rm_notify(void *user_data, enum ipa_rm_event event,
unsigned long data);
static int ecm_ipa_create_rm_resource(struct ecm_ipa_dev *dev);
@@ -118,25 +103,28 @@
static bool rx_filter(struct sk_buff *skb);
static bool tx_filter(struct sk_buff *skb);
static bool rm_enabled(struct ecm_ipa_dev *dev);
-
-static int ecm_ipa_rules_cfg(struct ecm_ipa_dev *dev,
- const void *dst_mac, const void *src_mac);
-static int ecm_ipa_register_properties(void);
-static void ecm_ipa_deregister_properties(void);
-static int ecm_ipa_debugfs_init(struct ecm_ipa_dev *dev);
-static void ecm_ipa_debugfs_destroy(struct ecm_ipa_dev *dev);
-static int ecm_ipa_debugfs_tx_open(struct inode *inode, struct file *file);
-static int ecm_ipa_debugfs_rx_open(struct inode *inode, struct file *file);
-static int ecm_ipa_debugfs_rm_open(struct inode *inode, struct file *file);
-static int ecm_ipa_debugfs_dma_open(struct inode *inode, struct file *file);
-static ssize_t ecm_ipa_debugfs_enable_read(struct file *file,
- char __user *ubuf, size_t count, loff_t *ppos);
-static ssize_t ecm_ipa_debugfs_enable_write(struct file *file,
- const char __user *buf, size_t count, loff_t *ppos);
+static int resource_request(struct ecm_ipa_dev *dev);
+static void resource_release(struct ecm_ipa_dev *dev);
+static netdev_tx_t ecm_ipa_start_xmit(struct sk_buff *skb,
+ struct net_device *net);
+static int ecm_ipa_debugfs_atomic_open(struct inode *inode, struct file *file);
static ssize_t ecm_ipa_debugfs_enable_write_dma(struct file *file,
const char __user *buf, size_t count, loff_t *ppos);
-static void eth_get_drvinfo(struct net_device *net,
- struct ethtool_drvinfo *drv_info);
+static int ecm_ipa_debugfs_dma_open(struct inode *inode, struct file *file);
+static ssize_t ecm_ipa_debugfs_enable_write(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos);
+static ssize_t ecm_ipa_debugfs_enable_read(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos);
+static ssize_t ecm_ipa_debugfs_atomic_read(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos);
+static int ecm_ipa_debugfs_init(struct ecm_ipa_dev *dev);
+static void ecm_ipa_debugfs_destroy(struct ecm_ipa_dev *dev);
+static int ecm_ipa_ep_registers_cfg(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl);
+static int ecm_ipa_ep_registers_dma_cfg(u32 usb_to_ipa_hdl);
+static int ecm_ipa_set_device_ethernet_addr(u8 *dev_ethaddr,
+ u8 device_ethaddr[]);
+static int ecm_ipa_init_module(void);
+static void ecm_ipa_cleanup_module(void);
static const struct net_device_ops ecm_ipa_netdev_ops = {
.ndo_open = ecm_ipa_open,
@@ -144,33 +132,352 @@
.ndo_start_xmit = ecm_ipa_start_xmit,
.ndo_set_mac_address = eth_mac_addr,
};
-static const struct ethtool_ops ops = {
- .get_drvinfo = eth_get_drvinfo,
- .get_link = ethtool_op_get_link,
-};
-const struct file_operations ecm_ipa_debugfs_tx_ops = {
- .open = ecm_ipa_debugfs_tx_open,
- .read = ecm_ipa_debugfs_enable_read,
- .write = ecm_ipa_debugfs_enable_write,
-};
-const struct file_operations ecm_ipa_debugfs_rx_ops = {
- .open = ecm_ipa_debugfs_rx_open,
- .read = ecm_ipa_debugfs_enable_read,
- .write = ecm_ipa_debugfs_enable_write,
-};
-const struct file_operations ecm_ipa_debugfs_rm_ops = {
- .open = ecm_ipa_debugfs_rm_open,
- .read = ecm_ipa_debugfs_enable_read,
- .write = ecm_ipa_debugfs_enable_write,
-};
+
const struct file_operations ecm_ipa_debugfs_dma_ops = {
- .open = ecm_ipa_debugfs_dma_open,
+ .open = ecm_ipa_debugfs_dma_open,
.read = ecm_ipa_debugfs_enable_read,
.write = ecm_ipa_debugfs_enable_write_dma,
};
+const struct file_operations ecm_ipa_debugfs_atomic_ops = {
+ .open = ecm_ipa_debugfs_atomic_open,
+ .read = ecm_ipa_debugfs_atomic_read,
+};
+
/**
- * ecm_ipa_init() - initializes internal data structures
+ * ecm_ipa_init() - create the network device and initialize internal
+ * data structures
+ * @params: in/out parameters required for ecm_ipa initialization
+ *
+ * Shall be called prior to pipe connection.
+ * The out parameters (the callbacks) shall be supplied to ipa_connect.
+ * Detailed description:
+ * - allocate the network device
+ * - set default values for driver internals
+ * - create debugfs folder and files
+ * - create IPA resource manager client
+ * - add header insertion rules for IPA driver (based on host/device
+ * Ethernet addresses given in input params)
+ * - register tx/rx properties with the IPA driver (will later be used
+ *   by the IPA configuration manager to configure the rest of the IPA rules)
+ * - set the carrier state to "off" (until ecm_ipa_connect is called)
+ * - register the network device
+ * - set the out parameters
+ *
+ * Returns negative errno, or zero on success
+ */
+int ecm_ipa_init(struct ecm_ipa_params *params)
+{
+ int result = 0;
+ struct net_device *net;
+ struct ecm_ipa_dev *dev;
+
+ ECM_IPA_LOG_ENTRY();
+ pr_debug("%s initializing\n", DRIVER_NAME);
+ NULL_CHECK(params);
+
+ pr_debug("host_ethaddr=%pM, device_ethaddr=%pM\n",
+ params->host_ethaddr,
+ params->device_ethaddr);
+
+ net = alloc_etherdev(sizeof(struct ecm_ipa_dev));
+ if (!net) {
+ result = -ENOMEM;
+ ECM_IPA_ERROR("fail to allocate etherdev\n");
+ goto fail_alloc_etherdev;
+ }
+ pr_debug("network device was successfully allocated\n");
+
+ dev = netdev_priv(net);
+ memset(dev, 0, sizeof(*dev));
+ dev->net = net;
+ ecm_ipa_ctx = dev;
+ dev->tx_enable = true;
+ dev->rx_enable = true;
+ dev->outstanding_high = DEFAULT_OUTSTANDING_HIGH;
+ dev->outstanding_low = DEFAULT_OUTSTANDING_LOW;
+ atomic_set(&dev->outstanding_pkts, 0);
+ snprintf(net->name, sizeof(net->name), "%s%%d", "ecm");
+ net->netdev_ops = &ecm_ipa_netdev_ops;
+ pr_debug("internal data structures were intialized and defaults set\n");
+
+ result = ecm_ipa_debugfs_init(dev);
+ if (result)
+ goto fail_debugfs;
+ pr_debug("debugfs entries were created\n");
+
+ result = ecm_ipa_create_rm_resource(dev);
+ if (result) {
+ ECM_IPA_ERROR("fail on RM create\n");
+ goto fail_create_rm;
+ }
+ pr_debug("RM resource was created\n");
+
+ result = ecm_ipa_set_device_ethernet_addr(net->dev_addr,
+ params->device_ethaddr);
+ if (result) {
+ ECM_IPA_ERROR("set device MAC failed\n");
+ goto fail_set_device_ethernet;
+ }
+ pr_debug("Device Ethernet address set %pM\n", net->dev_addr);
+
+ result = ecm_ipa_rules_cfg(dev, params->host_ethaddr,
+ params->device_ethaddr);
+ if (result) {
+ ECM_IPA_ERROR("fail on ipa rules set\n");
+ goto fail_rules_cfg;
+ }
+ pr_debug("Ethernet header insertion set\n");
+
+ result = ecm_ipa_register_properties();
+ if (result) {
+ ECM_IPA_ERROR("fail on properties set\n");
+ goto fail_register_tx;
+ }
+ pr_debug("ecm_ipa 2 Tx and 2 Rx properties were registered\n");
+
+ netif_carrier_off(net);
+ pr_debug("set carrier off\n");
+
+ result = register_netdev(net);
+ if (result) {
+ ECM_IPA_ERROR("register_netdev failed: %d\n", result);
+ goto fail_register_netdev;
+ }
+ pr_debug("register_netdev succeeded\n");
+
+ params->ecm_ipa_rx_dp_notify = ecm_ipa_packet_receive_notify;
+ params->ecm_ipa_tx_dp_notify = ecm_ipa_tx_complete_notify;
+ params->private = (void *)dev;
+ ECM_IPA_LOG_EXIT();
+
+ return 0;
+
+fail_register_netdev:
+ ecm_ipa_deregister_properties();
+fail_register_tx:
+ ecm_ipa_rules_destroy(dev);
+fail_set_device_ethernet:
+fail_rules_cfg:
+ ecm_ipa_destory_rm_resource(dev);
+fail_create_rm:
+ ecm_ipa_debugfs_destroy(dev);
+fail_debugfs:
+ free_netdev(net);
+fail_alloc_etherdev:
+ return result;
+}
+EXPORT_SYMBOL(ecm_ipa_init);
+
+
+int ecm_ipa_connect(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl,
+ void *priv)
+{
+ struct ecm_ipa_dev *dev = priv;
+
+ ECM_IPA_LOG_ENTRY();
+ NULL_CHECK(priv);
+ pr_debug("usb_to_ipa_hdl = %d, ipa_to_usb_hdl = %d, priv=0x%p\n",
+ usb_to_ipa_hdl, ipa_to_usb_hdl, priv);
+
+ if (!usb_to_ipa_hdl || usb_to_ipa_hdl >= IPA_CLIENT_MAX) {
+ ECM_IPA_ERROR("usb_to_ipa_hdl(%d) is not a valid ipa handle\n",
+ usb_to_ipa_hdl);
+ return -EINVAL;
+ }
+ if (!ipa_to_usb_hdl || ipa_to_usb_hdl >= IPA_CLIENT_MAX) {
+ ECM_IPA_ERROR("ipa_to_usb_hdl(%d) is not a valid ipa handle\n",
+ ipa_to_usb_hdl);
+ return -EINVAL;
+ }
+ dev->ipa_to_usb_hdl = ipa_to_usb_hdl;
+ dev->usb_to_ipa_hdl = usb_to_ipa_hdl;
+ ecm_ipa_ep_registers_cfg(usb_to_ipa_hdl, ipa_to_usb_hdl);
+ pr_debug("end-point configured\n");
+
+ netif_carrier_on(dev->net);
+ if (!netif_carrier_ok(dev->net)) {
+ ECM_IPA_ERROR("netif_carrier_ok error\n");
+ return -EBUSY;
+ }
+ pr_debug("carrier_on notified, ecm_ipa is operational\n");
+
+ ECM_IPA_LOG_EXIT();
+ return 0;
+}
+EXPORT_SYMBOL(ecm_ipa_connect);
+
+static int ecm_ipa_open(struct net_device *net)
+{
+ struct ecm_ipa_dev *dev;
+ ECM_IPA_LOG_ENTRY();
+
+ dev = netdev_priv(net);
+
+ if (!netif_carrier_ok(net))
+ pr_debug("carrier is not ON yet - continuing\n");
+
+ netif_start_queue(net);
+ pr_debug("queue started\n");
+
+ ECM_IPA_LOG_EXIT();
+ return 0;
+}
+
+/**
+ * ecm_ipa_start_xmit() - send data from APPs to USB core via IPA core
+ * @skb: packet received from Linux stack
+ * @net: the network device being used to send this packet
+ *
+ * Several conditions needed in order to send the packet to IPA:
+ * - we are in a valid state where the queue is not stopped
+ * - Filter Tx switch is turned off
+ * - The resources required for actual Tx are all up
+ *
+ */
+static netdev_tx_t ecm_ipa_start_xmit(struct sk_buff *skb,
+ struct net_device *net)
+{
+ int ret;
+ netdev_tx_t status = NETDEV_TX_BUSY;
+ struct ecm_ipa_dev *dev = netdev_priv(net);
+
+ if (unlikely(netif_queue_stopped(net))) {
+ ECM_IPA_ERROR("interface queue is stopped\n");
+ goto out;
+ }
+
+ if (unlikely(tx_filter(skb))) {
+ dev_kfree_skb_any(skb);
+ pr_debug("packet got filtered out on Tx path\n");
+ status = NETDEV_TX_OK;
+ goto out;
+ }
+ ret = resource_request(dev);
+ if (ret) {
+ pr_debug("Waiting to resource\n");
+ netif_stop_queue(net);
+ goto resource_busy;
+ }
+
+ if (atomic_read(&dev->outstanding_pkts) >= dev->outstanding_high) {
+ pr_debug("Outstanding high boundary reached (%d)- stopping queue\n",
+ dev->outstanding_high);
+ netif_stop_queue(net);
+ status = -NETDEV_TX_BUSY;
+ goto out;
+ }
+
+ ret = ipa_tx_dp(IPA_TO_USB_CLIENT, skb, NULL);
+ if (ret) {
+ ECM_IPA_ERROR("ipa transmit failed (%d)\n", ret);
+ goto fail_tx_packet;
+ }
+
+ atomic_inc(&dev->outstanding_pkts);
+ net->stats.tx_packets++;
+ net->stats.tx_bytes += skb->len;
+ status = NETDEV_TX_OK;
+ goto out;
+
+fail_tx_packet:
+out:
+ resource_release(dev);
+resource_busy:
+ return status;
+}
+
+/**
+ * ecm_ipa_packet_receive_notify() - Rx notify
+ *
+ * @priv: ecm driver context
+ * @evt: event type
+ * @data: data provided with event
+ *
+ * IPA will pass a packet with skb->data pointing to Ethernet packet frame
+ */
+static void ecm_ipa_packet_receive_notify(void *priv,
+ enum ipa_dp_evt_type evt,
+ unsigned long data)
+{
+ struct sk_buff *skb = (struct sk_buff *)data;
+ struct ecm_ipa_dev *dev = priv;
+ int result;
+
+ if (evt != IPA_RECEIVE) {
+ ECM_IPA_ERROR("A none IPA_RECEIVE event in ecm_ipa_receive\n");
+ return;
+ }
+
+ skb->dev = dev->net;
+ skb->protocol = eth_type_trans(skb, dev->net);
+ if (rx_filter(skb)) {
+ pr_debug("packet got filtered out on Rx path\n");
+ dev_kfree_skb_any(skb);
+ return;
+ }
+
+ result = netif_rx(skb);
+ if (result)
+ ECM_IPA_ERROR("fail on netif_rx\n");
+ dev->net->stats.rx_packets++;
+ dev->net->stats.rx_bytes += skb->len;
+
+ return;
+}
+
+static int ecm_ipa_stop(struct net_device *net)
+{
+ ECM_IPA_LOG_ENTRY();
+ pr_debug("stopping net device\n");
+ netif_stop_queue(net);
+ ECM_IPA_LOG_EXIT();
+ return 0;
+}
+
+int ecm_ipa_disconnect(void *priv)
+{
+ struct ecm_ipa_dev *dev = priv;
+ ECM_IPA_LOG_ENTRY();
+ NULL_CHECK(dev);
+ pr_debug("priv=0x%p\n", priv);
+ netif_carrier_off(dev->net);
+ ECM_IPA_LOG_EXIT();
+ return 0;
+}
+EXPORT_SYMBOL(ecm_ipa_disconnect);
+
+
+/**
+ * ecm_ipa_cleanup() - destroys all
+ * ecm information
+ * @priv: main driver context parameters
+ *
+ */
+void ecm_ipa_cleanup(void *priv)
+{
+ struct ecm_ipa_dev *dev = priv;
+ ECM_IPA_LOG_ENTRY();
+ pr_debug("priv=0x%p\n", priv);
+ if (!dev) {
+ ECM_IPA_ERROR("dev NULL pointer\n");
+ return;
+ }
+
+ ecm_ipa_destory_rm_resource(dev);
+ ecm_ipa_debugfs_destroy(dev);
+
+ unregister_netdev(dev->net);
+ free_netdev(dev->net);
+
+ pr_debug("cleanup done\n");
+ ecm_ipa_ctx = NULL;
+ ECM_IPA_LOG_EXIT();
+ return ;
+}
+EXPORT_SYMBOL(ecm_ipa_cleanup);
+
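Putting the reworked entry points above together, the expected call order from the USB ECM function driver is: ecm_ipa_init() before the pipes exist, ipa_connect() using the two callbacks returned in the params, then ecm_ipa_connect() with the resulting pipe handles, and ecm_ipa_disconnect()/ecm_ipa_cleanup() on teardown. The sketch below is illustrative only and not part of the patch; the example_* names and handles are invented, error handling is trimmed, and treating host_ethaddr/device_ethaddr as ETH_ALEN byte arrays is an assumption inferred from the %pM usage above.

/* assumes <mach/ecm_ipa.h> plus <linux/etherdevice.h> for ETH_ALEN */
static void *example_ecm_priv;

static int example_ecm_bringup(const u8 *host_mac, const u8 *dev_mac,
			       u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl)
{
	struct ecm_ipa_params params = { 0 };
	int ret;

	memcpy(params.host_ethaddr, host_mac, ETH_ALEN);
	memcpy(params.device_ethaddr, dev_mac, ETH_ALEN);

	/* allocates the net device, debugfs, RM resource, header insertion */
	ret = ecm_ipa_init(&params);
	if (ret)
		return ret;
	example_ecm_priv = params.private;

	/*
	 * params.ecm_ipa_rx_dp_notify / params.ecm_ipa_tx_dp_notify are the
	 * callbacks to hand to ipa_connect() when setting up the USB<->IPA
	 * pipes; the resulting pipe handles are then passed in here.
	 */
	return ecm_ipa_connect(usb_to_ipa_hdl, ipa_to_usb_hdl,
			       example_ecm_priv);
}

static void example_ecm_teardown(void)
{
	ecm_ipa_disconnect(example_ecm_priv);	/* carrier off */
	ecm_ipa_cleanup(example_ecm_priv);	/* unregister netdev and free */
}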
+/**
* @ecm_ipa_rx_dp_notify: supplied callback to be called by the IPA
* driver upon data packets received from USB pipe into IPA core.
* @ecm_ipa_rt_dp_notify: supplied callback to be called by the IPA
@@ -187,52 +494,7 @@
*
* Returns negative errno, or zero on success
*/
-int ecm_ipa_init(ecm_ipa_callback *ecm_ipa_rx_dp_notify,
- ecm_ipa_callback *ecm_ipa_tx_dp_notify,
- void **priv)
-{
- int ret = 0;
- struct net_device *net;
- struct ecm_ipa_dev *dev;
- ECM_IPA_LOG_ENTRY();
- pr_debug("%s version %s\n", DRIVER_NAME, DRIVER_VERSION);
- NULL_CHECK(ecm_ipa_rx_dp_notify);
- NULL_CHECK(ecm_ipa_tx_dp_notify);
- NULL_CHECK(priv);
- net = alloc_etherdev(sizeof(struct ecm_ipa_dev));
- if (!net) {
- ret = -ENOMEM;
- ECM_IPA_ERROR("fail to allocate etherdev\n");
- goto fail_alloc_etherdev;
- }
- pr_debug("etherdev was successfully allocated\n");
- dev = netdev_priv(net);
- memset(dev, 0, sizeof(*dev));
- dev->tx_enable = true;
- dev->rx_enable = true;
- atomic_set(&dev->outstanding_pkts, 0);
- dev->outstanding_high = DEFAULT_OUTSTANDING_HIGH;
- dev->outstanding_low = DEFAULT_OUTSTANDING_LOW;
- dev->net = net;
- ecm_ipa_ctx = dev;
- *priv = (void *)dev;
- snprintf(net->name, sizeof(net->name), "%s%%d", "ecm");
- net->netdev_ops = &ecm_ipa_netdev_ops;
- pr_debug("internal data structures were intialized\n");
- ret = ecm_ipa_debugfs_init(dev);
- if (ret)
- goto fail_debugfs;
- pr_debug("debugfs entries were created\n");
- *ecm_ipa_rx_dp_notify = ecm_ipa_packet_receive_notify;
- *ecm_ipa_tx_dp_notify = ecm_ipa_tx_complete_notify;
- ECM_IPA_LOG_EXIT();
- return 0;
-fail_debugfs:
- free_netdev(net);
-fail_alloc_etherdev:
- return ret;
-}
-EXPORT_SYMBOL(ecm_ipa_init);
+
/**
* ecm_ipa_rules_cfg() - set header insertion and register Tx/Rx properties
@@ -408,105 +670,6 @@
*
* Returns negative errno, or zero on success
*/
-int ecm_ipa_configure(u8 host_ethaddr[], u8 device_ethaddr[],
- void *priv)
-{
- struct ecm_ipa_dev *dev = priv;
- struct net_device *net;
- int result;
- ECM_IPA_LOG_ENTRY();
- NULL_CHECK(host_ethaddr);
- NULL_CHECK(host_ethaddr);
- NULL_CHECK(dev);
- net = dev->net;
- NULL_CHECK(net);
- pr_debug("host_ethaddr=%pM device_ethaddr=%pM\n",
- host_ethaddr, device_ethaddr);
- result = ecm_ipa_create_rm_resource(dev);
- if (result) {
- ECM_IPA_ERROR("fail on RM create\n");
- return -EINVAL;
- }
- pr_debug("RM resource was created\n");
- netif_carrier_off(dev->net);
- result = ecm_ipa_set_device_ethernet_addr(net->dev_addr,
- device_ethaddr);
- if (result) {
- ECM_IPA_ERROR("set device MAC failed\n");
- goto fail_set_device_ethernet;
- }
- result = ecm_ipa_rules_cfg(dev, host_ethaddr, device_ethaddr);
- if (result) {
- ECM_IPA_ERROR("fail on ipa rules set\n");
- goto fail_set_device_ethernet;
- }
- pr_debug("Ethernet header insertion was set\n");
- result = ecm_ipa_register_properties();
- if (result) {
- ECM_IPA_ERROR("fail on properties set\n");
- goto fail_register_tx;
- }
- pr_debug("ECM 2 Tx and 2 Rx properties were registered\n");
- result = register_netdev(net);
- if (result) {
- ECM_IPA_ERROR("register_netdev failed: %d\n", result);
- goto fail_register_netdev;
- }
- pr_debug("register_netdev succeeded\n");
- ECM_IPA_LOG_EXIT();
- return 0;
-fail_register_netdev:
- ecm_ipa_deregister_properties();
-fail_register_tx:
-fail_set_device_ethernet:
- ecm_ipa_rules_destroy(dev);
- ecm_ipa_destory_rm_resource(dev);
- free_netdev(net);
- return result;
-}
-EXPORT_SYMBOL(ecm_ipa_configure);
-
-int ecm_ipa_connect(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl,
- void *priv)
-{
- struct ecm_ipa_dev *dev = priv;
- ECM_IPA_LOG_ENTRY();
- NULL_CHECK(priv);
- pr_debug("usb_to_ipa_hdl = %d, ipa_to_usb_hdl = %d\n",
- usb_to_ipa_hdl, ipa_to_usb_hdl);
- if (!usb_to_ipa_hdl || usb_to_ipa_hdl >= IPA_CLIENT_MAX) {
- ECM_IPA_ERROR("usb_to_ipa_hdl(%d) is not a valid ipa handle\n",
- usb_to_ipa_hdl);
- return -EINVAL;
- }
- if (!ipa_to_usb_hdl || ipa_to_usb_hdl >= IPA_CLIENT_MAX) {
- ECM_IPA_ERROR("ipa_to_usb_hdl(%d) is not a valid ipa handle\n",
- ipa_to_usb_hdl);
- return -EINVAL;
- }
- dev->ipa_to_usb_hdl = ipa_to_usb_hdl;
- dev->usb_to_ipa_hdl = usb_to_ipa_hdl;
- ecm_ipa_ep_registers_cfg(usb_to_ipa_hdl, ipa_to_usb_hdl);
- netif_carrier_on(dev->net);
- if (!netif_carrier_ok(dev->net)) {
- ECM_IPA_ERROR("netif_carrier_ok error\n");
- return -EBUSY;
- }
- ECM_IPA_LOG_EXIT();
- return 0;
-}
-EXPORT_SYMBOL(ecm_ipa_connect);
-
-int ecm_ipa_disconnect(void *priv)
-{
- struct ecm_ipa_dev *dev = priv;
- ECM_IPA_LOG_ENTRY();
- NULL_CHECK(dev);
- netif_carrier_off(dev->net);
- ECM_IPA_LOG_EXIT();
- return 0;
-}
-EXPORT_SYMBOL(ecm_ipa_disconnect);
static void ecm_ipa_rm_notify(void *user_data, enum ipa_rm_event event,
@@ -595,52 +758,6 @@
return dev->rm_enable;
}
-static int ecm_ipa_open(struct net_device *net)
-{
- ECM_IPA_LOG_ENTRY();
- netif_start_queue(net);
- ECM_IPA_LOG_EXIT();
- return 0;
-}
-
-static int ecm_ipa_stop(struct net_device *net)
-{
- ECM_IPA_LOG_ENTRY();
- pr_debug("stopping net device\n");
- netif_stop_queue(net);
- ECM_IPA_LOG_EXIT();
- return 0;
-}
-
-/**
- * ecm_ipa_cleanup() - destroys all
- * ecm information
- * @priv: main driver context parameters
- *
- */
-void ecm_ipa_cleanup(void *priv)
-{
- struct ecm_ipa_dev *dev = priv;
- ECM_IPA_LOG_ENTRY();
- if (!dev) {
- ECM_IPA_ERROR("dev NULL pointer\n");
- return;
- }
-
- ecm_ipa_destory_rm_resource(dev);
- ecm_ipa_debugfs_destroy(dev);
-
- if (!dev->net) {
- unregister_netdev(dev->net);
- free_netdev(dev->net);
- }
- pr_debug("cleanup done\n");
- ecm_ipa_ctx = NULL;
- ECM_IPA_LOG_EXIT();
- return ;
-}
-EXPORT_SYMBOL(ecm_ipa_cleanup);
-
static int resource_request(struct ecm_ipa_dev *dev)
{
int result = 0;
@@ -663,111 +780,6 @@
}
/**
- * ecm_ipa_start_xmit() - send data from APPs to USB core via IPA core
- * @skb: packet received from Linux stack
- * @net: the network device being used to send this packet
- *
- * Several conditions needed in order to send the packet to IPA:
- * - we are in a valid state were the queue is not stopped
- * - Filter Tx switch is turned off
- * - The resources required for actual Tx are all up
- *
- */
-static netdev_tx_t ecm_ipa_start_xmit(struct sk_buff *skb,
- struct net_device *net)
-{
- int ret;
- netdev_tx_t status = NETDEV_TX_BUSY;
- struct ecm_ipa_dev *dev = netdev_priv(net);
-
- if (unlikely(netif_queue_stopped(net))) {
- ECM_IPA_ERROR("interface queue is stopped\n");
- goto out;
- }
-
- if (unlikely(tx_filter(skb))) {
- dev_kfree_skb_any(skb);
- pr_debug("packet got filtered out on Tx path\n");
- status = NETDEV_TX_OK;
- goto out;
- }
- ret = resource_request(dev);
- if (ret) {
- pr_debug("Waiting to resource\n");
- netif_stop_queue(net);
- goto resource_busy;
- }
-
- pr_debug("Before sending packet the outstanding packets counter is %d\n",
- atomic_read(&dev->outstanding_pkts));
-
- if (atomic_read(&dev->outstanding_pkts) >= dev->outstanding_high) {
- pr_debug("Outstanding high boundary reached (%d)- stopping queue\n",
- dev->outstanding_high);
- netif_stop_queue(net);
- status = -NETDEV_TX_BUSY;
- goto out;
- }
-
- ret = ipa_tx_dp(IPA_TO_USB_CLIENT, skb, NULL);
- if (ret) {
- ECM_IPA_ERROR("ipa transmit failed (%d)\n", ret);
- goto fail_tx_packet;
- }
-
- atomic_inc(&dev->outstanding_pkts);
- net->stats.tx_packets++;
- net->stats.tx_bytes += skb->len;
- status = NETDEV_TX_OK;
- goto out;
-
-fail_tx_packet:
-out:
- resource_release(dev);
-resource_busy:
- return status;
-}
-
-/**
- * ecm_ipa_packet_receive_notify() - Rx notify
- *
- * @priv: ecm driver context
- * @evt: event type
- * @data: data provided with event
- *
- * IPA will pass a packet with skb->data pointing to Ethernet packet frame
- */
-void ecm_ipa_packet_receive_notify(void *priv,
- enum ipa_dp_evt_type evt,
- unsigned long data)
-{
- struct sk_buff *skb = (struct sk_buff *)data;
- struct ecm_ipa_dev *dev = priv;
- int result;
-
- if (evt != IPA_RECEIVE) {
- ECM_IPA_ERROR("A none IPA_RECEIVE event in ecm_ipa_receive\n");
- return;
- }
-
- skb->dev = dev->net;
- skb->protocol = eth_type_trans(skb, dev->net);
- if (rx_filter(skb)) {
- pr_debug("packet got filtered out on Rx path\n");
- dev_kfree_skb_any(skb);
- return;
- }
-
- result = netif_rx(skb);
- if (result)
- ECM_IPA_ERROR("fail on netif_rx\n");
- dev->net->stats.rx_packets++;
- dev->net->stats.rx_bytes += skb->len;
-
- return;
-}
-
-/**
 * ecm_ipa_tx_complete_notify() - Tx completion notify
*
* @priv: ecm driver context
@@ -777,7 +789,7 @@
* Check that the packet is the one we sent and release it
 * This function will be called in deferred context in IPA wq.
*/
-void ecm_ipa_tx_complete_notify(void *priv,
+static void ecm_ipa_tx_complete_notify(void *priv,
enum ipa_dp_evt_type evt,
unsigned long data)
{
@@ -799,36 +811,16 @@
dev->outstanding_low);
netif_wake_queue(dev->net);
}
- pr_debug("After Tx-complete the outstanding packets counter is %d\n",
- atomic_read(&dev->outstanding_pkts));
dev_kfree_skb_any(skb);
return;
}
-static int ecm_ipa_debugfs_tx_open(struct inode *inode, struct file *file)
+static int ecm_ipa_debugfs_atomic_open(struct inode *inode, struct file *file)
{
struct ecm_ipa_dev *dev = inode->i_private;
ECM_IPA_LOG_ENTRY();
- file->private_data = &(dev->tx_enable);
- ECM_IPA_LOG_ENTRY();
- return 0;
-}
-
-static int ecm_ipa_debugfs_rx_open(struct inode *inode, struct file *file)
-{
- struct ecm_ipa_dev *dev = inode->i_private;
- ECM_IPA_LOG_ENTRY();
- file->private_data = &(dev->rx_enable);
- ECM_IPA_LOG_EXIT();
- return 0;
-}
-
-static int ecm_ipa_debugfs_rm_open(struct inode *inode, struct file *file)
-{
- struct ecm_ipa_dev *dev = inode->i_private;
- ECM_IPA_LOG_ENTRY();
- file->private_data = &(dev->rm_enable);
+ file->private_data = &(dev->outstanding_pkts);
ECM_IPA_LOG_EXIT();
return 0;
}
@@ -904,87 +896,90 @@
return size;
}
+static ssize_t ecm_ipa_debugfs_atomic_read(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos)
+{
+ int nbytes;
+ u8 atomic_str[DEBUGFS_TEMP_BUF_SIZE] = {0};
+ atomic_t *atomic_var = file->private_data;
+ nbytes = scnprintf(atomic_str, sizeof(atomic_str), "%d\n",
+ atomic_read(atomic_var));
+ return simple_read_from_buffer(ubuf, count, ppos, atomic_str, nbytes);
+}
+
+
static int ecm_ipa_debugfs_init(struct ecm_ipa_dev *dev)
{
- const mode_t flags = S_IRUGO | S_IWUGO;
+ const mode_t flags_read_write = S_IRUGO | S_IWUGO;
+ const mode_t flags_read_only = S_IRUGO;
+ struct dentry *file;
- int ret = -EINVAL;
ECM_IPA_LOG_ENTRY();
+
if (!dev)
return -EINVAL;
- dev->folder = debugfs_create_dir("ecm_ipa", NULL);
- if (!dev->folder) {
- ECM_IPA_ERROR("could not create debugfs folder entry\n");
- ret = -EFAULT;
- goto fail_folder;
- }
- dev->tx_file = debugfs_create_file("tx_enable", flags, dev->folder, dev,
- &ecm_ipa_debugfs_tx_ops);
- if (!dev->tx_file) {
- ECM_IPA_ERROR("could not create debugfs tx file\n");
- ret = -EFAULT;
- goto fail_file;
- }
- dev->rx_file = debugfs_create_file("rx_enable", flags, dev->folder, dev,
- &ecm_ipa_debugfs_rx_ops);
- if (!dev->rx_file) {
- ECM_IPA_ERROR("could not create debugfs rx file\n");
- ret = -EFAULT;
- goto fail_file;
- }
- dev->rm_file = debugfs_create_file("rm_enable", flags, dev->folder, dev,
- &ecm_ipa_debugfs_rm_ops);
- if (!dev->rm_file) {
- ECM_IPA_ERROR("could not create debugfs rm file\n");
- ret = -EFAULT;
- goto fail_file;
- }
- dev->dma_file = debugfs_create_file("dma_enable", flags, dev->folder,
- dev, &ecm_ipa_debugfs_dma_ops);
- if (!dev->dma_file) {
- ECM_IPA_ERROR("could not create debugfs dma file\n");
- ret = -EFAULT;
- goto fail_file;
- }
- dev->outstanding_high_file = debugfs_create_u8("outstanding_high",
- flags, dev->folder, &dev->outstanding_high);
- if (!dev->outstanding_high_file) {
- ECM_IPA_ERROR("could not create outstanding_high file\n");
- ret = -EFAULT;
+ dev->directory = debugfs_create_dir("ecm_ipa", NULL);
+ if (!dev->directory) {
+ ECM_IPA_ERROR("could not create debugfs directory entry\n");
+ goto fail_directory;
+ }
+ file = debugfs_create_bool("tx_enable", flags_read_write,
+ dev->directory, &dev->tx_enable);
+ if (!file) {
+ ECM_IPA_ERROR("could not create debugfs tx file\n");
goto fail_file;
}
- dev->outstanding_low_file = debugfs_create_u8("outstanding_low",
- flags, dev->folder, &dev->outstanding_low);
- if (!dev->outstanding_low_file) {
+ file = debugfs_create_bool("rx_enable", flags_read_write,
+ dev->directory, &dev->rx_enable);
+ if (!file) {
+ ECM_IPA_ERROR("could not create debugfs rx file\n");
+ goto fail_file;
+ }
+ file = debugfs_create_bool("rm_enable", flags_read_write,
+ dev->directory, &dev->rm_enable);
+ if (!file) {
+ ECM_IPA_ERROR("could not create debugfs rm file\n");
+ goto fail_file;
+ }
+ file = debugfs_create_u8("outstanding_high", flags_read_write,
+ dev->directory, &dev->outstanding_high);
+ if (!file) {
+ ECM_IPA_ERROR("could not create outstanding_high file\n");
+ goto fail_file;
+ }
+ file = debugfs_create_u8("outstanding_low", flags_read_write,
+ dev->directory, &dev->outstanding_low);
+ if (!file) {
ECM_IPA_ERROR("could not create outstanding_low file\n");
- ret = -EFAULT;
+ goto fail_file;
+ }
+ file = debugfs_create_file("dma_enable", flags_read_write,
+ dev->directory, dev, &ecm_ipa_debugfs_dma_ops);
+ if (!file) {
+ ECM_IPA_ERROR("could not create debugfs dma file\n");
+ goto fail_file;
+ }
+ file = debugfs_create_file("outstanding", flags_read_only,
+ dev->directory, dev, &ecm_ipa_debugfs_atomic_ops);
+ if (!file) {
+ ECM_IPA_ERROR("could not create outstanding file\n");
goto fail_file;
}
ECM_IPA_LOG_EXIT();
return 0;
fail_file:
- debugfs_remove_recursive(dev->folder);
-fail_folder:
- return ret;
+ debugfs_remove_recursive(dev->directory);
+fail_directory:
+ return -EFAULT;
}
static void ecm_ipa_debugfs_destroy(struct ecm_ipa_dev *dev)
{
- debugfs_remove_recursive(dev->folder);
+ debugfs_remove_recursive(dev->directory);
}
-static void eth_get_drvinfo(struct net_device *net,
- struct ethtool_drvinfo *drv_info)
-{
- ECM_IPA_LOG_ENTRY();
- strlcpy(drv_info->driver, DRIVER_NAME, sizeof(drv_info->driver));
- strlcpy(drv_info->version, DRIVER_VERSION, sizeof(drv_info->version));
- ECM_IPA_LOG_EXIT();
-}
-
-
/**
* ecm_ipa_ep_cfg() - configure the USB endpoints for ECM
*
@@ -1000,7 +995,7 @@
* - No aggregation
* - Add Ethernet header
*/
-int ecm_ipa_ep_registers_cfg(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl)
+static int ecm_ipa_ep_registers_cfg(u32 usb_to_ipa_hdl, u32 ipa_to_usb_hdl)
{
int result = 0;
struct ipa_ep_cfg usb_to_ipa_ep_cfg;
@@ -1042,7 +1037,7 @@
 * which is needed for cores that do not support blocks logic
* Note that client handles are the actual pipe index
*/
-int ecm_ipa_ep_registers_dma_cfg(u32 usb_to_ipa_hdl)
+static int ecm_ipa_ep_registers_dma_cfg(u32 usb_to_ipa_hdl)
{
int result = 0;
struct ipa_ep_cfg_mode cfg_mode;
@@ -1078,7 +1073,8 @@
*
* Returns 0 for success, negative otherwise
*/
-int ecm_ipa_set_device_ethernet_addr(u8 *dev_ethaddr, u8 device_ethaddr[])
+static int ecm_ipa_set_device_ethernet_addr(u8 *dev_ethaddr,
+ u8 device_ethaddr[])
{
if (!is_valid_ether_addr(device_ethaddr))
return -EINVAL;
diff --git a/drivers/net/wireless/wcnss/wcnss_wlan.c b/drivers/net/wireless/wcnss/wcnss_wlan.c
index 44465dc..1a175c9 100644
--- a/drivers/net/wireless/wcnss/wcnss_wlan.c
+++ b/drivers/net/wireless/wcnss/wcnss_wlan.c
@@ -197,6 +197,7 @@
void __iomem *riva_ccu_base;
void __iomem *pronto_a2xb_base;
void __iomem *pronto_ccpu_base;
+ void __iomem *fiq_reg;
} *penv = NULL;
static ssize_t wcnss_serial_number_show(struct device *dev,
@@ -401,7 +402,7 @@
if (wcnss_hardware_type() == WCNSS_PRONTO_HW) {
wcnss_pronto_log_debug_regs();
wmb();
- __raw_writel(1 << 16, MSM_APCS_GCC_BASE + 0x8);
+ __raw_writel(1 << 16, penv->fiq_reg);
} else {
wcnss_riva_log_debug_regs();
wmb();
@@ -1068,6 +1069,7 @@
struct qcom_wcnss_opts *pdata;
unsigned long wcnss_phys_addr;
int size = 0;
+ struct resource *res;
int has_pronto_hw = of_property_read_bool(pdev->dev.of_node,
"qcom,has_pronto_hw");
@@ -1183,11 +1185,28 @@
pr_err("%s: ioremap wcnss physical failed\n", __func__);
goto fail_ioremap2;
}
+ /* for reset FIQ */
+ res = platform_get_resource_byname(penv->pdev,
+ IORESOURCE_MEM, "wcnss_fiq");
+ if (!res) {
+ dev_err(&pdev->dev, "insufficient irq mem resources\n");
+ ret = -ENOENT;
+ goto fail_ioremap3;
+ }
+ penv->fiq_reg = ioremap_nocache(res->start, resource_size(res));
+ if (!penv->fiq_reg) {
+ pr_err("wcnss: %s: ioremap_nocache() failed fiq_reg addr:%pr\n",
+ __func__, &res->start);
+ ret = -ENOMEM;
+ goto fail_ioremap3;
+ }
}
penv->cold_boot_done = 1;
return 0;
+fail_ioremap3:
+ iounmap(penv->pronto_ccpu_base);
fail_ioremap2:
iounmap(penv->pronto_a2xb_base);
fail_ioremap:
diff --git a/drivers/of/platform.c b/drivers/of/platform.c
index 0970505..5f0ba94 100644
--- a/drivers/of/platform.c
+++ b/drivers/of/platform.c
@@ -214,7 +214,7 @@
#if defined(CONFIG_MICROBLAZE)
dev->archdata.dma_mask = 0xffffffffUL;
#endif
- dev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->dev.coherent_dma_mask = DMA_BIT_MASK(sizeof(dma_addr_t) * 8);
dev->dev.bus = &platform_bus_type;
dev->dev.platform_data = platform_data;
diff --git a/drivers/platform/msm/ipa/Makefile b/drivers/platform/msm/ipa/Makefile
index b7eca61..2b6ce75 100644
--- a/drivers/platform/msm/ipa/Makefile
+++ b/drivers/platform/msm/ipa/Makefile
@@ -1,4 +1,4 @@
obj-$(CONFIG_IPA) += ipat.o
ipat-y := ipa.o ipa_debugfs.o ipa_hdr.o ipa_flt.o ipa_rt.o ipa_dp.o ipa_client.o \
- ipa_utils.o ipa_nat.o rmnet_bridge.o a2_service.o ipa_bridge.o ipa_intf.o teth_bridge.o \
+ ipa_utils.o ipa_nat.o a2_service.o ipa_bridge.o ipa_intf.o teth_bridge.o \
ipa_rm.o ipa_rm_dependency_graph.o ipa_rm_peers_list.o ipa_rm_resource.o ipa_rm_inactivity_timer.o
diff --git a/drivers/platform/msm/ipa/a2_service.c b/drivers/platform/msm/ipa/a2_service.c
index 4b5f0a2..26353ec 100644
--- a/drivers/platform/msm/ipa/a2_service.c
+++ b/drivers/platform/msm/ipa/a2_service.c
@@ -86,8 +86,10 @@
struct list_head bam_tx_pool;
spinlock_t bam_tx_pool_spinlock;
struct workqueue_struct *a2_mux_tx_workqueue;
+ struct workqueue_struct *a2_mux_rx_workqueue;
int a2_mux_initialized;
bool bam_is_connected;
+ bool bam_connect_in_progress;
int a2_mux_send_power_vote_on_init_once;
int a2_mux_sw_bridge_is_connected;
u32 a2_device_handle;
@@ -205,6 +207,7 @@
smsm_change_state(SMSM_APPS_STATE,
clear_bit & SMSM_A2_POWER_CONTROL_ACK,
~clear_bit & SMSM_A2_POWER_CONTROL_ACK);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_apps_acks);
clear_bit = ~clear_bit;
}
@@ -216,10 +219,13 @@
if (a2_mux_ctx->bam_dmux_uplink_vote == vote)
IPADBG("%s: warning - duplicate power vote\n", __func__);
a2_mux_ctx->bam_dmux_uplink_vote = vote;
- if (vote)
+ if (vote) {
smsm_change_state(SMSM_APPS_STATE, 0, SMSM_A2_POWER_CONTROL);
- else
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_on_reqs_out);
+ } else {
smsm_change_state(SMSM_APPS_STATE, SMSM_A2_POWER_CONTROL, 0);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_off_reqs_out);
+ }
}
static inline void ul_powerdown(void)
@@ -233,7 +239,6 @@
INIT_COMPLETION(a2_mux_ctx->ul_wakeup_ack_completion);
power_vote(0);
}
- a2_mux_ctx->bam_is_connected = false;
}
static void ul_wakeup(void)
@@ -241,7 +246,8 @@
int ret;
mutex_lock(&a2_mux_ctx->wakeup_lock);
- if (a2_mux_ctx->bam_is_connected) {
+ if (a2_mux_ctx->bam_is_connected &&
+ !a2_mux_ctx->bam_connect_in_progress) {
IPADBG("%s Already awake\n", __func__);
mutex_unlock(&a2_mux_ctx->wakeup_lock);
return;
@@ -255,7 +261,6 @@
grab_wakelock();
else
a2_mux_ctx->a2_pc_disabled_wakelock_skipped = 1;
- a2_mux_ctx->bam_is_connected = true;
mutex_unlock(&a2_mux_ctx->wakeup_lock);
return;
}
@@ -294,7 +299,6 @@
goto bail;
}
}
- a2_mux_ctx->bam_is_connected = true;
IPADBG("%s complete\n", __func__);
mutex_unlock(&a2_mux_ctx->wakeup_lock);
return;
@@ -363,26 +367,82 @@
dev_kfree_skb_any(skb);
}
+static bool msm_bam_dmux_kickoff_ul_power_down(void)
+
+{
+ bool is_connected;
+
+ write_lock(&a2_mux_ctx->ul_wakeup_lock);
+ if (a2_mux_ctx->bam_connect_in_progress) {
+ a2_mux_ctx->bam_is_connected = false;
+ is_connected = true;
+ } else {
+ is_connected = a2_mux_ctx->bam_is_connected;
+ a2_mux_ctx->bam_is_connected = false;
+ if (is_connected) {
+ a2_mux_ctx->bam_connect_in_progress = true;
+ queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
+ &a2_mux_ctx->kickoff_ul_power_down);
+ }
+ }
+ write_unlock(&a2_mux_ctx->ul_wakeup_lock);
+ return is_connected;
+}
+
+static bool msm_bam_dmux_kickoff_ul_wakeup(void)
+{
+ bool is_connected;
+
+ write_lock(&a2_mux_ctx->ul_wakeup_lock);
+ if (a2_mux_ctx->bam_connect_in_progress) {
+ a2_mux_ctx->bam_is_connected = true;
+ is_connected = false;
+ } else {
+ is_connected = a2_mux_ctx->bam_is_connected;
+ a2_mux_ctx->bam_is_connected = true;
+ if (!is_connected) {
+ a2_mux_ctx->bam_connect_in_progress = true;
+ queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
+ &a2_mux_ctx->kickoff_ul_wakeup);
+ }
+ }
+ write_unlock(&a2_mux_ctx->ul_wakeup_lock);
+ return is_connected;
+}
+
static void kickoff_ul_power_down_func(struct work_struct *work)
{
- unsigned long flags;
+ bool is_connected;
- write_lock_irqsave(&a2_mux_ctx->ul_wakeup_lock, flags);
- if (a2_mux_ctx->bam_is_connected) {
- IPADBG("%s: UL active - forcing powerdown\n", __func__);
- ul_powerdown();
- }
- write_unlock_irqrestore(&a2_mux_ctx->ul_wakeup_lock, flags);
- ipa_rm_notify_completion(IPA_RM_RESOURCE_RELEASED,
- IPA_RM_RESOURCE_A2_CONS);
+ IPADBG("%s: UL active - forcing powerdown\n", __func__);
+ ul_powerdown();
+ write_lock(&a2_mux_ctx->ul_wakeup_lock);
+ is_connected = a2_mux_ctx->bam_is_connected;
+ a2_mux_ctx->bam_is_connected = false;
+ a2_mux_ctx->bam_connect_in_progress = false;
+ write_unlock(&a2_mux_ctx->ul_wakeup_lock);
+ if (is_connected)
+ msm_bam_dmux_kickoff_ul_wakeup();
+ else
+ ipa_rm_notify_completion(IPA_RM_RESOURCE_RELEASED,
+ IPA_RM_RESOURCE_A2_CONS);
}
static void kickoff_ul_wakeup_func(struct work_struct *work)
{
- if (!a2_mux_ctx->bam_is_connected)
- ul_wakeup();
- ipa_rm_notify_completion(IPA_RM_RESOURCE_GRANTED,
+ bool is_connected;
+
+ ul_wakeup();
+ write_lock(&a2_mux_ctx->ul_wakeup_lock);
+ is_connected = a2_mux_ctx->bam_is_connected;
+ a2_mux_ctx->bam_is_connected = true;
+ a2_mux_ctx->bam_connect_in_progress = false;
+ write_unlock(&a2_mux_ctx->ul_wakeup_lock);
+ if (is_connected)
+ ipa_rm_notify_completion(IPA_RM_RESOURCE_GRANTED,
IPA_RM_RESOURCE_A2_CONS);
+ else
+ msm_bam_dmux_kickoff_ul_power_down();
}
static void kickoff_ul_request_resource_func(struct work_struct *work)
@@ -410,33 +470,6 @@
toggle_apps_ack();
}
-static bool msm_bam_dmux_kickoff_ul_wakeup(void)
-{
- bool is_connected;
-
- read_lock(&a2_mux_ctx->ul_wakeup_lock);
- is_connected = a2_mux_ctx->bam_is_connected;
- read_unlock(&a2_mux_ctx->ul_wakeup_lock);
- if (!is_connected)
- queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
- &a2_mux_ctx->kickoff_ul_wakeup);
- return is_connected;
-}
-
-static bool msm_bam_dmux_kickoff_ul_power_down(void)
-
-{
- bool is_connected;
-
- read_lock(&a2_mux_ctx->ul_wakeup_lock);
- is_connected = a2_mux_ctx->bam_is_connected;
- read_unlock(&a2_mux_ctx->ul_wakeup_lock);
- if (is_connected)
- queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
- &a2_mux_ctx->kickoff_ul_power_down);
- return is_connected;
-}
-
static void ipa_embedded_notify(void *priv,
enum ipa_dp_evt_type evt,
unsigned long data)
@@ -487,10 +520,6 @@
__func__);
return -EFAULT;
}
- ret = sps_device_reset(a2_mux_ctx->a2_device_handle);
- if (ret)
- IPAERR("%s: device reset failed ret = %d\n",
- __func__, ret);
memset(&connect_params, 0, sizeof(struct ipa_sys_connect_params));
connect_params.client = IPA_CLIENT_A2_TETHERED_CONS;
connect_params.notify = ipa_tethered_notify;
@@ -602,12 +631,6 @@
__func__, ret);
return ret;
}
- ret = sps_device_reset(a2_mux_ctx->a2_device_handle);
- if (ret) {
- IPAERR("%s: device reset failed ret = %d\n",
- __func__, ret);
- return ret;
- }
verify_tx_queue_is_empty(__func__);
(void) ipa_rm_release_resource(IPA_RM_RESOURCE_A2_PROD);
if (a2_mux_ctx->disconnect_ack)
@@ -634,12 +657,14 @@
last_processed_state = new_state & SMSM_A2_POWER_CONTROL;
if (new_state & SMSM_A2_POWER_CONTROL) {
IPADBG("%s: MODEM PWR CTRL 1\n", __func__);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_on_reqs_in);
grab_wakelock();
(void) connect_to_bam();
- queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
+ queue_work(a2_mux_ctx->a2_mux_rx_workqueue,
&a2_mux_ctx->kickoff_ul_request_resource);
} else if (!(new_state & SMSM_A2_POWER_CONTROL)) {
IPADBG("%s: MODEM PWR CTRL 0\n", __func__);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_off_reqs_in);
(void) disconnect_to_bam();
release_wakelock();
} else {
@@ -653,6 +678,7 @@
{
IPADBG("%s: 0x%08x -> 0x%08x\n", __func__, old_state,
new_state);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_modem_acks);
complete_all(&a2_mux_ctx->ul_wakeup_ack_completion);
}
@@ -929,7 +955,8 @@
}
spin_unlock_irqrestore(&a2_mux_ctx->bam_ch[id].lock, flags);
read_lock(&a2_mux_ctx->ul_wakeup_lock);
- is_connected = a2_mux_ctx->bam_is_connected;
+ is_connected = a2_mux_ctx->bam_is_connected &&
+ !a2_mux_ctx->bam_connect_in_progress;
read_unlock(&a2_mux_ctx->ul_wakeup_lock);
if (!is_connected)
return -ENODEV;
@@ -1233,7 +1260,8 @@
a2_mux_ctx->bam_ch[lcid].use_wm = 0;
spin_unlock_irqrestore(&a2_mux_ctx->bam_ch[lcid].lock, flags);
read_lock(&a2_mux_ctx->ul_wakeup_lock);
- is_connected = a2_mux_ctx->bam_is_connected;
+ is_connected = a2_mux_ctx->bam_is_connected &&
+ !a2_mux_ctx->bam_connect_in_progress;
read_unlock(&a2_mux_ctx->ul_wakeup_lock);
if (!is_connected)
return -ENODEV;
@@ -1302,7 +1330,8 @@
if (!a2_mux_ctx->a2_mux_initialized)
return -ENODEV;
read_lock(&a2_mux_ctx->ul_wakeup_lock);
- is_connected = a2_mux_ctx->bam_is_connected;
+ is_connected = a2_mux_ctx->bam_is_connected &&
+ !a2_mux_ctx->bam_connect_in_progress;
read_unlock(&a2_mux_ctx->ul_wakeup_lock);
if (!is_connected && !bam_ch_is_in_reset(lcid))
return -ENODEV;
@@ -1449,6 +1478,13 @@
__func__);
return -ENOMEM;
}
+ a2_mux_ctx->a2_mux_rx_workqueue =
+ create_singlethread_workqueue("a2_mux_rx");
+ if (!a2_mux_ctx->a2_mux_rx_workqueue) {
+ IPAERR("%s: a2_mux_rx_workqueue alloc failed\n",
+ __func__);
+ return -ENOMEM;
+ }
return 0;
}
@@ -1492,6 +1528,7 @@
a2_props.options = SPS_BAM_OPT_IRQ_WAKEUP;
a2_props.num_pipes = A2_NUM_PIPES;
a2_props.summing_threshold = A2_SUMMING_THRESHOLD;
+ a2_props.manage = SPS_BAM_MGR_DEVICE_REMOTE;
/* need to free on tear down */
rc = sps_register_bam_device(&a2_props, &h);
if (rc < 0) {
@@ -1573,5 +1610,7 @@
NULL);
if (a2_mux_ctx->a2_mux_tx_workqueue)
destroy_workqueue(a2_mux_ctx->a2_mux_tx_workqueue);
+ if (a2_mux_ctx->a2_mux_rx_workqueue)
+ destroy_workqueue(a2_mux_ctx->a2_mux_rx_workqueue);
return 0;
}
diff --git a/drivers/platform/msm/ipa/ipa.c b/drivers/platform/msm/ipa/ipa.c
index db7f7f0..35b2561 100644
--- a/drivers/platform/msm/ipa/ipa.c
+++ b/drivers/platform/msm/ipa/ipa.c
@@ -41,6 +41,7 @@
#define IPA_DMA_POOL_SIZE (512)
#define IPA_DMA_POOL_ALIGNMENT (4)
#define IPA_DMA_POOL_BOUNDARY (1024)
+#define IPA_NUM_DESC_PER_SW_TX (2)
#define IPA_ROUTING_RULE_BYTE_SIZE (4)
#define IPA_BAM_CNFG_BITS_VAL (0x7FFFE004)
@@ -49,6 +50,13 @@
#define IPA_AGGR_STR_IN_BYTES(str) \
(strnlen((str), IPA_AGGR_MAX_STR_LENGTH - 1) + 1)
+/*
+ * This equals a timer value of 162.56us. The value was
+ * determined empirically and yields good bi-directional
+ * WLAN throughput.
+ */
+#define IPA_HOLB_TMR_DEFAULT_VAL 0x7f
+
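For reference, 0x7f is 127 timer ticks, so the 162.56us figure in the comment
above implies a tick granularity of roughly 1.28us (162.56 / 127); the exact
granularity is an assumption derived from that arithmetic, not something this
patch states.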
static struct ipa_plat_drv_res ipa_res = {0, };
static struct of_device_id ipa_plat_drv_match[] = {
{
@@ -72,6 +80,18 @@
.ab = 0,
.ib = 0,
},
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_EBI_CH0,
+ .ab = 0,
+ .ib = 0,
+ },
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_OCIMEM,
+ .ab = 0,
+ .ib = 0,
+ },
};
static struct msm_bus_vectors ipa_max_perf_vectors[] = {
@@ -81,6 +101,18 @@
.ab = 50000000,
.ib = 960000000,
},
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_EBI_CH0,
+ .ab = 50000000,
+ .ib = 960000000,
+ },
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_OCIMEM,
+ .ab = 50000000,
+ .ib = 960000000,
+ },
};
static struct msm_bus_paths ipa_usecases[] = {
@@ -746,66 +778,6 @@
return ret;
}
-static int ipa_handle_tx_poll_for_pipe(struct ipa_sys_context *sys,
- bool process_all)
-{
- struct ipa_tx_pkt_wrapper *tx_pkt, *t;
- struct sps_iovec iov;
- unsigned long irq_flags;
- int ret;
- int cnt = 0;
-
- do {
- iov.addr = 0;
- ret = sps_get_iovec(sys->ep->ep_hdl, &iov);
- if (ret) {
- pr_err("%s: sps_get_iovec failed %d\n", __func__, ret);
- break;
- }
- if (!iov.addr)
- break;
- spin_lock_irqsave(&sys->spinlock, irq_flags);
- tx_pkt = list_first_entry(&sys->head_desc_list,
- struct ipa_tx_pkt_wrapper, link);
- spin_unlock_irqrestore(&sys->spinlock, irq_flags);
-
- switch (tx_pkt->cnt) {
- case 1:
- ipa_wq_write_done(&tx_pkt->work);
- ++cnt;
- break;
- case 0xFFFF:
- /* reached end of set */
- spin_lock_irqsave(&sys->spinlock, irq_flags);
- list_for_each_entry_safe(tx_pkt, t,
- &sys->wait_desc_list, link) {
- list_del(&tx_pkt->link);
- list_add(&tx_pkt->link, &sys->head_desc_list);
- }
- tx_pkt =
- list_first_entry(&sys->head_desc_list,
- struct ipa_tx_pkt_wrapper, link);
- spin_unlock_irqrestore(&sys->spinlock, irq_flags);
- ipa_wq_write_done(&tx_pkt->work);
- ++cnt;
- break;
- default:
- /* keep looping till reach the end of the set */
- spin_lock_irqsave(&sys->spinlock,
- irq_flags);
- list_del(&tx_pkt->link);
- list_add_tail(&tx_pkt->link,
- &sys->wait_desc_list);
- spin_unlock_irqrestore(&sys->spinlock,
- irq_flags);
- ++cnt;
- break;
- }
- } while (process_all);
-
- return cnt;
-}
-
static void ipa_poll_function(struct work_struct *work)
{
int ret;
@@ -825,13 +797,15 @@
/* check all the system pipes for tx comp and rx avail */
if (ipa_ctx->sys[IPA_A5_LAN_WAN_IN].ep->valid)
- cnt |= ipa_handle_rx_core(false, true);
+ cnt |= ipa_handle_rx_core(
+ &ipa_ctx->sys[IPA_A5_LAN_WAN_IN],
+ false, true);
for (i = 0; i < num_tx_pipes; i++)
if (ipa_ctx->sys[tx_pipes[i]].ep->valid)
- cnt |= ipa_handle_tx_poll_for_pipe(
+ cnt |= ipa_handle_tx_core(
&ipa_ctx->sys[tx_pipes[i]],
- false);
+ false, true);
} while (cnt);
/* re-post the poll work */
@@ -900,7 +874,7 @@
/* LAN-WAN OUT (A5->IPA) */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_A5_LAN_WAN_PROD;
- sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
+ sys_in.desc_fifo_sz = IPA_SYS_TX_DATA_DESC_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_A5_LAN_WAN_CONS;
if (ipa_setup_sys_pipe(&sys_in, &ipa_ctx->clnt_hdl_data_out)) {
@@ -1122,7 +1096,7 @@
u32 producer_hdl = 0;
u32 consumer_hdl = 0;
- rmnet_bridge_get_client_handles(&producer_hdl, &consumer_hdl);
+ teth_bridge_get_client_handles(&producer_hdl, &consumer_hdl);
/* configure aggregation on producer */
memset(&agg_params, 0, sizeof(struct ipa_ep_cfg_aggr));
@@ -1459,6 +1433,41 @@
WARN_ON(1);
}
+/**
+* ipa_inc_client_enable_clks() - Increase active clients counter, and
+* enable ipa clocks if necessary
+*
+* Return codes:
+* None
+*/
+void ipa_inc_client_enable_clks(void)
+{
+ mutex_lock(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients++;
+ if (ipa_ctx->ipa_active_clients == 1)
+ if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
+ ipa_enable_clks();
+ mutex_unlock(&ipa_ctx->ipa_active_clients_lock);
+}
+
+/**
+* ipa_dec_client_disable_clks() - Decrease active clients counter, and
+* disable ipa clocks if necessary
+*
+* Return codes:
+* None
+*/
+void ipa_dec_client_disable_clks(void)
+{
+ mutex_lock(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients--;
+ if (ipa_ctx->ipa_active_clients == 0)
+ if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
+ ipa_disable_clks();
+ mutex_unlock(&ipa_ctx->ipa_active_clients_lock);
+}
+
+
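The pair of helpers above centralizes the clock vote that used to be an
open-coded atomic counter: callers hold a vote for the full duration of their
register access. A minimal usage sketch follows; the wrapper itself is
hypothetical, only ipa_inc_client_enable_clks(), ipa_dec_client_disable_clks(),
ipa_write_reg() and the HOL_BLOCK_TIMER offset macro come from this driver.

/* Illustrative only: vote for IPA clocks around a single register write. */
static void ipa_example_set_hol_timer(u32 ep_idx, u32 timer_val)
{
	ipa_inc_client_enable_clks();	/* clocks turn on if this is the first vote */
	ipa_write_reg(ipa_ctx->mmio,
		      IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_OFST(ep_idx), timer_val);
	ipa_dec_client_disable_clks();	/* clocks turn off once the last vote drops */
}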
static int ipa_setup_bam_cfg(const struct ipa_plat_drv_res *res)
{
void *bam_cnfg_bits;
@@ -1594,13 +1603,11 @@
result = -ENOMEM;
goto fail_mem;
}
+ ipa_ctx->hol_en = 0x1;
+ ipa_ctx->hol_timer = IPA_HOLB_TMR_DEFAULT_VAL;
IPADBG("polling_mode=%u delay_ms=%u\n", polling_mode, polling_delay_ms);
ipa_ctx->polling_mode = polling_mode;
- if (ipa_ctx->polling_mode)
- atomic_set(&ipa_ctx->curr_polling_state, 1);
- else
- atomic_set(&ipa_ctx->curr_polling_state, 0);
IPADBG("hdr_lcl=%u ip4_rt=%u ip6_rt=%u ip4_flt=%u ip6_flt=%u\n",
hdr_tbl_lcl, ip4_rt_tbl_lcl, ip6_rt_tbl_lcl, ip4_flt_tbl_lcl,
ip6_flt_tbl_lcl);
@@ -1676,6 +1683,7 @@
bam_props.num_pipes = IPA_NUM_PIPES;
bam_props.summing_threshold = IPA_SUMMING_THRESHOLD;
bam_props.event_threshold = IPA_EVENT_THRESHOLD;
+ bam_props.options |= SPS_BAM_NO_LOCAL_CLK_GATING;
result = sps_register_bam_device(&bam_props, &ipa_ctx->bam_handle);
if (result) {
@@ -1761,15 +1769,20 @@
* This is an issue with IPA HW v1.0 only.
*/
if (ipa_ctx->ipa_hw_type == IPA_HW_v1_0) {
- ipa_ctx->one_kb_no_straddle_pool = dma_pool_create("ipa_1k",
+ ipa_ctx->dma_pool = dma_pool_create("ipa_1k",
NULL,
IPA_DMA_POOL_SIZE, IPA_DMA_POOL_ALIGNMENT,
IPA_DMA_POOL_BOUNDARY);
- if (!ipa_ctx->one_kb_no_straddle_pool) {
- IPAERR("cannot setup 1kb alloc DMA pool.\n");
- result = -ENOMEM;
- goto fail_dma_pool;
- }
+ } else {
+ ipa_ctx->dma_pool = dma_pool_create("ipa_tx", NULL,
+ IPA_NUM_DESC_PER_SW_TX * sizeof(struct sps_iovec),
+ 0, 0);
+ }
+
+ if (!ipa_ctx->dma_pool) {
+ IPAERR("cannot alloc DMA pool.\n");
+ result = -ENOMEM;
+ goto fail_dma_pool;
}
ipa_ctx->glob_flt_tbl[IPA_IP_v4].in_sys = !ipa_ctx->ip4_flt_tbl_lcl;
@@ -1816,7 +1829,10 @@
ipa_ctx->sys[i].ep = &ipa_ctx->ep[i];
else
ipa_ctx->sys[i].ep = &ipa_ctx->ep[WLAN_AMPDU_TX_EP];
- INIT_LIST_HEAD(&ipa_ctx->sys[i].wait_desc_list);
+ if (ipa_ctx->polling_mode)
+ atomic_set(&ipa_ctx->sys[i].curr_polling_state, 1);
+ else
+ atomic_set(&ipa_ctx->sys[i].curr_polling_state, 0);
}
ipa_ctx->rx_wq = create_singlethread_workqueue("ipa rx wq");
@@ -1826,7 +1842,8 @@
goto fail_rx_wq;
}
- ipa_ctx->tx_wq = create_singlethread_workqueue("ipa tx wq");
+ ipa_ctx->tx_wq = alloc_workqueue("ipa tx wq", WQ_MEM_RECLAIM |
+ WQ_CPU_INTENSIVE, 2);
if (!ipa_ctx->tx_wq) {
IPAERR(":fail to create tx wq\n");
result = -ENOMEM;
@@ -1839,13 +1856,14 @@
ipa_ctx->flt_rule_hdl_tree = RB_ROOT;
ipa_ctx->tag_tree = RB_ROOT;
- atomic_set(&ipa_ctx->ipa_active_clients, 0);
+ mutex_init(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients = 0;
result = ipa_bridge_init();
if (result) {
IPAERR("ipa bridge init err.\n");
result = -ENODEV;
- goto fail_bridge_init;
+ goto fail_a5_pipes;
}
/* setup the A5-IPA pipes */
@@ -1966,8 +1984,6 @@
ipa_cleanup_rx();
ipa_teardown_a5_pipes();
fail_a5_pipes:
- ipa_bridge_cleanup();
-fail_bridge_init:
destroy_workqueue(ipa_ctx->tx_wq);
fail_tx_wq:
destroy_workqueue(ipa_ctx->rx_wq);
@@ -1976,7 +1992,7 @@
* DMA pool need to be released only for IPA HW v1.0 only.
*/
if (ipa_ctx->ipa_hw_type == IPA_HW_v1_0)
- dma_pool_destroy(ipa_ctx->one_kb_no_straddle_pool);
+ dma_pool_destroy(ipa_ctx->dma_pool);
fail_dma_pool:
kmem_cache_destroy(ipa_ctx->tree_node_cache);
fail_tree_node_cache:
diff --git a/drivers/platform/msm/ipa/ipa_bridge.c b/drivers/platform/msm/ipa/ipa_bridge.c
index dd00081..3ff604c 100644
--- a/drivers/platform/msm/ipa/ipa_bridge.c
+++ b/drivers/platform/msm/ipa/ipa_bridge.c
@@ -12,10 +12,52 @@
#include <linux/delay.h>
#include <linux/ratelimit.h>
+#include <mach/msm_smsm.h>
#include "ipa_i.h"
-#define A2_EMBEDDED_PIPE_TX 4
-#define A2_EMBEDDED_PIPE_RX 5
+/*
+ * EP0 (teth)
+ * A2_BAM(1)->(12)DMA_BAM->DMA_BAM(13)->(6)IPA_BAM->IPA_BAM(10)->USB_BAM(0)
+ * A2_BAM(0)<-(15)DMA_BAM<-DMA_BAM(14)<-(7)IPA_BAM<-IPA_BAM(11)<-USB_BAM(1)
+ *
+ * EP2 (emb)
+ * A2_BAM(5)->(16)DMA_BAM->DMA_BAM(17)->(8)IPA_BAM->
+ * A2_BAM(4)<-(19)DMA_BAM<-DMA_BAM(18)<-(9)IPA_BAM<-
+ */
+
+#define A2_TETHERED_PIPE_UL 0
+#define DMA_A2_TETHERED_PIPE_UL 15
+#define DMA_IPA_TETHERED_PIPE_UL 14
+#define A2_TETHERED_PIPE_DL 1
+#define DMA_A2_TETHERED_PIPE_DL 12
+#define DMA_IPA_TETHERED_PIPE_DL 13
+
+#define A2_EMBEDDED_PIPE_UL 4
+#define DMA_A2_EMBEDDED_PIPE_UL 19
+#define DMA_IPA_EMBEDDED_PIPE_UL 18
+#define A2_EMBEDDED_PIPE_DL 5
+#define DMA_A2_EMBEDDED_PIPE_DL 16
+#define DMA_IPA_EMBEDDED_PIPE_DL 17
+
+#define IPA_SMEM_PIPE_MEM_SZ 32768
+
+#define IPA_UL_DATA_FIFO_SZ 0xc00
+#define IPA_UL_DESC_FIFO_SZ 0x530
+#define IPA_DL_DATA_FIFO_SZ 0x2400
+#define IPA_DL_DESC_FIFO_SZ 0x8a0
+
+#define IPA_SMEM_UL_DATA_FIFO_OFST 0x3dd0
+#define IPA_SMEM_UL_DESC_FIFO_OFST 0x49d0
+#define IPA_SMEM_DL_DATA_FIFO_OFST 0x4f00
+#define IPA_SMEM_DL_DESC_FIFO_OFST 0x7300
+
+#define IPA_OCIMEM_UL_DATA_FIFO_OFST 0
+#define IPA_OCIMEM_UL_DESC_FIFO_OFST (IPA_OCIMEM_UL_DATA_FIFO_OFST + \
+ IPA_UL_DATA_FIFO_SZ)
+#define IPA_OCIMEM_DL_DATA_FIFO_OFST (IPA_OCIMEM_UL_DESC_FIFO_OFST + \
+ IPA_UL_DESC_FIFO_SZ)
+#define IPA_OCIMEM_DL_DESC_FIFO_OFST (IPA_OCIMEM_DL_DATA_FIFO_OFST + \
+ IPA_DL_DATA_FIFO_SZ)
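Expanding the chained definitions above, the OCIMEM FIFO layout used for the
tethered bridge works out to: UL data at offset 0x0 (0xc00 bytes), UL
descriptors at 0xc00 (0x530 bytes), DL data at 0x1130 (0x2400 bytes) and DL
descriptors at 0x3530 (0x8a0 bytes), i.e. 0x3dd0 bytes in total. The embedded
bridge uses the SMEM offsets instead, as ipa_setup_a2_dma_fifos() shows.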
enum ipa_pipe_type {
IPA_DL_FROM_A2,
@@ -25,678 +67,383 @@
IPA_PIPE_TYPE_MAX
};
-static int polling_min_sleep[IPA_BRIDGE_DIR_MAX] = { 950, 950 };
-static int polling_max_sleep[IPA_BRIDGE_DIR_MAX] = { 1050, 1050 };
-static int polling_inactivity[IPA_BRIDGE_DIR_MAX] = { 4, 4 };
-
-struct ipa_pkt_info {
- void *buffer;
- dma_addr_t dma_address;
- uint32_t len;
- struct list_head link;
-};
-
struct ipa_bridge_pipe_context {
- struct list_head head_desc_list;
struct sps_pipe *pipe;
- struct sps_connect connection;
- struct sps_mem_buffer desc_mem_buf;
- struct sps_register_event register_event;
- struct list_head free_desc_list;
+ bool ipa_facing;
bool valid;
};
struct ipa_bridge_context {
struct ipa_bridge_pipe_context pipe[IPA_PIPE_TYPE_MAX];
- struct workqueue_struct *ul_wq;
- struct workqueue_struct *dl_wq;
- struct work_struct ul_work;
- struct work_struct dl_work;
enum ipa_bridge_type type;
};
static struct ipa_bridge_context bridge[IPA_BRIDGE_TYPE_MAX];
-static void ipa_do_bridge_work(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx);
-
-static void ul_work_func(struct work_struct *work)
+static void ipa_get_dma_pipe_num(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type, int *a2, int *ipa)
{
- struct ipa_bridge_context *ctx = container_of(work,
- struct ipa_bridge_context, ul_work);
- ipa_do_bridge_work(IPA_BRIDGE_DIR_UL, ctx);
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ *a2 = DMA_A2_TETHERED_PIPE_UL;
+ *ipa = DMA_IPA_TETHERED_PIPE_UL;
+ } else {
+ *a2 = DMA_A2_TETHERED_PIPE_DL;
+ *ipa = DMA_IPA_TETHERED_PIPE_DL;
+ }
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ *a2 = DMA_A2_EMBEDDED_PIPE_UL;
+ *ipa = DMA_IPA_EMBEDDED_PIPE_UL;
+ } else {
+ *a2 = DMA_A2_EMBEDDED_PIPE_DL;
+ *ipa = DMA_IPA_EMBEDDED_PIPE_DL;
+ }
+ }
}
-static void dl_work_func(struct work_struct *work)
+static int ipa_get_desc_fifo_sz(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
{
- struct ipa_bridge_context *ctx = container_of(work,
- struct ipa_bridge_context, dl_work);
- ipa_do_bridge_work(IPA_BRIDGE_DIR_DL, ctx);
+ int sz;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DESC_FIFO_SZ;
+ else
+ sz = IPA_DL_DESC_FIFO_SZ;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DESC_FIFO_SZ;
+ else
+ sz = IPA_DL_DESC_FIFO_SZ;
+ }
+
+ return sz;
}
-static int ipa_switch_to_intr_mode(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx)
+static int ipa_get_data_fifo_sz(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
+{
+ int sz;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DATA_FIFO_SZ;
+ else
+ sz = IPA_DL_DATA_FIFO_SZ;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DATA_FIFO_SZ;
+ else
+ sz = IPA_DL_DATA_FIFO_SZ;
+ }
+
+ return sz;
+}
+
+static int ipa_get_a2_pipe_num(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
+{
+ int ep;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ ep = A2_TETHERED_PIPE_UL;
+ else
+ ep = A2_TETHERED_PIPE_DL;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ ep = A2_EMBEDDED_PIPE_UL;
+ else
+ ep = A2_EMBEDDED_PIPE_DL;
+ }
+
+ return ep;
+}
+
+int ipa_setup_a2_dma_fifos(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type,
+ struct sps_mem_buffer *desc,
+ struct sps_mem_buffer *data)
{
int ret;
- struct ipa_bridge_pipe_context *sys = &ctx->pipe[2 * dir];
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_get_config() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- sys->register_event.options = SPS_O_EOT;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret) {
- IPAERR("sps_register_event() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_EOT;
- ret = sps_set_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_set_config() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- ret = 0;
-fail:
- return ret;
-}
+ if (type == IPA_BRIDGE_TYPE_EMBEDDED) {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ desc->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_UL_DESC_FIFO_OFST;
+ desc->phys_base = smem_virt_to_phys(desc->base);
+ desc->size = ipa_get_desc_fifo_sz(dir, type);
+ data->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_UL_DATA_FIFO_OFST;
+ data->phys_base = smem_virt_to_phys(data->base);
+ data->size = ipa_get_data_fifo_sz(dir, type);
+ } else {
+ desc->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_DL_DESC_FIFO_OFST;
+ desc->phys_base = smem_virt_to_phys(desc->base);
+ desc->size = ipa_get_desc_fifo_sz(dir, type);
+ data->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_DL_DATA_FIFO_OFST;
+ data->phys_base = smem_virt_to_phys(data->base);
+ data->size = ipa_get_data_fifo_sz(dir, type);
+ }
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ ret = sps_setup_bam2bam_fifo(data,
+ IPA_OCIMEM_UL_DATA_FIFO_OFST,
+ ipa_get_data_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DAFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
-static int ipa_switch_to_poll_mode(enum ipa_bridge_dir dir,
- enum ipa_bridge_type type)
-{
- int ret;
- struct ipa_bridge_pipe_context *sys = &bridge[type].pipe[2 * dir];
+ ret = sps_setup_bam2bam_fifo(desc,
+ IPA_OCIMEM_UL_DESC_FIFO_OFST,
+ ipa_get_desc_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DEFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
+ } else {
+ ret = sps_setup_bam2bam_fifo(data,
+ IPA_OCIMEM_DL_DATA_FIFO_OFST,
+ ipa_get_data_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DAFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_get_config() failed %d type=%d dir=%d\n",
- ret, type, dir);
- goto fail;
- }
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- ret = sps_set_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_set_config() failed %d type=%d dir=%d\n",
- ret, type, dir);
- goto fail;
- }
- ret = 0;
-fail:
- return ret;
-}
-
-static int queue_rx_single(enum ipa_bridge_dir dir, enum ipa_bridge_type type)
-{
- struct ipa_bridge_pipe_context *sys_rx = &bridge[type].pipe[2 * dir];
- struct ipa_pkt_info *info;
- int ret;
-
- info = kmalloc(sizeof(struct ipa_pkt_info), GFP_KERNEL);
- if (!info) {
- IPAERR("unable to alloc rx_pkt_info type=%d dir=%d\n",
- type, dir);
- goto fail_pkt;
+ ret = sps_setup_bam2bam_fifo(desc,
+ IPA_OCIMEM_DL_DESC_FIFO_OFST,
+ ipa_get_desc_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DEFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
+ }
}
- info->buffer = kmalloc(IPA_RX_SKB_SIZE, GFP_KERNEL | GFP_DMA);
- if (!info->buffer) {
- IPAERR("unable to alloc rx_pkt_buffer type=%d dir=%d\n",
- type, dir);
- goto fail_buffer;
- }
+ IPADBG("dir=%d type=%d Dpa=%x Dsz=%u Dva=%p dpa=%x dsz=%u dva=%p\n",
+ dir, type, data->phys_base, data->size, data->base,
+ desc->phys_base, desc->size, desc->base);
- info->dma_address = dma_map_single(NULL, info->buffer, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- if (info->dma_address == 0 || info->dma_address == ~0) {
- IPAERR("dma_map_single failure %p for %p type=%d dir=%d\n",
- (void *)info->dma_address, info->buffer,
- type, dir);
- goto fail_dma;
- }
-
- list_add_tail(&info->link, &sys_rx->head_desc_list);
- ret = sps_transfer_one(sys_rx->pipe, info->dma_address,
- IPA_RX_SKB_SIZE, info,
- SPS_IOVEC_FLAG_INT);
- if (ret) {
- list_del(&info->link);
- dma_unmap_single(NULL, info->dma_address, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- IPAERR("sps_transfer_one failed %d type=%d dir=%d\n", ret,
- type, dir);
- goto fail_dma;
- }
return 0;
-
-fail_dma:
- kfree(info->buffer);
-fail_buffer:
- kfree(info);
-fail_pkt:
- IPAERR("failed type=%d dir=%d\n", type, dir);
- return -ENOMEM;
}
-static int ipa_reclaim_tx(struct ipa_bridge_pipe_context *sys_tx, bool all)
-{
- struct sps_iovec iov;
- struct ipa_pkt_info *tx_pkt;
- int cnt = 0;
- int ret;
-
- do {
- iov.addr = 0;
- ret = sps_get_iovec(sys_tx->pipe, &iov);
- if (ret || iov.addr == 0) {
- break;
- } else {
- tx_pkt = list_first_entry(&sys_tx->head_desc_list,
- struct ipa_pkt_info,
- link);
- list_move_tail(&tx_pkt->link,
- &sys_tx->free_desc_list);
- cnt++;
- }
- } while (all);
-
- return cnt;
-}
-
-static void ipa_do_bridge_work(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx)
-{
- struct ipa_bridge_pipe_context *sys_rx = &ctx->pipe[2 * dir];
- struct ipa_bridge_pipe_context *sys_tx = &ctx->pipe[2 * dir + 1];
- struct ipa_pkt_info *tx_pkt;
- struct ipa_pkt_info *rx_pkt;
- struct ipa_pkt_info *tmp_pkt;
- struct sps_iovec iov;
- int ret;
- int inactive_cycles = 0;
-
- while (1) {
- ++inactive_cycles;
-
- if (ipa_reclaim_tx(sys_tx, false))
- inactive_cycles = 0;
-
- iov.addr = 0;
- ret = sps_get_iovec(sys_rx->pipe, &iov);
- if (ret || iov.addr == 0) {
- /* no-op */
- } else {
- inactive_cycles = 0;
-
- rx_pkt = list_first_entry(&sys_rx->head_desc_list,
- struct ipa_pkt_info,
- link);
- list_del(&rx_pkt->link);
- rx_pkt->len = iov.size;
-
-retry_alloc_tx:
- if (list_empty(&sys_tx->free_desc_list)) {
- tmp_pkt = kmalloc(sizeof(struct ipa_pkt_info),
- GFP_KERNEL);
- if (!tmp_pkt) {
- pr_debug_ratelimited("%s: unable to alloc tx_pkt_info type=%d dir=%d\n",
- __func__, ctx->type, dir);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_alloc_tx;
- }
-
- tmp_pkt->buffer = kmalloc(IPA_RX_SKB_SIZE,
- GFP_KERNEL | GFP_DMA);
- if (!tmp_pkt->buffer) {
- pr_debug_ratelimited("%s: unable to alloc tx_pkt_buffer type=%d dir=%d\n",
- __func__, ctx->type, dir);
- kfree(tmp_pkt);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_alloc_tx;
- }
-
- tmp_pkt->dma_address = dma_map_single(NULL,
- tmp_pkt->buffer,
- IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- if (tmp_pkt->dma_address == 0 ||
- tmp_pkt->dma_address == ~0) {
- pr_debug_ratelimited("%s: dma_map_single failure %p for %p type=%d dir=%d\n",
- __func__,
- (void *)tmp_pkt->dma_address,
- tmp_pkt->buffer, ctx->type, dir);
- }
-
- list_add_tail(&tmp_pkt->link,
- &sys_tx->free_desc_list);
- }
-
- tx_pkt = list_first_entry(&sys_tx->free_desc_list,
- struct ipa_pkt_info,
- link);
- list_del(&tx_pkt->link);
-
-retry_add_rx:
- list_add_tail(&tx_pkt->link,
- &sys_rx->head_desc_list);
- ret = sps_transfer_one(sys_rx->pipe,
- tx_pkt->dma_address,
- IPA_RX_SKB_SIZE,
- tx_pkt,
- SPS_IOVEC_FLAG_INT);
- if (ret) {
- list_del(&tx_pkt->link);
- pr_debug_ratelimited("%s: sps_transfer_one failed %d type=%d dir=%d\n",
- __func__, ret, ctx->type, dir);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_add_rx;
- }
-
-retry_add_tx:
- list_add_tail(&rx_pkt->link,
- &sys_tx->head_desc_list);
- ret = sps_transfer_one(sys_tx->pipe,
- rx_pkt->dma_address,
- iov.size,
- rx_pkt,
- SPS_IOVEC_FLAG_INT |
- SPS_IOVEC_FLAG_EOT);
- if (ret) {
- pr_debug_ratelimited("%s: fail to add to TX type=%d dir=%d\n",
- __func__, ctx->type, dir);
- list_del(&rx_pkt->link);
- ipa_reclaim_tx(sys_tx, true);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_add_tx;
- }
- IPA_STATS_INC_BRIDGE_CNT(ctx->type, dir,
- ipa_ctx->stats.bridged_pkts);
- }
-
- if (inactive_cycles >= polling_inactivity[dir]) {
- ipa_switch_to_intr_mode(dir, ctx);
- break;
- }
- }
-}
-
-static void ipa_sps_irq_rx_notify(struct sps_event_notify *notify)
-{
- enum ipa_bridge_type type = (enum ipa_bridge_type) notify->user;
-
- switch (notify->event_id) {
- case SPS_EVENT_EOT:
- ipa_switch_to_poll_mode(IPA_BRIDGE_DIR_UL, type);
- queue_work(bridge[type].ul_wq, &bridge[type].ul_work);
- break;
- default:
- IPAERR("recieved unexpected event id %d type %d\n",
- notify->event_id, type);
- }
-}
-
-static int setup_bridge_to_ipa(enum ipa_bridge_dir dir,
+static int setup_dma_bam_bridge(enum ipa_bridge_dir dir,
enum ipa_bridge_type type,
struct ipa_sys_connect_params *props,
u32 *clnt_hdl)
{
- struct ipa_bridge_pipe_context *sys;
- dma_addr_t dma_addr;
- enum ipa_pipe_type pipe_type;
- int ipa_ep_idx;
- int ret;
- int i;
-
- ipa_ep_idx = ipa_get_ep_mapping(ipa_ctx->mode, props->client);
- if (ipa_ep_idx == -1) {
- IPAERR("Invalid client=%d mode=%d type=%d dir=%d\n",
- props->client, ipa_ctx->mode, type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
- }
-
- if (ipa_ctx->ep[ipa_ep_idx].valid) {
- IPAERR("EP %d already allocated type=%d dir=%d\n", ipa_ep_idx,
- type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
- }
-
- pipe_type = (dir == IPA_BRIDGE_DIR_DL) ? IPA_DL_TO_IPA :
- IPA_UL_FROM_IPA;
-
- sys = &bridge[type].pipe[pipe_type];
- sys->pipe = sps_alloc_endpoint();
- if (sys->pipe == NULL) {
- IPAERR("alloc endpoint failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto alloc_endpoint_failed;
- }
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("get config failed %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto get_config_failed;
- }
-
- if (dir == IPA_BRIDGE_DIR_DL) {
- sys->connection.source = SPS_DEV_HANDLE_MEM;
- sys->connection.src_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.destination = ipa_ctx->bam_handle;
- sys->connection.dest_pipe_index = ipa_ep_idx;
- sys->connection.mode = SPS_MODE_DEST;
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- } else {
- sys->connection.source = ipa_ctx->bam_handle;
- sys->connection.src_pipe_index = ipa_ep_idx;
- sys->connection.destination = SPS_DEV_HANDLE_MEM;
- sys->connection.dest_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.mode = SPS_MODE_SRC;
- sys->connection.options = SPS_O_AUTO_ENABLE | SPS_O_EOT |
- SPS_O_ACK_TRANSFERS | SPS_O_NO_DISABLE;
- }
-
- sys->desc_mem_buf.size = props->desc_fifo_sz;
- sys->desc_mem_buf.base = dma_alloc_coherent(NULL,
- sys->desc_mem_buf.size,
- &dma_addr,
- 0);
- if (sys->desc_mem_buf.base == NULL) {
- IPAERR("memory alloc failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto get_config_failed;
- }
- sys->desc_mem_buf.phys_base = dma_addr;
- memset(sys->desc_mem_buf.base, 0x0, sys->desc_mem_buf.size);
- sys->connection.desc = sys->desc_mem_buf;
- sys->connection.event_thresh = IPA_EVENT_THRESHOLD;
-
- ret = sps_connect(sys->pipe, &sys->connection);
- if (ret < 0) {
- IPAERR("connect error %d type=%d dir=%d\n", ret, type, dir);
- goto connect_failed;
- }
-
- INIT_LIST_HEAD(&sys->head_desc_list);
- INIT_LIST_HEAD(&sys->free_desc_list);
-
- memset(&ipa_ctx->ep[ipa_ep_idx], 0,
- sizeof(struct ipa_ep_context));
-
- ipa_ctx->ep[ipa_ep_idx].valid = 1;
- ipa_ctx->ep[ipa_ep_idx].client_notify = props->notify;
- ipa_ctx->ep[ipa_ep_idx].priv = props->priv;
-
- ret = ipa_cfg_ep(ipa_ep_idx, &props->ipa_ep_cfg);
- if (ret < 0) {
- IPAERR("ep cfg set error %d type=%d dir=%d\n", ret, type, dir);
- ipa_ctx->ep[ipa_ep_idx].valid = 0;
- goto event_reg_failed;
- }
-
- if (dir == IPA_BRIDGE_DIR_UL) {
- sys->register_event.options = SPS_O_EOT;
- sys->register_event.mode = SPS_TRIGGER_CALLBACK;
- sys->register_event.xfer_done = NULL;
- sys->register_event.callback = ipa_sps_irq_rx_notify;
- sys->register_event.user = (void *)type;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret < 0) {
- IPAERR("register event error %d type=%d dir=%d\n", ret,
- type, dir);
- goto event_reg_failed;
- }
-
- for (i = 0; i < IPA_RX_POOL_CEIL; i++) {
- ret = queue_rx_single(dir, type);
- if (ret < 0)
- IPAERR("queue fail dir=%d type=%d iter=%d\n",
- dir, type, i);
- }
- }
-
- *clnt_hdl = ipa_ep_idx;
- sys->valid = true;
-
- return 0;
-
-event_reg_failed:
- sps_disconnect(sys->pipe);
-connect_failed:
- dma_free_coherent(NULL,
- sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
-get_config_failed:
- sps_free_endpoint(sys->pipe);
-alloc_endpoint_failed:
- return ret;
-}
-
-static void bam_mux_rx_notify(struct sps_event_notify *notify)
-{
- enum ipa_bridge_type type = (enum ipa_bridge_type) notify->user;
-
- switch (notify->event_id) {
- case SPS_EVENT_EOT:
- ipa_switch_to_poll_mode(IPA_BRIDGE_DIR_DL, type);
- queue_work(bridge[type].dl_wq, &bridge[type].dl_work);
- break;
- default:
- IPAERR("recieved unexpected event id %d type %d\n",
- notify->event_id, type);
- }
-}
-
-static int setup_bridge_to_a2(enum ipa_bridge_dir dir,
- enum ipa_bridge_type type,
- u32 desc_fifo_sz)
-{
- struct ipa_bridge_pipe_context *sys;
- struct a2_mux_pipe_connection pipe_conn = { 0 };
- dma_addr_t dma_addr;
- u32 a2_handle;
+ struct ipa_connect_params ipa_in_params;
+ struct ipa_sps_params sps_out_params;
+ int dma_a2_pipe;
+ int dma_ipa_pipe;
+ struct sps_pipe *pipe;
+ struct sps_pipe *pipe_a2;
+ struct sps_connect _connection;
+ struct sps_connect *connection = &_connection;
+ struct a2_mux_pipe_connection pipe_conn = {0};
enum a2_mux_pipe_direction pipe_dir;
- enum ipa_pipe_type pipe_type;
+ u32 dma_hdl = sps_dma_get_bam_handle();
+ u32 a2_hdl;
u32 pa;
int ret;
- int i;
+
+ memset(&ipa_in_params, 0, sizeof(ipa_in_params));
+ memset(&sps_out_params, 0, sizeof(sps_out_params));
pipe_dir = (dir == IPA_BRIDGE_DIR_UL) ? IPA_TO_A2 : A2_TO_IPA;
ret = ipa_get_a2_mux_pipe_info(pipe_dir, &pipe_conn);
if (ret) {
- IPAERR("ipa_get_a2_mux_pipe_info failed type=%d dir=%d\n",
- type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
+ IPAERR("ipa_get_a2_mux_pipe_info failed dir=%d type=%d\n",
+ dir, type);
+ goto fail_get_a2_prop;
}
pa = (dir == IPA_BRIDGE_DIR_UL) ? pipe_conn.dst_phy_addr :
pipe_conn.src_phy_addr;
- ret = sps_phy2h(pa, &a2_handle);
+ ret = sps_phy2h(pa, &a2_hdl);
if (ret) {
- IPAERR("sps_phy2h failed (A2 BAM) %d type=%d dir=%d\n",
- ret, type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
+ IPAERR("sps_phy2h failed (A2 BAM) %d dir=%d type=%d\n",
+ ret, dir, type);
+ goto fail_get_a2_prop;
}
- pipe_type = (dir == IPA_BRIDGE_DIR_UL) ? IPA_UL_TO_A2 : IPA_DL_FROM_A2;
+ ipa_get_dma_pipe_num(dir, type, &dma_a2_pipe, &dma_ipa_pipe);
- sys = &bridge[type].pipe[pipe_type];
- sys->pipe = sps_alloc_endpoint();
- if (sys->pipe == NULL) {
- IPAERR("alloc endpoint failed type=%d dir=%d\n", type, dir);
+ ipa_in_params.ipa_ep_cfg = props->ipa_ep_cfg;
+ ipa_in_params.client = props->client;
+ ipa_in_params.client_bam_hdl = dma_hdl;
+ ipa_in_params.client_ep_idx = dma_ipa_pipe;
+ ipa_in_params.priv = props->priv;
+ ipa_in_params.notify = props->notify;
+ ipa_in_params.desc_fifo_sz = ipa_get_desc_fifo_sz(dir, type);
+ ipa_in_params.data_fifo_sz = ipa_get_data_fifo_sz(dir, type);
+
+ if (ipa_connect(&ipa_in_params, &sps_out_params, clnt_hdl)) {
+ IPAERR("ipa connect failed dir=%d type=%d\n", dir, type);
+ goto fail_get_a2_prop;
+ }
+
+ pipe = sps_alloc_endpoint();
+ if (pipe == NULL) {
+ IPAERR("sps_alloc_endpoint failed dir=%d type=%d\n", dir, type);
ret = -ENOMEM;
- goto alloc_endpoint_failed;
+ goto fail_sps_alloc;
}
- ret = sps_get_config(sys->pipe, &sys->connection);
+
+ memset(&_connection, 0, sizeof(_connection));
+ ret = sps_get_config(pipe, connection);
if (ret) {
- IPAERR("get config failed %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto get_config_failed;
+ IPAERR("sps_get_config failed %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config;
}
- if (dir == IPA_BRIDGE_DIR_UL) {
- sys->connection.source = SPS_DEV_HANDLE_MEM;
- sys->connection.src_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.destination = a2_handle;
- if (type == IPA_BRIDGE_TYPE_TETHERED)
- sys->connection.dest_pipe_index =
- pipe_conn.dst_pipe_index;
- else
- sys->connection.dest_pipe_index = A2_EMBEDDED_PIPE_TX;
- sys->connection.mode = SPS_MODE_DEST;
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- } else {
- sys->connection.source = a2_handle;
- if (type == IPA_BRIDGE_TYPE_TETHERED)
- sys->connection.src_pipe_index =
- pipe_conn.src_pipe_index;
- else
- sys->connection.src_pipe_index = A2_EMBEDDED_PIPE_RX;
- sys->connection.destination = SPS_DEV_HANDLE_MEM;
- sys->connection.dest_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.mode = SPS_MODE_SRC;
- sys->connection.options = SPS_O_AUTO_ENABLE | SPS_O_EOT |
- SPS_O_ACK_TRANSFERS;
- }
-
- sys->desc_mem_buf.size = desc_fifo_sz;
- sys->desc_mem_buf.base = dma_alloc_coherent(NULL,
- sys->desc_mem_buf.size,
- &dma_addr,
- 0);
- if (sys->desc_mem_buf.base == NULL) {
- IPAERR("memory alloc failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto get_config_failed;
- }
- sys->desc_mem_buf.phys_base = dma_addr;
- memset(sys->desc_mem_buf.base, 0x0, sys->desc_mem_buf.size);
- sys->connection.desc = sys->desc_mem_buf;
- sys->connection.event_thresh = IPA_EVENT_THRESHOLD;
-
- ret = sps_connect(sys->pipe, &sys->connection);
- if (ret < 0) {
- IPAERR("connect error %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto connect_failed;
- }
-
- INIT_LIST_HEAD(&sys->head_desc_list);
- INIT_LIST_HEAD(&sys->free_desc_list);
-
if (dir == IPA_BRIDGE_DIR_DL) {
- sys->register_event.options = SPS_O_EOT;
- sys->register_event.mode = SPS_TRIGGER_CALLBACK;
- sys->register_event.xfer_done = NULL;
- sys->register_event.callback = bam_mux_rx_notify;
- sys->register_event.user = (void *)type;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret < 0) {
- IPAERR("register event error %d type=%d dir=%d\n",
- ret, type, dir);
- ret = -EINVAL;
- goto event_reg_failed;
- }
-
- for (i = 0; i < IPA_RX_POOL_CEIL; i++) {
- ret = queue_rx_single(dir, type);
- if (ret < 0)
- IPAERR("queue fail dir=%d type=%d iter=%d\n",
- dir, type, i);
- }
+ connection->mode = SPS_MODE_SRC;
+ connection->source = dma_hdl;
+ connection->destination = sps_out_params.ipa_bam_hdl;
+ connection->src_pipe_index = dma_ipa_pipe;
+ connection->dest_pipe_index = sps_out_params.ipa_ep_idx;
+ } else {
+ connection->mode = SPS_MODE_DEST;
+ connection->source = sps_out_params.ipa_bam_hdl;
+ connection->destination = dma_hdl;
+ connection->src_pipe_index = sps_out_params.ipa_ep_idx;
+ connection->dest_pipe_index = dma_ipa_pipe;
}
- sys->valid = true;
+ connection->event_thresh = IPA_EVENT_THRESHOLD;
+ connection->data = sps_out_params.data;
+ connection->desc = sps_out_params.desc;
+ connection->options = SPS_O_AUTO_ENABLE;
+
+ ret = sps_connect(pipe, connection);
+ if (ret) {
+ IPAERR("sps_connect failed %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ bridge[type].pipe[IPA_DL_TO_IPA].pipe = pipe;
+ bridge[type].pipe[IPA_DL_TO_IPA].ipa_facing = true;
+ bridge[type].pipe[IPA_DL_TO_IPA].valid = true;
+ } else {
+ bridge[type].pipe[IPA_UL_FROM_IPA].pipe = pipe;
+ bridge[type].pipe[IPA_UL_FROM_IPA].ipa_facing = true;
+ bridge[type].pipe[IPA_UL_FROM_IPA].valid = true;
+ }
+
+ IPADBG("dir=%d type=%d (ipa) src(0x%x:%u)->dst(0x%x:%u)\n", dir, type,
+ connection->source, connection->src_pipe_index,
+ connection->destination, connection->dest_pipe_index);
+
+ pipe_a2 = sps_alloc_endpoint();
+ if (pipe_a2 == NULL) {
+ IPAERR("sps_alloc_endpoint failed2 dir=%d type=%d\n", dir,
+ type);
+ ret = -ENOMEM;
+ goto fail_sps_alloc_a2;
+ }
+
+ memset(&_connection, 0, sizeof(_connection));
+ ret = sps_get_config(pipe_a2, connection);
+ if (ret) {
+ IPAERR("sps_get_config failed2 %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config_a2;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ connection->mode = SPS_MODE_DEST;
+ connection->source = a2_hdl;
+ connection->destination = dma_hdl;
+ connection->src_pipe_index = ipa_get_a2_pipe_num(dir, type);
+ connection->dest_pipe_index = dma_a2_pipe;
+ } else {
+ connection->mode = SPS_MODE_SRC;
+ connection->source = dma_hdl;
+ connection->destination = a2_hdl;
+ connection->src_pipe_index = dma_a2_pipe;
+ connection->dest_pipe_index = ipa_get_a2_pipe_num(dir, type);
+ }
+
+ connection->event_thresh = IPA_EVENT_THRESHOLD;
+
+ if (ipa_setup_a2_dma_fifos(dir, type, &connection->desc,
+ &connection->data)) {
+ IPAERR("fail to setup A2-DMA FIFOs dir=%d type=%d\n",
+ dir, type);
+ goto fail_sps_get_config_a2;
+ }
+
+ connection->options = SPS_O_AUTO_ENABLE;
+
+ ret = sps_connect(pipe_a2, connection);
+ if (ret) {
+ IPAERR("sps_connect failed2 %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config_a2;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ bridge[type].pipe[IPA_DL_FROM_A2].pipe = pipe_a2;
+ bridge[type].pipe[IPA_DL_FROM_A2].valid = true;
+ } else {
+ bridge[type].pipe[IPA_UL_TO_A2].pipe = pipe_a2;
+ bridge[type].pipe[IPA_UL_TO_A2].valid = true;
+ }
+
+ IPADBG("dir=%d type=%d (a2) src(0x%x:%u)->dst(0x%x:%u)\n", dir, type,
+ connection->source, connection->src_pipe_index,
+ connection->destination, connection->dest_pipe_index);
return 0;
-event_reg_failed:
- sps_disconnect(sys->pipe);
-connect_failed:
- dma_free_coherent(NULL,
- sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
-get_config_failed:
- sps_free_endpoint(sys->pipe);
-alloc_endpoint_failed:
+fail_sps_get_config_a2:
+ sps_free_endpoint(pipe_a2);
+fail_sps_alloc_a2:
+ sps_disconnect(pipe);
+fail_sps_get_config:
+ sps_free_endpoint(pipe);
+fail_sps_alloc:
+ ipa_disconnect(*clnt_hdl);
+fail_get_a2_prop:
return ret;
}
/**
- * ipa_bridge_init() - create workqueues and work items serving SW bridges
+ * ipa_bridge_init() - reserve the SMEM pipe memory used by the A2-DMA FIFOs
*
* Return codes: 0: success, -ENOMEM: failure
*/
int ipa_bridge_init(void)
{
- int ret;
int i;
- bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq =
- create_singlethread_workqueue("ipa_ul_teth");
- if (!bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq) {
- IPAERR("ipa ul teth wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_ul_teth;
+ ipa_ctx->smem_pipe_mem = smem_alloc(SMEM_BAM_PIPE_MEMORY,
+ IPA_SMEM_PIPE_MEM_SZ);
+ if (!ipa_ctx->smem_pipe_mem) {
+ IPAERR("smem alloc failed\n");
+ return -ENOMEM;
}
+ IPADBG("smem_pipe_mem = %p\n", ipa_ctx->smem_pipe_mem);
- bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq =
- create_singlethread_workqueue("ipa_dl_teth");
- if (!bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq) {
- IPAERR("ipa dl teth wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_dl_teth;
- }
-
- bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq =
- create_singlethread_workqueue("ipa_ul_emb");
- if (!bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq) {
- IPAERR("ipa ul emb wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_ul_emb;
- }
-
- bridge[IPA_BRIDGE_TYPE_EMBEDDED].dl_wq =
- create_singlethread_workqueue("ipa_dl_emb");
- if (!bridge[IPA_BRIDGE_TYPE_EMBEDDED].dl_wq) {
- IPAERR("ipa dl emb wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_dl_emb;
- }
-
- for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++) {
- INIT_WORK(&bridge[i].ul_work, ul_work_func);
- INIT_WORK(&bridge[i].dl_work, dl_work_func);
+ for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++)
bridge[i].type = i;
- }
return 0;
-
-fail_dl_emb:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq);
-fail_ul_emb:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq);
-fail_dl_teth:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq);
-fail_ul_teth:
- return ret;
}
/**
@@ -720,66 +467,29 @@
if (props == NULL || clnt_hdl == NULL ||
type >= IPA_BRIDGE_TYPE_MAX || dir >= IPA_BRIDGE_DIR_MAX ||
- props->client >= IPA_CLIENT_MAX || props->desc_fifo_sz == 0) {
+ props->client >= IPA_CLIENT_MAX) {
IPAERR("Bad param props=%p clnt_hdl=%p type=%d dir=%d\n",
props, clnt_hdl, type, dir);
return -EINVAL;
}
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
- }
+ ipa_inc_client_enable_clks();
- if (setup_bridge_to_ipa(dir, type, props, clnt_hdl)) {
+ if (setup_dma_bam_bridge(dir, type, props, clnt_hdl)) {
IPAERR("fail to setup SYS pipe to IPA dir=%d type=%d\n",
dir, type);
ret = -EINVAL;
goto bail_ipa;
}
- if (setup_bridge_to_a2(dir, type, props->desc_fifo_sz)) {
- IPAERR("fail to setup SYS pipe to A2 dir=%d type=%d\n",
- dir, type);
- ret = -EINVAL;
- goto bail_a2;
- }
-
-
return 0;
-bail_a2:
- ipa_bridge_teardown(dir, type, *clnt_hdl);
bail_ipa:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
return ret;
}
EXPORT_SYMBOL(ipa_bridge_setup);
-static void ipa_bridge_free_pkt(struct ipa_pkt_info *pkt)
-{
- list_del(&pkt->link);
- dma_unmap_single(NULL, pkt->dma_address, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- kfree(pkt->buffer);
- kfree(pkt);
-}
-
-static void ipa_bridge_free_resources(struct ipa_bridge_pipe_context *pipe)
-{
- struct ipa_pkt_info *pkt;
- struct ipa_pkt_info *n;
-
- list_for_each_entry_safe(pkt, n, &pipe->head_desc_list, link)
- ipa_bridge_free_pkt(pkt);
-
- list_for_each_entry_safe(pkt, n, &pipe->free_desc_list, link)
- ipa_bridge_free_pkt(pkt);
-}
-
/**
* ipa_bridge_teardown() - teardown SW bridge leg
* @dir: downlink or uplink (from air interface perspective)
@@ -814,39 +524,18 @@
for (; lo <= hi; lo++) {
sys = &bridge[type].pipe[lo];
if (sys->valid) {
+ if (sys->ipa_facing)
+ ipa_disconnect(clnt_hdl);
sps_disconnect(sys->pipe);
- dma_free_coherent(NULL, sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
sps_free_endpoint(sys->pipe);
- ipa_bridge_free_resources(sys);
sys->valid = false;
}
}
memset(&ipa_ctx->ep[clnt_hdl], 0, sizeof(struct ipa_ep_context));
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
return 0;
}
EXPORT_SYMBOL(ipa_bridge_teardown);
-
-/**
- * ipa_bridge_cleanup() - destroy workqueues serving the SW bridges
- *
- * Return codes:
- * None
- */
-void ipa_bridge_cleanup(void)
-{
- int i;
-
- for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++) {
- destroy_workqueue(bridge[i].dl_wq);
- destroy_workqueue(bridge[i].ul_wq);
- }
-}
diff --git a/drivers/platform/msm/ipa/ipa_client.c b/drivers/platform/msm/ipa/ipa_client.c
index f2f80bf..a95eafe 100644
--- a/drivers/platform/msm/ipa/ipa_client.c
+++ b/drivers/platform/msm/ipa/ipa_client.c
@@ -13,22 +13,16 @@
#include <linux/delay.h>
#include "ipa_i.h"
-#define IPA_HOLB_TMR_VAL 0xff
-
static void ipa_enable_data_path(u32 clnt_hdl)
{
- struct ipa_ep_context *ep = &ipa_ctx->ep[clnt_hdl];
-
if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_VIRTUAL) {
/* IPA_HW_MODE_VIRTUAL lacks support for TAG IC & EP suspend */
return;
}
- if (ipa_ctx->ipa_hw_type == IPA_HW_v1_1 && ep->suspended) {
+ if (ipa_ctx->ipa_hw_type == IPA_HW_v1_1)
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_CTRL_n_OFST(clnt_hdl), 0);
- ep->suspended = false;
- }
}
static int ipa_disable_data_path(u32 clnt_hdl)
@@ -52,7 +46,7 @@
goto fail_alloc;
}
- if (ipa_ctx->ipa_hw_type == IPA_HW_v1_1 && !ep->suspended) {
+ if (ipa_ctx->ipa_hw_type == IPA_HW_v1_1) {
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_CTRL_n_OFST(clnt_hdl), 1);
@@ -87,7 +81,6 @@
ep->cfg.aggr.aggr_en == IPA_ENABLE_AGGR &&
ep->cfg.aggr.aggr_time_limit)
msleep(ep->cfg.aggr.aggr_time_limit);
- ep->suspended = true;
}
return 0;
@@ -204,9 +197,7 @@
int result = -EFAULT;
struct ipa_ep_context *ep;
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
+ ipa_inc_client_enable_clks();
if (in == NULL || sps == NULL || clnt_hdl == NULL ||
in->client >= IPA_CLIENT_MAX ||
@@ -302,15 +293,17 @@
if (in->client == IPA_CLIENT_HSIC1_CONS ||
in->client == IPA_CLIENT_HSIC2_CONS ||
in->client == IPA_CLIENT_HSIC3_CONS ||
- in->client == IPA_CLIENT_HSIC4_CONS) {
+ in->client == IPA_CLIENT_HSIC4_CONS ||
+ in->client == IPA_CLIENT_A2_TETHERED_CONS ||
+ in->client == IPA_CLIENT_A2_EMBEDDED_CONS) {
IPADBG("disable holb for ep=%d tmr=%d\n", ipa_ep_idx,
- IPA_HOLB_TMR_VAL);
+ ipa_ctx->hol_timer);
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_HOL_BLOCK_EN_n_OFST(ipa_ep_idx),
- 0x1);
+ ipa_ctx->hol_en);
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_OFST(ipa_ep_idx),
- IPA_HOLB_TMR_VAL);
+ ipa_ctx->hol_timer);
}
IPADBG("client %d (ep: %d) connected\n", in->client, ipa_ep_idx);
@@ -342,11 +335,7 @@
ipa_cfg_ep_fail:
memset(&ipa_ctx->ep[ipa_ep_idx], 0, sizeof(struct ipa_ep_context));
fail:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
-
+ ipa_dec_client_disable_clks();
return result;
}
EXPORT_SYMBOL(ipa_connect);
@@ -375,6 +364,11 @@
ep = &ipa_ctx->ep[clnt_hdl];
+ if (ep->suspended) {
+ ipa_inc_client_enable_clks();
+ ep->suspended = false;
+ }
+
result = ipa_disable_data_path(clnt_hdl);
if (result) {
IPAERR("disable data path failed res=%d clnt=%d.\n", result,
@@ -421,10 +415,7 @@
ipa_enable_data_path(clnt_hdl);
memset(&ipa_ctx->ep[clnt_hdl], 0, sizeof(struct ipa_ep_context));
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
IPADBG("client (ep: %d) disconnected\n", clnt_hdl);
@@ -433,55 +424,79 @@
EXPORT_SYMBOL(ipa_disconnect);
/**
- * ipa_connection_suspend() - suspend B2B connection to/from IPA
+ * ipa_resume() - low-level IPA client resume
* @clnt_hdl: [in] opaque client handle assigned by IPA to client
*
- * Should be called by the driver of the peripheral that wants to suspend
- * its BAM-BAM connection to/from IPA in BAM-BAM mode. The pipe is not
- * disconnected and must later be resumed before data transfer can begin
+ * Should be called by the driver of the peripheral that wants to resume IPA
+ * connection. Resuming the IPA connection turns the IPA clocks back on if
+ * they were turned off as a result of suspend. This API may only be called
+ * after a prior call to ipa_suspend().
*
* Returns: 0 on success, negative on failure
*
* Note: Should not be called from atomic context
*/
-int ipa_connection_suspend(u32 clnt_hdl)
+int ipa_resume(u32 clnt_hdl)
{
- int result;
+ struct ipa_ep_context *ep;
if (clnt_hdl >= IPA_NUM_PIPES || ipa_ctx->ep[clnt_hdl].valid == 0) {
- IPAERR("bad parm.\n");
- return -EINVAL;
- }
- result = ipa_disable_data_path(clnt_hdl);
- if (result)
- IPAERR("disable data path failed res=%d clnt=%d.\n", result,
- clnt_hdl);
-
- return result;
-}
-EXPORT_SYMBOL(ipa_connection_suspend);
-
-/**
- * ipa_connection_resume() - resume B2B connection to/from IPA
- * @clnt_hdl: [in] opaque client handle assigned by IPA to client
- *
- * Should be called by the driver of the peripheral that wants to resume
- * its previously suspended BAM-BAM connection to/from IPA in BAM-BAM mode.
- *
- * Returns: 0 on success, negative on failure
- *
- * Note: Should not be called from atomic context
- */
-int ipa_connection_resume(u32 clnt_hdl)
-{
- if (clnt_hdl >= IPA_NUM_PIPES || ipa_ctx->ep[clnt_hdl].valid == 0) {
- IPAERR("bad parm.\n");
+ IPAERR("bad parm. clnt_hdl %d\n", clnt_hdl);
return -EINVAL;
}
- ipa_enable_data_path(clnt_hdl);
+ ep = &ipa_ctx->ep[clnt_hdl];
+
+ if (!ep->suspended) {
+ IPAERR("EP not suspended. clnt_hdl %d\n", clnt_hdl);
+ return -EPERM;
+ }
+
+ ipa_inc_client_enable_clks();
+ ep->suspended = false;
return 0;
}
-EXPORT_SYMBOL(ipa_connection_resume);
+EXPORT_SYMBOL(ipa_resume);
+/**
+* ipa_suspend() - low-level IPA client suspend
+* @clnt_hdl: [in] opaque client handle assigned by IPA to client
+*
+* Should be called by the driver of the peripheral that wants to suspend IPA
+* connection. Suspending the IPA connection turns off the IPA clocks if no
+* other clients are actively using IPA. Pipes remain connected across
+* suspend.
+*
+* Returns: 0 on success, negative on failure
+*
+* Note: Should not be called from atomic context
+*/
+int ipa_suspend(u32 clnt_hdl)
+{
+ struct ipa_ep_context *ep;
+
+ if (clnt_hdl >= IPA_NUM_PIPES || ipa_ctx->ep[clnt_hdl].valid == 0) {
+ IPAERR("bad parm. clnt_hdl %d\n", clnt_hdl);
+ return -EINVAL;
+ }
+
+ ep = &ipa_ctx->ep[clnt_hdl];
+
+ if (ep->suspended) {
+ IPAERR("EP already suspended. clnt_hdl %d\n", clnt_hdl);
+ return -EPERM;
+ }
+
+ if (IPA_CLIENT_IS_CONS(ep->client) &&
+ ep->cfg.aggr.aggr_en == IPA_ENABLE_AGGR &&
+ ep->cfg.aggr.aggr_time_limit)
+ msleep(ep->cfg.aggr.aggr_time_limit);
+
+ ipa_dec_client_disable_clks();
+ ep->suspended = true;
+
+ return 0;
+}
+EXPORT_SYMBOL(ipa_suspend);
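A minimal sketch of how a peripheral driver might pair the two new entry
points; the my_periph_* wrappers and the stored handle are hypothetical, only
ipa_suspend() and ipa_resume() come from this patch.

/* Illustrative only: runtime-PM style pairing of the new suspend/resume APIs. */
static int my_periph_bus_suspend(u32 ipa_clnt_hdl)
{
	/* Pipes stay connected; this only drops the IPA clock vote. */
	return ipa_suspend(ipa_clnt_hdl);
}

static int my_periph_bus_resume(u32 ipa_clnt_hdl)
{
	/* Re-takes the clock vote; returns -EPERM if the EP was not suspended. */
	return ipa_resume(ipa_clnt_hdl);
}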
diff --git a/drivers/platform/msm/ipa/ipa_debugfs.c b/drivers/platform/msm/ipa/ipa_debugfs.c
index 51a950d..b11c7da 100644
--- a/drivers/platform/msm/ipa/ipa_debugfs.c
+++ b/drivers/platform/msm/ipa/ipa_debugfs.c
@@ -94,6 +94,8 @@
static struct dentry *dent;
static struct dentry *dfile_gen_reg;
static struct dentry *dfile_ep_reg;
+static struct dentry *dfile_ep_hol_en;
+static struct dentry *dfile_ep_hol_timer;
static struct dentry *dfile_hdr;
static struct dentry *dfile_ip4_rt;
static struct dentry *dfile_ip6_rt;
@@ -102,6 +104,7 @@
static struct dentry *dfile_stats;
static struct dentry *dfile_dbg_cnt;
static struct dentry *dfile_msg;
+static struct dentry *dfile_ip4_nat;
static char dbg_buff[IPA_MAX_MSG_LEN];
static s8 ep_reg_idx;
@@ -144,6 +147,58 @@
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
+static ssize_t ipa_write_ep_hol_en_reg(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ u32 endp_reg_val;
+ unsigned long missing;
+
+ if (sizeof(dbg_buff) < count + 1)
+ return -EFAULT;
+
+ missing = copy_from_user(dbg_buff, buf, count);
+ if (missing)
+ return -EFAULT;
+
+ dbg_buff[count] = '\0';
+ if (kstrtou32(dbg_buff, 16, &endp_reg_val))
+ return -EFAULT;
+
+ ipa_write_reg(ipa_ctx->mmio,
+ IPA_ENDP_INIT_HOL_BLOCK_EN_n_OFST(ep_reg_idx),
+ endp_reg_val);
+
+ ipa_ctx->hol_en = endp_reg_val;
+
+ return count;
+}
+
+static ssize_t ipa_write_ep_hol_timer_reg(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ u32 endp_reg_val;
+ unsigned long missing;
+
+ if (sizeof(dbg_buff) < count + 1)
+ return -EFAULT;
+
+ missing = copy_from_user(dbg_buff, buf, count);
+ if (missing)
+ return -EFAULT;
+
+ dbg_buff[count] = '\0';
+ if (kstrtou32(dbg_buff, 16, &endp_reg_val))
+ return -EFAULT;
+
+ ipa_write_reg(ipa_ctx->mmio,
+ IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_OFST(ep_reg_idx),
+ endp_reg_val);
+
+ ipa_ctx->hol_timer = endp_reg_val;
+
+ return count;
+}
+
static ssize_t ipa_write_ep_reg(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
@@ -554,17 +609,31 @@
"rx=%u\n"
"rx_repl_repost=%u\n"
"x_intr_repost=%u\n"
+ "x_intr_repost_tx=%u\n"
"rx_q_len=%u\n"
"act_clnt=%u\n"
- "con_clnt_bmap=0x%x\n",
+ "con_clnt_bmap=0x%x\n"
+ "a2_power_on_reqs_in=%u\n"
+ "a2_power_on_reqs_out=%u\n"
+ "a2_power_off_reqs_in=%u\n"
+ "a2_power_off_reqs_out=%u\n"
+ "a2_power_modem_acks=%u\n"
+ "a2_power_apps_acks=%u\n",
ipa_ctx->stats.tx_sw_pkts,
ipa_ctx->stats.tx_hw_pkts,
ipa_ctx->stats.rx_pkts,
ipa_ctx->stats.rx_repl_repost,
ipa_ctx->stats.x_intr_repost,
+ ipa_ctx->stats.x_intr_repost_tx,
ipa_ctx->stats.rx_q_len,
- atomic_read(&ipa_ctx->ipa_active_clients),
- connect);
+ ipa_ctx->ipa_active_clients,
+ connect,
+ ipa_ctx->stats.a2_power_on_reqs_in,
+ ipa_ctx->stats.a2_power_on_reqs_out,
+ ipa_ctx->stats.a2_power_off_reqs_in,
+ ipa_ctx->stats.a2_power_off_reqs_out,
+ ipa_ctx->stats.a2_power_modem_acks,
+ ipa_ctx->stats.a2_power_apps_acks);
cnt += nbytes;
for (i = 0; i < MAX_NUM_EXCP; i++) {
@@ -654,6 +723,238 @@
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, cnt);
}
+static ssize_t ipa_read_nat4(struct file *file,
+ char __user *ubuf, size_t count,
+ loff_t *ppos) {
+
+#define ENTRY_U32_FIELDS 8
+#define NAT_ENTRY_ENABLE 0x8000
+#define NAT_ENTRY_RST_FIN_BIT 0x4000
+#define BASE_TABLE 0
+#define EXPANSION_TABLE 1
+
+ u32 *base_tbl, *indx_tbl;
+ u32 tbl_size, *tmp;
+ u32 value, i, j, rule_id;
+ u16 enable, tbl_entry, flag;
+ int nbytes, cnt;
+
+ cnt = 0;
+ value = ipa_ctx->nat_mem.public_ip_addr;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Table IP Address:%d.%d.%d.%d\n",
+ ((value & 0xFF000000) >> 24),
+ ((value & 0x00FF0000) >> 16),
+ ((value & 0x0000FF00) >> 8),
+ ((value & 0x000000FF)));
+ cnt += nbytes;
+
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Table Size:%d\n",
+ ipa_ctx->nat_mem.size_base_tables);
+ cnt += nbytes;
+
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Expansion Table Size:%d\n",
+ ipa_ctx->nat_mem.size_expansion_tables);
+ cnt += nbytes;
+
+ if (!ipa_ctx->nat_mem.is_sys_mem) {
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Not supported for local(shared) memory\n");
+ cnt += nbytes;
+
+ return simple_read_from_buffer(ubuf, count,
+ ppos, dbg_buff, cnt);
+ }
+
+
+ /* Print Base tables */
+ rule_id = 0;
+ for (j = 0; j < 2; j++) {
+ if (j == BASE_TABLE) {
+ tbl_size = ipa_ctx->nat_mem.size_base_tables;
+ base_tbl = (u32 *)ipa_ctx->nat_mem.ipv4_rules_addr;
+
+ nbytes = scnprintf(dbg_buff + cnt, IPA_MAX_MSG_LEN,
+ "\nBase Table:\n");
+ cnt += nbytes;
+ } else {
+ tbl_size = ipa_ctx->nat_mem.size_expansion_tables;
+ base_tbl =
+ (u32 *)ipa_ctx->nat_mem.ipv4_expansion_rules_addr;
+
+ nbytes = scnprintf(dbg_buff + cnt, IPA_MAX_MSG_LEN,
+ "\nExpansion Base Table:\n");
+ cnt += nbytes;
+ }
+
+ if (base_tbl != NULL) {
+ for (i = 0; i <= tbl_size; i++, rule_id++) {
+ tmp = base_tbl;
+ value = tmp[4];
+ enable = ((value & 0xFFFF0000) >> 16);
+
+ if (enable & NAT_ENTRY_ENABLE) {
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Rule:%d ",
+ rule_id);
+ cnt += nbytes;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Private_IP:%d.%d.%d.%d ",
+ ((value & 0xFF000000) >> 24),
+ ((value & 0x00FF0000) >> 16),
+ ((value & 0x0000FF00) >> 8),
+ ((value & 0x000000FF)));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Target_IP:%d.%d.%d.%d ",
+ ((value & 0xFF000000) >> 24),
+ ((value & 0x00FF0000) >> 16),
+ ((value & 0x0000FF00) >> 8),
+ ((value & 0x000000FF)));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Next_Index:%d Public_Port:%d ",
+ (value & 0x0000FFFF),
+ ((value & 0xFFFF0000) >> 16));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Private_Port:%d Target_Port:%d ",
+ (value & 0x0000FFFF),
+ ((value & 0xFFFF0000) >> 16));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "IP-CKSM-delta:0x%x ",
+ (value & 0x0000FFFF));
+ cnt += nbytes;
+
+ flag = ((value & 0xFFFF0000) >> 16);
+ if (flag & NAT_ENTRY_RST_FIN_BIT) {
+ nbytes =
+ scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "IP_CKSM_delta:0x%x Flags:%s ",
+ (value & 0x0000FFFF),
+ "Direct_To_A5");
+ cnt += nbytes;
+ } else {
+ nbytes =
+ scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "IP_CKSM_delta:0x%x Flags:%s ",
+ (value & 0x0000FFFF),
+ "Fwd_to_route");
+ cnt += nbytes;
+ }
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Time_stamp:0x%x Proto:%d ",
+ (value & 0x00FFFFFF),
+ ((value & 0xFF000000) >> 27));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Prev_Index:%d Indx_tbl_entry:%d ",
+ (value & 0x0000FFFF),
+ ((value & 0xFFFF0000) >> 16));
+ cnt += nbytes;
+ tmp++;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "TCP_UDP_cksum_delta:0x%x\n",
+ ((value & 0xFFFF0000) >> 16));
+ cnt += nbytes;
+ }
+
+ base_tbl += ENTRY_U32_FIELDS;
+
+ }
+ }
+ }
+
+ /* Print Index tables */
+ rule_id = 0;
+ for (j = 0; j < 2; j++) {
+ if (j == BASE_TABLE) {
+ tbl_size = ipa_ctx->nat_mem.size_base_tables;
+ indx_tbl = (u32 *)ipa_ctx->nat_mem.index_table_addr;
+
+ nbytes = scnprintf(dbg_buff + cnt, IPA_MAX_MSG_LEN,
+ "\nIndex Table:\n");
+ cnt += nbytes;
+ } else {
+ tbl_size = ipa_ctx->nat_mem.size_expansion_tables;
+ indx_tbl =
+ (u32 *)ipa_ctx->nat_mem.index_table_expansion_addr;
+
+ nbytes = scnprintf(dbg_buff + cnt, IPA_MAX_MSG_LEN,
+ "\nExpansion Index Table:\n");
+ cnt += nbytes;
+ }
+
+ if (indx_tbl != NULL) {
+ for (i = 0; i <= tbl_size; i++, rule_id++) {
+ tmp = indx_tbl;
+ value = *tmp;
+ tbl_entry = (value & 0x0000FFFF);
+
+ if (tbl_entry) {
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Rule:%d ",
+ rule_id);
+ cnt += nbytes;
+
+ value = *tmp;
+ nbytes = scnprintf(dbg_buff + cnt,
+ IPA_MAX_MSG_LEN,
+ "Table_Entry:%d Next_Index:%d\n",
+ tbl_entry,
+ ((value & 0xFFFF0000) >> 16));
+ cnt += nbytes;
+ }
+
+ indx_tbl++;
+ }
+ }
+ }
+
+ return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, cnt);
+}
+
const struct file_operations ipa_gen_reg_ops = {
.read = ipa_read_gen_reg,
};
@@ -663,6 +964,13 @@
.write = ipa_write_ep_reg,
};
+const struct file_operations ipa_ep_hol_en_ops = {
+ .write = ipa_write_ep_hol_en_reg,
+};
+const struct file_operations ipa_ep_hol_timer_ops = {
+ .write = ipa_write_ep_hol_timer_reg,
+};
+
const struct file_operations ipa_hdr_ops = {
.read = ipa_read_hdr,
};
@@ -690,11 +998,16 @@
.write = ipa_write_dbg_cnt,
};
+const struct file_operations ipa_nat4_ops = {
+ .read = ipa_read_nat4,
+};
+
void ipa_debugfs_init(void)
{
const mode_t read_only_mode = S_IRUSR | S_IRGRP | S_IROTH;
const mode_t read_write_mode = S_IRUSR | S_IRGRP | S_IROTH |
S_IWUSR | S_IWGRP | S_IWOTH;
+ const mode_t write_only_mode = S_IWUSR | S_IWGRP | S_IWOTH;
dent = debugfs_create_dir("ipa", 0);
if (IS_ERR(dent)) {
@@ -716,6 +1029,20 @@
goto fail;
}
+ dfile_ep_hol_en = debugfs_create_file("hol_en", write_only_mode, dent,
+ 0, &ipa_ep_hol_en_ops);
+ if (!dfile_ep_hol_en || IS_ERR(dfile_ep_hol_en)) {
+ IPAERR("fail to create file for debug_fs dfile_ep_hol_en\n");
+ goto fail;
+ }
+
+ dfile_ep_hol_timer = debugfs_create_file("hol_timer", write_only_mode,
+ dent, 0, &ipa_ep_hol_timer_ops);
+ if (!dfile_ep_hol_timer || IS_ERR(dfile_ep_hol_timer)) {
+ IPAERR("fail to create file for debug_fs dfile_ep_hol_timer\n");
+ goto fail;
+ }
+
dfile_hdr = debugfs_create_file("hdr", read_only_mode, dent, 0,
&ipa_hdr_ops);
if (!dfile_hdr || IS_ERR(dfile_hdr)) {
@@ -772,6 +1099,13 @@
goto fail;
}
+ dfile_ip4_nat = debugfs_create_file("ip4_nat", read_only_mode, dent,
+ 0, &ipa_nat4_ops);
+ if (!dfile_ip4_nat || IS_ERR(dfile_ip4_nat)) {
+ IPAERR("fail to create file for debug_fs ip4 nat\n");
+ goto fail;
+ }
+
return;
fail:
diff --git a/drivers/platform/msm/ipa/ipa_dp.c b/drivers/platform/msm/ipa/ipa_dp.c
index 5f7f3d9..67728c2 100644
--- a/drivers/platform/msm/ipa/ipa_dp.c
+++ b/drivers/platform/msm/ipa/ipa_dp.c
@@ -20,14 +20,15 @@
#define list_next_entry(pos, member) \
list_entry(pos->member.next, typeof(*pos), member)
#define IPA_LAST_DESC_CNT 0xFFFF
-#define POLLING_INACTIVITY 40
-#define POLLING_MIN_SLEEP 950
-#define POLLING_MAX_SLEEP 1050
+#define POLLING_INACTIVITY_RX 40
+#define POLLING_MIN_SLEEP_RX 950
+#define POLLING_MAX_SLEEP_RX 1050
+#define POLLING_INACTIVITY_TX 40
+#define POLLING_MIN_SLEEP_TX 400
+#define POLLING_MAX_SLEEP_TX 500
static void replenish_rx_work_func(struct work_struct *work);
static struct delayed_work replenish_rx_work;
-static void switch_to_intr_work_func(struct work_struct *work);
-static struct delayed_work switch_to_intr_work;
static void ipa_wq_handle_rx(struct work_struct *work);
static DECLARE_WORK(rx_work, ipa_wq_handle_rx);
@@ -42,62 +43,169 @@
* the order for sent packet is the same as expected
* - delete all the tx packet descriptors from the system
* pipe context (not needed anymore)
- * - return the tx buffer back to one_kb_no_straddle_pool
+ * - return the tx buffer back to dma_pool
*/
void ipa_wq_write_done(struct work_struct *work)
{
struct ipa_tx_pkt_wrapper *tx_pkt;
- struct ipa_tx_pkt_wrapper *next_pkt;
struct ipa_tx_pkt_wrapper *tx_pkt_expected;
unsigned long irq_flags;
- struct ipa_mem_buffer mult = { 0 };
- int i;
- u32 cnt;
tx_pkt = container_of(work, struct ipa_tx_pkt_wrapper, work);
- cnt = tx_pkt->cnt;
- IPADBG("cnt=%d\n", cnt);
- if (unlikely(cnt == 0))
+ if (unlikely(tx_pkt == NULL))
WARN_ON(1);
+ WARN_ON(tx_pkt->cnt != 1);
- if (cnt > 1 && cnt != IPA_LAST_DESC_CNT)
- mult = tx_pkt->mult;
+ spin_lock_irqsave(&tx_pkt->sys->spinlock, irq_flags);
+ tx_pkt_expected = list_first_entry(&tx_pkt->sys->head_desc_list,
+ struct ipa_tx_pkt_wrapper,
+ link);
+ if (unlikely(tx_pkt != tx_pkt_expected)) {
+ spin_unlock_irqrestore(&tx_pkt->sys->spinlock,
+ irq_flags);
+ WARN_ON(1);
+ }
+ list_del(&tx_pkt->link);
+ spin_unlock_irqrestore(&tx_pkt->sys->spinlock, irq_flags);
+ if (unlikely(ipa_ctx->ipa_hw_type == IPA_HW_v1_0)) {
+ dma_pool_free(ipa_ctx->dma_pool,
+ tx_pkt->bounce,
+ tx_pkt->mem.phys_base);
+ } else {
+ dma_unmap_single(NULL, tx_pkt->mem.phys_base,
+ tx_pkt->mem.size,
+ DMA_TO_DEVICE);
+ }
- for (i = 0; i < cnt; i++) {
- if (unlikely(tx_pkt == NULL))
- WARN_ON(1);
- spin_lock_irqsave(&tx_pkt->sys->spinlock, irq_flags);
- tx_pkt_expected = list_first_entry(&tx_pkt->sys->head_desc_list,
- struct ipa_tx_pkt_wrapper,
- link);
- if (unlikely(tx_pkt != tx_pkt_expected)) {
- spin_unlock_irqrestore(&tx_pkt->sys->spinlock,
- irq_flags);
- WARN_ON(1);
+ if (tx_pkt->callback)
+ tx_pkt->callback(tx_pkt->user1, tx_pkt->user2);
+
+ kmem_cache_free(ipa_ctx->tx_pkt_wrapper_cache, tx_pkt);
+}
+
+int ipa_handle_tx_core(struct ipa_sys_context *sys, bool process_all,
+ bool in_poll_state)
+{
+ struct ipa_tx_pkt_wrapper *tx_pkt;
+ struct sps_iovec iov;
+ int ret;
+ int cnt = 0;
+ unsigned long irq_flags;
+
+ while ((in_poll_state ? atomic_read(&sys->curr_polling_state) :
+ !atomic_read(&sys->curr_polling_state))) {
+ if (cnt && !process_all)
+ break;
+ ret = sps_get_iovec(sys->ep->ep_hdl, &iov);
+ if (ret) {
+ IPAERR("sps_get_iovec failed %d\n", ret);
+ break;
}
- next_pkt = list_next_entry(tx_pkt, link);
+
+ if (iov.addr == 0)
+ break;
+
+ if (unlikely(list_empty(&sys->head_desc_list)))
+ continue;
+
+ spin_lock_irqsave(&sys->spinlock, irq_flags);
+ tx_pkt = list_first_entry(&sys->head_desc_list,
+ struct ipa_tx_pkt_wrapper, link);
+
+ sys->len--;
list_del(&tx_pkt->link);
- spin_unlock_irqrestore(&tx_pkt->sys->spinlock, irq_flags);
- if (unlikely(ipa_ctx->ipa_hw_type == IPA_HW_v1_0)) {
- dma_pool_free(ipa_ctx->one_kb_no_straddle_pool,
+ spin_unlock_irqrestore(&sys->spinlock, irq_flags);
+
+ IPADBG("--curr_cnt=%d\n", sys->len);
+
+ if (unlikely(ipa_ctx->ipa_hw_type == IPA_HW_v1_0))
+ dma_pool_free(ipa_ctx->dma_pool,
tx_pkt->bounce,
tx_pkt->mem.phys_base);
- } else {
+ else
dma_unmap_single(NULL, tx_pkt->mem.phys_base,
tx_pkt->mem.size,
DMA_TO_DEVICE);
- }
if (tx_pkt->callback)
tx_pkt->callback(tx_pkt->user1, tx_pkt->user2);
+ if (tx_pkt->cnt > 1 && tx_pkt->cnt != IPA_LAST_DESC_CNT)
+ dma_pool_free(ipa_ctx->dma_pool, tx_pkt->mult.base,
+ tx_pkt->mult.phys_base);
+
kmem_cache_free(ipa_ctx->tx_pkt_wrapper_cache, tx_pkt);
- tx_pkt = next_pkt;
+ cnt++;
+	}
+
+ return cnt;
+}
+
+/**
+ * ipa_tx_switch_to_intr_mode() - Operate the Tx data path in interrupt mode
+ */
+static void ipa_tx_switch_to_intr_mode(struct ipa_sys_context *sys)
+{
+ int ret;
+
+ if (!atomic_read(&sys->curr_polling_state)) {
+ IPAERR("already in intr mode\n");
+ goto fail;
}
- if (mult.phys_base)
- dma_free_coherent(NULL, mult.size, mult.base, mult.phys_base);
+ ret = sps_get_config(sys->ep->ep_hdl, &sys->ep->connect);
+ if (ret) {
+ IPAERR("sps_get_config() failed %d\n", ret);
+ goto fail;
+ }
+ sys->event.options = SPS_O_EOT;
+ ret = sps_register_event(sys->ep->ep_hdl, &sys->event);
+ if (ret) {
+ IPAERR("sps_register_event() failed %d\n", ret);
+ goto fail;
+ }
+ sys->ep->connect.options =
+ SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_EOT;
+ ret = sps_set_config(sys->ep->ep_hdl, &sys->ep->connect);
+ if (ret) {
+ IPAERR("sps_set_config() failed %d\n", ret);
+ goto fail;
+ }
+ atomic_set(&sys->curr_polling_state, 0);
+ ipa_handle_tx_core(sys, true, false);
+ return;
+
+fail:
+ IPA_STATS_INC_CNT(ipa_ctx->stats.x_intr_repost_tx);
+ schedule_delayed_work(&sys->switch_to_intr_work, msecs_to_jiffies(1));
+ return;
+}
+
+static void ipa_handle_tx(struct ipa_sys_context *sys)
+{
+ int inactive_cycles = 0;
+ int cnt;
+
+ do {
+ cnt = ipa_handle_tx_core(sys, true, true);
+ if (cnt == 0) {
+ inactive_cycles++;
+ usleep_range(POLLING_MIN_SLEEP_TX,
+ POLLING_MAX_SLEEP_TX);
+ } else {
+ inactive_cycles = 0;
+ }
+ } while (inactive_cycles <= POLLING_INACTIVITY_TX);
+
+ ipa_tx_switch_to_intr_mode(sys);
+}
+
+static void ipa_wq_handle_tx(struct work_struct *work)
+{
+ struct ipa_tx_pkt_wrapper *tx_pkt;
+ tx_pkt = container_of(work, struct ipa_tx_pkt_wrapper, work);
+ ipa_handle_tx(tx_pkt->sys);
}
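The new Tx completion path mirrors the existing Rx scheme: an EOT interrupt moves the pipe to SPS_O_POLL, a worker drains completed descriptors, and after POLLING_INACTIVITY_TX empty passes the pipe is switched back to interrupt mode. Below is a generic, self-contained sketch of that inactivity-bounded polling loop; poll_once() and switch_to_intr() are hypothetical stand-ins for ipa_handle_tx_core() and ipa_tx_switch_to_intr_mode().

#include <linux/delay.h>

#define INACTIVITY_LIMIT	40	/* empty passes tolerated before leaving poll mode */
#define SLEEP_MIN_US		400
#define SLEEP_MAX_US		500

/* Hedged sketch, not driver code: poll_once() returns how many descriptors
 * were drained, switch_to_intr() re-arms the EOT interrupt. */
static void poll_until_idle(void *ctx,
			    int (*poll_once)(void *ctx),
			    void (*switch_to_intr)(void *ctx))
{
	int inactive = 0;

	do {
		if (poll_once(ctx) == 0) {
			inactive++;
			usleep_range(SLEEP_MIN_US, SLEEP_MAX_US);
		} else {
			inactive = 0;
		}
	} while (inactive <= INACTIVITY_LIMIT);

	switch_to_intr(ctx);
}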
/**
@@ -144,7 +252,7 @@
* does not cross a 1KB boundary
*/
tx_pkt->bounce = dma_pool_alloc(
- ipa_ctx->one_kb_no_straddle_pool,
+ ipa_ctx->dma_pool,
mem_flag, &dma_address);
if (!tx_pkt->bounce) {
dma_address = 0;
@@ -165,7 +273,6 @@
}
INIT_LIST_HEAD(&tx_pkt->link);
- INIT_WORK(&tx_pkt->work, ipa_wq_write_done);
tx_pkt->type = desc->type;
tx_pkt->cnt = 1; /* only 1 desc in this "set" */
@@ -187,10 +294,15 @@
IPADBG("sending cmd=%d pyld_len=%d sps_flags=%x\n",
desc->opcode, desc->len, sps_flags);
IPA_DUMP_BUFF(desc->pyld, dma_address, desc->len);
+ INIT_WORK(&tx_pkt->work, ipa_wq_write_done);
} else {
len = desc->len;
+ INIT_WORK(&tx_pkt->work, ipa_wq_handle_tx);
}
+ if (unlikely(ipa_ctx->polling_mode))
+ INIT_WORK(&tx_pkt->work, ipa_wq_handle_tx);
+
spin_lock_irqsave(&sys->spinlock, irq_flags);
list_add_tail(&tx_pkt->link, &sys->head_desc_list);
result = sps_transfer_one(sys->ep->ep_hdl, dma_address, len, tx_pkt,
@@ -208,7 +320,7 @@
list_del(&tx_pkt->link);
spin_unlock_irqrestore(&sys->spinlock, irq_flags);
if (unlikely(ipa_ctx->ipa_hw_type == IPA_HW_v1_0))
- dma_pool_free(ipa_ctx->one_kb_no_straddle_pool, tx_pkt->bounce,
+ dma_pool_free(ipa_ctx->dma_pool, tx_pkt->bounce,
dma_address);
else
dma_unmap_single(NULL, dma_address, desc->len, DMA_TO_DEVICE);
@@ -259,7 +371,7 @@
if (unlikely(!in_atomic))
mem_flag = GFP_KERNEL;
- transfer.iovec = dma_alloc_coherent(NULL, size, &dma_addr, mem_flag);
+ transfer.iovec = dma_pool_alloc(ipa_ctx->dma_pool, mem_flag, &dma_addr);
transfer.iovec_phys = dma_addr;
transfer.iovec_count = num_desc;
spin_lock_irqsave(&sys->spinlock, irq_flags);
@@ -286,7 +398,7 @@
tx_pkt->mult.base = transfer.iovec;
tx_pkt->mult.size = size;
tx_pkt->cnt = num_desc;
- INIT_WORK(&tx_pkt->work, ipa_wq_write_done);
+ INIT_WORK(&tx_pkt->work, ipa_wq_handle_tx);
}
iovec = &transfer.iovec[i];
@@ -306,7 +418,7 @@
* packet does not cross a 1KB boundary
*/
tx_pkt->bounce =
- dma_pool_alloc(ipa_ctx->one_kb_no_straddle_pool,
+ dma_pool_alloc(ipa_ctx->dma_pool,
mem_flag,
&tx_pkt->mem.phys_base);
if (!tx_pkt->bounce) {
@@ -377,7 +489,7 @@
next_pkt = list_next_entry(tx_pkt, link);
list_del(&tx_pkt->link);
if (unlikely(ipa_ctx->ipa_hw_type == IPA_HW_v1_0))
- dma_pool_free(ipa_ctx->one_kb_no_straddle_pool,
+ dma_pool_free(ipa_ctx->dma_pool,
tx_pkt->bounce,
tx_pkt->mem.phys_base);
else
@@ -392,7 +504,7 @@
if (fail_dma_wrap)
kmem_cache_free(ipa_ctx->tx_pkt_wrapper_cache, tx_pkt);
if (transfer.iovec_phys)
- dma_free_coherent(NULL, size, transfer.iovec,
+ dma_pool_free(ipa_ctx->dma_pool, transfer.iovec,
transfer.iovec_phys);
failure_coherent:
spin_unlock_irqrestore(&sys->spinlock, irq_flags);
@@ -433,9 +545,7 @@
struct ipa_desc *desc;
int result = 0;
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
+ ipa_inc_client_enable_clks();
if (num_desc == 1) {
init_completion(&descr->xfer_done);
@@ -471,14 +581,55 @@
IPA_STATS_INC_IC_CNT(num_desc, descr, ipa_ctx->stats.imm_cmds);
bail:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
+ ipa_dec_client_disable_clks();
return result;
}
/**
* ipa_sps_irq_tx_notify() - Callback function which will be called by
+ * the SPS driver to start a Tx poll operation.
+ * Called in an interrupt context.
+ * @notify: SPS driver supplied notification struct
+ *
+ * This function defers the work for this event to the tx workqueue.
+ */
+static void ipa_sps_irq_tx_notify(struct sps_event_notify *notify)
+{
+ struct ipa_sys_context *sys = &ipa_ctx->sys[IPA_A5_LAN_WAN_OUT];
+ struct ipa_tx_pkt_wrapper *tx_pkt;
+ int ret;
+
+ IPADBG("event %d notified\n", notify->event_id);
+
+ switch (notify->event_id) {
+ case SPS_EVENT_EOT:
+ tx_pkt = notify->data.transfer.user;
+ if (!atomic_read(&sys->curr_polling_state)) {
+ ret = sps_get_config(sys->ep->ep_hdl,
+ &sys->ep->connect);
+ if (ret) {
+ IPAERR("sps_get_config() failed %d\n", ret);
+ break;
+ }
+ sys->ep->connect.options = SPS_O_AUTO_ENABLE |
+ SPS_O_ACK_TRANSFERS | SPS_O_POLL;
+ ret = sps_set_config(sys->ep->ep_hdl,
+ &sys->ep->connect);
+ if (ret) {
+ IPAERR("sps_set_config() failed %d\n", ret);
+ break;
+ }
+ atomic_set(&sys->curr_polling_state, 1);
+ queue_work(ipa_ctx->tx_wq, &tx_pkt->work);
+ }
+ break;
+ default:
+		IPAERR("received unexpected event id %d\n", notify->event_id);
+ }
+}
+
+/**
+ * ipa_sps_irq_tx_no_aggr_notify() - Callback function which will be called by
* the SPS driver after a Tx operation is complete.
* Called in an interrupt context.
* @notify: SPS driver supplied notification struct
@@ -486,7 +637,7 @@
* This function defer the work for this event to the tx workqueue.
* This event will be later handled by ipa_write_done.
*/
-static void ipa_sps_irq_tx_notify(struct sps_event_notify *notify)
+static void ipa_sps_irq_tx_no_aggr_notify(struct sps_event_notify *notify)
{
struct ipa_tx_pkt_wrapper *tx_pkt;
@@ -516,7 +667,8 @@
* - Call the endpoints notify function, passing the skb in the parameters
* - Replenish the rx cache
*/
-int ipa_handle_rx_core(bool process_all, bool in_poll_state)
+int ipa_handle_rx_core(struct ipa_sys_context *sys, bool process_all,
+ bool in_poll_state)
{
struct ipa_a5_mux_hdr *mux_hdr;
struct ipa_rx_pkt_wrapper *rx_pkt;
@@ -525,15 +677,14 @@
unsigned int pull_len;
unsigned int padding;
int ret;
- struct ipa_sys_context *sys = &ipa_ctx->sys[IPA_A5_LAN_WAN_IN];
struct ipa_ep_context *ep;
int cnt = 0;
struct completion *compl;
struct ipa_tree_node *node;
unsigned int src_pipe;
- while ((in_poll_state ? atomic_read(&ipa_ctx->curr_polling_state) :
- !atomic_read(&ipa_ctx->curr_polling_state))) {
+ while ((in_poll_state ? atomic_read(&sys->curr_polling_state) :
+ !atomic_read(&sys->curr_polling_state))) {
if (cnt && !process_all)
break;
@@ -658,19 +809,15 @@
/**
* ipa_rx_switch_to_intr_mode() - Operate the Rx data path in interrupt mode
*/
-static void ipa_rx_switch_to_intr_mode(void)
+static void ipa_rx_switch_to_intr_mode(struct ipa_sys_context *sys)
{
int ret;
- struct ipa_sys_context *sys;
- IPADBG("Enter");
- if (!atomic_read(&ipa_ctx->curr_polling_state)) {
+ if (!atomic_read(&sys->curr_polling_state)) {
IPAERR("already in intr mode\n");
goto fail;
}
- sys = &ipa_ctx->sys[IPA_A5_LAN_WAN_IN];
-
ret = sps_get_config(sys->ep->ep_hdl, &sys->ep->connect);
if (ret) {
IPAERR("sps_get_config() failed %d\n", ret);
@@ -689,15 +836,16 @@
IPAERR("sps_set_config() failed %d\n", ret);
goto fail;
}
- atomic_set(&ipa_ctx->curr_polling_state, 0);
- ipa_handle_rx_core(true, false);
+ atomic_set(&sys->curr_polling_state, 0);
+ ipa_handle_rx_core(sys, true, false);
return;
fail:
IPA_STATS_INC_CNT(ipa_ctx->stats.x_intr_repost);
- schedule_delayed_work(&switch_to_intr_work, msecs_to_jiffies(1));
+ schedule_delayed_work(&sys->switch_to_intr_work, msecs_to_jiffies(1));
}
+
/**
* ipa_rx_notify() - Callback function which is called by the SPS driver when a
* a packet is received
@@ -713,29 +861,29 @@
*/
static void ipa_sps_irq_rx_notify(struct sps_event_notify *notify)
{
- struct ipa_ep_context *ep;
+ struct ipa_sys_context *sys = &ipa_ctx->sys[IPA_A5_LAN_WAN_IN];
int ret;
IPADBG("event %d notified\n", notify->event_id);
switch (notify->event_id) {
case SPS_EVENT_EOT:
- if (!atomic_read(&ipa_ctx->curr_polling_state)) {
- ep = ipa_ctx->sys[IPA_A5_LAN_WAN_IN].ep;
-
- ret = sps_get_config(ep->ep_hdl, &ep->connect);
+ if (!atomic_read(&sys->curr_polling_state)) {
+ ret = sps_get_config(sys->ep->ep_hdl,
+ &sys->ep->connect);
if (ret) {
IPAERR("sps_get_config() failed %d\n", ret);
break;
}
- ep->connect.options = SPS_O_AUTO_ENABLE |
+ sys->ep->connect.options = SPS_O_AUTO_ENABLE |
SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- ret = sps_set_config(ep->ep_hdl, &ep->connect);
+ ret = sps_set_config(sys->ep->ep_hdl,
+ &sys->ep->connect);
if (ret) {
IPAERR("sps_set_config() failed %d\n", ret);
break;
}
- atomic_set(&ipa_ctx->curr_polling_state, 1);
+ atomic_set(&sys->curr_polling_state, 1);
queue_work(ipa_ctx->rx_wq, &rx_work);
}
break;
@@ -744,6 +892,51 @@
}
}
+static void switch_to_intr_tx_work_func(struct work_struct *work)
+{
+ struct delayed_work *dwork;
+ struct ipa_sys_context *sys;
+ dwork = container_of(work, struct delayed_work, work);
+ sys = container_of(dwork, struct ipa_sys_context, switch_to_intr_work);
+ ipa_handle_tx(sys);
+}
+
+/**
+ * ipa_handle_rx() - handle packet reception. This function is executed in the
+ * context of a work queue.
+ * @work: work struct needed by the work queue
+ *
+ * ipa_handle_rx_core() is run in polling mode. After all packets has been
+ * received, the driver switches back to interrupt mode.
+ */
+static void ipa_handle_rx(struct ipa_sys_context *sys)
+{
+ int inactive_cycles = 0;
+ int cnt;
+
+ do {
+ cnt = ipa_handle_rx_core(sys, true, true);
+ if (cnt == 0) {
+ inactive_cycles++;
+ usleep_range(POLLING_MIN_SLEEP_RX,
+ POLLING_MAX_SLEEP_RX);
+ } else {
+ inactive_cycles = 0;
+ }
+ } while (inactive_cycles <= POLLING_INACTIVITY_RX);
+
+ ipa_rx_switch_to_intr_mode(sys);
+}
+
+static void switch_to_intr_rx_work_func(struct work_struct *work)
+{
+ struct delayed_work *dwork;
+ struct ipa_sys_context *sys;
+ dwork = container_of(work, struct delayed_work, work);
+ sys = container_of(dwork, struct ipa_sys_context, switch_to_intr_work);
+ ipa_handle_rx(sys);
+}
+
/**
* ipa_setup_sys_pipe() - Setup an IPA end-point in system-BAM mode and perform
* IPA EP configuration
@@ -862,14 +1055,18 @@
switch (ipa_ep_idx) {
case 1:
- /* fall through */
+ sys_idx = ipa_ep_idx;
+ break;
case 2:
- /* fall through */
+ sys_idx = ipa_ep_idx;
+ INIT_DELAYED_WORK(&ipa_ctx->sys[sys_idx].switch_to_intr_work,
+ switch_to_intr_tx_work_func);
+ break;
case 3:
sys_idx = ipa_ep_idx;
INIT_DELAYED_WORK(&replenish_rx_work, replenish_rx_work_func);
- INIT_DELAYED_WORK(&switch_to_intr_work,
- switch_to_intr_work_func);
+ INIT_DELAYED_WORK(&ipa_ctx->sys[sys_idx].switch_to_intr_work,
+ switch_to_intr_rx_work_func);
break;
case WLAN_AMPDU_TX_EP:
sys_idx = IPA_A5_WLAN_AMPDU_OUT;
@@ -890,7 +1087,10 @@
ipa_ctx->sys[sys_idx].event.callback =
IPA_CLIENT_IS_CONS(sys_in->client) ?
ipa_sps_irq_rx_notify :
- ipa_sps_irq_tx_notify;
+ (sys_in->client ==
+ IPA_CLIENT_A5_LAN_WAN_PROD ?
+ ipa_sps_irq_tx_notify :
+ ipa_sps_irq_tx_no_aggr_notify);
result = sps_register_event(ipa_ctx->ep[ipa_ep_idx].ep_hdl,
&ipa_ctx->sys[sys_idx].event);
if (result < 0) {
@@ -1082,35 +1282,9 @@
}
EXPORT_SYMBOL(ipa_tx_dp);
-static void ipa_handle_rx(void)
-{
- int inactive_cycles = 0;
- int cnt;
-
- do {
- cnt = ipa_handle_rx_core(true, true);
- if (cnt == 0) {
- inactive_cycles++;
- usleep_range(POLLING_MIN_SLEEP, POLLING_MAX_SLEEP);
- } else {
- inactive_cycles = 0;
- }
- } while (inactive_cycles <= POLLING_INACTIVITY);
-
- ipa_rx_switch_to_intr_mode();
-}
-
-/**
- * ipa_handle_rx() - handle packet reception. This function is executed in the
- * context of a work queue.
- * @work: work struct needed by the work queue
- *
- * ipa_handle_rx_core() is run in polling mode. After all packets has been
- * received, the driver switches back to interrupt mode.
- */
static void ipa_wq_handle_rx(struct work_struct *work)
{
- ipa_handle_rx();
+ ipa_handle_rx(&ipa_ctx->sys[IPA_A5_LAN_WAN_IN]);
}
/**
@@ -1204,11 +1378,6 @@
ipa_replenish_rx_cache();
}
-static void switch_to_intr_work_func(struct work_struct *work)
-{
- ipa_handle_rx();
-}
-
/**
* ipa_cleanup_rx() - release RX queue resources
*
diff --git a/drivers/platform/msm/ipa/ipa_flt.c b/drivers/platform/msm/ipa/ipa_flt.c
index b63b939..edb9fb1 100644
--- a/drivers/platform/msm/ipa/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_flt.c
@@ -368,6 +368,7 @@
return 0;
proc_err:
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
+ mem->base = NULL;
error:
return -EPERM;
@@ -456,7 +457,7 @@
if (mem->size > avail) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size, avail);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
if (ip == IPA_IP_v4) {
diff --git a/drivers/platform/msm/ipa/ipa_hdr.c b/drivers/platform/msm/ipa/ipa_hdr.c
index 7d0bc24..9618da2 100644
--- a/drivers/platform/msm/ipa/ipa_hdr.c
+++ b/drivers/platform/msm/ipa/ipa_hdr.c
@@ -89,7 +89,7 @@
if (ipa_ctx->hdr_tbl_lcl && mem->size > IPA_RAM_HDR_SIZE) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size,
IPA_RAM_HDR_SIZE);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
cmd->hdr_table_addr = mem->phys_base;
@@ -126,7 +126,7 @@
return 0;
fail_send_cmd:
- if (mem->phys_base)
+ if (mem->base)
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
fail_hw_tbl_gen:
kfree(cmd);
diff --git a/drivers/platform/msm/ipa/ipa_i.h b/drivers/platform/msm/ipa/ipa_i.h
index ca5740d..b57194e 100644
--- a/drivers/platform/msm/ipa/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_i.h
@@ -30,7 +30,8 @@
#define IPA_COOKIE 0xfacefeed
#define IPA_NUM_PIPES 0x14
-#define IPA_SYS_DESC_FIFO_SZ (0x800)
+#define IPA_SYS_DESC_FIFO_SZ 0x800
+#define IPA_SYS_TX_DATA_DESC_FIFO_SZ 0x1000
#ifdef IPA_DEBUG
#define IPADBG(fmt, args...) \
@@ -40,10 +41,11 @@
#define IPADBG(fmt, args...)
#endif
-#define WLAN_AMPDU_TX_EP (15)
-#define WLAN_PROD_TX_EP (19)
-#define MAX_NUM_EXCP (8)
-#define MAX_NUM_IMM_CMD (17)
+#define WLAN_AMPDU_TX_EP 15
+#define WLAN_PROD_TX_EP 19
+
+#define MAX_NUM_EXCP 8
+#define MAX_NUM_IMM_CMD 17
#define IPA_STATS
@@ -370,7 +372,6 @@
* @spinlock: protects the list and its size
* @event: used to request CALLBACK mode from SPS driver
* @ep: IPA EP context
- * @wait_desc_list: used to hold completed Tx packets
*
* IPA context specific to the system-bam pipes a.k.a LAN IN/OUT and WAN
*/
@@ -380,7 +381,8 @@
spinlock_t spinlock;
struct sps_register_event event;
struct ipa_ep_context *ep;
- struct list_head wait_desc_list;
+ atomic_t curr_polling_state;
+ struct delayed_work switch_to_intr_work;
};
/**
@@ -477,6 +479,14 @@
* @is_sys_mem: flag indicating if NAT memory is sys memory
* @is_dev_init: flag indicating if NAT device is initialized
* @lock: NAT memory mutex
+ * @nat_base_address: nat table virtual address
+ * @ipv4_rules_addr: base nat table address
+ * @ipv4_expansion_rules_addr: expansion table address
+ * @index_table_addr: index table address
+ * @index_table_expansion_addr: index expansion table address
+ * @size_base_tables: base table size
+ * @size_expansion_tables: expansion table size
+ * @public_ip_addr: public IP address of the NAT table
*/
struct ipa_nat_mem {
struct class *class;
@@ -490,6 +500,14 @@
bool is_sys_mem;
bool is_dev_init;
struct mutex lock;
+ void *nat_base_address;
+ char *ipv4_rules_addr;
+ char *ipv4_expansion_rules_addr;
+ char *index_table_addr;
+ char *index_table_expansion_addr;
+ u32 size_base_tables;
+ u32 size_expansion_tables;
+ u32 public_ip_addr;
};
/**
@@ -528,9 +546,16 @@
u32 bridged_pkts[IPA_BRIDGE_TYPE_MAX][IPA_BRIDGE_DIR_MAX];
u32 rx_repl_repost;
u32 x_intr_repost;
+ u32 x_intr_repost_tx;
u32 rx_q_len;
u32 msg_w[IPA_EVENT_MAX];
u32 msg_r[IPA_EVENT_MAX];
+ u32 a2_power_on_reqs_in;
+ u32 a2_power_on_reqs_out;
+ u32 a2_power_off_reqs_in;
+ u32 a2_power_off_reqs_out;
+ u32 a2_power_modem_acks;
+ u32 a2_power_apps_acks;
};
/**
@@ -585,7 +610,7 @@
* @ip6_flt_tbl_lcl: where ip6 flt tables reside 1-local; 0-system
* @empty_rt_tbl_mem: empty routing tables memory
* @pipe_mem_pool: pipe memory pool
- * @one_kb_no_straddle_pool: one kb no straddle pool
+ * @dma_pool: special purpose DMA pool
* @ipa_hw_type: type of IPA HW type (e.g. IPA 1.0, IPA 1.1 etc')
* @ipa_hw_mode: mode of IPA HW mode (e.g. Normal, Virtual or over PCIe)
*
@@ -633,7 +658,6 @@
uint aggregation_type;
uint aggregation_byte_limit;
uint aggregation_time_limit;
- atomic_t curr_polling_state;
struct delayed_work poll_work;
bool hdr_tbl_lcl;
struct ipa_mem_buffer hdr_mem;
@@ -643,8 +667,9 @@
bool ip6_flt_tbl_lcl;
struct ipa_mem_buffer empty_rt_tbl_mem;
struct gen_pool *pipe_mem_pool;
- struct dma_pool *one_kb_no_straddle_pool;
- atomic_t ipa_active_clients;
+ struct dma_pool *dma_pool;
+ struct mutex ipa_active_clients_lock;
+ int ipa_active_clients;
u32 clnt_hdl_cmd;
u32 clnt_hdl_data_in;
u32 clnt_hdl_data_out;
@@ -658,6 +683,10 @@
enum ipa_hw_mode ipa_hw_mode;
/* featurize if memory footprint becomes a concern */
struct ipa_stats stats;
+ void *smem_pipe_mem;
+ /* store HOLB configuration for WLAN TX pipes */
+ u32 hol_en;
+ u32 hol_timer;
};
/**
@@ -742,7 +771,7 @@
struct a2_mux_pipe_connection *pipe_connect);
int ipa_get_a2_mux_bam_info(u32 *a2_bam_mem_base, u32 *a2_bam_mem_size,
u32 *a2_bam_irq);
-void rmnet_bridge_get_client_handles(u32 *producer_handle,
+void teth_bridge_get_client_handles(u32 *producer_handle,
u32 *consumer_handle);
int ipa_send_one(struct ipa_sys_context *sys, struct ipa_desc *desc,
bool in_atomic);
@@ -787,7 +816,10 @@
void ipa_cleanup_rx(void);
int ipa_cfg_filter(u32 disable);
void ipa_wq_write_done(struct work_struct *work);
-int ipa_handle_rx_core(bool process_all, bool in_poll_state);
+int ipa_handle_rx_core(struct ipa_sys_context *sys, bool process_all,
+ bool in_poll_state);
+int ipa_handle_tx_core(struct ipa_sys_context *sys, bool process_all,
+ bool in_poll_state);
int ipa_pipe_mem_init(u32 start_ofst, u32 size);
int ipa_pipe_mem_alloc(u32 *ofst, u32 size);
int ipa_pipe_mem_free(u32 ofst, u32 size);
@@ -795,6 +827,8 @@
struct ipa_context *ipa_get_ctx(void);
void ipa_enable_clks(void);
void ipa_disable_clks(void);
+void ipa_inc_client_enable_clks(void);
+void ipa_dec_client_disable_clks(void);
int __ipa_del_rt_rule(u32 rule_hdl);
int __ipa_del_hdr(u32 hdr_hdl);
int __ipa_release_hdr(u32 hdr_hdl);
diff --git a/drivers/platform/msm/ipa/ipa_intf.c b/drivers/platform/msm/ipa/ipa_intf.c
index 0f41d2c..5ee1929 100644
--- a/drivers/platform/msm/ipa/ipa_intf.c
+++ b/drivers/platform/msm/ipa/ipa_intf.c
@@ -432,6 +432,7 @@
}
IPA_STATS_INC_CNT(
ipa_ctx->stats.msg_r[msg->meta.msg_type]);
+ kfree(msg);
}
ret = -EAGAIN;
diff --git a/drivers/platform/msm/ipa/ipa_nat.c b/drivers/platform/msm/ipa/ipa_nat.c
index befa2cf..e2c344f 100644
--- a/drivers/platform/msm/ipa/ipa_nat.c
+++ b/drivers/platform/msm/ipa/ipa_nat.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -75,6 +75,7 @@
IPAERR("unable to map memory. Err:%d\n", result);
goto bail;
}
+ ipa_ctx->nat_mem.nat_base_address = nat_ctx->vaddr;
} else {
IPADBG("Mapping shared(local) memory\n");
IPADBG("map sz=0x%lx\n", vsize);
@@ -88,7 +89,7 @@
result = -EAGAIN;
goto bail;
}
-
+ ipa_ctx->nat_mem.nat_base_address = (void *)vma->vm_start;
}
nat_ctx->is_mapped = true;
vma->vm_ops = &ipa_nat_remap_vm_ops;
@@ -299,6 +300,35 @@
goto free_cmd;
}
+ ipa_ctx->nat_mem.public_ip_addr = init->ip_addr;
+	IPADBG("Table ip address:0x%x\n", ipa_ctx->nat_mem.public_ip_addr);
+
+ ipa_ctx->nat_mem.ipv4_rules_addr =
+ (char *)ipa_ctx->nat_mem.nat_base_address + init->ipv4_rules_offset;
+ IPADBG("ipv4_rules_addr: 0x%p\n",
+ ipa_ctx->nat_mem.ipv4_rules_addr);
+
+ ipa_ctx->nat_mem.ipv4_expansion_rules_addr =
+ (char *)ipa_ctx->nat_mem.nat_base_address + init->expn_rules_offset;
+ IPADBG("ipv4_expansion_rules_addr: 0x%p\n",
+ ipa_ctx->nat_mem.ipv4_expansion_rules_addr);
+
+ ipa_ctx->nat_mem.index_table_addr =
+ (char *)ipa_ctx->nat_mem.nat_base_address + init->index_offset;
+ IPADBG("index_table_addr: 0x%p\n",
+ ipa_ctx->nat_mem.index_table_addr);
+
+ ipa_ctx->nat_mem.index_table_expansion_addr =
+ (char *)ipa_ctx->nat_mem.nat_base_address + init->index_expn_offset;
+ IPADBG("index_table_expansion_addr: 0x%p\n",
+ ipa_ctx->nat_mem.index_table_expansion_addr);
+
+ IPADBG("size_base_tables: %d\n", init->table_entries);
+ ipa_ctx->nat_mem.size_base_tables = init->table_entries;
+
+ IPADBG("size_expansion_tables: %d\n", init->expn_table_entries);
+ ipa_ctx->nat_mem.size_expansion_tables = init->expn_table_entries;
+
IPADBG("return\n");
result = 0;
free_cmd:
diff --git a/drivers/platform/msm/ipa/ipa_rm.c b/drivers/platform/msm/ipa/ipa_rm.c
index 1fdd300..88a49c4 100644
--- a/drivers/platform/msm/ipa/ipa_rm.c
+++ b/drivers/platform/msm/ipa/ipa_rm.c
@@ -21,6 +21,7 @@
struct ipa_rm_context_type {
struct ipa_rm_dep_graph *dep_graph;
struct workqueue_struct *ipa_rm_wq;
+ rwlock_t lock;
};
static struct ipa_rm_context_type *ipa_rm_ctx;
@@ -41,10 +42,9 @@
struct ipa_rm_resource *resource;
int result;
- if (!create_params) {
- result = -EINVAL;
- goto bail;
- }
+ if (!create_params)
+ return -EINVAL;
+ write_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
create_params->name,
&resource) == 0) {
@@ -59,11 +59,51 @@
if (result)
ipa_rm_resource_delete(resource);
bail:
+ write_unlock(&ipa_rm_ctx->lock);
return result;
}
EXPORT_SYMBOL(ipa_rm_create_resource);
/**
+ * ipa_rm_delete_resource() - delete resource
+ * @resource_name: name of resource to be deleted
+ *
+ * Returns: 0 on success, negative on failure
+ *
+ * This function is called by IPA RM client to delete client's resources.
+ *
+ */
+int ipa_rm_delete_resource(enum ipa_rm_resource_name resource_name)
+{
+ struct ipa_rm_resource *resource;
+ int result;
+
+ IPADBG("IPA RM ::ipa_rm_delete_resource num[%d] ENTER\n",
+ resource_name);
+ write_lock(&ipa_rm_ctx->lock);
+ if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
+ resource_name,
+ &resource) != 0) {
+		IPADBG("ipa_rm_delete_resource params are bad\n");
+ result = -EINVAL;
+ goto bail;
+ }
+ result = ipa_rm_resource_delete(resource);
+ if (result) {
+ IPADBG("error in ipa_rm_resource_delete\n");
+ goto bail;
+ }
+ result = ipa_rm_dep_graph_remove(ipa_rm_ctx->dep_graph,
+ resource_name);
+ IPADBG("IPA RM ::ipa_rm_delete_resource [%d] SUCCESS\n",
+ resource_name);
+bail:
+ write_unlock(&ipa_rm_ctx->lock);
+ return result;
+}
+EXPORT_SYMBOL(ipa_rm_delete_resource);
+
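With deletion now exported alongside creation, a client can tear a resource down symmetrically. A hedged usage sketch follows (hypothetical client code; the zero-initialized create params and the choice of IPA_RM_RESOURCE_BRIDGE_PROD are assumptions for illustration only).

/* Hedged sketch, not driver code. */
static int example_rm_lifecycle(void)
{
	struct ipa_rm_create_params params = { 0 };
	int ret;

	params.name = IPA_RM_RESOURCE_BRIDGE_PROD;	/* assumed resource name */

	ret = ipa_rm_create_resource(&params);
	if (ret)
		return ret;

	/* ... request/release the resource, add/delete dependencies ... */

	return ipa_rm_delete_resource(params.name);
}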
+/**
* ipa_rm_add_dependency() - create dependency
* between 2 resources
* @resource_name: name of dependent resource
@@ -77,13 +117,19 @@
int ipa_rm_add_dependency(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name)
{
- return ipa_rm_dep_graph_add_dependency(
- ipa_rm_ctx->dep_graph,
- resource_name,
- depends_on_name);
+ int result;
+
+ read_lock(&ipa_rm_ctx->lock);
+ result = ipa_rm_dep_graph_add_dependency(
+ ipa_rm_ctx->dep_graph,
+ resource_name,
+ depends_on_name);
+ read_unlock(&ipa_rm_ctx->lock);
+ return result;
}
EXPORT_SYMBOL(ipa_rm_add_dependency);
+
/**
* ipa_rm_delete_dependency() - create dependency
* between 2 resources
@@ -98,10 +144,14 @@
int ipa_rm_delete_dependency(enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name)
{
- return ipa_rm_dep_graph_delete_dependency(
- ipa_rm_ctx->dep_graph,
- resource_name,
- depends_on_name);
+ int result;
+ read_lock(&ipa_rm_ctx->lock);
+ result = ipa_rm_dep_graph_delete_dependency(
+ ipa_rm_ctx->dep_graph,
+ resource_name,
+ depends_on_name);
+ read_unlock(&ipa_rm_ctx->lock);
+ return result;
}
EXPORT_SYMBOL(ipa_rm_delete_dependency);
@@ -120,10 +170,9 @@
int result;
IPADBG("IPA RM ::ipa_rm_request_resource ENTER\n");
- if (!IPA_RM_RESORCE_IS_PROD(resource_name)) {
- result = -EINVAL;
- goto bail;
- }
+ if (!IPA_RM_RESORCE_IS_PROD(resource_name))
+ return -EINVAL;
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
resource_name,
&resource) != 0) {
@@ -136,6 +185,7 @@
bail:
IPADBG("IPA RM ::ipa_rm_request_resource EXIT [%d]\n", result);
+ read_unlock(&ipa_rm_ctx->lock);
return result;
}
EXPORT_SYMBOL(ipa_rm_request_resource);
@@ -155,10 +205,9 @@
int result;
IPADBG("IPA RM ::ipa_rm_release_resource ENTER\n");
- if (!IPA_RM_RESORCE_IS_PROD(resource_name)) {
- result = -EINVAL;
- goto bail;
- }
+ if (!IPA_RM_RESORCE_IS_PROD(resource_name))
+ return -EINVAL;
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
resource_name,
&resource) != 0) {
@@ -170,6 +219,7 @@
bail:
IPADBG("IPA RM ::ipa_rm_release_resource EXIT [%d]\n", result);
+ read_unlock(&ipa_rm_ctx->lock);
return result;
}
EXPORT_SYMBOL(ipa_rm_release_resource);
@@ -189,10 +239,10 @@
{
int result;
struct ipa_rm_resource *resource;
- if (!IPA_RM_RESORCE_IS_PROD(resource_name)) {
- result = -EINVAL;
- goto bail;
- }
+
+ if (!IPA_RM_RESORCE_IS_PROD(resource_name))
+ return -EINVAL;
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
resource_name,
&resource) != 0) {
@@ -203,6 +253,7 @@
(struct ipa_rm_resource_prod *)resource,
reg_params);
bail:
+ read_unlock(&ipa_rm_ctx->lock);
return result;
}
EXPORT_SYMBOL(ipa_rm_register);
@@ -222,10 +273,10 @@
{
int result;
struct ipa_rm_resource *resource;
- if (!IPA_RM_RESORCE_IS_PROD(resource_name)) {
- result = -EINVAL;
- goto bail;
- }
+
+ if (!IPA_RM_RESORCE_IS_PROD(resource_name))
+ return -EINVAL;
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
resource_name,
&resource) != 0) {
@@ -236,6 +287,7 @@
(struct ipa_rm_resource_prod *)resource,
reg_params);
bail:
+ read_unlock(&ipa_rm_ctx->lock);
return result;
}
EXPORT_SYMBOL(ipa_rm_deregister);
@@ -278,25 +330,32 @@
case IPA_RM_WQ_NOTIFY_PROD:
if (!IPA_RM_RESORCE_IS_PROD(ipa_rm_work->resource_name))
return;
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
ipa_rm_work->resource_name,
- &resource) != 0)
+					  &resource) != 0) {
+ read_unlock(&ipa_rm_ctx->lock);
return;
+ }
ipa_rm_resource_producer_notify_clients(
(struct ipa_rm_resource_prod *)resource,
ipa_rm_work->event);
-
+ read_unlock(&ipa_rm_ctx->lock);
break;
case IPA_RM_WQ_NOTIFY_CONS:
break;
case IPA_RM_WQ_RESOURCE_CB:
+ read_lock(&ipa_rm_ctx->lock);
if (ipa_rm_dep_graph_get_resource(ipa_rm_ctx->dep_graph,
ipa_rm_work->resource_name,
- &resource) != 0)
+					  &resource) != 0) {
+ read_unlock(&ipa_rm_ctx->lock);
return;
+ }
ipa_rm_resource_consumer_handle_cb(
(struct ipa_rm_resource_cons *)resource,
ipa_rm_work->event);
+ read_unlock(&ipa_rm_ctx->lock);
break;
default:
break;
@@ -351,6 +410,7 @@
result = ipa_rm_dep_graph_create(&(ipa_rm_ctx->dep_graph));
if (result)
goto graph_alloc_fail;
+ rwlock_init(&ipa_rm_ctx->lock);
IPADBG("IPA RM ipa_rm_initialize SUCCESS\n");
return 0;
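The locking added throughout this file follows one discipline: entry points that only look a resource up in the dependency graph take the reader side of the new rwlock, while create/delete take the writer side (which is why the per-graph lock is removed from ipa_rm_dependency_graph.c below). A generic sketch of that pattern, with a hypothetical table and helpers rather than the driver's graph code:

#include <linux/kernel.h>
#include <linux/spinlock.h>

static DEFINE_RWLOCK(graph_lock);
static void *graph_table[16];		/* stand-in for the dependency graph */

static void *graph_lookup(unsigned int name)
{
	void *node;

	read_lock(&graph_lock);		/* lookups may run concurrently */
	node = (name < ARRAY_SIZE(graph_table)) ? graph_table[name] : NULL;
	read_unlock(&graph_lock);
	return node;
}

static void graph_set(unsigned int name, void *node)
{
	write_lock(&graph_lock);	/* mutations are exclusive */
	if (name < ARRAY_SIZE(graph_table))
		graph_table[name] = node;
	write_unlock(&graph_lock);
}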
diff --git a/drivers/platform/msm/ipa/ipa_rm_dependency_graph.c b/drivers/platform/msm/ipa/ipa_rm_dependency_graph.c
index 6afab42..8144a42 100644
--- a/drivers/platform/msm/ipa/ipa_rm_dependency_graph.c
+++ b/drivers/platform/msm/ipa/ipa_rm_dependency_graph.c
@@ -39,7 +39,6 @@
result = -ENOMEM;
goto bail;
}
- rwlock_init(&((*dep_graph)->lock));
bail:
return result;
}
@@ -55,12 +54,10 @@
int resource_index;
if (!graph)
return;
- write_lock(&graph->lock);
for (resource_index = 0;
resource_index < IPA_RM_RESOURCE_MAX;
resource_index++)
kfree(graph->resource_table[resource_index]);
- write_unlock(&graph->lock);
memset(graph->resource_table, 0, sizeof(graph->resource_table));
}
@@ -88,9 +85,7 @@
result = -EINVAL;
goto bail;
}
- read_lock(&graph->lock);
*resource = graph->resource_table[resource_index];
- read_unlock(&graph->lock);
if (!*resource) {
result = -EINVAL;
goto bail;
@@ -112,6 +107,7 @@
{
int result = 0;
int resource_index;
+
if (!graph || !resource) {
result = -EINVAL;
goto bail;
@@ -121,14 +117,29 @@
result = -EINVAL;
goto bail;
}
- write_lock(&graph->lock);
graph->resource_table[resource_index] = resource;
- write_unlock(&graph->lock);
bail:
return result;
}
/**
+ * ipa_rm_dep_graph_remove() - removes resource from graph
+ * @graph: [in] dependency graph
+ * @resource: [in] resource to add
+ *
+ * Returns: 0 on success, negative on failure
+ */
+int ipa_rm_dep_graph_remove(struct ipa_rm_dep_graph *graph,
+ enum ipa_rm_resource_name resource_name)
+{
+ if (!graph)
+ return -EINVAL;
+
+ graph->resource_table[resource_name] = NULL;
+ return 0;
+}
+
+/**
* ipa_rm_dep_graph_add_dependency() - adds dependency between
* two nodes in graph
* @graph: [in] dependency graph
diff --git a/drivers/platform/msm/ipa/ipa_rm_dependency_graph.h b/drivers/platform/msm/ipa/ipa_rm_dependency_graph.h
index 19d9461..4126819 100644
--- a/drivers/platform/msm/ipa/ipa_rm_dependency_graph.h
+++ b/drivers/platform/msm/ipa/ipa_rm_dependency_graph.h
@@ -19,7 +19,6 @@
struct ipa_rm_dep_graph {
struct ipa_rm_resource *resource_table[IPA_RM_RESOURCE_MAX];
- rwlock_t lock;
};
int ipa_rm_dep_graph_get_resource(
@@ -34,6 +33,9 @@
int ipa_rm_dep_graph_add(struct ipa_rm_dep_graph *graph,
struct ipa_rm_resource *resource);
+int ipa_rm_dep_graph_remove(struct ipa_rm_dep_graph *graph,
+ enum ipa_rm_resource_name resource_name);
+
int ipa_rm_dep_graph_add_dependency(struct ipa_rm_dep_graph *graph,
enum ipa_rm_resource_name resource_name,
enum ipa_rm_resource_name depends_on_name);
diff --git a/drivers/platform/msm/ipa/ipa_rm_resource.c b/drivers/platform/msm/ipa/ipa_rm_resource.c
index 0a6771c..0fc4826 100644
--- a/drivers/platform/msm/ipa/ipa_rm_resource.c
+++ b/drivers/platform/msm/ipa/ipa_rm_resource.c
@@ -80,7 +80,8 @@
int result = 0;
int driver_result;
unsigned long flags;
- IPADBG("IPA RM ::ipa_rm_resource_consumer_request ENTER\n");
+ IPADBG("IPA RM ::%s name %d ENTER\n",
+ __func__, consumer->resource.name);
spin_lock_irqsave(&consumer->resource.state_lock, flags);
switch (consumer->resource.state) {
case IPA_RM_RELEASED:
@@ -125,7 +126,8 @@
int driver_result;
unsigned long flags;
enum ipa_rm_resource_state save_state;
- IPADBG("IPA RM ::ipa_rm_resource_consumer_release ENTER\n");
+ IPADBG("IPA RM ::%s name %d ENTER\n",
+ __func__, consumer->resource.name);
spin_lock_irqsave(&consumer->resource.state_lock, flags);
switch (consumer->resource.state) {
case IPA_RM_RELEASED:
@@ -328,19 +330,56 @@
* ipa_rm_resource_delete() - deletes resource
* @resource: [in] resource
* for resource initialization with IPA RM
+ *
+ * Returns: 0 on success, negative on failure
*/
-void ipa_rm_resource_delete(struct ipa_rm_resource *resource)
+int ipa_rm_resource_delete(struct ipa_rm_resource *resource)
{
- if (!resource)
- return;
- if (resource->peers_list)
- ipa_rm_peers_list_delete(resource->peers_list);
+ struct ipa_rm_resource *consumer, *producer;
+ int i, result = 0;
+
+ IPADBG("IPA RM: %s ENTER\n", __func__);
+
+ if (!resource) {
+ IPADBG("ipa_rm_resource_delete ENTER with invalid param\n");
+ return -EINVAL;
+ }
if (resource->type == IPA_RM_PRODUCER) {
+ if (resource->peers_list) {
+ for (i = IPA_RM_RESOURCE_PROD_MAX;
+ i < IPA_RM_RESOURCE_MAX;
+ ++i) {
+ consumer = ipa_rm_peers_list_get_resource(
+ i,
+ resource->peers_list);
+ if (consumer) {
+ ipa_rm_resource_delete_dependency(
+ resource,
+ consumer);
+ }
+ }
+ ipa_rm_peers_list_delete(resource->peers_list);
+ }
ipa_rm_resource_producer_delete(
(struct ipa_rm_resource_prod *) resource);
kfree((struct ipa_rm_resource_prod *) resource);
- } else
+ } else if (resource->type == IPA_RM_CONSUMER) {
+ if (resource->peers_list) {
+ for (i = 0; i < IPA_RM_RESOURCE_PROD_MAX; ++i) {
+ producer = ipa_rm_peers_list_get_resource(
+ i,
+ resource->peers_list);
+ if (producer)
+ ipa_rm_resource_delete_dependency(
+ producer,
+ resource);
+ }
+ ipa_rm_peers_list_delete(resource->peers_list);
+ }
kfree((struct ipa_rm_resource_cons *) resource);
+ }
+ IPADBG("ipa_rm_resource_delete SUCCESS\n");
+ return result;
}
/**
@@ -363,6 +402,8 @@
result = -EPERM;
goto bail;
}
+ IPADBG("IPA RM: %s name %d ENTER\n",
+ __func__, producer->resource.name);
read_lock(&producer->event_listeners_lock);
list_for_each(pos, &(producer->event_listeners)) {
reg_info = list_entry(pos,
@@ -507,6 +548,10 @@
unsigned long flags;
if (!resource || !depends_on)
return -EINVAL;
+ IPADBG("IPA RM: %s from %d to %d ENTER\n",
+ __func__,
+ resource->name,
+ depends_on->name);
if (!ipa_rm_peers_list_check_dependency(resource->peers_list,
resource->name,
depends_on->peers_list,
@@ -535,8 +580,6 @@
goto bail;
}
spin_unlock_irqrestore(&resource->state_lock, flags);
- (void) ipa_rm_resource_consumer_release(
- (struct ipa_rm_resource_cons *)depends_on);
if (ipa_rm_peers_list_has_last_peer(resource->peers_list)) {
(void) ipa_rm_wq_send_cmd(IPA_RM_WQ_NOTIFY_PROD,
resource->name,
@@ -547,6 +590,12 @@
depends_on->name);
ipa_rm_peers_list_remove_peer(depends_on->peers_list,
resource->name);
+ (void) ipa_rm_resource_consumer_release(
+ (struct ipa_rm_resource_cons *)depends_on);
+ IPADBG("IPA RM: %s from %d to %d SUCCESS\n",
+ __func__,
+ resource->name,
+ depends_on->name);
bail:
return result;
}
@@ -624,7 +673,7 @@
if (producer->pending_request == 0)
producer->resource.state = IPA_RM_GRANTED;
spin_unlock_irqrestore(&producer->resource.state_lock, flags);
- return result;
+ goto bail;
unlock_and_bail:
spin_unlock_irqrestore(&producer->resource.state_lock, flags);
bail:
@@ -646,7 +695,9 @@
unsigned long flags;
struct ipa_rm_resource *consumer;
int consumer_result;
- IPADBG("IPA RM ::ipa_rm_resource_producer_release ENTER\n");
+ IPADBG("IPA RM: %s name %d ENTER\n",
+ __func__,
+ producer->resource.name);
if (ipa_rm_peers_list_is_empty(producer->resource.peers_list)) {
spin_lock_irqsave(&producer->resource.state_lock, flags);
producer->resource.state = IPA_RM_RELEASED;
diff --git a/drivers/platform/msm/ipa/ipa_rm_resource.h b/drivers/platform/msm/ipa/ipa_rm_resource.h
index b9c2e91..81ccc53 100644
--- a/drivers/platform/msm/ipa/ipa_rm_resource.h
+++ b/drivers/platform/msm/ipa/ipa_rm_resource.h
@@ -99,7 +99,7 @@
struct ipa_rm_create_params *create_params,
struct ipa_rm_resource **resource);
-void ipa_rm_resource_delete(struct ipa_rm_resource *resource);
+int ipa_rm_resource_delete(struct ipa_rm_resource *resource);
int ipa_rm_resource_producer_register(struct ipa_rm_resource_prod *producer,
struct ipa_rm_register_params *reg_params);
diff --git a/drivers/platform/msm/ipa/ipa_rt.c b/drivers/platform/msm/ipa/ipa_rt.c
index 1d88280..6430c07 100644
--- a/drivers/platform/msm/ipa/ipa_rt.c
+++ b/drivers/platform/msm/ipa/ipa_rt.c
@@ -305,6 +305,7 @@
rt_tbl_mem.base, rt_tbl_mem.phys_base);
proc_err:
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
+ mem->base = NULL;
error:
return -EPERM;
}
@@ -378,7 +379,7 @@
if (mem->size > avail) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size, avail);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
if (ip == IPA_IP_v4) {
@@ -413,7 +414,7 @@
return 0;
fail_send_cmd:
- if (mem->phys_base)
+ if (mem->base)
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
fail_hw_tbl_gen:
kfree(cmd);
@@ -505,6 +506,8 @@
IPAERR("failed to add to tree\n");
WARN_ON(1);
}
+ } else {
+ kmem_cache_free(ipa_ctx->tree_node_cache, node);
}
return entry;
diff --git a/drivers/platform/msm/ipa/rmnet_bridge.c b/drivers/platform/msm/ipa/rmnet_bridge.c
deleted file mode 100644
index 696b363..0000000
--- a/drivers/platform/msm/ipa/rmnet_bridge.c
+++ /dev/null
@@ -1,141 +0,0 @@
-/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/export.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <mach/bam_dmux.h>
-#include <mach/ipa.h>
-#include <mach/sps.h>
-
-static struct rmnet_bridge_cb_type {
- u32 producer_handle;
- u32 consumer_handle;
- u32 ipa_producer_handle;
- u32 ipa_consumer_handle;
- bool is_connected;
-} rmnet_bridge_cb;
-
-/**
-* rmnet_bridge_init() - Initialize RmNet bridge module
-*
-* Return codes:
-* 0: success
-*/
-int rmnet_bridge_init(void)
-{
- memset(&rmnet_bridge_cb, 0, sizeof(struct rmnet_bridge_cb_type));
-
- return 0;
-}
-EXPORT_SYMBOL(rmnet_bridge_init);
-
-/**
-* rmnet_bridge_disconnect() - Disconnect RmNet bridge module
-*
-* Return codes:
-* 0: success
-* -EINVAL: invalid parameters
-*/
-int rmnet_bridge_disconnect(void)
-{
- int ret = 0;
- if (false == rmnet_bridge_cb.is_connected) {
- pr_err("%s: trying to disconnect already disconnected RmNet bridge\n",
- __func__);
- goto bail;
- }
-
- rmnet_bridge_cb.is_connected = false;
-
- ret = ipa_bridge_teardown(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_consumer_handle);
- ret = ipa_bridge_teardown(IPA_BRIDGE_DIR_UL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_producer_handle);
-bail:
- return ret;
-}
-EXPORT_SYMBOL(rmnet_bridge_disconnect);
-
-/**
-* rmnet_bridge_connect() - Connect RmNet bridge module
-* @producer_hdl: IPA producer handle
-* @consumer_hdl: IPA consumer handle
-* @wwan_logical_channel_id: WWAN logical channel ID
-*
-* Return codes:
-* 0: success
-* -EINVAL: invalid parameters
-*/
-int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id)
-{
- struct ipa_sys_connect_params props;
- int ret = 0;
-
- if (true == rmnet_bridge_cb.is_connected) {
- ret = 0;
- pr_err("%s: trying to connect already connected RmNet bridge\n",
- __func__);
- goto bail;
- }
-
- rmnet_bridge_cb.consumer_handle = consumer_hdl;
- rmnet_bridge_cb.producer_handle = producer_hdl;
- rmnet_bridge_cb.is_connected = true;
-
- memset(&props, 0, sizeof(props));
- props.ipa_ep_cfg.mode.mode = IPA_DMA;
- props.ipa_ep_cfg.mode.dst = IPA_CLIENT_USB_CONS;
- props.client = IPA_CLIENT_A2_TETHERED_PROD;
- props.desc_fifo_sz = 0x800;
- /* setup notification callback if needed */
-
- ret = ipa_bridge_setup(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- &props, &rmnet_bridge_cb.ipa_consumer_handle);
- if (ret) {
- pr_err("%s: IPA DL bridge setup failure\n", __func__);
- goto bail_dl;
- }
-
- memset(&props, 0, sizeof(props));
- props.client = IPA_CLIENT_A2_TETHERED_CONS;
- props.desc_fifo_sz = 0x800;
- /* setup notification callback if needed */
-
- ret = ipa_bridge_setup(IPA_BRIDGE_DIR_UL, IPA_BRIDGE_TYPE_TETHERED,
- &props, &rmnet_bridge_cb.ipa_producer_handle);
- if (ret) {
- pr_err("%s: IPA UL bridge setup failure\n", __func__);
- goto bail_ul;
- }
- return 0;
-bail_ul:
- ipa_bridge_teardown(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_consumer_handle);
-bail_dl:
- rmnet_bridge_cb.is_connected = false;
-bail:
- return ret;
-}
-EXPORT_SYMBOL(rmnet_bridge_connect);
-
-void rmnet_bridge_get_client_handles(u32 *producer_handle,
- u32 *consumer_handle)
-{
- if (producer_handle == NULL || consumer_handle == NULL)
- return;
-
- *producer_handle = rmnet_bridge_cb.producer_handle;
- *consumer_handle = rmnet_bridge_cb.consumer_handle;
-}
diff --git a/drivers/platform/msm/ipa/teth_bridge.c b/drivers/platform/msm/ipa/teth_bridge.c
index 5b26e41..9062ab8 100644
--- a/drivers/platform/msm/ipa/teth_bridge.c
+++ b/drivers/platform/msm/ipa/teth_bridge.c
@@ -58,6 +58,24 @@
#define TETH_AGGR_MAX_DATAGRAMS_DEFAULT 16
#define TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT (8*1024)
+#define TETH_MTU_BYTE 1500
+
+#define TETH_INACTIVITY_TIME_MSEC (1000)
+
+#define TETH_WORKQUEUE_NAME "tethering_bridge_wq"
+
+#define TETH_TOTAL_HDR_ENTRIES 6
+#define TETH_TOTAL_RT_ENTRIES_IP 3
+#define TETH_TOTAL_FLT_ENTRIES_IP 2
+#define TETH_IP_FAMILIES 2
+
+/**
+ * struct mac_addresses_type - store host PC and device MAC addresses
+ * @host_pc_mac_addr: MAC address of the host PC
+ * @host_pc_mac_addr_known: is the MAC address of the host PC known?
+ * @device_mac_addr: MAC address of the device
+ * @device_mac_addr_known: is the MAC address of the device known?
+ */
struct mac_addresses_type {
u8 host_pc_mac_addr[ETH_ALEN];
bool host_pc_mac_addr_known;
@@ -65,11 +83,59 @@
bool device_mac_addr_known;
};
+/**
+ * struct stats - driver statistics, viewable using debugfs
+ * @a2_to_usb_num_sw_tx_packets: number of packets bridged from A2 to USB using
+ * the SW bridge
+ * @usb_to_a2_num_sw_tx_packets: number of packets bridged from USB to A2 using
+ * the SW bridge
+ * @num_sw_tx_packets_during_resource_wakeup: number of packets bridged during
+ * a resource wakeup period; there is special treatment for this kind of
+ * packet
+ */
struct stats {
u64 a2_to_usb_num_sw_tx_packets;
u64 usb_to_a2_num_sw_tx_packets;
+ u64 num_sw_tx_packets_during_resource_wakeup;
};
+/**
+ * struct teth_bridge_ctx - Tethering bridge driver context information
+ * @class: kernel class pointer
+ * @dev_num: kernel device number
+ * @dev: kernel device struct pointer
+ * @cdev: kernel character device struct
+ * @usb_ipa_pipe_hdl: USB to IPA pipe handle
+ * @ipa_usb_pipe_hdl: IPA to USB pipe handle
+ * @a2_ipa_pipe_hdl: A2 to IPA pipe handle
+ * @ipa_a2_pipe_hdl: IPA to A2 pipe handle
+ * @is_connected: is the tethered bridge connected?
+ * @link_protocol: IP / Ethernet
+ * @mac_addresses: struct which holds the host PC and device MAC addresses,
+ * relevant in Ethernet mode only
+ * @is_hw_bridge_complete: is the HW bridge set up?
+ * @aggr_params: aggregation parameters
+ * @aggr_params_known: are the aggregation parameters known?
+ * @tethering_mode: Rmnet / MBIM
+ * @is_bridge_prod_up: completion object signaled when the bridge producer
+ * finished its resource request procedure
+ * @is_bridge_prod_down: completion object signaled when the bridge producer
+ * finished its resource release procedure
+ * @comp_hw_bridge_work: used for setting up the HW bridge using a workqueue
+ * @comp_hw_bridge_in_progress: true when the HW bridge setup is in progress
+ * @aggr_caps: aggregation capabilities
+ * @stats: statistics, how many packets were transmitted using the SW bridge
+ * @teth_wq: dedicated workqueue, used for setting up the HW bridge and for
+ * sending packets using the SW bridge when the system is waking up from power
+ * collapse
+ * @a2_ipa_hdr_len: A2 to IPA header length, used to configure the A2 endpoint
+ * for header removal
+ * @hdr_del: array to store the headers handles in order to delete them later
+ * @routing_del: array of routing rules handles, one array for IPv4 and one for
+ * IPv6
+ * @filtering_del: array of filtering rules handles, one array for IPv4 and one
+ * for IPv6
+ */
struct teth_bridge_ctx {
struct class *class;
dev_t dev_num;
@@ -92,15 +158,45 @@
bool comp_hw_bridge_in_progress;
struct teth_aggr_capabilities *aggr_caps;
struct stats stats;
+ struct workqueue_struct *teth_wq;
+ u16 a2_ipa_hdr_len;
+ struct ipa_ioc_del_hdr *hdr_del;
+ struct ipa_ioc_del_rt_rule *routing_del[TETH_IP_FAMILIES];
+ struct ipa_ioc_del_flt_rule *filtering_del[TETH_IP_FAMILIES];
+};
+static struct teth_bridge_ctx *teth_ctx;
+
+enum teth_packet_direction {
+ TETH_USB_TO_A2,
+ TETH_A2_TO_USB,
};
-static struct teth_bridge_ctx *teth_ctx;
+/**
+ * struct teth_work - wrapper for an skb which is sent using a workqueue
+ * @work: used by the workqueue
+ * @skb: pointer to the skb to be sent
+ * @dir: direction of send, A2 to USB or USB to A2
+ */
+struct teth_work {
+ struct work_struct work;
+ struct sk_buff *skb;
+ enum teth_packet_direction dir;
+};
#ifdef CONFIG_DEBUG_FS
#define TETH_MAX_MSG_LEN 512
static char dbg_buff[TETH_MAX_MSG_LEN];
#endif
+/**
+ * add_eth_hdrs() - add Ethernet headers to IPA
+ * @hdr_name_ipv4: header name for IPv4
+ * @hdr_name_ipv6: header name for IPv6
+ * @src_mac_addr: source MAC address
+ * @dst_mac_addr: destination MAC address
+ *
+ * This function is called only when link protocol is Ethernet
+ */
static int add_eth_hdrs(char *hdr_name_ipv4, char *hdr_name_ipv6,
u8 *src_mac_addr, u8 *dst_mac_addr)
{
@@ -108,6 +204,7 @@
struct ipa_ioc_add_hdr *hdrs;
struct ethhdr hdr_ipv4;
struct ethhdr hdr_ipv6;
+ int idx1;
TETH_DBG_FUNC_ENTRY();
memcpy(hdr_ipv4.h_source, src_mac_addr, ETH_ALEN);
@@ -142,6 +239,13 @@
res = ipa_add_hdr(hdrs);
if (res || hdrs->hdr[0].status || hdrs->hdr[1].status)
TETH_ERR("Header insertion failed\n");
+
+ /* Save the headers handles in order to delete them later */
+ for (idx1 = 0; idx1 < hdrs->num_hdrs; idx1++) {
+ int idx2 = teth_ctx->hdr_del->num_hdls++;
+ teth_ctx->hdr_del->hdl[idx2].hdl = hdrs->hdr[idx1].hdr_hdl;
+ }
+
kfree(hdrs);
TETH_DBG_FUNC_EXIT();
@@ -167,6 +271,7 @@
}
hdr_cfg.hdr_len = a2_ipa_hdr_len;
+ teth_ctx->a2_ipa_hdr_len = a2_ipa_hdr_len;
res = ipa_cfg_ep_hdr(teth_ctx->a2_ipa_pipe_hdl, &hdr_cfg);
if (res) {
TETH_ERR("Header removal config for A2->IPA pipe failed\n");
@@ -198,6 +303,7 @@
int res;
struct ipa_ioc_add_hdr *mbim_hdr;
u8 mbim_stream_id = 0;
+ int idx;
TETH_DBG_FUNC_ENTRY();
mbim_hdr = kzalloc(sizeof(struct ipa_ioc_add_hdr) +
@@ -221,12 +327,23 @@
} else {
TETH_DBG("Added MBIM stream ID header\n");
}
+
+ /* Save the header handle in order to delete it later */
+ idx = teth_ctx->hdr_del->num_hdls++;
+ teth_ctx->hdr_del->hdl[idx].hdl = mbim_hdr->hdr[0].hdr_hdl;
+
kfree(mbim_hdr);
TETH_DBG_FUNC_EXIT();
return res;
}
+/**
+ * configure_ipa_header_block() - adds headers and configures endpoint registers
+ *
+ * - For IP link protocol and MBIM aggregation, configure MBIM header
+ * - For Ethernet link protocol, configure Ethernet headers
+ */
static int configure_ipa_header_block(void)
{
int res;
@@ -283,14 +400,7 @@
TETH_ERR("Configuration of header removal/insertion failed\n");
goto bail;
}
-
- res = ipa_commit_hdr();
- if (res) {
- TETH_ERR("Failed committing headers\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -304,6 +414,7 @@
struct ipa_ioc_add_rt_rule *rt_rule;
struct ipa_ioc_get_hdr hdr_info;
int res;
+ int idx;
TETH_DBG_FUNC_ENTRY();
/* Get the header handle */
@@ -330,6 +441,12 @@
res = ipa_add_rt_rule(rt_rule);
if (res || rt_rule->rules[0].status)
TETH_ERR("Failed adding routing rule\n");
+
+ /* Save the routing rule handle in order to delete it later */
+ idx = teth_ctx->routing_del[ip_address_family]->num_hdls++;
+ teth_ctx->routing_del[ip_address_family]->hdl[idx].hdl =
+ rt_rule->rules[0].rt_rule_hdl;
+
kfree(rt_rule);
TETH_DBG_FUNC_EXIT();
@@ -370,6 +487,14 @@
return res;
}
+/**
+ * configure_ipa_routing_block() - Configure the IPA routing block
+ *
+ * This function configures IPA for:
+ * - Route all packets from USB to A2
+ * - Route all packets from A2 to USB
+ * - Use the correct headers in Ethernet or MBIM cases
+ */
static int configure_ipa_routing_block(void)
{
int res;
@@ -425,20 +550,7 @@
TETH_ERR("A2 to USB routing block configuration failed\n");
goto bail;
}
-
- /* Commit all the changes to HW in one shot */
- res = ipa_commit_rt(IPA_IP_v4);
- if (res) {
- TETH_ERR("Failed commiting IPv4 routing tables\n");
- goto bail;
- }
- res = ipa_commit_rt(IPA_IP_v6);
- if (res) {
- TETH_ERR("Failed commiting IPv6 routing tables\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -450,6 +562,7 @@
struct ipa_ioc_add_flt_rule *flt_tbl;
struct ipa_ioc_get_rt_tbl rt_tbl_info;
int res;
+ int idx;
TETH_DBG_FUNC_ENTRY();
/* Get the needed routing table handle */
@@ -480,6 +593,12 @@
res = ipa_add_flt_rule(flt_tbl);
if (res || flt_tbl->rules[0].status)
TETH_ERR("Failed adding filtering table\n");
+
+ /* Save the filtering rule handle in order to delete it later */
+ idx = teth_ctx->filtering_del[ip_address_family]->num_hdls++;
+ teth_ctx->filtering_del[ip_address_family]->hdl[idx].hdl =
+ flt_tbl->rules[0].flt_rule_hdl;
+
kfree(flt_tbl);
TETH_DBG_FUNC_EXIT();
@@ -511,6 +630,13 @@
return res;
}
+/**
+ * configure_ipa_filtering_block() - Configures IPA filtering block
+ *
+ * This function configures IPA for:
+ * - Filter all traffic coming from USB to A2 pointing routing table
+ * - Filter all traffic coming from A2 to USB pointing routing table
+ */
static int configure_ipa_filtering_block(void)
{
int res;
@@ -533,20 +659,7 @@
TETH_ERR("A2_PROD filtering configuration failed\n");
goto bail;
}
-
- /* Commit all the changes to HW in one shot */
- res = ipa_commit_flt(IPA_IP_v4);
- if (res) {
- TETH_ERR("Failed commiting IPv4 filtering tables\n");
- goto bail;
- }
- res = ipa_commit_flt(IPA_IP_v6);
- if (res) {
- TETH_ERR("Failed commiting IPv6 filtering tables\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -578,8 +691,15 @@
return -EFAULT;
}
+ /*
+ * Due to a HW 'feature', the maximal aggregated packet size may be the
+ * requested aggr_byte_limit plus the MTU. Therefore, the MTU is
+ * subtracted from the requested aggr_byte_limit so that the requested
+ * byte limit is honored.
+ */
ipa_aggr_params->aggr_byte_limit =
- teth_aggr_params->max_transfer_size_byte / 1024;
+ (teth_aggr_params->max_transfer_size_byte - TETH_MTU_BYTE) /
+ 1024;
ipa_aggr_params->aggr_time_limit = TETH_DEFAULT_AGGR_TIME_LIMIT;
TETH_DBG_FUNC_EXIT();
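A worked example of the adjustment above, using assumed numbers (not taken from the patch):

/*
 * Assumed example: the host requests 16 KB aggregated transfers and the MTU
 * is the TETH_MTU_BYTE default of 1500.
 *
 *   aggr_byte_limit = (16384 - 1500) / 1024 = 14          (units of 1 KB)
 *   worst case on the wire = 14 * 1024 + 1500 = 15836     <= 16384 requested
 *
 * Without the subtraction the limit would be 16 KB, and one trailing MTU-sized
 * packet could push the aggregate past the requested transfer size.
 */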
@@ -592,7 +712,6 @@
u32 pipe_hdl)
{
struct ipa_ep_cfg_aggr agg_params;
- struct ipa_ep_cfg_hdr hdr_params;
int res;
TETH_DBG_FUNC_ENTRY();
@@ -609,18 +728,7 @@
TETH_ERR("ipa_cfg_ep_aggr() failed\n");
goto bail;
}
-
- if (!client_is_prod) {
- memset(&hdr_params, 0, sizeof(hdr_params));
- hdr_params.hdr_len = 1;
- res = ipa_cfg_ep_hdr(pipe_hdl, &hdr_params);
- if (res) {
- TETH_ERR("ipa_cfg_ep_hdr() failed\n");
- goto bail;
- }
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -645,12 +753,30 @@
}
}
+/**
+ * teth_set_aggregation() - set aggregation parameters to IPA
+ *
+ * The parameters to this function are passed in the context variable teth_ctx.
+ */
static int teth_set_aggregation(void)
{
int res;
char aggr_prot_str[20];
TETH_DBG_FUNC_ENTRY();
+ if (!teth_ctx->aggr_params_known) {
+ TETH_ERR("Aggregation parameters unknown.\n");
+ return -EINVAL;
+ }
+
+	/*
+	 * Return 0 in case the pipe handles are 0 because the aggregation
+	 * params will be set later.
+	 */
+	if ((teth_ctx->usb_ipa_pipe_hdl == 0) ||
+	    (teth_ctx->ipa_usb_pipe_hdl == 0))
+		return 0;
+
if (teth_ctx->aggr_params.ul.aggr_prot == TETH_AGGR_PROTOCOL_MBIM ||
teth_ctx->aggr_params.dl.aggr_prot == TETH_AGGR_PROTOCOL_MBIM) {
res = ipa_set_aggr_mode(IPA_MBIM);
@@ -696,12 +822,38 @@
return res;
}
+/**
+ * teth_request_resource() - wrapper function to
+ * ipa_rm_inactivity_timer_request_resource()
+ *
+ * - initialize the is_bridge_prod_up completion object
+ * - request the resource
+ * - error handling
+ */
+static int teth_request_resource(void)
+{
+ int res;
+
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_up);
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS)
+ wait_for_completion(&teth_ctx->is_bridge_prod_up);
+ else
+ return res;
+ }
+
+ return 0;
+}
+
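Both complete_hw_bridge() and teth_send_skb_work() below bracket their work between this wrapper and a matching release of IPA_RM_RESOURCE_BRIDGE_PROD. A hedged sketch of that pairing; do_bridged_work() is hypothetical and not part of the patch.

/* Hedged sketch, not driver code: paths that need the bridge producer awake
 * request it, do their work, then release it so the inactivity timer can let
 * the resource power down again. */
static int do_bridged_work(void (*work_fn)(void))
{
	int res;

	res = teth_request_resource();	/* may block until the grant completes */
	if (res)
		return res;

	work_fn();

	ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
	return 0;
}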
+/**
+ * complete_hw_bridge() - setup the HW bridge from USB to A2 and back through
+ * IPA
+ */
static void complete_hw_bridge(struct work_struct *work)
{
int res;
- static DEFINE_MUTEX(f_lock);
-
- mutex_lock(&f_lock);
TETH_DBG_FUNC_ENTRY();
TETH_DBG("Completing HW bridge in %s mode\n",
@@ -709,20 +861,15 @@
"ETHERNET" :
"IP");
- res = teth_set_aggregation();
+ res = teth_request_resource();
if (res) {
- TETH_ERR("Failed setting aggregation params\n");
+ TETH_ERR("request_resource() failed.\n");
goto bail;
}
- /*
- * Reset the Header, Routing and Filtering blocks.
- * Resetting the Header block will also reset the other blocks.
- * This reset is not comitted to HW.
- */
- res = ipa_reset_hdr();
+ res = teth_set_aggregation();
if (res) {
- TETH_ERR("Failed resetting IPA\n");
+ TETH_ERR("Failed setting aggregation params\n");
goto bail;
}
@@ -744,10 +891,20 @@
goto bail;
}
+ /*
+ * Commit all the data to HW, including header, routing and filtering
+ * blocks, IPv4 and IPv6
+ */
+ res = ipa_commit_hdr();
+ if (res) {
+ TETH_ERR("Failed committing headers / routing / filtering.\n");
+ goto bail;
+ }
+
teth_ctx->is_hw_bridge_complete = true;
- teth_ctx->comp_hw_bridge_in_progress = false;
bail:
- mutex_unlock(&f_lock);
+ teth_ctx->comp_hw_bridge_in_progress = false;
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
TETH_DBG_FUNC_EXIT();
return;
@@ -762,6 +919,20 @@
mac_addr[4], mac_addr[5]);
}
+/**
+ * check_to_complete_hw_bridge() - can the HW bridge be set up?
+ * @param skb: pointer to socket buffer
+ * @param my_mac_addr: pointer to write 'my' extracted MAC address to
+ * @param my_mac_addr_known: pointer to update whether 'my' extracted MAC
+ * address is known
+ * @param peer_mac_addr_known: pointer to update whether the 'peer' extracted
+ * MAC address is known
+ *
+ * This function is used by both A2 and USB callback functions, therefore the
+ * meaning of 'my' and 'peer' changes according to the context.
+ * Extracts the MAC address from the packet when the link protocol is
+ * Ethernet, and sets up the HW bridge once all the conditions are met.
+ */
static void check_to_complete_hw_bridge(struct sk_buff *skb,
u8 *my_mac_addr,
bool *my_mac_addr_known,
@@ -787,10 +958,114 @@
(teth_ctx->aggr_params_known)) {
INIT_WORK(&teth_ctx->comp_hw_bridge_work, complete_hw_bridge);
teth_ctx->comp_hw_bridge_in_progress = true;
- schedule_work(&teth_ctx->comp_hw_bridge_work);
+ queue_work(teth_ctx->teth_wq, &teth_ctx->comp_hw_bridge_work);
}
}
+/**
+ * teth_send_skb_work() - workqueue function for sending a packet
+ */
+static void teth_send_skb_work(struct work_struct *work)
+{
+ struct teth_work *work_data =
+ container_of(work, struct teth_work, work);
+ int res;
+
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+
+ switch (work_data->dir) {
+ case TETH_USB_TO_A2:
+ res = a2_mux_write(A2_MUX_TETHERED_0, work_data->skb);
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+ teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ break;
+
+ case TETH_A2_TO_USB:
+ res = ipa_tx_dp(IPA_CLIENT_USB_CONS, work_data->skb, NULL);
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+ teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ break;
+
+ default:
+ TETH_ERR("Unsupported direction to send !\n");
+ WARN_ON(1);
+ }
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ kfree(work_data);
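+	/* Count packets that had to be deferred while BRIDGE_PROD was waking up */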
+ teth_ctx->stats.num_sw_tx_packets_during_resource_wakeup++;
+
+ return;
+bail:
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ dev_kfree_skb(work_data->skb);
+ kfree(work_data);
+}
+
+/**
+ * defer_skb_send() - defer sending an skb using the SW bridge to a workqueue
+ * @param skb: pointer to the socket buffer
+ * @param dir: direction of send
+ *
+ * If, during a packet send, A2 or USB needs to wake up from power collapse,
+ * defer the send and return control to the IPA driver. This is important
+ * since the IPA driver has a single-threaded Rx path.
+ */
+static void defer_skb_send(struct sk_buff *skb, enum teth_packet_direction dir)
+{
+ struct teth_work *work = kmalloc(sizeof(struct teth_work), GFP_KERNEL);
+
+ if (!work) {
+ TETH_ERR("No mem, dropping packet\n");
+ dev_kfree_skb(skb);
+ ipa_rm_inactivity_timer_release_resource
+ (IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
+
+ /*
+ * Since IPA uses a single Rx thread, we don't
+ * want to wait for completion here
+ */
+ INIT_WORK(&work->work, teth_send_skb_work);
+ work->dir = dir;
+ work->skb = skb;
+ queue_work(teth_ctx->teth_wq, &work->work);
+}
+
+/**
+ * usb_notify_cb() - callback function for sending packets from USB to A2
+ * @param priv: private data
+ * @param evt: event - RECEIVE or WRITE_DONE
+ * @param data: pointer to skb to be sent
+ *
+ * This callback function is installed by the IPA driver, it is invoked in 2
+ * cases:
+ * 1. When a packet comes from the USB pipe and is routed to A5 (SW bridging)
+ * 2. After a packet has been bridged from USB to A2 and its skb should be freed
+ *
+ * Invocation: sps driver --> IPA driver --> bridge driver
+ *
+ * In the event of IPA_RECEIVE:
+ * - Checks whether the HW bridge can be set up.
+ * - Requests the BRIDGE_PROD resource so that A2 and USB are not in power
+ *   collapse. If the resource is still waking up, the send operation is
+ *   deferred to a workqueue so as not to block the IPA driver's
+ *   single-threaded Rx path.
+ * - Sends the packets to A2 using the a2_service driver API.
+ * - Releases the BRIDGE_PROD resource.
+ *
+ * In the event of IPA_WRITE_DONE:
+ * - Frees the skb memory
+ */
static void usb_notify_cb(void *priv,
enum ipa_dp_evt_type evt,
unsigned long data)
@@ -807,13 +1082,36 @@
&teth_ctx->mac_addresses.host_pc_mac_addr_known,
&teth_ctx->mac_addresses.device_mac_addr_known);
- /* Send the packet to A2, using a2_service driver API */
- teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ /*
+ * Request the BRIDGE_PROD resource, send the packet and release
+ * the resource
+ */
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS) {
+ /* The resource is waking up */
+ defer_skb_send(skb, TETH_USB_TO_A2);
+ } else {
+ TETH_ERR(
+ "Packet send failure, dropping packet !\n");
+ dev_kfree_skb(skb);
+ }
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
res = a2_mux_write(A2_MUX_TETHERED_0, skb);
if (res) {
TETH_ERR("Packet send failure, dropping packet !\n");
dev_kfree_skb(skb);
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
}
+ teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
break;
case IPA_WRITE_DONE:
@@ -828,6 +1126,30 @@
return;
}
+/**
+ * a2_notify_cb() - callback function for sending packets from A2 to USB
+ * @param user_data: private data
+ * @param event: event - RECEIVE or WRITE_DONE
+ * @param data: pointer to skb to be sent
+ *
+ * This callback function is installed by the IPA driver, it is invoked in 2
+ * cases:
+ * 1. When a packet comes from the A2 pipe and is routed to A5 (SW bridging)
+ * 2. After a packet has been bridged from A2 to USB and its skb should be freed
+ *
+ * Invocation: sps driver --> IPA driver --> a2_service driver --> bridge driver
+ *
+ * In the event of A2_MUX_RECEIVE:
+ * - Checks whether the HW bridge can be set up.
+ * - Requests the BRIDGE_PROD resource so that A2 and USB are not in power
+ *   collapse. If the resource is still waking up, the send operation is
+ *   deferred to a workqueue so as not to block the IPA driver's
+ *   single-threaded Rx path.
+ * - Sends the packets to USB using the IPA driver's ipa_tx_dp() API.
+ * - Releases the BRIDGE_PROD resource.
+ *
+ * In the event of A2_MUX_WRITE_DONE:
+ * - Frees the skb memory
+ */
static void a2_notify_cb(void *user_data,
enum a2_mux_event_type event,
unsigned long data)
@@ -845,13 +1167,37 @@
&teth_ctx->
mac_addresses.host_pc_mac_addr_known);
- /* Send the packet to USB */
- teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ /*
+ * Request the BRIDGE_PROD resource, send the packet and release
+ * the resource
+ */
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS) {
+ /* The resource is waking up */
+ defer_skb_send(skb, TETH_A2_TO_USB);
+ } else {
+ TETH_ERR(
+ "Packet send failure, dropping packet !\n");
+ dev_kfree_skb(skb);
+ }
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
+
res = ipa_tx_dp(IPA_CLIENT_USB_CONS, skb, NULL);
if (res) {
TETH_ERR("Packet send failure, dropping packet !\n");
dev_kfree_skb(skb);
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
}
+ teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
break;
case A2_MUX_WRITE_DONE:
@@ -866,12 +1212,41 @@
return;
}
+/**
+ * bridge_prod_notify_cb() - IPA Resource Manager callback function
+ * @param notify_cb_data: private data
+ * @param event: RESOURCE_GRANTED / RESOURCE_RELEASED
+ * @param data: not used in this case
+ *
+ * This callback function is called by IPA resource manager to notify the
+ * BRIDGE_PROD entity of events like RESOURCE_GRANTED and RESOURCE_RELEASED.
+ */
static void bridge_prod_notify_cb(void *notify_cb_data,
enum ipa_rm_event event,
unsigned long data)
{
+ int res;
+ struct ipa_ep_cfg ipa_ep_cfg;
+
switch (event) {
case IPA_RM_RESOURCE_GRANTED:
+ res = a2_mux_get_tethered_client_handles(
+ A2_MUX_TETHERED_0,
+ &teth_ctx->ipa_a2_pipe_hdl,
+ &teth_ctx->a2_ipa_pipe_hdl);
+ if (res) {
+ TETH_ERR(
+ "a2_mux_get_tethered_client_handles() failed, res = %d\n",
+ res);
+ return;
+ }
+
+ /* Reset the various endpoints configuration */
+ memset(&ipa_ep_cfg, 0, sizeof(ipa_ep_cfg));
+ ipa_cfg_ep(teth_ctx->ipa_a2_pipe_hdl, &ipa_ep_cfg);
+
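+		/* Restore the header length on the A2 -> IPA pipe */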
+ ipa_ep_cfg.hdr.hdr_len = teth_ctx->a2_ipa_hdr_len;
+ ipa_cfg_ep(teth_ctx->a2_ipa_pipe_hdl, &ipa_ep_cfg);
complete(&teth_ctx->is_bridge_prod_up);
break;
@@ -890,10 +1265,17 @@
/**
* teth_bridge_init() - Initialize the Tethering bridge driver
-* @usb_notify_cb_ptr: Callback function which should be used
-* by the caller. Output parameter.
-* @private_data_ptr: Data for the callback function. Should
-* be used by the caller. Output parameter.
+* @usb_notify_cb_ptr: Callback function which should be used by the caller.
+* Output parameter.
+* @private_data_ptr: Data for the callback function. Should be used by the
+* caller. Output parameter.
+*
+* The USB driver gets a pointer to a callback function (usb_notify_cb) and
+* associated data. The USB driver installs this callback function in its call
+* to ipa_connect().
+*
+* Builds IPA resource manager dependency graph.
+*
* Return codes: 0: success,
* -EINVAL - Bad parameter
* Other negative value - Failure
@@ -915,32 +1297,34 @@
/* Build IPA Resource manager dependency graph */
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_USB_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto bail;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_A2_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_1;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_A2_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_2;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_A2_PROD,
IPA_RM_RESOURCE_USB_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_3;
}
+ /* Return 0 as EINPROGRESS is a valid return value at this point */
+ res = 0;
goto bail;
fail_add_dependency_3:
@@ -959,46 +1343,146 @@
EXPORT_SYMBOL(teth_bridge_init);
/**
+ * initialize_context() - Initialize the teth_ctx struct
+ */
+static void initialize_context(void)
+{
+ TETH_DBG_FUNC_ENTRY();
+ /* Initialize context variables */
+ teth_ctx->usb_ipa_pipe_hdl = 0;
+ teth_ctx->ipa_a2_pipe_hdl = 0;
+ teth_ctx->a2_ipa_pipe_hdl = 0;
+ teth_ctx->ipa_usb_pipe_hdl = 0;
+ teth_ctx->is_connected = false;
+
+ /* The default link protocol is Ethernet */
+ teth_ctx->link_protocol = TETH_LINK_PROTOCOL_ETHERNET;
+
+ memset(&teth_ctx->mac_addresses, 0, sizeof(teth_ctx->mac_addresses));
+ teth_ctx->is_hw_bridge_complete = false;
+ memset(&teth_ctx->aggr_params, 0, sizeof(teth_ctx->aggr_params));
+ teth_ctx->aggr_params_known = false;
+ teth_ctx->tethering_mode = 0;
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_up);
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_down);
+ teth_ctx->comp_hw_bridge_in_progress = false;
+ memset(&teth_ctx->stats, 0, sizeof(teth_ctx->stats));
+ teth_ctx->a2_ipa_hdr_len = 0;
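+
+	/*
+	 * The *_del structures are variable-length (a header followed by an
+	 * array of rule entries), so clear the entire allocation
+	 */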
+ memset(teth_ctx->hdr_del,
+ 0,
+ sizeof(struct ipa_ioc_del_hdr) + TETH_TOTAL_HDR_ENTRIES *
+ sizeof(struct ipa_hdr_del));
+ memset(teth_ctx->routing_del[IPA_IP_v4],
+ 0,
+ sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP * sizeof(struct ipa_rt_rule_del));
+ teth_ctx->routing_del[IPA_IP_v4]->ip = IPA_IP_v4;
+ memset(teth_ctx->routing_del[IPA_IP_v6],
+ 0,
+ sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP * sizeof(struct ipa_rt_rule_del));
+ teth_ctx->routing_del[IPA_IP_v6]->ip = IPA_IP_v6;
+ memset(teth_ctx->filtering_del[IPA_IP_v4],
+ 0,
+ sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP * sizeof(struct ipa_flt_rule_del));
+ teth_ctx->filtering_del[IPA_IP_v4]->ip = IPA_IP_v4;
+ memset(teth_ctx->filtering_del[IPA_IP_v6],
+ 0,
+ sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP * sizeof(struct ipa_flt_rule_del));
+ teth_ctx->filtering_del[IPA_IP_v6]->ip = IPA_IP_v6;
+
+ TETH_DBG_FUNC_EXIT();
+}
+
+/**
* teth_bridge_disconnect() - Disconnect tethering bridge module
-*
-* Return codes: 0: success
-* -EPERM: Operation not permitted as the bridge is already
-* disconnected
*/
int teth_bridge_disconnect(void)
{
- int res = -EPERM;
+ int res;
TETH_DBG_FUNC_ENTRY();
if (!teth_ctx->is_connected) {
TETH_ERR(
- "Trying to disconnect an already disconnected bridge\n");
+ "Trying to disconnect an already disconnected bridge\n");
goto bail;
}
- teth_ctx->is_connected = false;
-
- res = ipa_rm_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
- if (res == -EINPROGRESS)
- wait_for_completion(&teth_ctx->is_bridge_prod_down);
-
- /* Initialize statistics */
- memset(&teth_ctx->stats, 0, sizeof(teth_ctx->stats));
-
- /* Delete IPA Resource manager dependency graph */
+ /*
+ * Delete part of IPA resource manager dependency graph. Only the
+ * BRIDGE_PROD <-> A2 dependency remains intact
+ */
res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_USB_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
- IPA_RM_RESOURCE_A2_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
- IPA_RM_RESOURCE_A2_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_A2_PROD,
- IPA_RM_RESOURCE_USB_CONS);
- if (res)
- TETH_ERR("Failed deleting ipa_rm dependency.\n");
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency BRIDGE_PROD <-> USB_CONS\n");
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
+ IPA_RM_RESOURCE_A2_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency USB_PROD <-> A2_CONS\n");
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_A2_PROD,
+ IPA_RM_RESOURCE_USB_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency A2_PROD <-> USB_CONS\n");
+
+ /* Request the BRIDGE_PROD resource, A2 and IPA should power up */
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("request_resource() failed.\n");
+ goto bail;
+ }
+
+ /* Close the channel to A2 */
+ if (a2_mux_close_channel(A2_MUX_TETHERED_0))
+ TETH_ERR("a2_mux_close_channel() failed\n");
+
+ /* Teardown the IPA HW bridge */
+ if (teth_ctx->is_hw_bridge_complete) {
+ /* Delete header entries */
+ if (ipa_del_hdr(teth_ctx->hdr_del))
+ TETH_ERR("ipa_del_hdr() failed\n");
+
+ /* Delete installed routing rules */
+ if (ipa_del_rt_rule(teth_ctx->routing_del[IPA_IP_v4]))
+ TETH_ERR("ipa_del_rt_rule() failed\n");
+ if (ipa_del_rt_rule(teth_ctx->routing_del[IPA_IP_v6]))
+ TETH_ERR("ipa_del_rt_rule() failed\n");
+
+ /* Delete installed filtering rules */
+ if (ipa_del_flt_rule(teth_ctx->filtering_del[IPA_IP_v4]))
+ TETH_ERR("ipa_del_flt_rule() failed\n");
+ if (ipa_del_flt_rule(teth_ctx->filtering_del[IPA_IP_v6]))
+ TETH_ERR("ipa_del_flt_rule() failed\n");
+
+ /*
+ * Commit all the data to HW, including header, routing and
+ * filtering blocks, IPv4 and IPv6
+ */
+ if (ipa_commit_hdr())
+ TETH_ERR("Failed committing headers\n");
+ }
+
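+	/* Restore the driver context to its post-init defaults */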
+ initialize_context();
+
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+
+ /* Delete the last ipa_rm dependency - BRIDGE_PROD <-> A2 */
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
+ IPA_RM_RESOURCE_A2_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency BRIDGE_PROD <-> A2_CONS\n");
+
+ teth_ctx->is_connected = false;
bail:
TETH_DBG_FUNC_EXIT();
- return res;
+
+ return 0;
}
EXPORT_SYMBOL(teth_bridge_disconnect);
@@ -1032,12 +1516,10 @@
teth_ctx->usb_ipa_pipe_hdl = connect_params->usb_ipa_pipe_hdl;
teth_ctx->tethering_mode = connect_params->tethering_mode;
- res = ipa_rm_request_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
- if (res < 0) {
- if (res == -EINPROGRESS)
- wait_for_completion(&teth_ctx->is_bridge_prod_up);
- else
- goto bail;
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("request_resource() failed.\n");
+ goto bail;
}
res = a2_mux_open_channel(A2_MUX_TETHERED_0,
@@ -1068,10 +1550,28 @@
if (teth_ctx->tethering_mode == TETH_TETHERING_MODE_MBIM)
teth_ctx->link_protocol = TETH_LINK_PROTOCOL_IP;
- TETH_DBG_FUNC_EXIT();
+
+ if (teth_ctx->aggr_params_known) {
+ res = teth_set_aggregation();
+ if (res) {
+ TETH_ERR("Failed setting aggregation params\n");
+ goto bail;
+ }
+ }
+
+ /* In case of IP link protocol, complete HW bridge */
+ if ((teth_ctx->link_protocol == TETH_LINK_PROTOCOL_IP) &&
+ (!teth_ctx->comp_hw_bridge_in_progress) &&
+ (teth_ctx->aggr_params_known) &&
+ (!teth_ctx->is_hw_bridge_complete)) {
+ INIT_WORK(&teth_ctx->comp_hw_bridge_work, complete_hw_bridge);
+ teth_ctx->comp_hw_bridge_in_progress = true;
+ queue_work(teth_ctx->teth_wq, &teth_ctx->comp_hw_bridge_work);
+ }
bail:
- if (res)
- ipa_rm_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ TETH_DBG_FUNC_EXIT();
+
return res;
}
EXPORT_SYMBOL(teth_bridge_connect);
@@ -1086,6 +1586,9 @@
TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT;
}
+/**
+ * teth_set_bridge_mode() - set the link protocol (IP / Ethernet)
+ */
static void teth_set_bridge_mode(enum teth_link_protocol_type link_protocol)
{
teth_ctx->link_protocol = link_protocol;
@@ -1093,15 +1596,45 @@
memset(&teth_ctx->mac_addresses, 0, sizeof(teth_ctx->mac_addresses));
}
+/**
+ * teth_bridge_set_aggr_params() - kernel API to set aggregation parameters
+ * @param aggr_params: aggregation parameters for uplink and downlink
+ *
+ * Besides setting the aggregation parameters, the function enforces a maximum
+ * transfer size of 8K and forbids the Ethernet link protocol with MBIM
+ * aggregation, which is not supported by HW.
+ */
int teth_bridge_set_aggr_params(struct teth_aggr_params *aggr_params)
{
int res;
+ TETH_DBG_FUNC_ENTRY();
if (!aggr_params) {
TETH_ERR("Invalid parameter\n");
return -EINVAL;
}
+ /*
+	 * In case the requested max transfer size is larger than 8K, set it
+	 * to the default 8K
+ */
+ if (aggr_params->dl.max_transfer_size_byte >
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT)
+ aggr_params->dl.max_transfer_size_byte =
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT;
+ if (aggr_params->ul.max_transfer_size_byte >
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT)
+ aggr_params->ul.max_transfer_size_byte =
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT;
+
+ /* Ethernet link protocol and MBIM aggregation is not supported */
+ if (teth_ctx->link_protocol == TETH_LINK_PROTOCOL_ETHERNET &&
+ (aggr_params->dl.aggr_prot == TETH_AGGR_PROTOCOL_MBIM ||
+ aggr_params->ul.aggr_prot == TETH_AGGR_PROTOCOL_MBIM)) {
+ TETH_ERR("Ethernet with MBIM is not supported.\n");
+ return -EINVAL;
+ }
+
memcpy(&teth_ctx->aggr_params,
aggr_params,
sizeof(struct teth_aggr_params));
@@ -1110,10 +1643,9 @@
teth_ctx->aggr_params_known = true;
res = teth_set_aggregation();
- if (res) {
+ if (res)
TETH_ERR("Failed setting aggregation params\n");
- res = -EFAULT;
- }
+ TETH_DBG_FUNC_EXIT();
return res;
}
@@ -1153,6 +1685,19 @@
}
res = teth_bridge_set_aggr_params(&aggr_params);
+ if (res)
+ break;
+
+ /* In case of IP link protocol, complete HW bridge */
+ if ((teth_ctx->link_protocol == TETH_LINK_PROTOCOL_IP) &&
+ (!teth_ctx->comp_hw_bridge_in_progress) &&
+ (!teth_ctx->is_hw_bridge_complete)) {
+ INIT_WORK(&teth_ctx->comp_hw_bridge_work,
+ complete_hw_bridge);
+ teth_ctx->comp_hw_bridge_in_progress = true;
+ queue_work(teth_ctx->teth_wq,
+ &teth_ctx->comp_hw_bridge_work);
+ }
break;
case TETH_BRIDGE_IOC_GET_AGGR_PARAMS:
@@ -1208,7 +1753,11 @@
return res;
}
-static void set_aggr_capabilities(void)
+/**
+ * set_aggr_capabilities() - allocates and fills the aggregation capabilities
+ * struct
+ */
+static int set_aggr_capabilities(void)
{
u16 NUM_PROTOCOLS = 2;
@@ -1216,9 +1765,9 @@
NUM_PROTOCOLS *
sizeof(struct teth_aggr_params_link),
GFP_KERNEL);
- if (teth_ctx->aggr_caps == NULL) {
+ if (!teth_ctx->aggr_caps) {
TETH_ERR("Memory alloc failed for aggregation capabilities.\n");
- return;
+ return -ENOMEM;
}
teth_ctx->aggr_caps->num_protocols = NUM_PROTOCOLS;
@@ -1228,8 +1777,15 @@
teth_ctx->aggr_caps->prot_caps[1].aggr_prot = TETH_AGGR_PROTOCOL_TLP;
set_aggr_default_params(&teth_ctx->aggr_caps->prot_caps[1]);
+
+ return 0;
}
+/**
+* teth_bridge_get_client_handles() - Get USB <--> IPA pipe handles
+* @producer_handle: USB --> IPA pipe handle
+* @consumer_handle: IPA --> USB pipe handle
+*/
void teth_bridge_get_client_handles(u32 *producer_handle,
u32 *consumer_handle)
{
@@ -1397,6 +1953,11 @@
TETH_MAX_MSG_LEN - nbytes,
"A2 to USB SW Tx packets: %lld\n",
teth_ctx->stats.a2_to_usb_num_sw_tx_packets);
+ nbytes += scnprintf(
+ &dbg_buff[nbytes],
+ TETH_MAX_MSG_LEN - nbytes,
+ "SW Tx packets sent during resource wakeup: %lld\n",
+ teth_ctx->stats.num_sw_tx_packets_during_resource_wakeup);
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
@@ -1523,7 +2084,59 @@
return -ENOMEM;
}
- set_aggr_capabilities();
+ res = set_aggr_capabilities();
+ if (res) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_aggr_caps;
+ }
+
+ res = -ENOMEM;
+ teth_ctx->hdr_del = kzalloc(sizeof(struct ipa_ioc_del_hdr) +
+ TETH_TOTAL_HDR_ENTRIES *
+ sizeof(struct ipa_hdr_del),
+ GFP_KERNEL);
+ if (!teth_ctx->hdr_del) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_hdr_del;
+ }
+
+ teth_ctx->routing_del[IPA_IP_v4] =
+ kzalloc(sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP *
+ sizeof(struct ipa_rt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->routing_del[IPA_IP_v4]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_routing_del_ipv4;
+ }
+ teth_ctx->routing_del[IPA_IP_v6] =
+ kzalloc(sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP *
+ sizeof(struct ipa_rt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->routing_del[IPA_IP_v6]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_routing_del_ipv6;
+ }
+
+ teth_ctx->filtering_del[IPA_IP_v4] =
+ kzalloc(sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP *
+ sizeof(struct ipa_flt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->filtering_del[IPA_IP_v4]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_filtering_del_ipv4;
+ }
+ teth_ctx->filtering_del[IPA_IP_v6] =
+ kzalloc(sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP *
+ sizeof(struct ipa_flt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->filtering_del[IPA_IP_v6]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_filtering_del_ipv6;
+ }
teth_ctx->class = class_create(THIS_MODULE, TETH_BRIDGE_DRV_NAME);
@@ -1554,8 +2167,6 @@
goto fail_cdev_add;
}
- teth_ctx->comp_hw_bridge_in_progress = false;
-
teth_debugfs_init();
/* Create BRIDGE_PROD entity in IPA Resource Manager */
@@ -1570,9 +2181,21 @@
init_completion(&teth_ctx->is_bridge_prod_up);
init_completion(&teth_ctx->is_bridge_prod_down);
- /* The default link protocol is Ethernet */
- teth_ctx->link_protocol = TETH_LINK_PROTOCOL_ETHERNET;
+ res = ipa_rm_inactivity_timer_init(IPA_RM_RESOURCE_BRIDGE_PROD,
+ TETH_INACTIVITY_TIME_MSEC);
+ if (res) {
+ TETH_ERR("ipa_rm_inactivity_timer_init() failed, res=%d\n",
+ res);
+ goto fail_cdev_add;
+ }
+ teth_ctx->teth_wq = create_workqueue(TETH_WORKQUEUE_NAME);
+ if (!teth_ctx->teth_wq) {
+ TETH_ERR("workqueue creation failed\n");
+ goto fail_cdev_add;
+ }
+
+ initialize_context();
TETH_DBG("Tethering bridge driver init OK\n");
return 0;
@@ -1581,7 +2204,18 @@
fail_device_create:
unregister_chrdev_region(teth_ctx->dev_num, 1);
fail_alloc_chrdev_region:
+ kfree(teth_ctx->filtering_del[IPA_IP_v6]);
+fail_alloc_filtering_del_ipv6:
+ kfree(teth_ctx->filtering_del[IPA_IP_v4]);
+fail_alloc_filtering_del_ipv4:
+ kfree(teth_ctx->routing_del[IPA_IP_v6]);
+fail_alloc_routing_del_ipv6:
+ kfree(teth_ctx->routing_del[IPA_IP_v4]);
+fail_alloc_routing_del_ipv4:
+ kfree(teth_ctx->hdr_del);
+fail_alloc_hdr_del:
kfree(teth_ctx->aggr_caps);
+fail_alloc_aggr_caps:
kfree(teth_ctx);
teth_ctx = NULL;
diff --git a/drivers/platform/msm/qpnp-pwm.c b/drivers/platform/msm/qpnp-pwm.c
index 52c523e..442d18f 100644
--- a/drivers/platform/msm/qpnp-pwm.c
+++ b/drivers/platform/msm/qpnp-pwm.c
@@ -319,6 +319,7 @@
#define QPNP_ENABLE_LUT_CONTROL qpnp_set_control(0, 0, 0, 0, 1)
#define QPNP_ENABLE_PWM_CONTROL qpnp_set_control(0, 0, 0, 1, 0)
#define QPNP_ENABLE_PWM_MODE qpnp_set_control(1, 1, 1, 1, 0)
+#define QPNP_ENABLE_PWM_MODE_GPLED_CHANNEL qpnp_set_control(1, 1, 1, 1, 1)
#define QPNP_ENABLE_LPG_MODE qpnp_set_control(1, 1, 1, 0, 1)
#define QPNP_DISABLE_PWM_MODE qpnp_set_control(0, 0, 0, 1, 0)
#define QPNP_DISABLE_LPG_MODE qpnp_set_control(0, 0, 0, 0, 1)
@@ -908,6 +909,17 @@
}
+#define QPNP_GPLED_LPG_CHANNEL_RANGE_START 8
+#define QPNP_GPLED_LPG_CHANNEL_RANGE_END 11
+
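+/* GPLED LPG channels (8-11) require a different enable control value */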
+static inline int qpnp_enable_pwm_mode(struct qpnp_pwm_config *pwm_conf)
+{
+	if (pwm_conf->channel_id >= QPNP_GPLED_LPG_CHANNEL_RANGE_START &&
+ pwm_conf->channel_id <= QPNP_GPLED_LPG_CHANNEL_RANGE_END)
+ return QPNP_ENABLE_PWM_MODE_GPLED_CHANNEL;
+ return QPNP_ENABLE_PWM_MODE;
+}
+
static int qpnp_lpg_configure_pwm_state(struct pwm_device *pwm,
enum qpnp_pwm_state state)
{
@@ -917,7 +929,7 @@
int rc;
if (state == QPNP_PWM_ENABLE)
- value = QPNP_ENABLE_PWM_MODE;
+ value = qpnp_enable_pwm_mode(&pwm->pwm_config);
else
value = QPNP_DISABLE_PWM_MODE;
diff --git a/drivers/platform/msm/usb_bam.c b/drivers/platform/msm/usb_bam.c
index 1d0fb5d..2d1b763 100644
--- a/drivers/platform/msm/usb_bam.c
+++ b/drivers/platform/msm/usb_bam.c
@@ -27,9 +27,11 @@
#include <linux/workqueue.h>
#include <linux/dma-mapping.h>
#include <mach/msm_smsm.h>
+#include <linux/pm_runtime.h>
#define USB_THRESHOLD 512
#define USB_BAM_MAX_STR_LEN 50
+#define USB_BAM_TIMEOUT (10*HZ)
enum usb_bam_sm {
USB_BAM_SM_INIT = 0,
@@ -73,6 +75,11 @@
* handle/device of the sps driver.
* @pipes_enabled_per_bam: This array stores for each BAM
* ("ssusb", "hsusb" or "hsic") the number of pipes currently enabled.
+* @inactivity_timer_ms: The per-bam timeout configuration for the inactivity
+*		timer feature.
+* @is_bam_inactivity: Whether there is no activity on any of the pipes that
+*		belong to a specific bam (no activity = no data is pulled or
+*		pushed through any of the pipes).
*/
struct usb_bam_ctx_type {
struct usb_bam_sps_type usb_bam_sps;
@@ -85,19 +92,61 @@
char qdss_core_name[USB_BAM_MAX_STR_LEN];
u32 h_bam[MAX_BAMS];
u8 pipes_enabled_per_bam[MAX_BAMS];
+ u32 inactivity_timer_ms[MAX_BAMS];
+ bool is_bam_inactivity[MAX_BAMS];
};
-static char *bam_enable_strings[3] = {
+static char *bam_enable_strings[MAX_BAMS] = {
[SSUSB_BAM] = "ssusb",
[HSUSB_BAM] = "hsusb",
[HSIC_BAM] = "hsic",
};
-static spinlock_t usb_bam_lock;
+static enum usb_bam ipa_rm_bams[] = {HSUSB_BAM, HSIC_BAM};
+
+static enum ipa_client_type ipa_rm_resource_prod[MAX_BAMS] = {
+ [HSUSB_BAM] = IPA_RM_RESOURCE_USB_PROD,
+ [HSIC_BAM] = IPA_RM_RESOURCE_HSIC_PROD,
+};
+
+static enum ipa_client_type ipa_rm_resource_cons[MAX_BAMS] = {
+ [HSUSB_BAM] = IPA_RM_RESOURCE_USB_CONS,
+ [HSIC_BAM] = IPA_RM_RESOURCE_HSIC_CONS,
+};
+
+static int usb_cons_request_resource(void);
+static int usb_cons_release_resource(void);
+static int hsic_cons_request_resource(void);
+static int hsic_cons_release_resource(void);
+
+static int (*request_resource_cb[MAX_BAMS])(void) = {
+ [HSUSB_BAM] = usb_cons_request_resource,
+ [HSIC_BAM] = hsic_cons_request_resource,
+};
+
+static int (*release_resource_cb[MAX_BAMS])(void) = {
+ [HSUSB_BAM] = usb_cons_release_resource,
+ [HSIC_BAM] = hsic_cons_release_resource,
+};
+
+static enum ipa_rm_event cur_prod_state[MAX_BAMS];
+static enum ipa_rm_event cur_cons_state[MAX_BAMS];
+static int sched_lpm;
+static int lpm_wait_handshake;
+static struct completion prod_avail[MAX_BAMS];
+static struct completion cons_avail[MAX_BAMS];
+static struct completion cons_released[MAX_BAMS];
+static struct completion prod_released[MAX_BAMS];
+
+static spinlock_t usb_bam_peer_handshake_info_lock;
static struct usb_bam_peer_handshake_info peer_handshake_info;
+static spinlock_t usb_bam_lock; /* Protect ctx and usb_bam_connections */
static struct usb_bam_pipe_connect *usb_bam_connections;
static struct usb_bam_ctx_type ctx;
+static int __usb_bam_register_wake_cb(u8 idx, int (*callback)(void *user),
+ void *param, bool trigger_cb_per_pipe);
+
static int get_bam_type_from_core_name(const char *name)
{
if (strnstr(name, bam_enable_strings[SSUSB_BAM],
@@ -128,6 +177,42 @@
return false;
}
+static void usb_bam_set_inactivity_timer(enum usb_bam bam)
+{
+ struct sps_timer_ctrl timer_ctrl;
+ struct usb_bam_pipe_connect *pipe_connect;
+ struct sps_pipe *pipe = NULL;
+ int i;
+
+ /*
+	 * Since we configure a global inactivity timer for all pipes
+	 * and not per pipe, it is enough to use any pipe handle
+	 * associated with this bam, so just find the first one.
+	 * The pipe handle is required by the SPS driver API used below.
+ */
+ for (i = 0; i < ctx.max_connections; i++) {
+ pipe_connect = &usb_bam_connections[i];
+ if (pipe_connect->bam_type == bam) {
+ pipe = ctx.usb_bam_sps.sps_pipes[i];
+ break;
+ }
+ }
+
+ if (!pipe) {
+ pr_err("%s: Bam %s has no pipes\n", __func__,
+ bam_enable_strings[bam]);
+ return;
+ }
+
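+	/* Configure the one-shot inactivity timer, then reset it to start counting */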
+ timer_ctrl.op = SPS_TIMER_OP_CONFIG;
+ timer_ctrl.mode = SPS_TIMER_MODE_ONESHOT;
+ timer_ctrl.timeout_msec = ctx.inactivity_timer_ms[bam];
+ sps_timer_ctrl(pipe, &timer_ctrl, NULL);
+
+ timer_ctrl.op = SPS_TIMER_OP_RESET;
+ sps_timer_ctrl(pipe, &timer_ctrl, NULL);
+}
+
static int connect_pipe(u8 idx, u32 *usb_pipe_idx)
{
int ret, ram1_value;
@@ -281,6 +366,7 @@
pr_err("%s: sps_connect failed %d\n", __func__, ret);
goto error;
}
+
return 0;
error:
@@ -323,6 +409,10 @@
return ret;
}
+ pipe_connect->activity_notify = ipa_params->activity_notify;
+ pipe_connect->inactivity_notify = ipa_params->inactivity_notify;
+ pipe_connect->priv = ipa_params->priv;
+
/* IPA input parameters */
ipa_in_params.client_bam_hdl = usb_handle;
ipa_in_params.desc_fifo_sz = pipe_connect->desc_fifo_size;
@@ -395,6 +485,7 @@
__func__,
pipe_connect->src_pipe_index,
pipe_connect->dst_pipe_index);
+ sps_connection->options = SPS_O_NO_DISABLE;
} else {
/* IPA src, USB dest */
sps_connection->mode = SPS_MODE_DEST;
@@ -409,12 +500,13 @@
__func__,
pipe_connect->src_pipe_index,
pipe_connect->dst_pipe_index);
+ sps_connection->options = 0;
}
sps_connection->data = sps_out_params.data;
sps_connection->desc = sps_out_params.desc;
sps_connection->event_thresh = 16;
- sps_connection->options = SPS_O_AUTO_ENABLE;
+ sps_connection->options |= SPS_O_AUTO_ENABLE;
ret = sps_connect(*pipe, sps_connection);
if (ret < 0) {
@@ -422,6 +514,16 @@
goto error;
}
+ spin_lock(&usb_bam_lock);
+
+ /* Set global inactivity timer upon first pipe connection */
+ if (ctx.pipes_enabled_per_bam[pipe_connect->bam_type] == 0 &&
+ ctx.inactivity_timer_ms[pipe_connect->bam_type] &&
+ pipe_connect->inactivity_notify)
+ usb_bam_set_inactivity_timer(pipe_connect->bam_type);
+
+ spin_unlock(&usb_bam_lock);
+
return 0;
error:
@@ -505,6 +607,8 @@
return -EINVAL;
}
+ spin_lock(&usb_bam_lock);
+
/* Check if BAM requires RESET before connect and reset of first pipe */
if ((pdata->reset_on_connect[pipe_connect->bam_type] == true) &&
(ctx.pipes_enabled_per_bam[pipe_connect->bam_type] == 0))
@@ -513,24 +617,34 @@
ret = connect_pipe(idx, bam_pipe_idx);
if (ret) {
pr_err("%s: pipe connection[%d] failure\n", __func__, idx);
+ spin_unlock(&usb_bam_lock);
return ret;
}
pipe_connect->enabled = 1;
ctx.pipes_enabled_per_bam[pipe_connect->bam_type] += 1;
+ spin_unlock(&usb_bam_lock);
return 0;
}
static void usb_prod_notify_cb(void *user_data, enum ipa_rm_event event,
unsigned long data)
{
+ enum usb_bam *cur_bam = (void *)user_data;
+
switch (event) {
case IPA_RM_RESOURCE_GRANTED:
- pr_debug("USB_PROD resource granted\n");
+ pr_debug("%s: %s_PROD resource granted\n",
+ __func__, bam_enable_strings[*cur_bam]);
+ cur_prod_state[*cur_bam] = IPA_RM_RESOURCE_GRANTED;
+ complete_all(&prod_avail[*cur_bam]);
break;
case IPA_RM_RESOURCE_RELEASED:
- pr_debug("USB_PROD resource released\n");
+ pr_debug("%s: %s_PROD resource released\n",
+ __func__, bam_enable_strings[*cur_bam]);
+ cur_prod_state[*cur_bam] = IPA_RM_RESOURCE_RELEASED;
+ complete_all(&prod_released[*cur_bam]);
break;
default:
break;
@@ -538,50 +652,130 @@
return;
}
+static int cons_request_resource(enum usb_bam cur_bam)
+{
+ pr_debug("%s: Request %s_CONS resource\n",
+ __func__, bam_enable_strings[cur_bam]);
+
+ cur_cons_state[cur_bam] = IPA_RM_RESOURCE_GRANTED;
+ complete_all(&cons_avail[cur_bam]);
+
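+	/*
+	 * Grant immediately only if pipes are already connected; otherwise the
+	 * grant is completed later from usb_bam_connect_ipa()
+	 */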
+ if (ctx.pipes_enabled_per_bam[cur_bam])
+ return 0;
+
+ return -EINPROGRESS;
+}
+
static int usb_cons_request_resource(void)
{
- pr_debug(": Requesting USB_CONS resource\n");
- return 0;
+ return cons_request_resource(HSUSB_BAM);
+}
+
+static int hsic_cons_request_resource(void)
+{
+ return cons_request_resource(HSIC_BAM);
+}
+
+static int cons_release_resource(enum usb_bam cur_bam)
+{
+ pr_debug("%s: Release %s_CONS resource\n",
+ __func__, bam_enable_strings[cur_bam]);
+
+ cur_cons_state[cur_bam] = IPA_RM_RESOURCE_RELEASED;
+ complete_all(&cons_released[cur_bam]);
+
+ if (!ctx.pipes_enabled_per_bam[cur_bam])
+ return 0;
+
+ return -EINPROGRESS;
+}
+
+static int hsic_cons_release_resource(void)
+{
+ return cons_release_resource(HSIC_BAM);
}
static int usb_cons_release_resource(void)
{
- pr_debug(": Releasing USB_CONS resource\n");
- return 0;
+ return cons_release_resource(HSUSB_BAM);
}
static void usb_bam_ipa_create_resources(void)
{
struct ipa_rm_create_params usb_prod_create_params;
struct ipa_rm_create_params usb_cons_create_params;
+ enum usb_bam cur_bam;
+ int ret, i;
+
+ for (i = 0; i < ARRAY_SIZE(ipa_rm_bams); i++) {
+ /* Create USB/HSIC_PROD entity */
+ cur_bam = ipa_rm_bams[i];
+
+ memset(&usb_prod_create_params, 0,
+ sizeof(usb_prod_create_params));
+ usb_prod_create_params.name = ipa_rm_resource_prod[cur_bam];
+ usb_prod_create_params.reg_params.notify_cb =
+ usb_prod_notify_cb;
+ usb_prod_create_params.reg_params.user_data = &ipa_rm_bams[i];
+ ret = ipa_rm_create_resource(&usb_prod_create_params);
+ if (ret) {
+ pr_err("%s: Failed to create USB_PROD resource\n",
+ __func__);
+ return;
+ }
+
+ /* Create USB_CONS entity */
+ memset(&usb_cons_create_params, 0,
+ sizeof(usb_cons_create_params));
+ usb_cons_create_params.name = ipa_rm_resource_cons[cur_bam];
+ usb_cons_create_params.request_resource =
+ request_resource_cb[cur_bam];
+ usb_cons_create_params.release_resource =
+ release_resource_cb[cur_bam];
+ ret = ipa_rm_create_resource(&usb_cons_create_params);
+ if (ret) {
+ pr_err("%s: Failed to create USB_CONS resource\n",
+ __func__);
+ return ;
+ }
+ }
+}
+
+static void wait_for_prod_granted(enum usb_bam cur_bam)
+{
int ret;
- /* Create USB_PROD entity */
- memset(&usb_prod_create_params, 0, sizeof(usb_prod_create_params));
- usb_prod_create_params.name = IPA_RM_RESOURCE_USB_PROD;
- usb_prod_create_params.reg_params.notify_cb = usb_prod_notify_cb;
- usb_prod_create_params.reg_params.user_data = NULL;
- ret = ipa_rm_create_resource(&usb_prod_create_params);
- if (ret) {
- pr_err("%s: Failed to create USB_PROD resource\n", __func__);
- return;
- }
+ pr_debug("%s Request %s_PROD_RES\n", __func__,
+ bam_enable_strings[cur_bam]);
+ if (cur_cons_state[cur_bam] == IPA_RM_RESOURCE_GRANTED)
+ pr_debug("%s: CONS already granted for some reason\n",
+ __func__);
+ if (cur_prod_state[cur_bam] == IPA_RM_RESOURCE_GRANTED)
+ pr_debug("%s: PROD already granted for some reason\n",
+ __func__);
- /* Create USB_CONS entity */
- memset(&usb_cons_create_params, 0, sizeof(usb_cons_create_params));
- usb_cons_create_params.name = IPA_RM_RESOURCE_USB_CONS;
- usb_cons_create_params.request_resource = usb_cons_request_resource;
- usb_cons_create_params.release_resource = usb_cons_release_resource;
- ret = ipa_rm_create_resource(&usb_cons_create_params);
- if (ret) {
- pr_err("%s: Failed to create USB_CONS resource\n", __func__);
- return ;
- }
+ init_completion(&prod_avail[cur_bam]);
+ init_completion(&cons_avail[cur_bam]);
+
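+	/* A return value of 0 means the resource was granted synchronously */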
+ ret = ipa_rm_request_resource(ipa_rm_resource_prod[cur_bam]);
+ if (!ret) {
+ cur_prod_state[cur_bam] = IPA_RM_RESOURCE_GRANTED;
+ complete_all(&prod_avail[cur_bam]);
+ pr_debug("%s: PROD_GRANTED without wait\n", __func__);
+ } else if (ret == -EINPROGRESS) {
+ pr_debug("%s: Waiting for PROD_GRANTED\n", __func__);
+ if (!wait_for_completion_timeout(&prod_avail[cur_bam],
+ USB_BAM_TIMEOUT))
+			pr_err("%s: Timeout waiting for PROD_GRANTED\n",
+ __func__);
+ } else
+ pr_err("%s: ipa_rm_request_resource ret =%d\n", __func__, ret);
}
int usb_bam_connect_ipa(struct usb_bam_connect_ipa_params *ipa_params)
{
u8 idx;
+ enum usb_bam cur_bam;
struct usb_bam_pipe_connect *pipe_connect;
int ret;
struct msm_usb_bam_platform_data *pdata =
@@ -604,6 +798,14 @@
return -EINVAL;
}
pipe_connect = &usb_bam_connections[idx];
+ cur_bam = pipe_connect->bam_type;
+
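+	/* Hold off LPM until the connect handshake below has finished */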
+ if (cur_bam == HSUSB_BAM) {
+ spin_lock(&usb_bam_lock);
+ sched_lpm = 0;
+ lpm_wait_handshake = 1;
+ spin_unlock(&usb_bam_lock);
+ }
if (pipe_connect->enabled) {
pr_debug("%s: connection %d was already established\n",
@@ -611,21 +813,40 @@
return 0;
}
- /* Check if BAM requires RESET before connect and reset of first pipe */
- if ((pdata->reset_on_connect[pipe_connect->bam_type] == true) &&
- (ctx.pipes_enabled_per_bam[pipe_connect->bam_type] == 0))
- sps_device_reset(ctx.h_bam[pipe_connect->bam_type]);
+ spin_lock(&usb_bam_lock);
+
+ /* Check if BAM requires RESET before connect and reset first pipe */
+ if ((pdata->reset_on_connect[cur_bam] == true) &&
+ (ctx.pipes_enabled_per_bam[cur_bam] == 0))
+ sps_device_reset(ctx.h_bam[cur_bam]);
+
+ spin_unlock(&usb_bam_lock);
+
+ if (ipa_params->dir == USB_TO_PEER_PERIPHERAL) {
+ pr_debug("%s: Starting connect sequence\n", __func__);
+ wait_for_prod_granted(cur_bam);
+ }
ret = connect_pipe_ipa(idx, ipa_params);
- ipa_rm_request_resource(IPA_RM_RESOURCE_USB_PROD);
-
if (ret) {
- pr_err("%s: dst pipe connection failure\n", __func__);
+ pr_err("%s: pipe connection failure\n", __func__);
return ret;
}
+ spin_lock(&usb_bam_lock);
+
pipe_connect->enabled = 1;
- ctx.pipes_enabled_per_bam[pipe_connect->bam_type] += 1;
+ ctx.pipes_enabled_per_bam[cur_bam] += 1;
+
+ if (ipa_params->dir == PEER_PERIPHERAL_TO_USB &&
+ cur_cons_state[cur_bam] == IPA_RM_RESOURCE_GRANTED) {
+ pr_debug("%s: Notify CONS_GRANTED\n", __func__);
+ ipa_rm_notify_completion(IPA_RM_RESOURCE_GRANTED,
+ ipa_rm_resource_cons[cur_bam]);
+ pr_debug("%s: Ended connect sequence\n", __func__);
+ }
+
+ spin_unlock(&usb_bam_lock);
return 0;
}
@@ -633,22 +854,22 @@
int usb_bam_client_ready(bool ready)
{
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
if (peer_handshake_info.client_ready == ready) {
pr_debug("%s: client state is already %d\n",
__func__, ready);
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
return 0;
}
peer_handshake_info.client_ready = ready;
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
if (!queue_work(ctx.usb_bam_wq,
&peer_handshake_info.reset_event.event_w)) {
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.pending_work++;
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
}
return 0;
@@ -656,18 +877,108 @@
static void usb_bam_work(struct work_struct *w)
{
+ int i;
struct usb_bam_event_info *event_info =
container_of(w, struct usb_bam_event_info, event_w);
+ struct usb_bam_pipe_connect *pipe_connect =
+ container_of(event_info, struct usb_bam_pipe_connect, event);
+ struct usb_bam_pipe_connect *pipe_connect_iter;
+ int (*callback)(void *priv);
+ void *param = NULL;
- event_info->callback(event_info->param);
+ switch (event_info->type) {
+ case USB_BAM_EVENT_WAKEUP:
+ case USB_BAM_EVENT_WAKEUP_PIPE:
+
+		pr_debug("%s received USB_BAM_EVENT_WAKEUP\n", __func__);
+
+ /* Notify about wakeup / activity of the bam */
+ event_info->callback(event_info->param);
+
+ /*
+ * Reset inactivity timer counter if this pipe's bam
+ * has inactivity timeout.
+ */
+ spin_lock(&usb_bam_lock);
+ if (ctx.inactivity_timer_ms[pipe_connect->bam_type])
+ usb_bam_set_inactivity_timer(pipe_connect->bam_type);
+ spin_unlock(&usb_bam_lock);
+
+ break;
+
+ case USB_BAM_EVENT_INACTIVITY:
+
+		pr_debug("%s received USB_BAM_EVENT_INACTIVITY\n", __func__);
+
+ /*
+		 * Since the event info is one structure per pipe, it might be
+		 * overridden when we register the wakeup events below. Still,
+		 * we want to register the wakeup events before notifying
+		 * about the inactivity, in order to identify the next
+		 * activity as soon as possible.
+ */
+ callback = event_info->callback;
+ param = event_info->param;
+
+ /*
+		 * Upon inactivity, configure a wakeup irq for all pipes
+		 * that belong to this usb bam.
+ */
+ spin_lock(&usb_bam_lock);
+ for (i = 0; i < ctx.max_connections; i++) {
+ pipe_connect_iter = &usb_bam_connections[i];
+ if (pipe_connect_iter->bam_type ==
+ pipe_connect->bam_type &&
+ pipe_connect_iter->dir ==
+ PEER_PERIPHERAL_TO_USB &&
+ pipe_connect_iter->enabled) {
+ __usb_bam_register_wake_cb(i,
+ pipe_connect_iter->activity_notify,
+ pipe_connect_iter->priv,
+ false);
+ }
+ }
+ spin_unlock(&usb_bam_lock);
+
+ /* Notify about the inactivity */
+ if (callback)
+ callback(param);
+
+ break;
+ default:
+ pr_err("%s: unknown usb bam event type %d\n", __func__,
+ event_info->type);
+ }
}
static void usb_bam_wake_cb(struct sps_event_notify *notify)
{
- struct usb_bam_event_info *wake_event_info =
+ struct usb_bam_event_info *event_info =
(struct usb_bam_event_info *)notify->user;
+ struct usb_bam_pipe_connect *pipe_connect =
+ container_of(event_info,
+ struct usb_bam_pipe_connect,
+ event);
- queue_work(ctx.usb_bam_wq, &wake_event_info->event_w);
+ spin_lock(&usb_bam_lock);
+
+ if (event_info->type == USB_BAM_EVENT_WAKEUP_PIPE)
+ queue_work(ctx.usb_bam_wq, &event_info->event_w);
+ else if (event_info->type == USB_BAM_EVENT_WAKEUP &&
+ ctx.is_bam_inactivity[pipe_connect->bam_type]) {
+
+ /*
+		 * The sps wake event is per pipe, so usb_bam_wake_cb is
+		 * called per pipe. However, we want to treat the wake
+		 * event as a single event for all of the pipes.
+		 * Therefore, the first pipe that wakes up is considered
+		 * the global bam wake event.
+ */
+ ctx.is_bam_inactivity[pipe_connect->bam_type] = false;
+ queue_work(ctx.usb_bam_wq, &event_info->event_w);
+ }
+
+ spin_unlock(&usb_bam_lock);
}
static void usb_bam_sm_work(struct work_struct *w)
@@ -675,15 +986,15 @@
pr_debug("%s: current state: %d\n", __func__,
peer_handshake_info.state);
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
switch (peer_handshake_info.state) {
case USB_BAM_SM_INIT:
if (peer_handshake_info.client_ready) {
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
smsm_change_state(SMSM_APPS_STATE, 0,
SMSM_USB_PLUG_UNPLUG);
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.state = USB_BAM_SM_PLUG_NOTIFIED;
}
break;
@@ -695,19 +1006,19 @@
break;
case USB_BAM_SM_PLUG_ACKED:
if (!peer_handshake_info.client_ready) {
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
smsm_change_state(SMSM_APPS_STATE,
SMSM_USB_PLUG_UNPLUG, 0);
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.state = USB_BAM_SM_UNPLUG_NOTIFIED;
}
break;
case USB_BAM_SM_UNPLUG_NOTIFIED:
if (peer_handshake_info.ack_received) {
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.reset_event.
callback(peer_handshake_info.reset_event.param);
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.state = USB_BAM_SM_INIT;
peer_handshake_info.ack_received = 0;
}
@@ -716,12 +1027,12 @@
if (peer_handshake_info.pending_work) {
peer_handshake_info.pending_work--;
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
queue_work(ctx.usb_bam_wq,
&peer_handshake_info.reset_event.event_w);
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
}
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
}
static void usb_bam_ack_toggle_cb(void *priv,
@@ -730,29 +1041,29 @@
static int last_processed_state;
int current_state;
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
current_state = new_state & SMSM_USB_PLUG_UNPLUG;
if (current_state == last_processed_state) {
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
return;
}
last_processed_state = current_state;
peer_handshake_info.ack_received = true;
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
if (!queue_work(ctx.usb_bam_wq,
&peer_handshake_info.reset_event.event_w)) {
- spin_lock(&usb_bam_lock);
+ spin_lock(&usb_bam_peer_handshake_info_lock);
peer_handshake_info.pending_work++;
- spin_unlock(&usb_bam_lock);
+ spin_unlock(&usb_bam_peer_handshake_info_lock);
}
}
-int usb_bam_register_wake_cb(u8 idx, int (*callback)(void *user),
- void *param)
+static int __usb_bam_register_wake_cb(u8 idx, int (*callback)(void *user),
+ void *param, bool trigger_cb_per_pipe)
{
struct sps_pipe *pipe = ctx.usb_bam_sps.sps_pipes[idx];
struct sps_connect *sps_connection;
@@ -767,8 +1078,11 @@
pipe = ctx.usb_bam_sps.sps_pipes[idx];
sps_connection = &ctx.usb_bam_sps.sps_connections[idx];
pipe_connect = &usb_bam_connections[idx];
- wake_event_info = &pipe_connect->wake_event;
+ wake_event_info = &pipe_connect->event;
+ wake_event_info->type = (trigger_cb_per_pipe ?
+ USB_BAM_EVENT_WAKEUP_PIPE :
+ USB_BAM_EVENT_WAKEUP);
wake_event_info->param = param;
wake_event_info->callback = callback;
wake_event_info->event.mode = SPS_TRIGGER_CALLBACK;
@@ -794,6 +1108,12 @@
return 0;
}
+int usb_bam_register_wake_cb(u8 idx, int (*callback)(void *user),
+ void *param)
+{
+ return __usb_bam_register_wake_cb(idx, callback, param, true);
+}
+
int usb_bam_register_peer_reset_cb(int (*callback)(void *), void *param)
{
u32 ret = 0;
@@ -831,44 +1151,121 @@
pipe_connect = &usb_bam_connections[idx];
if (!pipe_connect->enabled) {
- pr_debug("%s: connection %d isn't enabled\n",
+ pr_err("%s: connection %d isn't enabled\n",
__func__, idx);
return 0;
}
+ spin_lock(&usb_bam_lock);
+
ret = disconnect_pipe(idx);
if (ret) {
- pr_err("%s: src pipe connection failure\n", __func__);
+ pr_err("%s: src pipe disconnection failure\n", __func__);
+ spin_unlock(&usb_bam_lock);
return ret;
}
pipe_connect->enabled = 0;
+
if (ctx.pipes_enabled_per_bam[pipe_connect->bam_type] == 0)
pr_err("%s: wrong pipes enabled counter for bam_type=%d\n",
__func__, pipe_connect->bam_type);
else
ctx.pipes_enabled_per_bam[pipe_connect->bam_type] -= 1;
+ spin_unlock(&usb_bam_lock);
+
return 0;
}
+static void usb_bam_start_lpm(void)
+{
+ struct usb_phy *trans = usb_get_transceiver();
+ BUG_ON(trans == NULL);
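+
+	/* The handshake is over; if LPM was deferred by msm_bam_lpm_ok(), enter it now */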
+ spin_lock(&usb_bam_lock);
+ lpm_wait_handshake = 0;
+ if (sched_lpm) {
+ pr_debug("%s: Going to LPM\n", __func__);
+ spin_unlock(&usb_bam_lock);
+ pm_runtime_resume(trans->dev);
+ pm_runtime_put_noidle(trans->dev);
+ pm_runtime_suspend(trans->dev);
+ return;
+ }
+ spin_unlock(&usb_bam_lock);
+}
+
+static void wait_for_prod_release(enum usb_bam cur_bam)
+{
+ int ret;
+
+ if (cur_cons_state[cur_bam] == IPA_RM_RESOURCE_RELEASED)
+ pr_debug("%s consumer already released\n", __func__);
+ if (cur_prod_state[cur_bam] == IPA_RM_RESOURCE_RELEASED)
+ pr_debug("%s producer already released\n", __func__);
+
+ init_completion(&prod_released[cur_bam]);
+ init_completion(&cons_released[cur_bam]);
+ pr_debug("%s: Releasing %s_PROD\n", __func__,
+ bam_enable_strings[cur_bam]);
+ ret = ipa_rm_release_resource(ipa_rm_resource_prod[cur_bam]);
+ if (!ret) {
+ pr_debug("%s: Released without waiting\n", __func__);
+ cur_prod_state[cur_bam] = IPA_RM_RESOURCE_RELEASED;
+ complete_all(&prod_released[cur_bam]);
+ } else if (ret == -EINPROGRESS) {
+ pr_debug("%s: Waiting for PROD_RELEASED\n", __func__);
+ if (!wait_for_completion_timeout(&prod_released[cur_bam],
+ USB_BAM_TIMEOUT))
+ pr_err("%s: Timeout waiting for PROD_RELEASED\n",
+ __func__);
+ } else
+		pr_err("%s: ipa_rm_release_resource ret = %d\n", __func__, ret);
+}
+
+static void wait_for_cons_release(enum usb_bam cur_bam)
+{
+ pr_debug("%s: Waiting for CONS release\n", __func__);
+ if (cur_prod_state[cur_bam] != IPA_RM_RESOURCE_RELEASED) {
+ if (!wait_for_completion_timeout(&cons_released[cur_bam],
+ USB_BAM_TIMEOUT))
+			pr_err("%s: Timeout waiting for CONS_RELEASE\n",
+ __func__);
+ } else
+ pr_debug("%s Didn't need to wait for CONS release\n",
+ __func__);
+}
+
int usb_bam_disconnect_ipa(struct usb_bam_connect_ipa_params *ipa_params)
{
int ret;
u8 idx;
struct usb_bam_pipe_connect *pipe_connect;
struct sps_connect *sps_connection;
+ enum usb_bam cur_bam;
+
+ if (!ipa_params->prod_clnt_hdl && !ipa_params->cons_clnt_hdl) {
+		pr_err("%s: Both handles are missing\n", __func__);
+ return -EINVAL;
+ }
+
+ pr_debug("%s: Starting disconnect sequence\n", __func__);
+	/* Prevent the USB core from entering lpm before we finish our handshake */
if (ipa_params->prod_clnt_hdl) {
- /* close USB -> IPA pipe */
idx = ipa_params->dst_idx;
+ pipe_connect = &usb_bam_connections[idx];
+
+ /* Do the release handshake with the A2 via RM */
+ cur_bam = pipe_connect->bam_type;
+ wait_for_prod_release(cur_bam);
+ /* close USB -> IPA pipe */
ret = ipa_disconnect(ipa_params->prod_clnt_hdl);
if (ret) {
pr_err("%s: dst pipe disconnection failure\n",
__func__);
return ret;
}
- pipe_connect = &usb_bam_connections[idx];
sps_connection = &ctx.usb_bam_sps.sps_connections[idx];
sps_connection->data.phys_base = 0;
sps_connection->desc.phys_base = 0;
@@ -880,16 +1277,20 @@
return ret;
}
}
+
if (ipa_params->cons_clnt_hdl) {
- /* close IPA -> USB pipe */
idx = ipa_params->src_idx;
+ pipe_connect = &usb_bam_connections[idx];
+ cur_bam = pipe_connect->bam_type;
+ wait_for_cons_release(cur_bam);
+ /* close IPA -> USB pipe */
ret = ipa_disconnect(ipa_params->cons_clnt_hdl);
if (ret) {
pr_err("%s: src pipe disconnection failure\n",
__func__);
return ret;
}
- pipe_connect = &usb_bam_connections[idx];
+
sps_connection = &ctx.usb_bam_sps.sps_connections[idx];
sps_connection->data.phys_base = 0;
sps_connection->desc.phys_base = 0;
@@ -900,9 +1301,15 @@
__func__, idx);
return ret;
}
+ pr_debug("%s: Notify CONS release\n", __func__);
+
+ if (cur_cons_state[cur_bam] == IPA_RM_RESOURCE_RELEASED)
+ ipa_rm_notify_completion(IPA_RM_RESOURCE_RELEASED,
+ ipa_rm_resource_cons[cur_bam]);
+ pr_debug("%s Ended disconnect sequence\n", __func__);
+ usb_bam_start_lpm();
}
- ipa_rm_release_resource(IPA_RM_RESOURCE_USB_PROD);
return 0;
}
EXPORT_SYMBOL(usb_bam_disconnect_ipa);
@@ -965,6 +1372,53 @@
return ret;
}
+static void usb_bam_sps_events(enum sps_callback_case sps_cb_case, void *user)
+{
+ int i;
+ enum usb_bam bam;
+ struct usb_bam_pipe_connect *pipe_connect;
+ struct usb_bam_event_info *event_info;
+
+ switch (sps_cb_case) {
+ case SPS_CALLBACK_BAM_TIMER_IRQ:
+
+		pr_debug("%s: received SPS_CALLBACK_BAM_TIMER_IRQ\n", __func__);
+
+ spin_lock(&usb_bam_lock);
+
+ bam = get_bam_type_from_core_name((char *)user);
+ ctx.is_bam_inactivity[bam] = true;
+		pr_debug("%s: Inactivity happened on bam=%s,%d\n", __func__,
+ (char *)user, bam);
+
+ for (i = 0; i < ctx.max_connections; i++) {
+ pipe_connect = &usb_bam_connections[i];
+
+ /*
+			 * Notify about the inactivity only once, since it is
+			 * global for all pipes on this bam.
+ */
+ if (pipe_connect->bam_type == bam) {
+ event_info = &pipe_connect->event;
+ event_info->type = USB_BAM_EVENT_INACTIVITY;
+ event_info->param = pipe_connect->priv;
+ event_info->callback =
+ pipe_connect->inactivity_notify;
+ queue_work(ctx.usb_bam_wq,
+ &event_info->event_w);
+ break;
+ }
+ }
+
+ spin_unlock(&usb_bam_lock);
+
+ break;
+ default:
+ pr_debug("%s:received sps_cb_case=%d\n", __func__,
+ (int)sps_cb_case);
+ }
+}
+
static struct msm_usb_bam_platform_data *usb_bam_dt_to_pdata(
struct platform_device *pdev)
{
@@ -1174,6 +1628,9 @@
props.summing_threshold = USB_THRESHOLD;
props.event_threshold = USB_THRESHOLD;
props.num_pipes = pdata->usb_bam_num_pipes;
+ props.callback = usb_bam_sps_events;
+ props.user = bam_enable_strings[bam_idx];
+
/*
* HSUSB and HSIC Cores don't support RESET ACK signal to BAMs
* Hence, let BAM to ignore acknowledge from USB while resetting PIPE
@@ -1234,6 +1691,76 @@
return 0;
}
+static ssize_t
+usb_bam_show_inactivity_timer(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ char *buff = buf;
+ int i;
+
+ spin_lock(&usb_bam_lock);
+
+ for (i = 0; i < ARRAY_SIZE(bam_enable_strings); i++) {
+ buff += snprintf(buff, PAGE_SIZE, "%s: %dms\n",
+ bam_enable_strings[i],
+ ctx.inactivity_timer_ms[i]);
+ }
+
+ spin_unlock(&usb_bam_lock);
+
+ return buff - buf;
+}
+
+static ssize_t usb_bam_store_inactivity_timer(struct device *dev,
+ struct device_attribute *attr,
+ const char *buff, size_t count)
+{
+ char buf[USB_BAM_MAX_STR_LEN];
+ char *trimmed_buf, *bam_str, *bam_name, *timer;
+ int timer_d;
+ enum usb_bam bam;
+
+ if (strnstr(buff, "help", USB_BAM_MAX_STR_LEN)) {
+ pr_info("Usage: <bam_name> <ms>,<bam_name> <ms>,...\n");
+ pr_info("\tbam_name: [%s, %s, %s]\n",
+ bam_enable_strings[SSUSB_BAM],
+ bam_enable_strings[HSUSB_BAM],
+ bam_enable_strings[HSIC_BAM]);
+ pr_info("\tms: time in ms. Use 0 to disable timer\n");
+ return count;
+ }
+
+ strlcpy(buf, buff, sizeof(buf));
+ trimmed_buf = strim(buf);
+
+ while (trimmed_buf) {
+ bam_str = strsep(&trimmed_buf, ",");
+ if (bam_str) {
+ bam_name = strsep(&bam_str, " ");
+ bam = get_bam_type_from_core_name(bam_name);
+
+ timer = strsep(&bam_str, " ");
+ sscanf(timer, "%d", &timer_d);
+
+ spin_lock(&usb_bam_lock);
+
+ ctx.inactivity_timer_ms[bam] = timer_d;
+ /* Apply new timer setting if bam has running pipes */
+ if (ctx.pipes_enabled_per_bam[bam] > 0)
+ usb_bam_set_inactivity_timer(bam);
+
+ spin_unlock(&usb_bam_lock);
+ }
+ }
+
+ return count;
+}
+
+static DEVICE_ATTR(inactivity_timer, S_IWUSR | S_IRUSR,
+ usb_bam_show_inactivity_timer,
+ usb_bam_store_inactivity_timer);
+
+
static int usb_bam_probe(struct platform_device *pdev)
{
int ret, i;
@@ -1241,10 +1768,14 @@
dev_dbg(&pdev->dev, "usb_bam_probe\n");
+ ret = device_create_file(&pdev->dev, &dev_attr_inactivity_timer);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to create fs node\n");
+ return ret;
+ }
+
ctx.mem_clk = devm_clk_get(&pdev->dev, "mem_clk");
if (IS_ERR(ctx.mem_clk))
-
-
dev_dbg(&pdev->dev, "failed to get mem_clock\n");
ctx.mem_iface_clk = devm_clk_get(&pdev->dev, "mem_iface_clk");
@@ -1269,14 +1800,27 @@
for (i = 0; i < ctx.max_connections; i++) {
usb_bam_connections[i].enabled = 0;
- INIT_WORK(&usb_bam_connections[i].wake_event.event_w,
+ INIT_WORK(&usb_bam_connections[i].event.event_w,
usb_bam_work);
}
- for (i = 0; i < MAX_BAMS; i++)
+ for (i = 0; i < MAX_BAMS; i++) {
ctx.pipes_enabled_per_bam[i] = 0;
+ ctx.inactivity_timer_ms[i] = 0;
+ ctx.is_bam_inactivity[i] = false;
+ init_completion(&prod_avail[i]);
+ complete(&prod_avail[i]);
+ init_completion(&cons_avail[i]);
+ complete(&cons_avail[i]);
+ init_completion(&cons_released[i]);
+ complete(&cons_released[i]);
+ init_completion(&prod_released[i]);
+ complete(&prod_released[i]);
+ cur_prod_state[i] = IPA_RM_RESOURCE_RELEASED;
+ cur_cons_state[i] = IPA_RM_RESOURCE_RELEASED;
+ }
- spin_lock_init(&usb_bam_lock);
+ spin_lock_init(&usb_bam_peer_handshake_info_lock);
INIT_WORK(&peer_handshake_info.reset_event.event_w, usb_bam_sm_work);
ctx.usb_bam_wq = alloc_workqueue("usb_bam_wq",
@@ -1293,6 +1837,8 @@
}
usb_bam_ipa_create_resources();
+ spin_lock_init(&usb_bam_lock);
+
return ret;
}
@@ -1361,6 +1907,22 @@
}
EXPORT_SYMBOL(usb_bam_get_connection_idx);
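+/*
+ * Returns 1 when low power mode may be entered right away, or 0 (and marks
+ * sched_lpm) to defer LPM until the pending wait handshake completes.
+ */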
+bool msm_bam_lpm_ok(void)
+{
+ spin_lock(&usb_bam_lock);
+ if (lpm_wait_handshake) {
+ sched_lpm = 1;
+ spin_unlock(&usb_bam_lock);
+ pr_err("%s: Scheduling LPM for later\n", __func__);
+ return 0;
+ } else {
+ spin_unlock(&usb_bam_lock);
+ pr_err("%s: Going to LPM now\n", __func__);
+ return 1;
+ }
+}
+EXPORT_SYMBOL(msm_bam_lpm_ok);
+
static int usb_bam_remove(struct platform_device *pdev)
{
destroy_workqueue(ctx.usb_bam_wq);
diff --git a/drivers/power/battery_current_limit.c b/drivers/power/battery_current_limit.c
index ecda153..69fa4a8 100644
--- a/drivers/power/battery_current_limit.c
+++ b/drivers/power/battery_current_limit.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -120,8 +120,10 @@
if (psy == NULL) {
psy = power_supply_get_by_name("battery");
- if (psy == NULL)
+ if (psy == NULL) {
+ pr_err("failed to get ps battery\n");
return;
+ }
}
if (psy->get_property(psy, POWER_SUPPLY_PROP_CURRENT_NOW, &ret))
@@ -143,6 +145,7 @@
gbcl->bcl_imax_ma = imax_ma;
gbcl->bcl_vbat_mv = vbatt_mv;
+ pr_debug("ibatt %d, imax %d, vbatt %d\n", ibatt_ma, imax_ma, vbatt_mv);
if (gbcl->bcl_threshold_mode[BCL_IBAT_IMAX_THRESHOLD_TYPE_HIGH]
== BCL_IBAT_IMAX_THRESHOLD_ENABLED) {
imax_high_threshold =
@@ -179,8 +182,7 @@
bcl_calculate_imax_trigger();
 /* restart the delay work for calculating imax */
schedule_delayed_work(&bcl->bcl_imax_work,
- round_jiffies_relative(msecs_to_jiffies
- (bcl->bcl_poll_interval_msec)));
+ msecs_to_jiffies(bcl->bcl_poll_interval_msec));
}
}
diff --git a/drivers/power/bq28400_battery.c b/drivers/power/bq28400_battery.c
index beab4e2..79ed2b1 100644
--- a/drivers/power/bq28400_battery.c
+++ b/drivers/power/bq28400_battery.c
@@ -164,6 +164,9 @@
u16 subcmd;
};
+static int fake_battery = -EINVAL;
+module_param(fake_battery, int, 0644);
+
#define BQ28400_DEBUG_REG(x) {#x, SBS_##x, 0}
#define BQ28400_DEBUG_SUBREG(x, y) {#y, SBS_##x, SUBCMD_##y}
@@ -403,6 +406,11 @@
{
int percentage = 0;
+ if (fake_battery != -EINVAL) {
+ pr_debug("Reporting Fake SOC = %d\n", fake_battery);
+ return fake_battery;
+ }
+
/* This register is only 1 byte */
percentage = i2c_smbus_read_byte_data(client, SBS_RSOC);
diff --git a/drivers/power/pm8921-charger.c b/drivers/power/pm8921-charger.c
index e9cf973..1ad7f21 100644
--- a/drivers/power/pm8921-charger.c
+++ b/drivers/power/pm8921-charger.c
@@ -83,6 +83,7 @@
#define CHG_COMP_OVR 0x20A
#define IUSB_FINE_RES 0x2B6
#define OVP_USB_UVD 0x2B7
+#define PM8921_USB_TRIM_SEL 0x339
/* check EOC every 10 seconds */
#define EOC_CHECK_PERIOD_MS 10000
@@ -213,6 +214,8 @@
* @alarm_high_mv: the battery alarm voltage high
* @cool_temp_dc: the cool temp threshold in deciCelcius
* @warm_temp_dc: the warm temp threshold in deciCelcius
+ * @hysteresis_temp_dc: the hysteresis between temp thresholds in
+ * deciCelcius
* @resume_voltage_delta: the voltage delta from vdd max at which the
* battery should resume charging
* @term_current: The charging based term current
@@ -235,6 +238,7 @@
unsigned int alarm_high_mv;
int cool_temp_dc;
int warm_temp_dc;
+ int hysteresis_temp_dc;
unsigned int temp_check_period;
unsigned int cool_bat_chg_current;
unsigned int warm_bat_chg_current;
@@ -308,6 +312,7 @@
static int thermal_mitigation;
static struct pm8921_chg_chip *the_chip;
+static void check_temp_thresholds(struct pm8921_chg_chip *chip);
#define LPM_ENABLE_BIT BIT(2)
static int pm8921_chg_set_lpm(struct pm8921_chg_chip *chip, int enable)
@@ -773,6 +778,44 @@
};
/* USB Trim tables */
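+/*
+ * PM8921 USB trim values; table 1 or table 2 is chosen at init based on the
+ * PM8921_USB_TRIM_SEL register bit read in determine_initial_state().
+ */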
+static int usb_trim_pm8921_table_1[USB_TRIM_ENTRIES] = {
+ 0x0,
+ 0x0,
+ -0x5,
+ 0x0,
+ -0x7,
+ 0x0,
+ -0x9,
+ -0xA,
+ 0x0,
+ 0x0,
+ -0xE,
+ 0x0,
+ -0xF,
+ 0x0,
+ -0x10,
+ 0x0
+};
+
+static int usb_trim_pm8921_table_2[USB_TRIM_ENTRIES] = {
+ 0x0,
+ 0x0,
+ -0x2,
+ 0x0,
+ -0x4,
+ 0x0,
+ -0x4,
+ -0x5,
+ 0x0,
+ 0x0,
+ -0x6,
+ 0x0,
+ -0x6,
+ 0x0,
+ -0x6,
+ 0x0
+};
+
static int usb_trim_8038_table[USB_TRIM_ENTRIES] = {
0x0,
0x0,
@@ -840,6 +883,8 @@
#define REG_USB_OVP_TRIM_ORIG_MSB 0x09C
#define REG_USB_OVP_TRIM_PM8917 0x2B5
#define REG_USB_OVP_TRIM_PM8917_BIT BIT(0)
+#define USB_TRIM_MAX_DATA_PM8917 0x3F
+#define USB_TRIM_POLARITY_PM8917_BIT BIT(6)
static int pm_chg_usb_trim(struct pm8921_chg_chip *chip, int index)
{
u8 temp, sbi_config, msb, lsb, mask;
@@ -3162,6 +3207,22 @@
struct delayed_work *dwork = to_delayed_work(work);
struct pm8921_chg_chip *chip = container_of(dwork,
struct pm8921_chg_chip, update_heartbeat_work);
+ bool chg_present = chip->usb_present || chip->dc_present;
+
+ /* for battery health when charger is not connected */
+ if (chip->btc_override && !chg_present)
+ schedule_delayed_work(&chip->btc_override_work,
+ round_jiffies_relative(msecs_to_jiffies
+ (chip->btc_delay_ms)));
+
+ /*
+ * check temp thresholds when charger is present and
+	 * battery is FULL. The temperature here can impact
+ * the charging restart conditions.
+ */
+ if (chip->btc_override && chg_present &&
+ !wake_lock_active(&chip->eoc_wake_lock))
+ check_temp_thresholds(chip);
power_supply_changed(&chip->batt_psy);
if (chip->recent_reported_soc <= 20)
@@ -3317,7 +3378,7 @@
if (chip->warm_temp_dc != INT_MIN) {
if (chip->is_bat_warm
- && temp < chip->warm_temp_dc - TEMP_HYSTERISIS_DECIDEGC)
+ && temp < chip->warm_temp_dc - chip->hysteresis_temp_dc)
battery_warm(false);
else if (!chip->is_bat_warm && temp >= chip->warm_temp_dc)
battery_warm(true);
@@ -3325,7 +3386,7 @@
if (chip->cool_temp_dc != INT_MIN) {
if (chip->is_bat_cool
- && temp > chip->cool_temp_dc + TEMP_HYSTERISIS_DECIDEGC)
+ && temp > chip->cool_temp_dc + chip->hysteresis_temp_dc)
battery_cool(false);
else if (!chip->is_bat_cool && temp <= chip->cool_temp_dc)
battery_cool(true);
@@ -3543,7 +3604,8 @@
temp = pm_chg_get_rt_status(chip, BATTTEMP_HOT_IRQ);
if (temp) {
- if (decidegc < chip->btc_override_hot_decidegc)
+ if (decidegc < chip->btc_override_hot_decidegc -
+ chip->hysteresis_temp_dc)
/* stop forcing batt hot */
rc = pm_chg_override_hot(chip, 0);
if (rc)
@@ -3558,7 +3620,8 @@
temp = pm_chg_get_rt_status(chip, BATTTEMP_COLD_IRQ);
if (temp) {
- if (decidegc > chip->btc_override_cold_decidegc)
+ if (decidegc > chip->btc_override_cold_decidegc +
+ chip->hysteresis_temp_dc)
/* stop forcing batt cold */
rc = pm_chg_override_cold(chip, 0);
if (rc)
@@ -3622,7 +3685,8 @@
end = is_charging_finished(chip, vbat_batt_terminal_uv, ichg_meas_ma);
- if (end == CHG_NOT_IN_PROGRESS) {
+ if (end == CHG_NOT_IN_PROGRESS && (!chip->btc_override ||
+ !(chip->usb_present || chip->dc_present))) {
count = 0;
goto eoc_worker_stop;
}
@@ -3650,7 +3714,8 @@
chgdone_irq_handler(chip->pmic_chg_irq[CHGDONE_IRQ], chip);
} else {
check_temp_thresholds(chip);
- adjust_vdd_max_for_fastchg(chip, vbat_batt_terminal_uv);
+ if (end != CHG_NOT_IN_PROGRESS)
+ adjust_vdd_max_for_fastchg(chip, vbat_batt_terminal_uv);
pr_debug("EOC count = %d\n", count);
schedule_delayed_work(&chip->eoc_work,
round_jiffies_relative(msecs_to_jiffies
@@ -3659,9 +3724,9 @@
}
eoc_worker_stop:
- wake_unlock(&chip->eoc_wake_lock);
/* set the vbatdet back, in case it was changed to trigger charging */
set_appropriate_vbatdet(chip);
+ wake_unlock(&chip->eoc_wake_lock);
}
/**
@@ -3781,11 +3846,14 @@
}
}
+#define PM8921_USB_TRIM_SEL_BIT BIT(6)
/* determines the initial present states */
static void __devinit determine_initial_state(struct pm8921_chg_chip *chip)
{
int fsm_state;
int is_fast_chg;
+ int rc = 0;
+ u8 trim_sel_reg = 0, regsbi;
chip->dc_present = !!is_dc_chg_plugged_in(chip);
chip->usb_present = !!is_usb_chg_plugged_in(chip);
@@ -3795,6 +3863,11 @@
schedule_delayed_work(&chip->unplug_check_work,
msecs_to_jiffies(UNPLUG_CHECK_WAIT_PERIOD_MS));
pm8921_chg_enable_irq(chip, CHG_GONE_IRQ);
+
+ if (chip->btc_override)
+ schedule_delayed_work(&chip->btc_override_work,
+ round_jiffies_relative(msecs_to_jiffies
+ (chip->btc_delay_ms)));
}
pm8921_chg_enable_irq(chip, DCIN_VALID_IRQ);
@@ -3843,10 +3916,26 @@
fsm_state);
/* Determine which USB trim column to use */
- if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8917)
+ if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8917) {
chip->usb_trim_table = usb_trim_8917_table;
- else if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8038)
+ } else if (pm8xxx_get_version(chip->dev->parent) ==
+ PM8XXX_VERSION_8038) {
chip->usb_trim_table = usb_trim_8038_table;
+ } else if (pm8xxx_get_version(chip->dev->parent) ==
+ PM8XXX_VERSION_8921) {
+		rc = pm8xxx_readb(chip->dev->parent, REG_SBI_CONFIG, &regsbi);
+ rc |= pm8xxx_writeb(chip->dev->parent, REG_SBI_CONFIG, 0x5E);
+ rc |= pm8xxx_readb(chip->dev->parent, PM8921_USB_TRIM_SEL,
+ &trim_sel_reg);
+ rc |= pm8xxx_writeb(chip->dev->parent, REG_SBI_CONFIG, regsbi);
+ if (rc)
+ pr_err("Failed to read trim sel register rc=%d\n", rc);
+
+ if (trim_sel_reg & PM8921_USB_TRIM_SEL_BIT)
+ chip->usb_trim_table = usb_trim_pm8921_table_1;
+ else
+ chip->usb_trim_table = usb_trim_pm8921_table_2;
+ }
}
struct pm_chg_irq_init_data {
@@ -4574,12 +4663,12 @@
int rc;
struct pm8921_chg_chip *chip = dev_get_drvdata(dev);
- pm8921_chg_force_19p2mhz_clk(chip);
-
rc = pm8921_chg_set_lpm(chip, 0);
if (rc)
pr_err("Failed to set lpm rc=%d\n", rc);
+ pm8921_chg_force_19p2mhz_clk(chip);
+
rc = pm_chg_masked_write(chip, CHG_CNTRL, VREF_BATT_THERM_FORCE_ON,
VREF_BATT_THERM_FORCE_ON);
if (rc)
@@ -4600,6 +4689,8 @@
is_usb_chg_plugged_in(the_chip)))
schedule_delayed_work(&chip->btc_override_work, 0);
+ schedule_delayed_work(&chip->update_heartbeat_work, 0);
+
return 0;
}
@@ -4607,6 +4698,8 @@
{
struct pm8921_chg_chip *chip = dev_get_drvdata(dev);
+ cancel_delayed_work_sync(&chip->update_heartbeat_work);
+
if (chip->btc_override)
cancel_delayed_work_sync(&chip->btc_override_work);
@@ -4663,6 +4756,11 @@
else
chip->warm_temp_dc = INT_MIN;
+ if (pdata->hysteresis_temp)
+ chip->hysteresis_temp_dc = pdata->hysteresis_temp * 10;
+ else
+ chip->hysteresis_temp_dc = TEMP_HYSTERISIS_DECIDEGC;
+
chip->temp_check_period = pdata->temp_check_period;
chip->max_bat_chg_current = pdata->max_bat_chg_current;
/* Assign to corresponding module parameter */
diff --git a/drivers/power/qpnp-bms.c b/drivers/power/qpnp-bms.c
index fd42c47..866b921 100644
--- a/drivers/power/qpnp-bms.c
+++ b/drivers/power/qpnp-bms.c
@@ -375,7 +375,7 @@
#define CC_READING_RESOLUTION_N 542535
#define CC_READING_RESOLUTION_D 100000
-static int cc_reading_to_uv(int16_t reading)
+static s64 cc_reading_to_uv(s64 reading)
{
return div_s64(reading * CC_READING_RESOLUTION_N,
CC_READING_RESOLUTION_D);
@@ -582,7 +582,7 @@
return false;
}
-static bool is_batfet_open(struct qpnp_bms_chip *chip)
+static bool is_battery_full(struct qpnp_bms_chip *chip)
{
union power_supply_propval ret = {0,};
@@ -610,18 +610,32 @@
iadc_channel = chip->use_external_rsense ?
EXTERNAL_RSENSE : INTERNAL_RSENSE;
- rc = qpnp_iadc_vadc_sync_read(iadc_channel, &i_result,
- VBAT_SNS, &v_result);
- if (rc) {
- pr_err("vadc read failed with rc: %d\n", rc);
- return rc;
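+	/*
+	 * When the battery reports full, read current and voltage separately;
+	 * otherwise take a synchronized IADC/VADC sample.
+	 */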
+ if (is_battery_full(chip)) {
+ rc = get_battery_current(chip, ibat_ua);
+ if (rc) {
+ pr_err("bms current read failed with rc: %d\n", rc);
+ return rc;
+ }
+ rc = qpnp_vadc_read(VBAT_SNS, &v_result);
+ if (rc) {
+ pr_err("vadc read failed with rc: %d\n", rc);
+ return rc;
+ }
+ *vbat_uv = (int)v_result.physical;
+ } else {
+ rc = qpnp_iadc_vadc_sync_read(iadc_channel, &i_result,
+ VBAT_SNS, &v_result);
+ if (rc) {
+ pr_err("adc sync read failed with rc: %d\n", rc);
+ return rc;
+ }
+ /*
+ * reverse the current read by the iadc, since the bms uses
+ * flipped battery current polarity.
+ */
+ *ibat_ua = -1 * (int)i_result.result_ua;
+ *vbat_uv = (int)v_result.physical;
}
- /*
- * reverse the current read by the iadc, since the bms uses
- * flipped battery current polarity.
- */
- *ibat_ua = -1 * (int)i_result.result_ua;
- *vbat_uv = (int)v_result.physical;
return 0;
}
@@ -774,14 +788,6 @@
return (fcc_uah * pc) / 100;
}
-#define CC_RESOLUTION_N 542535
-#define CC_RESOLUTION_D 100000
-
-static s64 cc_to_uv(s64 cc)
-{
- return div_s64(cc * CC_RESOLUTION_N, CC_RESOLUTION_D);
-}
-
#define CC_READING_TICKS 56
#define SLEEP_CLK_HZ 32764
#define SECONDS_PER_HOUR 3600
@@ -817,7 +823,7 @@
qpnp_iadc_get_gain_and_offset(&calibration);
pr_debug("cc = %lld\n", cc);
- cc_voltage_uv = cc_to_uv(cc);
+ cc_voltage_uv = cc_reading_to_uv(cc);
cc_voltage_uv = cc_adjust_for_gain(cc_voltage_uv,
calibration.gain_raw
- calibration.offset_raw);
@@ -1409,7 +1415,7 @@
goto out;
}
- if (ibat_ua < 0 && !is_batfet_open(chip)) {
+ if (ibat_ua < 0 && !is_battery_full(chip)) {
soc = charging_adjustments(chip, params, soc, vbat_uv, ibat_ua,
batt_temp);
goto out;
diff --git a/drivers/power/qpnp-charger.c b/drivers/power/qpnp-charger.c
index 331c7f1..993b2cb 100644
--- a/drivers/power/qpnp-charger.c
+++ b/drivers/power/qpnp-charger.c
@@ -85,7 +85,11 @@
#define CHGR_BUCK_BCK_VBAT_REG_MODE 0x74
#define MISC_REVISION2 0x01
#define USB_OVP_CTL 0x42
+#define USB_CHG_GONE_REV_BST 0xED
+#define BUCK_VCHG_OV 0x77
+#define BUCK_TEST_SMBC_MODES 0xE6
#define SEC_ACCESS 0xD0
+#define BAT_IF_VREF_BAT_THM_CTRL 0x4A
#define REG_OFFSET_PERP_SUBTYPE 0x05
/* SMBB peripheral subtype values */
@@ -105,6 +109,13 @@
#define SMBBP_BOOST_SUBTYPE 0x36
#define SMBBP_MISC_SUBTYPE 0x37
+/* SMBCL peripheral subtype values */
+#define SMBCL_CHGR_SUBTYPE 0x41
+#define SMBCL_BUCK_SUBTYPE 0x42
+#define SMBCL_BAT_IF_SUBTYPE 0x43
+#define SMBCL_USB_CHGPTH_SUBTYPE 0x44
+#define SMBCL_MISC_SUBTYPE 0x47
+
#define QPNP_CHARGER_DEV_NAME "qcom,qpnp-charger"
/* Status bits and masks */
@@ -113,6 +124,8 @@
#define CHGR_ON_BAT_FORCE_BIT BIT(0)
#define USB_VALID_DEB_20MS 0x03
#define BUCK_VBAT_REG_NODE_SEL_BIT BIT(0)
+#define VREF_BATT_THERM_FORCE_ON 0xC0
+#define VREF_BAT_THM_ENABLED_FSM 0x80
/* Interrupt definitions */
/* smbb_chg_interrupts */
@@ -225,7 +238,7 @@
u16 freq_base;
unsigned int usbin_valid_irq;
unsigned int dcin_valid_irq;
- unsigned int chg_done_irq;
+ unsigned int chg_gone_irq;
unsigned int chg_fastchg_irq;
unsigned int chg_trklchg_irq;
unsigned int chg_failed_irq;
@@ -266,6 +279,7 @@
uint32_t flags;
struct qpnp_adc_tm_btm_param adc_param;
struct work_struct adc_measure_work;
+ struct delayed_work arb_stop_work;
};
static struct of_device_id qpnp_charger_match_table[] = {
@@ -518,7 +532,44 @@
enable ? USB_SUSPEND_BIT : 0, 1);
}
-static void qpnp_bat_if_adc_measure_work(struct work_struct *work)
+static int
+qpnp_chg_charge_en(struct qpnp_chg_chip *chip, int enable)
+{
+ return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
+ CHGR_CHG_EN,
+ enable ? CHGR_CHG_EN : 0, 1);
+}
+
+static int
+qpnp_chg_force_run_on_batt(struct qpnp_chg_chip *chip, int disable)
+{
+ /* Don't run on battery for batteryless hardware */
+ if (chip->use_default_batt_values)
+ return 0;
+ /* Don't force on battery if battery is not present */
+ if (!qpnp_chg_is_batt_present(chip))
+ return 0;
+
+ /* This bit forces the charger to run off of the battery rather
+ * than a connected charger */
+ return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
+ CHGR_ON_BAT_FORCE_BIT,
+ disable ? CHGR_ON_BAT_FORCE_BIT : 0, 1);
+}
+
+static void
+qpnp_arb_stop_work(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct qpnp_chg_chip *chip = container_of(dwork,
+ struct qpnp_chg_chip, arb_stop_work);
+
+ qpnp_chg_charge_en(chip, !chip->charging_disabled);
+ qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+}
+
+static void
+qpnp_bat_if_adc_measure_work(struct work_struct *work)
{
struct qpnp_chg_chip *chip = container_of(work,
struct qpnp_chg_chip, adc_measure_work);
@@ -527,6 +578,23 @@
pr_err("request ADC error\n");
}
+#define ARB_STOP_WORK_MS 1000
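+/*
+ * chg-gone with a USB charger still detected: charging is disabled and the
+ * charger forced to run on battery, then arb_stop_work restores the normal
+ * state after ARB_STOP_WORK_MS.
+ */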
+static irqreturn_t
+qpnp_chg_usb_chg_gone_irq_handler(int irq, void *_chip)
+{
+ struct qpnp_chg_chip *chip = _chip;
+
+ pr_debug("chg_gone triggered\n");
+ if (qpnp_chg_is_usb_chg_plugged_in(chip)) {
+ qpnp_chg_charge_en(chip, 0);
+ qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+ schedule_delayed_work(&chip->arb_stop_work,
+ msecs_to_jiffies(ARB_STOP_WORK_MS));
+ }
+
+ return IRQ_HANDLED;
+}
+
#define ENUM_T_STOP_BIT BIT(0)
static irqreturn_t
qpnp_chg_usb_usbin_valid_irq_handler(int irq, void *_chip)
@@ -546,7 +614,8 @@
if (chip->usb_present ^ usb_present) {
chip->usb_present = usb_present;
if (!usb_present)
- qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
+ qpnp_chg_usb_suspend_enable(chip, 1);
+
power_supply_set_present(chip->usb_psy,
chip->usb_present);
}
@@ -639,26 +708,6 @@
return IRQ_HANDLED;
}
-static irqreturn_t
-qpnp_chg_chgr_chg_done_irq_handler(int irq, void *_chip)
-{
- struct qpnp_chg_chip *chip = _chip;
- u8 chgr_sts;
- int rc;
-
- pr_debug("CHG_DONE IRQ triggered\n");
-
- rc = qpnp_chg_read(chip, &chgr_sts,
- INT_RT_STS(chip->chgr_base), 1);
- if (rc)
- pr_err("failed to read interrupt sts %d\n", rc);
-
- chip->chg_done = true;
- power_supply_changed(&chip->batt_psy);
-
- return IRQ_HANDLED;
-}
-
static int
qpnp_batt_property_is_writeable(struct power_supply *psy,
enum power_supply_property psp)
@@ -675,28 +724,6 @@
}
static int
-qpnp_chg_charge_en(struct qpnp_chg_chip *chip, int enable)
-{
- return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
- CHGR_CHG_EN,
- enable ? CHGR_CHG_EN : 0, 1);
-}
-
-static int
-qpnp_chg_force_run_on_batt(struct qpnp_chg_chip *chip, int disable)
-{
- /* Don't run on battery for batteryless hardware */
- if (chip->use_default_batt_values)
- return 0;
-
- /* This bit forces the charger to run off of the battery rather
- * than a connected charger */
- return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
- CHGR_ON_BAT_FORCE_BIT,
- disable ? CHGR_ON_BAT_FORCE_BIT : 0, 1);
-}
-
-static int
qpnp_chg_buck_control(struct qpnp_chg_chip *chip, int enable)
{
int rc;
@@ -1073,8 +1100,8 @@
POWER_SUPPLY_PROP_CURRENT_MAX, &ret);
if (ret.intval <= 2 && !chip->use_default_batt_values &&
get_prop_batt_present(chip)) {
- qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
qpnp_chg_usb_suspend_enable(chip, 1);
+ qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
} else {
qpnp_chg_usb_suspend_enable(chip, 0);
qpnp_chg_iusbmax_set(chip, ret.intval / 1000);
@@ -1223,7 +1250,7 @@
QPNP_CHG_ITERM_MASK, temp, 1);
}
-#define QPNP_CHG_IBATMAX_MIN 100
+#define QPNP_CHG_IBATMAX_MIN 50
#define QPNP_CHG_IBATMAX_MAX 3250
static int
qpnp_chg_ibatmax_set(struct qpnp_chg_chip *chip, int chg_current)
@@ -1235,11 +1262,28 @@
pr_err("bad mA=%d asked to set\n", chg_current);
return -EINVAL;
}
- temp = (chg_current - QPNP_CHG_I_MIN_MA) / QPNP_CHG_I_STEP_MA;
+ temp = chg_current / QPNP_CHG_I_STEP_MA;
return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_IBAT_MAX,
QPNP_CHG_I_MASK, temp, 1);
}
+#define QPNP_CHG_TCHG_MASK 0x7F
+#define QPNP_CHG_TCHG_MIN 4
+#define QPNP_CHG_TCHG_MAX 512
+#define QPNP_CHG_TCHG_STEP 4
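+/* Program the maximum charge cycle time, in QPNP_CHG_TCHG_STEP minute units */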
+static int qpnp_chg_tchg_max_set(struct qpnp_chg_chip *chip, int minutes)
+{
+ u8 temp;
+
+ if (minutes < QPNP_CHG_TCHG_MIN || minutes > QPNP_CHG_TCHG_MAX) {
+ pr_err("bad max minutes =%d asked to set\n", minutes);
+ return -EINVAL;
+ }
+
+ temp = (minutes - 1)/QPNP_CHG_TCHG_STEP;
+ return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_TCHG_MAX,
+ QPNP_CHG_I_MASK, temp, 1);
+}
#define QPNP_CHG_VBATDET_MIN_MV 3240
#define QPNP_CHG_VBATDET_MAX_MV 5780
#define QPNP_CHG_VBATDET_STEP_MV 20
@@ -1380,6 +1424,7 @@
if (state == ADC_TM_WARM_STATE) {
if (temp > chip->warm_bat_decidegc) {
+ /* Normal to warm */
bat_warm = true;
bat_cool = false;
chip->adc_param.low_temp =
@@ -1388,6 +1433,7 @@
ADC_TM_COOL_THR_ENABLE;
} else if (temp >
chip->cool_bat_decidegc + HYSTERISIS_DECIDEGC){
+ /* Cool to normal */
bat_warm = false;
bat_cool = false;
@@ -1398,14 +1444,16 @@
}
} else {
if (temp < chip->cool_bat_decidegc) {
+ /* Normal to cool */
bat_warm = false;
bat_cool = true;
chip->adc_param.high_temp =
chip->cool_bat_decidegc + HYSTERISIS_DECIDEGC;
chip->adc_param.state_request =
ADC_TM_WARM_THR_ENABLE;
- } else if (temp >
+ } else if (temp <
chip->warm_bat_decidegc - HYSTERISIS_DECIDEGC){
+ /* Warm to normal */
bat_warm = false;
bat_cool = false;
@@ -1462,6 +1510,177 @@
chip->flags |= CHG_FLAGS_VCP_WA;
}
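+/*
+ * Walk the SPMI sub-device resources, identify each block by its subtype
+ * register, and request/wake-enable the charger, battery interface, USB and
+ * DC charge path interrupts.
+ */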
+static int
+qpnp_chg_request_irqs(struct qpnp_chg_chip *chip)
+{
+ int rc = 0;
+ struct resource *resource;
+ struct spmi_resource *spmi_resource;
+ u8 subtype;
+ struct spmi_device *spmi = chip->spmi;
+
+ spmi_for_each_container_dev(spmi_resource, chip->spmi) {
+ if (!spmi_resource) {
+ pr_err("qpnp_chg: spmi resource absent\n");
+ return rc;
+ }
+
+ resource = spmi_get_resource(spmi, spmi_resource,
+ IORESOURCE_MEM, 0);
+ if (!(resource && resource->start)) {
+ pr_err("node %s IO resource absent!\n",
+ spmi->dev.of_node->full_name);
+ return rc;
+ }
+
+ rc = qpnp_chg_read(chip, &subtype,
+ resource->start + REG_OFFSET_PERP_SUBTYPE, 1);
+ if (rc) {
+ pr_err("Peripheral subtype read failed rc=%d\n", rc);
+ return rc;
+ }
+
+ switch (subtype) {
+ case SMBB_CHGR_SUBTYPE:
+ case SMBBP_CHGR_SUBTYPE:
+ case SMBCL_CHGR_SUBTYPE:
+ chip->chg_fastchg_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "fast-chg-on");
+ if (chip->chg_fastchg_irq < 0) {
+ pr_err("Unable to get fast-chg-on irq\n");
+ return rc;
+ }
+
+ chip->chg_trklchg_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "trkl-chg-on");
+ if (chip->chg_trklchg_irq < 0) {
+ pr_err("Unable to get trkl-chg-on irq\n");
+ return rc;
+ }
+
+ chip->chg_failed_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "chg-failed");
+ if (chip->chg_failed_irq < 0) {
+ pr_err("Unable to get chg_failed irq\n");
+ return rc;
+ }
+
+ rc |= devm_request_irq(chip->dev, chip->chg_failed_irq,
+ qpnp_chg_chgr_chg_failed_irq_handler,
+ IRQF_TRIGGER_RISING, "chg-failed", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d chg-failed: %d\n",
+ chip->chg_failed_irq, rc);
+ return rc;
+ }
+
+ rc |= devm_request_irq(chip->dev, chip->chg_fastchg_irq,
+ qpnp_chg_chgr_chg_fastchg_irq_handler,
+ IRQF_TRIGGER_RISING,
+ "fast-chg-on", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d fast-chg-on: %d\n",
+ chip->chg_fastchg_irq, rc);
+ return rc;
+ }
+
+ rc |= devm_request_irq(chip->dev, chip->chg_trklchg_irq,
+ qpnp_chg_chgr_chg_trklchg_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+ "trkl-chg-on", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d trkl-chg-on: %d\n",
+ chip->chg_trklchg_irq, rc);
+ return rc;
+ }
+ enable_irq_wake(chip->chg_fastchg_irq);
+ enable_irq_wake(chip->chg_trklchg_irq);
+ enable_irq_wake(chip->chg_failed_irq);
+
+ break;
+ case SMBB_BAT_IF_SUBTYPE:
+ case SMBBP_BAT_IF_SUBTYPE:
+ case SMBCL_BAT_IF_SUBTYPE:
+ chip->batt_pres_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "batt-pres");
+ if (chip->batt_pres_irq < 0) {
+ pr_err("Unable to get batt-pres irq\n");
+ return rc;
+ }
+ rc = devm_request_irq(chip->dev, chip->batt_pres_irq,
+ qpnp_chg_bat_if_batt_pres_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+ "batt-pres", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d batt-pres irq: %d\n",
+ chip->batt_pres_irq, rc);
+ return rc;
+ }
+
+ enable_irq_wake(chip->batt_pres_irq);
+ break;
+ case SMBB_USB_CHGPTH_SUBTYPE:
+ case SMBBP_USB_CHGPTH_SUBTYPE:
+ case SMBCL_USB_CHGPTH_SUBTYPE:
+ chip->usbin_valid_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "usbin-valid");
+ if (chip->usbin_valid_irq < 0) {
+ pr_err("Unable to get usbin irq\n");
+ return rc;
+ }
+ rc = devm_request_irq(chip->dev, chip->usbin_valid_irq,
+ qpnp_chg_usb_usbin_valid_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+ "usbin-valid", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d usbin-valid: %d\n",
+ chip->usbin_valid_irq, rc);
+ return rc;
+ }
+
+ chip->chg_gone_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "chg-gone");
+ if (chip->chg_gone_irq < 0) {
+ pr_err("Unable to get chg-gone irq\n");
+ return rc;
+ }
+ rc = devm_request_irq(chip->dev, chip->chg_gone_irq,
+ qpnp_chg_usb_chg_gone_irq_handler,
+ IRQF_TRIGGER_RISING,
+ "chg-gone", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d chg-gone: %d\n",
+ chip->chg_gone_irq, rc);
+ return rc;
+ }
+ enable_irq_wake(chip->usbin_valid_irq);
+ enable_irq_wake(chip->chg_gone_irq);
+ break;
+ case SMBB_DC_CHGPTH_SUBTYPE:
+ chip->dcin_valid_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "dcin-valid");
+ if (chip->dcin_valid_irq < 0) {
+ pr_err("Unable to get dcin irq\n");
+				return rc;
+ }
+ rc = devm_request_irq(chip->dev, chip->dcin_valid_irq,
+ qpnp_chg_dc_dcin_valid_irq_handler,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+ "dcin-valid", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d dcin-valid: %d\n",
+ chip->dcin_valid_irq, rc);
+ return rc;
+ }
+
+ enable_irq_wake(chip->dcin_valid_irq);
+ break;
+ }
+ }
+
+ return rc;
+}
+
#define WDOG_EN_BIT BIT(7)
static int
qpnp_chg_hwinit(struct qpnp_chg_chip *chip, u8 subtype,
@@ -1473,73 +1692,7 @@
switch (subtype) {
case SMBB_CHGR_SUBTYPE:
case SMBBP_CHGR_SUBTYPE:
- chip->chg_done_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "chg-done");
- if (chip->chg_done_irq < 0) {
- pr_err("Unable to get chg_done irq\n");
- return -ENXIO;
- }
-
- chip->chg_fastchg_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "fast-chg-on");
- if (chip->chg_fastchg_irq < 0) {
- pr_err("Unable to get fast-chg-on irq\n");
- return -ENXIO;
- }
-
- chip->chg_trklchg_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "trkl-chg-on");
- if (chip->chg_trklchg_irq < 0) {
- pr_err("Unable to get trkl-chg-on irq\n");
- return -ENXIO;
- }
-
- chip->chg_failed_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "chg-failed");
- if (chip->chg_failed_irq < 0) {
- pr_err("Unable to get chg_failed irq\n");
- return -ENXIO;
- }
-
- rc |= devm_request_irq(chip->dev, chip->chg_done_irq,
- qpnp_chg_chgr_chg_done_irq_handler,
- IRQF_TRIGGER_RISING,
- "chg_done", chip);
- if (rc < 0) {
- pr_err("Can't request %d chg_done for chg: %d\n",
- chip->chg_done_irq, rc);
- return -ENXIO;
- }
-
- rc |= devm_request_irq(chip->dev, chip->chg_failed_irq,
- qpnp_chg_chgr_chg_failed_irq_handler,
- IRQF_TRIGGER_RISING, "chg_failed", chip);
- if (rc < 0) {
- pr_err("Can't request %d chg_failed chg: %d\n",
- chip->chg_failed_irq, rc);
- return -ENXIO;
- }
-
- rc |= devm_request_irq(chip->dev, chip->chg_fastchg_irq,
- qpnp_chg_chgr_chg_fastchg_irq_handler,
- IRQF_TRIGGER_RISING,
- "fast-chg-on", chip);
- if (rc < 0) {
- pr_err("Can't request %d fast-chg-on for chg: %d\n",
- chip->chg_fastchg_irq, rc);
- return -ENXIO;
- }
-
- rc |= devm_request_irq(chip->dev, chip->chg_trklchg_irq,
- qpnp_chg_chgr_chg_trklchg_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- "fast-chg-on", chip);
- if (rc < 0) {
- pr_err("Can't request %d trkl-chg-on for chg: %d\n",
- chip->chg_trklchg_irq, rc);
- return -ENXIO;
- }
-
+ case SMBCL_CHGR_SUBTYPE:
rc = qpnp_chg_vinmin_set(chip, chip->min_voltage_mv);
if (rc) {
pr_debug("failed setting min_voltage rc=%d\n", rc);
@@ -1578,22 +1731,25 @@
pr_debug("failed setting ibat_Safe rc=%d\n", rc);
return rc;
}
+ rc = qpnp_chg_tchg_max_set(chip, chip->tchg_mins);
+ if (rc) {
+ pr_debug("failed setting tchg_mins rc=%d\n", rc);
+ return rc;
+ }
+
/* HACK: Disable wdog */
rc = qpnp_chg_masked_write(chip, chip->chgr_base + 0x62,
0xFF, 0xA0, 1);
- /* HACK: use digital EOC */
+ /* HACK: use analog EOC */
rc = qpnp_chg_masked_write(chip, chip->chgr_base +
CHGR_IBAT_TERM_CHGR,
- 0x88, 0x80, 1);
+ 0x80, 0x00, 1);
- enable_irq_wake(chip->chg_fastchg_irq);
- enable_irq_wake(chip->chg_trklchg_irq);
- enable_irq_wake(chip->chg_failed_irq);
- enable_irq_wake(chip->chg_done_irq);
break;
case SMBB_BUCK_SUBTYPE:
case SMBBP_BUCK_SUBTYPE:
+ case SMBCL_BUCK_SUBTYPE:
rc = qpnp_chg_masked_write(chip,
chip->chgr_base + CHGR_BUCK_BCK_VBAT_REG_MODE,
BUCK_VBAT_REG_NODE_SEL_BIT,
@@ -1605,43 +1761,20 @@
break;
case SMBB_BAT_IF_SUBTYPE:
case SMBBP_BAT_IF_SUBTYPE:
- chip->batt_pres_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "batt-pres");
- if (chip->batt_pres_irq < 0) {
- pr_err("Unable to get batt-pres irq\n");
- return -ENXIO;
+ case SMBCL_BAT_IF_SUBTYPE:
+ /* Force on VREF_BAT_THM */
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BATT_THERM_FORCE_ON, 1);
+ if (rc) {
+ pr_debug("failed to force on VREF_BAT_THM rc=%d\n", rc);
+ return rc;
}
- rc = devm_request_irq(chip->dev, chip->batt_pres_irq,
- qpnp_chg_bat_if_batt_pres_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- "bat_if_batt_pres", chip);
- if (rc < 0) {
- pr_err("Can't request %d batt-pres irq for chg: %d\n",
- chip->batt_pres_irq, rc);
- return -ENXIO;
- }
-
- enable_irq_wake(chip->batt_pres_irq);
break;
case SMBB_USB_CHGPTH_SUBTYPE:
case SMBBP_USB_CHGPTH_SUBTYPE:
- chip->usbin_valid_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "usbin-valid");
- if (chip->usbin_valid_irq < 0) {
- pr_err("Unable to get usbin irq\n");
- return -ENXIO;
- }
- rc = devm_request_irq(chip->dev, chip->usbin_valid_irq,
- qpnp_chg_usb_usbin_valid_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- "chg_usbin_valid", chip);
- if (rc < 0) {
- pr_err("Can't request %d usbinvalid for chg: %d\n",
- chip->usbin_valid_irq, rc);
- return -ENXIO;
- }
-
- enable_irq_wake(chip->usbin_valid_irq);
+ case SMBCL_USB_CHGPTH_SUBTYPE:
chip->usb_present = qpnp_chg_is_usb_chg_plugged_in(chip);
if (chip->usb_present) {
rc = qpnp_chg_masked_write(chip,
@@ -1664,25 +1797,18 @@
ENUM_T_STOP_BIT,
ENUM_T_STOP_BIT, 1);
+ rc = qpnp_chg_masked_write(chip,
+ chip->usb_chgpth_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->usb_chgpth_base + USB_CHG_GONE_REV_BST,
+ 0xFF,
+ 0x80, 1);
+
break;
case SMBB_DC_CHGPTH_SUBTYPE:
- chip->dcin_valid_irq = spmi_get_irq_byname(chip->spmi,
- spmi_resource, "dcin-valid");
- if (chip->dcin_valid_irq < 0) {
- pr_err("Unable to get dcin irq\n");
- return -ENXIO;
- }
- rc = devm_request_irq(chip->dev, chip->dcin_valid_irq,
- qpnp_chg_dc_dcin_valid_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- "chg_dcin_valid", chip);
- if (rc < 0) {
- pr_err("Can't request %d dcinvalid for chg: %d\n",
- chip->dcin_valid_irq, rc);
- return -ENXIO;
- }
-
- enable_irq_wake(chip->dcin_valid_irq);
break;
case SMBB_BOOST_SUBTYPE:
case SMBBP_BOOST_SUBTYPE:
@@ -1691,6 +1817,8 @@
chip->type = SMBB;
case SMBBP_MISC_SUBTYPE:
chip->type = SMBBP;
+ case SMBCL_MISC_SUBTYPE:
+ chip->type = SMBCL;
pr_debug("Setting BOOT_DONE\n");
rc = qpnp_chg_masked_write(chip,
chip->misc_base + CHGR_MISC_BOOT_DONE,
@@ -1710,6 +1838,100 @@
return rc;
}
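+/*
+ * Read the "qcom,<property>" u32 into chip-><prop>; optional properties
+ * treat a missing entry (-EINVAL) as success.
+ */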
+#define OF_PROP_READ(chip, prop, qpnp_dt_property, retval, optional) \
+do { \
+ if (retval) \
+ break; \
+ \
+ retval = of_property_read_u32(chip->spmi->dev.of_node, \
+ "qcom," qpnp_dt_property, \
+ &chip->prop); \
+ \
+ if ((retval == -EINVAL) && optional) \
+ retval = 0; \
+ else if (retval) \
+ pr_err("Error reading " #qpnp_dt_property \
+ " property rc = %d\n", rc); \
+} while (0)
+
+static int
+qpnp_charger_read_dt_props(struct qpnp_chg_chip *chip)
+{
+ int rc = 0;
+
+ OF_PROP_READ(chip, max_voltage_mv, "vddmax-mv", rc, 0);
+ OF_PROP_READ(chip, min_voltage_mv, "vinmin-mv", rc, 0);
+ OF_PROP_READ(chip, safe_voltage_mv, "vddsafe-mv", rc, 0);
+ OF_PROP_READ(chip, resume_delta_mv, "vbatdet-delta-mv", rc, 0);
+ OF_PROP_READ(chip, safe_current, "ibatsafe-ma", rc, 0);
+ OF_PROP_READ(chip, max_bat_chg_current, "ibatmax-ma", rc, 0);
+ if (rc)
+ pr_err("failed to read required dt parameters %d\n", rc);
+
+ OF_PROP_READ(chip, term_current, "ibatterm-ma", rc, 1);
+ OF_PROP_READ(chip, maxinput_dc_ma, "maxinput-dc-ma", rc, 1);
+ OF_PROP_READ(chip, maxinput_usb_ma, "maxinput-usb-ma", rc, 1);
+ OF_PROP_READ(chip, warm_bat_decidegc, "warm-bat-decidegc", rc, 1);
+ OF_PROP_READ(chip, cool_bat_decidegc, "cool-bat-decidegc", rc, 1);
+ OF_PROP_READ(chip, tchg_mins, "tchg-mins", rc, 1);
+ if (rc)
+ return rc;
+
+ /* Look up JEITA compliance parameters if cool and warm temp provided */
+ if (chip->cool_bat_decidegc && chip->warm_bat_decidegc) {
+ rc = qpnp_adc_tm_is_ready();
+ if (rc) {
+ pr_err("tm not ready %d\n", rc);
+ return rc;
+ }
+
+ OF_PROP_READ(chip, warm_bat_chg_ma, "ibatmax-warm-ma", rc, 1);
+ OF_PROP_READ(chip, cool_bat_chg_ma, "ibatmax-cool-ma", rc, 1);
+ OF_PROP_READ(chip, warm_bat_mv, "warm-bat-mv", rc, 1);
+ OF_PROP_READ(chip, cool_bat_mv, "cool-bat-mv", rc, 1);
+ if (rc)
+ return rc;
+ }
+
+ /* Get the charging-disabled property */
+ chip->charging_disabled = of_property_read_bool(chip->spmi->dev.of_node,
+ "qcom,charging-disabled");
+
+ /* Get the fake-batt-values property */
+ chip->use_default_batt_values =
+ of_property_read_bool(chip->spmi->dev.of_node,
+ "qcom,use-default-batt-values");
+
+ /* Disable charging when faking battery values */
+ if (chip->use_default_batt_values)
+ chip->charging_disabled = true;
+
+ of_get_property(chip->spmi->dev.of_node, "qcom,thermal-mitigation",
+ &(chip->thermal_levels));
+
+ if (chip->thermal_levels > sizeof(int)) {
+ chip->thermal_mitigation = kzalloc(
+ chip->thermal_levels,
+ GFP_KERNEL);
+
+ if (chip->thermal_mitigation == NULL) {
+ pr_err("thermal mitigation kzalloc() failed.\n");
+ return rc;
+ }
+
+ chip->thermal_levels /= sizeof(int);
+ rc = of_property_read_u32_array(chip->spmi->dev.of_node,
+ "qcom,thermal-mitigation",
+ chip->thermal_mitigation, chip->thermal_levels);
+ if (rc) {
+ pr_err("qcom,thermal-mitigation missing in dt\n");
+ return rc;
+ }
+ }
+
+ return rc;
+}
+
static int __devinit
qpnp_charger_probe(struct spmi_device *spmi)
{
@@ -1736,178 +1958,10 @@
goto fail_chg_enable;
}
- /* Get the vddmax property */
- rc = of_property_read_u32(spmi->dev.of_node, "qcom,chg-vddmax-mv",
- &chip->max_voltage_mv);
- if (rc) {
- pr_err("Error reading vddmax property %d\n", rc);
+ /* Get all device tree properties */
+ rc = qpnp_charger_read_dt_props(chip);
+ if (rc)
goto fail_chg_enable;
- }
-
- /* Get the vinmin property */
- rc = of_property_read_u32(spmi->dev.of_node, "qcom,chg-vinmin-mv",
- &chip->min_voltage_mv);
- if (rc) {
- pr_err("Error reading vddmax property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the vddmax property */
- rc = of_property_read_u32(spmi->dev.of_node, "qcom,chg-vddsafe-mv",
- &chip->safe_voltage_mv);
- if (rc) {
- pr_err("Error reading vddsave property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the vbatdet-delta property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-vbatdet-delta-mv",
- &chip->resume_delta_mv);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading vbatdet-delta property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the ibatsafe property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-ibatsafe-ma",
- &chip->safe_current);
- if (rc) {
- pr_err("Error reading ibatsafe property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the ibatterm property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-ibatterm-ma",
- &chip->term_current);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading ibatterm property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the ibatmax property */
- rc = of_property_read_u32(spmi->dev.of_node, "qcom,chg-ibatmax-ma",
- &chip->max_bat_chg_current);
- if (rc) {
- pr_err("Error reading ibatmax property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the maxinput-dc-ma property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-maxinput-dc-ma",
- &chip->maxinput_dc_ma);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading maxinput-dc-ma property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the maxinput-usb-ma property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-maxinput-usb-ma",
- &chip->maxinput_usb_ma);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading maxinput-usb-ma property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the charging-disabled property */
- chip->charging_disabled = of_property_read_bool(spmi->dev.of_node,
- "qcom,chg-charging-disabled");
-
- /* Get the warm-bat-degc property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-warm-bat-decidegc",
- &chip->warm_bat_decidegc);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading warm-bat-degc property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the cool-bat-degc property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-cool-bat-decidegc",
- &chip->cool_bat_decidegc);
- if (rc && rc != -EINVAL) {
- pr_err("Error reading cool-bat-degc property %d\n", rc);
- goto fail_chg_enable;
- }
-
- if (chip->cool_bat_decidegc && chip->warm_bat_decidegc) {
- rc = qpnp_adc_tm_is_ready();
- if (rc) {
- pr_err("tm not ready %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the ibatmax-warm property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-ibatmax-warm-ma",
- &chip->warm_bat_chg_ma);
- if (rc) {
- pr_err("Error reading ibatmax-warm-ma %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the ibatmax-cool property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-ibatmax-cool-ma",
- &chip->cool_bat_chg_ma);
- if (rc) {
- pr_err("Error reading ibatmax-cool-ma %d\n", rc);
- goto fail_chg_enable;
- }
- /* Get the cool-bat-mv property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-cool-bat-mv",
- &chip->cool_bat_mv);
- if (rc) {
- pr_err("Error reading cool-bat-mv property %d\n", rc);
- goto fail_chg_enable;
- }
-
- /* Get the warm-bat-mv property */
- rc = of_property_read_u32(spmi->dev.of_node,
- "qcom,chg-warm-bat-mv",
- &chip->warm_bat_mv);
- if (rc) {
- pr_err("Error reading warm-bat-mv property %d\n", rc);
- goto fail_chg_enable;
- }
- }
-
- /* Get the fake-batt-values property */
- chip->use_default_batt_values = of_property_read_bool(spmi->dev.of_node,
- "qcom,chg-use-default-batt-values");
-
- of_get_property(spmi->dev.of_node, "qcom,chg-thermal-mitigation",
- &(chip->thermal_levels));
-
- if (chip->thermal_levels > sizeof(int)) {
- chip->thermal_mitigation = kzalloc(
- chip->thermal_levels,
- GFP_KERNEL);
-
- if (chip->thermal_mitigation == NULL) {
- pr_err("thermal mitigation kzalloc() failed.\n");
- goto fail_chg_enable;
- }
-
- chip->thermal_levels /= sizeof(int);
- rc = of_property_read_u32_array(spmi->dev.of_node,
- "qcom,chg-thermal-mitigation",
- chip->thermal_mitigation, chip->thermal_levels);
- if (rc) {
- pr_err("qcom,chg-thermal-mitigation missing in dt\n");
- goto fail_chg_enable;
- }
- }
-
- /* Disable charging when faking battery values */
- if (chip->use_default_batt_values)
- chip->charging_disabled = true;
spmi_for_each_container_dev(spmi_resource, spmi) {
if (!spmi_resource) {
@@ -1935,6 +1989,7 @@
switch (subtype) {
case SMBB_CHGR_SUBTYPE:
case SMBBP_CHGR_SUBTYPE:
+ case SMBCL_CHGR_SUBTYPE:
chip->chgr_base = resource->start;
rc = qpnp_chg_hwinit(chip, subtype, spmi_resource);
if (rc) {
@@ -1945,6 +2000,7 @@
break;
case SMBB_BUCK_SUBTYPE:
case SMBBP_BUCK_SUBTYPE:
+ case SMBCL_BUCK_SUBTYPE:
chip->buck_base = resource->start;
rc = qpnp_chg_hwinit(chip, subtype, spmi_resource);
if (rc) {
@@ -1952,9 +2008,31 @@
subtype, rc);
goto fail_chg_enable;
}
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + BUCK_VCHG_OV,
+ 0xff,
+ 0x00, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + BUCK_TEST_SMBC_MODES,
+ 0xFF,
+ 0x80, 1);
+
break;
case SMBB_BAT_IF_SUBTYPE:
case SMBBP_BAT_IF_SUBTYPE:
+ case SMBCL_BAT_IF_SUBTYPE:
chip->bat_if_base = resource->start;
rc = qpnp_chg_hwinit(chip, subtype, spmi_resource);
if (rc) {
@@ -1965,6 +2043,7 @@
break;
case SMBB_USB_CHGPTH_SUBTYPE:
case SMBBP_USB_CHGPTH_SUBTYPE:
+ case SMBCL_USB_CHGPTH_SUBTYPE:
chip->usb_chgpth_base = resource->start;
rc = qpnp_chg_hwinit(chip, subtype, spmi_resource);
if (rc) {
@@ -1994,6 +2073,7 @@
break;
case SMBB_MISC_SUBTYPE:
case SMBBP_MISC_SUBTYPE:
+ case SMBCL_MISC_SUBTYPE:
chip->misc_base = resource->start;
rc = qpnp_chg_hwinit(chip, subtype, spmi_resource);
if (rc) {
@@ -2045,6 +2125,8 @@
qpnp_bat_if_adc_measure_work);
}
+ INIT_DELAYED_WORK(&chip->arb_stop_work, qpnp_arb_stop_work);
+
if (chip->dc_chgpth_base) {
chip->dc_psy.name = "qpnp-dc";
chip->dc_psy.type = POWER_SUPPLY_TYPE_MAINS;
@@ -2097,6 +2179,12 @@
qpnp_chg_charge_en(chip, !chip->charging_disabled);
qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+ rc = qpnp_chg_request_irqs(chip);
+ if (rc) {
+ pr_err("failed to request interrupts %d\n", rc);
+ goto unregister_batt;
+ }
+
pr_info("success chg_dis = %d, usb = %d, dc = %d b_health = %d batt_present = %d\n",
chip->charging_disabled,
qpnp_chg_is_usb_chg_plugged_in(chip),
@@ -2131,13 +2219,49 @@
return 0;
}
+static int qpnp_chg_resume(struct device *dev)
+{
+ struct qpnp_chg_chip *chip = dev_get_drvdata(dev);
+ int rc = 0;
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BATT_THERM_FORCE_ON, 1);
+ if (rc)
+ pr_debug("failed to force on VREF_BAT_THM rc=%d\n", rc);
+
+ return rc;
+}
+
+static int qpnp_chg_suspend(struct device *dev)
+{
+ struct qpnp_chg_chip *chip = dev_get_drvdata(dev);
+ int rc = 0;
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BAT_THM_ENABLED_FSM, 1);
+ if (rc)
+ pr_debug("failed to enable FSM ctrl VREF_BAT_THM rc=%d\n", rc);
+
+ return rc;
+}
+
+static const struct dev_pm_ops qpnp_chg_pm_ops = {
+ .resume = qpnp_chg_resume,
+ .suspend = qpnp_chg_suspend,
+};
+
static struct spmi_driver qpnp_charger_driver = {
.probe = qpnp_charger_probe,
.remove = __devexit_p(qpnp_charger_remove),
.driver = {
- .name = QPNP_CHARGER_DEV_NAME,
- .owner = THIS_MODULE,
- .of_match_table = qpnp_charger_match_table,
+ .name = QPNP_CHARGER_DEV_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = qpnp_charger_match_table,
+ .pm = &qpnp_chg_pm_ops,
},
};
diff --git a/drivers/power/smb350_charger.c b/drivers/power/smb350_charger.c
index 21d7aea..11fff43 100644
--- a/drivers/power/smb350_charger.c
+++ b/drivers/power/smb350_charger.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012 The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -263,6 +263,9 @@
if (!chg_enabled) {
pr_warn("Charging not enabled.\n");
+		/* release the wake-lock when DC power is removed */
+ if (wake_lock_active(&dev->chg_wake_lock))
+ wake_unlock(&dev->chg_wake_lock);
return POWER_SUPPLY_CHARGE_TYPE_NONE;
}
diff --git a/drivers/regulator/qpnp-regulator.c b/drivers/regulator/qpnp-regulator.c
index 4cdfaeb..2d10f89 100644
--- a/drivers/regulator/qpnp-regulator.c
+++ b/drivers/regulator/qpnp-regulator.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -19,12 +19,14 @@
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/spmi.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
+#include <linux/ktime.h>
#include <linux/regulator/driver.h>
#include <linux/regulator/of_regulator.h>
#include <linux/regulator/qpnp-regulator.h>
@@ -36,6 +38,7 @@
QPNP_VREG_DEBUG_INIT = BIT(2), /* Show state after probe */
QPNP_VREG_DEBUG_WRITES = BIT(3), /* Show SPMI writes */
QPNP_VREG_DEBUG_READS = BIT(4), /* Show SPMI reads */
+ QPNP_VREG_DEBUG_OCP = BIT(5), /* Show VS OCP IRQ events */
};
static int qpnp_vreg_debug_mask;
@@ -156,9 +159,8 @@
#define QPNP_LDO_SOFT_START_ENABLE_MASK 0x80
/* VS regulator over current protection control register layout */
-#define QPNP_VS_OCP_ENABLE_MASK 0x80
-#define QPNP_VS_OCP_OVERRIDE_MASK 0x01
-#define QPNP_VS_OCP_DISABLE 0x00
+#define QPNP_VS_OCP_OVERRIDE 0x01
+#define QPNP_VS_OCP_NO_OVERRIDE 0x00
/* VS regulator soft start control register layout */
#define QPNP_VS_SOFT_START_ENABLE_MASK 0x80
@@ -168,6 +170,11 @@
#define QPNP_BOOST_CURRENT_LIMIT_ENABLE_MASK 0x80
#define QPNP_BOOST_CURRENT_LIMIT_MASK 0x07
+#define QPNP_VS_OCP_DEFAULT_MAX_RETRIES 10
+#define QPNP_VS_OCP_DEFAULT_RETRY_DELAY_MS 30
+#define QPNP_VS_OCP_FALL_DELAY_US 90
+#define QPNP_VS_OCP_FAULT_DELAY_US 20000
+
/*
* This voltage in uV is returned by get_voltage functions when there is no way
* to determine the current voltage level. It is needed because the regulator
@@ -203,17 +210,22 @@
struct qpnp_regulator {
struct regulator_desc rdesc;
+ struct delayed_work ocp_work;
struct spmi_device *spmi_dev;
struct regulator_dev *rdev;
struct qpnp_voltage_set_points *set_points;
enum qpnp_regulator_logical_type logical_type;
int enable_time;
- int ocp_enable_time;
int ocp_enable;
+ int ocp_irq;
+ int ocp_count;
+ int ocp_max_retries;
+ int ocp_retry_delay_ms;
int system_load;
int hpm_min_load;
u32 write_count;
u32 prev_write_count;
+ ktime_t vs_enable_time;
u16 base_addr;
/* ctrl_reg provides a shadow copy of register values 0x40 to 0x47. */
u8 ctrl_reg[8];
@@ -501,35 +513,11 @@
static int qpnp_regulator_vs_enable(struct regulator_dev *rdev)
{
struct qpnp_regulator *vreg = rdev_get_drvdata(rdev);
- int rc;
- u8 reg;
- if (vreg->ocp_enable == QPNP_REGULATOR_ENABLE) {
- /* Disable OCP */
- reg = QPNP_VS_OCP_DISABLE;
-		rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
- if (rc)
- goto fail;
- }
+ if (vreg->ocp_irq)
+ vreg->vs_enable_time = ktime_get();
- rc = qpnp_regulator_common_enable(rdev);
- if (rc)
- goto fail;
-
- if (vreg->ocp_enable == QPNP_REGULATOR_ENABLE) {
- /* Wait for inrush current to subsided, then enable OCP. */
- udelay(vreg->ocp_enable_time);
- reg = QPNP_VS_OCP_ENABLE_MASK;
-		rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
- if (rc)
- goto fail;
- }
-
- return rc;
-fail:
- vreg_err(vreg, "qpnp_vreg_write failed, rc=%d\n", rc);
-
- return rc;
+ return qpnp_regulator_common_enable(rdev);
}
static int qpnp_regulator_common_disable(struct regulator_dev *rdev)
@@ -785,6 +773,88 @@
return vreg->enable_time;
}
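+/* Toggle the switch off and on again to clear a latched over-current event */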
+static int qpnp_regulator_vs_clear_ocp(struct qpnp_regulator *vreg)
+{
+ int rc;
+
+ rc = qpnp_vreg_masked_write(vreg, QPNP_COMMON_REG_ENABLE,
+ QPNP_COMMON_DISABLE, QPNP_COMMON_ENABLE_MASK,
+ &vreg->ctrl_reg[QPNP_COMMON_IDX_ENABLE]);
+ if (rc)
+ vreg_err(vreg, "qpnp_vreg_masked_write failed, rc=%d\n", rc);
+
+ vreg->vs_enable_time = ktime_get();
+
+ rc = qpnp_vreg_masked_write(vreg, QPNP_COMMON_REG_ENABLE,
+ QPNP_COMMON_ENABLE, QPNP_COMMON_ENABLE_MASK,
+ &vreg->ctrl_reg[QPNP_COMMON_IDX_ENABLE]);
+ if (rc)
+ vreg_err(vreg, "qpnp_vreg_masked_write failed, rc=%d\n", rc);
+
+ if (qpnp_vreg_debug_mask & QPNP_VREG_DEBUG_OCP) {
+ pr_info("%s: switch state toggled after OCP event\n",
+ vreg->rdesc.name);
+ }
+
+ return rc;
+}
+
+static void qpnp_regulator_vs_ocp_work(struct work_struct *work)
+{
+ struct delayed_work *dwork
+ = container_of(work, struct delayed_work, work);
+ struct qpnp_regulator *vreg
+ = container_of(dwork, struct qpnp_regulator, ocp_work);
+
+ qpnp_regulator_vs_clear_ocp(vreg);
+
+ return;
+}
+
+static irqreturn_t qpnp_regulator_vs_ocp_isr(int irq, void *data)
+{
+ struct qpnp_regulator *vreg = data;
+ ktime_t ocp_irq_time;
+ s64 ocp_trigger_delay_us;
+
+ ocp_irq_time = ktime_get();
+ ocp_trigger_delay_us = ktime_us_delta(ocp_irq_time,
+ vreg->vs_enable_time);
+
+ /*
+ * Reset the OCP count if there is a large delay between switch enable
+ * and when OCP triggers. This is indicative of a hotplug event as
+ * opposed to a fault.
+ */
+ if (ocp_trigger_delay_us > QPNP_VS_OCP_FAULT_DELAY_US)
+ vreg->ocp_count = 0;
+
+ /* Wait for switch output to settle back to 0 V after OCP triggered. */
+ udelay(QPNP_VS_OCP_FALL_DELAY_US);
+
+ vreg->ocp_count++;
+
+ if (qpnp_vreg_debug_mask & QPNP_VREG_DEBUG_OCP) {
+ pr_info("%s: VS OCP triggered, count = %d, delay = %lld us\n",
+ vreg->rdesc.name, vreg->ocp_count,
+ ocp_trigger_delay_us);
+ }
+
+ if (vreg->ocp_count == 1) {
+ /* Immediately clear the over current condition. */
+ qpnp_regulator_vs_clear_ocp(vreg);
+ } else if (vreg->ocp_count <= vreg->ocp_max_retries) {
+ /* Schedule the over current clear task to run later. */
+ schedule_delayed_work(&vreg->ocp_work,
+ msecs_to_jiffies(vreg->ocp_retry_delay_ms) + 1);
+ } else {
+ vreg_err(vreg, "OCP triggered %d times; no further retries\n",
+ vreg->ocp_count);
+ }
+
+ return IRQ_HANDLED;
+}
+
static const char const *qpnp_print_actions[] = {
[QPNP_REGULATOR_ACTION_INIT] = "initial ",
[QPNP_REGULATOR_ACTION_ENABLE] = "enable ",
@@ -834,7 +904,8 @@
if (type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
|| type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
- || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS) {
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_VS) {
mode = qpnp_regulator_common_get_mode(rdev);
mode_label = mode == REGULATOR_MODE_NORMAL ? "HPM" : "LPM";
}
@@ -903,9 +974,9 @@
pc_mode_label[1] =
mode_reg & QPNP_COMMON_MODE_FOLLOW_AWAKE_MASK ? 'W' : '_';
- pr_info("%s %-11s: %s, pc_en=%s, alt_mode=%s\n",
+ pr_info("%s %-11s: %s, mode=%s, pc_en=%s, alt_mode=%s\n",
action_label, vreg->rdesc.name, enable_label,
- pc_enable_label, pc_mode_label);
+ mode_label, pc_enable_label, pc_mode_label);
break;
case QPNP_REGULATOR_LOGICAL_TYPE_BOOST:
pr_info("%s %-11s: %s, v=%7d uV\n",
@@ -1099,6 +1170,17 @@
pdata->pin_ctrl_enable & QPNP_COMMON_ENABLE_FOLLOW_ALL_MASK;
}
+ /* Set up HPM control. */
+ if ((type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_VS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS)
+ && (pdata->hpm_enable != QPNP_REGULATOR_USE_HW_DEFAULT)) {
+ ctrl_reg[QPNP_COMMON_IDX_MODE] &= ~QPNP_COMMON_MODE_HPM_MASK;
+ ctrl_reg[QPNP_COMMON_IDX_MODE] |=
+ (pdata->hpm_enable ? QPNP_COMMON_MODE_HPM_MASK : 0);
+ }
+
/* Set up auto mode control. */
if ((type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
|| type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
@@ -1224,7 +1306,8 @@
}
if (pdata->ocp_enable != QPNP_REGULATOR_USE_HW_DEFAULT) {
- reg = pdata->ocp_enable ? QPNP_VS_OCP_ENABLE_MASK : 0;
+ reg = pdata->ocp_enable ? QPNP_VS_OCP_NO_OVERRIDE
+ : QPNP_VS_OCP_OVERRIDE;
 		rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
if (rc) {
vreg_err(vreg, "spmi write failed, rc=%d\n",
@@ -1256,6 +1339,11 @@
}
pdata->base_addr = res->start;
+ /* OCP IRQ is optional so ignore get errors. */
+ pdata->ocp_irq = spmi_get_irq_byname(spmi, NULL, "ocp");
+ if (pdata->ocp_irq < 0)
+ pdata->ocp_irq = 0;
+
/*
* Initialize configuration parameters to use hardware default in case
* no value is specified via device tree.
@@ -1269,6 +1357,7 @@
pdata->pin_ctrl_enable = QPNP_REGULATOR_PIN_CTRL_ENABLE_HW_DEFAULT;
pdata->pin_ctrl_hpm = QPNP_REGULATOR_PIN_CTRL_HPM_HW_DEFAULT;
pdata->vs_soft_start_strength = QPNP_VS_SOFT_START_STR_HW_DEFAULT;
+ pdata->hpm_enable = QPNP_REGULATOR_USE_HW_DEFAULT;
/* These bindings are optional, so it is okay if they are not found. */
of_property_read_u32(node, "qcom,auto-mode-enable",
@@ -1276,6 +1365,10 @@
of_property_read_u32(node, "qcom,bypass-mode-enable",
&pdata->bypass_mode_enable);
of_property_read_u32(node, "qcom,ocp-enable", &pdata->ocp_enable);
+ of_property_read_u32(node, "qcom,ocp-max-retries",
+ &pdata->ocp_max_retries);
+ of_property_read_u32(node, "qcom,ocp-retry-delay",
+ &pdata->ocp_retry_delay_ms);
of_property_read_u32(node, "qcom,pull-down-enable",
&pdata->pull_down_enable);
of_property_read_u32(node, "qcom,soft-start-enable",
@@ -1285,12 +1378,11 @@
of_property_read_u32(node, "qcom,pin-ctrl-enable",
&pdata->pin_ctrl_enable);
of_property_read_u32(node, "qcom,pin-ctrl-hpm", &pdata->pin_ctrl_hpm);
+ of_property_read_u32(node, "qcom,hpm-enable", &pdata->hpm_enable);
of_property_read_u32(node, "qcom,vs-soft-start-strength",
&pdata->vs_soft_start_strength);
of_property_read_u32(node, "qcom,system-load", &pdata->system_load);
of_property_read_u32(node, "qcom,enable-time", &pdata->enable_time);
- of_property_read_u32(node, "qcom,ocp-enable-time",
- &pdata->ocp_enable_time);
return rc;
}
@@ -1364,7 +1456,14 @@
vreg->enable_time = pdata->enable_time;
vreg->system_load = pdata->system_load;
vreg->ocp_enable = pdata->ocp_enable;
- vreg->ocp_enable_time = pdata->ocp_enable_time;
+ vreg->ocp_irq = pdata->ocp_irq;
+ vreg->ocp_max_retries = pdata->ocp_max_retries;
+ vreg->ocp_retry_delay_ms = pdata->ocp_retry_delay_ms;
+
+ if (vreg->ocp_max_retries == 0)
+ vreg->ocp_max_retries = QPNP_VS_OCP_DEFAULT_MAX_RETRIES;
+ if (vreg->ocp_retry_delay_ms == 0)
+ vreg->ocp_retry_delay_ms = QPNP_VS_OCP_DEFAULT_RETRY_DELAY_MS;
rdesc = &vreg->rdesc;
rdesc->id = spmi->ctrl->nr;
@@ -1414,18 +1513,37 @@
goto bail;
}
+ if (vreg->logical_type != QPNP_REGULATOR_LOGICAL_TYPE_VS)
+ vreg->ocp_irq = 0;
+
+ if (vreg->ocp_irq) {
+ rc = devm_request_irq(&spmi->dev, vreg->ocp_irq,
+ qpnp_regulator_vs_ocp_isr, IRQF_TRIGGER_RISING, "ocp",
+ vreg);
+ if (rc < 0) {
+ vreg_err(vreg, "failed to request irq %d, rc=%d\n",
+ vreg->ocp_irq, rc);
+ goto bail;
+ }
+
+ INIT_DELAYED_WORK(&vreg->ocp_work, qpnp_regulator_vs_ocp_work);
+ }
+
vreg->rdev = regulator_register(rdesc, &spmi->dev,
&(pdata->init_data), vreg, spmi->dev.of_node);
if (IS_ERR(vreg->rdev)) {
rc = PTR_ERR(vreg->rdev);
vreg_err(vreg, "regulator_register failed, rc=%d\n", rc);
- goto bail;
+ goto cancel_ocp_work;
}
qpnp_vreg_show_state(vreg->rdev, QPNP_REGULATOR_ACTION_INIT);
return 0;
+cancel_ocp_work:
+ if (vreg->ocp_irq)
+ cancel_delayed_work_sync(&vreg->ocp_work);
bail:
if (rc)
vreg_err(vreg, "probe failed, rc=%d\n", rc);
@@ -1445,6 +1563,8 @@
if (vreg) {
regulator_unregister(vreg->rdev);
+ if (vreg->ocp_irq)
+ cancel_delayed_work_sync(&vreg->ocp_work);
kfree(vreg->rdesc.name);
kfree(vreg);
}
diff --git a/drivers/scsi/ufs/Kconfig b/drivers/scsi/ufs/Kconfig
index 706fba7..9a4ea63 100644
--- a/drivers/scsi/ufs/Kconfig
+++ b/drivers/scsi/ufs/Kconfig
@@ -58,6 +58,17 @@
If unsure, say N.
+config SCSI_UFSHCD_PLATFORM
+ tristate "Platform bus based UFS Controller support"
+ depends on SCSI_UFSHCD
+ ---help---
+ This selects the UFS host controller support. Select this if
+	  you have a UFS controller on Platform bus.
+
+ If you have a controller with this interface, say Y or M here.
+
+ If unsure, say N.
+
config SCSI_UFS_TEST
tristate "Universal Flash Storage host controller driver unit-tests"
depends on SCSI_UFSHCD && IOSCHED_TEST
diff --git a/drivers/scsi/ufs/Makefile b/drivers/scsi/ufs/Makefile
index bbcc202..8d6665b 100644
--- a/drivers/scsi/ufs/Makefile
+++ b/drivers/scsi/ufs/Makefile
@@ -1,4 +1,5 @@
# UFSHCD makefile
obj-$(CONFIG_SCSI_UFSHCD) += ufshcd.o
obj-$(CONFIG_SCSI_UFSHCD_PCI) += ufshcd-pci.o
+obj-$(CONFIG_SCSI_UFSHCD_PLATFORM) += ufshcd-pltfrm.o
obj-$(CONFIG_SCSI_UFS_TEST) += ufs_test.o
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index 139bc06..086ff03 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -36,10 +36,20 @@
#ifndef _UFS_H
#define _UFS_H
+#include <linux/mutex.h>
+#include <linux/types.h>
+
#define MAX_CDB_SIZE 16
+#define GENERAL_UPIU_REQUEST_SIZE 32
+#define UPIU_HEADER_DATA_SEGMENT_MAX_SIZE ((ALIGNED_UPIU_SIZE) - \
+ (GENERAL_UPIU_REQUEST_SIZE))
+#define QUERY_OSF_SIZE ((GENERAL_UPIU_REQUEST_SIZE) - \
+ (sizeof(struct utp_upiu_header)))
+#define UFS_QUERY_RESERVED_SCSI_CMD 0xCC
+#define UFS_QUERY_CMD_SIZE 10
#define UPIU_HEADER_DWORD(byte3, byte2, byte1, byte0)\
- ((byte3 << 24) | (byte2 << 16) |\
+ cpu_to_be32((byte3 << 24) | (byte2 << 16) |\
(byte1 << 8) | (byte0))
/*
@@ -62,7 +72,7 @@
UPIU_TRANSACTION_COMMAND = 0x01,
UPIU_TRANSACTION_DATA_OUT = 0x02,
UPIU_TRANSACTION_TASK_REQ = 0x04,
- UPIU_TRANSACTION_QUERY_REQ = 0x26,
+ UPIU_TRANSACTION_QUERY_REQ = 0x16,
};
/* UTP UPIU Transaction Codes Target to Initiator */
@@ -73,6 +83,7 @@
UPIU_TRANSACTION_TASK_RSP = 0x24,
UPIU_TRANSACTION_READY_XFER = 0x31,
UPIU_TRANSACTION_QUERY_RSP = 0x36,
+ UPIU_TRANSACTION_REJECT_UPIU = 0x3F,
};
/* UPIU Read/Write flags */
@@ -90,6 +101,12 @@
UPIU_TASK_ATTR_ACA = 0x03,
};
+/* UPIU Query request function */
+enum {
+ UPIU_QUERY_FUNC_STANDARD_READ_REQUEST = 0x01,
+ UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST = 0x81,
+};
+
/* UTP QUERY Transaction Specific Fields OpCode */
enum {
UPIU_QUERY_OPCODE_NOP = 0x0,
@@ -103,6 +120,21 @@
UPIU_QUERY_OPCODE_TOGGLE_FLAG = 0x8,
};
+/* Query response result code */
+enum {
+ QUERY_RESULT_SUCCESS = 0x00,
+ QUERY_RESULT_NOT_READABLE = 0xF6,
+ QUERY_RESULT_NOT_WRITEABLE = 0xF7,
+ QUERY_RESULT_ALREADY_WRITTEN = 0xF8,
+ QUERY_RESULT_INVALID_LENGTH = 0xF9,
+ QUERY_RESULT_INVALID_VALUE = 0xFA,
+ QUERY_RESULT_INVALID_SELECTOR = 0xFB,
+ QUERY_RESULT_INVALID_INDEX = 0xFC,
+ QUERY_RESULT_INVALID_IDN = 0xFD,
+ QUERY_RESULT_INVALID_OPCODE = 0xFE,
+ QUERY_RESULT_GENERAL_FAILURE = 0xFF,
+};
+
/* UTP Transfer Request Command Type (CT) */
enum {
UPIU_COMMAND_SET_TYPE_SCSI = 0x0,
@@ -110,10 +142,17 @@
UPIU_COMMAND_SET_TYPE_QUERY = 0x2,
};
+/* UTP Transfer Request Command Offset */
+#define UPIU_COMMAND_TYPE_OFFSET 28
+
+/* Offset of the response code in the UPIU header */
+#define UPIU_RSP_CODE_OFFSET 8
+
enum {
MASK_SCSI_STATUS = 0xFF,
MASK_TASK_RESPONSE = 0xFF00,
MASK_RSP_UPIU_RESULT = 0xFFFF,
+ MASK_QUERY_DATA_SEG_LEN = 0xFFFF,
};
/* Task management service response */
@@ -138,26 +177,59 @@
/**
* struct utp_upiu_cmd - Command UPIU structure
- * @header: UPIU header structure DW-0 to DW-2
* @data_transfer_len: Data Transfer Length DW-3
* @cdb: Command Descriptor Block CDB DW-4 to DW-7
*/
struct utp_upiu_cmd {
- struct utp_upiu_header header;
u32 exp_data_transfer_len;
u8 cdb[MAX_CDB_SIZE];
};
/**
- * struct utp_upiu_rsp - Response UPIU structure
- * @header: UPIU header DW-0 to DW-2
+ * struct utp_upiu_query - upiu request buffer structure for
+ * query request.
+ * @opcode: command to perform B-0
+ * @idn: a value that indicates the particular type of data B-1
+ * @index: Index to further identify data B-2
+ * @selector: Index to further identify data B-3
+ * @reserved_osf: spec reserved field B-4,5
+ * @length: number of descriptor bytes to read/write B-6,7
+ * @value: Attribute value to be written DW-6
+ * @reserved: spec reserved DW-7,8
+ */
+struct utp_upiu_query {
+ u8 opcode;
+ u8 idn;
+ u8 index;
+ u8 selector;
+ u16 reserved_osf;
+ u16 length;
+ u32 value;
+ u32 reserved[2];
+};
+
+/**
+ * struct utp_upiu_req - general upiu request structure
+ * @header: UPIU header structure DW-0 to DW-2
+ * @sc: fields structure for scsi command
+ * @qr: fields structure for query request
+ */
+struct utp_upiu_req {
+ struct utp_upiu_header header;
+ union {
+ struct utp_upiu_cmd sc;
+ struct utp_upiu_query qr;
+ };
+};
+
+/**
+ * struct utp_cmd_rsp - Response UPIU structure
* @residual_transfer_count: Residual transfer count DW-3
* @reserved: Reserved double words DW-4 to DW-7
* @sense_data_len: Sense data length DW-8 U16
* @sense_data: Sense data field DW-8 to DW-12
*/
-struct utp_upiu_rsp {
- struct utp_upiu_header header;
+struct utp_cmd_rsp {
u32 residual_transfer_count;
u32 reserved[4];
u16 sense_data_len;
@@ -165,6 +237,20 @@
};
/**
+ * struct utp_upiu_rsp - general upiu response structure
+ * @header: UPIU header structure DW-0 to DW-2
+ * @sc: fields structure for scsi command
+ * @qr: fields structure for query request
+ */
+struct utp_upiu_rsp {
+ struct utp_upiu_header header;
+ union {
+ struct utp_cmd_rsp sc;
+ struct utp_upiu_query qr;
+ };
+};
+
+/**
* struct utp_upiu_task_req - Task request UPIU structure
* @header - UPIU header structure DW0 to DW-2
* @input_param1: Input parameter 1 DW-3
@@ -194,4 +280,24 @@
u32 reserved[3];
};
+/**
+ * struct ufs_query_req - parameters for building a query request
+ * @query_func: UPIU header query function
+ * @upiu_req: the query request data
+ */
+struct ufs_query_req {
+ u8 query_func;
+ struct utp_upiu_query upiu_req;
+};
+
+/**
+ * struct ufs_query_res - UPIU QUERY response parameters
+ * @response: device response code
+ * @upiu_res: query response data
+ */
+struct ufs_query_res {
+ u8 response;
+ struct utp_upiu_query upiu_res;
+};
+
#endif /* End of Header */
diff --git a/drivers/scsi/ufs/ufshcd-pltfrm.c b/drivers/scsi/ufs/ufshcd-pltfrm.c
new file mode 100644
index 0000000..03319ac
--- /dev/null
+++ b/drivers/scsi/ufs/ufshcd-pltfrm.c
@@ -0,0 +1,217 @@
+/*
+ * Universal Flash Storage Host controller Platform bus based glue driver
+ *
+ * This code is based on drivers/scsi/ufs/ufshcd.c
+ * Copyright (C) 2011-2013 Samsung India Software Operations
+ *
+ * Authors:
+ * Santosh Yaraganavi <santosh.sy@samsung.com>
+ * Vinayak Holikatti <h.vinayak@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ * See the COPYING file in the top-level directory or visit
+ * <http://www.gnu.org/licenses/gpl-2.0.html>
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * This program is provided "AS IS" and "WITH ALL FAULTS" and
+ * without warranty of any kind. You are solely responsible for
+ * determining the appropriateness of using and distributing
+ * the program and assume all risks associated with your exercise
+ * of rights with respect to the program, including but not limited
+ * to infringement of third party rights, the risks and costs of
+ * program errors, damage to or loss of data, programs or equipment,
+ * and unavailability or interruption of operations. Under no
+ * circumstances will the contributor of this Program be liable for
+ * any damages of any kind arising from your use or distribution of
+ * this program.
+ */
+
+#include "ufshcd.h"
+#include <linux/platform_device.h>
+
+#ifdef CONFIG_PM
+/**
+ * ufshcd_pltfrm_suspend - suspend power management function
+ * @dev: pointer to device handle
+ *
+ * Returns 0
+ */
+static int ufshcd_pltfrm_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
+
+ /*
+ * TODO:
+ * 1. Call ufshcd_suspend
+ * 2. Do bus specific power management
+ */
+
+ disable_irq(hba->irq);
+
+ return 0;
+}
+
+/**
+ * ufshcd_pltfrm_resume - resume power management function
+ * @dev: pointer to device handle
+ *
+ * Returns 0
+ */
+static int ufshcd_pltfrm_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
+
+ /*
+ * TODO:
+ * 1. Call ufshcd_resume.
+ * 2. Do bus specific wake up
+ */
+
+ enable_irq(hba->irq);
+
+ return 0;
+}
+#else
+#define ufshcd_pltfrm_suspend NULL
+#define ufshcd_pltfrm_resume NULL
+#endif
+
+/**
+ * ufshcd_pltfrm_probe - probe routine of the driver
+ * @pdev: pointer to Platform device handle
+ *
+ * Returns 0 on success, non-zero value on failure
+ */
+static int ufshcd_pltfrm_probe(struct platform_device *pdev)
+{
+ struct ufs_hba *hba;
+ void __iomem *mmio_base;
+ struct resource *mem_res;
+ struct resource *irq_res;
+ resource_size_t mem_size;
+ int err;
+ struct device *dev = &pdev->dev;
+
+ mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!mem_res) {
+ dev_err(&pdev->dev,
+ "Memory resource not available\n");
+ err = -ENODEV;
+ goto out_error;
+ }
+
+ mem_size = resource_size(mem_res);
+ if (!request_mem_region(mem_res->start, mem_size, "ufshcd")) {
+ dev_err(&pdev->dev,
+ "Cannot reserve the memory resource\n");
+ err = -EBUSY;
+ goto out_error;
+ }
+
+ mmio_base = ioremap_nocache(mem_res->start, mem_size);
+ if (!mmio_base) {
+ dev_err(&pdev->dev, "memory map failed\n");
+ err = -ENOMEM;
+ goto out_release_regions;
+ }
+
+ irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
+ if (!irq_res) {
+ dev_err(&pdev->dev, "IRQ resource not available\n");
+ err = -ENODEV;
+ goto out_iounmap;
+ }
+
+ err = dma_set_coherent_mask(dev, dev->coherent_dma_mask);
+ if (err) {
+ dev_err(&pdev->dev, "set dma mask failed\n");
+ goto out_iounmap;
+ }
+
+ err = ufshcd_init(&pdev->dev, &hba, mmio_base, irq_res->start);
+ if (err) {
+ dev_err(&pdev->dev, "Intialization failed\n");
+ goto out_iounmap;
+ }
+
+ platform_set_drvdata(pdev, hba);
+
+ return 0;
+
+out_iounmap:
+ iounmap(mmio_base);
+out_release_regions:
+ release_mem_region(mem_res->start, mem_size);
+out_error:
+ return err;
+}
+
+/**
+ * ufshcd_pltfrm_remove - remove platform driver routine
+ * @pdev: pointer to platform device handle
+ *
+ * Returns 0 on success, non-zero value on failure
+ */
+static int ufshcd_pltfrm_remove(struct platform_device *pdev)
+{
+ struct resource *mem_res;
+ resource_size_t mem_size;
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
+
+ disable_irq(hba->irq);
+
+ /* Some buggy controllers raise an interrupt after
+ * the resources are removed. So first unregister the
+ * irq handler and then release the resources used by the driver
+ */
+
+ free_irq(hba->irq, hba);
+ ufshcd_remove(hba);
+ mem_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!mem_res)
+ dev_err(&pdev->dev, "ufshcd: Memory resource not available\n");
+ else {
+ mem_size = resource_size(mem_res);
+ release_mem_region(mem_res->start, mem_size);
+ }
+ platform_set_drvdata(pdev, NULL);
+ return 0;
+}
+
+static const struct of_device_id ufs_of_match[] = {
+ { .compatible = "jedec,ufs-1.1"},
+ { },
+};
+
+static const struct dev_pm_ops ufshcd_dev_pm_ops = {
+ .suspend = ufshcd_pltfrm_suspend,
+ .resume = ufshcd_pltfrm_resume,
+};
+
+static struct platform_driver ufshcd_pltfrm_driver = {
+ .probe = ufshcd_pltfrm_probe,
+ .remove = ufshcd_pltfrm_remove,
+ .driver = {
+ .name = "ufshcd",
+ .owner = THIS_MODULE,
+ .pm = &ufshcd_dev_pm_ops,
+ .of_match_table = ufs_of_match,
+ },
+};
+
+module_platform_driver(ufshcd_pltfrm_driver);
+
+MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
+MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
+MODULE_DESCRIPTION("UFS host controller Pltform bus based glue driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(UFSHCD_DRIVER_VERSION);
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index 60fd40c..e12b9b4 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -202,18 +202,13 @@
}
/**
- * ufshcd_is_valid_req_rsp - checks if controller TR response is valid
+ * ufshcd_get_req_rsp - returns the TR response
* @ucd_rsp_ptr: pointer to response UPIU
- *
- * This function checks the response UPIU for valid transaction type in
- * response field
- * Returns 0 on success, non-zero on failure
*/
static inline int
-ufshcd_is_valid_req_rsp(struct utp_upiu_rsp *ucd_rsp_ptr)
+ufshcd_get_req_rsp(struct utp_upiu_rsp *ucd_rsp_ptr)
{
- return ((be32_to_cpu(ucd_rsp_ptr->header.dword_0) >> 24) ==
- UPIU_TRANSACTION_RESPONSE) ? 0 : DID_ERROR << 16;
+ return be32_to_cpu(ucd_rsp_ptr->header.dword_0) >> 24;
}
/**
@@ -316,14 +311,71 @@
{
int len;
if (lrbp->sense_buffer) {
- len = be16_to_cpu(lrbp->ucd_rsp_ptr->sense_data_len);
+ len = be16_to_cpu(lrbp->ucd_rsp_ptr->sc.sense_data_len);
memcpy(lrbp->sense_buffer,
- lrbp->ucd_rsp_ptr->sense_data,
+ lrbp->ucd_rsp_ptr->sc.sense_data,
min_t(int, len, SCSI_SENSE_BUFFERSIZE));
}
}
/**
+ * ufshcd_query_to_cpu() - converts the received buffer to native CPU
+ * endianness
+ * @response: upiu query response to convert
+ */
+static inline void ufshcd_query_to_cpu(struct utp_upiu_query *response)
+{
+ response->length = be16_to_cpu(response->length);
+ response->value = be32_to_cpu(response->value);
+}
+
+/**
+ * ufshcd_query_to_be() - converts the buffer to big endian before sending
+ * @request: upiu query request to convert
+ */
+static inline void ufshcd_query_to_be(struct utp_upiu_query *request)
+{
+ request->length = cpu_to_be16(request->length);
+ request->value = cpu_to_be32(request->value);
+}
+
+/**
+ * ufshcd_copy_query_response() - Copy Query Response and descriptor
+ * @hba: per adapter instance
+ * @lrbp: pointer to local reference block
+ */
+static
+void ufshcd_copy_query_response(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+ struct ufs_query_res *query_res = hba->query.response;
+
+ /* Get the UPIU response */
+ if (query_res) {
+ query_res->response = ufshcd_get_rsp_upiu_result(
+ lrbp->ucd_rsp_ptr) >> UPIU_RSP_CODE_OFFSET;
+
+ memcpy(&query_res->upiu_res, &lrbp->ucd_rsp_ptr->qr,
+ QUERY_OSF_SIZE);
+ ufshcd_query_to_cpu(&query_res->upiu_res);
+ }
+
+ /* Get the descriptor */
+ if (hba->query.descriptor && lrbp->ucd_rsp_ptr->qr.opcode ==
+ UPIU_QUERY_OPCODE_READ_DESC) {
+ u8 *descp = (u8 *)&lrbp->ucd_rsp_ptr +
+ GENERAL_UPIU_REQUEST_SIZE;
+ u16 len;
+
+ /* data segment length */
+ len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2) &
+ MASK_QUERY_DATA_SEG_LEN;
+
+ memcpy(hba->query.descriptor, descp,
+ min_t(u16, len, UPIU_HEADER_DATA_SEGMENT_MAX_SIZE));
+ }
+}
+
+/**
* ufshcd_hba_capabilities - Read controller capabilities
* @hba: per adapter instance
*/
@@ -423,76 +475,159 @@
}
/**
+ * ufshcd_prepare_req_desc - Fills the request header
+ * descriptor according to the request
+ * @lrbp: pointer to local reference block
+ * @upiu_flags: flags required in the header
+ */
+static void ufshcd_prepare_req_desc(struct ufshcd_lrb *lrbp, u32 *upiu_flags)
+{
+ struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
+ enum dma_data_direction cmd_dir =
+ lrbp->cmd->sc_data_direction;
+ u32 data_direction;
+ u32 dword_0;
+
+ if (cmd_dir == DMA_FROM_DEVICE) {
+ data_direction = UTP_DEVICE_TO_HOST;
+ *upiu_flags = UPIU_CMD_FLAGS_READ;
+ } else if (cmd_dir == DMA_TO_DEVICE) {
+ data_direction = UTP_HOST_TO_DEVICE;
+ *upiu_flags = UPIU_CMD_FLAGS_WRITE;
+ } else {
+ data_direction = UTP_NO_DATA_TRANSFER;
+ *upiu_flags = UPIU_CMD_FLAGS_NONE;
+ }
+
+ dword_0 = data_direction | (lrbp->command_type
+ << UPIU_COMMAND_TYPE_OFFSET);
+
+ /* Transfer request descriptor header fields */
+ req_desc->header.dword_0 = cpu_to_le32(dword_0);
+
+ /*
+ * assigning invalid value for command status. Controller
+ * updates OCS on command completion, with the command
+ * status
+ */
+ req_desc->header.dword_2 =
+ cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
+}
+
+static inline bool ufshcd_is_query_req(struct ufshcd_lrb *lrbp)
+{
+ return lrbp->cmd ? lrbp->cmd->cmnd[0] == UFS_QUERY_RESERVED_SCSI_CMD :
+ false;
+}
+
+/**
+ * ufshcd_prepare_utp_scsi_cmd_upiu() - fills the utp_transfer_req_desc,
+ * for scsi commands
+ * @lrbp - local reference block pointer
+ * @upiu_flags - flags
+ */
+static
+void ufshcd_prepare_utp_scsi_cmd_upiu(struct ufshcd_lrb *lrbp, u32 upiu_flags)
+{
+ struct utp_upiu_req *ucd_req_ptr = lrbp->ucd_req_ptr;
+
+ /* command descriptor fields */
+ ucd_req_ptr->header.dword_0 = UPIU_HEADER_DWORD(
+ UPIU_TRANSACTION_COMMAND, upiu_flags,
+ lrbp->lun, lrbp->task_tag);
+ ucd_req_ptr->header.dword_1 = UPIU_HEADER_DWORD(
+ UPIU_COMMAND_SET_TYPE_SCSI, 0, 0, 0);
+
+ /* Total EHS length and Data segment length will be zero */
+ ucd_req_ptr->header.dword_2 = 0;
+
+ ucd_req_ptr->sc.exp_data_transfer_len =
+ cpu_to_be32(lrbp->cmd->sdb.length);
+
+ memcpy(ucd_req_ptr->sc.cdb, lrbp->cmd->cmnd,
+ (min_t(unsigned short, lrbp->cmd->cmd_len, MAX_CDB_SIZE)));
+}
+
+/**
+ * ufshcd_prepare_utp_query_req_upiu() - fills the utp_transfer_req_desc,
+ * for query requests
+ * @hba: UFS hba
+ * @lrbp: local reference block pointer
+ * @upiu_flags: flags
+ */
+static void ufshcd_prepare_utp_query_req_upiu(struct ufs_hba *hba,
+ struct ufshcd_lrb *lrbp,
+ u32 upiu_flags)
+{
+ struct utp_upiu_req *ucd_req_ptr = lrbp->ucd_req_ptr;
+ u16 len = hba->query.request->upiu_req.length;
+ u8 *descp = (u8 *)lrbp->ucd_req_ptr + GENERAL_UPIU_REQUEST_SIZE;
+
+ /* Query request header */
+ ucd_req_ptr->header.dword_0 = UPIU_HEADER_DWORD(
+ UPIU_TRANSACTION_QUERY_REQ, upiu_flags,
+ lrbp->lun, lrbp->task_tag);
+ ucd_req_ptr->header.dword_1 = UPIU_HEADER_DWORD(
+ 0, hba->query.request->query_func, 0, 0);
+
+ /* Data segment length */
+ ucd_req_ptr->header.dword_2 = UPIU_HEADER_DWORD(
+ 0, 0, len >> 8, (u8)len);
+
+ /* Copy the Query Request buffer as is */
+ memcpy(&lrbp->ucd_req_ptr->qr, &hba->query.request->upiu_req,
+ QUERY_OSF_SIZE);
+ ufshcd_query_to_be(&lrbp->ucd_req_ptr->qr);
+
+ /* Copy the Descriptor */
+ if (hba->query.descriptor != NULL && len > 0 &&
+ (hba->query.request->upiu_req.opcode ==
+ UPIU_QUERY_OPCODE_WRITE_DESC)) {
+ memcpy(descp, hba->query.descriptor,
+ min_t(u16, len, UPIU_HEADER_DATA_SEGMENT_MAX_SIZE));
+ }
+
+}
+
+/**
* ufshcd_compose_upiu - form UFS Protocol Information Unit(UPIU)
+ * @hba - UFS hba
* @lrb - pointer to local reference block
*/
-static void ufshcd_compose_upiu(struct ufshcd_lrb *lrbp)
+static int ufshcd_compose_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
- struct utp_transfer_req_desc *req_desc;
- struct utp_upiu_cmd *ucd_cmd_ptr;
- u32 data_direction;
u32 upiu_flags;
-
- ucd_cmd_ptr = lrbp->ucd_cmd_ptr;
- req_desc = lrbp->utr_descriptor_ptr;
+ int ret = 0;
switch (lrbp->command_type) {
case UTP_CMD_TYPE_SCSI:
- if (lrbp->cmd->sc_data_direction == DMA_FROM_DEVICE) {
- data_direction = UTP_DEVICE_TO_HOST;
- upiu_flags = UPIU_CMD_FLAGS_READ;
- } else if (lrbp->cmd->sc_data_direction == DMA_TO_DEVICE) {
- data_direction = UTP_HOST_TO_DEVICE;
- upiu_flags = UPIU_CMD_FLAGS_WRITE;
- } else {
- data_direction = UTP_NO_DATA_TRANSFER;
- upiu_flags = UPIU_CMD_FLAGS_NONE;
- }
-
- /* Transfer request descriptor header fields */
- req_desc->header.dword_0 =
- cpu_to_le32(data_direction | UTP_SCSI_COMMAND);
-
- /*
- * assigning invalid value for command status. Controller
- * updates OCS on command completion, with the command
- * status
- */
- req_desc->header.dword_2 =
- cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-
- /* command descriptor fields */
- ucd_cmd_ptr->header.dword_0 =
- cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_COMMAND,
- upiu_flags,
- lrbp->lun,
- lrbp->task_tag));
- ucd_cmd_ptr->header.dword_1 =
- cpu_to_be32(
- UPIU_HEADER_DWORD(UPIU_COMMAND_SET_TYPE_SCSI,
- 0,
- 0,
- 0));
-
- /* Total EHS length and Data segment length will be zero */
- ucd_cmd_ptr->header.dword_2 = 0;
-
- ucd_cmd_ptr->exp_data_transfer_len =
- cpu_to_be32(lrbp->cmd->transfersize);
-
- memcpy(ucd_cmd_ptr->cdb,
- lrbp->cmd->cmnd,
- (min_t(unsigned short,
- lrbp->cmd->cmd_len,
- MAX_CDB_SIZE)));
- break;
case UTP_CMD_TYPE_DEV_MANAGE:
- /* For query function implementation */
+ ufshcd_prepare_req_desc(lrbp, &upiu_flags);
+ if (lrbp->cmd && lrbp->command_type == UTP_CMD_TYPE_SCSI)
+ ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
+ else if (lrbp->cmd)
+ ufshcd_prepare_utp_query_req_upiu(hba, lrbp,
+ upiu_flags);
+ else {
+ dev_err(hba->dev, "%s: Invalid UPIU request\n",
+ __func__);
+ ret = -EINVAL;
+ }
break;
case UTP_CMD_TYPE_UFS:
/* For UFS native command implementation */
+ dev_err(hba->dev, "%s: UFS native command are not supported\n",
+ __func__);
+ ret = -ENOTSUPP;
+ break;
+ default:
+ ret = -ENOTSUPP;
+ dev_err(hba->dev, "%s: unknown command type: 0x%x\n",
+ __func__, lrbp->command_type);
break;
} /* end of switch */
+
+ return ret;
}
/**
@@ -527,10 +662,13 @@
lrbp->task_tag = tag;
lrbp->lun = cmd->device->lun;
- lrbp->command_type = UTP_CMD_TYPE_SCSI;
+ if (ufshcd_is_query_req(lrbp))
+ lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE;
+ else
+ lrbp->command_type = UTP_CMD_TYPE_SCSI;
/* form UPIU before issuing the command */
- ufshcd_compose_upiu(lrbp);
+ ufshcd_compose_upiu(hba, lrbp);
err = ufshcd_map_sg(lrbp);
if (err)
goto out;
@@ -544,6 +682,110 @@
}
/**
+ * ufshcd_query_request() - Entry point for issuing query request to a
+ * ufs device.
+ * @hba: ufs driver context
+ * @query: params for query request
+ * @descriptor: buffer for sending/receiving descriptor
+ * @response: pointer to a buffer that will contain the response code and
+ * response upiu
+ * @timeout: time limit for the command in seconds
+ * @retries: number of times to try executing the command
+ *
+ * The query request is submitted to the same request queue as the rest of
+ * the scsi commands passed to the UFS controller. In order to use this
+ * queue, we need to receive a tag, same as all other commands. The tags
+ * are issued from the block layer. To simulate a request from the block
+ * layer, we use the same interface as the SCSI layer does when it issues
+ * commands not generated by users. To distinguish a query request from
+ * the SCSI commands, we use a vendor specific unused SCSI command
+ * op-code. This op-code is not part of the SCSI command subset used in
+ * UFS. This makes it easy to identify the command in the driver and
+ * handle it appropriately.
+ *
+ * All necessary fields for issuing a query and receiving its response are
+ * stored in the UFS hba struct. We can use this method since we know
+ * there is only one active query request at all times.
+ *
+ * The request that will pass to the device is stored in "query" argument
+ * passed to this function, while the "response" argument (which is output
+ * field) will hold the query response from the device along with the
+ * response code.
+ */
+int ufshcd_query_request(struct ufs_hba *hba,
+ struct ufs_query_req *query,
+ u8 *descriptor,
+ struct ufs_query_res *response,
+ int timeout,
+ int retries)
+{
+ struct scsi_device *sdev;
+ u8 cmd[UFS_QUERY_CMD_SIZE] = {0};
+ int result;
+ bool sdev_lookup = true;
+
+ if (!hba || !query || !response) {
+ pr_err("%s: NULL pointer hba = %p, query = %p, response = %p\n",
+ __func__, hba, query, response);
+ return -EINVAL;
+ }
+
+ /*
+ * A SCSI command buffer is composed of the opcode at the
+ * beginning followed by zeros.
+ */
+ cmd[0] = UFS_QUERY_RESERVED_SCSI_CMD;
+
+ /* extracting the SCSI Device */
+ sdev = scsi_device_lookup(hba->host, 0, 0, 0);
+ if (!sdev) {
+ /*
+ * Some Query Requests are sent during device
+ * initialization, before the scsi device has been
+ * initialized. If there is no scsi device, we generate a
+ * temporary device to allow the Query Request flow.
+ */
+ sdev_lookup = false;
+ sdev = scsi_get_host_dev(hba->host);
+ }
+
+ if (!sdev) {
+ dev_err(hba->dev, "%s: Could not fetch scsi device\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ mutex_lock(&hba->query.lock_ufs_query);
+ hba->query.request = query;
+ hba->query.descriptor = descriptor;
+ hba->query.response = response;
+
+ /* wait until request is completed */
+ result = scsi_execute(sdev, cmd, DMA_NONE, NULL, 0, NULL,
+ timeout, retries, 0, NULL);
+ if (result) {
+ dev_err(hba->dev,
+ "%s: Query with opcode 0x%x, failed with result %d\n",
+ __func__, query->upiu_req.opcode, result);
+ result = -EIO;
+ }
+
+ hba->query.request = NULL;
+ hba->query.descriptor = NULL;
+ hba->query.response = NULL;
+ mutex_unlock(&hba->query.lock_ufs_query);
+
+ /* Releasing scsi device resource */
+ if (sdev_lookup)
+ scsi_device_put(sdev);
+ else
+ scsi_free_host_dev(sdev);
+
+ return result;
+}
+
+/**
* ufshcd_memory_alloc - allocate memory for host memory space data structures
* @hba: per adapter instance
*
@@ -677,8 +919,8 @@
cpu_to_le16(ALIGNED_UPIU_SIZE);
hba->lrb[i].utr_descriptor_ptr = (utrdlp + i);
- hba->lrb[i].ucd_cmd_ptr =
- (struct utp_upiu_cmd *)(cmd_descp + i);
+ hba->lrb[i].ucd_req_ptr =
+ (struct utp_upiu_req *)(cmd_descp + i);
hba->lrb[i].ucd_rsp_ptr =
(struct utp_upiu_rsp *)cmd_descp[i].response_upiu;
hba->lrb[i].ucd_prdt_ptr =
@@ -1101,7 +1343,9 @@
* @hba: per adapter instance
* @lrb: pointer to local reference block of completed command
*
- * Returns result of the command to notify SCSI midlayer
+ * Returns result of the command to notify SCSI midlayer. In
+ * case of query request specific result, returns DID_OK, and
+ * the error will be handled by the dispatcher.
*/
static inline int
ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
@@ -1115,27 +1359,46 @@
switch (ocs) {
case OCS_SUCCESS:
-
/* check if the returned transfer response is valid */
- result = ufshcd_is_valid_req_rsp(lrbp->ucd_rsp_ptr);
- if (result) {
- dev_err(hba->dev,
- "Invalid response = %x\n", result);
+ result = ufshcd_get_req_rsp(lrbp->ucd_rsp_ptr);
+
+ switch (result) {
+ case UPIU_TRANSACTION_RESPONSE:
+ /*
+ * get the response UPIU result to extract
+ * the SCSI command status
+ */
+ result = ufshcd_get_rsp_upiu_result(lrbp->ucd_rsp_ptr);
+
+ /*
+ * get the result based on SCSI status response
+ * to notify the SCSI midlayer of the command status
+ */
+ scsi_status = result & MASK_SCSI_STATUS;
+ result = ufshcd_scsi_cmd_status(lrbp, scsi_status);
break;
+ case UPIU_TRANSACTION_QUERY_RSP:
+ /*
+ * Return result = ok, since SCSI layer wouldn't
+ * know how to handle errors from query requests.
+ * The result is saved with the response so that
+ * the ufs_core layer will handle it.
+ */
+ result |= DID_OK << 16;
+ ufshcd_copy_query_response(hba, lrbp);
+ break;
+ case UPIU_TRANSACTION_REJECT_UPIU:
+ /* TODO: handle Reject UPIU Response */
+ result |= DID_ERROR << 16;
+ dev_err(hba->dev,
+ "Reject UPIU not fully implemented\n");
+ break;
+ default:
+ result |= DID_ERROR << 16;
+ dev_err(hba->dev,
+ "Unexpected request response code = %x\n",
+ result);
}
-
- /*
- * get the response UPIU result to extract
- * the SCSI command status
- */
- result = ufshcd_get_rsp_upiu_result(lrbp->ucd_rsp_ptr);
-
- /*
- * get the result based on SCSI status response
- * to notify the SCSI midlayer of the command status
- */
- scsi_status = result & MASK_SCSI_STATUS;
- result = ufshcd_scsi_cmd_status(lrbp, scsi_status);
break;
case OCS_ABORTED:
result |= DID_ABORT << 16;
@@ -1364,10 +1627,10 @@
task_req_upiup =
(struct utp_upiu_task_req *) task_req_descp->task_req_upiu;
task_req_upiup->header.dword_0 =
- cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_TASK_REQ, 0,
- lrbp->lun, lrbp->task_tag));
+ UPIU_HEADER_DWORD(UPIU_TRANSACTION_TASK_REQ, 0,
+ lrbp->lun, lrbp->task_tag);
task_req_upiup->header.dword_1 =
- cpu_to_be32(UPIU_HEADER_DWORD(0, tm_function, 0, 0));
+ UPIU_HEADER_DWORD(0, tm_function, 0, 0);
task_req_upiup->input_param1 = lrbp->lun;
task_req_upiup->input_param1 =
@@ -1670,6 +1933,9 @@
INIT_WORK(&hba->uic_workq, ufshcd_uic_cc_handler);
INIT_WORK(&hba->feh_workq, ufshcd_fatal_err_handler);
+ /* Initialize mutex for query requests */
+ mutex_init(&hba->query.lock_ufs_query);
+
/* IRQ registration */
err = request_irq(irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba);
if (err) {
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 6b99a42..336980b 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -60,6 +60,7 @@
#include <scsi/scsi_tcq.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_eh.h>
+#include <scsi/scsi_device.h>
#include "ufs.h"
#include "ufshci.h"
@@ -88,7 +89,7 @@
/**
* struct ufshcd_lrb - local reference block
* @utr_descriptor_ptr: UTRD address of the command
- * @ucd_cmd_ptr: UCD address of the command
+ * @ucd_req_ptr: UCD address of the command
* @ucd_rsp_ptr: Response UPIU address for this command
* @ucd_prdt_ptr: PRDT address of the command
* @cmd: pointer to SCSI command
@@ -101,7 +102,7 @@
*/
struct ufshcd_lrb {
struct utp_transfer_req_desc *utr_descriptor_ptr;
- struct utp_upiu_cmd *ucd_cmd_ptr;
+ struct utp_upiu_req *ucd_req_ptr;
struct utp_upiu_rsp *ucd_rsp_ptr;
struct ufshcd_sg_entry *ucd_prdt_ptr;
@@ -115,6 +116,19 @@
unsigned int lun;
};
+/**
+ * struct ufs_query - keeps the query request information
+ * @request: request upiu and function
+ * @descriptor: buffer for sending/receiving descriptor
+ * @response: response upiu and response
+ * @mutex: lock to allow one query at a time
+ */
+struct ufs_query {
+ struct ufs_query_req *request;
+ u8 *descriptor;
+ struct ufs_query_res *response;
+ struct mutex lock_ufs_query;
+};
/**
* struct ufs_hba - per adapter private structure
@@ -143,6 +157,7 @@
* @uic_workq: Work queue for UIC completion handling
* @feh_workq: Work queue for fatal controller error handling
* @errors: HBA errors
+ * @query: query request information
*/
struct ufs_hba {
void __iomem *mmio_base;
@@ -184,6 +199,9 @@
/* HBA Errors */
u32 errors;
+
+ /* Query Request */
+ struct ufs_query query;
};
int ufshcd_init(struct device *, struct ufs_hba ** , void __iomem * ,
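The query plumbing above funnels a request through the regular SCSI queue: the caller fills a ufs_query_req, ufshcd_query_request() parks it (together with an optional descriptor buffer and the response structure) in hba->query, and issues the reserved opcode on its behalf. A minimal caller-side sketch follows; it is illustrative only, and the descriptor IDN, buffer length, timeout and retry count are assumptions, not part of this patch.

/* Illustrative sketch only: IDN, length, timeout and retries are examples */
static int example_read_descriptor(struct ufs_hba *hba, u8 *buf, u16 len)
{
	struct ufs_query_req query = {0};
	struct ufs_query_res response = {0};
	int err;

	query.query_func = UPIU_QUERY_FUNC_STANDARD_READ_REQUEST;
	query.upiu_req.opcode = UPIU_QUERY_OPCODE_READ_DESC;
	query.upiu_req.idn = 0;			/* hypothetical descriptor IDN */
	query.upiu_req.length = len;

	err = ufshcd_query_request(hba, &query, buf, &response, 10, 3);
	if (!err && response.response != QUERY_RESULT_SUCCESS)
		err = -EIO;

	return err;
}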
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 0c16484..4a86247 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -39,7 +39,7 @@
enum {
TASK_REQ_UPIU_SIZE_DWORDS = 8,
TASK_RSP_UPIU_SIZE_DWORDS = 8,
- ALIGNED_UPIU_SIZE = 128,
+ ALIGNED_UPIU_SIZE = 512,
};
/* UFSHCI Registers */
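With ALIGNED_UPIU_SIZE raised from 128 to 512, the data-segment limit derived in ufs.h grows as well. A small compile-time sanity sketch (not part of the patch) spelling out the arithmetic:

/* Sketch only: with ALIGNED_UPIU_SIZE = 512 and GENERAL_UPIU_REQUEST_SIZE = 32,
 * UPIU_HEADER_DATA_SEGMENT_MAX_SIZE = 512 - 32 = 480 bytes of descriptor
 * payload after the general query request fields.
 */
static inline void ufs_upiu_size_check(void)
{
	BUILD_BUG_ON(UPIU_HEADER_DATA_SEGMENT_MAX_SIZE != 480);
}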
diff --git a/drivers/slimbus/slim-msm-ctrl.c b/drivers/slimbus/slim-msm-ctrl.c
index 9a864aa..4a3ea76 100644
--- a/drivers/slimbus/slim-msm-ctrl.c
+++ b/drivers/slimbus/slim-msm-ctrl.c
@@ -37,8 +37,6 @@
#define QC_DEVID_SAT2 0x4
#define QC_DEVID_PGD 0x5
#define QC_MSM_DEVS 5
-#define INIT_MX_RETRIES 10
-#define DEF_RETRY_MS 10
/* Manager registers */
enum mgr_reg {
diff --git a/drivers/slimbus/slim-msm-ngd.c b/drivers/slimbus/slim-msm-ngd.c
index 63d3750..a0179cb 100644
--- a/drivers/slimbus/slim-msm-ngd.c
+++ b/drivers/slimbus/slim-msm-ngd.c
@@ -631,6 +631,7 @@
if (mc == SLIM_USR_MC_MASTER_CAPABILITY &&
mt == SLIM_MSG_MT_SRC_REFERRED_USER) {
struct slim_msg_txn txn;
+ int retries = 0;
u8 wbuf[8];
txn.dt = SLIM_MSG_DEST_LOGICALADDR;
txn.ec = 0;
@@ -638,7 +639,6 @@
txn.mc = SLIM_USR_MC_REPORT_SATELLITE;
txn.mt = SLIM_MSG_MT_SRC_REFERRED_USER;
txn.la = SLIM_LA_MGR;
- txn.rl = 8;
wbuf[0] = SAT_MAGIC_LSB;
wbuf[1] = SAT_MAGIC_MSB;
wbuf[2] = SAT_MSG_VER;
@@ -655,7 +655,8 @@
/* make sure NGD MSG-Q config goes through */
mb();
}
-
+capability_retry:
+ txn.rl = 8;
ret = ngd_xfer_msg(&dev->ctrl, &txn);
if (!ret) {
enum msm_ctrl_state prev_state = dev->state;
@@ -668,6 +669,13 @@
/* ADSP SSR, send device_up notifications */
if (prev_state == MSM_CTRL_DOWN)
schedule_work(&dev->slave_notify);
+ } else if (ret == -EIO) {
+ pr_info("capability message NACKed, retrying");
+ if (retries < INIT_MX_RETRIES) {
+ msleep(DEF_RETRY_MS);
+ retries++;
+ goto capability_retry;
+ }
}
}
if (mc == SLIM_MSG_MC_REPLY_INFORMATION ||
diff --git a/drivers/slimbus/slim-msm.h b/drivers/slimbus/slim-msm.h
index 6ff3f19..cf2d26f 100644
--- a/drivers/slimbus/slim-msm.h
+++ b/drivers/slimbus/slim-msm.h
@@ -50,6 +50,8 @@
#define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \
((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16))
+#define INIT_MX_RETRIES 10
+#define DEF_RETRY_MS 10
#define MSM_CONCUR_MSG 8
#define SAT_CONCUR_MSG 8
#define DEF_WATERMARK (8 << 1)
diff --git a/drivers/staging/android/binder.c b/drivers/staging/android/binder.c
index 6cd30f1..3abc3b9 100644
--- a/drivers/staging/android/binder.c
+++ b/drivers/staging/android/binder.c
@@ -2525,14 +2525,38 @@
struct binder_transaction *t;
t = container_of(w, struct binder_transaction, work);
- if (t->buffer->target_node && !(t->flags & TF_ONE_WAY))
+ if (t->buffer->target_node &&
+ !(t->flags & TF_ONE_WAY)) {
binder_send_failed_reply(t, BR_DEAD_REPLY);
+ } else {
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered transaction %d\n",
+ t->debug_id);
+ t->buffer->transaction = NULL;
+ kfree(t);
+ binder_stats_deleted(BINDER_STAT_TRANSACTION);
+ }
} break;
case BINDER_WORK_TRANSACTION_COMPLETE: {
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered TRANSACTION_COMPLETE\n");
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
+ case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
+ case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
+ struct binder_ref_death *death;
+
+ death = container_of(w, struct binder_ref_death, work);
+ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
+ "binder: undelivered death notification, %p\n",
+ death->cookie);
+ kfree(death);
+ binder_stats_deleted(BINDER_STAT_DEATH);
+ } break;
default:
+ pr_err("binder: unexpected work type, %d, not freed\n",
+ w->type);
break;
}
}
@@ -3009,6 +3033,7 @@
nodes++;
rb_erase(&node->rb_node, &proc->nodes);
list_del_init(&node->work.entry);
+ binder_release_work(&node->async_todo);
if (hlist_empty(&node->refs)) {
kfree(node);
binder_stats_deleted(BINDER_STAT_NODE);
@@ -3047,6 +3072,7 @@
binder_delete_ref(ref);
}
binder_release_work(&proc->todo);
+ binder_release_work(&proc->delivered_death);
buffers = 0;
while ((n = rb_first(&proc->allocated_buffers))) {
diff --git a/drivers/thermal/msm8974-tsens.c b/drivers/thermal/msm8974-tsens.c
index 9ba954a..991cf2e 100644
--- a/drivers/thermal/msm8974-tsens.c
+++ b/drivers/thermal/msm8974-tsens.c
@@ -244,7 +244,11 @@
struct tsens_tm_device_sensor {
struct thermal_zone_device *tz_dev;
enum thermal_device_mode mode;
- unsigned int sensor_num;
+ /* Physical HW sensor number */
+ unsigned int sensor_hw_num;
+ /* Software index used to keep track of the
+ * HW/SW sensor ID mapping */
+ unsigned int sensor_sw_id;
struct work_struct work;
int offset;
int calib_data_point1;
@@ -273,13 +277,54 @@
struct tsens_tm_device *tmdev;
-static int tsens_tz_code_to_degc(int adc_code, int sensor_num)
+int tsens_get_sw_id_mapping(int sensor_hw_num, int *sensor_sw_idx)
{
- int degc, num, den;
+ int i = 0;
+ bool id_found = false;
+ while (i < tmdev->tsens_num_sensor && !id_found) {
+ if (sensor_hw_num == tmdev->sensor[i].sensor_hw_num) {
+ *sensor_sw_idx = tmdev->sensor[i].sensor_sw_id;
+ id_found = true;
+ }
+ i++;
+ }
+
+ if (!id_found)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(tsens_get_sw_id_mapping);
+
+int tsens_get_hw_id_mapping(int sensor_sw_id, int *sensor_hw_num)
+{
+ int i = 0;
+ bool id_found = false;
+
+ while (i < tmdev->tsens_num_sensor && !id_found) {
+ if (sensor_sw_id == tmdev->sensor[i].sensor_sw_id) {
+ *sensor_hw_num = tmdev->sensor[i].sensor_hw_num;
+ id_found = true;
+ }
+ i++;
+ }
+
+ if (!id_found)
+ return -EINVAL;
+
+ return 0;
+}
+EXPORT_SYMBOL(tsens_get_hw_id_mapping);
+
+static int tsens_tz_code_to_degc(int adc_code, int sensor_sw_id)
+{
+ int degc, num, den, idx;
+
+ idx = sensor_sw_id;
num = ((adc_code * tmdev->tsens_factor) -
- tmdev->sensor[sensor_num].offset);
- den = (int) tmdev->sensor[sensor_num].slope_mul_tsens_factor;
+ tmdev->sensor[idx].offset);
+ den = (int) tmdev->sensor[idx].slope_mul_tsens_factor;
if (num > 0)
degc = ((num + (den/2))/den);
@@ -288,24 +333,29 @@
else
degc = num/den;
+ pr_debug("raw_code:0x%x, sensor_num:%d, degc:%d\n",
+ adc_code, idx, degc);
return degc;
}
-static int tsens_tz_degc_to_code(int degc, int sensor_num)
+static int tsens_tz_degc_to_code(int degc, int idx)
{
- int code = ((degc * tmdev->sensor[sensor_num].slope_mul_tsens_factor)
- + tmdev->sensor[sensor_num].offset)/tmdev->tsens_factor;
+ int code = ((degc * tmdev->sensor[idx].slope_mul_tsens_factor)
+ + tmdev->sensor[idx].offset)/tmdev->tsens_factor;
if (code > TSENS_THRESHOLD_MAX_CODE)
code = TSENS_THRESHOLD_MAX_CODE;
else if (code < TSENS_THRESHOLD_MIN_CODE)
code = TSENS_THRESHOLD_MIN_CODE;
+ pr_debug("raw_code:0x%x, sensor_num:%d, degc:%d\n",
+ code, idx, degc);
return code;
}
-static void msm_tsens_get_temp(int sensor_num, unsigned long *temp)
+static void msm_tsens_get_temp(int sensor_hw_num, unsigned long *temp)
{
unsigned int code, sensor_addr;
+ int sensor_sw_id = -EINVAL, rc = 0;
if (!tmdev->prev_reading_avail) {
while (!(readl_relaxed(TSENS_TRDY_ADDR(tmdev->tsens_addr))
@@ -318,9 +368,17 @@
sensor_addr =
(unsigned int)TSENS_S0_STATUS_ADDR(tmdev->tsens_addr);
code = readl_relaxed(sensor_addr +
- (sensor_num << TSENS_STATUS_ADDR_OFFSET));
+ (sensor_hw_num << TSENS_STATUS_ADDR_OFFSET));
+ /* Obtain SW index to map the corresponding thermal zone's
+ * offset and slope for code to degc conversion. */
+ rc = tsens_get_sw_id_mapping(sensor_hw_num, &sensor_sw_id);
+ if (rc < 0) {
+ pr_err("tsens mapping index not found\n");
+ return;
+ }
+
*temp = tsens_tz_code_to_degc((code & TSENS_SN_STATUS_TEMP_MASK),
- sensor_num);
+ sensor_sw_id);
}
static int tsens_tz_get_temp(struct thermal_zone_device *thermal,
@@ -331,7 +389,7 @@
if (!tm_sensor || tm_sensor->mode != THERMAL_DEVICE_ENABLED || !temp)
return -EINVAL;
- msm_tsens_get_temp(tm_sensor->sensor_num, temp);
+ msm_tsens_get_temp(tm_sensor->sensor_hw_num, temp);
return 0;
}
@@ -406,8 +464,9 @@
hi_code = TSENS_THRESHOLD_MAX_CODE;
reg_cntl = readl_relaxed((TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
- (tmdev->tsens_addr) +
- (tm_sensor->sensor_num * 4)));
+ (tmdev->tsens_addr) +
+ (tm_sensor->sensor_hw_num *
+ TSENS_SN_ADDR_OFFSET)));
switch (trip) {
case TSENS_TRIP_WARM:
code = (reg_cntl & TSENS_UPPER_THRESHOLD_MASK)
@@ -432,8 +491,8 @@
if (mode == THERMAL_TRIP_ACTIVATION_DISABLED)
writel_relaxed(reg_cntl | mask,
(TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
- (tmdev->tsens_addr) +
- (tm_sensor->sensor_num * 4)));
+ (tmdev->tsens_addr) +
+ (tm_sensor->sensor_hw_num * TSENS_SN_ADDR_OFFSET)));
else {
if (code < lo_code || code > hi_code) {
pr_err("%s with invalid code %x\n", __func__, code);
@@ -441,7 +500,7 @@
}
writel_relaxed(reg_cntl & ~mask,
(TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR(tmdev->tsens_addr) +
- (tm_sensor->sensor_num * 4)));
+ (tm_sensor->sensor_hw_num * TSENS_SN_ADDR_OFFSET)));
}
mb();
return 0;
@@ -452,13 +511,14 @@
{
struct tsens_tm_device_sensor *tm_sensor = thermal->devdata;
unsigned int reg;
+ int sensor_sw_id = -EINVAL, rc = 0;
if (!tm_sensor || trip < 0 || !temp)
return -EINVAL;
reg = readl_relaxed(TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
(tmdev->tsens_addr) +
- (tm_sensor->sensor_num * TSENS_SN_ADDR_OFFSET));
+ (tm_sensor->sensor_hw_num * TSENS_SN_ADDR_OFFSET));
switch (trip) {
case TSENS_TRIP_WARM:
reg = (reg & TSENS_UPPER_THRESHOLD_MASK) >>
@@ -471,7 +531,12 @@
return -EINVAL;
}
- *temp = tsens_tz_code_to_degc(reg, tm_sensor->sensor_num);
+ rc = tsens_get_sw_id_mapping(tm_sensor->sensor_hw_num, &sensor_sw_id);
+ if (rc < 0) {
+ pr_err("tsens mapping index not found\n");
+ return rc;
+ }
+ *temp = tsens_tz_code_to_degc(reg, sensor_sw_id);
return 0;
}
@@ -479,9 +544,8 @@
static int tsens_tz_notify(struct thermal_zone_device *thermal,
int count, enum thermal_trip_type type)
{
- /* TSENS driver does not shutdown the device.
- All Thermal notification are sent to the
- thermal daemon to take appropriate action */
+ /* Critical temperature thresholds are enabled and will
+ * shut down the device once they are crossed. */
pr_debug("%s debug\n", __func__);
return 1;
}
@@ -491,10 +555,14 @@
{
struct tsens_tm_device_sensor *tm_sensor = thermal->devdata;
unsigned int reg_cntl;
- int code, hi_code, lo_code, code_err_chk;
+ int code, hi_code, lo_code, code_err_chk, sensor_sw_id = 0, rc = 0;
- code_err_chk = code = tsens_tz_degc_to_code(temp,
- tm_sensor->sensor_num);
+ rc = tsens_get_sw_id_mapping(tm_sensor->sensor_hw_num, &sensor_sw_id);
+ if (rc < 0) {
+ pr_err("tsens mapping index not found\n");
+ return rc;
+ }
+ code_err_chk = code = tsens_tz_degc_to_code(temp, sensor_sw_id);
if (!tm_sensor || trip < 0)
return -EINVAL;
@@ -502,8 +570,8 @@
hi_code = TSENS_THRESHOLD_MAX_CODE;
reg_cntl = readl_relaxed(TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
- (tmdev->tsens_addr) +
- (tm_sensor->sensor_num * TSENS_SN_ADDR_OFFSET));
+ (tmdev->tsens_addr) + (tm_sensor->sensor_hw_num *
+ TSENS_SN_ADDR_OFFSET));
switch (trip) {
case TSENS_TRIP_WARM:
code <<= TSENS_UPPER_THRESHOLD_SHIFT;
@@ -526,7 +594,7 @@
writel_relaxed(reg_cntl | code, (TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
(tmdev->tsens_addr) +
- (tm_sensor->sensor_num *
+ (tm_sensor->sensor_hw_num *
TSENS_SN_ADDR_OFFSET)));
mb();
return 0;
@@ -557,35 +625,48 @@
tsens_work);
unsigned int i, status, threshold;
unsigned int sensor_status_addr, sensor_status_ctrl_addr;
+ int sensor_sw_id = -EINVAL, rc = 0;
sensor_status_addr =
(unsigned int)TSENS_S0_STATUS_ADDR(tmdev->tsens_addr);
sensor_status_ctrl_addr =
(unsigned int)TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR
(tmdev->tsens_addr);
- for (i = 0; i < tmdev->tsens_num_sensor; i++) {
+ for (i = 0; i < tm->tsens_num_sensor; i++) {
bool upper_thr = false, lower_thr = false;
- status = readl_relaxed(sensor_status_addr);
- threshold = readl_relaxed(sensor_status_ctrl_addr);
+ uint32_t addr_offset;
+
+ addr_offset = tm->sensor[i].sensor_hw_num *
+ TSENS_SN_ADDR_OFFSET;
+ status = readl_relaxed(sensor_status_addr + addr_offset);
+ threshold = readl_relaxed(sensor_status_ctrl_addr +
+ addr_offset);
if (status & TSENS_SN_STATUS_UPPER_STATUS) {
writel_relaxed(threshold | TSENS_UPPER_STATUS_CLR,
- sensor_status_ctrl_addr);
+ TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR(
+ tmdev->tsens_addr + addr_offset));
upper_thr = true;
}
if (status & TSENS_SN_STATUS_LOWER_STATUS) {
writel_relaxed(threshold | TSENS_LOWER_STATUS_CLR,
- sensor_status_ctrl_addr);
+ TSENS_S0_UPPER_LOWER_STATUS_CTRL_ADDR(
+ tmdev->tsens_addr + addr_offset));
lower_thr = true;
}
if (upper_thr || lower_thr) {
/* Notify user space */
schedule_work(&tm->sensor[i].work);
- pr_debug("sensor:%d trigger temp (%d degC)\n", i,
+ rc = tsens_get_sw_id_mapping(
+ tm->sensor[i].sensor_hw_num,
+ &sensor_sw_id);
+ if (rc < 0)
+ pr_err("tsens mapping index not found\n");
+ pr_debug("sensor:%d trigger temp (%d degC)\n",
+ tm->sensor[i].sensor_hw_num,
tsens_tz_code_to_degc((status &
- TSENS_SN_STATUS_TEMP_MASK), i));
+ TSENS_SN_STATUS_TEMP_MASK),
+ sensor_sw_id));
}
- sensor_status_addr += TSENS_SN_ADDR_OFFSET;
- sensor_status_ctrl_addr += TSENS_SN_ADDR_OFFSET;
}
mb();
}
@@ -599,17 +680,19 @@
static void tsens_hw_init(void)
{
- unsigned int reg_cntl = 0;
+ unsigned int reg_cntl = 0, sensor_en = 0;
unsigned int i;
if (tmdev->tsens_local_init) {
writel_relaxed(reg_cntl, TSENS_CTRL_ADDR(tmdev->tsens_addr));
writel_relaxed(reg_cntl | TSENS_SW_RST,
TSENS_CTRL_ADDR(tmdev->tsens_addr));
- reg_cntl |= ((TSENS_62_5_MS_MEAS_PERIOD <<
- TSENS_MEAS_PERIOD_SHIFT) |
- (((1 << tmdev->tsens_num_sensor) - 1) << TSENS_SENSOR0_SHIFT) |
- TSENS_EN);
+ reg_cntl |= (TSENS_62_5_MS_MEAS_PERIOD <<
+ TSENS_MEAS_PERIOD_SHIFT);
+ for (i = 0; i < tmdev->tsens_num_sensor; i++)
+ sensor_en |= (1 << tmdev->sensor[i].sensor_hw_num);
+ sensor_en <<= TSENS_SENSOR0_SHIFT;
+ reg_cntl |= (sensor_en | TSENS_EN);
writel_relaxed(reg_cntl, TSENS_CTRL_ADDR(tmdev->tsens_addr));
writel_relaxed(TSENS_GLOBAL_INIT_DATA,
TSENS_GLOBAL_CONFIG(tmdev->tsens_addr));
@@ -618,10 +701,12 @@
for (i = 0; i < tmdev->tsens_num_sensor; i++) {
writel_relaxed(TSENS_SN_MIN_MAX_STATUS_CTRL_DATA,
TSENS_SN_MIN_MAX_STATUS_CTRL(tmdev->tsens_addr)
- + (i * TSENS_SN_ADDR_OFFSET));
+ + (tmdev->sensor[i].sensor_hw_num *
+ TSENS_SN_ADDR_OFFSET));
writel_relaxed(TSENS_SN_REMOTE_CFG_DATA,
TSENS_SN_REMOTE_CONFIG(tmdev->tsens_addr)
- + (i * TSENS_SN_ADDR_OFFSET));
+ + (tmdev->sensor[i].sensor_hw_num *
+ TSENS_SN_ADDR_OFFSET));
}
pr_debug("Local TSENS control initialization\n");
}
@@ -648,6 +733,7 @@
tsens_calibration_mode = (calib_data[0] & TSENS_8X10_TSENS_CAL_SEL)
>> TSENS_8X10_CAL_SEL_SHIFT;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_TWO_POINT_CALIB) ||
(tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2)) {
@@ -702,6 +788,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -746,6 +835,7 @@
tsens_calibration_mode = (calib_data[5] & TSENS_8X26_TSENS_CAL_SEL)
>> TSENS_8X26_CAL_SEL_SHIFT;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_TWO_POINT_CALIB) ||
(tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2)) {
@@ -855,6 +945,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -895,11 +988,14 @@
TSENS_EEPROM_REDUNDANCY_SEL(tmdev->tsens_calib_addr));
calib_redun_sel = calib_redun_sel & TSENS_QFPROM_BACKUP_REDUN_SEL;
calib_redun_sel >>= TSENS_QFPROM_BACKUP_REDUN_SHIFT;
+ pr_debug("calib_redun_sel:%x\n", calib_redun_sel);
- for (i = 0; i < TSENS_MAIN_CALIB_ADDR_RANGE; i++)
+ for (i = 0; i < TSENS_MAIN_CALIB_ADDR_RANGE; i++) {
calib_data[i] = readl_relaxed(
(TSENS_EEPROM(tmdev->tsens_calib_addr))
+ (i * TSENS_SN_ADDR_OFFSET));
+ pr_debug("calib raw data row%d:0x%x\n", i, calib_data[i]);
+ }
if (calib_redun_sel == TSENS_QFPROM_BACKUP_SEL) {
tsens_calibration_mode = (calib_data[4] & TSENS_CAL_SEL_0_1)
@@ -907,6 +1003,7 @@
temp = (calib_data[5] & TSENS_CAL_SEL_2)
>> TSENS_CAL_SEL_SHIFT_2;
tsens_calibration_mode |= temp;
+ pr_debug("backup calib mode:%x\n", calib_redun_sel);
for (i = 0; i < TSENS_BACKUP_CALIB_ADDR_RANGE; i++)
calib_data_backup[i] = readl_relaxed(
@@ -992,6 +1089,7 @@
temp = (calib_data[3] & TSENS_CAL_SEL_2)
>> TSENS_CAL_SEL_SHIFT_2;
tsens_calibration_mode |= temp;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_ONE_POINT_CALIB) ||
(tsens_calibration_mode ==
TSENS_ONE_POINT_CALIB_OPTION_2) ||
@@ -1103,6 +1201,7 @@
if ((tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2) ||
(tsens_calibration_mode == TSENS_TWO_POINT_CALIB)) {
+ pr_debug("one point calibration calculation\n");
calib_tsens_point1_data[0] =
((((tsens_base1_data) + tsens0_point1) << 2) |
TSENS_BIT_APPEND);
@@ -1180,6 +1279,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -1192,6 +1294,7 @@
tmdev->sensor[i].offset = (tmdev->sensor[i].calib_data_point1 *
tmdev->tsens_factor) - (TSENS_CAL_DEGC_POINT1 *
tmdev->sensor[i].slope_mul_tsens_factor);
+ pr_debug("offset:%d\n", tmdev->sensor[i].offset);
INIT_WORK(&tmdev->sensor[i].work, notify_uspace_tsens_fn);
tmdev->prev_reading_avail = false;
}
@@ -1223,6 +1326,7 @@
const struct device_node *of_node = pdev->dev.of_node;
struct resource *res_mem = NULL;
u32 *tsens_slope_data;
+ u32 *sensor_id;
u32 rc = 0, i, tsens_num_sensors, calib_type;
const char *tsens_calib_mode;
@@ -1280,6 +1384,29 @@
tmdev->tsens_local_init = of_property_read_bool(of_node,
"qcom,tsens-local-init");
+ sensor_id = devm_kzalloc(&pdev->dev,
+ tsens_num_sensors * sizeof(u32), GFP_KERNEL);
+ if (!sensor_id) {
+ dev_err(&pdev->dev, "can not allocate sensor id\n");
+ return -ENOMEM;
+ }
+
+ rc = of_property_read_u32_array(of_node,
+ "qcom,sensor-id", sensor_id, tsens_num_sensors);
+ if (rc) {
+ pr_debug("Default sensor id mapping\n");
+ for (i = 0; i < tsens_num_sensors; i++) {
+ tmdev->sensor[i].sensor_hw_num = i;
+ tmdev->sensor[i].sensor_sw_id = i;
+ }
+ } else {
+ pr_debug("Use specified sensor id mapping\n");
+ for (i = 0; i < tsens_num_sensors; i++) {
+ tmdev->sensor[i].sensor_hw_num = sensor_id[i];
+ tmdev->sensor[i].sensor_sw_id = i;
+ }
+ }
+
tmdev->tsens_irq = platform_get_irq(pdev, 0);
if (tmdev->tsens_irq < 0) {
pr_err("Invalid get irq\n");
@@ -1422,9 +1549,9 @@
for (i = 0; i < tmdev->tsens_num_sensor; i++) {
char name[18];
- snprintf(name, sizeof(name), "tsens_tz_sensor%d", i);
+ snprintf(name, sizeof(name), "tsens_tz_sensor%d",
+ tmdev->sensor[i].sensor_hw_num);
tmdev->sensor[i].mode = THERMAL_DEVICE_ENABLED;
- tmdev->sensor[i].sensor_num = i;
tmdev->sensor[i].tz_dev = thermal_zone_device_register(name,
TSENS_TRIP_NUM, &tmdev->sensor[i],
&tsens_thermal_zone_ops, 0, 0, 0, 0);
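The qcom,sensor-id handling above decouples the software index (which selects calibration data) from the physical sensor number programmed into the TSENS registers, and exports two lookups to translate between them. A short sketch of how they behave, assuming a hypothetical DT entry qcom,sensor-id = <0 1 3 7>; the values are examples only:

/* Illustrative sketch only: assumes qcom,sensor-id = <0 1 3 7> in the DT */
static void example_tsens_mapping(void)
{
	int hw_id, sw_id;

	/* SW index 2 was assigned HW sensor 3 during probe */
	if (!tsens_get_hw_id_mapping(2, &hw_id))
		pr_debug("sw 2 -> hw %d\n", hw_id);	/* 3 */

	/* the reverse lookup recovers the SW index for HW sensor 7 */
	if (!tsens_get_sw_id_mapping(7, &sw_id))
		pr_debug("hw 7 -> sw %d\n", sw_id);	/* 3 */
}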
diff --git a/drivers/thermal/msm_thermal.c b/drivers/thermal/msm_thermal.c
index 12ac3bc..2b9af47 100644
--- a/drivers/thermal/msm_thermal.c
+++ b/drivers/thermal/msm_thermal.c
@@ -28,6 +28,7 @@
#include <linux/of.h>
#include <linux/sysfs.h>
#include <linux/types.h>
+#include <linux/android_alarm.h>
#include <mach/cpufreq.h>
#include <mach/rpm-regulator.h>
#include <mach/rpm-regulator-smd.h>
@@ -41,6 +42,10 @@
static bool core_control_enabled;
static uint32_t cpus_offlined;
static DEFINE_MUTEX(core_control_mutex);
+static uint32_t wakeup_ms;
+static struct alarm thermal_rtc;
+static struct kobject *tt_kobj;
+static struct work_struct timer_work;
static int enabled;
static int rails_cnt;
@@ -58,6 +63,7 @@
static bool psm_enabled;
static bool psm_nodes_called;
static bool psm_probed;
+static int *tsens_id_map;
static DEFINE_MUTEX(vdd_rstr_mutex);
static DEFINE_MUTEX(psm_mutex);
@@ -66,7 +72,7 @@
uint32_t freq_req;
uint32_t min_level;
uint32_t num_levels;
- uint32_t curr_level;
+ int32_t curr_level;
uint32_t levels[3];
struct kobj_attribute value_attr;
struct kobj_attribute level_attr;
@@ -103,6 +109,7 @@
ko_attr.attr.mode = 444; \
ko_attr.show = vdd_rstr_reg_##_name##_show; \
ko_attr.store = NULL; \
+ sysfs_attr_init(&ko_attr.attr); \
_rail.attr_gp.attrs[j] = &ko_attr.attr;
#define VDD_RES_RW_ATTRIB(_rail, ko_attr, j, _name) \
@@ -110,6 +117,7 @@
ko_attr.attr.mode = 644; \
ko_attr.show = vdd_rstr_reg_##_name##_show; \
ko_attr.store = vdd_rstr_reg_##_name##_store; \
+ sysfs_attr_init(&ko_attr.attr); \
_rail.attr_gp.attrs[j] = &ko_attr.attr;
#define VDD_RSTR_ENABLE_FROM_ATTRIBS(attr) \
@@ -126,6 +134,7 @@
ko_attr.attr.mode = 644; \
ko_attr.show = psm_reg_##_name##_show; \
ko_attr.store = psm_reg_##_name##_store; \
+ sysfs_attr_init(&ko_attr.attr); \
_rail.attr_gp.attrs[j] = &ko_attr.attr;
#define PSM_REG_MODE_FROM_ATTRIBS(attr) \
@@ -150,6 +159,7 @@
{
int cpu = 0;
int ret = 0;
+ struct cpufreq_policy *policy = NULL;
if (!freq_table_get) {
ret = check_freq_table();
@@ -165,16 +175,20 @@
for_each_possible_cpu(cpu) {
ret = msm_cpufreq_set_freq_limits(cpu, min, limited_max_freq);
-
if (ret) {
pr_err("%s:Fail to set limits for cpu%d\n",
__func__, cpu);
return ret;
}
- if (cpufreq_update_policy(cpu))
- pr_debug("%s: Cannot update policy for cpu%d\n",
- __func__, cpu);
+ if (cpu_online(cpu)) {
+ policy = cpufreq_cpu_get(cpu);
+ if (!policy)
+ continue;
+ cpufreq_driver_target(policy, policy->cur,
+ CPUFREQ_RELATION_L);
+ cpufreq_cpu_put(policy);
+ }
}
return ret;
@@ -184,6 +198,9 @@
{
int ret = 0;
+ if (level == r->curr_level)
+ return ret;
+
/* level = -1: disable, level = 0,1,2..n: enable */
if (level == -1) {
ret = update_cpu_min_freq_all(r->min_level);
@@ -214,6 +231,9 @@
r->name);
return -EFAULT;
}
+ if (level == r->curr_level)
+ return ret;
+
/* level = -1: disable, level = 0,1,2..n: enable */
if (level == -1) {
ret = regulator_set_voltage(r->reg, r->min_level,
@@ -232,32 +252,6 @@
return ret;
}
-/* 1:enable, 0:disable */
-static int vdd_restriction_apply_all(int en)
-{
- int i = 0;
- int fail_cnt = 0;
- int ret = 0;
-
- for (i = 0; i < rails_cnt; i++) {
- if (rails[i].freq_req == 1 && freq_table_get)
- ret = vdd_restriction_apply_freq(&rails[i],
- en ? 0 : -1);
- else
- ret = vdd_restriction_apply_voltage(&rails[i],
- en ? 0 : -1);
- if (ret) {
- pr_err("Cannot set voltage for %s", rails[i].name);
- fail_cnt++;
- }
- }
- /* Check fail_cnt again to make sure all of the rails are applied
- * restriction successfully or not */
- if (fail_cnt)
- return -EFAULT;
-
- return ret;
-}
/* Setting all rails the same mode */
static int psm_set_mode_all(int mode)
@@ -319,8 +313,10 @@
ret = vdd_restriction_apply_voltage(&rails[i],
(val) ? 0 : -1);
- /* Even if fail to set one rail, still try to set the
- * others. Continue the loop */
+ /*
+ * Even if fail to set one rail, still try to set the
+ * others. Continue the loop
+ */
if (ret)
pr_err("Set vdd restriction for %s failed\n",
rails[i].name);
@@ -370,7 +366,7 @@
else
val = reg->levels[reg->curr_level];
- return snprintf(buf, PAGE_SIZE, "%d\n", reg->levels[reg->curr_level]);
+ return snprintf(buf, PAGE_SIZE, "%d\n", val);
}
static int vdd_rstr_reg_level_show(
@@ -392,7 +388,7 @@
if (vdd_rstr_en.enabled == 0)
goto done_store_level;
- ret = kstrtoint(buf, 10, &val);
+ ret = kstrtouint(buf, 10, &val);
if (ret) {
pr_err("Invalid input %s for level\n", buf);
goto done_store_level;
@@ -465,6 +461,103 @@
return count;
}
+static int check_sensor_id(int sensor_id)
+{
+ int i = 0;
+ bool hw_id_found = false;
+ int ret = 0;
+
+ for (i = 0; i < max_tsens_num; i++) {
+ if (sensor_id == tsens_id_map[i]) {
+ hw_id_found = true;
+ break;
+ }
+ }
+ if (!hw_id_found) {
+ pr_err("%s: Invalid sensor hw id :%d\n", __func__, sensor_id);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int create_sensor_id_map(void)
+{
+ int i = 0;
+ int ret = 0;
+
+ tsens_id_map = kzalloc(sizeof(int) * max_tsens_num,
+ GFP_KERNEL);
+ if (!tsens_id_map) {
+ pr_err("%s: Cannot allocate memory for tsens_id_map\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < max_tsens_num; i++) {
+ ret = tsens_get_hw_id_mapping(i, &tsens_id_map[i]);
+ /* If return -ENXIO, hw_id is default in sequence */
+ if (ret) {
+ if (ret == -ENXIO) {
+ tsens_id_map[i] = i;
+ ret = 0;
+ } else {
+ pr_err( \
+ "%s: Failed to get hw id for sw id %d\n",
+ __func__, i);
+ goto fail;
+ }
+ }
+ }
+
+ return ret;
+fail:
+ kfree(tsens_id_map);
+ return ret;
+}
+
+/* 1:enable, 0:disable */
+static int vdd_restriction_apply_all(int en)
+{
+ int i = 0;
+ int en_cnt = 0;
+ int dis_cnt = 0;
+ int fail_cnt = 0;
+ int ret = 0;
+
+ for (i = 0; i < rails_cnt; i++) {
+ if (rails[i].freq_req == 1 && freq_table_get)
+ ret = vdd_restriction_apply_freq(&rails[i],
+ en ? 0 : -1);
+ else
+ ret = vdd_restriction_apply_voltage(&rails[i],
+ en ? 0 : -1);
+ if (ret) {
+ pr_err("Cannot set voltage for %s", rails[i].name);
+ fail_cnt++;
+ } else {
+ if (en)
+ en_cnt++;
+ else
+ dis_cnt++;
+ }
+ }
+
+ /* As long as one rail is enabled, vdd rstr is enabled */
+ if (en && en_cnt)
+ vdd_rstr_en.enabled = 1;
+ else if (!en && (dis_cnt == rails_cnt))
+ vdd_rstr_en.enabled = 0;
+
+ /*
+ * Check fail_cnt again to make sure all of the rails are applied
+ * restriction successfully or not
+ */
+ if (fail_cnt)
+ return -EFAULT;
+ return ret;
+}
+
static int msm_thermal_get_freq_table(void)
{
int ret = 0;
@@ -542,7 +635,8 @@
cpus_offlined &= ~BIT(i);
pr_info("%s: Allow Online CPU%d Temp: %ld\n",
KBUILD_MODNAME, i, temp);
- /* If this core is already online, then bring up the
+ /*
+ * If this core is already online, then bring up the
* next offlined core.
*/
if (cpu_online(i))
@@ -575,7 +669,7 @@
mutex_lock(&vdd_rstr_mutex);
for (i = 0; i < max_tsens_num; i++) {
- tsens_dev.sensor_num = i;
+ tsens_dev.sensor_num = tsens_id_map[i];
ret = tsens_get_temp(&tsens_dev, &temp);
if (ret) {
pr_debug("%s: Unable to read TSENS sensor %d\n",
@@ -583,18 +677,15 @@
dis_cnt++;
continue;
}
- if (temp <= msm_thermal_info.vdd_rstr_temp_hyst_degC &&
- vdd_rstr_en.enabled == 0) {
+ if (temp <= msm_thermal_info.vdd_rstr_temp_degC) {
ret = vdd_restriction_apply_all(1);
if (ret) {
pr_err( \
"Enable vdd rstr votlage for all failed\n");
goto exit;
}
- vdd_rstr_en.enabled = 1;
goto exit;
- } else if (temp > msm_thermal_info.vdd_rstr_temp_degC &&
- vdd_rstr_en.enabled == 1)
+ } else if (temp > msm_thermal_info.vdd_rstr_temp_hyst_degC)
dis_cnt++;
}
if (dis_cnt == max_tsens_num) {
@@ -603,7 +694,6 @@
pr_err("Disable vdd rstr votlage for all failed\n");
goto exit;
}
- vdd_rstr_en.enabled = 0;
}
exit:
mutex_unlock(&vdd_rstr_mutex);
@@ -620,7 +710,7 @@
mutex_lock(&psm_mutex);
for (i = 0; i < max_tsens_num; i++) {
- tsens_dev.sensor_num = i;
+ tsens_dev.sensor_num = tsens_id_map[i];
ret = tsens_get_temp(&tsens_dev, &temp);
if (ret) {
pr_debug("%s: Unable to read TSENS sensor %d\n",
@@ -629,9 +719,11 @@
continue;
}
- /* As long as one sensor is above the threshold, set PWM mode
+ /*
+ * As long as one sensor is above the threshold, set PWM mode
* on all rails, and loop stops. Set auto mode when all rails
- * are below thershold */
+ * are below thershold
+ */
if (temp > msm_thermal_info.psm_temp_degC) {
ret = psm_set_mode_all(PMIC_PWM_MODE);
if (ret) {
@@ -746,7 +838,40 @@
.notifier_call = msm_thermal_cpu_callback,
};
-/**
+static void thermal_rtc_setup(void)
+{
+ ktime_t wakeup_time;
+ ktime_t curr_time;
+
+ curr_time = alarm_get_elapsed_realtime();
+ wakeup_time = ktime_add_us(curr_time,
+ (wakeup_ms * USEC_PER_MSEC));
+ alarm_start_range(&thermal_rtc, wakeup_time,
+ wakeup_time);
+ pr_debug("%s: Current Time: %ld %ld, Alarm set to: %ld %ld\n",
+ KBUILD_MODNAME,
+ ktime_to_timeval(curr_time).tv_sec,
+ ktime_to_timeval(curr_time).tv_usec,
+ ktime_to_timeval(wakeup_time).tv_sec,
+ ktime_to_timeval(wakeup_time).tv_usec);
+
+}
+
+static void timer_work_fn(struct work_struct *work)
+{
+ sysfs_notify(tt_kobj, NULL, "wakeup_ms");
+}
+
+static void thermal_rtc_callback(struct alarm *al)
+{
+ struct timeval ts;
+ ts = ktime_to_timeval(alarm_get_elapsed_realtime());
+ schedule_work(&timer_work);
+ pr_debug("%s: Time on alarm expiry: %ld %ld\n", KBUILD_MODNAME,
+ ts.tv_sec, ts.tv_usec);
+}
+
+/*
* We will reset the cpu frequencies limits here. The core online/offline
* status will be carried over to the process stopping the msm_thermal, as
* we dont want to online a core and bring in the thermal issues.
@@ -902,6 +1027,52 @@
.attrs = cc_attrs,
};
+static ssize_t show_wakeup_ms(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return snprintf(buf, PAGE_SIZE, "%d\n", wakeup_ms);
+}
+
+static ssize_t store_wakeup_ms(struct kobject *kobj,
+ struct kobj_attribute *attr, const char *buf, size_t count)
+{
+ int ret;
+ ret = kstrtouint(buf, 10, &wakeup_ms);
+
+ if (ret) {
+ pr_err("%s: Trying to set invalid wakeup timer\n",
+ KBUILD_MODNAME);
+ return ret;
+ }
+
+ if (wakeup_ms > 0) {
+ thermal_rtc_setup();
+ pr_debug("%s: Timer started for %ums\n", KBUILD_MODNAME,
+ wakeup_ms);
+ } else {
+ ret = alarm_cancel(&thermal_rtc);
+ if (ret)
+ pr_debug("%s: Timer canceled\n", KBUILD_MODNAME);
+ else
+ pr_debug("%s: No active timer present to cancel\n",
+ KBUILD_MODNAME);
+
+ }
+ return count;
+}
+
+static __cpuinitdata struct kobj_attribute timer_attr =
+__ATTR(wakeup_ms, 0644, show_wakeup_ms, store_wakeup_ms);
+
+static __cpuinitdata struct attribute *tt_attrs[] = {
+ &timer_attr.attr,
+ NULL,
+};
+
+static __cpuinitdata struct attribute_group tt_attr_group = {
+ .attrs = tt_attrs,
+};
+
static __init int msm_thermal_add_cc_nodes(void)
{
struct kobject *module_kobj = NULL;
@@ -938,15 +1109,54 @@
return ret;
}
+static __init int msm_thermal_add_timer_nodes(void)
+{
+ struct kobject *module_kobj = NULL;
+ int ret = 0;
+
+ module_kobj = kset_find_obj(module_kset, KBUILD_MODNAME);
+ if (!module_kobj) {
+ pr_err("%s: cannot find kobject for module\n",
+ KBUILD_MODNAME);
+ ret = -ENOENT;
+ goto failed;
+ }
+
+ tt_kobj = kobject_create_and_add("thermal_timer", module_kobj);
+ if (!tt_kobj) {
+ pr_err("%s: cannot create timer kobj\n",
+ KBUILD_MODNAME);
+ ret = -ENOMEM;
+ goto failed;
+ }
+
+ ret = sysfs_create_group(tt_kobj, &tt_attr_group);
+ if (ret) {
+ pr_err("%s: cannot create group\n", KBUILD_MODNAME);
+ goto failed;
+ }
+
+ return 0;
+
+failed:
+ if (tt_kobj)
+ kobject_del(tt_kobj);
+ return ret;
+}
+
int __devinit msm_thermal_init(struct msm_thermal_data *pdata)
{
int ret = 0;
BUG_ON(!pdata);
tsens_get_max_sensor_num(&max_tsens_num);
- BUG_ON(msm_thermal_info.sensor_id >= max_tsens_num);
memcpy(&msm_thermal_info, pdata, sizeof(struct msm_thermal_data));
+ if (create_sensor_id_map())
+ return -EINVAL;
+ if (check_sensor_id(msm_thermal_info.sensor_id))
+ return -EINVAL;
+
enabled = 1;
core_control_enabled = 1;
INIT_DELAYED_WORK(&check_temp_work, check_temp);
@@ -966,8 +1176,10 @@
if (rails[i].freq_req == 1) {
usefreq |= BIT(i);
check_freq_table();
- /* Restrict frequency by default until we have made
- * our first temp reading */
+ /*
+ * Restrict frequency by default until we have made
+ * our first temp reading
+ */
if (freq_table_get)
ret = vdd_restriction_apply_freq(&rails[i], 0);
else
@@ -988,8 +1200,10 @@
}
return ret;
}
- /* Restrict votlage by default until we have made
- * our first temp reading */
+ /*
+	 * Restrict voltage by default until we have made
+ * our first temp reading
+ */
ret = vdd_restriction_apply_voltage(&rails[i], 0);
}
}
@@ -1393,9 +1607,11 @@
key = "qcom,core-control-mask";
ret = of_property_read_u32(node, key, &data.core_control_mask);
- /* Probe optional properties below. Call probe_psm before
+ /*
+ * Probe optional properties below. Call probe_psm before
* probe_vdd_rstr because rpm_regulator_get has to be called
- * before devm_regulator_get*/
+ * before devm_regulator_get
+ */
ret = probe_psm(node, &data, pdev);
if (ret == -EPROBE_DEFER)
goto fail;
@@ -1403,8 +1619,10 @@
if (ret == -EPROBE_DEFER)
goto fail;
- /* In case sysfs add nodes get called before probe function.
- * Need to make sure sysfs node is created again */
+ /*
+ * In case sysfs add nodes get called before probe function.
+ * Need to make sure sysfs node is created again
+ */
if (psm_nodes_called) {
msm_thermal_add_psm_nodes();
psm_nodes_called = false;
@@ -1449,6 +1667,10 @@
msm_thermal_add_cc_nodes();
msm_thermal_add_psm_nodes();
msm_thermal_add_vdd_rstr_nodes();
+ alarm_init(&thermal_rtc, ANDROID_ALARM_ELAPSED_REALTIME_WAKEUP,
+ thermal_rtc_callback);
+ INIT_WORK(&timer_work, timer_work_fn);
+ msm_thermal_add_timer_nodes();
return 0;
}
diff --git a/drivers/thermal/pm8xxx-tm.c b/drivers/thermal/pm8xxx-tm.c
index 4568933..99a9454 100644
--- a/drivers/thermal/pm8xxx-tm.c
+++ b/drivers/thermal/pm8xxx-tm.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -682,6 +682,13 @@
return 0;
}
+static void pm8xxx_tm_shutdown(struct platform_device *pdev)
+{
+ struct pm8xxx_tm_chip *chip = platform_get_drvdata(pdev);
+
+ pm8xxx_tm_write_pwm(chip, TEMP_ALARM_PWM_EN_NEVER);
+}
+
#ifdef CONFIG_PM
static int pm8xxx_tm_suspend(struct device *dev)
{
@@ -719,6 +726,7 @@
static struct platform_driver pm8xxx_tm_driver = {
.probe = pm8xxx_tm_probe,
.remove = __devexit_p(pm8xxx_tm_remove),
+ .shutdown = pm8xxx_tm_shutdown,
.driver = {
.name = PM8XXX_TM_DEV_NAME,
.owner = THIS_MODULE,
diff --git a/drivers/thermal/qpnp-adc-tm.c b/drivers/thermal/qpnp-adc-tm.c
index e7d2e0f..26a12d0 100644
--- a/drivers/thermal/qpnp-adc-tm.c
+++ b/drivers/thermal/qpnp-adc-tm.c
@@ -36,6 +36,8 @@
/* QPNP VADC TM register definition */
#define QPNP_REVISION3 0x2
+#define QPNP_PERPH_SUBTYPE 0x5
+#define QPNP_PERPH_TYPE2 0x2
#define QPNP_REVISION_EIGHT_CHANNEL_SUPPORT 2
#define QPNP_STATUS1 0x8
#define QPNP_STATUS1_OP_MODE 4
@@ -366,7 +368,7 @@
static int32_t qpnp_adc_tm_check_revision(uint32_t btm_chan_num)
{
- u8 rev;
+ u8 rev, perph_subtype;
int rc = 0;
rc = qpnp_adc_tm_read_reg(QPNP_REVISION3, &rev);
@@ -375,10 +377,18 @@
return rc;
}
- if ((rev < QPNP_REVISION_EIGHT_CHANNEL_SUPPORT) &&
- (btm_chan_num > QPNP_ADC_TM_M4_ADC_CH_SEL_CTL)) {
- pr_debug("Version does not support more than 5 channels\n");
- return -EINVAL;
+ rc = qpnp_adc_tm_read_reg(QPNP_PERPH_SUBTYPE, &perph_subtype);
+ if (rc) {
+ pr_err("adc-tm perph_subtype read failed\n");
+ return rc;
+ }
+
+ if (perph_subtype == QPNP_PERPH_TYPE2) {
+ if ((rev < QPNP_REVISION_EIGHT_CHANNEL_SUPPORT) &&
+ (btm_chan_num > QPNP_ADC_TM_M4_ADC_CH_SEL_CTL)) {
+ pr_debug("Version does not support more than 5 channels\n");
+ return -EINVAL;
+ }
}
return rc;
@@ -1190,22 +1200,28 @@
}
}
- rc = qpnp_adc_tm_reg_update(QPNP_ADC_TM_MULTI_MEAS_EN,
- sensor_mask, false);
- if (rc < 0) {
- pr_err("multi meas disable for channel failed\n");
- goto fail;
- }
+ if (adc_tm_high_enable || adc_tm_low_enable) {
+ rc = qpnp_adc_tm_reg_update(QPNP_ADC_TM_MULTI_MEAS_EN,
+ sensor_mask, false);
+ if (rc < 0) {
+ pr_err("multi meas disable for channel failed\n");
+ goto fail;
+ }
- rc = qpnp_adc_tm_enable_if_channel_meas();
- if (rc < 0) {
- pr_err("re-enabling measurement failed\n");
- return rc;
- }
+ rc = qpnp_adc_tm_enable_if_channel_meas();
+ if (rc < 0) {
+ pr_err("re-enabling measurement failed\n");
+ return rc;
+ }
+ } else
+ pr_debug("No threshold status enable %d for high/low??\n",
+ sensor_mask);
+
fail:
mutex_unlock(&adc_tm->adc->adc_lock);
- schedule_work(&adc_tm->sensor[sensor_num].work);
+ if (adc_tm_high_enable || adc_tm_low_enable)
+ schedule_work(&adc_tm->sensor[sensor_num].work);
return rc;
}
@@ -1518,6 +1534,7 @@
dev_err(&spmi->dev, "failed to read device tree\n");
goto fail;
}
+ mutex_init(&adc_tm->adc->adc_lock);
/* Register the ADC peripheral interrupt */
adc_tm->adc->adc_high_thr_irq = spmi_get_irq_byname(spmi,
@@ -1584,6 +1601,8 @@
adc_tm->sensor[sen_idx].sensor_num = sen_idx;
pr_debug("btm_chan:%x, vadc_chan:%x\n", btm_channel_num,
adc_tm->adc->adc_channels[sen_idx].channel_num);
+ thermal_node = of_property_read_bool(child,
+ "qcom,thermal-node");
if (thermal_node) {
/* Register with the thermal zone */
pr_debug("thermal node%x\n", btm_channel_num);
diff --git a/drivers/tty/serial/msm_serial_hs.c b/drivers/tty/serial/msm_serial_hs.c
index ad7c702..f695870 100644
--- a/drivers/tty/serial/msm_serial_hs.c
+++ b/drivers/tty/serial/msm_serial_hs.c
@@ -2340,9 +2340,7 @@
msm_hs_start_rx_locked(uport);
spin_unlock_irqrestore(&uport->lock, flags);
- ret = pm_runtime_set_active(uport->dev);
- if (ret)
- dev_err(uport->dev, "set active error:%d\n", ret);
+
pm_runtime_enable(uport->dev);
return 0;
@@ -3180,7 +3178,6 @@
cancel_delayed_work_sync(&msm_uport->rx.flip_insert_work);
flush_workqueue(msm_uport->hsuart_wq);
pm_runtime_disable(uport->dev);
- pm_runtime_set_suspended(uport->dev);
/* Disable the transmitter */
msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_TX_DISABLE_BMSK);
diff --git a/drivers/tty/serial/msm_serial_hs_lite.c b/drivers/tty/serial/msm_serial_hs_lite.c
index 7aa14de..d520253 100644
--- a/drivers/tty/serial/msm_serial_hs_lite.c
+++ b/drivers/tty/serial/msm_serial_hs_lite.c
@@ -364,6 +364,22 @@
return msm_hsl_port->uart.line;
}
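+/*
+ * Vote for the requested bus bandwidth vector on behalf of this UART;
+ * a zero client handle means bus scaling is not in use and the vote is
+ * skipped.
+ */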
+static int bus_vote(uint32_t client, int vector)
+{
+ int ret = 0;
+
+ if (!client)
+ return ret;
+
+ pr_debug("Voting for bus scaling:%d\n", vector);
+
+ ret = msm_bus_scale_client_update_request(client, vector);
+ if (ret)
+ pr_err("Failed to request bus bw vector %d\n", vector);
+
+ return ret;
+}
+
static int clk_en(struct uart_port *port, int enable)
{
struct msm_hsl_port *msm_hsl_port = UART_TO_MSM(port);
@@ -372,9 +388,13 @@
if (enable) {
msm_hsl_port->clk_enable_count++;
- ret = clk_prepare_enable(msm_hsl_port->clk);
+ ret = bus_vote(msm_hsl_port->bus_perf_client,
+ !!msm_hsl_port->clk_enable_count);
if (ret)
goto err;
+ ret = clk_prepare_enable(msm_hsl_port->clk);
+ if (ret)
+ goto err_bus;
if (msm_hsl_port->pclk) {
ret = clk_prepare_enable(msm_hsl_port->pclk);
if (ret)
@@ -386,23 +406,17 @@
clk_disable_unprepare(msm_hsl_port->clk);
if (msm_hsl_port->pclk)
clk_disable_unprepare(msm_hsl_port->pclk);
- }
-
- if (msm_hsl_port->bus_perf_client) {
- pr_debug("Voting for bus scaling:%d\n",
- !!msm_hsl_port->clk_enable_count);
- ret = msm_bus_scale_client_update_request(
- msm_hsl_port->bus_perf_client,
+ ret = bus_vote(msm_hsl_port->bus_perf_client,
!!msm_hsl_port->clk_enable_count);
- if (ret)
- pr_err("Failed to request bus bw vector %d\n",
- !!msm_hsl_port->clk_enable_count);
}
return ret;
err_clk_disable:
clk_disable_unprepare(msm_hsl_port->clk);
+err_bus:
+ bus_vote(msm_hsl_port->bus_perf_client,
+ !!(msm_hsl_port->clk_enable_count - 1));
err:
msm_hsl_port->clk_enable_count--;
return ret;
diff --git a/drivers/tty/smux_ctl.c b/drivers/tty/smux_ctl.c
index 2e091cc..1b3a7abe 100644
--- a/drivers/tty/smux_ctl.c
+++ b/drivers/tty/smux_ctl.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -940,6 +940,7 @@
static int smux_ctl_remove(struct platform_device *pdev)
{
int i;
+ int ret;
SMUXCTL_DBG(SMUX_CTL_MODULE_NAME ": %s Begins\n", __func__);
@@ -950,6 +951,13 @@
devp->abort_wait = 1;
wake_up(&devp->write_wait_queue);
wake_up(&devp->read_wait_queue);
+
+ if (atomic_read(&devp->ref_count)) {
+ ret = msm_smux_close(devp->id);
+ if (ret)
+ pr_err("%s: unable to close ch %d, ret %d\n",
+ __func__, devp->id, ret);
+ }
mutex_unlock(&devp->dev_lock);
/* Empty RX queue */
diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
index ede2ef1..1d02e32 100644
--- a/drivers/tty/vt/vt_ioctl.c
+++ b/drivers/tty/vt/vt_ioctl.c
@@ -110,6 +110,34 @@
wake_up_interruptible(&vt_event_waitqueue);
}
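+/*
+ * vt_event_wait() is split into queue/wait/dequeue helpers so that
+ * vt_waitactive() can queue its event before checking fg_console.
+ */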
+static void __vt_event_queue(struct vt_event_wait *vw)
+{
+ unsigned long flags;
+ /* Prepare the event */
+ INIT_LIST_HEAD(&vw->list);
+ vw->done = 0;
+ /* Queue our event */
+ spin_lock_irqsave(&vt_event_lock, flags);
+ list_add(&vw->list, &vt_events);
+ spin_unlock_irqrestore(&vt_event_lock, flags);
+}
+
+static void __vt_event_wait(struct vt_event_wait *vw)
+{
+ /* Wait for it to pass */
+ wait_event_interruptible(vt_event_waitqueue, vw->done);
+}
+
+static void __vt_event_dequeue(struct vt_event_wait *vw)
+{
+ unsigned long flags;
+
+ /* Dequeue it */
+ spin_lock_irqsave(&vt_event_lock, flags);
+ list_del(&vw->list);
+ spin_unlock_irqrestore(&vt_event_lock, flags);
+}
+
/**
* vt_event_wait - wait for an event
* @vw: our event
@@ -121,20 +149,9 @@
static void vt_event_wait(struct vt_event_wait *vw)
{
- unsigned long flags;
- /* Prepare the event */
- INIT_LIST_HEAD(&vw->list);
- vw->done = 0;
- /* Queue our event */
- spin_lock_irqsave(&vt_event_lock, flags);
- list_add(&vw->list, &vt_events);
- spin_unlock_irqrestore(&vt_event_lock, flags);
- /* Wait for it to pass */
- wait_event_interruptible(vt_event_waitqueue, vw->done);
- /* Dequeue it */
- spin_lock_irqsave(&vt_event_lock, flags);
- list_del(&vw->list);
- spin_unlock_irqrestore(&vt_event_lock, flags);
+ __vt_event_queue(vw);
+ __vt_event_wait(vw);
+ __vt_event_dequeue(vw);
}
/**
@@ -177,10 +194,14 @@
{
struct vt_event_wait vw;
do {
- if (n == fg_console + 1)
- break;
vw.event.event = VT_EVENT_SWITCH;
- vt_event_wait(&vw);
+ __vt_event_queue(&vw);
+ if (n == fg_console + 1) {
+ __vt_event_dequeue(&vw);
+ break;
+ }
+ __vt_event_wait(&vw);
+ __vt_event_dequeue(&vw);
if (vw.done == 0)
return -EINTR;
} while (vw.event.newev != n);
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index 6619e96..fab5219 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -475,6 +475,7 @@
void *mem;
u8 mode;
+ bool host_only_mode;
mem = devm_kzalloc(dev, sizeof(*dwc) + DWC3_ALIGN_MASK, GFP_KERNEL);
if (!mem) {
@@ -487,7 +488,7 @@
if (!dev->dma_mask)
dev->dma_mask = &dwc3_dma_mask;
if (!dev->coherent_dma_mask)
- dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->coherent_dma_mask = DMA_BIT_MASK(64);
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!res) {
@@ -548,6 +549,7 @@
dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
dwc->needs_fifo_resize = of_property_read_bool(node, "tx-fifo-resize");
+ host_only_mode = of_property_read_bool(node, "host-only-mode");
pm_runtime_no_callbacks(dev);
pm_runtime_set_active(dev);
@@ -561,6 +563,12 @@
mode = DWC3_MODE(dwc->hwparams.hwparams0);
+ /* Override mode if user selects host-only config with DRD core */
+ if (host_only_mode && (mode == DWC3_MODE_DRD)) {
+ dev_dbg(dev, "host only mode selected\n");
+ mode = DWC3_MODE_HOST;
+ }
+
switch (mode) {
case DWC3_MODE_DEVICE:
dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_DEVICE);
diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
index 93504eb..df95646 100644
--- a/drivers/usb/dwc3/debugfs.c
+++ b/drivers/usb/dwc3/debugfs.c
@@ -693,9 +693,10 @@
list_for_each(ptr, &dep->request_list) {
req = list_entry(ptr, struct dwc3_request, list);
- seq_printf(s, "req:0x%p len: %d sts: %d dma:0x%x num_sgs: %d\n",
+ seq_printf(s,
+ "req:0x%p len: %d sts: %d dma:0x%pa num_sgs: %d\n",
req, req->request.length, req->request.status,
- req->request.dma, req->request.num_sgs);
+ &req->request.dma, req->request.num_sgs);
}
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -731,9 +732,10 @@
list_for_each(ptr, &dep->req_queued) {
req = list_entry(ptr, struct dwc3_request, list);
- seq_printf(s, "req:0x%p len:%d sts:%d dma:%x nsg:%d trb:0x%p\n",
+ seq_printf(s,
+ "req:0x%p len:%d sts:%d dma:%pa nsg:%d trb:0x%p\n",
req, req->request.length, req->request.status,
- req->request.dma, req->request.num_sgs, req->trb);
+ &req->request.dma, req->request.num_sgs, req->trb);
}
spin_unlock_irqrestore(&dwc->lock, flags);
diff --git a/drivers/usb/dwc3/dwc3-msm.c b/drivers/usb/dwc3/dwc3-msm.c
index 7a6765b..a27eb09 100644
--- a/drivers/usb/dwc3/dwc3-msm.c
+++ b/drivers/usb/dwc3/dwc3-msm.c
@@ -127,6 +127,8 @@
#define DBM_TRB_DMA 0x20000000
#define DBM_TRB_EP_NUM(ep) (ep<<24)
+#define USB3_PORTSC (0x430)
+#define PORT_PE (0x1 << 1)
/**
* USB QSCRATCH Hardware registers
*
@@ -177,6 +179,9 @@
struct regulator *hsusb_vddcx;
struct regulator *ssusb_1p8;
struct regulator *ssusb_vddcx;
+
+ /* VBUS regulator if no OTG and running in host only mode */
+ struct regulator *vbus_otg;
struct dwc3_ext_xceiv ext_xceiv;
bool resume_pending;
atomic_t pm_suspended;
@@ -210,6 +215,9 @@
bool vbus_active;
bool ext_inuse;
enum dwc3_id_state id_state;
+ unsigned long lpm_flags;
+#define MDWC3_CORECLK_OFF BIT(0)
+#define MDWC3_TCXO_SHUTDOWN BIT(1)
};
#define USB_HSPHY_3P3_VOL_MIN 3050000 /* uV */
@@ -1301,8 +1309,14 @@
data |= (0x7F | (1 << 14));
dwc3_msm_ssusb_write_phycreg(msm->base, 0x1002, data);
- /* Set LOS_BIAS to 0x5 */
- dwc3_msm_write_readback(msm->base, SS_PHY_PARAM_CTRL_1, 0x07, 0x5);
+ /*
+ * Set the QSCRATCH SS_PHY_PARAM_CTRL1 parameters as follows
+ * TX_FULL_SWING [26:20] amplitude to 127
+ * TX_DEEMPH_3_5DB [13:8] to 22
+ * LOS_BIAS [2:0] to 0x5
+ */
+ dwc3_msm_write_readback(msm->base, SS_PHY_PARAM_CTRL_1,
+ 0x07f03f07, 0x07f01605);
}
/* Initialize QSCRATCH registers for HSPHY and SSPHY operation */
@@ -1585,6 +1599,7 @@
int ret;
bool dcp;
bool host_bus_suspend;
+ bool host_ss_active;
dev_dbg(mdwc->dev, "%s: entering lpm\n", __func__);
@@ -1593,6 +1608,7 @@
return 0;
}
+ host_ss_active = dwc3_msm_read_reg(mdwc->base, USB3_PORTSC) & PORT_PE;
if (mdwc->hs_phy_irq)
disable_irq(mdwc->hs_phy_irq);
@@ -1606,7 +1622,8 @@
0x37, 0x0);
}
- dcp = mdwc->charger.chg_type == DWC3_DCP_CHARGER;
+ dcp = ((mdwc->charger.chg_type == DWC3_DCP_CHARGER) ||
+ (mdwc->charger.chg_type == DWC3_PROPRIETARY_CHARGER));
host_bus_suspend = mdwc->host_mode == 1;
/* Sequence to put SSPHY in low power state:
@@ -1661,14 +1678,19 @@
/* make sure above writes are completed before turning off clocks */
wmb();
- clk_disable_unprepare(mdwc->core_clk);
+ if (!host_bus_suspend || !host_ss_active) {
+ clk_disable_unprepare(mdwc->core_clk);
+ mdwc->lpm_flags |= MDWC3_CORECLK_OFF;
+ }
clk_disable_unprepare(mdwc->iface_clk);
- if (!host_bus_suspend) {
+ if (!host_bus_suspend)
clk_disable_unprepare(mdwc->utmi_clk);
+ if (!host_bus_suspend) {
/* USB PHY no more requires TCXO */
clk_disable_unprepare(mdwc->xo_clk);
+ mdwc->lpm_flags |= MDWC3_TCXO_SHUTDOWN;
}
if (mdwc->bus_perf_client) {
@@ -1684,15 +1706,19 @@
dwc3_ssusb_ldo_enable(0);
dwc3_ssusb_config_vddcx(0);
- if (!host_bus_suspend)
+ if (!host_bus_suspend && !dcp)
dwc3_hsusb_config_vddcx(0);
wake_unlock(&mdwc->wlock);
atomic_set(&mdwc->in_lpm, 1);
dev_info(mdwc->dev, "DWC3 in low power mode\n");
- if (mdwc->hs_phy_irq)
+ if (mdwc->hs_phy_irq) {
enable_irq(mdwc->hs_phy_irq);
+		/* with DCP we don't require wakeup using HS_PHY_IRQ */
+ if (dcp)
+ disable_irq_wake(mdwc->hs_phy_irq);
+ }
return 0;
}
@@ -1719,17 +1745,22 @@
dev_err(mdwc->dev, "Failed to vote for bus scaling\n");
}
- dcp = mdwc->charger.chg_type == DWC3_DCP_CHARGER;
+ dcp = ((mdwc->charger.chg_type == DWC3_DCP_CHARGER) ||
+ (mdwc->charger.chg_type == DWC3_PROPRIETARY_CHARGER));
host_bus_suspend = mdwc->host_mode == 1;
- if (!host_bus_suspend) {
+ if (mdwc->lpm_flags & MDWC3_TCXO_SHUTDOWN) {
/* Vote for TCXO while waking up USB HSPHY */
ret = clk_prepare_enable(mdwc->xo_clk);
if (ret)
dev_err(mdwc->dev, "%s failed to vote TCXO buffer%d\n",
__func__, ret);
+ mdwc->lpm_flags &= ~MDWC3_TCXO_SHUTDOWN;
}
+ if (!host_bus_suspend)
+ clk_prepare_enable(mdwc->utmi_clk);
+
if (mdwc->otg_xceiv && mdwc->ext_xceiv.otg_capability && !dcp &&
!host_bus_suspend)
dwc3_hsusb_ldo_enable(1);
@@ -1737,16 +1768,17 @@
dwc3_ssusb_ldo_enable(1);
dwc3_ssusb_config_vddcx(1);
- if (!host_bus_suspend) {
+ if (!host_bus_suspend && !dcp)
dwc3_hsusb_config_vddcx(1);
- clk_prepare_enable(mdwc->utmi_clk);
- }
clk_prepare_enable(mdwc->ref_clk);
usleep_range(1000, 1200);
clk_prepare_enable(mdwc->iface_clk);
- clk_prepare_enable(mdwc->core_clk);
+ if (mdwc->lpm_flags & MDWC3_CORECLK_OFF) {
+ clk_prepare_enable(mdwc->core_clk);
+ mdwc->lpm_flags &= ~MDWC3_CORECLK_OFF;
+ }
if (host_bus_suspend) {
/* Disable HV interrupt */
@@ -1809,6 +1841,9 @@
enable_irq(mdwc->hs_phy_irq);
mdwc->lpm_irq_seen = false;
}
+	/* it must be a DCP disconnect; re-enable HS_PHY wakeup IRQ */
+ if (mdwc->hs_phy_irq && dcp)
+ enable_irq_wake(mdwc->hs_phy_irq);
dev_info(mdwc->dev, "DWC3 exited from low power mode\n");
@@ -1838,7 +1873,7 @@
if (mdwc->otg_xceiv)
mdwc->ext_xceiv.notify_ext_events(mdwc->otg_xceiv->otg,
DWC3_EVENT_PHY_RESUME);
- pm_runtime_put_sync(mdwc->dev);
+ pm_runtime_put_noidle(mdwc->dev);
if (mdwc->otg_xceiv && (mdwc->ext_xceiv.otg_capability))
mdwc->ext_xceiv.notify_ext_events(mdwc->otg_xceiv->otg,
DWC3_EVENT_XCEIV_STATE);
@@ -2404,6 +2439,8 @@
msm->charger.charging_disabled = of_property_read_bool(node,
"qcom,charging-disabled");
+ msm->charger.skip_chg_detect = of_property_read_bool(node,
+ "qcom,skip-charger-detection");
/*
* DWC3 has separate IRQ line for OTG events (ID/BSV) and for
* DP and DM linestate transitions during low power mode.
@@ -2529,24 +2566,28 @@
goto disable_hs_ldo;
}
- msm->usb_psy.name = "usb";
- msm->usb_psy.type = POWER_SUPPLY_TYPE_USB;
- msm->usb_psy.supplied_to = dwc3_msm_pm_power_supplied_to;
- msm->usb_psy.num_supplicants = ARRAY_SIZE(
- dwc3_msm_pm_power_supplied_to);
- msm->usb_psy.properties = dwc3_msm_pm_power_props_usb;
- msm->usb_psy.num_properties = ARRAY_SIZE(dwc3_msm_pm_power_props_usb);
- msm->usb_psy.get_property = dwc3_msm_power_get_property_usb;
- msm->usb_psy.set_property = dwc3_msm_power_set_property_usb;
- msm->usb_psy.external_power_changed =
- dwc3_msm_external_power_changed;
+ /* usb_psy required only for vbus_notifications or charging support */
+ if (msm->ext_xceiv.otg_capability || !msm->charger.charging_disabled) {
+ msm->usb_psy.name = "usb";
+ msm->usb_psy.type = POWER_SUPPLY_TYPE_USB;
+ msm->usb_psy.supplied_to = dwc3_msm_pm_power_supplied_to;
+ msm->usb_psy.num_supplicants = ARRAY_SIZE(
+ dwc3_msm_pm_power_supplied_to);
+ msm->usb_psy.properties = dwc3_msm_pm_power_props_usb;
+ msm->usb_psy.num_properties =
+ ARRAY_SIZE(dwc3_msm_pm_power_props_usb);
+ msm->usb_psy.get_property = dwc3_msm_power_get_property_usb;
+ msm->usb_psy.set_property = dwc3_msm_power_set_property_usb;
+ msm->usb_psy.external_power_changed =
+ dwc3_msm_external_power_changed;
- ret = power_supply_register(&pdev->dev, &msm->usb_psy);
- if (ret < 0) {
- dev_err(&pdev->dev,
- "%s:power_supply_register usb failed\n",
- __func__);
- goto disable_hs_ldo;
+ ret = power_supply_register(&pdev->dev, &msm->usb_psy);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "%s:power_supply_register usb failed\n",
+ __func__);
+ goto disable_hs_ldo;
+ }
}
if (node) {
@@ -2571,13 +2612,19 @@
}
msm->otg_xceiv = usb_get_transceiver();
- if (msm->otg_xceiv) {
- msm->charger.start_detection = dwc3_start_chg_det;
- ret = dwc3_set_charger(msm->otg_xceiv->otg, &msm->charger);
- if (ret || !msm->charger.notify_detection_complete) {
- dev_err(&pdev->dev, "failed to register charger: %d\n",
- ret);
- goto put_xcvr;
+ /* Register with OTG if present, ignore USB2 OTG using other PHY */
+ if (msm->otg_xceiv && !(msm->otg_xceiv->flags & ENABLE_SECONDARY_PHY)) {
+ /* Skip charger detection for simulator targets */
+ if (!msm->charger.skip_chg_detect) {
+ msm->charger.start_detection = dwc3_start_chg_det;
+ ret = dwc3_set_charger(msm->otg_xceiv->otg,
+ &msm->charger);
+ if (ret || !msm->charger.notify_detection_complete) {
+ dev_err(&pdev->dev,
+ "failed to register charger: %d\n",
+ ret);
+ goto put_xcvr;
+ }
}
if (msm->ext_xceiv.otg_capability)
@@ -2589,7 +2636,20 @@
goto put_xcvr;
}
} else {
- dev_err(&pdev->dev, "%s: No OTG transceiver found\n", __func__);
+ dev_dbg(&pdev->dev, "No OTG, DWC3 running in host only mode\n");
+ msm->host_mode = 1;
+ msm->vbus_otg = devm_regulator_get(&pdev->dev, "vbus_dwc3");
+ if (IS_ERR(msm->vbus_otg)) {
+ dev_dbg(&pdev->dev, "Failed to get vbus regulator\n");
+ msm->vbus_otg = 0;
+ } else {
+ ret = regulator_enable(msm->vbus_otg);
+ if (ret) {
+ msm->vbus_otg = 0;
+ dev_err(&pdev->dev, "Failed to enable vbus_otg\n");
+ }
+ }
+ msm->otg_xceiv = NULL;
}
wake_lock_init(&msm->wlock, WAKE_LOCK_SUSPEND, "msm_dwc3");
@@ -2601,7 +2661,8 @@
put_xcvr:
usb_put_transceiver(msm->otg_xceiv);
put_psupply:
- power_supply_unregister(&msm->usb_psy);
+ if (msm->usb_psy.dev)
+ power_supply_unregister(&msm->usb_psy);
disable_hs_ldo:
dwc3_hsusb_ldo_enable(0);
free_hs_ldo_init:
@@ -2650,6 +2711,10 @@
dwc3_start_chg_det(&msm->charger, false);
usb_put_transceiver(msm->otg_xceiv);
}
+ if (msm->usb_psy.dev)
+ power_supply_unregister(&msm->usb_psy);
+ if (msm->vbus_otg)
+ regulator_disable(msm->vbus_otg);
pm_runtime_disable(msm->dev);
wake_lock_destroy(&msm->wlock);
diff --git a/drivers/usb/dwc3/dwc3_otg.c b/drivers/usb/dwc3/dwc3_otg.c
index 1d67cee..a3b2617 100644
--- a/drivers/usb/dwc3/dwc3_otg.c
+++ b/drivers/usb/dwc3/dwc3_otg.c
@@ -198,8 +198,6 @@
} else {
dev_dbg(otg->phy->dev, "%s: turn off host\n", __func__);
- platform_device_del(dwc->xhci);
-
ret = regulator_disable(dotg->vbus_otg);
if (ret) {
dev_err(otg->phy->dev, "unable to disable vbus_otg\n");
@@ -207,6 +205,7 @@
}
dwc3_otg_notify_host_mode(otg, on);
+ platform_device_del(dwc->xhci);
/*
* Perform USB hardware RESET (both core reset and DBM reset)
* when moving from host to peripheral. This is required for
diff --git a/drivers/usb/dwc3/dwc3_otg.h b/drivers/usb/dwc3/dwc3_otg.h
index c2fab53..90e26cf 100644
--- a/drivers/usb/dwc3/dwc3_otg.h
+++ b/drivers/usb/dwc3/dwc3_otg.h
@@ -77,6 +77,8 @@
unsigned max_power;
bool charging_disabled;
+ bool skip_chg_detect;
+
/* start/stop charger detection, provided by external charger module */
void (*start_detection)(struct dwc3_charger *charger, bool start);
diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
index 66854b2..bb0b2fb 100644
--- a/drivers/usb/dwc3/ep0.c
+++ b/drivers/usb/dwc3/ep0.c
@@ -393,7 +393,6 @@
u32 recip;
u32 wValue;
u32 wIndex;
- u32 reg;
int ret;
wValue = le16_to_cpu(ctrl->wValue);
@@ -414,13 +413,6 @@
return -EINVAL;
if (dwc->speed != DWC3_DSTS_SUPERSPEED)
return -EINVAL;
-
- reg = dwc3_readl(dwc->regs, DWC3_DCTL);
- if (set)
- reg |= DWC3_DCTL_INITU1ENA;
- else
- reg &= ~DWC3_DCTL_INITU1ENA;
- dwc3_writel(dwc->regs, DWC3_DCTL, reg);
break;
case USB_DEVICE_U2_ENABLE:
@@ -428,13 +420,6 @@
return -EINVAL;
if (dwc->speed != DWC3_DSTS_SUPERSPEED)
return -EINVAL;
-
- reg = dwc3_readl(dwc->regs, DWC3_DCTL);
- if (set)
- reg |= DWC3_DCTL_INITU2ENA;
- else
- reg &= ~DWC3_DCTL_INITU2ENA;
- dwc3_writel(dwc->regs, DWC3_DCTL, reg);
break;
case USB_DEVICE_LTM_ENABLE:
@@ -539,7 +524,6 @@
{
u32 cfg;
int ret;
- u32 reg;
dwc->start_config_issued = false;
cfg = le16_to_cpu(ctrl->wValue);
@@ -554,14 +538,6 @@
/* if the cfg matches and the cfg is non zero */
if (cfg && (!ret || (ret == USB_GADGET_DELAYED_STATUS))) {
dwc->dev_state = DWC3_CONFIGURED_STATE;
- /*
- * Enable transition to U1/U2 state when
- * nothing is pending from application.
- */
- reg = dwc3_readl(dwc->regs, DWC3_DCTL);
- reg |= (DWC3_DCTL_ACCEPTU1ENA | DWC3_DCTL_ACCEPTU2ENA);
- dwc3_writel(dwc->regs, DWC3_DCTL, reg);
-
dwc->resize_fifos = true;
dev_dbg(dwc->dev, "resize fifos flag SET\n");
}
@@ -775,6 +751,9 @@
dwc->ep0_next_event = DWC3_EP0_NRDY_STATUS;
r = next_request(&ep0->request_list);
+ if (r == NULL)
+ return;
+
ur = &r->request;
if ((epnum & 1) && ur->zero &&
(ur->length % ep0->endpoint.maxpacket == 0)) {
diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
index d6d8a76..420d030 100644
--- a/drivers/usb/dwc3/host.c
+++ b/drivers/usb/dwc3/host.c
@@ -79,6 +79,7 @@
/* Add XHCI device if !OTG, otherwise OTG takes care of this */
if (!dwc->dotg) {
+ xhci->dev.parent = dwc->dev;
ret = platform_device_add(xhci);
if (ret) {
dev_err(dwc->dev, "failed to register xHCI device\n");
diff --git a/drivers/usb/gadget/android.c b/drivers/usb/gadget/android.c
index d4bdf99..147e3db 100644
--- a/drivers/usb/gadget/android.c
+++ b/drivers/usb/gadget/android.c
@@ -53,6 +53,7 @@
#include "f_rmnet_sdio.c"
#include "f_rmnet_smd_sdio.c"
#include "f_rmnet.c"
+#include "f_gps.c"
#ifdef CONFIG_SND_PCM
#include "f_audio_source.c"
#endif
@@ -731,6 +732,47 @@
.attributes = rmnet_function_attributes,
};
+static void gps_function_cleanup(struct android_usb_function *f)
+{
+ gps_cleanup();
+}
+
+static int gps_function_bind_config(struct android_usb_function *f,
+ struct usb_configuration *c)
+{
+ int err;
+ static int gps_initialized;
+
+ if (!gps_initialized) {
+ gps_initialized = 1;
+ err = gps_init_port();
+ if (err) {
+ pr_err("gps: Cannot init gps port");
+ return err;
+ }
+ }
+
+ err = gps_gport_setup();
+ if (err) {
+ pr_err("gps: Cannot setup transports");
+ return err;
+ }
+ err = gps_bind_config(c);
+ if (err) {
+ pr_err("Could not bind gps config\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static struct android_usb_function gps_function = {
+ .name = "gps",
+ .cleanup = gps_function_cleanup,
+ .bind_config = gps_function_bind_config,
+};
+
+
/* ecm transport string */
static char ecm_transports[MAX_XPORT_STR_LEN];
@@ -1783,6 +1825,7 @@
&rmnet_sdio_function,
&rmnet_smd_sdio_function,
&rmnet_function,
+ &gps_function,
&diag_function,
&qdss_function,
&serial_function,
@@ -2646,6 +2689,8 @@
of_property_read_u32(pdev->dev.of_node,
"qcom,android-usb-swfi-latency",
&pdata->swfi_latency);
+ pdata->cdrom = of_property_read_bool(pdev->dev.of_node,
+ "qcom,android-usb-cdrom");
} else {
pdata = pdev->dev.platform_data;
}
diff --git a/drivers/usb/gadget/ci13xxx_udc.c b/drivers/usb/gadget/ci13xxx_udc.c
index 6fca910..01d8be1 100644
--- a/drivers/usb/gadget/ci13xxx_udc.c
+++ b/drivers/usb/gadget/ci13xxx_udc.c
@@ -445,7 +445,8 @@
int n = hw_ep_bit(num, dir);
struct ci13xxx_ep *mEp = &_udc->ci13xxx_ep[n];
- if (_udc->skip_flush || list_empty(&mEp->qh.queue))
+ /* Flush ep0 even when queue is empty */
+ if (_udc->skip_flush || (num && list_empty(&mEp->qh.queue)))
return 0;
start = ktime_get();
@@ -2107,7 +2108,18 @@
if (mReq->zptr) {
if ((TD_STATUS_ACTIVE & mReq->zptr->token) != 0)
return -EBUSY;
- dma_pool_free(mEp->td_pool, mReq->zptr, mReq->zdma);
+
+ /* The controller may access this dTD one more time.
+ * Defer freeing this to next zero length dTD completion.
+ * It is safe to assume that controller will no longer
+ * access the previous dTD after next dTD completion.
+ */
+ if (mEp->last_zptr)
+ dma_pool_free(mEp->td_pool, mEp->last_zptr,
+ mEp->last_zdma);
+ mEp->last_zptr = mReq->zptr;
+ mEp->last_zdma = mReq->zdma;
+
mReq->zptr = NULL;
}
@@ -2256,12 +2268,13 @@
gadget->otg_srp_reqd = 0;
udc->driver->disconnect(gadget);
- usb_ep_fifo_flush(&udc->ep0out.ep);
- usb_ep_fifo_flush(&udc->ep0in.ep);
+ _ep_nuke(&udc->ep0out);
+ _ep_nuke(&udc->ep0in);
- if (udc->status != NULL) {
- usb_ep_free_request(&udc->ep0in.ep, udc->status);
- udc->status = NULL;
+ if (udc->ep0in.last_zptr) {
+ dma_pool_free(udc->ep0in.td_pool, udc->ep0in.last_zptr,
+ udc->ep0in.last_zdma);
+ udc->ep0in.last_zptr = NULL;
}
return 0;
@@ -2316,10 +2329,6 @@
if (retval)
goto done;
- udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_ATOMIC);
- if (udc->status == NULL)
- retval = -ENOMEM;
-
spin_lock(udc->lock);
done:
@@ -2388,8 +2397,8 @@
return;
}
- kfree(req->buf);
- usb_ep_free_request(ep, req);
+ if (req->status)
+ err("GET_STATUS failed");
}
/**
@@ -2405,8 +2414,7 @@
__acquires(mEp->lock)
{
struct ci13xxx_ep *mEp = &udc->ep0in;
- struct usb_request *req = NULL;
- gfp_t gfp_flags = GFP_ATOMIC;
+ struct usb_request *req = udc->status;
int dir, num, retval;
trace("%p, %p", mEp, setup);
@@ -2414,19 +2422,9 @@
if (mEp == NULL || setup == NULL)
return -EINVAL;
- spin_unlock(mEp->lock);
- req = usb_ep_alloc_request(&mEp->ep, gfp_flags);
- spin_lock(mEp->lock);
- if (req == NULL)
- return -ENOMEM;
-
req->complete = isr_get_status_complete;
req->length = 2;
- req->buf = kzalloc(req->length, gfp_flags);
- if (req->buf == NULL) {
- retval = -ENOMEM;
- goto err_free_req;
- }
+ req->buf = udc->status_buf;
if ((setup->bRequestType & USB_RECIP_MASK) == USB_RECIP_DEVICE) {
if (setup->wIndex == OTG_STATUS_SELECTOR) {
@@ -2449,18 +2447,7 @@
/* else do nothing; reserved for future use */
spin_unlock(mEp->lock);
- retval = usb_ep_queue(&mEp->ep, req, gfp_flags);
- spin_lock(mEp->lock);
- if (retval)
- goto err_free_buf;
-
- return 0;
-
- err_free_buf:
- kfree(req->buf);
- err_free_req:
- spin_unlock(mEp->lock);
- usb_ep_free_request(&mEp->ep, req);
+ retval = usb_ep_queue(&mEp->ep, req, GFP_ATOMIC);
spin_lock(mEp->lock);
return retval;
}
@@ -2503,11 +2490,9 @@
trace("%p", udc);
mEp = (udc->ep0_dir == TX) ? &udc->ep0out : &udc->ep0in;
- if (udc->status) {
- udc->status->context = udc;
- udc->status->complete = isr_setup_status_complete;
- } else
- return -EINVAL;
+ udc->status->context = udc;
+ udc->status->complete = isr_setup_status_complete;
+ udc->status->length = 0;
spin_unlock(mEp->lock);
retval = usb_ep_queue(&mEp->ep, udc->status, GFP_ATOMIC);
@@ -2941,6 +2926,12 @@
} while (mEp->dir != direction);
+ if (mEp->last_zptr) {
+ dma_pool_free(mEp->td_pool, mEp->last_zptr,
+ mEp->last_zdma);
+ mEp->last_zptr = NULL;
+ }
+
mEp->desc = NULL;
mEp->ep.desc = NULL;
mEp->ep.maxpacket = USHRT_MAX;
@@ -3032,21 +3023,24 @@
trace("%p, %p, %X", ep, req, gfp_flags);
- if (ep == NULL || req == NULL || mEp->desc == NULL)
- return -EINVAL;
-
- if (!udc->softconnect)
- return -ENODEV;
-
spin_lock_irqsave(mEp->lock, flags);
+ if (ep == NULL || req == NULL || mEp->desc == NULL) {
+ retval = -EINVAL;
+ goto done;
+ }
+
+ if (!udc->softconnect) {
+ retval = -ENODEV;
+ goto done;
+ }
if (!udc->configured && mEp->type !=
USB_ENDPOINT_XFER_CONTROL) {
- spin_unlock_irqrestore(mEp->lock, flags);
trace("usb is not configured"
"ept #%d, ept name#%s\n",
mEp->num, mEp->ep.name);
- return -ESHUTDOWN;
+ retval = -ESHUTDOWN;
+ goto done;
}
if (mEp->type == USB_ENDPOINT_XFER_CONTROL) {
@@ -3126,12 +3120,19 @@
trace("%p, %p", ep, req);
+ spin_lock_irqsave(mEp->lock, flags);
+ /*
+ * Only ep0 IN is exposed to composite. When a req is dequeued
+ * on ep0, check both ep0 IN and ep0 OUT queues.
+ */
if (ep == NULL || req == NULL || mReq->req.status != -EALREADY ||
mEp->desc == NULL || list_empty(&mReq->queue) ||
- list_empty(&mEp->qh.queue))
+ (list_empty(&mEp->qh.queue) && ((mEp->type !=
+ USB_ENDPOINT_XFER_CONTROL) ||
+ list_empty(&_udc->ep0out.qh.queue)))) {
+ spin_unlock_irqrestore(mEp->lock, flags);
return -EINVAL;
-
- spin_lock_irqsave(mEp->lock, flags);
+ }
dbg_event(_usb_addr(mEp), "DEQUEUE", 0);
@@ -3475,6 +3476,14 @@
retval = usb_ep_enable(&udc->ep0in.ep);
if (retval)
return retval;
+ udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_KERNEL);
+ if (!udc->status)
+ return -ENOMEM;
+ udc->status_buf = kzalloc(2, GFP_KERNEL); /* for GET_STATUS */
+ if (!udc->status_buf) {
+ usb_ep_free_request(&udc->ep0in.ep, udc->status);
+ return -ENOMEM;
+ }
spin_lock_irqsave(udc->lock, flags);
udc->gadget.ep0 = &udc->ep0in.ep;
@@ -3558,6 +3567,9 @@
driver->unbind(&udc->gadget); /* MAY SLEEP */
spin_lock_irqsave(udc->lock, flags);
+ usb_ep_free_request(&udc->ep0in.ep, udc->status);
+ kfree(udc->status_buf);
+
udc->gadget.dev.driver = NULL;
/* free resources */
diff --git a/drivers/usb/gadget/ci13xxx_udc.h b/drivers/usb/gadget/ci13xxx_udc.h
index 3145418..1530474 100644
--- a/drivers/usb/gadget/ci13xxx_udc.h
+++ b/drivers/usb/gadget/ci13xxx_udc.h
@@ -115,6 +115,8 @@
spinlock_t *lock;
struct device *device;
struct dma_pool *td_pool;
+ struct ci13xxx_td *last_zptr;
+ dma_addr_t last_zdma;
unsigned long dTD_update_fail_count;
unsigned long prime_fail_count;
int prime_timer_count;
@@ -153,6 +155,7 @@
struct dma_pool *qh_pool; /* DMA pool for queue heads */
struct dma_pool *td_pool; /* DMA pool for transfer descs */
struct usb_request *status; /* ep0 status request */
+ void *status_buf;/* GET_STATUS buffer */
struct usb_gadget gadget; /* USB slave device */
struct ci13xxx_ep ci13xxx_ep[ENDPT_MAX]; /* extended endpts */
diff --git a/drivers/usb/gadget/f_diag.c b/drivers/usb/gadget/f_diag.c
index 3b1843b..3355e19 100644
--- a/drivers/usb/gadget/f_diag.c
+++ b/drivers/usb/gadget/f_diag.c
@@ -139,9 +139,6 @@
* @out_desc: USB OUT endpoint descriptor struct
* @read_pool: List of requests used for Rx (OUT ep)
* @write_pool: List of requests used for Tx (IN ep)
- * @config_work: Work item schedule after interface is configured to notify
- * CONNECT event to diag char driver and updating product id
- * and serial number to MODEM/IMEM.
* @lock: Spinlock to proctect read_pool, write_pool lists
* @cdev: USB composite device struct
* @ch: USB diag channel
@@ -153,7 +150,6 @@
struct usb_ep *in;
struct list_head read_pool;
struct list_head write_pool;
- struct work_struct config_work;
spinlock_t lock;
unsigned configured;
struct usb_composite_dev *cdev;
@@ -176,21 +172,20 @@
return container_of(f, struct diag_context, function);
}
-static void usb_config_work_func(struct work_struct *work)
+static void diag_update_pid_and_serial_num(struct diag_context *ctxt)
{
- struct diag_context *ctxt = container_of(work,
- struct diag_context, config_work);
struct usb_composite_dev *cdev = ctxt->cdev;
struct usb_gadget_strings *table;
struct usb_string *s;
- if (!ctxt->ch)
+ if (!ctxt->update_pid_and_serial_num)
return;
- if (ctxt->ch->notify)
- ctxt->ch->notify(ctxt->ch->priv, USB_DIAG_CONNECT, NULL);
-
- if (!ctxt->update_pid_and_serial_num)
+ /*
+	 * update pid and serial number to dload only if diag
+ * interface is zeroth interface.
+ */
+ if (intf_desc.bInterfaceNumber)
return;
/* pass on product id and serial number to dload */
@@ -612,7 +607,6 @@
usb_ep_disable(dev->in);
return rc;
}
- schedule_work(&dev->config_work);
dev->dpkts_tolaptop = 0;
dev->dpkts_tomodem = 0;
@@ -622,6 +616,9 @@
dev->configured = 1;
spin_unlock_irqrestore(&dev->lock, flags);
+ if (dev->ch->notify)
+ dev->ch->notify(dev->ch->priv, USB_DIAG_CONNECT, NULL);
+
return rc;
}
@@ -699,6 +696,7 @@
if (!f->ss_descriptors)
goto fail;
}
+ diag_update_pid_and_serial_num(ctxt);
return 0;
fail:
if (f->ss_descriptors)
@@ -761,7 +759,6 @@
spin_lock_init(&dev->lock);
INIT_LIST_HEAD(&dev->read_pool);
INIT_LIST_HEAD(&dev->write_pool);
- INIT_WORK(&dev->config_work, usb_config_work_func);
ret = usb_add_function(c, &dev->function);
if (ret) {
diff --git a/drivers/usb/gadget/f_gps.c b/drivers/usb/gadget/f_gps.c
new file mode 100644
index 0000000..ef08fc5
--- /dev/null
+++ b/drivers/usb/gadget/f_gps.c
@@ -0,0 +1,780 @@
+/*
+ * Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+
+#include <mach/usb_gadget_xport.h>
+
+#include "u_rmnet.h"
+#include "gadget_chips.h"
+
+#define GPS_NOTIFY_INTERVAL 5
+#define GPS_MAX_NOTIFY_SIZE 64
+
+
+#define ACM_CTRL_DTR (1 << 0)
+
+/* TODO: use separate structures for data and
+ * control paths
+ */
+struct f_gps {
+ struct grmnet port;
+ u8 port_num;
+ int ifc_id;
+ atomic_t online;
+ atomic_t ctrl_online;
+ struct usb_composite_dev *cdev;
+
+ spinlock_t lock;
+
+ /* usb eps */
+ struct usb_ep *notify;
+ struct usb_request *notify_req;
+
+ /* control info */
+ struct list_head cpkt_resp_q;
+ atomic_t notify_count;
+ unsigned long cpkts_len;
+};
+
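+/*
+ * Single GPS control port: the control transport type and the one
+ * function instance.
+ */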
+static struct gps_ports {
+ enum transport_type ctrl_xport;
+ struct f_gps *port;
+} gps_port;
+
+static struct usb_interface_descriptor gps_interface_desc = {
+ .bLength = USB_DT_INTERFACE_SIZE,
+ .bDescriptorType = USB_DT_INTERFACE,
+ .bNumEndpoints = 1,
+ .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+ .bInterfaceSubClass = USB_CLASS_VENDOR_SPEC,
+ .bInterfaceProtocol = USB_CLASS_VENDOR_SPEC,
+ /* .iInterface = DYNAMIC */
+};
+
+/* Full speed support */
+static struct usb_endpoint_descriptor gps_fs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GPS_MAX_NOTIFY_SIZE),
+ .bInterval = 1 << GPS_NOTIFY_INTERVAL,
+};
+
+static struct usb_descriptor_header *gps_fs_function[] = {
+ (struct usb_descriptor_header *) &gps_interface_desc,
+ (struct usb_descriptor_header *) &gps_fs_notify_desc,
+ NULL,
+};
+
+/* High speed support */
+static struct usb_endpoint_descriptor gps_hs_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GPS_MAX_NOTIFY_SIZE),
+ .bInterval = GPS_NOTIFY_INTERVAL + 4,
+};
+
+static struct usb_descriptor_header *gps_hs_function[] = {
+ (struct usb_descriptor_header *) &gps_interface_desc,
+ (struct usb_descriptor_header *) &gps_hs_notify_desc,
+ NULL,
+};
+
+/* Super speed support */
+static struct usb_endpoint_descriptor gps_ss_notify_desc = {
+ .bLength = USB_DT_ENDPOINT_SIZE,
+ .bDescriptorType = USB_DT_ENDPOINT,
+ .bEndpointAddress = USB_DIR_IN,
+ .bmAttributes = USB_ENDPOINT_XFER_INT,
+ .wMaxPacketSize = __constant_cpu_to_le16(GPS_MAX_NOTIFY_SIZE),
+ .bInterval = GPS_NOTIFY_INTERVAL + 4,
+};
+
+static struct usb_ss_ep_comp_descriptor gps_ss_notify_comp_desc = {
+ .bLength = sizeof gps_ss_notify_comp_desc,
+ .bDescriptorType = USB_DT_SS_ENDPOINT_COMP,
+
+ /* the following 3 values can be tweaked if necessary */
+ /* .bMaxBurst = 0, */
+ /* .bmAttributes = 0, */
+ .wBytesPerInterval = cpu_to_le16(GPS_MAX_NOTIFY_SIZE),
+};
+
+static struct usb_descriptor_header *gps_ss_function[] = {
+ (struct usb_descriptor_header *) &gps_interface_desc,
+ (struct usb_descriptor_header *) &gps_ss_notify_desc,
+ (struct usb_descriptor_header *) &gps_ss_notify_comp_desc,
+ NULL,
+};
+
+/* String descriptors */
+
+static struct usb_string gps_string_defs[] = {
+ [0].s = "GPS",
+ { } /* end of list */
+};
+
+static struct usb_gadget_strings gps_string_table = {
+ .language = 0x0409, /* en-us */
+ .strings = gps_string_defs,
+};
+
+static struct usb_gadget_strings *gps_strings[] = {
+ &gps_string_table,
+ NULL,
+};
+
+static void gps_ctrl_response_available(struct f_gps *dev);
+
+/* ------- misc functions --------------------*/
+
+static inline struct f_gps *func_to_gps(struct usb_function *f)
+{
+ return container_of(f, struct f_gps, port.func);
+}
+
+static inline struct f_gps *port_to_gps(struct grmnet *r)
+{
+ return container_of(r, struct f_gps, port);
+}
+
+static struct usb_request *
+gps_alloc_req(struct usb_ep *ep, unsigned len, gfp_t flags)
+{
+ struct usb_request *req;
+
+ req = usb_ep_alloc_request(ep, flags);
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ req->buf = kmalloc(len, flags);
+ if (!req->buf) {
+ usb_ep_free_request(ep, req);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ req->length = len;
+
+ return req;
+}
+
+void gps_free_req(struct usb_ep *ep, struct usb_request *req)
+{
+ kfree(req->buf);
+ usb_ep_free_request(ep, req);
+}
+
+static struct rmnet_ctrl_pkt *gps_alloc_ctrl_pkt(unsigned len, gfp_t flags)
+{
+ struct rmnet_ctrl_pkt *pkt;
+
+ pkt = kzalloc(sizeof(struct rmnet_ctrl_pkt), flags);
+ if (!pkt)
+ return ERR_PTR(-ENOMEM);
+
+ pkt->buf = kmalloc(len, flags);
+ if (!pkt->buf) {
+ kfree(pkt);
+ return ERR_PTR(-ENOMEM);
+ }
+ pkt->len = len;
+
+ return pkt;
+}
+
+static void gps_free_ctrl_pkt(struct rmnet_ctrl_pkt *pkt)
+{
+ kfree(pkt->buf);
+ kfree(pkt);
+}
+
+/* -------------------------------------------*/
+
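+/*
+ * Reserve one SMD control port for the GPS function and offset the
+ * local port number by the base returned from the transport layer.
+ */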
+static int gps_gport_setup(void)
+{
+ u8 base;
+ int res;
+
+ res = gsmd_ctrl_setup(GPS_CTRL_CLIENT, 1, &base);
+ gps_port.port->port_num += base;
+ return res;
+}
+
+static int gport_ctrl_connect(struct f_gps *dev)
+{
+ return gsmd_ctrl_connect(&dev->port, dev->port_num);
+}
+
+static int gport_gps_disconnect(struct f_gps *dev)
+{
+ gsmd_ctrl_disconnect(&dev->port, dev->port_num);
+ return 0;
+}
+
+static void gps_unbind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct f_gps *dev = func_to_gps(f);
+
+ pr_debug("%s: portno:%d\n", __func__, dev->port_num);
+
+ if (gadget_is_superspeed(c->cdev->gadget))
+ usb_free_descriptors(f->ss_descriptors);
+ if (gadget_is_dualspeed(c->cdev->gadget))
+ usb_free_descriptors(f->hs_descriptors);
+ usb_free_descriptors(f->descriptors);
+
+ gps_free_req(dev->notify, dev->notify_req);
+
+ kfree(f->name);
+}
+
+static void gps_purge_responses(struct f_gps *dev)
+{
+ unsigned long flags;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ pr_debug("%s: port#%d\n", __func__, dev->port_num);
+
+ spin_lock_irqsave(&dev->lock, flags);
+ while (!list_empty(&dev->cpkt_resp_q)) {
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+
+ list_del(&cpkt->list);
+ rmnet_free_ctrl_pkt(cpkt);
+ }
+ atomic_set(&dev->notify_count, 0);
+ spin_unlock_irqrestore(&dev->lock, flags);
+}
+
+static void gps_suspend(struct usb_function *f)
+{
+	struct f_gps *dev = func_to_gps(f);
+
+	gps_purge_responses(dev);
+}
+
+static void gps_disable(struct usb_function *f)
+{
+ struct f_gps *dev = func_to_gps(f);
+
+ usb_ep_disable(dev->notify);
+ dev->notify->driver_data = NULL;
+
+ atomic_set(&dev->online, 0);
+
+ gps_purge_responses(dev);
+
+ gport_gps_disconnect(dev);
+}
+
+static int
+gps_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+{
+ struct f_gps *dev = func_to_gps(f);
+ struct usb_composite_dev *cdev = dev->cdev;
+ int ret;
+ struct list_head *cpkt;
+
+ pr_debug("%s:dev:%p\n", __func__, dev);
+
+ if (dev->notify->driver_data)
+ usb_ep_disable(dev->notify);
+
+ ret = config_ep_by_speed(cdev->gadget, f, dev->notify);
+ if (ret) {
+ dev->notify->desc = NULL;
+		ERROR(cdev, "config_ep_by_speed fails for ep %s, result %d\n",
+ dev->notify->name, ret);
+ return ret;
+ }
+ ret = usb_ep_enable(dev->notify);
+
+ if (ret) {
+ pr_err("%s: usb ep#%s enable failed, err#%d\n",
+ __func__, dev->notify->name, ret);
+ return ret;
+ }
+ dev->notify->driver_data = dev;
+
+ ret = gport_ctrl_connect(dev);
+
+ atomic_set(&dev->online, 1);
+
+	/*
+	 * In case notifications were aborted, but there are pending control
+	 * packets in the response queue, re-add the notifications.
+	 */
+ list_for_each(cpkt, &dev->cpkt_resp_q)
+ gps_ctrl_response_available(dev);
+
+ return ret;
+}
+
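+/*
+ * Queue a RESPONSE_AVAILABLE notification on the interrupt endpoint.
+ * notify_count coalesces notifications: only the transition from 0 to 1
+ * actually queues a request; gps_notify_complete() drains the rest.
+ */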
+static void gps_ctrl_response_available(struct f_gps *dev)
+{
+ struct usb_request *req = dev->notify_req;
+ struct usb_cdc_notification *event;
+ unsigned long flags;
+ int ret;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ pr_debug("%s:dev:%p\n", __func__, dev);
+
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!atomic_read(&dev->online) || !req || !req->buf) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return;
+ }
+
+ if (atomic_inc_return(&dev->notify_count) != 1) {
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return;
+ }
+
+ event = req->buf;
+ event->bmRequestType = USB_DIR_IN | USB_TYPE_CLASS
+ | USB_RECIP_INTERFACE;
+ event->bNotificationType = USB_CDC_NOTIFY_RESPONSE_AVAILABLE;
+ event->wValue = cpu_to_le16(0);
+ event->wIndex = cpu_to_le16(dev->ifc_id);
+ event->wLength = cpu_to_le16(0);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ ret = usb_ep_queue(dev->notify, dev->notify_req, GFP_ATOMIC);
+ if (ret) {
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!list_empty(&dev->cpkt_resp_q)) {
+ atomic_dec(&dev->notify_count);
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+ list_del(&cpkt->list);
+ gps_free_ctrl_pkt(cpkt);
+ }
+ spin_unlock_irqrestore(&dev->lock, flags);
+ pr_debug("ep enqueue error %d\n", ret);
+ }
+}
+
+static void gps_connect(struct grmnet *gr)
+{
+ struct f_gps *dev;
+
+ if (!gr) {
+ pr_err("%s: Invalid grmnet:%p\n", __func__, gr);
+ return;
+ }
+
+ dev = port_to_gps(gr);
+
+ atomic_set(&dev->ctrl_online, 1);
+}
+
+static void gps_disconnect(struct grmnet *gr)
+{
+ struct f_gps *dev;
+ struct usb_cdc_notification *event;
+ int status;
+
+ if (!gr) {
+ pr_err("%s: Invalid grmnet:%p\n", __func__, gr);
+ return;
+ }
+
+ dev = port_to_gps(gr);
+
+ atomic_set(&dev->ctrl_online, 0);
+
+ if (!atomic_read(&dev->online)) {
+ pr_debug("%s: nothing to do\n", __func__);
+ return;
+ }
+
+ usb_ep_fifo_flush(dev->notify);
+
+ event = dev->notify_req->buf;
+ event->bmRequestType = USB_DIR_IN | USB_TYPE_CLASS
+ | USB_RECIP_INTERFACE;
+ event->bNotificationType = USB_CDC_NOTIFY_NETWORK_CONNECTION;
+ event->wValue = cpu_to_le16(0);
+ event->wIndex = cpu_to_le16(dev->ifc_id);
+ event->wLength = cpu_to_le16(0);
+
+ status = usb_ep_queue(dev->notify, dev->notify_req, GFP_ATOMIC);
+ if (status < 0) {
+ if (!atomic_read(&dev->online))
+ return;
+ pr_err("%s: gps notify ep enqueue error %d\n",
+ __func__, status);
+ }
+
+ gps_purge_responses(dev);
+}
+
+static int
+gps_send_cpkt_response(void *gr, void *buf, size_t len)
+{
+ struct f_gps *dev;
+ struct rmnet_ctrl_pkt *cpkt;
+ unsigned long flags;
+
+ if (!gr || !buf) {
+ pr_err("%s: Invalid grmnet/buf, grmnet:%p buf:%p\n",
+ __func__, gr, buf);
+ return -ENODEV;
+ }
+ cpkt = gps_alloc_ctrl_pkt(len, GFP_ATOMIC);
+ if (IS_ERR(cpkt)) {
+ pr_err("%s: Unable to allocate ctrl pkt\n", __func__);
+ return -ENOMEM;
+ }
+ memcpy(cpkt->buf, buf, len);
+ cpkt->len = len;
+
+ dev = port_to_gps(gr);
+
+ pr_debug("%s: dev:%p\n", __func__, dev);
+
+ if (!atomic_read(&dev->online) || !atomic_read(&dev->ctrl_online)) {
+ gps_free_ctrl_pkt(cpkt);
+ return 0;
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ list_add_tail(&cpkt->list, &dev->cpkt_resp_q);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ gps_ctrl_response_available(dev);
+
+ return 0;
+}
+
+static void
+gps_cmd_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct f_gps *dev = req->context;
+ struct usb_composite_dev *cdev;
+
+ if (!dev) {
+ pr_err("%s: dev is null\n", __func__);
+ return;
+ }
+
+ pr_debug("%s: dev:%p\n", __func__, dev);
+
+ cdev = dev->cdev;
+
+ if (dev->port.send_encap_cmd)
+ dev->port.send_encap_cmd(dev->port_num, req->buf, req->actual);
+}
+
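+/*
+ * Completion handler for the notify request: keep re-queuing it while
+ * responses are still pending (notify_count > 0), and reset the count
+ * when the connection goes away.
+ */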
+static void gps_notify_complete(struct usb_ep *ep, struct usb_request *req)
+{
+ struct f_gps *dev = req->context;
+ int status = req->status;
+ unsigned long flags;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ pr_debug("%s: dev:%p port#%d\n", __func__, dev, dev->port_num);
+
+ switch (status) {
+ case -ECONNRESET:
+ case -ESHUTDOWN:
+ /* connection gone */
+ atomic_set(&dev->notify_count, 0);
+ break;
+ default:
+ pr_err("gps notify ep error %d\n", status);
+ /* FALLTHROUGH */
+ case 0:
+ if (!atomic_read(&dev->ctrl_online))
+ break;
+
+ if (atomic_dec_and_test(&dev->notify_count))
+ break;
+
+ status = usb_ep_queue(dev->notify, req, GFP_ATOMIC);
+ if (status) {
+ spin_lock_irqsave(&dev->lock, flags);
+ if (!list_empty(&dev->cpkt_resp_q)) {
+ atomic_dec(&dev->notify_count);
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+ list_del(&cpkt->list);
+ gps_free_ctrl_pkt(cpkt);
+ }
+ spin_unlock_irqrestore(&dev->lock, flags);
+ pr_debug("ep enqueue error %d\n", status);
+ }
+ break;
+ }
+}
+
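+/*
+ * Handle class-specific ep0 requests: encapsulated commands from the
+ * host, GET_ENCAPSULATED_RESPONSE reads from cpkt_resp_q, and
+ * SET_CONTROL_LINE_STATE notifications passed on to the modem.
+ */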
+static int
+gps_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
+{
+ struct f_gps *dev = func_to_gps(f);
+ struct usb_composite_dev *cdev = dev->cdev;
+ struct usb_request *req = cdev->req;
+ u16 w_index = le16_to_cpu(ctrl->wIndex);
+ u16 w_value = le16_to_cpu(ctrl->wValue);
+ u16 w_length = le16_to_cpu(ctrl->wLength);
+ int ret = -EOPNOTSUPP;
+
+ pr_debug("%s:dev:%p\n", __func__, dev);
+
+ if (!atomic_read(&dev->online)) {
+ pr_debug("%s: usb cable is not connected\n", __func__);
+ return -ENOTCONN;
+ }
+
+ switch ((ctrl->bRequestType << 8) | ctrl->bRequest) {
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_SEND_ENCAPSULATED_COMMAND:
+ ret = w_length;
+ req->complete = gps_cmd_complete;
+ req->context = dev;
+ break;
+
+ case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_GET_ENCAPSULATED_RESPONSE:
+ if (w_value)
+ goto invalid;
+ else {
+ unsigned len;
+ struct rmnet_ctrl_pkt *cpkt;
+
+ spin_lock(&dev->lock);
+ if (list_empty(&dev->cpkt_resp_q)) {
+ pr_err("%s: ctrl resp queue empty", __func__);
+ spin_unlock(&dev->lock);
+ goto invalid;
+ }
+
+ cpkt = list_first_entry(&dev->cpkt_resp_q,
+ struct rmnet_ctrl_pkt, list);
+ list_del(&cpkt->list);
+ spin_unlock(&dev->lock);
+
+ len = min_t(unsigned, w_length, cpkt->len);
+ memcpy(req->buf, cpkt->buf, len);
+ ret = len;
+
+ gps_free_ctrl_pkt(cpkt);
+ }
+ break;
+ case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8)
+ | USB_CDC_REQ_SET_CONTROL_LINE_STATE:
+ if (dev->port.notify_modem)
+ dev->port.notify_modem(&dev->port,
+ dev->port_num, w_value);
+ ret = 0;
+
+ break;
+ default:
+
+invalid:
+ DBG(cdev, "invalid control req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ }
+
+ /* respond with data transfer or status phase? */
+ if (ret >= 0) {
+ VDBG(cdev, "gps req%02x.%02x v%04x i%04x l%d\n",
+ ctrl->bRequestType, ctrl->bRequest,
+ w_value, w_index, w_length);
+ req->zero = (ret < w_length);
+ req->length = ret;
+ ret = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
+ if (ret < 0)
+ ERROR(cdev, "gps ep0 enqueue err %d\n", ret);
+ }
+
+ return ret;
+}
+
+static int gps_bind(struct usb_configuration *c, struct usb_function *f)
+{
+ struct f_gps *dev = func_to_gps(f);
+ struct usb_ep *ep;
+ struct usb_composite_dev *cdev = c->cdev;
+ int ret = -ENODEV;
+
+ dev->ifc_id = usb_interface_id(c, f);
+ if (dev->ifc_id < 0) {
+ pr_err("%s: unable to allocate ifc id, err:%d",
+ __func__, dev->ifc_id);
+ return dev->ifc_id;
+ }
+ gps_interface_desc.bInterfaceNumber = dev->ifc_id;
+
+ dev->port.in = NULL;
+ dev->port.out = NULL;
+
+ ep = usb_ep_autoconfig(cdev->gadget, &gps_fs_notify_desc);
+ if (!ep) {
+ pr_err("%s: usb epnotify autoconfig failed\n", __func__);
+ ret = -ENODEV;
+ goto ep_auto_notify_fail;
+ }
+ dev->notify = ep;
+ ep->driver_data = cdev;
+
+ dev->notify_req = gps_alloc_req(ep,
+ sizeof(struct usb_cdc_notification),
+ GFP_KERNEL);
+ if (IS_ERR(dev->notify_req)) {
+ pr_err("%s: unable to allocate memory for notify req\n",
+ __func__);
+ ret = -ENOMEM;
+ goto ep_notify_alloc_fail;
+ }
+
+ dev->notify_req->complete = gps_notify_complete;
+ dev->notify_req->context = dev;
+
+ ret = -ENOMEM;
+ f->descriptors = usb_copy_descriptors(gps_fs_function);
+
+ if (!f->descriptors)
+ goto fail;
+
+ if (gadget_is_dualspeed(cdev->gadget)) {
+ gps_hs_notify_desc.bEndpointAddress =
+ gps_fs_notify_desc.bEndpointAddress;
+
+ /* copy descriptors, and track endpoint copies */
+ f->hs_descriptors = usb_copy_descriptors(gps_hs_function);
+
+ if (!f->hs_descriptors)
+ goto fail;
+ }
+
+ if (gadget_is_superspeed(cdev->gadget)) {
+ gps_ss_notify_desc.bEndpointAddress =
+ gps_fs_notify_desc.bEndpointAddress;
+
+ /* copy descriptors, and track endpoint copies */
+ f->ss_descriptors = usb_copy_descriptors(gps_ss_function);
+
+ if (!f->ss_descriptors)
+ goto fail;
+ }
+
+ pr_info("%s: GPS(%d) %s Speed\n",
+ __func__, dev->port_num,
+ gadget_is_dualspeed(cdev->gadget) ? "dual" : "full");
+
+ return 0;
+
+fail:
+ if (f->ss_descriptors)
+ usb_free_descriptors(f->ss_descriptors);
+ if (f->hs_descriptors)
+ usb_free_descriptors(f->hs_descriptors);
+ if (f->descriptors)
+ usb_free_descriptors(f->descriptors);
+ if (dev->notify_req)
+ gps_free_req(dev->notify, dev->notify_req);
+ep_notify_alloc_fail:
+ dev->notify->driver_data = NULL;
+ dev->notify = NULL;
+ep_auto_notify_fail:
+ return ret;
+}
+
+static int gps_bind_config(struct usb_configuration *c)
+{
+ int status;
+ struct f_gps *dev;
+ struct usb_function *f;
+ unsigned long flags;
+
+ pr_debug("%s: usb config:%p\n", __func__, c);
+
+ if (gps_string_defs[0].id == 0) {
+ status = usb_string_id(c->cdev);
+ if (status < 0) {
+ pr_err("%s: failed to get string id, err:%d\n",
+ __func__, status);
+ return status;
+ }
+ gps_string_defs[0].id = status;
+ }
+
+ dev = gps_port.port;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->cdev = c->cdev;
+ f = &dev->port.func;
+ f->name = kasprintf(GFP_ATOMIC, "gps");
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (!f->name) {
+ pr_err("%s: cannot allocate memory for name\n", __func__);
+ return -ENOMEM;
+ }
+
+ f->strings = gps_strings;
+ f->bind = gps_bind;
+ f->unbind = gps_unbind;
+ f->disable = gps_disable;
+ f->set_alt = gps_set_alt;
+ f->setup = gps_setup;
+ f->suspend = gps_suspend;
+ dev->port.send_cpkt_response = gps_send_cpkt_response;
+ dev->port.disconnect = gps_disconnect;
+ dev->port.connect = gps_connect;
+
+ status = usb_add_function(c, f);
+ if (status) {
+ pr_err("%s: usb add function failed: %d\n",
+ __func__, status);
+ kfree(f->name);
+ return status;
+ }
+
+ pr_debug("%s: complete\n", __func__);
+
+ return status;
+}
+
+static void gps_cleanup(void)
+{
+ kfree(gps_port.port);
+}
+
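+/*
+ * Allocate the single GPS port instance and default it to the SMD
+ * control transport.
+ */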
+static int gps_init_port(void)
+{
+ struct f_gps *dev;
+
+ dev = kzalloc(sizeof(struct f_gps), GFP_KERNEL);
+ if (!dev) {
+ pr_err("%s: Unable to allocate gps device\n", __func__);
+ return -ENOMEM;
+ }
+
+ spin_lock_init(&dev->lock);
+ INIT_LIST_HEAD(&dev->cpkt_resp_q);
+ dev->port_num = 0;
+
+ gps_port.port = dev;
+ gps_port.ctrl_xport = USB_GADGET_XPORT_SMD;
+
+ return 0;
+}
diff --git a/drivers/usb/gadget/f_mass_storage.c b/drivers/usb/gadget/f_mass_storage.c
index bac8b68..87a3fd5 100644
--- a/drivers/usb/gadget/f_mass_storage.c
+++ b/drivers/usb/gadget/f_mass_storage.c
@@ -2510,7 +2510,8 @@
goto reset_bulk_int;
fsg->bulk_out->driver_data = common;
fsg->bulk_out_enabled = 1;
- common->bulk_out_maxpacket = le16_to_cpu(fsg->bulk_in->desc->wMaxPacketSize);
+ common->bulk_out_maxpacket =
+ le16_to_cpu(fsg->bulk_out->desc->wMaxPacketSize);
clear_bit(IGNORE_BULK_OUT, &fsg->atomic_bitflags);
csw_hack_sent = 0;
write_error_after_csw_sent = 0;
diff --git a/drivers/usb/gadget/f_mbim.c b/drivers/usb/gadget/f_mbim.c
index 5a3d753..22f8dc9 100644
--- a/drivers/usb/gadget/f_mbim.c
+++ b/drivers/usb/gadget/f_mbim.c
@@ -1703,6 +1703,7 @@
ntb_parameters.dwNtbInMaxSize =
cpu_to_le32(NTB_DEFAULT_IN_SIZE_IPA);
ntb_parameters.dwNtbOutMaxSize = cpu_to_le32(NTB_OUT_SIZE_IPA);
+ ntb_parameters.wNdpInDivisor = 1;
}
INIT_LIST_HEAD(&mbim->cpkt_req_q);
diff --git a/drivers/usb/gadget/f_qc_ecm.c b/drivers/usb/gadget/f_qc_ecm.c
index 51f0e50..a395d15 100644
--- a/drivers/usb/gadget/f_qc_ecm.c
+++ b/drivers/usb/gadget/f_qc_ecm.c
@@ -77,15 +77,7 @@
bool is_open;
};
-struct f_ecm_qc_ipa_params {
- u8 dev_mac[ETH_ALEN];
- u8 host_mac[ETH_ALEN];
- ecm_ipa_callback ipa_rx_cb;
- ecm_ipa_callback ipa_tx_cb;
- void *ipa_priv;
-};
-
-static struct f_ecm_qc_ipa_params ipa_params;
+static struct ecm_ipa_params ipa_params;
static inline struct f_ecm_qc *func_to_ecm_qc(struct usb_function *f)
{
@@ -430,24 +422,22 @@
bam_data_disconnect(&ecm_qc_bam_port, 0);
- ecm_ipa_cleanup(ipa_params.ipa_priv);
-
return 0;
}
void *ecm_qc_get_ipa_rx_cb(void)
{
- return ipa_params.ipa_rx_cb;
+ return ipa_params.ecm_ipa_rx_dp_notify;
}
void *ecm_qc_get_ipa_tx_cb(void)
{
- return ipa_params.ipa_tx_cb;
+ return ipa_params.ecm_ipa_tx_dp_notify;
}
void *ecm_qc_get_ipa_priv(void)
{
- return ipa_params.ipa_priv;
+ return ipa_params.private;
}
/*-------------------------------------------------------------------------*/
@@ -849,6 +839,10 @@
usb_ep_free_request(ecm->notify, ecm->notify_req);
ecm_qc_string_defs[1].s = NULL;
+
+ if (ecm->xport == USB_GADGET_XPORT_BAM2BAM_IPA)
+ ecm_ipa_cleanup(ipa_params.private);
+
kfree(ecm);
}
@@ -918,12 +912,13 @@
/* export host's Ethernet address in CDC format */
if (ecm->xport == USB_GADGET_XPORT_BAM2BAM_IPA) {
- gether_qc_get_macs(ipa_params.dev_mac, ipa_params.host_mac);
+ gether_qc_get_macs(ipa_params.device_ethaddr,
+ ipa_params.host_ethaddr);
snprintf(ecm->ethaddr, sizeof ecm->ethaddr,
"%02X%02X%02X%02X%02X%02X",
- ipa_params.host_mac[0], ipa_params.host_mac[1],
- ipa_params.host_mac[2], ipa_params.host_mac[3],
- ipa_params.host_mac[4], ipa_params.host_mac[5]);
+ ipa_params.host_ethaddr[0], ipa_params.host_ethaddr[1],
+ ipa_params.host_ethaddr[2], ipa_params.host_ethaddr[3],
+ ipa_params.host_ethaddr[4], ipa_params.host_ethaddr[5]);
} else
snprintf(ecm->ethaddr, sizeof ecm->ethaddr,
"%02X%02X%02X%02X%02X%02X",
@@ -955,21 +950,15 @@
if (ecm->xport != USB_GADGET_XPORT_BAM2BAM_IPA)
return status;
- status = ecm_ipa_init(&ipa_params.ipa_rx_cb, &ipa_params.ipa_tx_cb,
- &ipa_params.ipa_priv);
+ pr_debug("setting ecm_ipa, host_ethaddr=%pM, device_ethaddr=%pM",
+ ipa_params.host_ethaddr, ipa_params.device_ethaddr);
+ status = ecm_ipa_init(&ipa_params);
if (status) {
- pr_err("failed to initialize ECM IPA Driver");
+ pr_err("failed to initialize ecm_ipa");
ecm_qc_string_defs[1].s = NULL;
kfree(ecm);
- return status;
- }
-
- status = ecm_ipa_configure(ipa_params.host_mac, ipa_params.dev_mac,
- ipa_params.ipa_priv);
- if (status) {
- pr_err("failed to configure ECM IPA Driver");
- ecm_qc_string_defs[1].s = NULL;
- kfree(ecm);
+ } else {
+		pr_debug("ecm_ipa successfully created");
}
return status;
diff --git a/drivers/usb/gadget/f_rmnet.c b/drivers/usb/gadget/f_rmnet.c
index 2dccca8..f095efb 100644
--- a/drivers/usb/gadget/f_rmnet.c
+++ b/drivers/usb/gadget/f_rmnet.c
@@ -301,6 +301,7 @@
int ret;
int port_idx;
int i;
+ u8 base;
pr_debug("%s: bam ports: %u bam2bam ports: %u data hsic ports: %u data hsuart ports: %u"
" smd ports: %u ctrl hsic ports: %u ctrl hsuart ports: %u"
@@ -317,9 +318,13 @@
}
if (no_ctrl_smd_ports) {
- ret = gsmd_ctrl_setup(no_ctrl_smd_ports);
+ ret = gsmd_ctrl_setup(FRMNET_CTRL_CLIENT,
+ no_ctrl_smd_ports, &base);
if (ret)
return ret;
+ for (i = 0; i < nr_rmnet_ports; i++)
+ if (rmnet_ports[i].port)
+ rmnet_ports[i].port->port_num += base;
}
if (no_data_hsic_ports) {
diff --git a/drivers/usb/gadget/f_rndis.c b/drivers/usb/gadget/f_rndis.c
index f3c6481..1017900 100644
--- a/drivers/usb/gadget/f_rndis.c
+++ b/drivers/usb/gadget/f_rndis.c
@@ -25,11 +25,6 @@
#include "u_ether.h"
#include "rndis.h"
-static bool rndis_multipacket_dl_disable;
-module_param(rndis_multipacket_dl_disable, bool, S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(rndis_multipacket_dl_disable,
- "Disable RNDIS Multi-packet support in DownLink");
-
/*
* This function is an RNDIS Ethernet port -- a Microsoft protocol that's
* been promoted instead of the standard CDC Ethernet. The published RNDIS
@@ -71,6 +66,16 @@
* - MS-Windows drivers sometimes emit undocumented requests.
*/
+static bool rndis_multipacket_dl_disable;
+module_param(rndis_multipacket_dl_disable, bool, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(rndis_multipacket_dl_disable,
+ "Disable RNDIS Multi-packet support in DownLink");
+
+static unsigned int rndis_ul_max_pkt_per_xfer = 1;
+module_param(rndis_ul_max_pkt_per_xfer, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(rndis_ul_max_pkt_per_xfer,
+	"Max number of RNDIS packets aggregated per transfer in UpLink");
+
struct f_rndis {
struct gether port;
u8 ctrl_id, data_id;
@@ -799,6 +804,7 @@
rndis_set_param_medium(rndis->config, NDIS_MEDIUM_802_3, 0);
rndis_set_host_mac(rndis->config, rndis->ethaddr);
+ rndis_set_max_pkt_xfer(rndis->config, rndis_ul_max_pkt_per_xfer);
if (rndis->manufacturer && rndis->vendorID &&
rndis_set_param_vendor(rndis->config, rndis->vendorID,
@@ -945,6 +951,7 @@
rndis->port.header_len = sizeof(struct rndis_packet_msg_type);
rndis->port.wrap = rndis_add_header;
rndis->port.unwrap = rndis_rm_hdr;
+ rndis->port.ul_max_pkts_per_xfer = rndis_ul_max_pkt_per_xfer;
rndis->port.func.name = "rndis";
rndis->port.func.strings = rndis_strings;
diff --git a/drivers/usb/gadget/rndis.c b/drivers/usb/gadget/rndis.c
index 7ac5b64..07f4d26 100644
--- a/drivers/usb/gadget/rndis.c
+++ b/drivers/usb/gadget/rndis.c
@@ -59,6 +59,16 @@
#define RNDIS_MAX_CONFIGS 1
+int rndis_ul_max_pkt_per_xfer_rcvd;
+module_param(rndis_ul_max_pkt_per_xfer_rcvd, int, S_IRUGO);
+MODULE_PARM_DESC(rndis_ul_max_pkt_per_xfer_rcvd,
+ "Max num of REMOTE_NDIS_PACKET_MSGs received in a single transfer");
+
+int rndis_ul_max_xfer_size_rcvd;
+module_param(rndis_ul_max_xfer_size_rcvd, int, S_IRUGO);
+MODULE_PARM_DESC(rndis_ul_max_xfer_size_rcvd,
+ "Max size of bus transfer received");
+
static rndis_params rndis_per_dev_params[RNDIS_MAX_CONFIGS];
@@ -590,7 +600,6 @@
(params->dev->mtu
+ sizeof(struct ethhdr)
+ sizeof(struct rndis_packet_msg_type)
-
+ 22));
resp->PacketAlignmentFactor = cpu_to_le32(params->pkt_alignment_factor);
resp->AFListOffset = cpu_to_le32(0);
@@ -1051,25 +1060,77 @@
struct sk_buff *skb,
struct sk_buff_head *list)
{
- /* tmp points to a struct rndis_packet_msg_type */
- __le32 *tmp = (void *)skb->data;
+ int num_pkts = 1;
- /* MessageType, MessageLength */
- if (cpu_to_le32(REMOTE_NDIS_PACKET_MSG)
- != get_unaligned(tmp++)) {
- dev_kfree_skb_any(skb);
- return -EINVAL;
- }
- tmp++;
+ if (skb->len > rndis_ul_max_xfer_size_rcvd)
+ rndis_ul_max_xfer_size_rcvd = skb->len;
- /* DataOffset, DataLength */
- if (!skb_pull(skb, get_unaligned_le32(tmp++) + 8)) {
- dev_kfree_skb_any(skb);
- return -EOVERFLOW;
+ while (skb->len) {
+ struct rndis_packet_msg_type *hdr;
+ struct sk_buff *skb2;
+ u32 msg_len, data_offset, data_len;
+
+ /* some rndis hosts send extra byte to avoid zlp, ignore it */
+ if (skb->len == 1) {
+ dev_kfree_skb_any(skb);
+ return 0;
+ }
+
+ if (skb->len < sizeof *hdr) {
+			pr_err("invalid rndis pkt: skblen:%u hdr_len:%zu\n",
+ skb->len, sizeof *hdr);
+ dev_kfree_skb_any(skb);
+ return -EINVAL;
+ }
+
+ hdr = (void *)skb->data;
+ msg_len = le32_to_cpu(hdr->MessageLength);
+ data_offset = le32_to_cpu(hdr->DataOffset);
+ data_len = le32_to_cpu(hdr->DataLength);
+
+ if (skb->len < msg_len ||
+ ((data_offset + data_len + 8) > msg_len)) {
+ pr_err("invalid rndis message: %d/%d/%d/%d, len:%d\n",
+ le32_to_cpu(hdr->MessageType),
+ msg_len, data_offset, data_len, skb->len);
+ dev_kfree_skb_any(skb);
+ return -EOVERFLOW;
+ }
+
+ if (le32_to_cpu(hdr->MessageType) != REMOTE_NDIS_PACKET_MSG) {
+ pr_err("invalid rndis message: %d/%d/%d/%d, len:%d\n",
+ le32_to_cpu(hdr->MessageType),
+ msg_len, data_offset, data_len, skb->len);
+ dev_kfree_skb_any(skb);
+ return -EINVAL;
+ }
+
+ skb_pull(skb, data_offset + 8);
+
+ if (msg_len == skb->len) {
+ skb_trim(skb, data_len);
+ break;
+ }
+
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+ if (!skb2) {
+			pr_err("%s: skb clone failed\n", __func__);
+ dev_kfree_skb_any(skb);
+ return -ENOMEM;
+ }
+
+ skb_pull(skb, msg_len - sizeof *hdr);
+ skb_trim(skb2, data_len);
+ skb_queue_tail(list, skb2);
+
+ num_pkts++;
}
- skb_trim(skb, get_unaligned_le32(tmp++));
+
+ if (num_pkts > rndis_ul_max_pkt_per_xfer_rcvd)
+ rndis_ul_max_pkt_per_xfer_rcvd = num_pkts;
skb_queue_tail(list, skb);
+
return 0;
}
diff --git a/drivers/usb/gadget/rndis.h b/drivers/usb/gadget/rndis.h
index 8a6a630..d00f81f 100644
--- a/drivers/usb/gadget/rndis.h
+++ b/drivers/usb/gadget/rndis.h
@@ -252,6 +252,7 @@
int rndis_set_param_vendor (u8 configNr, u32 vendorID,
const char *vendorDescr);
int rndis_set_param_medium (u8 configNr, u32 medium, u32 speed);
+void rndis_set_max_pkt_xfer(u8 configNr, u8 max_pkt_per_xfer);
void rndis_add_hdr (struct sk_buff *skb);
int rndis_rm_hdr(struct gether *port, struct sk_buff *skb,
struct sk_buff_head *list);
diff --git a/drivers/usb/gadget/u_bam.c b/drivers/usb/gadget/u_bam.c
index 67c9a1a..c05f683 100644
--- a/drivers/usb/gadget/u_bam.c
+++ b/drivers/usb/gadget/u_bam.c
@@ -667,11 +667,11 @@
int ret;
if (d->trans == USB_GADGET_XPORT_BAM2BAM_IPA) {
+ teth_bridge_disconnect();
ret = usb_bam_disconnect_ipa(&d->ipa_params);
if (ret)
pr_err("%s: usb_bam_disconnect_ipa failed: err:%d\n",
__func__, ret);
- teth_bridge_disconnect();
}
}
@@ -717,6 +717,7 @@
ipa_notify_cb usb_notify_cb;
void *priv;
int ret;
+ unsigned long flags;
if (d->trans == USB_GADGET_XPORT_BAM2BAM) {
ret = usb_bam_connect(d->src_connection_idx, &d->src_pipe_idx);
@@ -741,8 +742,8 @@
d->ipa_params.priv = priv;
d->ipa_params.ipa_ep_cfg.mode.mode = IPA_BASIC;
- d->ipa_params.client = IPA_CLIENT_USB_CONS;
- d->ipa_params.dir = PEER_PERIPHERAL_TO_USB;
+ d->ipa_params.client = IPA_CLIENT_USB_PROD;
+ d->ipa_params.dir = USB_TO_PEER_PERIPHERAL;
ret = usb_bam_connect_ipa(&d->ipa_params);
if (ret) {
pr_err("%s: usb_bam_connect_ipa failed: err:%d\n",
@@ -750,8 +751,8 @@
return;
}
- d->ipa_params.client = IPA_CLIENT_USB_PROD;
- d->ipa_params.dir = USB_TO_PEER_PERIPHERAL;
+ d->ipa_params.client = IPA_CLIENT_USB_CONS;
+ d->ipa_params.dir = PEER_PERIPHERAL_TO_USB;
ret = usb_bam_connect_ipa(&d->ipa_params);
if (ret) {
pr_err("%s: usb_bam_connect_ipa failed: err:%d\n",
@@ -769,9 +770,21 @@
}
}
- d->rx_req = usb_ep_alloc_request(port->port_usb->out, GFP_KERNEL);
- if (!d->rx_req)
+ spin_lock_irqsave(&port->port_lock_ul, flags);
+ spin_lock(&port->port_lock_dl);
+ if (!port->port_usb) {
+ pr_debug("%s: usb cable is disconnected, exiting\n", __func__);
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
return;
+ }
+ d->rx_req = usb_ep_alloc_request(port->port_usb->out, GFP_ATOMIC);
+ if (!d->rx_req) {
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
+ pr_err("%s: out of memory\n", __func__);
+ return;
+ }
d->rx_req->context = port;
d->rx_req->complete = gbam_endless_rx_complete;
@@ -779,9 +792,14 @@
sps_params = (MSM_SPS_MODE | d->src_pipe_idx |
MSM_VENDOR_ID) & ~MSM_IS_FINITE_TRANSFER;
d->rx_req->udc_priv = sps_params;
- d->tx_req = usb_ep_alloc_request(port->port_usb->in, GFP_KERNEL);
- if (!d->tx_req)
+
+ d->tx_req = usb_ep_alloc_request(port->port_usb->in, GFP_ATOMIC);
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
+ if (!d->tx_req) {
+ pr_err("%s: out of memory\n", __func__);
return;
+ }
d->tx_req->context = port;
d->tx_req->complete = gbam_endless_tx_complete;
diff --git a/drivers/usb/gadget/u_bam_data.c b/drivers/usb/gadget/u_bam_data.c
index eec9e37..2abe2ef 100644
--- a/drivers/usb/gadget/u_bam_data.c
+++ b/drivers/usb/gadget/u_bam_data.c
@@ -219,10 +219,10 @@
d->ipa_params.ipa_ep_cfg.mode.mode = IPA_BASIC;
}
- d->ipa_params.client = IPA_CLIENT_USB_CONS;
- d->ipa_params.dir = PEER_PERIPHERAL_TO_USB;
+ d->ipa_params.client = IPA_CLIENT_USB_PROD;
+ d->ipa_params.dir = USB_TO_PEER_PERIPHERAL;
if (d->func_type == USB_FUNC_ECM) {
- d->ipa_params.notify = ecm_qc_get_ipa_tx_cb();
+ d->ipa_params.notify = ecm_qc_get_ipa_rx_cb();
d->ipa_params.priv = ecm_qc_get_ipa_priv();
}
ret = usb_bam_connect_ipa(&d->ipa_params);
@@ -232,10 +232,10 @@
return;
}
- d->ipa_params.client = IPA_CLIENT_USB_PROD;
- d->ipa_params.dir = USB_TO_PEER_PERIPHERAL;
+ d->ipa_params.client = IPA_CLIENT_USB_CONS;
+ d->ipa_params.dir = PEER_PERIPHERAL_TO_USB;
if (d->func_type == USB_FUNC_ECM) {
- d->ipa_params.notify = ecm_qc_get_ipa_rx_cb();
+ d->ipa_params.notify = ecm_qc_get_ipa_tx_cb();
d->ipa_params.priv = ecm_qc_get_ipa_priv();
}
ret = usb_bam_connect_ipa(&d->ipa_params);
diff --git a/drivers/usb/gadget/u_ether.c b/drivers/usb/gadget/u_ether.c
index 2d974ab..d4c21dd 100644
--- a/drivers/usb/gadget/u_ether.c
+++ b/drivers/usb/gadget/u_ether.c
@@ -70,6 +70,7 @@
struct sk_buff_head rx_frames;
unsigned header_len;
+ unsigned ul_max_pkts_per_xfer;
struct sk_buff *(*wrap)(struct gether *, struct sk_buff *skb);
int (*unwrap)(struct gether *,
struct sk_buff *skb,
@@ -233,9 +234,13 @@
size += out->maxpacket - 1;
size -= size % out->maxpacket;
+ if (dev->ul_max_pkts_per_xfer)
+ size *= dev->ul_max_pkts_per_xfer;
+
if (dev->port_usb->is_fixed)
size = max_t(size_t, size, dev->port_usb->fixed_out_len);
+ pr_debug("%s: size: %d", __func__, size);
skb = alloc_skb(size + NET_IP_ALIGN, gfp_flags);
if (skb == NULL) {
DBG(dev, "no rx skb\n");
@@ -1076,6 +1081,7 @@
dev->header_len = link->header_len;
dev->unwrap = link->unwrap;
dev->wrap = link->wrap;
+ dev->ul_max_pkts_per_xfer = link->ul_max_pkts_per_xfer;
spin_lock(&dev->lock);
dev->tx_skb_hold_count = 0;
diff --git a/drivers/usb/gadget/u_ether.h b/drivers/usb/gadget/u_ether.h
index faa9a3b..7040ab0 100644
--- a/drivers/usb/gadget/u_ether.h
+++ b/drivers/usb/gadget/u_ether.h
@@ -53,6 +53,8 @@
bool is_fixed;
u32 fixed_out_len;
u32 fixed_in_len;
+
+ unsigned ul_max_pkts_per_xfer;
/* Max number of SKB packets to be used to create Multi Packet RNDIS */
#define TX_SKB_HOLD_THRESHOLD 3
bool multi_pkt_xfer;
diff --git a/drivers/usb/gadget/u_rmnet.h b/drivers/usb/gadget/u_rmnet.h
index a9cca50..06471a4 100644
--- a/drivers/usb/gadget/u_rmnet.h
+++ b/drivers/usb/gadget/u_rmnet.h
@@ -46,6 +46,13 @@
void (*connect)(struct grmnet *g);
};
+enum ctrl_client {
+ FRMNET_CTRL_CLIENT,
+ GPS_CTRL_CLIENT,
+
+ NR_CTRL_CLIENTS
+};
+
int gbam_setup(unsigned int no_bam_port, unsigned int no_bam2bam_port);
int gbam_connect(struct grmnet *gr, u8 port_num,
enum transport_type trans, u8 src_connection_idx,
@@ -56,7 +63,8 @@
void gbam_resume(struct grmnet *gr, u8 port_num, enum transport_type trans);
int gsmd_ctrl_connect(struct grmnet *gr, int port_num);
void gsmd_ctrl_disconnect(struct grmnet *gr, u8 port_num);
-int gsmd_ctrl_setup(unsigned int count);
+int gsmd_ctrl_setup(enum ctrl_client client_num, unsigned int count,
+ u8 *first_port_idx);
int gqti_ctrl_connect(struct grmnet *gr);
void gqti_ctrl_disconnect(struct grmnet *gr);
diff --git a/drivers/usb/gadget/u_rmnet_ctrl_qti.c b/drivers/usb/gadget/u_rmnet_ctrl_qti.c
index e92978f..182cd40 100644
--- a/drivers/usb/gadget/u_rmnet_ctrl_qti.c
+++ b/drivers/usb/gadget/u_rmnet_ctrl_qti.c
@@ -259,20 +259,6 @@
return -EBUSY;
}
- /* block until online */
- while (!(atomic_read(&port->connected))) {
- pr_debug("Not connected. Wait.\n");
- ret = wait_event_interruptible(port->read_wq,
- atomic_read(&port->connected));
- if (ret < 0) {
- rmnet_ctrl_unlock(&port->read_excl);
- if (ret == -ERESTARTSYS)
- return -ERESTARTSYS;
- else
- return -EINTR;
- }
- }
-
/* block until a new packet is available */
do {
spin_lock_irqsave(&port->lock, flags);
diff --git a/drivers/usb/gadget/u_rmnet_ctrl_smd.c b/drivers/usb/gadget/u_rmnet_ctrl_smd.c
index 161634e..caea4ef 100644
--- a/drivers/usb/gadget/u_rmnet_ctrl_smd.c
+++ b/drivers/usb/gadget/u_rmnet_ctrl_smd.c
@@ -24,11 +24,16 @@
#include "u_rmnet.h"
-#define NR_CTRL_SMD_PORTS 3
-static int n_rmnet_ctrl_ports;
-static char *rmnet_ctrl_names[] = {"DATA40_CNTL", "DATA39_CNTL", "DATA38_CNTL"};
+#define MAX_CTRL_PER_CLIENT 3
+#define MAX_CTRL_PORT (MAX_CTRL_PER_CLIENT * NR_CTRL_CLIENTS)
+static char *ctrl_names[NR_CTRL_CLIENTS][MAX_CTRL_PER_CLIENT] = {
+ {"DATA40_CNTL", "DATA39_CNTL", "DATA38_CNTL"},
+ {"DATA39_CNTL"},
+};
static struct workqueue_struct *grmnet_ctrl_wq;
+static u8 online_clients;
+
#define SMD_CH_MAX_LEN 20
#define CH_OPENED 0
#define CH_READY 1
@@ -68,7 +73,7 @@
static struct rmnet_ctrl_ports {
struct rmnet_ctrl_port *port;
struct platform_driver pdrv;
-} ctrl_smd_ports[NR_CTRL_SMD_PORTS];
+} ctrl_smd_ports[MAX_CTRL_PORT];
/*---------------misc functions---------------- */
@@ -172,6 +177,15 @@
}
spin_unlock_irqrestore(&port->port_lock, flags);
}
+static bool is_legal_port_num(u8 portno)
+{
+ if (portno >= MAX_CTRL_PORT)
+ return false;
+ if (ctrl_smd_ports[portno].port == NULL)
+ return false;
+
+ return true;
+}
static int
grmnet_ctrl_smd_send_cpkt_tomodem(u8 portno,
@@ -182,7 +196,7 @@
struct smd_ch_info *c;
struct rmnet_ctrl_pkt *cpkt;
- if (portno >= n_rmnet_ctrl_ports) {
+ if (!is_legal_port_num(portno)) {
pr_err("%s: Invalid portno#%d\n", __func__, portno);
return -ENODEV;
}
@@ -225,7 +239,7 @@
int clear_bits = 0;
int temp = 0;
- if (portno >= n_rmnet_ctrl_ports) {
+ if (!is_legal_port_num(portno)) {
pr_err("%s: Invalid portno#%d\n", __func__, portno);
return;
}
@@ -375,8 +389,8 @@
pr_debug("%s: grmnet:%p port#%d\n", __func__, gr, port_num);
- if (port_num >= n_rmnet_ctrl_ports) {
- pr_err("%s: invalid portno#%d\n", __func__, port_num);
+ if (!is_legal_port_num(port_num)) {
+ pr_err("%s: Invalid port_num#%d\n", __func__, port_num);
return -ENODEV;
}
@@ -431,8 +445,8 @@
pr_debug("%s: grmnet:%p port#%d\n", __func__, gr, port_num);
- if (port_num >= n_rmnet_ctrl_ports) {
- pr_err("%s: invalid portno#%d\n", __func__, port_num);
+ if (!is_legal_port_num(port_num)) {
+ pr_err("%s: Invalid port_num#%d\n", __func__, port_num);
return;
}
@@ -478,7 +492,10 @@
pr_debug("%s: name:%s\n", __func__, pdev->name);
- for (i = 0; i < n_rmnet_ctrl_ports; i++) {
+ for (i = 0; i < MAX_CTRL_PORT; i++) {
+ if (!ctrl_smd_ports[i].port)
+ continue;
+
port = ctrl_smd_ports[i].port;
c = &port->ctrl_ch;
@@ -508,7 +525,10 @@
pr_debug("%s: name:%s\n", __func__, pdev->name);
- for (i = 0; i < n_rmnet_ctrl_ports; i++) {
+ for (i = 0; i < MAX_CTRL_PORT; i++) {
+ if (!ctrl_smd_ports[i].port)
+ continue;
+
port = ctrl_smd_ports[i].port;
c = &port->ctrl_ch;
@@ -555,7 +575,8 @@
INIT_DELAYED_WORK(&port->disconnect_w, grmnet_ctrl_smd_disconnect_w);
c = &port->ctrl_ch;
- c->name = rmnet_ctrl_names[portno];
+ c->name = ctrl_names[portno / MAX_CTRL_PER_CLIENT]
+ [portno % MAX_CTRL_PER_CLIENT];
c->port = port;
init_waitqueue_head(&c->wait);
INIT_LIST_HEAD(&c->tx_q);
@@ -575,44 +596,54 @@
return 0;
}
-int gsmd_ctrl_setup(unsigned int count)
+int gsmd_ctrl_setup(enum ctrl_client client_num, unsigned int count,
+ u8 *first_port_idx)
{
- int i;
+ int i, start_port, allocated_ports;
int ret;
pr_debug("%s: requested ports:%d\n", __func__, count);
- if (!count || count > NR_CTRL_SMD_PORTS) {
+ if (!count || count > MAX_CTRL_PER_CLIENT) {
pr_err("%s: Invalid num of ports count:%d\n",
__func__, count);
return -EINVAL;
}
- grmnet_ctrl_wq = alloc_workqueue("gsmd_ctrl",
- WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
- if (!grmnet_ctrl_wq) {
- pr_err("%s: Unable to create workqueue grmnet_ctrl\n",
- __func__);
- return -ENOMEM;
+ if (!online_clients) {
+ grmnet_ctrl_wq = alloc_workqueue("gsmd_ctrl",
+ WQ_UNBOUND | WQ_MEM_RECLAIM, 1);
+ if (!grmnet_ctrl_wq) {
+ pr_err("%s: Unable to create workqueue grmnet_ctrl\n",
+ __func__);
+ return -ENOMEM;
+ }
}
+ online_clients++;
- for (i = 0; i < count; i++) {
- n_rmnet_ctrl_ports++;
+ start_port = MAX_CTRL_PER_CLIENT * client_num;
+ allocated_ports = 0;
+ for (i = start_port; i < count + start_port; i++) {
+ allocated_ports++;
ret = grmnet_ctrl_smd_port_alloc(i);
if (ret) {
pr_err("%s: Unable to alloc port:%d\n", __func__, i);
- n_rmnet_ctrl_ports--;
+ allocated_ports--;
goto free_ctrl_smd_ports;
}
}
-
+ if (first_port_idx)
+ *first_port_idx = start_port;
return 0;
free_ctrl_smd_ports:
- for (i = 0; i < n_rmnet_ctrl_ports; i++)
- grmnet_ctrl_smd_port_free(i);
+ for (i = 0; i < allocated_ports; i++)
+ grmnet_ctrl_smd_port_free(start_port + i);
- destroy_workqueue(grmnet_ctrl_wq);
+
+ online_clients--;
+ if (!online_clients)
+ destroy_workqueue(grmnet_ctrl_wq);
return ret;
}
@@ -634,10 +665,11 @@
if (!buf)
return -ENOMEM;
- for (i = 0; i < n_rmnet_ctrl_ports; i++) {
- port = ctrl_smd_ports[i].port;
- if (!port)
+ for (i = 0; i < MAX_CTRL_PORT; i++) {
+ if (!ctrl_smd_ports[i].port)
continue;
+ port = ctrl_smd_ports[i].port;
+
spin_lock_irqsave(&port->port_lock, flags);
c = &port->ctrl_ch;
@@ -677,10 +709,10 @@
int i;
unsigned long flags;
- for (i = 0; i < n_rmnet_ctrl_ports; i++) {
- port = ctrl_smd_ports[i].port;
- if (!port)
+ for (i = 0; i < MAX_CTRL_PORT; i++) {
+ if (!ctrl_smd_ports[i].port)
continue;
+ port = ctrl_smd_ports[i].port;
spin_lock_irqsave(&port->port_lock, flags);
@@ -727,6 +759,7 @@
static int __init gsmd_ctrl_init(void)
{
gsmd_ctrl_debugfs_init();
+ online_clients = 0;
return 0;
}
diff --git a/drivers/usb/host/ehci-msm-hsic.c b/drivers/usb/host/ehci-msm-hsic.c
index 4a085aa..ede8bdb 100644
--- a/drivers/usb/host/ehci-msm-hsic.c
+++ b/drivers/usb/host/ehci-msm-hsic.c
@@ -78,6 +78,7 @@
struct clk *alt_core_clk;
struct clk *phy_clk;
struct clk *cal_clk;
+ struct clk *inactivity_clk;
struct regulator *hsic_vddcx;
struct regulator *hsic_gdsc;
atomic_t async_int;
@@ -575,6 +576,8 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
ret = clk_reset(mehci->core_clk, CLK_RESET_ASSERT);
if (ret) {
@@ -596,6 +599,8 @@
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
}
}
@@ -794,6 +799,8 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
none_vol = vdd_val[mehci->vdd_type][VDD_NONE];
max_vol = vdd_val[mehci->vdd_type][VDD_MAX];
@@ -876,6 +883,8 @@
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
temp = readl_relaxed(USB_USBCMD);
temp &= ~ASYNC_INTR_CTRL;
@@ -1507,10 +1516,21 @@
goto put_cal_clk;
}
+ /*
+	 * The inactivity_clk is used only by the HSIC BAM inactivity timer.
+	 * This clock is optional and is defined in the clock lookup only
+	 * for targets that need the inactivity timer feature.
+ */
+ mehci->inactivity_clk = clk_get(mehci->dev, "inactivity_clk");
+ if (IS_ERR(mehci->inactivity_clk))
+ dev_dbg(mehci->dev, "failed to get inactivity_clk\n");
+
clk_prepare_enable(mehci->core_clk);
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
return 0;
@@ -1520,7 +1540,11 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
}
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_put(mehci->inactivity_clk);
clk_put(mehci->ahb_clk);
put_cal_clk:
clk_put(mehci->cal_clk);
@@ -1827,6 +1851,8 @@
"qcom,pool-64-bit-align");
pdata->enable_hbm = of_property_read_bool(node,
"qcom,enable-hbm");
+	pdata->disable_park_mode = of_property_read_bool(node,
+				"qcom,disable-park-mode");
return pdata;
}
@@ -2064,7 +2090,7 @@
pm_runtime_put_sync(pdev->dev.parent);
if (mehci->enable_hbm)
- hbm_init(hcd);
+ hbm_init(hcd, pdata->disable_park_mode);
return 0;
diff --git a/drivers/usb/host/hbm.c b/drivers/usb/host/hbm.c
index 1a0c0aa..d34301d 100644
--- a/drivers/usb/host/hbm.c
+++ b/drivers/usb/host/hbm.c
@@ -44,6 +44,7 @@
struct hbm_msm {
u32 *base;
struct usb_hcd *hcd;
+ bool disable_park_mode;
};
static struct hbm_msm *hbm_ctx;
@@ -173,8 +174,8 @@
USB_OTG_HS_HBM_PIPE_PRODUCER, 1 << pipe_num,
(is_consumer ? 0 : 1));
- /* disable park mode as default */
- set_disable_park_mode(pipe_num, true);
+	/* configure park mode based on platform data */
+ set_disable_park_mode(pipe_num, hbm_ctx->disable_park_mode);
/* enable zlt as default*/
set_disable_zlt(pipe_num, false);
@@ -186,7 +187,7 @@
return 0;
}
-void hbm_init(struct usb_hcd *hcd)
+void hbm_init(struct usb_hcd *hcd, bool disable_park_mode)
{
pr_info("%s\n", __func__);
@@ -198,6 +199,7 @@
hbm_ctx->base = hcd->regs;
hbm_ctx->hcd = hcd;
+ hbm_ctx->disable_park_mode = disable_park_mode;
/* reset hbm */
hbm_reset(true);
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
index 79dcf2f..46b5ce4 100644
--- a/drivers/usb/host/xhci-plat.c
+++ b/drivers/usb/host/xhci-plat.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/usb/otg.h>
+#include <linux/usb/msm_hsusb.h>
#include "xhci.h"
@@ -140,6 +141,10 @@
goto release_mem_region;
}
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (ret)
goto unmap_registers;
@@ -166,7 +171,8 @@
goto put_usb3_hcd;
phy = usb_get_transceiver();
- if (phy && phy->otg) {
+ /* Register with OTG if present, ignore USB2 OTG using other PHY */
+ if (phy && phy->otg && !(phy->flags & ENABLE_SECONDARY_PHY)) {
dev_dbg(&pdev->dev, "%s otg support available\n", __func__);
ret = otg_set_host(phy->otg, &hcd->self);
if (ret) {
@@ -175,15 +181,12 @@
usb_put_transceiver(phy);
goto put_usb3_hcd;
}
- pm_runtime_set_active(&pdev->dev);
- pm_runtime_enable(&pdev->dev);
} else {
pm_runtime_no_callbacks(&pdev->dev);
- pm_runtime_set_active(&pdev->dev);
- pm_runtime_enable(&pdev->dev);
- pm_runtime_get(&pdev->dev);
}
+ pm_runtime_put(&pdev->dev);
+
return 0;
put_usb3_hcd:
@@ -222,9 +225,6 @@
if (phy && phy->otg) {
otg_set_host(phy->otg, NULL);
usb_put_transceiver(phy);
- } else {
- pm_runtime_put(&dev->dev);
- pm_runtime_disable(&dev->dev);
}
return 0;
diff --git a/drivers/usb/otg/msm_otg.c b/drivers/usb/otg/msm_otg.c
index 4865b03..d4d7ee9 100644
--- a/drivers/usb/otg/msm_otg.c
+++ b/drivers/usb/otg/msm_otg.c
@@ -869,6 +869,9 @@
if (atomic_read(&motg->in_lpm))
return 0;
+ if (motg->pdata->delay_lpm_hndshk_on_disconnect && !msm_bam_lpm_ok())
+ return 0;
+
disable_irq(motg->irq);
host_bus_suspend = !test_bit(MHL, &motg->inputs) && phy->otg->host &&
!test_bit(ID, &motg->inputs);
@@ -3802,6 +3805,8 @@
"qcom,hsusb-otg-clk-always-on-workaround");
pdata->delay_lpm_on_disconnect = of_property_read_bool(node,
"qcom,hsusb-otg-delay-lpm");
+ pdata->delay_lpm_hndshk_on_disconnect = of_property_read_bool(node,
+ "qcom,hsusb-otg-delay-lpm-hndshk-on-disconnect");
pdata->dp_manual_pullup = of_property_read_bool(node,
"qcom,dp-manual-pullup");
pdata->enable_sec_phy = of_property_read_bool(node,
diff --git a/drivers/video/msm/mdp4.h b/drivers/video/msm/mdp4.h
index a3d8d7e..b57d0a4 100644
--- a/drivers/video/msm/mdp4.h
+++ b/drivers/video/msm/mdp4.h
@@ -945,6 +945,7 @@
void mdp4_writeback_dma_stop(struct msm_fb_data_type *mfd);
int mdp4_writeback_init(struct fb_info *info);
int mdp4_writeback_terminate(struct fb_info *info);
+int mdp4_writeback_set_mirroring_hint(struct fb_info *info, int hint);
uint32_t mdp_block2base(uint32_t block);
int mdp_hist_lut_config(struct mdp_hist_lut_data *data);
diff --git a/drivers/video/msm/mdp4_overlay.c b/drivers/video/msm/mdp4_overlay.c
index e8d7489..afa7b97 100644
--- a/drivers/video/msm/mdp4_overlay.c
+++ b/drivers/video/msm/mdp4_overlay.c
@@ -245,7 +245,7 @@
return PTR_ERR(*srcp_ihdl);
}
pr_debug("%s(): ion_hdl %p, ion_buf %d\n", __func__, *srcp_ihdl,
- ion_share_dma_buf(display_iclient, *srcp_ihdl));
+ mem_id);
pr_debug("mixer %u, pipe %u, plane %u\n", pipe->mixer_num,
pipe->pipe_ndx, plane);
if (ion_map_iommu(display_iclient, *srcp_ihdl,
diff --git a/drivers/video/msm/mdp4_overlay_writeback.c b/drivers/video/msm/mdp4_overlay_writeback.c
index 7caf0ad..62e89d3 100644
--- a/drivers/video/msm/mdp4_overlay_writeback.c
+++ b/drivers/video/msm/mdp4_overlay_writeback.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -807,3 +807,23 @@
mutex_unlock(&mfd->writeback_mutex);
wake_up(&mfd->wait_q);
}
+
+int mdp4_writeback_set_mirroring_hint(struct fb_info *info, int hint)
+{
+ struct msm_fb_data_type *mfd = (struct msm_fb_data_type *)info->par;
+
+ if (mfd->panel.type != WRITEBACK_PANEL)
+ return -ENOTSUPP;
+
+ switch (hint) {
+ case MDP_WRITEBACK_MIRROR_ON:
+ case MDP_WRITEBACK_MIRROR_PAUSE:
+ case MDP_WRITEBACK_MIRROR_RESUME:
+ case MDP_WRITEBACK_MIRROR_OFF:
+ pr_info("wfd state switched to %d\n", hint);
+ switch_set_state(&mfd->writeback_sdev, hint);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
diff --git a/drivers/video/msm/mdp4_wfd_writeback.c b/drivers/video/msm/mdp4_wfd_writeback.c
index d96fc7d..ba6c78b 100644
--- a/drivers/video/msm/mdp4_wfd_writeback.c
+++ b/drivers/video/msm/mdp4_wfd_writeback.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -75,6 +75,13 @@
platform_set_drvdata(mdp_dev, mfd);
+ mfd->writeback_sdev.name = "wfd";
+ rc = switch_dev_register(&mfd->writeback_sdev);
+ if (rc) {
+ pr_err("Failed to setup switch dev for writeback panel");
+ return rc;
+ }
+
rc = platform_device_add(mdp_dev);
if (rc) {
WRITEBACK_MSG_ERR("failed to add device");
@@ -84,8 +91,16 @@
return rc;
}
+static int writeback_remove(struct platform_device *pdev)
+{
+ struct msm_fb_data_type *mfd = platform_get_drvdata(pdev);
+ switch_dev_unregister(&mfd->writeback_sdev);
+ return 0;
+}
+
static struct platform_driver writeback_driver = {
.probe = writeback_probe,
+ .remove = writeback_remove,
.driver = {
.name = "writeback",
},
diff --git a/drivers/video/msm/mdss/dsi_host_v2.c b/drivers/video/msm/mdss/dsi_host_v2.c
index 453cbaa..887dde7 100644
--- a/drivers/video/msm/mdss/dsi_host_v2.c
+++ b/drivers/video/msm/mdss/dsi_host_v2.c
@@ -29,6 +29,7 @@
#define DSI_POLL_SLEEP_US 1000
#define DSI_POLL_TIMEOUT_US 16000
#define DSI_ESC_CLK_RATE 19200000
+#define DSI_DMA_CMD_TIMEOUT_MS 200
struct dsi_host_v2_private {
struct completion dma_comp;
@@ -426,18 +427,15 @@
dsi_ctrl = MIPI_INP(ctrl_base + DSI_CTRL);
/*If Video enabled, Keep Video and Cmd mode ON */
- if (dsi_ctrl & 0x02)
- dsi_ctrl &= ~0x05;
- else
- dsi_ctrl &= ~0x07;
+
+
+ dsi_ctrl &= ~0x06;
if (mode == DSI_VIDEO_MODE) {
- dsi_ctrl |= 0x03;
+ dsi_ctrl |= 0x02;
intr_ctrl = DSI_INTR_CMD_DMA_DONE_MASK;
} else { /* command mode */
- dsi_ctrl |= 0x05;
- if (pdata->panel_info.type == MIPI_VIDEO_PANEL)
- dsi_ctrl |= 0x02;
+ dsi_ctrl |= 0x04;
intr_ctrl = DSI_INTR_CMD_DMA_DONE_MASK | DSI_INTR_ERROR_MASK |
DSI_INTR_CMD_MDP_DONE_MASK;
@@ -480,7 +478,7 @@
int msm_dsi_cmd_dma_tx(struct dsi_buf *tp)
{
- int len;
+ int len, rc;
unsigned long size, addr;
unsigned char *ctrl_base = dsi_host_private->dsi_base;
@@ -505,12 +503,17 @@
MIPI_OUTP(ctrl_base + DSI_CMD_MODE_DMA_SW_TRIGGER, 0x01);
wmb();
- wait_for_completion_interruptible(&dsi_host_private->dma_comp);
+ rc = wait_for_completion_timeout(&dsi_host_private->dma_comp,
+ msecs_to_jiffies(DSI_DMA_CMD_TIMEOUT_MS));
+ if (rc == 0) {
+		pr_err("DSI command transaction timed out\n");
+ rc = -ETIME;
+ }
dma_unmap_single(&dsi_host_private->dis_dev, tp->dmap, size,
DMA_TO_DEVICE);
tp->dmap = 0;
- return 0;
+ return rc;
}
int msm_dsi_cmd_dma_rx(struct dsi_buf *rp, int rlen)
@@ -723,6 +726,7 @@
static int msm_dsi_cal_clk_rate(struct mdss_panel_data *pdata,
u32 *bitclk_rate,
+ u32 *dsiclk_rate,
u32 *byteclk_rate,
u32 *pclk_rate)
{
@@ -761,10 +765,11 @@
*bitclk_rate /= lanes;
*byteclk_rate = *bitclk_rate / 8;
+ *dsiclk_rate = *byteclk_rate * lanes;
*pclk_rate = *byteclk_rate * lanes * 8 / pdata->panel_info.bpp;
- pr_debug("bitclk=%u, byteclk=%u, pck_=%u\n",
- *bitclk_rate, *byteclk_rate, *pclk_rate);
+ pr_debug("dsiclk_rate=%u, byteclk=%u, pck_=%u\n",
+ *dsiclk_rate, *byteclk_rate, *pclk_rate);
return 0;
}
@@ -777,7 +782,7 @@
u32 hbp, hfp, vbp, vfp, hspw, vspw, width, height;
u32 ystride, bpp, data;
u32 dummy_xres, dummy_yres;
- u32 bitclk_rate = 0, byteclk_rate = 0, pclk_rate = 0;
+ u32 bitclk_rate = 0, byteclk_rate = 0, pclk_rate = 0, dsiclk_rate = 0;
unsigned char *ctrl_base = dsi_host_private->dsi_base;
pr_debug("msm_dsi_on\n");
@@ -794,8 +799,10 @@
msm_dsi_phy_sw_reset(dsi_host_private->dsi_base);
msm_dsi_phy_init(dsi_host_private->dsi_base, pdata);
- msm_dsi_cal_clk_rate(pdata, &bitclk_rate, &byteclk_rate, &pclk_rate);
- msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, byteclk_rate, pclk_rate);
+ msm_dsi_cal_clk_rate(pdata, &bitclk_rate, &dsiclk_rate,
+ &byteclk_rate, &pclk_rate);
+ msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, dsiclk_rate,
+ byteclk_rate, pclk_rate);
msm_dsi_prepare_clocks();
msm_dsi_clk_enable();
@@ -879,12 +886,11 @@
int ret = 0;
pr_debug("msm_dsi_off\n");
- msm_dsi_clk_set_rate(0, 0, 0);
+ msm_dsi_controller_cfg(0);
+ msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, 0, 0, 0);
msm_dsi_clk_disable();
msm_dsi_unprepare_clocks();
- /* disable DSI controller */
- msm_dsi_controller_cfg(0);
msm_dsi_ahb_ctrl(0);
ret = msm_dsi_regulator_disable();
diff --git a/drivers/video/msm/mdss/dsi_io_v2.c b/drivers/video/msm/mdss/dsi_io_v2.c
index 0486c4c..93f2c76 100644
--- a/drivers/video/msm/mdss/dsi_io_v2.c
+++ b/drivers/video/msm/mdss/dsi_io_v2.c
@@ -29,6 +29,7 @@
struct clk *dsi_esc_clk;
struct clk *dsi_pixel_clk;
struct clk *dsi_ahb_clk;
+ struct clk *dsi_clk;
int msm_dsi_clk_on;
int msm_dsi_ahb_clk_on;
};
@@ -97,6 +98,13 @@
{
int rc = 0;
+ dsi_io_private->dsi_clk = clk_get(&dev->dev, "dsi_clk");
+ if (IS_ERR(dsi_io_private->dsi_clk)) {
+ pr_err("can't find dsi core_clk\n");
+ rc = PTR_ERR(dsi_io_private->dsi_clk);
+ dsi_io_private->dsi_clk = NULL;
+ return rc;
+ }
dsi_io_private->dsi_byte_clk = clk_get(&dev->dev, "byte_clk");
if (IS_ERR(dsi_io_private->dsi_byte_clk)) {
pr_err("can't find dsi byte_clk\n");
@@ -135,6 +143,10 @@
void msm_dsi_clk_deinit(void)
{
+ if (dsi_io_private->dsi_clk) {
+ clk_put(dsi_io_private->dsi_clk);
+ dsi_io_private->dsi_clk = NULL;
+ }
if (dsi_io_private->dsi_byte_clk) {
clk_put(dsi_io_private->dsi_byte_clk);
dsi_io_private->dsi_byte_clk = NULL;
@@ -156,6 +168,7 @@
int msm_dsi_prepare_clocks(void)
{
+ clk_prepare(dsi_io_private->dsi_clk);
clk_prepare(dsi_io_private->dsi_byte_clk);
clk_prepare(dsi_io_private->dsi_esc_clk);
clk_prepare(dsi_io_private->dsi_pixel_clk);
@@ -164,16 +177,24 @@
int msm_dsi_unprepare_clocks(void)
{
+ clk_unprepare(dsi_io_private->dsi_clk);
clk_unprepare(dsi_io_private->dsi_esc_clk);
clk_unprepare(dsi_io_private->dsi_byte_clk);
clk_unprepare(dsi_io_private->dsi_pixel_clk);
return 0;
}
-int msm_dsi_clk_set_rate(unsigned long esc_rate, unsigned long byte_rate,
+int msm_dsi_clk_set_rate(unsigned long esc_rate,
+ unsigned long dsi_rate,
+ unsigned long byte_rate,
unsigned long pixel_rate)
{
int rc;
+ rc = clk_set_rate(dsi_io_private->dsi_clk, dsi_rate);
+ if (rc) {
+		pr_err("dsi_clk - clk_set_rate failed =%d\n", rc);
+ return rc;
+ }
rc = clk_set_rate(dsi_io_private->dsi_esc_clk, esc_rate);
if (rc) {
@@ -202,6 +223,7 @@
return 0;
}
+ clk_enable(dsi_io_private->dsi_clk);
clk_enable(dsi_io_private->dsi_esc_clk);
clk_enable(dsi_io_private->dsi_byte_clk);
clk_enable(dsi_io_private->dsi_pixel_clk);
@@ -217,6 +239,7 @@
return 0;
}
+ clk_disable(dsi_io_private->dsi_clk);
clk_disable(dsi_io_private->dsi_byte_clk);
clk_disable(dsi_io_private->dsi_esc_clk);
clk_disable(dsi_io_private->dsi_pixel_clk);
@@ -246,7 +269,7 @@
void msm_dsi_regulator_deinit(void)
{
- if (dsi_io_private->vdda_vreg) {
+ if (!IS_ERR(dsi_io_private->vdda_vreg)) {
devm_regulator_put(dsi_io_private->vdda_vreg);
dsi_io_private->vdda_vreg = NULL;
}
diff --git a/drivers/video/msm/mdss/dsi_io_v2.h b/drivers/video/msm/mdss/dsi_io_v2.h
index 25ecd7f..285bf30 100644
--- a/drivers/video/msm/mdss/dsi_io_v2.h
+++ b/drivers/video/msm/mdss/dsi_io_v2.h
@@ -29,7 +29,9 @@
int msm_dsi_unprepare_clocks(void);
-int msm_dsi_clk_set_rate(unsigned long esc_rate, unsigned long byte_rate,
+int msm_dsi_clk_set_rate(unsigned long esc_rate,
+ unsigned long dsi_rate,
+ unsigned long byte_rate,
unsigned long pixel_rate);
int msm_dsi_clk_enable(void);
diff --git a/drivers/video/msm/mdss/dsi_panel_v2.c b/drivers/video/msm/mdss/dsi_panel_v2.c
index 6686de3..e46ea3b 100644
--- a/drivers/video/msm/mdss/dsi_panel_v2.c
+++ b/drivers/video/msm/mdss/dsi_panel_v2.c
@@ -20,6 +20,7 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/leds.h>
+#include <linux/err.h>
#include <linux/regulator/consumer.h>
#include "dsi_v2.h"
@@ -32,6 +33,7 @@
int rst_gpio;
int disp_en_gpio;
+ int video_mode_gpio;
char bl_ctrl;
struct regulator *vddio_vreg;
@@ -40,6 +42,9 @@
struct dsi_panel_cmds_list *on_cmds_list;
struct dsi_panel_cmds_list *off_cmds_list;
struct mdss_dsi_phy_ctrl phy_params;
+
+ char *on_cmds;
+ char *off_cmds;
};
static struct dsi_panel_private *panel_private;
@@ -82,15 +87,54 @@
kfree(panel_private->dsi_panel_tx_buf.start);
kfree(panel_private->dsi_panel_rx_buf.start);
- if (panel_private->vddio_vreg)
+ if (!IS_ERR(panel_private->vddio_vreg))
devm_regulator_put(panel_private->vddio_vreg);
- if (panel_private->vdda_vreg)
- devm_regulator_put(panel_private->vddio_vreg);
+ if (!IS_ERR(panel_private->vdda_vreg))
+ devm_regulator_put(panel_private->vdda_vreg);
+ if (panel_private->on_cmds_list) {
+ kfree(panel_private->on_cmds_list->buf);
+ kfree(panel_private->on_cmds_list);
+ }
+ if (panel_private->off_cmds_list) {
+ kfree(panel_private->off_cmds_list->buf);
+ kfree(panel_private->off_cmds_list);
+ }
+
+ kfree(panel_private->on_cmds);
+ kfree(panel_private->off_cmds);
kfree(panel_private);
panel_private = NULL;
}
+int dsi_panel_power(int enable)
+{
+ int ret;
+ if (enable) {
+ ret = regulator_enable(panel_private->vddio_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator enable vddio fail\n");
+ return ret;
+ }
+ ret = regulator_enable(panel_private->vdda_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator enable vdda fail\n");
+ return ret;
+ }
+ } else {
+ ret = regulator_disable(panel_private->vddio_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator disable vddio fail\n");
+ return ret;
+ }
+ ret = regulator_disable(panel_private->vdda_vreg);
+ if (ret) {
+			pr_err("dsi_panel_power regulator disable vdda fail\n");
+ return ret;
+ }
+ }
+ return 0;
+}
void dsi_panel_reset(struct mdss_panel_data *pdata, int enable)
{
@@ -113,6 +157,8 @@
pr_debug("%s: enable = %d\n", __func__, enable);
if (enable) {
+ dsi_panel_power(1);
+ gpio_request(panel_private->rst_gpio, "panel_reset");
gpio_set_value(panel_private->rst_gpio, 1);
/*
* these delay values are by experiments currently, will need
@@ -123,12 +169,34 @@
udelay(200);
gpio_set_value(panel_private->rst_gpio, 1);
msleep(20);
- if (gpio_is_valid(panel_private->disp_en_gpio))
+ if (gpio_is_valid(panel_private->disp_en_gpio)) {
+ gpio_request(panel_private->disp_en_gpio,
+ "panel_enable");
gpio_set_value(panel_private->disp_en_gpio, 1);
+ }
+ if (gpio_is_valid(panel_private->video_mode_gpio)) {
+ gpio_request(panel_private->video_mode_gpio,
+						"panel_video_mode");
+ if (pdata->panel_info.mipi.mode == DSI_VIDEO_MODE)
+ gpio_set_value(panel_private->video_mode_gpio,
+ 1);
+ else
+ gpio_set_value(panel_private->video_mode_gpio,
+ 0);
+ }
} else {
gpio_set_value(panel_private->rst_gpio, 0);
- if (gpio_is_valid(panel_private->disp_en_gpio))
+ gpio_free(panel_private->rst_gpio);
+
+ if (gpio_is_valid(panel_private->disp_en_gpio)) {
gpio_set_value(panel_private->disp_en_gpio, 0);
+ gpio_free(panel_private->disp_en_gpio);
+ }
+
+ if (gpio_is_valid(panel_private->video_mode_gpio))
+ gpio_free(panel_private->video_mode_gpio);
+
+ dsi_panel_power(0);
}
}
@@ -196,13 +264,42 @@
panel_private->disp_en_gpio = of_get_named_gpio(np,
"qcom,enable-gpio", 0);
panel_private->rst_gpio = of_get_named_gpio(np, "qcom,rst-gpio", 0);
+ panel_private->video_mode_gpio = of_get_named_gpio(np,
+ "qcom,mode-selection-gpio", 0);
return 0;
}
static int dsi_panel_parse_regulator(struct platform_device *pdev)
{
+ int ret;
panel_private->vddio_vreg = devm_regulator_get(&pdev->dev, "vddio");
+ if (IS_ERR(panel_private->vddio_vreg)) {
+ pr_err("%s: could not get vddio vreg, rc=%ld\n",
+ __func__, PTR_ERR(panel_private->vddio_vreg));
+ return PTR_ERR(panel_private->vddio_vreg);
+ }
+ ret = regulator_set_voltage(panel_private->vddio_vreg,
+ 1800000,
+ 1800000);
+ if (ret) {
+ pr_err("%s: set voltage failed on vddio_vreg, rc=%d\n",
+ __func__, ret);
+ return ret;
+ }
panel_private->vdda_vreg = devm_regulator_get(&pdev->dev, "vdda");
+ if (IS_ERR(panel_private->vdda_vreg)) {
+		pr_err("%s: could not get vdda_vreg, rc=%ld\n",
+ __func__, PTR_ERR(panel_private->vdda_vreg));
+ return PTR_ERR(panel_private->vdda_vreg);
+ }
+ ret = regulator_set_voltage(panel_private->vdda_vreg,
+ 2850000,
+ 2850000);
+ if (ret) {
+ pr_err("%s: set voltage failed on vdda_vreg, rc=%d\n",
+ __func__, ret);
+ return ret;
+ }
return 0;
}
@@ -398,180 +495,163 @@
int cmd_plen, data_offset;
const char *data;
const char *on_cmds_state, *off_cmds_state;
- char *on_cmds = NULL, *off_cmds = NULL;
int num_of_on_cmds = 0, num_of_off_cmds = 0;
data = of_get_property(np, "qcom,panel-on-cmds", &len);
if (!data) {
pr_err("%s:%d, Unable to read ON cmds", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- on_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
- if (!on_cmds)
- goto parse_init_cmds_error;
+ panel_private->on_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
+ if (!panel_private->on_cmds)
+ return -ENOMEM;
- memcpy(on_cmds, data, len);
+ memcpy(panel_private->on_cmds, data, len);
data_offset = 0;
cmd_plen = 0;
while ((len - data_offset) >= DT_CMD_HDR) {
data_offset += (DT_CMD_HDR - 1);
- cmd_plen = on_cmds[data_offset++];
+ cmd_plen = panel_private->on_cmds[data_offset++];
data_offset += cmd_plen;
num_of_on_cmds++;
}
if (!num_of_on_cmds) {
pr_err("%s:%d, No ON cmds specified", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- panel_data->dsi_panel_on_cmds =
+ panel_private->on_cmds_list =
kzalloc(sizeof(struct dsi_panel_cmds_list), GFP_KERNEL);
- if (!panel_data->dsi_panel_on_cmds)
- goto parse_init_cmds_error;
+ if (!panel_private->on_cmds_list)
+ return -ENOMEM;
- (panel_data->dsi_panel_on_cmds)->buf =
+ panel_private->on_cmds_list->buf =
kzalloc((num_of_on_cmds * sizeof(struct dsi_cmd_desc)),
GFP_KERNEL);
- if (!(panel_data->dsi_panel_on_cmds)->buf)
- goto parse_init_cmds_error;
+ if (!panel_private->on_cmds_list->buf)
+ return -ENOMEM;
data_offset = 0;
for (i = 0; i < num_of_on_cmds; i++) {
- panel_data->dsi_panel_on_cmds->buf[i].dtype =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].last =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].vc =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].ack =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].wait =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].dlen =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].payload =
- &on_cmds[data_offset];
- data_offset += (panel_data->dsi_panel_on_cmds->buf[i].dlen);
+ panel_private->on_cmds_list->buf[i].dtype =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].last =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].vc =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].ack =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].wait =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].dlen =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].payload =
+ &panel_private->on_cmds[data_offset];
+ data_offset += (panel_private->on_cmds_list->buf[i].dlen);
}
if (data_offset != len) {
pr_err("%s:%d, Incorrect ON command entries",
__func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- (panel_data->dsi_panel_on_cmds)->size = num_of_on_cmds;
+ panel_private->on_cmds_list->size = num_of_on_cmds;
on_cmds_state = of_get_property(pdev->dev.of_node,
"qcom,on-cmds-dsi-state", NULL);
if (!strncmp(on_cmds_state, "DSI_LP_MODE", 11)) {
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_LP_MODE;
} else if (!strncmp(on_cmds_state, "DSI_HS_MODE", 11)) {
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_HS_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_HS_MODE;
} else {
pr_debug("%s: ON cmds state not specified. Set Default\n",
__func__);
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_LP_MODE;
}
- panel_private->on_cmds_list = panel_data->dsi_panel_on_cmds;
+ panel_data->dsi_panel_on_cmds = panel_private->on_cmds_list;
+
data = of_get_property(np, "qcom,panel-off-cmds", &len);
if (!data) {
pr_err("%s:%d, Unable to read OFF cmds", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- off_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
- if (!off_cmds)
- goto parse_init_cmds_error;
+ panel_private->off_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
+ if (!panel_private->off_cmds)
+ return -ENOMEM;
- memcpy(off_cmds, data, len);
+ memcpy(panel_private->off_cmds, data, len);
data_offset = 0;
cmd_plen = 0;
while ((len - data_offset) >= DT_CMD_HDR) {
data_offset += (DT_CMD_HDR - 1);
- cmd_plen = off_cmds[data_offset++];
+ cmd_plen = panel_private->off_cmds[data_offset++];
data_offset += cmd_plen;
num_of_off_cmds++;
}
if (!num_of_off_cmds) {
pr_err("%s:%d, No OFF cmds specified", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -ENOMEM;
}
- panel_data->dsi_panel_off_cmds =
+ panel_private->off_cmds_list =
kzalloc(sizeof(struct dsi_panel_cmds_list), GFP_KERNEL);
- if (!panel_data->dsi_panel_off_cmds)
- goto parse_init_cmds_error;
+ if (!panel_private->off_cmds_list)
+ return -ENOMEM;
- (panel_data->dsi_panel_off_cmds)->buf = kzalloc(num_of_off_cmds
+ panel_private->off_cmds_list->buf = kzalloc(num_of_off_cmds
* sizeof(struct dsi_cmd_desc),
GFP_KERNEL);
- if (!(panel_data->dsi_panel_off_cmds)->buf)
- goto parse_init_cmds_error;
+ if (!panel_private->off_cmds_list->buf)
+ return -ENOMEM;
data_offset = 0;
for (i = 0; i < num_of_off_cmds; i++) {
- panel_data->dsi_panel_off_cmds->buf[i].dtype =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].last =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].vc =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].ack =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].wait =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].dlen =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].payload =
- &off_cmds[data_offset];
- data_offset += (panel_data->dsi_panel_off_cmds->buf[i].dlen);
+ panel_private->off_cmds_list->buf[i].dtype =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].last =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].vc =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].ack =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].wait =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].dlen =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].payload =
+ &panel_private->off_cmds[data_offset];
+ data_offset += (panel_private->off_cmds_list->buf[i].dlen);
}
if (data_offset != len) {
pr_err("%s:%d, Incorrect OFF command entries",
__func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- (panel_data->dsi_panel_off_cmds)->size = num_of_off_cmds;
- off_cmds_state = of_get_property(pdev->dev.of_node,
+ panel_private->off_cmds_list->size = num_of_off_cmds;
+ off_cmds_state = of_get_property(pdev->dev.of_node,
"qcom,off-cmds-dsi-state", NULL);
if (!strncmp(off_cmds_state, "DSI_LP_MODE", 11)) {
- (panel_data->dsi_panel_off_cmds)->ctrl_state =
+ panel_private->off_cmds_list->ctrl_state =
DSI_LP_MODE;
} else if (!strncmp(off_cmds_state, "DSI_HS_MODE", 11)) {
- (panel_data->dsi_panel_off_cmds)->ctrl_state = DSI_HS_MODE;
+ panel_private->off_cmds_list->ctrl_state = DSI_HS_MODE;
} else {
pr_debug("%s: ON cmds state not specified. Set Default\n",
__func__);
- (panel_data->dsi_panel_off_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->off_cmds_list->ctrl_state = DSI_LP_MODE;
}
- panel_private->off_cmds_list = panel_data->dsi_panel_on_cmds;
- kfree(on_cmds);
- kfree(off_cmds);
+ panel_data->dsi_panel_off_cmds = panel_private->off_cmds_list;
return 0;
-parse_init_cmds_error:
- if (panel_data->dsi_panel_on_cmds) {
- kfree((panel_data->dsi_panel_on_cmds)->buf);
- kfree(panel_data->dsi_panel_on_cmds);
- panel_data->dsi_panel_on_cmds = NULL;
- }
- if (panel_data->dsi_panel_off_cmds) {
- kfree((panel_data->dsi_panel_off_cmds)->buf);
- kfree(panel_data->dsi_panel_off_cmds);
- panel_data->dsi_panel_off_cmds = NULL;
- }
-
- kfree(on_cmds);
- kfree(off_cmds);
- return -EINVAL;
}
static int dsi_panel_parse_backlight(struct platform_device *pdev,
@@ -709,6 +789,7 @@
vendor_pdata.on = dsi_panel_on;
vendor_pdata.off = dsi_panel_off;
+ vendor_pdata.reset = dsi_panel_reset;
vendor_pdata.bl_fnc = dsi_panel_bl_ctrl;
rc = dsi_panel_device_register_v2(pdev, &vendor_pdata,
diff --git a/drivers/video/msm/mdss/dsi_v2.c b/drivers/video/msm/mdss/dsi_v2.c
index 5e46bf5..5833796 100644
--- a/drivers/video/msm/mdss/dsi_v2.c
+++ b/drivers/video/msm/mdss/dsi_v2.c
@@ -29,6 +29,14 @@
if (!panel_common_data || !pdata)
return -ENODEV;
+ if (dsi_intf.op_mode_config)
+ dsi_intf.op_mode_config(DSI_CMD_MODE, pdata);
+
+ pr_debug("panel off commands\n");
+ if (panel_common_data->off)
+ panel_common_data->off(pdata);
+
+ pr_debug("turn off dsi controller\n");
if (dsi_intf.off)
rc = dsi_intf.off(pdata);
@@ -37,9 +45,9 @@
return rc;
}
- pr_debug("dsi_off reset\n");
- if (panel_common_data->off)
- panel_common_data->off(pdata);
+ pr_debug("turn off panel power\n");
+ if (panel_common_data->reset)
+ panel_common_data->reset(pdata, 0);
return rc;
}
@@ -53,8 +61,6 @@
if (!panel_common_data || !pdata)
return -ENODEV;
- if (panel_common_data->reset)
- panel_common_data->reset(1);
pr_debug("dsi_on DSI controller ont\n");
if (dsi_intf.on)
@@ -64,6 +70,9 @@
pr_err("mdss_dsi_on DSI failed %d\n", rc);
return rc;
}
+ pr_debug("dsi_on power on panel\n");
+ if (panel_common_data->reset)
+ panel_common_data->reset(pdata, 1);
pr_debug("dsi_on DSI panel ont\n");
if (panel_common_data->on)
diff --git a/drivers/video/msm/mdss/dsi_v2.h b/drivers/video/msm/mdss/dsi_v2.h
index fa868ab..54b772b 100644
--- a/drivers/video/msm/mdss/dsi_v2.h
+++ b/drivers/video/msm/mdss/dsi_v2.h
@@ -181,7 +181,7 @@
struct dsi_panel_cmds_list {
struct dsi_cmd_desc *buf;
- char size;
+ int size;
char ctrl_state;
};
@@ -189,7 +189,7 @@
struct mdss_panel_info panel_info;
int (*on) (struct mdss_panel_data *pdata);
int (*off) (struct mdss_panel_data *pdata);
- void (*reset)(int enable);
+ void (*reset)(struct mdss_panel_data *pdata, int enable);
void (*bl_fnc) (struct mdss_panel_data *pdata, u32 bl_level);
struct dsi_panel_cmds_list *dsi_panel_on_cmds;
struct dsi_panel_cmds_list *dsi_panel_off_cmds;
diff --git a/drivers/video/msm/mdss/mdp3.c b/drivers/video/msm/mdss/mdp3.c
index 890b00b..52243eb 100644
--- a/drivers/video/msm/mdss/mdp3.c
+++ b/drivers/video/msm/mdss/mdp3.c
@@ -47,7 +47,7 @@
#include "mdp3_hwio.h"
#include "mdp3_ctrl.h"
-#define MDP_CORE_HW_VERSION 0x03030304
+#define MDP_CORE_HW_VERSION 0x03040310
struct mdp3_hw_resource *mdp3_res;
#define MDP_BUS_VECTOR_ENTRY(ab_val, ib_val) \
@@ -302,16 +302,7 @@
return ret;
}
-int mdp3_vsync_clk_enable(int enable)
-{
- int ret = 0;
- pr_debug("vsync clk enable=%d\n", enable);
- mutex_lock(&mdp3_res->res_mutex);
- mdp3_clk_update(MDP3_CLK_VSYNC, enable);
- mutex_unlock(&mdp3_res->res_mutex);
- return ret;
-}
int mdp3_clk_set_rate(int clk_type, unsigned long clk_rate)
{
@@ -400,6 +391,9 @@
if (rc)
return rc;
+ rc = mdp3_clk_register("dsi_clk", MDP3_CLK_DSI);
+ if (rc)
+ return rc;
return rc;
}
@@ -409,6 +403,7 @@
clk_put(mdp3_res->clocks[MDP3_CLK_CORE]);
clk_put(mdp3_res->clocks[MDP3_CLK_VSYNC]);
clk_put(mdp3_res->clocks[MDP3_CLK_LCDC]);
+ clk_put(mdp3_res->clocks[MDP3_CLK_DSI]);
}
int mdp3_clk_enable(int enable)
@@ -421,6 +416,7 @@
rc = mdp3_clk_update(MDP3_CLK_AHB, enable);
rc |= mdp3_clk_update(MDP3_CLK_CORE, enable);
rc |= mdp3_clk_update(MDP3_CLK_VSYNC, enable);
+ rc |= mdp3_clk_update(MDP3_CLK_DSI, enable);
mutex_unlock(&mdp3_res->res_mutex);
return rc;
}
@@ -577,12 +573,14 @@
int rc;
rc = mdp3_clk_update(MDP3_CLK_AHB, 1);
+ rc |= mdp3_clk_update(MDP3_CLK_CORE, 1);
if (rc)
return rc;
mdp3_res->mdp_rev = MDP3_REG_READ(MDP3_REG_HW_VERSION);
rc = mdp3_clk_update(MDP3_CLK_AHB, 0);
+ rc |= mdp3_clk_update(MDP3_CLK_CORE, 0);
if (rc)
pr_err("fail to turn off the MDP3_CLK_AHB clk\n");
diff --git a/drivers/video/msm/mdss/mdp3.h b/drivers/video/msm/mdss/mdp3.h
index c853664..5774e5a 100644
--- a/drivers/video/msm/mdss/mdp3.h
+++ b/drivers/video/msm/mdss/mdp3.h
@@ -29,6 +29,7 @@
MDP3_CLK_CORE,
MDP3_CLK_VSYNC,
MDP3_CLK_LCDC,
+ MDP3_CLK_DSI,
MDP3_MAX_CLK
};
diff --git a/drivers/video/msm/mdss/mdp3_ctrl.c b/drivers/video/msm/mdss/mdp3_ctrl.c
index e07c0a4..929e5f8 100644
--- a/drivers/video/msm/mdss/mdp3_ctrl.c
+++ b/drivers/video/msm/mdss/mdp3_ctrl.c
@@ -28,7 +28,7 @@
void vsync_notify_handler(void *arg)
{
- struct mdp3_session_data *session = (struct mdp3_session_data *)session;
+ struct mdp3_session_data *session = (struct mdp3_session_data *)arg;
complete(&session->vsync_comp);
}
@@ -100,18 +100,30 @@
.attrs = vsync_fs_attrs,
};
-static int mdp3_ctrl_res_req_dma(struct msm_fb_data_type *mfd, int status)
+static int mdp3_ctrl_res_req_bus(struct msm_fb_data_type *mfd, int status)
{
int rc = 0;
if (status) {
struct mdss_panel_info *panel_info = mfd->panel_info;
int ab = 0;
int ib = 0;
- unsigned long core_clk = 0;
- int vtotal = 0;
ab = panel_info->xres * panel_info->yres * 4;
ab *= panel_info->mipi.frame_rate;
ib = (ab * 3) / 2;
+ rc = mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, ab, ib);
+ } else {
+ rc = mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, 0, 0);
+ }
+ return rc;
+}
+
+static int mdp3_ctrl_res_req_clk(struct msm_fb_data_type *mfd, int status)
+{
+ int rc = 0;
+ if (status) {
+ struct mdss_panel_info *panel_info = mfd->panel_info;
+ unsigned long core_clk;
+ int vtotal;
vtotal = panel_info->lcdc.v_back_porch +
panel_info->lcdc.v_front_porch +
panel_info->lcdc.v_pulse_width +
@@ -126,10 +138,8 @@
if (rc)
return rc;
- mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, ab, ib);
} else {
rc = mdp3_clk_enable(false);
- rc |= mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, 0, 0);
}
return rc;
}
@@ -285,13 +295,25 @@
}
mutex_lock(&mdp3_session->lock);
if (mdp3_session->status) {
- pr_info("fb%d is on already", mfd->index);
+ pr_debug("fb%d is on already", mfd->index);
goto on_error;
}
- rc = mdp3_ctrl_res_req_dma(mfd, 1);
+ /* request bus bandwidth before DSI DMA traffic */
+ rc = mdp3_ctrl_res_req_bus(mfd, 1);
+ if (rc)
+ pr_err("fail to request bus resource\n");
+
+ panel = mdp3_session->panel;
+ if (panel->event_handler)
+ rc = panel->event_handler(panel, MDSS_EVENT_PANEL_ON, NULL);
if (rc) {
- pr_err("resource request for dma on failed\n");
+ pr_err("fail to turn on the panel\n");
+ goto on_error;
+ }
+ rc = mdp3_ctrl_res_req_clk(mfd, 1);
+ if (rc) {
+ pr_err("fail to request mdp clk resource\n");
goto on_error;
}
@@ -304,16 +326,9 @@
rc = mdp3_ctrl_intf_init(mfd, mdp3_session->intf);
if (rc) {
pr_err("display interface init failed\n");
- goto on_error;
- }
- panel = mdp3_session->panel;
- if (panel->event_handler)
- rc = panel->event_handler(panel, MDSS_EVENT_PANEL_ON, NULL);
- if (rc) {
- pr_err("fail to turn on the panel\n");
goto on_error;
}
@@ -348,26 +363,31 @@
mutex_lock(&mdp3_session->lock);
if (!mdp3_session->status) {
- pr_info("fb%d is off already", mfd->index);
+ pr_debug("fb%d is off already", mfd->index);
goto off_error;
}
- panel = mdp3_session->panel;
- if (panel->event_handler)
- rc = panel->event_handler(panel, MDSS_EVENT_PANEL_OFF, NULL);
-
- if (rc)
- pr_err("fail to turn off the panel\n");
+ pr_debug("mdp3_ctrl_off stop mdp3 dma engine\n");
rc = mdp3_session->dma->stop(mdp3_session->dma, mdp3_session->intf);
if (rc)
pr_err("fail to stop the MDP3 dma\n");
- rc = mdp3_ctrl_res_req_dma(mfd, 0);
+ pr_debug("mdp3_ctrl_off stop dsi panel and controller\n");
+ panel = mdp3_session->panel;
+ if (panel->event_handler)
+ rc = panel->event_handler(panel, MDSS_EVENT_PANEL_OFF, NULL);
if (rc)
- pr_err("resource release for dma on failed\n");
+ pr_err("fail to turn off the panel\n");
+ pr_debug("mdp3_ctrl_off release bus and clock\n");
+ rc = mdp3_ctrl_res_req_bus(mfd, 0);
+ if (rc)
+ pr_err("mdp bus resource release failed\n");
+ rc = mdp3_ctrl_res_req_clk(mfd, 0);
+ if (rc)
+ pr_err("mdp clock resource release failed\n");
off_error:
mdp3_session->status = 0;
@@ -414,11 +434,32 @@
mutex_unlock(&mdp3_session->lock);
}
+static int mdp3_get_metadata(struct msm_fb_data_type *mfd,
+ struct msmfb_metadata *metadata)
+{
+ int ret = 0;
+ switch (metadata->op) {
+ case metadata_op_frame_rate:
+ metadata->data.panel_frame_rate =
+ mfd->panel_info->mipi.frame_rate;
+ break;
+ case metadata_op_get_caps:
+ metadata->data.caps.mdp_rev = 304;
+ break;
+ default:
+ pr_warn("Unsupported request to MDP META IOCTL.\n");
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
static int mdp3_ctrl_ioctl_handler(struct msm_fb_data_type *mfd,
u32 cmd, void __user *argp)
{
int rc = -EINVAL;
struct mdp3_session_data *mdp3_session;
+ struct msmfb_metadata metadata;
int val;
pr_debug("mdp3_ctrl_ioctl_handler\n");
@@ -444,6 +485,14 @@
rc = -EFAULT;
}
break;
+ case MSMFB_METADATA_GET:
+ rc = copy_from_user(&metadata, argp, sizeof(metadata));
+ if (rc)
+ return rc;
+ rc = mdp3_get_metadata(mfd, &metadata);
+ if (!rc)
+ rc = copy_to_user(argp, &metadata, sizeof(metadata));
+ break;
default:
break;
}
diff --git a/drivers/video/msm/mdss/mdss.h b/drivers/video/msm/mdss/mdss.h
index 763c7f6..3bb4bdc 100644
--- a/drivers/video/msm/mdss/mdss.h
+++ b/drivers/video/msm/mdss/mdss.h
@@ -62,8 +62,6 @@
struct regulator *fs;
u32 max_mdp_clk_rate;
- struct workqueue_struct *clk_ctrl_wq;
- struct work_struct clk_ctrl_worker;
struct platform_device *pdev;
char __iomem *mdp_base;
size_t mdp_reg_size;
@@ -73,12 +71,13 @@
u32 irq_mask;
u32 irq_ena;
u32 irq_buzy;
+ u32 has_bwc;
+ u32 has_decimation;
u32 mdp_irq_mask;
u32 mdp_hist_irq_mask;
int suspend_fs_ena;
- atomic_t clk_ref;
u8 clk_ena;
u8 fs_ena;
u8 vsync_ena;
@@ -111,6 +110,10 @@
void *video_intf;
u32 nintf;
+ struct mdss_ad_info *ad_cfgs;
+ u32 nad_cfgs;
+ struct workqueue_struct *ad_calc_wq;
+
struct ion_client *iclient;
int iommu_attached;
struct mdss_iommu_map_type *iommu_map;
diff --git a/drivers/video/msm/mdss/mdss_dsi.c b/drivers/video/msm/mdss/mdss_dsi.c
index 8bf8c95..5d56df4 100644
--- a/drivers/video/msm/mdss/mdss_dsi.c
+++ b/drivers/video/msm/mdss/mdss_dsi.c
@@ -326,19 +326,6 @@
snprintf(mp->vreg_config[i].vreg_name,
ARRAY_SIZE((mp->vreg_config[i].vreg_name)), "%s", st);
- /* vreg-type */
- rc = of_property_read_string_index(of_node, "qcom,supply-type",
- i, &st);
- if (rc) {
- pr_err("%s: error reading vreg type. rc=%d\n",
- __func__, rc);
- goto error;
- }
- if (!strncmp(st, "regulator", 9))
- mp->vreg_config[i].type = 0;
- else if (!strncmp(st, "switch", 6))
- mp->vreg_config[i].type = 1;
-
/* vreg-min-voltage */
memset(val_array, 0, sizeof(u32) * dt_vreg_total);
rc = of_property_read_u32_array(of_node,
@@ -373,14 +360,13 @@
__func__, rc);
goto error;
}
- mp->vreg_config[i].optimum_voltage = val_array[i];
+ mp->vreg_config[i].peak_current = val_array[i];
- pr_debug("%s: %s type=%d, min=%d, max=%d, op=%d\n",
- __func__, mp->vreg_config[i].vreg_name,
- mp->vreg_config[i].type,
+ pr_debug("%s: %s min=%d, max=%d, pc=%d\n", __func__,
+ mp->vreg_config[i].vreg_name,
mp->vreg_config[i].min_voltage,
mp->vreg_config[i].max_voltage,
- mp->vreg_config[i].optimum_voltage);
+ mp->vreg_config[i].peak_current);
}
devm_kfree(dev, val_array);
@@ -428,6 +414,8 @@
/* disable DSI controller */
mdss_dsi_controller_cfg(0, pdata);
+ /* disable DSI phy */
+ mdss_dsi_phy_enable(ctrl_pdata->ctrl_base, 0);
ret = mdss_dsi_panel_power_on(pdata, 0);
if (ret) {
pr_err("%s: Panel power off failed\n", __func__);
@@ -1092,12 +1080,12 @@
cont_splash_enabled = of_property_read_bool(pdev->dev.of_node,
"qcom,cont-splash-enabled");
if (!cont_splash_enabled) {
- pr_info("%s:%d Continous splash flag not found.\n",
+ pr_info("%s:%d Continuous splash flag not found.\n",
__func__, __LINE__);
ctrl_pdata->panel_data.panel_info.cont_splash_enabled = 0;
ctrl_pdata->panel_data.panel_info.panel_power_on = 0;
} else {
- pr_info("%s:%d Continous splash flag enabled.\n",
+ pr_info("%s:%d Continuous splash flag enabled.\n",
__func__, __LINE__);
ctrl_pdata->panel_data.panel_info.cont_splash_enabled = 1;
diff --git a/drivers/video/msm/mdss/mdss_dsi_host.c b/drivers/video/msm/mdss/mdss_dsi_host.c
index 22ff08c..3c0dfc2 100644
--- a/drivers/video/msm/mdss/mdss_dsi_host.c
+++ b/drivers/video/msm/mdss/mdss_dsi_host.c
@@ -52,6 +52,8 @@
void mdss_dsi_irq_handler_config(struct mdss_dsi_ctrl_pdata *ctrl)
{
+ int ret;
+
if (ctrl->panel_data.panel_info.pdest == DISPLAY_1) {
mdss_dsi0_hw.ptr = (void *)(ctrl);
ctrl->mdss_hw = &mdss_dsi0_hw;
@@ -60,7 +62,8 @@
ctrl->mdss_hw = &mdss_dsi1_hw;
}
- if (!mdss_register_irq(ctrl->mdss_hw))
+ ret = mdss_register_irq(ctrl->mdss_hw);
+ if (ret)
pr_err("%s: mdss_register_irq failed.\n", __func__);
}
@@ -813,7 +816,6 @@
dsi_ctrl |= BIT(0); /* enable dsi */
MIPI_OUTP((ctrl_pdata->ctrl_base) + 0x0004, dsi_ctrl);
- mdss_dsi_irq_ctrl(ctrl_pdata, 1, 0); /* enable dsi irq */
wmb();
}
diff --git a/drivers/video/msm/mdss/mdss_fb.c b/drivers/video/msm/mdss/mdss_fb.c
index a5903cf..72ad67e 100644
--- a/drivers/video/msm/mdss/mdss_fb.c
+++ b/drivers/video/msm/mdss/mdss_fb.c
@@ -143,9 +143,9 @@
if (!bl_lvl && value)
bl_lvl = 1;
- mutex_lock(&mfd->lock);
+ mutex_lock(&mfd->bl_lock);
mdss_fb_set_backlight(mfd, bl_lvl);
- mutex_unlock(&mfd->lock);
+ mutex_unlock(&mfd->bl_lock);
}
static struct led_classdev backlight_led = {
@@ -263,6 +263,7 @@
mfd->mdp = *mdp_instance;
mutex_init(&mfd->lock);
+ mutex_init(&mfd->bl_lock);
fbi_list[fbi_list_index++] = fbi;
@@ -524,7 +525,7 @@
(*bl_lvl) = temp;
}
-/* must call this function from within mfd->lock */
+/* must call this function from within mfd->bl_lock */
void mdss_fb_set_backlight(struct msm_fb_data_type *mfd, u32 bkl_lvl)
{
struct mdss_panel_data *pdata;
@@ -566,11 +567,11 @@
if (unset_bl_level && !bl_updated) {
pdata = dev_get_platdata(&mfd->pdev->dev);
if ((pdata) && (pdata->set_backlight)) {
- mutex_lock(&mfd->lock);
+ mutex_lock(&mfd->bl_lock);
mfd->bl_level = unset_bl_level;
pdata->set_backlight(pdata, mfd->bl_level);
bl_level_old = unset_bl_level;
- mutex_unlock(&mfd->lock);
+ mutex_unlock(&mfd->bl_lock);
bl_updated = 1;
}
}
@@ -626,12 +627,7 @@
static int mdss_fb_blank(int blank_mode, struct fb_info *info)
{
struct msm_fb_data_type *mfd = (struct msm_fb_data_type *)info->par;
- if (blank_mode == FB_BLANK_POWERDOWN) {
- struct fb_event event;
- event.info = info;
- event.data = &blank_mode;
- fb_notifier_call_chain(FB_EVENT_BLANK, &event);
- }
+
mdss_fb_pan_idle(mfd);
if (mfd->op_enable == 0) {
if (blank_mode == FB_BLANK_UNBLANK)
@@ -956,6 +952,7 @@
result = mdss_fb_blank_sub(FB_BLANK_UNBLANK, info,
mfd->op_enable);
if (result) {
+ pm_runtime_put(info->dev);
pr_err("mdss_fb_open: can't turn on display!\n");
return result;
}
@@ -1143,6 +1140,7 @@
struct mdp_display_commit disp_commit;
memset(&disp_commit, 0, sizeof(disp_commit));
disp_commit.wait_for_finish = true;
+ memcpy(&disp_commit.var, var, sizeof(struct fb_var_screeninfo));
return mdss_fb_pan_display_ex(info, &disp_commit);
}
@@ -1209,6 +1207,7 @@
mdss_fb_wait_for_fence(mfd);
if (mfd->mdp.kickoff_fnc)
mfd->mdp.kickoff_fnc(mfd);
+ mdss_fb_update_backlight(mfd);
mdss_fb_signal_timeline(mfd);
} else {
var = &fb_backup->disp_commit.var;
@@ -1431,6 +1430,7 @@
int i, fence_cnt = 0, ret = 0;
int acq_fen_fd[MDP_MAX_FENCE_FD];
struct sync_fence *fence;
+ u32 threshold;
if ((buf_sync->acq_fen_fd_cnt > MDP_MAX_FENCE_FD) ||
(mfd->timeline == NULL))
@@ -1464,8 +1464,13 @@
if (buf_sync->flags & MDP_BUF_SYNC_FLAG_WAIT)
mdss_fb_wait_for_fence(mfd);
+ if (mfd->panel.type == WRITEBACK_PANEL)
+ threshold = 1;
+ else
+ threshold = 2;
+
mfd->cur_rel_sync_pt = sw_sync_pt_create(mfd->timeline,
- mfd->timeline_value + 2);
+ mfd->timeline_value + threshold);
if (mfd->cur_rel_sync_pt == NULL) {
pr_err("%s: cannot create sync point", __func__);
ret = -ENOMEM;
diff --git a/drivers/video/msm/mdss/mdss_fb.h b/drivers/video/msm/mdss/mdss_fb.h
index fdbbea9..6f6f490 100644
--- a/drivers/video/msm/mdss/mdss_fb.h
+++ b/drivers/video/msm/mdss/mdss_fb.h
@@ -102,6 +102,7 @@
u32 bl_level;
u32 bl_scale;
u32 bl_min_lvl;
+ struct mutex bl_lock;
struct mutex lock;
struct platform_device *pdev;
diff --git a/drivers/video/msm/mdss/mdss_hdmi_cec.c b/drivers/video/msm/mdss/mdss_hdmi_cec.c
index 694fcde..2cf47fc 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_cec.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_cec.c
@@ -785,12 +785,16 @@
io = cec_ctrl->init_data.io;
- if (!cec_ctrl->cec_enabled)
- return 0;
-
cec_intr = DSS_REG_R_ND(io, HDMI_CEC_INT);
DEV_DBG("%s: cec interrupt status is [0x%x]\n", __func__, cec_intr);
+ if (!cec_ctrl->cec_enabled) {
+ DEV_ERR("%s: cec is not enabled. Just clear int and return.\n",
+ __func__);
+ DSS_REG_W(io, HDMI_CEC_INT, cec_intr);
+ return 0;
+ }
+
cec_status = DSS_REG_R_ND(io, HDMI_CEC_STATUS);
DEV_DBG("%s: cec status is [0x%x]\n", __func__, cec_status);
diff --git a/drivers/video/msm/mdss/mdss_hdmi_hdcp.c b/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
index 2e20787..f726e79 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
@@ -618,7 +618,7 @@
/* Wait until READY bit is set in BCAPS */
timeout_count = 50;
- while (!(bcaps && BIT(5)) && timeout_count) {
+ while (!(bcaps & BIT(5)) && timeout_count) {
msleep(100);
timeout_count--;
/* Read BCAPS at offset 0x40 */
@@ -1057,11 +1057,16 @@
io = hdcp_ctrl->init_data.core_io;
- /* Ignore HDCP interrupts if HDCP is disabled */
- if (HDCP_STATE_INACTIVE == hdcp_ctrl->hdcp_state)
- return 0;
-
hdcp_int_val = DSS_REG_R(io, HDMI_HDCP_INT_CTRL);
+
+ /* Ignore HDCP interrupts if HDCP is disabled */
+ if (HDCP_STATE_INACTIVE == hdcp_ctrl->hdcp_state) {
+ DEV_ERR("%s: HDCP inactive. Just clear int and return.\n",
+ __func__);
+ DSS_REG_W(io, HDMI_HDCP_INT_CTRL, hdcp_int_val);
+ return 0;
+ }
+
if (hdcp_int_val & BIT(0)) {
/* AUTH_SUCCESS_INT */
DSS_REG_W(io, HDMI_HDCP_INT_CTRL, (hdcp_int_val | BIT(1)));
diff --git a/drivers/video/msm/mdss/mdss_hdmi_mhl.h b/drivers/video/msm/mdss/mdss_hdmi_mhl.h
index 8fef63e..8a9d4fc 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_mhl.h
+++ b/drivers/video/msm/mdss/mdss_hdmi_mhl.h
@@ -19,9 +19,9 @@
struct msm_hdmi_mhl_ops {
u8 (*tmds_enabled)(struct platform_device *pdev);
int (*set_mhl_max_pclk)(struct platform_device *pdev, u32 max_val);
+ int (*set_upstream_hpd)(struct platform_device *pdev, uint8_t on);
};
int msm_hdmi_register_mhl(struct platform_device *pdev,
- struct msm_hdmi_mhl_ops *ops);
-
+ struct msm_hdmi_mhl_ops *ops, void *data);
#endif /* __MDSS_HDMI_MHL_H__ */
diff --git a/drivers/video/msm/mdss/mdss_hdmi_tx.c b/drivers/video/msm/mdss/mdss_hdmi_tx.c
index 213fcff..ad4cbd2 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_tx.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_tx.c
@@ -81,9 +81,12 @@
struct hdmi_tx_audio_acr lut[AUDIO_SAMPLE_RATE_MAX];
};
+static int hdmi_tx_set_mhl_hpd(struct platform_device *pdev, uint8_t on);
static int hdmi_tx_sysfs_enable_hpd(struct hdmi_tx_ctrl *hdmi_ctrl, int on);
static irqreturn_t hdmi_tx_isr(int irq, void *data);
static void hdmi_tx_hpd_off(struct hdmi_tx_ctrl *hdmi_ctrl);
+static int hdmi_tx_enable_power(struct hdmi_tx_ctrl *hdmi_ctrl,
+ enum hdmi_tx_power_module_type module, int enable);
struct mdss_hw hdmi_tx_hw = {
.hw_ndx = MDSS_HW_HDMI,
@@ -93,12 +96,15 @@
struct dss_gpio hpd_gpio_config[] = {
{0, 1, COMPATIBLE_NAME "-hpd"},
- {0, 1, COMPATIBLE_NAME "-ddc-clk"},
- {0, 1, COMPATIBLE_NAME "-ddc-data"},
{0, 1, COMPATIBLE_NAME "-mux-en"},
{0, 0, COMPATIBLE_NAME "-mux-sel"}
};
+struct dss_gpio ddc_gpio_config[] = {
+ {0, 1, COMPATIBLE_NAME "-ddc-clk"},
+ {0, 1, COMPATIBLE_NAME "-ddc-data"}
+};
+
struct dss_gpio core_gpio_config[] = {
};
@@ -110,6 +116,7 @@
{
switch (module) {
case HDMI_TX_HPD_PM: return "HDMI_TX_HPD_PM";
+ case HDMI_TX_DDC_PM: return "HDMI_TX_DDC_PM";
case HDMI_TX_CORE_PM: return "HDMI_TX_CORE_PM";
case HDMI_TX_CEC_PM: return "HDMI_TX_CEC_PM";
default: return "???";
@@ -187,6 +194,7 @@
{
switch (module) {
case HDMI_TX_HPD_PM: return "HDMI_TX_HPD_PM";
+ case HDMI_TX_DDC_PM: return "HDMI_TX_DDC_PM";
case HDMI_TX_CORE_PM: return "HDMI_TX_CORE_PM";
case HDMI_TX_CEC_PM: return "HDMI_TX_CEC_PM";
default: return "???";
@@ -810,11 +818,19 @@
DEV_DBG("%s: Got HPD interrupt\n", __func__);
if (hdmi_ctrl->hpd_state) {
+ if (hdmi_tx_enable_power(hdmi_ctrl, HDMI_TX_DDC_PM, true)) {
+ DEV_ERR("%s: Failed to enable ddc power\n", __func__);
+ return;
+ }
+
hdmi_tx_read_sink_info(hdmi_ctrl);
hdmi_tx_send_cable_notification(hdmi_ctrl, 1);
DEV_INFO("%s: sense cable CONNECTED: state switch to %d\n",
__func__, hdmi_ctrl->sdev.state);
} else {
+ if (hdmi_tx_enable_power(hdmi_ctrl, HDMI_TX_DDC_PM, false))
+ DEV_WARN("%s: Failed to disable ddc power\n", __func__);
+
hdmi_tx_send_cable_notification(hdmi_ctrl, 0);
DEV_INFO("%s: sense cable DISCONNECTED: state switch to %d\n",
__func__, hdmi_ctrl->sdev.state);
@@ -1935,7 +1951,7 @@
}
int msm_hdmi_register_mhl(struct platform_device *pdev,
- struct msm_hdmi_mhl_ops *ops)
+ struct msm_hdmi_mhl_ops *ops, void *data)
{
struct hdmi_tx_ctrl *hdmi_ctrl = platform_get_drvdata(pdev);
@@ -1951,6 +1967,8 @@
ops->tmds_enabled = hdmi_tx_tmds_enabled;
ops->set_mhl_max_pclk = hdmi_tx_set_mhl_max_pclk;
+ ops->set_upstream_hpd = hdmi_tx_set_mhl_hpd;
+
return 0;
}
@@ -2298,6 +2316,7 @@
static void hdmi_tx_hpd_off(struct hdmi_tx_ctrl *hdmi_ctrl)
{
int rc = 0;
+ struct dss_io_data *io = NULL;
if (!hdmi_ctrl) {
DEV_ERR("%s: invalid input\n", __func__);
@@ -2309,10 +2328,26 @@
return;
}
+ io = &hdmi_ctrl->pdata.io[HDMI_TX_CORE_IO];
+ if (!io->base) {
+ DEV_ERR("%s: core io not initialized\n", __func__);
+ return;
+ }
+
+ /* Turn off HPD interrupts */
+ DSS_REG_W(io, HDMI_HPD_INT_CTRL, 0);
+
mdss_disable_irq(&hdmi_tx_hw);
hdmi_tx_set_mode(hdmi_ctrl, false);
+ if (hdmi_ctrl->hpd_state) {
+ rc = hdmi_tx_enable_power(hdmi_ctrl, HDMI_TX_DDC_PM, 0);
+ if (rc)
+ DEV_INFO("%s: Failed to disable ddc power. Error=%d\n",
+ __func__, rc);
+ }
+
rc = hdmi_tx_enable_power(hdmi_ctrl, HDMI_TX_HPD_PM, 0);
if (rc)
DEV_INFO("%s: Failed to disable hpd power. Error=%d\n",
@@ -2406,12 +2441,47 @@
return rc;
} /* hdmi_tx_sysfs_enable_hpd */
+static int hdmi_tx_set_mhl_hpd(struct platform_device *pdev, uint8_t on)
+{
+ int rc = 0;
+ struct hdmi_tx_ctrl *hdmi_ctrl = NULL;
+
+ hdmi_ctrl = platform_get_drvdata(pdev);
+
+ if (!hdmi_ctrl) {
+ DEV_ERR("%s: invalid input\n", __func__);
+ return -EINVAL;
+ }
+
+ if (!on && hdmi_ctrl->hpd_feature_on) {
+ rc = hdmi_tx_sysfs_enable_hpd(hdmi_ctrl, false);
+ } else if (on && !hdmi_ctrl->hpd_feature_on) {
+ rc = hdmi_tx_sysfs_enable_hpd(hdmi_ctrl, true);
+ } else {
+ DEV_DBG("%s: hpd is already '%s'. return\n", __func__,
+ hdmi_ctrl->hpd_feature_on ? "enabled" : "disabled");
+ return rc;
+ }
+
+ if (!rc) {
+ hdmi_ctrl->hpd_feature_on =
+ (~hdmi_ctrl->hpd_feature_on) & BIT(0);
+ DEV_DBG("%s: '%d'\n", __func__, hdmi_ctrl->hpd_feature_on);
+ } else {
+ DEV_ERR("%s: failed to '%s' hpd. rc = %d\n", __func__,
+ on ? "enable" : "disable", rc);
+ }
+
+ return rc;
+
+}
+
static irqreturn_t hdmi_tx_isr(int irq, void *data)
{
struct dss_io_data *io = NULL;
struct hdmi_tx_ctrl *hdmi_ctrl = (struct hdmi_tx_ctrl *)data;
- if (!hdmi_ctrl || !hdmi_ctrl->hpd_initialized) {
+ if (!hdmi_ctrl) {
DEV_WARN("%s: invalid input data, ISR ignored\n", __func__);
return IRQ_HANDLED;
}
@@ -2836,7 +2906,7 @@
switch (module_type) {
case HDMI_TX_HPD_PM:
- mp->num_clk = 2;
+ mp->num_clk = 3;
mp->clk_config = devm_kzalloc(dev, sizeof(struct dss_clk) *
mp->num_clk, GFP_KERNEL);
if (!mp->clk_config) {
@@ -2852,6 +2922,16 @@
snprintf(mp->clk_config[1].clk_name, 32, "%s", "core_clk");
mp->clk_config[1].type = DSS_CLK_OTHER;
mp->clk_config[1].rate = 19200000;
+
+ /*
+ * This clock is required to clock MDSS interrupt registers
+ * when HDMI is the only block turned on within MDSS. Since
+ * rate for this clock is controlled by MDP driver, treat this
+ * similar to AHB clock and do not set rate for it.
+ */
+ snprintf(mp->clk_config[2].clk_name, 32, "%s", "mdp_core_clk");
+ mp->clk_config[2].type = DSS_CLK_AHB;
+ mp->clk_config[2].rate = 0;
break;
case HDMI_TX_CORE_PM:
@@ -2874,6 +2954,7 @@
mp->clk_config[1].rate = 0;
break;
+ case HDMI_TX_DDC_PM:
case HDMI_TX_CEC_PM:
mp->num_clk = 0;
DEV_DBG("%s: no clk\n", __func__);
@@ -2933,6 +3014,9 @@
case HDMI_TX_HPD_PM:
mod_name = "hpd";
break;
+ case HDMI_TX_DDC_PM:
+ mod_name = "ddc";
+ break;
case HDMI_TX_CORE_PM:
mod_name = "core";
break;
@@ -3021,20 +3105,6 @@
}
snprintf(mp->vreg_config[j].vreg_name, 32, "%s", st);
- /* vreg-type */
- memset(prop_name, 0, sizeof(prop_name));
- snprintf(prop_name, 32, "%s-%s", COMPATIBLE_NAME,
- "supply-type");
- memset(val_array, 0, sizeof(u32) * dt_vreg_total);
- rc = of_property_read_u32_array(of_node,
- prop_name, val_array, dt_vreg_total);
- if (rc) {
- DEV_ERR("%s: error read '%s' vreg type. rc=%d\n",
- __func__, hdmi_tx_pm_name(module_type), rc);
- goto error;
- }
- mp->vreg_config[j].type = val_array[i];
-
/* vreg-min-voltage */
memset(prop_name, 0, sizeof(prop_name));
snprintf(prop_name, 32, "%s-%s", COMPATIBLE_NAME,
@@ -3068,24 +3138,23 @@
/* vreg-op-mode */
memset(prop_name, 0, sizeof(prop_name));
snprintf(prop_name, 32, "%s-%s", COMPATIBLE_NAME,
- "op-mode");
+ "peak-current");
memset(val_array, 0, sizeof(u32) * dt_vreg_total);
rc = of_property_read_u32_array(of_node,
prop_name, val_array,
dt_vreg_total);
if (rc) {
- DEV_ERR("%s: error read '%s' min volt. rc=%d\n",
+ DEV_ERR("%s: error read '%s' peak current. rc=%d\n",
__func__, hdmi_tx_pm_name(module_type), rc);
goto error;
}
- mp->vreg_config[j].optimum_voltage = val_array[i];
+ mp->vreg_config[j].peak_current = val_array[i];
- DEV_DBG("%s: %s type=%d, min=%d, max=%d, op=%d\n",
- __func__, mp->vreg_config[j].vreg_name,
- mp->vreg_config[j].type,
+ DEV_DBG("%s: %s min=%d, max=%d, pc=%d\n", __func__,
+ mp->vreg_config[j].vreg_name,
mp->vreg_config[j].min_voltage,
mp->vreg_config[j].max_voltage,
- mp->vreg_config[j].optimum_voltage);
+ mp->vreg_config[j].peak_current);
ndx_mask >>= 1;
j++;
@@ -3144,6 +3213,10 @@
gpio_list_size = ARRAY_SIZE(hpd_gpio_config);
gpio_list = hpd_gpio_config;
break;
+ case HDMI_TX_DDC_PM:
+ gpio_list_size = ARRAY_SIZE(ddc_gpio_config);
+ gpio_list = ddc_gpio_config;
+ break;
case HDMI_TX_CORE_PM:
gpio_list_size = ARRAY_SIZE(core_gpio_config);
gpio_list = core_gpio_config;
diff --git a/drivers/video/msm/mdss/mdss_hdmi_tx.h b/drivers/video/msm/mdss/mdss_hdmi_tx.h
index 8d9a477..ce3c00c 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_tx.h
+++ b/drivers/video/msm/mdss/mdss_hdmi_tx.h
@@ -25,6 +25,7 @@
enum hdmi_tx_power_module_type {
HDMI_TX_HPD_PM,
+ HDMI_TX_DDC_PM,
HDMI_TX_CORE_PM,
HDMI_TX_CEC_PM,
HDMI_TX_MAX_PM
diff --git a/drivers/video/msm/mdss/mdss_io_util.c b/drivers/video/msm/mdss/mdss_io_util.c
index c38eaa4..ff52e4c 100644
--- a/drivers/video/msm/mdss/mdss_io_util.c
+++ b/drivers/video/msm/mdss/mdss_io_util.c
@@ -131,6 +131,7 @@
{
int i = 0, rc = 0;
struct dss_vreg *curr_vreg = NULL;
+ enum dss_vreg_type type;
if (config) {
for (i = 0; i < num_vreg; i++) {
@@ -145,7 +146,9 @@
curr_vreg->vreg = NULL;
goto vreg_get_fail;
}
- if (curr_vreg->type == DSS_REG_LDO) {
+ type = (regulator_count_voltages(curr_vreg->vreg) > 0)
+ ? DSS_REG_LDO : DSS_REG_VS;
+ if (type == DSS_REG_LDO) {
rc = regulator_set_voltage(
curr_vreg->vreg,
curr_vreg->min_voltage,
@@ -157,10 +160,10 @@
curr_vreg->vreg_name);
goto vreg_set_voltage_fail;
}
- if (curr_vreg->optimum_voltage >= 0) {
+ if (curr_vreg->peak_current >= 0) {
rc = regulator_set_optimum_mode(
curr_vreg->vreg,
- curr_vreg->optimum_voltage);
+ curr_vreg->peak_current);
if (rc < 0) {
DEV_ERR(
"%pS->%s: %s set opt m fail\n",
@@ -176,8 +179,11 @@
for (i = num_vreg-1; i >= 0; i--) {
curr_vreg = &in_vreg[i];
if (curr_vreg->vreg) {
- if (curr_vreg->type == DSS_REG_LDO) {
- if (curr_vreg->optimum_voltage >= 0) {
+ type = (regulator_count_voltages(
+ curr_vreg->vreg) > 0)
+ ? DSS_REG_LDO : DSS_REG_VS;
+ if (type == DSS_REG_LDO) {
+ if (curr_vreg->peak_current >= 0) {
regulator_set_optimum_mode(
curr_vreg->vreg, 0);
}
@@ -192,11 +198,11 @@
return 0;
vreg_unconfig:
-if (curr_vreg->type == DSS_REG_LDO)
+if (type == DSS_REG_LDO)
regulator_set_optimum_mode(curr_vreg->vreg, 0);
vreg_set_opt_mode_fail:
-if (curr_vreg->type == DSS_REG_LDO)
+if (type == DSS_REG_LDO)
regulator_set_voltage(curr_vreg->vreg, 0, curr_vreg->max_voltage);
vreg_set_voltage_fail:
@@ -206,6 +212,8 @@
vreg_get_fail:
for (i--; i >= 0; i--) {
curr_vreg = &in_vreg[i];
+ type = (regulator_count_voltages(curr_vreg->vreg) > 0)
+ ? DSS_REG_LDO : DSS_REG_VS;
goto vreg_unconfig;
}
return rc;
diff --git a/drivers/video/msm/mdss/mdss_io_util.h b/drivers/video/msm/mdss/mdss_io_util.h
index 0ae62a3..23341d6 100644
--- a/drivers/video/msm/mdss/mdss_io_util.h
+++ b/drivers/video/msm/mdss/mdss_io_util.h
@@ -50,10 +50,9 @@
struct dss_vreg {
struct regulator *vreg; /* vreg handle */
char vreg_name[32];
- enum dss_vreg_type type;
int min_voltage;
int max_voltage;
- int optimum_voltage;
+ int peak_current;
};
struct dss_gpio {
diff --git a/drivers/video/msm/mdss/mdss_mdp.c b/drivers/video/msm/mdss/mdss_mdp.c
index 51776db..b9457be 100644
--- a/drivers/video/msm/mdss/mdss_mdp.c
+++ b/drivers/video/msm/mdss/mdss_mdp.c
@@ -134,6 +134,7 @@
char *prop_name);
static int mdss_mdp_parse_dt_smp(struct platform_device *pdev);
static int mdss_mdp_parse_dt_misc(struct platform_device *pdev);
+static int mdss_mdp_parse_dt_ad_cfg(struct platform_device *pdev);
int mdss_mdp_alloc_fb_mem(struct msm_fb_data_type *mfd)
{
@@ -330,7 +331,6 @@
hw->hw_ndx, mdss_res->mdp_irq_mask,
mdss_res->mdp_hist_irq_mask);
} else {
- mdss_irq_handlers[hw->hw_ndx] = NULL;
mdss_res->irq_mask &= ~ndx_bit;
if (mdss_res->irq_mask == 0) {
mdss_res->irq_ena = false;
@@ -388,12 +388,11 @@
bus_idx = (current_bus_idx % (num_cases - 1)) + 1;
- /* aligning to avoid performing updates for small changes */
- ab_quota = ALIGN(ab_quota, SZ_64M);
- ib_quota = ALIGN(ib_quota, SZ_64M);
-
vect = mdp_bus_scale_table.usecase[current_bus_idx].vectors;
- if ((ab_quota == vect->ab) && (ib_quota == vect->ib)) {
+
+ /* avoid performing updates for small changes */
+ if ((ALIGN(ab_quota, SZ_64M) == ALIGN(vect->ab, SZ_64M)) &&
+ (ALIGN(ib_quota, SZ_64M) == ALIGN(vect->ib, SZ_64M))) {
pr_debug("skip bus scaling, no change in vectors\n");
return 0;
}
@@ -622,71 +621,43 @@
return clk_rate;
}
-static void mdss_mdp_clk_ctrl_update(struct mdss_data_type *mdata)
-{
- int enable;
-
- mutex_lock(&mdp_clk_lock);
- enable = atomic_read(&mdata->clk_ref) > 0;
- if (mdata->clk_ena == enable) {
- mutex_unlock(&mdp_clk_lock);
- return;
- }
- mdata->clk_ena = enable;
-
- if (enable)
- pm_runtime_get_sync(&mdata->pdev->dev);
-
- pr_debug("MDP CLKS %s\n", (enable ? "Enable" : "Disable"));
- mb();
-
- mdss_mdp_clk_update(MDSS_CLK_AHB, enable);
- mdss_mdp_clk_update(MDSS_CLK_AXI, enable);
-
- mdss_mdp_clk_update(MDSS_CLK_MDP_CORE, enable);
- mdss_mdp_clk_update(MDSS_CLK_MDP_LUT, enable);
- if (mdata->vsync_ena)
- mdss_mdp_clk_update(MDSS_CLK_MDP_VSYNC, enable);
-
- if (!enable)
- pm_runtime_put(&mdata->pdev->dev);
-
- mutex_unlock(&mdp_clk_lock);
-}
-
-static void mdss_mdp_clk_ctrl_workqueue_handler(struct work_struct *work)
-{
- struct mdss_data_type *mdata;
-
- mdata = container_of(work, struct mdss_data_type, clk_ctrl_worker);
- mdss_mdp_clk_ctrl_update(mdata);
-}
-
void mdss_mdp_clk_ctrl(int enable, int isr)
{
struct mdss_data_type *mdata = mdss_res;
+ static int mdp_clk_cnt;
+ int changed = 0;
- pr_debug("clk enable=%d isr=%d ref= %d\n", enable, isr,
- atomic_read(&mdata->clk_ref));
-
- if (enable == MDP_BLOCK_POWER_ON) {
- BUG_ON(isr);
-
- if (atomic_inc_return(&mdata->clk_ref) == 1)
- mdss_mdp_clk_ctrl_update(mdata);
+ mutex_lock(&mdp_clk_lock);
+ if (enable) {
+ if (mdp_clk_cnt == 0)
+ changed++;
+ mdp_clk_cnt++;
} else {
- BUG_ON(atomic_read(&mdata->clk_ref) == 0);
-
- if (atomic_dec_and_test(&mdata->clk_ref)) {
- if (isr)
- queue_work(mdata->clk_ctrl_wq,
- &mdata->clk_ctrl_worker);
- else
- mdss_mdp_clk_ctrl_update(mdata);
- }
+ mdp_clk_cnt--;
+ if (mdp_clk_cnt == 0)
+ changed++;
}
+ pr_debug("%s: clk_cnt=%d changed=%d enable=%d\n",
+ __func__, mdp_clk_cnt, changed, enable);
+ if (changed) {
+ mdata->clk_ena = enable;
+ if (enable)
+ pm_runtime_get_sync(&mdata->pdev->dev);
+
+ mdss_mdp_clk_update(MDSS_CLK_AHB, enable);
+ mdss_mdp_clk_update(MDSS_CLK_AXI, enable);
+ mdss_mdp_clk_update(MDSS_CLK_MDP_CORE, enable);
+ mdss_mdp_clk_update(MDSS_CLK_MDP_LUT, enable);
+ if (mdata->vsync_ena)
+ mdss_mdp_clk_update(MDSS_CLK_MDP_VSYNC, enable);
+
+ if (!enable)
+ pm_runtime_put(&mdata->pdev->dev);
+ }
+
+ mutex_unlock(&mdp_clk_lock);
}
static inline int mdss_mdp_irq_clk_register(struct mdss_data_type *mdata,
@@ -878,6 +849,7 @@
{
int i, j;
char *offset;
+ struct mdss_mdp_pipe *vig;
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
mdata->mdp_rev = MDSS_MDP_REG_READ(MDSS_MDP_REG_HW_VERSION);
@@ -899,7 +871,16 @@
writel_relaxed(j, offset);
/* swap */
- writel_relaxed(i, offset + 4);
+ writel_relaxed(1, offset + 4);
+ }
+ vig = mdata->vig_pipes;
+ for (i = 0; i < mdata->nvig_pipes; i++) {
+ offset = vig[i].base +
+ MDSS_MDP_REG_VIG_HIST_LUT_BASE;
+ for (j = 0; j < ENHIST_LUT_ENTRIES; j++)
+ writel_relaxed(j, offset);
+ /* swap */
+ writel_relaxed(1, offset + 16);
}
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
pr_debug("MDP hw init done\n");
@@ -925,9 +906,6 @@
if (rc)
return rc;
- mdata->clk_ctrl_wq = create_singlethread_workqueue("mdp_clk_wq");
- INIT_WORK(&mdata->clk_ctrl_worker, mdss_mdp_clk_ctrl_workqueue_handler);
-
mdata->iclient = msm_ion_client_create(-1, mdata->pdev->name);
if (IS_ERR_OR_NULL(mdata->iclient)) {
pr_err("msm_ion_client_create() return error (%p)\n",
@@ -1183,6 +1161,12 @@
return rc;
}
+ rc = mdss_mdp_parse_dt_ad_cfg(pdev);
+ if (rc) {
+ pr_err("Error in device tree : ad\n");
+ return rc;
+ }
+
return 0;
}
@@ -1498,9 +1482,48 @@
&data);
mdata->rot_block_size = (!rc ? data : 128);
+ mdata->has_bwc = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-has-bwc");
+ mdata->has_decimation = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-has-decimation");
return 0;
}
+static int mdss_mdp_parse_dt_ad_cfg(struct platform_device *pdev)
+{
+ struct mdss_data_type *mdata = platform_get_drvdata(pdev);
+ u32 *ad_offsets = NULL;
+ int rc;
+
+ mdata->nad_cfgs = mdss_mdp_parse_dt_prop_len(pdev, "qcom,mdss-ad-off");
+
+ if (mdata->nad_cfgs == 0) {
+ mdata->ad_cfgs = NULL;
+ return 0;
+ }
+ if (mdata->nad_cfgs > mdata->nmixers_intf)
+ return -EINVAL;
+
+ ad_offsets = kzalloc(sizeof(u32) * mdata->nad_cfgs, GFP_KERNEL);
+ if (!ad_offsets) {
+ pr_err("no mem assigned: kzalloc fail\n");
+ return -ENOMEM;
+ }
+
+ rc = mdss_mdp_parse_dt_handler(pdev, "qcom,mdss-ad-off", ad_offsets,
+ mdata->nad_cfgs);
+ if (rc)
+ goto parse_done;
+
+ rc = mdss_mdp_ad_addr_setup(mdata, ad_offsets);
+ if (rc)
+ pr_err("unable to setup assertive display\n");
+
+parse_done:
+ kfree(ad_offsets);
+ return rc;
+}
+
static int mdss_mdp_parse_dt_handler(struct platform_device *pdev,
char *prop_name, u32 *offsets, int len)
{
@@ -1559,8 +1582,6 @@
static inline int mdss_mdp_suspend_sub(struct mdss_data_type *mdata)
{
- flush_workqueue(mdata->clk_ctrl_wq);
-
mdata->suspend_fs_ena = mdata->fs_ena;
mdss_mdp_footswitch_ctrl(mdata, false);
@@ -1658,8 +1679,6 @@
dev_dbg(dev, "pm_runtime: idling...\n");
- flush_workqueue(mdata->clk_ctrl_wq);
-
return 0;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp.h b/drivers/video/msm/mdss/mdss_mdp.h
index 25c5871..e18722c 100644
--- a/drivers/video/msm/mdss/mdss_mdp.h
+++ b/drivers/video/msm/mdss/mdss_mdp.h
@@ -38,6 +38,7 @@
#define MAX_PLANES 4
#define MAX_DOWNSCALE_RATIO 4
#define MAX_UPSCALE_RATIO 20
+#define MAX_DECIMATION 4
#define C3_ALPHA 3 /* alpha */
#define C2_R_Cr 2 /* R/Cr */
@@ -108,6 +109,12 @@
struct mdss_mdp_ctl;
typedef void (*mdp_vsync_handler_t)(struct mdss_mdp_ctl *, ktime_t);
+struct mdss_mdp_vsync_handler {
+ bool enabled;
+ mdp_vsync_handler_t vsync_handler;
+ struct list_head list;
+};
+
struct mdss_mdp_ctl {
u32 num;
char __iomem *base;
@@ -121,6 +128,7 @@
u32 opmode;
u32 flush_bits;
+ bool is_video_mode;
u32 play_cnt;
u32 vsync_cnt;
u32 underrun_cnt;
@@ -128,6 +136,7 @@
u16 width;
u16 height;
u32 dst_format;
+ bool is_secure;
u32 bus_ab_quota;
u32 bus_ib_quota;
@@ -141,13 +150,18 @@
struct mutex lock;
struct mdss_panel_data *panel_data;
+ struct mdss_mdp_vsync_handler vsync_handler;
int (*start_fnc) (struct mdss_mdp_ctl *ctl);
int (*stop_fnc) (struct mdss_mdp_ctl *ctl);
int (*prepare_fnc) (struct mdss_mdp_ctl *ctl, void *arg);
int (*display_fnc) (struct mdss_mdp_ctl *ctl, void *arg);
int (*wait_fnc) (struct mdss_mdp_ctl *ctl, void *arg);
- int (*set_vsync_handler) (struct mdss_mdp_ctl *, mdp_vsync_handler_t);
+ u32 (*read_line_cnt_fnc) (struct mdss_mdp_ctl *);
+ int (*add_vsync_handler) (struct mdss_mdp_ctl *,
+ struct mdss_mdp_vsync_handler *);
+ int (*remove_vsync_handler) (struct mdss_mdp_ctl *,
+ struct mdss_mdp_vsync_handler *);
void *priv_data;
};
@@ -233,6 +247,25 @@
spinlock_t hist_lock;
};
+struct mdss_ad_info {
+ char __iomem *base;
+ u8 num;
+ u32 sts;
+ u32 state;
+ u32 ad_data;
+ u32 ad_data_mode;
+ struct mdss_ad_init init;
+ struct mdss_ad_cfg cfg;
+ struct mutex lock;
+ struct work_struct calc_work;
+ struct msm_fb_data_type *mfd;
+ struct mdss_mdp_vsync_handler handle;
+ struct completion comp;
+ u32 last_str;
+ u32 last_bl;
+ u32 calc_itr;
+};
+
struct pp_sts_type {
u32 pa_sts;
u32 pcc_sts;
@@ -249,6 +282,7 @@
struct mdss_pipe_pp_res {
u32 igc_c0_c1[IGC_LUT_ENTRIES];
u32 igc_c2[IGC_LUT_ENTRIES];
+ u32 hist_lut[ENHIST_LUT_ENTRIES];
struct pp_hist_col_info hist;
struct pp_sts_type pp_sts;
};
@@ -267,6 +301,8 @@
u16 img_width;
u16 img_height;
+ u8 horz_deci;
+ u8 vert_deci;
struct mdss_mdp_img_rect src;
struct mdss_mdp_img_rect dst;
@@ -286,6 +322,7 @@
u32 params_changed;
unsigned long smp[MAX_PLANES];
+ unsigned long smp_reserved[MAX_PLANES];
struct mdss_mdp_data back_buf;
struct mdss_mdp_data front_buf;
@@ -311,6 +348,7 @@
int borderfill_enable;
int overlay_play_enable;
int hw_refresh;
+ void *cpu_pm_hdl;
struct mdss_data_type *mdata;
struct mutex ov_lock;
@@ -319,6 +357,7 @@
struct list_head overlay_list;
struct list_head pipes_used;
struct list_head pipes_cleanup;
+ bool mixer_swap;
};
#define is_vig_pipe(_pipe_id_) ((_pipe_id_) <= MDSS_MDP_SSPP_VIG2)
@@ -346,7 +385,7 @@
irqreturn_t mdss_mdp_isr(int irq, void *ptr);
int mdss_iommu_attach(struct mdss_data_type *mdata);
-int mdss_mdp_copy_splash_screen(struct mdss_panel_data *pdata);
+int mdss_mdp_video_copy_splash_screen(struct mdss_panel_data *pdata);
void mdss_mdp_irq_clear(struct mdss_data_type *mdata,
u32 intr_type, u32 intf_num);
int mdss_mdp_irq_enable(u32 intr_type, u32 intf_num);
@@ -376,6 +415,10 @@
struct mdss_mdp_ctl *mdss_mdp_ctl_init(struct mdss_panel_data *pdata,
struct msm_fb_data_type *mfd);
+int mdss_mdp_video_reconfigure_splash_done(struct mdss_mdp_ctl *ctl);
+int mdss_mdp_video_copy_splash_screen(struct mdss_panel_data *pdata);
+void mdss_mdp_ctl_splash_start(struct mdss_panel_data *pdata);
+int mdss_mdp_ctl_splash_finish(struct mdss_mdp_ctl *ctl);
int mdss_mdp_ctl_setup(struct mdss_mdp_ctl *ctl);
int mdss_mdp_ctl_split_display_setup(struct mdss_mdp_ctl *ctl,
struct mdss_panel_data *pdata);
@@ -393,6 +436,8 @@
int mdss_mdp_mixer_pipe_unstage(struct mdss_mdp_pipe *pipe);
int mdss_mdp_display_commit(struct mdss_mdp_ctl *ctl, void *arg);
int mdss_mdp_display_wait4comp(struct mdss_mdp_ctl *ctl);
+int mdss_mdp_display_wakeup_time(struct mdss_mdp_ctl *ctl,
+ ktime_t *wakeup_time);
int mdss_mdp_csc_setup(u32 block, u32 blk_idx, u32 tbl_idx, u32 csc_type);
int mdss_mdp_csc_setup_data(u32 block, u32 blk_idx, u32 tbl_idx,
@@ -401,7 +446,7 @@
int mdss_mdp_pp_init(struct device *dev);
void mdss_mdp_pp_term(struct device *dev);
-int mdss_mdp_pp_resume(u32 mixer_num);
+int mdss_mdp_pp_resume(struct mdss_mdp_ctl *ctl, u32 mixer_num);
int mdss_mdp_pp_setup(struct mdss_mdp_ctl *ctl);
int mdss_mdp_pp_setup_locked(struct mdss_mdp_ctl *ctl);
@@ -438,17 +483,28 @@
struct mdp_histogram_start_req *req);
int mdss_mdp_histogram_stop(struct mdss_mdp_ctl *ctl, u32 block);
int mdss_mdp_hist_collect(struct mdss_mdp_ctl *ctl,
- struct mdp_histogram_data *hist,
- u32 *hist_data_addr);
+ struct mdp_histogram_data *hist);
void mdss_mdp_hist_intr_done(u32 isr);
+int mdss_ad_init_checks(struct msm_fb_data_type *mfd);
+int mdss_mdp_ad_config(struct msm_fb_data_type *mfd,
+ struct mdss_ad_init_cfg *init_cfg);
+int mdss_mdp_ad_input(struct msm_fb_data_type *mfd,
+ struct mdss_ad_input *input);
+int mdss_mdp_ad_addr_setup(struct mdss_data_type *mdata, u32 *ad_off);
+
struct mdss_mdp_pipe *mdss_mdp_pipe_alloc(struct mdss_mdp_mixer *mixer,
u32 type);
struct mdss_mdp_pipe *mdss_mdp_pipe_get(struct mdss_data_type *mdata, u32 ndx);
+struct mdss_mdp_pipe *mdss_mdp_pipe_search(struct mdss_data_type *mdata,
+ u32 ndx);
int mdss_mdp_pipe_map(struct mdss_mdp_pipe *pipe);
void mdss_mdp_pipe_unmap(struct mdss_mdp_pipe *pipe);
struct mdss_mdp_pipe *mdss_mdp_pipe_alloc_dma(struct mdss_mdp_mixer *mixer);
+int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe);
+void mdss_mdp_smp_unreserve(struct mdss_mdp_pipe *pipe);
+
int mdss_mdp_pipe_addr_setup(struct mdss_data_type *mdata, u32 *offsets,
u32 *ftch_y_id, u32 type, u32 num_base, u32 len);
int mdss_mdp_mixer_addr_setup(struct mdss_data_type *mdata, u32 *mixer_offsets,
@@ -466,6 +522,8 @@
struct mdss_mdp_plane_sizes *ps, u32 bwc_mode);
int mdss_mdp_get_rau_strides(u32 w, u32 h, struct mdss_mdp_format_params *fmt,
struct mdss_mdp_plane_sizes *ps);
+void mdss_mdp_data_calc_offset(struct mdss_mdp_data *data, u16 x, u16 y,
+ struct mdss_mdp_plane_sizes *ps, struct mdss_mdp_format_params *fmt);
struct mdss_mdp_format_params *mdss_mdp_get_format_params(u32 format);
int mdss_mdp_put_img(struct mdss_mdp_img_data *data);
int mdss_mdp_get_img(struct msmfb_data *img, struct mdss_mdp_img_data *data);
diff --git a/drivers/video/msm/mdss/mdss_mdp_ctl.c b/drivers/video/msm/mdss/mdss_mdp_ctl.c
index 4e51100..b2147c3 100644
--- a/drivers/video/msm/mdss/mdss_mdp_ctl.c
+++ b/drivers/video/msm/mdss/mdss_mdp_ctl.c
@@ -47,6 +47,15 @@
writel_relaxed(val, mixer->base + reg);
}
+static inline u32 mdss_mdp_get_pclk_rate(struct mdss_mdp_ctl *ctl)
+{
+ struct mdss_panel_info *pinfo = &ctl->panel_data->panel_info;
+
+ return (ctl->intf_type == MDSS_INTF_DSI) ?
+ pinfo->mipi.dsi_pclk_rate :
+ pinfo->clk_rate;
+}
+
static int mdss_mdp_ctl_perf_commit(struct mdss_data_type *mdata, u32 flags)
{
struct mdss_mdp_ctl *ctl;
@@ -202,13 +211,7 @@
max_clk_rate = clk_rate;
if (ctl->intf_type) {
- struct mdss_panel_info *pinfo;
-
- pinfo = &ctl->panel_data->panel_info;
- clk_rate = (ctl->intf_type == MDSS_INTF_DSI) ?
- pinfo->mipi.dsi_pclk_rate :
- pinfo->clk_rate;
-
+ clk_rate = mdss_mdp_get_pclk_rate(ctl);
/* minimum clock rate due to inefficiency in 3dmux */
clk_rate = mult_frac(clk_rate >> 1, 9, 8);
if (clk_rate > max_clk_rate)
@@ -279,8 +282,6 @@
return -EINVAL;
}
- mutex_lock(&mdss_mdp_ctl_lock);
- ctl->ref_cnt--;
if (ctl->mixer_left) {
mdss_mdp_mixer_free(ctl->mixer_left);
ctl->mixer_left = NULL;
@@ -289,13 +290,19 @@
mdss_mdp_mixer_free(ctl->mixer_right);
ctl->mixer_right = NULL;
}
+ mutex_lock(&mdss_mdp_ctl_lock);
+ ctl->ref_cnt--;
+ ctl->intf_num = MDSS_MDP_NO_INTF;
+ ctl->is_secure = false;
ctl->power_on = false;
ctl->start_fnc = NULL;
ctl->stop_fnc = NULL;
ctl->prepare_fnc = NULL;
ctl->display_fnc = NULL;
ctl->wait_fnc = NULL;
- ctl->set_vsync_handler = NULL;
+ ctl->read_line_cnt_fnc = NULL;
+ ctl->add_vsync_handler = NULL;
+ ctl->remove_vsync_handler = NULL;
mutex_unlock(&mdss_mdp_ctl_lock);
return 0;
@@ -437,7 +444,6 @@
if (ctl->stop_fnc)
ctl->stop_fnc(ctl);
- mdss_mdp_mixer_free(mixer);
mdss_mdp_ctl_free(ctl);
mdss_mdp_ctl_perf_commit(ctl->mdata, MDSS_MDP_PERF_UPDATE_ALL);
@@ -445,6 +451,29 @@
return 0;
}
+void mdss_mdp_ctl_splash_start(struct mdss_panel_data *pdata)
+{
+ switch (pdata->panel_info.type) {
+ case MIPI_VIDEO_PANEL:
+ mdss_mdp_video_copy_splash_screen(pdata);
+ break;
+ case MIPI_CMD_PANEL:
+ default:
+ break;
+ }
+}
+
+int mdss_mdp_ctl_splash_finish(struct mdss_mdp_ctl *ctl)
+{
+ switch (ctl->panel_data->panel_info.type) {
+ case MIPI_VIDEO_PANEL:
+ return mdss_mdp_video_reconfigure_splash_done(ctl);
+ case MIPI_CMD_PANEL:
+ default:
+ return 0;
+ }
+}
+
static inline int mdss_mdp_set_split_ctl(struct mdss_mdp_ctl *ctl,
struct mdss_mdp_ctl *split_ctl)
{
@@ -647,15 +676,18 @@
}
ctl->mfd = mfd;
ctl->panel_data = pdata;
+ ctl->is_video_mode = false;
switch (pdata->panel_info.type) {
case EDP_PANEL:
+ ctl->is_video_mode = true;
ctl->intf_num = MDSS_MDP_INTF0;
ctl->intf_type = MDSS_INTF_EDP;
ctl->opmode = MDSS_MDP_CTL_OP_VIDEO_MODE;
ctl->start_fnc = mdss_mdp_video_start;
break;
case MIPI_VIDEO_PANEL:
+ ctl->is_video_mode = true;
if (pdata->panel_info.pdest == DISPLAY_1)
ctl->intf_num = MDSS_MDP_INTF1;
else
@@ -674,6 +706,7 @@
ctl->start_fnc = mdss_mdp_cmd_start;
break;
case DTV_PANEL:
+ ctl->is_video_mode = true;
ctl->intf_num = MDSS_MDP_INTF3;
ctl->intf_type = MDSS_INTF_HDMI;
ctl->opmode = MDSS_MDP_CTL_OP_VIDEO_MODE;
@@ -881,7 +914,7 @@
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_LAYER(i), 0);
mixer = ctl->mixer_left;
- mdss_mdp_pp_resume(mixer->num);
+ mdss_mdp_pp_resume(ctl, mixer->num);
mixer->params_changed++;
temp = MDSS_MDP_REG_READ(MDSS_MDP_REG_DISP_INTF_SEL);
@@ -928,7 +961,7 @@
ret = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_RESET, NULL);
if (ret) {
pr_err("panel power on failed ctl=%d\n", ctl->num);
- return ret;
+ goto error;
}
ret = mdss_mdp_ctl_start_sub(ctl);
@@ -941,7 +974,7 @@
struct mdss_mdp_mixer *mixer = ctl->mixer_right;
u32 out, off;
- mdss_mdp_pp_resume(mixer->num);
+ mdss_mdp_pp_resume(ctl, mixer->num);
mixer->params_changed++;
out = (mixer->height << 16) | mixer->width;
off = MDSS_MDP_REG_LM_OFFSET(mixer->num);
@@ -951,6 +984,7 @@
}
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+error:
mutex_unlock(&ctl->lock);
return ret;
@@ -1085,14 +1119,6 @@
stage);
}
- if (mixercfg == MDSS_MDP_LM_BORDER_COLOR &&
- pipe->src_fmt->alpha_enable &&
- pipe->dst.w == mixer->width &&
- pipe->dst.h == mixer->height) {
- pr_debug("setting pipe=%d as BG_PIPE\n", pipe->num);
- bgalpha = 1;
- }
-
mixercfg |= stage << (3 * pipe->num);
mdp_mixer_write(mixer, off + MDSS_MDP_REG_LM_OP_MODE, blend_op);
@@ -1197,16 +1223,19 @@
struct mdss_mdp_mixer *mdss_mdp_mixer_get(struct mdss_mdp_ctl *ctl, int mux)
{
struct mdss_mdp_mixer *mixer = NULL;
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(ctl->mfd);
if (!ctl)
return NULL;
switch (mux) {
case MDSS_MDP_MIXER_MUX_DEFAULT:
case MDSS_MDP_MIXER_MUX_LEFT:
- mixer = ctl->mixer_left;
+ mixer = mdp5_data->mixer_swap ?
+ ctl->mixer_right : ctl->mixer_left;
break;
case MDSS_MDP_MIXER_MUX_RIGHT:
- mixer = ctl->mixer_right;
+ mixer = mdp5_data->mixer_swap ?
+ ctl->mixer_left : ctl->mixer_right;
break;
}
@@ -1236,6 +1265,7 @@
{
struct mdss_mdp_ctl *ctl;
struct mdss_mdp_mixer *mixer;
+ int i;
if (!pipe)
return -EINVAL;
@@ -1259,7 +1289,12 @@
if (params_changed) {
mixer->params_changed++;
- mixer->stage_pipe[pipe->mixer_stage] = pipe;
+ for (i = 0; i < MDSS_MDP_MAX_STAGE; i++) {
+ if (i == pipe->mixer_stage)
+ mixer->stage_pipe[i] = pipe;
+ else if (mixer->stage_pipe[i] == pipe)
+ mixer->stage_pipe[i] = NULL;
+ }
}
if (pipe->type == MDSS_MDP_PIPE_TYPE_DMA)
@@ -1292,9 +1327,10 @@
if (mutex_lock_interruptible(&ctl->lock))
return -EINTR;
- mixer->params_changed++;
- mixer->stage_pipe[pipe->mixer_stage] = NULL;
-
+ if (pipe == mixer->stage_pipe[pipe->mixer_stage]) {
+ mixer->params_changed++;
+ mixer->stage_pipe[pipe->mixer_stage] = NULL;
+ }
mutex_unlock(&ctl->lock);
return 0;
@@ -1311,6 +1347,71 @@
return 0;
}
+int mdss_mdp_display_wakeup_time(struct mdss_mdp_ctl *ctl,
+ ktime_t *wakeup_time)
+{
+ struct mdss_panel_info *pinfo;
+ u32 clk_rate, clk_period;
+ u32 current_line, total_line;
+ u32 time_of_line, time_to_vsync;
+ ktime_t current_time = ktime_get();
+
+ if (!ctl->read_line_cnt_fnc)
+ return -ENOSYS;
+
+ pinfo = &ctl->panel_data->panel_info;
+ if (!pinfo)
+ return -ENODEV;
+
+ clk_rate = mdss_mdp_get_pclk_rate(ctl);
+
+ clk_rate /= 1000; /* in kHz */
+ if (!clk_rate)
+ return -EINVAL;
+
+ /*
+ * calculate clk_period in picoseconds to maintain good
+ * accuracy with high pclk rates; the resulting value fits
+ * in a 17-bit range.
+ */
+ clk_period = 1000000000 / clk_rate;
+ if (!clk_period)
+ return -EINVAL;
+
+ time_of_line = (pinfo->lcdc.h_back_porch +
+ pinfo->lcdc.h_front_porch +
+ pinfo->lcdc.h_pulse_width +
+ pinfo->xres) * clk_period;
+
+ time_of_line /= 1000; /* in nanoseconds */
+ if (!time_of_line)
+ return -EINVAL;
+
+ current_line = ctl->read_line_cnt_fnc(ctl);
+
+ total_line = pinfo->lcdc.v_back_porch +
+ pinfo->lcdc.v_front_porch +
+ pinfo->lcdc.v_pulse_width +
+ pinfo->yres;
+
+ if (current_line > total_line)
+ return -EINVAL;
+
+ time_to_vsync = time_of_line * (total_line - current_line);
+ if (!time_to_vsync)
+ return -EINVAL;
+
+ *wakeup_time = ktime_add_ns(current_time, time_to_vsync);
+
+ pr_debug("clk_rate=%dkHz clk_period=%d cur_line=%d tot_line=%d\n",
+ clk_rate, clk_period, current_line, total_line);
+ pr_debug("time_to_vsync=%d current_time=%d wakeup_time=%d\n",
+ time_to_vsync, (int)ktime_to_ms(current_time),
+ (int)ktime_to_ms(*wakeup_time));
+
+ return 0;
+}
+
int mdss_mdp_display_wait4comp(struct mdss_mdp_ctl *ctl)
{
int ret;
diff --git a/drivers/video/msm/mdss/mdss_mdp_formats.h b/drivers/video/msm/mdss/mdss_mdp_formats.h
index c6d5fb9..acb8dc2 100644
--- a/drivers/video/msm/mdss/mdss_mdp_formats.h
+++ b/drivers/video/msm/mdss/mdss_mdp_formats.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -108,6 +108,8 @@
FMT_YUV_COMMON(fmt), \
.fetch_planes = MDSS_MDP_PLANE_PLANAR, \
.chroma_sample = samp, \
+ .bpp = 1, \
+ .unpack_count = 1, \
.element = { (e0), (e1) } \
}
@@ -134,9 +136,9 @@
FMT_YUV_PSEUDO(MDP_Y_CBCR_H2V2_VENUS, MDSS_MDP_CHROMA_420,
C1_B_Cb, C2_R_Cr),
- FMT_YUV_PLANR(MDP_Y_CR_CB_H2V2, MDSS_MDP_CHROMA_420, C2_R_Cr, C1_B_Cb),
- FMT_YUV_PLANR(MDP_Y_CB_CR_H2V2, MDSS_MDP_CHROMA_420, C1_B_Cb, C2_R_Cr),
- FMT_YUV_PLANR(MDP_Y_CR_CB_GH2V2, MDSS_MDP_CHROMA_420, C2_R_Cr, C1_B_Cb),
+ FMT_YUV_PLANR(MDP_Y_CB_CR_H2V2, MDSS_MDP_CHROMA_420, C2_R_Cr, C1_B_Cb),
+ FMT_YUV_PLANR(MDP_Y_CR_CB_H2V2, MDSS_MDP_CHROMA_420, C1_B_Cb, C2_R_Cr),
+ FMT_YUV_PLANR(MDP_Y_CR_CB_GH2V2, MDSS_MDP_CHROMA_420, C1_B_Cb, C2_R_Cr),
{
FMT_YUV_COMMON(MDP_YCBCR_H1V1),
diff --git a/drivers/video/msm/mdss/mdss_mdp_hwio.h b/drivers/video/msm/mdss/mdss_mdp_hwio.h
index d50f47e..6ec5b63 100644
--- a/drivers/video/msm/mdss/mdss_mdp_hwio.h
+++ b/drivers/video/msm/mdss/mdss_mdp_hwio.h
@@ -190,8 +190,7 @@
#define MDSS_MDP_REG_SSPP_CURRENT_SRC1_ADDR 0x0A8
#define MDSS_MDP_REG_SSPP_CURRENT_SRC2_ADDR 0x0AC
#define MDSS_MDP_REG_SSPP_CURRENT_SRC3_ADDR 0x0B0
-#define MDSS_MDP_REG_SSPP_LINE_SKIP_STEP_C03 0x0B4
-#define MDSS_MDP_REG_SSPP_LINE_SKIP_STEP_C12 0x0B8
+#define MDSS_MDP_REG_SSPP_DECIMATION_CONFIG 0x0B4
#define MDSS_MDP_REG_VIG_OP_MODE 0x200
#define MDSS_MDP_REG_VIG_QSEED2_CONFIG 0x204
@@ -349,6 +348,43 @@
#define MDSS_MDP_REG_WB_OUT_SIZE 0x074
#define MDSS_MDP_REG_WB_ALPHA_X_VALUE 0x078
#define MDSS_MDP_REG_WB_CSC_BASE 0x260
+#define MDSS_MDP_REG_WB_DST_ADDR_SW_STATUS 0x2B0
+
+
+#define MDSS_MDP_REG_AD_BYPASS 0x000
+#define MDSS_MDP_REG_AD_CTRL_0 0x004
+#define MDSS_MDP_REG_AD_CTRL_1 0x008
+#define MDSS_MDP_REG_AD_FRAME_SIZE 0x00C
+#define MDSS_MDP_REG_AD_CON_CTRL_0 0x010
+#define MDSS_MDP_REG_AD_CON_CTRL_1 0x014
+#define MDSS_MDP_REG_AD_STR_MAN 0x018
+#define MDSS_MDP_REG_AD_VAR 0x01C
+#define MDSS_MDP_REG_AD_DITH 0x020
+#define MDSS_MDP_REG_AD_DITH_CTRL 0x024
+#define MDSS_MDP_REG_AD_AMP_LIM 0x028
+#define MDSS_MDP_REG_AD_SLOPE 0x02C
+#define MDSS_MDP_REG_AD_BW_LVL 0x030
+#define MDSS_MDP_REG_AD_LOGO_POS 0x034
+#define MDSS_MDP_REG_AD_LUT_FI 0x038
+#define MDSS_MDP_REG_AD_LUT_CC 0x07C
+#define MDSS_MDP_REG_AD_STR_LIM 0x0C8
+#define MDSS_MDP_REG_AD_CALIB_AB 0x0CC
+#define MDSS_MDP_REG_AD_CALIB_CD 0x0D0
+#define MDSS_MDP_REG_AD_MODE_SEL 0x0D4
+#define MDSS_MDP_REG_AD_TFILT_CTRL 0x0D8
+#define MDSS_MDP_REG_AD_BL_MINMAX 0x0DC
+#define MDSS_MDP_REG_AD_BL 0x0E0
+#define MDSS_MDP_REG_AD_BL_MAX 0x0E8
+#define MDSS_MDP_REG_AD_AL 0x0EC
+#define MDSS_MDP_REG_AD_AL_MIN 0x0F0
+#define MDSS_MDP_REG_AD_AL_FILT 0x0F4
+#define MDSS_MDP_REG_AD_CFG_BUF 0x0F8
+#define MDSS_MDP_REG_AD_LUT_AL 0x100
+#define MDSS_MDP_REG_AD_TARG_STR 0x144
+#define MDSS_MDP_REG_AD_START_CALC 0x148
+#define MDSS_MDP_REG_AD_STR_OUT 0x14C
+#define MDSS_MDP_REG_AD_BL_OUT 0x154
+#define MDSS_MDP_REG_AD_CALC_DONE 0x158
enum mdss_mdp_dspp_index {
MDSS_MDP_DSPP0,
@@ -402,6 +438,9 @@
#define MDSS_MDP_REG_INTF_TEST_CTL 0x054
#define MDSS_MDP_REG_INTF_TP_COLOR0 0x058
#define MDSS_MDP_REG_INTF_TP_COLOR1 0x05C
+#define MDSS_MDP_REG_INTF_FRAME_LINE_COUNT_EN 0x0A8
+#define MDSS_MDP_REG_INTF_FRAME_COUNT 0x0AC
+#define MDSS_MDP_REG_INTF_LINE_COUNT 0x0B0
#define MDSS_MDP_REG_INTF_DEFLICKER_CONFIG 0x0F0
#define MDSS_MDP_REG_INTF_DEFLICKER_STRNG_COEFF 0x0F4
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c b/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
index 006a8dd..eb2bd21 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
@@ -15,30 +15,50 @@
#include "mdss_panel.h"
#include "mdss_mdp.h"
+#define VSYNC_EXPIRE_TICK 4
+
#define START_THRESHOLD 4
#define CONTINUE_TRESHOLD 4
#define MAX_SESSIONS 2
+/* wait for at most 2 vsync for lowest refresh rate (24hz) */
+#define KOFF_TIMEOUT msecs_to_jiffies(84)
+
struct mdss_mdp_cmd_ctx {
u32 pp_num;
u8 ref_cnt;
-
struct completion pp_comp;
+ struct completion stop_comp;
atomic_t vsync_ref;
- spinlock_t vsync_lock;
- mdp_vsync_handler_t vsync_handler;
+ mdp_vsync_handler_t send_vsync;
int panel_on;
+ int koff_cnt;
+ int clk_enabled;
+ int clk_control;
+ int vsync_enabled;
+ int expire;
+ struct mutex clk_mtx;
+ spinlock_t clk_lock;
+ struct work_struct clk_work;
/* te config */
u8 tear_check;
- u16 total_lcd_lines;
- u16 v_porch; /* vertical porches */
- u32 vsync_cnt;
+ u16 height; /* panel height */
+ u16 vporch; /* vertical porches */
+ u32 vclk_line; /* vsync clock per line */
};
struct mdss_mdp_cmd_ctx mdss_mdp_cmd_ctx_list[MAX_SESSIONS];
+/*
+ * TE configuration:
+ * dsi byte clock calculated based on 70 fps
+ * around 14 ms to complete a kickoff cycle if te disabled
+ * vclk_line based on 60 fps
+ * write is faster than read
+ * init == start == rdptr
+ */
static int mdss_mdp_cmd_tearcheck_cfg(struct mdss_mdp_mixer *mixer,
struct mdss_mdp_cmd_ctx *ctx, int enable)
{
@@ -47,15 +67,21 @@
cfg = BIT(19); /* VSYNC_COUNTER_EN */
if (ctx->tear_check)
cfg |= BIT(20); /* VSYNC_IN_EN */
- cfg |= ctx->vsync_cnt;
+ cfg |= ctx->vclk_line;
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_CONFIG_VSYNC, cfg);
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_CONFIG_HEIGHT,
0xfff0); /* set to verh height */
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_VSYNC_INIT_VAL, 0);
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_RD_PTR_IRQ, 0);
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_START_POS, ctx->v_porch);
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_VSYNC_INIT_VAL,
+ ctx->height);
+
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_RD_PTR_IRQ,
+ ctx->height + 1);
+
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_START_POS,
+ ctx->height);
+
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_THRESH,
(CONTINUE_TRESHOLD << 16) | (START_THRESHOLD));
@@ -73,7 +99,6 @@
if (pinfo->mipi.vsync_enable && enable) {
u32 mdp_vsync_clk_speed_hz, total_lines;
- u32 vsync_cnt_cfg_dem;
mdss_mdp_vsync_clk_enable(1);
@@ -88,21 +113,18 @@
}
ctx->tear_check = pinfo->mipi.hw_vsync_mode;
-
- total_lines = pinfo->lcdc.v_back_porch +
- pinfo->lcdc.v_front_porch +
- pinfo->lcdc.v_pulse_width + pinfo->yres;
-
- vsync_cnt_cfg_dem =
- mult_frac(pinfo->mipi.frame_rate * total_lines,
- 1, 100);
-
- ctx->vsync_cnt = mdp_vsync_clk_speed_hz / vsync_cnt_cfg_dem;
-
- ctx->v_porch = pinfo->lcdc.v_back_porch +
+ ctx->height = pinfo->yres;
+ ctx->vporch = pinfo->lcdc.v_back_porch +
pinfo->lcdc.v_front_porch +
pinfo->lcdc.v_pulse_width;
- ctx->total_lcd_lines = total_lines;
+
+ total_lines = ctx->height + ctx->vporch;
+ total_lines *= pinfo->mipi.frame_rate;
+ ctx->vclk_line = mdp_vsync_clk_speed_hz / total_lines;
+
+ pr_debug("%s: fr=%d tline=%d vcnt=%d vrate=%d\n",
+ __func__, pinfo->mipi.frame_rate, total_lines,
+ ctx->vclk_line, mdp_vsync_clk_speed_hz);
} else {
enable = 0;
}
@@ -118,54 +140,6 @@
return 0;
}
-static inline void cmd_readptr_irq_enable(struct mdss_mdp_ctl *ctl)
-{
- struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
-
- if (atomic_inc_return(&ctx->vsync_ref) == 1) {
- pr_debug("%s:\n", __func__);
- mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num);
- }
-}
-
-static inline void cmd_readptr_irq_disable(struct mdss_mdp_ctl *ctl)
-{
- struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
-
- if (atomic_dec_return(&ctx->vsync_ref) == 0) {
- pr_debug("%s:\n", __func__);
- mdss_mdp_irq_disable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
- ctx->pp_num);
- }
-}
-
-int mdss_mdp_cmd_set_vsync_handler(struct mdss_mdp_ctl *ctl,
- mdp_vsync_handler_t vsync_handler)
-{
- struct mdss_mdp_cmd_ctx *ctx;
- unsigned long flags;
-
- ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
- if (!ctx) {
- pr_err("invalid ctx for ctl=%d\n", ctl->num);
- return -ENODEV;
- }
-
- spin_lock_irqsave(&ctx->vsync_lock, flags);
-
- if (!ctx->vsync_handler && vsync_handler) {
- ctx->vsync_handler = vsync_handler;
- cmd_readptr_irq_enable(ctl);
- } else if (ctx->vsync_handler && !vsync_handler) {
- cmd_readptr_irq_disable(ctl);
- ctx->vsync_handler = vsync_handler;
- }
-
- spin_unlock_irqrestore(&ctx->vsync_lock, flags);
-
- return 0;
-}
-
static void mdss_mdp_cmd_readptr_done(void *arg)
{
struct mdss_mdp_ctl *ctl = arg;
@@ -177,15 +151,29 @@
return;
}
- pr_debug("%s: ctl=%d intf_num=%d\n", __func__, ctl->num, ctl->intf_num);
+ pr_debug("%s: num=%d ctx=%d expire=%d koff=%d\n", __func__, ctl->num,
+ ctx->pp_num, ctx->expire, ctx->koff_cnt);
vsync_time = ktime_get();
ctl->vsync_cnt++;
- spin_lock(&ctx->vsync_lock);
- if (ctx->vsync_handler)
- ctx->vsync_handler(ctl, vsync_time);
- spin_unlock(&ctx->vsync_lock);
+ spin_lock(&ctx->clk_lock);
+ if (ctx->send_vsync)
+ ctx->send_vsync(ctl, vsync_time);
+
+ if (ctx->expire) {
+ ctx->expire--;
+ if (ctx->expire == 0) {
+ if (ctx->koff_cnt <= 0) {
+ ctx->clk_control = 1;
+ schedule_work(&ctx->clk_work);
+ } else {
+ /* put off one vsync */
+ ctx->expire += 1;
+ }
+ }
+ }
+ spin_unlock(&ctx->clk_lock);
}
static void mdss_mdp_cmd_pingpong_done(void *arg)
@@ -193,12 +181,161 @@
struct mdss_mdp_ctl *ctl = arg;
struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
- pr_debug("%s: intf_num=%d ctx=%p\n", __func__, ctl->intf_num, ctx);
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return;
+ }
+ spin_lock(&ctx->clk_lock);
mdss_mdp_irq_disable_nosync(MDSS_MDP_IRQ_PING_PONG_COMP, ctx->pp_num);
- if (ctx)
- complete(&ctx->pp_comp);
+ complete_all(&ctx->pp_comp);
+
+ if (ctx->koff_cnt)
+ ctx->koff_cnt--;
+
+ pr_debug("%s: ctl_num=%d intf_num=%d ctx=%d kcnt=%d\n", __func__,
+ ctl->num, ctl->intf_num, ctx->pp_num, ctx->koff_cnt);
+
+ spin_unlock(&ctx->clk_lock);
+}
+
+static void clk_ctrl_work(struct work_struct *work)
+{
+ unsigned long flags;
+ struct mdss_mdp_cmd_ctx *ctx =
+ container_of(work, typeof(*ctx), clk_work);
+
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return;
+ }
+
+ pr_debug("%s:ctx=%p num=%d\n", __func__, ctx, ctx->pp_num);
+
+ mutex_lock(&ctx->clk_mtx);
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ if (ctx->clk_control && ctx->clk_enabled) {
+ ctx->clk_enabled = 0;
+ ctx->clk_control = 0;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ /*
+ * make sure dsi link is idle here
+ */
+ ctx->vsync_enabled = 0;
+ mdss_mdp_irq_disable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
+ ctx->pp_num);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ complete(&ctx->stop_comp);
+ pr_debug("%s: SET_CLK_OFF, pid=%d\n", __func__, current->pid);
+ } else {
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+}
+
+static int mdss_mdp_cmd_vsync_ctrl(struct mdss_mdp_ctl *ctl,
+ struct mdss_mdp_vsync_handler *handler)
+{
+ struct mdss_mdp_cmd_ctx *ctx;
+ unsigned long flags;
+ int enable;
+
+ ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return -ENODEV;
+ }
+
+ enable = (handler->vsync_handler != NULL);
+
+ pr_debug("%s: ctx=%p ctx=%d enabled=%d %d clk_enabled=%d clk_ctrl=%d\n",
+ __func__, ctx, ctx->pp_num, ctx->vsync_enabled, enable,
+ ctx->clk_enabled, ctx->clk_control);
+
+ mutex_lock(&ctx->clk_mtx);
+ if (ctx->vsync_enabled == enable) {
+ mutex_unlock(&ctx->clk_mtx);
+ return 0;
+ }
+
+ if (enable) {
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->clk_control = 0;
+ ctx->expire = 0;
+ ctx->send_vsync = handler->vsync_handler;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ if (ctx->clk_enabled == 0) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
+ ctx->pp_num);
+ ctx->vsync_enabled = 1;
+ ctx->clk_enabled = 1;
+ pr_debug("%s: SET_CLK_ON, pid=%d\n", __func__,
+ current->pid);
+ }
+ } else {
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->expire = VSYNC_EXPIRE_TICK;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+
+ return 0;
+}
+
+static void mdss_mdp_cmd_chk_clock(struct mdss_mdp_cmd_ctx *ctx)
+{
+ unsigned long flags;
+ int set_clk_on = 0;
+
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return;
+ }
+
+ pr_debug("%s: ctx=%p num=%d clk_enabled=%d\n", __func__,
+ ctx, ctx->pp_num, ctx->clk_enabled);
+
+ mutex_lock(&ctx->clk_mtx);
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->koff_cnt++;
+ ctx->clk_control = 0;
+ ctx->expire = VSYNC_EXPIRE_TICK;
+ if (ctx->clk_enabled == 0) {
+ set_clk_on++;
+ ctx->clk_enabled = 1;
+ }
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+
+ if (set_clk_on) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ ctx->vsync_enabled = 1;
+ mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num);
+ pr_debug("%s: ctx=%p num=%d SET_CLK_ON\n", __func__,
+ ctx, ctx->pp_num);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+}
+
+static int mdss_mdp_cmd_wait4comp(struct mdss_mdp_ctl *ctl, void *arg)
+{
+ struct mdss_mdp_cmd_ctx *ctx;
+ int rc;
+
+ ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return -ENODEV;
+ }
+
+ pr_debug("%s: intf_num=%d ctx=%p\n", __func__, ctl->intf_num, ctx);
+
+ rc = wait_for_completion_interruptible_timeout(&ctx->pp_comp,
+ KOFF_TIMEOUT);
+ WARN(rc <= 0, "cmd kickoff timed out (%d) ctl=%d\n", rc, ctl->num);
+
+ return rc;
}
int mdss_mdp_cmd_kickoff(struct mdss_mdp_ctl *ctl, void *arg)
@@ -207,14 +344,16 @@
int rc;
ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
- pr_debug("%s: kickoff intf_num=%d ctx=%p\n", __func__,
- ctl->intf_num, ctx);
-
if (!ctx) {
pr_err("invalid ctx\n");
return -ENODEV;
}
+ pr_debug("%s: kickoff intf_num=%d ctx=%p\n", __func__,
+ ctl->intf_num, ctx);
+
+ mdss_mdp_cmd_chk_clock(ctx);
+
if (ctx->panel_on == 0) {
rc = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_UNBLANK, NULL);
WARN(rc, "intf %d unblank error (%d)\n", ctl->intf_num, rc);
@@ -230,27 +369,39 @@
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_START, 1);
- wait_for_completion_interruptible(&ctx->pp_comp);
-
return 0;
}
int mdss_mdp_cmd_stop(struct mdss_mdp_ctl *ctl)
{
struct mdss_mdp_cmd_ctx *ctx;
+ int need_wait = 0;
+ struct mdss_mdp_vsync_handler null_handle;
int ret;
- pr_debug("%s: +\n", __func__);
-
ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
if (!ctx) {
pr_err("invalid ctx\n");
return -ENODEV;
}
+ pr_debug("%s:+ vaync_enable=%d expire=%d\n", __func__,
+ ctx->vsync_enabled, ctx->expire);
+
+ mutex_lock(&ctx->clk_mtx);
+ if (ctx->vsync_enabled) {
+ INIT_COMPLETION(ctx->stop_comp);
+ need_wait = 1;
+ }
+ mutex_unlock(&ctx->clk_mtx);
+
+ if (need_wait)
+ wait_for_completion_interruptible(&ctx->stop_comp);
+
ctx->panel_on = 0;
- mdss_mdp_cmd_set_vsync_handler(ctl, NULL);
+ null_handle.vsync_handler = NULL;
+ mdss_mdp_cmd_vsync_ctrl(ctl, &null_handle);
mdss_mdp_set_intr_callback(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctl->intf_num,
NULL, NULL);
@@ -265,7 +416,6 @@
ret = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_PANEL_OFF, NULL);
WARN(ret, "intf %d unblank error (%d)\n", ctl->intf_num, ret);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
pr_debug("%s:-\n", __func__);
@@ -280,8 +430,6 @@
pr_debug("%s:+\n", __func__);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
-
mixer = mdss_mdp_mixer_get(ctl, MDSS_MDP_MIXER_MUX_LEFT);
if (!mixer) {
pr_err("mixer not setup correctly\n");
@@ -308,8 +456,14 @@
ctx->pp_num = mixer->num;
init_completion(&ctx->pp_comp);
- spin_lock_init(&ctx->vsync_lock);
+ init_completion(&ctx->stop_comp);
atomic_set(&ctx->vsync_ref, 0);
+ spin_lock_init(&ctx->clk_lock);
+ mutex_init(&ctx->clk_mtx);
+ INIT_WORK(&ctx->clk_work, clk_ctrl_work);
+
+ pr_debug("%s: ctx=%p num=%d mixer=%d\n", __func__,
+ ctx, ctx->pp_num, mixer->num);
mdss_mdp_set_intr_callback(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num,
mdss_mdp_cmd_readptr_done, ctl);
@@ -325,7 +479,9 @@
ctl->stop_fnc = mdss_mdp_cmd_stop;
ctl->display_fnc = mdss_mdp_cmd_kickoff;
- ctl->set_vsync_handler = mdss_mdp_cmd_set_vsync_handler;
+ ctl->wait_fnc = mdss_mdp_cmd_wait4comp;
+ ctl->add_vsync_handler = mdss_mdp_cmd_vsync_ctrl;
+ ctl->remove_vsync_handler = mdss_mdp_cmd_vsync_ctrl;
pr_debug("%s:-\n", __func__);
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_video.c b/drivers/video/msm/mdss/mdss_mdp_intf_video.c
index 6e631e9..72cbed9 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_video.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_video.c
@@ -13,11 +13,18 @@
#define pr_fmt(fmt) "%s: " fmt, __func__
+#include <linux/iopoll.h>
+#include <linux/delay.h>
+#include <linux/dma-mapping.h>
+
#include "mdss_fb.h"
#include "mdss_mdp.h"
-/* wait for at most 2 vsync for lowest refresh rate (24hz) */
-#define VSYNC_TIMEOUT msecs_to_jiffies(84)
+/* wait for at least 2 vsyncs for lowest refresh rate (24hz) */
+#define VSYNC_TIMEOUT_US 100000
+
+#define MDP_INTR_MASK_INTF_VSYNC(intf_num) \
+ (1 << (2 * (intf_num - MDSS_MDP_INTF0) + MDSS_MDP_IRQ_INTF_VSYNC))
/* intf timing settings */
struct intf_timing_params {
@@ -45,12 +52,14 @@
u8 ref_cnt;
u8 timegen_en;
+ bool polling_en;
+ u32 poll_cnt;
struct completion vsync_comp;
int wait_pending;
atomic_t vsync_ref;
spinlock_t vsync_lock;
- mdp_vsync_handler_t vsync_handler;
+ struct list_head vsync_handlers;
};
static inline void mdp_video_write(struct mdss_mdp_video_ctx *ctx,
@@ -59,6 +68,22 @@
writel_relaxed(val, ctx->base + reg);
}
+static inline u32 mdp_video_read(struct mdss_mdp_video_ctx *ctx,
+ u32 reg)
+{
+ return readl_relaxed(ctx->base + reg);
+}
+
+static inline u32 mdss_mdp_video_line_count(struct mdss_mdp_ctl *ctl)
+{
+ struct mdss_mdp_video_ctx *ctx = ctl->priv_data;
+ u32 line_cnt = 0;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ line_cnt = mdp_video_read(ctx, MDSS_MDP_REG_INTF_LINE_COUNT);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ return line_cnt;
+}
+
int mdss_mdp_video_addr_setup(struct mdss_data_type *mdata,
u32 *offsets, u32 count)
{
@@ -76,6 +101,7 @@
offsets[i], head[i].base);
head[i].ref_cnt = 0;
head[i].intf_num = i + MDSS_MDP_INTF0;
+ INIT_LIST_HEAD(&head[i].vsync_handlers);
}
mdata->video_intf = head;
@@ -171,18 +197,19 @@
p->underflow_clr);
mdp_video_write(ctx, MDSS_MDP_REG_INTF_HSYNC_SKEW, p->hsync_skew);
mdp_video_write(ctx, MDSS_MDP_REG_INTF_POLARITY_CTL, polarity_ctl);
+ mdp_video_write(ctx, MDSS_MDP_REG_INTF_FRAME_LINE_COUNT_EN, 0x3);
return 0;
}
-static inline void video_vsync_irq_enable(struct mdss_mdp_ctl *ctl)
+static inline void video_vsync_irq_enable(struct mdss_mdp_ctl *ctl, bool clear)
{
struct mdss_mdp_video_ctx *ctx = ctl->priv_data;
if (atomic_inc_return(&ctx->vsync_ref) == 1)
mdss_mdp_irq_enable(MDSS_MDP_IRQ_INTF_VSYNC, ctl->intf_num);
- else
+ else if (clear)
mdss_mdp_irq_clear(ctl->mdata, MDSS_MDP_IRQ_INTF_VSYNC,
ctl->intf_num);
}
@@ -195,12 +222,45 @@
mdss_mdp_irq_disable(MDSS_MDP_IRQ_INTF_VSYNC, ctl->intf_num);
}
-static int mdss_mdp_video_set_vsync_handler(struct mdss_mdp_ctl *ctl,
- mdp_vsync_handler_t vsync_handler)
+static int mdss_mdp_video_add_vsync_handler(struct mdss_mdp_ctl *ctl,
+ struct mdss_mdp_vsync_handler *handle)
{
struct mdss_mdp_video_ctx *ctx;
unsigned long flags;
- int need_update;
+ int ret = 0;
+ bool irq_en = false;
+
+ if (!handle || !(handle->vsync_handler)) {
+ ret = -EINVAL;
+ goto exit;
+ }
+
+ ctx = (struct mdss_mdp_video_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("invalid ctx for ctl=%d\n", ctl->num);
+ ret = -ENODEV;
+ goto exit;
+ }
+
+ spin_lock_irqsave(&ctx->vsync_lock, flags);
+ if (!handle->enabled) {
+ handle->enabled = true;
+ list_add(&handle->list, &ctx->vsync_handlers);
+ irq_en = true;
+ }
+ spin_unlock_irqrestore(&ctx->vsync_lock, flags);
+ if (irq_en)
+ video_vsync_irq_enable(ctl, false);
+exit:
+ return ret;
+}
+
+static int mdss_mdp_video_remove_vsync_handler(struct mdss_mdp_ctl *ctl,
+ struct mdss_mdp_vsync_handler *handle)
+{
+ struct mdss_mdp_video_ctx *ctx;
+ unsigned long flags;
+ bool irq_dis = false;
ctx = (struct mdss_mdp_video_ctx *) ctl->priv_data;
if (!ctx) {
@@ -209,24 +269,21 @@
}
spin_lock_irqsave(&ctx->vsync_lock, flags);
- need_update = (!ctx->vsync_handler && vsync_handler) ||
- (ctx->vsync_handler && !vsync_handler);
- ctx->vsync_handler = vsync_handler;
- spin_unlock_irqrestore(&ctx->vsync_lock, flags);
-
- if (need_update) {
- if (vsync_handler)
- video_vsync_irq_enable(ctl);
- else
- video_vsync_irq_disable(ctl);
+ if (handle->enabled) {
+ handle->enabled = false;
+ list_del_init(&handle->list);
+ irq_dis = true;
}
-
+ spin_unlock_irqrestore(&ctx->vsync_lock, flags);
+ if (irq_dis)
+ video_vsync_irq_disable(ctl);
return 0;
}
static int mdss_mdp_video_stop(struct mdss_mdp_ctl *ctl)
{
struct mdss_mdp_video_ctx *ctx;
+ struct mdss_mdp_vsync_handler *tmp, *handle;
int rc;
pr_debug("stop ctl=%d\n", ctl->num);
@@ -257,7 +314,8 @@
ctl->intf_num);
}
- mdss_mdp_video_set_vsync_handler(ctl, NULL);
+ list_for_each_entry_safe(handle, tmp, &ctx->vsync_handlers, list)
+ mdss_mdp_video_remove_vsync_handler(ctl, handle);
mdss_mdp_set_intr_callback(MDSS_MDP_IRQ_INTF_VSYNC, ctl->intf_num,
NULL, NULL);
@@ -274,6 +332,7 @@
{
struct mdss_mdp_ctl *ctl = arg;
struct mdss_mdp_video_ctx *ctx = ctl->priv_data;
+ struct mdss_mdp_vsync_handler *tmp;
ktime_t vsync_time;
if (!ctx) {
@@ -284,15 +343,56 @@
vsync_time = ktime_get();
ctl->vsync_cnt++;
- pr_debug("intr ctl=%d vsync cnt=%u\n", ctl->num, ctl->vsync_cnt);
+ pr_debug("intr ctl=%d vsync cnt=%u vsync_time=%d\n",
+ ctl->num, ctl->vsync_cnt, (int)ktime_to_ms(vsync_time));
+ ctx->polling_en = false;
complete_all(&ctx->vsync_comp);
spin_lock(&ctx->vsync_lock);
- if (ctx->vsync_handler)
- ctx->vsync_handler(ctl, vsync_time);
+ list_for_each_entry(tmp, &ctx->vsync_handlers, list) {
+ tmp->vsync_handler(ctl, vsync_time);
+ }
spin_unlock(&ctx->vsync_lock);
}
+static int mdss_mdp_video_pollwait(struct mdss_mdp_ctl *ctl)
+{
+ struct mdss_mdp_video_ctx *ctx = ctl->priv_data;
+ u32 mask, status;
+ int rc;
+
+ mask = MDP_INTR_MASK_INTF_VSYNC(ctl->intf_num);
+
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ rc = readl_poll_timeout(ctl->mdata->mdp_base + MDSS_MDP_REG_INTR_STATUS,
+ status,
+ (status & mask) || try_wait_for_completion(&ctx->vsync_comp),
+ 1000,
+ VSYNC_TIMEOUT_US);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+
+ if (rc == 0) {
+ pr_debug("vsync poll successful! rc=%d status=0x%x\n",
+ rc, status);
+ ctx->poll_cnt++;
+ if (status) {
+ struct mdss_mdp_vsync_handler *tmp;
+ unsigned long flags;
+ ktime_t vsync_time = ktime_get();
+
+ spin_lock_irqsave(&ctx->vsync_lock, flags);
+ list_for_each_entry(tmp, &ctx->vsync_handlers, list)
+ tmp->vsync_handler(ctl, vsync_time);
+ spin_unlock_irqrestore(&ctx->vsync_lock, flags);
+ }
+ } else {
+ pr_warn("vsync poll timed out! rc=%d status=0x%x mask=0x%x\n",
+ rc, status, mask);
+ }
+
+ return rc;
+}
+
static int mdss_mdp_video_wait4comp(struct mdss_mdp_ctl *ctl, void *arg)
{
struct mdss_mdp_video_ctx *ctx;
@@ -306,9 +406,22 @@
WARN(!ctx->wait_pending, "waiting without commit! ctl=%d", ctl->num);
- rc = wait_for_completion_interruptible_timeout(&ctx->vsync_comp,
- VSYNC_TIMEOUT);
- WARN(rc <= 0, "vsync timed out (%d) ctl=%d\n", rc, ctl->num);
+ if (ctx->polling_en) {
+ rc = mdss_mdp_video_pollwait(ctl);
+ } else {
+ rc = wait_for_completion_interruptible_timeout(&ctx->vsync_comp,
+ usecs_to_jiffies(VSYNC_TIMEOUT_US));
+ if (rc < 0) {
+ pr_warn("vsync wait interrupted ctl=%d\n", ctl->num);
+ } else if (rc == 0) {
+ pr_warn("vsync wait timeout %d, fallback to poll mode\n",
+ ctl->num);
+ ctx->polling_en = true;
+ rc = mdss_mdp_video_pollwait(ctl);
+ } else {
+ rc = 0;
+ }
+ }
if (ctx->wait_pending) {
ctx->wait_pending = 0;
@@ -325,7 +438,7 @@
return;
ctl->underrun_cnt++;
- pr_warn("display underrun detected for ctl=%d count=%d\n", ctl->num,
+ pr_debug("display underrun detected for ctl=%d count=%d\n", ctl->num,
ctl->underrun_cnt);
}
@@ -344,7 +457,7 @@
if (!ctx->wait_pending) {
ctx->wait_pending++;
- video_vsync_irq_enable(ctl);
+ video_vsync_irq_enable(ctl, true);
INIT_COMPLETION(ctx->vsync_comp);
} else {
WARN(1, "commit without wait! ctl=%d", ctl->num);
@@ -352,7 +465,13 @@
if (!ctx->timegen_en) {
rc = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_UNBLANK, NULL);
- WARN(rc, "intf %d unblank error (%d)\n", ctl->intf_num, rc);
+ if (rc) {
+ pr_warn("intf #%d unblank error (%d)\n",
+ ctl->intf_num, rc);
+ video_vsync_irq_disable(ctl);
+ ctx->wait_pending = 0;
+ return rc;
+ }
pr_debug("enabling timing gen for intf=%d\n", ctl->intf_num);
@@ -363,7 +482,7 @@
wmb();
rc = wait_for_completion_interruptible_timeout(&ctx->vsync_comp,
- VSYNC_TIMEOUT);
+ usecs_to_jiffies(VSYNC_TIMEOUT_US));
WARN(rc <= 0, "timeout (%d) enabling timegen on ctl=%d\n",
rc, ctl->num);
@@ -375,6 +494,92 @@
return 0;
}
+int mdss_mdp_video_copy_splash_screen(struct mdss_panel_data *pdata)
+{
+ void *virt = NULL;
+ unsigned long bl_fb_addr = 0;
+ unsigned long *bl_fb_addr_va;
+ unsigned long pipe_addr, pipe_src_size;
+ u32 height, width, rgb_size, bpp;
+ size_t size;
+ static struct ion_handle *ihdl;
+ struct ion_client *iclient = mdss_get_ionclient();
+ static ion_phys_addr_t phys;
+
+ pipe_addr = MDSS_MDP_REG_SSPP_OFFSET(3) +
+ MDSS_MDP_REG_SSPP_SRC0_ADDR;
+ pipe_src_size =
+ MDSS_MDP_REG_SSPP_OFFSET(3) + MDSS_MDP_REG_SSPP_SRC_SIZE;
+
+ bpp = 3;
+ rgb_size = MDSS_MDP_REG_READ(pipe_src_size);
+ bl_fb_addr = MDSS_MDP_REG_READ(pipe_addr);
+
+ height = (rgb_size >> 16) & 0xffff;
+ width = rgb_size & 0xffff;
+ size = PAGE_ALIGN(height * width * bpp);
+ pr_debug("%s:%d splash_height=%d splash_width=%d Buffer size=%d\n",
+ __func__, __LINE__, height, width, size);
+
+ ihdl = ion_alloc(iclient, size, SZ_1M,
+ ION_HEAP(ION_QSECOM_HEAP_ID), 0);
+ if (IS_ERR_OR_NULL(ihdl)) {
+ pr_err("unable to alloc fbmem from ion (%p)\n", ihdl);
+ return -ENOMEM;
+ }
+
+ pdata->panel_info.splash_ihdl = ihdl;
+
+ virt = ion_map_kernel(iclient, ihdl);
+ ion_phys(iclient, ihdl, &phys, &size);
+
+ pr_debug("%s %d Allocating %u bytes at 0x%lx (%pa phys)\n",
+ __func__, __LINE__, size,
+ (unsigned long int)virt, &phys);
+
+ bl_fb_addr_va = (unsigned long *)ioremap(bl_fb_addr, size);
+
+ memcpy(virt, bl_fb_addr_va, size);
+
+ MDSS_MDP_REG_WRITE(pipe_addr, phys);
+ MDSS_MDP_REG_WRITE(MDSS_MDP_REG_CTL_FLUSH + MDSS_MDP_REG_CTL_OFFSET(0),
+ 0x48);
+
+ return 0;
+}
+
+int mdss_mdp_video_reconfigure_splash_done(struct mdss_mdp_ctl *ctl)
+{
+ struct ion_client *iclient = mdss_get_ionclient();
+ struct mdss_panel_data *pdata;
+ int ret = 0, off;
+ int mdss_mdp_rev = MDSS_MDP_REG_READ(MDSS_MDP_REG_HW_VERSION);
+ int mdss_v2_intf_off = 0;
+
+ off = 0;
+
+ pdata = ctl->panel_data;
+
+ pdata->panel_info.cont_splash_enabled = 0;
+
+
+ mdss_mdp_ctl_write(ctl, 0, MDSS_MDP_LM_BORDER_COLOR);
+ off = MDSS_MDP_REG_INTF_OFFSET(ctl->intf_num);
+
+ if (mdss_mdp_rev == MDSS_MDP_HW_REV_102)
+ mdss_v2_intf_off = 0xEC00;
+
+ MDSS_MDP_REG_WRITE(off + MDSS_MDP_REG_INTF_TIMING_ENGINE_EN -
+ mdss_v2_intf_off, 0);
+ /* wait for 1 VSYNC for the pipe to be unstaged */
+ msleep(20);
+ ion_free(iclient, pdata->panel_info.splash_ihdl);
+ ret = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_CONT_SPLASH_FINISH,
+ NULL);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ return ret;
+}
+
int mdss_mdp_video_start(struct mdss_mdp_ctl *ctl)
{
struct mdss_data_type *mdata;
@@ -453,7 +658,9 @@
ctl->stop_fnc = mdss_mdp_video_stop;
ctl->display_fnc = mdss_mdp_video_display;
ctl->wait_fnc = mdss_mdp_video_wait4comp;
- ctl->set_vsync_handler = mdss_mdp_video_set_vsync_handler;
+ ctl->read_line_cnt_fnc = mdss_mdp_video_line_count;
+ ctl->add_vsync_handler = mdss_mdp_video_add_vsync_handler;
+ ctl->remove_vsync_handler = mdss_mdp_video_remove_vsync_handler;
return 0;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c b/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
index 7fbb031..0c08eda 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
@@ -17,6 +17,9 @@
#include "mdss_mdp.h"
#include "mdss_mdp_rotator.h"
+/* wait for at most 2 vsyncs at the lowest refresh rate (24hz) */
+#define KOFF_TIMEOUT msecs_to_jiffies(84)
+
enum mdss_mdp_writeback_type {
MDSS_MDP_WRITEBACK_TYPE_ROTATOR,
MDSS_MDP_WRITEBACK_TYPE_LINE,
@@ -28,14 +31,18 @@
char __iomem *base;
u8 ref_cnt;
u8 type;
+ struct completion wb_comp;
+ int comp_cnt;
u32 intr_type;
u32 intf_num;
u32 opmode;
- u32 format;
+ struct mdss_mdp_format_params *dst_fmt;
u16 width;
u16 height;
+ struct mdss_mdp_img_rect dst_rect;
+
u8 rot90;
u32 bwc_mode;
int initialized;
@@ -81,48 +88,55 @@
}
static int mdss_mdp_writeback_addr_setup(struct mdss_mdp_writeback_ctx *ctx,
- struct mdss_mdp_data *data)
+ const struct mdss_mdp_data *in_data)
{
int ret;
+ struct mdss_mdp_data data;
- if (!data)
+ if (!in_data)
return -EINVAL;
+ data = *in_data;
- pr_debug("wb_num=%d addr=0x%x\n", ctx->wb_num, data->p[0].addr);
+ pr_debug("wb_num=%d addr=0x%x\n", ctx->wb_num, data.p[0].addr);
if (ctx->bwc_mode)
- data->bwc_enabled = 1;
+ data.bwc_enabled = 1;
- ret = mdss_mdp_data_check(data, &ctx->dst_planes);
+ ret = mdss_mdp_data_check(&data, &ctx->dst_planes);
if (ret)
return ret;
- mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST0_ADDR, data->p[0].addr);
- mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST1_ADDR, data->p[1].addr);
- mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST2_ADDR, data->p[2].addr);
- mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST3_ADDR, data->p[3].addr);
+ mdss_mdp_data_calc_offset(&data, ctx->dst_rect.x, ctx->dst_rect.y,
+ &ctx->dst_planes, ctx->dst_fmt);
+
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST0_ADDR, data.p[0].addr);
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST1_ADDR, data.p[1].addr);
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST2_ADDR, data.p[2].addr);
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST3_ADDR, data.p[3].addr);
return 0;
}
-static int mdss_mdp_writeback_format_setup(struct mdss_mdp_writeback_ctx *ctx)
+static int mdss_mdp_writeback_format_setup(struct mdss_mdp_writeback_ctx *ctx,
+ u32 format)
{
struct mdss_mdp_format_params *fmt;
u32 dst_format, pattern, ystride0, ystride1, outsize, chroma_samp;
u32 opmode = ctx->opmode;
struct mdss_data_type *mdata;
- pr_debug("wb_num=%d format=%d\n", ctx->wb_num, ctx->format);
+ pr_debug("wb_num=%d format=%d\n", ctx->wb_num, format);
- mdss_mdp_get_plane_sizes(ctx->format, ctx->width, ctx->height,
+ mdss_mdp_get_plane_sizes(format, ctx->width, ctx->height,
&ctx->dst_planes,
ctx->opmode & MDSS_MDP_OP_BWC_EN);
- fmt = mdss_mdp_get_format_params(ctx->format);
+ fmt = mdss_mdp_get_format_params(format);
if (!fmt) {
- pr_err("wb format=%d not supported\n", ctx->format);
+ pr_err("wb format=%d not supported\n", format);
return -EINVAL;
}
+ ctx->dst_fmt = fmt;
chroma_samp = fmt->chroma_sample;
@@ -164,33 +178,29 @@
dst_format |= BIT(14); /* DST_ALPHA_X */
}
- if (fmt->fetch_planes != MDSS_MDP_PLANE_PLANAR) {
- mdata = mdss_mdp_get_mdata();
- if (mdata && mdata->mdp_rev >= MDSS_MDP_HW_REV_102) {
- pattern = (fmt->element[3] << 24) |
- (fmt->element[2] << 16) |
- (fmt->element[1] << 8) |
- (fmt->element[0] << 0);
- } else {
- pattern = (fmt->element[3] << 24) |
- (fmt->element[2] << 15) |
- (fmt->element[1] << 8) |
- (fmt->element[0] << 0);
- }
-
- dst_format |= (fmt->unpack_align_msb << 18) |
- (fmt->unpack_tight << 17) |
- ((fmt->unpack_count - 1) << 12) |
- ((fmt->bpp - 1) << 9);
+ mdata = mdss_mdp_get_mdata();
+ if (mdata && mdata->mdp_rev >= MDSS_MDP_HW_REV_102) {
+ pattern = (fmt->element[3] << 24) |
+ (fmt->element[2] << 16) |
+ (fmt->element[1] << 8) |
+ (fmt->element[0] << 0);
} else {
- pattern = 0;
+ pattern = (fmt->element[3] << 24) |
+ (fmt->element[2] << 15) |
+ (fmt->element[1] << 8) |
+ (fmt->element[0] << 0);
}
+ dst_format |= (fmt->unpack_align_msb << 18) |
+ (fmt->unpack_tight << 17) |
+ ((fmt->unpack_count - 1) << 12) |
+ ((fmt->bpp - 1) << 9);
+
ystride0 = (ctx->dst_planes.ystride[0]) |
(ctx->dst_planes.ystride[1] << 16);
ystride1 = (ctx->dst_planes.ystride[2]) |
(ctx->dst_planes.ystride[3] << 16);
- outsize = (ctx->height << 16) | ctx->width;
+ outsize = (ctx->dst_rect.h << 16) | ctx->dst_rect.w;
mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST_FORMAT, dst_format);
mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST_OP_MODE, opmode);
@@ -217,11 +227,14 @@
pr_debug("wfd setup ctl=%d\n", ctl->num);
ctx->opmode = 0;
- ctx->format = ctl->dst_format;
ctx->width = ctl->width;
ctx->height = ctl->height;
+ ctx->dst_rect.x = 0;
+ ctx->dst_rect.y = 0;
+ ctx->dst_rect.w = ctx->width;
+ ctx->dst_rect.h = ctx->height;
- ret = mdss_mdp_writeback_format_setup(ctx);
+ ret = mdss_mdp_writeback_format_setup(ctx, ctl->dst_format);
if (ret) {
pr_err("format setup failed\n");
return ret;
@@ -237,6 +250,7 @@
struct mdss_mdp_writeback_ctx *ctx;
struct mdss_mdp_writeback_arg *wb_args;
struct mdss_mdp_rotator_session *rot;
+ u32 format;
ctx = (struct mdss_mdp_writeback_ctx *) ctl->priv_data;
if (!ctx)
@@ -259,24 +273,26 @@
ctx->bwc_mode = rot->bwc_mode;
ctx->opmode |= ctx->bwc_mode;
- ctx->width = rot->src_rect.w;
- ctx->height = rot->src_rect.h;
-
- ctx->format = rot->format;
+ ctx->width = rot->dst.w;
+ ctx->height = rot->dst.h;
+ ctx->dst_rect.x = rot->dst.x;
+ ctx->dst_rect.y = rot->dst.y;
+ ctx->dst_rect.w = rot->src_rect.w;
+ ctx->dst_rect.h = rot->src_rect.h;
ctx->rot90 = !!(rot->flags & MDP_ROT_90);
if (ctx->bwc_mode || ctx->rot90)
- ctx->format = mdss_mdp_get_rotator_dst_format(rot->format);
+ format = mdss_mdp_get_rotator_dst_format(rot->format);
else
- ctx->format = rot->format;
+ format = rot->format;
if (ctx->rot90) {
ctx->opmode |= BIT(5); /* ROT 90 */
- swap(ctx->width, ctx->height);
+ swap(ctx->dst_rect.w, ctx->dst_rect.h);
}
- return mdss_mdp_writeback_format_setup(ctx);
+ return mdss_mdp_writeback_format_setup(ctx, format);
}
static int mdss_mdp_writeback_stop(struct mdss_mdp_ctl *ctl)
@@ -310,10 +326,37 @@
pr_debug("intr wb_num=%d\n", ctx->wb_num);
mdss_mdp_irq_disable_nosync(ctx->intr_type, ctx->intf_num);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, true);
if (ctx->callback_fnc)
ctx->callback_fnc(ctx->callback_arg);
+
+ complete_all(&ctx->wb_comp);
+}
+
+static int mdss_mdp_wb_wait4comp(struct mdss_mdp_ctl *ctl, void *arg)
+{
+ struct mdss_mdp_writeback_ctx *ctx;
+ int rc = 0;
+
+ ctx = (struct mdss_mdp_writeback_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return -ENODEV;
+ }
+
+ if (ctx->comp_cnt == 0)
+ return rc;
+
+ rc = wait_for_completion_interruptible_timeout(&ctx->wb_comp,
+ KOFF_TIMEOUT);
+ WARN(rc <= 0, "writeback kickoff timed out (%d) ctl=%d\n",
+ rc, ctl->num);
+
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false); /* clock off */
+
+ ctx->comp_cnt--;
+
+ return rc;
}
static int mdss_mdp_writeback_display(struct mdss_mdp_ctl *ctl, void *arg)
@@ -327,6 +370,12 @@
if (!ctx)
return -ENODEV;
+ if (ctx->comp_cnt) {
+ pr_err("previous kickoff not completed yet, ctl=%d\n",
+ ctl->num);
+ return -EPERM;
+ }
+
wb_args = (struct mdss_mdp_writeback_arg *) arg;
if (!wb_args)
return -ENOENT;
@@ -341,14 +390,18 @@
ctx->callback_arg = wb_args->priv_data;
flush_bits = BIT(16); /* WB */
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST_ADDR_SW_STATUS, ctl->is_secure);
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_FLUSH, flush_bits);
+ INIT_COMPLETION(ctx->wb_comp);
mdss_mdp_irq_enable(ctx->intr_type, ctx->intf_num);
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_START, 1);
wmb();
+ ctx->comp_cnt++;
+
return 0;
}
@@ -376,6 +429,7 @@
ctx->wb_num = ctl->num; /* wb num should match ctl num */
ctx->base = ctl->wb_base;
ctx->initialized = false;
+ init_completion(&ctx->wb_comp);
mdss_mdp_set_intr_callback(ctx->intr_type, ctx->intf_num,
mdss_mdp_writeback_intr_done, ctx);
@@ -386,6 +440,7 @@
ctl->prepare_fnc = mdss_mdp_writeback_prepare_wfd;
ctl->stop_fnc = mdss_mdp_writeback_stop;
ctl->display_fnc = mdss_mdp_writeback_display;
+ ctl->wait_fnc = mdss_mdp_wb_wait4comp;
return ret;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_overlay.c b/drivers/video/msm/mdss/mdss_mdp_overlay.c
index dae3e05..ac1c4ce 100644
--- a/drivers/video/msm/mdss/mdss_mdp_overlay.c
+++ b/drivers/video/msm/mdss/mdss_mdp_overlay.c
@@ -24,6 +24,7 @@
#include <linux/msm_mdp.h>
#include <mach/iommu_domains.h>
+#include <mach/event_timer.h>
#include "mdss.h"
#include "mdss_fb.h"
@@ -37,6 +38,8 @@
static atomic_t ov_active_panels = ATOMIC_INIT(0);
static int mdss_mdp_overlay_free_fb_pipe(struct msm_fb_data_type *mfd);
+static int mdss_mdp_overlay_fb_parse_dt(struct msm_fb_data_type *mfd);
+static int mdss_mdp_overlay_off(struct msm_fb_data_type *mfd);
static int mdss_mdp_overlay_get(struct msm_fb_data_type *mfd,
struct mdp_overlay *req)
@@ -92,15 +95,26 @@
return -EOVERFLOW;
}
- if (req->dst_rect.w < min_dst_size || req->dst_rect.h < min_dst_size ||
- req->dst_rect.w > MAX_DST_W || req->dst_rect.h > MAX_DST_H) {
+ if (req->dst_rect.w < min_dst_size || req->dst_rect.h < min_dst_size) {
pr_err("invalid destination resolution (%dx%d)",
req->dst_rect.w, req->dst_rect.h);
return -EOVERFLOW;
}
+ if (req->horz_deci || req->vert_deci) {
+ if (!mdata->has_decimation) {
+ pr_err("No Decimation in MDP V=%x\n", mdata->mdp_rev);
+ return -EINVAL;
+ } else if ((req->horz_deci > MAX_DECIMATION) ||
+ (req->vert_deci > MAX_DECIMATION)) {
+ pr_err("Invalid decimation factors horz=%d vert=%d\n",
+ req->horz_deci, req->vert_deci);
+ return -EINVAL;
+ }
+ }
+
if (!(req->flags & MDSS_MDP_ROT_ONLY)) {
- u32 dst_w, dst_h;
+ u32 src_w, src_h, dst_w, dst_h;
if ((CHECK_BOUNDS(req->dst_rect.x, req->dst_rect.w, xres) ||
CHECK_BOUNDS(req->dst_rect.y, req->dst_rect.h, yres))) {
@@ -118,27 +132,36 @@
dst_h = req->dst_rect.h;
}
- if ((req->src_rect.w * MAX_UPSCALE_RATIO) < dst_w) {
+ src_w = req->src_rect.w >> req->horz_deci;
+ src_h = req->src_rect.h >> req->vert_deci;
+
+ if (src_w > MAX_MIXER_WIDTH) {
+ pr_err("invalid source width=%d HDec=%d\n",
+ req->src_rect.w, req->horz_deci);
+ return -EINVAL;
+ }
+
+ if ((src_w * MAX_UPSCALE_RATIO) < dst_w) {
pr_err("too much upscaling Width %d->%d\n",
req->src_rect.w, req->dst_rect.w);
return -EINVAL;
}
- if ((req->src_rect.h * MAX_UPSCALE_RATIO) < dst_h) {
+ if ((src_h * MAX_UPSCALE_RATIO) < dst_h) {
pr_err("too much upscaling. Height %d->%d\n",
req->src_rect.h, req->dst_rect.h);
return -EINVAL;
}
- if (req->src_rect.w > (dst_w * MAX_DOWNSCALE_RATIO)) {
- pr_err("too much downscaling. Width %d->%d\n",
- req->src_rect.w, req->dst_rect.w);
+ if (src_w > (dst_w * MAX_DOWNSCALE_RATIO)) {
+ pr_err("too much downscaling. Width %d->%d H Dec=%d\n",
+ src_w, req->dst_rect.w, req->horz_deci);
return -EINVAL;
}
- if (req->src_rect.h > (dst_h * MAX_DOWNSCALE_RATIO)) {
- pr_err("too much downscaling. Height %d->%d\n",
- req->src_rect.h, req->dst_rect.h);
+ if (src_h > (dst_h * MAX_DOWNSCALE_RATIO)) {
+ pr_err("too much downscaling. Height %d->%d V Dec=%d\n",
+ src_h, req->dst_rect.h, req->vert_deci);
return -EINVAL;
}
@@ -157,6 +180,14 @@
req->src_rect.h, req->dst_rect.h);
return -EINVAL;
}
+
+ if (req->flags & MDP_BWC_EN) {
+ if ((req->src.width != req->src_rect.w) ||
+ (req->src.height != req->src_rect.h)) {
+ pr_err("BWC: unequal src img and rect w,h\n");
+ return -EINVAL;
+ }
+ }
}
if (fmt->is_yuv) {
@@ -167,6 +198,13 @@
}
}
+ if (req->flags & MDP_DEINTERLACE) {
+ if ((req->src.width % 4 != 0) || (req->src.height % 4 != 0)) {
+ pr_err("interlaced fmt w,h need to be even post div\n");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
@@ -177,6 +215,7 @@
struct mdss_mdp_rotator_session *rot;
struct mdss_mdp_format_params *fmt;
int ret = 0;
+ u32 bwc_enabled;
pr_debug("rot ctl=%u req id=%x\n", mdp5_data->ctl->num, req->id);
@@ -213,7 +252,14 @@
rot->flags = req->flags & (MDP_ROT_90 | MDP_FLIP_LR | MDP_FLIP_UD |
MDP_SECURE_OVERLAY_SESSION);
- rot->bwc_mode = (req->flags & MDP_BWC_EN) ? 1 : 0;
+ bwc_enabled = req->flags & MDP_BWC_EN;
+ if (bwc_enabled && !mdp5_data->mdata->has_bwc) {
+ pr_err("BWC is not supported in MDP version %x\n",
+ mdp5_data->mdata->mdp_rev);
+ rot->bwc_mode = 0;
+ } else {
+ rot->bwc_mode = bwc_enabled ? 1 : 0;
+ }
rot->format = fmt->format;
rot->img_width = req->src.width;
rot->img_height = req->src.height;
@@ -227,9 +273,13 @@
rot->src_rect.h /= 2;
}
- rot->params_changed++;
-
- req->id = rot->session_id;
+ ret = mdss_mdp_rotator_setup(rot);
+ if (ret == 0) {
+ req->id = rot->session_id;
+ } else {
+ pr_err("Unable to setup rotator session\n");
+ mdss_mdp_rotator_release(rot->session_id);
+ }
return ret;
}
@@ -243,11 +293,24 @@
struct mdss_mdp_mixer *mixer = NULL;
u32 pipe_type, mixer_mux, len, src_format;
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
+ struct mdp_histogram_start_req hist;
int ret;
+ u32 bwc_enabled;
if (mdp5_data->ctl == NULL)
return -ENODEV;
+ if (req->flags & MDP_ROT_90) {
+ pr_err("unsupported inline rotation\n");
+ return -ENOTSUPP;
+ }
+
+ if ((req->dst_rect.w > MAX_DST_W) || (req->dst_rect.h > MAX_DST_H)) {
+ pr_err("exceeded max mixer supported resolution %dx%d\n",
+ req->dst_rect.w, req->dst_rect.h);
+ return -EOVERFLOW;
+ }
+
if (req->flags & MDSS_MDP_RIGHT_MIXER)
mixer_mux = MDSS_MDP_MIXER_MUX_RIGHT;
else
@@ -256,11 +319,6 @@
pr_debug("pipe ctl=%u req id=%x mux=%d\n", mdp5_data->ctl->num, req->id,
mixer_mux);
- if (req->flags & MDP_ROT_90) {
- pr_err("unsupported inline rotation\n");
- return -ENOTSUPP;
- }
-
src_format = req->src.format;
if (req->flags & (MDP_SOURCE_ROTATED_90 | MDP_BWC_EN))
src_format = mdss_mdp_get_rotator_dst_format(src_format);
@@ -280,7 +338,7 @@
if (pipe && pipe->ndx != req->id) {
pr_debug("replacing pnum=%d at stage=%d mux=%d\n",
pipe->num, req->z_order, mixer_mux);
- pipe->params_changed = true;
+ mdss_mdp_mixer_pipe_unstage(pipe);
}
mixer = mdss_mdp_mixer_get(mdp5_data->ctl, mixer_mux);
@@ -334,8 +392,8 @@
pr_err("Can't switch mixer %d->%d pnum %d!\n",
pipe->mixer->num, mixer->num,
pipe->num);
- mdss_mdp_pipe_unmap(pipe);
- return -EINVAL;
+ ret = -EINVAL;
+ goto exit_fail;
}
pr_debug("switching pipe mixer %d->%d pnum %d\n",
pipe->mixer->num, mixer->num,
@@ -346,8 +404,15 @@
}
pipe->flags = req->flags;
- pipe->bwc_mode = pipe->mixer->rotator_mode ?
- 0 : (req->flags & MDP_BWC_EN ? 1 : 0) ;
+ bwc_enabled = req->flags & MDP_BWC_EN;
+ if (bwc_enabled && !mdp5_data->mdata->has_bwc) {
+ pr_err("BWC is not supported in MDP version %x\n",
+ mdp5_data->mdata->mdp_rev);
+ pipe->bwc_mode = 0;
+ } else {
+ pipe->bwc_mode = pipe->mixer->rotator_mode ?
+ 0 : (bwc_enabled ? 1 : 0) ;
+ }
pipe->img_width = req->src.width & 0x3fff;
pipe->img_height = req->src.height & 0x3fff;
pipe->src.x = req->src_rect.x;
@@ -358,14 +423,16 @@
pipe->dst.y = req->dst_rect.y;
pipe->dst.w = req->dst_rect.w;
pipe->dst.h = req->dst_rect.h;
-
+ pipe->horz_deci = req->horz_deci;
+ pipe->vert_deci = req->vert_deci;
pipe->src_fmt = fmt;
pipe->mixer_stage = req->z_order;
pipe->is_fg = req->is_fg;
pipe->alpha = req->alpha;
pipe->transp = req->transp_mask;
- pipe->overfetch_disable = fmt->is_yuv;
+ pipe->overfetch_disable = fmt->is_yuv &&
+ !(pipe->flags & MDP_SOURCE_ROTATED_90);
pipe->req_data = *req;
@@ -389,6 +456,31 @@
pipe->pp_res.igc_c0_c1;
pipe->pp_cfg.igc_cfg.c2_data = pipe->pp_res.igc_c2;
}
+ if (pipe->pp_cfg.config_ops & MDP_OVERLAY_PP_HIST_CFG) {
+ if (pipe->pp_cfg.hist_cfg.ops & MDP_PP_OPS_ENABLE) {
+ hist.block = pipe->pp_cfg.hist_cfg.block;
+ hist.frame_cnt =
+ pipe->pp_cfg.hist_cfg.frame_cnt;
+ hist.bit_mask = pipe->pp_cfg.hist_cfg.bit_mask;
+ hist.num_bins = pipe->pp_cfg.hist_cfg.num_bins;
+ mdss_mdp_histogram_start(pipe->mixer->ctl,
+ &hist);
+ } else if (pipe->pp_cfg.hist_cfg.ops &
+ MDP_PP_OPS_DISABLE) {
+ mdss_mdp_histogram_stop(pipe->mixer->ctl,
+ pipe->pp_cfg.hist_cfg.block);
+ }
+ }
+ len = pipe->pp_cfg.hist_lut_cfg.len;
+ if ((pipe->pp_cfg.config_ops & MDP_OVERLAY_PP_HIST_LUT_CFG) &&
+ (len == ENHIST_LUT_ENTRIES)) {
+ ret = copy_from_user(pipe->pp_res.hist_lut,
+ pipe->pp_cfg.hist_lut_cfg.data,
+ sizeof(uint32_t) * len);
+ if (ret)
+ return -ENOMEM;
+ pipe->pp_cfg.hist_lut_cfg.data = pipe->pp_res.hist_lut;
+ }
}
if (pipe->flags & MDP_DEINTERLACE) {
@@ -400,6 +492,12 @@
}
}
+ ret = mdss_mdp_smp_reserve(pipe);
+ if (ret) {
+ pr_debug("mdss_mdp_smp_reserve failed. ret=%d\n", ret);
+ goto exit_fail;
+ }
+
pipe->params_changed++;
req->id = pipe->ndx;
@@ -409,6 +507,25 @@
mdss_mdp_pipe_unmap(pipe);
return ret;
+
+exit_fail:
+ mdss_mdp_pipe_unmap(pipe);
+
+ mutex_lock(&mfd->lock);
+ if (pipe->play_cnt == 0) {
+ pr_debug("failed for pipe %d\n", pipe->num);
+ list_del(&pipe->used_list);
+ mdss_mdp_pipe_destroy(pipe);
+ }
+
+ /* invalidate any overlays in this framebuffer after failure */
+ list_for_each_entry(pipe, &mdp5_data->pipes_used, used_list) {
+ pr_debug("freeing allocations for pipe %d\n", pipe->num);
+ mdss_mdp_smp_unreserve(pipe);
+ pipe->params_changed = 0;
+ }
+ mutex_unlock(&mfd->lock);
+ return ret;
}
static int mdss_mdp_overlay_set(struct msm_fb_data_type *mfd,
@@ -487,7 +604,7 @@
return 0;
}
-static int mdss_mdp_overlay_cleanup(struct msm_fb_data_type *mfd)
+static void mdss_mdp_overlay_cleanup(struct msm_fb_data_type *mfd)
{
struct mdss_mdp_pipe *pipe, *tmp;
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
@@ -499,6 +616,7 @@
list_move(&pipe->cleanup_list, &destroy_pipes);
mdss_mdp_overlay_free_buf(&pipe->back_buf);
mdss_mdp_overlay_free_buf(&pipe->front_buf);
+ pipe->mfd = NULL;
}
list_for_each_entry(pipe, &mdp5_data->pipes_used, used_list) {
@@ -511,96 +629,6 @@
mutex_unlock(&mfd->lock);
list_for_each_entry_safe(pipe, tmp, &destroy_pipes, cleanup_list)
mdss_mdp_pipe_destroy(pipe);
-
- return 0;
-}
-
-int mdss_mdp_copy_splash_screen(struct mdss_panel_data *pdata)
-{
- void *virt = NULL;
- unsigned long bl_fb_addr = 0;
- unsigned long *bl_fb_addr_va;
- unsigned long pipe_addr, pipe_src_size;
- u32 height, width, rgb_size, bpp;
- size_t size;
- static struct ion_handle *ihdl;
- struct ion_client *iclient = mdss_get_ionclient();
- static ion_phys_addr_t phys;
-
- pipe_addr = MDSS_MDP_REG_SSPP_OFFSET(3) +
- MDSS_MDP_REG_SSPP_SRC0_ADDR;
- pipe_src_size =
- MDSS_MDP_REG_SSPP_OFFSET(3) + MDSS_MDP_REG_SSPP_SRC_SIZE;
-
- bpp = 3;
- rgb_size = MDSS_MDP_REG_READ(pipe_src_size);
- bl_fb_addr = MDSS_MDP_REG_READ(pipe_addr);
-
- height = (rgb_size >> 16) & 0xffff;
- width = rgb_size & 0xffff;
- size = PAGE_ALIGN(height * width * bpp);
- pr_debug("%s:%d splash_height=%d splash_width=%d Buffer size=%d\n",
- __func__, __LINE__, height, width, size);
-
- ihdl = ion_alloc(iclient, size, SZ_1M,
- ION_HEAP(ION_QSECOM_HEAP_ID), 0);
- if (IS_ERR_OR_NULL(ihdl)) {
- pr_err("unable to alloc fbmem from ion (%p)\n", ihdl);
- return -ENOMEM;
- }
-
- pdata->panel_info.splash_ihdl = ihdl;
-
- virt = ion_map_kernel(iclient, ihdl);
- ion_phys(iclient, ihdl, &phys, &size);
-
- pr_debug("%s %d Allocating %u bytes at 0x%lx (%pa phys)\n",
- __func__, __LINE__, size,
- (unsigned long int)virt, &phys);
-
- bl_fb_addr_va = (unsigned long *)ioremap(bl_fb_addr, size);
-
- memcpy(virt, bl_fb_addr_va, size);
-
- MDSS_MDP_REG_WRITE(pipe_addr, phys);
- MDSS_MDP_REG_WRITE(MDSS_MDP_REG_CTL_FLUSH + MDSS_MDP_REG_CTL_OFFSET(0),
- 0x48);
-
- return 0;
-
-}
-
-int mdss_mdp_reconfigure_splash_done(struct mdss_mdp_ctl *ctl)
-{
- struct ion_client *iclient = mdss_get_ionclient();
- struct mdss_panel_data *pdata;
- int ret = 0, off;
- int mdss_mdp_rev = MDSS_MDP_REG_READ(MDSS_MDP_REG_HW_VERSION);
- int mdss_v2_intf_off = 0;
-
- off = 0;
-
- pdata = ctl->panel_data;
-
- pdata->panel_info.cont_splash_enabled = 0;
-
- ion_free(iclient, pdata->panel_info.splash_ihdl);
-
- mdss_mdp_ctl_write(ctl, 0, MDSS_MDP_LM_BORDER_COLOR);
- off = MDSS_MDP_REG_INTF_OFFSET(ctl->intf_num);
-
- if (mdss_mdp_rev == MDSS_MDP_HW_REV_102)
- mdss_v2_intf_off = 0xEC00;
-
- /* wait for 1 VSYNC for the pipe to be unstaged */
- msleep(20);
- MDSS_MDP_REG_WRITE(off + MDSS_MDP_REG_INTF_TIMING_ENGINE_EN -
- mdss_v2_intf_off, 0);
- ret = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_CONT_SPLASH_FINISH,
- NULL);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
- mdss_mdp_footswitch_ctrl_splash(0);
- return ret;
}
static int mdss_mdp_overlay_start(struct msm_fb_data_type *mfd)
@@ -619,8 +647,10 @@
return rc;
}
- if (mfd->panel_info->cont_splash_enabled)
- mdss_mdp_reconfigure_splash_done(mdp5_data->ctl);
+ if (mfd->panel_info->cont_splash_enabled) {
+ mdss_mdp_ctl_splash_finish(mdp5_data->ctl);
+ mdss_mdp_footswitch_ctrl_splash(0);
+ }
if (!is_mdss_iommu_attached()) {
mdss_iommu_attach(mdss_res);
@@ -641,6 +671,19 @@
return rc;
}
+static void mdss_mdp_overlay_update_pm(struct mdss_overlay_private *mdp5_data)
+{
+ ktime_t wakeup_time;
+
+ if (!mdp5_data->cpu_pm_hdl)
+ return;
+
+ if (mdss_mdp_display_wakeup_time(mdp5_data->ctl, &wakeup_time))
+ return;
+
+ activate_event_timer(mdp5_data->cpu_pm_hdl, wakeup_time);
+}
+
int mdss_mdp_overlay_kickoff(struct msm_fb_data_type *mfd)
{
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
@@ -658,15 +701,17 @@
} else if (pipe->front_buf.num_planes) {
buf = &pipe->front_buf;
} else {
- pr_warn("pipe queue without buffer\n");
- buf = NULL;
+ pr_warn("pipe queue w/o buffer. unstaging layer\n");
+ pipe->params_changed = 0;
+ mdss_mdp_mixer_pipe_unstage(pipe);
+ continue;
}
ret = mdss_mdp_pipe_queue_data(pipe, buf);
if (IS_ERR_VALUE(ret)) {
pr_warn("Unable to queue data for pnum=%d\n",
pipe->num);
- mdss_mdp_overlay_free_buf(buf);
+ mdss_mdp_mixer_pipe_unstage(pipe);
}
}
@@ -680,6 +725,8 @@
if (IS_ERR_VALUE(ret))
goto commit_fail;
+ mdss_mdp_overlay_update_pm(mdp5_data);
+
ret = mdss_mdp_display_wait4comp(mdp5_data->ctl);
complete(&mfd->update.comp);
@@ -692,7 +739,7 @@
mutex_unlock(&mfd->no_update.lock);
commit_fail:
- ret = mdss_mdp_overlay_cleanup(mfd);
+ mdss_mdp_overlay_cleanup(mfd);
mutex_unlock(&mdp5_data->ov_lock);
@@ -742,12 +789,13 @@
if (ndx == BORDERFILL_NDX) {
pr_debug("borderfill disable\n");
mdp5_data->borderfill_enable = false;
- return 0;
+ ret = 0;
+ goto done;
}
if (!mfd->panel_power_on) {
- mutex_unlock(&mdp5_data->ov_lock);
- return -EPERM;
+ ret = -EPERM;
+ goto done;
}
pr_debug("unset ndx=%x\n", ndx);
@@ -757,6 +805,7 @@
else
ret = mdss_mdp_overlay_release(mfd, ndx);
+done:
mutex_unlock(&mdp5_data->ov_lock);
return ret;
@@ -885,6 +934,39 @@
return ret;
}
+static void mdss_mdp_overlay_force_cleanup(struct msm_fb_data_type *mfd)
+{
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
+ struct mdss_mdp_ctl *ctl = mdp5_data->ctl;
+ int ret;
+
+ pr_debug("forcing cleanup to unset dma pipes on fb%d\n", mfd->index);
+
+ /*
+ * video mode panels require the layer to be unstaged and a vsync
+ * to occur before the buffer can be released.
+ */
+ if (ctl && ctl->is_video_mode) {
+ ret = mdss_mdp_display_commit(ctl, NULL);
+ if (!IS_ERR_VALUE(ret))
+ mdss_mdp_display_wait4comp(ctl);
+ }
+
+ mdss_mdp_overlay_cleanup(mfd);
+}
+
+static void mdss_mdp_overlay_force_dma_cleanup(struct mdss_data_type *mdata)
+{
+ struct mdss_mdp_pipe *pipe;
+ int i;
+
+ for (i = 0; i < mdata->ndma_pipes; i++) {
+ pipe = mdata->dma_pipes + i;
+ if (atomic_read(&pipe->ref_cnt) && pipe->mfd)
+ mdss_mdp_overlay_force_cleanup(pipe->mfd);
+ }
+}
+
static int mdss_mdp_overlay_play(struct msm_fb_data_type *mfd,
struct msmfb_overlay_data *req)
{
@@ -898,17 +980,19 @@
return ret;
if (!mfd->panel_power_on) {
- mutex_unlock(&mdp5_data->ov_lock);
- return -EPERM;
+ ret = -EPERM;
+ goto done;
}
ret = mdss_mdp_overlay_start(mfd);
if (ret) {
pr_err("unable to start overlay %d (%d)\n", mfd->index, ret);
- return ret;
+ goto done;
}
if (req->id & MDSS_MDP_ROT_SESSION_MASK) {
+ mdss_mdp_overlay_force_dma_cleanup(mfd_to_mdata(mfd));
+
ret = mdss_mdp_overlay_rotate(mfd, req);
} else if (req->id == BORDERFILL_NDX) {
pr_debug("borderfill enable\n");
@@ -918,6 +1002,7 @@
ret = mdss_mdp_overlay_queue(mfd, req);
}
+done:
mutex_unlock(&mdp5_data->ov_lock);
return ret;
@@ -1143,7 +1228,7 @@
if (!ctl)
return -ENODEV;
- if (!ctl->set_vsync_handler)
+ if (!ctl->add_vsync_handler || !ctl->remove_vsync_handler)
return -ENOTSUPP;
rc = mutex_lock_interruptible(&ctl->lock);
@@ -1166,9 +1251,9 @@
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
if (en)
- rc = ctl->set_vsync_handler(ctl, mdss_mdp_overlay_handle_vsync);
+ rc = ctl->add_vsync_handler(ctl, &ctl->vsync_handler);
else
- rc = ctl->set_vsync_handler(ctl, NULL);
+ rc = ctl->remove_vsync_handler(ctl, &ctl->vsync_handler);
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
mutex_unlock(&ctl->lock);
@@ -1358,7 +1443,7 @@
{
int ret = 0;
int curr_bl;
- mutex_lock(&mfd->lock);
+ mutex_lock(&mfd->bl_lock);
curr_bl = mfd->bl_level;
mfd->bl_scale = data->scale;
mfd->bl_min_lvl = data->min_lvl;
@@ -1367,7 +1452,7 @@
/* update current backlight to use new scaling*/
mdss_fb_set_backlight(mfd, curr_bl);
- mutex_unlock(&mfd->lock);
+ mutex_unlock(&mfd->bl_lock);
return ret;
}
@@ -1441,6 +1526,16 @@
ret = mdss_bl_scale_config(mfd, (struct mdp_bl_scale_data *)
&mdp_pp.data.bl_scale_data);
break;
+ case mdp_op_ad_cfg:
+ ret = mdss_mdp_ad_config(mfd, &mdp_pp.data.ad_init_cfg);
+ break;
+ case mdp_op_ad_input:
+ ret = mdss_mdp_ad_input(mfd, &mdp_pp.data.ad_input);
+ if (ret > 0) {
+ ret = 0;
+ copyback = 1;
+ }
+ break;
default:
pr_err("Unsupported request to MDP_PP IOCTL.\n");
ret = -EINVAL;
@@ -1457,7 +1552,7 @@
int ret = -ENOSYS;
struct mdp_histogram_data hist;
struct mdp_histogram_start_req hist_req;
- u32 block, hist_data_addr = 0;
+ u32 block;
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
switch (cmd) {
@@ -1488,15 +1583,9 @@
if (ret)
return ret;
- ret = mdss_mdp_hist_collect(mdp5_data->ctl, &hist,
- &hist_data_addr);
- if ((ret == 0) && hist_data_addr) {
- ret = copy_to_user(hist.c0, (u32 *)hist_data_addr,
- sizeof(u32) * hist.bin_cnt);
- if (ret == 0)
- ret = copy_to_user(argp, &hist,
- sizeof(hist));
- }
+ ret = mdss_mdp_hist_collect(mdp5_data->ctl, &hist);
+ if (!ret)
+ ret = copy_to_user(argp, &hist, sizeof(hist));
break;
default:
break;
@@ -1532,6 +1621,10 @@
caps->vig_pipes = mdata->nvig_pipes;
caps->rgb_pipes = mdata->nrgb_pipes;
caps->dma_pipes = mdata->ndma_pipes;
+ if (mdata->has_bwc)
+ caps->features |= MDP_BWC_EN;
+ if (mdata->has_decimation)
+ caps->features |= MDP_DECIMATION_EN;
return 0;
}
@@ -1583,10 +1676,8 @@
ret = copy_to_user(argp, &req, sizeof(req));
}
- if (ret) {
+ if (ret)
pr_debug("OVERLAY_GET failed (%d)\n", ret);
- ret = -EFAULT;
- }
break;
case MSMFB_OVERLAY_SET:
@@ -1597,10 +1688,8 @@
if (!IS_ERR_VALUE(ret))
ret = copy_to_user(argp, &req, sizeof(req));
}
- if (ret) {
+ if (ret)
pr_debug("OVERLAY_SET failed (%d)\n", ret);
- ret = -EFAULT;
- }
break;
@@ -1623,16 +1712,11 @@
struct msmfb_overlay_data data;
ret = copy_from_user(&data, argp, sizeof(data));
- if (!ret) {
+ if (!ret)
ret = mdss_mdp_overlay_play(mfd, &data);
- if (!IS_ERR_VALUE(ret))
- mdss_fb_update_backlight(mfd);
- }
- if (ret) {
+ if (ret)
pr_debug("OVERLAY_PLAY failed (%d)\n", ret);
- ret = -EFAULT;
- }
} else {
ret = 0;
}
@@ -1646,10 +1730,8 @@
if (!ret)
ret = mdss_mdp_overlay_play_wait(mfd, &data);
- if (ret) {
+ if (ret)
pr_err("OVERLAY_PLAY_WAIT failed (%d)\n", ret);
- ret = -EFAULT;
- }
} else {
ret = 0;
}
@@ -1719,6 +1801,8 @@
mfd->index);
return PTR_ERR(ctl);
}
+ ctl->vsync_handler.vsync_handler =
+ mdss_mdp_overlay_handle_vsync;
if (mfd->split_display && pdata->next) {
/* enable split display */
@@ -1741,7 +1825,10 @@
return rc;
}
- if (!IS_ERR_VALUE(rc) && mdp5_data->vsync_pending) {
+ if (IS_ERR_VALUE(rc)) {
+ pr_err("Failed to turn on fb%d\n", mfd->index);
+ mdss_mdp_overlay_off(mfd);
+ } else if (mdp5_data->vsync_pending) {
mdp5_data->vsync_pending = 0;
mdss_mdp_overlay_vsync_ctrl(mfd, mdp5_data->vsync_pending);
}
@@ -1798,7 +1885,7 @@
if (pdata->panel_info.cont_splash_enabled) {
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
mdss_mdp_footswitch_ctrl_splash(1);
- mdss_mdp_copy_splash_screen(pdata);
+ mdss_mdp_ctl_splash_start(pdata);
}
return 0;
}
@@ -1842,6 +1929,10 @@
}
mfd->mdp.private1 = mdp5_data;
+ rc = mdss_mdp_overlay_fb_parse_dt(mfd);
+ if (rc)
+ return rc;
+
rc = sysfs_create_group(&dev->kobj, &vsync_fs_attr_group);
if (rc) {
pr_err("vsync sysfs group creation failed, ret=%d\n", rc);
@@ -1854,8 +1945,27 @@
kobject_uevent(&dev->kobj, KOBJ_ADD);
pr_debug("vsync kobject_uevent(KOBJ_ADD)\n");
+ mdp5_data->cpu_pm_hdl = add_event_timer(NULL, (void *)mdp5_data);
+ if (!mdp5_data->cpu_pm_hdl)
+ pr_warn("%s: unable to add event timer\n", __func__);
+
return rc;
init_fail:
kfree(mdp5_data);
return rc;
}
+
+static int mdss_mdp_overlay_fb_parse_dt(struct msm_fb_data_type *mfd)
+{
+ struct platform_device *pdev = mfd->pdev;
+ struct mdss_overlay_private *mdp5_mdata = mfd_to_mdp5_data(mfd);
+
+ mdp5_mdata->mixer_swap = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-mixer-swap");
+ if (mdp5_mdata->mixer_swap) {
+ pr_info("mixer swap is enabled for fb device=%s\n",
+ pdev->name);
+ }
+
+ return 0;
+}
diff --git a/drivers/video/msm/mdss/mdss_mdp_pipe.c b/drivers/video/msm/mdss/mdss_mdp_pipe.c
index b169c43..0f65530 100644
--- a/drivers/video/msm/mdss/mdss_mdp_pipe.c
+++ b/drivers/video/msm/mdss/mdss_mdp_pipe.c
@@ -27,8 +27,6 @@
static DEFINE_MUTEX(mdss_mdp_smp_lock);
static DECLARE_BITMAP(mdss_mdp_smp_mmb_pool, MDSS_MDP_SMP_MMB_BLOCKS);
-static struct mdss_mdp_pipe *mdss_mdp_pipe_search(struct mdss_data_type *mdata,
- u32 ndx);
static int mdss_mdp_pipe_free(struct mdss_mdp_pipe *pipe);
static inline void mdss_mdp_pipe_write(struct mdss_mdp_pipe *pipe,
@@ -42,17 +40,18 @@
return readl_relaxed(pipe->base + reg);
}
-static u32 mdss_mdp_smp_mmb_reserve(unsigned long *smp, size_t n)
+static u32 mdss_mdp_smp_mmb_reserve(unsigned long *existing,
+ unsigned long *reserve, size_t n)
{
u32 i, mmb;
/* reserve more blocks if needed, but can't free mmb at this point */
- for (i = bitmap_weight(smp, SMP_MB_CNT); i < n; i++) {
+ for (i = bitmap_weight(existing, SMP_MB_CNT); i < n; i++) {
if (bitmap_full(mdss_mdp_smp_mmb_pool, SMP_MB_CNT))
break;
mmb = find_first_zero_bit(mdss_mdp_smp_mmb_pool, SMP_MB_CNT);
- set_bit(mmb, smp);
+ set_bit(mmb, reserve);
set_bit(mmb, mdss_mdp_smp_mmb_pool);
}
return i;
@@ -76,10 +75,17 @@
return cnt;
}
-static void mdss_mdp_smp_mmb_free(unsigned long *smp)
+static void mdss_mdp_smp_mmb_amend(unsigned long *smp, unsigned long *extra)
+{
+ bitmap_or(smp, smp, extra, SMP_MB_CNT);
+ bitmap_zero(extra, SMP_MB_CNT);
+}
+
+static void mdss_mdp_smp_mmb_free(unsigned long *smp, bool write)
{
if (!bitmap_empty(smp, SMP_MB_CNT)) {
- mdss_mdp_smp_mmb_set(MDSS_MDP_SMP_CLIENT_UNUSED, smp);
+ if (write)
+ mdss_mdp_smp_mmb_set(MDSS_MDP_SMP_CLIENT_UNUSED, smp);
bitmap_andnot(mdss_mdp_smp_mmb_pool, mdss_mdp_smp_mmb_pool,
smp, SMP_MB_CNT);
bitmap_zero(smp, SMP_MB_CNT);
@@ -106,19 +112,33 @@
static void mdss_mdp_smp_free(struct mdss_mdp_pipe *pipe)
{
+ int i;
+
mutex_lock(&mdss_mdp_smp_lock);
- mdss_mdp_smp_mmb_free(&pipe->smp[0]);
- mdss_mdp_smp_mmb_free(&pipe->smp[1]);
- mdss_mdp_smp_mmb_free(&pipe->smp[2]);
+ for (i = 0; i < MAX_PLANES; i++) {
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ mdss_mdp_smp_mmb_free(&pipe->smp[i], true);
+ }
mutex_unlock(&mdss_mdp_smp_lock);
}
-static int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe)
+void mdss_mdp_smp_unreserve(struct mdss_mdp_pipe *pipe)
+{
+ int i;
+
+ mutex_lock(&mdss_mdp_smp_lock);
+ for (i = 0; i < MAX_PLANES; i++)
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ mutex_unlock(&mdss_mdp_smp_lock);
+}
+
+int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe)
{
struct mdss_data_type *mdata = mdss_mdp_get_mdata();
u32 num_blks = 0, reserved = 0;
struct mdss_mdp_plane_sizes ps;
- int i, rc;
+ int i;
+ int rc = 0;
u32 nlines;
if (pipe->bwc_mode) {
@@ -128,11 +148,10 @@
return rc;
pr_debug("BWC SMP strides ystride0=%x ystride1=%x\n",
ps.ystride[0], ps.ystride[1]);
- } else if ((mdata->mdp_rev >= MDSS_MDP_HW_REV_102) &&
- pipe->src_fmt->is_yuv) {
+ } else if (mdata->has_decimation && pipe->src_fmt->is_yuv) {
ps.num_planes = 2;
- ps.ystride[0] = pipe->src.w;
- ps.ystride[1] = pipe->src.w;
+ ps.ystride[0] = pipe->src.w >> pipe->horz_deci;
+ ps.ystride[1] = pipe->src.h >> pipe->vert_deci;
} else {
rc = mdss_mdp_get_plane_sizes(pipe->src_fmt->format,
pipe->src.w, pipe->src.h, &ps, 0);
@@ -143,29 +162,29 @@
mutex_lock(&mdss_mdp_smp_lock);
for (i = 0; i < ps.num_planes; i++) {
nlines = pipe->bwc_mode ? ps.rau_h[i] : 2;
- num_blks = DIV_ROUND_UP(nlines * ps.ystride[i],
- mdss_res->smp_mb_size);
+ num_blks = DIV_ROUND_UP(nlines * ps.ystride[i], SMP_MB_SIZE);
if (mdata->mdp_rev == MDSS_MDP_HW_REV_100)
num_blks = roundup_pow_of_two(num_blks);
pr_debug("reserving %d mmb for pnum=%d plane=%d\n",
num_blks, pipe->num, i);
- reserved = mdss_mdp_smp_mmb_reserve(&pipe->smp[i], num_blks);
+ reserved = mdss_mdp_smp_mmb_reserve(&pipe->smp[i],
+ &pipe->smp_reserved[i], num_blks);
if (reserved < num_blks)
break;
}
if (reserved < num_blks) {
- pr_err("insufficient MMB blocks\n");
+ pr_debug("insufficient MMB blocks\n");
for (; i >= 0; i--)
- mdss_mdp_smp_mmb_free(&pipe->smp[i]);
- return -ENOMEM;
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ rc = -ENOMEM;
}
mutex_unlock(&mdss_mdp_smp_lock);
- return 0;
+ return rc;
}
static int mdss_mdp_smp_alloc(struct mdss_mdp_pipe *pipe)
@@ -174,8 +193,10 @@
int cnt = 0;
mutex_lock(&mdss_mdp_smp_lock);
- for (i = 0; i < MAX_PLANES; i++)
+ for (i = 0; i < MAX_PLANES; i++) {
+ mdss_mdp_smp_mmb_amend(&pipe->smp[i], &pipe->smp_reserved[i]);
cnt += mdss_mdp_smp_mmb_set(pipe->ftch_id + i, &pipe->smp[i]);
+ }
mdss_mdp_smp_set_wm_levels(pipe, cnt);
mutex_unlock(&mdss_mdp_smp_lock);
return 0;
@@ -257,10 +278,13 @@
pipe = NULL;
}
- if (pipe)
+ if (pipe) {
pr_debug("type=%x pnum=%d\n", pipe->type, pipe->num);
- else
+ mutex_init(&pipe->pp_res.hist.hist_mutex);
+ spin_lock_init(&pipe->pp_res.hist.hist_lock);
+ } else {
pr_err("no %d type pipes available\n", type);
+ }
return pipe;
}
@@ -307,18 +331,20 @@
mutex_lock(&mdss_mdp_sspp_lock);
pipe = mdss_mdp_pipe_search(mdata, ndx);
- if (!pipe)
- return ERR_PTR(-EINVAL);
+ if (!pipe) {
+ pipe = ERR_PTR(-EINVAL);
+ goto error;
+ }
if (mdss_mdp_pipe_map(pipe))
- return ERR_PTR(-EACCES);
+ pipe = ERR_PTR(-EACCES);
+error:
mutex_unlock(&mdss_mdp_sspp_lock);
-
return pipe;
}
-static struct mdss_mdp_pipe *mdss_mdp_pipe_search(struct mdss_data_type *mdata,
+struct mdss_mdp_pipe *mdss_mdp_pipe_search(struct mdss_data_type *mdata,
u32 ndx)
{
u32 i;
@@ -376,6 +402,7 @@
{
u32 img_size, src_size, src_xy, dst_size, dst_xy, ystride0, ystride1;
u32 width, height;
+ u32 decimation;
pr_debug("pnum=%d wh=%dx%d src={%d,%d,%d,%d} dst={%d,%d,%d,%d}\n",
pipe->num, pipe->img_width, pipe->img_height,
@@ -396,6 +423,12 @@
height /= 2;
}
+ decimation = ((1 << pipe->horz_deci) - 1) << 8;
+ decimation |= ((1 << pipe->vert_deci) - 1);
+ if (decimation)
+ pr_debug("Image decimation h=%d v=%d\n",
+ pipe->horz_deci, pipe->vert_deci);
+
img_size = (height << 16) | width;
src_size = (pipe->src.h << 16) | pipe->src.w;
src_xy = (pipe->src.y << 16) | pipe->src.x;
@@ -418,6 +451,8 @@
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_OUT_XY, dst_xy);
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_SRC_YSTRIDE0, ystride0);
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_SRC_YSTRIDE1, ystride1);
+ mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_DECIMATION_CONFIG,
+ decimation);
return 0;
}
@@ -465,17 +500,12 @@
fmt->fetch_planes != MDSS_MDP_PLANE_INTERLEAVED)
src_format |= BIT(8); /* SRCC3_EN */
- if (fmt->fetch_planes != MDSS_MDP_PLANE_PLANAR) {
- unpack = (fmt->element[3] << 24) | (fmt->element[2] << 16) |
+ unpack = (fmt->element[3] << 24) | (fmt->element[2] << 16) |
(fmt->element[1] << 8) | (fmt->element[0] << 0);
-
- src_format |= ((fmt->unpack_count - 1) << 12) |
- (fmt->unpack_tight << 17) |
- (fmt->unpack_align_msb << 18) |
- ((fmt->bpp - 1) << 9);
- } else {
- unpack = 0;
- }
+ src_format |= ((fmt->unpack_count - 1) << 12) |
+ (fmt->unpack_tight << 17) |
+ (fmt->unpack_align_msb << 18) |
+ ((fmt->bpp - 1) << 9);
mdss_mdp_pipe_sspp_setup(pipe, &opmode);
@@ -487,28 +517,6 @@
return 0;
}
-static void mdss_mdp_addr_add_offset(struct mdss_mdp_pipe *pipe,
- struct mdss_mdp_data *data)
-{
- data->p[0].addr += pipe->src.x +
- (pipe->src.y * pipe->src_planes.ystride[0]);
- if (data->num_planes > 1) {
- u8 hmap[] = { 1, 2, 1, 2 };
- u8 vmap[] = { 1, 1, 2, 2 };
- u16 xoff = pipe->src.x / hmap[pipe->src_fmt->chroma_sample];
- u16 yoff = pipe->src.y / vmap[pipe->src_fmt->chroma_sample];
-
- if (data->num_planes == 2) /* pseudo planar */
- xoff *= 2;
- data->p[1].addr += xoff + (yoff * pipe->src_planes.ystride[1]);
-
- if (data->num_planes > 2) { /* planar */
- data->p[2].addr += xoff +
- (yoff * pipe->src_planes.ystride[2]);
- }
- }
-}
-
int mdss_mdp_pipe_addr_setup(struct mdss_data_type *mdata, u32 *offsets,
u32 *ftch_id, u32 type, u32 num_base, u32 len)
{
@@ -559,7 +567,7 @@
static int mdss_mdp_src_addr_setup(struct mdss_mdp_pipe *pipe,
struct mdss_mdp_data *data)
{
- int is_rot = pipe->mixer->rotator_mode;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
int ret = 0;
pr_debug("pnum=%d\n", pipe->num);
@@ -571,11 +579,13 @@
return ret;
if (pipe->overfetch_disable)
- mdss_mdp_addr_add_offset(pipe, data);
+ mdss_mdp_data_calc_offset(data, pipe->src.x, pipe->src.y,
+ &pipe->src_planes, pipe->src_fmt);
/* planar format expects YCbCr, swap chroma planes if YCrCb */
- if (!is_rot && (pipe->src_fmt->fetch_planes == MDSS_MDP_PLANE_PLANAR) &&
- (pipe->src_fmt->element[0] == C2_R_Cr))
+ if (mdata->mdp_rev < MDSS_MDP_HW_REV_102 &&
+ (pipe->src_fmt->fetch_planes == MDSS_MDP_PLANE_PLANAR)
+ && (pipe->src_fmt->element[0] == C1_B_Cb))
swap(data->p[1].addr, data->p[2].addr);
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_SRC0_ADDR, data->p[0].addr);
@@ -661,13 +671,6 @@
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_VIG_OP_MODE,
opmode);
- ret = mdss_mdp_smp_reserve(pipe);
- if (ret) {
- pr_err("unable to reserve smp for pnum=%d\n",
- pipe->num);
- goto done;
- }
-
mdss_mdp_smp_alloc(pipe);
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_pp.c b/drivers/video/msm/mdss/mdss_mdp_pp.c
index 2e20eb6..ce4c47c 100644
--- a/drivers/video/msm/mdss/mdss_mdp_pp.c
+++ b/drivers/video/msm/mdss/mdss_mdp_pp.c
@@ -78,7 +78,7 @@
#define MDSS_BLOCK_DISP_NUM (MDP_BLOCK_MAX - MDP_LOGICAL_BLOCK_DISP_0)
-#define HIST_WAIT_TIMEOUT(frame) ((60 * HZ * (frame)) / 1000)
+#define HIST_WAIT_TIMEOUT(frame) ((75 * HZ * (frame)) / 1000)
/* hist collect state */
enum {
HIST_UNKNOWN,
@@ -119,6 +119,39 @@
#define PP_STS_ENABLE 0x1
#define PP_STS_GAMUT_FIRST 0x2
+#define PP_AD_STATE_INIT 0x2
+#define PP_AD_STATE_CFG 0x4
+#define PP_AD_STATE_DATA 0x8
+#define PP_AD_STATE_RUN 0x10
+#define PP_AD_STATE_VSYNC 0x20
+
+#define PP_AD_STATE_IS_INITCFG(st) (((st) & PP_AD_STATE_INIT) &&\
+ ((st) & PP_AD_STATE_CFG))
+
+#define PP_AD_STATE_IS_READY(st) (((st) & PP_AD_STATE_INIT) &&\
+ ((st) & PP_AD_STATE_CFG) &&\
+ ((st) & PP_AD_STATE_DATA))
+
+#define PP_AD_STS_DIRTY_INIT 0x2
+#define PP_AD_STS_DIRTY_CFG 0x4
+#define PP_AD_STS_DIRTY_DATA 0x8
+#define PP_AD_STS_DIRTY_VSYNC 0x10
+
+#define PP_AD_STS_IS_DIRTY(sts) (((sts) & PP_AD_STS_DIRTY_INIT) ||\
+ ((sts) & PP_AD_STS_DIRTY_CFG))
+
+/* Bits 0 and 1 */
+#define MDSS_AD_INPUT_AMBIENT (0x03)
+/* Bits 3 and 7 */
+#define MDSS_AD_INPUT_STRENGTH (0x88)
+/*
+ * Check data by shifting by mode to see if it matches the
+ * MDSS_AD_INPUT_* bitfields
+ */
+#define MDSS_AD_MODE_DATA_MATCH(mode, data) ((1 << (mode)) & (data))
+#define MDSS_AD_RUNNING_AUTO_BL(ad) (((ad)->state & PP_AD_STATE_RUN) &&\
+ ((ad)->cfg.mode == MDSS_AD_MODE_AUTO_BL))
+
#define SHARP_STRENGTH_DEFAULT 32
#define SHARP_EDGE_THR_DEFAULT 112
#define SHARP_SMOOTH_THR_DEFAULT 8
@@ -154,11 +187,13 @@
};
static DEFINE_MUTEX(mdss_pp_mutex);
-static DEFINE_SPINLOCK(mdss_hist_lock);
-static DEFINE_MUTEX(mdss_mdp_hist_mutex);
static struct mdss_pp_res_type *mdss_pp_res;
-static void pp_hist_read(u32 v_base, struct pp_hist_col_info *hist_info);
+static void pp_hist_read(char __iomem *v_base,
+ struct pp_hist_col_info *hist_info);
+static int pp_histogram_setup(u32 *op, u32 block, struct mdss_mdp_mixer *mix);
+static int pp_histogram_disable(struct pp_hist_col_info *hist_info,
+ u32 done_bit, char __iomem *ctl_base);
static void pp_update_pcc_regs(u32 offset,
struct mdp_pcc_cfg_data *cfg_ptr);
static void pp_update_igc_lut(struct mdp_igc_lut_data *cfg,
@@ -167,7 +202,8 @@
struct mdp_ar_gc_lut_data *lut_data);
static void pp_update_argc_lut(u32 offset,
struct mdp_pgc_lut_data *config);
-static void pp_update_hist_lut(u32 offset, struct mdp_hist_lut_data *cfg);
+static void pp_update_hist_lut(char __iomem *base,
+ struct mdp_hist_lut_data *cfg);
static void pp_pa_config(unsigned long flags, u32 base,
struct pp_sts_type *pp_sts,
struct mdp_pa_cfg *pa_config);
@@ -178,13 +214,20 @@
struct pp_sts_type *pp_sts,
struct mdp_igc_lut_data *igc_config,
u32 pipe_num);
-static void pp_enhist_config(unsigned long flags, u32 base,
+static void pp_enhist_config(unsigned long flags, char __iomem *base,
struct pp_sts_type *pp_sts,
struct mdp_hist_lut_data *enhist_cfg);
static void pp_sharp_config(char __iomem *offset,
struct pp_sts_type *pp_sts,
struct mdp_sharp_cfg *sharp_config);
+static void pp_ad_vsync_handler(struct mdss_mdp_ctl *ctl, ktime_t t);
+static void pp_ad_cfg_write(struct mdss_ad_info *ad);
+static void pp_ad_init_write(struct mdss_ad_info *ad);
+static void pp_ad_input_write(struct mdss_ad_info *ad, u32 bl_lvl);
+static int mdss_mdp_ad_setup(struct msm_fb_data_type *mfd);
+static void pp_ad_cfg_lut(char __iomem *offset, u32 *data);
+static u32 last_sts, last_state;
int mdss_mdp_csc_setup_data(u32 block, u32 blk_idx, u32 tbl_idx,
struct mdp_csc_cfg *data)
@@ -379,7 +422,7 @@
}
}
-static void pp_enhist_config(unsigned long flags, u32 base,
+static void pp_enhist_config(unsigned long flags, char __iomem *base,
struct pp_sts_type *pp_sts,
struct mdp_hist_lut_data *enhist_cfg)
{
@@ -419,6 +462,7 @@
{
u32 opmode = 0, base = 0;
unsigned long flags = 0;
+ char __iomem *offset;
pr_debug("pnum=%x\n", pipe->num);
@@ -452,6 +496,8 @@
}
}
+ pp_histogram_setup(&opmode, MDSS_PP_SSPP_CFG | pipe->num, pipe->mixer);
+
if (pipe->flags & MDP_OVERLAY_PP_CFG_EN) {
if (pipe->pp_cfg.config_ops & MDP_OVERLAY_PP_PA_CFG) {
flags = PP_FLAGS_DIRTY_PA;
@@ -463,6 +509,26 @@
if (pipe->pp_res.pp_sts.pa_sts & PP_STS_ENABLE)
opmode |= (1 << 4); /* PA_EN */
}
+
+ if (pipe->pp_cfg.config_ops & MDP_OVERLAY_PP_HIST_LUT_CFG) {
+ pp_enhist_config(PP_FLAGS_DIRTY_ENHIST,
+ pipe->base + MDSS_MDP_REG_VIG_HIST_LUT_BASE,
+ &pipe->pp_res.pp_sts,
+ &pipe->pp_cfg.hist_lut_cfg);
+ }
+ }
+
+ if (pipe->pp_res.pp_sts.enhist_sts & PP_STS_ENABLE) {
+ /* Enable HistLUT and PA */
+ opmode |= BIT(10) | BIT(4);
+ if (!(pipe->pp_res.pp_sts.pa_sts & PP_STS_ENABLE)) {
+ /* Program default value */
+ offset = pipe->base + MDSS_MDP_REG_VIG_PA_BASE;
+ writel_relaxed(0, offset);
+ writel_relaxed(0, offset + 4);
+ writel_relaxed(0, offset + 8);
+ writel_relaxed(0, offset + 12);
+ }
}
*op = opmode;
@@ -510,6 +576,7 @@
u32 chroma_sample;
u32 filter_mode;
struct mdss_data_type *mdata;
+ u32 src_w, src_h;
mdata = mdss_mdp_get_mdata();
if (mdata->mdp_rev >= MDSS_MDP_HW_REV_102 && pipe->src_fmt->is_yuv)
@@ -526,6 +593,9 @@
}
}
+ src_w = pipe->src.w >> pipe->horz_deci;
+ src_h = pipe->src.h >> pipe->vert_deci;
+
chroma_sample = pipe->src_fmt->chroma_sample;
if (pipe->flags & MDP_SOURCE_ROTATED_90) {
if (chroma_sample == MDSS_MDP_CHROMA_H1V2)
@@ -544,23 +614,22 @@
}
if ((pipe->src_fmt->is_yuv) &&
- !((pipe->dst.w < pipe->src.w) || (pipe->dst.h < pipe->src.h))) {
+ !((pipe->dst.w < src_w) || (pipe->dst.h < src_h))) {
pp_sharp_config(pipe->base +
MDSS_MDP_REG_VIG_QSEED2_SHARP,
&pipe->pp_res.pp_sts,
&pipe->pp_cfg.sharp_cfg);
}
- if ((pipe->src.h != pipe->dst.h) ||
+ if ((src_h != pipe->dst.h) ||
(pipe->pp_res.pp_sts.sharp_sts & PP_STS_ENABLE) ||
(chroma_sample == MDSS_MDP_CHROMA_420) ||
(chroma_sample == MDSS_MDP_CHROMA_H1V2)) {
- pr_debug("scale y - src_h=%d dst_h=%d\n",
- pipe->src.h, pipe->dst.h);
+ pr_debug("scale y - src_h=%d dst_h=%d\n", src_h, pipe->dst.h);
- if ((pipe->src.h / MAX_DOWNSCALE_RATIO) > pipe->dst.h) {
+ if ((src_h / MAX_DOWNSCALE_RATIO) > pipe->dst.h) {
pr_err("too much downscaling height=%d->%d",
- pipe->src.h, pipe->dst.h);
+ src_h, pipe->dst.h);
return -EINVAL;
}
@@ -568,11 +637,12 @@
if (pipe->type == MDSS_MDP_PIPE_TYPE_VIG) {
u32 chr_dst_h = pipe->dst.h;
- if ((chroma_sample == MDSS_MDP_CHROMA_420) ||
- (chroma_sample == MDSS_MDP_CHROMA_H1V2))
+ if (!pipe->vert_deci &&
+ ((chroma_sample == MDSS_MDP_CHROMA_420) ||
+ (chroma_sample == MDSS_MDP_CHROMA_H1V2)))
chr_dst_h *= 2; /* 2x upsample chroma */
- if (pipe->src.h <= pipe->dst.h) {
+ if (src_h <= pipe->dst.h) {
scale_config |= /* G/Y, A */
(filter_mode << 10) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 18);
@@ -581,7 +651,7 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 10) |
(MDSS_MDP_SCALE_FILTER_PCMN << 18);
- if (pipe->src.h <= chr_dst_h)
+ if (src_h <= chr_dst_h)
scale_config |= /* CrCb */
(MDSS_MDP_SCALE_FILTER_BIL << 14);
else
@@ -589,35 +659,34 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 14);
phasey_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.h, chr_dst_h);
+ PHASE_STEP_SHIFT, src_h, chr_dst_h);
writel_relaxed(phasey_step, pipe->base +
MDSS_MDP_REG_VIG_QSEED2_C12_PHASESTEPY);
} else {
- if (pipe->src.h <= pipe->dst.h)
+ if (src_h <= pipe->dst.h)
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_BIL << 10) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 18);
else
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_PCMN << 10) |
- (MDSS_MDP_SCALE_FILTER_NEAREST << 18);
+ (MDSS_MDP_SCALE_FILTER_PCMN << 18);
}
phasey_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.h, pipe->dst.h);
+ PHASE_STEP_SHIFT, src_h, pipe->dst.h);
}
- if ((pipe->src.w != pipe->dst.w) ||
+ if ((src_w != pipe->dst.w) ||
(pipe->pp_res.pp_sts.sharp_sts & PP_STS_ENABLE) ||
(chroma_sample == MDSS_MDP_CHROMA_420) ||
(chroma_sample == MDSS_MDP_CHROMA_H2V1)) {
- pr_debug("scale x - src_w=%d dst_w=%d\n",
- pipe->src.w, pipe->dst.w);
+ pr_debug("scale x - src_w=%d dst_w=%d\n", src_w, pipe->dst.w);
- if ((pipe->src.w / MAX_DOWNSCALE_RATIO) > pipe->dst.w) {
+ if ((src_w / MAX_DOWNSCALE_RATIO) > pipe->dst.w) {
pr_err("too much downscaling width=%d->%d",
- pipe->src.w, pipe->dst.w);
+ src_w, pipe->dst.w);
return -EINVAL;
}
@@ -626,11 +695,12 @@
if (pipe->type == MDSS_MDP_PIPE_TYPE_VIG) {
u32 chr_dst_w = pipe->dst.w;
- if ((chroma_sample == MDSS_MDP_CHROMA_420) ||
- (chroma_sample == MDSS_MDP_CHROMA_H2V1))
+ if (!pipe->horz_deci &&
+ ((chroma_sample == MDSS_MDP_CHROMA_420) ||
+ (chroma_sample == MDSS_MDP_CHROMA_H2V1)))
chr_dst_w *= 2; /* 2x upsample chroma */
- if (pipe->src.w <= pipe->dst.w) {
+ if (src_w <= pipe->dst.w) {
scale_config |= /* G/Y, A */
(filter_mode << 8) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 16);
@@ -639,7 +709,7 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 8) |
(MDSS_MDP_SCALE_FILTER_PCMN << 16);
- if (pipe->src.w <= chr_dst_w)
+ if (src_w <= chr_dst_w)
scale_config |= /* CrCb */
(MDSS_MDP_SCALE_FILTER_BIL << 12);
else
@@ -647,22 +717,22 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 12);
phasex_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.w, chr_dst_w);
+ PHASE_STEP_SHIFT, src_w, chr_dst_w);
writel_relaxed(phasex_step, pipe->base +
MDSS_MDP_REG_VIG_QSEED2_C12_PHASESTEPX);
} else {
- if (pipe->src.w <= pipe->dst.w)
+ if (src_w <= pipe->dst.w)
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_BIL << 8) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 16);
else
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_PCMN << 8) |
- (MDSS_MDP_SCALE_FILTER_NEAREST << 16);
+ (MDSS_MDP_SCALE_FILTER_PCMN << 16);
}
phasex_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.w, pipe->dst.w);
+ PHASE_STEP_SHIFT, src_w, pipe->dst.w);
}
writel_relaxed(scale_config, pipe->base +
@@ -692,6 +762,17 @@
void mdss_mdp_pipe_sspp_term(struct mdss_mdp_pipe *pipe)
{
+ u32 done_bit;
+ struct pp_hist_col_info *hist_info;
+ char __iomem *ctl_base;
+
+ if (pipe && pipe->pp_res.hist.col_en) {
+ done_bit = 3 << (pipe->num * 4);
+ hist_info = &pipe->pp_res.hist;
+ ctl_base = pipe->base +
+ MDSS_MDP_REG_VIG_HIST_CTL_BASE;
+ pp_histogram_disable(hist_info, done_bit, ctl_base);
+ }
memset(&pipe->pp_cfg, 0, sizeof(struct mdp_overlay_pp_params));
memset(&pipe->pp_res, 0, sizeof(struct mdss_pipe_pp_res));
}
@@ -784,19 +865,92 @@
return 0;
}
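+/* Return the DSPP register base for dspp_num, or ERR_PTR(-EINVAL) on error */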
+static char __iomem *mdss_mdp_get_dspp_addr_off(u32 dspp_num)
+{
+ struct mdss_data_type *mdata;
+ struct mdss_mdp_mixer *mixer;
+
+ mdata = mdss_mdp_get_mdata();
+ if (mdata->nmixers_intf <= dspp_num) {
+ pr_err("Invalid dspp_num=%d", dspp_num);
+ return ERR_PTR(-EINVAL);
+ }
+ mixer = mdata->mixer_intf + dspp_num;
+ return mixer->dspp_base;
+}
+
+/* Assumes that this function is called with MDP clocks enabled */
+static int pp_histogram_setup(u32 *op, u32 block, struct mdss_mdp_mixer *mix)
+{
+ int ret = -EINVAL;
+ char __iomem *base;
+ u32 op_flags, kick_base, col_state;
+ struct mdss_data_type *mdata;
+ struct mdss_mdp_pipe *pipe;
+ struct pp_hist_col_info *hist_info;
+ unsigned long flag;
+
+ if (mix && (PP_LOCAT(block) == MDSS_PP_DSPP_CFG)) {
+ /* HIST_EN & AUTO_CLEAR */
+ op_flags = BIT(16) | BIT(17);
+ hist_info = &mdss_pp_res->dspp_hist[mix->num];
+ base = mdss_mdp_get_dspp_addr_off(PP_BLOCK(block));
+ kick_base = MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
+ } else if (PP_LOCAT(block) == MDSS_PP_SSPP_CFG) {
+ mdata = mdss_mdp_get_mdata();
+ pipe = mdss_mdp_pipe_get(mdata, BIT(PP_BLOCK(block)));
+ if (IS_ERR_OR_NULL(pipe)) {
+ pr_debug("pipe DNE (%d)", (u32) BIT(PP_BLOCK(block)));
+ ret = -ENODEV;
+ goto error;
+ }
+ /* HIST_EN & AUTO_CLEAR */
+ op_flags = BIT(8) | BIT(9);
+ hist_info = &pipe->pp_res.hist;
+ base = pipe->base;
+ kick_base = MDSS_MDP_REG_VIG_HIST_CTL_BASE;
+ mdss_mdp_pipe_unmap(pipe);
+ } else {
+ pr_warn("invalid histogram location (%d)", block);
+ goto error;
+ }
+
+ if (hist_info->col_en) {
+ *op |= op_flags;
+ mutex_lock(&hist_info->hist_mutex);
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ col_state = hist_info->col_state;
+ if (hist_info->is_kick_ready &&
+ ((col_state == HIST_IDLE) ||
+ ((false == hist_info->read_request) &&
+ col_state == HIST_READY))) {
+ /* Kick off collection */
+ writel_relaxed(1, base + kick_base);
+ hist_info->col_state = HIST_START;
+ }
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ mutex_unlock(&hist_info->hist_mutex);
+ }
+ ret = 0;
+error:
+ return ret;
+}
+
static int pp_dspp_setup(u32 disp_num, struct mdss_mdp_ctl *ctl,
struct mdss_mdp_mixer *mixer)
{
u32 flags, base, offset, dspp_num, opmode = 0;
struct mdp_dither_cfg_data *dither_cfg;
- struct pp_hist_col_info *hist_info;
struct mdp_pgc_lut_data *pgc_config;
struct pp_sts_type *pp_sts;
- u32 data, col_state;
- unsigned long flag;
+ u32 data;
+ char __iomem *basel;
int i, ret = 0;
+ struct mdss_data_type *mdata;
- if (!mixer || !ctl)
+ if (!mixer || !ctl || !ctl->mdata)
return -EINVAL;
+
+ mdata = ctl->mdata;
dspp_num = mixer->num;
@@ -805,37 +959,28 @@
(dspp_num >= MDSS_MDP_MAX_DSPP))
return -EINVAL;
base = MDSS_MDP_REG_DSPP_OFFSET(dspp_num);
- hist_info = &mdss_pp_res->dspp_hist[dspp_num];
+ basel = mdss_mdp_get_dspp_addr_off(dspp_num);
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
- if (hist_info->col_en) {
- /* HIST_EN & AUTO_CLEAR */
- opmode |= (1 << 16) | (1 << 17);
- mutex_lock(&mdss_mdp_hist_mutex);
- spin_lock_irqsave(&mdss_hist_lock, flag);
- col_state = hist_info->col_state;
- if (hist_info->is_kick_ready &&
- ((col_state == HIST_IDLE) ||
- ((false == hist_info->read_request) &&
- col_state == HIST_READY))) {
- /* Kick off collection */
- MDSS_MDP_REG_WRITE(base +
- MDSS_MDP_REG_DSPP_HIST_CTL_BASE, 1);
- hist_info->col_state = HIST_START;
- }
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- mutex_unlock(&mdss_mdp_hist_mutex);
- }
+ ret = pp_histogram_setup(&opmode, MDSS_PP_DSPP_CFG | dspp_num, mixer);
+ if (ret)
+ goto dspp_exit;
if (disp_num < MDSS_BLOCK_DISP_NUM)
flags = mdss_pp_res->pp_disp_flags[disp_num];
else
flags = 0;
+ if (dspp_num < mdata->nad_cfgs) {
+ ret = mdss_mdp_ad_setup(ctl->mfd);
+ if (ret < 0)
+ pr_warn("ad_setup(dspp%d) returns %d", dspp_num, ret);
+ }
/* nothing to update */
- if ((!flags) && (!(hist_info->col_en)))
+ if ((!flags) && (!(opmode)) && (ret <= 0))
goto dspp_exit;
+ ret = 0;
pp_sts = &mdss_pp_res->pp_dspp_sts[dspp_num];
@@ -848,7 +993,7 @@
pp_igc_config(flags, MDSS_MDP_REG_IGC_DSPP_BASE, pp_sts,
&mdss_pp_res->igc_disp_cfg[disp_num], dspp_num);
- pp_enhist_config(flags, base + MDSS_MDP_REG_DSPP_HIST_LUT_BASE,
+ pp_enhist_config(flags, basel + MDSS_MDP_REG_DSPP_HIST_LUT_BASE,
pp_sts, &mdss_pp_res->enhist_disp_cfg[disp_num]);
if (pp_sts->pa_sts & PP_STS_ENABLE)
@@ -922,7 +1067,7 @@
if (pp_sts->pgc_sts & PP_STS_ENABLE)
opmode |= (1 << 22);
- MDSS_MDP_REG_WRITE(base + MDSS_MDP_REG_DSPP_OP_MODE, opmode);
+ writel_relaxed(opmode, basel + MDSS_MDP_REG_DSPP_OP_MODE);
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_FLUSH, BIT(13 + dspp_num));
wmb();
dspp_exit:
@@ -981,16 +1126,30 @@
* Set dirty and write bits on features that were enabled so they will be
* reconfigured
*/
-int mdss_mdp_pp_resume(u32 mixer_num)
+int mdss_mdp_pp_resume(struct mdss_mdp_ctl *ctl, u32 mixer_num)
{
u32 flags = 0;
struct pp_sts_type pp_sts;
-
+ struct mdss_ad_info *ad;
+ struct mdss_data_type *mdata = ctl->mdata;
if (mixer_num >= MDSS_MDP_MAX_DSPP) {
pr_warn("invalid mixer_num");
return -EINVAL;
}
+ if (mixer_num < mdata->nad_cfgs) {
+ ad = &mdata->ad_cfgs[mixer_num];
+
+ if (PP_AD_STATE_CFG & ad->state)
+ pp_ad_cfg_write(ad);
+ if (PP_AD_STATE_INIT & ad->state)
+ pp_ad_init_write(ad);
+ if (PP_AD_STATE_DATA & ad->state)
+ pp_ad_input_write(ad, ctl->mfd->bl_level);
+ if ((PP_AD_STATE_VSYNC & ad->state) && ad->calc_itr)
+ ctl->add_vsync_handler(ctl, &ad->handle);
+ }
+
pp_sts = mdss_pp_res->pp_dspp_sts[mixer_num];
if (pp_sts.pa_sts & PP_STS_ENABLE) {
@@ -1056,7 +1215,9 @@
int mdss_mdp_pp_init(struct device *dev)
{
- int ret = 0;
+ int i, ret = 0;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
+ struct mdss_mdp_pipe *vig;
mutex_lock(&mdss_pp_mutex);
if (!mdss_pp_res) {
@@ -1066,6 +1227,18 @@
pr_err("%s mdss_pp_res allocation failed!", __func__);
ret = -ENOMEM;
}
+
+ if (!ret) {
+ for (i = 0; i < MDSS_MDP_MAX_DSPP; i++) {
+ mutex_init(&mdss_pp_res->dspp_hist[i].hist_mutex);
+ spin_lock_init(&mdss_pp_res->dspp_hist[i].hist_lock);
+ }
+ }
+ }
+ if (mdata) {
+ vig = mdata->vig_pipes;
+ for (i = 0; i < mdata->nvig_pipes; i++) {
+ mutex_init(&vig[i].pp_res.hist.hist_mutex);
+ spin_lock_init(&vig[i].pp_res.hist.hist_lock);
+ }
}
mutex_unlock(&mdss_pp_mutex);
return ret;
@@ -1413,11 +1586,13 @@
if (copy_to_user(config->c0_c1_data, local_cfg.c2_data,
config->len * sizeof(u32))) {
ret = -EFAULT;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
goto igc_config_exit;
}
if (copy_to_user(config->c2_data, local_cfg.c0_c1_data,
config->len * sizeof(u32))) {
ret = -EFAULT;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
goto igc_config_exit;
}
*copyback = 1;
@@ -1533,13 +1708,17 @@
}
/* Note: Assumes that its inputs have been checked by calling function */
-static void pp_update_hist_lut(u32 offset, struct mdp_hist_lut_data *cfg)
+static void pp_update_hist_lut(char __iomem *offset,
+ struct mdp_hist_lut_data *cfg)
{
int i;
for (i = 0; i < ENHIST_LUT_ENTRIES; i++)
- MDSS_MDP_REG_WRITE(offset, cfg->data[i]);
+ writel_relaxed(cfg->data[i], offset);
/* swap */
- MDSS_MDP_REG_WRITE(offset + 4, 1);
+ if (PP_LOCAT(cfg->block) == MDSS_PP_DSPP_CFG)
+ writel_relaxed(1, offset + 4);
+ else
+ writel_relaxed(1, offset + 16);
}
int mdss_mdp_argc_config(struct mdss_mdp_ctl *ctl,
@@ -1562,7 +1741,7 @@
mutex_lock(&mdss_pp_mutex);
disp_num = PP_BLOCK(config->block) - MDP_LOGICAL_BLOCK_DISP_0;
- switch (config->block & MDSS_PP_LOCATION_MASK) {
+ switch (PP_LOCAT(config->block)) {
case MDSS_PP_LM_CFG:
argc_offset = MDSS_MDP_REG_LM_OFFSET(dspp_num) +
MDSS_MDP_REG_LM_GC_LUT_BASE;
@@ -1602,16 +1781,19 @@
pp_read_argc_lut(&local_cfg, argc_offset);
if (copy_to_user(config->r_data,
&mdss_pp_res->gc_lut_r[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
if (copy_to_user(config->g_data,
&mdss_pp_res->gc_lut_g[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
if (copy_to_user(config->b_data,
&mdss_pp_res->gc_lut_b[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
@@ -1658,12 +1840,12 @@
if (!ctl)
return -EINVAL;
- if ((config->block < MDP_LOGICAL_BLOCK_DISP_0) ||
- (config->block >= MDP_BLOCK_MAX))
+ if ((PP_BLOCK(config->block) < MDP_LOGICAL_BLOCK_DISP_0) ||
+ (PP_BLOCK(config->block) >= MDP_BLOCK_MAX))
return -EINVAL;
mutex_lock(&mdss_pp_mutex);
- disp_num = config->block - MDP_LOGICAL_BLOCK_DISP_0;
+ disp_num = PP_BLOCK(config->block) - MDP_LOGICAL_BLOCK_DISP_0;
if (config->ops & MDP_PP_OPS_READ) {
ret = pp_get_dspp_num(disp_num, &dspp_num);
@@ -1682,6 +1864,7 @@
if (copy_to_user(config->data,
&mdss_pp_res->enhist_lut[disp_num][0],
ENHIST_LUT_ENTRIES * sizeof(u32))) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto enhist_config_exit;
}
@@ -1821,124 +2004,206 @@
mdss_mdp_pp_setup(ctl);
return ret;
}
-static void pp_hist_read(u32 v_base, struct pp_hist_col_info *hist_info)
+static void pp_hist_read(char __iomem *v_base,
+ struct pp_hist_col_info *hist_info)
{
int i, i_start;
u32 data;
- data = MDSS_MDP_REG_READ(v_base);
+ data = readl_relaxed(v_base);
i_start = data >> 24;
hist_info->data[i_start] = data & 0xFFFFFF;
for (i = i_start + 1; i < HIST_V_SIZE; i++)
- hist_info->data[i] = MDSS_MDP_REG_READ(v_base) & 0xFFFFFF;
+ hist_info->data[i] = readl_relaxed(v_base) & 0xFFFFFF;
for (i = 0; i < i_start - 1; i++)
- hist_info->data[i] = MDSS_MDP_REG_READ(v_base) & 0xFFFFFF;
+ hist_info->data[i] = readl_relaxed(v_base) & 0xFFFFFF;
hist_info->hist_cnt_read++;
}
-int mdss_mdp_histogram_start(struct mdss_mdp_ctl *ctl,
- struct mdp_histogram_start_req *req)
+/* Assumes that relevant clocks are enabled */
+static int pp_histogram_enable(struct pp_hist_col_info *hist_info,
+ struct mdp_histogram_start_req *req,
+ u32 shift_bit, char __iomem *ctl_base)
{
- u32 ctl_base, done_shift_bit;
+ unsigned long flag;
+ int ret = 0;
+ mutex_lock(&hist_info->hist_mutex);
+ /* check if it is idle */
+ if (hist_info->col_en) {
+ pr_info("%s Hist collection has already been enabled %d",
+ __func__, (u32) ctl_base);
+ ret = -EINVAL;
+ goto exit;
+ }
+ hist_info->frame_cnt = req->frame_cnt;
+ init_completion(&hist_info->comp);
+ hist_info->hist_cnt_read = 0;
+ hist_info->hist_cnt_sent = 0;
+ hist_info->hist_cnt_time = 0;
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ hist_info->read_request = false;
+ hist_info->col_state = HIST_RESET;
+ hist_info->col_en = true;
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ hist_info->is_kick_ready = false;
+ mdss_mdp_hist_irq_enable(3 << shift_bit);
+ writel_relaxed(req->frame_cnt, ctl_base + 8);
+ /* Kick off histogram reset */
+ writel_relaxed(1, ctl_base + 4);
+exit:
+ mutex_unlock(&hist_info->hist_mutex);
+ return ret;
+}
+
+int mdss_mdp_histogram_start(struct mdss_mdp_ctl *ctl,
+ struct mdp_histogram_start_req *req)
+{
+ u32 done_shift_bit;
+ char __iomem *ctl_base;
struct pp_hist_col_info *hist_info;
int i, ret = 0;
u32 disp_num, dspp_num = 0;
u32 mixer_cnt, mixer_id[MDSS_MDP_INTF_MAX_LAYERMIXER];
- unsigned long flag;
-
+ struct mdss_mdp_pipe *pipe;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
if (!ctl)
return -EINVAL;
- if ((req->block < MDP_LOGICAL_BLOCK_DISP_0) ||
- (req->block >= MDP_BLOCK_MAX))
+ if ((PP_BLOCK(req->block) < MDP_LOGICAL_BLOCK_DISP_0) ||
+ (PP_BLOCK(req->block) >= MDP_BLOCK_MAX))
return -EINVAL;
- mutex_lock(&mdss_mdp_hist_mutex);
- disp_num = req->block - MDP_LOGICAL_BLOCK_DISP_0;
+ disp_num = PP_BLOCK(req->block) - MDP_LOGICAL_BLOCK_DISP_0;
mixer_cnt = mdss_mdp_get_ctl_mixers(disp_num, mixer_id);
if (!mixer_cnt) {
pr_err("%s, no dspp connects to disp %d",
__func__, disp_num);
ret = -EPERM;
- goto hist_start_exit;
+ goto hist_exit;
}
if (mixer_cnt >= MDSS_MDP_MAX_DSPP) {
pr_err("%s, Too many dspp connects to disp %d",
__func__, mixer_cnt);
ret = -EPERM;
- goto hist_start_exit;
+ goto hist_exit;
}
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
- for (i = 0; i < mixer_cnt; i++) {
- dspp_num = mixer_id[i];
- hist_info = &mdss_pp_res->dspp_hist[dspp_num];
- done_shift_bit = (dspp_num * 4) + 12;
- /* check if it is idle */
- if (hist_info->col_en) {
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
- pr_info("%s Hist collection has already been enabled %d",
- __func__, dspp_num);
- goto hist_start_exit;
+
+ if (PP_LOCAT(req->block) == MDSS_PP_SSPP_CFG) {
+ i = MDSS_PP_ARG_MASK & req->block;
+ if (!i) {
+ ret = -EINVAL;
+ pr_warn("Must pass pipe arguments, %d", i);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ goto hist_exit;
}
- spin_lock_irqsave(&mdss_hist_lock, flag);
- hist_info->frame_cnt = req->frame_cnt;
- init_completion(&hist_info->comp);
- hist_info->hist_cnt_read = 0;
- hist_info->hist_cnt_sent = 0;
- hist_info->read_request = false;
- hist_info->col_state = HIST_RESET;
- hist_info->col_en = true;
- hist_info->is_kick_ready = false;
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- mdss_pp_res->hist_col[disp_num][i] =
- &mdss_pp_res->dspp_hist[dspp_num];
- mdss_mdp_hist_irq_enable(3 << done_shift_bit);
- ctl_base = MDSS_MDP_REG_DSPP_OFFSET(dspp_num) +
- MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
- MDSS_MDP_REG_WRITE(ctl_base + 8, req->frame_cnt);
- /* Kick out reset start */
- MDSS_MDP_REG_WRITE(ctl_base + 4, 1);
+
+ for (i = 0; i < MDSS_PP_ARG_NUM; i++) {
+ if (!PP_ARG(i, req->block))
+ continue;
+ pipe = mdss_mdp_pipe_get(mdata, BIT(i));
+ if (IS_ERR_OR_NULL(pipe))
+ continue;
+ if (!pipe || pipe->num > MDSS_MDP_SSPP_VIG2) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ ret = -EINVAL;
+ pr_warn("Invalid Hist pipe (%d)", i);
+ goto hist_exit;
+ }
+ done_shift_bit = (pipe->num * 4);
+ hist_info = &pipe->pp_res.hist;
+ ctl_base = pipe->base +
+ MDSS_MDP_REG_VIG_HIST_CTL_BASE;
+ ret = pp_histogram_enable(hist_info, req,
+ done_shift_bit, ctl_base);
+ mdss_mdp_pipe_unmap(pipe);
+ }
+ } else if (PP_LOCAT(req->block) == MDSS_PP_DSPP_CFG) {
+ for (i = 0; i < mixer_cnt; i++) {
+ dspp_num = mixer_id[i];
+ done_shift_bit = (dspp_num * 4) + 12;
+ hist_info = &mdss_pp_res->dspp_hist[dspp_num];
+ ctl_base = mdss_mdp_get_dspp_addr_off(dspp_num) +
+ MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
+ ret = pp_histogram_enable(hist_info, req,
+ done_shift_bit, ctl_base);
+ mdss_pp_res->pp_disp_flags[disp_num] |=
+ PP_FLAGS_DIRTY_HIST_COL;
+ }
}
- for (i = mixer_cnt; i < MDSS_MDP_MAX_DSPP; i++)
- mdss_pp_res->hist_col[disp_num][i] = 0;
- mdss_pp_res->pp_disp_flags[disp_num] |= PP_FLAGS_DIRTY_HIST_COL;
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
-hist_start_exit:
- mutex_unlock(&mdss_mdp_hist_mutex);
- if (!ret) {
+
+hist_exit:
+ if (!ret && (PP_LOCAT(req->block) == MDSS_PP_DSPP_CFG)) {
mdss_mdp_pp_setup(ctl);
/* wait for a frame to let the histogram enable itself */
+ /* TODO add hysteresis value to be able to remove this sleep */
usleep(41666);
for (i = 0; i < mixer_cnt; i++) {
dspp_num = mixer_id[i];
hist_info = &mdss_pp_res->dspp_hist[dspp_num];
- mutex_lock(&mdss_mdp_hist_mutex);
- spin_lock_irqsave(&mdss_hist_lock, flag);
+ mutex_lock(&hist_info->hist_mutex);
hist_info->is_kick_ready = true;
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- mutex_unlock(&mdss_mdp_hist_mutex);
+ mutex_unlock(&hist_info->hist_mutex);
+ }
+ } else if (!ret) {
+ for (i = 0; i < MDSS_PP_ARG_NUM; i++) {
+ if (!PP_ARG(i, req->block))
+ continue;
+ pr_info("PP_ARG(%d) = %d", i, PP_ARG(i, req->block));
+ pipe = mdss_mdp_pipe_get(mdata, BIT(i));
+ if (IS_ERR_OR_NULL(pipe))
+ continue;
+ hist_info = &pipe->pp_res.hist;
+ hist_info->is_kick_ready = true;
+ mdss_mdp_pipe_unmap(pipe);
}
}
return ret;
}
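+/*
+ * Stop collection on one histogram block: clears col_en, disables its
+ * done/reset interrupts and cancels any in-progress collection.
+ */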
+static int pp_histogram_disable(struct pp_hist_col_info *hist_info,
+ u32 done_bit, char __iomem *ctl_base)
+{
+ int ret = 0;
+ unsigned long flag;
+ mutex_lock(&hist_info->hist_mutex);
+ if (hist_info->col_en == false) {
+ pr_debug("Histogram already disabled (%d)", (u32) ctl_base);
+ ret = -EINVAL;
+ goto exit;
+ }
+ complete_all(&hist_info->comp);
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ hist_info->col_en = false;
+ hist_info->col_state = HIST_UNKNOWN;
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ hist_info->is_kick_ready = false;
+ mdss_mdp_hist_irq_disable(done_bit);
+ writel_relaxed(BIT(1), ctl_base);/* cancel */
+ ret = 0;
+exit:
+ mutex_unlock(&hist_info->hist_mutex);
+ return ret;
+}
+
int mdss_mdp_histogram_stop(struct mdss_mdp_ctl *ctl, u32 block)
{
int i, ret = 0;
- u32 dspp_num, disp_num, ctl_base, done_bit;
+ char __iomem *ctl_base;
+ u32 dspp_num, disp_num, done_bit;
struct pp_hist_col_info *hist_info;
u32 mixer_cnt, mixer_id[MDSS_MDP_INTF_MAX_LAYERMIXER];
- unsigned long flag;
+ struct mdss_mdp_pipe *pipe;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
if (!ctl)
return -EINVAL;
- if ((block < MDP_LOGICAL_BLOCK_DISP_0) ||
- (block >= MDP_BLOCK_MAX))
+ if ((PP_BLOCK(block) < MDP_LOGICAL_BLOCK_DISP_0) ||
+ (PP_BLOCK(block) >= MDP_BLOCK_MAX))
return -EINVAL;
- mutex_lock(&mdss_mdp_hist_mutex);
- disp_num = block - MDP_LOGICAL_BLOCK_DISP_0;
+ disp_num = PP_BLOCK(block) - MDP_LOGICAL_BLOCK_DISP_0;
mixer_cnt = mdss_mdp_get_ctl_mixers(disp_num, mixer_id);
if (!mixer_cnt) {
@@ -1954,162 +2219,313 @@
goto hist_stop_exit;
}
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
- for (i = 0; i < mixer_cnt; i++) {
- dspp_num = mixer_id[i];
- hist_info = &mdss_pp_res->dspp_hist[dspp_num];
- done_bit = 3 << ((dspp_num * 4) + 12);
- ctl_base = MDSS_MDP_REG_DSPP_OFFSET(dspp_num) +
- MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
- if (hist_info->col_en == false) {
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
- goto hist_stop_exit;
+ if (PP_LOCAT(block) == MDSS_PP_SSPP_CFG) {
+ i = MDSS_PP_ARG_MASK & block;
+ if (!i) {
+ pr_warn("Must pass pipe arguments, %d", i);
+ goto hist_stop_clk;
}
- complete_all(&hist_info->comp);
- spin_lock_irqsave(&mdss_hist_lock, flag);
- hist_info->col_en = false;
- hist_info->col_state = HIST_UNKNOWN;
- hist_info->is_kick_ready = false;
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- mdss_mdp_hist_irq_disable(done_bit);
- MDSS_MDP_REG_WRITE(ctl_base, (1 << 1));/* cancel */
+
+ for (i = 0; i < MDSS_PP_ARG_NUM; i++) {
+ if (!PP_ARG(i, block))
+ continue;
+ pipe = mdss_mdp_pipe_get(mdata, BIT(i));
+ if (IS_ERR_OR_NULL(pipe) ||
+ pipe->num > MDSS_MDP_SSPP_VIG2) {
+ pr_warn("Invalid Hist pipe (%d)", i);
+ continue;
+ }
+ done_bit = 3 << (pipe->num * 4);
+ hist_info = &pipe->pp_res.hist;
+ ctl_base = pipe->base +
+ MDSS_MDP_REG_VIG_HIST_CTL_BASE;
+ ret = pp_histogram_disable(hist_info, done_bit,
+ ctl_base);
+ mdss_mdp_pipe_unmap(pipe);
+ if (ret)
+ goto hist_stop_clk;
+ }
+ } else if (PP_LOCAT(block) == MDSS_PP_DSPP_CFG) {
+ for (i = 0; i < mixer_cnt; i++) {
+ dspp_num = mixer_id[i];
+ done_bit = 3 << ((dspp_num * 4) + 12);
+ hist_info = &mdss_pp_res->dspp_hist[dspp_num];
+ ctl_base = mdss_mdp_get_dspp_addr_off(dspp_num) +
+ MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
+ ret = pp_histogram_disable(hist_info, done_bit,
+ ctl_base);
+ if (ret)
+ goto hist_stop_clk;
+ mdss_pp_res->pp_disp_flags[disp_num] |=
+ PP_FLAGS_DIRTY_HIST_COL;
+ }
}
- for (i = 0; i < MDSS_MDP_MAX_DSPP; i++)
- mdss_pp_res->hist_col[disp_num][i] = 0;
- mdss_pp_res->pp_disp_flags[disp_num] |= PP_FLAGS_DIRTY_HIST_COL;
+hist_stop_clk:
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
hist_stop_exit:
- mutex_unlock(&mdss_mdp_hist_mutex);
- if (!ret)
+ if (!ret && (PP_LOCAT(block) == MDSS_PP_DSPP_CFG))
mdss_mdp_pp_setup(ctl);
return ret;
}
-int mdss_mdp_hist_collect(struct mdss_mdp_ctl *ctl,
- struct mdp_histogram_data *hist,
- u32 *hist_data_addr)
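+/*
+ * Collect one histogram block: wait for the done interrupt if no cached
+ * data is ready, then read the bins into hist_info->data.
+ */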
+static int pp_hist_collect(struct mdss_mdp_ctl *ctl,
+ struct mdp_histogram_data *hist,
+ struct pp_hist_col_info *hist_info,
+ char __iomem *ctl_base)
{
- int i, j, wait_ret, ret = 0;
- u32 timeout, v_base;
- struct pp_hist_col_info *hist_info;
- u32 dspp_num, disp_num, ctl_base;
- u32 mixer_cnt, mixer_id[MDSS_MDP_INTF_MAX_LAYERMIXER];
+ int wait_ret, ret = 0;
+ u32 timeout;
+ char __iomem *v_base;
unsigned long flag;
+ struct mdss_pipe_pp_res *res;
+ struct mdss_mdp_pipe *pipe;
+
+ mutex_lock(&hist_info->hist_mutex);
+ if ((hist_info->col_en == 0) ||
+ (hist_info->col_state == HIST_UNKNOWN)) {
+ ret = -EINVAL;
+ goto hist_collect_exit;
+ }
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ /* wait for hist done if cache has no data */
+ if (hist_info->col_state != HIST_READY) {
+ hist_info->read_request = true;
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ timeout = HIST_WAIT_TIMEOUT(hist_info->frame_cnt);
+ mutex_unlock(&hist_info->hist_mutex);
+ /* flush updates before wait*/
+ if (PP_LOCAT(hist->block) == MDSS_PP_DSPP_CFG)
+ mdss_mdp_pp_setup(ctl);
+ if (PP_LOCAT(hist->block) == MDSS_PP_SSPP_CFG) {
+ res = container_of(hist_info, struct mdss_pipe_pp_res,
+ hist);
+ pipe = container_of(res, struct mdss_mdp_pipe, pp_res);
+ pipe->params_changed++;
+ }
+ wait_ret = wait_for_completion_killable_timeout(
+ &(hist_info->comp), timeout);
+
+ mutex_lock(&hist_info->hist_mutex);
+ if (wait_ret == 0) {
+ ret = -ETIMEDOUT;
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ pr_debug("bin collection timed out, state %d",
+ hist_info->col_state);
+ /*
+ * When the histogram has timed out (usually
+ * underrun) change the SW state back to idle
+ * since histogram hardware will have done the
+ * same. Histogram data also needs to be
+ * cleared in this case, which is done by the
+ * histogram being read (triggered by READY
+ * state, which also moves the histogram SW back
+ * to IDLE).
+ */
+ hist_info->hist_cnt_time++;
+ hist_info->col_state = HIST_READY;
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ } else if (wait_ret < 0) {
+ ret = -EINTR;
+ pr_debug("%s: bin collection interrupted",
+ __func__);
+ goto hist_collect_exit;
+ }
+ if (hist_info->col_state != HIST_READY) {
+ ret = -ENODATA;
+ pr_debug("%s: state is not ready: %d",
+ __func__, hist_info->col_state);
+ goto hist_collect_exit;
+ }
+ } else {
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ }
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ if (hist_info->col_state == HIST_READY) {
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+ v_base = ctl_base + 0x1C;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ pp_hist_read(v_base, hist_info);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ spin_lock_irqsave(&hist_info->hist_lock, flag);
+ hist_info->read_request = false;
+ hist_info->col_state = HIST_IDLE;
+ }
+ spin_unlock_irqrestore(&hist_info->hist_lock, flag);
+hist_collect_exit:
+ mutex_unlock(&hist_info->hist_mutex);
+ return ret;
+}
+
+int mdss_mdp_hist_collect(struct mdss_mdp_ctl *ctl,
+ struct mdp_histogram_data *hist)
+{
+ int i, j, off, ret = 0;
+ struct pp_hist_col_info *hist_info;
+ u32 dspp_num, disp_num;
+ char __iomem *ctl_base;
+ u32 hist_cnt, mixer_id[MDSS_MDP_INTF_MAX_LAYERMIXER];
+ u32 *hist_concat = NULL;
+ u32 *hist_data_addr;
+ u32 pipe_cnt = 0;
+ u32 pipe_num = MDSS_MDP_SSPP_VIG0;
+ struct mdss_mdp_pipe *pipe;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
if (!ctl)
return -EINVAL;
- if ((hist->block < MDP_LOGICAL_BLOCK_DISP_0) ||
- (hist->block >= MDP_BLOCK_MAX))
+ if ((PP_BLOCK(hist->block) < MDP_LOGICAL_BLOCK_DISP_0) ||
+ (PP_BLOCK(hist->block) >= MDP_BLOCK_MAX))
return -EINVAL;
- mutex_lock(&mdss_mdp_hist_mutex);
- disp_num = hist->block - MDP_LOGICAL_BLOCK_DISP_0;
- mixer_cnt = mdss_mdp_get_ctl_mixers(disp_num, mixer_id);
+ disp_num = PP_BLOCK(hist->block) - MDP_LOGICAL_BLOCK_DISP_0;
+ hist_cnt = mdss_mdp_get_ctl_mixers(disp_num, mixer_id);
- if (!mixer_cnt) {
+ if (!hist_cnt) {
pr_err("%s, no dspp connects to disp %d",
__func__, disp_num);
ret = -EPERM;
goto hist_collect_exit;
}
- if (mixer_cnt >= MDSS_MDP_MAX_DSPP) {
+ if (hist_cnt >= MDSS_MDP_MAX_DSPP) {
pr_err("%s, Too many dspp connects to disp %d",
- __func__, mixer_cnt);
+ __func__, hist_cnt);
ret = -EPERM;
goto hist_collect_exit;
}
- hist_info = &mdss_pp_res->dspp_hist[0];
- for (i = 0; i < mixer_cnt; i++) {
- dspp_num = mixer_id[i];
- hist_info = &mdss_pp_res->dspp_hist[dspp_num];
- ctl_base = MDSS_MDP_REG_DSPP_OFFSET(dspp_num) +
- MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
- if ((hist_info->col_en == 0) ||
- (hist_info->col_state == HIST_UNKNOWN)) {
- ret = -EINVAL;
- goto hist_collect_exit;
- }
- spin_lock_irqsave(&mdss_hist_lock, flag);
- /* wait for hist done if cache has no data */
- if (hist_info->col_state != HIST_READY) {
- hist_info->read_request = true;
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- timeout = HIST_WAIT_TIMEOUT(hist_info->frame_cnt);
- mutex_unlock(&mdss_mdp_hist_mutex);
- /* flush updates before wait*/
- mdss_mdp_pp_setup(ctl);
- wait_ret = wait_for_completion_killable_timeout(
- &(hist_info->comp), timeout);
-
- mutex_lock(&mdss_mdp_hist_mutex);
- if (wait_ret == 0) {
- ret = -ETIMEDOUT;
- spin_lock_irqsave(&mdss_hist_lock, flag);
- pr_debug("bin collection timedout, state %d",
- hist_info->col_state);
- /*
- * When the histogram has timed out (usually
- * underrun) change the SW state back to idle
- * since histogram hardware will have done the
- * same. Histogram data also needs to be
- * cleared in this case, which is done by the
- * histogram being read (triggered by READY
- * state, which also moves the histogram SW back
- * to IDLE).
- */
- hist_info->col_state = HIST_READY;
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- } else if (wait_ret < 0) {
- ret = -EINTR;
- pr_debug("%s: bin collection interrupted",
- __func__);
- goto hist_collect_exit;
- }
- if (hist_info->col_state != HIST_READY) {
- ret = -ENODATA;
- pr_debug("%s: state is not ready: %d",
- __func__, hist_info->col_state);
- goto hist_collect_exit;
- }
- } else {
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- }
- spin_lock_irqsave(&mdss_hist_lock, flag);
- if (hist_info->col_state == HIST_READY) {
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- v_base = ctl_base + 0x1C;
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
- pp_hist_read(v_base, hist_info);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
- spin_lock_irqsave(&mdss_hist_lock, flag);
- hist_info->read_request = false;
- hist_info->col_state = HIST_IDLE;
- }
- spin_unlock_irqrestore(&mdss_hist_lock, flag);
- }
- if (mixer_cnt > 1) {
- memset(&mdss_pp_res->hist_data[disp_num][0],
- 0, HIST_V_SIZE * sizeof(u32));
- for (i = 0; i < mixer_cnt; i++) {
+ if (PP_LOCAT(hist->block) == MDSS_PP_DSPP_CFG) {
+ hist_info = &mdss_pp_res->dspp_hist[disp_num];
+ for (i = 0; i < hist_cnt; i++) {
dspp_num = mixer_id[i];
hist_info = &mdss_pp_res->dspp_hist[dspp_num];
- for (j = 0; j < HIST_V_SIZE; j++)
- mdss_pp_res->hist_data[disp_num][i] +=
- hist_info->data[i];
+ ctl_base = mdss_mdp_get_dspp_addr_off(dspp_num) +
+ MDSS_MDP_REG_DSPP_HIST_CTL_BASE;
+ ret = pp_hist_collect(ctl, hist, hist_info, ctl_base);
+ if (ret)
+ goto hist_collect_exit;
}
- *hist_data_addr = (u32)&mdss_pp_res->hist_data[disp_num][0];
+ if (hist_cnt > 1) {
+ if (hist->bin_cnt != HIST_V_SIZE) {
+ pr_err("User not expecting size %d output",
+ HIST_V_SIZE);
+ ret = -EINVAL;
+ goto hist_collect_exit;
+ }
+ hist_concat = kmalloc(HIST_V_SIZE * sizeof(u32),
+ GFP_KERNEL);
+ if (!hist_concat) {
+ ret = -ENOMEM;
+ goto hist_collect_exit;
+ }
+ memset(hist_concat, 0, HIST_V_SIZE * sizeof(u32));
+ for (i = 0; i < hist_cnt; i++) {
+ dspp_num = mixer_id[i];
+ hist_info = &mdss_pp_res->dspp_hist[dspp_num];
+ mutex_lock(&hist_info->hist_mutex);
+ for (j = 0; j < HIST_V_SIZE; j++)
+ hist_concat[j] += hist_info->data[j];
+ mutex_unlock(&hist_info->hist_mutex);
+ }
+ hist_data_addr = hist_concat;
+ } else {
+ hist_data_addr = hist_info->data;
+ }
+ hist_info = &mdss_pp_res->dspp_hist[disp_num];
+ hist_info->hist_cnt_sent++;
+ } else if (PP_LOCAT(hist->block) == MDSS_PP_SSPP_CFG) {
+
+ hist_cnt = MDSS_PP_ARG_MASK & hist->block;
+ if (!hist_cnt) {
+ pr_warn("Must pass pipe arguments, %d", hist_cnt);
+ goto hist_collect_exit;
+ }
+
+ /* Find the first pipe requested */
+ for (i = 0; i < MDSS_PP_ARG_NUM; i++) {
+ if (PP_ARG(i, hist_cnt)) {
+ pipe_num = i;
+ break;
+ }
+ }
+
+ pipe = mdss_mdp_pipe_get(mdata, BIT(pipe_num));
+ if (IS_ERR_OR_NULL(pipe)) {
+ pr_warn("Invalid starting hist pipe, %d", pipe_num);
+ ret = -ENODEV;
+ goto hist_collect_exit;
+ }
+ hist_info = &pipe->pp_res.hist;
+ mdss_mdp_pipe_unmap(pipe);
+ for (i = pipe_num; i < MDSS_PP_ARG_NUM; i++) {
+ if (!PP_ARG(i, hist->block))
+ continue;
+ pipe_cnt++;
+ pipe = mdss_mdp_pipe_get(mdata, BIT(i));
+ if (IS_ERR_OR_NULL(pipe) ||
+ pipe->num > MDSS_MDP_SSPP_VIG2) {
+ pr_warn("Invalid Hist pipe (%d)", i);
+ continue;
+ }
+ hist_info = &pipe->pp_res.hist;
+ ctl_base = pipe->base +
+ MDSS_MDP_REG_VIG_HIST_CTL_BASE;
+ ret = pp_hist_collect(ctl, hist, hist_info, ctl_base);
+ mdss_mdp_pipe_unmap(pipe);
+ if (ret)
+ goto hist_collect_exit;
+ }
+ if (pipe_cnt > 1) {
+ if (hist->bin_cnt != (HIST_V_SIZE * pipe_cnt)) {
+ pr_err("User not expecting size %d output",
+ pipe_cnt * HIST_V_SIZE);
+ ret = -EINVAL;
+ goto hist_collect_exit;
+ }
+ hist_concat = kmalloc(HIST_V_SIZE * pipe_cnt *
+ sizeof(u32), GFP_KERNEL);
+ if (!hist_concat) {
+ ret = -ENOMEM;
+ goto hist_collect_exit;
+ }
+
+ memset(hist_concat, 0, pipe_cnt * HIST_V_SIZE *
+ sizeof(u32));
+ for (i = pipe_num; i < MDSS_PP_ARG_NUM; i++) {
+ if (!PP_ARG(i, hist->block))
+ continue;
+ pipe = mdss_mdp_pipe_get(mdata, BIT(i));
+ hist_info = &pipe->pp_res.hist;
+ off = HIST_V_SIZE * i;
+ mutex_lock(&hist_info->hist_mutex);
+ for (j = off; j < off + HIST_V_SIZE; j++)
+ hist_concat[j] =
+ hist_info->data[j - off];
+ hist_info->hist_cnt_sent++;
+ mutex_unlock(&hist_info->hist_mutex);
+ mdss_mdp_pipe_unmap(pipe);
+ }
+
+ hist_data_addr = hist_concat;
+ } else {
+ hist_data_addr = hist_info->data;
+ }
} else {
- *hist_data_addr = (u32)hist_info->data;
+ pr_info("No Histogram at location %d", PP_LOCAT(hist->block));
+ goto hist_collect_exit;
}
- hist_info->hist_cnt_sent++;
+ ret = copy_to_user(hist->c0, hist_data_addr, sizeof(u32) *
+ hist->bin_cnt);
hist_collect_exit:
- mutex_unlock(&mdss_mdp_hist_mutex);
+ kfree(hist_concat);
+
return ret;
}
void mdss_mdp_hist_intr_done(u32 isr)
{
u32 isr_blk, blk_idx;
- struct pp_hist_col_info *hist_info;
+ struct pp_hist_col_info *hist_info = NULL;
+ struct mdss_mdp_pipe *pipe;
+ struct mdss_data_type *mdata = mdss_mdp_get_mdata();
isr &= 0x333333;
while (isr != 0) {
if (isr & 0xFFF000) {
@@ -2129,36 +2545,509 @@
hist_info = &mdss_pp_res->dspp_hist[blk_idx];
} else {
if (isr & 0x3) {
- blk_idx = 0;
+ blk_idx = MDSS_MDP_SSPP_VIG0;
isr_blk = isr & 0x3;
isr &= ~0x3;
} else if (isr & 0x30) {
- blk_idx = 1;
+ blk_idx = MDSS_MDP_SSPP_VIG1;
isr_blk = (isr >> 4) & 0x3;
isr &= ~0x30;
} else {
- blk_idx = 2;
+ blk_idx = MDSS_MDP_SSPP_VIG2;
isr_blk = (isr >> 8) & 0x3;
isr &= ~0x300;
}
- /* SSPP block, not support yet*/
- continue;
+ pipe = mdss_mdp_pipe_search(mdata, BIT(blk_idx));
+ if (IS_ERR_OR_NULL(pipe)) {
+ pr_debug("pipe DNE, %d", blk_idx);
+ continue;
+ }
+ hist_info = &pipe->pp_res.hist;
}
/* Histogram Done Interrupt */
- if ((isr_blk & 0x1) &&
+ if (hist_info && (isr_blk & 0x1) &&
(hist_info->col_en)) {
- spin_lock(&mdss_hist_lock);
+ spin_lock(&hist_info->hist_lock);
hist_info->col_state = HIST_READY;
- spin_unlock(&mdss_hist_lock);
+ spin_unlock(&hist_info->hist_lock);
if (hist_info->read_request)
complete(&hist_info->comp);
}
/* Histogram Reset Done Interrupt */
if ((isr_blk & 0x2) &&
(hist_info->col_en)) {
- spin_lock(&mdss_hist_lock);
+ spin_lock(&hist_info->hist_lock);
hist_info->col_state = HIST_IDLE;
- spin_unlock(&mdss_hist_lock);
+ spin_unlock(&hist_info->hist_lock);
}
};
}
+
+#define MDSS_AD_MAX_MIXERS 1
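+/*
+ * Validate that assertive display can be used with this fb device.
+ * Returns the AD block index on success or a negative error code.
+ */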
+int mdss_ad_init_checks(struct msm_fb_data_type *mfd)
+{
+ u32 mixer_id[MDSS_MDP_INTF_MAX_LAYERMIXER];
+ u32 mixer_num;
+ u32 ret = -EINVAL;
+ int i = 0;
+ struct mdss_data_type *mdata = mfd_to_mdata(mfd);
+
+ if (!mfd || !mdata)
+ return ret;
+
+ if (mdata->nad_cfgs == 0) {
+ pr_debug("Assertive Display not supported by device");
+ return -ENODEV;
+ }
+
+ if (mfd->panel_info->type == MIPI_CMD_PANEL) {
+ pr_debug("Command panel not supported");
+ return -EINVAL;
+ }
+
+ mixer_num = mdss_mdp_get_ctl_mixers(mfd->index, mixer_id);
+ if (!mixer_num || mixer_num > MDSS_AD_MAX_MIXERS) {
+ pr_err("invalid mixer_num, %d", mixer_num);
+ return ret;
+ }
+
+ do {
+ if (mixer_id[i] >= mdata->nad_cfgs) {
+ pr_err("invalid mixer input, %d", mixer_id[i]);
+ return ret;
+ }
+ i++;
+ } while (i < mixer_num);
+
+ return mixer_id[0];
+}
+
+int mdss_mdp_ad_config(struct msm_fb_data_type *mfd,
+ struct mdss_ad_init_cfg *init_cfg)
+{
+ int ad_num;
+ struct mdss_ad_info *ad;
+ struct mdss_data_type *mdata;
+ struct mdss_mdp_ctl *ctl;
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
+
+ ctl = mdp5_data->ctl;
+
+ ad_num = mdss_ad_init_checks(mfd);
+ if (ad_num < 0)
+ return ad_num;
+
+ mdata = mdss_mdp_get_mdata();
+ ad = &mdata->ad_cfgs[ad_num];
+
+ mutex_lock(&ad->lock);
+ if (init_cfg->ops & MDP_PP_AD_INIT) {
+ memcpy(&ad->init, &init_cfg->params.init,
+ sizeof(struct mdss_ad_init));
+ ad->sts |= PP_AD_STS_DIRTY_INIT;
+ } else if (init_cfg->ops & MDP_PP_AD_CFG) {
+ memcpy(&ad->cfg, &init_cfg->params.cfg,
+ sizeof(struct mdss_ad_cfg));
+ /*
+ * TODO: specify panel independent range of input from cfg,
+ * scale input backlight_scale to panel bl_max's range
+ */
+ ad->cfg.backlight_scale = mfd->panel_info->bl_max;
+ ad->sts |= PP_AD_STS_DIRTY_CFG;
+ }
+
+ if (init_cfg->ops & MDP_PP_OPS_DISABLE) {
+ ad->sts &= ~PP_STS_ENABLE;
+ ad->mfd = NULL;
+ } else if (init_cfg->ops & MDP_PP_OPS_ENABLE) {
+ ad->sts |= PP_STS_ENABLE;
+ ad->mfd = mfd;
+ }
+ mutex_unlock(&ad->lock);
+ mdss_mdp_pp_setup(ctl);
+ return 0;
+}
+
+int mdss_mdp_ad_input(struct msm_fb_data_type *mfd,
+ struct mdss_ad_input *input) {
+ int ad_num, ret = 0;
+ struct mdss_ad_info *ad;
+ struct mdss_data_type *mdata;
+ struct mdss_mdp_ctl *ctl;
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
+
+ ctl = mdp5_data->ctl;
+
+ ad_num = mdss_ad_init_checks(mfd);
+ if (ad_num < 0)
+ return ad_num;
+
+ mdata = mdss_mdp_get_mdata();
+ ad = &mdata->ad_cfgs[ad_num];
+
+ mutex_lock(&ad->lock);
+ if (!PP_AD_STATE_IS_INITCFG(ad->state) &&
+ !PP_AD_STS_IS_DIRTY(ad->sts)) {
+ pr_warn("AD not initialized or configured.");
+ ret = -EPERM;
+ goto error;
+ }
+ switch (input->mode) {
+ case MDSS_AD_MODE_AUTO_BL:
+ case MDSS_AD_MODE_AUTO_STR:
+ if (!MDSS_AD_MODE_DATA_MATCH(ad->cfg.mode,
+ MDSS_AD_INPUT_AMBIENT)) {
+ ret = -EINVAL;
+ goto error;
+ }
+ ad->ad_data_mode = MDSS_AD_INPUT_AMBIENT;
+
+ ad->ad_data = input->in.amb_light;
+ ad->calc_itr = ad->cfg.stab_itr;
+ ad->sts |= PP_AD_STS_DIRTY_VSYNC;
+ ad->sts |= PP_AD_STS_DIRTY_DATA;
+ break;
+ case MDSS_AD_MODE_TARG_STR:
+ case MDSS_AD_MODE_MAN_STR:
+ if (!MDSS_AD_MODE_DATA_MATCH(ad->cfg.mode,
+ MDSS_AD_INPUT_STRENGTH)) {
+ ret = -EINVAL;
+ goto error;
+ }
+ ad->ad_data_mode = MDSS_AD_INPUT_STRENGTH;
+ ad->ad_data = input->in.strength;
+ ad->calc_itr = ad->cfg.stab_itr;
+ ad->sts |= PP_AD_STS_DIRTY_VSYNC;
+ ad->sts |= PP_AD_STS_DIRTY_DATA;
+ break;
+ default:
+ pr_warn("invalid ad input mode %d", input->mode);
+ ret = -EINVAL;
+ goto error;
+ }
+error:
+ mutex_unlock(&ad->lock);
+ if (!ret) {
+ mutex_lock(&ad->lock);
+ init_completion(&ad->comp);
+ mutex_unlock(&ad->lock);
+ mdss_mdp_pp_setup(ctl);
+ ret = wait_for_completion_interruptible_timeout(&ad->comp,
+ HIST_WAIT_TIMEOUT(1));
+ if (ret == 0)
+ ret = -ETIMEDOUT;
+ else if (ret > 0)
+ input->output = ad->last_str;
+ }
+ return ret;
+}
+
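+/*
+ * Write the AD input registers (ambient light, backlight and/or manual
+ * strength) appropriate for the configured mode.
+ */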
+static void pp_ad_input_write(struct mdss_ad_info *ad, u32 bl_lvl)
+{
+ char __iomem *base = ad->base;
+ switch (ad->cfg.mode) {
+ case MDSS_AD_MODE_AUTO_BL:
+ writel_relaxed(ad->ad_data, base + MDSS_MDP_REG_AD_AL);
+ break;
+ case MDSS_AD_MODE_AUTO_STR:
+ writel_relaxed(bl_lvl, base + MDSS_MDP_REG_AD_BL);
+ writel_relaxed(ad->ad_data, base + MDSS_MDP_REG_AD_AL);
+ break;
+ case MDSS_AD_MODE_TARG_STR:
+ writel_relaxed(bl_lvl, base + MDSS_MDP_REG_AD_BL);
+ writel_relaxed(ad->ad_data, base + MDSS_MDP_REG_AD_TARG_STR);
+ break;
+ case MDSS_AD_MODE_MAN_STR:
+ writel_relaxed(bl_lvl, base + MDSS_MDP_REG_AD_BL);
+ writel_relaxed(ad->ad_data, base + MDSS_MDP_REG_AD_STR_MAN);
+ break;
+ default:
+ pr_warn("Invalid mode! %d", ad->cfg.mode);
+ break;
+ }
+}
+
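+/*
+ * Write the one-time AD init parameters: control words, black/white
+ * levels, slopes, dither, frame size and the asymmetry/color LUTs.
+ */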
+static void pp_ad_init_write(struct mdss_ad_info *ad)
+{
+ u32 temp;
+ char __iomem *base = ad->base;
+ writel_relaxed(ad->init.i_control[0] & 0x1F,
+ base + MDSS_MDP_REG_AD_CON_CTRL_0);
+ writel_relaxed(ad->init.i_control[1] << 8,
+ base + MDSS_MDP_REG_AD_CON_CTRL_1);
+
+ temp = ad->init.white_lvl << 16;
+ temp |= ad->init.black_lvl & 0xFFFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_BW_LVL);
+
+ writel_relaxed(ad->init.var, base + MDSS_MDP_REG_AD_VAR);
+
+ writel_relaxed(ad->init.limit_ampl, base + MDSS_MDP_REG_AD_AMP_LIM);
+
+ writel_relaxed(ad->init.i_dither, base + MDSS_MDP_REG_AD_DITH);
+
+ temp = ad->init.slope_max << 8;
+ temp |= ad->init.slope_min & 0xFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_SLOPE);
+
+ writel_relaxed(ad->init.dither_ctl, base + MDSS_MDP_REG_AD_DITH_CTRL);
+
+ writel_relaxed(ad->init.format, base + MDSS_MDP_REG_AD_CTRL_0);
+ writel_relaxed(ad->init.auto_size, base + MDSS_MDP_REG_AD_CTRL_1);
+
+ temp = ad->init.frame_w << 16;
+ temp |= ad->init.frame_h & 0xFFFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_FRAME_SIZE);
+
+ temp = ad->init.logo_v << 8;
+ temp |= ad->init.logo_h & 0xFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_LOGO_POS);
+
+ pp_ad_cfg_lut(base + MDSS_MDP_REG_AD_LUT_FI, ad->init.asym_lut);
+ pp_ad_cfg_lut(base + MDSS_MDP_REG_AD_LUT_CC, ad->init.color_corr_lut);
+}
+
+#define MDSS_PP_AD_DEF_CALIB 0x6E
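+/*
+ * Write the per-mode AD config registers. The switch below intentionally
+ * falls through: each mode also programs the registers needed by the
+ * simpler modes listed after it.
+ */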
+static void pp_ad_cfg_write(struct mdss_ad_info *ad)
+{
+ char __iomem *base = ad->base;
+ u32 temp, temp_calib = MDSS_PP_AD_DEF_CALIB;
+ switch (ad->cfg.mode) {
+ case MDSS_AD_MODE_AUTO_BL:
+ temp = ad->cfg.backlight_max << 16;
+ temp |= ad->cfg.backlight_min & 0xFFFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_BL_MINMAX);
+ writel_relaxed(ad->cfg.amb_light_min,
+ base + MDSS_MDP_REG_AD_AL_MIN);
+ temp = ad->cfg.filter[1] << 16;
+ temp |= ad->cfg.filter[0] & 0xFFFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_AL_FILT);
+ case MDSS_AD_MODE_AUTO_STR:
+ pp_ad_cfg_lut(base + MDSS_MDP_REG_AD_LUT_AL,
+ ad->cfg.al_calib_lut);
+ writel_relaxed(ad->cfg.strength_limit,
+ base + MDSS_MDP_REG_AD_STR_LIM);
+ temp = ad->cfg.calib[3] << 16;
+ temp |= ad->cfg.calib[2] & 0xFFFF;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_CALIB_CD);
+ writel_relaxed(ad->cfg.t_filter_recursion,
+ base + MDSS_MDP_REG_AD_TFILT_CTRL);
+ temp_calib = ad->cfg.calib[0] & 0xFFFF;
+ case MDSS_AD_MODE_TARG_STR:
+ temp = ad->cfg.calib[1] << 16;
+ temp |= temp_calib;
+ writel_relaxed(temp, base + MDSS_MDP_REG_AD_CALIB_AB);
+ case MDSS_AD_MODE_MAN_STR:
+ writel_relaxed(ad->cfg.backlight_scale,
+ base + MDSS_MDP_REG_AD_BL_MAX);
+ writel_relaxed(ad->cfg.mode, base + MDSS_MDP_REG_AD_MODE_SEL);
+ pr_debug("stab_itr = %d", ad->cfg.stab_itr);
+ break;
+ default:
+ break;
+ }
+}
+
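+/* Vsync hook: schedule the AD calculation worker for this ctl's left mixer */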
+static void pp_ad_vsync_handler(struct mdss_mdp_ctl *ctl, ktime_t t)
+{
+ struct mdss_data_type *mdata = ctl->mdata;
+ struct mdss_ad_info *ad;
+
+ if (ctl->mixer_left && ctl->mixer_left->num < mdata->nad_cfgs) {
+ ad = &mdata->ad_cfgs[ctl->mixer_left->num];
+ queue_work(mdata->ad_calc_wq, &ad->calc_work);
+ }
+}
+
+#define MDSS_PP_AD_BYPASS_DEF 0x101
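+/*
+ * Flush dirty AD state to hardware and manage the AD vsync handler.
+ * Returns 1 if AD is (or was just) running, 0 if idle, or a negative
+ * error code.
+ */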
+static int mdss_mdp_ad_setup(struct msm_fb_data_type *mfd)
+{
+ int ad_num, ret = 0;
+ struct mdss_ad_info *ad;
+ struct mdss_data_type *mdata;
+ struct mdss_mdp_ctl *ctl = mfd_to_ctl(mfd);
+ char __iomem *base;
+ u32 bypass = MDSS_PP_AD_BYPASS_DEF;
+
+ ad_num = mdss_ad_init_checks(mfd);
+ if (ad_num < 0)
+ return ad_num;
+
+ mdata = mdss_mdp_get_mdata();
+ ad = &mdata->ad_cfgs[ad_num];
+ base = ad->base;
+
+ mutex_lock(&ad->lock);
+ if (ad->sts != last_sts || ad->state != last_state) {
+ last_sts = ad->sts;
+ last_state = ad->state;
+ pr_debug("beginning: ad->sts = 0x%08x, state = 0x%08x", ad->sts,
+ ad->state);
+ }
+ if (!PP_AD_STS_IS_DIRTY(ad->sts) &&
+ (ad->sts & PP_AD_STS_DIRTY_DATA ||
+ (ad->state & PP_AD_STATE_RUN &&
+ (ad->cfg.mode == MDSS_AD_MODE_AUTO_STR)))) {
+ /*
+ * Write inputs to regs when the data has been updated or
+ * Assertive Display is up and running as long as there are
+ * no updates to AD init or cfg
+ */
+ ad->sts &= ~PP_AD_STS_DIRTY_DATA;
+ ad->state |= PP_AD_STATE_DATA;
+ if (mfd->bl_level != ad->last_bl) {
+ ad->last_bl = mfd->bl_level;
+ ad->calc_itr = ad->cfg.stab_itr;
+ ad->sts |= PP_AD_STS_DIRTY_VSYNC;
+ }
+ pp_ad_input_write(ad, mfd->bl_level);
+ }
+
+ if (ad->sts & PP_AD_STS_DIRTY_CFG) {
+ ad->sts &= ~PP_AD_STS_DIRTY_CFG;
+ ad->state |= PP_AD_STATE_CFG;
+
+ pp_ad_cfg_write(ad);
+
+ if (!MDSS_AD_MODE_DATA_MATCH(ad->cfg.mode, ad->ad_data_mode)) {
+ ad->sts &= ~PP_AD_STS_DIRTY_DATA;
+ ad->state &= ~PP_AD_STATE_DATA;
+ pr_debug("Mode switched, data invalidated!");
+ }
+ }
+ if (ad->sts & PP_AD_STS_DIRTY_INIT) {
+ ad->sts &= ~PP_AD_STS_DIRTY_INIT;
+ ad->state |= PP_AD_STATE_INIT;
+ pp_ad_init_write(ad);
+ }
+
+ if ((ad->sts & PP_STS_ENABLE) && PP_AD_STATE_IS_READY(ad->state)) {
+ bypass = 0;
+ ret = 1;
+ ad->state |= PP_AD_STATE_RUN;
+ } else {
+ if (ad->state & PP_AD_STATE_RUN) {
+ ret = 1;
+ ad->sts |= PP_AD_STS_DIRTY_VSYNC;
+ }
+ ad->state &= ~PP_AD_STATE_RUN;
+ }
+ writel_relaxed(bypass, base);
+
+ if (PP_AD_STS_DIRTY_VSYNC & ad->sts) {
+ pr_debug("dirty vsync, calc_itr = %d", ad->calc_itr);
+ ad->sts &= ~PP_AD_STS_DIRTY_VSYNC;
+ if (!(PP_AD_STATE_VSYNC & ad->state) && ad->calc_itr) {
+ ctl->add_vsync_handler(ctl, &ad->handle);
+ ad->state |= PP_AD_STATE_VSYNC;
+ } else if ((PP_AD_STATE_VSYNC & ad->state) &&
+ (!ad->calc_itr || !(PP_AD_STATE_RUN & ad->state))) {
+ ctl->remove_vsync_handler(ctl, &ad->handle);
+ ad->state &= ~PP_AD_STATE_VSYNC;
+ }
+ }
+
+ if (ad->sts != last_sts || ad->state != last_state) {
+ last_sts = ad->sts;
+ last_state = ad->state;
+ pr_debug("end: ad->sts = 0x%08x, state = 0x%08x", ad->sts,
+ ad->state);
+ }
+ mutex_unlock(&ad->lock);
+ return ret;
+}
+
+#define MDSS_PP_AD_SLEEP 10
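+/*
+ * Worker: kick off one AD calculation, poll for completion, apply the
+ * calculated backlight (in auto-backlight mode) and flush the ctl.
+ */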
+static void pp_ad_calc_worker(struct work_struct *work)
+{
+ struct mdss_ad_info *ad;
+ struct mdss_mdp_ctl *ctl;
+ u32 bl, calc_done = 0;
+ ad = container_of(work, struct mdss_ad_info, calc_work);
+ ctl = mfd_to_ctl(ad->mfd);
+
+ mutex_lock(&ad->lock);
+ if (PP_AD_STATE_RUN & ad->state) {
+ /* Kick off calculation */
+ ad->calc_itr--;
+ writel_relaxed(1, ad->base + MDSS_MDP_REG_AD_START_CALC);
+ }
+ if (ad->state & PP_AD_STATE_RUN) {
+ do {
+ calc_done = readl_relaxed(ad->base +
+ MDSS_MDP_REG_AD_CALC_DONE);
+ if (!calc_done)
+ usleep(MDSS_PP_AD_SLEEP);
+ } while (!calc_done && (ad->state & PP_AD_STATE_RUN));
+ if (calc_done) {
+ ad->last_str = 0xFF & readl_relaxed(ad->base +
+ MDSS_MDP_REG_AD_STR_OUT);
+ if (MDSS_AD_RUNNING_AUTO_BL(ad)) {
+ bl = 0xFFFF & readl_relaxed(ad->base +
+ MDSS_MDP_REG_AD_BL_OUT);
+ pr_debug("calc bl = %d", bl);
+ ad->last_str |= bl << 16;
+ mutex_lock(&ad->mfd->bl_lock);
+ mdss_fb_set_backlight(ad->mfd, bl);
+ mutex_unlock(&ad->mfd->bl_lock);
+ }
+ pr_debug("calc_str = %d, calc_itr %d",
+ ad->last_str & 0xFF,
+ ad->calc_itr);
+ } else {
+ ad->last_str = 0xFFFFFFFF;
+ }
+ }
+ complete(&ad->comp);
+
+ if (!ad->calc_itr) {
+ ad->state &= ~PP_AD_STATE_VSYNC;
+ ctl->remove_vsync_handler(ctl, &ad->handle);
+ }
+ mutex_unlock(&ad->lock);
+ mutex_lock(&ad->mfd->lock);
+ mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_FLUSH, BIT(13 + ad->num));
+ mutex_unlock(&ad->mfd->lock);
+}
+
+#define PP_AD_LUT_LEN 33
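+/* Write a 33-entry AD LUT, packing two 16-bit entries per register write */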
+static void pp_ad_cfg_lut(char __iomem *offset, u32 *data)
+{
+ int i;
+ u32 temp;
+
+ for (i = 0; i < PP_AD_LUT_LEN - 1; i += 2) {
+ temp = data[i+1] << 16;
+ temp |= (data[i] & 0xFFFF);
+ writel_relaxed(temp, offset + (i*2));
+ }
+ writel_relaxed(data[PP_AD_LUT_LEN - 1] << 16,
+ offset + ((PP_AD_LUT_LEN - 1) * 2));
+}
+
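+/*
+ * Allocate and initialize per-AD-block state: register base, lock,
+ * vsync handler and the calculation work item.
+ */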
+int mdss_mdp_ad_addr_setup(struct mdss_data_type *mdata, u32 *ad_off)
+{
+ u32 i;
+ int rc = 0;
+
+ mdata->ad_cfgs = devm_kzalloc(&mdata->pdev->dev,
+ sizeof(struct mdss_ad_info) * mdata->nad_cfgs,
+ GFP_KERNEL);
+
+ if (!mdata->ad_cfgs) {
+ pr_err("unable to setup assertive display: devm_kzalloc failed\n");
+ return -ENOMEM;
+ }
+
+ mdata->ad_calc_wq = create_singlethread_workqueue("ad_calc_wq");
+ for (i = 0; i < mdata->nad_cfgs; i++) {
+ mdata->ad_cfgs[i].base = mdata->mdp_base + ad_off[i];
+ mdata->ad_cfgs[i].num = i;
+ mdata->ad_cfgs[i].calc_itr = 0;
+ mdata->ad_cfgs[i].last_str = 0xFFFFFFFF;
+ mutex_init(&mdata->ad_cfgs[i].lock);
+ mdata->ad_cfgs[i].handle.vsync_handler = pp_ad_vsync_handler;
+ INIT_WORK(&mdata->ad_cfgs[i].calc_work, pp_ad_calc_worker);
+ }
+ return rc;
+}
diff --git a/drivers/video/msm/mdss/mdss_mdp_rotator.c b/drivers/video/msm/mdss/mdss_mdp_rotator.c
index 5711653..5d9396a 100644
--- a/drivers/video/msm/mdss/mdss_mdp_rotator.c
+++ b/drivers/video/msm/mdss/mdss_mdp_rotator.c
@@ -27,6 +27,8 @@
static struct mdss_mdp_rotator_session rotator_session[MAX_ROTATOR_SESSIONS];
static LIST_HEAD(rotator_queue);
+static int mdss_mdp_rotator_finish(struct mdss_mdp_rotator_session *rot);
+
struct mdss_mdp_rotator_session *mdss_mdp_rotator_session_alloc(void)
{
struct mdss_mdp_rotator_session *rot;
@@ -39,7 +41,6 @@
rot->ref_cnt++;
rot->session_id = i | MDSS_MDP_ROT_SESSION_MASK;
mutex_init(&rot->lock);
- init_completion(&rot->comp);
break;
}
}
@@ -85,10 +86,18 @@
static int mdss_mdp_rotator_busy_wait(struct mdss_mdp_rotator_session *rot)
{
+ struct mdss_mdp_pipe *rot_pipe = NULL;
+ struct mdss_mdp_ctl *ctl = NULL;
+
+ rot_pipe = rot->pipe;
+ if (!rot_pipe)
+ return -ENODEV;
+
+ ctl = rot_pipe->mixer->ctl;
mutex_lock(&rot->lock);
if (rot->busy) {
pr_debug("waiting for rot=%d to complete\n", rot->pipe->num);
- wait_for_completion_interruptible(&rot->comp);
+ mdss_mdp_display_wait4comp(ctl);
rot->busy = false;
}
@@ -97,28 +106,18 @@
return 0;
}
-static void mdss_mdp_rotator_callback(void *arg)
-{
- struct mdss_mdp_rotator_session *rot;
-
- rot = (struct mdss_mdp_rotator_session *) arg;
- if (rot)
- complete(&rot->comp);
-}
-
static int mdss_mdp_rotator_kickoff(struct mdss_mdp_ctl *ctl,
struct mdss_mdp_rotator_session *rot,
struct mdss_mdp_data *dst_data)
{
int ret;
struct mdss_mdp_writeback_arg wb_args = {
- .callback_fnc = mdss_mdp_rotator_callback,
+ .callback_fnc = NULL,
.data = dst_data,
.priv_data = rot,
};
mutex_lock(&rot->lock);
- INIT_COMPLETION(rot->comp);
rot->busy = true;
ret = mdss_mdp_display_commit(ctl, &wb_args);
if (ret) {
@@ -166,27 +165,21 @@
return 0;
}
-int mdss_mdp_rotator_queue(struct mdss_mdp_rotator_session *rot,
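+/*
+ * Queue one rotator (sub-)session: acquire a rotator pipe, reserve SMP,
+ * queue the source data and kick off the writeback.
+ */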
+static int mdss_mdp_rotator_queue_sub(struct mdss_mdp_rotator_session *rot,
struct mdss_mdp_data *src_data,
struct mdss_mdp_data *dst_data)
{
struct mdss_mdp_pipe *rot_pipe = NULL;
struct mdss_mdp_ctl *ctl;
- int ret, need_wait = false;
+ int ret;
- ret = mutex_lock_interruptible(&rotator_lock);
- if (ret)
- return ret;
-
- if (!rot || !rot->ref_cnt) {
- mutex_unlock(&rotator_lock);
- return -ENODEV;
- }
+ if (!rot || !rot->ref_cnt)
+ return -ENOENT;
ret = mdss_mdp_rotator_pipe_dequeue(rot);
if (ret) {
pr_err("unable to acquire rotator\n");
- goto done;
+ return ret;
}
rot_pipe = rot->pipe;
@@ -203,31 +196,148 @@
rot_pipe->img_height = rot->img_height;
rot_pipe->src = rot->src_rect;
rot_pipe->dst = rot->src_rect;
+ rot_pipe->dst.x = 0;
+ rot_pipe->dst.y = 0;
rot_pipe->params_changed++;
}
+ ret = mdss_mdp_smp_reserve(rot->pipe);
+ if (ret) {
+ pr_err("unable to reserve smp for rotator data\n");
+ return ret;
+ }
+
ret = mdss_mdp_pipe_queue_data(rot->pipe, src_data);
if (ret) {
pr_err("unable to queue rot data\n");
- goto done;
+ mdss_mdp_smp_unreserve(rot->pipe);
+ return ret;
}
ret = mdss_mdp_rotator_kickoff(ctl, rot, dst_data);
- if (ret == 0 && !rot->no_wait)
- need_wait = true;
-done:
+ return ret;
+}
+
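+/*
+ * Queue every chained sub-session of this rotator request, then wait
+ * for each of them to finish.
+ */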
+int mdss_mdp_rotator_queue(struct mdss_mdp_rotator_session *rot,
+ struct mdss_mdp_data *src_data,
+ struct mdss_mdp_data *dst_data)
+{
+ int ret;
+ struct mdss_mdp_rotator_session *tmp = rot;
+
+ ret = mutex_lock_interruptible(&rotator_lock);
+ if (ret)
+ return ret;
+
+ pr_debug("rotator session=%x start\n", rot->session_id);
+
+ for (ret = 0, tmp = rot; ret == 0 && tmp; tmp = tmp->next)
+ ret = mdss_mdp_rotator_queue_sub(tmp, src_data, dst_data);
+
mutex_unlock(&rotator_lock);
- if (need_wait)
- mdss_mdp_rotator_busy_wait(rot);
+ if (ret) {
+ pr_err("rotation failed %d for rot=%d\n", ret, rot->session_id);
+ return ret;
+ }
- if (rot_pipe)
- pr_debug("end of rotator pnum=%d enqueue\n", rot_pipe->num);
+ for (tmp = rot; tmp; tmp = tmp->next)
+ mdss_mdp_rotator_busy_wait(tmp);
+
+ pr_debug("rotator session=%x queue done\n", rot->session_id);
return ret;
}
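+/*
+ * Prepare the rotator destination rectangle. Sources wider than
+ * MAX_MIXER_WIDTH are split across a chained second session, with each
+ * half positioned in the output according to the rotation/flip flags.
+ */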
+int mdss_mdp_rotator_setup(struct mdss_mdp_rotator_session *rot)
+{
+
+ rot->dst = rot->src_rect;
+ /*
+ * by default, rotator output should be placed directly on
+ * output buffer address without any offset.
+ */
+ rot->dst.x = 0;
+ rot->dst.y = 0;
+
+ if (rot->flags & MDP_ROT_90)
+ swap(rot->dst.w, rot->dst.h);
+
+ if (rot->src_rect.w > MAX_MIXER_WIDTH) {
+ struct mdss_mdp_rotator_session *tmp;
+ u32 width;
+
+ if (rot->bwc_mode) {
+ pr_err("Unable to do split rotation with bwc set\n");
+ return -EINVAL;
+ }
+
+ width = rot->src_rect.w;
+
+ pr_debug("setting up split rotation src=%dx%d\n",
+ rot->src_rect.w, rot->src_rect.h);
+
+ if (width > (MAX_MIXER_WIDTH * 2)) {
+ pr_err("unsupported source width %d\n", width);
+ return -EOVERFLOW;
+ }
+
+ if (!rot->next) {
+ tmp = mdss_mdp_rotator_session_alloc();
+ if (!tmp) {
+ pr_err("unable to allocate rot dual session\n");
+ return -ENOMEM;
+ }
+ rot->next = tmp;
+ }
+ tmp = rot->next;
+
+ tmp->session_id = rot->session_id & ~MDSS_MDP_ROT_SESSION_MASK;
+ tmp->flags = rot->flags;
+ tmp->format = rot->format;
+ tmp->img_width = rot->img_width;
+ tmp->img_height = rot->img_height;
+ tmp->src_rect = rot->src_rect;
+
+ tmp->src_rect.w = width / 2;
+ width -= tmp->src_rect.w;
+ tmp->src_rect.x += width;
+
+ tmp->dst = rot->dst;
+ rot->src_rect.w = width;
+
+ if (rot->flags & MDP_ROT_90) {
+ /*
+ * If rotated by 90 degrees, the first half should be on top,
+ * but if also horizontally flipped it should be on the bottom.
+ */
+ if (rot->flags & MDP_FLIP_LR)
+ rot->dst.y = tmp->src_rect.w;
+ else
+ tmp->dst.y = rot->src_rect.w;
+ } else {
+ /*
+ * If not rotated, first half should be the left part
+ * of the frame, unless horizontally flipped
+ */
+ if (rot->flags & MDP_FLIP_LR)
+ rot->dst.x = tmp->src_rect.w;
+ else
+ tmp->dst.x = rot->src_rect.w;
+ }
+
+ tmp->params_changed++;
+ } else if (rot->next) {
+ mdss_mdp_rotator_finish(rot->next);
+ rot->next = NULL;
+ }
+
+ rot->params_changed++;
+
+ return 0;
+}
+
static int mdss_mdp_rotator_finish(struct mdss_mdp_rotator_session *rot)
{
struct mdss_mdp_pipe *rot_pipe;
@@ -237,6 +347,9 @@
pr_debug("finish rot id=%x\n", rot->session_id);
+ if (rot->next)
+ mdss_mdp_rotator_finish(rot->next);
+
rot_pipe = rot->pipe;
if (rot_pipe) {
mdss_mdp_rotator_busy_wait(rot);
@@ -254,6 +367,7 @@
int mdss_mdp_rotator_release(u32 ndx)
{
+ int rc = 0;
struct mdss_mdp_rotator_session *rot;
mutex_lock(&rotator_lock);
rot = mdss_mdp_rotator_session_get(ndx);
@@ -261,11 +375,11 @@
mdss_mdp_rotator_finish(rot);
} else {
pr_warn("unknown session id=%x\n", ndx);
- return -ENOENT;
+ rc = -ENOENT;
}
mutex_unlock(&rotator_lock);
- return 0;
+ return rc;
}
int mdss_mdp_rotator_release_all(void)
diff --git a/drivers/video/msm/mdss/mdss_mdp_rotator.h b/drivers/video/msm/mdss/mdss_mdp_rotator.h
index 21ee9bb..3401fe8 100644
--- a/drivers/video/msm/mdss/mdss_mdp_rotator.h
+++ b/drivers/video/msm/mdss/mdss_mdp_rotator.h
@@ -30,16 +30,17 @@
u16 img_width;
u16 img_height;
struct mdss_mdp_img_rect src_rect;
+ struct mdss_mdp_img_rect dst;
u32 bwc_mode;
struct mdss_mdp_pipe *pipe;
struct mutex lock;
- struct completion comp;
u8 busy;
u8 no_wait;
struct list_head head;
+ struct mdss_mdp_rotator_session *next;
};
static inline u32 mdss_mdp_get_rotator_dst_format(u32 in_format)
@@ -61,6 +62,7 @@
struct mdss_mdp_rotator_session *mdss_mdp_rotator_session_alloc(void);
struct mdss_mdp_rotator_session *mdss_mdp_rotator_session_get(u32 session_id);
+int mdss_mdp_rotator_setup(struct mdss_mdp_rotator_session *rot);
int mdss_mdp_rotator_queue(struct mdss_mdp_rotator_session *rot,
struct mdss_mdp_data *src_data,
struct mdss_mdp_data *dst_data);
diff --git a/drivers/video/msm/mdss/mdss_mdp_util.c b/drivers/video/msm/mdss/mdss_mdp_util.c
index 5915f61..60f05ca 100644
--- a/drivers/video/msm/mdss/mdss_mdp_util.c
+++ b/drivers/video/msm/mdss/mdss_mdp_util.c
@@ -240,7 +240,7 @@
ps->ystride[1] = 32 * 2;
} else if (fmt->fetch_planes == MDSS_MDP_PLANE_INTERLEAVED) {
ps->rau_cnt = DIV_ROUND_UP(w, 32);
- ps->ystride[0] = 32 * 4;
+ ps->ystride[0] = 32 * 4 * fmt->bpp;
ps->ystride[1] = 0;
ps->rau_h[0] = 4;
ps->rau_h[1] = 0;
@@ -250,8 +250,8 @@
}
stride_off = DIV_ROUND_UP(ps->rau_cnt, 8);
- ps->ystride[0] = ps->ystride[0] * ps->rau_cnt * fmt->bpp + stride_off;
- ps->ystride[1] = ps->ystride[1] * ps->rau_cnt * fmt->bpp + stride_off;
+ ps->ystride[0] = ps->ystride[0] * ps->rau_cnt + stride_off;
+ ps->ystride[1] = ps->ystride[1] * ps->rau_cnt + stride_off;
ps->num_planes = 2;
return 0;
@@ -262,8 +262,7 @@
{
struct mdss_mdp_format_params *fmt;
int i, rc;
- u32 bpp, stride_off;
-
+ u32 bpp, ystride0_off, ystride1_off;
if (ps == NULL)
return -EINVAL;
@@ -281,12 +280,14 @@
rc = mdss_mdp_get_rau_strides(w, h, fmt, ps);
if (rc)
return rc;
- stride_off = DIV_ROUND_UP(h, ps->rau_h[0]);
- ps->ystride[0] = ps->ystride[0] + ps->ystride[1];
- ps->plane_size[0] = ps->ystride[0] * stride_off;
+ ystride0_off = DIV_ROUND_UP(h, ps->rau_h[0]);
+ ystride1_off = DIV_ROUND_UP(h, ps->rau_h[1]);
+ ps->plane_size[0] = (ps->ystride[0] * ystride0_off) +
+ (ps->ystride[1] * ystride1_off);
+ ps->ystride[0] += ps->ystride[1];
ps->ystride[1] = 2;
- ps->plane_size[1] = ps->rau_cnt * ps->ystride[1] * stride_off;
-
+ ps->plane_size[1] = ps->rau_cnt * ps->ystride[1] *
+ (ystride0_off + ystride1_off);
} else {
if (fmt->fetch_planes == MDSS_MDP_PLANE_INTERLEAVED) {
ps->num_planes = 1;
@@ -346,47 +347,67 @@
int mdss_mdp_data_check(struct mdss_mdp_data *data,
struct mdss_mdp_plane_sizes *ps)
{
+ struct mdss_mdp_img_data *prev, *curr;
+ int i;
+
if (!ps)
return 0;
if (!data || data->num_planes == 0)
return -ENOMEM;
- if (data->bwc_enabled) {
- data->num_planes = ps->num_planes;
- data->p[1].addr = data->p[0].addr + ps->plane_size[0];
- } else {
- struct mdss_mdp_img_data *prev, *curr;
- int i;
+ pr_debug("srcp0=%x len=%u frame_size=%u\n", data->p[0].addr,
+ data->p[0].len, ps->total_size);
- pr_debug("srcp0=%x len=%u frame_size=%u\n", data->p[0].addr,
- data->p[0].len, ps->total_size);
-
- for (i = 0; i < ps->num_planes; i++) {
- curr = &data->p[i];
- if (i >= data->num_planes) {
- u32 psize = ps->plane_size[i-1];
- prev = &data->p[i-1];
- if (prev->len > psize) {
- curr->len = prev->len - psize;
- prev->len = psize;
- }
- curr->addr = prev->addr + psize;
+ for (i = 0; i < ps->num_planes; i++) {
+ curr = &data->p[i];
+ if (i >= data->num_planes) {
+ u32 psize = ps->plane_size[i-1];
+ prev = &data->p[i-1];
+ if (prev->len > psize) {
+ curr->len = prev->len - psize;
+ prev->len = psize;
}
- if (curr->len < ps->plane_size[i]) {
- pr_err("insufficient mem=%u p=%d len=%u\n",
- curr->len, i, ps->plane_size[i]);
- return -ENOMEM;
- }
- pr_debug("plane[%d] addr=%x len=%u\n", i,
- curr->addr, curr->len);
+ curr->addr = prev->addr + psize;
}
- data->num_planes = ps->num_planes;
+ if (curr->len < ps->plane_size[i]) {
+ pr_err("insufficient mem=%u p=%d len=%u\n",
+ curr->len, i, ps->plane_size[i]);
+ return -ENOMEM;
+ }
+ pr_debug("plane[%d] addr=%x len=%u\n", i,
+ curr->addr, curr->len);
}
+ data->num_planes = ps->num_planes;
return 0;
}
+void mdss_mdp_data_calc_offset(struct mdss_mdp_data *data, u16 x, u16 y,
+ struct mdss_mdp_plane_sizes *ps, struct mdss_mdp_format_params *fmt)
+{
+ if ((x == 0) && (y == 0))
+ return;
+
+ data->p[0].addr += y * ps->ystride[0];
+
+ if (data->num_planes == 1) {
+ data->p[0].addr += x * fmt->bpp;
+ } else {
+ u8 hmap[] = { 1, 2, 1, 2 };
+ u8 vmap[] = { 1, 1, 2, 2 };
+ u16 xoff = x / hmap[fmt->chroma_sample];
+ u16 yoff = y / vmap[fmt->chroma_sample];
+
+ data->p[0].addr += x;
+ data->p[1].addr += xoff + (yoff * ps->ystride[1]);
+ if (data->num_planes == 2) /* pseudo planar */
+ data->p[1].addr += xoff;
+ else /* planar */
+ data->p[2].addr += xoff + (yoff * ps->ystride[2]);
+ }
+}
+
int mdss_mdp_put_img(struct mdss_mdp_img_data *data)
{
struct ion_client *iclient = mdss_get_ionclient();
diff --git a/drivers/video/msm/mdss/mdss_mdp_wb.c b/drivers/video/msm/mdss/mdss_mdp_wb.c
index 88e7605..1f8244d 100644
--- a/drivers/video/msm/mdss/mdss_mdp_wb.c
+++ b/drivers/video/msm/mdss/mdss_mdp_wb.c
@@ -25,6 +25,7 @@
#include "mdss_mdp.h"
#include "mdss_fb.h"
+#include "mdss_wb.h"
enum mdss_mdp_wb_state {
@@ -131,7 +132,13 @@
pr_debug("setting secure=%d\n", enable);
+ ctl->is_secure = enable;
wb->is_secure = enable;
+
+ /* newer revisions don't require secure src pipe for secure session */
+ if (ctl->mdata->mdp_rev > MDSS_MDP_HW_REV_100)
+ return 0;
+
pipe = wb->secure_pipe;
if (!enable) {
@@ -187,6 +194,7 @@
{
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
struct mdss_mdp_wb *wb = mfd_to_wb(mfd);
+ int rc = 0;
mutex_lock(&mdss_mdp_wb_buf_lock);
if (wb == NULL) {
@@ -195,7 +203,8 @@
mdp5_data->wb = wb;
} else if (mfd->index != wb->fb_ndx) {
pr_err("only one writeback intf supported at a time\n");
- return -EMLINK;
+ rc = -EMLINK;
+ goto error;
} else {
pr_debug("writeback already initialized\n");
}
@@ -210,8 +219,9 @@
init_waitqueue_head(&wb->wait_q);
mdp5_data->wb = wb;
+error:
mutex_unlock(&mdss_mdp_wb_buf_lock);
- return 0;
+ return rc;
}
static int mdss_mdp_wb_terminate(struct msm_fb_data_type *mfd)
@@ -242,6 +252,7 @@
mdss_mdp_pipe_destroy(wb->secure_pipe);
mutex_unlock(&wb->lock);
+ mdp5_data->ctl->is_secure = false;
mdp5_data->wb = NULL;
mutex_unlock(&mdss_mdp_wb_buf_lock);
@@ -535,11 +546,37 @@
return ret;
}
+int mdss_mdp_wb_set_mirr_hint(struct msm_fb_data_type *mfd, int hint)
+{
+ struct mdss_panel_data *pdata = NULL;
+ struct mdss_wb_ctrl *wb_ctrl = NULL;
+
+ if (!mfd) {
+ pr_err("No panel data!\n");
+ return -EINVAL;
+ }
+
+ pdata = mfd->pdev->dev.platform_data;
+ wb_ctrl = container_of(pdata, struct mdss_wb_ctrl, pdata);
+
+ switch (hint) {
+ case MDP_WRITEBACK_MIRROR_ON:
+ case MDP_WRITEBACK_MIRROR_PAUSE:
+ case MDP_WRITEBACK_MIRROR_RESUME:
+ case MDP_WRITEBACK_MIRROR_OFF:
+ pr_info("wfd state switched to %d\n", hint);
+ switch_set_state(&wb_ctrl->sdev, hint);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
int mdss_mdp_wb_ioctl_handler(struct msm_fb_data_type *mfd, u32 cmd,
void *arg)
{
struct msmfb_data data;
- int ret = -ENOSYS;
+ int ret = -ENOSYS, hint = 0;
switch (cmd) {
case MSMFB_WRITEBACK_INIT:
@@ -570,6 +607,14 @@
case MSMFB_WRITEBACK_TERMINATE:
ret = mdss_mdp_wb_terminate(mfd);
break;
+ case MSMFB_WRITEBACK_SET_MIRRORING_HINT:
+ if (!copy_from_user(&hint, arg, sizeof(hint))) {
+ ret = mdss_mdp_wb_set_mirr_hint(mfd, hint);
+ } else {
+ pr_err("set mirroring hint failed on copy_from_user\n");
+ ret = -EFAULT;
+ }
+ break;
}
return ret;
diff --git a/drivers/video/msm/mdss/mdss_panel.h b/drivers/video/msm/mdss/mdss_panel.h
index d230100..9e8cfa7 100644
--- a/drivers/video/msm/mdss/mdss_panel.h
+++ b/drivers/video/msm/mdss/mdss_panel.h
@@ -55,8 +55,35 @@
MAX_PHYS_TARGET_NUM,
};
+/**
+ * enum mdss_intf_events - Different events generated by MDP core
+ *
+ * @MDSS_EVENT_RESET: MDP control path is being (re)initialized.
+ * @MDSS_EVENT_UNBLANK: Sent before first frame update from MDP is
+ * sent to panel.
+ * @MDSS_EVENT_PANEL_ON: After first frame update from MDP.
+ * @MDSS_EVENT_BLANK:		MDP has no content to display; only a blank screen
+ *				is shown on the panel. Sent before panel off.
+ * @MDSS_EVENT_PANEL_OFF: MDP has suspended frame updates, panel should be
+ * completely shutdown after this call.
+ * @MDSS_EVENT_CLOSE:		MDP has torn down the entire session.
+ * @MDSS_EVENT_SUSPEND: Propagation of power suspend event.
+ * @MDSS_EVENT_RESUME: Propagation of power resume event.
+ * @MDSS_EVENT_CHECK_PARAMS: Event generated when a panel reconfiguration is
+ * requested including when resolution changes.
+ * The event handler receives pointer to
+ * struct mdss_panel_info and should return one of:
+ * - negative if the configuration is invalid
+ * - 0 if there is no panel reconfig needed
+ * - 1 if reconfig is needed to take effect
+ * @MDSS_EVENT_CONT_SPLASH_FINISH: Special event used to handle transition of
+ * display state from boot loader to panel driver.
+ * @MDSS_EVENT_FB_REGISTERED: Called after fb dev driver has been registered,
+ * panel driver gets ptr to struct fb_info which
+ * holds fb dev information.
+ */
enum mdss_intf_events {
- MDSS_EVENT_RESET,
+ MDSS_EVENT_RESET = 1,
MDSS_EVENT_UNBLANK,
MDSS_EVENT_PANEL_ON,
MDSS_EVENT_BLANK,
@@ -214,7 +241,18 @@
void (*set_backlight) (struct mdss_panel_data *pdata, u32 bl_level);
unsigned char *mmss_cc_base;
- /* function entry chain */
+ /**
+ * event_handler() - callback handler for MDP core events
+	 * @pdata: Pointer referring to the panel struct associated with this
+ * event. Can be used to retrieve panel info.
+ * @e: Event being generated, see enum mdss_intf_events
+	 * @arg: Optional argument used by some events to pass additional info.
+ *
+	 * Used to register a handler through which the MDP core driver
+	 * propagates events. The panel driver can listen for any of these
+	 * events to perform the appropriate actions for panel initialization
+ * and teardown.
+ */
int (*event_handler) (struct mdss_panel_data *pdata, int e, void *arg);
struct mdss_panel_data *next;
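
For illustration only (not part of the patch above): a minimal sketch of a panel driver's
event_handler honouring the MDSS_EVENT_CHECK_PARAMS contract documented in this hunk. The
helpers my_panel_validate_params() and my_panel_needs_reconfig() are hypothetical placeholders.

static int my_panel_event_handler(struct mdss_panel_data *pdata,
				  int e, void *arg)
{
	switch (e) {
	case MDSS_EVENT_CHECK_PARAMS: {
		struct mdss_panel_info *pinfo = arg;

		if (!my_panel_validate_params(pinfo))
			return -EINVAL;	/* invalid configuration */
		if (my_panel_needs_reconfig(pinfo))
			return 1;	/* reconfig needed to take effect */
		return 0;		/* no panel reconfig needed */
	}
	default:
		return 0;
	}
}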
diff --git a/drivers/video/msm/mdss/mdss_qpic.c b/drivers/video/msm/mdss/mdss_qpic.c
index be02113..fa6bd3d 100644
--- a/drivers/video/msm/mdss/mdss_qpic.c
+++ b/drivers/video/msm/mdss/mdss_qpic.c
@@ -428,7 +428,7 @@
param[0]);
param++;
bytes_left -= 4;
- space++;
+ space--;
} else if (bytes_left == 2) {
QPIC_OUTPW(QPIC_REG_QPIC_LCDC_FIFO_DATA_PORT0,
*(u16 *)param);
diff --git a/drivers/video/msm/mdss/mdss_wb.c b/drivers/video/msm/mdss/mdss_wb.c
index 1b398d3..a169302 100644
--- a/drivers/video/msm/mdss/mdss_wb.c
+++ b/drivers/video/msm/mdss/mdss_wb.c
@@ -24,6 +24,7 @@
#include <linux/version.h>
#include "mdss_panel.h"
+#include "mdss_wb.h"
/**
* mdss_wb_check_params - check new panel info params
@@ -87,22 +88,62 @@
return 0;
}
+static int mdss_wb_dev_init(struct mdss_wb_ctrl *wb_ctrl)
+{
+ int rc = 0;
+ if (!wb_ctrl) {
+ pr_err("%s: no driver data\n", __func__);
+ return -ENODEV;
+ }
+
+ wb_ctrl->sdev.name = "wfd";
+ rc = switch_dev_register(&wb_ctrl->sdev);
+ if (rc) {
+		pr_err("Failed to setup switch dev for writeback panel\n");
+ return rc;
+ }
+
+ return 0;
+}
+
+static int mdss_wb_dev_uninit(struct mdss_wb_ctrl *wb_ctrl)
+{
+ if (!wb_ctrl) {
+ pr_err("%s: no driver data\n", __func__);
+ return -ENODEV;
+ }
+
+ switch_dev_unregister(&wb_ctrl->sdev);
+ return 0;
+}
+
static int mdss_wb_probe(struct platform_device *pdev)
{
struct mdss_panel_data *pdata = NULL;
+ struct mdss_wb_ctrl *wb_ctrl = NULL;
int rc = 0;
if (!pdev->dev.of_node)
return -ENODEV;
- pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
- if (!pdata)
+ wb_ctrl = devm_kzalloc(&pdev->dev, sizeof(*wb_ctrl), GFP_KERNEL);
+ if (!wb_ctrl)
return -ENOMEM;
+ pdata = &wb_ctrl->pdata;
+ wb_ctrl->pdev = pdev;
+ platform_set_drvdata(pdev, wb_ctrl);
+
rc = !mdss_wb_parse_dt(pdev, pdata);
if (!rc)
return rc;
+ rc = mdss_wb_dev_init(wb_ctrl);
+ if (rc) {
+ dev_err(&pdev->dev, "unable to set up device nodes for writeback panel\n");
+ return rc;
+ }
+
pdata->panel_info.type = WRITEBACK_PANEL;
pdata->panel_info.clk_rate = 74250000;
pdata->panel_info.pdest = DISPLAY_3;
@@ -120,6 +161,19 @@
return rc;
}
+static int mdss_wb_remove(struct platform_device *pdev)
+{
+ struct mdss_wb_ctrl *wb_ctrl = platform_get_drvdata(pdev);
+ if (!wb_ctrl) {
+ pr_err("%s: no driver data\n", __func__);
+ return -ENODEV;
+ }
+
+ mdss_wb_dev_uninit(wb_ctrl);
+ devm_kfree(&wb_ctrl->pdev->dev, wb_ctrl);
+ return 0;
+}
+
static const struct of_device_id mdss_wb_match[] = {
{ .compatible = "qcom,mdss_wb", },
{ { 0 } }
@@ -127,6 +181,7 @@
static struct platform_driver mdss_wb_driver = {
.probe = mdss_wb_probe,
+ .remove = mdss_wb_remove,
.driver = {
.name = "mdss_wb",
.of_match_table = mdss_wb_match,
diff --git a/arch/arm/boot/dts/msmzinc-sim.dts b/drivers/video/msm/mdss/mdss_wb.h
similarity index 67%
rename from arch/arm/boot/dts/msmzinc-sim.dts
rename to drivers/video/msm/mdss/mdss_wb.h
index e410344..3b0c52a 100644
--- a/arch/arm/boot/dts/msmzinc-sim.dts
+++ b/drivers/video/msm/mdss/mdss_wb.h
@@ -8,22 +8,18 @@
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
+ *
*/
-/dts-v1/;
+#ifndef MDSS_WB_H
+#define MDSS_WB_H
-/include/ "msmzinc.dtsi"
+#include <linux/switch.h>
-/ {
- model = "Qualcomm MSM ZINC Simulator";
- compatible = "qcom,msmzinc-sim", "qcom,msmzinc", "qcom,sim";
- qcom,msm-id = <178 0 0>;
-
- aliases {
- serial0 = &uart0;
- };
-
- uart0: serial@f991f000 {
- status = "ok";
- };
+struct mdss_wb_ctrl {
+ struct platform_device *pdev;
+ struct mdss_panel_data pdata;
+ struct switch_dev sdev;
};
+
+#endif
diff --git a/drivers/video/msm/mdss/mhl_msc.c b/drivers/video/msm/mdss/mhl_msc.c
index 96e8b67..08d0693 100644
--- a/drivers/video/msm/mdss/mhl_msc.c
+++ b/drivers/video/msm/mdss/mhl_msc.c
@@ -169,6 +169,7 @@
cmd_env = vmalloc(sizeof(struct msc_cmd_envelope));
if (!cmd_env) {
pr_err("%s: out of memory!\n", __func__);
+ mutex_unlock(&msc_send_workqueue_mutex);
return -ENOMEM;
}
diff --git a/drivers/video/msm/mdss/mhl_sii8334.c b/drivers/video/msm/mdss/mhl_sii8334.c
index a3a1a4e..21b6f24 100644
--- a/drivers/video/msm/mdss/mhl_sii8334.c
+++ b/drivers/video/msm/mdss/mhl_sii8334.c
@@ -190,7 +190,7 @@
static irqreturn_t mhl_tx_isr(int irq, void *dev_id);
static void switch_mode(struct mhl_tx_ctrl *mhl_ctrl,
- enum mhl_st_type to_mode);
+ enum mhl_st_type to_mode, bool hpd_off);
static void mhl_init_reg_settings(struct mhl_tx_ctrl *mhl_ctrl,
bool mhl_disc_en);
@@ -703,9 +703,18 @@
}
-static void switch_mode(struct mhl_tx_ctrl *mhl_ctrl, enum mhl_st_type to_mode)
+static void switch_mode(struct mhl_tx_ctrl *mhl_ctrl, enum mhl_st_type to_mode,
+ bool hpd_off)
{
struct i2c_client *client = mhl_ctrl->i2c_handle;
+ unsigned long flags;
+ int rc;
+ struct msm_hdmi_mhl_ops *hdmi_mhl_ops = mhl_ctrl->hdmi_mhl_ops;
+
+ pr_debug("%s: tx pwr on\n", __func__);
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->tx_powered_off = false;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
switch (to_mode) {
case POWER_STATE_D0_NO_MHL:
@@ -722,8 +731,11 @@
mhl_ctrl->cur_state = to_mode;
break;
case POWER_STATE_D3:
- if (mhl_ctrl->cur_state == POWER_STATE_D3)
+ if (mhl_ctrl->cur_state == POWER_STATE_D3) {
+ pr_debug("%s: mhl tx already in low power mode\n",
+ __func__);
break;
+ }
/* Force HPD to 0 when not in MHL mode. */
mhl_drive_hpd(mhl_ctrl, HPD_DOWN);
@@ -736,7 +748,19 @@
msleep(50);
if (!mhl_ctrl->disc_enabled)
MHL_SII_REG_NAME_MOD(REG_DISC_CTRL1, BIT1 | BIT0, 0x00);
- MHL_SII_PAGE3_MOD(0x003D, BIT0, 0x00);
+
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->tx_powered_off = true;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+
+ if (hdmi_mhl_ops && hpd_off) {
+ rc = hdmi_mhl_ops->set_upstream_hpd(
+ mhl_ctrl->pdata->hdmi_pdev, 0);
+ pr_debug("%s: hdmi unset hpd %s\n", __func__,
+ rc ? "failed" : "passed");
+ }
+ msleep(200);
+ MHL_SII_PAGE1_MOD(0x003D, BIT0, 0x00);
mhl_ctrl->cur_state = POWER_STATE_D3;
break;
default:
@@ -744,6 +768,22 @@
}
}
+static bool is_mhl_powered(void *mhl_ctx)
+{
+ struct mhl_tx_ctrl *mhl_ctrl = (struct mhl_tx_ctrl *)mhl_ctx;
+ unsigned long flags;
+ bool r = false;
+
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ if (mhl_ctrl->tx_powered_off)
+ r = false;
+ else
+ r = true;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+
+ pr_debug("%s: ret pwr state as %x\n", __func__, r);
+ return r;
+}
void mhl_tmds_ctrl(struct mhl_tx_ctrl *mhl_ctrl, uint8_t on)
{
@@ -759,6 +799,7 @@
void mhl_drive_hpd(struct mhl_tx_ctrl *mhl_ctrl, uint8_t to_state)
{
struct i2c_client *client = mhl_ctrl->i2c_handle;
+ unsigned long flags;
pr_debug("%s: To state=[0x%x]\n", __func__, to_state);
if (to_state == HPD_UP) {
@@ -769,10 +810,13 @@
* propogate to src let HPD float by clearing
* HPD OUT OVRRD EN
*/
- MHL_SII_REG_NAME_MOD(REG_INT_CTRL, BIT4, 0x00);
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->tx_powered_off = false;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+ MHL_SII_REG_NAME_MOD(REG_INT_CTRL, BIT4, 0);
} else {
/* Drive HPD to DOWN state */
- MHL_SII_REG_NAME_MOD(REG_INT_CTRL, BIT4, BIT4);
+ MHL_SII_REG_NAME_MOD(REG_INT_CTRL, (BIT4 | BIT5), BIT4);
}
}
@@ -789,9 +833,7 @@
pr_err("%s: cur st not D0\n", __func__);
return;
}
- /* spin_lock_irqsave(&mhl_state_lock, flags); */
- switch_mode(mhl_ctrl, POWER_STATE_D0_MHL);
- /* spin_unlock_irqrestore(&mhl_state_lock, flags); */
+ switch_mode(mhl_ctrl, POWER_STATE_D0_MHL, true);
MHL_SII_REG_NAME_WR(REG_MHLTX_CTL1, 0x10);
MHL_SII_CBUS_WR(0x07, 0xF2);
@@ -823,19 +865,29 @@
static void mhl_msm_disconnection(struct mhl_tx_ctrl *mhl_ctrl)
{
struct i2c_client *client = mhl_ctrl->i2c_handle;
- /*
- * MHL TX CTL1
- * Disabling Tx termination
- */
- MHL_SII_PAGE3_WR(0x30, 0xD0);
+ unsigned long flags;
- switch_mode(mhl_ctrl, POWER_STATE_D3);
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->dwnstream_hpd &= ~BIT6;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+
+ /* disabling Tx termination */
+ MHL_SII_REG_NAME_WR(REG_MHLTX_CTL1, 0xD0);
+ switch_mode(mhl_ctrl, POWER_STATE_D3, true);
}
static int mhl_msm_read_rgnd_int(struct mhl_tx_ctrl *mhl_ctrl)
{
uint8_t rgnd_imp;
struct i2c_client *client = mhl_ctrl->i2c_handle;
+ struct msm_hdmi_mhl_ops *hdmi_mhl_ops = mhl_ctrl->hdmi_mhl_ops;
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->tx_powered_off = false;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+
/* DISC STATUS REG 2 */
rgnd_imp = (mhl_i2c_reg_read(client, TX_PAGE_3, 0x001C) &
(BIT1 | BIT0));
@@ -843,6 +895,12 @@
if (0x02 == rgnd_imp) {
pr_debug("%s: mhl sink\n", __func__);
+ if (hdmi_mhl_ops) {
+ rc = hdmi_mhl_ops->set_upstream_hpd(
+ mhl_ctrl->pdata->hdmi_pdev, 1);
+ pr_debug("%s: hdmi set hpd %s\n", __func__,
+ rc ? "failed" : "passed");
+ }
mhl_ctrl->mhl_mode = 1;
power_supply_changed(&mhl_ctrl->mhl_psy);
if (mhl_ctrl->notify_usb_online)
@@ -850,7 +908,7 @@
} else {
pr_debug("%s: non-mhl sink\n", __func__);
mhl_ctrl->mhl_mode = 0;
- switch_mode(mhl_ctrl, POWER_STATE_D3);
+ switch_mode(mhl_ctrl, POWER_STATE_D3, true);
}
complete(&mhl_ctrl->rgnd_done);
return mhl_ctrl->mhl_mode ?
@@ -904,9 +962,9 @@
}
-static void dev_detect_isr(struct mhl_tx_ctrl *mhl_ctrl)
+static int dev_detect_isr(struct mhl_tx_ctrl *mhl_ctrl)
{
- uint8_t status, reg ;
+ uint8_t status, reg;
struct i2c_client *client = mhl_ctrl->i2c_handle;
/* INTR_STATUS4 */
@@ -916,13 +974,13 @@
if ((0x00 == status) &&\
(mhl_ctrl->cur_state == POWER_STATE_D3)) {
pr_err("%s: invalid intr\n", __func__);
- return;
+ return 0;
}
if (0xFF == status) {
pr_debug("%s: invalid intr 0xff\n", __func__);
MHL_SII_REG_NAME_WR(REG_INTR4, status);
- return;
+ return 0;
}
if ((status & BIT0) && (mhl_ctrl->chip_rev_id < 1)) {
@@ -940,31 +998,35 @@
mhl_msm_connection(mhl_ctrl);
} else if (status & BIT3) {
pr_debug("%s: uUSB-a type dev detct\n", __func__);
+
/* Short RGND */
MHL_SII_REG_NAME_MOD(REG_DISC_STAT2, BIT0 | BIT1, 0x00);
mhl_msm_disconnection(mhl_ctrl);
power_supply_changed(&mhl_ctrl->mhl_psy);
if (mhl_ctrl->notify_usb_online)
mhl_ctrl->notify_usb_online(0);
+ return -EACCES;
}
if (status & BIT5) {
/* clr intr - reg int4 */
pr_debug("%s: mhl discon: int4 st=%02X\n", __func__,
(int)status);
+
reg = MHL_SII_REG_NAME_RD(REG_INTR4);
MHL_SII_REG_NAME_WR(REG_INTR4, reg);
mhl_msm_disconnection(mhl_ctrl);
power_supply_changed(&mhl_ctrl->mhl_psy);
if (mhl_ctrl->notify_usb_online)
mhl_ctrl->notify_usb_online(0);
+ return -EACCES;
}
if ((mhl_ctrl->cur_state != POWER_STATE_D0_NO_MHL) &&\
(status & BIT6)) {
/* rgnd rdy Intr */
pr_debug("%s: rgnd ready intr\n", __func__);
- switch_mode(mhl_ctrl, POWER_STATE_D0_NO_MHL);
+ switch_mode(mhl_ctrl, POWER_STATE_D0_NO_MHL, true);
mhl_msm_read_rgnd_int(mhl_ctrl);
}
@@ -981,6 +1043,7 @@
release_usb_switch_open(mhl_ctrl);
}
MHL_SII_REG_NAME_WR(REG_INTR4, status);
+ return 0;
}
static void mhl_misc_isr(struct mhl_tx_ctrl *mhl_ctrl)
@@ -1000,10 +1063,13 @@
static void mhl_hpd_stat_isr(struct mhl_tx_ctrl *mhl_ctrl)
{
- uint8_t intr_1_stat;
- uint8_t cbus_stat;
+ uint8_t intr_1_stat, cbus_stat, t;
+ unsigned long flags;
struct i2c_client *client = mhl_ctrl->i2c_handle;
+ if (!is_mhl_powered(mhl_ctrl))
+ return;
+
/* INTR STATUS 1 */
intr_1_stat = MHL_SII_PAGE0_RD(0x0071);
@@ -1012,6 +1078,7 @@
/* Clear interrupts */
MHL_SII_PAGE0_WR(0x0071, intr_1_stat);
+
if (BIT6 & intr_1_stat) {
/*
* HPD status change event is pending
@@ -1019,11 +1086,21 @@
* MSC REQ ABRT REASON
*/
cbus_stat = MHL_SII_CBUS_RD(0x0D);
- if (BIT6 & cbus_stat)
- mhl_drive_hpd(mhl_ctrl, HPD_UP);
- else
- mhl_drive_hpd(mhl_ctrl, HPD_DOWN);
+ pr_debug("%s: cbus_stat=[0x%02x] cur_pwr=[%u]\n",
+ __func__, cbus_stat, mhl_ctrl->cur_state);
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ t = mhl_ctrl->dwnstream_hpd;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+
+ if (BIT6 & (cbus_stat ^ t)) {
+ u8 status = cbus_stat & BIT6;
+ mhl_drive_hpd(mhl_ctrl, status ? HPD_UP : HPD_DOWN);
+
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->dwnstream_hpd = cbus_stat;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
+ }
}
}
@@ -1257,122 +1334,9 @@
}
-static void clear_all_intrs(struct i2c_client *client)
-{
- uint8_t regval = 0x00;
-
- pr_debug_intr("********* exiting isr mask check ?? *************\n");
- pr_debug_intr("int1 mask = %02X\n",
- (int) MHL_SII_REG_NAME_RD(REG_INTR1));
- pr_debug_intr("int3 mask = %02X\n",
- (int) MHL_SII_PAGE0_RD(0x0077));
- pr_debug_intr("int4 mask = %02X\n",
- (int) MHL_SII_REG_NAME_RD(REG_INTR4));
- pr_debug_intr("int5 mask = %02X\n",
- (int) MHL_SII_REG_NAME_RD(REG_INTR5));
- pr_debug_intr("cbus1 mask = %02X\n",
- (int) MHL_SII_CBUS_RD(0x0009));
- pr_debug_intr("cbus2 mask = %02X\n",
- (int) MHL_SII_CBUS_RD(0x001F));
- pr_debug_intr("********* end of isr mask check *************\n");
-
- regval = MHL_SII_REG_NAME_RD(REG_INTR1);
- pr_debug_intr("int1 st = %02X\n", (int)regval);
- MHL_SII_REG_NAME_WR(REG_INTR1, regval);
-
- regval = MHL_SII_REG_NAME_RD(REG_INTR2);
- pr_debug_intr("int2 st = %02X\n", (int)regval);
- MHL_SII_REG_NAME_WR(REG_INTR2, regval);
-
- regval = MHL_SII_PAGE0_RD(0x0073);
- pr_debug_intr("int3 st = %02X\n", (int)regval);
- MHL_SII_PAGE0_WR(0x0073, regval);
-
- regval = MHL_SII_REG_NAME_RD(REG_INTR4);
- pr_debug_intr("int4 st = %02X\n", (int)regval);
- MHL_SII_REG_NAME_WR(REG_INTR4, regval);
-
- regval = MHL_SII_REG_NAME_RD(REG_INTR5);
- pr_debug_intr("int5 st = %02X\n", (int)regval);
- MHL_SII_REG_NAME_WR(REG_INTR5, regval);
-
- regval = MHL_SII_CBUS_RD(0x0008);
- pr_debug_intr("cbusInt st = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x0008, regval);
-
- regval = MHL_SII_CBUS_RD(0x001E);
- pr_debug_intr("CBUS intR_2: %d\n", (int)regval);
- MHL_SII_CBUS_WR(0x001E, regval);
-
- regval = MHL_SII_CBUS_RD(0x00A0);
- pr_debug_intr("A0 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00A0, regval);
-
- regval = MHL_SII_CBUS_RD(0x00A1);
- pr_debug_intr("A1 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00A1, regval);
-
- regval = MHL_SII_CBUS_RD(0x00A2);
- pr_debug_intr("A2 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00A2, regval);
-
- regval = MHL_SII_CBUS_RD(0x00A3);
- pr_debug_intr("A3 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00A3, regval);
-
- regval = MHL_SII_CBUS_RD(0x00B0);
- pr_debug_intr("B0 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00B0, regval);
-
- regval = MHL_SII_CBUS_RD(0x00B1);
- pr_debug_intr("B1 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00B1, regval);
-
- regval = MHL_SII_CBUS_RD(0x00B2);
- pr_debug_intr("B2 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00B2, regval);
-
- regval = MHL_SII_CBUS_RD(0x00B3);
- pr_debug_intr("B3 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00B3, regval);
-
- regval = MHL_SII_CBUS_RD(0x00E0);
- pr_debug_intr("E0 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00E0, regval);
-
- regval = MHL_SII_CBUS_RD(0x00E1);
- pr_debug_intr("E1 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00E1, regval);
-
- regval = MHL_SII_CBUS_RD(0x00E2);
- pr_debug_intr("E2 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00E2, regval);
-
- regval = MHL_SII_CBUS_RD(0x00E3);
- pr_debug_intr("E3 st set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00E3, regval);
-
- regval = MHL_SII_CBUS_RD(0x00F0);
- pr_debug_intr("F0 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00F0, regval);
-
- regval = MHL_SII_CBUS_RD(0x00F1);
- pr_debug_intr("F1 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00F1, regval);
-
- regval = MHL_SII_CBUS_RD(0x00F2);
- pr_debug_intr("F2 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00F2, regval);
-
- regval = MHL_SII_CBUS_RD(0x00F3);
- pr_debug_intr("F3 int set = %02X\n", (int)regval);
- MHL_SII_CBUS_WR(0x00F3, regval);
- pr_debug_intr("********* end of exiting in isr *************\n");
-}
-
-
static irqreturn_t mhl_tx_isr(int irq, void *data)
{
+ int rc;
struct mhl_tx_ctrl *mhl_ctrl = (struct mhl_tx_ctrl *)data;
pr_debug("%s: Getting Interrupts\n", __func__);
@@ -1380,7 +1344,11 @@
* Check RGND, MHL_EST, CBUS_LOCKOUT, SCDT
* interrupts. In D3, we get only RGND
*/
- dev_detect_isr(mhl_ctrl);
+ rc = dev_detect_isr(mhl_ctrl);
+ if (rc) {
+ pr_info("%s: dev_detect_isr rc=[%d]\n", __func__, rc);
+ return IRQ_HANDLED;
+ }
pr_debug("%s: cur pwr state is [0x%x]\n",
__func__, mhl_ctrl->cur_state);
@@ -1401,8 +1369,6 @@
mhl_hpd_stat_isr(mhl_ctrl);
}
- clear_all_intrs(mhl_ctrl->i2c_handle);
-
return IRQ_HANDLED;
}
@@ -1410,6 +1376,13 @@
{
uint8_t chip_rev_id = 0x00;
struct i2c_client *client = mhl_ctrl->i2c_handle;
+ unsigned long flags;
+
+ spin_lock_irqsave(&mhl_ctrl->lock, flags);
+ mhl_ctrl->dwnstream_hpd = 0;
+ mhl_ctrl->tx_powered_off = false;
+ spin_unlock_irqrestore(&mhl_ctrl->lock, flags);
/* Reset the TX chip */
mhl_sii_reset_pin(mhl_ctrl, 0);
@@ -1428,7 +1401,7 @@
* MHL-USB handshake is implemented
*/
mhl_init_reg_settings(mhl_ctrl, true);
- switch_mode(mhl_ctrl, POWER_STATE_D3);
+ switch_mode(mhl_ctrl, POWER_STATE_D0_NO_MHL, false);
return 0;
}
@@ -1710,6 +1683,7 @@
mhl_ctrl->cur_state = POWER_STATE_D0_MHL;
INIT_LIST_HEAD(&mhl_ctrl->list_cmd);
init_completion(&mhl_ctrl->msc_cmd_done);
+ spin_lock_init(&mhl_ctrl->lock);
mhl_ctrl->msc_send_workqueue = create_singlethread_workqueue
("mhl_msc_cmd_queue");
@@ -1813,7 +1787,7 @@
pr_debug("%s: i2c client addr is [%x]\n", __func__, client->addr);
if (mhl_ctrl->pdata->hdmi_pdev) {
rc = msm_hdmi_register_mhl(mhl_ctrl->pdata->hdmi_pdev,
- hdmi_mhl_ops);
+ hdmi_mhl_ops, mhl_ctrl);
if (rc) {
pr_err("%s: register with hdmi failed\n", __func__);
rc = -EPROBE_DEFER;
diff --git a/drivers/video/msm/mdss/msm_mdss_io_8974.c b/drivers/video/msm/mdss/msm_mdss_io_8974.c
index 2b07e43..5564ceb 100644
--- a/drivers/video/msm/mdss/msm_mdss_io_8974.c
+++ b/drivers/video/msm/mdss/msm_mdss_io_8974.c
@@ -275,8 +275,8 @@
} else {
MIPI_OUTP(ctrl_base + 0x0220, 0x006);
- usleep(10);
MIPI_OUTP(ctrl_base + 0x0470, 0x000);
+ MIPI_OUTP(ctrl_base + 0x0598, 0x000);
wmb();
}
}
diff --git a/drivers/video/msm/msm_fb.c b/drivers/video/msm/msm_fb.c
index 797d4a3..18c63a0 100644
--- a/drivers/video/msm/msm_fb.c
+++ b/drivers/video/msm/msm_fb.c
@@ -3238,6 +3238,29 @@
return mdp4_writeback_terminate(info);
}
+static int msmfb_overlay_ioctl_writeback_set_mirr_hint(struct fb_info *
+ info, void *argp)
+{
+ int ret = 0, hint;
+
+ if (!info) {
+ ret = -EINVAL;
+ goto error;
+ }
+
+ ret = copy_from_user(&hint, argp, sizeof(hint));
+ if (ret)
+ goto error;
+
+ ret = mdp4_writeback_set_mirroring_hint(info, hint);
+ if (ret)
+ goto error;
+error:
+ if (ret)
+ pr_err("%s: ioctl failed\n", __func__);
+ return ret;
+}
+
#else
static int msmfb_overlay_ioctl_writeback_init(struct fb_info *info)
{
@@ -3270,6 +3293,12 @@
{
return -ENOTSUPP;
}
+
+static int msmfb_overlay_ioctl_writeback_set_mirr_hint(struct fb_info *
+ info, void *argp)
+{
+ return -ENOTSUPP;
+}
#endif
static int msmfb_overlay_3d_sbys(struct fb_info *info, unsigned long *argp)
@@ -3745,6 +3774,10 @@
case MSMFB_WRITEBACK_TERMINATE:
ret = msmfb_overlay_ioctl_writeback_terminate(info);
break;
+ case MSMFB_WRITEBACK_SET_MIRRORING_HINT:
+ ret = msmfb_overlay_ioctl_writeback_set_mirr_hint(
+ info, argp);
+ break;
#endif
case MSMFB_VSYNC_CTRL:
case MSMFB_OVERLAY_VSYNC_CTRL:
diff --git a/drivers/video/msm/msm_fb.h b/drivers/video/msm/msm_fb.h
index 7519ac7..a02a108 100644
--- a/drivers/video/msm/msm_fb.h
+++ b/drivers/video/msm/msm_fb.h
@@ -37,7 +37,9 @@
#include <linux/fb.h>
#include <linux/list.h>
#include <linux/types.h>
+#include <linux/switch.h>
#include <linux/msm_mdp.h>
+
#ifdef CONFIG_HAS_EARLYSUSPEND
#include <linux/earlysuspend.h>
#endif
@@ -180,6 +182,7 @@
struct list_head writeback_busy_queue;
struct list_head writeback_free_queue;
struct list_head writeback_register_queue;
+ struct switch_dev writeback_sdev;
wait_queue_head_t wait_q;
struct ion_client *iclient;
unsigned long display_iova;
diff --git a/fs/file.c b/fs/file.c
index ba3f605..2f989c3 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -476,6 +476,7 @@
spin_unlock(&files->file_lock);
return error;
}
+EXPORT_SYMBOL(alloc_fd);
int get_unused_fd(void)
{
diff --git a/include/linux/bluetooth-power.h b/include/linux/bluetooth-power.h
new file mode 100644
index 0000000..ba53a40
--- /dev/null
+++ b/include/linux/bluetooth-power.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __LINUX_BLUETOOTH_POWER_H
+#define __LINUX_BLUETOOTH_POWER_H
+
+/*
+ * voltage regulator information required for configuring the
+ * bluetooth chipset
+ */
+struct bt_power_vreg_data {
+ /* voltage regulator handle */
+ struct regulator *reg;
+ /* regulator name */
+ const char *name;
+ /* voltage levels to be set */
+ unsigned int low_vol_level;
+ unsigned int high_vol_level;
+ /*
+ * is set voltage supported for this regulator?
+ * false => set voltage is not supported
+ * true => set voltage is supported
+ *
+	 * Some regulators (like gpio-regulators, LVS (low voltage switches)
+	 * PMIC regulators) don't have the capability to call
+	 * regulator_set_voltage or regulator_set_optimum_mode.
+	 * Use this variable to indicate whether it is such a regulator.
+ */
+ bool set_voltage_sup;
+ /* is this regulator enabled? */
+ bool is_enabled;
+};
+
+/*
+ * Platform data for the bluetooth power driver.
+ */
+struct bluetooth_power_platform_data {
+ /* Bluetooth reset gpio */
+ int bt_gpio_sys_rst;
+ /* VDDIO voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_io;
+ /* VDD_PA voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_pa;
+ /* VDD_LDOIN voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_ldo;
+ /* Optional: chip power down gpio-regulator
+	 * chip power down data is required when the bluetooth module
+	 * and other modules like wifi co-exist in a single chip and
+	 * share a common gpio to bring the chip out of reset.
+ */
+ struct bt_power_vreg_data *bt_chip_pwd;
+ /* Optional: Bluetooth power setup function */
+ int (*bt_power_setup) (int);
+};
+
+#endif /* __LINUX_BLUETOOTH_POWER_H */
diff --git a/include/linux/dvb/dmx.h b/include/linux/dvb/dmx.h
index c2b35c8..aa1eba5 100644
--- a/include/linux/dvb/dmx.h
+++ b/include/linux/dvb/dmx.h
@@ -229,7 +229,15 @@
DMX_EVENT_MARKER = 0x00000100,
/* New indexing entry is ready */
- DMX_EVENT_NEW_INDEX_ENTRY = 0x00000200
+ DMX_EVENT_NEW_INDEX_ENTRY = 0x00000200,
+
+ /*
+ * Section filter timer expired. This is notified
+ * when timeout is configured to section filter
+ * (dmx_sct_filter_params) and no sections were
+ * received for the given time.
+ */
+ DMX_EVENT_SECTION_TIMEOUT = 0x00000400
};
enum dmx_oob_cmd {
@@ -368,6 +376,9 @@
/* DTS value associated with the buffer */
__u64 dts;
+ /* STC value associated with the buffer in 27MHz */
+ __u64 stc;
+
/*
* Number of TS packets with Transport Error Indicator (TEI) set
* in the TS packet header since last reported event
@@ -703,6 +714,46 @@
__u64 types;
};
+struct dmx_set_ts_insertion {
+ /*
+ * Unique identifier managed by the caller.
+ * This identifier can be used later to remove the
+ * insertion using DMX_ABORT_TS_INSERTION ioctl.
+ */
+ __u32 identifier;
+
+ /*
+ * Repetition time in msec, minimum allowed value is 25msec.
+ * 0 repetition time means one-shot insertion is done.
+ * Insertion done based on wall-clock.
+ */
+ __u32 repetition_time;
+
+ /*
+ * TS packets buffer to be inserted.
+ * The buffer is inserted as-is to the recording buffer
+ * without any modification.
+ * It is advised to set discontinuity flag in the very
+ * first TS packet in the buffer.
+ */
+ const __u8 *ts_packets;
+
+ /*
+ * Size in bytes of the TS packets buffer to be inserted.
+ * Should be in multiples of 188 or 192 bytes
+ * depending on recording filter output format.
+ */
+ size_t size;
+};
+
+struct dmx_abort_ts_insertion {
+ /*
+ * Identifier of the insertion buffer previously set
+ * using DMX_SET_TS_INSERTION.
+ */
+ __u32 identifier;
+};
+
#define DMX_START _IO('o', 41)
#define DMX_STOP _IO('o', 42)
#define DMX_SET_FILTER _IOW('o', 43, struct dmx_sct_filter_params)
@@ -731,5 +782,7 @@
#define DMX_GET_EVENTS_MASK _IOR('o', 67, struct dmx_events_mask)
#define DMX_PUSH_OOB_COMMAND _IOW('o', 68, struct dmx_oob_command)
#define DMX_SET_INDEXING_PARAMS _IOW('o', 69, struct dmx_indexing_params)
+#define DMX_SET_TS_INSERTION _IOW('o', 70, struct dmx_set_ts_insertion)
+#define DMX_ABORT_TS_INSERTION _IOW('o', 71, struct dmx_abort_ts_insertion)
#endif /*_DVBDMX_H_*/
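
For illustration only (not part of the patch above): a user-space sketch of how the new
TS-insertion ioctls might be driven, assuming dmx_fd is an open demux device node and
pkt_buf/pkt_len describe a buffer that is a multiple of 188 (or 192) bytes as required above.

	struct dmx_set_ts_insertion ins = {
		.identifier      = 1,	/* caller-managed id, reused below to abort */
		.repetition_time = 50,	/* msec; minimum allowed is 25, 0 = one-shot */
		.ts_packets      = pkt_buf,
		.size            = pkt_len,
	};

	if (ioctl(dmx_fd, DMX_SET_TS_INSERTION, &ins) < 0)
		perror("DMX_SET_TS_INSERTION");

	/* ... later, stop the periodic insertion ... */
	struct dmx_abort_ts_insertion abort_req = { .identifier = 1 };
	if (ioctl(dmx_fd, DMX_ABORT_TS_INSERTION, &abort_req) < 0)
		perror("DMX_ABORT_TS_INSERTION");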
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c737eb7..2a144e6 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -39,6 +39,12 @@
void kmap_flush_unused(void);
+#ifdef CONFIG_ARCH_WANT_KMAP_ATOMIC_FLUSH
+void kmap_atomic_flush_unused(void);
+#else
+static inline void kmap_atomic_flush_unused(void) { }
+#endif
+
#else /* CONFIG_HIGHMEM */
static inline unsigned int nr_free_highpages(void) { return 0; }
@@ -72,6 +78,7 @@
#define kmap_atomic_to_page(ptr) virt_to_page(ptr)
#define kmap_flush_unused() do {} while(0)
+#define kmap_atomic_flush_unused() do {} while (0)
#endif
#endif /* CONFIG_HIGHMEM */
diff --git a/include/linux/ion.h b/include/linux/ion.h
index 88ad9a0..4983316 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -35,6 +35,7 @@
ION_HEAP_TYPE_SYSTEM,
ION_HEAP_TYPE_SYSTEM_CONTIG,
ION_HEAP_TYPE_CARVEOUT,
+ ION_HEAP_TYPE_CHUNK,
ION_HEAP_TYPE_CUSTOM, /* must be last so device specific heaps always
are at the end of this enum */
ION_NUM_HEAPS,
@@ -44,8 +45,10 @@
#define ION_HEAP_SYSTEM_CONTIG_MASK (1 << ION_HEAP_TYPE_SYSTEM_CONTIG)
#define ION_HEAP_CARVEOUT_MASK (1 << ION_HEAP_TYPE_CARVEOUT)
+#define ION_NUM_HEAP_IDS sizeof(unsigned int) * 8
+
/**
- * heap flags - the lower 16 bits are used by core ion, the upper 16
+ * allocation flags - the lower 16 bits are used by core ion, the upper 16
* bits are reserved for use by the heaps themselves.
*/
#define ION_FLAG_CACHED 1 /* mappings of this buffer should be
@@ -74,8 +77,9 @@
/**
* struct ion_platform_heap - defines a heap in the given platform
* @type: type of the heap from ion_heap_type enum
- * @id: unique identifier for heap. When allocating (lower numbers
- * will be allocated from first)
+ * @id:		unique identifier for heap. When allocating, heaps with
+ *		higher ids are tried first. At allocation these are passed
+ *		as a bit mask and therefore can not exceed ION_NUM_HEAP_IDS.
* @name: used for debug purposes
* @base: base address of heap in physical memory if applicable
* @size: size of the heap in bytes if applicable
@@ -83,6 +87,10 @@
* @has_outer_cache: set to 1 if outer cache is used, 0 otherwise.
* @extra_data: Extra data specific to each heap type
* @priv: heap private data
+ * @align: required alignment in physical memory if applicable
+ * @priv: private info passed from the board file
+ *
+ * Provided by the board file.
*/
struct ion_platform_heap {
enum ion_heap_type type;
@@ -93,6 +101,7 @@
enum ion_memory_types memory_type;
unsigned int has_outer_cache;
void *extra_data;
+ ion_phys_addr_t align;
void *priv;
};
@@ -125,12 +134,12 @@
/**
* ion_client_create() - allocate a client and returns it
- * @dev: the global ion device
- * @heap_mask: mask of heaps this client can allocate from
- * @name: used for debugging
+ * @dev: the global ion device
+ * @heap_type_mask: mask of heaps this client can allocate from
+ * @name: used for debugging
*/
struct ion_client *ion_client_create(struct ion_device *dev,
- unsigned int heap_mask, const char *name);
+ const char *name);
/**
* ion_client_destroy() - free's a client and all it's handles
@@ -143,21 +152,22 @@
/**
* ion_alloc - allocate ion memory
- * @client: the client
- * @len: size of the allocation
- * @align: requested allocation alignment, lots of hardware blocks have
- * alignment requirements of some kind
- * @heap_mask: mask of heaps to allocate from, if multiple bits are set
- * heaps will be tried in order from lowest to highest order bit
- * @flags: heap flags, the low 16 bits are consumed by ion, the high 16
- * bits are passed on to the respective heap and can be heap
- * custom
+ * @client: the client
+ * @len: size of the allocation
+ * @align: requested allocation alignment, lots of hardware blocks
+ * have alignment requirements of some kind
+ * @heap_id_mask: mask of heaps to allocate from, if multiple bits are set
+ * heaps will be tried in order from highest to lowest
+ * id
+ * @flags: heap flags, the low 16 bits are consumed by ion, the
+ * high 16 bits are passed on to the respective heap and
+ * can be heap custom
*
* Allocate memory in one of the heaps provided in heap mask and return
* an opaque handle to it.
*/
struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
- size_t align, unsigned int heap_mask,
+ size_t align, unsigned int heap_id_mask,
unsigned int flags);
/**
@@ -217,11 +227,19 @@
void ion_unmap_kernel(struct ion_client *client, struct ion_handle *handle);
/**
- * ion_share_dma_buf() - given an ion client, create a dma-buf fd
+ * ion_share_dma_buf() - share buffer as dma-buf
* @client: the client
* @handle: the handle
*/
-int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle);
+struct dma_buf *ion_share_dma_buf(struct ion_client *client,
+ struct ion_handle *handle);
+
+/**
+ * ion_share_dma_buf_fd() - given an ion client, create a dma-buf fd
+ * @client: the client
+ * @handle: the handle
+ */
+int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handle);
/**
* ion_import_dma_buf() - given an dma-buf fd from the ion exporter get handle
@@ -310,12 +328,12 @@
/**
* struct ion_allocation_data - metadata passed from userspace for allocations
- * @len: size of the allocation
- * @align: required alignment of the allocation
- * @heap_mask: mask of heaps to allocate from
- * @flags: flags passed to heap
- * @handle: pointer that will be populated with a cookie to use to refer
- * to this allocation
+ * @len: size of the allocation
+ * @align: required alignment of the allocation
+ * @heap_id_mask: mask of heap ids to allocate from
+ * @flags: flags passed to heap
+ * @handle: pointer that will be populated with a cookie to use to
+ * refer to this allocation
*
* Provided by userspace as an argument to the ioctl
*/
diff --git a/include/linux/iopoll.h b/include/linux/iopoll.h
index 8aa758d..b882fe2 100644
--- a/include/linux/iopoll.h
+++ b/include/linux/iopoll.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -40,8 +40,12 @@
might_sleep_if(timeout_us); \
for (;;) { \
(val) = readl(addr); \
- if ((cond) || (timeout_us && time_after(jiffies, timeout))) \
+ if (cond) \
break; \
+ if (timeout_us && time_after(jiffies, timeout)) { \
+ (val) = readl(addr); \
+ break; \
+ } \
if (sleep_us) \
usleep_range(DIV_ROUND_UP(sleep_us, 4), sleep_us); \
} \
diff --git a/include/linux/memory_alloc.h b/include/linux/memory_alloc.h
index b649451..8097949 100644
--- a/include/linux/memory_alloc.h
+++ b/include/linux/memory_alloc.h
@@ -20,7 +20,7 @@
struct mem_pool {
struct mutex pool_mutex;
struct gen_pool *gpool;
- unsigned long paddr;
+ phys_addr_t paddr;
unsigned long size;
unsigned long free;
unsigned int id;
@@ -28,29 +28,34 @@
struct alloc {
struct rb_node rb_node;
- void *vaddr;
- unsigned long paddr;
+ /*
+ * The physical address may be used for lookup in the tree so the
+	 * 'virtual address' needs to be able to accommodate larger physical
+ * addresses.
+ */
+ phys_addr_t vaddr;
+ phys_addr_t paddr;
struct mem_pool *mpool;
unsigned long len;
void *caller;
};
-struct mem_pool *initialize_memory_pool(unsigned long start,
+struct mem_pool *initialize_memory_pool(phys_addr_t start,
unsigned long size, int mem_type);
void *allocate_contiguous_memory(unsigned long size,
int mem_type, unsigned long align, int cached);
-unsigned long _allocate_contiguous_memory_nomap(unsigned long size,
+phys_addr_t _allocate_contiguous_memory_nomap(unsigned long size,
int mem_type, unsigned long align, void *caller);
-unsigned long allocate_contiguous_memory_nomap(unsigned long size,
+phys_addr_t allocate_contiguous_memory_nomap(unsigned long size,
int mem_type, unsigned long align);
void free_contiguous_memory(void *addr);
-void free_contiguous_memory_by_paddr(unsigned long paddr);
+void free_contiguous_memory_by_paddr(phys_addr_t paddr);
-unsigned long memory_pool_node_paddr(void *vaddr);
+phys_addr_t memory_pool_node_paddr(void *vaddr);
unsigned long memory_pool_node_len(void *vaddr);
diff --git a/include/linux/mfd/pm8xxx/pm8921-charger.h b/include/linux/mfd/pm8xxx/pm8921-charger.h
index 1c67b1e..5439fd1 100644
--- a/include/linux/mfd/pm8xxx/pm8921-charger.h
+++ b/include/linux/mfd/pm8xxx/pm8921-charger.h
@@ -165,6 +165,7 @@
unsigned int warm_bat_chg_current;
unsigned int cool_bat_voltage;
unsigned int warm_bat_voltage;
+ int hysteresis_temp;
unsigned int (*get_batt_capacity_percent) (void);
int64_t batt_id_min;
int64_t batt_id_max;
diff --git a/include/linux/mfd/wcd9xxx/core.h b/include/linux/mfd/wcd9xxx/core.h
index 37a12fb..0d1f49f 100644
--- a/include/linux/mfd/wcd9xxx/core.h
+++ b/include/linux/mfd/wcd9xxx/core.h
@@ -77,7 +77,8 @@
WCD9XXX_IRQ_HPH_L_PA_STARTUP,
WCD9XXX_IRQ_HPH_R_PA_STARTUP,
WCD9XXX_IRQ_EAR_PA_STARTUP,
- WCD9XXX_IRQ_RESERVED_0,
+ WCD9310_NUM_IRQS,
+ WCD9XXX_IRQ_RESERVED_0 = WCD9310_NUM_IRQS,
WCD9XXX_IRQ_RESERVED_1,
/* INTR_REG 3 */
WCD9XXX_IRQ_MAD_AUDIO,
@@ -85,12 +86,14 @@
WCD9XXX_IRQ_MAD_ULTRASOUND,
WCD9XXX_IRQ_SPEAKER_CLIPPING,
WCD9XXX_IRQ_MBHC_JACK_SWITCH,
+ WCD9XXX_IRQ_VBAT_MONITOR_ATTACK,
+ WCD9XXX_IRQ_VBAT_MONITOR_RELEASE,
WCD9XXX_NUM_IRQS,
};
enum {
- TABLA_NUM_IRQS = WCD9XXX_NUM_IRQS,
- SITAR_NUM_IRQS = WCD9XXX_NUM_IRQS,
+ TABLA_NUM_IRQS = WCD9310_NUM_IRQS,
+ SITAR_NUM_IRQS = WCD9310_NUM_IRQS,
TAIKO_NUM_IRQS = WCD9XXX_NUM_IRQS,
TAPAN_NUM_IRQS = WCD9XXX_NUM_IRQS,
};
@@ -150,6 +153,17 @@
#define WCD9XXX_CH(xport, xshift) \
{.port = xport, .shift = xshift}
+struct wcd9xxx_codec_type {
+ u16 id_major;
+ u16 id_minor;
+ struct mfd_cell *dev;
+ int size;
+ int num_irqs;
+	int version; /* -1 to retrieve version from chip version register */
+ enum wcd9xxx_slim_slave_addr_type slim_slave_type;
+ u16 i2c_chip_status;
+};
+
struct wcd9xxx {
struct device *dev;
struct slim_device *slim;
@@ -181,14 +195,14 @@
struct pm_qos_request pm_qos_req;
int wlock_holders;
- u8 idbyte[4];
+ u16 id_minor;
+ u16 id_major;
unsigned int irq_base;
unsigned int irq;
u8 irq_masks_cur[WCD9XXX_NUM_IRQ_REGS];
u8 irq_masks_cache[WCD9XXX_NUM_IRQ_REGS];
bool irq_level_high[WCD9XXX_MAX_NUM_IRQS];
- int num_irqs;
/* Slimbus or I2S port */
u32 num_rx_port;
u32 num_tx_port;
@@ -196,7 +210,7 @@
struct wcd9xxx_ch *tx_chs;
u32 mclk_rate;
- enum wcd9xxx_slim_slave_addr_type slim_slave_type;
+ const struct wcd9xxx_codec_type *codec_type;
};
int wcd9xxx_reg_read(struct wcd9xxx *wcd9xxx, unsigned short reg);
diff --git a/include/linux/mfd/wcd9xxx/pdata.h b/include/linux/mfd/wcd9xxx/pdata.h
index 813cac3..c6e4ab3 100644
--- a/include/linux/mfd/wcd9xxx/pdata.h
+++ b/include/linux/mfd/wcd9xxx/pdata.h
@@ -136,7 +136,7 @@
unsigned int hph_ocp_limit:3; /* Headphone OCP current limit */
};
-#define MAX_REGULATOR 7
+#define WCD9XXX_MAX_REGULATOR 8
/*
* format : TABLA_<POWER_SUPPLY_PIN_NAME>_CUR_MAX
*
@@ -151,11 +151,14 @@
#define WCD9XXX_VDDD_CDC_D_CUR_MAX 5000
#define WCD9XXX_VDDD_CDC_A_CUR_MAX 5000
+#define WCD9XXX_VDD_SPKDRV_NAME "cdc-vdd-spkdrv"
+
struct wcd9xxx_regulator {
const char *name;
int min_uV;
int max_uV;
int optimum_uA;
+ bool ondemand;
struct regulator *regulator;
};
@@ -168,7 +171,7 @@
struct slim_device slimbus_slave_device;
struct wcd9xxx_micbias_setting micbias;
struct wcd9xxx_ocp_setting ocp;
- struct wcd9xxx_regulator regulator[MAX_REGULATOR];
+ struct wcd9xxx_regulator regulator[WCD9XXX_MAX_REGULATOR];
u32 mclk_rate;
u32 dmic_sample_rate;
};
diff --git a/include/linux/mhl_8334.h b/include/linux/mhl_8334.h
index d8eb494..a66a411 100644
--- a/include/linux/mhl_8334.h
+++ b/include/linux/mhl_8334.h
@@ -158,6 +158,9 @@
int scrpd_busy;
int wr_burst_pending;
struct completion req_write_done;
+ spinlock_t lock;
+ bool tx_powered_off;
+ uint8_t dwnstream_hpd;
};
int mhl_i2c_reg_read(struct i2c_client *client,
diff --git a/include/linux/mmc/host.h b/include/linux/mmc/host.h
index 9eef3a0..48be19a 100644
--- a/include/linux/mmc/host.h
+++ b/include/linux/mmc/host.h
@@ -82,6 +82,12 @@
#define MMC_SET_DRIVER_TYPE_D 3
};
+/* states to represent load on the host */
+enum mmc_load {
+ MMC_LOAD_HIGH,
+ MMC_LOAD_LOW,
+};
+
struct mmc_host_ops {
/*
* 'enable' is called when the host is claimed and 'disable' is called
@@ -140,6 +146,7 @@
void (*hw_reset)(struct mmc_host *host);
unsigned long (*get_max_frequency)(struct mmc_host *host);
unsigned long (*get_min_frequency)(struct mmc_host *host);
+ int (*notify_load)(struct mmc_host *, enum mmc_load);
int (*stop_request)(struct mmc_host *host);
unsigned int (*get_xfer_remain)(struct mmc_host *host);
};
@@ -401,6 +408,7 @@
bool initialized;
bool in_progress;
struct delayed_work work;
+ enum mmc_load state;
} clk_scaling;
unsigned long private[0] ____cacheline_aligned;
};
diff --git a/include/linux/mmc/sdhci.h b/include/linux/mmc/sdhci.h
index 538185f..4e30082 100644
--- a/include/linux/mmc/sdhci.h
+++ b/include/linux/mmc/sdhci.h
@@ -122,6 +122,25 @@
* secure discard kind of operations to complete.
*/
#define SDHCI_QUIRK2_USE_MAX_DISCARD_SIZE (1<<5)
+/*
+ * Ignore data timeout error for R1B commands as there will be no
+ * data associated and the busy timeout value for these commands
+ * could be larger than the maximum timeout value that the controller
+ * can handle.
+ */
+#define SDHCI_QUIRK2_IGNORE_DATATOUT_FOR_R1BCMD (1<<6)
+/*
+ * The preset value registers are not properly initialized by
+ * some hardware and hence preset value must not be enabled for
+ * such controllers.
+ */
+#define SDHCI_QUIRK2_BROKEN_PRESET_VALUE (1<<7)
+/*
+ * Some controllers define the usage of 0xF in data timeout counter
+ * register (0x2E) which is actually a reserved bit as per
+ * specification.
+ */
+#define SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT (1<<8)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index ca7a586..d63232a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -72,11 +72,9 @@
#ifdef CONFIG_CMA
bool is_cma_pageblock(struct page *page);
# define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
-# define cma_wmark_pages(zone) zone->min_cma_pages
#else
# define is_cma_pageblock(page) false
# define is_migrate_cma(migratetype) false
-# define cma_wmark_pages(zone) 0
#endif
#define for_each_migratetype_order(order, type) \
@@ -385,11 +383,6 @@
seqlock_t span_seqlock;
#endif
#ifdef CONFIG_CMA
- /*
- * CMA needs to increase watermark levels during the allocation
- * process to make sure that the system is not starved.
- */
- unsigned long min_cma_pages;
bool cma_alloc;
#endif
struct free_area free_area[MAX_ORDER];
diff --git a/include/linux/msm_mdp.h b/include/linux/msm_mdp.h
index 60567fa..61e9cb1 100644
--- a/include/linux/msm_mdp.h
+++ b/include/linux/msm_mdp.h
@@ -76,6 +76,8 @@
struct mdp_display_commit)
#define MSMFB_METADATA_SET _IOW(MSMFB_IOCTL_MAGIC, 165, struct msmfb_metadata)
#define MSMFB_METADATA_GET _IOW(MSMFB_IOCTL_MAGIC, 166, struct msmfb_metadata)
+#define MSMFB_WRITEBACK_SET_MIRRORING_HINT _IOW(MSMFB_IOCTL_MAGIC, 167, \
+ unsigned int)
#define FB_TYPE_3D_PANEL 0x10101010
#define MDP_IMGTYPE2_START 0x10000
@@ -171,6 +173,7 @@
#define MDP_OV_PIPE_FORCE_DMA 0x00004000
#define MDP_MEMORY_ID_TYPE_FB 0x00001000
#define MDP_BWC_EN 0x00000400
+#define MDP_DECIMATION_EN 0x00000800
#define MDP_TRANSP_NOP 0xffffffff
#define MDP_ALPHA_NOP 0xff
@@ -296,7 +299,7 @@
#define MDSS_PP_ARG_MASK 0x3C00
#define MDSS_PP_ARG_NUM 4
-#define MDSS_PP_ARG_SHIFT 8
+#define MDSS_PP_ARG_SHIFT 10
#define MDSS_PP_LOCATION_MASK 0x0300
#define MDSS_PP_LOGICAL_MASK 0x00FF
@@ -332,6 +335,7 @@
#define MDP_OVERLAY_PP_IGC_CFG 0x8
#define MDP_OVERLAY_PP_SHARP_CFG 0x10
#define MDP_OVERLAY_PP_HIST_CFG 0x20
+#define MDP_OVERLAY_PP_HIST_LUT_CFG 0x40
#define MDP_CSC_FLAG_ENABLE 0x1
#define MDP_CSC_FLAG_YUV_IN 0x2
@@ -375,6 +379,13 @@
uint16_t num_bins;
};
+struct mdp_hist_lut_data {
+ uint32_t block;
+ uint32_t ops;
+ uint32_t len;
+ uint32_t *data;
+};
+
struct mdp_overlay_pp_params {
uint32_t config_ops;
struct mdp_csc_cfg csc_cfg;
@@ -383,6 +394,7 @@
struct mdp_igc_lut_data igc_cfg;
struct mdp_sharp_cfg sharp_cfg;
struct mdp_histogram_cfg hist_cfg;
+ struct mdp_hist_lut_data hist_lut_cfg;
};
struct mdp_overlay {
@@ -395,7 +407,9 @@
uint32_t transp_mask;
uint32_t flags;
uint32_t id;
- uint32_t user_data[8];
+ uint32_t user_data[7];
+ uint8_t horz_deci;
+ uint8_t vert_deci;
struct mdp_overlay_pp_params overlay_pp_cfg;
};
@@ -517,13 +531,6 @@
};
-struct mdp_hist_lut_data {
- uint32_t block;
- uint32_t ops;
- uint32_t len;
- uint32_t *data;
-};
-
struct mdp_lut_cfg_data {
uint32_t lut_type;
union {
@@ -606,6 +613,7 @@
uint16_t calib[4];
uint8_t strength_limit;
uint8_t t_filter_recursion;
+ uint16_t stab_itr;
};
/* ops uses standard MDP_PP_* flags */
@@ -624,6 +632,7 @@
uint32_t amb_light;
uint32_t strength;
} in;
+ uint32_t output;
};
enum {
@@ -692,6 +701,7 @@
uint8_t rgb_pipes;
uint8_t vig_pipes;
uint8_t dma_pipes;
+ uint32_t features;
};
struct msmfb_metadata {
@@ -763,6 +773,13 @@
MDP_IOMMU_DOMAIN_NS,
};
+enum {
+ MDP_WRITEBACK_MIRROR_OFF,
+ MDP_WRITEBACK_MIRROR_ON,
+ MDP_WRITEBACK_MIRROR_PAUSE,
+ MDP_WRITEBACK_MIRROR_RESUME,
+};
+
#ifdef __KERNEL__
int msm_fb_get_iommu_domain(struct fb_info *info, int domain);
/* get the framebuffer physical address information */
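
For illustration only (not part of the patch above): a user-space sketch of signalling a
writeback mirroring state change through the new ioctl, assuming fb_fd is an open writeback
framebuffer node.

	unsigned int hint = MDP_WRITEBACK_MIRROR_ON;

	if (ioctl(fb_fd, MSMFB_WRITEBACK_SET_MIRRORING_HINT, &hint) < 0)
		perror("MSMFB_WRITEBACK_SET_MIRRORING_HINT");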
diff --git a/include/linux/msm_tsens.h b/include/linux/msm_tsens.h
index 757f1dc..35eacf1 100644
--- a/include/linux/msm_tsens.h
+++ b/include/linux/msm_tsens.h
@@ -44,9 +44,17 @@
#if defined(CONFIG_THERMAL_TSENS8974)
int __init tsens_tm_init_driver(void);
+int tsens_get_sw_id_mapping(int sensor_num, int *sensor_sw_idx);
+int tsens_get_hw_id_mapping(int sensor_sw_id, int *sensor_hw_num);
#else
static inline int __init tsens_tm_init_driver(void)
{ return -ENXIO; }
+static inline int tsens_get_sw_id_mapping(
+ int sensor_num, int *sensor_sw_idx)
+{ return -ENXIO; }
+static inline int tsens_get_hw_id_mapping(
+ int sensor_sw_id, int *sensor_hw_num)
+{ return -ENXIO; }
#endif
#if defined(CONFIG_THERMAL_TSENS8974) || defined(CONFIG_THERMAL_TSENS8960)
diff --git a/include/linux/qpnp-misc.h b/include/linux/qpnp-misc.h
index b241e5d..ee614a4 100644
--- a/include/linux/qpnp-misc.h
+++ b/include/linux/qpnp-misc.h
@@ -30,7 +30,7 @@
int qpnp_misc_irqs_available(struct device *consumer_dev);
#else
-static int qpnp_misc_irq_available(struct device *consumer_dev)
+static int qpnp_misc_irqs_available(struct device *consumer_dev)
{
return 0;
}
diff --git a/include/linux/qpnp/qpnp-adc.h b/include/linux/qpnp/qpnp-adc.h
index 15e5dc9..13d0b80 100644
--- a/include/linux/qpnp/qpnp-adc.h
+++ b/include/linux/qpnp/qpnp-adc.h
@@ -197,15 +197,18 @@
/**
* enum qpnp_adc_scale_fn_type - Scaling function for pm8941 pre calibrated
* digital data relative to ADC reference.
- * %ADC_SCALE_DEFAULT: Default scaling to convert raw adc code to voltage.
- * %ADC_SCALE_BATT_THERM: Conversion to temperature based on btm parameters.
- * %ADC_SCALE_THERM_100K_PULLUP: Returns temperature in degC.
+ * %SCALE_DEFAULT: Default scaling to convert raw adc code to voltage (uV).
+ * %SCALE_BATT_THERM: Conversion to temperature(decidegC) based on btm
+ * parameters.
+ * %SCALE_THERM_100K_PULLUP: Returns temperature in degC.
* Uses a mapping table with 100K pullup.
- * %ADC_SCALE_PMIC_THERM: Returns result in milli degree's Centigrade.
- * %ADC_SCALE_XOTHERM: Returns XO thermistor voltage in degree's Centigrade.
- * %ADC_SCALE_THERM_150K_PULLUP: Returns temperature in degC.
+ * %SCALE_PMIC_THERM: Returns result in milli degree's Centigrade.
+ * %SCALE_XOTHERM: Returns XO thermistor voltage in degree's Centigrade.
+ * %SCALE_THERM_150K_PULLUP: Returns temperature in degC.
* Uses a mapping table with 150K pullup.
- * %ADC_SCALE_NONE: Do not use this scaling type.
+ * %SCALE_QRD_BATT_THERM: Conversion to temperature(decidegC) based on
+ * btm parameters.
+ * %SCALE_NONE: Do not use this scaling type.
*/
enum qpnp_adc_scale_fn_type {
SCALE_DEFAULT = 0,
@@ -214,6 +217,7 @@
SCALE_PMIC_THERM,
SCALE_XOTHERM,
SCALE_THERM_150K_PULLUP,
+ SCALE_QRD_BATT_THERM,
SCALE_NONE,
};
@@ -1039,7 +1043,7 @@
/**
* qpnp_adc_scale_batt_therm() - Scales the pre-calibrated digital output
* of an ADC to the ADC reference and compensates for the
- * gain and offset. Returns the temperature in degC.
+ * gain and offset. Returns the temperature in decidegC.
* @adc_code: pre-calibrated digital ouput of the ADC.
* @adc_prop: adc properties of the pm8xxx adc such as bit resolution,
* reference voltage.
@@ -1052,6 +1056,21 @@
const struct qpnp_vadc_chan_properties *chan_prop,
struct qpnp_vadc_result *chan_rslt);
/**
+ * qpnp_adc_scale_qrd_batt_therm() - Scales the pre-calibrated digital output
+ * of an ADC to the ADC reference and compensates for the
+ * gain and offset. Returns the temperature in decidegC.
+ * @adc_code:	pre-calibrated digital output of the ADC.
+ * @adc_prop: adc properties of the pm8xxx adc such as bit resolution,
+ * reference voltage.
+ * @chan_prop: individual channel properties to compensate the i/p scaling,
+ * slope and offset.
+ * @chan_rslt: physical result to be stored.
+ */
+int32_t qpnp_adc_scale_qrd_batt_therm(int32_t adc_code,
+ const struct qpnp_adc_properties *adc_prop,
+ const struct qpnp_vadc_chan_properties *chan_prop,
+ struct qpnp_vadc_result *chan_rslt);
+/**
* qpnp_adc_scale_batt_id() - Scales the pre-calibrated digital output
* of an ADC to the ADC reference and compensates for the
* gain and offset.
@@ -1257,6 +1276,11 @@
const struct qpnp_vadc_chan_properties *chan_prop,
struct qpnp_vadc_result *chan_rslt)
{ return -ENXIO; }
+static inline int32_t qpnp_adc_scale_qrd_batt_therm(int32_t adc_code,
+ const struct qpnp_adc_properties *adc_prop,
+ const struct qpnp_vadc_chan_properties *chan_prop,
+			struct qpnp_vadc_result *chan_rslt)
+{ return -ENXIO; }
static inline int32_t qpnp_adc_scale_batt_id(int32_t adc_code,
const struct qpnp_adc_properties *adc_prop,
const struct qpnp_vadc_chan_properties *chan_prop,
diff --git a/include/linux/qseecom.h b/include/linux/qseecom.h
index c399b81..294c881 100644
--- a/include/linux/qseecom.h
+++ b/include/linux/qseecom.h
@@ -138,6 +138,25 @@
enum qseecom_key_management_usage_type usage;
};
+#define SHA256_DIGEST_LENGTH (256/8)
+/*
+ * struct qseecom_save_partition_hash_req
+ * @partition_id - partition id.
+ * @digest[SHA256_DIGEST_LENGTH] - sha256 digest.
+ */
+struct qseecom_save_partition_hash_req {
+ int partition_id; /* in */
+ char digest[SHA256_DIGEST_LENGTH]; /* in */
+};
+
+/*
+ * struct qseecom_is_es_activated_req
+ * @is_activated - 1=true , 0=false
+ */
+struct qseecom_is_es_activated_req {
+ int is_activated; /* out */
+};
+
#define QSEECOM_IOC_MAGIC 0x97
@@ -195,5 +214,10 @@
#define QSEECOM_IOCTL_WIPE_KEY_REQ \
_IOWR(QSEECOM_IOC_MAGIC, 18, struct qseecom_wipe_key_req)
+#define QSEECOM_IOCTL_SAVE_PARTITION_HASH_REQ \
+ _IOWR(QSEECOM_IOC_MAGIC, 19, struct qseecom_save_partition_hash_req)
+
+#define QSEECOM_IOCTL_IS_ES_ACTIVATED_REQ \
+ _IOWR(QSEECOM_IOC_MAGIC, 20, struct qseecom_is_es_activated_req)
#endif /* __QSEECOM_H_ */
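
For illustration only (not part of the patch above): a user-space sketch of querying
enterprise-security activation through the new ioctl, assuming qseecom_fd is an open
handle to the qseecom device node.

	struct qseecom_is_es_activated_req req = { 0 };

	if (ioctl(qseecom_fd, QSEECOM_IOCTL_IS_ES_ACTIVATED_REQ, &req) < 0)
		perror("QSEECOM_IOCTL_IS_ES_ACTIVATED_REQ");
	else
		printf("enterprise security activated: %d\n", req.is_activated);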
diff --git a/include/linux/regulator/qpnp-regulator.h b/include/linux/regulator/qpnp-regulator.h
index ec580ab..c7afeb5 100644
--- a/include/linux/regulator/qpnp-regulator.h
+++ b/include/linux/regulator/qpnp-regulator.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -93,12 +93,24 @@
* @system_load: Load in uA present on regulator that is not captured
* by any consumer request
* @enable_time: Time in us to delay after enabling the regulator
- * @ocp_enable: 1 = Enable over current protection (OCP) for voltage
- * switch type regulators so that they latch off
- * automatically when over current is detected
+ * @ocp_enable: 1 = Allow over current protection (OCP) to be
+ * enabled for voltage switch type regulators so
+ * that they latch off automatically when over
+ * current is detected. OCP is enabled when in HPM
+ * or auto mode.
* 0 = Disable OCP
* QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
* OCP state
+ * @ocp_irq: IRQ number of the voltage switch OCP IRQ. If
+ * specified the voltage switch will be toggled off
+ * and back on when OCP triggers in order to handle
+ * high in-rush current.
+ * @ocp_max_retries: Maximum number of times to try toggling a voltage
+ * switch off and back on as a result of
+ * consecutive over current events.
+ * @ocp_retry_delay_ms: Time to delay in milliseconds between each
+ * voltage switch toggle after an over current
+ * event takes place.
* @boost_current_limit: This parameter sets the current limit of boost type
* regulators. Its value should be one of
* QPNP_BOOST_CURRENT_LIMIT_*. If its value is
@@ -117,9 +129,6 @@
* its value is QPNP_VS_SOFT_START_STR_HW_DEFAULT,
* then the soft start strength will be left at its
* default hardware value.
- * @ocp_enable_time: Time to delay in us between enabling a switch and
- * subsequently enabling over current protection
- * (OCP) for the switch
* @auto_mode_enable: 1 = Enable automatic hardware selection of regulator
* mode (HPM vs LPM). Auto mode is not available
* on boost type regulators
@@ -132,6 +141,18 @@
* 0 = Do not enable bypass mode
* QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
* bypass mode state
+ * @hpm_enable: 1 = Enable high power mode (HPM), also referred to
+ * as NPM. HPM consumes more ground current than
+ * LPM, but it can source significantly higher load
+ * current. HPM is not available on boost type
+ * regulators. For voltage switch type regulators,
+ * HPM implies that over current protection and
+ * soft start are active all the time. This
+ * configuration can be overwritten by changing the
+ * regulator's mode dynamically.
+ * 0 = Do not enable HPM
+ * QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
+ * HPM state
* @base_addr: SMPI base address for the regulator peripheral
*/
struct qpnp_regulator_platform_data {
@@ -142,12 +163,15 @@
int system_load;
int enable_time;
int ocp_enable;
+ int ocp_irq;
+ int ocp_max_retries;
+ int ocp_retry_delay_ms;
enum qpnp_boost_current_limit boost_current_limit;
int soft_start_enable;
enum qpnp_vs_soft_start_str vs_soft_start_strength;
- int ocp_enable_time;
int auto_mode_enable;
int bypass_mode_enable;
+ int hpm_enable;
u16 base_addr;
};
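Illustrative sketch (not part of the patch): a board file populating the new OCP retry and HPM fields might look as follows; most platforms fill these from devicetree instead, and every value shown is a placeholder.

static struct qpnp_regulator_platform_data example_vs_pdata = {
	.ocp_enable		= 1,
	.ocp_irq		= 0,	/* taken from platform resources */
	.ocp_max_retries	= 4,	/* give up after 4 consecutive OCP events */
	.ocp_retry_delay_ms	= 500,	/* wait 500 ms between toggles */
	.hpm_enable		= QPNP_REGULATOR_USE_HW_DEFAULT,
	.base_addr		= 0x1800,	/* placeholder SMPI address */
};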
diff --git a/include/linux/usb/msm_hsusb.h b/include/linux/usb/msm_hsusb.h
index 5e73bd9..4ae3b79 100644
--- a/include/linux/usb/msm_hsusb.h
+++ b/include/linux/usb/msm_hsusb.h
@@ -222,6 +222,7 @@
bool enable_lpm_on_dev_suspend;
bool core_clk_always_on_workaround;
bool delay_lpm_on_disconnect;
+ bool delay_lpm_hndshk_on_disconnect;
bool dp_manual_pullup;
bool enable_sec_phy;
struct msm_bus_scale_pdata *bus_scale_table;
@@ -422,6 +423,7 @@
u32 standalone_latency;
bool pool_64_bit_align;
bool enable_hbm;
+ bool disable_park_mode;
};
struct msm_usb_host_platform_data {
@@ -461,10 +463,13 @@
int (*notify)(void *, int, void (*)(int online));
void *ctxt;
};
-
+#ifdef CONFIG_USB_BAM
+bool msm_bam_lpm_ok(void);
+#else
+static inline bool msm_bam_lpm_ok(void) { return true; }
+#endif
#ifdef CONFIG_USB_CI13XXX_MSM
void msm_hw_bam_disable(bool bam_disable);
-
#else
static inline void msm_hw_bam_disable(bool bam_disable)
{
diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
index 74b09cb..4bce95c 100644
--- a/include/linux/videodev2.h
+++ b/include/linux/videodev2.h
@@ -700,6 +700,7 @@
#define V4L2_QCOM_BUF_FLAG_CODECCONFIG 0x4000
#define V4L2_QCOM_BUF_FLAG_EOSEQ 0x8000
#define V4L2_QCOM_BUF_TIMESTAMP_INVALID 0x10000
+#define V4L2_QCOM_BUF_FLAG_DECODEONLY 0x40000
/*
* O V E R L A Y P R E V I E W
diff --git a/include/media/msm_cam_sensor.h b/include/media/msm_cam_sensor.h
index bce6af3..992649f 100644
--- a/include/media/msm_cam_sensor.h
+++ b/include/media/msm_cam_sensor.h
@@ -108,10 +108,10 @@
SUB_MODULE_EEPROM,
SUB_MODULE_LED_FLASH,
SUB_MODULE_STROBE_FLASH,
- SUB_MODULE_CSIPHY,
- SUB_MODULE_CSIPHY_3D,
SUB_MODULE_CSID,
SUB_MODULE_CSID_3D,
+ SUB_MODULE_CSIPHY,
+ SUB_MODULE_CSIPHY_3D,
SUB_MODULE_MAX,
};
@@ -207,6 +207,7 @@
uint8_t settle_cnt;
uint16_t lane_mask;
uint8_t combo_mode;
+ uint8_t csid_core;
};
struct msm_camera_csi2_params {
diff --git a/include/media/msm_camera.h b/include/media/msm_camera.h
index afd5a42..b4b3bfc 100644
--- a/include/media/msm_camera.h
+++ b/include/media/msm_camera.h
@@ -1396,6 +1396,7 @@
uint8_t settle_cnt;
uint16_t lane_mask;
uint8_t combo_mode;
+ uint8_t csid_core;
};
struct msm_camera_csi2_params {
diff --git a/include/media/msm_gemini.h b/include/media/msm_gemini.h
index 0167335..2209758 100644
--- a/include/media/msm_gemini.h
+++ b/include/media/msm_gemini.h
@@ -51,10 +51,19 @@
#define MSM_GMN_IOCTL_TEST_DUMP_REGION \
_IOW(MSM_GMN_IOCTL_MAGIC, 15, unsigned long)
+#define MSM_GMN_IOCTL_SET_MODE \
+ _IOW(MSM_GMN_IOCTL_MAGIC, 16, enum msm_gmn_out_mode)
+
#define MSM_GEMINI_MODE_REALTIME_ENCODE 0
#define MSM_GEMINI_MODE_OFFLINE_ENCODE 1
#define MSM_GEMINI_MODE_REALTIME_ROTATION 2
#define MSM_GEMINI_MODE_OFFLINE_ROTATION 3
+
+enum msm_gmn_out_mode {
+ MSM_GMN_OUTMODE_FRAGMENTED,
+ MSM_GMN_OUTMODE_SINGLE
+};
+
struct msm_gemini_ctrl_cmd {
uint32_t type;
uint32_t len;
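Illustrative userspace sketch (not part of the patch): selecting the new single-buffer output mode via MSM_GMN_IOCTL_SET_MODE; the device node path passed in is an assumption of the caller.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <media/msm_gemini.h>

int example_set_single_output(const char *devnode)
{
	enum msm_gmn_out_mode mode = MSM_GMN_OUTMODE_SINGLE;
	int fd = open(devnode, O_RDWR);
	int rc;

	if (fd < 0)
		return -1;
	rc = ioctl(fd, MSM_GMN_IOCTL_SET_MODE, &mode);
	close(fd);
	return rc;
}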
diff --git a/include/media/msmb_isp.h b/include/media/msmb_isp.h
index 6fb1a65..3775ddd 100644
--- a/include/media/msmb_isp.h
+++ b/include/media/msmb_isp.h
@@ -64,6 +64,7 @@
EVERY_8FRAME,
EVERY_16FRAME,
EVERY_32FRAME,
+ SKIP_ALL,
MAX_SKIP,
};
@@ -168,12 +169,6 @@
enum msm_vfe_frame_skip_pattern skip_pattern;
};
-enum msm_vfe_stats_pipeline_policy {
- STATS_COMP_ALL,
- STATS_COMP_NONE,
- MAX_STATS_POLICY,
-};
-
enum msm_isp_stats_type {
MSM_ISP_STATS_AEC, /* legacy based AEC */
MSM_ISP_STATS_AF, /* legacy based AF */
@@ -193,11 +188,11 @@
uint32_t session_id;
uint32_t stream_id;
enum msm_isp_stats_type stats_type;
+ uint32_t composite_flag;
uint32_t framedrop_pattern;
uint32_t irq_subsample_pattern;
uint32_t buffer_offset;
uint32_t stream_handle;
- uint8_t comp_flag;
};
struct msm_vfe_stats_stream_release_cmd {
@@ -209,12 +204,6 @@
uint8_t enable;
};
-struct msm_vfe_stats_comp_policy_cfg {
- enum msm_vfe_stats_pipeline_policy stats_pipeline_policy;
- uint32_t comp_framedrop_pattern;
- uint32_t comp_irq_subsample_pattern;
-};
-
enum msm_vfe_reg_cfg_type {
VFE_WRITE,
VFE_WRITE_MB,
@@ -319,7 +308,7 @@
#define ISP_EVENT_EOF (ISP_EVENT_BASE + ISP_EOF)
#define ISP_EVENT_BUF_DIVERT (ISP_BUF_EVENT_BASE)
#define ISP_EVENT_STATS_NOTIFY (ISP_STATS_EVENT_BASE)
-
+#define ISP_EVENT_COMP_STATS_NOTIFY (ISP_EVENT_STATS_NOTIFY + MSM_ISP_STATS_MAX)
/* The msm_v4l2_event_data structure should match the
* v4l2_event.u.data field.
* should not exceed 64 bytes */
@@ -412,10 +401,6 @@
_IOWR('V', BASE_VIDIOC_PRIVATE+11, \
struct msm_vfe_stats_stream_release_cmd)
-#define VIDIOC_MSM_ISP_CFG_STATS_COMP_POLICY \
- _IOWR('V', BASE_VIDIOC_PRIVATE+12, \
- struct msm_vfe_stats_comp_policy_cfg)
-
#define VIDIOC_MSM_ISP_UPDATE_STREAM \
_IOWR('V', BASE_VIDIOC_PRIVATE+13, struct msm_vfe_axi_stream_update_cmd)
diff --git a/include/media/radio-iris-commands.h b/include/media/radio-iris-commands.h
index d41baa9..0b3331a 100644
--- a/include/media/radio-iris-commands.h
+++ b/include/media/radio-iris-commands.h
@@ -71,6 +71,7 @@
V4L2_CID_PRIVATE_CF0TH12,
V4L2_CID_PRIVATE_SINRFIRSTSTAGE,
V4L2_CID_PRIVATE_RMSSIFIRSTSTAGE,
+ V4L2_CID_PRIVATE_RXREPEATCOUNT,
/*using private CIDs under userclass*/
V4L2_CID_PRIVATE_IRIS_READ_DEFAULT = 0x00980928,
diff --git a/include/media/radio-iris.h b/include/media/radio-iris.h
index 53602c5..d6151c0 100644
--- a/include/media/radio-iris.h
+++ b/include/media/radio-iris.h
@@ -73,6 +73,9 @@
#define CF0TH12_BYTE4_OFFSET 11
#define MAX_SINR_FIRSTSTAGE 127
#define MAX_RMSSI_FIRSTSTAGE 127
+#define RDS_PS0_XFR_MODE 0x01
+#define RDS_PS0_LEN 6
+#define RX_REPEATE_BYTE_OFFSET 5
/* HCI timeouts */
#define RADIO_HCI_TIMEOUT (10000) /* 10 seconds */
diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
index 42c8823..35c57c0 100644
--- a/include/net/bluetooth/hci.h
+++ b/include/net/bluetooth/hci.h
@@ -1,6 +1,6 @@
/*
BlueZ - Bluetooth protocol stack for Linux
- Copyright (c) 2000-2001, 2010-2012 The Linux Foundation. All rights reserved.
+ Copyright (c) 2000-2001, 2010-2013 The Linux Foundation. All rights reserved.
Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
@@ -1145,6 +1145,11 @@
__le16 opcode;
} __packed;
+#define HCI_EV_HARDWARE_ERROR 0x10
+struct hci_ev_hardware_error {
+ __u8 hw_err_code;
+} __packed;
+
#define HCI_EV_ROLE_CHANGE 0x12
struct hci_ev_role_change {
__u8 status;
diff --git a/include/sound/apr_audio-v2.h b/include/sound/apr_audio-v2.h
index fa4dedc..01e30d0 100644
--- a/include/sound/apr_audio-v2.h
+++ b/include/sound/apr_audio-v2.h
@@ -634,8 +634,6 @@
* when a new port is added here. */
#define PRIMARY_I2S_RX 0 /* index = 0 */
#define PRIMARY_I2S_TX 1 /* index = 1 */
-#define PCM_RX 2 /* index = 2 */
-#define PCM_TX 3 /* index = 3 */
#define SECONDARY_I2S_RX 4 /* index = 4 */
#define SECONDARY_I2S_TX 5 /* index = 5 */
#define MI2S_RX 6 /* index = 6 */
@@ -784,26 +782,6 @@
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_RX 0x4008
/* SLIMbus Tx port on channel 4. */
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_TX 0x4009
-/* SLIMbus Rx port on channel 0. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_0_RX 0x4000
-/* SLIMbus Tx port on channel 0. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_0_TX 0x4001
-/* SLIMbus Rx port on channel 1. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_1_RX 0x4002
-/* SLIMbus Tx port on channel 1. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_1_TX 0x4003
-/* SLIMbus Rx port on channel 2. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_2_RX 0x4004
-/* SLIMbus Tx port on channel 2. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_2_TX 0x4005
-/* SLIMbus Rx port on channel 3. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_3_RX 0x4006
-/* SLIMbus Tx port on channel 3. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_3_TX 0x4007
-/* SLIMbus Rx port on channel 4. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_RX 0x4008
-/* SLIMbus Tx port on channel 4. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_TX 0x4009
/* SLIMbus Rx port on channel 5. */
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_5_RX 0x400a
/* SLIMbus Tx port on channel 5. */
@@ -6538,6 +6516,11 @@
#define AANC_HW_BLOCK_VERSION_1 (1)
#define AANC_HW_BLOCK_VERSION_2 (2)
+/*Clip bank selection*/
+#define AFE_API_VERSION_CLIP_BANK_SEL_CFG 0x1
+#define AFE_CLIP_MAX_BANKS 4
+#define AFE_PARAM_ID_CLIP_BANK_SEL_CFG 0x00010242
+
struct afe_param_aanc_port_cfg {
/* Minor version used for tracking the version of the module's
* source port configuration.
@@ -6572,6 +6555,18 @@
uint32_t aanc_hw_version;
} __packed;
+struct afe_param_id_clip_bank_sel {
+ /* Minor version used for tracking the version of the module's
+ * hw version
+ */
+ uint32_t minor_version;
+
+ /* Number of banks to be read */
+ uint32_t num_banks;
+
+ uint32_t bank_map[AFE_CLIP_MAX_BANKS];
+} __packed;
+
/* ERROR CODES */
/* Success. The operation completed with no errors. */
#define ADSP_EOK 0x00000000
@@ -6804,6 +6799,8 @@
AFE_SLIMBUS_SLAVE_CONFIG,
AFE_CDC_REGISTERS_CONFIG,
AFE_AANC_VERSION,
+ AFE_CDC_CLIP_REGISTERS_CONFIG,
+ AFE_CLIP_BANK_SEL,
AFE_MAX_CONFIG_TYPES,
};
@@ -6925,4 +6922,16 @@
/* Dolby DAP topology */
#define DOLBY_ADM_COPP_TOPOLOGY_ID 0x0001033B
+struct afe_svc_cmd_set_clip_bank_selection {
+ struct apr_hdr hdr;
+ struct afe_svc_cmd_set_param param;
+ struct afe_port_param_data_v2 pdata;
+ struct afe_param_id_clip_bank_sel bank_sel;
+} __packed;
+
+/* Ultrasound supported formats */
+#define US_POINT_EPOS_FORMAT 0x00012310
+#define US_RAW_FORMAT 0x0001127C
+#define US_PROX_FORMAT 0x0001272B
+
#endif /*_APR_AUDIO_V2_H_ */
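Illustrative sketch (not part of the patch): filling the payload of the new clip bank selection parameter; the APR header, set-param and port-param wrappers are left to the AFE driver, and the bank mapping shown is a placeholder.

static void example_fill_clip_bank_sel(struct afe_param_id_clip_bank_sel *cfg)
{
	int i;

	cfg->minor_version = AFE_API_VERSION_CLIP_BANK_SEL_CFG;
	cfg->num_banks = AFE_CLIP_MAX_BANKS;
	for (i = 0; i < AFE_CLIP_MAX_BANKS; i++)
		cfg->bank_map[i] = i;	/* placeholder bank mapping */
}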
diff --git a/include/sound/apr_audio.h b/include/sound/apr_audio.h
index 40b0e1e..a7933a4 100644
--- a/include/sound/apr_audio.h
+++ b/include/sound/apr_audio.h
@@ -1262,7 +1262,7 @@
#define MPEG4_MULTI_AAC 0x00010D86
#define US_POINT_EPOS_FORMAT 0x00012310
#define US_RAW_FORMAT 0x0001127C
-#define US_PROX_FORMAT 0x00012721
+#define US_PROX_FORMAT 0x0001272B
#define MULTI_CHANNEL_PCM 0x00010C66
#define ASM_ENCDEC_SBCRATE 0x00010C13
@@ -1453,6 +1453,23 @@
u32 read_format;
} __attribute__((packed));
+#define ASM_STREAM_CMD_OPEN_LOOPBACK 0x00010D6E
+struct asm_stream_cmd_open_loopback {
+ struct apr_hdr hdr;
+ u32 mode_flags;
+/* Mode flags.
+ * Bit 0-31: reserved; client should set these bits to 0
+ */
+ u16 src_endpointype;
+ /* Endpoint type. 0 = Tx Matrix */
+ u16 sink_endpointype;
+ /* Endpoint type. 0 = Rx Matrix */
+ u32 postprocopo_id;
+/* Postprocessor topology ID. Specifies the topology of
+ * postprocessing algorithms.
+ */
+} __packed;
+
#define ADM_CMD_CONNECT_AFE_PORT 0x00010320
#define ADM_CMD_DISCONNECT_AFE_PORT 0x00010321
@@ -1909,5 +1926,4 @@
int srs_ss3d_open(int port_id, int srs_tech_id, void *srs_params);
/* SRS Studio Sound 3D end */
-
#endif /*_APR_AUDIO_H_*/
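Illustrative sketch (not part of the patch): populating the new loopback open payload; APR header routing fields are omitted and the postprocessing topology id shown is a placeholder.

static void example_fill_open_loopback(struct asm_stream_cmd_open_loopback *open)
{
	open->hdr.opcode = ASM_STREAM_CMD_OPEN_LOOPBACK;
	open->mode_flags = 0;		/* bits 0-31 reserved, must be 0 */
	open->src_endpointype = 0;	/* 0 = Tx matrix */
	open->sink_endpointype = 0;	/* 0 = Rx matrix */
	open->postprocopo_id = 0;	/* placeholder postproc topology id */
}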
diff --git a/include/sound/q6afe-v2.h b/include/sound/q6afe-v2.h
index 2a740f4..1389b2a 100644
--- a/include/sound/q6afe-v2.h
+++ b/include/sound/q6afe-v2.h
@@ -41,8 +41,8 @@
enum {
IDX_PRIMARY_I2S_RX = 0,
IDX_PRIMARY_I2S_TX = 1,
- IDX_PCM_RX = 2,
- IDX_PCM_TX = 3,
+ IDX_AFE_PORT_ID_PRIMARY_PCM_RX = 2,
+ IDX_AFE_PORT_ID_PRIMARY_PCM_TX = 3,
IDX_SECONDARY_I2S_RX = 4,
IDX_SECONDARY_I2S_TX = 5,
IDX_MI2S_RX = 6,
@@ -166,7 +166,7 @@
int afe_port_start(u16 port_id, union afe_port_config *afe_config,
u32 rate);
int afe_spk_prot_feed_back_cfg(int src_port, int dst_port,
- int l_ch, int r_ch);
+ int l_ch, int r_ch, u32 enable);
int afe_spk_prot_get_calib_data(struct afe_spkr_prot_get_vi_calib *calib);
int afe_port_stop_nowait(int port_id);
int afe_apply_gain(u16 port_id, u16 gain);
diff --git a/include/sound/q6asm.h b/include/sound/q6asm.h
index 406407d..41f875b 100644
--- a/include/sound/q6asm.h
+++ b/include/sound/q6asm.h
@@ -207,6 +207,8 @@
uint32_t rd_format,
uint32_t wr_format);
+int q6asm_open_loopack(struct audio_client *ac);
+
int q6asm_write(struct audio_client *ac, uint32_t len, uint32_t msw_ts,
uint32_t lsw_ts, uint32_t flags);
int q6asm_write_nolock(struct audio_client *ac, uint32_t len, uint32_t msw_ts,
diff --git a/include/trace/events/cpufreq_interactive.h b/include/trace/events/cpufreq_interactive.h
index ea83664..951e6ca 100644
--- a/include/trace/events/cpufreq_interactive.h
+++ b/include/trace/events/cpufreq_interactive.h
@@ -28,13 +28,7 @@
__entry->actualfreq)
);
-DEFINE_EVENT(set, cpufreq_interactive_up,
- TP_PROTO(u32 cpu_id, unsigned long targfreq,
- unsigned long actualfreq),
- TP_ARGS(cpu_id, targfreq, actualfreq)
-);
-
-DEFINE_EVENT(set, cpufreq_interactive_down,
+DEFINE_EVENT(set, cpufreq_interactive_setspeed,
TP_PROTO(u32 cpu_id, unsigned long targfreq,
unsigned long actualfreq),
TP_ARGS(cpu_id, targfreq, actualfreq)
@@ -42,44 +36,50 @@
DECLARE_EVENT_CLASS(loadeval,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq),
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg),
TP_STRUCT__entry(
__field(unsigned long, cpu_id )
__field(unsigned long, load )
- __field(unsigned long, curfreq )
- __field(unsigned long, targfreq )
+ __field(unsigned long, curtarg )
+ __field(unsigned long, curactual )
+ __field(unsigned long, newtarg )
),
TP_fast_assign(
__entry->cpu_id = cpu_id;
__entry->load = load;
- __entry->curfreq = curfreq;
- __entry->targfreq = targfreq;
+ __entry->curtarg = curtarg;
+ __entry->curactual = curactual;
+ __entry->newtarg = newtarg;
),
- TP_printk("cpu=%lu load=%lu cur=%lu targ=%lu",
- __entry->cpu_id, __entry->load, __entry->curfreq,
- __entry->targfreq)
+ TP_printk("cpu=%lu load=%lu cur=%lu actual=%lu targ=%lu",
+ __entry->cpu_id, __entry->load, __entry->curtarg,
+ __entry->curactual, __entry->newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_target,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_already,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_notyet,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
TRACE_EVENT(cpufreq_interactive_boost,
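Illustrative sketch (not part of the patch): with the reworked loadeval class the governor now reports both the current target and the actual frequency; a call site might look roughly like this, with all variable names hypothetical.

static void example_report_loadeval(unsigned long cpu, unsigned long cpu_load,
		unsigned long cur_target, unsigned long cur_actual,
		unsigned long new_target)
{
	if (new_target == cur_target)
		trace_cpufreq_interactive_already(cpu, cpu_load, cur_target,
						  cur_actual, new_target);
	else
		trace_cpufreq_interactive_target(cpu, cpu_load, cur_target,
						 cur_actual, new_target);
}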
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index a1da44f..909e1b9 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -495,6 +495,110 @@
TP_ARGS(mode)
);
+DECLARE_EVENT_CLASS(ion_alloc_pages,
+
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order),
+
+ TP_STRUCT__entry(
+ __field(gfp_t, gfp_flags)
+ __field(unsigned int, order)
+ ),
+
+ TP_fast_assign(
+ __entry->gfp_flags = gfp_flags;
+ __entry->order = order;
+ ),
+
+ TP_printk("gfp_flags=%s order=%d",
+ show_gfp_flags(__entry->gfp_flags),
+ __entry->order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_start,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_end,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_fail,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_start,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_end,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_fail,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+
+ );
+
+DECLARE_EVENT_CLASS(smmu_map,
+
+ TP_PROTO(unsigned int va,
+ phys_addr_t pa,
+ unsigned int chunk_size,
+ size_t len),
+
+ TP_ARGS(va, pa, chunk_size, len),
+
+ TP_STRUCT__entry(
+ __field(unsigned int, va)
+ __field(phys_addr_t, pa)
+ __field(unsigned int, chunk_size)
+ __field(size_t, len)
+ ),
+
+ TP_fast_assign(
+ __entry->va = va;
+ __entry->pa = pa;
+ __entry->chunk_size = chunk_size;
+ __entry->len = len;
+ ),
+
+ TP_printk("v_addr=%p p_addr=%pa chunk_size=0x%x len=%zu",
+ (void *)__entry->va,
+ &__entry->pa,
+ __entry->chunk_size,
+ __entry->len)
+ );
+
+DEFINE_EVENT(smmu_map, iommu_map_range,
+ TP_PROTO(unsigned int va,
+ phys_addr_t pa,
+ unsigned int chunk_size,
+ size_t len),
+
+ TP_ARGS(va, pa, chunk_size, len)
+ );
+
#endif /* _TRACE_KMEM_H */
/* This part must be outside protection */
diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c
index e6a2e35..edd656c 100644
--- a/kernel/hrtimer.c
+++ b/kernel/hrtimer.c
@@ -224,13 +224,10 @@
raw_spin_unlock(&base->cpu_base->lock);
raw_spin_lock(&new_base->cpu_base->lock);
- this_cpu = smp_processor_id();
-
- if (cpu != this_cpu && (hrtimer_check_target(timer, new_base)
- || !cpu_online(cpu))) {
+ if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) {
+ cpu = this_cpu;
raw_spin_unlock(&new_base->cpu_base->lock);
raw_spin_lock(&base->cpu_base->lock);
- cpu = smp_processor_id();
timer->base = base;
goto again;
}
diff --git a/kernel/pid.c b/kernel/pid.c
index 9f08dfa..7acf590 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -430,6 +430,7 @@
{
return find_task_by_pid_ns(vnr, current->nsproxy->pid_ns);
}
+EXPORT_SYMBOL_GPL(find_task_by_vpid);
struct pid *get_task_pid(struct task_struct *task, enum pid_type type)
{
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a2bad88..862e172 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4757,6 +4757,7 @@
delayacct_blkio_end();
return ret;
}
+EXPORT_SYMBOL(io_schedule_timeout);
/**
* sys_sched_get_priority_max - return maximum RT priority.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b175073..7e31770 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2126,11 +2126,11 @@
static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
{
struct sched_entity *se = &p->se;
- struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ struct cfs_rq *cfs_rq = &rq->cfs;
WARN_ON(task_rq(p) != rq);
- if (cfs_rq->nr_running > 1) {
+ if (cfs_rq->h_nr_running > 1) {
u64 slice = sched_slice(cfs_rq, se);
u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
s64 delta = slice - ran;
@@ -2154,8 +2154,7 @@
/*
* called from enqueue/dequeue and updates the hrtick when the
- * current task is from our class and nr_running is low enough
- * to matter.
+ * current task is from our class.
*/
static void hrtick_update(struct rq *rq)
{
@@ -2164,8 +2163,7 @@
if (!hrtick_enabled(rq) || curr->sched_class != &fair_sched_class)
return;
- if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency)
- hrtick_start_fair(rq, curr);
+ hrtick_start_fair(rq, curr);
}
#else /* !CONFIG_SCHED_HRTICK */
static inline void
@@ -4626,7 +4624,7 @@
raw_spin_lock(&this_rq->lock);
- if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
+ if (!pulled_task || time_after(jiffies, this_rq->next_balance)) {
/*
* We are going idle. next_balance may be set based on
* a busy processor. So reset next_balance.
diff --git a/lib/memory_alloc.c b/lib/memory_alloc.c
index cc7424f..03f1944 100644
--- a/lib/memory_alloc.c
+++ b/lib/memory_alloc.c
@@ -67,7 +67,7 @@
struct rb_node *r = p;
struct alloc *node = rb_entry(r, struct alloc, rb_node);
- seq_printf(m, "0x%lx 0x%p %ld %u %pS\n", node->paddr, node->vaddr,
+ seq_printf(m, "0x%pa 0x%pa %ld %u %pS\n", &node->paddr, &node->vaddr,
node->len, node->mpool->id, node->caller);
return 0;
}
@@ -84,7 +84,7 @@
return seq_open(file, &mempool_op);
}
-static struct alloc *find_alloc(void *addr)
+static struct alloc *find_alloc(phys_addr_t addr)
{
struct rb_root *root = &alloc_root;
struct rb_node *p = root->rb_node;
@@ -126,7 +126,7 @@
else if (node->vaddr > tmp->vaddr)
p = &(*p)->rb_right;
else {
- WARN(1, "memory at %p already allocated", tmp->vaddr);
+ WARN(1, "memory at %pa already allocated", &tmp->vaddr);
mutex_unlock(&alloc_mutex);
return -EINVAL;
}
@@ -149,7 +149,7 @@
return 0;
}
-static struct gen_pool *initialize_gpool(unsigned long start,
+static struct gen_pool *initialize_gpool(phys_addr_t start,
unsigned long size)
{
struct gen_pool *gpool;
@@ -194,7 +194,12 @@
if (!vaddr)
goto out_kfree;
- node->vaddr = vaddr;
+ /*
+ * Just cast to an unsigned long to avoid warnings about casting from a
+ * pointer to an integer of different size. The pointer is only 32-bits
+ * so we lose no data.
+ */
+ node->vaddr = (unsigned long)vaddr;
node->paddr = paddr;
node->len = aligned_size;
node->mpool = mpool;
@@ -216,13 +221,19 @@
static void __free(void *vaddr, bool unmap)
{
- struct alloc *node = find_alloc(vaddr);
+ struct alloc *node = find_alloc((unsigned long)vaddr);
if (!node)
return;
if (unmap)
- iounmap(node->vaddr);
+ /*
+ * We need the double cast because otherwise gcc complains about
+ * cast to pointer of different size. This is technically a down
+ * cast but if unmap is being called, this had better be an
+ * actual 32-bit pointer anyway.
+ */
+ iounmap((void *)(unsigned long)node->vaddr);
gen_pool_free(node->mpool->gpool, node->paddr, node->len);
node->mpool->free += node->len;
@@ -248,7 +259,7 @@
return mpool;
}
-struct mem_pool *initialize_memory_pool(unsigned long start,
+struct mem_pool *initialize_memory_pool(phys_addr_t start,
unsigned long size, int mem_type)
{
int id = mem_type;
@@ -264,8 +275,8 @@
mpools[id].id = id;
mutex_unlock(&mpools[id].pool_mutex);
- pr_info("memory pool %d (start %lx size %lx) initialized\n",
- id, start, size);
+ pr_info("memory pool %d (start %pa size %lx) initialized\n",
+ id, &start, size);
return &mpools[id];
}
EXPORT_SYMBOL_GPL(initialize_memory_pool);
@@ -285,10 +296,10 @@
}
EXPORT_SYMBOL_GPL(allocate_contiguous_memory);
-unsigned long _allocate_contiguous_memory_nomap(unsigned long size,
+phys_addr_t _allocate_contiguous_memory_nomap(unsigned long size,
int mem_type, unsigned long align, void *caller)
{
- unsigned long paddr;
+ phys_addr_t paddr;
unsigned long aligned_size;
struct alloc *node;
@@ -317,7 +328,7 @@
* are disjoint, so there won't be any chance of
* a duplicate node->vaddr value.
*/
- node->vaddr = (void *)paddr;
+ node->vaddr = paddr;
node->len = aligned_size;
node->mpool = mpool;
node->caller = caller;
@@ -334,7 +345,7 @@
}
EXPORT_SYMBOL_GPL(_allocate_contiguous_memory_nomap);
-unsigned long allocate_contiguous_memory_nomap(unsigned long size,
+phys_addr_t allocate_contiguous_memory_nomap(unsigned long size,
int mem_type, unsigned long align)
{
return _allocate_contiguous_memory_nomap(size, mem_type, align,
@@ -351,18 +362,18 @@
}
EXPORT_SYMBOL_GPL(free_contiguous_memory);
-void free_contiguous_memory_by_paddr(unsigned long paddr)
+void free_contiguous_memory_by_paddr(phys_addr_t paddr)
{
if (!paddr)
return;
- __free((void *)paddr, false);
+ __free((void *)(unsigned long)paddr, false);
return;
}
EXPORT_SYMBOL_GPL(free_contiguous_memory_by_paddr);
-unsigned long memory_pool_node_paddr(void *vaddr)
+phys_addr_t memory_pool_node_paddr(void *vaddr)
{
- struct alloc *node = find_alloc(vaddr);
+ struct alloc *node = find_alloc((unsigned long)vaddr);
if (!node)
return -EINVAL;
@@ -373,7 +384,7 @@
unsigned long memory_pool_node_len(void *vaddr)
{
- struct alloc *node = find_alloc(vaddr);
+ struct alloc *node = find_alloc((unsigned long)vaddr);
if (!node)
return -EINVAL;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 69b9521..798c750 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -785,6 +785,10 @@
set_pageblock_migratetype(page, MIGRATE_CMA);
__free_pages(page, pageblock_order);
totalram_pages += pageblock_nr_pages;
+#ifdef CONFIG_HIGHMEM
+ if (PageHighMem(page))
+ totalhigh_pages += pageblock_nr_pages;
+#endif
}
#endif
@@ -5199,10 +5203,6 @@
zone->watermark[WMARK_LOW] = min_wmark_pages(zone) + (tmp >> 2);
zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
- zone->watermark[WMARK_MIN] += cma_wmark_pages(zone);
- zone->watermark[WMARK_LOW] += cma_wmark_pages(zone);
- zone->watermark[WMARK_HIGH] += cma_wmark_pages(zone);
-
setup_zone_migrate_reserve(zone);
spin_unlock_irqrestore(&zone->lock, flags);
}
@@ -5820,54 +5820,6 @@
return ret > 0 ? 0 : ret;
}
-/*
- * Update zone's cma pages counter used for watermark level calculation.
- */
-static inline void __update_cma_watermarks(struct zone *zone, int count)
-{
- unsigned long flags;
- spin_lock_irqsave(&zone->lock, flags);
- zone->min_cma_pages += count;
- spin_unlock_irqrestore(&zone->lock, flags);
- setup_per_zone_wmarks();
-}
-
-/*
- * Trigger memory pressure bump to reclaim some pages in order to be able to
- * allocate 'count' pages in single page units. Does similar work as
- *__alloc_pages_slowpath() function.
- */
-static int __reclaim_pages(struct zone *zone, gfp_t gfp_mask, int count)
-{
- enum zone_type high_zoneidx = gfp_zone(gfp_mask);
- struct zonelist *zonelist = node_zonelist(0, gfp_mask);
- int did_some_progress = 0;
- int order = 1;
-
- /*
- * Increase level of watermarks to force kswapd do his job
- * to stabilise at new watermark level.
- */
- __update_cma_watermarks(zone, count);
-
- /* Obey watermarks as if the page was being allocated */
- while (!zone_watermark_ok(zone, 0, low_wmark_pages(zone), 0, 0)) {
- wake_all_kswapd(order, zonelist, high_zoneidx, zone_idx(zone));
-
- did_some_progress = __perform_reclaim(gfp_mask, order, zonelist,
- NULL);
- if (!did_some_progress) {
- /* Exhausted what can be done so it's blamo time */
- out_of_memory(zonelist, gfp_mask, order, NULL, false);
- }
- }
-
- /* Restore original watermark levels. */
- __update_cma_watermarks(zone, -count);
-
- return count;
-}
-
/**
* alloc_contig_range() -- tries to allocate given range of pages
* @start: start PFN to allocate
@@ -5968,11 +5920,6 @@
goto done;
}
- /*
- * Reclaim enough pages to make sure that contiguous allocation
- * will not starve the system.
- */
- __reclaim_pages(zone, GFP_HIGHUSER_MOVABLE, end-start);
/* Grab isolated pages from freelists. */
outer_end = isolate_freepages_range(outer_start, end);
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index d243646..2dba071 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -2426,6 +2426,21 @@
}
}
+static inline void hci_hardware_error_evt(struct hci_dev *hdev,
+ struct sk_buff *skb)
+{
+ struct hci_ev_hardware_error *ev = (void *) skb->data;
+
+ BT_ERR("hdev=%p, hw_err_code = %u", hdev, ev->hw_err_code);
+
+ if (hdev && hdev->dev_type == HCI_BREDR) {
+ hci_dev_lock_bh(hdev);
+ mgmt_powered(hdev->id, 1);
+ hci_dev_unlock_bh(hdev);
+ }
+
+}
+
static inline void hci_role_change_evt(struct hci_dev *hdev, struct sk_buff *skb)
{
struct hci_ev_role_change *ev = (void *) skb->data;
@@ -3555,6 +3570,10 @@
hci_cmd_status_evt(hdev, skb);
break;
+ case HCI_EV_HARDWARE_ERROR:
+ hci_hardware_error_evt(hdev, skb);
+ break;
+
case HCI_EV_ROLE_CHANGE:
hci_role_change_evt(hdev, skb);
break;
diff --git a/scripts/build-all.py b/scripts/build-all.py
index 3cecbe2..4789af7 100755
--- a/scripts/build-all.py
+++ b/scripts/build-all.py
@@ -88,7 +88,6 @@
r'[fm]sm[0-9]*_defconfig',
r'apq*_defconfig',
r'qsd*_defconfig',
- r'msmzinc*_defconfig',
)
for p in arch_pats:
for n in glob.glob('arch/arm/configs/' + p):
diff --git a/sound/soc/codecs/msm8x10-wcd.c b/sound/soc/codecs/msm8x10-wcd.c
index c8647fb1..e1a904f 100644
--- a/sound/soc/codecs/msm8x10-wcd.c
+++ b/sound/soc/codecs/msm8x10-wcd.c
@@ -34,6 +34,7 @@
#include <sound/soc.h>
#include <sound/soc-dapm.h>
#include <sound/tlv.h>
+#include <mach/qdsp6v2/apr.h>
#include "msm8x10-wcd.h"
#include "wcd9xxx-resmgr.h"
#include "msm8x10_wcd_registers.h"
@@ -52,7 +53,7 @@
#define MAX_MSM8X10_WCD_DEVICE 4
#define CODEC_DT_MAX_PROP_SIZE 40
-#define MSM8X10_WCD_I2C_GSBI_SLAVE_ID "1-000d"
+#define MSM8X10_WCD_I2C_GSBI_SLAVE_ID "5-000d"
enum {
MSM8X10_WCD_I2C_TOP_LEVEL = 0,
@@ -82,6 +83,12 @@
static struct snd_soc_dai_driver msm8x10_wcd_i2s_dai[];
static const DECLARE_TLV_DB_SCALE(aux_pga_gain, 0, 2, 0);
+#define MSM8X10_WCD_ACQUIRE_LOCK(x) do { \
+ mutex_lock_nested(&x, SINGLE_DEPTH_NESTING); \
+} while (0)
+#define MSM8X10_WCD_RELEASE_LOCK(x) do { mutex_unlock(&x); } while (0)
+
+
/* Codec supports 2 IIR filters */
enum {
IIR1 = 0,
@@ -99,6 +106,12 @@
BAND_MAX,
};
+enum msm8x10_wcd_bandgap_type {
+ MSM8X10_WCD_BANDGAP_OFF = 0,
+ MSM8X10_WCD_BANDGAP_AUDIO_MODE,
+ MSM8X10_WCD_BANDGAP_MBHC_MODE,
+};
+
struct hpf_work {
struct msm8x10_wcd_priv *msm8x10_wcd;
u32 decimator;
@@ -113,11 +126,14 @@
u32 adc_count;
u32 rx_bias_count;
s32 dmic_1_2_clk_cnt;
-
+ enum msm8x10_wcd_bandgap_type bandgap_type;
+ bool mclk_enabled;
+ bool clock_active;
+ bool config_mode_active;
+ bool mbhc_polling_active;
+ struct mutex codec_resource_lock;
/* resmgr module */
struct wcd9xxx_resmgr resmgr;
- /* mbhc module */
- struct wcd9xxx_mbhc mbhc;
};
static unsigned short rx_digital_gain_reg[] = {
@@ -139,8 +155,8 @@
};
static char *msm8x10_wcd_supplies[] = {
- "cdc-vdd-mic-bias", "cdc-vdda-h", "cdc-vdd-1p2", "cdc-vdd-px",
- "cdc-vdda-cp",
+ "cdc-vdda-cp", "cdc-vdda-h", "cdc-vdd-px", "cdc-vdd-1p2v",
+ "cdc-vdd-mic-bias",
};
static int msm8x10_wcd_dt_parse_vreg_info(struct device *dev,
@@ -158,7 +174,6 @@
{
int rtn = 0;
int value = ((reg & 0x0f00) >> 8) & 0x000f;
- pr_debug("%s: reg(0x%x) value(%d)\n", __func__, reg, value);
switch (value) {
case 0:
case 1:
@@ -171,7 +186,7 @@
return rtn;
}
-static int msm8x10_wcd_abh_write_device(u16 reg, unsigned int *value, u32 bytes)
+static int msm8x10_wcd_abh_write_device(u16 reg, u8 *value, u32 bytes)
{
u32 temp = ((u32)(*value)) & 0x000000FF;
u32 offset = (((u32)(reg)) ^ 0x00000400) & 0x00000FFF;
@@ -179,11 +194,13 @@
return 0;
}
-static int msm8x10_wcd_abh_read_device(u16 reg, u32 bytes, unsigned int *value)
+static int msm8x10_wcd_abh_read_device(u16 reg, u32 bytes, u8 *value)
{
+ u32 temp;
u32 offset = (((u32)(reg)) ^ 0x00000400) & 0x00000FFF;
- *value = ioread32(ioremap(MSM8X10_DINO_CODEC_BASE_ADDR +
+ temp = ioread32(ioremap(MSM8X10_DINO_CODEC_BASE_ADDR +
offset, 4));
+ *value = (u8)temp;
return 0;
}
@@ -274,7 +291,7 @@
}
}
}
- pr_debug("%s: Reg 0x%x = 0x%x\n", __func__, reg, *dest);
+ pr_debug("%s: reg 0x%x = 0x%x\n", __func__, reg, *dest);
return 0;
}
@@ -292,20 +309,22 @@
u16 reg, unsigned int *val)
{
int ret = -EINVAL;
+ u8 temp;
/* check if use I2C interface for Helicon or AHB for Dino */
mutex_lock(&msm8x10_wcd->io_lock);
if (MSM8X10_WCD_IS_HELICON_REG(reg))
- ret = msm8x10_wcd_i2c_read(reg, 1, val);
+ ret = msm8x10_wcd_i2c_read(reg, 1, &temp);
else if (MSM8X10_WCD_IS_DINO_REG(reg))
- ret = msm8x10_wcd_abh_read_device(reg, 1, val);
+ ret = msm8x10_wcd_abh_read_device(reg, 1, &temp);
mutex_unlock(&msm8x10_wcd->io_lock);
+ *val = temp;
return ret;
}
static int msm8x10_wcd_reg_write(struct msm8x10_wcd *msm8x10_wcd, u16 reg,
- unsigned int val)
+ u8 val)
{
int ret = -EINVAL;
@@ -393,7 +412,7 @@
reg, ret);
}
- return msm8x10_wcd_reg_write(codec->control_data, reg, value);
+ return msm8x10_wcd_reg_write(codec->control_data, reg, (u8)value);
}
static unsigned int msm8x10_wcd_read(struct snd_soc_codec *codec,
@@ -438,10 +457,14 @@
regnode = of_parse_phandle(dev->of_node, prop_name, 0);
if (!regnode) {
- dev_err(dev, "Looking up %s property in node %s failed",
+ dev_err(dev, "Looking up %s property in node %s failed\n",
prop_name, dev->of_node->full_name);
return -ENODEV;
}
+
+ dev_dbg(dev, "Looking up %s property in node %s\n",
+ prop_name, dev->of_node->full_name);
+
vreg->name = vreg_name;
snprintf(prop_name, CODEC_DT_MAX_PROP_SIZE,
@@ -481,17 +504,7 @@
u32 prop_val;
snprintf(prop_name, CODEC_DT_MAX_PROP_SIZE,
- "qcom,cdc-micbias-ldoh-v");
- ret = of_property_read_u32(dev->of_node, prop_name, &prop_val);
- if (ret) {
- dev_err(dev, "Looking up %s property in node %s failed",
- prop_name, dev->of_node->full_name);
- return -ENODEV;
- }
- micbias->ldoh_v = (u8)prop_val;
-
- snprintf(prop_name, CODEC_DT_MAX_PROP_SIZE,
- "qcom,cdc-micbias-cfilt1-mv");
+ "qcom,cdc-micbias-cfilt-mv");
ret = of_property_read_u32(dev->of_node, prop_name,
&micbias->cfilt1_mv);
if (ret) {
@@ -501,7 +514,7 @@
}
snprintf(prop_name, CODEC_DT_MAX_PROP_SIZE,
- "qcom,cdc-micbias1-cfilt-sel");
+ "qcom,cdc-micbias-cfilt-sel");
ret = of_property_read_u32(dev->of_node, prop_name, &prop_val);
if (ret) {
dev_err(dev, "Looking up %s property in node %s failed",
@@ -559,23 +572,14 @@
if (ret)
goto err;
}
-
ret = msm8x10_wcd_dt_parse_micbias_info(dev, &pdata->micbias);
if (ret)
goto err;
-
- pdata->reset_gpio = of_get_named_gpio(dev->of_node,
- "qcom,cdc-reset-gpio", 0);
- if (pdata->reset_gpio < 0) {
- dev_err(dev, "Looking up %s property in node %s failed %d\n",
- "qcom, cdc-reset-gpio", dev->of_node->full_name,
- pdata->reset_gpio);
- goto err;
- }
- dev_dbg(dev, "%s: reset gpio %d", __func__, pdata->reset_gpio);
return pdata;
err:
devm_kfree(dev, pdata);
+ dev_err(dev, "%s: Failed to populate DT data ret = %d\n",
+ __func__, ret);
return NULL;
}
@@ -611,6 +615,9 @@
MSM8X10_WCD_A_CDC_CLK_OTHR_CTL, 0x01,
0x00);
snd_soc_update_bits(codec,
+ MSM8X10_WCD_A_CDC_CLK_OTHR_RESET_B1_CTL,
+ 0x01, 0x01);
+ snd_soc_update_bits(codec,
MSM8X10_WCD_A_CP_STATIC, 0x08, 0x00);
break;
}
@@ -846,7 +853,7 @@
SOC_ENUM_EXT("EAR PA Gain", msm8x10_wcd_ear_pa_gain_enum[0],
msm8x10_wcd_pa_gain_get, msm8x10_wcd_pa_gain_put),
- SOC_SINGLE_TLV("LINEOUT1 Volume", MSM8X10_WCD_A_RX_LINE_1_GAIN,
+ SOC_SINGLE_TLV("LINEOUT Volume", MSM8X10_WCD_A_RX_LINE_1_GAIN,
0, 12, 1, line_gain),
SOC_SINGLE_TLV("HPHL Volume", MSM8X10_WCD_A_RX_HPH_L_GAIN,
@@ -1102,9 +1109,6 @@
switch (decimator) {
case 1:
case 2:
- if (dec_mux == 1)
- adc_dmic_sel = 0x1;
- else
adc_dmic_sel = 0x0;
break;
default:
@@ -1148,68 +1152,6 @@
SOC_DAPM_SINGLE("Switch", MSM8X10_WCD_A_RX_HPH_L_DAC_CTL, 6, 1, 0)
};
-/* virtual port entries */
-static int slim_tx_mixer_get(struct snd_kcontrol *kcontrol,
- struct snd_ctl_elem_value *ucontrol)
-{
- struct snd_soc_dapm_widget_list *wlist = snd_kcontrol_chip(kcontrol);
- struct snd_soc_dapm_widget *widget = wlist->widgets[0];
-
- ucontrol->value.integer.value[0] = widget->value;
- return 0;
-}
-
-static int slim_tx_mixer_put(struct snd_kcontrol *kcontrol,
- struct snd_ctl_elem_value *ucontrol)
-{
- return 0;
-}
-
-static int slim_rx_mux_get(struct snd_kcontrol *kcontrol,
- struct snd_ctl_elem_value *ucontrol)
-{
- struct snd_soc_dapm_widget_list *wlist = snd_kcontrol_chip(kcontrol);
- struct snd_soc_dapm_widget *widget = wlist->widgets[0];
-
- ucontrol->value.enumerated.item[0] = widget->value;
- return 0;
-}
-
-static int slim_rx_mux_put(struct snd_kcontrol *kcontrol,
- struct snd_ctl_elem_value *ucontrol)
-{
- return 0;
-}
-
-
-static const char *const slim_rx_mux_text[] = {
- "ZERO", "AIF1_PB"
-};
-
-static const struct soc_enum slim_rx_mux_enum =
- SOC_ENUM_SINGLE_EXT(ARRAY_SIZE(slim_rx_mux_text), slim_rx_mux_text);
-
-static const struct snd_kcontrol_new slim_rx_mux[MSM8X10_WCD_RX_MAX] = {
- SOC_DAPM_ENUM_EXT("I2S RX1 Mux", slim_rx_mux_enum,
- slim_rx_mux_get, slim_rx_mux_put),
- SOC_DAPM_ENUM_EXT("I2S RX2 Mux", slim_rx_mux_enum,
- slim_rx_mux_get, slim_rx_mux_put),
- SOC_DAPM_ENUM_EXT("I2S RX3 Mux", slim_rx_mux_enum,
- slim_rx_mux_get, slim_rx_mux_put),
-};
-
-static const struct snd_kcontrol_new aif_cap_mixer[] = {
- SOC_SINGLE_EXT("I2S TX1", SND_SOC_NOPM, MSM8X10_WCD_TX1, 1, 0,
- slim_tx_mixer_get, slim_tx_mixer_put),
- SOC_SINGLE_EXT("I2S TX2", SND_SOC_NOPM, MSM8X10_WCD_TX2, 1, 0,
- slim_tx_mixer_get, slim_tx_mixer_put),
- SOC_SINGLE_EXT("I2S TX3", SND_SOC_NOPM, MSM8X10_WCD_TX3, 1, 0,
- slim_tx_mixer_get, slim_tx_mixer_put),
- SOC_SINGLE_EXT("I2S TX4", SND_SOC_NOPM, MSM8X10_WCD_TX4, 1, 0,
- slim_tx_mixer_get, slim_tx_mixer_put),
-};
-
-
static void msm8x10_wcd_codec_enable_adc_block(struct snd_soc_codec *codec,
int enable)
{
@@ -1243,7 +1185,7 @@
if (w->reg == MSM8X10_WCD_A_TX_1_EN)
init_bit_shift = 7;
- else if (adc_reg == MSM8X10_WCD_A_TX_2_EN)
+ else if (w->reg == MSM8X10_WCD_A_TX_2_EN)
init_bit_shift = 6;
else {
dev_err(codec->dev, "%s: Error, invalid adc register\n",
@@ -1365,54 +1307,36 @@
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
- struct msm8x10_wcd_priv *msm8x10_wcd = snd_soc_codec_get_drvdata(codec);
u16 micb_int_reg;
- u8 cfilt_sel_val = 0;
char *internal1_text = "Internal1";
char *internal2_text = "Internal2";
char *internal3_text = "Internal3";
- enum wcd9xxx_notify_event e_post_off, e_pre_on, e_post_on;
dev_dbg(codec->dev, "%s %d\n", __func__, event);
switch (w->reg) {
case MSM8X10_WCD_A_MICB_1_CTL:
micb_int_reg = MSM8X10_WCD_A_MICB_1_INT_RBIAS;
- cfilt_sel_val =
- msm8x10_wcd->resmgr.pdata->micbias.bias1_cfilt_sel;
- e_pre_on = WCD9XXX_EVENT_PRE_MICBIAS_1_ON;
- e_post_on = WCD9XXX_EVENT_POST_MICBIAS_1_ON;
- e_post_off = WCD9XXX_EVENT_POST_MICBIAS_1_OFF;
break;
default:
dev_err(codec->dev,
- "%s: Error, invalid micbias register\n", __func__);
+ "%s: Error, invalid micbias register 0x%x\n",
+ __func__, w->reg);
return -EINVAL;
}
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
- /* Let MBHC module know so micbias switch to be off */
- wcd9xxx_resmgr_notifier_call(&msm8x10_wcd->resmgr, e_pre_on);
-
- /* Get cfilt */
- wcd9xxx_resmgr_cfilt_get(&msm8x10_wcd->resmgr, cfilt_sel_val);
-
if (strnstr(w->name, internal1_text, 30))
- snd_soc_update_bits(codec, micb_int_reg, 0xE0, 0xE0);
+ snd_soc_update_bits(codec, micb_int_reg, 0x80, 0x80);
else if (strnstr(w->name, internal2_text, 30))
- snd_soc_update_bits(codec, micb_int_reg, 0x1C, 0x1C);
+ snd_soc_update_bits(codec, micb_int_reg, 0x10, 0x10);
else if (strnstr(w->name, internal3_text, 30))
- snd_soc_update_bits(codec, micb_int_reg, 0x3, 0x3);
+ snd_soc_update_bits(codec, micb_int_reg, 0x2, 0x2);
break;
case SND_SOC_DAPM_POST_PMU:
usleep_range(20000, 20100);
- /* Let MBHC module know so micbias is on */
- wcd9xxx_resmgr_notifier_call(&msm8x10_wcd->resmgr, e_post_on);
break;
case SND_SOC_DAPM_POST_PMD:
- /* Let MBHC module know so micbias switch to be off */
- wcd9xxx_resmgr_notifier_call(&msm8x10_wcd->resmgr, e_post_off);
-
if (strnstr(w->name, internal1_text, 30))
snd_soc_update_bits(codec, micb_int_reg, 0x80, 0x00);
else if (strnstr(w->name, internal2_text, 30))
@@ -1420,14 +1344,36 @@
else if (strnstr(w->name, internal3_text, 30))
snd_soc_update_bits(codec, micb_int_reg, 0x2, 0x0);
- /* Put cfilt */
- wcd9xxx_resmgr_cfilt_put(&msm8x10_wcd->resmgr, cfilt_sel_val);
break;
}
-
return 0;
}
+static void tx_hpf_corner_freq_callback(struct work_struct *work)
+{
+ struct delayed_work *hpf_delayed_work;
+ struct hpf_work *hpf_work;
+ struct msm8x10_wcd_priv *msm8x10_wcd;
+ struct snd_soc_codec *codec;
+ u16 tx_mux_ctl_reg;
+ u8 hpf_cut_of_freq;
+
+ hpf_delayed_work = to_delayed_work(work);
+ hpf_work = container_of(hpf_delayed_work, struct hpf_work, dwork);
+ msm8x10_wcd = hpf_work->msm8x10_wcd;
+ codec = hpf_work->msm8x10_wcd->codec;
+ hpf_cut_of_freq = hpf_work->tx_hpf_cut_of_freq;
+
+ tx_mux_ctl_reg = MSM8X10_WCD_A_CDC_TX1_MUX_CTL +
+ (hpf_work->decimator - 1) * 32;
+
+ dev_info(codec->dev, "%s(): decimator %u hpf_cut_of_freq 0x%x\n",
+ __func__, hpf_work->decimator, (unsigned int)hpf_cut_of_freq);
+
+ snd_soc_update_bits(codec, tx_mux_ctl_reg, 0x30, hpf_cut_of_freq << 4);
+}
+
+
#define TX_MUX_CTL_CUT_OFF_FREQ_MASK 0x30
#define CF_MIN_3DB_4HZ 0x0
#define CF_MIN_3DB_75HZ 0x1
@@ -1582,6 +1528,7 @@
{
struct snd_soc_codec *codec = w->codec;
struct msm8x10_wcd_priv *msm8x10_wcd = snd_soc_codec_get_drvdata(codec);
+ msm8x10_wcd->resmgr.codec = codec;
dev_dbg(codec->dev, "%s %d\n", __func__, event);
@@ -1697,21 +1644,12 @@
{"I2S TX1", NULL, "TX_I2S_CLK"},
{"I2S TX2", NULL, "TX_I2S_CLK"},
- {"I2S TX3", NULL, "TX_I2S_CLK"},
- {"I2S TX4", NULL, "TX_I2S_CLK"},
- {"AIF1 CAP", NULL, "AIF1_CAP Mixer"},
+ {"DEC1 MUX", NULL, "TX CLK"},
+ {"DEC2 MUX", NULL, "TX CLK"},
- {"AIF1_CAP Mixer", "I2S TX1", "I2S TX1 MUX"},
- {"AIF1_CAP Mixer", "I2S TX2", "I2S TX2 MUX"},
- {"AIF1_CAP Mixer", "I2S TX3", "I2S TX3 MUX"},
- {"AIF1_CAP Mixer", "I2S TX4", "I2S TX4 MUX"},
-
- {"I2S TX1 MUX", NULL, "DEC1 MUX"},
- {"I2S TX2 MUX", NULL, "DEC2 MUX"},
- {"I2S TX3 MUX", NULL, "RX1 MIX1"},
- {"I2S TX4 MUX", "RMIX2", "RX1 MIX2"},
- {"I2S TX4 MUX", "RMIX3", "RX1 MIX3"},
+ {"I2S TX1", NULL, "DEC1 MUX"},
+ {"I2S TX2", NULL, "DEC2 MUX"},
/* Earpiece (RX MIX1) */
{"EAR", NULL, "EAR PA"},
@@ -1725,33 +1663,32 @@
{"HPHL", NULL, "HPHL DAC"},
{"HPHR", NULL, "HPHR DAC"},
- {"HPHR_PA_MIXER", NULL, "HPHR DAC"},
{"HPHL DAC", NULL, "CP"},
{"HPHR DAC", NULL, "CP"},
+ {"SPK DAC", NULL, "CP"},
{"DAC1", "Switch", "RX1 CHAIN"},
{"HPHL DAC", "Switch", "RX1 CHAIN"},
{"HPHR DAC", NULL, "RX2 CHAIN"},
- {"LINEOUT1", NULL, "LINEOUT1 PA"},
+ {"LINEOUT", NULL, "LINEOUT PA"},
{"SPK_OUT", NULL, "SPK PA"},
- {"LINEOUT1 PA", NULL, "CP"},
- {"LINEOUT1 PA", NULL, "LINEOUT1 DAC"},
+ {"LINEOUT PA", NULL, "CP"},
+ {"LINEOUT PA", NULL, "LINEOUT DAC"},
- {"LINEOUT1 DAC", "RX2 INPUT", "RX2 MIX1"},
- {"LINEOUT1 DAC", "RX3 INPUT", "RX3 MIX1"},
-
+ {"CP", NULL, "RX_BIAS"},
{"SPK PA", NULL, "SPK DAC"},
- {"SPK DAC", NULL, "RX7 MIX2"},
+ {"SPK DAC", NULL, "RX3 CHAIN"},
+ {"RX1 CHAIN", NULL, "RX1 CLK"},
+ {"RX2 CHAIN", NULL, "RX2 CLK"},
+ {"RX3 CHAIN", NULL, "RX3 CLK"},
{"RX1 CHAIN", NULL, "RX1 MIX2"},
{"RX2 CHAIN", NULL, "RX2 MIX2"},
-
- {"LINEOUT1 DAC", NULL, "RX_BIAS"},
- {"SPK DAC", NULL, "RX_BIAS"},
+ {"RX3 CHAIN", NULL, "RX3 MIX1"},
{"RX1 MIX1", NULL, "RX1 MIX1 INP1"},
{"RX1 MIX1", NULL, "RX1 MIX1 INP2"},
@@ -1761,19 +1698,7 @@
{"RX3 MIX1", NULL, "RX3 MIX1 INP1"},
{"RX3 MIX1", NULL, "RX3 MIX1 INP2"},
{"RX1 MIX2", NULL, "RX1 MIX1"},
- {"RX1 MIX2", NULL, "RX1 MIX2 INP1"},
- {"RX1 MIX2", NULL, "RX1 MIX2 INP2"},
{"RX2 MIX2", NULL, "RX2 MIX1"},
- {"RX2 MIX2", NULL, "RX2 MIX2 INP1"},
- {"RX2 MIX2", NULL, "RX2 MIX2 INP2"},
-
- {"I2S RX1 MUX", "AIF1_PB", "AIF1 PB"},
- {"I2S RX2 MUX", "AIF1_PB", "AIF1 PB"},
- {"I2S RX3 MUX", "AIF1_PB", "AIF1 PB"},
-
- {"I2S RX1", NULL, "I2S RX1 MUX"},
- {"I2S RX2", NULL, "I2S RX2 MUX"},
- {"I2S RX3", NULL, "I2S RX3 MUX"},
{"RX1 MIX1 INP1", "RX1", "I2S RX1"},
{"RX1 MIX1 INP1", "RX2", "I2S RX2"},
@@ -1825,41 +1750,109 @@
{"IIR1", NULL, "IIR1 INP1 MUX"},
{"IIR1 INP1 MUX", "DEC1", "DEC1 MUX"},
{"IIR1 INP1 MUX", "DEC2", "DEC2 MUX"},
-
- /* There is no LDO_H in Helicon */
- {"MIC BIAS1 Internal1", NULL, "LDO_H"},
- {"MIC BIAS1 Internal2", NULL, "LDO_H"},
- {"MIC BIAS1 External", NULL, "LDO_H"},
+ {"MIC BIAS Internal2", NULL, "INT_LDO_H"},
+ {"MIC BIAS External", NULL, "INT_LDO_H"},
};
static int msm8x10_wcd_startup(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
- struct msm8x10_wcd *msm8x10_wcd_core =
- dev_get_drvdata(dai->codec->dev);
dev_dbg(dai->codec->dev, "%s(): substream = %s stream = %d\n",
__func__,
substream->name, substream->stream);
- if ((msm8x10_wcd_core != NULL) &&
- (msm8x10_wcd_core->dev != NULL))
- pm_runtime_get_sync(msm8x10_wcd_core->dev);
-
return 0;
}
static void msm8x10_wcd_shutdown(struct snd_pcm_substream *substream,
struct snd_soc_dai *dai)
{
- struct msm8x10_wcd *msm8x10_wcd_core =
- dev_get_drvdata(dai->codec->dev);
dev_dbg(dai->codec->dev,
"%s(): substream = %s stream = %d\n" , __func__,
substream->name, substream->stream);
- if ((msm8x10_wcd_core != NULL) &&
- (msm8x10_wcd_core->dev != NULL)) {
- pm_runtime_mark_last_busy(msm8x10_wcd_core->dev);
- pm_runtime_put(msm8x10_wcd_core->dev);
+}
+
+static int msm8x10_wcd_codec_enable_clock_block(struct snd_soc_codec *codec,
+ int enable)
+{
+ if (enable) {
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_CLK_MCLK_CTL,
+ 0x01, 0x01);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_CLK_PDM_CTL,
+ 0x03, 0x03);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_TOP_CLK_CTL,
+ 0x0f, 0x0d);
+ } else {
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_TOP_CLK_CTL,
+ 0x0f, 0x00);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_CLK_MCLK_CTL,
+ 0x01, 0x01);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_CDC_CLK_MCLK_CTL,
+ 0x03, 0x00);
}
+ return 0;
+}
+
+static void msm8x10_wcd_codec_enable_audio_mode_bandgap(struct snd_soc_codec
+ *codec)
+{
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x80,
+ 0x80);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x04,
+ 0x04);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x01,
+ 0x01);
+ usleep_range(1000, 1000);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x80,
+ 0x00);
+}
+
+static void msm8x10_wcd_codec_enable_bandgap(struct snd_soc_codec *codec,
+ enum msm8x10_wcd_bandgap_type choice)
+{
+ struct msm8x10_wcd_priv *msm8x10_wcd = snd_soc_codec_get_drvdata(codec);
+
+ /* TODO lock resources accessed by audio streams and threaded
+ * interrupt handlers
+ */
+
+ dev_dbg(codec->dev, "%s, choice is %d, current is %d\n",
+ __func__, choice,
+ msm8x10_wcd->bandgap_type);
+
+ if (msm8x10_wcd->bandgap_type == choice)
+ return;
+
+ if ((msm8x10_wcd->bandgap_type == MSM8X10_WCD_BANDGAP_OFF) &&
+ (choice == MSM8X10_WCD_BANDGAP_AUDIO_MODE)) {
+ msm8x10_wcd_codec_enable_audio_mode_bandgap(codec);
+ } else if (choice == MSM8X10_WCD_BANDGAP_MBHC_MODE) {
+ /* bandgap mode becomes fast,
+ * mclk should be off or clk buff source shouldn't be VBG
+ * Let's turn off mclk always */
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL,
+ 0x2, 0x2);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL,
+ 0x80, 0x80);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL,
+ 0x4, 0x4);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL,
+ 0x01, 0x01);
+ usleep_range(1000, 1000);
+ snd_soc_update_bits(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL,
+ 0x80, 0x00);
+ } else if ((msm8x10_wcd->bandgap_type ==
+ MSM8X10_WCD_BANDGAP_MBHC_MODE) &&
+ (choice == MSM8X10_WCD_BANDGAP_AUDIO_MODE)) {
+ snd_soc_write(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x50);
+ usleep_range(100, 100);
+ msm8x10_wcd_codec_enable_audio_mode_bandgap(codec);
+ } else if (choice == MSM8X10_WCD_BANDGAP_OFF) {
+ snd_soc_write(codec, MSM8X10_WCD_A_BIAS_CENTRAL_BG_CTL, 0x50);
+ } else {
+ dev_err(codec->dev,
+ "%s: Error, Invalid bandgap settings\n", __func__);
+ }
+ msm8x10_wcd->bandgap_type = choice;
}
int msm8x10_wcd_mclk_enable(struct snd_soc_codec *codec,
@@ -1867,24 +1860,30 @@
{
struct msm8x10_wcd_priv *msm8x10_wcd = snd_soc_codec_get_drvdata(codec);
- dev_dbg(codec->dev,
- "%s: mclk_enable = %u, dapm = %d\n", __func__,
- mclk_enable, dapm);
- WCD9XXX_BCL_LOCK(&msm8x10_wcd->resmgr);
+ dev_dbg(codec->dev, "%s: mclk_enable = %u, dapm = %d\n",
+ __func__, mclk_enable, dapm);
+ if (dapm)
+ MSM8X10_WCD_ACQUIRE_LOCK(msm8x10_wcd->codec_resource_lock);
if (mclk_enable) {
- wcd9xxx_resmgr_get_bandgap(&msm8x10_wcd->resmgr,
- WCD9XXX_BANDGAP_AUDIO_MODE);
- wcd9xxx_resmgr_get_clk_block(&msm8x10_wcd->resmgr,
- WCD9XXX_CLK_MCLK);
+ msm8x10_wcd->mclk_enabled = true;
+ msm8x10_wcd_codec_enable_bandgap(codec,
+ MSM8X10_WCD_BANDGAP_AUDIO_MODE);
+ msm8x10_wcd_codec_enable_clock_block(codec, 1);
} else {
- /* Put clock and BG */
- wcd9xxx_resmgr_put_clk_block(&msm8x10_wcd->resmgr,
- WCD9XXX_CLK_MCLK);
- wcd9xxx_resmgr_put_bandgap(&msm8x10_wcd->resmgr,
- WCD9XXX_BANDGAP_AUDIO_MODE);
+ if (!msm8x10_wcd->mclk_enabled) {
+ if (dapm)
+ MSM8X10_WCD_RELEASE_LOCK(
+ msm8x10_wcd->codec_resource_lock);
+ dev_err(codec->dev, "Error, MCLK already diabled\n");
+ return -EINVAL;
+ }
+ msm8x10_wcd->mclk_enabled = false;
+ msm8x10_wcd_codec_enable_clock_block(codec, 0);
+ msm8x10_wcd_codec_enable_bandgap(codec,
+ MSM8X10_WCD_BANDGAP_OFF);
}
- WCD9XXX_BCL_UNLOCK(&msm8x10_wcd->resmgr);
-
+ if (dapm)
+ MSM8X10_WCD_RELEASE_LOCK(msm8x10_wcd->codec_resource_lock);
return 0;
}
@@ -2027,7 +2026,7 @@
.rate_max = 192000,
.rate_min = 8000,
.channels_min = 1,
- .channels_max = 4,
+ .channels_max = 3,
},
.ops = &msm8x10_wcd_dai_ops,
},
@@ -2077,23 +2076,16 @@
SND_SOC_DAPM_MIXER("DAC1", MSM8X10_WCD_A_RX_EAR_EN, 6, 0, dac1_switch,
ARRAY_SIZE(dac1_switch)),
- SND_SOC_DAPM_AIF_IN("AIF1 PB", "AIF1 Playback", 0, SND_SOC_NOPM,
- AIF1_PB, 0),
+ SND_SOC_DAPM_AIF_IN("I2S RX1", "AIF1 Playback", 0, SND_SOC_NOPM, 0, 0),
- SND_SOC_DAPM_MUX("I2S RX1 MUX", SND_SOC_NOPM, MSM8X10_WCD_RX1, 0,
- &slim_rx_mux[MSM8X10_WCD_RX1]),
- SND_SOC_DAPM_MUX("I2S RX2 MUX", SND_SOC_NOPM, MSM8X10_WCD_RX2, 0,
- &slim_rx_mux[MSM8X10_WCD_RX2]),
- SND_SOC_DAPM_MUX("I2S RX3 MUX", SND_SOC_NOPM, MSM8X10_WCD_RX3, 0,
- &slim_rx_mux[MSM8X10_WCD_RX3]),
+ SND_SOC_DAPM_AIF_IN("I2S RX2", "AIF1 Playback", 0, SND_SOC_NOPM, 0, 0),
- SND_SOC_DAPM_MIXER("I2S RX1", SND_SOC_NOPM, 0, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("I2S RX2", SND_SOC_NOPM, 0, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("I2S RX3", SND_SOC_NOPM, 0, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("I2S RX4", SND_SOC_NOPM, 0, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("I2S RX5", SND_SOC_NOPM, 0, 0, NULL, 0),
+ SND_SOC_DAPM_AIF_IN("I2S RX3", "AIF1 Playback", 0, SND_SOC_NOPM, 0, 0),
- /* Headphone */
+ SND_SOC_DAPM_SUPPLY("TX CLK", MSM8X10_WCD_A_CDC_DIG_CLK_CTL,
+ 4, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("INT_LDO_H", SND_SOC_NOPM, 1, 0, NULL, 0),
+
SND_SOC_DAPM_OUTPUT("HEADPHONE"),
SND_SOC_DAPM_PGA_E("HPHL", MSM8X10_WCD_A_RX_HPH_CNP_EN,
5, 0, NULL, 0,
@@ -2114,10 +2106,10 @@
SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
/* Speaker */
- SND_SOC_DAPM_OUTPUT("LINEOUT1"),
+ SND_SOC_DAPM_OUTPUT("LINEOUT"),
SND_SOC_DAPM_OUTPUT("SPK_OUT"),
- SND_SOC_DAPM_PGA_E("LINEOUT1 PA", MSM8X10_WCD_A_RX_LINE_CNP_EN,
+ SND_SOC_DAPM_PGA_E("LINEOUT PA", MSM8X10_WCD_A_RX_LINE_CNP_EN,
0, 0, NULL, 0, msm8x10_wcd_codec_enable_lineout,
SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
@@ -2127,7 +2119,7 @@
SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
- SND_SOC_DAPM_DAC_E("LINEOUT1 DAC", NULL,
+ SND_SOC_DAPM_DAC_E("LINEOUT DAC", NULL,
MSM8X10_WCD_A_RX_LINE_1_DAC_CTL, 7, 0,
msm8x10_wcd_lineout_dac_event,
SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
@@ -2152,10 +2144,18 @@
0, msm8x10_wcd_codec_enable_interpolator, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU),
+ SND_SOC_DAPM_SUPPLY("RX1 CLK", MSM8X10_WCD_A_CDC_DIG_CLK_CTL,
+ 0, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("RX2 CLK", MSM8X10_WCD_A_CDC_DIG_CLK_CTL,
+ 1, 0, NULL, 0),
+ SND_SOC_DAPM_SUPPLY("RX3 CLK", MSM8X10_WCD_A_CDC_DIG_CLK_CTL,
+ 2, 0, NULL, 0),
SND_SOC_DAPM_MIXER("RX1 CHAIN", MSM8X10_WCD_A_CDC_RX1_B6_CTL,
5, 0, NULL, 0),
SND_SOC_DAPM_MIXER("RX2 CHAIN", MSM8X10_WCD_A_CDC_RX2_B6_CTL,
5, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER("RX3 CHAIN", MSM8X10_WCD_A_CDC_RX3_B6_CTL,
+ 5, 0, NULL, 0),
SND_SOC_DAPM_MUX("RX1 MIX1 INP1", SND_SOC_NOPM, 0, 0,
&rx_mix1_inp1_mux),
@@ -2191,26 +2191,28 @@
SND_SOC_DAPM_INPUT("AMIC1"),
- SND_SOC_DAPM_MICBIAS_E("MIC BIAS1 External",
+ SND_SOC_DAPM_MICBIAS_E("MIC BIAS Internal1",
MSM8X10_WCD_A_MICB_1_CTL, 7, 0,
msm8x10_wcd_codec_enable_micbias, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
- SND_SOC_DAPM_MICBIAS_E("MIC BIAS1 Internal1",
+ SND_SOC_DAPM_MICBIAS_E("MIC BIAS Internal2",
MSM8X10_WCD_A_MICB_1_CTL, 7, 0,
msm8x10_wcd_codec_enable_micbias, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
- SND_SOC_DAPM_MICBIAS_E("MIC BIAS1 Internal2",
+ SND_SOC_DAPM_MICBIAS_E("MIC BIAS Internal3",
MSM8X10_WCD_A_MICB_1_CTL, 7, 0,
msm8x10_wcd_codec_enable_micbias, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
SND_SOC_DAPM_ADC_E("ADC1", NULL, MSM8X10_WCD_A_TX_1_EN, 7, 0,
msm8x10_wcd_codec_enable_adc, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
-
- SND_SOC_DAPM_ADC_E("ADC1", NULL, MSM8X10_WCD_A_TX_2_EN, 7, 0,
+ SND_SOC_DAPM_ADC_E("ADC2", NULL, MSM8X10_WCD_A_TX_2_EN, 7, 0,
msm8x10_wcd_codec_enable_adc, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_MICBIAS("MIC BIAS External", MSM8X10_WCD_A_MICB_1_CTL,
+ 7, 0),
+
SND_SOC_DAPM_INPUT("AMIC3"),
SND_SOC_DAPM_MUX_E("DEC1 MUX",
@@ -2226,11 +2228,13 @@
SND_SOC_DAPM_PRE_PMD | SND_SOC_DAPM_POST_PMD),
SND_SOC_DAPM_INPUT("AMIC2"),
- SND_SOC_DAPM_AIF_OUT("AIF1 CAP", "AIF1 Capture", 0, SND_SOC_NOPM,
- AIF1_CAP, 0),
- SND_SOC_DAPM_MIXER("AIF1_CAP Mixer", SND_SOC_NOPM, AIF1_CAP, 0,
- aif_cap_mixer, ARRAY_SIZE(aif_cap_mixer)),
+ SND_SOC_DAPM_AIF_OUT("I2S TX1", "AIF1 Capture", 0, SND_SOC_NOPM,
+ 0, 0),
+ SND_SOC_DAPM_AIF_OUT("I2S TX2", "AIF1 Capture", 0, SND_SOC_NOPM,
+ 0, 0),
+ SND_SOC_DAPM_AIF_OUT("I2S TX3", "AIF1 Capture", 0, SND_SOC_NOPM,
+ 0, 0),
/* Digital Mic Inputs */
SND_SOC_DAPM_ADC_E("DMIC1", NULL, SND_SOC_NOPM, 0, 0,
@@ -2254,7 +2258,7 @@
static const struct msm8x10_wcd_reg_mask_val msm8x10_wcd_reg_defaults[] = {
/* set MCLk to 9.6 */
- MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_CHIP_CTL, 0x0A),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_CHIP_CTL, 0x00),
/* EAR PA deafults */
MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_RX_EAR_CMBUFF, 0x05),
@@ -2280,7 +2284,12 @@
/* Disable TX7 internal biasing path which can cause leakage */
- MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_TX_SUP_SWITCH_CTRL_1, 0xBF),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_BIAS_CURR_CTL_2, 0x04),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_MICB_CFILT_1_VAL, 0x60),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_MICB_1_CTL, 0x82),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_TX_COM_BIAS, 0xE0),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_TX_1_EN, 0x32),
+ MSM8X10_WCD_REG_VAL(MSM8X10_WCD_A_TX_2_EN, 0x32),
};
static void msm8x10_wcd_update_reg_defaults(struct snd_soc_codec *codec)
@@ -2338,12 +2347,36 @@
static int msm8x10_wcd_codec_probe(struct snd_soc_codec *codec)
{
+ struct msm8x10_wcd_priv *msm8x10_wcd;
+ int i;
dev_dbg(codec->dev, "%s()\n", __func__);
+ msm8x10_wcd = kzalloc(sizeof(struct msm8x10_wcd_priv), GFP_KERNEL);
+ if (!msm8x10_wcd) {
+ dev_err(codec->dev, "Failed to allocate private data\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < NUM_DECIMATORS; i++) {
+ tx_hpf_work[i].msm8x10_wcd = msm8x10_wcd;
+ tx_hpf_work[i].decimator = i + 1;
+ INIT_DELAYED_WORK(&tx_hpf_work[i].dwork,
+ tx_hpf_corner_freq_callback);
+ }
+
codec->control_data = dev_get_drvdata(codec->dev);
+ snd_soc_codec_set_drvdata(codec, msm8x10_wcd);
+ msm8x10_wcd->codec = codec;
msm8x10_wcd_codec_init_reg(codec);
msm8x10_wcd_update_reg_defaults(codec);
+ msm8x10_wcd->mclk_enabled = false;
+ msm8x10_wcd->bandgap_type = MSM8X10_WCD_BANDGAP_OFF;
+ msm8x10_wcd->clock_active = false;
+ msm8x10_wcd->config_mode_active = false;
+ msm8x10_wcd->mbhc_polling_active = false;
+ mutex_init(&msm8x10_wcd->codec_resource_lock);
+ msm8x10_wcd->codec = codec;
return 0;
}
@@ -2353,18 +2386,6 @@
return 0;
}
-static int msm8x10_wcd_device_init(struct msm8x10_wcd *msm8x10)
-{
-
- mutex_init(&msm8x10->io_lock);
- mutex_init(&msm8x10->xfer_lock);
- mutex_init(&msm8x10->pm_lock);
- msm8x10->wlock_holders = 0;
-
- return 0;
-}
-
-
static struct snd_soc_codec_driver soc_codec_dev_msm8x10_wcd = {
.probe = msm8x10_wcd_codec_probe,
.remove = msm8x10_wcd_codec_remove,
@@ -2387,6 +2408,124 @@
.num_dapm_routes = ARRAY_SIZE(audio_map),
};
+static int msm8x10_wcd_enable_supplies(struct msm8x10_wcd *msm8x10,
+ struct msm8x10_wcd_pdata *pdata)
+{
+ int ret;
+ int i;
+ msm8x10->supplies = kzalloc(sizeof(struct regulator_bulk_data) *
+ ARRAY_SIZE(pdata->regulator),
+ GFP_KERNEL);
+ if (!msm8x10->supplies) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ msm8x10->num_of_supplies = 0;
+
+ if (ARRAY_SIZE(pdata->regulator) > MAX_REGULATOR) {
+		dev_err(msm8x10->dev, "%s: Array size out of bounds\n",
+ __func__);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(pdata->regulator); i++) {
+ if (pdata->regulator[i].name) {
+ msm8x10->supplies[i].supply = pdata->regulator[i].name;
+ msm8x10->num_of_supplies++;
+ }
+ }
+
+ ret = regulator_bulk_get(msm8x10->dev, msm8x10->num_of_supplies,
+ msm8x10->supplies);
+ if (ret != 0) {
+ dev_err(msm8x10->dev, "Failed to get supplies: err = %d\n",
+ ret);
+ goto err_supplies;
+ }
+
+ for (i = 0; i < msm8x10->num_of_supplies; i++) {
+ ret = regulator_set_voltage(msm8x10->supplies[i].consumer,
+ pdata->regulator[i].min_uV, pdata->regulator[i].max_uV);
+ if (ret) {
+ dev_err(msm8x10->dev, "%s: Setting regulator voltage failed for regulator %s err = %d\n",
+ __func__, msm8x10->supplies[i].supply, ret);
+ goto err_get;
+ }
+
+ ret = regulator_set_optimum_mode(msm8x10->supplies[i].consumer,
+ pdata->regulator[i].optimum_uA);
+ if (ret < 0) {
+ dev_err(msm8x10->dev, "%s: Setting regulator optimum mode failed for regulator %s err = %d\n",
+ __func__, msm8x10->supplies[i].supply, ret);
+ goto err_get;
+ }
+ }
+
+ ret = regulator_bulk_enable(msm8x10->num_of_supplies,
+ msm8x10->supplies);
+ if (ret != 0) {
+ dev_err(msm8x10->dev, "Failed to enable supplies: err = %d\n",
+ ret);
+ goto err_configure;
+ }
+ return ret;
+
+err_configure:
+ for (i = 0; i < msm8x10->num_of_supplies; i++) {
+ regulator_set_voltage(msm8x10->supplies[i].consumer, 0,
+ pdata->regulator[i].max_uV);
+ regulator_set_optimum_mode(msm8x10->supplies[i].consumer, 0);
+ }
+err_get:
+ regulator_bulk_free(msm8x10->num_of_supplies, msm8x10->supplies);
+err_supplies:
+ kfree(msm8x10->supplies);
+err:
+ return ret;
+}
+
+static void msm8x10_wcd_disable_supplies(struct msm8x10_wcd *msm8x10,
+ struct msm8x10_wcd_pdata *pdata)
+{
+ int i;
+
+ regulator_bulk_disable(msm8x10->num_of_supplies,
+ msm8x10->supplies);
+ for (i = 0; i < msm8x10->num_of_supplies; i++) {
+ regulator_set_voltage(msm8x10->supplies[i].consumer, 0,
+ pdata->regulator[i].max_uV);
+ regulator_set_optimum_mode(msm8x10->supplies[i].consumer, 0);
+ }
+ regulator_bulk_free(msm8x10->num_of_supplies, msm8x10->supplies);
+ kfree(msm8x10->supplies);
+}
+
+static int msm8x10_wcd_bringup(struct msm8x10_wcd *msm8x10)
+{
+ msm8x10->read_dev = msm8x10_wcd_reg_read;
+ msm8x10->write_dev(msm8x10, MSM8X10_WCD_A_CDC_RST_CTL, 0x02);
+ msm8x10->write_dev(msm8x10, MSM8X10_WCD_A_CHIP_CTL, 0x00);
+ usleep_range(5000, 5000);
+ msm8x10->write_dev(msm8x10, MSM8X10_WCD_A_CDC_RST_CTL, 0x03);
+ return 0;
+}
+
+static int msm8x10_wcd_device_init(struct msm8x10_wcd *msm8x10)
+{
+ mutex_init(&msm8x10->io_lock);
+ mutex_init(&msm8x10->xfer_lock);
+ mutex_init(&msm8x10->pm_lock);
+ msm8x10->wlock_holders = 0;
+
+ iowrite32(0x03C00000, ioremap(0xFD512050, 4));
+ usleep_range(5000, 5000);
+
+ msm8x10_wcd_bringup(msm8x10);
+ return 0;
+}
+
static int __devinit msm8x10_wcd_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -2401,7 +2540,7 @@
if (device_id > 0) {
msm8x10_wcd_modules[device_id++].client = client;
- return ret;
+ goto rtn;
}
dev = &client->dev;
@@ -2421,19 +2560,25 @@
dev_err(&client->dev,
"%s: error, allocation failed\n", __func__);
ret = -ENOMEM;
- goto fail;
+ goto rtn;
}
msm8x10->dev = &client->dev;
msm8x10_wcd_modules[device_id++].client = client;
msm8x10->read_dev = msm8x10_wcd_reg_read;
msm8x10->write_dev = msm8x10_wcd_reg_write;
+ ret = msm8x10_wcd_enable_supplies(msm8x10, pdata);
+ if (ret) {
+		dev_err(&client->dev, "%s: Failed to enable codec supplies\n",
+ __func__);
+ goto err_codec;
+ }
ret = msm8x10_wcd_device_init(msm8x10);
if (ret) {
dev_err(&client->dev,
"%s:msm8x10_wcd_device_init failed with error %d\n",
__func__, ret);
- goto fail;
+ goto err_supplies;
}
dev_set_drvdata(&client->dev, msm8x10);
ret = snd_soc_register_codec(&client->dev, &soc_codec_dev_msm8x10_wcd,
@@ -2443,7 +2588,14 @@
dev_err(&client->dev,
"%s:snd_soc_register_codec failed with error %d\n",
__func__, ret);
-fail:
+ else
+ goto rtn;
+
+err_supplies:
+ msm8x10_wcd_disable_supplies(msm8x10, pdata);
+err_codec:
+ kfree(msm8x10);
+rtn:
return ret;
}
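
The probe path above brings the codec rails up in the usual regulator-bulk order: look the consumers up, constrain voltage and load on each rail, then enable them as a group and unwind in reverse on failure. A minimal sketch of that order, using hypothetical supply names and voltages rather than the values carried in msm8x10_wcd_pdata, could look like this:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/regulator/consumer.h>

/* Hypothetical two-rail table; real names/voltages come from pdata */
static struct regulator_bulk_data wcd_supplies[] = {
	{ .supply = "cdc-vdda" },
	{ .supply = "cdc-vddio" },
};

static int wcd_supplies_up(struct device *dev)
{
	int i, ret;

	ret = regulator_bulk_get(dev, ARRAY_SIZE(wcd_supplies), wcd_supplies);
	if (ret)
		return ret;

	for (i = 0; i < ARRAY_SIZE(wcd_supplies); i++) {
		/* Constrain each rail before drawing current from it */
		ret = regulator_set_voltage(wcd_supplies[i].consumer,
					    1800000, 1800000);
		if (ret)
			goto put;
		/* Legacy load hint; newer kernels use regulator_set_load() */
		ret = regulator_set_optimum_mode(wcd_supplies[i].consumer,
						 100000);
		if (ret < 0)
			goto put;
	}

	/* Switch every rail on in one call, as the probe path does */
	ret = regulator_bulk_enable(ARRAY_SIZE(wcd_supplies), wcd_supplies);
	if (ret)
		goto put;
	return 0;

put:
	regulator_bulk_free(ARRAY_SIZE(wcd_supplies), wcd_supplies);
	return ret;
}
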
diff --git a/sound/soc/codecs/msm8x10-wcd.h b/sound/soc/codecs/msm8x10-wcd.h
index 44e8a6d..d250e0a 100644
--- a/sound/soc/codecs/msm8x10-wcd.h
+++ b/sound/soc/codecs/msm8x10-wcd.h
@@ -199,7 +199,7 @@
int (*read_dev)(struct msm8x10_wcd *msm8x10,
unsigned short reg, unsigned int *val);
int (*write_dev)(struct msm8x10_wcd *msm8x10,
- unsigned short reg, unsigned int val);
+ unsigned short reg, u8 val);
u32 num_of_supplies;
struct regulator_bulk_data *supplies;
diff --git a/sound/soc/codecs/wcd9304-tables.c b/sound/soc/codecs/wcd9304-tables.c
index 83c0c1d..7ec0152 100644
--- a/sound/soc/codecs/wcd9304-tables.c
+++ b/sound/soc/codecs/wcd9304-tables.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -235,8 +235,24 @@
[SITAR_A_CDC_RX1_B4_CTL] = SITAR_A_CDC_RX1_B4_CTL__POR,
[SITAR_A_CDC_RX1_B5_CTL] = SITAR_A_CDC_RX1_B5_CTL__POR,
[SITAR_A_CDC_RX1_B6_CTL] = SITAR_A_CDC_RX1_B6_CTL__POR,
+ [SITAR_A_CDC_RX2_B1_CTL] = SITAR_A_CDC_RX2_B1_CTL__POR,
+ [SITAR_A_CDC_RX2_B2_CTL] = SITAR_A_CDC_RX2_B2_CTL__POR,
+ [SITAR_A_CDC_RX2_B3_CTL] = SITAR_A_CDC_RX2_B3_CTL__POR,
+ [SITAR_A_CDC_RX2_B4_CTL] = SITAR_A_CDC_RX2_B4_CTL__POR,
+ [SITAR_A_CDC_RX2_B5_CTL] = SITAR_A_CDC_RX2_B5_CTL__POR,
+ [SITAR_A_CDC_RX2_B6_CTL] = SITAR_A_CDC_RX2_B6_CTL__POR,
+ [SITAR_A_CDC_RX3_B1_CTL] = SITAR_A_CDC_RX3_B1_CTL__POR,
+ [SITAR_A_CDC_RX3_B2_CTL] = SITAR_A_CDC_RX3_B2_CTL__POR,
+ [SITAR_A_CDC_RX3_B3_CTL] = SITAR_A_CDC_RX3_B3_CTL__POR,
+ [SITAR_A_CDC_RX3_B4_CTL] = SITAR_A_CDC_RX3_B4_CTL__POR,
+ [SITAR_A_CDC_RX3_B5_CTL] = SITAR_A_CDC_RX3_B5_CTL__POR,
+ [SITAR_A_CDC_RX3_B6_CTL] = SITAR_A_CDC_RX3_B6_CTL__POR,
[SITAR_A_CDC_RX1_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX1_VOL_CTL_B1_CTL__POR,
[SITAR_A_CDC_RX1_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX1_VOL_CTL_B2_CTL__POR,
+ [SITAR_A_CDC_RX2_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX2_VOL_CTL_B1_CTL__POR,
+ [SITAR_A_CDC_RX2_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX2_VOL_CTL_B2_CTL__POR,
+ [SITAR_A_CDC_RX3_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX3_VOL_CTL_B1_CTL__POR,
+ [SITAR_A_CDC_RX3_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX3_VOL_CTL_B2_CTL__POR,
[SITAR_A_CDC_CLK_ANC_RESET_CTL] = SITAR_A_CDC_CLK_ANC_RESET_CTL__POR,
[SITAR_A_CDC_CLK_RX_RESET_CTL] = SITAR_A_CDC_CLK_RX_RESET_CTL__POR,
[SITAR_A_CDC_CLK_TX_RESET_B1_CTL] =
@@ -322,6 +338,15 @@
[SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS] =
SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS__POR,
[SITAR_A_CDC_COMP1_FS_CFG] = SITAR_A_CDC_COMP1_FS_CFG__POR,
+ [SITAR_A_CDC_COMP2_B1_CTL] = SITAR_A_CDC_COMP2_B1_CTL__POR,
+ [SITAR_A_CDC_COMP2_B2_CTL] = SITAR_A_CDC_COMP2_B2_CTL__POR,
+ [SITAR_A_CDC_COMP2_B3_CTL] = SITAR_A_CDC_COMP2_B3_CTL__POR,
+ [SITAR_A_CDC_COMP2_B4_CTL] = SITAR_A_CDC_COMP2_B4_CTL__POR,
+ [SITAR_A_CDC_COMP2_B5_CTL] = SITAR_A_CDC_COMP2_B5_CTL__POR,
+ [SITAR_A_CDC_COMP2_B6_CTL] = SITAR_A_CDC_COMP2_B6_CTL__POR,
+ [SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS] =
+ SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS__POR,
+ [SITAR_A_CDC_COMP2_FS_CFG] = SITAR_A_CDC_COMP2_FS_CFG__POR,
[SITAR_A_CDC_CONN_RX1_B1_CTL] = SITAR_A_CDC_CONN_RX1_B1_CTL__POR,
[SITAR_A_CDC_CONN_RX1_B2_CTL] = SITAR_A_CDC_CONN_RX1_B2_CTL__POR,
[SITAR_A_CDC_CONN_RX1_B3_CTL] = SITAR_A_CDC_CONN_RX1_B3_CTL__POR,
diff --git a/sound/soc/codecs/wcd9304.c b/sound/soc/codecs/wcd9304.c
index f5f4e23..616f8d5 100644
--- a/sound/soc/codecs/wcd9304.c
+++ b/sound/soc/codecs/wcd9304.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -81,6 +81,11 @@
#define SITAR_OCP_ATTEMPT 1
+#define COMP_DIGITAL_DB_GAIN_APPLY(a, b) \
+	(((a) <= 0) ? ((a) - (b)) : (a))
+/* The wait time value comes from codec HW specification */
+#define COMP_BRINGUP_WAIT_TIME 3000
+
#define SITAR_MCLK_RATE_12288KHZ 12288000
#define SITAR_MCLK_RATE_9600KHZ 9600000
@@ -148,6 +153,22 @@
BAND_MAX,
};
+enum {
+ COMPANDER_1 = 0,
+ COMPANDER_2,
+ COMPANDER_MAX,
+};
+
+enum {
+ COMPANDER_FS_8KHZ = 0,
+ COMPANDER_FS_16KHZ,
+ COMPANDER_FS_32KHZ,
+ COMPANDER_FS_48KHZ,
+ COMPANDER_FS_96KHZ,
+ COMPANDER_FS_192KHZ,
+ COMPANDER_FS_MAX,
+};
+
/* Flags to track of PA and DAC state.
* PA and DAC should be tracked separately as AUXPGA loopback requires
* only PA to be turned on without DAC being on. */
@@ -158,6 +179,33 @@
SITAR_HPHR_DAC_OFF_ACK
};
+struct comp_sample_dependent_params {
+ u32 peak_det_timeout;
+ u32 rms_meter_div_fact;
+ u32 rms_meter_resamp_fact;
+};
+
+struct comp_dgtl_gain_offset {
+ u8 whole_db_gain;
+ u8 half_db_gain;
+};
+
+static const struct comp_dgtl_gain_offset comp_dgtl_gain[] = {
+ {0, 0},
+ {1, 1},
+ {3, 0},
+ {4, 1},
+ {6, 0},
+ {7, 1},
+ {9, 0},
+ {10, 1},
+ {12, 0},
+ {13, 1},
+ {15, 0},
+ {16, 1},
+ {18, 0},
+};
+
/* Data used by MBHC */
struct mbhc_internal_cal_data {
u16 dce_z;
@@ -273,6 +321,11 @@
/* num of slim ports required */
struct wcd9xxx_codec_dai_data dai[NUM_CODEC_DAIS];
+ /*compander*/
+ int comp_enabled[COMPANDER_MAX];
+ u32 comp_fs[COMPANDER_MAX];
+ u8 comp_gain_offset[NUM_INTERPOLATORS];
+
/* Currently, only used for mbhc purpose, to protect
* concurrent execution of mbhc threaded irq handlers and
* kill race between DAPM and MBHC.But can serve as a
@@ -295,6 +348,47 @@
struct sitar_priv *debug_sitar_priv;
#endif
+static const int comp_rx_path[] = {
+ COMPANDER_2,
+ COMPANDER_1,
+ COMPANDER_1,
+ COMPANDER_MAX,
+};
+
+static const struct comp_sample_dependent_params
+ comp_samp_params[COMPANDER_FS_MAX] = {
+ {
+ .peak_det_timeout = 0x6,
+ .rms_meter_div_fact = 0x9 << 4,
+ .rms_meter_resamp_fact = 0x06,
+ },
+ {
+ .peak_det_timeout = 0x7,
+ .rms_meter_div_fact = 0xA << 4,
+ .rms_meter_resamp_fact = 0x0C,
+ },
+ {
+ .peak_det_timeout = 0x8,
+ .rms_meter_div_fact = 0xB << 4,
+ .rms_meter_resamp_fact = 0x30,
+ },
+ {
+ .peak_det_timeout = 0x9,
+ .rms_meter_div_fact = 0xB << 4,
+ .rms_meter_resamp_fact = 0x28,
+ },
+ {
+ .peak_det_timeout = 0xA,
+ .rms_meter_div_fact = 0xC << 4,
+ .rms_meter_resamp_fact = 0x50,
+ },
+ {
+ .peak_det_timeout = 0xB,
+ .rms_meter_div_fact = 0xC << 4,
+ .rms_meter_resamp_fact = 0x50,
+ },
+};
+
static int sitar_get_anc_slot(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
@@ -539,6 +633,268 @@
return 0;
}
+static int sitar_compander_gain_offset(
+ struct snd_soc_codec *codec, u32 enable,
+ unsigned int pa_reg, unsigned int vol_reg,
+ int mask, int event,
+ struct comp_dgtl_gain_offset *gain_offset,
+ int index)
+{
+ unsigned int pa_gain = snd_soc_read(codec, pa_reg);
+ unsigned int digital_vol = snd_soc_read(codec, vol_reg);
+ int pa_mode = pa_gain & mask;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: pa_gain(0x%x=0x%x)digital_vol(0x%x=0x%x)event(0x%x) index(%d)\n",
+ __func__, pa_reg, pa_gain, vol_reg, digital_vol, event, index);
+ if (((pa_gain & 0xF) + 1) > ARRAY_SIZE(comp_dgtl_gain) ||
+ (index >= ARRAY_SIZE(sitar->comp_gain_offset))) {
+ pr_err("%s: Out of array boundary\n", __func__);
+ return -EINVAL;
+ }
+
+ if (SND_SOC_DAPM_EVENT_ON(event) && (enable != 0)) {
+ gain_offset->whole_db_gain = COMP_DIGITAL_DB_GAIN_APPLY(
+ (digital_vol - comp_dgtl_gain[pa_gain & 0xF].whole_db_gain),
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain);
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain =
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain;
+ sitar->comp_gain_offset[index] = digital_vol -
+				gain_offset->whole_db_gain;
+ }
+ if (SND_SOC_DAPM_EVENT_OFF(event) && (pa_mode == 0)) {
+ gain_offset->whole_db_gain = digital_vol +
+ sitar->comp_gain_offset[index];
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain = 0;
+ }
+
+ pr_debug("%s: half_db_gain(%d)whole_db_gain(0x%x)comp_gain_offset[%d](%d)\n",
+ __func__, gain_offset->half_db_gain,
+ gain_offset->whole_db_gain, index,
+ sitar->comp_gain_offset[index]);
+ return 0;
+}
+
+static int sitar_config_gain_compander(
+ struct snd_soc_codec *codec,
+ u32 compander, u32 enable, int event)
+{
+ int value = 0;
+ int mask = 1 << 4;
+ struct comp_dgtl_gain_offset gain_offset = {0, 0};
+ if (compander >= COMPANDER_MAX) {
+ pr_err("%s: Error, invalid compander channel\n", __func__);
+ return -EINVAL;
+ }
+
+ if ((enable == 0) || SND_SOC_DAPM_EVENT_OFF(event))
+ value = 1 << 4;
+
+ if (compander == COMPANDER_1) {
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_HPH_L_GAIN,
+ SITAR_A_CDC_RX2_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 1);
+ snd_soc_update_bits(codec, SITAR_A_RX_HPH_L_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX2_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX2_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_HPH_R_GAIN,
+ SITAR_A_CDC_RX3_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 2);
+ snd_soc_update_bits(codec, SITAR_A_RX_HPH_R_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX3_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX3_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ } else if (compander == COMPANDER_2) {
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_LINE_1_GAIN,
+ SITAR_A_CDC_RX1_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 0);
+ snd_soc_update_bits(codec, SITAR_A_RX_LINE_1_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_RX_LINE_2_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX1_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX1_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ }
+ return 0;
+}
+
+static int sitar_get_compander(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ int comp = ((struct soc_multi_mixer_control *)
+ kcontrol->private_value)->shift;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+
+ ucontrol->value.integer.value[0] = sitar->comp_enabled[comp];
+
+ return 0;
+}
+
+static int sitar_set_compander(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ int comp = ((struct soc_multi_mixer_control *)
+ kcontrol->private_value)->shift;
+ int value = ucontrol->value.integer.value[0];
+
+ pr_debug("%s: compander #%d enable %d\n",
+ __func__, comp + 1, value);
+ if (value == sitar->comp_enabled[comp]) {
+ pr_debug("%s: compander #%d enable %d no change\n",
+ __func__, comp + 1, value);
+ return 0;
+ }
+ sitar->comp_enabled[comp] = value;
+ return 0;
+}
+
+static int sitar_config_compander(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ u32 rate = sitar->comp_fs[w->shift];
+ u32 value;
+
+ pr_debug("%s: compander #%d enable %d event %d widget name %s\n",
+ __func__, w->shift + 1,
+		sitar->comp_enabled[w->shift], event, w->name);
+ if (sitar->comp_enabled[w->shift] == 0)
+ goto rtn;
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ /* Update compander sample rate */
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_FS_CFG +
+ w->shift * 8, 0x07, rate);
+ /* Enable compander clock */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_RX_B2_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift, 0);
+ sitar_config_gain_compander(codec, w->shift, 1, event);
+ /* Compander enable -> 0x370/0x378 */
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x03);
+ /* Update the RMS meter resampling */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF, 0x01);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0, 0x50);
+ usleep_range(COMP_BRINGUP_WAIT_TIME, COMP_BRINGUP_WAIT_TIME);
+ break;
+ case SND_SOC_DAPM_POST_PMU:
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLSG_CTL,
+ 0x11, 0x00);
+ if (w->shift == COMPANDER_1)
+ value = 0x22;
+ else
+ value = 0x11;
+ snd_soc_write(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL, value);
+
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0x0F,
+ comp_samp_params[rate].peak_det_timeout);
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0,
+ comp_samp_params[rate].rms_meter_div_fact);
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF,
+ comp_samp_params[rate].rms_meter_resamp_fact);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x00);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift, 0);
+ /* Disable compander clock */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_RX_B2_CTL,
+ 1 << w->shift,
+ 0);
+ /* Restore the gain */
+ sitar_config_gain_compander(codec, w->shift,
+ sitar->comp_enabled[w->shift],
+ event);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLSG_CTL,
+ 0x11, 0x11);
+ snd_soc_write(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL, 0x14);
+ break;
+ }
+rtn:
+ return 0;
+}
+
+static int sitar_codec_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ pr_debug("%s: compander#1->enable(%d) compander#2->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, sitar->comp_enabled[COMPANDER_1],
+ sitar->comp_enabled[COMPANDER_2],
+ SITAR_A_CDC_RX1_B6_CTL + w->shift * 8,
+ snd_soc_read(codec, SITAR_A_CDC_RX1_B6_CTL + w->shift * 8),
+ event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (sitar->comp_enabled[COMPANDER_1] ||
+ sitar->comp_enabled[COMPANDER_2])
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL +
+ w->shift * 8,
+ 1 << 5, 0);
+ else
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL +
+ w->shift * 8,
+ 1 << 5, 0x20);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL + w->shift * 8,
+ 1 << 5, 0);
+ break;
+ }
+ return 0;
+}
+
static const char * const sitar_ear_pa_gain_text[] = {"POS_6_DB",
"POS_2_DB", "NEG_2P5_DB", "NEG_12_DB"};
@@ -652,6 +1008,10 @@
sitar_get_iir_band_audio_mixer, sitar_put_iir_band_audio_mixer),
SOC_SINGLE_MULTI_EXT("IIR2 Band5", IIR2, BAND5, 255, 0, 5,
sitar_get_iir_band_audio_mixer, sitar_put_iir_band_audio_mixer),
+ SOC_SINGLE_EXT("COMP1 Switch", SND_SOC_NOPM, COMPANDER_1, 1, 0,
+ sitar_get_compander, sitar_set_compander),
+ SOC_SINGLE_EXT("COMP2 Switch", SND_SOC_NOPM, COMPANDER_2, 1, 0,
+ sitar_get_compander, sitar_set_compander),
};
static const char *rx_mix1_text[] = {
@@ -1267,9 +1627,14 @@
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
u16 lineout_gain_reg;
- pr_debug("%s %d %s\n", __func__, event, w->name);
+ pr_debug("%s %d %s comp2 enable %d\n", __func__, event, w->name,
+ sitar->comp_enabled[COMPANDER_2]);
+
+ if (sitar->comp_enabled[COMPANDER_2])
+ goto rtn;
switch (w->shift) {
case 0:
@@ -1311,6 +1676,7 @@
snd_soc_update_bits(codec, lineout_gain_reg, 0x10, 0x00);
break;
}
+rtn:
return 0;
}
@@ -2010,16 +2376,22 @@
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
- pr_debug("%s %s %d\n", __func__, w->name, event);
+ pr_debug("%s %s %d comp#1 enable %d\n", __func__,
+ w->name, event, sitar->comp_enabled[COMPANDER_1]);
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
if (w->reg == SITAR_A_RX_HPH_L_DAC_CTL) {
- snd_soc_update_bits(codec, SITAR_A_CDC_CONN_CLSG_CTL,
- 0x30, 0x20);
- snd_soc_update_bits(codec, SITAR_A_CDC_CONN_CLSG_CTL,
- 0x0C, 0x08);
+ if (!sitar->comp_enabled[COMPANDER_1]) {
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL,
+ 0x30, 0x20);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL,
+ 0x0C, 0x08);
+ }
}
snd_soc_update_bits(codec, w->reg, 0x40, 0x40);
break;
@@ -2338,9 +2710,15 @@
SND_SOC_DAPM_MUX("DAC4 MUX", SND_SOC_NOPM, 0, 0,
&rx_dac4_mux),
- SND_SOC_DAPM_MIXER("RX1 CHAIN", SITAR_A_CDC_RX1_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX2 CHAIN", SITAR_A_CDC_RX2_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX3 CHAIN", SITAR_A_CDC_RX3_B6_CTL, 5, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER_E("RX1 CHAIN", SND_SOC_NOPM, 0, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX2 CHAIN", SND_SOC_NOPM, 1, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX3 CHAIN", SND_SOC_NOPM, 2, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
SND_SOC_DAPM_MUX("RX1 MIX1 INP1", SND_SOC_NOPM, 0, 0,
&rx_mix1_inp1_mux),
@@ -2463,6 +2841,13 @@
sitar_codec_enable_dmic, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_SUPPLY("COMP1_CLK", SND_SOC_NOPM, COMPANDER_1, 0,
+ sitar_config_compander, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_SUPPLY("COMP2_CLK", SND_SOC_NOPM, COMPANDER_2, 0,
+ sitar_config_compander, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+
/* Sidetone */
SND_SOC_DAPM_MUX("IIR1 INP1 MUX", SND_SOC_NOPM, 0, 0, &iir1_inp1_mux),
SND_SOC_DAPM_PGA("IIR1", SITAR_A_CDC_CLK_SD_CTL, 0, 0, NULL, 0),
@@ -2592,6 +2977,10 @@
{"SLIM RX3", NULL, "SLIM RX3 MUX"},
{"SLIM RX4", NULL, "SLIM RX4 MUX"},
+ {"RX1 MIX1", NULL, "COMP2_CLK"},
+ {"RX2 MIX1", NULL, "COMP1_CLK"},
+ {"RX3 MIX1", NULL, "COMP1_CLK"},
+
/* Slimbus port 5 is non functional in Sitar 1.0 */
{"RX1 MIX1 INP1", "RX1", "SLIM RX1"},
{"RX1 MIX1 INP1", "RX2", "SLIM RX2"},
@@ -2733,6 +3122,10 @@
if (reg == SITAR_A_CDC_RX1_VOL_CTL_B2_CTL + (8 * i))
return 1;
}
+
+ if ((reg == SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS) ||
+ (reg == SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS))
+ return 1;
return 0;
}
@@ -3191,6 +3584,7 @@
struct snd_soc_codec *codec = dai->codec;
struct sitar_priv *sitar = snd_soc_codec_get_drvdata(dai->codec);
u8 path, shift;
+ u32 compander_fs;
u16 tx_fs_reg, rx_fs_reg;
u8 tx_fs_rate, rx_fs_rate, rx_state, tx_state;
@@ -3200,18 +3594,32 @@
case 8000:
tx_fs_rate = 0x00;
rx_fs_rate = 0x00;
+ compander_fs = COMPANDER_FS_8KHZ;
break;
case 16000:
tx_fs_rate = 0x01;
rx_fs_rate = 0x20;
+ compander_fs = COMPANDER_FS_16KHZ;
break;
case 32000:
tx_fs_rate = 0x02;
rx_fs_rate = 0x40;
+ compander_fs = COMPANDER_FS_32KHZ;
break;
case 48000:
tx_fs_rate = 0x03;
rx_fs_rate = 0x60;
+ compander_fs = COMPANDER_FS_48KHZ;
+ break;
+ case 96000:
+ tx_fs_rate = 0x04;
+ rx_fs_rate = 0x80;
+ compander_fs = COMPANDER_FS_96KHZ;
+ break;
+ case 192000:
+ tx_fs_rate = 0x05;
+ rx_fs_rate = 0xa0;
+ compander_fs = COMPANDER_FS_192KHZ;
break;
default:
pr_err("%s: Invalid sampling rate %d\n", __func__,
@@ -3285,6 +3693,9 @@
+ (BITS_PER_REG*(path-1));
snd_soc_update_bits(codec, rx_fs_reg,
0xE0, rx_fs_rate);
+ if (comp_rx_path[shift] < COMPANDER_MAX)
+ sitar->comp_fs[comp_rx_path[shift]]
+ = compander_fs;
}
}
if (sitar->intf_type == WCD9XXX_INTERFACE_TYPE_I2C) {
@@ -5484,6 +5895,11 @@
if (sitar->intf_type == WCD9XXX_INTERFACE_TYPE_I2C)
sitar_i2c_codec_init_reg(codec);
+ for (i = 0; i < COMPANDER_MAX; i++) {
+ sitar->comp_enabled[i] = 0;
+ sitar->comp_fs[i] = COMPANDER_FS_48KHZ;
+ }
+
ret = sitar_handle_pdata(sitar);
if (IS_ERR_VALUE(ret)) {
pr_err("%s: bad pdata\n", __func__);
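
The COMP1_CLK/COMP2_CLK widgets added above follow a standard ASoC pattern: a DAPM supply widget whose event callback runs around the power-up of any path routed to it, so the compander clock is only touched while an RX chain that uses it is active. A rough, stripped-down sketch of that pattern, with made-up widget and route names, is:

#include <sound/soc.h>
#include <sound/soc-dapm.h>

static int demo_comp_clk_event(struct snd_soc_dapm_widget *w,
			       struct snd_kcontrol *kcontrol, int event)
{
	/* Enable or disable the clock selected by w->shift here */
	return 0;
}

static const struct snd_soc_dapm_widget demo_widgets[] = {
	SND_SOC_DAPM_SUPPLY("DEMO COMP_CLK", SND_SOC_NOPM, 0, 0,
			    demo_comp_clk_event,
			    SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
};

static const struct snd_soc_dapm_route demo_routes[] = {
	/* The RX path powers up only after its clock supply is on */
	{"RX1 MIX1", NULL, "DEMO COMP_CLK"},
};
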
diff --git a/sound/soc/codecs/wcd9306.c b/sound/soc/codecs/wcd9306.c
index a7069a6..b71dd65 100644
--- a/sound/soc/codecs/wcd9306.c
+++ b/sound/soc/codecs/wcd9306.c
@@ -461,8 +461,8 @@
kcontrol->private_value)->shift;
ucontrol->value.integer.value[0] =
- snd_soc_read(codec, (TAPAN_A_CDC_IIR1_CTL + 16 * iir_idx)) &
- (1 << band_idx);
+ (snd_soc_read(codec, (TAPAN_A_CDC_IIR1_CTL + 16 * iir_idx)) &
+ (1 << band_idx)) != 0;
dev_dbg(codec->dev, "%s: IIR #%d band #%d enable %d\n", __func__,
iir_idx, band_idx,
@@ -485,23 +485,54 @@
snd_soc_update_bits(codec, (TAPAN_A_CDC_IIR1_CTL + 16 * iir_idx),
(1 << band_idx), (value << band_idx));
- dev_dbg(codec->dev, "%s: IIR #%d band #%d enable %d\n", __func__,
- iir_idx, band_idx, value);
+ pr_debug("%s: IIR #%d band #%d enable %d\n", __func__,
+ iir_idx, band_idx,
+ ((snd_soc_read(codec, (TAPAN_A_CDC_IIR1_CTL + 16 * iir_idx)) &
+ (1 << band_idx)) != 0));
return 0;
}
static uint32_t get_iir_band_coeff(struct snd_soc_codec *codec,
int iir_idx, int band_idx,
int coeff_idx)
{
+ uint32_t value = 0;
+
/* Address does not automatically update if reading */
snd_soc_write(codec,
(TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
- (band_idx * BAND_MAX + coeff_idx) & 0x1F);
+ ((band_idx * BAND_MAX + coeff_idx)
+ * sizeof(uint32_t)) & 0x7F);
+
+ value |= snd_soc_read(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx));
+
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
+ ((band_idx * BAND_MAX + coeff_idx)
+ * sizeof(uint32_t) + 1) & 0x7F);
+
+ value |= (snd_soc_read(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx)) << 8);
+
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
+ ((band_idx * BAND_MAX + coeff_idx)
+ * sizeof(uint32_t) + 2) & 0x7F);
+
+ value |= (snd_soc_read(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx)) << 16);
+
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
+ ((band_idx * BAND_MAX + coeff_idx)
+ * sizeof(uint32_t) + 3) & 0x7F);
/* Mask bits top 2 bits since they are reserved */
- return ((snd_soc_read(codec,
- (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx)) << 24)) &
- 0x3FFFFFFF;
+ value |= ((snd_soc_read(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx)) & 0x3F) << 24);
+
+ return value;
+
}
static int tapan_get_iir_band_audio_mixer(
@@ -545,13 +576,19 @@
static void set_iir_band_coeff(struct snd_soc_codec *codec,
int iir_idx, int band_idx,
- int coeff_idx, uint32_t value)
+ uint32_t value)
{
- /* Mask top 3 bits, 6-8 are reserved */
- /* Update address manually each time */
snd_soc_write(codec,
- (TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
- (band_idx * BAND_MAX + coeff_idx) & 0x1F);
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx),
+ (value & 0xFF));
+
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx),
+ (value >> 8) & 0xFF);
+
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B2_CTL + 16 * iir_idx),
+ (value >> 16) & 0xFF);
/* Mask top 2 bits, 7-8 are reserved */
snd_soc_write(codec,
@@ -570,15 +607,21 @@
int band_idx = ((struct soc_multi_mixer_control *)
kcontrol->private_value)->shift;
- set_iir_band_coeff(codec, iir_idx, band_idx, 0,
+	/* Mask top bit; it is reserved */
+ /* Updates addr automatically for each B2 write */
+ snd_soc_write(codec,
+ (TAPAN_A_CDC_IIR1_COEF_B1_CTL + 16 * iir_idx),
+ (band_idx * BAND_MAX * sizeof(uint32_t)) & 0x7F);
+
+ set_iir_band_coeff(codec, iir_idx, band_idx,
ucontrol->value.integer.value[0]);
- set_iir_band_coeff(codec, iir_idx, band_idx, 1,
+ set_iir_band_coeff(codec, iir_idx, band_idx,
ucontrol->value.integer.value[1]);
- set_iir_band_coeff(codec, iir_idx, band_idx, 2,
+ set_iir_band_coeff(codec, iir_idx, band_idx,
ucontrol->value.integer.value[2]);
- set_iir_band_coeff(codec, iir_idx, band_idx, 3,
+ set_iir_band_coeff(codec, iir_idx, band_idx,
ucontrol->value.integer.value[3]);
- set_iir_band_coeff(codec, iir_idx, band_idx, 4,
+ set_iir_band_coeff(codec, iir_idx, band_idx,
ucontrol->value.integer.value[4]);
dev_dbg(codec->dev, "%s: IIR #%d band #%d b0 = 0x%x\n"
@@ -4371,6 +4414,12 @@
return ret;
}
+static void tapan_cleanup_irqs(struct tapan_priv *tapan)
+{
+ struct snd_soc_codec *codec = tapan->codec;
+ wcd9xxx_free_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS, tapan);
+}
+
int tapan_hs_detect(struct snd_soc_codec *codec,
struct wcd9xxx_mbhc_config *mbhc_cfg)
{
@@ -4512,6 +4561,10 @@
mutex_unlock(&dapm->codec->mutex);
codec->ignore_pmdown_time = 1;
+
+ if (ret)
+ tapan_cleanup_irqs(tapan);
+
return ret;
err_pdata:
@@ -4532,6 +4585,9 @@
wcd9xxx_resmgr_put_bandgap(&tapan->resmgr,
WCD9XXX_BANDGAP_AUDIO_MODE);
WCD9XXX_BCL_UNLOCK(&tapan->resmgr);
+
+ tapan_cleanup_irqs(tapan);
+
/* cleanup MBHC */
wcd9xxx_mbhc_deinit(&tapan->mbhc);
/* cleanup resmgr */
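
The coefficient accessors reworked above use an indirect addressing scheme: a pointer register (IIR1_COEF_B1_CTL) selects a byte offset into the coefficient memory and a data register (IIR1_COEF_B2_CTL) carries the bytes, with the pointer auto-incrementing on writes but not on reads. A condensed sketch of the read side, assuming the same pointer/data pairing and 30-bit coefficients, is:

#include <linux/types.h>
#include <sound/soc.h>

static u32 demo_read_coeff(struct snd_soc_codec *codec,
			   unsigned int b1_reg, unsigned int b2_reg,
			   unsigned int coeff_idx)
{
	u32 value = 0;
	int byte;

	for (byte = 0; byte < 4; byte++) {
		/* Reads do not advance the pointer, so set it each time */
		snd_soc_write(codec, b1_reg,
			      (coeff_idx * sizeof(u32) + byte) & 0x7F);
		value |= snd_soc_read(codec, b2_reg) << (8 * byte);
	}

	/* Top two bits of the last byte are reserved */
	return value & 0x3FFFFFFF;
}
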
diff --git a/sound/soc/codecs/wcd9310.c b/sound/soc/codecs/wcd9310.c
index b8a4a86..29703b9 100644
--- a/sound/soc/codecs/wcd9310.c
+++ b/sound/soc/codecs/wcd9310.c
@@ -55,9 +55,12 @@
#define MBHC_FW_READ_ATTEMPTS 15
#define MBHC_FW_READ_TIMEOUT 2000000
#define MBHC_VDDIO_SWITCH_WAIT_MS 10
+#define COMP_DIGITAL_DB_GAIN_APPLY(a, b) \
+	(((a) <= 0) ? ((a) - (b)) : (a))
#define SLIM_CLOSE_TIMEOUT 1000
-
+/* The wait time value comes from codec HW specification */
+#define COMP_BRINGUP_WAIT_TIME 2000
enum {
MBHC_USE_HPHL_TRIGGER = 1,
MBHC_USE_MB_TRIGGER = 2
@@ -99,9 +102,7 @@
RX_MIX1_INP_SEL_RX6,
RX_MIX1_INP_SEL_RX7,
};
-
-#define TABLA_COMP_DIGITAL_GAIN_HP_OFFSET 3
-#define TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET 6
+#define MAX_PA_GAIN_OPTIONS 13
#define TABLA_MCLK_RATE_12288KHZ 12288000
#define TABLA_MCLK_RATE_9600KHZ 9600000
@@ -220,6 +221,28 @@
u32 shutdown_timeout;
};
+struct comp_dgtl_gain_offset {
+ u8 whole_db_gain;
+ u8 half_db_gain;
+};
+
+static const struct comp_dgtl_gain_offset
+ comp_dgtl_gain[MAX_PA_GAIN_OPTIONS] = {
+ {0, 0},
+ {1, 1},
+ {3, 0},
+ {4, 1},
+ {6, 0},
+ {7, 1},
+ {9, 0},
+ {10, 1},
+ {12, 0},
+ {13, 1},
+ {15, 0},
+ {16, 1},
+ {18, 0},
+};
+
/* Data used by MBHC */
struct mbhc_internal_cal_data {
u16 dce_z;
@@ -377,6 +400,7 @@
/*compander*/
int comp_enabled[COMPANDER_MAX];
u32 comp_fs[COMPANDER_MAX];
+ u8 comp_gain_offset[TABLA_SB_PGD_MAX_NUMBER_OF_RX_SLAVE_DEV_PORTS - 1];
/* Maintain the status of AUX PGA */
int aux_pga_cnt;
@@ -547,7 +571,10 @@
{
struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ mutex_lock(&codec->dapm.codec->mutex);
ucontrol->value.integer.value[0] = (tabla->anc_func == true ? 1 : 0);
+ mutex_unlock(&codec->dapm.codec->mutex);
return 0;
}
@@ -802,34 +829,51 @@
static int tabla_compander_gain_offset(
struct snd_soc_codec *codec, u32 enable,
- unsigned int reg, int mask, int event, u32 comp)
+ unsigned int pa_reg, unsigned int vol_reg,
+ int mask, int event,
+ struct comp_dgtl_gain_offset *gain_offset,
+ int index)
{
- int pa_mode = snd_soc_read(codec, reg) & mask;
- int gain_offset = 0;
- /* if PMU && enable is 1-> offset is 3
- * if PMU && enable is 0-> offset is 0
- * if PMD && pa_mode is PA -> offset is 0: PMU compander is off
- * if PMD && pa_mode is comp -> offset is -3: PMU compander is on.
- */
+ unsigned int pa_gain = snd_soc_read(codec, pa_reg);
+ unsigned int digital_vol = snd_soc_read(codec, vol_reg);
+ int pa_mode = pa_gain & mask;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: pa_gain(0x%x=0x%x)digital_vol(0x%x=0x%x)event(0x%x) index(%d)\n",
+ __func__, pa_reg, pa_gain, vol_reg, digital_vol, event, index);
+ if (((pa_gain & 0xF) + 1) > ARRAY_SIZE(comp_dgtl_gain) ||
+ (index >= ARRAY_SIZE(tabla->comp_gain_offset))) {
+ pr_err("%s: Out of array boundary\n", __func__);
+ return -EINVAL;
+ }
if (SND_SOC_DAPM_EVENT_ON(event) && (enable != 0)) {
- if (comp == COMPANDER_1)
- gain_offset = TABLA_COMP_DIGITAL_GAIN_HP_OFFSET;
- if (comp == COMPANDER_2)
- gain_offset = TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET;
+ gain_offset->whole_db_gain = COMP_DIGITAL_DB_GAIN_APPLY(
+ (digital_vol - comp_dgtl_gain[pa_gain & 0xF].whole_db_gain),
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain);
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain =
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain;
+ tabla->comp_gain_offset[index] = digital_vol -
+				gain_offset->whole_db_gain;
}
if (SND_SOC_DAPM_EVENT_OFF(event) && (pa_mode == 0)) {
- if (comp == COMPANDER_1)
- gain_offset = -TABLA_COMP_DIGITAL_GAIN_HP_OFFSET;
- if (comp == COMPANDER_2)
- gain_offset = -TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET;
-
+ gain_offset->whole_db_gain = digital_vol +
+ tabla->comp_gain_offset[index];
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain = 0;
}
- pr_debug("%s: compander #%d gain_offset %d\n",
- __func__, comp + 1, gain_offset);
- return gain_offset;
-}
+ pr_debug("%s: half_db_gain(%d)whole_db_gain(%d)comp_gain_offset[%d](%d)\n",
+ __func__, gain_offset->half_db_gain,
+ gain_offset->whole_db_gain, index,
+ tabla->comp_gain_offset[index]);
+ return 0;
+}
static int tabla_config_gain_compander(
struct snd_soc_codec *codec,
@@ -837,8 +881,7 @@
{
int value = 0;
int mask = 1 << 4;
- int gain = 0;
- int gain_offset;
+ struct comp_dgtl_gain_offset gain_offset = {0, 0};
if (compander >= COMPANDER_MAX) {
pr_err("%s: Error, invalid compander channel\n", __func__);
return -EINVAL;
@@ -848,43 +891,61 @@
value = 1 << 4;
if (compander == COMPANDER_1) {
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_HPH_L_GAIN, mask, event, compander);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_HPH_L_GAIN,
+ TABLA_A_CDC_RX1_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 0);
snd_soc_update_bits(codec, TABLA_A_RX_HPH_L_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX1_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX1_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_HPH_R_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_HPH_R_GAIN,
+ TABLA_A_CDC_RX2_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 1);
snd_soc_update_bits(codec, TABLA_A_RX_HPH_R_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX2_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX2_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
} else if (compander == COMPANDER_2) {
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_1_GAIN, mask, event, compander);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_1_GAIN,
+ TABLA_A_CDC_RX3_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 2);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_1_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX3_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX3_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_3_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX3_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_3_GAIN,
+ TABLA_A_CDC_RX4_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 3);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_3_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX4_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX4_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_2_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX4_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_2_GAIN,
+ TABLA_A_CDC_RX5_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 4);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_2_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX5_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX5_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_4_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX5_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_4_GAIN,
+ TABLA_A_CDC_RX6_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 5);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_4_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX6_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX6_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX6_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
}
return 0;
}
@@ -921,7 +982,6 @@
return 0;
}
-
static int tabla_config_compander(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol,
int event)
@@ -929,106 +989,161 @@
struct snd_soc_codec *codec = w->codec;
struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
u32 rate = tabla->comp_fs[w->shift];
- u32 status;
- unsigned long timeout;
- pr_debug("%s: compander #%d enable %d event %d\n",
+
+ pr_debug("%s: compander #%d enable %d event %d widget name %s\n",
__func__, w->shift + 1,
- tabla->comp_enabled[w->shift], event);
+		tabla->comp_enabled[w->shift], event, w->name);
+ if (tabla->comp_enabled[w->shift] == 0)
+ goto rtn;
+ if ((w->shift == COMPANDER_1) && (tabla->anc_func)) {
+ pr_debug("%s: ANC is enabled so compander #%d cannot be enabled\n",
+ __func__, w->shift + 1);
+ goto rtn;
+ }
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
- if (tabla->comp_enabled[w->shift] != 0) {
- /* Enable both L/R compander clocks */
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_RX_B2_CTL,
- 1 << comp_shift[w->shift],
- 1 << comp_shift[w->shift]);
- /* Clear the HALT for the compander*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 0);
- /* Toggle compander reset bits*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_OTHR_RESET_CTL,
- 1 << comp_shift[w->shift],
- 1 << comp_shift[w->shift]);
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_OTHR_RESET_CTL,
- 1 << comp_shift[w->shift], 0);
- tabla_config_gain_compander(codec, w->shift, 1, event);
- /* Compander enable -> 0x370/0x378*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x03, 0x03);
- /* Update the RMS meter resampling*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B3_CTL +
- w->shift * 8, 0xFF, 0x01);
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0xF0, 0x50);
- /* Wait for 1ms*/
- usleep_range(5000, 5000);
- }
+ /* Update compander sample rate */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_FS_CFG +
+ w->shift * 8, 0x07, rate);
+ /* Enable both L/R compander clocks */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_RX_B2_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift], 0);
+ tabla_config_gain_compander(codec, w->shift, 1, event);
+ /* Compander enable -> 0x370/0x378 */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x03);
+ /* Update the RMS meter resampling */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF, 0x01);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0, 0x50);
+ usleep_range(COMP_BRINGUP_WAIT_TIME, COMP_BRINGUP_WAIT_TIME);
break;
case SND_SOC_DAPM_POST_PMU:
- /* Set sample rate dependent paramater*/
- if (tabla->comp_enabled[w->shift] != 0) {
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_FS_CFG +
- w->shift * 8, 0x07, rate);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0x0F,
- comp_samp_params[rate].peak_det_timeout);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0xF0,
- comp_samp_params[rate].rms_meter_div_fact);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B3_CTL +
- w->shift * 8, 0xFF,
- comp_samp_params[rate].rms_meter_resamp_fact);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x38,
- comp_samp_params[rate].shutdown_timeout);
+		/* Set sample rate dependent parameter */
+ if (w->shift == COMPANDER_1) {
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLSG_CTL,
+ 0x11, 0x00);
+ snd_soc_write(codec,
+ TABLA_A_CDC_CONN_CLSG_CTL, 0x11);
}
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0x0F,
+ comp_samp_params[rate].peak_det_timeout);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0,
+ comp_samp_params[rate].rms_meter_div_fact);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF,
+ comp_samp_params[rate].rms_meter_resamp_fact);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x38,
+ comp_samp_params[rate].shutdown_timeout);
break;
case SND_SOC_DAPM_PRE_PMD:
- if (tabla->comp_enabled[w->shift] != 0) {
- status = snd_soc_read(codec,
- TABLA_A_CDC_COMP1_SHUT_DOWN_STATUS +
- w->shift * 8);
- pr_debug("%s: compander #%d shutdown status %d in event %d\n",
- __func__, w->shift + 1, status, event);
- /* Halt the compander*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 1 << 2);
- }
break;
case SND_SOC_DAPM_POST_PMD:
- if (tabla->comp_enabled[w->shift] != 0) {
- /* Wait up to a second for shutdown complete */
- timeout = jiffies + HZ;
- do {
- status = snd_soc_read(codec,
- TABLA_A_CDC_COMP1_SHUT_DOWN_STATUS +
- w->shift * 8);
- if (status == 0x3)
- break;
- usleep_range(5000, 5000);
- } while (!(time_after(jiffies, timeout)));
- /* Restore the gain */
- tabla_config_gain_compander(codec, w->shift,
- tabla->comp_enabled[w->shift],
- event);
- /* Disable the compander*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x03, 0x00);
- /* Turn off the clock for compander in pair*/
- snd_soc_update_bits(codec, TABLA_A_CDC_CLK_RX_B2_CTL,
- 0x03 << comp_shift[w->shift], 0);
- /* Clear the HALT for the compander*/
+ /* Disable the compander */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x00);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift], 0);
+ /* Turn off the clock for compander in pair */
+ snd_soc_update_bits(codec, TABLA_A_CDC_CLK_RX_B2_CTL,
+ 0x03 << comp_shift[w->shift], 0);
+ /* Restore the gain */
+ tabla_config_gain_compander(codec, w->shift,
+ tabla->comp_enabled[w->shift],
+ event);
+ if (w->shift == COMPANDER_1) {
snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 0);
+ TABLA_A_CDC_CLSG_CTL,
+ 0x11, 0x11);
+ snd_soc_write(codec,
+ TABLA_A_CDC_CONN_CLSG_CTL, 0x14);
}
break;
}
+rtn:
+ return 0;
+}
+
+static int tabla_codec_hphr_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: compander#1->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, tabla->comp_enabled[COMPANDER_1],
+ TABLA_A_CDC_RX1_B6_CTL,
+ snd_soc_read(codec, TABLA_A_CDC_RX1_B6_CTL), event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (tabla->comp_enabled[COMPANDER_1] && !tabla->anc_func)
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 0);
+ else
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 1 << w->shift);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int tabla_codec_hphl_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: compander#1->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, tabla->comp_enabled[COMPANDER_1],
+ TABLA_A_CDC_RX2_B6_CTL,
+ snd_soc_read(codec, TABLA_A_CDC_RX2_B6_CTL), event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (tabla->comp_enabled[COMPANDER_1] && !tabla->anc_func)
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 0);
+ else
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 1 << w->shift);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
return 0;
}
@@ -5361,8 +5476,12 @@
&rx6_dsm_mux, tabla_codec_reset_interpolator,
SND_SOC_DAPM_PRE_PMU),
- SND_SOC_DAPM_MIXER("RX1 CHAIN", TABLA_A_CDC_RX1_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX2 CHAIN", TABLA_A_CDC_RX2_B6_CTL, 5, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER_E("RX1 CHAIN", SND_SOC_NOPM, 5, 0, NULL,
+ 0, tabla_codec_hphr_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX2 CHAIN", SND_SOC_NOPM, 5, 0, NULL,
+ 0, tabla_codec_hphl_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
SND_SOC_DAPM_MUX("RX1 MIX1 INP1", SND_SOC_NOPM, 0, 0,
&rx_mix1_inp1_mux),
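
The compander gain rebalancing above works from a small lookup table: the low nibble of the analog PA gain register selects a whole-dB/half-dB pair, the whole-dB part is moved out of the digital volume register while the compander is active, and the half-dB remainder is applied through the RXn B6 control bit. A worked example with made-up register values (not taken from the patch) is:

#include <linux/kernel.h>
#include <linux/types.h>

struct demo_gain_offset {
	u8 whole_db_gain;
	u8 half_db_gain;	/* marks an extra 0.5 dB step */
};

/* First entries of the comp_dgtl_gain table used by the drivers */
static const struct demo_gain_offset demo_gain[] = {
	{0, 0}, {1, 1}, {3, 0}, {4, 1}, {6, 0}, {7, 1}, {9, 0},
};

/*
 * Example: pa_gain = 0x05 (low nibble 5) selects {7, 1}: move 7 dB
 * out of the digital volume and flag the remaining 0.5 dB.  With
 * digital_vol = 0x14 the adjusted volume is 0x14 - 7 = 0x0D; because
 * the result stays positive, COMP_DIGITAL_DB_GAIN_APPLY() leaves it
 * untouched, and the half-dB flag goes to the RXn B6 control bit.
 */
static u8 demo_adjusted_volume(u8 pa_gain, u8 digital_vol)
{
	unsigned int idx = pa_gain & 0xF;

	if (idx >= ARRAY_SIZE(demo_gain))	/* drivers bounds-check too */
		return digital_vol;

	return digital_vol - demo_gain[idx].whole_db_gain;
}
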
diff --git a/sound/soc/codecs/wcd9320.c b/sound/soc/codecs/wcd9320.c
index cc1e8eb..845f1a2 100644
--- a/sound/soc/codecs/wcd9320.c
+++ b/sound/soc/codecs/wcd9320.c
@@ -24,6 +24,7 @@
#include <linux/mfd/wcd9xxx/wcd9xxx_registers.h>
#include <linux/mfd/wcd9xxx/wcd9320_registers.h>
#include <linux/mfd/wcd9xxx/pdata.h>
+#include <linux/regulator/consumer.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include <sound/soc.h>
@@ -41,6 +42,9 @@
#define TAIKO_MAD_SLIMBUS_TX_PORT 12
#define TAIKO_MAD_AUDIO_FIRMWARE_PATH "wcd9320/wcd9320_mad_audio.bin"
+#define TAIKO_HPH_PA_SETTLE_COMP_ON 3000
+#define TAIKO_HPH_PA_SETTLE_COMP_OFF 13000
+
static atomic_t kp_taiko_priv;
static int spkr_drv_wrnd_param_set(const char *val,
const struct kernel_param *kp);
@@ -106,6 +110,15 @@
AANC_FF_GAIN_ADAPTIVE,
AANC_FFGAIN_ADAPTIVE_EN,
AANC_GAIN_CONTROL,
+ SPKR_CLIP_PIPE_BANK_SEL,
+ SPKR_CLIPDET_VAL0,
+ SPKR_CLIPDET_VAL1,
+ SPKR_CLIPDET_VAL2,
+ SPKR_CLIPDET_VAL3,
+ SPKR_CLIPDET_VAL4,
+ SPKR_CLIPDET_VAL5,
+ SPKR_CLIPDET_VAL6,
+ SPKR_CLIPDET_VAL7,
MAX_CFG_REGISTERS,
};
@@ -178,6 +191,74 @@
(TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_ANC1_GAIN_CTL),
AANC_GAIN_CONTROL, 0xFF, 8, 0
},
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_INTR_DESTN3),
+ MAD_CLIP_INT_DEST_SELECT_REG, 0x8, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_INTR_MASK3),
+ MAD_CLIP_INT_MASK_REG, 0x8, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_INTR_STATUS3),
+ MAD_CLIP_INT_STATUS_REG, 0x8, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_INTR_CLEAR3),
+ MAD_CLIP_INT_CLEAR_REG, 0x8, 8, 0
+ },
+};
+
+static struct afe_param_cdc_reg_cfg clip_reg_cfg[] = {
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_B1_CTL),
+ SPKR_CLIP_PIPE_BANK_SEL, 0x3, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL0),
+ SPKR_CLIPDET_VAL0, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL1),
+ SPKR_CLIPDET_VAL1, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL2),
+ SPKR_CLIPDET_VAL2, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL3),
+ SPKR_CLIPDET_VAL3, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL4),
+ SPKR_CLIPDET_VAL4, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL5),
+ SPKR_CLIPDET_VAL5, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL6),
+ SPKR_CLIPDET_VAL6, 0xff, 8, 0
+ },
+ {
+ 1,
+ (TAIKO_REGISTER_START_OFFSET + TAIKO_A_CDC_SPKR_CLIPDET_VAL7),
+ SPKR_CLIPDET_VAL7, 0xff, 8, 0
+ },
};
static struct afe_param_cdc_reg_cfg_data taiko_audio_reg_cfg = {
@@ -185,11 +266,22 @@
.reg_data = audio_reg_cfg,
};
+static struct afe_param_cdc_reg_cfg_data taiko_clip_reg_cfg = {
+ .num_registers = ARRAY_SIZE(clip_reg_cfg),
+ .reg_data = clip_reg_cfg,
+};
+
static struct afe_param_id_cdc_aanc_version taiko_cdc_aanc_version = {
.cdc_aanc_minor_version = AFE_API_VERSION_CDC_AANC_VERSION,
.aanc_hw_version = AANC_HW_BLOCK_VERSION_2,
};
+static struct afe_param_id_clip_bank_sel clip_bank_sel = {
+ .minor_version = AFE_API_VERSION_CLIP_BANK_SEL_CFG,
+ .num_banks = AFE_CLIP_MAX_BANKS,
+ .bank_map = {0, 1, 2, 3},
+};
+
module_param_cb(spkr_drv_wrnd, &spkr_drv_wrnd_param_ops, &spkr_drv_wrnd, 0644);
MODULE_PARM_DESC(spkr_drv_wrnd,
"Run software workaround to avoid leakage on the speaker drive");
@@ -394,6 +486,7 @@
u8 aux_r_gain;
bool spkr_pa_widget_on;
+ struct regulator *spkdrv_reg;
struct afe_param_cdc_slimbus_slave_cfg slimbus_slave_cfg;
@@ -404,7 +497,6 @@
/* class h specific data */
struct wcd9xxx_clsh_cdc_data clsh_d;
-
};
static const u32 comp_shift[] = {
@@ -427,39 +519,39 @@
static const struct comp_sample_dependent_params comp_samp_params[] = {
{
/* 8 Khz */
- .peak_det_timeout = 0x02,
+ .peak_det_timeout = 0x06,
.rms_meter_div_fact = 0x09,
.rms_meter_resamp_fact = 0x06,
},
{
/* 16 Khz */
- .peak_det_timeout = 0x03,
+ .peak_det_timeout = 0x07,
.rms_meter_div_fact = 0x0A,
.rms_meter_resamp_fact = 0x0C,
},
{
/* 32 Khz */
- .peak_det_timeout = 0x05,
+ .peak_det_timeout = 0x08,
.rms_meter_div_fact = 0x0B,
.rms_meter_resamp_fact = 0x1E,
},
{
/* 48 Khz */
- .peak_det_timeout = 0x05,
+ .peak_det_timeout = 0x09,
.rms_meter_div_fact = 0x0B,
.rms_meter_resamp_fact = 0x28,
},
{
/* 96 Khz */
- .peak_det_timeout = 0x06,
+ .peak_det_timeout = 0x0A,
.rms_meter_div_fact = 0x0C,
.rms_meter_resamp_fact = 0x50,
},
{
/* 192 Khz */
- .peak_det_timeout = 0x07,
- .rms_meter_div_fact = 0xD,
- .rms_meter_resamp_fact = 0xA0,
+ .peak_det_timeout = 0x0B,
+ .rms_meter_div_fact = 0xC,
+ .rms_meter_resamp_fact = 0x50,
},
};
@@ -809,6 +901,36 @@
pr_debug("%s: Compander %d enable current %d, new %d\n",
__func__, comp, taiko->comp_enabled[comp], value);
taiko->comp_enabled[comp] = value;
+
+ if (comp == COMPANDER_1 &&
+ taiko->comp_enabled[comp] == 1) {
+ /* Wavegen to 5 msec */
+ snd_soc_write(codec, TAIKO_A_RX_HPH_CNP_WG_CTL, 0xDA);
+ snd_soc_write(codec, TAIKO_A_RX_HPH_CNP_WG_TIME, 0x15);
+ snd_soc_write(codec, TAIKO_A_RX_HPH_BIAS_WG_OCP, 0x2A);
+
+ /* Enable Chopper */
+ snd_soc_update_bits(codec,
+ TAIKO_A_RX_HPH_CHOP_CTL, 0x80, 0x80);
+
+ snd_soc_write(codec, TAIKO_A_NCP_DTEST, 0x20);
+ pr_debug("%s: Enabled Chopper and set wavegen to 5 msec\n",
+ __func__);
+ } else if (comp == COMPANDER_1 &&
+ taiko->comp_enabled[comp] == 0) {
+ /* Wavegen to 20 msec */
+ snd_soc_write(codec, TAIKO_A_RX_HPH_CNP_WG_CTL, 0xDB);
+ snd_soc_write(codec, TAIKO_A_RX_HPH_CNP_WG_TIME, 0x58);
+ snd_soc_write(codec, TAIKO_A_RX_HPH_BIAS_WG_OCP, 0x1A);
+
+ /* Disable CHOPPER block */
+ snd_soc_update_bits(codec,
+ TAIKO_A_RX_HPH_CHOP_CTL, 0x80, 0x00);
+
+ snd_soc_write(codec, TAIKO_A_NCP_DTEST, 0x10);
+ pr_debug("%s: Disabled Chopper and set wavegen to 20 msec\n",
+ __func__);
+ }
return 0;
}
@@ -848,26 +970,50 @@
static void taiko_discharge_comp(struct snd_soc_codec *codec, int comp)
{
- /* Update RSM to 1, DIVF to 5 */
- snd_soc_write(codec, TAIKO_A_CDC_COMP0_B3_CTL + (comp * 8), 1);
+	/* Level meter DIV factor to 5 */
snd_soc_update_bits(codec, TAIKO_A_CDC_COMP0_B2_CTL + (comp * 8), 0xF0,
- 1 << 5);
- /* Wait for 1ms */
- usleep_range(1000, 1000);
+ 0x05 << 4);
+ /* RMS meter Sampling to 0x01 */
+ snd_soc_write(codec, TAIKO_A_CDC_COMP0_B3_CTL + (comp * 8), 0x01);
+
+ /* Worst case timeout for compander CnP sleep timeout */
+ usleep_range(3000, 3000);
+}
+
+static enum wcd9xxx_buck_volt taiko_codec_get_buck_mv(
+ struct snd_soc_codec *codec)
+{
+ int buck_volt = WCD9XXX_CDC_BUCK_UNSUPPORTED;
+ struct taiko_priv *taiko = snd_soc_codec_get_drvdata(codec);
+ struct wcd9xxx_pdata *pdata = taiko->resmgr.pdata;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(pdata->regulator); i++) {
+ if (!strncmp(pdata->regulator[i].name,
+ WCD9XXX_SUPPLY_BUCK_NAME,
+ sizeof(WCD9XXX_SUPPLY_BUCK_NAME))) {
+ if ((pdata->regulator[i].min_uV ==
+ WCD9XXX_CDC_BUCK_MV_1P8) ||
+ (pdata->regulator[i].min_uV ==
+ WCD9XXX_CDC_BUCK_MV_2P15))
+ buck_volt = pdata->regulator[i].min_uV;
+ break;
+ }
+ }
+ return buck_volt;
}
static int taiko_config_compander(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol, int event)
{
- int mask, emask;
- bool timedout;
- unsigned long timeout;
+ int mask, enable_mask;
struct snd_soc_codec *codec = w->codec;
struct taiko_priv *taiko = snd_soc_codec_get_drvdata(codec);
const int comp = w->shift;
const u32 rate = taiko->comp_fs[comp];
const struct comp_sample_dependent_params *comp_params =
&comp_samp_params[rate];
+ enum wcd9xxx_buck_volt buck_mv;
pr_debug("%s: %s event %d compander %d, enabled %d", __func__,
w->name, event, comp, taiko->comp_enabled[comp]);
@@ -877,72 +1023,73 @@
/* Compander 0 has single channel */
mask = (comp == COMPANDER_0 ? 0x01 : 0x03);
- emask = (comp == COMPANDER_0 ? 0x02 : 0x03);
+ enable_mask = (comp == COMPANDER_0 ? 0x02 : 0x03);
+ buck_mv = taiko_codec_get_buck_mv(codec);
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
- /* Set gain source to compander */
- taiko_config_gain_compander(codec, comp, true);
- /* Enable RX interpolation path clocks */
+ /* Set compander Sample rate */
+ snd_soc_update_bits(codec,
+ TAIKO_A_CDC_COMP0_FS_CFG + (comp * 8),
+ 0x07, rate);
+ /* Set the static gain offset */
+ if (comp == COMPANDER_1
+ && buck_mv == WCD9XXX_CDC_BUCK_MV_2P15) {
+ snd_soc_update_bits(codec,
+ TAIKO_A_CDC_COMP0_B4_CTL + (comp * 8),
+ 0x80, 0x80);
+ } else {
+ snd_soc_update_bits(codec,
+ TAIKO_A_CDC_COMP0_B4_CTL + (comp * 8),
+ 0x80, 0x00);
+ }
+ /* Enable RX interpolation path compander clocks */
snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_RX_B2_CTL,
mask << comp_shift[comp],
mask << comp_shift[comp]);
-
- taiko_discharge_comp(codec, comp);
-
- /* Clear compander halt */
- snd_soc_update_bits(codec, TAIKO_A_CDC_COMP0_B1_CTL +
- (comp * 8),
- 1 << 2, 0);
/* Toggle compander reset bits */
snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_OTHR_RESET_B2_CTL,
mask << comp_shift[comp],
mask << comp_shift[comp]);
snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_OTHR_RESET_B2_CTL,
mask << comp_shift[comp], 0);
- break;
- case SND_SOC_DAPM_POST_PMU:
+
+ /* Set gain source to compander */
+ taiko_config_gain_compander(codec, comp, true);
+
+ /* Compander enable */
+ snd_soc_update_bits(codec, TAIKO_A_CDC_COMP0_B1_CTL +
+ (comp * 8), enable_mask, enable_mask);
+
+ taiko_discharge_comp(codec, comp);
+
/* Set sample rate dependent paramater */
- snd_soc_update_bits(codec,
- TAIKO_A_CDC_COMP0_FS_CFG + (comp * 8),
- 0x07, rate);
snd_soc_write(codec, TAIKO_A_CDC_COMP0_B3_CTL + (comp * 8),
comp_params->rms_meter_resamp_fact);
snd_soc_update_bits(codec,
TAIKO_A_CDC_COMP0_B2_CTL + (comp * 8),
- 0x0F, comp_params->peak_det_timeout);
- snd_soc_update_bits(codec,
- TAIKO_A_CDC_COMP0_B2_CTL + (comp * 8),
0xF0, comp_params->rms_meter_div_fact << 4);
- /* Compander enable */
- snd_soc_update_bits(codec, TAIKO_A_CDC_COMP0_B1_CTL +
- (comp * 8), emask, emask);
+ snd_soc_update_bits(codec,
+ TAIKO_A_CDC_COMP0_B2_CTL + (comp * 8),
+ 0x0F, comp_params->peak_det_timeout);
break;
case SND_SOC_DAPM_PRE_PMD:
- /* Halt compander */
- snd_soc_update_bits(codec,
- TAIKO_A_CDC_COMP0_B1_CTL + (comp * 8),
- 1 << 2, 1 << 2);
- /* Wait up to a second for shutdown complete */
- timeout = jiffies + HZ;
- do {
- if ((snd_soc_read(codec,
- TAIKO_A_CDC_COMP0_SHUT_DOWN_STATUS +
- (comp * 8)) & mask) == mask)
- break;
- } while (!(timedout = time_after(jiffies, timeout)));
- pr_debug("%s: Compander %d shutdown %s in %dms\n", __func__,
- comp, timedout ? "timedout" : "completed",
- jiffies_to_msecs(timeout - HZ - jiffies));
- break;
- case SND_SOC_DAPM_POST_PMD:
/* Disable compander */
snd_soc_update_bits(codec,
TAIKO_A_CDC_COMP0_B1_CTL + (comp * 8),
- emask, 0x00);
+ enable_mask, 0x00);
+
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_OTHR_RESET_B2_CTL,
+ mask << comp_shift[comp],
+ mask << comp_shift[comp]);
+ snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_OTHR_RESET_B2_CTL,
+ mask << comp_shift[comp], 0);
+
/* Turn off the clock for compander in pair */
snd_soc_update_bits(codec, TAIKO_A_CDC_CLK_RX_B2_CTL,
mask << comp_shift[comp], 0);
+
/* Set gain source to register */
taiko_config_gain_compander(codec, comp, false);
break;
@@ -1082,13 +1229,6 @@
40, digital_gain),
SOC_SINGLE_S8_TLV("IIR2 INP4 Volume", TAIKO_A_CDC_IIR2_GAIN_B4_CTL, -84,
40, digital_gain),
- SOC_SINGLE_TLV("ADC1 Volume", TAIKO_A_TX_1_2_EN, 5, 3, 0, analog_gain),
- SOC_SINGLE_TLV("ADC2 Volume", TAIKO_A_TX_1_2_EN, 1, 3, 0, analog_gain),
- SOC_SINGLE_TLV("ADC3 Volume", TAIKO_A_TX_3_4_EN, 5, 3, 0, analog_gain),
- SOC_SINGLE_TLV("ADC4 Volume", TAIKO_A_TX_3_4_EN, 1, 3, 0, analog_gain),
- SOC_SINGLE_TLV("ADC5 Volume", TAIKO_A_TX_5_6_EN, 5, 3, 0, analog_gain),
- SOC_SINGLE_TLV("ADC6 Volume", TAIKO_A_TX_5_6_EN, 1, 3, 0, analog_gain),
-
SOC_SINGLE_EXT("ANC Slot", SND_SOC_NOPM, 0, 100, 0, taiko_get_anc_slot,
taiko_put_anc_slot),
@@ -2707,10 +2847,20 @@
int ret = 0;
struct snd_soc_codec *codec = w->codec;
struct wcd9xxx *core = dev_get_drvdata(codec->dev->parent);
+ struct taiko_priv *priv = snd_soc_codec_get_drvdata(codec);
pr_debug("%s: %d %s\n", __func__, event, w->name);
+
+ WARN_ONCE(!priv->spkdrv_reg, "SPKDRV supply %s isn't defined\n",
+ WCD9XXX_VDD_SPKDRV_NAME);
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
+ if (priv->spkdrv_reg) {
+ ret = regulator_enable(priv->spkdrv_reg);
+ if (ret)
+ pr_err("%s: Failed to enable spkdrv_reg %s\n",
+ __func__, WCD9XXX_VDD_SPKDRV_NAME);
+ }
if (spkr_drv_wrnd > 0) {
WARN_ON(!(snd_soc_read(codec, TAIKO_A_SPKR_DRV_EN) &
0x80));
@@ -2731,6 +2881,12 @@
snd_soc_update_bits(codec, TAIKO_A_SPKR_DRV_EN, 0x80,
0x80);
}
+ if (priv->spkdrv_reg) {
+ ret = regulator_disable(priv->spkdrv_reg);
+ if (ret)
+ pr_err("%s: Failed to disable spkdrv_reg %s\n",
+ __func__, WCD9XXX_VDD_SPKDRV_NAME);
+ }
break;
}
@@ -2955,6 +3111,7 @@
struct taiko_priv *taiko = snd_soc_codec_get_drvdata(codec);
enum wcd9xxx_notify_event e_pre_on, e_post_off;
u8 req_clsh_state;
+ u32 pa_settle_time = TAIKO_HPH_PA_SETTLE_COMP_OFF;
pr_debug("%s: %s event = %d\n", __func__, w->name, event);
if (w->shift == 5) {
@@ -2970,6 +3127,9 @@
return -EINVAL;
}
+ if (taiko->comp_enabled[COMPANDER_1])
+ pa_settle_time = TAIKO_HPH_PA_SETTLE_COMP_ON;
+
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
/* Let MBHC module know PA is turning on */
@@ -2977,16 +3137,21 @@
break;
case SND_SOC_DAPM_POST_PMU:
+ usleep_range(pa_settle_time, pa_settle_time + 1000);
+ pr_debug("%s: sleep %d us after %s PA enable\n", __func__,
+ pa_settle_time, w->name);
wcd9xxx_clsh_fsm(codec, &taiko->clsh_d,
req_clsh_state,
WCD9XXX_CLSH_REQ_ENABLE,
WCD9XXX_CLSH_EVENT_POST_PA);
-
- usleep_range(5000, 5000);
break;
case SND_SOC_DAPM_POST_PMD:
+ usleep_range(pa_settle_time, pa_settle_time + 1000);
+ pr_debug("%s: sleep %d us after %s PA disable\n", __func__,
+ pa_settle_time, w->name);
+
/* Let MBHC module know PA turned off */
wcd9xxx_resmgr_notifier_call(&taiko->resmgr, e_post_off);
@@ -2995,9 +3160,6 @@
WCD9XXX_CLSH_REQ_DISABLE,
WCD9XXX_CLSH_EVENT_POST_PA);
- pr_debug("%s: sleep 10 ms after %s PA disable.\n", __func__,
- w->name);
- usleep_range(5000, 5000);
break;
}
return 0;
@@ -4546,9 +4708,6 @@
dai = &taiko_p->dai[w->shift];
switch (event) {
case SND_SOC_DAPM_POST_PMU:
- /*Enable Clip Detection*/
- snd_soc_update_bits(codec, TAIKO_A_SPKR_DRV_CLIP_DET,
- 0x8, 0x8);
/*Enable V&I sensing*/
snd_soc_update_bits(codec, TAIKO_A_SPKR_PROT_EN,
0x88, 0x88);
@@ -4561,6 +4720,7 @@
/*Enable Current Decimator*/
snd_soc_update_bits(codec,
TAIKO_A_CDC_CONN_TX_SB_B10_CTL, 0x1F, 0x13);
+ (void) taiko_codec_enable_slim_chmask(dai, true);
ret = wcd9xxx_cfg_slim_sch_tx(core, &dai->wcd9xxx_ch_list,
dai->rate, dai->bit_width,
&dai->grph);
@@ -4583,9 +4743,6 @@
/*Disable V&I sensing*/
snd_soc_update_bits(codec, TAIKO_A_SPKR_PROT_EN,
0x88, 0x00);
- /*Disable clip detection*/
- snd_soc_update_bits(codec, TAIKO_A_SPKR_DRV_CLIP_DET,
- 0x8, 0x0);
break;
}
out_vi:
@@ -4955,13 +5112,13 @@
SND_SOC_DAPM_SUPPLY("COMP0_CLK", SND_SOC_NOPM, 0, 0,
taiko_config_compander, SND_SOC_DAPM_PRE_PMU |
- SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_PRE_PMD),
SND_SOC_DAPM_SUPPLY("COMP1_CLK", SND_SOC_NOPM, 1, 0,
taiko_config_compander, SND_SOC_DAPM_PRE_PMU |
- SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_PRE_PMD),
SND_SOC_DAPM_SUPPLY("COMP2_CLK", SND_SOC_NOPM, 2, 0,
taiko_config_compander, SND_SOC_DAPM_PRE_PMU |
- SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_PRE_PMD),
SND_SOC_DAPM_INPUT("AMIC1"),
@@ -5378,7 +5535,8 @@
}
for (i = 0; i < ARRAY_SIZE(pdata->regulator); i++) {
- if (!strncmp(pdata->regulator[i].name, "CDC_VDDA_RX", 11)) {
+ if (pdata->regulator[i].name &&
+ !strncmp(pdata->regulator[i].name, "CDC_VDDA_RX", 11)) {
if (pdata->regulator[i].min_uV == 1800000 &&
pdata->regulator[i].max_uV == 1800000) {
snd_soc_write(codec, TAIKO_A_BIAS_REF_CTL,
@@ -5739,6 +5897,21 @@
{TAIKO_A_CDC_COMP0_B5_CTL, 0x7F, 0x7F},
{TAIKO_A_CDC_COMP1_B5_CTL, 0x7F, 0x7F},
{TAIKO_A_CDC_COMP2_B5_CTL, 0x7F, 0x7F},
+
+ /*
+	 * Set up the wavegen timer to 20 msec and disable the chopper
+	 * by default. This corresponds to Compander OFF.
+ */
+ {TAIKO_A_RX_HPH_CNP_WG_CTL, 0xFF, 0xDB},
+ {TAIKO_A_RX_HPH_CNP_WG_TIME, 0xFF, 0x58},
+ {TAIKO_A_RX_HPH_BIAS_WG_OCP, 0xFF, 0x1A},
+ {TAIKO_A_RX_HPH_CHOP_CTL, 0xFF, 0x24},
+
+ /* Choose max non-overlap time for NCP */
+ {TAIKO_A_NCP_CLK, 0xFF, 0xFC},
+
+ /* Program the 0.85 volt VBG_REFERENCE */
+ {TAIKO_A_BIAS_CURR_CTL_2, 0xFF, 0x04},
};
static void taiko_codec_init_reg(struct snd_soc_codec *codec)
@@ -5751,28 +5924,39 @@
taiko_codec_reg_init_val[i].val);
}
-static int taiko_setup_irqs(struct taiko_priv *taiko)
+static void taiko_slim_interface_init_reg(struct snd_soc_codec *codec)
{
int i;
- int ret = 0;
- struct snd_soc_codec *codec = taiko->codec;
-
- ret = wcd9xxx_request_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS,
- taiko_slimbus_irq, "SLIMBUS Slave", taiko);
- if (ret) {
- pr_err("%s: Failed to request irq %d\n", __func__,
- WCD9XXX_IRQ_SLIMBUS);
- goto exit;
- }
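+	/* Unmask all SLIMbus slave port interrupts */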
for (i = 0; i < WCD9XXX_SLIM_NUM_PORT_REG; i++)
wcd9xxx_interface_reg_write(codec->control_data,
TAIKO_SLIM_PGD_PORT_INT_EN0 + i,
0xFF);
-exit:
+}
+
+static int taiko_setup_irqs(struct taiko_priv *taiko)
+{
+ int ret = 0;
+ struct snd_soc_codec *codec = taiko->codec;
+
+ ret = wcd9xxx_request_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS,
+ taiko_slimbus_irq, "SLIMBUS Slave", taiko);
+ if (ret)
+ pr_err("%s: Failed to request irq %d\n", __func__,
+ WCD9XXX_IRQ_SLIMBUS);
+ else
+ taiko_slim_interface_init_reg(codec);
+
return ret;
}
+static void taiko_cleanup_irqs(struct taiko_priv *taiko)
+{
+ struct snd_soc_codec *codec = taiko->codec;
+
+ wcd9xxx_free_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS, taiko);
+}
+
int taiko_hs_detect(struct snd_soc_codec *codec,
struct wcd9xxx_mbhc_config *mbhc_cfg)
{
@@ -5831,6 +6015,7 @@
pr_err("%s: bad pdata\n", __func__);
taiko_init_slim_slave_cfg(codec);
+ taiko_slim_interface_init_reg(codec);
wcd9xxx_mbhc_deinit(&taiko->mbhc);
ret = wcd9xxx_mbhc_init(&taiko->mbhc, &taiko->resmgr, codec,
@@ -5847,6 +6032,7 @@
enum afe_config_type config_type)
{
struct taiko_priv *priv = snd_soc_codec_get_drvdata(codec);
+ struct wcd9xxx *taiko_core = dev_get_drvdata(codec->dev->parent);
switch (config_type) {
case AFE_SLIMBUS_SLAVE_CONFIG:
@@ -5857,6 +6043,16 @@
return &taiko_slimbus_slave_port_cfg;
case AFE_AANC_VERSION:
return &taiko_cdc_aanc_version;
+ case AFE_CLIP_BANK_SEL:
+ if (!TAIKO_IS_1_0(taiko_core->version))
+ return &clip_bank_sel;
+ else
+ return NULL;
+ case AFE_CDC_CLIP_REGISTERS_CONFIG:
+ if (!TAIKO_IS_1_0(taiko_core->version))
+ return &taiko_clip_reg_cfg;
+ else
+ return NULL;
default:
pr_err("%s: Unknown config_type 0x%x\n", __func__, config_type);
return NULL;
@@ -5877,24 +6073,6 @@
return 0;
}
-static int taiko_codec_get_buck_mv(struct snd_soc_codec *codec)
-{
- int buck_volt = WCD9XXX_CDC_BUCK_UNSUPPORTED;
- struct taiko_priv *taiko = snd_soc_codec_get_drvdata(codec);
- struct wcd9xxx_pdata *pdata = taiko->resmgr.pdata;
- int i;
-
- for (i = 0; i < ARRAY_SIZE(pdata->regulator); i++) {
- if (!strncmp(pdata->regulator[i].name,
- WCD9XXX_SUPPLY_BUCK_NAME,
- sizeof(WCD9XXX_SUPPLY_BUCK_NAME))) {
- buck_volt = pdata->regulator[i].min_uV;
- break;
- }
- }
- return buck_volt;
-}
-
static const struct snd_soc_dapm_widget taiko_1_dapm_widgets[] = {
SND_SOC_DAPM_ADC_E("ADC1", NULL, TAIKO_A_TX_1_2_EN, 7, 0,
taiko_codec_enable_adc,
@@ -5945,6 +6123,21 @@
SND_SOC_DAPM_POST_PMU),
};
+static struct regulator *taiko_codec_find_regulator(struct snd_soc_codec *codec,
+ const char *name)
+{
+ int i;
+ struct wcd9xxx *core = dev_get_drvdata(codec->dev->parent);
+
+ for (i = 0; i < core->num_of_supplies; i++) {
+ if (core->supplies[i].supply &&
+ !strcmp(core->supplies[i].supply, name))
+ return core->supplies[i].consumer;
+ }
+
+ return NULL;
+}
+
static int taiko_codec_probe(struct snd_soc_codec *codec)
{
struct wcd9xxx *control;
@@ -5976,10 +6169,8 @@
tx_hpf_corner_freq_callback);
}
-
snd_soc_codec_set_drvdata(codec, taiko);
-
/* codec resmgr module init */
wcd9xxx = codec->control_data;
pdata = dev_get_platdata(codec->dev->parent);
@@ -5987,7 +6178,7 @@
&taiko_reg_address);
if (ret) {
pr_err("%s: wcd9xxx init failed %d\n", __func__, ret);
- return ret;
+ goto err_init;
}
taiko->clsh_d.buck_mv = taiko_codec_get_buck_mv(codec);
@@ -5998,7 +6189,7 @@
WCD9XXX_MBHC_VERSION_TAIKO);
if (ret) {
pr_err("%s: mbhc init failed %d\n", __func__, ret);
- return ret;
+ goto err_init;
}
taiko->codec = codec;
@@ -6023,6 +6214,9 @@
goto err_pdata;
}
+ taiko->spkdrv_reg = taiko_codec_find_regulator(codec,
+ WCD9XXX_VDD_SPKDRV_NAME);
+
if (spkr_drv_wrnd > 0) {
WCD9XXX_BCL_LOCK(&taiko->resmgr);
wcd9xxx_resmgr_get_bandgap(&taiko->resmgr,
@@ -6089,7 +6283,11 @@
snd_soc_dapm_sync(dapm);
- (void) taiko_setup_irqs(taiko);
+ ret = taiko_setup_irqs(taiko);
+ if (ret) {
+ pr_err("%s: taiko irq setup failed %d\n", __func__, ret);
+ goto err_irq;
+ }
atomic_set(&kp_taiko_priv, (unsigned long)taiko);
mutex_lock(&dapm->codec->mutex);
@@ -6104,10 +6302,13 @@
codec->ignore_pmdown_time = 1;
return ret;
+err_irq:
+ taiko_cleanup_irqs(taiko);
err_pdata:
kfree(ptr);
err_nomem_slimch:
kfree(taiko);
+err_init:
return ret;
}
static int taiko_codec_remove(struct snd_soc_codec *codec)
@@ -6122,11 +6323,15 @@
WCD9XXX_BANDGAP_AUDIO_MODE);
WCD9XXX_BCL_UNLOCK(&taiko->resmgr);
+ taiko_cleanup_irqs(taiko);
+
/* cleanup MBHC */
wcd9xxx_mbhc_deinit(&taiko->mbhc);
/* cleanup resmgr */
wcd9xxx_resmgr_deinit(&taiko->resmgr);
+ taiko->spkdrv_reg = NULL;
+
kfree(taiko);
return 0;
}
diff --git a/sound/soc/codecs/wcd9xxx-mbhc.c b/sound/soc/codecs/wcd9xxx-mbhc.c
index aacc9df..a37b4eb 100644
--- a/sound/soc/codecs/wcd9xxx-mbhc.c
+++ b/sound/soc/codecs/wcd9xxx-mbhc.c
@@ -89,7 +89,7 @@
#define WCD9XXX_MEAS_INVALD_RANGE_LOW_MV 20
#define WCD9XXX_MEAS_INVALD_RANGE_HIGH_MV 80
#define WCD9XXX_GM_SWAP_THRES_MIN_MV 150
-#define WCD9XXX_GM_SWAP_THRES_MAX_MV 500
+#define WCD9XXX_GM_SWAP_THRES_MAX_MV 650
#define WCD9XXX_USLEEP_RANGE_MARGIN_US 1000
@@ -2786,6 +2786,7 @@
wcd9xxx_turn_onoff_rel_detection(codec, true);
pr_debug("%s: leave\n", __func__);
+ return;
gen_err:
pr_err("%s: Error returned, ret: %d\n", __func__, ret);
@@ -3534,7 +3535,6 @@
{
void *cdata = mbhc->codec->control_data;
- wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_SLIMBUS, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_RELEASE, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_POTENTIAL, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_REMOVAL, mbhc);
@@ -3543,7 +3543,6 @@
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_JACK_SWITCH, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_HPH_PA_OCPL_FAULT, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_HPH_PA_OCPR_FAULT, mbhc);
- wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_RELEASE, mbhc);
if (mbhc->mbhc_fw)
release_firmware(mbhc->mbhc_fw);
diff --git a/sound/soc/codecs/wcd9xxx-resmgr.c b/sound/soc/codecs/wcd9xxx-resmgr.c
index 18614d8..60a76a2 100644
--- a/sound/soc/codecs/wcd9xxx-resmgr.c
+++ b/sound/soc/codecs/wcd9xxx-resmgr.c
@@ -391,8 +391,6 @@
} else {
snd_soc_update_bits(codec, WCD9XXX_A_BIAS_OSC_BG_CTL, 0x1, 0);
snd_soc_update_bits(codec, WCD9XXX_A_RC_OSC_FREQ, 0x80, 0);
- /* clk source to ext clk and clk buff ref to VBG */
- snd_soc_update_bits(codec, WCD9XXX_A_CLK_BUFF_EN1, 0x0C, 0x04);
}
return 0;
@@ -423,9 +421,14 @@
snd_soc_write(codec, WCD9XXX_A_CLK_BUFF_EN2, 0x02);
wcd9xxx_resmgr_enable_config_mode(codec, 0);
}
+ /* clk source to ext clk and clk buff ref to VBG */
+ snd_soc_update_bits(codec, WCD9XXX_A_CLK_BUFF_EN1, 0x0C, 0x04);
}
snd_soc_update_bits(codec, WCD9XXX_A_CLK_BUFF_EN1, 0x01, 0x01);
+ /* sleep time required by codec hardware to enable clock buffer */
+ usleep_range(1000, 1200);
+
snd_soc_update_bits(codec, WCD9XXX_A_CLK_BUFF_EN2, 0x02, 0x00);
/* on MCLK */
diff --git a/sound/soc/msm/Makefile b/sound/soc/msm/Makefile
index ebde90b..7ab4811 100644
--- a/sound/soc/msm/Makefile
+++ b/sound/soc/msm/Makefile
@@ -55,8 +55,7 @@
# for MSM 8960 sound card driver
obj-$(CONFIG_SND_SOC_MSM_QDSP6_INTF) += qdsp6/
-
-snd-soc-qdsp6-objs := msm-dai-q6.o msm-pcm-q6.o msm-multi-ch-pcm-q6.o msm-lowlatency-pcm-q6.o msm-pcm-routing.o msm-dai-fe.o msm-compr-q6.o msm-dai-stub.o
+snd-soc-qdsp6-objs := msm-dai-q6.o msm-pcm-q6.o msm-multi-ch-pcm-q6.o msm-lowlatency-pcm-q6.o msm-pcm-loopback.o msm-pcm-routing.o msm-dai-fe.o msm-compr-q6.o msm-dai-stub.o
obj-$(CONFIG_SND_SOC_MSM_QDSP6_HDMI_AUDIO) += msm-dai-q6-hdmi.o
obj-$(CONFIG_SND_SOC_VOICE) += msm-pcm-voice.o msm-pcm-voip.o msm-pcm-dtmf.o msm-pcm-host-voice.o
snd-soc-qdsp6-objs += msm-pcm-lpa.o msm-pcm-afe.o
diff --git a/sound/soc/msm/mdm9625.c b/sound/soc/msm/mdm9625.c
index 2bef1b7..f3ccb33 100644
--- a/sound/soc/msm/mdm9625.c
+++ b/sound/soc/msm/mdm9625.c
@@ -761,6 +761,21 @@
.be_id = MSM_FRONTEND_DAI_MULTIMEDIA1
},
{
+ .name = "MDM9625 Media2",
+ .stream_name = "MultiMedia2",
+ .cpu_dai_name = "MultiMedia2",
+ .platform_name = "msm-pcm-dsp.0",
+ .dynamic = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .ignore_suspend = 1,
+		/* this dai-link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_MULTIMEDIA2,
+ },
+ {
.name = "MSM VoIP",
.stream_name = "VoIP",
.cpu_dai_name = "VoIP",
diff --git a/sound/soc/msm/msm-dai-fe.c b/sound/soc/msm/msm-dai-fe.c
index 8db13f6..1b51595 100644
--- a/sound/soc/msm/msm-dai-fe.c
+++ b/sound/soc/msm/msm-dai-fe.c
@@ -236,6 +236,17 @@
.rate_min = 8000,
.rate_max = 192000,
},
+ .capture = {
+ .stream_name = "MultiMedia6 Capture",
+ .aif_name = "MM_UL6",
+ .rates = (SNDRV_PCM_RATE_8000_48000|
+ SNDRV_PCM_RATE_KNOT),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels_min = 1,
+ .channels_max = 8,
+ .rate_min = 8000,
+ .rate_max = 48000,
+ },
.ops = &msm_fe_Multimedia_dai_ops,
.name = "MultiMedia6",
},
@@ -549,6 +560,34 @@
.name = "SEC_I2S_RX_HOSTLESS",
},
{
+ .capture = {
+ .stream_name = "Primary MI2S_TX Hostless Capture",
+ .aif_name = "PRI_MI2S_UL_HL",
+ .rates = SNDRV_PCM_RATE_8000_48000,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels_min = 1,
+ .channels_max = 2,
+ .rate_min = 8000,
+ .rate_max = 48000,
+ },
+ .ops = &msm_fe_dai_ops,
+ .name = "PRI_MI2S_TX_HOSTLESS",
+ },
+ {
+ .playback = {
+ .stream_name = "Secondary MI2S_RX Hostless Playback",
+ .aif_name = "SEC_MI2S_DL_HL",
+ .rates = SNDRV_PCM_RATE_8000_48000,
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .channels_min = 1,
+ .channels_max = 2,
+ .rate_min = 8000,
+ .rate_max = 48000,
+ },
+ .ops = &msm_fe_dai_ops,
+ .name = "SEC_MI2S_RX_HOSTLESS",
+ },
+ {
.playback = {
.stream_name = "Voice2 Playback",
.aif_name = "VOICE2_DL",
diff --git a/sound/soc/msm/msm-pcm-loopback.c b/sound/soc/msm/msm-pcm-loopback.c
new file mode 100644
index 0000000..84ea9c6
--- /dev/null
+++ b/sound/soc/msm/msm-pcm-loopback.c
@@ -0,0 +1,362 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+
+* This program is free software; you can redistribute it and/or modify
+* it under the terms of the GNU General Public License version 2 and
+* only version 2 as published by the Free Software Foundation.
+
+* This program is distributed in the hope that it will be useful,
+* but WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+* GNU General Public License for more details.
+*/
+
+#include <linux/init.h>
+#include <linux/err.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <sound/apr_audio.h>
+#include <sound/core.h>
+#include <sound/soc.h>
+#include <sound/q6asm.h>
+#include <sound/pcm.h>
+#include <sound/initval.h>
+#include <sound/control.h>
+#include <asm/dma.h>
+#include <linux/dma-mapping.h>
+
+#include "msm-pcm-routing.h"
+
+struct msm_pcm_loopback {
+ struct snd_pcm_substream *playback_substream;
+ struct snd_pcm_substream *capture_substream;
+
+ int instance;
+
+ struct mutex lock;
+
+ uint32_t samp_rate;
+ uint32_t channel_mode;
+
+ int playback_start;
+ int capture_start;
+ int session_id;
+ struct audio_client *audio_client;
+};
+
+static void stop_pcm(struct msm_pcm_loopback *pcm);
+
+static const struct snd_pcm_hardware dummy_pcm_hardware = {
+ .formats = 0xffffffff,
+ .channels_min = 1,
+ .channels_max = UINT_MAX,
+
+ /* Random values to keep userspace happy when checking constraints */
+ .info = SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER,
+ .buffer_bytes_max = 128*1024,
+ .period_bytes_min = 1024,
+ .period_bytes_max = 1024*2,
+ .periods_min = 2,
+ .periods_max = 128,
+};
+
+static void msm_pcm_route_event_handler(enum msm_pcm_routing_event event,
+ void *priv_data)
+{
+ struct msm_pcm_loopback *pcm = priv_data;
+
+ BUG_ON(!pcm);
+
+ pr_debug("%s: event %x\n", __func__, event);
+
+ switch (event) {
+ case MSM_PCM_RT_EVT_DEVSWITCH:
+ q6asm_cmd(pcm->audio_client, CMD_PAUSE);
+ q6asm_cmd(pcm->audio_client, CMD_FLUSH);
+ q6asm_run(pcm->audio_client, 0, 0, 0);
+ default:
+ break;
+ }
+}
+
+static void msm_pcm_loopback_event_handler(uint32_t opcode,
+ uint32_t token, uint32_t *payload, void *priv)
+{
+ pr_debug("%s\n", __func__);
+ switch (opcode) {
+ case APR_BASIC_RSP_RESULT: {
+		switch (payload[0]) {
+		default:
+			break;
+		}
+ }
+ break;
+ default:
+		pr_err("Unsupported event opcode [0x%x]\n", opcode);
+ break;
+ }
+}
+
+static int msm_pcm_open(struct snd_pcm_substream *substream)
+{
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct snd_soc_pcm_runtime *rtd = snd_pcm_substream_chip(substream);
+ struct msm_pcm_loopback *pcm;
+ int ret = 0;
+ struct msm_pcm_routing_evt event;
+
+ pcm = dev_get_drvdata(rtd->platform->dev);
+ mutex_lock(&pcm->lock);
+
+ snd_soc_set_runtime_hwparams(substream, &dummy_pcm_hardware);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ pcm->playback_substream = substream;
+ else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+ pcm->capture_substream = substream;
+
+ pcm->instance++;
+ dev_dbg(rtd->platform->dev, "%s: pcm out open: %d,%d\n", __func__,
+ pcm->instance, substream->stream);
+ if (pcm->instance == 2) {
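+		/*
+		 * Both playback and capture substreams are open;
+		 * set up the ASM loopback session.
+		 */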
+ struct snd_soc_pcm_runtime *soc_pcm_rx =
+ pcm->playback_substream->private_data;
+ struct snd_soc_pcm_runtime *soc_pcm_tx =
+ pcm->capture_substream->private_data;
+ if (pcm->audio_client != NULL)
+ stop_pcm(pcm);
+
+ pcm->audio_client = q6asm_audio_client_alloc(
+ (app_cb)msm_pcm_loopback_event_handler, pcm);
+ if (!pcm->audio_client) {
+ dev_err(rtd->platform->dev,
+ "%s: Could not allocate memory\n", __func__);
+ mutex_unlock(&pcm->lock);
+ return -ENOMEM;
+ }
+ pcm->session_id = pcm->audio_client->session;
+ pcm->audio_client->perf_mode = false;
+ ret = q6asm_open_loopack(pcm->audio_client);
+ if (ret < 0) {
+ dev_err(rtd->platform->dev,
+ "%s: pcm out open failed\n", __func__);
+ q6asm_audio_client_free(pcm->audio_client);
+ mutex_unlock(&pcm->lock);
+ return -ENOMEM;
+ }
+ event.event_func = msm_pcm_route_event_handler;
+ event.priv_data = (void *) pcm;
+ msm_pcm_routing_reg_phy_stream(soc_pcm_tx->dai_link->be_id,
+ pcm->audio_client->perf_mode,
+ pcm->session_id, pcm->capture_substream->stream);
+ msm_pcm_routing_reg_phy_stream_v2(soc_pcm_rx->dai_link->be_id,
+ pcm->audio_client->perf_mode,
+ pcm->session_id, pcm->playback_substream->stream,
+ event);
+ }
+ dev_info(rtd->platform->dev, "%s: Instance = %d, Stream ID = %s\n",
+ __func__ , pcm->instance, substream->pcm->id);
+ runtime->private_data = pcm;
+
+ mutex_unlock(&pcm->lock);
+
+ return 0;
+}
+
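+/*
+ * Tear down the ASM loopback session and deregister both
+ * stream directions from the routing driver.
+ */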
+static void stop_pcm(struct msm_pcm_loopback *pcm)
+{
+ struct snd_soc_pcm_runtime *soc_pcm_rx =
+ pcm->playback_substream->private_data;
+ struct snd_soc_pcm_runtime *soc_pcm_tx =
+ pcm->capture_substream->private_data;
+
+ if (pcm->audio_client == NULL)
+ return;
+ q6asm_cmd(pcm->audio_client, CMD_CLOSE);
+
+ msm_pcm_routing_dereg_phy_stream(soc_pcm_rx->dai_link->be_id,
+ SNDRV_PCM_STREAM_PLAYBACK);
+ msm_pcm_routing_dereg_phy_stream(soc_pcm_tx->dai_link->be_id,
+ SNDRV_PCM_STREAM_CAPTURE);
+ q6asm_audio_client_free(pcm->audio_client);
+ pcm->audio_client = NULL;
+}
+
+static int msm_pcm_close(struct snd_pcm_substream *substream)
+{
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct msm_pcm_loopback *pcm = runtime->private_data;
+ struct snd_soc_pcm_runtime *rtd = snd_pcm_substream_chip(substream);
+ int ret = 0;
+
+ mutex_lock(&pcm->lock);
+
+ dev_dbg(rtd->platform->dev, "%s: end pcm call:%d\n",
+ __func__, substream->stream);
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ pcm->playback_start = 0;
+ else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+ pcm->capture_start = 0;
+
+ pcm->instance--;
+ if (!pcm->playback_start || !pcm->capture_start) {
+ dev_dbg(rtd->platform->dev, "%s: end pcm call\n", __func__);
+ stop_pcm(pcm);
+ }
+
+ mutex_unlock(&pcm->lock);
+ return ret;
+}
+
+static int msm_pcm_prepare(struct snd_pcm_substream *substream)
+{
+ int ret = 0;
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct msm_pcm_loopback *pcm = runtime->private_data;
+ struct snd_soc_pcm_runtime *rtd = snd_pcm_substream_chip(substream);
+
+ mutex_lock(&pcm->lock);
+
+ dev_dbg(rtd->platform->dev, "%s: ASM loopback stream:%d\n",
+ __func__, substream->stream);
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (!pcm->playback_start)
+ pcm->playback_start = 1;
+ } else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+ if (!pcm->capture_start)
+ pcm->capture_start = 1;
+ }
+ mutex_unlock(&pcm->lock);
+
+ return ret;
+}
+
+static int msm_pcm_trigger(struct snd_pcm_substream *substream, int cmd)
+{
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct msm_pcm_loopback *pcm = runtime->private_data;
+ struct snd_soc_pcm_runtime *rtd = snd_pcm_substream_chip(substream);
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ dev_dbg(rtd->platform->dev,
+ "%s: playback_start:%d,capture_start:%d\n", __func__,
+ pcm->playback_start, pcm->capture_start);
+ if (pcm->playback_start && pcm->capture_start)
+ q6asm_run_nowait(pcm->audio_client, 0, 0, 0);
+ break;
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ case SNDRV_PCM_TRIGGER_STOP:
+ dev_dbg(rtd->platform->dev,
+ "%s:Pause/Stop - playback_start:%d,capture_start:%d\n",
+ __func__, pcm->playback_start, pcm->capture_start);
+ if (pcm->playback_start && pcm->capture_start)
+ q6asm_cmd_nowait(pcm->audio_client, CMD_PAUSE);
+ break;
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int msm_pcm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_soc_pcm_runtime *rtd = snd_pcm_substream_chip(substream);
+
+ dev_dbg(rtd->platform->dev, "%s: ASM loopback\n", __func__);
+
+ return snd_pcm_lib_alloc_vmalloc_buffer(substream,
+ params_buffer_bytes(params));
+}
+
+static int msm_pcm_hw_free(struct snd_pcm_substream *substream)
+{
+ return snd_pcm_lib_free_vmalloc_buffer(substream);
+}
+
+static struct snd_pcm_ops msm_pcm_ops = {
+ .open = msm_pcm_open,
+ .hw_params = msm_pcm_hw_params,
+ .hw_free = msm_pcm_hw_free,
+ .close = msm_pcm_close,
+ .prepare = msm_pcm_prepare,
+ .trigger = msm_pcm_trigger,
+};
+
+
+static int msm_asoc_pcm_new(struct snd_soc_pcm_runtime *rtd)
+{
+ struct snd_card *card = rtd->card->snd_card;
+ int ret = 0;
+
+ if (!card->dev->coherent_dma_mask)
+ card->dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ return ret;
+}
+
+static struct snd_soc_platform_driver msm_soc_platform = {
+ .ops = &msm_pcm_ops,
+ .pcm_new = msm_asoc_pcm_new,
+};
+
+static __devinit int msm_pcm_probe(struct platform_device *pdev)
+{
+ struct msm_pcm_loopback *pcm;
+
+ dev_dbg(&pdev->dev, "%s: dev name %s\n",
+ __func__, dev_name(&pdev->dev));
+
+ pcm = kzalloc(sizeof(struct msm_pcm_loopback), GFP_KERNEL);
+ if (!pcm) {
+ dev_err(&pdev->dev, "%s Failed to allocate memory for pcm\n",
+ __func__);
+ return -ENOMEM;
+ } else {
+ mutex_init(&pcm->lock);
+ dev_set_drvdata(&pdev->dev, pcm);
+ }
+ return snd_soc_register_platform(&pdev->dev,
+ &msm_soc_platform);
+}
+
+static int msm_pcm_remove(struct platform_device *pdev)
+{
+ struct msm_pcm_loopback *pcm;
+
+ pcm = dev_get_drvdata(&pdev->dev);
+ mutex_destroy(&pcm->lock);
+
+ snd_soc_unregister_platform(&pdev->dev);
+ return 0;
+}
+
+static struct platform_driver msm_pcm_driver = {
+ .driver = {
+ .name = "msm-pcm-loopback",
+ .owner = THIS_MODULE,
+ },
+ .probe = msm_pcm_probe,
+ .remove = __devexit_p(msm_pcm_remove),
+};
+
+static int __init msm_soc_platform_init(void)
+{
+ return platform_driver_register(&msm_pcm_driver);
+}
+module_init(msm_soc_platform_init);
+
+static void __exit msm_soc_platform_exit(void)
+{
+ platform_driver_unregister(&msm_pcm_driver);
+}
+module_exit(msm_soc_platform_exit);
+
+MODULE_DESCRIPTION("PCM loopback platform driver");
+MODULE_LICENSE("GPL v2");
diff --git a/sound/soc/msm/msm-pcm-routing.c b/sound/soc/msm/msm-pcm-routing.c
index e74a0dd..2c3c6df 100644
--- a/sound/soc/msm/msm-pcm-routing.c
+++ b/sound/soc/msm/msm-pcm-routing.c
@@ -658,6 +658,11 @@
msm_bedais[reg].sample_rate, channels,
DEFAULT_COPP_TOPOLOGY);
+ if (session_type == SESSION_TYPE_RX &&
+ fdai->event_info.event_func)
+ fdai->event_info.event_func(
+ MSM_PCM_RT_EVT_DEVSWITCH,
+ fdai->event_info.priv_data);
msm_pcm_routing_build_matrix(val,
fdai->strm_id, path_type);
@@ -976,10 +981,8 @@
static int msm_routing_set_fm_vol_mixer(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
- afe_loopback_gain(INT_FM_TX , ucontrol->value.integer.value[0]);
-
+ afe_loopback_gain(INT_FM_TX, ucontrol->value.integer.value[0]);
msm_route_fm_vol_control = ucontrol->value.integer.value[0];
-
return 0;
}
@@ -1660,6 +1663,9 @@
SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_INT_BT_SCO_RX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia6", MSM_BACKEND_DAI_INT_BT_SCO_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA6, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new int_fm_rx_mixer_controls[] = {
@@ -1696,6 +1702,9 @@
SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_AFE_PCM_RX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia6", MSM_BACKEND_DAI_AFE_PCM_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA6, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new auxpcm_rx_mixer_controls[] = {
@@ -1714,6 +1723,9 @@
SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_AUXPCM_RX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia6", MSM_BACKEND_DAI_AUXPCM_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA6, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new sec_auxpcm_rx_mixer_controls[] = {
@@ -1810,6 +1822,12 @@
msm_routing_put_audio_mixer),
};
+static const struct snd_kcontrol_new mmul6_mixer_controls[] = {
+ SOC_SINGLE_EXT("INTERNAL_FM_TX", MSM_BACKEND_DAI_INT_FM_TX,
+ MSM_FRONTEND_DAI_MULTIMEDIA6, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
+};
+
static const struct snd_kcontrol_new pri_rx_voice_mixer_controls[] = {
SOC_SINGLE_EXT("CSVoice", MSM_BACKEND_DAI_PRI_I2S_RX,
MSM_FRONTEND_DAI_CS_VOICE, 1, 0, msm_routing_get_voice_mixer,
@@ -2715,6 +2733,7 @@
SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("MM_UL6", "MultiMedia6 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("CS-VOICE_DL1", "CS-VOICE Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("CS-VOICE_UL1", "CS-VOICE Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("VoLTE_DL", "VoLTE Playback", 0, 0, 0, 0),
@@ -2828,6 +2847,8 @@
mmul4_mixer_controls, ARRAY_SIZE(mmul4_mixer_controls)),
SND_SOC_DAPM_MIXER("MultiMedia5 Mixer", SND_SOC_NOPM, 0, 0,
mmul5_mixer_controls, ARRAY_SIZE(mmul5_mixer_controls)),
+ SND_SOC_DAPM_MIXER("MultiMedia6 Mixer", SND_SOC_NOPM, 0, 0,
+ mmul6_mixer_controls, ARRAY_SIZE(mmul6_mixer_controls)),
SND_SOC_DAPM_MIXER("AUX_PCM_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
auxpcm_rx_mixer_controls, ARRAY_SIZE(auxpcm_rx_mixer_controls)),
SND_SOC_DAPM_MIXER("SEC_AUX_PCM_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
@@ -3008,6 +3029,7 @@
{"MI2S_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"MI2S_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
{"MI2S_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
+ {"MI2S_RX Audio Mixer", "MultiMedia6", "MM_DL6"},
{"MI2S_RX", NULL, "MI2S_RX Audio Mixer"},
{"MultiMedia1 Mixer", "PRI_TX", "PRI_I2S_TX"},
@@ -3026,6 +3048,7 @@
{"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
{"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
+ {"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia6", "MM_DL6"},
{"INT_BT_SCO_RX", NULL, "INTERNAL_BT_SCO_RX Audio Mixer"},
{"INTERNAL_FM_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
@@ -3040,12 +3063,14 @@
{"AFE_PCM_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"AFE_PCM_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
{"AFE_PCM_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
+ {"AFE_PCM_RX Audio Mixer", "MultiMedia6", "MM_DL6"},
{"PCM_RX", NULL, "AFE_PCM_RX Audio Mixer"},
{"MultiMedia1 Mixer", "INTERNAL_BT_SCO_TX", "INT_BT_SCO_TX"},
{"MultiMedia5 Mixer", "INTERNAL_BT_SCO_TX", "INT_BT_SCO_TX"},
{"MultiMedia1 Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
{"MultiMedia5 Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
+ {"MultiMedia6 Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
{"MultiMedia1 Mixer", "AFE_PCM_TX", "PCM_TX"},
{"MultiMedia5 Mixer", "AFE_PCM_TX", "PCM_TX"},
@@ -3054,12 +3079,14 @@
{"MM_UL2", NULL, "MultiMedia2 Mixer"},
{"MM_UL4", NULL, "MultiMedia4 Mixer"},
{"MM_UL5", NULL, "MultiMedia5 Mixer"},
+ {"MM_UL6", NULL, "MultiMedia6 Mixer"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
+ {"AUX_PCM_RX Audio Mixer", "MultiMedia6", "MM_DL6"},
{"AUX_PCM_RX", NULL, "AUX_PCM_RX Audio Mixer"},
{"SEC_AUX_PCM_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
diff --git a/sound/soc/msm/msm-pcm-routing.h b/sound/soc/msm/msm-pcm-routing.h
index c42e155..e25a44a 100644
--- a/sound/soc/msm/msm-pcm-routing.h
+++ b/sound/soc/msm/msm-pcm-routing.h
@@ -117,6 +117,7 @@
enum msm_pcm_routing_event {
MSM_PCM_RT_EVT_BUF_RECFG,
+ MSM_PCM_RT_EVT_DEVSWITCH,
MSM_PCM_RT_EVT_MAX,
};
/* dai_id: front-end ID,
diff --git a/sound/soc/msm/msm8930.c b/sound/soc/msm/msm8930.c
index 737317c..dd92e34 100644
--- a/sound/soc/msm/msm8930.c
+++ b/sound/soc/msm/msm8930.c
@@ -1143,6 +1143,22 @@
.ignore_pmdown_time = 1,
.be_id = MSM_FRONTEND_DAI_MULTIMEDIA5,
},
+ {
+ .name = "MSM8960 FM",
+ .stream_name = "MultiMedia6",
+ .cpu_dai_name = "MultiMedia6",
+ .platform_name = "msm-pcm-loopback",
+ .dynamic = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .ignore_suspend = 1,
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+		/* this dai-link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_MULTIMEDIA6,
+ },
/* Backend DAI Links */
{
.name = LPASS_BE_SLIMBUS_0_RX,
diff --git a/sound/soc/msm/msm8974.c b/sound/soc/msm/msm8974.c
index 8affc09..51334f2 100644
--- a/sound/soc/msm/msm8974.c
+++ b/sound/soc/msm/msm8974.c
@@ -75,6 +75,11 @@
#define EXT_CLASS_D_DIS_DELAY 3000
#define EXT_CLASS_D_DELAY_DELTA 2000
+/* It takes about 13ms for Class-AB PAs to ramp-up */
+#define EXT_CLASS_AB_EN_DELAY 10000
+#define EXT_CLASS_AB_DIS_DELAY 1000
+#define EXT_CLASS_AB_DELAY_DELTA 1000
+
#define NUM_OF_AUXPCM_GPIOS 4
static inline int param_is_mask(int p)
@@ -184,6 +189,8 @@
static struct platform_device *spdev;
static struct regulator *ext_spk_amp_regulator;
static int ext_spk_amp_gpio = -1;
+static int ext_ult_spk_amp_gpio = -1;
+static int ext_ult_lo_amp_gpio = -1;
static int msm8974_spk_control = 1;
static int msm8974_ext_spk_pamp;
static int msm_slim_0_rx_ch = 1;
@@ -196,7 +203,7 @@
static int msm_proxy_rx_ch = 2;
static struct mutex cdc_mclk_mutex;
-static struct q_clkdiv *codec_clk;
+static struct clk *codec_clk;
static int clk_users;
static atomic_t prim_auxpcm_rsc_ref;
static atomic_t sec_auxpcm_rsc_ref;
@@ -231,9 +238,43 @@
}
}
+ ext_ult_spk_amp_gpio = of_get_named_gpio(spdev->dev.of_node,
+ "qcom,ext-ult-spk-amp-gpio", 0);
+
+ if (ext_ult_spk_amp_gpio >= 0) {
+ ret = gpio_request(ext_ult_spk_amp_gpio,
+ "ext_ult_spk_amp_gpio");
+ if (ret) {
+			pr_err("%s: gpio_request failed for ext-ult-spk-amp-gpio.\n",
+ __func__);
+ return -EINVAL;
+ }
+ gpio_direction_output(ext_ult_spk_amp_gpio, 0);
+ }
+
return 0;
}
+static void msm8974_liquid_ext_ult_spk_power_amp_enable(u32 on)
+{
+ if (on) {
+ regulator_enable(ext_spk_amp_regulator);
+ gpio_direction_output(ext_ult_spk_amp_gpio, 1);
+		/* time taken to enable the external Class-AB power amplifier */
+ usleep_range(EXT_CLASS_AB_EN_DELAY,
+ EXT_CLASS_AB_EN_DELAY + EXT_CLASS_AB_DELAY_DELTA);
+ } else {
+ gpio_direction_output(ext_ult_spk_amp_gpio, 0);
+ regulator_disable(ext_spk_amp_regulator);
+		/* time taken to disable the external Class-AB power amplifier */
+ usleep_range(EXT_CLASS_AB_DIS_DELAY,
+ EXT_CLASS_AB_DIS_DELAY + EXT_CLASS_AB_DELAY_DELTA);
+ }
+
+ pr_debug("%s: %s external ultrasound SPKR_DRV PAs.\n", __func__,
+ on ? "Enable" : "Disable");
+}
+
static void msm8974_liquid_ext_spk_power_amp_enable(u32 on)
{
if (on) {
@@ -287,7 +328,6 @@
}
-
static irqreturn_t msm8974_liquid_docking_irq_handler(int irq, void *dev)
{
struct msm8974_liquid_dock_dev *dock_dev = dev;
@@ -351,41 +391,82 @@
return 0;
}
-
-
-
-static void msm8974_ext_spk_power_amp_on(u32 spk)
+static int msm8974_liquid_ext_spk_power_amp_on(u32 spk)
{
- if (spk & (LO_1_SPK_AMP |
- LO_3_SPK_AMP |
- LO_2_SPK_AMP |
- LO_4_SPK_AMP)) {
+ int rc;
- pr_debug("%s() External Left/Right Speakers already turned on. spk = 0x%08x\n",
- __func__, spk);
+ if (spk & (LO_1_SPK_AMP | LO_3_SPK_AMP | LO_2_SPK_AMP | LO_4_SPK_AMP)) {
+ pr_debug("%s: External speakers are already on. spk = 0x%x\n",
+ __func__, spk);
msm8974_ext_spk_pamp |= spk;
-
if ((msm8974_ext_spk_pamp & LO_1_SPK_AMP) &&
- (msm8974_ext_spk_pamp & LO_3_SPK_AMP) &&
- (msm8974_ext_spk_pamp & LO_2_SPK_AMP) &&
- (msm8974_ext_spk_pamp & LO_4_SPK_AMP)) {
-
+ (msm8974_ext_spk_pamp & LO_3_SPK_AMP) &&
+ (msm8974_ext_spk_pamp & LO_2_SPK_AMP) &&
+ (msm8974_ext_spk_pamp & LO_4_SPK_AMP))
if (ext_spk_amp_gpio >= 0 &&
- msm8974_liquid_dock_dev != NULL &&
- msm8974_liquid_dock_dev->dock_plug_det == 0)
+ msm8974_liquid_dock_dev &&
+ msm8974_liquid_dock_dev->dock_plug_det == 0)
msm8974_liquid_ext_spk_power_amp_enable(1);
- }
+ rc = 0;
} else {
+ pr_err("%s: Invalid external speaker ampl. spk = 0x%x\n",
+ __func__, spk);
+ rc = -EINVAL;
+ }
- pr_err("%s: ERROR : Invalid External Speaker Ampl. spk = 0x%08x\n",
+ return rc;
+}
+
+static void msm8974_fluid_ext_us_amp_on(u32 spk)
+{
+ if (!gpio_is_valid(ext_ult_lo_amp_gpio)) {
+ pr_err("%s: ext_ult_lo_amp_gpio isn't configured\n", __func__);
+ } else if (spk & (LO_1_SPK_AMP | LO_3_SPK_AMP)) {
+ pr_debug("%s: External US amp is already on. spk = 0x%x\n",
+ __func__, spk);
+ msm8974_ext_spk_pamp |= spk;
+ if ((msm8974_ext_spk_pamp & LO_1_SPK_AMP) &&
+ (msm8974_ext_spk_pamp & LO_3_SPK_AMP))
+ pr_debug("%s: Turn on US amp. spk = 0x%x\n",
+ __func__, spk);
+ gpio_direction_output(ext_ult_lo_amp_gpio, 1);
+
+ } else {
+ pr_err("%s: Invalid external speaker ampl. spk = 0x%x\n",
__func__, spk);
- return;
}
}
-static void msm8974_ext_spk_power_amp_off(u32 spk)
+static void msm8974_fluid_ext_us_amp_off(u32 spk)
{
+ if (!gpio_is_valid(ext_ult_lo_amp_gpio)) {
+ pr_err("%s: ext_ult_lo_amp_gpio isn't configured\n", __func__);
+ } else if (spk & (LO_1_SPK_AMP | LO_3_SPK_AMP)) {
+ pr_debug("%s LO1 and LO3. spk = 0x%x", __func__, spk);
+ if (!msm8974_ext_spk_pamp) {
+ pr_debug("%s: Turn off US amp. spk = 0x%x\n",
+ __func__, spk);
+ gpio_direction_output(ext_ult_lo_amp_gpio, 0);
+ msm8974_ext_spk_pamp = 0;
+ }
+ } else {
+ pr_err("%s: Invalid external speaker ampl. spk = 0x%x\n",
+ __func__, spk);
+ }
+}
+
+static void msm8974_ext_spk_power_amp_on(u32 spk)
+{
+ if (gpio_is_valid(ext_spk_amp_gpio))
+ msm8974_liquid_ext_spk_power_amp_on(spk);
+ else if (gpio_is_valid(ext_ult_lo_amp_gpio))
+ msm8974_fluid_ext_us_amp_on(spk);
+}
+
+static void msm8974_liquid_ext_spk_power_amp_off(u32 spk)
+{
+
if (spk & (LO_1_SPK_AMP |
LO_3_SPK_AMP |
LO_2_SPK_AMP |
@@ -410,6 +491,14 @@
}
}
+static void msm8974_ext_spk_power_amp_off(u32 spk)
+{
+ if (gpio_is_valid(ext_spk_amp_gpio))
+ msm8974_liquid_ext_spk_power_amp_off(spk);
+ else if (gpio_is_valid(ext_ult_lo_amp_gpio))
+ msm8974_fluid_ext_us_amp_off(spk);
+}
+
static void msm8974_ext_control(struct snd_soc_codec *codec)
{
struct snd_soc_dapm_context *dapm = &codec->dapm;
@@ -495,6 +584,33 @@
}
+static int msm_ext_spkramp_ultrasound_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *k, int event)
+{
+
+ pr_debug("%s()\n", __func__);
+
+ if (!strncmp(w->name, "SPK_ultrasound amp", 19)) {
+ if (!gpio_is_valid(ext_ult_spk_amp_gpio)) {
+ pr_err("%s: ext_ult_spk_amp_gpio isn't configured\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (SND_SOC_DAPM_EVENT_ON(event))
+ msm8974_liquid_ext_ult_spk_power_amp_enable(1);
+ else
+ msm8974_liquid_ext_ult_spk_power_amp_enable(0);
+
+ } else {
+ pr_err("%s() Invalid Speaker Widget = %s\n",
+ __func__, w->name);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int msm_snd_enable_codec_ext_clk(struct snd_soc_codec *codec, int enable,
bool dapm)
{
@@ -515,23 +631,24 @@
if (clk_users != 1)
goto exit;
- ret = qpnp_clkdiv_enable(codec_clk);
- if (ret) {
- dev_err(codec->dev, "%s: Error enabling taiko MCLK\n",
- __func__);
- ret = -ENODEV;
+ if (codec_clk) {
+ clk_set_rate(codec_clk, TAIKO_EXT_CLK_RATE);
+ clk_prepare_enable(codec_clk);
+ taiko_mclk_enable(codec, 1, dapm);
+ } else {
+ pr_err("%s: Error setting Taiko MCLK\n", __func__);
+ clk_users--;
goto exit;
}
- taiko_mclk_enable(codec, 1, dapm);
} else {
if (clk_users > 0) {
clk_users--;
if (clk_users == 0) {
taiko_mclk_enable(codec, 0, dapm);
- qpnp_clkdiv_disable(codec_clk);
+ clk_disable_unprepare(codec_clk);
}
} else {
- pr_err("%s: Error releasing Tabla MCLK\n", __func__);
+ pr_err("%s: Error releasing Taiko MCLK\n", __func__);
ret = -EINVAL;
goto exit;
}
@@ -566,6 +683,8 @@
SND_SOC_DAPM_SPK("Lineout_2 amp", msm_ext_spkramp_event),
SND_SOC_DAPM_SPK("Lineout_4 amp", msm_ext_spkramp_event),
+ SND_SOC_DAPM_SPK("SPK_ultrasound amp",
+ msm_ext_spkramp_ultrasound_event),
SND_SOC_DAPM_MIC("Handset Mic", NULL),
SND_SOC_DAPM_MIC("Headset Mic", NULL),
@@ -1276,6 +1395,8 @@
snd_soc_dapm_sync(dapm);
+ codec_clk = clk_get(cpu_dai->dev, "osr_clk");
+
snd_soc_dai_set_channel_map(codec_dai, ARRAY_SIZE(tx_ch),
tx_ch, ARRAY_SIZE(rx_ch), rx_ch);
@@ -1285,7 +1406,7 @@
if (err) {
pr_err("%s: Failed to set codec registers config %d\n",
__func__, err);
- return err;
+ goto out;
}
config_data = taiko_get_afe_config(codec, AFE_SLIMBUS_SLAVE_CONFIG);
@@ -1293,7 +1414,7 @@
if (err) {
pr_err("%s: Failed to set slimbus slave config %d\n", __func__,
err);
- return err;
+ goto out;
}
config_data = taiko_get_afe_config(codec, AFE_AANC_VERSION);
@@ -1301,16 +1422,42 @@
if (err) {
pr_err("%s: Failed to set aanc version %d\n",
__func__, err);
- return err;
+ goto out;
}
-
+ config_data = taiko_get_afe_config(codec,
+ AFE_CDC_CLIP_REGISTERS_CONFIG);
+ if (config_data) {
+ err = afe_set_config(AFE_CDC_CLIP_REGISTERS_CONFIG,
+ config_data, 0);
+ if (err) {
+ pr_err("%s: Failed to set clip registers %d\n",
+ __func__, err);
+ return err;
+ }
+ }
+ config_data = taiko_get_afe_config(codec, AFE_CLIP_BANK_SEL);
+ if (config_data) {
+ err = afe_set_config(AFE_CLIP_BANK_SEL, config_data, 0);
+ if (err) {
+ pr_err("%s: Failed to set AFE bank selection %d\n",
+ __func__, err);
+ return err;
+ }
+ }
/* start mbhc */
mbhc_cfg.calibration = def_taiko_mbhc_cal();
- if (mbhc_cfg.calibration)
+ if (mbhc_cfg.calibration) {
err = taiko_hs_detect(codec, &mbhc_cfg);
- else
+ if (err)
+ goto out;
+ else
+ return err;
+ } else {
err = -ENOMEM;
-
+ goto out;
+ }
+out:
+ clk_put(codec_clk);
return err;
}
@@ -2234,20 +2381,6 @@
}
}
- codec_clk = qpnp_clkdiv_get(card->dev, "taiko-mclk");
- if (IS_ERR(codec_clk)) {
- dev_err(card->dev,
- "%s: Failed to request taiko mclk from pmic %ld\n",
- __func__, PTR_ERR(codec_clk));
- return -ENODEV ;
- }
-
- ret = qpnp_clkdiv_config(codec_clk, Q_CLKDIV_XO_DIV_2);
- if (ret) {
- dev_err(card->dev, "%s: Failed to set taiko mclk to %u\n",
- __func__, pdata->mclk_gpio);
- return ret;
- }
return 0;
}
@@ -2275,6 +2408,8 @@
struct snd_soc_card *card = &snd_soc_card_msm8974;
struct msm8974_asoc_mach_data *pdata;
int ret;
+ const char *auxpcm_pri_gpio_set = NULL;
+ const char *prop_name_ult_lo_gpio = "qcom,ext-ult-lo-amp-gpio";
if (!pdev->dev.of_node) {
dev_err(&pdev->dev, "No platform supplied from device tree\n");
@@ -2346,9 +2481,26 @@
goto err;
}
+ ext_ult_lo_amp_gpio = of_get_named_gpio(pdev->dev.of_node,
+ prop_name_ult_lo_gpio, 0);
+ if (!gpio_is_valid(ext_ult_lo_amp_gpio)) {
+ dev_dbg(&pdev->dev,
+ "Couldn't find %s property in node %s, %d\n",
+ prop_name_ult_lo_gpio, pdev->dev.of_node->full_name,
+ ext_ult_lo_amp_gpio);
+ } else {
+ ret = gpio_request(ext_ult_lo_amp_gpio, "US_AMP_GPIO");
+ if (ret) {
+ dev_err(card->dev,
+ "%s: Failed to request US amp gpio %d\n",
+ __func__, ext_ult_lo_amp_gpio);
+ goto err;
+ }
+ }
+
ret = msm8974_prepare_codec_mclk(card);
if (ret)
- goto err;
+ goto err1;
if (of_property_read_bool(pdev->dev.of_node, "qcom,hdmi-audio-rx")) {
dev_info(&pdev->dev, "%s(): hdmi audio support present\n",
@@ -2397,22 +2549,43 @@
if (ret) {
dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n",
ret);
- goto err;
+ goto err1;
}
- lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_PRI_MODE_MUXSEL, 4);
+ ret = of_property_read_string(pdev->dev.of_node,
+ "qcom,prim-auxpcm-gpio-set", &auxpcm_pri_gpio_set);
+ if (ret) {
+ dev_err(&pdev->dev, "Looking up %s property in node %s failed",
+ "qcom,prim-auxpcm-gpio-set",
+ pdev->dev.of_node->full_name);
+ goto err1;
+ }
+ if (!strcmp(auxpcm_pri_gpio_set, "prim-gpio-prim")) {
+ lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_PRI_MODE_MUXSEL, 4);
+ } else if (!strcmp(auxpcm_pri_gpio_set, "prim-gpio-tert")) {
+ lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_TER_MODE_MUXSEL, 4);
+ } else {
+ dev_err(&pdev->dev, "Invalid value %s for AUXPCM GPIO set\n",
+ auxpcm_pri_gpio_set);
+ ret = -EINVAL;
+ goto err1;
+ }
if (lpaif_pri_muxsel_virt_addr == NULL) {
pr_err("%s Pri muxsel virt addr is null\n", __func__);
ret = -EINVAL;
- goto err;
+ goto err1;
}
lpaif_sec_muxsel_virt_addr = ioremap(LPAIF_SEC_MODE_MUXSEL, 4);
if (lpaif_sec_muxsel_virt_addr == NULL) {
pr_err("%s Sec muxsel virt addr is null\n", __func__);
ret = -EINVAL;
- goto err;
+ goto err1;
}
return 0;
+
+err1:
+ gpio_free(ext_ult_lo_amp_gpio);
+ ext_ult_lo_amp_gpio = -1;
err:
if (pdata->mclk_gpio > 0) {
dev_dbg(&pdev->dev, "%s free gpio %d\n",
@@ -2438,9 +2611,15 @@
if (ext_spk_amp_regulator)
regulator_put(ext_spk_amp_regulator);
+ if (gpio_is_valid(ext_ult_spk_amp_gpio))
+ gpio_free(ext_ult_spk_amp_gpio);
+
+ if (gpio_is_valid(ext_ult_lo_amp_gpio))
+ gpio_free(ext_ult_lo_amp_gpio);
+
gpio_free(pdata->mclk_gpio);
gpio_free(pdata->us_euro_gpio);
- if (ext_spk_amp_gpio >= 0)
+ if (gpio_is_valid(ext_spk_amp_gpio))
gpio_free(ext_spk_amp_gpio);
if (msm8974_liquid_dock_dev != NULL) {
diff --git a/sound/soc/msm/msm8x10.c b/sound/soc/msm/msm8x10.c
index 4dd85fc..1f7712f 100644
--- a/sound/soc/msm/msm8x10.c
+++ b/sound/soc/msm/msm8x10.c
@@ -18,6 +18,7 @@
#include <linux/slab.h>
#include <linux/mfd/pm8xxx/pm8921.h>
#include <linux/qpnp/clkdiv.h>
+#include <linux/io.h>
#include <sound/core.h>
#include <sound/soc.h>
#include <sound/soc-dapm.h>
@@ -26,8 +27,196 @@
#include <asm/mach-types.h>
#include <mach/socinfo.h>
#include <qdsp6v2/msm-pcm-routing-v2.h>
+#include <sound/q6afe-v2.h>
#include <linux/module.h>
+#include "../codecs/msm8x10-wcd.h"
#define DRV_NAME "msm8x10-asoc-wcd"
+#define BTSCO_RATE_8KHZ 8000
+#define BTSCO_RATE_16KHZ 16000
+
+static int msm_btsco_rate = BTSCO_RATE_8KHZ;
+static int msm_btsco_ch = 1;
+
+static int msm_proxy_rx_ch = 2;
+
+#define MSM8X10_DINO_LPASS_AUDIO_CORE_DIG_CODEC_CLK_SEL 0xFE03B004
+#define MSM8X10_DINO_LPASS_DIGCODEC_CMD_RCGR 0xFE02C000
+#define MSM8X10_DINO_LPASS_DIGCODEC_CFG_RCGR 0xFE02C004
+#define MSM8X10_DINO_LPASS_DIGCODEC_M 0xFE02C008
+#define MSM8X10_DINO_LPASS_DIGCODEC_N 0xFE02C00C
+#define MSM8X10_DINO_LPASS_DIGCODEC_D 0xFE02C010
+#define MSM8X10_DINO_LPASS_DIGCODEC_CBCR 0xFE02C014
+#define MSM8X10_DINO_LPASS_DIGCODEC_AHB_CBCR 0xFE02C018
+
+/*
+ * There is a limitation on the clock root selection: it must come
+ * from either MI2S or DIG_CODEC. The DIG_CODEC root can only provide
+ * a 9.6MHz clock to the codec, while MI2S can only provide 12.288MHz.
+ */
+enum {
+ DIG_CDC_CLK_SEL_DIG_CODEC,
+ DIG_CDC_CLK_SEL_PRI_MI2S,
+ DIG_CDC_CLK_SEL_SEC_MI2S,
+};
+
+static struct afe_clk_cfg mi2s_rx_clk = {
+ AFE_API_VERSION_I2S_CONFIG,
+ Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ,
+ Q6AFE_LPASS_OSR_CLK_12_P288_MHZ,
+ Q6AFE_LPASS_CLK_SRC_INTERNAL,
+ Q6AFE_LPASS_CLK_ROOT_DEFAULT,
+ Q6AFE_LPASS_MODE_BOTH_VALID,
+ 0,
+};
+
+static struct afe_clk_cfg mi2s_tx_clk = {
+ AFE_API_VERSION_I2S_CONFIG,
+ Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ,
+ Q6AFE_LPASS_OSR_CLK_12_P288_MHZ,
+ Q6AFE_LPASS_CLK_SRC_INTERNAL,
+ Q6AFE_LPASS_CLK_ROOT_DEFAULT,
+ Q6AFE_LPASS_MODE_BOTH_VALID,
+ 0,
+};
+
+static struct afe_digital_clk_cfg digital_cdc_clk = {
+ AFE_API_VERSION_I2S_CONFIG,
+ 9600000,
+ 5, /* Digital Codec root */
+ 0,
+};
+
+static atomic_t aud_init_rsc_ref;
+
+static int msm8x10_mclk_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event);
+
+static const struct snd_soc_dapm_route msm8x10_common_audio_map[] = {
+ {"RX_BIAS", NULL, "MCLK"},
+ {"INT_LDO_H", NULL, "MCLK"},
+ {"MIC BIAS External", NULL, "Handset Mic"},
+ {"MIC BIAS Internal2", NULL, "Headset Mic"},
+ {"AMIC1", NULL, "MIC BIAS External"},
+ {"AMIC2", NULL, "MIC BIAS Internal2"},
+
+};
+
+static const struct snd_soc_dapm_widget msm8x10_dapm_widgets[] = {
+
+ SND_SOC_DAPM_SUPPLY("MCLK", SND_SOC_NOPM, 0, 0,
+ msm8x10_mclk_event, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_MIC("Handset Mic", NULL),
+ SND_SOC_DAPM_MIC("Headset Mic", NULL),
+
+};
+
+/*
+ * This function will be replaced by
+ * afe_set_lpass_internal_digital_codec_clock(port_id, cfg)
+ * in the future after LPASS API fix
+ */
+static int msm_enable_lpass_mclk(void)
+{
+ /* Select the codec root */
+ iowrite32(DIG_CDC_CLK_SEL_DIG_CODEC,
+ ioremap(MSM8X10_DINO_LPASS_AUDIO_CORE_DIG_CODEC_CLK_SEL,
+ 4));
+ /* Div-2 */
+ iowrite32(0x3, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CFG_RCGR, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_M, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_N, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_D, 4));
+ /* Digital codec clock enable */
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CBCR, 4));
+ /* AHB clock enable */
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_AHB_CBCR, 4));
+ /* Set the update bit to make the settings go through */
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CMD_RCGR, 4));
+
+ return 0;
+}
+
+static int msm_enable_mclk_root(u16 port_id, struct afe_digital_clk_cfg *cfg)
+{
+ int ret = 0;
+ /*
+	 * The msm_enable_lpass_mclk() call will be replaced by
+	 * ret = afe_set_lpass_internal_digital_codec_clock(port_id, cfg)
+	 * in the future. Currently there is a bug in the LPASS clock plan
+	 * which does not consider the digital codec clock; it will be
+	 * fixed in a new Q6 image.
+ */
+ msm_enable_lpass_mclk();
+ pr_debug("%s(): return = %d\n", __func__, ret);
+ return ret;
+}
+
+static int msm_config_mclk(u16 port_id, struct afe_digital_clk_cfg *cfg)
+{
+ /* Select the codec root */
+ iowrite32(DIG_CDC_CLK_SEL_DIG_CODEC,
+ ioremap(MSM8X10_DINO_LPASS_AUDIO_CORE_DIG_CODEC_CLK_SEL,
+ 4));
+ /* Div-2 */
+ iowrite32(0x3, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CFG_RCGR, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_M, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_N, 4));
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_D, 4));
+ /* Digital codec clock enable */
+ if (cfg->clk_val == 0) {
+ iowrite32(0x0, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CBCR, 4));
+ pr_debug("%s(line %d)\n", __func__, __LINE__);
+ } else {
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CBCR, 4));
+ pr_debug("%s(line %d)\n", __func__, __LINE__);
+ }
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CBCR, 4));
+ /* AHB clock enable */
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_AHB_CBCR, 4));
+ /* Set the update bit to make the settings go through */
+ iowrite32(0x1, ioremap(MSM8X10_DINO_LPASS_DIGCODEC_CMD_RCGR, 4));
+
+ return 0;
+
+}
+
+static int msm_config_mi2s_clk(int enable)
+{
+ int ret = 0;
+ pr_debug("%s(line %d):enable = %x\n", __func__, __LINE__, enable);
+ if (enable) {
+ digital_cdc_clk.clk_val = 9600000;
+ mi2s_rx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_12_P288_MHZ;
+ mi2s_rx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &mi2s_rx_clk);
+ mi2s_tx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_12_P288_MHZ;
+ mi2s_tx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_PRIMARY_MI2S_RX,
+ &mi2s_tx_clk);
+ if (ret < 0)
+ pr_err("%s:afe_set_lpass_clock failed\n", __func__);
+
+ } else {
+ digital_cdc_clk.clk_val = 0;
+ mi2s_rx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_DISABLE;
+ mi2s_rx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_DISABLE;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &mi2s_rx_clk);
+ mi2s_tx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_DISABLE;
+ mi2s_tx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_DISABLE;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_PRIMARY_MI2S_RX,
+ &mi2s_tx_clk);
+ if (ret < 0)
+ pr_err("%s:afe_set_lpass_clock failed\n", __func__);
+
+ }
+ ret = msm_config_mclk(AFE_PORT_ID_SECONDARY_MI2S_RX, &digital_cdc_clk);
+ return ret;
+}
+
static int msm_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
struct snd_pcm_hw_params *params)
@@ -41,13 +230,22 @@
return 0;
}
-static int msm_audrx_init(struct snd_soc_pcm_runtime *rtd)
+static int msm_btsco_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
{
- pr_info("%s(), dev_name%s\n", __func__, dev_name(rtd->cpu_dai->dev));
+ struct snd_interval *rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
+
+ struct snd_interval *channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
+
+ rate->min = rate->max = msm_btsco_rate;
+ channels->min = channels->max = msm_btsco_ch;
+
return 0;
}
-static int msm_snd_hw_params(struct snd_pcm_substream *substream,
+static int msm_mi2s_snd_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params)
{
pr_debug("%s(): substream = %s stream = %d\n", __func__,
@@ -55,34 +253,184 @@
return 0;
}
-static int msm_snd_startup(struct snd_pcm_substream *substream)
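+/*
+ * Enable or disable the MI2S bit/OSR clocks for the stream
+ * direction and the digital codec MCLK.
+ */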
+static int mi2s_clk_ctl(struct snd_pcm_substream *substream, bool enable)
{
+ int ret = 0;
+ if (enable) {
+ digital_cdc_clk.clk_val = 9600000;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mi2s_rx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_12_P288_MHZ;
+ mi2s_rx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &mi2s_rx_clk);
+ } else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+ mi2s_tx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_12_P288_MHZ;
+ mi2s_tx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_1_P536_MHZ;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_PRIMARY_MI2S_RX,
+ &mi2s_tx_clk);
+ } else
+ pr_err("%s:Not valid substream.\n", __func__);
+
+ if (ret < 0)
+ pr_err("%s:afe_set_lpass_clock failed\n", __func__);
+
+ } else {
+ digital_cdc_clk.clk_val = 0;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mi2s_rx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_DISABLE;
+ mi2s_rx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_DISABLE;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &mi2s_rx_clk);
+ } else if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) {
+ mi2s_tx_clk.clk_val2 = Q6AFE_LPASS_OSR_CLK_DISABLE;
+ mi2s_tx_clk.clk_val1 = Q6AFE_LPASS_IBIT_CLK_DISABLE;
+ ret = afe_set_lpass_clock(AFE_PORT_ID_PRIMARY_MI2S_RX,
+ &mi2s_tx_clk);
+ } else
+ pr_err("%s:Not valid substream.\n", __func__);
+
+ if (ret < 0)
+ pr_err("%s:afe_set_lpass_clock failed\n", __func__);
+
+ }
+ ret = msm_config_mclk(AFE_PORT_ID_SECONDARY_MI2S_RX, &digital_cdc_clk);
+ return ret;
+}
+
+static int msm8x10_enable_codec_ext_clk(struct snd_soc_codec *codec,
+ int enable, bool dapm)
+{
+ int ret = 0;
+
+ pr_debug("%s: enable = %d codec name %s enable %x\n",
+ __func__, enable, codec->name, enable);
+ if (enable) {
+ digital_cdc_clk.clk_val = 9600000;
+ msm_config_mi2s_clk(1);
+ ret = msm_config_mclk(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &digital_cdc_clk);
+ msm8x10_wcd_mclk_enable(codec, 1, dapm);
+ } else {
+ msm8x10_wcd_mclk_enable(codec, 0, dapm);
+ ret = msm_config_mclk(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &digital_cdc_clk);
+ msm_config_mi2s_clk(0);
+ }
+ return ret;
+}
+
+static int msm8x10_mclk_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol, int event)
+{
+ pr_debug("%s: event = %d\n", __func__, event);
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ return msm8x10_enable_codec_ext_clk(w->codec, 1, true);
+ case SND_SOC_DAPM_POST_PMD:
+ return msm8x10_enable_codec_ext_clk(w->codec, 0, true);
+ default:
+ return -EINVAL;
+ }
+}
+
+static void msm_mi2s_snd_shutdown(struct snd_pcm_substream *substream)
+{
+ int ret;
+
pr_debug("%s(): substream = %s stream = %d\n", __func__,
substream->name, substream->stream);
+ ret = mi2s_clk_ctl(substream, false);
+ if (ret < 0)
+ pr_err("%s:clock disable failed\n", __func__);
+}
+
+static int msm_mi2s_snd_startup(struct snd_pcm_substream *substream)
+{
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ int ret = 0;
+
+ pr_debug("%s(): substream = %s stream = %d\n", __func__,
+ substream->name, substream->stream);
+
+ ret = mi2s_clk_ctl(substream, true);
+
+ ret = snd_soc_dai_set_fmt(cpu_dai, SND_SOC_DAIFMT_CBS_CFS);
+ if (ret < 0)
+ pr_err("set fmt cpu dai failed\n");
+
+ return ret;
+}
+
+static int msm_audrx_init(struct snd_soc_pcm_runtime *rtd)
+{
+
+ struct snd_soc_codec *codec = rtd->codec;
+ struct snd_soc_dapm_context *dapm = &codec->dapm;
+ struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ int ret = 0;
+
+ pr_debug("%s(),dev_name%s\n", __func__, dev_name(cpu_dai->dev));
+
+ pr_debug("%s(): aud_init_rsc_ref counter = %d\n",
+ __func__, atomic_read(&aud_init_rsc_ref));
+ if (atomic_inc_return(&aud_init_rsc_ref) != 1)
+ goto exit;
+
+ snd_soc_dapm_new_controls(dapm, msm8x10_dapm_widgets,
+ ARRAY_SIZE(msm8x10_dapm_widgets));
+
+ snd_soc_dapm_add_routes(dapm, msm8x10_common_audio_map,
+ ARRAY_SIZE(msm8x10_common_audio_map));
+
+ snd_soc_dapm_sync(dapm);
+ ret = msm_enable_mclk_root(AFE_PORT_ID_SECONDARY_MI2S_RX,
+ &digital_cdc_clk);
+exit:
+ return ret;
+}
+
+static int msm_proxy_rx_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_interval *rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
+
+ struct snd_interval *channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
+
+ pr_debug("%s: msm_proxy_rx_ch =%d\n", __func__, msm_proxy_rx_ch);
+
+ if (channels->max < 2)
+ channels->min = channels->max = 2;
+ channels->min = channels->max = msm_proxy_rx_ch;
+ rate->min = rate->max = 48000;
return 0;
}
-
-static void msm_snd_shutdown(struct snd_pcm_substream *substream)
+static int msm_proxy_tx_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
{
- pr_debug("%s(): substream = %s stream = %d\n", __func__,
- substream->name, substream->stream);
-}
+ struct snd_interval *rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
-static struct snd_soc_ops msm8x10_be_ops = {
- .startup = msm_snd_startup,
- .hw_params = msm_snd_hw_params,
- .shutdown = msm_snd_shutdown,
+ rate->min = rate->max = 48000;
+ return 0;
+}
+static struct snd_soc_ops msm8x10_mi2s_be_ops = {
+ .startup = msm_mi2s_snd_startup,
+ .hw_params = msm_mi2s_snd_hw_params,
+ .shutdown = msm_mi2s_snd_shutdown,
};
/* Digital audio interface glue - connects codec <---> CPU */
static struct snd_soc_dai_link msm8x10_dai[] = {
/* FrontEnd DAI Links */
- {
+ {/* hw:x,0 */
.name = "MSM8X10 Media1",
.stream_name = "MultiMedia1",
.cpu_dai_name = "MultiMedia1",
- .platform_name = "msm-pcm-dsp",
+ .platform_name = "msm-pcm-dsp.0",
.dynamic = 1,
.trigger = {SND_SOC_DPCM_TRIGGER_POST,
SND_SOC_DPCM_TRIGGER_POST},
@@ -93,11 +441,11 @@
.ignore_pmdown_time = 1,
.be_id = MSM_FRONTEND_DAI_MULTIMEDIA1
},
- {
+ {/* hw:x,1 */
.name = "MSM8X10 Media2",
.stream_name = "MultiMedia2",
.cpu_dai_name = "MultiMedia2",
- .platform_name = "msm-pcm-dsp",
+ .platform_name = "msm-pcm-dsp.0",
.dynamic = 1,
.codec_dai_name = "snd-soc-dummy-dai",
.codec_name = "snd-soc-dummy",
@@ -108,31 +456,347 @@
.ignore_pmdown_time = 1,
.be_id = MSM_FRONTEND_DAI_MULTIMEDIA2,
},
+ {/* hw:x,2 */
+ .name = "Circuit-Switch Voice",
+ .stream_name = "CS-Voice",
+ .cpu_dai_name = "CS-VOICE",
+ .platform_name = "msm-pcm-voice",
+ .dynamic = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_CS_VOICE,
+ },
+ {/* hw:x,3 */
+ .name = "MSM VoIP",
+ .stream_name = "VoIP",
+ .cpu_dai_name = "VoIP",
+ .platform_name = "msm-voip-dsp",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_VOIP,
+ },
+ {/* hw:x,4 */
+ .name = "MSM8X10 LPA",
+ .stream_name = "LPA",
+ .cpu_dai_name = "MultiMedia3",
+ .platform_name = "msm-pcm-lpa",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_MULTIMEDIA3,
+ },
+ /* Hostless PCM purpose */
+ {/* hw:x,5 */
+ .name = "Secondary MI2S RX Hostless",
+ .stream_name = "Secondary MI2S_RX Hostless Playback",
+ .cpu_dai_name = "SEC_MI2S_RX_HOSTLESS",
+ .platform_name = "msm-pcm-hostless",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+ .ignore_suspend = 1,
+ .ignore_pmdown_time = 1,
+ /* This dai link has MI2S support */
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ },
+ {/* hw:x,6 */
+ .name = "INT_FM Hostless",
+ .stream_name = "INT_FM Hostless",
+ .cpu_dai_name = "INT_FM_HOSTLESS",
+ .platform_name = "msm-pcm-hostless",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ },
+ {/* hw:x,7 */
+ .name = "MSM AFE-PCM RX",
+ .stream_name = "AFE-PROXY RX",
+ .cpu_dai_name = "msm-dai-q6-dev.241",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-rx",
+ .platform_name = "msm-pcm-afe",
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ },
+ {/* hw:x,8 */
+ .name = "MSM AFE-PCM TX",
+ .stream_name = "AFE-PROXY TX",
+ .cpu_dai_name = "msm-dai-q6-dev.240",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .platform_name = "msm-pcm-afe",
+ .ignore_suspend = 1,
+ },
+ {/* hw:x,9 */
+ .name = "MSM8X10 Compr",
+ .stream_name = "COMPR",
+ .cpu_dai_name = "MultiMedia4",
+ .platform_name = "msm-compr-dsp",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .ignore_suspend = 1,
+ .ignore_pmdown_time = 1,
+ /* this dai link has playback support */
+ .be_id = MSM_FRONTEND_DAI_MULTIMEDIA4,
+ },
+ {/* hw:x,10 */
+ .name = "AUXPCM Hostless",
+ .stream_name = "AUXPCM Hostless",
+ .cpu_dai_name = "AUXPCM_HOSTLESS",
+ .platform_name = "msm-pcm-hostless",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ },
+ {/* hw:x,11 */
+ .name = "Primary MI2S TX Hostless",
+ .stream_name = "Primary MI2S_TX Hostless Capture",
+ .cpu_dai_name = "PRI_MI2S_TX_HOSTLESS",
+ .platform_name = "msm-pcm-hostless",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
+ .ignore_suspend = 1,
+ .ignore_pmdown_time = 1,
+ /* This dai link has MI2S support */
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ },
+ {/* hw:x,12 */
+ .name = "MSM8x10 LowLatency",
+ .stream_name = "MultiMedia5",
+ .cpu_dai_name = "MultiMedia5",
+ .platform_name = "msm-pcm-dsp.1",
+ .dynamic = 1,
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST,
+ SND_SOC_DPCM_TRIGGER_POST},
+ .ignore_suspend = 1,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .be_id = MSM_FRONTEND_DAI_MULTIMEDIA5,
+ },
/* Backend I2S DAI Links */
{
- .name = LPASS_BE_MI2S_RX,
- .stream_name = "Primary MI2S Playback",
- .cpu_dai_name = "msm-dai-q6-mi2s.0",
- .platform_name = "msm-pcm-routing",
- .codec_name = "msm8x10-wcd-i2c-core.1-000d",
- .codec_dai_name = "msm8x10_wcd_i2s_rx1",
- .no_pcm = 1,
- .be_id = MSM_BACKEND_DAI_MI2S_RX,
- .init = &msm_audrx_init,
- .be_hw_params_fixup = msm_be_hw_params_fixup,
- .ops = &msm8x10_be_ops,
- },
- {
- .name = LPASS_BE_SEC_MI2S_TX,
- .stream_name = "Secondary MI2S Capture",
+ .name = LPASS_BE_SEC_MI2S_RX,
+ .stream_name = "Secondary MI2S Playback",
.cpu_dai_name = "msm-dai-q6-mi2s.1",
.platform_name = "msm-pcm-routing",
- .codec_name = "msm8x10-wcd-i2c-core.1-000d",
+ .codec_name = "msm8x10-wcd-i2c-core.5-000d",
+ .codec_dai_name = "msm8x10_wcd_i2s_rx1",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ .init = &msm_audrx_init,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ .ops = &msm8x10_mi2s_be_ops,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_PRI_MI2S_TX,
+ .stream_name = "Primary MI2S Capture",
+ .cpu_dai_name = "msm-dai-q6-mi2s.0",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm8x10-wcd-i2c-core.5-000d",
.codec_dai_name = "msm8x10_wcd_i2s_tx1",
.no_pcm = 1,
- .be_id = MSM_BACKEND_DAI_SECONDARY_MI2S_TX,
+ .be_id = MSM_BACKEND_DAI_PRI_MI2S_TX,
+ .init = &msm_audrx_init,
.be_hw_params_fixup = msm_be_hw_params_fixup,
- .ops = &msm8x10_be_ops,
+ .ops = &msm8x10_mi2s_be_ops,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_INT_BT_SCO_RX,
+ .stream_name = "Internal BT-SCO Playback",
+ .cpu_dai_name = "msm-dai-q6-dev.12288",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-rx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INT_BT_SCO_RX,
+ .be_hw_params_fixup = msm_btsco_be_hw_params_fixup,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_INT_BT_SCO_TX,
+ .stream_name = "Internal BT-SCO Capture",
+ .cpu_dai_name = "msm-dai-q6-dev.12289",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INT_BT_SCO_TX,
+ .be_hw_params_fixup = msm_btsco_be_hw_params_fixup,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_INT_FM_RX,
+ .stream_name = "Internal FM Playback",
+ .cpu_dai_name = "msm-dai-q6-dev.12292",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-rx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INT_FM_RX,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_INT_FM_TX,
+ .stream_name = "Internal FM Capture",
+ .cpu_dai_name = "msm-dai-q6-dev.12293",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INT_FM_TX,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_AFE_PCM_RX,
+ .stream_name = "AFE Playback",
+ .cpu_dai_name = "msm-dai-q6-dev.224",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-rx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_AFE_PCM_RX,
+ .be_hw_params_fixup = msm_proxy_rx_be_hw_params_fixup,
+ /* this dai link has playback support */
+ .ignore_pmdown_time = 1,
+ .ignore_suspend = 1,
+ },
+ {
+ .name = LPASS_BE_AFE_PCM_TX,
+ .stream_name = "AFE Capture",
+ .cpu_dai_name = "msm-dai-q6-dev.225",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_AFE_PCM_TX,
+ .be_hw_params_fixup = msm_proxy_tx_be_hw_params_fixup,
+ .ignore_suspend = 1,
+ },
+ /* Incall Record Uplink BACK END DAI Link */
+ {
+ .name = LPASS_BE_INCALL_RECORD_TX,
+ .stream_name = "Voice Uplink Capture",
+ .cpu_dai_name = "msm-dai-q6-dev.32772",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INCALL_RECORD_TX,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ .ignore_suspend = 1,
+ },
+ /* Incall Record Downlink BACK END DAI Link */
+ {
+ .name = LPASS_BE_INCALL_RECORD_RX,
+ .stream_name = "Voice Downlink Capture",
+ .cpu_dai_name = "msm-dai-q6-dev.32771",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-tx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_INCALL_RECORD_RX,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ .ignore_suspend = 1,
+ },
+ /* Incall Music BACK END DAI Link */
+ {
+ .name = LPASS_BE_VOICE_PLAYBACK_TX,
+ .stream_name = "Voice Farend Playback",
+ .cpu_dai_name = "msm-dai-q6-dev.32773",
+ .platform_name = "msm-pcm-routing",
+ .codec_name = "msm-stub-codec.1",
+ .codec_dai_name = "msm-stub-rx",
+ .no_pcm = 1,
+ .be_id = MSM_BACKEND_DAI_VOICE_PLAYBACK_TX,
+ .be_hw_params_fixup = msm_be_hw_params_fixup,
+ .ignore_suspend = 1,
},
};
@@ -165,6 +829,8 @@
goto err;
}
+ atomic_set(&aud_init_rsc_ref, 0);
+
return 0;
err:
return ret;
diff --git a/sound/soc/msm/qdsp6/q6asm.c b/sound/soc/msm/qdsp6/q6asm.c
index f15f4d1..659d5a2 100644
--- a/sound/soc/msm/qdsp6/q6asm.c
+++ b/sound/soc/msm/qdsp6/q6asm.c
@@ -912,6 +912,7 @@
case ASM_STREAM_CMD_OPEN_WRITE:
case ASM_STREAM_CMD_OPEN_WRITE_V2_1:
case ASM_STREAM_CMD_OPEN_READWRITE:
+ case ASM_STREAM_CMD_OPEN_LOOPBACK:
case ASM_DATA_CMD_MEDIA_FORMAT_UPDATE:
case ASM_STREAM_CMD_SET_ENCDEC_PARAM:
case ASM_STREAM_CMD_OPEN_WRITE_COMPRESSED:
@@ -1852,6 +1853,45 @@
return -EINVAL;
}
+int q6asm_open_loopack(struct audio_client *ac)
+{
+ int rc = 0x00;
+ struct asm_stream_cmd_open_loopback open;
+
+ if ((ac == NULL) || (ac->apr == NULL)) {
+ pr_err("APR handle NULL\n");
+ return -EINVAL;
+ }
+ pr_debug("%s: session[%d]", __func__, ac->session);
+
+ q6asm_add_hdr(ac, &open.hdr, sizeof(open), TRUE);
+ open.hdr.opcode = ASM_STREAM_CMD_OPEN_LOOPBACK;
+
+ open.mode_flags = 0;
+ open.src_endpointype = 0;
+ open.sink_endpointype = 0;
+ /* source endpoint : matrix */
+ open.postprocopo_id = get_asm_topology();
+ if (open.postprocopo_id == 0)
+ open.postprocopo_id = DEFAULT_POPP_TOPOLOGY;
+
+ rc = apr_send_pkt(ac->apr, (uint32_t *) &open);
+ if (rc < 0) {
+ pr_err("open failed op[0x%x]rc[%d]\n", \
+ open.hdr.opcode, rc);
+ goto fail_cmd;
+ }
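+ /* Wait up to 5 seconds for the ADSP to acknowledge the loopback open */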
+ rc = wait_event_timeout(ac->cmd_wait,
+ (atomic_read(&ac->cmd_state) == 0), 5*HZ);
+ if (!rc) {
+ pr_err("timeout. waited for OPEN_WRITE rc[%d]\n", rc);
+ goto fail_cmd;
+ }
+ return 0;
+fail_cmd:
+ return -EINVAL;
+}
+
int q6asm_run(struct audio_client *ac, uint32_t flags,
uint32_t msw_ts, uint32_t lsw_ts)
{
diff --git a/sound/soc/msm/qdsp6v2/audio_ocmem.c b/sound/soc/msm/qdsp6v2/audio_ocmem.c
index ad96ae3..08d7277 100644
--- a/sound/soc/msm/qdsp6v2/audio_ocmem.c
+++ b/sound/soc/msm/qdsp6v2/audio_ocmem.c
@@ -125,7 +125,7 @@
atomic_t audio_cond;
atomic_t audio_exit;
spinlock_t audio_lock;
- struct mutex protect_lock;
+ struct mutex state_process_lock;
struct workqueue_struct *audio_ocmem_workqueue;
struct workqueue_struct *voice_ocmem_workqueue;
bool ocmem_en;
@@ -252,6 +252,7 @@
if (ret != 0) {
pr_err("%s: get low power segments from DSP failed, rc=%d\n",
__func__, ret);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
goto fail_cmd;
}
}
@@ -273,7 +274,9 @@
buf = ocmem_allocate_nb(cid, AUDIO_OCMEM_BUF_SIZE);
if (IS_ERR_OR_NULL(buf)) {
pr_err("%s: failed: %d\n", __func__, cid);
- return -ENOMEM;
+ ret = -ENOMEM;
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
+ goto fail_cmd;
}
set_bit_pos(audio_ocmem_lcl.audio_state, OCMEM_STATE_ALLOC);
@@ -285,7 +288,7 @@
if (!buf->len) {
pr_debug("%s: buf.len is 0, waiting for ocmem region\n",
__func__);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
wait_event_interruptible(audio_ocmem_lcl.audio_wait,
(atomic_read(&audio_ocmem_lcl.audio_cond) == 0) ||
(atomic_read(&audio_ocmem_lcl.audio_exit) == 1));
@@ -302,7 +305,7 @@
goto fail_cmd;
}
clear_bit_pos(audio_ocmem_lcl.audio_state, OCMEM_STATE_GROW);
- mutex_trylock(&audio_ocmem_lcl.protect_lock);
+ mutex_trylock(&audio_ocmem_lcl.state_process_lock);
}
pr_debug("%s: buf->len: %ld\n", __func__, (audio_ocmem_lcl.buf)->len);
@@ -333,7 +336,7 @@
_MAP_RESPONSE_BIT_MASK_) != 0);
atomic_set(&audio_ocmem_lcl.audio_cond, 1);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
pr_debug("%s: audio_cond[%d] audio_state[0x%x]\n", __func__,
atomic_read(&audio_ocmem_lcl.audio_cond),
atomic_read(&audio_ocmem_lcl.audio_state));
@@ -344,7 +347,7 @@
wait_event_interruptible(audio_ocmem_lcl.audio_wait,
(atomic_read(&audio_ocmem_lcl.audio_state) &
_BIT_MASK_) != 0);
-
+ mutex_lock(&audio_ocmem_lcl.state_process_lock);
state_bit = get_state_to_process(&audio_ocmem_lcl.audio_state);
switch (state_bit) {
case OCMEM_STATE_MAP_COMPL:
@@ -502,6 +505,7 @@
atomic_read(&audio_ocmem_lcl.audio_state));
break;
}
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
}
ret = 0;
fail_cmd:
@@ -531,7 +535,7 @@
wake_up(&audio_ocmem_lcl.audio_wait);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
pr_debug("%s: exit\n", __func__);
return 0;
}
@@ -654,7 +658,7 @@
container_of(work, struct audio_ocmem_workdata, work);
en = audio_ocm_work->en;
- mutex_lock(&audio_ocmem_lcl.protect_lock);
+ mutex_lock(&audio_ocmem_lcl.state_process_lock);
/* if previous work waiting for ocmem - signal it to exit */
atomic_set(&audio_ocmem_lcl.audio_exit, 1);
pr_debug("%s: acquired mutex for %d\n", __func__, en);
@@ -889,7 +893,7 @@
atomic_set(&audio_ocmem_lcl.audio_state, OCMEM_STATE_DEFAULT);
atomic_set(&audio_ocmem_lcl.audio_exit, 0);
spin_lock_init(&audio_ocmem_lcl.audio_lock);
- mutex_init(&audio_ocmem_lcl.protect_lock);
+ mutex_init(&audio_ocmem_lcl.state_process_lock);
audio_ocmem_lcl.ocmem_en = true;
audio_ocmem_lcl.audio_ocmem_running = false;
diff --git a/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c b/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
index 8bb3eaf..04b090a 100644
--- a/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
@@ -190,8 +190,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX
|| dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX
|| dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
@@ -295,8 +295,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX ||
dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX ||
dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
@@ -420,8 +420,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX ||
dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX ||
dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
@@ -1482,7 +1482,7 @@
case SNDRV_PCM_STREAM_PLAYBACK:
switch (mi2s_id) {
case MSM_PRIM_MI2S:
- *port_id = MI2S_RX;
+ *port_id = AFE_PORT_ID_PRIMARY_MI2S_RX;
break;
case MSM_SEC_MI2S:
*port_id = AFE_PORT_ID_SECONDARY_MI2S_RX;
@@ -1502,7 +1502,7 @@
case SNDRV_PCM_STREAM_CAPTURE:
switch (mi2s_id) {
case MSM_PRIM_MI2S:
- *port_id = MI2S_TX;
+ *port_id = AFE_PORT_ID_PRIMARY_MI2S_TX;
break;
case MSM_SEC_MI2S:
*port_id = AFE_PORT_ID_SECONDARY_MI2S_TX;
diff --git a/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c b/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
index b43c3bd..e6934f6 100644
--- a/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
+++ b/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
@@ -305,6 +305,7 @@
}
if (idx >= NUM_DOLBY_ENDP_DEVICE) {
pr_err("%s: device is not set accordingly\n", __func__);
+ kfree(params_value);
return -EINVAL;
}
for (i = 0; i < DOLBY_ENDDEP_PARAM_LENGTH; i++) {
@@ -367,6 +368,7 @@
params_length);
if (rc) {
pr_err("%s: send dolby params failed\n", __func__);
+ kfree(params_value);
return -EINVAL;
}
for (i = 0; i < MAX_DOLBY_PARAMS; i++) {
@@ -435,7 +437,8 @@
if (device == DEVICE_OUT_ALL) {
port_id = PRIMARY_I2S_RX | SLIMBUS_0_RX | HDMI_RX |
INT_BT_SCO_RX | INT_FM_RX |
- RT_PROXY_PORT_001_RX | PCM_RX |
+ RT_PROXY_PORT_001_RX |
+ AFE_PORT_ID_PRIMARY_PCM_RX |
MI2S_RX | SECONDARY_I2S_RX |
SLIMBUS_1_RX | SLIMBUS_4_RX | SLIMBUS_3_RX |
AFE_PORT_ID_SECONDARY_MI2S_RX;
@@ -584,7 +587,7 @@
if (rc) {
pr_err("%s: get parameters failed\n", __func__);
kfree(params_value);
- rc = -EINVAL;
+ return -EINVAL;
}
update_params_value = (int *)params_value;
ucontrol->value.integer.value[0] = dolby_dap_params_get.device_id;
diff --git a/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c b/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
index a163f6a..27b3f56 100644
--- a/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
@@ -33,6 +33,7 @@
#include <sound/compress_offload.h>
#include <sound/compress_driver.h>
#include <sound/timer.h>
+#include <sound/pcm_params.h>
#include "msm-pcm-q6-v2.h"
#include "msm-pcm-routing-v2.h"
@@ -55,9 +56,10 @@
SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME),
.formats = SNDRV_PCM_FMTBIT_S16_LE |
SNDRV_PCM_FMTBIT_S24_LE,
- .rates = SNDRV_PCM_RATE_8000_48000 | SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_8000_192000 |
+ SNDRV_PCM_RATE_KNOT,
.rate_min = 8000,
- .rate_max = 48000,
+ .rate_max = 192000,
.channels_min = 1,
.channels_max = 2,
.buffer_bytes_max = 1024 * 1024,
@@ -70,7 +72,8 @@
/* Conventional and unconventional sample rate supported */
static unsigned int supported_sample_rates[] = {
- 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000
+ 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000,
+ 96000, 192000
};
static struct snd_pcm_hw_constraint_list constraints_sample_rates = {
@@ -277,19 +280,7 @@
static int msm_pcm_open(struct snd_pcm_substream *substream)
{
struct snd_pcm_runtime *runtime = substream->runtime;
- struct snd_soc_pcm_runtime *soc_prtd = substream->private_data;
struct msm_audio *prtd;
- struct asm_softpause_params softpause = {
- .enable = SOFT_PAUSE_ENABLE,
- .period = SOFT_PAUSE_PERIOD,
- .step = SOFT_PAUSE_STEP,
- .rampingcurve = SOFT_PAUSE_CURVE_LINEAR,
- };
- struct asm_softvolume_params softvol = {
- .period = SOFT_VOLUME_PERIOD,
- .step = SOFT_VOLUME_STEP,
- .rampingcurve = SOFT_VOLUME_CURVE_LINEAR,
- };
int ret = 0;
pr_debug("%s\n", __func__);
@@ -307,31 +298,9 @@
kfree(prtd);
return -ENOMEM;
}
- prtd->audio_client->perf_mode = false;
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- ret = q6asm_open_write(prtd->audio_client, FORMAT_LINEAR_PCM);
- if (ret < 0) {
- pr_err("%s: pcm out open failed\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
- kfree(prtd);
- return -ENOMEM;
- }
- ret = q6asm_set_io_mode(prtd->audio_client, ASYNC_IO_MODE);
- if (ret < 0) {
- pr_err("%s: Set IO mode failed\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
- kfree(prtd);
- return -ENOMEM;
- }
- }
/* Capture path */
if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
return -EPERM;
- pr_debug("%s: session ID %d\n", __func__, prtd->audio_client->session);
- prtd->session_id = prtd->audio_client->session;
- msm_pcm_routing_reg_phy_stream(soc_prtd->dai_link->be_id,
- prtd->audio_client->perf_mode,
- prtd->session_id, substream->stream);
ret = snd_pcm_hw_constraint_list(runtime, 0,
SNDRV_PCM_HW_PARAM_RATE,
@@ -350,15 +319,6 @@
atomic_set(&lpa_audio.audio_ocmem_req, 0);
runtime->private_data = prtd;
lpa_audio.prtd = prtd;
- lpa_set_volume(0);
- ret = q6asm_set_softpause(lpa_audio.prtd->audio_client, &softpause);
- if (ret < 0)
- pr_err("%s: Send SoftPause Param failed ret=%d\n",
- __func__, ret);
- ret = q6asm_set_softvolume(lpa_audio.prtd->audio_client, &softvol);
- if (ret < 0)
- pr_err("%s: Send SoftVolume Param failed ret=%d\n",
- __func__, ret);
return 0;
}
@@ -407,22 +367,24 @@
prtd->pcm_irq_pos = 0;
}
- dir = IN;
- atomic_set(&prtd->pending_buffer, 0);
+ if (prtd->audio_client) {
+ dir = IN;
+ atomic_set(&prtd->pending_buffer, 0);
- if (atomic_cmpxchg(&lpa_audio.audio_ocmem_req, 1, 0))
- audio_ocmem_process_req(AUDIO, false);
- lpa_audio.prtd = NULL;
- q6asm_cmd(prtd->audio_client, CMD_CLOSE);
- q6asm_audio_client_buf_free_contiguous(dir,
+ if (atomic_cmpxchg(&lpa_audio.audio_ocmem_req, 1, 0))
+ audio_ocmem_process_req(AUDIO, false);
+ lpa_audio.prtd = NULL;
+ q6asm_cmd(prtd->audio_client, CMD_CLOSE);
+ q6asm_audio_client_buf_free_contiguous(dir,
prtd->audio_client);
- atomic_set(&prtd->stop, 1);
- pr_debug("%s\n", __func__);
+ atomic_set(&prtd->stop, 1);
+ q6asm_audio_client_free(prtd->audio_client);
+ pr_debug("%s\n", __func__);
+ }
msm_pcm_routing_dereg_phy_stream(soc_prtd->dai_link->be_id,
SNDRV_PCM_STREAM_PLAYBACK);
pr_debug("%s\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
kfree(prtd);
return 0;
@@ -484,9 +446,59 @@
{
struct snd_pcm_runtime *runtime = substream->runtime;
struct msm_audio *prtd = runtime->private_data;
+ struct snd_soc_pcm_runtime *soc_prtd = substream->private_data;
struct snd_dma_buffer *dma_buf = &substream->dma_buffer;
struct audio_buffer *buf;
+ uint16_t bits_per_sample = 16;
int dir, ret;
+ struct asm_softpause_params softpause = {
+ .enable = SOFT_PAUSE_ENABLE,
+ .period = SOFT_PAUSE_PERIOD,
+ .step = SOFT_PAUSE_STEP,
+ .rampingcurve = SOFT_PAUSE_CURVE_LINEAR,
+ };
+ struct asm_softvolume_params softvol = {
+ .period = SOFT_VOLUME_PERIOD,
+ .step = SOFT_VOLUME_STEP,
+ .rampingcurve = SOFT_VOLUME_CURVE_LINEAR,
+ };
+
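+ /* Open the ASM stream here so the bit width chosen in hw_params (16/24-bit) can be applied */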
+ prtd->audio_client->perf_mode = false;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (params_format(params) == SNDRV_PCM_FORMAT_S24_LE)
+ bits_per_sample = 24;
+ ret = q6asm_open_write_v2(prtd->audio_client,
+ FORMAT_LINEAR_PCM, bits_per_sample);
+ if (ret < 0) {
+ pr_err("%s: pcm out open failed\n", __func__);
+ q6asm_audio_client_free(prtd->audio_client);
+ prtd->audio_client = NULL;
+ return -ENOMEM;
+ }
+ ret = q6asm_set_io_mode(prtd->audio_client, ASYNC_IO_MODE);
+ if (ret < 0) {
+ pr_err("%s: Set IO mode failed\n", __func__);
+ q6asm_audio_client_free(prtd->audio_client);
+ prtd->audio_client = NULL;
+ return -ENOMEM;
+ }
+ }
+
+ pr_debug("%s: session ID %d\n", __func__, prtd->audio_client->session);
+ prtd->session_id = prtd->audio_client->session;
+ msm_pcm_routing_reg_phy_stream(soc_prtd->dai_link->be_id,
+ prtd->audio_client->perf_mode,
+ prtd->session_id, substream->stream);
+
+ lpa_set_volume(0);
+ ret = q6asm_set_softpause(lpa_audio.prtd->audio_client, &softpause);
+ if (ret < 0)
+ pr_err("%s: Send SoftPause Param failed ret=%d\n",
+ __func__, ret);
+ ret = q6asm_set_softvolume(lpa_audio.prtd->audio_client, &softvol);
+ if (ret < 0)
+ pr_err("%s: Send SoftVolume Param failed ret=%d\n",
+ __func__, ret);
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
dir = IN;
diff --git a/sound/soc/msm/qdsp6v2/msm-pcm-q6-v2.c b/sound/soc/msm/qdsp6v2/msm-pcm-q6-v2.c
index 717e63b..9fbf749 100644
--- a/sound/soc/msm/qdsp6v2/msm-pcm-q6-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-pcm-q6-v2.c
@@ -420,6 +420,12 @@
}
data = q6asm_is_cpu_buf_avail(IN, prtd->audio_client, &size, &idx);
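+ /* Do not copy more data than the available driver buffer can hold */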
+ if (size < fbytes) {
+ pr_err("%s: size mismatch error size %d fbytes %d\n",
+ __func__ , size , fbytes);
+ ret = -EFAULT;
+ goto fail;
+ }
bufptr = data;
if (bufptr) {
pr_debug("%s:fbytes =%d: xfer=%d size=%d\n",
diff --git a/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c b/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
index 8257023..fa476ee 100644
--- a/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
@@ -30,11 +30,11 @@
#include <sound/tlv.h>
#include <sound/asound.h>
#include <sound/pcm_params.h>
-#include <mach/qdsp6v2/q6core.h>
#include "msm-pcm-routing-v2.h"
#include "msm-dolby-dap-config.h"
#include "q6voice.h"
+#include "q6core.h"
struct msm_pcm_routing_bdai_data {
u16 port_id; /* AFE port ID */
@@ -213,8 +213,8 @@
{ INT_FM_TX, 0, 0, 0, 0, 0},
{ RT_PROXY_PORT_001_RX, 0, 0, 0, 0, 0},
{ RT_PROXY_PORT_001_TX, 0, 0, 0, 0, 0},
- { PCM_RX, 0, 0, 0, 0, 0},
- { PCM_TX, 0, 0, 0, 0, 0},
+ { AFE_PORT_ID_PRIMARY_PCM_RX, 0, 0, 0, 0, 0},
+ { AFE_PORT_ID_PRIMARY_PCM_TX, 0, 0, 0, 0, 0},
{ VOICE_PLAYBACK_TX, 0, 0, 0, 0, 0},
{ VOICE_RECORD_RX, 0, 0, 0, 0, 0},
{ VOICE_RECORD_TX, 0, 0, 0, 0, 0},
@@ -1406,6 +1406,9 @@
SOC_SINGLE_EXT("MultiMedia4", MSM_BACKEND_DAI_QUATERNARY_MI2S_RX,
MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_QUATERNARY_MI2S_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new secondary_mi2s_rx_mixer_controls[] = {
@@ -1421,6 +1424,18 @@
SOC_SINGLE_EXT("MultiMedia4", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
+};
+
+static const struct snd_kcontrol_new mi2s_hl_mixer_controls[] = {
+ SOC_SINGLE_EXT("PRI_MI2S_TX", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_BACKEND_DAI_PRI_MI2S_TX, 1, 0, msm_routing_get_port_mixer,
+ msm_routing_put_port_mixer),
+ SOC_SINGLE_EXT("INTERNAL_FM_TX", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_BACKEND_DAI_INT_FM_TX, 1, 0, msm_routing_get_port_mixer,
+ msm_routing_put_port_mixer),
};
static const struct snd_kcontrol_new primary_mi2s_rx_mixer_controls[] = {
@@ -1436,6 +1451,9 @@
SOC_SINGLE_EXT("MultiMedia4", MSM_BACKEND_DAI_PRI_MI2S_RX,
MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("MultiMedia5", MSM_BACKEND_DAI_PRI_MI2S_RX,
+ MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new hdmi_mixer_controls[] = {
@@ -1571,6 +1589,9 @@
SOC_SINGLE_EXT("MI2S_TX", MSM_BACKEND_DAI_MI2S_TX,
MSM_FRONTEND_DAI_MULTIMEDIA1, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("PRI_MI2S_TX", MSM_BACKEND_DAI_PRI_MI2S_TX,
+ MSM_FRONTEND_DAI_MULTIMEDIA1, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
SOC_SINGLE_EXT("QUAT_MI2S_TX", MSM_BACKEND_DAI_QUATERNARY_MI2S_TX,
MSM_FRONTEND_DAI_MULTIMEDIA1, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
@@ -1618,6 +1639,12 @@
msm_routing_put_audio_mixer),
};
+static const struct snd_kcontrol_new mmul4_mixer_controls[] = {
+ SOC_SINGLE_EXT("SLIM_0_TX", MSM_BACKEND_DAI_SLIMBUS_0_TX,
+ MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
+};
+
static const struct snd_kcontrol_new mmul5_mixer_controls[] = {
SOC_SINGLE_EXT("SLIM_0_TX", MSM_BACKEND_DAI_SLIMBUS_0_TX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
@@ -1640,6 +1667,9 @@
SOC_SINGLE_EXT("SEC_AUX_PCM_TX", MSM_BACKEND_DAI_SEC_AUXPCM_TX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
msm_routing_put_audio_mixer),
+ SOC_SINGLE_EXT("PRI_MI2S_TX", MSM_BACKEND_DAI_PRI_MI2S_TX,
+ MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
};
static const struct snd_kcontrol_new pri_rx_voice_mixer_controls[] = {
@@ -1678,6 +1708,24 @@
msm_routing_put_voice_mixer),
};
+static const struct snd_kcontrol_new sec_mi2s_rx_voice_mixer_controls[] = {
+ SOC_SINGLE_EXT("CSVoice", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_CS_VOICE, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("Voice2", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_VOICE2, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("Voip", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_VOIP, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("VoLTE", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_VOLTE, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("DTMF", MSM_BACKEND_DAI_SECONDARY_MI2S_RX,
+ MSM_FRONTEND_DAI_DTMF_RX, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
+};
+
static const struct snd_kcontrol_new slimbus_rx_voice_mixer_controls[] = {
SOC_SINGLE_EXT("CSVoice", MSM_BACKEND_DAI_SLIMBUS_0_RX,
MSM_FRONTEND_DAI_CS_VOICE, 1, 0, msm_routing_get_voice_mixer,
@@ -1859,6 +1907,9 @@
SOC_SINGLE_EXT("SEC_AUX_PCM_TX_Voice", MSM_BACKEND_DAI_SEC_AUXPCM_TX,
MSM_FRONTEND_DAI_CS_VOICE, 1, 0, msm_routing_get_voice_mixer,
msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("PRI_MI2S_TX_Voice", MSM_BACKEND_DAI_PRI_MI2S_TX,
+ MSM_FRONTEND_DAI_CS_VOICE, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
};
static const struct snd_kcontrol_new tx_voice2_mixer_controls[] = {
@@ -1928,6 +1979,9 @@
SOC_SINGLE_EXT("SEC_AUX_PCM_TX_Voip", MSM_BACKEND_DAI_SEC_AUXPCM_TX,
MSM_FRONTEND_DAI_VOIP, 1, 0, msm_routing_get_voice_mixer,
msm_routing_put_voice_mixer),
+ SOC_SINGLE_EXT("PRI_MI2S_TX_Voip", MSM_BACKEND_DAI_PRI_MI2S_TX,
+ MSM_FRONTEND_DAI_VOIP, 1, 0, msm_routing_get_voice_mixer,
+ msm_routing_put_voice_mixer),
};
static const struct snd_kcontrol_new tx_voice_stub_mixer_controls[] = {
@@ -2420,15 +2474,21 @@
__func__, e->shift_l , e->values[item]);
if (e->shift_l < MSM_BACKEND_DAI_MAX &&
e->values[item] < MSM_BACKEND_DAI_MAX)
+ /* Enable feedback TX path */
ret = afe_spk_prot_feed_back_cfg(
msm_bedais[e->values[item]].port_id,
- msm_bedais[e->shift_l].port_id, 1, 0);
+ msm_bedais[e->shift_l].port_id, 1, 0, 1);
else {
- pr_err("%s values are out of range\n", __func__);
- ret = -EINVAL;
+ pr_debug("%s values are out of range item %d\n",
+ __func__, e->values[item]);
+ /* Disable feedback TX path */
+ if (e->values[item] == MSM_BACKEND_DAI_MAX)
+ ret = afe_spk_prot_feed_back_cfg(0, 0, 0, 0, 0);
+ else
+ ret = -EINVAL;
}
} else {
- pr_err("%s item value is out of range\n", __func__);
+ pr_err("%s item value is out of range item\n", __func__);
ret = -EINVAL;
}
mutex_unlock(&routing_lock);
@@ -2443,11 +2503,11 @@
}
static const char * const slim0_rx_vi_fb_tx_lch_mux_text[] = {
- "SLIM4_TX",
+ "ZERO", "SLIM4_TX"
};
static const int const slim0_rx_vi_fb_tx_lch_value[] = {
- MSM_BACKEND_DAI_SLIMBUS_4_TX,
+ MSM_BACKEND_DAI_MAX, MSM_BACKEND_DAI_SLIMBUS_4_TX
};
static const struct soc_enum slim0_rx_vi_fb_lch_mux_enum =
SOC_VALUE_ENUM_DOUBLE(0, MSM_BACKEND_DAI_SLIMBUS_0_RX, 0, 0,
@@ -2472,6 +2532,7 @@
SND_SOC_DAPM_AIF_IN("VOIP_DL", "VoIP Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("CS-VOICE_DL1", "CS-VOICE Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("CS-VOICE_UL1", "CS-VOICE Capture", 0, 0, 0, 0),
@@ -2503,12 +2564,19 @@
SND_SOC_DAPM_AIF_IN("HDMI_DL_HL", "HDMI_HOSTLESS Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("SEC_I2S_DL_HL", "SEC_I2S_RX_HOSTLESS Playback",
0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("SEC_MI2S_DL_HL",
+ "Secondary MI2S_RX Hostless Playback",
+ 0, 0, 0, 0),
+
SND_SOC_DAPM_AIF_IN("AUXPCM_DL_HL", "AUXPCM_HOSTLESS Playback",
0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("AUXPCM_UL_HL", "AUXPCM_HOSTLESS Capture",
0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MI2S_UL_HL", "MI2S_TX_HOSTLESS Capture",
0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("PRI_MI2S_UL_HL",
+ "Primary MI2S_TX Hostless Capture",
+ 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MI2S_DL_HL", "MI2S_RX_HOSTLESS Playback",
0, 0, 0, 0),
@@ -2536,6 +2604,8 @@
SND_SOC_DAPM_AIF_IN("MI2S_TX", "MI2S Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("QUAT_MI2S_TX", "Quaternary MI2S Capture",
0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_IN("PRI_MI2S_TX", "Primary MI2S Capture",
+ 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("SEC_MI2S_TX", "Secondary MI2S Capture",
0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("SLIMBUS_0_TX", "Slimbus Capture", 0, 0, 0, 0),
@@ -2614,6 +2684,9 @@
SND_SOC_DAPM_MIXER("SEC_MI2S_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
secondary_mi2s_rx_mixer_controls,
ARRAY_SIZE(secondary_mi2s_rx_mixer_controls)),
+ SND_SOC_DAPM_MIXER("SEC_MI2S_RX Port Mixer", SND_SOC_NOPM, 0, 0,
+ mi2s_hl_mixer_controls,
+ ARRAY_SIZE(mi2s_hl_mixer_controls)),
SND_SOC_DAPM_MIXER("PRI_MI2S_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
primary_mi2s_rx_mixer_controls,
ARRAY_SIZE(primary_mi2s_rx_mixer_controls)),
@@ -2621,6 +2694,8 @@
mmul1_mixer_controls, ARRAY_SIZE(mmul1_mixer_controls)),
SND_SOC_DAPM_MIXER("MultiMedia2 Mixer", SND_SOC_NOPM, 0, 0,
mmul2_mixer_controls, ARRAY_SIZE(mmul2_mixer_controls)),
+ SND_SOC_DAPM_MIXER("MultiMedia4 Mixer", SND_SOC_NOPM, 0, 0,
+ mmul4_mixer_controls, ARRAY_SIZE(mmul4_mixer_controls)),
SND_SOC_DAPM_MIXER("MultiMedia5 Mixer", SND_SOC_NOPM, 0, 0,
mmul5_mixer_controls, ARRAY_SIZE(mmul5_mixer_controls)),
SND_SOC_DAPM_MIXER("AUX_PCM_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
@@ -2642,6 +2717,10 @@
SND_SOC_NOPM, 0, 0,
sec_i2s_rx_voice_mixer_controls,
ARRAY_SIZE(sec_i2s_rx_voice_mixer_controls)),
+ SND_SOC_DAPM_MIXER("SEC_MI2S_RX_Voice Mixer",
+ SND_SOC_NOPM, 0, 0,
+ sec_mi2s_rx_voice_mixer_controls,
+ ARRAY_SIZE(sec_mi2s_rx_voice_mixer_controls)),
SND_SOC_DAPM_MIXER("SLIM_0_RX_Voice Mixer",
SND_SOC_NOPM, 0, 0,
slimbus_rx_voice_mixer_controls,
@@ -2777,6 +2856,7 @@
{"MultiMedia1 Mixer", "VOC_REC_UL", "INCALL_RECORD_TX"},
{"MultiMedia1 Mixer", "VOC_REC_DL", "INCALL_RECORD_RX"},
{"MultiMedia1 Mixer", "SLIM_4_TX", "SLIMBUS_4_TX"},
+ {"MultiMedia4 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MultiMedia5 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MI2S_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
{"MI2S_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
@@ -2789,6 +2869,7 @@
{"QUAT_MI2S_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
{"QUAT_MI2S_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"QUAT_MI2S_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
+ {"QUAT_MI2S_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
{"QUAT_MI2S_RX", NULL, "QUAT_MI2S_RX Audio Mixer"},
@@ -2796,17 +2877,23 @@
{"SEC_MI2S_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
{"SEC_MI2S_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"SEC_MI2S_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
+ {"SEC_MI2S_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
{"SEC_MI2S_RX", NULL, "SEC_MI2S_RX Audio Mixer"},
+ {"SEC_MI2S_RX Port Mixer", "PRI_MI2S_TX", "PRI_MI2S_TX"},
+ {"SEC_MI2S_RX Port Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
+
{"PRI_MI2S_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
{"PRI_MI2S_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
{"PRI_MI2S_RX Audio Mixer", "MultiMedia3", "MM_DL3"},
{"PRI_MI2S_RX Audio Mixer", "MultiMedia4", "MM_DL4"},
+ {"PRI_MI2S_RX Audio Mixer", "MultiMedia5", "MM_DL5"},
{"PRI_MI2S_RX", NULL, "PRI_MI2S_RX Audio Mixer"},
{"MultiMedia1 Mixer", "PRI_TX", "PRI_I2S_TX"},
{"MultiMedia1 Mixer", "MI2S_TX", "MI2S_TX"},
{"MultiMedia2 Mixer", "MI2S_TX", "MI2S_TX"},
+ {"MultiMedia5 Mixer", "MI2S_TX", "MI2S_TX"},
{"MultiMedia1 Mixer", "QUAT_MI2S_TX", "QUAT_MI2S_TX"},
{"MultiMedia1 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MultiMedia1 Mixer", "AUX_PCM_UL_TX", "AUX_PCM_TX"},
@@ -2815,6 +2902,7 @@
{"MultiMedia5 Mixer", "SEC_AUX_PCM_TX", "SEC_AUX_PCM_TX"},
{"MultiMedia2 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MultiMedia1 Mixer", "SEC_MI2S_TX", "SEC_MI2S_TX"},
+ {"MultiMedia1 Mixer", "PRI_MI2S_TX", "PRI_MI2S_TX"},
{"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
{"INTERNAL_BT_SCO_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
@@ -2847,6 +2935,7 @@
{"MM_UL1", NULL, "MultiMedia1 Mixer"},
{"MultiMedia2 Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
{"MM_UL2", NULL, "MultiMedia2 Mixer"},
+ {"MM_UL4", NULL, "MultiMedia4 Mixer"},
{"MM_UL5", NULL, "MultiMedia5 Mixer"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
@@ -2884,6 +2973,13 @@
{"SEC_RX_Voice Mixer", "DTMF", "DTMF_DL_HL"},
{"SEC_I2S_RX", NULL, "SEC_RX_Voice Mixer"},
+ {"SEC_MI2S_RX_Voice Mixer", "CSVoice", "CS-VOICE_DL1"},
+ {"SEC_MI2S_RX_Voice Mixer", "Voice2", "VOICE2_DL"},
+ {"SEC_MI2S_RX_Voice Mixer", "VoLTE", "VoLTE_DL"},
+ {"SEC_MI2S_RX_Voice Mixer", "Voip", "VOIP_DL"},
+ {"SEC_MI2S_RX_Voice Mixer", "DTMF", "DTMF_DL_HL"},
+ {"SEC_MI2S_RX", NULL, "SEC_MI2S_RX_Voice Mixer"},
+
{"SLIM_0_RX_Voice Mixer", "CSVoice", "CS-VOICE_DL1"},
{"SLIM_0_RX_Voice Mixer", "Voice2", "VOICE2_DL"},
{"SLIM_0_RX_Voice Mixer", "VoLTE", "VoLTE_DL"},
@@ -2934,6 +3030,7 @@
{"MI2S_RX", NULL, "MI2S_RX_Voice Mixer"},
{"Voice_Tx Mixer", "PRI_TX_Voice", "PRI_I2S_TX"},
+ {"Voice_Tx Mixer", "PRI_MI2S_TX_Voice", "PRI_MI2S_TX"},
{"Voice_Tx Mixer", "MI2S_TX_Voice", "MI2S_TX"},
{"Voice_Tx Mixer", "SLIM_0_TX_Voice", "SLIMBUS_0_TX"},
{"Voice_Tx Mixer", "INTERNAL_BT_SCO_TX_Voice", "INT_BT_SCO_TX"},
@@ -2965,6 +3062,7 @@
{"Voip_Tx Mixer", "AFE_PCM_TX_Voip", "PCM_TX"},
{"Voip_Tx Mixer", "AUX_PCM_TX_Voip", "AUX_PCM_TX"},
{"Voip_Tx Mixer", "SEC_AUX_PCM_TX_Voip", "SEC_AUX_PCM_TX"},
+ {"Voip_Tx Mixer", "PRI_MI2S_TX_Voip", "PRI_MI2S_TX"},
{"VOIP_UL", NULL, "Voip_Tx Mixer"},
{"SLIMBUS_DL_HL", "Switch", "SLIM0_DL_HL"},
@@ -2997,6 +3095,9 @@
{"PCM_RX", NULL, "PCM_RX_DL_HL"},
{"MI2S_UL_HL", NULL, "MI2S_TX"},
{"SEC_I2S_RX", NULL, "SEC_I2S_DL_HL"},
+ {"PRI_MI2S_UL_HL", NULL, "PRI_MI2S_TX"},
+ {"SEC_MI2S_RX", NULL, "SEC_MI2S_DL_HL"},
+
{"SLIMBUS_0_RX Port Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
{"SLIMBUS_0_RX Port Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"SLIMBUS_0_RX Port Mixer", "AUX_PCM_UL_TX", "AUX_PCM_TX"},
@@ -3070,30 +3171,31 @@
{"BE_OUT", NULL, "SLIMBUS_3_RX"},
{"BE_OUT", NULL, "AUX_PCM_RX"},
{"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
+ {"BE_OUT", NULL, "INT_BT_SCO_RX"},
+ {"BE_OUT", NULL, "INT_FM_RX"},
+ {"BE_OUT", NULL, "PCM_RX"},
+ {"BE_OUT", NULL, "SLIMBUS_3_RX"},
+ {"BE_OUT", NULL, "AUX_PCM_RX"},
+ {"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
+ {"BE_OUT", NULL, "VOICE_PLAYBACK_TX"},
{"PRI_I2S_TX", NULL, "BE_IN"},
{"MI2S_TX", NULL, "BE_IN"},
{"QUAT_MI2S_TX", NULL, "BE_IN"},
+ {"PRI_MI2S_TX", NULL, "BE_IN"},
{"SEC_MI2S_TX", NULL, "BE_IN"},
{"SLIMBUS_0_TX", NULL, "BE_IN" },
{"SLIMBUS_1_TX", NULL, "BE_IN" },
{"SLIMBUS_3_TX", NULL, "BE_IN" },
{"SLIMBUS_4_TX", NULL, "BE_IN" },
{"SLIMBUS_5_TX", NULL, "BE_IN" },
- {"BE_OUT", NULL, "INT_BT_SCO_RX"},
{"INT_BT_SCO_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "INT_FM_RX"},
{"INT_FM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "PCM_RX"},
{"PCM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "SLIMBUS_3_RX"},
- {"BE_OUT", NULL, "AUX_PCM_RX"},
{"AUX_PCM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
{"SEC_AUX_PCM_TX", NULL, "BE_IN"},
{"INCALL_RECORD_TX", NULL, "BE_IN"},
{"INCALL_RECORD_RX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "VOICE_PLAYBACK_TX"},
{"SLIM0_RX_VI_FB_LCH_MUX", "SLIM4_TX", "SLIMBUS_4_TX"},
{"SLIMBUS_0_RX", NULL, "SLIM0_RX_VI_FB_LCH_MUX"},
};
diff --git a/sound/soc/msm/qdsp6v2/q6afe.c b/sound/soc/msm/qdsp6v2/q6afe.c
index e79a0cf..5b4244e 100644
--- a/sound/soc/msm/qdsp6v2/q6afe.c
+++ b/sound/soc/msm/qdsp6v2/q6afe.c
@@ -80,6 +80,8 @@
static int32_t afe_callback(struct apr_client_data *data, void *priv)
{
+ int i;
+
if (!data) {
pr_err("%s: Invalid param data\n", __func__);
return -EINVAL;
@@ -87,6 +89,12 @@
if (data->opcode == RESET_EVENTS) {
pr_debug("q6afe: reset event = %d %d apr[%p]\n",
data->reset_event, data->reset_proc, this_afe.apr);
+
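+ /* Clear cached AFE calibration addresses so calibration is resent after the ADSP comes back up */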
+ for (i = 0; i < MAX_AUDPROC_TYPES; i++) {
+ this_afe.afe_cal_addr[i].cal_paddr = 0;
+ this_afe.afe_cal_addr[i].cal_size = 0;
+ }
+
if (this_afe.apr) {
apr_reset(this_afe.apr);
atomic_set(&this_afe.state, 0);
@@ -209,7 +217,7 @@
switch (port_id) {
case PRIMARY_I2S_RX:
- case PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
case SECONDARY_I2S_RX:
case MI2S_RX:
case HDMI_RX:
@@ -233,7 +241,7 @@
break;
case PRIMARY_I2S_TX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case SECONDARY_I2S_TX:
case MI2S_TX:
case DIGI_MIC_TX:
@@ -299,8 +307,8 @@
case RT_PROXY_PORT_001_TX:
ret_size = SIZEOF_CFG_CMD(afe_param_id_rt_proxy_port_cfg);
break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
default:
@@ -891,6 +899,44 @@
return ret;
}
+static int afe_send_bank_selection_clip(
+ struct afe_param_id_clip_bank_sel *param)
+{
+ int ret;
+ struct afe_svc_cmd_set_clip_bank_selection config;
+ if (!param) {
+ pr_err("%s: Invalid params", __func__);
+ return -EINVAL;
+ }
+ config.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_SEQ_CMD,
+ APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+ config.hdr.pkt_size = sizeof(config);
+ config.hdr.src_port = 0;
+ config.hdr.dest_port = 0;
+ config.hdr.token = IDX_GLOBAL_CFG;
+ config.hdr.opcode = AFE_SVC_CMD_SET_PARAM;
+
+ config.param.payload_size = sizeof(struct afe_port_param_data_v2) +
+ sizeof(struct afe_param_id_clip_bank_sel);
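+ /* Parameter data is sent in-band, so no mapped memory address or handle is used */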
+ config.param.payload_address_lsw = 0x00;
+ config.param.payload_address_msw = 0x00;
+ config.param.mem_map_handle = 0x00;
+
+ config.pdata.module_id = AFE_MODULE_CDC_DEV_CFG;
+ config.pdata.param_id = AFE_PARAM_ID_CLIP_BANK_SEL_CFG;
+ config.pdata.param_size =
+ sizeof(struct afe_param_id_clip_bank_sel);
+ config.bank_sel = *param;
+ ret = afe_apr_send_pkt(&config, &this_afe.wait[IDX_GLOBAL_CFG]);
+ if (ret) {
+ pr_err("%s: AFE_PARAM_ID_CLIP_BANK_SEL_CFG failed %d\n",
+ __func__, ret);
+ } else if (atomic_read(&this_afe.status) != 0) {
+ pr_err("%s: config cmd failed\n", __func__);
+ ret = -EAGAIN;
+ }
+ return ret;
+}
int afe_send_aanc_version(
struct afe_param_id_cdc_aanc_version *version_cfg)
{
@@ -947,7 +993,7 @@
int i;
i = port_id - SLIMBUS_0_RX;
- if (i < 0 || i > ARRAY_SIZE(afe_ports_mad_type)) {
+ if (i < 0 || i >= ARRAY_SIZE(afe_ports_mad_type)) {
pr_debug("%s: Non Slimbus port_id 0x%x\n", __func__, port_id);
return MAD_HW_NONE;
}
@@ -983,6 +1029,12 @@
case AFE_AANC_VERSION:
ret = afe_send_aanc_version(config_data);
break;
+ case AFE_CLIP_BANK_SEL:
+ ret = afe_send_bank_selection_clip(config_data);
+ break;
+ case AFE_CDC_CLIP_REGISTERS_CONFIG:
+ ret = afe_send_codec_reg_config(config_data);
+ break;
default:
pr_err("%s: unknown configuration type", __func__);
ret = -EINVAL;
@@ -1141,21 +1193,20 @@
config.hdr.token = index;
switch (port_id) {
- case PRIMARY_I2S_RX:
- case PRIMARY_I2S_TX:
- cfg_type = AFE_PARAM_ID_PCM_CONFIG;
- break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
cfg_type = AFE_PARAM_ID_PCM_CONFIG;
break;
+ case PRIMARY_I2S_RX:
+ case PRIMARY_I2S_TX:
case SECONDARY_I2S_RX:
case SECONDARY_I2S_TX:
case MI2S_RX:
case MI2S_TX:
case AFE_PORT_ID_PRIMARY_MI2S_RX:
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
case AFE_PORT_ID_SECONDARY_MI2S_RX:
case AFE_PORT_ID_SECONDARY_MI2S_TX:
case AFE_PORT_ID_TERTIARY_MI2S_RX:
@@ -1242,8 +1293,10 @@
switch (port_id) {
case PRIMARY_I2S_RX: return IDX_PRIMARY_I2S_RX;
case PRIMARY_I2S_TX: return IDX_PRIMARY_I2S_TX;
- case PCM_RX: return IDX_PCM_RX;
- case PCM_TX: return IDX_PCM_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return IDX_AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -1278,6 +1331,8 @@
case SLIMBUS_5_TX: return IDX_SLIMBUS_5_TX;
case AFE_PORT_ID_PRIMARY_MI2S_RX:
return IDX_AFE_PORT_ID_PRIMARY_MI2S_RX;
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_MI2S_TX;
case AFE_PORT_ID_QUATERNARY_MI2S_RX:
return IDX_AFE_PORT_ID_QUATERNARY_MI2S_RX;
case AFE_PORT_ID_QUATERNARY_MI2S_TX:
@@ -1341,8 +1396,8 @@
case PRIMARY_I2S_TX:
cfg_type = AFE_PARAM_ID_I2S_CONFIG;
break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
cfg_type = AFE_PARAM_ID_PCM_CONFIG;
@@ -2530,8 +2585,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
@@ -2957,12 +3012,18 @@
}
int afe_spk_prot_feed_back_cfg(int src_port, int dst_port,
- int l_ch, int r_ch)
+ int l_ch, int r_ch, u32 enable)
{
int ret = -EINVAL;
union afe_spkr_prot_config prot_config;
int index = 0;
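+ /* enable == 0 tears down the feedback path and clears the cached VI TX port */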
+ if (!enable) {
+ pr_debug("%s Disable Feedback tx path", __func__);
+ this_afe.vi_tx_port = -1;
+ return 0;
+ }
+
if ((q6audio_validate_port(src_port) < 0) ||
(q6audio_validate_port(dst_port) < 0)) {
pr_err("%s invalid ports src %d dst %d",
@@ -3003,6 +3064,7 @@
this_afe.apr = NULL;
this_afe.dtmf_gen_rx_portid = -1;
this_afe.mmap_handle = 0;
+ this_afe.vi_tx_port = -1;
for (i = 0; i < AFE_MAX_PORTS; i++)
init_waitqueue_head(&this_afe.wait[i]);
diff --git a/sound/soc/msm/qdsp6v2/q6audio-v2.c b/sound/soc/msm/qdsp6v2/q6audio-v2.c
index faf5f35..dae1ebd 100644
--- a/sound/soc/msm/qdsp6v2/q6audio-v2.c
+++ b/sound/soc/msm/qdsp6v2/q6audio-v2.c
@@ -24,8 +24,10 @@
switch (port_id) {
case PRIMARY_I2S_RX: return IDX_PRIMARY_I2S_RX;
case PRIMARY_I2S_TX: return IDX_PRIMARY_I2S_TX;
- case PCM_RX: return IDX_PCM_RX;
- case PCM_TX: return IDX_PCM_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return IDX_AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -58,6 +60,8 @@
case RT_PROXY_PORT_001_TX: return IDX_RT_PROXY_PORT_001_TX;
case AFE_PORT_ID_PRIMARY_MI2S_RX:
return IDX_AFE_PORT_ID_PRIMARY_MI2S_RX;
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_MI2S_TX;
case AFE_PORT_ID_QUATERNARY_MI2S_RX:
return IDX_AFE_PORT_ID_QUATERNARY_MI2S_RX;
case AFE_PORT_ID_QUATERNARY_MI2S_TX:
@@ -74,10 +78,12 @@
int q6audio_get_port_id(u16 port_id)
{
switch (port_id) {
- case PRIMARY_I2S_RX: return AFE_PORT_ID_PRIMARY_MI2S_RX;
- case PRIMARY_I2S_TX: return AFE_PORT_ID_PRIMARY_MI2S_TX;
- case PCM_RX: return AFE_PORT_ID_PRIMARY_PCM_RX;
- case PCM_TX: return AFE_PORT_ID_PRIMARY_PCM_TX;
+ case PRIMARY_I2S_RX: return PRIMARY_I2S_RX;
+ case PRIMARY_I2S_TX: return PRIMARY_I2S_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -110,6 +116,8 @@
case RT_PROXY_PORT_001_TX: return AFE_PORT_ID_RT_PROXY_PORT_001_TX;
case AFE_PORT_ID_PRIMARY_MI2S_RX:
return AFE_PORT_ID_PRIMARY_MI2S_RX;
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
+ return AFE_PORT_ID_PRIMARY_MI2S_TX;
case AFE_PORT_ID_QUATERNARY_MI2S_RX:
return AFE_PORT_ID_QUATERNARY_MI2S_RX;
case AFE_PORT_ID_QUATERNARY_MI2S_TX:
@@ -152,8 +160,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
@@ -164,6 +172,10 @@
case AFE_PORT_ID_TERTIARY_MI2S_RX:
case AFE_PORT_ID_QUATERNARY_MI2S_RX:
case AFE_PORT_ID_QUATERNARY_MI2S_TX:
+ case AFE_PORT_ID_PRIMARY_MI2S_RX:
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
+ case AFE_PORT_ID_SECONDARY_MI2S_RX:
+ case AFE_PORT_ID_SECONDARY_MI2S_TX:
break;
default:
ret = -EINVAL;
@@ -179,8 +191,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
@@ -210,6 +222,7 @@
case RT_PROXY_PORT_001_RX:
case RT_PROXY_PORT_001_TX:
case AFE_PORT_ID_PRIMARY_MI2S_RX:
+ case AFE_PORT_ID_PRIMARY_MI2S_TX:
case AFE_PORT_ID_QUATERNARY_MI2S_RX:
case AFE_PORT_ID_QUATERNARY_MI2S_TX:
case AFE_PORT_ID_SECONDARY_MI2S_RX:
diff --git a/sound/soc/msm/qdsp6v2/q6core.c b/sound/soc/msm/qdsp6v2/q6core.c
index 557b326..42cbcd1 100644
--- a/sound/soc/msm/qdsp6v2/q6core.c
+++ b/sound/soc/msm/qdsp6v2/q6core.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -32,7 +32,7 @@
struct avcs_cmd_rsp_get_low_power_segments_info_t *lp_ocm_payload;
};
-struct q6core_str q6core_lcl;
+static struct q6core_str q6core_lcl;
static int32_t aprv2_core_fn_q(struct apr_client_data *data, void *priv)
{
@@ -40,7 +40,7 @@
uint32_t nseg;
int i, j;
- pr_info("core msg: payload len = %u, apr resp opcode = 0x%X\n",
+ pr_debug("core msg: payload len = %u, apr resp opcode = 0x%X\n",
data->payload_size, data->opcode);
switch (data->opcode) {
@@ -121,6 +121,33 @@
pr_err("%s: Unable to register CORE\n", __func__);
}
+uint32_t core_set_dolby_manufacturer_id(int manufacturer_id)
+{
+ struct adsp_dolby_manufacturer_id payload;
+ int rc = 0;
+ pr_debug("%s manufacturer_id :%d\n", __func__, manufacturer_id);
+ ocm_core_open();
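+ /* ocm_core_open() registers the APR handle to the ADSP core service if not already registered */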
+ if (q6core_lcl.core_handle_q) {
+ payload.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
+ APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+ payload.hdr.pkt_size =
+ sizeof(struct adsp_dolby_manufacturer_id);
+ payload.hdr.src_port = 0;
+ payload.hdr.dest_port = 0;
+ payload.hdr.token = 0;
+ payload.hdr.opcode = ADSP_CMD_SET_DOLBY_MANUFACTURER_ID;
+ payload.manufacturer_id = manufacturer_id;
+ pr_debug("Send Dolby security opcode=%x manufacturer ID = %d\n",
+ payload.hdr.opcode, payload.manufacturer_id);
+ rc = apr_send_pkt(q6core_lcl.core_handle_q,
+ (uint32_t *)&payload);
+ if (rc < 0)
+ pr_err("%s: SET_DOLBY_MANUFACTURER_ID failed op[0x%x]rc[%d]\n",
+ __func__, payload.hdr.opcode, rc);
+ }
+ return rc;
+}
+
int core_get_low_power_segments(
struct avcs_cmd_rsp_get_low_power_segments_info_t **lp_memseg)
{
diff --git a/sound/soc/msm/qdsp6v2/q6core.h b/sound/soc/msm/qdsp6v2/q6core.h
index ff611d5..39bf4ab 100644
--- a/sound/soc/msm/qdsp6v2/q6core.h
+++ b/sound/soc/msm/qdsp6v2/q6core.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -90,4 +90,13 @@
int core_get_low_power_segments(
struct avcs_cmd_rsp_get_low_power_segments_info_t **);
+#define ADSP_CMD_SET_DOLBY_MANUFACTURER_ID 0x00012918
+
+struct adsp_dolby_manufacturer_id {
+ struct apr_hdr hdr;
+ int manufacturer_id;
+};
+
+uint32_t core_set_dolby_manufacturer_id(int manufacturer_id);
+
#endif /* __Q6CORE_H__ */
diff --git a/sound/soc/msm/qdsp6v2/q6voice.c b/sound/soc/msm/qdsp6v2/q6voice.c
index 80bc4f9..e9d0a7e 100644
--- a/sound/soc/msm/qdsp6v2/q6voice.c
+++ b/sound/soc/msm/qdsp6v2/q6voice.c
@@ -2963,6 +2963,7 @@
cvp_mute_cmd.hdr.opcode = VSS_IVOLUME_CMD_MUTE_V2;
cvp_mute_cmd.cvp_set_mute.direction = VSS_IVOLUME_DIRECTION_RX;
cvp_mute_cmd.cvp_set_mute.mute_flag = v->dev_rx.mute;
+ cvp_mute_cmd.cvp_set_mute.ramp_duration_ms = DEFAULT_MUTE_RAMP_DURATION;
v->cvp_state = CMD_STATUS_FAIL;
ret = apr_send_pkt(common.apr_q6_cvp, (uint32_t *) &cvp_mute_cmd);
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 40595bf..519f325 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -1767,7 +1767,12 @@
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ !((be->dpcm[stream].state == SND_SOC_DPCM_STATE_START) &&
+ ((fe->dpcm[stream].state != SND_SOC_DPCM_STATE_START) &&
+ (fe->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
+ (fe->dpcm[stream].state !=
+ SND_SOC_DPCM_STATE_SUSPEND))))
continue;
dev_dbg(be->dev, "dpcm: hw_free BE %s\n",