Merge "usb: gadget: Support power collapse for cable disconnect"
diff --git a/Documentation/ata/ahci_msm.txt b/Documentation/ata/ahci_msm.txt
new file mode 100644
index 0000000..2f84834
--- /dev/null
+++ b/Documentation/ata/ahci_msm.txt
@@ -0,0 +1,322 @@
+Introduction
+============
+The SATA host controller developed for Qualcomm SoCs supports SATA
+storage devices that connect to the SoC through a standard SATA cable
+interface. The MSM Advanced Host Controller Interface (AHCI) driver
+interfaces with the generic Linux AHCI driver and communicates with the
+AHCI controller for data movement between system memory and SATA
+devices (persistent storage).
+
+Hardware description
+====================
+Serial Advanced Technology Attachment (SATA) is a communication
+protocol designed to transfer data between a computer and storage
+devices (Hard Disk Drives (HDD), Solid State Drives (SSD), etc.).
+First generation (Gen1) SATA interfaces communicate at a serial
+rate of 1.5Gb/s and use low-voltage differential signaling on
+serial links. With 8b-10b encoding, the effective data throughput
+for Gen1 interface is 150MB/s.
+
+The SATA host controller in Qualcomm chipsets adheres to the AHCI 1.3
+specification which describes the interface between system software
+and the host controller, as well as the functional behavior needed
+for software to communicate with the SATA host controller.
+
+The SATA PHY hardware macro in Qualcomm chipsets adheres to the
+SATA 3.0 specification with a Gen1 serial interface. It is used to
+serialize and de-serialize data exchanged with the SATA HDD. The PHY can
+also detect a SATA HDD during hot swap and raise an interrupt to the CPU
+through the AHCI controller to signal its insertion or removal.
+
+The following figure shows the SATA architecture block diagram as
+implemented in MSM chipsets.
+
+ +---------+
+ |SATA Disk|
+ | Drive |
+ +---------+
+ ^ ^
+ Tx | | Rx
+ v v
++--------------+ +--------------+ +-----------+
+| System Memory| | SATA PHY | | CPU |
++--------------+ +--------------+ +-----------+
+ ^ ^ ^ ^ ^
+ | | | | |
+ | v v | |
+ | +------------------+(Interrupt)|
+ | | SATA CONTROLLER |-----+ |
+ | +------------------+ |
+ | ^ ^ |
+ | | | |
+ v v v v
+ <--------------------------------------------------------->
+< System Fabric (Control and Data) >
+ <--------------------------------------------------------->
+
+Some controller capabilities:
+- Supports 64-bit addressing
+- Supports native command queueing (up to 32 commands)
+- Supports First-party DMA to move data to and from system memory
+- ATA-7 command set compliant
+- Port multiplier support for some chipsets
+- Supports aggressive power management (partial, slumber modes)
+- Supports asynchronous notification
+
+Software description
+====================
+The SATA driver uses the generic interface to read/write data to the
+Hard Disk Drive (HDD). It uses the following components in Linux to
+interface with the generic block layer, which then interfaces with the
+file system or user processes.
+
+1) AHCI platform Driver (includes MSM-specific glue driver)
+2) LIBAHCI
+3) LIBATA
+4) SCSI
+
+The AHCI platform driver registers as a device driver for the platform
+device registered during SoC board initialization. It is responsible for
+platform-specific tasks such as PHY configuration, clock initialization,
+and claiming memory resources. It also implements certain functionality
+that deviates from the standard specification.
+
+Library "LIBAHCI" implements software layer functionality described
+in the standard AHCI specification. It interfaces with the LIBATA
+framework to execute SATA the command set. It converts ATA task files
+into AHCI command descriptors and pass them to the controller for
+execution. It handles controller interrupts and sends command
+completion events to the upper layers. It implements a controller-
+specific reset and recover mechanism in case of errors. It implements
+link power management policies - partial, slumber modes, runtime power
+management and platform power management. It abstracts the low-level
+controller details from the LIBATA framework.
+
+"LIBATA" is a helper library for implementing ATA and SATA command
+protocol as described in standard ATA and SATA specifications. It
+builds read/write requests from SCSI commands and pass them to the
+low-level controller driver (LLD). It handshakes with the SATA
+device using standard commands to understand capabilities and carry
+out device configurations. It interfaces with the SCSI layer to manage
+underlying disks. It manages different devices connected to each host
+port using a port multiplier. Also, it manages the link PHY component,
+the interconnect interface and any external interface (cables, etc.)
+that follow the SATA electrical specification.
+
+The SCSI layer is a helper library for translating generic block layer
+commands to SCSI commands and passing them on to the LIBATA framework.
+It handles generic tasks such as device scanning, media change and
+hot-plug detection. This layer handles all types of SCSI devices,
+and SATA storage devices are one class of SCSI devices. It also provides
+the IOCTL interface to manage disks from userspace.
+
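+As an example of this stack in action, a standard utility such as hdparm
+can query the attached drive's identification data through the ioctl path
+exposed by this stack (the device node name below is illustrative):
+
+    # hdparm -I /dev/sda
+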
+Following is the logical code flow:
+
+ +------------------------+
+ | File System (ext4 etc.)|
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | Generic Block Layer |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | SCSI Layer |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | LIBATA library |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | LIBAHCI library |
+ +------------------------+
+ ^
+ |
+ v
+ +------------------------+
+ | AHCI platform driver + |
+ | MSM AHCI glue driver |
+ +------------------------+
+
+Design
+======
+The MSM AHCI driver acts as a glue driver for the Linux
+AHCI controller driver. It provides the following functionality:
+- Registers as a driver for msm_sata device which has an AHCI-compliant
+ controller and PHY as resources.
+- Registers an AHCI platform device in the probe function, providing the
+  AHCI platform data (see the sketch at the end of this section)
+- AHCI platform data consists of the following callbacks:
+ - init
+ o PHY resource acquisition
+ o Clock and voltage regulator initialization
+ o PHY calibration
+ - exit
+ o PHY power down
+ o Clock and voltage regulator turn off
+ - suspend
+ - resume
+ o Sequence described in the next section.
+- The Linux AHCI platform driver then probes the AHCI device and
+ initializes it according to the standard AHCI specification.
+- The SATA drive is detected as part of scsi_scan_host() called by
+ LIBAHCI after controller initialization.
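+
+The following is a minimal, illustrative sketch (not the actual driver
+source) of how such a glue driver can supply these callbacks through
+'struct ahci_platform_data', which the Linux AHCI platform driver
+consumes; the msm_ahci_* names are placeholders:
+
+	#include <linux/ahci_platform.h>
+	#include <linux/device.h>
+
+	/* Called by the AHCI platform driver during probe. */
+	static int msm_ahci_init(struct device *dev, void __iomem *mmio)
+	{
+		/*
+		 * PHY resource acquisition, clock and voltage regulator
+		 * initialization, and PHY calibration go here.
+		 */
+		return 0;
+	}
+
+	/* Called when the AHCI platform device is removed. */
+	static void msm_ahci_exit(struct device *dev)
+	{
+		/* PHY power down; clocks and regulators turned off. */
+	}
+
+	static struct ahci_platform_data msm_ahci_pdata = {
+		.init	= msm_ahci_init,
+		.exit	= msm_ahci_exit,
+		/*
+		 * .suspend and .resume would follow the sequence
+		 * described in the "Power Management" section below.
+		 */
+	};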
+
+Power Management
+================
+Various power modes are supported by this driver.
+
+Platform suspend/resume:
+During suspend:
+- PHY analog blocks are powered down
+- Controller and PHY are kept in Power-on-Reset (POR) mode
+- Clocks and voltage regulators are gated
+
+During resume:
+- Clocks and voltage regulators are ungated
+- PHY is powered up and calibrated to functional mode
+- Controller is re-initialized to process commands.
+
+Runtime suspend/resume:
+- Execute the same steps as in platform suspend/resume.
+- Runtime suspend/resume is disabled by default due to regressions
+  in hot-plug detection (specification limitation). Users can
+  enable runtime power management with the following shell commands.
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo auto > ./power/control
+ # echo auto > ./ata1/power/control
+ # echo auto > ./ata1/host0/target0:0:0/0:0:0:0/power/control
+
+  Note: The device will be runtime-suspended only when the user unmounts
+  all of its partitions.
+
+Link power management (defined by AHCI 1.3 specification):
+- Automatic low power mode transitions are supported.
+- AHCI supports two power modes: partial and slumber.
+- Software uses the Interface Communication Control (ICC) bits in the AHCI
+  register space to enable the automatic partial/slumber states.
+- Partial mode:
+ - Software asserts automatic partial mode when the use
+ case demands low latency resume.
+  - Upon receiving the partial mode signal, the PHY disables byte clocks
+    and re-enables them during resume, which keeps resume latency low.
+- Slumber mode:
+ - Software asserts automatic slumber mode when the use
+ case demands low power consumption and can withstand
+ high resume latencies.
+  - Upon receiving the slumber mode signal, the PHY disables byte
+    clocks and some internal circuitry. Upon resume the PHY
+    re-enables byte clocks and reacquires the PLL lock.
+- Once the software enables partial/slumber modes, the transition
+  into these modes is automatic and is handled by hardware without
+  software intervention while the controller is idle with no outstanding
+  commands to process.
+
+- The Linux AHCI link power management defines three modes:
+ - max_performance (default mode)
+ Doesn't allow partial/slumber transition when host is idle.
+ - medium_power (Partial mode)
+ Following shell commands are used to enable this mode:
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo medium_power > ./ata1/host0/scsi_host/host0/link_power_management_policy
+
+ - min_power (Slumber mode)
+ Following shell commands are used to enable this mode:
+
+ # cd /sys/devices/platform/msm_sata.0/ahci.0/
+ # echo min_power > ./ata1/host0/scsi_host/host0/link_power_management_policy
+
+SMP/multi-core
+==============
+The MSM AHCI driver hooks only the init, exit, suspend and resume callbacks
+into the AHCI driver. These callbacks are serialized by design, and hence
+the driver is inherently SMP safe.
+
+Security
+========
+None.
+
+Performance
+===========
+The theoretical performance with Gen1 SATA PHY is 150MB/s (8b/10b encoding).
+The performance is dependent on various factors, mainly:
+- Capabilities of the external SATA hard disk connected to the MSM SATA port
+- Various system bus frequencies and system loads
+- System memory capabilities
+- Benchmark test applications that collect performance numbers
+
+One example of the maximum performance achieved in a specific system
+configuration follows:
+
+Benchmark: Iozone sequential performance
+Block size: 128K
+File size: 1GB
+Platform: APQ8064 V2 CDP
+CPU Governor: Performance
+
+SanDisk SSD (i100 64GB):
+Read - 135MB/s
+Write - 125MB/s
+
+Western Digital HDD (WD20EURS 2TB):
+Read - 121MB/s
+Write - 98MB/s
+
+Interface
+=========
+The MSM AHCI controller driver provides function pointers as the
+required interface to the Linux AHCI controller driver. The main
+routines implemented are init, exit, suspend, and resume for handling
+MSM-specific initialization, freeing of resources on exit, and
+MSM-specific power management tweaks during suspend power collapse.
+
+Driver parameters
+=================
+None.
+
+Config options
+==============
+Config option SATA_AHCI_MSM in drivers/ata/Kconfig enables this driver.
+
+Dependencies
+============
+The MSM AHCI controller driver depends on the Linux AHCI driver, the Linux
+ATA framework, the Linux SCSI framework and the Linux generic block layer.
+
+While configuring the kernel, the following options should be set:
+
+- CONFIG_BLOCK
+- CONFIG_SCSI
+- CONFIG_ATA
+- CONFIG_SATA_AHCI_PLATFORM
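+
+A minimal configuration fragment enabling this driver and its dependencies
+could look as follows (illustrative; additional options may be required
+depending on the platform):
+
+    CONFIG_BLOCK=y
+    CONFIG_SCSI=y
+    CONFIG_ATA=y
+    CONFIG_SATA_AHCI_PLATFORM=y
+    CONFIG_SATA_AHCI_MSM=y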
+
+User space utilities
+====================
+Any user space component that can mount a block device can be used to
+read/write data into persistent storage. However, at the time of this
+writing there are no utilities that the author is aware of that can
+manage the hardware from userspace.
+
+Other
+=====
+None.
+
+Known issues
+============
+None.
+
+To do
+=====
+- Device tree support.
+- MSM bus frequency voting support.
diff --git a/Documentation/cpu-freq/governors.txt b/Documentation/cpu-freq/governors.txt
index 6aed1ce..b4ae5e6 100644
--- a/Documentation/cpu-freq/governors.txt
+++ b/Documentation/cpu-freq/governors.txt
@@ -198,57 +198,74 @@
The CPUfreq governor "interactive" is designed for latency-sensitive,
interactive workloads. This governor sets the CPU speed depending on
-usage, similar to "ondemand" and "conservative" governors. However,
-the governor is more aggressive about scaling the CPU speed up in
-response to CPU-intensive activity.
-
-Sampling the CPU load every X ms can lead to under-powering the CPU
-for X ms, leading to dropped frames, stuttering UI, etc. Instead of
-sampling the cpu at a specified rate, the interactive governor will
-check whether to scale the cpu frequency up soon after coming out of
-idle. When the cpu comes out of idle, a timer is configured to fire
-within 1-2 ticks. If the cpu is very busy between exiting idle and
-when the timer fires then we assume the cpu is underpowered and ramp
-to MAX speed.
-
-If the cpu was not sufficiently busy to immediately ramp to MAX speed,
-then governor evaluates the cpu load since the last speed adjustment,
-choosing the highest value between that longer-term load or the
-short-term load since idle exit to determine the cpu speed to ramp to.
+usage, similar to "ondemand" and "conservative" governors, but with a
+different set of configurable behaviors.
The tuneable values for this governor are:
+target_loads: CPU load values used to adjust speed to influence the
+current CPU load toward that value. In general, the lower the target
+load, the more often the governor will raise CPU speeds to bring load
+below the target. The format is a single target load, optionally
+followed by pairs of CPU speeds and CPU loads to target at or above
+those speeds. Colons can be used between the speeds and associated
+target loads for readability. For example:
+
+ 85 1000000:90 1700000:99
+
+targets CPU load 85% below speed 1GHz, 90% at or above 1GHz, until
+1.7GHz and above, at which load 99% is targeted. If speeds are
+specified these must appear in ascending order. Higher target load
+values are typically specified for higher speeds, that is, target load
+values also usually appear in an ascending order. The default is
+target load 90% for all speeds.
+
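+For example, the value above could be applied at runtime through sysfs
+(the path below assumes the usual location of the interactive governor's
+tunables and may differ between kernel versions):
+
+    # echo "85 1000000:90 1700000:99" > \
+        /sys/devices/system/cpu/cpufreq/interactive/target_loads
+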
min_sample_time: The minimum amount of time to spend at the current
-frequency before ramping down. This is to ensure that the governor has
-seen enough historic cpu load data to determine the appropriate
-workload. Default is 80000 uS.
+frequency before ramping down. Default is 80000 uS.
hispeed_freq: An intermediate "hi speed" at which to initially ramp
when CPU load hits the value specified in go_hispeed_load. If load
stays high for the amount of time specified in above_hispeed_delay,
-then speed may be bumped higher. Default is maximum speed.
+then speed may be bumped higher. Default is the maximum speed
+allowed by the policy at governor initialization time.
-go_hispeed_load: The CPU load at which to ramp to the intermediate "hi
-speed". Default is 85%.
+go_hispeed_load: The CPU load at which to ramp to hispeed_freq.
+Default is 99%.
-above_hispeed_delay: Once speed is set to hispeed_freq, wait for this
-long before bumping speed higher in response to continued high load.
+above_hispeed_delay: When speed is at or above hispeed_freq, wait for
+this long before raising speed in response to continued high load.
Default is 20000 uS.
-timer_rate: Sample rate for reevaluating cpu load when the system is
-not idle. Default is 20000 uS.
+timer_rate: Sample rate for reevaluating CPU load when the CPU is not
+idle. A deferrable timer is used, such that the CPU will not be woken
+from idle to service this timer until something else needs to run.
+(The maximum time to allow deferring this timer when not running at
+minimum speed is configurable via timer_slack.) Default is 20000 uS.
-input_boost: If non-zero, boost speed of all CPUs to hispeed_freq on
-touchscreen activity. Default is 0.
+timer_slack: Maximum additional time to defer handling the governor
+sampling timer beyond timer_rate when running at speeds above the
+minimum. For platforms that consume additional power at idle when
+CPUs are running at speeds greater than minimum, this places an upper
+bound on how long the timer will be deferred prior to re-evaluating
+load and dropping speed. For example, if timer_rate is 20000uS and
+timer_slack is 10000uS then timers will be deferred for up to 30msec
+when not at lowest speed. A value of -1 means defer timers
+indefinitely at all speeds. Default is 80000 uS.
boost: If non-zero, immediately boost speed of all CPUs to at least
hispeed_freq until zero is written to this attribute. If zero, allow
CPU speeds to drop below hispeed_freq according to load as usual.
+Default is zero.
-boostpulse: Immediately boost speed of all CPUs to hispeed_freq for
-min_sample_time, after which speeds are allowed to drop below
+boostpulse: On each write, immediately boost speed of all CPUs to
+hispeed_freq for at least the period of time specified by
+boostpulse_duration, after which speeds are allowed to drop below
hispeed_freq according to load as usual.
+boostpulse_duration: Length of time to hold CPU speed at hispeed_freq
+on a write to boostpulse, before allowing speed to drop according to
+load as usual. Default is 80000 uS.
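+
+For example, userspace can request a transient boost by writing any value
+to boostpulse (the path below assumes the usual location of the interactive
+governor's tunables):
+
+    # echo 1 > /sys/devices/system/cpu/cpufreq/interactive/boostpulse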
+
3. The Governor Interface in the CPUfreq Core
=============================================
diff --git a/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt b/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
index 88d69e0..86d863c 100644
--- a/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
+++ b/Documentation/devicetree/bindings/bluetooth/bluetooth_power.txt
@@ -5,12 +5,26 @@
Required properties:
- compatible: Should be "qca,ar3002"
- qca,bt-reset-gpio: GPIO pin to bring BT Controller out of reset
+ - qca,bt-vdd-io-supply: Bluetooth VDD IO regulator handle
+ - qca,bt-vdd-pa-supply: Bluetooth VDD PA regulator handle
Optional properties:
- None
+ - qca,bt-vdd-ldo-supply: Bluetooth VDD LDO regulator handle. Kept under optional
+   properties because some chipsets do not require an LDO or may source it from
+   the same VDD IO supply.
+ - qca,bt-chip-pwd-supply: Chip power down supply, required when the Bluetooth
+   module and other modules such as WiFi co-exist in a single chip and share a
+   common GPIO to bring the chip out of reset.
+ - qca,bt-vdd-io-voltage-level: min and max voltages for the vdd io regulator
+ - qca,bt-vdd-pa-voltage-level: min and max voltages for the vdd pa regulator
+ - qca,bt-vdd-ldo-voltage-level: min and max voltages for the vdd ldo regulator
Example:
bt-ar3002 {
compatible = "qca,ar3002";
qca,bt-reset-gpio = <&pm8941_gpios 34 0>;
+ qca,bt-vdd-io-supply = <&pm8941_s3>;
+ qca,bt-vdd-pa-supply = <&pm8941_l19>;
+ qca,bt-chip-pwd-supply = <&ath_chip_pwd_l>;
+        qca,bt-vdd-io-voltage-level = <1800000 1800000>;
+        qca,bt-vdd-pa-voltage-level = <2900000 2900000>;
};
diff --git a/Documentation/devicetree/bindings/cache/msm_cache_erp.txt b/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
index 400b299..8d00cc2 100644
--- a/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
+++ b/Documentation/devicetree/bindings/cache/msm_cache_erp.txt
@@ -7,6 +7,17 @@
- interrupt-names: Should contain the interrupt names "l1_irq" and
"l2_irq"
+Optional properties:
+- reg: A set of I/O regions to be dumped in the event of a hardware fault being
+  detected. If this property is present, the "reg-names" property must be
+  present as well.
+- reg-names: Human-readable names assigned to the I/O regions defined by the
+  "reg" property. The names can be completely arbitrary, since they are
+  intended to be read by humans during failure analysis, and because the set of
+  I/O regions of interest may vary with the overall system design. This property
+  shall only be present if the "reg" property is present, and must contain as
+  many elements as the "reg" property.
+
Example:
qcom,cache_erp {
compatible = "qcom,cache_erp";
@@ -14,3 +25,17 @@
interrupt-names = "l1_irq", "l2_irq";
};
+Example with "reg" property defined:
+ qcom,cache_erp@f9012000 {
+ reg = <0xf9012000 0x80>,
+ <0xf9089000 0x80>,
+ <0xf9099000 0x80>;
+
+ reg-names = "l2_saw",
+ "krait0_saw",
+ "krait1_saw";
+
+ compatible = "qcom,cache_erp";
+ interrupts = <1 9 0>, <0 2 0>;
+ interrupt-names = "l1_irq", "l2_irq";
+ };
diff --git a/Documentation/devicetree/bindings/fb/mdss-mdp.txt b/Documentation/devicetree/bindings/fb/mdss-mdp.txt
index 0422b57..a3f3a06 100644
--- a/Documentation/devicetree/bindings/fb/mdss-mdp.txt
+++ b/Documentation/devicetree/bindings/fb/mdss-mdp.txt
@@ -108,13 +108,19 @@
- qcom,mdss-rot-block-size: The size of a memory block (in pixels) to be used
by the rotator. If this property is not specified,
then a default value of 128 pixels would be used.
-
+- qcom,mdss-has-bwc: Boolean property to indicate the presence of the
+  bandwidth compression (BWC) feature in the rotator.
+- qcom,mdss-has-decimation: Boolean property to indicate the presence of the
+  decimation feature in fetch.
Optional subnodes:
Child nodes representing the frame buffer virtual devices.
Subnode properties:
- compatible : Must be "qcom,mdss-fb"
- cell-index : Index representing frame buffer
+- qcom,mdss-mixer-swap: A boolean property that indicates if the mixer muxes
+ need to be swapped based on the target panel.
+ By default the property is not defined.
@@ -141,6 +147,8 @@
qcom,mdss-pipe-dma-fetch-id = <10 13>;
qcom,mdss-smp-data = <22 4096>;
qcom,mdss-rot-block-size = <64>;
+ qcom,mdss-has-bwc;
+ qcom,mdss-has-decimation;
qcom,mdss-ctl-off = <0x00000600 0x00000700 0x00000800
0x00000900 0x0000A00>;
@@ -157,6 +165,7 @@
mdss_fb0: qcom,mdss_fb_primary {
cell-index = <0>;
compatible = "qcom,mdss-fb";
+ qcom,mdss-mixer-swap;
};
};
diff --git a/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt b/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
index fbe8ffa..418447d 100644
--- a/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
+++ b/Documentation/devicetree/bindings/hwmon/qpnp-adc-current.txt
@@ -16,7 +16,10 @@
- interrupt-names : Should contain "eoc-int-en-set".
- qcom,adc-bit-resolution : Bit resolution of the ADC.
- qcom,adc-vdd-reference : Voltage reference used by the ADC.
-- qcom,rsense : Internal rsense resistor used for current measurements.
+
+Optional properties:
+- qcom,rsense : Use this property when an external rsense is to be used
+  for current calculation; specify the value in nano-ohms.
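+
+  For example, a 10 milli-ohm external sense resistor (value shown is
+  illustrative) would be specified as:
+
+	qcom,rsense = <10000000>;
+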
Channel node
NOTE: Atleast one Channel node is required.
diff --git a/Documentation/devicetree/bindings/leds/leds-qpnp.txt b/Documentation/devicetree/bindings/leds/leds-qpnp.txt
index 4f31f07..a221433 100644
--- a/Documentation/devicetree/bindings/leds/leds-qpnp.txt
+++ b/Documentation/devicetree/bindings/leds/leds-qpnp.txt
@@ -68,8 +68,30 @@
- qcom,default-state: default state of the led, should be "on" or "off"
- qcom,turn-off-delay-ms: delay in millisecond for turning off the led when its default-state is "on". Value is being ignored in case default-state is "off".
+MPP LED is an LED controlled through a Multi Purpose Pin.
+
+Optional properties for MPP LED:
+- linux,default-trigger: trigger the led from external modules such as display
+- qcom,default-state: default state of the led, should be "on" or "off"
+- qcom,source-sel: select power source, default 1 (enabled)
+- qcom,mode-ctrl: select operation mode, default 0x60 = Mode Sink
+
Example:
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+			linux,default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+
qcom,leds@d000 {
status = "okay";
qcom,rgb_pwm {
@@ -151,3 +173,4 @@
linux,name = "led:wled_backlight";
};
};
+
diff --git a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
index 70f8b55..4cbff52 100644
--- a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
+++ b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
@@ -14,6 +14,9 @@
- interrupts: The lpass watchdog interrupt
- vdd_cx-supply: Reference to the regulator that supplies the vdd_cx domain.
- qcom,firmware-name: Base name of the firmware image. Ex. "lpass"
+- qcom,gpio-err-fatal: GPIO used by the lpass to indicate error fatal to the apps.
+- qcom,gpio-force-stop: GPIO used by the apps to force the lpass to shutdown.
+- qcom,gpio-proxy-unvote: GPIO used by the lpass to indicate apps clock is ready.
Optional properties:
- vdd_pll-supply: Reference to the regulator that supplies the PLL's rail.
@@ -29,4 +32,11 @@
interrupts = <0 194 1>;
vdd_cx-supply = <&pm8841_s2>;
qcom,firmware-name = "lpass";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
diff --git a/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt b/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
index 2116888..041928d 100644
--- a/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/qpnp-regulator.txt
@@ -16,12 +16,17 @@
the spmi-slave-container property
Optional properties:
+- interrupts: List of interrupts used by the regulator.
+- interrupt-names: List of strings defining the names of the
+ interrupts in the 'interrupts' property 1-to-1.
+ Supported values are "ocp" for voltage switch
+ type regulators. If an OCP interrupt is
+ specified, then the voltage switch will be
+ toggled off and back on when OCP triggers in
+ order to handle high in-rush current.
- qcom,system-load: Load in uA present on regulator that is not
captured by any consumer request
- qcom,enable-time: Time in us to delay after enabling the regulator
-- qcom,ocp-enable-time: Time to delay in us between enabling a switch and
- subsequently enabling over current protection
- (OCP) for the switch
- qcom,auto-mode-enable: 1 = Enable automatic hardware selection of
regulator mode (HPM vs LPM); not available on
boost type regulators
@@ -30,11 +35,18 @@
so that it acts like a switch and simply outputs
its input voltage
0 = Do not enable bypass mode
-- qcom,ocp-enable: 1 = Enable over current protection (OCP) for
- voltage switch type regulators so that they
- latch off automatically when over current is
- detected
+- qcom,ocp-enable: 1 = Allow over current protection (OCP) to be
+ enabled for voltage switch type regulators so
+ that they latch off automatically when over
+ current is detected. OCP is enabled when in
+ HPM or auto mode.
0 = Disable OCP
+- qcom,ocp-max-retries: Maximum number of times to try toggling a voltage
+ switch off and back on as a result of
+ consecutive over current events.
+- qcom,ocp-retry-delay: Time to delay in milliseconds between each
+ voltage switch toggle after an over current
+ event takes place.
- qcom,pull-down-enable: 1 = Enable output pull down resistor when the
regulator is disabled
0 = Disable pull down resistor
@@ -76,6 +88,16 @@
1 = 0.25 uA
2 = 0.55 uA
3 = 0.75 uA
+- qcom,hpm-enable: 1 = Enable high power mode (HPM), also referred
+ to as NPM. HPM consumes more ground current
+ than LPM, but it can source significantly higher
+ load current. HPM is not available on boost
+ type regulators. For voltage switch type
+ regulators, HPM implies that over current
+ protection and soft start are active all the
+ time. This configuration can be overwritten
+ by changing the regulator's mode dynamically.
+ 0 = Do not enable HPM
- qcom,force-type: Override the type and subtype register values. Useful for some
regulators that have invalid types advertised by the hardware.
The format is two unsigned integers of the form <type subtype>.
diff --git a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
index 65de56f..be08a1a 100644
--- a/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
+++ b/Documentation/devicetree/bindings/sound/qcom-audio-dev.txt
@@ -449,14 +449,18 @@
- qcom,prim-auxpcm-gpio-sync : GPIO on which Primary AUXPCM SYNC signal is coming.
- qcom,prim-auxpcm-gpio-din : GPIO on which Primary AUXPCM DIN signal is coming.
- qcom,prim-auxpcm-gpio-dout : GPIO on which Primary AUXPCM DOUT signal is coming.
+- qcom,prim-auxpcm-gpio-set : set of GPIO lines used for Primary AUXPCM port
+ Possible Values:
+ prim-gpio-prim : Primary AUXPCM shares GPIOs with Primary MI2S
+ prim-gpio-tert : Primary AUXPCM shares GPIOs with Tertiary MI2S
- qcom,sec-auxpcm-gpio-clk : GPIO on which Secondary AUXPCM clk signal is coming.
- qcom,sec-auxpcm-gpio-sync : GPIO on which Secondary AUXPCM SYNC signal is coming.
- qcom,sec-auxpcm-gpio-din : GPIO on which Secondary AUXPCM DIN signal is coming.
- qcom,sec-auxpcm-gpio-dout : GPIO on which Secondary AUXPCM DOUT signal is coming.
- qcom,us-euro-gpios : GPIO on which gnd/mic swap signal is coming.
-
Optional properties:
- qcom,hdmi-audio-rx: specifies if HDMI audio support is enabled or not.
+- qcom,ext-ult-spk-amp-gpio : GPIO for enabling the speaker path amplifier.
Example:
@@ -503,6 +507,7 @@
qcom,prim-auxpcm-gpio-sync = <&msmgpio 66 0>;
qcom,prim-auxpcm-gpio-din = <&msmgpio 67 0>;
qcom,prim-auxpcm-gpio-dout = <&msmgpio 68 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-prim";
qcom,sec-auxpcm-gpio-clk = <&msmgpio 79 0>;
qcom,sec-auxpcm-gpio-sync = <&msmgpio 80 0>;
qcom,sec-auxpcm-gpio-din = <&msmgpio 81 0>;
diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
index fd5b93e..6d54f7e 100644
--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt
@@ -12,6 +12,7 @@
Optional properties:
- tx-fifo-resize: determines if the FIFO *has* to be reallocated.
+ - host-only-mode: if present then dwc3 should run in host-only mode.
This is usually a subnode to DWC3 glue to which it is connected.
diff --git a/Documentation/devicetree/bindings/usb/msm-hsusb.txt b/Documentation/devicetree/bindings/usb/msm-hsusb.txt
index 8c48770..4fcd7b4 100644
--- a/Documentation/devicetree/bindings/usb/msm-hsusb.txt
+++ b/Documentation/devicetree/bindings/usb/msm-hsusb.txt
@@ -124,6 +124,8 @@
- qcom,pool-64-bit-align: If present then the pool's memory will be aligned
to 64 bits
- qcom,enable_hbm: if present host bus manager is enabled.
+- qcom,disable-park-mode: if present then park mode is disabled. Park mode
+  enables executing up to 3 USB packets from each QH.
Example MSM HSUSB EHCI controller device node :
ehci: qcom,ehci-host@f9a55000 {
diff --git a/arch/arm/boot/dts/apq8074-v2-liquid.dts b/arch/arm/boot/dts/apq8074-v2-liquid.dts
new file mode 100644
index 0000000..4ec1cdd
--- /dev/null
+++ b/arch/arm/boot/dts/apq8074-v2-liquid.dts
@@ -0,0 +1,34 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+/include/ "apq8074-v2.dtsi"
+/include/ "msm8974-liquid.dtsi"
+
+/ {
+ model = "Qualcomm APQ 8074v2 LIQUID";
+ compatible = "qcom,apq8074-liquid", "qcom,apq8074", "qcom,liquid";
+ qcom,msm-id = <184 9 0x20000>;
+};
+
+&usb3 {
+ interrupt-parent = <&usb3>;
+ interrupts = <0 1>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0x0 0xffffffff>;
+ interrupt-map = <0x0 0 &intc 0 133 0
+ 0x0 1 &spmi_bus 0x0 0x0 0x9 0x0>;
+ interrupt-names = "hs_phy_irq", "pmic_id_irq";
+
+ qcom,misc-ref = <&pm8941_misc>;
+};
diff --git a/arch/arm/boot/dts/apq8074-v2.dtsi b/arch/arm/boot/dts/apq8074-v2.dtsi
new file mode 100644
index 0000000..3b65236
--- /dev/null
+++ b/arch/arm/boot/dts/apq8074-v2.dtsi
@@ -0,0 +1,48 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * As a general rule, only version-specific property overrides should be placed
+ * inside this file. However, device definitions should be placed inside the
+ * msm8974.dtsi file.
+ */
+
+/include/ "msm8974-v2.dtsi"
+
+/ {
+ qcom,qseecom@a700000 {
+ compatible = "qcom,qseecom";
+ reg = <0x0a700000 0x500000>;
+ reg-names = "secapp-region";
+ qcom,disk-encrypt-pipe-pair = <2>;
+ qcom,hlos-ce-hw-instance = <1>;
+ qcom,qsee-ce-hw-instance = <0>;
+ qcom,msm-bus,name = "qseecom-noc";
+ qcom,msm-bus,num-cases = <4>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <55 512 0 0>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>,
+ <55 512 3936000 393600>;
+ };
+};
+
+&memory_hole {
+ qcom,memblock-remove = <0x0a700000 0x5800000>; /* Address and size of the hole */
+};
+
+&qseecom {
+ status = "disabled";
+};
+
diff --git a/arch/arm/boot/dts/msmzinc-ion.dtsi b/arch/arm/boot/dts/apq8084-ion.dtsi
similarity index 100%
rename from arch/arm/boot/dts/msmzinc-ion.dtsi
rename to arch/arm/boot/dts/apq8084-ion.dtsi
diff --git a/arch/arm/boot/dts/apq8084-sim.dts b/arch/arm/boot/dts/apq8084-sim.dts
new file mode 100644
index 0000000..45d625c
--- /dev/null
+++ b/arch/arm/boot/dts/apq8084-sim.dts
@@ -0,0 +1,73 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+/include/ "apq8084.dtsi"
+
+/ {
+ model = "Qualcomm APQ 8084 Simulator";
+ compatible = "qcom,apq8084-sim", "qcom,apq8084", "qcom,sim";
+ qcom,msm-id = <178 0 0>;
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ uart0: serial@f991f000 {
+ status = "ok";
+ };
+};
+
+&sdcc1 {
+ qcom,vdd-always-on;
+ qcom,vdd-lpm-sup;
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <800 500000>;
+
+ qcom,vdd-io-always-on;
+ qcom,vdd-io-voltage-level = <1800000 1800000>;
+ qcom,vdd-io-current-level = <250 154000>;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
+ qcom,sup-voltages = <2950 2950>;
+ qcom,nonremovable;
+ qcom,bus-speed-mode = "HS200_1p8v", "DDR_1p8v";
+
+ status = "ok";
+};
+
+&sdcc2 {
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <9000 800000>;
+ qcom,vdd-io-voltage-level = <1800000 2950000>;
+ qcom,vdd-io-current-level = <6 22000>;
+ qcom,vdd-io-lpm-sup;
+
+ qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
+
+ qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
+ qcom,sup-voltages = <2950 2950>;
+ qcom,xpc;
+ qcom,bus-speed-mode = "SDR12", "SDR25", "SDR50", "DDR50", "SDR104";
+ qcom,current-limit = <800>;
+
+ status = "ok";
+};
diff --git a/arch/arm/boot/dts/msmzinc.dtsi b/arch/arm/boot/dts/apq8084.dtsi
similarity index 77%
rename from arch/arm/boot/dts/msmzinc.dtsi
rename to arch/arm/boot/dts/apq8084.dtsi
index 642597d..c3c3759 100644
--- a/arch/arm/boot/dts/msmzinc.dtsi
+++ b/arch/arm/boot/dts/apq8084.dtsi
@@ -11,11 +11,11 @@
*/
/include/ "skeleton.dtsi"
-/include/ "msmzinc-ion.dtsi"
+/include/ "apq8084-ion.dtsi"
/ {
- model = "Qualcomm MSM ZINC";
- compatible = "qcom,msmzinc";
+ model = "Qualcomm APQ 8084";
+ compatible = "qcom,apq8084";
interrupt-parent = <&intc>;
intc: interrupt-controller@f9000000 {
@@ -83,4 +83,28 @@
qcom,memory-reservation-size = <0x100000>; /* 1M EBI1 buffer */
};
+ sdcc1: qcom,sdcc@f9824000 {
+ cell-index = <1>; /* SDC1 eMMC slot */
+ compatible = "qcom,msm-sdcc";
+ reg = <0xf9824000 0x800>;
+ reg-names = "core_mem";
+ interrupts = <0 123 0>;
+ interrupt-names = "core_irq";
+
+ qcom,bus-width = <8>;
+ status = "disabled";
+ };
+
+ sdcc2: qcom,sdcc@f98a4000 {
+ cell-index = <2>; /* SDC2 SD card slot */
+ compatible = "qcom,msm-sdcc";
+ reg = <0xf98a4000 0x800>;
+ reg-names = "core_mem";
+ interrupts = <0 125 0>;
+ interrupt-names = "core_irq";
+
+
+ qcom,bus-width = <4>;
+ status = "disabled";
+ };
};
diff --git a/arch/arm/boot/dts/mpq8092.dtsi b/arch/arm/boot/dts/mpq8092.dtsi
index 75f168d..5c904b4 100644
--- a/arch/arm/boot/dts/mpq8092.dtsi
+++ b/arch/arm/boot/dts/mpq8092.dtsi
@@ -85,7 +85,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -105,7 +105,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
diff --git a/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi b/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi
new file mode 100644
index 0000000..0de72b0
--- /dev/null
+++ b/arch/arm/boot/dts/msm-pm8110-rpm-regulator.dtsi
@@ -0,0 +1,381 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+&rpm_bus {
+ rpm-regulator-smpa1 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <1>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s1 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-smpa3 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <3>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s3 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s3";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-smpa4 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "smpa";
+ qcom,resource-id = <4>;
+ qcom,regulator-type = <1>;
+ qcom,hpm-min-load = <100000>;
+ status = "disabled";
+
+ regulator-s4 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s4";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa1 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <1>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l1 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l1";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa2 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <2>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l2 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l2";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa3 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <3>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l3 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa4 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <4>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l4 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l4";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa5 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <5>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l5 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l5";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa6 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <6>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l6 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l6";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa7 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <7>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l7 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l7";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa8 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <8>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <5000>;
+ status = "disabled";
+
+ regulator-l8 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l8";
+ qcom,set = <1>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa9 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <9>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l9 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l9";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa10 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <10>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l10 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l10";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa12 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <12>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l12 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l12";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa14 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <14>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l14 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l14";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa15 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <15>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l15 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l15";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa16 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <16>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l16 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l16";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa17 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <17>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l17 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l17";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa18 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <18>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l18 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l18";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa19 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <19>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l19 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l19";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa20 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <20>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <5000>;
+ status = "disabled";
+
+ regulator-l20 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l20";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa21 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <21>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l21 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l21";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+
+ rpm-regulator-ldoa22 {
+ compatible = "qcom,rpm-regulator-smd-resource";
+ qcom,resource-name = "ldoa";
+ qcom,resource-id = <22>;
+ qcom,regulator-type = <0>;
+ qcom,hpm-min-load = <10000>;
+ status = "disabled";
+
+ regulator-l22 {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l22";
+ qcom,set = <3>;
+ status = "disabled";
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/msm-pm8110.dtsi b/arch/arm/boot/dts/msm-pm8110.dtsi
index 28766cf..b88b991 100644
--- a/arch/arm/boot/dts/msm-pm8110.dtsi
+++ b/arch/arm/boot/dts/msm-pm8110.dtsi
@@ -22,6 +22,100 @@
#address-cells = <1>;
#size-cells = <1>;
+ pm8110_chg: qcom,charger {
+ spmi-dev-container;
+ compatible = "qcom,qpnp-charger";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ status = "disabled";
+
+ qcom,vddmax-mv = <4200>;
+ qcom,vddsafe-mv = <4200>;
+ qcom,vinmin-mv = <4200>;
+ qcom,vbatdet-mv = <4100>;
+ qcom,ibatmax-ma = <1500>;
+ qcom,ibatterm-ma = <200>;
+ qcom,ibatsafe-ma = <1500>;
+ qcom,thermal-mitigation = <1500 700 600 325>;
+ qcom,vbatdet-delta-mv = <350>;
+ qcom,tchg-mins = <150>;
+
+ qcom,chgr@1000 {
+ status = "disabled";
+ reg = <0x1000 0x100>;
+ interrupts = <0x0 0x10 0x0>,
+ <0x0 0x10 0x1>,
+ <0x0 0x10 0x2>,
+ <0x0 0x10 0x3>,
+ <0x0 0x10 0x4>,
+ <0x0 0x10 0x5>,
+ <0x0 0x10 0x6>,
+ <0x0 0x10 0x7>;
+
+ interrupt-names = "vbat-det-lo",
+ "vbat-det-hi",
+ "chgwdog",
+ "state-change",
+ "trkl-chg-on",
+ "fast-chg-on",
+ "chg-failed",
+ "chg-done";
+ };
+
+ qcom,buck@1100 {
+ status = "disabled";
+ reg = <0x1100 0x100>;
+ interrupts = <0x0 0x11 0x0>,
+ <0x0 0x11 0x1>,
+ <0x0 0x11 0x2>,
+ <0x0 0x11 0x3>,
+ <0x0 0x11 0x4>,
+ <0x0 0x11 0x5>,
+ <0x0 0x11 0x6>;
+
+ interrupt-names = "vbat-ov",
+ "vreg-ov",
+ "overtemp",
+ "vchg-loop",
+ "ichg-loop",
+ "ibat-loop",
+ "vdd-loop";
+ };
+
+ qcom,bat-if@1200 {
+ status = "disabled";
+ reg = <0x1200 0x100>;
+ interrupts = <0x0 0x12 0x0>,
+ <0x0 0x12 0x1>,
+ <0x0 0x12 0x2>,
+ <0x0 0x12 0x3>,
+ <0x0 0x12 0x4>;
+
+ interrupt-names = "batt-pres",
+ "bat-temp-ok",
+ "bat-fet-on",
+ "vcp-on",
+ "psi";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "disabled";
+ reg = <0x1300 0x100>;
+ interrupts = <0 0x13 0x0>,
+ <0 0x13 0x1>,
+ <0x0 0x13 0x2>;
+
+ interrupt-names = "coarse-det-usb",
+ "usbin-valid",
+ "chg-gone";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "disabled";
+ reg = <0x1600 0x100>;
+ };
+ };
+
pm8110_vadc: vadc@3100 {
compatible = "qcom,qpnp-vadc";
reg = <0x3100 0x100>;
@@ -75,7 +169,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
@@ -106,6 +199,12 @@
interrupts = <0x0 0x61 0x1>;
};
};
+
+ qcom,leds@a200 {
+ compatible = "qcom,leds-qpnp";
+ reg = <0xa200 0x100>;
+ label = "mpp";
+ };
};
qcom,pm8110@1 {
diff --git a/arch/arm/boot/dts/msm-pm8226.dtsi b/arch/arm/boot/dts/msm-pm8226.dtsi
index 12641c5..41920d5 100644
--- a/arch/arm/boot/dts/msm-pm8226.dtsi
+++ b/arch/arm/boot/dts/msm-pm8226.dtsi
@@ -368,7 +368,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
@@ -687,6 +686,54 @@
label = "wled";
};
+ pwm@b100 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb100 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <0>;
+ };
+
+ pwm@b200 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb200 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <1>;
+ };
+
+ pwm@b300 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb300 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <2>;
+ };
+
+ pwm@b400 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb400 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <3>;
+ };
+
+ pwm@b500 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb500 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <4>;
+ };
+
+ pwm@b600 {
+ compatible = "qcom,qpnp-pwm";
+ reg = <0xb600 0x100>,
+ <0xb042 0x7e>;
+ reg-names = "qpnp-lpg-channel-base", "qpnp-lpg-lut-base";
+ qcom,channel-id = <5>;
+ };
+
regulator@8000 {
regulator-name = "8226_lvs1";
reg = <0x8000 0x100>;
diff --git a/arch/arm/boot/dts/msm-pm8941.dtsi b/arch/arm/boot/dts/msm-pm8941.dtsi
index 776b9d6..43b7d03 100644
--- a/arch/arm/boot/dts/msm-pm8941.dtsi
+++ b/arch/arm/boot/dts/msm-pm8941.dtsi
@@ -791,7 +791,6 @@
interrupt-names = "eoc-int-en-set";
qcom,adc-bit-resolution = <16>;
qcom,adc-vdd-reference = <1800>;
- qcom,rsense = <1500>;
chan@0 {
label = "internal_rsense";
diff --git a/arch/arm/boot/dts/msm8226-cdp.dts b/arch/arm/boot/dts/msm8226-cdp.dts
index 5f0dcc3..b203540 100644
--- a/arch/arm/boot/dts/msm8226-cdp.dts
+++ b/arch/arm/boot/dts/msm8226-cdp.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -131,7 +131,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -180,7 +180,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -266,7 +266,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -276,7 +276,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
diff --git a/arch/arm/boot/dts/msm8226-ion.dtsi b/arch/arm/boot/dts/msm8226-ion.dtsi
index 9cef5a9..f433a49 100644
--- a/arch/arm/boot/dts/msm8226-ion.dtsi
+++ b/arch/arm/boot/dts/msm8226-ion.dtsi
@@ -50,5 +50,19 @@
qcom,memory-reservation-type = "EBI1"; /* reserve EBI memory */
qcom,memory-reservation-size = <0x314000>;
};
+ qcom,ion-heap@23 { /* OTHER PIL HEAP */
+ compatible = "qcom,msm-ion-reserve";
+ reg = <23>;
+ qcom,heap-align = <0x1000>;
+ qcom,memory-fixed = <0x06400000 0x2000000>;
+ };
+ qcom,ion-heap@26 { /* MODEM PIL HEAP */
+ compatible = "qcom,msm-ion-reserve";
+ reg = <26>;
+ qcom,heap-align = <0x1000>;
+ qcom,memory-fixed = <0x08400000 0x4E00000>;
+
+ };
+
};
};
diff --git a/arch/arm/boot/dts/msm8226-mtp.dts b/arch/arm/boot/dts/msm8226-mtp.dts
index 59741c6..1f8a773 100644
--- a/arch/arm/boot/dts/msm8226-mtp.dts
+++ b/arch/arm/boot/dts/msm8226-mtp.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -123,7 +123,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -172,7 +172,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -259,7 +259,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -269,7 +269,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
@@ -359,3 +359,7 @@
&pm8226_bms {
status = "ok";
};
+
+&pm8226_chg {
+ qcom,charging-disabled;
+};
diff --git a/arch/arm/boot/dts/msm8226-qrd.dts b/arch/arm/boot/dts/msm8226-qrd.dts
index bbde23f..660fb3e 100644
--- a/arch/arm/boot/dts/msm8226-qrd.dts
+++ b/arch/arm/boot/dts/msm8226-qrd.dts
@@ -33,11 +33,11 @@
compatible = "synaptics,rmi4";
reg = <0x20>;
interrupt-parent = <&msmgpio>;
- interrupts = <17 0x2>;
+ interrupts = <17 0x2008>;
vdd-supply = <&pm8226_l19>;
vcc_i2c-supply = <&pm8226_lvs1>;
synaptics,reset-gpio = <&msmgpio 16 0x00>;
- synaptics,irq-gpio = <&msmgpio 17 0x00>;
+ synaptics,irq-gpio = <&msmgpio 17 0x2008>;
synaptics,button-map = <139 102 158>;
synaptics,i2c-pull-up;
synaptics,reg-en;
@@ -123,7 +123,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x4 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -269,7 +269,7 @@
qcom,mode = <1>; /* Digital output */
qcom,output-type = <0>; /* CMOS logic */
qcom,pull = <5>; /* QPNP_PIN_PULL_NO*/
- qcom,vin-sel = <2>; /* QPNP_PIN_VIN2 */
+ qcom,vin-sel = <3>; /* QPNP_PIN_VIN3 */
qcom,out-strength = <3>;/* QPNP_PIN_OUT_STRENGTH_HIGH */
qcom,src-sel = <2>; /* QPNP_PIN_SEL_FUNC_1 */
qcom,master-en = <1>; /* Enable GPIO */
@@ -279,7 +279,7 @@
qcom,mode = <1>;
qcom,output-type = <0>;
qcom,pull = <5>;
- qcom,vin-sel = <2>;
+ qcom,vin-sel = <3>;
qcom,out-strength = <3>;
qcom,src-sel = <2>;
qcom,master-en = <1>;
diff --git a/arch/arm/boot/dts/msm8226-sim.dts b/arch/arm/boot/dts/msm8226-sim.dts
index a86dc48..00c0e2e 100644
--- a/arch/arm/boot/dts/msm8226-sim.dts
+++ b/arch/arm/boot/dts/msm8226-sim.dts
@@ -36,7 +36,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
vdd-supply = <&pm8226_l17>;
@@ -62,7 +62,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
diff --git a/arch/arm/boot/dts/msm8226-smp2p.dtsi b/arch/arm/boot/dts/msm8226-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8226-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8226-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8226.dtsi b/arch/arm/boot/dts/msm8226.dtsi
index cb2047f..aa03951 100644
--- a/arch/arm/boot/dts/msm8226.dtsi
+++ b/arch/arm/boot/dts/msm8226.dtsi
@@ -229,6 +229,8 @@
HSUSB_3p3-supply = <&pm8226_l20>;
qcom,vdd-voltage-level = <1 5 7>;
+ qcom,hsusb-otg-phy-init-seq =
+ <0x44 0x80 0x68 0x81 0x24 0x82 0x13 0x83 0xffffffff>;
qcom,hsusb-otg-phy-type = <2>;
qcom,hsusb-otg-mode = <1>;
qcom,hsusb-otg-otg-control = <2>;
@@ -755,6 +757,13 @@
interrupts = <0 162 1>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
qcom,mss@fc880000 {
@@ -788,7 +797,7 @@
qcom,msm-mem-hole {
compatible = "qcom,msm-mem-hole";
- qcom,memblock-remove = <0x8400000 0x7b00000>; /* Address and Size of Hole */
+ qcom,memblock-remove = <0x6400000 0x9b00000>; /* Address and Size of Hole */
};
tsens: tsens@fc4a8000 {
@@ -908,6 +917,42 @@
<55 512 3936000 393600>,
<55 512 3936000 393600>;
};
+
+ qcom,qcrypto@fd404000 {
+ compatible = "qcom,qcrypto";
+ reg = <0xfd400000 0x20000>,
+ <0xfd404000 0x8000>;
+ reg-names = "crypto-base","crypto-bam-base";
+ interrupts = <0 207 0>;
+ qcom,bam-pipe-pair = <2>;
+ qcom,ce-hw-instance = <0>;
+ qcom,ce-hw-shared;
+ qcom,msm-bus,name = "qcrypto-noc";
+ qcom,msm-bus,num-cases = <2>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <56 512 0 0>,
+ <56 512 3936000 393600>;
+ };
+
+ qcom,qcedev@fd400000 {
+ compatible = "qcom,qcedev";
+ reg = <0xfd400000 0x20000>,
+ <0xfd404000 0x8000>;
+ reg-names = "crypto-base","crypto-bam-base";
+ interrupts = <0 207 0>;
+ qcom,bam-pipe-pair = <1>;
+ qcom,ce-hw-instance = <0>;
+ qcom,ce-hw-shared;
+ qcom,msm-bus,name = "qcedev-noc";
+ qcom,msm-bus,num-cases = <2>;
+ qcom,msm-bus,active-only = <0>;
+ qcom,msm-bus,num-paths = <1>;
+ qcom,msm-bus,vectors-KBps =
+ <56 512 0 0>,
+ <56 512 3936000 393600>;
+ };
};
&gdsc_venus {
diff --git a/arch/arm/boot/dts/msm8610-cdp.dts b/arch/arm/boot/dts/msm8610-cdp.dts
index 08da115..9b114cc 100644
--- a/arch/arm/boot/dts/msm8610-cdp.dts
+++ b/arch/arm/boot/dts/msm8610-cdp.dts
@@ -17,13 +17,32 @@
/ {
model = "Qualcomm MSM 8610 CDP";
compatible = "qcom,msm8610-cdp", "qcom,msm8610", "qcom,cdp";
- qcom,msm-id = <147 1 0>, <165 1 0>;
+ qcom,msm-id = <147 1 0>, <165 1 0>, <161 1 0>, <162 1 0>,
+ <163 1 0>, <164 1 0>, <166 1 0>;
serial@f991e000 {
status = "ok";
};
};
+&spmi_bus {
+ qcom,pm8110@0 {
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+ linux,default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+ };
+};
+
&sdhc_1 {
vdd-supply = <&pm8110_l17>;
qcom,vdd-always-on;
@@ -79,3 +98,25 @@
status = "ok";
};
+
+&pm8110_chg {
+ status = "ok";
+ qcom,charging-disabled;
+ qcom,use-default-batt-values;
+
+ qcom,chgr@1000 {
+ status = "ok";
+ };
+
+ qcom,buck@1100 {
+ status = "ok";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "ok";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "ok";
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-gpu.dtsi b/arch/arm/boot/dts/msm8610-gpu.dtsi
index 5e57430..5580f73 100644
--- a/arch/arm/boot/dts/msm8610-gpu.dtsi
+++ b/arch/arm/boot/dts/msm8610-gpu.dtsi
@@ -27,8 +27,9 @@
qcom,idle-timeout = <8>; /* <HZ/12> */
qcom,nap-allowed = <1>;
qcom,strtstp-sleepwake;
- qcom,clk-map = <0x000001E>; /* KGSL_CLK_CORE |
- KGSL_CLK_IFACE | KGSL_CLK_MEM | KGSL_CLK_MEM_IFACE */
+ qcom,clk-map = <0x000005E>; /* KGSL_CLK_CORE |
+ KGSL_CLK_IFACE | KGSL_CLK_MEM | KGSL_CLK_MEM_IFACE |
+ KGSL_CLK_ALT_MEM_IFACE */
/* Bus Scale Settings */
qcom,msm-bus,name = "grp3d";
diff --git a/arch/arm/boot/dts/msm8610-mtp.dts b/arch/arm/boot/dts/msm8610-mtp.dts
index e3eed72..3a26376 100644
--- a/arch/arm/boot/dts/msm8610-mtp.dts
+++ b/arch/arm/boot/dts/msm8610-mtp.dts
@@ -17,13 +17,32 @@
/ {
model = "Qualcomm MSM 8610 MTP";
compatible = "qcom,msm8610-mtp", "qcom,msm8610", "qcom,mtp";
- qcom,msm-id = <147 8 0>, <165 8 0>;
+ qcom,msm-id = <147 8 0>, <165 8 0>, <161 8 0>, <162 8 0>,
+ <163 8 0>, <164 8 0>, <166 8 0>;
- serial@f991f000 {
+ serial@f991e000 {
status = "ok";
};
};
+&spmi_bus {
+ qcom,pm8110@0 {
+ qcom,leds@a200 {
+ status = "okay";
+ qcom,led_mpp_3 {
+ label = "mpp";
+ linux,name = "wled-backlight";
+ linux,default-trigger = "none";
+ qcom,default-state = "on";
+ qcom,max-current = <40>;
+ qcom,id = <6>;
+ qcom,source-sel = <1>;
+ qcom,mode-ctrl = <0x10>;
+ };
+ };
+ };
+};
+
&sdhc_1 {
vdd-supply = <&pm8110_l17>;
qcom,vdd-always-on;
@@ -79,3 +98,28 @@
status = "ok";
};
+
+&pm8110_chg {
+ status = "ok";
+ qcom,charging-disabled;
+
+ qcom,chgr@1000 {
+ status = "ok";
+ };
+
+ qcom,buck@1100 {
+ status = "ok";
+ };
+
+ qcom,bat-if@1200 {
+ status = "ok";
+ };
+
+ qcom,usb-chgpth@1300 {
+ status = "ok";
+ };
+
+ qcom,chg-misc@1600 {
+ status = "ok";
+ };
+};
diff --git a/arch/arm/boot/dts/msm8610-regulator.dtsi b/arch/arm/boot/dts/msm8610-regulator.dtsi
index d50902c..67eee5c 100644
--- a/arch/arm/boot/dts/msm8610-regulator.dtsi
+++ b/arch/arm/boot/dts/msm8610-regulator.dtsi
@@ -10,19 +10,6 @@
* GNU General Public License for more details.
*/
- /* Stub Regulators */
-
-/ {
- pm8110_s1_corner: regulator-s1-corner {
- compatible = "qcom,stub-regulator";
- regulator-name = "8110_s1_corner";
- qcom,hpm-min-load = <100000>;
- regulator-min-microvolt = <1>;
- regulator-max-microvolt = <7>;
- qcom,consumer-supplies = "vdd_dig", "", "vdd_sr2_dig", "";
- };
-};
-
/* SPM controlled regulators */
&spmi_bus {
@@ -60,195 +47,274 @@
};
};
-/* QPNP controlled regulators: */
+/* RPM controlled regulators: */
-&spmi_bus {
+&rpm_bus {
- qcom,pm8110@1 {
-
- pm8110_s1: regulator@1400 {
+ rpm-regulator-smpa1 {
+ status = "okay";
+ pm8110_s1: regulator-s1 {
status = "okay";
- regulator-min-microvolt = <1150000>;
- regulator-max-microvolt = <1150000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <1275000>;
};
- pm8110_s3: regulator@1a00 {
- status = "okay";
- regulator-min-microvolt = <1350000>;
+ pm8110_s1_corner: regulator-s1-corner {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1_corner";
+ qcom,set = <3>;
+ regulator-min-microvolt = <1>;
+ regulator-max-microvolt = <7>;
+ qcom,use-voltage-corner;
+ qcom,consumer-supplies = "vdd_dig", "", "vdd_sr2_dig", "";
+ };
+
+ pm8110_s1_corner_ao: regulator-s1-corner-ao {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_s1_corner_ao";
+ qcom,set = <1>;
+ regulator-min-microvolt = <1>;
+ regulator-max-microvolt = <7>;
+ qcom,use-voltage-corner;
+ };
+ };
+
+ rpm-regulator-smpa3 {
+ status = "okay";
+ pm8110_s3: regulator-s3 {
+ regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1350000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
- };
-
- pm8110_s4: regulator@1d00 {
+ qcom,init-voltage = <1200000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-smpa4 {
+ status = "okay";
+ pm8110_s4: regulator-s4 {
regulator-min-microvolt = <2150000>;
regulator-max-microvolt = <2150000>;
- qcom,enable-time = <500>;
- qcom,system-load = <100000>;
- regulator-always-on;
- };
-
- pm8110_l1: regulator@4000 {
+ qcom,init-voltage = <2150000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa1 {
+ status = "okay";
+ pm8110_l1: regulator-l1 {
regulator-min-microvolt = <1225000>;
regulator-max-microvolt = <1225000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l2: regulator@4100 {
+ qcom,init-voltage = <1225000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa2 {
+ status = "okay";
+ pm8110_l2: regulator-l2 {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
+ qcom,init-voltage = <1200000>;
+ status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa3 {
+ status = "okay";
+ pm8110_l3: regulator-l3 {
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
+ status = "okay";
};
- pm8110_l3: regulator@4200 {
+ pm8110_l3_ao: regulator-l3-ao {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3_ao";
+ qcom,set = <1>;
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
- regulator-min-microvolt = <1150000>;
- regulator-max-microvolt = <1150000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
};
- pm8110_l4: regulator@4300 {
+ pm8110_l3_so: regulator-l3-so {
+ compatible = "qcom,rpm-regulator-smd";
+ regulator-name = "8110_l3_so";
+ qcom,set = <2>;
+ regulator-min-microvolt = <750000>;
+ regulator-max-microvolt = <1275000>;
+ qcom,init-voltage = <750000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa4 {
+ status = "okay";
+ pm8110_l4: regulator-l4 {
regulator-min-microvolt = <1200000>;
regulator-max-microvolt = <1200000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l5: regulator@4400 {
+ qcom,init-voltage = <1200000>;
status = "okay";
- parent-supply = <&pm8110_s3>;
+ };
+ };
+
+ rpm-regulator-ldoa5 {
+ status = "okay";
+ pm8110_l5: regulator-l5 {
regulator-min-microvolt = <1300000>;
regulator-max-microvolt = <1300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l6: regulator@4500 {
+ qcom,init-voltage = <1300000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa6 {
+ status = "okay";
+ pm8110_l6: regulator-l6 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- qcom,system-load = <10000>;
- regulator-always-on;
- };
-
- pm8110_l7: regulator@4600 {
+ qcom,init-voltage = <1800000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa7 {
+ status = "okay";
+ pm8110_l7: regulator-l7 {
regulator-min-microvolt = <2050000>;
regulator-max-microvolt = <2050000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l8: regulator@4700 {
+ qcom,init-voltage = <2050000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa8 {
+ status = "okay";
+ pm8110_l8: regulator-l8 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l9: regulator@4800 {
+ qcom,init-voltage = <1800000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa9 {
+ status = "okay";
+ pm8110_l9: regulator-l9 {
regulator-min-microvolt = <2050000>;
regulator-max-microvolt = <2050000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l10: regulator@4900 {
+ qcom,init-voltage = <2050000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa10 {
+ status = "okay";
+ pm8110_l10: regulator-l10 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
+ qcom,init-voltage = <1800000>;
+ status = "okay";
qcom,consumer-supplies = "vdd_sr2_pll", "";
};
+ };
- pm8110_l12: regulator@4b00 {
- status = "okay";
+ rpm-regulator-ldoa12 {
+ status = "okay";
+ pm8110_l12: regulator-l12 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l14: regulator@4d00 {
+ qcom,init-voltage = <3300000>;
status = "okay";
- parent-supply = <&pm8110_s4>;
+ };
+ };
+
+ rpm-regulator-ldoa14 {
+ status = "okay";
+ pm8110_l14: regulator-l14 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l15: regulator@4e00 {
+ qcom,init-voltage = <1800000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa15 {
+ status = "okay";
+ pm8110_l15: regulator-l15 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l16: regulator@4f00 {
+ qcom,init-voltage = <3300000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa16 {
+ status = "okay";
+ pm8110_l16: regulator-l16 {
regulator-min-microvolt = <3000000>;
regulator-max-microvolt = <3000000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l17: regulator@5000 {
+ qcom,init-voltage = <3000000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa17 {
+ status = "okay";
+ pm8110_l17: regulator-l17 {
regulator-min-microvolt = <2900000>;
regulator-max-microvolt = <2900000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l18: regulator@5100 {
+ qcom,init-voltage = <2900000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa18 {
+ status = "okay";
+ pm8110_l18: regulator-l18 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l19: regulator@5200 {
+ qcom,init-voltage = <2950000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa19 {
+ status = "okay";
+ pm8110_l19: regulator-l19 {
regulator-min-microvolt = <2850000>;
regulator-max-microvolt = <2850000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l20: regulator@5300 {
+ qcom,init-voltage = <2850000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa20 {
+ status = "okay";
+ pm8110_l20: regulator-l20 {
regulator-min-microvolt = <3075000>;
regulator-max-microvolt = <3075000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l21: regulator@5400 {
+ qcom,init-voltage = <3075000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa21 {
+ status = "okay";
+ pm8110_l21: regulator-l21 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2950000>;
- qcom,enable-time = <200>;
- };
-
- pm8110_l22: regulator@5500 {
+ qcom,init-voltage = <2950000>;
status = "okay";
+ };
+ };
+
+ rpm-regulator-ldoa22 {
+ status = "okay";
+ pm8110_l22: regulator-l22 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <3300000>;
- qcom,enable-time = <200>;
+ qcom,init-voltage = <3300000>;
+ status = "okay";
};
};
};
diff --git a/arch/arm/boot/dts/msm8610-smp2p.dtsi b/arch/arm/boot/dts/msm8610-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8610-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8610-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8610.dtsi b/arch/arm/boot/dts/msm8610.dtsi
index 0df70df..b794dc7 100644
--- a/arch/arm/boot/dts/msm8610.dtsi
+++ b/arch/arm/boot/dts/msm8610.dtsi
@@ -233,7 +233,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -265,7 +265,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -401,7 +401,7 @@
reg = <0xf9011050 0x8>;
reg-names = "rcg_base";
a7_cpu-supply = <&apc_vreg_corner>;
- a7_mem-supply = <&pm8110_l3>;
+ a7_mem-supply = <&pm8110_l3_ao>;
};
spmi_bus: qcom,spmi@fc4c0000 {
@@ -414,7 +414,6 @@
/* 190,ee0_krait_hlos_spmi_periph_irq */
/* 187,channel_0_krait_hlos_trans_done_irq */
interrupts = <0 190 0>, <0 187 0>;
- qcom,not-wakeup;
qcom,pmic-arb-ee = <0>;
qcom,pmic-arb-channel = <0>;
};
@@ -585,8 +584,9 @@
qcom,wcnss-wlan@fb000000 {
compatible = "qcom,wcnss_wlan";
- reg = <0xfb000000 0x280000>;
- reg-names = "wcnss_mmio";
+ reg = <0xfb000000 0x280000>,
+ <0xf9011008 0x04>;
+ reg-names = "wcnss_mmio", "wcnss_fiq";
interrupts = <0 145 0>, <0 146 0>;
interrupt-names = "wcnss_wlantx_irq", "wcnss_wlanrx_irq";
@@ -639,6 +639,13 @@
interrupts = <0 162 1>;
vdd_cx-supply = <&pm8110_s1_corner>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
tsens: tsens@fc4a8000 {
@@ -693,6 +700,45 @@
<55 512 3936000 393600>,
<55 512 3936000 393600>;
};
+
+ qcom,msm-rng@f9bff000 {
+ compatible = "qcom,msm-rng";
+ reg = <0xf9bff000 0x200>;
+ qcom,msm-rng-iface-clk;
+ };
+
+ jtag_mm0: jtagmm@fc34c000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34c000 0x1000>,
+ <0xfc340000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm1: jtagmm@fc34d000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34d000 0x1000>,
+ <0xfc342000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm2: jtagmm@fc34e000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34e000 0x1000>,
+ <0xfc344000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ jtag_mm3: jtagmm@fc34f000 {
+ compatible = "qcom,jtag-mm";
+ reg = <0xfc34f000 0x1000>,
+ <0xfc346000 0x1000>;
+ reg-names = "etm-base","debug-base";
+ };
+
+ qcom,tz-log@fe805720 {
+ compatible = "qcom,tz-log";
+ reg = <0x0fe805720 0x1000>;
+ };
};
&gdsc_vfe {
@@ -729,6 +775,7 @@
/include/ "msm8610-iommu-domains.dtsi"
+/include/ "msm-pm8110-rpm-regulator.dtsi"
/include/ "msm-pm8110.dtsi"
/include/ "msm8610-regulator.dtsi"
diff --git a/arch/arm/boot/dts/msm8974-bus.dtsi b/arch/arm/boot/dts/msm8974-bus.dtsi
index bb4b48e..3e0ef04 100644
--- a/arch/arm/boot/dts/msm8974-bus.dtsi
+++ b/arch/arm/boot/dts/msm8974-bus.dtsi
@@ -1349,30 +1349,18 @@
qcom,hw-sel = "NoC";
};
- mas-video-p0-ocmem {
+ mas-video-ocmem {
cell-id = <68>;
- label = "mas-video-p0-ocmem";
- qcom,masterp = <3>;
+ label = "mas-video-ocmem";
+ qcom,masterp = <3 4>;
qcom,tier = <2>;
qcom,perm-mode = "Fixed";
qcom,mode = "Fixed";
- qcom,qport = <2>;
+ qcom,qport = <2 3>;
qcom,mas-hw-id = <15>;
qcom,hw-sel = "NoC";
};
- mas-video-p1-ocmem {
- cell-id = <69>;
- label = "mas-video-p1-ocmem";
- qcom,masterp = <4>;
- qcom,tier = <2>;
- qcom,perm-mode = "Fixed";
- qcom,mode = "Fixed";
- qcom,qport = <3>;
- qcom,mas-hw-id = <16>;
- qcom,hw-sel = "NoC";
- };
-
mas-vfe-ocmem {
cell-id = <70>;
label = "mas-vfe-ocmem";
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
index df0db7e..b574a31 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-cdp.dtsi
@@ -111,7 +111,7 @@
reg = <0x6c 0x0>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
index f58c1e2..748d5f7 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-fluid.dtsi
@@ -112,7 +112,7 @@
reg = <0x6c>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi b/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
index 767a705..53f6e9e 100644
--- a/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera-sensor-mtp.dtsi
@@ -113,7 +113,7 @@
reg = <0x6c>;
qcom,slave-id = <0x6c 0x300A 0x2720>;
qcom,csiphy-sd-index = <2>;
- qcom,csid-sd-index = <0>;
+ qcom,csid-sd-index = <2>;
qcom,mount-angle = <90>;
qcom,sensor-name = "ov2720";
cam_vdig-supply = <&pm8941_l3>;
diff --git a/arch/arm/boot/dts/msm8974-camera.dtsi b/arch/arm/boot/dts/msm8974-camera.dtsi
index 3a78a15..94a28f7 100644
--- a/arch/arm/boot/dts/msm8974-camera.dtsi
+++ b/arch/arm/boot/dts/msm8974-camera.dtsi
@@ -23,8 +23,9 @@
qcom,csiphy@fda0ac00 {
cell-index = <0>;
compatible = "qcom,csiphy";
- reg = <0xfda0ac00 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0ac00 0x200>,
+ <0xfda00030 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 78 0>;
interrupt-names = "csiphy";
};
@@ -32,8 +33,9 @@
qcom,csiphy@fda0b000 {
cell-index = <1>;
compatible = "qcom,csiphy";
- reg = <0xfda0b000 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0b000 0x200>,
+ <0xfda00038 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 79 0>;
interrupt-names = "csiphy";
};
@@ -41,8 +43,9 @@
qcom,csiphy@fda0b400 {
cell-index = <2>;
compatible = "qcom,csiphy";
- reg = <0xfda0b400 0x200>;
- reg-names = "csiphy";
+ reg = <0xfda0b400 0x200>,
+ <0xfda00040 0x4>;
+ reg-names = "csiphy", "csiphy_clk_mux";
interrupts = <0 80 0>;
interrupt-names = "csiphy";
};
@@ -94,8 +97,9 @@
qcom,ispif@fda0A000 {
cell-index = <0>;
compatible = "qcom,ispif";
- reg = <0xfda0A000 0x500>;
- reg-names = "ispif";
+ reg = <0xfda0A000 0x500>,
+ <0xfda00020 0x10>;
+ reg-names = "ispif", "csi_clk_mux";
interrupts = <0 55 0>;
interrupt-names = "ispif";
};
diff --git a/arch/arm/boot/dts/msm8974-cdp.dtsi b/arch/arm/boot/dts/msm8974-cdp.dtsi
index 0319128..9cfc5fd 100644
--- a/arch/arm/boot/dts/msm8974-cdp.dtsi
+++ b/arch/arm/boot/dts/msm8974-cdp.dtsi
@@ -344,7 +344,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -374,11 +374,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The hardware team's drive strength recommendation for the clock line is 10 mA.
+ * But since the driver has been using the below values from the start
+ * without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&uart7 {
status = "ok";
qcom,tx-gpio = <&msmgpio 41 0x00>;
@@ -690,5 +710,12 @@
qcom,cdc-micbias1-ext-cap;
qcom,cdc-micbias3-ext-cap;
qcom,cdc-micbias4-ext-cap;
+
+ /* If boost isn't available, vph_pwr_vreg can be used instead */
+ cdc-vdd-spkdrv-supply = <&pm8941_boost>;
+ qcom,cdc-vdd-spkdrv-voltage = <5000000 5000000>;
+ qcom,cdc-vdd-spkdrv-current = <1250000>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
diff --git a/arch/arm/boot/dts/msm8974-fluid.dtsi b/arch/arm/boot/dts/msm8974-fluid.dtsi
index 79e6371..e54c3d8 100644
--- a/arch/arm/boot/dts/msm8974-fluid.dtsi
+++ b/arch/arm/boot/dts/msm8974-fluid.dtsi
@@ -212,6 +212,13 @@
qcom,cdc-micbias2-ext-cap;
qcom,cdc-micbias3-ext-cap;
qcom,cdc-micbias4-ext-cap;
+
+ /* If boost isn't available, vph_pwr_vreg can be used instead */
+ cdc-vdd-spkdrv-supply = <&pm8941_boost>;
+ qcom,cdc-vdd-spkdrv-voltage = <5000000 5000000>;
+ qcom,cdc-vdd-spkdrv-current = <1250000>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
@@ -308,7 +315,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -338,11 +345,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The hardware team's drive strength recommendation for the clock line is 10 mA.
+ * But since the driver has been using the below values from the start
+ * without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&usb3 {
qcom,otg-capability;
};
@@ -353,7 +380,7 @@
&pm8941_chg {
status = "ok";
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
qcom,chgr@1000 {
status = "ok";
diff --git a/arch/arm/boot/dts/msm8974-liquid.dtsi b/arch/arm/boot/dts/msm8974-liquid.dtsi
index be890b1..d8a090b 100644
--- a/arch/arm/boot/dts/msm8974-liquid.dtsi
+++ b/arch/arm/boot/dts/msm8974-liquid.dtsi
@@ -318,6 +318,7 @@
"Lineout_3 amp", "LINEOUT3",
"Lineout_2 amp", "LINEOUT2",
"Lineout_4 amp", "LINEOUT4",
+ "SPK_ultrasound amp", "SPK_OUT",
"AMIC1", "MIC BIAS4 External",
"MIC BIAS4 External", "Analog Mic4",
"AMIC2", "MIC BIAS2 External",
@@ -346,7 +347,14 @@
qcom,ext-spk-amp-supply = <&ext_5v>;
qcom,ext-spk-amp-gpio = <&pm8841_mpps 1 0>;
qcom,dock-plug-det-irq = <&pm8841_mpps 2 0>;
+ qcom,ext-ult-spk-amp-gpio = <&pm8941_gpios 6 0>;
qcom,hdmi-audio-rx;
+
+ qcom,prim-auxpcm-gpio-clk = <&msmgpio 74 0>;
+ qcom,prim-auxpcm-gpio-sync = <&msmgpio 75 0>;
+ qcom,prim-auxpcm-gpio-din = <&msmgpio 76 0>;
+ qcom,prim-auxpcm-gpio-dout = <&msmgpio 77 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-tert";
};
hsic_hub {
@@ -443,6 +451,14 @@
};
gpio@c500 { /* GPIO 6 */
+ /* ULTRASOUND_EN_1 PA AB enable */
+ qcom,mode = <1>; /* DIG_OUT */
+ qcom,output-type = <0>; /* CMOS */
+ qcom,pull = <4>; /* PULL_DOWN */
+ qcom,vin-sel = <0>; /* VPH */
+ qcom,out-strength = <2>; /* STRENGTH_MED */
+ qcom,src-sel = <0>; /* CONSTANT */
+ qcom,master-en = <1>;
};
gpio@c600 { /* GPIO 7 */
@@ -695,10 +711,24 @@
};
};
+&vph_pwr_vreg {
+ status = "ok";
+};
+
&slim_msm {
taiko_codec {
qcom,cdc-micbias2-ext-cap;
qcom,cdc-micbias3-ext-cap;
+
+ /*
+ * Liquid has an external spkrdrv supply. Give a dummy supply to
+ * keep the codec driver happy.
+ */
+ cdc-vdd-spkdrv-supply = <&vph_pwr_vreg>;
+ qcom,cdc-vdd-spkdrv-voltage = <0 0>;
+ qcom,cdc-vdd-spkdrv-current = <0>;
+
+ qcom,cdc-on-demand-supplies = "cdc-vdd-spkdrv";
};
};
@@ -786,7 +816,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -805,7 +835,27 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+
+/* The hardware team's drive strength recommendation for the clock line is 10 mA.
+ * But since the driver has been using the below values from the start
+ * without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
diff --git a/arch/arm/boot/dts/msm8974-mtp.dtsi b/arch/arm/boot/dts/msm8974-mtp.dtsi
index cd83668..ca5f663 100644
--- a/arch/arm/boot/dts/msm8974-mtp.dtsi
+++ b/arch/arm/boot/dts/msm8974-mtp.dtsi
@@ -283,7 +283,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,nonremovable;
@@ -313,11 +313,31 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
status = "ok";
};
+/* The hardware team's drive strength recommendation for the clock line is 10 mA.
+ * But since the driver has been using the below values from the start
+ * without any problems, continue to use those.
+ */
+&sdcc1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdcc2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_1 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
+&sdhc_2 {
+ qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+};
+
&usb_otg {
qcom,hsusb-otg-otg-control = <2>;
};
@@ -336,7 +356,7 @@
&pm8941_chg {
status = "ok";
- qcom,chg-charging-disabled;
+ qcom,charging-disabled;
qcom,chgr@1000 {
status = "ok";
diff --git a/arch/arm/boot/dts/msm8974-regulator.dtsi b/arch/arm/boot/dts/msm8974-regulator.dtsi
index 49450f3..d1b3334 100644
--- a/arch/arm/boot/dts/msm8974-regulator.dtsi
+++ b/arch/arm/boot/dts/msm8974-regulator.dtsi
@@ -26,15 +26,33 @@
pm8941_mvs1: regulator@8300 {
parent-supply = <&pm8941_boost>;
- qcom,enable-time = <200>;
+ qcom,enable-time = <1000>;
qcom,pull-down-enable = <1>;
+ interrupts = <0x1 0x83 0x2>;
+ interrupt-names = "ocp";
+ qcom,ocp-enable = <1>;
+ qcom,ocp-max-retries = <10>;
+ qcom,ocp-retry-delay = <30>;
+ qcom,soft-start-enable = <1>;
+ qcom,vs-soft-start-strength = <0>;
+ qcom,hpm-enable = <1>;
+ qcom,auto-mode-enable = <0>;
status = "okay";
};
pm8941_mvs2: regulator@8400 {
parent-supply = <&pm8941_boost>;
- qcom,enable-time = <200>;
+ qcom,enable-time = <1000>;
qcom,pull-down-enable = <1>;
+ interrupts = <0x1 0x84 0x2>;
+ interrupt-names = "ocp";
+ qcom,ocp-enable = <1>;
+ qcom,ocp-max-retries = <10>;
+ qcom,ocp-retry-delay = <30>;
+ qcom,soft-start-enable = <1>;
+ qcom,vs-soft-start-strength = <0>;
+ qcom,hpm-enable = <1>;
+ qcom,auto-mode-enable = <0>;
status = "okay";
};
};
@@ -448,7 +466,7 @@
#address-cells = <1>;
#size-cells = <1>;
ranges;
- qcom,pfm-threshold = <376975>;
+ qcom,pfm-threshold = <73>;
krait0_vreg: regulator@f9088000 {
compatible = "qcom,krait-regulator";
diff --git a/arch/arm/boot/dts/msm8974-smp2p.dtsi b/arch/arm/boot/dts/msm8974-smp2p.dtsi
index 91029e2..079e4ca 100644
--- a/arch/arm/boot/dts/msm8974-smp2p.dtsi
+++ b/arch/arm/boot/dts/msm8974-smp2p.dtsi
@@ -148,6 +148,29 @@
gpios = <&smp2pgpio_smp2p_2_out 0 0>;
};
+ /* SMP2P SSR Driver for inbound entry from lpass. */
+ smp2pgpio_ssr_smp2p_2_in: qcom,smp2pgpio-ssr-smp2p-2-in {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "slave-kernel";
+ qcom,remote-pid = <2>;
+ qcom,is-inbound;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ /* SMP2P SSR Driver for outbound entry to lpass */
+ smp2pgpio_ssr_smp2p_2_out: qcom,smp2pgpio-ssr-smp2p-2-out {
+ compatible = "qcom,smp2pgpio";
+ qcom,entry-name = "master-kernel";
+ qcom,remote-pid = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
smp2pgpio_smp2p_4_in: qcom,smp2pgpio-smp2p-4-in {
compatible = "qcom,smp2pgpio";
qcom,entry-name = "smp2p";
diff --git a/arch/arm/boot/dts/msm8974-v1-pm.dtsi b/arch/arm/boot/dts/msm8974-v1-pm.dtsi
index ec6b14a..f9c0920 100644
--- a/arch/arm/boot/dts/msm8974-v1-pm.dtsi
+++ b/arch/arm/boot/dts/msm8974-v1-pm.dtsi
@@ -142,7 +142,7 @@
qcom,type = <0x62706d73>; /* "smpb" */
qcom,id = <0x02>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <5>; /* Super Turbo */
+ qcom,init-value = <6>; /* Super Turbo */
};
qcom,lpm-resources@1 {
diff --git a/arch/arm/boot/dts/msm8974-v2-pm.dtsi b/arch/arm/boot/dts/msm8974-v2-pm.dtsi
index 41837c1..5a1c047 100644
--- a/arch/arm/boot/dts/msm8974-v2-pm.dtsi
+++ b/arch/arm/boot/dts/msm8974-v2-pm.dtsi
@@ -142,7 +142,7 @@
qcom,type = <0x62706d73>; /* "smpb" */
qcom,id = <0x02>;
qcom,key = <0x6e726f63>; /* "corn" */
- qcom,init-value = <5>; /* Super Turbo */
+ qcom,init-value = <6>; /* Super Turbo */
};
qcom,lpm-resources@1 {
diff --git a/arch/arm/boot/dts/msm8974-v2.dtsi b/arch/arm/boot/dts/msm8974-v2.dtsi
index 777d26c..494b12c 100644
--- a/arch/arm/boot/dts/msm8974-v2.dtsi
+++ b/arch/arm/boot/dts/msm8974-v2.dtsi
@@ -63,6 +63,8 @@
qcom,mdss-intf-off = <0x00012500 0x00012700
0x00012900 0x00012b00>;
qcom,mdss-pingpong-off = <0x00012D00 0x00012E00 0x00012F00>;
+ qcom,mdss-has-bwc;
+ qcom,mdss-has-decimation;
};
&mdss_hdmi_tx {
diff --git a/arch/arm/boot/dts/msm8974.dtsi b/arch/arm/boot/dts/msm8974.dtsi
index 9f11839..f787cf5 100644
--- a/arch/arm/boot/dts/msm8974.dtsi
+++ b/arch/arm/boot/dts/msm8974.dtsi
@@ -38,7 +38,7 @@
secure_mem: secure_region {
linux,contiguous-region;
- reg = <0 0x7800000>;
+ reg = <0 0xFC00000>;
label = "secure_mem";
};
@@ -246,7 +246,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
@@ -291,7 +291,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
qcom,pad-pull-off = <0x0 0x3 0x3>; /* no-pull, pull-up, pull-up */
- qcom,pad-drv-on = <0x7 0x4 0x4>; /* 16mA, 10mA, 10mA */
+ qcom,pad-drv-on = <0x4 0x4 0x4>; /* 10mA, 10mA, 10mA */
qcom,pad-drv-off = <0x0 0x0 0x0>; /* 2mA, 2mA, 2mA */
qcom,clk-rates = <400000 20000000 25000000 50000000 100000000 200000000>;
@@ -691,6 +691,7 @@
qcom,prim-auxpcm-gpio-sync = <&msmgpio 66 0>;
qcom,prim-auxpcm-gpio-din = <&msmgpio 67 0>;
qcom,prim-auxpcm-gpio-dout = <&msmgpio 68 0>;
+ qcom,prim-auxpcm-gpio-set = "prim-gpio-prim";
qcom,sec-auxpcm-gpio-clk = <&msmgpio 79 0>;
qcom,sec-auxpcm-gpio-sync = <&msmgpio 80 0>;
qcom,sec-auxpcm-gpio-din = <&msmgpio 81 0>;
@@ -833,6 +834,13 @@
interrupts = <0 162 1>;
qcom,firmware-name = "adsp";
+
+ /* GPIO inputs from lpass */
+ qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
+ qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+
+ /* GPIO output to lpass */
+ qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
};
qcom,msm-adsp-loader {
@@ -1191,7 +1199,7 @@
reg = <0xf9bff000 0x200>;
};
- qcom,qseecom@fe806000 {
+ qseecom: qcom,qseecom@7f00000 {
compatible = "qcom,qseecom";
reg = <0x7f00000 0x500000>;
reg-names = "secapp-region";
@@ -1232,7 +1240,27 @@
qcom,firmware-name = "venus";
};
- qcom,cache_erp {
+ qcom,cache_erp@f9012000 {
+ reg = <0xf9012000 0x80>,
+ <0xf9089000 0x80>,
+ <0xf9099000 0x80>,
+ <0xf90a9000 0x80>,
+ <0xf90b9000 0x80>,
+ <0xf9088000 0x40>,
+ <0xf9098000 0x40>,
+ <0xf90a8000 0x40>,
+ <0xf90b8000 0x40>;
+
+ reg-names = "l2_saw",
+ "krait0_saw",
+ "krait1_saw",
+ "krait2_saw",
+ "krait3_saw",
+ "krait0_acs",
+ "krait1_acs",
+ "krait2_acs",
+ "krait3_acs";
+
compatible = "qcom,cache_erp";
interrupts = <1 9 0>, <0 2 0>;
interrupt-names = "l1_irq", "l2_irq";
@@ -1363,8 +1391,8 @@
qcom,core-control-mask = <0xe>;
qcom,vdd-restriction-temp = <5>;
qcom,vdd-restriction-temp-hysteresis = <10>;
- qcom,pmic-sw-mode-temp = <90>;
- qcom,pmic-sw-mode-temp-hysteresis = <70>;
+ qcom,pmic-sw-mode-temp = <85>;
+ qcom,pmic-sw-mode-temp-hysteresis = <75>;
qcom,pmic-sw-mode-regs = "vdd_dig";
vdd_dig-supply = <&pm8841_s2_floor_corner>;
vdd_gfx-supply = <&pm8841_s4_floor_corner>;
@@ -1390,7 +1418,7 @@
qcom,rx-ring-size = <64>;
};
- qcom,msm-mem-hole {
+ memory_hole: qcom,msm-mem-hole {
compatible = "qcom,msm-mem-hole";
qcom,memblock-remove = <0x7f00000 0x8000000>; /* Address and Size of Hole */
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts b/arch/arm/boot/dts/msm9625-cdp.dtsi
similarity index 79%
rename from arch/arm/boot/dts/msm9625-v2-1-cdp.dts
rename to arch/arm/boot/dts/msm9625-cdp.dtsi
index da07100..1f9cbb0 100644
--- a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-cdp.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,17 +10,10 @@
* GNU General Public License for more details.
*/
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
+/include/ "msm9625-display.dtsi"
+/include/ "qpic-panel-ili-qvga.dtsi"
/ {
- model = "Qualcomm MSM 9625V2.1 CDP";
- compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
- qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
- <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
- <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
-
i2c@f9925000 {
charger@57 {
compatible = "summit,smb137c";
@@ -42,10 +35,18 @@
wlan0: qca,wlan {
cell-index = <0>;
- compatible = "qca,ar6004-sdio";
+ compatible = "qca,ar6004-hsic";
qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
+ qca,vdd-io-supply = <&pm8019_l11>;
+ };
+
+ qca,wlan_ar6003 {
+ cell-index = <0>;
+ compatible = "qca,ar6003-sdio";
+ qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
+ qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
+ qca,vdd-io-supply = <&pm8019_l11>;
};
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts b/arch/arm/boot/dts/msm9625-mtp.dtsi
similarity index 73%
copy from arch/arm/boot/dts/msm9625-v2-1-cdp.dts
copy to arch/arm/boot/dts/msm9625-mtp.dtsi
index da07100..cc0bf5e 100644
--- a/arch/arm/boot/dts/msm9625-v2-1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-mtp.dtsi
@@ -1,4 +1,4 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -10,17 +10,7 @@
* GNU General Public License for more details.
*/
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
-
/ {
- model = "Qualcomm MSM 9625V2.1 CDP";
- compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
- qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
- <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
- <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
-
i2c@f9925000 {
charger@57 {
compatible = "summit,smb137c";
@@ -42,10 +32,18 @@
wlan0: qca,wlan {
cell-index = <0>;
- compatible = "qca,ar6004-sdio";
+ compatible = "qca,ar6004-hsic";
qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
+ qca,vdd-io-supply = <&pm8019_l11>;
+ };
+
+ qca,wlan_ar6003 {
+ cell-index = <0>;
+ compatible = "qca,ar6003-sdio";
+ qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
+ qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
+ qca,vdd-io-supply = <&pm8019_l11>;
};
};
@@ -89,11 +87,23 @@
};
mpp@a300 { /* MPP 4 */
+ /* VADC channel 19 */
+ qcom,mode = <4>;
+ qcom,ain-route = <3>; /* AMUX 8 */
+ qcom,master-en = <1>;
+ qcom,src-sel = <0>; /* Function constant */
+ qcom,invert = <1>;
};
mpp@a400 { /* MPP 5 */
};
mpp@a500 { /* MPP 6 */
+ /* channel 21 */
+ qcom,mode = <4>;
+ qcom,ain-route = <1>; /* AMUX 6 */
+ qcom,master-en = <1>;
+ qcom,src-sel = <0>; /* Function constant */
+ qcom,invert = <1>;
};
};
diff --git a/arch/arm/boot/dts/msm9625-v1-cdp.dts b/arch/arm/boot/dts/msm9625-v1-cdp.dts
index cf17c69..d7537eb 100644
--- a/arch/arm/boot/dts/msm9625-v1-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-v1-cdp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v1.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
model = "Qualcomm MSM 9625V1 CDP";
@@ -20,88 +21,4 @@
qcom,msm-id = <134 1 0>, <152 1 0>, <149 1 0>, <150 1 0>,
<151 1 0>, <148 1 0>, <173 1 0>, <174 1 0>,
<175 1 0>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v1-mtp.dts b/arch/arm/boot/dts/msm9625-v1-mtp.dts
index 24aa3af..a70ec1a 100644
--- a/arch/arm/boot/dts/msm9625-v1-mtp.dts
+++ b/arch/arm/boot/dts/msm9625-v1-mtp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v1.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
model = "Qualcomm MSM 9625V1 MTP";
@@ -20,100 +21,4 @@
qcom,msm-id = <134 7 0>, <152 7 0>, <149 7 0>, <150 7 0>,
<151 7 0>, <148 7 0>, <173 7 0>, <174 7 0>,
<175 7 0>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- /* VADC channel 19 */
- qcom,mode = <4>;
- qcom,ain-route = <3>; /* AMUX 8 */
- qcom,master-en = <1>;
- qcom,src-sel = <0>; /* Function constant */
- qcom,invert = <1>;
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- /* VADC channel 21 */
- qcom,mode = <4>;
- qcom,ain-route = <1>; /* AMUX 6 */
- qcom,master-en = <1>;
- qcom,src-sel = <0>; /* Function constant */
- qcom,invert = <1>;
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v1.dtsi b/arch/arm/boot/dts/msm9625-v1.dtsi
index ad95601..de88ff1 100644
--- a/arch/arm/boot/dts/msm9625-v1.dtsi
+++ b/arch/arm/boot/dts/msm9625-v1.dtsi
@@ -37,6 +37,10 @@
};
};
+&hsic_host {
+ qcom,disable-park-mode;
+};
+
&ipa_hw {
qcom,ipa-hw-ver = <1>; /* IPA h-w revision */
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1-mtp.dts b/arch/arm/boot/dts/msm9625-v2-1-mtp.dts
deleted file mode 100644
index 1e0f3c0..0000000
--- a/arch/arm/boot/dts/msm9625-v2-1-mtp.dts
+++ /dev/null
@@ -1,99 +0,0 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-/dts-v1/;
-
-/include/ "msm9625-v2-1.dtsi"
-
-/ {
- model = "Qualcomm MSM 9625V2.1 MTP";
- compatible = "qcom,msm9625-mtp", "qcom,msm9625", "qcom,mtp";
- qcom,msm-id = <134 7 0x20001>, <152 7 0x20001>, <149 7 0x20001>,
- <150 7 0x20001>, <151 7 0x20001>, <148 7 0x20001>,
- <173 7 0x20001>, <174 7 0x20001>, <175 7 0x20001>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,ar6004-vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
-};
diff --git a/arch/arm/boot/dts/msm9625-v2-cdp.dts b/arch/arm/boot/dts/msm9625-v2-cdp.dts
index 660bdbd..9fbe5ec 100644
--- a/arch/arm/boot/dts/msm9625-v2-cdp.dts
+++ b/arch/arm/boot/dts/msm9625-v2-cdp.dts
@@ -13,8 +13,7 @@
/dts-v1/;
/include/ "msm9625-v2.dtsi"
-/include/ "msm9625-display.dtsi"
-/include/ "qpic-panel-ili-qvga.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
model = "Qualcomm MSM 9625V2 CDP";
@@ -22,88 +21,4 @@
qcom,msm-id = <134 1 0x20000>, <152 1 0x20000>, <149 1 0x20000>,
<150 1 0x20000>, <151 1 0x20000>, <148 1 0x20000>,
<173 1 0x20000>, <174 1 0x20000>, <175 1 0x20000>;
-
- i2c@f9925000 {
- charger@57 {
- compatible = "summit,smb137c";
- reg = <0x57>;
- summit,chg-current-ma = <1500>;
- summit,term-current-ma = <50>;
- summit,pre-chg-current-ma = <100>;
- summit,float-voltage-mv = <4200>;
- summit,thresh-voltage-mv = <3000>;
- summit,recharge-thresh-mv = <75>;
- summit,system-voltage-mv = <4250>;
- summit,charging-timeout = <382>;
- summit,pre-charge-timeout = <48>;
- summit,therm-current-ua = <10>;
- summit,temperature-min = <4>; /* 0 C */
- summit,temperature-max = <3>; /* 45 C */
- };
- };
-
- wlan0: qca,wlan {
- cell-index = <0>;
- compatible = "qca,ar6004-hsic";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-
- qca,wlan_ar6003 {
- cell-index = <0>;
- compatible = "qca,ar6003-sdio";
- qca,chip-pwd-l-gpios = <&msmgpio 62 0>;
- qca,pm-enable-gpios = <&pm8019_gpios 3 0x0>;
- qca,vdd-io-supply = <&pm8019_l11>;
- };
-};
-
-/* PM8019 GPIO and MPP configuration */
-&pm8019_gpios {
- gpio@c000 { /* GPIO 1 */
- };
-
- gpio@c100 { /* GPIO 2 */
- };
-
- gpio@c200 { /* GPIO 3 */
- };
-
- gpio@c300 { /* GPIO 4 */
- /* ext_2p95v regulator enable config */
- qcom,mode = <1>; /* Digital output */
- qcom,output-type = <0>; /* CMOS */
- qcom,invert = <0>; /* Output low */
- qcom,out-strength = <1>; /* Low */
- qcom,vin-sel = <2>; /* PM8019 L11 - 1.8V */
- qcom,src-sel = <0>; /* Constant */
- qcom,master-en = <1>; /* Enable GPIO */
- };
-
- gpio@c400 { /* GPIO 5 */
- };
-
- gpio@c500 { /* GPIO 6 */
- };
-};
-
-&pm8019_mpps {
- mpp@a000 { /* MPP 1 */
- };
-
- mpp@a100 { /* MPP 2 */
- };
-
- mpp@a200 { /* MPP 3 */
- };
-
- mpp@a300 { /* MPP 4 */
- };
-
- mpp@a400 { /* MPP 5 */
- };
-
- mpp@a500 { /* MPP 6 */
- };
};
diff --git a/arch/arm/boot/dts/msm9625-v2-mtp.dts b/arch/arm/boot/dts/msm9625-v2-mtp.dts
index c9e54be..5324e2c 100644
--- a/arch/arm/boot/dts/msm9625-v2-mtp.dts
+++ b/arch/arm/boot/dts/msm9625-v2-mtp.dts
@@ -13,6 +13,7 @@
/dts-v1/;
/include/ "msm9625-v2.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
model = "Qualcomm MSM 9625V2 MTP";
diff --git a/arch/arm/boot/dts/msmzinc-sim.dts b/arch/arm/boot/dts/msm9625-v2.1-cdp.dts
similarity index 61%
copy from arch/arm/boot/dts/msmzinc-sim.dts
copy to arch/arm/boot/dts/msm9625-v2.1-cdp.dts
index e410344..b643593 100644
--- a/arch/arm/boot/dts/msmzinc-sim.dts
+++ b/arch/arm/boot/dts/msm9625-v2.1-cdp.dts
@@ -12,18 +12,13 @@
/dts-v1/;
-/include/ "msmzinc.dtsi"
+/include/ "msm9625-v2.1.dtsi"
+/include/ "msm9625-cdp.dtsi"
/ {
- model = "Qualcomm MSM ZINC Simulator";
- compatible = "qcom,msmzinc-sim", "qcom,msmzinc", "qcom,sim";
- qcom,msm-id = <178 0 0>;
-
- aliases {
- serial0 = &uart0;
- };
-
- uart0: serial@f991f000 {
- status = "ok";
- };
+ model = "Qualcomm MSM 9625V2.1 CDP";
+ compatible = "qcom,msm9625-cdp", "qcom,msm9625", "qcom,cdp";
+ qcom,msm-id = <134 1 0x20001>, <152 1 0x20001>, <149 1 0x20001>,
+ <150 1 0x20001>, <151 1 0x20001>, <148 1 0x20001>,
+ <173 1 0x20001>, <174 1 0x20001>, <175 1 0x20001>;
};
diff --git a/arch/arm/boot/dts/msmzinc-sim.dts b/arch/arm/boot/dts/msm9625-v2.1-mtp.dts
similarity index 61%
rename from arch/arm/boot/dts/msmzinc-sim.dts
rename to arch/arm/boot/dts/msm9625-v2.1-mtp.dts
index e410344..8bbcc0d 100644
--- a/arch/arm/boot/dts/msmzinc-sim.dts
+++ b/arch/arm/boot/dts/msm9625-v2.1-mtp.dts
@@ -12,18 +12,13 @@
/dts-v1/;
-/include/ "msmzinc.dtsi"
+/include/ "msm9625-v2.1.dtsi"
+/include/ "msm9625-mtp.dtsi"
/ {
- model = "Qualcomm MSM ZINC Simulator";
- compatible = "qcom,msmzinc-sim", "qcom,msmzinc", "qcom,sim";
- qcom,msm-id = <178 0 0>;
-
- aliases {
- serial0 = &uart0;
- };
-
- uart0: serial@f991f000 {
- status = "ok";
- };
+ model = "Qualcomm MSM 9625V2.1 MTP";
+ compatible = "qcom,msm9625-mtp", "qcom,msm9625", "qcom,mtp";
+ qcom,msm-id = <134 7 0x20001>, <152 7 0x20001>, <149 7 0x20001>,
+ <150 7 0x20001>, <151 7 0x20001>, <148 7 0x20001>,
+ <173 7 0x20001>, <174 7 0x20001>, <175 7 0x20001>;
};
diff --git a/arch/arm/boot/dts/msm9625-v2-1.dtsi b/arch/arm/boot/dts/msm9625-v2.1.dtsi
similarity index 100%
rename from arch/arm/boot/dts/msm9625-v2-1.dtsi
rename to arch/arm/boot/dts/msm9625-v2.1.dtsi
diff --git a/arch/arm/boot/dts/msm9625-v2.dtsi b/arch/arm/boot/dts/msm9625-v2.dtsi
index 3ce6844..81d8e00 100644
--- a/arch/arm/boot/dts/msm9625-v2.dtsi
+++ b/arch/arm/boot/dts/msm9625-v2.dtsi
@@ -35,6 +35,10 @@
qcom,ipa-hw-ver = <2>; /* IPA h-w revision */
};
+&hsic_host {
+ qcom,disable-park-mode;
+};
+
&sfpb_spinlock {
status = "disable";
};
diff --git a/arch/arm/boot/dts/msm9625.dtsi b/arch/arm/boot/dts/msm9625.dtsi
index 16fa1da..abfd7ec 100644
--- a/arch/arm/boot/dts/msm9625.dtsi
+++ b/arch/arm/boot/dts/msm9625.dtsi
@@ -394,7 +394,7 @@
qcom,pad-pull-on = <0x0 0x3 0x3>;
qcom,pad-pull-off = <0x0 0x3 0x3>;
- qcom,pad-drv-on = <0x7 0x4 0x4>;
+ qcom,pad-drv-on = <0x4 0x4 0x4>;
qcom,pad-drv-off = <0x0 0x0 0x0>;
qcom,clk-rates = <400000 25000000 50000000 100000000 200000000>;
@@ -763,9 +763,9 @@
qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_1_out 0 0>;
};
- qcom,smem@fa00000 {
+ qcom,smem@0 {
compatible = "qcom,smem";
- reg = <0xfa00000 0x200000>,
+ reg = <0x0 0x100000>,
<0xf9011000 0x1000>,
<0xfc428000 0x4000>;
reg-names = "smem", "irq-reg-base", "aux-mem1";
diff --git a/arch/arm/boot/dts/msmkrypton.dtsi b/arch/arm/boot/dts/msmkrypton.dtsi
index db61dab..3f51659 100644
--- a/arch/arm/boot/dts/msmkrypton.dtsi
+++ b/arch/arm/boot/dts/msmkrypton.dtsi
@@ -37,12 +37,70 @@
qcom,direct-connect-irqs = <8>;
};
- timer: msm-qtimer@f9021000 {
- compatible = "arm,armv7-timer";
- reg = <0xf9021000 0x1000>;
- interrupts = <0 7 0>;
- irq-is-not-percpu;
+ timer@f9020000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "arm,armv7-timer-mem";
+ reg = <0xf9020000 0x1000>;
clock-frequency = <19200000>;
+
+ frame@f9021000 {
+ frame-number = <0>;
+ interrupts = <0 7 0x4>,
+ <0 6 0x4>;
+ reg = <0xf9021000 0x1000>,
+ <0xf9022000 0x1000>;
+ };
+
+ frame@f9023000 {
+ frame-number = <1>;
+ interrupts = <0 8 0x4>;
+ reg = <0xf9023000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9024000 {
+ frame-number = <2>;
+ interrupts = <0 9 0x4>;
+ reg = <0xf9024000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9025000 {
+ frame-number = <3>;
+ interrupts = <0 10 0x4>;
+ reg = <0xf9025000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9026000 {
+ frame-number = <4>;
+ interrupts = <0 11 0x4>;
+ reg = <0xf9026000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9027000 {
+ frame-number = <5>;
+ interrupts = <0 12 0x4>;
+ reg = <0xf9027000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9028000 {
+ frame-number = <6>;
+ interrupts = <0 13 0x4>;
+ reg = <0xf9028000 0x1000>;
+ status = "disabled";
+ };
+
+ frame@f9029000 {
+ frame-number = <7>;
+ interrupts = <0 14 0x4>;
+ reg = <0xf9029000 0x1000>;
+ status = "disabled";
+ };
};
uartdm3: serial@f991f000 {
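For reference, the "arm,armv7-timer-mem" node added above follows the generic memory-mapped architected timer binding: the parent node carries the control frame at 0xf9020000 plus clock-frequency, and each child frame@ node describes one CNTBase frame, with "frame-number" selecting the frame, "reg" listing its register view(s), and "interrupts" listing the physical and, where present, virtual timer interrupt. Only frame 0 is enabled here; frames 1 through 7 are declared but left status = "disabled".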
diff --git a/arch/arm/configs/msmzinc_defconfig b/arch/arm/configs/apq8084_defconfig
similarity index 96%
rename from arch/arm/configs/msmzinc_defconfig
rename to arch/arm/configs/apq8084_defconfig
index d0ea87a..89d72f2 100644
--- a/arch/arm/configs/msmzinc_defconfig
+++ b/arch/arm/configs/apq8084_defconfig
@@ -34,7 +34,7 @@
CONFIG_EFI_PARTITION=y
CONFIG_IOSCHED_TEST=y
CONFIG_ARCH_MSM=y
-CONFIG_ARCH_MSMZINC=y
+CONFIG_ARCH_APQ8084=y
CONFIG_MSM_KRAIT_TBB_ABORT_HANDLER=y
CONFIG_MSM_RPM_SMD=y
# CONFIG_MSM_STACKED_MEMORY is not set
@@ -284,6 +284,7 @@
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
CONFIG_FB_MSM_MDSS=y
@@ -313,6 +314,19 @@
CONFIG_USB_STORAGE_KARMA=y
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_UNSAFE_RESUME=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_BLOCK_MINORS=32
+# CONFIG_MMC_BLOCK_BOUNCE is not set
+CONFIG_MMC_TEST=m
+CONFIG_MMC_BLOCK_TEST=y
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_MSM=y
+CONFIG_MMC_SDHCI_MSM=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_BACKLIGHT=y
diff --git a/arch/arm/configs/msmzinc_defconfig b/arch/arm/configs/msm8610-perf_defconfig
similarity index 67%
copy from arch/arm/configs/msmzinc_defconfig
copy to arch/arm/configs/msm8610-perf_defconfig
index d0ea87a..03ac944 100644
--- a/arch/arm/configs/msmzinc_defconfig
+++ b/arch/arm/configs/msm8610-perf_defconfig
@@ -1,5 +1,6 @@
# CONFIG_ARM_PATCH_PHYS_VIRT is not set
CONFIG_EXPERIMENTAL=y
+CONFIG_LOCALVERSION="-perf"
CONFIG_SYSVIPC=y
CONFIG_RCU_FAST_NO_HZ=y
CONFIG_IKCONFIG=y
@@ -14,6 +15,7 @@
CONFIG_NAMESPACES=y
# CONFIG_UTS_NS is not set
# CONFIG_IPC_NS is not set
+# CONFIG_USER_NS is not set
# CONFIG_PID_NS is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
@@ -22,85 +24,69 @@
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_PANIC_TIMEOUT=5
CONFIG_KALLSYMS_ALL=y
-CONFIG_ASHMEM=y
CONFIG_EMBEDDED=y
CONFIG_PROFILING=y
-CONFIG_KPROBES=y
+CONFIG_OPROFILE=m
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
-CONFIG_IOSCHED_TEST=y
CONFIG_ARCH_MSM=y
-CONFIG_ARCH_MSMZINC=y
-CONFIG_MSM_KRAIT_TBB_ABORT_HANDLER=y
-CONFIG_MSM_RPM_SMD=y
+CONFIG_ARCH_MSM8610=y
+CONFIG_ARCH_MSM8226=y
# CONFIG_MSM_STACKED_MEMORY is not set
CONFIG_CPU_HAS_L2_PMU=y
# CONFIG_MSM_FIQ_SUPPORT is not set
# CONFIG_MSM_PROC_COMM is not set
CONFIG_MSM_SMD=y
CONFIG_MSM_SMD_PKG4=y
+CONFIG_MSM_BAM_DMUX=y
+CONFIG_MSM_SMP2P=y
+CONFIG_MSM_SMP2P_TEST=y
CONFIG_MSM_IPC_LOGGING=y
CONFIG_MSM_IPC_ROUTER=y
CONFIG_MSM_IPC_ROUTER_SMD_XPRT=y
-CONFIG_MSM_IPC_ROUTER_SECURITY=y
-# CONFIG_MSM_HW3D is not set
-CONFIG_MSM_RPM_REGULATOR_SMD=y
+CONFIG_MSM_QMI_INTERFACE=y
CONFIG_MSM_SUBSYSTEM_RESTART=y
CONFIG_MSM_SYSMON_COMM=y
+CONFIG_MSM_PIL_LPASS_QDSP6V5=y
+CONFIG_MSM_PIL_MSS_QDSP6V5=y
+CONFIG_MSM_PIL_VENUS=y
+CONFIG_MSM_PIL_PRONTO=y
+CONFIG_MSM_TZ_LOG=y
CONFIG_MSM_DIRECT_SCLK_ACCESS=y
CONFIG_MSM_WATCHDOG_V2=y
CONFIG_MSM_MEMORY_DUMP=y
CONFIG_MSM_DLOAD_MODE=y
-CONFIG_MSM_RUN_QUEUE_STATS=y
-CONFIG_MSM_SPM_V2=y
-CONFIG_MSM_L2_SPM=y
-CONFIG_MSM_MULTIMEDIA_USE_ION=y
+CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
+CONFIG_MSM_ADSP_LOADER=m
CONFIG_MSM_OCMEM=y
CONFIG_MSM_OCMEM_LOCAL_POWER_CTRL=y
CONFIG_MSM_OCMEM_DEBUG=y
CONFIG_MSM_OCMEM_NONSECURE=y
+CONFIG_MSM_OCMEM_POWER_DISABLE=y
CONFIG_SENSORS_ADSP=y
CONFIG_MSM_RTB=y
CONFIG_MSM_RTB_SEPARATE_CPUS=y
-CONFIG_MSM_CACHE_ERP=y
-CONFIG_MSM_L1_ERR_PANIC=y
-CONFIG_MSM_L1_RECOV_ERR_PANIC=y
-CONFIG_MSM_L1_ERR_LOG=y
-CONFIG_MSM_L2_ERP_PRINT_ACCESS_ERRORS=y
-CONFIG_MSM_L2_ERP_PORT_PANIC=y
-CONFIG_MSM_L2_ERP_1BIT_PANIC=y
-CONFIG_MSM_L2_ERP_2BIT_PANIC=y
-CONFIG_MSM_CACHE_DUMP=y
-CONFIG_MSM_CACHE_DUMP_ON_PANIC=y
-CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
-CONFIG_ARM_LPAE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_SMP=y
-# CONFIG_SMP_ON_UP is not set
CONFIG_SCHED_MC=y
CONFIG_ARM_ARCH_TIMER=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
-CONFIG_VMALLOC_RESERVE=0x19000000
-CONFIG_CC_STACKPROTECTOR=y
-CONFIG_CP_ACCESS=y
CONFIG_USE_OF=y
-CONFIG_CPU_FREQ=y
-CONFIG_CPU_FREQ_GOV_POWERSAVE=y
-CONFIG_CPU_FREQ_GOV_USERSPACE=y
-CONFIG_CPU_FREQ_GOV_ONDEMAND=y
-CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
CONFIG_NEON=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-CONFIG_WAKELOCK=y
+CONFIG_PM_AUTOSLEEP=y
+CONFIG_PM_WAKELOCKS=y
+CONFIG_PM_WAKELOCKS_LIMIT=0
+# CONFIG_PM_WAKELOCKS_GC is not set
CONFIG_PM_RUNTIME=y
CONFIG_NET=y
CONFIG_PACKET=y
@@ -180,6 +166,10 @@
CONFIG_IP_NF_MATCH_TTL=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_NF_NAT=y
+CONFIG_IP_NF_TARGET_MASQUERADE=y
+CONFIG_IP_NF_TARGET_NETMAP=y
+CONFIG_IP_NF_TARGET_REDIRECT=y
CONFIG_IP_NF_MANGLE=y
CONFIG_IP_NF_RAW=y
CONFIG_IP_NF_ARPTABLES=y
@@ -191,151 +181,163 @@
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
CONFIG_IP6_NF_RAW=y
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_BRIDGE=y
CONFIG_NET_SCHED=y
CONFIG_NET_SCH_HTB=y
CONFIG_NET_SCH_PRIO=y
CONFIG_NET_CLS_FW=y
-CONFIG_NET_CLS_U32=y
-CONFIG_CLS_U32_MARK=y
-CONFIG_NET_CLS_FLOW=y
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_CMP=y
-CONFIG_NET_EMATCH_NBYTE=y
-CONFIG_NET_EMATCH_U32=y
-CONFIG_NET_EMATCH_META=y
-CONFIG_NET_EMATCH_TEXT=y
-CONFIG_NET_CLS_ACT=y
+CONFIG_BT=y
+CONFIG_BT_RFCOMM=y
+CONFIG_BT_RFCOMM_TTY=y
+CONFIG_BT_BNEP=y
+CONFIG_BT_BNEP_MC_FILTER=y
+CONFIG_BT_BNEP_PROTO_FILTER=y
+CONFIG_BT_HIDP=y
+CONFIG_BT_HCISMD=y
CONFIG_CFG80211=y
-CONFIG_RFKILL=y
-CONFIG_GENLOCK=y
-CONFIG_GENLOCK_MISCDEVICE=y
-CONFIG_SYNC=y
-CONFIG_SW_SYNC=y
+CONFIG_NL80211_TESTMODE=y
CONFIG_CMA=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
-CONFIG_HAPTIC_ISA1200=y
-CONFIG_USB_HSIC_SMSC_HUB=y
-CONFIG_TI_DRV2667=y
-CONFIG_SCSI=y
-CONFIG_SCSI_TGT=y
-CONFIG_BLK_DEV_SD=y
-CONFIG_CHR_DEV_SG=y
-CONFIG_CHR_DEV_SCH=y
-CONFIG_SCSI_MULTI_LUN=y
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_LOGGING=y
-CONFIG_SCSI_SCAN_ASYNC=y
CONFIG_MD=y
CONFIG_BLK_DEV_DM=y
CONFIG_DM_CRYPT=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
-CONFIG_TUN=y
-CONFIG_KS8851=m
# CONFIG_MSM_RMNET is not set
-CONFIG_SLIP=y
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_MODE_SLIP6=y
-CONFIG_USB_USBNET=y
+CONFIG_MSM_RMNET_BAM=y
+CONFIG_WCNSS_CORE=y
+CONFIG_WCNSS_CORE_PRONTO=y
+CONFIG_WCNSS_MEM_PRE_ALLOC=y
CONFIG_INPUT_EVDEV=y
CONFIG_INPUT_EVBUG=m
CONFIG_KEYBOARD_GPIO=y
-CONFIG_INPUT_JOYSTICK=y
CONFIG_INPUT_TOUCHSCREEN=y
-CONFIG_TOUCHSCREEN_ATMEL_MXT=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_RMI4_DEV=y
+CONFIG_TOUCHSCREEN_SYNAPTICS_DSX_FW_UPDATE=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=m
CONFIG_SERIAL_MSM_HSL=y
CONFIG_SERIAL_MSM_HSL_CONSOLE=y
+CONFIG_DIAG_CHAR=y
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_MSM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_QUP=y
CONFIG_SPI=y
+CONFIG_SPI_QUP=y
CONFIG_SPI_SPIDEV=m
CONFIG_SPMI=y
+CONFIG_MSM_BUS_SCALING=y
CONFIG_SPMI_MSM_PMIC_ARB=y
CONFIG_MSM_QPNP_INT=y
+CONFIG_SLIMBUS_MSM_NGD=y
CONFIG_DEBUG_GPIO=y
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_QPNP_PIN=y
-CONFIG_GPIO_QPNP_PIN_DEBUG=y
CONFIG_POWER_SUPPLY=y
-CONFIG_SMB350_CHARGER=y
-CONFIG_BATTERY_BQ28400=y
CONFIG_QPNP_CHARGER=y
-CONFIG_BATTERY_BCL=y
-CONFIG_SENSORS_EPM_ADC=y
+CONFIG_QPNP_BMS=y
CONFIG_SENSORS_QPNP_ADC_VOLTAGE=y
CONFIG_SENSORS_QPNP_ADC_CURRENT=y
CONFIG_THERMAL=y
CONFIG_THERMAL_TSENS8974=y
CONFIG_THERMAL_MONITOR=y
-CONFIG_THERMAL_QPNP=y
CONFIG_THERMAL_QPNP_ADC_TM=y
-CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_WCD9306_CODEC=y
CONFIG_REGULATOR_STUB=y
CONFIG_REGULATOR_QPNP=y
CONFIG_MEDIA_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
+CONFIG_VIDEO_DEV=y
+CONFIG_VIDEO_V4L2_SUBDEV_API=y
+# CONFIG_MSM_CAMERA is not set
+CONFIG_OV8825=y
+CONFIG_MSM_CAMERA_SENSOR=y
+CONFIG_MSM_CPP=y
+CONFIG_MSM_CCI=y
+CONFIG_MSM_CSI30_HEADER=y
+CONFIG_MSM_CSIPHY=y
+CONFIG_MSM_CSID=y
+CONFIG_MSM_ISPIF=y
+CONFIG_MSMB_CAMERA=y
+CONFIG_OV9724=y
+CONFIG_MSMB_JPEG=y
+CONFIG_SWITCH=y
+CONFIG_MSM_WFD=y
+CONFIG_MSM_VIDC_V4L2=y
+CONFIG_VIDEOBUF2_MSM_MEM=y
+CONFIG_V4L_PLATFORM_DRIVERS=y
+CONFIG_RADIO_IRIS=y
+CONFIG_RADIO_IRIS_TRANSPORT=m
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
CONFIG_FB=y
+CONFIG_FB_VIRTUAL=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
CONFIG_FB_MSM_MDSS=y
CONFIG_FB_MSM_MDSS_WRITEBACK=y
-CONFIG_FB_MSM_MDSS_HDMI_PANEL=y
-CONFIG_FB_MSM_MDSS_HDMI_MHL_SII8334=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_GENERIC is not set
-CONFIG_USB=y
-CONFIG_USB_SUSPEND=y
-CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_MSM=y
-CONFIG_USB_EHCI_MSM_HSIC=y
-CONFIG_USB_ACM=y
-CONFIG_USB_STORAGE=y
-CONFIG_USB_STORAGE_DATAFAB=y
-CONFIG_USB_STORAGE_FREECOM=y
-CONFIG_USB_STORAGE_ISD200=y
-CONFIG_USB_STORAGE_USBAT=y
-CONFIG_USB_STORAGE_SDDR09=y
-CONFIG_USB_STORAGE_SDDR55=y
-CONFIG_USB_STORAGE_JUMPSHOT=y
-CONFIG_USB_STORAGE_ALAUDA=y
-CONFIG_USB_STORAGE_ONETOUCH=y
-CONFIG_USB_STORAGE_KARMA=y
-CONFIG_USB_STORAGE_CYPRESS_ATACB=y
-CONFIG_USB_STORAGE_ENE_UB6250=y
+CONFIG_SOUND=y
+CONFIG_SND=y
+CONFIG_SND_SOC=y
+CONFIG_SND_SOC_MSM8226=y
+CONFIG_SND_SOC_MSM8X10=y
+CONFIG_USB_GADGET=y
+CONFIG_USB_GADGET_DEBUG_FILES=y
+CONFIG_USB_GADGET_DEBUG_FS=y
+CONFIG_USB_CI13XXX_MSM=y
+CONFIG_USB_G_ANDROID=y
+CONFIG_MMC=y
+CONFIG_MMC_PERF_PROFILING=y
+CONFIG_MMC_UNSAFE_RESUME=y
+CONFIG_MMC_CLKGATE=y
+CONFIG_MMC_EMBEDDED_SDIO=y
+CONFIG_MMC_PARANOID_SD_INIT=y
+CONFIG_MMC_BLOCK_MINORS=32
+CONFIG_MMC_TEST=m
+CONFIG_MMC_SDHCI=y
+CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_MSM=y
+CONFIG_MMC_SDHCI_MSM=y
+CONFIG_MMC_MSM_SPS_SUPPORT=y
CONFIG_LEDS_QPNP=y
CONFIG_LEDS_TRIGGERS=y
-CONFIG_LEDS_TRIGGER_BACKLIGHT=y
-CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
-CONFIG_SWITCH=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_DRV_MSM is not set
CONFIG_RTC_DRV_QPNP=y
CONFIG_STAGING=y
CONFIG_ANDROID=y
CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ASHMEM=y
CONFIG_ANDROID_LOGGER=y
+CONFIG_ANDROID_RAM_CONSOLE=y
CONFIG_ANDROID_TIMED_GPIO=y
CONFIG_ANDROID_LOW_MEMORY_KILLER=y
-CONFIG_MSM_SSBI=y
CONFIG_SPS=y
-CONFIG_SPS_SUPPORT_BAMDMA=y
+CONFIG_USB_BAM=y
CONFIG_SPS_SUPPORT_NDP_BAM=y
CONFIG_QPNP_PWM=y
CONFIG_QPNP_POWER_ON=y
-CONFIG_QPNP_CLKDIV=y
CONFIG_MSM_IOMMU=y
-CONFIG_IOMMU_PGTABLES_L2=y
+CONFIG_CORESIGHT=y
+CONFIG_CORESIGHT_TMC=y
+CONFIG_CORESIGHT_TPIU=y
+CONFIG_CORESIGHT_FUNNEL=y
+CONFIG_CORESIGHT_REPLICATOR=y
+CONFIG_CORESIGHT_STM=y
+CONFIG_CORESIGHT_ETM=y
+CONFIG_CORESIGHT_EVENT=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
@@ -344,42 +346,23 @@
CONFIG_FUSE_FS=y
CONFIG_VFAT_FS=y
CONFIG_TMPFS=y
-CONFIG_PSTORE=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
-CONFIG_LOCKUP_DETECTOR=y
-# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y
-CONFIG_DEBUG_KMEMLEAK=y
-CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
-CONFIG_DEBUG_SPINLOCK=y
-CONFIG_DEBUG_MUTEXES=y
-CONFIG_DEBUG_ATOMIC_SLEEP=y
-CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_MEMORY_INIT=y
-CONFIG_DEBUG_LIST=y
-CONFIG_FAULT_INJECTION=y
-CONFIG_FAIL_PAGE_ALLOC=y
-CONFIG_FAULT_INJECTION_DEBUG_FS=y
-CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
-CONFIG_DEBUG_PAGEALLOC=y
-CONFIG_CPU_FREQ_SWITCH_PROFILER=y
+CONFIG_ENABLE_DEFAULT_TRACERS=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DEBUG_USER=y
-CONFIG_DEBUG_LL=y
-CONFIG_EARLY_PRINTK=y
-CONFIG_PID_IN_CONTEXTIDR=y
CONFIG_KEYS=y
CONFIG_CRYPTO_MD4=y
-CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_ARC4=y
CONFIG_CRYPTO_TWOFISH=y
-CONFIG_CRYPTO_DEV_QCRYPTO=m
-CONFIG_CRYPTO_DEV_QCE=m
-CONFIG_CRYPTO_DEV_QCEDEV=m
+# CONFIG_CRYPTO_HW is not set
CONFIG_CRC_CCITT=y
+CONFIG_QPNP_VIBRATOR=y
+CONFIG_QSEECOM=y
\ No newline at end of file
diff --git a/arch/arm/configs/msm8974-perf_defconfig b/arch/arm/configs/msm8974-perf_defconfig
index f67cb0d..59cafd1 100644
--- a/arch/arm/configs/msm8974-perf_defconfig
+++ b/arch/arm/configs/msm8974-perf_defconfig
@@ -98,6 +98,7 @@
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
@@ -238,6 +239,7 @@
CONFIG_CMA=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
+CONFIG_TSPP=m
CONFIG_HAPTIC_ISA1200=y
CONFIG_QSEECOM=y
CONFIG_QPNP_MISC=y
@@ -320,6 +322,7 @@
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_DEV=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_DVB_CORE=m
# CONFIG_MSM_CAMERA is not set
CONFIG_MT9M114=y
CONFIG_OV2720=y
@@ -337,6 +340,8 @@
CONFIG_MSMB_JPEG=y
CONFIG_MSM_VIDC_V4L2=y
CONFIG_MSM_WFD=y
+CONFIG_DVB_MPQ=m
+CONFIG_DVB_MPQ_DEMUX=m
CONFIG_VIDEOBUF2_MSM_MEM=y
CONFIG_USB_VIDEO_CLASS=y
CONFIG_V4L_PLATFORM_DRIVERS=y
@@ -345,6 +350,7 @@
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
+CONFIG_KGSL_PER_PROCESS_PAGE_TABLE=y
CONFIG_FB=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
@@ -386,6 +392,8 @@
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_QCOM_DIAG_BRIDGE=y
+CONFIG_USB_QCOM_KS_BRIDGE=y
CONFIG_USB_GADGET=y
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_CI13XXX_MSM=y
diff --git a/arch/arm/configs/msm8974_defconfig b/arch/arm/configs/msm8974_defconfig
index b5e67fd..fd8a639 100644
--- a/arch/arm/configs/msm8974_defconfig
+++ b/arch/arm/configs/msm8974_defconfig
@@ -85,6 +85,7 @@
CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
CONFIG_MSM_UARTDM_Core_v14=y
CONFIG_MSM_BOOT_STATS=y
+CONFIG_MSM_XPU_ERR_FATAL=y
CONFIG_STRICT_MEMORY_RWX=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
@@ -102,6 +103,7 @@
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_INTERACTIVE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_IDLE=y
CONFIG_VFP=y
@@ -354,6 +356,7 @@
CONFIG_ION=y
CONFIG_ION_MSM=y
CONFIG_MSM_KGSL=y
+CONFIG_KGSL_PER_PROCESS_PAGE_TABLE=y
CONFIG_FB=y
CONFIG_FB_MSM=y
# CONFIG_FB_MSM_BACKLIGHT is not set
@@ -395,6 +398,8 @@
CONFIG_USB_STORAGE_CYPRESS_ATACB=y
CONFIG_USB_STORAGE_ENE_UB6250=y
CONFIG_USB_EHSET_TEST_FIXTURE=y
+CONFIG_USB_QCOM_DIAG_BRIDGE=y
+CONFIG_USB_QCOM_KS_BRIDGE=y
CONFIG_USB_GADGET=y
CONFIG_USB_GADGET_DEBUG_FILES=y
CONFIG_USB_CI13XXX_MSM=y
diff --git a/arch/arm/configs/msm9625-perf_defconfig b/arch/arm/configs/msm9625-perf_defconfig
index 42acd99..ae73bad 100644
--- a/arch/arm/configs/msm9625-perf_defconfig
+++ b/arch/arm/configs/msm9625-perf_defconfig
@@ -230,9 +230,6 @@
CONFIG_REGULATOR_QPNP=y
CONFIG_ION=y
CONFIG_ION_MSM=y
-CONFIG_FB=y
-CONFIG_FB_MSM=y
-CONFIG_FB_MSM_QPIC_PANEL_DETECT=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
diff --git a/arch/arm/configs/msm9625_defconfig b/arch/arm/configs/msm9625_defconfig
index 041e89a..f7c3bff 100644
--- a/arch/arm/configs/msm9625_defconfig
+++ b/arch/arm/configs/msm9625_defconfig
@@ -231,9 +231,6 @@
CONFIG_REGULATOR_QPNP=y
CONFIG_ION=y
CONFIG_ION_MSM=y
-CONFIG_FB=y
-CONFIG_FB_MSM=y
-CONFIG_FB_MSM_QPIC_PANEL_DETECT=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
diff --git a/arch/arm/mach-msm/Kconfig b/arch/arm/mach-msm/Kconfig
index a6e6914..a70e6c6 100644
--- a/arch/arm/mach-msm/Kconfig
+++ b/arch/arm/mach-msm/Kconfig
@@ -286,8 +286,8 @@
select MSM_RPM_LOG
select ARCH_WANT_KMAP_ATOMIC_FLUSH
-config ARCH_MSMZINC
- bool "MSMZINC"
+config ARCH_APQ8084
+ bool "APQ8084"
select ARCH_MSM_KRAITMP
select GPIO_MSM_V3
select ARM_GIC
@@ -437,6 +437,7 @@
select CPU_FREQ
select CPU_FREQ_GOV_USERSPACE
select CPU_FREQ_GOV_ONDEMAND
+ select CPU_FREQ_GOV_POWERSAVE
select MSM_PIL
select MSM_RUN_QUEUE_STATS
select ARM_HAS_SG_CHAIN
@@ -1086,13 +1087,13 @@
default "0x80200000" if ARCH_MSM8960
default "0x80200000" if ARCH_MSM8930
default "0x00000000" if ARCH_MSM8974
- default "0x00000000" if ARCH_MSMZINC
+ default "0x00000000" if ARCH_APQ8084
default "0x00000000" if ARCH_MPQ8092
default "0x00000000" if ARCH_MSM8226
default "0x00000000" if ARCH_MSM8610
default "0x10000000" if ARCH_FSM9XXX
default "0x00200000" if ARCH_MSM9625
- default "0x00200000" if ARCH_MSMKRYPTON
+ default "0x00000000" if ARCH_MSMKRYPTON
default "0x00200000" if !MSM_STACKED_MEMORY
default "0x00000000" if ARCH_QSD8X50 && MSM_SOC_REV_A
default "0x20000000" if ARCH_QSD8X50
@@ -1239,13 +1240,21 @@
Say Y here if you want the debug print routines to direct
their output to the serial port on MPQ8092 devices.
- config DEBUG_MSMZINC_UART
- bool "Kernel low-level debugging messages via MSMZINC UART"
- depends on ARCH_MSMZINC
+ config DEBUG_APQ8084_UART
+ bool "Kernel low-level debugging messages via APQ8084 UART"
+ depends on ARCH_APQ8084
select MSM_HAS_DEBUG_UART_HS_V14
help
Say Y here if you want the debug print routines to direct
- their output to the serial port on MSMZINC devices.
+ their output to the serial port on APQ8084 devices.
+
+ config DEBUG_MSM9625_UART
+ bool "Kernel low-level debugging messages via MSM9625 UART"
+ depends on ARCH_MSM9625
+ select MSM_HAS_DEBUG_UART_HS_V14
+ help
+ Say Y here if you want the debug print routines to direct
+ their output to the serial port on MSM9625 devices.
endchoice
choice
diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index 7c78395..a45f5ec 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -120,7 +120,7 @@
ifndef CONFIG_ARCH_MSM9625
ifndef CONFIG_ARCH_MPQ8092
ifndef CONFIG_ARCH_MSM8610
-ifndef CONFIG_ARCH_MSMZINC
+ifndef CONFIG_ARCH_APQ8084
ifndef CONFIG_ARCH_MSMKRYPTON
obj-y += nand_partitions.o
endif
@@ -297,7 +297,7 @@
obj-$(CONFIG_MACH_MPQ8064_DTV) += board-8064-all.o board-8064-regulator.o
obj-$(CONFIG_ARCH_MSM9615) += board-9615.o devices-9615.o board-9615-regulator.o board-9615-gpiomux.o board-9615-storage.o board-9615-display.o
obj-$(CONFIG_ARCH_MSM9615) += clock-local.o clock-9615.o acpuclock-9615.o clock-rpm.o clock-pll.o
-obj-$(CONFIG_ARCH_MSMZINC) += board-zinc.o board-zinc-gpiomux.o
+obj-$(CONFIG_ARCH_APQ8084) += board-8084.o board-8084-gpiomux.o
obj-$(CONFIG_ARCH_MSM8974) += board-8974.o board-8974-gpiomux.o
obj-$(CONFIG_ARCH_MSM8974) += acpuclock-8974.o
obj-$(CONFIG_ARCH_MSM8974) += clock-local2.o clock-pll.o clock-8974.o clock-rpm.o clock-voter.o clock-mdss-8974.o
@@ -375,7 +375,7 @@
obj-$(CONFIG_ARCH_MPQ8092) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_ARCH_MSM8226) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_ARCH_MSM8610) += gpiomux-v2.o gpiomux.o
-obj-$(CONFIG_ARCH_MSMZINC) += gpiomux-v2.o gpiomux.o
+obj-$(CONFIG_ARCH_APQ8084) += gpiomux-v2.o gpiomux.o
obj-$(CONFIG_MSM_SLEEP_STATS_DEVICE) += idle_stats_device.o
obj-$(CONFIG_MSM_DCVS) += msm_dcvs_scm.o msm_dcvs.o msm_mpdecision.o
diff --git a/arch/arm/mach-msm/Makefile.boot b/arch/arm/mach-msm/Makefile.boot
index f20f6ae..d57709d 100644
--- a/arch/arm/mach-msm/Makefile.boot
+++ b/arch/arm/mach-msm/Makefile.boot
@@ -57,13 +57,14 @@
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-fluid.dtb
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-liquid.dtb
dtb-$(CONFIG_ARCH_MSM8974) += msm8974-v2-mtp.dtb
+ dtb-$(CONFIG_ARCH_MSM8974) += apq8074-v2-liquid.dtb
-# MSMZINC
- zreladdr-$(CONFIG_ARCH_MSMZINC) := 0x00008000
- dtb-$(CONFIG_ARCH_MSMZINC) += msmzinc-sim.dtb
+# APQ8084
+ zreladdr-$(CONFIG_ARCH_APQ8084) := 0x00008000
+ dtb-$(CONFIG_ARCH_APQ8084) += apq8084-sim.dtb
# MSMKRYPTON
- zreladdr-$(CONFIG_ARCH_MSMKRYPTON) := 0x00208000
+ zreladdr-$(CONFIG_ARCH_MSMKRYPTON) := 0x00008000
dtb-$(CONFIG_ARCH_MSMKRYPTON) += msmkrypton-sim.dtb
# MSM9615
@@ -76,8 +77,8 @@
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v1-rumi.dtb
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-cdp.dtb
dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-mtp.dtb
- dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-1-mtp.dtb
- dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2-1-cdp.dtb
+ dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2.1-mtp.dtb
+ dtb-$(CONFIG_ARCH_MSM9625) += msm9625-v2.1-cdp.dtb
# MSM8226
zreladdr-$(CONFIG_ARCH_MSM8226) := 0x00008000
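For reference, zreladdr is the physical address the decompressed kernel is loaded to, conventionally the start of DDR plus the ARM text offset of 0x8000. Under that assumption the values above line up with the CONFIG_PHYS_OFFSET defaults changed in the Kconfig hunk earlier in this patch:

    MSM9625:     0x00200000 + 0x8000 = 0x00208000
    APQ8084:     0x00000000 + 0x8000 = 0x00008000
    MSMKRYPTON:  0x00000000 + 0x8000 = 0x00008000 (previously 0x00200000 + 0x8000 = 0x00208000)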
diff --git a/arch/arm/mach-msm/acpuclock-8226.c b/arch/arm/mach-msm/acpuclock-8226.c
index 799d629..ee4bd7c 100644
--- a/arch/arm/mach-msm/acpuclock-8226.c
+++ b/arch/arm/mach-msm/acpuclock-8226.c
@@ -26,12 +26,13 @@
#include <mach/msm_bus.h>
#include <mach/msm_bus_board.h>
#include <mach/rpm-regulator-smd.h>
+#include <mach/socinfo.h>
#include "acpuclock-cortex.h"
#define RCG_CONFIG_UPDATE_BIT BIT(0)
-static struct msm_bus_paths bw_level_tbl[] = {
+static struct msm_bus_paths bw_level_tbl_8226[] = {
[0] = BW_MBPS(152), /* At least 19 MHz on bus. */
[1] = BW_MBPS(300), /* At least 37.5 MHz on bus. */
[2] = BW_MBPS(400), /* At least 50 MHz on bus. */
@@ -42,9 +43,18 @@
[7] = BW_MBPS(4264), /* At least 533 MHz on bus. */
};
+static struct msm_bus_paths bw_level_tbl_8610[] = {
+ [0] = BW_MBPS(152), /* At least 19 MHz on bus. */
+ [1] = BW_MBPS(300), /* At least 37.5 MHz on bus. */
+ [2] = BW_MBPS(400), /* At least 50 MHz on bus. */
+ [3] = BW_MBPS(800), /* At least 100 MHz on bus. */
+ [4] = BW_MBPS(1600), /* At least 200 MHz on bus. */
+ [5] = BW_MBPS(2128), /* At least 266 MHz on bus. */
+};
+
static struct msm_bus_scale_pdata bus_client_pdata = {
- .usecase = bw_level_tbl,
- .num_usecases = ARRAY_SIZE(bw_level_tbl),
+ .usecase = bw_level_tbl_8226,
+ .num_usecases = ARRAY_SIZE(bw_level_tbl_8226),
.active_only = 1,
.name = "acpuclock",
};
@@ -54,23 +64,33 @@
* 2) Update bus bandwidth
* 3) Depending on Frodo version, may need minimum of LVL_NOM
*/
-static struct clkctl_acpu_speed acpu_freq_tbl[] = {
- { 0, 19200, CXO, 0, 0, CPR_CORNER_SVS, 1150000, 0 },
- { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 1150000, 4 },
- { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 1150000, 4 },
- { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 1150000, 6 },
- { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 1150000, 7 },
- { 0, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 1150000, 7 },
- { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 1150000, 7 },
+static struct clkctl_acpu_speed acpu_freq_tbl_8226[] = {
+ { 0, 19200, CXO, 0, 0, CPR_CORNER_SVS, 0, 0 },
+ { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 0, 4 },
+ { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 0, 4 },
+ { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 0, 6 },
+ { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 0, 7 },
+ { 0, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 7 },
+ { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 7 },
+ { 0 }
+};
+
+static struct clkctl_acpu_speed acpu_freq_tbl_8610[] = {
+ { 0, 19200, CXO, 0, 0, CPR_CORNER_SVS, 0, 0 },
+ { 1, 300000, PLL0, 4, 2, CPR_CORNER_SVS, 0, 3 },
+ { 1, 384000, ACPUPLL, 5, 0, CPR_CORNER_SVS, 0, 3 },
+ { 1, 600000, PLL0, 4, 0, CPR_CORNER_NORMAL, 0, 4 },
+ { 1, 787200, ACPUPLL, 5, 0, CPR_CORNER_NORMAL, 0, 4 },
+ { 0, 998400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 5 },
+ { 0, 1190400, ACPUPLL, 5, 0, CPR_CORNER_TURBO, 0, 5 },
{ 0 }
};
static struct acpuclk_drv_data drv_data = {
- .freq_tbl = acpu_freq_tbl,
+ .freq_tbl = acpu_freq_tbl_8226,
.current_speed = &(struct clkctl_acpu_speed){ 0 },
.bus_scale = &bus_client_pdata,
.vdd_max_cpu = CPR_CORNER_TURBO,
- .vdd_max_mem = 1150000,
.src_clocks = {
[PLL0].name = "gpll0",
[ACPUPLL].name = "a7sspll",
@@ -109,12 +129,6 @@
return PTR_ERR(drv_data.vdd_cpu);
}
- drv_data.vdd_mem = devm_regulator_get(&pdev->dev, "a7_mem");
- if (IS_ERR(drv_data.vdd_mem)) {
- dev_err(&pdev->dev, "regulator for %s get failed\n", "a7_mem");
- return PTR_ERR(drv_data.vdd_mem);
- }
-
for (i = 0; i < NUM_SRC; i++) {
if (!drv_data.src_clocks[i].name)
continue;
@@ -146,8 +160,18 @@
},
};
+void msm8610_acpu_init(void)
+{
+ drv_data.bus_scale->usecase = bw_level_tbl_8610;
+ drv_data.bus_scale->num_usecases = ARRAY_SIZE(bw_level_tbl_8610);
+ drv_data.freq_tbl = acpu_freq_tbl_8610;
+}
+
static int __init acpuclk_a7_init(void)
{
+ if (cpu_is_msm8610())
+ msm8610_acpu_init();
+
return platform_driver_probe(&acpuclk_a7_driver, acpuclk_a7_probe);
}
device_initcall(acpuclk_a7_init);
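The comments in the bw_level_tbl_* entries above relate each bandwidth vote to a minimum bus clock; the numbers are consistent with an 8-byte-wide bus (a hedged reading, since the BW_MBPS() macro itself is defined outside this patch): 19 MHz x 8 B = 152 MB/s, 37.5 MHz x 8 B = 300 MB/s, 200 MHz x 8 B = 1600 MB/s, and 533 MHz x 8 B = 4264 MB/s. The 8610 table simply stops at 266 MHz (2128 MB/s), and msm8610_acpu_init() swaps in both the smaller bandwidth table and the 8610 frequency table before platform_driver_probe() runs.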
diff --git a/arch/arm/mach-msm/acpuclock-8960ab.c b/arch/arm/mach-msm/acpuclock-8960ab.c
index 38658a2..0fa2cde 100644
--- a/arch/arm/mach-msm/acpuclock-8960ab.c
+++ b/arch/arm/mach-msm/acpuclock-8960ab.c
@@ -109,12 +109,12 @@
static struct acpu_level freq_tbl_PVS0[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 950000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 950000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 975000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 1000000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 1025000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1050000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1075000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 975000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 1025000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1050000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1075000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1100000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1125000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1150000, AVS(0x0) },
@@ -127,12 +127,12 @@
static struct acpu_level freq_tbl_PVS1[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 925000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 925000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 950000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 975000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 1000000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1025000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1050000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 975000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1025000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1050000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1075000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1100000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1125000, AVS(0x0) },
@@ -145,12 +145,12 @@
static struct acpu_level freq_tbl_PVS2[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 900000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 900000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 925000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 950000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 975000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 1000000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1025000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 950000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 975000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 1000000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1025000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1050000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1075000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1100000, AVS(0x0) },
@@ -163,12 +163,12 @@
static struct acpu_level freq_tbl_PVS3[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 900000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 900000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 900000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 925000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 950000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 975000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 1000000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 925000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 950000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 975000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 1000000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1025000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1050000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1075000, AVS(0x0) },
@@ -181,12 +181,12 @@
static struct acpu_level freq_tbl_PVS4[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 875000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 875000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 900000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 925000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 950000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 975000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 900000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 925000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 950000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 975000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 1000000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1025000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1050000, AVS(0x0) },
@@ -199,12 +199,12 @@
static struct acpu_level freq_tbl_PVS5[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 875000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 875000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 875000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 875000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 900000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 925000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 950000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 875000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 875000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 900000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 925000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 950000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 975000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 1000000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1025000, AVS(0x0) },
@@ -217,12 +217,12 @@
static struct acpu_level freq_tbl_PVS6[] __initdata = {
{ 1, { 384000, PLL_8, 0, 0x00 }, L2(0), 850000, AVS(0x70001F) },
- { 1, { 486000, HFPLL, 2, 0x24 }, L2(3), 850000, AVS(0x0) },
- { 1, { 594000, HFPLL, 1, 0x16 }, L2(3), 850000, AVS(0x0) },
- { 1, { 702000, HFPLL, 1, 0x1A }, L2(3), 850000, AVS(0x0) },
- { 1, { 810000, HFPLL, 1, 0x1E }, L2(3), 875000, AVS(0x0) },
- { 1, { 918000, HFPLL, 1, 0x22 }, L2(3), 900000, AVS(0x0) },
- { 1, { 1026000, HFPLL, 1, 0x26 }, L2(3), 925000, AVS(0x0) },
+ { 1, { 486000, HFPLL, 2, 0x24 }, L2(4), 850000, AVS(0x0) },
+ { 1, { 594000, HFPLL, 1, 0x16 }, L2(4), 850000, AVS(0x0) },
+ { 1, { 702000, HFPLL, 1, 0x1A }, L2(4), 850000, AVS(0x0) },
+ { 1, { 810000, HFPLL, 1, 0x1E }, L2(4), 875000, AVS(0x0) },
+ { 1, { 918000, HFPLL, 1, 0x22 }, L2(4), 900000, AVS(0x0) },
+ { 1, { 1026000, HFPLL, 1, 0x26 }, L2(4), 925000, AVS(0x0) },
{ 1, { 1134000, HFPLL, 1, 0x2A }, L2(9), 950000, AVS(0x70000D) },
{ 1, { 1242000, HFPLL, 1, 0x2E }, L2(9), 975000, AVS(0x0) },
{ 1, { 1350000, HFPLL, 1, 0x32 }, L2(9), 1000000, AVS(0x0) },
diff --git a/arch/arm/mach-msm/acpuclock-cortex.c b/arch/arm/mach-msm/acpuclock-cortex.c
index 74ca145..0c80a56 100644
--- a/arch/arm/mach-msm/acpuclock-cortex.c
+++ b/arch/arm/mach-msm/acpuclock-cortex.c
@@ -66,11 +66,17 @@
{
int rc = 0;
- /* Increase vdd_mem before vdd_cpu. vdd_mem should be >= vdd_cpu. */
- rc = regulator_set_voltage(priv->vdd_mem, vdd_mem, priv->vdd_max_mem);
- if (rc) {
- pr_err("vdd_mem increase failed (%d)\n", rc);
- return rc;
+ if (priv->vdd_mem) {
+ /*
+ * Increase vdd_mem before vdd_cpu. vdd_mem should
+ * be >= vdd_cpu.
+ */
+ rc = regulator_set_voltage(priv->vdd_mem, vdd_mem,
+ priv->vdd_max_mem);
+ if (rc) {
+ pr_err("vdd_mem increase failed (%d)\n", rc);
+ return rc;
+ }
}
rc = regulator_set_voltage(priv->vdd_cpu, vdd_cpu, priv->vdd_max_cpu);
@@ -92,6 +98,9 @@
return;
}
+ if (!priv->vdd_mem)
+ return;
+
/* Decrease vdd_mem after vdd_cpu. vdd_mem should be >= vdd_cpu. */
ret = regulator_set_voltage(priv->vdd_mem, vdd_mem, priv->vdd_max_mem);
if (ret)
@@ -129,12 +138,26 @@
pr_warn("acpu rcg didn't update its configuration\n");
}
-/*
- * This function can be called in both atomic and nonatomic context.
- * Since regulator APIS can sleep, we cannot always use the clk prepare
- * unprepare API.
- */
-static int set_speed(struct clkctl_acpu_speed *tgt_s, bool atomic)
+static int set_speed_atomic(struct clkctl_acpu_speed *tgt_s)
+{
+ struct clkctl_acpu_speed *strt_s = priv->current_speed;
+ struct clk *strt = priv->src_clocks[strt_s->src].clk;
+ struct clk *tgt = priv->src_clocks[tgt_s->src].clk;
+ int rc = 0;
+
+ WARN(strt_s->src == ACPUPLL && tgt_s->src == ACPUPLL,
+ "can't reprogram ACPUPLL during atomic context\n");
+ rc = clk_enable(tgt);
+ if (rc)
+ return rc;
+
+ select_clk_source_div(priv, tgt_s);
+ clk_disable(strt);
+
+ return rc;
+}
+
+static int set_speed(struct clkctl_acpu_speed *tgt_s)
{
int rc = 0;
unsigned int tgt_freq_hz = tgt_s->khz * 1000;
@@ -148,19 +171,13 @@
select_clk_source_div(priv, cxo_s);
/* Re-program acpu pll */
- if (atomic)
- clk_disable(tgt);
- else
- clk_disable_unprepare(tgt);
+ clk_disable_unprepare(tgt);
rc = clk_set_rate(tgt, tgt_freq_hz);
if (rc)
pr_err("Failed to set ACPU PLL to %u\n", tgt_freq_hz);
- if (atomic)
- BUG_ON(clk_enable(tgt));
- else
- BUG_ON(clk_prepare_enable(tgt));
+ BUG_ON(clk_prepare_enable(tgt));
/* Switch back to acpu pll */
select_clk_source_div(priv, tgt_s);
@@ -172,10 +189,7 @@
return rc;
}
- if (atomic)
- rc = clk_enable(tgt);
- else
- rc = clk_prepare_enable(tgt);
+ rc = clk_prepare_enable(tgt);
if (rc) {
pr_err("ACPU PLL enable failed\n");
@@ -184,16 +198,10 @@
select_clk_source_div(priv, tgt_s);
- if (atomic)
- clk_disable(strt);
- else
- clk_disable_unprepare(strt);
+ clk_disable_unprepare(strt);
} else {
- if (atomic)
- rc = clk_enable(tgt);
- else
- rc = clk_prepare_enable(tgt);
+ rc = clk_prepare_enable(tgt);
if (rc) {
pr_err("%s enable failed\n",
@@ -203,10 +211,7 @@
select_clk_source_div(priv, tgt_s);
- if (atomic)
- clk_disable(strt);
- else
- clk_disable_unprepare(strt);
+ clk_disable_unprepare(strt);
}
@@ -250,9 +255,9 @@
/* Switch CPU speed. Flag indicates atomic context */
if (reason == SETRATE_CPUFREQ || reason == SETRATE_INIT)
- rc = set_speed(tgt_s, false);
+ rc = set_speed(tgt_s);
else
- rc = set_speed(tgt_s, true);
+ rc = set_speed_atomic(tgt_s);
if (rc)
goto out;
@@ -347,10 +352,12 @@
if (rc)
goto err_vdd;
- rc = regulator_enable(priv->vdd_mem);
- if (rc) {
- dev_err(&pdev->dev, "regulator_enable for mem failed\n");
- goto err_vdd;
+ if (priv->vdd_mem) {
+ rc = regulator_enable(priv->vdd_mem);
+ if (rc) {
+ dev_err(&pdev->dev, "regulator_enable for mem failed\n");
+ goto err_vdd;
+ }
}
rc = regulator_enable(priv->vdd_cpu);
@@ -373,7 +380,8 @@
return 0;
err_vdd_cpu:
- regulator_disable(priv->vdd_mem);
+ if (priv->vdd_mem)
+ regulator_disable(priv->vdd_mem);
err_vdd:
return rc;
}
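The set_speed()/set_speed_atomic() split above relies on the standard Linux clk API contract: clk_prepare()/clk_unprepare(), clk_prepare_enable() and (for most clocks) clk_set_rate() may sleep, while clk_enable()/clk_disable() are usable in atomic context as long as the clock was prepared beforehand. A minimal illustrative sketch of that contract, not part of the patch:

	#include <linux/clk.h>

	/* Sleeping context: reprogram the rate and prepare+enable in one call. */
	static int speed_up_might_sleep(struct clk *c, unsigned long hz)
	{
		int rc = clk_set_rate(c, hz);		/* may sleep */

		if (rc)
			return rc;
		return clk_prepare_enable(c);		/* may sleep */
	}

	/* Atomic context: both clocks must already be clk_prepare()d; only the
	 * enable/disable half of the API is legal here, which is why
	 * set_speed_atomic() warns if the ACPU PLL would need reprogramming. */
	static int switch_src_atomic(struct clk *newc, struct clk *oldc)
	{
		int rc = clk_enable(newc);		/* never sleeps */

		if (rc)
			return rc;
		clk_disable(oldc);			/* never sleeps */
		return 0;
	}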
diff --git a/arch/arm/mach-msm/bam_dmux.c b/arch/arm/mach-msm/bam_dmux.c
index a0644e6..bed794b 100644
--- a/arch/arm/mach-msm/bam_dmux.c
+++ b/arch/arm/mach-msm/bam_dmux.c
@@ -913,8 +913,10 @@
if (!bam_is_connected) {
read_unlock(&ul_wakeup_lock);
ul_wakeup();
- if (unlikely(in_global_reset == 1))
+ if (unlikely(in_global_reset == 1)) {
+ kfree(hdr);
return -EFAULT;
+ }
read_lock(&ul_wakeup_lock);
notify_all(BAM_DMUX_UL_CONNECTED, (unsigned long)(NULL));
}
@@ -1384,12 +1386,11 @@
struct list_head *temp;
struct outside_notify_func *func;
+ BAM_DMUX_LOG("%s: event=%d, data=%lu\n", __func__, event, data);
+
for (i = 0; i < BAM_DMUX_NUM_CHANNELS; ++i) {
- if (bam_ch_is_open(i)) {
+ if (bam_ch_is_open(i))
bam_ch[i].notify(bam_ch[i].priv, event, data);
- BAM_DMUX_LOG("%s: cid=%d, event=%d, data=%lu\n",
- __func__, i, event, data);
- }
}
__list_for_each(temp, &bam_other_notify_funcs) {
@@ -1756,10 +1757,13 @@
/* in_ssr documentation/assumptions found in restart_notifier_cb */
if (!power_management_only_mode) {
if (likely(!in_ssr)) {
+ BAM_DMUX_LOG("%s: disconnect tx\n", __func__);
sps_disconnect(bam_tx_pipe);
+ BAM_DMUX_LOG("%s: disconnect rx\n", __func__);
sps_disconnect(bam_rx_pipe);
__memzero(rx_desc_mem_buf.base, rx_desc_mem_buf.size);
__memzero(tx_desc_mem_buf.base, tx_desc_mem_buf.size);
+ BAM_DMUX_LOG("%s: device reset\n", __func__);
sps_device_reset(a2_device_handle);
} else {
ssr_skipped_disconnect = 1;
diff --git a/arch/arm/mach-msm/board-8064.c b/arch/arm/mach-msm/board-8064.c
index 0fe94d5..f969e31 100644
--- a/arch/arm/mach-msm/board-8064.c
+++ b/arch/arm/mach-msm/board-8064.c
@@ -437,7 +437,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
@@ -3585,6 +3585,18 @@
if (socinfo_get_pmic_model() == PMIC_MODEL_PM8917)
apq8064_pm8917_pdata_fixup();
platform_device_register(&msm_gpio_device);
+ if (cpu_is_apq8064ab())
+ apq8064ab_update_krait_spm();
+ if (cpu_is_krait_v3()) {
+ struct msm_pm_init_data_type *pdata =
+ msm8064_pm_8x60.dev.platform_data;
+ pdata->retention_calls_tz = false;
+ apq8064ab_update_retention_spm();
+ }
+ platform_device_register(&msm8064_pm_8x60);
+
+ msm_spm_init(msm_spm_data, ARRAY_SIZE(msm_spm_data));
+ msm_spm_l2_init(msm_spm_l2_data);
msm_tsens_early_init(&apq_tsens_pdata);
msm_thermal_init(&msm_thermal_pdata);
if (socinfo_init() < 0)
@@ -3699,18 +3711,6 @@
apq8064_init_dsps();
platform_device_register(&msm_8960_riva);
}
- if (cpu_is_apq8064ab())
- apq8064ab_update_krait_spm();
- if (cpu_is_krait_v3()) {
- struct msm_pm_init_data_type *pdata =
- msm8064_pm_8x60.dev.platform_data;
- pdata->retention_calls_tz = false;
- apq8064ab_update_retention_spm();
- }
- platform_device_register(&msm8064_pm_8x60);
-
- msm_spm_init(msm_spm_data, ARRAY_SIZE(msm_spm_data));
- msm_spm_l2_init(msm_spm_l2_data);
BUG_ON(msm_pm_boot_init(&msm_pm_boot_pdata));
apq8064_epm_adc_init();
}
diff --git a/arch/arm/mach-msm/board-zinc-gpiomux.c b/arch/arm/mach-msm/board-8084-gpiomux.c
similarity index 94%
rename from arch/arm/mach-msm/board-zinc-gpiomux.c
rename to arch/arm/mach-msm/board-8084-gpiomux.c
index ac4daa8..8d5bb49 100644
--- a/arch/arm/mach-msm/board-zinc-gpiomux.c
+++ b/arch/arm/mach-msm/board-8084-gpiomux.c
@@ -17,7 +17,7 @@
#include <mach/board.h>
#include <mach/gpiomux.h>
-void __init msmzinc_init_gpiomux(void)
+void __init apq8084_init_gpiomux(void)
{
int rc;
diff --git a/arch/arm/mach-msm/board-8084.c b/arch/arm/mach-msm/board-8084.c
new file mode 100644
index 0000000..e45266e
--- /dev/null
+++ b/arch/arm/mach-msm/board-8084.c
@@ -0,0 +1,141 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/memory.h>
+#include <asm/hardware/gic.h>
+#include <asm/mach/map.h>
+#include <asm/mach/arch.h>
+#include <mach/board.h>
+#include <mach/gpiomux.h>
+#include <mach/msm_iomap.h>
+#include <mach/msm_memtypes.h>
+#include <mach/msm_smd.h>
+#include <mach/restart.h>
+#include <mach/socinfo.h>
+#include <mach/clk-provider.h>
+#include "board-dt.h"
+#include "clock.h"
+#include "devices.h"
+#include "platsmp.h"
+
+static struct memtype_reserve apq8084_reserve_table[] __initdata = {
+ [MEMTYPE_SMI] = {
+ },
+ [MEMTYPE_EBI0] = {
+ .flags = MEMTYPE_FLAGS_1M_ALIGN,
+ },
+ [MEMTYPE_EBI1] = {
+ .flags = MEMTYPE_FLAGS_1M_ALIGN,
+ },
+};
+
+static int apq8084_paddr_to_memtype(phys_addr_t paddr)
+{
+ return MEMTYPE_EBI1;
+}
+
+static struct reserve_info apq8084_reserve_info __initdata = {
+ .memtype_reserve_table = apq8084_reserve_table,
+ .paddr_to_memtype = apq8084_paddr_to_memtype,
+};
+
+static struct of_dev_auxdata apq8084_auxdata_lookup[] __initdata = {
+ OF_DEV_AUXDATA("qcom,msm-sdcc", 0xF9824000, \
+ "msm_sdcc.1", NULL),
+ OF_DEV_AUXDATA("qcom,msm-sdcc", 0xF98A4000, \
+ "msm_sdcc.2", NULL),
+ {}
+};
+
+void __init apq8084_reserve(void)
+{
+ reserve_info = &apq8084_reserve_info;
+ of_scan_flat_dt(dt_scan_for_memory_reserve, apq8084_reserve_table);
+ msm_reserve();
+}
+
+static void __init apq8084_early_memory(void)
+{
+ reserve_info = &apq8084_reserve_info;
+ of_scan_flat_dt(dt_scan_for_memory_hole, apq8084_reserve_table);
+}
+
+static struct clk_lookup msm_clocks_dummy[] = {
+ CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("core_clk", SDC1_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("iface_clk", SDC1_P_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("core_clk", SDC2_CLK, "msm_sdcc.2", OFF),
+ CLK_DUMMY("iface_clk", SDC2_P_CLK, "msm_sdcc.2", OFF),
+};
+
+static struct clock_init_data msm_dummy_clock_init_data __initdata = {
+ .table = msm_clocks_dummy,
+ .size = ARRAY_SIZE(msm_clocks_dummy),
+};
+
+/*
+ * Used to satisfy dependencies for devices that need to be
+ * run early or in a particular order. Most likely your device doesn't fall
+ * into this category, and thus the driver should not be added here. The
+ * EPROBE_DEFER can satisfy most dependency problems.
+ */
+void __init apq8084_add_drivers(void)
+{
+ msm_smd_init();
+ msm_clock_init(&msm_dummy_clock_init_data);
+}
+
+static void __init apq8084_map_io(void)
+{
+ msm_map_8084_io();
+}
+
+void __init apq8084_init(void)
+{
+ struct of_dev_auxdata *adata = apq8084_auxdata_lookup;
+
+ if (socinfo_init() < 0)
+ pr_err("%s: socinfo_init() failed\n", __func__);
+
+ apq8084_init_gpiomux();
+ of_platform_populate(NULL, of_default_bus_match_table, adata, NULL);
+ apq8084_add_drivers();
+}
+
+void __init apq8084_init_very_early(void)
+{
+ apq8084_early_memory();
+}
+
+static const char *apq8084_dt_match[] __initconst = {
+ "qcom,apq8084",
+ NULL
+};
+
+DT_MACHINE_START(APQ8084_DT, "Qualcomm APQ 8084 (Flattened Device Tree)")
+ .map_io = apq8084_map_io,
+ .init_irq = msm_dt_init_irq,
+ .init_machine = apq8084_init,
+ .handle_irq = gic_handle_irq,
+ .timer = &msm_dt_timer,
+ .dt_compat = apq8084_dt_match,
+ .reserve = apq8084_reserve,
+ .init_very_early = apq8084_init_very_early,
+ .restart = msm_restart,
+ .smp = &msm8974_smp_ops,
+MACHINE_END
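Two things in the new board-8084.c are worth connecting: the OF_DEV_AUXDATA() entries make of_platform_populate() name the SDCC nodes at 0xF9824000/0xF98A4000 "msm_sdcc.1" and "msm_sdcc.2" rather than the default address-based device names, and the CLK_DUMMY entries register placeholder clocks keyed on exactly those names, so clk_get() in the SDCC driver succeeds before a real APQ8084 clock driver exists. A small illustrative consumer, assumed driver code and not part of this patch:

	#include <linux/clk.h>
	#include <linux/err.h>
	#include <linux/platform_device.h>

	/* With the auxdata naming the device "msm_sdcc.1", this lookup resolves
	 * against the CLK_DUMMY("core_clk", SDC1_CLK, "msm_sdcc.1", OFF) entry
	 * registered via apq8084_add_drivers(). */
	static int sdcc_clk_smoke_test(struct platform_device *pdev)
	{
		struct clk *core = clk_get(&pdev->dev, "core_clk");

		if (IS_ERR(core))
			return PTR_ERR(core);
		clk_put(core);
		return 0;
	}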
diff --git a/arch/arm/mach-msm/board-8610.c b/arch/arm/mach-msm/board-8610.c
index 67334d5..2cd7134 100644
--- a/arch/arm/mach-msm/board-8610.c
+++ b/arch/arm/mach-msm/board-8610.c
@@ -44,6 +44,7 @@
#include <mach/clk-provider.h>
#include <mach/msm_smd.h>
#include <mach/rpm-smd.h>
+#include <mach/rpm-regulator-smd.h>
#include <linux/msm_thermal.h>
#include "board-dt.h"
#include "clock.h"
@@ -105,6 +106,7 @@
msm_rpm_driver_init();
msm_lpmrs_module_init();
msm_spm_device_init();
+ rpm_regulator_smd_driver_init();
qpnp_regulator_init();
tsens_tm_init_driver();
msm_thermal_device_init();
diff --git a/arch/arm/mach-msm/board-8930.c b/arch/arm/mach-msm/board-8930.c
index 8be128c..6ccaba6 100644
--- a/arch/arm/mach-msm/board-8930.c
+++ b/arch/arm/mach-msm/board-8930.c
@@ -477,7 +477,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
diff --git a/arch/arm/mach-msm/board-8960.c b/arch/arm/mach-msm/board-8960.c
index c5fc418..5d96389 100644
--- a/arch/arm/mach-msm/board-8960.c
+++ b/arch/arm/mach-msm/board-8960.c
@@ -532,7 +532,7 @@
if (fixed_position != NOT_FIXED)
fixed_size += heap->size;
- else
+ else if (!use_cma)
reserve_mem_for_ion(MEMTYPE_EBI1, heap->size);
if (fixed_position == FIXED_LOW) {
diff --git a/arch/arm/mach-msm/board-8974.c b/arch/arm/mach-msm/board-8974.c
index cfa1628..3eed219 100644
--- a/arch/arm/mach-msm/board-8974.c
+++ b/arch/arm/mach-msm/board-8974.c
@@ -174,6 +174,7 @@
static const char *msm8974_dt_match[] __initconst = {
"qcom,msm8974",
+ "qcom,apq8074",
NULL
};
diff --git a/arch/arm/mach-msm/board-9625-gpiomux.c b/arch/arm/mach-msm/board-9625-gpiomux.c
index 75aaaec..a6ac986 100644
--- a/arch/arm/mach-msm/board-9625-gpiomux.c
+++ b/arch/arm/mach-msm/board-9625-gpiomux.c
@@ -276,6 +276,7 @@
},
};
+#ifdef CONFIG_FB_MSM_QPIC
static struct gpiomux_setting qpic_lcdc_a_d = {
.func = GPIOMUX_FUNC_1,
.drv = GPIOMUX_DRV_10MA,
@@ -327,6 +328,17 @@
},
};
+static void msm9625_disp_init_gpiomux(void)
+{
+ msm_gpiomux_install(msm9625_qpic_lcdc_configs,
+ ARRAY_SIZE(msm9625_qpic_lcdc_configs));
+}
+#else
+static void msm9625_disp_init_gpiomux(void)
+{
+}
+#endif /* CONFIG_FB_MSM_QPIC */
+
void __init msm9625_init_gpiomux(void)
{
int rc;
@@ -347,7 +359,5 @@
ARRAY_SIZE(mdm9625_cdc_reset_config));
msm_gpiomux_install(sdc2_card_det_config,
ARRAY_SIZE(sdc2_card_det_config));
- msm_gpiomux_install(msm9625_qpic_lcdc_configs,
- ARRAY_SIZE(msm9625_qpic_lcdc_configs));
-
+ msm9625_disp_init_gpiomux();
}
diff --git a/arch/arm/mach-msm/board-zinc.c b/arch/arm/mach-msm/board-zinc.c
deleted file mode 100644
index 444444f..0000000
--- a/arch/arm/mach-msm/board-zinc.c
+++ /dev/null
@@ -1,127 +0,0 @@
-/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/err.h>
-#include <linux/kernel.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <linux/memory.h>
-#include <asm/hardware/gic.h>
-#include <asm/mach/map.h>
-#include <asm/mach/arch.h>
-#include <mach/board.h>
-#include <mach/gpiomux.h>
-#include <mach/msm_iomap.h>
-#include <mach/msm_memtypes.h>
-#include <mach/msm_smd.h>
-#include <mach/restart.h>
-#include <mach/socinfo.h>
-#include <mach/clk-provider.h>
-#include "board-dt.h"
-#include "clock.h"
-#include "devices.h"
-#include "platsmp.h"
-
-static struct memtype_reserve msmzinc_reserve_table[] __initdata = {
- [MEMTYPE_SMI] = {
- },
- [MEMTYPE_EBI0] = {
- .flags = MEMTYPE_FLAGS_1M_ALIGN,
- },
- [MEMTYPE_EBI1] = {
- .flags = MEMTYPE_FLAGS_1M_ALIGN,
- },
-};
-
-static int msmzinc_paddr_to_memtype(phys_addr_t paddr)
-{
- return MEMTYPE_EBI1;
-}
-
-static struct reserve_info msmzinc_reserve_info __initdata = {
- .memtype_reserve_table = msmzinc_reserve_table,
- .paddr_to_memtype = msmzinc_paddr_to_memtype,
-};
-
-void __init msmzinc_reserve(void)
-{
- reserve_info = &msmzinc_reserve_info;
- of_scan_flat_dt(dt_scan_for_memory_reserve, msmzinc_reserve_table);
- msm_reserve();
-}
-
-static void __init msmzinc_early_memory(void)
-{
- reserve_info = &msmzinc_reserve_info;
- of_scan_flat_dt(dt_scan_for_memory_hole, msmzinc_reserve_table);
-}
-
-static struct clk_lookup msm_clocks_dummy[] = {
- CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
- CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
-};
-
-static struct clock_init_data msm_dummy_clock_init_data __initdata = {
- .table = msm_clocks_dummy,
- .size = ARRAY_SIZE(msm_clocks_dummy),
-};
-
-/*
- * Used to satisfy dependencies for devices that need to be
- * run early or in a particular order. Most likely your device doesn't fall
- * into this category, and thus the driver should not be added here. The
- * EPROBE_DEFER can satisfy most dependency problems.
- */
-void __init msmzinc_add_drivers(void)
-{
- msm_smd_init();
- msm_clock_init(&msm_dummy_clock_init_data);
-}
-
-static void __init msmzinc_map_io(void)
-{
- msm_map_zinc_io();
-}
-
-void __init msmzinc_init(void)
-{
- if (socinfo_init() < 0)
- pr_err("%s: socinfo_init() failed\n", __func__);
-
- msmzinc_init_gpiomux();
- of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
- msmzinc_add_drivers();
-}
-
-void __init msmzinc_init_very_early(void)
-{
- msmzinc_early_memory();
-}
-
-static const char *msmzinc_dt_match[] __initconst = {
- "qcom,msmzinc",
- NULL
-};
-
-DT_MACHINE_START(MSMZINC_DT, "Qualcomm MSM ZINC (Flattened Device Tree)")
- .map_io = msmzinc_map_io,
- .init_irq = msm_dt_init_irq,
- .init_machine = msmzinc_init,
- .handle_irq = gic_handle_irq,
- .timer = &msm_dt_timer,
- .dt_compat = msmzinc_dt_match,
- .reserve = msmzinc_reserve,
- .init_very_early = msmzinc_init_very_early,
- .restart = msm_restart,
- .smp = &msm8974_smp_ops,
-MACHINE_END
diff --git a/arch/arm/mach-msm/cache_erp.c b/arch/arm/mach-msm/cache_erp.c
index ddea91c..f52bc28 100644
--- a/arch/arm/mach-msm/cache_erp.c
+++ b/arch/arm/mach-msm/cache_erp.c
@@ -123,11 +123,18 @@
unsigned int mplxrexnok;
};
+struct msm_erp_dump_region {
+ struct resource *res;
+ void __iomem *va;
+};
+
static DEFINE_PER_CPU(struct msm_l1_err_stats, msm_l1_erp_stats);
static struct msm_l2_err_stats msm_l2_erp_stats;
static int l1_erp_irq, l2_erp_irq;
static struct proc_dir_entry *procfs_entry;
+static int num_dump_regions;
+static struct msm_erp_dump_region *dump_regions;
#ifdef CONFIG_MSM_L1_ERR_LOG
static struct proc_dir_entry *procfs_log_entry;
@@ -211,6 +218,22 @@
return len;
}
+static int msm_erp_dump_regions(void)
+{
+ int i = 0;
+ struct msm_erp_dump_region *r;
+
+ for (i = 0; i < num_dump_regions; i++) {
+ r = &dump_regions[i];
+
+ pr_alert("%s %pR:\n", r->res->name, r->res);
+ print_hex_dump(KERN_ALERT, "", DUMP_PREFIX_OFFSET, 32, 4, r->va,
+ resource_size(r->res), 0);
+ }
+
+ return 0;
+}
+
#ifdef CONFIG_MSM_L1_ERR_LOG
static int proc_read_log(char *page, char **start, off_t off, int count,
int *eof, void *data)
@@ -267,6 +290,7 @@
pr_alert("\tCESR = 0x%08x\n", cesr);
pr_alert("\tCPU speed = %lu\n", acpuclk_get_rate(cpu));
pr_alert("\tMIDR = 0x%08x\n", read_cpuid_id());
+ msm_erp_dump_regions();
}
if (cesr & CESR_DCTPE) {
@@ -425,6 +449,9 @@
if (port_error && print_alert)
ERP_PORT_ERR("L2 master port error detected");
+ if (soft_error && print_alert)
+ msm_erp_dump_regions();
+
if (soft_error && !unrecoverable)
ERP_1BIT_ERR("L2 single-bit error detected");
@@ -464,6 +491,37 @@
.notifier_call = cache_erp_cpu_callback,
};
+static int msm_erp_read_dump_regions(struct platform_device *pdev)
+{
+ int i;
+ struct device_node *np = pdev->dev.of_node;
+ struct resource *res;
+
+ num_dump_regions = of_property_count_strings(np, "reg-names");
+
+ if (num_dump_regions <= 0) {
+ num_dump_regions = 0;
+ return 0; /* Not an error - this is an optional property */
+ }
+
+ dump_regions = devm_kzalloc(&pdev->dev,
+ sizeof(*dump_regions) * num_dump_regions,
+ GFP_KERNEL);
+ if (!dump_regions)
+ return -ENOMEM;
+
+ for (i = 0; i < num_dump_regions; i++) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ dump_regions[i].res = res;
+ dump_regions[i].va = devm_ioremap(&pdev->dev, res->start,
+ resource_size(res));
+ if (!dump_regions[i].va)
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
static int msm_cache_erp_probe(struct platform_device *pdev)
{
struct resource *r;
@@ -511,6 +569,11 @@
goto fail_l2;
}
+ ret = msm_erp_read_dump_regions(pdev);
+
+ if (ret)
+ goto fail_l2;
+
get_online_cpus();
register_hotcpu_notifier(&cache_erp_cpu_notifier);
for_each_cpu(cpu, cpu_online_mask)
diff --git a/arch/arm/mach-msm/clock-8226.c b/arch/arm/mach-msm/clock-8226.c
index 5f9eafd..af027f0 100644
--- a/arch/arm/mach-msm/clock-8226.c
+++ b/arch/arm/mach-msm/clock-8226.c
@@ -1709,13 +1709,19 @@
F_MMSS( 100000000, gpll0, 6, 0, 0),
F_MMSS( 109090000, gpll0, 5.5, 0, 0),
F_MMSS( 133330000, gpll0, 4.5, 0, 0),
+ F_MMSS( 150000000, gpll0, 4, 0, 0),
F_MMSS( 200000000, gpll0, 3, 0, 0),
F_MMSS( 228570000, mmpll0_pll, 3.5, 0, 0),
F_MMSS( 266670000, mmpll0_pll, 3, 0, 0),
F_MMSS( 320000000, mmpll0_pll, 2.5, 0, 0),
+ F_MMSS( 400000000, mmpll0_pll, 2, 0, 0),
F_END
};
+static unsigned long camss_vfe_vfe0_fmax_v2[VDD_DIG_NUM] = {
+ 150000000, 320000000, 400000000,
+};
+
static struct rcg_clk vfe0_clk_src = {
.cmd_rcgr_reg = VFE0_CMD_RCGR,
.set_rate = set_rate_hid,
@@ -1972,11 +1978,17 @@
static struct clk_freq_tbl ftbl_camss_vfe_cpp_clk[] = {
F_MMSS( 133330000, gpll0, 4.5, 0, 0),
+ F_MMSS( 150000000, gpll0, 4, 0, 0),
F_MMSS( 266670000, mmpll0_pll, 3, 0, 0),
F_MMSS( 320000000, mmpll0_pll, 2.5, 0, 0),
+ F_MMSS( 400000000, mmpll0_pll, 2, 0, 0),
F_END
};
+static unsigned long camss_vfe_cpp_fmax_v2[VDD_DIG_NUM] = {
+ 150000000, 320000000, 400000000,
+};
+
static struct rcg_clk cpp_clk_src = {
.cmd_rcgr_reg = CPP_CMD_RCGR,
.set_rate = set_rate_hid,
@@ -2697,7 +2709,6 @@
.base = &virt_bases[LPASS_BASE],
.c = {
.dbg_name = "q6ss_xo_clk",
- .parent = &xo.c,
.ops = &clk_ops_branch,
CLK_INIT(q6ss_xo_clk.c),
},
@@ -2811,8 +2822,8 @@
static DEFINE_CLK_VOTER(pnoc_sps_clk, &pnoc_clk.c, LONG_MAX);
-static DEFINE_CLK_VOTER(qseecom_ce1_clk_src, &ce1_clk_src.c, LONG_MAX);
-static DEFINE_CLK_VOTER(scm_ce1_clk_src, &ce1_clk_src.c, LONG_MAX);
+static DEFINE_CLK_VOTER(qseecom_ce1_clk_src, &ce1_clk_src.c, 100000000);
+static DEFINE_CLK_VOTER(scm_ce1_clk_src, &ce1_clk_src.c, 100000000);
static DEFINE_CLK_BRANCH_VOTER(cxo_otg_clk, &xo.c);
static DEFINE_CLK_BRANCH_VOTER(cxo_pil_lpass_clk, &xo.c);
@@ -3060,7 +3071,7 @@
/* WCNSS CLOCKS */
CLK_LOOKUP("xo", cxo_wlan_clk.c, "fb000000.qcom,wcnss-wlan"),
- CLK_LOOKUP("rf_clk", cxo_a2.c, "fb000000.qcom,wcnss-wlan"),
+ CLK_LOOKUP("rf_clk", cxo_a1.c, "fb000000.qcom,wcnss-wlan"),
/* BUS DRIVER */
CLK_LOOKUP("bus_clk", cnoc_msmbus_clk.c, "msm_config_noc"),
@@ -3328,6 +3339,8 @@
CLK_LOOKUP("csi1_rdi_clk", camss_csi1rdi_clk.c, "fda08400.qcom,csid"),
/* ISPIF clocks */
+ CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
+ "fda0a000.qcom,ispif"),
CLK_LOOKUP("camss_vfe_vfe_clk", camss_vfe_vfe0_clk.c,
"fda0a000.qcom,ispif"),
CLK_LOOKUP("camss_csi_vfe_clk", camss_csi_vfe0_clk.c,
@@ -3415,6 +3428,18 @@
CLK_LOOKUP("osr_clk", div_clk1.c, "msm-dai-q6-dev.16390"),
CLK_LOOKUP("osr_clk", div_clk1.c, "msm-dai-q6-dev.16391"),
+ /* Add QCEDEV clocks */
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "fd400000.qcom,qcedev"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "fd400000.qcom,qcedev"),
+
+ /* Add QCRYPTO clocks */
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "fd404000.qcom,qcrypto"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "fd404000.qcom,qcrypto"),
+
};
static struct clk_lookup msm_clocks_8226_rumi[] = {
@@ -3544,6 +3569,12 @@
reg_init();
+ /* v2 specific changes */
+ if (SOCINFO_VERSION_MAJOR(socinfo_get_version()) == 2) {
+ cpp_clk_src.c.fmax = camss_vfe_cpp_fmax_v2;
+ vfe0_clk_src.c.fmax = camss_vfe_vfe0_fmax_v2;
+ }
+
/*
* MDSS needs the ahb clock and needs to init before we register the
* lookup table.
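For reference, each fmax array above lists the highest rate a clock may run at a given digital-rail voltage corner; roughly, the clock framework consults the table to decide which rail corner to vote for a requested rate. A minimal sketch of that indexing, with corner names invented for the example (the real enum lives in the 8226 clock driver):

    /* Illustrative only: an fmax table indexed by VDD_DIG corner.
     * The corner names here are assumptions, not taken from the patch. */
    enum { EX_VDD_DIG_LOW, EX_VDD_DIG_NOMINAL, EX_VDD_DIG_HIGH, EX_VDD_DIG_NUM };

    static unsigned long example_vfe0_fmax_v2[EX_VDD_DIG_NUM] = {
            [EX_VDD_DIG_LOW]     = 150000000,       /* gpll0 / 4 */
            [EX_VDD_DIG_NOMINAL] = 320000000,       /* mmpll0_pll / 2.5 */
            [EX_VDD_DIG_HIGH]    = 400000000,       /* mmpll0_pll / 2 */
    };

On v2 silicon the probe path above simply points cpp_clk_src and vfe0_clk_src at the wider tables, so the new 150 MHz and 400 MHz entries in the frequency tables become reachable.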
diff --git a/arch/arm/mach-msm/clock-8610.c b/arch/arm/mach-msm/clock-8610.c
index 2e35d8a..d9442ac 100644
--- a/arch/arm/mach-msm/clock-8610.c
+++ b/arch/arm/mach-msm/clock-8610.c
@@ -255,6 +255,7 @@
#define MMSS_MMSSNOC_AXI_CBCR 0x506C
#define BIMC_GFX_BCR 0x5090
#define BIMC_GFX_CBCR 0x5094
+#define MMSS_CAMSS_MISC 0x3718
#define AUDIO_CORE_GDSCR 0x7000
#define SPDM_BCR 0x1000
@@ -462,9 +463,9 @@
#define D0_ID 1
#define D1_ID 2
-#define A0_ID 3
-#define A1_ID 4
-#define A2_ID 5
+#define A0_ID 4
+#define A1_ID 5
+#define A2_ID 6
#define DIFF_CLK_ID 7
#define DIV_CLK_ID 11
@@ -1932,6 +1933,115 @@
},
};
+static struct mux_clk csi0phy_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(11),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(9),
+ .sources = (struct mux_source[]) {
+ { &csi0phy_clk.c, 0 },
+ { &csi1phy_clk.c, BIT(9) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi0phy_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(csi0phy_mux_clk.c),
+ },
+};
+
+static struct mux_clk csi1phy_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(10),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(8),
+ .sources = (struct mux_source[]) {
+ { &csi0phy_clk.c, 0 },
+ { &csi1phy_clk.c, BIT(8) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi1phy_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(csi1phy_mux_clk.c),
+ },
+};
+
+static struct mux_clk csi0pix_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(7),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(3),
+ .sources = (struct mux_source[]) {
+ { &csi0pix_clk.c, 0 },
+ { &csi1pix_clk.c, BIT(3) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "csi0pix_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(csi0pix_mux_clk.c),
+ },
+};
+
+
+static struct mux_clk rdi2_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(6),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(2),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(2) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi2_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(rdi2_mux_clk.c),
+ },
+};
+
+static struct mux_clk rdi1_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(5),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(1),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(1) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi1_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(rdi1_mux_clk.c),
+ },
+};
+
+static struct mux_clk rdi0_mux_clk = {
+ .enable_reg = MMSS_CAMSS_MISC,
+ .enable_mask = BIT(4),
+ .select_reg = MMSS_CAMSS_MISC,
+ .select_mask = BIT(0),
+ .sources = (struct mux_source[]) {
+ { &csi0rdi_clk.c, 0 },
+ { &csi1rdi_clk.c, BIT(0) },
+ { 0 },
+ },
+ .base = &virt_bases[MMSS_BASE],
+ .c = {
+ .dbg_name = "rdi0_mux_clk",
+ .ops = &clk_ops_mux,
+ CLK_INIT(rdi0_mux_clk.c),
+ },
+};
+
static struct branch_clk csi_ahb_clk = {
.cbcr_reg = CSI_AHB_CBCR,
.has_sibling = 1,
@@ -2245,7 +2355,6 @@
.bcr_reg = LPASS_Q6SS_BCR,
.base = &virt_bases[LPASS_BASE],
.c = {
- .parent = &gcc_xo_clk_src.c,
.dbg_name = "q6ss_xo_clk",
.ops = &clk_ops_branch,
CLK_INIT(q6ss_xo_clk.c),
@@ -2620,6 +2729,10 @@
CLK_LOOKUP("core_clk", qdss_clk.c, "fc352000.cti"),
CLK_LOOKUP("core_clk", qdss_clk.c, "fc353000.cti"),
CLK_LOOKUP("core_clk", qdss_clk.c, "fc354000.cti"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34c000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34d000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34e000.jtagmm"),
+ CLK_LOOKUP("core_clk", qdss_clk.c, "fc34f000.jtagmm"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc326000.tmc"),
@@ -2649,6 +2762,10 @@
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc352000.cti"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc353000.cti"),
CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc354000.cti"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34c000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34d000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34e000.jtagmm"),
+ CLK_LOOKUP("core_a_clk", qdss_a_clk.c, "fc34f000.jtagmm"),
@@ -2706,7 +2823,7 @@
CLK_LOOKUP("core_clk", gcc_mss_q6_bimc_axi_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_pdm2_clk.c, ""),
CLK_LOOKUP("iface_clk", gcc_pdm_ahb_clk.c, ""),
- CLK_LOOKUP("iface_clk", gcc_prng_ahb_clk.c, ""),
+ CLK_LOOKUP("iface_clk", gcc_prng_ahb_clk.c, "f9bff000.qcom,msm-rng"),
CLK_LOOKUP("iface_clk", gcc_sdcc1_ahb_clk.c, "msm_sdcc.1"),
CLK_LOOKUP("core_clk", gcc_sdcc1_apps_clk.c, "msm_sdcc.1"),
CLK_LOOKUP("iface_clk", gcc_sdcc2_ahb_clk.c, "msm_sdcc.2"),
@@ -2766,6 +2883,13 @@
CLK_LOOKUP("core_clk", vfe_ahb_clk.c, ""),
CLK_LOOKUP("core_clk", vfe_axi_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi0pix_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi0phy_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", csi1phy_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi2_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi1_mux_clk.c, ""),
+ CLK_LOOKUP("core_clk", rdi0_mux_clk.c, ""),
+
CLK_LOOKUP("core_clk", oxili_gfx3d_clk.c, "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("iface_clk", oxili_ahb_clk.c, "fdc00000.qcom,kgsl-3d0"),
CLK_LOOKUP("mem_iface_clk", bimc_gfx_clk.c, "fdc00000.qcom,kgsl-3d0"),
@@ -2804,7 +2928,7 @@
CLK_LOOKUP("measure_clk", l2_m_clk, ""),
CLK_LOOKUP("xo", gcc_xo_clk_src.c, "fb000000.qcom,wcnss-wlan"),
- CLK_LOOKUP("rf_clk", cxo_a2.c, "fb000000.qcom,wcnss-wlan"),
+ CLK_LOOKUP("rf_clk", cxo_a1.c, "fb000000.qcom,wcnss-wlan"),
CLK_LOOKUP("iface_clk", mdp_ahb_clk.c, "fd900000.qcom,mdss_mdp"),
CLK_LOOKUP("core_clk", mdp_axi_clk.c, "fd900000.qcom,mdss_mdp"),
@@ -2822,6 +2946,11 @@
CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "qseecom"),
CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "qseecom"),
CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "qseecom"),
+
+ CLK_LOOKUP("core_clk", gcc_ce1_clk.c, "scm"),
+ CLK_LOOKUP("iface_clk", gcc_ce1_ahb_clk.c, "scm"),
+ CLK_LOOKUP("bus_clk", gcc_ce1_axi_clk.c, "scm"),
+ CLK_LOOKUP("core_clk_src", ce1_clk_src.c, "scm"),
};
static struct clk_lookup msm_clocks_8610_rumi[] = {
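The CAMSS muxes added above are intended to be driven through the generic clock consumer calls rather than by poking MMSS_CAMSS_MISC directly. A hypothetical consumer-side switch of the RDI0 mux (the clk_get() con-ids and the dev pointer are placeholders for this sketch, not real lookup entries):

    /* Illustrative reparenting of rdi0_mux_clk from a camera driver. */
    struct clk *mux = clk_get(dev, "rdi0_mux_clk");     /* placeholder con-id */
    struct clk *src = clk_get(dev, "csi1_rdi_clk");     /* desired source */

    if (!IS_ERR(mux) && !IS_ERR(src)) {
            clk_set_parent(mux, src);       /* flips the BIT(0) select in MMSS_CAMSS_MISC */
            clk_prepare_enable(mux);        /* sets the BIT(4) enable in the same register */
    }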
diff --git a/arch/arm/mach-msm/clock-8960.c b/arch/arm/mach-msm/clock-8960.c
index 509443d..be6d965 100644
--- a/arch/arm/mach-msm/clock-8960.c
+++ b/arch/arm/mach-msm/clock-8960.c
@@ -5419,11 +5419,6 @@
CLK_LOOKUP("csi_phy_clk", csi0_phy_clk.c, "msm_csid.0"),
CLK_LOOKUP("csi_phy_clk", csi1_phy_clk.c, "msm_csid.1"),
CLK_LOOKUP("csi_phy_clk", csi2_phy_clk.c, "msm_csid.2"),
- CLK_LOOKUP("csi_pix_clk", csi_pix_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_pix1_clk", csi_pix1_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi_clk", csi_rdi_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi1_clk", csi_rdi1_clk.c, "msm_ispif.0"),
- CLK_LOOKUP("csi_rdi2_clk", csi_rdi2_clk.c, "msm_ispif.0"),
CLK_LOOKUP("csiphy_timer_src_clk",
csiphy_timer_src_clk.c, "msm_csiphy.0"),
CLK_LOOKUP("csiphy_timer_src_clk",
diff --git a/arch/arm/mach-msm/clock-8974.c b/arch/arm/mach-msm/clock-8974.c
index 73e44b1..707e6b6 100644
--- a/arch/arm/mach-msm/clock-8974.c
+++ b/arch/arm/mach-msm/clock-8974.c
@@ -4757,7 +4757,8 @@
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991f000.serial"),
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9924000.i2c"),
CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f991e000.serial"),
- CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, ""),
+ CLK_LOOKUP("core_clk", gcc_blsp1_qup1_i2c_apps_clk.c, "f9923000.i2c"),
+ CLK_LOOKUP("iface_clk", gcc_blsp1_ahb_clk.c, "f9923000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_i2c_apps_clk.c, "f9924000.i2c"),
CLK_LOOKUP("core_clk", gcc_blsp1_qup2_spi_apps_clk.c, ""),
CLK_LOOKUP("core_clk", gcc_blsp1_qup1_spi_apps_clk.c, "f9923000.spi"),
@@ -4946,80 +4947,79 @@
"fda0b400.qcom,csiphy"),
CLK_LOOKUP("csiphy_timer_clk", camss_phy2_csi2phytimer_clk.c,
"fda0b400.qcom,csiphy"),
+
/* CSID clocks */
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08000.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08000.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08000.qcom,csid"),
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi0_ahb_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi0_clk_src.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi0phy_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi0_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi0pix_clk.c,
+ "fda08000.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi0rdi_clk.c,
+ "fda08000.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08400.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_ahb_clk", camss_csi1_ahb_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_src_clk", csi1_clk_src.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_phy_clk", camss_csi1phy_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_clk", camss_csi1_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_pix_clk", camss_csi1pix_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08400.qcom,csid"),
- CLK_LOOKUP("csi1_rdi_clk", camss_csi1rdi_clk.c, "fda08400.qcom,csid"),
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi1_ahb_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi1_clk_src.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi1phy_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi1_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi1pix_clk.c,
+ "fda08400.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi1rdi_clk.c,
+ "fda08400.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08800.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_ahb_clk", camss_csi2_ahb_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_src_clk", csi2_clk_src.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_phy_clk", camss_csi2phy_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_clk", camss_csi2_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_pix_clk", camss_csi2pix_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08800.qcom,csid"),
- CLK_LOOKUP("csi2_rdi_clk", camss_csi2rdi_clk.c, "fda08800.qcom,csid"),
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi2_ahb_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi2_clk_src.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi2phy_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi2_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi2pix_clk.c,
+ "fda08800.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi2rdi_clk.c,
+ "fda08800.qcom,csid"),
- CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
- "fda08c00.qcom,csid"),
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
- "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_ahb_clk", camss_csi0_ahb_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_ahb_clk", camss_csi3_ahb_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_src_clk", csi0_clk_src.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_src_clk", csi3_clk_src.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_phy_clk", camss_csi0phy_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_phy_clk", camss_csi3phy_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_clk", camss_csi0_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_clk", camss_csi3_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_pix_clk", camss_csi0pix_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_pix_clk", camss_csi3pix_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi0_rdi_clk", camss_csi0rdi_clk.c, "fda08c00.qcom,csid"),
- CLK_LOOKUP("csi3_rdi_clk", camss_csi3rdi_clk.c, "fda08c00.qcom,csid"),
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_ahb_clk", camss_csi3_ahb_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_src_clk", csi3_clk_src.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_phy_clk", camss_csi3phy_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_clk", camss_csi3_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_pix_clk", camss_csi3pix_clk.c,
+ "fda08c00.qcom,csid"),
+ CLK_LOOKUP("csi_rdi_clk", camss_csi3rdi_clk.c,
+ "fda08c00.qcom,csid"),
/* ISPIF clocks */
CLK_LOOKUP("ispif_ahb_clk", camss_ispif_ahb_clk.c,
"fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_vfe_vfe_clk", camss_vfe_vfe0_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_csi_vfe_clk", camss_csi_vfe0_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_vfe_vfe_clk1", camss_vfe_vfe1_clk.c,
- "fda0a000.qcom,ispif"),
- CLK_LOOKUP("camss_csi_vfe_clk1", camss_csi_vfe1_clk.c,
- "fda0a000.qcom,ispif"),
/*VFE clocks*/
CLK_LOOKUP("camss_top_ahb_clk", camss_top_ahb_clk.c,
diff --git a/arch/arm/mach-msm/clock-debug.c b/arch/arm/mach-msm/clock-debug.c
index d540f2b..fe43b72 100644
--- a/arch/arm/mach-msm/clock-debug.c
+++ b/arch/arm/mach-msm/clock-debug.c
@@ -1,6 +1,6 @@
/*
* Copyright (C) 2007 Google, Inc.
- * Copyright (c) 2007-2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2007-2013, The Linux Foundation. All rights reserved.
*
* This software is licensed under the terms of the GNU General Public
* License version 2, as published by the Free Software Foundation, and
@@ -21,10 +21,23 @@
#include <linux/clk.h>
#include <linux/list.h>
#include <linux/clkdev.h>
+#include <linux/uaccess.h>
#include <mach/clk-provider.h>
#include "clock.h"
+static LIST_HEAD(clk_list);
+static DEFINE_SPINLOCK(clk_list_lock);
+
+static struct dentry *debugfs_base;
+static u32 debug_suspend;
+
+struct clk_table {
+ struct list_head node;
+ struct clk_lookup *clocks;
+ size_t num_clocks;
+};
+
static int clock_debug_rate_set(void *data, u64 val)
{
struct clk *clock = data;
@@ -218,35 +231,68 @@
.release = seq_release,
};
-static int clock_parent_show(struct seq_file *m, void *unused)
+static ssize_t clock_parent_read(struct file *filp, char __user *ubuf,
+ size_t cnt, loff_t *ppos)
{
- struct clk *clock = m->private;
- struct clk *parent = clk_get_parent(clock);
+ struct clk *clock = filp->private_data;
+ struct clk *p = clock->parent;
+ char name[256] = {0};
- seq_printf(m, "%s\n", (parent ? parent->dbg_name : "None"));
+ snprintf(name, sizeof(name), "%s\n", p ? p->dbg_name : "None");
- return 0;
+ return simple_read_from_buffer(ubuf, cnt, ppos, name, strlen(name));
}
-static int clock_parent_open(struct inode *inode, struct file *file)
+
+static ssize_t clock_parent_write(struct file *filp,
+ const char __user *ubuf, size_t cnt, loff_t *ppos)
{
- return single_open(file, clock_parent_show, inode->i_private);
+ struct clk *clock = filp->private_data;
+ char buf[256];
+ char *cmp;
+ unsigned long flags;
+ struct clk_table *table;
+ int i, ret;
+ struct clk *parent = NULL;
+
+ cnt = min(cnt, sizeof(buf) - 1);
+ if (copy_from_user(&buf, ubuf, cnt))
+ return -EFAULT;
+ buf[cnt] = '\0';
+ cmp = strstrip(buf);
+
+ spin_lock_irqsave(&clk_list_lock, flags);
+ list_for_each_entry(table, &clk_list, node) {
+ for (i = 0; i < table->num_clocks; i++)
+ if (!strcmp(cmp, table->clocks[i].clk->dbg_name)) {
+ parent = table->clocks[i].clk;
+ break;
+ }
+ if (parent)
+ break;
+ }
+
+ if (!parent) {
+ ret = -EINVAL;
+ goto err;
+ }
+
+ spin_unlock_irqrestore(&clk_list_lock, flags);
+ ret = clk_set_parent(clock, table->clocks[i].clk);
+ if (ret)
+ return ret;
+
+ return cnt;
+err:
+ spin_unlock_irqrestore(&clk_list_lock, flags);
+ return ret;
}
+
static const struct file_operations clock_parent_fops = {
- .open = clock_parent_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = seq_release,
-};
-
-static struct dentry *debugfs_base;
-static u32 debug_suspend;
-
-struct clk_table {
- struct list_head node;
- struct clk_lookup *clocks;
- size_t num_clocks;
+ .open = simple_open,
+ .read = clock_parent_read,
+ .write = clock_parent_write,
};
static int clock_debug_add(struct clk *clock)
@@ -305,8 +351,6 @@
debugfs_remove_recursive(clk_dir);
return -ENOMEM;
}
-static LIST_HEAD(clk_list);
-static DEFINE_SPINLOCK(clk_list_lock);
/**
* clock_debug_register() - Add additional clocks to clock debugfs hierarchy
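With the parent node now writable, any clock registered through the debugfs table list can be named as the new parent. A hypothetical registration, assuming clock_debug_register() takes a lookup table and its length as the clk_table bookkeeping above suggests (all other names are placeholders):

    /* Illustrative: clocks must sit in a registered clk_lookup table before
     * the parent-write handler can resolve them by dbg_name. */
    static struct clk_lookup example_clocks[] = {
            CLK_LOOKUP("core_clk", example_core_clk.c, "example-device"),
    };

    static int __init example_clock_debug_init(void)
    {
            return clock_debug_register(example_clocks, ARRAY_SIZE(example_clocks));
    }

From userspace the switch then amounts to writing the dbg_name of the desired parent into a clock's parent node under the clk debugfs directory; reading the node back reports the current parent or "None".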
diff --git a/arch/arm/mach-msm/clock-local2.c b/arch/arm/mach-msm/clock-local2.c
index 8c2121f..f6c11b0 100644
--- a/arch/arm/mach-msm/clock-local2.c
+++ b/arch/arm/mach-msm/clock-local2.c
@@ -476,10 +476,7 @@
if (branch->max_div)
return branch->c.rate;
- if (!branch->has_sibling)
- return clk_get_rate(c->parent);
-
- return 0;
+ return clk_get_rate(c->parent);
}
static int branch_clk_list_rate(struct clk *c, unsigned n)
@@ -596,6 +593,13 @@
ret = -EINVAL;
}
writel_relaxed(cbcr_val, CBCR_REG(branch));
+ /*
+ * 8974v2.2 has a requirement that writes to set bits 13 and 14 are
+ * separated by at least 2 bus cycles. Cover one of these cycles by
+ * performing an extra write here. The other cycle is covered by the
+ * read-modify-write design of this function.
+ */
+ writel_relaxed(cbcr_val, CBCR_REG(branch));
spin_unlock_irqrestore(&local_clock_reg_lock, irq_flags);
/* Make sure write is issued before returning. */
@@ -805,6 +809,155 @@
}
+#define ENABLE_REG(x) (*(x)->base + (x)->enable_reg)
+#define SELECT_REG(x) (*(x)->base + (x)->select_reg)
+
+/*
+ * mux clock functions
+ */
+static void mux_clk_halt_check(void)
+{
+ /* Ensure that the delay starts after the mux disable/enable. */
+ mb();
+ udelay(HALT_CHECK_DELAY_US);
+}
+
+static int mux_clk_enable(struct clk *c)
+{
+ unsigned long flags;
+ u32 regval;
+ struct mux_clk *mux = to_mux_clk(c);
+
+ spin_lock_irqsave(&local_clock_reg_lock, flags);
+ regval = readl_relaxed(ENABLE_REG(mux));
+ regval |= mux->enable_mask;
+ writel_relaxed(regval, ENABLE_REG(mux));
+ spin_unlock_irqrestore(&local_clock_reg_lock, flags);
+
+ /* Wait for clock to enable before continuing. */
+ mux_clk_halt_check();
+
+ return 0;
+}
+
+static void mux_clk_disable(struct clk *c)
+{
+ unsigned long flags;
+ struct mux_clk *mux = to_mux_clk(c);
+ u32 regval;
+
+ spin_lock_irqsave(&local_clock_reg_lock, flags);
+ regval = readl_relaxed(ENABLE_REG(mux));
+ regval &= ~mux->enable_mask;
+ writel_relaxed(regval, ENABLE_REG(mux));
+ spin_unlock_irqrestore(&local_clock_reg_lock, flags);
+
+ /* Wait for clock to disable before continuing. */
+ mux_clk_halt_check();
+}
+
+static int mux_source_switch(struct mux_clk *mux, struct mux_source *dest)
+{
+ unsigned long flags;
+ u32 regval;
+ int ret = 0;
+
+ ret = __clk_pre_reparent(&mux->c, dest->clk, &flags);
+ if (ret)
+ goto out;
+
+ regval = readl_relaxed(SELECT_REG(mux));
+ regval &= ~mux->select_mask;
+ regval |= dest->select_val;
+ writel_relaxed(regval, SELECT_REG(mux));
+
+ /* Make sure switch request goes through before proceeding. */
+ mb();
+
+ __clk_post_reparent(&mux->c, mux->c.parent, &flags);
+out:
+ return ret;
+}
+
+static int mux_clk_set_parent(struct clk *c, struct clk *parent)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ struct mux_source *dest = NULL;
+ int ret;
+
+ if (!mux->sources || !parent)
+ return -EPERM;
+
+ dest = mux->sources;
+
+ while (dest->clk) {
+ if (dest->clk == parent)
+ break;
+ dest++;
+ }
+
+ if (!dest->clk)
+ return -EPERM;
+
+ ret = mux_source_switch(mux, dest);
+ if (ret)
+ return ret;
+
+ mux->c.rate = clk_get_rate(dest->clk);
+
+ return 0;
+}
+
+static enum handoff mux_clk_handoff(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ u32 mask = mux->enable_mask;
+ u32 regval = readl_relaxed(ENABLE_REG(mux));
+
+ c->rate = clk_get_rate(c->parent);
+
+ if (mask == (regval & mask))
+ return HANDOFF_ENABLED_CLK;
+
+ return HANDOFF_DISABLED_CLK;
+}
+
+static struct clk *mux_clk_get_parent(struct clk *c)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ struct mux_source *parent = NULL;
+ u32 regval = readl_relaxed(SELECT_REG(mux));
+
+ if (!mux->sources)
+ return ERR_PTR(-EPERM);
+
+ parent = mux->sources;
+
+ while (parent->clk) {
+ if ((regval & mux->select_mask) == parent->select_val)
+ return parent->clk;
+
+ parent++;
+ }
+
+ return ERR_PTR(-EPERM);
+}
+
+static int mux_clk_list_rate(struct clk *c, unsigned n)
+{
+ struct mux_clk *mux = to_mux_clk(c);
+ int i;
+
+ for (i = 0; i < n; i++)
+ if (!mux->sources[i].clk)
+ break;
+
+ if (!mux->sources[i].clk)
+ return -ENXIO;
+
+ return clk_get_rate(mux->sources[i].clk);
+}
+
struct clk_ops clk_ops_empty;
struct clk_ops clk_ops_rcg = {
@@ -868,3 +1021,14 @@
.reset = local_vote_clk_reset,
.handoff = local_vote_clk_handoff,
};
+
+struct clk_ops clk_ops_mux = {
+ .enable = mux_clk_enable,
+ .disable = mux_clk_disable,
+ .set_parent = mux_clk_set_parent,
+ .get_parent = mux_clk_get_parent,
+ .handoff = mux_clk_handoff,
+ .list_rate = mux_clk_list_rate,
+};
+
+
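The mux ops above only ever switch between the sources declared in the .sources[] table, and after a switch the mux's cached rate follows the newly selected source. A small sketch of the consumer-visible behaviour (the clock names are placeholders):

    /* Illustrative behaviour of clk_ops_mux as seen through clk_set_parent(). */
    rc = clk_set_parent(&my_mux_clk.c, &listed_source.c);  /* reprograms select_mask */
    if (!rc)
            pr_debug("mux now reports %lu Hz\n", clk_get_rate(&my_mux_clk.c));

    rc = clk_set_parent(&my_mux_clk.c, &unlisted_clock.c); /* not in .sources[] */
    /* rc == -EPERM: the mux refuses parents it was not declared with. */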
diff --git a/arch/arm/mach-msm/clock-local2.h b/arch/arm/mach-msm/clock-local2.h
index 7ac7bd3..7882edb 100644
--- a/arch/arm/mach-msm/clock-local2.h
+++ b/arch/arm/mach-msm/clock-local2.h
@@ -157,6 +157,38 @@
return container_of(clk, struct measure_clk, c);
}
+struct mux_source {
+ struct clk *const clk;
+ const u32 select_val;
+};
+
+/**
+ * struct mux_clk - mux clock
+ * @c: clk
+ * @enable_reg: register that contains the enable bit(s) for the mux
+ * @select_reg: register that contains the source selection bits for the mux
+ * @enable_mask: mask that enables the mux
+ * @select_mask: mask for the source selection bits
+ * @sources: list of mux sources
+ * @base: pointer to base address of ioremapped registers.
+ */
+struct mux_clk {
+ struct clk c;
+ const u32 enable_reg;
+ const u32 select_reg;
+ const u32 enable_mask;
+ const u32 select_mask;
+
+ struct mux_source *sources;
+
+ void *const __iomem *base;
+};
+
+static inline struct mux_clk *to_mux_clk(struct clk *clk)
+{
+ return container_of(clk, struct mux_clk, c);
+}
+
/*
* Generic set-rate implementations
*/
@@ -168,6 +200,7 @@
*/
extern spinlock_t local_clock_reg_lock;
+extern struct clk_ops clk_ops_mux;
extern struct clk_ops clk_ops_empty;
extern struct clk_ops clk_ops_rcg;
extern struct clk_ops clk_ops_rcg_mnd;
diff --git a/arch/arm/mach-msm/gdsc.c b/arch/arm/mach-msm/gdsc.c
index a07b13d..6665d66 100644
--- a/arch/arm/mach-msm/gdsc.c
+++ b/arch/arm/mach-msm/gdsc.c
@@ -38,7 +38,7 @@
#define EN_FEW_WAIT_VAL (0x8 << 16)
#define CLK_DIS_WAIT_VAL (0x2 << 12)
-#define TIMEOUT_US 1000
+#define TIMEOUT_US 100
struct gdsc {
struct regulator_dev *rdev;
diff --git a/arch/arm/mach-msm/include/mach/board.h b/arch/arm/mach-msm/include/mach/board.h
index 72f5051..22f74c8 100644
--- a/arch/arm/mach-msm/include/mach/board.h
+++ b/arch/arm/mach-msm/include/mach/board.h
@@ -600,7 +600,7 @@
void msm_map_msm7x30_io(void);
void msm_map_fsm9xxx_io(void);
void msm_map_8974_io(void);
-void msm_map_zinc_io(void);
+void msm_map_8084_io(void);
void msm_map_msmkrypton_io(void);
void msm_map_msm8625_io(void);
void msm_map_msm9625_io(void);
@@ -610,7 +610,7 @@
void msm_8974_reserve(void);
void msm_8974_very_early(void);
void msm_8974_init_gpiomux(void);
-void msmzinc_init_gpiomux(void);
+void apq8084_init_gpiomux(void);
void msm9625_init_gpiomux(void);
void msmkrypton_init_gpiomux(void);
void msm_map_mpq8092_io(void);
@@ -653,7 +653,7 @@
void msm_snddev_tx_route_config(void);
void msm_snddev_tx_route_deconfig(void);
-extern unsigned int msm_shared_ram_phys; /* defined in arch/arm/mach-msm/io.c */
+extern phys_addr_t msm_shared_ram_phys; /* defined in arch/arm/mach-msm/io.c */
#endif
diff --git a/arch/arm/mach-msm/include/mach/iommu.h b/arch/arm/mach-msm/include/mach/iommu.h
index f750dc8..23d204a 100644
--- a/arch/arm/mach-msm/include/mach/iommu.h
+++ b/arch/arm/mach-msm/include/mach/iommu.h
@@ -129,6 +129,7 @@
* @iommu_power_off: Turn off power to unit
* @iommu_clk_on: Turn on clks to unit
* @iommu_clk_off: Turn off clks to unit
+ * @iommu_lock_initialize: Initialize the remote lock
* @iommu_lock_acquire: Acquire any locks needed
* @iommu_lock_release: Release locks needed
*/
@@ -137,6 +138,7 @@
void (*iommu_power_off)(struct msm_iommu_drvdata *);
int (*iommu_clk_on)(struct msm_iommu_drvdata *);
void (*iommu_clk_off)(struct msm_iommu_drvdata *);
+ void * (*iommu_lock_initialize)(void);
void (*iommu_lock_acquire)(void);
void (*iommu_lock_release)(void);
};
diff --git a/arch/arm/mach-msm/include/mach/ipa.h b/arch/arm/mach-msm/include/mach/ipa.h
index cc03c48..90757b6 100644
--- a/arch/arm/mach-msm/include/mach/ipa.h
+++ b/arch/arm/mach-msm/include/mach/ipa.h
@@ -557,17 +557,6 @@
int ipa_set_single_ndp_per_mbim(bool enable);
/*
- * rmnet bridge
- */
-int rmnet_bridge_init(void);
-
-int rmnet_bridge_disconnect(void);
-
-int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id);
-
-/*
* SW bridge (between IPA and A2)
*/
int ipa_bridge_setup(enum ipa_bridge_dir dir, enum ipa_bridge_type type,
@@ -917,26 +906,6 @@
}
/*
- * rmnet bridge
- */
-static inline int rmnet_bridge_init(void)
-{
- return -EPERM;
-}
-
-static inline int rmnet_bridge_disconnect(void)
-{
- return -EPERM;
-}
-
-static inline int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id)
-{
- return -EPERM;
-}
-
-/*
* SW bridge (between IPA and A2)
*/
static inline int ipa_bridge_setup(enum ipa_bridge_dir dir,
diff --git a/arch/arm/mach-msm/include/mach/kgsl.h b/arch/arm/mach-msm/include/mach/kgsl.h
index b68aff8..349dbe7 100644
--- a/arch/arm/mach-msm/include/mach/kgsl.h
+++ b/arch/arm/mach-msm/include/mach/kgsl.h
@@ -20,6 +20,7 @@
#define KGSL_CLK_MEM 0x00000008
#define KGSL_CLK_MEM_IFACE 0x00000010
#define KGSL_CLK_AXI 0x00000020
+#define KGSL_CLK_ALT_MEM_IFACE 0x00000040
#define KGSL_MAX_PWRLEVELS 10
@@ -50,9 +51,19 @@
enum kgsl_iommu_context_id ctx_id;
};
+/*
+ * struct kgsl_device_iommu_data - Struct holding iommu context data obtained
+ * from dtsi file
+ * @iommu_ctxs: Pointer to array of structs holding context name and id
+ * @iommu_ctx_count: Number of contexts defined in the dtsi file
+ * @iommu_halt_enable: Indicates if the smmu halt h/w feature is supported
+ * @physstart: Start of iommu registers physical address
+ * @physend: End of iommu registers physical address
+ */
struct kgsl_device_iommu_data {
const struct kgsl_iommu_ctx *iommu_ctxs;
int iommu_ctx_count;
+ int iommu_halt_enable;
unsigned int physstart;
unsigned int physend;
};
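For orientation, a hypothetical instance of the structure documented above, shaped the way the KGSL probe code would fill it from the dtsi; the context names, ids and addresses below are all invented for the sketch:

    /* Illustrative only; real values come from the device tree at probe time. */
    static const struct kgsl_iommu_ctx example_ctxs[] = {
            { "gfx3d_user", KGSL_IOMMU_CONTEXT_USER },   /* name, ctx_id (assumed ids) */
            { "gfx3d_priv", KGSL_IOMMU_CONTEXT_PRIV },
    };

    static struct kgsl_device_iommu_data example_iommu_data = {
            .iommu_ctxs        = example_ctxs,
            .iommu_ctx_count   = ARRAY_SIZE(example_ctxs),
            .iommu_halt_enable = 1,                 /* SMMU halt feature present */
            .physstart         = 0xfdb10000,        /* placeholder register window */
            .physend           = 0xfdb20000,
    };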
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-zinc.h b/arch/arm/mach-msm/include/mach/msm_iomap-8084.h
similarity index 74%
rename from arch/arm/mach-msm/include/mach/msm_iomap-zinc.h
rename to arch/arm/mach-msm/include/mach/msm_iomap-8084.h
index 0a33055..43f1de0 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-zinc.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-8084.h
@@ -11,8 +11,8 @@
*/
-#ifndef __ASM_ARCH_MSM_IOMAP_zinc_H
-#define __ASM_ARCH_MSM_IOMAP_zinc_H
+#ifndef __ASM_ARCH_MSM_IOMAP_8084_H
+#define __ASM_ARCH_MSM_IOMAP_8084_H
/* Physical base address and size of peripherals.
* Ordered by the virtual base addresses they will be mapped at.
@@ -23,15 +23,15 @@
*
*/
-#define MSMZINC_SHARED_RAM_PHYS 0x0FA00000
+#define APQ8084_SHARED_RAM_PHYS 0x0FA00000
-#define MSMZINC_QGIC_DIST_PHYS 0xF9000000
-#define MSMZINC_QGIC_DIST_SIZE SZ_4K
+#define APQ8084_QGIC_DIST_PHYS 0xF9000000
+#define APQ8084_QGIC_DIST_SIZE SZ_4K
-#define MSMZINC_TLMM_PHYS 0xFD510000
-#define MSMZINC_TLMM_SIZE SZ_16K
+#define APQ8084_TLMM_PHYS 0xFD510000
+#define APQ8084_TLMM_SIZE SZ_16K
-#ifdef CONFIG_DEBUG_MSMZINC_UART
+#ifdef CONFIG_DEBUG_APQ8084_UART
#define MSM_DEBUG_UART_BASE IOMEM(0xFA71E000)
#define MSM_DEBUG_UART_PHYS 0xF991E000
#endif
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap-9625.h b/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
index 9a8bfc1..31b19b3 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap-9625.h
@@ -38,8 +38,8 @@
#define MSM9625_MPM2_PSHOLD_SIZE SZ_4K
#ifdef CONFIG_DEBUG_MSM9625_UART
-#define MSM_DEBUG_UART_BASE IOMEM(0xFA71E000)
-#define MSM_DEBUG_UART_PHYS 0xF991E000
+#define MSM_DEBUG_UART_BASE IOMEM(0xFA71F000)
+#define MSM_DEBUG_UART_PHYS 0xF991F000
#endif
#endif
diff --git a/arch/arm/mach-msm/include/mach/msm_iomap.h b/arch/arm/mach-msm/include/mach/msm_iomap.h
index f27eb36..a90e78a 100644
--- a/arch/arm/mach-msm/include/mach/msm_iomap.h
+++ b/arch/arm/mach-msm/include/mach/msm_iomap.h
@@ -129,7 +129,7 @@
#include "msm_iomap-8064.h"
#include "msm_iomap-9615.h"
#include "msm_iomap-8974.h"
-#include "msm_iomap-zinc.h"
+#include "msm_iomap-8084.h"
#include "msm_iomap-9625.h"
#include "msm_iomap-8092.h"
#include "msm_iomap-8226.h"
diff --git a/arch/arm/mach-msm/include/mach/msm_smd.h b/arch/arm/mach-msm/include/mach/msm_smd.h
index a8c7bb7..d155c6f 100644
--- a/arch/arm/mach-msm/include/mach/msm_smd.h
+++ b/arch/arm/mach-msm/include/mach/msm_smd.h
@@ -279,6 +279,17 @@
*/
int smd_write_end(smd_channel_t *ch);
+/**
+ * smd_write_segment_avail() - available write space for packet transactions
+ * @ch: channel to write packet to
+ * @returns: number of bytes available to write, or -ENODEV for an invalid ch
+ *
+ * This is a version of smd_write_avail() intended for use with packet
+ * transactions. This version correctly accounts for any internal reserved
+ * space at all stages of the transaction.
+ */
+int smd_write_segment_avail(smd_channel_t *ch);
+
/*
* Returns a pointer to the subsystem name or NULL if no
* subsystem name is available.
@@ -441,6 +452,11 @@
return -ENODEV;
}
+static inline int smd_write_segment_avail(smd_channel_t *ch)
+{
+ return -ENODEV;
+}
+
static inline const char *smd_edge_to_subsystem(uint32_t type)
{
return NULL;
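smd_write_segment_avail() is meant to pair with the existing packet-transaction calls (smd_write_start(), smd_write_segment(), smd_write_end()) declared earlier in this header. A schematic writer built on it; the wait is simplified, a real caller such as ipc_router_smd_xprt.c sleeps on a waitqueue instead of spinning:

    /* Illustrative packet write; error paths and the real write-avail wait
     * are trimmed. */
    static int example_send_pkt(smd_channel_t *ch, char *buf, int len)
    {
            int written = 0, rc;

            rc = smd_write_start(ch, len);          /* open the packet transaction */
            if (rc < 0)
                    return rc;

            while (written < len) {
                    if (smd_write_segment_avail(ch) <= 0)
                            continue;               /* wait for write space here */
                    rc = smd_write_segment(ch, buf + written, len - written, 0);
                    if (rc < 0)
                            return rc;
                    written += rc;
            }
            return smd_write_end(ch);               /* close the transaction */
    }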
diff --git a/arch/arm/mach-msm/include/mach/msm_smsm.h b/arch/arm/mach-msm/include/mach/msm_smsm.h
index d983ce5..81a6399 100644
--- a/arch/arm/mach-msm/include/mach/msm_smsm.h
+++ b/arch/arm/mach-msm/include/mach/msm_smsm.h
@@ -256,6 +256,18 @@
int smsm_check_for_modem_crash(void);
void *smem_find(unsigned id, unsigned size);
void *smem_get_entry(unsigned id, unsigned *size);
+
+/**
+ * smem_virt_to_phys() - Convert SMEM address to physical address.
+ *
+ * @smem_address: Virtual address returned by smem_alloc()/smem_alloc2()
+ * @returns: Physical address (or NULL if there is a failure)
+ *
+ * This function should only be used if an SMEM item needs to be handed
+ * off to a DMA engine.
+ */
+phys_addr_t smem_virt_to_phys(void *smem_address);
+
#else
static inline void *smem_alloc(unsigned id, unsigned size)
{
@@ -339,5 +351,9 @@
{
return NULL;
}
+static inline phys_addr_t smem_virt_to_phys(void *smem_address)
+{
+ return NULL;
+}
#endif
#endif
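As the kernel-doc above stresses, smem_virt_to_phys() exists only for handing an SMEM item to a DMA engine. A minimal sketch, using an arbitrary example item id and a placeholder for the engine's own API:

    /* Illustrative only: convert an SMEM allocation for a DMA engine. */
    void *vaddr = smem_alloc(SMEM_ID_VENDOR0, 1024);         /* example item id */
    phys_addr_t paddr = vaddr ? smem_virt_to_phys(vaddr) : 0;

    if (paddr)
            example_start_dma(paddr, 1024);  /* placeholder, not a real API */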
diff --git a/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h b/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
index 91e4ef1..ea345fb 100644
--- a/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
+++ b/arch/arm/mach-msm/include/mach/qdsp6v2/q6core.h
@@ -53,19 +53,10 @@
uint8_t model_ID[128];
};
-#define ADSP_CMD_SET_DOLBY_MANUFACTURER_ID 0x00012918
-
-struct adsp_dolby_manufacturer_id {
- struct apr_hdr hdr;
- int manufacturer_id;
-};
-
int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps);
uint32_t core_get_adsp_version(void);
uint32_t core_set_dts_model_id(uint32_t id_size, uint8_t *id);
-uint32_t core_set_dolby_manufacturer_id(int manufacturer_id);
-
#endif /* __Q6CORE_H__ */
diff --git a/arch/arm/mach-msm/include/mach/qseecomi.h b/arch/arm/mach-msm/include/mach/qseecomi.h
index 20dc851..e889242 100644
--- a/arch/arm/mach-msm/include/mach/qseecomi.h
+++ b/arch/arm/mach-msm/include/mach/qseecomi.h
@@ -18,6 +18,14 @@
#define QSEECOM_KEY_ID_SIZE 32
+#define QSEOS_RESULT_FAIL_LOAD_KS -48
+#define QSEOS_RESULT_FAIL_SAVE_KS -49
+#define QSEOS_RESULT_FAIL_MAX_KEYS -50
+#define QSEOS_RESULT_FAIL_KEY_ID_EXISTS -51
+#define QSEOS_RESULT_FAIL_KEY_ID_DNE -52
+#define QSEOS_RESULT_FAIL_KS_OP -53
+#define QSEOS_RESULT_FAIL_CE_PIPE_INVALID -54
+
enum qseecom_command_scm_resp_type {
QSEOS_APP_ID = 0xEE01,
QSEOS_LISTENER_ID
@@ -49,6 +57,22 @@
QSEOS_RESULT_FAILURE = 0xFFFFFFFF
};
+/* Key Management requests */
+enum qseecom_qceos_key_gen_cmd_id {
+ QSEOS_GENERATE_KEY = 0x11,
+ QSEOS_DELETE_KEY,
+ QSEOS_MAX_KEY_COUNT,
+ QSEOS_SET_KEY,
+ QSEOS_KEY_CMD_MAX = 0xEFFFFFFF
+};
+
+enum qseecom_pipe_type {
+ QSEOS_PIPE_ENC = 0,
+ QSEOS_PIPE_ENC_XTS,
+ QSEOS_PIPE_AUTH,
+ QSEOS_PIPE_ENUM_FILL = 0x7FFFFFFF
+};
+
__packed struct qsee_apps_region_info_ireq {
uint32_t qsee_cmd_id;
uint32_t addr;
@@ -143,29 +167,24 @@
unsigned int rsp_len; /* in/out */
};
-/* Key Management requests */
-enum qseecom_qceos_key_gen_cmd_id {
- QSEOS_GENERATE_KEY = 0x02,
- QSEOS_SET_KEY,
- QSEOS_DELETE_KEY,
- QSEOS_MAX_KEY_COUNT,
- QSEOS_KEY_CMD_MAX = 0xEFFFFFFF
-};
-
__packed struct qseecom_key_generate_ireq {
+ uint32_t qsee_command_id;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
};
__packed struct qseecom_key_select_ireq {
+ uint32_t qsee_command_id;
uint32_t ce;
uint32_t pipe;
+ uint32_t pipe_type;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
unsigned char hash[QSEECOM_HASH_SIZE];
};
__packed struct qseecom_key_delete_ireq {
+ uint32_t qsee_command_id;
uint32_t flags;
uint8_t key_id[QSEECOM_KEY_ID_SIZE];
};
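The qsee_command_id field added to each key request carries the command inside the payload itself. A hypothetical generate request as the qseecom driver would presumably now build it (the flags value and the key id source are placeholders):

    /* Illustrative only; the real driver derives key_id and flags from its state. */
    struct qseecom_key_generate_ireq ireq = {
            .qsee_command_id = QSEOS_GENERATE_KEY,
            .flags           = 0,
    };
    memcpy(ireq.key_id, example_key_id, QSEECOM_KEY_ID_SIZE);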
diff --git a/arch/arm/mach-msm/include/mach/scm.h b/arch/arm/mach-msm/include/mach/scm.h
index 42e04dd..4186603 100644
--- a/arch/arm/mach-msm/include/mach/scm.h
+++ b/arch/arm/mach-msm/include/mach/scm.h
@@ -22,7 +22,6 @@
#define SCM_SVC_FUSE 0x8
#define SCM_SVC_PWR 0x9
#define SCM_SVC_MP 0xC
-#define SCM_SVC_CRYPTO 0xA
#define SCM_SVC_DCVS 0xD
#define SCM_SVC_ES 0x10
#define SCM_SVC_TZSCHEDULER 0xFC
diff --git a/arch/arm/mach-msm/include/mach/socinfo.h b/arch/arm/mach-msm/include/mach/socinfo.h
index 7c9882e..d4ea4ac 100644
--- a/arch/arm/mach-msm/include/mach/socinfo.h
+++ b/arch/arm/mach-msm/include/mach/socinfo.h
@@ -46,8 +46,8 @@
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msm8610")
#define early_machine_is_mpq8092() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,mpq8092")
-#define early_machine_is_msmzinc() \
- of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmzinc")
+#define early_machine_is_apq8084() \
+ of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,apq8084")
#define early_machine_is_msmkrypton() \
of_flat_dt_is_compatible(of_get_flat_dt_root(), "qcom,msmkrypton")
#else
@@ -63,7 +63,7 @@
#define early_machine_is_msm8610() 0
#define early_machine_is_mpq8092() 0
-#define early_machine_is_msmzinc() 0
+#define early_machine_is_apq8084() 0
#define early_machine_is_msmkrypton() 0
#endif
@@ -102,7 +102,7 @@
MSM_CPU_8226,
MSM_CPU_8610,
MSM_CPU_8625Q,
- MSM_CPU_ZINC,
+ MSM_CPU_8084,
MSM_CPU_KRYPTON,
};
diff --git a/arch/arm/mach-msm/io.c b/arch/arm/mach-msm/io.c
index ecac4a5..37dbbab 100644
--- a/arch/arm/mach-msm/io.c
+++ b/arch/arm/mach-msm/io.c
@@ -45,7 +45,7 @@
/* msm_shared_ram_phys default value of 0x00100000 is the most common value
* and should work as-is for any target without stacked memory.
*/
-unsigned int msm_shared_ram_phys = 0x00100000;
+phys_addr_t msm_shared_ram_phys = 0x00100000;
static void __init msm_map_io(struct map_desc *io_desc, int size)
{
@@ -321,27 +321,27 @@
}
#endif /* CONFIG_ARCH_MSM8974 */
-#ifdef CONFIG_ARCH_MSMZINC
-static struct map_desc msm_zinc_io_desc[] __initdata = {
- MSM_CHIP_DEVICE(QGIC_DIST, MSMZINC),
- MSM_CHIP_DEVICE(TLMM, MSMZINC),
+#ifdef CONFIG_ARCH_APQ8084
+static struct map_desc msm_8084_io_desc[] __initdata = {
+ MSM_CHIP_DEVICE(QGIC_DIST, APQ8084),
+ MSM_CHIP_DEVICE(TLMM, APQ8084),
{
.virtual = (unsigned long) MSM_SHARED_RAM_BASE,
.length = MSM_SHARED_RAM_SIZE,
.type = MT_DEVICE,
},
-#ifdef CONFIG_DEBUG_MSMZINC_UART
+#ifdef CONFIG_DEBUG_APQ8084_UART
MSM_DEVICE(DEBUG_UART),
#endif
};
-void __init msm_map_zinc_io(void)
+void __init msm_map_8084_io(void)
{
- msm_shared_ram_phys = MSMZINC_SHARED_RAM_PHYS;
- msm_map_io(msm_zinc_io_desc, ARRAY_SIZE(msm_zinc_io_desc));
+ msm_shared_ram_phys = APQ8084_SHARED_RAM_PHYS;
+ msm_map_io(msm_8084_io_desc, ARRAY_SIZE(msm_8084_io_desc));
of_scan_flat_dt(msm_scan_dt_map_imem, NULL);
}
-#endif /* CONFIG_ARCH_MSMZINC */
+#endif /* CONFIG_ARCH_APQ8084 */
#ifdef CONFIG_ARCH_MSM7X30
static struct map_desc msm7x30_io_desc[] __initdata = {
diff --git a/arch/arm/mach-msm/ipc_router.c b/arch/arm/mach-msm/ipc_router.c
index d81dbb4..0d617a6 100644
--- a/arch/arm/mach-msm/ipc_router.c
+++ b/arch/arm/mach-msm/ipc_router.c
@@ -1587,9 +1587,11 @@
if (!rport_ptr)
pr_err("%s: Remote port create "
"failed\n", __func__);
- rport_ptr->sec_rule =
- msm_ipc_get_security_rule(
- msg->srv.service, msg->srv.instance);
+ else
+ rport_ptr->sec_rule =
+ msm_ipc_get_security_rule(
+ msg->srv.service,
+ msg->srv.instance);
}
wake_up(&newserver_wait);
}
@@ -1890,6 +1892,7 @@
head_skb = skb_peek(pkt->pkt_fragment_q);
if (!head_skb) {
pr_err("%s: pkt_fragment_q is empty\n", __func__);
+ release_pkt(pkt);
return -EINVAL;
}
hdr = (struct rr_header *)skb_push(head_skb, IPC_ROUTER_HDR_SIZE);
diff --git a/arch/arm/mach-msm/ipc_router_smd_xprt.c b/arch/arm/mach-msm/ipc_router_smd_xprt.c
index 88ab8e0..b2ec816 100644
--- a/arch/arm/mach-msm/ipc_router_smd_xprt.c
+++ b/arch/arm/mach-msm/ipc_router_smd_xprt.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -146,11 +146,11 @@
skb_queue_walk(pkt->pkt_fragment_q, ipc_rtr_pkt) {
offset = 0;
while (offset < ipc_rtr_pkt->len) {
- if (!smd_write_avail(smd_xprtp->channel))
+ if (!smd_write_segment_avail(smd_xprtp->channel))
smd_enable_read_intr(smd_xprtp->channel);
wait_event(smd_xprtp->write_avail_wait_q,
- (smd_write_avail(smd_xprtp->channel) ||
+ (smd_write_segment_avail(smd_xprtp->channel) ||
smd_xprtp->ss_reset));
smd_disable_read_intr(smd_xprtp->channel);
spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
@@ -175,11 +175,11 @@
}
if (align_sz) {
- if (smd_write_avail(smd_xprtp->channel) < align_sz)
+ if (smd_write_segment_avail(smd_xprtp->channel) < align_sz)
smd_enable_read_intr(smd_xprtp->channel);
wait_event(smd_xprtp->write_avail_wait_q,
- ((smd_write_avail(smd_xprtp->channel) >=
+ ((smd_write_segment_avail(smd_xprtp->channel) >=
align_sz) || smd_xprtp->ss_reset));
smd_disable_read_intr(smd_xprtp->channel);
spin_lock_irqsave(&smd_xprtp->ss_reset_lock, flags);
@@ -357,7 +357,7 @@
if (smd_read_avail(smd_xprtp->channel))
queue_delayed_work(smd_xprtp->smd_xprt_wq,
&smd_xprtp->read_work, 0);
- if (smd_write_avail(smd_xprtp->channel))
+ if (smd_write_segment_avail(smd_xprtp->channel))
wake_up(&smd_xprtp->write_avail_wait_q);
break;
diff --git a/arch/arm/mach-msm/ipc_socket.c b/arch/arm/mach-msm/ipc_socket.c
index c0422a1..342663e 100644
--- a/arch/arm/mach-msm/ipc_socket.c
+++ b/arch/arm/mach-msm/ipc_socket.c
@@ -367,7 +367,8 @@
if (port_ptr->type == CLIENT_PORT)
wait_for_irsc_completion();
ipc_buf = skb_peek(msg);
- msm_ipc_router_ipc_log(IPC_SEND, ipc_buf, port_ptr);
+ if (ipc_buf)
+ msm_ipc_router_ipc_log(IPC_SEND, ipc_buf, port_ptr);
ret = msm_ipc_router_send_to(port_ptr, msg, &dest->address);
if (ret == (IPC_ROUTER_HDR_SIZE + total_len))
ret = total_len;
@@ -429,7 +430,8 @@
ret = msm_ipc_router_extract_msg(m, msg);
ipc_buf = skb_peek(msg);
- msm_ipc_router_ipc_log(IPC_RECV, ipc_buf, port_ptr);
+ if (ipc_buf)
+ msm_ipc_router_ipc_log(IPC_RECV, ipc_buf, port_ptr);
msm_ipc_router_release_msg(msg);
msg = NULL;
release_sock(sk);
diff --git a/arch/arm/mach-msm/krait-regulator.c b/arch/arm/mach-msm/krait-regulator.c
index af2fbab..52d20e3 100644
--- a/arch/arm/mach-msm/krait-regulator.c
+++ b/arch/arm/mach-msm/krait-regulator.c
@@ -133,7 +133,6 @@
#define LDO_DELTA_MIN 10000
#define LDO_DELTA_MAX 100000
-#define MSM_L2_SAW_PHYS 0xf9012000
/**
* struct pmic_gang_vreg -
* @name: the string used to represent the gang
@@ -378,6 +377,21 @@
return online_total;
}
+static int get_total_load(struct krait_power_vreg *from)
+{
+ int load_total = 0;
+ struct krait_power_vreg *kvreg;
+ struct pmic_gang_vreg *pvreg = from->pvreg;
+
+ list_for_each_entry(kvreg, &pvreg->krait_power_vregs, link) {
+ if (!kvreg->online)
+ continue;
+ load_total += kvreg->load;
+ }
+
+ return load_total;
+}
+
static bool enable_phase_management(struct pmic_gang_vreg *pvreg)
{
struct krait_power_vreg *kvreg;
@@ -396,7 +410,8 @@
#define ONE_PHASE_COEFF 1000000
#define TWO_PHASE_COEFF 2000000
-#define PHASE_SETTLING_TIME_US 10
+#define PWM_SETTLING_TIME_US 50
+#define PHASE_SETTLING_TIME_US 50
static unsigned int pmic_gang_set_phases(struct krait_power_vreg *from,
int coeff_total)
{
@@ -404,6 +419,9 @@
int phase_count;
int rc = 0;
int n_online = num_online(pvreg);
+ int load_total;
+
+ load_total = get_total_load(from);
if (pvreg->manage_phases == false) {
if (enable_phase_management(pvreg))
@@ -413,12 +431,12 @@
}
/* First check if the coeff is low for PFM mode */
- if (coeff_total < pvreg->pfm_threshold && n_online == 1) {
+ if (load_total <= pvreg->pfm_threshold && n_online == 1) {
if (!pvreg->pfm_mode) {
rc = msm_spm_enable_fts_lpm(PMIC_FTS_MODE_PFM);
if (rc) {
- pr_err("%s PFM en failed coeff_t %d rc = %d\n",
- from->name, coeff_total, rc);
+ pr_err("%s PFM en failed load_t %d rc = %d\n",
+ from->name, load_total, rc);
return rc;
} else {
pvreg->pfm_mode = true;
@@ -436,6 +454,7 @@
return rc;
} else {
pvreg->pfm_mode = false;
+ udelay(PWM_SETTLING_TIME_US);
}
}
@@ -1310,8 +1329,6 @@
void secondary_cpu_hs_init(void *base_ptr)
{
- void *l2_saw_base;
-
/* Turn on the BHS, turn off LDO Bypass and power down LDO */
writel_relaxed(
BHS_CNT_DEFAULT << BHS_CNT_BIT_POS
@@ -1338,23 +1355,6 @@
| BHS_SEG_EN_DEFAULT << BHS_SEG_EN_BIT_POS
| BHS_EN_MASK,
base_ptr + APC_PWR_GATE_CTL);
-
- if (the_gang && the_gang->manage_phases)
- return;
-
- /* If the driver has not yet started to manage phases then enable
- * max phases.
- */
- l2_saw_base = ioremap_nocache(MSM_L2_SAW_PHYS, SZ_4K);
- if (!l2_saw_base) {
- __WARN();
- return;
- }
- writel_relaxed(0x10003, l2_saw_base + 0x1c);
- mb();
- udelay(PHASE_SETTLING_TIME_US);
-
- iounmap(l2_saw_base);
}
MODULE_LICENSE("GPL v2");
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
index 3c8348d..cd6693e 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
@@ -477,9 +477,11 @@
#define M_MODE_ADDR(b, n) \
(M_REG_BASE(b) + (0x4000 * (n)) + 0x00000210)
enum bimc_m_mode {
- M_MODE_RMSK = 0xf0000001,
+ M_MODE_RMSK = 0xf0000011,
M_MODE_WR_GATHER_BEATS_BMSK = 0xf0000000,
M_MODE_WR_GATHER_BEATS_SHFT = 0x1c,
+ M_MODE_NARROW_WR_BMSK = 0x10,
+ M_MODE_NARROW_WR_SHFT = 0x4,
M_MODE_ORDERING_MODEL_BMSK = 0x1,
M_MODE_ORDERING_MODEL_SHFT = 0x0,
};
@@ -1526,10 +1528,10 @@
reg_val = readl_relaxed(M_PRIOLVL_OVERRIDE_ADDR(binfo->
base, mas_index)) & M_PRIOLVL_OVERRIDE_RMSK;
val = qmode->fixed.prio_level <<
- M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_SHFT;
+ M_PRIOLVL_OVERRIDE_SHFT;
writel_relaxed(((reg_val &
- ~(M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_BMSK)) | (val
- & M_PRIOLVL_OVERRIDE_OVERRIDE_PRIOLVL_BMSK)),
+ ~(M_PRIOLVL_OVERRIDE_BMSK)) | (val
+ & M_PRIOLVL_OVERRIDE_BMSK)),
M_PRIOLVL_OVERRIDE_ADDR(binfo->base, mas_index));
reg_val = readl_relaxed(M_RD_CMD_OVERRIDE_ADDR(binfo->
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_core.h b/arch/arm/mach-msm/msm_bus/msm_bus_core.h
index fd2dbb5..98419a4 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_core.h
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_core.h
@@ -36,7 +36,7 @@
(((slv >= MSM_BUS_SLAVE_FIRST) && (slv <= MSM_BUS_SLAVE_LAST)) ? 1 : 0)
#define INTERLEAVED_BW(fab_pdata, bw, ports) \
- ((fab_pdata->il_flag) ? msm_bus_div64((bw), (ports)) : (bw))
+ ((fab_pdata->il_flag) ? msm_bus_div64((ports), (bw)) : (bw))
#define INTERLEAVED_VAL(fab_pdata, n) \
((fab_pdata->il_flag) ? (n) : 1)
diff --git a/arch/arm/mach-msm/msm_watchdog.c b/arch/arm/mach-msm/msm_watchdog.c
index b1c8b30..dcfe13c 100644
--- a/arch/arm/mach-msm/msm_watchdog.c
+++ b/arch/arm/mach-msm/msm_watchdog.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -23,6 +23,7 @@
#include <linux/suspend.h>
#include <linux/percpu.h>
#include <linux/interrupt.h>
+#include <linux/reboot.h>
#include <asm/fiq.h>
#include <asm/hardware/gic.h>
#include <mach/msm_iomap.h>
@@ -64,6 +65,12 @@
module_param(enable, int, 0);
/*
+ * Watchdog bark reboot timeout in seconds.
+ * Can be specified in kernel command line.
+ */
+static int reboot_bark_timeout = 22;
+module_param(reboot_bark_timeout, int, 0644);
+/*
* If the watchdog is enabled at bootup (enable=1),
* the runtime_disable sysfs node at
* /sys/module/msm_watchdog/runtime_disable
@@ -154,6 +161,27 @@
.notifier_call = panic_wdog_handler,
};
+#define get_sclk_hz(t_ms) ((t_ms / 1000) * WDT_HZ)
+#define get_reboot_bark_timeout(t_s) ((t_s * MSEC_PER_SEC) < bark_time ? \
+ get_sclk_hz(bark_time) : get_sclk_hz(t_s * MSEC_PER_SEC))
+
+static int msm_watchdog_reboot_notifier(struct notifier_block *this,
+ unsigned long code, void *unused)
+{
+
+ u64 timeout = get_reboot_bark_timeout(reboot_bark_timeout);
+ __raw_writel(timeout, msm_wdt_base + WDT_BARK_TIME);
+ __raw_writel(timeout + 3 * WDT_HZ,
+ msm_wdt_base + WDT_BITE_TIME);
+ __raw_writel(1, msm_wdt_base + WDT_RST);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block msm_reboot_notifier = {
+ .notifier_call = msm_watchdog_reboot_notifier,
+};
+
struct wdog_disable_work_data {
struct work_struct work;
struct completion complete;
@@ -177,6 +205,7 @@
}
enable = 0;
atomic_notifier_chain_unregister(&panic_notifier_list, &panic_blk);
+ unregister_reboot_notifier(&msm_reboot_notifier);
cancel_delayed_work(&dogwork_struct);
/* may be suspended after the first write above */
__raw_writel(0, msm_wdt_base + WDT_EN);
@@ -373,6 +402,10 @@
atomic_notifier_chain_register(&panic_notifier_list,
&panic_blk);
+ ret = register_reboot_notifier(&msm_reboot_notifier);
+ if (ret)
+ pr_err("Failed to register reboot notifier\n");
+
__raw_writel(1, msm_wdt_base + WDT_EN);
__raw_writel(1, msm_wdt_base + WDT_RST);
last_pet = sched_clock();
@@ -395,6 +428,11 @@
}
bark_time = pdata->bark_time;
+ /* reboot_bark_timeout (in seconds) might have been supplied as
+ * module parameter.
+ */
+ if ((reboot_bark_timeout * MSEC_PER_SEC) < bark_time)
+ reboot_bark_timeout = (bark_time / MSEC_PER_SEC);
has_vic = pdata->has_vic;
if (!pdata->has_secure) {
appsbark = 1;
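To put numbers on the new reboot path: with the default reboot_bark_timeout of 22 seconds, a bark_time no larger than 22000 ms, and WDT_HZ assumed to be the 32768 Hz sleep clock used elsewhere in mach-msm, get_reboot_bark_timeout() programs the bark register to 22 * 32768 = 720896 sclk ticks and the bite register to 720896 + 3 * 32768 = 819200 ticks, so a hung reboot gets about 25 seconds before the watchdog bites.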
diff --git a/arch/arm/mach-msm/peripheral-loader.c b/arch/arm/mach-msm/peripheral-loader.c
index 056da7d..475e8a1 100644
--- a/arch/arm/mach-msm/peripheral-loader.c
+++ b/arch/arm/mach-msm/peripheral-loader.c
@@ -290,10 +290,12 @@
static void pil_dump_segs(const struct pil_priv *priv)
{
struct pil_seg *seg;
+ phys_addr_t seg_h_paddr;
list_for_each_entry(seg, &priv->segs, list) {
- pil_info(priv->desc, "%d: %#08zx %#08lx\n", seg->num,
- seg->paddr, seg->paddr + seg->sz);
+ seg_h_paddr = seg->paddr + seg->sz;
+ pil_info(priv->desc, "%d: %pa %pa\n", seg->num,
+ &seg->paddr, &seg_h_paddr);
}
}
@@ -322,7 +324,7 @@
return 0;
}
}
- pil_err(priv->desc, "entry address %08zx not within range\n", entry);
+ pil_err(priv->desc, "entry address %pa not within range\n", &entry);
pil_dump_segs(priv);
return -EADDRNOTAVAIL;
}
@@ -489,7 +491,8 @@
static int pil_load_seg(struct pil_desc *desc, struct pil_seg *seg)
{
- int ret = 0, count, paddr;
+ int ret = 0, count;
+ phys_addr_t paddr;
char fw_name[30];
const struct firmware *fw = NULL;
const u8 *data;
diff --git a/arch/arm/mach-msm/peripheral-loader.h b/arch/arm/mach-msm/peripheral-loader.h
index ff10fe5..5aeeaf3 100644
--- a/arch/arm/mach-msm/peripheral-loader.h
+++ b/arch/arm/mach-msm/peripheral-loader.h
@@ -55,7 +55,8 @@
int (*init_image)(struct pil_desc *pil, const u8 *metadata,
size_t size);
int (*mem_setup)(struct pil_desc *pil, phys_addr_t addr, size_t size);
- int (*verify_blob)(struct pil_desc *pil, u32 phy_addr, size_t size);
+ int (*verify_blob)(struct pil_desc *pil, phys_addr_t phy_addr,
+ size_t size);
int (*proxy_vote)(struct pil_desc *pil);
int (*auth_and_reset)(struct pil_desc *pil);
void (*proxy_unvote)(struct pil_desc *pil);
diff --git a/arch/arm/mach-msm/pil-gss.c b/arch/arm/mach-msm/pil-gss.c
index d44add6..65f86bc 100644
--- a/arch/arm/mach-msm/pil-gss.c
+++ b/arch/arm/mach-msm/pil-gss.c
@@ -203,7 +203,7 @@
{
struct gss_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
void __iomem *cbase = drv->cbase;
int ret;
diff --git a/arch/arm/mach-msm/pil-modem.c b/arch/arm/mach-msm/pil-modem.c
index 30f480a..8398206 100644
--- a/arch/arm/mach-msm/pil-modem.c
+++ b/arch/arm/mach-msm/pil-modem.c
@@ -93,7 +93,7 @@
{
u32 reg;
const struct modem_data *drv = dev_get_drvdata(pil->dev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Put modem AHB0,1,2 clocks into reset */
writel_relaxed(BIT(0) | BIT(1), drv->cbase + MAHB0_SFAB_PORT_RESET);
diff --git a/arch/arm/mach-msm/pil-pronto.c b/arch/arm/mach-msm/pil-pronto.c
index 0df8739..cf29cf1 100644
--- a/arch/arm/mach-msm/pil-pronto.c
+++ b/arch/arm/mach-msm/pil-pronto.c
@@ -123,7 +123,7 @@
int rc;
struct pronto_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Deassert reset to subsystem and wait for propagation */
reg = readl_relaxed(drv->reset_base);
diff --git a/arch/arm/mach-msm/pil-q6v3.c b/arch/arm/mach-msm/pil-q6v3.c
index 0575a3d..a369878 100644
--- a/arch/arm/mach-msm/pil-q6v3.c
+++ b/arch/arm/mach-msm/pil-q6v3.c
@@ -116,7 +116,7 @@
{
u32 reg;
struct q6v3_data *drv = dev_get_drvdata(pil->dev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Put Q6 into reset */
reg = readl_relaxed(drv->cbase + LCC_Q6_FUNC);
diff --git a/arch/arm/mach-msm/pil-q6v4.c b/arch/arm/mach-msm/pil-q6v4.c
index 29d14dd..51f7aa2 100644
--- a/arch/arm/mach-msm/pil-q6v4.c
+++ b/arch/arm/mach-msm/pil-q6v4.c
@@ -130,7 +130,7 @@
{
u32 reg, err;
const struct q6v4_data *drv = pil_to_q6v4_data(pil);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
/* Enable Q6 ACLK */
writel_relaxed(0x10, drv->aclk_reg);
diff --git a/arch/arm/mach-msm/pil-q6v5-lpass.c b/arch/arm/mach-msm/pil-q6v5-lpass.c
index 3b2bbf3..04c1be3 100644
--- a/arch/arm/mach-msm/pil-q6v5-lpass.c
+++ b/arch/arm/mach-msm/pil-q6v5-lpass.c
@@ -22,6 +22,7 @@
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/sysfs.h>
+#include <linux/of_gpio.h>
#include <mach/clk.h>
#include <mach/subsystem_restart.h>
@@ -50,6 +51,8 @@
void *wcnss_notif_hdle;
void *modem_notif_hdle;
int crash_shutdown;
+ unsigned int err_fatal_irq;
+ int force_stop_gpio;
};
#define subsys_to_drv(d) container_of(d, struct lpass_data, subsys_desc)
@@ -124,7 +127,7 @@
static int pil_lpass_reset(struct pil_desc *pil)
{
struct q6v5_data *drv = container_of(pil, struct q6v5_data, desc);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
int ret;
/* Deassert reset to subsystem and wait for propagation */
@@ -259,20 +262,19 @@
restart_adsp(drv);
}
-static void adsp_smsm_state_cb(void *data, uint32_t old_state,
- uint32_t new_state)
+static irqreturn_t adsp_err_fatal_intr_handler (int irq, void *dev_id)
{
- struct lpass_data *drv = data;
+ struct lpass_data *drv = dev_id;
- /* Ignore if we're the one that set SMSM_RESET */
+ /* Ignore if we're the one that set the force stop bit in the outbound
+ * entry
+ */
if (drv->crash_shutdown)
- return;
+ return IRQ_HANDLED;
- if (new_state & SMSM_RESET) {
- pr_err("%s: ADSP SMSM state changed to SMSM_RESET, new_state = %#x, old_state = %#x\n",
- __func__, new_state, old_state);
- restart_adsp(drv);
- }
+ pr_err("Fatal error on the ADSP!\n");
+ restart_adsp(drv);
+ return IRQ_HANDLED;
}
#define SCM_Q6_NMI_CMD 0x1
@@ -356,6 +358,7 @@
struct lpass_data *drv = subsys_to_lpass(subsys);
drv->crash_shutdown = 1;
+ gpio_set_value(drv->force_stop_gpio, 1);
send_q6_nmi();
}
@@ -388,7 +391,7 @@
struct q6v5_data *q6;
struct pil_desc *desc;
struct resource *res;
- int ret;
+ int ret, gpio_clk_ready;
drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL);
if (!drv)
@@ -399,6 +402,23 @@
if (drv->wdog_irq < 0)
return drv->wdog_irq;
+ ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-err-fatal", 0));
+ if (ret < 0)
+ return ret;
+ drv->err_fatal_irq = ret;
+
+ ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-proxy-unvote", 0));
+ if (ret < 0)
+ return ret;
+ gpio_clk_ready = ret;
+
+ drv->force_stop_gpio = of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-force-stop", 0);
+ if (drv->force_stop_gpio < 0)
+ return drv->force_stop_gpio;
+
q6 = pil_q6v5_init(pdev);
if (IS_ERR(q6))
return PTR_ERR(q6);
@@ -407,6 +427,7 @@
desc = &q6->desc;
desc->owner = THIS_MODULE;
desc->proxy_timeout = PROXY_TIMEOUT_MS;
+ desc->proxy_unvote_irq = gpio_clk_ready;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "restart_reg");
q6->restart_reg = devm_request_and_ioremap(&pdev->dev, res);
@@ -470,10 +491,12 @@
if (ret)
goto err_irq;
- ret = smsm_state_cb_register(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
- if (ret < 0)
- goto err_smsm;
+ ret = devm_request_irq(&pdev->dev, drv->err_fatal_irq,
+ adsp_err_fatal_intr_handler,
+ IRQF_TRIGGER_RISING,
+ dev_name(&pdev->dev), drv);
+ if (ret)
+ goto err_irq;
drv->wcnss_notif_hdle = subsys_notif_register_notifier("wcnss", &wnb);
if (IS_ERR(drv->wcnss_notif_hdle)) {
@@ -507,9 +530,6 @@
err_notif_modem:
subsys_notif_unregister_notifier(drv->wcnss_notif_hdle, &wnb);
err_notif_wcnss:
- smsm_state_cb_deregister(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
-err_smsm:
err_irq:
subsys_unregister(drv->subsys);
err_subsys:
@@ -524,8 +544,6 @@
struct lpass_data *drv = platform_get_drvdata(pdev);
subsys_notif_unregister_notifier(drv->wcnss_notif_hdle, &wnb);
subsys_notif_unregister_notifier(drv->modem_notif_hdle, &mnb);
- smsm_state_cb_deregister(SMSM_Q6_STATE, SMSM_RESET,
- adsp_smsm_state_cb, drv);
subsys_unregister(drv->subsys);
destroy_ramdump_device(drv->ramdump_dev);
pil_desc_release(&drv->q6->desc);
diff --git a/arch/arm/mach-msm/pil-q6v5-mss.c b/arch/arm/mach-msm/pil-q6v5-mss.c
index 8f7d262..c1c3100 100644
--- a/arch/arm/mach-msm/pil-q6v5-mss.c
+++ b/arch/arm/mach-msm/pil-q6v5-mss.c
@@ -243,7 +243,7 @@
struct q6v5_data *drv = container_of(pil, struct q6v5_data, desc);
struct platform_device *pdev = to_platform_device(pil->dev);
struct mba_data *mba = platform_get_drvdata(pdev);
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
int ret;
/*
@@ -402,7 +402,7 @@
return ret;
}
-static int pil_mba_verify_blob(struct pil_desc *pil, u32 phy_addr,
+static int pil_mba_verify_blob(struct pil_desc *pil, phys_addr_t phy_addr,
size_t size)
{
struct mba_data *drv = dev_get_drvdata(pil->dev);
@@ -608,8 +608,15 @@
if (ret)
return ret;
ret = pil_boot(&drv->desc);
- if (ret)
+ if (ret) {
pil_shutdown(&drv->q6->desc);
+ /*
+ * We know now that the unvote interrupt is not coming.
+ * Remove the proxy votes immediately.
+ */
+ if (drv->q6->desc.proxy_unvote_irq)
+ pil_q6v5_mss_remove_proxy_votes(&drv->q6->desc);
+ }
return ret;
}
diff --git a/arch/arm/mach-msm/pil-q6v5.c b/arch/arm/mach-msm/pil-q6v5.c
index 0263faf..c6add8f 100644
--- a/arch/arm/mach-msm/pil-q6v5.c
+++ b/arch/arm/mach-msm/pil-q6v5.c
@@ -46,7 +46,9 @@
#define Q6SS_CLK_ENA BIT(1)
/* QDSP6SS_PWR_CTL */
-#define Q6SS_L2DATA_SLP_NRET_N (BIT(0)|BIT(1)|BIT(2))
+#define Q6SS_L2DATA_SLP_NRET_N_0 BIT(0)
+#define Q6SS_L2DATA_SLP_NRET_N_1 BIT(1)
+#define Q6SS_L2DATA_SLP_NRET_N_2 BIT(2)
#define Q6SS_L2TAG_SLP_NRET_N BIT(16)
#define Q6SS_ETB_SLP_NRET_N BIT(17)
#define Q6SS_L2DATA_STBY_N BIT(18)
@@ -160,7 +162,8 @@
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
/* Turn off Q6 memories */
- val &= ~(Q6SS_L2DATA_SLP_NRET_N | Q6SS_SLP_RET_N |
+ val &= ~(Q6SS_L2DATA_SLP_NRET_N_0 | Q6SS_L2DATA_SLP_NRET_N_1 |
+ Q6SS_L2DATA_SLP_NRET_N_2 | Q6SS_SLP_RET_N |
Q6SS_L2TAG_SLP_NRET_N | Q6SS_ETB_SLP_NRET_N |
Q6SS_L2DATA_STBY_N);
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
@@ -194,11 +197,19 @@
mb();
udelay(1);
- /* Turn on memories */
+ /*
+ * Turn on memories. L2 banks should be done individually
+ * to minimize inrush current.
+ */
val = readl_relaxed(drv->reg_base + QDSP6SS_PWR_CTL);
- val |= Q6SS_L2DATA_SLP_NRET_N | Q6SS_SLP_RET_N |
- Q6SS_L2TAG_SLP_NRET_N | Q6SS_ETB_SLP_NRET_N |
- Q6SS_L2DATA_STBY_N;
+ val |= Q6SS_SLP_RET_N | Q6SS_L2TAG_SLP_NRET_N |
+ Q6SS_ETB_SLP_NRET_N | Q6SS_L2DATA_STBY_N;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_2;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_1;
+ writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
+ val |= Q6SS_L2DATA_SLP_NRET_N_0;
writel_relaxed(val, drv->reg_base + QDSP6SS_PWR_CTL);
/* Remove IO clamp */
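The sequence above powers the L2 data banks up one bank per register write so the inrush current stays bounded. A minimal sketch of the same staggering, assuming the QDSP6SS_PWR_CTL offset and the Q6SS_L2DATA_SLP_NRET_N_* bits defined earlier in this file (the helper itself is illustrative, not part of the driver):

	/* Bring the L2 data banks out of sleep one bank per write to limit
	 * inrush current. reg_base is the QDSP6SS register space. */
	static void q6v5_power_on_l2_banks(void __iomem *reg_base)
	{
		static const u32 bank_bits[] = {
			Q6SS_L2DATA_SLP_NRET_N_2,
			Q6SS_L2DATA_SLP_NRET_N_1,
			Q6SS_L2DATA_SLP_NRET_N_0,
		};
		u32 val = readl_relaxed(reg_base + QDSP6SS_PWR_CTL);
		int i;

		for (i = 0; i < ARRAY_SIZE(bank_bits); i++) {
			val |= bank_bits[i];
			writel_relaxed(val, reg_base + QDSP6SS_PWR_CTL);
		}
	}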
diff --git a/arch/arm/mach-msm/pil-riva.c b/arch/arm/mach-msm/pil-riva.c
index a2665b4..d72b848 100644
--- a/arch/arm/mach-msm/pil-riva.c
+++ b/arch/arm/mach-msm/pil-riva.c
@@ -134,7 +134,7 @@
u32 reg, sel;
struct riva_data *drv = dev_get_drvdata(pil->dev);
void __iomem *base = drv->base;
- unsigned long start_addr = pil_get_entry_addr(pil);
+ phys_addr_t start_addr = pil_get_entry_addr(pil);
void __iomem *cbase = drv->cbase;
bool use_cxo = cxo_is_needed(drv);
diff --git a/arch/arm/mach-msm/pm-8x60.c b/arch/arm/mach-msm/pm-8x60.c
index 5a6e66a..a39e38b 100644
--- a/arch/arm/mach-msm/pm-8x60.c
+++ b/arch/arm/mach-msm/pm-8x60.c
@@ -64,7 +64,6 @@
(container_of(attr, struct msm_pm_kobj_attribute, ka)->cpu)
#define SCLK_HZ (32768)
-#define MSM_PM_SLEEP_TICK_LIMIT (0x6DDD000)
#define NUM_OF_COUNTERS 3
#define MAX_BUF_SIZE 512
@@ -127,7 +126,6 @@
static bool msm_pm_use_sync_timer;
static struct msm_pm_cp15_save_data cp15_data;
static bool msm_pm_retention_calls_tz;
-static uint32_t msm_pm_max_sleep_time;
static bool msm_no_ramp_down_pc;
static struct msm_pm_sleep_status_data *msm_pm_slp_sts;
static bool msm_pm_pc_reset_timer;
@@ -405,39 +403,6 @@
return;
}
-/*
- * Convert time from nanoseconds to slow clock ticks, then cap it to the
- * specified limit
- */
-static int64_t msm_pm_convert_and_cap_time(int64_t time_ns, int64_t limit)
-{
- do_div(time_ns, NSEC_PER_SEC / SCLK_HZ);
- return (time_ns > limit) ? limit : time_ns;
-}
-
-/*
- * Set the sleep time for suspend. 0 means infinite sleep time.
- */
-void msm_pm_set_max_sleep_time(int64_t max_sleep_time_ns)
-{
- if (max_sleep_time_ns == 0) {
- msm_pm_max_sleep_time = 0;
- } else {
- msm_pm_max_sleep_time =
- (uint32_t)msm_pm_convert_and_cap_time(
- max_sleep_time_ns, MSM_PM_SLEEP_TICK_LIMIT);
-
- if (msm_pm_max_sleep_time == 0)
- msm_pm_max_sleep_time = 1;
- }
-
- if (MSM_PM_DEBUG_SUSPEND & msm_pm_debug_mask)
- pr_info("%s: Requested %lld ns Giving %u sclk ticks\n",
- __func__, max_sleep_time_ns,
- msm_pm_max_sleep_time);
-}
-EXPORT_SYMBOL(msm_pm_set_max_sleep_time);
-
static void msm_pm_save_cpu_reg(void)
{
int i;
@@ -869,9 +834,8 @@
int exit_stat = -1;
enum msm_pm_sleep_mode sleep_mode;
void *msm_pm_idle_rs_limits = NULL;
- int sleep_delay = 1;
+ uint32_t sleep_delay = 1;
int ret = -ENODEV;
- int64_t timer_expiration = 0;
int notify_rpm = false;
bool timer_halted = false;
@@ -891,10 +855,8 @@
if (sleep_mode == MSM_PM_SLEEP_MODE_POWER_COLLAPSE) {
notify_rpm = true;
- timer_expiration = msm_pm_timer_enter_idle();
+ sleep_delay = (uint32_t)msm_pm_timer_enter_idle();
- sleep_delay = (uint32_t) msm_pm_convert_and_cap_time(
- timer_expiration, MSM_PM_SLEEP_TICK_LIMIT);
if (sleep_delay == 0) /* 0 would mean infinite time */
sleep_delay = 1;
}
@@ -1081,6 +1043,7 @@
void *rs_limits = NULL;
int ret = -ENODEV;
uint32_t power;
+ uint32_t msm_pm_max_sleep_time = 0;
if (MSM_PM_DEBUG_SUSPEND & msm_pm_debug_mask)
pr_info("%s: power collapse\n", __func__);
@@ -1090,8 +1053,8 @@
if (msm_pm_sleep_time_override > 0) {
int64_t ns = NSEC_PER_SEC *
(int64_t) msm_pm_sleep_time_override;
- msm_pm_set_max_sleep_time(ns);
- msm_pm_sleep_time_override = 0;
+ do_div(ns, NSEC_PER_SEC / SCLK_HZ);
+ msm_pm_max_sleep_time = (uint32_t) ns;
}
if (pm_sleep_ops.lowest_limits)
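With msm_pm_set_max_sleep_time() and MSM_PM_SLEEP_TICK_LIMIT removed, the suspend path converts the override directly from nanoseconds to 32.768 kHz slow-clock ticks at the point of use. The arithmetic, as an illustrative helper (SCLK_HZ is the value defined in this file; the function name is hypothetical):

	/* Convert a sleep time in nanoseconds to slow-clock ticks
	 * (one tick is roughly 30.5 us at 32768 Hz). */
	static inline uint32_t sleep_ns_to_sclk_ticks(int64_t ns)
	{
		do_div(ns, NSEC_PER_SEC / SCLK_HZ);
		return (uint32_t)ns;
	}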
diff --git a/arch/arm/mach-msm/qdsp6v2/Makefile b/arch/arm/mach-msm/qdsp6v2/Makefile
index 88e2894..6bd3efb 100644
--- a/arch/arm/mach-msm/qdsp6v2/Makefile
+++ b/arch/arm/mach-msm/qdsp6v2/Makefile
@@ -12,7 +12,7 @@
obj-$(CONFIG_FB_MSM_HDMI_MSM_PANEL) += lpa_if_hdmi.o
endif
obj-$(CONFIG_MSM_QDSP6_APR) += apr.o apr_v1.o apr_tal.o q6core.o dsp_debug.o
-obj-$(CONFIG_MSM_QDSP6_APRV2) += apr.o apr_v2.o apr_tal.o q6core.o dsp_debug.o
+obj-$(CONFIG_MSM_QDSP6_APRV2) += apr.o apr_v2.o apr_tal.o dsp_debug.o
ifdef CONFIG_ARCH_MSM9615
obj-y += audio_acdb.o
obj-y += rtac.o
diff --git a/arch/arm/mach-msm/qdsp6v2/q6core.c b/arch/arm/mach-msm/qdsp6v2/q6core.c
index 6594b08..fd699df 100644
--- a/arch/arm/mach-msm/qdsp6v2/q6core.c
+++ b/arch/arm/mach-msm/qdsp6v2/q6core.c
@@ -69,12 +69,12 @@
switch (payload1[0]) {
case ADSP_CMD_SET_POWER_COLLAPSE_STATE:
- pr_info("Cmd = ADSP_CMD_SET_POWER_COLLAPSE_STATE"
- " status[0x%x]\n", payload1[1]);
+ pr_info("Cmd = ADSP_CMD_SET_POWER_COLLAPSE_STATE status[0x%x]\n",
+ payload1[1]);
break;
case ADSP_CMD_REMOTE_BUS_BW_REQUEST:
- pr_info("%s: cmd = ADSP_CMD_REMOTE_BUS_BW_REQUEST"
- " status = 0x%x\n", __func__, payload1[1]);
+ pr_info("%s: cmd = ADSP_CMD_REMOTE_BUS_BW_REQUEST status = 0x%x\n",
+ __func__, payload1[1]);
bus_bw_resp_received = 1;
wake_up(&bus_bw_req_wait);
@@ -160,10 +160,9 @@
core_handle_q = apr_register("ADSP", "CORE",
aprv2_core_fn_q, 0xFFFFFFFF, NULL);
}
- pr_info("Open_q %p\n", core_handle_q);
- if (core_handle_q == NULL) {
+ pr_debug("Open_q %p\n", core_handle_q);
+ if (core_handle_q == NULL)
pr_err("%s: Unable to register CORE\n", __func__);
- }
}
int core_req_bus_bandwith(u16 bus_id, u32 ab_bps, u32 ib_bps)
@@ -352,7 +351,7 @@
pc.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
pc.hdr.pkt_size = APR_PKT_SIZE(APR_HDR_SIZE,
- sizeof(uint32_t));;
+ sizeof(uint32_t));
pc.hdr.src_port = 0;
pc.hdr.dest_port = 0;
pc.hdr.token = 0;
@@ -413,32 +412,6 @@
return rc;
}
-uint32_t core_set_dolby_manufacturer_id(int manufacturer_id)
-{
- struct adsp_dolby_manufacturer_id payload;
- int rc = 0;
- pr_debug("%s manufacturer_id :%d\n", __func__, manufacturer_id);
- core_open();
- if (core_handle_q) {
- payload.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
- APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
- payload.hdr.pkt_size =
- sizeof(struct adsp_dolby_manufacturer_id);
- payload.hdr.src_port = 0;
- payload.hdr.dest_port = 0;
- payload.hdr.token = 0;
- payload.hdr.opcode = ADSP_CMD_SET_DOLBY_MANUFACTURER_ID;
- payload.manufacturer_id = manufacturer_id;
- pr_debug("Send Dolby security opcode=%x manufacturer ID = %d\n",
- payload.hdr.opcode, payload.manufacturer_id);
- rc = apr_send_pkt(core_handle_q, (uint32_t *)&payload);
- if (rc < 0)
- pr_err("%s: SET_DOLBY_MANUFACTURER_ID failed op[0x%x]rc[%d]\n",
- __func__, payload.hdr.opcode, rc);
- }
- return rc;
-}
-
static const struct file_operations apr_debug_fops = {
.write = apr_debug_write,
.open = apr_debug_open,
diff --git a/arch/arm/mach-msm/remote_spinlock.c b/arch/arm/mach-msm/remote_spinlock.c
index 94923a0..62e3e05 100644
--- a/arch/arm/mach-msm/remote_spinlock.c
+++ b/arch/arm/mach-msm/remote_spinlock.c
@@ -143,6 +143,7 @@
}
/* end dekkers implementation ----------------------------------------------- */
+#ifndef CONFIG_THUMB2_KERNEL
/* swp implementation ------------------------------------------------------- */
static void __raw_remote_swp_spin_lock(raw_remote_spinlock_t *lock)
{
@@ -194,6 +195,7 @@
: "cc");
}
/* end swp implementation --------------------------------------------------- */
+#endif
/* ldrex implementation ----------------------------------------------------- */
static char *ldrex_compatible_string = "qcom,ipc-spinlock-ldrex";
@@ -431,6 +433,7 @@
current_ops.owner = __raw_remote_dek_spin_owner;
is_hw_lock_type = 0;
break;
+#ifndef CONFIG_THUMB2_KERNEL
case SWP_MODE:
current_ops.lock = __raw_remote_swp_spin_lock;
current_ops.unlock = __raw_remote_swp_spin_unlock;
@@ -439,6 +442,7 @@
current_ops.owner = __raw_remote_gen_spin_owner;
is_hw_lock_type = 0;
break;
+#endif
case LDREX_MODE:
current_ops.lock = __raw_remote_ex_spin_lock;
current_ops.unlock = __raw_remote_ex_spin_unlock;
diff --git a/arch/arm/mach-msm/rpm_log.c b/arch/arm/mach-msm/rpm_log.c
index 53d5752..58e8588 100644
--- a/arch/arm/mach-msm/rpm_log.c
+++ b/arch/arm/mach-msm/rpm_log.c
@@ -213,7 +213,7 @@
return -EINVAL;
if (!buf->data)
return -ENOMEM;
- if (!bufu || count < 0)
+ if (!bufu || count == 0)
return -EINVAL;
if (!access_ok(VERIFY_WRITE, bufu, count))
return -EFAULT;
diff --git a/arch/arm/mach-msm/smd.c b/arch/arm/mach-msm/smd.c
index 8a9042e..3590e6b 100644
--- a/arch/arm/mach-msm/smd.c
+++ b/arch/arm/mach-msm/smd.c
@@ -34,7 +34,6 @@
#include <linux/kfifo.h>
#include <linux/wakelock.h>
#include <linux/notifier.h>
-#include <linux/sort.h>
#include <linux/suspend.h>
#include <linux/of.h>
#include <linux/of_irq.h>
@@ -47,6 +46,7 @@
#include <mach/proc_comm.h>
#include <mach/msm_ipc_logging.h>
#include <mach/ramdump.h>
+#include <mach/board.h>
#include <asm/cacheflush.h>
@@ -76,6 +76,7 @@
#define SMSM_SNAPSHOT_CNT 64
#define SMSM_SNAPSHOT_SIZE ((SMSM_NUM_ENTRIES + 1) * 4)
#define RSPIN_INIT_WAIT_MS 1000
+#define SMD_FIFO_FULL_RESERVE 4
uint32_t SMSM_NUM_ENTRIES = 8;
uint32_t SMSM_NUM_HOSTS = 3;
@@ -183,7 +184,7 @@
static struct smem_area *smem_areas;
static struct ramdump_segment *smem_ramdump_segments;
static void *smem_ramdump_dev;
-static void *smem_range_check(phys_addr_t base, unsigned offset);
+static void *smem_phys_to_virt(phys_addr_t base, unsigned offset);
static void *smd_dev;
struct interrupt_stat interrupt_stats[NUM_SMD_SUBSYSTEMS];
@@ -243,6 +244,17 @@
#define SMx_POWER_INFO(x...) do { } while (0)
#endif
+/**
+ * OVERFLOW_ADD_UNSIGNED() - check for unsigned overflow
+ *
+ * @type: type to check for overflow
+ * @a: left value to use
+ * @b: right value to use
+ * @returns: true if a + b will result in overflow; false otherwise
+ */
+#define OVERFLOW_ADD_UNSIGNED(type, a, b) \
+ (((type)~0 - (a)) < (b) ? true : false)
+
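OVERFLOW_ADD_UNSIGNED() is used below to reject base/offset and virtual-address/size pairs that would wrap. A small self-contained sketch of that use, with an illustrative range-check helper that mirrors smem_phys_to_virt():

	/* Illustrative only: true if base..base+offset lies inside the
	 * region [start, start + size) and the addition cannot wrap. */
	static bool smem_addr_in_range(phys_addr_t base, unsigned offset,
				       phys_addr_t start, resource_size_t size)
	{
		if (OVERFLOW_ADD_UNSIGNED(phys_addr_t, base, offset))
			return false;
		return base >= start && base + offset < start + size;
	}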
static unsigned last_heap_free = 0xffffffff;
static inline void smd_write_intr(unsigned int val,
@@ -1053,8 +1065,16 @@
/* how many bytes we are free to write */
static int smd_stream_write_avail(struct smd_channel *ch)
{
- return ch->fifo_mask - ((ch->half_ch->get_head(ch->send) -
- ch->half_ch->get_tail(ch->send)) & ch->fifo_mask);
+ int bytes_avail;
+
+ bytes_avail = ch->fifo_mask - ((ch->half_ch->get_head(ch->send) -
+ ch->half_ch->get_tail(ch->send)) & ch->fifo_mask) + 1;
+
+ if (bytes_avail < SMD_FIFO_FULL_RESERVE)
+ bytes_avail = 0;
+ else
+ bytes_avail -= SMD_FIFO_FULL_RESERVE;
+ return bytes_avail;
}
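Holding back SMD_FIFO_FULL_RESERVE bytes lets a completely full FIFO be told apart from an empty one now that head may reach tail. For example, with fifo_mask = 1023 (a 1 KB FIFO), head = 100 and tail = 200: bytes in use = (100 - 200) & 1023 = 924, so bytes_avail = 1023 - 924 + 1 = 100, and after subtracting the 4-byte reserve the caller may write 96 bytes.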
static int smd_packet_read_avail(struct smd_channel *ch)
@@ -1176,7 +1196,18 @@
}
}
-/* provide a pointer and length to next free space in the fifo */
+/**
+ * ch_write_buffer() - Provide a pointer and length for the next segment of
+ * free space in the FIFO.
+ * @ch: channel
+ * @ptr: Location to store a pointer to the next free write segment
+ * @returns: Maximum size that can be written until the FIFO is either full
+ * or the end of the FIFO has been reached.
+ *
+ * The returned pointer and length are passed to memcpy, so the next segment is
+ * defined as either the space available between the read index (tail) and the
+ * write index (head) or the space available to the end of the FIFO.
+ */
static unsigned ch_write_buffer(struct smd_channel *ch, void **ptr)
{
unsigned head = ch->half_ch->get_head(ch->send);
@@ -1184,10 +1215,11 @@
*ptr = (void *) (ch->send_data + head);
if (head < tail) {
- return tail - head - 1;
+ return tail - head - SMD_FIFO_FULL_RESERVE;
} else {
- if (tail == 0)
- return ch->fifo_size - head - 1;
+ if (tail < SMD_FIFO_FULL_RESERVE)
+ return ch->fifo_size + tail - head
+ - SMD_FIFO_FULL_RESERVE;
else
return ch->fifo_size - head;
}
@@ -2111,6 +2143,29 @@
}
EXPORT_SYMBOL(smd_write_end);
+int smd_write_segment_avail(smd_channel_t *ch)
+{
+ int n;
+
+ if (!ch) {
+ pr_err("%s: Invalid channel specified\n", __func__);
+ return -ENODEV;
+ }
+ if (!ch->is_pkt_ch) {
+ pr_err("%s: non-packet channel specified\n", __func__);
+ return -ENODEV;
+ }
+
+ n = smd_stream_write_avail(ch);
+
+ /* pkt hdr already written, no need to reserve space for it */
+ if (ch->pending_pkt_sz)
+ return n;
+
+ return n > SMD_HEADER_SIZE ? n - SMD_HEADER_SIZE : 0;
+}
+EXPORT_SYMBOL(smd_write_segment_avail);
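smd_write_segment_avail() is the packet-channel counterpart of smd_write_avail(): once the packet header has been committed (pending_pkt_sz is non-zero) the header bytes no longer have to be re-reserved. An illustrative, polling way to wait for segment space follows; real callers such as smd_pkt block on a waitqueue instead:

	/* Illustrative only: poll until the packet channel can take more
	 * data for the current segment. */
	static int example_wait_for_segment_space(smd_channel_t *ch)
	{
		int avail;

		do {
			avail = smd_write_segment_avail(ch);
			if (avail < 0)
				return avail;	/* invalid or non-packet channel */
			if (avail == 0)
				msleep(1);
		} while (avail == 0);

		return avail;
	}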
+
int smd_read(smd_channel_t *ch, void *data, int len)
{
if (!ch) {
@@ -2356,37 +2411,107 @@
/* -------------------------------------------------------------------------- */
-/*
- * Shared Memory Range Check
- *
- * Takes a physical address and an offset and checks if the resulting physical
- * address would fit into one of the aux smem regions. If so, returns the
- * corresponding virtual address. Otherwise returns NULL. Expects the array
- * of smem regions to be in ascending physical address order.
+/**
+ * smem_phys_to_virt() - Convert a physical base and offset to virtual address
*
* @base: physical base address to check
* @offset: offset from the base to get the final address
+ * @returns: virtual SMEM address; NULL for failure
+ *
+ * Takes a physical address and an offset and checks if the resulting physical
+ * address would fit into one of the smem regions. If so, returns the
+ * corresponding virtual address. Otherwise returns NULL.
*/
-static void *smem_range_check(phys_addr_t base, unsigned offset)
+static void *smem_phys_to_virt(phys_addr_t base, unsigned offset)
{
int i;
phys_addr_t phys_addr;
resource_size_t size;
+ if (OVERFLOW_ADD_UNSIGNED(phys_addr_t, base, offset))
+ return NULL;
+
+ if (!smem_areas) {
+ /*
+ * Early boot - no area configuration yet, so default
+ * to using the main memory region.
+ *
+ * To remove the MSM_SHARED_RAM_BASE and the static
+ * mapping of SMEM in the future, add dump_stack()
+ * to identify the early callers of smem_get_entry()
+ * (which calls this function) and replace those calls
+ * with a new function that knows how to lookup the
+ * SMEM base address before SMEM has been probed.
+ */
+ phys_addr = msm_shared_ram_phys;
+ size = MSM_SHARED_RAM_SIZE;
+
+ if (base >= phys_addr && base + offset < phys_addr + size) {
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)MSM_SHARED_RAM_BASE, offset)) {
+ pr_err("%s: overflow %p %x\n", __func__,
+ MSM_SHARED_RAM_BASE, offset);
+ return NULL;
+ }
+
+ return MSM_SHARED_RAM_BASE + offset;
+ } else {
+ return NULL;
+ }
+ }
for (i = 0; i < num_smem_areas; ++i) {
phys_addr = smem_areas[i].phys_addr;
size = smem_areas[i].size;
- if (base < phys_addr)
- return NULL;
- if (base > phys_addr + size)
+
+ if (base < phys_addr || base + offset >= phys_addr + size)
continue;
- if (base >= phys_addr && base + offset < phys_addr + size)
- return smem_areas[i].virt_addr + offset;
+
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas[i].virt_addr, offset)) {
+ pr_err("%s: overflow %p %x\n", __func__,
+ smem_areas[i].virt_addr, offset);
+ return NULL;
+ }
+
+ return smem_areas[i].virt_addr + offset;
}
return NULL;
}
+/**
+ * smem_virt_to_phys() - Convert SMEM address to physical address.
+ *
+ * @smem_address: Address of SMEM item (returned by smem_alloc(), etc)
+ * @returns: Physical address (or NULL if there is a failure)
+ *
+ * This function should only be used if an SMEM item needs to be handed
+ * off to a DMA engine.
+ */
+phys_addr_t smem_virt_to_phys(void *smem_address)
+{
+ phys_addr_t phys_addr = 0;
+ int i;
+ void *vend;
+
+ if (!smem_areas)
+ return phys_addr;
+
+ for (i = 0; i < num_smem_areas; ++i) {
+ vend = (void *)(smem_areas[i].virt_addr + smem_areas[i].size);
+
+ if (smem_address >= smem_areas[i].virt_addr &&
+ smem_address < vend) {
+ phys_addr = smem_address - smem_areas[i].virt_addr;
+ phys_addr += smem_areas[i].phys_addr;
+ break;
+ }
+ }
+
+ return phys_addr;
+}
+EXPORT_SYMBOL(smem_virt_to_phys);
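As the comment says, smem_virt_to_phys() exists so that an SMEM item can be handed to a DMA engine, which needs a physical rather than virtual address. A minimal sketch using the existing smem_alloc() interface in this file (the item id and size are hypothetical):

	/* Illustrative only: look up an SMEM item and return its physical
	 * address for DMA, or 0 on failure. */
	static phys_addr_t example_smem_item_phys(unsigned item_id, unsigned size)
	{
		void *vaddr = smem_alloc(item_id, size);

		if (!vaddr)
			return 0;
		return smem_virt_to_phys(vaddr);
	}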
+
/* smem_alloc returns the pointer to smem item if it is already allocated.
* Otherwise, it returns NULL.
*/
@@ -2460,14 +2585,15 @@
remote_spin_lock_irqsave(&remote_spinlock, flags);
/* toc is in device memory and cannot be speculatively accessed */
if (toc[id].allocated) {
+ phys_addr_t phys_base;
+
*size = toc[id].size;
barrier();
- if (!(toc[id].reserved & BASE_ADDR_MASK))
- ret = (void *) (MSM_SHARED_RAM_BASE + toc[id].offset);
- else
- ret = smem_range_check(
- toc[id].reserved & BASE_ADDR_MASK,
- toc[id].offset);
+
+ phys_base = toc[id].reserved & BASE_ADDR_MASK;
+ if (!phys_base)
+ phys_base = (phys_addr_t)msm_shared_ram_phys;
+ ret = smem_phys_to_virt(phys_base, toc[id].offset);
} else {
*size = 0;
}
@@ -3420,14 +3546,6 @@
return ret;
}
-int sort_cmp_func(const void *a, const void *b)
-{
- struct smem_area *left = (struct smem_area *)(a);
- struct smem_area *right = (struct smem_area *)(b);
-
- return left->phys_addr - right->phys_addr;
-}
-
int smd_core_platform_init(struct platform_device *pdev)
{
int i;
@@ -3438,7 +3556,8 @@
struct smd_subsystem_config *cfg;
int err_ret = 0;
struct smd_smem_regions *smd_smem_areas;
- int smem_idx = 0;
+ struct smem_area *smem_areas_tmp = NULL;
+ int smem_idx;
smd_platform_data = pdev->dev.platform_data;
num_ss = smd_platform_data->num_ss_configs;
@@ -3449,37 +3568,54 @@
smd_ssr_config->disable_smsm_reset_handshake;
smd_smem_areas = smd_platform_data->smd_smem_areas;
- if (smd_smem_areas) {
- num_smem_areas = smd_platform_data->num_smem_areas;
- smem_areas = kmalloc(sizeof(struct smem_area) * num_smem_areas,
- GFP_KERNEL);
- if (!smem_areas) {
- pr_err("%s: smem_areas kmalloc failed\n", __func__);
+ num_smem_areas = smd_platform_data->num_smem_areas + 1;
+
+ /* Initialize main SMEM region */
+ smem_areas_tmp = kmalloc_array(num_smem_areas, sizeof(struct smem_area),
+ GFP_KERNEL);
+ if (!smem_areas_tmp) {
+ pr_err("%s: smem_areas kmalloc failed\n", __func__);
+ err_ret = -ENOMEM;
+ goto smem_areas_alloc_fail;
+ }
+
+ smem_areas_tmp[0].phys_addr = msm_shared_ram_phys;
+ smem_areas_tmp[0].size = MSM_SHARED_RAM_SIZE;
+ smem_areas_tmp[0].virt_addr = MSM_SHARED_RAM_BASE;
+
+ /* Configure auxiliary SMEM regions */
+ for (smem_idx = 1; smem_idx < num_smem_areas; ++smem_idx) {
+ smem_areas_tmp[smem_idx].phys_addr =
+ smd_smem_areas[smem_idx].phys_addr;
+ smem_areas_tmp[smem_idx].size =
+ smd_smem_areas[smem_idx].size;
+ smem_areas_tmp[smem_idx].virt_addr = ioremap_nocache(
+ (unsigned long)(smem_areas_tmp[smem_idx].phys_addr),
+ smem_areas_tmp[smem_idx].size);
+ if (!smem_areas_tmp[smem_idx].virt_addr) {
+ pr_err("%s: ioremap_nocache() of addr: %pa size: %pa\n",
+ __func__,
+ &smem_areas_tmp[smem_idx].phys_addr,
+ &smem_areas_tmp[smem_idx].size);
err_ret = -ENOMEM;
- goto smem_areas_alloc_fail;
+ goto smem_failed;
}
- for (smem_idx = 0; smem_idx < num_smem_areas; ++smem_idx) {
- smem_areas[smem_idx].phys_addr =
- smd_smem_areas[smem_idx].phys_addr;
- smem_areas[smem_idx].size =
- smd_smem_areas[smem_idx].size;
- smem_areas[smem_idx].virt_addr = ioremap_nocache(
- (unsigned long)(smem_areas[smem_idx].phys_addr),
- smem_areas[smem_idx].size);
- if (!smem_areas[smem_idx].virt_addr) {
- pr_err("%s: ioremap_nocache() of addr: %pa size: %pa\n",
- __func__,
- &smem_areas[smem_idx].phys_addr,
- &smem_areas[smem_idx].size);
- err_ret = -ENOMEM;
- ++smem_idx;
- goto smem_failed;
- }
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas_tmp[smem_idx].virt_addr,
+ smem_areas_tmp[smem_idx].size)) {
+ pr_err("%s: invalid virtual address block %i: %p:%pa\n",
+ __func__, smem_idx,
+ smem_areas_tmp[smem_idx].virt_addr,
+ &smem_areas_tmp[smem_idx].size);
+ ++smem_idx;
+ err_ret = -EINVAL;
+ goto smem_failed;
}
- sort(smem_areas, num_smem_areas,
- sizeof(struct smem_area),
- sort_cmp_func, NULL);
+
+ SMD_DBG("%s: %d = %pa %pa", __func__, smem_idx,
+ &smd_smem_areas[smem_idx].phys_addr,
+ &smd_smem_areas[smem_idx].size);
}
for (i = 0; i < num_ss; i++) {
@@ -3523,8 +3659,9 @@
cfg->subsys_name, SMD_MAX_CH_NAME_LEN);
}
-
SMD_INFO("smd_core_platform_init() done\n");
+
+ smem_areas = smem_areas_tmp;
return 0;
intr_failed:
@@ -3542,9 +3679,12 @@
);
}
smem_failed:
- for (smem_idx = smem_idx - 1; smem_idx >= 0; --smem_idx)
- iounmap(smem_areas[smem_idx].virt_addr);
- kfree(smem_areas);
+ for (smem_idx = smem_idx - 1; smem_idx >= 1; --smem_idx)
+ iounmap(smem_areas_tmp[smem_idx].virt_addr);
+
+ num_smem_areas = 0;
+ kfree(smem_areas_tmp);
+
smem_areas_alloc_fail:
return err_ret;
}
@@ -3718,12 +3858,14 @@
resource_size_t aux_mem_size;
int temp_string_size = 11; /* max 3 digit count */
char temp_string[temp_string_size];
- int count;
struct device_node *node;
int ret;
const char *compatible;
- struct ramdump_segment *ramdump_segments_tmp;
+ struct ramdump_segment *ramdump_segments_tmp = NULL;
+ struct smem_area *smem_areas_tmp = NULL;
+ int smem_idx = 0;
int subnode_num = 0;
+ int i;
resource_size_t irq_out_size;
disable_smsm_reset_handshake = 1;
@@ -3743,101 +3885,115 @@
}
SMD_DBG("%s: %s = %p", __func__, key, irq_out_base);
- count = 1;
+ num_smem_areas = 1;
while (1) {
- scnprintf(temp_string, temp_string_size, "aux-mem%d", count);
+ scnprintf(temp_string, temp_string_size, "aux-mem%d",
+ num_smem_areas);
r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
temp_string);
if (!r)
break;
++num_smem_areas;
- ++count;
- if (count > 999) {
+ if (num_smem_areas > 999) {
pr_err("%s: max num aux mem regions reached\n",
__func__);
break;
}
}
- /* initialize SSR ramdump regions */
+ /* Initialize main SMEM region and SSR ramdump region */
key = "smem";
r = platform_get_resource_byname(pdev, IORESOURCE_MEM, key);
if (!r) {
pr_err("%s: missing '%s'\n", __func__, key);
return -ENODEV;
}
- ramdump_segments_tmp = kmalloc_array(num_smem_areas + 1,
- sizeof(struct ramdump_segment), GFP_KERNEL);
+ smem_areas_tmp = kmalloc_array(num_smem_areas, sizeof(struct smem_area),
+ GFP_KERNEL);
+ if (!smem_areas_tmp) {
+ pr_err("%s: smem areas kmalloc failed\n", __func__);
+ ret = -ENOMEM;
+ goto free_smem_areas;
+ }
+
+ ramdump_segments_tmp = kmalloc_array(num_smem_areas,
+ sizeof(struct ramdump_segment), GFP_KERNEL);
if (!ramdump_segments_tmp) {
pr_err("%s: ramdump segment kmalloc failed\n", __func__);
ret = -ENOMEM;
goto free_smem_areas;
}
- ramdump_segments_tmp[0].address = r->start;
- ramdump_segments_tmp[0].size = resource_size(r);
- if (num_smem_areas) {
+ smem_areas_tmp[smem_idx].phys_addr = r->start;
+ smem_areas_tmp[smem_idx].size = resource_size(r);
+ smem_areas_tmp[smem_idx].virt_addr = MSM_SHARED_RAM_BASE;
- smem_areas = kmalloc(sizeof(struct smem_area) * num_smem_areas,
- GFP_KERNEL);
+ ramdump_segments_tmp[smem_idx].address = r->start;
+ ramdump_segments_tmp[smem_idx].size = resource_size(r);
+ ++smem_idx;
- if (!smem_areas) {
- pr_err("%s: smem areas kmalloc failed\n", __func__);
+ /* Configure auxiliary SMEM regions */
+ while (1) {
+ scnprintf(temp_string, temp_string_size, "aux-mem%d",
+ smem_idx);
+ r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ temp_string);
+ if (!r)
+ break;
+ aux_mem_base = r->start;
+ aux_mem_size = resource_size(r);
+
+ ramdump_segments_tmp[smem_idx].address = aux_mem_base;
+ ramdump_segments_tmp[smem_idx].size = aux_mem_size;
+
+ smem_areas_tmp[smem_idx].phys_addr = aux_mem_base;
+ smem_areas_tmp[smem_idx].size = aux_mem_size;
+ smem_areas_tmp[smem_idx].virt_addr = ioremap_nocache(
+ (unsigned long)(smem_areas_tmp[smem_idx].phys_addr),
+ smem_areas_tmp[smem_idx].size);
+ SMD_DBG("%s: %s = %pa %pa -> %p", __func__, temp_string,
+ &aux_mem_base, &aux_mem_size,
+ smem_areas_tmp[smem_idx].virt_addr);
+
+ if (!smem_areas_tmp[smem_idx].virt_addr) {
+ pr_err("%s: ioremap_nocache() of addr:%pa size: %pa\n",
+ __func__,
+ &smem_areas_tmp[smem_idx].phys_addr,
+ &smem_areas_tmp[smem_idx].size);
ret = -ENOMEM;
goto free_smem_areas;
}
- count = 1;
- while (1) {
- scnprintf(temp_string, temp_string_size, "aux-mem%d",
- count);
- r = platform_get_resource_byname(pdev, IORESOURCE_MEM,
- temp_string);
- if (!r)
- break;
- aux_mem_base = r->start;
- aux_mem_size = resource_size(r);
- /*
- * Add to ram-dumps segments.
- * ramdump_segments_tmp[0] is the main SMEM region,
- * so auxiliary segments are indexed by count
- * instead of count - 1.
- */
- ramdump_segments_tmp[count].address = aux_mem_base;
- ramdump_segments_tmp[count].size = aux_mem_size;
-
- SMD_DBG("%s: %s = %pa %pa", __func__, temp_string,
- &aux_mem_base, &aux_mem_size);
- smem_areas[count - 1].phys_addr = aux_mem_base;
- smem_areas[count - 1].size = aux_mem_size;
- smem_areas[count - 1].virt_addr = ioremap_nocache(
- (unsigned long)(smem_areas[count-1].phys_addr),
- smem_areas[count - 1].size);
- if (!smem_areas[count - 1].virt_addr) {
- pr_err("%s: ioremap_nocache() of addr:%pa size: %pa\n",
- __func__,
- &smem_areas[count - 1].phys_addr,
- &smem_areas[count - 1].size);
- ret = -ENOMEM;
- goto free_smem_areas;
- }
-
- ++count;
- if (count > 999) {
- pr_err("%s: max num aux mem regions reached\n",
- __func__);
- break;
- }
+ if (OVERFLOW_ADD_UNSIGNED(uintptr_t,
+ (uintptr_t)smem_areas_tmp[smem_idx].virt_addr,
+ smem_areas_tmp[smem_idx].size)) {
+ pr_err("%s: invalid virtual address block %i: %p:%pa\n",
+ __func__, smem_idx,
+ smem_areas_tmp[smem_idx].virt_addr,
+ &smem_areas_tmp[smem_idx].size);
+ ++smem_idx;
+ ret = -EINVAL;
+ goto free_smem_areas;
}
- sort(smem_areas, num_smem_areas,
- sizeof(struct smem_area),
- sort_cmp_func, NULL);
+
+ ++smem_idx;
+ if (smem_idx > 999) {
+ pr_err("%s: max num aux mem regions reached\n",
+ __func__);
+ break;
+ }
}
for_each_child_of_node(pdev->dev.of_node, node) {
compatible = of_get_property(node, "compatible", NULL);
+ if (!compatible) {
+ pr_err("%s: invalid child node: compatible null\n",
+ __func__);
+ ret = -ENODEV;
+ goto rollback_subnodes;
+ }
if (!strcmp(compatible, "qcom,smd")) {
ret = parse_smd_devicetree(node, irq_out_base);
if (ret)
@@ -3855,15 +4011,16 @@
++subnode_num;
}
+ smem_areas = smem_areas_tmp;
smem_ramdump_segments = ramdump_segments_tmp;
return 0;
rollback_subnodes:
- count = 0;
+ i = 0;
for_each_child_of_node(pdev->dev.of_node, node) {
- if (count >= subnode_num)
+ if (i >= subnode_num)
break;
- ++count;
+ ++i;
compatible = of_get_property(node, "compatible", NULL);
if (!strcmp(compatible, "qcom,smd"))
unparse_smd_devicetree(node);
@@ -3871,10 +4028,12 @@
unparse_smsm_devicetree(node);
}
free_smem_areas:
+ for (smem_idx = smem_idx - 1; smem_idx >= 1; --smem_idx)
+ iounmap(smem_areas_tmp[smem_idx].virt_addr);
+
num_smem_areas = 0;
kfree(ramdump_segments_tmp);
- kfree(smem_areas);
- smem_areas = NULL;
+ kfree(smem_areas_tmp);
return ret;
}
diff --git a/arch/arm/mach-msm/smd_pkt.c b/arch/arm/mach-msm/smd_pkt.c
index 7eb9ead..20a6165 100644
--- a/arch/arm/mach-msm/smd_pkt.c
+++ b/arch/arm/mach-msm/smd_pkt.c
@@ -41,7 +41,7 @@
#ifdef CONFIG_ARCH_FSM9XXX
#define NUM_SMD_PKT_PORTS 4
#else
-#define NUM_SMD_PKT_PORTS 27
+#define NUM_SMD_PKT_PORTS 28
#endif
#define PDRIVER_NAME_MAX_SIZE 32
@@ -509,7 +509,7 @@
do {
prepare_to_wait(&smd_pkt_devp->ch_write_wait_queue,
&write_wait, TASK_UNINTERRUPTIBLE);
- if (!smd_write_avail(smd_pkt_devp->ch) &&
+ if (!smd_write_segment_avail(smd_pkt_devp->ch) &&
!smd_pkt_devp->has_reset) {
smd_enable_read_intr(smd_pkt_devp->ch);
schedule();
@@ -631,7 +631,7 @@
return;
}
- sz = smd_write_avail(smd_pkt_devp->ch);
+ sz = smd_write_segment_avail(smd_pkt_devp->ch);
if (sz) {
D_WRITE("%s: %d bytes write space in smd_pkt_dev id:%d\n",
__func__, sz, smd_pkt_devp->i);
@@ -729,6 +729,7 @@
"smd_test_framework",
"smd_logging_0",
"smd_data_0",
+ "apr",
"smd_pkt_loopback",
};
@@ -759,6 +760,7 @@
"TESTFRAMEWORK",
"LOGGING",
"DATA",
+ "apr",
"LOOPBACK",
};
@@ -789,6 +791,7 @@
SMD_APPS_QDSP,
SMD_APPS_QDSP,
SMD_APPS_QDSP,
+ SMD_APPS_QDSP,
SMD_APPS_MODEM,
};
#endif
diff --git a/arch/arm/mach-msm/socinfo.c b/arch/arm/mach-msm/socinfo.c
index 83f7a1d..12a3ceb 100644
--- a/arch/arm/mach-msm/socinfo.c
+++ b/arch/arm/mach-msm/socinfo.c
@@ -328,7 +328,12 @@
/* 8610 IDs */
[147] = MSM_CPU_8610,
+ [161] = MSM_CPU_8610,
+ [162] = MSM_CPU_8610,
+ [163] = MSM_CPU_8610,
+ [164] = MSM_CPU_8610,
[165] = MSM_CPU_8610,
+ [166] = MSM_CPU_8610,
/* 8064AB IDs */
[153] = MSM_CPU_8064AB,
@@ -348,8 +353,8 @@
/* 8064AA IDs */
[172] = MSM_CPU_8064AA,
- /* zinc IDs */
- [178] = MSM_CPU_ZINC,
+ /* 8084 IDs */
+ [178] = MSM_CPU_8084,
/* krypton IDs */
[187] = MSM_CPU_KRYPTON,
@@ -853,9 +858,9 @@
dummy_socinfo.id = 146;
strlcpy(dummy_socinfo.build_id, "mpq8092 - ",
sizeof(dummy_socinfo.build_id));
- } else if (early_machine_is_msmzinc()) {
+ } else if (early_machine_is_apq8084()) {
dummy_socinfo.id = 178;
- strlcpy(dummy_socinfo.build_id, "msmzinc - ",
+ strlcpy(dummy_socinfo.build_id, "apq8084 - ",
sizeof(dummy_socinfo.build_id));
} else if (early_machine_is_msmkrypton()) {
dummy_socinfo.id = 187;
diff --git a/arch/arm/mach-msm/spm_devices.c b/arch/arm/mach-msm/spm_devices.c
index 233c5a5..174d444 100644
--- a/arch/arm/mach-msm/spm_devices.c
+++ b/arch/arm/mach-msm/spm_devices.c
@@ -74,6 +74,7 @@
info.cpu = cpu;
info.vlevel = vlevel;
+ info.err = -ENODEV;
if (cpu_online(cpu)) {
/**
diff --git a/drivers/bluetooth/bluetooth-power.c b/drivers/bluetooth/bluetooth-power.c
index d7c69db..13c9080 100644
--- a/drivers/bluetooth/bluetooth-power.c
+++ b/drivers/bluetooth/bluetooth-power.c
@@ -23,38 +23,227 @@
#include <linux/gpio.h>
#include <linux/of_gpio.h>
#include <linux/delay.h>
+#include <linux/bluetooth-power.h>
+#include <linux/slab.h>
+#include <linux/regulator/consumer.h>
-static struct of_device_id ar3002_match_table[] = {
+#define BT_PWR_DBG(fmt, arg...) pr_debug("%s: " fmt "\n" , __func__ , ## arg)
+#define BT_PWR_INFO(fmt, arg...) pr_info("%s: " fmt "\n" , __func__ , ## arg)
+#define BT_PWR_ERR(fmt, arg...) pr_err("%s: " fmt "\n" , __func__ , ## arg)
+
+
+static struct of_device_id bt_power_match_table[] = {
{ .compatible = "qca,ar3002" },
{}
};
-static int bt_reset_gpio;
-
+static struct bluetooth_power_platform_data *bt_power_pdata;
+static struct platform_device *btpdev;
static bool previous;
-static int bluetooth_power(int on)
+static int bt_vreg_init(struct bt_power_vreg_data *vreg)
{
- int rc;
+ int rc = 0;
+ struct device *dev = &btpdev->dev;
- pr_debug("%s bt_gpio= %d\n", __func__, bt_reset_gpio);
+ BT_PWR_DBG("vreg_get for : %s", vreg->name);
+
+ /* Get the regulator handle */
+ vreg->reg = regulator_get(dev, vreg->name);
+ if (IS_ERR(vreg->reg)) {
+ rc = PTR_ERR(vreg->reg);
+ pr_err("%s: regulator_get(%s) failed. rc=%d\n",
+ __func__, vreg->name, rc);
+ goto out;
+ }
+
+ if ((regulator_count_voltages(vreg->reg) > 0)
+ && (vreg->low_vol_level) && (vreg->high_vol_level))
+ vreg->set_voltage_sup = 1;
+
+out:
+ return rc;
+}
+
+static int bt_vreg_enable(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("vreg_en for : %s", vreg->name);
+
+ if (!vreg->is_enabled) {
+ if (vreg->set_voltage_sup) {
+ rc = regulator_set_voltage(vreg->reg,
+ vreg->low_vol_level,
+ vreg->high_vol_level);
+ if (rc < 0) {
+ BT_PWR_ERR("vreg_set_vol(%s) failed rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ }
+
+ rc = regulator_enable(vreg->reg);
+ if (rc < 0) {
+ BT_PWR_ERR("regulator_enable(%s) failed. rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ vreg->is_enabled = true;
+ }
+out:
+ return rc;
+}
+
+static int bt_vreg_disable(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ if (!vreg)
+ return rc;
+
+ BT_PWR_DBG("vreg_disable for : %s", vreg->name);
+
+ if (vreg->is_enabled) {
+ rc = regulator_disable(vreg->reg);
+ if (rc < 0) {
+ BT_PWR_ERR("regulator_disable(%s) failed. rc=%d\n",
+ vreg->name, rc);
+ goto out;
+ }
+ vreg->is_enabled = false;
+
+ if (vreg->set_voltage_sup) {
+ /* Set the min voltage to 0 */
+ rc = regulator_set_voltage(vreg->reg,
+ 0,
+ vreg->high_vol_level);
+ if (rc < 0) {
+ BT_PWR_ERR("vreg_set_vol(%s) failed rc=%d\n",
+ vreg->name, rc);
+ goto out;
+
+ }
+ }
+ }
+out:
+ return rc;
+}
+
+static int bt_configure_vreg(struct bt_power_vreg_data *vreg)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("config %s", vreg->name);
+
+ /* Get the regulator handle for vreg */
+ if (!(vreg->reg)) {
+ rc = bt_vreg_init(vreg);
+ if (rc < 0)
+ return rc;
+ }
+ rc = bt_vreg_enable(vreg);
+
+ return rc;
+}
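bt_configure_vreg() takes each supply through the same steps: fetch the regulator handle lazily, program the voltage window only if the regulator supports it and levels were provided, then enable. A condensed sketch of that per-supply bring-up using the standard regulator consumer API ("vdd-example" and the helper name are hypothetical):

	/* Illustrative per-supply power-up: get, set voltage window, enable. */
	static int example_supply_on(struct device *dev, int min_uV, int max_uV)
	{
		struct regulator *reg = regulator_get(dev, "vdd-example");
		int rc;

		if (IS_ERR(reg))
			return PTR_ERR(reg);

		if (regulator_count_voltages(reg) > 0 && min_uV && max_uV) {
			rc = regulator_set_voltage(reg, min_uV, max_uV);
			if (rc < 0)
				goto put;
		}

		rc = regulator_enable(reg);
		if (rc < 0)
			goto put;
		return 0;
	put:
		regulator_put(reg);
		return rc;
	}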
+
+static int bt_configure_gpios(int on)
+{
+ int rc = 0;
+ int bt_reset_gpio = bt_power_pdata->bt_gpio_sys_rst;
+
+ BT_PWR_DBG("%s bt_gpio= %d on: %d", __func__, bt_reset_gpio, on);
+
if (on) {
+ rc = gpio_request(bt_reset_gpio, "bt_sys_rst_n");
+ if (rc) {
+ BT_PWR_ERR("unable to request gpio %d (%d)\n",
+ bt_reset_gpio, rc);
+ return rc;
+ }
+
+ rc = gpio_direction_output(bt_reset_gpio, 0);
+ if (rc) {
+ BT_PWR_ERR("Unable to set direction\n");
+ return rc;
+ }
+
rc = gpio_direction_output(bt_reset_gpio, 1);
if (rc) {
- pr_err("%s: Unable to set direction\n", __func__);
+ BT_PWR_ERR("Unable to set direction\n");
return rc;
}
msleep(100);
} else {
gpio_set_value(bt_reset_gpio, 0);
+
rc = gpio_direction_input(bt_reset_gpio);
- if (rc) {
- pr_err("%s: Unable to set direction\n", __func__);
- return rc;
- }
+ if (rc)
+ BT_PWR_ERR("Unable to set direction\n");
+
msleep(100);
}
- return 0;
+ return rc;
+}
+
+static int bluetooth_power(int on)
+{
+ int rc = 0;
+
+ BT_PWR_DBG("on: %d", on);
+
+ if (on) {
+ if (bt_power_pdata->bt_vdd_io) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_io);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddio config failed");
+ goto out;
+ }
+ }
+ if (bt_power_pdata->bt_vdd_ldo) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_ldo);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddldo config failed");
+ goto vdd_ldo_fail;
+ }
+ }
+ if (bt_power_pdata->bt_vdd_pa) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_vdd_pa);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddpa config failed");
+ goto vdd_pa_fail;
+ }
+ }
+ if (bt_power_pdata->bt_chip_pwd) {
+ rc = bt_configure_vreg(bt_power_pdata->bt_chip_pwd);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power vddldo config failed");
+ /* note: this branch configures bt_chip_pwd, not vddldo */
+ goto chip_pwd_fail;
+ }
+ }
+ if (bt_power_pdata->bt_gpio_sys_rst) {
+ rc = bt_configure_gpios(on);
+ if (rc < 0) {
+ BT_PWR_ERR("bt_power gpio config failed");
+ goto gpio_fail;
+ }
+ }
+ } else {
+ bt_configure_gpios(on);
+gpio_fail:
+ if (bt_power_pdata->bt_gpio_sys_rst)
+ gpio_free(bt_power_pdata->bt_gpio_sys_rst);
+ bt_vreg_disable(bt_power_pdata->bt_chip_pwd);
+chip_pwd_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_pa);
+vdd_pa_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_ldo);
+vdd_ldo_fail:
+ bt_vreg_disable(bt_power_pdata->bt_vdd_io);
+ }
+
+out:
+ return rc;
}
static int bluetooth_toggle_radio(void *data, bool blocked)
@@ -62,7 +251,9 @@
int ret = 0;
int (*power_control)(int enable);
- power_control = data;
+ power_control =
+ ((struct bluetooth_power_platform_data *)data)->bt_power_setup;
+
if (previous != blocked)
ret = (*power_control)(!blocked);
if (!ret)
@@ -117,47 +308,146 @@
platform_set_drvdata(pdev, NULL);
}
+#define MAX_PROP_SIZE 32
+static int bt_dt_parse_vreg_info(struct device *dev,
+ struct bt_power_vreg_data **vreg_data, const char *vreg_name)
+{
+ int len, ret = 0;
+ const __be32 *prop;
+ char prop_name[MAX_PROP_SIZE];
+ struct bt_power_vreg_data *vreg;
+ struct device_node *np = dev->of_node;
+
+ BT_PWR_DBG("vreg dev tree parse for %s", vreg_name);
+
+ snprintf(prop_name, MAX_PROP_SIZE, "%s-supply", vreg_name);
+ if (of_parse_phandle(np, prop_name, 0)) {
+ vreg = devm_kzalloc(dev, sizeof(*vreg), GFP_KERNEL);
+ if (!vreg) {
+ dev_err(dev, "No memory for vreg: %s\n", vreg_name);
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ vreg->name = vreg_name;
+
+ snprintf(prop_name, MAX_PROP_SIZE,
+ "qcom,%s-voltage-level", vreg_name);
+ prop = of_get_property(np, prop_name, &len);
+ if (!prop || (len != (2 * sizeof(__be32)))) {
+ dev_warn(dev, "%s %s property\n",
+ prop ? "invalid format" : "no", prop_name);
+ } else {
+ vreg->low_vol_level = be32_to_cpup(&prop[0]);
+ vreg->high_vol_level = be32_to_cpup(&prop[1]);
+ }
+
+ *vreg_data = vreg;
+ BT_PWR_DBG("%s: vol=[%d %d]uV\n",
+ vreg->name, vreg->low_vol_level,
+ vreg->high_vol_level);
+ } else
+ BT_PWR_INFO("%s: is not provided in device tree", vreg_name);
+
+err:
+ return ret;
+}
+
+static int bt_power_populate_dt_pinfo(struct platform_device *pdev)
+{
+ int rc;
+
+ BT_PWR_DBG("");
+
+ if (!bt_power_pdata)
+ return -ENOMEM;
+
+ if (pdev->dev.of_node) {
+ bt_power_pdata->bt_gpio_sys_rst =
+ of_get_named_gpio(pdev->dev.of_node,
+ "qca,bt-reset-gpio", 0);
+ if (bt_power_pdata->bt_gpio_sys_rst < 0) {
+ BT_PWR_ERR("bt-reset-gpio not provided in device tree");
+ return bt_power_pdata->bt_gpio_sys_rst;
+ }
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_io,
+ "qca,bt-vdd-io");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_pa,
+ "qca,bt-vdd-pa");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_vdd_ldo,
+ "qca,bt-vdd-ldo");
+ if (rc < 0)
+ return rc;
+
+ rc = bt_dt_parse_vreg_info(&pdev->dev,
+ &bt_power_pdata->bt_chip_pwd,
+ "qca,bt-chip-pwd");
+ if (rc < 0)
+ return rc;
+
+ }
+
+ bt_power_pdata->bt_power_setup = bluetooth_power;
+
+ return 0;
+}
+
static int __devinit bt_power_probe(struct platform_device *pdev)
{
int ret = 0;
dev_dbg(&pdev->dev, "%s\n", __func__);
- if (!pdev->dev.platform_data) {
- /* Update the platform data if the
- device node exists as part of device tree.*/
- if (pdev->dev.of_node) {
- pdev->dev.platform_data = bluetooth_power;
- } else {
- dev_err(&pdev->dev, "device node not set\n");
- return -ENOSYS;
- }
+ bt_power_pdata =
+ kzalloc(sizeof(struct bluetooth_power_platform_data),
+ GFP_KERNEL);
+
+ if (!bt_power_pdata) {
+ BT_PWR_ERR("Failed to allocate memory");
+ return -ENOMEM;
}
+
if (pdev->dev.of_node) {
- bt_reset_gpio = of_get_named_gpio(pdev->dev.of_node,
- "qca,bt-reset-gpio", 0);
- if (bt_reset_gpio < 0) {
- pr_err("bt-reset-gpio not available");
- return bt_reset_gpio;
+ ret = bt_power_populate_dt_pinfo(pdev);
+ if (ret < 0) {
+ BT_PWR_ERR("Failed to populate device tree info");
+ goto free_pdata;
}
+ pdev->dev.platform_data = bt_power_pdata;
+ } else if (pdev->dev.platform_data) {
+ /* Optional data set to default if not provided */
+ if (!((struct bluetooth_power_platform_data *)
+ (pdev->dev.platform_data))->bt_power_setup)
+ ((struct bluetooth_power_platform_data *)
+ (pdev->dev.platform_data))->bt_power_setup =
+ bluetooth_power;
+
+ memcpy(bt_power_pdata, pdev->dev.platform_data,
+ sizeof(struct bluetooth_power_platform_data));
+ } else {
+ BT_PWR_ERR("Failed to get platform data");
+ goto free_pdata;
}
- ret = gpio_request(bt_reset_gpio, "bt sys_rst_n");
- if (ret) {
- pr_err("%s: unable to request gpio %d (%d)\n",
- __func__, bt_reset_gpio, ret);
- return ret;
- }
+ if (bluetooth_power_rfkill_probe(pdev) < 0)
+ goto free_pdata;
- /* When booting up, de-assert BT reset pin */
- ret = gpio_direction_output(bt_reset_gpio, 0);
- if (ret) {
- pr_err("%s: Unable to set direction\n", __func__);
- return ret;
- }
+ btpdev = pdev;
- ret = bluetooth_power_rfkill_probe(pdev);
+ return 0;
+free_pdata:
+ kfree(bt_power_pdata);
return ret;
}
@@ -167,6 +457,11 @@
bluetooth_power_rfkill_remove(pdev);
+ if (bt_power_pdata->bt_chip_pwd->reg)
+ regulator_put(bt_power_pdata->bt_chip_pwd->reg);
+
+ kfree(bt_power_pdata);
+
return 0;
}
@@ -176,7 +471,7 @@
.driver = {
.name = "bt_power",
.owner = THIS_MODULE,
- .of_match_table = ar3002_match_table,
+ .of_match_table = bt_power_match_table,
},
};
diff --git a/drivers/char/adsprpc.c b/drivers/char/adsprpc.c
index d78327f..51578e0 100644
--- a/drivers/char/adsprpc.c
+++ b/drivers/char/adsprpc.c
@@ -527,11 +527,9 @@
{
struct fastrpc_apps *me = &gfa;
- if (me->chan)
- (void)smd_close(me->chan);
+ smd_close(me->chan);
context_list_dtor(&me->clst);
- if (me->iclient)
- ion_client_destroy(me->iclient);
+ ion_client_destroy(me->iclient);
me->iclient = 0;
me->chan = 0;
}
@@ -584,25 +582,32 @@
INIT_HLIST_HEAD(&me->htbl[i]);
VERIFY(err, 0 == context_list_ctor(&me->clst, SZ_4K));
if (err)
- goto bail;
+ goto context_list_bail;
me->iclient = msm_ion_client_create(ION_HEAP_CARVEOUT_MASK,
DEVICE_NAME);
VERIFY(err, 0 == IS_ERR_OR_NULL(me->iclient));
if (err)
- goto bail;
+ goto ion_bail;
VERIFY(err, 0 == smd_named_open_on_edge(FASTRPC_SMD_GUID,
SMD_APPS_QDSP, &me->chan,
me, smd_event_handler));
if (err)
- goto bail;
+ goto smd_bail;
VERIFY(err, 0 != wait_for_completion_timeout(&me->work,
RPC_TIMEOUT));
if (err)
- goto bail;
+ goto completion_bail;
}
- bail:
- if (err)
- fastrpc_deinit();
+
+ return 0;
+
+completion_bail:
+ smd_close(me->chan);
+smd_bail:
+ ion_client_destroy(me->iclient);
+ion_bail:
+ context_list_dtor(&me->clst);
+context_list_bail:
return err;
}
@@ -1090,35 +1095,37 @@
VERIFY(err, 0 == fastrpc_init());
if (err)
- goto bail;
+ goto fastrpc_bail;
VERIFY(err, 0 == alloc_chrdev_region(&me->dev_no, 0, 1, DEVICE_NAME));
if (err)
- goto bail;
+ goto alloc_chrdev_bail;
cdev_init(&me->cdev, &fops);
me->cdev.owner = THIS_MODULE;
VERIFY(err, 0 == cdev_add(&me->cdev, MKDEV(MAJOR(me->dev_no), 0), 1));
if (err)
- goto bail;
+ goto cdev_init_bail;
me->class = class_create(THIS_MODULE, "chardrv");
VERIFY(err, !IS_ERR(me->class));
if (err)
- goto bail;
+ goto class_create_bail;
me->dev = device_create(me->class, NULL, MKDEV(MAJOR(me->dev_no), 0),
NULL, DEVICE_NAME);
VERIFY(err, !IS_ERR(me->dev));
if (err)
- goto bail;
+ goto device_create_bail;
pr_info("'created /dev/%s c %d 0'\n", DEVICE_NAME, MAJOR(me->dev_no));
- bail:
- if (err) {
- if (me->dev_no)
- unregister_chrdev_region(me->dev_no, 1);
- if (me->class)
- class_destroy(me->class);
- if (me->cdev.owner)
- cdev_del(&me->cdev);
- fastrpc_deinit();
- }
+
+ return 0;
+
+device_create_bail:
+ class_destroy(me->class);
+class_create_bail:
+ cdev_del(&me->cdev);
+cdev_init_bail:
+ unregister_chrdev_region(me->dev_no, 1);
+alloc_chrdev_bail:
+ fastrpc_deinit();
+fastrpc_bail:
return err;
}
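Both fastrpc_init() and the char-device setup above replace a single catch-all bail label with one label per acquired resource, so each failure path releases exactly what was set up before it and nothing else. The shape of the idiom, with placeholder helpers that are not part of this driver:

	static int acquire_a(void) { return 0; }	/* placeholder */
	static int acquire_b(void) { return 0; }	/* placeholder */
	static void release_a(void) { }			/* placeholder */

	static int example_init(void)
	{
		int err;

		err = acquire_a();
		if (err)
			goto a_bail;
		err = acquire_b();
		if (err)
			goto b_bail;
		return 0;

	b_bail:
		release_a();
	a_bail:
		return err;
	}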
diff --git a/drivers/char/diag/diag_dci.c b/drivers/char/diag/diag_dci.c
index 6d28042..682d876 100644
--- a/drivers/char/diag/diag_dci.c
+++ b/drivers/char/diag/diag_dci.c
@@ -1090,6 +1090,12 @@
diag_smd_destructor(&driver->smd_dci[i]);
platform_driver_unregister(&msm_diag_dci_driver);
+
+ if (driver->dci_client_tbl) {
+ for (i = 0; i < MAX_DCI_CLIENTS; i++)
+ kfree(driver->dci_client_tbl[i].dci_data);
+ }
+
kfree(driver->req_tracking_tbl);
kfree(driver->dci_client_tbl);
kfree(driver->apps_dci_buf);
diff --git a/drivers/char/diag/diag_masks.c b/drivers/char/diag/diag_masks.c
index 9e36e1e..04c3300 100644
--- a/drivers/char/diag/diag_masks.c
+++ b/drivers/char/diag/diag_masks.c
@@ -481,6 +481,7 @@
driver->feature_mask->ctrl_pkt_data_len = 4 + FEATURE_MASK_LEN_BYTES;
driver->feature_mask->feature_mask_len = FEATURE_MASK_LEN_BYTES;
memcpy(buf, driver->feature_mask, header_size);
+ feature_byte |= F_DIAG_INT_FEATURE_MASK;
feature_byte |= APPS_RESPOND_LOG_ON_DEMAND;
memcpy(buf+header_size, &feature_byte, FEATURE_MASK_LEN_BYTES);
diff --git a/drivers/char/diag/diagchar.h b/drivers/char/diag/diagchar.h
index 6f37608..684f11d 100644
--- a/drivers/char/diag/diagchar.h
+++ b/drivers/char/diag/diagchar.h
@@ -37,6 +37,7 @@
#define HDLC_OUT_BUF_SIZE 8192
#define POOL_TYPE_COPY 1
#define POOL_TYPE_HDLC 2
+#define POOL_TYPE_USER 3
#define POOL_TYPE_WRITE_STRUCT 4
#define POOL_TYPE_HSIC 5
#define POOL_TYPE_HSIC_2 6
@@ -55,7 +56,7 @@
#define MSG_MASK_SIZE 10000
#define LOG_MASK_SIZE 8000
#define EVENT_MASK_SIZE 1000
-#define USER_SPACE_DATA 8000
+#define USER_SPACE_DATA 8192
#define PKT_SIZE 4096
#define MAX_EQUIP_ID 15
#define DIAG_CTRL_MSG_LOG_MASK 9
@@ -234,16 +235,20 @@
unsigned int poolsize;
unsigned int itemsize_hdlc;
unsigned int poolsize_hdlc;
+ unsigned int itemsize_user;
+ unsigned int poolsize_user;
unsigned int itemsize_write_struct;
unsigned int poolsize_write_struct;
unsigned int debug_flag;
/* State for the mempool for the char driver */
mempool_t *diagpool;
mempool_t *diag_hdlc_pool;
+ mempool_t *diag_user_pool;
mempool_t *diag_write_struct_pool;
struct mutex diagmem_mutex;
int count;
int count_hdlc_pool;
+ int count_user_pool;
int count_write_struct_pool;
int used;
/* Buffers for masks */
@@ -259,7 +264,6 @@
struct diag_smd_info smd_dci[NUM_SMD_DCI_CHANNELS];
unsigned char *usb_buf_out;
unsigned char *apps_rsp_buf;
- unsigned char *user_space_data;
/* buffer for updating mask to peripherals */
unsigned char *buf_msg_mask_update;
unsigned char *buf_log_mask_update;
@@ -276,6 +280,8 @@
struct usb_diag_ch *legacy_ch;
struct work_struct diag_proc_hdlc_work;
struct work_struct diag_read_work;
+ struct work_struct diag_usb_connect_work;
+ struct work_struct diag_usb_disconnect_work;
#endif
struct workqueue_struct *diag_wq;
struct work_struct diag_drain_work;
@@ -312,6 +318,7 @@
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
spinlock_t hsic_ready_spinlock;
/* common for all bridges */
+ struct work_struct diag_connect_work;
struct work_struct diag_disconnect_work;
/* SGLTE variables */
int lcid;
diff --git a/drivers/char/diag/diagchar_core.c b/drivers/char/diag/diagchar_core.c
index a0c32f5..7022a6f 100644
--- a/drivers/char/diag/diagchar_core.c
+++ b/drivers/char/diag/diagchar_core.c
@@ -56,13 +56,16 @@
/* The following variables can be specified by module options */
/* for copy buffer */
static unsigned int itemsize = 4096; /*Size of item in the mempool */
-static unsigned int poolsize = 10; /*Number of items in the mempool */
+static unsigned int poolsize = 12; /*Number of items in the mempool */
/* for hdlc buffer */
static unsigned int itemsize_hdlc = 8192; /*Size of item in the mempool */
-static unsigned int poolsize_hdlc = 8; /*Number of items in the mempool */
+static unsigned int poolsize_hdlc = 10; /*Number of items in the mempool */
+/* for user buffer */
+static unsigned int itemsize_user = 8192; /*Size of item in the mempool */
+static unsigned int poolsize_user = 8; /*Number of items in the mempool */
/* for write structure buffer */
static unsigned int itemsize_write_struct = 20; /*Size of item in the mempool */
-static unsigned int poolsize_write_struct = 8; /* Num of items in the mempool */
+static unsigned int poolsize_write_struct = 10;/* Num of items in the mempool */
/* This is the max number of user-space clients supported at initialization*/
static unsigned int max_clients = 15;
static unsigned int threshold_client_limit = 30;
@@ -781,10 +784,26 @@
{
int i, temp, success = -EINVAL, status;
int temp_realtime_mode = driver->real_time_mode;
+ int requested_mode = (int)ioarg;
+
+ switch (requested_mode) {
+ case USB_MODE:
+ case MEMORY_DEVICE_MODE:
+ case NO_LOGGING_MODE:
+ case UART_MODE:
+ case SOCKET_MODE:
+ case CALLBACK_MODE:
+ case MEMORY_DEVICE_MODE_NRT:
+ break;
+ default:
+ pr_err("diag: In %s, request to switch to invalid mode: %d\n",
+ __func__, requested_mode);
+ return -EINVAL;
+ }
mutex_lock(&driver->diagchar_mutex);
temp = driver->logging_mode;
- driver->logging_mode = (int)ioarg;
+ driver->logging_mode = requested_mode;
if (driver->logging_mode == MEMORY_DEVICE_MODE_NRT) {
diag_send_diag_mode_update(MODE_NONREALTIME);
@@ -1013,6 +1032,8 @@
current->tgid)
driver->req_tracking_tbl[i].pid = 0;
driver->dci_client_tbl[result].client = NULL;
+ kfree(driver->dci_client_tbl[result].dci_data);
+ driver->dci_client_tbl[result].dci_data = NULL;
driver->num_dci_client--;
}
mutex_unlock(&driver->dci_mutex);
@@ -1362,6 +1383,7 @@
struct diag_send_desc_type send = { NULL, NULL, DIAG_STATE_START, 0 };
struct diag_hdlc_dest_type enc = { NULL, NULL, 0 };
void *buf_copy = NULL;
+ void *user_space_data = NULL;
unsigned int payload_size;
index = 0;
@@ -1388,31 +1410,50 @@
}
#endif /* DIAG over USB */
if (pkt_type == DCI_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
- payload_size);
+ user_space_data = diagmem_alloc(driver, payload_size,
+ POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4, payload_size);
if (err) {
pr_alert("diag: copy failed for DCI data\n");
return DIAG_DCI_SEND_DATA_FAIL;
}
- err = diag_process_dci_transaction(driver->user_space_data,
+ err = diag_process_dci_transaction(user_space_data,
payload_size);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return err;
}
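DCI, callback and user-space writes now draw short-lived buffers from dedicated mempools instead of sharing the single driver->user_space_data buffer, which both bounds memory use and removes a serialization point. The allocate/copy/process/free shape repeated below, sketched with the driver's diagmem helpers (process_payload() is a placeholder):

	/* Illustrative only: per-write scratch buffer from the user pool. */
	static int example_handle_user_payload(const char __user *buf,
					       int payload_size)
	{
		void *data;
		int err;

		data = diagmem_alloc(driver, payload_size, POOL_TYPE_USER);
		if (!data) {
			driver->dropped_count++;
			return -ENOMEM;
		}

		err = copy_from_user(data, buf, payload_size);
		if (err) {
			diagmem_free(driver, data, POOL_TYPE_USER);
			return -EIO;
		}

		err = process_payload(data, payload_size);	/* placeholder */
		diagmem_free(driver, data, POOL_TYPE_USER);
		return err;
	}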
if (pkt_type == CALLBACK_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
- payload_size);
- if (err) {
+ if (payload_size > itemsize) {
+ pr_err("diag: Dropping packet, packet payload size crosses 4KB limit. Current payload size %d\n",
+ payload_size);
+ driver->dropped_count++;
+ return -EBADMSG;
+ }
+
+ buf_copy = diagmem_alloc(driver, payload_size, POOL_TYPE_COPY);
+ if (!buf_copy) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+
+ err = copy_from_user(buf_copy, buf + 4, payload_size);
+ if (err) {
pr_err("diag: copy failed for user space data\n");
return -EIO;
}
/* Check for proc_type */
- remote_proc = diag_get_remote(*(int *)driver->user_space_data);
+ remote_proc = diag_get_remote(*(int *)buf_copy);
if (!remote_proc) {
wait_event_interruptible(driver->wait_q,
(driver->in_busy_pktdata == 0));
- return diag_process_apps_pkt(driver->user_space_data,
- payload_size);
+ ret = diag_process_apps_pkt(buf_copy, payload_size);
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
+ return ret;
}
/* The packet is for the remote processor */
token_offset = 4;
@@ -1420,8 +1461,8 @@
buf += 4;
/* Perform HDLC encoding on incoming data */
send.state = DIAG_STATE_START;
- send.pkt = (void *)(driver->user_space_data + token_offset);
- send.last = (void *)(driver->user_space_data + token_offset -
+ send.pkt = (void *)(buf_copy + token_offset);
+ send.last = (void *)(buf_copy + token_offset -
1 + payload_size);
send.terminate = 1;
@@ -1503,21 +1544,30 @@
}
}
#endif
+ diagmem_free(driver, buf_copy, POOL_TYPE_COPY);
diagmem_free(driver, buf_hdlc, POOL_TYPE_HDLC);
+ buf_copy = NULL;
buf_hdlc = NULL;
driver->used = 0;
mutex_unlock(&driver->diagchar_mutex);
return ret;
}
if (pkt_type == USER_SPACE_DATA_TYPE) {
- err = copy_from_user(driver->user_space_data, buf + 4,
+ user_space_data = diagmem_alloc(driver, payload_size,
+ POOL_TYPE_USER);
+ if (!user_space_data) {
+ driver->dropped_count++;
+ return -ENOMEM;
+ }
+ err = copy_from_user(user_space_data, buf + 4,
payload_size);
if (err) {
pr_err("diag: copy failed for user space data\n");
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return -EIO;
}
/* Check for proc_type */
- remote_proc = diag_get_remote(*(int *)driver->user_space_data);
+ remote_proc = diag_get_remote(*(int *)user_space_data);
if (remote_proc) {
token_offset = 4;
@@ -1527,9 +1577,11 @@
/* Check masks for On-Device logging */
if (driver->mask_check) {
- if (!mask_request_validate(driver->user_space_data +
+ if (!mask_request_validate(user_space_data +
token_offset)) {
pr_alert("diag: mask request Invalid\n");
+ diagmem_free(driver, user_space_data,
+ POOL_TYPE_USER);
return -EFAULT;
}
}
@@ -1537,7 +1589,7 @@
#ifdef DIAG_DEBUG
pr_debug("diag: user space data %d\n", payload_size);
for (i = 0; i < payload_size; i++)
- pr_debug("\t %x", *((driver->user_space_data
+ pr_debug("\t %x", *((user_space_data
+ token_offset)+i));
#endif
#ifdef CONFIG_DIAG_SDIO_PIPE
@@ -1548,7 +1600,7 @@
payload_size));
if (driver->sdio_ch && (payload_size > 0)) {
sdio_write(driver->sdio_ch, (void *)
- (driver->user_space_data + token_offset),
+ (user_space_data + token_offset),
payload_size);
}
}
@@ -1578,8 +1630,8 @@
diag_hsic[index].in_busy_hsic_read_on_device =
0;
err = diag_bridge_write(index,
- driver->user_space_data +
- token_offset, payload_size);
+ user_space_data + token_offset,
+ payload_size);
if (err) {
pr_err("diag: err sending mask to MDM: %d\n",
err);
@@ -1600,11 +1652,13 @@
&& driver->lcid) {
if (payload_size > 0) {
err = msm_smux_write(driver->lcid, NULL,
- driver->user_space_data + token_offset,
+ user_space_data + token_offset,
payload_size);
if (err) {
pr_err("diag:send mask to MDM err %d",
err);
+ diagmem_free(driver, user_space_data,
+ POOL_TYPE_USER);
return err;
}
}
@@ -1613,8 +1667,8 @@
/* send masks to 8k now */
if (!remote_proc)
diag_process_hdlc((void *)
- (driver->user_space_data + token_offset),
- payload_size);
+ (user_space_data + token_offset), payload_size);
+ diagmem_free(driver, user_space_data, POOL_TYPE_USER);
return 0;
}
@@ -1885,6 +1939,11 @@
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
+static void diag_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect_bridge(1);
+}
+
static void diag_disconnect_work_fn(struct work_struct *w)
{
diagfwd_disconnect_bridge(1);
@@ -1944,6 +2003,8 @@
driver->poolsize = poolsize;
driver->itemsize_hdlc = itemsize_hdlc;
driver->poolsize_hdlc = poolsize_hdlc;
+ driver->itemsize_user = itemsize_user;
+ driver->poolsize_user = poolsize_user;
driver->itemsize_write_struct = itemsize_write_struct;
driver->poolsize_write_struct = poolsize_write_struct;
driver->num_clients = max_clients;
@@ -1969,6 +2030,8 @@
pr_err("diag: could not register HSIC device, ret: %d\n",
ret);
diagfwd_bridge_init(SMUX);
+ INIT_WORK(&(driver->diag_connect_work),
+ diag_connect_work_fn);
INIT_WORK(&(driver->diag_disconnect_work),
diag_disconnect_work_fn);
#endif
@@ -2008,6 +2071,7 @@
diagchar_cleanup();
diagfwd_exit();
diagfwd_cntl_exit();
+ diag_dci_exit();
diag_masks_exit();
diag_sdio_fn(EXIT);
diagfwd_bridge_fn(EXIT);
@@ -2022,6 +2086,7 @@
diagmem_exit(driver, POOL_TYPE_ALL);
diagfwd_exit();
diagfwd_cntl_exit();
+ diag_dci_exit();
diag_masks_exit();
diag_sdio_fn(EXIT);
diagfwd_bridge_fn(EXIT);
diff --git a/drivers/char/diag/diagfwd.c b/drivers/char/diag/diagfwd.c
index 5b929d7..151e304 100644
--- a/drivers/char/diag/diagfwd.c
+++ b/drivers/char/diag/diagfwd.c
@@ -402,7 +402,7 @@
return;
}
if (pkt_len > r) {
- pr_err("diag: In %s, SMD sending partial pkt %d %d %d %d %d %d\n",
+ pr_debug("diag: In %s, SMD sending partial pkt %d %d %d %d %d %d\n",
__func__, pkt_len, r, total_recd, loop_count,
smd_info->peripheral, smd_info->type);
}
@@ -693,8 +693,7 @@
diag_update_sleeping_process(entry.process_id, PKT_TYPE);
} else {
if (len > 0) {
- if ((entry.client_id >= 0) &&
- (entry.client_id < NUM_SMD_DATA_CHANNELS)) {
+ if (entry.client_id < NUM_SMD_DATA_CHANNELS) {
int index = entry.client_id;
if (driver->smd_data[index].ch) {
if ((index == MODEM_DATA) &&
@@ -907,94 +906,186 @@
/* bld time masks */
switch (ssid_first) {
case MSG_SSID_0:
+ if (ssid_range > sizeof(msg_bld_masks_0)) {
+ pr_warning("diag: truncating ssid range for ssid 0");
+ ssid_range = sizeof(msg_bld_masks_0);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_0[i/4];
break;
case MSG_SSID_1:
+ if (ssid_range > sizeof(msg_bld_masks_1)) {
+ pr_warning("diag: truncating ssid range for ssid 1");
+ ssid_range = sizeof(msg_bld_masks_1);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_1[i/4];
break;
case MSG_SSID_2:
+ if (ssid_range > sizeof(msg_bld_masks_2)) {
+ pr_warning("diag: truncating ssid range for ssid 2");
+ ssid_range = sizeof(msg_bld_masks_2);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_2[i/4];
break;
case MSG_SSID_3:
+ if (ssid_range > sizeof(msg_bld_masks_3)) {
+ pr_warning("diag: truncating ssid range for ssid 3");
+ ssid_range = sizeof(msg_bld_masks_3);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_3[i/4];
break;
case MSG_SSID_4:
+ if (ssid_range > sizeof(msg_bld_masks_4)) {
+ pr_warning("diag: truncating ssid range for ssid 4");
+ ssid_range = sizeof(msg_bld_masks_4);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_4[i/4];
break;
case MSG_SSID_5:
+ if (ssid_range > sizeof(msg_bld_masks_5)) {
+ pr_warning("diag: truncating ssid range for ssid 5");
+ ssid_range = sizeof(msg_bld_masks_5);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_5[i/4];
break;
case MSG_SSID_6:
+ if (ssid_range > sizeof(msg_bld_masks_6)) {
+ pr_warning("diag: truncating ssid range for ssid 6");
+ ssid_range = sizeof(msg_bld_masks_6);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_6[i/4];
break;
case MSG_SSID_7:
+ if (ssid_range > sizeof(msg_bld_masks_7)) {
+ pr_warning("diag: truncating ssid range for ssid 7");
+ ssid_range = sizeof(msg_bld_masks_7);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_7[i/4];
break;
case MSG_SSID_8:
+ if (ssid_range > sizeof(msg_bld_masks_8)) {
+ pr_warning("diag: truncating ssid range for ssid 8");
+ ssid_range = sizeof(msg_bld_masks_8);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_8[i/4];
break;
case MSG_SSID_9:
+ if (ssid_range > sizeof(msg_bld_masks_9)) {
+ pr_warning("diag: truncating ssid range for ssid 9");
+ ssid_range = sizeof(msg_bld_masks_9);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_9[i/4];
break;
case MSG_SSID_10:
+ if (ssid_range > sizeof(msg_bld_masks_10)) {
+ pr_warning("diag: truncating ssid range for ssid 10");
+ ssid_range = sizeof(msg_bld_masks_10);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_10[i/4];
break;
case MSG_SSID_11:
+ if (ssid_range > sizeof(msg_bld_masks_11)) {
+ pr_warning("diag: truncating ssid range for ssid 11");
+ ssid_range = sizeof(msg_bld_masks_11);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_11[i/4];
break;
case MSG_SSID_12:
+ if (ssid_range > sizeof(msg_bld_masks_12)) {
+ pr_warning("diag: truncating ssid range for ssid 12");
+ ssid_range = sizeof(msg_bld_masks_12);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_12[i/4];
break;
case MSG_SSID_13:
+ if (ssid_range > sizeof(msg_bld_masks_13)) {
+ pr_warning("diag: truncating ssid range for ssid 13");
+ ssid_range = sizeof(msg_bld_masks_13);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_13[i/4];
break;
case MSG_SSID_14:
+ if (ssid_range > sizeof(msg_bld_masks_14)) {
+ pr_warning("diag: truncating ssid range for ssid 14");
+ ssid_range = sizeof(msg_bld_masks_14);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_14[i/4];
break;
case MSG_SSID_15:
+ if (ssid_range > sizeof(msg_bld_masks_15)) {
+ pr_warning("diag: truncating ssid range for ssid 15");
+ ssid_range = sizeof(msg_bld_masks_15);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_15[i/4];
break;
case MSG_SSID_16:
+ if (ssid_range > sizeof(msg_bld_masks_16)) {
+ pr_warning("diag: truncating ssid range for ssid 16");
+ ssid_range = sizeof(msg_bld_masks_16);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_16[i/4];
break;
case MSG_SSID_17:
+ if (ssid_range > sizeof(msg_bld_masks_17)) {
+ pr_warning("diag: truncating ssid range for ssid 17");
+ ssid_range = sizeof(msg_bld_masks_17);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_17[i/4];
break;
case MSG_SSID_18:
+ if (ssid_range > sizeof(msg_bld_masks_18)) {
+ pr_warning("diag: truncating ssid range for ssid 18");
+ ssid_range = sizeof(msg_bld_masks_18);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_18[i/4];
break;
case MSG_SSID_19:
+ if (ssid_range > sizeof(msg_bld_masks_19)) {
+ pr_warning("diag: truncating ssid range for ssid 19");
+ ssid_range = sizeof(msg_bld_masks_19);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_19[i/4];
break;
case MSG_SSID_20:
+ if (ssid_range > sizeof(msg_bld_masks_20)) {
+ pr_warning("diag: truncating ssid range for ssid 20");
+ ssid_range = sizeof(msg_bld_masks_20);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_20[i/4];
break;
case MSG_SSID_21:
+ if (ssid_range > sizeof(msg_bld_masks_21)) {
+ pr_warning("diag: truncating ssid range for ssid 21");
+ ssid_range = sizeof(msg_bld_masks_21);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_21[i/4];
break;
case MSG_SSID_22:
+ if (ssid_range > sizeof(msg_bld_masks_22)) {
+ pr_warning("diag: truncating ssid range for ssid 22");
+ ssid_range = sizeof(msg_bld_masks_22);
+ }
for (i = 0; i < ssid_range; i += 4)
*(int *)(ptr + i) = msg_bld_masks_22[i/4];
break;
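
Note: every case above applies the same guard — clamp the requested ssid_range to the size of the corresponding build-time mask array before copying, so a request for more entries than exist cannot read past the array. A hedged, self-contained sketch of that clamp-and-copy step (the array and names are illustrative):

#include <stdio.h>
#include <string.h>

static const int bld_masks_example[] = { 0x1, 0x3, 0x7 };

/* copy at most masks_size bytes of 32-bit masks into dst */
static size_t copy_bld_masks(char *dst, size_t requested,
			     const int *masks, size_t masks_size)
{
	size_t i;

	if (requested > masks_size) {
		fprintf(stderr, "truncating requested range %zu to %zu\n",
			requested, masks_size);
		requested = masks_size;
	}
	for (i = 0; i < requested; i += 4)
		memcpy(dst + i, &masks[i / 4], 4);
	return requested;
}

int main(void)
{
	char buf[64];
	size_t n = copy_bld_masks(buf, 100, bld_masks_example,
				  sizeof(bld_masks_example));

	printf("copied %zu bytes\n", n);
	return 0;
}
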
@@ -1191,6 +1282,16 @@
#define N_LEGACY_WRITE (driver->poolsize + 6)
#define N_LEGACY_READ 1
+static void diag_usb_connect_work_fn(struct work_struct *w)
+{
+ diagfwd_connect();
+}
+
+static void diag_usb_disconnect_work_fn(struct work_struct *w)
+{
+ diagfwd_disconnect();
+}
+
int diagfwd_connect(void)
{
int err;
@@ -1357,10 +1458,12 @@
{
switch (event) {
case USB_DIAG_CONNECT:
- diagfwd_connect();
+ queue_work(driver->diag_wq,
+ &driver->diag_usb_connect_work);
break;
case USB_DIAG_DISCONNECT:
- diagfwd_disconnect();
+ queue_work(driver->diag_wq,
+ &driver->diag_usb_disconnect_work);
break;
case USB_DIAG_READ_DONE:
diagfwd_read_complete(d_req);
@@ -1692,11 +1795,6 @@
&& (driver->hdlc_buf = kzalloc(HDLC_MAX, GFP_KERNEL)) == NULL)
goto err;
kmemleak_not_leak(driver->hdlc_buf);
- if (driver->user_space_data == NULL)
- driver->user_space_data = kzalloc(USER_SPACE_DATA, GFP_KERNEL);
- if (driver->user_space_data == NULL)
- goto err;
- kmemleak_not_leak(driver->user_space_data);
if (driver->client_map == NULL &&
(driver->client_map = kzalloc
((driver->num_clients) * sizeof(struct diag_client_map),
@@ -1741,6 +1839,10 @@
}
driver->diag_wq = create_singlethread_workqueue("diag_wq");
#ifdef CONFIG_DIAG_OVER_USB
+ INIT_WORK(&(driver->diag_usb_connect_work),
+ diag_usb_connect_work_fn);
+ INIT_WORK(&(driver->diag_usb_disconnect_work),
+ diag_usb_disconnect_work_fn);
INIT_WORK(&(driver->diag_proc_hdlc_work), diag_process_hdlc_fn);
INIT_WORK(&(driver->diag_read_work), diag_read_work_fn);
driver->legacy_ch = usb_diag_open(DIAG_LEGACY, driver,
@@ -1771,7 +1873,6 @@
kfree(driver->pkt_buf);
kfree(driver->usb_read_ptr);
kfree(driver->apps_rsp_buf);
- kfree(driver->user_space_data);
if (driver->diag_wq)
destroy_workqueue(driver->diag_wq);
}
@@ -1804,6 +1905,5 @@
kfree(driver->pkt_buf);
kfree(driver->usb_read_ptr);
kfree(driver->apps_rsp_buf);
- kfree(driver->user_space_data);
destroy_workqueue(driver->diag_wq);
}
diff --git a/drivers/char/diag/diagfwd_bridge.c b/drivers/char/diag/diagfwd_bridge.c
index 475f5ba..8c07219b 100644
--- a/drivers/char/diag/diagfwd_bridge.c
+++ b/drivers/char/diag/diagfwd_bridge.c
@@ -233,7 +233,8 @@
switch (event) {
case USB_DIAG_CONNECT:
- diagfwd_connect_bridge(1);
+ queue_work(driver->diag_wq,
+ &driver->diag_connect_work);
break;
case USB_DIAG_DISCONNECT:
queue_work(driver->diag_wq,
diff --git a/drivers/char/diag/diagfwd_cntl.h b/drivers/char/diag/diagfwd_cntl.h
index 7cd1866..9c2c691 100644
--- a/drivers/char/diag/diagfwd_cntl.h
+++ b/drivers/char/diag/diagfwd_cntl.h
@@ -30,6 +30,9 @@
/* Send Diag F3 mask */
#define DIAG_CTRL_MSG_F3_MASK_V2 11
+/* Denotes that we support sending/receiving the feature mask */
+#define F_DIAG_INT_FEATURE_MASK 0x01
+
struct cmd_code_range {
uint16_t cmd_code_lo;
uint16_t cmd_code_hi;
diff --git a/drivers/char/diag/diagmem.c b/drivers/char/diag/diagmem.c
index 0cd8267..bd339e2 100644
--- a/drivers/char/diag/diagmem.c
+++ b/drivers/char/diag/diagmem.c
@@ -45,6 +45,15 @@
GFP_ATOMIC);
}
}
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool) {
+ if (driver->count_user_pool < driver->poolsize_user) {
+ atomic_add(1,
+ (atomic_t *)&driver->count_user_pool);
+ buf = mempool_alloc(driver->diag_user_pool,
+ GFP_ATOMIC);
+ }
+ }
} else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
if (driver->diag_write_struct_pool) {
if (driver->count_write_struct_pool <
@@ -98,8 +107,9 @@
mempool_destroy(driver->diagpool);
driver->diagpool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy COPY mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy COPY mempool");
+ }
}
if (driver->diag_hdlc_pool) {
@@ -107,8 +117,19 @@
mempool_destroy(driver->diag_hdlc_pool);
driver->diag_hdlc_pool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy HDLC mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy HDLC mempool");
+ }
+ }
+
+ if (driver->diag_user_pool) {
+ if (driver->count_user_pool == 0 && driver->ref_count == 0) {
+ mempool_destroy(driver->diag_user_pool);
+ driver->diag_user_pool = NULL;
+ } else if (driver->ref_count == 0 && pool_type ==
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy USER mempool");
+ }
}
if (driver->diag_write_struct_pool) {
@@ -119,8 +140,9 @@
mempool_destroy(driver->diag_write_struct_pool);
driver->diag_write_struct_pool = NULL;
} else if (driver->ref_count == 0 && pool_type ==
- POOL_TYPE_ALL)
- printk(KERN_ALERT "Unable to destroy STRUCT mempool");
+ POOL_TYPE_ALL) {
+ pr_err("diag: Unable to destroy STRUCT mempool");
+ }
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
for (index = 0; index < MAX_HSIC_CH; index++) {
@@ -163,16 +185,25 @@
mempool_free(buf, driver->diagpool);
atomic_add(-1, (atomic_t *)&driver->count);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "mempool memory which is already free %d", driver->count);
+ pr_err("diag: Attempt to free up DIAG driver mempool memory which is already free %d",
+ driver->count);
} else if (pool_type == POOL_TYPE_HDLC) {
if (driver->diag_hdlc_pool != NULL &&
driver->count_hdlc_pool > 0) {
mempool_free(buf, driver->diag_hdlc_pool);
atomic_add(-1, (atomic_t *)&driver->count_hdlc_pool);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "HDLC mempool which is already free %d ", driver->count_hdlc_pool);
+ pr_err("diag: Attempt to free up DIAG driver HDLC mempool which is already free %d ",
+ driver->count_hdlc_pool);
+ } else if (pool_type == POOL_TYPE_USER) {
+ if (driver->diag_user_pool != NULL &&
+ driver->count_user_pool > 0) {
+ mempool_free(buf, driver->diag_user_pool);
+ atomic_add(-1, (atomic_t *)&driver->count_user_pool);
+ } else {
+ pr_err("diag: Attempt to free up DIAG driver USER mempool which is already free %d ",
+ driver->count_user_pool);
+ }
} else if (pool_type == POOL_TYPE_WRITE_STRUCT) {
if (driver->diag_write_struct_pool != NULL &&
driver->count_write_struct_pool > 0) {
@@ -180,9 +211,8 @@
atomic_add(-1,
(atomic_t *)&driver->count_write_struct_pool);
} else
- pr_err("diag: Attempt to free up DIAG driver "
- "USB structure mempool which is already free %d ",
- driver->count_write_struct_pool);
+ pr_err("diag: Attempt to free up DIAG driver USB structure mempool which is already free %d ",
+ driver->count_write_struct_pool);
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
} else if (pool_type == POOL_TYPE_HSIC ||
pool_type == POOL_TYPE_HSIC_2) {
@@ -229,18 +259,25 @@
driver->diag_hdlc_pool = mempool_create_kmalloc_pool(
driver->poolsize_hdlc, driver->itemsize_hdlc);
+ if (driver->count_user_pool == 0)
+ driver->diag_user_pool = mempool_create_kmalloc_pool(
+ driver->poolsize_user, driver->itemsize_user);
+
if (driver->count_write_struct_pool == 0)
driver->diag_write_struct_pool = mempool_create_kmalloc_pool(
driver->poolsize_write_struct, driver->itemsize_write_struct);
if (!driver->diagpool)
- printk(KERN_INFO "Cannot allocate diag mempool\n");
+ pr_err("diag: Cannot allocate diag mempool\n");
if (!driver->diag_hdlc_pool)
- printk(KERN_INFO "Cannot allocate diag HDLC mempool\n");
+ pr_err("diag: Cannot allocate diag HDLC mempool\n");
+
+ if (!driver->diag_user_pool)
+ pr_err("diag: Cannot allocate diag USER mempool\n");
if (!driver->diag_write_struct_pool)
- printk(KERN_INFO "Cannot allocate diag USB struct mempool\n");
+ pr_err("diag: Cannot allocate diag USB struct mempool\n");
}
#ifdef CONFIG_DIAGFWD_BRIDGE_CODE
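
Note: the diagmem changes introduce a bounded USER mempool alongside the existing COPY/HDLC/STRUCT pools — allocations are refused once count_user_pool reaches poolsize_user, frees decrement the counter, and the pool is destroyed only when no buffers are outstanding. A hedged sketch of that counted-mempool pattern (struct and names are illustrative):

#include <linux/mempool.h>
#include <linux/atomic.h>

struct my_pool {
	mempool_t *pool;
	atomic_t count;		/* buffers currently handed out */
	int poolsize;
};

static int my_pool_init(struct my_pool *p, int poolsize, int itemsize)
{
	p->poolsize = poolsize;
	atomic_set(&p->count, 0);
	p->pool = mempool_create_kmalloc_pool(poolsize, itemsize);
	return p->pool ? 0 : -ENOMEM;
}

static void *my_pool_alloc(struct my_pool *p)
{
	if (!p->pool || atomic_read(&p->count) >= p->poolsize)
		return NULL;	/* cap the number of in-flight buffers */
	atomic_inc(&p->count);
	return mempool_alloc(p->pool, GFP_ATOMIC);
}

static void my_pool_free(struct my_pool *p, void *buf)
{
	mempool_free(buf, p->pool);
	atomic_dec(&p->count);
}
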
diff --git a/drivers/char/msm_rotator.c b/drivers/char/msm_rotator.c
index e946b42..9a43ea4 100644
--- a/drivers/char/msm_rotator.c
+++ b/drivers/char/msm_rotator.c
@@ -181,8 +181,7 @@
pr_err("ion_import_dma_buf() failed\n");
return PTR_ERR(*pihdl);
}
- pr_debug("%s(): ion_hdl %p, ion_fd %d\n", __func__, *pihdl,
- ion_share_dma_buf(msm_rotator_dev->client, *pihdl));
+ pr_debug("%s(): ion_hdl %p, ion_fd %d\n", __func__, *pihdl, mem_id);
if (rot_iommu_split_domain) {
if (secure) {
diff --git a/drivers/coresight/coresight-csr.c b/drivers/coresight/coresight-csr.c
index 988d1c9..1c2ab25 100644
--- a/drivers/coresight/coresight-csr.c
+++ b/drivers/coresight/coresight-csr.c
@@ -102,7 +102,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_enable_bam_to_usb);
+EXPORT_SYMBOL(msm_qdss_csr_enable_bam_to_usb);
void msm_qdss_csr_disable_bam_to_usb(void)
{
@@ -117,7 +117,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_disable_bam_to_usb);
+EXPORT_SYMBOL(msm_qdss_csr_disable_bam_to_usb);
void msm_qdss_csr_disable_flush(void)
{
@@ -132,7 +132,7 @@
CSR_LOCK(drvdata);
}
-EXPORT_SYMBOL_GPL(msm_qdss_csr_disable_flush);
+EXPORT_SYMBOL(msm_qdss_csr_disable_flush);
static int __devinit csr_probe(struct platform_device *pdev)
{
diff --git a/drivers/coresight/coresight-etm.c b/drivers/coresight/coresight-etm.c
index 2777769..5a5c0cf 100644
--- a/drivers/coresight/coresight-etm.c
+++ b/drivers/coresight/coresight-etm.c
@@ -276,23 +276,32 @@
}
/*
- * Memory mapped writes to clear os lock are not supported on Krait v1, v2
- * and OS lock must be unlocked before any memory mapped access, otherwise
- * memory mapped reads/writes will be invalid.
+ * Unlock the OS lock to allow memory mapped access on Krait and, in
+ * general, so that ETMSR[1] can be polled while clearing the ETMCR[10]
+ * prog bit, since ETMSR[1] is set when the prog bit or the OS lock is set.
*/
static void etm_os_unlock(void *info)
{
struct etm_drvdata *drvdata = (struct etm_drvdata *) info;
- ETM_UNLOCK(drvdata);
+ /*
+ * Memory mapped writes to clear os lock are not supported on Krait v1,
+ * v2 and OS lock must be unlocked before any memory mapped access,
+ * otherwise memory mapped reads/writes will be invalid.
+ */
if (cpu_is_krait()) {
etm_writel_cp14(0x0, ETMOSLAR);
+ /* ensure os lock is unlocked before we return */
isb();
- } else if (etm_os_lock_present(drvdata)) {
- etm_writel(drvdata, 0x0, ETMOSLAR);
- mb();
+ } else {
+ ETM_UNLOCK(drvdata);
+ if (etm_os_lock_present(drvdata)) {
+ etm_writel(drvdata, 0x0, ETMOSLAR);
+ /* ensure os lock is unlocked before we return */
+ mb();
+ }
+ ETM_LOCK(drvdata);
}
- ETM_LOCK(drvdata);
}
/*
@@ -1876,11 +1885,26 @@
void *hcpu)
{
unsigned int cpu = (unsigned long)hcpu;
+ static bool clk_disable[NR_CPUS];
+ int ret;
if (!etmdrvdata[cpu])
goto out;
switch (action & (~CPU_TASKS_FROZEN)) {
+ case CPU_UP_PREPARE:
+ if (!etmdrvdata[cpu]->os_unlock) {
+ ret = clk_prepare_enable(etmdrvdata[cpu]->clk);
+ if (ret) {
+ dev_err(etmdrvdata[cpu]->dev,
+ "ETM clk enable during hotplug failed"
+ "for cpu: %d, ret: %d\n", cpu, ret);
+ return notifier_from_errno(ret);
+ }
+ clk_disable[cpu] = true;
+ }
+ break;
+
case CPU_STARTING:
spin_lock(&etmdrvdata[cpu]->spinlock);
if (!etmdrvdata[cpu]->os_unlock) {
@@ -1894,6 +1918,11 @@
break;
case CPU_ONLINE:
+ if (clk_disable[cpu]) {
+ clk_disable_unprepare(etmdrvdata[cpu]->clk);
+ clk_disable[cpu] = false;
+ }
+
if (etmdrvdata[cpu]->boot_enable &&
!etmdrvdata[cpu]->sticky_enable)
coresight_enable(etmdrvdata[cpu]->csdev);
@@ -1903,6 +1932,13 @@
__etm_store_pcsave(etmdrvdata[cpu], 1);
break;
+ case CPU_UP_CANCELED:
+ if (clk_disable[cpu]) {
+ clk_disable_unprepare(etmdrvdata[cpu]->clk);
+ clk_disable[cpu] = false;
+ }
+ break;
+
case CPU_DYING:
spin_lock(&etmdrvdata[cpu]->spinlock);
if (etmdrvdata[cpu]->enable && etmdrvdata[cpu]->round_robin)
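
Note: the ETM hotplug change takes the trace clock in CPU_UP_PREPARE, where sleeping is allowed, and releases it once the CPU is online or the bring-up is cancelled, using a per-CPU flag to remember that this notifier holds the reference. A hedged sketch of that enable/release pairing; my_get_cpu_trace_clk() is an assumed helper, not an ETM driver function:

#include <linux/cpu.h>
#include <linux/clk.h>

extern struct clk *my_get_cpu_trace_clk(unsigned int cpu); /* assumed helper */

/* one flag per CPU: did this notifier take a clock reference? */
static bool clk_held[NR_CPUS];

static int my_cpu_notify(struct notifier_block *nb,
			 unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;
	struct clk *clk = my_get_cpu_trace_clk(cpu);
	int ret;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_UP_PREPARE:		/* process context, may sleep */
		ret = clk_prepare_enable(clk);
		if (ret)
			return notifier_from_errno(ret);
		clk_held[cpu] = true;
		break;
	case CPU_ONLINE:		/* bring-up finished */
	case CPU_UP_CANCELED:		/* bring-up failed */
		if (clk_held[cpu]) {
			clk_disable_unprepare(clk);
			clk_held[cpu] = false;
		}
		break;
	}
	return NOTIFY_OK;
}
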
diff --git a/drivers/coresight/coresight-stm.c b/drivers/coresight/coresight-stm.c
index 87cf63a..7d4dabe 100644
--- a/drivers/coresight/coresight-stm.c
+++ b/drivers/coresight/coresight-stm.c
@@ -595,7 +595,7 @@
return __stm_trace(options, entity_id, proto_id, data, size);
}
-EXPORT_SYMBOL_GPL(stm_trace);
+EXPORT_SYMBOL(stm_trace);
static ssize_t stm_write(struct file *file, const char __user *data,
size_t size, loff_t *ppos)
diff --git a/drivers/coresight/coresight.c b/drivers/coresight/coresight.c
index aef3d26..e237fb7 100644
--- a/drivers/coresight/coresight.c
+++ b/drivers/coresight/coresight.c
@@ -368,7 +368,7 @@
pr_err("coresight: enable failed\n");
return ret;
}
-EXPORT_SYMBOL_GPL(coresight_enable);
+EXPORT_SYMBOL(coresight_enable);
void coresight_disable(struct coresight_device *csdev)
{
@@ -391,7 +391,7 @@
out:
up(&coresight_mutex);
}
-EXPORT_SYMBOL_GPL(coresight_disable);
+EXPORT_SYMBOL(coresight_disable);
void coresight_abort(void)
{
@@ -415,7 +415,7 @@
out:
up(&coresight_mutex);
}
-EXPORT_SYMBOL_GPL(coresight_abort);
+EXPORT_SYMBOL(coresight_abort);
static ssize_t coresight_show_type(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -681,7 +681,7 @@
err_kzalloc_csdev:
return ERR_PTR(ret);
}
-EXPORT_SYMBOL_GPL(coresight_register);
+EXPORT_SYMBOL(coresight_register);
void coresight_unregister(struct coresight_device *csdev)
{
@@ -693,7 +693,7 @@
put_device(&csdev->dev);
}
}
-EXPORT_SYMBOL_GPL(coresight_unregister);
+EXPORT_SYMBOL(coresight_unregister);
static int __init coresight_init(void)
{
diff --git a/drivers/coresight/of_coresight.c b/drivers/coresight/of_coresight.c
index 1eccd09..8b8c0d62 100644
--- a/drivers/coresight/of_coresight.c
+++ b/drivers/coresight/of_coresight.c
@@ -97,7 +97,7 @@
"coresight-default-sink");
return pdata;
}
-EXPORT_SYMBOL_GPL(of_get_coresight_platform_data);
+EXPORT_SYMBOL(of_get_coresight_platform_data);
struct coresight_cti_data *of_get_coresight_cti_data(
struct device *dev, struct device_node *node)
diff --git a/drivers/cpufreq/cpufreq_interactive.c b/drivers/cpufreq/cpufreq_interactive.c
index 63cdc68..7d1952c 100644
--- a/drivers/cpufreq/cpufreq_interactive.c
+++ b/drivers/cpufreq/cpufreq_interactive.c
@@ -20,97 +20,94 @@
#include <linux/cpumask.h>
#include <linux/cpufreq.h>
#include <linux/module.h>
-#include <linux/mutex.h>
+#include <linux/moduleparam.h>
+#include <linux/rwsem.h>
#include <linux/sched.h>
#include <linux/tick.h>
#include <linux/time.h>
#include <linux/timer.h>
#include <linux/workqueue.h>
#include <linux/kthread.h>
-#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/input.h>
#include <asm/cputime.h>
#define CREATE_TRACE_POINTS
#include <trace/events/cpufreq_interactive.h>
-static atomic_t active_count = ATOMIC_INIT(0);
+static int active_count;
struct cpufreq_interactive_cpuinfo {
struct timer_list cpu_timer;
- int timer_idlecancel;
+ struct timer_list cpu_slack_timer;
+ spinlock_t load_lock; /* protects the next 4 fields */
u64 time_in_idle;
- u64 idle_exit_time;
- u64 timer_run_time;
- int idling;
- u64 target_set_time;
- u64 target_set_time_in_idle;
+ u64 time_in_idle_timestamp;
+ u64 cputime_speedadj;
+ u64 cputime_speedadj_timestamp;
struct cpufreq_policy *policy;
struct cpufreq_frequency_table *freq_table;
unsigned int target_freq;
unsigned int floor_freq;
u64 floor_validate_time;
u64 hispeed_validate_time;
+ struct rw_semaphore enable_sem;
int governor_enabled;
};
static DEFINE_PER_CPU(struct cpufreq_interactive_cpuinfo, cpuinfo);
-/* Workqueues handle frequency scaling */
-static struct task_struct *up_task;
-static struct workqueue_struct *down_wq;
-static struct work_struct freq_scale_down_work;
-static cpumask_t up_cpumask;
-static spinlock_t up_cpumask_lock;
-static cpumask_t down_cpumask;
-static spinlock_t down_cpumask_lock;
-static struct mutex set_speed_lock;
+/* realtime thread handles frequency scaling */
+static struct task_struct *speedchange_task;
+static cpumask_t speedchange_cpumask;
+static spinlock_t speedchange_cpumask_lock;
+static struct mutex gov_lock;
/* Hi speed to bump to from lo speed when load burst (default max) */
-static u64 hispeed_freq;
+static unsigned int hispeed_freq;
/* Go to hi speed when CPU load at or above this value. */
-#define DEFAULT_GO_HISPEED_LOAD 85
-static unsigned long go_hispeed_load;
+#define DEFAULT_GO_HISPEED_LOAD 99
+static unsigned long go_hispeed_load = DEFAULT_GO_HISPEED_LOAD;
+
+/* Target load. Lower values result in higher CPU speeds. */
+#define DEFAULT_TARGET_LOAD 90
+static unsigned int default_target_loads[] = {DEFAULT_TARGET_LOAD};
+static spinlock_t target_loads_lock;
+static unsigned int *target_loads = default_target_loads;
+static int ntarget_loads = ARRAY_SIZE(default_target_loads);
/*
* The minimum amount of time to spend at a frequency before we can ramp down.
*/
#define DEFAULT_MIN_SAMPLE_TIME (80 * USEC_PER_MSEC)
-static unsigned long min_sample_time;
+static unsigned long min_sample_time = DEFAULT_MIN_SAMPLE_TIME;
/*
* The sample rate of the timer used to increase frequency
*/
#define DEFAULT_TIMER_RATE (20 * USEC_PER_MSEC)
-static unsigned long timer_rate;
+static unsigned long timer_rate = DEFAULT_TIMER_RATE;
/*
* Wait this long before raising speed above hispeed, by default a single
* timer interval.
*/
#define DEFAULT_ABOVE_HISPEED_DELAY DEFAULT_TIMER_RATE
-static unsigned long above_hispeed_delay_val;
+static unsigned long above_hispeed_delay_val = DEFAULT_ABOVE_HISPEED_DELAY;
-/*
- * Boost pulse to hispeed on touchscreen input.
- */
-
-static int input_boost_val;
-
-struct cpufreq_interactive_inputopen {
- struct input_handle *handle;
- struct work_struct inputopen_work;
-};
-
-static struct cpufreq_interactive_inputopen inputopen;
-
-/*
- * Non-zero means longer-term speed boost active.
- */
-
+/* Non-zero means indefinite speed boost active */
static int boost_val;
+/* Duration of a boost pulse in usecs */
+static int boostpulse_duration_val = DEFAULT_MIN_SAMPLE_TIME;
+/* End time of boost pulse in ktime converted to usecs */
+static u64 boostpulse_endtime;
+
+/*
+ * Max additional time to wait in idle, beyond timer_rate, at speeds above
+ * minimum before wakeup to reduce speed, or -1 if unnecessary.
+ */
+#define DEFAULT_TIMER_SLACK (4 * DEFAULT_TIMER_RATE)
+static int timer_slack_val = DEFAULT_TIMER_SLACK;
static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
unsigned int event);
@@ -125,104 +122,210 @@
.owner = THIS_MODULE,
};
-static void cpufreq_interactive_timer(unsigned long data)
+static void cpufreq_interactive_timer_resched(
+ struct cpufreq_interactive_cpuinfo *pcpu)
{
- unsigned int delta_idle;
- unsigned int delta_time;
- int cpu_load;
- int load_since_change;
- u64 time_in_idle;
- u64 idle_exit_time;
- struct cpufreq_interactive_cpuinfo *pcpu =
- &per_cpu(cpuinfo, data);
- u64 now_idle;
- unsigned int new_freq;
- unsigned int index;
+ unsigned long expires = jiffies + usecs_to_jiffies(timer_rate);
unsigned long flags;
- smp_rmb();
+ mod_timer_pinned(&pcpu->cpu_timer, expires);
+ if (timer_slack_val >= 0 && pcpu->target_freq > pcpu->policy->min) {
+ expires += usecs_to_jiffies(timer_slack_val);
+ mod_timer_pinned(&pcpu->cpu_slack_timer, expires);
+ }
+ spin_lock_irqsave(&pcpu->load_lock, flags);
+ pcpu->time_in_idle =
+ get_cpu_idle_time_us(smp_processor_id(),
+ &pcpu->time_in_idle_timestamp);
+ pcpu->cputime_speedadj = 0;
+ pcpu->cputime_speedadj_timestamp = pcpu->time_in_idle_timestamp;
+ spin_unlock_irqrestore(&pcpu->load_lock, flags);
+}
+
+static unsigned int freq_to_targetload(unsigned int freq)
+{
+ int i;
+ unsigned int ret;
+ unsigned long flags;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+
+ for (i = 0; i < ntarget_loads - 1 && freq >= target_loads[i+1]; i += 2)
+ ;
+
+ ret = target_loads[i];
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return ret;
+}
+
+/*
+ * If increasing frequencies never map to a lower target load then
+ * choose_freq() will find the minimum frequency that does not exceed its
+ * target load given the current load.
+ */
+
+static unsigned int choose_freq(
+ struct cpufreq_interactive_cpuinfo *pcpu, unsigned int loadadjfreq)
+{
+ unsigned int freq = pcpu->policy->cur;
+ unsigned int prevfreq, freqmin, freqmax;
+ unsigned int tl;
+ int index;
+
+ freqmin = 0;
+ freqmax = UINT_MAX;
+
+ do {
+ prevfreq = freq;
+ tl = freq_to_targetload(freq);
+
+ /*
+ * Find the lowest frequency where the computed load is less
+ * than or equal to the target load.
+ */
+
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table, loadadjfreq / tl,
+ CPUFREQ_RELATION_L, &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ if (freq > prevfreq) {
+ /* The previous frequency is too low. */
+ freqmin = prevfreq;
+
+ if (freq >= freqmax) {
+ /*
+ * Find the highest frequency that is less
+ * than freqmax.
+ */
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table,
+ freqmax - 1, CPUFREQ_RELATION_H,
+ &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ if (freq == freqmin) {
+ /*
+ * The first frequency below freqmax
+ * has already been found to be too
+ * low. freqmax is the lowest speed
+ * we found that is fast enough.
+ */
+ freq = freqmax;
+ break;
+ }
+ }
+ } else if (freq < prevfreq) {
+ /* The previous frequency is high enough. */
+ freqmax = prevfreq;
+
+ if (freq <= freqmin) {
+ /*
+ * Find the lowest frequency that is higher
+ * than freqmin.
+ */
+ cpufreq_frequency_table_target(
+ pcpu->policy, pcpu->freq_table,
+ freqmin + 1, CPUFREQ_RELATION_L,
+ &index);
+ freq = pcpu->freq_table[index].frequency;
+
+ /*
+ * If freqmax is the first frequency above
+ * freqmin then we have already found that
+ * this speed is fast enough.
+ */
+ if (freq == freqmax)
+ break;
+ }
+ }
+
+ /* If same frequency chosen as previous then done. */
+ } while (freq != prevfreq);
+
+ return freq;
+}
+
+static u64 update_load(int cpu)
+{
+ struct cpufreq_interactive_cpuinfo *pcpu = &per_cpu(cpuinfo, cpu);
+ u64 now;
+ u64 now_idle;
+ unsigned int delta_idle;
+ unsigned int delta_time;
+ u64 active_time;
+
+ now_idle = get_cpu_idle_time_us(cpu, &now);
+ delta_idle = (unsigned int)(now_idle - pcpu->time_in_idle);
+ delta_time = (unsigned int)(now - pcpu->time_in_idle_timestamp);
+ active_time = delta_time - delta_idle;
+ pcpu->cputime_speedadj += active_time * pcpu->policy->cur;
+
+ pcpu->time_in_idle = now_idle;
+ pcpu->time_in_idle_timestamp = now;
+ return now;
+}
+
+static void cpufreq_interactive_timer(unsigned long data)
+{
+ u64 now;
+ unsigned int delta_time;
+ u64 cputime_speedadj;
+ int cpu_load;
+ struct cpufreq_interactive_cpuinfo *pcpu =
+ &per_cpu(cpuinfo, data);
+ unsigned int new_freq;
+ unsigned int loadadjfreq;
+ unsigned int index;
+ unsigned long flags;
+ bool boosted;
+
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return;
if (!pcpu->governor_enabled)
goto exit;
- /*
- * Once pcpu->timer_run_time is updated to >= pcpu->idle_exit_time,
- * this lets idle exit know the current idle time sample has
- * been processed, and idle exit can generate a new sample and
- * re-arm the timer. This prevents a concurrent idle
- * exit on that CPU from writing a new set of info at the same time
- * the timer function runs (the timer function can't use that info
- * until more time passes).
- */
- time_in_idle = pcpu->time_in_idle;
- idle_exit_time = pcpu->idle_exit_time;
- now_idle = get_cpu_idle_time_us(data, &pcpu->timer_run_time);
- smp_wmb();
+ spin_lock_irqsave(&pcpu->load_lock, flags);
+ now = update_load(data);
+ delta_time = (unsigned int)(now - pcpu->cputime_speedadj_timestamp);
+ cputime_speedadj = pcpu->cputime_speedadj;
+ spin_unlock_irqrestore(&pcpu->load_lock, flags);
- /* If we raced with cancelling a timer, skip. */
- if (!idle_exit_time)
- goto exit;
-
- delta_idle = (unsigned int)(now_idle - time_in_idle);
- delta_time = (unsigned int)(pcpu->timer_run_time - idle_exit_time);
-
- /*
- * If timer ran less than 1ms after short-term sample started, retry.
- */
- if (delta_time < 1000)
+ if (WARN_ON_ONCE(!delta_time))
goto rearm;
- if (delta_idle > delta_time)
- cpu_load = 0;
- else
- cpu_load = 100 * (delta_time - delta_idle) / delta_time;
+ do_div(cputime_speedadj, delta_time);
+ loadadjfreq = (unsigned int)cputime_speedadj * 100;
+ cpu_load = loadadjfreq / pcpu->target_freq;
+ boosted = boost_val || now < boostpulse_endtime;
- delta_idle = (unsigned int)(now_idle - pcpu->target_set_time_in_idle);
- delta_time = (unsigned int)(pcpu->timer_run_time -
- pcpu->target_set_time);
-
- if ((delta_time == 0) || (delta_idle > delta_time))
- load_since_change = 0;
- else
- load_since_change =
- 100 * (delta_time - delta_idle) / delta_time;
-
- /*
- * Choose greater of short-term load (since last idle timer
- * started or timer function re-armed itself) or long-term load
- * (since last frequency change).
- */
- if (load_since_change > cpu_load)
- cpu_load = load_since_change;
-
- if (cpu_load >= go_hispeed_load || boost_val) {
- if (pcpu->target_freq <= pcpu->policy->min) {
+ if (cpu_load >= go_hispeed_load || boosted) {
+ if (pcpu->target_freq < hispeed_freq) {
new_freq = hispeed_freq;
} else {
- new_freq = pcpu->policy->max * cpu_load / 100;
+ new_freq = choose_freq(pcpu, loadadjfreq);
if (new_freq < hispeed_freq)
new_freq = hispeed_freq;
-
- if (pcpu->target_freq == hispeed_freq &&
- new_freq > hispeed_freq &&
- pcpu->timer_run_time - pcpu->hispeed_validate_time
- < above_hispeed_delay_val) {
- trace_cpufreq_interactive_notyet(data, cpu_load,
- pcpu->target_freq,
- new_freq);
- goto rearm;
- }
}
} else {
- new_freq = pcpu->policy->max * cpu_load / 100;
+ new_freq = choose_freq(pcpu, loadadjfreq);
}
- if (new_freq <= hispeed_freq)
- pcpu->hispeed_validate_time = pcpu->timer_run_time;
+ if (pcpu->target_freq >= hispeed_freq &&
+ new_freq > pcpu->target_freq &&
+ now - pcpu->hispeed_validate_time < above_hispeed_delay_val) {
+ trace_cpufreq_interactive_notyet(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
+ goto rearm;
+ }
+
+ pcpu->hispeed_validate_time = now;
if (cpufreq_frequency_table_target(pcpu->policy, pcpu->freq_table,
- new_freq, CPUFREQ_RELATION_H,
+ new_freq, CPUFREQ_RELATION_L,
&index)) {
pr_warn_once("timer %d: cpufreq_frequency_table_target error\n",
(int) data);
@@ -236,41 +339,42 @@
* floor frequency for the minimum sample time since last validated.
*/
if (new_freq < pcpu->floor_freq) {
- if (pcpu->timer_run_time - pcpu->floor_validate_time
- < min_sample_time) {
- trace_cpufreq_interactive_notyet(data, cpu_load,
- pcpu->target_freq, new_freq);
+ if (now - pcpu->floor_validate_time < min_sample_time) {
+ trace_cpufreq_interactive_notyet(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
goto rearm;
}
}
- pcpu->floor_freq = new_freq;
- pcpu->floor_validate_time = pcpu->timer_run_time;
+ /*
+ * Update the timestamp for checking whether speed has been held at
+ * or above the selected frequency for a minimum of min_sample_time,
+ * if not boosted to hispeed_freq. If boosted to hispeed_freq then we
+ * allow the speed to drop as soon as the boostpulse duration expires
+ * (or the indefinite boost is turned off).
+ */
+
+ if (!boosted || new_freq > hispeed_freq) {
+ pcpu->floor_freq = new_freq;
+ pcpu->floor_validate_time = now;
+ }
if (pcpu->target_freq == new_freq) {
- trace_cpufreq_interactive_already(data, cpu_load,
- pcpu->target_freq, new_freq);
+ trace_cpufreq_interactive_already(
+ data, cpu_load, pcpu->target_freq,
+ pcpu->policy->cur, new_freq);
goto rearm_if_notmax;
}
trace_cpufreq_interactive_target(data, cpu_load, pcpu->target_freq,
- new_freq);
- pcpu->target_set_time_in_idle = now_idle;
- pcpu->target_set_time = pcpu->timer_run_time;
+ pcpu->policy->cur, new_freq);
- if (new_freq < pcpu->target_freq) {
- pcpu->target_freq = new_freq;
- spin_lock_irqsave(&down_cpumask_lock, flags);
- cpumask_set_cpu(data, &down_cpumask);
- spin_unlock_irqrestore(&down_cpumask_lock, flags);
- queue_work(down_wq, &freq_scale_down_work);
- } else {
- pcpu->target_freq = new_freq;
- spin_lock_irqsave(&up_cpumask_lock, flags);
- cpumask_set_cpu(data, &up_cpumask);
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
- wake_up_process(up_task);
- }
+ pcpu->target_freq = new_freq;
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
+ cpumask_set_cpu(data, &speedchange_cpumask);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
+ wake_up_process(speedchange_task);
rearm_if_notmax:
/*
@@ -281,28 +385,11 @@
goto exit;
rearm:
- if (!timer_pending(&pcpu->cpu_timer)) {
- /*
- * If already at min: if that CPU is idle, don't set timer.
- * Else cancel the timer if that CPU goes idle. We don't
- * need to re-evaluate speed until the next idle exit.
- */
- if (pcpu->target_freq == pcpu->policy->min) {
- smp_rmb();
-
- if (pcpu->idling)
- goto exit;
-
- pcpu->timer_idlecancel = 1;
- }
-
- pcpu->time_in_idle = get_cpu_idle_time_us(
- data, &pcpu->idle_exit_time);
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- }
+ if (!timer_pending(&pcpu->cpu_timer))
+ cpufreq_interactive_timer_resched(pcpu);
exit:
+ up_read(&pcpu->enable_sem);
return;
}
@@ -312,15 +399,16 @@
&per_cpu(cpuinfo, smp_processor_id());
int pending;
- if (!pcpu->governor_enabled)
+ if (!down_read_trylock(&pcpu->enable_sem))
return;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return;
+ }
- pcpu->idling = 1;
- smp_wmb();
pending = timer_pending(&pcpu->cpu_timer);
if (pcpu->target_freq != pcpu->policy->min) {
-#ifdef CONFIG_SMP
/*
* Entering idle while not at lowest speed. On some
* platforms this can hold the other CPU(s) at that speed
@@ -329,33 +417,11 @@
* min indefinitely. This should probably be a quirk of
* the CPUFreq driver.
*/
- if (!pending) {
- pcpu->time_in_idle = get_cpu_idle_time_us(
- smp_processor_id(), &pcpu->idle_exit_time);
- pcpu->timer_idlecancel = 0;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- }
-#endif
- } else {
- /*
- * If at min speed and entering idle after load has
- * already been evaluated, and a timer has been set just in
- * case the CPU suddenly goes busy, cancel that timer. The
- * CPU didn't go busy; we'll recheck things upon idle exit.
- */
- if (pending && pcpu->timer_idlecancel) {
- del_timer(&pcpu->cpu_timer);
- /*
- * Ensure last timer run time is after current idle
- * sample start time, so next idle exit will always
- * start a new idle sampling period.
- */
- pcpu->idle_exit_time = 0;
- pcpu->timer_idlecancel = 0;
- }
+ if (!pending)
+ cpufreq_interactive_timer_resched(pcpu);
}
+ up_read(&pcpu->enable_sem);
}
static void cpufreq_interactive_idle_end(void)
@@ -363,34 +429,26 @@
struct cpufreq_interactive_cpuinfo *pcpu =
&per_cpu(cpuinfo, smp_processor_id());
- pcpu->idling = 0;
- smp_wmb();
-
- /*
- * Arm the timer for 1-2 ticks later if not already, and if the timer
- * function has already processed the previous load sampling
- * interval. (If the timer is not pending but has not processed
- * the previous interval, it is probably racing with us on another
- * CPU. Let it compute load based on the previous sample and then
- * re-arm the timer for another interval when it's done, rather
- * than updating the interval start time to be "now", which doesn't
- * give the timer function enough time to make a decision on this
- * run.)
- */
- if (timer_pending(&pcpu->cpu_timer) == 0 &&
- pcpu->timer_run_time >= pcpu->idle_exit_time &&
- pcpu->governor_enabled) {
- pcpu->time_in_idle =
- get_cpu_idle_time_us(smp_processor_id(),
- &pcpu->idle_exit_time);
- pcpu->timer_idlecancel = 0;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return;
}
+ /* Arm the timer for 1-2 ticks later if not already. */
+ if (!timer_pending(&pcpu->cpu_timer)) {
+ cpufreq_interactive_timer_resched(pcpu);
+ } else if (time_after_eq(jiffies, pcpu->cpu_timer.expires)) {
+ del_timer(&pcpu->cpu_timer);
+ del_timer(&pcpu->cpu_slack_timer);
+ cpufreq_interactive_timer(smp_processor_id());
+ }
+
+ up_read(&pcpu->enable_sem);
}
-static int cpufreq_interactive_up_task(void *data)
+static int cpufreq_interactive_speedchange_task(void *data)
{
unsigned int cpu;
cpumask_t tmp_mask;
@@ -399,34 +457,35 @@
while (1) {
set_current_state(TASK_INTERRUPTIBLE);
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
- if (cpumask_empty(&up_cpumask)) {
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ if (cpumask_empty(&speedchange_cpumask)) {
+ spin_unlock_irqrestore(&speedchange_cpumask_lock,
+ flags);
schedule();
if (kthread_should_stop())
break;
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
}
set_current_state(TASK_RUNNING);
- tmp_mask = up_cpumask;
- cpumask_clear(&up_cpumask);
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ tmp_mask = speedchange_cpumask;
+ cpumask_clear(&speedchange_cpumask);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
for_each_cpu(cpu, &tmp_mask) {
unsigned int j;
unsigned int max_freq = 0;
pcpu = &per_cpu(cpuinfo, cpu);
- smp_rmb();
-
- if (!pcpu->governor_enabled)
+ if (!down_read_trylock(&pcpu->enable_sem))
continue;
-
- mutex_lock(&set_speed_lock);
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ continue;
+ }
for_each_cpu(j, pcpu->policy->cpus) {
struct cpufreq_interactive_cpuinfo *pjcpu =
@@ -440,57 +499,17 @@
__cpufreq_driver_target(pcpu->policy,
max_freq,
CPUFREQ_RELATION_H);
- mutex_unlock(&set_speed_lock);
- trace_cpufreq_interactive_up(cpu, pcpu->target_freq,
+ trace_cpufreq_interactive_setspeed(cpu,
+ pcpu->target_freq,
pcpu->policy->cur);
+
+ up_read(&pcpu->enable_sem);
}
}
return 0;
}
-static void cpufreq_interactive_freq_down(struct work_struct *work)
-{
- unsigned int cpu;
- cpumask_t tmp_mask;
- unsigned long flags;
- struct cpufreq_interactive_cpuinfo *pcpu;
-
- spin_lock_irqsave(&down_cpumask_lock, flags);
- tmp_mask = down_cpumask;
- cpumask_clear(&down_cpumask);
- spin_unlock_irqrestore(&down_cpumask_lock, flags);
-
- for_each_cpu(cpu, &tmp_mask) {
- unsigned int j;
- unsigned int max_freq = 0;
-
- pcpu = &per_cpu(cpuinfo, cpu);
- smp_rmb();
-
- if (!pcpu->governor_enabled)
- continue;
-
- mutex_lock(&set_speed_lock);
-
- for_each_cpu(j, pcpu->policy->cpus) {
- struct cpufreq_interactive_cpuinfo *pjcpu =
- &per_cpu(cpuinfo, j);
-
- if (pjcpu->target_freq > max_freq)
- max_freq = pjcpu->target_freq;
- }
-
- if (max_freq != pcpu->policy->cur)
- __cpufreq_driver_target(pcpu->policy, max_freq,
- CPUFREQ_RELATION_H);
-
- mutex_unlock(&set_speed_lock);
- trace_cpufreq_interactive_down(cpu, pcpu->target_freq,
- pcpu->policy->cur);
- }
-}
-
static void cpufreq_interactive_boost(void)
{
int i;
@@ -498,17 +517,16 @@
unsigned long flags;
struct cpufreq_interactive_cpuinfo *pcpu;
- spin_lock_irqsave(&up_cpumask_lock, flags);
+ spin_lock_irqsave(&speedchange_cpumask_lock, flags);
for_each_online_cpu(i) {
pcpu = &per_cpu(cpuinfo, i);
if (pcpu->target_freq < hispeed_freq) {
pcpu->target_freq = hispeed_freq;
- cpumask_set_cpu(i, &up_cpumask);
- pcpu->target_set_time_in_idle =
- get_cpu_idle_time_us(i, &pcpu->target_set_time);
- pcpu->hispeed_validate_time = pcpu->target_set_time;
+ cpumask_set_cpu(i, &speedchange_cpumask);
+ pcpu->hispeed_validate_time =
+ ktime_to_us(ktime_get());
anyboost = 1;
}
@@ -521,106 +539,126 @@
pcpu->floor_validate_time = ktime_to_us(ktime_get());
}
- spin_unlock_irqrestore(&up_cpumask_lock, flags);
+ spin_unlock_irqrestore(&speedchange_cpumask_lock, flags);
if (anyboost)
- wake_up_process(up_task);
+ wake_up_process(speedchange_task);
}
-/*
- * Pulsed boost on input event raises CPUs to hispeed_freq and lets
- * usual algorithm of min_sample_time decide when to allow speed
- * to drop.
- */
-
-static void cpufreq_interactive_input_event(struct input_handle *handle,
- unsigned int type,
- unsigned int code, int value)
+static int cpufreq_interactive_notifier(
+ struct notifier_block *nb, unsigned long val, void *data)
{
- if (input_boost_val && type == EV_SYN && code == SYN_REPORT) {
- trace_cpufreq_interactive_boost("input");
- cpufreq_interactive_boost();
+ struct cpufreq_freqs *freq = data;
+ struct cpufreq_interactive_cpuinfo *pcpu;
+ int cpu;
+ unsigned long flags;
+
+ if (val == CPUFREQ_POSTCHANGE) {
+ pcpu = &per_cpu(cpuinfo, freq->cpu);
+ if (!down_read_trylock(&pcpu->enable_sem))
+ return 0;
+ if (!pcpu->governor_enabled) {
+ up_read(&pcpu->enable_sem);
+ return 0;
+ }
+
+ for_each_cpu(cpu, pcpu->policy->cpus) {
+ struct cpufreq_interactive_cpuinfo *pjcpu =
+ &per_cpu(cpuinfo, cpu);
+ spin_lock_irqsave(&pjcpu->load_lock, flags);
+ update_load(cpu);
+ spin_unlock_irqrestore(&pjcpu->load_lock, flags);
+ }
+
+ up_read(&pcpu->enable_sem);
}
-}
-
-static void cpufreq_interactive_input_open(struct work_struct *w)
-{
- struct cpufreq_interactive_inputopen *io =
- container_of(w, struct cpufreq_interactive_inputopen,
- inputopen_work);
- int error;
-
- error = input_open_device(io->handle);
- if (error)
- input_unregister_handle(io->handle);
-}
-
-static int cpufreq_interactive_input_connect(struct input_handler *handler,
- struct input_dev *dev,
- const struct input_device_id *id)
-{
- struct input_handle *handle;
- int error;
-
- pr_info("%s: connect to %s\n", __func__, dev->name);
- handle = kzalloc(sizeof(struct input_handle), GFP_KERNEL);
- if (!handle)
- return -ENOMEM;
-
- handle->dev = dev;
- handle->handler = handler;
- handle->name = "cpufreq_interactive";
-
- error = input_register_handle(handle);
- if (error)
- goto err;
-
- inputopen.handle = handle;
- queue_work(down_wq, &inputopen.inputopen_work);
return 0;
-err:
- kfree(handle);
- return error;
}
-static void cpufreq_interactive_input_disconnect(struct input_handle *handle)
+static struct notifier_block cpufreq_notifier_block = {
+ .notifier_call = cpufreq_interactive_notifier,
+};
+
+static ssize_t show_target_loads(
+ struct kobject *kobj, struct attribute *attr, char *buf)
{
- input_close_device(handle);
- input_unregister_handle(handle);
- kfree(handle);
+ int i;
+ ssize_t ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+
+ for (i = 0; i < ntarget_loads; i++)
+ ret += sprintf(buf + ret, "%u%s", target_loads[i],
+ i & 0x1 ? ":" : " ");
+
+ ret += sprintf(buf + ret, "\n");
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return ret;
}
-static const struct input_device_id cpufreq_interactive_ids[] = {
- {
- .flags = INPUT_DEVICE_ID_MATCH_EVBIT |
- INPUT_DEVICE_ID_MATCH_ABSBIT,
- .evbit = { BIT_MASK(EV_ABS) },
- .absbit = { [BIT_WORD(ABS_MT_POSITION_X)] =
- BIT_MASK(ABS_MT_POSITION_X) |
- BIT_MASK(ABS_MT_POSITION_Y) },
- }, /* multi-touch touchscreen */
- {
- .flags = INPUT_DEVICE_ID_MATCH_KEYBIT |
- INPUT_DEVICE_ID_MATCH_ABSBIT,
- .keybit = { [BIT_WORD(BTN_TOUCH)] = BIT_MASK(BTN_TOUCH) },
- .absbit = { [BIT_WORD(ABS_X)] =
- BIT_MASK(ABS_X) | BIT_MASK(ABS_Y) },
- }, /* touchpad */
- { },
-};
+static ssize_t store_target_loads(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
+{
+ int ret;
+ const char *cp;
+ unsigned int *new_target_loads = NULL;
+ int ntokens = 1;
+ int i;
+ unsigned long flags;
-static struct input_handler cpufreq_interactive_input_handler = {
- .event = cpufreq_interactive_input_event,
- .connect = cpufreq_interactive_input_connect,
- .disconnect = cpufreq_interactive_input_disconnect,
- .name = "cpufreq_interactive",
- .id_table = cpufreq_interactive_ids,
-};
+ cp = buf;
+ while ((cp = strpbrk(cp + 1, " :")))
+ ntokens++;
+
+ if (!(ntokens & 0x1))
+ goto err_inval;
+
+ new_target_loads = kmalloc(ntokens * sizeof(unsigned int), GFP_KERNEL);
+ if (!new_target_loads) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ cp = buf;
+ i = 0;
+ while (i < ntokens) {
+ if (sscanf(cp, "%u", &new_target_loads[i++]) != 1)
+ goto err_inval;
+
+ cp = strpbrk(cp, " :");
+ if (!cp)
+ break;
+ cp++;
+ }
+
+ if (i != ntokens)
+ goto err_inval;
+
+ spin_lock_irqsave(&target_loads_lock, flags);
+ if (target_loads != default_target_loads)
+ kfree(target_loads);
+ target_loads = new_target_loads;
+ ntarget_loads = ntokens;
+ spin_unlock_irqrestore(&target_loads_lock, flags);
+ return count;
+
+err_inval:
+ ret = -EINVAL;
+err:
+ kfree(new_target_loads);
+ return ret;
+}
+
+static struct global_attr target_loads_attr =
+ __ATTR(target_loads, S_IRUGO | S_IWUSR,
+ show_target_loads, store_target_loads);
static ssize_t show_hispeed_freq(struct kobject *kobj,
struct attribute *attr, char *buf)
{
- return sprintf(buf, "%llu\n", hispeed_freq);
+ return sprintf(buf, "%u\n", hispeed_freq);
}
static ssize_t store_hispeed_freq(struct kobject *kobj,
@@ -628,9 +666,9 @@
size_t count)
{
int ret;
- u64 val;
+ unsigned long val;
- ret = strict_strtoull(buf, 0, &val);
+ ret = strict_strtoul(buf, 0, &val);
if (ret < 0)
return ret;
hispeed_freq = val;
@@ -729,26 +767,28 @@
static struct global_attr timer_rate_attr = __ATTR(timer_rate, 0644,
show_timer_rate, store_timer_rate);
-static ssize_t show_input_boost(struct kobject *kobj, struct attribute *attr,
- char *buf)
+static ssize_t show_timer_slack(
+ struct kobject *kobj, struct attribute *attr, char *buf)
{
- return sprintf(buf, "%u\n", input_boost_val);
+ return sprintf(buf, "%d\n", timer_slack_val);
}
-static ssize_t store_input_boost(struct kobject *kobj, struct attribute *attr,
- const char *buf, size_t count)
+static ssize_t store_timer_slack(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
{
int ret;
unsigned long val;
- ret = strict_strtoul(buf, 0, &val);
+ ret = kstrtol(buf, 10, &val);
if (ret < 0)
return ret;
- input_boost_val = val;
+
+ timer_slack_val = val;
return count;
}
-define_one_global_rw(input_boost);
+define_one_global_rw(timer_slack);
static ssize_t show_boost(struct kobject *kobj, struct attribute *attr,
char *buf)
@@ -790,6 +830,7 @@
if (ret < 0)
return ret;
+ boostpulse_endtime = ktime_to_us(ktime_get()) + boostpulse_duration_val;
trace_cpufreq_interactive_boost("pulse");
cpufreq_interactive_boost();
return count;
@@ -798,15 +839,40 @@
static struct global_attr boostpulse =
__ATTR(boostpulse, 0200, NULL, store_boostpulse);
+static ssize_t show_boostpulse_duration(
+ struct kobject *kobj, struct attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n", boostpulse_duration_val);
+}
+
+static ssize_t store_boostpulse_duration(
+ struct kobject *kobj, struct attribute *attr, const char *buf,
+ size_t count)
+{
+ int ret;
+ unsigned long val;
+
+ ret = kstrtoul(buf, 0, &val);
+ if (ret < 0)
+ return ret;
+
+ boostpulse_duration_val = val;
+ return count;
+}
+
+define_one_global_rw(boostpulse_duration);
+
static struct attribute *interactive_attributes[] = {
+ &target_loads_attr.attr,
&hispeed_freq_attr.attr,
&go_hispeed_load_attr.attr,
&above_hispeed_delay.attr,
&min_sample_time_attr.attr,
&timer_rate_attr.attr,
- &input_boost.attr,
+ &timer_slack.attr,
&boost.attr,
&boostpulse.attr,
+ &boostpulse_duration.attr,
NULL,
};
@@ -815,102 +881,6 @@
.name = "interactive",
};
-static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
- unsigned int event)
-{
- int rc;
- unsigned int j;
- struct cpufreq_interactive_cpuinfo *pcpu;
- struct cpufreq_frequency_table *freq_table;
-
- switch (event) {
- case CPUFREQ_GOV_START:
- if (!cpu_online(policy->cpu))
- return -EINVAL;
-
- freq_table =
- cpufreq_frequency_get_table(policy->cpu);
-
- for_each_cpu(j, policy->cpus) {
- pcpu = &per_cpu(cpuinfo, j);
- pcpu->policy = policy;
- pcpu->target_freq = policy->cur;
- pcpu->freq_table = freq_table;
- pcpu->target_set_time_in_idle =
- get_cpu_idle_time_us(j,
- &pcpu->target_set_time);
- pcpu->floor_freq = pcpu->target_freq;
- pcpu->floor_validate_time =
- pcpu->target_set_time;
- pcpu->hispeed_validate_time =
- pcpu->target_set_time;
- pcpu->governor_enabled = 1;
- pcpu->idle_exit_time = pcpu->target_set_time;
- mod_timer(&pcpu->cpu_timer,
- jiffies + usecs_to_jiffies(timer_rate));
- smp_wmb();
- }
-
- if (!hispeed_freq)
- hispeed_freq = policy->max;
-
- /*
- * Do not register the idle hook and create sysfs
- * entries if we have already done so.
- */
- if (atomic_inc_return(&active_count) > 1)
- return 0;
-
- rc = sysfs_create_group(cpufreq_global_kobject,
- &interactive_attr_group);
- if (rc)
- return rc;
-
- rc = input_register_handler(&cpufreq_interactive_input_handler);
- if (rc)
- pr_warn("%s: failed to register input handler\n",
- __func__);
-
- break;
-
- case CPUFREQ_GOV_STOP:
- for_each_cpu(j, policy->cpus) {
- pcpu = &per_cpu(cpuinfo, j);
- pcpu->governor_enabled = 0;
- smp_wmb();
- del_timer_sync(&pcpu->cpu_timer);
-
- /*
- * Reset idle exit time since we may cancel the timer
- * before it can run after the last idle exit time,
- * to avoid tripping the check in idle exit for a timer
- * that is trying to run.
- */
- pcpu->idle_exit_time = 0;
- }
-
- flush_work(&freq_scale_down_work);
- if (atomic_dec_return(&active_count) > 0)
- return 0;
-
- input_unregister_handler(&cpufreq_interactive_input_handler);
- sysfs_remove_group(cpufreq_global_kobject,
- &interactive_attr_group);
-
- break;
-
- case CPUFREQ_GOV_LIMITS:
- if (policy->max < policy->cur)
- __cpufreq_driver_target(policy,
- policy->max, CPUFREQ_RELATION_H);
- else if (policy->min > policy->cur)
- __cpufreq_driver_target(policy,
- policy->min, CPUFREQ_RELATION_L);
- break;
- }
- return 0;
-}
-
static int cpufreq_interactive_idle_notifier(struct notifier_block *nb,
unsigned long val,
void *data)
@@ -931,57 +901,148 @@
.notifier_call = cpufreq_interactive_idle_notifier,
};
+static int cpufreq_governor_interactive(struct cpufreq_policy *policy,
+ unsigned int event)
+{
+ int rc;
+ unsigned int j;
+ struct cpufreq_interactive_cpuinfo *pcpu;
+ struct cpufreq_frequency_table *freq_table;
+
+ switch (event) {
+ case CPUFREQ_GOV_START:
+ if (!cpu_online(policy->cpu))
+ return -EINVAL;
+
+ mutex_lock(&gov_lock);
+
+ freq_table =
+ cpufreq_frequency_get_table(policy->cpu);
+ if (!hispeed_freq)
+ hispeed_freq = policy->max;
+
+ for_each_cpu(j, policy->cpus) {
+ unsigned long expires;
+
+ pcpu = &per_cpu(cpuinfo, j);
+ pcpu->policy = policy;
+ pcpu->target_freq = policy->cur;
+ pcpu->freq_table = freq_table;
+ pcpu->floor_freq = pcpu->target_freq;
+ pcpu->floor_validate_time =
+ ktime_to_us(ktime_get());
+ pcpu->hispeed_validate_time =
+ pcpu->floor_validate_time;
+ down_write(&pcpu->enable_sem);
+ expires = jiffies + usecs_to_jiffies(timer_rate);
+ pcpu->cpu_timer.expires = expires;
+ add_timer_on(&pcpu->cpu_timer, j);
+ if (timer_slack_val >= 0) {
+ expires += usecs_to_jiffies(timer_slack_val);
+ pcpu->cpu_slack_timer.expires = expires;
+ add_timer_on(&pcpu->cpu_slack_timer, j);
+ }
+ pcpu->governor_enabled = 1;
+ up_write(&pcpu->enable_sem);
+ }
+
+ /*
+ * Do not register the idle hook and create sysfs
+ * entries if we have already done so.
+ */
+ if (++active_count > 1) {
+ mutex_unlock(&gov_lock);
+ return 0;
+ }
+
+ rc = sysfs_create_group(cpufreq_global_kobject,
+ &interactive_attr_group);
+ if (rc) {
+ mutex_unlock(&gov_lock);
+ return rc;
+ }
+
+ idle_notifier_register(&cpufreq_interactive_idle_nb);
+ cpufreq_register_notifier(
+ &cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ mutex_unlock(&gov_lock);
+ break;
+
+ case CPUFREQ_GOV_STOP:
+ mutex_lock(&gov_lock);
+ for_each_cpu(j, policy->cpus) {
+ pcpu = &per_cpu(cpuinfo, j);
+ down_write(&pcpu->enable_sem);
+ pcpu->governor_enabled = 0;
+ del_timer_sync(&pcpu->cpu_timer);
+ del_timer_sync(&pcpu->cpu_slack_timer);
+ up_write(&pcpu->enable_sem);
+ }
+
+ if (--active_count > 0) {
+ mutex_unlock(&gov_lock);
+ return 0;
+ }
+
+ cpufreq_unregister_notifier(
+ &cpufreq_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+ idle_notifier_unregister(&cpufreq_interactive_idle_nb);
+ sysfs_remove_group(cpufreq_global_kobject,
+ &interactive_attr_group);
+ mutex_unlock(&gov_lock);
+
+ break;
+
+ case CPUFREQ_GOV_LIMITS:
+ if (policy->max < policy->cur)
+ __cpufreq_driver_target(policy,
+ policy->max, CPUFREQ_RELATION_H);
+ else if (policy->min > policy->cur)
+ __cpufreq_driver_target(policy,
+ policy->min, CPUFREQ_RELATION_L);
+ break;
+ }
+ return 0;
+}
+
+static void cpufreq_interactive_nop_timer(unsigned long data)
+{
+}
+
static int __init cpufreq_interactive_init(void)
{
unsigned int i;
struct cpufreq_interactive_cpuinfo *pcpu;
struct sched_param param = { .sched_priority = MAX_RT_PRIO-1 };
- go_hispeed_load = DEFAULT_GO_HISPEED_LOAD;
- min_sample_time = DEFAULT_MIN_SAMPLE_TIME;
- above_hispeed_delay_val = DEFAULT_ABOVE_HISPEED_DELAY;
- timer_rate = DEFAULT_TIMER_RATE;
-
/* Initalize per-cpu timers */
for_each_possible_cpu(i) {
pcpu = &per_cpu(cpuinfo, i);
- init_timer(&pcpu->cpu_timer);
+ init_timer_deferrable(&pcpu->cpu_timer);
pcpu->cpu_timer.function = cpufreq_interactive_timer;
pcpu->cpu_timer.data = i;
+ init_timer(&pcpu->cpu_slack_timer);
+ pcpu->cpu_slack_timer.function = cpufreq_interactive_nop_timer;
+ spin_lock_init(&pcpu->load_lock);
+ init_rwsem(&pcpu->enable_sem);
}
- up_task = kthread_create(cpufreq_interactive_up_task, NULL,
- "kinteractiveup");
- if (IS_ERR(up_task))
- return PTR_ERR(up_task);
+ spin_lock_init(&target_loads_lock);
+ spin_lock_init(&speedchange_cpumask_lock);
+ mutex_init(&gov_lock);
+ speedchange_task =
+ kthread_create(cpufreq_interactive_speedchange_task, NULL,
+ "cfinteractive");
+ if (IS_ERR(speedchange_task))
+ return PTR_ERR(speedchange_task);
- sched_setscheduler_nocheck(up_task, SCHED_FIFO, &param);
- get_task_struct(up_task);
+ sched_setscheduler_nocheck(speedchange_task, SCHED_FIFO, &param);
+ get_task_struct(speedchange_task);
- /* No rescuer thread, bind to CPU queuing the work for possibly
- warm cache (probably doesn't matter much). */
- down_wq = alloc_workqueue("knteractive_down", 0, 1);
+ /* NB: wake up so the thread does not look hung to the freezer */
+ wake_up_process(speedchange_task);
- if (!down_wq)
- goto err_freeuptask;
-
- INIT_WORK(&freq_scale_down_work,
- cpufreq_interactive_freq_down);
-
- spin_lock_init(&up_cpumask_lock);
- spin_lock_init(&down_cpumask_lock);
- mutex_init(&set_speed_lock);
-
- /* Kick the kthread to idle */
- wake_up_process(up_task);
-
- idle_notifier_register(&cpufreq_interactive_idle_nb);
- INIT_WORK(&inputopen.inputopen_work, cpufreq_interactive_input_open);
return cpufreq_register_governor(&cpufreq_gov_interactive);
-
-err_freeuptask:
- put_task_struct(up_task);
- return -ENOMEM;
}
#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_INTERACTIVE
@@ -993,9 +1054,8 @@
static void __exit cpufreq_interactive_exit(void)
{
cpufreq_unregister_governor(&cpufreq_gov_interactive);
- kthread_stop(up_task);
- put_task_struct(up_task);
- destroy_workqueue(down_wq);
+ kthread_stop(speedchange_task);
+ put_task_struct(speedchange_task);
}
module_exit(cpufreq_interactive_exit);
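
Note: the rewritten governor replaces the separate short/long-term load samples with a single accumulator — update_load() adds active_time * current_frequency into cputime_speedadj, and the sampling timer turns that into a frequency-weighted load (loadadjfreq) and a percentage load relative to the current target frequency, which choose_freq() then maps to the lowest frequency whose target load is not exceeded. A small standalone sketch of that arithmetic with illustrative sample numbers:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* illustrative sample: 20 ms window, 12 ms busy at 1,026,000 kHz */
	uint64_t delta_time_us = 20000;
	uint64_t active_time_us = 12000;
	unsigned int cur_freq_khz = 1026000;
	unsigned int target_freq_khz = 1026000;

	/* what update_load() accumulates over the window */
	uint64_t cputime_speedadj = active_time_us * cur_freq_khz;

	/* what the sampling timer derives from it */
	unsigned int loadadjfreq =
		(unsigned int)(cputime_speedadj / delta_time_us) * 100;
	int cpu_load = loadadjfreq / target_freq_khz;	/* 60% here */

	printf("loadadjfreq=%u kHz-percent, cpu_load=%d%%\n",
	       loadadjfreq, cpu_load);
	/*
	 * With a target_load of 90, choose_freq() would then look for the
	 * lowest frequency f such that loadadjfreq / f <= 90.
	 */
	return 0;
}
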
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index c758b3a..99ace44 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -306,7 +306,7 @@
config CRYPTO_DEV_QCE
tristate "Qualcomm Crypto Engine (QCE) module"
select CRYPTO_DEV_QCE40 if ARCH_MSM8960 || ARCH_MSM9615
- select CRYPTO_DEV_QCE50 if ARCH_MSM8974 || ARCH_MSM9625
+ select CRYPTO_DEV_QCE50 if ARCH_MSM8974 || ARCH_MSM9625 || ARCH_MSM8226
default n
help
This driver supports Qualcomm Crypto Engine in MSM7x30, MSM8660
diff --git a/drivers/crypto/msm/qce.c b/drivers/crypto/msm/qce.c
index 24cf30a..7778477 100644
--- a/drivers/crypto/msm/qce.c
+++ b/drivers/crypto/msm/qce.c
@@ -2203,6 +2203,18 @@
}
EXPORT_SYMBOL(qce_process_sha_req);
+int qce_enable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_enable_clk);
+
+int qce_disable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_disable_clk);
+
/*
* crypto engine open function.
*/
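
Note: qce_enable_clk() and qce_disable_clk() are added here (and in qce40.c) as no-op stubs so callers can use one clock-management interface across engine generations; only the QCE 5.0 backend is expected to do real clock work. A hedged sketch of how a caller might bracket a request with them; my_run_cipher() and its call site are illustrative, not part of the qcrypto driver:

#include <linux/errno.h>

/* from qce.h */
int qce_enable_clk(void *handle);
int qce_disable_clk(void *handle);

/* illustrative caller: hold the engine clocks only around the request */
static int my_run_cipher(void *qce_handle /* , request args ... */)
{
	int rc = qce_enable_clk(qce_handle);

	if (rc)
		return rc;

	/* ... submit the cipher/hash request to the engine here ... */

	qce_disable_clk(qce_handle);
	return 0;
}
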
diff --git a/drivers/crypto/msm/qce.h b/drivers/crypto/msm/qce.h
index 3ff84cf..51a74b6 100644
--- a/drivers/crypto/msm/qce.h
+++ b/drivers/crypto/msm/qce.h
@@ -160,5 +160,7 @@
int qce_ablk_cipher_req(void *handle, struct qce_req *req);
int qce_hw_support(void *handle, struct ce_hw_support *support);
int qce_process_sha_req(void *handle, struct qce_sha_req *s_req);
+int qce_enable_clk(void *handle);
+int qce_disable_clk(void *handle);
#endif /* __CRYPTO_MSM_QCE_H */
diff --git a/drivers/crypto/msm/qce40.c b/drivers/crypto/msm/qce40.c
index 7b0964d..5249917 100644
--- a/drivers/crypto/msm/qce40.c
+++ b/drivers/crypto/msm/qce40.c
@@ -2426,6 +2426,18 @@
}
EXPORT_SYMBOL(qce_process_sha_req);
+int qce_enable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_enable_clk);
+
+int qce_disable_clk(void *handle)
+{
+ return 0;
+}
+EXPORT_SYMBOL(qce_disable_clk);
+
/* crypto engine open function. */
void *qce_open(struct platform_device *pdev, int *rc)
{
diff --git a/drivers/crypto/msm/qce50.c b/drivers/crypto/msm/qce50.c
index 739a753..245272b 100644
--- a/drivers/crypto/msm/qce50.c
+++ b/drivers/crypto/msm/qce50.c
@@ -39,7 +39,7 @@
#include "qcryptohw_50.h"
#define CRYPTO_CONFIG_RESET 0xE001F
-#define QCE_MAX_NUM_DSCR 0x400
+#define QCE_MAX_NUM_DSCR 0x500
#define QCE_SECTOR_SIZE 0x200
static DEFINE_MUTEX(bam_register_cnt);
@@ -919,17 +919,23 @@
iovec->flags |= flag;
}
-static void _qce_sps_add_data(uint32_t addr, uint32_t len,
+static int _qce_sps_add_data(uint32_t addr, uint32_t len,
struct sps_transfer *sps_bam_pipe)
{
struct sps_iovec *iovec = sps_bam_pipe->iovec +
sps_bam_pipe->iovec_count;
+ if (sps_bam_pipe->iovec_count == QCE_MAX_NUM_DSCR) {
+ pr_err("Num of descrptor %d exceed max (%d)",
+ sps_bam_pipe->iovec_count, (uint32_t)QCE_MAX_NUM_DSCR);
+ return -ENOMEM;
+ }
if (len) {
iovec->size = len;
iovec->addr = addr;
iovec->flags = 0;
sps_bam_pipe->iovec_count++;
}
+ return 0;
}
static int _qce_sps_add_sg_data(struct qce_device *pce_dev,
@@ -947,6 +953,12 @@
if (pce_dev->ce_sps.minor_version == 0)
len = ALIGN(len, pce_dev->ce_sps.ce_burst_size);
while (len > 0) {
+ if (sps_bam_pipe->iovec_count == QCE_MAX_NUM_DSCR) {
+ pr_err("Num of descrptor %d exceed max (%d)",
+ sps_bam_pipe->iovec_count,
+ (uint32_t)QCE_MAX_NUM_DSCR);
+ return -ENOMEM;
+ }
if (len > SPS_MAX_PKT_SIZE) {
data_cnt = SPS_MAX_PKT_SIZE;
iovec->size = data_cnt;
@@ -1325,19 +1337,6 @@
}
};
-static void _aead_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void _sha_sps_producer_callback(struct sps_event_notify *notify)
{
struct qce_device *pce_dev = (struct qce_device *)
@@ -1353,19 +1352,6 @@
_sha_complete(pce_dev);
};
-static void _sha_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void _ablk_cipher_sps_producer_callback(struct sps_event_notify *notify)
{
struct qce_device *pce_dev = (struct qce_device *)
@@ -1400,19 +1386,6 @@
}
};
-static void _ablk_cipher_sps_consumer_callback(struct sps_event_notify *notify)
-{
- struct qce_device *pce_dev = (struct qce_device *)
- ((struct sps_event_notify *)notify)->user;
-
- pce_dev->ce_sps.notify = *notify;
- pr_debug("sps ev_id=%d, addr=0x%x, size=0x%x, flags=0x%x\n",
- notify->event_id,
- notify->data.transfer.iovec.addr,
- notify->data.transfer.iovec.size,
- notify->data.transfer.iovec.flags);
-};
-
static void qce_add_cmd_element(struct qce_device *pdev,
struct sps_command_element **cmd_ptr, u32 addr,
u32 data, struct sps_command_element **populate)
@@ -2359,31 +2332,23 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback = _aead_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
_qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
if (pce_dev->ce_sps.minor_version == 0) {
- _qce_sps_add_sg_data(pce_dev, areq->src, totallen_in,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, totallen_in,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_sg_data(pce_dev, areq->dst, out_len +
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, out_len +
areq->assoclen + hw_pad_out,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (totallen_in > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
@@ -2391,42 +2356,52 @@
SPS_O_DESC_DONE;
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
- _qce_sps_add_data(GET_PHYS_ADDR(
+ if (_qce_sps_add_data(GET_PHYS_ADDR(
pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
}
} else {
- _qce_sps_add_sg_data(pce_dev, areq->assoc, areq->assoclen,
- &pce_dev->ce_sps.in_transfer);
- _qce_sps_add_data((uint32_t)pce_dev->phy_iv_in, ivsize,
- &pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->cryptlen,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->assoc, areq->assoclen,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
+ if (_qce_sps_add_data((uint32_t)pce_dev->phy_iv_in, ivsize,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->cryptlen,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
/* Pass through to ignore associated (+iv, if applicable) data*/
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
(ivsize + areq->assoclen),
- &pce_dev->ce_sps.out_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->dst, out_len,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, out_len,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
/* Pass through to ignore hw_pad (padding of the MAC data) */
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
- hw_pad_out, &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.ignore_buffer),
+ hw_pad_out, &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (totallen_in > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
- _qce_sps_add_data(
+ if (_qce_sps_add_data(
GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
@@ -2515,37 +2490,30 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback =
- _ablk_cipher_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
_qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_sg_data(pce_dev, areq->dst, areq->nbytes,
- &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->dst, areq->nbytes,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
if (areq->nbytes > SPS_MAX_PKT_SIZE) {
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_IDLE;
} else {
pce_dev->ce_sps.producer_state = QCE_PIPE_STATE_COMP;
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
- CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ if (_qce_sps_add_data(
+ GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
+ CRYPTO_RESULT_DUMP_SIZE,
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer,
SPS_IOVEC_FLAG_INT);
}
@@ -2598,29 +2566,20 @@
pr_err("Producer callback registration failed rc = %d\n", rc);
goto bad;
}
-
- /* Register callback event for EOT (End of transfer) event. */
- pce_dev->ce_sps.consumer.event.callback = _sha_sps_consumer_callback;
- pce_dev->ce_sps.consumer.event.options = SPS_O_DESC_DONE;
- rc = sps_register_event(pce_dev->ce_sps.consumer.pipe,
- &pce_dev->ce_sps.consumer.event);
- if (rc) {
- pr_err("Consumer callback registration failed rc = %d\n", rc);
- goto bad;
- }
-
_qce_sps_iovec_count_init(pce_dev);
_qce_sps_add_cmd(pce_dev, SPS_IOVEC_FLAG_LOCK, cmdlistinfo,
&pce_dev->ce_sps.in_transfer);
- _qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
- &pce_dev->ce_sps.in_transfer);
+ if (_qce_sps_add_sg_data(pce_dev, areq->src, areq->nbytes,
+ &pce_dev->ce_sps.in_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.in_transfer,
SPS_IOVEC_FLAG_EOT|SPS_IOVEC_FLAG_NWD);
- _qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
+ if (_qce_sps_add_data(GET_PHYS_ADDR(pce_dev->ce_sps.result_dump),
CRYPTO_RESULT_DUMP_SIZE,
- &pce_dev->ce_sps.out_transfer);
+ &pce_dev->ce_sps.out_transfer))
+ goto bad;
_qce_set_flag(&pce_dev->ce_sps.out_transfer, SPS_IOVEC_FLAG_INT);
rc = _qce_sps_transfer(pce_dev);
if (rc)
@@ -2800,7 +2759,7 @@
}
}
-static int __qce_enable_clk(void *handle)
+int qce_enable_clk(void *handle)
{
struct qce_device *pce_dev = (struct qce_device *) handle;
int rc = 0;
@@ -2813,6 +2772,7 @@
return rc;
}
}
+
/* Enable CE clk */
if (pce_dev->ce_clk != NULL) {
rc = clk_prepare_enable(pce_dev->ce_clk);
@@ -2834,8 +2794,9 @@
}
return rc;
}
+EXPORT_SYMBOL(qce_enable_clk);
-static int __qce_disable_clk(void *handle)
+int qce_disable_clk(void *handle)
{
struct qce_device *pce_dev = (struct qce_device *) handle;
int rc = 0;
@@ -2849,6 +2810,7 @@
return rc;
}
+EXPORT_SYMBOL(qce_disable_clk);
/* crypto engine open function. */
void *qce_open(struct platform_device *pdev, int *rc)
@@ -2886,19 +2848,20 @@
if (*rc)
goto err_mem;
- *rc = __qce_enable_clk(pce_dev);
+ *rc = qce_enable_clk(pce_dev);
if (*rc)
goto err;
if (_probe_ce_engine(pce_dev)) {
*rc = -ENXIO;
- __qce_disable_clk(pce_dev);
goto err;
}
*rc = 0;
qce_setup_ce_sps_data(pce_dev);
qce_sps_init(pce_dev);
+ qce_disable_clk(pce_dev);
+
return pce_dev;
err:
__qce_deinit_clk(pce_dev);
@@ -2932,7 +2895,7 @@
dma_free_coherent(pce_dev->pdev, pce_dev->memsize,
pce_dev->coh_vmem, pce_dev->coh_pmem);
- __qce_disable_clk(pce_dev);
+ qce_disable_clk(pce_dev);
__qce_deinit_clk(pce_dev);
qce_sps_exit(pce_dev);
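The qce50 change turns _qce_sps_add_data() and _qce_sps_add_sg_data() into fallible helpers: every descriptor append is bounds-checked against QCE_MAX_NUM_DSCR, and the request paths unwind ("goto bad") on overflow instead of writing past the iovec array. A simplified sketch of that guard, with the surrounding driver state reduced to what the check needs (sps_iovec/sps_transfer come from the SPS BAM API):

	/* Sketch only: append one descriptor, refusing once the fifo is full.
	 * Mirrors the pattern above; not a drop-in replacement for the driver. */
	static int add_desc(struct sps_transfer *xfer, uint32_t addr, uint32_t len)
	{
		struct sps_iovec *iovec = xfer->iovec + xfer->iovec_count;

		if (xfer->iovec_count == QCE_MAX_NUM_DSCR)
			return -ENOMEM;            /* caller bails out and cleans up */
		if (len) {
			iovec->addr = addr;
			iovec->size = len;
			iovec->flags = 0;
			xfer->iovec_count++;
		}
		return 0;
	}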
diff --git a/drivers/crypto/msm/qcedev.c b/drivers/crypto/msm/qcedev.c
index 8cc42df..e91dcaa 100644
--- a/drivers/crypto/msm/qcedev.c
+++ b/drivers/crypto/msm/qcedev.c
@@ -98,7 +98,7 @@
};
static DEFINE_MUTEX(send_cmd_lock);
-static DEFINE_MUTEX(sent_bw_req);
+static DEFINE_MUTEX(qcedev_sent_bw_req);
/**********************************************************************
* Register ourselves as a misc device to be able to access the dev driver
* from userspace. */
@@ -177,25 +177,51 @@
{
int ret = 0;
- mutex_lock(&sent_bw_req);
+ mutex_lock(&qcedev_sent_bw_req);
if (high_bw_req) {
- if (podev->high_bw_req_count == 0)
+ if (podev->high_bw_req_count == 0) {
+ ret = qce_enable_clk(podev->qce);
+ if (ret) {
+ pr_err("%s Unable enable clk\n", __func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
ret = msm_bus_scale_client_update_request(
podev->bus_scale_handle, 1);
- if (ret)
- pr_err("%s Unable to set to high bandwidth\n",
+ if (ret) {
+ pr_err("%s Unable to set to high bandwidth\n",
__func__);
+ ret = qce_disable_clk(podev->qce);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ }
podev->high_bw_req_count++;
} else {
- if (podev->high_bw_req_count == 1)
+ if (podev->high_bw_req_count == 1) {
ret = msm_bus_scale_client_update_request(
podev->bus_scale_handle, 0);
- if (ret)
- pr_err("%s Unable to set to low bandwidth\n",
+ if (ret) {
+ pr_err("%s Unable to set to low bandwidth\n",
__func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ ret = qce_disable_clk(podev->qce);
+ if (ret) {
+ pr_err("%s Unable disable clk\n", __func__);
+ ret = msm_bus_scale_client_update_request(
+ podev->bus_scale_handle, 1);
+ if (ret)
+ pr_err("%s Unable to set to high bandwidth\n",
+ __func__);
+ mutex_unlock(&qcedev_sent_bw_req);
+ return;
+ }
+ }
podev->high_bw_req_count--;
}
- mutex_unlock(&sent_bw_req);
+ mutex_unlock(&qcedev_sent_bw_req);
}
@@ -1854,6 +1880,14 @@
podev->platform_support.hw_key_support = 0;
podev->platform_support.bus_scale_table = NULL;
podev->platform_support.sha_hmac = 1;
+
+ if (podev->ce_support.is_shared == false) {
+ podev->platform_support.bus_scale_table =
+ (struct msm_bus_scale_pdata *)
+ msm_bus_cl_get_pdata(pdev);
+ if (!podev->platform_support.bus_scale_table)
+ pr_err("bus_scale_table is NULL\n");
+ }
} else {
platform_support =
(struct msm_ce_hw_support *)pdev->dev.platform_data;
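The reworked bandwidth handler pairs the bus vote with a clock vote and keeps the two consistent on failure: going high it enables clocks before raising the bus request, going low it lowers the bus request before gating clocks, and each step rolls the other back if the later one fails. A condensed sketch of that ordering; locking and the high_bw_req_count bookkeeping are omitted, and the struct/field names follow qcedev but the wrapper itself is illustrative:

	/* Sketch only: up/down ordering with rollback, minus locking and
	 * the request counter the driver keeps around it. */
	static void example_set_bw(struct qcedev_control *podev, bool high)
	{
		if (high) {
			if (qce_enable_clk(podev->qce))
				return;
			if (msm_bus_scale_client_update_request(
					podev->bus_scale_handle, 1))
				qce_disable_clk(podev->qce);      /* roll back clk vote */
		} else {
			if (msm_bus_scale_client_update_request(
					podev->bus_scale_handle, 0))
				return;
			if (qce_disable_clk(podev->qce))
				msm_bus_scale_client_update_request(
					podev->bus_scale_handle, 1); /* restore bus vote */
		}
	}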
diff --git a/drivers/crypto/msm/qcrypto.c b/drivers/crypto/msm/qcrypto.c
index 85c25c7..40fb29ac 100644
--- a/drivers/crypto/msm/qcrypto.c
+++ b/drivers/crypto/msm/qcrypto.c
@@ -121,7 +121,7 @@
#define NUM_RETRY 1000
#define CE_BUSY 55
-static DEFINE_MUTEX(sent_bw_req);
+static DEFINE_MUTEX(qcrypto_sent_bw_req);
static int qcrypto_scm_cmd(int resource, int cmd, int *response)
{
@@ -346,25 +346,51 @@
{
int ret = 0;
- mutex_lock(&sent_bw_req);
+ mutex_lock(&qcrypto_sent_bw_req);
if (high_bw_req) {
- if (cp->high_bw_req_count == 0)
+ if (cp->high_bw_req_count == 0) {
+ ret = qce_enable_clk(cp->qce);
+ if (ret) {
+ pr_err("%s Unable enable clk\n", __func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
ret = msm_bus_scale_client_update_request(
- cp->bus_scale_handle, 1);
- if (ret)
- pr_err("%s Unable to set to high bandwidth\n",
+ cp->bus_scale_handle, 1);
+ if (ret) {
+ pr_err("%s Unable to set to high bandwidth\n",
__func__);
+ qce_disable_clk(cp->qce);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ }
cp->high_bw_req_count++;
} else {
- if (cp->high_bw_req_count == 1)
+ if (cp->high_bw_req_count == 1) {
ret = msm_bus_scale_client_update_request(
- cp->bus_scale_handle, 0);
- if (ret)
- pr_err("%s Unable to set to low bandwidth\n",
+ cp->bus_scale_handle, 0);
+ if (ret) {
+ pr_err("%s Unable to set to low bandwidth\n",
__func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ ret = qce_disable_clk(cp->qce);
+ if (ret) {
+ pr_err("%s Unable disable clk\n", __func__);
+ ret = msm_bus_scale_client_update_request(
+ cp->bus_scale_handle, 1);
+ if (ret)
+ pr_err("%s Unable to set to high bandwidth\n",
+ __func__);
+ mutex_unlock(&qcrypto_sent_bw_req);
+ return;
+ }
+ }
cp->high_bw_req_count--;
}
- mutex_unlock(&sent_bw_req);
+ mutex_unlock(&qcrypto_sent_bw_req);
}
static int _start_qcrypto_process(struct crypto_priv *cp);
@@ -3336,6 +3362,14 @@
cp->platform_support.hw_key_support = 0;
cp->platform_support.bus_scale_table = NULL;
cp->platform_support.sha_hmac = 1;
+
+ if (cp->ce_support.is_shared == false) {
+ cp->platform_support.bus_scale_table =
+ (struct msm_bus_scale_pdata *)
+ msm_bus_cl_get_pdata(pdev);
+ if (!cp->platform_support.bus_scale_table)
+ pr_warn("bus_scale_table is NULL\n");
+ }
} else {
platform_support =
(struct msm_ce_hw_support *)pdev->dev.platform_data;
diff --git a/drivers/gpu/ion/Makefile b/drivers/gpu/ion/Makefile
index f4f9a92..d7ff73a 100644
--- a/drivers/gpu/ion/Makefile
+++ b/drivers/gpu/ion/Makefile
@@ -1,4 +1,6 @@
-obj-$(CONFIG_ION) += ion.o ion_heap.o ion_system_heap.o ion_carveout_heap.o ion_iommu_heap.o ion_cp_heap.o ion_removed_heap.o
+obj-$(CONFIG_ION) += ion.o ion_heap.o ion_system_heap.o ion_carveout_heap.o \
+ ion_iommu_heap.o ion_cp_heap.o ion_removed_heap.o \
+ ion_page_pool.o ion_chunk_heap.o
obj-$(CONFIG_CMA) += ion_cma_heap.o ion_cma_secure_heap.o
obj-$(CONFIG_ION_TEGRA) += tegra/
obj-$(CONFIG_ION_MSM) += msm/
diff --git a/drivers/gpu/ion/ion.c b/drivers/gpu/ion/ion.c
index a01ef3f..fbe4da0 100644
--- a/drivers/gpu/ion/ion.c
+++ b/drivers/gpu/ion/ion.c
@@ -1,4 +1,5 @@
/*
+
* drivers/gpu/ion/ion.c
*
* Copyright (C) 2011 Google, Inc.
@@ -18,15 +19,18 @@
#include <linux/module.h>
#include <linux/device.h>
#include <linux/file.h>
+#include <linux/freezer.h>
#include <linux/fs.h>
#include <linux/anon_inodes.h>
#include <linux/ion.h>
+#include <linux/kthread.h>
#include <linux/list.h>
#include <linux/memblock.h>
#include <linux/miscdevice.h>
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/rbtree.h>
+#include <linux/rtmutex.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/seq_file.h>
@@ -43,16 +47,18 @@
/**
* struct ion_device - the metadata of the ion device node
* @dev: the actual misc device
- * @buffers: an rb tree of all the existing buffers
- * @lock: lock protecting the buffers & heaps trees
+ * @buffers: an rb tree of all the existing buffers
+ * @buffer_lock: lock protecting the tree of buffers
+ * @lock: rwsem protecting the tree of heaps and clients
* @heaps: list of all the heaps in the system
* @user_clients: list of all the clients created from userspace
*/
struct ion_device {
struct miscdevice dev;
struct rb_root buffers;
- struct mutex lock;
- struct rb_root heaps;
+ struct mutex buffer_lock;
+ struct rw_semaphore lock;
+ struct plist_head heaps;
long (*custom_ioctl) (struct ion_client *client, unsigned int cmd,
unsigned long arg);
struct rb_root clients;
@@ -65,7 +71,6 @@
* @dev: backpointer to ion device
* @handles: an rb tree of all the handles in this client
* @lock: lock protecting the tree of handles
- * @heap_mask: mask of all supported heaps
* @name: used for debugging
* @task: used for debugging
*
@@ -78,7 +83,7 @@
struct ion_device *dev;
struct rb_root handles;
struct mutex lock;
- unsigned int heap_mask;
+ unsigned int heap_type_mask;
char *name;
struct task_struct *task;
pid_t pid;
@@ -112,6 +117,11 @@
!(buffer->flags & ION_FLAG_CACHED_NEEDS_SYNC));
}
+bool ion_buffer_cached(struct ion_buffer *buffer)
+{
+ return !!(buffer->flags & ION_FLAG_CACHED);
+}
+
/* this function should only be called while dev->lock is held */
static void ion_buffer_add(struct ion_device *dev,
struct ion_buffer *buffer)
@@ -140,6 +150,7 @@
static int ion_buffer_alloc_dirty(struct ion_buffer *buffer);
+static bool ion_heap_drain_freelist(struct ion_heap *heap);
/* this function should only be called while dev->lock is held */
static struct ion_buffer *ion_buffer_create(struct ion_heap *heap,
struct ion_device *dev,
@@ -161,9 +172,16 @@
kref_init(&buffer->ref);
ret = heap->ops->allocate(heap, buffer, len, align, flags);
+
if (ret) {
- kfree(buffer);
- return ERR_PTR(ret);
+ if (!(heap->flags & ION_HEAP_FLAG_DEFER_FREE))
+ goto err2;
+
+ ion_heap_drain_freelist(heap);
+ ret = heap->ops->allocate(heap, buffer, len, align,
+ flags);
+ if (ret)
+ goto err2;
}
buffer->dev = dev;
@@ -208,12 +226,15 @@
if (sg_dma_address(sg) == 0)
sg_dma_address(sg) = sg_phys(sg);
}
+ mutex_lock(&dev->buffer_lock);
ion_buffer_add(dev, buffer);
+ mutex_unlock(&dev->buffer_lock);
return buffer;
err:
heap->ops->unmap_dma(heap, buffer);
heap->ops->free(buffer);
+err2:
kfree(buffer);
return ERR_PTR(ret);
}
@@ -224,25 +245,39 @@
buffer->heap->ops->unsecure_buffer(buffer, 1);
}
-static void ion_buffer_destroy(struct kref *kref)
+static void _ion_buffer_destroy(struct ion_buffer *buffer)
{
- struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
- struct ion_device *dev = buffer->dev;
-
if (WARN_ON(buffer->kmap_cnt > 0))
buffer->heap->ops->unmap_kernel(buffer->heap, buffer);
buffer->heap->ops->unmap_dma(buffer->heap, buffer);
ion_delayed_unsecure(buffer);
buffer->heap->ops->free(buffer);
- mutex_lock(&dev->lock);
- rb_erase(&buffer->node, &dev->buffers);
- mutex_unlock(&dev->lock);
if (buffer->flags & ION_FLAG_CACHED)
kfree(buffer->dirty);
kfree(buffer);
}
+static void ion_buffer_destroy(struct kref *kref)
+{
+ struct ion_buffer *buffer = container_of(kref, struct ion_buffer, ref);
+ struct ion_heap *heap = buffer->heap;
+ struct ion_device *dev = buffer->dev;
+
+ mutex_lock(&dev->buffer_lock);
+ rb_erase(&buffer->node, &dev->buffers);
+ mutex_unlock(&dev->buffer_lock);
+
+ if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+ rt_mutex_lock(&heap->lock);
+ list_add(&buffer->list, &heap->free_list);
+ rt_mutex_unlock(&heap->lock);
+ wake_up(&heap->waitqueue);
+ return;
+ }
+ _ion_buffer_destroy(buffer);
+}
+
static void ion_buffer_get(struct ion_buffer *buffer)
{
kref_get(&buffer->ref);
@@ -253,6 +288,37 @@
return kref_put(&buffer->ref, ion_buffer_destroy);
}
+static void ion_buffer_add_to_handle(struct ion_buffer *buffer)
+{
+ mutex_lock(&buffer->lock);
+ buffer->handle_count++;
+ mutex_unlock(&buffer->lock);
+}
+
+static void ion_buffer_remove_from_handle(struct ion_buffer *buffer)
+{
+ /*
+ * when a buffer is removed from a handle, if it is not in
+ * any other handles, copy the taskcomm and the pid of the
+ * process it's being removed from into the buffer. At this
+ * point there will be no way to track what processes this buffer is
+ * being used by, it only exists as a dma_buf file descriptor.
+ * The taskcomm and pid can provide a debug hint as to where this fd
+ * is in the system
+ */
+ mutex_lock(&buffer->lock);
+ buffer->handle_count--;
+ BUG_ON(buffer->handle_count < 0);
+ if (!buffer->handle_count) {
+ struct task_struct *task;
+
+ task = current->group_leader;
+ get_task_comm(buffer->task_comm, task);
+ buffer->pid = task_pid_nr(task);
+ }
+ mutex_unlock(&buffer->lock);
+}
+
static struct ion_handle *ion_handle_create(struct ion_client *client,
struct ion_buffer *buffer)
{
@@ -265,6 +331,7 @@
rb_init_node(&handle->node);
handle->client = client;
ion_buffer_get(buffer);
+ ion_buffer_add_to_handle(buffer);
handle->buffer = buffer;
return handle;
@@ -286,7 +353,9 @@
if (!RB_EMPTY_NODE(&handle->node))
rb_erase(&handle->node, &client->handles);
+ ion_buffer_remove_from_handle(buffer);
ion_buffer_put(buffer);
+
kfree(handle);
}
@@ -359,13 +428,13 @@
}
struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
- size_t align, unsigned int heap_mask,
+ size_t align, unsigned int heap_id_mask,
unsigned int flags)
{
- struct rb_node *n;
struct ion_handle *handle;
struct ion_device *dev = client->dev;
struct ion_buffer *buffer = NULL;
+ struct ion_heap *heap;
unsigned long secure_allocation = flags & ION_FLAG_SECURE;
const unsigned int MAX_DBG_STR_LEN = 64;
char dbg_str[MAX_DBG_STR_LEN];
@@ -381,8 +450,8 @@
*/
flags |= ION_FLAG_CACHED_NEEDS_SYNC;
- pr_debug("%s: len %d align %d heap_mask %u flags %x\n", __func__, len,
- align, heap_mask, flags);
+ pr_debug("%s: len %d align %d heap_id_mask %u flags %x\n", __func__,
+ len, align, heap_id_mask, flags);
/*
* traverse the list of heaps available in this system in priority
* order. If the heap type is supported by the client, and matches the
@@ -394,29 +463,26 @@
len = PAGE_ALIGN(len);
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
- /* if the client doesn't support this heap type */
- if (!((1 << heap->type) & client->heap_mask))
- continue;
- /* if the caller didn't specify this heap type */
- if (!((1 << heap->id) & heap_mask))
+ down_read(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
+ /* if the caller didn't specify this heap id */
+ if (!((1 << heap->id) & heap_id_mask))
continue;
/* Do not allow un-secure heap if secure is specified */
if (secure_allocation &&
!ion_heap_allow_secure_allocation(heap->type))
continue;
trace_ion_alloc_buffer_start(client->name, heap->name, len,
- heap_mask, flags);
+ heap_id_mask, flags);
buffer = ion_buffer_create(heap, dev, len, align, flags);
trace_ion_alloc_buffer_end(client->name, heap->name, len,
- heap_mask, flags);
+ heap_id_mask, flags);
if (!IS_ERR_OR_NULL(buffer))
break;
trace_ion_alloc_buffer_fallback(client->name, heap->name, len,
- heap_mask, flags, PTR_ERR(buffer));
+ heap_id_mask, flags,
+ PTR_ERR(buffer));
if (dbg_str_idx < MAX_DBG_STR_LEN) {
unsigned int len_left = MAX_DBG_STR_LEN-dbg_str_idx-1;
int ret_value = snprintf(&dbg_str[dbg_str_idx],
@@ -433,21 +499,21 @@
}
}
}
- mutex_unlock(&dev->lock);
+ up_read(&dev->lock);
if (buffer == NULL) {
trace_ion_alloc_buffer_fail(client->name, dbg_str, len,
- heap_mask, flags, -ENODEV);
+ heap_id_mask, flags, -ENODEV);
return ERR_PTR(-ENODEV);
}
if (IS_ERR(buffer)) {
trace_ion_alloc_buffer_fail(client->name, dbg_str, len,
- heap_mask, flags, PTR_ERR(buffer));
+ heap_id_mask, flags,
+ PTR_ERR(buffer));
pr_debug("ION is unable to allocate 0x%x bytes (alignment: "
- "0x%x) from heap(s) %sfor client %s with heap "
- "mask 0x%x\n",
- len, align, dbg_str, client->name, client->heap_mask);
+ "0x%x) from heap(s) %sfor client %s\n",
+ len, align, dbg_str, client->name);
return ERR_PTR(PTR_ERR(buffer));
}
@@ -620,6 +686,7 @@
for (n = rb_first(&client->handles); n; n = rb_next(n)) {
struct ion_handle *handle = rb_entry(n, struct ion_handle,
node);
+
enum ion_heap_type type = handle->buffer->heap->type;
seq_printf(s, "%16.16s: %16x : %16d : %12p",
@@ -638,7 +705,6 @@
seq_printf(s, "\n");
}
mutex_unlock(&client->lock);
-
return 0;
}
@@ -655,7 +721,6 @@
};
struct ion_client *ion_client_create(struct ion_device *dev,
- unsigned int heap_mask,
const char *name)
{
struct ion_client *client;
@@ -705,11 +770,10 @@
strlcpy(client->name, name, name_len+1);
}
- client->heap_mask = heap_mask;
client->task = task;
client->pid = pid;
- mutex_lock(&dev->lock);
+ down_write(&dev->lock);
p = &dev->clients.rb_node;
while (*p) {
parent = *p;
@@ -727,96 +791,16 @@
client->debug_root = debugfs_create_file(name, 0664,
dev->debug_root, client,
&debug_client_fops);
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return client;
}
-
-/**
- * ion_mark_dangling_buffers_locked() - Mark dangling buffers
- * @dev: the ion device whose buffers will be searched
- *
- * Sets marked=1 for all known buffers associated with `dev' that no
- * longer have a handle pointing to them. dev->lock should be held
- * across a call to this function (and should only be unlocked after
- * checking for marked buffers).
- */
-static void ion_mark_dangling_buffers_locked(struct ion_device *dev)
-{
- struct rb_node *n, *n2;
- /* mark all buffers as 1 */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- buf->marked = 1;
- }
-
- /* now see which buffers we can access */
- for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
- struct ion_client *client = rb_entry(n, struct ion_client,
- node);
-
- mutex_lock(&client->lock);
- for (n2 = rb_first(&client->handles); n2; n2 = rb_next(n2)) {
- struct ion_handle *handle
- = rb_entry(n2, struct ion_handle, node);
-
- handle->buffer->marked = 0;
-
- }
- mutex_unlock(&client->lock);
-
- }
-}
-
-#ifdef CONFIG_ION_LEAK_CHECK
-static u32 ion_debug_check_leaks_on_destroy;
-
-static int ion_check_for_and_print_leaks(struct ion_device *dev)
-{
- struct rb_node *n;
- int num_leaks = 0;
-
- if (!ion_debug_check_leaks_on_destroy)
- return 0;
-
- /* check for leaked buffers (those that no longer have a
- * handle pointing to them) */
- ion_mark_dangling_buffers_locked(dev);
-
- /* Anyone still marked as a 1 means a leaked handle somewhere */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- if (buf->marked == 1) {
- pr_info("Leaked ion buffer at %p\n", buf);
- num_leaks++;
- }
- }
- return num_leaks;
-}
-static void setup_ion_leak_check(struct dentry *debug_root)
-{
- debugfs_create_bool("check_leaks_on_destroy", 0664, debug_root,
- &ion_debug_check_leaks_on_destroy);
-}
-#else
-static int ion_check_for_and_print_leaks(struct ion_device *dev)
-{
- return 0;
-}
-static void setup_ion_leak_check(struct dentry *debug_root)
-{
-}
-#endif
+EXPORT_SYMBOL(ion_client_create);
void ion_client_destroy(struct ion_client *client)
{
struct ion_device *dev = client->dev;
struct rb_node *n;
- int num_leaks;
pr_debug("%s: %d\n", __func__, __LINE__);
while ((n = rb_first(&client->handles))) {
@@ -824,25 +808,13 @@
node);
ion_handle_destroy(&handle->ref);
}
- mutex_lock(&dev->lock);
+ down_write(&dev->lock);
if (client->task)
put_task_struct(client->task);
rb_erase(&client->node, &dev->clients);
debugfs_remove_recursive(client->debug_root);
- num_leaks = ion_check_for_and_print_leaks(dev);
-
- mutex_unlock(&dev->lock);
-
- if (num_leaks) {
- struct task_struct *current_task = current;
- char current_task_name[TASK_COMM_LEN];
- get_task_comm(current_task_name, current_task);
- WARN(1, "%s: Detected %d leaked ion buffer%s.\n",
- __func__, num_leaks, num_leaks == 1 ? "" : "s");
- pr_info("task name at time of leak: %s, pid: %d\n",
- current_task_name, current_task->pid);
- }
+ up_write(&dev->lock);
kfree(client->name);
kfree(client);
@@ -1115,7 +1087,7 @@
static void *ion_dma_buf_kmap(struct dma_buf *dmabuf, unsigned long offset)
{
struct ion_buffer *buffer = dmabuf->priv;
- return buffer->vaddr + offset;
+ return buffer->vaddr + offset * PAGE_SIZE;
}
static void ion_dma_buf_kunmap(struct dma_buf *dmabuf, unsigned long offset,
@@ -1171,19 +1143,19 @@
.kunmap = ion_dma_buf_kunmap,
};
-int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle)
+struct dma_buf *ion_share_dma_buf(struct ion_client *client,
+ struct ion_handle *handle)
{
struct ion_buffer *buffer;
struct dma_buf *dmabuf;
bool valid_handle;
- int fd;
mutex_lock(&client->lock);
valid_handle = ion_handle_validate(client, handle);
mutex_unlock(&client->lock);
if (!valid_handle) {
WARN(1, "%s: invalid handle passed to share.\n", __func__);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
buffer = handle->buffer;
@@ -1191,15 +1163,29 @@
dmabuf = dma_buf_export(buffer, &dma_buf_ops, buffer->size, O_RDWR);
if (IS_ERR(dmabuf)) {
ion_buffer_put(buffer);
- return PTR_ERR(dmabuf);
+ return dmabuf;
}
+
+ return dmabuf;
+}
+EXPORT_SYMBOL(ion_share_dma_buf);
+
+int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handle)
+{
+ struct dma_buf *dmabuf;
+ int fd;
+
+ dmabuf = ion_share_dma_buf(client, handle);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
fd = dma_buf_fd(dmabuf, O_CLOEXEC);
if (fd < 0)
dma_buf_put(dmabuf);
return fd;
}
-EXPORT_SYMBOL(ion_share_dma_buf);
+EXPORT_SYMBOL(ion_share_dma_buf_fd);
struct ion_handle *ion_import_dma_buf(struct ion_client *client, int fd)
{
@@ -1308,7 +1294,8 @@
if (copy_from_user(&data, (void __user *)arg, sizeof(data)))
return -EFAULT;
- data.fd = ion_share_dma_buf(client, data.handle);
+ data.fd = ion_share_dma_buf_fd(client, data.handle);
+
if (copy_to_user((void __user *)arg, &data, sizeof(data)))
return -EFAULT;
if (data.fd < 0)
@@ -1388,7 +1375,7 @@
pr_debug("%s: %d\n", __func__, __LINE__);
snprintf(debug_name, 64, "%u", task_pid_nr(current->group_leader));
- client = ion_client_create(dev, -1, debug_name);
+ client = ion_client_create(dev, debug_name);
if (IS_ERR_OR_NULL(client))
return PTR_ERR(client);
file->private_data = client;
@@ -1570,9 +1557,12 @@
struct ion_heap *heap = s->private;
struct ion_device *dev = heap->dev;
struct rb_node *n;
+ size_t total_size = 0;
+ size_t total_orphaned_size = 0;
- mutex_lock(&dev->lock);
+ mutex_lock(&dev->buffer_lock);
seq_printf(s, "%16.s %16.s %16.s\n", "client", "pid", "size");
+ seq_printf(s, "----------------------------------------------------\n");
for (n = rb_first(&dev->clients); n; n = rb_next(n)) {
struct ion_client *client = rb_entry(n, struct ion_client,
@@ -1591,8 +1581,28 @@
client->pid, size);
}
}
+ seq_printf(s, "----------------------------------------------------\n");
+ seq_printf(s, "orphaned allocations (info is from last known client):"
+ "\n");
+ for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
+ struct ion_buffer *buffer = rb_entry(n, struct ion_buffer,
+ node);
+ if (buffer->heap->type == heap->type)
+ total_size += buffer->size;
+ if (!buffer->handle_count) {
+ seq_printf(s, "%16.s %16u %16u\n", buffer->task_comm,
+ buffer->pid, buffer->size);
+ total_orphaned_size += buffer->size;
+ }
+ }
+ seq_printf(s, "----------------------------------------------------\n");
+ seq_printf(s, "%16.s %16u\n", "total orphaned",
+ total_orphaned_size);
+ seq_printf(s, "%16.s %16u\n", "total ", total_size);
+ seq_printf(s, "----------------------------------------------------\n");
+
ion_heap_print_debug(s, heap);
- mutex_unlock(&dev->lock);
+ mutex_unlock(&dev->buffer_lock);
return 0;
}
@@ -1608,40 +1618,90 @@
.release = single_release,
};
+static bool ion_heap_free_list_is_empty(struct ion_heap *heap)
+{
+ bool is_empty;
+
+ rt_mutex_lock(&heap->lock);
+ is_empty = list_empty(&heap->free_list);
+ rt_mutex_unlock(&heap->lock);
+
+ return is_empty;
+}
+
+static int ion_heap_deferred_free(void *data)
+{
+ struct ion_heap *heap = data;
+
+ while (true) {
+ struct ion_buffer *buffer;
+
+ wait_event_freezable(heap->waitqueue,
+ !ion_heap_free_list_is_empty(heap));
+
+ rt_mutex_lock(&heap->lock);
+ if (list_empty(&heap->free_list)) {
+ rt_mutex_unlock(&heap->lock);
+ continue;
+ }
+ buffer = list_first_entry(&heap->free_list, struct ion_buffer,
+ list);
+ list_del(&buffer->list);
+ rt_mutex_unlock(&heap->lock);
+ _ion_buffer_destroy(buffer);
+ }
+
+ return 0;
+}
+
+static bool ion_heap_drain_freelist(struct ion_heap *heap)
+{
+ struct ion_buffer *buffer, *tmp;
+
+ if (ion_heap_free_list_is_empty(heap))
+ return false;
+ rt_mutex_lock(&heap->lock);
+ list_for_each_entry_safe(buffer, tmp, &heap->free_list, list) {
+ list_del(&buffer->list);
+ _ion_buffer_destroy(buffer);
+ }
+ BUG_ON(!list_empty(&heap->free_list));
+ rt_mutex_unlock(&heap->lock);
+
+
+ return true;
+}
+
void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap)
{
- struct rb_node **p = &dev->heaps.rb_node;
- struct rb_node *parent = NULL;
- struct ion_heap *entry;
+ struct sched_param param = { .sched_priority = 0 };
if (!heap->ops->allocate || !heap->ops->free || !heap->ops->map_dma ||
!heap->ops->unmap_dma)
pr_err("%s: can not add heap with invalid ops struct.\n",
__func__);
- heap->dev = dev;
- mutex_lock(&dev->lock);
- while (*p) {
- parent = *p;
- entry = rb_entry(parent, struct ion_heap, node);
-
- if (heap->id < entry->id) {
- p = &(*p)->rb_left;
- } else if (heap->id > entry->id ) {
- p = &(*p)->rb_right;
- } else {
- pr_err("%s: can not insert multiple heaps with "
- "id %d\n", __func__, heap->id);
- goto end;
- }
+ if (heap->flags & ION_HEAP_FLAG_DEFER_FREE) {
+ INIT_LIST_HEAD(&heap->free_list);
+ rt_mutex_init(&heap->lock);
+ init_waitqueue_head(&heap->waitqueue);
+ heap->task = kthread_run(ion_heap_deferred_free, heap,
+ "%s", heap->name);
+ sched_setscheduler(heap->task, SCHED_IDLE, &param);
+ if (IS_ERR(heap->task))
+ pr_err("%s: creating thread for deferred free failed\n",
+ __func__);
}
- rb_link_node(&heap->node, parent, p);
- rb_insert_color(&heap->node, &dev->heaps);
+ heap->dev = dev;
+ down_write(&dev->lock);
+ /* use negative heap->id to reverse the priority -- when traversing
+ the list later attempt higher id numbers first */
+ plist_node_init(&heap->node, -heap->id);
+ plist_add(&heap->node, &dev->heaps);
debugfs_create_file(heap->name, 0664, dev->debug_root, heap,
&debug_heap_fops);
-end:
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
}
int ion_secure_handle(struct ion_client *client, struct ion_handle *handle,
@@ -1714,16 +1774,15 @@
int ion_secure_heap(struct ion_device *dev, int heap_id, int version,
void *data)
{
- struct rb_node *n;
int ret_val = 0;
+ struct ion_heap *heap;
/*
* traverse the list of heaps available in this system
* and find the heap that is specified.
*/
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
+ down_write(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
if (!ion_heap_allow_heap_secure(heap->type))
continue;
if (ION_HEAP(heap->id) != heap_id)
@@ -1734,7 +1793,7 @@
ret_val = -EINVAL;
break;
}
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return ret_val;
}
EXPORT_SYMBOL(ion_secure_heap);
@@ -1742,16 +1801,15 @@
int ion_unsecure_heap(struct ion_device *dev, int heap_id, int version,
void *data)
{
- struct rb_node *n;
int ret_val = 0;
+ struct ion_heap *heap;
/*
* traverse the list of heaps available in this system
* and find the heap that is specified.
*/
- mutex_lock(&dev->lock);
- for (n = rb_first(&dev->heaps); n != NULL; n = rb_next(n)) {
- struct ion_heap *heap = rb_entry(n, struct ion_heap, node);
+ down_write(&dev->lock);
+ plist_for_each_entry(heap, &dev->heaps, node) {
if (!ion_heap_allow_heap_secure(heap->type))
continue;
if (ION_HEAP(heap->id) != heap_id)
@@ -1762,50 +1820,11 @@
ret_val = -EINVAL;
break;
}
- mutex_unlock(&dev->lock);
+ up_write(&dev->lock);
return ret_val;
}
EXPORT_SYMBOL(ion_unsecure_heap);
-static int ion_debug_leak_show(struct seq_file *s, void *unused)
-{
- struct ion_device *dev = s->private;
- struct rb_node *n;
-
- seq_printf(s, "%16.s %16.s %16.s %16.s\n", "buffer", "heap", "size",
- "ref cnt");
-
- mutex_lock(&dev->lock);
- ion_mark_dangling_buffers_locked(dev);
-
- /* Anyone still marked as a 1 means a leaked handle somewhere */
- for (n = rb_first(&dev->buffers); n; n = rb_next(n)) {
- struct ion_buffer *buf = rb_entry(n, struct ion_buffer,
- node);
-
- if (buf->marked == 1)
- seq_printf(s, "%16.x %16.s %16.x %16.d\n",
- (int)buf, buf->heap->name, buf->size,
- atomic_read(&buf->ref.refcount));
- }
- mutex_unlock(&dev->lock);
- return 0;
-}
-
-static int ion_debug_leak_open(struct inode *inode, struct file *file)
-{
- return single_open(file, ion_debug_leak_show, inode->i_private);
-}
-
-static const struct file_operations debug_leak_fops = {
- .open = ion_debug_leak_open,
- .read = seq_read,
- .llseek = seq_lseek,
- .release = single_release,
-};
-
-
-
struct ion_device *ion_device_create(long (*custom_ioctl)
(struct ion_client *client,
unsigned int cmd,
@@ -1834,13 +1853,10 @@
idev->custom_ioctl = custom_ioctl;
idev->buffers = RB_ROOT;
- mutex_init(&idev->lock);
- idev->heaps = RB_ROOT;
+ mutex_init(&idev->buffer_lock);
+ init_rwsem(&idev->lock);
+ plist_head_init(&idev->heaps);
idev->clients = RB_ROOT;
- debugfs_create_file("check_leaked_fds", 0664, idev->debug_root, idev,
- &debug_leak_fops);
-
- setup_ion_leak_check(idev->debug_root);
return idev;
}
@@ -1853,16 +1869,35 @@
void __init ion_reserve(struct ion_platform_data *data)
{
- int i, ret;
+ int i;
for (i = 0; i < data->nr; i++) {
if (data->heaps[i].size == 0)
continue;
- ret = memblock_reserve(data->heaps[i].base,
- data->heaps[i].size);
- if (ret)
- pr_err("memblock reserve of %x@%pa failed\n",
- data->heaps[i].size,
- &data->heaps[i].base);
+
+ if (data->heaps[i].base == 0) {
+ phys_addr_t paddr;
+ paddr = memblock_alloc_base(data->heaps[i].size,
+ data->heaps[i].align,
+ MEMBLOCK_ALLOC_ANYWHERE);
+ if (!paddr) {
+ pr_err("%s: error allocating memblock for "
+ "heap %d\n",
+ __func__, i);
+ continue;
+ }
+ data->heaps[i].base = paddr;
+ } else {
+ int ret = memblock_reserve(data->heaps[i].base,
+ data->heaps[i].size);
+ if (ret)
+ pr_err("memblock reserve of %x@%pa failed\n",
+ data->heaps[i].size,
+ &data->heaps[i].base);
+ }
+ pr_info("%s: %s reserved base %pa size %d\n", __func__,
+ data->heaps[i].name,
+ &data->heaps[i].base,
+ data->heaps[i].size);
}
}
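Two structural changes in ion.c are easy to miss in the hunks above: heaps now sit on a priority list rather than an id-keyed rb-tree, and each node is registered with priority -heap->id so ion_alloc() walks higher-numbered heaps first. A small illustration of that ordering, assuming three heaps whose ids 1, 4 and 8 were already set from platform data (the function itself is illustrative and would only compile inside ion.c, where struct ion_device is defined):

	/* Illustration only: nodes initialised with -id keep the plist in
	 * ascending priority (-8, -4, -1), so traversal sees ids 8, 4, 1. */
	static void show_heap_order(struct ion_device *dev, struct ion_heap *h1,
				    struct ion_heap *h4, struct ion_heap *h8)
	{
		struct ion_heap *heap;

		plist_node_init(&h1->node, -1);          /* h1->id == 1 */
		plist_node_init(&h4->node, -4);          /* h4->id == 4 */
		plist_node_init(&h8->node, -8);          /* h8->id == 8 */
		plist_add(&h1->node, &dev->heaps);
		plist_add(&h4->node, &dev->heaps);
		plist_add(&h8->node, &dev->heaps);

		plist_for_each_entry(heap, &dev->heaps, node)
			pr_info("would try heap id %u\n", heap->id);  /* 8, 4, 1 */
	}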
diff --git a/drivers/gpu/ion/ion_chunk_heap.c b/drivers/gpu/ion/ion_chunk_heap.c
new file mode 100644
index 0000000..b76f898
--- /dev/null
+++ b/drivers/gpu/ion/ion_chunk_heap.c
@@ -0,0 +1,180 @@
+/*
+ * drivers/gpu/ion/ion_chunk_heap.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+//#include <linux/spinlock.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/genalloc.h>
+#include <linux/io.h>
+#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "ion_priv.h"
+
+#include <asm/mach/map.h>
+
+struct ion_chunk_heap {
+ struct ion_heap heap;
+ struct gen_pool *pool;
+ ion_phys_addr_t base;
+ unsigned long chunk_size;
+ unsigned long size;
+ unsigned long allocated;
+};
+
+static int ion_chunk_heap_allocate(struct ion_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long size, unsigned long align,
+ unsigned long flags)
+{
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+ struct sg_table *table;
+ struct scatterlist *sg;
+ int ret, i;
+ unsigned long num_chunks;
+
+ if (ion_buffer_fault_user_mappings(buffer))
+ return -ENOMEM;
+
+ num_chunks = ALIGN(size, chunk_heap->chunk_size) /
+ chunk_heap->chunk_size;
+ buffer->size = num_chunks * chunk_heap->chunk_size;
+
+ if (buffer->size > chunk_heap->size - chunk_heap->allocated)
+ return -ENOMEM;
+
+ table = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
+ if (!table)
+ return -ENOMEM;
+ ret = sg_alloc_table(table, num_chunks, GFP_KERNEL);
+ if (ret) {
+ kfree(table);
+ return ret;
+ }
+
+ sg = table->sgl;
+ for (i = 0; i < num_chunks; i++) {
+ unsigned long paddr = gen_pool_alloc(chunk_heap->pool,
+ chunk_heap->chunk_size);
+ if (!paddr)
+ goto err;
+ sg_set_page(sg, phys_to_page(paddr), chunk_heap->chunk_size, 0);
+ sg = sg_next(sg);
+ }
+
+ buffer->priv_virt = table;
+ chunk_heap->allocated += buffer->size;
+ return 0;
+err:
+ sg = table->sgl;
+ for (i -= 1; i >= 0; i--) {
+ gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+ sg_dma_len(sg));
+ sg = sg_next(sg);
+ }
+ sg_free_table(table);
+ kfree(table);
+ return -ENOMEM;
+}
+
+static void ion_chunk_heap_free(struct ion_buffer *buffer)
+{
+ struct ion_heap *heap = buffer->heap;
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+ struct sg_table *table = buffer->priv_virt;
+ struct scatterlist *sg;
+ int i;
+
+ ion_heap_buffer_zero(buffer);
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ if (ion_buffer_cached(buffer))
+ dma_sync_sg_for_device(NULL, sg, 1, DMA_BIDIRECTIONAL);
+ gen_pool_free(chunk_heap->pool, page_to_phys(sg_page(sg)),
+ sg_dma_len(sg));
+ }
+ chunk_heap->allocated -= buffer->size;
+ sg_free_table(table);
+ kfree(table);
+}
+
+struct sg_table *ion_chunk_heap_map_dma(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ return buffer->priv_virt;
+}
+
+void ion_chunk_heap_unmap_dma(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ return;
+}
+
+static struct ion_heap_ops chunk_heap_ops = {
+ .allocate = ion_chunk_heap_allocate,
+ .free = ion_chunk_heap_free,
+ .map_dma = ion_chunk_heap_map_dma,
+ .unmap_dma = ion_chunk_heap_unmap_dma,
+ .map_user = ion_heap_map_user,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
+};
+
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *heap_data)
+{
+ struct ion_chunk_heap *chunk_heap;
+ struct scatterlist sg;
+
+ chunk_heap = kzalloc(sizeof(struct ion_chunk_heap), GFP_KERNEL);
+ if (!chunk_heap)
+ return ERR_PTR(-ENOMEM);
+
+ chunk_heap->chunk_size = (unsigned long)heap_data->priv;
+ chunk_heap->pool = gen_pool_create(get_order(chunk_heap->chunk_size) +
+ PAGE_SHIFT, -1);
+ if (!chunk_heap->pool) {
+ kfree(chunk_heap);
+ return ERR_PTR(-ENOMEM);
+ }
+ chunk_heap->base = heap_data->base;
+ chunk_heap->size = heap_data->size;
+ chunk_heap->allocated = 0;
+
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, phys_to_page(heap_data->base), heap_data->size, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+ gen_pool_add(chunk_heap->pool, chunk_heap->base, heap_data->size, -1);
+ chunk_heap->heap.ops = &chunk_heap_ops;
+ chunk_heap->heap.type = ION_HEAP_TYPE_CHUNK;
+ chunk_heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+ pr_info("%s: base %pa size %zd align %pa\n", __func__,
+ &chunk_heap->base, heap_data->size, &heap_data->align);
+
+ return &chunk_heap->heap;
+}
+
+void ion_chunk_heap_destroy(struct ion_heap *heap)
+{
+ struct ion_chunk_heap *chunk_heap =
+ container_of(heap, struct ion_chunk_heap, heap);
+
+ gen_pool_destroy(chunk_heap->pool);
+ kfree(chunk_heap);
+ chunk_heap = NULL;
+}
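The chunk heap carves one physically contiguous region into fixed-size pieces handed out through a gen_pool; the chunk size is passed in through the heap's priv pointer. A hypothetical platform-data entry for such a heap (every value below is made up for illustration):

	/* Hypothetical example only: a 16 MB carveout served in 64 KB chunks. */
	static struct ion_platform_heap example_chunk_heap = {
		.type = ION_HEAP_TYPE_CHUNK,
		.id   = 25,                      /* example id, must be unique */
		.name = "example_chunk",
		.base = 0x90000000,              /* example physical base */
		.size = SZ_16M,
		.priv = (void *)SZ_64K,          /* becomes chunk_heap->chunk_size */
	};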
diff --git a/drivers/gpu/ion/ion_cma_heap.c b/drivers/gpu/ion/ion_cma_heap.c
index f64ad4d..193f4d4 100644
--- a/drivers/gpu/ion/ion_cma_heap.c
+++ b/drivers/gpu/ion/ion_cma_heap.c
@@ -47,7 +47,7 @@
int ion_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size)
{
- struct page *page = virt_to_page(cpu_addr);
+ struct page *page = phys_to_page(handle);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
diff --git a/drivers/gpu/ion/ion_cma_secure_heap.c b/drivers/gpu/ion/ion_cma_secure_heap.c
index d622a51..e1b3eea 100644
--- a/drivers/gpu/ion/ion_cma_secure_heap.c
+++ b/drivers/gpu/ion/ion_cma_secure_heap.c
@@ -52,7 +52,7 @@
int ion_secure_cma_get_sgtable(struct device *dev, struct sg_table *sgt,
void *cpu_addr, dma_addr_t handle, size_t size)
{
- struct page *page = virt_to_page(cpu_addr);
+ struct page *page = phys_to_page(handle);
int ret;
ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
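Both CMA get_sgtable fixes make the same point: for a DMA-coherent/CMA buffer the kernel virtual address is typically a remapped one, so the backing struct page must be derived from the dma handle (a physical address), not from virt_to_page(cpu_addr). The resulting construction, reduced to its essentials in an illustrative helper:

	/* Sketch only: the corrected way to build the single-entry sg table. */
	static int example_get_sgtable(struct sg_table *sgt, dma_addr_t handle,
				       size_t size)
	{
		struct page *page = phys_to_page(handle); /* not virt_to_page(cpu_addr) */
		int ret = sg_alloc_table(sgt, 1, GFP_KERNEL);

		if (!ret)
			sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
		return ret;
	}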
diff --git a/drivers/gpu/ion/ion_heap.c b/drivers/gpu/ion/ion_heap.c
index 510b9ce..3d37541 100644
--- a/drivers/gpu/ion/ion_heap.c
+++ b/drivers/gpu/ion/ion_heap.c
@@ -17,8 +17,120 @@
#include <linux/err.h>
#include <linux/ion.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/vmalloc.h>
#include "ion_priv.h"
+void *ion_heap_map_kernel(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ struct scatterlist *sg;
+ int i, j;
+ void *vaddr;
+ pgprot_t pgprot;
+ struct sg_table *table = buffer->sg_table;
+ int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
+ struct page **pages = vmalloc(sizeof(struct page *) * npages);
+ struct page **tmp = pages;
+
+ if (!pages)
+ return 0;
+
+ if (buffer->flags & ION_FLAG_CACHED)
+ pgprot = PAGE_KERNEL;
+ else
+ pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ int npages_this_entry = PAGE_ALIGN(sg_dma_len(sg)) / PAGE_SIZE;
+ struct page *page = sg_page(sg);
+ BUG_ON(i >= npages);
+ for (j = 0; j < npages_this_entry; j++) {
+ *(tmp++) = page++;
+ }
+ }
+ vaddr = vmap(pages, npages, VM_MAP, pgprot);
+ vfree(pages);
+
+ return vaddr;
+}
+
+void ion_heap_unmap_kernel(struct ion_heap *heap,
+ struct ion_buffer *buffer)
+{
+ vunmap(buffer->vaddr);
+}
+
+int ion_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
+ struct vm_area_struct *vma)
+{
+ struct sg_table *table = buffer->sg_table;
+ unsigned long addr = vma->vm_start;
+ unsigned long offset = vma->vm_pgoff * PAGE_SIZE;
+ struct scatterlist *sg;
+ int i;
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ struct page *page = sg_page(sg);
+ unsigned long remainder = vma->vm_end - addr;
+ unsigned long len = sg_dma_len(sg);
+
+ if (offset >= sg_dma_len(sg)) {
+ offset -= sg_dma_len(sg);
+ continue;
+ } else if (offset) {
+ page += offset / PAGE_SIZE;
+ len = sg_dma_len(sg) - offset;
+ offset = 0;
+ }
+ len = min(len, remainder);
+ remap_pfn_range(vma, addr, page_to_pfn(page), len,
+ vma->vm_page_prot);
+ addr += len;
+ if (addr >= vma->vm_end)
+ return 0;
+ }
+ return 0;
+}
+
+int ion_heap_buffer_zero(struct ion_buffer *buffer)
+{
+ struct sg_table *table = buffer->sg_table;
+ pgprot_t pgprot;
+ struct scatterlist *sg;
+ struct vm_struct *vm_struct;
+ int i, j, ret = 0;
+
+ if (buffer->flags & ION_FLAG_CACHED)
+ pgprot = PAGE_KERNEL;
+ else
+ pgprot = pgprot_writecombine(PAGE_KERNEL);
+
+ vm_struct = get_vm_area(PAGE_SIZE, VM_ALLOC);
+ if (!vm_struct)
+ return -ENOMEM;
+
+ for_each_sg(table->sgl, sg, table->nents, i) {
+ struct page *page = sg_page(sg);
+ unsigned long len = sg_dma_len(sg);
+
+ for (j = 0; j < len / PAGE_SIZE; j++) {
+ struct page *sub_page = page + j;
+ struct page **pages = &sub_page;
+ ret = map_vm_area(vm_struct, pgprot, &pages);
+ if (ret)
+ goto end;
+ memset(vm_struct->addr, 0, PAGE_SIZE);
+ unmap_kernel_range((unsigned long)vm_struct->addr,
+ PAGE_SIZE);
+ }
+ }
+end:
+ free_vm_area(vm_struct);
+ return ret;
+}
+
struct ion_heap *ion_heap_create(struct ion_platform_heap *heap_data)
{
struct ion_heap *heap = NULL;
@@ -33,6 +145,9 @@
case ION_HEAP_TYPE_CARVEOUT:
heap = ion_carveout_heap_create(heap_data);
break;
+ case ION_HEAP_TYPE_CHUNK:
+ heap = ion_chunk_heap_create(heap_data);
+ break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap_data->type);
@@ -67,6 +182,9 @@
case ION_HEAP_TYPE_CARVEOUT:
ion_carveout_heap_destroy(heap);
break;
+ case ION_HEAP_TYPE_CHUNK:
+ ion_chunk_heap_destroy(heap);
+ break;
default:
pr_err("%s: Invalid heap type %d\n", __func__,
heap->type);
diff --git a/drivers/gpu/ion/ion_iommu_heap.c b/drivers/gpu/ion/ion_iommu_heap.c
index ca29016..bc9bddd 100644
--- a/drivers/gpu/ion/ion_iommu_heap.c
+++ b/drivers/gpu/ion/ion_iommu_heap.c
@@ -27,6 +27,7 @@
#include <asm/page.h>
#include <asm/cacheflush.h>
#include <mach/iommu_domains.h>
+#include <trace/events/kmem.h>
struct ion_iommu_heap {
struct ion_heap heap;
@@ -83,9 +84,13 @@
} else {
gfp |= GFP_KERNEL;
}
+ trace_alloc_pages_iommu_start(gfp, orders[i]);
page = alloc_pages(gfp, orders[i]);
- if (!page)
+ trace_alloc_pages_iommu_end(gfp, orders[i]);
+ if (!page) {
+ trace_alloc_pages_iommu_fail(gfp, orders[i]);
continue;
+ }
info = kmalloc(sizeof(struct page_info), GFP_KERNEL);
info->page = page;
@@ -111,7 +116,7 @@
int j;
void *ptr = NULL;
unsigned int npages_to_vmap, total_pages, num_large_pages = 0;
- long size_remaining = PAGE_ALIGN(size);
+ unsigned long size_remaining = PAGE_ALIGN(size);
unsigned int max_order = ION_IS_CACHED(flags) ? 0 : orders[0];
data = kmalloc(sizeof(*data), GFP_KERNEL);
diff --git a/drivers/gpu/ion/ion_page_pool.c b/drivers/gpu/ion/ion_page_pool.c
new file mode 100644
index 0000000..e8b5489
--- /dev/null
+++ b/drivers/gpu/ion/ion_page_pool.c
@@ -0,0 +1,282 @@
+/*
+ * drivers/gpu/ion/ion_page_pool.c
+ *
+ * Copyright (C) 2011 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/fs.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/shrinker.h>
+#include "ion_priv.h"
+
+/* #define DEBUG_PAGE_POOL_SHRINKER */
+
+static struct plist_head pools = PLIST_HEAD_INIT(pools);
+static struct shrinker shrinker;
+
+struct ion_page_pool_item {
+ struct page *page;
+ struct list_head list;
+};
+
+static void *ion_page_pool_alloc_pages(struct ion_page_pool *pool)
+{
+ struct page *page = alloc_pages(pool->gfp_mask, pool->order);
+ struct scatterlist sg;
+
+ if (!page)
+ return NULL;
+
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, page, PAGE_SIZE << pool->order, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+
+ return page;
+}
+
+static void ion_page_pool_free_pages(struct ion_page_pool *pool,
+ struct page *page)
+{
+ __free_pages(page, pool->order);
+}
+
+static int ion_page_pool_add(struct ion_page_pool *pool, struct page *page)
+{
+ struct ion_page_pool_item *item;
+
+ item = kmalloc(sizeof(struct ion_page_pool_item), GFP_KERNEL);
+ if (!item)
+ return -ENOMEM;
+
+ mutex_lock(&pool->mutex);
+ item->page = page;
+ if (PageHighMem(page)) {
+ list_add_tail(&item->list, &pool->high_items);
+ pool->high_count++;
+ } else {
+ list_add_tail(&item->list, &pool->low_items);
+ pool->low_count++;
+ }
+ mutex_unlock(&pool->mutex);
+ return 0;
+}
+
+static struct page *ion_page_pool_remove(struct ion_page_pool *pool, bool high)
+{
+ struct ion_page_pool_item *item;
+ struct page *page;
+
+ if (high) {
+ BUG_ON(!pool->high_count);
+ item = list_first_entry(&pool->high_items,
+ struct ion_page_pool_item, list);
+ pool->high_count--;
+ } else {
+ BUG_ON(!pool->low_count);
+ item = list_first_entry(&pool->low_items,
+ struct ion_page_pool_item, list);
+ pool->low_count--;
+ }
+
+ list_del(&item->list);
+ page = item->page;
+ kfree(item);
+ return page;
+}
+
+void *ion_page_pool_alloc(struct ion_page_pool *pool)
+{
+ struct page *page = NULL;
+
+ BUG_ON(!pool);
+
+ mutex_lock(&pool->mutex);
+ if (pool->high_count)
+ page = ion_page_pool_remove(pool, true);
+ else if (pool->low_count)
+ page = ion_page_pool_remove(pool, false);
+ mutex_unlock(&pool->mutex);
+
+ if (!page)
+ page = ion_page_pool_alloc_pages(pool);
+
+ return page;
+}
+
+void ion_page_pool_free(struct ion_page_pool *pool, struct page* page)
+{
+ int ret;
+
+ ret = ion_page_pool_add(pool, page);
+ if (ret)
+ ion_page_pool_free_pages(pool, page);
+}
+
+#ifdef DEBUG_PAGE_POOL_SHRINKER
+static int debug_drop_pools_set(void *data, u64 val)
+{
+ struct shrink_control sc;
+ int objs;
+
+ sc.gfp_mask = -1;
+ sc.nr_to_scan = 0;
+
+ if (!val)
+ return 0;
+
+ objs = shrinker.shrink(&shrinker, &sc);
+ sc.nr_to_scan = objs;
+
+ shrinker.shrink(&shrinker, &sc);
+ return 0;
+}
+
+static int debug_drop_pools_get(void *data, u64 *val)
+{
+ struct shrink_control sc;
+ int objs;
+
+ sc.gfp_mask = -1;
+ sc.nr_to_scan = 0;
+
+ objs = shrinker.shrink(&shrinker, &sc);
+ *val = objs;
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debug_drop_pools_fops, debug_drop_pools_get,
+ debug_drop_pools_set, "%llu\n");
+
+static int debug_grow_pools_set(void *data, u64 val)
+{
+ struct ion_page_pool *pool;
+ struct page *page;
+
+ plist_for_each_entry(pool, &pools, list) {
+ if (val != pool->list.prio)
+ continue;
+ page = ion_page_pool_alloc_pages(pool);
+ if (page)
+ ion_page_pool_add(pool, page);
+ }
+
+ return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debug_grow_pools_fops, debug_drop_pools_get,
+ debug_grow_pools_set, "%llu\n");
+#endif
+
+static int ion_page_pool_total(bool high)
+{
+ struct ion_page_pool *pool;
+ int total = 0;
+
+ plist_for_each_entry(pool, &pools, list) {
+ total += high ? (pool->high_count + pool->low_count) *
+ (1 << pool->order) :
+ pool->low_count * (1 << pool->order);
+ }
+ return total;
+}
+
+static int ion_page_pool_shrink(struct shrinker *shrinker,
+ struct shrink_control *sc)
+{
+ struct ion_page_pool *pool;
+ int nr_freed = 0;
+ int i;
+ bool high = false;
+ int nr_to_scan = sc->nr_to_scan;
+
+ if (sc->gfp_mask & __GFP_HIGHMEM)
+ high = true;
+
+ if (nr_to_scan == 0)
+ return ion_page_pool_total(high);
+
+ plist_for_each_entry(pool, &pools, list) {
+ for (i = 0; i < nr_to_scan; i++) {
+ struct page *page;
+
+ mutex_lock(&pool->mutex);
+ if (high && pool->high_count) {
+ page = ion_page_pool_remove(pool, true);
+ } else if (pool->low_count) {
+ page = ion_page_pool_remove(pool, false);
+ } else {
+ mutex_unlock(&pool->mutex);
+ break;
+ }
+ mutex_unlock(&pool->mutex);
+ ion_page_pool_free_pages(pool, page);
+ nr_freed += (1 << pool->order);
+ }
+ nr_to_scan -= i;
+ }
+
+ return ion_page_pool_total(high);
+}
+
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order)
+{
+ struct ion_page_pool *pool = kmalloc(sizeof(struct ion_page_pool),
+ GFP_KERNEL);
+ if (!pool)
+ return NULL;
+ pool->high_count = 0;
+ pool->low_count = 0;
+ INIT_LIST_HEAD(&pool->low_items);
+ INIT_LIST_HEAD(&pool->high_items);
+ pool->gfp_mask = gfp_mask;
+ pool->order = order;
+ mutex_init(&pool->mutex);
+ plist_node_init(&pool->list, order);
+ plist_add(&pool->list, &pools);
+
+ return pool;
+}
+
+void ion_page_pool_destroy(struct ion_page_pool *pool)
+{
+ plist_del(&pool->list, &pools);
+ kfree(pool);
+}
+
+static int __init ion_page_pool_init(void)
+{
+ shrinker.shrink = ion_page_pool_shrink;
+ shrinker.seeks = DEFAULT_SEEKS;
+ shrinker.batch = 0;
+ register_shrinker(&shrinker);
+#ifdef DEBUG_PAGE_POOL_SHRINKER
+ debugfs_create_file("ion_pools_shrink", 0644, NULL, NULL,
+ &debug_drop_pools_fops);
+ debugfs_create_file("ion_pools_grow", 0644, NULL, NULL,
+ &debug_grow_pools_fops);
+#endif
+ return 0;
+}
+
+static void __exit ion_page_pool_exit(void)
+{
+ unregister_shrinker(&shrinker);
+}
+
+module_init(ion_page_pool_init);
+module_exit(ion_page_pool_exit);
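The new page pool keeps freed pages of one order/gfp combination on separate high-memory and low-memory lists, hands them back out before falling through to alloc_pages(), and registers a shrinker so the lists drain under memory pressure. A minimal usage sketch from a hypothetical heap; the pool variable, order and gfp mask here are illustrative:

	/* Sketch only: one order-4 pool of zeroed highmem-capable pages. */
	static struct ion_page_pool *example_pool;

	static int example_pool_setup(void)
	{
		example_pool = ion_page_pool_create(GFP_HIGHUSER | __GFP_ZERO, 4);
		return example_pool ? 0 : -ENOMEM;
	}

	static struct page *example_get_page(void)
	{
		return ion_page_pool_alloc(example_pool);   /* reuses pooled pages first */
	}

	static void example_put_page(struct page *page)
	{
		ion_page_pool_free(example_pool, page);     /* pooled; shrinker frees later */
	}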
diff --git a/drivers/gpu/ion/ion_priv.h b/drivers/gpu/ion/ion_priv.h
index 71527ae..4b724df 100644
--- a/drivers/gpu/ion/ion_priv.h
+++ b/drivers/gpu/ion/ion_priv.h
@@ -18,14 +18,17 @@
#ifndef _ION_PRIV_H
#define _ION_PRIV_H
+#include <linux/ion.h>
#include <linux/kref.h>
#include <linux/mm_types.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
-#include <linux/ion.h>
#include <linux/seq_file.h>
#include "msm_ion_priv.h"
+#include <linux/sched.h>
+#include <linux/shrinker.h>
+#include <linux/types.h>
struct ion_buffer *ion_handle_buffer(struct ion_handle *handle);
@@ -46,10 +49,22 @@
* @vaddr: the kenrel mapping if kmap_cnt is not zero
* @dmap_cnt: number of times the buffer is mapped for dma
* @sg_table: the sg table for the buffer if dmap_cnt is not zero
+ * @dirty: bitmask representing which pages of this buffer have
+ * been dirtied by the cpu and need cache maintenance
+ * before dma
+ * @vmas: list of vma's mapping this buffer
+ * @handle_count: count of handles referencing this buffer
+ * @task_comm: taskcomm of last client to reference this buffer in a
+ * handle, used for debugging
+ * @pid: pid of last client to reference this buffer in a
+ * handle, used for debugging
*/
struct ion_buffer {
struct kref ref;
- struct rb_node node;
+ union {
+ struct rb_node node;
+ struct list_head list;
+ };
struct ion_device *dev;
struct ion_heap *heap;
unsigned long flags;
@@ -65,7 +80,10 @@
struct sg_table *sg_table;
unsigned long *dirty;
struct list_head vmas;
- int marked;
+ /* used to track orphaned buffers */
+ int handle_count;
+ char task_comm[TASK_COMM_LEN];
+ pid_t pid;
};
/**
@@ -106,16 +124,28 @@
};
/**
+ * heap flags - flags between the heaps and core ion code
+ */
+#define ION_HEAP_FLAG_DEFER_FREE (1 << 0)
+
+/**
* struct ion_heap - represents a heap in the system
* @node: rb node to put the heap on the device's tree of heaps
* @dev: back pointer to the ion_device
* @type: type of heap
* @ops: ops struct as above
+ * @flags: flags
* @id: id of heap, also indicates priority of this heap when
* allocating. These are specified by platform data and
* MUST be unique
* @name: used for debugging
* @priv: private heap data
+ * @free_list: free list head if deferred free is used
+ * @lock: protects the free list
+ * @waitqueue: queue to wait on from deferred free thread
+ * @task: task struct of deferred free thread
+ * @debug_show: called when heap debug file is read to add any
+ * heap specific debug info to output
*
* Represents a pool of memory from which buffers can be made. In some
* systems the only heap is regular system memory allocated via vmalloc.
@@ -123,16 +153,30 @@
* that are allocated from a specially reserved heap.
*/
struct ion_heap {
- struct rb_node node;
+ struct plist_node node;
struct ion_device *dev;
enum ion_heap_type type;
struct ion_heap_ops *ops;
- int id;
+ unsigned long flags;
+ unsigned int id;
const char *name;
void *priv;
+ struct list_head free_list;
+ struct rt_mutex lock;
+ wait_queue_head_t waitqueue;
+ struct task_struct *task;
+ int (*debug_show)(struct ion_heap *heap, struct seq_file *, void *);
};
/**
+ * ion_buffer_cached - this ion buffer is cached
+ * @buffer: buffer
+ *
+ * indicates whether this ion buffer is cached
+ */
+bool ion_buffer_cached(struct ion_buffer *buffer);
+
+/**
* ion_buffer_fault_user_mappings - fault in user mappings of this buffer
* @buffer: buffer
*
@@ -166,6 +210,17 @@
void ion_device_add_heap(struct ion_device *dev, struct ion_heap *heap);
/**
+ * some helpers for common operations on buffers using the sg_table
+ * and vaddr fields
+ */
+void *ion_heap_map_kernel(struct ion_heap *, struct ion_buffer *);
+void ion_heap_unmap_kernel(struct ion_heap *, struct ion_buffer *);
+int ion_heap_map_user(struct ion_heap *, struct ion_buffer *,
+ struct vm_area_struct *);
+int ion_heap_buffer_zero(struct ion_buffer *buffer);
+
+
+/**
* functions for creating and destroying the built in ion heaps.
* architectures can add their own custom architecture specific
* heaps as appropriate.
@@ -173,7 +228,6 @@
struct ion_heap *ion_heap_create(struct ion_platform_heap *);
void ion_heap_destroy(struct ion_heap *);
-
struct ion_heap *ion_system_heap_create(struct ion_platform_heap *);
void ion_system_heap_destroy(struct ion_heap *);
@@ -183,6 +237,8 @@
struct ion_heap *ion_carveout_heap_create(struct ion_platform_heap *);
void ion_carveout_heap_destroy(struct ion_heap *);
+struct ion_heap *ion_chunk_heap_create(struct ion_platform_heap *);
+void ion_chunk_heap_destroy(struct ion_heap *);
/**
* kernel api to allocate/free from carveout -- used when carveout is
* used to back an architecture specific custom heap
@@ -198,4 +254,52 @@
*/
#define ION_CARVEOUT_ALLOCATE_FAIL -1
+
+/**
+ * functions for creating and destroying a heap pool -- allows you
+ * to keep a pool of pre-allocated memory to use from your heap. Keeping
+ * a pool of memory that is ready for dma, i.e. any cached mappings have
+ * been invalidated from the cache, provides a significant performance
+ * benefit on many systems */
+
+/**
+ * struct ion_page_pool - pagepool struct
+ * @high_count: number of highmem items in the pool
+ * @low_count: number of lowmem items in the pool
+ * @high_items: list of highmem items
+ * @low_items: list of lowmem items
+ * @shrinker: a shrinker for the items
+ * @mutex: lock protecting this struct and especially the count
+ * item list
+ * @alloc: function to be used to allocate pages when the pool
+ * is empty
+ * @free: function to be used to free pages back to the system
+ * when the shrinker fires
+ * @gfp_mask: gfp_mask to use for allocations
+ * @order: order of pages in the pool
+ * @list: plist node for list of pools
+ *
+ * Allows you to keep a pool of pre-allocated pages to use from your heap.
+ * Keeping a pool of pages that is ready for dma, i.e. any cached mappings
+ * have been invalidated from the cache, provides a significant performance
+ * benefit on many systems
+ */
+struct ion_page_pool {
+ int high_count;
+ int low_count;
+ struct list_head high_items;
+ struct list_head low_items;
+ struct mutex mutex;
+ void *(*alloc)(struct ion_page_pool *pool);
+ void (*free)(struct ion_page_pool *pool, struct page *page);
+ gfp_t gfp_mask;
+ unsigned int order;
+ struct plist_node list;
+};
+
+struct ion_page_pool *ion_page_pool_create(gfp_t gfp_mask, unsigned int order);
+void ion_page_pool_destroy(struct ion_page_pool *);
+void *ion_page_pool_alloc(struct ion_page_pool *);
+void ion_page_pool_free(struct ion_page_pool *, struct page *);
+
#endif /* _ION_PRIV_H */
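
As a rough sketch of how a heap is expected to use the page pool API declared above (error handling and per-order bookkeeping omitted; the names and gfp flags are illustrative, not part of this patch):

static struct ion_page_pool *example_pool;

static int example_heap_init(void)
{
	/* keep a pool of order-4 (64K) pages around for reuse */
	example_pool = ion_page_pool_create(GFP_HIGHUSER | __GFP_ZERO, 4);
	return example_pool ? 0 : -ENOMEM;
}

static struct page *example_get_pages(void)
{
	/* recycles a pooled page if one is available, else allocates */
	return ion_page_pool_alloc(example_pool);
}

static void example_put_pages(struct page *page)
{
	/* pages are kept in the pool until the shrinker reclaims them */
	ion_page_pool_free(example_pool, page);
}

static void example_heap_exit(void)
{
	ion_page_pool_destroy(example_pool);
}
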
diff --git a/drivers/gpu/ion/ion_system_heap.c b/drivers/gpu/ion/ion_system_heap.c
index f3f627d..4e9f55c 100644
--- a/drivers/gpu/ion/ion_system_heap.c
+++ b/drivers/gpu/ion/ion_system_heap.c
@@ -22,6 +22,7 @@
#include <linux/ion.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
+#include <linux/seq_file.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/seq_file.h>
@@ -30,34 +31,120 @@
#include <asm/cacheflush.h>
#include <linux/msm_ion.h>
#include <linux/dma-mapping.h>
+#include <trace/events/kmem.h>
static atomic_t system_heap_allocated;
static atomic_t system_contig_heap_allocated;
+static unsigned int high_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO |
+ __GFP_NOWARN | __GFP_NORETRY |
+ __GFP_NO_KSWAPD) & ~__GFP_WAIT;
+static unsigned int low_order_gfp_flags = (GFP_HIGHUSER | __GFP_ZERO |
+ __GFP_NOWARN);
+static const unsigned int orders[] = {8, 4, 0};
+static const int num_orders = ARRAY_SIZE(orders);
+static int order_to_index(unsigned int order)
+{
+ int i;
+ for (i = 0; i < num_orders; i++)
+ if (order == orders[i])
+ return i;
+ BUG();
+ return -1;
+}
+
+static unsigned int order_to_size(int order)
+{
+ return PAGE_SIZE << order;
+}
+
+struct ion_system_heap {
+ struct ion_heap heap;
+ struct ion_page_pool **pools;
+};
+
struct page_info {
struct page *page;
- unsigned long order;
+ unsigned int order;
struct list_head list;
};
-static struct page_info *alloc_largest_available(unsigned long size,
- bool split_pages)
+static struct page *alloc_buffer_page(struct ion_system_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long order)
{
- static unsigned int orders[] = {8, 4, 0};
+ bool cached = ion_buffer_cached(buffer);
+ bool split_pages = ion_buffer_fault_user_mappings(buffer);
+ struct ion_page_pool *pool = heap->pools[order_to_index(order)];
+ struct page *page;
+
+ if (!cached) {
+ page = ion_page_pool_alloc(pool);
+ } else {
+ struct scatterlist sg;
+ gfp_t gfp_flags = low_order_gfp_flags;
+
+ if (order > 4)
+ gfp_flags = high_order_gfp_flags;
+ trace_alloc_pages_sys_start(gfp_flags, order);
+ page = alloc_pages(gfp_flags, order);
+ trace_alloc_pages_sys_end(gfp_flags, order);
+ if (!page) {
+ trace_alloc_pages_sys_fail(gfp_flags, order);
+ return 0;
+ }
+ sg_init_table(&sg, 1);
+ sg_set_page(&sg, page, PAGE_SIZE << order, 0);
+ dma_sync_sg_for_device(NULL, &sg, 1, DMA_BIDIRECTIONAL);
+ }
+ if (!page)
+ return 0;
+
+ if (split_pages)
+ split_page(page, order);
+ return page;
+}
+
+static void free_buffer_page(struct ion_system_heap *heap,
+ struct ion_buffer *buffer, struct page *page,
+ unsigned int order)
+{
+ bool cached = ion_buffer_cached(buffer);
+ bool split_pages = ion_buffer_fault_user_mappings(buffer);
+ int i;
+
+ if (!cached) {
+ struct ion_page_pool *pool = heap->pools[order_to_index(order)];
+ ion_page_pool_free(pool, page);
+ } else if (split_pages) {
+ for (i = 0; i < (1 << order); i++)
+ __free_page(page + i);
+ } else {
+ __free_pages(page, order);
+ }
+}
+
+
+static struct page_info *alloc_largest_available(struct ion_system_heap *heap,
+ struct ion_buffer *buffer,
+ unsigned long size,
+ unsigned int max_order)
+{
struct page *page;
struct page_info *info;
int i;
- for (i = 0; i < ARRAY_SIZE(orders); i++) {
- if (size < (1 << orders[i]) * PAGE_SIZE)
+ for (i = 0; i < num_orders; i++) {
+ if (size < order_to_size(orders[i]))
continue;
- page = alloc_pages(GFP_HIGHUSER | __GFP_ZERO |
- __GFP_NOWARN | __GFP_NORETRY, orders[i]);
+ if (max_order < orders[i])
+ continue;
+
+ page = alloc_buffer_page(heap, buffer, orders[i]);
if (!page)
continue;
- if (split_pages)
- split_page(page, orders[i]);
- info = kmalloc(sizeof(struct page_info *), GFP_KERNEL);
+
+ info = kmalloc(sizeof(struct page_info), GFP_KERNEL);
info->page = page;
info->order = orders[i];
return info;
@@ -70,23 +157,27 @@
unsigned long size, unsigned long align,
unsigned long flags)
{
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
struct sg_table *table;
struct scatterlist *sg;
int ret;
struct list_head pages;
struct page_info *info, *tmp_info;
int i = 0;
- long size_remaining = PAGE_ALIGN(size);
+ unsigned long size_remaining = PAGE_ALIGN(size);
+ unsigned int max_order = orders[0];
bool split_pages = ion_buffer_fault_user_mappings(buffer);
-
INIT_LIST_HEAD(&pages);
while (size_remaining > 0) {
- info = alloc_largest_available(size_remaining, split_pages);
+ info = alloc_largest_available(sys_heap, buffer, size_remaining, max_order);
if (!info)
goto err;
list_add_tail(&info->list, &pages);
size_remaining -= (1 << info->order) * PAGE_SIZE;
+ max_order = info->order;
i++;
}
@@ -106,7 +197,6 @@
sg = table->sgl;
list_for_each_entry_safe(info, tmp_info, &pages, list) {
struct page *page = info->page;
-
if (split_pages) {
for (i = 0; i < (1 << info->order); i++) {
sg_set_page(sg, page + i, PAGE_SIZE, 0);
@@ -121,9 +211,6 @@
kfree(info);
}
- dma_sync_sg_for_device(NULL, table->sgl, table->nents,
- DMA_BIDIRECTIONAL);
-
buffer->priv_virt = table;
atomic_add(size, &system_heap_allocated);
return 0;
@@ -131,12 +218,7 @@
kfree(table);
err:
list_for_each_entry(info, &pages, list) {
- if (split_pages)
- for (i = 0; i < (1 << info->order); i++)
- __free_page(info->page + i);
- else
- __free_pages(info->page, info->order);
-
+ free_buffer_page(sys_heap, buffer, info->page, info->order);
kfree(info);
}
return -ENOMEM;
@@ -144,15 +226,26 @@
void ion_system_heap_free(struct ion_buffer *buffer)
{
- int i;
+ struct ion_heap *heap = buffer->heap;
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
+ struct sg_table *table = buffer->sg_table;
+ bool cached = ion_buffer_cached(buffer);
struct scatterlist *sg;
- struct sg_table *table = buffer->priv_virt;
+ LIST_HEAD(pages);
+ int i;
+
+ /* uncached pages come from the page pools, zero them before returning
+ for security purposes (other allocations are zeroed at alloc time) */
+ if (!cached)
+ ion_heap_buffer_zero(buffer);
for_each_sg(table->sgl, sg, table->nents, i)
- __free_pages(sg_page(sg), get_order(sg_dma_len(sg)));
- if (buffer->sg_table)
- sg_free_table(buffer->sg_table);
- kfree(buffer->sg_table);
+ free_buffer_page(sys_heap, buffer, sg_page(sg),
+ get_order(sg_dma_len(sg)));
+ sg_free_table(table);
+ kfree(table);
atomic_sub(buffer->size, &system_heap_allocated);
}
@@ -168,70 +261,6 @@
return;
}
-void *ion_system_heap_map_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- struct scatterlist *sg;
- int i, j;
- void *vaddr;
- pgprot_t pgprot;
- struct sg_table *table = buffer->priv_virt;
- int npages = PAGE_ALIGN(buffer->size) / PAGE_SIZE;
- struct page **pages = kzalloc(sizeof(struct page *) * npages,
- GFP_KERNEL);
- struct page **tmp = pages;
-
- if (buffer->flags & ION_FLAG_CACHED)
- pgprot = PAGE_KERNEL;
- else
- pgprot = pgprot_writecombine(PAGE_KERNEL);
-
- for_each_sg(table->sgl, sg, table->nents, i) {
- int npages_this_entry = PAGE_ALIGN(sg_dma_len(sg)) / PAGE_SIZE;
- struct page *page = sg_page(sg);
- BUG_ON(i >= npages);
- for (j = 0; j < npages_this_entry; j++) {
- *(tmp++) = page++;
- }
- }
- vaddr = vmap(pages, npages, VM_MAP, pgprot);
- kfree(pages);
-
- return vaddr;
-}
-
-void ion_system_heap_unmap_kernel(struct ion_heap *heap,
- struct ion_buffer *buffer)
-{
- vunmap(buffer->vaddr);
-}
-
-int ion_system_heap_map_user(struct ion_heap *heap, struct ion_buffer *buffer,
- struct vm_area_struct *vma)
-{
- struct sg_table *table = buffer->priv_virt;
- unsigned long addr = vma->vm_start;
- unsigned long offset = vma->vm_pgoff;
- struct scatterlist *sg;
- int i;
-
- if (!ION_IS_CACHED(buffer->flags)) {
- pr_err("%s: cannot map system heap uncached\n", __func__);
- return -EINVAL;
- }
-
- for_each_sg(table->sgl, sg, table->nents, i) {
- if (offset) {
- offset--;
- continue;
- }
- remap_pfn_range(vma, addr, page_to_pfn(sg_page(sg)),
- sg_dma_len(sg), vma->vm_page_prot);
- addr += sg_dma_len(sg);
- }
- return 0;
-}
-
static int ion_system_print_debug(struct ion_heap *heap, struct seq_file *s,
const struct rb_root *unused)
{
@@ -241,32 +270,65 @@
return 0;
}
-static struct ion_heap_ops vmalloc_ops = {
+static struct ion_heap_ops system_heap_ops = {
.allocate = ion_system_heap_allocate,
.free = ion_system_heap_free,
.map_dma = ion_system_heap_map_dma,
.unmap_dma = ion_system_heap_unmap_dma,
- .map_kernel = ion_system_heap_map_kernel,
- .unmap_kernel = ion_system_heap_unmap_kernel,
- .map_user = ion_system_heap_map_user,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
+ .map_user = ion_heap_map_user,
.print_debug = ion_system_print_debug,
};
struct ion_heap *ion_system_heap_create(struct ion_platform_heap *pheap)
{
- struct ion_heap *heap;
+ struct ion_system_heap *heap;
+ int i;
- heap = kzalloc(sizeof(struct ion_heap), GFP_KERNEL);
+ heap = kzalloc(sizeof(struct ion_system_heap), GFP_KERNEL);
if (!heap)
return ERR_PTR(-ENOMEM);
- heap->ops = &vmalloc_ops;
- heap->type = ION_HEAP_TYPE_SYSTEM;
- return heap;
+ heap->heap.ops = &system_heap_ops;
+ heap->heap.type = ION_HEAP_TYPE_SYSTEM;
+ heap->heap.flags = ION_HEAP_FLAG_DEFER_FREE;
+ heap->pools = kzalloc(sizeof(struct ion_page_pool *) * num_orders,
+ GFP_KERNEL);
+ if (!heap->pools)
+ goto err_alloc_pools;
+ for (i = 0; i < num_orders; i++) {
+ struct ion_page_pool *pool;
+ gfp_t gfp_flags = low_order_gfp_flags;
+
+ if (orders[i] > 4)
+ gfp_flags = high_order_gfp_flags;
+ pool = ion_page_pool_create(gfp_flags, orders[i]);
+ if (!pool)
+ goto err_create_pool;
+ heap->pools[i] = pool;
+ }
+ return &heap->heap;
+err_create_pool:
+ for (i = 0; i < num_orders; i++)
+ if (heap->pools[i])
+ ion_page_pool_destroy(heap->pools[i]);
+ kfree(heap->pools);
+err_alloc_pools:
+ kfree(heap);
+ return ERR_PTR(-ENOMEM);
}
void ion_system_heap_destroy(struct ion_heap *heap)
{
- kfree(heap);
+ struct ion_system_heap *sys_heap = container_of(heap,
+ struct ion_system_heap,
+ heap);
+ int i;
+
+ for (i = 0; i < num_orders; i++)
+ ion_page_pool_destroy(sys_heap->pools[i]);
+ kfree(sys_heap->pools);
+ kfree(sys_heap);
}
static int ion_system_contig_heap_allocate(struct ion_heap *heap,
@@ -367,8 +429,8 @@
.phys = ion_system_contig_heap_phys,
.map_dma = ion_system_contig_heap_map_dma,
.unmap_dma = ion_system_contig_heap_unmap_dma,
- .map_kernel = ion_system_contig_heap_map_kernel,
- .unmap_kernel = ion_system_contig_heap_unmap_kernel,
+ .map_kernel = ion_heap_map_kernel,
+ .unmap_kernel = ion_heap_unmap_kernel,
.map_user = ion_system_contig_heap_map_user,
.print_debug = ion_system_contig_print_debug,
};
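
The allocate path above fills a request by repeatedly taking the largest page order that still fits, and after each allocation it lowers max_order so that a larger order that has already failed is never retried for the rest of the buffer. Stripped of the heap specifics, the idea looks roughly like this (illustrative sketch only):

static const unsigned int example_orders[] = {8, 4, 0};

static struct page *example_alloc_largest(unsigned long size_remaining,
					  unsigned int max_order,
					  unsigned int *out_order)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(example_orders); i++) {
		unsigned int order = example_orders[i];
		struct page *page;

		if (size_remaining < (PAGE_SIZE << order))
			continue;	/* too big for what is left */
		if (order > max_order)
			continue;	/* a larger order already failed */

		page = alloc_pages(GFP_KERNEL | __GFP_NOWARN, order);
		if (!page)
			continue;	/* fall back to the next smaller order */

		*out_order = order;
		return page;
	}
	return NULL;
}

The caller subtracts PAGE_SIZE << *out_order from size_remaining and feeds *out_order back in as the new max_order, which is what the loop in ion_system_heap_allocate() does.
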
diff --git a/drivers/gpu/ion/msm/ion_iommu_map.c b/drivers/gpu/ion/msm/ion_iommu_map.c
index ae4ae37..5ce03db 100644
--- a/drivers/gpu/ion/msm/ion_iommu_map.c
+++ b/drivers/gpu/ion/msm/ion_iommu_map.c
@@ -206,8 +206,8 @@
* biggest entry. To take advantage of bigger mapping sizes both the
* VA and PA addresses have to be aligned to the biggest size.
*/
- if (table->sgl->length > align)
- align = table->sgl->length;
+ if (sg_dma_len(table->sgl) > align)
+ align = sg_dma_len(table->sgl);
ret = msm_allocate_iova_address(domain_num, partition_num,
data->mapped_size, align,
diff --git a/drivers/gpu/ion/msm/msm_ion.c b/drivers/gpu/ion/msm/msm_ion.c
index 9259de2..f43d276 100644
--- a/drivers/gpu/ion/msm/msm_ion.c
+++ b/drivers/gpu/ion/msm/msm_ion.c
@@ -128,7 +128,7 @@
struct ion_client *msm_ion_client_create(unsigned int heap_mask,
const char *name)
{
- return ion_client_create(idev, heap_mask, name);
+ return ion_client_create(idev, name);
}
EXPORT_SYMBOL(msm_ion_client_create);
diff --git a/drivers/gpu/msm/a3xx_reg.h b/drivers/gpu/msm/a3xx_reg.h
index c768bb7..5f435f3 100644
--- a/drivers/gpu/msm/a3xx_reg.h
+++ b/drivers/gpu/msm/a3xx_reg.h
@@ -179,6 +179,7 @@
#define A3XX_CP_MEQ_ADDR 0x1DA
#define A3XX_CP_MEQ_DATA 0x1DB
#define A3XX_CP_PERFCOUNTER_SELECT 0x445
+#define A3XX_CP_WFI_PEND_CTR 0x01F5
#define A3XX_CP_HW_FAULT 0x45C
#define A3XX_CP_AHB_FAULT 0x54D
#define A3XX_CP_PROTECT_CTRL 0x45E
diff --git a/drivers/gpu/msm/adreno.c b/drivers/gpu/msm/adreno.c
index 5589ff0..60bab32 100644
--- a/drivers/gpu/msm/adreno.c
+++ b/drivers/gpu/msm/adreno.c
@@ -30,7 +30,6 @@
#include "kgsl_cffdump.h"
#include "kgsl_sharedmem.h"
#include "kgsl_iommu.h"
-#include "kgsl_trace.h"
#include "adreno.h"
#include "adreno_pm4types.h"
@@ -532,6 +531,7 @@
result = adreno_dev->gpudev->irq_handler(adreno_dev);
+ device->pwrctrl.irq_last = 1;
if (device->requested_state == KGSL_STATE_NONE) {
if (device->pwrctrl.nap_allowed == true) {
kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
@@ -607,41 +607,15 @@
return result;
}
-static void adreno_iommu_setstate(struct kgsl_device *device,
- unsigned int context_id,
- uint32_t flags)
+static unsigned int _adreno_iommu_setstate_v0(struct kgsl_device *device,
+ unsigned int *cmds_orig,
+ unsigned int pt_val,
+ int num_iommu_units, uint32_t flags)
{
- unsigned int pt_val, reg_pt_val;
- unsigned int link[230];
- unsigned int *cmds = &link[0];
- int sizedwords = 0;
+ unsigned int reg_pt_val;
+ unsigned int *cmds = cmds_orig;
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
- int num_iommu_units, i;
- struct kgsl_context *context;
- struct adreno_context *adreno_ctx = NULL;
-
- /*
- * If we're idle and we don't need to use the GPU to save context
- * state, use the CPU instead of the GPU to reprogram the
- * iommu for simplicity's sake.
- */
- if (!adreno_dev->drawctxt_active || device->ftbl->isidle(device))
- return kgsl_mmu_device_setstate(&device->mmu, flags);
-
- num_iommu_units = kgsl_mmu_get_num_iommu_units(&device->mmu);
-
- context = idr_find(&device->context_idr, context_id);
- if (context == NULL)
- return;
- adreno_ctx = context->devctxt;
-
- if (kgsl_mmu_enable_clk(&device->mmu,
- KGSL_IOMMU_CONTEXT_USER))
- return;
-
- cmds += __adreno_add_idle_indirect_cmds(cmds,
- device->mmu.setstate_memory.gpuaddr +
- KGSL_IOMMU_SETSTATE_NOP_OFFSET);
+ int i;
if (cpu_is_msm8960())
cmds += adreno_add_change_mh_phys_limit_cmds(cmds, 0xFFFFF000,
@@ -658,8 +632,6 @@
/* Acquire GPU-CPU sync Lock here */
cmds += kgsl_mmu_sync_lock(&device->mmu, cmds);
- pt_val = kgsl_mmu_get_pt_base_addr(&device->mmu,
- device->mmu.hwpagetable);
if (flags & KGSL_MMUFLAGS_PTUPDATE) {
/*
 * We need to perform the following operations for all
@@ -736,25 +708,169 @@
cmds += adreno_add_idle_cmds(adreno_dev, cmds);
- sizedwords += (cmds - &link[0]);
- if (sizedwords) {
- /* invalidate all base pointers */
- *cmds++ = cp_type3_packet(CP_INVALIDATE_STATE, 1);
- *cmds++ = 0x7fff;
- sizedwords += 2;
- /* This returns the per context timestamp but we need to
- * use the global timestamp for iommu clock disablement */
- adreno_ringbuffer_issuecmds(device, adreno_ctx,
- KGSL_CMD_FLAGS_PMODE,
- &link[0], sizedwords);
- kgsl_mmu_disable_clk_on_ts(&device->mmu,
- adreno_dev->ringbuffer.timestamp[KGSL_MEMSTORE_GLOBAL], true);
+ return cmds - cmds_orig;
+}
+
+static unsigned int _adreno_iommu_setstate_v1(struct kgsl_device *device,
+ unsigned int *cmds_orig,
+ unsigned int pt_val,
+ int num_iommu_units, uint32_t flags)
+{
+ unsigned int reg_pt_val;
+ unsigned int *cmds = cmds_orig;
+ int i;
+ unsigned int ttbr0, tlbiall, tlbstatus, tlbsync, mmu_ctrl;
+
+ for (i = 0; i < num_iommu_units; i++) {
+ reg_pt_val = (pt_val + kgsl_mmu_get_pt_lsb(&device->mmu,
+ i, KGSL_IOMMU_CONTEXT_USER));
+ if (flags & KGSL_MMUFLAGS_PTUPDATE) {
+ mmu_ctrl = kgsl_mmu_get_reg_ahbaddr(
+ &device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL) >> 2;
+
+ ttbr0 = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TTBR0) >> 2;
+
+ if (kgsl_mmu_hw_halt_supported(&device->mmu, i)) {
+ *cmds++ = cp_type3_packet(CP_WAIT_FOR_IDLE, 1);
+ *cmds++ = 0;
+ /*
+ * glue commands together until next
+ * WAIT_FOR_ME
+ */
+ cmds += adreno_wait_reg_eq(cmds,
+ A3XX_CP_WFI_PEND_CTR, 1, 0xFFFFFFFF, 0xF);
+
+ /* set the iommu lock bit */
+ *cmds++ = cp_type3_packet(CP_REG_RMW, 3);
+ *cmds++ = mmu_ctrl;
+ /* AND to unmask the lock bit */
+ *cmds++ =
+ ~(KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT);
+ /* OR to set the IOMMU lock bit */
+ *cmds++ =
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT;
+ /* wait for smmu to lock */
+ cmds += adreno_wait_reg_eq(cmds, mmu_ctrl,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE, 0xF);
+ }
+ /* set ttbr0 */
+ *cmds++ = cp_type0_packet(ttbr0, 1);
+ *cmds++ = reg_pt_val;
+ if (kgsl_mmu_hw_halt_supported(&device->mmu, i)) {
+ /* unlock the IOMMU lock */
+ *cmds++ = cp_type3_packet(CP_REG_RMW, 3);
+ *cmds++ = mmu_ctrl;
+ /* AND to unmask the lock bit */
+ *cmds++ =
+ ~(KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT);
+ /* OR with 0 so lock bit is unset */
+ *cmds++ = 0;
+ /* release all commands with wait_for_me */
+ *cmds++ = cp_type3_packet(CP_WAIT_FOR_ME, 1);
+ *cmds++ = 0;
+ }
+ }
+ if (flags & KGSL_MMUFLAGS_TLBFLUSH) {
+ tlbiall = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBIALL) >> 2;
+ *cmds++ = cp_type0_packet(tlbiall, 1);
+ *cmds++ = 1;
+
+ tlbsync = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBSYNC) >> 2;
+ *cmds++ = cp_type0_packet(tlbsync, 1);
+ *cmds++ = 0;
+
+ tlbstatus = kgsl_mmu_get_reg_ahbaddr(&device->mmu, i,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TLBSTATUS) >> 2;
+ cmds += adreno_wait_reg_eq(cmds, tlbstatus, 0,
+ KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE, 0xF);
+ }
}
+ return cmds - cmds_orig;
+}
+
+static void adreno_iommu_setstate(struct kgsl_device *device,
+ unsigned int context_id,
+ uint32_t flags)
+{
+ unsigned int pt_val;
+ unsigned int link[230];
+ unsigned int *cmds = &link[0];
+ int sizedwords = 0;
+ struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
+ int num_iommu_units;
+ struct kgsl_context *context;
+ struct adreno_context *adreno_ctx = NULL;
+
+ if (!adreno_dev->drawctxt_active) {
+ kgsl_mmu_device_setstate(&device->mmu, flags);
+ return;
+ }
+ num_iommu_units = kgsl_mmu_get_num_iommu_units(&device->mmu);
+
+ context = idr_find(&device->context_idr, context_id);
+ if (context == NULL)
+ return;
+
+ kgsl_context_get(context);
+
+ adreno_ctx = context->devctxt;
+
+ if (kgsl_mmu_enable_clk(&device->mmu,
+ KGSL_IOMMU_CONTEXT_USER))
+ return;
+
+ pt_val = kgsl_mmu_get_pt_base_addr(&device->mmu,
+ device->mmu.hwpagetable);
+
+ cmds += __adreno_add_idle_indirect_cmds(cmds,
+ device->mmu.setstate_memory.gpuaddr +
+ KGSL_IOMMU_SETSTATE_NOP_OFFSET);
+
+ if (msm_soc_version_supports_iommu_v0())
+ cmds += _adreno_iommu_setstate_v0(device, cmds, pt_val,
+ num_iommu_units, flags);
+ else
+ cmds += _adreno_iommu_setstate_v1(device, cmds, pt_val,
+ num_iommu_units, flags);
+
+ sizedwords += (cmds - &link[0]);
+ if (sizedwords == 0) {
+ KGSL_DRV_ERR(device, "no commands generated\n");
+ BUG();
+ }
+ /* invalidate all base pointers */
+ *cmds++ = cp_type3_packet(CP_INVALIDATE_STATE, 1);
+ *cmds++ = 0x7fff;
+ sizedwords += 2;
if (sizedwords > (ARRAY_SIZE(link))) {
KGSL_DRV_ERR(device, "Temp command buffer overflow\n");
BUG();
}
+ /*
+ * This returns the per context timestamp but we need to
+ * use the global timestamp for iommu clock disablement
+ */
+ adreno_ringbuffer_issuecmds(device, adreno_ctx, KGSL_CMD_FLAGS_PMODE,
+ &link[0], sizedwords);
+ /* timestamp based clock gating is currently unstable on iommuv1 */
+ if (msm_soc_version_supports_iommu_v0()) {
+ struct adreno_ringbuffer *rb = &adreno_dev->ringbuffer;
+ kgsl_mmu_disable_clk_on_ts(&device->mmu,
+ rb->timestamp[KGSL_MEMSTORE_GLOBAL], true);
+ }
+
+ kgsl_context_put(context);
}
static void adreno_gpummu_setstate(struct kgsl_device *device,
@@ -1284,6 +1400,8 @@
data->physstart = reg_val[0];
data->physend = data->physstart + reg_val[1] - 1;
+ data->iommu_halt_enable = of_property_read_bool(node,
+ "qcom,iommu-enable-halt");
data->iommu_ctx_count = 0;
@@ -2187,6 +2305,7 @@
context->wait_on_invalid_ts = false;
if (!(adreno_context->flags & CTXT_FLAGS_PER_CONTEXT_TS)) {
+ ft_data->status = 1;
KGSL_FT_ERR(device, "Fault tolerance not supported\n");
goto play_good_cmds;
}
@@ -2221,6 +2340,7 @@
/* If long IB detected do not attempt replay of bad cmds */
if (long_ib) {
+ ft_data->status = 1;
_adreno_debug_ft_info(device, ft_data);
goto play_good_cmds;
}
@@ -2782,7 +2902,7 @@
if (device->state == KGSL_STATE_ACTIVE) {
/* Is the ring buffer is empty? */
GSL_RB_GET_READPTR(rb, &rb->rptr);
- if (!device->active_cnt && (rb->rptr == rb->wptr)) {
+ if (rb->rptr == rb->wptr) {
/*
* Are there interrupts pending? If so then pretend we
 * are not idle - this avoids the possibility that we go
@@ -2952,7 +3072,7 @@
if (!in_interrupt())
kgsl_pre_hwaccess(device);
- trace_kgsl_regwrite(device, offsetwords, value);
+ kgsl_trace_regwrite(device, offsetwords, value);
kgsl_cffdump_regwrite(device->id, offsetwords << 2, value);
reg = (unsigned int *)(device->reg_virt + (offsetwords << 2));
@@ -3040,7 +3160,7 @@
if (context && device->state != KGSL_STATE_SLUMBER) {
adreno_ringbuffer_issuecmds(device, context->devctxt,
- KGSL_CMD_FLAGS_NONE, NULL, 0);
+ KGSL_CMD_FLAGS_GET_INT, NULL, 0);
}
}
@@ -3596,15 +3716,20 @@
{
struct adreno_device *adreno_dev = ADRENO_DEVICE(device);
struct kgsl_pwrctrl *pwr = &device->pwrctrl;
- unsigned int cycles;
+ unsigned int cycles = 0;
- /* Get the busy cycles counted since the counter was last reset */
- /* Calling this function also resets and restarts the counter */
+ /*
+ * Get the busy cycles counted since the counter was last reset.
+ * If we're not currently active, there shouldn't have been
+ * any cycles since the last time this function was called.
+ */
+ if (device->state == KGSL_STATE_ACTIVE)
+ cycles = adreno_dev->gpudev->busy_cycles(adreno_dev);
- cycles = adreno_dev->gpudev->busy_cycles(adreno_dev);
-
- /* In order to calculate idle you have to have run the algorithm *
- * at least once to get a start time. */
+ /*
+ * In order to calculate idle you have to have run the algorithm
+ * at least once to get a start time.
+ */
if (pwr->time != 0) {
s64 tmp = ktime_to_us(ktime_get());
stats->total_time = tmp - pwr->time;
diff --git a/drivers/gpu/msm/adreno.h b/drivers/gpu/msm/adreno.h
index 90d6027..fa892b9 100644
--- a/drivers/gpu/msm/adreno.h
+++ b/drivers/gpu/msm/adreno.h
@@ -34,6 +34,7 @@
#define KGSL_CMD_FLAGS_NONE 0x00000000
#define KGSL_CMD_FLAGS_PMODE 0x00000001
#define KGSL_CMD_FLAGS_INTERNAL_ISSUE 0x00000002
+#define KGSL_CMD_FLAGS_GET_INT 0x00000004
#define KGSL_CMD_FLAGS_EOF 0x00000100
/* Command identifiers */
@@ -506,4 +507,25 @@
return cmds - start;
}
+/*
+ * adreno_wait_reg_eq() - Add a CP_WAIT_REG_EQ command
+ * @cmds: Pointer to memory where commands are to be added
+ * @addr: Register address to poll for
+ * @val: Value to poll for
+ * @mask: The value against which register value is masked
+ * @interval: wait interval
+ */
+static inline int adreno_wait_reg_eq(unsigned int *cmds, unsigned int addr,
+ unsigned int val, unsigned int mask,
+ unsigned int interval)
+{
+ unsigned int *start = cmds;
+ *cmds++ = cp_type3_packet(CP_WAIT_REG_EQ, 4);
+ *cmds++ = addr;
+ *cmds++ = val;
+ *cmds++ = mask;
+ *cmds++ = interval;
+ return cmds - start;
+}
+
#endif /*__ADRENO_H */
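
adreno_wait_reg_eq() only emits the packet into a caller-supplied buffer and returns the number of dwords written; the caller advances its pointer by that count and keeps building. A hedged sketch of how the iommu setstate path above chains it with other packets (the function name and buffer size here are illustrative):

static unsigned int example_build_wait(unsigned int *link)
{
	unsigned int *cmds = link;

	/* drain the pipeline first */
	*cmds++ = cp_type3_packet(CP_WAIT_FOR_IDLE, 1);
	*cmds++ = 0;

	/* then poll A3XX_CP_WFI_PEND_CTR until it reads 1, rechecking
	 * every 0xF cycles */
	cmds += adreno_wait_reg_eq(cmds, A3XX_CP_WFI_PEND_CTR, 1,
				   0xFFFFFFFF, 0xF);

	/* dword count to hand to the ringbuffer */
	return cmds - link;
}
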
diff --git a/drivers/gpu/msm/adreno_a3xx.c b/drivers/gpu/msm/adreno_a3xx.c
index be5c786..a4b3121 100644
--- a/drivers/gpu/msm/adreno_a3xx.c
+++ b/drivers/gpu/msm/adreno_a3xx.c
@@ -2758,7 +2758,7 @@
{
unsigned int in, out, bit, sel;
- if (countable > 0x7f)
+ if (counter > 1 || countable > 0x7f)
return;
adreno_regread(device, A3XX_VBIF_PERF_CNT_EN, &in);
@@ -2767,7 +2767,7 @@
if (counter == 0) {
bit = VBIF_PERF_CNT_0;
sel = (sel & ~VBIF_PERF_CNT_0_SEL_MASK) | countable;
- } else if (counter == 1) {
+ } else {
bit = VBIF_PERF_CNT_1;
sel = (sel & ~VBIF_PERF_CNT_1_SEL_MASK)
| (countable << VBIF_PERF_CNT_1_SEL);
@@ -2821,10 +2821,10 @@
unsigned int val = 0;
struct a3xx_perfcounter_register *reg;
- if (group > ARRAY_SIZE(a3xx_perfcounter_reglist))
+ if (group >= ARRAY_SIZE(a3xx_perfcounter_reglist))
return;
- if (counter > a3xx_perfcounter_reglist[group].count)
+ if (counter >= a3xx_perfcounter_reglist[group].count)
return;
/* Special cases */
@@ -2858,7 +2858,10 @@
unsigned int lo = 0, hi = 0;
unsigned int val;
- if (group > ARRAY_SIZE(a3xx_perfcounter_reglist))
+ if (group >= ARRAY_SIZE(a3xx_perfcounter_reglist))
+ return 0;
+
+ if (counter >= a3xx_perfcounter_reglist[group].count)
return 0;
reg = &(a3xx_perfcounter_reglist[group].regs[counter]);
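
The perfcounter fixes above tighten the bounds checks from '>' to '>=': a table of N entries has valid indices 0..N-1, so an index equal to ARRAY_SIZE() (or to the per-group count) must be rejected as well. In general form (hypothetical table and lookup, for illustration only):

struct example_group {
	unsigned int count;
	const unsigned int *regs;
};

static const unsigned int example_group0_regs[] = { 0x444, 0x445 };
static const struct example_group example_table[] = {
	{ ARRAY_SIZE(example_group0_regs), example_group0_regs },
};

static int example_lookup(unsigned int group, unsigned int counter)
{
	/* '>' would wrongly accept group == ARRAY_SIZE(example_table) */
	if (group >= ARRAY_SIZE(example_table))
		return -EINVAL;
	if (counter >= example_table[group].count)
		return -EINVAL;
	return example_table[group].regs[counter];
}
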
diff --git a/drivers/gpu/msm/adreno_debugfs.c b/drivers/gpu/msm/adreno_debugfs.c
index ef599e9..980ff13 100644
--- a/drivers/gpu/msm/adreno_debugfs.c
+++ b/drivers/gpu/msm/adreno_debugfs.c
@@ -93,4 +93,7 @@
adreno_dev->ft_pf_policy = KGSL_FT_PAGEFAULT_DEFAULT_POLICY;
debugfs_create_u32("ft_pagefault_policy", 0644, device->d_debugfs,
&adreno_dev->ft_pf_policy);
+
+ debugfs_create_u32("active_cnt", 0444, device->d_debugfs,
+ &device->active_cnt);
}
diff --git a/drivers/gpu/msm/adreno_postmortem.c b/drivers/gpu/msm/adreno_postmortem.c
index 5396196..2249907 100644
--- a/drivers/gpu/msm/adreno_postmortem.c
+++ b/drivers/gpu/msm/adreno_postmortem.c
@@ -69,6 +69,8 @@
{CP_SET_PROTECTED_MODE, "ST_PRT_M"},
{CP_SET_SHADER_BASES, "ST_SHD_B"},
{CP_WAIT_FOR_IDLE, "WAIT4IDL"},
+ {CP_WAIT_FOR_ME, "WAIT4ME"},
+ {CP_WAIT_REG_EQ, "WAITRGEQ"},
};
static const struct pm_id_name pm3_nop_values[] = {
@@ -854,7 +856,12 @@
(num_iommu_units && this_cmd ==
kgsl_mmu_get_reg_gpuaddr(&device->mmu, 0,
KGSL_IOMMU_CONTEXT_USER,
- KGSL_IOMMU_CTX_TTBR0))) {
+ KGSL_IOMMU_CTX_TTBR0)) ||
+ (num_iommu_units && this_cmd == cp_type0_packet(
+ kgsl_mmu_get_reg_ahbaddr(
+ &device->mmu, 0,
+ KGSL_IOMMU_CONTEXT_USER,
+ KGSL_IOMMU_CTX_TTBR0), 1))) {
KGSL_LOG_DUMP(device, "Current pagetable: %x\t"
"pagetable base: %x\n",
kgsl_mmu_get_ptname_from_ptbase(&device->mmu,
diff --git a/drivers/gpu/msm/adreno_ringbuffer.c b/drivers/gpu/msm/adreno_ringbuffer.c
index a4bb4fa..61ea916 100644
--- a/drivers/gpu/msm/adreno_ringbuffer.c
+++ b/drivers/gpu/msm/adreno_ringbuffer.c
@@ -18,7 +18,6 @@
#include "kgsl.h"
#include "kgsl_sharedmem.h"
#include "kgsl_cffdump.h"
-#include "kgsl_trace.h"
#include "adreno.h"
#include "adreno_pm4types.h"
@@ -544,13 +543,15 @@
/*
* if the context was not created with per context timestamp
* support, we must use the global timestamp since issueibcmds
- * will be returning that one.
+ * will be returning that one. Also use the global timestamp
+ * for internal submissions.
*/
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS)
+ if ((context && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS)) &&
+ !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE))
context_id = context->id;
- if ((context && context->flags & CTXT_FLAGS_USER_GENERATED_TS) &&
- (!(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE))) {
+ if ((context && (context->flags & CTXT_FLAGS_USER_GENERATED_TS)) &&
+ !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
if (timestamp_cmp(rb->timestamp[context_id],
timestamp) >= 0) {
KGSL_DRV_ERR(rb->device,
@@ -574,6 +575,11 @@
/* Add CP_COND_EXEC commands to generate CP_INTERRUPT */
total_sizedwords += context ? 13 : 0;
+ if ((context) && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS) &&
+ (flags & (KGSL_CMD_FLAGS_INTERNAL_ISSUE |
+ KGSL_CMD_FLAGS_GET_INT)))
+ total_sizedwords += 2;
+
if (adreno_is_a3xx(adreno_dev))
total_sizedwords += 7;
@@ -584,11 +590,9 @@
total_sizedwords += 3; /* sop timestamp */
total_sizedwords += 4; /* eop timestamp */
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS &&
- !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
+ if (KGSL_MEMSTORE_GLOBAL != context_id)
total_sizedwords += 3; /* global timestamp without cache
* flush for non-zero context */
- }
if (adreno_is_a20x(adreno_dev))
total_sizedwords += 2; /* CACHE_FLUSH */
@@ -619,12 +623,12 @@
/* always increment the global timestamp. once. */
rb->timestamp[KGSL_MEMSTORE_GLOBAL]++;
- /* Do not update context's timestamp for internal submissions */
- if (context && !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
- if (context_id == KGSL_MEMSTORE_GLOBAL)
- rb->timestamp[context->id] =
- rb->timestamp[KGSL_MEMSTORE_GLOBAL];
- else if (context->flags & CTXT_FLAGS_USER_GENERATED_TS)
+ /*
+ * If global timestamp then we are not using per context ts for
+ * this submission
+ */
+ if (context_id != KGSL_MEMSTORE_GLOBAL) {
+ if (context->flags & CTXT_FLAGS_USER_GENERATED_TS)
rb->timestamp[context_id] = timestamp;
else
rb->timestamp[context_id]++;
@@ -695,9 +699,7 @@
KGSL_MEMSTORE_OFFSET(context_id, eoptimestamp)));
GSL_RB_WRITE(ringcmds, rcmd_gpu, timestamp);
- if (context && context->flags & CTXT_FLAGS_PER_CONTEXT_TS
- && !(flags & KGSL_CMD_FLAGS_INTERNAL_ISSUE)) {
-
+ if (KGSL_MEMSTORE_GLOBAL != context_id) {
GSL_RB_WRITE(ringcmds, rcmd_gpu,
cp_type3_packet(CP_MEM_WRITE, 2));
GSL_RB_WRITE(ringcmds, rcmd_gpu, (gpuaddr +
@@ -749,6 +751,19 @@
GSL_RB_WRITE(ringcmds, rcmd_gpu, CP_INT_CNTL__RB_INT_MASK);
}
+ /*
+ * If per context timestamps are enabled and any of the kgsl
+ * internal commands want an INT to be generated, trigger the INT
+ */
+ if ((context) && (context->flags & CTXT_FLAGS_PER_CONTEXT_TS) &&
+ (flags & (KGSL_CMD_FLAGS_INTERNAL_ISSUE |
+ KGSL_CMD_FLAGS_GET_INT))) {
+ GSL_RB_WRITE(ringcmds, rcmd_gpu,
+ cp_type3_packet(CP_INTERRUPT, 1));
+ GSL_RB_WRITE(ringcmds, rcmd_gpu,
+ CP_INT_CNTL__RB_INT_MASK);
+ }
+
if (adreno_is_a3xx(adreno_dev)) {
/* Dummy set-constant to trigger context rollover */
GSL_RB_WRITE(ringcmds, rcmd_gpu,
@@ -1107,8 +1122,10 @@
ret = 0;
done:
- trace_kgsl_issueibcmds(device, context->id, ibdesc, numibs,
- *timestamp, flags, ret, drawctxt->type);
+ device->pwrctrl.irq_last = 0;
+ kgsl_trace_issueibcmds(device, context ? context->id : 0, ibdesc,
+ numibs, *timestamp, flags, ret,
+ drawctxt ? drawctxt->type : 0);
kfree(link);
return ret;
diff --git a/drivers/gpu/msm/kgsl.c b/drivers/gpu/msm/kgsl.c
index 53ef392..5275267 100644
--- a/drivers/gpu/msm/kgsl.c
+++ b/drivers/gpu/msm/kgsl.c
@@ -53,6 +53,46 @@
static struct ion_client *kgsl_ion_client;
+/**
+ * kgsl_trace_issueibcmds() - Call trace_issueibcmds by proxy
+ * device: KGSL device
+ * id: ID of the context submitting the command
+ * ibdesc: Pointer to the list of IB descriptors
+ * numibs: Number of IBs in the list
+ * timestamp: Timestamp assigned to the command batch
+ * flags: Flags sent by the user
+ * result: Result of the submission attempt
+ * type: Type of context issuing the command
+ *
+ * Wrap the issueibcmds ftrace hook into a function that can be called from the
+ * GPU specific modules.
+ */
+void kgsl_trace_issueibcmds(struct kgsl_device *device, int id,
+ struct kgsl_ibdesc *ibdesc, int numibs,
+ unsigned int timestamp, unsigned int flags,
+ int result, unsigned int type)
+{
+ trace_kgsl_issueibcmds(device, id, ibdesc, numibs,
+ timestamp, flags, result, type);
+}
+EXPORT_SYMBOL(kgsl_trace_issueibcmds);
+
+/**
+ * kgsl_trace_regwrite - call regwrite ftrace function by proxy
+ * device: KGSL device
+ * offset: dword offset of the register being written
+ * value: Value of the register being written
+ *
+ * Wrap the regwrite ftrace hook into a function that can be called from the
+ * GPU specific modules.
+ */
+void kgsl_trace_regwrite(struct kgsl_device *device, unsigned int offset,
+ unsigned int value)
+{
+ trace_kgsl_regwrite(device, offset, value);
+}
+EXPORT_SYMBOL(kgsl_trace_regwrite);
+
int kgsl_memfree_hist_init(void)
{
void *base;
@@ -413,27 +453,6 @@
kfree(context);
}
-static void kgsl_check_idle_locked(struct kgsl_device *device)
-{
- if (device->pwrctrl.nap_allowed == true &&
- device->state == KGSL_STATE_ACTIVE &&
- device->requested_state == KGSL_STATE_NONE) {
- kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
- kgsl_pwrscale_idle(device);
- if (kgsl_pwrctrl_sleep(device) != 0)
- mod_timer(&device->idle_timer,
- jiffies +
- device->pwrctrl.interval_timeout);
- }
-}
-
-static void kgsl_check_idle(struct kgsl_device *device)
-{
- mutex_lock(&device->mutex);
- kgsl_check_idle_locked(device);
- mutex_unlock(&device->mutex);
-}
-
struct kgsl_device *kgsl_get_device(int dev_idx)
{
int i;
@@ -496,13 +515,12 @@
policy_saved = device->pwrscale.policy;
device->pwrscale.policy = NULL;
kgsl_pwrctrl_request_state(device, KGSL_STATE_SUSPEND);
- /* Make sure no user process is waiting for a timestamp *
- * before supending */
- if (device->active_cnt != 0) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->suspend_gate);
- mutex_lock(&device->mutex);
- }
+ /*
+ * Make sure no user process is waiting for a timestamp
+ * before suspending.
+ */
+ kgsl_active_count_wait(device);
+
/* Don't let the timer wake us during suspended sleep. */
del_timer_sync(&device->idle_timer);
switch (device->state) {
@@ -513,6 +531,8 @@
device->ftbl->idle(device);
case KGSL_STATE_NAP:
case KGSL_STATE_SLEEP:
+ /* make sure power is on to stop the device */
+ kgsl_pwrctrl_enable(device);
/* Get the completion ready to be waited upon. */
INIT_COMPLETION(device->hwaccess_gate);
device->ftbl->suspend_context(device);
@@ -632,9 +652,16 @@
device->pwrctrl.restore_slumber = false;
if (device->pwrscale.policy == NULL)
kgsl_pwrctrl_pwrlevel_change(device, KGSL_PWRLEVEL_TURBO);
- kgsl_pwrctrl_wake(device);
+ if (kgsl_pwrctrl_wake(device) != 0)
+ return;
+ /*
+ * We don't have a way to go directly from
+ * a deeper sleep state to NAP, which is
+ * the desired state here.
+ */
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ kgsl_pwrctrl_sleep(device);
mutex_unlock(&device->mutex);
- kgsl_check_idle(device);
KGSL_PWR_WARN(device, "late resume end\n");
}
EXPORT_SYMBOL(kgsl_late_resume_driver);
@@ -745,7 +772,7 @@
filep->private_data = NULL;
mutex_lock(&device->mutex);
- kgsl_check_suspended(device);
+ kgsl_active_count_get(device);
while (1) {
context = idr_get_next(&device->context_idr, &next);
@@ -767,10 +794,17 @@
device->open_count--;
if (device->open_count == 0) {
+ BUG_ON(device->active_cnt > 1);
result = device->ftbl->stop(device);
kgsl_pwrctrl_set_state(device, KGSL_STATE_INIT);
+ /*
+ * active_cnt special case: we just stopped the device,
+ * so no need to use kgsl_active_count_put()
+ */
+ device->active_cnt--;
+ } else {
+ kgsl_active_count_put(device);
}
-
mutex_unlock(&device->mutex);
kfree(dev_priv);
@@ -816,9 +850,14 @@
filep->private_data = dev_priv;
mutex_lock(&device->mutex);
- kgsl_check_suspended(device);
if (device->open_count == 0) {
+ /*
+ * active_cnt special case: we are starting up for the first
+ * time, so use this sequence instead of the kgsl_pwrctrl_wake()
+ * which will be called by kgsl_active_count_get().
+ */
+ device->active_cnt++;
kgsl_sharedmem_set(&device->memstore, 0, 0,
device->memstore.size);
@@ -831,6 +870,7 @@
goto err_freedevpriv;
kgsl_pwrctrl_set_state(device, KGSL_STATE_ACTIVE);
+ kgsl_active_count_put(device);
}
device->open_count++;
mutex_unlock(&device->mutex);
@@ -856,10 +896,15 @@
mutex_lock(&device->mutex);
device->open_count--;
if (device->open_count == 0) {
+ /* make sure power is on to stop the device */
+ kgsl_pwrctrl_enable(device);
result = device->ftbl->stop(device);
kgsl_pwrctrl_set_state(device, KGSL_STATE_INIT);
}
err_freedevpriv:
+ /* only the first open takes an active count */
+ if (device->open_count == 0)
+ device->active_cnt--;
mutex_unlock(&device->mutex);
filep->private_data = NULL;
kfree(dev_priv);
@@ -1073,10 +1118,6 @@
struct kgsl_device *device = dev_priv->device;
unsigned int context_id = context ? context->id : KGSL_MEMSTORE_GLOBAL;
- /* Set the active count so that suspend doesn't do the wrong thing */
-
- device->active_cnt++;
-
trace_kgsl_waittimestamp_entry(device, context_id,
kgsl_readtimestamp(device, context,
KGSL_TIMESTAMP_RETIRED),
@@ -1090,9 +1131,6 @@
KGSL_TIMESTAMP_RETIRED),
result);
- /* Fire off any pending suspend operations that are in flight */
- kgsl_active_count_put(dev_priv->device);
-
return result;
}
@@ -1887,7 +1925,6 @@
trace_kgsl_mem_map(entry, param->fd);
- kgsl_check_idle(dev_priv->device);
return result;
error_unmap:
@@ -1907,7 +1944,6 @@
}
error:
kfree(entry);
- kgsl_check_idle(dev_priv->device);
return result;
}
@@ -2035,7 +2071,6 @@
entry->memtype = KGSL_MEM_ENTRY_KERNEL;
- kgsl_check_idle(dev_priv->device);
*ret_entry = entry;
return result;
err:
@@ -2370,7 +2405,7 @@
kgsl_ioctl_cff_user_event, 0),
KGSL_IOCTL_FUNC(IOCTL_KGSL_TIMESTAMP_EVENT,
kgsl_ioctl_timestamp_event,
- KGSL_IOCTL_LOCK),
+ KGSL_IOCTL_LOCK | KGSL_IOCTL_WAKE),
KGSL_IOCTL_FUNC(IOCTL_KGSL_SETPROPERTY,
kgsl_ioctl_device_setproperty,
KGSL_IOCTL_LOCK | KGSL_IOCTL_WAKE),
@@ -2462,14 +2497,19 @@
if (lock) {
mutex_lock(&dev_priv->device->mutex);
- if (use_hw)
- kgsl_check_suspended(dev_priv->device);
+ if (use_hw) {
+ ret = kgsl_active_count_get(dev_priv->device);
+ if (ret < 0)
+ goto unlock;
+ }
}
ret = func(dev_priv, cmd, uptr);
+unlock:
if (lock) {
- kgsl_check_idle_locked(dev_priv->device);
+ if (use_hw)
+ kgsl_active_count_put(dev_priv->device);
mutex_unlock(&dev_priv->device->mutex);
}
@@ -2600,12 +2640,18 @@
return ret;
}
+static inline bool
+mmap_range_valid(unsigned long addr, unsigned long len)
+{
+ return (addr + len) > addr && (addr + len) < TASK_SIZE;
+}
+
static unsigned long
kgsl_get_unmapped_area(struct file *file, unsigned long addr,
unsigned long len, unsigned long pgoff,
unsigned long flags)
{
- unsigned long ret = 0;
+ unsigned long ret = 0, orig_len = len;
unsigned long vma_offset = pgoff << PAGE_SHIFT;
struct kgsl_device_private *dev_priv = file->private_data;
struct kgsl_process_private *private = dev_priv->process_priv;
@@ -2650,10 +2696,26 @@
if (align)
len += 1 << align;
+
+ if (!mmap_range_valid(addr, len))
+ addr = 0;
do {
ret = get_unmapped_area(NULL, addr, len, pgoff, flags);
- if (IS_ERR_VALUE(ret))
+ if (IS_ERR_VALUE(ret)) {
+ /*
+ * If we are really fragmented, there may not be room
+ * for the alignment padding, so try again without it.
+ */
+ if (!retry && (ret == (unsigned long)-ENOMEM)
+ && (align > PAGE_SHIFT)) {
+ align = PAGE_SHIFT;
+ addr = 0;
+ len = orig_len;
+ retry = 1;
+ continue;
+ }
break;
+ }
if (align)
ret = ALIGN(ret, (1 << align));
@@ -2675,13 +2737,13 @@
* the whole address space at least once by wrapping
* back around once.
*/
- if (!retry && (addr + len >= TASK_SIZE)) {
+ if (!retry && !mmap_range_valid(addr, len)) {
addr = 0;
retry = 1;
} else {
ret = -EBUSY;
}
- } while (addr + len < TASK_SIZE);
+ } while (mmap_range_valid(addr, len));
if (IS_ERR_VALUE(ret))
KGSL_MEM_INFO(device,
@@ -3032,11 +3094,7 @@
/* For a manual dump, make sure that the system is idle */
if (manual) {
- if (device->active_cnt != 0) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->suspend_gate);
- mutex_lock(&device->mutex);
- }
+ kgsl_active_count_wait(device);
if (device->state == KGSL_STATE_ACTIVE)
kgsl_idle(device);
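
With kgsl_check_suspended()/kgsl_check_idle() gone, every path that touches the hardware now brackets the access with the active count, as the ioctl dispatcher above does. The pattern, reduced to a sketch (the wrapper function is hypothetical; the helpers are the ones introduced by this patch):

static int example_hw_ioctl(struct kgsl_device_private *dev_priv)
{
	int ret;

	mutex_lock(&dev_priv->device->mutex);

	/* wakes the device if necessary and holds off suspend */
	ret = kgsl_active_count_get(dev_priv->device);
	if (ret < 0)
		goto unlock;

	/* ... register writes / command submission go here ... */

	kgsl_active_count_put(dev_priv->device);
unlock:
	mutex_unlock(&dev_priv->device->mutex);
	return ret;
}
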
diff --git a/drivers/gpu/msm/kgsl.h b/drivers/gpu/msm/kgsl.h
index c568db5..abe9100 100644
--- a/drivers/gpu/msm/kgsl.h
+++ b/drivers/gpu/msm/kgsl.h
@@ -226,6 +226,14 @@
void kgsl_early_suspend_driver(struct early_suspend *h);
void kgsl_late_resume_driver(struct early_suspend *h);
+void kgsl_trace_regwrite(struct kgsl_device *device, unsigned int offset,
+ unsigned int value);
+
+void kgsl_trace_issueibcmds(struct kgsl_device *device, int id,
+ struct kgsl_ibdesc *ibdesc, int numibs,
+ unsigned int timestamp, unsigned int flags,
+ int result, unsigned int type);
+
#ifdef CONFIG_MSM_KGSL_DRM
extern int kgsl_drm_init(struct platform_device *dev);
extern void kgsl_drm_exit(void);
diff --git a/drivers/gpu/msm/kgsl_device.h b/drivers/gpu/msm/kgsl_device.h
index 0d11660..ac82820 100644
--- a/drivers/gpu/msm/kgsl_device.h
+++ b/drivers/gpu/msm/kgsl_device.h
@@ -454,23 +454,4 @@
kref_put(&context->refcount, kgsl_context_destroy);
}
-/**
- * kgsl_active_count_put - Decrease the device active count
- * @device: Pointer to a KGSL device
- *
- * Decrease the active count for the KGSL device and trigger the suspend_gate
- * completion if it hits zero
- */
-static inline void
-kgsl_active_count_put(struct kgsl_device *device)
-{
- if (device->active_cnt == 1)
- INIT_COMPLETION(device->suspend_gate);
-
- device->active_cnt--;
-
- if (device->active_cnt == 0)
- complete(&device->suspend_gate);
-}
-
#endif /* __KGSL_DEVICE_H */
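
The inline removed here paired a plain counter with a completion: the count going 1 -> 0 completes suspend_gate, so kgsl_active_count_wait() (used by the suspend and postmortem paths in kgsl.c above) can sleep until all users are done. A sketch of that idiom with hypothetical names (the real kgsl helpers are not shown in this hunk):

struct example_gate {
	struct mutex lock;		/* protects count */
	int count;
	struct completion idle;
};

static void example_gate_init(struct example_gate *g)
{
	mutex_init(&g->lock);
	g->count = 0;
	init_completion(&g->idle);
	complete_all(&g->idle);		/* starts out idle */
}

static void example_gate_get(struct example_gate *g)
{
	mutex_lock(&g->lock);
	if (g->count++ == 0)
		INIT_COMPLETION(g->idle);	/* busy again */
	mutex_unlock(&g->lock);
}

static void example_gate_put(struct example_gate *g)
{
	mutex_lock(&g->lock);
	if (--g->count == 0)
		complete_all(&g->idle);		/* last user left */
	mutex_unlock(&g->lock);
}

static void example_gate_wait_idle(struct example_gate *g)
{
	wait_for_completion(&g->idle);
}
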
diff --git a/drivers/gpu/msm/kgsl_events.c b/drivers/gpu/msm/kgsl_events.c
index 9e9c0da..d872783 100644
--- a/drivers/gpu/msm/kgsl_events.c
+++ b/drivers/gpu/msm/kgsl_events.c
@@ -51,6 +51,7 @@
void (*cb)(struct kgsl_device *, void *, u32, u32), void *priv,
void *owner)
{
+ int ret;
struct kgsl_event *event;
unsigned int cur_ts;
struct kgsl_context *context = NULL;
@@ -82,6 +83,16 @@
if (event == NULL)
return -ENOMEM;
+ /*
+ * Increase the active count on the device to avoid going into power
+ * saving modes while events are pending
+ */
+ ret = kgsl_active_count_get_light(device);
+ if (ret < 0) {
+ kfree(event);
+ return ret;
+ }
+
event->context = context;
event->timestamp = ts;
event->priv = priv;
@@ -112,13 +123,6 @@
} else
_add_event_to_list(&device->events, event);
- /*
- * Increase the active count on the device to avoid going into power
- * saving modes while events are pending
- */
-
- device->active_cnt++;
-
queue_work(device->work_queue, &device->ts_expired_ws);
return 0;
}
diff --git a/drivers/gpu/msm/kgsl_gpummu.c b/drivers/gpu/msm/kgsl_gpummu.c
index 5cc0dff..8d071d1 100644
--- a/drivers/gpu/msm/kgsl_gpummu.c
+++ b/drivers/gpu/msm/kgsl_gpummu.c
@@ -759,7 +759,9 @@
.mmu_disable_clk_on_ts = NULL,
.mmu_get_pt_lsb = NULL,
.mmu_get_reg_gpuaddr = NULL,
+ .mmu_get_reg_ahbaddr = NULL,
.mmu_get_num_iommu_units = kgsl_gpummu_get_num_iommu_units,
+ .mmu_hw_halt_supported = NULL,
};
struct kgsl_mmu_pt_ops gpummu_pt_ops = {
diff --git a/drivers/gpu/msm/kgsl_iommu.c b/drivers/gpu/msm/kgsl_iommu.c
index 739fcff..15f35c9 100644
--- a/drivers/gpu/msm/kgsl_iommu.c
+++ b/drivers/gpu/msm/kgsl_iommu.c
@@ -37,31 +37,51 @@
static struct kgsl_iommu_register_list kgsl_iommuv0_reg[KGSL_IOMMU_REG_MAX] = {
- { 0, 0, 0 }, /* GLOBAL_BASE */
- { 0x10, 0x0003FFFF, 14 }, /* TTBR0 */
- { 0x14, 0x0003FFFF, 14 }, /* TTBR1 */
- { 0x20, 0, 0 }, /* FSR */
- { 0x800, 0, 0 }, /* TLBIALL */
- { 0x820, 0, 0 }, /* RESUME */
- { 0x03C, 0, 0 }, /* TLBLKCR */
- { 0x818, 0, 0 }, /* V2PUR */
- { 0x2C, 0, 0 }, /* FSYNR0 */
- { 0x2C, 0, 0 }, /* FSYNR0 */
+ { 0, 0 }, /* GLOBAL_BASE */
+ { 0x10, 1 }, /* TTBR0 */
+ { 0x14, 1 }, /* TTBR1 */
+ { 0x20, 1 }, /* FSR */
+ { 0x800, 1 }, /* TLBIALL */
+ { 0x820, 1 }, /* RESUME */
+ { 0x03C, 1 }, /* TLBLKCR */
+ { 0x818, 1 }, /* V2PUR */
+ { 0x2C, 1 }, /* FSYNR0 */
+ { 0x2C, 1 }, /* FSYNR0 */
+ { 0, 0 }, /* TLBSYNC, not in v0 */
+ { 0, 0 }, /* TLBSTATUS, not in v0 */
+ { 0, 0 } /* IMPLDEF_MICRO_MMU_CTRL, not in v0 */
};
static struct kgsl_iommu_register_list kgsl_iommuv1_reg[KGSL_IOMMU_REG_MAX] = {
- { 0, 0, 0 }, /* GLOBAL_BASE */
- { 0x20, 0x00FFFFFF, 14 }, /* TTBR0 */
- { 0x28, 0x00FFFFFF, 14 }, /* TTBR1 */
- { 0x58, 0, 0 }, /* FSR */
- { 0x618, 0, 0 }, /* TLBIALL */
- { 0x008, 0, 0 }, /* RESUME */
- { 0, 0, 0 }, /* TLBLKCR */
- { 0, 0, 0 }, /* V2PUR */
- { 0x68, 0, 0 }, /* FSYNR0 */
- { 0x6C, 0, 0 } /* FSYNR1 */
+ { 0, 0 }, /* GLOBAL_BASE */
+ { 0x20, 1 }, /* TTBR0 */
+ { 0x28, 1 }, /* TTBR1 */
+ { 0x58, 1 }, /* FSR */
+ { 0x618, 1 }, /* TLBIALL */
+ { 0x008, 1 }, /* RESUME */
+ { 0, 0 }, /* TLBLKCR not in V1 */
+ { 0, 0 }, /* V2PUR not in V1 */
+ { 0x68, 0 }, /* FSYNR0 */
+ { 0x6C, 0 }, /* FSYNR1 */
+ { 0x7F0, 1 }, /* TLBSYNC */
+ { 0x7F4, 1 }, /* TLBSTATUS */
+ { 0x2000, 0 } /* IMPLDEF_MICRO_MMU_CTRL */
};
+static struct iommu_access_ops *iommu_access_ops;
+
+static void _iommu_lock(void)
+{
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_acquire)
+ iommu_access_ops->iommu_lock_acquire();
+}
+
+static void _iommu_unlock(void)
+{
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_release)
+ iommu_access_ops->iommu_lock_release();
+}
+
struct remote_iommu_petersons_spinlock kgsl_iommu_sync_lock_vars;
static int get_iommu_unit(struct device *dev, struct kgsl_mmu **mmu_out,
@@ -582,18 +602,13 @@
struct kgsl_pagetable *pt,
unsigned int pt_base)
{
- struct kgsl_iommu *iommu = mmu->priv;
struct kgsl_iommu_pt *iommu_pt = pt ? pt->priv : NULL;
unsigned int domain_ptbase = iommu_pt ?
iommu_get_pt_base_addr(iommu_pt->domain) : 0;
/* Only compare the valid address bits of the pt_base */
- domain_ptbase &=
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ domain_ptbase &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
- pt_base &=
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ pt_base &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
return domain_ptbase && pt_base &&
(domain_ptbase == pt_base);
@@ -649,15 +664,18 @@
domain_num = msm_register_domain(&kgsl_layout);
if (domain_num >= 0) {
iommu_pt->domain = msm_get_iommu_domain(domain_num);
- iommu_set_fault_handler(iommu_pt->domain,
- kgsl_iommu_fault_handler, NULL);
- } else {
- KGSL_CORE_ERR("Failed to create iommu domain\n");
- kfree(iommu_pt);
- return NULL;
+
+ if (iommu_pt->domain) {
+ iommu_set_fault_handler(iommu_pt->domain,
+ kgsl_iommu_fault_handler, NULL);
+
+ return iommu_pt;
+ }
}
- return iommu_pt;
+ KGSL_CORE_ERR("Failed to create iommu domain\n");
+ kfree(iommu_pt);
+ return NULL;
}
/*
@@ -889,11 +907,14 @@
if (iommu->sync_lock_initialized)
return status;
- /* Get the physical address of the Lock variables */
- lock_phy_addr = (msm_iommu_lock_initialize()
+ iommu_access_ops = get_iommu_access_ops_v0();
+
+ if (iommu_access_ops && iommu_access_ops->iommu_lock_initialize)
+ lock_phy_addr = (iommu_access_ops->iommu_lock_initialize()
- MSM_SHARED_RAM_BASE + msm_shared_ram_phys);
if (!lock_phy_addr) {
+ iommu_access_ops = NULL;
KGSL_DRV_ERR(mmu->device,
"GPU CPU sync lock is not supported by kernel\n");
return -ENXIO;
@@ -911,8 +932,10 @@
iommu->sync_lock_desc.physaddr,
iommu->sync_lock_desc.size);
- if (status)
+ if (status) {
+ iommu_access_ops = NULL;
return status;
+ }
/* Flag Sync Lock is Initialized */
iommu->sync_lock_initialized = 1;
@@ -1092,6 +1115,9 @@
iommu_unit->reg_map.size);
if (ret)
goto err;
+
+ iommu_unit->iommu_halt_enable = data.iommu_halt_enable;
+ iommu_unit->ahb_base = data.physstart - mmu->device->reg_phys;
}
iommu->unit_count = pdata_dev->iommu_count;
return ret;
@@ -1118,11 +1144,9 @@
static unsigned int kgsl_iommu_get_pt_base_addr(struct kgsl_mmu *mmu,
struct kgsl_pagetable *pt)
{
- struct kgsl_iommu *iommu = mmu->priv;
struct kgsl_iommu_pt *iommu_pt = pt->priv;
return iommu_get_pt_base_addr(iommu_pt->domain) &
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
}
/*
@@ -1241,6 +1265,30 @@
}
+/*
+ * kgsl_iommu_get_reg_ahbaddr - Returns the ahb address of the register
+ * @mmu - Pointer to mmu structure
+ * @iommu_unit - The iommu unit for which base address is requested
+ * @ctx_id - The context ID of the IOMMU ctx
+ * @reg - The register for which address is required
+ *
+ * Return - The address of register which can be used in type0 packet
+ */
+static unsigned int kgsl_iommu_get_reg_ahbaddr(struct kgsl_mmu *mmu,
+ int iommu_unit, int ctx_id,
+ enum kgsl_iommu_reg_map reg)
+{
+ struct kgsl_iommu *iommu = mmu->priv;
+
+ if (iommu->iommu_reg_list[reg].ctx_reg)
+ return iommu->iommu_units[iommu_unit].ahb_base +
+ iommu->iommu_reg_list[reg].reg_offset +
+ (ctx_id << KGSL_IOMMU_CTX_SHIFT) + iommu->ctx_offset;
+ else
+ return iommu->iommu_units[iommu_unit].ahb_base +
+ iommu->iommu_reg_list[reg].reg_offset;
+}
+
static int kgsl_iommu_init(struct kgsl_mmu *mmu)
{
/*
@@ -1265,13 +1313,15 @@
status = kgsl_set_register_map(mmu);
if (status)
goto done;
- status = kgsl_iommu_init_sync_lock(mmu);
- if (status)
- goto done;
- /* We presently do not support per-process for IOMMU-v1 */
+ /*
+ * IOMMU-v1 requires hardware halt support to do in-stream
+ * pagetable switching. This check assumes that if there are
+ * multiple units, they will be matching hardware.
+ */
mmu->pt_per_process = KGSL_MMU_USE_PER_PROCESS_PT &&
- msm_soc_version_supports_iommu_v0();
+ (msm_soc_version_supports_iommu_v0() ||
+ iommu->iommu_units[0].iommu_halt_enable);
/*
* For IOMMU per-process pagetables, the allocatable range
@@ -1294,6 +1344,9 @@
mmu->use_cpu_map = false;
}
+ status = kgsl_iommu_init_sync_lock(mmu);
+ if (status)
+ goto done;
iommu->iommu_reg_list = kgsl_iommuv0_reg;
iommu->ctx_offset = KGSL_IOMMU_CTX_OFFSET_V0;
@@ -1535,7 +1588,7 @@
* changing pagetables we can use this lsb value of the pagetable w/o
* having to read it again
*/
- msm_iommu_lock();
+ _iommu_lock();
for (i = 0; i < iommu->unit_count; i++) {
struct kgsl_iommu_unit *iommu_unit = &iommu->iommu_units[i];
for (j = 0; j < iommu_unit->dev_count; j++) {
@@ -1547,7 +1600,7 @@
}
}
kgsl_iommu_lock_rb_in_tlb(mmu);
- msm_iommu_unlock();
+ _iommu_unlock();
/* For complete CFF */
kgsl_cffdump_setmem(mmu->setstate_memory.gpuaddr +
@@ -1667,12 +1720,12 @@
for (j = 0; j < iommu_unit->dev_count; j++) {
if (iommu_unit->dev[j].fault) {
kgsl_iommu_enable_clk(mmu, j);
- msm_iommu_lock();
+ _iommu_lock();
KGSL_IOMMU_SET_CTX_REG(iommu,
iommu_unit,
iommu_unit->dev[j].ctx_id,
RESUME, 1);
- msm_iommu_unlock();
+ _iommu_unlock();
iommu_unit->dev[j].fault = 0;
}
}
@@ -1732,9 +1785,7 @@
KGSL_IOMMU_CONTEXT_USER,
TTBR0);
kgsl_iommu_disable_clk_on_ts(mmu, 0, false);
- return pt_base &
- (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ return pt_base & KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
}
/*
@@ -1764,15 +1815,14 @@
return;
}
/* Mask off the lsb of the pt base address since lsb will not change */
- pt_base &= (iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask <<
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift);
+ pt_base &= KGSL_IOMMU_CTX_TTBR0_ADDR_MASK;
/* For v0 SMMU GPU needs to be idle for tlb invalidate as well */
if (msm_soc_version_supports_iommu_v0())
kgsl_idle(mmu->device);
/* Acquire GPU-CPU sync Lock here */
- msm_iommu_lock();
+ _iommu_lock();
if (flags & KGSL_MMUFLAGS_PTUPDATE) {
if (!msm_soc_version_supports_iommu_v0())
@@ -1795,15 +1845,42 @@
}
/* Flush tlb */
if (flags & KGSL_MMUFLAGS_TLBFLUSH) {
+ unsigned long wait_for_flush;
for (i = 0; i < iommu->unit_count; i++) {
KGSL_IOMMU_SET_CTX_REG(iommu, (&iommu->iommu_units[i]),
KGSL_IOMMU_CONTEXT_USER, TLBIALL, 1);
mb();
+ /*
+ * Wait for flush to complete by polling the flush
+ * status bit of the TLBSTATUS register for not more than
+ * 2 seconds. After that just exit; at that point the SMMU
+ * h/w may be stuck and will eventually cause the GPU to
+ * hang or bring the system down.
+ */
+ if (!msm_soc_version_supports_iommu_v0()) {
+ wait_for_flush = jiffies +
+ msecs_to_jiffies(2000);
+ KGSL_IOMMU_SET_CTX_REG(iommu,
+ (&iommu->iommu_units[i]),
+ KGSL_IOMMU_CONTEXT_USER, TLBSYNC, 0);
+ while (KGSL_IOMMU_GET_CTX_REG(iommu,
+ (&iommu->iommu_units[i]),
+ KGSL_IOMMU_CONTEXT_USER, TLBSTATUS) &
+ (KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE)) {
+ if (time_after(jiffies,
+ wait_for_flush)) {
+ KGSL_DRV_ERR(mmu->device,
+ "Wait limit reached for IOMMU tlb flush\n");
+ break;
+ }
+ cpu_relax();
+ }
+ }
}
}
/* Release GPU-CPU sync Lock here */
- msm_iommu_unlock();
+ _iommu_unlock();
/* Disable smmu clock */
kgsl_iommu_disable_clk_on_ts(mmu, 0, false);
@@ -1816,8 +1893,7 @@
* @ctx_id - The context ID of the IOMMU ctx
* @reg - The register for which address is required
*
- * Return - The number of iommu units which is also the number of register
- * mapped descriptor arrays which the out parameter will have
+ * Return - The gpu address of register which can be used in type3 packet
*/
static unsigned int kgsl_iommu_get_reg_gpuaddr(struct kgsl_mmu *mmu,
int iommu_unit, int ctx_id, int reg)
@@ -1826,10 +1902,25 @@
if (KGSL_IOMMU_GLOBAL_BASE == reg)
return iommu->iommu_units[iommu_unit].reg_map.gpuaddr;
- else
+
+ if (iommu->iommu_reg_list[reg].ctx_reg)
return iommu->iommu_units[iommu_unit].reg_map.gpuaddr +
iommu->iommu_reg_list[reg].reg_offset +
(ctx_id << KGSL_IOMMU_CTX_SHIFT) + iommu->ctx_offset;
+ else
+ return iommu->iommu_units[iommu_unit].reg_map.gpuaddr +
+ iommu->iommu_reg_list[reg].reg_offset;
+}
+/*
+ * kgsl_iommu_hw_halt_supported - Returns whether IOMMU halt command is
+ * supported
+ * @mmu - Pointer to mmu structure
+ * @iommu_unit - The iommu unit for which the property is requested
+ */
+static int kgsl_iommu_hw_halt_supported(struct kgsl_mmu *mmu, int iommu_unit)
+{
+ struct kgsl_iommu *iommu = mmu->priv;
+ return iommu->iommu_units[iommu_unit].iommu_halt_enable;
}
static int kgsl_iommu_get_num_iommu_units(struct kgsl_mmu *mmu)
@@ -1851,9 +1942,11 @@
.mmu_disable_clk_on_ts = kgsl_iommu_disable_clk_on_ts,
.mmu_get_pt_lsb = kgsl_iommu_get_pt_lsb,
.mmu_get_reg_gpuaddr = kgsl_iommu_get_reg_gpuaddr,
+ .mmu_get_reg_ahbaddr = kgsl_iommu_get_reg_ahbaddr,
.mmu_get_num_iommu_units = kgsl_iommu_get_num_iommu_units,
.mmu_pt_equal = kgsl_iommu_pt_equal,
.mmu_get_pt_base_addr = kgsl_iommu_get_pt_base_addr,
+ .mmu_hw_halt_supported = kgsl_iommu_hw_halt_supported,
/* These callbacks will be set on some chipsets */
.mmu_setup_pt = NULL,
.mmu_cleanup_pt = NULL,
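The TLB flush path added above kicks off TLBSYNC and then polls the SACTIVE bit of TLBSTATUS, giving up after roughly two seconds so a stuck SMMU cannot wedge the kernel. Below is a minimal, self-contained sketch of that bounded-poll idiom in plain C; read_status() is a hypothetical stand-in for the register read and CLOCK_MONOTONIC stands in for jiffies.

#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical stand-in for reading TLBSTATUS; bit 0 mimics SACTIVE. */
static unsigned int read_status(void)
{
	static int polls;
	return (++polls < 5) ? 1u : 0u;	/* "flush" completes on the 5th poll */
}

static bool poll_until_clear(unsigned int mask, long timeout_ms)
{
	struct timespec now, deadline;

	clock_gettime(CLOCK_MONOTONIC, &deadline);
	deadline.tv_sec += timeout_ms / 1000;
	deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
	if (deadline.tv_nsec >= 1000000000L) {
		deadline.tv_sec++;
		deadline.tv_nsec -= 1000000000L;
	}

	while (read_status() & mask) {
		clock_gettime(CLOCK_MONOTONIC, &now);
		if (now.tv_sec > deadline.tv_sec ||
		    (now.tv_sec == deadline.tv_sec &&
		     now.tv_nsec >= deadline.tv_nsec)) {
			fprintf(stderr, "wait limit reached\n");
			return false;	/* caller decides how to recover */
		}
		/* a real driver would cpu_relax() or sleep here */
	}
	return true;
}

int main(void)
{
	printf("flush %s\n",
	       poll_until_clear(1u, 2000) ? "completed" : "timed out");
	return 0;
}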
diff --git a/drivers/gpu/msm/kgsl_iommu.h b/drivers/gpu/msm/kgsl_iommu.h
index c09bc4b..b1b83c0 100644
--- a/drivers/gpu/msm/kgsl_iommu.h
+++ b/drivers/gpu/msm/kgsl_iommu.h
@@ -46,6 +46,16 @@
#define KGSL_IOMMU_V1_FSYNR0_WNR_MASK 0x00000001
#define KGSL_IOMMU_V1_FSYNR0_WNR_SHIFT 4
+/* TTBR0 register fields */
+#define KGSL_IOMMU_CTX_TTBR0_ADDR_MASK 0xFFFFC000
+
+/* TLBSTATUS register fields */
+#define KGSL_IOMMU_CTX_TLBSTATUS_SACTIVE BIT(0)
+
+/* IMPLDEF_MICRO_MMU_CTRL register fields */
+#define KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_HALT BIT(2)
+#define KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL_IDLE BIT(3)
+
enum kgsl_iommu_reg_map {
KGSL_IOMMU_GLOBAL_BASE = 0,
KGSL_IOMMU_CTX_TTBR0,
@@ -57,15 +67,31 @@
KGSL_IOMMU_CTX_V2PUR,
KGSL_IOMMU_CTX_FSYNR0,
KGSL_IOMMU_CTX_FSYNR1,
+ KGSL_IOMMU_CTX_TLBSYNC,
+ KGSL_IOMMU_CTX_TLBSTATUS,
+ KGSL_IOMMU_IMPLDEF_MICRO_MMU_CTRL,
KGSL_IOMMU_REG_MAX
};
struct kgsl_iommu_register_list {
unsigned int reg_offset;
- unsigned int reg_mask;
- unsigned int reg_shift;
+ int ctx_reg;
};
+#ifdef CONFIG_MSM_IOMMU
+extern struct iommu_access_ops iommu_access_ops_v0;
+
+static inline struct iommu_access_ops *get_iommu_access_ops_v0(void)
+{
+ return &iommu_access_ops_v0;
+}
+#else
+static inline struct iommu_access_ops *get_iommu_access_ops_v0(void)
+{
+ return NULL;
+}
+#endif
+
/*
* Max number of iommu units that the gpu core can have
* On APQ8064, KGSL can control a maximum of 2 IOMMU units.
@@ -91,10 +117,8 @@
iommu->ctx_offset)
/* Gets the lsb value of pagetable */
-#define KGSL_IOMMMU_PT_LSB(iommu, pt_val) \
- (pt_val & \
- ~(iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_mask << \
- iommu->iommu_reg_list[KGSL_IOMMU_CTX_TTBR0].reg_shift))
+#define KGSL_IOMMMU_PT_LSB(iommu, pt_val) \
+ (pt_val & ~(KGSL_IOMMU_CTX_TTBR0_ADDR_MASK))
/* offset at which a nop command is placed in setstate_memory */
#define KGSL_IOMMU_SETSTATE_NOP_OFFSET 1024
@@ -130,11 +154,18 @@
 * @dev_count: Number of IOMMU contexts that are valid in the previous field
* @reg_map: Memory descriptor which holds the mapped address of this IOMMU
* units register range
+ * @ahb_base: The base address from which IOMMU registers can be accessed over
+ * the AHB bus
+ * @iommu_halt_enable: Valid only on IOMMU-v1; when set, indicates that the
+ * iommu unit supports halting of the IOMMU, which can be enabled while
+ * programming the IOMMU registers for synchronization
*/
struct kgsl_iommu_unit {
struct kgsl_iommu_device dev[KGSL_IOMMU_MAX_DEVS_PER_UNIT];
unsigned int dev_count;
struct kgsl_memdesc reg_map;
+ unsigned int ahb_base;
+ int iommu_halt_enable;
};
/*
diff --git a/drivers/gpu/msm/kgsl_mmu.c b/drivers/gpu/msm/kgsl_mmu.c
index 4e95373..6e41707 100644
--- a/drivers/gpu/msm/kgsl_mmu.c
+++ b/drivers/gpu/msm/kgsl_mmu.c
@@ -610,6 +610,7 @@
* kgsl_pwrctrl_irq() is called
*/
}
+EXPORT_SYMBOL(kgsl_mh_start);
int
kgsl_mmu_map(struct kgsl_pagetable *pagetable,
diff --git a/drivers/gpu/msm/kgsl_mmu.h b/drivers/gpu/msm/kgsl_mmu.h
index d7d9516..ef5b0f4 100644
--- a/drivers/gpu/msm/kgsl_mmu.h
+++ b/drivers/gpu/msm/kgsl_mmu.h
@@ -14,7 +14,7 @@
#define __KGSL_MMU_H
#include <mach/iommu.h>
-
+#include "kgsl_iommu.h"
/*
* These defines control the address range for allocations that
* are mapped into all pagetables.
@@ -150,6 +150,9 @@
enum kgsl_iommu_context_id ctx_id);
unsigned int (*mmu_get_reg_gpuaddr)(struct kgsl_mmu *mmu,
int iommu_unit_num, int ctx_id, int reg);
+ unsigned int (*mmu_get_reg_ahbaddr)(struct kgsl_mmu *mmu,
+ int iommu_unit_num, int ctx_id,
+ enum kgsl_iommu_reg_map reg);
int (*mmu_get_num_iommu_units)(struct kgsl_mmu *mmu);
int (*mmu_pt_equal) (struct kgsl_mmu *mmu,
struct kgsl_pagetable *pt,
@@ -165,6 +168,7 @@
(struct kgsl_mmu *mmu, unsigned int *cmds);
unsigned int (*mmu_sync_unlock)
(struct kgsl_mmu *mmu, unsigned int *cmds);
+ int (*mmu_hw_halt_supported)(struct kgsl_mmu *mmu, int iommu_unit_num);
};
struct kgsl_mmu_pt_ops {
@@ -337,6 +341,29 @@
return 0;
}
+/*
+ * kgsl_mmu_get_reg_ahbaddr() - Calls the mmu specific function pointer to
+ * return the address that the GPU can use to access the register
+ * @mmu: Pointer to the device mmu
+ * @iommu_unit_num: There can be multiple iommu units used for graphics.
+ * This parameter is an index to the iommu unit being used
+ * @ctx_id: The context id within the iommu unit
+ * @reg: Register whose address is to be returned
+ *
+ * Returns the ahb address of reg else 0
+ */
+static inline unsigned int kgsl_mmu_get_reg_ahbaddr(struct kgsl_mmu *mmu,
+ int iommu_unit_num,
+ int ctx_id,
+ enum kgsl_iommu_reg_map reg)
+{
+ if (mmu->mmu_ops && mmu->mmu_ops->mmu_get_reg_ahbaddr)
+ return mmu->mmu_ops->mmu_get_reg_ahbaddr(mmu, iommu_unit_num,
+ ctx_id, reg);
+ else
+ return 0;
+}
+
static inline int kgsl_mmu_get_num_iommu_units(struct kgsl_mmu *mmu)
{
if (mmu->mmu_ops && mmu->mmu_ops->mmu_get_num_iommu_units)
@@ -346,6 +373,22 @@
}
/*
+ * kgsl_mmu_hw_halt_supported() - Runtime check for iommu hw halt
+ * @mmu: the mmu
+ * @iommu_unit_num: The iommu unit for which hw halt support is checked
+ *
+ * Returns non-zero if the iommu supports hw halt,
+ * 0 if not.
+ */
+static inline int kgsl_mmu_hw_halt_supported(struct kgsl_mmu *mmu,
+ int iommu_unit_num)
+{
+ if (mmu->mmu_ops && mmu->mmu_ops->mmu_hw_halt_supported)
+ return mmu->mmu_ops->mmu_hw_halt_supported(mmu, iommu_unit_num);
+ else
+ return 0;
+}
+
+/*
* kgsl_mmu_is_perprocess() - Runtime check for per-process
* pagetables.
* @mmu: the mmu
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.c b/drivers/gpu/msm/kgsl_pwrctrl.c
index 2f8d93e..b124257 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.c
+++ b/drivers/gpu/msm/kgsl_pwrctrl.c
@@ -18,6 +18,7 @@
#include <mach/msm_iomap.h>
#include <mach/msm_bus.h>
#include <linux/ktime.h>
+#include <linux/delay.h>
#include "kgsl.h"
#include "kgsl_pwrscale.h"
@@ -33,6 +34,16 @@
#define UPDATE_BUSY_VAL 1000000
#define UPDATE_BUSY 50
+/*
+ * Expected delay for post-interrupt processing on A3xx.
+ * The delay may be longer; gradually increase the delay
+ * to compensate. If the GPU isn't done by the max delay,
+ * it's working on something other than just the final
+ * command sequence, so stop waiting for it to become idle.
+ */
+#define INIT_UDELAY 200
+#define MAX_UDELAY 2000
+
struct clk_pair {
const char *name;
uint map;
@@ -59,6 +70,10 @@
.name = "mem_iface_clk",
.map = KGSL_CLK_MEM_IFACE,
},
+ {
+ .name = "alt_mem_iface_clk",
+ .map = KGSL_CLK_ALT_MEM_IFACE,
+ },
};
/* Update the elapsed time at a particular clock level
@@ -1055,8 +1070,18 @@
pwr->power_flags = 0;
}
+/**
+ * kgsl_idle_check() - Work function for GPU interrupts and idle timeouts.
+ * @device: The device
+ *
+ * This function is called for work that is queued by the interrupt
+ * handler or the idle timer. It attempts to transition to a clocks
+ * off state if the active_cnt is 0 and the hardware is idle.
+ */
void kgsl_idle_check(struct work_struct *work)
{
+ int delay = INIT_UDELAY;
+ int requested_state;
struct kgsl_device *device = container_of(work, struct kgsl_device,
idle_check_ws);
WARN_ON(device == NULL);
@@ -1064,21 +1089,52 @@
return;
mutex_lock(&device->mutex);
- if (device->state & (KGSL_STATE_ACTIVE | KGSL_STATE_NAP)) {
- kgsl_pwrscale_idle(device);
- if (kgsl_pwrctrl_sleep(device) != 0) {
+ kgsl_pwrscale_idle(device);
+
+ if (device->state == KGSL_STATE_ACTIVE
+ || device->state == KGSL_STATE_NAP) {
+ /*
+ * If no user is explicitly trying to use the GPU
+ * (active_cnt is zero), then loop with increasing delay,
+ * waiting for the GPU to become idle.
+ */
+ while (!device->active_cnt && delay < MAX_UDELAY) {
+ requested_state = device->requested_state;
+ if (!kgsl_pwrctrl_sleep(device))
+ break;
+ /*
+ * If no new commands have been issued since the
+ * last interrupt, stay in this loop waiting for
+ * the GPU to become idle.
+ */
+ if (!device->pwrctrl.irq_last)
+ break;
+ kgsl_pwrctrl_request_state(device, requested_state);
+ mutex_unlock(&device->mutex);
+ udelay(delay);
+ delay *= 2;
+ mutex_lock(&device->mutex);
+ }
+
+
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
+ if (device->state == KGSL_STATE_ACTIVE) {
mod_timer(&device->idle_timer,
jiffies +
device->pwrctrl.interval_timeout);
- /* If the GPU has been too busy to sleep, make sure *
- * that is acurately reflected in the % busy numbers. */
+ /*
+ * If the GPU has been too busy to sleep, make sure
+ * that is accurately reflected in the % busy numbers.
+ */
device->pwrctrl.clk_stats.no_nap_cnt++;
if (device->pwrctrl.clk_stats.no_nap_cnt >
UPDATE_BUSY) {
kgsl_pwrctrl_busy_time(device, true);
device->pwrctrl.clk_stats.no_nap_cnt = 0;
}
+ } else {
+ device->pwrctrl.irq_last = 0;
}
} else if (device->state & (KGSL_STATE_HUNG |
KGSL_STATE_DUMP_AND_FT)) {
@@ -1087,6 +1143,7 @@
mutex_unlock(&device->mutex);
}
+EXPORT_SYMBOL(kgsl_idle_check);
void kgsl_timer(unsigned long data)
{
@@ -1104,54 +1161,26 @@
}
}
+
+/**
+ * kgsl_pre_hwaccess - Enforce preconditions for touching registers
+ * @device: The device
+ *
+ * This function ensures that the correct lock is held and that the GPU
+ * clock is on immediately before a register is read or written. Note
+ * that this function does not check active_cnt because the registers
+ * must be accessed during device start and stop, when the active_cnt
+ * may legitimately be 0.
+ */
void kgsl_pre_hwaccess(struct kgsl_device *device)
{
+ /* In order to touch a register you must hold the device mutex...*/
BUG_ON(!mutex_is_locked(&device->mutex));
- switch (device->state) {
- case KGSL_STATE_ACTIVE:
- return;
- case KGSL_STATE_NAP:
- case KGSL_STATE_SLEEP:
- case KGSL_STATE_SLUMBER:
- kgsl_pwrctrl_wake(device);
- break;
- case KGSL_STATE_SUSPEND:
- kgsl_check_suspended(device);
- break;
- case KGSL_STATE_INIT:
- case KGSL_STATE_HUNG:
- case KGSL_STATE_DUMP_AND_FT:
- if (test_bit(KGSL_PWRFLAGS_CLK_ON,
- &device->pwrctrl.power_flags))
- break;
- else
- KGSL_PWR_ERR(device,
- "hw access while clocks off from state %d\n",
- device->state);
- break;
- default:
- KGSL_PWR_ERR(device, "hw access while in unknown state %d\n",
- device->state);
- break;
- }
+ /* and have the clock on! */
+ BUG_ON(!test_bit(KGSL_PWRFLAGS_CLK_ON, &device->pwrctrl.power_flags));
}
EXPORT_SYMBOL(kgsl_pre_hwaccess);
-void kgsl_check_suspended(struct kgsl_device *device)
-{
- if (device->requested_state == KGSL_STATE_SUSPEND ||
- device->state == KGSL_STATE_SUSPEND) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->hwaccess_gate);
- mutex_lock(&device->mutex);
- } else if (device->state == KGSL_STATE_DUMP_AND_FT) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->ft_gate);
- mutex_lock(&device->mutex);
- } else if (device->state == KGSL_STATE_SLUMBER)
- kgsl_pwrctrl_wake(device);
-}
-
static int
_nap(struct kgsl_device *device)
{
@@ -1230,6 +1259,8 @@
case KGSL_STATE_NAP:
case KGSL_STATE_SLEEP:
del_timer_sync(&device->idle_timer);
+ /* make sure power is on to stop the device*/
+ kgsl_pwrctrl_enable(device);
device->ftbl->suspend_context(device);
device->ftbl->stop(device);
_sleep_accounting(device);
@@ -1278,9 +1309,9 @@
/******************************************************************/
/* Caller must hold the device mutex. */
-void kgsl_pwrctrl_wake(struct kgsl_device *device)
+int kgsl_pwrctrl_wake(struct kgsl_device *device)
{
- int status;
+ int status = 0;
unsigned int context_id;
unsigned int state = device->state;
unsigned int ts_processed = 0xdeaddead;
@@ -1329,8 +1360,10 @@
KGSL_PWR_WARN(device, "unhandled state %s\n",
kgsl_pwrstate_to_str(device->state));
kgsl_pwrctrl_request_state(device, KGSL_STATE_NONE);
+ status = -EINVAL;
break;
}
+ return status;
}
EXPORT_SYMBOL(kgsl_pwrctrl_wake);
@@ -1396,3 +1429,124 @@
}
EXPORT_SYMBOL(kgsl_pwrstate_to_str);
+
+/**
+ * kgsl_active_count_get() - Increase the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Increase the active count for the KGSL device and turn on
+ * clocks if this is the first reference. Code paths that need
+ * to touch the hardware or wait for the hardware to complete
+ * an operation must hold an active count reference until they
+ * are finished. An error code will be returned if waking the
+ * device fails. The device mutex must be held while calling
+ * this function.
+ */
+int kgsl_active_count_get(struct kgsl_device *device)
+{
+ int ret = 0;
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->active_cnt == 0) {
+ if (device->requested_state == KGSL_STATE_SUSPEND ||
+ device->state == KGSL_STATE_SUSPEND) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->hwaccess_gate);
+ mutex_lock(&device->mutex);
+ } else if (device->state == KGSL_STATE_DUMP_AND_FT) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->ft_gate);
+ mutex_lock(&device->mutex);
+ }
+ ret = kgsl_pwrctrl_wake(device);
+ }
+ if (ret == 0)
+ device->active_cnt++;
+ return ret;
+}
+EXPORT_SYMBOL(kgsl_active_count_get);
+
+/**
+ * kgsl_active_count_get_light() - Increase the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Increase the active count for the KGSL device WITHOUT
+ * turning on the clocks. Currently this is only used for creating
+ * kgsl_events. The device mutex must be held while calling this function.
+ */
+int kgsl_active_count_get_light(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->state != KGSL_STATE_ACTIVE) {
+ dev_WARN_ONCE(device->dev, 1, "device in unexpected state %s\n",
+ kgsl_pwrstate_to_str(device->state));
+ return -EINVAL;
+ }
+
+ if (device->active_cnt == 0) {
+ dev_WARN_ONCE(device->dev, 1, "active count is 0!\n");
+ return -EINVAL;
+ }
+
+ device->active_cnt++;
+ return 0;
+}
+EXPORT_SYMBOL(kgsl_active_count_get_light);
+
+/**
+ * kgsl_active_count_put() - Decrease the device active count
+ * @device: Pointer to a KGSL device
+ *
+ * Decrease the active count for the KGSL device and turn off
+ * clocks if there are no remaining references. This function will
+ * transition the device to NAP if there are no other pending state
+ * changes. It also completes the suspend gate. The device mutex must
+ * be held while calling this function.
+ */
+void kgsl_active_count_put(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+ BUG_ON(device->active_cnt == 0);
+
+ kgsl_pwrscale_idle(device);
+ if (device->active_cnt > 1) {
+ device->active_cnt--;
+ return;
+ }
+
+ INIT_COMPLETION(device->suspend_gate);
+
+ if (device->pwrctrl.nap_allowed == true &&
+ (device->state == KGSL_STATE_ACTIVE &&
+ device->requested_state == KGSL_STATE_NONE)) {
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ if (kgsl_pwrctrl_sleep(device) && device->pwrctrl.irq_last) {
+ kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
+ queue_work(device->work_queue, &device->idle_check_ws);
+ }
+ }
+ device->active_cnt--;
+
+ if (device->active_cnt == 0)
+ complete(&device->suspend_gate);
+}
+EXPORT_SYMBOL(kgsl_active_count_put);
+
+/**
+ * kgsl_active_count_wait() - Wait for activity to finish.
+ * @device: Pointer to a KGSL device
+ *
+ * Block until all active_cnt users put() their reference.
+ */
+void kgsl_active_count_wait(struct kgsl_device *device)
+{
+ BUG_ON(!mutex_is_locked(&device->mutex));
+
+ if (device->active_cnt != 0) {
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->suspend_gate);
+ mutex_lock(&device->mutex);
+ }
+}
+EXPORT_SYMBOL(kgsl_active_count_wait);
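The kernel-doc above describes a reference-counting protocol: the first kgsl_active_count_get() wakes the device, the last kgsl_active_count_put() lets it idle again, and kgsl_active_count_wait() blocks until every reference is dropped. A minimal standalone sketch of that pairing is below; power_on(), power_off() and do_hw_work() are hypothetical stand-ins, not KGSL symbols.

#include <stdio.h>

static int active_cnt;
static int clocks_on;

/* Hypothetical stand-ins for waking/idling the hardware. */
static void power_on(void)  { clocks_on = 1; puts("clocks on");  }
static void power_off(void) { clocks_on = 0; puts("clocks off"); }

static int active_count_get(void)
{
	if (active_cnt == 0)
		power_on();            /* first reference wakes the device */
	active_cnt++;
	return 0;
}

static void active_count_put(void)
{
	if (active_cnt == 0)
		return;                /* unbalanced put; real code BUG()s */
	if (--active_cnt == 0)
		power_off();           /* last reference lets the device idle */
}

static void do_hw_work(const char *what)
{
	active_count_get();            /* hold a reference while touching hw */
	printf("doing %s with clocks_on=%d\n", what, clocks_on);
	active_count_put();
}

int main(void)
{
	do_hw_work("register dump");
	active_count_get();            /* outer reference keeps clocks on ... */
	do_hw_work("nested command");  /* ... across the nested get/put pair  */
	active_count_put();
	return 0;
}

The nested get/put in main() shows why callers may freely stack references: the clocks stay on across the inner pair because the outer reference is still held.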
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.h b/drivers/gpu/msm/kgsl_pwrctrl.h
index ced52e1..b3e8702 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.h
+++ b/drivers/gpu/msm/kgsl_pwrctrl.h
@@ -23,7 +23,7 @@
#define KGSL_PWRLEVEL_NOMINAL 1
#define KGSL_PWRLEVEL_LAST_OFFSET 2
-#define KGSL_MAX_CLKS 5
+#define KGSL_MAX_CLKS 6
struct platform_device;
@@ -91,6 +91,7 @@
struct pm_qos_request pm_qos_req_dma;
unsigned int pm_qos_latency;
unsigned int step_mul;
+ unsigned int irq_last;
};
void kgsl_pwrctrl_irq(struct kgsl_device *device, int state);
@@ -99,9 +100,8 @@
void kgsl_timer(unsigned long data);
void kgsl_idle_check(struct work_struct *work);
void kgsl_pre_hwaccess(struct kgsl_device *device);
-void kgsl_check_suspended(struct kgsl_device *device);
int kgsl_pwrctrl_sleep(struct kgsl_device *device);
-void kgsl_pwrctrl_wake(struct kgsl_device *device);
+int kgsl_pwrctrl_wake(struct kgsl_device *device);
void kgsl_pwrctrl_pwrlevel_change(struct kgsl_device *device,
unsigned int level);
int kgsl_pwrctrl_init_sysfs(struct kgsl_device *device);
@@ -115,4 +115,10 @@
void kgsl_pwrctrl_set_state(struct kgsl_device *device, unsigned int state);
void kgsl_pwrctrl_request_state(struct kgsl_device *device, unsigned int state);
+
+int kgsl_active_count_get(struct kgsl_device *device);
+int kgsl_active_count_get_light(struct kgsl_device *device);
+void kgsl_active_count_put(struct kgsl_device *device);
+void kgsl_active_count_wait(struct kgsl_device *device);
+
#endif /* __KGSL_PWRCTRL_H */
diff --git a/drivers/gpu/msm/kgsl_pwrscale.c b/drivers/gpu/msm/kgsl_pwrscale.c
index 02ada38..afef62e 100644
--- a/drivers/gpu/msm/kgsl_pwrscale.c
+++ b/drivers/gpu/msm/kgsl_pwrscale.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -240,6 +240,7 @@
device->pwrscale.policy->busy(device,
&device->pwrscale);
}
+EXPORT_SYMBOL(kgsl_pwrscale_busy);
void kgsl_pwrscale_idle(struct kgsl_device *device)
{
diff --git a/drivers/gpu/msm/kgsl_pwrscale_trustzone.c b/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
index 9b2ac70..5d5d5b1 100644
--- a/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
+++ b/drivers/gpu/msm/kgsl_pwrscale_trustzone.c
@@ -31,6 +31,7 @@
unsigned int no_switch_cnt;
unsigned int skip_cnt;
struct kgsl_power_stats bin;
+ unsigned int idle_dcvs;
};
spinlock_t tz_lock;
@@ -47,24 +48,32 @@
#define SKIP_COUNTER 500
#define TZ_RESET_ID 0x3
#define TZ_UPDATE_ID 0x4
+#define TZ_INIT_ID 0x6
-#ifdef CONFIG_MSM_SCM
/* Trap into the TrustZone, and call funcs there. */
-static int __secure_tz_entry(u32 cmd, u32 val, u32 id)
+static int __secure_tz_entry2(u32 cmd, u32 val1, u32 val2)
{
int ret;
spin_lock(&tz_lock);
+ /* sync memory before sending the commands to tz */
__iowmb();
- ret = scm_call_atomic2(SCM_SVC_IO, cmd, val, id);
+ ret = scm_call_atomic2(SCM_SVC_IO, cmd, val1, val2);
spin_unlock(&tz_lock);
return ret;
}
-#else
-static int __secure_tz_entry(u32 cmd, u32 val, u32 id)
+
+static int __secure_tz_entry3(u32 cmd, u32 val1, u32 val2,
+ u32 val3)
{
- return 0;
+ int ret;
+ spin_lock(&tz_lock);
+ /* sync memory before sending the commands to tz */
+ __iowmb();
+ ret = scm_call_atomic3(SCM_SVC_IO, cmd, val1, val2,
+ val3);
+ spin_unlock(&tz_lock);
+ return ret;
}
-#endif /* CONFIG_MSM_SCM */
static ssize_t tz_governor_show(struct kgsl_device *device,
struct kgsl_pwrscale *pwrscale,
@@ -172,11 +181,21 @@
*/
if (priv->bin.busy_time > CEILING) {
val = -1;
- } else {
+ } else if (priv->idle_dcvs) {
idle = priv->bin.total_time - priv->bin.busy_time;
idle = (idle > 0) ? idle : 0;
- val = __secure_tz_entry(TZ_UPDATE_ID, idle, device->id);
+ val = __secure_tz_entry2(TZ_UPDATE_ID, idle, device->id);
+ } else {
+ if (pwr->step_mul > 1)
+ val = __secure_tz_entry3(TZ_UPDATE_ID,
+ (pwr->active_pwrlevel + 1)/2,
+ priv->bin.total_time, priv->bin.busy_time);
+ else
+ val = __secure_tz_entry3(TZ_UPDATE_ID,
+ pwr->active_pwrlevel,
+ priv->bin.total_time, priv->bin.busy_time);
}
+
priv->bin.total_time = 0;
priv->bin.busy_time = 0;
@@ -201,7 +220,7 @@
{
struct tz_priv *priv = pwrscale->priv;
- __secure_tz_entry(TZ_RESET_ID, 0, device->id);
+ __secure_tz_entry2(TZ_RESET_ID, 0, 0);
priv->no_switch_cnt = 0;
priv->bin.total_time = 0;
priv->bin.busy_time = 0;
@@ -210,16 +229,32 @@
#ifdef CONFIG_MSM_SCM
static int tz_init(struct kgsl_device *device, struct kgsl_pwrscale *pwrscale)
{
+ int i = 0, j = 1, ret = 0;
struct tz_priv *priv;
+ struct kgsl_pwrctrl *pwr = &device->pwrctrl;
+ unsigned int tz_pwrlevels[KGSL_MAX_PWRLEVELS + 1];
priv = pwrscale->priv = kzalloc(sizeof(struct tz_priv), GFP_KERNEL);
if (pwrscale->priv == NULL)
return -ENOMEM;
-
+ priv->idle_dcvs = 0;
priv->governor = TZ_GOVERNOR_ONDEMAND;
spin_lock_init(&tz_lock);
kgsl_pwrscale_policy_add_files(device, pwrscale, &tz_attr_group);
-
+ for (i = 0; i < pwr->num_pwrlevels - 1; i++) {
+ if (i == 0)
+ tz_pwrlevels[j] = pwr->pwrlevels[i].gpu_freq;
+ else if (pwr->pwrlevels[i].gpu_freq !=
+ pwr->pwrlevels[i - 1].gpu_freq) {
+ j++;
+ tz_pwrlevels[j] = pwr->pwrlevels[i].gpu_freq;
+ }
+ }
+ tz_pwrlevels[0] = j;
+ ret = scm_call(SCM_SVC_DCVS, TZ_INIT_ID, tz_pwrlevels,
+ sizeof(tz_pwrlevels), NULL, 0);
+ if (ret)
+ priv->idle_dcvs = 1;
return 0;
}
#else
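tz_init() above packs the GPU frequency table for the TZ_INIT_ID call: consecutive duplicate frequencies are collapsed, the last power level is skipped, unique values land in tz_pwrlevels[1..n], and the count goes in tz_pwrlevels[0]. A standalone sketch of that packing with made-up frequencies:

#include <stdio.h>

#define MAX_PWRLEVELS 8

int main(void)
{
	/* Hypothetical GPU frequency table (Hz); the last entry is skipped,
	 * mirroring the loop bound in tz_init() above. */
	unsigned int gpu_freq[] = { 450000000, 320000000, 320000000,
				    200000000, 27000000 };
	int num_pwrlevels = sizeof(gpu_freq) / sizeof(gpu_freq[0]);
	unsigned int tz_pwrlevels[MAX_PWRLEVELS + 1];
	int i, j = 1;

	for (i = 0; i < num_pwrlevels - 1; i++) {
		if (i == 0)
			tz_pwrlevels[j] = gpu_freq[i];
		else if (gpu_freq[i] != gpu_freq[i - 1]) {
			j++;
			tz_pwrlevels[j] = gpu_freq[i];
		}
	}
	tz_pwrlevels[0] = j;	/* number of unique levels */

	printf("count=%u:", tz_pwrlevels[0]);
	for (i = 1; i <= (int)tz_pwrlevels[0]; i++)
		printf(" %u", tz_pwrlevels[i]);
	printf("\n");	/* count=3: 450000000 320000000 200000000 */
	return 0;
}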
diff --git a/drivers/gpu/msm/kgsl_sharedmem.c b/drivers/gpu/msm/kgsl_sharedmem.c
index 595f78f..62db513 100644
--- a/drivers/gpu/msm/kgsl_sharedmem.c
+++ b/drivers/gpu/msm/kgsl_sharedmem.c
@@ -607,16 +607,22 @@
while (len > 0) {
struct page *page;
- unsigned int gfp_mask = GFP_KERNEL | __GFP_HIGHMEM |
- __GFP_NOWARN | __GFP_NORETRY;
+ unsigned int gfp_mask = __GFP_HIGHMEM;
int j;
/* don't waste space at the end of the allocation*/
if (len < page_size)
page_size = PAGE_SIZE;
+ /*
+ * Don't do some of the more aggressive memory recovery
+ * techniques for large order allocations
+ */
if (page_size != PAGE_SIZE)
- gfp_mask |= __GFP_COMP;
+ gfp_mask |= __GFP_COMP | __GFP_NORETRY |
+ __GFP_NO_KSWAPD | __GFP_NOWARN;
+ else
+ gfp_mask |= GFP_KERNEL | __GFP_NORETRY;
page = alloc_pages(gfp_mask, get_order(page_size));
diff --git a/drivers/gpu/msm/kgsl_snapshot.c b/drivers/gpu/msm/kgsl_snapshot.c
index 4c9c744..abcebfb 100644
--- a/drivers/gpu/msm/kgsl_snapshot.c
+++ b/drivers/gpu/msm/kgsl_snapshot.c
@@ -325,6 +325,7 @@
return 0;
}
+EXPORT_SYMBOL(kgsl_snapshot_have_object);
/* kgsl_snapshot_get_object - Mark a GPU buffer to be frozen
* @device - the device that is being snapshotted
diff --git a/drivers/gpu/msm/z180.c b/drivers/gpu/msm/z180.c
index a07959b..49265fc 100644
--- a/drivers/gpu/msm/z180.c
+++ b/drivers/gpu/msm/z180.c
@@ -17,7 +17,6 @@
#include "kgsl.h"
#include "kgsl_cffdump.h"
#include "kgsl_sharedmem.h"
-#include "kgsl_trace.h"
#include "z180.h"
#include "z180_reg.h"
@@ -485,7 +484,7 @@
z180_cmdwindow_write(device, ADDR_VGV3_CONTROL, 0);
error:
- trace_kgsl_issueibcmds(device, context->id, ibdesc, numibs,
+ kgsl_trace_issueibcmds(device, context->id, ibdesc, numibs,
*timestamp, ctrl, result, 0);
return (int)result;
diff --git a/drivers/hwmon/qpnp-adc-common.c b/drivers/hwmon/qpnp-adc-common.c
index 1458bc5..b3b5643 100644
--- a/drivers/hwmon/qpnp-adc-common.c
+++ b/drivers/hwmon/qpnp-adc-common.c
@@ -638,20 +638,21 @@
{
struct qpnp_vadc_linear_graph vbatt_param;
int rc = 0;
+ int64_t low_thr = 0, high_thr = 0;
rc = qpnp_get_vadc_gain_and_offset(&vbatt_param, CALIB_ABSOLUTE);
if (rc < 0)
return rc;
- *low_threshold = (((param->low_thr/3) - QPNP_ADC_625_UV) *
+ low_thr = (((param->low_thr/3) - QPNP_ADC_625_UV) *
vbatt_param.dy);
- do_div(*low_threshold, QPNP_ADC_625_UV);
- *low_threshold += vbatt_param.adc_gnd;
+ do_div(low_thr, QPNP_ADC_625_UV);
+ *low_threshold = low_thr + vbatt_param.adc_gnd;
- *high_threshold = (((param->high_thr/3) - QPNP_ADC_625_UV) *
+ high_thr = (((param->high_thr/3) - QPNP_ADC_625_UV) *
vbatt_param.dy);
- do_div(*high_threshold, QPNP_ADC_625_UV);
- *high_threshold += vbatt_param.adc_gnd;
+ do_div(high_thr, QPNP_ADC_625_UV);
+ *high_threshold = high_thr + vbatt_param.adc_gnd;
pr_debug("high_volt:%d, low_volt:%d\n", param->high_thr,
param->low_thr);
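The hunk above accumulates the threshold arithmetic in int64_t temporaries and only stores the final value through the caller's pointer, because the intermediate product of (threshold/3 - 625 uV) and the calibration gain can exceed 32 bits. A small sketch with illustrative numbers (the gain and ground values are made up, not real calibration data):

#include <stdint.h>
#include <stdio.h>

#define QPNP_ADC_625_UV 625

int main(void)
{
	uint32_t low_thr = 2800000;	/* illustrative threshold, in uV */
	uint32_t dy = 8000;		/* illustrative calibration gain */
	uint32_t adc_gnd = 600;		/* illustrative ground code */

	/* Accumulating in 32 bits: the product wraps before the divide. */
	uint32_t bad = ((low_thr / 3) - QPNP_ADC_625_UV) * dy;
	bad /= QPNP_ADC_625_UV;
	bad += adc_gnd;

	/* Accumulating in a 64-bit temporary, as the fixed code does. */
	int64_t good = ((int64_t)(low_thr / 3) - QPNP_ADC_625_UV) * dy;
	good /= QPNP_ADC_625_UV;	/* stands in for do_div() */
	good += adc_gnd;

	printf("32-bit: %u, 64-bit: %lld\n", bad, (long long)good);
	/* prints 32-bit: 5067314, 64-bit: 11939262 */
	return 0;
}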
diff --git a/drivers/hwmon/qpnp-adc-current.c b/drivers/hwmon/qpnp-adc-current.c
index 0b02a34..66811bf 100644
--- a/drivers/hwmon/qpnp-adc-current.c
+++ b/drivers/hwmon/qpnp-adc-current.c
@@ -129,6 +129,7 @@
struct qpnp_iadc_drv {
struct qpnp_adc_drv *adc;
int32_t rsense;
+ bool external_rsense;
struct device *iadc_hwmon;
bool iadc_initialized;
int64_t die_temp_calib_offset;
@@ -543,6 +544,9 @@
if (!iadc || !iadc->iadc_initialized)
return -EPROBE_DEFER;
+ if (iadc->external_rsense) {
+ *rsense = iadc->rsense;
+ return 0;
+ }
+
rc = qpnp_iadc_read_reg(QPNP_IADC_NOMINAL_RSENSE, &rslt_rsense);
if (rc < 0) {
pr_err("qpnp adc rsense read failed with %d\n", rc);
@@ -571,15 +575,21 @@
{
struct qpnp_iadc_drv *iadc = qpnp_iadc;
struct qpnp_vadc_result result_pmic_therm;
+ int64_t die_temp_offset;
int rc = 0;
rc = qpnp_vadc_read(DIE_TEMP, &result_pmic_therm);
if (rc < 0)
return rc;
- if (((uint64_t) (result_pmic_therm.physical -
- iadc->die_temp_calib_offset))
- > QPNP_IADC_DIE_TEMP_CALIB_OFFSET) {
+ die_temp_offset = result_pmic_therm.physical -
+ iadc->die_temp_calib_offset;
+ if (die_temp_offset < 0)
+ die_temp_offset = -die_temp_offset;
+
+ if (die_temp_offset > QPNP_IADC_DIE_TEMP_CALIB_OFFSET) {
+ iadc->die_temp_calib_offset =
+ result_pmic_therm.physical;
rc = qpnp_iadc_calibrate_for_trim();
if (rc)
pr_err("periodic IADC calibration failed\n");
@@ -822,9 +832,11 @@
rc = of_property_read_u32(node, "qcom,rsense",
&iadc->rsense);
- if (rc) {
- pr_err("Invalid rsens reference property\n");
- goto fail;
+ if (rc)
+ pr_debug("Defaulting to internal rsense\n");
+ else {
+ pr_debug("Use external rsense\n");
+ iadc->external_rsense = true;
}
rc = devm_request_irq(&spmi->dev, iadc->adc->adc_irq_eoc,
diff --git a/drivers/input/touchscreen/atmel_mxt_ts.c b/drivers/input/touchscreen/atmel_mxt_ts.c
index 0ea230a..29b269a 100644
--- a/drivers/input/touchscreen/atmel_mxt_ts.c
+++ b/drivers/input/touchscreen/atmel_mxt_ts.c
@@ -128,6 +128,7 @@
#define MXT_SPT_DIGITIZER_T43 43
#define MXT_SPT_MESSAGECOUNT_T44 44
#define MXT_SPT_CTECONFIG_T46 46
+#define MXT_SPT_EXTRANOISESUPCTRLS_T58 58
#define MXT_SPT_TIMER_T61 61
/* MXT_GEN_COMMAND_T6 field */
@@ -396,6 +397,7 @@
atomic_t st_enabled;
atomic_t st_pending_irqs;
struct completion st_completion;
+ struct completion st_powerdown;
#endif
};
@@ -432,6 +434,7 @@
case MXT_SPT_USERDATA_T38:
case MXT_SPT_DIGITIZER_T43:
case MXT_SPT_CTECONFIG_T46:
+ case MXT_SPT_EXTRANOISESUPCTRLS_T58:
case MXT_SPT_TIMER_T61:
case MXT_PROCI_ADAPTIVETHRESHOLD_T55:
return true;
@@ -469,6 +472,7 @@
case MXT_SPT_USERDATA_T38:
case MXT_SPT_DIGITIZER_T43:
case MXT_SPT_CTECONFIG_T46:
+ case MXT_SPT_EXTRANOISESUPCTRLS_T58:
case MXT_SPT_TIMER_T61:
case MXT_PROCI_ADAPTIVETHRESHOLD_T55:
return true;
@@ -998,8 +1002,8 @@
static irqreturn_t mxt_filter_interrupt(struct mxt_data *data)
{
if (atomic_read(&data->st_enabled)) {
- atomic_cmpxchg(&data->st_pending_irqs, 0, 1);
- complete(&data->st_completion);
+ if (atomic_cmpxchg(&data->st_pending_irqs, 0, 1) == 0)
+ complete(&data->st_completion);
return IRQ_HANDLED;
}
return IRQ_NONE;
@@ -1993,6 +1997,7 @@
atomic_set(&data->st_enabled, 0);
complete(&data->st_completion);
mxt_interrupt(data->client->irq, data);
+ complete(&data->st_powerdown);
break;
case 1:
if (atomic_read(&data->st_enabled)) {
@@ -2005,6 +2010,8 @@
err = -EIO;
break;
}
+ INIT_COMPLETION(data->st_completion);
+ INIT_COMPLETION(data->st_powerdown);
atomic_set(&data->st_pending_irqs, 0);
atomic_set(&data->st_enabled, 1);
break;
@@ -2032,7 +2039,7 @@
return err;
if (atomic_cmpxchg(&data->st_pending_irqs, 1, 0) != 1)
- return -EBADF;
+ return -EINVAL;
return scnprintf(buf, PAGE_SIZE, "%u", 1);
}
@@ -2061,9 +2068,25 @@
.attrs = mxt_attrs,
};
+
+#if defined(CONFIG_SECURE_TOUCH)
+static void mxt_secure_touch_stop(struct mxt_data *data)
+{
+ if (atomic_read(&data->st_enabled)) {
+ complete(&data->st_completion);
+ wait_for_completion_interruptible(&data->st_powerdown);
+ }
+}
+#else
+static void mxt_secure_touch_stop(struct mxt_data *data)
+{
+}
+#endif
+
static int mxt_start(struct mxt_data *data)
{
int error;
+ mxt_secure_touch_stop(data);
/* restore the old power state values and reenable touch */
error = __mxt_write_reg(data->client, data->t7_start_addr,
@@ -2081,6 +2104,7 @@
{
int error;
u8 t7_data[T7_DATA_SIZE] = {0};
+ mxt_secure_touch_stop(data);
error = __mxt_write_reg(data->client, data->t7_start_addr,
T7_DATA_SIZE, t7_data);
@@ -2462,6 +2486,7 @@
/* calibrate */
if (data->pdata->need_calibration) {
+ mxt_secure_touch_stop(data);
error = mxt_write_object(data, MXT_GEN_COMMAND_T6,
MXT_COMMAND_CALIBRATE, 1);
if (error < 0)
@@ -2797,12 +2822,13 @@
#endif
#if defined(CONFIG_SECURE_TOUCH)
-static void __devinit secure_touch_init(struct mxt_data *data)
+static void __devinit mxt_secure_touch_init(struct mxt_data *data)
{
init_completion(&data->st_completion);
+ init_completion(&data->st_powerdown);
}
#else
-static void __devinit secure_touch_init(struct mxt_data *data)
+static void __devinit mxt_secure_touch_init(struct mxt_data *data)
{
}
#endif
@@ -2999,7 +3025,7 @@
mxt_debugfs_init(data);
- secure_touch_init(data);
+ mxt_secure_touch_init(data);
return 0;
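Earlier in this file mxt_filter_interrupt() was changed to call complete() only when atomic_cmpxchg() actually flips st_pending_irqs from 0 to 1, so a burst of interrupts wakes the waiter exactly once. The same idea in standalone C11 atomics; the names here are illustrative, not driver symbols:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int pending;	/* 0 = nothing to report, 1 = event pending */
static int completions;		/* counts how many times the waiter is woken */

static void irq_handler(void)
{
	int expected = 0;

	/* Only the 0 -> 1 transition signals the waiter; repeated
	 * interrupts before it runs do not queue extra completions. */
	if (atomic_compare_exchange_strong(&pending, &expected, 1))
		completions++;	/* stands in for complete(&st_completion) */
}

static int waiter(void)
{
	int expected = 1;

	/* Consume the event; stands in for the 1 -> 0 cmpxchg in the
	 * sysfs read path. */
	return atomic_compare_exchange_strong(&pending, &expected, 0);
}

int main(void)
{
	irq_handler();
	irq_handler();	/* second interrupt before the waiter runs */
	irq_handler();
	printf("completions=%d, consumed=%d\n", completions, waiter());
	/* prints completions=1, consumed=1 */
	return 0;
}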
diff --git a/drivers/input/touchscreen/synaptics_i2c_rmi4.c b/drivers/input/touchscreen/synaptics_i2c_rmi4.c
index 417ef83..4a021e1 100644
--- a/drivers/input/touchscreen/synaptics_i2c_rmi4.c
+++ b/drivers/input/touchscreen/synaptics_i2c_rmi4.c
@@ -34,6 +34,9 @@
#define DRIVER_NAME "synaptics_rmi4_i2c"
#define INPUT_PHYS_NAME "synaptics_rmi4_i2c/input0"
+
+#define RESET_DELAY 100
+
#define TYPE_B_PROTOCOL
#define NO_0D_WHILE_2D
@@ -68,6 +71,16 @@
#define NO_SLEEP_OFF (0 << 2)
#define NO_SLEEP_ON (1 << 2)
+enum device_status {
+ STATUS_NO_ERROR = 0x00,
+ STATUS_RESET_OCCURED = 0x01,
+ STATUS_INVALID_CONFIG = 0x02,
+ STATUS_DEVICE_FAILURE = 0x03,
+ STATUS_CONFIG_CRC_FAILURE = 0x04,
+ STATUS_FIRMWARE_CRC_FAILURE = 0x05,
+ STATUS_CRC_IN_PROGRESS = 0x06
+};
+
#define RMI4_VTG_MIN_UV 2700000
#define RMI4_VTG_MAX_UV 3300000
#define RMI4_ACTIVE_LOAD_UA 15000
@@ -79,7 +92,6 @@
#define RMI4_I2C_LPM_LOAD_UA 10
#define RMI4_GPIO_SLEEP_LOW_US 10000
-#define RMI4_GPIO_WAIT_HIGH_MS 25
static int synaptics_rmi4_i2c_read(struct synaptics_rmi4_data *rmi4_data,
unsigned short addr, unsigned char *data,
@@ -91,6 +103,10 @@
static int synaptics_rmi4_reset_device(struct synaptics_rmi4_data *rmi4_data);
+#ifdef CONFIG_PM
+static int synaptics_rmi4_suspend(struct device *dev);
+
+static int synaptics_rmi4_resume(struct device *dev);
#ifdef CONFIG_HAS_EARLYSUSPEND
static ssize_t synaptics_rmi4_full_pm_cycle_show(struct device *dev,
struct device_attribute *attr, char *buf);
@@ -101,10 +117,7 @@
static void synaptics_rmi4_early_suspend(struct early_suspend *h);
static void synaptics_rmi4_late_resume(struct early_suspend *h);
-
-static int synaptics_rmi4_suspend(struct device *dev);
-
-static int synaptics_rmi4_resume(struct device *dev);
+#endif
#endif
static ssize_t synaptics_rmi4_f01_reset_store(struct device *dev,
@@ -1482,6 +1495,16 @@
if (retval < 0)
return retval;
+ while (status.status_code == STATUS_CRC_IN_PROGRESS) {
+ msleep(1);
+ retval = synaptics_rmi4_i2c_read(rmi4_data,
+ rmi4_data->f01_data_base_addr,
+ status.data,
+ sizeof(status.data));
+ if (retval < 0)
+ return retval;
+ }
+
if (status.flash_prog == 1) {
pr_notice("%s: In flash prog mode, status = 0x%02x\n",
__func__,
@@ -1645,7 +1668,7 @@
return retval;
}
- msleep(100);
+ msleep(RESET_DELAY);
return retval;
};
@@ -1816,7 +1839,7 @@
RMI4_VTG_MIN_UV, RMI4_VTG_MAX_UV);
if (retval) {
dev_err(&rmi4_data->i2c_client->dev,
- "regulator set_vtg failed retval=%d\n",
+ "regulator set_vtg failed retval =%d\n",
retval);
goto err_set_vtg_vdd;
}
@@ -1839,7 +1862,7 @@
RMI4_I2C_VTG_MIN_UV, RMI4_I2C_VTG_MAX_UV);
if (retval) {
dev_err(&rmi4_data->i2c_client->dev,
- "reg set i2c vtg failed retval=%d\n",
+ "reg set i2c vtg failed retval =%d\n",
retval);
goto err_set_vtg_i2c;
}
@@ -2110,7 +2133,7 @@
gpio_set_value(platform_data->reset_gpio, 0);
usleep(RMI4_GPIO_SLEEP_LOW_US);
gpio_set_value(platform_data->reset_gpio, 1);
- msleep(RMI4_GPIO_WAIT_HIGH_MS);
+ msleep(RESET_DELAY);
} else
synaptics_rmi4_reset_command(rmi4_data);
@@ -2470,6 +2493,75 @@
}
#endif
+static int synaptics_rmi4_regulator_lpm(struct synaptics_rmi4_data *rmi4_data,
+ bool on)
+{
+ int retval;
+
+ if (on == false)
+ goto regulator_hpm;
+
+ retval = reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_LPM_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_ana set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_lpm;
+ }
+
+ if (rmi4_data->board->i2c_pull_up) {
+ retval = reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_i2c set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_lpm;
+ }
+ }
+
+ return 0;
+
+regulator_hpm:
+
+ retval = reg_set_optimum_mode_check(rmi4_data->vdd,
+ RMI4_ACTIVE_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_ana set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_hpm;
+ }
+
+ if (rmi4_data->board->i2c_pull_up) {
+ retval = reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+ if (retval < 0) {
+ dev_err(&rmi4_data->i2c_client->dev,
+ "Regulator vcc_i2c set_opt failed rc=%d\n",
+ retval);
+ goto fail_regulator_hpm;
+ }
+ }
+
+ return 0;
+
+fail_regulator_lpm:
+ reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_ACTIVE_LOAD_UA);
+ if (rmi4_data->board->i2c_pull_up)
+ reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LOAD_UA);
+
+ return retval;
+
+fail_regulator_hpm:
+ reg_set_optimum_mode_check(rmi4_data->vdd, RMI4_LPM_LOAD_UA);
+ if (rmi4_data->board->i2c_pull_up)
+ reg_set_optimum_mode_check(rmi4_data->vcc_i2c,
+ RMI4_I2C_LPM_LOAD_UA);
+ return retval;
+}
+
/**
* synaptics_rmi4_suspend()
*
@@ -2483,6 +2575,7 @@
static int synaptics_rmi4_suspend(struct device *dev)
{
struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ int retval;
if (!rmi4_data->sensor_sleep) {
rmi4_data->touch_stopped = true;
@@ -2491,6 +2584,12 @@
synaptics_rmi4_sensor_sleep(rmi4_data);
}
+ retval = synaptics_rmi4_regulator_lpm(rmi4_data, true);
+ if (retval < 0) {
+ dev_err(dev, "failed to enter low power mode\n");
+ return retval;
+ }
+
return 0;
}
@@ -2507,6 +2606,13 @@
static int synaptics_rmi4_resume(struct device *dev)
{
struct synaptics_rmi4_data *rmi4_data = dev_get_drvdata(dev);
+ int retval;
+
+ retval = synaptics_rmi4_regulator_lpm(rmi4_data, false);
+ if (retval < 0) {
+ dev_err(dev, "failed to enter active power mode\n");
+ return retval;
+ }
synaptics_rmi4_sensor_wake(rmi4_data);
rmi4_data->touch_stopped = false;
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index f35f0e7..aa69475 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -16,7 +16,7 @@
# MSM IOMMU support
config MSM_IOMMU
bool "MSM IOMMU Support"
- depends on ARCH_MSM8X60 || ARCH_MSM8960 || ARCH_APQ8064 || ARCH_MSM8974 || ARCH_MPQ8092 || ARCH_MSM8610 || ARCH_MSM8226 || ARCH_MSMZINC
+ depends on ARCH_MSM8X60 || ARCH_MSM8960 || ARCH_APQ8064 || ARCH_MSM8974 || ARCH_MPQ8092 || ARCH_MSM8610 || ARCH_MSM8226 || ARCH_APQ8084
select IOMMU_API
help
Support for the IOMMUs found on certain Qualcomm SOCs.
diff --git a/drivers/iommu/msm_iommu-v0.c b/drivers/iommu/msm_iommu-v0.c
index c0a4720..b1960c6 100644
--- a/drivers/iommu/msm_iommu-v0.c
+++ b/drivers/iommu/msm_iommu-v0.c
@@ -55,6 +55,9 @@
.name = "msm_iommu_sec_bus",
};
+static int msm_iommu_unmap_range(struct iommu_domain *domain, unsigned int va,
+ unsigned int len);
+
static inline void clean_pte(unsigned long *start, unsigned long *end,
int redirect)
{
@@ -167,6 +170,11 @@
/* No need to do anything. IOMMUv0 is always on. */
}
+static void *_iommu_lock_initialize(void)
+{
+ return msm_iommu_lock_initialize();
+}
+
static void _iommu_lock_acquire(void)
{
msm_iommu_lock();
@@ -182,6 +190,7 @@
.iommu_power_off = __disable_regulators,
.iommu_clk_on = __enable_clocks,
.iommu_clk_off = __disable_clocks,
+ .iommu_lock_initialize = _iommu_lock_initialize,
.iommu_lock_acquire = _iommu_lock_acquire,
.iommu_lock_release = _iommu_lock_release,
};
@@ -953,6 +962,7 @@
int prot)
{
unsigned int pa;
+ unsigned int start_va = va;
unsigned int offset = 0;
unsigned long *fl_table;
unsigned long *fl_pte;
@@ -1026,12 +1036,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
continue;
}
@@ -1085,12 +1089,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
}
@@ -1103,6 +1101,8 @@
__flush_iotlb(domain);
fail:
mutex_unlock(&msm_iommu_lock);
+ if (ret && offset > 0)
+ msm_iommu_unmap_range(domain, start_va, offset);
return ret;
}
diff --git a/drivers/iommu/msm_iommu_pagetable.c b/drivers/iommu/msm_iommu_pagetable.c
index b62bb76..9614692 100644
--- a/drivers/iommu/msm_iommu_pagetable.c
+++ b/drivers/iommu/msm_iommu_pagetable.c
@@ -431,6 +431,7 @@
struct scatterlist *sg, unsigned int len, int prot)
{
phys_addr_t pa;
+ unsigned int start_va = va;
unsigned int offset = 0;
unsigned long *fl_pte;
unsigned long fl_offset;
@@ -495,12 +496,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
continue;
}
@@ -553,12 +548,6 @@
chunk_offset = 0;
sg = sg_next(sg);
pa = get_phys_addr(sg);
- if (pa == 0) {
- pr_debug("No dma address for sg %p\n",
- sg);
- ret = -EINVAL;
- goto fail;
- }
}
}
@@ -569,6 +558,9 @@
}
fail:
+ if (ret && offset > 0)
+ msm_iommu_pagetable_unmap_range(pt, start_va, offset);
+
return ret;
}
diff --git a/drivers/leds/leds-qpnp.c b/drivers/leds/leds-qpnp.c
index 1d34b06..3667296 100644
--- a/drivers/leds/leds-qpnp.c
+++ b/drivers/leds/leds-qpnp.c
@@ -159,6 +159,27 @@
#define PWM_LUT_MAX_SIZE 63
#define RGB_LED_DISABLE 0x00
+#define MPP_MAX_LEVEL LED_FULL
+#define LED_MPP_MODE_CTRL(base) (base + 0x40)
+#define LED_MPP_VIN_CTRL(base) (base + 0x41)
+#define LED_MPP_EN_CTRL(base) (base + 0x46)
+#define LED_MPP_SINK_CTRL(base) (base + 0x4C)
+
+#define LED_MPP_CURRENT_DEFAULT 10
+#define LED_MPP_SOURCE_SEL_DEFAULT LED_MPP_MODE_ENABLE
+
+#define LED_MPP_SINK_MASK 0x07
+#define LED_MPP_MODE_MASK 0x7F
+#define LED_MPP_EN_MASK 0x80
+
+#define LED_MPP_MODE_SINK (0x06 << 4)
+#define LED_MPP_MODE_ENABLE 0x01
+#define LED_MPP_MODE_OUTPUT 0x10
+#define LED_MPP_MODE_DISABLE 0x00
+#define LED_MPP_EN_ENABLE 0x80
+#define LED_MPP_EN_DISABLE 0x00
+
+#define MPP_SOURCE_DTEST1 0x08
/**
* enum qpnp_leds - QPNP supported led ids
* @QPNP_ID_WLED - White led backlight
@@ -170,6 +191,7 @@
QPNP_ID_RGB_RED,
QPNP_ID_RGB_GREEN,
QPNP_ID_RGB_BLUE,
+ QPNP_ID_LED_MPP,
QPNP_ID_MAX,
};
@@ -240,6 +262,11 @@
static u8 rgb_pwm_debug_regs[] = {
0x45, 0x46, 0x47,
};
+
+static u8 mpp_debug_regs[] = {
+ 0x40, 0x41, 0x42, 0x45, 0x46, 0x4c,
+};
+
/**
* wled_config_data - wled configuration data
* @num_strings - number of wled strings supported
@@ -264,6 +291,16 @@
};
/**
+ * mpp_config_data - mpp configuration data
+ * @current_setting - current setting, 5 mA to 40 mA in 5 mA increments
+ */
+struct mpp_config_data {
+ u8 current_setting;
+ u8 source_sel;
+ u8 mode_ctrl;
+};
+
+/**
* flash_config_data - flash configuration data
* @current_prgm - current to be programmed, scaled by max level
* @clamp_curr - clamp current to use
@@ -336,6 +373,7 @@
struct wled_config_data *wled_cfg;
struct flash_config_data *flash_cfg;
struct rgb_config_data *rgb_cfg;
+ struct mpp_config_data *mpp_cfg;
int max_current;
bool default_on;
int turn_off_delay_ms;
@@ -458,6 +496,68 @@
return 0;
}
+static int qpnp_mpp_set(struct qpnp_led_data *led)
+{
+ int rc, val;
+
+ if (led->cdev.brightness) {
+ val = (led->cdev.brightness * LED_MPP_SINK_MASK) / LED_FULL;
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_SINK_CTRL(led->base),
+ LED_MPP_SINK_MASK, val);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable reg\n");
+ return rc;
+ }
+
+ val = led->mpp_cfg->source_sel | led->mpp_cfg->mode_ctrl;
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_MODE_CTRL(led->base), LED_MPP_MODE_MASK,
+ val);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led mode reg\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_EN_CTRL(led->base), LED_MPP_EN_MASK,
+ LED_MPP_EN_ENABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable " \
+ "reg\n");
+ return rc;
+ }
+ } else {
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_MODE_CTRL(led->base),
+ LED_MPP_MODE_MASK,
+ LED_MPP_MODE_DISABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led mode reg\n");
+ return rc;
+ }
+
+ rc = qpnp_led_masked_write(led,
+ LED_MPP_EN_CTRL(led->base),
+ LED_MPP_EN_MASK,
+ LED_MPP_EN_DISABLE);
+ if (rc) {
+ dev_err(&led->spmi_dev->dev,
+ "Failed to write led enable reg\n");
+ return rc;
+ }
+ }
+
+ qpnp_dump_regs(led, mpp_debug_regs, ARRAY_SIZE(mpp_debug_regs));
+
+ return 0;
+}
+
static int qpnp_flash_set(struct qpnp_led_data *led)
{
int rc;
@@ -750,6 +850,12 @@
dev_err(&led->spmi_dev->dev,
"RGB set brightness failed (%d)\n", rc);
break;
+ case QPNP_ID_LED_MPP:
+ rc = qpnp_mpp_set(led);
+ if (rc < 0)
+ dev_err(&led->spmi_dev->dev,
+ "MPP set brightness failed (%d)\n", rc);
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
break;
@@ -772,6 +878,9 @@
case QPNP_ID_RGB_BLUE:
led->cdev.max_brightness = RGB_MAX_LEVEL;
break;
+ case QPNP_ID_LED_MPP:
+ led->cdev.max_brightness = MPP_MAX_LEVEL;
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
return -EINVAL;
@@ -1211,6 +1320,8 @@
dev_err(&led->spmi_dev->dev,
"RGB initialize failed(%d)\n", rc);
break;
+ case QPNP_ID_LED_MPP:
+ break;
default:
dev_err(&led->spmi_dev->dev, "Invalid LED(%d)\n", led->id);
return -EINVAL;
@@ -1536,6 +1647,43 @@
return 0;
}
+static int __devinit qpnp_get_config_mpp(struct qpnp_led_data *led,
+ struct device_node *node)
+{
+ int rc;
+ u32 val;
+
+ led->mpp_cfg = devm_kzalloc(&led->spmi_dev->dev,
+ sizeof(struct mpp_config_data), GFP_KERNEL);
+ if (!led->mpp_cfg) {
+ dev_err(&led->spmi_dev->dev, "Unable to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ led->mpp_cfg->current_setting = LED_MPP_CURRENT_DEFAULT;
+ rc = of_property_read_u32(node, "qcom,current-setting", &val);
+ if (!rc)
+ led->mpp_cfg->current_setting = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ led->mpp_cfg->source_sel = LED_MPP_SOURCE_SEL_DEFAULT;
+ rc = of_property_read_u32(node, "qcom,source-sel", &val);
+ if (!rc)
+ led->mpp_cfg->source_sel = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ led->mpp_cfg->mode_ctrl = LED_MPP_MODE_SINK;
+ rc = of_property_read_u32(node, "qcom,mode-ctrl", &val);
+ if (!rc)
+ led->mpp_cfg->mode_ctrl = (u8) val;
+ else if (rc != -EINVAL)
+ return rc;
+
+ return 0;
+}
+
static int __devinit qpnp_leds_probe(struct spmi_device *spmi)
{
struct qpnp_led_data *led, *led_array;
@@ -1638,6 +1786,13 @@
"Unable to read rgb config data\n");
goto fail_id_check;
}
+ } else if (strncmp(led_label, "mpp", sizeof("mpp")) == 0) {
+ rc = qpnp_get_config_mpp(led, temp);
+ if (rc < 0) {
+ dev_err(&led->spmi_dev->dev,
+ "Unable to read mpp config data\n");
+ goto fail_id_check;
+ }
} else {
dev_err(&led->spmi_dev->dev, "No LED matching label\n");
rc = -EINVAL;
diff --git a/drivers/media/dvb/dvb-core/demux.h b/drivers/media/dvb/dvb-core/demux.h
index f0b9b05..89500f9 100644
--- a/drivers/media/dvb/dvb-core/demux.h
+++ b/drivers/media/dvb/dvb-core/demux.h
@@ -127,6 +127,7 @@
u32 cont_err_counter;
u32 ts_packets_num;
u32 ts_dropped_bytes;
+ u64 stc;
} buf;
struct {
diff --git a/drivers/media/dvb/dvb-core/dmxdev.c b/drivers/media/dvb/dvb-core/dmxdev.c
index ca71c06..5e7a09e 100644
--- a/drivers/media/dvb/dvb-core/dmxdev.c
+++ b/drivers/media/dvb/dvb-core/dmxdev.c
@@ -2406,6 +2406,7 @@
event.params.es_data.pts = dmx_data_ready->buf.pts;
event.params.es_data.dts_valid = dmx_data_ready->buf.dts_exists;
event.params.es_data.dts = dmx_data_ready->buf.dts;
+ event.params.es_data.stc = dmx_data_ready->buf.stc;
event.params.es_data.transport_error_indicator_counter =
dmx_data_ready->buf.tei_counter;
event.params.es_data.continuity_error_counter =
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
index 84f7307..9370fc9 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_core.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -142,7 +142,7 @@
buf_p = msm_gemini_hw_pingpong_active_buffer(&we_pingpong_buf);
if (buf_p) {
buf_p->framedone_len = msm_gemini_hw_encode_output_size();
- GMN_DBG("%s:%d] framedone_len %d\n", __func__, __LINE__,
+ pr_debug("%s:%d] framedone_len %d\n", __func__, __LINE__,
buf_p->framedone_len);
}
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
index 0cbb101..79c533e 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010,2013 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -297,6 +297,8 @@
struct msm_gemini_hw_cmd *hw_cmd_p;
+ pr_debug("%s:%d] pingpong index %d", __func__, __LINE__,
+ pingpong_index);
if (pingpong_index == 0) {
hw_cmd_p = &hw_cmd_we_ping_update[0];
@@ -486,40 +488,38 @@
return is_copy_to_user;
}
-void msm_gemini_hw_region_dump(int size)
+#ifdef MSM_GMN_DBG_DUMP
+void msm_gemini_io_dump(int size)
{
- uint32_t *p;
- uint8_t *p8;
-
- if (size > gemini_region_size)
- GMN_PR_ERR("%s:%d] wrong region dump size\n",
- __func__, __LINE__);
-
- p = (uint32_t *) gemini_region_base;
- while (size >= 16) {
- GMN_DBG("0x%08X] %08X %08X %08X %08X\n",
- gemini_region_size - size,
- readl(p), readl(p+1), readl(p+2), readl(p+3));
- p += 4;
- size -= 16;
- }
-
- if (size > 0) {
- uint32_t d;
- GMN_DBG("0x%08X] ", gemini_region_size - size);
- while (size >= 4) {
- GMN_DBG("%08X ", readl(p++));
- size -= 4;
+ char line_str[128], *p_str;
+ void __iomem *addr = gemini_region_base;
+ int i;
+ u32 *p = (u32 *) addr;
+ u32 data;
+ pr_info("%s: %p %d reg_size %d\n", __func__, addr, size,
+ gemini_region_size);
+ line_str[0] = '\0';
+ p_str = line_str;
+ for (i = 0; i < size/4; i++) {
+ if (i % 4 == 0) {
+ snprintf(p_str, 12, "%08x: ", (u32) p);
+ p_str += 10;
}
-
- d = readl(p);
- p8 = (uint8_t *) &d;
- while (size) {
- GMN_DBG("%02X", *p8++);
- size--;
+ data = readl_relaxed(p++);
+ snprintf(p_str, 12, "%08x ", data);
+ p_str += 9;
+ if ((i + 1) % 4 == 0) {
+ pr_info("%s\n", line_str);
+ line_str[0] = '\0';
+ p_str = line_str;
}
-
- GMN_DBG("\n");
}
+ if (line_str[0] != '\0')
+ pr_info("%s\n", line_str);
}
+#else
+void msm_gemini_io_dump(int size)
+{
+}
+#endif
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
index 1c8de19..aa6c4aa1 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -95,7 +95,7 @@
int msm_gemini_hw_wait(struct msm_gemini_hw_cmd *hw_cmd_p, int m_us);
void msm_gemini_hw_delay(struct msm_gemini_hw_cmd *hw_cmd_p, int m_us);
int msm_gemini_hw_exec_cmds(struct msm_gemini_hw_cmd *hw_cmd_p, int m_cmds);
-void msm_gemini_hw_region_dump(int size);
+void msm_gemini_io_dump(int size);
#define MSM_GEMINI_PIPELINE_CLK_128MHZ 128 /* 8MP 128MHz */
#define MSM_GEMINI_PIPELINE_CLK_140MHZ 140 /* 9MP 140MHz */
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
index ea13d68..2fe6038 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_hw_reg.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010, 2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -171,6 +171,6 @@
#define HWIO_JPEG_IRQ_STATUS_RMSK 0xffffffff
#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_ADDR (GEMINI_REG_BASE + 0x00000034)
-#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_RMSK 0xffffff
+#define HWIO_JPEG_STATUS_ENCODE_OUTPUT_SIZE_RMSK 0xffffffff
#endif /* MSM_GEMINI_HW_REG_H */
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
index ed2222a..50c7284 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -16,6 +16,8 @@
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <media/msm_gemini.h>
+#include <mach/msm_bus.h>
+#include <mach/msm_bus_board.h>
#include "msm_gemini_sync.h"
#include "msm_gemini_core.h"
#include "msm_gemini_platform.h"
@@ -23,6 +25,9 @@
static int release_buf;
+/* size is based on 4k page size */
+static const int g_max_out_size = 0x7ff000;
+
/*************** queue helper ****************/
inline void msm_gemini_q_init(char const *name, struct msm_gemini_q *q_p)
{
@@ -180,7 +185,7 @@
{
int rc = 0;
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
+ pr_debug("%s:%d] buf_in %p", __func__, __LINE__, buf_in);
if (buf_in) {
buf_in->vbuf.framedone_len = buf_in->framedone_len;
@@ -266,19 +271,88 @@
/*************** output queue ****************/
+int msm_gemini_get_out_buffer(struct msm_gemini_device *pgmn_dev,
+ struct msm_gemini_hw_buf *p_outbuf)
+{
+ int buf_size = 0;
+ int bytes_remaining = 0;
+ if (pgmn_dev->out_offset >= pgmn_dev->out_buf.y_len) {
+ GMN_PR_ERR("%s:%d] no more buffers", __func__, __LINE__);
+ return -EINVAL;
+ }
+ bytes_remaining = pgmn_dev->out_buf.y_len - pgmn_dev->out_offset;
+ buf_size = min(bytes_remaining, pgmn_dev->max_out_size);
+
+ pgmn_dev->out_frag_cnt++;
+ pr_debug("%s:%d] buf_size[%d] %d", __func__, __LINE__,
+ pgmn_dev->out_frag_cnt, buf_size);
+ p_outbuf->y_len = buf_size;
+ p_outbuf->y_buffer_addr = pgmn_dev->out_buf.y_buffer_addr +
+ pgmn_dev->out_offset;
+ pgmn_dev->out_offset += buf_size;
+ return 0;
+}
+
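msm_gemini_get_out_buffer() above carves the single user-supplied output buffer into pieces of at most max_out_size bytes, advancing out_offset each time the ping/pong handler asks for the next fragment. A standalone sketch of that carving with a made-up buffer length:

#include <stdio.h>

int main(void)
{
	const unsigned int max_out_size = 0x7ff000;	/* from g_max_out_size */
	unsigned int y_len = 20 * 1024 * 1024;		/* made-up output buffer */
	unsigned int offset = 0;
	int frag = 0;

	while (offset < y_len) {
		unsigned int remaining = y_len - offset;
		unsigned int size = remaining < max_out_size ?
					remaining : max_out_size;

		printf("fragment %d: offset 0x%08x size 0x%08x\n",
		       ++frag, offset, size);
		offset += size;
	}
	/* 20 MiB with an ~8 MiB cap yields three fragments. */
	return 0;
}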
+int msm_gemini_outmode_single_we_pingpong_irq(
+ struct msm_gemini_device *pgmn_dev,
+ struct msm_gemini_core_buf *buf_in)
+{
+ int rc = 0;
+ struct msm_gemini_core_buf out_buf;
+ int frame_done = buf_in &&
+ buf_in->vbuf.type == MSM_GEMINI_EVT_FRAMEDONE;
+ pr_debug("%s:%d] framedone %d", __func__, __LINE__, frame_done);
+ if (!pgmn_dev->out_buf_set) {
+ pr_err("%s:%d] output buffer not set",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ if (frame_done) {
+ /* send the buffer back */
+ pgmn_dev->out_buf.vbuf.framedone_len = buf_in->framedone_len;
+ pgmn_dev->out_buf.vbuf.type = MSM_GEMINI_EVT_FRAMEDONE;
+ rc = msm_gemini_q_in_buf(&pgmn_dev->output_rtn_q,
+ &pgmn_dev->out_buf);
+ if (rc) {
+ pr_err("%s:%d] cannot queue the output buffer",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ rc = msm_gemini_q_wakeup(&pgmn_dev->output_rtn_q);
+ /*
+ * reset the output buffer since the ownership is
+ * transferred to the rtn queue
+ */
+ if (!rc)
+ pgmn_dev->out_buf_set = 0;
+ } else {
+ /* configure ping/pong */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc)
+ msm_gemini_core_we_buf_reset(&out_buf);
+ else
+ msm_gemini_core_we_buf_update(&out_buf);
+ }
+ return rc;
+}
+
int msm_gemini_we_pingpong_irq(struct msm_gemini_device *pgmn_dev,
struct msm_gemini_core_buf *buf_in)
{
int rc = 0;
struct msm_gemini_core_buf *buf_out;
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
+ pr_debug("%s:%d] Enter mode %d", __func__, __LINE__,
+ pgmn_dev->out_mode);
+
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_SINGLE)
+ return msm_gemini_outmode_single_we_pingpong_irq(pgmn_dev,
+ buf_in);
+
if (buf_in) {
- GMN_DBG("%s:%d] 0x%08x %d\n", __func__, __LINE__,
+ pr_debug("%s:%d] 0x%08x %d\n", __func__, __LINE__,
(int) buf_in->y_buffer_addr, buf_in->y_len);
rc = msm_gemini_q_in_buf(&pgmn_dev->output_rtn_q, buf_in);
} else {
- GMN_DBG("%s:%d] no output return buffer\n", __func__,
+ pr_debug("%s:%d] no output return buffer\n", __func__,
__LINE__);
rc = -1;
return rc;
@@ -291,7 +365,7 @@
kfree(buf_out);
} else {
msm_gemini_core_we_buf_reset(buf_in);
- GMN_DBG("%s:%d] no output buffer\n", __func__, __LINE__);
+ pr_debug("%s:%d] no output buffer\n", __func__, __LINE__);
rc = -2;
}
@@ -339,6 +413,43 @@
return 0;
}
+int msm_gemini_set_output_buf(struct msm_gemini_device *pgmn_dev,
+ void __user *arg)
+{
+ struct msm_gemini_buf buf_cmd;
+
+ if (pgmn_dev->out_buf_set) {
+ pr_err("%s:%d] outbuffer buffer already provided",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+
+ if (copy_from_user(&buf_cmd, arg, sizeof(struct msm_gemini_buf))) {
+ pr_err("%s:%d] failed\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+
+ GMN_DBG("%s:%d] output addr 0x%08x len %d", __func__, __LINE__,
+ (int) buf_cmd.vaddr,
+ buf_cmd.y_len);
+
+ pgmn_dev->out_buf.y_buffer_addr = msm_gemini_platform_v2p(
+ buf_cmd.fd,
+ buf_cmd.y_len,
+ &pgmn_dev->out_buf.file,
+ &pgmn_dev->out_buf.handle);
+ if (!pgmn_dev->out_buf.y_buffer_addr) {
+ pr_err("%s:%d] cannot map the output address",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ pgmn_dev->out_buf.y_len = buf_cmd.y_len;
+ pgmn_dev->out_buf.vbuf = buf_cmd;
+ pgmn_dev->out_buf_set = 1;
+
+ return 0;
+}
+
int msm_gemini_output_buf_enqueue(struct msm_gemini_device *pgmn_dev,
void __user *arg)
{
@@ -456,6 +567,7 @@
struct msm_gemini_core_buf *buf_p;
struct msm_gemini_buf buf_cmd;
int rc = 0;
+ struct msm_bus_scale_pdata *p_bus_scale_data = NULL;
if (copy_from_user(&buf_cmd, arg, sizeof(struct msm_gemini_buf))) {
GMN_PR_ERR("%s:%d] failed\n", __func__, __LINE__);
@@ -484,9 +596,9 @@
return rc;
}
} else {
- buf_p->y_buffer_addr = msm_gemini_platform_v2p(buf_cmd.fd,
- buf_cmd.y_len + buf_cmd.cbcr_len, &buf_p->file,
- &buf_p->handle) + buf_cmd.offset + buf_cmd.y_off;
+ buf_p->y_buffer_addr = msm_gemini_platform_v2p(buf_cmd.fd,
+ buf_cmd.y_len + buf_cmd.cbcr_len, &buf_p->file,
+ &buf_p->handle) + buf_cmd.offset + buf_cmd.y_off;
}
buf_p->y_len = buf_cmd.y_len;
@@ -504,6 +616,30 @@
return -1;
}
buf_p->vbuf = buf_cmd;
+ buf_p->vbuf.type = MSM_GEMINI_EVT_RESET;
+
+ /* Set bus vectors */
+ p_bus_scale_data = (struct msm_bus_scale_pdata *)
+ pgmn_dev->pdev->dev.platform_data;
+ if (pgmn_dev->bus_perf_client &&
+ (MSM_GMN_OUTMODE_SINGLE == pgmn_dev->out_mode)) {
+ int rc;
+ struct msm_bus_paths *path = &(p_bus_scale_data->usecase[1]);
+ GMN_DBG("%s:%d] Update bus bandwidth", __func__, __LINE__);
+ if (pgmn_dev->op_mode & MSM_GEMINI_MODE_OFFLINE_ENCODE) {
+ path->vectors[0].ab = (buf_p->y_len + buf_p->cbcr_len) *
+ 15 * 2;
+ path->vectors[0].ib = path->vectors[0].ab;
+ path->vectors[1].ab = 0;
+ path->vectors[1].ib = 0;
+ }
+ rc = msm_bus_scale_client_update_request(
+ pgmn_dev->bus_perf_client, 1);
+ if (rc < 0) {
+ GMN_PR_ERR("%s:%d] update_request fails %d",
+ __func__, __LINE__, rc);
+ }
+ }
msm_gemini_q_in(&pgmn_dev->input_buf_q, buf_p);
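
The block above sizes the bus request from the frame itself: for single-output offline encode the averaged bandwidth (ab) is the per-frame byte count times 15 * 2, and the instantaneous bandwidth (ib) simply mirrors it. A minimal user-space sketch of that arithmetic follows; reading the factor as roughly 15 fps of read-plus-write traffic, and the 12 MP YUV 4:2:0 frame size, are assumptions for illustration only.

/*
 * Illustrative sketch, not part of the patch: reproduce the ab/ib
 * arithmetic used for the single-output offline encode bus vote.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t y_len    = 4000ULL * 3000;          /* luma plane, bytes (assumed frame) */
    uint64_t cbcr_len = y_len / 2;               /* chroma plane, YUV 4:2:0           */
    uint64_t ab = (y_len + cbcr_len) * 15 * 2;   /* same expression as in the patch   */
    uint64_t ib = ab;                            /* ib mirrors ab                     */

    printf("frame bytes : %llu\n", (unsigned long long)(y_len + cbcr_len));
    printf("ab (bytes/s): %llu\n", (unsigned long long)ab);
    printf("ib (bytes/s): %llu\n", (unsigned long long)ib);
    return 0;
}
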
@@ -545,6 +681,9 @@
int __msm_gemini_open(struct msm_gemini_device *pgmn_dev)
{
int rc;
+ struct msm_bus_scale_pdata *p_bus_scale_data =
+ (struct msm_bus_scale_pdata *)
+ pgmn_dev->pdev->dev.platform_data;
mutex_lock(&pgmn_dev->lock);
if (pgmn_dev->open_count) {
@@ -576,7 +715,23 @@
msm_gemini_q_cleanup(&pgmn_dev->input_rtn_q);
msm_gemini_q_cleanup(&pgmn_dev->input_buf_q);
msm_gemini_core_init();
+ pgmn_dev->out_mode = MSM_GMN_OUTMODE_FRAGMENTED;
+ pgmn_dev->out_buf_set = 0;
+ pgmn_dev->out_offset = 0;
+ pgmn_dev->max_out_size = g_max_out_size;
+ pgmn_dev->out_frag_cnt = 0;
+ pgmn_dev->bus_perf_client = 0;
+ if (p_bus_scale_data) {
+ GMN_DBG("%s:%d] register bus client", __func__, __LINE__);
+ pgmn_dev->bus_perf_client =
+ msm_bus_scale_register_client(p_bus_scale_data);
+ if (!pgmn_dev->bus_perf_client) {
+ GMN_PR_ERR("%s:%d] bus client register failed",
+ __func__, __LINE__);
+ return -EINVAL;
+ }
+ }
GMN_DBG("%s:%d] success\n", __func__, __LINE__);
return rc;
}
@@ -593,13 +748,23 @@
pgmn_dev->open_count--;
mutex_unlock(&pgmn_dev->lock);
- msm_gemini_core_release(release_buf);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED) {
+ msm_gemini_core_release(release_buf);
+ } else if (pgmn_dev->out_buf_set) {
+ msm_gemini_platform_p2v(pgmn_dev->out_buf.file,
+ &pgmn_dev->out_buf.handle);
+ }
msm_gemini_q_cleanup(&pgmn_dev->evt_q);
msm_gemini_q_cleanup(&pgmn_dev->output_rtn_q);
msm_gemini_outbuf_q_cleanup(&pgmn_dev->output_buf_q);
msm_gemini_q_cleanup(&pgmn_dev->input_rtn_q);
msm_gemini_outbuf_q_cleanup(&pgmn_dev->input_buf_q);
+ if (pgmn_dev->bus_perf_client) {
+ msm_bus_scale_unregister_client(pgmn_dev->bus_perf_client);
+ pgmn_dev->bus_perf_client = 0;
+ }
+
if (pgmn_dev->open_count)
GMN_PR_ERR(KERN_ERR "%s: multiple opens\n", __func__);
@@ -699,29 +864,63 @@
}
}
- for (i = 0; i < 2; i++) {
- buf_out_free[i] = msm_gemini_q_out(&pgmn_dev->output_buf_q);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED) {
+ for (i = 0; i < 2; i++) {
+ buf_out_free[i] =
+ msm_gemini_q_out(&pgmn_dev->output_buf_q);
- if (buf_out_free[i]) {
- msm_gemini_core_we_buf_update(buf_out_free[i]);
- } else if (i == 1) {
- /* set the pong to same address as ping */
- buf_out_free[0]->y_len >>= 1;
- buf_out_free[0]->y_buffer_addr +=
- buf_out_free[0]->y_len;
- msm_gemini_core_we_buf_update(buf_out_free[0]);
- /* since ping and pong are same buf release only once*/
- release_buf = 0;
- } else {
- GMN_DBG("%s:%d] no output buffer\n",
- __func__, __LINE__);
- break;
+ if (buf_out_free[i]) {
+ msm_gemini_core_we_buf_update(buf_out_free[i]);
+ } else if (i == 1) {
+ /* set the pong to same address as ping */
+ buf_out_free[0]->y_len >>= 1;
+ buf_out_free[0]->y_buffer_addr +=
+ buf_out_free[0]->y_len;
+ msm_gemini_core_we_buf_update(buf_out_free[0]);
+ /*
+ * since ping and pong are same buf
+ * release only once
+ */
+ release_buf = 0;
+ } else {
+ GMN_DBG("%s:%d] no output buffer\n",
+ __func__, __LINE__);
+ break;
+ }
}
+ for (i = 0; i < 2; i++)
+ kfree(buf_out_free[i]);
+ } else {
+ struct msm_gemini_core_buf out_buf;
+ /*
+ * Since the same buffer is fragmented, p2v need not be
+ * called for all the buffers
+ */
+ release_buf = 0;
+ if (!pgmn_dev->out_buf_set) {
+ GMN_PR_ERR("%s:%d] output buffer not set",
+ __func__, __LINE__);
+ return -EFAULT;
+ }
+ /* configure ping */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc) {
+ GMN_PR_ERR("%s:%d] no output buffer for ping",
+ __func__, __LINE__);
+ return rc;
+ }
+ msm_gemini_core_we_buf_update(&out_buf);
+ /* configure pong */
+ rc = msm_gemini_get_out_buffer(pgmn_dev, &out_buf);
+ if (rc) {
+ GMN_DBG("%s:%d] no output buffer for pong",
+ __func__, __LINE__);
+ /* fall through to configure same buffer */
+ }
+ msm_gemini_core_we_buf_update(&out_buf);
+ msm_gemini_io_dump(0x150);
}
- for (i = 0; i < 2; i++)
- kfree(buf_out_free[i]);
-
rc = msm_gemini_ioctl_hw_cmds(pgmn_dev, arg);
GMN_DBG("%s:%d]\n", __func__, __LINE__);
return rc;
@@ -746,12 +945,22 @@
return rc;
}
-int msm_gemini_ioctl_test_dump_region(struct msm_gemini_device *pgmn_dev,
- unsigned long arg)
+int msm_gemini_ioctl_set_outmode(struct msm_gemini_device *pgmn_dev,
+ void * __user arg)
{
- GMN_DBG("%s:%d] Enter\n", __func__, __LINE__);
- msm_gemini_hw_region_dump(arg);
- return 0;
+ int rc = 0;
+ enum msm_gmn_out_mode mode;
+
+ if (copy_from_user(&mode, arg, sizeof(mode))) {
+ GMN_PR_ERR("%s:%d] failed\n", __func__, __LINE__);
+ return -EFAULT;
+ }
+ GMN_DBG("%s:%d] mode %d", __func__, __LINE__, mode);
+
+ if ((mode == MSM_GMN_OUTMODE_FRAGMENTED)
+ || (mode == MSM_GMN_OUTMODE_SINGLE))
+ pgmn_dev->out_mode = mode;
+ return rc;
}
long __msm_gemini_ioctl(struct msm_gemini_device *pgmn_dev,
@@ -790,8 +999,12 @@
break;
case MSM_GMN_IOCTL_OUTPUT_BUF_ENQUEUE:
- rc = msm_gemini_output_buf_enqueue(pgmn_dev,
- (void __user *) arg);
+ if (pgmn_dev->out_mode == MSM_GMN_OUTMODE_FRAGMENTED)
+ rc = msm_gemini_output_buf_enqueue(pgmn_dev,
+ (void __user *) arg);
+ else
+ rc = msm_gemini_set_output_buf(pgmn_dev,
+ (void __user *) arg);
break;
case MSM_GMN_IOCTL_OUTPUT_GET:
@@ -818,8 +1031,8 @@
rc = msm_gemini_ioctl_hw_cmds(pgmn_dev, (void __user *) arg);
break;
- case MSM_GMN_IOCTL_TEST_DUMP_REGION:
- rc = msm_gemini_ioctl_test_dump_region(pgmn_dev, arg);
+ case MSM_GMN_IOCTL_SET_MODE:
+ rc = msm_gemini_ioctl_set_outmode(pgmn_dev, (void __user *)arg);
break;
default:
diff --git a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
index d1a43e1..88e9615 100644
--- a/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
+++ b/drivers/media/platform/msm/camera_v1/gemini/msm_gemini_sync.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2010, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2010,2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -74,6 +74,16 @@
struct msm_gemini_q input_buf_q;
struct v4l2_subdev subdev;
+ enum msm_gmn_out_mode out_mode;
+
+ /* single out mode parameters */
+ struct msm_gemini_hw_buf out_buf;
+ int out_offset;
+ int out_buf_set;
+ int max_out_size;
+ int out_frag_cnt;
+
+ uint32_t bus_perf_client;
};
int __msm_gemini_open(struct msm_gemini_device *pgmn_dev);
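
The fields added above carry the state for the new MSM_GMN_OUTMODE_SINGLE path: one pre-mapped output buffer (out_buf, out_buf_set) that is handed to the write engine in fragments tracked by out_offset, max_out_size and out_frag_cnt. msm_gemini_get_out_buffer() itself is not shown in this series, so the sketch below only illustrates the kind of offset bookkeeping those fields suggest, under that assumption.

/*
 * Illustrative user-space sketch, not part of the patch: carve one
 * pre-mapped output buffer into successive ping/pong fragments, as the
 * new out_offset/max_out_size/out_frag_cnt fields suggest.
 */
#include <stdio.h>

struct frag {
    unsigned long addr;   /* start of this fragment */
    int len;              /* bytes in this fragment */
};

/* Fill *f with the next fragment and return 0, or -1 when exhausted. */
static int get_next_fragment(unsigned long base, int total_len,
                             int max_out_size, int *offset,
                             int *frag_cnt, struct frag *f)
{
    int remaining = total_len - *offset;

    if (remaining <= 0)
        return -1;

    f->addr = base + *offset;
    f->len = remaining > max_out_size ? max_out_size : remaining;
    *offset += f->len;
    (*frag_cnt)++;
    return 0;
}

int main(void)
{
    unsigned long base = 0x10000000;      /* assumed mapped address   */
    int total_len = 10 * 1024 * 1024;     /* assumed bitstream buffer */
    int max_out_size = 4 * 1024 * 1024;   /* assumed fragment limit   */
    int offset = 0, frag_cnt = 0;
    struct frag f;

    while (!get_next_fragment(base, total_len, max_out_size,
                              &offset, &frag_cnt, &f))
        printf("frag %d: addr 0x%08lx len %d\n", frag_cnt, f.addr, f.len);

    return 0;
}
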
diff --git a/drivers/media/platform/msm/camera_v2/camera/camera.c b/drivers/media/platform/msm/camera_v2/camera/camera.c
index 71087d9..4579cee 100644
--- a/drivers/media/platform/msm/camera_v2/camera/camera.c
+++ b/drivers/media/platform/msm/camera_v2/camera/camera.c
@@ -595,13 +595,18 @@
struct v4l2_event event;
struct msm_video_device *pvdev = video_drvdata(filep);
struct camera_v4l2_private *sp = fh_to_private(filep->private_data);
-
BUG_ON(!pvdev);
atomic_sub_return(1, &pvdev->opened);
if (atomic_read(&pvdev->opened) == 0) {
+ camera_pack_event(filep, MSM_CAMERA_SET_PARM,
+ MSM_CAMERA_PRIV_DEL_STREAM, -1, &event);
+
+ /* Do not wait, imaging server may have crashed */
+ msm_post_event(&event, MSM_POST_EVT_TIMEOUT);
+
camera_pack_event(filep, MSM_CAMERA_DEL_SESSION, 0, -1, &event);
/* Donot wait, imaging server may have crashed */
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
index 22e8400..8ea1a00 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.c
@@ -281,16 +281,21 @@
list_for_each_entry(temp_buf_info,
&bufq->share_head, share_list) {
if (!temp_buf_info->buf_used[id]) {
- *buf_info = temp_buf_info;
temp_buf_info->buf_used[id] = 1;
temp_buf_info->buf_get_count++;
if (temp_buf_info->buf_get_count ==
bufq->buf_client_count)
list_del_init(
&temp_buf_info->share_list);
+ if (temp_buf_info->buf_reuse_flag) {
+ kfree(temp_buf_info);
+ } else {
+ *buf_info = temp_buf_info;
+ rc = 0;
+ }
spin_unlock_irqrestore(
&bufq->bufq_lock, flags);
- return 0;
+ return rc;
}
}
}
@@ -322,21 +327,31 @@
}
if (!(*buf_info)) {
- spin_unlock_irqrestore(&bufq->bufq_lock, flags);
- return rc;
- }
-
- (*buf_info)->state = MSM_ISP_BUFFER_STATE_DEQUEUED;
- if (bufq->buf_type == ISP_SHARE_BUF) {
- memset((*buf_info)->buf_used, 0,
- sizeof(uint8_t) * bufq->buf_client_count);
- (*buf_info)->buf_used[id] = 1;
- (*buf_info)->buf_get_count = 1;
- (*buf_info)->buf_put_count = 0;
- list_add_tail(&(*buf_info)->share_list, &bufq->share_head);
+ if (bufq->buf_type == ISP_SHARE_BUF) {
+ temp_buf_info = kzalloc(
+ sizeof(struct msm_isp_buffer), GFP_ATOMIC);
+ temp_buf_info->buf_reuse_flag = 1;
+ temp_buf_info->buf_used[id] = 1;
+ temp_buf_info->buf_get_count = 1;
+ list_add_tail(&temp_buf_info->share_list,
+ &bufq->share_head);
+ }
+ } else {
+ (*buf_info)->state = MSM_ISP_BUFFER_STATE_DEQUEUED;
+ if (bufq->buf_type == ISP_SHARE_BUF) {
+ memset((*buf_info)->buf_used, 0,
+ sizeof(uint8_t) * bufq->buf_client_count);
+ (*buf_info)->buf_used[id] = 1;
+ (*buf_info)->buf_get_count = 1;
+ (*buf_info)->buf_put_count = 0;
+ (*buf_info)->buf_reuse_flag = 0;
+ list_add_tail(&(*buf_info)->share_list,
+ &bufq->share_head);
+ }
+ rc = 0;
}
spin_unlock_irqrestore(&bufq->bufq_lock, flags);
- return 0;
+ return rc;
}
static int msm_isp_put_buf(struct msm_isp_buf_mgr *buf_mgr,
@@ -648,22 +663,35 @@
}
}
-static int msm_isp_attach_ctx(struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx)
+static void msm_isp_register_ctx(struct msm_isp_buf_mgr *buf_mgr,
+ struct device **iommu_ctx, int num_iommu_ctx)
{
- int rc;
- rc = iommu_attach_device(buf_mgr->iommu_domain, iommu_ctx);
- if (rc) {
- pr_err("%s: Iommu attach error\n", __func__);
- return -EINVAL;
+ int i;
+ buf_mgr->num_iommu_ctx = num_iommu_ctx;
+ for (i = 0; i < num_iommu_ctx; i++)
+ buf_mgr->iommu_ctx[i] = iommu_ctx[i];
+}
+
+static int msm_isp_attach_ctx(struct msm_isp_buf_mgr *buf_mgr)
+{
+ int rc, i;
+ for (i = 0; i < buf_mgr->num_iommu_ctx; i++) {
+ rc = iommu_attach_device(buf_mgr->iommu_domain,
+ buf_mgr->iommu_ctx[i]);
+ if (rc) {
+ pr_err("%s: Iommu attach error\n", __func__);
+ return -EINVAL;
+ }
}
return 0;
}
-static void msm_isp_detach_ctx(struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx)
+static void msm_isp_detach_ctx(struct msm_isp_buf_mgr *buf_mgr)
{
- iommu_detach_device(buf_mgr->iommu_domain, iommu_ctx);
+ int i;
+ for (i = 0; i < buf_mgr->num_iommu_ctx; i++)
+ iommu_detach_device(buf_mgr->iommu_domain,
+ buf_mgr->iommu_ctx[i]);
}
static int msm_isp_init_isp_buf_mgr(
@@ -680,6 +708,7 @@
}
CDBG("%s: E\n", __func__);
+ msm_isp_attach_ctx(buf_mgr);
buf_mgr->num_buf_q = num_buf_q;
buf_mgr->bufq =
kzalloc(sizeof(struct msm_isp_bufq) * num_buf_q,
@@ -704,6 +733,7 @@
ion_client_destroy(buf_mgr->client);
kfree(buf_mgr->bufq);
buf_mgr->num_buf_q = 0;
+ msm_isp_detach_ctx(buf_mgr);
return 0;
}
@@ -740,8 +770,7 @@
.flush_buf = msm_isp_flush_buf,
.buf_done = msm_isp_buf_done,
.buf_divert = msm_isp_buf_divert,
- .attach_ctx = msm_isp_attach_ctx,
- .detach_ctx = msm_isp_detach_ctx,
+ .register_ctx = msm_isp_register_ctx,
.buf_mgr_init = msm_isp_init_isp_buf_mgr,
.buf_mgr_deinit = msm_isp_deinit_isp_buf_mgr,
};
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
index d4e7c88..fda1a57 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_buf_mgr.h
@@ -65,6 +65,7 @@
uint8_t buf_used[ISP_SHARE_BUF_CLIENT];
uint8_t buf_get_count;
uint8_t buf_put_count;
+ uint8_t buf_reuse_flag;
};
struct msm_isp_bufq {
@@ -111,10 +112,8 @@
int (*buf_divert) (struct msm_isp_buf_mgr *buf_mgr,
uint32_t bufq_handle, uint32_t buf_index,
struct timeval *tv, uint32_t frame_id);
- int (*attach_ctx) (struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx);
- void (*detach_ctx) (struct msm_isp_buf_mgr *buf_mgr,
- struct device *iommu_ctx);
+ void (*register_ctx) (struct msm_isp_buf_mgr *buf_mgr,
+ struct device **iommu_ctx, int num_iommu_ctx);
int (*buf_mgr_init) (struct msm_isp_buf_mgr *buf_mgr,
const char *ctx_name, uint16_t num_buf_q);
int (*buf_mgr_deinit) (struct msm_isp_buf_mgr *buf_mgr);
@@ -136,6 +135,9 @@
/*IOMMU specific*/
int iommu_domain_num;
struct iommu_domain *iommu_domain;
+
+ int num_iommu_ctx;
+ struct device *iommu_ctx[2];
};
int msm_isp_create_isp_buf_mgr(struct msm_isp_buf_mgr *buf_mgr,
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
index 447c752..b31b3f1 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp.c
@@ -138,6 +138,8 @@
kfree(vfe_dev);
return -EINVAL;
}
+ vfe_dev->buf_mgr->ops->register_ctx(vfe_dev->buf_mgr,
+ &vfe_dev->iommu_ctx[0], vfe_dev->hw_info->num_iommu_ctx);
vfe_dev->vfe_open_cnt = 0;
end:
return rc;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
index 1b762ea..7bc2b7d 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp.h
@@ -148,7 +148,8 @@
struct msm_vfe_stats_stream *stream_info);
void (*clear_framedrop) (struct vfe_device *vfe_dev,
struct msm_vfe_stats_stream *stream_info);
- void (*cfg_comp_mask) (struct vfe_device *vfe_dev);
+ void (*cfg_comp_mask) (struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable);
void (*cfg_wm_irq_mask) (struct vfe_device *vfe_dev,
struct msm_vfe_stats_stream *stream_info);
void (*clear_wm_irq_mask) (struct vfe_device *vfe_dev,
@@ -294,6 +295,7 @@
composite_info[MAX_NUM_COMPOSITE_MASK];
uint8_t num_used_composite_mask;
uint32_t stream_update;
+ enum msm_isp_camif_update_state pipeline_update;
struct msm_vfe_src_info src_info[VFE_SRC_MAX];
uint16_t stream_handle_cnt;
unsigned long event_mask;
@@ -312,6 +314,7 @@
STATS_ACTIVE,
STATS_START_PENDING,
STATS_STOP_PENDING,
+ STATS_STARTING,
STATS_STOPPING,
};
@@ -319,9 +322,11 @@
uint32_t session_id;
uint32_t stream_id;
uint32_t stream_handle;
+ uint32_t composite_flag;
enum msm_isp_stats_type stats_type;
enum msm_vfe_stats_state state;
uint32_t framedrop_pattern;
+ uint32_t framedrop_period;
uint32_t irq_subsample_pattern;
uint32_t buffer_offset;
@@ -331,11 +336,10 @@
struct msm_vfe_stats_shared_data {
struct msm_vfe_stats_stream stream_info[MSM_ISP_STATS_MAX];
- enum msm_vfe_stats_pipeline_policy stats_pipeline_policy;
- uint32_t comp_framedrop_pattern;
- uint32_t comp_irq_subsample_pattern;
uint8_t num_active_stream;
+ atomic_t stats_comp_mask;
uint16_t stream_handle_cnt;
+ atomic_t stats_update;
};
struct msm_vfe_tasklet_queue_cmd {
@@ -380,6 +384,7 @@
struct completion reset_complete;
struct completion halt_complete;
struct completion stream_config_complete;
+ struct completion stats_config_complete;
struct mutex realtime_mutex;
struct mutex core_mutex;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
index d4f6a07..a251f0a 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp32.c
@@ -114,7 +114,7 @@
msm_camera_io_w(0x07FFFFFF, vfe_dev->vfe_base + 0xC);
/* BUS_CFG */
msm_camera_io_w(0x00000001, vfe_dev->vfe_base + 0x3C);
- msm_camera_io_w(0x00000025, vfe_dev->vfe_base + 0x1C);
+ msm_camera_io_w(0x01000025, vfe_dev->vfe_base + 0x1C);
msm_camera_io_w_mb(0x1DFFFFFF, vfe_dev->vfe_base + 0x20);
msm_camera_io_w(0xFFFFFFFF, vfe_dev->vfe_base + 0x24);
msm_camera_io_w_mb(0x1FFFFFFF, vfe_dev->vfe_base + 0x28);
@@ -304,6 +304,8 @@
if (vfe_dev->axi_data.stream_update)
msm_isp_axi_stream_update(vfe_dev);
+ if (atomic_read(&vfe_dev->stats_data.stats_update))
+ msm_isp_stats_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
msm_isp_update_error_frame_count(vfe_dev);
@@ -767,7 +769,8 @@
}
}
-static void msm_vfe32_stats_cfg_comp_mask(struct vfe_device *vfe_dev)
+static void msm_vfe32_stats_cfg_comp_mask(struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable)
{
return;
}
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
index 256d136..5a17635 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp40.c
@@ -266,7 +266,7 @@
msm_camera_io_w(0xC001FF7F, vfe_dev->vfe_base + 0x974);
/* BUS_CFG */
msm_camera_io_w(0x10000001, vfe_dev->vfe_base + 0x50);
- msm_camera_io_w(0x800000F3, vfe_dev->vfe_base + 0x28);
+ msm_camera_io_w(0xE00000F3, vfe_dev->vfe_base + 0x28);
msm_camera_io_w_mb(0xFEFFFFFF, vfe_dev->vfe_base + 0x2C);
msm_camera_io_w(0xFFFFFFFF, vfe_dev->vfe_base + 0x30);
msm_camera_io_w_mb(0xFEFFFFFF, vfe_dev->vfe_base + 0x34);
@@ -466,6 +466,8 @@
if (vfe_dev->axi_data.stream_update)
msm_isp_axi_stream_update(vfe_dev);
+ if (atomic_read(&vfe_dev->stats_data.stats_update))
+ msm_isp_stats_stream_update(vfe_dev);
msm_isp_update_framedrop_reg(vfe_dev);
msm_isp_update_error_frame_count(vfe_dev);
@@ -1003,12 +1005,16 @@
}
}
-static void msm_vfe40_stats_cfg_comp_mask(struct vfe_device *vfe_dev)
+static void msm_vfe40_stats_cfg_comp_mask(struct vfe_device *vfe_dev,
+ uint32_t stats_mask, uint8_t enable)
{
- if (vfe_dev->stats_data.stats_pipeline_policy == STATS_COMP_ALL)
- msm_camera_io_w(0x00FF0000, vfe_dev->vfe_base + 0x44);
+ uint32_t comp_mask;
+ comp_mask = msm_camera_io_r(vfe_dev->vfe_base + 0x44) >> 16;
+ if (enable)
+ comp_mask |= stats_mask;
else
- msm_camera_io_w(0x00000000, vfe_dev->vfe_base + 0x44);
+ comp_mask &= ~stats_mask;
+ msm_camera_io_w(comp_mask << 16, vfe_dev->vfe_base + 0x44);
}
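
msm_vfe40_stats_cfg_comp_mask() now does a read-modify-write on the composite mask held in the upper 16 bits of the register at offset 0x44, rather than forcing it to all-ones or zero from a global policy. A stand-alone sketch of that update follows; modelling the register as a plain variable, and writing the lower 16 bits back as zero exactly as the patch does, are assumptions made purely for illustration.

/*
 * Illustrative sketch, not part of the patch: read-modify-write of a
 * stats composite mask kept in the upper 16 bits of a 32-bit register.
 */
#include <stdio.h>
#include <stdint.h>

static uint32_t fake_reg_0x44;              /* stand-in for the VFE register */

static void stats_cfg_comp_mask(uint32_t stats_mask, int enable)
{
    uint32_t comp_mask = fake_reg_0x44 >> 16;   /* read current mask          */

    if (enable)
        comp_mask |= stats_mask;                /* composite these stats IRQs */
    else
        comp_mask &= ~stats_mask;               /* back to individual IRQs    */

    fake_reg_0x44 = comp_mask << 16;            /* write back                 */
}

int main(void)
{
    stats_cfg_comp_mask(0x3, 1);    /* composite stats 0 and 1    */
    printf("reg 0x44 = 0x%08x\n", fake_reg_0x44);
    stats_cfg_comp_mask(0x1, 0);    /* stat 0 back to its own IRQ */
    printf("reg 0x44 = 0x%08x\n", fake_reg_0x44);
    return 0;
}
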
static void msm_vfe40_stats_cfg_wm_irq_mask(
@@ -1039,9 +1045,10 @@
uint32_t stats_base = VFE40_STATS_BASE(stats_idx);
/*WR_ADDR_CFG*/
- msm_camera_io_w(0x7C, vfe_dev->vfe_base + stats_base + 0x8);
+ msm_camera_io_w(stream_info->framedrop_period << 2,
+ vfe_dev->vfe_base + stats_base + 0x8);
/*WR_IRQ_FRAMEDROP_PATTERN*/
- msm_camera_io_w(0xFFFFFFFF,
+ msm_camera_io_w(stream_info->framedrop_pattern,
vfe_dev->vfe_base + stats_base + 0x10);
/*WR_IRQ_SUBSAMPLE_PATTERN*/
msm_camera_io_w(0xFFFFFFFF,
@@ -1189,11 +1196,9 @@
goto vfe_no_resource;
}
- if (vfe_dev->pdev->id == 0)
- vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe0");
- else if (vfe_dev->pdev->id == 1)
- vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe1");
- if (!vfe_dev->iommu_ctx[0]) {
+ vfe_dev->iommu_ctx[0] = msm_iommu_get_ctx("vfe0");
+ vfe_dev->iommu_ctx[1] = msm_iommu_get_ctx("vfe1");
+ if (!vfe_dev->iommu_ctx[0] || !vfe_dev->iommu_ctx[1]) {
pr_err("%s: cannot get iommu_ctx\n", __func__);
rc = -ENODEV;
goto vfe_no_resource;
@@ -1245,7 +1250,7 @@
};
struct msm_vfe_hardware_info vfe40_hw_info = {
- .num_iommu_ctx = 1,
+ .num_iommu_ctx = 2,
.vfe_clk_idx = VFE40_CLK_IDX,
.vfe_ops = {
.irq_ops = {
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
index 477985d..728e172 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_axi_util.c
@@ -389,31 +389,6 @@
msm_isp_send_event(vfe_dev, ISP_EVENT_SOF, &sof_event);
}
-uint32_t msm_isp_get_framedrop_period(
- enum msm_vfe_frame_skip_pattern frame_skip_pattern)
-{
- switch (frame_skip_pattern) {
- case NO_SKIP:
- case EVERY_2FRAME:
- case EVERY_3FRAME:
- case EVERY_4FRAME:
- case EVERY_5FRAME:
- case EVERY_6FRAME:
- case EVERY_7FRAME:
- case EVERY_8FRAME:
- return frame_skip_pattern + 1;
- case EVERY_16FRAME:
- return 16;
- break;
- case EVERY_32FRAME:
- return 32;
- break;
- default:
- return 1;
- }
- return 1;
-}
-
void msm_isp_calculate_framedrop(
struct msm_vfe_axi_shared_data *axi_data,
struct msm_vfe_axi_stream_request_cmd *stream_cfg_cmd)
@@ -611,6 +586,13 @@
ACTIVE : INACTIVE;
}
}
+
+ if (vfe_dev->axi_data.pipeline_update == DISABLE_CAMIF) {
+ vfe_dev->hw_info->vfe_ops.stats_ops.
+ enable_module(vfe_dev, 0xFF, 0);
+ vfe_dev->axi_data.pipeline_update = NO_UPDATE;
+ }
+
vfe_dev->axi_data.stream_update--;
if (vfe_dev->axi_data.stream_update == 0)
complete(&vfe_dev->stream_config_complete);
@@ -849,12 +831,14 @@
return rc;
}
-static int msm_isp_axi_wait_for_cfg_done(struct vfe_device *vfe_dev)
+static int msm_isp_axi_wait_for_cfg_done(struct vfe_device *vfe_dev,
+ enum msm_isp_camif_update_state camif_update)
{
int rc;
unsigned long flags;
spin_lock_irqsave(&vfe_dev->shared_data_lock, flags);
init_completion(&vfe_dev->stream_config_complete);
+ vfe_dev->axi_data.pipeline_update = camif_update;
vfe_dev->axi_data.stream_update = 2;
spin_unlock_irqrestore(&vfe_dev->shared_data_lock, flags);
rc = wait_for_completion_interruptible_timeout(
@@ -958,7 +942,7 @@
update_camif_state(vfe_dev, camif_update);
if (wait_for_complete)
- rc = msm_isp_axi_wait_for_cfg_done(vfe_dev);
+ rc = msm_isp_axi_wait_for_cfg_done(vfe_dev, camif_update);
return rc;
}
@@ -976,7 +960,7 @@
stream_info->state = STOP_PENDING;
}
- rc = msm_isp_axi_wait_for_cfg_done(vfe_dev);
+ rc = msm_isp_axi_wait_for_cfg_done(vfe_dev, camif_update);
if (rc < 0) {
pr_err("%s: wait for config done failed\n", __func__);
return rc;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
index c47209f..e08dea2 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.c
@@ -10,6 +10,7 @@
* GNU General Public License for more details.
*/
#include <linux/io.h>
+#include <linux/atomic.h>
#include <media/v4l2-subdev.h>
#include "msm_isp_util.h"
#include "msm_isp_stats_util.h"
@@ -67,6 +68,7 @@
struct msm_isp_buffer *done_buf;
struct msm_vfe_stats_stream *stream_info = NULL;
uint32_t pingpong_status;
+ uint32_t comp_stats_type_mask = 0;
uint32_t stats_comp_mask = 0, stats_irq_mask = 0;
stats_comp_mask = vfe_dev->hw_info->vfe_ops.stats_ops.
get_comp_mask(irq_status0, irq_status1);
@@ -76,13 +78,17 @@
return;
ISP_DBG("%s: status: 0x%x\n", __func__, irq_status0);
- if (vfe_dev->stats_data.stats_pipeline_policy == STATS_COMP_ALL) {
- if (!stats_comp_mask)
- return;
- stats_irq_mask = 0xFFFFFFFF;
- }
+ if (!stats_comp_mask)
+ stats_irq_mask &=
+ ~atomic_read(&vfe_dev->stats_data.stats_comp_mask);
+ else
+ stats_irq_mask |=
+ atomic_read(&vfe_dev->stats_data.stats_comp_mask);
memset(&buf_event, 0, sizeof(struct msm_isp_event_data));
+ buf_event.timestamp = ts->event_time;
+ buf_event.frame_id =
+ vfe_dev->axi_data.src_info[VFE_PIX_0].frame_id;
pingpong_status = vfe_dev->hw_info->
vfe_ops.stats_ops.get_pingpong_status(vfe_dev);
@@ -98,22 +104,32 @@
done_buf->bufq_handle, done_buf->buf_idx,
&ts->buf_time, vfe_dev->axi_data.
src_info[VFE_PIX_0].frame_id);
- if (rc == 0) {
- stats_event->stats_mask |=
+ if (rc != 0)
+ continue;
+
+ stats_event->stats_buf_idxs[stream_info->stats_type] =
+ done_buf->buf_idx;
+ if (!stream_info->composite_flag) {
+ stats_event->stats_mask =
1 << stream_info->stats_type;
- stats_event->stats_buf_idxs[
- stream_info->stats_type] =
- done_buf->buf_idx;
+ ISP_DBG("%s: stats event frame id: 0x%x\n",
+ __func__, buf_event.frame_id);
+ msm_isp_send_event(vfe_dev,
+ ISP_EVENT_STATS_NOTIFY +
+ stream_info->stats_type, &buf_event);
+ } else {
+ comp_stats_type_mask |=
+ 1 << stream_info->stats_type;
}
}
}
- if (stats_event->stats_mask) {
- buf_event.timestamp = ts->event_time;
- buf_event.frame_id =
- vfe_dev->axi_data.src_info[VFE_PIX_0].frame_id;
- msm_isp_send_event(vfe_dev, ISP_EVENT_STATS_NOTIFY +
- stream_info->stats_type, &buf_event);
+ if (comp_stats_type_mask) {
+ ISP_DBG("%s: composite stats event frame id: 0x%x mask: 0x%x\n",
+ __func__, buf_event.frame_id, comp_stats_type_mask);
+ stats_event->stats_mask = comp_stats_type_mask;
+ msm_isp_send_event(vfe_dev,
+ ISP_EVENT_COMP_STATS_NOTIFY, &buf_event);
}
}
@@ -140,36 +156,19 @@
return rc;
}
- if (stats_data->stats_pipeline_policy != STATS_COMP_ALL) {
- if (stream_req_cmd->framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid framedrop pattern\n", __func__);
- return rc;
- }
+ if (stream_req_cmd->framedrop_pattern >= MAX_SKIP) {
+ pr_err("%s: Invalid framedrop pattern\n", __func__);
+ return rc;
+ }
- if (stream_req_cmd->irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid irq subsample pattern\n", __func__);
- return rc;
- }
- } else {
- if (stats_data->comp_framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp framedrop pattern\n",
- __func__);
- return rc;
- }
-
- if (stats_data->comp_irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp irq subsample pattern\n",
- __func__);
- return rc;
- }
- stream_req_cmd->framedrop_pattern =
- vfe_dev->stats_data.comp_framedrop_pattern;
- stream_req_cmd->irq_subsample_pattern =
- vfe_dev->stats_data.comp_irq_subsample_pattern;
+ if (stream_req_cmd->irq_subsample_pattern >= MAX_SKIP) {
+ pr_err("%s: Invalid irq subsample pattern\n", __func__);
+ return rc;
}
stream_info->session_id = stream_req_cmd->session_id;
stream_info->stream_id = stream_req_cmd->stream_id;
+ stream_info->composite_flag = stream_req_cmd->composite_flag;
stream_info->stats_type = stream_req_cmd->stats_type;
stream_info->buffer_offset = stream_req_cmd->buffer_offset;
stream_info->framedrop_pattern = stream_req_cmd->framedrop_pattern;
@@ -193,6 +192,7 @@
struct msm_vfe_stats_stream_request_cmd *stream_req_cmd = arg;
struct msm_vfe_stats_stream *stream_info = NULL;
struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ uint32_t framedrop_period;
uint32_t stats_idx;
rc = msm_isp_stats_create_stream(vfe_dev, stream_req_cmd);
@@ -204,31 +204,12 @@
stats_idx = STATS_IDX(stream_req_cmd->stream_handle);
stream_info = &stats_data->stream_info[stats_idx];
- switch (stream_info->framedrop_pattern) {
- case NO_SKIP:
- stream_info->framedrop_pattern = VFE_NO_DROP;
- break;
- case EVERY_2FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_2FRAME;
- break;
- case EVERY_4FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_4FRAME;
- break;
- case EVERY_8FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_8FRAME;
- break;
- case EVERY_16FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_16FRAME;
- break;
- case EVERY_32FRAME:
- stream_info->framedrop_pattern = VFE_DROP_EVERY_32FRAME;
- break;
- default:
- stream_info->framedrop_pattern = VFE_NO_DROP;
- break;
- }
+ framedrop_period = msm_isp_get_framedrop_period(
+ stream_req_cmd->framedrop_pattern);
+ stream_info->framedrop_pattern = 0x1;
+ stream_info->framedrop_period = framedrop_period - 1;
- if (stats_data->stats_pipeline_policy == STATS_COMP_NONE)
+ if (!stream_info->composite_flag)
vfe_dev->hw_info->vfe_ops.stats_ops.
cfg_wm_irq_mask(vfe_dev, stream_info);
@@ -257,7 +238,7 @@
rc = msm_isp_cfg_stats_stream(vfe_dev, &stream_cfg_cmd);
}
- if (stats_data->stats_pipeline_policy == STATS_COMP_NONE)
+ if (!stream_info->composite_flag)
vfe_dev->hw_info->vfe_ops.stats_ops.
clear_wm_irq_mask(vfe_dev, stream_info);
@@ -266,18 +247,145 @@
return 0;
}
-int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg)
+static int msm_isp_init_stats_ping_pong_reg(
+ struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream *stream_info)
+{
+ int rc = 0;
+ stream_info->bufq_handle =
+ vfe_dev->buf_mgr->ops->get_bufq_handle(
+ vfe_dev->buf_mgr, stream_info->session_id,
+ stream_info->stream_id);
+ if (stream_info->bufq_handle == 0) {
+ pr_err("%s: no buf configured for stream: 0x%x\n",
+ __func__, stream_info->stream_handle);
+ return -EINVAL;
+ }
+
+ rc = msm_isp_stats_cfg_ping_pong_address(vfe_dev,
+ stream_info, VFE_PING_FLAG, NULL);
+ if (rc < 0) {
+ pr_err("%s: No free buffer for ping\n", __func__);
+ return rc;
+ }
+ rc = msm_isp_stats_cfg_ping_pong_address(vfe_dev,
+ stream_info, VFE_PONG_FLAG, NULL);
+ if (rc < 0) {
+ pr_err("%s: No free buffer for pong\n", __func__);
+ return rc;
+ }
+ return rc;
+}
+
+void msm_isp_stats_stream_update(struct vfe_device *vfe_dev)
+{
+ int i;
+ uint32_t stats_mask = 0, comp_stats_mask = 0;
+ uint32_t enable = 0;
+ struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ for (i = 0; i < vfe_dev->hw_info->stats_hw_info->num_stats_type; i++) {
+ if (stats_data->stream_info[i].state == STATS_START_PENDING ||
+ stats_data->stream_info[i].state ==
+ STATS_STOP_PENDING) {
+ stats_mask |= i;
+ enable = stats_data->stream_info[i].state ==
+ STATS_START_PENDING ? 1 : 0;
+ stats_data->stream_info[i].state =
+ stats_data->stream_info[i].state ==
+ STATS_START_PENDING ?
+ STATS_STARTING : STATS_STOPPING;
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, BIT(i), enable);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, BIT(i), enable);
+ } else if (stats_data->stream_info[i].state == STATS_STARTING ||
+ stats_data->stream_info[i].state == STATS_STOPPING) {
+ if (stats_data->stream_info[i].composite_flag)
+ comp_stats_mask |= i;
+ if (stats_data->stream_info[i].state == STATS_STARTING)
+ atomic_add(BIT(i),
+ &stats_data->stats_comp_mask);
+ else
+ atomic_sub(BIT(i),
+ &stats_data->stats_comp_mask);
+ stats_data->stream_info[i].state =
+ stats_data->stream_info[i].state ==
+ STATS_STARTING ? STATS_ACTIVE : STATS_INACTIVE;
+ }
+ }
+ atomic_sub(1, &stats_data->stats_update);
+ if (!atomic_read(&stats_data->stats_update))
+ complete(&vfe_dev->stats_config_complete);
+}
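
stats_comp_mask is kept in an atomic_t and individual bits are "set" with atomic_add(BIT(i)) and "cleared" with atomic_sub(BIT(i)); that is only equivalent to set/clear because the STATS_STARTING/STATS_STOPPING state machine adds each bit exactly once before it is subtracted. The sketch below mirrors the idea with C11 atomics, which stand in for the kernel API as an assumption.

/*
 * Illustrative sketch, not part of the patch: set and clear single bits
 * of a shared mask with atomic add/sub, valid only while each bit is
 * added exactly once before it is subtracted.
 */
#include <stdio.h>
#include <stdatomic.h>

#define BIT(n)  (1u << (n))

static atomic_uint stats_comp_mask;

static void stream_started(int i)
{
    atomic_fetch_add(&stats_comp_mask, BIT(i));   /* "set" bit i   */
}

static void stream_stopped(int i)
{
    atomic_fetch_sub(&stats_comp_mask, BIT(i));   /* "clear" bit i */
}

int main(void)
{
    stream_started(2);
    stream_started(5);
    printf("mask after start: 0x%x\n", atomic_load(&stats_comp_mask));
    stream_stopped(2);
    printf("mask after stop : 0x%x\n", atomic_load(&stats_comp_mask));
    return 0;
}
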
+
+static int msm_isp_stats_wait_for_cfg_done(struct vfe_device *vfe_dev)
+{
+ int rc;
+ init_completion(&vfe_dev->stats_config_complete);
+ atomic_set(&vfe_dev->stats_data.stats_update, 2);
+ rc = wait_for_completion_interruptible_timeout(
+ &vfe_dev->stats_config_complete,
+ msecs_to_jiffies(500));
+ if (rc == 0) {
+ pr_err("%s: wait timeout\n", __func__);
+ rc = -1;
+ } else {
+ rc = 0;
+ }
+ return rc;
+}
+
+static int msm_isp_start_stats_stream(struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd)
{
int i, rc = 0;
- struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd = arg;
+ uint32_t stats_mask = 0, comp_stats_mask = 0, idx;
struct msm_vfe_stats_stream *stream_info;
struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
- int idx;
- uint32_t stats_mask = 0;
+ for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
+ idx = STATS_IDX(stream_cfg_cmd->stream_handle[i]);
+ stream_info = &stats_data->stream_info[idx];
+ if (stream_info->stream_handle !=
+ stream_cfg_cmd->stream_handle[i]) {
+ pr_err("%s: Invalid stream handle: 0x%x received\n",
+ __func__, stream_cfg_cmd->stream_handle[i]);
+ continue;
+ }
+ rc = msm_isp_init_stats_ping_pong_reg(vfe_dev, stream_info);
+ if (rc < 0) {
+ pr_err("%s: No buffer for stream%d\n", __func__, idx);
+ return rc;
+ }
- if (stats_data->num_active_stream == 0)
- vfe_dev->hw_info->vfe_ops.stats_ops.cfg_ub(vfe_dev);
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active)
+ stream_info->state = STATS_START_PENDING;
+ else
+ stream_info->state = STATS_ACTIVE;
+ stats_data->num_active_stream++;
+ stats_mask |= 1 << idx;
+ if (stream_info->composite_flag)
+ comp_stats_mask |= 1 << idx;
+ }
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active) {
+ rc = msm_isp_stats_wait_for_cfg_done(vfe_dev);
+ } else {
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ atomic_add(comp_stats_mask, &stats_data->stats_comp_mask);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, comp_stats_mask, 1);
+ }
+ return rc;
+}
+
+static int msm_isp_stop_stats_stream(struct vfe_device *vfe_dev,
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd)
+{
+ int i, rc = 0;
+ uint32_t stats_mask = 0, comp_stats_mask = 0, idx;
+ struct msm_vfe_stats_stream *stream_info;
+ struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
for (i = 0; i < stream_cfg_cmd->num_streams; i++) {
idx = STATS_IDX(stream_cfg_cmd->stream_handle[i]);
stream_info = &stats_data->stream_info[idx];
@@ -288,71 +396,39 @@
continue;
}
- if (stream_cfg_cmd->enable) {
- stream_info->bufq_handle =
- vfe_dev->buf_mgr->ops->get_bufq_handle(
- vfe_dev->buf_mgr, stream_info->session_id,
- stream_info->stream_id);
- if (stream_info->bufq_handle == 0) {
- pr_err("%s: no buf configured for stream: 0x%x\n",
- __func__,
- stream_info->stream_handle);
- return -EINVAL;
- }
-
- msm_isp_stats_cfg_ping_pong_address(vfe_dev,
- stream_info, VFE_PING_FLAG, NULL);
- msm_isp_stats_cfg_ping_pong_address(vfe_dev,
- stream_info, VFE_PONG_FLAG, NULL);
- stream_info->state = STATS_START_PENDING;
- stats_data->num_active_stream++;
- } else {
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active)
stream_info->state = STATS_STOP_PENDING;
- stats_data->num_active_stream--;
- }
+ else
+ stream_info->state = STATS_INACTIVE;
+
+ stats_data->num_active_stream--;
stats_mask |= 1 << idx;
+ if (stream_info->composite_flag)
+ comp_stats_mask |= 1 << idx;
}
- vfe_dev->hw_info->vfe_ops.stats_ops.
- enable_module(vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ if (vfe_dev->axi_data.src_info[VFE_PIX_0].active) {
+ rc = msm_isp_stats_wait_for_cfg_done(vfe_dev);
+ } else {
+ vfe_dev->hw_info->vfe_ops.stats_ops.enable_module(
+ vfe_dev, stats_mask, stream_cfg_cmd->enable);
+ atomic_sub(comp_stats_mask, &stats_data->stats_comp_mask);
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(
+ vfe_dev, comp_stats_mask, 0);
+ }
return rc;
}
-int msm_isp_cfg_stats_comp_policy(struct vfe_device *vfe_dev, void *arg)
+int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg)
{
- int rc = -1;
- struct msm_vfe_stats_comp_policy_cfg *policy_cfg_cmd = arg;
- struct msm_vfe_stats_shared_data *stats_data = &vfe_dev->stats_data;
+ int rc = 0;
+ struct msm_vfe_stats_stream_cfg_cmd *stream_cfg_cmd = arg;
+ if (vfe_dev->stats_data.num_active_stream == 0)
+ vfe_dev->hw_info->vfe_ops.stats_ops.cfg_ub(vfe_dev);
- if (stats_data->num_active_stream != 0) {
- pr_err("%s: Cannot update policy when there are active streams\n",
- __func__);
- return rc;
- }
+ if (stream_cfg_cmd->enable)
+ rc = msm_isp_start_stats_stream(vfe_dev, stream_cfg_cmd);
+ else
+ rc = msm_isp_stop_stats_stream(vfe_dev, stream_cfg_cmd);
- if (policy_cfg_cmd->stats_pipeline_policy >= MAX_STATS_POLICY) {
- pr_err("%s: Invalid stats composite policy\n", __func__);
- return rc;
- }
-
- if (policy_cfg_cmd->comp_framedrop_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp framedrop pattern\n", __func__);
- return rc;
- }
-
- if (policy_cfg_cmd->comp_irq_subsample_pattern >= MAX_SKIP) {
- pr_err("%s: Invalid comp irq subsample pattern\n", __func__);
- return rc;
- }
-
- stats_data->stats_pipeline_policy =
- policy_cfg_cmd->stats_pipeline_policy;
- stats_data->comp_framedrop_pattern =
- policy_cfg_cmd->comp_framedrop_pattern;
- stats_data->comp_irq_subsample_pattern =
- policy_cfg_cmd->comp_irq_subsample_pattern;
-
- vfe_dev->hw_info->vfe_ops.stats_ops.cfg_comp_mask(vfe_dev);
-
- return 0;
+ return rc;
}
-
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
index 13e1fd6..7b4c4b4 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_stats_util.h
@@ -18,8 +18,8 @@
void msm_isp_process_stats_irq(struct vfe_device *vfe_dev,
uint32_t irq_status0, uint32_t irq_status1,
struct msm_isp_timestamp *ts);
+void msm_isp_stats_stream_update(struct vfe_device *vfe_dev);
int msm_isp_cfg_stats_stream(struct vfe_device *vfe_dev, void *arg);
int msm_isp_release_stats_stream(struct vfe_device *vfe_dev, void *arg);
int msm_isp_request_stats_stream(struct vfe_device *vfe_dev, void *arg);
-int msm_isp_cfg_stats_comp_policy(struct vfe_device *vfe_dev, void *arg);
#endif /* __MSM_ISP_STATS_UTIL_H__ */
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
index 9fd87f3..c981901 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.c
@@ -12,6 +12,7 @@
#include <linux/mutex.h>
#include <linux/io.h>
#include <media/v4l2-subdev.h>
+#include <linux/ratelimit.h>
#include "msm.h"
#include "msm_isp_util.h"
@@ -121,7 +122,7 @@
path->vectors[0].ab = MSM_ISP_MIN_AB;
path->vectors[0].ib = MSM_ISP_MIN_IB;
for (i = 0; i < MAX_ISP_CLIENT; i++) {
- if (isp_bandwidth_mgr.client_info[client].active) {
+ if (isp_bandwidth_mgr.client_info[i].active) {
path->vectors[0].ab +=
isp_bandwidth_mgr.client_info[i].ab;
path->vectors[0].ib +=
@@ -154,6 +155,31 @@
mutex_unlock(&bandwidth_mgr_mutex);
}
+uint32_t msm_isp_get_framedrop_period(
+ enum msm_vfe_frame_skip_pattern frame_skip_pattern)
+{
+ switch (frame_skip_pattern) {
+ case NO_SKIP:
+ case EVERY_2FRAME:
+ case EVERY_3FRAME:
+ case EVERY_4FRAME:
+ case EVERY_5FRAME:
+ case EVERY_6FRAME:
+ case EVERY_7FRAME:
+ case EVERY_8FRAME:
+ return frame_skip_pattern + 1;
+ case EVERY_16FRAME:
+ return 16;
+ break;
+ case EVERY_32FRAME:
+ return 32;
+ break;
+ default:
+ return 1;
+ }
+ return 1;
+}
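
msm_isp_get_framedrop_period() has moved here so both the AXI and the stats code can map a frame-skip pattern to a period; the stats path then programs framedrop_pattern = 0x1 and framedrop_period = period - 1, and msm_vfe40 writes that period into WR_ADDR_CFG shifted left by 2. The sketch below walks the numbers for one pattern; the enum values (NO_SKIP == 0 through EVERY_8FRAME == 7) are inferred from the switch above and, like the plain-variable register, are assumptions.

/*
 * Illustrative sketch, not part of the patch: map a skip pattern to a
 * framedrop period and form the stats WR_ADDR_CFG value from it.
 */
#include <stdio.h>
#include <stdint.h>

enum frame_skip_pattern {
    NO_SKIP = 0, EVERY_2FRAME, EVERY_3FRAME, EVERY_4FRAME,
    EVERY_5FRAME, EVERY_6FRAME, EVERY_7FRAME, EVERY_8FRAME,
    EVERY_16FRAME, EVERY_32FRAME,
};

static uint32_t get_framedrop_period(enum frame_skip_pattern p)
{
    if (p <= EVERY_8FRAME)
        return (uint32_t)p + 1;
    if (p == EVERY_16FRAME)
        return 16;
    if (p == EVERY_32FRAME)
        return 32;
    return 1;
}

int main(void)
{
    enum frame_skip_pattern p = EVERY_4FRAME;
    uint32_t period = get_framedrop_period(p);

    uint32_t framedrop_pattern = 0x1;          /* one frame per period        */
    uint32_t framedrop_period  = period - 1;   /* hardware field is 0-based   */
    uint32_t wr_addr_cfg       = framedrop_period << 2;

    printf("period %u, pattern 0x%x, WR_ADDR_CFG 0x%x\n",
           period, framedrop_pattern, wr_addr_cfg);
    return 0;
}
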
+
static inline void msm_isp_get_timestamp(struct msm_isp_timestamp *time_stamp)
{
struct timespec ts;
@@ -355,11 +381,6 @@
rc = msm_isp_cfg_stats_stream(vfe_dev, arg);
mutex_unlock(&vfe_dev->core_mutex);
break;
- case VIDIOC_MSM_ISP_CFG_STATS_COMP_POLICY:
- mutex_lock(&vfe_dev->core_mutex);
- rc = msm_isp_cfg_stats_comp_policy(vfe_dev, arg);
- mutex_unlock(&vfe_dev->core_mutex);
- break;
case VIDIOC_MSM_ISP_UPDATE_STREAM:
mutex_lock(&vfe_dev->core_mutex);
rc = msm_isp_update_axi_stream(vfe_dev, arg);
@@ -686,6 +707,11 @@
{
int i;
struct msm_vfe_error_info *error_info = &vfe_dev->error_info;
+ static DEFINE_RATELIMIT_STATE(rs,
+ DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
+ static DEFINE_RATELIMIT_STATE(rs_stats,
+ DEFAULT_RATELIMIT_INTERVAL, DEFAULT_RATELIMIT_BURST);
+
if (error_info->error_count == 1 ||
!(error_info->info_dump_frame_count % 100)) {
vfe_dev->hw_info->vfe_ops.core_ops.
@@ -695,7 +721,8 @@
error_info->camif_status = 0;
error_info->violation_status = 0;
for (i = 0; i < MAX_NUM_STREAM; i++) {
- if (error_info->stream_framedrop_count[i] != 0) {
+ if (error_info->stream_framedrop_count[i] != 0 &&
+ __ratelimit(&rs)) {
pr_err("%s: Stream[%d]: dropped %d frames\n",
__func__, i,
error_info->stream_framedrop_count[i]);
@@ -703,7 +730,8 @@
}
}
for (i = 0; i < MSM_ISP_STATS_MAX; i++) {
- if (error_info->stats_framedrop_count[i] != 0) {
+ if (error_info->stats_framedrop_count[i] != 0 &&
+ __ratelimit(&rs_stats)) {
pr_err("%s: Stats stream[%d]: dropped %d frames\n",
__func__, i,
error_info->stats_framedrop_count[i]);
@@ -750,7 +778,8 @@
spin_lock_irqsave(&vfe_dev->tasklet_lock, flags);
queue_cmd = &vfe_dev->tasklet_queue_cmd[vfe_dev->taskletq_idx];
if (queue_cmd->cmd_used) {
- pr_err("%s: Tasklet queue overflow\n", __func__);
+ pr_err_ratelimited("%s: Tasklet queue overflow: %d\n",
+ __func__, vfe_dev->pdev->id);
list_del(&queue_cmd->list);
} else {
atomic_add(1, &vfe_dev->irq_cnt);
@@ -818,7 +847,6 @@
int msm_isp_open_node(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- uint32_t i;
struct vfe_device *vfe_dev = v4l2_get_subdevdata(sd);
long rc;
ISP_DBG("%s\n", __func__);
@@ -851,9 +879,6 @@
vfe_dev->hw_info->vfe_ops.core_ops.init_hw_reg(vfe_dev);
- for (i = 0; i < vfe_dev->hw_info->num_iommu_ctx; i++)
- vfe_dev->buf_mgr->ops->attach_ctx(vfe_dev->buf_mgr,
- vfe_dev->iommu_ctx[i]);
vfe_dev->buf_mgr->ops->buf_mgr_init(vfe_dev->buf_mgr, "msm_isp", 28);
memset(&vfe_dev->axi_data, 0, sizeof(struct msm_vfe_axi_shared_data));
@@ -870,7 +895,6 @@
int msm_isp_close_node(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh)
{
- int i;
long rc;
struct vfe_device *vfe_dev = v4l2_get_subdevdata(sd);
ISP_DBG("%s\n", __func__);
@@ -888,11 +912,6 @@
pr_err("%s: halt timeout\n", __func__);
vfe_dev->buf_mgr->ops->buf_mgr_deinit(vfe_dev->buf_mgr);
-
- for (i = vfe_dev->hw_info->num_iommu_ctx - 1; i >= 0; i--)
- vfe_dev->buf_mgr->ops->detach_ctx(vfe_dev->buf_mgr,
- vfe_dev->iommu_ctx[i]);
-
vfe_dev->hw_info->vfe_ops.core_ops.release_hw(vfe_dev);
vfe_dev->vfe_open_cnt--;
diff --git a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
index 7934f26..34b9859 100644
--- a/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
+++ b/drivers/media/platform/msm/camera_v2/isp/msm_isp_util.h
@@ -43,6 +43,9 @@
struct msm_isp_bandwidth_info client_info[MAX_ISP_CLIENT];
};
+uint32_t msm_isp_get_framedrop_period(
+ enum msm_vfe_frame_skip_pattern frame_skip_pattern);
+
int msm_isp_init_bandwidth_mgr(enum msm_isp_hw_client client);
int msm_isp_update_bandwidth(enum msm_isp_hw_client client,
uint64_t ab, uint64_t ib);
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
index 9dcd64c..962c079 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.c
@@ -11,7 +11,6 @@
*/
#include <linux/delay.h>
-#include <linux/clk.h>
#include <linux/io.h>
#include <linux/module.h>
#include <linux/jiffies.h>
@@ -60,113 +59,6 @@
false : true;
}
-static struct msm_cam_clk_info ispif_8960_clk_info[] = {
- {"csi_pix_clk", 0},
- {"csi_rdi_clk", 0},
- {"csi_pix1_clk", 0},
- {"csi_rdi1_clk", 0},
- {"csi_rdi2_clk", 0},
-};
-
-static struct msm_cam_clk_info ispif_8974_clk_info_vfe0[] = {
- {"camss_vfe_vfe_clk", -1},
- {"camss_csi_vfe_clk", -1},
-};
-
-static struct msm_cam_clk_info ispif_8974_clk_info_vfe1[] = {
- {"camss_vfe_vfe_clk1", -1},
- {"camss_csi_vfe_clk1", -1},
-};
-
-static int msm_ispif_clk_enable_one(struct ispif_device *ispif,
- enum msm_ispif_vfe_intf vfe_intf, int enable)
-{
- int rc = 0;
-
- if (enable)
- pr_debug("enable clk for VFE%d\n", vfe_intf);
- else
- pr_debug("disable clk for VFE%d\n", vfe_intf);
-
- if (ispif->csid_version < CSID_VERSION_V2) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev, ispif_8960_clk_info,
- ispif->ispif_clk[vfe_intf], 2, enable);
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d\n",
- __func__, rc);
- goto end;
- }
- } else if (ispif->csid_version == CSID_VERSION_V2) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev, ispif_8960_clk_info,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8960_clk_info),
- enable);
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d\n",
- __func__, rc);
- goto end;
- }
- } else if (ispif->csid_version >= CSID_VERSION_V3) {
- if (vfe_intf == VFE0) {
- rc = msm_cam_clk_enable(&ispif->pdev->dev,
- ispif_8974_clk_info_vfe0,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8974_clk_info_vfe0), enable);
- } else {
- rc = msm_cam_clk_enable(&ispif->pdev->dev,
- ispif_8974_clk_info_vfe1,
- ispif->ispif_clk[vfe_intf],
- ARRAY_SIZE(ispif_8974_clk_info_vfe1), enable);
- }
- if (rc) {
- pr_err("%s: cannot enable clock, error = %d, vfeid = %d\n",
- __func__, rc, vfe_intf);
- goto end;
- }
- } else {
- pr_err("%s: unsupported version=%d\n", __func__,
- ispif->csid_version);
- goto end;
- }
-
-end:
- return rc;
-}
-
-static int msm_ispif_clk_enable(struct ispif_device *ispif,
- struct msm_ispif_param_data *params, int enable)
-{
- int rc = 0;
- int i, j;
- uint32_t vfe_intf_mask = 0;
-
- for (i = 0; i < params->num; i++) {
- if (vfe_intf_mask & (1 << params->entries[i].vfe_intf))
- continue;
- rc = msm_ispif_clk_enable_one(ispif,
- params->entries[i].vfe_intf, 1);
- if (rc < 0 && enable) {
- pr_err("%s: unable to enable clocks for VFE %d",
- __func__, params->entries[i].vfe_intf);
- for (j = 0; j < i; j++) {
- /* if VFE clock is not enabled do
- * not disable the clock */
- if (!(vfe_intf_mask & (1 <<
- params->entries[i].vfe_intf)))
- continue;
- msm_ispif_clk_enable_one(ispif,
- params->entries[j].vfe_intf, 0);
- /* remove the VFE ID from the mask */
- vfe_intf_mask &=
- ~(1 << params->entries[i].vfe_intf);
- }
- break;
- }
- vfe_intf_mask |= 1 << params->entries[i].vfe_intf;
- }
- return rc;
-}
-
static struct msm_cam_clk_info ispif_8974_ahb_clk_info[] = {
{"ispif_ahb_clk", -1},
};
@@ -191,139 +83,51 @@
return rc;
}
-static int msm_ispif_intf_reset(struct ispif_device *ispif,
- struct msm_ispif_param_data *params)
-{
-
- int i, rc = 0;
- enum msm_ispif_intftype intf_type;
- int vfe_intf = 0;
- uint32_t data = 0;
-
- for (i = 0; i < params->num; i++) {
- data = STROBED_RST_EN;
- vfe_intf = params->entries[i].vfe_intf;
- intf_type = params->entries[i].intftype;
- ispif->sof_count[params->entries[i].vfe_intf].
- sof_cnt[intf_type] = 0;
-
- switch (intf_type) {
- case PIX0:
- data |= (PIX_0_VFE_RST_STB | PIX_0_CSID_RST_STB);
- break;
- case RDI0:
- data |= (RDI_0_VFE_RST_STB | RDI_0_CSID_RST_STB);
- break;
- case PIX1:
- data |= (PIX_1_VFE_RST_STB | PIX_1_CSID_RST_STB);
- break;
- case RDI1:
- data |= (RDI_1_VFE_RST_STB | RDI_1_CSID_RST_STB);
- break;
- case RDI2:
- data |= (RDI_2_VFE_RST_STB | RDI_2_CSID_RST_STB);
- break;
- default:
- rc = -EINVAL;
- break;
- }
- if (data > 0x1) {
- unsigned long jiffes = msecs_to_jiffies(500);
- long lrc = 0;
- unsigned long flags;
-
- spin_lock_irqsave(
- &ispif->auto_complete_lock, flags);
- ispif->wait_timeout[vfe_intf] = 0;
- init_completion(&ispif->reset_complete[vfe_intf]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
-
- if (vfe_intf == VFE0)
- msm_camera_io_w(data, ispif->base +
- ISPIF_RST_CMD_ADDR);
- else
- msm_camera_io_w(data, ispif->base +
- ISPIF_RST_CMD_1_ADDR);
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[vfe_intf], jiffes);
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfe_id = %d\n",
- __func__, lrc, vfe_intf);
- rc = -EIO;
-
- spin_lock_irqsave(
- &ispif->auto_complete_lock, flags);
- ispif->wait_timeout[vfe_intf] = 1;
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
- }
- }
-
- return rc;
-}
-
static int msm_ispif_reset(struct ispif_device *ispif)
{
int rc = 0;
- long lrc = 0;
- unsigned long jiffes = msecs_to_jiffies(500);
- unsigned long flags;
-
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE0] = 0;
- init_completion(&ispif->reset_complete[VFE0]);
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- ispif->wait_timeout[VFE1] = 0;
- init_completion(&ispif->reset_complete[VFE1]);
- }
- spin_unlock_irqrestore(&ispif->auto_complete_lock, flags);
+ int i;
BUG_ON(!ispif);
memset(ispif->sof_count, 0, sizeof(ispif->sof_count));
- msm_camera_io_w(ISPIF_RST_CMD_MASK, ispif->base + ISPIF_RST_CMD_ADDR);
+ for (i = 0; i < ispif->vfe_info.num_vfe; i++) {
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[VFE0], jiffes);
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_CTRL_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_1(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_IRQ_MASK_2(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_0(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_1(i));
+ msm_camera_io_w(0xFFFFFFFF, ispif->base +
+ ISPIF_VFE_m_IRQ_CLEAR_2(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INPUT_SEL(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INTF_CMD_0(i));
+ msm_camera_io_w(0, ispif->base + ISPIF_VFE_m_INTF_CMD_1(i));
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfeid = %d\n",
- __func__, lrc, VFE0);
- rc = -EIO;
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CID_MASK(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CID_MASK(i, 1));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 1));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_RDI_INTF_n_CID_MASK(i, 2));
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE0] = 1;
- spin_unlock_irqrestore(&ispif->auto_complete_lock, flags);
-
- goto end;
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CROP(i, 0));
+ msm_camera_io_w(0, ispif->base +
+ ISPIF_VFE_m_PIX_INTF_n_CROP(i, 1));
}
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- msm_camera_io_w_mb(ISPIF_RST_CMD_1_MASK, ispif->base +
- ISPIF_RST_CMD_1_ADDR);
+ msm_camera_io_w_mb(ISPIF_IRQ_GLOBAL_CLEAR_CMD, ispif->base +
+ ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
- lrc = wait_for_completion_interruptible_timeout(
- &ispif->reset_complete[VFE1], jiffes);
-
- if (lrc < 0 || !lrc) {
- pr_err("%s: wait timeout ret = %ld, vfeid = %d\n",
- __func__, lrc, VFE1);
- rc = -EIO;
-
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- ispif->wait_timeout[VFE1] = 1;
- spin_unlock_irqrestore(&ispif->auto_complete_lock,
- flags);
- }
-
- }
-
-end:
return rc;
}
@@ -339,7 +143,6 @@
static void msm_ispif_sel_csid_core(struct ispif_device *ispif,
uint8_t intftype, uint8_t csid, uint8_t vfe_intf)
{
- int rc = 0;
uint32_t data;
BUG_ON(!ispif);
@@ -349,19 +152,6 @@
return;
}
- if (ispif->csid_version <= CSID_VERSION_V2) {
- if (ispif->ispif_clk[vfe_intf][intftype] == NULL) {
- CDBG("%s: ispif NULL clk\n", __func__);
- return;
- }
-
- rc = clk_set_rate(ispif->ispif_clk[vfe_intf][intftype], csid);
- if (rc) {
- pr_err("%s: clk_set_rate failed %d\n", __func__, rc);
- return;
- }
- }
-
data = msm_camera_io_r(ispif->base + ISPIF_VFE_m_INPUT_SEL(vfe_intf));
switch (intftype) {
case PIX0:
@@ -385,9 +175,9 @@
data |= (csid << 20);
break;
}
- if (data)
- msm_camera_io_w_mb(data, ispif->base +
- ISPIF_VFE_m_INPUT_SEL(vfe_intf));
+
+ msm_camera_io_w_mb(data, ispif->base +
+ ISPIF_VFE_m_INPUT_SEL(vfe_intf));
}
static void msm_ispif_enable_crop(struct ispif_device *ispif,
@@ -504,6 +294,58 @@
return rc;
}
+static void msm_ispif_select_clk_mux(struct ispif_device *ispif,
+ uint8_t intftype, uint8_t csid, uint8_t vfe_intf)
+{
+ uint32_t data = 0;
+
+ switch (intftype) {
+ case PIX0:
+ data = msm_camera_io_r(ispif->clk_mux_base);
+ data &= ~(0xf << (vfe_intf * 8));
+ data |= (csid << (vfe_intf * 8));
+ msm_camera_io_w(data, ispif->clk_mux_base);
+ break;
+
+ case RDI0:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (vfe_intf * 12));
+ data |= (csid << (vfe_intf * 12));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+
+ case PIX1:
+ data = msm_camera_io_r(ispif->clk_mux_base);
+ data &= ~(0xf0 << (vfe_intf * 8));
+ data |= (csid << (4 + (vfe_intf * 8)));
+ msm_camera_io_w(data, ispif->clk_mux_base);
+ break;
+
+ case RDI1:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (4 + (vfe_intf * 12)));
+ data |= (csid << (4 + (vfe_intf * 12)));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+
+ case RDI2:
+ data = msm_camera_io_r(ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ data &= ~(0xf << (8 + (vfe_intf * 12)));
+ data |= (csid << (8 + (vfe_intf * 12)));
+ msm_camera_io_w(data, ispif->clk_mux_base +
+ ISPIF_RDI_CLK_MUX_SEL_ADDR);
+ break;
+ }
+ CDBG("%s intftype %d data %x\n", __func__, intftype, data);
+ mb();
+ return;
+}
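
msm_ispif_select_clk_mux() packs one 4-bit CSID number per interface into the clock mux registers: PIX0/PIX1 of VFE m occupy bits m*8 and m*8+4 of the base mux register, while RDI0/1/2 of VFE m occupy bits m*12, m*12+4 and m*12+8 of the RDI mux register. The sketch below reproduces that field arithmetic with plain variables standing in for the registers, which is an assumption for illustration only.

/*
 * Illustrative sketch, not part of the patch: insert a 4-bit CSID number
 * into the per-interface field of a clock mux select word.
 */
#include <stdio.h>
#include <stdint.h>

/* Replace the 4-bit field starting at bit 'shift' with 'csid'. */
static uint32_t set_nibble(uint32_t reg, unsigned int shift, uint32_t csid)
{
    reg &= ~(0xfu << shift);
    reg |= (csid & 0xf) << shift;
    return reg;
}

int main(void)
{
    uint32_t pix_mux = 0, rdi_mux = 0;
    unsigned int vfe = 1;

    pix_mux = set_nibble(pix_mux, vfe * 8, 2);       /* PIX0 of VFE1 <- CSID2 */
    pix_mux = set_nibble(pix_mux, vfe * 8 + 4, 3);   /* PIX1 of VFE1 <- CSID3 */
    rdi_mux = set_nibble(rdi_mux, vfe * 12 + 4, 1);  /* RDI1 of VFE1 <- CSID1 */

    printf("pix mux 0x%08x, rdi mux 0x%08x\n", pix_mux, rdi_mux);
    return 0;
}
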
+
static uint16_t msm_ispif_get_cids_mask_from_cfg(
struct msm_ispif_params_entry *entry)
{
@@ -536,11 +378,6 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
for (i = 0; i < params->num; i++) {
vfe_intf = params->entries[i].vfe_intf;
if (!msm_ispif_is_intf_valid(ispif->csid_version,
@@ -573,6 +410,10 @@
return -EINVAL;
}
+ if (ispif->csid_version >= CSID_VERSION_V3)
+ msm_ispif_select_clk_mux(ispif, intftype,
+ params->entries[i].csid, vfe_intf);
+
rc = msm_ispif_validate_intf_status(ispif, intftype, vfe_intf);
if (rc) {
pr_err("%s:validate_intf_status failed, rc = %d\n",
@@ -615,8 +456,6 @@
msm_camera_io_w_mb(ISPIF_IRQ_GLOBAL_CLEAR_CMD, ispif->base +
ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
- msm_ispif_clk_enable(ispif, params, 0);
-
return rc;
}
@@ -709,7 +548,7 @@
static int msm_ispif_start_frame_boundary(struct ispif_device *ispif,
struct msm_ispif_param_data *params)
{
- int rc;
+ int rc = 0;
if (ispif->ispif_state != ISPIF_POWER_UP) {
pr_err("%s: ispif invalid state %d\n", __func__,
@@ -718,23 +557,8 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
-
- rc = msm_ispif_intf_reset(ispif, params);
- if (rc) {
- pr_err("%s: msm_ispif_intf_reset failed. rc=%d\n",
- __func__, rc);
- goto end;
- }
-
msm_ispif_intf_cmd(ispif, ISPIF_INTF_CMD_ENABLE_FRAME_BOUNDARY, params);
-end:
- msm_ispif_clk_enable(ispif, params, 0);
return rc;
}
@@ -757,12 +581,6 @@
return rc;
}
- rc = msm_ispif_clk_enable(ispif, params, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks", __func__);
- return rc;
- }
-
for (i = 0; i < params->num; i++) {
if (!msm_ispif_is_intf_valid(ispif->csid_version,
params->entries[i].vfe_intf)) {
@@ -814,8 +632,6 @@
}
end:
- msm_ispif_clk_enable(ispif, params, 0);
-
return rc;
}
@@ -886,15 +702,6 @@
ISPIF_IRQ_GLOBAL_CLEAR_CMD_ADDR);
if (out[VFE0].ispifIrqStatus0 & ISPIF_IRQ_STATUS_MASK) {
- if (out[VFE0].ispifIrqStatus0 & RESET_DONE_IRQ) {
- unsigned long flags;
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- if (ispif->wait_timeout[VFE0] == 0)
- complete(&ispif->reset_complete[VFE0]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
-
if (out[VFE0].ispifIrqStatus0 & PIX_INTF_0_OVERFLOW_IRQ)
pr_err("%s: VFE0 pix0 overflow.\n", __func__);
@@ -910,15 +717,6 @@
ispif_process_irq(ispif, out, VFE0);
}
if (ispif->vfe_info.num_vfe > 1) {
- if (out[VFE1].ispifIrqStatus0 & RESET_DONE_IRQ) {
- unsigned long flags;
- spin_lock_irqsave(&ispif->auto_complete_lock, flags);
- if (ispif->wait_timeout[VFE1] == 0)
- complete(&ispif->reset_complete[VFE1]);
- spin_unlock_irqrestore(
- &ispif->auto_complete_lock, flags);
- }
-
if (out[VFE1].ispifIrqStatus0 & PIX_INTF_0_OVERFLOW_IRQ)
pr_err("%s: VFE1 pix0 overflow.\n", __func__);
@@ -973,19 +771,20 @@
memset(ispif->sof_count, 0, sizeof(ispif->sof_count));
ispif->csid_version = csid_version;
- rc = msm_ispif_clk_enable_one(ispif, VFE0, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks for VFE0", __func__);
- goto error_clk0;
- }
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1) {
- rc = msm_ispif_clk_enable_one(ispif, VFE1, 1);
- if (rc < 0) {
- pr_err("%s: unable to enable clocks for VFE1",
- __func__);
- goto error_clk1;
+ if (ispif->csid_version >= CSID_VERSION_V3) {
+ if (!ispif->clk_mux_mem || !ispif->clk_mux_io) {
+ pr_err("%s csi clk mux mem %p io %p\n", __func__,
+ ispif->clk_mux_mem, ispif->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+ ispif->clk_mux_base = ioremap(ispif->clk_mux_mem->start,
+ resource_size(ispif->clk_mux_mem));
+ if (!ispif->clk_mux_base) {
+ pr_err("%s: clk_mux_mem ioremap failed\n", __func__);
+ rc = -ENOMEM;
+ return rc;
}
}
@@ -1022,20 +821,11 @@
iounmap(ispif->base);
end:
- if (ispif->csid_version >= CSID_VERSION_V3 &&
- ispif->vfe_info.num_vfe > 1)
- msm_ispif_clk_enable_one(ispif, VFE1, 0);
-
-error_clk1:
- msm_ispif_clk_enable_one(ispif, VFE0, 0);
-
-error_clk0:
return rc;
}
static void msm_ispif_release(struct ispif_device *ispif)
{
- int i;
BUG_ON(!ispif);
if (ispif->ispif_state != ISPIF_POWER_UP) {
@@ -1044,9 +834,6 @@
return;
}
- for (i = 0; i < ispif->vfe_info.num_vfe; i++)
- msm_ispif_clk_enable_one(ispif, i, 1);
-
/* make sure no streaming going on */
msm_ispif_reset(ispif);
@@ -1056,8 +843,7 @@
iounmap(ispif->base);
- for (i = 0; i < ispif->vfe_info.num_vfe; i++)
- msm_ispif_clk_enable_one(ispif, i, 0);
+ iounmap(ispif->clk_mux_base);
ispif->ispif_state = ISPIF_POWER_DOWN;
}
@@ -1177,7 +963,6 @@
{
int rc;
struct ispif_device *ispif;
- int i;
ispif = kzalloc(sizeof(struct ispif_device), GFP_KERNEL);
if (!ispif) {
@@ -1231,13 +1016,20 @@
rc = -EBUSY;
goto error;
}
+ ispif->clk_mux_mem = platform_get_resource_byname(pdev,
+ IORESOURCE_MEM, "csi_clk_mux");
+ if (ispif->clk_mux_mem) {
+ ispif->clk_mux_io = request_mem_region(
+ ispif->clk_mux_mem->start,
+ resource_size(ispif->clk_mux_mem),
+ ispif->clk_mux_mem->name);
+ if (!ispif->clk_mux_io)
+ pr_err("%s: no valid csi_mux region\n", __func__);
+ }
ispif->pdev = pdev;
ispif->ispif_state = ISPIF_POWER_DOWN;
ispif->open_cnt = 0;
- spin_lock_init(&ispif->auto_complete_lock);
- for (i = 0; i < VFE_MAX; i++)
- ispif->wait_timeout[i] = 0;
return 0;
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
index 945b5b8..faa32aa 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif.h
@@ -42,21 +42,20 @@
struct platform_device *pdev;
struct msm_sd_subdev msm_sd;
struct resource *mem;
+ struct resource *clk_mux_mem;
struct resource *irq;
struct resource *io;
+ struct resource *clk_mux_io;
void __iomem *base;
+ void __iomem *clk_mux_base;
struct mutex mutex;
uint8_t start_ack_pending;
- struct completion reset_complete[VFE_MAX];
- spinlock_t auto_complete_lock;
- uint8_t wait_timeout[VFE_MAX];
uint32_t csid_version;
int enb_dump_reg;
uint32_t open_cnt;
struct ispif_sof_count sof_count[VFE_MAX];
struct ispif_intf_cmd applied_intf_cmd[VFE_MAX];
enum msm_ispif_state_t ispif_state;
- struct clk *ispif_clk[VFE_MAX][INTF_MAX];
struct msm_ispif_vfe_info vfe_info;
struct clk *ahb_clk;
};
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
index 9f8b2fa..6396486 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v1.h
@@ -50,6 +50,8 @@
+/* CSID CLK MUX SEL REGISTERS */
+#define ISPIF_RDI_CLK_MUX_SEL_ADDR 0x8
/*ISPIF RESET BITS*/
#define VFE_CLK_DOMAIN_RST BIT(31)
diff --git a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
index 5e61a4d..c805c3d 100644
--- a/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
+++ b/drivers/media/platform/msm/camera_v2/ispif/msm_ispif_hwreg_v2.h
@@ -46,6 +46,9 @@
#define ISPIF_VFE_m_RDI_INTF_n_STATUS(m, n) (0x2D0 + ISPIF_VFE(m) + 4*(n))
#define ISPIF_VFE_m_3D_DESKEW_SIZE(m) (0x2E4 + ISPIF_VFE(m))
+/* CSID CLK MUX SEL REGISTERS */
+#define ISPIF_RDI_CLK_MUX_SEL_ADDR 0x8
+
/*ISPIF RESET BITS*/
#define VFE_CLK_DOMAIN_RST BIT(31)
#define PIX_1_CLK_DOMAIN_RST BIT(30)
diff --git a/drivers/media/platform/msm/camera_v2/msm.c b/drivers/media/platform/msm/camera_v2/msm.c
index 9f1c81a..be9f613 100644
--- a/drivers/media/platform/msm/camera_v2/msm.c
+++ b/drivers/media/platform/msm/camera_v2/msm.c
@@ -661,6 +661,7 @@
rc = -ETIMEDOUT;
}
if (rc < 0) {
+ pr_err("%s: rc = %d\n", __func__, rc);
mutex_unlock(&session->lock);
return rc;
}
diff --git a/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c b/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
index 1203d17..8cdaa4b 100644
--- a/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
+++ b/drivers/media/platform/msm/camera_v2/pproc/cpp/msm_cpp.c
@@ -588,7 +588,7 @@
pr_err("%s: Bandwidth registration Failed!\n", __func__);
goto bus_scale_register_failed;
}
- msm_isp_update_bandwidth(ISP_CPP, 981345600, 1600020000);
+ msm_isp_update_bandwidth(ISP_CPP, 981345600, 1066680000);
if (cpp_dev->fs_cpp == NULL) {
cpp_dev->fs_cpp =
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
index fbd4a2e..33eaa69 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.c
@@ -185,49 +185,15 @@
{"csi_pclk", -1},
};
-static struct msm_cam_clk_info csid0_8974_clk_info[] = {
+static struct msm_cam_clk_info csid_8974_clk_info[] = {
{"camss_top_ahb_clk", -1},
{"ispif_ahb_clk", -1},
- {"csi0_ahb_clk", -1},
- {"csi0_src_clk", 200000000},
- {"csi0_clk", -1},
- {"csi0_phy_clk", -1},
- {"csi0_pix_clk", -1},
- {"csi0_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid1_8974_clk_info[] = {
- {"csi1_ahb_clk", -1},
- {"csi1_src_clk", 200000000},
- {"csi1_clk", -1},
- {"csi1_phy_clk", -1},
- {"csi1_pix_clk", -1},
- {"csi1_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid2_8974_clk_info[] = {
- {"csi2_ahb_clk", -1},
- {"csi2_src_clk", 200000000},
- {"csi2_clk", -1},
- {"csi2_phy_clk", -1},
- {"csi2_pix_clk", -1},
- {"csi2_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_info csid3_8974_clk_info[] = {
- {"csi3_ahb_clk", -1},
- {"csi3_src_clk", 200000000},
- {"csi3_clk", -1},
- {"csi3_phy_clk", -1},
- {"csi3_pix_clk", -1},
- {"csi3_rdi_clk", -1},
-};
-
-static struct msm_cam_clk_setting csid_8974_clk_info[] = {
- {&csid0_8974_clk_info[0], ARRAY_SIZE(csid0_8974_clk_info)},
- {&csid1_8974_clk_info[0], ARRAY_SIZE(csid1_8974_clk_info)},
- {&csid2_8974_clk_info[0], ARRAY_SIZE(csid2_8974_clk_info)},
- {&csid3_8974_clk_info[0], ARRAY_SIZE(csid3_8974_clk_info)},
+ {"csi_ahb_clk", -1},
+ {"csi_src_clk", 200000000},
+ {"csi_clk", -1},
+ {"csi_phy_clk", -1},
+ {"csi_pix_clk", -1},
+ {"csi_rdi_clk", -1},
};
static struct camera_vreg_t csid_8960_vreg_info[] = {
@@ -241,7 +207,6 @@
static int msm_csid_init(struct csid_device *csid_dev, uint32_t *csid_version)
{
int rc = 0;
- uint8_t core_id = 0;
if (!csid_version) {
pr_err("%s:%d csid_version NULL\n", __func__, __LINE__);
@@ -306,26 +271,14 @@
}
rc = msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 1);
+ csid_8974_clk_info, csid_dev->csid_clk,
+ ARRAY_SIZE(csid_8974_clk_info), 1);
if (rc < 0) {
pr_err("%s: clock enable failed\n", __func__);
- goto csid0_clk_enable_failed;
- }
- core_id = csid_dev->pdev->id;
- if (core_id) {
- rc = msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[core_id].clk_info,
- csid_dev->csid_clk,
- csid_8974_clk_info[core_id].num_clk_info, 1);
- if (rc < 0) {
- pr_err("%s: clock enable failed\n",
- __func__);
- goto clk_enable_failed;
- }
+ goto clk_enable_failed;
}
}
-
+ CDBG("%s:%d called\n", __func__, __LINE__);
csid_dev->hw_version =
msm_camera_io_r(csid_dev->base + CSID_HW_VERSION_ADDR);
CDBG("%s:%d called csid_dev->hw_version %x\n", __func__, __LINE__,
@@ -341,12 +294,6 @@
return rc;
clk_enable_failed:
- if (CSID_VERSION >= CSID_VERSION_V3) {
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 0);
- }
-csid0_clk_enable_failed:
if (CSID_VERSION <= CSID_VERSION_V2) {
msm_camera_enable_vreg(&csid_dev->pdev->dev,
csid_8960_vreg_info, ARRAY_SIZE(csid_8960_vreg_info),
@@ -375,7 +322,6 @@
static int msm_csid_release(struct csid_device *csid_dev)
{
uint32_t irq;
- uint8_t core_id = 0;
if (csid_dev->csid_state != CSID_POWER_UP) {
pr_err("%s: csid invalid state %d\n", __func__,
@@ -401,16 +347,8 @@
csid_8960_vreg_info, ARRAY_SIZE(csid_8960_vreg_info),
NULL, 0, &csid_dev->csi_vdd, 0);
} else if (csid_dev->hw_version >= CSID_VERSION_V3) {
- core_id = csid_dev->pdev->id;
- if (core_id)
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[core_id].clk_info,
- csid_dev->csid_clk,
- csid_8974_clk_info[core_id].num_clk_info, 0);
-
- msm_cam_clk_enable(&csid_dev->pdev->dev,
- csid_8974_clk_info[0].clk_info, csid_dev->csid0_clk,
- csid_8974_clk_info[0].num_clk_info, 0);
+ msm_cam_clk_enable(&csid_dev->pdev->dev, csid_8974_clk_info,
+ csid_dev->csid_clk, ARRAY_SIZE(csid_8974_clk_info), 0);
msm_camera_enable_vreg(&csid_dev->pdev->dev,
csid_vreg_info, ARRAY_SIZE(csid_vreg_info),
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
index 7ae1392..fd4db79 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csid/msm_csid.h
@@ -38,7 +38,6 @@
uint32_t hw_version;
enum msm_csid_state_t csid_state;
- struct clk *csid0_clk[11];
struct clk *csid_clk[11];
};
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h b/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
index e5093f8..ba964a2 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/include/csi2.0/msm_csiphy_hwreg.h
@@ -14,7 +14,7 @@
#define MSM_CSIPHY_HWREG_H
/*MIPI CSI PHY registers*/
-#define MIPI_CSIPHY_HW_VERSION_ADDR 0x180
+#define MIPI_CSIPHY_HW_VERSION_ADDR 0x17C
#define MIPI_CSIPHY_LNn_CFG1_ADDR 0x0
#define MIPI_CSIPHY_LNn_CFG2_ADDR 0x4
#define MIPI_CSIPHY_LNn_CFG3_ADDR 0x8
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
index df3ee60..7d3a1fc 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.c
@@ -43,14 +43,15 @@
uint8_t lane_cnt = 0;
uint16_t lane_mask = 0;
void __iomem *csiphybase;
+ uint8_t csiphy_id = csiphy_dev->pdev->id;
csiphybase = csiphy_dev->base;
if (!csiphybase) {
pr_err("%s: csiphybase NULL\n", __func__);
return -EINVAL;
}
- csiphy_dev->lane_mask[csiphy_dev->pdev->id] |= csiphy_params->lane_mask;
- lane_mask = csiphy_dev->lane_mask[csiphy_dev->pdev->id];
+ csiphy_dev->lane_mask[csiphy_id] |= csiphy_params->lane_mask;
+ lane_mask = csiphy_dev->lane_mask[csiphy_id];
lane_cnt = csiphy_params->lane_cnt;
if (csiphy_params->lane_cnt < 1 || csiphy_params->lane_cnt > 4) {
pr_err("%s: unsupported lane cnt %d\n",
@@ -58,11 +59,28 @@
return rc;
}
- CDBG("%s csiphy_params, mask = %x, cnt = %d, settle cnt = %x\n",
+ CDBG("%s csiphy_params, mask = %x cnt = %d settle cnt = %x csid %d\n",
__func__,
csiphy_params->lane_mask,
csiphy_params->lane_cnt,
- csiphy_params->settle_cnt);
+ csiphy_params->settle_cnt,
+ csiphy_params->csid_core);
+
+ if (csiphy_dev->hw_version >= CSIPHY_VERSION_V3) {
+ val = msm_camera_io_r(csiphy_dev->clk_mux_base);
+ if (csiphy_params->combo_mode &&
+ (csiphy_params->lane_mask & 0x18)) {
+ val &= ~0xf0;
+ val |= csiphy_params->csid_core << 4;
+ } else {
+ val &= ~0xf;
+ val |= csiphy_params->csid_core;
+ }
+ msm_camera_io_w(val, csiphy_dev->clk_mux_base);
+ CDBG("%s clk mux addr %p val 0x%x\n", __func__,
+ csiphy_dev->clk_mux_base, val);
+ mb();
+ }
msm_camera_io_w(0x1, csiphybase + MIPI_CSIPHY_GLBL_T_INIT_CFG0_ADDR);
msm_camera_io_w(0x1, csiphybase + MIPI_CSIPHY_T_WAKEUP_CFG0_ADDR);
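The read-modify-write just above steers a CSID core onto the PHY by programming a 4-bit field of the shared clk-mux register: the upper nibble when the combo lane group (mask 0x18) is in use, the lower nibble otherwise, followed by mb() so the selection lands before the lanes are enabled. A hedged sketch of the same bit manipulation, as a pure helper with made-up naming:

#include <linux/types.h>

/*
 * Pick which CSID core feeds the PHY clock: bits [3:0] select the core for
 * the primary lane group, bits [7:4] for the combo lanes (lane mask 0x18).
 * csid_core is assumed to fit in 4 bits.
 */
static u32 clk_mux_select_csid(u32 regval, u32 csid_core, bool combo_lanes)
{
	if (combo_lanes) {
		regval &= ~0xf0;                /* clear the combo nibble   */
		regval |= (csid_core & 0xf) << 4;
	} else {
		regval &= ~0x0f;                /* clear the primary nibble */
		regval |= csid_core & 0xf;
	}
	return regval;
}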
@@ -204,6 +222,22 @@
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 1);
} else {
+ if (!csiphy_dev->clk_mux_mem || !csiphy_dev->clk_mux_io) {
+ pr_err("%s clk mux mem %p io %p\n", __func__,
+ csiphy_dev->clk_mux_mem,
+ csiphy_dev->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+ csiphy_dev->clk_mux_base = ioremap(
+ csiphy_dev->clk_mux_mem->start,
+ resource_size(csiphy_dev->clk_mux_mem));
+ if (!csiphy_dev->clk_mux_base) {
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ rc = -ENOMEM;
+ return rc;
+ }
+
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
@@ -269,12 +303,31 @@
}
CDBG("%s:%d called\n", __func__, __LINE__);
+
if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 1);
} else {
+
+ if (!csiphy_dev->clk_mux_mem || !csiphy_dev->clk_mux_io) {
+ pr_err("%s clk mux mem %p io %p\n", __func__,
+ csiphy_dev->clk_mux_mem,
+ csiphy_dev->clk_mux_io);
+ rc = -ENOMEM;
+ return rc;
+ }
+
+ csiphy_dev->clk_mux_base = ioremap(
+ csiphy_dev->clk_mux_mem->start,
+ resource_size(csiphy_dev->clk_mux_mem));
+ if (!csiphy_dev->clk_mux_base) {
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ rc = -ENOMEM;
+ return rc;
+ }
+
CDBG("%s:%d called\n", __func__, __LINE__);
rc = msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
@@ -359,14 +412,16 @@
disable_irq(csiphy_dev->irq->start);
- if (CSIPHY_VERSION < CSIPHY_VERSION_V3)
+ if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 0);
- else
+ } else {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8974_clk_info), 0);
+ iounmap(csiphy_dev->clk_mux_base);
+ }
iounmap(csiphy_dev->base);
csiphy_dev->base = NULL;
@@ -426,20 +481,23 @@
msm_camera_io_w(0x0, csiphy_dev->base + MIPI_CSIPHY_LNCK_CFG2_ADDR);
msm_camera_io_w(0x0, csiphy_dev->base + MIPI_CSIPHY_GLBL_PWR_CFG_ADDR);
- if (CSIPHY_VERSION < CSIPHY_VERSION_V3)
+ if (CSIPHY_VERSION < CSIPHY_VERSION_V3) {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8960_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8960_clk_info), 0);
- else
+ } else {
msm_cam_clk_enable(&csiphy_dev->pdev->dev,
csiphy_8974_clk_info, csiphy_dev->csiphy_clk,
ARRAY_SIZE(csiphy_8974_clk_info), 0);
+ iounmap(csiphy_dev->clk_mux_base);
+ }
iounmap(csiphy_dev->base);
csiphy_dev->base = NULL;
csiphy_dev->csiphy_state = CSIPHY_POWER_DOWN;
return 0;
}
+
#endif
static long msm_csiphy_cmd(struct csiphy_device *csiphy_dev, void *arg)
@@ -588,6 +646,17 @@
}
disable_irq(new_csiphy_dev->irq->start);
+ new_csiphy_dev->clk_mux_mem = platform_get_resource_byname(pdev,
+ IORESOURCE_MEM, "csiphy_clk_mux");
+ if (new_csiphy_dev->clk_mux_mem) {
+ new_csiphy_dev->clk_mux_io = request_mem_region(
+ new_csiphy_dev->clk_mux_mem->start,
+ resource_size(new_csiphy_dev->clk_mux_mem),
+ new_csiphy_dev->clk_mux_mem->name);
+ if (!new_csiphy_dev->clk_mux_io)
+ pr_err("%s: ERROR %d\n", __func__, __LINE__);
+ }
+
new_csiphy_dev->pdev = pdev;
new_csiphy_dev->msm_sd.sd.internal_ops = &msm_csiphy_internal_ops;
new_csiphy_dev->msm_sd.sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
diff --git a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
index e19be34..a11b958 100644
--- a/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
+++ b/drivers/media/platform/msm/camera_v2/sensor/csiphy/msm_csiphy.h
@@ -32,9 +32,12 @@
struct msm_sd_subdev msm_sd;
struct v4l2_subdev subdev;
struct resource *mem;
+ struct resource *clk_mux_mem;
struct resource *irq;
struct resource *io;
+ struct resource *clk_mux_io;
void __iomem *base;
+ void __iomem *clk_mux_base;
struct mutex mutex;
uint32_t hw_version;
enum msm_csiphy_state_t csiphy_state;
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
index 431d35d..b56378a 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.c
@@ -25,6 +25,9 @@
/* Length of mandatory fields that must exist in header of video PES */
#define PES_MANDATORY_FIELDS_LEN 9
+/* Index of first byte in TS packet holding STC */
+#define STC_LOCATION_IDX 188
+
#define MAX_PES_LENGTH (SZ_64K)
#define MAX_TS_PACKETS_FOR_SDMX_PROCESS (500)
@@ -1085,7 +1088,7 @@
feed_data->buffer_desc.desc[0].read_ptr = 0;
feed_data->buffer_desc.desc[0].write_ptr = 0;
feed_data->buffer_desc.desc[0].handle =
- ion_share_dma_buf(client, temp_handle);
+ ion_share_dma_buf_fd(client, temp_handle);
if (IS_ERR_VALUE(feed_data->buffer_desc.desc[0].handle)) {
MPQ_DVB_ERR_PRINT(
"%s: FAILED to share payload buffer %d\n",
@@ -2202,6 +2205,14 @@
size_t len = 0;
struct dmx_pts_dts_info *pts_dts;
+ if (meta_data->packet_type == DMX_PES_PACKET) {
+ pts_dts = &meta_data->info.pes.pts_dts_info;
+ data->buf.stc = meta_data->info.pes.stc;
+ } else {
+ pts_dts = &meta_data->info.framing.pts_dts_info;
+ data->buf.stc = meta_data->info.framing.stc;
+ }
+
pts_dts = meta_data->packet_type == DMX_PES_PACKET ?
&meta_data->info.pes.pts_dts_info :
&meta_data->info.framing.pts_dts_info;
@@ -2313,6 +2324,7 @@
packet.raw_data_offset = feed_data->frame_offset;
meta_data.info.framing.pattern_type =
feed_data->last_framing_match_type;
+ meta_data.info.framing.stc = feed_data->last_framing_match_stc;
mpq_streambuffer_get_buffer_handle(stream_buffer,
0, /* current write buffer handle */
@@ -2385,6 +2397,7 @@
mpq_dmx_save_pts_dts(feed_data);
meta_data.packet_type = DMX_PES_PACKET;
+ meta_data.info.pes.stc = feed_data->prev_stc;
mpq_dmx_update_decoder_stat(mpq_demux);
@@ -2411,7 +2424,8 @@
static int mpq_dmx_process_video_packet_framing(
struct dvb_demux_feed *feed,
- const u8 *buf)
+ const u8 *buf,
+ u64 curr_stc)
{
int bytes_avail;
u32 ts_payload_offset;
@@ -2596,7 +2610,8 @@
* pass data to decoder only after sequence header
* or equivalent is found. Otherwise the data is dropped.
*/
- if (!(feed_data->found_sequence_header_pattern)) {
+ if (!feed_data->found_sequence_header_pattern) {
+ feed_data->prev_stc = curr_stc;
spin_unlock(&feed_data->video_buffer_lock);
return 0;
}
@@ -2699,6 +2714,12 @@
framing_res.info[i].type;
feed_data->last_pattern_offset =
framing_res.info[i].offset;
+ if (framing_res.info[i].used_prefix_size)
+ feed_data->last_framing_match_stc =
+ feed_data->prev_stc;
+ else
+ feed_data->last_framing_match_stc =
+ curr_stc;
continue;
}
/*
@@ -2744,6 +2765,8 @@
packet.raw_data_offset = feed_data->frame_offset;
meta_data.info.framing.pattern_type =
feed_data->last_framing_match_type;
+ meta_data.info.framing.stc =
+ feed_data->last_framing_match_stc;
mpq_streambuffer_get_buffer_handle(
stream_buffer,
@@ -2784,8 +2807,13 @@
framing_res.info[i].type;
feed_data->last_pattern_offset =
framing_res.info[i].offset;
+ if (framing_res.info[i].used_prefix_size)
+ feed_data->last_framing_match_stc = feed_data->prev_stc;
+ else
+ feed_data->last_framing_match_stc = curr_stc;
}
+ feed_data->prev_stc = curr_stc;
feed_data->first_prefix_size = 0;
if (pending_data_len) {
@@ -2810,7 +2838,8 @@
static int mpq_dmx_process_video_packet_no_framing(
struct dvb_demux_feed *feed,
- const u8 *buf)
+ const u8 *buf,
+ u64 curr_stc)
{
int bytes_avail;
u32 ts_payload_offset;
@@ -2886,6 +2915,7 @@
mpq_dmx_save_pts_dts(feed_data);
meta_data.packet_type = DMX_PES_PACKET;
+ meta_data.info.pes.stc = feed_data->prev_stc;
mpq_dmx_update_decoder_stat(mpq_demux);
@@ -2928,6 +2958,8 @@
} else {
feed->pusi_seen = 1;
}
+
+ feed_data->prev_stc = curr_stc;
}
/*
@@ -3077,10 +3109,25 @@
struct dvb_demux_feed *feed,
const u8 *buf)
{
+ u64 curr_stc;
+ struct mpq_demux *mpq_demux = feed->demux->priv;
+
+ if ((mpq_demux->source >= DMX_SOURCE_DVR0) &&
+ (mpq_demux->demux.tsp_format != DMX_TSP_FORMAT_192_TAIL)) {
+ curr_stc = 0;
+ } else {
+ curr_stc = buf[STC_LOCATION_IDX + 2] << 16;
+ curr_stc += buf[STC_LOCATION_IDX + 1] << 8;
+ curr_stc += buf[STC_LOCATION_IDX];
+ curr_stc *= 256; /* convert from 105.47 KHZ to 27MHz */
+ }
+
if (mpq_dmx_info.decoder_framing)
- return mpq_dmx_process_video_packet_no_framing(feed, buf);
+ return mpq_dmx_process_video_packet_no_framing(feed, buf,
+ curr_stc);
else
- return mpq_dmx_process_video_packet_framing(feed, buf);
+ return mpq_dmx_process_video_packet_framing(feed, buf,
+ curr_stc);
}
EXPORT_SYMBOL(mpq_dmx_process_video_packet);
@@ -3151,9 +3198,9 @@
(mpq_demux->demux.tsp_format != DMX_TSP_FORMAT_192_TAIL)) {
stc = 0;
} else {
- stc = buf[190] << 16;
- stc += buf[189] << 8;
- stc += buf[188];
+ stc = buf[STC_LOCATION_IDX + 2] << 16;
+ stc += buf[STC_LOCATION_IDX + 1] << 8;
+ stc += buf[STC_LOCATION_IDX];
stc *= 256; /* convert from 105.47 KHZ to 27MHz */
}
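Both STC reads above decode the same 3-byte timestamp that the TSPP hardware appends after the 188-byte TS payload (hence STC_LOCATION_IDX of 188 inside a 192-byte packet) and then scale by 256, because the counter ticks at 27 MHz / 256, roughly 105.47 kHz. A small self-contained sketch of the decode, assuming that 188 + 4 packet layout:

#include <linux/types.h>

#define TSP_WITH_TAIL_SIZE      192
#define STC_LOCATION_IDX        188     /* first byte of the appended STC */

/* Recover the 27 MHz STC from a 192-byte "188 + 4 tail" TS packet. */
static u64 tsp_tail_stc_27mhz(const u8 *buf)
{
	u64 stc;

	stc  = (u64)buf[STC_LOCATION_IDX + 2] << 16;
	stc |= (u64)buf[STC_LOCATION_IDX + 1] << 8;
	stc |= buf[STC_LOCATION_IDX];

	/* the hardware counter resolution is 27 MHz / 256, so scale back up */
	return stc * 256;
}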
@@ -4190,6 +4237,9 @@
pes_header = (struct pes_packet_header *)
&metadata_buf[pes_header_offset];
meta_data.packet_type = DMX_PES_PACKET;
+ /* TODO - set to real STC when SDMX supports it */
+ meta_data.info.pes.stc = 0;
+
if (pes_header->pts_dts_flag & 0x2) {
meta_data.info.pes.pts_dts_info.pts_exist = 1;
meta_data.info.pes.pts_dts_info.pts =
@@ -4544,7 +4594,7 @@
}
MPQ_DVB_DBG_PRINT(
- "\n\n%s: Before SDMX_process: input read_offset=%u, fill count=%u\n",
+ "%s: Before SDMX_process: input read_offset=%u, fill count=%u\n",
__func__, read_offset, fill_count);
process_start_time = current_kernel_time();
@@ -4587,14 +4637,19 @@
int mpq_sdmx_process(struct mpq_demux *mpq_demux,
struct sdmx_buff_descr *input,
u32 fill_count,
- u32 read_offset)
+ u32 read_offset,
+ size_t tsp_size)
{
int ret;
int todo;
int total_bytes_read = 0;
- int limit = mpq_sdmx_proc_limit * mpq_demux->demux.ts_packet_size;
+ int limit = mpq_sdmx_proc_limit * tsp_size;
- while (fill_count >= mpq_demux->demux.ts_packet_size) {
+ MPQ_DVB_DBG_PRINT(
+ "\n\n%s: read_offset=%u, fill_count=%u, tsp_size=%u\n",
+ __func__, read_offset, fill_count, tsp_size);
+
+ while (fill_count >= tsp_size) {
todo = fill_count > limit ? limit : fill_count;
ret = mpq_sdmx_process_buffer(mpq_demux, input, todo,
read_offset);
@@ -4643,7 +4698,8 @@
}
read_offset = mpq_demux->demux.dmx.dvr_input.ringbuff->pread;
- return mpq_sdmx_process(mpq_demux, &buf_desc, count, read_offset);
+ return mpq_sdmx_process(mpq_demux, &buf_desc, count,
+ read_offset, mpq_demux->demux.ts_packet_size);
}
int mpq_dmx_write(struct dmx_demux *demux, const char *buf, size_t count)
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
index 80b5428..ca7c15a 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_common.h
@@ -244,6 +244,8 @@
* reported for this frame.
* @last_framing_match_type: Used for saving the type of
* the previous pattern match found in this video feed.
+ * @last_framing_match_stc: Used for saving the STC attached to TS packet
+ * of the previous pattern match found in this video feed.
* @found_sequence_header_pattern: Flag used to note that an MPEG-2
* Sequence Header, H.264 SPS or VC-1 Sequence Header pattern
* (whichever is relevant according to the video standard) had already
@@ -272,6 +274,7 @@
* buffer space.
* @last_pkt_index: used to save the last streambuffer packet index reported in
* a new elementary stream data event.
+ * @prev_stc: STC attached to the previous video TS packet
*/
struct mpq_video_feed_info {
struct mpq_streambuffer *video_buffer;
@@ -289,6 +292,7 @@
u32 last_pattern_offset;
u32 pending_pattern_len;
u64 last_framing_match_type;
+ u64 last_framing_match_stc;
int found_sequence_header_pattern;
struct dvb_dmx_video_prefix_size_masks prefix_size;
u32 first_prefix_size;
@@ -303,6 +307,7 @@
u32 ts_packets_num;
u32 ts_dropped_bytes;
int last_pkt_index;
+ u64 prev_stc;
};
/**
@@ -716,13 +721,15 @@
* @input: input buffer descriptor
* @fill_count: number of data bytes in input buffer that can be read
* @read_offset: offset in buffer for reading
+ * @tsp_size: size of single TS packet
*
* Return number of bytes read or error code
*/
int mpq_sdmx_process(struct mpq_demux *mpq_demux,
struct sdmx_buff_descr *input,
u32 fill_count,
- u32 read_offset);
+ u32 read_offset,
+ size_t tsp_size);
/**
* mpq_sdmx_loaded - Returns 1 if secure demux application is loaded,
diff --git a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
index 193141a..43a65e9 100644
--- a/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
+++ b/drivers/media/platform/msm/dvb/demux/mpq_dmx_plugin_tspp_v1.c
@@ -382,7 +382,8 @@
buff_current_addr_phys - buff_start_addr_phys);
mpq_sdmx_process(mpq_demux, &input, aggregate_len,
- buff_current_addr_phys - buff_start_addr_phys);
+ buff_current_addr_phys - buff_start_addr_phys,
+ TSPP_RAW_TTS_SIZE);
}
for (i = 0; i < aggregate_count; i++)
diff --git a/drivers/media/platform/msm/dvb/include/mpq_adapter.h b/drivers/media/platform/msm/dvb/include/mpq_adapter.h
index 19abbbe..a2ade18 100644
--- a/drivers/media/platform/msm/dvb/include/mpq_adapter.h
+++ b/drivers/media/platform/msm/dvb/include/mpq_adapter.h
@@ -64,11 +64,17 @@
/** PTS/DTS information */
struct dmx_pts_dts_info pts_dts_info;
+
+ /** STC value attached to first TS packet holding the pattern */
+ u64 stc;
};
struct dmx_pes_packet_info {
/** PTS/DTS information */
struct dmx_pts_dts_info pts_dts_info;
+
+ /** STC value attached to first TS packet holding the PES */
+ u64 stc;
};
struct dmx_marker_info {
diff --git a/drivers/media/platform/msm/vidc/hfi_packetization.c b/drivers/media/platform/msm/vidc/hfi_packetization.c
index ef3e698..7fc8810 100644
--- a/drivers/media/platform/msm/vidc/hfi_packetization.c
+++ b/drivers/media/platform/msm/vidc/hfi_packetization.c
@@ -1057,6 +1057,51 @@
pkt->size += sizeof(u32) + sizeof(struct hfi_quantization);
break;
}
+ case HAL_PARAM_VENC_SESSION_QP_RANGE:
+ {
+ struct hfi_quantization_range *hfi;
+ struct hfi_quantization_range *hal_range =
+ (struct hfi_quantization_range *) pdata;
+ u32 min_qp, max_qp;
+
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_SESSION_QP_RANGE;
+ hfi = (struct hfi_quantization_range *)
+ &pkt->rg_property_data[1];
+
+ min_qp = hal_range->min_qp;
+ max_qp = hal_range->max_qp;
+
+ /* We'll be packing in the qp, so make sure we
+ * won't be losing data when masking */
+ if (min_qp > 0xff || max_qp > 0xff) {
+ dprintk(VIDC_ERR, "qp value out of range\n");
+ rc = -ERANGE;
+ break;
+ }
+
+ /* When creating the packet, pack the qp value as
+ * 0xiippbb, where ii = qp range for I-frames,
+ * pp = qp range for P-frames, etc. */
+ hfi->min_qp = min_qp | min_qp << 8 | min_qp << 16;
+ hfi->max_qp = max_qp | max_qp << 8 | max_qp << 16;
+ hfi->layer_id = hal_range->layer_id;
+
+ pkt->size += sizeof(u32) +
+ sizeof(struct hfi_quantization_range);
+ break;
+ }
+ case HAL_PARAM_VENC_MAX_NUM_B_FRAMES:
+ {
+ struct hfi_max_num_b_frames *hfi;
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_MAX_NUM_B_FRAMES;
+ hfi = (struct hfi_max_num_b_frames *) &pkt->rg_property_data[1];
+ memcpy(hfi, (struct hfi_max_num_b_frames *) pdata,
+ sizeof(struct hfi_max_num_b_frames));
+ pkt->size += sizeof(u32) + sizeof(struct hfi_max_num_b_frames);
+ break;
+ }
case HAL_CONFIG_VENC_INTRA_PERIOD:
{
struct hfi_intra_period *hfi;
@@ -1203,6 +1248,16 @@
}
case HAL_CONFIG_VPE_DEINTERLACE:
break;
+ case HAL_PARAM_VENC_H264_GENERATE_AUDNAL:
+ {
+ struct hfi_enable *hfi;
+ pkt->rg_property_data[0] =
+ HFI_PROPERTY_PARAM_VENC_H264_GENERATE_AUDNAL;
+ hfi = (struct hfi_enable *) &pkt->rg_property_data[1];
+ hfi->enable = ((struct hal_enable *) pdata)->enable;
+ pkt->size += sizeof(u32) + sizeof(struct hfi_enable);
+ break;
+ }
/* FOLLOWING PROPERTIES ARE NOT IMPLEMENTED IN CORE YET */
case HAL_CONFIG_BUFFER_REQUIREMENTS:
case HAL_CONFIG_PRIORITY:
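The HAL_PARAM_VENC_SESSION_QP_RANGE case earlier in this file packs one 8-bit QP per frame type into a single word laid out as 0xiippbb (I-, P-, B-frame), after first rejecting values that would not fit in a byte. A worked example of that packing, separate from the driver, since this patch uses the same value for all three frame types:

#include <linux/types.h>

/*
 * Replicate an 8-bit QP into the 0x00iippbb layout expected by the firmware
 * property; the patch above applies one value to I-, P- and B-frames alike.
 */
static u32 pack_qp(u32 qp)
{
	if (qp > 0xff)
		return 0;       /* caller is expected to reject these first */
	return qp | qp << 8 | qp << 16;
}

/* pack_qp(10) == 0x000a0a0a, pack_qp(51) == 0x00333333 */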
diff --git a/drivers/media/platform/msm/vidc/hfi_response_handler.c b/drivers/media/platform/msm/vidc/hfi_response_handler.c
index 91fb514..43a3dad 100644
--- a/drivers/media/platform/msm/vidc/hfi_response_handler.c
+++ b/drivers/media/platform/msm/vidc/hfi_response_handler.c
@@ -174,8 +174,8 @@
switch (pkt->event_id) {
case HFI_EVENT_SYS_ERROR:
- dprintk(VIDC_ERR, "HFI_EVENT_SYS_ERROR: %d\n",
- pkt->event_data1);
+ dprintk(VIDC_ERR, "HFI_EVENT_SYS_ERROR: %d, 0x%x\n",
+ pkt->event_data1, pkt->event_data2);
hfi_process_sys_error(callback, device_id);
break;
case HFI_EVENT_SESSION_ERROR:
diff --git a/drivers/media/platform/msm/vidc/msm_venc.c b/drivers/media/platform/msm/vidc/msm_venc.c
index da97c7a..cb50431 100644
--- a/drivers/media/platform/msm/vidc/msm_venc.c
+++ b/drivers/media/platform/msm/vidc/msm_venc.c
@@ -20,6 +20,7 @@
#define MSM_VENC_DVC_NAME "msm_venc_8974"
#define MIN_NUM_OUTPUT_BUFFERS 4
+#define MIN_NUM_CAPTURE_BUFFERS 4
#define MIN_BIT_RATE 64000
#define MAX_BIT_RATE 160000000
#define DEFAULT_BIT_RATE 64000
@@ -35,6 +36,7 @@
#define P_FRAME_QP 28
#define B_FRAME_QP 30
#define MAX_INTRA_REFRESH_MBS 300
+#define MAX_NUM_B_FRAMES 4
#define L_MODE V4L2_MPEG_VIDEO_H264_LOOP_FILTER_MODE_DISABLED_AT_SLICE_BOUNDARY
#define CODING V4L2_MPEG_VIDEO_MPEG4_PROFILE_ADVANCED_CODING_EFFICIENCY
@@ -419,6 +421,30 @@
.cluster = MSM_VENC_CTRL_CLUSTER_QP,
},
{
+ .id = V4L2_CID_MPEG_VIDEO_H264_MIN_QP,
+ .name = "H264 Minimum QP",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 1,
+ .maximum = 51,
+ .default_value = 1,
+ .step = 1,
+ .menu_skip_mask = 0,
+ .qmenu = NULL,
+ .cluster = MSM_VENC_CTRL_CLUSTER_QP,
+ },
+ {
+ .id = V4L2_CID_MPEG_VIDEO_H264_MAX_QP,
+ .name = "H264 Maximum QP",
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .minimum = 1,
+ .maximum = 51,
+ .default_value = 51,
+ .step = 1,
+ .menu_skip_mask = 0,
+ .qmenu = NULL,
+ .cluster = MSM_VENC_CTRL_CLUSTER_QP,
+ },
+ {
.id = V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE,
.name = "Slice Mode",
.type = V4L2_CTRL_TYPE_MENU,
@@ -641,6 +667,15 @@
V4L2_MPEG_VIDC_VIDEO_H264_VUI_TIMING_INFO_DISABLED,
.cluster = MSM_VENC_CTRL_CLUSTER_TIMING,
},
+ {
+ .id = V4L2_CID_MPEG_VIDC_VIDEO_H264_AU_DELIMITER,
+ .name = "H264 AU Delimiter",
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .minimum = V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED,
+ .maximum = V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_ENABLED,
+ .default_value =
+ V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED,
+ },
};
#define NUM_CTRLS ARRAY_SIZE(msm_venc_ctrls)
@@ -728,7 +763,7 @@
struct v4l2_ctrl *ctrl = NULL;
u32 extradata = 0;
if (!q || !q->drv_priv) {
- dprintk(VIDC_ERR, "Invalid input, q = %p\n", q);
+ dprintk(VIDC_ERR, "Invalid input\n");
return -EINVAL;
}
inst = q->drv_priv;
@@ -743,16 +778,21 @@
case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
*num_planes = 1;
buff_req = get_buff_req_buffer(inst, HAL_BUFFER_OUTPUT);
- *num_buffers = buff_req->buffer_count_actual =
+ if (buff_req) {
+ *num_buffers = buff_req->buffer_count_actual =
max(*num_buffers, buff_req->buffer_count_actual);
- if (*num_buffers > VIDEO_MAX_FRAME) {
- dprintk(VIDC_ERR,
+ }
+ if (*num_buffers < MIN_NUM_CAPTURE_BUFFERS)
+ *num_buffers = MIN_NUM_CAPTURE_BUFFERS;
+
+ if (*num_buffers > VIDEO_MAX_FRAME) {
+ dprintk(VIDC_ERR,
"Failed : No of slices requested = %d"\
" Max supported slices = %d",
*num_buffers, VIDEO_MAX_FRAME);
- rc = -EINVAL;
- break;
- }
+ rc = -EINVAL;
+ break;
+ }
ctrl = v4l2_ctrl_find(&inst->ctrl_handler,
V4L2_CID_MPEG_VIDC_VIDEO_EXTRADATA);
if (ctrl)
@@ -1129,6 +1169,7 @@
struct hal_h264_db_control h264_db_control;
struct hal_enable enable;
struct hal_h264_vui_timing_info vui_timing_info;
+ struct hal_quantization_range qp_range;
u32 property_id = 0, property_val = 0;
void *pdata = NULL;
struct v4l2_ctrl *temp_ctrl = NULL;
@@ -1203,11 +1244,24 @@
break;
case V4L2_CID_MPEG_VIDC_VIDEO_NUM_B_FRAMES:
temp_ctrl = TRY_GET_CTRL(V4L2_CID_MPEG_VIDC_VIDEO_NUM_P_FRAMES);
-
- property_id =
- HAL_CONFIG_VENC_INTRA_PERIOD;
intra_period.bframes = ctrl->val;
intra_period.pframes = temp_ctrl->val;
+ if (intra_period.bframes) {
+ u32 max_num_b_frames = MAX_NUM_B_FRAMES;
+ property_id =
+ HAL_PARAM_VENC_MAX_NUM_B_FRAMES;
+ pdata = &max_num_b_frames;
+ rc = call_hfi_op(hdev, session_set_property,
+ (void *)inst->session, property_id, pdata);
+ if (rc) {
+ dprintk(VIDC_ERR,
+ "Failed : Setprop MAX_NUM_B_FRAMES"
+ "%d", rc);
+ break;
+ }
+ }
+ property_id =
+ HAL_CONFIG_VENC_INTRA_PERIOD;
pdata = &intra_period;
break;
case V4L2_CID_MPEG_VIDC_VIDEO_REQUEST_IFRAME:
@@ -1458,6 +1512,44 @@
pdata = &quantization;
break;
}
+ case V4L2_CID_MPEG_VIDEO_H264_MIN_QP: {
+ struct v4l2_ctrl *qp_max;
+
+ qp_max = TRY_GET_CTRL(V4L2_CID_MPEG_VIDEO_H264_MAX_QP);
+ if (ctrl->val >= qp_max->val) {
+ dprintk(VIDC_ERR, "Bad range: Min QP (%d) > Max QP(%d)",
+ ctrl->val, qp_max->val);
+ rc = -ERANGE;
+ break;
+ }
+
+ property_id = HAL_PARAM_VENC_SESSION_QP_RANGE;
+ qp_range.layer_id = 0;
+ qp_range.max_qp = qp_max->val;
+ qp_range.min_qp = ctrl->val;
+
+ pdata = &qp_range;
+ break;
+ }
+ case V4L2_CID_MPEG_VIDEO_H264_MAX_QP: {
+ struct v4l2_ctrl *qp_min;
+
+ qp_min = TRY_GET_CTRL(V4L2_CID_MPEG_VIDEO_H264_MIN_QP);
+ if (ctrl->val <= qp_min->val) {
+ dprintk(VIDC_ERR, "Bad range: Max QP (%d) < Min QP(%d)",
+ ctrl->val, qp_min->val);
+ rc = -ERANGE;
+ break;
+ }
+
+ property_id = HAL_PARAM_VENC_SESSION_QP_RANGE;
+ qp_range.layer_id = 0;
+ qp_range.max_qp = ctrl->val;
+ qp_range.min_qp = qp_min->val;
+
+ pdata = &qp_range;
+ break;
+ }
case V4L2_CID_MPEG_VIDEO_MULTI_SLICE_MODE: {
int temp = 0;
@@ -1680,6 +1772,23 @@
pdata = &vui_timing_info;
break;
}
+ case V4L2_CID_MPEG_VIDC_VIDEO_H264_AU_DELIMITER:
+ property_id = HAL_PARAM_VENC_H264_GENERATE_AUDNAL;
+
+ switch (ctrl->val) {
+ case V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_DISABLED:
+ enable.enable = 0;
+ break;
+ case V4L2_MPEG_VIDC_VIDEO_H264_AU_DELIMITER_ENABLED:
+ enable.enable = 1;
+ break;
+ default:
+ rc = -ENOTSUPP;
+ break;
+ }
+
+ pdata = &enable;
+ break;
default:
rc = -ENOTSUPP;
break;
@@ -1713,10 +1822,13 @@
for (c = 0; c < ctrl->ncontrols; ++c) {
if (ctrl->cluster[c]->is_new) {
- rc = try_set_ctrl(inst, ctrl->cluster[c]);
+ struct v4l2_ctrl *temp = ctrl->cluster[c];
+
+ rc = try_set_ctrl(inst, temp);
if (rc) {
- dprintk(VIDC_ERR, "Failed setting %x",
- ctrl->cluster[c]->id);
+ dprintk(VIDC_ERR, "Failed setting %s (%x)",
+ v4l2_ctrl_get_name(temp->id),
+ temp->id);
break;
}
}
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_common.c b/drivers/media/platform/msm/vidc/msm_vidc_common.c
index dc08c64..1ee9c67 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_common.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_common.c
@@ -22,7 +22,7 @@
#include "msm_smem.h"
#include "msm_vidc_debug.h"
-#define HW_RESPONSE_TIMEOUT 200
+#define HW_RESPONSE_TIMEOUT 1000
#define IS_ALREADY_IN_STATE(__p, __d) ({\
int __rc = (__p >= __d);\
@@ -1282,7 +1282,6 @@
}
hdev = inst->core->device;
-
if (IS_ALREADY_IN_STATE(flipped_state, MSM_VIDC_LOAD_RESOURCES)) {
dprintk(VIDC_INFO, "inst: %p is already in state: %d\n",
inst, inst->state);
@@ -1668,6 +1667,7 @@
core->state == VIDC_CORE_INVALID) {
dprintk(VIDC_ERR,
"Core is in bad state can't change the state");
+ rc = -EINVAL;
goto exit;
}
flipped_state = get_flipped_state(inst->state, state);
@@ -1901,8 +1901,14 @@
dprintk(VIDC_ERR, "%s invalid parameters", __func__);
return -EINVAL;
}
- hdev = inst->core->device;
+ if (inst->state == MSM_VIDC_CORE_INVALID ||
+ inst->core->state == VIDC_CORE_INVALID) {
+ dprintk(VIDC_ERR,
+ "Core is in bad state can't query get_bufreqs()");
+ return -EINVAL;
+ }
+ hdev = inst->core->device;
mutex_lock(&inst->sync_lock);
if (inst->state < MSM_VIDC_OPEN_DONE || inst->state >= MSM_VIDC_CLOSE) {
dprintk(VIDC_ERR,
@@ -2388,6 +2394,7 @@
if (inst->capability.capability_set) {
if (msm_vp8_low_tier &&
+ inst->core->hfi_type == VIDC_HFI_VENUS &&
inst->fmts[OUTPUT_PORT]->fourcc == V4L2_PIX_FMT_VP8) {
capability->width.max = DEFAULT_WIDTH;
capability->height.max = DEFAULT_HEIGHT;
diff --git a/drivers/media/platform/msm/vidc/venus_hfi.c b/drivers/media/platform/msm/vidc/venus_hfi.c
index 8031c74..2e92f1f 100644
--- a/drivers/media/platform/msm/vidc/venus_hfi.c
+++ b/drivers/media/platform/msm/vidc/venus_hfi.c
@@ -282,6 +282,7 @@
u32 packet_size_in_words, new_read_idx;
u32 *read_ptr;
struct vidc_iface_q_info *qinfo;
+ int rc = 0;
if (!info || !packet || !pb_tx_req_is_set) {
dprintk(VIDC_ERR, "Invalid Params");
@@ -317,17 +318,27 @@
new_read_idx = queue->qhdr_read_idx + packet_size_in_words;
dprintk(VIDC_DBG, "Read Ptr: %d", (u32) new_read_idx);
- if (new_read_idx < queue->qhdr_q_size) {
- memcpy(packet, read_ptr,
- packet_size_in_words << 2);
- } else {
- new_read_idx -= queue->qhdr_q_size;
- memcpy(packet, read_ptr,
+ if (((packet_size_in_words << 2) <= VIDC_IFACEQ_MED_PKT_SIZE)
+ && queue->qhdr_read_idx <= queue->qhdr_q_size) {
+ if (new_read_idx < queue->qhdr_q_size) {
+ memcpy(packet, read_ptr,
+ packet_size_in_words << 2);
+ } else {
+ new_read_idx -= queue->qhdr_q_size;
+ memcpy(packet, read_ptr,
(packet_size_in_words - new_read_idx) << 2);
- memcpy(packet + ((packet_size_in_words -
- new_read_idx) << 2),
- (u8 *)qinfo->q_array.align_virtual_addr,
- new_read_idx << 2);
+ memcpy(packet + ((packet_size_in_words -
+ new_read_idx) << 2),
+ (u8 *)qinfo->q_array.align_virtual_addr,
+ new_read_idx << 2);
+ }
+ } else {
+ dprintk(VIDC_WARN,
+ "BAD packet received, read_idx: 0x%x, pkt_size: %d\n",
+ queue->qhdr_read_idx, packet_size_in_words << 2);
+ dprintk(VIDC_WARN, "Dropping this packet\n");
+ new_read_idx = queue->qhdr_write_idx;
+ rc = -ENODATA;
}
queue->qhdr_read_idx = new_read_idx;
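The message-queue read above now copies a packet out of the circular word queue only after sanity-checking the advertised packet size and the read index, and still splits the copy in two when the packet wraps past the end of the queue. A standalone sketch of just the wrap-aware copy (simplified types, with the bounds checks left to the caller):

#include <linux/string.h>
#include <linux/types.h>

/*
 * Copy 'words' 32-bit words out of a circular queue of 'qsize' words,
 * starting at read index 'rd', and return the new read index. Mirrors the
 * two-memcpy wrap handling above; size and index validation stay with the
 * caller.
 */
static u32 ringq_read(u32 *dst, const u32 *q, u32 qsize, u32 rd, u32 words)
{
	u32 new_rd = rd + words;

	if (new_rd < qsize) {
		memcpy(dst, q + rd, words << 2);
	} else {
		u32 first = words - (new_rd - qsize);

		new_rd -= qsize;
		memcpy(dst, q + rd, first << 2);
		memcpy(dst + first, q, new_rd << 2);
	}
	return new_rd;
}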
@@ -340,7 +351,7 @@
*pb_tx_req_is_set = (1 == queue->qhdr_tx_req) ? 1 : 0;
venus_hfi_hal_sim_modify_msg_packet(packet);
dprintk(VIDC_DBG, "Out : ");
- return 0;
+ return rc;
}
static int venus_hfi_alloc(void *mem, void *clnt, u32 size, u32 align,
@@ -547,7 +558,7 @@
q_info = &device->iface_queues[VIDC_IFACEQ_CMDQ_IDX];
if (!q_info) {
dprintk(VIDC_ERR, "cannot write to shared Q's");
- goto err_q_write;
+ goto err_q_null;
}
mutex_lock(&device->clock_lock);
result = venus_hfi_clk_gating_off(device);
@@ -572,8 +583,9 @@
dprintk(VIDC_ERR, "venus_hfi_iface_cmdq_write:queue_full");
}
err_q_write:
- mutex_unlock(&device->write_lock);
mutex_unlock(&device->clock_lock);
+err_q_null:
+ mutex_unlock(&device->write_lock);
return result;
}
@@ -592,7 +604,7 @@
q_array.align_virtual_addr == 0) {
dprintk(VIDC_ERR, "cannot read from shared MSG Q's");
rc = -ENODATA;
- goto read_error;
+ goto read_error_null;
}
q_info = &device->iface_queues[VIDC_IFACEQ_MSGQ_IDX];
mutex_lock(&device->clock_lock);
@@ -614,8 +626,9 @@
rc = -ENODATA;
}
read_error:
- mutex_unlock(&device->read_lock);
mutex_unlock(&device->clock_lock);
+read_error_null:
+ mutex_unlock(&device->read_lock);
return rc;
}
@@ -634,7 +647,7 @@
q_array.align_virtual_addr == 0) {
dprintk(VIDC_ERR, "cannot read from shared DBG Q's");
rc = -ENODATA;
- goto dbg_error;
+ goto dbg_error_null;
}
mutex_lock(&device->clock_lock);
rc = venus_hfi_clk_gating_off(device);
@@ -656,8 +669,9 @@
rc = -ENODATA;
}
dbg_error:
- mutex_unlock(&device->read_lock);
mutex_unlock(&device->clock_lock);
+dbg_error_null:
+ mutex_unlock(&device->read_lock);
return rc;
}
@@ -1070,8 +1084,8 @@
disable_irq_nosync(dev->hal_data->irq);
dev->intr_status = 0;
venus_hfi_interface_queues_release(dev);
+ mutex_unlock(&dev->clock_lock);
}
- mutex_unlock(&dev->clock_lock);
dprintk(VIDC_INFO, "HAL exited\n");
return 0;
}
@@ -1867,21 +1881,46 @@
if (((ctrl_status & VIDC_CPU_CS_SCIACMDARG0_HFI_CTRL_INIT_IDLE_MSG_BMSK)
!= 0) && !rc)
venus_hfi_clk_gating_on(device);
- mutex_unlock(&device->write_lock);
mutex_unlock(&device->clock_lock);
+ mutex_unlock(&device->write_lock);
return rc;
}
+static void venus_hfi_process_msg_event_notify(
+ struct venus_hfi_device *device, void *packet)
+{
+ struct hfi_sfr_struct *vsfr = NULL;
+ struct hfi_msg_event_notify_packet *event_pkt;
+ struct vidc_hal_msg_pkt_hdr *msg_hdr;
+ msg_hdr = (struct vidc_hal_msg_pkt_hdr *)packet;
+ event_pkt =
+ (struct hfi_msg_event_notify_packet *)msg_hdr;
+ if (event_pkt && event_pkt->event_id ==
+ HFI_EVENT_SYS_ERROR) {
+ vsfr = (struct hfi_sfr_struct *)
+ device->sfr.align_virtual_addr;
+ if (vsfr)
+ dprintk(VIDC_ERR, "SFR Message from FW : %s",
+ vsfr->rg_data);
+ }
+}
static void venus_hfi_response_handler(struct venus_hfi_device *device)
{
u8 packet[VIDC_IFACEQ_MED_PKT_SIZE];
u32 rc = 0;
+ struct hfi_sfr_struct *vsfr = NULL;
dprintk(VIDC_INFO, "#####venus_hfi_response_handler#####\n");
if (device) {
if ((device->intr_status &
VIDC_WRAPPER_INTR_CLEAR_A2HWD_BMSK)) {
dprintk(VIDC_ERR, "Received: Watchdog timeout %s",
__func__);
+ vsfr = (struct hfi_sfr_struct *)
+ device->sfr.align_virtual_addr;
+ if (vsfr)
+ dprintk(VIDC_ERR,
+ "SFR Message from FW : %s",
+ vsfr->rg_data);
venus_hfi_process_sys_watchdog_timeout(device);
}
@@ -1889,6 +1928,9 @@
rc = hfi_process_msg_packet(device->callback,
device->device_id,
(struct vidc_hal_msg_pkt_hdr *) packet);
+ if (rc == HFI_MSG_EVENT_NOTIFY)
+ venus_hfi_process_msg_event_notify(
+ device, (void *)packet);
}
while (!venus_hfi_iface_dbgq_read(device, packet)) {
struct hfi_msg_sys_debug_packet *pkt =
@@ -2749,9 +2791,10 @@
return;
}
if (device->resources.fw.cookie) {
+ flush_workqueue(device->vidc_workq);
venus_hfi_disable_clks(device);
- venus_hfi_iommu_detach(device);
subsystem_put(device->resources.fw.cookie);
+ venus_hfi_iommu_detach(device);
device->resources.fw.cookie = NULL;
}
}
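The relabelled error paths in this file do two things: jumps taken before clock_lock is acquired no longer try to release it, and the two mutexes are now dropped in the reverse of their acquisition order. A minimal sketch of that goto-cleanup shape, with placeholder names and a stubbed-out queue write:

#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(write_lock);        /* acquired first  */
static DEFINE_MUTEX(clock_lock);        /* acquired second */

/* Stand-in for the real queue write; only the locking shape matters here. */
static int do_queue_write(void *q, const void *pkt)
{
	return 0;
}

static int iface_cmdq_write(void *q, const void *pkt)
{
	int rc;

	mutex_lock(&write_lock);
	if (!q) {
		rc = -ENODEV;
		goto err_q_null;        /* clock_lock is not held yet */
	}

	mutex_lock(&clock_lock);
	rc = do_queue_write(q, pkt);
	mutex_unlock(&clock_lock);      /* release in reverse order...  */
err_q_null:
	mutex_unlock(&write_lock);      /* ...write_lock comes off last */
	return rc;
}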
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi.h b/drivers/media/platform/msm/vidc/vidc_hfi.h
index 075b391..1311752 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi.h
@@ -113,8 +113,6 @@
#define HFI_PROPERTY_SYS_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + 0x0000)
-#define HFI_PROPERTY_SYS_IDLE_INDICATOR \
- (HFI_PROPERTY_SYS_OX_START + 0x001)
#define HFI_PROPERTY_PARAM_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + 0x1000)
@@ -333,7 +331,6 @@
#define HFI_MSG_SYS_OX_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_OX_OFFSET + HFI_MSG_START_OFFSET + 0x0000)
-#define HFI_MSG_SYS_IDLE (HFI_MSG_SYS_OX_START + 0x1)
#define HFI_MSG_SYS_PING_ACK (HFI_MSG_SYS_OX_START + 0x2)
#define HFI_MSG_SYS_PROPERTY_INFO (HFI_MSG_SYS_OX_START + 0x3)
#define HFI_MSG_SYS_SESSION_ABORT_DONE (HFI_MSG_SYS_OX_START + 0x4)
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi_api.h b/drivers/media/platform/msm/vidc/vidc_hfi_api.h
index 3729c3a..3fbfec4 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi_api.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi_api.h
@@ -130,6 +130,7 @@
HAL_PARAM_VENC_H264_DEBLOCK_CONTROL,
HAL_PARAM_VENC_TEMPORAL_SPATIAL_TRADEOFF,
HAL_PARAM_VENC_SESSION_QP,
+ HAL_PARAM_VENC_SESSION_QP_RANGE,
HAL_CONFIG_VENC_INTRA_PERIOD,
HAL_CONFIG_VENC_IDR_PERIOD,
HAL_CONFIG_VPE_OPERATIONS,
@@ -168,6 +169,8 @@
HAL_PARAM_VENC_H264_ENTROPY_CABAC_MODEL,
HAL_CONFIG_VENC_MAX_BITRATE,
HAL_PARAM_VENC_H264_VUI_TIMING_INFO,
+ HAL_PARAM_VENC_H264_GENERATE_AUDNAL,
+ HAL_PARAM_VENC_MAX_NUM_B_FRAMES,
};
enum hal_domain {
@@ -595,6 +598,12 @@
u32 layer_id;
};
+struct hal_quantization_range {
+ u32 min_qp;
+ u32 max_qp;
+ u32 layer_id;
+};
+
struct hal_intra_period {
u32 pframes;
u32 bframes;
diff --git a/drivers/media/platform/msm/vidc/vidc_hfi_helper.h b/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
index 2d0c3bd..6234dba 100644
--- a/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
+++ b/drivers/media/platform/msm/vidc/vidc_hfi_helper.h
@@ -195,6 +195,8 @@
(HFI_PROPERTY_SYS_COMMON_START + 0x002)
#define HFI_PROPERTY_SYS_CONFIG_VCODEC_CLKFREQ \
(HFI_PROPERTY_SYS_COMMON_START + 0x003)
+#define HFI_PROPERTY_SYS_IDLE_INDICATOR \
+ (HFI_PROPERTY_SYS_COMMON_START + 0x004)
#define HFI_PROPERTY_PARAM_COMMON_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_COMMON_OFFSET + 0x1000)
@@ -304,7 +306,8 @@
(HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x01E)
#define HFI_PROPERTY_PARAM_VENC_VC1_PERF_CFG \
(HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x01F)
-
+#define HFI_PROPERTY_PARAM_VENC_MAX_NUM_B_FRAMES \
+ (HFI_PROPERTY_PARAM_VENC_COMMON_START + 0x020)
#define HFI_PROPERTY_CONFIG_VENC_COMMON_START \
(HFI_DOMAIN_BASE_VENC + HFI_ARCH_COMMON_OFFSET + 0x6000)
#define HFI_PROPERTY_CONFIG_VENC_TARGET_BITRATE \
@@ -424,6 +427,10 @@
u32 idr_period;
};
+struct hfi_max_num_b_frames {
+ u32 max_num_b_frames;
+};
+
struct hfi_intra_period {
u32 pframes;
u32 bframes;
@@ -692,6 +699,7 @@
#define HFI_MSG_SYS_DEBUG (HFI_MSG_SYS_COMMON_START + 0x4)
#define HFI_MSG_SYS_SESSION_INIT_DONE (HFI_MSG_SYS_COMMON_START + 0x6)
#define HFI_MSG_SYS_SESSION_END_DONE (HFI_MSG_SYS_COMMON_START + 0x7)
+#define HFI_MSG_SYS_IDLE (HFI_MSG_SYS_COMMON_START + 0x8)
#define HFI_MSG_SESSION_COMMON_START \
(HFI_DOMAIN_BASE_COMMON + HFI_ARCH_COMMON_OFFSET + \
diff --git a/drivers/media/platform/msm/wfd/wfd-ioctl.c b/drivers/media/platform/msm/wfd/wfd-ioctl.c
index af3cd69..b1b1980 100644
--- a/drivers/media/platform/msm/wfd/wfd-ioctl.c
+++ b/drivers/media/platform/msm/wfd/wfd-ioctl.c
@@ -1006,6 +1006,7 @@
spin_unlock_irqrestore(&inst->inst_lock, flags);
WFD_MSG_DBG("Calling videobuf_streamoff\n");
vb2_streamoff(&inst->vid_bufq, i);
+ wake_up(&inst->event_handler.wait);
return 0;
}
static int wfdioc_dqbuf(struct file *filp, void *fh,
@@ -1545,14 +1546,22 @@
unsigned int wfd_poll(struct file *filp, struct poll_table_struct *pt)
{
struct wfd_inst *inst = file_to_inst(filp);
- unsigned int flags = 0;
+ unsigned int poll_flags = 0;
+ unsigned long flags;
+ bool streamoff = false;
poll_wait(filp, &inst->event_handler.wait, pt);
- if (v4l2_event_pending(&inst->event_handler))
- flags |= POLLPRI;
+ spin_lock_irqsave(&inst->inst_lock, flags);
+ streamoff = inst->streamoff;
+ spin_unlock_irqrestore(&inst->inst_lock, flags);
- return flags;
+ if (v4l2_event_pending(&inst->event_handler))
+ poll_flags |= POLLPRI;
+ if (streamoff)
+ poll_flags |= POLLERR;
+
+ return poll_flags;
}
static const struct v4l2_file_operations g_wfd_fops = {
diff --git a/drivers/mfd/wcd9xxx-irq.c b/drivers/mfd/wcd9xxx-irq.c
index 356aecb..5efd905 100644
--- a/drivers/mfd/wcd9xxx-irq.c
+++ b/drivers/mfd/wcd9xxx-irq.c
@@ -25,6 +25,7 @@
#include <linux/of.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
+#include <linux/ratelimit.h>
#include <mach/cpuidle.h>
#define BYTE_BIT_MASK(nr) (1UL << ((nr) % BITS_PER_BYTE))
@@ -226,9 +227,11 @@
{
int ret;
int i;
+ char linebuf[128];
struct wcd9xxx *wcd9xxx = data;
int num_irq_regs = wcd9xxx_num_irq_regs(wcd9xxx);
- u8 status[num_irq_regs];
+ u8 status[num_irq_regs], status1[num_irq_regs];
+ static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 1);
if (unlikely(wcd9xxx_lock_sleep(wcd9xxx) == false)) {
dev_err(wcd9xxx->dev, "Failed to hold suspend\n");
@@ -250,12 +253,17 @@
for (i = 0; i < num_irq_regs; i++)
status[i] &= ~wcd9xxx->irq_masks_cur[i];
+ memcpy(status1, status, sizeof(status1));
+
/* Find out which interrupt was triggered and call that interrupt's
* handler function
*/
if (status[BIT_BYTE(WCD9XXX_IRQ_SLIMBUS)] &
- BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS))
+ BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS)) {
wcd9xxx_irq_dispatch(wcd9xxx, WCD9XXX_IRQ_SLIMBUS);
+ status1[BIT_BYTE(WCD9XXX_IRQ_SLIMBUS)] &=
+ ~BYTE_BIT_MASK(WCD9XXX_IRQ_SLIMBUS);
+ }
/* Since codec has only one hardware irq line which is shared by
* codec's different internal interrupts, so it's possible master irq
@@ -264,13 +272,43 @@
* machine's order */
for (i = WCD9XXX_IRQ_MBHC_INSERTION;
i >= WCD9XXX_IRQ_MBHC_REMOVAL; i--) {
- if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i))
+ if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i)) {
wcd9xxx_irq_dispatch(wcd9xxx, i);
+ status1[BIT_BYTE(i)] &= ~BYTE_BIT_MASK(i);
+ }
}
for (i = WCD9XXX_IRQ_BG_PRECHARGE; i < wcd9xxx->codec_type->num_irqs;
i++) {
- if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i))
+ if (status[BIT_BYTE(i)] & BYTE_BIT_MASK(i)) {
wcd9xxx_irq_dispatch(wcd9xxx, i);
+ status1[BIT_BYTE(i)] &= ~BYTE_BIT_MASK(i);
+ }
+ }
+
+ /*
+ * As a failsafe, if an unhandled irq is found, clear it to prevent an
+ * interrupt storm.
+ * Note that we can only say an irq was unhandled when none of the nested
+ * irq handlers ran, since Taiko can route some irqs to the QDSP instead.
+ * Therefore the driver shouldn't clear pending irqs when some were
+ * handled and others were not.
+ */
+ if (unlikely(!memcmp(status, status1, sizeof(status)))) {
+ if (__ratelimit(&ratelimit)) {
+ pr_warn("%s: Unhandled irq found\n", __func__);
+ hex_dump_to_buffer(status, sizeof(status), 16, 1,
+ linebuf, sizeof(linebuf), false);
+ pr_warn("%s: status0 : %s\n", __func__, linebuf);
+ hex_dump_to_buffer(status1, sizeof(status1), 16, 1,
+ linebuf, sizeof(linebuf), false);
+ pr_warn("%s: status1 : %s\n", __func__, linebuf);
+ }
+
+ memset(status, 0xff, num_irq_regs);
+ wcd9xxx_bulk_write(wcd9xxx, WCD9XXX_A_INTR_CLEAR0,
+ num_irq_regs, status);
+ if (wcd9xxx_get_intf_type() == WCD9XXX_INTERFACE_TYPE_I2C)
+ wcd9xxx_reg_write(wcd9xxx, WCD9XXX_A_INTR_MODE, 0x02);
}
wcd9xxx_unlock_sleep(wcd9xxx);
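The failsafe above keeps a second copy of the interrupt status bytes, clears a bit from the copy each time a nested handler is dispatched, and treats a copy that is still identical to the original as proof that nothing was handled, at which point the pending bits are cleared to avoid an interrupt storm. A compact model of that bookkeeping, not the codec driver itself:

#include <linux/bitops.h>
#include <linux/string.h>
#include <linux/types.h>

#define NUM_IRQ_REGS    4

/* Returns true when at least one pending bit was consumed by a handler. */
static bool dispatch_pending(const u8 status[NUM_IRQ_REGS],
			     bool (*handle)(unsigned int irq))
{
	u8 shadow[NUM_IRQ_REGS];
	unsigned int i;

	memcpy(shadow, status, sizeof(shadow));

	for (i = 0; i < NUM_IRQ_REGS * 8; i++) {
		if (!(status[i / 8] & BIT(i % 8)))
			continue;
		if (handle(i))                  /* a nested handler ran */
			shadow[i / 8] &= ~BIT(i % 8);
	}

	/*
	 * An unchanged shadow means nothing was handled; the caller should
	 * then clear the hardware status to avoid an interrupt storm.
	 */
	return memcmp(status, shadow, sizeof(shadow)) != 0;
}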
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index 05a15b1..fcadc30 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -52,6 +52,8 @@
#define QSEE_VERSION_02 0x402000
#define QSEE_VERSION_03 0x403000
#define QSEE_VERSION_04 0x404000
+#define QSEE_VERSION_05 0x405000
+
#define QSEOS_CHECK_VERSION_CMD 0x00001803
@@ -565,13 +567,14 @@
if (wait_event_freezable(qseecom.send_resp_wq,
__qseecom_listener_has_sent_rsp(data))) {
pr_warning("Interrupted: exiting send_cmd loop\n");
- return -ERESTARTSYS;
+ ret = -ERESTARTSYS;
}
- if (data->abort) {
- pr_err("Aborting listener service %d\n",
- data->listener.id);
- rc = -ENODEV;
+ if ((data->abort) || (ret == -ERESTARTSYS)) {
+ pr_err("Abort clnt %d waiting on lstnr svc %d, ret %d",
+ data->client.app_id, lstnr, ret);
+ if (data->abort)
+ rc = -ENODEV;
send_data_rsp.status = QSEOS_RESULT_FAILURE;
} else {
send_data_rsp.status = QSEOS_RESULT_SUCCESS;
@@ -672,7 +675,7 @@
app_id = ret;
if (app_id) {
- pr_warn("App id %d (%s) already exists\n", app_id,
+ pr_debug("App id %d (%s) already exists\n", app_id,
(char *)(req.app_name));
spin_lock_irqsave(&qseecom.registered_app_list_lock, flags);
list_for_each_entry(entry,
@@ -830,7 +833,7 @@
break;
} else {
ptr_app->ref_cnt--;
- pr_warn("Can't unload app(%d) inuse\n",
+ pr_debug("Can't unload app(%d) inuse\n",
ptr_app->app_id);
break;
}
@@ -1647,7 +1650,6 @@
data->abort = 0;
data->type = QSEECOM_CLIENT_APP;
data->released = false;
- data->client.app_id = ret;
data->client.sb_length = size;
data->client.user_virt_sb_base = 0;
data->client.ihandle = NULL;
@@ -1692,7 +1694,7 @@
*handle = NULL;
return -EINVAL;
}
-
+ data->client.app_id = ret;
if (ret > 0) {
pr_warn("App id %d for [%s] app exists\n", ret,
(char *)app_ireq.app_name);
@@ -2315,7 +2317,7 @@
pr_err(" scm call to check if app is loaded failed");
return ret; /* scm call failed */
} else if (ret > 0) {
- pr_warn("App id %d (%s) already exists\n", ret,
+ pr_debug("App id %d (%s) already exists\n", ret,
(char *)(req.app_name));
spin_lock_irqsave(&qseecom.registered_app_list_lock, flags);
list_for_each_entry(entry,
@@ -2381,9 +2383,11 @@
memcpy(ireq.key_id, key_id, QSEECOM_KEY_ID_SIZE);
ireq.flags = flags;
+ ireq.qsee_command_id = QSEOS_GENERATE_KEY;
__qseecom_enable_clk(CLK_QSEE);
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_GENERATE_KEY,
+
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_generate_ireq),
&resp, sizeof(resp));
if (ret) {
@@ -2395,10 +2399,19 @@
switch (resp.result) {
case QSEOS_RESULT_SUCCESS:
break;
+ case QSEOS_RESULT_FAIL_KEY_ID_EXISTS:
+ break;
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
- if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ if (ret) {
+ if (resp.result == QSEOS_RESULT_FAIL_KEY_ID_EXISTS) {
+ pr_warn("process_incomplete_cmd return Key ID exits.\n");
+ ret = 0;
+ } else {
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
+ }
+ }
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2425,9 +2438,11 @@
memcpy(ireq.key_id, key_id, QSEECOM_KEY_ID_SIZE);
ireq.flags = flags;
+ ireq.qsee_command_id = QSEOS_DELETE_KEY;
__qseecom_enable_clk(CLK_QSEE);
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_DELETE_KEY,
+
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_delete_ireq),
&resp, sizeof(struct qseecom_command_scm_resp));
if (ret) {
@@ -2442,7 +2457,8 @@
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2474,10 +2490,14 @@
__qseecom_enable_clk(CLK_CE_DRV);
memcpy(ireq.key_id, set_key_para->key_id, QSEECOM_KEY_ID_SIZE);
+ ireq.qsee_command_id = QSEOS_SET_KEY;
ireq.ce = set_key_para->ce_hw;
ireq.pipe = set_key_para->pipe;
ireq.flags = set_key_para->flags;
+ /* set PIPE_ENC */
+ ireq.pipe_type = QSEOS_PIPE_ENC;
+
if (set_key_para->set_clear_key_flag ==
QSEECOM_SET_CE_KEY_CMD)
memcpy((void *)ireq.hash, (void *)set_key_para->hash32,
@@ -2485,11 +2505,22 @@
else
memset((void *)ireq.hash, 0, QSEECOM_HASH_SIZE);
- ret = scm_call(SCM_SVC_CRYPTO, QSEOS_SET_KEY,
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
&ireq, sizeof(struct qseecom_key_select_ireq),
&resp, sizeof(struct qseecom_command_scm_resp));
if (ret) {
- pr_err("scm call to set key failed : %d\n", ret);
+ pr_err("scm call to set QSEOS_PIPE_ENC key failed : %d\n", ret);
+ return ret;
+ }
+
+ /* set PIPE_ENC_XTS */
+ ireq.pipe_type = QSEOS_PIPE_ENC_XTS;
+ ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
+ &ireq, sizeof(struct qseecom_key_select_ireq),
+ &resp, sizeof(struct qseecom_command_scm_resp));
+ if (ret) {
+ pr_err("scm call to set QSEOS_PIPE_ENC_XTS key failed : %d\n",
+ ret);
return ret;
}
@@ -2499,7 +2530,8 @@
case QSEOS_RESULT_INCOMPLETE:
ret = __qseecom_process_incomplete_cmd(data, &resp);
if (ret)
- pr_err("process_incomplete_cmd FAILED\n");
+ pr_err("process_incomplete_cmd FAILED, resp.result %d\n",
+ resp.result);
break;
case QSEOS_RESULT_FAILURE:
default:
@@ -2878,6 +2910,11 @@
break;
}
case QSEECOM_IOCTL_CREATE_KEY_REQ: {
+ if (qseecom.qsee_version < QSEE_VERSION_05) {
+ pr_err("Create Key feature not supported in qsee version %u\n",
+ qseecom.qsee_version);
+ return -EINVAL;
+ }
data->released = true;
mutex_lock(&app_access_lock);
atomic_inc(&data->ioctl_count);
@@ -2890,6 +2927,11 @@
break;
}
case QSEECOM_IOCTL_WIPE_KEY_REQ: {
+ if (qseecom.qsee_version < QSEE_VERSION_05) {
+ pr_err("Wipe Key feature not supported in qsee version %u\n",
+ qseecom.qsee_version);
+ return -EINVAL;
+ }
data->released = true;
mutex_lock(&app_access_lock);
atomic_inc(&data->ioctl_count);
diff --git a/drivers/misc/tspp.c b/drivers/misc/tspp.c
index dbb4f5e..73cae32 100644
--- a/drivers/misc/tspp.c
+++ b/drivers/misc/tspp.c
@@ -1571,6 +1571,7 @@
int id;
int table_idx;
u32 val;
+ unsigned long flags;
struct sps_connect *config;
struct tspp_device *pdev;
@@ -1591,6 +1592,15 @@
if (!channel->used)
return 0;
+ /*
+ * Need to protect access to used and waiting fields, as they are
+ * used by the tasklet which is invoked from interrupt context
+ */
+ spin_lock_irqsave(&pdev->spinlock, flags);
+ channel->used = 0;
+ channel->waiting = NULL;
+ spin_unlock_irqrestore(&pdev->spinlock, flags);
+
if (channel->expiration_period_ms)
del_timer(&channel->expiration_timer);
@@ -1644,9 +1654,7 @@
channel->buffer_count = 0;
channel->data = NULL;
channel->read = NULL;
- channel->waiting = NULL;
channel->locked = NULL;
- channel->used = 0;
if (tspp_channels_in_use(pdev) == 0) {
wake_unlock(&pdev->wake_lock);
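Clearing channel->used and channel->waiting is moved under the device spinlock, and done first, because the TSPP tasklet inspects those fields from interrupt context while a channel is being torn down. The sketch below shows the lock pairing this relies on, with invented field and lock names:

#include <linux/spinlock.h>

struct chan {
	spinlock_t lock;        /* shared with the tasklet */
	int used;
	void *waiting;
};

/* Process context: mark the channel dead before tearing its buffers down. */
static void chan_close(struct chan *ch)
{
	unsigned long flags;

	spin_lock_irqsave(&ch->lock, flags);
	ch->used = 0;
	ch->waiting = NULL;
	spin_unlock_irqrestore(&ch->lock, flags);

	/* from here on the tasklet sees !used and skips this channel */
}

/* Tasklet (softirq) context: takes the same lock before reading the pair. */
static void chan_tasklet_work(struct chan *ch)
{
	spin_lock(&ch->lock);
	if (ch->used && ch->waiting) {
		/* hand the filled descriptor in ch->waiting to the reader */
	}
	spin_unlock(&ch->lock);
}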
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 5b7f08f..9f12142 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -67,12 +67,19 @@
(rq_data_dir(req) == WRITE))
#define PACKED_CMD_VER 0x01
#define PACKED_CMD_WR 0x02
+#define PACKED_TRIGGER_MAX_ELEMENTS 5000
#define MMC_BLK_UPDATE_STOP_REASON(stats, reason) \
do { \
if (stats->enabled) \
stats->pack_stop_reason[reason]++; \
} while (0)
+#define PCKD_TRGR_INIT_MEAN_POTEN 17
+#define PCKD_TRGR_POTEN_LOWER_BOUND 5
+#define PCKD_TRGR_URGENT_PENALTY 2
+#define PCKD_TRGR_LOWER_BOUND 5
+#define PCKD_TRGR_PRECISION_MULTIPLIER 100
+
static DEFINE_MUTEX(block_mutex);
/*
@@ -1862,6 +1869,80 @@
}
EXPORT_SYMBOL(mmc_blk_disable_wr_packing);
+static int get_packed_trigger(int potential, struct mmc_card *card,
+ struct request *req, int curr_trigger)
+{
+ static int num_mean_elements = 1;
+ static unsigned long mean_potential = PCKD_TRGR_INIT_MEAN_POTEN;
+ unsigned int trigger = curr_trigger;
+ unsigned int pckd_trgr_upper_bound = card->ext_csd.max_packed_writes;
+
+ /* scale down the upper bound to 75% */
+ pckd_trgr_upper_bound = (pckd_trgr_upper_bound * 3) / 4;
+
+ /*
+ * since the most common calls for this function are with small
+ * potential write values and since we don't want these calls to affect
+ * the packed trigger, set a lower bound and ignore calls with
+ * potential lower than that bound
+ */
+ if (potential <= PCKD_TRGR_POTEN_LOWER_BOUND)
+ return trigger;
+
+ /*
+ * this is to prevent integer overflow in the following calculation:
+ * once every PACKED_TRIGGER_MAX_ELEMENTS reset the algorithm
+ */
+ if (num_mean_elements > PACKED_TRIGGER_MAX_ELEMENTS) {
+ num_mean_elements = 1;
+ mean_potential = PCKD_TRGR_INIT_MEAN_POTEN;
+ }
+
+ /*
+ * get next mean value based on previous mean value and current
+ * potential packed writes. Calculation is as follows:
+ * mean_pot[i+1] =
+ * ((mean_pot[i] * num_mean_elem) + potential)/(num_mean_elem + 1)
+ */
+ mean_potential *= num_mean_elements;
+ /*
+ * add num_mean_elements so that the division of two integers doesn't
+ * lower mean_potential too much
+ */
+ if (potential > mean_potential)
+ mean_potential += num_mean_elements;
+ mean_potential += potential;
+ /* this is for gaining more precision when dividing two integers */
+ mean_potential *= PCKD_TRGR_PRECISION_MULTIPLIER;
+ /* this completes the mean calculation */
+ mean_potential /= ++num_mean_elements;
+ mean_potential /= PCKD_TRGR_PRECISION_MULTIPLIER;
+
+ /*
+ * if the current potential packed-write count is greater than the mean,
+ * the heuristic is that the following workload will contain many
+ * write requests, therefore we lower the packed trigger. In the
+ * opposite case we want to increase the trigger in order to get less
+ * packing events.
+ */
+ if (potential >= mean_potential)
+ trigger = (trigger <= PCKD_TRGR_LOWER_BOUND) ?
+ PCKD_TRGR_LOWER_BOUND : trigger - 1;
+ else
+ trigger = (trigger >= pckd_trgr_upper_bound) ?
+ pckd_trgr_upper_bound : trigger + 1;
+
+ /*
+ * an urgent read request indicates a packed list being interrupted
+ * by this read, therefore we aim for less packing, hence the trigger
+ * gets increased
+ */
+ if (req && (req->cmd_flags & REQ_URGENT) && (rq_data_dir(req) == READ))
+ trigger += PCKD_TRGR_URGENT_PENALTY;
+
+ return trigger;
+}
+
static void mmc_blk_write_packing_control(struct mmc_queue *mq,
struct request *req)
{
@@ -1889,6 +1970,10 @@
if (mq->num_of_potential_packed_wr_reqs >
mq->num_wr_reqs_to_start_packing)
mq->wr_packing_enabled = true;
+ mq->num_wr_reqs_to_start_packing =
+ get_packed_trigger(mq->num_of_potential_packed_wr_reqs,
+ mq->card, req,
+ mq->num_wr_reqs_to_start_packing);
mq->num_of_potential_packed_wr_reqs = 0;
return;
}
@@ -1897,6 +1982,12 @@
if (data_dir == READ) {
mmc_blk_disable_wr_packing(mq);
+ mq->num_wr_reqs_to_start_packing =
+ get_packed_trigger(mq->num_of_potential_packed_wr_reqs,
+ mq->card, req,
+ mq->num_wr_reqs_to_start_packing);
+ mq->num_of_potential_packed_wr_reqs = 0;
+ mq->wr_packing_enabled = false;
return;
} else if (data_dir == WRITE) {
mq->num_of_potential_packed_wr_reqs++;
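
Note on the write-packing trigger heuristic added above: get_packed_trigger()
keeps a running mean of the "potential packed writes" samples, updated as
mean[i+1] = (mean[i] * n + potential) / (n + 1), with a small upward bias and
a x100 multiplier to soften integer-division rounding. The sketch below is a
minimal userspace model of that update (illustrative only; the helper name
and the sample values are made up and are not part of the patch):

#include <stdio.h>

#define PRECISION_MULTIPLIER	100	/* mirrors PCKD_TRGR_PRECISION_MULTIPLIER */

/* One step of the incremental mean, mirroring the order of operations in
 * get_packed_trigger(): scale up, bias, add the sample, then divide. */
static unsigned long update_mean(unsigned long mean, int *num_elements,
				 int potential)
{
	mean *= *num_elements;
	/* bias upward so the integer division does not drag the mean down */
	if ((unsigned long)potential > mean)
		mean += *num_elements;
	mean += potential;
	mean *= PRECISION_MULTIPLIER;
	mean /= ++(*num_elements);
	mean /= PRECISION_MULTIPLIER;
	return mean;
}

int main(void)
{
	unsigned long mean = 17;		/* PCKD_TRGR_INIT_MEAN_POTEN */
	int n = 1;
	int samples[] = { 20, 30, 8, 40 };	/* made-up potential values */
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		mean = update_mean(mean, &n, samples[i]);
		printf("sample=%d -> mean=%lu (n=%d)\n", samples[i], mean, n);
	}
	return 0;
}

Each new sample nudges the mean, and the trigger then steps down toward
PCKD_TRGR_LOWER_BOUND when samples run above the mean (more packing) or up
toward 75% of max_packed_writes when they run below it (less packing).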
diff --git a/drivers/mmc/host/msm_sdcc.c b/drivers/mmc/host/msm_sdcc.c
index 9c30cd1..3f3687b 100644
--- a/drivers/mmc/host/msm_sdcc.c
+++ b/drivers/mmc/host/msm_sdcc.c
@@ -4683,10 +4683,10 @@
}
/* Now save the sps pipe handle */
ep->pipe_handle = sps_pipe_handle;
- pr_debug("%s: %s, success !!! %s: pipe_handle=0x%x,"
- " desc_fifo.phys_base=0x%x\n", mmc_hostname(host->mmc),
+ pr_debug("%s: %s, success !!! %s: pipe_handle=0x%x,"\
+ " desc_fifo.phys_base=%pa\n", mmc_hostname(host->mmc),
__func__, is_producer ? "READ" : "WRITE",
- (u32)sps_pipe_handle, sps_config->desc.phys_base);
+ (u32)sps_pipe_handle, &sps_config->desc.phys_base);
goto out;
reg_event_err:
@@ -4929,11 +4929,8 @@
host->bam_base = ioremap(host->bam_memres->start,
resource_size(host->bam_memres));
if (!host->bam_base) {
- pr_err("%s: BAM ioremap() failed!!! phys_addr=0x%x,"
- " size=0x%x", mmc_hostname(host->mmc),
- host->bam_memres->start,
- (host->bam_memres->end -
- host->bam_memres->start));
+ pr_err("%s: BAM ioremap() failed!!! resource: %pr\n",
+ mmc_hostname(host->mmc), host->bam_memres);
rc = -ENOMEM;
goto out;
}
@@ -4954,7 +4951,7 @@
*/
bam.summing_threshold = SPS_MIN_XFER_SIZE;
/* SPS driver wll handle the SDCC BAM IRQ */
- bam.irq = (u32)host->bam_irqres->start;
+ bam.irq = host->bam_irqres->start;
bam.manage = SPS_BAM_MGR_LOCAL;
bam.callback = msmsdcc_sps_bam_global_irq_cb;
bam.user = (void *)host;
@@ -4990,10 +4987,8 @@
if (rc)
goto cons_conn_err;
- pr_info("%s: Qualcomm MSM SDCC-BAM at 0x%016llx irq %d\n",
- mmc_hostname(host->mmc),
- (unsigned long long)host->bam_memres->start,
- (unsigned int)host->bam_irqres->start);
+ pr_info("%s: Qualcomm MSM SDCC-BAM at %pr %pr\n",
+ mmc_hostname(host->mmc), host->bam_memres, host->bam_irqres);
goto out;
cons_conn_err:
@@ -5180,15 +5175,16 @@
}
static void msmsdcc_print_regs(const char *name, void __iomem *base,
- u32 phys_base, unsigned int no_of_regs)
+ resource_size_t phys_base,
+ unsigned int no_of_regs)
{
unsigned int i;
if (!base)
return;
- pr_err("===== %s: Register Dumps @phys_base=0x%x, @virt_base=0x%x"
- " =====\n", name, phys_base, (u32)base);
+ pr_err("===== %s: Register Dumps @phys_base=%pa, @virt_base=0x%x"\
+ " =====\n", name, &phys_base, (u32)base);
for (i = 0; i < no_of_regs; i = i + 4) {
pr_err("Reg=0x%.2x: 0x%.8x, 0x%.8x, 0x%.8x, 0x%.8x\n", i*4,
(u32)readl_relaxed(base + i*4),
@@ -6275,10 +6271,8 @@
mmc->clk_scaling.polling_delay_ms = 100;
mmc->caps2 |= MMC_CAP2_CLK_SCALE;
- pr_info("%s: Qualcomm MSM SDCC-core at 0x%016llx irq %d,%d dma %d"
- " dmacrcri %d\n", mmc_hostname(mmc),
- (unsigned long long)core_memres->start,
- (unsigned int) core_irqres->start,
+ pr_info("%s: Qualcomm MSM SDCC-core %pr %pr,%d dma %d dmacrcri %d\n",
+ mmc_hostname(mmc), core_memres, core_irqres,
(unsigned int) plat->status_irq, host->dma.channel,
host->dma.crci);
@@ -6300,11 +6294,11 @@
if (is_dma_mode(host) && host->dma.channel != -1
&& host->dma.crci != -1) {
- pr_info("%s: DM non-cached buffer at %p, dma_addr 0x%.8x\n",
- mmc_hostname(mmc), host->dma.nc, host->dma.nc_busaddr);
- pr_info("%s: DM cmd busaddr 0x%.8x, cmdptr busaddr 0x%.8x\n",
- mmc_hostname(mmc), host->dma.cmd_busaddr,
- host->dma.cmdptr_busaddr);
+ pr_info("%s: DM non-cached buffer at %p, dma_addr: %pa\n",
+ mmc_hostname(mmc), host->dma.nc, &host->dma.nc_busaddr);
+ pr_info("%s: DM cmd busaddr: %pa, cmdptr busaddr: %pa\n",
+ mmc_hostname(mmc), &host->dma.cmd_busaddr,
+ &host->dma.cmdptr_busaddr);
} else if (is_sps_mode(host)) {
pr_info("%s: SPS-BAM data transfer mode available\n",
mmc_hostname(mmc));
diff --git a/drivers/mmc/host/msm_sdcc_dml.c b/drivers/mmc/host/msm_sdcc_dml.c
index 91ab7e3..2562436 100644
--- a/drivers/mmc/host/msm_sdcc_dml.c
+++ b/drivers/mmc/host/msm_sdcc_dml.c
@@ -166,17 +166,13 @@
host->dml_base = ioremap(host->dml_memres->start,
resource_size(host->dml_memres));
if (!host->dml_base) {
- pr_err("%s: DML ioremap() failed!!! phys_addr=0x%x,"
- " size=0x%x", mmc_hostname(host->mmc),
- host->dml_memres->start,
- (host->dml_memres->end -
- host->dml_memres->start));
+ pr_err("%s: DML ioremap() failed!!! %pr\n",
+ mmc_hostname(host->mmc), host->dml_memres);
rc = -ENOMEM;
goto out;
}
- pr_info("%s: Qualcomm MSM SDCC-DML at 0x%016llx\n",
- mmc_hostname(host->mmc),
- (unsigned long long)host->dml_memres->start);
+ pr_info("%s: Qualcomm MSM SDCC-DML %pr\n",
+ mmc_hostname(host->mmc), host->dml_memres);
}
dml_base = host->dml_base;
diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
index d1909aa..15e36a8 100644
--- a/drivers/mmc/host/sdhci-msm.c
+++ b/drivers/mmc/host/sdhci-msm.c
@@ -38,6 +38,7 @@
#include <linux/dma-mapping.h>
#include <mach/gpio.h>
#include <mach/msm_bus.h>
+#include <linux/iopoll.h>
#include "sdhci-pltfm.h"
@@ -81,6 +82,31 @@
#define CORE_CLK_PWRSAVE (1 << 1)
#define CORE_IO_PAD_PWR_SWITCH (1 << 16)
+#define CORE_MCI_DATA_CTRL 0x2C
+#define CORE_MCI_DPSM_ENABLE (1 << 0)
+
+#define CORE_TESTBUS_CONFIG 0x0CC
+#define CORE_TESTBUS_ENA (1 << 3)
+#define CORE_TESTBUS_SEL2 (1 << 4)
+
+/*
+ * Wait until the end of a potential AHB access for data:
+ * 16 AHB cycles (160ns at 100MHz, 320ns at 50MHz) +
+ * delay on AHB (2us) = maximum 2.32us.
+ * A 10x margin is applied.
+ */
+#define CORE_AHB_DATA_DELAY_US 23
+/* Wait until the end of a potential AHB access for a descriptor:
+ * Single (1 AHB cycle) + delay on AHB bus = max 2us
+ * INCR4 (4 AHB cycles) + delay on AHB bus = max 2us
+ * Single (1 AHB cycle) + delay on AHB bus = max 2us
+ * Total: 8us delay, including margin
+ */
+#define CORE_AHB_DESC_DELAY_US 8
+
+#define CORE_SDCC_DEBUG_REG 0x124
+#define CORE_DEBUG_REG_AHB_HTRANS (3 << 12)
+
/* 8KB descriptors */
#define SDHCI_MSM_MAX_SEGMENTS (1 << 13)
#define SDHCI_MSM_MMC_CLK_GATE_DELAY 200 /* msecs */
@@ -2044,6 +2070,51 @@
return 0;
}
+/*
+ * sdhci_msm_disable_data_xfer - disable an ongoing AHB bus data transfer
+ *
+ * Write 0 to bit 0 of MCI_DATA_CTL (offset 0x2C), clearing the TxActive bit
+ * via the legacy registers. This stops the current burst and prevents the
+ * next one from starting.
+ *
+ * Then poll CORE_SDCC_DEBUG_REG (offset 0x124), with a CORE_AHB_DATA_DELAY_US
+ * timeout, until bits 13:12 read 0, confirming that the AHB burst completed
+ * and a new one did not start.
+ *
+ * Finally, wait for CORE_AHB_DESC_DELAY_US while the AHB finishes descriptor fetches.
+ */
+static void sdhci_msm_disable_data_xfer(struct sdhci_host *host)
+{
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_msm_host *msm_host = pltfm_host->priv;
+ u32 value;
+ int ret;
+
+ value = readl_relaxed(msm_host->core_mem + CORE_MCI_DATA_CTRL);
+ value &= ~(u32)CORE_MCI_DPSM_ENABLE;
+ writel_relaxed(value, msm_host->core_mem + CORE_MCI_DATA_CTRL);
+
+ /* Enable the test bus for device slot */
+ writel_relaxed(CORE_TESTBUS_ENA | CORE_TESTBUS_SEL2,
+ msm_host->core_mem + CORE_TESTBUS_CONFIG);
+
+ ret = readl_poll_timeout_noirq(msm_host->core_mem
+ + CORE_SDCC_DEBUG_REG, value,
+ !(value & CORE_DEBUG_REG_AHB_HTRANS),
+ CORE_AHB_DATA_DELAY_US, 1);
+ if (ret) {
+ pr_err("%s: %s: can't stop ongoing AHB bus access by ADMA\n",
+ mmc_hostname(host->mmc), __func__);
+ BUG();
+ }
+ /* Disable the test bus for device slot */
+ value = readl_relaxed(msm_host->core_mem + CORE_TESTBUS_CONFIG);
+ value &= ~CORE_TESTBUS_ENA;
+ writel_relaxed(value, msm_host->core_mem + CORE_TESTBUS_CONFIG);
+
+ udelay(CORE_AHB_DESC_DELAY_US);
+}
+
static struct sdhci_ops sdhci_msm_ops = {
.set_uhs_signaling = sdhci_msm_set_uhs_signaling,
.check_power_status = sdhci_msm_check_power_status,
@@ -2054,6 +2125,7 @@
.platform_bus_voting = sdhci_msm_bus_voting,
.get_min_clock = sdhci_msm_get_min_clock,
.get_max_clock = sdhci_msm_get_max_clock,
+ .disable_data_xfer = sdhci_msm_disable_data_xfer,
};
static int __devinit sdhci_msm_probe(struct platform_device *pdev)
@@ -2064,7 +2136,7 @@
struct resource *core_memres = NULL;
int ret = 0, dead = 0;
u32 vdd_max_current;
- u32 host_version;
+ u16 host_version;
pr_debug("%s: Enter %s\n", dev_name(&pdev->dev), __func__);
msm_host = devm_kzalloc(&pdev->dev, sizeof(struct sdhci_msm_host),
@@ -2192,7 +2264,7 @@
host->quirks2 |= SDHCI_QUIRK2_BROKEN_PRESET_VALUE;
host->quirks2 |= SDHCI_QUIRK2_USE_RESERVED_MAX_TIMEOUT;
- host_version = readl_relaxed((host->ioaddr + SDHCI_HOST_VERSION));
+ host_version = readw_relaxed((host->ioaddr + SDHCI_HOST_VERSION));
dev_dbg(&pdev->dev, "Host Version: 0x%x Vendor Version 0x%x\n",
host_version, ((host_version & SDHCI_VENDOR_VER_MASK) >>
SDHCI_VENDOR_VER_SHIFT));
@@ -2262,6 +2334,7 @@
msm_host->mmc->caps2 |= MMC_CAP2_CACHE_CTRL;
msm_host->mmc->caps2 |= MMC_CAP2_POWEROFF_NOTIFY;
msm_host->mmc->caps2 |= MMC_CAP2_CLK_SCALE;
+ msm_host->mmc->caps2 |= MMC_CAP2_STOP_REQUEST;
if (msm_host->pdata->nonremovable)
msm_host->mmc->caps |= MMC_CAP_NONREMOVABLE;
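
For reference, the new delay constants work out as follows (restating the
arithmetic in the comments above): a 16-cycle AHB data burst takes
16 x 10ns = 160ns at 100MHz or 16 x 20ns = 320ns at 50MHz; adding the ~2us
AHB delay gives a worst case of roughly 2.32us, and the 10x margin yields
CORE_AHB_DATA_DELAY_US = 23 (us). For descriptor fetches, the Single + INCR4 +
Single accesses are each bounded by ~2us, about 6us in total, padded to
CORE_AHB_DESC_DELAY_US = 8 (us).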
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 70a9e96..3efea77 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2124,6 +2124,53 @@
sdhci_runtime_pm_put(host);
}
+static int sdhci_stop_request(struct mmc_host *mmc)
+{
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+ struct mmc_data *data;
+
+ spin_lock_irqsave(&host->lock, flags);
+ if (!host->mrq || !host->data)
+ goto out;
+
+ data = host->data;
+
+ if (host->ops->disable_data_xfer)
+ host->ops->disable_data_xfer(host);
+
+ sdhci_reset(host, SDHCI_RESET_CMD | SDHCI_RESET_DATA);
+
+ if (host->flags & SDHCI_REQ_USE_DMA) {
+ if (host->flags & SDHCI_USE_ADMA) {
+ sdhci_adma_table_post(host, data);
+ } else {
+ if (!data->host_cookie)
+ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
+ data->sg_len,
+ (data->flags & MMC_DATA_READ) ?
+ DMA_FROM_DEVICE : DMA_TO_DEVICE);
+ }
+ }
+ del_timer(&host->timer);
+ host->mrq = NULL;
+ host->cmd = NULL;
+ host->data = NULL;
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+ return 0;
+}
+
+static unsigned int sdhci_get_xfer_remain(struct mmc_host *mmc)
+{
+ struct sdhci_host *host = mmc_priv(mmc);
+ u32 present_state = 0;
+
+ present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
+
+ return present_state & SDHCI_DOING_WRITE;
+}
+
static const struct mmc_host_ops sdhci_ops = {
.pre_req = sdhci_pre_req,
.post_req = sdhci_post_req,
@@ -2137,6 +2184,8 @@
.enable_preset_value = sdhci_enable_preset_value,
.enable = sdhci_enable,
.disable = sdhci_disable,
+ .stop_request = sdhci_stop_request,
+ .get_xfer_remain = sdhci_get_xfer_remain,
};
/*****************************************************************************\
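
The new .stop_request and .get_xfer_remain host ops are intended to be driven
from the MMC core when an in-flight data request has to be aborted (the
MMC_CAP2_STOP_REQUEST capability that sdhci-msm now advertises). A sketch of
how a core-side caller might use them is shown below; the function name and
flow are illustrative assumptions, not the actual core implementation:

#include <linux/errno.h>
#include <linux/mmc/host.h>

/* Illustrative only: abort an in-flight data request on hosts that
 * advertise MMC_CAP2_STOP_REQUEST. */
static int example_abort_request(struct mmc_host *host)
{
	int err;

	if (!(host->caps2 & MMC_CAP2_STOP_REQUEST) || !host->ops->stop_request)
		return -EOPNOTSUPP;

	/* ask the host controller to tear down the ongoing transfer */
	err = host->ops->stop_request(host);

	/* if the bus is still busy writing, the card needs further recovery */
	if (!err && host->ops->get_xfer_remain && host->ops->get_xfer_remain(host))
		pr_debug("%s: write still active after stop_request\n",
			 mmc_hostname(host));

	return err;
}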
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index c6bef8a..a3d8442 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -286,6 +286,7 @@
void (*toggle_cdr)(struct sdhci_host *host, bool enable);
unsigned int (*get_max_segments)(void);
void (*platform_bus_voting)(struct sdhci_host *host, u32 enable);
+ void (*disable_data_xfer)(struct sdhci_host *host);
};
#ifdef CONFIG_MMC_SDHCI_IO_ACCESSORS
diff --git a/drivers/mtd/devices/msm_qpic_nand.c b/drivers/mtd/devices/msm_qpic_nand.c
index 570c257..fe8bdd0 100644
--- a/drivers/mtd/devices/msm_qpic_nand.c
+++ b/drivers/mtd/devices/msm_qpic_nand.c
@@ -691,9 +691,9 @@
/* Lookup the 'APPS' partition's first page address */
for (i = 0; i < FLASH_PTABLE_MAX_PARTS_V4; i++) {
- if (!strncmp("apps", ptable.part_entry[i].name,
- strlen(ptable.part_entry[i].name))) {
- page_address = ptable.part_entry[i].offset << 6;
+ if (!strncmp("apps", mtd_part[i].name,
+ strlen(mtd_part[i].name))) {
+ page_address = mtd_part[i].offset << 6;
break;
}
}
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 4e12bb7..147e378 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -1127,6 +1127,33 @@
}
#endif
+static inline unsigned long get_vm_size(struct vm_area_struct *vma)
+{
+ return vma->vm_end - vma->vm_start;
+}
+
+static inline resource_size_t get_vm_offset(struct vm_area_struct *vma)
+{
+ return (resource_size_t) vma->vm_pgoff << PAGE_SHIFT;
+}
+
+/*
+ * Set a new vm offset.
+ *
+ * Verify that the incoming offset really works as a page offset,
+ * and that the offset and size fit in a resource_size_t.
+ */
+static inline int set_vm_offset(struct vm_area_struct *vma, resource_size_t off)
+{
+ pgoff_t pgoff = off >> PAGE_SHIFT;
+ if (off != (resource_size_t) pgoff << PAGE_SHIFT)
+ return -EINVAL;
+ if (off + get_vm_size(vma) - 1 < off)
+ return -EINVAL;
+ vma->vm_pgoff = pgoff;
+ return 0;
+}
+
/*
* set up a mapping for shared memory segments
*/
@@ -1136,20 +1163,29 @@
struct mtd_file_info *mfi = file->private_data;
struct mtd_info *mtd = mfi->mtd;
struct map_info *map = mtd->priv;
- unsigned long start;
- unsigned long off;
- u32 len;
+ resource_size_t start, off;
+ unsigned long len, vma_len;
if (mtd->type == MTD_RAM || mtd->type == MTD_ROM) {
- off = vma->vm_pgoff << PAGE_SHIFT;
+ off = get_vm_offset(vma);
start = map->phys;
len = PAGE_ALIGN((start & ~PAGE_MASK) + map->size);
start &= PAGE_MASK;
- if ((vma->vm_end - vma->vm_start + off) > len)
+ vma_len = get_vm_size(vma);
+
+ /* Overflow in off+len? */
+ if (vma_len + off < off)
+ return -EINVAL;
+ /* Does it fit in the mapping? */
+ if (vma_len + off > len)
return -EINVAL;
off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
+ /* Did that overflow? */
+ if (off < start)
+ return -EINVAL;
+ if (set_vm_offset(vma, off) < 0)
+ return -EINVAL;
vma->vm_flags |= VM_IO | VM_RESERVED;
#ifdef pgprot_noncached
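
A concrete case that the new checks reject: on a target where both
resource_size_t and unsigned long are 32 bits, an mmap() request whose
vm_pgoff corresponds to off = 0xFFFFF000 with a 0x2000-byte vma would wrap
around in the old "vm_end - vm_start + off > len" arithmetic and slip past the
bounds test; the added "vma_len + off < off" check in the mmap handler and the
"off + get_vm_size(vma) - 1 < off" check in set_vm_offset() catch the wrap and
return -EINVAL instead of mapping the wrong physical range.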
diff --git a/drivers/net/ethernet/msm/ecm_ipa.c b/drivers/net/ethernet/msm/ecm_ipa.c
index 114b23d..38a36bb 100644
--- a/drivers/net/ethernet/msm/ecm_ipa.c
+++ b/drivers/net/ethernet/msm/ecm_ipa.c
@@ -22,13 +22,13 @@
#include <mach/ecm_ipa.h>
#define DRIVER_NAME "ecm_ipa"
-#define DRIVER_VERSION "20-Mar-2013"
#define ECM_IPA_IPV4_HDR_NAME "ecm_eth_ipv4"
#define ECM_IPA_IPV6_HDR_NAME "ecm_eth_ipv6"
#define IPA_TO_USB_CLIENT IPA_CLIENT_USB_CONS
#define INACTIVITY_MSEC_DELAY 100
#define DEFAULT_OUTSTANDING_HIGH 64
#define DEFAULT_OUTSTANDING_LOW 32
+#define DEBUGFS_TEMP_BUF_SIZE 4
#define ECM_IPA_ERROR(fmt, args...) \
pr_err(DRIVER_NAME "@%s@%d@ctx:%s: "\
@@ -57,6 +57,7 @@
* @tx_file: saved debugfs entry to allow cleanup
* @rx_file: saved debugfs entry to allow cleanup
* @rm_file: saved debugfs entry to allow cleanup
+ * @outstanding_file: saved debugfs entry to allow cleanup
* @outstanding_high_file saved debugfs entry to allow cleanup
* @outstanding_low_file saved debugfs entry to allow cleanup
* @dma_file: saved debugfs entry to allow cleanup
@@ -82,6 +83,7 @@
struct dentry *outstanding_high_file;
struct dentry *outstanding_low_file;
struct dentry *dma_file;
+ struct dentry *outstanding_file;
uint32_t eth_ipv4_hdr_hdl;
uint32_t eth_ipv6_hdr_hdl;
u32 usb_to_ipa_hdl;
@@ -129,8 +131,11 @@
static int ecm_ipa_debugfs_rx_open(struct inode *inode, struct file *file);
static int ecm_ipa_debugfs_rm_open(struct inode *inode, struct file *file);
static int ecm_ipa_debugfs_dma_open(struct inode *inode, struct file *file);
+static int ecm_ipa_debugfs_atomic_open(struct inode *inode, struct file *file);
static ssize_t ecm_ipa_debugfs_enable_read(struct file *file,
char __user *ubuf, size_t count, loff_t *ppos);
+static ssize_t ecm_ipa_debugfs_atomic_read(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos);
static ssize_t ecm_ipa_debugfs_enable_write(struct file *file,
const char __user *buf, size_t count, loff_t *ppos);
static ssize_t ecm_ipa_debugfs_enable_write_dma(struct file *file,
@@ -169,6 +174,11 @@
.write = ecm_ipa_debugfs_enable_write_dma,
};
+const struct file_operations ecm_ipa_debugfs_atomic_ops = {
+ .open = ecm_ipa_debugfs_atomic_open,
+ .read = ecm_ipa_debugfs_atomic_read,
+};
+
/**
* ecm_ipa_init() - initializes internal data structures
* @ecm_ipa_rx_dp_notify: supplied callback to be called by the IPA
@@ -195,10 +205,12 @@
struct net_device *net;
struct ecm_ipa_dev *dev;
ECM_IPA_LOG_ENTRY();
- pr_debug("%s version %s\n", DRIVER_NAME, DRIVER_VERSION);
+ pr_debug("%s initializing\n", DRIVER_NAME);
NULL_CHECK(ecm_ipa_rx_dp_notify);
NULL_CHECK(ecm_ipa_tx_dp_notify);
NULL_CHECK(priv);
+ pr_debug("rx_cb=0x%p, tx_cb=0x%p priv=0x%p\n",
+ ecm_ipa_rx_dp_notify, ecm_ipa_tx_dp_notify, *priv);
net = alloc_etherdev(sizeof(struct ecm_ipa_dev));
if (!net) {
ret = -ENOMEM;
@@ -420,8 +432,8 @@
NULL_CHECK(dev);
net = dev->net;
NULL_CHECK(net);
- pr_debug("host_ethaddr=%pM device_ethaddr=%pM\n",
- host_ethaddr, device_ethaddr);
+ pr_debug("priv=0x%p, host_ethaddr=%pM device_ethaddr=%pM\n",
+ priv, host_ethaddr, device_ethaddr);
result = ecm_ipa_create_rm_resource(dev);
if (result) {
ECM_IPA_ERROR("fail on RM create\n");
@@ -472,8 +484,8 @@
struct ecm_ipa_dev *dev = priv;
ECM_IPA_LOG_ENTRY();
NULL_CHECK(priv);
- pr_debug("usb_to_ipa_hdl = %d, ipa_to_usb_hdl = %d\n",
- usb_to_ipa_hdl, ipa_to_usb_hdl);
+ pr_debug("usb_to_ipa_hdl = %d, ipa_to_usb_hdl = %d, priv=0x%p\n",
+ usb_to_ipa_hdl, ipa_to_usb_hdl, priv);
if (!usb_to_ipa_hdl || usb_to_ipa_hdl >= IPA_CLIENT_MAX) {
ECM_IPA_ERROR("usb_to_ipa_hdl(%d) is not a valid ipa handle\n",
usb_to_ipa_hdl);
@@ -502,6 +514,7 @@
struct ecm_ipa_dev *dev = priv;
ECM_IPA_LOG_ENTRY();
NULL_CHECK(dev);
+ pr_debug("priv=0x%p\n", priv);
netif_carrier_off(dev->net);
ECM_IPA_LOG_EXIT();
return 0;
@@ -622,6 +635,7 @@
{
struct ecm_ipa_dev *dev = priv;
ECM_IPA_LOG_ENTRY();
+ pr_debug("priv=0x%p\n", priv);
if (!dev) {
ECM_IPA_ERROR("dev NULL pointer\n");
return;
@@ -630,10 +644,9 @@
ecm_ipa_destory_rm_resource(dev);
ecm_ipa_debugfs_destroy(dev);
- if (!dev->net) {
- unregister_netdev(dev->net);
- free_netdev(dev->net);
- }
+ unregister_netdev(dev->net);
+ free_netdev(dev->net);
+
pr_debug("cleanup done\n");
ecm_ipa_ctx = NULL;
ECM_IPA_LOG_EXIT();
@@ -833,6 +846,15 @@
return 0;
}
+static int ecm_ipa_debugfs_atomic_open(struct inode *inode, struct file *file)
+{
+ struct ecm_ipa_dev *dev = inode->i_private;
+ ECM_IPA_LOG_ENTRY();
+ file->private_data = &(dev->outstanding_pkts);
+ ECM_IPA_LOG_EXIT();
+ return 0;
+}
+
static ssize_t ecm_ipa_debugfs_enable_write_dma(struct file *file,
const char __user *buf, size_t count, loff_t *ppos)
{
@@ -904,9 +926,22 @@
return size;
}
+static ssize_t ecm_ipa_debugfs_atomic_read(struct file *file,
+ char __user *ubuf, size_t count, loff_t *ppos)
+{
+ int nbytes;
+ u8 atomic_str[DEBUGFS_TEMP_BUF_SIZE] = {0};
+ atomic_t *atomic_var = file->private_data;
+ nbytes = scnprintf(atomic_str, sizeof(atomic_str), "%d\n",
+ atomic_read(atomic_var));
+ return simple_read_from_buffer(ubuf, count, ppos, atomic_str, nbytes);
+}
+
+
static int ecm_ipa_debugfs_init(struct ecm_ipa_dev *dev)
{
const mode_t flags = S_IRUGO | S_IWUGO;
+ const mode_t flags_read_only = S_IRUGO;
int ret = -EINVAL;
ECM_IPA_LOG_ENTRY();
@@ -961,6 +996,14 @@
ret = -EFAULT;
goto fail_file;
}
+ dev->outstanding_file = debugfs_create_file("outstanding",
+ flags_read_only, dev->folder, dev,
+ &ecm_ipa_debugfs_atomic_ops);
+ if (!dev->outstanding_file) {
+ ECM_IPA_ERROR("could not create outstanding file\n");
+ ret = -EFAULT;
+ goto fail_file;
+ }
ECM_IPA_LOG_EXIT();
return 0;
@@ -980,7 +1023,6 @@
{
ECM_IPA_LOG_ENTRY();
strlcpy(drv_info->driver, DRIVER_NAME, sizeof(drv_info->driver));
- strlcpy(drv_info->version, DRIVER_VERSION, sizeof(drv_info->version));
ECM_IPA_LOG_EXIT();
}
diff --git a/drivers/platform/msm/ipa/Makefile b/drivers/platform/msm/ipa/Makefile
index b7eca61..2b6ce75 100644
--- a/drivers/platform/msm/ipa/Makefile
+++ b/drivers/platform/msm/ipa/Makefile
@@ -1,4 +1,4 @@
obj-$(CONFIG_IPA) += ipat.o
ipat-y := ipa.o ipa_debugfs.o ipa_hdr.o ipa_flt.o ipa_rt.o ipa_dp.o ipa_client.o \
- ipa_utils.o ipa_nat.o rmnet_bridge.o a2_service.o ipa_bridge.o ipa_intf.o teth_bridge.o \
+ ipa_utils.o ipa_nat.o a2_service.o ipa_bridge.o ipa_intf.o teth_bridge.o \
ipa_rm.o ipa_rm_dependency_graph.o ipa_rm_peers_list.o ipa_rm_resource.o ipa_rm_inactivity_timer.o
diff --git a/drivers/platform/msm/ipa/a2_service.c b/drivers/platform/msm/ipa/a2_service.c
index 4b5f0a2..1b33dc0 100644
--- a/drivers/platform/msm/ipa/a2_service.c
+++ b/drivers/platform/msm/ipa/a2_service.c
@@ -205,6 +205,7 @@
smsm_change_state(SMSM_APPS_STATE,
clear_bit & SMSM_A2_POWER_CONTROL_ACK,
~clear_bit & SMSM_A2_POWER_CONTROL_ACK);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_apps_acks);
clear_bit = ~clear_bit;
}
@@ -216,10 +217,13 @@
if (a2_mux_ctx->bam_dmux_uplink_vote == vote)
IPADBG("%s: warning - duplicate power vote\n", __func__);
a2_mux_ctx->bam_dmux_uplink_vote = vote;
- if (vote)
+ if (vote) {
smsm_change_state(SMSM_APPS_STATE, 0, SMSM_A2_POWER_CONTROL);
- else
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_on_reqs_out);
+ } else {
smsm_change_state(SMSM_APPS_STATE, SMSM_A2_POWER_CONTROL, 0);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_off_reqs_out);
+ }
}
static inline void ul_powerdown(void)
@@ -634,12 +638,14 @@
last_processed_state = new_state & SMSM_A2_POWER_CONTROL;
if (new_state & SMSM_A2_POWER_CONTROL) {
IPADBG("%s: MODEM PWR CTRL 1\n", __func__);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_on_reqs_in);
grab_wakelock();
(void) connect_to_bam();
queue_work(a2_mux_ctx->a2_mux_tx_workqueue,
&a2_mux_ctx->kickoff_ul_request_resource);
} else if (!(new_state & SMSM_A2_POWER_CONTROL)) {
IPADBG("%s: MODEM PWR CTRL 0\n", __func__);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_off_reqs_in);
(void) disconnect_to_bam();
release_wakelock();
} else {
@@ -653,6 +659,7 @@
{
IPADBG("%s: 0x%08x -> 0x%08x\n", __func__, old_state,
new_state);
+ IPA_STATS_INC_CNT(ipa_ctx->stats.a2_power_modem_acks);
complete_all(&a2_mux_ctx->ul_wakeup_ack_completion);
}
@@ -1492,6 +1499,7 @@
a2_props.options = SPS_BAM_OPT_IRQ_WAKEUP;
a2_props.num_pipes = A2_NUM_PIPES;
a2_props.summing_threshold = A2_SUMMING_THRESHOLD;
+ a2_props.manage = SPS_BAM_MGR_DEVICE_REMOTE;
/* need to free on tear down */
rc = sps_register_bam_device(&a2_props, &h);
if (rc < 0) {
diff --git a/drivers/platform/msm/ipa/ipa.c b/drivers/platform/msm/ipa/ipa.c
index 5cc5d46..466e694 100644
--- a/drivers/platform/msm/ipa/ipa.c
+++ b/drivers/platform/msm/ipa/ipa.c
@@ -50,6 +50,13 @@
#define IPA_AGGR_STR_IN_BYTES(str) \
(strnlen((str), IPA_AGGR_MAX_STR_LENGTH - 1) + 1)
+/*
+ * This equals a timer value of 162.56us. This value was
+ * determined empirically and yields good bi-directional
+ * WLAN throughput
+ */
+#define IPA_HOLB_TMR_DEFAULT_VAL 0x7f
+
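
(0x7f corresponds to 127 timer ticks; together with the 162.56us figure above
this suggests a tick granularity of about 1.28us, since 127 x 1.28us = 162.56us.)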
static struct ipa_plat_drv_res ipa_res = {0, };
static struct of_device_id ipa_plat_drv_match[] = {
{
@@ -73,6 +80,18 @@
.ab = 0,
.ib = 0,
},
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_EBI_CH0,
+ .ab = 0,
+ .ib = 0,
+ },
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_OCIMEM,
+ .ab = 0,
+ .ib = 0,
+ },
};
static struct msm_bus_vectors ipa_max_perf_vectors[] = {
@@ -82,6 +101,18 @@
.ab = 50000000,
.ib = 960000000,
},
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_EBI_CH0,
+ .ab = 50000000,
+ .ib = 960000000,
+ },
+ {
+ .src = MSM_BUS_MASTER_BAM_DMA,
+ .dst = MSM_BUS_SLAVE_OCIMEM,
+ .ab = 50000000,
+ .ib = 960000000,
+ },
};
static struct msm_bus_paths ipa_usecases[] = {
@@ -901,7 +932,7 @@
/* LAN-WAN OUT (A5->IPA) */
memset(&sys_in, 0, sizeof(struct ipa_sys_connect_params));
sys_in.client = IPA_CLIENT_A5_LAN_WAN_PROD;
- sys_in.desc_fifo_sz = IPA_SYS_DESC_FIFO_SZ;
+ sys_in.desc_fifo_sz = IPA_SYS_TX_DATA_DESC_FIFO_SZ;
sys_in.ipa_ep_cfg.mode.mode = IPA_BASIC;
sys_in.ipa_ep_cfg.mode.dst = IPA_CLIENT_A5_LAN_WAN_CONS;
if (ipa_setup_sys_pipe(&sys_in, &ipa_ctx->clnt_hdl_data_out)) {
@@ -1123,7 +1154,7 @@
u32 producer_hdl = 0;
u32 consumer_hdl = 0;
- rmnet_bridge_get_client_handles(&producer_hdl, &consumer_hdl);
+ teth_bridge_get_client_handles(&producer_hdl, &consumer_hdl);
/* configure aggregation on producer */
memset(&agg_params, 0, sizeof(struct ipa_ep_cfg_aggr));
@@ -1460,6 +1491,41 @@
WARN_ON(1);
}
+/**
+* ipa_inc_client_enable_clks() - Increase active clients counter, and
+* enable ipa clocks if necessary
+*
+* Return codes:
+* None
+*/
+void ipa_inc_client_enable_clks(void)
+{
+ mutex_lock(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients++;
+ if (ipa_ctx->ipa_active_clients == 1)
+ if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
+ ipa_enable_clks();
+ mutex_unlock(&ipa_ctx->ipa_active_clients_lock);
+}
+
+/**
+* ipa_dec_client_disable_clks() - Decrease active clients counter, and
+* disable ipa clocks if necessary
+*
+* Return codes:
+* None
+*/
+void ipa_dec_client_disable_clks(void)
+{
+ mutex_lock(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients--;
+ if (ipa_ctx->ipa_active_clients == 0)
+ if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
+ ipa_disable_clks();
+ mutex_unlock(&ipa_ctx->ipa_active_clients_lock);
+}
+
+
static int ipa_setup_bam_cfg(const struct ipa_plat_drv_res *res)
{
void *bam_cnfg_bits;
@@ -1595,6 +1661,8 @@
result = -ENOMEM;
goto fail_mem;
}
+ ipa_ctx->hol_en = 0x1;
+ ipa_ctx->hol_timer = IPA_HOLB_TMR_DEFAULT_VAL;
IPADBG("polling_mode=%u delay_ms=%u\n", polling_mode, polling_delay_ms);
ipa_ctx->polling_mode = polling_mode;
@@ -1677,6 +1745,7 @@
bam_props.num_pipes = IPA_NUM_PIPES;
bam_props.summing_threshold = IPA_SUMMING_THRESHOLD;
bam_props.event_threshold = IPA_EVENT_THRESHOLD;
+ bam_props.options |= SPS_BAM_NO_LOCAL_CLK_GATING;
result = sps_register_bam_device(&bam_props, &ipa_ctx->bam_handle);
if (result) {
@@ -1845,13 +1914,14 @@
ipa_ctx->flt_rule_hdl_tree = RB_ROOT;
ipa_ctx->tag_tree = RB_ROOT;
- atomic_set(&ipa_ctx->ipa_active_clients, 0);
+ mutex_init(&ipa_ctx->ipa_active_clients_lock);
+ ipa_ctx->ipa_active_clients = 0;
result = ipa_bridge_init();
if (result) {
IPAERR("ipa bridge init err.\n");
result = -ENODEV;
- goto fail_bridge_init;
+ goto fail_a5_pipes;
}
/* setup the A5-IPA pipes */
@@ -1972,8 +2042,6 @@
ipa_cleanup_rx();
ipa_teardown_a5_pipes();
fail_a5_pipes:
- ipa_bridge_cleanup();
-fail_bridge_init:
destroy_workqueue(ipa_ctx->tx_wq);
fail_tx_wq:
destroy_workqueue(ipa_ctx->rx_wq);
diff --git a/drivers/platform/msm/ipa/ipa_bridge.c b/drivers/platform/msm/ipa/ipa_bridge.c
index dd00081..3ff604c 100644
--- a/drivers/platform/msm/ipa/ipa_bridge.c
+++ b/drivers/platform/msm/ipa/ipa_bridge.c
@@ -12,10 +12,52 @@
#include <linux/delay.h>
#include <linux/ratelimit.h>
+#include <mach/msm_smsm.h>
#include "ipa_i.h"
-#define A2_EMBEDDED_PIPE_TX 4
-#define A2_EMBEDDED_PIPE_RX 5
+/*
+ * EP0 (teth)
+ * A2_BAM(1)->(12)DMA_BAM->DMA_BAM(13)->(6)IPA_BAM->IPA_BAM(10)->USB_BAM(0)
+ * A2_BAM(0)<-(15)DMA_BAM<-DMA_BAM(14)<-(7)IPA_BAM<-IPA_BAM(11)<-USB_BAM(1)
+ *
+ * EP2 (emb)
+ * A2_BAM(5)->(16)DMA_BAM->DMA_BAM(17)->(8)IPA_BAM->
+ * A2_BAM(4)<-(19)DMA_BAM<-DMA_BAM(18)<-(9)IPA_BAM<-
+ */
+
+#define A2_TETHERED_PIPE_UL 0
+#define DMA_A2_TETHERED_PIPE_UL 15
+#define DMA_IPA_TETHERED_PIPE_UL 14
+#define A2_TETHERED_PIPE_DL 1
+#define DMA_A2_TETHERED_PIPE_DL 12
+#define DMA_IPA_TETHERED_PIPE_DL 13
+
+#define A2_EMBEDDED_PIPE_UL 4
+#define DMA_A2_EMBEDDED_PIPE_UL 19
+#define DMA_IPA_EMBEDDED_PIPE_UL 18
+#define A2_EMBEDDED_PIPE_DL 5
+#define DMA_A2_EMBEDDED_PIPE_DL 16
+#define DMA_IPA_EMBEDDED_PIPE_DL 17
+
+#define IPA_SMEM_PIPE_MEM_SZ 32768
+
+#define IPA_UL_DATA_FIFO_SZ 0xc00
+#define IPA_UL_DESC_FIFO_SZ 0x530
+#define IPA_DL_DATA_FIFO_SZ 0x2400
+#define IPA_DL_DESC_FIFO_SZ 0x8a0
+
+#define IPA_SMEM_UL_DATA_FIFO_OFST 0x3dd0
+#define IPA_SMEM_UL_DESC_FIFO_OFST 0x49d0
+#define IPA_SMEM_DL_DATA_FIFO_OFST 0x4f00
+#define IPA_SMEM_DL_DESC_FIFO_OFST 0x7300
+
+#define IPA_OCIMEM_UL_DATA_FIFO_OFST 0
+#define IPA_OCIMEM_UL_DESC_FIFO_OFST (IPA_OCIMEM_UL_DATA_FIFO_OFST + \
+ IPA_UL_DATA_FIFO_SZ)
+#define IPA_OCIMEM_DL_DATA_FIFO_OFST (IPA_OCIMEM_UL_DESC_FIFO_OFST + \
+ IPA_UL_DESC_FIFO_SZ)
+#define IPA_OCIMEM_DL_DESC_FIFO_OFST (IPA_OCIMEM_DL_DATA_FIFO_OFST + \
+ IPA_DL_DATA_FIFO_SZ)
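
With the FIFO sizes above, the OCIMEM layout stacks up as: UL data at 0x0
(0xc00 bytes), UL descriptors at 0xc00 (0x530 bytes), DL data at 0x1130
(0x2400 bytes) and DL descriptors at 0x3530 (0x8a0 bytes), for a total
footprint of 0x3dd0 bytes.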
enum ipa_pipe_type {
IPA_DL_FROM_A2,
@@ -25,678 +67,383 @@
IPA_PIPE_TYPE_MAX
};
-static int polling_min_sleep[IPA_BRIDGE_DIR_MAX] = { 950, 950 };
-static int polling_max_sleep[IPA_BRIDGE_DIR_MAX] = { 1050, 1050 };
-static int polling_inactivity[IPA_BRIDGE_DIR_MAX] = { 4, 4 };
-
-struct ipa_pkt_info {
- void *buffer;
- dma_addr_t dma_address;
- uint32_t len;
- struct list_head link;
-};
-
struct ipa_bridge_pipe_context {
- struct list_head head_desc_list;
struct sps_pipe *pipe;
- struct sps_connect connection;
- struct sps_mem_buffer desc_mem_buf;
- struct sps_register_event register_event;
- struct list_head free_desc_list;
+ bool ipa_facing;
bool valid;
};
struct ipa_bridge_context {
struct ipa_bridge_pipe_context pipe[IPA_PIPE_TYPE_MAX];
- struct workqueue_struct *ul_wq;
- struct workqueue_struct *dl_wq;
- struct work_struct ul_work;
- struct work_struct dl_work;
enum ipa_bridge_type type;
};
static struct ipa_bridge_context bridge[IPA_BRIDGE_TYPE_MAX];
-static void ipa_do_bridge_work(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx);
-
-static void ul_work_func(struct work_struct *work)
+static void ipa_get_dma_pipe_num(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type, int *a2, int *ipa)
{
- struct ipa_bridge_context *ctx = container_of(work,
- struct ipa_bridge_context, ul_work);
- ipa_do_bridge_work(IPA_BRIDGE_DIR_UL, ctx);
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ *a2 = DMA_A2_TETHERED_PIPE_UL;
+ *ipa = DMA_IPA_TETHERED_PIPE_UL;
+ } else {
+ *a2 = DMA_A2_TETHERED_PIPE_DL;
+ *ipa = DMA_IPA_TETHERED_PIPE_DL;
+ }
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ *a2 = DMA_A2_EMBEDDED_PIPE_UL;
+ *ipa = DMA_IPA_EMBEDDED_PIPE_UL;
+ } else {
+ *a2 = DMA_A2_EMBEDDED_PIPE_DL;
+ *ipa = DMA_IPA_EMBEDDED_PIPE_DL;
+ }
+ }
}
-static void dl_work_func(struct work_struct *work)
+static int ipa_get_desc_fifo_sz(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
{
- struct ipa_bridge_context *ctx = container_of(work,
- struct ipa_bridge_context, dl_work);
- ipa_do_bridge_work(IPA_BRIDGE_DIR_DL, ctx);
+ int sz;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DESC_FIFO_SZ;
+ else
+ sz = IPA_DL_DESC_FIFO_SZ;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DESC_FIFO_SZ;
+ else
+ sz = IPA_DL_DESC_FIFO_SZ;
+ }
+
+ return sz;
}
-static int ipa_switch_to_intr_mode(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx)
+static int ipa_get_data_fifo_sz(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
+{
+ int sz;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DATA_FIFO_SZ;
+ else
+ sz = IPA_DL_DATA_FIFO_SZ;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ sz = IPA_UL_DATA_FIFO_SZ;
+ else
+ sz = IPA_DL_DATA_FIFO_SZ;
+ }
+
+ return sz;
+}
+
+static int ipa_get_a2_pipe_num(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type)
+{
+ int ep;
+
+ if (type == IPA_BRIDGE_TYPE_TETHERED) {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ ep = A2_TETHERED_PIPE_UL;
+ else
+ ep = A2_TETHERED_PIPE_DL;
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL)
+ ep = A2_EMBEDDED_PIPE_UL;
+ else
+ ep = A2_EMBEDDED_PIPE_DL;
+ }
+
+ return ep;
+}
+
+int ipa_setup_a2_dma_fifos(enum ipa_bridge_dir dir,
+ enum ipa_bridge_type type,
+ struct sps_mem_buffer *desc,
+ struct sps_mem_buffer *data)
{
int ret;
- struct ipa_bridge_pipe_context *sys = &ctx->pipe[2 * dir];
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_get_config() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- sys->register_event.options = SPS_O_EOT;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret) {
- IPAERR("sps_register_event() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_EOT;
- ret = sps_set_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_set_config() failed %d type=%d dir=%d\n",
- ret, ctx->type, dir);
- goto fail;
- }
- ret = 0;
-fail:
- return ret;
-}
+ if (type == IPA_BRIDGE_TYPE_EMBEDDED) {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ desc->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_UL_DESC_FIFO_OFST;
+ desc->phys_base = smem_virt_to_phys(desc->base);
+ desc->size = ipa_get_desc_fifo_sz(dir, type);
+ data->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_UL_DATA_FIFO_OFST;
+ data->phys_base = smem_virt_to_phys(data->base);
+ data->size = ipa_get_data_fifo_sz(dir, type);
+ } else {
+ desc->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_DL_DESC_FIFO_OFST;
+ desc->phys_base = smem_virt_to_phys(desc->base);
+ desc->size = ipa_get_desc_fifo_sz(dir, type);
+ data->base = ipa_ctx->smem_pipe_mem +
+ IPA_SMEM_DL_DATA_FIFO_OFST;
+ data->phys_base = smem_virt_to_phys(data->base);
+ data->size = ipa_get_data_fifo_sz(dir, type);
+ }
+ } else {
+ if (dir == IPA_BRIDGE_DIR_UL) {
+ ret = sps_setup_bam2bam_fifo(data,
+ IPA_OCIMEM_UL_DATA_FIFO_OFST,
+ ipa_get_data_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DAFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
-static int ipa_switch_to_poll_mode(enum ipa_bridge_dir dir,
- enum ipa_bridge_type type)
-{
- int ret;
- struct ipa_bridge_pipe_context *sys = &bridge[type].pipe[2 * dir];
+ ret = sps_setup_bam2bam_fifo(desc,
+ IPA_OCIMEM_UL_DESC_FIFO_OFST,
+ ipa_get_desc_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DEFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
+ } else {
+ ret = sps_setup_bam2bam_fifo(data,
+ IPA_OCIMEM_DL_DATA_FIFO_OFST,
+ ipa_get_data_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DAFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_get_config() failed %d type=%d dir=%d\n",
- ret, type, dir);
- goto fail;
- }
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- ret = sps_set_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("sps_set_config() failed %d type=%d dir=%d\n",
- ret, type, dir);
- goto fail;
- }
- ret = 0;
-fail:
- return ret;
-}
-
-static int queue_rx_single(enum ipa_bridge_dir dir, enum ipa_bridge_type type)
-{
- struct ipa_bridge_pipe_context *sys_rx = &bridge[type].pipe[2 * dir];
- struct ipa_pkt_info *info;
- int ret;
-
- info = kmalloc(sizeof(struct ipa_pkt_info), GFP_KERNEL);
- if (!info) {
- IPAERR("unable to alloc rx_pkt_info type=%d dir=%d\n",
- type, dir);
- goto fail_pkt;
+ ret = sps_setup_bam2bam_fifo(desc,
+ IPA_OCIMEM_DL_DESC_FIFO_OFST,
+ ipa_get_desc_fifo_sz(dir, type), 1);
+ if (ret) {
+ IPAERR("DEFIFO setup fail %d dir %d type %d\n",
+ ret, dir, type);
+ return ret;
+ }
+ }
}
- info->buffer = kmalloc(IPA_RX_SKB_SIZE, GFP_KERNEL | GFP_DMA);
- if (!info->buffer) {
- IPAERR("unable to alloc rx_pkt_buffer type=%d dir=%d\n",
- type, dir);
- goto fail_buffer;
- }
+ IPADBG("dir=%d type=%d Dpa=%x Dsz=%u Dva=%p dpa=%x dsz=%u dva=%p\n",
+ dir, type, data->phys_base, data->size, data->base,
+ desc->phys_base, desc->size, desc->base);
- info->dma_address = dma_map_single(NULL, info->buffer, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- if (info->dma_address == 0 || info->dma_address == ~0) {
- IPAERR("dma_map_single failure %p for %p type=%d dir=%d\n",
- (void *)info->dma_address, info->buffer,
- type, dir);
- goto fail_dma;
- }
-
- list_add_tail(&info->link, &sys_rx->head_desc_list);
- ret = sps_transfer_one(sys_rx->pipe, info->dma_address,
- IPA_RX_SKB_SIZE, info,
- SPS_IOVEC_FLAG_INT);
- if (ret) {
- list_del(&info->link);
- dma_unmap_single(NULL, info->dma_address, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- IPAERR("sps_transfer_one failed %d type=%d dir=%d\n", ret,
- type, dir);
- goto fail_dma;
- }
return 0;
-
-fail_dma:
- kfree(info->buffer);
-fail_buffer:
- kfree(info);
-fail_pkt:
- IPAERR("failed type=%d dir=%d\n", type, dir);
- return -ENOMEM;
}
-static int ipa_reclaim_tx(struct ipa_bridge_pipe_context *sys_tx, bool all)
-{
- struct sps_iovec iov;
- struct ipa_pkt_info *tx_pkt;
- int cnt = 0;
- int ret;
-
- do {
- iov.addr = 0;
- ret = sps_get_iovec(sys_tx->pipe, &iov);
- if (ret || iov.addr == 0) {
- break;
- } else {
- tx_pkt = list_first_entry(&sys_tx->head_desc_list,
- struct ipa_pkt_info,
- link);
- list_move_tail(&tx_pkt->link,
- &sys_tx->free_desc_list);
- cnt++;
- }
- } while (all);
-
- return cnt;
-}
-
-static void ipa_do_bridge_work(enum ipa_bridge_dir dir,
- struct ipa_bridge_context *ctx)
-{
- struct ipa_bridge_pipe_context *sys_rx = &ctx->pipe[2 * dir];
- struct ipa_bridge_pipe_context *sys_tx = &ctx->pipe[2 * dir + 1];
- struct ipa_pkt_info *tx_pkt;
- struct ipa_pkt_info *rx_pkt;
- struct ipa_pkt_info *tmp_pkt;
- struct sps_iovec iov;
- int ret;
- int inactive_cycles = 0;
-
- while (1) {
- ++inactive_cycles;
-
- if (ipa_reclaim_tx(sys_tx, false))
- inactive_cycles = 0;
-
- iov.addr = 0;
- ret = sps_get_iovec(sys_rx->pipe, &iov);
- if (ret || iov.addr == 0) {
- /* no-op */
- } else {
- inactive_cycles = 0;
-
- rx_pkt = list_first_entry(&sys_rx->head_desc_list,
- struct ipa_pkt_info,
- link);
- list_del(&rx_pkt->link);
- rx_pkt->len = iov.size;
-
-retry_alloc_tx:
- if (list_empty(&sys_tx->free_desc_list)) {
- tmp_pkt = kmalloc(sizeof(struct ipa_pkt_info),
- GFP_KERNEL);
- if (!tmp_pkt) {
- pr_debug_ratelimited("%s: unable to alloc tx_pkt_info type=%d dir=%d\n",
- __func__, ctx->type, dir);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_alloc_tx;
- }
-
- tmp_pkt->buffer = kmalloc(IPA_RX_SKB_SIZE,
- GFP_KERNEL | GFP_DMA);
- if (!tmp_pkt->buffer) {
- pr_debug_ratelimited("%s: unable to alloc tx_pkt_buffer type=%d dir=%d\n",
- __func__, ctx->type, dir);
- kfree(tmp_pkt);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_alloc_tx;
- }
-
- tmp_pkt->dma_address = dma_map_single(NULL,
- tmp_pkt->buffer,
- IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- if (tmp_pkt->dma_address == 0 ||
- tmp_pkt->dma_address == ~0) {
- pr_debug_ratelimited("%s: dma_map_single failure %p for %p type=%d dir=%d\n",
- __func__,
- (void *)tmp_pkt->dma_address,
- tmp_pkt->buffer, ctx->type, dir);
- }
-
- list_add_tail(&tmp_pkt->link,
- &sys_tx->free_desc_list);
- }
-
- tx_pkt = list_first_entry(&sys_tx->free_desc_list,
- struct ipa_pkt_info,
- link);
- list_del(&tx_pkt->link);
-
-retry_add_rx:
- list_add_tail(&tx_pkt->link,
- &sys_rx->head_desc_list);
- ret = sps_transfer_one(sys_rx->pipe,
- tx_pkt->dma_address,
- IPA_RX_SKB_SIZE,
- tx_pkt,
- SPS_IOVEC_FLAG_INT);
- if (ret) {
- list_del(&tx_pkt->link);
- pr_debug_ratelimited("%s: sps_transfer_one failed %d type=%d dir=%d\n",
- __func__, ret, ctx->type, dir);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_add_rx;
- }
-
-retry_add_tx:
- list_add_tail(&rx_pkt->link,
- &sys_tx->head_desc_list);
- ret = sps_transfer_one(sys_tx->pipe,
- rx_pkt->dma_address,
- iov.size,
- rx_pkt,
- SPS_IOVEC_FLAG_INT |
- SPS_IOVEC_FLAG_EOT);
- if (ret) {
- pr_debug_ratelimited("%s: fail to add to TX type=%d dir=%d\n",
- __func__, ctx->type, dir);
- list_del(&rx_pkt->link);
- ipa_reclaim_tx(sys_tx, true);
- usleep_range(polling_min_sleep[dir],
- polling_max_sleep[dir]);
- goto retry_add_tx;
- }
- IPA_STATS_INC_BRIDGE_CNT(ctx->type, dir,
- ipa_ctx->stats.bridged_pkts);
- }
-
- if (inactive_cycles >= polling_inactivity[dir]) {
- ipa_switch_to_intr_mode(dir, ctx);
- break;
- }
- }
-}
-
-static void ipa_sps_irq_rx_notify(struct sps_event_notify *notify)
-{
- enum ipa_bridge_type type = (enum ipa_bridge_type) notify->user;
-
- switch (notify->event_id) {
- case SPS_EVENT_EOT:
- ipa_switch_to_poll_mode(IPA_BRIDGE_DIR_UL, type);
- queue_work(bridge[type].ul_wq, &bridge[type].ul_work);
- break;
- default:
- IPAERR("recieved unexpected event id %d type %d\n",
- notify->event_id, type);
- }
-}
-
-static int setup_bridge_to_ipa(enum ipa_bridge_dir dir,
+static int setup_dma_bam_bridge(enum ipa_bridge_dir dir,
enum ipa_bridge_type type,
struct ipa_sys_connect_params *props,
u32 *clnt_hdl)
{
- struct ipa_bridge_pipe_context *sys;
- dma_addr_t dma_addr;
- enum ipa_pipe_type pipe_type;
- int ipa_ep_idx;
- int ret;
- int i;
-
- ipa_ep_idx = ipa_get_ep_mapping(ipa_ctx->mode, props->client);
- if (ipa_ep_idx == -1) {
- IPAERR("Invalid client=%d mode=%d type=%d dir=%d\n",
- props->client, ipa_ctx->mode, type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
- }
-
- if (ipa_ctx->ep[ipa_ep_idx].valid) {
- IPAERR("EP %d already allocated type=%d dir=%d\n", ipa_ep_idx,
- type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
- }
-
- pipe_type = (dir == IPA_BRIDGE_DIR_DL) ? IPA_DL_TO_IPA :
- IPA_UL_FROM_IPA;
-
- sys = &bridge[type].pipe[pipe_type];
- sys->pipe = sps_alloc_endpoint();
- if (sys->pipe == NULL) {
- IPAERR("alloc endpoint failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto alloc_endpoint_failed;
- }
- ret = sps_get_config(sys->pipe, &sys->connection);
- if (ret) {
- IPAERR("get config failed %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto get_config_failed;
- }
-
- if (dir == IPA_BRIDGE_DIR_DL) {
- sys->connection.source = SPS_DEV_HANDLE_MEM;
- sys->connection.src_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.destination = ipa_ctx->bam_handle;
- sys->connection.dest_pipe_index = ipa_ep_idx;
- sys->connection.mode = SPS_MODE_DEST;
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- } else {
- sys->connection.source = ipa_ctx->bam_handle;
- sys->connection.src_pipe_index = ipa_ep_idx;
- sys->connection.destination = SPS_DEV_HANDLE_MEM;
- sys->connection.dest_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.mode = SPS_MODE_SRC;
- sys->connection.options = SPS_O_AUTO_ENABLE | SPS_O_EOT |
- SPS_O_ACK_TRANSFERS | SPS_O_NO_DISABLE;
- }
-
- sys->desc_mem_buf.size = props->desc_fifo_sz;
- sys->desc_mem_buf.base = dma_alloc_coherent(NULL,
- sys->desc_mem_buf.size,
- &dma_addr,
- 0);
- if (sys->desc_mem_buf.base == NULL) {
- IPAERR("memory alloc failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto get_config_failed;
- }
- sys->desc_mem_buf.phys_base = dma_addr;
- memset(sys->desc_mem_buf.base, 0x0, sys->desc_mem_buf.size);
- sys->connection.desc = sys->desc_mem_buf;
- sys->connection.event_thresh = IPA_EVENT_THRESHOLD;
-
- ret = sps_connect(sys->pipe, &sys->connection);
- if (ret < 0) {
- IPAERR("connect error %d type=%d dir=%d\n", ret, type, dir);
- goto connect_failed;
- }
-
- INIT_LIST_HEAD(&sys->head_desc_list);
- INIT_LIST_HEAD(&sys->free_desc_list);
-
- memset(&ipa_ctx->ep[ipa_ep_idx], 0,
- sizeof(struct ipa_ep_context));
-
- ipa_ctx->ep[ipa_ep_idx].valid = 1;
- ipa_ctx->ep[ipa_ep_idx].client_notify = props->notify;
- ipa_ctx->ep[ipa_ep_idx].priv = props->priv;
-
- ret = ipa_cfg_ep(ipa_ep_idx, &props->ipa_ep_cfg);
- if (ret < 0) {
- IPAERR("ep cfg set error %d type=%d dir=%d\n", ret, type, dir);
- ipa_ctx->ep[ipa_ep_idx].valid = 0;
- goto event_reg_failed;
- }
-
- if (dir == IPA_BRIDGE_DIR_UL) {
- sys->register_event.options = SPS_O_EOT;
- sys->register_event.mode = SPS_TRIGGER_CALLBACK;
- sys->register_event.xfer_done = NULL;
- sys->register_event.callback = ipa_sps_irq_rx_notify;
- sys->register_event.user = (void *)type;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret < 0) {
- IPAERR("register event error %d type=%d dir=%d\n", ret,
- type, dir);
- goto event_reg_failed;
- }
-
- for (i = 0; i < IPA_RX_POOL_CEIL; i++) {
- ret = queue_rx_single(dir, type);
- if (ret < 0)
- IPAERR("queue fail dir=%d type=%d iter=%d\n",
- dir, type, i);
- }
- }
-
- *clnt_hdl = ipa_ep_idx;
- sys->valid = true;
-
- return 0;
-
-event_reg_failed:
- sps_disconnect(sys->pipe);
-connect_failed:
- dma_free_coherent(NULL,
- sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
-get_config_failed:
- sps_free_endpoint(sys->pipe);
-alloc_endpoint_failed:
- return ret;
-}
-
-static void bam_mux_rx_notify(struct sps_event_notify *notify)
-{
- enum ipa_bridge_type type = (enum ipa_bridge_type) notify->user;
-
- switch (notify->event_id) {
- case SPS_EVENT_EOT:
- ipa_switch_to_poll_mode(IPA_BRIDGE_DIR_DL, type);
- queue_work(bridge[type].dl_wq, &bridge[type].dl_work);
- break;
- default:
- IPAERR("recieved unexpected event id %d type %d\n",
- notify->event_id, type);
- }
-}
-
-static int setup_bridge_to_a2(enum ipa_bridge_dir dir,
- enum ipa_bridge_type type,
- u32 desc_fifo_sz)
-{
- struct ipa_bridge_pipe_context *sys;
- struct a2_mux_pipe_connection pipe_conn = { 0 };
- dma_addr_t dma_addr;
- u32 a2_handle;
+ struct ipa_connect_params ipa_in_params;
+ struct ipa_sps_params sps_out_params;
+ int dma_a2_pipe;
+ int dma_ipa_pipe;
+ struct sps_pipe *pipe;
+ struct sps_pipe *pipe_a2;
+ struct sps_connect _connection;
+ struct sps_connect *connection = &_connection;
+ struct a2_mux_pipe_connection pipe_conn = {0};
enum a2_mux_pipe_direction pipe_dir;
- enum ipa_pipe_type pipe_type;
+ u32 dma_hdl = sps_dma_get_bam_handle();
+ u32 a2_hdl;
u32 pa;
int ret;
- int i;
+
+ memset(&ipa_in_params, 0, sizeof(ipa_in_params));
+ memset(&sps_out_params, 0, sizeof(sps_out_params));
pipe_dir = (dir == IPA_BRIDGE_DIR_UL) ? IPA_TO_A2 : A2_TO_IPA;
ret = ipa_get_a2_mux_pipe_info(pipe_dir, &pipe_conn);
if (ret) {
- IPAERR("ipa_get_a2_mux_pipe_info failed type=%d dir=%d\n",
- type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
+ IPAERR("ipa_get_a2_mux_pipe_info failed dir=%d type=%d\n",
+ dir, type);
+ goto fail_get_a2_prop;
}
pa = (dir == IPA_BRIDGE_DIR_UL) ? pipe_conn.dst_phy_addr :
pipe_conn.src_phy_addr;
- ret = sps_phy2h(pa, &a2_handle);
+ ret = sps_phy2h(pa, &a2_hdl);
if (ret) {
- IPAERR("sps_phy2h failed (A2 BAM) %d type=%d dir=%d\n",
- ret, type, dir);
- ret = -EINVAL;
- goto alloc_endpoint_failed;
+ IPAERR("sps_phy2h failed (A2 BAM) %d dir=%d type=%d\n",
+ ret, dir, type);
+ goto fail_get_a2_prop;
}
- pipe_type = (dir == IPA_BRIDGE_DIR_UL) ? IPA_UL_TO_A2 : IPA_DL_FROM_A2;
+ ipa_get_dma_pipe_num(dir, type, &dma_a2_pipe, &dma_ipa_pipe);
- sys = &bridge[type].pipe[pipe_type];
- sys->pipe = sps_alloc_endpoint();
- if (sys->pipe == NULL) {
- IPAERR("alloc endpoint failed type=%d dir=%d\n", type, dir);
+ ipa_in_params.ipa_ep_cfg = props->ipa_ep_cfg;
+ ipa_in_params.client = props->client;
+ ipa_in_params.client_bam_hdl = dma_hdl;
+ ipa_in_params.client_ep_idx = dma_ipa_pipe;
+ ipa_in_params.priv = props->priv;
+ ipa_in_params.notify = props->notify;
+ ipa_in_params.desc_fifo_sz = ipa_get_desc_fifo_sz(dir, type);
+ ipa_in_params.data_fifo_sz = ipa_get_data_fifo_sz(dir, type);
+
+ if (ipa_connect(&ipa_in_params, &sps_out_params, clnt_hdl)) {
+ IPAERR("ipa connect failed dir=%d type=%d\n", dir, type);
+ goto fail_get_a2_prop;
+ }
+
+ pipe = sps_alloc_endpoint();
+ if (pipe == NULL) {
+ IPAERR("sps_alloc_endpoint failed dir=%d type=%d\n", dir, type);
ret = -ENOMEM;
- goto alloc_endpoint_failed;
+ goto fail_sps_alloc;
}
- ret = sps_get_config(sys->pipe, &sys->connection);
+
+ memset(&_connection, 0, sizeof(_connection));
+ ret = sps_get_config(pipe, connection);
if (ret) {
- IPAERR("get config failed %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto get_config_failed;
+ IPAERR("sps_get_config failed %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config;
}
- if (dir == IPA_BRIDGE_DIR_UL) {
- sys->connection.source = SPS_DEV_HANDLE_MEM;
- sys->connection.src_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.destination = a2_handle;
- if (type == IPA_BRIDGE_TYPE_TETHERED)
- sys->connection.dest_pipe_index =
- pipe_conn.dst_pipe_index;
- else
- sys->connection.dest_pipe_index = A2_EMBEDDED_PIPE_TX;
- sys->connection.mode = SPS_MODE_DEST;
- sys->connection.options =
- SPS_O_AUTO_ENABLE | SPS_O_ACK_TRANSFERS | SPS_O_POLL;
- } else {
- sys->connection.source = a2_handle;
- if (type == IPA_BRIDGE_TYPE_TETHERED)
- sys->connection.src_pipe_index =
- pipe_conn.src_pipe_index;
- else
- sys->connection.src_pipe_index = A2_EMBEDDED_PIPE_RX;
- sys->connection.destination = SPS_DEV_HANDLE_MEM;
- sys->connection.dest_pipe_index = ipa_ctx->a5_pipe_index++;
- sys->connection.mode = SPS_MODE_SRC;
- sys->connection.options = SPS_O_AUTO_ENABLE | SPS_O_EOT |
- SPS_O_ACK_TRANSFERS;
- }
-
- sys->desc_mem_buf.size = desc_fifo_sz;
- sys->desc_mem_buf.base = dma_alloc_coherent(NULL,
- sys->desc_mem_buf.size,
- &dma_addr,
- 0);
- if (sys->desc_mem_buf.base == NULL) {
- IPAERR("memory alloc failed type=%d dir=%d\n", type, dir);
- ret = -ENOMEM;
- goto get_config_failed;
- }
- sys->desc_mem_buf.phys_base = dma_addr;
- memset(sys->desc_mem_buf.base, 0x0, sys->desc_mem_buf.size);
- sys->connection.desc = sys->desc_mem_buf;
- sys->connection.event_thresh = IPA_EVENT_THRESHOLD;
-
- ret = sps_connect(sys->pipe, &sys->connection);
- if (ret < 0) {
- IPAERR("connect error %d type=%d dir=%d\n", ret, type, dir);
- ret = -EINVAL;
- goto connect_failed;
- }
-
- INIT_LIST_HEAD(&sys->head_desc_list);
- INIT_LIST_HEAD(&sys->free_desc_list);
-
if (dir == IPA_BRIDGE_DIR_DL) {
- sys->register_event.options = SPS_O_EOT;
- sys->register_event.mode = SPS_TRIGGER_CALLBACK;
- sys->register_event.xfer_done = NULL;
- sys->register_event.callback = bam_mux_rx_notify;
- sys->register_event.user = (void *)type;
- ret = sps_register_event(sys->pipe, &sys->register_event);
- if (ret < 0) {
- IPAERR("register event error %d type=%d dir=%d\n",
- ret, type, dir);
- ret = -EINVAL;
- goto event_reg_failed;
- }
-
- for (i = 0; i < IPA_RX_POOL_CEIL; i++) {
- ret = queue_rx_single(dir, type);
- if (ret < 0)
- IPAERR("queue fail dir=%d type=%d iter=%d\n",
- dir, type, i);
- }
+ connection->mode = SPS_MODE_SRC;
+ connection->source = dma_hdl;
+ connection->destination = sps_out_params.ipa_bam_hdl;
+ connection->src_pipe_index = dma_ipa_pipe;
+ connection->dest_pipe_index = sps_out_params.ipa_ep_idx;
+ } else {
+ connection->mode = SPS_MODE_DEST;
+ connection->source = sps_out_params.ipa_bam_hdl;
+ connection->destination = dma_hdl;
+ connection->src_pipe_index = sps_out_params.ipa_ep_idx;
+ connection->dest_pipe_index = dma_ipa_pipe;
}
- sys->valid = true;
+ connection->event_thresh = IPA_EVENT_THRESHOLD;
+ connection->data = sps_out_params.data;
+ connection->desc = sps_out_params.desc;
+ connection->options = SPS_O_AUTO_ENABLE;
+
+ ret = sps_connect(pipe, connection);
+ if (ret) {
+ IPAERR("sps_connect failed %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ bridge[type].pipe[IPA_DL_TO_IPA].pipe = pipe;
+ bridge[type].pipe[IPA_DL_TO_IPA].ipa_facing = true;
+ bridge[type].pipe[IPA_DL_TO_IPA].valid = true;
+ } else {
+ bridge[type].pipe[IPA_UL_FROM_IPA].pipe = pipe;
+ bridge[type].pipe[IPA_UL_FROM_IPA].ipa_facing = true;
+ bridge[type].pipe[IPA_UL_FROM_IPA].valid = true;
+ }
+
+ IPADBG("dir=%d type=%d (ipa) src(0x%x:%u)->dst(0x%x:%u)\n", dir, type,
+ connection->source, connection->src_pipe_index,
+ connection->destination, connection->dest_pipe_index);
+
+ pipe_a2 = sps_alloc_endpoint();
+ if (pipe_a2 == NULL) {
+ IPAERR("sps_alloc_endpoint failed2 dir=%d type=%d\n", dir,
+ type);
+ ret = -ENOMEM;
+ goto fail_sps_alloc_a2;
+ }
+
+ memset(&_connection, 0, sizeof(_connection));
+ ret = sps_get_config(pipe_a2, connection);
+ if (ret) {
+ IPAERR("sps_get_config failed2 %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config_a2;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ connection->mode = SPS_MODE_DEST;
+ connection->source = a2_hdl;
+ connection->destination = dma_hdl;
+ connection->src_pipe_index = ipa_get_a2_pipe_num(dir, type);
+ connection->dest_pipe_index = dma_a2_pipe;
+ } else {
+ connection->mode = SPS_MODE_SRC;
+ connection->source = dma_hdl;
+ connection->destination = a2_hdl;
+ connection->src_pipe_index = dma_a2_pipe;
+ connection->dest_pipe_index = ipa_get_a2_pipe_num(dir, type);
+ }
+
+ connection->event_thresh = IPA_EVENT_THRESHOLD;
+
+ if (ipa_setup_a2_dma_fifos(dir, type, &connection->desc,
+ &connection->data)) {
+ IPAERR("fail to setup A2-DMA FIFOs dir=%d type=%d\n",
+ dir, type);
+ goto fail_sps_get_config_a2;
+ }
+
+ connection->options = SPS_O_AUTO_ENABLE;
+
+ ret = sps_connect(pipe_a2, connection);
+ if (ret) {
+ IPAERR("sps_connect failed2 %d dir=%d type=%d\n", ret, dir,
+ type);
+ goto fail_sps_get_config_a2;
+ }
+
+ if (dir == IPA_BRIDGE_DIR_DL) {
+ bridge[type].pipe[IPA_DL_FROM_A2].pipe = pipe_a2;
+ bridge[type].pipe[IPA_DL_FROM_A2].valid = true;
+ } else {
+ bridge[type].pipe[IPA_UL_TO_A2].pipe = pipe_a2;
+ bridge[type].pipe[IPA_UL_TO_A2].valid = true;
+ }
+
+ IPADBG("dir=%d type=%d (a2) src(0x%x:%u)->dst(0x%x:%u)\n", dir, type,
+ connection->source, connection->src_pipe_index,
+ connection->destination, connection->dest_pipe_index);
return 0;
-event_reg_failed:
- sps_disconnect(sys->pipe);
-connect_failed:
- dma_free_coherent(NULL,
- sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
-get_config_failed:
- sps_free_endpoint(sys->pipe);
-alloc_endpoint_failed:
+fail_sps_get_config_a2:
+ sps_free_endpoint(pipe_a2);
+fail_sps_alloc_a2:
+ sps_disconnect(pipe);
+fail_sps_get_config:
+ sps_free_endpoint(pipe);
+fail_sps_alloc:
+ ipa_disconnect(*clnt_hdl);
+fail_get_a2_prop:
return ret;
}
/**
- * ipa_bridge_init() - create workqueues and work items serving SW bridges
+ * ipa_bridge_init() - allocate SMEM pipe memory and init the bridge contexts
*
* Return codes: 0: success, -ENOMEM: failure
*/
int ipa_bridge_init(void)
{
- int ret;
int i;
- bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq =
- create_singlethread_workqueue("ipa_ul_teth");
- if (!bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq) {
- IPAERR("ipa ul teth wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_ul_teth;
+ ipa_ctx->smem_pipe_mem = smem_alloc(SMEM_BAM_PIPE_MEMORY,
+ IPA_SMEM_PIPE_MEM_SZ);
+ if (!ipa_ctx->smem_pipe_mem) {
+ IPAERR("smem alloc failed\n");
+ return -ENOMEM;
}
+ IPADBG("smem_pipe_mem = %p\n", ipa_ctx->smem_pipe_mem);
- bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq =
- create_singlethread_workqueue("ipa_dl_teth");
- if (!bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq) {
- IPAERR("ipa dl teth wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_dl_teth;
- }
-
- bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq =
- create_singlethread_workqueue("ipa_ul_emb");
- if (!bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq) {
- IPAERR("ipa ul emb wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_ul_emb;
- }
-
- bridge[IPA_BRIDGE_TYPE_EMBEDDED].dl_wq =
- create_singlethread_workqueue("ipa_dl_emb");
- if (!bridge[IPA_BRIDGE_TYPE_EMBEDDED].dl_wq) {
- IPAERR("ipa dl emb wq alloc failed\n");
- ret = -ENOMEM;
- goto fail_dl_emb;
- }
-
- for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++) {
- INIT_WORK(&bridge[i].ul_work, ul_work_func);
- INIT_WORK(&bridge[i].dl_work, dl_work_func);
+ for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++)
bridge[i].type = i;
- }
return 0;
-
-fail_dl_emb:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_EMBEDDED].ul_wq);
-fail_ul_emb:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_TETHERED].dl_wq);
-fail_dl_teth:
- destroy_workqueue(bridge[IPA_BRIDGE_TYPE_TETHERED].ul_wq);
-fail_ul_teth:
- return ret;
}
/**
@@ -720,66 +467,29 @@
if (props == NULL || clnt_hdl == NULL ||
type >= IPA_BRIDGE_TYPE_MAX || dir >= IPA_BRIDGE_DIR_MAX ||
- props->client >= IPA_CLIENT_MAX || props->desc_fifo_sz == 0) {
+ props->client >= IPA_CLIENT_MAX) {
IPAERR("Bad param props=%p clnt_hdl=%p type=%d dir=%d\n",
props, clnt_hdl, type, dir);
return -EINVAL;
}
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
- }
+ ipa_inc_client_enable_clks();
- if (setup_bridge_to_ipa(dir, type, props, clnt_hdl)) {
+ if (setup_dma_bam_bridge(dir, type, props, clnt_hdl)) {
IPAERR("fail to setup SYS pipe to IPA dir=%d type=%d\n",
dir, type);
ret = -EINVAL;
goto bail_ipa;
}
- if (setup_bridge_to_a2(dir, type, props->desc_fifo_sz)) {
- IPAERR("fail to setup SYS pipe to A2 dir=%d type=%d\n",
- dir, type);
- ret = -EINVAL;
- goto bail_a2;
- }
-
-
return 0;
-bail_a2:
- ipa_bridge_teardown(dir, type, *clnt_hdl);
bail_ipa:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
return ret;
}
EXPORT_SYMBOL(ipa_bridge_setup);
-static void ipa_bridge_free_pkt(struct ipa_pkt_info *pkt)
-{
- list_del(&pkt->link);
- dma_unmap_single(NULL, pkt->dma_address, IPA_RX_SKB_SIZE,
- DMA_BIDIRECTIONAL);
- kfree(pkt->buffer);
- kfree(pkt);
-}
-
-static void ipa_bridge_free_resources(struct ipa_bridge_pipe_context *pipe)
-{
- struct ipa_pkt_info *pkt;
- struct ipa_pkt_info *n;
-
- list_for_each_entry_safe(pkt, n, &pipe->head_desc_list, link)
- ipa_bridge_free_pkt(pkt);
-
- list_for_each_entry_safe(pkt, n, &pipe->free_desc_list, link)
- ipa_bridge_free_pkt(pkt);
-}
-
/**
* ipa_bridge_teardown() - teardown SW bridge leg
* @dir: downlink or uplink (from air interface perspective)
@@ -814,39 +524,18 @@
for (; lo <= hi; lo++) {
sys = &bridge[type].pipe[lo];
if (sys->valid) {
+ if (sys->ipa_facing)
+ ipa_disconnect(clnt_hdl);
sps_disconnect(sys->pipe);
- dma_free_coherent(NULL, sys->desc_mem_buf.size,
- sys->desc_mem_buf.base,
- sys->desc_mem_buf.phys_base);
sps_free_endpoint(sys->pipe);
- ipa_bridge_free_resources(sys);
sys->valid = false;
}
}
memset(&ipa_ctx->ep[clnt_hdl], 0, sizeof(struct ipa_ep_context));
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
return 0;
}
EXPORT_SYMBOL(ipa_bridge_teardown);
-
-/**
- * ipa_bridge_cleanup() - destroy workqueues serving the SW bridges
- *
- * Return codes:
- * None
- */
-void ipa_bridge_cleanup(void)
-{
- int i;
-
- for (i = 0; i < IPA_BRIDGE_TYPE_MAX; i++) {
- destroy_workqueue(bridge[i].dl_wq);
- destroy_workqueue(bridge[i].ul_wq);
- }
-}
diff --git a/drivers/platform/msm/ipa/ipa_client.c b/drivers/platform/msm/ipa/ipa_client.c
index f2f80bf..a78879d 100644
--- a/drivers/platform/msm/ipa/ipa_client.c
+++ b/drivers/platform/msm/ipa/ipa_client.c
@@ -13,8 +13,6 @@
#include <linux/delay.h>
#include "ipa_i.h"
-#define IPA_HOLB_TMR_VAL 0xff
-
static void ipa_enable_data_path(u32 clnt_hdl)
{
struct ipa_ep_context *ep = &ipa_ctx->ep[clnt_hdl];
@@ -204,9 +202,7 @@
int result = -EFAULT;
struct ipa_ep_context *ep;
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
+ ipa_inc_client_enable_clks();
if (in == NULL || sps == NULL || clnt_hdl == NULL ||
in->client >= IPA_CLIENT_MAX ||
@@ -304,13 +300,13 @@
in->client == IPA_CLIENT_HSIC3_CONS ||
in->client == IPA_CLIENT_HSIC4_CONS) {
IPADBG("disable holb for ep=%d tmr=%d\n", ipa_ep_idx,
- IPA_HOLB_TMR_VAL);
+ ipa_ctx->hol_timer);
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_HOL_BLOCK_EN_n_OFST(ipa_ep_idx),
- 0x1);
+ ipa_ctx->hol_en);
ipa_write_reg(ipa_ctx->mmio,
IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_OFST(ipa_ep_idx),
- IPA_HOLB_TMR_VAL);
+ ipa_ctx->hol_timer);
}
IPADBG("client %d (ep: %d) connected\n", in->client, ipa_ep_idx);
@@ -342,11 +338,7 @@
ipa_cfg_ep_fail:
memset(&ipa_ctx->ep[ipa_ep_idx], 0, sizeof(struct ipa_ep_context));
fail:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
-
+ ipa_dec_client_disable_clks();
return result;
}
EXPORT_SYMBOL(ipa_connect);
@@ -421,10 +413,7 @@
ipa_enable_data_path(clnt_hdl);
memset(&ipa_ctx->ep[clnt_hdl], 0, sizeof(struct ipa_ep_context));
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0) {
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
- }
+ ipa_dec_client_disable_clks();
IPADBG("client (ep: %d) disconnected\n", clnt_hdl);
diff --git a/drivers/platform/msm/ipa/ipa_debugfs.c b/drivers/platform/msm/ipa/ipa_debugfs.c
index 51a950d..fb69817 100644
--- a/drivers/platform/msm/ipa/ipa_debugfs.c
+++ b/drivers/platform/msm/ipa/ipa_debugfs.c
@@ -94,6 +94,8 @@
static struct dentry *dent;
static struct dentry *dfile_gen_reg;
static struct dentry *dfile_ep_reg;
+static struct dentry *dfile_ep_hol_en;
+static struct dentry *dfile_ep_hol_timer;
static struct dentry *dfile_hdr;
static struct dentry *dfile_ip4_rt;
static struct dentry *dfile_ip6_rt;
@@ -144,6 +146,58 @@
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
+static ssize_t ipa_write_ep_hol_en_reg(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ u32 endp_reg_val;
+ unsigned long missing;
+
+ if (sizeof(dbg_buff) < count + 1)
+ return -EFAULT;
+
+ missing = copy_from_user(dbg_buff, buf, count);
+ if (missing)
+ return -EFAULT;
+
+ dbg_buff[count] = '\0';
+ if (kstrtou32(dbg_buff, 16, &endp_reg_val))
+ return -EFAULT;
+
+ ipa_write_reg(ipa_ctx->mmio,
+ IPA_ENDP_INIT_HOL_BLOCK_EN_n_OFST(ep_reg_idx),
+ endp_reg_val);
+
+ ipa_ctx->hol_en = endp_reg_val;
+
+ return count;
+}
+
+static ssize_t ipa_write_ep_hol_timer_reg(struct file *file,
+ const char __user *buf, size_t count, loff_t *ppos)
+{
+ u32 endp_reg_val;
+ unsigned long missing;
+
+ if (sizeof(dbg_buff) < count + 1)
+ return -EFAULT;
+
+ missing = copy_from_user(dbg_buff, buf, count);
+ if (missing)
+ return -EFAULT;
+
+ dbg_buff[count] = '\0';
+ if (kstrtou32(dbg_buff, 16, &endp_reg_val))
+ return -EFAULT;
+
+ ipa_write_reg(ipa_ctx->mmio,
+ IPA_ENDP_INIT_HOL_BLOCK_TIMER_n_OFST(ep_reg_idx),
+ endp_reg_val);
+
+ ipa_ctx->hol_timer = endp_reg_val;
+
+ return count;
+}
+
static ssize_t ipa_write_ep_reg(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
@@ -556,15 +610,27 @@
"x_intr_repost=%u\n"
"rx_q_len=%u\n"
"act_clnt=%u\n"
- "con_clnt_bmap=0x%x\n",
+ "con_clnt_bmap=0x%x\n"
+ "a2_power_on_reqs_in=%u\n"
+ "a2_power_on_reqs_out=%u\n"
+ "a2_power_off_reqs_in=%u\n"
+ "a2_power_off_reqs_out=%u\n"
+ "a2_power_modem_acks=%u\n"
+ "a2_power_apps_acks=%u\n",
ipa_ctx->stats.tx_sw_pkts,
ipa_ctx->stats.tx_hw_pkts,
ipa_ctx->stats.rx_pkts,
ipa_ctx->stats.rx_repl_repost,
ipa_ctx->stats.x_intr_repost,
ipa_ctx->stats.rx_q_len,
- atomic_read(&ipa_ctx->ipa_active_clients),
- connect);
+ ipa_ctx->ipa_active_clients,
+ connect,
+ ipa_ctx->stats.a2_power_on_reqs_in,
+ ipa_ctx->stats.a2_power_on_reqs_out,
+ ipa_ctx->stats.a2_power_off_reqs_in,
+ ipa_ctx->stats.a2_power_off_reqs_out,
+ ipa_ctx->stats.a2_power_modem_acks,
+ ipa_ctx->stats.a2_power_apps_acks);
cnt += nbytes;
for (i = 0; i < MAX_NUM_EXCP; i++) {
@@ -663,6 +729,13 @@
.write = ipa_write_ep_reg,
};
+const struct file_operations ipa_ep_hol_en_ops = {
+ .write = ipa_write_ep_hol_en_reg,
+};
+const struct file_operations ipa_ep_hol_timer_ops = {
+ .write = ipa_write_ep_hol_timer_reg,
+};
+
const struct file_operations ipa_hdr_ops = {
.read = ipa_read_hdr,
};
@@ -695,6 +768,7 @@
const mode_t read_only_mode = S_IRUSR | S_IRGRP | S_IROTH;
const mode_t read_write_mode = S_IRUSR | S_IRGRP | S_IROTH |
S_IWUSR | S_IWGRP | S_IWOTH;
+ const mode_t write_only_mode = S_IWUSR | S_IWGRP | S_IWOTH;
dent = debugfs_create_dir("ipa", 0);
if (IS_ERR(dent)) {
@@ -716,6 +790,20 @@
goto fail;
}
+ dfile_ep_hol_en = debugfs_create_file("hol_en", write_only_mode, dent,
+ 0, &ipa_ep_hol_en_ops);
+ if (!dfile_ep_hol_en || IS_ERR(dfile_ep_hol_en)) {
+ IPAERR("fail to create file for debug_fs dfile_ep_hol_en\n");
+ goto fail;
+ }
+
+ dfile_ep_hol_timer = debugfs_create_file("hol_timer", write_only_mode,
+ dent, 0, &ipa_ep_hol_timer_ops);
+ if (!dfile_ep_hol_timer || IS_ERR(dfile_ep_hol_timer)) {
+ IPAERR("fail to create file for debug_fs dfile_ep_hol_timer\n");
+ goto fail;
+ }
+
dfile_hdr = debugfs_create_file("hdr", read_only_mode, dent, 0,
&ipa_hdr_ops);
if (!dfile_hdr || IS_ERR(dfile_hdr)) {
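Note (illustrative, not part of the patch): the new hol_en and hol_timer debugfs entries are write-only and parse their input as hex (kstrtou32 with base 16). The written value goes to the HOL_BLOCK_EN/TIMER register of the endpoint currently selected by ep_reg_idx and is also cached in ipa_ctx->hol_en / ipa_ctx->hol_timer, which ipa_connect() later applies to HSIC consumer pipes. Assuming the usual debugfs mount point, usage would look like "echo 1 > /sys/kernel/debug/ipa/hol_en" followed by "echo ff > /sys/kernel/debug/ipa/hol_timer".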
diff --git a/drivers/platform/msm/ipa/ipa_dp.c b/drivers/platform/msm/ipa/ipa_dp.c
index 2555c6b..228c77fe 100644
--- a/drivers/platform/msm/ipa/ipa_dp.c
+++ b/drivers/platform/msm/ipa/ipa_dp.c
@@ -433,9 +433,7 @@
struct ipa_desc *desc;
int result = 0;
- if (atomic_inc_return(&ipa_ctx->ipa_active_clients) == 1)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_enable_clks();
+ ipa_inc_client_enable_clks();
if (num_desc == 1) {
init_completion(&descr->xfer_done);
@@ -471,9 +469,7 @@
IPA_STATS_INC_IC_CNT(num_desc, descr, ipa_ctx->stats.imm_cmds);
bail:
- if (atomic_dec_return(&ipa_ctx->ipa_active_clients) == 0)
- if (ipa_ctx->ipa_hw_mode == IPA_HW_MODE_NORMAL)
- ipa_disable_clks();
+ ipa_dec_client_disable_clks();
return result;
}
@@ -1087,6 +1083,7 @@
int inactive_cycles = 0;
int cnt;
+ ipa_inc_client_enable_clks();
do {
cnt = ipa_handle_rx_core(true, true);
if (cnt == 0) {
@@ -1098,6 +1095,7 @@
} while (inactive_cycles <= POLLING_INACTIVITY);
ipa_rx_switch_to_intr_mode();
+ ipa_dec_client_disable_clks();
}
/**
diff --git a/drivers/platform/msm/ipa/ipa_flt.c b/drivers/platform/msm/ipa/ipa_flt.c
index b63b939..edb9fb1 100644
--- a/drivers/platform/msm/ipa/ipa_flt.c
+++ b/drivers/platform/msm/ipa/ipa_flt.c
@@ -368,6 +368,7 @@
return 0;
proc_err:
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
+ mem->base = NULL;
error:
return -EPERM;
@@ -456,7 +457,7 @@
if (mem->size > avail) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size, avail);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
if (ip == IPA_IP_v4) {
diff --git a/drivers/platform/msm/ipa/ipa_hdr.c b/drivers/platform/msm/ipa/ipa_hdr.c
index 7d0bc24..9618da2 100644
--- a/drivers/platform/msm/ipa/ipa_hdr.c
+++ b/drivers/platform/msm/ipa/ipa_hdr.c
@@ -89,7 +89,7 @@
if (ipa_ctx->hdr_tbl_lcl && mem->size > IPA_RAM_HDR_SIZE) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size,
IPA_RAM_HDR_SIZE);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
cmd->hdr_table_addr = mem->phys_base;
@@ -126,7 +126,7 @@
return 0;
fail_send_cmd:
- if (mem->phys_base)
+ if (mem->base)
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
fail_hw_tbl_gen:
kfree(cmd);
diff --git a/drivers/platform/msm/ipa/ipa_i.h b/drivers/platform/msm/ipa/ipa_i.h
index cb67514..cc3e630 100644
--- a/drivers/platform/msm/ipa/ipa_i.h
+++ b/drivers/platform/msm/ipa/ipa_i.h
@@ -30,7 +30,8 @@
#define IPA_COOKIE 0xfacefeed
#define IPA_NUM_PIPES 0x14
-#define IPA_SYS_DESC_FIFO_SZ (0x800)
+#define IPA_SYS_DESC_FIFO_SZ 0x800
+#define IPA_SYS_TX_DATA_DESC_FIFO_SZ 0x1000
#ifdef IPA_DEBUG
#define IPADBG(fmt, args...) \
@@ -40,10 +41,11 @@
#define IPADBG(fmt, args...)
#endif
-#define WLAN_AMPDU_TX_EP (15)
-#define WLAN_PROD_TX_EP (19)
-#define MAX_NUM_EXCP (8)
-#define MAX_NUM_IMM_CMD (17)
+#define WLAN_AMPDU_TX_EP 15
+#define WLAN_PROD_TX_EP 19
+
+#define MAX_NUM_EXCP 8
+#define MAX_NUM_IMM_CMD 17
#define IPA_STATS
@@ -531,6 +533,12 @@
u32 rx_q_len;
u32 msg_w[IPA_EVENT_MAX];
u32 msg_r[IPA_EVENT_MAX];
+ u32 a2_power_on_reqs_in;
+ u32 a2_power_on_reqs_out;
+ u32 a2_power_off_reqs_in;
+ u32 a2_power_off_reqs_out;
+ u32 a2_power_modem_acks;
+ u32 a2_power_apps_acks;
};
/**
@@ -644,7 +652,8 @@
struct ipa_mem_buffer empty_rt_tbl_mem;
struct gen_pool *pipe_mem_pool;
struct dma_pool *dma_pool;
- atomic_t ipa_active_clients;
+ struct mutex ipa_active_clients_lock;
+ int ipa_active_clients;
u32 clnt_hdl_cmd;
u32 clnt_hdl_data_in;
u32 clnt_hdl_data_out;
@@ -658,6 +667,10 @@
enum ipa_hw_mode ipa_hw_mode;
/* featurize if memory footprint becomes a concern */
struct ipa_stats stats;
+ void *smem_pipe_mem;
+ /* store HOLB configuration for WLAN TX pipes */
+ u32 hol_en;
+ u32 hol_timer;
};
/**
@@ -742,7 +755,7 @@
struct a2_mux_pipe_connection *pipe_connect);
int ipa_get_a2_mux_bam_info(u32 *a2_bam_mem_base, u32 *a2_bam_mem_size,
u32 *a2_bam_irq);
-void rmnet_bridge_get_client_handles(u32 *producer_handle,
+void teth_bridge_get_client_handles(u32 *producer_handle,
u32 *consumer_handle);
int ipa_send_one(struct ipa_sys_context *sys, struct ipa_desc *desc,
bool in_atomic);
@@ -795,6 +808,8 @@
struct ipa_context *ipa_get_ctx(void);
void ipa_enable_clks(void);
void ipa_disable_clks(void);
+void ipa_inc_client_enable_clks(void);
+void ipa_dec_client_disable_clks(void);
int __ipa_del_rt_rule(u32 rule_hdl);
int __ipa_del_hdr(u32 hdr_hdl);
int __ipa_release_hdr(u32 hdr_hdl);
diff --git a/drivers/platform/msm/ipa/ipa_rm_resource.c b/drivers/platform/msm/ipa/ipa_rm_resource.c
index 0a6771c..3615952 100644
--- a/drivers/platform/msm/ipa/ipa_rm_resource.c
+++ b/drivers/platform/msm/ipa/ipa_rm_resource.c
@@ -80,7 +80,8 @@
int result = 0;
int driver_result;
unsigned long flags;
- IPADBG("IPA RM ::ipa_rm_resource_consumer_request ENTER\n");
+ IPADBG("IPA RM ::ipa_rm_resource_consumer_request %d ENTER\n",
+ consumer->resource.name);
spin_lock_irqsave(&consumer->resource.state_lock, flags);
switch (consumer->resource.state) {
case IPA_RM_RELEASED:
@@ -114,7 +115,8 @@
consumer->usage_count++;
bail:
spin_unlock_irqrestore(&consumer->resource.state_lock, flags);
- IPADBG("IPA RM ::ipa_rm_resource_consumer_request EXIT [%d]\n", result);
+ IPADBG("IPA RM ::ipa_rm_resource_consumer_request %d EXIT %d\n",
+ consumer->resource.name, result);
return result;
}
@@ -125,7 +127,8 @@
int driver_result;
unsigned long flags;
enum ipa_rm_resource_state save_state;
- IPADBG("IPA RM ::ipa_rm_resource_consumer_release ENTER\n");
+ IPADBG("IPA RM ::ipa_rm_resource_consumer_release %d ENTER\n",
+ consumer->resource.name);
spin_lock_irqsave(&consumer->resource.state_lock, flags);
switch (consumer->resource.state) {
case IPA_RM_RELEASED:
@@ -160,7 +163,8 @@
}
bail:
spin_unlock_irqrestore(&consumer->resource.state_lock, flags);
- IPADBG("IPA RM ::ipa_rm_resource_consumer_release EXIT [%d]\n", result);
+ IPADBG("IPA RM ::ipa_rm_resource_consumer_release %d EXIT %d\n",
+ consumer->resource.name, result);
return result;
}
@@ -564,7 +568,7 @@
unsigned long flags;
struct ipa_rm_resource *consumer;
int consumer_result;
- IPADBG("IPA RM ::ipa_rm_resource_producer_request [%d] ENTER\n",
+ IPADBG("IPA RM ::ipa_rm_resource_producer_request %d ENTER\n",
producer->resource.name);
if (ipa_rm_peers_list_is_empty(producer->resource.peers_list)) {
spin_lock_irqsave(&producer->resource.state_lock, flags);
@@ -628,7 +632,8 @@
unlock_and_bail:
spin_unlock_irqrestore(&producer->resource.state_lock, flags);
bail:
- IPADBG("IPA RM ::ipa_rm_resource_producer_request EXIT[%d]\n", result);
+ IPADBG("IPA RM ::ipa_rm_resource_producer_request %d EXIT %d\n",
+ producer->resource.name, result);
return result;
}
@@ -646,7 +651,8 @@
unsigned long flags;
struct ipa_rm_resource *consumer;
int consumer_result;
- IPADBG("IPA RM ::ipa_rm_resource_producer_release ENTER\n");
+ IPADBG("IPA RM ::ipa_rm_resource_producer_release %d ENTER\n",
+ producer->resource.name);
if (ipa_rm_peers_list_is_empty(producer->resource.peers_list)) {
spin_lock_irqsave(&producer->resource.state_lock, flags);
producer->resource.state = IPA_RM_RELEASED;
@@ -702,7 +708,8 @@
return result;
bail:
spin_unlock_irqrestore(&producer->resource.state_lock, flags);
- IPADBG("IPA RM ::ipa_rm_resource_producer_release EXIT[%d]\n", result);
+ IPADBG("IPA RM ::ipa_rm_resource_producer_release %d EXIT %d\n",
+ producer->resource.name, result);
return result;
}
diff --git a/drivers/platform/msm/ipa/ipa_rt.c b/drivers/platform/msm/ipa/ipa_rt.c
index fc5f668..6430c07 100644
--- a/drivers/platform/msm/ipa/ipa_rt.c
+++ b/drivers/platform/msm/ipa/ipa_rt.c
@@ -305,6 +305,7 @@
rt_tbl_mem.base, rt_tbl_mem.phys_base);
proc_err:
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
+ mem->base = NULL;
error:
return -EPERM;
}
@@ -378,7 +379,7 @@
if (mem->size > avail) {
IPAERR("tbl too big, needed %d avail %d\n", mem->size, avail);
- goto fail_hw_tbl_gen;
+ goto fail_send_cmd;
}
if (ip == IPA_IP_v4) {
@@ -413,7 +414,7 @@
return 0;
fail_send_cmd:
- if (mem->phys_base)
+ if (mem->base)
dma_free_coherent(NULL, mem->size, mem->base, mem->phys_base);
fail_hw_tbl_gen:
kfree(cmd);
diff --git a/drivers/platform/msm/ipa/rmnet_bridge.c b/drivers/platform/msm/ipa/rmnet_bridge.c
deleted file mode 100644
index 696b363..0000000
--- a/drivers/platform/msm/ipa/rmnet_bridge.c
+++ /dev/null
@@ -1,141 +0,0 @@
-/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 and
- * only version 2 as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/export.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <mach/bam_dmux.h>
-#include <mach/ipa.h>
-#include <mach/sps.h>
-
-static struct rmnet_bridge_cb_type {
- u32 producer_handle;
- u32 consumer_handle;
- u32 ipa_producer_handle;
- u32 ipa_consumer_handle;
- bool is_connected;
-} rmnet_bridge_cb;
-
-/**
-* rmnet_bridge_init() - Initialize RmNet bridge module
-*
-* Return codes:
-* 0: success
-*/
-int rmnet_bridge_init(void)
-{
- memset(&rmnet_bridge_cb, 0, sizeof(struct rmnet_bridge_cb_type));
-
- return 0;
-}
-EXPORT_SYMBOL(rmnet_bridge_init);
-
-/**
-* rmnet_bridge_disconnect() - Disconnect RmNet bridge module
-*
-* Return codes:
-* 0: success
-* -EINVAL: invalid parameters
-*/
-int rmnet_bridge_disconnect(void)
-{
- int ret = 0;
- if (false == rmnet_bridge_cb.is_connected) {
- pr_err("%s: trying to disconnect already disconnected RmNet bridge\n",
- __func__);
- goto bail;
- }
-
- rmnet_bridge_cb.is_connected = false;
-
- ret = ipa_bridge_teardown(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_consumer_handle);
- ret = ipa_bridge_teardown(IPA_BRIDGE_DIR_UL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_producer_handle);
-bail:
- return ret;
-}
-EXPORT_SYMBOL(rmnet_bridge_disconnect);
-
-/**
-* rmnet_bridge_connect() - Connect RmNet bridge module
-* @producer_hdl: IPA producer handle
-* @consumer_hdl: IPA consumer handle
-* @wwan_logical_channel_id: WWAN logical channel ID
-*
-* Return codes:
-* 0: success
-* -EINVAL: invalid parameters
-*/
-int rmnet_bridge_connect(u32 producer_hdl,
- u32 consumer_hdl,
- int wwan_logical_channel_id)
-{
- struct ipa_sys_connect_params props;
- int ret = 0;
-
- if (true == rmnet_bridge_cb.is_connected) {
- ret = 0;
- pr_err("%s: trying to connect already connected RmNet bridge\n",
- __func__);
- goto bail;
- }
-
- rmnet_bridge_cb.consumer_handle = consumer_hdl;
- rmnet_bridge_cb.producer_handle = producer_hdl;
- rmnet_bridge_cb.is_connected = true;
-
- memset(&props, 0, sizeof(props));
- props.ipa_ep_cfg.mode.mode = IPA_DMA;
- props.ipa_ep_cfg.mode.dst = IPA_CLIENT_USB_CONS;
- props.client = IPA_CLIENT_A2_TETHERED_PROD;
- props.desc_fifo_sz = 0x800;
- /* setup notification callback if needed */
-
- ret = ipa_bridge_setup(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- &props, &rmnet_bridge_cb.ipa_consumer_handle);
- if (ret) {
- pr_err("%s: IPA DL bridge setup failure\n", __func__);
- goto bail_dl;
- }
-
- memset(&props, 0, sizeof(props));
- props.client = IPA_CLIENT_A2_TETHERED_CONS;
- props.desc_fifo_sz = 0x800;
- /* setup notification callback if needed */
-
- ret = ipa_bridge_setup(IPA_BRIDGE_DIR_UL, IPA_BRIDGE_TYPE_TETHERED,
- &props, &rmnet_bridge_cb.ipa_producer_handle);
- if (ret) {
- pr_err("%s: IPA UL bridge setup failure\n", __func__);
- goto bail_ul;
- }
- return 0;
-bail_ul:
- ipa_bridge_teardown(IPA_BRIDGE_DIR_DL, IPA_BRIDGE_TYPE_TETHERED,
- rmnet_bridge_cb.ipa_consumer_handle);
-bail_dl:
- rmnet_bridge_cb.is_connected = false;
-bail:
- return ret;
-}
-EXPORT_SYMBOL(rmnet_bridge_connect);
-
-void rmnet_bridge_get_client_handles(u32 *producer_handle,
- u32 *consumer_handle)
-{
- if (producer_handle == NULL || consumer_handle == NULL)
- return;
-
- *producer_handle = rmnet_bridge_cb.producer_handle;
- *consumer_handle = rmnet_bridge_cb.consumer_handle;
-}
diff --git a/drivers/platform/msm/ipa/teth_bridge.c b/drivers/platform/msm/ipa/teth_bridge.c
index 5b26e41..40c8fc7 100644
--- a/drivers/platform/msm/ipa/teth_bridge.c
+++ b/drivers/platform/msm/ipa/teth_bridge.c
@@ -58,6 +58,17 @@
#define TETH_AGGR_MAX_DATAGRAMS_DEFAULT 16
#define TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT (8*1024)
+#define TETH_MTU_BYTE 1500
+
+#define TETH_INACTIVITY_TIME_MSEC (1000)
+
+#define TETH_WORKQUEUE_NAME "tethering_bridge_wq"
+
+#define TETH_TOTAL_HDR_ENTRIES 6
+#define TETH_TOTAL_RT_ENTRIES_IP 3
+#define TETH_TOTAL_FLT_ENTRIES_IP 2
+#define TETH_IP_FAMILIES 2
+
struct mac_addresses_type {
u8 host_pc_mac_addr[ETH_ALEN];
bool host_pc_mac_addr_known;
@@ -68,6 +79,7 @@
struct stats {
u64 a2_to_usb_num_sw_tx_packets;
u64 usb_to_a2_num_sw_tx_packets;
+ u64 num_sw_tx_packets_during_resource_wakeup;
};
struct teth_bridge_ctx {
@@ -92,9 +104,24 @@
bool comp_hw_bridge_in_progress;
struct teth_aggr_capabilities *aggr_caps;
struct stats stats;
+ struct workqueue_struct *teth_wq;
+ u16 a2_ipa_hdr_len;
+ struct ipa_ioc_del_hdr *hdr_del;
+ struct ipa_ioc_del_rt_rule *routing_del[TETH_IP_FAMILIES];
+ struct ipa_ioc_del_flt_rule *filtering_del[TETH_IP_FAMILIES];
+};
+static struct teth_bridge_ctx *teth_ctx;
+
+enum teth_packet_direction {
+ TETH_USB_TO_A2,
+ TETH_A2_TO_USB,
};
-static struct teth_bridge_ctx *teth_ctx;
+struct teth_work {
+ struct work_struct work;
+ struct sk_buff *skb;
+ enum teth_packet_direction dir;
+};
#ifdef CONFIG_DEBUG_FS
#define TETH_MAX_MSG_LEN 512
@@ -108,6 +135,7 @@
struct ipa_ioc_add_hdr *hdrs;
struct ethhdr hdr_ipv4;
struct ethhdr hdr_ipv6;
+ int idx1;
TETH_DBG_FUNC_ENTRY();
memcpy(hdr_ipv4.h_source, src_mac_addr, ETH_ALEN);
@@ -142,6 +170,13 @@
res = ipa_add_hdr(hdrs);
if (res || hdrs->hdr[0].status || hdrs->hdr[1].status)
TETH_ERR("Header insertion failed\n");
+
+ /* Save the headers handles in order to delete them later */
+ for (idx1 = 0; idx1 < hdrs->num_hdrs; idx1++) {
+ int idx2 = teth_ctx->hdr_del->num_hdls++;
+ teth_ctx->hdr_del->hdl[idx2].hdl = hdrs->hdr[idx1].hdr_hdl;
+ }
+
kfree(hdrs);
TETH_DBG_FUNC_EXIT();
@@ -167,6 +202,7 @@
}
hdr_cfg.hdr_len = a2_ipa_hdr_len;
+ teth_ctx->a2_ipa_hdr_len = a2_ipa_hdr_len;
res = ipa_cfg_ep_hdr(teth_ctx->a2_ipa_pipe_hdl, &hdr_cfg);
if (res) {
TETH_ERR("Header removal config for A2->IPA pipe failed\n");
@@ -198,6 +234,7 @@
int res;
struct ipa_ioc_add_hdr *mbim_hdr;
u8 mbim_stream_id = 0;
+ int idx;
TETH_DBG_FUNC_ENTRY();
mbim_hdr = kzalloc(sizeof(struct ipa_ioc_add_hdr) +
@@ -221,6 +258,11 @@
} else {
TETH_DBG("Added MBIM stream ID header\n");
}
+
+ /* Save the header handle in order to delete it later */
+ idx = teth_ctx->hdr_del->num_hdls++;
+ teth_ctx->hdr_del->hdl[idx].hdl = mbim_hdr->hdr[0].hdr_hdl;
+
kfree(mbim_hdr);
TETH_DBG_FUNC_EXIT();
@@ -283,14 +325,7 @@
TETH_ERR("Configuration of header removal/insertion failed\n");
goto bail;
}
-
- res = ipa_commit_hdr();
- if (res) {
- TETH_ERR("Failed committing headers\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -304,6 +339,7 @@
struct ipa_ioc_add_rt_rule *rt_rule;
struct ipa_ioc_get_hdr hdr_info;
int res;
+ int idx;
TETH_DBG_FUNC_ENTRY();
/* Get the header handle */
@@ -330,6 +366,12 @@
res = ipa_add_rt_rule(rt_rule);
if (res || rt_rule->rules[0].status)
TETH_ERR("Failed adding routing rule\n");
+
+ /* Save the routing rule handle in order to delete it later */
+ idx = teth_ctx->routing_del[ip_address_family]->num_hdls++;
+ teth_ctx->routing_del[ip_address_family]->hdl[idx].hdl =
+ rt_rule->rules[0].rt_rule_hdl;
+
kfree(rt_rule);
TETH_DBG_FUNC_EXIT();
@@ -425,20 +467,7 @@
TETH_ERR("A2 to USB routing block configuration failed\n");
goto bail;
}
-
- /* Commit all the changes to HW in one shot */
- res = ipa_commit_rt(IPA_IP_v4);
- if (res) {
- TETH_ERR("Failed commiting IPv4 routing tables\n");
- goto bail;
- }
- res = ipa_commit_rt(IPA_IP_v6);
- if (res) {
- TETH_ERR("Failed commiting IPv6 routing tables\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -450,6 +479,7 @@
struct ipa_ioc_add_flt_rule *flt_tbl;
struct ipa_ioc_get_rt_tbl rt_tbl_info;
int res;
+ int idx;
TETH_DBG_FUNC_ENTRY();
/* Get the needed routing table handle */
@@ -480,6 +510,12 @@
res = ipa_add_flt_rule(flt_tbl);
if (res || flt_tbl->rules[0].status)
TETH_ERR("Failed adding filtering table\n");
+
+ /* Save the filtering rule handle in order to delete it later */
+ idx = teth_ctx->filtering_del[ip_address_family]->num_hdls++;
+ teth_ctx->filtering_del[ip_address_family]->hdl[idx].hdl =
+ flt_tbl->rules[0].flt_rule_hdl;
+
kfree(flt_tbl);
TETH_DBG_FUNC_EXIT();
@@ -533,20 +569,7 @@
TETH_ERR("A2_PROD filtering configuration failed\n");
goto bail;
}
-
- /* Commit all the changes to HW in one shot */
- res = ipa_commit_flt(IPA_IP_v4);
- if (res) {
- TETH_ERR("Failed commiting IPv4 filtering tables\n");
- goto bail;
- }
- res = ipa_commit_flt(IPA_IP_v6);
- if (res) {
- TETH_ERR("Failed commiting IPv6 filtering tables\n");
- goto bail;
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -578,8 +601,15 @@
return -EFAULT;
}
+ /*
+ * Due to a HW 'feature', the maximal aggregated packet size may be the
+ * requested aggr_byte_limit plus the MTU. Therefore, the MTU is
+ * subtracted from the requested aggr_byte_limit so that the requested
+ * byte limit is honored.
+ */
ipa_aggr_params->aggr_byte_limit =
- teth_aggr_params->max_transfer_size_byte / 1024;
+ (teth_aggr_params->max_transfer_size_byte - TETH_MTU_BYTE) /
+ 1024;
ipa_aggr_params->aggr_time_limit = TETH_DEFAULT_AGGR_TIME_LIMIT;
TETH_DBG_FUNC_EXIT();
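/*
 * Worked example (illustrative, using the defaults defined in this file):
 * with max_transfer_size_byte = TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT
 * (8*1024) and TETH_MTU_BYTE = 1500, aggr_byte_limit becomes
 * (8192 - 1500) / 1024 = 6 (in KB, given the division by 1024) after
 * integer division, so a 6 KB aggregate plus one extra MTU-sized packet
 * still fits within the requested 8 KB transfer size.
 */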
@@ -592,7 +622,6 @@
u32 pipe_hdl)
{
struct ipa_ep_cfg_aggr agg_params;
- struct ipa_ep_cfg_hdr hdr_params;
int res;
TETH_DBG_FUNC_ENTRY();
@@ -609,18 +638,7 @@
TETH_ERR("ipa_cfg_ep_aggr() failed\n");
goto bail;
}
-
- if (!client_is_prod) {
- memset(&hdr_params, 0, sizeof(hdr_params));
- hdr_params.hdr_len = 1;
- res = ipa_cfg_ep_hdr(pipe_hdl, &hdr_params);
- if (res) {
- TETH_ERR("ipa_cfg_ep_hdr() failed\n");
- goto bail;
- }
- }
TETH_DBG_FUNC_EXIT();
-
bail:
return res;
}
@@ -651,6 +669,19 @@
char aggr_prot_str[20];
TETH_DBG_FUNC_ENTRY();
+ if (!teth_ctx->aggr_params_known) {
+ TETH_ERR("Aggregation parameters unknown.\n");
+ return -EINVAL;
+ }
+
+	/*
+	 * Return 0 in case the pipe handles are not set yet; the
+	 * aggregation params will be applied later, once the pipes
+	 * are connected.
+	 */
+	if ((teth_ctx->usb_ipa_pipe_hdl == 0) ||
+	    (teth_ctx->ipa_usb_pipe_hdl == 0))
+		return 0;
+
if (teth_ctx->aggr_params.ul.aggr_prot == TETH_AGGR_PROTOCOL_MBIM ||
teth_ctx->aggr_params.dl.aggr_prot == TETH_AGGR_PROTOCOL_MBIM) {
res = ipa_set_aggr_mode(IPA_MBIM);
@@ -696,12 +727,26 @@
return res;
}
+static int teth_request_resource(void)
+{
+ int res;
+
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_up);
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS)
+ wait_for_completion(&teth_ctx->is_bridge_prod_up);
+ else
+ return res;
+ }
+
+ return 0;
+}
+
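/*
 * Note (illustrative): teth_request_resource() above, paired with
 * ipa_rm_inactivity_timer_release_resource(), is the pattern used
 * throughout this patch for bridged traffic: request BRIDGE_PROD before
 * touching the A2/USB pipes (blocking on the completion if the resource
 * is still waking up) and release it through the inactivity timer
 * afterwards, so that, presumably, an idle bridge is relinquished only
 * after TETH_INACTIVITY_TIME_MSEC of inactivity rather than immediately.
 */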
static void complete_hw_bridge(struct work_struct *work)
{
int res;
- static DEFINE_MUTEX(f_lock);
-
- mutex_lock(&f_lock);
TETH_DBG_FUNC_ENTRY();
TETH_DBG("Completing HW bridge in %s mode\n",
@@ -709,20 +754,15 @@
"ETHERNET" :
"IP");
- res = teth_set_aggregation();
+ res = teth_request_resource();
if (res) {
- TETH_ERR("Failed setting aggregation params\n");
+ TETH_ERR("request_resource() failed.\n");
goto bail;
}
- /*
- * Reset the Header, Routing and Filtering blocks.
- * Resetting the Header block will also reset the other blocks.
- * This reset is not comitted to HW.
- */
- res = ipa_reset_hdr();
+ res = teth_set_aggregation();
if (res) {
- TETH_ERR("Failed resetting IPA\n");
+ TETH_ERR("Failed setting aggregation params\n");
goto bail;
}
@@ -744,10 +784,20 @@
goto bail;
}
+ /*
+ * Commit all the data to HW, including header, routing and filtering
+ * blocks, IPv4 and IPv6
+ */
+ res = ipa_commit_hdr();
+ if (res) {
+ TETH_ERR("Failed committing headers / routing / filtering.\n");
+ goto bail;
+ }
+
teth_ctx->is_hw_bridge_complete = true;
- teth_ctx->comp_hw_bridge_in_progress = false;
bail:
- mutex_unlock(&f_lock);
+ teth_ctx->comp_hw_bridge_in_progress = false;
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
TETH_DBG_FUNC_EXIT();
return;
@@ -787,10 +837,78 @@
(teth_ctx->aggr_params_known)) {
INIT_WORK(&teth_ctx->comp_hw_bridge_work, complete_hw_bridge);
teth_ctx->comp_hw_bridge_in_progress = true;
- schedule_work(&teth_ctx->comp_hw_bridge_work);
+ queue_work(teth_ctx->teth_wq, &teth_ctx->comp_hw_bridge_work);
}
}
+static void teth_send_skb_work(struct work_struct *work)
+{
+ struct teth_work *work_data =
+ container_of(work, struct teth_work, work);
+ int res;
+
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+
+ switch (work_data->dir) {
+ case TETH_USB_TO_A2:
+ res = a2_mux_write(A2_MUX_TETHERED_0, work_data->skb);
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+ teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ break;
+
+ case TETH_A2_TO_USB:
+ res = ipa_tx_dp(IPA_CLIENT_USB_CONS, work_data->skb, NULL);
+ if (res) {
+ TETH_ERR("Packet send failure, dropping packet !\n");
+ goto bail;
+ }
+ teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ break;
+
+ default:
+ TETH_ERR("Unsupported direction to send !\n");
+ WARN_ON(1);
+ }
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ kfree(work_data);
+ teth_ctx->stats.num_sw_tx_packets_during_resource_wakeup++;
+
+ return;
+bail:
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ dev_kfree_skb(work_data->skb);
+ kfree(work_data);
+}
+
+static void defer_skb_send(struct sk_buff *skb, enum teth_packet_direction dir)
+{
+ struct teth_work *work = kmalloc(sizeof(struct teth_work), GFP_KERNEL);
+
+ if (!work) {
+ TETH_ERR("No mem, dropping packet\n");
+ dev_kfree_skb(skb);
+		ipa_rm_inactivity_timer_release_resource(
+			IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
+
+ /*
+ * Since IPA uses a single Rx thread, we don't
+ * want to wait for completion here
+ */
+ INIT_WORK(&work->work, teth_send_skb_work);
+ work->dir = dir;
+ work->skb = skb;
+ queue_work(teth_ctx->teth_wq, &work->work);
+}
+
static void usb_notify_cb(void *priv,
enum ipa_dp_evt_type evt,
unsigned long data)
@@ -807,13 +925,36 @@
&teth_ctx->mac_addresses.host_pc_mac_addr_known,
&teth_ctx->mac_addresses.device_mac_addr_known);
- /* Send the packet to A2, using a2_service driver API */
- teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ /*
+ * Request the BRIDGE_PROD resource, send the packet and release
+ * the resource
+ */
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS) {
+ /* The resource is waking up */
+ defer_skb_send(skb, TETH_USB_TO_A2);
+ } else {
+ TETH_ERR(
+ "Packet send failure, dropping packet !\n");
+ dev_kfree_skb(skb);
+ }
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
res = a2_mux_write(A2_MUX_TETHERED_0, skb);
if (res) {
TETH_ERR("Packet send failure, dropping packet !\n");
dev_kfree_skb(skb);
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
}
+ teth_ctx->stats.usb_to_a2_num_sw_tx_packets++;
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
break;
case IPA_WRITE_DONE:
@@ -845,13 +986,37 @@
&teth_ctx->
mac_addresses.host_pc_mac_addr_known);
- /* Send the packet to USB */
- teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ /*
+ * Request the BRIDGE_PROD resource, send the packet and release
+ * the resource
+ */
+ res = ipa_rm_inactivity_timer_request_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ if (res < 0) {
+ if (res == -EINPROGRESS) {
+ /* The resource is waking up */
+ defer_skb_send(skb, TETH_A2_TO_USB);
+ } else {
+ TETH_ERR(
+ "Packet send failure, dropping packet !\n");
+ dev_kfree_skb(skb);
+ }
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
+ }
+
res = ipa_tx_dp(IPA_CLIENT_USB_CONS, skb, NULL);
if (res) {
TETH_ERR("Packet send failure, dropping packet !\n");
dev_kfree_skb(skb);
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
+ return;
}
+ teth_ctx->stats.a2_to_usb_num_sw_tx_packets++;
+ ipa_rm_inactivity_timer_release_resource(
+ IPA_RM_RESOURCE_BRIDGE_PROD);
break;
case A2_MUX_WRITE_DONE:
@@ -870,8 +1035,28 @@
enum ipa_rm_event event,
unsigned long data)
{
+ int res;
+ struct ipa_ep_cfg ipa_ep_cfg;
+
switch (event) {
case IPA_RM_RESOURCE_GRANTED:
+ res = a2_mux_get_tethered_client_handles(
+ A2_MUX_TETHERED_0,
+ &teth_ctx->ipa_a2_pipe_hdl,
+ &teth_ctx->a2_ipa_pipe_hdl);
+ if (res) {
+ TETH_ERR(
+ "a2_mux_get_tethered_client_handles() failed, res = %d\n",
+ res);
+ return;
+ }
+
+ /* Reset the various endpoints configuration */
+ memset(&ipa_ep_cfg, 0, sizeof(ipa_ep_cfg));
+ ipa_cfg_ep(teth_ctx->ipa_a2_pipe_hdl, &ipa_ep_cfg);
+
+ ipa_ep_cfg.hdr.hdr_len = teth_ctx->a2_ipa_hdr_len;
+ ipa_cfg_ep(teth_ctx->a2_ipa_pipe_hdl, &ipa_ep_cfg);
complete(&teth_ctx->is_bridge_prod_up);
break;
@@ -915,32 +1100,34 @@
/* Build IPA Resource manager dependency graph */
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_USB_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto bail;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_A2_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_1;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_USB_PROD,
IPA_RM_RESOURCE_A2_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_2;
}
res = ipa_rm_add_dependency(IPA_RM_RESOURCE_A2_PROD,
IPA_RM_RESOURCE_USB_CONS);
- if (res && res != -EEXIST) {
+ if (res && res != -EINPROGRESS) {
TETH_ERR("ipa_rm_add_dependency() failed\n");
goto fail_add_dependency_3;
}
+ /* Return 0 as EINPROGRESS is a valid return value at this point */
+ res = 0;
goto bail;
fail_add_dependency_3:
@@ -958,6 +1145,57 @@
}
EXPORT_SYMBOL(teth_bridge_init);
+static void initialize_context(void)
+{
+ TETH_DBG_FUNC_ENTRY();
+ /* Initialize context variables */
+ teth_ctx->usb_ipa_pipe_hdl = 0;
+ teth_ctx->ipa_a2_pipe_hdl = 0;
+ teth_ctx->a2_ipa_pipe_hdl = 0;
+ teth_ctx->ipa_usb_pipe_hdl = 0;
+ teth_ctx->is_connected = false;
+
+ /* The default link protocol is Ethernet */
+ teth_ctx->link_protocol = TETH_LINK_PROTOCOL_ETHERNET;
+
+ memset(&teth_ctx->mac_addresses, 0, sizeof(teth_ctx->mac_addresses));
+ teth_ctx->is_hw_bridge_complete = false;
+ memset(&teth_ctx->aggr_params, 0, sizeof(teth_ctx->aggr_params));
+ teth_ctx->aggr_params_known = false;
+ teth_ctx->tethering_mode = 0;
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_up);
+ INIT_COMPLETION(teth_ctx->is_bridge_prod_down);
+ teth_ctx->comp_hw_bridge_in_progress = false;
+ memset(&teth_ctx->stats, 0, sizeof(teth_ctx->stats));
+ teth_ctx->a2_ipa_hdr_len = 0;
+ memset(teth_ctx->hdr_del,
+ 0,
+ sizeof(struct ipa_ioc_del_hdr) + TETH_TOTAL_HDR_ENTRIES *
+ sizeof(struct ipa_hdr_del));
+ memset(teth_ctx->routing_del[IPA_IP_v4],
+ 0,
+ sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP * sizeof(struct ipa_rt_rule_del));
+ teth_ctx->routing_del[IPA_IP_v4]->ip = IPA_IP_v4;
+ memset(teth_ctx->routing_del[IPA_IP_v6],
+ 0,
+ sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP * sizeof(struct ipa_rt_rule_del));
+ teth_ctx->routing_del[IPA_IP_v6]->ip = IPA_IP_v6;
+ memset(teth_ctx->filtering_del[IPA_IP_v4],
+ 0,
+ sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP * sizeof(struct ipa_flt_rule_del));
+ teth_ctx->filtering_del[IPA_IP_v4]->ip = IPA_IP_v4;
+ memset(teth_ctx->filtering_del[IPA_IP_v6],
+ 0,
+ sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP * sizeof(struct ipa_flt_rule_del));
+ teth_ctx->filtering_del[IPA_IP_v6]->ip = IPA_IP_v6;
+
+ TETH_DBG_FUNC_EXIT();
+}
+
/**
* teth_bridge_disconnect() - Disconnect tethering bridge module
*
@@ -967,38 +1205,82 @@
*/
int teth_bridge_disconnect(void)
{
- int res = -EPERM;
+ int res;
TETH_DBG_FUNC_ENTRY();
if (!teth_ctx->is_connected) {
TETH_ERR(
- "Trying to disconnect an already disconnected bridge\n");
+ "Trying to disconnect an already disconnected bridge\n");
+ goto bail;
+ }
+
+ /* Request the BRIDGE_PROD resource */
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("request_resource() failed.\n");
goto bail;
}
teth_ctx->is_connected = false;
- res = ipa_rm_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
- if (res == -EINPROGRESS)
- wait_for_completion(&teth_ctx->is_bridge_prod_down);
+ /* Close the channel to A2 */
+ if (a2_mux_close_channel(A2_MUX_TETHERED_0))
+ TETH_ERR("a2_mux_close_channel() failed\n");
- /* Initialize statistics */
- memset(&teth_ctx->stats, 0, sizeof(teth_ctx->stats));
+ if (teth_ctx->is_hw_bridge_complete) {
+ /* Delete header entries */
+ if (ipa_del_hdr(teth_ctx->hdr_del))
+ TETH_ERR("ipa_del_hdr() failed\n");
+
+ /* Delete installed routing rules */
+ if (ipa_del_rt_rule(teth_ctx->routing_del[IPA_IP_v4]))
+ TETH_ERR("ipa_del_rt_rule() failed\n");
+ if (ipa_del_rt_rule(teth_ctx->routing_del[IPA_IP_v6]))
+ TETH_ERR("ipa_del_rt_rule() failed\n");
+
+ /* Delete installed filtering rules */
+ if (ipa_del_flt_rule(teth_ctx->filtering_del[IPA_IP_v4]))
+ TETH_ERR("ipa_del_flt_rule() failed\n");
+ if (ipa_del_flt_rule(teth_ctx->filtering_del[IPA_IP_v6]))
+ TETH_ERR("ipa_del_flt_rule() failed\n");
+
+ /*
+ * Commit all the data to HW, including header, routing and
+ * filtering blocks, IPv4 and IPv6
+ */
+ if (ipa_commit_hdr())
+ TETH_ERR("Failed committing headers\n");
+ }
+
+ initialize_context();
+
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
/* Delete IPA Resource manager dependency graph */
res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
IPA_RM_RESOURCE_USB_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
- IPA_RM_RESOURCE_A2_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
- IPA_RM_RESOURCE_A2_CONS);
- res |= ipa_rm_delete_dependency(IPA_RM_RESOURCE_A2_PROD,
- IPA_RM_RESOURCE_USB_CONS);
- if (res)
- TETH_ERR("Failed deleting ipa_rm dependency.\n");
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency BRIDGE_PROD <-> USB_CONS\n");
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_BRIDGE_PROD,
+ IPA_RM_RESOURCE_A2_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency BRIDGE_PROD <-> A2_CONS\n");
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_USB_PROD,
+ IPA_RM_RESOURCE_A2_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency USB_PROD <-> A2_CONS\n");
+ res = ipa_rm_delete_dependency(IPA_RM_RESOURCE_A2_PROD,
+ IPA_RM_RESOURCE_USB_CONS);
+ if ((res != 0) && (res != -EINPROGRESS))
+ TETH_ERR(
+ "Failed deleting ipa_rm dependency A2_PROD <-> USB_CONS\n");
bail:
TETH_DBG_FUNC_EXIT();
- return res;
+
+ return 0;
}
EXPORT_SYMBOL(teth_bridge_disconnect);
@@ -1032,12 +1314,10 @@
teth_ctx->usb_ipa_pipe_hdl = connect_params->usb_ipa_pipe_hdl;
teth_ctx->tethering_mode = connect_params->tethering_mode;
- res = ipa_rm_request_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
- if (res < 0) {
- if (res == -EINPROGRESS)
- wait_for_completion(&teth_ctx->is_bridge_prod_up);
- else
- goto bail;
+ res = teth_request_resource();
+ if (res) {
+ TETH_ERR("request_resource() failed.\n");
+ goto bail;
}
res = a2_mux_open_channel(A2_MUX_TETHERED_0,
@@ -1068,10 +1348,28 @@
if (teth_ctx->tethering_mode == TETH_TETHERING_MODE_MBIM)
teth_ctx->link_protocol = TETH_LINK_PROTOCOL_IP;
- TETH_DBG_FUNC_EXIT();
+
+ if (teth_ctx->aggr_params_known) {
+ res = teth_set_aggregation();
+ if (res) {
+ TETH_ERR("Failed setting aggregation params\n");
+ goto bail;
+ }
+ }
+
+ /* In case of IP link protocol, complete HW bridge */
+ if ((teth_ctx->link_protocol == TETH_LINK_PROTOCOL_IP) &&
+ (!teth_ctx->comp_hw_bridge_in_progress) &&
+ (teth_ctx->aggr_params_known) &&
+ (!teth_ctx->is_hw_bridge_complete)) {
+ INIT_WORK(&teth_ctx->comp_hw_bridge_work, complete_hw_bridge);
+ teth_ctx->comp_hw_bridge_in_progress = true;
+ queue_work(teth_ctx->teth_wq, &teth_ctx->comp_hw_bridge_work);
+ }
bail:
- if (res)
- ipa_rm_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ ipa_rm_inactivity_timer_release_resource(IPA_RM_RESOURCE_BRIDGE_PROD);
+ TETH_DBG_FUNC_EXIT();
+
return res;
}
EXPORT_SYMBOL(teth_bridge_connect);
@@ -1097,11 +1395,33 @@
{
int res;
+ TETH_DBG_FUNC_ENTRY();
if (!aggr_params) {
TETH_ERR("Invalid parameter\n");
return -EINVAL;
}
+ /*
+	 * In case the requested max transfer size is larger than 8K, set it
+	 * to the default 8K
+ */
+ if (aggr_params->dl.max_transfer_size_byte >
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT)
+ aggr_params->dl.max_transfer_size_byte =
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT;
+ if (aggr_params->ul.max_transfer_size_byte >
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT)
+ aggr_params->ul.max_transfer_size_byte =
+ TETH_AGGR_MAX_AGGR_PACKET_SIZE_DEFAULT;
+
+ /* Ethernet link protocol and MBIM aggregation is not supported */
+ if (teth_ctx->link_protocol == TETH_LINK_PROTOCOL_ETHERNET &&
+ (aggr_params->dl.aggr_prot == TETH_AGGR_PROTOCOL_MBIM ||
+ aggr_params->ul.aggr_prot == TETH_AGGR_PROTOCOL_MBIM)) {
+ TETH_ERR("Ethernet with MBIM is not supported.\n");
+ return -EINVAL;
+ }
+
memcpy(&teth_ctx->aggr_params,
aggr_params,
sizeof(struct teth_aggr_params));
@@ -1110,10 +1430,9 @@
teth_ctx->aggr_params_known = true;
res = teth_set_aggregation();
- if (res) {
+ if (res)
TETH_ERR("Failed setting aggregation params\n");
- res = -EFAULT;
- }
+ TETH_DBG_FUNC_EXIT();
return res;
}
@@ -1153,6 +1472,19 @@
}
res = teth_bridge_set_aggr_params(&aggr_params);
+ if (res)
+ break;
+
+ /* In case of IP link protocol, complete HW bridge */
+ if ((teth_ctx->link_protocol == TETH_LINK_PROTOCOL_IP) &&
+ (!teth_ctx->comp_hw_bridge_in_progress) &&
+ (!teth_ctx->is_hw_bridge_complete)) {
+ INIT_WORK(&teth_ctx->comp_hw_bridge_work,
+ complete_hw_bridge);
+ teth_ctx->comp_hw_bridge_in_progress = true;
+ queue_work(teth_ctx->teth_wq,
+ &teth_ctx->comp_hw_bridge_work);
+ }
break;
case TETH_BRIDGE_IOC_GET_AGGR_PARAMS:
@@ -1208,7 +1540,7 @@
return res;
}
-static void set_aggr_capabilities(void)
+static int set_aggr_capabilities(void)
{
u16 NUM_PROTOCOLS = 2;
@@ -1216,9 +1548,9 @@
NUM_PROTOCOLS *
sizeof(struct teth_aggr_params_link),
GFP_KERNEL);
- if (teth_ctx->aggr_caps == NULL) {
+ if (!teth_ctx->aggr_caps) {
TETH_ERR("Memory alloc failed for aggregation capabilities.\n");
- return;
+ return -ENOMEM;
}
teth_ctx->aggr_caps->num_protocols = NUM_PROTOCOLS;
@@ -1228,8 +1560,15 @@
teth_ctx->aggr_caps->prot_caps[1].aggr_prot = TETH_AGGR_PROTOCOL_TLP;
set_aggr_default_params(&teth_ctx->aggr_caps->prot_caps[1]);
+
+ return 0;
}
+/**
+* teth_bridge_get_client_handles() - Get USB <--> IPA pipe handles
+* @producer_handle: USB --> IPA pipe handle
+* @consumer_handle: IPA --> USB pipe handle
+*/
void teth_bridge_get_client_handles(u32 *producer_handle,
u32 *consumer_handle)
{
@@ -1397,6 +1736,11 @@
TETH_MAX_MSG_LEN - nbytes,
"A2 to USB SW Tx packets: %lld\n",
teth_ctx->stats.a2_to_usb_num_sw_tx_packets);
+ nbytes += scnprintf(
+ &dbg_buff[nbytes],
+ TETH_MAX_MSG_LEN - nbytes,
+ "SW Tx packets sent during resource wakeup: %lld\n",
+ teth_ctx->stats.num_sw_tx_packets_during_resource_wakeup);
return simple_read_from_buffer(ubuf, count, ppos, dbg_buff, nbytes);
}
@@ -1523,7 +1867,59 @@
return -ENOMEM;
}
- set_aggr_capabilities();
+ res = set_aggr_capabilities();
+ if (res) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_aggr_caps;
+ }
+
+ res = -ENOMEM;
+ teth_ctx->hdr_del = kzalloc(sizeof(struct ipa_ioc_del_hdr) +
+ TETH_TOTAL_HDR_ENTRIES *
+ sizeof(struct ipa_hdr_del),
+ GFP_KERNEL);
+ if (!teth_ctx->hdr_del) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_hdr_del;
+ }
+
+ teth_ctx->routing_del[IPA_IP_v4] =
+ kzalloc(sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP *
+ sizeof(struct ipa_rt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->routing_del[IPA_IP_v4]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_routing_del_ipv4;
+ }
+ teth_ctx->routing_del[IPA_IP_v6] =
+ kzalloc(sizeof(struct ipa_ioc_del_rt_rule) +
+ TETH_TOTAL_RT_ENTRIES_IP *
+ sizeof(struct ipa_rt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->routing_del[IPA_IP_v6]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_routing_del_ipv6;
+ }
+
+ teth_ctx->filtering_del[IPA_IP_v4] =
+ kzalloc(sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP *
+ sizeof(struct ipa_flt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->filtering_del[IPA_IP_v4]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_filtering_del_ipv4;
+ }
+ teth_ctx->filtering_del[IPA_IP_v6] =
+ kzalloc(sizeof(struct ipa_ioc_del_flt_rule) +
+ TETH_TOTAL_FLT_ENTRIES_IP *
+ sizeof(struct ipa_flt_rule_del),
+ GFP_KERNEL);
+ if (!teth_ctx->filtering_del[IPA_IP_v6]) {
+ TETH_ERR("kzalloc err.\n");
+ goto fail_alloc_filtering_del_ipv6;
+ }
teth_ctx->class = class_create(THIS_MODULE, TETH_BRIDGE_DRV_NAME);
@@ -1554,8 +1950,6 @@
goto fail_cdev_add;
}
- teth_ctx->comp_hw_bridge_in_progress = false;
-
teth_debugfs_init();
/* Create BRIDGE_PROD entity in IPA Resource Manager */
@@ -1570,9 +1964,21 @@
init_completion(&teth_ctx->is_bridge_prod_up);
init_completion(&teth_ctx->is_bridge_prod_down);
- /* The default link protocol is Ethernet */
- teth_ctx->link_protocol = TETH_LINK_PROTOCOL_ETHERNET;
+ res = ipa_rm_inactivity_timer_init(IPA_RM_RESOURCE_BRIDGE_PROD,
+ TETH_INACTIVITY_TIME_MSEC);
+ if (res) {
+ TETH_ERR("ipa_rm_inactivity_timer_init() failed, res=%d\n",
+ res);
+ goto fail_cdev_add;
+ }
+ teth_ctx->teth_wq = create_workqueue(TETH_WORKQUEUE_NAME);
+ if (!teth_ctx->teth_wq) {
+ TETH_ERR("workqueue creation failed\n");
+ goto fail_cdev_add;
+ }
+
+ initialize_context();
TETH_DBG("Tethering bridge driver init OK\n");
return 0;
@@ -1581,7 +1987,18 @@
fail_device_create:
unregister_chrdev_region(teth_ctx->dev_num, 1);
fail_alloc_chrdev_region:
+ kfree(teth_ctx->filtering_del[IPA_IP_v6]);
+fail_alloc_filtering_del_ipv6:
+ kfree(teth_ctx->filtering_del[IPA_IP_v4]);
+fail_alloc_filtering_del_ipv4:
+ kfree(teth_ctx->routing_del[IPA_IP_v6]);
+fail_alloc_routing_del_ipv6:
+ kfree(teth_ctx->routing_del[IPA_IP_v4]);
+fail_alloc_routing_del_ipv4:
+ kfree(teth_ctx->hdr_del);
+fail_alloc_hdr_del:
kfree(teth_ctx->aggr_caps);
+fail_alloc_aggr_caps:
kfree(teth_ctx);
teth_ctx = NULL;
diff --git a/drivers/power/battery_current_limit.c b/drivers/power/battery_current_limit.c
index ecda153..69fa4a8 100644
--- a/drivers/power/battery_current_limit.c
+++ b/drivers/power/battery_current_limit.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -120,8 +120,10 @@
if (psy == NULL) {
psy = power_supply_get_by_name("battery");
- if (psy == NULL)
+ if (psy == NULL) {
+ pr_err("failed to get ps battery\n");
return;
+ }
}
if (psy->get_property(psy, POWER_SUPPLY_PROP_CURRENT_NOW, &ret))
@@ -143,6 +145,7 @@
gbcl->bcl_imax_ma = imax_ma;
gbcl->bcl_vbat_mv = vbatt_mv;
+ pr_debug("ibatt %d, imax %d, vbatt %d\n", ibatt_ma, imax_ma, vbatt_mv);
if (gbcl->bcl_threshold_mode[BCL_IBAT_IMAX_THRESHOLD_TYPE_HIGH]
== BCL_IBAT_IMAX_THRESHOLD_ENABLED) {
imax_high_threshold =
@@ -179,8 +182,7 @@
bcl_calculate_imax_trigger();
/* restart the delay work for caculating imax */
schedule_delayed_work(&bcl->bcl_imax_work,
- round_jiffies_relative(msecs_to_jiffies
- (bcl->bcl_poll_interval_msec)));
+ msecs_to_jiffies(bcl->bcl_poll_interval_msec));
}
}
diff --git a/drivers/power/bq28400_battery.c b/drivers/power/bq28400_battery.c
index beab4e2..79ed2b1 100644
--- a/drivers/power/bq28400_battery.c
+++ b/drivers/power/bq28400_battery.c
@@ -164,6 +164,9 @@
u16 subcmd;
};
+static int fake_battery = -EINVAL;
+module_param(fake_battery, int, 0644);
+
#define BQ28400_DEBUG_REG(x) {#x, SBS_##x, 0}
#define BQ28400_DEBUG_SUBREG(x, y) {#y, SBS_##x, SUBCMD_##y}
@@ -403,6 +406,11 @@
{
int percentage = 0;
+ if (fake_battery != -EINVAL) {
+ pr_debug("Reporting Fake SOC = %d\n", fake_battery);
+ return fake_battery;
+ }
+
/* This register is only 1 byte */
percentage = i2c_smbus_read_byte_data(client, SBS_RSOC);
diff --git a/drivers/power/pm8921-charger.c b/drivers/power/pm8921-charger.c
index 459ab1d..1ad7f21 100644
--- a/drivers/power/pm8921-charger.c
+++ b/drivers/power/pm8921-charger.c
@@ -83,6 +83,7 @@
#define CHG_COMP_OVR 0x20A
#define IUSB_FINE_RES 0x2B6
#define OVP_USB_UVD 0x2B7
+#define PM8921_USB_TRIM_SEL 0x339
/* check EOC every 10 seconds */
#define EOC_CHECK_PERIOD_MS 10000
@@ -213,6 +214,8 @@
* @alarm_high_mv: the battery alarm voltage high
* @cool_temp_dc: the cool temp threshold in deciCelcius
* @warm_temp_dc: the warm temp threshold in deciCelcius
+ * @hysteresis_temp_dc: the hysteresis between temp thresholds in
+ * deciCelcius
* @resume_voltage_delta: the voltage delta from vdd max at which the
* battery should resume charging
* @term_current: The charging based term current
@@ -235,6 +238,7 @@
unsigned int alarm_high_mv;
int cool_temp_dc;
int warm_temp_dc;
+ int hysteresis_temp_dc;
unsigned int temp_check_period;
unsigned int cool_bat_chg_current;
unsigned int warm_bat_chg_current;
@@ -308,6 +312,7 @@
static int thermal_mitigation;
static struct pm8921_chg_chip *the_chip;
+static void check_temp_thresholds(struct pm8921_chg_chip *chip);
#define LPM_ENABLE_BIT BIT(2)
static int pm8921_chg_set_lpm(struct pm8921_chg_chip *chip, int enable)
@@ -773,6 +778,44 @@
};
/* USB Trim tables */
+static int usb_trim_pm8921_table_1[USB_TRIM_ENTRIES] = {
+ 0x0,
+ 0x0,
+ -0x5,
+ 0x0,
+ -0x7,
+ 0x0,
+ -0x9,
+ -0xA,
+ 0x0,
+ 0x0,
+ -0xE,
+ 0x0,
+ -0xF,
+ 0x0,
+ -0x10,
+ 0x0
+};
+
+static int usb_trim_pm8921_table_2[USB_TRIM_ENTRIES] = {
+ 0x0,
+ 0x0,
+ -0x2,
+ 0x0,
+ -0x4,
+ 0x0,
+ -0x4,
+ -0x5,
+ 0x0,
+ 0x0,
+ -0x6,
+ 0x0,
+ -0x6,
+ 0x0,
+ -0x6,
+ 0x0
+};
+
static int usb_trim_8038_table[USB_TRIM_ENTRIES] = {
0x0,
0x0,
@@ -840,6 +883,8 @@
#define REG_USB_OVP_TRIM_ORIG_MSB 0x09C
#define REG_USB_OVP_TRIM_PM8917 0x2B5
#define REG_USB_OVP_TRIM_PM8917_BIT BIT(0)
+#define USB_TRIM_MAX_DATA_PM8917 0x3F
+#define USB_TRIM_POLARITY_PM8917_BIT BIT(6)
static int pm_chg_usb_trim(struct pm8921_chg_chip *chip, int index)
{
u8 temp, sbi_config, msb, lsb, mask;
@@ -3162,6 +3207,22 @@
struct delayed_work *dwork = to_delayed_work(work);
struct pm8921_chg_chip *chip = container_of(dwork,
struct pm8921_chg_chip, update_heartbeat_work);
+ bool chg_present = chip->usb_present || chip->dc_present;
+
+ /* for battery health when charger is not connected */
+ if (chip->btc_override && !chg_present)
+ schedule_delayed_work(&chip->btc_override_work,
+ round_jiffies_relative(msecs_to_jiffies
+ (chip->btc_delay_ms)));
+
+ /*
+	 * Check temp thresholds when the charger is present and
+	 * the battery is FULL. The temperature here can impact
+ * the charging restart conditions.
+ */
+ if (chip->btc_override && chg_present &&
+ !wake_lock_active(&chip->eoc_wake_lock))
+ check_temp_thresholds(chip);
power_supply_changed(&chip->batt_psy);
if (chip->recent_reported_soc <= 20)
@@ -3317,7 +3378,7 @@
if (chip->warm_temp_dc != INT_MIN) {
if (chip->is_bat_warm
- && temp < chip->warm_temp_dc - TEMP_HYSTERISIS_DECIDEGC)
+ && temp < chip->warm_temp_dc - chip->hysteresis_temp_dc)
battery_warm(false);
else if (!chip->is_bat_warm && temp >= chip->warm_temp_dc)
battery_warm(true);
@@ -3325,7 +3386,7 @@
if (chip->cool_temp_dc != INT_MIN) {
if (chip->is_bat_cool
- && temp > chip->cool_temp_dc + TEMP_HYSTERISIS_DECIDEGC)
+ && temp > chip->cool_temp_dc + chip->hysteresis_temp_dc)
battery_cool(false);
else if (!chip->is_bat_cool && temp <= chip->cool_temp_dc)
battery_cool(true);
@@ -3543,7 +3604,8 @@
temp = pm_chg_get_rt_status(chip, BATTTEMP_HOT_IRQ);
if (temp) {
- if (decidegc < chip->btc_override_hot_decidegc)
+ if (decidegc < chip->btc_override_hot_decidegc -
+ chip->hysteresis_temp_dc)
/* stop forcing batt hot */
rc = pm_chg_override_hot(chip, 0);
if (rc)
@@ -3558,7 +3620,8 @@
temp = pm_chg_get_rt_status(chip, BATTTEMP_COLD_IRQ);
if (temp) {
- if (decidegc > chip->btc_override_cold_decidegc)
+ if (decidegc > chip->btc_override_cold_decidegc +
+ chip->hysteresis_temp_dc)
/* stop forcing batt cold */
rc = pm_chg_override_cold(chip, 0);
if (rc)
@@ -3622,7 +3685,8 @@
end = is_charging_finished(chip, vbat_batt_terminal_uv, ichg_meas_ma);
- if (end == CHG_NOT_IN_PROGRESS) {
+ if (end == CHG_NOT_IN_PROGRESS && (!chip->btc_override ||
+ !(chip->usb_present || chip->dc_present))) {
count = 0;
goto eoc_worker_stop;
}
@@ -3650,7 +3714,8 @@
chgdone_irq_handler(chip->pmic_chg_irq[CHGDONE_IRQ], chip);
} else {
check_temp_thresholds(chip);
- adjust_vdd_max_for_fastchg(chip, vbat_batt_terminal_uv);
+ if (end != CHG_NOT_IN_PROGRESS)
+ adjust_vdd_max_for_fastchg(chip, vbat_batt_terminal_uv);
pr_debug("EOC count = %d\n", count);
schedule_delayed_work(&chip->eoc_work,
round_jiffies_relative(msecs_to_jiffies
@@ -3659,9 +3724,9 @@
}
eoc_worker_stop:
- wake_unlock(&chip->eoc_wake_lock);
/* set the vbatdet back, in case it was changed to trigger charging */
set_appropriate_vbatdet(chip);
+ wake_unlock(&chip->eoc_wake_lock);
}
/**
@@ -3781,11 +3846,14 @@
}
}
+#define PM8921_USB_TRIM_SEL_BIT BIT(6)
/* determines the initial present states */
static void __devinit determine_initial_state(struct pm8921_chg_chip *chip)
{
int fsm_state;
int is_fast_chg;
+ int rc = 0;
+ u8 trim_sel_reg = 0, regsbi;
chip->dc_present = !!is_dc_chg_plugged_in(chip);
chip->usb_present = !!is_usb_chg_plugged_in(chip);
@@ -3795,6 +3863,11 @@
schedule_delayed_work(&chip->unplug_check_work,
msecs_to_jiffies(UNPLUG_CHECK_WAIT_PERIOD_MS));
pm8921_chg_enable_irq(chip, CHG_GONE_IRQ);
+
+ if (chip->btc_override)
+ schedule_delayed_work(&chip->btc_override_work,
+ round_jiffies_relative(msecs_to_jiffies
+ (chip->btc_delay_ms)));
}
pm8921_chg_enable_irq(chip, DCIN_VALID_IRQ);
@@ -3843,10 +3916,26 @@
fsm_state);
/* Determine which USB trim column to use */
- if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8917)
+ if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8917) {
chip->usb_trim_table = usb_trim_8917_table;
- else if (pm8xxx_get_version(chip->dev->parent) == PM8XXX_VERSION_8038)
+ } else if (pm8xxx_get_version(chip->dev->parent) ==
+ PM8XXX_VERSION_8038) {
chip->usb_trim_table = usb_trim_8038_table;
+ } else if (pm8xxx_get_version(chip->dev->parent) ==
+ PM8XXX_VERSION_8921) {
+		rc = pm8xxx_readb(chip->dev->parent, REG_SBI_CONFIG, &regsbi);
+ rc |= pm8xxx_writeb(chip->dev->parent, REG_SBI_CONFIG, 0x5E);
+ rc |= pm8xxx_readb(chip->dev->parent, PM8921_USB_TRIM_SEL,
+ &trim_sel_reg);
+ rc |= pm8xxx_writeb(chip->dev->parent, REG_SBI_CONFIG, regsbi);
+ if (rc)
+ pr_err("Failed to read trim sel register rc=%d\n", rc);
+
+ if (trim_sel_reg & PM8921_USB_TRIM_SEL_BIT)
+ chip->usb_trim_table = usb_trim_pm8921_table_1;
+ else
+ chip->usb_trim_table = usb_trim_pm8921_table_2;
+ }
}
struct pm_chg_irq_init_data {
@@ -4600,6 +4689,8 @@
is_usb_chg_plugged_in(the_chip)))
schedule_delayed_work(&chip->btc_override_work, 0);
+ schedule_delayed_work(&chip->update_heartbeat_work, 0);
+
return 0;
}
@@ -4607,6 +4698,8 @@
{
struct pm8921_chg_chip *chip = dev_get_drvdata(dev);
+ cancel_delayed_work_sync(&chip->update_heartbeat_work);
+
if (chip->btc_override)
cancel_delayed_work_sync(&chip->btc_override_work);
@@ -4663,6 +4756,11 @@
else
chip->warm_temp_dc = INT_MIN;
+ if (pdata->hysteresis_temp)
+ chip->hysteresis_temp_dc = pdata->hysteresis_temp * 10;
+ else
+ chip->hysteresis_temp_dc = TEMP_HYSTERISIS_DECIDEGC;
+
chip->temp_check_period = pdata->temp_check_period;
chip->max_bat_chg_current = pdata->max_bat_chg_current;
/* Assign to corresponding module parameter */
diff --git a/drivers/power/qpnp-charger.c b/drivers/power/qpnp-charger.c
index da0a5b6..6a2ce8d 100644
--- a/drivers/power/qpnp-charger.c
+++ b/drivers/power/qpnp-charger.c
@@ -85,7 +85,11 @@
#define CHGR_BUCK_BCK_VBAT_REG_MODE 0x74
#define MISC_REVISION2 0x01
#define USB_OVP_CTL 0x42
+#define USB_CHG_GONE_REV_BST 0xED
+#define BUCK_VCHG_OV 0x77
+#define BUCK_TEST_SMBC_MODES 0xE6
#define SEC_ACCESS 0xD0
+#define BAT_IF_VREF_BAT_THM_CTRL 0x4A
#define REG_OFFSET_PERP_SUBTYPE 0x05
/* SMBB peripheral subtype values */
@@ -120,6 +124,8 @@
#define CHGR_ON_BAT_FORCE_BIT BIT(0)
#define USB_VALID_DEB_20MS 0x03
#define BUCK_VBAT_REG_NODE_SEL_BIT BIT(0)
+#define VREF_BATT_THERM_FORCE_ON 0xC0
+#define VREF_BAT_THM_ENABLED_FSM 0x80
/* Interrupt definitions */
/* smbb_chg_interrupts */
@@ -232,6 +238,7 @@
u16 freq_base;
unsigned int usbin_valid_irq;
unsigned int dcin_valid_irq;
+ unsigned int chg_gone_irq;
unsigned int chg_fastchg_irq;
unsigned int chg_trklchg_irq;
unsigned int chg_failed_irq;
@@ -272,6 +279,7 @@
uint32_t flags;
struct qpnp_adc_tm_btm_param adc_param;
struct work_struct adc_measure_work;
+ struct delayed_work arb_stop_work;
};
static struct of_device_id qpnp_charger_match_table[] = {
@@ -524,7 +532,41 @@
enable ? USB_SUSPEND_BIT : 0, 1);
}
-static void qpnp_bat_if_adc_measure_work(struct work_struct *work)
+static int
+qpnp_chg_charge_en(struct qpnp_chg_chip *chip, int enable)
+{
+ return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
+ CHGR_CHG_EN,
+ enable ? CHGR_CHG_EN : 0, 1);
+}
+
+static int
+qpnp_chg_force_run_on_batt(struct qpnp_chg_chip *chip, int disable)
+{
+ /* Don't run on battery for batteryless hardware */
+ if (chip->use_default_batt_values)
+ return 0;
+
+ /* This bit forces the charger to run off of the battery rather
+ * than a connected charger */
+ return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
+ CHGR_ON_BAT_FORCE_BIT,
+ disable ? CHGR_ON_BAT_FORCE_BIT : 0, 1);
+}
+
+static void
+qpnp_arb_stop_work(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct qpnp_chg_chip *chip = container_of(dwork,
+ struct qpnp_chg_chip, arb_stop_work);
+
+ qpnp_chg_charge_en(chip, !chip->charging_disabled);
+ qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+}
+
+static void
+qpnp_bat_if_adc_measure_work(struct work_struct *work)
{
struct qpnp_chg_chip *chip = container_of(work,
struct qpnp_chg_chip, adc_measure_work);
@@ -533,6 +575,23 @@
pr_err("request ADC error\n");
}
+#define ARB_STOP_WORK_MS 1000
+static irqreturn_t
+qpnp_chg_usb_chg_gone_irq_handler(int irq, void *_chip)
+{
+ struct qpnp_chg_chip *chip = _chip;
+
+ pr_debug("chg_gone triggered\n");
+ if (qpnp_chg_is_usb_chg_plugged_in(chip)) {
+ qpnp_chg_charge_en(chip, 0);
+ qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+ schedule_delayed_work(&chip->arb_stop_work,
+ msecs_to_jiffies(ARB_STOP_WORK_MS));
+ }
+
+ return IRQ_HANDLED;
+}
+
#define ENUM_T_STOP_BIT BIT(0)
static irqreturn_t
qpnp_chg_usb_usbin_valid_irq_handler(int irq, void *_chip)
@@ -552,7 +611,8 @@
if (chip->usb_present ^ usb_present) {
chip->usb_present = usb_present;
if (!usb_present)
- qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
+ qpnp_chg_usb_suspend_enable(chip, 1);
+
power_supply_set_present(chip->usb_psy,
chip->usb_present);
}
@@ -661,28 +721,6 @@
}
static int
-qpnp_chg_charge_en(struct qpnp_chg_chip *chip, int enable)
-{
- return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
- CHGR_CHG_EN,
- enable ? CHGR_CHG_EN : 0, 1);
-}
-
-static int
-qpnp_chg_force_run_on_batt(struct qpnp_chg_chip *chip, int disable)
-{
- /* Don't run on battery for batteryless hardware */
- if (chip->use_default_batt_values)
- return 0;
-
- /* This bit forces the charger to run off of the battery rather
- * than a connected charger */
- return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_CHG_CTRL,
- CHGR_ON_BAT_FORCE_BIT,
- disable ? CHGR_ON_BAT_FORCE_BIT : 0, 1);
-}
-
-static int
qpnp_chg_buck_control(struct qpnp_chg_chip *chip, int enable)
{
int rc;
@@ -1059,8 +1097,8 @@
POWER_SUPPLY_PROP_CURRENT_MAX, &ret);
if (ret.intval <= 2 && !chip->use_default_batt_values &&
get_prop_batt_present(chip)) {
- qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
qpnp_chg_usb_suspend_enable(chip, 1);
+ qpnp_chg_iusbmax_set(chip, QPNP_CHG_I_MAX_MIN_100);
} else {
qpnp_chg_usb_suspend_enable(chip, 0);
qpnp_chg_iusbmax_set(chip, ret.intval / 1000);
@@ -1383,6 +1421,7 @@
if (state == ADC_TM_WARM_STATE) {
if (temp > chip->warm_bat_decidegc) {
+ /* Normal to warm */
bat_warm = true;
bat_cool = false;
chip->adc_param.low_temp =
@@ -1391,6 +1430,7 @@
ADC_TM_COOL_THR_ENABLE;
} else if (temp >
chip->cool_bat_decidegc + HYSTERISIS_DECIDEGC){
+ /* Cool to normal */
bat_warm = false;
bat_cool = false;
@@ -1401,14 +1441,16 @@
}
} else {
if (temp < chip->cool_bat_decidegc) {
+ /* Normal to cool */
bat_warm = false;
bat_cool = true;
chip->adc_param.high_temp =
chip->cool_bat_decidegc + HYSTERISIS_DECIDEGC;
chip->adc_param.state_request =
ADC_TM_WARM_THR_ENABLE;
- } else if (temp >
+ } else if (temp <
chip->warm_bat_decidegc - HYSTERISIS_DECIDEGC){
+ /* Warm to normal */
bat_warm = false;
bat_cool = false;
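
The four transitions annotated above implement a hysteresis band around the warm and cool
thresholds; the corrected comparison makes the warm-to-normal edge fire only once the
temperature has dropped below warm_bat_decidegc minus the hysteresis. A minimal sketch of
that decision as a pure function, with invented threshold values and the ADC_TM threshold
reprogramming the driver actually performs left out:

#include <stdio.h>

#define HYSTERISIS_DECIDEGC	20	/* band width, value invented here */

enum bat_zone { BAT_COOL, BAT_NORMAL, BAT_WARM };

/* Pure restatement of the warm/cool transitions, temperatures in deci-degC. */
static enum bat_zone next_zone(enum bat_zone cur, int temp,
			       int cool_thr, int warm_thr)
{
	switch (cur) {
	case BAT_NORMAL:
		if (temp > warm_thr)
			return BAT_WARM;		/* normal -> warm */
		if (temp < cool_thr)
			return BAT_COOL;		/* normal -> cool */
		break;
	case BAT_WARM:
		if (temp < warm_thr - HYSTERISIS_DECIDEGC)
			return BAT_NORMAL;		/* warm -> normal */
		break;
	case BAT_COOL:
		if (temp > cool_thr + HYSTERISIS_DECIDEGC)
			return BAT_NORMAL;		/* cool -> normal */
		break;
	}
	return cur;					/* inside the band */
}

int main(void)
{
	/* cool at 10.0 C, warm at 45.0 C (invented) */
	enum bat_zone z = BAT_NORMAL;

	z = next_zone(z, 460, 100, 450);	/* 46.0 C -> warm */
	z = next_zone(z, 440, 100, 450);	/* still warm: inside the band */
	z = next_zone(z, 420, 100, 450);	/* 42.0 C -> back to normal */
	printf("final zone = %d\n", z);		/* prints 1 (BAT_NORMAL) */
	return 0;
}
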
@@ -1592,7 +1634,24 @@
chip->usbin_valid_irq, rc);
return rc;
}
+
+ chip->chg_gone_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "chg-gone");
+ if (chip->chg_gone_irq < 0) {
+ pr_err("Unable to get chg-gone irq\n");
+ return rc;
+ }
+ rc = devm_request_irq(chip->dev, chip->chg_gone_irq,
+ qpnp_chg_usb_chg_gone_irq_handler,
+ IRQF_TRIGGER_RISING,
+ "chg_gone_irq", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d chg_gone: %d\n",
+ chip->chg_gone_irq, rc);
+ return rc;
+ }
enable_irq_wake(chip->usbin_valid_irq);
+ enable_irq_wake(chip->chg_gone_irq);
break;
case SMBB_DC_CHGPTH_SUBTYPE:
chip->dcin_valid_irq = spmi_get_irq_byname(spmi,
@@ -1679,10 +1738,10 @@
rc = qpnp_chg_masked_write(chip, chip->chgr_base + 0x62,
0xFF, 0xA0, 1);
- /* HACK: use digital EOC */
+ /* HACK: use analog EOC */
rc = qpnp_chg_masked_write(chip, chip->chgr_base +
CHGR_IBAT_TERM_CHGR,
- 0x88, 0x80, 1);
+ 0x80, 0x00, 1);
break;
case SMBB_BUCK_SUBTYPE:
@@ -1700,6 +1759,15 @@
case SMBB_BAT_IF_SUBTYPE:
case SMBBP_BAT_IF_SUBTYPE:
case SMBCL_BAT_IF_SUBTYPE:
+ /* Force on VREF_BAT_THM */
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BATT_THERM_FORCE_ON, 1);
+ if (rc) {
+ pr_debug("failed to force on VREF_BAT_THM rc=%d\n", rc);
+ return rc;
+ }
break;
case SMBB_USB_CHGPTH_SUBTYPE:
case SMBBP_USB_CHGPTH_SUBTYPE:
@@ -1726,6 +1794,16 @@
ENUM_T_STOP_BIT,
ENUM_T_STOP_BIT, 1);
+ rc = qpnp_chg_masked_write(chip,
+ chip->usb_chgpth_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->usb_chgpth_base + USB_CHG_GONE_REV_BST,
+ 0xFF,
+ 0x80, 1);
+
break;
case SMBB_DC_CHGPTH_SUBTYPE:
break;
@@ -1927,6 +2005,27 @@
subtype, rc);
goto fail_chg_enable;
}
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + BUCK_VCHG_OV,
+ 0xff,
+ 0x00, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + SEC_ACCESS,
+ 0xFF,
+ 0xA5, 1);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + BUCK_TEST_SMBC_MODES,
+ 0xFF,
+ 0x80, 1);
+
break;
case SMBB_BAT_IF_SUBTYPE:
case SMBBP_BAT_IF_SUBTYPE:
@@ -2021,6 +2120,7 @@
}
INIT_WORK(&chip->adc_measure_work,
qpnp_bat_if_adc_measure_work);
+ INIT_DELAYED_WORK(&chip->arb_stop_work, qpnp_arb_stop_work);
}
if (chip->dc_chgpth_base) {
@@ -2115,6 +2215,41 @@
return 0;
}
+static int qpnp_chg_resume(struct device *dev)
+{
+ struct qpnp_chg_chip *chip = dev_get_drvdata(dev);
+ int rc = 0;
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BATT_THERM_FORCE_ON, 1);
+ if (rc)
+ pr_debug("failed to force on VREF_BAT_THM rc=%d\n", rc);
+
+ return rc;
+}
+
+static int qpnp_chg_suspend(struct device *dev)
+{
+ struct qpnp_chg_chip *chip = dev_get_drvdata(dev);
+ int rc = 0;
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->bat_if_base + BAT_IF_VREF_BAT_THM_CTRL,
+ VREF_BATT_THERM_FORCE_ON,
+ VREF_BAT_THM_ENABLED_FSM, 1);
+ if (rc)
+ pr_debug("failed to enable FSM ctrl VREF_BAT_THM rc=%d\n", rc);
+
+ return rc;
+}
+
+static const struct dev_pm_ops qpnp_bms_pm_ops = {
+ .resume = qpnp_chg_resume,
+ .suspend = qpnp_chg_suspend,
+};
+
static struct spmi_driver qpnp_charger_driver = {
.probe = qpnp_charger_probe,
.remove = __devexit_p(qpnp_charger_remove),
diff --git a/drivers/regulator/qpnp-regulator.c b/drivers/regulator/qpnp-regulator.c
index 4cdfaeb..2d10f89 100644
--- a/drivers/regulator/qpnp-regulator.c
+++ b/drivers/regulator/qpnp-regulator.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -19,12 +19,14 @@
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/init.h>
+#include <linux/interrupt.h>
#include <linux/bitops.h>
#include <linux/slab.h>
#include <linux/spmi.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
+#include <linux/ktime.h>
#include <linux/regulator/driver.h>
#include <linux/regulator/of_regulator.h>
#include <linux/regulator/qpnp-regulator.h>
@@ -36,6 +38,7 @@
QPNP_VREG_DEBUG_INIT = BIT(2), /* Show state after probe */
QPNP_VREG_DEBUG_WRITES = BIT(3), /* Show SPMI writes */
QPNP_VREG_DEBUG_READS = BIT(4), /* Show SPMI reads */
+ QPNP_VREG_DEBUG_OCP = BIT(5), /* Show VS OCP IRQ events */
};
static int qpnp_vreg_debug_mask;
@@ -156,9 +159,8 @@
#define QPNP_LDO_SOFT_START_ENABLE_MASK 0x80
/* VS regulator over current protection control register layout */
-#define QPNP_VS_OCP_ENABLE_MASK 0x80
-#define QPNP_VS_OCP_OVERRIDE_MASK 0x01
-#define QPNP_VS_OCP_DISABLE 0x00
+#define QPNP_VS_OCP_OVERRIDE 0x01
+#define QPNP_VS_OCP_NO_OVERRIDE 0x00
/* VS regulator soft start control register layout */
#define QPNP_VS_SOFT_START_ENABLE_MASK 0x80
@@ -168,6 +170,11 @@
#define QPNP_BOOST_CURRENT_LIMIT_ENABLE_MASK 0x80
#define QPNP_BOOST_CURRENT_LIMIT_MASK 0x07
+#define QPNP_VS_OCP_DEFAULT_MAX_RETRIES 10
+#define QPNP_VS_OCP_DEFAULT_RETRY_DELAY_MS 30
+#define QPNP_VS_OCP_FALL_DELAY_US 90
+#define QPNP_VS_OCP_FAULT_DELAY_US 20000
+
/*
* This voltage in uV is returned by get_voltage functions when there is no way
* to determine the current voltage level. It is needed because the regulator
@@ -203,17 +210,22 @@
struct qpnp_regulator {
struct regulator_desc rdesc;
+ struct delayed_work ocp_work;
struct spmi_device *spmi_dev;
struct regulator_dev *rdev;
struct qpnp_voltage_set_points *set_points;
enum qpnp_regulator_logical_type logical_type;
int enable_time;
- int ocp_enable_time;
int ocp_enable;
+ int ocp_irq;
+ int ocp_count;
+ int ocp_max_retries;
+ int ocp_retry_delay_ms;
int system_load;
int hpm_min_load;
u32 write_count;
u32 prev_write_count;
+ ktime_t vs_enable_time;
u16 base_addr;
/* ctrl_reg provides a shadow copy of register values 0x40 to 0x47. */
u8 ctrl_reg[8];
@@ -501,35 +513,11 @@
static int qpnp_regulator_vs_enable(struct regulator_dev *rdev)
{
struct qpnp_regulator *vreg = rdev_get_drvdata(rdev);
- int rc;
- u8 reg;
- if (vreg->ocp_enable == QPNP_REGULATOR_ENABLE) {
- /* Disable OCP */
- reg = QPNP_VS_OCP_DISABLE;
- rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
- if (rc)
- goto fail;
- }
+ if (vreg->ocp_irq)
+ vreg->vs_enable_time = ktime_get();
- rc = qpnp_regulator_common_enable(rdev);
- if (rc)
- goto fail;
-
- if (vreg->ocp_enable == QPNP_REGULATOR_ENABLE) {
- /* Wait for inrush current to subsided, then enable OCP. */
- udelay(vreg->ocp_enable_time);
- reg = QPNP_VS_OCP_ENABLE_MASK;
- rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
- if (rc)
- goto fail;
- }
-
- return rc;
-fail:
- vreg_err(vreg, "qpnp_vreg_write failed, rc=%d\n", rc);
-
- return rc;
+ return qpnp_regulator_common_enable(rdev);
}
static int qpnp_regulator_common_disable(struct regulator_dev *rdev)
@@ -785,6 +773,88 @@
return vreg->enable_time;
}
+static int qpnp_regulator_vs_clear_ocp(struct qpnp_regulator *vreg)
+{
+ int rc;
+
+ rc = qpnp_vreg_masked_write(vreg, QPNP_COMMON_REG_ENABLE,
+ QPNP_COMMON_DISABLE, QPNP_COMMON_ENABLE_MASK,
+ &vreg->ctrl_reg[QPNP_COMMON_IDX_ENABLE]);
+ if (rc)
+ vreg_err(vreg, "qpnp_vreg_masked_write failed, rc=%d\n", rc);
+
+ vreg->vs_enable_time = ktime_get();
+
+ rc = qpnp_vreg_masked_write(vreg, QPNP_COMMON_REG_ENABLE,
+ QPNP_COMMON_ENABLE, QPNP_COMMON_ENABLE_MASK,
+ &vreg->ctrl_reg[QPNP_COMMON_IDX_ENABLE]);
+ if (rc)
+ vreg_err(vreg, "qpnp_vreg_masked_write failed, rc=%d\n", rc);
+
+ if (qpnp_vreg_debug_mask & QPNP_VREG_DEBUG_OCP) {
+ pr_info("%s: switch state toggled after OCP event\n",
+ vreg->rdesc.name);
+ }
+
+ return rc;
+}
+
+static void qpnp_regulator_vs_ocp_work(struct work_struct *work)
+{
+ struct delayed_work *dwork
+ = container_of(work, struct delayed_work, work);
+ struct qpnp_regulator *vreg
+ = container_of(dwork, struct qpnp_regulator, ocp_work);
+
+ qpnp_regulator_vs_clear_ocp(vreg);
+
+ return;
+}
+
+static irqreturn_t qpnp_regulator_vs_ocp_isr(int irq, void *data)
+{
+ struct qpnp_regulator *vreg = data;
+ ktime_t ocp_irq_time;
+ s64 ocp_trigger_delay_us;
+
+ ocp_irq_time = ktime_get();
+ ocp_trigger_delay_us = ktime_us_delta(ocp_irq_time,
+ vreg->vs_enable_time);
+
+ /*
+ * Reset the OCP count if there is a large delay between switch enable
+ * and when OCP triggers. This is indicative of a hotplug event as
+ * opposed to a fault.
+ */
+ if (ocp_trigger_delay_us > QPNP_VS_OCP_FAULT_DELAY_US)
+ vreg->ocp_count = 0;
+
+ /* Wait for switch output to settle back to 0 V after OCP triggered. */
+ udelay(QPNP_VS_OCP_FALL_DELAY_US);
+
+ vreg->ocp_count++;
+
+ if (qpnp_vreg_debug_mask & QPNP_VREG_DEBUG_OCP) {
+ pr_info("%s: VS OCP triggered, count = %d, delay = %lld us\n",
+ vreg->rdesc.name, vreg->ocp_count,
+ ocp_trigger_delay_us);
+ }
+
+ if (vreg->ocp_count == 1) {
+ /* Immediately clear the over current condition. */
+ qpnp_regulator_vs_clear_ocp(vreg);
+ } else if (vreg->ocp_count <= vreg->ocp_max_retries) {
+ /* Schedule the over current clear task to run later. */
+ schedule_delayed_work(&vreg->ocp_work,
+ msecs_to_jiffies(vreg->ocp_retry_delay_ms) + 1);
+ } else {
+ vreg_err(vreg, "OCP triggered %d times; no further retries\n",
+ vreg->ocp_count);
+ }
+
+ return IRQ_HANDLED;
+}
+
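
The ISR above encodes a small retry policy: a long gap between switch enable and the OCP
event is treated as a hot-plugged load rather than a real fault, the first event is cleared
immediately, later events are retried from delayed work, and after ocp_max_retries the
condition is left latched. A sketch of just that decision, with ocp_clear() and
schedule_retry() standing in for qpnp_regulator_vs_clear_ocp() and the ocp_work item (the
QPNP_VS_OCP_FALL_DELAY_US settling delay is omitted):

#include <stdbool.h>
#include <stdint.h>

#define OCP_MAX_RETRIES		10	/* QPNP_VS_OCP_DEFAULT_MAX_RETRIES */
#define OCP_FAULT_DELAY_US	20000	/* QPNP_VS_OCP_FAULT_DELAY_US */

struct ocp_state {
	int count;
};

/* Returns true while the driver keeps trying to clear OCP, false once it
 * gives up and leaves the condition latched. */
static bool ocp_event(struct ocp_state *s, int64_t us_since_enable,
		      void (*ocp_clear)(void), void (*schedule_retry)(void))
{
	/* A long enable-to-OCP delay looks like a hot-plugged load, not a
	 * persistent short, so the retry budget starts over. */
	if (us_since_enable > OCP_FAULT_DELAY_US)
		s->count = 0;

	s->count++;

	if (s->count == 1) {
		ocp_clear();		/* first event: clear immediately */
		return true;
	} else if (s->count <= OCP_MAX_RETRIES) {
		schedule_retry();	/* later events: back off and retry */
		return true;
	}

	return false;			/* retry budget exhausted */
}
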
static const char const *qpnp_print_actions[] = {
[QPNP_REGULATOR_ACTION_INIT] = "initial ",
[QPNP_REGULATOR_ACTION_ENABLE] = "enable ",
@@ -834,7 +904,8 @@
if (type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
|| type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
- || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS) {
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_VS) {
mode = qpnp_regulator_common_get_mode(rdev);
mode_label = mode == REGULATOR_MODE_NORMAL ? "HPM" : "LPM";
}
@@ -903,9 +974,9 @@
pc_mode_label[1] =
mode_reg & QPNP_COMMON_MODE_FOLLOW_AWAKE_MASK ? 'W' : '_';
- pr_info("%s %-11s: %s, pc_en=%s, alt_mode=%s\n",
+ pr_info("%s %-11s: %s, mode=%s, pc_en=%s, alt_mode=%s\n",
action_label, vreg->rdesc.name, enable_label,
- pc_enable_label, pc_mode_label);
+ mode_label, pc_enable_label, pc_mode_label);
break;
case QPNP_REGULATOR_LOGICAL_TYPE_BOOST:
pr_info("%s %-11s: %s, v=%7d uV\n",
@@ -1099,6 +1170,17 @@
pdata->pin_ctrl_enable & QPNP_COMMON_ENABLE_FOLLOW_ALL_MASK;
}
+ /* Set up HPM control. */
+ if ((type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_VS
+ || type == QPNP_REGULATOR_LOGICAL_TYPE_FTSMPS)
+ && (pdata->hpm_enable != QPNP_REGULATOR_USE_HW_DEFAULT)) {
+ ctrl_reg[QPNP_COMMON_IDX_MODE] &= ~QPNP_COMMON_MODE_HPM_MASK;
+ ctrl_reg[QPNP_COMMON_IDX_MODE] |=
+ (pdata->hpm_enable ? QPNP_COMMON_MODE_HPM_MASK : 0);
+ }
+
/* Set up auto mode control. */
if ((type == QPNP_REGULATOR_LOGICAL_TYPE_SMPS
|| type == QPNP_REGULATOR_LOGICAL_TYPE_LDO
@@ -1224,7 +1306,8 @@
}
if (pdata->ocp_enable != QPNP_REGULATOR_USE_HW_DEFAULT) {
- reg = pdata->ocp_enable ? QPNP_VS_OCP_ENABLE_MASK : 0;
+ reg = pdata->ocp_enable ? QPNP_VS_OCP_NO_OVERRIDE
+ : QPNP_VS_OCP_OVERRIDE;
rc = qpnp_vreg_write(vreg, QPNP_VS_REG_OCP, &reg, 1);
if (rc) {
vreg_err(vreg, "spmi write failed, rc=%d\n",
@@ -1256,6 +1339,11 @@
}
pdata->base_addr = res->start;
+ /* OCP IRQ is optional so ignore get errors. */
+ pdata->ocp_irq = spmi_get_irq_byname(spmi, NULL, "ocp");
+ if (pdata->ocp_irq < 0)
+ pdata->ocp_irq = 0;
+
/*
* Initialize configuration parameters to use hardware default in case
* no value is specified via device tree.
@@ -1269,6 +1357,7 @@
pdata->pin_ctrl_enable = QPNP_REGULATOR_PIN_CTRL_ENABLE_HW_DEFAULT;
pdata->pin_ctrl_hpm = QPNP_REGULATOR_PIN_CTRL_HPM_HW_DEFAULT;
pdata->vs_soft_start_strength = QPNP_VS_SOFT_START_STR_HW_DEFAULT;
+ pdata->hpm_enable = QPNP_REGULATOR_USE_HW_DEFAULT;
/* These bindings are optional, so it is okay if they are not found. */
of_property_read_u32(node, "qcom,auto-mode-enable",
@@ -1276,6 +1365,10 @@
of_property_read_u32(node, "qcom,bypass-mode-enable",
&pdata->bypass_mode_enable);
of_property_read_u32(node, "qcom,ocp-enable", &pdata->ocp_enable);
+ of_property_read_u32(node, "qcom,ocp-max-retries",
+ &pdata->ocp_max_retries);
+ of_property_read_u32(node, "qcom,ocp-retry-delay",
+ &pdata->ocp_retry_delay_ms);
of_property_read_u32(node, "qcom,pull-down-enable",
&pdata->pull_down_enable);
of_property_read_u32(node, "qcom,soft-start-enable",
@@ -1285,12 +1378,11 @@
of_property_read_u32(node, "qcom,pin-ctrl-enable",
&pdata->pin_ctrl_enable);
of_property_read_u32(node, "qcom,pin-ctrl-hpm", &pdata->pin_ctrl_hpm);
+ of_property_read_u32(node, "qcom,hpm-enable", &pdata->hpm_enable);
of_property_read_u32(node, "qcom,vs-soft-start-strength",
&pdata->vs_soft_start_strength);
of_property_read_u32(node, "qcom,system-load", &pdata->system_load);
of_property_read_u32(node, "qcom,enable-time", &pdata->enable_time);
- of_property_read_u32(node, "qcom,ocp-enable-time",
- &pdata->ocp_enable_time);
return rc;
}
@@ -1364,7 +1456,14 @@
vreg->enable_time = pdata->enable_time;
vreg->system_load = pdata->system_load;
vreg->ocp_enable = pdata->ocp_enable;
- vreg->ocp_enable_time = pdata->ocp_enable_time;
+ vreg->ocp_irq = pdata->ocp_irq;
+ vreg->ocp_max_retries = pdata->ocp_max_retries;
+ vreg->ocp_retry_delay_ms = pdata->ocp_retry_delay_ms;
+
+ if (vreg->ocp_max_retries == 0)
+ vreg->ocp_max_retries = QPNP_VS_OCP_DEFAULT_MAX_RETRIES;
+ if (vreg->ocp_retry_delay_ms == 0)
+ vreg->ocp_retry_delay_ms = QPNP_VS_OCP_DEFAULT_RETRY_DELAY_MS;
rdesc = &vreg->rdesc;
rdesc->id = spmi->ctrl->nr;
@@ -1414,18 +1513,37 @@
goto bail;
}
+ if (vreg->logical_type != QPNP_REGULATOR_LOGICAL_TYPE_VS)
+ vreg->ocp_irq = 0;
+
+ if (vreg->ocp_irq) {
+ rc = devm_request_irq(&spmi->dev, vreg->ocp_irq,
+ qpnp_regulator_vs_ocp_isr, IRQF_TRIGGER_RISING, "ocp",
+ vreg);
+ if (rc < 0) {
+ vreg_err(vreg, "failed to request irq %d, rc=%d\n",
+ vreg->ocp_irq, rc);
+ goto bail;
+ }
+
+ INIT_DELAYED_WORK(&vreg->ocp_work, qpnp_regulator_vs_ocp_work);
+ }
+
vreg->rdev = regulator_register(rdesc, &spmi->dev,
&(pdata->init_data), vreg, spmi->dev.of_node);
if (IS_ERR(vreg->rdev)) {
rc = PTR_ERR(vreg->rdev);
vreg_err(vreg, "regulator_register failed, rc=%d\n", rc);
- goto bail;
+ goto cancel_ocp_work;
}
qpnp_vreg_show_state(vreg->rdev, QPNP_REGULATOR_ACTION_INIT);
return 0;
+cancel_ocp_work:
+ if (vreg->ocp_irq)
+ cancel_delayed_work_sync(&vreg->ocp_work);
bail:
if (rc)
vreg_err(vreg, "probe failed, rc=%d\n", rc);
@@ -1445,6 +1563,8 @@
if (vreg) {
regulator_unregister(vreg->rdev);
+ if (vreg->ocp_irq)
+ cancel_delayed_work_sync(&vreg->ocp_work);
kfree(vreg->rdesc.name);
kfree(vreg);
}
diff --git a/drivers/scsi/ufs/ufs.h b/drivers/scsi/ufs/ufs.h
index 139bc06..086ff03 100644
--- a/drivers/scsi/ufs/ufs.h
+++ b/drivers/scsi/ufs/ufs.h
@@ -36,10 +36,20 @@
#ifndef _UFS_H
#define _UFS_H
+#include <linux/mutex.h>
+#include <linux/types.h>
+
#define MAX_CDB_SIZE 16
+#define GENERAL_UPIU_REQUEST_SIZE 32
+#define UPIU_HEADER_DATA_SEGMENT_MAX_SIZE ((ALIGNED_UPIU_SIZE) - \
+ (GENERAL_UPIU_REQUEST_SIZE))
+#define QUERY_OSF_SIZE ((GENERAL_UPIU_REQUEST_SIZE) - \
+ (sizeof(struct utp_upiu_header)))
+#define UFS_QUERY_RESERVED_SCSI_CMD 0xCC
+#define UFS_QUERY_CMD_SIZE 10
#define UPIU_HEADER_DWORD(byte3, byte2, byte1, byte0)\
- ((byte3 << 24) | (byte2 << 16) |\
+ cpu_to_be32((byte3 << 24) | (byte2 << 16) |\
(byte1 << 8) | (byte0))
/*
@@ -62,7 +72,7 @@
UPIU_TRANSACTION_COMMAND = 0x01,
UPIU_TRANSACTION_DATA_OUT = 0x02,
UPIU_TRANSACTION_TASK_REQ = 0x04,
- UPIU_TRANSACTION_QUERY_REQ = 0x26,
+ UPIU_TRANSACTION_QUERY_REQ = 0x16,
};
/* UTP UPIU Transaction Codes Target to Initiator */
@@ -73,6 +83,7 @@
UPIU_TRANSACTION_TASK_RSP = 0x24,
UPIU_TRANSACTION_READY_XFER = 0x31,
UPIU_TRANSACTION_QUERY_RSP = 0x36,
+ UPIU_TRANSACTION_REJECT_UPIU = 0x3F,
};
/* UPIU Read/Write flags */
@@ -90,6 +101,12 @@
UPIU_TASK_ATTR_ACA = 0x03,
};
+/* UPIU Query request function */
+enum {
+ UPIU_QUERY_FUNC_STANDARD_READ_REQUEST = 0x01,
+ UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST = 0x81,
+};
+
/* UTP QUERY Transaction Specific Fields OpCode */
enum {
UPIU_QUERY_OPCODE_NOP = 0x0,
@@ -103,6 +120,21 @@
UPIU_QUERY_OPCODE_TOGGLE_FLAG = 0x8,
};
+/* Query response result code */
+enum {
+ QUERY_RESULT_SUCCESS = 0x00,
+ QUERY_RESULT_NOT_READABLE = 0xF6,
+ QUERY_RESULT_NOT_WRITEABLE = 0xF7,
+ QUERY_RESULT_ALREADY_WRITTEN = 0xF8,
+ QUERY_RESULT_INVALID_LENGTH = 0xF9,
+ QUERY_RESULT_INVALID_VALUE = 0xFA,
+ QUERY_RESULT_INVALID_SELECTOR = 0xFB,
+ QUERY_RESULT_INVALID_INDEX = 0xFC,
+ QUERY_RESULT_INVALID_IDN = 0xFD,
+ QUERY_RESULT_INVALID_OPCODE = 0xFE,
+ QUERY_RESULT_GENERAL_FAILURE = 0xFF,
+};
+
/* UTP Transfer Request Command Type (CT) */
enum {
UPIU_COMMAND_SET_TYPE_SCSI = 0x0,
@@ -110,10 +142,17 @@
UPIU_COMMAND_SET_TYPE_QUERY = 0x2,
};
+/* UTP Transfer Request Command Offset */
+#define UPIU_COMMAND_TYPE_OFFSET 28
+
+/* Offset of the response code in the UPIU header */
+#define UPIU_RSP_CODE_OFFSET 8
+
enum {
MASK_SCSI_STATUS = 0xFF,
MASK_TASK_RESPONSE = 0xFF00,
MASK_RSP_UPIU_RESULT = 0xFFFF,
+ MASK_QUERY_DATA_SEG_LEN = 0xFFFF,
};
/* Task management service response */
@@ -138,26 +177,59 @@
/**
* struct utp_upiu_cmd - Command UPIU structure
- * @header: UPIU header structure DW-0 to DW-2
* @data_transfer_len: Data Transfer Length DW-3
* @cdb: Command Descriptor Block CDB DW-4 to DW-7
*/
struct utp_upiu_cmd {
- struct utp_upiu_header header;
u32 exp_data_transfer_len;
u8 cdb[MAX_CDB_SIZE];
};
/**
- * struct utp_upiu_rsp - Response UPIU structure
- * @header: UPIU header DW-0 to DW-2
+ * struct utp_upiu_query - upiu request buffer structure for
+ * query request.
+ * @opcode: command to perform B-0
+ * @idn: a value that indicates the particular type of data B-1
+ * @index: Index to further identify data B-2
+ * @selector: Index to further identify data B-3
+ * @reserved_osf: spec reserved field B-4,5
+ * @length: number of descriptor bytes to read/write B-6,7
+ * @value: Attribute value to be written DW-6
+ * @reserved: spec reserved DW-7,8
+ */
+struct utp_upiu_query {
+ u8 opcode;
+ u8 idn;
+ u8 index;
+ u8 selector;
+ u16 reserved_osf;
+ u16 length;
+ u32 value;
+ u32 reserved[2];
+};
+
+/**
+ * struct utp_upiu_req - general upiu request structure
+ * @header:UPIU header structure DW-0 to DW-2
+ * @sc: fields structure for scsi command
+ * @qr: fields structure for query request
+ */
+struct utp_upiu_req {
+ struct utp_upiu_header header;
+ union {
+ struct utp_upiu_cmd sc;
+ struct utp_upiu_query qr;
+ };
+};
+
+/**
+ * struct utp_cmd_rsp - Response UPIU structure
* @residual_transfer_count: Residual transfer count DW-3
* @reserved: Reserved double words DW-4 to DW-7
* @sense_data_len: Sense data length DW-8 U16
* @sense_data: Sense data field DW-8 to DW-12
*/
-struct utp_upiu_rsp {
- struct utp_upiu_header header;
+struct utp_cmd_rsp {
u32 residual_transfer_count;
u32 reserved[4];
u16 sense_data_len;
@@ -165,6 +237,20 @@
};
/**
+ * struct utp_upiu_rsp - general upiu response structure
+ * @header: UPIU header structure DW-0 to DW-2
+ * @sc: fields structure for scsi command
+ * @qr: fields structure for query request
+ */
+struct utp_upiu_rsp {
+ struct utp_upiu_header header;
+ union {
+ struct utp_cmd_rsp sc;
+ struct utp_upiu_query qr;
+ };
+};
+
+/**
* struct utp_upiu_task_req - Task request UPIU structure
* @header - UPIU header structure DW0 to DW-2
* @input_param1: Input parameter 1 DW-3
@@ -194,4 +280,24 @@
u32 reserved[3];
};
+/**
+ * struct ufs_query_req - parameters for building a query request
+ * @query_func: UPIU header query function
+ * @upiu_req: the query request data
+ */
+struct ufs_query_req {
+ u8 query_func;
+ struct utp_upiu_query upiu_req;
+};
+
+/**
+ * struct ufs_query_resp - UPIU QUERY
+ * @response: device response code
+ * @upiu_res: query response data
+ */
+struct ufs_query_res {
+ u8 response;
+ struct utp_upiu_query upiu_res;
+};
+
#endif /* End of Header */
diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
index c32a478..e12b9b4 100644
--- a/drivers/scsi/ufs/ufshcd.c
+++ b/drivers/scsi/ufs/ufshcd.c
@@ -202,18 +202,13 @@
}
/**
- * ufshcd_is_valid_req_rsp - checks if controller TR response is valid
+ * ufshcd_get_req_rsp - returns the TR response
* @ucd_rsp_ptr: pointer to response UPIU
- *
- * This function checks the response UPIU for valid transaction type in
- * response field
- * Returns 0 on success, non-zero on failure
*/
static inline int
-ufshcd_is_valid_req_rsp(struct utp_upiu_rsp *ucd_rsp_ptr)
+ufshcd_get_req_rsp(struct utp_upiu_rsp *ucd_rsp_ptr)
{
- return ((be32_to_cpu(ucd_rsp_ptr->header.dword_0) >> 24) ==
- UPIU_TRANSACTION_RESPONSE) ? 0 : DID_ERROR << 16;
+ return be32_to_cpu(ucd_rsp_ptr->header.dword_0) >> 24;
}
/**
@@ -316,14 +311,71 @@
{
int len;
if (lrbp->sense_buffer) {
- len = be16_to_cpu(lrbp->ucd_rsp_ptr->sense_data_len);
+ len = be16_to_cpu(lrbp->ucd_rsp_ptr->sc.sense_data_len);
memcpy(lrbp->sense_buffer,
- lrbp->ucd_rsp_ptr->sense_data,
+ lrbp->ucd_rsp_ptr->sc.sense_data,
min_t(int, len, SCSI_SENSE_BUFFERSIZE));
}
}
/**
+ * ufshcd_query_to_cpu() - formats the received buffer into the native cpu
+ * endian
+ * @response: upiu query response to convert
+ */
+static inline void ufshcd_query_to_cpu(struct utp_upiu_query *response)
+{
+ response->length = be16_to_cpu(response->length);
+ response->value = be32_to_cpu(response->value);
+}
+
+/**
+ * ufshcd_query_to_be() - formats the buffer into big endian before sending
+ * @request: upiu query request to convert
+ */
+static inline void ufshcd_query_to_be(struct utp_upiu_query *request)
+{
+ request->length = cpu_to_be16(request->length);
+ request->value = cpu_to_be32(request->value);
+}
+
+/**
+ * ufshcd_copy_query_response() - Copy Query Response and descriptor
+ * @hba: per adapter instance
+ * @lrbp: pointer to local reference block
+ */
+static
+void ufshcd_copy_query_response(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+{
+ struct ufs_query_res *query_res = hba->query.response;
+
+ /* Get the UPIU response */
+ if (query_res) {
+ query_res->response = ufshcd_get_rsp_upiu_result(
+ lrbp->ucd_rsp_ptr) >> UPIU_RSP_CODE_OFFSET;
+
+ memcpy(&query_res->upiu_res, &lrbp->ucd_rsp_ptr->qr,
+ QUERY_OSF_SIZE);
+ ufshcd_query_to_cpu(&query_res->upiu_res);
+ }
+
+ /* Get the descriptor */
+ if (hba->query.descriptor && lrbp->ucd_rsp_ptr->qr.opcode ==
+ UPIU_QUERY_OPCODE_READ_DESC) {
+ u8 *descp = (u8 *)&lrbp->ucd_rsp_ptr +
+ GENERAL_UPIU_REQUEST_SIZE;
+ u16 len;
+
+ /* data segment length */
+ len = be32_to_cpu(lrbp->ucd_rsp_ptr->header.dword_2) &
+ MASK_QUERY_DATA_SEG_LEN;
+
+ memcpy(hba->query.descriptor, descp,
+ min_t(u16, len, UPIU_HEADER_DATA_SEGMENT_MAX_SIZE));
+ }
+}
+
+/**
* ufshcd_hba_capabilities - Read controller capabilities
* @hba: per adapter instance
*/
@@ -423,76 +475,159 @@
}
/**
+ * ufshcd_prepare_req_desc - Fills the request header
+ * descriptor according to the request
+ * @lrbp: pointer to local reference block
+ * @upiu_flags: flags required in the header
+ */
+static void ufshcd_prepare_req_desc(struct ufshcd_lrb *lrbp, u32 *upiu_flags)
+{
+ struct utp_transfer_req_desc *req_desc = lrbp->utr_descriptor_ptr;
+ enum dma_data_direction cmd_dir =
+ lrbp->cmd->sc_data_direction;
+ u32 data_direction;
+ u32 dword_0;
+
+ if (cmd_dir == DMA_FROM_DEVICE) {
+ data_direction = UTP_DEVICE_TO_HOST;
+ *upiu_flags = UPIU_CMD_FLAGS_READ;
+ } else if (cmd_dir == DMA_TO_DEVICE) {
+ data_direction = UTP_HOST_TO_DEVICE;
+ *upiu_flags = UPIU_CMD_FLAGS_WRITE;
+ } else {
+ data_direction = UTP_NO_DATA_TRANSFER;
+ *upiu_flags = UPIU_CMD_FLAGS_NONE;
+ }
+
+ dword_0 = data_direction | (lrbp->command_type
+ << UPIU_COMMAND_TYPE_OFFSET);
+
+ /* Transfer request descriptor header fields */
+ req_desc->header.dword_0 = cpu_to_le32(dword_0);
+
+ /*
+ * assigning invalid value for command status. Controller
+ * updates OCS on command completion, with the command
+ * status
+ */
+ req_desc->header.dword_2 =
+ cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
+}
+
+static inline bool ufshcd_is_query_req(struct ufshcd_lrb *lrbp)
+{
+ return lrbp->cmd ? lrbp->cmd->cmnd[0] == UFS_QUERY_RESERVED_SCSI_CMD :
+ false;
+}
+
+/**
+ * ufshcd_prepare_utp_scsi_cmd_upiu() - fills the utp_transfer_req_desc,
+ * for scsi commands
+ * @lrbp - local reference block pointer
+ * @upiu_flags - flags
+ */
+static
+void ufshcd_prepare_utp_scsi_cmd_upiu(struct ufshcd_lrb *lrbp, u32 upiu_flags)
+{
+ struct utp_upiu_req *ucd_req_ptr = lrbp->ucd_req_ptr;
+
+ /* command descriptor fields */
+ ucd_req_ptr->header.dword_0 = UPIU_HEADER_DWORD(
+ UPIU_TRANSACTION_COMMAND, upiu_flags,
+ lrbp->lun, lrbp->task_tag);
+ ucd_req_ptr->header.dword_1 = UPIU_HEADER_DWORD(
+ UPIU_COMMAND_SET_TYPE_SCSI, 0, 0, 0);
+
+ /* Total EHS length and Data segment length will be zero */
+ ucd_req_ptr->header.dword_2 = 0;
+
+ ucd_req_ptr->sc.exp_data_transfer_len =
+ cpu_to_be32(lrbp->cmd->sdb.length);
+
+ memcpy(ucd_req_ptr->sc.cdb, lrbp->cmd->cmnd,
+ (min_t(unsigned short, lrbp->cmd->cmd_len, MAX_CDB_SIZE)));
+}
+
+/**
+ * ufshcd_prepare_utp_query_req_upiu() - fills the utp_transfer_req_desc,
+ * for query requests
+ * @hba: UFS hba
+ * @lrbp: local reference block pointer
+ * @upiu_flags: flags
+ */
+static void ufshcd_prepare_utp_query_req_upiu(struct ufs_hba *hba,
+ struct ufshcd_lrb *lrbp,
+ u32 upiu_flags)
+{
+ struct utp_upiu_req *ucd_req_ptr = lrbp->ucd_req_ptr;
+ u16 len = hba->query.request->upiu_req.length;
+ u8 *descp = (u8 *)lrbp->ucd_req_ptr + GENERAL_UPIU_REQUEST_SIZE;
+
+ /* Query request header */
+ ucd_req_ptr->header.dword_0 = UPIU_HEADER_DWORD(
+ UPIU_TRANSACTION_QUERY_REQ, upiu_flags,
+ lrbp->lun, lrbp->task_tag);
+ ucd_req_ptr->header.dword_1 = UPIU_HEADER_DWORD(
+ 0, hba->query.request->query_func, 0, 0);
+
+ /* Data segment length */
+ ucd_req_ptr->header.dword_2 = UPIU_HEADER_DWORD(
+ 0, 0, len >> 8, (u8)len);
+
+ /* Copy the Query Request buffer as is */
+ memcpy(&lrbp->ucd_req_ptr->qr, &hba->query.request->upiu_req,
+ QUERY_OSF_SIZE);
+ ufshcd_query_to_be(&lrbp->ucd_req_ptr->qr);
+
+ /* Copy the Descriptor */
+ if (hba->query.descriptor != NULL && len > 0 &&
+ (hba->query.request->upiu_req.opcode ==
+ UPIU_QUERY_OPCODE_WRITE_DESC)) {
+ memcpy(descp, hba->query.descriptor,
+ min_t(u16, len, UPIU_HEADER_DATA_SEGMENT_MAX_SIZE));
+ }
+
+}
+
+/**
* ufshcd_compose_upiu - form UFS Protocol Information Unit(UPIU)
+ * @hba - UFS hba
* @lrb - pointer to local reference block
*/
-static void ufshcd_compose_upiu(struct ufshcd_lrb *lrbp)
+static int ufshcd_compose_upiu(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
{
- struct utp_transfer_req_desc *req_desc;
- struct utp_upiu_cmd *ucd_cmd_ptr;
- u32 data_direction;
u32 upiu_flags;
-
- ucd_cmd_ptr = lrbp->ucd_cmd_ptr;
- req_desc = lrbp->utr_descriptor_ptr;
+ int ret = 0;
switch (lrbp->command_type) {
case UTP_CMD_TYPE_SCSI:
- if (lrbp->cmd->sc_data_direction == DMA_FROM_DEVICE) {
- data_direction = UTP_DEVICE_TO_HOST;
- upiu_flags = UPIU_CMD_FLAGS_READ;
- } else if (lrbp->cmd->sc_data_direction == DMA_TO_DEVICE) {
- data_direction = UTP_HOST_TO_DEVICE;
- upiu_flags = UPIU_CMD_FLAGS_WRITE;
- } else {
- data_direction = UTP_NO_DATA_TRANSFER;
- upiu_flags = UPIU_CMD_FLAGS_NONE;
- }
-
- /* Transfer request descriptor header fields */
- req_desc->header.dword_0 =
- cpu_to_le32(data_direction | UTP_SCSI_COMMAND);
-
- /*
- * assigning invalid value for command status. Controller
- * updates OCS on command completion, with the command
- * status
- */
- req_desc->header.dword_2 =
- cpu_to_le32(OCS_INVALID_COMMAND_STATUS);
-
- /* command descriptor fields */
- ucd_cmd_ptr->header.dword_0 =
- cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_COMMAND,
- upiu_flags,
- lrbp->lun,
- lrbp->task_tag));
- ucd_cmd_ptr->header.dword_1 =
- cpu_to_be32(
- UPIU_HEADER_DWORD(UPIU_COMMAND_SET_TYPE_SCSI,
- 0,
- 0,
- 0));
-
- /* Total EHS length and Data segment length will be zero */
- ucd_cmd_ptr->header.dword_2 = 0;
-
- ucd_cmd_ptr->exp_data_transfer_len =
- cpu_to_be32(lrbp->cmd->sdb.length);
-
- memcpy(ucd_cmd_ptr->cdb,
- lrbp->cmd->cmnd,
- (min_t(unsigned short,
- lrbp->cmd->cmd_len,
- MAX_CDB_SIZE)));
- break;
case UTP_CMD_TYPE_DEV_MANAGE:
- /* For query function implementation */
+ ufshcd_prepare_req_desc(lrbp, &upiu_flags);
+ if (lrbp->cmd && lrbp->command_type == UTP_CMD_TYPE_SCSI)
+ ufshcd_prepare_utp_scsi_cmd_upiu(lrbp, upiu_flags);
+ else if (lrbp->cmd)
+ ufshcd_prepare_utp_query_req_upiu(hba, lrbp,
+ upiu_flags);
+ else {
+ dev_err(hba->dev, "%s: Invalid UPIU request\n",
+ __func__);
+ ret = -EINVAL;
+ }
break;
case UTP_CMD_TYPE_UFS:
/* For UFS native command implementation */
+ dev_err(hba->dev, "%s: UFS native command are not supported\n",
+ __func__);
+ ret = -ENOTSUPP;
+ break;
+ default:
+ ret = -ENOTSUPP;
+ dev_err(hba->dev, "%s: unknown command type: 0x%x\n",
+ __func__, lrbp->command_type);
break;
} /* end of switch */
+
+ return ret;
}
/**
@@ -527,10 +662,13 @@
lrbp->task_tag = tag;
lrbp->lun = cmd->device->lun;
- lrbp->command_type = UTP_CMD_TYPE_SCSI;
+ if (ufshcd_is_query_req(lrbp))
+ lrbp->command_type = UTP_CMD_TYPE_DEV_MANAGE;
+ else
+ lrbp->command_type = UTP_CMD_TYPE_SCSI;
/* form UPIU before issuing the command */
- ufshcd_compose_upiu(lrbp);
+ ufshcd_compose_upiu(hba, lrbp);
err = ufshcd_map_sg(lrbp);
if (err)
goto out;
@@ -544,6 +682,110 @@
}
/**
+ * ufshcd_query_request() - Entry point for issuing query request to a
+ * ufs device.
+ * @hba: ufs driver context
+ * @query: params for query request
+ * @descriptor: buffer for sending/receiving descriptor
+ * @response: pointer to a buffer that will contain the response code and
+ * response upiu
+ * @timeout: time limit for the command in seconds
+ * @retries: number of times to try executing the command
+ *
+ * The query request is submitted to the same request queue as the rest of
+ * the scsi commands passed to the UFS controller. In order to use this
+ * queue, we need to receive a tag, same as all other commands. The tags
+ * are issued from the block layer. To simulate a request from the block
+ * layer, we use the same interface as the SCSI layer does when it issues
+ * commands not generated by users. To distinguish a query request from
+ * the SCSI commands, we use a vendor specific unused SCSI command
+ * op-code. This op-code is not part of the SCSI command subset used in
+ * UFS. In such way it is easy to check the command in the driver and
+ * handle it appropriately.
+ *
+ * All necessary fields for issuing a query and receiving its response are
+ * stored in the UFS hba struct. We can use this method since we know
+ * there is only one active query request at all times.
+ *
+ * The request that will pass to the device is stored in "query" argument
+ * passed to this function, while the "response" argument (which is output
+ * field) will hold the query response from the device along with the
+ * response code.
+ */
+int ufshcd_query_request(struct ufs_hba *hba,
+ struct ufs_query_req *query,
+ u8 *descriptor,
+ struct ufs_query_res *response,
+ int timeout,
+ int retries)
+{
+ struct scsi_device *sdev;
+ u8 cmd[UFS_QUERY_CMD_SIZE] = {0};
+ int result;
+ bool sdev_lookup = true;
+
+ if (!hba || !query || !response) {
+ /* hba may be NULL here, so do not use dev_err(hba->dev, ...) */
+ pr_err("%s: NULL pointer hba = %p, query = %p response = %p\n",
+ __func__, hba, query, response);
+ return -EINVAL;
+ }
+
+ /*
+ * A SCSI command structure is composed of the opcode at the
+ * beginning and zeros at the end.
+ */
+ cmd[0] = UFS_QUERY_RESERVED_SCSI_CMD;
+
+ /* extracting the SCSI Device */
+ sdev = scsi_device_lookup(hba->host, 0, 0, 0);
+ if (!sdev) {
+ /*
+ * Some Query Requests are sent during device
+ * initialization, before any scsi device has been
+ * initialized. If there is no scsi device, we generate a
+ * temporary device to allow the Query Request flow.
+ */
+ sdev_lookup = false;
+ sdev = scsi_get_host_dev(hba->host);
+ }
+
+ if (!sdev) {
+ dev_err(hba->dev, "%s: Could not fetch scsi device\n",
+ __func__);
+ return -ENODEV;
+ }
+
+ mutex_lock(&hba->query.lock_ufs_query);
+ hba->query.request = query;
+ hba->query.descriptor = descriptor;
+ hba->query.response = response;
+
+ /* wait until request is completed */
+ result = scsi_execute(sdev, cmd, DMA_NONE, NULL, 0, NULL,
+ timeout, retries, 0, NULL);
+ if (result) {
+ dev_err(hba->dev,
+ "%s: Query with opcode 0x%x, failed with result %d\n",
+ __func__, query->upiu_req.opcode, result);
+ result = -EIO;
+ }
+
+ hba->query.request = NULL;
+ hba->query.descriptor = NULL;
+ hba->query.response = NULL;
+ mutex_unlock(&hba->query.lock_ufs_query);
+
+ /* Releasing scsi device resource */
+ if (sdev_lookup)
+ scsi_device_put(sdev);
+ else
+ scsi_free_host_dev(sdev);
+
+ return result;
+}
+
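
As a usage sketch, a caller (for instance later device-management code) might read a
descriptor through this entry point roughly as follows; the IDN value, buffer handling and
timeout/retry numbers are placeholders invented for the example, since the patch itself
defines no descriptor IDNs:

static int example_read_descriptor(struct ufs_hba *hba, u8 *buf, u16 buf_len)
{
	struct ufs_query_req query = {0};
	struct ufs_query_res response = {0};
	int err;

	query.query_func = UPIU_QUERY_FUNC_STANDARD_READ_REQUEST;
	query.upiu_req.opcode = UPIU_QUERY_OPCODE_READ_DESC;
	query.upiu_req.idn = 0x0;		/* placeholder descriptor IDN */
	query.upiu_req.index = 0;
	query.upiu_req.selector = 0;
	query.upiu_req.length = buf_len;	/* CPU endian; converted later */

	/* Only one query runs at a time; the driver serializes callers on
	 * hba->query.lock_ufs_query. */
	err = ufshcd_query_request(hba, &query, buf, &response,
				   10 /* seconds */, 3 /* retries */);
	if (err)
		return err;

	return (response.response == QUERY_RESULT_SUCCESS) ? 0 : -EIO;
}
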
+/**
* ufshcd_memory_alloc - allocate memory for host memory space data structures
* @hba: per adapter instance
*
@@ -677,8 +919,8 @@
cpu_to_le16(ALIGNED_UPIU_SIZE);
hba->lrb[i].utr_descriptor_ptr = (utrdlp + i);
- hba->lrb[i].ucd_cmd_ptr =
- (struct utp_upiu_cmd *)(cmd_descp + i);
+ hba->lrb[i].ucd_req_ptr =
+ (struct utp_upiu_req *)(cmd_descp + i);
hba->lrb[i].ucd_rsp_ptr =
(struct utp_upiu_rsp *)cmd_descp[i].response_upiu;
hba->lrb[i].ucd_prdt_ptr =
@@ -1101,7 +1343,9 @@
* @hba: per adapter instance
* @lrb: pointer to local reference block of completed command
*
- * Returns result of the command to notify SCSI midlayer
+ * Returns result of the command to notify SCSI midlayer. In
+ * case of query request specific result, returns DID_OK, and
+ * the error will be handled by the dispatcher.
*/
static inline int
ufshcd_transfer_rsp_status(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
@@ -1115,27 +1359,46 @@
switch (ocs) {
case OCS_SUCCESS:
-
/* check if the returned transfer response is valid */
- result = ufshcd_is_valid_req_rsp(lrbp->ucd_rsp_ptr);
- if (result) {
- dev_err(hba->dev,
- "Invalid response = %x\n", result);
+ result = ufshcd_get_req_rsp(lrbp->ucd_rsp_ptr);
+
+ switch (result) {
+ case UPIU_TRANSACTION_RESPONSE:
+ /*
+ * get the response UPIU result to extract
+ * the SCSI command status
+ */
+ result = ufshcd_get_rsp_upiu_result(lrbp->ucd_rsp_ptr);
+
+ /*
+ * get the result based on SCSI status response
+ * to notify the SCSI midlayer of the command status
+ */
+ scsi_status = result & MASK_SCSI_STATUS;
+ result = ufshcd_scsi_cmd_status(lrbp, scsi_status);
break;
+ case UPIU_TRANSACTION_QUERY_RSP:
+ /*
+ * Return result = ok, since SCSI layer wouldn't
+ * know how to handle errors from query requests.
+ * The result is saved with the response so that
+ * the ufs_core layer will handle it.
+ */
+ result |= DID_OK << 16;
+ ufshcd_copy_query_response(hba, lrbp);
+ break;
+ case UPIU_TRANSACTION_REJECT_UPIU:
+ /* TODO: handle Reject UPIU Response */
+ result |= DID_ERROR << 16;
+ dev_err(hba->dev,
+ "Reject UPIU not fully implemented\n");
+ break;
+ default:
+ result |= DID_ERROR << 16;
+ dev_err(hba->dev,
+ "Unexpected request response code = %x\n",
+ result);
}
-
- /*
- * get the response UPIU result to extract
- * the SCSI command status
- */
- result = ufshcd_get_rsp_upiu_result(lrbp->ucd_rsp_ptr);
-
- /*
- * get the result based on SCSI status response
- * to notify the SCSI midlayer of the command status
- */
- scsi_status = result & MASK_SCSI_STATUS;
- result = ufshcd_scsi_cmd_status(lrbp, scsi_status);
break;
case OCS_ABORTED:
result |= DID_ABORT << 16;
@@ -1364,10 +1627,10 @@
task_req_upiup =
(struct utp_upiu_task_req *) task_req_descp->task_req_upiu;
task_req_upiup->header.dword_0 =
- cpu_to_be32(UPIU_HEADER_DWORD(UPIU_TRANSACTION_TASK_REQ, 0,
- lrbp->lun, lrbp->task_tag));
+ UPIU_HEADER_DWORD(UPIU_TRANSACTION_TASK_REQ, 0,
+ lrbp->lun, lrbp->task_tag);
task_req_upiup->header.dword_1 =
- cpu_to_be32(UPIU_HEADER_DWORD(0, tm_function, 0, 0));
+ UPIU_HEADER_DWORD(0, tm_function, 0, 0);
task_req_upiup->input_param1 = lrbp->lun;
task_req_upiup->input_param1 =
@@ -1670,6 +1933,9 @@
INIT_WORK(&hba->uic_workq, ufshcd_uic_cc_handler);
INIT_WORK(&hba->feh_workq, ufshcd_fatal_err_handler);
+ /* Initialize mutex for query requests */
+ mutex_init(&hba->query.lock_ufs_query);
+
/* IRQ registration */
err = request_irq(irq, ufshcd_intr, IRQF_SHARED, UFSHCD, hba);
if (err) {
diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
index 6b99a42..336980b 100644
--- a/drivers/scsi/ufs/ufshcd.h
+++ b/drivers/scsi/ufs/ufshcd.h
@@ -60,6 +60,7 @@
#include <scsi/scsi_tcq.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_eh.h>
+#include <scsi/scsi_device.h>
#include "ufs.h"
#include "ufshci.h"
@@ -88,7 +89,7 @@
/**
* struct ufshcd_lrb - local reference block
* @utr_descriptor_ptr: UTRD address of the command
- * @ucd_cmd_ptr: UCD address of the command
+ * @ucd_req_ptr: UCD address of the command
* @ucd_rsp_ptr: Response UPIU address for this command
* @ucd_prdt_ptr: PRDT address of the command
* @cmd: pointer to SCSI command
@@ -101,7 +102,7 @@
*/
struct ufshcd_lrb {
struct utp_transfer_req_desc *utr_descriptor_ptr;
- struct utp_upiu_cmd *ucd_cmd_ptr;
+ struct utp_upiu_req *ucd_req_ptr;
struct utp_upiu_rsp *ucd_rsp_ptr;
struct ufshcd_sg_entry *ucd_prdt_ptr;
@@ -115,6 +116,19 @@
unsigned int lun;
};
+/**
+ * struct ufs_query - keeps the query request information
+ * @request: request upiu and function
+ * @descriptor: buffer for sending/receiving descriptor
+ * @response: response upiu and response
+ * @mutex: lock to allow one query at a time
+ */
+struct ufs_query {
+ struct ufs_query_req *request;
+ u8 *descriptor;
+ struct ufs_query_res *response;
+ struct mutex lock_ufs_query;
+};
/**
* struct ufs_hba - per adapter private structure
@@ -143,6 +157,7 @@
* @uic_workq: Work queue for UIC completion handling
* @feh_workq: Work queue for fatal controller error handling
* @errors: HBA errors
+ * @query: query request information
*/
struct ufs_hba {
void __iomem *mmio_base;
@@ -184,6 +199,9 @@
/* HBA Errors */
u32 errors;
+
+ /* Query Request */
+ struct ufs_query query;
};
int ufshcd_init(struct device *, struct ufs_hba ** , void __iomem * ,
diff --git a/drivers/scsi/ufs/ufshci.h b/drivers/scsi/ufs/ufshci.h
index 0c16484..4a86247 100644
--- a/drivers/scsi/ufs/ufshci.h
+++ b/drivers/scsi/ufs/ufshci.h
@@ -39,7 +39,7 @@
enum {
TASK_REQ_UPIU_SIZE_DWORDS = 8,
TASK_RSP_UPIU_SIZE_DWORDS = 8,
- ALIGNED_UPIU_SIZE = 128,
+ ALIGNED_UPIU_SIZE = 512,
};
/* UFSHCI Registers */
diff --git a/drivers/slimbus/slim-msm-ctrl.c b/drivers/slimbus/slim-msm-ctrl.c
index 9a864aa..4a3ea76 100644
--- a/drivers/slimbus/slim-msm-ctrl.c
+++ b/drivers/slimbus/slim-msm-ctrl.c
@@ -37,8 +37,6 @@
#define QC_DEVID_SAT2 0x4
#define QC_DEVID_PGD 0x5
#define QC_MSM_DEVS 5
-#define INIT_MX_RETRIES 10
-#define DEF_RETRY_MS 10
/* Manager registers */
enum mgr_reg {
diff --git a/drivers/slimbus/slim-msm-ngd.c b/drivers/slimbus/slim-msm-ngd.c
index 63d3750..a0179cb 100644
--- a/drivers/slimbus/slim-msm-ngd.c
+++ b/drivers/slimbus/slim-msm-ngd.c
@@ -631,6 +631,7 @@
if (mc == SLIM_USR_MC_MASTER_CAPABILITY &&
mt == SLIM_MSG_MT_SRC_REFERRED_USER) {
struct slim_msg_txn txn;
+ int retries = 0;
u8 wbuf[8];
txn.dt = SLIM_MSG_DEST_LOGICALADDR;
txn.ec = 0;
@@ -638,7 +639,6 @@
txn.mc = SLIM_USR_MC_REPORT_SATELLITE;
txn.mt = SLIM_MSG_MT_SRC_REFERRED_USER;
txn.la = SLIM_LA_MGR;
- txn.rl = 8;
wbuf[0] = SAT_MAGIC_LSB;
wbuf[1] = SAT_MAGIC_MSB;
wbuf[2] = SAT_MSG_VER;
@@ -655,7 +655,8 @@
/* make sure NGD MSG-Q config goes through */
mb();
}
-
+capability_retry:
+ txn.rl = 8;
ret = ngd_xfer_msg(&dev->ctrl, &txn);
if (!ret) {
enum msm_ctrl_state prev_state = dev->state;
@@ -668,6 +669,13 @@
/* ADSP SSR, send device_up notifications */
if (prev_state == MSM_CTRL_DOWN)
schedule_work(&dev->slave_notify);
+ } else if (ret == -EIO) {
+ pr_info("capability message NACKed, retrying");
+ if (retries < INIT_MX_RETRIES) {
+ msleep(DEF_RETRY_MS);
+ retries++;
+ goto capability_retry;
+ }
}
}
if (mc == SLIM_MSG_MC_REPLY_INFORMATION ||
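
For reference, the capability exchange above now follows a simple bounded-retry shape;
restated as standalone C with send_capability() standing in for ngd_xfer_msg() and the
constants mirroring INIT_MX_RETRIES/DEF_RETRY_MS from slim-msm.h:

#include <errno.h>
#include <unistd.h>

#define INIT_MX_RETRIES	10
#define DEF_RETRY_MS	10

static int send_with_retry(int (*send_capability)(void))
{
	int retries = 0;
	int ret;

retry:
	ret = send_capability();
	if (ret == -EIO && retries < INIT_MX_RETRIES) {
		/* message NACKed by the master: wait and try again */
		usleep(DEF_RETRY_MS * 1000);
		retries++;
		goto retry;
	}
	return ret;
}
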
diff --git a/drivers/slimbus/slim-msm.h b/drivers/slimbus/slim-msm.h
index 6ff3f19..cf2d26f 100644
--- a/drivers/slimbus/slim-msm.h
+++ b/drivers/slimbus/slim-msm.h
@@ -50,6 +50,8 @@
#define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \
((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16))
+#define INIT_MX_RETRIES 10
+#define DEF_RETRY_MS 10
#define MSM_CONCUR_MSG 8
#define SAT_CONCUR_MSG 8
#define DEF_WATERMARK (8 << 1)
diff --git a/drivers/thermal/msm8974-tsens.c b/drivers/thermal/msm8974-tsens.c
index 676f69d..991cf2e 100644
--- a/drivers/thermal/msm8974-tsens.c
+++ b/drivers/thermal/msm8974-tsens.c
@@ -284,7 +284,7 @@
while (i < tmdev->tsens_num_sensor && !id_found) {
if (sensor_hw_num == tmdev->sensor[i].sensor_hw_num) {
- *sensor_sw_idx = i;
+ *sensor_sw_idx = tmdev->sensor[i].sensor_sw_id;
id_found = true;
}
i++;
@@ -304,7 +304,7 @@
while (i < tmdev->tsens_num_sensor && !id_found) {
if (sensor_sw_id == tmdev->sensor[i].sensor_sw_id) {
- *sensor_hw_num = i;
+ *sensor_hw_num = tmdev->sensor[i].sensor_hw_num;
id_found = true;
}
i++;
@@ -333,6 +333,8 @@
else
degc = num/den;
+ pr_debug("raw_code:0x%x, sensor_num:%d, degc:%d\n",
+ adc_code, idx, degc);
return degc;
}
@@ -345,6 +347,8 @@
code = TSENS_THRESHOLD_MAX_CODE;
else if (code < TSENS_THRESHOLD_MIN_CODE)
code = TSENS_THRESHOLD_MIN_CODE;
+ pr_debug("raw_code:0x%x, sensor_num:%d, degc:%d\n",
+ code, idx, degc);
return code;
}
@@ -729,6 +733,7 @@
tsens_calibration_mode = (calib_data[0] & TSENS_8X10_TSENS_CAL_SEL)
>> TSENS_8X10_CAL_SEL_SHIFT;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_TWO_POINT_CALIB) ||
(tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2)) {
@@ -783,6 +788,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -827,6 +835,7 @@
tsens_calibration_mode = (calib_data[5] & TSENS_8X26_TSENS_CAL_SEL)
>> TSENS_8X26_CAL_SEL_SHIFT;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_TWO_POINT_CALIB) ||
(tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2)) {
@@ -936,6 +945,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -976,11 +988,14 @@
TSENS_EEPROM_REDUNDANCY_SEL(tmdev->tsens_calib_addr));
calib_redun_sel = calib_redun_sel & TSENS_QFPROM_BACKUP_REDUN_SEL;
calib_redun_sel >>= TSENS_QFPROM_BACKUP_REDUN_SHIFT;
+ pr_debug("calib_redun_sel:%x\n", calib_redun_sel);
- for (i = 0; i < TSENS_MAIN_CALIB_ADDR_RANGE; i++)
+ for (i = 0; i < TSENS_MAIN_CALIB_ADDR_RANGE; i++) {
calib_data[i] = readl_relaxed(
(TSENS_EEPROM(tmdev->tsens_calib_addr))
+ (i * TSENS_SN_ADDR_OFFSET));
+ pr_debug("calib raw data row%d:0x%x\n", i, calib_data[i]);
+ }
if (calib_redun_sel == TSENS_QFPROM_BACKUP_SEL) {
tsens_calibration_mode = (calib_data[4] & TSENS_CAL_SEL_0_1)
@@ -988,6 +1003,7 @@
temp = (calib_data[5] & TSENS_CAL_SEL_2)
>> TSENS_CAL_SEL_SHIFT_2;
tsens_calibration_mode |= temp;
+ pr_debug("backup calib mode:%x\n", calib_redun_sel);
for (i = 0; i < TSENS_BACKUP_CALIB_ADDR_RANGE; i++)
calib_data_backup[i] = readl_relaxed(
@@ -1073,6 +1089,7 @@
temp = (calib_data[3] & TSENS_CAL_SEL_2)
>> TSENS_CAL_SEL_SHIFT_2;
tsens_calibration_mode |= temp;
+ pr_debug("calib mode scheme:%x\n", tsens_calibration_mode);
if ((tsens_calibration_mode == TSENS_ONE_POINT_CALIB) ||
(tsens_calibration_mode ==
TSENS_ONE_POINT_CALIB_OPTION_2) ||
@@ -1184,6 +1201,7 @@
if ((tsens_calibration_mode == TSENS_ONE_POINT_CALIB_OPTION_2) ||
(tsens_calibration_mode == TSENS_TWO_POINT_CALIB)) {
+ pr_debug("one point calibration calculation\n");
calib_tsens_point1_data[0] =
((((tsens_base1_data) + tsens0_point1) << 2) |
TSENS_BIT_APPEND);
@@ -1261,6 +1279,9 @@
int32_t num = 0, den = 0;
tmdev->sensor[i].calib_data_point2 = calib_tsens_point2_data[i];
tmdev->sensor[i].calib_data_point1 = calib_tsens_point1_data[i];
+ pr_debug("sensor:%d - calib_data_point1:0x%x, calib_data_point2:0x%x\n",
+ i, tmdev->sensor[i].calib_data_point1,
+ tmdev->sensor[i].calib_data_point2);
if (tsens_calibration_mode == TSENS_TWO_POINT_CALIB) {
/* slope (m) = adc_code2 - adc_code1 (y2 - y1)/
temp_120_degc - temp_30_degc (x2 - x1) */
@@ -1273,6 +1294,7 @@
tmdev->sensor[i].offset = (tmdev->sensor[i].calib_data_point1 *
tmdev->tsens_factor) - (TSENS_CAL_DEGC_POINT1 *
tmdev->sensor[i].slope_mul_tsens_factor);
+ pr_debug("offset:%d\n", tmdev->sensor[i].offset);
INIT_WORK(&tmdev->sensor[i].work, notify_uspace_tsens_fn);
tmdev->prev_reading_avail = false;
}
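
The two-point calibration arithmetic repeated in these hunks can be checked with a small
standalone program. The 30/120 degC calibration points, the tsens_factor scaling and the
slope/offset formulas follow the comments above; the raw calibration codes and the final
code-to-degC inversion are assumptions for illustration (the driver's own conversion helper
is not shown in this hunk):

#include <stdio.h>

#define TSENS_FACTOR		1000
#define TSENS_CAL_DEGC_POINT1	30
#define TSENS_CAL_DEGC_POINT2	120

int main(void)
{
	int point1 = 500;	/* ADC code at 30 degC (invented) */
	int point2 = 590;	/* ADC code at 120 degC (invented) */
	int code = 530;		/* raw reading to convert */

	/* slope (m) = (adc_code2 - adc_code1) / (120 - 30), kept scaled by
	 * tsens_factor so the integer division stays precise */
	int slope_mul = ((point2 - point1) * TSENS_FACTOR) /
			(TSENS_CAL_DEGC_POINT2 - TSENS_CAL_DEGC_POINT1);

	/* offset = point1 * factor - 30 degC * slope, as computed above */
	int offset = point1 * TSENS_FACTOR -
		     TSENS_CAL_DEGC_POINT1 * slope_mul;

	/* assumed inversion of the same linear model: degC from a raw code */
	int degc = (code * TSENS_FACTOR - offset) / slope_mul;

	printf("slope*factor=%d offset=%d code=%d -> %d degC\n",
	       slope_mul, offset, code, degc);	/* 1000, 470000, 530 -> 60 */
	return 0;
}
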
@@ -1373,12 +1395,16 @@
"qcom,sensor-id", sensor_id, tsens_num_sensors);
if (rc) {
pr_debug("Default sensor id mapping\n");
- for (i = 0; i < tsens_num_sensors; i++)
+ for (i = 0; i < tsens_num_sensors; i++) {
tmdev->sensor[i].sensor_hw_num = i;
+ tmdev->sensor[i].sensor_sw_id = i;
+ }
} else {
pr_debug("Use specified sensor id mapping\n");
- for (i = 0; i < tsens_num_sensors; i++)
+ for (i = 0; i < tsens_num_sensors; i++) {
tmdev->sensor[i].sensor_hw_num = sensor_id[i];
+ tmdev->sensor[i].sensor_sw_id = i;
+ }
}
tmdev->tsens_irq = platform_get_irq(pdev, 0);
diff --git a/drivers/thermal/qpnp-adc-tm.c b/drivers/thermal/qpnp-adc-tm.c
index e7d2e0f..d848a18 100644
--- a/drivers/thermal/qpnp-adc-tm.c
+++ b/drivers/thermal/qpnp-adc-tm.c
@@ -36,6 +36,8 @@
/* QPNP VADC TM register definition */
#define QPNP_REVISION3 0x2
+#define QPNP_PERPH_SUBTYPE 0x5
+#define QPNP_PERPH_TYPE2 0x2
#define QPNP_REVISION_EIGHT_CHANNEL_SUPPORT 2
#define QPNP_STATUS1 0x8
#define QPNP_STATUS1_OP_MODE 4
@@ -366,7 +368,7 @@
static int32_t qpnp_adc_tm_check_revision(uint32_t btm_chan_num)
{
- u8 rev;
+ u8 rev, perph_subtype;
int rc = 0;
rc = qpnp_adc_tm_read_reg(QPNP_REVISION3, &rev);
@@ -375,10 +377,18 @@
return rc;
}
- if ((rev < QPNP_REVISION_EIGHT_CHANNEL_SUPPORT) &&
- (btm_chan_num > QPNP_ADC_TM_M4_ADC_CH_SEL_CTL)) {
- pr_debug("Version does not support more than 5 channels\n");
- return -EINVAL;
+ rc = qpnp_adc_tm_read_reg(QPNP_PERPH_SUBTYPE, &perph_subtype);
+ if (rc) {
+ pr_err("adc-tm perph_subtype read failed\n");
+ return rc;
+ }
+
+ if (perph_subtype == QPNP_PERPH_TYPE2) {
+ if ((rev < QPNP_REVISION_EIGHT_CHANNEL_SUPPORT) &&
+ (btm_chan_num > QPNP_ADC_TM_M4_ADC_CH_SEL_CTL)) {
+ pr_debug("Version does not support more than 5 channels\n");
+ return -EINVAL;
+ }
}
return rc;
@@ -1584,6 +1594,8 @@
adc_tm->sensor[sen_idx].sensor_num = sen_idx;
pr_debug("btm_chan:%x, vadc_chan:%x\n", btm_channel_num,
adc_tm->adc->adc_channels[sen_idx].channel_num);
+ thermal_node = of_property_read_bool(child,
+ "qcom,thermal-node");
if (thermal_node) {
/* Register with the thermal zone */
pr_debug("thermal node%x\n", btm_channel_num);
diff --git a/drivers/tty/serial/msm_serial_hs.c b/drivers/tty/serial/msm_serial_hs.c
index ad7c702..f695870 100644
--- a/drivers/tty/serial/msm_serial_hs.c
+++ b/drivers/tty/serial/msm_serial_hs.c
@@ -2340,9 +2340,7 @@
msm_hs_start_rx_locked(uport);
spin_unlock_irqrestore(&uport->lock, flags);
- ret = pm_runtime_set_active(uport->dev);
- if (ret)
- dev_err(uport->dev, "set active error:%d\n", ret);
+
pm_runtime_enable(uport->dev);
return 0;
@@ -3180,7 +3178,6 @@
cancel_delayed_work_sync(&msm_uport->rx.flip_insert_work);
flush_workqueue(msm_uport->hsuart_wq);
pm_runtime_disable(uport->dev);
- pm_runtime_set_suspended(uport->dev);
/* Disable the transmitter */
msm_hs_write(uport, UARTDM_CR_ADDR, UARTDM_CR_TX_DISABLE_BMSK);
diff --git a/drivers/tty/smux_ctl.c b/drivers/tty/smux_ctl.c
index 2e091cc..1b3a7abe 100644
--- a/drivers/tty/smux_ctl.c
+++ b/drivers/tty/smux_ctl.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -940,6 +940,7 @@
static int smux_ctl_remove(struct platform_device *pdev)
{
int i;
+ int ret;
SMUXCTL_DBG(SMUX_CTL_MODULE_NAME ": %s Begins\n", __func__);
@@ -950,6 +951,13 @@
devp->abort_wait = 1;
wake_up(&devp->write_wait_queue);
wake_up(&devp->read_wait_queue);
+
+ if (atomic_read(&devp->ref_count)) {
+ ret = msm_smux_close(devp->id);
+ if (ret)
+ pr_err("%s: unable to close ch %d, ret %d\n",
+ __func__, devp->id, ret);
+ }
mutex_unlock(&devp->dev_lock);
/* Empty RX queue */
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index 6619e96..fab5219 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -475,6 +475,7 @@
void *mem;
u8 mode;
+ bool host_only_mode;
mem = devm_kzalloc(dev, sizeof(*dwc) + DWC3_ALIGN_MASK, GFP_KERNEL);
if (!mem) {
@@ -487,7 +488,7 @@
if (!dev->dma_mask)
dev->dma_mask = &dwc3_dma_mask;
if (!dev->coherent_dma_mask)
- dev->coherent_dma_mask = DMA_BIT_MASK(32);
+ dev->coherent_dma_mask = DMA_BIT_MASK(64);
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
if (!res) {
@@ -548,6 +549,7 @@
dwc->maximum_speed = DWC3_DCFG_SUPERSPEED;
dwc->needs_fifo_resize = of_property_read_bool(node, "tx-fifo-resize");
+ host_only_mode = of_property_read_bool(node, "host-only-mode");
pm_runtime_no_callbacks(dev);
pm_runtime_set_active(dev);
@@ -561,6 +563,12 @@
mode = DWC3_MODE(dwc->hwparams.hwparams0);
+ /* Override mode if user selects host-only config with DRD core */
+ if (host_only_mode && (mode == DWC3_MODE_DRD)) {
+ dev_dbg(dev, "host only mode selected\n");
+ mode = DWC3_MODE_HOST;
+ }
+
switch (mode) {
case DWC3_MODE_DEVICE:
dwc3_set_mode(dwc, DWC3_GCTL_PRTCAP_DEVICE);
diff --git a/drivers/usb/dwc3/debugfs.c b/drivers/usb/dwc3/debugfs.c
index 93504eb..df95646 100644
--- a/drivers/usb/dwc3/debugfs.c
+++ b/drivers/usb/dwc3/debugfs.c
@@ -693,9 +693,10 @@
list_for_each(ptr, &dep->request_list) {
req = list_entry(ptr, struct dwc3_request, list);
- seq_printf(s, "req:0x%p len: %d sts: %d dma:0x%x num_sgs: %d\n",
+ seq_printf(s,
+ "req:0x%p len: %d sts: %d dma:0x%pa num_sgs: %d\n",
req, req->request.length, req->request.status,
- req->request.dma, req->request.num_sgs);
+ &req->request.dma, req->request.num_sgs);
}
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -731,9 +732,10 @@
list_for_each(ptr, &dep->req_queued) {
req = list_entry(ptr, struct dwc3_request, list);
- seq_printf(s, "req:0x%p len:%d sts:%d dma:%x nsg:%d trb:0x%p\n",
+ seq_printf(s,
+ "req:0x%p len:%d sts:%d dma:%pa nsg:%d trb:0x%p\n",
req, req->request.length, req->request.status,
- req->request.dma, req->request.num_sgs, req->trb);
+ &req->request.dma, req->request.num_sgs, req->trb);
}
spin_unlock_irqrestore(&dwc->lock, flags);
diff --git a/drivers/usb/dwc3/dwc3-msm.c b/drivers/usb/dwc3/dwc3-msm.c
index 7a6765b..924e8f4 100644
--- a/drivers/usb/dwc3/dwc3-msm.c
+++ b/drivers/usb/dwc3/dwc3-msm.c
@@ -127,6 +127,8 @@
#define DBM_TRB_DMA 0x20000000
#define DBM_TRB_EP_NUM(ep) (ep<<24)
+#define USB3_PORTSC (0x430)
+#define PORT_PE (0x1 << 1)
/**
* USB QSCRATCH Hardware registers
*
@@ -177,6 +179,9 @@
struct regulator *hsusb_vddcx;
struct regulator *ssusb_1p8;
struct regulator *ssusb_vddcx;
+
+ /* VBUS regulator if no OTG and running in host only mode */
+ struct regulator *vbus_otg;
struct dwc3_ext_xceiv ext_xceiv;
bool resume_pending;
atomic_t pm_suspended;
@@ -210,6 +215,9 @@
bool vbus_active;
bool ext_inuse;
enum dwc3_id_state id_state;
+ unsigned long lpm_flags;
+#define MDWC3_CORECLK_OFF BIT(0)
+#define MDWC3_TCXO_SHUTDOWN BIT(1)
};
#define USB_HSPHY_3P3_VOL_MIN 3050000 /* uV */
@@ -1585,6 +1593,7 @@
int ret;
bool dcp;
bool host_bus_suspend;
+ bool host_ss_active;
dev_dbg(mdwc->dev, "%s: entering lpm\n", __func__);
@@ -1593,6 +1602,7 @@
return 0;
}
+ host_ss_active = dwc3_msm_read_reg(mdwc->base, USB3_PORTSC) & PORT_PE;
if (mdwc->hs_phy_irq)
disable_irq(mdwc->hs_phy_irq);
@@ -1606,7 +1616,8 @@
0x37, 0x0);
}
- dcp = mdwc->charger.chg_type == DWC3_DCP_CHARGER;
+ dcp = ((mdwc->charger.chg_type == DWC3_DCP_CHARGER) ||
+ (mdwc->charger.chg_type == DWC3_PROPRIETARY_CHARGER));
host_bus_suspend = mdwc->host_mode == 1;
/* Sequence to put SSPHY in low power state:
@@ -1661,14 +1672,19 @@
/* make sure above writes are completed before turning off clocks */
wmb();
- clk_disable_unprepare(mdwc->core_clk);
+ if (!host_bus_suspend || !host_ss_active) {
+ clk_disable_unprepare(mdwc->core_clk);
+ mdwc->lpm_flags |= MDWC3_CORECLK_OFF;
+ }
clk_disable_unprepare(mdwc->iface_clk);
- if (!host_bus_suspend) {
+ if (!host_bus_suspend)
clk_disable_unprepare(mdwc->utmi_clk);
+ if (!host_bus_suspend) {
/* USB PHY no more requires TCXO */
clk_disable_unprepare(mdwc->xo_clk);
+ mdwc->lpm_flags |= MDWC3_TCXO_SHUTDOWN;
}
if (mdwc->bus_perf_client) {
@@ -1684,15 +1700,19 @@
dwc3_ssusb_ldo_enable(0);
dwc3_ssusb_config_vddcx(0);
- if (!host_bus_suspend)
+ if (!host_bus_suspend && !dcp)
dwc3_hsusb_config_vddcx(0);
wake_unlock(&mdwc->wlock);
atomic_set(&mdwc->in_lpm, 1);
dev_info(mdwc->dev, "DWC3 in low power mode\n");
- if (mdwc->hs_phy_irq)
+ if (mdwc->hs_phy_irq) {
enable_irq(mdwc->hs_phy_irq);
+ /* with DCP we dont require wakeup using HS_PHY_IRQ */
+ if (dcp)
+ disable_irq_wake(mdwc->hs_phy_irq);
+ }
return 0;
}
@@ -1719,17 +1739,22 @@
dev_err(mdwc->dev, "Failed to vote for bus scaling\n");
}
- dcp = mdwc->charger.chg_type == DWC3_DCP_CHARGER;
+ dcp = ((mdwc->charger.chg_type == DWC3_DCP_CHARGER) ||
+ (mdwc->charger.chg_type == DWC3_PROPRIETARY_CHARGER));
host_bus_suspend = mdwc->host_mode == 1;
- if (!host_bus_suspend) {
+ if (mdwc->lpm_flags & MDWC3_TCXO_SHUTDOWN) {
/* Vote for TCXO while waking up USB HSPHY */
ret = clk_prepare_enable(mdwc->xo_clk);
if (ret)
dev_err(mdwc->dev, "%s failed to vote TCXO buffer%d\n",
__func__, ret);
+ mdwc->lpm_flags &= ~MDWC3_TCXO_SHUTDOWN;
}
+ if (!host_bus_suspend)
+ clk_prepare_enable(mdwc->utmi_clk);
+
if (mdwc->otg_xceiv && mdwc->ext_xceiv.otg_capability && !dcp &&
!host_bus_suspend)
dwc3_hsusb_ldo_enable(1);
@@ -1737,16 +1762,17 @@
dwc3_ssusb_ldo_enable(1);
dwc3_ssusb_config_vddcx(1);
- if (!host_bus_suspend) {
+ if (!host_bus_suspend && !dcp)
dwc3_hsusb_config_vddcx(1);
- clk_prepare_enable(mdwc->utmi_clk);
- }
clk_prepare_enable(mdwc->ref_clk);
usleep_range(1000, 1200);
clk_prepare_enable(mdwc->iface_clk);
- clk_prepare_enable(mdwc->core_clk);
+ if (mdwc->lpm_flags & MDWC3_CORECLK_OFF) {
+ clk_prepare_enable(mdwc->core_clk);
+ mdwc->lpm_flags &= ~MDWC3_CORECLK_OFF;
+ }
if (host_bus_suspend) {
/* Disable HV interrupt */
@@ -1809,6 +1835,9 @@
enable_irq(mdwc->hs_phy_irq);
mdwc->lpm_irq_seen = false;
}
+ /* it must be a DCP disconnect; re-enable HS_PHY wakeup IRQ */
+ if (mdwc->hs_phy_irq && dcp)
+ enable_irq_wake(mdwc->hs_phy_irq);
dev_info(mdwc->dev, "DWC3 exited from low power mode\n");
@@ -1838,7 +1867,7 @@
if (mdwc->otg_xceiv)
mdwc->ext_xceiv.notify_ext_events(mdwc->otg_xceiv->otg,
DWC3_EVENT_PHY_RESUME);
- pm_runtime_put_sync(mdwc->dev);
+ pm_runtime_put_noidle(mdwc->dev);
if (mdwc->otg_xceiv && (mdwc->ext_xceiv.otg_capability))
mdwc->ext_xceiv.notify_ext_events(mdwc->otg_xceiv->otg,
DWC3_EVENT_XCEIV_STATE);
@@ -2529,24 +2558,28 @@
goto disable_hs_ldo;
}
- msm->usb_psy.name = "usb";
- msm->usb_psy.type = POWER_SUPPLY_TYPE_USB;
- msm->usb_psy.supplied_to = dwc3_msm_pm_power_supplied_to;
- msm->usb_psy.num_supplicants = ARRAY_SIZE(
- dwc3_msm_pm_power_supplied_to);
- msm->usb_psy.properties = dwc3_msm_pm_power_props_usb;
- msm->usb_psy.num_properties = ARRAY_SIZE(dwc3_msm_pm_power_props_usb);
- msm->usb_psy.get_property = dwc3_msm_power_get_property_usb;
- msm->usb_psy.set_property = dwc3_msm_power_set_property_usb;
- msm->usb_psy.external_power_changed =
- dwc3_msm_external_power_changed;
+ /* usb_psy required only for vbus_notifications or charging support */
+ if (msm->ext_xceiv.otg_capability || !msm->charger.charging_disabled) {
+ msm->usb_psy.name = "usb";
+ msm->usb_psy.type = POWER_SUPPLY_TYPE_USB;
+ msm->usb_psy.supplied_to = dwc3_msm_pm_power_supplied_to;
+ msm->usb_psy.num_supplicants = ARRAY_SIZE(
+ dwc3_msm_pm_power_supplied_to);
+ msm->usb_psy.properties = dwc3_msm_pm_power_props_usb;
+ msm->usb_psy.num_properties =
+ ARRAY_SIZE(dwc3_msm_pm_power_props_usb);
+ msm->usb_psy.get_property = dwc3_msm_power_get_property_usb;
+ msm->usb_psy.set_property = dwc3_msm_power_set_property_usb;
+ msm->usb_psy.external_power_changed =
+ dwc3_msm_external_power_changed;
- ret = power_supply_register(&pdev->dev, &msm->usb_psy);
- if (ret < 0) {
- dev_err(&pdev->dev,
- "%s:power_supply_register usb failed\n",
- __func__);
- goto disable_hs_ldo;
+ ret = power_supply_register(&pdev->dev, &msm->usb_psy);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "%s:power_supply_register usb failed\n",
+ __func__);
+ goto disable_hs_ldo;
+ }
}
if (node) {
@@ -2571,7 +2604,8 @@
}
msm->otg_xceiv = usb_get_transceiver();
- if (msm->otg_xceiv) {
+ /* Register with OTG if present, ignore USB2 OTG using other PHY */
+ if (msm->otg_xceiv && !(msm->otg_xceiv->flags & ENABLE_SECONDARY_PHY)) {
msm->charger.start_detection = dwc3_start_chg_det;
ret = dwc3_set_charger(msm->otg_xceiv->otg, &msm->charger);
if (ret || !msm->charger.notify_detection_complete) {
@@ -2589,7 +2623,20 @@
goto put_xcvr;
}
} else {
- dev_err(&pdev->dev, "%s: No OTG transceiver found\n", __func__);
+ dev_dbg(&pdev->dev, "No OTG, DWC3 running in host only mode\n");
+ msm->host_mode = 1;
+ msm->vbus_otg = devm_regulator_get(&pdev->dev, "vbus_dwc3");
+ if (IS_ERR(msm->vbus_otg)) {
+ dev_dbg(&pdev->dev, "Failed to get vbus regulator\n");
+ msm->vbus_otg = 0;
+ } else {
+ ret = regulator_enable(msm->vbus_otg);
+ if (ret) {
+ msm->vbus_otg = 0;
+ dev_err(&pdev->dev, "Failed to enable vbus_otg\n");
+ }
+ }
+ msm->otg_xceiv = NULL;
}
wake_lock_init(&msm->wlock, WAKE_LOCK_SUSPEND, "msm_dwc3");
@@ -2601,7 +2648,8 @@
put_xcvr:
usb_put_transceiver(msm->otg_xceiv);
put_psupply:
- power_supply_unregister(&msm->usb_psy);
+ if (msm->usb_psy.dev)
+ power_supply_unregister(&msm->usb_psy);
disable_hs_ldo:
dwc3_hsusb_ldo_enable(0);
free_hs_ldo_init:
@@ -2650,6 +2698,10 @@
dwc3_start_chg_det(&msm->charger, false);
usb_put_transceiver(msm->otg_xceiv);
}
+ if (msm->usb_psy.dev)
+ power_supply_unregister(&msm->usb_psy);
+ if (msm->vbus_otg)
+ regulator_disable(msm->vbus_otg);
pm_runtime_disable(msm->dev);
wake_lock_destroy(&msm->wlock);
diff --git a/drivers/usb/dwc3/dwc3_otg.c b/drivers/usb/dwc3/dwc3_otg.c
index 1d67cee..a3b2617 100644
--- a/drivers/usb/dwc3/dwc3_otg.c
+++ b/drivers/usb/dwc3/dwc3_otg.c
@@ -198,8 +198,6 @@
} else {
dev_dbg(otg->phy->dev, "%s: turn off host\n", __func__);
- platform_device_del(dwc->xhci);
-
ret = regulator_disable(dotg->vbus_otg);
if (ret) {
dev_err(otg->phy->dev, "unable to disable vbus_otg\n");
@@ -207,6 +205,7 @@
}
dwc3_otg_notify_host_mode(otg, on);
+ platform_device_del(dwc->xhci);
/*
* Perform USB hardware RESET (both core reset and DBM reset)
* when moving from host to peripheral. This is required for
diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
index d6d8a76..420d030 100644
--- a/drivers/usb/dwc3/host.c
+++ b/drivers/usb/dwc3/host.c
@@ -79,6 +79,7 @@
/* Add XHCI device if !OTG, otherwise OTG takes care of this */
if (!dwc->dotg) {
+ xhci->dev.parent = dwc->dev;
ret = platform_device_add(xhci);
if (ret) {
dev_err(dwc->dev, "failed to register xHCI device\n");
diff --git a/drivers/usb/gadget/ci13xxx_udc.c b/drivers/usb/gadget/ci13xxx_udc.c
index 6fca910..d0ebda1 100644
--- a/drivers/usb/gadget/ci13xxx_udc.c
+++ b/drivers/usb/gadget/ci13xxx_udc.c
@@ -445,7 +445,8 @@
int n = hw_ep_bit(num, dir);
struct ci13xxx_ep *mEp = &_udc->ci13xxx_ep[n];
- if (_udc->skip_flush || list_empty(&mEp->qh.queue))
+ /* Flush ep0 even when queue is empty */
+ if (_udc->skip_flush || (num && list_empty(&mEp->qh.queue)))
return 0;
start = ktime_get();
@@ -2107,7 +2108,18 @@
if (mReq->zptr) {
if ((TD_STATUS_ACTIVE & mReq->zptr->token) != 0)
return -EBUSY;
- dma_pool_free(mEp->td_pool, mReq->zptr, mReq->zdma);
+
+ /* The controller may access this dTD one more time.
+ * Defer freeing it until the next zero-length dTD completion.
+ * It is safe to assume that the controller will no longer
+ * access the previous dTD after the next dTD completion.
+ */
+ if (mEp->last_zptr)
+ dma_pool_free(mEp->td_pool, mEp->last_zptr,
+ mEp->last_zdma);
+ mEp->last_zptr = mReq->zptr;
+ mEp->last_zdma = mReq->zdma;
+
mReq->zptr = NULL;
}
@@ -2259,9 +2271,10 @@
usb_ep_fifo_flush(&udc->ep0out.ep);
usb_ep_fifo_flush(&udc->ep0in.ep);
- if (udc->status != NULL) {
- usb_ep_free_request(&udc->ep0in.ep, udc->status);
- udc->status = NULL;
+ if (udc->ep0in.last_zptr) {
+ dma_pool_free(udc->ep0in.td_pool, udc->ep0in.last_zptr,
+ udc->ep0in.last_zdma);
+ udc->ep0in.last_zptr = NULL;
}
return 0;
@@ -2316,10 +2329,6 @@
if (retval)
goto done;
- udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_ATOMIC);
- if (udc->status == NULL)
- retval = -ENOMEM;
-
spin_lock(udc->lock);
done:
@@ -2388,8 +2397,8 @@
return;
}
- kfree(req->buf);
- usb_ep_free_request(ep, req);
+ if (req->status)
+ err("GET_STATUS failed");
}
/**
@@ -2405,8 +2414,7 @@
__acquires(mEp->lock)
{
struct ci13xxx_ep *mEp = &udc->ep0in;
- struct usb_request *req = NULL;
- gfp_t gfp_flags = GFP_ATOMIC;
+ struct usb_request *req = udc->status;
int dir, num, retval;
trace("%p, %p", mEp, setup);
@@ -2414,19 +2422,9 @@
if (mEp == NULL || setup == NULL)
return -EINVAL;
- spin_unlock(mEp->lock);
- req = usb_ep_alloc_request(&mEp->ep, gfp_flags);
- spin_lock(mEp->lock);
- if (req == NULL)
- return -ENOMEM;
-
req->complete = isr_get_status_complete;
req->length = 2;
- req->buf = kzalloc(req->length, gfp_flags);
- if (req->buf == NULL) {
- retval = -ENOMEM;
- goto err_free_req;
- }
+ req->buf = udc->status_buf;
if ((setup->bRequestType & USB_RECIP_MASK) == USB_RECIP_DEVICE) {
if (setup->wIndex == OTG_STATUS_SELECTOR) {
@@ -2449,18 +2447,7 @@
/* else do nothing; reserved for future use */
spin_unlock(mEp->lock);
- retval = usb_ep_queue(&mEp->ep, req, gfp_flags);
- spin_lock(mEp->lock);
- if (retval)
- goto err_free_buf;
-
- return 0;
-
- err_free_buf:
- kfree(req->buf);
- err_free_req:
- spin_unlock(mEp->lock);
- usb_ep_free_request(&mEp->ep, req);
+ retval = usb_ep_queue(&mEp->ep, req, GFP_ATOMIC);
spin_lock(mEp->lock);
return retval;
}
@@ -2503,11 +2490,9 @@
trace("%p", udc);
mEp = (udc->ep0_dir == TX) ? &udc->ep0out : &udc->ep0in;
- if (udc->status) {
- udc->status->context = udc;
- udc->status->complete = isr_setup_status_complete;
- } else
- return -EINVAL;
+ udc->status->context = udc;
+ udc->status->complete = isr_setup_status_complete;
+ udc->status->length = 0;
spin_unlock(mEp->lock);
retval = usb_ep_queue(&mEp->ep, udc->status, GFP_ATOMIC);
@@ -2941,6 +2926,12 @@
} while (mEp->dir != direction);
+ if (mEp->last_zptr) {
+ dma_pool_free(mEp->td_pool, mEp->last_zptr,
+ mEp->last_zdma);
+ mEp->last_zptr = NULL;
+ }
+
mEp->desc = NULL;
mEp->ep.desc = NULL;
mEp->ep.maxpacket = USHRT_MAX;
@@ -3475,6 +3466,14 @@
retval = usb_ep_enable(&udc->ep0in.ep);
if (retval)
return retval;
+ udc->status = usb_ep_alloc_request(&udc->ep0in.ep, GFP_KERNEL);
+ if (!udc->status)
+ return -ENOMEM;
+ udc->status_buf = kzalloc(2, GFP_KERNEL); /* for GET_STATUS */
+ if (!udc->status_buf) {
+ usb_ep_free_request(&udc->ep0in.ep, udc->status);
+ return -ENOMEM;
+ }
spin_lock_irqsave(udc->lock, flags);
udc->gadget.ep0 = &udc->ep0in.ep;
@@ -3558,6 +3557,9 @@
driver->unbind(&udc->gadget); /* MAY SLEEP */
spin_lock_irqsave(udc->lock, flags);
+ usb_ep_free_request(&udc->ep0in.ep, udc->status);
+ kfree(udc->status_buf);
+
udc->gadget.dev.driver = NULL;
/* free resources */
diff --git a/drivers/usb/gadget/ci13xxx_udc.h b/drivers/usb/gadget/ci13xxx_udc.h
index 3145418..1530474 100644
--- a/drivers/usb/gadget/ci13xxx_udc.h
+++ b/drivers/usb/gadget/ci13xxx_udc.h
@@ -115,6 +115,8 @@
spinlock_t *lock;
struct device *device;
struct dma_pool *td_pool;
+ struct ci13xxx_td *last_zptr;
+ dma_addr_t last_zdma;
unsigned long dTD_update_fail_count;
unsigned long prime_fail_count;
int prime_timer_count;
@@ -153,6 +155,7 @@
struct dma_pool *qh_pool; /* DMA pool for queue heads */
struct dma_pool *td_pool; /* DMA pool for transfer descs */
struct usb_request *status; /* ep0 status request */
+ void *status_buf; /* GET_STATUS buffer */
struct usb_gadget gadget; /* USB slave device */
struct ci13xxx_ep ci13xxx_ep[ENDPT_MAX]; /* extended endpts */
diff --git a/drivers/usb/gadget/f_diag.c b/drivers/usb/gadget/f_diag.c
index 3b1843b..3355e19 100644
--- a/drivers/usb/gadget/f_diag.c
+++ b/drivers/usb/gadget/f_diag.c
@@ -139,9 +139,6 @@
* @out_desc: USB OUT endpoint descriptor struct
* @read_pool: List of requests used for Rx (OUT ep)
* @write_pool: List of requests used for Tx (IN ep)
- * @config_work: Work item schedule after interface is configured to notify
- * CONNECT event to diag char driver and updating product id
- * and serial number to MODEM/IMEM.
* @lock: Spinlock to proctect read_pool, write_pool lists
* @cdev: USB composite device struct
* @ch: USB diag channel
@@ -153,7 +150,6 @@
struct usb_ep *in;
struct list_head read_pool;
struct list_head write_pool;
- struct work_struct config_work;
spinlock_t lock;
unsigned configured;
struct usb_composite_dev *cdev;
@@ -176,21 +172,20 @@
return container_of(f, struct diag_context, function);
}
-static void usb_config_work_func(struct work_struct *work)
+static void diag_update_pid_and_serial_num(struct diag_context *ctxt)
{
- struct diag_context *ctxt = container_of(work,
- struct diag_context, config_work);
struct usb_composite_dev *cdev = ctxt->cdev;
struct usb_gadget_strings *table;
struct usb_string *s;
- if (!ctxt->ch)
+ if (!ctxt->update_pid_and_serial_num)
return;
- if (ctxt->ch->notify)
- ctxt->ch->notify(ctxt->ch->priv, USB_DIAG_CONNECT, NULL);
-
- if (!ctxt->update_pid_and_serial_num)
+ /*
+ * update pid and serial number to dload only if the diag
+ * interface is the zeroth interface.
+ */
+ if (intf_desc.bInterfaceNumber)
return;
/* pass on product id and serial number to dload */
@@ -612,7 +607,6 @@
usb_ep_disable(dev->in);
return rc;
}
- schedule_work(&dev->config_work);
dev->dpkts_tolaptop = 0;
dev->dpkts_tomodem = 0;
@@ -622,6 +616,9 @@
dev->configured = 1;
spin_unlock_irqrestore(&dev->lock, flags);
+ if (dev->ch->notify)
+ dev->ch->notify(dev->ch->priv, USB_DIAG_CONNECT, NULL);
+
return rc;
}
@@ -699,6 +696,7 @@
if (!f->ss_descriptors)
goto fail;
}
+ diag_update_pid_and_serial_num(ctxt);
return 0;
fail:
if (f->ss_descriptors)
@@ -761,7 +759,6 @@
spin_lock_init(&dev->lock);
INIT_LIST_HEAD(&dev->read_pool);
INIT_LIST_HEAD(&dev->write_pool);
- INIT_WORK(&dev->config_work, usb_config_work_func);
ret = usb_add_function(c, &dev->function);
if (ret) {
diff --git a/drivers/usb/gadget/f_qc_ecm.c b/drivers/usb/gadget/f_qc_ecm.c
index 51f0e50..8e7cbb2 100644
--- a/drivers/usb/gadget/f_qc_ecm.c
+++ b/drivers/usb/gadget/f_qc_ecm.c
@@ -430,8 +430,6 @@
bam_data_disconnect(&ecm_qc_bam_port, 0);
- ecm_ipa_cleanup(ipa_params.ipa_priv);
-
return 0;
}
@@ -849,6 +847,10 @@
usb_ep_free_request(ecm->notify, ecm->notify_req);
ecm_qc_string_defs[1].s = NULL;
+
+ if (ecm->xport == USB_GADGET_XPORT_BAM2BAM_IPA)
+ ecm_ipa_cleanup(ipa_params.ipa_priv);
+
kfree(ecm);
}
diff --git a/drivers/usb/gadget/f_rndis.c b/drivers/usb/gadget/f_rndis.c
index f3c6481..1017900 100644
--- a/drivers/usb/gadget/f_rndis.c
+++ b/drivers/usb/gadget/f_rndis.c
@@ -25,11 +25,6 @@
#include "u_ether.h"
#include "rndis.h"
-static bool rndis_multipacket_dl_disable;
-module_param(rndis_multipacket_dl_disable, bool, S_IRUGO|S_IWUSR);
-MODULE_PARM_DESC(rndis_multipacket_dl_disable,
- "Disable RNDIS Multi-packet support in DownLink");
-
/*
* This function is an RNDIS Ethernet port -- a Microsoft protocol that's
* been promoted instead of the standard CDC Ethernet. The published RNDIS
@@ -71,6 +66,16 @@
* - MS-Windows drivers sometimes emit undocumented requests.
*/
+static bool rndis_multipacket_dl_disable;
+module_param(rndis_multipacket_dl_disable, bool, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(rndis_multipacket_dl_disable,
+ "Disable RNDIS Multi-packet support in DownLink");
+
+static unsigned int rndis_ul_max_pkt_per_xfer = 1;
+module_param(rndis_ul_max_pkt_per_xfer, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(rndis_ul_max_pkt_per_xfer,
+ "Disable RNDIS Multi-packet support in DownLink");
+
struct f_rndis {
struct gether port;
u8 ctrl_id, data_id;
@@ -799,6 +804,7 @@
rndis_set_param_medium(rndis->config, NDIS_MEDIUM_802_3, 0);
rndis_set_host_mac(rndis->config, rndis->ethaddr);
+ rndis_set_max_pkt_xfer(rndis->config, rndis_ul_max_pkt_per_xfer);
if (rndis->manufacturer && rndis->vendorID &&
rndis_set_param_vendor(rndis->config, rndis->vendorID,
@@ -945,6 +951,7 @@
rndis->port.header_len = sizeof(struct rndis_packet_msg_type);
rndis->port.wrap = rndis_add_header;
rndis->port.unwrap = rndis_rm_hdr;
+ rndis->port.ul_max_pkts_per_xfer = rndis_ul_max_pkt_per_xfer;
rndis->port.func.name = "rndis";
rndis->port.func.strings = rndis_strings;
diff --git a/drivers/usb/gadget/rndis.c b/drivers/usb/gadget/rndis.c
index 7ac5b64..07f4d26 100644
--- a/drivers/usb/gadget/rndis.c
+++ b/drivers/usb/gadget/rndis.c
@@ -59,6 +59,16 @@
#define RNDIS_MAX_CONFIGS 1
+int rndis_ul_max_pkt_per_xfer_rcvd;
+module_param(rndis_ul_max_pkt_per_xfer_rcvd, int, S_IRUGO);
+MODULE_PARM_DESC(rndis_ul_max_pkt_per_xfer_rcvd,
+ "Max num of REMOTE_NDIS_PACKET_MSGs received in a single transfer");
+
+int rndis_ul_max_xfer_size_rcvd;
+module_param(rndis_ul_max_xfer_size_rcvd, int, S_IRUGO);
+MODULE_PARM_DESC(rndis_ul_max_xfer_size_rcvd,
+ "Max size of bus transfer received");
+
static rndis_params rndis_per_dev_params[RNDIS_MAX_CONFIGS];
@@ -590,7 +600,6 @@
(params->dev->mtu
+ sizeof(struct ethhdr)
+ sizeof(struct rndis_packet_msg_type)
-
+ 22));
resp->PacketAlignmentFactor = cpu_to_le32(params->pkt_alignment_factor);
resp->AFListOffset = cpu_to_le32(0);
@@ -1051,25 +1060,77 @@
struct sk_buff *skb,
struct sk_buff_head *list)
{
- /* tmp points to a struct rndis_packet_msg_type */
- __le32 *tmp = (void *)skb->data;
+ int num_pkts = 1;
- /* MessageType, MessageLength */
- if (cpu_to_le32(REMOTE_NDIS_PACKET_MSG)
- != get_unaligned(tmp++)) {
- dev_kfree_skb_any(skb);
- return -EINVAL;
- }
- tmp++;
+ if (skb->len > rndis_ul_max_xfer_size_rcvd)
+ rndis_ul_max_xfer_size_rcvd = skb->len;
- /* DataOffset, DataLength */
- if (!skb_pull(skb, get_unaligned_le32(tmp++) + 8)) {
- dev_kfree_skb_any(skb);
- return -EOVERFLOW;
+ while (skb->len) {
+ struct rndis_packet_msg_type *hdr;
+ struct sk_buff *skb2;
+ u32 msg_len, data_offset, data_len;
+
+ /* some RNDIS hosts send an extra byte to avoid a zlp; ignore it */
+ if (skb->len == 1) {
+ dev_kfree_skb_any(skb);
+ return 0;
+ }
+
+ if (skb->len < sizeof *hdr) {
+ pr_err("invalid rndis pkt: skblen:%u hdr_len:%u",
+ skb->len, sizeof *hdr);
+ dev_kfree_skb_any(skb);
+ return -EINVAL;
+ }
+
+ hdr = (void *)skb->data;
+ msg_len = le32_to_cpu(hdr->MessageLength);
+ data_offset = le32_to_cpu(hdr->DataOffset);
+ data_len = le32_to_cpu(hdr->DataLength);
+
+ if (skb->len < msg_len ||
+ ((data_offset + data_len + 8) > msg_len)) {
+ pr_err("invalid rndis message: %d/%d/%d/%d, len:%d\n",
+ le32_to_cpu(hdr->MessageType),
+ msg_len, data_offset, data_len, skb->len);
+ dev_kfree_skb_any(skb);
+ return -EOVERFLOW;
+ }
+
+ if (le32_to_cpu(hdr->MessageType) != REMOTE_NDIS_PACKET_MSG) {
+ pr_err("invalid rndis message: %d/%d/%d/%d, len:%d\n",
+ le32_to_cpu(hdr->MessageType),
+ msg_len, data_offset, data_len, skb->len);
+ dev_kfree_skb_any(skb);
+ return -EINVAL;
+ }
+
+ skb_pull(skb, data_offset + 8);
+
+ if (msg_len == skb->len) {
+ skb_trim(skb, data_len);
+ break;
+ }
+
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+ if (!skb2) {
+ pr_err("%s:skb clone failed\n", __func__);
+ dev_kfree_skb_any(skb);
+ return -ENOMEM;
+ }
+
+ skb_pull(skb, msg_len - sizeof *hdr);
+ skb_trim(skb2, data_len);
+ skb_queue_tail(list, skb2);
+
+ num_pkts++;
}
- skb_trim(skb, get_unaligned_le32(tmp++));
+
+ if (num_pkts > rndis_ul_max_pkt_per_xfer_rcvd)
+ rndis_ul_max_pkt_per_xfer_rcvd = num_pkts;
skb_queue_tail(list, skb);
+
return 0;
}
diff --git a/drivers/usb/gadget/rndis.h b/drivers/usb/gadget/rndis.h
index 8a6a630..d00f81f 100644
--- a/drivers/usb/gadget/rndis.h
+++ b/drivers/usb/gadget/rndis.h
@@ -252,6 +252,7 @@
int rndis_set_param_vendor (u8 configNr, u32 vendorID,
const char *vendorDescr);
int rndis_set_param_medium (u8 configNr, u32 medium, u32 speed);
+void rndis_set_max_pkt_xfer(u8 configNr, u8 max_pkt_per_xfer);
void rndis_add_hdr (struct sk_buff *skb);
int rndis_rm_hdr(struct gether *port, struct sk_buff *skb,
struct sk_buff_head *list);
diff --git a/drivers/usb/gadget/u_bam.c b/drivers/usb/gadget/u_bam.c
index 3768b46..c05f683 100644
--- a/drivers/usb/gadget/u_bam.c
+++ b/drivers/usb/gadget/u_bam.c
@@ -717,6 +717,7 @@
ipa_notify_cb usb_notify_cb;
void *priv;
int ret;
+ unsigned long flags;
if (d->trans == USB_GADGET_XPORT_BAM2BAM) {
ret = usb_bam_connect(d->src_connection_idx, &d->src_pipe_idx);
@@ -769,9 +770,21 @@
}
}
- d->rx_req = usb_ep_alloc_request(port->port_usb->out, GFP_KERNEL);
- if (!d->rx_req)
+ spin_lock_irqsave(&port->port_lock_ul, flags);
+ spin_lock(&port->port_lock_dl);
+ if (!port->port_usb) {
+ pr_debug("%s: usb cable is disconnected, exiting\n", __func__);
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
return;
+ }
+ d->rx_req = usb_ep_alloc_request(port->port_usb->out, GFP_ATOMIC);
+ if (!d->rx_req) {
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
+ pr_err("%s: out of memory\n", __func__);
+ return;
+ }
d->rx_req->context = port;
d->rx_req->complete = gbam_endless_rx_complete;
@@ -779,9 +792,14 @@
sps_params = (MSM_SPS_MODE | d->src_pipe_idx |
MSM_VENDOR_ID) & ~MSM_IS_FINITE_TRANSFER;
d->rx_req->udc_priv = sps_params;
- d->tx_req = usb_ep_alloc_request(port->port_usb->in, GFP_KERNEL);
- if (!d->tx_req)
+
+ d->tx_req = usb_ep_alloc_request(port->port_usb->in, GFP_ATOMIC);
+ spin_unlock(&port->port_lock_dl);
+ spin_unlock_irqrestore(&port->port_lock_ul, flags);
+ if (!d->tx_req) {
+ pr_err("%s: out of memory\n", __func__);
return;
+ }
d->tx_req->context = port;
d->tx_req->complete = gbam_endless_tx_complete;
diff --git a/drivers/usb/gadget/u_ether.c b/drivers/usb/gadget/u_ether.c
index 2d974ab..d4c21dd 100644
--- a/drivers/usb/gadget/u_ether.c
+++ b/drivers/usb/gadget/u_ether.c
@@ -70,6 +70,7 @@
struct sk_buff_head rx_frames;
unsigned header_len;
+ unsigned ul_max_pkts_per_xfer;
struct sk_buff *(*wrap)(struct gether *, struct sk_buff *skb);
int (*unwrap)(struct gether *,
struct sk_buff *skb,
@@ -233,9 +234,13 @@
size += out->maxpacket - 1;
size -= size % out->maxpacket;
+ if (dev->ul_max_pkts_per_xfer)
+ size *= dev->ul_max_pkts_per_xfer;
+
if (dev->port_usb->is_fixed)
size = max_t(size_t, size, dev->port_usb->fixed_out_len);
+ pr_debug("%s: size: %d", __func__, size);
skb = alloc_skb(size + NET_IP_ALIGN, gfp_flags);
if (skb == NULL) {
DBG(dev, "no rx skb\n");
@@ -1076,6 +1081,7 @@
dev->header_len = link->header_len;
dev->unwrap = link->unwrap;
dev->wrap = link->wrap;
+ dev->ul_max_pkts_per_xfer = link->ul_max_pkts_per_xfer;
spin_lock(&dev->lock);
dev->tx_skb_hold_count = 0;
diff --git a/drivers/usb/gadget/u_ether.h b/drivers/usb/gadget/u_ether.h
index faa9a3b..7040ab0 100644
--- a/drivers/usb/gadget/u_ether.h
+++ b/drivers/usb/gadget/u_ether.h
@@ -53,6 +53,8 @@
bool is_fixed;
u32 fixed_out_len;
u32 fixed_in_len;
+
+ unsigned ul_max_pkts_per_xfer;
/* Max number of SKB packets to be used to create Multi Packet RNDIS */
#define TX_SKB_HOLD_THRESHOLD 3
bool multi_pkt_xfer;
diff --git a/drivers/usb/gadget/u_rmnet_ctrl_qti.c b/drivers/usb/gadget/u_rmnet_ctrl_qti.c
index e92978f..182cd40 100644
--- a/drivers/usb/gadget/u_rmnet_ctrl_qti.c
+++ b/drivers/usb/gadget/u_rmnet_ctrl_qti.c
@@ -259,20 +259,6 @@
return -EBUSY;
}
- /* block until online */
- while (!(atomic_read(&port->connected))) {
- pr_debug("Not connected. Wait.\n");
- ret = wait_event_interruptible(port->read_wq,
- atomic_read(&port->connected));
- if (ret < 0) {
- rmnet_ctrl_unlock(&port->read_excl);
- if (ret == -ERESTARTSYS)
- return -ERESTARTSYS;
- else
- return -EINTR;
- }
- }
-
/* block until a new packet is available */
do {
spin_lock_irqsave(&port->lock, flags);
diff --git a/drivers/usb/host/ehci-msm-hsic.c b/drivers/usb/host/ehci-msm-hsic.c
index 4a085aa..ede8bdb 100644
--- a/drivers/usb/host/ehci-msm-hsic.c
+++ b/drivers/usb/host/ehci-msm-hsic.c
@@ -78,6 +78,7 @@
struct clk *alt_core_clk;
struct clk *phy_clk;
struct clk *cal_clk;
+ struct clk *inactivity_clk;
struct regulator *hsic_vddcx;
struct regulator *hsic_gdsc;
atomic_t async_int;
@@ -575,6 +576,8 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
ret = clk_reset(mehci->core_clk, CLK_RESET_ASSERT);
if (ret) {
@@ -596,6 +599,8 @@
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
}
}
@@ -794,6 +799,8 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
none_vol = vdd_val[mehci->vdd_type][VDD_NONE];
max_vol = vdd_val[mehci->vdd_type][VDD_MAX];
@@ -876,6 +883,8 @@
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
temp = readl_relaxed(USB_USBCMD);
temp &= ~ASYNC_INTR_CTRL;
@@ -1507,10 +1516,21 @@
goto put_cal_clk;
}
+ /*
+ * The inactivity_clk is required for the HSIC BAM inactivity timer.
+ * This clock is optional and is defined in the clock lookup only
+ * for targets that need the inactivity timer feature.
+ */
+ mehci->inactivity_clk = clk_get(mehci->dev, "inactivity_clk");
+ if (IS_ERR(mehci->inactivity_clk))
+ dev_dbg(mehci->dev, "failed to get inactivity_clk\n");
+
clk_prepare_enable(mehci->core_clk);
clk_prepare_enable(mehci->phy_clk);
clk_prepare_enable(mehci->cal_clk);
clk_prepare_enable(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_prepare_enable(mehci->inactivity_clk);
return 0;
@@ -1520,7 +1540,11 @@
clk_disable_unprepare(mehci->phy_clk);
clk_disable_unprepare(mehci->cal_clk);
clk_disable_unprepare(mehci->ahb_clk);
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_disable_unprepare(mehci->inactivity_clk);
}
+ if (!IS_ERR(mehci->inactivity_clk))
+ clk_put(mehci->inactivity_clk);
clk_put(mehci->ahb_clk);
put_cal_clk:
clk_put(mehci->cal_clk);
@@ -1827,6 +1851,8 @@
"qcom,pool-64-bit-align");
pdata->enable_hbm = of_property_read_bool(node,
"qcom,enable-hbm");
+ pdata->disable_park_mode = (of_property_read_bool(node,
+ "qcom,disable-park-mode"));
return pdata;
}
@@ -2064,7 +2090,7 @@
pm_runtime_put_sync(pdev->dev.parent);
if (mehci->enable_hbm)
- hbm_init(hcd);
+ hbm_init(hcd, pdata->disable_park_mode);
return 0;
diff --git a/drivers/usb/host/hbm.c b/drivers/usb/host/hbm.c
index 1a0c0aa..d34301d 100644
--- a/drivers/usb/host/hbm.c
+++ b/drivers/usb/host/hbm.c
@@ -44,6 +44,7 @@
struct hbm_msm {
u32 *base;
struct usb_hcd *hcd;
+ bool disable_park_mode;
};
static struct hbm_msm *hbm_ctx;
@@ -173,8 +174,8 @@
USB_OTG_HS_HBM_PIPE_PRODUCER, 1 << pipe_num,
(is_consumer ? 0 : 1));
- /* disable park mode as default */
- set_disable_park_mode(pipe_num, true);
+ /* set park mode */
+ set_disable_park_mode(pipe_num, hbm_ctx->disable_park_mode);
/* enable zlt as default*/
set_disable_zlt(pipe_num, false);
@@ -186,7 +187,7 @@
return 0;
}
-void hbm_init(struct usb_hcd *hcd)
+void hbm_init(struct usb_hcd *hcd, bool disable_park_mode)
{
pr_info("%s\n", __func__);
@@ -198,6 +199,7 @@
hbm_ctx->base = hcd->regs;
hbm_ctx->hcd = hcd;
+ hbm_ctx->disable_park_mode = disable_park_mode;
/* reset hbm */
hbm_reset(true);
diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
index 79dcf2f..46b5ce4 100644
--- a/drivers/usb/host/xhci-plat.c
+++ b/drivers/usb/host/xhci-plat.c
@@ -16,6 +16,7 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/usb/otg.h>
+#include <linux/usb/msm_hsusb.h>
#include "xhci.h"
@@ -140,6 +141,10 @@
goto release_mem_region;
}
+ pm_runtime_set_active(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+ pm_runtime_get_sync(&pdev->dev);
+
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (ret)
goto unmap_registers;
@@ -166,7 +171,8 @@
goto put_usb3_hcd;
phy = usb_get_transceiver();
- if (phy && phy->otg) {
+ /* Register with OTG if present, ignore USB2 OTG using other PHY */
+ if (phy && phy->otg && !(phy->flags & ENABLE_SECONDARY_PHY)) {
dev_dbg(&pdev->dev, "%s otg support available\n", __func__);
ret = otg_set_host(phy->otg, &hcd->self);
if (ret) {
@@ -175,15 +181,12 @@
usb_put_transceiver(phy);
goto put_usb3_hcd;
}
- pm_runtime_set_active(&pdev->dev);
- pm_runtime_enable(&pdev->dev);
} else {
pm_runtime_no_callbacks(&pdev->dev);
- pm_runtime_set_active(&pdev->dev);
- pm_runtime_enable(&pdev->dev);
- pm_runtime_get(&pdev->dev);
}
+ pm_runtime_put(&pdev->dev);
+
return 0;
put_usb3_hcd:
@@ -222,9 +225,6 @@
if (phy && phy->otg) {
otg_set_host(phy->otg, NULL);
usb_put_transceiver(phy);
- } else {
- pm_runtime_put(&dev->dev);
- pm_runtime_disable(&dev->dev);
}
return 0;
diff --git a/drivers/video/msm/mdp4_overlay.c b/drivers/video/msm/mdp4_overlay.c
index e8d7489..afa7b97 100644
--- a/drivers/video/msm/mdp4_overlay.c
+++ b/drivers/video/msm/mdp4_overlay.c
@@ -245,7 +245,7 @@
return PTR_ERR(*srcp_ihdl);
}
pr_debug("%s(): ion_hdl %p, ion_buf %d\n", __func__, *srcp_ihdl,
- ion_share_dma_buf(display_iclient, *srcp_ihdl));
+ mem_id);
pr_debug("mixer %u, pipe %u, plane %u\n", pipe->mixer_num,
pipe->pipe_ndx, plane);
if (ion_map_iommu(display_iclient, *srcp_ihdl,
diff --git a/drivers/video/msm/mdss/dsi_host_v2.c b/drivers/video/msm/mdss/dsi_host_v2.c
index 453cbaa..887dde7 100644
--- a/drivers/video/msm/mdss/dsi_host_v2.c
+++ b/drivers/video/msm/mdss/dsi_host_v2.c
@@ -29,6 +29,7 @@
#define DSI_POLL_SLEEP_US 1000
#define DSI_POLL_TIMEOUT_US 16000
#define DSI_ESC_CLK_RATE 19200000
+#define DSI_DMA_CMD_TIMEOUT_MS 200
struct dsi_host_v2_private {
struct completion dma_comp;
@@ -426,18 +427,15 @@
dsi_ctrl = MIPI_INP(ctrl_base + DSI_CTRL);
/*If Video enabled, Keep Video and Cmd mode ON */
- if (dsi_ctrl & 0x02)
- dsi_ctrl &= ~0x05;
- else
- dsi_ctrl &= ~0x07;
+
+
+ dsi_ctrl &= ~0x06;
if (mode == DSI_VIDEO_MODE) {
- dsi_ctrl |= 0x03;
+ dsi_ctrl |= 0x02;
intr_ctrl = DSI_INTR_CMD_DMA_DONE_MASK;
} else { /* command mode */
- dsi_ctrl |= 0x05;
- if (pdata->panel_info.type == MIPI_VIDEO_PANEL)
- dsi_ctrl |= 0x02;
+ dsi_ctrl |= 0x04;
intr_ctrl = DSI_INTR_CMD_DMA_DONE_MASK | DSI_INTR_ERROR_MASK |
DSI_INTR_CMD_MDP_DONE_MASK;
@@ -480,7 +478,7 @@
int msm_dsi_cmd_dma_tx(struct dsi_buf *tp)
{
- int len;
+ int len, rc;
unsigned long size, addr;
unsigned char *ctrl_base = dsi_host_private->dsi_base;
@@ -505,12 +503,17 @@
MIPI_OUTP(ctrl_base + DSI_CMD_MODE_DMA_SW_TRIGGER, 0x01);
wmb();
- wait_for_completion_interruptible(&dsi_host_private->dma_comp);
+ rc = wait_for_completion_timeout(&dsi_host_private->dma_comp,
+ msecs_to_jiffies(DSI_DMA_CMD_TIMEOUT_MS));
+ if (rc == 0) {
+ pr_err("DSI command transaction time out\n");
+ rc = -ETIME;
+ }
dma_unmap_single(&dsi_host_private->dis_dev, tp->dmap, size,
DMA_TO_DEVICE);
tp->dmap = 0;
- return 0;
+ return rc;
}
int msm_dsi_cmd_dma_rx(struct dsi_buf *rp, int rlen)
@@ -723,6 +726,7 @@
static int msm_dsi_cal_clk_rate(struct mdss_panel_data *pdata,
u32 *bitclk_rate,
+ u32 *dsiclk_rate,
u32 *byteclk_rate,
u32 *pclk_rate)
{
@@ -761,10 +765,11 @@
*bitclk_rate /= lanes;
*byteclk_rate = *bitclk_rate / 8;
+ *dsiclk_rate = *byteclk_rate * lanes;
*pclk_rate = *byteclk_rate * lanes * 8 / pdata->panel_info.bpp;
- pr_debug("bitclk=%u, byteclk=%u, pck_=%u\n",
- *bitclk_rate, *byteclk_rate, *pclk_rate);
+ pr_debug("dsiclk_rate=%u, byteclk=%u, pck_=%u\n",
+ *dsiclk_rate, *byteclk_rate, *pclk_rate);
return 0;
}
@@ -777,7 +782,7 @@
u32 hbp, hfp, vbp, vfp, hspw, vspw, width, height;
u32 ystride, bpp, data;
u32 dummy_xres, dummy_yres;
- u32 bitclk_rate = 0, byteclk_rate = 0, pclk_rate = 0;
+ u32 bitclk_rate = 0, byteclk_rate = 0, pclk_rate = 0, dsiclk_rate = 0;
unsigned char *ctrl_base = dsi_host_private->dsi_base;
pr_debug("msm_dsi_on\n");
@@ -794,8 +799,10 @@
msm_dsi_phy_sw_reset(dsi_host_private->dsi_base);
msm_dsi_phy_init(dsi_host_private->dsi_base, pdata);
- msm_dsi_cal_clk_rate(pdata, &bitclk_rate, &byteclk_rate, &pclk_rate);
- msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, byteclk_rate, pclk_rate);
+ msm_dsi_cal_clk_rate(pdata, &bitclk_rate, &dsiclk_rate,
+ &byteclk_rate, &pclk_rate);
+ msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, dsiclk_rate,
+ byteclk_rate, pclk_rate);
msm_dsi_prepare_clocks();
msm_dsi_clk_enable();
@@ -879,12 +886,11 @@
int ret = 0;
pr_debug("msm_dsi_off\n");
- msm_dsi_clk_set_rate(0, 0, 0);
+ msm_dsi_controller_cfg(0);
+ msm_dsi_clk_set_rate(DSI_ESC_CLK_RATE, 0, 0, 0);
msm_dsi_clk_disable();
msm_dsi_unprepare_clocks();
- /* disable DSI controller */
- msm_dsi_controller_cfg(0);
msm_dsi_ahb_ctrl(0);
ret = msm_dsi_regulator_disable();
diff --git a/drivers/video/msm/mdss/dsi_io_v2.c b/drivers/video/msm/mdss/dsi_io_v2.c
index 0486c4c..93f2c76 100644
--- a/drivers/video/msm/mdss/dsi_io_v2.c
+++ b/drivers/video/msm/mdss/dsi_io_v2.c
@@ -29,6 +29,7 @@
struct clk *dsi_esc_clk;
struct clk *dsi_pixel_clk;
struct clk *dsi_ahb_clk;
+ struct clk *dsi_clk;
int msm_dsi_clk_on;
int msm_dsi_ahb_clk_on;
};
@@ -97,6 +98,13 @@
{
int rc = 0;
+ dsi_io_private->dsi_clk = clk_get(&dev->dev, "dsi_clk");
+ if (IS_ERR(dsi_io_private->dsi_clk)) {
+ pr_err("can't find dsi core_clk\n");
+ rc = PTR_ERR(dsi_io_private->dsi_clk);
+ dsi_io_private->dsi_clk = NULL;
+ return rc;
+ }
dsi_io_private->dsi_byte_clk = clk_get(&dev->dev, "byte_clk");
if (IS_ERR(dsi_io_private->dsi_byte_clk)) {
pr_err("can't find dsi byte_clk\n");
@@ -135,6 +143,10 @@
void msm_dsi_clk_deinit(void)
{
+ if (dsi_io_private->dsi_clk) {
+ clk_put(dsi_io_private->dsi_clk);
+ dsi_io_private->dsi_clk = NULL;
+ }
if (dsi_io_private->dsi_byte_clk) {
clk_put(dsi_io_private->dsi_byte_clk);
dsi_io_private->dsi_byte_clk = NULL;
@@ -156,6 +168,7 @@
int msm_dsi_prepare_clocks(void)
{
+ clk_prepare(dsi_io_private->dsi_clk);
clk_prepare(dsi_io_private->dsi_byte_clk);
clk_prepare(dsi_io_private->dsi_esc_clk);
clk_prepare(dsi_io_private->dsi_pixel_clk);
@@ -164,16 +177,24 @@
int msm_dsi_unprepare_clocks(void)
{
+ clk_unprepare(dsi_io_private->dsi_clk);
clk_unprepare(dsi_io_private->dsi_esc_clk);
clk_unprepare(dsi_io_private->dsi_byte_clk);
clk_unprepare(dsi_io_private->dsi_pixel_clk);
return 0;
}
-int msm_dsi_clk_set_rate(unsigned long esc_rate, unsigned long byte_rate,
+int msm_dsi_clk_set_rate(unsigned long esc_rate,
+ unsigned long dsi_rate,
+ unsigned long byte_rate,
unsigned long pixel_rate)
{
int rc;
+ rc = clk_set_rate(dsi_io_private->dsi_clk, dsi_rate);
+ if (rc) {
+ pr_err("dsi_esc_clk - clk_set_rate failed =%d\n", rc);
+ return rc;
+ }
rc = clk_set_rate(dsi_io_private->dsi_esc_clk, esc_rate);
if (rc) {
@@ -202,6 +223,7 @@
return 0;
}
+ clk_enable(dsi_io_private->dsi_clk);
clk_enable(dsi_io_private->dsi_esc_clk);
clk_enable(dsi_io_private->dsi_byte_clk);
clk_enable(dsi_io_private->dsi_pixel_clk);
@@ -217,6 +239,7 @@
return 0;
}
+ clk_disable(dsi_io_private->dsi_clk);
clk_disable(dsi_io_private->dsi_byte_clk);
clk_disable(dsi_io_private->dsi_esc_clk);
clk_disable(dsi_io_private->dsi_pixel_clk);
@@ -246,7 +269,7 @@
void msm_dsi_regulator_deinit(void)
{
- if (dsi_io_private->vdda_vreg) {
+ if (!IS_ERR(dsi_io_private->vdda_vreg)) {
devm_regulator_put(dsi_io_private->vdda_vreg);
dsi_io_private->vdda_vreg = NULL;
}
diff --git a/drivers/video/msm/mdss/dsi_io_v2.h b/drivers/video/msm/mdss/dsi_io_v2.h
index 25ecd7f..285bf30 100644
--- a/drivers/video/msm/mdss/dsi_io_v2.h
+++ b/drivers/video/msm/mdss/dsi_io_v2.h
@@ -29,7 +29,9 @@
int msm_dsi_unprepare_clocks(void);
-int msm_dsi_clk_set_rate(unsigned long esc_rate, unsigned long byte_rate,
+int msm_dsi_clk_set_rate(unsigned long esc_rate,
+ unsigned long dsi_rate,
+ unsigned long byte_rate,
unsigned long pixel_rate);
int msm_dsi_clk_enable(void);
diff --git a/drivers/video/msm/mdss/dsi_panel_v2.c b/drivers/video/msm/mdss/dsi_panel_v2.c
index 6686de3..e46ea3b 100644
--- a/drivers/video/msm/mdss/dsi_panel_v2.c
+++ b/drivers/video/msm/mdss/dsi_panel_v2.c
@@ -20,6 +20,7 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/leds.h>
+#include <linux/err.h>
#include <linux/regulator/consumer.h>
#include "dsi_v2.h"
@@ -32,6 +33,7 @@
int rst_gpio;
int disp_en_gpio;
+ int video_mode_gpio;
char bl_ctrl;
struct regulator *vddio_vreg;
@@ -40,6 +42,9 @@
struct dsi_panel_cmds_list *on_cmds_list;
struct dsi_panel_cmds_list *off_cmds_list;
struct mdss_dsi_phy_ctrl phy_params;
+
+ char *on_cmds;
+ char *off_cmds;
};
static struct dsi_panel_private *panel_private;
@@ -82,15 +87,54 @@
kfree(panel_private->dsi_panel_tx_buf.start);
kfree(panel_private->dsi_panel_rx_buf.start);
- if (panel_private->vddio_vreg)
+ if (!IS_ERR(panel_private->vddio_vreg))
devm_regulator_put(panel_private->vddio_vreg);
- if (panel_private->vdda_vreg)
- devm_regulator_put(panel_private->vddio_vreg);
+ if (!IS_ERR(panel_private->vdda_vreg))
+ devm_regulator_put(panel_private->vdda_vreg);
+ if (panel_private->on_cmds_list) {
+ kfree(panel_private->on_cmds_list->buf);
+ kfree(panel_private->on_cmds_list);
+ }
+ if (panel_private->off_cmds_list) {
+ kfree(panel_private->off_cmds_list->buf);
+ kfree(panel_private->off_cmds_list);
+ }
+
+ kfree(panel_private->on_cmds);
+ kfree(panel_private->off_cmds);
kfree(panel_private);
panel_private = NULL;
}
+int dsi_panel_power(int enable)
+{
+ int ret;
+ if (enable) {
+ ret = regulator_enable(panel_private->vddio_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator enable vddio fail\n");
+ return ret;
+ }
+ ret = regulator_enable(panel_private->vdda_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator enable vdda fail\n");
+ return ret;
+ }
+ } else {
+ ret = regulator_disable(panel_private->vddio_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator disable vddio fail\n");
+ return ret;
+ }
+ ret = regulator_disable(panel_private->vdda_vreg);
+ if (ret) {
+ pr_err("dsi_panel_power regulator dsiable vdda fail\n");
+ return ret;
+ }
+ }
+ return 0;
+}
void dsi_panel_reset(struct mdss_panel_data *pdata, int enable)
{
@@ -113,6 +157,8 @@
pr_debug("%s: enable = %d\n", __func__, enable);
if (enable) {
+ dsi_panel_power(1);
+ gpio_request(panel_private->rst_gpio, "panel_reset");
gpio_set_value(panel_private->rst_gpio, 1);
/*
* these delay values are by experiments currently, will need
@@ -123,12 +169,34 @@
udelay(200);
gpio_set_value(panel_private->rst_gpio, 1);
msleep(20);
- if (gpio_is_valid(panel_private->disp_en_gpio))
+ if (gpio_is_valid(panel_private->disp_en_gpio)) {
+ gpio_request(panel_private->disp_en_gpio,
+ "panel_enable");
gpio_set_value(panel_private->disp_en_gpio, 1);
+ }
+ if (gpio_is_valid(panel_private->video_mode_gpio)) {
+ gpio_request(panel_private->video_mode_gpio,
+ "panel_video_mdoe");
+ if (pdata->panel_info.mipi.mode == DSI_VIDEO_MODE)
+ gpio_set_value(panel_private->video_mode_gpio,
+ 1);
+ else
+ gpio_set_value(panel_private->video_mode_gpio,
+ 0);
+ }
} else {
gpio_set_value(panel_private->rst_gpio, 0);
- if (gpio_is_valid(panel_private->disp_en_gpio))
+ gpio_free(panel_private->rst_gpio);
+
+ if (gpio_is_valid(panel_private->disp_en_gpio)) {
gpio_set_value(panel_private->disp_en_gpio, 0);
+ gpio_free(panel_private->disp_en_gpio);
+ }
+
+ if (gpio_is_valid(panel_private->video_mode_gpio))
+ gpio_free(panel_private->video_mode_gpio);
+
+ dsi_panel_power(0);
}
}
@@ -196,13 +264,42 @@
panel_private->disp_en_gpio = of_get_named_gpio(np,
"qcom,enable-gpio", 0);
panel_private->rst_gpio = of_get_named_gpio(np, "qcom,rst-gpio", 0);
+ panel_private->video_mode_gpio = of_get_named_gpio(np,
+ "qcom,mode-selection-gpio", 0);
return 0;
}
static int dsi_panel_parse_regulator(struct platform_device *pdev)
{
+ int ret;
panel_private->vddio_vreg = devm_regulator_get(&pdev->dev, "vddio");
+ if (IS_ERR(panel_private->vddio_vreg)) {
+ pr_err("%s: could not get vddio vreg, rc=%ld\n",
+ __func__, PTR_ERR(panel_private->vddio_vreg));
+ return PTR_ERR(panel_private->vddio_vreg);
+ }
+ ret = regulator_set_voltage(panel_private->vddio_vreg,
+ 1800000,
+ 1800000);
+ if (ret) {
+ pr_err("%s: set voltage failed on vddio_vreg, rc=%d\n",
+ __func__, ret);
+ return ret;
+ }
panel_private->vdda_vreg = devm_regulator_get(&pdev->dev, "vdda");
+ if (IS_ERR(panel_private->vdda_vreg)) {
+ pr_err("%s: could not get vdda_vreg , rc=%ld\n",
+ __func__, PTR_ERR(panel_private->vdda_vreg));
+ return PTR_ERR(panel_private->vdda_vreg);
+ }
+ ret = regulator_set_voltage(panel_private->vdda_vreg,
+ 2850000,
+ 2850000);
+ if (ret) {
+ pr_err("%s: set voltage failed on vdda_vreg, rc=%d\n",
+ __func__, ret);
+ return ret;
+ }
return 0;
}
@@ -398,180 +495,163 @@
int cmd_plen, data_offset;
const char *data;
const char *on_cmds_state, *off_cmds_state;
- char *on_cmds = NULL, *off_cmds = NULL;
int num_of_on_cmds = 0, num_of_off_cmds = 0;
data = of_get_property(np, "qcom,panel-on-cmds", &len);
if (!data) {
pr_err("%s:%d, Unable to read ON cmds", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- on_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
- if (!on_cmds)
- goto parse_init_cmds_error;
+ panel_private->on_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
+ if (!panel_private->on_cmds)
+ return -ENOMEM;
- memcpy(on_cmds, data, len);
+ memcpy(panel_private->on_cmds, data, len);
data_offset = 0;
cmd_plen = 0;
while ((len - data_offset) >= DT_CMD_HDR) {
data_offset += (DT_CMD_HDR - 1);
- cmd_plen = on_cmds[data_offset++];
+ cmd_plen = panel_private->on_cmds[data_offset++];
data_offset += cmd_plen;
num_of_on_cmds++;
}
if (!num_of_on_cmds) {
pr_err("%s:%d, No ON cmds specified", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- panel_data->dsi_panel_on_cmds =
+ panel_private->on_cmds_list =
kzalloc(sizeof(struct dsi_panel_cmds_list), GFP_KERNEL);
- if (!panel_data->dsi_panel_on_cmds)
- goto parse_init_cmds_error;
+ if (!panel_private->on_cmds_list)
+ return -ENOMEM;
- (panel_data->dsi_panel_on_cmds)->buf =
+ panel_private->on_cmds_list->buf =
kzalloc((num_of_on_cmds * sizeof(struct dsi_cmd_desc)),
GFP_KERNEL);
- if (!(panel_data->dsi_panel_on_cmds)->buf)
- goto parse_init_cmds_error;
+ if (!panel_private->on_cmds_list->buf)
+ return -ENOMEM;
data_offset = 0;
for (i = 0; i < num_of_on_cmds; i++) {
- panel_data->dsi_panel_on_cmds->buf[i].dtype =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].last =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].vc =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].ack =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].wait =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].dlen =
- on_cmds[data_offset++];
- panel_data->dsi_panel_on_cmds->buf[i].payload =
- &on_cmds[data_offset];
- data_offset += (panel_data->dsi_panel_on_cmds->buf[i].dlen);
+ panel_private->on_cmds_list->buf[i].dtype =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].last =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].vc =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].ack =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].wait =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].dlen =
+ panel_private->on_cmds[data_offset++];
+ panel_private->on_cmds_list->buf[i].payload =
+ &panel_private->on_cmds[data_offset];
+ data_offset += (panel_private->on_cmds_list->buf[i].dlen);
}
if (data_offset != len) {
pr_err("%s:%d, Incorrect ON command entries",
__func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- (panel_data->dsi_panel_on_cmds)->size = num_of_on_cmds;
+ panel_private->on_cmds_list->size = num_of_on_cmds;
on_cmds_state = of_get_property(pdev->dev.of_node,
"qcom,on-cmds-dsi-state", NULL);
if (!strncmp(on_cmds_state, "DSI_LP_MODE", 11)) {
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_LP_MODE;
} else if (!strncmp(on_cmds_state, "DSI_HS_MODE", 11)) {
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_HS_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_HS_MODE;
} else {
pr_debug("%s: ON cmds state not specified. Set Default\n",
__func__);
- (panel_data->dsi_panel_on_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->on_cmds_list->ctrl_state = DSI_LP_MODE;
}
- panel_private->on_cmds_list = panel_data->dsi_panel_on_cmds;
+ panel_data->dsi_panel_on_cmds = panel_private->on_cmds_list;
+
data = of_get_property(np, "qcom,panel-off-cmds", &len);
if (!data) {
pr_err("%s:%d, Unable to read OFF cmds", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- off_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
- if (!off_cmds)
- goto parse_init_cmds_error;
+ panel_private->off_cmds = kzalloc(sizeof(char) * len, GFP_KERNEL);
+ if (!panel_private->off_cmds)
+ return -ENOMEM;
- memcpy(off_cmds, data, len);
+ memcpy(panel_private->off_cmds, data, len);
data_offset = 0;
cmd_plen = 0;
while ((len - data_offset) >= DT_CMD_HDR) {
data_offset += (DT_CMD_HDR - 1);
- cmd_plen = off_cmds[data_offset++];
+ cmd_plen = panel_private->off_cmds[data_offset++];
data_offset += cmd_plen;
num_of_off_cmds++;
}
if (!num_of_off_cmds) {
pr_err("%s:%d, No OFF cmds specified", __func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- panel_data->dsi_panel_off_cmds =
+ panel_private->off_cmds_list =
kzalloc(sizeof(struct dsi_panel_cmds_list), GFP_KERNEL);
- if (!panel_data->dsi_panel_off_cmds)
- goto parse_init_cmds_error;
+ if (!panel_private->off_cmds_list)
+ return -ENOMEM;
- (panel_data->dsi_panel_off_cmds)->buf = kzalloc(num_of_off_cmds
+ panel_private->off_cmds_list->buf = kzalloc(num_of_off_cmds
* sizeof(struct dsi_cmd_desc),
GFP_KERNEL);
- if (!(panel_data->dsi_panel_off_cmds)->buf)
- goto parse_init_cmds_error;
+ if (!panel_private->off_cmds_list->buf)
+ return -ENOMEM;
data_offset = 0;
for (i = 0; i < num_of_off_cmds; i++) {
- panel_data->dsi_panel_off_cmds->buf[i].dtype =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].last =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].vc =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].ack =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].wait =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].dlen =
- off_cmds[data_offset++];
- panel_data->dsi_panel_off_cmds->buf[i].payload =
- &off_cmds[data_offset];
- data_offset += (panel_data->dsi_panel_off_cmds->buf[i].dlen);
+ panel_private->off_cmds_list->buf[i].dtype =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].last =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].vc =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].ack =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].wait =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].dlen =
+ panel_private->off_cmds[data_offset++];
+ panel_private->off_cmds_list->buf[i].payload =
+ &panel_private->off_cmds[data_offset];
+ data_offset += (panel_private->off_cmds_list->buf[i].dlen);
}
if (data_offset != len) {
pr_err("%s:%d, Incorrect OFF command entries",
__func__, __LINE__);
- goto parse_init_cmds_error;
+ return -EINVAL;
}
- (panel_data->dsi_panel_off_cmds)->size = num_of_off_cmds;
- off_cmds_state = of_get_property(pdev->dev.of_node,
+ panel_private->off_cmds_list->size = num_of_off_cmds;
+ off_cmds_state = of_get_property(pdev->dev.of_node,
"qcom,off-cmds-dsi-state", NULL);
if (!strncmp(off_cmds_state, "DSI_LP_MODE", 11)) {
- (panel_data->dsi_panel_off_cmds)->ctrl_state =
+ panel_private->off_cmds_list->ctrl_state =
DSI_LP_MODE;
} else if (!strncmp(off_cmds_state, "DSI_HS_MODE", 11)) {
- (panel_data->dsi_panel_off_cmds)->ctrl_state = DSI_HS_MODE;
+ panel_private->off_cmds_list->ctrl_state = DSI_HS_MODE;
} else {
pr_debug("%s: ON cmds state not specified. Set Default\n",
__func__);
- (panel_data->dsi_panel_off_cmds)->ctrl_state = DSI_LP_MODE;
+ panel_private->off_cmds_list->ctrl_state = DSI_LP_MODE;
}
- panel_private->off_cmds_list = panel_data->dsi_panel_on_cmds;
- kfree(on_cmds);
- kfree(off_cmds);
+ panel_data->dsi_panel_off_cmds = panel_private->off_cmds_list;
return 0;
-parse_init_cmds_error:
- if (panel_data->dsi_panel_on_cmds) {
- kfree((panel_data->dsi_panel_on_cmds)->buf);
- kfree(panel_data->dsi_panel_on_cmds);
- panel_data->dsi_panel_on_cmds = NULL;
- }
- if (panel_data->dsi_panel_off_cmds) {
- kfree((panel_data->dsi_panel_off_cmds)->buf);
- kfree(panel_data->dsi_panel_off_cmds);
- panel_data->dsi_panel_off_cmds = NULL;
- }
-
- kfree(on_cmds);
- kfree(off_cmds);
- return -EINVAL;
}
static int dsi_panel_parse_backlight(struct platform_device *pdev,
@@ -709,6 +789,7 @@
vendor_pdata.on = dsi_panel_on;
vendor_pdata.off = dsi_panel_off;
+ vendor_pdata.reset = dsi_panel_reset;
vendor_pdata.bl_fnc = dsi_panel_bl_ctrl;
rc = dsi_panel_device_register_v2(pdev, &vendor_pdata,
diff --git a/drivers/video/msm/mdss/dsi_v2.c b/drivers/video/msm/mdss/dsi_v2.c
index 5e46bf5..5833796 100644
--- a/drivers/video/msm/mdss/dsi_v2.c
+++ b/drivers/video/msm/mdss/dsi_v2.c
@@ -29,6 +29,14 @@
if (!panel_common_data || !pdata)
return -ENODEV;
+ if (dsi_intf.op_mode_config)
+ dsi_intf.op_mode_config(DSI_CMD_MODE, pdata);
+
+ pr_debug("panel off commands\n");
+ if (panel_common_data->off)
+ panel_common_data->off(pdata);
+
+ pr_debug("turn off dsi controller\n");
if (dsi_intf.off)
rc = dsi_intf.off(pdata);
@@ -37,9 +45,9 @@
return rc;
}
- pr_debug("dsi_off reset\n");
- if (panel_common_data->off)
- panel_common_data->off(pdata);
+ pr_debug("turn off panel power\n");
+ if (panel_common_data->reset)
+ panel_common_data->reset(pdata, 0);
return rc;
}
@@ -53,8 +61,6 @@
if (!panel_common_data || !pdata)
return -ENODEV;
- if (panel_common_data->reset)
- panel_common_data->reset(1);
pr_debug("dsi_on DSI controller ont\n");
if (dsi_intf.on)
@@ -64,6 +70,9 @@
pr_err("mdss_dsi_on DSI failed %d\n", rc);
return rc;
}
+ pr_debug("dsi_on power on panel\n");
+ if (panel_common_data->reset)
+ panel_common_data->reset(pdata, 1);
pr_debug("dsi_on DSI panel ont\n");
if (panel_common_data->on)
diff --git a/drivers/video/msm/mdss/dsi_v2.h b/drivers/video/msm/mdss/dsi_v2.h
index fa868ab..f68527c 100644
--- a/drivers/video/msm/mdss/dsi_v2.h
+++ b/drivers/video/msm/mdss/dsi_v2.h
@@ -189,7 +189,7 @@
struct mdss_panel_info panel_info;
int (*on) (struct mdss_panel_data *pdata);
int (*off) (struct mdss_panel_data *pdata);
- void (*reset)(int enable);
+ void (*reset)(struct mdss_panel_data *pdata, int enable);
void (*bl_fnc) (struct mdss_panel_data *pdata, u32 bl_level);
struct dsi_panel_cmds_list *dsi_panel_on_cmds;
struct dsi_panel_cmds_list *dsi_panel_off_cmds;
diff --git a/drivers/video/msm/mdss/mdp3.c b/drivers/video/msm/mdss/mdp3.c
index 890b00b..52243eb 100644
--- a/drivers/video/msm/mdss/mdp3.c
+++ b/drivers/video/msm/mdss/mdp3.c
@@ -47,7 +47,7 @@
#include "mdp3_hwio.h"
#include "mdp3_ctrl.h"
-#define MDP_CORE_HW_VERSION 0x03030304
+#define MDP_CORE_HW_VERSION 0x03040310
struct mdp3_hw_resource *mdp3_res;
#define MDP_BUS_VECTOR_ENTRY(ab_val, ib_val) \
@@ -302,16 +302,7 @@
return ret;
}
-int mdp3_vsync_clk_enable(int enable)
-{
- int ret = 0;
- pr_debug("vsync clk enable=%d\n", enable);
- mutex_lock(&mdp3_res->res_mutex);
- mdp3_clk_update(MDP3_CLK_VSYNC, enable);
- mutex_unlock(&mdp3_res->res_mutex);
- return ret;
-}
int mdp3_clk_set_rate(int clk_type, unsigned long clk_rate)
{
@@ -400,6 +391,9 @@
if (rc)
return rc;
+ rc = mdp3_clk_register("dsi_clk", MDP3_CLK_DSI);
+ if (rc)
+ return rc;
return rc;
}
@@ -409,6 +403,7 @@
clk_put(mdp3_res->clocks[MDP3_CLK_CORE]);
clk_put(mdp3_res->clocks[MDP3_CLK_VSYNC]);
clk_put(mdp3_res->clocks[MDP3_CLK_LCDC]);
+ clk_put(mdp3_res->clocks[MDP3_CLK_DSI]);
}
int mdp3_clk_enable(int enable)
@@ -421,6 +416,7 @@
rc = mdp3_clk_update(MDP3_CLK_AHB, enable);
rc |= mdp3_clk_update(MDP3_CLK_CORE, enable);
rc |= mdp3_clk_update(MDP3_CLK_VSYNC, enable);
+ rc |= mdp3_clk_update(MDP3_CLK_DSI, enable);
mutex_unlock(&mdp3_res->res_mutex);
return rc;
}
@@ -577,12 +573,14 @@
int rc;
rc = mdp3_clk_update(MDP3_CLK_AHB, 1);
+ rc |= mdp3_clk_update(MDP3_CLK_CORE, 1);
if (rc)
return rc;
mdp3_res->mdp_rev = MDP3_REG_READ(MDP3_REG_HW_VERSION);
rc = mdp3_clk_update(MDP3_CLK_AHB, 0);
+ rc |= mdp3_clk_update(MDP3_CLK_CORE, 0);
if (rc)
pr_err("fail to turn off the MDP3_CLK_AHB clk\n");
diff --git a/drivers/video/msm/mdss/mdp3.h b/drivers/video/msm/mdss/mdp3.h
index c853664..5774e5a 100644
--- a/drivers/video/msm/mdss/mdp3.h
+++ b/drivers/video/msm/mdss/mdp3.h
@@ -29,6 +29,7 @@
MDP3_CLK_CORE,
MDP3_CLK_VSYNC,
MDP3_CLK_LCDC,
+ MDP3_CLK_DSI,
MDP3_MAX_CLK
};
diff --git a/drivers/video/msm/mdss/mdp3_ctrl.c b/drivers/video/msm/mdss/mdp3_ctrl.c
index e07c0a4..929e5f8 100644
--- a/drivers/video/msm/mdss/mdp3_ctrl.c
+++ b/drivers/video/msm/mdss/mdp3_ctrl.c
@@ -28,7 +28,7 @@
void vsync_notify_handler(void *arg)
{
- struct mdp3_session_data *session = (struct mdp3_session_data *)session;
+ struct mdp3_session_data *session = (struct mdp3_session_data *)arg;
complete(&session->vsync_comp);
}
@@ -100,18 +100,30 @@
.attrs = vsync_fs_attrs,
};
-static int mdp3_ctrl_res_req_dma(struct msm_fb_data_type *mfd, int status)
+static int mdp3_ctrl_res_req_bus(struct msm_fb_data_type *mfd, int status)
{
int rc = 0;
if (status) {
struct mdss_panel_info *panel_info = mfd->panel_info;
int ab = 0;
int ib = 0;
- unsigned long core_clk = 0;
- int vtotal = 0;
ab = panel_info->xres * panel_info->yres * 4;
ab *= panel_info->mipi.frame_rate;
ib = (ab * 3) / 2;
+ rc = mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, ab, ib);
+ } else {
+ rc = mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, 0, 0);
+ }
+ return rc;
+}
+
+static int mdp3_ctrl_res_req_clk(struct msm_fb_data_type *mfd, int status)
+{
+ int rc = 0;
+ if (status) {
+ struct mdss_panel_info *panel_info = mfd->panel_info;
+ unsigned long core_clk;
+ int vtotal;
vtotal = panel_info->lcdc.v_back_porch +
panel_info->lcdc.v_front_porch +
panel_info->lcdc.v_pulse_width +
@@ -126,10 +138,8 @@
if (rc)
return rc;
- mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, ab, ib);
} else {
rc = mdp3_clk_enable(false);
- rc |= mdp3_bus_scale_set_quota(MDP3_BW_CLIENT_DMA_P, 0, 0);
}
return rc;
}
@@ -285,13 +295,25 @@
}
mutex_lock(&mdp3_session->lock);
if (mdp3_session->status) {
- pr_info("fb%d is on already", mfd->index);
+ pr_debug("fb%d is on already", mfd->index);
goto on_error;
}
- rc = mdp3_ctrl_res_req_dma(mfd, 1);
+ /* request bus bandwidth before DSI DMA traffic */
+ rc = mdp3_ctrl_res_req_bus(mfd, 1);
+ if (rc)
+ pr_err("fail to request bus resource\n");
+
+ panel = mdp3_session->panel;
+ if (panel->event_handler)
+ rc = panel->event_handler(panel, MDSS_EVENT_PANEL_ON, NULL);
if (rc) {
- pr_err("resource request for dma on failed\n");
+ pr_err("fail to turn on the panel\n");
+ goto on_error;
+ }
+ rc = mdp3_ctrl_res_req_clk(mfd, 1);
+ if (rc) {
+ pr_err("fail to request mdp clk resource\n");
goto on_error;
}
@@ -304,16 +326,9 @@
rc = mdp3_ctrl_intf_init(mfd, mdp3_session->intf);
if (rc) {
pr_err("display interface init failed\n");
- goto on_error;
- }
- panel = mdp3_session->panel;
- if (panel->event_handler)
- rc = panel->event_handler(panel, MDSS_EVENT_PANEL_ON, NULL);
- if (rc) {
- pr_err("fail to turn on the panel\n");
goto on_error;
}
@@ -348,26 +363,31 @@
mutex_lock(&mdp3_session->lock);
if (!mdp3_session->status) {
- pr_info("fb%d is off already", mfd->index);
+ pr_debug("fb%d is off already", mfd->index);
goto off_error;
}
- panel = mdp3_session->panel;
- if (panel->event_handler)
- rc = panel->event_handler(panel, MDSS_EVENT_PANEL_OFF, NULL);
-
- if (rc)
- pr_err("fail to turn off the panel\n");
+ pr_debug("mdp3_ctrl_off stop mdp3 dma engine\n");
rc = mdp3_session->dma->stop(mdp3_session->dma, mdp3_session->intf);
if (rc)
pr_err("fail to stop the MDP3 dma\n");
- rc = mdp3_ctrl_res_req_dma(mfd, 0);
+ pr_debug("mdp3_ctrl_off stop dsi panel and controller\n");
+ panel = mdp3_session->panel;
+ if (panel->event_handler)
+ rc = panel->event_handler(panel, MDSS_EVENT_PANEL_OFF, NULL);
if (rc)
- pr_err("resource release for dma on failed\n");
+ pr_err("fail to turn off the panel\n");
+ pr_debug("mdp3_ctrl_off release bus and clock\n");
+ rc = mdp3_ctrl_res_req_bus(mfd, 0);
+ if (rc)
+ pr_err("mdp bus resource release failed\n");
+ rc = mdp3_ctrl_res_req_clk(mfd, 0);
+ if (rc)
+ pr_err("mdp clock resource release failed\n");
off_error:
mdp3_session->status = 0;
@@ -414,11 +434,32 @@
mutex_unlock(&mdp3_session->lock);
}
+static int mdp3_get_metadata(struct msm_fb_data_type *mfd,
+ struct msmfb_metadata *metadata)
+{
+ int ret = 0;
+ switch (metadata->op) {
+ case metadata_op_frame_rate:
+ metadata->data.panel_frame_rate =
+ mfd->panel_info->mipi.frame_rate;
+ break;
+ case metadata_op_get_caps:
+ metadata->data.caps.mdp_rev = 304;
+ break;
+ default:
+ pr_warn("Unsupported request to MDP META IOCTL.\n");
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
static int mdp3_ctrl_ioctl_handler(struct msm_fb_data_type *mfd,
u32 cmd, void __user *argp)
{
int rc = -EINVAL;
struct mdp3_session_data *mdp3_session;
+ struct msmfb_metadata metadata;
int val;
pr_debug("mdp3_ctrl_ioctl_handler\n");
@@ -444,6 +485,14 @@
rc = -EFAULT;
}
break;
+ case MSMFB_METADATA_GET:
+ rc = copy_from_user(&metadata, argp, sizeof(metadata));
+ if (rc)
+ return rc;
+ rc = mdp3_get_metadata(mfd, &metadata);
+ if (!rc)
+ rc = copy_to_user(argp, &metadata, sizeof(metadata));
+ break;
default:
break;
}
diff --git a/drivers/video/msm/mdss/mdss.h b/drivers/video/msm/mdss/mdss.h
index 763c7f6..5be0173 100644
--- a/drivers/video/msm/mdss/mdss.h
+++ b/drivers/video/msm/mdss/mdss.h
@@ -62,8 +62,6 @@
struct regulator *fs;
u32 max_mdp_clk_rate;
- struct workqueue_struct *clk_ctrl_wq;
- struct work_struct clk_ctrl_worker;
struct platform_device *pdev;
char __iomem *mdp_base;
size_t mdp_reg_size;
@@ -73,12 +71,13 @@
u32 irq_mask;
u32 irq_ena;
u32 irq_buzy;
+ u32 has_bwc;
+ u32 has_decimation;
u32 mdp_irq_mask;
u32 mdp_hist_irq_mask;
int suspend_fs_ena;
- atomic_t clk_ref;
u8 clk_ena;
u8 fs_ena;
u8 vsync_ena;
diff --git a/drivers/video/msm/mdss/mdss_dsi_host.c b/drivers/video/msm/mdss/mdss_dsi_host.c
index 22ff08c..1a64be4 100644
--- a/drivers/video/msm/mdss/mdss_dsi_host.c
+++ b/drivers/video/msm/mdss/mdss_dsi_host.c
@@ -813,7 +813,6 @@
dsi_ctrl |= BIT(0); /* enable dsi */
MIPI_OUTP((ctrl_pdata->ctrl_base) + 0x0004, dsi_ctrl);
- mdss_dsi_irq_ctrl(ctrl_pdata, 1, 0); /* enable dsi irq */
wmb();
}
diff --git a/drivers/video/msm/mdss/mdss_fb.c b/drivers/video/msm/mdss/mdss_fb.c
index a5903cf..a7ea948 100644
--- a/drivers/video/msm/mdss/mdss_fb.c
+++ b/drivers/video/msm/mdss/mdss_fb.c
@@ -1143,6 +1143,7 @@
struct mdp_display_commit disp_commit;
memset(&disp_commit, 0, sizeof(disp_commit));
disp_commit.wait_for_finish = true;
+ memcpy(&disp_commit.var, var, sizeof(struct fb_var_screeninfo));
return mdss_fb_pan_display_ex(info, &disp_commit);
}
@@ -1209,6 +1210,7 @@
mdss_fb_wait_for_fence(mfd);
if (mfd->mdp.kickoff_fnc)
mfd->mdp.kickoff_fnc(mfd);
+ mdss_fb_update_backlight(mfd);
mdss_fb_signal_timeline(mfd);
} else {
var = &fb_backup->disp_commit.var;
@@ -1431,6 +1433,7 @@
int i, fence_cnt = 0, ret = 0;
int acq_fen_fd[MDP_MAX_FENCE_FD];
struct sync_fence *fence;
+ u32 threshold;
if ((buf_sync->acq_fen_fd_cnt > MDP_MAX_FENCE_FD) ||
(mfd->timeline == NULL))
@@ -1464,8 +1467,13 @@
if (buf_sync->flags & MDP_BUF_SYNC_FLAG_WAIT)
mdss_fb_wait_for_fence(mfd);
+ if (mfd->panel.type == WRITEBACK_PANEL)
+ threshold = 1;
+ else
+ threshold = 2;
+
mfd->cur_rel_sync_pt = sw_sync_pt_create(mfd->timeline,
- mfd->timeline_value + 2);
+ mfd->timeline_value + threshold);
if (mfd->cur_rel_sync_pt == NULL) {
pr_err("%s: cannot create sync point", __func__);
ret = -ENOMEM;
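The threshold change above encodes a simple rule: a write-back output is consumed as soon as its frame completes, so its release fence needs only one timeline increment, while a displayed frame stays on screen for one more vsync and needs two. A minimal sketch of that rule (the helper name is illustrative):

    static u32 release_fence_threshold(u32 panel_type)
    {
        /* writeback: buffer free after 1 signal; display: after 2 */
        return (panel_type == WRITEBACK_PANEL) ? 1 : 2;
    }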
diff --git a/drivers/video/msm/mdss/mdss_hdmi_cec.c b/drivers/video/msm/mdss/mdss_hdmi_cec.c
index 694fcde..2cf47fc 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_cec.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_cec.c
@@ -785,12 +785,16 @@
io = cec_ctrl->init_data.io;
- if (!cec_ctrl->cec_enabled)
- return 0;
-
cec_intr = DSS_REG_R_ND(io, HDMI_CEC_INT);
DEV_DBG("%s: cec interrupt status is [0x%x]\n", __func__, cec_intr);
+ if (!cec_ctrl->cec_enabled) {
+ DEV_ERR("%s: cec is not enabled. Just clear int and return.\n",
+ __func__);
+ DSS_REG_W(io, HDMI_CEC_INT, cec_intr);
+ return 0;
+ }
+
cec_status = DSS_REG_R_ND(io, HDMI_CEC_STATUS);
DEV_DBG("%s: cec status is [0x%x]\n", __func__, cec_status);
diff --git a/drivers/video/msm/mdss/mdss_hdmi_hdcp.c b/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
index ef17229..f726e79 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_hdcp.c
@@ -1057,11 +1057,16 @@
io = hdcp_ctrl->init_data.core_io;
- /* Ignore HDCP interrupts if HDCP is disabled */
- if (HDCP_STATE_INACTIVE == hdcp_ctrl->hdcp_state)
- return 0;
-
hdcp_int_val = DSS_REG_R(io, HDMI_HDCP_INT_CTRL);
+
+ /* Ignore HDCP interrupts if HDCP is disabled */
+ if (HDCP_STATE_INACTIVE == hdcp_ctrl->hdcp_state) {
+ DEV_ERR("%s: HDCP inactive. Just clear int and return.\n",
+ __func__);
+ DSS_REG_W(io, HDMI_HDCP_INT_CTRL, hdcp_int_val);
+ return 0;
+ }
+
if (hdcp_int_val & BIT(0)) {
/* AUTH_SUCCESS_INT */
DSS_REG_W(io, HDMI_HDCP_INT_CTRL, (hdcp_int_val | BIT(1)));
diff --git a/drivers/video/msm/mdss/mdss_hdmi_tx.c b/drivers/video/msm/mdss/mdss_hdmi_tx.c
index 9a90b88..1ff8acf 100644
--- a/drivers/video/msm/mdss/mdss_hdmi_tx.c
+++ b/drivers/video/msm/mdss/mdss_hdmi_tx.c
@@ -2298,6 +2298,7 @@
static void hdmi_tx_hpd_off(struct hdmi_tx_ctrl *hdmi_ctrl)
{
int rc = 0;
+ struct dss_io_data *io = NULL;
if (!hdmi_ctrl) {
DEV_ERR("%s: invalid input\n", __func__);
@@ -2309,6 +2310,15 @@
return;
}
+ io = &hdmi_ctrl->pdata.io[HDMI_TX_CORE_IO];
+ if (!io->base) {
+ DEV_ERR("%s: core io not inititalized\n", __func__);
+ return;
+ }
+
+ /* Turn off HPD interrupts */
+ DSS_REG_W(io, HDMI_HPD_INT_CTRL, 0);
+
mdss_disable_irq(&hdmi_tx_hw);
hdmi_tx_set_mode(hdmi_ctrl, false);
@@ -2411,7 +2421,7 @@
struct dss_io_data *io = NULL;
struct hdmi_tx_ctrl *hdmi_ctrl = (struct hdmi_tx_ctrl *)data;
- if (!hdmi_ctrl || !hdmi_ctrl->hpd_initialized) {
+ if (!hdmi_ctrl) {
DEV_WARN("%s: invalid input data, ISR ignored\n", __func__);
return IRQ_HANDLED;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp.c b/drivers/video/msm/mdss/mdss_mdp.c
index baedd03..2f09fee 100644
--- a/drivers/video/msm/mdss/mdss_mdp.c
+++ b/drivers/video/msm/mdss/mdss_mdp.c
@@ -330,7 +330,6 @@
hw->hw_ndx, mdss_res->mdp_irq_mask,
mdss_res->mdp_hist_irq_mask);
} else {
- mdss_irq_handlers[hw->hw_ndx] = NULL;
mdss_res->irq_mask &= ~ndx_bit;
if (mdss_res->irq_mask == 0) {
mdss_res->irq_ena = false;
@@ -622,71 +621,43 @@
return clk_rate;
}
-static void mdss_mdp_clk_ctrl_update(struct mdss_data_type *mdata)
-{
- int enable;
-
- mutex_lock(&mdp_clk_lock);
- enable = atomic_read(&mdata->clk_ref) > 0;
- if (mdata->clk_ena == enable) {
- mutex_unlock(&mdp_clk_lock);
- return;
- }
- mdata->clk_ena = enable;
-
- if (enable)
- pm_runtime_get_sync(&mdata->pdev->dev);
-
- pr_debug("MDP CLKS %s\n", (enable ? "Enable" : "Disable"));
- mb();
-
- mdss_mdp_clk_update(MDSS_CLK_AHB, enable);
- mdss_mdp_clk_update(MDSS_CLK_AXI, enable);
-
- mdss_mdp_clk_update(MDSS_CLK_MDP_CORE, enable);
- mdss_mdp_clk_update(MDSS_CLK_MDP_LUT, enable);
- if (mdata->vsync_ena)
- mdss_mdp_clk_update(MDSS_CLK_MDP_VSYNC, enable);
-
- if (!enable)
- pm_runtime_put(&mdata->pdev->dev);
-
- mutex_unlock(&mdp_clk_lock);
-}
-
-static void mdss_mdp_clk_ctrl_workqueue_handler(struct work_struct *work)
-{
- struct mdss_data_type *mdata;
-
- mdata = container_of(work, struct mdss_data_type, clk_ctrl_worker);
- mdss_mdp_clk_ctrl_update(mdata);
-}
-
void mdss_mdp_clk_ctrl(int enable, int isr)
{
struct mdss_data_type *mdata = mdss_res;
+ static int mdp_clk_cnt;
+ int changed = 0;
- pr_debug("clk enable=%d isr=%d ref= %d\n", enable, isr,
- atomic_read(&mdata->clk_ref));
-
- if (enable == MDP_BLOCK_POWER_ON) {
- BUG_ON(isr);
-
- if (atomic_inc_return(&mdata->clk_ref) == 1)
- mdss_mdp_clk_ctrl_update(mdata);
+ mutex_lock(&mdp_clk_lock);
+ if (enable) {
+ if (mdp_clk_cnt == 0)
+ changed++;
+ mdp_clk_cnt++;
} else {
- BUG_ON(atomic_read(&mdata->clk_ref) == 0);
-
- if (atomic_dec_and_test(&mdata->clk_ref)) {
- if (isr)
- queue_work(mdata->clk_ctrl_wq,
- &mdata->clk_ctrl_worker);
- else
- mdss_mdp_clk_ctrl_update(mdata);
- }
+ mdp_clk_cnt--;
+ if (mdp_clk_cnt == 0)
+ changed++;
}
+ pr_debug("%s: clk_cnt=%d changed=%d enable=%d\n",
+ __func__, mdp_clk_cnt, changed, enable);
+ if (changed) {
+ mdata->clk_ena = enable;
+ if (enable)
+ pm_runtime_get_sync(&mdata->pdev->dev);
+
+ mdss_mdp_clk_update(MDSS_CLK_AHB, enable);
+ mdss_mdp_clk_update(MDSS_CLK_AXI, enable);
+ mdss_mdp_clk_update(MDSS_CLK_MDP_CORE, enable);
+ mdss_mdp_clk_update(MDSS_CLK_MDP_LUT, enable);
+ if (mdata->vsync_ena)
+ mdss_mdp_clk_update(MDSS_CLK_MDP_VSYNC, enable);
+
+ if (!enable)
+ pm_runtime_put(&mdata->pdev->dev);
+ }
+
+ mutex_unlock(&mdp_clk_lock);
}
static inline int mdss_mdp_irq_clk_register(struct mdss_data_type *mdata,
@@ -935,9 +906,6 @@
if (rc)
return rc;
- mdata->clk_ctrl_wq = create_singlethread_workqueue("mdp_clk_wq");
- INIT_WORK(&mdata->clk_ctrl_worker, mdss_mdp_clk_ctrl_workqueue_handler);
-
mdata->iclient = msm_ion_client_create(-1, mdata->pdev->name);
if (IS_ERR_OR_NULL(mdata->iclient)) {
pr_err("msm_ion_client_create() return error (%p)\n",
@@ -1508,6 +1476,10 @@
&data);
mdata->rot_block_size = (!rc ? data : 128);
+ mdata->has_bwc = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-has-bwc");
+ mdata->has_decimation = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-has-decimation");
return 0;
}
@@ -1569,8 +1541,6 @@
static inline int mdss_mdp_suspend_sub(struct mdss_data_type *mdata)
{
- flush_workqueue(mdata->clk_ctrl_wq);
-
mdata->suspend_fs_ena = mdata->fs_ena;
mdss_mdp_footswitch_ctrl(mdata, false);
@@ -1668,8 +1638,6 @@
dev_dbg(dev, "pm_runtime: idling...\n");
- flush_workqueue(mdata->clk_ctrl_wq);
-
return 0;
}
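The clock-control rework above drops the atomic refcount plus workqueue scheme in favor of a plain counter under mdp_clk_lock, toggling clocks only on 0<->1 transitions. A simplified sketch of that pattern; clk_on()/clk_off() are stand-ins for the pm_runtime and mdss_mdp_clk_update calls, not real functions:

    static DEFINE_MUTEX(clk_lock);
    static int clk_cnt;

    static void clk_vote(int enable)
    {
        int changed = 0;

        mutex_lock(&clk_lock);
        if (enable) {
            if (clk_cnt++ == 0)
                changed = 1;    /* first user: turn clocks on */
        } else {
            if (--clk_cnt == 0)
                changed = 1;    /* last user gone: turn clocks off */
        }
        if (changed) {
            if (enable)
                clk_on();    /* stand-in: pm_runtime_get_sync + clk updates */
            else
                clk_off();   /* stand-in: clk updates + pm_runtime_put */
        }
        mutex_unlock(&clk_lock);
    }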
diff --git a/drivers/video/msm/mdss/mdss_mdp.h b/drivers/video/msm/mdss/mdss_mdp.h
index 175a07f..6018e6f 100644
--- a/drivers/video/msm/mdss/mdss_mdp.h
+++ b/drivers/video/msm/mdss/mdss_mdp.h
@@ -38,6 +38,7 @@
#define MAX_PLANES 4
#define MAX_DOWNSCALE_RATIO 4
#define MAX_UPSCALE_RATIO 20
+#define MAX_DECIMATION 4
#define C3_ALPHA 3 /* alpha */
#define C2_R_Cr 2 /* R/Cr */
@@ -121,6 +122,7 @@
u32 opmode;
u32 flush_bits;
+ bool is_video_mode;
u32 play_cnt;
u32 vsync_cnt;
u32 underrun_cnt;
@@ -128,6 +130,7 @@
u16 width;
u16 height;
u32 dst_format;
+ bool is_secure;
u32 bus_ab_quota;
u32 bus_ib_quota;
@@ -148,6 +151,7 @@
int (*display_fnc) (struct mdss_mdp_ctl *ctl, void *arg);
int (*wait_fnc) (struct mdss_mdp_ctl *ctl, void *arg);
int (*set_vsync_handler) (struct mdss_mdp_ctl *, mdp_vsync_handler_t);
+ u32 (*read_line_cnt_fnc) (struct mdss_mdp_ctl *);
void *priv_data;
};
@@ -268,6 +272,8 @@
u16 img_width;
u16 img_height;
+ u8 horz_deci;
+ u8 vert_deci;
struct mdss_mdp_img_rect src;
struct mdss_mdp_img_rect dst;
@@ -287,6 +293,7 @@
u32 params_changed;
unsigned long smp[MAX_PLANES];
+ unsigned long smp_reserved[MAX_PLANES];
struct mdss_mdp_data back_buf;
struct mdss_mdp_data front_buf;
@@ -312,6 +319,7 @@
int borderfill_enable;
int overlay_play_enable;
int hw_refresh;
+ void *cpu_pm_hdl;
struct mdss_data_type *mdata;
struct mutex ov_lock;
@@ -320,6 +328,7 @@
struct list_head overlay_list;
struct list_head pipes_used;
struct list_head pipes_cleanup;
+ bool mixer_swap;
};
#define is_vig_pipe(_pipe_id_) ((_pipe_id_) <= MDSS_MDP_SSPP_VIG2)
@@ -394,6 +403,8 @@
int mdss_mdp_mixer_pipe_unstage(struct mdss_mdp_pipe *pipe);
int mdss_mdp_display_commit(struct mdss_mdp_ctl *ctl, void *arg);
int mdss_mdp_display_wait4comp(struct mdss_mdp_ctl *ctl);
+int mdss_mdp_display_wakeup_time(struct mdss_mdp_ctl *ctl,
+ ktime_t *wakeup_time);
int mdss_mdp_csc_setup(u32 block, u32 blk_idx, u32 tbl_idx, u32 csc_type);
int mdss_mdp_csc_setup_data(u32 block, u32 blk_idx, u32 tbl_idx,
@@ -451,6 +462,9 @@
void mdss_mdp_pipe_unmap(struct mdss_mdp_pipe *pipe);
struct mdss_mdp_pipe *mdss_mdp_pipe_alloc_dma(struct mdss_mdp_mixer *mixer);
+int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe);
+void mdss_mdp_smp_unreserve(struct mdss_mdp_pipe *pipe);
+
int mdss_mdp_pipe_addr_setup(struct mdss_data_type *mdata, u32 *offsets,
u32 *ftch_y_id, u32 type, u32 num_base, u32 len);
int mdss_mdp_mixer_addr_setup(struct mdss_data_type *mdata, u32 *mixer_offsets,
diff --git a/drivers/video/msm/mdss/mdss_mdp_ctl.c b/drivers/video/msm/mdss/mdss_mdp_ctl.c
index 03a33cd..6c9cce2 100644
--- a/drivers/video/msm/mdss/mdss_mdp_ctl.c
+++ b/drivers/video/msm/mdss/mdss_mdp_ctl.c
@@ -47,6 +47,15 @@
writel_relaxed(val, mixer->base + reg);
}
+static inline u32 mdss_mdp_get_pclk_rate(struct mdss_mdp_ctl *ctl)
+{
+ struct mdss_panel_info *pinfo = &ctl->panel_data->panel_info;
+
+ return (ctl->intf_type == MDSS_INTF_DSI) ?
+ pinfo->mipi.dsi_pclk_rate :
+ pinfo->clk_rate;
+}
+
static int mdss_mdp_ctl_perf_commit(struct mdss_data_type *mdata, u32 flags)
{
struct mdss_mdp_ctl *ctl;
@@ -202,13 +211,7 @@
max_clk_rate = clk_rate;
if (ctl->intf_type) {
- struct mdss_panel_info *pinfo;
-
- pinfo = &ctl->panel_data->panel_info;
- clk_rate = (ctl->intf_type == MDSS_INTF_DSI) ?
- pinfo->mipi.dsi_pclk_rate :
- pinfo->clk_rate;
-
+ clk_rate = mdss_mdp_get_pclk_rate(ctl);
/* minimum clock rate due to inefficiency in 3dmux */
clk_rate = mult_frac(clk_rate >> 1, 9, 8);
if (clk_rate > max_clk_rate)
@@ -289,6 +292,8 @@
}
mutex_lock(&mdss_mdp_ctl_lock);
ctl->ref_cnt--;
+ ctl->intf_num = MDSS_MDP_NO_INTF;
+ ctl->is_secure = false;
ctl->power_on = false;
ctl->start_fnc = NULL;
ctl->stop_fnc = NULL;
@@ -296,6 +301,7 @@
ctl->display_fnc = NULL;
ctl->wait_fnc = NULL;
ctl->set_vsync_handler = NULL;
+ ctl->read_line_cnt_fnc = NULL;
mutex_unlock(&mdss_mdp_ctl_lock);
return 0;
@@ -646,15 +652,18 @@
}
ctl->mfd = mfd;
ctl->panel_data = pdata;
+ ctl->is_video_mode = false;
switch (pdata->panel_info.type) {
case EDP_PANEL:
+ ctl->is_video_mode = true;
ctl->intf_num = MDSS_MDP_INTF0;
ctl->intf_type = MDSS_INTF_EDP;
ctl->opmode = MDSS_MDP_CTL_OP_VIDEO_MODE;
ctl->start_fnc = mdss_mdp_video_start;
break;
case MIPI_VIDEO_PANEL:
+ ctl->is_video_mode = true;
if (pdata->panel_info.pdest == DISPLAY_1)
ctl->intf_num = MDSS_MDP_INTF1;
else
@@ -673,6 +682,7 @@
ctl->start_fnc = mdss_mdp_cmd_start;
break;
case DTV_PANEL:
+ ctl->is_video_mode = true;
ctl->intf_num = MDSS_MDP_INTF3;
ctl->intf_type = MDSS_INTF_HDMI;
ctl->opmode = MDSS_MDP_CTL_OP_VIDEO_MODE;
@@ -1084,14 +1094,6 @@
stage);
}
- if (mixercfg == MDSS_MDP_LM_BORDER_COLOR &&
- pipe->src_fmt->alpha_enable &&
- pipe->dst.w == mixer->width &&
- pipe->dst.h == mixer->height) {
- pr_debug("setting pipe=%d as BG_PIPE\n", pipe->num);
- bgalpha = 1;
- }
-
mixercfg |= stage << (3 * pipe->num);
mdp_mixer_write(mixer, off + MDSS_MDP_REG_LM_OP_MODE, blend_op);
@@ -1196,16 +1198,19 @@
struct mdss_mdp_mixer *mdss_mdp_mixer_get(struct mdss_mdp_ctl *ctl, int mux)
{
struct mdss_mdp_mixer *mixer = NULL;
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(ctl->mfd);
if (!ctl)
return NULL;
switch (mux) {
case MDSS_MDP_MIXER_MUX_DEFAULT:
case MDSS_MDP_MIXER_MUX_LEFT:
- mixer = ctl->mixer_left;
+ mixer = mdp5_data->mixer_swap ?
+ ctl->mixer_right : ctl->mixer_left;
break;
case MDSS_MDP_MIXER_MUX_RIGHT:
- mixer = ctl->mixer_right;
+ mixer = mdp5_data->mixer_swap ?
+ ctl->mixer_left : ctl->mixer_right;
break;
}
@@ -1235,6 +1240,7 @@
{
struct mdss_mdp_ctl *ctl;
struct mdss_mdp_mixer *mixer;
+ int i;
if (!pipe)
return -EINVAL;
@@ -1258,7 +1264,12 @@
if (params_changed) {
mixer->params_changed++;
- mixer->stage_pipe[pipe->mixer_stage] = pipe;
+ for (i = 0; i < MDSS_MDP_MAX_STAGE; i++) {
+ if (i == pipe->mixer_stage)
+ mixer->stage_pipe[i] = pipe;
+ else if (mixer->stage_pipe[i] == pipe)
+ mixer->stage_pipe[i] = NULL;
+ }
}
if (pipe->type == MDSS_MDP_PIPE_TYPE_DMA)
@@ -1291,9 +1302,10 @@
if (mutex_lock_interruptible(&ctl->lock))
return -EINTR;
- mixer->params_changed++;
- mixer->stage_pipe[pipe->mixer_stage] = NULL;
-
+ if (pipe == mixer->stage_pipe[pipe->mixer_stage]) {
+ mixer->params_changed++;
+ mixer->stage_pipe[pipe->mixer_stage] = NULL;
+ }
mutex_unlock(&ctl->lock);
return 0;
@@ -1310,6 +1322,71 @@
return 0;
}
+int mdss_mdp_display_wakeup_time(struct mdss_mdp_ctl *ctl,
+ ktime_t *wakeup_time)
+{
+ struct mdss_panel_info *pinfo;
+ u32 clk_rate, clk_period;
+ u32 current_line, total_line;
+ u32 time_of_line, time_to_vsync;
+ ktime_t current_time = ktime_get();
+
+ if (!ctl->read_line_cnt_fnc)
+ return -ENOSYS;
+
+ pinfo = &ctl->panel_data->panel_info;
+ if (!pinfo)
+ return -ENODEV;
+
+ clk_rate = mdss_mdp_get_pclk_rate(ctl);
+
+ clk_rate /= 1000; /* in kHz */
+ if (!clk_rate)
+ return -EINVAL;
+
+ /*
+ * Calculate clk_period in picoseconds to maintain good
+ * accuracy with high pclk rates; the result fits in a
+ * 17-bit range.
+ */
+ clk_period = 1000000000 / clk_rate;
+ if (!clk_period)
+ return -EINVAL;
+
+ time_of_line = (pinfo->lcdc.h_back_porch +
+ pinfo->lcdc.h_front_porch +
+ pinfo->lcdc.h_pulse_width +
+ pinfo->xres) * clk_period;
+
+ time_of_line /= 1000; /* in nano second */
+ if (!time_of_line)
+ return -EINVAL;
+
+ current_line = ctl->read_line_cnt_fnc(ctl);
+
+ total_line = pinfo->lcdc.v_back_porch +
+ pinfo->lcdc.v_front_porch +
+ pinfo->lcdc.v_pulse_width +
+ pinfo->yres;
+
+ if (current_line > total_line)
+ return -EINVAL;
+
+ time_to_vsync = time_of_line * (total_line - current_line);
+ if (!time_to_vsync)
+ return -EINVAL;
+
+ *wakeup_time = ktime_add_ns(current_time, time_to_vsync);
+
+ pr_debug("clk_rate=%dkHz clk_period=%d cur_line=%d tot_line=%d\n",
+ clk_rate, clk_period, current_line, total_line);
+ pr_debug("time_to_vsync=%d current_time=%d wakeup_time=%d\n",
+ time_to_vsync, (int)ktime_to_ms(current_time),
+ (int)ktime_to_ms(*wakeup_time));
+
+ return 0;
+}
+
int mdss_mdp_display_wait4comp(struct mdss_mdp_ctl *ctl)
{
int ret;
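The wakeup-time helper added above reduces to a few unit conversions: the pixel clock in kHz gives a clock period in picoseconds, one line time in nanoseconds, and the number of lines remaining until vsync scales that. A condensed sketch with the same arithmetic (parameter names are illustrative):

    static u64 time_to_vsync_ns(u32 pclk_khz, u32 h_total, u32 v_total,
                                u32 cur_line)
    {
        u32 clk_period_ps = 1000000000 / pclk_khz;      /* e.g. 75000 kHz -> 13333 ps */
        u32 line_ns = (h_total * clk_period_ps) / 1000; /* h_total = hbp+hfp+hpw+xres */

        /* v_total = vbp+vfp+vpw+yres; remaining lines until the next vsync */
        return (u64)line_ns * (v_total - cur_line);
    }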
diff --git a/drivers/video/msm/mdss/mdss_mdp_hwio.h b/drivers/video/msm/mdss/mdss_mdp_hwio.h
index d50f47e..a59560e 100644
--- a/drivers/video/msm/mdss/mdss_mdp_hwio.h
+++ b/drivers/video/msm/mdss/mdss_mdp_hwio.h
@@ -190,8 +190,7 @@
#define MDSS_MDP_REG_SSPP_CURRENT_SRC1_ADDR 0x0A8
#define MDSS_MDP_REG_SSPP_CURRENT_SRC2_ADDR 0x0AC
#define MDSS_MDP_REG_SSPP_CURRENT_SRC3_ADDR 0x0B0
-#define MDSS_MDP_REG_SSPP_LINE_SKIP_STEP_C03 0x0B4
-#define MDSS_MDP_REG_SSPP_LINE_SKIP_STEP_C12 0x0B8
+#define MDSS_MDP_REG_SSPP_DECIMATION_CONFIG 0x0B4
#define MDSS_MDP_REG_VIG_OP_MODE 0x200
#define MDSS_MDP_REG_VIG_QSEED2_CONFIG 0x204
@@ -349,6 +348,8 @@
#define MDSS_MDP_REG_WB_OUT_SIZE 0x074
#define MDSS_MDP_REG_WB_ALPHA_X_VALUE 0x078
#define MDSS_MDP_REG_WB_CSC_BASE 0x260
+#define MDSS_MDP_REG_WB_DST_ADDR_SW_STATUS 0x2B0
+
enum mdss_mdp_dspp_index {
MDSS_MDP_DSPP0,
@@ -402,6 +403,9 @@
#define MDSS_MDP_REG_INTF_TEST_CTL 0x054
#define MDSS_MDP_REG_INTF_TP_COLOR0 0x058
#define MDSS_MDP_REG_INTF_TP_COLOR1 0x05C
+#define MDSS_MDP_REG_INTF_FRAME_LINE_COUNT_EN 0x0A8
+#define MDSS_MDP_REG_INTF_FRAME_COUNT 0x0AC
+#define MDSS_MDP_REG_INTF_LINE_COUNT 0x0B0
#define MDSS_MDP_REG_INTF_DEFLICKER_CONFIG 0x0F0
#define MDSS_MDP_REG_INTF_DEFLICKER_STRNG_COEFF 0x0F4
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c b/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
index 006a8dd..e0be862 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_cmd.c
@@ -15,30 +15,50 @@
#include "mdss_panel.h"
#include "mdss_mdp.h"
+#define VSYNC_EXPIRE_TICK 4
+
#define START_THRESHOLD 4
#define CONTINUE_TRESHOLD 4
#define MAX_SESSIONS 2
+/* wait for at most 2 vsyncs at the lowest refresh rate (24 Hz) */
+#define KOFF_TIMEOUT msecs_to_jiffies(84)
+
struct mdss_mdp_cmd_ctx {
u32 pp_num;
u8 ref_cnt;
-
struct completion pp_comp;
+ struct completion stop_comp;
atomic_t vsync_ref;
- spinlock_t vsync_lock;
- mdp_vsync_handler_t vsync_handler;
+ mdp_vsync_handler_t send_vsync;
int panel_on;
+ int koff_cnt;
+ int clk_enabled;
+ int clk_control;
+ int vsync_enabled;
+ int expire;
+ struct mutex clk_mtx;
+ spinlock_t clk_lock;
+ struct work_struct clk_work;
/* te config */
u8 tear_check;
- u16 total_lcd_lines;
- u16 v_porch; /* vertical porches */
- u32 vsync_cnt;
+ u16 height; /* panel height */
+ u16 vporch; /* vertical porches */
+ u32 vclk_line; /* vsync clock per line */
};
struct mdss_mdp_cmd_ctx mdss_mdp_cmd_ctx_list[MAX_SESSIONS];
+/*
+ * TE configuration:
+ * dsi byte clock calculated based on 70 fps
+ * around 14 ms to complete a kickoff cycle if TE is disabled
+ * vclk_line based on 60 fps
+ * write is faster than read
+ * init == start == rdptr
+ */
static int mdss_mdp_cmd_tearcheck_cfg(struct mdss_mdp_mixer *mixer,
struct mdss_mdp_cmd_ctx *ctx, int enable)
{
@@ -47,15 +67,21 @@
cfg = BIT(19); /* VSYNC_COUNTER_EN */
if (ctx->tear_check)
cfg |= BIT(20); /* VSYNC_IN_EN */
- cfg |= ctx->vsync_cnt;
+ cfg |= ctx->vclk_line;
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_CONFIG_VSYNC, cfg);
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_CONFIG_HEIGHT,
0xfff0); /* set to verh height */
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_VSYNC_INIT_VAL, 0);
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_RD_PTR_IRQ, 0);
- mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_START_POS, ctx->v_porch);
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_VSYNC_INIT_VAL,
+ ctx->height);
+
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_RD_PTR_IRQ,
+ ctx->height + 1);
+
+ mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_START_POS,
+ ctx->height);
+
mdss_mdp_pingpong_write(mixer, MDSS_MDP_REG_PP_SYNC_THRESH,
(CONTINUE_TRESHOLD << 16) | (START_THRESHOLD));
@@ -73,7 +99,6 @@
if (pinfo->mipi.vsync_enable && enable) {
u32 mdp_vsync_clk_speed_hz, total_lines;
- u32 vsync_cnt_cfg_dem;
mdss_mdp_vsync_clk_enable(1);
@@ -88,21 +113,18 @@
}
ctx->tear_check = pinfo->mipi.hw_vsync_mode;
-
- total_lines = pinfo->lcdc.v_back_porch +
- pinfo->lcdc.v_front_porch +
- pinfo->lcdc.v_pulse_width + pinfo->yres;
-
- vsync_cnt_cfg_dem =
- mult_frac(pinfo->mipi.frame_rate * total_lines,
- 1, 100);
-
- ctx->vsync_cnt = mdp_vsync_clk_speed_hz / vsync_cnt_cfg_dem;
-
- ctx->v_porch = pinfo->lcdc.v_back_porch +
+ ctx->height = pinfo->yres;
+ ctx->vporch = pinfo->lcdc.v_back_porch +
pinfo->lcdc.v_front_porch +
pinfo->lcdc.v_pulse_width;
- ctx->total_lcd_lines = total_lines;
+
+ total_lines = ctx->height + ctx->vporch;
+ total_lines *= pinfo->mipi.frame_rate;
+ ctx->vclk_line = mdp_vsync_clk_speed_hz / total_lines;
+
+ pr_debug("%s: fr=%d tline=%d vcnt=%d vrate=%d\n",
+ __func__, pinfo->mipi.frame_rate, total_lines,
+ ctx->vclk_line, mdp_vsync_clk_speed_hz);
} else {
enable = 0;
}
@@ -118,54 +140,6 @@
return 0;
}
-static inline void cmd_readptr_irq_enable(struct mdss_mdp_ctl *ctl)
-{
- struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
-
- if (atomic_inc_return(&ctx->vsync_ref) == 1) {
- pr_debug("%s:\n", __func__);
- mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num);
- }
-}
-
-static inline void cmd_readptr_irq_disable(struct mdss_mdp_ctl *ctl)
-{
- struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
-
- if (atomic_dec_return(&ctx->vsync_ref) == 0) {
- pr_debug("%s:\n", __func__);
- mdss_mdp_irq_disable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
- ctx->pp_num);
- }
-}
-
-int mdss_mdp_cmd_set_vsync_handler(struct mdss_mdp_ctl *ctl,
- mdp_vsync_handler_t vsync_handler)
-{
- struct mdss_mdp_cmd_ctx *ctx;
- unsigned long flags;
-
- ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
- if (!ctx) {
- pr_err("invalid ctx for ctl=%d\n", ctl->num);
- return -ENODEV;
- }
-
- spin_lock_irqsave(&ctx->vsync_lock, flags);
-
- if (!ctx->vsync_handler && vsync_handler) {
- ctx->vsync_handler = vsync_handler;
- cmd_readptr_irq_enable(ctl);
- } else if (ctx->vsync_handler && !vsync_handler) {
- cmd_readptr_irq_disable(ctl);
- ctx->vsync_handler = vsync_handler;
- }
-
- spin_unlock_irqrestore(&ctx->vsync_lock, flags);
-
- return 0;
-}
-
static void mdss_mdp_cmd_readptr_done(void *arg)
{
struct mdss_mdp_ctl *ctl = arg;
@@ -177,15 +151,29 @@
return;
}
- pr_debug("%s: ctl=%d intf_num=%d\n", __func__, ctl->num, ctl->intf_num);
+ pr_debug("%s: num=%d ctx=%d expire=%d koff=%d\n", __func__, ctl->num,
+ ctx->pp_num, ctx->expire, ctx->koff_cnt);
vsync_time = ktime_get();
ctl->vsync_cnt++;
- spin_lock(&ctx->vsync_lock);
- if (ctx->vsync_handler)
- ctx->vsync_handler(ctl, vsync_time);
- spin_unlock(&ctx->vsync_lock);
+ spin_lock(&ctx->clk_lock);
+ if (ctx->send_vsync)
+ ctx->send_vsync(ctl, vsync_time);
+
+ if (ctx->expire) {
+ ctx->expire--;
+ if (ctx->expire == 0) {
+ if (ctx->koff_cnt <= 0) {
+ ctx->clk_control = 1;
+ schedule_work(&ctx->clk_work);
+ } else {
+ /* put off one vsync */
+ ctx->expire += 1;
+ }
+ }
+ }
+ spin_unlock(&ctx->clk_lock);
}
static void mdss_mdp_cmd_pingpong_done(void *arg)
@@ -193,12 +181,161 @@
struct mdss_mdp_ctl *ctl = arg;
struct mdss_mdp_cmd_ctx *ctx = ctl->priv_data;
- pr_debug("%s: intf_num=%d ctx=%p\n", __func__, ctl->intf_num, ctx);
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return;
+ }
+ spin_lock(&ctx->clk_lock);
mdss_mdp_irq_disable_nosync(MDSS_MDP_IRQ_PING_PONG_COMP, ctx->pp_num);
- if (ctx)
- complete(&ctx->pp_comp);
+ complete_all(&ctx->pp_comp);
+
+ if (ctx->koff_cnt)
+ ctx->koff_cnt--;
+
+ pr_debug("%s: ctl_num=%d intf_num=%d ctx=%d kcnt=%d\n", __func__,
+ ctl->num, ctl->intf_num, ctx->pp_num, ctx->koff_cnt);
+
+ spin_unlock(&ctx->clk_lock);
+}
+
+static void clk_ctrl_work(struct work_struct *work)
+{
+ unsigned long flags;
+ struct mdss_mdp_cmd_ctx *ctx =
+ container_of(work, typeof(*ctx), clk_work);
+
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return;
+ }
+
+ pr_debug("%s:ctx=%p num=%d\n", __func__, ctx, ctx->pp_num);
+
+ mutex_lock(&ctx->clk_mtx);
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ if (ctx->clk_control && ctx->clk_enabled) {
+ ctx->clk_enabled = 0;
+ ctx->clk_control = 0;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ /*
+ * make sure dsi link is idle here
+ */
+ ctx->vsync_enabled = 0;
+ mdss_mdp_irq_disable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
+ ctx->pp_num);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ complete(&ctx->stop_comp);
+ pr_debug("%s: SET_CLK_OFF, pid=%d\n", __func__, current->pid);
+ } else {
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+}
+
+static int mdss_mdp_cmd_vsync_ctrl(struct mdss_mdp_ctl *ctl,
+ mdp_vsync_handler_t send_vsync)
+{
+ struct mdss_mdp_cmd_ctx *ctx;
+ unsigned long flags;
+ int enable;
+
+ ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("%s: invalid ctx\n", __func__);
+ return -ENODEV;
+ }
+
+ enable = (send_vsync != NULL);
+
+ pr_debug("%s: ctx=%p ctx=%d enabled=%d %d clk_enabled=%d clk_ctrl=%d\n",
+ __func__, ctx, ctx->pp_num, ctx->vsync_enabled, enable,
+ ctx->clk_enabled, ctx->clk_control);
+
+ mutex_lock(&ctx->clk_mtx);
+ if (ctx->vsync_enabled == enable) {
+ mutex_unlock(&ctx->clk_mtx);
+ return 0;
+ }
+
+ if (enable) {
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->clk_control = 0;
+ ctx->expire = 0;
+ ctx->send_vsync = send_vsync;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ if (ctx->clk_enabled == 0) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR,
+ ctx->pp_num);
+ ctx->vsync_enabled = 1;
+ ctx->clk_enabled = 1;
+ pr_debug("%s: SET_CLK_ON, pid=%d\n", __func__,
+ current->pid);
+ }
+ } else {
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->expire = VSYNC_EXPIRE_TICK;
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+
+ return 0;
+}
+
+static void mdss_mdp_cmd_chk_clock(struct mdss_mdp_cmd_ctx *ctx)
+{
+ unsigned long flags;
+ int set_clk_on = 0;
+
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return;
+ }
+
+ pr_debug("%s: ctx=%p num=%d clk_enabled=%d\n", __func__,
+ ctx, ctx->pp_num, ctx->clk_enabled);
+
+ mutex_lock(&ctx->clk_mtx);
+ spin_lock_irqsave(&ctx->clk_lock, flags);
+ ctx->koff_cnt++;
+ ctx->clk_control = 0;
+ ctx->expire = VSYNC_EXPIRE_TICK;
+ if (ctx->clk_enabled == 0) {
+ set_clk_on++;
+ ctx->clk_enabled = 1;
+ }
+ spin_unlock_irqrestore(&ctx->clk_lock, flags);
+
+ if (set_clk_on) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ ctx->vsync_enabled = 1;
+ mdss_mdp_irq_enable(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num);
+ pr_debug("%s: ctx=%p num=%d SET_CLK_ON\n", __func__,
+ ctx, ctx->pp_num);
+ }
+ mutex_unlock(&ctx->clk_mtx);
+}
+
+static int mdss_mdp_cmd_wait4comp(struct mdss_mdp_ctl *ctl, void *arg)
+{
+ struct mdss_mdp_cmd_ctx *ctx;
+ int rc;
+
+ ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return -ENODEV;
+ }
+
+ pr_debug("%s: intf_num=%d ctx=%p\n", __func__, ctl->intf_num, ctx);
+
+ rc = wait_for_completion_interruptible_timeout(&ctx->pp_comp,
+ KOFF_TIMEOUT);
+ WARN(rc <= 0, "cmd kickoff timed out (%d) ctl=%d\n", rc, ctl->num);
+
+ return rc;
}
int mdss_mdp_cmd_kickoff(struct mdss_mdp_ctl *ctl, void *arg)
@@ -207,14 +344,16 @@
int rc;
ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
- pr_debug("%s: kickoff intf_num=%d ctx=%p\n", __func__,
- ctl->intf_num, ctx);
-
if (!ctx) {
pr_err("invalid ctx\n");
return -ENODEV;
}
+ pr_debug("%s: kickoff intf_num=%d ctx=%p\n", __func__,
+ ctl->intf_num, ctx);
+
+ mdss_mdp_cmd_chk_clock(ctx);
+
if (ctx->panel_on == 0) {
rc = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_UNBLANK, NULL);
WARN(rc, "intf %d unblank error (%d)\n", ctl->intf_num, rc);
@@ -230,27 +369,37 @@
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_START, 1);
- wait_for_completion_interruptible(&ctx->pp_comp);
-
return 0;
}
int mdss_mdp_cmd_stop(struct mdss_mdp_ctl *ctl)
{
struct mdss_mdp_cmd_ctx *ctx;
+ int need_wait = 0;
int ret;
- pr_debug("%s: +\n", __func__);
-
ctx = (struct mdss_mdp_cmd_ctx *) ctl->priv_data;
if (!ctx) {
pr_err("invalid ctx\n");
return -ENODEV;
}
+ pr_debug("%s:+ vaync_enable=%d expire=%d\n", __func__,
+ ctx->vsync_enabled, ctx->expire);
+
+ mutex_lock(&ctx->clk_mtx);
+ if (ctx->vsync_enabled) {
+ INIT_COMPLETION(ctx->stop_comp);
+ need_wait = 1;
+ }
+ mutex_unlock(&ctx->clk_mtx);
+
+ if (need_wait)
+ wait_for_completion_interruptible(&ctx->stop_comp);
+
ctx->panel_on = 0;
- mdss_mdp_cmd_set_vsync_handler(ctl, NULL);
+ mdss_mdp_cmd_vsync_ctrl(ctl, NULL);
mdss_mdp_set_intr_callback(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctl->intf_num,
NULL, NULL);
@@ -265,7 +414,6 @@
ret = mdss_mdp_ctl_intf_event(ctl, MDSS_EVENT_PANEL_OFF, NULL);
WARN(ret, "intf %d unblank error (%d)\n", ctl->intf_num, ret);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
pr_debug("%s:-\n", __func__);
@@ -280,8 +428,6 @@
pr_debug("%s:+\n", __func__);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
-
mixer = mdss_mdp_mixer_get(ctl, MDSS_MDP_MIXER_MUX_LEFT);
if (!mixer) {
pr_err("mixer not setup correctly\n");
@@ -308,8 +454,14 @@
ctx->pp_num = mixer->num;
init_completion(&ctx->pp_comp);
- spin_lock_init(&ctx->vsync_lock);
+ init_completion(&ctx->stop_comp);
atomic_set(&ctx->vsync_ref, 0);
+ spin_lock_init(&ctx->clk_lock);
+ mutex_init(&ctx->clk_mtx);
+ INIT_WORK(&ctx->clk_work, clk_ctrl_work);
+
+ pr_debug("%s: ctx=%p num=%d mixer=%d\n", __func__,
+ ctx, ctx->pp_num, mixer->num);
mdss_mdp_set_intr_callback(MDSS_MDP_IRQ_PING_PONG_RD_PTR, ctx->pp_num,
mdss_mdp_cmd_readptr_done, ctl);
@@ -325,7 +477,8 @@
ctl->stop_fnc = mdss_mdp_cmd_stop;
ctl->display_fnc = mdss_mdp_cmd_kickoff;
- ctl->set_vsync_handler = mdss_mdp_cmd_set_vsync_handler;
+ ctl->wait_fnc = mdss_mdp_cmd_wait4comp;
+ ctl->set_vsync_handler = mdss_mdp_cmd_vsync_ctrl;
pr_debug("%s:-\n", __func__);
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_video.c b/drivers/video/msm/mdss/mdss_mdp_intf_video.c
index 6e631e9..94ae710 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_video.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_video.c
@@ -59,6 +59,22 @@
writel_relaxed(val, ctx->base + reg);
}
+static inline u32 mdp_video_read(struct mdss_mdp_video_ctx *ctx,
+ u32 reg)
+{
+ return readl_relaxed(ctx->base + reg);
+}
+
+static inline u32 mdss_mdp_video_line_count(struct mdss_mdp_ctl *ctl)
+{
+ struct mdss_mdp_video_ctx *ctx = ctl->priv_data;
+ u32 line_cnt = 0;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
+ line_cnt = mdp_video_read(ctx, MDSS_MDP_REG_INTF_LINE_COUNT);
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+ return line_cnt;
+}
+
int mdss_mdp_video_addr_setup(struct mdss_data_type *mdata,
u32 *offsets, u32 count)
{
@@ -171,6 +187,7 @@
p->underflow_clr);
mdp_video_write(ctx, MDSS_MDP_REG_INTF_HSYNC_SKEW, p->hsync_skew);
mdp_video_write(ctx, MDSS_MDP_REG_INTF_POLARITY_CTL, polarity_ctl);
+ mdp_video_write(ctx, MDSS_MDP_REG_INTF_FRAME_LINE_COUNT_EN, 0x3);
return 0;
}
@@ -284,7 +301,8 @@
vsync_time = ktime_get();
ctl->vsync_cnt++;
- pr_debug("intr ctl=%d vsync cnt=%u\n", ctl->num, ctl->vsync_cnt);
+ pr_debug("intr ctl=%d vsync cnt=%u vsync_time=%d\n",
+ ctl->num, ctl->vsync_cnt, (int)ktime_to_ms(vsync_time));
complete_all(&ctx->vsync_comp);
spin_lock(&ctx->vsync_lock);
@@ -454,6 +472,7 @@
ctl->display_fnc = mdss_mdp_video_display;
ctl->wait_fnc = mdss_mdp_video_wait4comp;
ctl->set_vsync_handler = mdss_mdp_video_set_vsync_handler;
+ ctl->read_line_cnt_fnc = mdss_mdp_video_line_count;
return 0;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c b/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
index d86527b..0c08eda 100644
--- a/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
+++ b/drivers/video/msm/mdss/mdss_mdp_intf_writeback.c
@@ -17,6 +17,9 @@
#include "mdss_mdp.h"
#include "mdss_mdp_rotator.h"
+/* wait for at most 2 vsyncs at the lowest refresh rate (24 Hz) */
+#define KOFF_TIMEOUT msecs_to_jiffies(84)
+
enum mdss_mdp_writeback_type {
MDSS_MDP_WRITEBACK_TYPE_ROTATOR,
MDSS_MDP_WRITEBACK_TYPE_LINE,
@@ -28,6 +31,8 @@
char __iomem *base;
u8 ref_cnt;
u8 type;
+ struct completion wb_comp;
+ int comp_cnt;
u32 intr_type;
u32 intf_num;
@@ -321,10 +326,37 @@
pr_debug("intr wb_num=%d\n", ctx->wb_num);
mdss_mdp_irq_disable_nosync(ctx->intr_type, ctx->intf_num);
- mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, true);
if (ctx->callback_fnc)
ctx->callback_fnc(ctx->callback_arg);
+
+ complete_all(&ctx->wb_comp);
+}
+
+static int mdss_mdp_wb_wait4comp(struct mdss_mdp_ctl *ctl, void *arg)
+{
+ struct mdss_mdp_writeback_ctx *ctx;
+ int rc = 0;
+
+ ctx = (struct mdss_mdp_writeback_ctx *) ctl->priv_data;
+ if (!ctx) {
+ pr_err("invalid ctx\n");
+ return -ENODEV;
+ }
+
+ if (ctx->comp_cnt == 0)
+ return rc;
+
+ rc = wait_for_completion_interruptible_timeout(&ctx->wb_comp,
+ KOFF_TIMEOUT);
+ WARN(rc <= 0, "writeback kickoff timed out (%d) ctl=%d\n",
+ rc, ctl->num);
+
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false); /* clock off */
+
+ ctx->comp_cnt--;
+
+ return rc;
}
static int mdss_mdp_writeback_display(struct mdss_mdp_ctl *ctl, void *arg)
@@ -338,6 +370,12 @@
if (!ctx)
return -ENODEV;
+ if (ctx->comp_cnt) {
+ pr_err("previous kickoff not completed yet, ctl=%d\n",
+ ctl->num);
+ return -EPERM;
+ }
+
wb_args = (struct mdss_mdp_writeback_arg *) arg;
if (!wb_args)
return -ENOENT;
@@ -352,14 +390,18 @@
ctx->callback_arg = wb_args->priv_data;
flush_bits = BIT(16); /* WB */
+ mdp_wb_write(ctx, MDSS_MDP_REG_WB_DST_ADDR_SW_STATUS, ctl->is_secure);
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_FLUSH, flush_bits);
+ INIT_COMPLETION(ctx->wb_comp);
mdss_mdp_irq_enable(ctx->intr_type, ctx->intf_num);
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_ON, false);
mdss_mdp_ctl_write(ctl, MDSS_MDP_REG_CTL_START, 1);
wmb();
+ ctx->comp_cnt++;
+
return 0;
}
@@ -387,6 +429,7 @@
ctx->wb_num = ctl->num; /* wb num should match ctl num */
ctx->base = ctl->wb_base;
ctx->initialized = false;
+ init_completion(&ctx->wb_comp);
mdss_mdp_set_intr_callback(ctx->intr_type, ctx->intf_num,
mdss_mdp_writeback_intr_done, ctx);
@@ -397,6 +440,7 @@
ctl->prepare_fnc = mdss_mdp_writeback_prepare_wfd;
ctl->stop_fnc = mdss_mdp_writeback_stop;
ctl->display_fnc = mdss_mdp_writeback_display;
+ ctl->wait_fnc = mdss_mdp_wb_wait4comp;
return ret;
}
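Both KOFF_TIMEOUT definitions above encode the same rule: wait up to two frame periods at the lowest supported refresh rate before declaring a kickoff stuck. The arithmetic, written out (macro names are illustrative):

    #define LOWEST_FPS      24
    #define FRAMES_TO_WAIT  2
    /* 2 * 1000 / 24 = 83.3 ms, rounded up to 84 ms */
    #define KOFF_TIMEOUT_MS DIV_ROUND_UP(FRAMES_TO_WAIT * 1000, LOWEST_FPS)
    #define KOFF_TIMEOUT    msecs_to_jiffies(KOFF_TIMEOUT_MS)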
diff --git a/drivers/video/msm/mdss/mdss_mdp_overlay.c b/drivers/video/msm/mdss/mdss_mdp_overlay.c
index 3cce4f9..6c90794 100644
--- a/drivers/video/msm/mdss/mdss_mdp_overlay.c
+++ b/drivers/video/msm/mdss/mdss_mdp_overlay.c
@@ -24,6 +24,7 @@
#include <linux/msm_mdp.h>
#include <mach/iommu_domains.h>
+#include <mach/event_timer.h>
#include "mdss.h"
#include "mdss_fb.h"
@@ -37,6 +38,7 @@
static atomic_t ov_active_panels = ATOMIC_INIT(0);
static int mdss_mdp_overlay_free_fb_pipe(struct msm_fb_data_type *mfd);
+static int mdss_mdp_overlay_fb_parse_dt(struct msm_fb_data_type *mfd);
static int mdss_mdp_overlay_get(struct msm_fb_data_type *mfd,
struct mdp_overlay *req)
@@ -98,8 +100,20 @@
return -EOVERFLOW;
}
+ if (req->horz_deci || req->vert_deci) {
+ if (!mdata->has_decimation) {
+ pr_err("No Decimation in MDP V=%x\n", mdata->mdp_rev);
+ return -EINVAL;
+ } else if ((req->horz_deci > MAX_DECIMATION) ||
+ (req->vert_deci > MAX_DECIMATION)) {
+ pr_err("Invalid decimation factors horz=%d vert=%d\n",
+ req->horz_deci, req->vert_deci);
+ return -EINVAL;
+ }
+ }
+
if (!(req->flags & MDSS_MDP_ROT_ONLY)) {
- u32 dst_w, dst_h;
+ u32 src_w, src_h, dst_w, dst_h;
if ((CHECK_BOUNDS(req->dst_rect.x, req->dst_rect.w, xres) ||
CHECK_BOUNDS(req->dst_rect.y, req->dst_rect.h, yres))) {
@@ -117,27 +131,36 @@
dst_h = req->dst_rect.h;
}
- if ((req->src_rect.w * MAX_UPSCALE_RATIO) < dst_w) {
+ src_w = req->src_rect.w >> req->horz_deci;
+ src_h = req->src_rect.h >> req->vert_deci;
+
+ if (src_w > MAX_MIXER_WIDTH) {
+ pr_err("invalid source width=%d HDec=%d\n",
+ req->src_rect.w, req->horz_deci);
+ return -EINVAL;
+ }
+
+ if ((src_w * MAX_UPSCALE_RATIO) < dst_w) {
pr_err("too much upscaling Width %d->%d\n",
req->src_rect.w, req->dst_rect.w);
return -EINVAL;
}
- if ((req->src_rect.h * MAX_UPSCALE_RATIO) < dst_h) {
+ if ((src_h * MAX_UPSCALE_RATIO) < dst_h) {
pr_err("too much upscaling. Height %d->%d\n",
req->src_rect.h, req->dst_rect.h);
return -EINVAL;
}
- if (req->src_rect.w > (dst_w * MAX_DOWNSCALE_RATIO)) {
- pr_err("too much downscaling. Width %d->%d\n",
- req->src_rect.w, req->dst_rect.w);
+ if (src_w > (dst_w * MAX_DOWNSCALE_RATIO)) {
+ pr_err("too much downscaling. Width %d->%d H Dec=%d\n",
+ src_w, req->dst_rect.w, req->horz_deci);
return -EINVAL;
}
- if (req->src_rect.h > (dst_h * MAX_DOWNSCALE_RATIO)) {
- pr_err("too much downscaling. Height %d->%d\n",
- req->src_rect.h, req->dst_rect.h);
+ if (src_h > (dst_h * MAX_DOWNSCALE_RATIO)) {
+ pr_err("too much downscaling. Height %d->%d V Dec=%d\n",
+ src_h, req->dst_rect.h, req->vert_deci);
return -EINVAL;
}
@@ -176,6 +199,7 @@
struct mdss_mdp_rotator_session *rot;
struct mdss_mdp_format_params *fmt;
int ret = 0;
+ u32 bwc_enabled;
pr_debug("rot ctl=%u req id=%x\n", mdp5_data->ctl->num, req->id);
@@ -212,7 +236,14 @@
rot->flags = req->flags & (MDP_ROT_90 | MDP_FLIP_LR | MDP_FLIP_UD |
MDP_SECURE_OVERLAY_SESSION);
- rot->bwc_mode = (req->flags & MDP_BWC_EN) ? 1 : 0;
+ bwc_enabled = req->flags & MDP_BWC_EN;
+ if (bwc_enabled && !mdp5_data->mdata->has_bwc) {
+ pr_err("BWC is not supported in MDP version %x\n",
+ mdp5_data->mdata->mdp_rev);
+ rot->bwc_mode = 0;
+ } else {
+ rot->bwc_mode = bwc_enabled ? 1 : 0;
+ }
rot->format = fmt->format;
rot->img_width = req->src.width;
rot->img_height = req->src.height;
@@ -248,6 +279,7 @@
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
struct mdp_histogram_start_req hist;
int ret;
+ u32 bwc_enabled;
if (mdp5_data->ctl == NULL)
return -ENODEV;
@@ -344,8 +376,8 @@
pr_err("Can't switch mixer %d->%d pnum %d!\n",
pipe->mixer->num, mixer->num,
pipe->num);
- mdss_mdp_pipe_unmap(pipe);
- return -EINVAL;
+ ret = -EINVAL;
+ goto exit_fail;
}
pr_debug("switching pipe mixer %d->%d pnum %d\n",
pipe->mixer->num, mixer->num,
@@ -356,8 +388,15 @@
}
pipe->flags = req->flags;
- pipe->bwc_mode = pipe->mixer->rotator_mode ?
- 0 : (req->flags & MDP_BWC_EN ? 1 : 0) ;
+ bwc_enabled = req->flags & MDP_BWC_EN;
+ if (bwc_enabled && !mdp5_data->mdata->has_bwc) {
+ pr_err("BWC is not supported in MDP version %x\n",
+ mdp5_data->mdata->mdp_rev);
+ pipe->bwc_mode = 0;
+ } else {
+ pipe->bwc_mode = pipe->mixer->rotator_mode ?
+ 0 : (bwc_enabled ? 1 : 0) ;
+ }
pipe->img_width = req->src.width & 0x3fff;
pipe->img_height = req->src.height & 0x3fff;
pipe->src.x = req->src_rect.x;
@@ -368,7 +407,8 @@
pipe->dst.y = req->dst_rect.y;
pipe->dst.w = req->dst_rect.w;
pipe->dst.h = req->dst_rect.h;
-
+ pipe->horz_deci = req->horz_deci;
+ pipe->vert_deci = req->vert_deci;
pipe->src_fmt = fmt;
pipe->mixer_stage = req->z_order;
@@ -436,6 +476,12 @@
}
}
+ ret = mdss_mdp_smp_reserve(pipe);
+ if (ret) {
+ pr_debug("mdss_mdp_smp_reserve failed. ret=%d\n", ret);
+ goto exit_fail;
+ }
+
pipe->params_changed++;
req->id = pipe->ndx;
@@ -445,6 +491,25 @@
mdss_mdp_pipe_unmap(pipe);
return ret;
+
+exit_fail:
+ mdss_mdp_pipe_unmap(pipe);
+
+ mutex_lock(&mfd->lock);
+ if (pipe->play_cnt == 0) {
+ pr_debug("failed for pipe %d\n", pipe->num);
+ list_del(&pipe->used_list);
+ mdss_mdp_pipe_destroy(pipe);
+ }
+
+ /* invalidate any overlays in this framebuffer after failure */
+ list_for_each_entry(pipe, &mdp5_data->pipes_used, used_list) {
+ pr_debug("freeing allocations for pipe %d\n", pipe->num);
+ mdss_mdp_smp_unreserve(pipe);
+ pipe->params_changed = 0;
+ }
+ mutex_unlock(&mfd->lock);
+ return ret;
}
static int mdss_mdp_overlay_set(struct msm_fb_data_type *mfd,
@@ -535,6 +600,7 @@
list_move(&pipe->cleanup_list, &destroy_pipes);
mdss_mdp_overlay_free_buf(&pipe->back_buf);
mdss_mdp_overlay_free_buf(&pipe->front_buf);
+ pipe->mfd = NULL;
}
list_for_each_entry(pipe, &mdp5_data->pipes_used, used_list) {
@@ -677,6 +743,19 @@
return rc;
}
+static void mdss_mdp_overlay_update_pm(struct mdss_overlay_private *mdp5_data)
+{
+ ktime_t wakeup_time;
+
+ if (!mdp5_data->cpu_pm_hdl)
+ return;
+
+ if (mdss_mdp_display_wakeup_time(mdp5_data->ctl, &wakeup_time))
+ return;
+
+ activate_event_timer(mdp5_data->cpu_pm_hdl, wakeup_time);
+}
+
int mdss_mdp_overlay_kickoff(struct msm_fb_data_type *mfd)
{
struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
@@ -694,15 +773,17 @@
} else if (pipe->front_buf.num_planes) {
buf = &pipe->front_buf;
} else {
- pr_warn("pipe queue without buffer\n");
- buf = NULL;
+ pr_warn("pipe queue w/o buffer. unstaging layer\n");
+ pipe->params_changed = 0;
+ mdss_mdp_mixer_pipe_unstage(pipe);
+ continue;
}
ret = mdss_mdp_pipe_queue_data(pipe, buf);
if (IS_ERR_VALUE(ret)) {
pr_warn("Unable to queue data for pnum=%d\n",
pipe->num);
- mdss_mdp_overlay_free_buf(buf);
+ mdss_mdp_mixer_pipe_unstage(pipe);
}
}
@@ -716,6 +797,8 @@
if (IS_ERR_VALUE(ret))
goto commit_fail;
+ mdss_mdp_overlay_update_pm(mdp5_data);
+
ret = mdss_mdp_display_wait4comp(mdp5_data->ctl);
complete(&mfd->update.comp);
@@ -778,12 +861,13 @@
if (ndx == BORDERFILL_NDX) {
pr_debug("borderfill disable\n");
mdp5_data->borderfill_enable = false;
- return 0;
+ ret = 0;
+ goto done;
}
if (!mfd->panel_power_on) {
- mutex_unlock(&mdp5_data->ov_lock);
- return -EPERM;
+ ret = -EPERM;
+ goto done;
}
pr_debug("unset ndx=%x\n", ndx);
@@ -793,6 +877,7 @@
else
ret = mdss_mdp_overlay_release(mfd, ndx);
+done:
mutex_unlock(&mdp5_data->ov_lock);
return ret;
@@ -921,6 +1006,41 @@
return ret;
}
+static int mdss_mdp_overlay_force_cleanup(struct msm_fb_data_type *mfd)
+{
+ struct mdss_overlay_private *mdp5_data = mfd_to_mdp5_data(mfd);
+ struct mdss_mdp_ctl *ctl = mdp5_data->ctl;
+ int ret;
+
+ pr_debug("forcing cleanup to unset dma pipes on fb%d\n", mfd->index);
+
+ /*
+ * video mode panels require the layer to be unstaged and wait for
+ * vsync to be able to release buffer.
+ */
+ if (ctl && ctl->is_video_mode) {
+ ret = mdss_mdp_display_commit(ctl, NULL);
+ if (!IS_ERR_VALUE(ret))
+ mdss_mdp_display_wait4comp(ctl);
+ }
+
+ ret = mdss_mdp_overlay_cleanup(mfd);
+
+ return ret;
+}
+
+static void mdss_mdp_overlay_force_dma_cleanup(struct mdss_data_type *mdata)
+{
+ struct mdss_mdp_pipe *pipe;
+ int i;
+
+ for (i = 0; i < mdata->ndma_pipes; i++) {
+ pipe = mdata->dma_pipes + i;
+ if (atomic_read(&pipe->ref_cnt) && pipe->mfd)
+ mdss_mdp_overlay_force_cleanup(pipe->mfd);
+ }
+}
+
static int mdss_mdp_overlay_play(struct msm_fb_data_type *mfd,
struct msmfb_overlay_data *req)
{
@@ -934,17 +1054,19 @@
return ret;
if (!mfd->panel_power_on) {
- mutex_unlock(&mdp5_data->ov_lock);
- return -EPERM;
+ ret = -EPERM;
+ goto done;
}
ret = mdss_mdp_overlay_start(mfd);
if (ret) {
pr_err("unable to start overlay %d (%d)\n", mfd->index, ret);
- return ret;
+ goto done;
}
if (req->id & MDSS_MDP_ROT_SESSION_MASK) {
+ mdss_mdp_overlay_force_dma_cleanup(mfd_to_mdata(mfd));
+
ret = mdss_mdp_overlay_rotate(mfd, req);
} else if (req->id == BORDERFILL_NDX) {
pr_debug("borderfill enable\n");
@@ -954,6 +1076,7 @@
ret = mdss_mdp_overlay_queue(mfd, req);
}
+done:
mutex_unlock(&mdp5_data->ov_lock);
return ret;
@@ -1562,6 +1685,10 @@
caps->vig_pipes = mdata->nvig_pipes;
caps->rgb_pipes = mdata->nrgb_pipes;
caps->dma_pipes = mdata->ndma_pipes;
+ if (mdata->has_bwc)
+ caps->features |= MDP_BWC_EN;
+ if (mdata->has_decimation)
+ caps->features |= MDP_DECIMATION_EN;
return 0;
}
@@ -1649,11 +1776,8 @@
struct msmfb_overlay_data data;
ret = copy_from_user(&data, argp, sizeof(data));
- if (!ret) {
+ if (!ret)
ret = mdss_mdp_overlay_play(mfd, &data);
- if (!IS_ERR_VALUE(ret))
- mdss_fb_update_backlight(mfd);
- }
if (ret)
pr_debug("OVERLAY_PLAY failed (%d)\n", ret);
@@ -1864,6 +1988,10 @@
}
mfd->mdp.private1 = mdp5_data;
+ rc = mdss_mdp_overlay_fb_parse_dt(mfd);
+ if (rc)
+ return rc;
+
rc = sysfs_create_group(&dev->kobj, &vsync_fs_attr_group);
if (rc) {
pr_err("vsync sysfs group creation failed, ret=%d\n", rc);
@@ -1876,8 +2004,27 @@
kobject_uevent(&dev->kobj, KOBJ_ADD);
pr_debug("vsync kobject_uevent(KOBJ_ADD)\n");
+ mdp5_data->cpu_pm_hdl = add_event_timer(NULL, (void *)mdp5_data);
+ if (!mdp5_data->cpu_pm_hdl)
+ pr_warn("%s: unable to add event timer\n", __func__);
+
return rc;
init_fail:
kfree(mdp5_data);
return rc;
}
+
+static int mdss_mdp_overlay_fb_parse_dt(struct msm_fb_data_type *mfd)
+{
+ struct platform_device *pdev = mfd->pdev;
+ struct mdss_overlay_private *mdp5_mdata = mfd_to_mdp5_data(mfd);
+
+ mdp5_mdata->mixer_swap = of_property_read_bool(pdev->dev.of_node,
+ "qcom,mdss-mixer-swap");
+ if (mdp5_mdata->mixer_swap) {
+ pr_info("mixer swap is enabled for fb device=%s\n",
+ pdev->name);
+ }
+
+ return 0;
+}
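Decimation as validated above is a power-of-two pre-scale applied in the fetch path, so the scaler limits are checked against the decimated source size: a 4096x2160 source with horz_deci=2 and vert_deci=1 is fetched as 1024x1080 and must satisfy the up/downscale ratios at that size. A condensed sketch of the check (the helper name is illustrative):

    static int check_decimated_scaling(u32 src_w, u32 src_h, u8 horz_deci,
                                       u8 vert_deci, u32 dst_w, u32 dst_h)
    {
        src_w >>= horz_deci;    /* e.g. 4096 >> 2 = 1024 */
        src_h >>= vert_deci;    /* e.g. 2160 >> 1 = 1080 */

        if (src_w > MAX_MIXER_WIDTH)
            return -EINVAL;
        if (src_w * MAX_UPSCALE_RATIO < dst_w ||
            src_h * MAX_UPSCALE_RATIO < dst_h)
            return -EINVAL;    /* too much upscaling */
        if (src_w > dst_w * MAX_DOWNSCALE_RATIO ||
            src_h > dst_h * MAX_DOWNSCALE_RATIO)
            return -EINVAL;    /* too much downscaling */
        return 0;
    }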
diff --git a/drivers/video/msm/mdss/mdss_mdp_pipe.c b/drivers/video/msm/mdss/mdss_mdp_pipe.c
index d9b4483..0f65530 100644
--- a/drivers/video/msm/mdss/mdss_mdp_pipe.c
+++ b/drivers/video/msm/mdss/mdss_mdp_pipe.c
@@ -40,17 +40,18 @@
return readl_relaxed(pipe->base + reg);
}
-static u32 mdss_mdp_smp_mmb_reserve(unsigned long *smp, size_t n)
+static u32 mdss_mdp_smp_mmb_reserve(unsigned long *existing,
+ unsigned long *reserve, size_t n)
{
u32 i, mmb;
/* reserve more blocks if needed, but can't free mmb at this point */
- for (i = bitmap_weight(smp, SMP_MB_CNT); i < n; i++) {
+ for (i = bitmap_weight(existing, SMP_MB_CNT); i < n; i++) {
if (bitmap_full(mdss_mdp_smp_mmb_pool, SMP_MB_CNT))
break;
mmb = find_first_zero_bit(mdss_mdp_smp_mmb_pool, SMP_MB_CNT);
- set_bit(mmb, smp);
+ set_bit(mmb, reserve);
set_bit(mmb, mdss_mdp_smp_mmb_pool);
}
return i;
@@ -74,10 +75,17 @@
return cnt;
}
-static void mdss_mdp_smp_mmb_free(unsigned long *smp)
+static void mdss_mdp_smp_mmb_amend(unsigned long *smp, unsigned long *extra)
+{
+ bitmap_or(smp, smp, extra, SMP_MB_CNT);
+ bitmap_zero(extra, SMP_MB_CNT);
+}
+
+static void mdss_mdp_smp_mmb_free(unsigned long *smp, bool write)
{
if (!bitmap_empty(smp, SMP_MB_CNT)) {
- mdss_mdp_smp_mmb_set(MDSS_MDP_SMP_CLIENT_UNUSED, smp);
+ if (write)
+ mdss_mdp_smp_mmb_set(MDSS_MDP_SMP_CLIENT_UNUSED, smp);
bitmap_andnot(mdss_mdp_smp_mmb_pool, mdss_mdp_smp_mmb_pool,
smp, SMP_MB_CNT);
bitmap_zero(smp, SMP_MB_CNT);
@@ -104,19 +112,33 @@
static void mdss_mdp_smp_free(struct mdss_mdp_pipe *pipe)
{
+ int i;
+
mutex_lock(&mdss_mdp_smp_lock);
- mdss_mdp_smp_mmb_free(&pipe->smp[0]);
- mdss_mdp_smp_mmb_free(&pipe->smp[1]);
- mdss_mdp_smp_mmb_free(&pipe->smp[2]);
+ for (i = 0; i < MAX_PLANES; i++) {
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ mdss_mdp_smp_mmb_free(&pipe->smp[i], true);
+ }
mutex_unlock(&mdss_mdp_smp_lock);
}
-static int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe)
+void mdss_mdp_smp_unreserve(struct mdss_mdp_pipe *pipe)
+{
+ int i;
+
+ mutex_lock(&mdss_mdp_smp_lock);
+ for (i = 0; i < MAX_PLANES; i++)
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ mutex_unlock(&mdss_mdp_smp_lock);
+}
+
+int mdss_mdp_smp_reserve(struct mdss_mdp_pipe *pipe)
{
struct mdss_data_type *mdata = mdss_mdp_get_mdata();
u32 num_blks = 0, reserved = 0;
struct mdss_mdp_plane_sizes ps;
- int i, rc;
+ int i;
+ int rc = 0;
u32 nlines;
if (pipe->bwc_mode) {
@@ -126,11 +148,10 @@
return rc;
pr_debug("BWC SMP strides ystride0=%x ystride1=%x\n",
ps.ystride[0], ps.ystride[1]);
- } else if ((mdata->mdp_rev >= MDSS_MDP_HW_REV_102) &&
- pipe->src_fmt->is_yuv) {
+ } else if (mdata->has_decimation && pipe->src_fmt->is_yuv) {
ps.num_planes = 2;
- ps.ystride[0] = pipe->src.w;
- ps.ystride[1] = pipe->src.w;
+ ps.ystride[0] = pipe->src.w >> pipe->horz_deci;
+ ps.ystride[1] = pipe->src.h >> pipe->vert_deci;
} else {
rc = mdss_mdp_get_plane_sizes(pipe->src_fmt->format,
pipe->src.w, pipe->src.h, &ps, 0);
@@ -141,29 +162,29 @@
mutex_lock(&mdss_mdp_smp_lock);
for (i = 0; i < ps.num_planes; i++) {
nlines = pipe->bwc_mode ? ps.rau_h[i] : 2;
- num_blks = DIV_ROUND_UP(nlines * ps.ystride[i],
- mdss_res->smp_mb_size);
+ num_blks = DIV_ROUND_UP(nlines * ps.ystride[i], SMP_MB_SIZE);
if (mdata->mdp_rev == MDSS_MDP_HW_REV_100)
num_blks = roundup_pow_of_two(num_blks);
pr_debug("reserving %d mmb for pnum=%d plane=%d\n",
num_blks, pipe->num, i);
- reserved = mdss_mdp_smp_mmb_reserve(&pipe->smp[i], num_blks);
+ reserved = mdss_mdp_smp_mmb_reserve(&pipe->smp[i],
+ &pipe->smp_reserved[i], num_blks);
if (reserved < num_blks)
break;
}
if (reserved < num_blks) {
- pr_err("insufficient MMB blocks\n");
+ pr_debug("insufficient MMB blocks\n");
for (; i >= 0; i--)
- mdss_mdp_smp_mmb_free(&pipe->smp[i]);
- return -ENOMEM;
+ mdss_mdp_smp_mmb_free(&pipe->smp_reserved[i], false);
+ rc = -ENOMEM;
}
mutex_unlock(&mdss_mdp_smp_lock);
- return 0;
+ return rc;
}
static int mdss_mdp_smp_alloc(struct mdss_mdp_pipe *pipe)
@@ -172,8 +193,10 @@
int cnt = 0;
mutex_lock(&mdss_mdp_smp_lock);
- for (i = 0; i < MAX_PLANES; i++)
+ for (i = 0; i < MAX_PLANES; i++) {
+ mdss_mdp_smp_mmb_amend(&pipe->smp[i], &pipe->smp_reserved[i]);
cnt += mdss_mdp_smp_mmb_set(pipe->ftch_id + i, &pipe->smp[i]);
+ }
mdss_mdp_smp_set_wm_levels(pipe, cnt);
mutex_unlock(&mdss_mdp_smp_lock);
return 0;
@@ -256,11 +279,12 @@
}
if (pipe) {
- pr_info("type=%x pnum=%d\n", pipe->type, pipe->num);
+ pr_debug("type=%x pnum=%d\n", pipe->type, pipe->num);
mutex_init(&pipe->pp_res.hist.hist_mutex);
spin_lock_init(&pipe->pp_res.hist.hist_lock);
- } else
+ } else {
pr_err("no %d type pipes available\n", type);
+ }
return pipe;
}
@@ -307,14 +331,16 @@
mutex_lock(&mdss_mdp_sspp_lock);
pipe = mdss_mdp_pipe_search(mdata, ndx);
- if (!pipe)
- return ERR_PTR(-EINVAL);
+ if (!pipe) {
+ pipe = ERR_PTR(-EINVAL);
+ goto error;
+ }
if (mdss_mdp_pipe_map(pipe))
- return ERR_PTR(-EACCES);
+ pipe = ERR_PTR(-EACCES);
+error:
mutex_unlock(&mdss_mdp_sspp_lock);
-
return pipe;
}
@@ -376,6 +402,7 @@
{
u32 img_size, src_size, src_xy, dst_size, dst_xy, ystride0, ystride1;
u32 width, height;
+ u32 decimation;
pr_debug("pnum=%d wh=%dx%d src={%d,%d,%d,%d} dst={%d,%d,%d,%d}\n",
pipe->num, pipe->img_width, pipe->img_height,
@@ -396,6 +423,12 @@
height /= 2;
}
+ decimation = ((1 << pipe->horz_deci) - 1) << 8;
+ decimation |= ((1 << pipe->vert_deci) - 1);
+ if (decimation)
+ pr_debug("Image decimation h=%d v=%d\n",
+ pipe->horz_deci, pipe->vert_deci);
+
img_size = (height << 16) | width;
src_size = (pipe->src.h << 16) | pipe->src.w;
src_xy = (pipe->src.y << 16) | pipe->src.x;
@@ -418,6 +451,8 @@
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_OUT_XY, dst_xy);
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_SRC_YSTRIDE0, ystride0);
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_SRC_YSTRIDE1, ystride1);
+ mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_SSPP_DECIMATION_CONFIG,
+ decimation);
return 0;
}
@@ -636,13 +671,6 @@
mdss_mdp_pipe_write(pipe, MDSS_MDP_REG_VIG_OP_MODE,
opmode);
- ret = mdss_mdp_smp_reserve(pipe);
- if (ret) {
- pr_err("unable to reserve smp for pnum=%d\n",
- pipe->num);
- goto done;
- }
-
mdss_mdp_smp_alloc(pipe);
}
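The SMP rework above splits block handling into two phases so a failed overlay validation can back out cleanly: mdss_mdp_smp_reserve() only fills pipe->smp_reserved[] from the pool, mdss_mdp_smp_unreserve() returns those blocks without touching the hardware, and mdss_mdp_smp_alloc() later folds the reservation into pipe->smp[] and programs the fetch client. A sketch of the expected call order, with error handling condensed:

    ret = mdss_mdp_smp_reserve(pipe);   /* validation: fill pipe->smp_reserved[] */
    if (ret) {
        /* pool exhausted: nothing was programmed, just bail out */
        goto exit_fail;
    }
    ...
    /* later, on commit (pipe setup): */
    mdss_mdp_smp_alloc(pipe);           /* amend smp_reserved[] into smp[], program client */
    ...
    /* or, if the frame is abandoned before commit: */
    mdss_mdp_smp_unreserve(pipe);       /* return reserved blocks, hardware untouched */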
diff --git a/drivers/video/msm/mdss/mdss_mdp_pp.c b/drivers/video/msm/mdss/mdss_mdp_pp.c
index cf9a8b6..8bd5674 100644
--- a/drivers/video/msm/mdss/mdss_mdp_pp.c
+++ b/drivers/video/msm/mdss/mdss_mdp_pp.c
@@ -536,6 +536,7 @@
u32 chroma_sample;
u32 filter_mode;
struct mdss_data_type *mdata;
+ u32 src_w, src_h;
mdata = mdss_mdp_get_mdata();
if (mdata->mdp_rev >= MDSS_MDP_HW_REV_102 && pipe->src_fmt->is_yuv)
@@ -552,6 +553,9 @@
}
}
+ src_w = pipe->src.w >> pipe->horz_deci;
+ src_h = pipe->src.h >> pipe->vert_deci;
+
chroma_sample = pipe->src_fmt->chroma_sample;
if (pipe->flags & MDP_SOURCE_ROTATED_90) {
if (chroma_sample == MDSS_MDP_CHROMA_H1V2)
@@ -570,23 +574,22 @@
}
if ((pipe->src_fmt->is_yuv) &&
- !((pipe->dst.w < pipe->src.w) || (pipe->dst.h < pipe->src.h))) {
+ !((pipe->dst.w < src_w) || (pipe->dst.h < src_h))) {
pp_sharp_config(pipe->base +
MDSS_MDP_REG_VIG_QSEED2_SHARP,
&pipe->pp_res.pp_sts,
&pipe->pp_cfg.sharp_cfg);
}
- if ((pipe->src.h != pipe->dst.h) ||
+ if ((src_h != pipe->dst.h) ||
(pipe->pp_res.pp_sts.sharp_sts & PP_STS_ENABLE) ||
(chroma_sample == MDSS_MDP_CHROMA_420) ||
(chroma_sample == MDSS_MDP_CHROMA_H1V2)) {
- pr_debug("scale y - src_h=%d dst_h=%d\n",
- pipe->src.h, pipe->dst.h);
+ pr_debug("scale y - src_h=%d dst_h=%d\n", src_h, pipe->dst.h);
- if ((pipe->src.h / MAX_DOWNSCALE_RATIO) > pipe->dst.h) {
+ if ((src_h / MAX_DOWNSCALE_RATIO) > pipe->dst.h) {
pr_err("too much downscaling height=%d->%d",
- pipe->src.h, pipe->dst.h);
+ src_h, pipe->dst.h);
return -EINVAL;
}
@@ -594,11 +597,12 @@
if (pipe->type == MDSS_MDP_PIPE_TYPE_VIG) {
u32 chr_dst_h = pipe->dst.h;
- if ((chroma_sample == MDSS_MDP_CHROMA_420) ||
- (chroma_sample == MDSS_MDP_CHROMA_H1V2))
+ if (!pipe->vert_deci &&
+ ((chroma_sample == MDSS_MDP_CHROMA_420) ||
+ (chroma_sample == MDSS_MDP_CHROMA_H1V2)))
chr_dst_h *= 2; /* 2x upsample chroma */
- if (pipe->src.h <= pipe->dst.h) {
+ if (src_h <= pipe->dst.h) {
scale_config |= /* G/Y, A */
(filter_mode << 10) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 18);
@@ -607,7 +611,7 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 10) |
(MDSS_MDP_SCALE_FILTER_PCMN << 18);
- if (pipe->src.h <= chr_dst_h)
+ if (src_h <= chr_dst_h)
scale_config |= /* CrCb */
(MDSS_MDP_SCALE_FILTER_BIL << 14);
else
@@ -615,12 +619,12 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 14);
phasey_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.h, chr_dst_h);
+ PHASE_STEP_SHIFT, src_h, chr_dst_h);
writel_relaxed(phasey_step, pipe->base +
MDSS_MDP_REG_VIG_QSEED2_C12_PHASESTEPY);
} else {
- if (pipe->src.h <= pipe->dst.h)
+ if (src_h <= pipe->dst.h)
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_BIL << 10) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 18);
@@ -631,19 +635,18 @@
}
phasey_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.h, pipe->dst.h);
+ PHASE_STEP_SHIFT, src_h, pipe->dst.h);
}
- if ((pipe->src.w != pipe->dst.w) ||
+ if ((src_w != pipe->dst.w) ||
(pipe->pp_res.pp_sts.sharp_sts & PP_STS_ENABLE) ||
(chroma_sample == MDSS_MDP_CHROMA_420) ||
(chroma_sample == MDSS_MDP_CHROMA_H2V1)) {
- pr_debug("scale x - src_w=%d dst_w=%d\n",
- pipe->src.w, pipe->dst.w);
+ pr_debug("scale x - src_w=%d dst_w=%d\n", src_w, pipe->dst.w);
- if ((pipe->src.w / MAX_DOWNSCALE_RATIO) > pipe->dst.w) {
+ if ((src_w / MAX_DOWNSCALE_RATIO) > pipe->dst.w) {
pr_err("too much downscaling width=%d->%d",
- pipe->src.w, pipe->dst.w);
+ src_w, pipe->dst.w);
return -EINVAL;
}
@@ -652,11 +655,12 @@
if (pipe->type == MDSS_MDP_PIPE_TYPE_VIG) {
u32 chr_dst_w = pipe->dst.w;
- if ((chroma_sample == MDSS_MDP_CHROMA_420) ||
- (chroma_sample == MDSS_MDP_CHROMA_H2V1))
+ if (!pipe->horz_deci &&
+ ((chroma_sample == MDSS_MDP_CHROMA_420) ||
+ (chroma_sample == MDSS_MDP_CHROMA_H2V1)))
chr_dst_w *= 2; /* 2x upsample chroma */
- if (pipe->src.w <= pipe->dst.w) {
+ if (src_w <= pipe->dst.w) {
scale_config |= /* G/Y, A */
(filter_mode << 8) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 16);
@@ -665,7 +669,7 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 8) |
(MDSS_MDP_SCALE_FILTER_PCMN << 16);
- if (pipe->src.w <= chr_dst_w)
+ if (src_w <= chr_dst_w)
scale_config |= /* CrCb */
(MDSS_MDP_SCALE_FILTER_BIL << 12);
else
@@ -673,11 +677,11 @@
(MDSS_MDP_SCALE_FILTER_PCMN << 12);
phasex_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.w, chr_dst_w);
+ PHASE_STEP_SHIFT, src_w, chr_dst_w);
writel_relaxed(phasex_step, pipe->base +
MDSS_MDP_REG_VIG_QSEED2_C12_PHASESTEPX);
} else {
- if (pipe->src.w <= pipe->dst.w)
+ if (src_w <= pipe->dst.w)
scale_config |= /* RGB, A */
(MDSS_MDP_SCALE_FILTER_BIL << 8) |
(MDSS_MDP_SCALE_FILTER_NEAREST << 16);
@@ -688,7 +692,7 @@
}
phasex_step = mdss_mdp_scale_phase_step(
- PHASE_STEP_SHIFT, pipe->src.w, pipe->dst.w);
+ PHASE_STEP_SHIFT, src_w, pipe->dst.w);
}
writel_relaxed(scale_config, pipe->base +
@@ -1519,11 +1523,13 @@
if (copy_to_user(config->c0_c1_data, local_cfg.c2_data,
config->len * sizeof(u32))) {
ret = -EFAULT;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
goto igc_config_exit;
}
if (copy_to_user(config->c2_data, local_cfg.c0_c1_data,
config->len * sizeof(u32))) {
ret = -EFAULT;
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
goto igc_config_exit;
}
*copyback = 1;
@@ -1712,16 +1718,19 @@
pp_read_argc_lut(&local_cfg, argc_offset);
if (copy_to_user(config->r_data,
&mdss_pp_res->gc_lut_r[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
if (copy_to_user(config->g_data,
&mdss_pp_res->gc_lut_g[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
if (copy_to_user(config->b_data,
&mdss_pp_res->gc_lut_b[disp_num][0], tbl_size)) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto argc_config_exit;
}
@@ -1792,6 +1801,7 @@
if (copy_to_user(config->data,
&mdss_pp_res->enhist_lut[disp_num][0],
ENHIST_LUT_ENTRIES * sizeof(u32))) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EFAULT;
goto enhist_config_exit;
}
@@ -2031,6 +2041,7 @@
if (IS_ERR_OR_NULL(pipe))
continue;
if (!pipe || pipe->num > MDSS_MDP_SSPP_VIG2) {
+ mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
ret = -EINVAL;
pr_warn("Invalid Hist pipe (%d)", i);
goto hist_exit;
@@ -2149,7 +2160,7 @@
i = MDSS_PP_ARG_MASK & block;
if (!i) {
pr_warn("Must pass pipe arguments, %d", i);
- goto hist_stop_exit;
+ goto hist_stop_clk;
}
for (i = 0; i < MDSS_PP_ARG_NUM; i++) {
@@ -2169,7 +2180,7 @@
ctl_base);
mdss_mdp_pipe_unmap(pipe);
if (ret)
- goto hist_stop_exit;
+ goto hist_stop_clk;
}
} else if (PP_LOCAT(block) == MDSS_PP_DSPP_CFG) {
for (i = 0; i < mixer_cnt; i++) {
@@ -2181,13 +2192,14 @@
ret = pp_histogram_disable(hist_info, done_bit,
ctl_base);
if (ret)
- goto hist_stop_exit;
+ goto hist_stop_clk;
mdss_pp_res->pp_disp_flags[disp_num] |=
PP_FLAGS_DIRTY_HIST_COL;
}
}
-hist_stop_exit:
+hist_stop_clk:
mdss_mdp_clk_ctrl(MDP_BLOCK_POWER_OFF, false);
+hist_stop_exit:
if (!ret && (PP_LOCAT(block) == MDSS_PP_DSPP_CFG))
mdss_mdp_pp_setup(ctl);
return ret;
diff --git a/drivers/video/msm/mdss/mdss_mdp_rotator.c b/drivers/video/msm/mdss/mdss_mdp_rotator.c
index ce4c28f..016c973 100644
--- a/drivers/video/msm/mdss/mdss_mdp_rotator.c
+++ b/drivers/video/msm/mdss/mdss_mdp_rotator.c
@@ -41,7 +41,6 @@
rot->ref_cnt++;
rot->session_id = i | MDSS_MDP_ROT_SESSION_MASK;
mutex_init(&rot->lock);
- init_completion(&rot->comp);
break;
}
}
@@ -87,10 +86,18 @@
static int mdss_mdp_rotator_busy_wait(struct mdss_mdp_rotator_session *rot)
{
+ struct mdss_mdp_pipe *rot_pipe = NULL;
+ struct mdss_mdp_ctl *ctl = NULL;
+
+ rot_pipe = rot->pipe;
+ if (!rot_pipe)
+ return -ENODEV;
+
+ ctl = rot_pipe->mixer->ctl;
mutex_lock(&rot->lock);
if (rot->busy) {
pr_debug("waiting for rot=%d to complete\n", rot->pipe->num);
- wait_for_completion_interruptible(&rot->comp);
+ mdss_mdp_display_wait4comp(ctl);
rot->busy = false;
}
@@ -99,28 +106,18 @@
return 0;
}
-static void mdss_mdp_rotator_callback(void *arg)
-{
- struct mdss_mdp_rotator_session *rot;
-
- rot = (struct mdss_mdp_rotator_session *) arg;
- if (rot)
- complete(&rot->comp);
-}
-
static int mdss_mdp_rotator_kickoff(struct mdss_mdp_ctl *ctl,
struct mdss_mdp_rotator_session *rot,
struct mdss_mdp_data *dst_data)
{
int ret;
struct mdss_mdp_writeback_arg wb_args = {
- .callback_fnc = mdss_mdp_rotator_callback,
+ .callback_fnc = NULL,
.data = dst_data,
.priv_data = rot,
};
mutex_lock(&rot->lock);
- INIT_COMPLETION(rot->comp);
rot->busy = true;
ret = mdss_mdp_display_commit(ctl, &wb_args);
if (ret) {
@@ -204,9 +201,16 @@
rot_pipe->params_changed++;
}
+ ret = mdss_mdp_smp_reserve(rot->pipe);
+ if (ret) {
+ pr_err("unable to mdss_mdp_smp_reserve rot data\n");
+ return ret;
+ }
+
ret = mdss_mdp_pipe_queue_data(rot->pipe, src_data);
if (ret) {
pr_err("unable to queue rot data\n");
+ mdss_mdp_smp_unreserve(rot->pipe);
return ret;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_rotator.h b/drivers/video/msm/mdss/mdss_mdp_rotator.h
index c50d710..3401fe8 100644
--- a/drivers/video/msm/mdss/mdss_mdp_rotator.h
+++ b/drivers/video/msm/mdss/mdss_mdp_rotator.h
@@ -36,7 +36,6 @@
struct mdss_mdp_pipe *pipe;
struct mutex lock;
- struct completion comp;
u8 busy;
u8 no_wait;
diff --git a/drivers/video/msm/mdss/mdss_mdp_util.c b/drivers/video/msm/mdss/mdss_mdp_util.c
index 4de6d03..60f05ca 100644
--- a/drivers/video/msm/mdss/mdss_mdp_util.c
+++ b/drivers/video/msm/mdss/mdss_mdp_util.c
@@ -240,7 +240,7 @@
ps->ystride[1] = 32 * 2;
} else if (fmt->fetch_planes == MDSS_MDP_PLANE_INTERLEAVED) {
ps->rau_cnt = DIV_ROUND_UP(w, 32);
- ps->ystride[0] = 32 * 4;
+ ps->ystride[0] = 32 * 4 * fmt->bpp;
ps->ystride[1] = 0;
ps->rau_h[0] = 4;
ps->rau_h[1] = 0;
@@ -250,8 +250,8 @@
}
stride_off = DIV_ROUND_UP(ps->rau_cnt, 8);
- ps->ystride[0] = ps->ystride[0] * ps->rau_cnt * fmt->bpp + stride_off;
- ps->ystride[1] = ps->ystride[1] * ps->rau_cnt * fmt->bpp + stride_off;
+ ps->ystride[0] = ps->ystride[0] * ps->rau_cnt + stride_off;
+ ps->ystride[1] = ps->ystride[1] * ps->rau_cnt + stride_off;
ps->num_planes = 2;
return 0;
@@ -262,8 +262,7 @@
{
struct mdss_mdp_format_params *fmt;
int i, rc;
- u32 bpp, stride_off;
-
+ u32 bpp, ystride0_off, ystride1_off;
if (ps == NULL)
return -EINVAL;
@@ -281,12 +280,14 @@
rc = mdss_mdp_get_rau_strides(w, h, fmt, ps);
if (rc)
return rc;
- stride_off = DIV_ROUND_UP(h, ps->rau_h[0]);
- ps->ystride[0] = ps->ystride[0] + ps->ystride[1];
- ps->plane_size[0] = ps->ystride[0] * stride_off;
+ ystride0_off = DIV_ROUND_UP(h, ps->rau_h[0]);
+ ystride1_off = DIV_ROUND_UP(h, ps->rau_h[1]);
+ ps->plane_size[0] = (ps->ystride[0] * ystride0_off) +
+ (ps->ystride[1] * ystride1_off);
+ ps->ystride[0] += ps->ystride[1];
ps->ystride[1] = 2;
- ps->plane_size[1] = ps->rau_cnt * ps->ystride[1] * stride_off;
-
+ ps->plane_size[1] = ps->rau_cnt * ps->ystride[1] *
+ (ystride0_off + ystride1_off);
} else {
if (fmt->fetch_planes == MDSS_MDP_PLANE_INTERLEAVED) {
ps->num_planes = 1;
@@ -346,43 +347,38 @@
int mdss_mdp_data_check(struct mdss_mdp_data *data,
struct mdss_mdp_plane_sizes *ps)
{
+ struct mdss_mdp_img_data *prev, *curr;
+ int i;
+
if (!ps)
return 0;
if (!data || data->num_planes == 0)
return -ENOMEM;
- if (data->bwc_enabled) {
- data->num_planes = ps->num_planes;
- data->p[1].addr = data->p[0].addr + ps->plane_size[0];
- } else {
- struct mdss_mdp_img_data *prev, *curr;
- int i;
+ pr_debug("srcp0=%x len=%u frame_size=%u\n", data->p[0].addr,
+ data->p[0].len, ps->total_size);
- pr_debug("srcp0=%x len=%u frame_size=%u\n", data->p[0].addr,
- data->p[0].len, ps->total_size);
-
- for (i = 0; i < ps->num_planes; i++) {
- curr = &data->p[i];
- if (i >= data->num_planes) {
- u32 psize = ps->plane_size[i-1];
- prev = &data->p[i-1];
- if (prev->len > psize) {
- curr->len = prev->len - psize;
- prev->len = psize;
- }
- curr->addr = prev->addr + psize;
+ for (i = 0; i < ps->num_planes; i++) {
+ curr = &data->p[i];
+ if (i >= data->num_planes) {
+ u32 psize = ps->plane_size[i-1];
+ prev = &data->p[i-1];
+ if (prev->len > psize) {
+ curr->len = prev->len - psize;
+ prev->len = psize;
}
- if (curr->len < ps->plane_size[i]) {
- pr_err("insufficient mem=%u p=%d len=%u\n",
- curr->len, i, ps->plane_size[i]);
- return -ENOMEM;
- }
- pr_debug("plane[%d] addr=%x len=%u\n", i,
- curr->addr, curr->len);
+ curr->addr = prev->addr + psize;
}
- data->num_planes = ps->num_planes;
+ if (curr->len < ps->plane_size[i]) {
+ pr_err("insufficient mem=%u p=%d len=%u\n",
+ curr->len, i, ps->plane_size[i]);
+ return -ENOMEM;
+ }
+ pr_debug("plane[%d] addr=%x len=%u\n", i,
+ curr->addr, curr->len);
}
+ data->num_planes = ps->num_planes;
return 0;
}
diff --git a/drivers/video/msm/mdss/mdss_mdp_wb.c b/drivers/video/msm/mdss/mdss_mdp_wb.c
index c19e07a..7ccf1b9 100644
--- a/drivers/video/msm/mdss/mdss_mdp_wb.c
+++ b/drivers/video/msm/mdss/mdss_mdp_wb.c
@@ -132,7 +132,13 @@
pr_debug("setting secure=%d\n", enable);
+ ctl->is_secure = enable;
wb->is_secure = enable;
+
+ /* newer revisions don't require secure src pipe for secure session */
+ if (ctl->mdata->mdp_rev > MDSS_MDP_HW_REV_100)
+ return 0;
+
pipe = wb->secure_pipe;
if (!enable) {
@@ -243,6 +249,7 @@
mdss_mdp_pipe_destroy(wb->secure_pipe);
mutex_unlock(&wb->lock);
+ mdp5_data->ctl->is_secure = false;
mdp5_data->wb = NULL;
mutex_unlock(&mdss_mdp_wb_buf_lock);
diff --git a/drivers/video/msm/mdss/mdss_qpic.c b/drivers/video/msm/mdss/mdss_qpic.c
index be02113..fa6bd3d 100644
--- a/drivers/video/msm/mdss/mdss_qpic.c
+++ b/drivers/video/msm/mdss/mdss_qpic.c
@@ -428,7 +428,7 @@
param[0]);
param++;
bytes_left -= 4;
- space++;
+ space--;
} else if (bytes_left == 2) {
QPIC_OUTPW(QPIC_REG_QPIC_LCDC_FIFO_DATA_PORT0,
*(u16 *)param);
diff --git a/fs/file.c b/fs/file.c
index ba3f605..2f989c3 100644
--- a/fs/file.c
+++ b/fs/file.c
@@ -476,6 +476,7 @@
spin_unlock(&files->file_lock);
return error;
}
+EXPORT_SYMBOL(alloc_fd);
int get_unused_fd(void)
{
diff --git a/include/linux/bluetooth-power.h b/include/linux/bluetooth-power.h
new file mode 100644
index 0000000..ba53a40
--- /dev/null
+++ b/include/linux/bluetooth-power.h
@@ -0,0 +1,67 @@
+/*
+ * Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#ifndef __LINUX_BLUETOOTH_POWER_H
+#define __LINUX_BLUETOOTH_POWER_H
+
+/*
+ * voltage regulator information required for configuring the
+ * bluetooth chipset
+ */
+struct bt_power_vreg_data {
+ /* voltage regulator handle */
+ struct regulator *reg;
+ /* regulator name */
+ const char *name;
+ /* voltage levels to be set */
+ unsigned int low_vol_level;
+ unsigned int high_vol_level;
+ /*
+ * is set voltage supported for this regulator?
+ * false => set voltage is not supported
+ * true => set voltage is supported
+ *
+ * Some regulators (like gpio-regulators, LVS (low voltage switches)
+ * PMIC regulators) don't have the capability to call
+ * regulator_set_voltage or regulator_set_optimum_mode.
+ * Use this variable to indicate whether it is such a regulator.
+ */
+ bool set_voltage_sup;
+ /* is this regulator enabled? */
+ bool is_enabled;
+};
+
+/*
+ * Platform data for the bluetooth power driver.
+ */
+struct bluetooth_power_platform_data {
+ /* Bluetooth reset gpio */
+ int bt_gpio_sys_rst;
+ /* VDDIO voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_io;
+ /* VDD_PA voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_pa;
+ /* VDD_LDOIN voltage regulator */
+ struct bt_power_vreg_data *bt_vdd_ldo;
+ /* Optional: chip power down gpio-regulator
+ * chip power down data is required when bluetooth module
+ * and other modules like wifi co-exist in a single chip and
+ * share a common gpio to bring the chip out of reset.
+ */
+ struct bt_power_vreg_data *bt_chip_pwd;
+ /* Optional: Bluetooth power setup function */
+ int (*bt_power_setup) (int);
+};
+
+#endif /* __LINUX_BLUETOOTH_POWER_H */
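For illustration, a minimal sketch of how a board file might populate this platform data. The regulator name, voltage levels, and GPIO number below are assumptions for the example and do not come from this patch.

#include <linux/bluetooth-power.h>

/* Hypothetical board-file data; every value here is an illustrative assumption. */
static struct bt_power_vreg_data example_bt_vdd_io = {
	.name = "example-vdd-io",	/* assumed regulator name */
	.low_vol_level = 1800000,	/* 1.8 V, placeholder */
	.high_vol_level = 1800000,
	.set_voltage_sup = true,	/* this regulator supports set_voltage */
};

static struct bluetooth_power_platform_data example_bt_power_pdata = {
	.bt_gpio_sys_rst = 42,		/* assumed reset GPIO */
	.bt_vdd_io = &example_bt_vdd_io,
	/* bt_vdd_pa, bt_vdd_ldo, bt_chip_pwd and bt_power_setup stay NULL
	 * when the board does not need them */
};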
diff --git a/include/linux/dvb/dmx.h b/include/linux/dvb/dmx.h
index c2b35c8..c219725 100644
--- a/include/linux/dvb/dmx.h
+++ b/include/linux/dvb/dmx.h
@@ -368,6 +368,9 @@
/* DTS value associated with the buffer */
__u64 dts;
+ /* STC value associated with the buffer in 27MHz */
+ __u64 stc;
+
/*
* Number of TS packets with Transport Error Indicator (TEI) set
* in the TS packet header since last reported event
diff --git a/include/linux/ion.h b/include/linux/ion.h
index 88ad9a0..4983316 100644
--- a/include/linux/ion.h
+++ b/include/linux/ion.h
@@ -35,6 +35,7 @@
ION_HEAP_TYPE_SYSTEM,
ION_HEAP_TYPE_SYSTEM_CONTIG,
ION_HEAP_TYPE_CARVEOUT,
+ ION_HEAP_TYPE_CHUNK,
ION_HEAP_TYPE_CUSTOM, /* must be last so device specific heaps always
are at the end of this enum */
ION_NUM_HEAPS,
@@ -44,8 +45,10 @@
#define ION_HEAP_SYSTEM_CONTIG_MASK (1 << ION_HEAP_TYPE_SYSTEM_CONTIG)
#define ION_HEAP_CARVEOUT_MASK (1 << ION_HEAP_TYPE_CARVEOUT)
+#define ION_NUM_HEAP_IDS sizeof(unsigned int) * 8
+
/**
- * heap flags - the lower 16 bits are used by core ion, the upper 16
+ * allocation flags - the lower 16 bits are used by core ion, the upper 16
* bits are reserved for use by the heaps themselves.
*/
#define ION_FLAG_CACHED 1 /* mappings of this buffer should be
@@ -74,8 +77,9 @@
/**
* struct ion_platform_heap - defines a heap in the given platform
* @type: type of the heap from ion_heap_type enum
- * @id: unique identifier for heap. When allocating (lower numbers
- * will be allocated from first)
+ * @id: unique identifier for heap. When allocating, heaps with higher
+ * id numbers are tried first. At allocation these ids are passed
+ * as a bit mask and therefore cannot exceed ION_NUM_HEAP_IDS.
* @name: used for debug purposes
* @base: base address of heap in physical memory if applicable
* @size: size of the heap in bytes if applicable
@@ -83,6 +87,10 @@
* @has_outer_cache: set to 1 if outer cache is used, 0 otherwise.
* @extra_data: Extra data specific to each heap type
* @priv: heap private data
+ * @align: required alignment in physical memory if applicable
+ * @priv: private info passed from the board file
+ *
+ * Provided by the board file.
*/
struct ion_platform_heap {
enum ion_heap_type type;
@@ -93,6 +101,7 @@
enum ion_memory_types memory_type;
unsigned int has_outer_cache;
void *extra_data;
+ ion_phys_addr_t align;
void *priv;
};
@@ -125,12 +134,12 @@
/**
* ion_client_create() - allocate a client and returns it
- * @dev: the global ion device
- * @heap_mask: mask of heaps this client can allocate from
- * @name: used for debugging
+ * @dev: the global ion device
+ * @heap_type_mask: mask of heaps this client can allocate from
+ * @name: used for debugging
*/
struct ion_client *ion_client_create(struct ion_device *dev,
- unsigned int heap_mask, const char *name);
+ const char *name);
/**
 * ion_client_destroy() - frees a client and all its handles
@@ -143,21 +152,22 @@
/**
* ion_alloc - allocate ion memory
- * @client: the client
- * @len: size of the allocation
- * @align: requested allocation alignment, lots of hardware blocks have
- * alignment requirements of some kind
- * @heap_mask: mask of heaps to allocate from, if multiple bits are set
- * heaps will be tried in order from lowest to highest order bit
- * @flags: heap flags, the low 16 bits are consumed by ion, the high 16
- * bits are passed on to the respective heap and can be heap
- * custom
+ * @client: the client
+ * @len: size of the allocation
+ * @align: requested allocation alignment, lots of hardware blocks
+ * have alignment requirements of some kind
+ * @heap_id_mask: mask of heaps to allocate from, if multiple bits are set
+ * heaps will be tried in order from highest to lowest
+ * id
+ * @flags: heap flags, the low 16 bits are consumed by ion, the
+ * high 16 bits are passed on to the respective heap and
+ * can be heap custom
*
* Allocate memory in one of the heaps provided in heap mask and return
* an opaque handle to it.
*/
struct ion_handle *ion_alloc(struct ion_client *client, size_t len,
- size_t align, unsigned int heap_mask,
+ size_t align, unsigned int heap_id_mask,
unsigned int flags);
/**
@@ -217,11 +227,19 @@
void ion_unmap_kernel(struct ion_client *client, struct ion_handle *handle);
/**
- * ion_share_dma_buf() - given an ion client, create a dma-buf fd
+ * ion_share_dma_buf() - share buffer as dma-buf
* @client: the client
* @handle: the handle
*/
-int ion_share_dma_buf(struct ion_client *client, struct ion_handle *handle);
+struct dma_buf *ion_share_dma_buf(struct ion_client *client,
+ struct ion_handle *handle);
+
+/**
+ * ion_share_dma_buf_fd() - given an ion client, create a dma-buf fd
+ * @client: the client
+ * @handle: the handle
+ */
+int ion_share_dma_buf_fd(struct ion_client *client, struct ion_handle *handle);
/**
* ion_import_dma_buf() - given an dma-buf fd from the ion exporter get handle
@@ -310,12 +328,12 @@
/**
* struct ion_allocation_data - metadata passed from userspace for allocations
- * @len: size of the allocation
- * @align: required alignment of the allocation
- * @heap_mask: mask of heaps to allocate from
- * @flags: flags passed to heap
- * @handle: pointer that will be populated with a cookie to use to refer
- * to this allocation
+ * @len: size of the allocation
+ * @align: required alignment of the allocation
+ * @heap_id_mask: mask of heap ids to allocate from
+ * @flags: flags passed to heap
+ * @handle: pointer that will be populated with a cookie to use to
+ * refer to this allocation
*
* Provided by userspace as an argument to the ioctl
*/
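To make the interface changes above concrete, here is a minimal in-kernel sketch of the new flow: ion_client_create() without a heap mask, ion_alloc() taking a heap_id_mask, and ion_share_dma_buf() returning a struct dma_buf. The device pointer, heap id, allocation size, and client name are assumptions for the example.

#include <linux/err.h>
#include <linux/ion.h>
#include <linux/sizes.h>

static int example_ion_usage(struct ion_device *idev, unsigned int example_heap_id)
{
	struct ion_client *client;
	struct ion_handle *handle;
	struct dma_buf *dbuf;

	/* The heap mask argument has been dropped from ion_client_create() */
	client = ion_client_create(idev, "example-client");
	if (IS_ERR_OR_NULL(client))
		return -ENOMEM;

	/* heap_id_mask: heaps are tried from the highest set id downwards */
	handle = ion_alloc(client, SZ_1M, SZ_4K,
			   1 << example_heap_id, ION_FLAG_CACHED);
	if (IS_ERR_OR_NULL(handle)) {
		ion_client_destroy(client);
		return -ENOMEM;
	}

	/* ion_share_dma_buf() now hands back the dma_buf itself;
	 * ion_share_dma_buf_fd() still produces a userspace fd */
	dbuf = ion_share_dma_buf(client, handle);
	if (IS_ERR_OR_NULL(dbuf)) {
		ion_free(client, handle);
		ion_client_destroy(client);
		return -ENOMEM;
	}
	return 0;
}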
diff --git a/include/linux/iopoll.h b/include/linux/iopoll.h
index 8aa758d..b882fe2 100644
--- a/include/linux/iopoll.h
+++ b/include/linux/iopoll.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012 The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013 The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -40,8 +40,12 @@
might_sleep_if(timeout_us); \
for (;;) { \
(val) = readl(addr); \
- if ((cond) || (timeout_us && time_after(jiffies, timeout))) \
+ if (cond) \
break; \
+ if (timeout_us && time_after(jiffies, timeout)) { \
+ (val) = readl(addr); \
+ break; \
+ } \
if (sleep_us) \
usleep_range(DIV_ROUND_UP(sleep_us, 4), sleep_us); \
} \
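The change above makes the loop take one more register sample after the deadline expires, so the value handed back to the caller on timeout is not stale. A minimal caller sketch, assuming the readl_poll_timeout() helper defined in this header; the register pointer and ready bit are illustrative assumptions.

#include <linux/bitops.h>
#include <linux/iopoll.h>

#define EXAMPLE_STATUS_READY	BIT(0)	/* assumed ready bit */

static int example_wait_ready(void __iomem *status_reg)
{
	u32 val;

	/* Poll roughly every 10 us, give up after 1000 us; on timeout, val
	 * reflects a read taken after the deadline rather than an old sample */
	return readl_poll_timeout(status_reg, val,
				  val & EXAMPLE_STATUS_READY,
				  10, 1000);
}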
diff --git a/include/linux/mfd/pm8xxx/pm8921-charger.h b/include/linux/mfd/pm8xxx/pm8921-charger.h
index 1c67b1e..5439fd1 100644
--- a/include/linux/mfd/pm8xxx/pm8921-charger.h
+++ b/include/linux/mfd/pm8xxx/pm8921-charger.h
@@ -165,6 +165,7 @@
unsigned int warm_bat_chg_current;
unsigned int cool_bat_voltage;
unsigned int warm_bat_voltage;
+ int hysteresis_temp;
unsigned int (*get_batt_capacity_percent) (void);
int64_t batt_id_min;
int64_t batt_id_max;
diff --git a/include/linux/mfd/wcd9xxx/core.h b/include/linux/mfd/wcd9xxx/core.h
index 3a9b1b9..0d1f49f 100644
--- a/include/linux/mfd/wcd9xxx/core.h
+++ b/include/linux/mfd/wcd9xxx/core.h
@@ -77,7 +77,8 @@
WCD9XXX_IRQ_HPH_L_PA_STARTUP,
WCD9XXX_IRQ_HPH_R_PA_STARTUP,
WCD9XXX_IRQ_EAR_PA_STARTUP,
- WCD9XXX_IRQ_RESERVED_0,
+ WCD9310_NUM_IRQS,
+ WCD9XXX_IRQ_RESERVED_0 = WCD9310_NUM_IRQS,
WCD9XXX_IRQ_RESERVED_1,
/* INTR_REG 3 */
WCD9XXX_IRQ_MAD_AUDIO,
@@ -85,12 +86,14 @@
WCD9XXX_IRQ_MAD_ULTRASOUND,
WCD9XXX_IRQ_SPEAKER_CLIPPING,
WCD9XXX_IRQ_MBHC_JACK_SWITCH,
+ WCD9XXX_IRQ_VBAT_MONITOR_ATTACK,
+ WCD9XXX_IRQ_VBAT_MONITOR_RELEASE,
WCD9XXX_NUM_IRQS,
};
enum {
- TABLA_NUM_IRQS = WCD9XXX_NUM_IRQS,
- SITAR_NUM_IRQS = WCD9XXX_NUM_IRQS,
+ TABLA_NUM_IRQS = WCD9310_NUM_IRQS,
+ SITAR_NUM_IRQS = WCD9310_NUM_IRQS,
TAIKO_NUM_IRQS = WCD9XXX_NUM_IRQS,
TAPAN_NUM_IRQS = WCD9XXX_NUM_IRQS,
};
diff --git a/include/linux/msm_mdp.h b/include/linux/msm_mdp.h
index db5c69c..f9e483c 100644
--- a/include/linux/msm_mdp.h
+++ b/include/linux/msm_mdp.h
@@ -173,6 +173,7 @@
#define MDP_OV_PIPE_FORCE_DMA 0x00004000
#define MDP_MEMORY_ID_TYPE_FB 0x00001000
#define MDP_BWC_EN 0x00000400
+#define MDP_DECIMATION_EN 0x00000800
#define MDP_TRANSP_NOP 0xffffffff
#define MDP_ALPHA_NOP 0xff
@@ -406,7 +407,9 @@
uint32_t transp_mask;
uint32_t flags;
uint32_t id;
- uint32_t user_data[8];
+ uint32_t user_data[7];
+ uint8_t horz_deci;
+ uint8_t vert_deci;
struct mdp_overlay_pp_params overlay_pp_cfg;
};
@@ -696,6 +699,7 @@
uint8_t rgb_pipes;
uint8_t vig_pipes;
uint8_t dma_pipes;
+ uint32_t features;
};
struct msmfb_metadata {
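A brief userspace sketch of requesting decimation via the new mdp_overlay fields. Only the new flag and fields are shown; the rest of the overlay setup is omitted, and the factor values are placeholders rather than a statement of the driver's accepted range.

#include <linux/msm_mdp.h>

/* Userspace fragment: enable decimation on an overlay. */
static void example_request_decimation(struct mdp_overlay *ov)
{
	ov->flags |= MDP_DECIMATION_EN;
	ov->horz_deci = 1;	/* placeholder horizontal decimation factor */
	ov->vert_deci = 1;	/* placeholder vertical decimation factor */
}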
diff --git a/include/linux/qpnp-misc.h b/include/linux/qpnp-misc.h
index b241e5d..ee614a4 100644
--- a/include/linux/qpnp-misc.h
+++ b/include/linux/qpnp-misc.h
@@ -30,7 +30,7 @@
int qpnp_misc_irqs_available(struct device *consumer_dev);
#else
-static int qpnp_misc_irq_available(struct device *consumer_dev)
+static int qpnp_misc_irqs_available(struct device *consumer_dev)
{
return 0;
}
diff --git a/include/linux/regulator/qpnp-regulator.h b/include/linux/regulator/qpnp-regulator.h
index ec580ab..c7afeb5 100644
--- a/include/linux/regulator/qpnp-regulator.h
+++ b/include/linux/regulator/qpnp-regulator.h
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -93,12 +93,24 @@
* @system_load: Load in uA present on regulator that is not captured
* by any consumer request
* @enable_time: Time in us to delay after enabling the regulator
- * @ocp_enable: 1 = Enable over current protection (OCP) for voltage
- * switch type regulators so that they latch off
- * automatically when over current is detected
+ * @ocp_enable: 1 = Allow over current protection (OCP) to be
+ * enabled for voltage switch type regulators so
+ * that they latch off automatically when over
+ * current is detected. OCP is enabled when in HPM
+ * or auto mode.
* 0 = Disable OCP
* QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
* OCP state
+ * @ocp_irq: IRQ number of the voltage switch OCP IRQ. If
+ * specified the voltage switch will be toggled off
+ * and back on when OCP triggers in order to handle
+ * high in-rush current.
+ * @ocp_max_retries: Maximum number of times to try toggling a voltage
+ * switch off and back on as a result of
+ * consecutive over current events.
+ * @ocp_retry_delay_ms: Time to delay in milliseconds between each
+ * voltage switch toggle after an over current
+ * event takes place.
* @boost_current_limit: This parameter sets the current limit of boost type
* regulators. Its value should be one of
* QPNP_BOOST_CURRENT_LIMIT_*. If its value is
@@ -117,9 +129,6 @@
* its value is QPNP_VS_SOFT_START_STR_HW_DEFAULT,
* then the soft start strength will be left at its
* default hardware value.
- * @ocp_enable_time: Time to delay in us between enabling a switch and
- * subsequently enabling over current protection
- * (OCP) for the switch
* @auto_mode_enable: 1 = Enable automatic hardware selection of regulator
* mode (HPM vs LPM). Auto mode is not available
* on boost type regulators
@@ -132,6 +141,18 @@
* 0 = Do not enable bypass mode
* QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
* bypass mode state
+ * @hpm_enable: 1 = Enable high power mode (HPM), also referred to
+ * as NPM. HPM consumes more ground current than
+ * LPM, but it can source significantly higher load
+ * current. HPM is not available on boost type
+ * regulators. For voltage switch type regulators,
+ * HPM implies that over current protection and
+ * soft start are active all the time. This
+ * configuration can be overwritten by changing the
+ * regulator's mode dynamically.
+ * 0 = Do not enable HPM
+ * QPNP_REGULATOR_USE_HW_DEFAULT = do not modify
+ * HPM state
* @base_addr: SMPI base address for the regulator peripheral
*/
struct qpnp_regulator_platform_data {
@@ -142,12 +163,15 @@
int system_load;
int enable_time;
int ocp_enable;
+ int ocp_irq;
+ int ocp_max_retries;
+ int ocp_retry_delay_ms;
enum qpnp_boost_current_limit boost_current_limit;
int soft_start_enable;
enum qpnp_vs_soft_start_str vs_soft_start_strength;
- int ocp_enable_time;
int auto_mode_enable;
int bypass_mode_enable;
+ int hpm_enable;
u16 base_addr;
};
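An illustrative platform-data fragment exercising the new OCP retry and HPM fields; the IRQ number, retry count, delay, and base address are assumed values, not taken from any board file.

#include <linux/regulator/qpnp-regulator.h>

static struct qpnp_regulator_platform_data example_vs_pdata = {
	.ocp_enable		= 1,	/* allow OCP on this voltage switch */
	.ocp_irq		= 123,	/* assumed OCP IRQ number */
	.ocp_max_retries	= 4,	/* toggle the switch at most 4 times */
	.ocp_retry_delay_ms	= 500,	/* wait 500 ms between toggles */
	.hpm_enable		= QPNP_REGULATOR_USE_HW_DEFAULT,
	.base_addr		= 0x1800, /* assumed SMPI base address */
};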
diff --git a/include/linux/usb/msm_hsusb.h b/include/linux/usb/msm_hsusb.h
index d64e198..4ae3b79 100644
--- a/include/linux/usb/msm_hsusb.h
+++ b/include/linux/usb/msm_hsusb.h
@@ -423,6 +423,7 @@
u32 standalone_latency;
bool pool_64_bit_align;
bool enable_hbm;
+ bool disable_park_mode;
};
struct msm_usb_host_platform_data {
diff --git a/include/media/msm_cam_sensor.h b/include/media/msm_cam_sensor.h
index bce6af3..992649f 100644
--- a/include/media/msm_cam_sensor.h
+++ b/include/media/msm_cam_sensor.h
@@ -108,10 +108,10 @@
SUB_MODULE_EEPROM,
SUB_MODULE_LED_FLASH,
SUB_MODULE_STROBE_FLASH,
- SUB_MODULE_CSIPHY,
- SUB_MODULE_CSIPHY_3D,
SUB_MODULE_CSID,
SUB_MODULE_CSID_3D,
+ SUB_MODULE_CSIPHY,
+ SUB_MODULE_CSIPHY_3D,
SUB_MODULE_MAX,
};
@@ -207,6 +207,7 @@
uint8_t settle_cnt;
uint16_t lane_mask;
uint8_t combo_mode;
+ uint8_t csid_core;
};
struct msm_camera_csi2_params {
diff --git a/include/media/msm_camera.h b/include/media/msm_camera.h
index afd5a42..b4b3bfc 100644
--- a/include/media/msm_camera.h
+++ b/include/media/msm_camera.h
@@ -1396,6 +1396,7 @@
uint8_t settle_cnt;
uint16_t lane_mask;
uint8_t combo_mode;
+ uint8_t csid_core;
};
struct msm_camera_csi2_params {
diff --git a/include/media/msm_gemini.h b/include/media/msm_gemini.h
index 0167335..2209758 100644
--- a/include/media/msm_gemini.h
+++ b/include/media/msm_gemini.h
@@ -51,10 +51,19 @@
#define MSM_GMN_IOCTL_TEST_DUMP_REGION \
_IOW(MSM_GMN_IOCTL_MAGIC, 15, unsigned long)
+#define MSM_GMN_IOCTL_SET_MODE \
+ _IOW(MSM_GMN_IOCTL_MAGIC, 16, enum msm_gmn_out_mode)
+
#define MSM_GEMINI_MODE_REALTIME_ENCODE 0
#define MSM_GEMINI_MODE_OFFLINE_ENCODE 1
#define MSM_GEMINI_MODE_REALTIME_ROTATION 2
#define MSM_GEMINI_MODE_OFFLINE_ROTATION 3
+
+enum msm_gmn_out_mode {
+ MSM_GMN_OUTMODE_FRAGMENTED,
+ MSM_GMN_OUTMODE_SINGLE
+};
+
struct msm_gemini_ctrl_cmd {
uint32_t type;
uint32_t len;
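A short userspace sketch of the new ioctl; the already-open file descriptor on the gemini device node is an assumption of the example.

#include <sys/ioctl.h>
#include <media/msm_gemini.h>

/* Select non-fragmented output; gemini_fd is an already-open device fd. */
static int example_set_gemini_mode(int gemini_fd)
{
	enum msm_gmn_out_mode mode = MSM_GMN_OUTMODE_SINGLE;

	/* Pass a pointer to the mode, matching the _IOW() definition above */
	return ioctl(gemini_fd, MSM_GMN_IOCTL_SET_MODE, &mode);
}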
diff --git a/include/media/msmb_isp.h b/include/media/msmb_isp.h
index 6fb1a65..bf6b23b 100644
--- a/include/media/msmb_isp.h
+++ b/include/media/msmb_isp.h
@@ -168,12 +168,6 @@
enum msm_vfe_frame_skip_pattern skip_pattern;
};
-enum msm_vfe_stats_pipeline_policy {
- STATS_COMP_ALL,
- STATS_COMP_NONE,
- MAX_STATS_POLICY,
-};
-
enum msm_isp_stats_type {
MSM_ISP_STATS_AEC, /* legacy based AEC */
MSM_ISP_STATS_AF, /* legacy based AF */
@@ -193,11 +187,11 @@
uint32_t session_id;
uint32_t stream_id;
enum msm_isp_stats_type stats_type;
+ uint32_t composite_flag;
uint32_t framedrop_pattern;
uint32_t irq_subsample_pattern;
uint32_t buffer_offset;
uint32_t stream_handle;
- uint8_t comp_flag;
};
struct msm_vfe_stats_stream_release_cmd {
@@ -209,12 +203,6 @@
uint8_t enable;
};
-struct msm_vfe_stats_comp_policy_cfg {
- enum msm_vfe_stats_pipeline_policy stats_pipeline_policy;
- uint32_t comp_framedrop_pattern;
- uint32_t comp_irq_subsample_pattern;
-};
-
enum msm_vfe_reg_cfg_type {
VFE_WRITE,
VFE_WRITE_MB,
@@ -319,7 +307,7 @@
#define ISP_EVENT_EOF (ISP_EVENT_BASE + ISP_EOF)
#define ISP_EVENT_BUF_DIVERT (ISP_BUF_EVENT_BASE)
#define ISP_EVENT_STATS_NOTIFY (ISP_STATS_EVENT_BASE)
-
+#define ISP_EVENT_COMP_STATS_NOTIFY (ISP_EVENT_STATS_NOTIFY + MSM_ISP_STATS_MAX)
/* The msm_v4l2_event_data structure should match the
* v4l2_event.u.data field.
* should not exceed 64 bytes */
@@ -412,10 +400,6 @@
_IOWR('V', BASE_VIDIOC_PRIVATE+11, \
struct msm_vfe_stats_stream_release_cmd)
-#define VIDIOC_MSM_ISP_CFG_STATS_COMP_POLICY \
- _IOWR('V', BASE_VIDIOC_PRIVATE+12, \
- struct msm_vfe_stats_comp_policy_cfg)
-
#define VIDIOC_MSM_ISP_UPDATE_STREAM \
_IOWR('V', BASE_VIDIOC_PRIVATE+13, struct msm_vfe_axi_stream_update_cmd)
diff --git a/include/sound/apr_audio-v2.h b/include/sound/apr_audio-v2.h
index fa4dedc..87730b1 100644
--- a/include/sound/apr_audio-v2.h
+++ b/include/sound/apr_audio-v2.h
@@ -634,8 +634,6 @@
* when a new port is added here. */
#define PRIMARY_I2S_RX 0 /* index = 0 */
#define PRIMARY_I2S_TX 1 /* index = 1 */
-#define PCM_RX 2 /* index = 2 */
-#define PCM_TX 3 /* index = 3 */
#define SECONDARY_I2S_RX 4 /* index = 4 */
#define SECONDARY_I2S_TX 5 /* index = 5 */
#define MI2S_RX 6 /* index = 6 */
@@ -784,26 +782,6 @@
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_RX 0x4008
/* SLIMbus Tx port on channel 4. */
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_TX 0x4009
-/* SLIMbus Rx port on channel 0. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_0_RX 0x4000
-/* SLIMbus Tx port on channel 0. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_0_TX 0x4001
-/* SLIMbus Rx port on channel 1. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_1_RX 0x4002
-/* SLIMbus Tx port on channel 1. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_1_TX 0x4003
-/* SLIMbus Rx port on channel 2. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_2_RX 0x4004
-/* SLIMbus Tx port on channel 2. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_2_TX 0x4005
-/* SLIMbus Rx port on channel 3. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_3_RX 0x4006
-/* SLIMbus Tx port on channel 3. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_3_TX 0x4007
-/* SLIMbus Rx port on channel 4. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_RX 0x4008
-/* SLIMbus Tx port on channel 4. */
-#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_4_TX 0x4009
/* SLIMbus Rx port on channel 5. */
#define AFE_PORT_ID_SLIMBUS_MULTI_CHAN_5_RX 0x400a
/* SLIMbus Tx port on channel 5. */
diff --git a/include/sound/q6afe-v2.h b/include/sound/q6afe-v2.h
index 2a740f4..1389b2a 100644
--- a/include/sound/q6afe-v2.h
+++ b/include/sound/q6afe-v2.h
@@ -41,8 +41,8 @@
enum {
IDX_PRIMARY_I2S_RX = 0,
IDX_PRIMARY_I2S_TX = 1,
- IDX_PCM_RX = 2,
- IDX_PCM_TX = 3,
+ IDX_AFE_PORT_ID_PRIMARY_PCM_RX = 2,
+ IDX_AFE_PORT_ID_PRIMARY_PCM_TX = 3,
IDX_SECONDARY_I2S_RX = 4,
IDX_SECONDARY_I2S_TX = 5,
IDX_MI2S_RX = 6,
@@ -166,7 +166,7 @@
int afe_port_start(u16 port_id, union afe_port_config *afe_config,
u32 rate);
int afe_spk_prot_feed_back_cfg(int src_port, int dst_port,
- int l_ch, int r_ch);
+ int l_ch, int r_ch, u32 enable);
int afe_spk_prot_get_calib_data(struct afe_spkr_prot_get_vi_calib *calib);
int afe_port_stop_nowait(int port_id);
int afe_apply_gain(u16 port_id, u16 gain);
diff --git a/include/trace/events/cpufreq_interactive.h b/include/trace/events/cpufreq_interactive.h
index ea83664..951e6ca 100644
--- a/include/trace/events/cpufreq_interactive.h
+++ b/include/trace/events/cpufreq_interactive.h
@@ -28,13 +28,7 @@
__entry->actualfreq)
);
-DEFINE_EVENT(set, cpufreq_interactive_up,
- TP_PROTO(u32 cpu_id, unsigned long targfreq,
- unsigned long actualfreq),
- TP_ARGS(cpu_id, targfreq, actualfreq)
-);
-
-DEFINE_EVENT(set, cpufreq_interactive_down,
+DEFINE_EVENT(set, cpufreq_interactive_setspeed,
TP_PROTO(u32 cpu_id, unsigned long targfreq,
unsigned long actualfreq),
TP_ARGS(cpu_id, targfreq, actualfreq)
@@ -42,44 +36,50 @@
DECLARE_EVENT_CLASS(loadeval,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq),
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg),
TP_STRUCT__entry(
__field(unsigned long, cpu_id )
__field(unsigned long, load )
- __field(unsigned long, curfreq )
- __field(unsigned long, targfreq )
+ __field(unsigned long, curtarg )
+ __field(unsigned long, curactual )
+ __field(unsigned long, newtarg )
),
TP_fast_assign(
__entry->cpu_id = cpu_id;
__entry->load = load;
- __entry->curfreq = curfreq;
- __entry->targfreq = targfreq;
+ __entry->curtarg = curtarg;
+ __entry->curactual = curactual;
+ __entry->newtarg = newtarg;
),
- TP_printk("cpu=%lu load=%lu cur=%lu targ=%lu",
- __entry->cpu_id, __entry->load, __entry->curfreq,
- __entry->targfreq)
+ TP_printk("cpu=%lu load=%lu cur=%lu actual=%lu targ=%lu",
+ __entry->cpu_id, __entry->load, __entry->curtarg,
+ __entry->curactual, __entry->newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_target,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_already,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
DEFINE_EVENT(loadeval, cpufreq_interactive_notyet,
TP_PROTO(unsigned long cpu_id, unsigned long load,
- unsigned long curfreq, unsigned long targfreq),
- TP_ARGS(cpu_id, load, curfreq, targfreq)
+ unsigned long curtarg, unsigned long curactual,
+ unsigned long newtarg),
+ TP_ARGS(cpu_id, load, curtarg, curactual, newtarg)
);
TRACE_EVENT(cpufreq_interactive_boost,
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index a1da44f..e76e822 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -495,6 +495,70 @@
TP_ARGS(mode)
);
+DECLARE_EVENT_CLASS(ion_alloc_pages,
+
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order),
+
+ TP_STRUCT__entry(
+ __field(gfp_t, gfp_flags)
+ __field(unsigned int, order)
+ ),
+
+ TP_fast_assign(
+ __entry->gfp_flags = gfp_flags;
+ __entry->order = order;
+ ),
+
+ TP_printk("gfp_flags=%s order=%d",
+ show_gfp_flags(__entry->gfp_flags),
+ __entry->order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_start,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_end,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_iommu_fail,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_start,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_end,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
+DEFINE_EVENT(ion_alloc_pages, alloc_pages_sys_fail,
+ TP_PROTO(gfp_t gfp_flags,
+ unsigned int order),
+
+ TP_ARGS(gfp_flags, order)
+ );
+
#endif /* _TRACE_KMEM_H */
/* This part must be outside protection */
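For reference, a sketch of how a heap implementation might emit these tracepoints around a page allocation; the surrounding function and variable names are assumptions, not code from this patch.

#include <linux/gfp.h>
#include <trace/events/kmem.h>

static struct page *example_traced_alloc(gfp_t gfp_mask, unsigned int order)
{
	struct page *page;

	trace_alloc_pages_sys_start(gfp_mask, order);
	page = alloc_pages(gfp_mask, order);
	if (!page) {
		trace_alloc_pages_sys_fail(gfp_mask, order);
		return NULL;
	}
	trace_alloc_pages_sys_end(gfp_mask, order);
	return page;
}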
diff --git a/kernel/pid.c b/kernel/pid.c
index 9f08dfa..7acf590 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -430,6 +430,7 @@
{
return find_task_by_pid_ns(vnr, current->nsproxy->pid_ns);
}
+EXPORT_SYMBOL_GPL(find_task_by_vpid);
struct pid *get_task_pid(struct task_struct *task, enum pid_type type)
{
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index a2bad88..862e172 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4757,6 +4757,7 @@
delayacct_blkio_end();
return ret;
}
+EXPORT_SYMBOL(io_schedule_timeout);
/**
* sys_sched_get_priority_max - return maximum RT priority.
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b175073..7e31770 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2126,11 +2126,11 @@
static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
{
struct sched_entity *se = &p->se;
- struct cfs_rq *cfs_rq = cfs_rq_of(se);
+ struct cfs_rq *cfs_rq = &rq->cfs;
WARN_ON(task_rq(p) != rq);
- if (cfs_rq->nr_running > 1) {
+ if (cfs_rq->h_nr_running > 1) {
u64 slice = sched_slice(cfs_rq, se);
u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime;
s64 delta = slice - ran;
@@ -2154,8 +2154,7 @@
/*
* called from enqueue/dequeue and updates the hrtick when the
- * current task is from our class and nr_running is low enough
- * to matter.
+ * current task is from our class.
*/
static void hrtick_update(struct rq *rq)
{
@@ -2164,8 +2163,7 @@
if (!hrtick_enabled(rq) || curr->sched_class != &fair_sched_class)
return;
- if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency)
- hrtick_start_fair(rq, curr);
+ hrtick_start_fair(rq, curr);
}
#else /* !CONFIG_SCHED_HRTICK */
static inline void
@@ -4626,7 +4624,7 @@
raw_spin_lock(&this_rq->lock);
- if (pulled_task || time_after(jiffies, this_rq->next_balance)) {
+ if (!pulled_task || time_after(jiffies, this_rq->next_balance)) {
/*
* We are going idle. next_balance may be set based on
* a busy processor. So reset next_balance.
diff --git a/scripts/build-all.py b/scripts/build-all.py
index 3cecbe2..4789af7 100755
--- a/scripts/build-all.py
+++ b/scripts/build-all.py
@@ -88,7 +88,6 @@
r'[fm]sm[0-9]*_defconfig',
r'apq*_defconfig',
r'qsd*_defconfig',
- r'msmzinc*_defconfig',
)
for p in arch_pats:
for n in glob.glob('arch/arm/configs/' + p):
diff --git a/sound/soc/codecs/wcd9304-tables.c b/sound/soc/codecs/wcd9304-tables.c
index 83c0c1d..7ec0152 100644
--- a/sound/soc/codecs/wcd9304-tables.c
+++ b/sound/soc/codecs/wcd9304-tables.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -235,8 +235,24 @@
[SITAR_A_CDC_RX1_B4_CTL] = SITAR_A_CDC_RX1_B4_CTL__POR,
[SITAR_A_CDC_RX1_B5_CTL] = SITAR_A_CDC_RX1_B5_CTL__POR,
[SITAR_A_CDC_RX1_B6_CTL] = SITAR_A_CDC_RX1_B6_CTL__POR,
+ [SITAR_A_CDC_RX2_B1_CTL] = SITAR_A_CDC_RX2_B1_CTL__POR,
+ [SITAR_A_CDC_RX2_B2_CTL] = SITAR_A_CDC_RX2_B2_CTL__POR,
+ [SITAR_A_CDC_RX2_B3_CTL] = SITAR_A_CDC_RX2_B3_CTL__POR,
+ [SITAR_A_CDC_RX2_B4_CTL] = SITAR_A_CDC_RX2_B4_CTL__POR,
+ [SITAR_A_CDC_RX2_B5_CTL] = SITAR_A_CDC_RX2_B5_CTL__POR,
+ [SITAR_A_CDC_RX2_B6_CTL] = SITAR_A_CDC_RX2_B6_CTL__POR,
+ [SITAR_A_CDC_RX3_B1_CTL] = SITAR_A_CDC_RX3_B1_CTL__POR,
+ [SITAR_A_CDC_RX3_B2_CTL] = SITAR_A_CDC_RX3_B2_CTL__POR,
+ [SITAR_A_CDC_RX3_B3_CTL] = SITAR_A_CDC_RX3_B3_CTL__POR,
+ [SITAR_A_CDC_RX3_B4_CTL] = SITAR_A_CDC_RX3_B4_CTL__POR,
+ [SITAR_A_CDC_RX3_B5_CTL] = SITAR_A_CDC_RX3_B5_CTL__POR,
+ [SITAR_A_CDC_RX3_B6_CTL] = SITAR_A_CDC_RX3_B6_CTL__POR,
[SITAR_A_CDC_RX1_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX1_VOL_CTL_B1_CTL__POR,
[SITAR_A_CDC_RX1_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX1_VOL_CTL_B2_CTL__POR,
+ [SITAR_A_CDC_RX2_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX2_VOL_CTL_B1_CTL__POR,
+ [SITAR_A_CDC_RX2_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX2_VOL_CTL_B2_CTL__POR,
+ [SITAR_A_CDC_RX3_VOL_CTL_B1_CTL] = SITAR_A_CDC_RX3_VOL_CTL_B1_CTL__POR,
+ [SITAR_A_CDC_RX3_VOL_CTL_B2_CTL] = SITAR_A_CDC_RX3_VOL_CTL_B2_CTL__POR,
[SITAR_A_CDC_CLK_ANC_RESET_CTL] = SITAR_A_CDC_CLK_ANC_RESET_CTL__POR,
[SITAR_A_CDC_CLK_RX_RESET_CTL] = SITAR_A_CDC_CLK_RX_RESET_CTL__POR,
[SITAR_A_CDC_CLK_TX_RESET_B1_CTL] =
@@ -322,6 +338,15 @@
[SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS] =
SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS__POR,
[SITAR_A_CDC_COMP1_FS_CFG] = SITAR_A_CDC_COMP1_FS_CFG__POR,
+ [SITAR_A_CDC_COMP2_B1_CTL] = SITAR_A_CDC_COMP2_B1_CTL__POR,
+ [SITAR_A_CDC_COMP2_B2_CTL] = SITAR_A_CDC_COMP2_B2_CTL__POR,
+ [SITAR_A_CDC_COMP2_B3_CTL] = SITAR_A_CDC_COMP2_B3_CTL__POR,
+ [SITAR_A_CDC_COMP2_B4_CTL] = SITAR_A_CDC_COMP2_B4_CTL__POR,
+ [SITAR_A_CDC_COMP2_B5_CTL] = SITAR_A_CDC_COMP2_B5_CTL__POR,
+ [SITAR_A_CDC_COMP2_B6_CTL] = SITAR_A_CDC_COMP2_B6_CTL__POR,
+ [SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS] =
+ SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS__POR,
+ [SITAR_A_CDC_COMP2_FS_CFG] = SITAR_A_CDC_COMP2_FS_CFG__POR,
[SITAR_A_CDC_CONN_RX1_B1_CTL] = SITAR_A_CDC_CONN_RX1_B1_CTL__POR,
[SITAR_A_CDC_CONN_RX1_B2_CTL] = SITAR_A_CDC_CONN_RX1_B2_CTL__POR,
[SITAR_A_CDC_CONN_RX1_B3_CTL] = SITAR_A_CDC_CONN_RX1_B3_CTL__POR,
diff --git a/sound/soc/codecs/wcd9304.c b/sound/soc/codecs/wcd9304.c
index f5f4e23..616f8d5 100644
--- a/sound/soc/codecs/wcd9304.c
+++ b/sound/soc/codecs/wcd9304.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -81,6 +81,11 @@
#define SITAR_OCP_ATTEMPT 1
+#define COMP_DIGITAL_DB_GAIN_APPLY(a, b) \
+ (((a) <= 0) ? ((a) - b) : (a))
+/* The wait time value comes from codec HW specification */
+#define COMP_BRINGUP_WAIT_TIME 3000
+
#define SITAR_MCLK_RATE_12288KHZ 12288000
#define SITAR_MCLK_RATE_9600KHZ 9600000
@@ -148,6 +153,22 @@
BAND_MAX,
};
+enum {
+ COMPANDER_1 = 0,
+ COMPANDER_2,
+ COMPANDER_MAX,
+};
+
+enum {
+ COMPANDER_FS_8KHZ = 0,
+ COMPANDER_FS_16KHZ,
+ COMPANDER_FS_32KHZ,
+ COMPANDER_FS_48KHZ,
+ COMPANDER_FS_96KHZ,
+ COMPANDER_FS_192KHZ,
+ COMPANDER_FS_MAX,
+};
+
/* Flags to track of PA and DAC state.
* PA and DAC should be tracked separately as AUXPGA loopback requires
* only PA to be turned on without DAC being on. */
@@ -158,6 +179,33 @@
SITAR_HPHR_DAC_OFF_ACK
};
+struct comp_sample_dependent_params {
+ u32 peak_det_timeout;
+ u32 rms_meter_div_fact;
+ u32 rms_meter_resamp_fact;
+};
+
+struct comp_dgtl_gain_offset {
+ u8 whole_db_gain;
+ u8 half_db_gain;
+};
+
+static const struct comp_dgtl_gain_offset comp_dgtl_gain[] = {
+ {0, 0},
+ {1, 1},
+ {3, 0},
+ {4, 1},
+ {6, 0},
+ {7, 1},
+ {9, 0},
+ {10, 1},
+ {12, 0},
+ {13, 1},
+ {15, 0},
+ {16, 1},
+ {18, 0},
+};
+
/* Data used by MBHC */
struct mbhc_internal_cal_data {
u16 dce_z;
@@ -273,6 +321,11 @@
/* num of slim ports required */
struct wcd9xxx_codec_dai_data dai[NUM_CODEC_DAIS];
+ /*compander*/
+ int comp_enabled[COMPANDER_MAX];
+ u32 comp_fs[COMPANDER_MAX];
+ u8 comp_gain_offset[NUM_INTERPOLATORS];
+
/* Currently, only used for mbhc purpose, to protect
* concurrent execution of mbhc threaded irq handlers and
* kill race between DAPM and MBHC.But can serve as a
@@ -295,6 +348,47 @@
struct sitar_priv *debug_sitar_priv;
#endif
+static const int comp_rx_path[] = {
+ COMPANDER_2,
+ COMPANDER_1,
+ COMPANDER_1,
+ COMPANDER_MAX,
+};
+
+static const struct comp_sample_dependent_params
+ comp_samp_params[COMPANDER_FS_MAX] = {
+ {
+ .peak_det_timeout = 0x6,
+ .rms_meter_div_fact = 0x9 << 4,
+ .rms_meter_resamp_fact = 0x06,
+ },
+ {
+ .peak_det_timeout = 0x7,
+ .rms_meter_div_fact = 0xA << 4,
+ .rms_meter_resamp_fact = 0x0C,
+ },
+ {
+ .peak_det_timeout = 0x8,
+ .rms_meter_div_fact = 0xB << 4,
+ .rms_meter_resamp_fact = 0x30,
+ },
+ {
+ .peak_det_timeout = 0x9,
+ .rms_meter_div_fact = 0xB << 4,
+ .rms_meter_resamp_fact = 0x28,
+ },
+ {
+ .peak_det_timeout = 0xA,
+ .rms_meter_div_fact = 0xC << 4,
+ .rms_meter_resamp_fact = 0x50,
+ },
+ {
+ .peak_det_timeout = 0xB,
+ .rms_meter_div_fact = 0xC << 4,
+ .rms_meter_resamp_fact = 0x50,
+ },
+};
+
static int sitar_get_anc_slot(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
@@ -539,6 +633,268 @@
return 0;
}
+static int sitar_compander_gain_offset(
+ struct snd_soc_codec *codec, u32 enable,
+ unsigned int pa_reg, unsigned int vol_reg,
+ int mask, int event,
+ struct comp_dgtl_gain_offset *gain_offset,
+ int index)
+{
+ unsigned int pa_gain = snd_soc_read(codec, pa_reg);
+ unsigned int digital_vol = snd_soc_read(codec, vol_reg);
+ int pa_mode = pa_gain & mask;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: pa_gain(0x%x=0x%x)digital_vol(0x%x=0x%x)event(0x%x) index(%d)\n",
+ __func__, pa_reg, pa_gain, vol_reg, digital_vol, event, index);
+ if (((pa_gain & 0xF) + 1) > ARRAY_SIZE(comp_dgtl_gain) ||
+ (index >= ARRAY_SIZE(sitar->comp_gain_offset))) {
+ pr_err("%s: Out of array boundary\n", __func__);
+ return -EINVAL;
+ }
+
+ if (SND_SOC_DAPM_EVENT_ON(event) && (enable != 0)) {
+ gain_offset->whole_db_gain = COMP_DIGITAL_DB_GAIN_APPLY(
+ (digital_vol - comp_dgtl_gain[pa_gain & 0xF].whole_db_gain),
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain);
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain =
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain;
+ sitar->comp_gain_offset[index] = digital_vol -
+ gain_offset->whole_db_gain ;
+ }
+ if (SND_SOC_DAPM_EVENT_OFF(event) && (pa_mode == 0)) {
+ gain_offset->whole_db_gain = digital_vol +
+ sitar->comp_gain_offset[index];
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain = 0;
+ }
+
+ pr_debug("%s: half_db_gain(%d)whole_db_gain(0x%x)comp_gain_offset[%d](%d)\n",
+ __func__, gain_offset->half_db_gain,
+ gain_offset->whole_db_gain, index,
+ sitar->comp_gain_offset[index]);
+ return 0;
+}
+
+static int sitar_config_gain_compander(
+ struct snd_soc_codec *codec,
+ u32 compander, u32 enable, int event)
+{
+ int value = 0;
+ int mask = 1 << 4;
+ struct comp_dgtl_gain_offset gain_offset = {0, 0};
+ if (compander >= COMPANDER_MAX) {
+ pr_err("%s: Error, invalid compander channel\n", __func__);
+ return -EINVAL;
+ }
+
+ if ((enable == 0) || SND_SOC_DAPM_EVENT_OFF(event))
+ value = 1 << 4;
+
+ if (compander == COMPANDER_1) {
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_HPH_L_GAIN,
+ SITAR_A_CDC_RX2_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 1);
+ snd_soc_update_bits(codec, SITAR_A_RX_HPH_L_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX2_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX2_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_HPH_R_GAIN,
+ SITAR_A_CDC_RX3_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 2);
+ snd_soc_update_bits(codec, SITAR_A_RX_HPH_R_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX3_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX3_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ } else if (compander == COMPANDER_2) {
+ sitar_compander_gain_offset(codec, enable,
+ SITAR_A_RX_LINE_1_GAIN,
+ SITAR_A_CDC_RX1_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 0);
+ snd_soc_update_bits(codec, SITAR_A_RX_LINE_1_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_RX_LINE_2_GAIN, mask, value);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX1_VOL_CTL_B2_CTL,
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, SITAR_A_CDC_RX1_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ }
+ return 0;
+}
+
+static int sitar_get_compander(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ int comp = ((struct soc_multi_mixer_control *)
+ kcontrol->private_value)->shift;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+
+ ucontrol->value.integer.value[0] = sitar->comp_enabled[comp];
+
+ return 0;
+}
+
+static int sitar_set_compander(struct snd_kcontrol *kcontrol,
+ struct snd_ctl_elem_value *ucontrol)
+{
+ struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ int comp = ((struct soc_multi_mixer_control *)
+ kcontrol->private_value)->shift;
+ int value = ucontrol->value.integer.value[0];
+
+ pr_debug("%s: compander #%d enable %d\n",
+ __func__, comp + 1, value);
+ if (value == sitar->comp_enabled[comp]) {
+ pr_debug("%s: compander #%d enable %d no change\n",
+ __func__, comp + 1, value);
+ return 0;
+ }
+ sitar->comp_enabled[comp] = value;
+ return 0;
+}
+
+static int sitar_config_compander(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ u32 rate = sitar->comp_fs[w->shift];
+ u32 value;
+
+ pr_debug("%s: compander #%d enable %d event %d widget name %s\n",
+ __func__, w->shift + 1,
+ sitar->comp_enabled[w->shift], event , w->name);
+ if (sitar->comp_enabled[w->shift] == 0)
+ goto rtn;
+ switch (event) {
+ case SND_SOC_DAPM_PRE_PMU:
+ /* Update compander sample rate */
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_FS_CFG +
+ w->shift * 8, 0x07, rate);
+ /* Enable compander clock */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_RX_B2_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift, 0);
+ sitar_config_gain_compander(codec, w->shift, 1, event);
+ /* Compander enable -> 0x370/0x378 */
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x03);
+ /* Update the RMS meter resampling */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF, 0x01);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0, 0x50);
+ usleep_range(COMP_BRINGUP_WAIT_TIME, COMP_BRINGUP_WAIT_TIME);
+ break;
+ case SND_SOC_DAPM_POST_PMU:
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLSG_CTL,
+ 0x11, 0x00);
+ if (w->shift == COMPANDER_1)
+ value = 0x22;
+ else
+ value = 0x11;
+ snd_soc_write(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL, value);
+
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0x0F,
+ comp_samp_params[rate].peak_det_timeout);
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0,
+ comp_samp_params[rate].rms_meter_div_fact);
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF,
+ comp_samp_params[rate].rms_meter_resamp_fact);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, SITAR_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x00);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift,
+ 1 << w->shift);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << w->shift, 0);
+ /* Disable compander clock */
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLK_RX_B2_CTL,
+ 1 << w->shift,
+ 0);
+ /* Restore the gain */
+ sitar_config_gain_compander(codec, w->shift,
+ sitar->comp_enabled[w->shift],
+ event);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CLSG_CTL,
+ 0x11, 0x11);
+ snd_soc_write(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL, 0x14);
+ break;
+ }
+rtn:
+ return 0;
+}
+
+static int sitar_codec_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
+ pr_debug("%s: compander#1->enable(%d) compander#2->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, sitar->comp_enabled[COMPANDER_1],
+ sitar->comp_enabled[COMPANDER_2],
+ SITAR_A_CDC_RX1_B6_CTL + w->shift * 8,
+ snd_soc_read(codec, SITAR_A_CDC_RX1_B6_CTL + w->shift * 8),
+ event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (sitar->comp_enabled[COMPANDER_1] ||
+ sitar->comp_enabled[COMPANDER_2])
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL +
+ w->shift * 8,
+ 1 << 5, 0);
+ else
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL +
+ w->shift * 8,
+ 1 << 5, 0x20);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_RX1_B6_CTL + w->shift * 8,
+ 1 << 5, 0);
+ break;
+ }
+ return 0;
+}
+
static const char * const sitar_ear_pa_gain_text[] = {"POS_6_DB",
"POS_2_DB", "NEG_2P5_DB", "NEG_12_DB"};
@@ -652,6 +1008,10 @@
sitar_get_iir_band_audio_mixer, sitar_put_iir_band_audio_mixer),
SOC_SINGLE_MULTI_EXT("IIR2 Band5", IIR2, BAND5, 255, 0, 5,
sitar_get_iir_band_audio_mixer, sitar_put_iir_band_audio_mixer),
+ SOC_SINGLE_EXT("COMP1 Switch", SND_SOC_NOPM, COMPANDER_1, 1, 0,
+ sitar_get_compander, sitar_set_compander),
+ SOC_SINGLE_EXT("COMP2 Switch", SND_SOC_NOPM, COMPANDER_2, 1, 0,
+ sitar_get_compander, sitar_set_compander),
};
static const char *rx_mix1_text[] = {
@@ -1267,9 +1627,14 @@
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
u16 lineout_gain_reg;
- pr_debug("%s %d %s\n", __func__, event, w->name);
+ pr_debug("%s %d %s comp2 enable %d\n", __func__, event, w->name,
+ sitar->comp_enabled[COMPANDER_2]);
+
+ if (sitar->comp_enabled[COMPANDER_2])
+ goto rtn;
switch (w->shift) {
case 0:
@@ -1311,6 +1676,7 @@
snd_soc_update_bits(codec, lineout_gain_reg, 0x10, 0x00);
break;
}
+rtn:
return 0;
}
@@ -2010,16 +2376,22 @@
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ struct sitar_priv *sitar = snd_soc_codec_get_drvdata(codec);
- pr_debug("%s %s %d\n", __func__, w->name, event);
+ pr_debug("%s %s %d comp#1 enable %d\n", __func__,
+ w->name, event, sitar->comp_enabled[COMPANDER_1]);
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
if (w->reg == SITAR_A_RX_HPH_L_DAC_CTL) {
- snd_soc_update_bits(codec, SITAR_A_CDC_CONN_CLSG_CTL,
- 0x30, 0x20);
- snd_soc_update_bits(codec, SITAR_A_CDC_CONN_CLSG_CTL,
- 0x0C, 0x08);
+ if (!sitar->comp_enabled[COMPANDER_1]) {
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL,
+ 0x30, 0x20);
+ snd_soc_update_bits(codec,
+ SITAR_A_CDC_CONN_CLSG_CTL,
+ 0x0C, 0x08);
+ }
}
snd_soc_update_bits(codec, w->reg, 0x40, 0x40);
break;
@@ -2338,9 +2710,15 @@
SND_SOC_DAPM_MUX("DAC4 MUX", SND_SOC_NOPM, 0, 0,
&rx_dac4_mux),
- SND_SOC_DAPM_MIXER("RX1 CHAIN", SITAR_A_CDC_RX1_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX2 CHAIN", SITAR_A_CDC_RX2_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX3 CHAIN", SITAR_A_CDC_RX3_B6_CTL, 5, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER_E("RX1 CHAIN", SND_SOC_NOPM, 0, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX2 CHAIN", SND_SOC_NOPM, 1, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX3 CHAIN", SND_SOC_NOPM, 2, 0, NULL,
+ 0, sitar_codec_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
SND_SOC_DAPM_MUX("RX1 MIX1 INP1", SND_SOC_NOPM, 0, 0,
&rx_mix1_inp1_mux),
@@ -2463,6 +2841,13 @@
sitar_codec_enable_dmic, SND_SOC_DAPM_PRE_PMU |
SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_SUPPLY("COMP1_CLK", SND_SOC_NOPM, COMPANDER_1, 0,
+ sitar_config_compander, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+ SND_SOC_DAPM_SUPPLY("COMP2_CLK", SND_SOC_NOPM, COMPANDER_2, 0,
+ sitar_config_compander, SND_SOC_DAPM_PRE_PMU |
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_POST_PMD),
+
/* Sidetone */
SND_SOC_DAPM_MUX("IIR1 INP1 MUX", SND_SOC_NOPM, 0, 0, &iir1_inp1_mux),
SND_SOC_DAPM_PGA("IIR1", SITAR_A_CDC_CLK_SD_CTL, 0, 0, NULL, 0),
@@ -2592,6 +2977,10 @@
{"SLIM RX3", NULL, "SLIM RX3 MUX"},
{"SLIM RX4", NULL, "SLIM RX4 MUX"},
+ {"RX1 MIX1", NULL, "COMP2_CLK"},
+ {"RX2 MIX1", NULL, "COMP1_CLK"},
+ {"RX3 MIX1", NULL, "COMP1_CLK"},
+
/* Slimbus port 5 is non functional in Sitar 1.0 */
{"RX1 MIX1 INP1", "RX1", "SLIM RX1"},
{"RX1 MIX1 INP1", "RX2", "SLIM RX2"},
@@ -2733,6 +3122,10 @@
if (reg == SITAR_A_CDC_RX1_VOL_CTL_B2_CTL + (8 * i))
return 1;
}
+
+ if ((reg == SITAR_A_CDC_COMP1_SHUT_DOWN_STATUS) ||
+ (reg == SITAR_A_CDC_COMP2_SHUT_DOWN_STATUS))
+ return 1;
return 0;
}
@@ -3191,6 +3584,7 @@
struct snd_soc_codec *codec = dai->codec;
struct sitar_priv *sitar = snd_soc_codec_get_drvdata(dai->codec);
u8 path, shift;
+ u32 compander_fs;
u16 tx_fs_reg, rx_fs_reg;
u8 tx_fs_rate, rx_fs_rate, rx_state, tx_state;
@@ -3200,18 +3594,32 @@
case 8000:
tx_fs_rate = 0x00;
rx_fs_rate = 0x00;
+ compander_fs = COMPANDER_FS_8KHZ;
break;
case 16000:
tx_fs_rate = 0x01;
rx_fs_rate = 0x20;
+ compander_fs = COMPANDER_FS_16KHZ;
break;
case 32000:
tx_fs_rate = 0x02;
rx_fs_rate = 0x40;
+ compander_fs = COMPANDER_FS_32KHZ;
break;
case 48000:
tx_fs_rate = 0x03;
rx_fs_rate = 0x60;
+ compander_fs = COMPANDER_FS_48KHZ;
+ break;
+ case 96000:
+ tx_fs_rate = 0x04;
+ rx_fs_rate = 0x80;
+ compander_fs = COMPANDER_FS_96KHZ;
+ break;
+ case 192000:
+ tx_fs_rate = 0x05;
+ rx_fs_rate = 0xa0;
+ compander_fs = COMPANDER_FS_192KHZ;
break;
default:
pr_err("%s: Invalid sampling rate %d\n", __func__,
@@ -3285,6 +3693,9 @@
+ (BITS_PER_REG*(path-1));
snd_soc_update_bits(codec, rx_fs_reg,
0xE0, rx_fs_rate);
+ if (comp_rx_path[shift] < COMPANDER_MAX)
+ sitar->comp_fs[comp_rx_path[shift]]
+ = compander_fs;
}
}
if (sitar->intf_type == WCD9XXX_INTERFACE_TYPE_I2C) {
@@ -5484,6 +5895,11 @@
if (sitar->intf_type == WCD9XXX_INTERFACE_TYPE_I2C)
sitar_i2c_codec_init_reg(codec);
+ for (i = 0; i < COMPANDER_MAX; i++) {
+ sitar->comp_enabled[i] = 0;
+ sitar->comp_fs[i] = COMPANDER_FS_48KHZ;
+ }
+
ret = sitar_handle_pdata(sitar);
if (IS_ERR_VALUE(ret)) {
pr_err("%s: bad pdata\n", __func__);
diff --git a/sound/soc/codecs/wcd9306.c b/sound/soc/codecs/wcd9306.c
index a7069a6..67674f3 100644
--- a/sound/soc/codecs/wcd9306.c
+++ b/sound/soc/codecs/wcd9306.c
@@ -4371,6 +4371,12 @@
return ret;
}
+static void tapan_cleanup_irqs(struct tapan_priv *tapan)
+{
+ struct snd_soc_codec *codec = tapan->codec;
+ wcd9xxx_free_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS, tapan);
+}
+
int tapan_hs_detect(struct snd_soc_codec *codec,
struct wcd9xxx_mbhc_config *mbhc_cfg)
{
@@ -4512,6 +4518,10 @@
mutex_unlock(&dapm->codec->mutex);
codec->ignore_pmdown_time = 1;
+
+ if (ret)
+ tapan_cleanup_irqs(tapan);
+
return ret;
err_pdata:
@@ -4532,6 +4542,9 @@
wcd9xxx_resmgr_put_bandgap(&tapan->resmgr,
WCD9XXX_BANDGAP_AUDIO_MODE);
WCD9XXX_BCL_UNLOCK(&tapan->resmgr);
+
+ tapan_cleanup_irqs(tapan);
+
/* cleanup MBHC */
wcd9xxx_mbhc_deinit(&tapan->mbhc);
/* cleanup resmgr */
diff --git a/sound/soc/codecs/wcd9310.c b/sound/soc/codecs/wcd9310.c
index b8a4a86..29703b9 100644
--- a/sound/soc/codecs/wcd9310.c
+++ b/sound/soc/codecs/wcd9310.c
@@ -55,9 +55,12 @@
#define MBHC_FW_READ_ATTEMPTS 15
#define MBHC_FW_READ_TIMEOUT 2000000
#define MBHC_VDDIO_SWITCH_WAIT_MS 10
+#define COMP_DIGITAL_DB_GAIN_APPLY(a, b) \
+ (((a) <= 0) ? ((a) - b) : (a))
#define SLIM_CLOSE_TIMEOUT 1000
-
+/* The wait time value comes from codec HW specification */
+#define COMP_BRINGUP_WAIT_TIME 2000
enum {
MBHC_USE_HPHL_TRIGGER = 1,
MBHC_USE_MB_TRIGGER = 2
@@ -99,9 +102,7 @@
RX_MIX1_INP_SEL_RX6,
RX_MIX1_INP_SEL_RX7,
};
-
-#define TABLA_COMP_DIGITAL_GAIN_HP_OFFSET 3
-#define TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET 6
+#define MAX_PA_GAIN_OPTIONS 13
#define TABLA_MCLK_RATE_12288KHZ 12288000
#define TABLA_MCLK_RATE_9600KHZ 9600000
@@ -220,6 +221,28 @@
u32 shutdown_timeout;
};
+struct comp_dgtl_gain_offset {
+ u8 whole_db_gain;
+ u8 half_db_gain;
+};
+
+static const struct comp_dgtl_gain_offset
+ comp_dgtl_gain[MAX_PA_GAIN_OPTIONS] = {
+ {0, 0},
+ {1, 1},
+ {3, 0},
+ {4, 1},
+ {6, 0},
+ {7, 1},
+ {9, 0},
+ {10, 1},
+ {12, 0},
+ {13, 1},
+ {15, 0},
+ {16, 1},
+ {18, 0},
+};
+
/* Data used by MBHC */
struct mbhc_internal_cal_data {
u16 dce_z;
@@ -377,6 +400,7 @@
/*compander*/
int comp_enabled[COMPANDER_MAX];
u32 comp_fs[COMPANDER_MAX];
+ u8 comp_gain_offset[TABLA_SB_PGD_MAX_NUMBER_OF_RX_SLAVE_DEV_PORTS - 1];
/* Maintain the status of AUX PGA */
int aux_pga_cnt;
@@ -547,7 +571,10 @@
{
struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ mutex_lock(&codec->dapm.codec->mutex);
ucontrol->value.integer.value[0] = (tabla->anc_func == true ? 1 : 0);
+ mutex_unlock(&codec->dapm.codec->mutex);
return 0;
}
@@ -802,34 +829,51 @@
static int tabla_compander_gain_offset(
struct snd_soc_codec *codec, u32 enable,
- unsigned int reg, int mask, int event, u32 comp)
+ unsigned int pa_reg, unsigned int vol_reg,
+ int mask, int event,
+ struct comp_dgtl_gain_offset *gain_offset,
+ int index)
{
- int pa_mode = snd_soc_read(codec, reg) & mask;
- int gain_offset = 0;
- /* if PMU && enable is 1-> offset is 3
- * if PMU && enable is 0-> offset is 0
- * if PMD && pa_mode is PA -> offset is 0: PMU compander is off
- * if PMD && pa_mode is comp -> offset is -3: PMU compander is on.
- */
+ unsigned int pa_gain = snd_soc_read(codec, pa_reg);
+ unsigned int digital_vol = snd_soc_read(codec, vol_reg);
+ int pa_mode = pa_gain & mask;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: pa_gain(0x%x=0x%x)digital_vol(0x%x=0x%x)event(0x%x) index(%d)\n",
+ __func__, pa_reg, pa_gain, vol_reg, digital_vol, event, index);
+ if (((pa_gain & 0xF) + 1) > ARRAY_SIZE(comp_dgtl_gain) ||
+ (index >= ARRAY_SIZE(tabla->comp_gain_offset))) {
+ pr_err("%s: Out of array boundary\n", __func__);
+ return -EINVAL;
+ }
if (SND_SOC_DAPM_EVENT_ON(event) && (enable != 0)) {
- if (comp == COMPANDER_1)
- gain_offset = TABLA_COMP_DIGITAL_GAIN_HP_OFFSET;
- if (comp == COMPANDER_2)
- gain_offset = TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET;
+ gain_offset->whole_db_gain = COMP_DIGITAL_DB_GAIN_APPLY(
+ (digital_vol - comp_dgtl_gain[pa_gain & 0xF].whole_db_gain),
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain);
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain =
+ comp_dgtl_gain[pa_gain & 0xF].half_db_gain;
+ tabla->comp_gain_offset[index] = digital_vol -
+ gain_offset->whole_db_gain;
}
if (SND_SOC_DAPM_EVENT_OFF(event) && (pa_mode == 0)) {
- if (comp == COMPANDER_1)
- gain_offset = -TABLA_COMP_DIGITAL_GAIN_HP_OFFSET;
- if (comp == COMPANDER_2)
- gain_offset = -TABLA_COMP_DIGITAL_GAIN_LINEOUT_OFFSET;
-
+ gain_offset->whole_db_gain = digital_vol +
+ tabla->comp_gain_offset[index];
+ pr_debug("%s: listed whole_db_gain:0x%x, adjusted whole_db_gain:0x%x\n",
+ __func__, comp_dgtl_gain[pa_gain & 0xF].whole_db_gain,
+ gain_offset->whole_db_gain);
+ gain_offset->half_db_gain = 0;
}
- pr_debug("%s: compander #%d gain_offset %d\n",
- __func__, comp + 1, gain_offset);
- return gain_offset;
-}
+ pr_debug("%s: half_db_gain(%d)whole_db_gain(%d)comp_gain_offset[%d](%d)\n",
+ __func__, gain_offset->half_db_gain,
+ gain_offset->whole_db_gain, index,
+ tabla->comp_gain_offset[index]);
+ return 0;
+}
static int tabla_config_gain_compander(
struct snd_soc_codec *codec,
@@ -837,8 +881,7 @@
{
int value = 0;
int mask = 1 << 4;
- int gain = 0;
- int gain_offset;
+ struct comp_dgtl_gain_offset gain_offset = {0, 0};
if (compander >= COMPANDER_MAX) {
pr_err("%s: Error, invalid compander channel\n", __func__);
return -EINVAL;
@@ -848,43 +891,61 @@
value = 1 << 4;
if (compander == COMPANDER_1) {
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_HPH_L_GAIN, mask, event, compander);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_HPH_L_GAIN,
+ TABLA_A_CDC_RX1_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 0);
snd_soc_update_bits(codec, TABLA_A_RX_HPH_L_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX1_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX1_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_HPH_R_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_HPH_R_GAIN,
+ TABLA_A_CDC_RX2_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 1);
snd_soc_update_bits(codec, TABLA_A_RX_HPH_R_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX2_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX2_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
} else if (compander == COMPANDER_2) {
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_1_GAIN, mask, event, compander);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_1_GAIN,
+ TABLA_A_CDC_RX3_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 2);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_1_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX3_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX3_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_3_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX3_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_3_GAIN,
+ TABLA_A_CDC_RX4_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 3);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_3_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX4_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX4_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_2_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX4_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_2_GAIN,
+ TABLA_A_CDC_RX5_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 4);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_2_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX5_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX5_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
- gain_offset = tabla_compander_gain_offset(codec, enable,
- TABLA_A_RX_LINE_4_GAIN, mask, event, compander);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX5_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
+ tabla_compander_gain_offset(codec, enable,
+ TABLA_A_RX_LINE_4_GAIN,
+ TABLA_A_CDC_RX6_VOL_CTL_B2_CTL,
+ mask, event, &gain_offset, 5);
snd_soc_update_bits(codec, TABLA_A_RX_LINE_4_GAIN, mask, value);
- gain = snd_soc_read(codec, TABLA_A_CDC_RX6_VOL_CTL_B2_CTL);
snd_soc_update_bits(codec, TABLA_A_CDC_RX6_VOL_CTL_B2_CTL,
- 0xFF, gain - gain_offset);
+ 0xFF, gain_offset.whole_db_gain);
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX6_B6_CTL,
+ 0x02, gain_offset.half_db_gain);
}
return 0;
}
@@ -921,7 +982,6 @@
return 0;
}
-
static int tabla_config_compander(struct snd_soc_dapm_widget *w,
struct snd_kcontrol *kcontrol,
int event)
@@ -929,106 +989,161 @@
struct snd_soc_codec *codec = w->codec;
struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
u32 rate = tabla->comp_fs[w->shift];
- u32 status;
- unsigned long timeout;
- pr_debug("%s: compander #%d enable %d event %d\n",
+
+ pr_debug("%s: compander #%d enable %d event %d widget name %s\n",
__func__, w->shift + 1,
- tabla->comp_enabled[w->shift], event);
+ tabla->comp_enabled[w->shift], event, w->name);
+ if (tabla->comp_enabled[w->shift] == 0)
+ goto rtn;
+ if ((w->shift == COMPANDER_1) && (tabla->anc_func)) {
+ pr_debug("%s: ANC is enabled so compander #%d cannot be enabled\n",
+ __func__, w->shift + 1);
+ goto rtn;
+ }
switch (event) {
case SND_SOC_DAPM_PRE_PMU:
- if (tabla->comp_enabled[w->shift] != 0) {
- /* Enable both L/R compander clocks */
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_RX_B2_CTL,
- 1 << comp_shift[w->shift],
- 1 << comp_shift[w->shift]);
- /* Clear the HALT for the compander*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 0);
- /* Toggle compander reset bits*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_OTHR_RESET_CTL,
- 1 << comp_shift[w->shift],
- 1 << comp_shift[w->shift]);
- snd_soc_update_bits(codec,
- TABLA_A_CDC_CLK_OTHR_RESET_CTL,
- 1 << comp_shift[w->shift], 0);
- tabla_config_gain_compander(codec, w->shift, 1, event);
- /* Compander enable -> 0x370/0x378*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x03, 0x03);
- /* Update the RMS meter resampling*/
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B3_CTL +
- w->shift * 8, 0xFF, 0x01);
- snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0xF0, 0x50);
- /* Wait for 1ms*/
- usleep_range(5000, 5000);
- }
+ /* Update compander sample rate */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_FS_CFG +
+ w->shift * 8, 0x07, rate);
+ /* Enable both L/R compander clocks */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_RX_B2_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift], 0);
+ tabla_config_gain_compander(codec, w->shift, 1, event);
+ /* Compander enable -> 0x370/0x378 */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x03);
+ /* Update the RMS meter resampling */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF, 0x01);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0, 0x50);
+ usleep_range(COMP_BRINGUP_WAIT_TIME, COMP_BRINGUP_WAIT_TIME);
break;
case SND_SOC_DAPM_POST_PMU:
- /* Set sample rate dependent paramater*/
- if (tabla->comp_enabled[w->shift] != 0) {
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_FS_CFG +
- w->shift * 8, 0x07, rate);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0x0F,
- comp_samp_params[rate].peak_det_timeout);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
- w->shift * 8, 0xF0,
- comp_samp_params[rate].rms_meter_div_fact);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B3_CTL +
- w->shift * 8, 0xFF,
- comp_samp_params[rate].rms_meter_resamp_fact);
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x38,
- comp_samp_params[rate].shutdown_timeout);
+ /* Set sample rate dependent parameter */
+ if (w->shift == COMPANDER_1) {
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLSG_CTL,
+ 0x11, 0x00);
+ snd_soc_write(codec,
+ TABLA_A_CDC_CONN_CLSG_CTL, 0x11);
}
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0x0F,
+ comp_samp_params[rate].peak_det_timeout);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B2_CTL +
+ w->shift * 8, 0xF0,
+ comp_samp_params[rate].rms_meter_div_fact);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B3_CTL +
+ w->shift * 8, 0xFF,
+ comp_samp_params[rate].rms_meter_resamp_fact);
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x38,
+ comp_samp_params[rate].shutdown_timeout);
break;
case SND_SOC_DAPM_PRE_PMD:
- if (tabla->comp_enabled[w->shift] != 0) {
- status = snd_soc_read(codec,
- TABLA_A_CDC_COMP1_SHUT_DOWN_STATUS +
- w->shift * 8);
- pr_debug("%s: compander #%d shutdown status %d in event %d\n",
- __func__, w->shift + 1, status, event);
- /* Halt the compander*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 1 << 2);
- }
break;
case SND_SOC_DAPM_POST_PMD:
- if (tabla->comp_enabled[w->shift] != 0) {
- /* Wait up to a second for shutdown complete */
- timeout = jiffies + HZ;
- do {
- status = snd_soc_read(codec,
- TABLA_A_CDC_COMP1_SHUT_DOWN_STATUS +
- w->shift * 8);
- if (status == 0x3)
- break;
- usleep_range(5000, 5000);
- } while (!(time_after(jiffies, timeout)));
- /* Restore the gain */
- tabla_config_gain_compander(codec, w->shift,
- tabla->comp_enabled[w->shift],
- event);
- /* Disable the compander*/
- snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 0x03, 0x00);
- /* Turn off the clock for compander in pair*/
- snd_soc_update_bits(codec, TABLA_A_CDC_CLK_RX_B2_CTL,
- 0x03 << comp_shift[w->shift], 0);
- /* Clear the HALT for the compander*/
+ /* Disable the compander */
+ snd_soc_update_bits(codec, TABLA_A_CDC_COMP1_B1_CTL +
+ w->shift * 8, 0x03, 0x00);
+ /* Toggle compander reset bits */
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift],
+ 1 << comp_shift[w->shift]);
+ snd_soc_update_bits(codec,
+ TABLA_A_CDC_CLK_OTHR_RESET_CTL,
+ 1 << comp_shift[w->shift], 0);
+ /* Turn off the clocks for the compander pair */
+ snd_soc_update_bits(codec, TABLA_A_CDC_CLK_RX_B2_CTL,
+ 0x03 << comp_shift[w->shift], 0);
+ /* Restore the gain */
+ tabla_config_gain_compander(codec, w->shift,
+ tabla->comp_enabled[w->shift],
+ event);
+ if (w->shift == COMPANDER_1) {
snd_soc_update_bits(codec,
- TABLA_A_CDC_COMP1_B1_CTL +
- w->shift * 8, 1 << 2, 0);
+ TABLA_A_CDC_CLSG_CTL,
+ 0x11, 0x11);
+ snd_soc_write(codec,
+ TABLA_A_CDC_CONN_CLSG_CTL, 0x14);
}
break;
}
+rtn:
+ return 0;
+}
+
+static int tabla_codec_hphr_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: compander#1->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, tabla->comp_enabled[COMPANDER_1],
+ TABLA_A_CDC_RX1_B6_CTL,
+ snd_soc_read(codec, TABLA_A_CDC_RX1_B6_CTL), event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (tabla->comp_enabled[COMPANDER_1] && !tabla->anc_func)
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 0);
+ else
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 1 << w->shift);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX1_B6_CTL,
+ 1 << w->shift, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int tabla_codec_hphl_dem_input_selection(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *kcontrol,
+ int event)
+{
+ struct snd_soc_codec *codec = w->codec;
+ struct tabla_priv *tabla = snd_soc_codec_get_drvdata(codec);
+
+ pr_debug("%s: compander#1->enable(%d) reg(0x%x = 0x%x) event(%d)\n",
+ __func__, tabla->comp_enabled[COMPANDER_1],
+ TABLA_A_CDC_RX2_B6_CTL,
+ snd_soc_read(codec, TABLA_A_CDC_RX2_B6_CTL), event);
+ switch (event) {
+ case SND_SOC_DAPM_POST_PMU:
+ if (tabla->comp_enabled[COMPANDER_1] && !tabla->anc_func)
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 0);
+ else
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 1 << w->shift);
+ break;
+ case SND_SOC_DAPM_POST_PMD:
+ snd_soc_update_bits(codec, TABLA_A_CDC_RX2_B6_CTL,
+ 1 << w->shift, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
return 0;
}
@@ -5361,8 +5476,12 @@
&rx6_dsm_mux, tabla_codec_reset_interpolator,
SND_SOC_DAPM_PRE_PMU),
- SND_SOC_DAPM_MIXER("RX1 CHAIN", TABLA_A_CDC_RX1_B6_CTL, 5, 0, NULL, 0),
- SND_SOC_DAPM_MIXER("RX2 CHAIN", TABLA_A_CDC_RX2_B6_CTL, 5, 0, NULL, 0),
+ SND_SOC_DAPM_MIXER_E("RX1 CHAIN", SND_SOC_NOPM, 5, 0, NULL,
+ 0, tabla_codec_hphr_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
+ SND_SOC_DAPM_MIXER_E("RX2 CHAIN", SND_SOC_NOPM, 5, 0, NULL,
+ 0, tabla_codec_hphl_dem_input_selection,
+ SND_SOC_DAPM_POST_PMU | SND_SOC_DAPM_PRE_POST_PMD),
SND_SOC_DAPM_MUX("RX1 MIX1 INP1", SND_SOC_NOPM, 0, 0,
&rx_mix1_inp1_mux),
diff --git a/sound/soc/codecs/wcd9320.c b/sound/soc/codecs/wcd9320.c
index 7ba43c0..43a1042 100644
--- a/sound/soc/codecs/wcd9320.c
+++ b/sound/soc/codecs/wcd9320.c
@@ -5841,28 +5841,39 @@
taiko_codec_reg_init_val[i].val);
}
-static int taiko_setup_irqs(struct taiko_priv *taiko)
+static void taiko_slim_interface_init_reg(struct snd_soc_codec *codec)
{
int i;
- int ret = 0;
- struct snd_soc_codec *codec = taiko->codec;
-
- ret = wcd9xxx_request_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS,
- taiko_slimbus_irq, "SLIMBUS Slave", taiko);
- if (ret) {
- pr_err("%s: Failed to request irq %d\n", __func__,
- WCD9XXX_IRQ_SLIMBUS);
- goto exit;
- }
for (i = 0; i < WCD9XXX_SLIM_NUM_PORT_REG; i++)
wcd9xxx_interface_reg_write(codec->control_data,
TAIKO_SLIM_PGD_PORT_INT_EN0 + i,
0xFF);
-exit:
+}
+
+static int taiko_setup_irqs(struct taiko_priv *taiko)
+{
+ int ret = 0;
+ struct snd_soc_codec *codec = taiko->codec;
+
+ ret = wcd9xxx_request_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS,
+ taiko_slimbus_irq, "SLIMBUS Slave", taiko);
+ if (ret)
+ pr_err("%s: Failed to request irq %d\n", __func__,
+ WCD9XXX_IRQ_SLIMBUS);
+ else
+ taiko_slim_interface_init_reg(codec);
+
return ret;
}
+static void taiko_cleanup_irqs(struct taiko_priv *taiko)
+{
+ struct snd_soc_codec *codec = taiko->codec;
+
+ wcd9xxx_free_irq(codec->control_data, WCD9XXX_IRQ_SLIMBUS, taiko);
+}
+
int taiko_hs_detect(struct snd_soc_codec *codec,
struct wcd9xxx_mbhc_config *mbhc_cfg)
{
@@ -5921,6 +5932,7 @@
pr_err("%s: bad pdata\n", __func__);
taiko_init_slim_slave_cfg(codec);
+ taiko_slim_interface_init_reg(codec);
wcd9xxx_mbhc_deinit(&taiko->mbhc);
ret = wcd9xxx_mbhc_init(&taiko->mbhc, &taiko->resmgr, codec,
@@ -6063,10 +6075,8 @@
tx_hpf_corner_freq_callback);
}
-
snd_soc_codec_set_drvdata(codec, taiko);
-
/* codec resmgr module init */
wcd9xxx = codec->control_data;
pdata = dev_get_platdata(codec->dev->parent);
@@ -6074,7 +6084,7 @@
&taiko_reg_address);
if (ret) {
pr_err("%s: wcd9xxx init failed %d\n", __func__, ret);
- return ret;
+ goto err_init;
}
taiko->clsh_d.buck_mv = taiko_codec_get_buck_mv(codec);
@@ -6085,7 +6095,7 @@
WCD9XXX_MBHC_VERSION_TAIKO);
if (ret) {
pr_err("%s: mbhc init failed %d\n", __func__, ret);
- return ret;
+ goto err_init;
}
taiko->codec = codec;
@@ -6179,7 +6189,11 @@
snd_soc_dapm_sync(dapm);
- (void) taiko_setup_irqs(taiko);
+ ret = taiko_setup_irqs(taiko);
+ if (ret) {
+ pr_err("%s: taiko irq setup failed %d\n", __func__, ret);
+ goto err_irq;
+ }
atomic_set(&kp_taiko_priv, (unsigned long)taiko);
mutex_lock(&dapm->codec->mutex);
@@ -6194,10 +6208,13 @@
codec->ignore_pmdown_time = 1;
return ret;
+err_irq:
+ taiko_cleanup_irqs(taiko);
err_pdata:
kfree(ptr);
err_nomem_slimch:
kfree(taiko);
+err_init:
return ret;
}
static int taiko_codec_remove(struct snd_soc_codec *codec)
@@ -6212,6 +6229,8 @@
WCD9XXX_BANDGAP_AUDIO_MODE);
WCD9XXX_BCL_UNLOCK(&taiko->resmgr);
+ taiko_cleanup_irqs(taiko);
+
/* cleanup MBHC */
wcd9xxx_mbhc_deinit(&taiko->mbhc);
/* cleanup resmgr */
diff --git a/sound/soc/codecs/wcd9xxx-mbhc.c b/sound/soc/codecs/wcd9xxx-mbhc.c
index 1d01d2e..daba6d5 100644
--- a/sound/soc/codecs/wcd9xxx-mbhc.c
+++ b/sound/soc/codecs/wcd9xxx-mbhc.c
@@ -3534,7 +3534,6 @@
{
void *cdata = mbhc->codec->control_data;
- wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_SLIMBUS, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_RELEASE, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_POTENTIAL, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_REMOVAL, mbhc);
@@ -3543,7 +3542,6 @@
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_JACK_SWITCH, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_HPH_PA_OCPL_FAULT, mbhc);
wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_HPH_PA_OCPR_FAULT, mbhc);
- wcd9xxx_free_irq(cdata, WCD9XXX_IRQ_MBHC_RELEASE, mbhc);
if (mbhc->mbhc_fw)
release_firmware(mbhc->mbhc_fw);
diff --git a/sound/soc/msm/msm8974.c b/sound/soc/msm/msm8974.c
index 778dc10..76d6b57 100644
--- a/sound/soc/msm/msm8974.c
+++ b/sound/soc/msm/msm8974.c
@@ -75,6 +75,11 @@
#define EXT_CLASS_D_DIS_DELAY 3000
#define EXT_CLASS_D_DELAY_DELTA 2000
+/* It takes about 13ms for Class-AB PAs to ramp-up */
+#define EXT_CLASS_AB_EN_DELAY 10000
+#define EXT_CLASS_AB_DIS_DELAY 1000
+#define EXT_CLASS_AB_DELAY_DELTA 1000
+
#define NUM_OF_AUXPCM_GPIOS 4
static inline int param_is_mask(int p)
@@ -184,6 +189,7 @@
static struct platform_device *spdev;
static struct regulator *ext_spk_amp_regulator;
static int ext_spk_amp_gpio = -1;
+static int ext_ult_spk_amp_gpio = -1;
static int msm8974_spk_control = 1;
static int msm8974_ext_spk_pamp;
static int msm_slim_0_rx_ch = 1;
@@ -231,9 +237,43 @@
}
}
+ ext_ult_spk_amp_gpio = of_get_named_gpio(spdev->dev.of_node,
+ "qcom,ext-ult-spk-amp-gpio", 0);
+
+ if (ext_ult_spk_amp_gpio >= 0) {
+ ret = gpio_request(ext_ult_spk_amp_gpio,
+ "ext_ult_spk_amp_gpio");
+ if (ret) {
+ pr_err("%s: gpio_request failed for ext-ult_spk-amp-gpio.\n",
+ __func__);
+ return -EINVAL;
+ }
+ gpio_direction_output(ext_ult_spk_amp_gpio, 0);
+ }
+
return 0;
}
+static void msm8974_liquid_ext_ult_spk_power_amp_enable(u32 on)
+{
+ if (on) {
+ regulator_enable(ext_spk_amp_regulator);
+ gpio_direction_output(ext_ult_spk_amp_gpio, 1);
+ /* time taken to enable the external class AB power amplifier */
+ usleep_range(EXT_CLASS_AB_EN_DELAY,
+ EXT_CLASS_AB_EN_DELAY + EXT_CLASS_AB_DELAY_DELTA);
+ } else {
+ gpio_direction_output(ext_ult_spk_amp_gpio, 0);
+ regulator_disable(ext_spk_amp_regulator);
+ /* time taken to disable the external class AB power amplifier */
+ usleep_range(EXT_CLASS_AB_DIS_DELAY,
+ EXT_CLASS_AB_DIS_DELAY + EXT_CLASS_AB_DELAY_DELTA);
+ }
+
+ pr_debug("%s: %s external ultrasound SPKR_DRV PAs.\n", __func__,
+ on ? "Enable" : "Disable");
+}
+
static void msm8974_liquid_ext_spk_power_amp_enable(u32 on)
{
if (on) {
@@ -287,7 +327,6 @@
}
-
static irqreturn_t msm8974_liquid_docking_irq_handler(int irq, void *dev)
{
struct msm8974_liquid_dock_dev *dock_dev = dev;
@@ -495,6 +534,33 @@
}
+static int msm_ext_spkramp_ultrasound_event(struct snd_soc_dapm_widget *w,
+ struct snd_kcontrol *k, int event)
+{
+
+ pr_debug("%s()\n", __func__);
+
+ if (!strncmp(w->name, "SPK_ultrasound amp", 19)) {
+ if (!gpio_is_valid(ext_ult_spk_amp_gpio)) {
+ pr_err("%s: ext_ult_spk_amp_gpio isn't configured\n",
+ __func__);
+ return -EINVAL;
+ }
+
+ if (SND_SOC_DAPM_EVENT_ON(event))
+ msm8974_liquid_ext_ult_spk_power_amp_enable(1);
+ else
+ msm8974_liquid_ext_ult_spk_power_amp_enable(0);
+
+ } else {
+ pr_err("%s() Invalid Speaker Widget = %s\n",
+ __func__, w->name);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int msm_snd_enable_codec_ext_clk(struct snd_soc_codec *codec, int enable,
bool dapm)
{
@@ -567,6 +633,8 @@
SND_SOC_DAPM_SPK("Lineout_2 amp", msm_ext_spkramp_event),
SND_SOC_DAPM_SPK("Lineout_4 amp", msm_ext_spkramp_event),
+ SND_SOC_DAPM_SPK("SPK_ultrasound amp",
+ msm_ext_spkramp_ultrasound_event),
SND_SOC_DAPM_MIC("Handset Mic", NULL),
SND_SOC_DAPM_MIC("Headset Mic", NULL),
@@ -2271,6 +2339,7 @@
struct snd_soc_card *card = &snd_soc_card_msm8974;
struct msm8974_asoc_mach_data *pdata;
int ret;
+ const char *auxpcm_pri_gpio_set = NULL;
if (!pdev->dev.of_node) {
dev_err(&pdev->dev, "No platform supplied from device tree\n");
@@ -2396,7 +2465,24 @@
goto err;
}
- lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_PRI_MODE_MUXSEL, 4);
+ ret = of_property_read_string(pdev->dev.of_node,
+ "qcom,prim-auxpcm-gpio-set", &auxpcm_pri_gpio_set);
+ if (ret) {
+ dev_err(&pdev->dev, "Looking up %s property in node %s failed",
+ "qcom,prim-auxpcm-gpio-set",
+ pdev->dev.of_node->full_name);
+ goto err;
+ }
+ if (!strcmp(auxpcm_pri_gpio_set, "prim-gpio-prim")) {
+ lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_PRI_MODE_MUXSEL, 4);
+ } else if (!strcmp(auxpcm_pri_gpio_set, "prim-gpio-tert")) {
+ lpaif_pri_muxsel_virt_addr = ioremap(LPAIF_TER_MODE_MUXSEL, 4);
+ } else {
+ dev_err(&pdev->dev, "Invalid value %s for AUXPCM GPIO set\n",
+ auxpcm_pri_gpio_set);
+ ret = -EINVAL;
+ goto err;
+ }
if (lpaif_pri_muxsel_virt_addr == NULL) {
pr_err("%s Pri muxsel virt addr is null\n", __func__);
ret = -EINVAL;
@@ -2434,9 +2520,12 @@
if (ext_spk_amp_regulator)
regulator_put(ext_spk_amp_regulator);
+ if (gpio_is_valid(ext_ult_spk_amp_gpio))
+ gpio_free(ext_ult_spk_amp_gpio);
+
gpio_free(pdata->mclk_gpio);
gpio_free(pdata->us_euro_gpio);
- if (ext_spk_amp_gpio >= 0)
+ if (gpio_is_valid(ext_spk_amp_gpio))
gpio_free(ext_spk_amp_gpio);
if (msm8974_liquid_dock_dev != NULL) {
diff --git a/sound/soc/msm/qdsp6v2/audio_ocmem.c b/sound/soc/msm/qdsp6v2/audio_ocmem.c
index ad96ae3..08d7277 100644
--- a/sound/soc/msm/qdsp6v2/audio_ocmem.c
+++ b/sound/soc/msm/qdsp6v2/audio_ocmem.c
@@ -125,7 +125,7 @@
atomic_t audio_cond;
atomic_t audio_exit;
spinlock_t audio_lock;
- struct mutex protect_lock;
+ struct mutex state_process_lock;
struct workqueue_struct *audio_ocmem_workqueue;
struct workqueue_struct *voice_ocmem_workqueue;
bool ocmem_en;
@@ -252,6 +252,7 @@
if (ret != 0) {
pr_err("%s: get low power segments from DSP failed, rc=%d\n",
__func__, ret);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
goto fail_cmd;
}
}
@@ -273,7 +274,9 @@
buf = ocmem_allocate_nb(cid, AUDIO_OCMEM_BUF_SIZE);
if (IS_ERR_OR_NULL(buf)) {
pr_err("%s: failed: %d\n", __func__, cid);
- return -ENOMEM;
+ ret = -ENOMEM;
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
+ goto fail_cmd;
}
set_bit_pos(audio_ocmem_lcl.audio_state, OCMEM_STATE_ALLOC);
@@ -285,7 +288,7 @@
if (!buf->len) {
pr_debug("%s: buf.len is 0, waiting for ocmem region\n",
__func__);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
wait_event_interruptible(audio_ocmem_lcl.audio_wait,
(atomic_read(&audio_ocmem_lcl.audio_cond) == 0) ||
(atomic_read(&audio_ocmem_lcl.audio_exit) == 1));
@@ -302,7 +305,7 @@
goto fail_cmd;
}
clear_bit_pos(audio_ocmem_lcl.audio_state, OCMEM_STATE_GROW);
- mutex_trylock(&audio_ocmem_lcl.protect_lock);
+ mutex_trylock(&audio_ocmem_lcl.state_process_lock);
}
pr_debug("%s: buf->len: %ld\n", __func__, (audio_ocmem_lcl.buf)->len);
@@ -333,7 +336,7 @@
_MAP_RESPONSE_BIT_MASK_) != 0);
atomic_set(&audio_ocmem_lcl.audio_cond, 1);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
pr_debug("%s: audio_cond[%d] audio_state[0x%x]\n", __func__,
atomic_read(&audio_ocmem_lcl.audio_cond),
atomic_read(&audio_ocmem_lcl.audio_state));
@@ -344,7 +347,7 @@
wait_event_interruptible(audio_ocmem_lcl.audio_wait,
(atomic_read(&audio_ocmem_lcl.audio_state) &
_BIT_MASK_) != 0);
-
+ mutex_lock(&audio_ocmem_lcl.state_process_lock);
state_bit = get_state_to_process(&audio_ocmem_lcl.audio_state);
switch (state_bit) {
case OCMEM_STATE_MAP_COMPL:
@@ -502,6 +505,7 @@
atomic_read(&audio_ocmem_lcl.audio_state));
break;
}
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
}
ret = 0;
fail_cmd:
@@ -531,7 +535,7 @@
wake_up(&audio_ocmem_lcl.audio_wait);
- mutex_unlock(&audio_ocmem_lcl.protect_lock);
+ mutex_unlock(&audio_ocmem_lcl.state_process_lock);
pr_debug("%s: exit\n", __func__);
return 0;
}
@@ -654,7 +658,7 @@
container_of(work, struct audio_ocmem_workdata, work);
en = audio_ocm_work->en;
- mutex_lock(&audio_ocmem_lcl.protect_lock);
+ mutex_lock(&audio_ocmem_lcl.state_process_lock);
/* if previous work waiting for ocmem - signal it to exit */
atomic_set(&audio_ocmem_lcl.audio_exit, 1);
pr_debug("%s: acquired mutex for %d\n", __func__, en);
@@ -889,7 +893,7 @@
atomic_set(&audio_ocmem_lcl.audio_state, OCMEM_STATE_DEFAULT);
atomic_set(&audio_ocmem_lcl.audio_exit, 0);
spin_lock_init(&audio_ocmem_lcl.audio_lock);
- mutex_init(&audio_ocmem_lcl.protect_lock);
+ mutex_init(&audio_ocmem_lcl.state_process_lock);
audio_ocmem_lcl.ocmem_en = true;
audio_ocmem_lcl.audio_ocmem_running = false;
diff --git a/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c b/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
index 8bb3eaf..7d9cc16 100644
--- a/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-dai-q6-v2.c
@@ -190,8 +190,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX
|| dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX
|| dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
@@ -295,8 +295,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX ||
dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX ||
dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
@@ -420,8 +420,8 @@
if (dai->id == AFE_PORT_ID_PRIMARY_PCM_RX ||
dai->id == AFE_PORT_ID_PRIMARY_PCM_TX) {
- rx_port = PCM_RX;
- tx_port = PCM_TX;
+ rx_port = AFE_PORT_ID_PRIMARY_PCM_RX;
+ tx_port = AFE_PORT_ID_PRIMARY_PCM_TX;
} else if (dai->id == AFE_PORT_ID_SECONDARY_PCM_RX ||
dai->id == AFE_PORT_ID_SECONDARY_PCM_TX) {
rx_port = AFE_PORT_ID_SECONDARY_PCM_RX;
diff --git a/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c b/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
index b43c3bd..e6934f6 100644
--- a/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
+++ b/sound/soc/msm/qdsp6v2/msm-dolby-dap-config.c
@@ -305,6 +305,7 @@
}
if (idx >= NUM_DOLBY_ENDP_DEVICE) {
pr_err("%s: device is not set accordingly\n", __func__);
+ kfree(params_value);
return -EINVAL;
}
for (i = 0; i < DOLBY_ENDDEP_PARAM_LENGTH; i++) {
@@ -367,6 +368,7 @@
params_length);
if (rc) {
pr_err("%s: send dolby params failed\n", __func__);
+ kfree(params_value);
return -EINVAL;
}
for (i = 0; i < MAX_DOLBY_PARAMS; i++) {
@@ -435,7 +437,8 @@
if (device == DEVICE_OUT_ALL) {
port_id = PRIMARY_I2S_RX | SLIMBUS_0_RX | HDMI_RX |
INT_BT_SCO_RX | INT_FM_RX |
- RT_PROXY_PORT_001_RX | PCM_RX |
+ RT_PROXY_PORT_001_RX |
+ AFE_PORT_ID_PRIMARY_PCM_RX |
MI2S_RX | SECONDARY_I2S_RX |
SLIMBUS_1_RX | SLIMBUS_4_RX | SLIMBUS_3_RX |
AFE_PORT_ID_SECONDARY_MI2S_RX;
@@ -584,7 +587,7 @@
if (rc) {
pr_err("%s: get parameters failed\n", __func__);
kfree(params_value);
- rc = -EINVAL;
+ return -EINVAL;
}
update_params_value = (int *)params_value;
ucontrol->value.integer.value[0] = dolby_dap_params_get.device_id;
diff --git a/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c b/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
index a163f6a..27b3f56 100644
--- a/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-pcm-lpa-v2.c
@@ -33,6 +33,7 @@
#include <sound/compress_offload.h>
#include <sound/compress_driver.h>
#include <sound/timer.h>
+#include <sound/pcm_params.h>
#include "msm-pcm-q6-v2.h"
#include "msm-pcm-routing-v2.h"
@@ -55,9 +56,10 @@
SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME),
.formats = SNDRV_PCM_FMTBIT_S16_LE |
SNDRV_PCM_FMTBIT_S24_LE,
- .rates = SNDRV_PCM_RATE_8000_48000 | SNDRV_PCM_RATE_KNOT,
+ .rates = SNDRV_PCM_RATE_8000_192000 |
+ SNDRV_PCM_RATE_KNOT,
.rate_min = 8000,
- .rate_max = 48000,
+ .rate_max = 192000,
.channels_min = 1,
.channels_max = 2,
.buffer_bytes_max = 1024 * 1024,
@@ -70,7 +72,8 @@
/* Conventional and unconventional sample rate supported */
static unsigned int supported_sample_rates[] = {
- 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000
+ 8000, 11025, 12000, 16000, 22050, 24000, 32000, 44100, 48000,
+ 96000, 192000
};
static struct snd_pcm_hw_constraint_list constraints_sample_rates = {
@@ -277,19 +280,7 @@
static int msm_pcm_open(struct snd_pcm_substream *substream)
{
struct snd_pcm_runtime *runtime = substream->runtime;
- struct snd_soc_pcm_runtime *soc_prtd = substream->private_data;
struct msm_audio *prtd;
- struct asm_softpause_params softpause = {
- .enable = SOFT_PAUSE_ENABLE,
- .period = SOFT_PAUSE_PERIOD,
- .step = SOFT_PAUSE_STEP,
- .rampingcurve = SOFT_PAUSE_CURVE_LINEAR,
- };
- struct asm_softvolume_params softvol = {
- .period = SOFT_VOLUME_PERIOD,
- .step = SOFT_VOLUME_STEP,
- .rampingcurve = SOFT_VOLUME_CURVE_LINEAR,
- };
int ret = 0;
pr_debug("%s\n", __func__);
@@ -307,31 +298,9 @@
kfree(prtd);
return -ENOMEM;
}
- prtd->audio_client->perf_mode = false;
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- ret = q6asm_open_write(prtd->audio_client, FORMAT_LINEAR_PCM);
- if (ret < 0) {
- pr_err("%s: pcm out open failed\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
- kfree(prtd);
- return -ENOMEM;
- }
- ret = q6asm_set_io_mode(prtd->audio_client, ASYNC_IO_MODE);
- if (ret < 0) {
- pr_err("%s: Set IO mode failed\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
- kfree(prtd);
- return -ENOMEM;
- }
- }
/* Capture path */
if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
return -EPERM;
- pr_debug("%s: session ID %d\n", __func__, prtd->audio_client->session);
- prtd->session_id = prtd->audio_client->session;
- msm_pcm_routing_reg_phy_stream(soc_prtd->dai_link->be_id,
- prtd->audio_client->perf_mode,
- prtd->session_id, substream->stream);
ret = snd_pcm_hw_constraint_list(runtime, 0,
SNDRV_PCM_HW_PARAM_RATE,
@@ -350,15 +319,6 @@
atomic_set(&lpa_audio.audio_ocmem_req, 0);
runtime->private_data = prtd;
lpa_audio.prtd = prtd;
- lpa_set_volume(0);
- ret = q6asm_set_softpause(lpa_audio.prtd->audio_client, &softpause);
- if (ret < 0)
- pr_err("%s: Send SoftPause Param failed ret=%d\n",
- __func__, ret);
- ret = q6asm_set_softvolume(lpa_audio.prtd->audio_client, &softvol);
- if (ret < 0)
- pr_err("%s: Send SoftVolume Param failed ret=%d\n",
- __func__, ret);
return 0;
}
@@ -407,22 +367,24 @@
prtd->pcm_irq_pos = 0;
}
- dir = IN;
- atomic_set(&prtd->pending_buffer, 0);
+ if (prtd->audio_client) {
+ dir = IN;
+ atomic_set(&prtd->pending_buffer, 0);
- if (atomic_cmpxchg(&lpa_audio.audio_ocmem_req, 1, 0))
- audio_ocmem_process_req(AUDIO, false);
- lpa_audio.prtd = NULL;
- q6asm_cmd(prtd->audio_client, CMD_CLOSE);
- q6asm_audio_client_buf_free_contiguous(dir,
+ if (atomic_cmpxchg(&lpa_audio.audio_ocmem_req, 1, 0))
+ audio_ocmem_process_req(AUDIO, false);
+ lpa_audio.prtd = NULL;
+ q6asm_cmd(prtd->audio_client, CMD_CLOSE);
+ q6asm_audio_client_buf_free_contiguous(dir,
prtd->audio_client);
- atomic_set(&prtd->stop, 1);
- pr_debug("%s\n", __func__);
+ atomic_set(&prtd->stop, 1);
+ q6asm_audio_client_free(prtd->audio_client);
+ pr_debug("%s\n", __func__);
+ }
msm_pcm_routing_dereg_phy_stream(soc_prtd->dai_link->be_id,
SNDRV_PCM_STREAM_PLAYBACK);
pr_debug("%s\n", __func__);
- q6asm_audio_client_free(prtd->audio_client);
kfree(prtd);
return 0;
@@ -484,9 +446,59 @@
{
struct snd_pcm_runtime *runtime = substream->runtime;
struct msm_audio *prtd = runtime->private_data;
+ struct snd_soc_pcm_runtime *soc_prtd = substream->private_data;
struct snd_dma_buffer *dma_buf = &substream->dma_buffer;
struct audio_buffer *buf;
+ uint16_t bits_per_sample = 16;
int dir, ret;
+ struct asm_softpause_params softpause = {
+ .enable = SOFT_PAUSE_ENABLE,
+ .period = SOFT_PAUSE_PERIOD,
+ .step = SOFT_PAUSE_STEP,
+ .rampingcurve = SOFT_PAUSE_CURVE_LINEAR,
+ };
+ struct asm_softvolume_params softvol = {
+ .period = SOFT_VOLUME_PERIOD,
+ .step = SOFT_VOLUME_STEP,
+ .rampingcurve = SOFT_VOLUME_CURVE_LINEAR,
+ };
+
+ prtd->audio_client->perf_mode = false;
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ if (params_format(params) == SNDRV_PCM_FORMAT_S24_LE)
+ bits_per_sample = 24;
+ ret = q6asm_open_write_v2(prtd->audio_client,
+ FORMAT_LINEAR_PCM, bits_per_sample);
+ if (ret < 0) {
+ pr_err("%s: pcm out open failed\n", __func__);
+ q6asm_audio_client_free(prtd->audio_client);
+ prtd->audio_client = NULL;
+ return -ENOMEM;
+ }
+ ret = q6asm_set_io_mode(prtd->audio_client, ASYNC_IO_MODE);
+ if (ret < 0) {
+ pr_err("%s: Set IO mode failed\n", __func__);
+ q6asm_audio_client_free(prtd->audio_client);
+ prtd->audio_client = NULL;
+ return -ENOMEM;
+ }
+ }
+
+ pr_debug("%s: session ID %d\n", __func__, prtd->audio_client->session);
+ prtd->session_id = prtd->audio_client->session;
+ msm_pcm_routing_reg_phy_stream(soc_prtd->dai_link->be_id,
+ prtd->audio_client->perf_mode,
+ prtd->session_id, substream->stream);
+
+ lpa_set_volume(0);
+ ret = q6asm_set_softpause(lpa_audio.prtd->audio_client, &softpause);
+ if (ret < 0)
+ pr_err("%s: Send SoftPause Param failed ret=%d\n",
+ __func__, ret);
+ ret = q6asm_set_softvolume(lpa_audio.prtd->audio_client, &softvol);
+ if (ret < 0)
+ pr_err("%s: Send SoftVolume Param failed ret=%d\n",
+ __func__, ret);
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
dir = IN;
diff --git a/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c b/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
index 8257023..e7c4a8a 100644
--- a/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
+++ b/sound/soc/msm/qdsp6v2/msm-pcm-routing-v2.c
@@ -30,11 +30,11 @@
#include <sound/tlv.h>
#include <sound/asound.h>
#include <sound/pcm_params.h>
-#include <mach/qdsp6v2/q6core.h>
#include "msm-pcm-routing-v2.h"
#include "msm-dolby-dap-config.h"
#include "q6voice.h"
+#include "q6core.h"
struct msm_pcm_routing_bdai_data {
u16 port_id; /* AFE port ID */
@@ -213,8 +213,8 @@
{ INT_FM_TX, 0, 0, 0, 0, 0},
{ RT_PROXY_PORT_001_RX, 0, 0, 0, 0, 0},
{ RT_PROXY_PORT_001_TX, 0, 0, 0, 0, 0},
- { PCM_RX, 0, 0, 0, 0, 0},
- { PCM_TX, 0, 0, 0, 0, 0},
+ { AFE_PORT_ID_PRIMARY_PCM_RX, 0, 0, 0, 0, 0},
+ { AFE_PORT_ID_PRIMARY_PCM_TX, 0, 0, 0, 0, 0},
{ VOICE_PLAYBACK_TX, 0, 0, 0, 0, 0},
{ VOICE_RECORD_RX, 0, 0, 0, 0, 0},
{ VOICE_RECORD_TX, 0, 0, 0, 0, 0},
@@ -1618,6 +1618,12 @@
msm_routing_put_audio_mixer),
};
+static const struct snd_kcontrol_new mmul4_mixer_controls[] = {
+ SOC_SINGLE_EXT("SLIM_0_TX", MSM_BACKEND_DAI_SLIMBUS_0_TX,
+ MSM_FRONTEND_DAI_MULTIMEDIA4, 1, 0, msm_routing_get_audio_mixer,
+ msm_routing_put_audio_mixer),
+};
+
static const struct snd_kcontrol_new mmul5_mixer_controls[] = {
SOC_SINGLE_EXT("SLIM_0_TX", MSM_BACKEND_DAI_SLIMBUS_0_TX,
MSM_FRONTEND_DAI_MULTIMEDIA5, 1, 0, msm_routing_get_audio_mixer,
@@ -2420,15 +2426,21 @@
__func__, e->shift_l , e->values[item]);
if (e->shift_l < MSM_BACKEND_DAI_MAX &&
e->values[item] < MSM_BACKEND_DAI_MAX)
+ /* Enable feedback TX path */
ret = afe_spk_prot_feed_back_cfg(
msm_bedais[e->values[item]].port_id,
- msm_bedais[e->shift_l].port_id, 1, 0);
+ msm_bedais[e->shift_l].port_id, 1, 0, 1);
else {
- pr_err("%s values are out of range\n", __func__);
- ret = -EINVAL;
+ pr_debug("%s values are out of range item %d\n",
+ __func__, e->values[item]);
+ /* Disable feedback TX path */
+ if (e->values[item] == MSM_BACKEND_DAI_MAX)
+ ret = afe_spk_prot_feed_back_cfg(0, 0, 0, 0, 0);
+ else
+ ret = -EINVAL;
}
} else {
- pr_err("%s item value is out of range\n", __func__);
+ pr_err("%s item value is out of range item\n", __func__);
ret = -EINVAL;
}
mutex_unlock(&routing_lock);
@@ -2443,11 +2455,11 @@
}
static const char * const slim0_rx_vi_fb_tx_lch_mux_text[] = {
- "SLIM4_TX",
+ "ZERO", "SLIM4_TX"
};
static const int const slim0_rx_vi_fb_tx_lch_value[] = {
- MSM_BACKEND_DAI_SLIMBUS_4_TX,
+ MSM_BACKEND_DAI_MAX, MSM_BACKEND_DAI_SLIMBUS_4_TX
};
static const struct soc_enum slim0_rx_vi_fb_lch_mux_enum =
SOC_VALUE_ENUM_DOUBLE(0, MSM_BACKEND_DAI_SLIMBUS_0_RX, 0, 0,
@@ -2472,6 +2484,7 @@
SND_SOC_DAPM_AIF_IN("VOIP_DL", "VoIP Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, 0, 0, 0),
+ SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_IN("CS-VOICE_DL1", "CS-VOICE Playback", 0, 0, 0, 0),
SND_SOC_DAPM_AIF_OUT("CS-VOICE_UL1", "CS-VOICE Capture", 0, 0, 0, 0),
@@ -2621,6 +2634,8 @@
mmul1_mixer_controls, ARRAY_SIZE(mmul1_mixer_controls)),
SND_SOC_DAPM_MIXER("MultiMedia2 Mixer", SND_SOC_NOPM, 0, 0,
mmul2_mixer_controls, ARRAY_SIZE(mmul2_mixer_controls)),
+ SND_SOC_DAPM_MIXER("MultiMedia4 Mixer", SND_SOC_NOPM, 0, 0,
+ mmul4_mixer_controls, ARRAY_SIZE(mmul4_mixer_controls)),
SND_SOC_DAPM_MIXER("MultiMedia5 Mixer", SND_SOC_NOPM, 0, 0,
mmul5_mixer_controls, ARRAY_SIZE(mmul5_mixer_controls)),
SND_SOC_DAPM_MIXER("AUX_PCM_RX Audio Mixer", SND_SOC_NOPM, 0, 0,
@@ -2777,6 +2792,7 @@
{"MultiMedia1 Mixer", "VOC_REC_UL", "INCALL_RECORD_TX"},
{"MultiMedia1 Mixer", "VOC_REC_DL", "INCALL_RECORD_RX"},
{"MultiMedia1 Mixer", "SLIM_4_TX", "SLIMBUS_4_TX"},
+ {"MultiMedia4 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MultiMedia5 Mixer", "SLIM_0_TX", "SLIMBUS_0_TX"},
{"MI2S_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
{"MI2S_RX Audio Mixer", "MultiMedia2", "MM_DL2"},
@@ -2847,6 +2863,7 @@
{"MM_UL1", NULL, "MultiMedia1 Mixer"},
{"MultiMedia2 Mixer", "INTERNAL_FM_TX", "INT_FM_TX"},
{"MM_UL2", NULL, "MultiMedia2 Mixer"},
+ {"MM_UL4", NULL, "MultiMedia4 Mixer"},
{"MM_UL5", NULL, "MultiMedia5 Mixer"},
{"AUX_PCM_RX Audio Mixer", "MultiMedia1", "MM_DL1"},
@@ -3070,6 +3087,13 @@
{"BE_OUT", NULL, "SLIMBUS_3_RX"},
{"BE_OUT", NULL, "AUX_PCM_RX"},
{"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
+ {"BE_OUT", NULL, "INT_BT_SCO_RX"},
+ {"BE_OUT", NULL, "INT_FM_RX"},
+ {"BE_OUT", NULL, "PCM_RX"},
+ {"BE_OUT", NULL, "SLIMBUS_3_RX"},
+ {"BE_OUT", NULL, "AUX_PCM_RX"},
+ {"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
+ {"BE_OUT", NULL, "VOICE_PLAYBACK_TX"},
{"PRI_I2S_TX", NULL, "BE_IN"},
{"MI2S_TX", NULL, "BE_IN"},
@@ -3080,20 +3104,13 @@
{"SLIMBUS_3_TX", NULL, "BE_IN" },
{"SLIMBUS_4_TX", NULL, "BE_IN" },
{"SLIMBUS_5_TX", NULL, "BE_IN" },
- {"BE_OUT", NULL, "INT_BT_SCO_RX"},
{"INT_BT_SCO_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "INT_FM_RX"},
{"INT_FM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "PCM_RX"},
{"PCM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "SLIMBUS_3_RX"},
- {"BE_OUT", NULL, "AUX_PCM_RX"},
{"AUX_PCM_TX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "SEC_AUX_PCM_RX"},
{"SEC_AUX_PCM_TX", NULL, "BE_IN"},
{"INCALL_RECORD_TX", NULL, "BE_IN"},
{"INCALL_RECORD_RX", NULL, "BE_IN"},
- {"BE_OUT", NULL, "VOICE_PLAYBACK_TX"},
{"SLIM0_RX_VI_FB_LCH_MUX", "SLIM4_TX", "SLIMBUS_4_TX"},
{"SLIMBUS_0_RX", NULL, "SLIM0_RX_VI_FB_LCH_MUX"},
};
diff --git a/sound/soc/msm/qdsp6v2/q6afe.c b/sound/soc/msm/qdsp6v2/q6afe.c
index 63af271..3598977 100644
--- a/sound/soc/msm/qdsp6v2/q6afe.c
+++ b/sound/soc/msm/qdsp6v2/q6afe.c
@@ -80,6 +80,8 @@
static int32_t afe_callback(struct apr_client_data *data, void *priv)
{
+ int i;
+
if (!data) {
pr_err("%s: Invalid param data\n", __func__);
return -EINVAL;
@@ -87,6 +89,12 @@
if (data->opcode == RESET_EVENTS) {
pr_debug("q6afe: reset event = %d %d apr[%p]\n",
data->reset_event, data->reset_proc, this_afe.apr);
+
+ for (i = 0; i < MAX_AUDPROC_TYPES; i++) {
+ this_afe.afe_cal_addr[i].cal_paddr = 0;
+ this_afe.afe_cal_addr[i].cal_size = 0;
+ }
+
if (this_afe.apr) {
apr_reset(this_afe.apr);
atomic_set(&this_afe.state, 0);
@@ -209,7 +217,7 @@
switch (port_id) {
case PRIMARY_I2S_RX:
- case PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
case SECONDARY_I2S_RX:
case MI2S_RX:
case HDMI_RX:
@@ -233,7 +241,7 @@
break;
case PRIMARY_I2S_TX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case SECONDARY_I2S_TX:
case MI2S_TX:
case DIGI_MIC_TX:
@@ -299,8 +307,8 @@
case RT_PROXY_PORT_001_TX:
ret_size = SIZEOF_CFG_CMD(afe_param_id_rt_proxy_port_cfg);
break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
default:
@@ -1145,8 +1153,8 @@
case PRIMARY_I2S_TX:
cfg_type = AFE_PARAM_ID_PCM_CONFIG;
break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
cfg_type = AFE_PARAM_ID_PCM_CONFIG;
@@ -1242,8 +1250,10 @@
switch (port_id) {
case PRIMARY_I2S_RX: return IDX_PRIMARY_I2S_RX;
case PRIMARY_I2S_TX: return IDX_PRIMARY_I2S_TX;
- case PCM_RX: return IDX_PCM_RX;
- case PCM_TX: return IDX_PCM_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return IDX_AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -1341,8 +1351,8 @@
case PRIMARY_I2S_TX:
cfg_type = AFE_PARAM_ID_I2S_CONFIG;
break;
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
cfg_type = AFE_PARAM_ID_PCM_CONFIG;
@@ -2530,8 +2540,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
@@ -2957,12 +2967,18 @@
}
int afe_spk_prot_feed_back_cfg(int src_port, int dst_port,
- int l_ch, int r_ch)
+ int l_ch, int r_ch, u32 enable)
{
int ret = -EINVAL;
union afe_spkr_prot_config prot_config;
int index = 0;
+ if (!enable) {
+ pr_debug("%s Disable Feedback tx path", __func__);
+ this_afe.vi_tx_port = -1;
+ return 0;
+ }
+
if ((q6audio_validate_port(src_port) < 0) ||
(q6audio_validate_port(dst_port) < 0)) {
pr_err("%s invalid ports src %d dst %d",
@@ -3003,6 +3019,7 @@
this_afe.apr = NULL;
this_afe.dtmf_gen_rx_portid = -1;
this_afe.mmap_handle = 0;
+ this_afe.vi_tx_port = -1;
for (i = 0; i < AFE_MAX_PORTS; i++)
init_waitqueue_head(&this_afe.wait[i]);
diff --git a/sound/soc/msm/qdsp6v2/q6audio-v2.c b/sound/soc/msm/qdsp6v2/q6audio-v2.c
index faf5f35..3cb7a1f 100644
--- a/sound/soc/msm/qdsp6v2/q6audio-v2.c
+++ b/sound/soc/msm/qdsp6v2/q6audio-v2.c
@@ -24,8 +24,10 @@
switch (port_id) {
case PRIMARY_I2S_RX: return IDX_PRIMARY_I2S_RX;
case PRIMARY_I2S_TX: return IDX_PRIMARY_I2S_TX;
- case PCM_RX: return IDX_PCM_RX;
- case PCM_TX: return IDX_PCM_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return IDX_AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return IDX_AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -76,8 +78,10 @@
switch (port_id) {
case PRIMARY_I2S_RX: return AFE_PORT_ID_PRIMARY_MI2S_RX;
case PRIMARY_I2S_TX: return AFE_PORT_ID_PRIMARY_MI2S_TX;
- case PCM_RX: return AFE_PORT_ID_PRIMARY_PCM_RX;
- case PCM_TX: return AFE_PORT_ID_PRIMARY_PCM_TX;
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ return AFE_PORT_ID_PRIMARY_PCM_RX;
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
+ return AFE_PORT_ID_PRIMARY_PCM_TX;
case AFE_PORT_ID_SECONDARY_PCM_RX:
return AFE_PORT_ID_SECONDARY_PCM_RX;
case AFE_PORT_ID_SECONDARY_PCM_TX:
@@ -152,8 +156,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
@@ -179,8 +183,8 @@
switch (port_id) {
case PRIMARY_I2S_RX:
case PRIMARY_I2S_TX:
- case PCM_RX:
- case PCM_TX:
+ case AFE_PORT_ID_PRIMARY_PCM_RX:
+ case AFE_PORT_ID_PRIMARY_PCM_TX:
case AFE_PORT_ID_SECONDARY_PCM_RX:
case AFE_PORT_ID_SECONDARY_PCM_TX:
case SECONDARY_I2S_RX:
diff --git a/sound/soc/msm/qdsp6v2/q6core.c b/sound/soc/msm/qdsp6v2/q6core.c
index 557b326..42cbcd1 100644
--- a/sound/soc/msm/qdsp6v2/q6core.c
+++ b/sound/soc/msm/qdsp6v2/q6core.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -32,7 +32,7 @@
struct avcs_cmd_rsp_get_low_power_segments_info_t *lp_ocm_payload;
};
-struct q6core_str q6core_lcl;
+static struct q6core_str q6core_lcl;
static int32_t aprv2_core_fn_q(struct apr_client_data *data, void *priv)
{
@@ -40,7 +40,7 @@
uint32_t nseg;
int i, j;
- pr_info("core msg: payload len = %u, apr resp opcode = 0x%X\n",
+ pr_debug("core msg: payload len = %u, apr resp opcode = 0x%X\n",
data->payload_size, data->opcode);
switch (data->opcode) {
@@ -121,6 +121,33 @@
pr_err("%s: Unable to register CORE\n", __func__);
}
+uint32_t core_set_dolby_manufacturer_id(int manufacturer_id)
+{
+ struct adsp_dolby_manufacturer_id payload;
+ int rc = 0;
+ pr_debug("%s manufacturer_id :%d\n", __func__, manufacturer_id);
+ ocm_core_open();
+ if (q6core_lcl.core_handle_q) {
+ payload.hdr.hdr_field = APR_HDR_FIELD(APR_MSG_TYPE_EVENT,
+ APR_HDR_LEN(APR_HDR_SIZE), APR_PKT_VER);
+ payload.hdr.pkt_size =
+ sizeof(struct adsp_dolby_manufacturer_id);
+ payload.hdr.src_port = 0;
+ payload.hdr.dest_port = 0;
+ payload.hdr.token = 0;
+ payload.hdr.opcode = ADSP_CMD_SET_DOLBY_MANUFACTURER_ID;
+ payload.manufacturer_id = manufacturer_id;
+ pr_debug("Send Dolby security opcode=%x manufacturer ID = %d\n",
+ payload.hdr.opcode, payload.manufacturer_id);
+ rc = apr_send_pkt(q6core_lcl.core_handle_q,
+ (uint32_t *)&payload);
+ if (rc < 0)
+ pr_err("%s: SET_DOLBY_MANUFACTURER_ID failed op[0x%x]rc[%d]\n",
+ __func__, payload.hdr.opcode, rc);
+ }
+ return rc;
+}
+
int core_get_low_power_segments(
struct avcs_cmd_rsp_get_low_power_segments_info_t **lp_memseg)
{
diff --git a/sound/soc/msm/qdsp6v2/q6core.h b/sound/soc/msm/qdsp6v2/q6core.h
index ff611d5..39bf4ab 100644
--- a/sound/soc/msm/qdsp6v2/q6core.h
+++ b/sound/soc/msm/qdsp6v2/q6core.h
@@ -1,4 +1,4 @@
-/* Copyright (c) 2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2012-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -90,4 +90,13 @@
int core_get_low_power_segments(
struct avcs_cmd_rsp_get_low_power_segments_info_t **);
+#define ADSP_CMD_SET_DOLBY_MANUFACTURER_ID 0x00012918
+
+struct adsp_dolby_manufacturer_id {
+ struct apr_hdr hdr;
+ int manufacturer_id;
+};
+
+uint32_t core_set_dolby_manufacturer_id(int manufacturer_id);
+
#endif /* __Q6CORE_H__ */
diff --git a/sound/soc/msm/qdsp6v2/q6voice.c b/sound/soc/msm/qdsp6v2/q6voice.c
index 80bc4f9..e9d0a7e 100644
--- a/sound/soc/msm/qdsp6v2/q6voice.c
+++ b/sound/soc/msm/qdsp6v2/q6voice.c
@@ -2963,6 +2963,7 @@
cvp_mute_cmd.hdr.opcode = VSS_IVOLUME_CMD_MUTE_V2;
cvp_mute_cmd.cvp_set_mute.direction = VSS_IVOLUME_DIRECTION_RX;
cvp_mute_cmd.cvp_set_mute.mute_flag = v->dev_rx.mute;
+ cvp_mute_cmd.cvp_set_mute.ramp_duration_ms = DEFAULT_MUTE_RAMP_DURATION;
v->cvp_state = CMD_STATUS_FAIL;
ret = apr_send_pkt(common.apr_q6_cvp, (uint32_t *) &cvp_mute_cmd);
diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
index 40595bf..519f325 100644
--- a/sound/soc/soc-pcm.c
+++ b/sound/soc/soc-pcm.c
@@ -1767,7 +1767,12 @@
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_HW_FREE) &&
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
- (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
+ (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
+ !((be->dpcm[stream].state == SND_SOC_DPCM_STATE_START) &&
+ ((fe->dpcm[stream].state != SND_SOC_DPCM_STATE_START) &&
+ (fe->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED) &&
+ (fe->dpcm[stream].state !=
+ SND_SOC_DPCM_STATE_SUSPEND))))
continue;
dev_dbg(be->dev, "dpcm: hw_free BE %s\n",