Merge "defconfig: 8226: enable coresight hardware event driver"
diff --git a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
index 4cbff52..a7a3f0c 100644
--- a/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
+++ b/Documentation/devicetree/bindings/pil/pil-q6v5-lpass.txt
@@ -15,6 +15,7 @@
- vdd_cx-supply: Reference to the regulator that supplies the vdd_cx domain.
- qcom,firmware-name: Base name of the firmware image. Ex. "lpass"
- qcom,gpio-err-fatal: GPIO used by the lpass to indicate error fatal to the apps.
+- qcom,gpio-err-ready: GPIO used by the lpass to indicate to the apps that the error service is ready.
- qcom,gpio-force-stop: GPIO used by the apps to force the lpass to shutdown.
- qcom,gpio-proxy-unvote: GPIO used by the lpass to indicate apps clock is ready.
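
Note: for reference, a minimal device-tree fragment wiring the new
qcom,gpio-err-ready property, copied from the msm8226.dtsi hunk later
in this patch (the smp2p bit indices are board-specific and may differ
on other targets):

	/* GPIO inputs from lpass */
	qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
	qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_2_in 1 0>;
	qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;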
diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt
index a0b577d..a6ab4b6 100644
--- a/Documentation/workqueue.txt
+++ b/Documentation/workqueue.txt
@@ -89,25 +89,28 @@
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
-which manages thread-pool and processes the queued work items.
+which manages thread-pools and processes the queued work items.
The backend is called gcwq. There is one gcwq for each possible CPU
-and one gcwq to serve work items queued on unbound workqueues.
+and one gcwq to serve work items queued on unbound workqueues. Each
+gcwq has two thread-pools - one for normal work items and the other
+for high priority ones.
Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
-things like CPU locality, reentrancy, concurrency limits and more. To
-get a detailed overview refer to the API description of
+things like CPU locality, reentrancy, concurrency limits, priority and
+more. To get a detailed overview refer to the API description of
alloc_workqueue() below.
-When a work item is queued to a workqueue, the target gcwq is
-determined according to the queue parameters and workqueue attributes
-and appended on the shared worklist of the gcwq. For example, unless
-specifically overridden, a work item of a bound workqueue will be
-queued on the worklist of exactly that gcwq that is associated to the
-CPU the issuer is running on.
+When a work item is queued to a workqueue, the target gcwq and
+thread-pool are determined according to the queue parameters and
+workqueue attributes, and the work item is appended to the shared
+worklist of the thread-pool. For example, unless specifically
+overridden, a work item of a bound workqueue will be queued on the
+worklist of either the normal or the highpri thread-pool of the gcwq
+associated with the CPU the issuer is running on.
For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
@@ -115,26 +118,26 @@
Minimal to save resources and sufficient in that the system is used at
its full capacity.
-Each gcwq bound to an actual CPU implements concurrency management by
-hooking into the scheduler. The gcwq is notified whenever an active
-worker wakes up or sleeps and keeps track of the number of the
-currently runnable workers. Generally, work items are not expected to
-hog a CPU and consume many cycles. That means maintaining just enough
-concurrency to prevent work processing from stalling should be
-optimal. As long as there are one or more runnable workers on the
-CPU, the gcwq doesn't start execution of a new work, but, when the
-last running worker goes to sleep, it immediately schedules a new
-worker so that the CPU doesn't sit idle while there are pending work
-items. This allows using a minimal number of workers without losing
-execution bandwidth.
+Each thread-pool bound to an actual CPU implements concurrency
+management by hooking into the scheduler. The thread-pool is notified
+whenever an active worker wakes up or sleeps and keeps track of the
+number of currently runnable workers. Generally, work items are
+not expected to hog a CPU and consume many cycles. That means
+maintaining just enough concurrency to prevent work processing from
+stalling should be optimal. As long as there are one or more runnable
+workers on the CPU, the thread-pool doesn't start executing a new
+work item, but when the last running worker goes to sleep, it immediately
+schedules a new worker so that the CPU doesn't sit idle while there
+are pending work items. This allows using a minimal number of workers
+without losing execution bandwidth.
Keeping idle workers around doesn't cost other than the memory space
for kthreads, so cmwq holds onto idle ones for a while before killing
them.
For an unbound wq, the above concurrency management doesn't apply and
-the gcwq for the pseudo unbound CPU tries to start executing all work
-items as soon as possible. The responsibility of regulating
+the thread-pools for the pseudo unbound CPU try to start executing all
+work items as soon as possible. The responsibility of regulating
concurrency level is on the users. There is also a flag to mark a
bound wq to ignore the concurrency management. Please refer to the
API section for details.
@@ -205,31 +208,22 @@
WQ_HIGHPRI
- Work items of a highpri wq are queued at the head of the
- worklist of the target gcwq and start execution regardless of
- the current concurrency level. In other words, highpri work
- items will always start execution as soon as execution
- resource is available.
+ Work items of a highpri wq are queued to the highpri
+ thread-pool of the target gcwq. Highpri thread-pools are
+ served by worker threads with elevated nice level.
- Ordering among highpri work items is preserved - a highpri
- work item queued after another highpri work item will start
- execution after the earlier highpri work item starts.
-
- Although highpri work items are not held back by other
- runnable work items, they still contribute to the concurrency
- level. Highpri work items in runnable state will prevent
- non-highpri work items from starting execution.
-
- This flag is meaningless for unbound wq.
+ Note that normal and highpri thread-pools don't interact with
+	each other. Each maintains its separate pool of workers and
+ implements concurrency management among its workers.
WQ_CPU_INTENSIVE
Work items of a CPU intensive wq do not contribute to the
concurrency level. In other words, runnable CPU intensive
- work items will not prevent other work items from starting
- execution. This is useful for bound work items which are
- expected to hog CPU cycles so that their execution is
- regulated by the system scheduler.
+ work items will not prevent other work items in the same
+ thread-pool from starting execution. This is useful for bound
+ work items which are expected to hog CPU cycles so that their
+ execution is regulated by the system scheduler.
Although CPU intensive work items don't contribute to the
concurrency level, start of their executions is still
@@ -239,14 +233,6 @@
This flag is meaningless for unbound wq.
- WQ_HIGHPRI | WQ_CPU_INTENSIVE
-
- This combination makes the wq avoid interaction with
- concurrency management completely and behave as a simple
- per-CPU execution context provider. Work items queued on a
- highpri CPU-intensive wq start execution as soon as resources
- are available and don't affect execution of other work items.
-
@max_active:
@max_active determines the maximum number of execution contexts per
@@ -328,20 +314,7 @@
35 w2 wakes up and finishes
Now, let's assume w1 and w2 are queued to a different wq q1 which has
-WQ_HIGHPRI set,
-
- TIME IN MSECS EVENT
- 0 w1 and w2 start and burn CPU
- 5 w1 sleeps
- 10 w2 sleeps
- 10 w0 starts and burns CPU
- 15 w0 sleeps
- 15 w1 wakes up and finishes
- 20 w2 wakes up and finishes
- 25 w0 wakes up and burns CPU
- 30 w0 finishes
-
-If q1 has WQ_CPU_INTENSIVE set,
+WQ_CPU_INTENSIVE set,
TIME IN MSECS EVENT
0 w0 starts and burns CPU
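
Note: a minimal sketch of how a driver would use the two flags
discussed above; the workqueue names, variables and the work function
are illustrative, not part of this patch:

	#include <linux/workqueue.h>

	static void my_work_fn(struct work_struct *work)
	{
		/* runs in process context on a thread-pool worker */
	}
	static DECLARE_WORK(my_work, my_work_fn);

	static struct workqueue_struct *hp_wq, *ci_wq;

	static int __init my_init(void)
	{
		/*
		 * Served by the highpri thread-pool of the gcwq for the
		 * CPU the work is queued on; its workers run at an
		 * elevated nice level.
		 */
		hp_wq = alloc_workqueue("my_highpri", WQ_HIGHPRI, 0);
		if (!hp_wq)
			return -ENOMEM;

		/*
		 * Work items queued here may hog the CPU without
		 * blocking other work in the pool from starting.
		 */
		ci_wq = alloc_workqueue("my_cpu_intensive",
					WQ_CPU_INTENSIVE, 0);
		if (!ci_wq) {
			destroy_workqueue(hp_wq);
			return -ENOMEM;
		}

		queue_work(hp_wq, &my_work);
		return 0;
	}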
diff --git a/arch/arm/boot/dts/msm-pm8226.dtsi b/arch/arm/boot/dts/msm-pm8226.dtsi
index 41920d5..4d6751c 100644
--- a/arch/arm/boot/dts/msm-pm8226.dtsi
+++ b/arch/arm/boot/dts/msm-pm8226.dtsi
@@ -61,7 +61,7 @@
qcom,vinmin-mv = <4200>;
qcom,vbatdet-delta-mv = <150>;
qcom,ibatmax-ma = <1500>;
- qcom,ibatterm-ma = <200>;
+ qcom,ibatterm-ma = <100>;
qcom,ibatsafe-ma = <1500>;
qcom,thermal-mitigation = <1500 700 600 325>;
qcom,tchg-mins = <150>;
diff --git a/arch/arm/boot/dts/msm-pm8941.dtsi b/arch/arm/boot/dts/msm-pm8941.dtsi
index 9a51827..ce6bc63 100644
--- a/arch/arm/boot/dts/msm-pm8941.dtsi
+++ b/arch/arm/boot/dts/msm-pm8941.dtsi
@@ -175,6 +175,7 @@
qcom,vddsafe-mv = <4200>;
qcom,vinmin-mv = <4200>;
qcom,ibatmax-ma = <1500>;
+ qcom,ibatterm-ma = <100>;
qcom,ibatsafe-ma = <1500>;
qcom,thermal-mitigation = <1500 700 600 325>;
qcom,cool-bat-decidegc = <100>;
@@ -183,7 +184,7 @@
qcom,warm-bat-decidegc = <450>;
qcom,warm-bat-mv = <4100>;
qcom,ibatmax-cool-ma = <350>;
- qcom,vbatdet-delta-mv = <350>;
+ qcom,vbatdet-delta-mv = <100>;
qcom,tchg-mins = <150>;
qcom,chgr@1000 {
diff --git a/arch/arm/boot/dts/msm8226-mtp.dts b/arch/arm/boot/dts/msm8226-mtp.dts
index 3dd517b..589fe69 100644
--- a/arch/arm/boot/dts/msm8226-mtp.dts
+++ b/arch/arm/boot/dts/msm8226-mtp.dts
@@ -365,3 +365,10 @@
&pm8226_chg {
qcom,charging-disabled;
};
+
+&slim_msm {
+ tapan_codec {
+ qcom,cdc-micbias1-ext-cap;
+ qcom,cdc-micbias2-ext-cap;
+ };
+};
diff --git a/arch/arm/boot/dts/msm8226-qrd.dts b/arch/arm/boot/dts/msm8226-qrd.dts
index 721bcbb..f917d45 100644
--- a/arch/arm/boot/dts/msm8226-qrd.dts
+++ b/arch/arm/boot/dts/msm8226-qrd.dts
@@ -331,3 +331,10 @@
mpp@a700 { /* MPP 8 */
};
};
+
+&slim_msm {
+ tapan_codec {
+ qcom,cdc-micbias1-ext-cap;
+ };
+
+};
diff --git a/arch/arm/boot/dts/msm8226.dtsi b/arch/arm/boot/dts/msm8226.dtsi
index 3cd9cb5..a9c25d3 100644
--- a/arch/arm/boot/dts/msm8226.dtsi
+++ b/arch/arm/boot/dts/msm8226.dtsi
@@ -282,7 +282,7 @@
interrupt-names = "cdc-int";
};
- slim@fe12f000 {
+ slim_msm: slim@fe12f000 {
cell-index = <1>;
compatible = "qcom,slim-ngd";
reg = <0xfe12f000 0x35000>,
@@ -784,6 +784,7 @@
/* GPIO inputs from lpass */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_2_in 1 0>;
/* GPIO output to lpass */
qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
diff --git a/arch/arm/boot/dts/msm8610-cdp.dts b/arch/arm/boot/dts/msm8610-cdp.dts
index ecedcb0..d3fc917 100644
--- a/arch/arm/boot/dts/msm8610-cdp.dts
+++ b/arch/arm/boot/dts/msm8610-cdp.dts
@@ -327,3 +327,52 @@
mpp@a300 { /* MPP 4 */
};
};
+
+/* CoreSight */
+&tpiu {
+ qcom,seta-gpios = <&msmgpio 4 0>,
+ <&msmgpio 5 0>,
+ <&msmgpio 6 0>,
+ <&msmgpio 7 0>,
+ <&msmgpio 22 0>,
+ <&msmgpio 23 0>,
+ <&msmgpio 24 0>,
+ <&msmgpio 25 0>,
+ <&msmgpio 26 0>,
+ <&msmgpio 27 0>,
+ <&msmgpio 28 0>,
+ <&msmgpio 29 0>,
+ <&msmgpio 30 0>,
+ <&msmgpio 31 0>,
+ <&msmgpio 94 0>,
+ <&msmgpio 95 0>,
+ <&msmgpio 96 0>,
+ <&msmgpio 97 0>;
+ qcom,seta-gpios-func = <9 9 8 11 2 2 2 2 2 2 3 2 3 3 4 4 4 4>;
+ qcom,seta-gpios-drv = <7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7>;
+ qcom,seta-gpios-pull = <0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0>;
+ qcom,seta-gpios-dir = <2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2>;
+
+ qcom,setb-gpios = <&msmgpio 8 0>,
+ <&msmgpio 10 0>,
+ <&msmgpio 11 0>,
+ <&msmgpio 13 0>,
+ <&msmgpio 14 0>,
+ <&msmgpio 15 0>,
+ <&msmgpio 16 0>,
+ <&msmgpio 17 0>,
+ <&msmgpio 18 0>,
+ <&msmgpio 19 0>,
+ <&msmgpio 20 0>,
+ <&msmgpio 21 0>,
+ <&msmgpio 42 0>,
+ <&msmgpio 80 0>,
+ <&msmgpio 81 0>,
+ <&msmgpio 82 0>,
+ <&msmgpio 83 0>,
+ <&msmgpio 84 0>;
+ qcom,setb-gpios-func = <10 8 8 6 9 9 9 9 9 9 9 9 5 7 7 8 8 8>;
+ qcom,setb-gpios-drv = <7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7>;
+ qcom,setb-gpios-pull = <0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0>;
+ qcom,setb-gpios-dir = <2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2>;
+};
diff --git a/arch/arm/boot/dts/msm8610-coresight.dtsi b/arch/arm/boot/dts/msm8610-coresight.dtsi
index 4945693..0d9ae9a 100644
--- a/arch/arm/boot/dts/msm8610-coresight.dtsi
+++ b/arch/arm/boot/dts/msm8610-coresight.dtsi
@@ -34,6 +34,11 @@
coresight-id = <1>;
coresight-name = "coresight-tpiu";
coresight-nr-inports = <1>;
+
+ vdd-supply = <&pm8110_l18>;
+
+ qcom,vdd-voltage-level = <2950000 2950000>;
+ qcom,vdd-current-level = <15000 400000>;
};
replicator: replicator@fc324000 {
diff --git a/arch/arm/boot/dts/msm8610.dtsi b/arch/arm/boot/dts/msm8610.dtsi
index aff8759..9dbd71d 100644
--- a/arch/arm/boot/dts/msm8610.dtsi
+++ b/arch/arm/boot/dts/msm8610.dtsi
@@ -711,6 +711,7 @@
/* GPIO inputs from lpass */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_2_in 1 0>;
/* GPIO output to lpass */
qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
diff --git a/arch/arm/boot/dts/msm8974.dtsi b/arch/arm/boot/dts/msm8974.dtsi
index 825fe8c..a4a3efe 100644
--- a/arch/arm/boot/dts/msm8974.dtsi
+++ b/arch/arm/boot/dts/msm8974.dtsi
@@ -869,6 +869,7 @@
/* GPIO inputs from lpass */
qcom,gpio-err-fatal = <&smp2pgpio_ssr_smp2p_2_in 0 0>;
qcom,gpio-proxy-unvote = <&smp2pgpio_ssr_smp2p_2_in 2 0>;
+ qcom,gpio-err-ready = <&smp2pgpio_ssr_smp2p_2_in 1 0>;
/* GPIO output to lpass */
qcom,gpio-force-stop = <&smp2pgpio_ssr_smp2p_2_out 0 0>;
diff --git a/arch/arm/configs/mpq8092_defconfig b/arch/arm/configs/mpq8092_defconfig
new file mode 100644
index 0000000..28ca32f
--- /dev/null
+++ b/arch/arm/configs/mpq8092_defconfig
@@ -0,0 +1,368 @@
+# CONFIG_ARM_PATCH_PHYS_VIRT is not set
+CONFIG_EXPERIMENTAL=y
+CONFIG_SYSVIPC=y
+CONFIG_RCU_FAST_NO_HZ=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_CGROUPS=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_RESOURCE_COUNTERS=y
+CONFIG_CGROUP_SCHED=y
+CONFIG_RT_GROUP_SCHED=y
+CONFIG_NAMESPACES=y
+# CONFIG_UTS_NS is not set
+# CONFIG_IPC_NS is not set
+# CONFIG_PID_NS is not set
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_RD_BZIP2=y
+CONFIG_RD_LZMA=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_KALLSYMS_ALL=y
+CONFIG_EMBEDDED=y
+CONFIG_PROFILING=y
+CONFIG_KPROBES=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODULE_FORCE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_EFI_PARTITION=y
+CONFIG_IOSCHED_TEST=y
+CONFIG_ARCH_MSM=y
+CONFIG_ARCH_MPQ8092=y
+CONFIG_MSM_KRAIT_TBB_ABORT_HANDLER=y
+CONFIG_MSM_MPM_OF=y
+# CONFIG_MSM_STACKED_MEMORY is not set
+CONFIG_CPU_HAS_L2_PMU=y
+# CONFIG_MSM_FIQ_SUPPORT is not set
+# CONFIG_MSM_PROC_COMM is not set
+CONFIG_MSM_SMD=y
+CONFIG_MSM_SMD_PKG4=y
+CONFIG_MSM_IPC_LOGGING=y
+CONFIG_MSM_IPC_ROUTER=y
+CONFIG_MSM_IPC_ROUTER_SMD_XPRT=y
+CONFIG_MSM_IPC_ROUTER_SECURITY=y
+CONFIG_MSM_SUBSYSTEM_RESTART=y
+CONFIG_MSM_SYSMON_COMM=y
+CONFIG_MSM_DIRECT_SCLK_ACCESS=y
+CONFIG_MSM_WATCHDOG_V2=y
+CONFIG_MSM_DLOAD_MODE=y
+CONFIG_MSM_RUN_QUEUE_STATS=y
+CONFIG_MSM_SPM_V2=y
+CONFIG_MSM_L2_SPM=y
+CONFIG_MSM_OCMEM=y
+CONFIG_MSM_OCMEM_LOCAL_POWER_CTRL=y
+CONFIG_MSM_OCMEM_DEBUG=y
+CONFIG_MSM_OCMEM_NONSECURE=y
+CONFIG_SENSORS_ADSP=y
+CONFIG_MSM_RTB=y
+CONFIG_MSM_RTB_SEPARATE_CPUS=y
+CONFIG_MSM_CACHE_ERP=y
+CONFIG_MSM_L1_ERR_PANIC=y
+CONFIG_MSM_L1_RECOV_ERR_PANIC=y
+CONFIG_MSM_L1_ERR_LOG=y
+CONFIG_MSM_L2_ERP_PRINT_ACCESS_ERRORS=y
+CONFIG_MSM_L2_ERP_PORT_PANIC=y
+CONFIG_MSM_L2_ERP_1BIT_PANIC=y
+CONFIG_MSM_L2_ERP_2BIT_PANIC=y
+CONFIG_MSM_CACHE_DUMP=y
+CONFIG_MSM_CACHE_DUMP_ON_PANIC=y
+CONFIG_MSM_ENABLE_WDOG_DEBUG_CONTROL=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_SMP=y
+# CONFIG_SMP_ON_UP is not set
+CONFIG_SCHED_MC=y
+CONFIG_ARM_ARCH_TIMER=y
+CONFIG_PREEMPT=y
+CONFIG_AEABI=y
+CONFIG_HIGHMEM=y
+CONFIG_CC_STACKPROTECTOR=y
+CONFIG_CP_ACCESS=y
+CONFIG_USE_OF=y
+CONFIG_CPU_FREQ=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=y
+CONFIG_CPU_FREQ_GOV_USERSPACE=y
+CONFIG_CPU_FREQ_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
+CONFIG_CPU_IDLE=y
+CONFIG_VFP=y
+CONFIG_NEON=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_PM_RUNTIME=y
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+CONFIG_IPV6=y
+CONFIG_IPV6_PRIVACY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_IPV6_MULTIPLE_TABLES=y
+CONFIG_IPV6_SUBTREES=y
+CONFIG_NETFILTER=y
+CONFIG_NETFILTER_NETLINK_LOG=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+CONFIG_NF_CONNTRACK_SIP=y
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+CONFIG_NETFILTER_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+CONFIG_NETFILTER_XT_TARGET_LOG=y
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QTAGUID=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2_LOG=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+CONFIG_NF_CONNTRACK_IPV4=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+CONFIG_NF_CONNTRACK_IPV6=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_PRIO=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+CONFIG_CLS_U32_MARK=y
+CONFIG_NET_CLS_FLOW=y
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_CMP=y
+CONFIG_NET_EMATCH_NBYTE=y
+CONFIG_NET_EMATCH_U32=y
+CONFIG_NET_EMATCH_META=y
+CONFIG_NET_EMATCH_TEXT=y
+CONFIG_NET_CLS_ACT=y
+CONFIG_CFG80211=y
+CONFIG_RFKILL=y
+CONFIG_GENLOCK=y
+CONFIG_GENLOCK_MISCDEVICE=y
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_RAM=y
+CONFIG_HAPTIC_ISA1200=y
+CONFIG_USB_HSIC_SMSC_HUB=y
+CONFIG_TI_DRV2667=y
+CONFIG_SCSI=y
+CONFIG_SCSI_TGT=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_CHR_DEV_SCH=y
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_MD=y
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=y
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+CONFIG_TUN=y
+CONFIG_KS8851=m
+# CONFIG_MSM_RMNET is not set
+CONFIG_SLIP=y
+CONFIG_SLIP_COMPRESSED=y
+CONFIG_SLIP_MODE_SLIP6=y
+CONFIG_USB_USBNET=y
+CONFIG_INPUT_EVDEV=y
+CONFIG_INPUT_EVBUG=m
+CONFIG_KEYBOARD_GPIO=y
+CONFIG_INPUT_JOYSTICK=y
+CONFIG_INPUT_TOUCHSCREEN=y
+CONFIG_TOUCHSCREEN_ATMEL_MXT=y
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_UINPUT=y
+CONFIG_SERIAL_MSM_HSL=y
+CONFIG_SERIAL_MSM_HSL_CONSOLE=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_MSM=y
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_QUP=y
+CONFIG_SPI=y
+CONFIG_SPI_SPIDEV=m
+CONFIG_SPMI=y
+CONFIG_SPMI_MSM_PMIC_ARB=y
+CONFIG_MSM_QPNP_INT=y
+CONFIG_DEBUG_GPIO=y
+CONFIG_GPIO_SYSFS=y
+CONFIG_GPIO_QPNP_PIN=y
+CONFIG_GPIO_QPNP_PIN_DEBUG=y
+CONFIG_POWER_SUPPLY=y
+CONFIG_SMB350_CHARGER=y
+CONFIG_BATTERY_BQ28400=y
+CONFIG_QPNP_CHARGER=y
+CONFIG_BATTERY_BCL=y
+CONFIG_SENSORS_EPM_ADC=y
+CONFIG_SENSORS_QPNP_ADC_VOLTAGE=y
+CONFIG_SENSORS_QPNP_ADC_CURRENT=y
+CONFIG_THERMAL=y
+CONFIG_THERMAL_QPNP=y
+CONFIG_THERMAL_QPNP_ADC_TM=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_STUB=y
+CONFIG_REGULATOR_QPNP=y
+CONFIG_MEDIA_SUPPORT=y
+CONFIG_MEDIA_CONTROLLER=y
+CONFIG_MSM_KGSL=y
+CONFIG_FB=y
+CONFIG_FB_MSM=y
+# CONFIG_FB_MSM_BACKLIGHT is not set
+CONFIG_FB_MSM_MDSS=y
+CONFIG_FB_MSM_MDSS_WRITEBACK=y
+CONFIG_FB_MSM_MDSS_HDMI_PANEL=y
+CONFIG_FB_MSM_MDSS_HDMI_MHL_SII8334=y
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+# CONFIG_BACKLIGHT_GENERIC is not set
+CONFIG_USB=y
+CONFIG_USB_SUSPEND=y
+CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_MSM=y
+CONFIG_USB_EHCI_MSM_HSIC=y
+CONFIG_USB_ACM=y
+CONFIG_USB_STORAGE=y
+CONFIG_USB_STORAGE_DATAFAB=y
+CONFIG_USB_STORAGE_FREECOM=y
+CONFIG_USB_STORAGE_ISD200=y
+CONFIG_USB_STORAGE_USBAT=y
+CONFIG_USB_STORAGE_SDDR09=y
+CONFIG_USB_STORAGE_SDDR55=y
+CONFIG_USB_STORAGE_JUMPSHOT=y
+CONFIG_USB_STORAGE_ALAUDA=y
+CONFIG_USB_STORAGE_ONETOUCH=y
+CONFIG_USB_STORAGE_KARMA=y
+CONFIG_USB_STORAGE_CYPRESS_ATACB=y
+CONFIG_USB_STORAGE_ENE_UB6250=y
+CONFIG_LEDS_QPNP=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_BACKLIGHT=y
+CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
+CONFIG_SWITCH=y
+CONFIG_STAGING=y
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ASHMEM=y
+CONFIG_ANDROID_LOGGER=y
+CONFIG_ANDROID_TIMED_GPIO=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER=y
+CONFIG_MSM_SSBI=y
+CONFIG_SPS=y
+CONFIG_SPS_SUPPORT_BAMDMA=y
+CONFIG_SPS_SUPPORT_NDP_BAM=y
+CONFIG_QPNP_PWM=y
+CONFIG_QPNP_POWER_ON=y
+CONFIG_QPNP_CLKDIV=y
+CONFIG_MSM_IOMMU=y
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT3_FS=y
+# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+CONFIG_EXT4_FS=y
+CONFIG_FUSE_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_TMPFS=y
+CONFIG_PSTORE=y
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_LOCKUP_DETECTOR=y
+# CONFIG_DETECT_HUNG_TASK is not set
+CONFIG_SCHEDSTATS=y
+CONFIG_TIMER_STATS=y
+CONFIG_DEBUG_KMEMLEAK=y
+CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF=y
+CONFIG_DEBUG_SPINLOCK=y
+CONFIG_DEBUG_MUTEXES=y
+CONFIG_DEBUG_ATOMIC_SLEEP=y
+CONFIG_DEBUG_STACK_USAGE=y
+CONFIG_DEBUG_INFO=y
+CONFIG_DEBUG_MEMORY_INIT=y
+CONFIG_DEBUG_LIST=y
+CONFIG_FAULT_INJECTION=y
+CONFIG_FAIL_PAGE_ALLOC=y
+CONFIG_FAULT_INJECTION_DEBUG_FS=y
+CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y
+CONFIG_DEBUG_PAGEALLOC=y
+CONFIG_CPU_FREQ_SWITCH_PROFILER=y
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
+CONFIG_EARLY_PRINTK=y
+CONFIG_PID_IN_CONTEXTIDR=y
+CONFIG_KEYS=y
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_ARC4=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_DEV_QCRYPTO=m
+CONFIG_CRYPTO_DEV_QCE=m
+CONFIG_CRYPTO_DEV_QCEDEV=m
+CONFIG_CRC_CCITT=y
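
Note: once merged, the new config is selected and built the usual way
(the cross-compiler prefix below is only an example):

	make ARCH=arm mpq8092_defconfig
	make ARCH=arm CROSS_COMPILE=arm-eabi- zImage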
diff --git a/arch/arm/mach-msm/Makefile b/arch/arm/mach-msm/Makefile
index 8efc000..321040e 100644
--- a/arch/arm/mach-msm/Makefile
+++ b/arch/arm/mach-msm/Makefile
@@ -297,6 +297,7 @@
obj-$(CONFIG_ARCH_MSM9615) += board-9615.o devices-9615.o board-9615-regulator.o board-9615-gpiomux.o board-9615-storage.o board-9615-display.o
obj-$(CONFIG_ARCH_MSM9615) += clock-local.o clock-9615.o acpuclock-9615.o clock-rpm.o clock-pll.o
obj-$(CONFIG_ARCH_APQ8084) += board-8084.o board-8084-gpiomux.o
+obj-$(CONFIG_ARCH_APQ8084) += clock-8084.o
obj-$(CONFIG_ARCH_MSM8974) += board-8974.o board-8974-gpiomux.o
obj-$(CONFIG_ARCH_MSM8974) += acpuclock-8974.o
obj-$(CONFIG_ARCH_MSM8974) += clock-local2.o clock-pll.o clock-8974.o clock-rpm.o clock-voter.o clock-mdss-8974.o
diff --git a/arch/arm/mach-msm/board-8084.c b/arch/arm/mach-msm/board-8084.c
index c20ba92..500c302 100644
--- a/arch/arm/mach-msm/board-8084.c
+++ b/arch/arm/mach-msm/board-8084.c
@@ -74,27 +74,6 @@
of_scan_flat_dt(dt_scan_for_memory_hole, apq8084_reserve_table);
}
-static struct clk_lookup msm_clocks_dummy[] = {
- CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
- CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
- CLK_DUMMY("core_clk", SDC1_CLK, "msm_sdcc.1", OFF),
- CLK_DUMMY("iface_clk", SDC1_P_CLK, "msm_sdcc.1", OFF),
- CLK_DUMMY("core_clk", SDC2_CLK, "msm_sdcc.2", OFF),
- CLK_DUMMY("iface_clk", SDC2_P_CLK, "msm_sdcc.2", OFF),
- CLK_DUMMY("xo", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("core_clk", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("iface_clk", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("sleep_clk", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("sleep_a_clk", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("utmi_clk", NULL, "f9200000.qcom,ssusb", OFF),
- CLK_DUMMY("ref_clk", NULL, "f9200000.qcom,ssusb", OFF),
-};
-
-static struct clock_init_data msm_dummy_clock_init_data __initdata = {
- .table = msm_clocks_dummy,
- .size = ARRAY_SIZE(msm_clocks_dummy),
-};
-
/*
* Used to satisfy dependencies for devices that need to be
* run early or in a particular order. Most likely your device doesn't fall
@@ -104,7 +83,7 @@
void __init apq8084_add_drivers(void)
{
msm_smd_init();
- msm_clock_init(&msm_dummy_clock_init_data);
+ msm_clock_init(&msm8084_clock_init_data);
}
static void __init apq8084_map_io(void)
diff --git a/arch/arm/mach-msm/board-8092.c b/arch/arm/mach-msm/board-8092.c
index 3da3e2d..6adff30 100644
--- a/arch/arm/mach-msm/board-8092.c
+++ b/arch/arm/mach-msm/board-8092.c
@@ -29,6 +29,7 @@
#include <linux/gpio.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
+#include <mach/clk-provider.h>
#include "board-dt.h"
#include "clock.h"
diff --git a/arch/arm/mach-msm/clock-8084.c b/arch/arm/mach-msm/clock-8084.c
new file mode 100644
index 0000000..424b694
--- /dev/null
+++ b/arch/arm/mach-msm/clock-8084.c
@@ -0,0 +1,351 @@
+/* Copyright (c) 2013, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/err.h>
+#include <linux/ctype.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/delay.h>
+#include <linux/clk.h>
+#include <linux/iopoll.h>
+#include <linux/regulator/consumer.h>
+
+#include <mach/rpm-regulator-smd.h>
+#include <mach/socinfo.h>
+#include <mach/rpm-smd.h>
+
+#include "clock-local2.h"
+#include "clock-pll.h"
+#include "clock-rpm.h"
+#include "clock-voter.h"
+#include "clock.h"
+
+/*
+ * TODO: Drivers need to fill in the clock names and device names for the clocks
+ * they need to control.
+ */
+static struct clk_lookup msm_clocks_8084[] = {
+ CLK_DUMMY("core_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("iface_clk", BLSP1_UART_CLK, "f991f000.serial", OFF),
+ CLK_DUMMY("core_clk", SDC1_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("iface_clk", SDC1_P_CLK, "msm_sdcc.1", OFF),
+ CLK_DUMMY("core_clk", SDC2_CLK, "msm_sdcc.2", OFF),
+ CLK_DUMMY("iface_clk", SDC2_P_CLK, "msm_sdcc.2", OFF),
+ CLK_DUMMY("xo", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("core_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("iface_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("sleep_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("sleep_a_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("utmi_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("ref_clk", NULL, "f9200000.qcom,ssusb", OFF),
+ CLK_DUMMY("", ufs_axi_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb30_master_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb30_sec_master_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hsic_ahb_clk_src.c, "", OFF),
+ CLK_DUMMY("", sata_asic0_clk_src.c, "", OFF),
+ CLK_DUMMY("", sata_pmalive_clk_src.c, "", OFF),
+ CLK_DUMMY("", sata_rx_clk_src.c, "", OFF),
+ CLK_DUMMY("", sata_rx_oob_clk_src.c, "", OFF),
+ CLK_DUMMY("", sdcc1_apps_clk_src.c, "", OFF),
+ CLK_DUMMY("", sdcc2_apps_clk_src.c, "", OFF),
+ CLK_DUMMY("", sdcc3_apps_clk_src.c, "", OFF),
+ CLK_DUMMY("", sdcc4_apps_clk_src.c, "", OFF),
+ CLK_DUMMY("", tsif_ref_clk_src.c, "", OFF),
+ CLK_DUMMY("", ufs_rx_cfg_postdiv_clk_src.c, "", OFF),
+ CLK_DUMMY("", ufs_tx_cfg_postdiv_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb30_mock_utmi_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb30_sec_mock_utmi_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hs_system_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hsic_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hsic_io_cal_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hsic_mock_utmi_clk_src.c, "", OFF),
+ CLK_DUMMY("", usb_hsic_system_clk_src.c, "", OFF),
+ CLK_DUMMY("", gcc_bam_dma_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_bam_dma_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup1_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup1_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup2_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup2_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup3_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup3_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup4_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup4_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup5_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup5_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup6_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_qup6_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart1_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart2_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart3_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart4_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart5_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp1_uart6_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup1_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup1_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup2_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup2_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup3_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup3_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup4_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup4_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup5_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup5_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup6_i2c_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_qup6_spi_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart1_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart2_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart3_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart4_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart5_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_blsp2_uart6_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_boot_rom_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce1_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce1_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce1_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce2_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce2_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce2_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce3_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce3_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ce3_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_copss_smmu_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_copss_smmu_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_dcd_xo_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_bimc_gfx_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_xo_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_xo_div4_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_gp1_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_gp2_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_gp3_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_lpass_mport_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_lpass_q6_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_lpass_sway_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_mmss_bimc_gfx_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_mmss_vpu_maple_sys_noc_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ocmem_noc_cfg_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_msg_ram_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_pdm2_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_pdm_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_pdm_xo4_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_periph_noc_usb_hsic_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_prng_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_asic0_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_cfg_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_pmalive_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_rx_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sata_rx_oob_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc1_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc1_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc1_cdccal_ff_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc1_cdccal_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc2_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc2_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc2_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc3_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc3_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc3_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc4_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc4_apps_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sdcc4_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_spss_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sys_noc_ufs_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sys_noc_usb3_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_sys_noc_usb3_sec_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_tsif_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_tsif_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_tsif_ref_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_axi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_rx_cfg_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_rx_symbol_0_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_rx_symbol_1_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_tx_cfg_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_tx_symbol_0_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_ufs_tx_symbol_1_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb2a_phy_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb2b_phy_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_master_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_mock_utmi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_sec_master_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_sec_mock_utmi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb30_sec_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hs_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hs_inactivity_timers_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hs_system_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_io_cal_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_io_cal_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_mock_utmi_clk.c, "", OFF),
+ CLK_DUMMY("", gcc_usb_hsic_system_clk.c, "", OFF),
+
+ CLK_DUMMY("", axi_clk_src.c, "", OFF),
+ CLK_DUMMY("", mmpll0_pll_clk_src.c, "", OFF),
+ CLK_DUMMY("", mmpll1_pll_clk_src.c, "", OFF),
+ CLK_DUMMY("", mmpll2_pll_clk_src.c, "", OFF),
+ CLK_DUMMY("", mmpll3_pll_clk_src.c, "", OFF),
+ CLK_DUMMY("", mmpll4_pll_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi0_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi1_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi2_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi3_clk_src.c, "", OFF),
+ CLK_DUMMY("", vcodec0_clk_src.c, "", OFF),
+ CLK_DUMMY("", vfe0_clk_src.c, "", OFF),
+ CLK_DUMMY("", vfe1_clk_src.c, "", OFF),
+ CLK_DUMMY("", edppixel_clk_src.c, "", OFF),
+ CLK_DUMMY("", extpclk_clk_src.c, "", OFF),
+ CLK_DUMMY("", mdp_clk_src.c, "", OFF),
+ CLK_DUMMY("", pclk0_clk_src.c, "", OFF),
+ CLK_DUMMY("", pclk1_clk_src.c, "", OFF),
+ CLK_DUMMY("", ocmemnoc_clk_src.c, "", OFF),
+ CLK_DUMMY("", gfx3d_clk_src.c, "", OFF),
+ CLK_DUMMY("", vp_clk_src.c, "", OFF),
+ CLK_DUMMY("", cci_clk_src.c, "", OFF),
+ CLK_DUMMY("", gp0_clk_src.c, "", OFF),
+ CLK_DUMMY("", gp1_clk_src.c, "", OFF),
+ CLK_DUMMY("", jpeg0_clk_src.c, "", OFF),
+ CLK_DUMMY("", jpeg1_clk_src.c, "", OFF),
+ CLK_DUMMY("", jpeg2_clk_src.c, "", OFF),
+ CLK_DUMMY("", mclk0_clk_src.c, "", OFF),
+ CLK_DUMMY("", mclk1_clk_src.c, "", OFF),
+ CLK_DUMMY("", mclk2_clk_src.c, "", OFF),
+ CLK_DUMMY("", mclk3_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi0phytimer_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi1phytimer_clk_src.c, "", OFF),
+ CLK_DUMMY("", csi2phytimer_clk_src.c, "", OFF),
+ CLK_DUMMY("", cpp_clk_src.c, "", OFF),
+ CLK_DUMMY("", byte0_clk_src.c, "", OFF),
+ CLK_DUMMY("", byte1_clk_src.c, "", OFF),
+ CLK_DUMMY("", edpaux_clk_src.c, "", OFF),
+ CLK_DUMMY("", edplink_clk_src.c, "", OFF),
+ CLK_DUMMY("", esc0_clk_src.c, "", OFF),
+ CLK_DUMMY("", esc1_clk_src.c, "", OFF),
+ CLK_DUMMY("", hdmi_clk_src.c, "", OFF),
+ CLK_DUMMY("", vsync_clk_src.c, "", OFF),
+ CLK_DUMMY("", rbbmtimer_clk_src.c, "", OFF),
+ CLK_DUMMY("", maple_clk_src.c, "", OFF),
+ CLK_DUMMY("", vdp_clk_src.c, "", OFF),
+ CLK_DUMMY("", vpu_bus_clk_src.c, "", OFF),
+ CLK_DUMMY("", dsi0_phy_pll_out_byteclk.c, "", OFF),
+ CLK_DUMMY("", dsi0_phy_pll_out_dsiclk.c, "", OFF),
+ CLK_DUMMY("", dsi1_phy_pll_out_byteclk.c, "", OFF),
+ CLK_DUMMY("", dsi1_phy_pll_out_dsiclk.c, "", OFF),
+ CLK_DUMMY("", edpphy_cc_link_clk.c, "", OFF),
+ CLK_DUMMY("", edpphy_cc_vco_div_clk.c, "", OFF),
+ CLK_DUMMY("", hdmi_phy_pll_out.c, "", OFF),
+ CLK_DUMMY("", csiphy_bist_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_edppixel_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_extpclk_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_pclk0_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_pclk1_clk.c, "", OFF),
+ CLK_DUMMY("", avsync_vp_clk.c, "", OFF),
+ CLK_DUMMY("", camss_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_cci_cci_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_cci_cci_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi0_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi0phy_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi0pix_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi0rdi_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi1_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi1phy_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi1pix_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi1rdi_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi2_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi2_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi2phy_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi2pix_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi2rdi_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi3_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi3_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi3phy_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi3pix_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi3rdi_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi_vfe0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_csi_vfe1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_gp0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_gp1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_ispif_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_jpeg_jpeg0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_jpeg_jpeg1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_jpeg_jpeg2_clk.c, "", OFF),
+ CLK_DUMMY("", camss_jpeg_jpeg_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_jpeg_jpeg_axi_clk.c, "", OFF),
+ CLK_DUMMY("", camss_mclk0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_mclk1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_mclk2_clk.c, "", OFF),
+ CLK_DUMMY("", camss_mclk3_clk.c, "", OFF),
+ CLK_DUMMY("", camss_micro_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_phy0_csi0phytimer_clk.c, "", OFF),
+ CLK_DUMMY("", camss_phy1_csi1phytimer_clk.c, "", OFF),
+ CLK_DUMMY("", camss_phy2_csi2phytimer_clk.c, "", OFF),
+ CLK_DUMMY("", camss_top_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_cpp_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_cpp_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_vfe0_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_vfe1_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_vfe_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", camss_vfe_vfe_axi_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_axi_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_byte0_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_byte1_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_edpaux_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_edplink_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_edppixel_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_esc0_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_esc1_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_extpclk_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_hdmi_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_hdmi_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_mdp_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_mdp_lut_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_pclk0_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_pclk1_clk.c, "", OFF),
+ CLK_DUMMY("", mdss_vsync_clk.c, "", OFF),
+ CLK_DUMMY("", mmss_misc_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", mmss_mmssnoc_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", mmss_mmssnoc_axi_clk.c, "", OFF),
+ CLK_DUMMY("", mmss_s0_axi_clk.c, "", OFF),
+ CLK_DUMMY("", ocmemcx_ocmemnoc_clk.c, "", OFF),
+ CLK_DUMMY("", oxili_ocmemgx_clk.c, "", OFF),
+ CLK_DUMMY("", oxili_gfx3d_clk.c, "", OFF),
+ CLK_DUMMY("", oxili_rbbmtimer_clk.c, "", OFF),
+ CLK_DUMMY("", oxilicx_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_axi_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_core0_vcodec_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_core1_vcodec_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_ocmemnoc_clk.c, "", OFF),
+ CLK_DUMMY("", venus0_vcodec0_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_ahb_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_axi_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_bus_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_cxo_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_maple_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_sleep_clk.c, "", OFF),
+ CLK_DUMMY("", vpu_vdp_clk.c, "", OFF),
+};
+
+struct clock_init_data msm8084_clock_init_data __initdata = {
+ .table = msm_clocks_8084,
+ .size = ARRAY_SIZE(msm_clocks_8084),
+};
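
Note: these are all CLK_DUMMY placeholders, but once the table is
registered through msm_clock_init() (see board-8084.c above), clients
resolve them through the standard clk API. For example, the serial
driver's core clock in this table would be obtained as (sketch,
assuming a platform device named "f991f000.serial"):

	struct clk *core_clk = clk_get(&pdev->dev, "core_clk");
	if (IS_ERR(core_clk))
		return PTR_ERR(core_clk);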
diff --git a/arch/arm/mach-msm/clock.h b/arch/arm/mach-msm/clock.h
index 9ca1965..674ef77 100644
--- a/arch/arm/mach-msm/clock.h
+++ b/arch/arm/mach-msm/clock.h
@@ -53,6 +53,7 @@
extern struct clock_init_data msm8610_rumi_clock_init_data;
extern struct clock_init_data msm8226_clock_init_data;
extern struct clock_init_data msm8226_rumi_clock_init_data;
+extern struct clock_init_data msm8084_clock_init_data;
int msm_clock_init(struct clock_init_data *data);
int find_vdd_level(struct clk *clk, unsigned long rate);
diff --git a/arch/arm/mach-msm/include/mach/qseecomi.h b/arch/arm/mach-msm/include/mach/qseecomi.h
index e889242..3a997be 100644
--- a/arch/arm/mach-msm/include/mach/qseecomi.h
+++ b/arch/arm/mach-msm/include/mach/qseecomi.h
@@ -67,9 +67,9 @@
};
enum qseecom_pipe_type {
- QSEOS_PIPE_ENC = 0,
- QSEOS_PIPE_ENC_XTS,
- QSEOS_PIPE_AUTH,
+ QSEOS_PIPE_ENC = 0x1,
+ QSEOS_PIPE_ENC_XTS = 0x2,
+ QSEOS_PIPE_AUTH = 0x4,
QSEOS_PIPE_ENUM_FILL = 0x7FFFFFFF
};
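
Note: the pipe-type values change from consecutive integers (0, 1, 2)
to distinct bits (0x1, 0x2, 0x4) so that several pipe types can be
OR'd into a single key-select request; the qseecom.c hunk below relies
on exactly that:

	/* one scm_call now programs both encryption pipe types */
	ireq.pipe_type = QSEOS_PIPE_ENC | QSEOS_PIPE_ENC_XTS;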
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_arb.c b/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
index 5002a7d..eddf017 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_arb.c
@@ -247,6 +247,11 @@
struct msm_bus_fabric_device *gwfab =
msm_bus_get_fabric_device(fabnodeinfo->
info->node_info->priv_id);
+ if (!gwfab) {
+ MSM_BUS_ERR("Err: No gateway found\n");
+ return -ENXIO;
+ }
+
if (!gwfab->visited) {
MSM_BUS_DBG("VISITED ID: %d\n",
gwfab->id);
@@ -320,6 +325,12 @@
struct msm_bus_fabric_device *fabdev = msm_bus_get_fabric_device
(GET_FABID(curr));
+ if (!fabdev) {
+ MSM_BUS_ERR("Bus device for bus ID: %d not found!\n",
+ GET_FABID(curr));
+ return -ENXIO;
+ }
+
MSM_BUS_DBG("args: %d %d %d %llu %llu %llu %llu %u\n",
curr, GET_NODE(pnode), GET_INDEX(pnode), req_clk, req_bw,
curr_clk, curr_bw, ctx);
@@ -525,6 +536,11 @@
goto err;
}
srcfab = msm_bus_get_fabric_device(GET_FABID(src));
+ if (!srcfab) {
+ MSM_BUS_ERR("Fabric not found\n");
+ goto err;
+ }
+
srcfab->visited = true;
pnode[i] = getpath(src, dest);
bus_for_each_dev(&msm_bus_type, NULL, NULL, clearvisitedflag);
@@ -661,6 +677,12 @@
struct msm_bus_fabric_device *fabdev;
int index, next_pnode;
fabdev = msm_bus_get_fabric_device(GET_FABID(curr));
+ if (!fabdev) {
+ MSM_BUS_ERR("Fabric not found for: %d\n",
+ (GET_FABID(curr)));
+ return -ENXIO;
+ }
+
index = GET_INDEX(pnode);
info = fabdev->algo->find_node(fabdev, curr);
if (!info) {
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
index cd6693e..f05b381 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_bimc.c
@@ -1769,8 +1769,13 @@
}
}
+ if (fab_pdata->virt) {
+		MSM_BUS_DBG("Skipping memory regions for virtual fabric\n");
+ goto skip_mem;
+ }
+
bimc_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!bimc_mem && !fab_pdata->virt) {
+ if (!bimc_mem) {
MSM_BUS_ERR("Cannot get BIMC Base address\n");
kfree(binfo);
return NULL;
@@ -1792,6 +1797,7 @@
return NULL;
}
+skip_mem:
fab_pdata->hw_data = (void *)binfo;
return (void *)binfo;
}
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_config.c b/arch/arm/mach-msm/msm_bus/msm_bus_config.c
index c6fa250..858b15e 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_config.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_config.c
@@ -1,4 +1,4 @@
-/* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved.
+/* Copyright (c) 2011-2013, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -39,7 +39,7 @@
MSM_BUS_DBG("master_port: %d iid: %d fabid%d\n",
master_port, priv_id, GET_FABID(priv_id));
fabdev = msm_bus_get_fabric_device(GET_FABID(priv_id));
- if (IS_ERR(fabdev)) {
+ if (IS_ERR_OR_NULL(fabdev)) {
MSM_BUS_ERR("Fabric device not found for mport: %d\n",
master_port);
return -ENODEV;
@@ -65,7 +65,7 @@
MSM_BUS_DBG("master_port: %d iid: %d fabid: %d\n",
master_port, priv_id, GET_FABID(priv_id));
fabdev = msm_bus_get_fabric_device(GET_FABID(priv_id));
- if (IS_ERR(fabdev)) {
+ if (IS_ERR_OR_NULL(fabdev)) {
MSM_BUS_ERR("Fabric device not found for mport: %d\n",
master_port);
return -ENODEV;
diff --git a/arch/arm/mach-msm/msm_bus/msm_bus_of.c b/arch/arm/mach-msm/msm_bus/msm_bus_of.c
index af3537c..4e25637 100644
--- a/arch/arm/mach-msm/msm_bus/msm_bus_of.c
+++ b/arch/arm/mach-msm/msm_bus/msm_bus_of.c
@@ -103,6 +103,11 @@
}
vec_arr = of_get_property(of_node, "qcom,msm-bus,vectors-KBps", &len);
+ if (vec_arr == NULL) {
+ pr_err("Error: Vector array not found\n");
+ goto err;
+ }
+
if (len != num_usecases * num_paths * sizeof(uint32_t) * 4) {
pr_err("Error: Length-error on getting vectors\n");
goto err;
@@ -432,7 +437,7 @@
struct msm_bus_fabric_registration
*msm_bus_of_get_fab_data(struct platform_device *pdev)
{
- struct device_node *of_node = pdev->dev.of_node;
+ struct device_node *of_node;
struct msm_bus_fabric_registration *pdata;
bool mem_err = false;
int ret = 0;
@@ -443,6 +448,7 @@
return NULL;
}
+ of_node = pdev->dev.of_node;
pdata = devm_kzalloc(&pdev->dev,
sizeof(struct msm_bus_fabric_registration), GFP_KERNEL);
if (!pdata) {
diff --git a/arch/arm/mach-msm/pil-q6v5-lpass.c b/arch/arm/mach-msm/pil-q6v5-lpass.c
index 04c1be3..6e8e79e 100644
--- a/arch/arm/mach-msm/pil-q6v5-lpass.c
+++ b/arch/arm/mach-msm/pil-q6v5-lpass.c
@@ -409,6 +409,12 @@
drv->err_fatal_irq = ret;
ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
+ "qcom,gpio-err-ready", 0));
+ if (ret < 0)
+ return ret;
+ drv->subsys_desc.err_ready_irq = ret;
+
+ ret = gpio_to_irq(of_get_named_gpio(pdev->dev.of_node,
"qcom,gpio-proxy-unvote", 0));
if (ret < 0)
return ret;
diff --git a/drivers/gpu/msm/adreno.c b/drivers/gpu/msm/adreno.c
index f071801..a4f60f9 100644
--- a/drivers/gpu/msm/adreno.c
+++ b/drivers/gpu/msm/adreno.c
@@ -101,13 +101,6 @@
.iomemname = KGSL_3D0_REG_MEMORY,
.shadermemname = KGSL_3D0_SHADER_MEMORY,
.ftbl = &adreno_functable,
-#ifdef CONFIG_HAS_EARLYSUSPEND
- .display_off = {
- .level = EARLY_SUSPEND_LEVEL_STOP_DRAWING,
- .suspend = kgsl_early_suspend_driver,
- .resume = kgsl_late_resume_driver,
- },
-#endif
},
.gmem_base = 0,
.gmem_size = SZ_256K,
diff --git a/drivers/gpu/msm/kgsl.c b/drivers/gpu/msm/kgsl.c
index 07f8ef5..992f88d 100644
--- a/drivers/gpu/msm/kgsl.c
+++ b/drivers/gpu/msm/kgsl.c
@@ -625,20 +625,6 @@
};
EXPORT_SYMBOL(kgsl_pm_ops);
-void kgsl_early_suspend_driver(struct early_suspend *h)
-{
- struct kgsl_device *device = container_of(h,
- struct kgsl_device, display_off);
- KGSL_PWR_WARN(device, "early suspend start\n");
- mutex_lock(&device->mutex);
- device->pwrctrl.restore_slumber = true;
- kgsl_pwrctrl_request_state(device, KGSL_STATE_SLUMBER);
- kgsl_pwrctrl_sleep(device);
- mutex_unlock(&device->mutex);
- KGSL_PWR_WARN(device, "early suspend end\n");
-}
-EXPORT_SYMBOL(kgsl_early_suspend_driver);
-
int kgsl_suspend_driver(struct platform_device *pdev,
pm_message_t state)
{
@@ -654,29 +640,6 @@
}
EXPORT_SYMBOL(kgsl_resume_driver);
-void kgsl_late_resume_driver(struct early_suspend *h)
-{
- struct kgsl_device *device = container_of(h,
- struct kgsl_device, display_off);
- KGSL_PWR_WARN(device, "late resume start\n");
- mutex_lock(&device->mutex);
- device->pwrctrl.restore_slumber = false;
- if (device->pwrscale.policy == NULL)
- kgsl_pwrctrl_pwrlevel_change(device, KGSL_PWRLEVEL_TURBO);
- if (kgsl_pwrctrl_wake(device) != 0)
- return;
- /*
- * We don't have a way to go directly from
- * a deeper sleep state to NAP, which is
- * the desired state here.
- */
- kgsl_pwrctrl_request_state(device, KGSL_STATE_NAP);
- kgsl_pwrctrl_sleep(device);
- mutex_unlock(&device->mutex);
- KGSL_PWR_WARN(device, "late resume end\n");
-}
-EXPORT_SYMBOL(kgsl_late_resume_driver);
-
/**
* kgsl_destroy_process_private() - Cleanup function to free process private
* @kref: - Pointer to object being destroyed's kref struct
@@ -946,7 +909,12 @@
result = device->ftbl->start(device);
if (result)
goto err_freedevpriv;
-
+	/*
+	 * Make sure the gates are open so that waiters don't block
+	 * until a suspend or fault tolerance (FT) sequence closes them.
+	 */
+ complete_all(&device->ft_gate);
+ complete_all(&device->hwaccess_gate);
kgsl_pwrctrl_set_state(device, KGSL_STATE_ACTIVE);
kgsl_active_count_put(device);
}
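
Note: ft_gate and hwaccess_gate are struct completion objects used as
gates: complete_all() leaves a gate "open" so wait_for_completion()
returns immediately, while the suspend/FT paths re-arm it to block new
waiters. A minimal sketch of the idiom with illustrative names (in
this kernel generation the re-arm primitive is INIT_COMPLETION();
later kernels renamed it reinit_completion()):

	#include <linux/completion.h>

	static DECLARE_COMPLETION(gate);	/* starts closed */

	static void gate_close(void)
	{
		INIT_COMPLETION(gate);		/* new waiters will block */
	}

	static void gate_open(void)
	{
		complete_all(&gate);	/* wake current and future waiters */
	}

	static void waiter(void)
	{
		wait_for_completion(&gate);	/* returns at once while open */
	}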
diff --git a/drivers/gpu/msm/kgsl.h b/drivers/gpu/msm/kgsl.h
index 94cc551..c7cbaf8 100644
--- a/drivers/gpu/msm/kgsl.h
+++ b/drivers/gpu/msm/kgsl.h
@@ -226,11 +226,8 @@
extern const struct dev_pm_ops kgsl_pm_ops;
-struct early_suspend;
int kgsl_suspend_driver(struct platform_device *pdev, pm_message_t state);
int kgsl_resume_driver(struct platform_device *pdev);
-void kgsl_early_suspend_driver(struct early_suspend *h);
-void kgsl_late_resume_driver(struct early_suspend *h);
void kgsl_trace_regwrite(struct kgsl_device *device, unsigned int offset,
unsigned int value);
diff --git a/drivers/gpu/msm/kgsl_device.h b/drivers/gpu/msm/kgsl_device.h
index 42c1475..e80721a 100644
--- a/drivers/gpu/msm/kgsl_device.h
+++ b/drivers/gpu/msm/kgsl_device.h
@@ -15,7 +15,6 @@
#include <linux/idr.h>
#include <linux/pm_qos.h>
-#include <linux/earlysuspend.h>
#include "kgsl.h"
#include "kgsl_mmu.h"
@@ -190,7 +189,6 @@
struct completion ft_gate;
struct dentry *d_debugfs;
struct idr context_idr;
- struct early_suspend display_off;
void *snapshot; /* Pointer to the snapshot memory region */
int snapshot_maxsize; /* Max size of the snapshot region */
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.c b/drivers/gpu/msm/kgsl_pwrctrl.c
index b124257..e071650 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.c
+++ b/drivers/gpu/msm/kgsl_pwrctrl.c
@@ -1021,7 +1021,6 @@
pwr->pm_qos_latency = 501;
pm_runtime_enable(device->parentdev);
- register_early_suspend(&device->display_off);
return result;
clk_err:
@@ -1041,7 +1040,6 @@
KGSL_PWR_INFO(device, "close device %d\n", device->id);
pm_runtime_disable(device->parentdev);
- unregister_early_suspend(&device->display_off);
clk_put(pwr->ebi1_clk);
@@ -1151,8 +1149,7 @@
KGSL_PWR_INFO(device, "idle timer expired device %d\n", device->id);
if (device->requested_state != KGSL_STATE_SUSPEND) {
- if (device->pwrctrl.restore_slumber ||
- device->pwrctrl.strtstp_sleepwake)
+ if (device->pwrctrl.strtstp_sleepwake)
kgsl_pwrctrl_request_state(device, KGSL_STATE_SLUMBER);
else
kgsl_pwrctrl_request_state(device, KGSL_STATE_SLEEP);
@@ -1448,16 +1445,11 @@
BUG_ON(!mutex_is_locked(&device->mutex));
if (device->active_cnt == 0) {
- if (device->requested_state == KGSL_STATE_SUSPEND ||
- device->state == KGSL_STATE_SUSPEND) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->hwaccess_gate);
- mutex_lock(&device->mutex);
- } else if (device->state == KGSL_STATE_DUMP_AND_FT) {
- mutex_unlock(&device->mutex);
- wait_for_completion(&device->ft_gate);
- mutex_lock(&device->mutex);
- }
+ mutex_unlock(&device->mutex);
+ wait_for_completion(&device->hwaccess_gate);
+ wait_for_completion(&device->ft_gate);
+ mutex_lock(&device->mutex);
+
ret = kgsl_pwrctrl_wake(device);
}
if (ret == 0)
diff --git a/drivers/gpu/msm/kgsl_pwrctrl.h b/drivers/gpu/msm/kgsl_pwrctrl.h
index b3e8702..2b986c8 100644
--- a/drivers/gpu/msm/kgsl_pwrctrl.h
+++ b/drivers/gpu/msm/kgsl_pwrctrl.h
@@ -58,7 +58,6 @@
* @nap_allowed - true if the device supports naps
* @idle_needed - true if the device needs a idle before clock change
* @irq_name - resource name for the IRQ
- * @restore_slumber - Flag to indicate that we are in a suspend/restore sequence
* @clk_stats - structure of clock statistics
* @pm_qos_req_dma - the power management quality of service structure
* @pm_qos_latency - allowed CPU latency in microseconds
@@ -86,7 +85,6 @@
unsigned int idle_needed;
const char *irq_name;
s64 time;
- unsigned int restore_slumber;
struct kgsl_clk_stats clk_stats;
struct pm_qos_request pm_qos_req_dma;
unsigned int pm_qos_latency;
diff --git a/drivers/media/platform/msm/vidc/msm_vidc_common.c b/drivers/media/platform/msm/vidc/msm_vidc_common.c
index ade8597..cf051b6 100644
--- a/drivers/media/platform/msm/vidc/msm_vidc_common.c
+++ b/drivers/media/platform/msm/vidc/msm_vidc_common.c
@@ -334,6 +334,7 @@
msecs_to_jiffies(HW_RESPONSE_TIMEOUT));
if (!rc) {
dprintk(VIDC_ERR, "Wait interrupted or timeout: %d\n", rc);
+ msm_comm_recover_from_session_error(inst);
rc = -EIO;
} else {
rc = 0;
@@ -364,7 +365,7 @@
{
struct msm_vidc_cb_cmd_done *response = data;
struct msm_vidc_inst *inst;
- if (response) {
+ if (response && !response->status) {
struct vidc_hal_session_init_done *session_init_done =
(struct vidc_hal_session_init_done *) response->data;
inst = (struct msm_vidc_inst *)response->session_id;
@@ -1251,7 +1252,8 @@
}
mutex_unlock(&temp->lock);
}
-
+ inst->state = MSM_VIDC_CORE_INVALID;
+ msm_comm_recover_from_session_error(inst);
return -ENOMEM;
}
@@ -1903,6 +1905,8 @@
if (!rc) {
dprintk(VIDC_ERR,
"Wait interrupted or timeout: %d\n", rc);
+ inst->state = MSM_VIDC_CORE_INVALID;
+ msm_comm_recover_from_session_error(inst);
rc = -EIO;
goto exit;
}
@@ -1963,6 +1967,13 @@
mutex_unlock(&inst->lock);
rc = wait_for_sess_signal_receipt(inst,
SESSION_RELEASE_BUFFER_DONE);
+ if (rc) {
+ mutex_lock(&inst->sync_lock);
+ inst->state = MSM_VIDC_CORE_INVALID;
+ mutex_unlock(&inst->sync_lock);
+ msm_comm_recover_from_session_error(
+ inst);
+ }
mutex_lock(&inst->lock);
}
list_del(&buf->list);
@@ -2027,6 +2038,13 @@
mutex_unlock(&inst->lock);
rc = wait_for_sess_signal_receipt(inst,
SESSION_RELEASE_BUFFER_DONE);
+ if (rc) {
+ mutex_lock(&inst->sync_lock);
+ inst->state = MSM_VIDC_CORE_INVALID;
+ mutex_unlock(&inst->sync_lock);
+ msm_comm_recover_from_session_error(
+ inst);
+ }
mutex_lock(&inst->lock);
}
list_del(&buf->list);
@@ -2412,6 +2430,20 @@
return rc;
}
+static void msm_comm_generate_sys_error(struct msm_vidc_inst *inst)
+{
+ struct msm_vidc_core *core;
+ enum command_response cmd = SYS_ERROR;
+ struct msm_vidc_cb_cmd_done response = {0};
+ if (!inst || !inst->core) {
+ dprintk(VIDC_ERR, "%s: invalid input parameters", __func__);
+ return;
+ }
+ core = inst->core;
+ response.device_id = (u32) core->id;
+ handle_sys_error(cmd, (void *) &response);
+
+}
int msm_comm_recover_from_session_error(struct msm_vidc_inst *inst)
{
struct hfi_device *hdev;
@@ -2421,6 +2453,12 @@
dprintk(VIDC_ERR, "%s: invalid input parameters", __func__);
return -EINVAL;
}
+ if (inst->state < MSM_VIDC_OPEN_DONE) {
+ dprintk(VIDC_WARN,
+ "No corresponding FW session. No need to send Abort");
+ inst->state = MSM_VIDC_CORE_INVALID;
+ return rc;
+ }
hdev = inst->core->device;
init_completion(&inst->completions[SESSION_MSG_INDEX
@@ -2434,10 +2472,14 @@
dprintk(VIDC_ERR, "session_abort failed rc: %d\n", rc);
return rc;
}
-
- rc = wait_for_sess_signal_receipt(inst, SESSION_ABORT_DONE);
- if (rc)
+ rc = wait_for_completion_timeout(
+ &inst->completions[SESSION_MSG_INDEX(SESSION_ABORT_DONE)],
+ msecs_to_jiffies(HW_RESPONSE_TIMEOUT));
+ if (!rc) {
dprintk(VIDC_ERR, "%s: Wait interrupted or timeout: %d\n",
__func__, rc);
+ msm_comm_generate_sys_error(inst);
+ } else {
+ change_inst_state(inst, MSM_VIDC_CLOSE_DONE);
+ }
return rc;
}
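
Unlike wait_for_sess_signal_receipt(), wait_for_completion_timeout() returns the remaining jiffies, so 0 means the firmware never acknowledged the abort and the driver escalates to a full sys-error teardown. A minimal sketch of that contract (done, recover and finish are hypothetical stand-ins, not driver functions):

	unsigned long left;

	left = wait_for_completion_timeout(&done,
			msecs_to_jiffies(HW_RESPONSE_TIMEOUT));
	if (!left)
		recover();	/* timed out: escalate to a sys error */
	else
		finish();	/* signalled in time: mark the session closed */
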
diff --git a/drivers/misc/qseecom.c b/drivers/misc/qseecom.c
index fcadc30..fa28d6a 100644
--- a/drivers/misc/qseecom.c
+++ b/drivers/misc/qseecom.c
@@ -1918,6 +1918,10 @@
qclk = &qseecom.ce_drv;
mutex_lock(&clk_access_lock);
+
+ if (qclk->clk_access_cnt == ULONG_MAX)
+ goto err;
+
if (qclk->clk_access_cnt > 0) {
qclk->clk_access_cnt++;
mutex_unlock(&clk_access_lock);
@@ -1965,6 +1969,12 @@
qclk = &qseecom.ce_drv;
mutex_lock(&clk_access_lock);
+
+ if (qclk->clk_access_cnt == 0) {
+ mutex_unlock(&clk_access_lock);
+ return;
+ }
+
if (qclk->clk_access_cnt == 1) {
if (qclk->ce_clk != NULL)
clk_disable_unprepare(qclk->ce_clk);
@@ -2495,8 +2505,8 @@
ireq.pipe = set_key_para->pipe;
ireq.flags = set_key_para->flags;
- /* set PIPE_ENC */
- ireq.pipe_type = QSEOS_PIPE_ENC;
+ /* set both PIPE_ENC and PIPE_ENC_XTS */
+ ireq.pipe_type = QSEOS_PIPE_ENC | QSEOS_PIPE_ENC_XTS;
if (set_key_para->set_clear_key_flag ==
QSEECOM_SET_CE_KEY_CMD)
@@ -2513,17 +2523,6 @@
return ret;
}
- /* set PIPE_ENC_XTS */
- ireq.pipe_type = QSEOS_PIPE_ENC_XTS;
- ret = scm_call(SCM_SVC_TZSCHEDULER, 1,
- &ireq, sizeof(struct qseecom_key_select_ireq),
- &resp, sizeof(struct qseecom_command_scm_resp));
- if (ret) {
- pr_err("scm call to set QSEOS_PIPE_ENC_XTS key failed : %d\n",
- ret);
- return ret;
- }
-
switch (resp.result) {
case QSEOS_RESULT_SUCCESS:
break;
diff --git a/drivers/power/pm8921-bms.c b/drivers/power/pm8921-bms.c
index c09373a..9b3973b 100644
--- a/drivers/power/pm8921-bms.c
+++ b/drivers/power/pm8921-bms.c
@@ -177,6 +177,9 @@
int vbatt_cutoff_count;
int low_voltage_detect;
int vbatt_cutoff_retries;
+ bool first_report_after_suspend;
+ bool soc_updated_on_resume;
+ int last_soc_at_suspend;
};
/*
@@ -2387,10 +2390,11 @@
rbatt, fcc_uah, unusable_charge_uah, cc_uah);
pr_debug("calculated SOC = %d\n", new_calculated_soc);
- if (new_calculated_soc != calculated_soc)
+ if (new_calculated_soc != calculated_soc) {
+ calculated_soc = new_calculated_soc;
update_power_supply(chip);
+ }
- calculated_soc = new_calculated_soc;
firsttime = 0;
get_current_time(&chip->last_recalc_time);
@@ -2495,15 +2499,31 @@
/* last_soc < soc ... scale and catch up */
if (last_soc != -EINVAL && last_soc < soc && soc != 100)
- soc = scale_soc_while_chg(chip, delta_time_us,
- soc, last_soc);
+ soc = scale_soc_while_chg(chip, delta_time_us, soc, last_soc);
- /* restrict soc to 1% change */
if (last_soc != -EINVAL) {
- if (soc < last_soc && soc != 0)
+ if (chip->first_report_after_suspend) {
+ chip->first_report_after_suspend = false;
+ if (chip->soc_updated_on_resume) {
+ /* coming here after a long suspend */
+ chip->soc_updated_on_resume = false;
+ if (last_soc < soc)
+ /*
+ * if soc has falsely increased during suspend,
+ * fall back to the soc recorded at suspend
+ */
+ soc = chip->last_soc_at_suspend;
+ } else {
+ /*
+ * suspended for a short time
+ * report the last_soc before suspend
+ */
+ soc = chip->last_soc_at_suspend;
+ }
+ } else if (soc < last_soc && soc != 0) {
soc = last_soc - 1;
- if (soc > last_soc && soc != 100)
+ } else if (soc > last_soc && soc != 100) {
soc = last_soc + 1;
+ }
}
last_soc = bound_soc(soc);
@@ -3560,12 +3580,11 @@
static int pm8921_bms_suspend(struct device *dev)
{
- /*
- * set the last reported soc to invalid, so that
- * next time we resume we don't want to restrict
- * the decrease of soc by only 1%
- */
- last_soc = -EINVAL;
+ struct pm8921_bms_chip *chip = dev_get_drvdata(dev);
+
+ cancel_delayed_work_sync(&chip->calculate_soc_delayed_work);
+
+ chip->last_soc_at_suspend = last_soc;
return 0;
}
@@ -3575,22 +3594,30 @@
int rc;
unsigned long time_since_last_recalc;
unsigned long tm_now_sec;
+ struct pm8921_bms_chip *chip = dev_get_drvdata(dev);
rc = get_current_time(&tm_now_sec);
if (rc) {
pr_err("Could not read current time: %d\n", rc);
return 0;
}
- if (tm_now_sec > the_chip->last_recalc_time) {
+
+ if (tm_now_sec > chip->last_recalc_time) {
time_since_last_recalc = tm_now_sec -
- the_chip->last_recalc_time;
+ chip->last_recalc_time;
pr_debug("Time since last recalc: %lu\n",
time_since_last_recalc);
- if (time_since_last_recalc >= the_chip->soc_calc_period) {
- the_chip->last_recalc_time = tm_now_sec;
- recalculate_soc(the_chip);
+ if ((time_since_last_recalc * 1000) >=
+ chip->soc_calc_period) {
+ chip->last_recalc_time = tm_now_sec;
+ recalculate_soc(chip);
+ chip->soc_updated_on_resume = true;
}
}
+ chip->first_report_after_suspend = true;
+ update_power_supply(chip);
+ schedule_delayed_work(&chip->calculate_soc_delayed_work,
+ msecs_to_jiffies(chip->soc_calc_period));
return 0;
}
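
last_recalc_time is kept in seconds while soc_calc_period appears to be in milliseconds (as suggested by the msecs_to_jiffies() use above), hence the new `* 1000` before the comparison; the old code compared mismatched units. A small sketch of the corrected unit handling (variable names assumed):

	unsigned long elapsed_sec = tm_now_sec - last_recalc_sec;

	/* soc_calc_period is in ms; scale the elapsed seconds to match */
	if (elapsed_sec * 1000 >= soc_calc_period_ms)
		recalculate_soc(chip);
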
diff --git a/drivers/power/qpnp-bms.c b/drivers/power/qpnp-bms.c
index e9f4ebe..4f9254f 100644
--- a/drivers/power/qpnp-bms.c
+++ b/drivers/power/qpnp-bms.c
@@ -140,9 +140,11 @@
struct delayed_work calculate_soc_delayed_work;
struct work_struct recalc_work;
+ struct work_struct battery_insertion_work;
struct mutex bms_output_lock;
struct mutex last_ocv_uv_mutex;
+ struct mutex vbat_monitor_mutex;
struct mutex soc_invalidation_mutex;
struct mutex last_soc_mutex;
@@ -155,12 +157,12 @@
int shutdown_iavg_ma;
struct wake_lock low_voltage_wake_lock;
- bool low_voltage_wake_lock_held;
int low_voltage_threshold;
int low_soc_calc_threshold;
int low_soc_calculate_soc_ms;
int calculate_soc_ms;
struct wake_lock soc_wake_lock;
+ struct wake_lock cv_wake_lock;
uint16_t ocv_reading_at_100;
uint16_t prev_last_good_ocv_raw;
@@ -186,6 +188,7 @@
int catch_up_time_sec;
struct single_row_lut *adjusted_fcc_temp_lut;
+ struct qpnp_adc_tm_btm_param vbat_monitor_params;
unsigned int vadc_v0625;
unsigned int vadc_v1250;
@@ -197,6 +200,7 @@
int calculated_soc;
int prev_voltage_based_soc;
bool use_voltage_soc;
+ bool in_cv_range;
int prev_batt_terminal_uv;
int high_ocv_correction_limit_uv;
@@ -586,6 +590,24 @@
return false;
}
+static bool is_battery_present(struct qpnp_bms_chip *chip)
+{
+ union power_supply_propval ret = {0,};
+
+ if (chip->batt_psy == NULL)
+ chip->batt_psy = power_supply_get_by_name("battery");
+ if (chip->batt_psy) {
+ /* if battery has been registered, use the status property */
+ chip->batt_psy->get_property(chip->batt_psy,
+ POWER_SUPPLY_PROP_PRESENT, &ret);
+ return ret.intval;
+ }
+
+ /* Default to false if the battery power supply is not registered. */
+ pr_debug("battery power supply is not registered\n");
+ return false;
+}
+
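
is_battery_present() centralizes the power_supply lookup that probe used to open-code (see the removed block near the end of this file's diff). The underlying query pattern, shown in isolation as a sketch:

	union power_supply_propval val = {0, };
	struct power_supply *psy = power_supply_get_by_name("battery");

	if (psy)	/* registered: ask the battery driver directly */
		psy->get_property(psy, POWER_SUPPLY_PROP_PRESENT, &val);
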
static bool is_battery_full(struct qpnp_bms_chip *chip)
{
union power_supply_propval ret = {0,};
@@ -1307,25 +1329,26 @@
module_param_cb(bms_reset, &bms_reset_ops, &bms_reset, 0644);
+#define VBATT_ERROR_MARGIN 20000
static int charging_adjustments(struct qpnp_bms_chip *chip,
struct soc_params *params, int soc,
int vbat_uv, int ibat_ua, int batt_temp)
{
- int chg_soc;
- int batt_terminal_uv = vbat_uv + (ibat_ua * chip->r_conn_mohm) / 1000;
+ int chg_soc, batt_terminal_uv;
+
+ batt_terminal_uv = vbat_uv + VBATT_ERROR_MARGIN
+ + (ibat_ua * chip->r_conn_mohm) / 1000;
if (chip->soc_at_cv == -EINVAL) {
- /* In constant current charging return the calc soc */
- if (batt_terminal_uv <= chip->max_voltage_uv)
- pr_debug("CC CHG SOC %d\n", soc);
-
- /* Note the CC to CV point */
if (batt_terminal_uv >= chip->max_voltage_uv) {
chip->soc_at_cv = soc;
chip->prev_chg_soc = soc;
chip->ibat_at_cv_ua = ibat_ua;
pr_debug("CC_TO_CV ibat_ua = %d CHG SOC %d\n",
ibat_ua, soc);
+ } else {
+ /* In constant current charging return the calc soc */
+ pr_debug("CC CHG SOC %d\n", soc);
}
chip->prev_batt_terminal_uv = batt_terminal_uv;
@@ -1379,18 +1402,40 @@
* a wakelock until soc = 0%
*/
if (vbat_uv <= chip->low_voltage_threshold
- && !chip->low_voltage_wake_lock_held) {
+ && !wake_lock_active(&chip->low_voltage_wake_lock)) {
pr_debug("voltage = %d low holding wakelock\n", vbat_uv);
wake_lock(&chip->low_voltage_wake_lock);
- chip->low_voltage_wake_lock_held = 1;
} else if (vbat_uv > chip->low_voltage_threshold
- && chip->low_voltage_wake_lock_held) {
+ && wake_lock_active(&chip->low_voltage_wake_lock)) {
pr_debug("voltage = %d releasing wakelock\n", vbat_uv);
- chip->low_voltage_wake_lock_held = 0;
wake_unlock(&chip->low_voltage_wake_lock);
}
}
+static void cv_voltage_check(struct qpnp_bms_chip *chip, int vbat_uv)
+{
+ /*
+ * Hold a wakelock while charging is in or near the constant
+ * voltage (CV) phase; release it once CV is reached or
+ * charging stops.
+ */
+ if (wake_lock_active(&chip->cv_wake_lock)) {
+ if (chip->soc_at_cv != -EINVAL) {
+ pr_debug("hit CV, releasing cv wakelock\n");
+ wake_unlock(&chip->cv_wake_lock);
+ } else if (!is_battery_charging(chip)) {
+ pr_debug("charging stopped, releasing cv wakelock\n");
+ wake_unlock(&chip->cv_wake_lock);
+ }
+ } else if (vbat_uv > chip->max_voltage_uv - VBATT_ERROR_MARGIN
+ && chip->soc_at_cv == -EINVAL
+ && is_battery_charging(chip)
+ && !wake_lock_active(&chip->cv_wake_lock)) {
+ pr_debug("voltage = %d holding cv wakelock\n", vbat_uv);
+ wake_lock(&chip->cv_wake_lock);
+ }
+}
+
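
The arming condition distills to three checks: close to the charger's maximum voltage, the CC-to-CV point not yet recorded, and actively charging. With an assumed max_voltage_uv of 4200000, the 20000 uV margin arms the lock at 4.18 V. A sketch of the predicate (hypothetical helper name):

	static bool should_arm_cv_lock(struct qpnp_bms_chip *chip, int vbat_uv)
	{
		return vbat_uv > chip->max_voltage_uv - VBATT_ERROR_MARGIN
			&& chip->soc_at_cv == -EINVAL	/* CV not reached yet */
			&& is_battery_charging(chip);
	}
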
#define NO_ADJUST_HIGH_SOC_THRESHOLD 90
static int adjust_soc(struct qpnp_bms_chip *chip, struct soc_params *params,
int soc, int batt_temp)
@@ -1414,6 +1459,7 @@
}
very_low_voltage_check(chip, vbat_uv);
+ cv_voltage_check(chip, vbat_uv);
delta_ocv_uv_limit = DIV_ROUND_CLOSEST(ibat_ua, 1000);
@@ -1565,7 +1611,7 @@
int shutdown_soc, new_calculated_soc, remaining_usable_charge_uah;
struct soc_params params;
- if (!chip->battery_present) {
+ if (!is_battery_present(chip)) {
pr_debug("battery gone, reporting 100\n");
new_calculated_soc = 100;
goto done_calculating;
@@ -1731,6 +1777,9 @@
if (!wake_lock_active(&chip->soc_wake_lock))
wake_lock(&chip->soc_wake_lock);
+ mutex_lock(&chip->vbat_monitor_mutex);
+ qpnp_adc_tm_channel_measure(&chip->vbat_monitor_params);
+ mutex_unlock(&chip->vbat_monitor_mutex);
if (chip->use_voltage_soc) {
soc = calculate_soc_from_voltage(chip);
} else {
@@ -1772,7 +1821,7 @@
int soc = recalculate_soc(chip);
if (soc < chip->low_soc_calc_threshold
- || chip->low_voltage_wake_lock_held)
+ || wake_lock_active(&chip->low_voltage_wake_lock))
schedule_delayed_work(&chip->calculate_soc_delayed_work,
round_jiffies_relative(msecs_to_jiffies
(chip->low_soc_calculate_soc_ms)));
@@ -1972,6 +2021,226 @@
return report_cc_based_soc(chip);
}
+static void configure_vbat_monitor_low(struct qpnp_bms_chip *chip)
+{
+ mutex_lock(&chip->vbat_monitor_mutex);
+ if (chip->vbat_monitor_params.state_request
+ == ADC_TM_HIGH_LOW_THR_ENABLE) {
+ /*
+ * Battery is now around or below v_cutoff
+ */
+ pr_debug("battery entered cutoff range\n");
+ if (!wake_lock_active(&chip->low_voltage_wake_lock)) {
+ pr_debug("voltage low, holding wakelock\n");
+ wake_lock(&chip->low_voltage_wake_lock);
+ cancel_delayed_work_sync(
+ &chip->calculate_soc_delayed_work);
+ schedule_delayed_work(
+ &chip->calculate_soc_delayed_work, 0);
+ }
+ chip->vbat_monitor_params.state_request =
+ ADC_TM_HIGH_THR_ENABLE;
+ chip->vbat_monitor_params.high_thr =
+ (chip->low_voltage_threshold + VBATT_ERROR_MARGIN);
+ chip->vbat_monitor_params.low_thr = 0;
+ pr_debug("set low thr to %d and high to %d\n",
+ chip->vbat_monitor_params.low_thr,
+ chip->vbat_monitor_params.high_thr);
+ } else if (chip->vbat_monitor_params.state_request
+ == ADC_TM_LOW_THR_ENABLE) {
+ /*
+ * Battery is in normal operation range.
+ */
+ pr_debug("battery entered normal range\n");
+ if (wake_lock_active(&chip->cv_wake_lock)) {
+ wake_unlock(&chip->cv_wake_lock);
+ pr_debug("releasing cv wake lock\n");
+ }
+ chip->in_cv_range = false;
+ chip->vbat_monitor_params.state_request =
+ ADC_TM_HIGH_LOW_THR_ENABLE;
+ chip->vbat_monitor_params.high_thr = chip->max_voltage_uv
+ - VBATT_ERROR_MARGIN;
+ chip->vbat_monitor_params.low_thr =
+ chip->low_voltage_threshold;
+ pr_debug("set low thr to %d and high to %d\n",
+ chip->vbat_monitor_params.low_thr,
+ chip->vbat_monitor_params.high_thr);
+ }
+ qpnp_adc_tm_channel_measure(&chip->vbat_monitor_params);
+ mutex_unlock(&chip->vbat_monitor_mutex);
+}
+
+#define CV_LOW_THRESHOLD_HYST_UV 100000
+static void configure_vbat_monitor_high(struct qpnp_bms_chip *chip)
+{
+ mutex_lock(&chip->vbat_monitor_mutex);
+ if (chip->vbat_monitor_params.state_request
+ == ADC_TM_HIGH_LOW_THR_ENABLE) {
+ /*
+ * Battery is around vddmax
+ */
+ pr_debug("battery entered vddmax range\n");
+ chip->in_cv_range = true;
+ if (!wake_lock_active(&chip->cv_wake_lock)) {
+ wake_lock(&chip->cv_wake_lock);
+ pr_debug("holding cv wake lock\n");
+ }
+ schedule_work(&chip->recalc_work);
+ chip->vbat_monitor_params.state_request =
+ ADC_TM_LOW_THR_ENABLE;
+ chip->vbat_monitor_params.low_thr =
+ (chip->max_voltage_uv - CV_LOW_THRESHOLD_HYST_UV);
+ chip->vbat_monitor_params.high_thr = chip->max_voltage_uv * 2;
+ pr_debug("set low thr to %d and high to %d\n",
+ chip->vbat_monitor_params.low_thr,
+ chip->vbat_monitor_params.high_thr);
+ } else if (chip->vbat_monitor_params.state_request
+ == ADC_TM_HIGH_THR_ENABLE) {
+ /*
+ * Battery is in normal operation range.
+ */
+ pr_debug("battery entered normal range\n");
+ if (wake_lock_active(&chip->low_voltage_wake_lock)) {
+ pr_debug("voltage high, releasing wakelock\n");
+ wake_unlock(&chip->low_voltage_wake_lock);
+ }
+ chip->vbat_monitor_params.state_request =
+ ADC_TM_HIGH_LOW_THR_ENABLE;
+ chip->vbat_monitor_params.high_thr =
+ chip->max_voltage_uv - VBATT_ERROR_MARGIN;
+ chip->vbat_monitor_params.low_thr =
+ chip->low_voltage_threshold;
+ pr_debug("set low thr to %d and high to %d\n",
+ chip->vbat_monitor_params.low_thr,
+ chip->vbat_monitor_params.high_thr);
+ }
+ qpnp_adc_tm_channel_measure(&chip->vbat_monitor_params);
+ mutex_unlock(&chip->vbat_monitor_mutex);
+}
+
+static void btm_notify_vbat(enum qpnp_tm_state state, void *ctx)
+{
+ struct qpnp_bms_chip *chip = ctx;
+ int vbat_uv;
+ struct qpnp_vadc_result result;
+ int rc;
+
+ rc = qpnp_vadc_read(VBAT_SNS, &result);
+ if (rc)
+ pr_err("failed to read vbat rc=%d\n", rc);
+ pr_debug("vbat = %lld, raw = 0x%x\n", result.physical, result.adc_code);
+
+ get_battery_voltage(&vbat_uv);
+ pr_debug("vbat is at %d, state is at %d\n", vbat_uv, state);
+
+ if (state == ADC_TM_LOW_STATE) {
+ pr_debug("low voltage btm notification triggered\n");
+ if (vbat_uv - VBATT_ERROR_MARGIN
+ < chip->vbat_monitor_params.low_thr) {
+ configure_vbat_monitor_low(chip);
+ } else {
+ pr_debug("faulty btm trigger, discarding\n");
+ qpnp_adc_tm_channel_measure(
+ &chip->vbat_monitor_params);
+ }
+ } else if (state == ADC_TM_HIGH_STATE) {
+ pr_debug("high voltage btm notification triggered\n");
+ if (vbat_uv + VBATT_ERROR_MARGIN
+ > chip->vbat_monitor_params.high_thr) {
+ configure_vbat_monitor_high(chip);
+ } else {
+ pr_debug("faulty btm trigger, discarding\n");
+ qpnp_adc_tm_channel_measure(
+ &chip->vbat_monitor_params);
+ }
+ } else {
+ pr_debug("unknown voltage notification state: %d\n", state);
+ }
+ power_supply_changed(&chip->bms_psy);
+}
+
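
Both branches apply the same plausibility filter before acting: a trigger only counts if vbat, padded by VBATT_ERROR_MARGIN, actually crossed the programmed threshold; otherwise the measurement is re-armed and the event dropped. For an assumed low_thr of 3400000 uV, a LOW trigger at 3450000 uV is discarded because 3450000 - 20000 = 3430000 is still above the threshold:

	/* sketch of the LOW-side check with assumed values */
	bool genuine = vbat_uv - VBATT_ERROR_MARGIN
			< chip->vbat_monitor_params.low_thr;
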
+static int reset_vbat_monitoring(struct qpnp_bms_chip *chip)
+{
+ int rc;
+
+ chip->vbat_monitor_params.state_request = ADC_TM_HIGH_LOW_THR_DISABLE;
+ rc = qpnp_adc_tm_channel_measure(&chip->vbat_monitor_params);
+ if (rc) {
+ pr_err("tm measure failed: %d\n", rc);
+ return rc;
+ }
+ mutex_lock(&chip->vbat_monitor_mutex);
+ if (wake_lock_active(&chip->low_voltage_wake_lock)) {
+ pr_debug("battery removed, releasing wakelock\n");
+ wake_unlock(&chip->low_voltage_wake_lock);
+ }
+ if (chip->in_cv_range) {
+ pr_debug("battery removed, removing in_cv_range state\n");
+ chip->in_cv_range = false;
+ }
+ mutex_unlock(&chip->vbat_monitor_mutex);
+ return 0;
+}
+
+static int setup_vbat_monitoring(struct qpnp_bms_chip *chip)
+{
+ int rc;
+
+ rc = qpnp_adc_tm_is_ready();
+ if (rc) {
+ pr_info("adc tm is not ready yet: %d, defer probe\n", rc);
+ return -EPROBE_DEFER;
+ }
+
+ if (!is_battery_present(chip)) {
+ pr_debug("no battery inserted, do not setup vbat monitoring\n");
+ return 0;
+ }
+
+ chip->vbat_monitor_params.low_thr = chip->low_voltage_threshold;
+ chip->vbat_monitor_params.high_thr = chip->max_voltage_uv
+ - VBATT_ERROR_MARGIN;
+ chip->vbat_monitor_params.state_request = ADC_TM_HIGH_LOW_THR_ENABLE;
+ chip->vbat_monitor_params.channel = VBAT_SNS;
+ chip->vbat_monitor_params.btm_ctx = (void *)chip;
+ chip->vbat_monitor_params.timer_interval = ADC_MEAS1_INTERVAL_1S;
+ chip->vbat_monitor_params.threshold_notification = &btm_notify_vbat;
+ pr_debug("set low thr to %d and high to %d\n",
+ chip->vbat_monitor_params.low_thr,
+ chip->vbat_monitor_params.high_thr);
+ rc = qpnp_adc_tm_channel_measure(&chip->vbat_monitor_params);
+ if (rc) {
+ pr_err("tm setup failed: %d\n", rc);
+ return rc;
+ }
+ pr_debug("setup complete\n");
+ return 0;
+}
+
+static void battery_insertion_work(struct work_struct *work)
+{
+ struct qpnp_bms_chip *chip = container_of(work,
+ struct qpnp_bms_chip,
+ battery_insertion_work);
+ bool present = is_battery_present(chip);
+
+ mutex_lock(&chip->vbat_monitor_mutex);
+ if (chip->battery_present != present) {
+ if (chip->battery_present != -EINVAL) {
+ if (present) {
+ setup_vbat_monitoring(chip);
+ chip->new_battery = true;
+ } else {
+ reset_vbat_monitoring(chip);
+ }
+ }
+ chip->battery_present = present;
+ /* a new battery was inserted or removed, so force a soc
+ * recalculation to update the SoC */
+ schedule_work(&chip->recalc_work);
+ }
+ mutex_unlock(&chip->vbat_monitor_mutex);
+}
+
/* Returns capacity as a SoC percentage between 0 and 100 */
static int get_prop_bms_capacity(struct qpnp_bms_chip *chip)
{
@@ -2005,19 +2274,12 @@
static int get_prop_bms_present(struct qpnp_bms_chip *chip)
{
- return chip->battery_present;
+ return is_battery_present(chip);
}
static void set_prop_bms_present(struct qpnp_bms_chip *chip, int present)
{
- if (chip->battery_present != present) {
- chip->battery_present = present;
- if (present)
- chip->new_battery = true;
- /* a new battery was inserted or removed, so force a soc
- * recalculation to update the SoC */
- schedule_work(&chip->recalc_work);
- }
+ schedule_work(&chip->battery_insertion_work);
}
static void qpnp_bms_external_power_changed(struct power_supply *psy)
@@ -2308,6 +2570,7 @@
chip->calculated_soc = -EINVAL;
chip->last_soc = -EINVAL;
chip->last_soc_est = -EINVAL;
+ chip->battery_present = -EINVAL;
chip->last_cc_uah = INT_MIN;
chip->ocv_reading_at_100 = OCV_RAW_UNINITIALIZED;
chip->prev_last_good_ocv_raw = OCV_RAW_UNINITIALIZED;
@@ -2455,7 +2718,6 @@
static int __devinit qpnp_bms_probe(struct spmi_device *spmi)
{
struct qpnp_bms_chip *chip;
- union power_supply_propval retval = {0,};
bool warm_reset;
int rc, vbatt;
@@ -2469,12 +2731,14 @@
rc = qpnp_vadc_is_ready();
if (rc) {
pr_info("vadc not ready: %d, deferring probe\n", rc);
+ rc = -EPROBE_DEFER;
goto error_read;
}
rc = qpnp_iadc_is_ready();
if (rc) {
pr_info("iadc not ready: %d, deferring probe\n", rc);
+ rc = -EPROBE_DEFER;
goto error_read;
}
@@ -2537,6 +2801,7 @@
mutex_init(&chip->bms_output_lock);
mutex_init(&chip->last_ocv_uv_mutex);
+ mutex_init(&chip->vbat_monitor_mutex);
mutex_init(&chip->soc_invalidation_mutex);
mutex_init(&chip->last_soc_mutex);
@@ -2544,24 +2809,22 @@
"qpnp_soc_lock");
wake_lock_init(&chip->low_voltage_wake_lock, WAKE_LOCK_SUSPEND,
"qpnp_low_voltage_lock");
+ wake_lock_init(&chip->cv_wake_lock, WAKE_LOCK_SUSPEND,
+ "qpnp_cv_lock");
INIT_DELAYED_WORK(&chip->calculate_soc_delayed_work,
calculate_soc_work);
INIT_WORK(&chip->recalc_work, recalculate_work);
+ INIT_WORK(&chip->battery_insertion_work, battery_insertion_work);
read_shutdown_soc_and_iavg(chip);
dev_set_drvdata(&spmi->dev, chip);
device_init_wakeup(&spmi->dev, 1);
- if (!chip->batt_psy)
- chip->batt_psy = power_supply_get_by_name("battery");
- if (chip->batt_psy) {
- chip->batt_psy->get_property(chip->batt_psy,
- POWER_SUPPLY_PROP_PRESENT, &retval);
- chip->battery_present = retval.intval;
- pr_debug("present = %d\n", chip->battery_present);
- } else {
- chip->battery_present = 1;
+ rc = setup_vbat_monitoring(chip);
+ if (rc < 0) {
+ pr_err("failed to set up voltage notifications: %d\n", rc);
+ goto error_setup;
}
calculate_soc_work(&(chip->calculate_soc_delayed_work.work));
@@ -2599,10 +2862,12 @@
return 0;
unregister_dc:
+ power_supply_unregister(&chip->bms_psy);
+error_setup:
+ dev_set_drvdata(&spmi->dev, NULL);
wake_lock_destroy(&chip->soc_wake_lock);
wake_lock_destroy(&chip->low_voltage_wake_lock);
- power_supply_unregister(&chip->bms_psy);
- dev_set_drvdata(&spmi->dev, NULL);
+ wake_lock_destroy(&chip->cv_wake_lock);
error_resource:
error_read:
kfree(chip);
diff --git a/drivers/power/qpnp-charger.c b/drivers/power/qpnp-charger.c
index 089c129..4be1760 100644
--- a/drivers/power/qpnp-charger.c
+++ b/drivers/power/qpnp-charger.c
@@ -81,6 +81,7 @@
#define CHGR_BAT_IF_VCP 0x42
#define CHGR_BAT_IF_BATFET_CTRL1 0x90
#define CHGR_MISC_BOOT_DONE 0x42
+#define CHGR_BUCK_COMPARATOR_OVRIDE_1 0xEB
#define CHGR_BUCK_COMPARATOR_OVRIDE_3 0xED
#define CHGR_BUCK_BCK_VBAT_REG_MODE 0x74
#define MISC_REVISION2 0x01
@@ -242,6 +243,7 @@
unsigned int chg_fastchg_irq;
unsigned int chg_trklchg_irq;
unsigned int chg_failed_irq;
+ unsigned int chg_vbatdet_lo_irq;
unsigned int batt_pres_irq;
bool bat_is_cool;
bool bat_is_warm;
@@ -260,7 +262,7 @@
unsigned int warm_bat_mv;
unsigned int cool_bat_mv;
unsigned int resume_delta_mv;
- unsigned int term_current;
+ int term_current;
unsigned int maxinput_usb_ma;
unsigned int maxinput_dc_ma;
unsigned int warm_bat_decidegc;
@@ -280,6 +282,8 @@
struct qpnp_adc_tm_btm_param adc_param;
struct work_struct adc_measure_work;
struct delayed_work arb_stop_work;
+ struct delayed_work eoc_work;
+ struct wake_lock eoc_wake_lock;
};
static struct of_device_id qpnp_charger_match_table[] = {
@@ -557,6 +561,53 @@
disable ? CHGR_ON_BAT_FORCE_BIT : 0, 1);
}
+#define COMPATATOR_OVERRIDE_0 0x80
+static int
+qpnp_chg_toggle_chg_done_logic(struct qpnp_chg_chip *chip, int enable)
+{
+ int rc;
+
+ pr_debug("toggle: %d\n", enable);
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + SEC_ACCESS, 0xA5, 0xA5, 1);
+ if (rc) {
+ pr_debug("failed to write sec access rc=%d\n", rc);
+ return rc;
+ }
+
+ rc = qpnp_chg_masked_write(chip,
+ chip->buck_base + CHGR_BUCK_COMPARATOR_OVRIDE_1,
+ 0xC0, enable ? 0x00 : COMPATATOR_OVERRIDE_0, 1);
+ if (rc) {
+ pr_debug("failed to toggle chg done override rc=%d\n", rc);
+ return rc;
+ }
+
+ return rc;
+}
+
+#define QPNP_CHG_VBATDET_MIN_MV 3240
+#define QPNP_CHG_VBATDET_MAX_MV 5780
+#define QPNP_CHG_VBATDET_STEP_MV 20
+static int
+qpnp_chg_vbatdet_set(struct qpnp_chg_chip *chip, int vbatdet_mv)
+{
+ u8 temp;
+
+ if (vbatdet_mv < QPNP_CHG_VBATDET_MIN_MV
+ || vbatdet_mv > QPNP_CHG_VBATDET_MAX_MV) {
+ pr_err("bad mV=%d asked to set\n", vbatdet_mv);
+ return -EINVAL;
+ }
+ temp = (vbatdet_mv - QPNP_CHG_VBATDET_MIN_MV)
+ / QPNP_CHG_VBATDET_STEP_MV;
+
+ pr_debug("voltage=%d setting %02x\n", vbatdet_mv, temp);
+ return qpnp_chg_write(chip, &temp,
+ chip->chgr_base + CHGR_VBAT_DET, 1);
+}
+
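
CHGR_VBAT_DET takes a linear code: 3240 mV encodes as 0 and each increment adds 20 mV, giving the 3240-5780 mV range checked above. A worked example with an assumed 4100 mV resume threshold:

	/* (4100 - 3240) / 20 = 43 = 0x2B written to CHGR_VBAT_DET */
	u8 code = (4100 - QPNP_CHG_VBATDET_MIN_MV) / QPNP_CHG_VBATDET_STEP_MV;
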
static void
qpnp_arb_stop_work(struct work_struct *work)
{
@@ -564,7 +615,8 @@
struct qpnp_chg_chip *chip = container_of(dwork,
struct qpnp_chg_chip, arb_stop_work);
- qpnp_chg_charge_en(chip, !chip->charging_disabled);
+ if (!chip->chg_done)
+ qpnp_chg_charge_en(chip, !chip->charging_disabled);
qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
}
@@ -578,6 +630,36 @@
pr_err("request ADC error\n");
}
+#define EOC_CHECK_PERIOD_MS 10000
+static irqreturn_t
+qpnp_chg_vbatdet_lo_irq_handler(int irq, void *_chip)
+{
+ struct qpnp_chg_chip *chip = _chip;
+ u8 chg_sts = 0;
+ int rc;
+
+ pr_debug("vbatdet-lo triggered\n");
+
+ rc = qpnp_chg_read(chip, &chg_sts, INT_RT_STS(chip->chgr_base), 1);
+ if (rc)
+ pr_err("failed to read chg_sts rc=%d\n", rc);
+
+ pr_debug("chg_done chg_sts: 0x%x triggered\n", chg_sts);
+ if (!chip->charging_disabled && (chg_sts & FAST_CHG_ON_IRQ)) {
+ schedule_delayed_work(&chip->eoc_work,
+ msecs_to_jiffies(EOC_CHECK_PERIOD_MS));
+ wake_lock(&chip->eoc_wake_lock);
+ disable_irq_nosync(chip->chg_vbatdet_lo_irq);
+ } else {
+ qpnp_chg_charge_en(chip, !chip->charging_disabled);
+ }
+
+ power_supply_changed(chip->usb_psy);
+ power_supply_changed(&chip->dc_psy);
+ power_supply_changed(&chip->batt_psy);
+ return IRQ_HANDLED;
+}
+
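
The handler masks its own line with disable_irq_nosync() once it has queued the end-of-charge worker, and the line stays masked until the EOC worker or the fast-charge handler re-enables it, so vbat-det-lo fires at most once per charge cycle. The pairing, reduced to a sketch:

	/* hardirq context: stop further triggers, defer the real work */
	disable_irq_nosync(chip->chg_vbatdet_lo_irq);
	schedule_delayed_work(&chip->eoc_work,
			msecs_to_jiffies(EOC_CHECK_PERIOD_MS));

	/* later, from process context, once the event has been handled */
	enable_irq(chip->chg_vbatdet_lo_irq);
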
#define ARB_STOP_WORK_MS 1000
static irqreturn_t
qpnp_chg_usb_chg_gone_irq_handler(int irq, void *_chip)
@@ -587,7 +669,7 @@
pr_debug("chg_gone triggered\n");
if (qpnp_chg_is_usb_chg_plugged_in(chip)) {
qpnp_chg_charge_en(chip, 0);
- qpnp_chg_force_run_on_batt(chip, chip->charging_disabled);
+ qpnp_chg_force_run_on_batt(chip, 1);
schedule_delayed_work(&chip->arb_stop_work,
msecs_to_jiffies(ARB_STOP_WORK_MS));
}
@@ -613,11 +695,15 @@
if (chip->usb_present ^ usb_present) {
chip->usb_present = usb_present;
- if (!usb_present)
+ if (!usb_present) {
qpnp_chg_usb_suspend_enable(chip, 1);
+ chip->chg_done = false;
+ } else {
+ schedule_delayed_work(&chip->eoc_work,
+ msecs_to_jiffies(EOC_CHECK_PERIOD_MS));
+ }
- power_supply_set_present(chip->usb_psy,
- chip->usb_present);
+ power_supply_set_present(chip->usb_psy, chip->usb_present);
}
return IRQ_HANDLED;
@@ -635,6 +721,7 @@
if (chip->batt_present ^ batt_present) {
chip->batt_present = batt_present;
power_supply_changed(&chip->batt_psy);
+ power_supply_changed(chip->usb_psy);
if (chip->cool_bat_decidegc && chip->warm_bat_decidegc
&& batt_present) {
@@ -659,7 +746,13 @@
if (chip->dc_present ^ dc_present) {
chip->dc_present = dc_present;
+ if (!dc_present)
+ chip->chg_done = false;
+ else
+ schedule_delayed_work(&chip->eoc_work,
+ msecs_to_jiffies(EOC_CHECK_PERIOD_MS));
power_supply_changed(&chip->dc_psy);
+ power_supply_changed(&chip->batt_psy);
}
return IRQ_HANDLED;
@@ -672,6 +765,8 @@
struct qpnp_chg_chip *chip = _chip;
int rc;
+ pr_debug("chg_failed triggered\n");
+
rc = qpnp_chg_masked_write(chip,
chip->chgr_base + CHGR_CHG_FAILED,
CHGR_CHG_FAILED_BIT,
@@ -679,6 +774,9 @@
if (rc)
pr_err("Failed to write chg_fail clear bit!\n");
+ power_supply_changed(&chip->batt_psy);
+ power_supply_changed(chip->usb_psy);
+ power_supply_changed(&chip->dc_psy);
return IRQ_HANDLED;
}
@@ -701,9 +799,11 @@
struct qpnp_chg_chip *chip = _chip;
pr_debug("FAST_CHG IRQ triggered\n");
-
chip->chg_done = false;
power_supply_changed(&chip->batt_psy);
+ power_supply_changed(chip->usb_psy);
+ power_supply_changed(&chip->dc_psy);
+ enable_irq(chip->chg_vbatdet_lo_irq);
return IRQ_HANDLED;
}
@@ -940,11 +1040,12 @@
int rc;
u8 chgr_sts;
- if (chip->chg_done)
+ if ((qpnp_chg_is_usb_chg_plugged_in(chip) ||
+ qpnp_chg_is_dc_chg_plugged_in(chip)) && chip->chg_done) {
return POWER_SUPPLY_STATUS_FULL;
+ }
- rc = qpnp_chg_read(chip, &chgr_sts,
- INT_RT_STS(chip->chgr_base), 1);
+ rc = qpnp_chg_read(chip, &chgr_sts, INT_RT_STS(chip->chgr_base), 1);
if (rc) {
pr_err("failed to read interrupt sts %d\n", rc);
return POWER_SUPPLY_CHARGE_TYPE_NONE;
@@ -1263,27 +1364,7 @@
temp = (minutes - 1)/QPNP_CHG_TCHG_STEP;
return qpnp_chg_masked_write(chip, chip->chgr_base + CHGR_TCHG_MAX,
- QPNP_CHG_I_MASK, temp, 1);
-}
-#define QPNP_CHG_VBATDET_MIN_MV 3240
-#define QPNP_CHG_VBATDET_MAX_MV 5780
-#define QPNP_CHG_VBATDET_STEP_MV 20
-static int
-qpnp_chg_vbatdet_set(struct qpnp_chg_chip *chip, int vbatdet_mv)
-{
- u8 temp;
-
- if (vbatdet_mv < QPNP_CHG_VBATDET_MIN_MV
- || vbatdet_mv > QPNP_CHG_VBATDET_MAX_MV) {
- pr_err("bad mV=%d asked to set\n", vbatdet_mv);
- return -EINVAL;
- }
- temp = (vbatdet_mv - QPNP_CHG_VBATDET_MIN_MV)
- / QPNP_CHG_VBATDET_STEP_MV;
-
- pr_debug("voltage=%d setting %02x\n", vbatdet_mv, temp);
- return qpnp_chg_write(chip, &temp,
- chip->chgr_base + CHGR_VBAT_DET, 1);
+ QPNP_CHG_TCHG_MASK, temp, 1);
}
#define QPNP_CHG_V_MIN_MV 3240
@@ -1385,6 +1466,88 @@
}
}
+#define CONSECUTIVE_COUNT 3
+static void
+qpnp_eoc_work(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct qpnp_chg_chip *chip = container_of(dwork,
+ struct qpnp_chg_chip, eoc_work);
+ static int count;
+ int ibat_ma, vbat_mv, rc = 0;
+ u8 batt_sts = 0, buck_sts = 0, chg_sts = 0;
+
+ wake_lock(&chip->eoc_wake_lock);
+ qpnp_chg_charge_en(chip, !chip->charging_disabled);
+
+ rc = qpnp_chg_read(chip, &batt_sts, INT_RT_STS(chip->bat_if_base), 1);
+ if (rc) {
+ pr_err("failed to read batt_if rc=%d\n", rc);
+ goto stop_eoc;
+ }
+
+ rc = qpnp_chg_read(chip, &buck_sts, INT_RT_STS(chip->buck_base), 1);
+ if (rc) {
+ pr_err("failed to read buck rc=%d\n", rc);
+ goto stop_eoc;
+ }
+
+ rc = qpnp_chg_read(chip, &chg_sts, INT_RT_STS(chip->chgr_base), 1);
+ if (rc) {
+ pr_err("failed to read chg_sts rc=%d\n", rc);
+ goto stop_eoc;
+ }
+
+ pr_debug("chgr: 0x%x, bat_if: 0x%x, buck: 0x%x\n",
+ chg_sts, batt_sts, buck_sts);
+
+ if (!qpnp_chg_is_usb_chg_plugged_in(chip) &&
+ !qpnp_chg_is_dc_chg_plugged_in(chip)) {
+ pr_debug("no chg connected, stopping\n");
+ goto stop_eoc;
+ }
+
+ if ((batt_sts & BAT_FET_ON_IRQ) && (chg_sts & FAST_CHG_ON_IRQ
+ || chg_sts & TRKL_CHG_ON_IRQ)) {
+ ibat_ma = get_prop_current_now(chip) / 1000;
+ vbat_mv = get_prop_battery_voltage_now(chip) / 1000;
+ pr_debug("ibat_ma: %d term_current =%d\n",
+ ibat_ma, chip->term_current);
+ if (ibat_ma > chip->term_current) {
+ pr_debug("charging but increase in current demand\n");
+ count = 0;
+ } else if ((ibat_ma * -1) < chip->term_current) {
+ if (count == CONSECUTIVE_COUNT) {
+ pr_info("End of Charging\n");
+ qpnp_chg_charge_en(chip, 0);
+ chip->chg_done = true;
+ power_supply_changed(&chip->batt_psy);
+ enable_irq(chip->chg_vbatdet_lo_irq);
+ goto stop_eoc;
+ } else {
+ count += 1;
+ pr_debug("EOC count = %d\n", count);
+ }
+ } else if ((!(chg_sts & VBAT_DET_LOW_IRQ)) && (vbat_mv <
+ (chip->max_voltage_mv - chip->resume_delta_mv))) {
+ pr_debug("woke up too early\n");
+ enable_irq(chip->chg_vbatdet_lo_irq);
+ goto stop_eoc;
+ }
+ } else {
+ pr_debug("not charging\n");
+ goto stop_eoc;
+ }
+
+ schedule_delayed_work(&chip->eoc_work,
+ msecs_to_jiffies(EOC_CHECK_PERIOD_MS));
+ return;
+
+stop_eoc:
+ count = 0;
+ wake_unlock(&chip->eoc_wake_lock);
+}
+
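
Termination is debounced: the worker samples every EOC_CHECK_PERIOD_MS and only declares end of charge after the charge current has stayed below term_current for several consecutive 10 s samples. The debounce core, isolated as a sketch (hypothetical helper; note ibat is negative while charging in this convention):

	static bool eoc_debounced(int ibat_ma, int term_ma, int *count)
	{
		if (ibat_ma > term_ma) {		/* demand rose again */
			*count = 0;
		} else if (-ibat_ma < term_ma) {	/* charge current low */
			if (*count == CONSECUTIVE_COUNT)
				return true;
			(*count)++;
		}
		return false;
	}
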
#define HYSTERISIS_DECIDEGC 20
static void
qpnp_chg_adc_notification(enum qpnp_tm_state state, void *ctx)
@@ -1546,6 +1709,13 @@
return rc;
}
+ chip->chg_vbatdet_lo_irq = spmi_get_irq_byname(spmi,
+ spmi_resource, "vbat-det-lo");
+ if (chip->chg_vbatdet_lo_irq < 0) {
+ pr_err("Unable to get fast-chg-on irq\n");
+ return rc;
+ }
+
rc |= devm_request_irq(chip->dev, chip->chg_failed_irq,
qpnp_chg_chgr_chg_failed_irq_handler,
IRQF_TRIGGER_RISING, "chg-failed", chip);
@@ -1574,9 +1744,23 @@
chip->chg_trklchg_irq, rc);
return rc;
}
+
+ rc |= devm_request_irq(chip->dev,
+ chip->chg_vbatdet_lo_irq,
+ qpnp_chg_vbatdet_lo_irq_handler,
+ IRQF_TRIGGER_RISING,
+ "vbat-det-lo", chip);
+ if (rc < 0) {
+ pr_err("Can't request %d vbat-det-lo: %d\n",
+ chip->chg_vbatdet_lo_irq, rc);
+ return rc;
+ }
+
enable_irq_wake(chip->chg_fastchg_irq);
enable_irq_wake(chip->chg_trklchg_irq);
enable_irq_wake(chip->chg_failed_irq);
+ disable_irq_nosync(chip->chg_vbatdet_lo_irq);
+ enable_irq_wake(chip->chg_vbatdet_lo_irq);
break;
case SMBB_BAT_IF_SUBTYPE:
@@ -1725,12 +1909,16 @@
/* HACK: use analog EOC */
rc = qpnp_chg_masked_write(chip, chip->chgr_base +
CHGR_IBAT_TERM_CHGR,
- 0x80, 0x00, 1);
+ 0xFF, 0x08, 1);
break;
case SMBB_BUCK_SUBTYPE:
case SMBBP_BUCK_SUBTYPE:
case SMBCL_BUCK_SUBTYPE:
+ rc = qpnp_chg_toggle_chg_done_logic(chip, 0);
+ if (rc)
+ return rc;
+
rc = qpnp_chg_masked_write(chip,
chip->chgr_base + CHGR_BUCK_BCK_VBAT_REG_MODE,
BUCK_VBAT_REG_NODE_SEL_BIT,
@@ -1756,8 +1944,7 @@
case SMBB_USB_CHGPTH_SUBTYPE:
case SMBBP_USB_CHGPTH_SUBTYPE:
case SMBCL_USB_CHGPTH_SUBTYPE:
- chip->usb_present = qpnp_chg_is_usb_chg_plugged_in(chip);
- if (chip->usb_present) {
+ if (qpnp_chg_is_usb_chg_plugged_in(chip)) {
rc = qpnp_chg_masked_write(chip,
chip->usb_chgpth_base + CHGR_USB_ENUM_T_STOP,
ENUM_T_STOP_BIT,
@@ -2110,6 +2297,9 @@
qpnp_bat_if_adc_measure_work);
}
+ wake_lock_init(&chip->eoc_wake_lock,
+ WAKE_LOCK_SUSPEND, "qpnp-chg-eoc-lock");
+ INIT_DELAYED_WORK(&chip->eoc_work, qpnp_eoc_work);
INIT_DELAYED_WORK(&chip->arb_stop_work, qpnp_arb_stop_work);
if (chip->dc_chgpth_base) {
@@ -2131,9 +2321,6 @@
/* Turn on appropriate workaround flags */
qpnp_chg_setup_flags(chip);
- power_supply_set_present(chip->usb_psy,
- qpnp_chg_is_usb_chg_plugged_in(chip));
-
if (chip->maxinput_dc_ma && chip->dc_chgpth_base) {
rc = qpnp_chg_idcmax_set(chip, chip->maxinput_dc_ma);
if (rc) {
@@ -2170,6 +2357,10 @@
goto unregister_batt;
}
+ qpnp_chg_usb_usbin_valid_irq_handler(USBIN_VALID_IRQ, chip);
+ power_supply_set_present(chip->usb_psy,
+ qpnp_chg_is_usb_chg_plugged_in(chip));
+
pr_info("success chg_dis = %d, usb = %d, dc = %d b_health = %d batt_present = %d\n",
chip->charging_disabled,
qpnp_chg_is_usb_chg_plugged_in(chip),
@@ -2197,6 +2388,7 @@
qpnp_adc_tm_disable_chan_meas(&chip->adc_param);
}
cancel_work_sync(&chip->adc_measure_work);
+ cancel_delayed_work_sync(&chip->eoc_work);
dev_set_drvdata(&spmi->dev, NULL);
kfree(chip);
diff --git a/include/trace/events/workqueue.h b/include/trace/events/workqueue.h
index 7d49729..82f61f4 100644
--- a/include/trace/events/workqueue.h
+++ b/include/trace/events/workqueue.h
@@ -54,7 +54,7 @@
__entry->function = work->func;
__entry->workqueue = cwq->wq;
__entry->req_cpu = req_cpu;
- __entry->cpu = cwq->gcwq->cpu;
+ __entry->cpu = cwq->pool->gcwq->cpu;
),
TP_printk("work struct=%p function=%pf workqueue=%p req_cpu=%u cpu=%u",
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..f1a6e9e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -46,11 +46,12 @@
enum {
/* global_cwq flags */
- GCWQ_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
- GCWQ_MANAGING_WORKERS = 1 << 1, /* managing workers */
- GCWQ_DISASSOCIATED = 1 << 2, /* cpu can't serve workers */
- GCWQ_FREEZING = 1 << 3, /* freeze in progress */
- GCWQ_HIGHPRI_PENDING = 1 << 4, /* highpri works on queue */
+ GCWQ_DISASSOCIATED = 1 << 0, /* cpu can't serve workers */
+ GCWQ_FREEZING = 1 << 1, /* freeze in progress */
+
+ /* pool flags */
+ POOL_MANAGE_WORKERS = 1 << 0, /* need to manage workers */
+ POOL_MANAGING_WORKERS = 1 << 1, /* managing workers */
/* worker flags */
WORKER_STARTED = 1 << 0, /* started */
@@ -72,6 +73,8 @@
TRUSTEE_RELEASE = 3, /* release workers */
TRUSTEE_DONE = 4, /* trustee is done */
+ NR_WORKER_POOLS = 2, /* # worker pools per gcwq */
+
BUSY_WORKER_HASH_ORDER = 6, /* 64 pointers */
BUSY_WORKER_HASH_SIZE = 1 << BUSY_WORKER_HASH_ORDER,
BUSY_WORKER_HASH_MASK = BUSY_WORKER_HASH_SIZE - 1,
@@ -91,6 +94,7 @@
* all cpus. Give -20.
*/
RESCUER_NICE_LEVEL = -20,
+ HIGHPRI_NICE_LEVEL = -20,
};
/*
@@ -115,6 +119,7 @@
*/
struct global_cwq;
+struct worker_pool;
/*
* The poor guys doing the actual heavy lifting. All on-duty workers
@@ -131,7 +136,7 @@
struct cpu_workqueue_struct *current_cwq; /* L: current_work's cwq */
struct list_head scheduled; /* L: scheduled works */
struct task_struct *task; /* I: worker task */
- struct global_cwq *gcwq; /* I: the associated gcwq */
+ struct worker_pool *pool; /* I: the associated pool */
/* 64 bytes boundary on 64bit, 32 on 32bit */
unsigned long last_active; /* L: last active timestamp */
unsigned int flags; /* X: flags */
@@ -139,6 +144,22 @@
struct work_struct rebind_work; /* L: rebind worker to cpu */
};
+struct worker_pool {
+ struct global_cwq *gcwq; /* I: the owning gcwq */
+ unsigned int flags; /* X: flags */
+
+ struct list_head worklist; /* L: list of pending works */
+ int nr_workers; /* L: total number of workers */
+ int nr_idle; /* L: currently idle ones */
+
+ struct list_head idle_list; /* X: list of idle workers */
+ struct timer_list idle_timer; /* L: worker idle timeout */
+ struct timer_list mayday_timer; /* L: SOS timer for workers */
+
+ struct ida worker_ida; /* L: for worker IDs */
+ struct worker *first_idle; /* L: first idle worker */
+};
+
/*
* Global per-cpu workqueue. There's one and only one for each cpu
* and all works are queued and processed here regardless of their
@@ -146,27 +167,18 @@
*/
struct global_cwq {
spinlock_t lock; /* the gcwq lock */
- struct list_head worklist; /* L: list of pending works */
unsigned int cpu; /* I: the associated cpu */
unsigned int flags; /* L: GCWQ_* flags */
- int nr_workers; /* L: total number of workers */
- int nr_idle; /* L: currently idle ones */
-
- /* workers are chained either in the idle_list or busy_hash */
- struct list_head idle_list; /* X: list of idle workers */
+ /* workers are chained either in busy_hash or pool idle_list */
struct hlist_head busy_hash[BUSY_WORKER_HASH_SIZE];
/* L: hash of busy workers */
- struct timer_list idle_timer; /* L: worker idle timeout */
- struct timer_list mayday_timer; /* L: SOS timer for workers */
-
- struct ida worker_ida; /* L: for worker IDs */
+ struct worker_pool pools[NR_WORKER_POOLS]; /* normal and highpri pools */
struct task_struct *trustee; /* L: for gcwq shutdown */
unsigned int trustee_state; /* L: trustee state */
wait_queue_head_t trustee_wait; /* trustee wait */
- struct worker *first_idle; /* L: first idle worker */
} ____cacheline_aligned_in_smp;
/*
@@ -175,7 +187,7 @@
* aligned at two's power of the number of flag bits.
*/
struct cpu_workqueue_struct {
- struct global_cwq *gcwq; /* I: the associated gcwq */
+ struct worker_pool *pool; /* I: the associated pool */
struct workqueue_struct *wq; /* I: the owning workqueue */
int work_color; /* L: current color */
int flush_color; /* L: flushing color */
@@ -264,6 +276,10 @@
#define CREATE_TRACE_POINTS
#include <trace/events/workqueue.h>
+#define for_each_worker_pool(pool, gcwq) \
+ for ((pool) = &(gcwq)->pools[0]; \
+ (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
+
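
The iterator relies on pools[] living contiguously inside the gcwq; typical use is to apply one operation to both the normal and the highpri pool in a single pass. Sketch:

	struct worker_pool *pool;

	for_each_worker_pool(pool, gcwq)
		wake_up_worker(pool);
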
#define for_each_busy_worker(worker, i, pos, gcwq) \
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++) \
hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
@@ -444,7 +460,7 @@
* try_to_wake_up(). Put it in a separate cacheline.
*/
static DEFINE_PER_CPU(struct global_cwq, global_cwq);
-static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, gcwq_nr_running);
+static DEFINE_PER_CPU_SHARED_ALIGNED(atomic_t, pool_nr_running[NR_WORKER_POOLS]);
/*
* Global cpu workqueue and nr_running counter for unbound gcwq. The
@@ -452,10 +468,17 @@
* workers have WORKER_UNBOUND set.
*/
static struct global_cwq unbound_global_cwq;
-static atomic_t unbound_gcwq_nr_running = ATOMIC_INIT(0); /* always 0 */
+static atomic_t unbound_pool_nr_running[NR_WORKER_POOLS] = {
+ [0 ... NR_WORKER_POOLS - 1] = ATOMIC_INIT(0), /* always 0 */
+};
static int worker_thread(void *__worker);
+static int worker_pool_pri(struct worker_pool *pool)
+{
+ return pool - pool->gcwq->pools;
+}
+
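
Because the normal pool sits at pools[0] and the highpri pool at pools[1], pointer subtraction alone recovers a pool's priority; the same index later selects its nr_running counter and the "H" suffix in worker thread names. For example:

	/* &gcwq->pools[1] - gcwq->pools == 1, i.e. the highpri pool */
	int pri = worker_pool_pri(&gcwq->pools[1]);	/* pri == 1 */
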
static struct global_cwq *get_gcwq(unsigned int cpu)
{
if (cpu != WORK_CPU_UNBOUND)
@@ -464,12 +487,15 @@
return &unbound_global_cwq;
}
-static atomic_t *get_gcwq_nr_running(unsigned int cpu)
+static atomic_t *get_pool_nr_running(struct worker_pool *pool)
{
+ int cpu = pool->gcwq->cpu;
+ int idx = worker_pool_pri(pool);
+
if (cpu != WORK_CPU_UNBOUND)
- return &per_cpu(gcwq_nr_running, cpu);
+ return &per_cpu(pool_nr_running, cpu)[idx];
else
- return &unbound_gcwq_nr_running;
+ return &unbound_pool_nr_running[idx];
}
static struct cpu_workqueue_struct *get_cwq(unsigned int cpu,
@@ -555,7 +581,7 @@
if (data & WORK_STRUCT_CWQ)
return ((struct cpu_workqueue_struct *)
- (data & WORK_STRUCT_WQ_DATA_MASK))->gcwq;
+ (data & WORK_STRUCT_WQ_DATA_MASK))->pool->gcwq;
cpu = data >> WORK_STRUCT_FLAG_BITS;
if (cpu == WORK_CPU_NONE)
@@ -566,60 +592,62 @@
}
/*
- * Policy functions. These define the policies on how the global
- * worker pool is managed. Unless noted otherwise, these functions
- * assume that they're being called with gcwq->lock held.
+ * Policy functions. These define the policies on how the global worker
+ * pools are managed. Unless noted otherwise, these functions assume that
+ * they're being called with gcwq->lock held.
*/
-static bool __need_more_worker(struct global_cwq *gcwq)
+static bool __need_more_worker(struct worker_pool *pool)
{
- return !atomic_read(get_gcwq_nr_running(gcwq->cpu)) ||
- gcwq->flags & GCWQ_HIGHPRI_PENDING;
+ return !atomic_read(get_pool_nr_running(pool));
}
/*
* Need to wake up a worker? Called from anything but currently
* running workers.
+ *
+ * Note that, because unbound workers never contribute to nr_running, this
+ * function will always return %true for unbound gcwq as long as the
+ * worklist isn't empty.
*/
-static bool need_more_worker(struct global_cwq *gcwq)
+static bool need_more_worker(struct worker_pool *pool)
{
- return !list_empty(&gcwq->worklist) && __need_more_worker(gcwq);
+ return !list_empty(&pool->worklist) && __need_more_worker(pool);
}
/* Can I start working? Called from busy but !running workers. */
-static bool may_start_working(struct global_cwq *gcwq)
+static bool may_start_working(struct worker_pool *pool)
{
- return gcwq->nr_idle;
+ return pool->nr_idle;
}
/* Do I need to keep working? Called from currently running workers. */
-static bool keep_working(struct global_cwq *gcwq)
+static bool keep_working(struct worker_pool *pool)
{
- atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+ atomic_t *nr_running = get_pool_nr_running(pool);
- return !list_empty(&gcwq->worklist) &&
- (atomic_read(nr_running) <= 1 ||
- gcwq->flags & GCWQ_HIGHPRI_PENDING);
+ return !list_empty(&pool->worklist) && atomic_read(nr_running) <= 1;
}
/* Do we need a new worker? Called from manager. */
-static bool need_to_create_worker(struct global_cwq *gcwq)
+static bool need_to_create_worker(struct worker_pool *pool)
{
- return need_more_worker(gcwq) && !may_start_working(gcwq);
+ return need_more_worker(pool) && !may_start_working(pool);
}
/* Do I need to be the manager? */
-static bool need_to_manage_workers(struct global_cwq *gcwq)
+static bool need_to_manage_workers(struct worker_pool *pool)
{
- return need_to_create_worker(gcwq) || gcwq->flags & GCWQ_MANAGE_WORKERS;
+ return need_to_create_worker(pool) ||
+ (pool->flags & POOL_MANAGE_WORKERS);
}
/* Do we have too many workers and should some go away? */
-static bool too_many_workers(struct global_cwq *gcwq)
+static bool too_many_workers(struct worker_pool *pool)
{
- bool managing = gcwq->flags & GCWQ_MANAGING_WORKERS;
- int nr_idle = gcwq->nr_idle + managing; /* manager is considered idle */
- int nr_busy = gcwq->nr_workers - nr_idle;
+ bool managing = pool->flags & POOL_MANAGING_WORKERS;
+ int nr_idle = pool->nr_idle + managing; /* manager is considered idle */
+ int nr_busy = pool->nr_workers - nr_idle;
return nr_idle > 2 && (nr_idle - 2) * MAX_IDLE_WORKERS_RATIO >= nr_busy;
}
@@ -629,26 +657,26 @@
*/
/* Return the first worker. Safe with preemption disabled */
-static struct worker *first_worker(struct global_cwq *gcwq)
+static struct worker *first_worker(struct worker_pool *pool)
{
- if (unlikely(list_empty(&gcwq->idle_list)))
+ if (unlikely(list_empty(&pool->idle_list)))
return NULL;
- return list_first_entry(&gcwq->idle_list, struct worker, entry);
+ return list_first_entry(&pool->idle_list, struct worker, entry);
}
/**
* wake_up_worker - wake up an idle worker
- * @gcwq: gcwq to wake worker for
+ * @pool: worker pool to wake worker from
*
- * Wake up the first idle worker of @gcwq.
+ * Wake up the first idle worker of @pool.
*
* CONTEXT:
* spin_lock_irq(gcwq->lock).
*/
-static void wake_up_worker(struct global_cwq *gcwq)
+static void wake_up_worker(struct worker_pool *pool)
{
- struct worker *worker = first_worker(gcwq);
+ struct worker *worker = first_worker(pool);
if (likely(worker))
wake_up_process(worker->task);
@@ -670,7 +698,7 @@
struct worker *worker = kthread_data(task);
if (!(worker->flags & WORKER_NOT_RUNNING))
- atomic_inc(get_gcwq_nr_running(cpu));
+ atomic_inc(get_pool_nr_running(worker->pool));
}
/**
@@ -692,8 +720,8 @@
unsigned int cpu)
{
struct worker *worker = kthread_data(task), *to_wakeup = NULL;
- struct global_cwq *gcwq = get_gcwq(cpu);
- atomic_t *nr_running = get_gcwq_nr_running(cpu);
+ struct worker_pool *pool = worker->pool;
+ atomic_t *nr_running = get_pool_nr_running(pool);
if (worker->flags & WORKER_NOT_RUNNING)
return NULL;
@@ -712,8 +740,8 @@
* could be manipulating idle_list, so dereferencing idle_list
* without gcwq lock is safe.
*/
- if (atomic_dec_and_test(nr_running) && !list_empty(&gcwq->worklist))
- to_wakeup = first_worker(gcwq);
+ if (atomic_dec_and_test(nr_running) && !list_empty(&pool->worklist))
+ to_wakeup = first_worker(pool);
return to_wakeup ? to_wakeup->task : NULL;
}
@@ -733,7 +761,7 @@
static inline void worker_set_flags(struct worker *worker, unsigned int flags,
bool wakeup)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
WARN_ON_ONCE(worker->task != current);
@@ -744,12 +772,12 @@
*/
if ((flags & WORKER_NOT_RUNNING) &&
!(worker->flags & WORKER_NOT_RUNNING)) {
- atomic_t *nr_running = get_gcwq_nr_running(gcwq->cpu);
+ atomic_t *nr_running = get_pool_nr_running(pool);
if (wakeup) {
if (atomic_dec_and_test(nr_running) &&
- !list_empty(&gcwq->worklist))
- wake_up_worker(gcwq);
+ !list_empty(&pool->worklist))
+ wake_up_worker(pool);
} else
atomic_dec(nr_running);
}
@@ -769,7 +797,7 @@
*/
static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
unsigned int oflags = worker->flags;
WARN_ON_ONCE(worker->task != current);
@@ -783,7 +811,7 @@
*/
if ((flags & WORKER_NOT_RUNNING) && (oflags & WORKER_NOT_RUNNING))
if (!(worker->flags & WORKER_NOT_RUNNING))
- atomic_inc(get_gcwq_nr_running(gcwq->cpu));
+ atomic_inc(get_pool_nr_running(pool));
}
/**
@@ -867,43 +895,6 @@
}
/**
- * gcwq_determine_ins_pos - find insertion position
- * @gcwq: gcwq of interest
- * @cwq: cwq a work is being queued for
- *
- * A work for @cwq is about to be queued on @gcwq, determine insertion
- * position for the work. If @cwq is for HIGHPRI wq, the work is
- * queued at the head of the queue but in FIFO order with respect to
- * other HIGHPRI works; otherwise, at the end of the queue. This
- * function also sets GCWQ_HIGHPRI_PENDING flag to hint @gcwq that
- * there are HIGHPRI works pending.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to inserstion position.
- */
-static inline struct list_head *gcwq_determine_ins_pos(struct global_cwq *gcwq,
- struct cpu_workqueue_struct *cwq)
-{
- struct work_struct *twork;
-
- if (likely(!(cwq->wq->flags & WQ_HIGHPRI)))
- return &gcwq->worklist;
-
- list_for_each_entry(twork, &gcwq->worklist, entry) {
- struct cpu_workqueue_struct *tcwq = get_work_cwq(twork);
-
- if (!(tcwq->wq->flags & WQ_HIGHPRI))
- break;
- }
-
- gcwq->flags |= GCWQ_HIGHPRI_PENDING;
- return &twork->entry;
-}
-
-/**
* insert_work - insert a work into gcwq
* @cwq: cwq @work belongs to
* @work: work to insert
@@ -920,7 +911,7 @@
struct work_struct *work, struct list_head *head,
unsigned int extra_flags)
{
- struct global_cwq *gcwq = cwq->gcwq;
+ struct worker_pool *pool = cwq->pool;
/* we own @work, set data and link */
set_work_cwq(work, cwq, extra_flags);
@@ -940,8 +931,8 @@
*/
smp_mb();
- if (__need_more_worker(gcwq))
- wake_up_worker(gcwq);
+ if (__need_more_worker(pool))
+ wake_up_worker(pool);
}
/*
@@ -1040,7 +1031,7 @@
if (likely(cwq->nr_active < cwq->max_active)) {
trace_workqueue_activate_work(work);
cwq->nr_active++;
- worklist = gcwq_determine_ins_pos(gcwq, cwq);
+ worklist = &cwq->pool->worklist;
} else {
work_flags |= WORK_STRUCT_DELAYED;
worklist = &cwq->delayed_works;
@@ -1189,7 +1180,8 @@
*/
static void worker_enter_idle(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
BUG_ON(worker->flags & WORKER_IDLE);
BUG_ON(!list_empty(&worker->entry) &&
@@ -1197,22 +1189,27 @@
/* can't use worker_set_flags(), also called from start_worker() */
worker->flags |= WORKER_IDLE;
- gcwq->nr_idle++;
+ pool->nr_idle++;
worker->last_active = jiffies;
/* idle_list is LIFO */
- list_add(&worker->entry, &gcwq->idle_list);
+ list_add(&worker->entry, &pool->idle_list);
if (likely(!(worker->flags & WORKER_ROGUE))) {
- if (too_many_workers(gcwq) && !timer_pending(&gcwq->idle_timer))
- mod_timer(&gcwq->idle_timer,
+ if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
+ mod_timer(&pool->idle_timer,
jiffies + IDLE_WORKER_TIMEOUT);
} else
wake_up_all(&gcwq->trustee_wait);
- /* sanity check nr_running */
- WARN_ON_ONCE(gcwq->nr_workers == gcwq->nr_idle &&
- atomic_read(get_gcwq_nr_running(gcwq->cpu)));
+ /*
+ * Sanity check nr_running. Because trustee releases gcwq->lock
+ * between setting %WORKER_ROGUE and zapping nr_running, the
+ * warning may trigger spuriously. Check iff trustee is idle.
+ */
+ WARN_ON_ONCE(gcwq->trustee_state == TRUSTEE_DONE &&
+ pool->nr_workers == pool->nr_idle &&
+ atomic_read(get_pool_nr_running(pool)));
}
/**
@@ -1226,11 +1223,11 @@
*/
static void worker_leave_idle(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
BUG_ON(!(worker->flags & WORKER_IDLE));
worker_clr_flags(worker, WORKER_IDLE);
- gcwq->nr_idle--;
+ pool->nr_idle--;
list_del_init(&worker->entry);
}
@@ -1267,7 +1264,7 @@
static bool worker_maybe_bind_and_lock(struct worker *worker)
__acquires(&gcwq->lock)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;
struct task_struct *task = worker->task;
while (true) {
@@ -1309,7 +1306,7 @@
static void worker_rebind_fn(struct work_struct *work)
{
struct worker *worker = container_of(work, struct worker, rebind_work);
- struct global_cwq *gcwq = worker->gcwq;
+ struct global_cwq *gcwq = worker->pool->gcwq;
if (worker_maybe_bind_and_lock(worker))
worker_clr_flags(worker, WORKER_REBIND);
@@ -1334,10 +1331,10 @@
/**
* create_worker - create a new workqueue worker
- * @gcwq: gcwq the new worker will belong to
+ * @pool: pool the new worker will belong to
* @bind: whether to set affinity to @cpu or not
*
- * Create a new worker which is bound to @gcwq. The returned worker
+ * Create a new worker which is bound to @pool. The returned worker
* can be started by calling start_worker() or destroyed using
* destroy_worker().
*
@@ -1347,16 +1344,18 @@
* RETURNS:
* Pointer to the newly created worker.
*/
-static struct worker *create_worker(struct global_cwq *gcwq, bool bind)
+static struct worker *create_worker(struct worker_pool *pool, bool bind)
{
+ struct global_cwq *gcwq = pool->gcwq;
bool on_unbound_cpu = gcwq->cpu == WORK_CPU_UNBOUND;
+ const char *pri = worker_pool_pri(pool) ? "H" : "";
struct worker *worker = NULL;
int id = -1;
spin_lock_irq(&gcwq->lock);
- while (ida_get_new(&gcwq->worker_ida, &id)) {
+ while (ida_get_new(&pool->worker_ida, &id)) {
spin_unlock_irq(&gcwq->lock);
- if (!ida_pre_get(&gcwq->worker_ida, GFP_KERNEL))
+ if (!ida_pre_get(&pool->worker_ida, GFP_KERNEL))
goto fail;
spin_lock_irq(&gcwq->lock);
}
@@ -1366,20 +1365,22 @@
if (!worker)
goto fail;
- worker->gcwq = gcwq;
+ worker->pool = pool;
worker->id = id;
if (!on_unbound_cpu)
worker->task = kthread_create_on_node(worker_thread,
- worker,
- cpu_to_node(gcwq->cpu),
- "kworker/%u:%d", gcwq->cpu, id);
+ worker, cpu_to_node(gcwq->cpu),
+ "kworker/%u:%d%s", gcwq->cpu, id, pri);
else
worker->task = kthread_create(worker_thread, worker,
- "kworker/u:%d", id);
+ "kworker/u:%d%s", id, pri);
if (IS_ERR(worker->task))
goto fail;
+ if (worker_pool_pri(pool))
+ set_user_nice(worker->task, HIGHPRI_NICE_LEVEL);
+
/*
* A rogue worker will become a regular one if CPU comes
* online later on. Make sure every worker has
@@ -1397,7 +1398,7 @@
fail:
if (id >= 0) {
spin_lock_irq(&gcwq->lock);
- ida_remove(&gcwq->worker_ida, id);
+ ida_remove(&pool->worker_ida, id);
spin_unlock_irq(&gcwq->lock);
}
kfree(worker);
@@ -1416,7 +1417,7 @@
static void start_worker(struct worker *worker)
{
worker->flags |= WORKER_STARTED;
- worker->gcwq->nr_workers++;
+ worker->pool->nr_workers++;
worker_enter_idle(worker);
wake_up_process(worker->task);
}
@@ -1432,7 +1433,8 @@
*/
static void destroy_worker(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
int id = worker->id;
/* sanity check frenzy */
@@ -1440,9 +1442,9 @@
BUG_ON(!list_empty(&worker->scheduled));
if (worker->flags & WORKER_STARTED)
- gcwq->nr_workers--;
+ pool->nr_workers--;
if (worker->flags & WORKER_IDLE)
- gcwq->nr_idle--;
+ pool->nr_idle--;
list_del_init(&worker->entry);
worker->flags |= WORKER_DIE;
@@ -1453,29 +1455,30 @@
kfree(worker);
spin_lock_irq(&gcwq->lock);
- ida_remove(&gcwq->worker_ida, id);
+ ida_remove(&pool->worker_ida, id);
}
-static void idle_worker_timeout(unsigned long __gcwq)
+static void idle_worker_timeout(unsigned long __pool)
{
- struct global_cwq *gcwq = (void *)__gcwq;
+ struct worker_pool *pool = (void *)__pool;
+ struct global_cwq *gcwq = pool->gcwq;
spin_lock_irq(&gcwq->lock);
- if (too_many_workers(gcwq)) {
+ if (too_many_workers(pool)) {
struct worker *worker;
unsigned long expires;
/* idle_list is kept in LIFO order, check the last one */
- worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+ worker = list_entry(pool->idle_list.prev, struct worker, entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;
if (time_before(jiffies, expires))
- mod_timer(&gcwq->idle_timer, expires);
+ mod_timer(&pool->idle_timer, expires);
else {
/* it's been idle for too long, wake up manager */
- gcwq->flags |= GCWQ_MANAGE_WORKERS;
- wake_up_worker(gcwq);
+ pool->flags |= POOL_MANAGE_WORKERS;
+ wake_up_worker(pool);
}
}
@@ -1492,7 +1495,7 @@
return false;
/* mayday mayday mayday */
- cpu = cwq->gcwq->cpu;
+ cpu = cwq->pool->gcwq->cpu;
/* WORK_CPU_UNBOUND can't be set in cpumask, use cpu 0 instead */
if (cpu == WORK_CPU_UNBOUND)
cpu = 0;
@@ -1501,37 +1504,38 @@
return true;
}
-static void gcwq_mayday_timeout(unsigned long __gcwq)
+static void gcwq_mayday_timeout(unsigned long __pool)
{
- struct global_cwq *gcwq = (void *)__gcwq;
+ struct worker_pool *pool = (void *)__pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct work_struct *work;
spin_lock_irq(&gcwq->lock);
- if (need_to_create_worker(gcwq)) {
+ if (need_to_create_worker(pool)) {
/*
* We've been trying to create a new worker but
* haven't been successful. We might be hitting an
* allocation deadlock. Send distress signals to
* rescuers.
*/
- list_for_each_entry(work, &gcwq->worklist, entry)
+ list_for_each_entry(work, &pool->worklist, entry)
send_mayday(work);
}
spin_unlock_irq(&gcwq->lock);
- mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INTERVAL);
+ mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
}
/**
* maybe_create_worker - create a new worker if necessary
- * @gcwq: gcwq to create a new worker for
+ * @pool: pool to create a new worker for
*
- * Create a new worker for @gcwq if necessary. @gcwq is guaranteed to
+ * Create a new worker for @pool if necessary. @pool is guaranteed to
* have at least one idle worker on return from this function. If
* creating a new worker takes longer than MAYDAY_INTERVAL, mayday is
- * sent to all rescuers with works scheduled on @gcwq to resolve
+ * sent to all rescuers with works scheduled on @pool to resolve
* possible allocation deadlock.
*
* On return, need_to_create_worker() is guaranteed to be false and
@@ -1546,52 +1550,54 @@
* false if no action was taken and gcwq->lock stayed locked, true
* otherwise.
*/
-static bool maybe_create_worker(struct global_cwq *gcwq)
+static bool maybe_create_worker(struct worker_pool *pool)
__releases(&gcwq->lock)
__acquires(&gcwq->lock)
{
- if (!need_to_create_worker(gcwq))
+ struct global_cwq *gcwq = pool->gcwq;
+
+ if (!need_to_create_worker(pool))
return false;
restart:
spin_unlock_irq(&gcwq->lock);
/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
- mod_timer(&gcwq->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
+ mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
while (true) {
struct worker *worker;
- worker = create_worker(gcwq, true);
+ worker = create_worker(pool, true);
if (worker) {
- del_timer_sync(&gcwq->mayday_timer);
+ del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&gcwq->lock);
start_worker(worker);
- BUG_ON(need_to_create_worker(gcwq));
+ BUG_ON(need_to_create_worker(pool));
return true;
}
- if (!need_to_create_worker(gcwq))
+ if (!need_to_create_worker(pool))
break;
__set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(CREATE_COOLDOWN);
- if (!need_to_create_worker(gcwq))
+ if (!need_to_create_worker(pool))
break;
}
- del_timer_sync(&gcwq->mayday_timer);
+ del_timer_sync(&pool->mayday_timer);
spin_lock_irq(&gcwq->lock);
- if (need_to_create_worker(gcwq))
+ if (need_to_create_worker(pool))
goto restart;
return true;
}
/**
* maybe_destroy_worker - destroy workers which have been idle for a while
- * @gcwq: gcwq to destroy workers for
+ * @pool: pool to destroy workers for
*
- * Destroy @gcwq workers which have been idle for longer than
+ * Destroy @pool workers which have been idle for longer than
* IDLE_WORKER_TIMEOUT.
*
* LOCKING:
@@ -1602,19 +1608,19 @@
* false if no action was taken and gcwq->lock stayed locked, true
* otherwise.
*/
-static bool maybe_destroy_workers(struct global_cwq *gcwq)
+static bool maybe_destroy_workers(struct worker_pool *pool)
{
bool ret = false;
- while (too_many_workers(gcwq)) {
+ while (too_many_workers(pool)) {
struct worker *worker;
unsigned long expires;
- worker = list_entry(gcwq->idle_list.prev, struct worker, entry);
+ worker = list_entry(pool->idle_list.prev, struct worker, entry);
expires = worker->last_active + IDLE_WORKER_TIMEOUT;
if (time_before(jiffies, expires)) {
- mod_timer(&gcwq->idle_timer, expires);
+ mod_timer(&pool->idle_timer, expires);
break;
}
@@ -1647,23 +1653,24 @@
*/
static bool manage_workers(struct worker *worker)
{
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
bool ret = false;
- if (gcwq->flags & GCWQ_MANAGING_WORKERS)
+ if (pool->flags & POOL_MANAGING_WORKERS)
return ret;
- gcwq->flags &= ~GCWQ_MANAGE_WORKERS;
- gcwq->flags |= GCWQ_MANAGING_WORKERS;
+ pool->flags &= ~POOL_MANAGE_WORKERS;
+ pool->flags |= POOL_MANAGING_WORKERS;
/*
* Destroy and then create so that may_start_working() is true
* on return.
*/
- ret |= maybe_destroy_workers(gcwq);
- ret |= maybe_create_worker(gcwq);
+ ret |= maybe_destroy_workers(pool);
+ ret |= maybe_create_worker(pool);
- gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+ pool->flags &= ~POOL_MANAGING_WORKERS;
/*
* The trustee might be waiting to take over the manager
@@ -1720,10 +1727,9 @@
{
struct work_struct *work = list_first_entry(&cwq->delayed_works,
struct work_struct, entry);
- struct list_head *pos = gcwq_determine_ins_pos(cwq->gcwq, cwq);
trace_workqueue_activate_work(work);
- move_linked_works(work, pos, NULL);
+ move_linked_works(work, &cwq->pool->worklist, NULL);
__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
cwq->nr_active++;
}
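
With high priority expressed as a separate pool, the old gcwq_determine_ins_pos() position search is no longer needed: an activated delayed work simply joins the tail of its pool's worklist. Pieced together from the hunk above, the resulting function is:

static void cwq_activate_first_delayed(struct cpu_workqueue_struct *cwq)
{
	struct work_struct *work = list_first_entry(&cwq->delayed_works,
						    struct work_struct, entry);

	trace_workqueue_activate_work(work);
	/* every pool worklist is plain FIFO now; no insertion-position search */
	move_linked_works(work, &cwq->pool->worklist, NULL);
	__clear_bit(WORK_STRUCT_DELAYED_BIT, work_data_bits(work));
	cwq->nr_active++;
}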
@@ -1796,7 +1802,8 @@
__acquires(&gcwq->lock)
{
struct cpu_workqueue_struct *cwq = get_work_cwq(work);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct hlist_head *bwh = busy_worker_head(gcwq, work);
bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
work_func_t f = work->func;
@@ -1836,27 +1843,19 @@
list_del_init(&work->entry);
/*
- * If HIGHPRI_PENDING, check the next work, and, if HIGHPRI,
- * wake up another worker; otherwise, clear HIGHPRI_PENDING.
- */
- if (unlikely(gcwq->flags & GCWQ_HIGHPRI_PENDING)) {
- struct work_struct *nwork = list_first_entry(&gcwq->worklist,
- struct work_struct, entry);
-
- if (!list_empty(&gcwq->worklist) &&
- get_work_cwq(nwork)->wq->flags & WQ_HIGHPRI)
- wake_up_worker(gcwq);
- else
- gcwq->flags &= ~GCWQ_HIGHPRI_PENDING;
- }
-
- /*
* CPU intensive works don't participate in concurrency
* management. They're the scheduler's responsibility.
*/
if (unlikely(cpu_intensive))
worker_set_flags(worker, WORKER_CPU_INTENSIVE, true);
+ /*
+ * Unbound gcwq isn't concurrency managed and work items should be
+ * executed ASAP. Wake up another worker if necessary.
+ */
+ if ((worker->flags & WORKER_UNBOUND) && need_more_worker(pool))
+ wake_up_worker(pool);
+
spin_unlock_irq(&gcwq->lock);
work_clear_pending(work);
@@ -1929,7 +1928,8 @@
static int worker_thread(void *__worker)
{
struct worker *worker = __worker;
- struct global_cwq *gcwq = worker->gcwq;
+ struct worker_pool *pool = worker->pool;
+ struct global_cwq *gcwq = pool->gcwq;
/* tell the scheduler that this is a workqueue worker */
worker->task->flags |= PF_WQ_WORKER;
@@ -1946,11 +1946,11 @@
worker_leave_idle(worker);
recheck:
/* no more worker necessary? */
- if (!need_more_worker(gcwq))
+ if (!need_more_worker(pool))
goto sleep;
/* do we need to manage? */
- if (unlikely(!may_start_working(gcwq)) && manage_workers(worker))
+ if (unlikely(!may_start_working(pool)) && manage_workers(worker))
goto recheck;
/*
@@ -1969,7 +1969,7 @@
do {
struct work_struct *work =
- list_first_entry(&gcwq->worklist,
+ list_first_entry(&pool->worklist,
struct work_struct, entry);
if (likely(!(*work_data_bits(work) & WORK_STRUCT_LINKED))) {
@@ -1981,11 +1981,11 @@
move_linked_works(work, &worker->scheduled, NULL);
process_scheduled_works(worker);
}
- } while (keep_working(gcwq));
+ } while (keep_working(pool));
worker_set_flags(worker, WORKER_PREP, false);
sleep:
- if (unlikely(need_to_manage_workers(gcwq)) && manage_workers(worker))
+ if (unlikely(need_to_manage_workers(pool)) && manage_workers(worker))
goto recheck;
/*
@@ -2043,14 +2043,15 @@
for_each_mayday_cpu(cpu, wq->mayday_mask) {
unsigned int tcpu = is_unbound ? WORK_CPU_UNBOUND : cpu;
struct cpu_workqueue_struct *cwq = get_cwq(tcpu, wq);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct worker_pool *pool = cwq->pool;
+ struct global_cwq *gcwq = pool->gcwq;
struct work_struct *work, *n;
__set_current_state(TASK_RUNNING);
mayday_clear_cpu(cpu, wq->mayday_mask);
/* migrate to the target cpu if possible */
- rescuer->gcwq = gcwq;
+ rescuer->pool = pool;
worker_maybe_bind_and_lock(rescuer);
/*
@@ -2058,7 +2059,7 @@
* process'em.
*/
BUG_ON(!list_empty(&rescuer->scheduled));
- list_for_each_entry_safe(work, n, &gcwq->worklist, entry)
+ list_for_each_entry_safe(work, n, &pool->worklist, entry)
if (get_work_cwq(work) == cwq)
move_linked_works(work, scheduled, &n);
@@ -2069,8 +2070,8 @@
* regular worker; otherwise, we end up with 0 concurrency
* and stalling the execution.
*/
- if (keep_working(gcwq))
- wake_up_worker(gcwq);
+ if (keep_working(pool))
+ wake_up_worker(pool);
spin_unlock_irq(&gcwq->lock);
}
@@ -2195,7 +2196,7 @@
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
- struct global_cwq *gcwq = cwq->gcwq;
+ struct global_cwq *gcwq = cwq->pool->gcwq;
spin_lock_irq(&gcwq->lock);
@@ -2411,9 +2412,9 @@
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
bool drained;
- spin_lock_irq(&cwq->gcwq->lock);
+ spin_lock_irq(&cwq->pool->gcwq->lock);
drained = !cwq->nr_active && list_empty(&cwq->delayed_works);
- spin_unlock_irq(&cwq->gcwq->lock);
+ spin_unlock_irq(&cwq->pool->gcwq->lock);
if (drained)
continue;
@@ -2453,7 +2454,7 @@
*/
smp_rmb();
cwq = get_work_cwq(work);
- if (unlikely(!cwq || gcwq != cwq->gcwq))
+ if (unlikely(!cwq || gcwq != cwq->pool->gcwq))
goto already_gone;
} else if (wait_executing) {
worker = find_worker_executing_work(gcwq, work);
@@ -2971,13 +2972,6 @@
if (flags & WQ_MEM_RECLAIM)
flags |= WQ_RESCUER;
- /*
- * Unbound workqueues aren't concurrency managed and should be
- * dispatched to workers immediately.
- */
- if (flags & WQ_UNBOUND)
- flags |= WQ_HIGHPRI;
-
max_active = max_active ?: WQ_DFL_ACTIVE;
max_active = wq_clamp_max_active(max_active, flags, wq->name);
@@ -2998,9 +2992,10 @@
for_each_cwq_cpu(cpu, wq) {
struct cpu_workqueue_struct *cwq = get_cwq(cpu, wq);
struct global_cwq *gcwq = get_gcwq(cpu);
+ int pool_idx = (bool)(flags & WQ_HIGHPRI);
BUG_ON((unsigned long)cwq & WORK_STRUCT_FLAG_MASK);
- cwq->gcwq = gcwq;
+ cwq->pool = &gcwq->pools[pool_idx];
cwq->wq = wq;
cwq->flush_color = -1;
cwq->max_active = max_active;
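
The (bool) cast collapses the WQ_HIGHPRI bit to 0 or 1, so every cwq of a workqueue binds to exactly one of the two pools embedded in its gcwq. A sketch of the layout this indexing assumes, with field names inferred from usage elsewhere in the patch rather than quoted from it:

struct global_cwq {
	spinlock_t		lock;		/* protects both pools */
	unsigned int		cpu;		/* associated CPU */
	unsigned int		flags;		/* GCWQ_* flags */

	/* ... busy_hash[], trustee state, etc. ... */

	struct worker_pool	pools[NR_WORKER_POOLS];	/* [0] normal, [1] highpri */
};

So a workqueue allocated with WQ_HIGHPRI gets cwqs pointing at pools[1], and its work items compete only with other highpri work for that pool's workers.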
@@ -3304,9 +3299,30 @@
__ret1 < 0 ? -1 : 0; \
})
+static bool gcwq_is_managing_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (pool->flags & POOL_MANAGING_WORKERS)
+ return true;
+ return false;
+}
+
+static bool gcwq_has_idle_workers(struct global_cwq *gcwq)
+{
+ struct worker_pool *pool;
+
+ for_each_worker_pool(pool, gcwq)
+ if (!list_empty(&pool->idle_list))
+ return true;
+ return false;
+}
+
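
Both helpers rely on for_each_worker_pool(), which this series introduces outside the hunks shown here. Judging from the pools[] indexing above, it is presumably a simple array walk along these lines:

/* assumed definition; the real macro lives near the top of workqueue.c */
#define for_each_worker_pool(pool, gcwq)				\
	for ((pool) = &(gcwq)->pools[0];				\
	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)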
static int __cpuinit trustee_thread(void *__gcwq)
{
struct global_cwq *gcwq = __gcwq;
+ struct worker_pool *pool;
struct worker *worker;
struct work_struct *work;
struct hlist_node *pos;
@@ -3322,13 +3338,15 @@
* cancelled.
*/
BUG_ON(gcwq->cpu != smp_processor_id());
- rc = trustee_wait_event(!(gcwq->flags & GCWQ_MANAGING_WORKERS));
+ rc = trustee_wait_event(!gcwq_is_managing_workers(gcwq));
BUG_ON(rc < 0);
- gcwq->flags |= GCWQ_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq) {
+ pool->flags |= POOL_MANAGING_WORKERS;
- list_for_each_entry(worker, &gcwq->idle_list, entry)
- worker->flags |= WORKER_ROGUE;
+ list_for_each_entry(worker, &pool->idle_list, entry)
+ worker->flags |= WORKER_ROGUE;
+ }
for_each_busy_worker(worker, i, pos, gcwq)
worker->flags |= WORKER_ROGUE;
@@ -3349,10 +3367,12 @@
* keep_working() are always true as long as the worklist is
* not empty.
*/
- atomic_set(get_gcwq_nr_running(gcwq->cpu), 0);
+ for_each_worker_pool(pool, gcwq)
+ atomic_set(get_pool_nr_running(pool), 0);
spin_unlock_irq(&gcwq->lock);
- del_timer_sync(&gcwq->idle_timer);
+ for_each_worker_pool(pool, gcwq)
+ del_timer_sync(&pool->idle_timer);
spin_lock_irq(&gcwq->lock);
/*
@@ -3374,29 +3394,38 @@
* may be frozen works in freezable cwqs. Don't declare
* completion while frozen.
*/
- while (gcwq->nr_workers != gcwq->nr_idle ||
- gcwq->flags & GCWQ_FREEZING ||
- gcwq->trustee_state == TRUSTEE_IN_CHARGE) {
- int nr_works = 0;
+ while (true) {
+ bool busy = false;
- list_for_each_entry(work, &gcwq->worklist, entry) {
- send_mayday(work);
- nr_works++;
- }
+ for_each_worker_pool(pool, gcwq)
+ busy |= pool->nr_workers != pool->nr_idle;
- list_for_each_entry(worker, &gcwq->idle_list, entry) {
- if (!nr_works--)
- break;
- wake_up_process(worker->task);
- }
+ if (!busy && !(gcwq->flags & GCWQ_FREEZING) &&
+ gcwq->trustee_state != TRUSTEE_IN_CHARGE)
+ break;
- if (need_to_create_worker(gcwq)) {
- spin_unlock_irq(&gcwq->lock);
- worker = create_worker(gcwq, false);
- spin_lock_irq(&gcwq->lock);
- if (worker) {
- worker->flags |= WORKER_ROGUE;
- start_worker(worker);
+ for_each_worker_pool(pool, gcwq) {
+ int nr_works = 0;
+
+ list_for_each_entry(work, &pool->worklist, entry) {
+ send_mayday(work);
+ nr_works++;
+ }
+
+ list_for_each_entry(worker, &pool->idle_list, entry) {
+ if (!nr_works--)
+ break;
+ wake_up_process(worker->task);
+ }
+
+ if (need_to_create_worker(pool)) {
+ spin_unlock_irq(&gcwq->lock);
+ worker = create_worker(pool, false);
+ spin_lock_irq(&gcwq->lock);
+ if (worker) {
+ worker->flags |= WORKER_ROGUE;
+ start_worker(worker);
+ }
}
}
@@ -3411,11 +3440,18 @@
* all workers till we're canceled.
*/
do {
- rc = trustee_wait_event(!list_empty(&gcwq->idle_list));
- while (!list_empty(&gcwq->idle_list))
- destroy_worker(list_first_entry(&gcwq->idle_list,
- struct worker, entry));
- } while (gcwq->nr_workers && rc >= 0);
+ rc = trustee_wait_event(gcwq_has_idle_workers(gcwq));
+
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ while (!list_empty(&pool->idle_list)) {
+ worker = list_first_entry(&pool->idle_list,
+ struct worker, entry);
+ destroy_worker(worker);
+ }
+ i |= pool->nr_workers;
+ }
+ } while (i && rc >= 0);
/*
* At this point, either draining has completed and no worker
@@ -3424,7 +3460,8 @@
* Tell the remaining busy ones to rebind once it finishes the
* currently scheduled works by scheduling the rebind_work.
*/
- WARN_ON(!list_empty(&gcwq->idle_list));
+ for_each_worker_pool(pool, gcwq)
+ WARN_ON(!list_empty(&pool->idle_list));
for_each_busy_worker(worker, i, pos, gcwq) {
struct work_struct *rebind_work = &worker->rebind_work;
@@ -3449,7 +3486,8 @@
}
/* relinquish manager role */
- gcwq->flags &= ~GCWQ_MANAGING_WORKERS;
+ for_each_worker_pool(pool, gcwq)
+ pool->flags &= ~POOL_MANAGING_WORKERS;
/* notify completion */
gcwq->trustee = NULL;
@@ -3491,8 +3529,10 @@
unsigned int cpu = (unsigned long)hcpu;
struct global_cwq *gcwq = get_gcwq(cpu);
struct task_struct *new_trustee = NULL;
- struct worker *uninitialized_var(new_worker);
+ struct worker *new_workers[NR_WORKER_POOLS] = { };
+ struct worker_pool *pool;
unsigned long flags;
+ int i;
action &= ~CPU_TASKS_FROZEN;
@@ -3505,12 +3545,12 @@
kthread_bind(new_trustee, cpu);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->first_idle);
- new_worker = create_worker(gcwq, false);
- if (!new_worker) {
- if (new_trustee)
- kthread_stop(new_trustee);
- return NOTIFY_BAD;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ new_workers[i] = create_worker(pool, false);
+ if (!new_workers[i++])
+ goto err_destroy;
}
}
@@ -3527,8 +3567,11 @@
wait_trustee_state(gcwq, TRUSTEE_IN_CHARGE);
/* fall through */
case CPU_UP_PREPARE:
- BUG_ON(gcwq->first_idle);
- gcwq->first_idle = new_worker;
+ i = 0;
+ for_each_worker_pool(pool, gcwq) {
+ BUG_ON(pool->first_idle);
+ pool->first_idle = new_workers[i++];
+ }
break;
case CPU_DYING:
@@ -3545,8 +3588,10 @@
gcwq->trustee_state = TRUSTEE_BUTCHER;
/* fall through */
case CPU_UP_CANCELED:
- destroy_worker(gcwq->first_idle);
- gcwq->first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ destroy_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;
case CPU_DOWN_FAILED:
@@ -3563,18 +3608,32 @@
* Put the first_idle in and request a real manager to
* take a look.
*/
- spin_unlock_irq(&gcwq->lock);
- kthread_bind(gcwq->first_idle->task, cpu);
- spin_lock_irq(&gcwq->lock);
- gcwq->flags |= GCWQ_MANAGE_WORKERS;
- start_worker(gcwq->first_idle);
- gcwq->first_idle = NULL;
+ for_each_worker_pool(pool, gcwq) {
+ spin_unlock_irq(&gcwq->lock);
+ kthread_bind(pool->first_idle->task, cpu);
+ spin_lock_irq(&gcwq->lock);
+ pool->flags |= POOL_MANAGE_WORKERS;
+ start_worker(pool->first_idle);
+ pool->first_idle = NULL;
+ }
break;
}
spin_unlock_irqrestore(&gcwq->lock, flags);
return notifier_from_errno(0);
+
+err_destroy:
+ if (new_trustee)
+ kthread_stop(new_trustee);
+
+ spin_lock_irqsave(&gcwq->lock, flags);
+ for (i = 0; i < NR_WORKER_POOLS; i++)
+ if (new_workers[i])
+ destroy_worker(new_workers[i]);
+ spin_unlock_irqrestore(&gcwq->lock, flags);
+
+ return NOTIFY_BAD;
}
#ifdef CONFIG_SMP
@@ -3733,6 +3792,7 @@
for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;
struct workqueue_struct *wq;
spin_lock_irq(&gcwq->lock);
@@ -3754,7 +3814,8 @@
cwq_activate_first_delayed(cwq);
}
- wake_up_worker(gcwq);
+ for_each_worker_pool(pool, gcwq)
+ wake_up_worker(pool);
spin_unlock_irq(&gcwq->lock);
}
@@ -3775,24 +3836,29 @@
/* initialize gcwqs */
for_each_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
+ struct worker_pool *pool;
spin_lock_init(&gcwq->lock);
- INIT_LIST_HEAD(&gcwq->worklist);
gcwq->cpu = cpu;
gcwq->flags |= GCWQ_DISASSOCIATED;
- INIT_LIST_HEAD(&gcwq->idle_list);
for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
- init_timer_deferrable(&gcwq->idle_timer);
- gcwq->idle_timer.function = idle_worker_timeout;
- gcwq->idle_timer.data = (unsigned long)gcwq;
+ for_each_worker_pool(pool, gcwq) {
+ pool->gcwq = gcwq;
+ INIT_LIST_HEAD(&pool->worklist);
+ INIT_LIST_HEAD(&pool->idle_list);
- setup_timer(&gcwq->mayday_timer, gcwq_mayday_timeout,
- (unsigned long)gcwq);
+ init_timer_deferrable(&pool->idle_timer);
+ pool->idle_timer.function = idle_worker_timeout;
+ pool->idle_timer.data = (unsigned long)pool;
- ida_init(&gcwq->worker_ida);
+ setup_timer(&pool->mayday_timer, gcwq_mayday_timeout,
+ (unsigned long)pool);
+
+ ida_init(&pool->worker_ida);
+ }
gcwq->trustee_state = TRUSTEE_DONE;
init_waitqueue_head(&gcwq->trustee_wait);
@@ -3801,15 +3867,20 @@
/* create the initial worker */
for_each_online_gcwq_cpu(cpu) {
struct global_cwq *gcwq = get_gcwq(cpu);
- struct worker *worker;
+ struct worker_pool *pool;
if (cpu != WORK_CPU_UNBOUND)
gcwq->flags &= ~GCWQ_DISASSOCIATED;
- worker = create_worker(gcwq, true);
- BUG_ON(!worker);
- spin_lock_irq(&gcwq->lock);
- start_worker(worker);
- spin_unlock_irq(&gcwq->lock);
+
+ for_each_worker_pool(pool, gcwq) {
+ struct worker *worker;
+
+ worker = create_worker(pool, true);
+ BUG_ON(!worker);
+ spin_lock_irq(&gcwq->lock);
+ start_worker(worker);
+ spin_unlock_irq(&gcwq->lock);
+ }
}
system_wq = alloc_workqueue("events", 0, 0);
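
Taken together, the workqueue.c changes above turn WQ_HIGHPRI from a queue-position hint into pool selection at workqueue creation time. A minimal usage sketch, assuming nothing beyond the flags shown in this patch (the workqueue names here are hypothetical):

#include <linux/workqueue.h>

static struct workqueue_struct *ex_wq;		/* hypothetical */
static struct workqueue_struct *ex_highpri_wq;	/* hypothetical */

static int __init example_wq_init(void)
{
	/* cwqs bind to gcwq->pools[0]; served by the normal worker pool */
	ex_wq = alloc_workqueue("example", 0, 0);

	/* cwqs bind to gcwq->pools[1]; served by dedicated highpri workers */
	ex_highpri_wq = alloc_workqueue("example_hp", WQ_HIGHPRI, 0);

	if (!ex_wq || !ex_highpri_wq)
		return -ENOMEM;
	return 0;
}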
diff --git a/sound/soc/msm/msm8974.c b/sound/soc/msm/msm8974.c
index 2a60b88..945840d 100644
--- a/sound/soc/msm/msm8974.c
+++ b/sound/soc/msm/msm8974.c
@@ -1260,6 +1260,22 @@
return 0;
}
+static int msm_slim_4_tx_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_interval *rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
+
+ struct snd_interval *channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
+
+ pr_debug("%s()\n", __func__);
+ rate->min = rate->max = 48000;
+ channels->min = channels->max = 2;
+
+ return 0;
+}
+
static int msm_slim_5_tx_be_hw_params_fixup(struct snd_soc_pcm_runtime *rtd,
struct snd_pcm_hw_params *params)
{
@@ -2195,7 +2211,7 @@
.codec_name = "taiko_codec",
.codec_dai_name = "taiko_vifeedback",
.be_id = MSM_BACKEND_DAI_SLIMBUS_4_TX,
- .be_hw_params_fixup = msm_slim_0_tx_be_hw_params_fixup,
+ .be_hw_params_fixup = msm_slim_4_tx_be_hw_params_fixup,
.ops = &msm8974_be_ops,
.no_host_mode = SND_SOC_DAI_LINK_NO_HOST,
.ignore_suspend = 1,