Merge branch 'pm-domains' into pm-for-linus
* pm-domains:
ARM: mach-shmobile: sh7372 A4R support (v4)
ARM: mach-shmobile: sh7372 A3SP support (v4)
PM / Sleep: Mark devices involved in wakeup signaling during suspend
diff --git a/Documentation/ABI/testing/sysfs-class-devfreq b/Documentation/ABI/testing/sysfs-class-devfreq
new file mode 100644
index 0000000..23d78b5
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-class-devfreq
@@ -0,0 +1,52 @@
+What: /sys/class/devfreq/.../
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ Provide a place in sysfs for the devfreq objects.
+ This allows accessing various devfreq-specific variables.
+ The name of a devfreq object, denoted as ... above, is the
+ same as the name of the device using devfreq.
+
+What: /sys/class/devfreq/.../governor
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/devfreq/.../governor shows the name of the
+ governor used by the corresponding devfreq object.
+
+What: /sys/class/devfreq/.../cur_freq
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/devfreq/.../cur_freq shows the current
+ frequency of the corresponding devfreq object.
+
+What: /sys/class/devfreq/.../central_polling
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/devfreq/.../central_polling shows whether
+ the devfreq object is using the devfreq-provided central
+ polling mechanism or not.
+
+What: /sys/class/devfreq/.../polling_interval
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/devfreq/.../polling_interval shows and sets
+ the requested polling interval of the corresponding devfreq
+ object. The values are represented in ms. If the value is
+ less than 1 jiffy, it is considered to be 0, which means
+ no polling. This value is meaningless if the governor is
+ not polling. Likewise, if the governor is not using the
+ devfreq-provided central polling mechanism
+ (/sys/class/devfreq/.../central_polling is 0), this value
+ may have no effect.
+
+What: /sys/class/devfreq/.../userspace/set_freq
+Date: September 2011
+Contact: MyungJoo Ham <myungjoo.ham@samsung.com>
+Description:
+ The /sys/class/devfreq/.../userspace/set_freq shows and
+ sets the requested frequency for the devfreq object if
+ userspace governor is in effect.
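+
+What follows is not an ABI entry but a usage illustration: a minimal
+userspace sketch (in C) that reads cur_freq. The device name "mydev"
+is a hypothetical placeholder:
+
+	#include <stdio.h>
+
+	int main(void)
+	{
+		char buf[64];
+		/* "mydev" stands in for the actual devfreq device name */
+		FILE *f = fopen("/sys/class/devfreq/mydev/cur_freq", "r");
+
+		if (!f)
+			return 1;
+		if (fgets(buf, sizeof(buf), f))
+			printf("current frequency: %s", buf);
+		fclose(f);
+		return 0;
+	}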
diff --git a/Documentation/hwmon/coretemp b/Documentation/hwmon/coretemp
index fa8776a..84d46c0 100644
--- a/Documentation/hwmon/coretemp
+++ b/Documentation/hwmon/coretemp
@@ -35,13 +35,6 @@
All Sysfs entries are named with their core_id (represented here by 'X').
tempX_input - Core temperature (in millidegrees Celsius).
tempX_max - All cooling devices should be turned on (on Core2).
- Initialized with IA32_THERM_INTERRUPT. When the CPU
- temperature reaches this temperature, an interrupt is
- generated and tempX_max_alarm is set.
-tempX_max_hyst - If the CPU temperature falls below than temperature,
- an interrupt is generated and tempX_max_alarm is reset.
-tempX_max_alarm - Set if the temperature reaches or exceeds tempX_max.
- Reset if the temperature drops to or below tempX_max_hyst.
tempX_crit - Maximum junction temperature (in millidegrees Celsius).
tempX_crit_alarm - Set when Out-of-spec bit is set, never clears.
Correct CPU operation is no longer guaranteed.
@@ -49,9 +42,10 @@
number. For Package temp, this will be "Physical id Y",
where Y is the package number.
-The TjMax temperature is set to 85 degrees C if undocumented model specific
-register (UMSR) 0xee has bit 30 set. If not the TjMax is 100 degrees C as
-(sometimes) documented in processor datasheet.
+On CPU models which support it, TjMax is read from a model-specific register.
+On other models, it is set to an arbitrary value based on weak heuristics.
+If these heuristics don't work for you, you can pass the correct TjMax value
+as a module parameter (tjmax).
Appendix A. Known TjMax lists (TBD):
Some information comes from ark.intel.com
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 854ed5ca..831bde2 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2240,6 +2240,13 @@
in <PAGE_SIZE> units (needed only for swap files).
See Documentation/power/swsusp-and-swap-files.txt
+ resumedelay= [HIBERNATION] Delay (in seconds) to pause before attempting to
+ read the resume files
+
+ resumewait [HIBERNATION] Wait (indefinitely) for resume device to show up.
+ Useful for devices that are detected asynchronously
+ (e.g. USB and MMC devices).
+
hibernate= [HIBERNATION]
noresume Don't check if there's a hibernation image
present during boot.
diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
index 8154699..ca5cdcd 100644
--- a/Documentation/networking/ip-sysctl.txt
+++ b/Documentation/networking/ip-sysctl.txt
@@ -1042,7 +1042,7 @@
The functional behaviour for certain settings is different
depending on whether local forwarding is enabled or not.
-accept_ra - BOOLEAN
+accept_ra - INTEGER
Accept Router Advertisements; autoconfigure using them.
Possible values are:
@@ -1106,7 +1106,7 @@
The amount of Duplicate Address Detection probes to send.
Default: 1
-forwarding - BOOLEAN
+forwarding - INTEGER
Configure interface-specific Host/Router behaviour.
Note: It is recommended to have the same setting on all
diff --git a/Documentation/networking/scaling.txt b/Documentation/networking/scaling.txt
index 58fd741..fe67b5c 100644
--- a/Documentation/networking/scaling.txt
+++ b/Documentation/networking/scaling.txt
@@ -27,7 +27,7 @@
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs. This mechanism is
generally known as “Receive-side Scaling” (RSS). The goal of RSS and
-the other scaling techniques to increase performance uniformly.
+the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is not the focus of these techniques.
@@ -186,10 +186,10 @@
same CPU. Indeed, with many flows and few CPUs, it is very likely that
a single application thread handles flows with many different flow hashes.
-rps_sock_table is a global flow table that contains the *desired* CPU for
-flows: the CPU that is currently processing the flow in userspace. Each
-table value is a CPU index that is updated during calls to recvmsg and
-sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
+rps_sock_flow_table is a global flow table that contains the *desired* CPU
+for flows: the CPU that is currently processing the flow in userspace.
+Each table value is a CPU index that is updated during calls to recvmsg
+and sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
and tcp_splice_read()).
When the scheduler moves a thread to a new CPU while it has outstanding
@@ -243,7 +243,7 @@
The number of entries in the per-queue flow table are set through:
- /sys/class/net/<dev>/queues/tx-<n>/rps_flow_cnt
+ /sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt
== Suggested Configuration
diff --git a/Documentation/power/basic-pm-debugging.txt b/Documentation/power/basic-pm-debugging.txt
index ddd78172..62eca08 100644
--- a/Documentation/power/basic-pm-debugging.txt
+++ b/Documentation/power/basic-pm-debugging.txt
@@ -201,3 +201,27 @@
analogous to the one described in section 1. If you find some failing drivers,
you will have to unload them every time before an STR transition (ie. before
you run s2ram), and please report the problems with them.
+
+There is a debugfs entry which shows the suspend to RAM statistics. Here is an
+example of its output.
+ # mount -t debugfs none /sys/kernel/debug
+ # cat /sys/kernel/debug/suspend_stats
+ success: 20
+ fail: 5
+ failed_freeze: 0
+ failed_prepare: 0
+ failed_suspend: 5
+ failed_suspend_noirq: 0
+ failed_resume: 0
+ failed_resume_noirq: 0
+ failures:
+ last_failed_dev: alarm
+ adc
+ last_failed_errno: -16
+ -16
+ last_failed_step: suspend
+ suspend
+The 'success' field is the number of successful suspend-to-RAM transitions and
+the 'fail' field is the number of failed ones. The remaining failed_* fields
+count the failures of the individual steps of suspend to RAM. suspend_stats
+lists only the last two failed devices, error numbers and failed steps of
+suspend.
diff --git a/Documentation/power/devices.txt b/Documentation/power/devices.txt
index 3384d59..646a89e 100644
--- a/Documentation/power/devices.txt
+++ b/Documentation/power/devices.txt
@@ -152,7 +152,9 @@
for the most part drivers should not change its value. The initial value of
should_wakeup is supposed to be false for the majority of devices; the major
exceptions are power buttons, keyboards, and Ethernet adapters whose WoL
-(wake-on-LAN) feature has been set up with ethtool.
+(wake-on-LAN) feature has been set up with ethtool. It should also default
+to true for devices that don't generate wakeup requests on their own but merely
+forward wakeup requests from one bus to another (like PCI bridges).
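+
+(As a sketch, and only by way of illustration, a driver for such a device
+could arrange this during probe with the standard helpers:
+
+	device_set_wakeup_capable(dev, true);
+	device_set_wakeup_enable(dev, true);
+
+where device_set_wakeup_enable() is the call that adjusts should_wakeup.)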
Whether or not a device is capable of issuing wakeup events is a hardware
matter, and the kernel is responsible for keeping track of it. By contrast,
@@ -279,10 +281,6 @@
time.) Unlike the other suspend-related phases, during the prepare
phase the device tree is traversed top-down.
- In addition to that, if device drivers need to allocate additional
- memory to be able to hadle device suspend correctly, that should be
- done in the prepare phase.
-
After the prepare callback method returns, no new children may be
registered below the device. The method may also prepare the device or
driver in some way for the upcoming system power transition (for
diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt
index bfed898..17e130a 100644
--- a/Documentation/power/pm_qos_interface.txt
+++ b/Documentation/power/pm_qos_interface.txt
@@ -4,14 +4,19 @@
performance expectations by drivers, subsystems and user space applications on
one of the parameters.
-Currently we have {cpu_dma_latency, network_latency, network_throughput} as the
-initial set of pm_qos parameters.
+Two different PM QoS frameworks are available:
+1. PM QoS classes for cpu_dma_latency, network_latency, network_throughput.
+2. The per-device PM QoS framework, which provides an API to manage the
+per-device latency constraints.
Each parameters have defined units:
* latency: usec
* timeout: usec
* throughput: kbs (kilo bit / sec)
+
+1. PM QoS framework
+
The infrastructure exposes multiple misc device nodes one per implemented
parameter. The set of parameters implement is defined by pm_qos_power_init()
and pm_qos_params.h. This is done because having the available parameters
@@ -23,14 +28,18 @@
changes to the request list or elements of the list. Typically the
aggregated target value is simply the max or min of the request values held
in the parameter list elements.
+Note: the aggregated target value is implemented as an atomic variable so that
+reading the aggregated value does not require any locking mechanism.
+
From kernel mode the use of this interface is simple:
-handle = pm_qos_add_request(param_class, target_value):
-Will insert an element into the list for that identified PM_QOS class with the
+void pm_qos_add_request(handle, param_class, target_value):
+Will insert an element into the list for that identified PM QoS class with the
target value. Upon change to this list the new target is recomputed and any
registered notifiers are called only if the target value is now different.
-Clients of pm_qos need to save the returned handle.
+Clients of pm_qos need to save the handle for future use in other
+pm_qos API functions.
void pm_qos_update_request(handle, new_target_value):
Will update the list element pointed to by the handle with the new target value
@@ -42,6 +51,20 @@
call the notification tree if the target was changed as a result of removing
the request.
+int pm_qos_request(param_class):
+Returns the aggregated value for a given PM QoS class.
+
+int pm_qos_request_active(handle):
+Returns whether the request is still active, i.e. it has not been removed from
+a PM QoS class constraints list.
+
+int pm_qos_add_notifier(param_class, notifier):
+Adds a notification callback function to the PM QoS class. The callback is
+called when the aggregated value for the PM QoS class is changed.
+
+int pm_qos_remove_notifier(param_class, notifier):
+Removes the notification callback function for the PM QoS class.
+
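+A short kernel-mode sketch of the calls above, assuming the handle is a
+struct pm_qos_request as declared in linux/pm_qos.h:
+
+	#include <linux/pm_qos.h>
+
+	static struct pm_qos_request my_qos_req;
+
+	/* require a CPU DMA latency no worse than 20 usec */
+	pm_qos_add_request(&my_qos_req, PM_QOS_CPU_DMA_LATENCY, 20);
+
+	/* relax the constraint later on */
+	pm_qos_update_request(&my_qos_req, 100);
+
+	/* drop it when it is no longer needed */
+	pm_qos_remove_request(&my_qos_req);
+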
From user mode:
Only processes can register a pm_qos request. To provide for automatic
@@ -63,4 +86,63 @@
node.
+2. PM QoS per-device latency framework
+
+For each device a list of performance requests is maintained along with
+an aggregated target value. The aggregated target value is updated with
+changes to the request list or elements of the list. Typically the
+aggregated target value is simply the max or min of the request values held
+in the parameter list elements.
+Note: the aggregated target value is implemented as an atomic variable so that
+reading the aggregated value does not require any locking mechanism.
+
+
+From kernel mode the use of this interface is the following:
+
+int dev_pm_qos_add_request(device, handle, value):
+Will insert an element into the list for that identified device with the
+target value. Upon change to this list the new target is recomputed and any
+registered notifiers are called only if the target value is now different.
+Clients of dev_pm_qos need to save the handle for future use in other
+dev_pm_qos API functions.
+
+int dev_pm_qos_update_request(handle, new_value):
+Will update the list element pointed to by the handle with the new target value
+and recompute the new aggregated target, calling the notification trees if the
+target is changed.
+
+int dev_pm_qos_remove_request(handle):
+Will remove the element. After removal it will update the aggregate target and
+call the notification trees if the target was changed as a result of removing
+the request.
+
+s32 dev_pm_qos_read_value(device):
+Returns the aggregated value for a given device's constraints list.
+
+
+Notification mechanisms:
+The per-device PM QoS framework has two distinct notification trees:
+a per-device notification tree and a global notification tree.
+
+int dev_pm_qos_add_notifier(device, notifier):
+Adds a notification callback function for the device.
+The callback is called when the aggregated value of the device constraints list
+is changed.
+
+int dev_pm_qos_remove_notifier(device, notifier):
+Removes the notification callback function for the device.
+
+int dev_pm_qos_add_global_notifier(notifier):
+Adds a notification callback function in the global notification tree of the
+framework.
+The callback is called when the aggregated value for any device is changed.
+
+int dev_pm_qos_remove_global_notifier(notifier):
+Removes the notification callback function from the global notification tree
+of the framework.
+
+
+From user mode:
+No API for user space access to the per-device latency constraints is provided
+yet - still under discussion.
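+
+A short sketch of the per-device API above, assuming the handle is a
+struct dev_pm_qos_request as declared in linux/pm_qos.h and that dev
+points to the device of interest:
+
+	#include <linux/pm_qos.h>
+
+	static struct dev_pm_qos_request my_dev_req;
+
+	/* add a 30 usec latency constraint on the device */
+	ret = dev_pm_qos_add_request(dev, &my_dev_req, 30);
+
+	/* tighten it later on */
+	ret = dev_pm_qos_update_request(&my_dev_req, 10);
+
+	/* remove it when done */
+	ret = dev_pm_qos_remove_request(&my_dev_req);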
diff --git a/Documentation/power/runtime_pm.txt b/Documentation/power/runtime_pm.txt
index 6066e3a..0e85608 100644
--- a/Documentation/power/runtime_pm.txt
+++ b/Documentation/power/runtime_pm.txt
@@ -43,13 +43,18 @@
...
};
-The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks are
-executed by the PM core for either the device type, or the class (if the device
-type's struct dev_pm_ops object does not exist), or the bus type (if the
-device type's and class' struct dev_pm_ops objects do not exist) of the given
-device (this allows device types to override callbacks provided by bus types or
-classes if necessary). The bus type, device type and class callbacks are
-referred to as subsystem-level callbacks in what follows.
+The ->runtime_suspend(), ->runtime_resume() and ->runtime_idle() callbacks
+are executed by the PM core for either the power domain, or the device type
+(if the device power domain's struct dev_pm_ops does not exist), or the class
+(if the device power domain's and type's struct dev_pm_ops objects do not
+exist), or the bus type (if the device power domain's, type's and class'
+struct dev_pm_ops objects do not exist) of the given device. The priority
+order of the callbacks, from highest to lowest, is therefore: power domain,
+device type, class and bus type callbacks, where a higher-priority set takes
+precedence over the lower-priority ones. The bus type, device type and
+class callbacks are referred to as subsystem-level callbacks in what follows,
+and generally speaking, power domain callbacks are used to represent
+power domains within a SoC.
By default, the callbacks are always invoked in process context with interrupts
enabled. However, subsystems can use the pm_runtime_irq_safe() helper function
@@ -477,12 +482,14 @@
If pm_runtime_irq_safe() has been called for a device then the following helper
functions may also be used in interrupt context:
+pm_runtime_idle()
pm_runtime_suspend()
pm_runtime_autosuspend()
pm_runtime_resume()
pm_runtime_get_sync()
pm_runtime_put_sync()
pm_runtime_put_sync_suspend()
+pm_runtime_put_sync_autosuspend()
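+
+For illustration, a sketch of an interrupt handler using these helpers (the
+foo_* names are hypothetical, and the driver is assumed to have called
+pm_runtime_irq_safe() during probe):
+
+	static irqreturn_t foo_irq_handler(int irq, void *data)
+	{
+		struct device *dev = data;
+
+		pm_runtime_get_sync(dev);
+		foo_handle_event(dev);
+		pm_runtime_put_sync_autosuspend(dev);
+
+		return IRQ_HANDLED;
+	}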
5. Runtime PM Initialization, Device Probing and Removal
diff --git a/Documentation/usb/power-management.txt b/Documentation/usb/power-management.txt
index c9ffa9c..e8662a5 100644
--- a/Documentation/usb/power-management.txt
+++ b/Documentation/usb/power-management.txt
@@ -439,10 +439,10 @@
device.
External suspend calls should never be allowed to fail in this way,
-only autosuspend calls. The driver can tell them apart by checking
-the PM_EVENT_AUTO bit in the message.event argument to the suspend
-method; this bit will be set for internal PM events (autosuspend) and
-clear for external PM events.
+only autosuspend calls. The driver can tell them apart by applying
+the PMSG_IS_AUTO() macro to the message argument to the suspend
+method; it will return true for internal PM events (autosuspend) and
+false for external PM events.
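+
+For example, a suspend method might be structured like this sketch (the
+mydrv_* helpers are hypothetical):
+
+	static int mydrv_suspend(struct usb_interface *intf,
+				 pm_message_t message)
+	{
+		if (PMSG_IS_AUTO(message) && mydrv_is_busy(intf))
+			return -EBUSY;	/* autosuspend is allowed to fail */
+
+		mydrv_quiesce(intf);	/* external suspend must not fail */
+		return 0;
+	}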
Mutual exclusion
diff --git a/MAINTAINERS b/MAINTAINERS
index ae8820e..1774b68 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2733,7 +2733,7 @@
FREEZER
M: Pavel Machek <pavel@ucw.cz>
M: "Rafael J. Wysocki" <rjw@sisk.pl>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
S: Supported
F: Documentation/power/freezing-of-tasks.txt
F: include/linux/freezer.h
@@ -2995,7 +2995,7 @@
HIBERNATION (aka Software Suspend, aka swsusp)
M: Pavel Machek <pavel@ucw.cz>
M: "Rafael J. Wysocki" <rjw@sisk.pl>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
S: Supported
F: arch/x86/power/
F: drivers/base/power/
@@ -3189,7 +3189,7 @@
IDLE-I7300
M: Andy Henroid <andrew.d.henroid@intel.com>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
S: Supported
F: drivers/idle/i7300_idle.c
@@ -3272,7 +3272,7 @@
INTEL IDLE DRIVER
M: Len Brown <lenb@kernel.org>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6.git
S: Supported
F: drivers/idle/intel_idle.c
@@ -3376,7 +3376,7 @@
INTEL MRST PMU DRIVER
M: Len Brown <len.brown@intel.com>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
S: Supported
F: arch/x86/platform/mrst/pmu.*
@@ -6306,7 +6306,7 @@
M: Len Brown <len.brown@intel.com>
M: Pavel Machek <pavel@ucw.cz>
M: "Rafael J. Wysocki" <rjw@sisk.pl>
-L: linux-pm@lists.linux-foundation.org
+L: linux-pm@vger.kernel.org
S: Supported
F: Documentation/power/
F: arch/x86/kernel/acpi/
@@ -6374,7 +6374,6 @@
F: arch/arm/mach-tegra
TEHUTI ETHERNET DRIVER
-M: Alexander Indenbaum <baum@tehutinetworks.net>
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
S: Supported
diff --git a/Makefile b/Makefile
index 733dcba..31f967c 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 1
SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION = -rc9
NAME = "Divemaster Edition"
# *DOCUMENTATION*
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 3269576..3146ed3 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1283,6 +1283,20 @@
processor into full low interrupt latency mode. ARM11MPCore
is not affected.
+config ARM_ERRATA_764369
+ bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
+ depends on CPU_V7 && SMP
+ help
+ This option enables the workaround for erratum 764369
+ affecting Cortex-A9 MPCore with two or more processors (all
+ current revisions). Under certain timing circumstances, a data
+ cache line maintenance operation by MVA targeting an Inner
+ Shareable memory region may fail to proceed up to either the
+ Point of Coherency or to the Point of Unification of the
+ system. This workaround adds a DSB instruction before the
+ relevant cache maintenance functions and sets a specific bit
+ in the diagnostic control register of the SCU.
+
endmenu
source "arch/arm/common/Kconfig"
diff --git a/arch/arm/include/asm/futex.h b/arch/arm/include/asm/futex.h
index 8c73900..253cc86 100644
--- a/arch/arm/include/asm/futex.h
+++ b/arch/arm/include/asm/futex.h
@@ -25,17 +25,17 @@
#ifdef CONFIG_SMP
-#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
+#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
smp_mb(); \
__asm__ __volatile__( \
- "1: ldrex %1, [%2]\n" \
+ "1: ldrex %1, [%3]\n" \
" " insn "\n" \
- "2: strex %1, %0, [%2]\n" \
- " teq %1, #0\n" \
+ "2: strex %2, %0, [%3]\n" \
+ " teq %2, #0\n" \
" bne 1b\n" \
" mov %0, #0\n" \
- __futex_atomic_ex_table("%4") \
- : "=&r" (ret), "=&r" (oldval) \
+ __futex_atomic_ex_table("%5") \
+ : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@@ -73,14 +73,14 @@
#include <linux/preempt.h>
#include <asm/domain.h>
-#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
+#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
__asm__ __volatile__( \
- "1: " T(ldr) " %1, [%2]\n" \
+ "1: " T(ldr) " %1, [%3]\n" \
" " insn "\n" \
- "2: " T(str) " %0, [%2]\n" \
+ "2: " T(str) " %0, [%3]\n" \
" mov %0, #0\n" \
- __futex_atomic_ex_table("%4") \
- : "=&r" (ret), "=&r" (oldval) \
+ __futex_atomic_ex_table("%5") \
+ : "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@@ -117,7 +117,7 @@
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
- int oldval = 0, ret;
+ int oldval = 0, ret, tmp;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
@@ -129,19 +129,19 @@
switch (op) {
case FUTEX_OP_SET:
- __futex_atomic_op("mov %0, %3", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("mov %0, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_ADD:
- __futex_atomic_op("add %0, %1, %3", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("add %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_OR:
- __futex_atomic_op("orr %0, %1, %3", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("orr %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
- __futex_atomic_op("and %0, %1, %3", ret, oldval, uaddr, ~oparg);
+ __futex_atomic_op("and %0, %1, %4", ret, oldval, tmp, uaddr, ~oparg);
break;
case FUTEX_OP_XOR:
- __futex_atomic_op("eor %0, %1, %3", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("eor %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
default:
ret = -ENOSYS;
diff --git a/arch/arm/include/asm/unistd.h b/arch/arm/include/asm/unistd.h
index 2c04ed5..c60a294 100644
--- a/arch/arm/include/asm/unistd.h
+++ b/arch/arm/include/asm/unistd.h
@@ -478,8 +478,8 @@
/*
* Unimplemented (or alternatively implemented) syscalls
*/
-#define __IGNORE_fadvise64_64 1
-#define __IGNORE_migrate_pages 1
+#define __IGNORE_fadvise64_64
+#define __IGNORE_migrate_pages
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_UNISTD_H */
diff --git a/arch/arm/kernel/smp_scu.c b/arch/arm/kernel/smp_scu.c
index 79ed5e7..7fcddb7 100644
--- a/arch/arm/kernel/smp_scu.c
+++ b/arch/arm/kernel/smp_scu.c
@@ -13,6 +13,7 @@
#include <asm/smp_scu.h>
#include <asm/cacheflush.h>
+#include <asm/cputype.h>
#define SCU_CTRL 0x00
#define SCU_CONFIG 0x04
@@ -37,6 +38,15 @@
{
u32 scu_ctrl;
+#ifdef CONFIG_ARM_ERRATA_764369
+ /* Cortex-A9 only */
+ if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
+ scu_ctrl = __raw_readl(scu_base + 0x30);
+ if (!(scu_ctrl & 1))
+ __raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
+ }
+#endif
+
scu_ctrl = __raw_readl(scu_base + SCU_CTRL);
/* already enabled? */
if (scu_ctrl & 1)
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index bf977f8..4e66f62 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -23,8 +23,10 @@
#if defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)
#define ARM_EXIT_KEEP(x) x
+#define ARM_EXIT_DISCARD(x)
#else
#define ARM_EXIT_KEEP(x)
+#define ARM_EXIT_DISCARD(x) x
#endif
OUTPUT_ARCH(arm)
@@ -39,6 +41,11 @@
SECTIONS
{
/*
+ * XXX: The linker does not define how output sections are
+ * assigned to input sections when there are multiple statements
+ * matching the same input section name. There is no documented
+ * order of matching.
+ *
* unwind exit sections must be discarded before the rest of the
* unwind sections get included.
*/
@@ -47,6 +54,9 @@
*(.ARM.extab.exit.text)
ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))
ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))
+ ARM_EXIT_DISCARD(EXIT_TEXT)
+ ARM_EXIT_DISCARD(EXIT_DATA)
+ EXIT_CALL
#ifndef CONFIG_HOTPLUG
*(.ARM.exidx.devexit.text)
*(.ARM.extab.devexit.text)
@@ -58,6 +68,8 @@
#ifndef CONFIG_SMP_ON_UP
*(.alt.smp.init)
#endif
+ *(.discard)
+ *(.discard.*)
}
#ifdef CONFIG_XIP_KERNEL
@@ -279,9 +291,6 @@
STABS_DEBUG
.comment 0 : { *(.comment) }
-
- /* Default discards */
- DISCARDS
}
/*
diff --git a/arch/arm/mach-exynos4/clock.c b/arch/arm/mach-exynos4/clock.c
index 79d6cd0..86964d2 100644
--- a/arch/arm/mach-exynos4/clock.c
+++ b/arch/arm/mach-exynos4/clock.c
@@ -899,8 +899,7 @@
.reg_div = { .reg = S5P_CLKDIV_CAM, .shift = 28, .size = 4 },
}, {
.clk = {
- .name = "sclk_cam",
- .devname = "exynos4-fimc.0",
+ .name = "sclk_cam0",
.enable = exynos4_clksrc_mask_cam_ctrl,
.ctrlbit = (1 << 16),
},
@@ -909,8 +908,7 @@
.reg_div = { .reg = S5P_CLKDIV_CAM, .shift = 16, .size = 4 },
}, {
.clk = {
- .name = "sclk_cam",
- .devname = "exynos4-fimc.1",
+ .name = "sclk_cam1",
.enable = exynos4_clksrc_mask_cam_ctrl,
.ctrlbit = (1 << 20),
},
diff --git a/arch/arm/mach-msm/clock.c b/arch/arm/mach-msm/clock.c
index 22a5376..d9145df 100644
--- a/arch/arm/mach-msm/clock.c
+++ b/arch/arm/mach-msm/clock.c
@@ -18,7 +18,7 @@
#include <linux/list.h>
#include <linux/err.h>
#include <linux/spinlock.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/mutex.h>
#include <linux/clk.h>
#include <linux/string.h>
diff --git a/arch/arm/mach-s3c2443/clock.c b/arch/arm/mach-s3c2443/clock.c
index a1a7176..38058af 100644
--- a/arch/arm/mach-s3c2443/clock.c
+++ b/arch/arm/mach-s3c2443/clock.c
@@ -128,7 +128,7 @@
unsigned long clkcon0;
clkcon0 = __raw_readl(S3C2443_CLKDIV0);
- clkcon0 &= S3C2443_CLKDIV0_ARMDIV_MASK;
+ clkcon0 &= ~S3C2443_CLKDIV0_ARMDIV_MASK;
clkcon0 |= val << S3C2443_CLKDIV0_ARMDIV_SHIFT;
__raw_writel(clkcon0, S3C2443_CLKDIV0);
}
diff --git a/arch/arm/mach-s5pv210/clock.c b/arch/arm/mach-s5pv210/clock.c
index 52a8e60..f5f8fa8 100644
--- a/arch/arm/mach-s5pv210/clock.c
+++ b/arch/arm/mach-s5pv210/clock.c
@@ -815,8 +815,7 @@
.reg_div = { .reg = S5P_CLK_DIV3, .shift = 20, .size = 4 },
}, {
.clk = {
- .name = "sclk_cam",
- .devname = "s5pv210-fimc.0",
+ .name = "sclk_cam0",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 3),
},
@@ -825,8 +824,7 @@
.reg_div = { .reg = S5P_CLK_DIV1, .shift = 12, .size = 4 },
}, {
.clk = {
- .name = "sclk_cam",
- .devname = "s5pv210-fimc.1",
+ .name = "sclk_cam1",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 4),
},
diff --git a/arch/arm/mm/cache-v7.S b/arch/arm/mm/cache-v7.S
index 3b24bfa..07c4bc8 100644
--- a/arch/arm/mm/cache-v7.S
+++ b/arch/arm/mm/cache-v7.S
@@ -174,6 +174,10 @@
dcache_line_size r2, r3
sub r3, r2, #1
bic r12, r0, r3
+#ifdef CONFIG_ARM_ERRATA_764369
+ ALT_SMP(W(dsb))
+ ALT_UP(W(nop))
+#endif
1:
USER( mcr p15, 0, r12, c7, c11, 1 ) @ clean D line to the point of unification
add r12, r12, r2
@@ -223,6 +227,10 @@
add r1, r0, r1
sub r3, r2, #1
bic r0, r0, r3
+#ifdef CONFIG_ARM_ERRATA_764369
+ ALT_SMP(W(dsb))
+ ALT_UP(W(nop))
+#endif
1:
mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D line / unified line
add r0, r0, r2
@@ -247,6 +255,10 @@
sub r3, r2, #1
tst r0, r3
bic r0, r0, r3
+#ifdef CONFIG_ARM_ERRATA_764369
+ ALT_SMP(W(dsb))
+ ALT_UP(W(nop))
+#endif
mcrne p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
tst r1, r3
@@ -270,6 +282,10 @@
dcache_line_size r2, r3
sub r3, r2, #1
bic r0, r0, r3
+#ifdef CONFIG_ARM_ERRATA_764369
+ ALT_SMP(W(dsb))
+ ALT_UP(W(nop))
+#endif
1:
mcr p15, 0, r0, c7, c10, 1 @ clean D / U line
add r0, r0, r2
@@ -288,6 +304,10 @@
dcache_line_size r2, r3
sub r3, r2, #1
bic r0, r0, r3
+#ifdef CONFIG_ARM_ERRATA_764369
+ ALT_SMP(W(dsb))
+ ALT_UP(W(nop))
+#endif
1:
mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
add r0, r0, r2
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0a0a1e7..c3ff82f 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -324,6 +324,8 @@
if (addr)
*handle = pfn_to_dma(dev, page_to_pfn(page));
+ else
+ __dma_free_buffer(page, size);
return addr;
}
diff --git a/arch/arm/plat-s5p/irq-gpioint.c b/arch/arm/plat-s5p/irq-gpioint.c
index f71078e..f88216d 100644
--- a/arch/arm/plat-s5p/irq-gpioint.c
+++ b/arch/arm/plat-s5p/irq-gpioint.c
@@ -114,17 +114,18 @@
{
static int used_gpioint_groups = 0;
int group = chip->group;
- struct s5p_gpioint_bank *bank = NULL;
+ struct s5p_gpioint_bank *b, *bank = NULL;
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
if (used_gpioint_groups >= S5P_GPIOINT_GROUP_COUNT)
return -ENOMEM;
- list_for_each_entry(bank, &banks, list) {
- if (group >= bank->start &&
- group < bank->start + bank->nr_groups)
+ list_for_each_entry(b, &banks, list) {
+ if (group >= b->start && group < b->start + b->nr_groups) {
+ bank = b;
break;
+ }
}
if (!bank)
return -EINVAL;
diff --git a/arch/powerpc/platforms/powermac/pci.c b/arch/powerpc/platforms/powermac/pci.c
index 5cc8385..31a7d3a 100644
--- a/arch/powerpc/platforms/powermac/pci.c
+++ b/arch/powerpc/platforms/powermac/pci.c
@@ -561,6 +561,20 @@
.write = u4_pcie_write_config,
};
+static void __devinit pmac_pci_fixup_u4_of_node(struct pci_dev *dev)
+{
+ /* Apple's device-tree "hides" the root complex virtual P2P bridge
+ * on U4. However, Linux sees it, causing the PCI <-> OF matching
+ * code to fail to properly match devices below it. This works around
+ * it by setting the node of the bridge to point to the PHB node,
+ * which is not entirely correct but fixes the matching code and
+ * doesn't break anything else. It's also the simplest possible fix.
+ */
+ if (dev->dev.of_node == NULL)
+ dev->dev.of_node = pcibios_get_phb_of_node(dev->bus);
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_APPLE, 0x5b, pmac_pci_fixup_u4_of_node);
+
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_PPC32
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index ed5cb5a..6b99fc3 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -91,6 +91,7 @@
select HAVE_ARCH_MUTEX_CPU_RELAX
select HAVE_ARCH_JUMP_LABEL if !MARCH_G5
select HAVE_RCU_TABLE_FREE if SMP
+ select ARCH_SAVE_PAGE_KEYS if HIBERNATION
select ARCH_INLINE_SPIN_TRYLOCK
select ARCH_INLINE_SPIN_TRYLOCK_BH
select ARCH_INLINE_SPIN_LOCK
diff --git a/arch/s390/include/asm/elf.h b/arch/s390/include/asm/elf.h
index 64b61bf..547f1a6 100644
--- a/arch/s390/include/asm/elf.h
+++ b/arch/s390/include/asm/elf.h
@@ -188,7 +188,8 @@
#define SET_PERSONALITY(ex) \
do { \
if (personality(current->personality) != PER_LINUX32) \
- set_personality(PER_LINUX); \
+ set_personality(PER_LINUX | \
+ (current->personality & ~PER_MASK)); \
if ((ex).e_ident[EI_CLASS] == ELFCLASS32) \
set_thread_flag(TIF_31BIT); \
else \
diff --git a/arch/s390/kernel/suspend.c b/arch/s390/kernel/suspend.c
index cf9e5c6..b6f9afe 100644
--- a/arch/s390/kernel/suspend.c
+++ b/arch/s390/kernel/suspend.c
@@ -7,6 +7,7 @@
*/
#include <linux/pfn.h>
+#include <linux/mm.h>
#include <asm/system.h>
/*
@@ -14,6 +15,123 @@
*/
extern const void __nosave_begin, __nosave_end;
+/*
+ * The restore of the saved pages in a hibernation image will set
+ * the change and referenced bits in the storage key for each page.
+ * Overindication of the referenced bits after a hibernation cycle
+ * does not cause any harm but the overindication of the change bits
+ * would cause trouble.
+ * Use the ARCH_SAVE_PAGE_KEYS hooks to save the storage key of each
+ * page to the most significant byte of the associated page frame
+ * number in the hibernation image.
+ */
+
+/*
+ * Key storage is allocated as a linked list of pages.
+ * The size of the keys array is PAGE_KEY_DATA_SIZE, i.e.
+ * (PAGE_SIZE - sizeof(struct page_key_data *)) bytes.
+ */
+struct page_key_data {
+ struct page_key_data *next;
+ unsigned char data[];
+};
+
+#define PAGE_KEY_DATA_SIZE (PAGE_SIZE - sizeof(struct page_key_data *))
+
+static struct page_key_data *page_key_data;
+static struct page_key_data *page_key_rp, *page_key_wp;
+static unsigned long page_key_rx, page_key_wx;
+
+/*
+ * For each page in the hibernation image one additional byte is
+ * stored in the most significant byte of the page frame number.
+ * On suspend no additional memory is required but on resume the
+ * keys need to be memorized until the page data has been restored.
+ * Only then can the storage keys be set to their old state.
+ */
+unsigned long page_key_additional_pages(unsigned long pages)
+{
+ return DIV_ROUND_UP(pages, PAGE_KEY_DATA_SIZE);
+}
+
+/*
+ * Free page_key_data list of arrays.
+ */
+void page_key_free(void)
+{
+ struct page_key_data *pkd;
+
+ while (page_key_data) {
+ pkd = page_key_data;
+ page_key_data = pkd->next;
+ free_page((unsigned long) pkd);
+ }
+}
+
+/*
+ * Allocate page_key_data list of arrays with enough room to store
+ * one byte for each page in the hibernation image.
+ */
+int page_key_alloc(unsigned long pages)
+{
+ struct page_key_data *pk;
+ unsigned long size;
+
+ size = DIV_ROUND_UP(pages, PAGE_KEY_DATA_SIZE);
+ while (size--) {
+ pk = (struct page_key_data *) get_zeroed_page(GFP_KERNEL);
+ if (!pk) {
+ page_key_free();
+ return -ENOMEM;
+ }
+ pk->next = page_key_data;
+ page_key_data = pk;
+ }
+ page_key_rp = page_key_wp = page_key_data;
+ page_key_rx = page_key_wx = 0;
+ return 0;
+}
+
+/*
+ * Save the storage key into the upper 8 bits of the page frame number.
+ */
+void page_key_read(unsigned long *pfn)
+{
+ unsigned long addr;
+
+ addr = (unsigned long) page_address(pfn_to_page(*pfn));
+ *(unsigned char *) pfn = (unsigned char) page_get_storage_key(addr);
+}
+
+/*
+ * Extract the storage key from the upper 8 bits of the page frame number
+ * and store it in the page_key_data list of arrays.
+ */
+void page_key_memorize(unsigned long *pfn)
+{
+ page_key_wp->data[page_key_wx] = *(unsigned char *) pfn;
+ *(unsigned char *) pfn = 0;
+ if (++page_key_wx < PAGE_KEY_DATA_SIZE)
+ return;
+ page_key_wp = page_key_wp->next;
+ page_key_wx = 0;
+}
+
+/*
+ * Get the next key from the page_key_data list of arrays and set the
+ * storage key of the page referred by @address. If @address refers to
+ * a "safe" page the swsusp_arch_resume code will transfer the storage
+ * key from the buffer page to the original page.
+ */
+void page_key_write(void *address)
+{
+ page_set_storage_key((unsigned long) address,
+ page_key_rp->data[page_key_rx], 0);
+	if (++page_key_rx < PAGE_KEY_DATA_SIZE)
+ return;
+ page_key_rp = page_key_rp->next;
+ page_key_rx = 0;
+}
+
int pfn_is_nosave(unsigned long pfn)
{
unsigned long nosave_begin_pfn = PFN_DOWN(__pa(&__nosave_begin));
diff --git a/arch/s390/kernel/swsusp_asm64.S b/arch/s390/kernel/swsusp_asm64.S
index 51bcdb5..acb78cd 100644
--- a/arch/s390/kernel/swsusp_asm64.S
+++ b/arch/s390/kernel/swsusp_asm64.S
@@ -136,11 +136,14 @@
0:
lg %r2,8(%r1)
lg %r4,0(%r1)
+ iske %r0,%r4
lghi %r3,PAGE_SIZE
lghi %r5,PAGE_SIZE
1:
mvcle %r2,%r4,0
jo 1b
+ lg %r2,8(%r1)
+ sske %r0,%r2
lg %r1,16(%r1)
ltgr %r1,%r1
jnz 0b
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index f69ff3c..5d56c2b 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -303,15 +303,15 @@
/* Walk the guest addr space page table */
table = gmap->table + (((to + off) >> 53) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
- return 0;
+ goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 42) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
- return 0;
+ goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 31) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
- return 0;
+ goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 20) & 0x7ff);
@@ -319,6 +319,7 @@
flush |= gmap_unlink_segment(gmap, table);
*table = _SEGMENT_ENTRY_INV;
}
+out:
up_read(&gmap->mm->mmap_sem);
if (flush)
gmap_flush_tlb(gmap);
diff --git a/arch/sparc/include/asm/spitfire.h b/arch/sparc/include/asm/spitfire.h
index 55a17c6..d06a2660 100644
--- a/arch/sparc/include/asm/spitfire.h
+++ b/arch/sparc/include/asm/spitfire.h
@@ -43,6 +43,8 @@
#define SUN4V_CHIP_NIAGARA1 0x01
#define SUN4V_CHIP_NIAGARA2 0x02
#define SUN4V_CHIP_NIAGARA3 0x03
+#define SUN4V_CHIP_NIAGARA4 0x04
+#define SUN4V_CHIP_NIAGARA5 0x05
#define SUN4V_CHIP_UNKNOWN 0xff
#ifndef __ASSEMBLY__
diff --git a/arch/sparc/include/asm/xor_64.h b/arch/sparc/include/asm/xor_64.h
index 9ed6ff6..ee8edc6 100644
--- a/arch/sparc/include/asm/xor_64.h
+++ b/arch/sparc/include/asm/xor_64.h
@@ -66,6 +66,8 @@
((tlb_type == hypervisor && \
(sun4v_chip_type == SUN4V_CHIP_NIAGARA1 || \
sun4v_chip_type == SUN4V_CHIP_NIAGARA2 || \
- sun4v_chip_type == SUN4V_CHIP_NIAGARA3)) ? \
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA3 || \
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA4 || \
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA5)) ? \
&xor_block_niagara : \
&xor_block_VIS)
diff --git a/arch/sparc/kernel/cpu.c b/arch/sparc/kernel/cpu.c
index 9810fd8..ba9b1ce 100644
--- a/arch/sparc/kernel/cpu.c
+++ b/arch/sparc/kernel/cpu.c
@@ -481,6 +481,18 @@
sparc_pmu_type = "niagara3";
break;
+ case SUN4V_CHIP_NIAGARA4:
+ sparc_cpu_type = "UltraSparc T4 (Niagara4)";
+ sparc_fpu_type = "UltraSparc T4 integrated FPU";
+ sparc_pmu_type = "niagara4";
+ break;
+
+ case SUN4V_CHIP_NIAGARA5:
+ sparc_cpu_type = "UltraSparc T5 (Niagara5)";
+ sparc_fpu_type = "UltraSparc T5 integrated FPU";
+ sparc_pmu_type = "niagara5";
+ break;
+
default:
printk(KERN_WARNING "CPU: Unknown sun4v cpu type [%s]\n",
prom_cpu_compatible);
diff --git a/arch/sparc/kernel/cpumap.c b/arch/sparc/kernel/cpumap.c
index 4197e8d..9323eaf 100644
--- a/arch/sparc/kernel/cpumap.c
+++ b/arch/sparc/kernel/cpumap.c
@@ -325,6 +325,8 @@
case SUN4V_CHIP_NIAGARA1:
case SUN4V_CHIP_NIAGARA2:
case SUN4V_CHIP_NIAGARA3:
+ case SUN4V_CHIP_NIAGARA4:
+ case SUN4V_CHIP_NIAGARA5:
rover_inc_table = niagara_iterate_method;
break;
default:
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index 0eac1b2..0d810c2 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -133,7 +133,7 @@
prom_niagara_prefix:
.asciz "SUNW,UltraSPARC-T"
prom_sparc_prefix:
- .asciz "SPARC-T"
+ .asciz "SPARC-"
.align 4
prom_root_compatible:
.skip 64
@@ -396,7 +396,7 @@
or %g1, %lo(prom_cpu_compatible), %g1
sethi %hi(prom_sparc_prefix), %g7
or %g7, %lo(prom_sparc_prefix), %g7
- mov 7, %g3
+ mov 6, %g3
90: ldub [%g7], %g2
ldub [%g1], %g4
cmp %g2, %g4
@@ -408,10 +408,23 @@
sethi %hi(prom_cpu_compatible), %g1
or %g1, %lo(prom_cpu_compatible), %g1
- ldub [%g1 + 7], %g2
+ ldub [%g1 + 6], %g2
+ cmp %g2, 'T'
+ be,pt %xcc, 70f
+ cmp %g2, 'M'
+ bne,pn %xcc, 4f
+ nop
+
+70: ldub [%g1 + 7], %g2
cmp %g2, '3'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA3, %g4
+ cmp %g2, '4'
+ be,pt %xcc, 5f
+ mov SUN4V_CHIP_NIAGARA4, %g4
+ cmp %g2, '5'
+ be,pt %xcc, 5f
+ mov SUN4V_CHIP_NIAGARA5, %g4
ba,pt %xcc, 4f
nop
@@ -545,6 +558,12 @@
cmp %g1, SUN4V_CHIP_NIAGARA3
be,pt %xcc, niagara2_patch
nop
+ cmp %g1, SUN4V_CHIP_NIAGARA4
+ be,pt %xcc, niagara2_patch
+ nop
+ cmp %g1, SUN4V_CHIP_NIAGARA5
+ be,pt %xcc, niagara2_patch
+ nop
call generic_patch_copyops
nop
diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
index c8cc461..f793742 100644
--- a/arch/sparc/kernel/process_32.c
+++ b/arch/sparc/kernel/process_32.c
@@ -380,8 +380,7 @@
#endif
}
- /* Now, this task is no longer a kernel thread. */
- current->thread.current_ds = USER_DS;
+ /* This task is no longer a kernel thread. */
if (current->thread.flags & SPARC_FLAG_KTHREAD) {
current->thread.flags &= ~SPARC_FLAG_KTHREAD;
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index c158a95..d959cd0 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -368,9 +368,6 @@
/* Clear FPU register state. */
t->fpsaved[0] = 0;
-
- if (get_thread_current_ds() != ASI_AIUS)
- set_fs(USER_DS);
}
/* It's a bit more tricky when 64-bit tasks are involved... */
diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
index d26e1f6..3e3e291 100644
--- a/arch/sparc/kernel/setup_32.c
+++ b/arch/sparc/kernel/setup_32.c
@@ -137,7 +137,7 @@
prom_halt();
break;
case 'p':
- /* Just ignore, this behavior is now the default. */
+ prom_early_console.flags &= ~CON_BOOT;
break;
default:
printk("Unknown boot switch (-%c)\n", c);
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 3c5bb78..c965595a 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -106,7 +106,7 @@
prom_halt();
break;
case 'p':
- /* Just ignore, this behavior is now the default. */
+ prom_early_console.flags &= ~CON_BOOT;
break;
case 'P':
/* Force UltraSPARC-III P-Cache on. */
@@ -425,10 +425,14 @@
else if (tlb_type == hypervisor) {
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
- sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= HWCAP_SPARC_BLKINIT;
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
- sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= HWCAP_SPARC_N2;
}
@@ -452,11 +456,15 @@
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1)
cap |= AV_SPARC_ASI_BLK_INIT;
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
- sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 |
AV_SPARC_ASI_BLK_INIT |
AV_SPARC_POPC);
- if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
+ if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
+ sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC |
AV_SPARC_FMAF);
}
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 581531d..8e073d8 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -511,6 +511,11 @@
for (i = 0; i < prom_trans_ents; i++)
prom_trans[i].data &= ~0x0003fe0000000000UL;
}
+
+ /* Force execute bit on. */
+ for (i = 0; i < prom_trans_ents; i++)
+ prom_trans[i].data |= (tlb_type == hypervisor ?
+ _PAGE_EXEC_4V : _PAGE_EXEC_4U);
}
static void __init hypervisor_tlb_lock(unsigned long vaddr,
diff --git a/arch/x86/kernel/rtc.c b/arch/x86/kernel/rtc.c
index 3f2ad26..ccdbc16 100644
--- a/arch/x86/kernel/rtc.c
+++ b/arch/x86/kernel/rtc.c
@@ -42,8 +42,11 @@
{
int real_seconds, real_minutes, cmos_minutes;
unsigned char save_control, save_freq_select;
+ unsigned long flags;
int retval = 0;
+ spin_lock_irqsave(&rtc_lock, flags);
+
/* tell the clock it's being set */
save_control = CMOS_READ(RTC_CONTROL);
CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
@@ -93,12 +96,17 @@
CMOS_WRITE(save_control, RTC_CONTROL);
CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
return retval;
}
unsigned long mach_get_cmos_time(void)
{
unsigned int status, year, mon, day, hour, min, sec, century = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&rtc_lock, flags);
/*
* If UIP is clear, then we have >= 244 microseconds before
@@ -125,6 +133,8 @@
status = CMOS_READ(RTC_CONTROL);
WARN_ON_ONCE(RTC_ALWAYS_BCD && (status & RTC_DM_BINARY));
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
if (RTC_ALWAYS_BCD || !(status & RTC_DM_BINARY)) {
sec = bcd2bin(sec);
min = bcd2bin(min);
@@ -169,24 +179,15 @@
int update_persistent_clock(struct timespec now)
{
- unsigned long flags;
- int retval;
-
- spin_lock_irqsave(&rtc_lock, flags);
- retval = x86_platform.set_wallclock(now.tv_sec);
- spin_unlock_irqrestore(&rtc_lock, flags);
-
- return retval;
+ return x86_platform.set_wallclock(now.tv_sec);
}
/* not static: needed by APM */
void read_persistent_clock(struct timespec *ts)
{
- unsigned long retval, flags;
+ unsigned long retval;
- spin_lock_irqsave(&rtc_lock, flags);
retval = x86_platform.get_wallclock();
- spin_unlock_irqrestore(&rtc_lock, flags);
ts->tv_sec = retval;
ts->tv_nsec = 0;
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 6f08bc9..8b4cc5f 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -3603,7 +3603,7 @@
break;
case Src2CL:
ctxt->src2.bytes = 1;
- ctxt->src2.val = ctxt->regs[VCPU_REGS_RCX] & 0x8;
+ ctxt->src2.val = ctxt->regs[VCPU_REGS_RCX] & 0xff;
break;
case Src2ImmByte:
rc = decode_imm(ctxt, &ctxt->src2, 1, true);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1c5b693..8e8da79 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -400,7 +400,8 @@
/* xchg acts as a barrier before the setting of the high bits */
orig.spte_low = xchg(&ssptep->spte_low, sspte.spte_low);
- orig.spte_high = ssptep->spte_high = sspte.spte_high;
+ orig.spte_high = ssptep->spte_high;
+ ssptep->spte_high = sspte.spte_high;
count_spte_clear(sptep, spte);
return orig.spte;
diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
index 039d913..404f21a 100644
--- a/arch/x86/pci/acpi.c
+++ b/arch/x86/pci/acpi.c
@@ -43,6 +43,17 @@
DMI_MATCH(DMI_PRODUCT_NAME, "ALiveSATA2-GLAN"),
},
},
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=30552 */
+ /* 2006 AMD HT/VIA system with two host bridges */
+ {
+ .callback = set_use_crs,
+ .ident = "ASUS M2V-MX SE",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
+ DMI_MATCH(DMI_BOARD_NAME, "M2V-MX SE"),
+ DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
+ },
+ },
{}
};
diff --git a/arch/x86/platform/mrst/vrtc.c b/arch/x86/platform/mrst/vrtc.c
index 73d70d6..6d5dbcd 100644
--- a/arch/x86/platform/mrst/vrtc.c
+++ b/arch/x86/platform/mrst/vrtc.c
@@ -58,8 +58,11 @@
unsigned long vrtc_get_time(void)
{
u8 sec, min, hour, mday, mon;
+ unsigned long flags;
u32 year;
+ spin_lock_irqsave(&rtc_lock, flags);
+
while ((vrtc_cmos_read(RTC_FREQ_SELECT) & RTC_UIP))
cpu_relax();
@@ -70,6 +73,8 @@
mon = vrtc_cmos_read(RTC_MONTH);
year = vrtc_cmos_read(RTC_YEAR);
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
/* vRTC YEAR reg contains the offset to 1960 */
year += 1960;
@@ -83,8 +88,10 @@
int vrtc_set_mmss(unsigned long nowtime)
{
int real_sec, real_min;
+ unsigned long flags;
int vrtc_min;
+ spin_lock_irqsave(&rtc_lock, flags);
vrtc_min = vrtc_cmos_read(RTC_MINUTES);
real_sec = nowtime % 60;
@@ -95,6 +102,8 @@
vrtc_cmos_write(real_sec, RTC_SECONDS);
vrtc_cmos_write(real_min, RTC_MINUTES);
+ spin_unlock_irqrestore(&rtc_lock, flags);
+
return 0;
}
diff --git a/block/blk-core.c b/block/blk-core.c
index b2ed78a..d34433a 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -348,9 +348,10 @@
EXPORT_SYMBOL(blk_put_queue);
/*
- * Note: If a driver supplied the queue lock, it should not zap that lock
- * unexpectedly as some queue cleanup components like elevator_exit() and
- * blk_throtl_exit() need queue lock.
+ * Note: If a driver supplied the queue lock, it is disconnected
+ * by this function. The actual state of the lock doesn't matter
+ * here as the request_queue isn't accessible after this point
+ * (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
*/
void blk_cleanup_queue(struct request_queue *q)
{
@@ -367,10 +368,8 @@
queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q);
mutex_unlock(&q->sysfs_lock);
- if (q->elevator)
- elevator_exit(q->elevator);
-
- blk_throtl_exit(q);
+ if (q->queue_lock != &q->__queue_lock)
+ q->queue_lock = &q->__queue_lock;
blk_put_queue(q);
}
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index e681805..60fda88 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -479,6 +479,11 @@
blk_sync_queue(q);
+ if (q->elevator)
+ elevator_exit(q->elevator);
+
+ blk_throtl_exit(q);
+
if (rl->rq_pool)
mempool_destroy(rl->rq_pool);
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 95b9e7e..a1efd75 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -130,4 +130,6 @@
source "drivers/virt/Kconfig"
+source "drivers/devfreq/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 7fa433a..97c957b5 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -127,3 +127,5 @@
# Virtualization drivers
obj-$(CONFIG_VIRT_DRIVERS) += virt/
+
+obj-$(CONFIG_PM_DEVFREQ) += devfreq/
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index 431ab11..2e69e09 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -37,7 +37,7 @@
#include <linux/dmi.h>
#include <linux/moduleparam.h>
#include <linux/sched.h> /* need_resched() */
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/clockchips.h>
#include <linux/cpuidle.h>
#include <linux/irqflags.h>
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index 3ed80b2..28a3e96 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -444,6 +444,22 @@
DMI_MATCH(DMI_BOARD_NAME, "A8N-SLI Premium"),
},
},
+ {
+ .callback = init_nvs_nosave,
+ .ident = "Sony Vaio VGN-SR26GN_P",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-SR26GN_P"),
+ },
+ },
+ {
+ .callback = init_nvs_nosave,
+ .ident = "Sony Vaio VGN-FW520F",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-FW520F"),
+ },
+ },
{},
};
#endif /* CONFIG_SUSPEND */
diff --git a/drivers/base/power/Makefile b/drivers/base/power/Makefile
index 6488ce1..81676dd 100644
--- a/drivers/base/power/Makefile
+++ b/drivers/base/power/Makefile
@@ -1,4 +1,4 @@
-obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o
+obj-$(CONFIG_PM) += sysfs.o generic_ops.o common.o qos.o
obj-$(CONFIG_PM_SLEEP) += main.o wakeup.o
obj-$(CONFIG_PM_RUNTIME) += runtime.o
obj-$(CONFIG_PM_TRACE_RTC) += trace.o
@@ -6,4 +6,4 @@
obj-$(CONFIG_PM_GENERIC_DOMAINS) += domain.o
obj-$(CONFIG_HAVE_CLK) += clock_ops.o
-ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
\ No newline at end of file
+ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1e15732..59f8ab2 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -46,6 +46,7 @@
LIST_HEAD(dpm_suspended_list);
LIST_HEAD(dpm_noirq_list);
+struct suspend_stats suspend_stats;
static DEFINE_MUTEX(dpm_list_mtx);
static pm_message_t pm_transition;
@@ -65,6 +66,7 @@
spin_lock_init(&dev->power.lock);
pm_runtime_init(dev);
INIT_LIST_HEAD(&dev->power.entry);
+ dev->power.power_state = PMSG_INVALID;
}
/**
@@ -96,6 +98,7 @@
dev_warn(dev, "parent %s should not be sleeping\n",
dev_name(dev->parent));
list_add_tail(&dev->power.entry, &dpm_list);
+ dev_pm_qos_constraints_init(dev);
mutex_unlock(&dpm_list_mtx);
}
@@ -109,6 +112,7 @@
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
complete_all(&dev->power.completion);
mutex_lock(&dpm_list_mtx);
+ dev_pm_qos_constraints_destroy(dev);
list_del_init(&dev->power.entry);
mutex_unlock(&dpm_list_mtx);
device_wakeup_disable(dev);
@@ -464,8 +468,12 @@
mutex_unlock(&dpm_list_mtx);
error = device_resume_noirq(dev, state);
- if (error)
+ if (error) {
+ suspend_stats.failed_resume_noirq++;
+ dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+ dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, " early", error);
+ }
mutex_lock(&dpm_list_mtx);
put_device(dev);
@@ -626,8 +634,12 @@
mutex_unlock(&dpm_list_mtx);
error = device_resume(dev, state, false);
- if (error)
+ if (error) {
+ suspend_stats.failed_resume++;
+ dpm_save_failed_step(SUSPEND_RESUME);
+ dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, "", error);
+ }
mutex_lock(&dpm_list_mtx);
}
@@ -802,6 +814,9 @@
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
+ suspend_stats.failed_suspend_noirq++;
+ dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
+ dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
}
@@ -927,8 +942,10 @@
int error;
error = __device_suspend(dev, pm_transition, true);
- if (error)
+ if (error) {
+ dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, pm_transition, " async", error);
+ }
put_device(dev);
}
@@ -971,6 +988,7 @@
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, "", error);
+ dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
}
@@ -984,7 +1002,10 @@
async_synchronize_full();
if (!error)
error = async_error;
- if (!error)
+ if (error) {
+ suspend_stats.failed_suspend++;
+ dpm_save_failed_step(SUSPEND_SUSPEND);
+ } else
dpm_show_time(starttime, state, NULL);
return error;
}
@@ -1094,7 +1115,10 @@
int error;
error = dpm_prepare(state);
- if (!error)
+ if (error) {
+ suspend_stats.failed_prepare++;
+ dpm_save_failed_step(SUSPEND_PREPARE);
+ } else
error = dpm_suspend(state);
return error;
}
diff --git a/drivers/base/power/opp.c b/drivers/base/power/opp.c
index b23de18..434a6c0 100644
--- a/drivers/base/power/opp.c
+++ b/drivers/base/power/opp.c
@@ -73,6 +73,7 @@
* RCU usage: nodes are not modified in the list of device_opp,
* however addition is possible and is secured by dev_opp_list_lock
* @dev: device pointer
+ * @head: notifier head to notify the OPP availability changes.
* @opp_list: list of opps
*
* This is an internal data structure maintaining the link to opps attached to
@@ -83,6 +84,7 @@
struct list_head node;
struct device *dev;
+ struct srcu_notifier_head head;
struct list_head opp_list;
};
@@ -404,6 +406,7 @@
}
dev_opp->dev = dev;
+ srcu_init_notifier_head(&dev_opp->head);
INIT_LIST_HEAD(&dev_opp->opp_list);
/* Secure the device list modification */
@@ -428,6 +431,11 @@
list_add_rcu(&new_opp->node, head);
mutex_unlock(&dev_opp_list_lock);
+ /*
+ * Notify the changes in the availability of the operable
+ * frequency/voltage list.
+ */
+ srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ADD, new_opp);
return 0;
}
@@ -504,6 +512,14 @@
mutex_unlock(&dev_opp_list_lock);
synchronize_rcu();
+ /* Notify the change of the OPP availability */
+ if (availability_req)
+ srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_ENABLE,
+ new_opp);
+ else
+ srcu_notifier_call_chain(&dev_opp->head, OPP_EVENT_DISABLE,
+ new_opp);
+
/* clean up old opp */
new_opp = opp;
goto out;
@@ -643,3 +659,17 @@
*table = NULL;
}
#endif /* CONFIG_CPU_FREQ */
+
+/**
+ * opp_get_notifier() - find notifier_head of the device with opp
+ * @dev: device pointer used to lookup device OPPs.
+ */
+struct srcu_notifier_head *opp_get_notifier(struct device *dev)
+{
+ struct device_opp *dev_opp = find_device_opp(dev);
+
+ if (IS_ERR(dev_opp))
+ return ERR_PTR(PTR_ERR(dev_opp)); /* matching type */
+
+ return &dev_opp->head;
+}
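+
+/*
+ * Usage sketch (hypothetical driver code; only opp_get_notifier() itself
+ * is provided here): a driver can watch OPP availability changes by
+ * registering a notifier block on the returned head, e.g.
+ *
+ *	static int my_opp_notify(struct notifier_block *nb,
+ *				 unsigned long event, void *data)
+ *	{
+ *		return NOTIFY_OK;
+ *	}
+ *
+ *	static struct notifier_block my_opp_nb = {
+ *		.notifier_call = my_opp_notify,
+ *	};
+ *
+ *	struct srcu_notifier_head *nh = opp_get_notifier(dev);
+ *
+ *	if (!IS_ERR(nh))
+ *		srcu_notifier_chain_register(nh, &my_opp_nb);
+ *
+ * The event argument is one of OPP_EVENT_ADD, OPP_EVENT_ENABLE or
+ * OPP_EVENT_DISABLE, and data points to the opp that changed.
+ */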
diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
index f2a25f1..9bf6232 100644
--- a/drivers/base/power/power.h
+++ b/drivers/base/power/power.h
@@ -1,3 +1,5 @@
+#include <linux/pm_qos.h>
+
#ifdef CONFIG_PM_RUNTIME
extern void pm_runtime_init(struct device *dev);
@@ -35,15 +37,21 @@
static inline void device_pm_init(struct device *dev)
{
spin_lock_init(&dev->power.lock);
+ dev->power.power_state = PMSG_INVALID;
pm_runtime_init(dev);
}
+static inline void device_pm_add(struct device *dev)
+{
+ dev_pm_qos_constraints_init(dev);
+}
+
static inline void device_pm_remove(struct device *dev)
{
+ dev_pm_qos_constraints_destroy(dev);
pm_runtime_remove(dev);
}
-static inline void device_pm_add(struct device *dev) {}
static inline void device_pm_move_before(struct device *deva,
struct device *devb) {}
static inline void device_pm_move_after(struct device *deva,
diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
new file mode 100644
index 0000000..91e0614
--- /dev/null
+++ b/drivers/base/power/qos.c
@@ -0,0 +1,419 @@
+/*
+ * Devices PM QoS constraints management
+ *
+ * Copyright (C) 2011 Texas Instruments, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ *
+ * This module exposes the interface to kernel space for specifying
+ * per-device PM QoS dependencies. It provides infrastructure for registration
+ * of:
+ *
+ * Dependents on a QoS value: register requests
+ * Watchers of QoS value: get notified when the target QoS value changes
+ *
+ * This QoS design is best effort based. Dependents register their QoS needs.
+ * Watchers register to keep track of the current QoS needs of the system.
+ * Watchers can register different types of notification callbacks:
+ * . a per-device notification callback using the dev_pm_qos_*_notifier API.
+ * The notification chain data is stored in the per-device constraint
+ * data struct.
+ * . a system-wide notification callback using the dev_pm_qos_*_global_notifier
+ * API. The notification chain data is stored in a static variable.
+ *
+ * Note about the per-device constraint data struct allocation:
+ * . The per-device constraints data struct ptr is stored into the device
+ * dev_pm_info.
+ * . To minimize the data usage by the per-device constraints, the data struct
+ * is only allocated at the first call to dev_pm_qos_add_request.
+ * . The data is later free'd when the device is removed from the system.
+ * . A global mutex protects the constraints users from the data being
+ * allocated and free'd.
+ */
+
+#include <linux/pm_qos.h>
+#include <linux/spinlock.h>
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/mutex.h>
+
+
+static DEFINE_MUTEX(dev_pm_qos_mtx);
+
+static BLOCKING_NOTIFIER_HEAD(dev_pm_notifiers);
+
+/**
+ * dev_pm_qos_read_value - Get PM QoS constraint for a given device.
+ * @dev: Device to get the PM QoS constraint value for.
+ */
+s32 dev_pm_qos_read_value(struct device *dev)
+{
+ struct pm_qos_constraints *c;
+ unsigned long flags;
+ s32 ret = 0;
+
+ spin_lock_irqsave(&dev->power.lock, flags);
+
+ c = dev->power.constraints;
+ if (c)
+ ret = pm_qos_read_value(c);
+
+ spin_unlock_irqrestore(&dev->power.lock, flags);
+
+ return ret;
+}
+
+/*
+ * apply_constraint
+ * @req: constraint request to apply
+ * @action: action to perform (add/update/remove), of type enum pm_qos_req_action
+ * @value: defines the qos request
+ *
+ * Internal function to update the constraints list using the PM QoS core
+ * code and if needed call the per-device and the global notification
+ * callbacks
+ */
+static int apply_constraint(struct dev_pm_qos_request *req,
+ enum pm_qos_req_action action, int value)
+{
+ int ret, curr_value;
+
+ ret = pm_qos_update_target(req->dev->power.constraints,
+ &req->node, action, value);
+
+ if (ret) {
+ /* Call the global callbacks if needed */
+ curr_value = pm_qos_read_value(req->dev->power.constraints);
+ blocking_notifier_call_chain(&dev_pm_notifiers,
+ (unsigned long)curr_value,
+ req);
+ }
+
+ return ret;
+}
+
+/*
+ * dev_pm_qos_constraints_allocate
+ * @dev: device to allocate data for
+ *
+ * Called at the first call to add_request, for constraint data allocation.
+ * Must be called with the dev_pm_qos_mtx mutex held.
+ */
+static int dev_pm_qos_constraints_allocate(struct device *dev)
+{
+ struct pm_qos_constraints *c;
+ struct blocking_notifier_head *n;
+
+ c = kzalloc(sizeof(*c), GFP_KERNEL);
+ if (!c)
+ return -ENOMEM;
+
+ n = kzalloc(sizeof(*n), GFP_KERNEL);
+ if (!n) {
+ kfree(c);
+ return -ENOMEM;
+ }
+ BLOCKING_INIT_NOTIFIER_HEAD(n);
+
+ plist_head_init(&c->list);
+ c->target_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
+ c->default_value = PM_QOS_DEV_LAT_DEFAULT_VALUE;
+ c->type = PM_QOS_MIN;
+ c->notifiers = n;
+
+ spin_lock_irq(&dev->power.lock);
+ dev->power.constraints = c;
+ spin_unlock_irq(&dev->power.lock);
+
+ return 0;
+}
+
+/**
+ * dev_pm_qos_constraints_init - Initialize device's PM QoS constraints pointer.
+ * @dev: target device
+ *
+ * Called from the device PM subsystem during device insertion under
+ * device_pm_lock().
+ */
+void dev_pm_qos_constraints_init(struct device *dev)
+{
+ mutex_lock(&dev_pm_qos_mtx);
+ dev->power.constraints = NULL;
+ dev->power.power_state = PMSG_ON;
+ mutex_unlock(&dev_pm_qos_mtx);
+}
+
+/**
+ * dev_pm_qos_constraints_destroy
+ * @dev: target device
+ *
+ * Called from the device PM subsystem on device removal under device_pm_lock().
+ */
+void dev_pm_qos_constraints_destroy(struct device *dev)
+{
+ struct dev_pm_qos_request *req, *tmp;
+ struct pm_qos_constraints *c;
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ dev->power.power_state = PMSG_INVALID;
+ c = dev->power.constraints;
+ if (!c)
+ goto out;
+
+ /* Flush the constraints list for the device */
+ plist_for_each_entry_safe(req, tmp, &c->list, node) {
+ /*
+ * Update constraints list and call the notification
+ * callbacks if needed
+ */
+ apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+ memset(req, 0, sizeof(*req));
+ }
+
+ spin_lock_irq(&dev->power.lock);
+ dev->power.constraints = NULL;
+ spin_unlock_irq(&dev->power.lock);
+
+ kfree(c->notifiers);
+ kfree(c);
+
+ out:
+ mutex_unlock(&dev_pm_qos_mtx);
+}
+
+/**
+ * dev_pm_qos_add_request - inserts new qos request into the list
+ * @dev: target device for the constraint
+ * @req: pointer to a preallocated handle
+ * @value: defines the qos request
+ *
+ * This function inserts a new entry in the device constraints list of
+ * requested qos performance characteristics. It recomputes the aggregate
+ * QoS expectations of parameters and initializes the dev_pm_qos_request
+ * handle. Caller needs to save this handle for later use in updates and
+ * removal.
+ *
+ * Returns 1 if the aggregated constraint value has changed,
+ * 0 if the aggregated constraint value has not changed,
+ * -EINVAL in case of wrong parameters, -ENOMEM if there's not enough memory
+ * to allocate for data structures, -ENODEV if the device has just been removed
+ * from the system.
+ */
+int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
+ s32 value)
+{
+ int ret = 0;
+
+	if (!dev || !req) /* guard against callers passing in null */
+ return -EINVAL;
+
+ if (dev_pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "dev_pm_qos_add_request() called for already "
+ "added request\n");
+ return -EINVAL;
+ }
+
+ req->dev = dev;
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ if (!dev->power.constraints) {
+ if (dev->power.power_state.event == PM_EVENT_INVALID) {
+ /* The device has been removed from the system. */
+ req->dev = NULL;
+ ret = -ENODEV;
+ goto out;
+ } else {
+ /*
+ * Allocate the constraints data on the first call to
+ * add_request, i.e. only if the data is not already
+ * allocated and if the device has not been removed.
+ */
+ ret = dev_pm_qos_constraints_allocate(dev);
+ }
+ }
+
+ if (!ret)
+ ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
+
+ out:
+ mutex_unlock(&dev_pm_qos_mtx);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
+
+/**
+ * dev_pm_qos_update_request - modifies an existing qos request
+ * @req : handle to list element holding a dev_pm_qos request to use
+ * @new_value: defines the qos request
+ *
+ * Updates an existing dev PM qos request along with updating the
+ * target value.
+ *
+ * Attempts are made to make this code callable on hot code paths.
+ *
+ * Returns 1 if the aggregated constraint value has changed,
+ * 0 if the aggregated constraint value has not changed,
+ * -EINVAL in case of wrong parameters, -ENODEV if the device has been
+ * removed from the system
+ */
+int dev_pm_qos_update_request(struct dev_pm_qos_request *req,
+ s32 new_value)
+{
+ int ret = 0;
+
+	if (!req) /* guard against callers passing in null */
+ return -EINVAL;
+
+ if (!dev_pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "dev_pm_qos_update_request() called for "
+ "unknown object\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ if (req->dev->power.constraints) {
+ if (new_value != req->node.prio)
+ ret = apply_constraint(req, PM_QOS_UPDATE_REQ,
+ new_value);
+ } else {
+ /* Return if the device has been removed */
+ ret = -ENODEV;
+ }
+
+ mutex_unlock(&dev_pm_qos_mtx);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_update_request);
+
+/**
+ * dev_pm_qos_remove_request - removes an existing qos request
+ * @req: handle to request list element
+ *
+ * Will remove pm qos request from the list of constraints and
+ * recompute the current target value. Call this on slow code paths.
+ *
+ * Returns 1 if the aggregated constraint value has changed,
+ * 0 if the aggregated constraint value has not changed,
+ * -EINVAL in case of wrong parameters, -ENODEV if the device has been
+ * removed from the system
+ */
+int dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
+{
+ int ret = 0;
+
+	if (!req) /* guard against callers passing in null */
+ return -EINVAL;
+
+ if (!dev_pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "dev_pm_qos_remove_request() called for "
+ "unknown object\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ if (req->dev->power.constraints) {
+ ret = apply_constraint(req, PM_QOS_REMOVE_REQ,
+ PM_QOS_DEFAULT_VALUE);
+ memset(req, 0, sizeof(*req));
+ } else {
+ /* Return if the device has been removed */
+ ret = -ENODEV;
+ }
+
+ mutex_unlock(&dev_pm_qos_mtx);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_remove_request);
+
+/**
+ * dev_pm_qos_add_notifier - sets notification entry for changes to target value
+ * of per-device PM QoS constraints
+ *
+ * @dev: target device for the constraint
+ * @notifier: notifier block managed by caller.
+ *
+ * Will register the notifier into a notification chain that gets called
+ * upon changes to the target value for the device.
+ */
+int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier)
+{
+ int retval = 0;
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ /* Silently return if the constraints object is not present. */
+ if (dev->power.constraints)
+ retval = blocking_notifier_chain_register(
+ dev->power.constraints->notifiers,
+ notifier);
+
+ mutex_unlock(&dev_pm_qos_mtx);
+ return retval;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_add_notifier);
+
+/**
+ * dev_pm_qos_remove_notifier - deletes notification for changes to target value
+ * of per-device PM QoS constraints
+ *
+ * @dev: target device for the constraint
+ * @notifier: notifier block to be removed.
+ *
+ * Will remove the notifier from the notification chain that gets called
+ * upon changes to the target value.
+ */
+int dev_pm_qos_remove_notifier(struct device *dev,
+ struct notifier_block *notifier)
+{
+ int retval = 0;
+
+ mutex_lock(&dev_pm_qos_mtx);
+
+ /* Silently return if the constraints object is not present. */
+ if (dev->power.constraints)
+ retval = blocking_notifier_chain_unregister(
+ dev->power.constraints->notifiers,
+ notifier);
+
+ mutex_unlock(&dev_pm_qos_mtx);
+ return retval;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_remove_notifier);
+
+/**
+ * dev_pm_qos_add_global_notifier - sets notification entry for changes to
+ * target value of the PM QoS constraints for any device
+ *
+ * @notifier: notifier block managed by caller.
+ *
+ * Will register the notifier into a notification chain that gets called
+ * upon changes to the target value for any device.
+ */
+int dev_pm_qos_add_global_notifier(struct notifier_block *notifier)
+{
+ return blocking_notifier_chain_register(&dev_pm_notifiers, notifier);
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_add_global_notifier);
+
+/**
+ * dev_pm_qos_remove_global_notifier - deletes notification for changes to
+ * target value of PM QoS constraints for any device
+ *
+ * @notifier: notifier block to be removed.
+ *
+ * Will remove the notifier from the notification chain that gets called
+ * upon changes to the target value for any device.
+ */
+int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier)
+{
+ return blocking_notifier_chain_unregister(&dev_pm_notifiers, notifier);
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_remove_global_notifier);
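
Putting the request API together, a driver would typically go through the
following life cycle (a sketch; the latency values are illustrative and
follow the units of the PM_QOS_DEV_LAT constraint class):

	static struct dev_pm_qos_request my_req;

	/* first add_request allocates the per-device constraints */
	dev_pm_qos_add_request(dev, &my_req, 100);

	/* tighten or relax the constraint later on */
	dev_pm_qos_update_request(&my_req, 50);

	/* on teardown; the handle may be reused afterwards */
	dev_pm_qos_remove_request(&my_req);
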
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index acb3f83..6bb3aaf 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -9,6 +9,7 @@
#include <linux/sched.h>
#include <linux/pm_runtime.h>
+#include <trace/events/rpm.h>
#include "power.h"
static int rpm_resume(struct device *dev, int rpmflags);
@@ -155,6 +156,31 @@
}
/**
+ * __rpm_callback - Run a given runtime PM callback for a given device.
+ * @cb: Runtime PM callback to run.
+ * @dev: Device to run the callback for.
+ */
+static int __rpm_callback(int (*cb)(struct device *), struct device *dev)
+ __releases(&dev->power.lock) __acquires(&dev->power.lock)
+{
+ int retval;
+
+ if (dev->power.irq_safe)
+ spin_unlock(&dev->power.lock);
+ else
+ spin_unlock_irq(&dev->power.lock);
+
+ retval = cb(dev);
+
+ if (dev->power.irq_safe)
+ spin_lock(&dev->power.lock);
+ else
+ spin_lock_irq(&dev->power.lock);
+
+ return retval;
+}
+
+/**
* rpm_idle - Notify device bus type if the device can be suspended.
* @dev: Device to notify the bus type about.
* @rpmflags: Flag bits.
@@ -171,6 +197,7 @@
int (*callback)(struct device *);
int retval;
+ trace_rpm_idle(dev, rpmflags);
retval = rpm_check_suspend_allowed(dev);
if (retval < 0)
; /* Conditions are wrong. */
@@ -225,24 +252,14 @@
else
callback = NULL;
- if (callback) {
- if (dev->power.irq_safe)
- spin_unlock(&dev->power.lock);
- else
- spin_unlock_irq(&dev->power.lock);
-
- callback(dev);
-
- if (dev->power.irq_safe)
- spin_lock(&dev->power.lock);
- else
- spin_lock_irq(&dev->power.lock);
- }
+ if (callback)
+ __rpm_callback(callback, dev);
dev->power.idle_notification = false;
wake_up_all(&dev->power.wait_queue);
out:
+ trace_rpm_return_int(dev, _THIS_IP_, retval);
return retval;
}
@@ -252,22 +269,14 @@
* @dev: Device to run the callback for.
*/
static int rpm_callback(int (*cb)(struct device *), struct device *dev)
- __releases(&dev->power.lock) __acquires(&dev->power.lock)
{
int retval;
if (!cb)
return -ENOSYS;
- if (dev->power.irq_safe) {
- retval = cb(dev);
- } else {
- spin_unlock_irq(&dev->power.lock);
+ retval = __rpm_callback(cb, dev);
- retval = cb(dev);
-
- spin_lock_irq(&dev->power.lock);
- }
dev->power.runtime_error = retval;
return retval != -EACCES ? retval : -EIO;
}
@@ -277,14 +286,16 @@
* @dev: Device to suspend.
* @rpmflags: Flag bits.
*
- * Check if the device's runtime PM status allows it to be suspended. If
- * another suspend has been started earlier, either return immediately or wait
- * for it to finish, depending on the RPM_NOWAIT and RPM_ASYNC flags. Cancel a
- * pending idle notification. If the RPM_ASYNC flag is set then queue a
- * suspend request; otherwise run the ->runtime_suspend() callback directly.
- * If a deferred resume was requested while the callback was running then carry
- * it out; otherwise send an idle notification for the device (if the suspend
- * failed) or for its parent (if the suspend succeeded).
+ * Check if the device's runtime PM status allows it to be suspended.
+ * Cancel a pending idle notification, autosuspend or suspend. If
+ * another suspend has been started earlier, either return immediately
+ * or wait for it to finish, depending on the RPM_NOWAIT and RPM_ASYNC
+ * flags. If the RPM_ASYNC flag is set then queue a suspend request;
+ * otherwise run the ->runtime_suspend() callback directly. When
+ * ->runtime_suspend succeeded, if a deferred resume was requested while
+ * the callback was running then carry it out, otherwise send an idle
+ * notification for its parent (if the suspend succeeded and both
+ * ignore_children of parent->power and irq_safe of dev->power are not set).
*
* This function must be called under dev->power.lock with interrupts disabled.
*/
@@ -295,7 +306,7 @@
struct device *parent = NULL;
int retval;
- dev_dbg(dev, "%s flags 0x%x\n", __func__, rpmflags);
+ trace_rpm_suspend(dev, rpmflags);
repeat:
retval = rpm_check_suspend_allowed(dev);
@@ -347,6 +358,15 @@
goto out;
}
+ if (dev->power.irq_safe) {
+ spin_unlock(&dev->power.lock);
+
+ cpu_relax();
+
+ spin_lock(&dev->power.lock);
+ goto repeat;
+ }
+
/* Wait for the other suspend running in parallel with us. */
for (;;) {
prepare_to_wait(&dev->power.wait_queue, &wait,
@@ -400,15 +420,16 @@
dev->power.runtime_error = 0;
else
pm_runtime_cancel_pending(dev);
- } else {
+ wake_up_all(&dev->power.wait_queue);
+ goto out;
+ }
no_callback:
- __update_runtime_status(dev, RPM_SUSPENDED);
- pm_runtime_deactivate_timer(dev);
+ __update_runtime_status(dev, RPM_SUSPENDED);
+ pm_runtime_deactivate_timer(dev);
- if (dev->parent) {
- parent = dev->parent;
- atomic_add_unless(&parent->power.child_count, -1, 0);
- }
+ if (dev->parent) {
+ parent = dev->parent;
+ atomic_add_unless(&parent->power.child_count, -1, 0);
}
wake_up_all(&dev->power.wait_queue);
@@ -430,7 +451,7 @@
}
out:
- dev_dbg(dev, "%s returns %d\n", __func__, retval);
+ trace_rpm_return_int(dev, _THIS_IP_, retval);
return retval;
}
@@ -459,7 +480,7 @@
struct device *parent = NULL;
int retval = 0;
- dev_dbg(dev, "%s flags 0x%x\n", __func__, rpmflags);
+ trace_rpm_resume(dev, rpmflags);
repeat:
if (dev->power.runtime_error)
@@ -496,6 +517,15 @@
goto out;
}
+ if (dev->power.irq_safe) {
+ spin_unlock(&dev->power.lock);
+
+ cpu_relax();
+
+ spin_lock(&dev->power.lock);
+ goto repeat;
+ }
+
/* Wait for the operation carried out in parallel with us. */
for (;;) {
prepare_to_wait(&dev->power.wait_queue, &wait,
@@ -615,7 +645,7 @@
spin_lock_irq(&dev->power.lock);
}
- dev_dbg(dev, "%s returns %d\n", __func__, retval);
+ trace_rpm_return_int(dev, _THIS_IP_, retval);
return retval;
}
@@ -732,13 +762,16 @@
* return immediately if it is larger than zero. Then carry out an idle
* notification, either synchronous or asynchronous.
*
- * This routine may be called in atomic context if the RPM_ASYNC flag is set.
+ * This routine may be called in atomic context if the RPM_ASYNC flag is set,
+ * or if pm_runtime_irq_safe() has been called.
*/
int __pm_runtime_idle(struct device *dev, int rpmflags)
{
unsigned long flags;
int retval;
+ might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe);
+
if (rpmflags & RPM_GET_PUT) {
if (!atomic_dec_and_test(&dev->power.usage_count))
return 0;
@@ -761,13 +794,16 @@
* return immediately if it is larger than zero. Then carry out a suspend,
* either synchronous or asynchronous.
*
- * This routine may be called in atomic context if the RPM_ASYNC flag is set.
+ * This routine may be called in atomic context if the RPM_ASYNC flag is set,
+ * or if pm_runtime_irq_safe() has been called.
*/
int __pm_runtime_suspend(struct device *dev, int rpmflags)
{
unsigned long flags;
int retval;
+ might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe);
+
if (rpmflags & RPM_GET_PUT) {
if (!atomic_dec_and_test(&dev->power.usage_count))
return 0;
@@ -789,13 +825,16 @@
* If the RPM_GET_PUT flag is set, increment the device's usage count. Then
* carry out a resume, either synchronous or asynchronous.
*
- * This routine may be called in atomic context if the RPM_ASYNC flag is set.
+ * This routine may be called in atomic context if the RPM_ASYNC flag is set,
+ * or if pm_runtime_irq_safe() has been called.
*/
int __pm_runtime_resume(struct device *dev, int rpmflags)
{
unsigned long flags;
int retval;
+ might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe);
+
if (rpmflags & RPM_GET_PUT)
atomic_inc(&dev->power.usage_count);
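
With the irq_safe handling above, a driver that has called
pm_runtime_irq_safe() may use the synchronous helpers from atomic
context; a hedged sketch of the intended usage:

	/* at probe: runtime PM callbacks will run with interrupts off */
	pm_runtime_irq_safe(dev);
	pm_runtime_enable(dev);

	/* later, e.g. in an interrupt handler */
	pm_runtime_get_sync(dev);	/* no longer trips might_sleep() */
	/* ... access the hardware ... */
	pm_runtime_put(dev);
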
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 84f7c7d..14ee07e 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -276,7 +276,9 @@
*
* By default, most devices should leave wakeup disabled. The exceptions are
* devices that everyone expects to be wakeup sources: keyboards, power buttons,
- * possibly network interfaces, etc.
+ * possibly network interfaces, etc. Also, devices that don't generate their
+ * own wakeup requests but merely forward requests from one bus to another
+ * (like PCI bridges) should have wakeup enabled by default.
*/
int device_init_wakeup(struct device *dev, bool enable)
{
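
Per the updated comment, a bridge driver would opt in at probe time,
for example:

	device_init_wakeup(&pdev->dev, true);	/* pdev: the bridge device */
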
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 9cbac6b..6767b4a 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -1116,7 +1116,7 @@
return 0;
spin_lock_irq(&data->txlock);
- if (!((message.event & PM_EVENT_AUTO) && data->tx_in_flight)) {
+ if (!(PMSG_IS_AUTO(message) && data->tx_in_flight)) {
set_bit(BTUSB_SUSPENDING, &data->flags);
spin_unlock_irq(&data->txlock);
} else {
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index d4c5423..0df0141 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -12,7 +12,7 @@
#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/notifier.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/ktime.h>
diff --git a/drivers/cpuidle/governors/ladder.c b/drivers/cpuidle/governors/ladder.c
index 12c9890..f62fde2 100644
--- a/drivers/cpuidle/governors/ladder.c
+++ b/drivers/cpuidle/governors/ladder.c
@@ -14,7 +14,7 @@
#include <linux/kernel.h>
#include <linux/cpuidle.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/moduleparam.h>
#include <linux/jiffies.h>
diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index c47f3d0..3600f19 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -12,7 +12,7 @@
#include <linux/kernel.h>
#include <linux/cpuidle.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/time.h>
#include <linux/ktime.h>
#include <linux/hrtimer.h>
diff --git a/drivers/devfreq/Kconfig b/drivers/devfreq/Kconfig
new file mode 100644
index 0000000..643b055
--- /dev/null
+++ b/drivers/devfreq/Kconfig
@@ -0,0 +1,75 @@
+config ARCH_HAS_DEVFREQ
+ bool
+ depends on ARCH_HAS_OPP
+ help
+ Denotes that the architecture supports DEVFREQ. If the architecture
+ supports multiple OPP entries per device and the frequency of the
+ devices with OPPs may be altered dynamically, the architecture
+ supports DEVFREQ.
+
+menuconfig PM_DEVFREQ
+ bool "Generic Dynamic Voltage and Frequency Scaling (DVFS) support"
+ depends on PM_OPP && ARCH_HAS_DEVFREQ
+ help
+ With OPP support, a device may have a list of frequencies and
+	  voltages available. DEVFREQ, a generic DVFS framework, can be
+	  registered for a device with OPP support in order to let the
+	  governor provided to DEVFREQ choose an operating frequency
+	  based on the OPP list and the policy given with DEVFREQ.
+
+ Each device may have its own governor and policy. DEVFREQ can
+ reevaluate the device state periodically and/or based on the
+ OPP list changes (each frequency/voltage pair in OPP may be
+ disabled or enabled).
+
+	  Like some CPUs with CPUFREQ, a device may have multiple clocks.
+	  However, because the clock frequencies of a single device are
+	  determined by that device's state, an instance of DEVFREQ is
+	  attached to a single device and returns a "representative"
+	  clock frequency from the device's OPP table, which is likewise
+	  attached to the device 1-to-1. The device registering DEVFREQ
+	  takes the responsibility to "interpret" the frequency listed
+	  in the OPP table and to set every one of its clocks accordingly
+	  with the "target" callback given to DEVFREQ.
+
+if PM_DEVFREQ
+
+comment "DEVFREQ Governors"
+
+config DEVFREQ_GOV_SIMPLE_ONDEMAND
+ bool "Simple Ondemand"
+ help
+	  Chooses frequency based on the recent load on the device. Works
+	  similarly to the ONDEMAND governor of CPUFREQ. A device using
+	  Simple-Ondemand should be able to provide busy/total counter
+	  values that imply the usage rate. A device may provide tuned
+	  values to the governor via the data field of devfreq_add_device().
+
+config DEVFREQ_GOV_PERFORMANCE
+ bool "Performance"
+ help
+	  Sets the frequency to the maximum available frequency.
+ This governor always returns UINT_MAX as frequency so that
+ the DEVFREQ framework returns the highest frequency available
+ at any time.
+
+config DEVFREQ_GOV_POWERSAVE
+ bool "Powersave"
+ help
+	  Sets the frequency to the minimum available frequency.
+ This governor always returns 0 as frequency so that
+ the DEVFREQ framework returns the lowest frequency available
+ at any time.
+
+config DEVFREQ_GOV_USERSPACE
+ bool "Userspace"
+ help
+	  Sets the frequency to the value specified by the user.
+	  This governor returns the user-configured frequency if there
+	  has been an input to /sys/devices/.../power/devfreq_set_freq.
+	  Otherwise, the governor keeps the frequency given at
+	  initialization.
+
+comment "DEVFREQ Drivers"
+
+endif # PM_DEVFREQ
diff --git a/drivers/devfreq/Makefile b/drivers/devfreq/Makefile
new file mode 100644
index 0000000..4564a89
--- /dev/null
+++ b/drivers/devfreq/Makefile
@@ -0,0 +1,5 @@
+obj-$(CONFIG_PM_DEVFREQ) += devfreq.o
+obj-$(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) += governor_simpleondemand.o
+obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE) += governor_performance.o
+obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE) += governor_powersave.o
+obj-$(CONFIG_DEVFREQ_GOV_USERSPACE) += governor_userspace.o
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
new file mode 100644
index 0000000..5d15b81
--- /dev/null
+++ b/drivers/devfreq/devfreq.c
@@ -0,0 +1,601 @@
+/*
+ * devfreq: Generic Dynamic Voltage and Frequency Scaling (DVFS) Framework
+ * for Non-CPU Devices.
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/opp.h>
+#include <linux/devfreq.h>
+#include <linux/workqueue.h>
+#include <linux/platform_device.h>
+#include <linux/list.h>
+#include <linux/printk.h>
+#include <linux/hrtimer.h>
+#include "governor.h"
+
+struct class *devfreq_class;
+
+/*
+ * devfreq_work periodically monitors every registered device.
+ * The polling interval is determined by the minimum polling period
+ * among all polling devfreq devices; its resolution, and thus the
+ * minimum interval, is one jiffy.
+ */
+static bool polling;
+static struct workqueue_struct *devfreq_wq;
+static struct delayed_work devfreq_work;
+
+/* devfreq_remove_device() waits while this points at the devfreq being removed */
+static struct devfreq *wait_remove_device;
+
+/* The list of all device-devfreq */
+static LIST_HEAD(devfreq_list);
+static DEFINE_MUTEX(devfreq_list_lock);
+
+/**
+ * find_device_devfreq() - find devfreq struct using device pointer
+ * @dev: device pointer used to lookup device devfreq.
+ *
+ * Search the list of device devfreqs and return the matched device's
+ * devfreq info. devfreq_list_lock should be held by the caller.
+ */
+static struct devfreq *find_device_devfreq(struct device *dev)
+{
+ struct devfreq *tmp_devfreq;
+
+ if (unlikely(IS_ERR_OR_NULL(dev))) {
+ pr_err("DEVFREQ: %s: Invalid parameters\n", __func__);
+ return ERR_PTR(-EINVAL);
+ }
+ WARN(!mutex_is_locked(&devfreq_list_lock),
+ "devfreq_list_lock must be locked.");
+
+ list_for_each_entry(tmp_devfreq, &devfreq_list, node) {
+ if (tmp_devfreq->dev.parent == dev)
+ return tmp_devfreq;
+ }
+
+ return ERR_PTR(-ENODEV);
+}
+
+/**
+ * update_devfreq() - Reevaluate the device and configure frequency.
+ * @devfreq: the devfreq instance.
+ *
+ * Note: Lock devfreq->lock before calling update_devfreq
+ * This function is exported for governors.
+ */
+int update_devfreq(struct devfreq *devfreq)
+{
+ unsigned long freq;
+ int err = 0;
+
+ if (!mutex_is_locked(&devfreq->lock)) {
+ WARN(true, "devfreq->lock must be locked by the caller.\n");
+ return -EINVAL;
+ }
+
+ /* Reevaluate the proper frequency */
+ err = devfreq->governor->get_target_freq(devfreq, &freq);
+ if (err)
+ return err;
+
+ err = devfreq->profile->target(devfreq->dev.parent, &freq);
+ if (err)
+ return err;
+
+ devfreq->previous_freq = freq;
+ return err;
+}
+
+/**
+ * devfreq_notifier_call() - Notify that the device frequency requirements
+ *			   have been changed outside of the devfreq framework.
+ * @nb:	the notifier_block (supposed to be devfreq->nb)
+ * @type:	not used
+ * @devp:	not used
+ *
+ * Called by a notifier that uses devfreq->nb.
+ */
+static int devfreq_notifier_call(struct notifier_block *nb, unsigned long type,
+ void *devp)
+{
+ struct devfreq *devfreq = container_of(nb, struct devfreq, nb);
+ int ret;
+
+ mutex_lock(&devfreq->lock);
+ ret = update_devfreq(devfreq);
+ mutex_unlock(&devfreq->lock);
+
+ return ret;
+}
+
+/**
+ * _remove_devfreq() - Remove devfreq from the device.
+ * @devfreq: the devfreq struct
+ * @skip: skip calling device_unregister().
+ *
+ * Note that the caller should lock devfreq->lock before calling
+ * this. _remove_devfreq() will unlock it and free devfreq
+ * internally. devfreq_list_lock should be locked by the caller
+ * as well (not released at return).
+ *
+ * Lock usage:
+ * devfreq->lock: locked before call.
+ * unlocked at return (and freed)
+ * devfreq_list_lock: locked before call.
+ * kept locked at return.
+ * if devfreq is centrally polled.
+ *
+ * Freed memory:
+ * devfreq
+ */
+static void _remove_devfreq(struct devfreq *devfreq, bool skip)
+{
+ if (!mutex_is_locked(&devfreq->lock)) {
+ WARN(true, "devfreq->lock must be locked by the caller.\n");
+ return;
+ }
+ if (!devfreq->governor->no_central_polling &&
+ !mutex_is_locked(&devfreq_list_lock)) {
+ WARN(true, "devfreq_list_lock must be locked by the caller.\n");
+ return;
+ }
+
+ if (devfreq->being_removed)
+ return;
+
+ devfreq->being_removed = true;
+
+ if (devfreq->profile->exit)
+ devfreq->profile->exit(devfreq->dev.parent);
+
+ if (devfreq->governor->exit)
+ devfreq->governor->exit(devfreq);
+
+ if (!skip && get_device(&devfreq->dev)) {
+ device_unregister(&devfreq->dev);
+ put_device(&devfreq->dev);
+ }
+
+ if (!devfreq->governor->no_central_polling)
+ list_del(&devfreq->node);
+
+ mutex_unlock(&devfreq->lock);
+ mutex_destroy(&devfreq->lock);
+
+ kfree(devfreq);
+}
+
+/**
+ * devfreq_dev_release() - Callback for struct device to release the device.
+ * @dev: the devfreq device
+ *
+ * This calls _remove_devfreq() if it has not been called already.
+ * Note that devfreq_dev_release() could be called by _remove_devfreq() as
+ * well as by others unregistering the device.
+ */
+static void devfreq_dev_release(struct device *dev)
+{
+ struct devfreq *devfreq = to_devfreq(dev);
+ bool central_polling = !devfreq->governor->no_central_polling;
+
+ /*
+	 * If devfreq_dev_release() was called by device_unregister() of
+	 * _remove_devfreq(), we cannot mutex_lock(&devfreq->lock) and
+	 * being_removed is already set. This also partially checks the case
+	 * where devfreq_dev_release() is called from a thread other than
+	 * the one that called _remove_devfreq(); however, that case is
+	 * dealt with completely by the following being_removed check.
+	 *
+	 * Because being_removed is never unset, we do not need to worry
+	 * about race conditions on being_removed.
+ */
+ if (devfreq->being_removed)
+ return;
+
+ if (central_polling)
+ mutex_lock(&devfreq_list_lock);
+
+ mutex_lock(&devfreq->lock);
+
+ /*
+	 * Check the being_removed flag again, for the case where
+	 * devfreq_dev_release() was called in a thread other than the one
+	 * that possibly called _remove_devfreq().
+ */
+ if (devfreq->being_removed) {
+ mutex_unlock(&devfreq->lock);
+ goto out;
+ }
+
+	/* devfreq->lock is unlocked and devfreq is freed in _remove_devfreq() */
+ _remove_devfreq(devfreq, true);
+
+out:
+ if (central_polling)
+ mutex_unlock(&devfreq_list_lock);
+}
+
+/**
+ * devfreq_monitor() - Periodically poll devfreq objects.
+ * @work: the work struct used to run devfreq_monitor periodically.
+ *
+ */
+static void devfreq_monitor(struct work_struct *work)
+{
+ static unsigned long last_polled_at;
+ struct devfreq *devfreq, *tmp;
+ int error;
+ unsigned long jiffies_passed;
+ unsigned long next_jiffies = ULONG_MAX, now = jiffies;
+ struct device *dev;
+
+ /* Initially last_polled_at = 0, polling every device at bootup */
+ jiffies_passed = now - last_polled_at;
+ last_polled_at = now;
+ if (jiffies_passed == 0)
+ jiffies_passed = 1;
+
+ mutex_lock(&devfreq_list_lock);
+ list_for_each_entry_safe(devfreq, tmp, &devfreq_list, node) {
+ mutex_lock(&devfreq->lock);
+ dev = devfreq->dev.parent;
+
+ /* Do not remove tmp for a while */
+ wait_remove_device = tmp;
+
+ if (devfreq->governor->no_central_polling ||
+ devfreq->next_polling == 0) {
+ mutex_unlock(&devfreq->lock);
+ continue;
+ }
+ mutex_unlock(&devfreq_list_lock);
+
+ /*
+		 * Reduce next_polling further if devfreq_wq incurred an
+		 * extra delay (i.e., the CPU has been idle).
+ */
+ if (devfreq->next_polling <= jiffies_passed) {
+ error = update_devfreq(devfreq);
+
+ /* Remove a devfreq with an error. */
+ if (error && error != -EAGAIN) {
+
+ dev_err(dev, "Due to update_devfreq error(%d), devfreq(%s) is removed from the device\n",
+ error, devfreq->governor->name);
+
+ /*
+ * Unlock devfreq before locking the list
+ * in order to avoid deadlock with
+ * find_device_devfreq or others
+ */
+ mutex_unlock(&devfreq->lock);
+ mutex_lock(&devfreq_list_lock);
+ /* Check if devfreq is already removed */
+ if (IS_ERR(find_device_devfreq(dev)))
+ continue;
+ mutex_lock(&devfreq->lock);
+ /* This unlocks devfreq->lock and free it */
+ _remove_devfreq(devfreq, false);
+ continue;
+ }
+ devfreq->next_polling = devfreq->polling_jiffies;
+ } else {
+ devfreq->next_polling -= jiffies_passed;
+ }
+
+ if (devfreq->next_polling)
+ next_jiffies = (next_jiffies > devfreq->next_polling) ?
+ devfreq->next_polling : next_jiffies;
+
+ mutex_unlock(&devfreq->lock);
+ mutex_lock(&devfreq_list_lock);
+ }
+ wait_remove_device = NULL;
+ mutex_unlock(&devfreq_list_lock);
+
+ if (next_jiffies > 0 && next_jiffies < ULONG_MAX) {
+ polling = true;
+ queue_delayed_work(devfreq_wq, &devfreq_work, next_jiffies);
+ } else {
+ polling = false;
+ }
+}
+
+/**
+ * devfreq_add_device() - Add devfreq feature to the device
+ * @dev: the device to add devfreq feature.
+ * @profile: device-specific profile to run devfreq.
+ * @governor: the policy to choose frequency.
+ * @data: private data for the governor. The devfreq framework does not
+ * touch this value.
+ */
+struct devfreq *devfreq_add_device(struct device *dev,
+ struct devfreq_dev_profile *profile,
+ const struct devfreq_governor *governor,
+ void *data)
+{
+ struct devfreq *devfreq;
+ int err = 0;
+
+ if (!dev || !profile || !governor) {
+ dev_err(dev, "%s: Invalid parameters.\n", __func__);
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (!governor->no_central_polling) {
+ mutex_lock(&devfreq_list_lock);
+ devfreq = find_device_devfreq(dev);
+ mutex_unlock(&devfreq_list_lock);
+ if (!IS_ERR(devfreq)) {
+ dev_err(dev, "%s: Unable to create devfreq for the device. It already has one.\n", __func__);
+ err = -EINVAL;
+ goto out;
+ }
+ }
+
+ devfreq = kzalloc(sizeof(struct devfreq), GFP_KERNEL);
+ if (!devfreq) {
+ dev_err(dev, "%s: Unable to create devfreq for the device\n",
+ __func__);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ mutex_init(&devfreq->lock);
+ mutex_lock(&devfreq->lock);
+ devfreq->dev.parent = dev;
+ devfreq->dev.class = devfreq_class;
+ devfreq->dev.release = devfreq_dev_release;
+ devfreq->profile = profile;
+ devfreq->governor = governor;
+ devfreq->previous_freq = profile->initial_freq;
+ devfreq->data = data;
+ devfreq->next_polling = devfreq->polling_jiffies
+ = msecs_to_jiffies(devfreq->profile->polling_ms);
+ devfreq->nb.notifier_call = devfreq_notifier_call;
+
+	dev_set_name(&devfreq->dev, "%s", dev_name(dev));
+ err = device_register(&devfreq->dev);
+ if (err) {
+ put_device(&devfreq->dev);
+ goto err_dev;
+ }
+
+ if (governor->init)
+ err = governor->init(devfreq);
+ if (err)
+ goto err_init;
+
+ mutex_unlock(&devfreq->lock);
+
+ if (governor->no_central_polling)
+ goto out;
+
+ mutex_lock(&devfreq_list_lock);
+
+ list_add(&devfreq->node, &devfreq_list);
+
+ if (devfreq_wq && devfreq->next_polling && !polling) {
+ polling = true;
+ queue_delayed_work(devfreq_wq, &devfreq_work,
+ devfreq->next_polling);
+ }
+ mutex_unlock(&devfreq_list_lock);
+ goto out;
+err_init:
+ device_unregister(&devfreq->dev);
+err_dev:
+ mutex_unlock(&devfreq->lock);
+ kfree(devfreq);
+out:
+ if (err)
+ return ERR_PTR(err);
+ else
+ return devfreq;
+}
+
+/**
+ * devfreq_remove_device() - Remove devfreq feature from a device.
+ * @devfreq:	the devfreq instance to be removed
+ */
+int devfreq_remove_device(struct devfreq *devfreq)
+{
+ if (!devfreq)
+ return -EINVAL;
+
+ if (!devfreq->governor->no_central_polling) {
+ mutex_lock(&devfreq_list_lock);
+ while (wait_remove_device == devfreq) {
+ mutex_unlock(&devfreq_list_lock);
+ schedule();
+ mutex_lock(&devfreq_list_lock);
+ }
+ }
+
+ mutex_lock(&devfreq->lock);
+ _remove_devfreq(devfreq, false); /* it unlocks devfreq->lock */
+
+ if (!devfreq->governor->no_central_polling)
+ mutex_unlock(&devfreq_list_lock);
+
+ return 0;
+}
+
+static ssize_t show_governor(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%s\n", to_devfreq(dev)->governor->name);
+}
+
+static ssize_t show_freq(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%lu\n", to_devfreq(dev)->previous_freq);
+}
+
+static ssize_t show_polling_interval(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n", to_devfreq(dev)->profile->polling_ms);
+}
+
+static ssize_t store_polling_interval(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct devfreq *df = to_devfreq(dev);
+ unsigned int value;
+ int ret;
+
+	ret = sscanf(buf, "%u", &value);
+	if (ret != 1) {
+		/* returning 0 from a sysfs store would loop forever */
+		ret = -EINVAL;
+		goto out;
+	}
+
+ mutex_lock(&df->lock);
+ df->profile->polling_ms = value;
+ df->next_polling = df->polling_jiffies
+ = msecs_to_jiffies(value);
+ mutex_unlock(&df->lock);
+
+ ret = count;
+
+ if (df->governor->no_central_polling)
+ goto out;
+
+ mutex_lock(&devfreq_list_lock);
+ if (df->next_polling > 0 && !polling) {
+ polling = true;
+ queue_delayed_work(devfreq_wq, &devfreq_work,
+ df->next_polling);
+ }
+ mutex_unlock(&devfreq_list_lock);
+out:
+ return ret;
+}
+
+static ssize_t show_central_polling(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%d\n",
+ !to_devfreq(dev)->governor->no_central_polling);
+}
+
+static struct device_attribute devfreq_attrs[] = {
+ __ATTR(governor, S_IRUGO, show_governor, NULL),
+ __ATTR(cur_freq, S_IRUGO, show_freq, NULL),
+ __ATTR(central_polling, S_IRUGO, show_central_polling, NULL),
+ __ATTR(polling_interval, S_IRUGO | S_IWUSR, show_polling_interval,
+ store_polling_interval),
+ { },
+};
+
+/**
+ * devfreq_start_polling() - Initialize data structures for the devfreq
+ *			framework and start polling registered devfreq devices.
+ */
+static int __init devfreq_start_polling(void)
+{
+ mutex_lock(&devfreq_list_lock);
+ polling = false;
+ devfreq_wq = create_freezable_workqueue("devfreq_wq");
+ INIT_DELAYED_WORK_DEFERRABLE(&devfreq_work, devfreq_monitor);
+ mutex_unlock(&devfreq_list_lock);
+
+ devfreq_monitor(&devfreq_work.work);
+ return 0;
+}
+late_initcall(devfreq_start_polling);
+
+static int __init devfreq_init(void)
+{
+ devfreq_class = class_create(THIS_MODULE, "devfreq");
+ if (IS_ERR(devfreq_class)) {
+ pr_err("%s: couldn't create class\n", __FILE__);
+ return PTR_ERR(devfreq_class);
+ }
+ devfreq_class->dev_attrs = devfreq_attrs;
+ return 0;
+}
+subsys_initcall(devfreq_init);
+
+static void __exit devfreq_exit(void)
+{
+ class_destroy(devfreq_class);
+}
+module_exit(devfreq_exit);
+
+/*
+ * The following are helper functions for devfreq user device drivers
+ * using the OPP framework.
+ */
+
+/**
+ * devfreq_recommended_opp() - Helper function to get proper OPP for the
+ * freq value given to target callback.
+ * @dev:	The devfreq user device. (parent of devfreq)
+ * @freq:	The frequency given to target function
+ *
+ */
+struct opp *devfreq_recommended_opp(struct device *dev, unsigned long *freq)
+{
+ struct opp *opp = opp_find_freq_ceil(dev, freq);
+
+ if (opp == ERR_PTR(-ENODEV))
+ opp = opp_find_freq_floor(dev, freq);
+ return opp;
+}
+
+/**
+ * devfreq_register_opp_notifier() - Helper function to get devfreq notified
+ *				   of any changes in the OPP availability
+ * @dev:	The devfreq user device. (parent of devfreq)
+ * @devfreq:	The devfreq object.
+ */
+int devfreq_register_opp_notifier(struct device *dev, struct devfreq *devfreq)
+{
+ struct srcu_notifier_head *nh = opp_get_notifier(dev);
+
+ if (IS_ERR(nh))
+ return PTR_ERR(nh);
+ return srcu_notifier_chain_register(nh, &devfreq->nb);
+}
+
+/**
+ * devfreq_unregister_opp_notifier() - Helper function to stop getting devfreq
+ *				     notified of changes in the OPP
+ *				     availability
+ * @dev:	The devfreq user device. (parent of devfreq)
+ * @devfreq:	The devfreq object.
+ *
+ * This must be called from the exit() callback of devfreq_dev_profile if
+ * devfreq_recommended_opp is used.
+ */
+int devfreq_unregister_opp_notifier(struct device *dev, struct devfreq *devfreq)
+{
+ struct srcu_notifier_head *nh = opp_get_notifier(dev);
+
+ if (IS_ERR(nh))
+ return PTR_ERR(nh);
+ return srcu_notifier_chain_unregister(nh, &devfreq->nb);
+}
+
+MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
+MODULE_DESCRIPTION("devfreq class support");
+MODULE_LICENSE("GPL");
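
Putting the pieces together, a device driver would register with devfreq
roughly as follows (a sketch: callback bodies, frequencies and the use of
the simple_ondemand governor are illustrative; get_dev_status is required
by simple_ondemand):

	static int my_target(struct device *dev, unsigned long *freq)
	{
		struct opp *opp = devfreq_recommended_opp(dev, freq);

		if (IS_ERR(opp))
			return PTR_ERR(opp);
		*freq = opp_get_freq(opp);
		/* program the device's clocks/voltage for *freq here */
		return 0;
	}

	static int my_get_dev_status(struct device *dev,
				     struct devfreq_dev_status *stat)
	{
		/* fill stat->busy_time, stat->total_time and
		 * stat->current_frequency from hardware counters */
		return 0;
	}

	static struct devfreq_dev_profile my_profile = {
		.initial_freq	= 200000000,
		.polling_ms	= 100,
		.target		= my_target,
		.get_dev_status	= my_get_dev_status,
	};

	devfreq = devfreq_add_device(dev, &my_profile,
				     &devfreq_simple_ondemand, NULL);
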
diff --git a/drivers/devfreq/governor.h b/drivers/devfreq/governor.h
new file mode 100644
index 0000000..ea7f13c
--- /dev/null
+++ b/drivers/devfreq/governor.h
@@ -0,0 +1,24 @@
+/*
+ * governor.h - internal header for devfreq governors.
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This header is for devfreq governors in drivers/devfreq/
+ */
+
+#ifndef _GOVERNOR_H
+#define _GOVERNOR_H
+
+#include <linux/devfreq.h>
+
+#define to_devfreq(DEV) container_of((DEV), struct devfreq, dev)
+
+/* Caution: devfreq->lock must be locked before calling update_devfreq */
+extern int update_devfreq(struct devfreq *devfreq);
+
+#endif /* _GOVERNOR_H */
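
Given this header, a minimal governor only has to fill in struct
devfreq_governor; a "fixed frequency" sketch (the frequency is
illustrative, and the target() callback rounds it via the OPP table):

	#include <linux/devfreq.h>

	static int fixed_func(struct devfreq *df, unsigned long *freq)
	{
		*freq = 100000000;
		return 0;
	}

	const struct devfreq_governor devfreq_fixed = {
		.name			= "fixed",
		.get_target_freq	= fixed_func,
		.no_central_polling	= true,
	};
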
diff --git a/drivers/devfreq/governor_performance.c b/drivers/devfreq/governor_performance.c
new file mode 100644
index 0000000..c0596b2
--- /dev/null
+++ b/drivers/devfreq/governor_performance.c
@@ -0,0 +1,29 @@
+/*
+ * linux/drivers/devfreq/governor_performance.c
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/devfreq.h>
+
+static int devfreq_performance_func(struct devfreq *df,
+ unsigned long *freq)
+{
+ /*
+	 * The target callback should be able to get the floor value as
+	 * noted in devfreq.h.
+ */
+ *freq = UINT_MAX;
+ return 0;
+}
+
+const struct devfreq_governor devfreq_performance = {
+ .name = "performance",
+ .get_target_freq = devfreq_performance_func,
+ .no_central_polling = true,
+};
diff --git a/drivers/devfreq/governor_powersave.c b/drivers/devfreq/governor_powersave.c
new file mode 100644
index 0000000..2483a85
--- /dev/null
+++ b/drivers/devfreq/governor_powersave.c
@@ -0,0 +1,29 @@
+/*
+ * linux/drivers/devfreq/governor_powersave.c
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/devfreq.h>
+
+static int devfreq_powersave_func(struct devfreq *df,
+ unsigned long *freq)
+{
+ /*
+	 * The target callback should be able to get the ceiling value as
+	 * noted in devfreq.h.
+ */
+ *freq = 0;
+ return 0;
+}
+
+const struct devfreq_governor devfreq_powersave = {
+ .name = "powersave",
+ .get_target_freq = devfreq_powersave_func,
+ .no_central_polling = true,
+};
diff --git a/drivers/devfreq/governor_simpleondemand.c b/drivers/devfreq/governor_simpleondemand.c
new file mode 100644
index 0000000..efad8dc
--- /dev/null
+++ b/drivers/devfreq/governor_simpleondemand.c
@@ -0,0 +1,88 @@
+/*
+ * linux/drivers/devfreq/governor_simpleondemand.c
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/errno.h>
+#include <linux/devfreq.h>
+#include <linux/math64.h>
+
+/* Default constants for DevFreq-Simple-Ondemand (DFSO) */
+#define DFSO_UPTHRESHOLD (90)
+#define DFSO_DOWNDIFFERENTIAL	(5)
+static int devfreq_simple_ondemand_func(struct devfreq *df,
+ unsigned long *freq)
+{
+ struct devfreq_dev_status stat;
+ int err = df->profile->get_dev_status(df->dev.parent, &stat);
+ unsigned long long a, b;
+ unsigned int dfso_upthreshold = DFSO_UPTHRESHOLD;
+	unsigned int dfso_downdifferential = DFSO_DOWNDIFFERENTIAL;
+ struct devfreq_simple_ondemand_data *data = df->data;
+
+ if (err)
+ return err;
+
+ if (data) {
+ if (data->upthreshold)
+ dfso_upthreshold = data->upthreshold;
+ if (data->downdifferential)
+ dfso_downdifferential = data->downdifferential;
+ }
+ if (dfso_upthreshold > 100 ||
+ dfso_upthreshold < dfso_downdifferential)
+ return -EINVAL;
+
+ /* Assume MAX if it is going to be divided by zero */
+ if (stat.total_time == 0) {
+ *freq = UINT_MAX;
+ return 0;
+ }
+
+ /* Prevent overflow */
+ if (stat.busy_time >= (1 << 24) || stat.total_time >= (1 << 24)) {
+ stat.busy_time >>= 7;
+ stat.total_time >>= 7;
+ }
+
+ /* Set MAX if it's busy enough */
+ if (stat.busy_time * 100 >
+ stat.total_time * dfso_upthreshold) {
+ *freq = UINT_MAX;
+ return 0;
+ }
+
+ /* Set MAX if we do not know the initial frequency */
+ if (stat.current_frequency == 0) {
+ *freq = UINT_MAX;
+ return 0;
+ }
+
+ /* Keep the current frequency */
+ if (stat.busy_time * 100 >
+ stat.total_time * (dfso_upthreshold - dfso_downdifferential)) {
+ *freq = stat.current_frequency;
+ return 0;
+ }
+
+ /* Set the desired frequency based on the load */
+ a = stat.busy_time;
+ a *= stat.current_frequency;
+ b = div_u64(a, stat.total_time);
+ b *= 100;
+ b = div_u64(b, (dfso_upthreshold - dfso_downdifferential / 2));
+ *freq = (unsigned long) b;
+
+ return 0;
+}
+
+const struct devfreq_governor devfreq_simple_ondemand = {
+ .name = "simple_ondemand",
+ .get_target_freq = devfreq_simple_ondemand_func,
+};
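
A worked example of the proportional branch: with busy_time = 60,
total_time = 100, current_frequency = 200 MHz and the default 90/5
thresholds, the load-proportional level is 60/100 * 200 MHz = 120 MHz,
and scaling by 100 / (90 - 5/2) = 100/88 (integer division) yields a
request of about 136 MHz, which the driver's target() callback then maps
onto a real OPP. Loads above 90% jump straight to UINT_MAX, and loads
between 85% and 90% keep the current frequency.
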
diff --git a/drivers/devfreq/governor_userspace.c b/drivers/devfreq/governor_userspace.c
new file mode 100644
index 0000000..4f8b563
--- /dev/null
+++ b/drivers/devfreq/governor_userspace.c
@@ -0,0 +1,116 @@
+/*
+ * linux/drivers/devfreq/governor_userspace.c
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/slab.h>
+#include <linux/device.h>
+#include <linux/devfreq.h>
+#include <linux/pm.h>
+#include <linux/mutex.h>
+#include "governor.h"
+
+struct userspace_data {
+ unsigned long user_frequency;
+ bool valid;
+};
+
+static int devfreq_userspace_func(struct devfreq *df, unsigned long *freq)
+{
+ struct userspace_data *data = df->data;
+
+ if (!data->valid)
+ *freq = df->previous_freq; /* No user freq specified yet */
+ else
+ *freq = data->user_frequency;
+ return 0;
+}
+
+static ssize_t store_freq(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct devfreq *devfreq = to_devfreq(dev);
+ struct userspace_data *data;
+ unsigned long wanted;
+ int err = 0;
+
+	if (sscanf(buf, "%lu", &wanted) != 1)
+		return -EINVAL;
+
+	mutex_lock(&devfreq->lock);
+	data = devfreq->data;
+	data->user_frequency = wanted;
+	data->valid = true;
+ err = update_devfreq(devfreq);
+ if (err == 0)
+ err = count;
+ mutex_unlock(&devfreq->lock);
+ return err;
+}
+
+static ssize_t show_freq(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct devfreq *devfreq = to_devfreq(dev);
+ struct userspace_data *data;
+ int err = 0;
+
+ mutex_lock(&devfreq->lock);
+ data = devfreq->data;
+
+ if (data->valid)
+ err = sprintf(buf, "%lu\n", data->user_frequency);
+ else
+ err = sprintf(buf, "undefined\n");
+ mutex_unlock(&devfreq->lock);
+ return err;
+}
+
+static DEVICE_ATTR(set_freq, 0644, show_freq, store_freq);
+static struct attribute *dev_entries[] = {
+ &dev_attr_set_freq.attr,
+ NULL,
+};
+static struct attribute_group dev_attr_group = {
+ .name = "userspace",
+ .attrs = dev_entries,
+};
+
+static int userspace_init(struct devfreq *devfreq)
+{
+ int err = 0;
+ struct userspace_data *data = kzalloc(sizeof(struct userspace_data),
+ GFP_KERNEL);
+
+ if (!data) {
+ err = -ENOMEM;
+ goto out;
+ }
+ data->valid = false;
+ devfreq->data = data;
+
+ err = sysfs_create_group(&devfreq->dev.kobj, &dev_attr_group);
+out:
+ return err;
+}
+
+static void userspace_exit(struct devfreq *devfreq)
+{
+ sysfs_remove_group(&devfreq->dev.kobj, &dev_attr_group);
+ kfree(devfreq->data);
+ devfreq->data = NULL;
+}
+
+const struct devfreq_governor devfreq_userspace = {
+ .name = "userspace",
+ .get_target_freq = devfreq_userspace_func,
+ .init = userspace_init,
+ .exit = userspace_exit,
+ .no_central_polling = true,
+};
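
Once this governor is in effect, the frequency is driven from user space
through the "userspace/set_freq" attribute documented in
Documentation/ABI/testing/sysfs-class-devfreq: writing a frequency value
to /sys/class/devfreq/<dev>/userspace/set_freq updates the device, and
reading the file returns the last requested value (or "undefined" before
the first write).
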
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index ce045a8..f07e425 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -67,11 +67,11 @@
MODULE_PARM_DESC(i915_enable_rc6,
"Enable power-saving render C-state 6 (default: true)");
-unsigned int i915_enable_fbc __read_mostly = 1;
+unsigned int i915_enable_fbc __read_mostly = -1;
module_param_named(i915_enable_fbc, i915_enable_fbc, int, 0600);
MODULE_PARM_DESC(i915_enable_fbc,
"Enable frame buffer compression for power savings "
- "(default: false)");
+ "(default: -1 (use per-chip default))");
unsigned int i915_lvds_downclock __read_mostly = 0;
module_param_named(lvds_downclock, i915_lvds_downclock, int, 0400);
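
With the new default, users who want to force frame buffer compression on
or off can still do so explicitly from the kernel command line, e.g.
i915.i915_enable_fbc=0 or =1; the new -1 value selects the per-chip
default implemented in intel_display.c below.
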
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 56a8554..04411ad 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -1799,6 +1799,7 @@
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
struct drm_i915_gem_object *obj;
+ int enable_fbc;
DRM_DEBUG_KMS("\n");
@@ -1839,8 +1840,15 @@
intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
- if (!i915_enable_fbc) {
- DRM_DEBUG_KMS("fbc disabled per module param (default off)\n");
+ enable_fbc = i915_enable_fbc;
+ if (enable_fbc < 0) {
+ DRM_DEBUG_KMS("fbc set to per-chip default\n");
+ enable_fbc = 1;
+ if (INTEL_INFO(dev)->gen <= 5)
+ enable_fbc = 0;
+ }
+ if (!enable_fbc) {
+ DRM_DEBUG_KMS("fbc disabled per module param\n");
dev_priv->no_fbc_reason = FBC_MODULE_PARAM;
goto out_disable;
}
@@ -4687,13 +4695,13 @@
bpc = 6; /* min is 18bpp */
break;
case 24:
- bpc = min((unsigned int)8, display_bpc);
+ bpc = 8;
break;
case 30:
- bpc = min((unsigned int)10, display_bpc);
+ bpc = 10;
break;
case 48:
- bpc = min((unsigned int)12, display_bpc);
+ bpc = 12;
break;
default:
DRM_DEBUG("unsupported depth, assuming 24 bits\n");
@@ -4701,10 +4709,12 @@
break;
}
+ display_bpc = min(display_bpc, bpc);
+
DRM_DEBUG_DRIVER("setting pipe bpc to %d (max display bpc %d)\n",
bpc, display_bpc);
- *pipe_bpp = bpc * 3;
+ *pipe_bpp = display_bpc * 3;
return display_bpc != bpc;
}
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 0b2ee9d..fe1099d 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -337,9 +337,6 @@
struct drm_connector *connector,
struct intel_load_detect_pipe *old);
-extern struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB);
-extern int intel_sdvo_supports_hotplug(struct drm_connector *connector);
-extern void intel_sdvo_set_hotplug(struct drm_connector *connector, int enable);
extern void intelfb_restore(void);
extern void intel_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
u16 blue, int regno);
diff --git a/drivers/gpu/drm/i915/intel_sdvo.c b/drivers/gpu/drm/i915/intel_sdvo.c
index 30fe554..6348c49 100644
--- a/drivers/gpu/drm/i915/intel_sdvo.c
+++ b/drivers/gpu/drm/i915/intel_sdvo.c
@@ -92,6 +92,11 @@
*/
uint16_t attached_output;
+ /*
+ * Hotplug activation bits for this device
+ */
+ uint8_t hotplug_active[2];
+
/**
* This is used to select the color range of RBG outputs in HDMI mode.
* It is only valid when using TMDS encoding and 8 bit per color mode.
@@ -1208,74 +1213,20 @@
return true;
}
-/* No use! */
-#if 0
-struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB)
-{
- struct drm_connector *connector = NULL;
- struct intel_sdvo *iout = NULL;
- struct intel_sdvo *sdvo;
-
- /* find the sdvo connector */
- list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
- iout = to_intel_sdvo(connector);
-
- if (iout->type != INTEL_OUTPUT_SDVO)
- continue;
-
- sdvo = iout->dev_priv;
-
- if (sdvo->sdvo_reg == SDVOB && sdvoB)
- return connector;
-
- if (sdvo->sdvo_reg == SDVOC && !sdvoB)
- return connector;
-
- }
-
- return NULL;
-}
-
-int intel_sdvo_supports_hotplug(struct drm_connector *connector)
+static int intel_sdvo_supports_hotplug(struct intel_sdvo *intel_sdvo)
{
u8 response[2];
- u8 status;
- struct intel_sdvo *intel_sdvo;
- DRM_DEBUG_KMS("\n");
-
- if (!connector)
- return 0;
-
- intel_sdvo = to_intel_sdvo(connector);
return intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HOT_PLUG_SUPPORT,
&response, 2) && response[0];
}
-void intel_sdvo_set_hotplug(struct drm_connector *connector, int on)
+static void intel_sdvo_enable_hotplug(struct intel_encoder *encoder)
{
- u8 response[2];
- u8 status;
- struct intel_sdvo *intel_sdvo = to_intel_sdvo(connector);
+ struct intel_sdvo *intel_sdvo = to_intel_sdvo(&encoder->base);
- intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0);
- intel_sdvo_read_response(intel_sdvo, &response, 2);
-
- if (on) {
- intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0);
- status = intel_sdvo_read_response(intel_sdvo, &response, 2);
-
- intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2);
- } else {
- response[0] = 0;
- response[1] = 0;
- intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2);
- }
-
- intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0);
- intel_sdvo_read_response(intel_sdvo, &response, 2);
+ intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &intel_sdvo->hotplug_active, 2);
}
-#endif
static bool
intel_sdvo_multifunc_encoder(struct intel_sdvo *intel_sdvo)
@@ -2045,6 +1996,7 @@
{
struct drm_encoder *encoder = &intel_sdvo->base.base;
struct drm_connector *connector;
+ struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
struct intel_connector *intel_connector;
struct intel_sdvo_connector *intel_sdvo_connector;
@@ -2062,7 +2014,17 @@
intel_connector = &intel_sdvo_connector->base;
connector = &intel_connector->base;
- connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
+ if (intel_sdvo_supports_hotplug(intel_sdvo) & (1 << device)) {
+ connector->polled = DRM_CONNECTOR_POLL_HPD;
+ intel_sdvo->hotplug_active[0] |= 1 << device;
+ /* Some SDVO devices have one-shot hotplug interrupts.
+ * Ensure that they get re-enabled when an interrupt happens.
+ */
+ intel_encoder->hot_plug = intel_sdvo_enable_hotplug;
+ intel_sdvo_enable_hotplug(intel_encoder);
+ }
+ else
+ connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
connector->connector_type = DRM_MODE_CONNECTOR_DVID;
@@ -2569,6 +2531,14 @@
if (!intel_sdvo_get_capabilities(intel_sdvo, &intel_sdvo->caps))
goto err;
+ /* Set up hotplug command - note paranoia about contents of reply.
+ * We assume that the hardware is in a sane state, and only touch
+ * the bits we think we understand.
+ */
+ intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG,
+ &intel_sdvo->hotplug_active, 2);
+ intel_sdvo->hotplug_active[0] &= ~0x3;
+
if (intel_sdvo_output_setup(intel_sdvo,
intel_sdvo->caps.output_flags) != true) {
DRM_DEBUG_KMS("SDVO output failed to setup on SDVO%c\n",
diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
index 7ad43c6..4da2388 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -115,6 +115,7 @@
u8 msg[20];
int msg_bytes = send_bytes + 4;
u8 ack;
+ unsigned retry;
if (send_bytes > 16)
return -1;
@@ -125,20 +126,20 @@
msg[3] = (msg_bytes << 4) | (send_bytes - 1);
memcpy(&msg[4], send, send_bytes);
- while (1) {
+ for (retry = 0; retry < 4; retry++) {
ret = radeon_process_aux_ch(dig_connector->dp_i2c_bus,
msg, msg_bytes, NULL, 0, delay, &ack);
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
- break;
+ return send_bytes;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
udelay(400);
else
return -EIO;
}
- return send_bytes;
+ return -EIO;
}
static int radeon_dp_aux_native_read(struct radeon_connector *radeon_connector,
@@ -149,26 +150,29 @@
int msg_bytes = 4;
u8 ack;
int ret;
+ unsigned retry;
msg[0] = address;
msg[1] = address >> 8;
msg[2] = AUX_NATIVE_READ << 4;
msg[3] = (msg_bytes << 4) | (recv_bytes - 1);
- while (1) {
+ for (retry = 0; retry < 4; retry++) {
ret = radeon_process_aux_ch(dig_connector->dp_i2c_bus,
msg, msg_bytes, recv, recv_bytes, delay, &ack);
- if (ret == 0)
- return -EPROTO;
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
return ret;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
udelay(400);
+ else if (ret == 0)
+ return -EPROTO;
else
return -EIO;
}
+
+ return -EIO;
}
static void radeon_write_dpcd_reg(struct radeon_connector *radeon_connector,
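Both AUX hunks replace an unbounded while (1) with a four-attempt loop, so a sink that defers forever can no longer hang the kernel; the read path also moves the ret == 0 check after the DEFER check, so a deferred zero-length reply retries instead of returning -EPROTO. The shape of the pattern, as a hedged standalone sketch (the transfer function and reply codes are stand-ins for radeon_process_aux_ch() and the AUX_NATIVE_REPLY_* values):

#include <errno.h>

#define REPLY_ACK   0
#define REPLY_DEFER 1

/* Stand-in for the AUX channel transaction; fills *ack with the reply. */
extern int aux_transfer(const void *msg, int len, int *ack);

static int aux_send(const void *msg, int len)
{
	unsigned retry;
	int ack, ret;

	for (retry = 0; retry < 4; retry++) {
		ret = aux_transfer(msg, len, &ack);
		if (ret < 0)
			return ret;
		if (ack == REPLY_ACK)
			return len;	/* success: report bytes sent */
		if (ack != REPLY_DEFER)
			return -EIO;	/* NACK or garbage: give up now */
		/* DEFER: sink is busy, loop and retry */
	}
	return -EIO;			/* deferred four times: fail */
}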
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index e8a7467..c4ffa14f 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1590,48 +1590,6 @@
return backend_map;
}
-static void evergreen_program_channel_remap(struct radeon_device *rdev)
-{
- u32 tcp_chan_steer_lo, tcp_chan_steer_hi, mc_shared_chremap, tmp;
-
- tmp = RREG32(MC_SHARED_CHMAP);
- switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
- case 0:
- case 1:
- case 2:
- case 3:
- default:
- /* default mapping */
- mc_shared_chremap = 0x00fac688;
- break;
- }
-
- switch (rdev->family) {
- case CHIP_HEMLOCK:
- case CHIP_CYPRESS:
- case CHIP_BARTS:
- tcp_chan_steer_lo = 0x54763210;
- tcp_chan_steer_hi = 0x0000ba98;
- break;
- case CHIP_JUNIPER:
- case CHIP_REDWOOD:
- case CHIP_CEDAR:
- case CHIP_PALM:
- case CHIP_SUMO:
- case CHIP_SUMO2:
- case CHIP_TURKS:
- case CHIP_CAICOS:
- default:
- tcp_chan_steer_lo = 0x76543210;
- tcp_chan_steer_hi = 0x0000ba98;
- break;
- }
-
- WREG32(TCP_CHAN_STEER_LO, tcp_chan_steer_lo);
- WREG32(TCP_CHAN_STEER_HI, tcp_chan_steer_hi);
- WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
-}
-
static void evergreen_gpu_init(struct radeon_device *rdev)
{
u32 cc_rb_backend_disable = 0;
@@ -2078,8 +2036,6 @@
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
- evergreen_program_channel_remap(rdev);
-
num_shader_engines = ((RREG32(GB_ADDR_CONFIG) & NUM_SHADER_ENGINES(3)) >> 12) + 1;
grbm_gfx_index = INSTANCE_BROADCAST_WRITES;
diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index 99fbd79..8c79ca9 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -569,36 +569,6 @@
return backend_map;
}
-static void cayman_program_channel_remap(struct radeon_device *rdev)
-{
- u32 tcp_chan_steer_lo, tcp_chan_steer_hi, mc_shared_chremap, tmp;
-
- tmp = RREG32(MC_SHARED_CHMAP);
- switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
- case 0:
- case 1:
- case 2:
- case 3:
- default:
- /* default mapping */
- mc_shared_chremap = 0x00fac688;
- break;
- }
-
- switch (rdev->family) {
- case CHIP_CAYMAN:
- default:
- //tcp_chan_steer_lo = 0x54763210
- tcp_chan_steer_lo = 0x76543210;
- tcp_chan_steer_hi = 0x0000ba98;
- break;
- }
-
- WREG32(TCP_CHAN_STEER_LO, tcp_chan_steer_lo);
- WREG32(TCP_CHAN_STEER_HI, tcp_chan_steer_hi);
- WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
-}
-
static u32 cayman_get_disable_mask_per_asic(struct radeon_device *rdev,
u32 disable_mask_per_se,
u32 max_disable_mask_per_se,
@@ -842,8 +812,6 @@
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
- cayman_program_channel_remap(rdev);
-
/* primary versions */
WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index c4b8741..bce63fd 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -68,11 +68,11 @@
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
int saved_dpms = connector->dpms;
- if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
- radeon_dp_needs_link_train(radeon_connector))
- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
- else
+ /* Only turn off the display if it's physically disconnected */
+ if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd))
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+ else if (radeon_dp_needs_link_train(radeon_connector))
+ drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
connector->dpms = saved_dpms;
}
}
diff --git a/drivers/gpu/drm/radeon/radeon_cursor.c b/drivers/gpu/drm/radeon/radeon_cursor.c
index 3189a7e..fde25c0 100644
--- a/drivers/gpu/drm/radeon/radeon_cursor.c
+++ b/drivers/gpu/drm/radeon/radeon_cursor.c
@@ -208,24 +208,26 @@
int xorigin = 0, yorigin = 0;
int w = radeon_crtc->cursor_width;
- if (x < 0)
- xorigin = -x + 1;
- if (y < 0)
- yorigin = -y + 1;
- if (xorigin >= CURSOR_WIDTH)
- xorigin = CURSOR_WIDTH - 1;
- if (yorigin >= CURSOR_HEIGHT)
- yorigin = CURSOR_HEIGHT - 1;
+ if (ASIC_IS_AVIVO(rdev)) {
+ /* avivo cursors are offset into the total surface */
+ x += crtc->x;
+ y += crtc->y;
+ }
+ DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
+
+ if (x < 0) {
+ xorigin = min(-x, CURSOR_WIDTH - 1);
+ x = 0;
+ }
+ if (y < 0) {
+ yorigin = min(-y, CURSOR_HEIGHT - 1);
+ y = 0;
+ }
if (ASIC_IS_AVIVO(rdev)) {
int i = 0;
struct drm_crtc *crtc_p;
- /* avivo cursor are offset into the total surface */
- x += crtc->x;
- y += crtc->y;
- DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
-
/* avivo cursor image can't end on 128 pixel boundary or
* go past the end of the frame if both crtcs are enabled
*/
@@ -253,16 +255,12 @@
radeon_lock_cursor(crtc, true);
if (ASIC_IS_DCE4(rdev)) {
- WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset,
- ((xorigin ? 0 : x) << 16) |
- (yorigin ? 0 : y));
+ WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
WREG32(EVERGREEN_CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(EVERGREEN_CUR_SIZE + radeon_crtc->crtc_offset,
((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
} else if (ASIC_IS_AVIVO(rdev)) {
- WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset,
- ((xorigin ? 0 : x) << 16) |
- (yorigin ? 0 : y));
+ WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
WREG32(AVIVO_D1CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(AVIVO_D1CUR_SIZE + radeon_crtc->crtc_offset,
((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
@@ -276,8 +274,8 @@
| yorigin));
WREG32(RADEON_CUR_HORZ_VERT_POSN + radeon_crtc->crtc_offset,
(RADEON_CUR_LOCK
- | ((xorigin ? 0 : x) << 16)
- | (yorigin ? 0 : y)));
+ | (x << 16)
+ | y));
/* offset is from DISP(2)_BASE_ADDRESS */
WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset +
(yorigin * 256)));
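The cursor fix stops zeroing the position whenever a hotspot is in use and instead clamps: a negative coordinate becomes a hotspot offset of min(-x, MAX-1) with the position pinned to 0, so the register always receives valid values. A tiny runnable demonstration of the clamp (CURSOR_WIDTH assumed for the demo):

#include <stdio.h>

#define CURSOR_WIDTH 64
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	int x = -10, xorigin = 0;

	if (x < 0) {
		/* the hotspot absorbs the off-screen part;
		 * the programmed position stays non-negative */
		xorigin = MIN(-x, CURSOR_WIDTH - 1);
		x = 0;
	}
	printf("position=%d hotspot=%d\n", x, xorigin); /* position=0 hotspot=10 */
	return 0;
}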
diff --git a/drivers/gpu/drm/radeon/rv770.c b/drivers/gpu/drm/radeon/rv770.c
index 4720d00..b13c2ee 100644
--- a/drivers/gpu/drm/radeon/rv770.c
+++ b/drivers/gpu/drm/radeon/rv770.c
@@ -536,55 +536,6 @@
return backend_map;
}
-static void rv770_program_channel_remap(struct radeon_device *rdev)
-{
- u32 tcp_chan_steer, mc_shared_chremap, tmp;
- bool force_no_swizzle;
-
- switch (rdev->family) {
- case CHIP_RV770:
- case CHIP_RV730:
- force_no_swizzle = false;
- break;
- case CHIP_RV710:
- case CHIP_RV740:
- default:
- force_no_swizzle = true;
- break;
- }
-
- tmp = RREG32(MC_SHARED_CHMAP);
- switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
- case 0:
- case 1:
- default:
- /* default mapping */
- mc_shared_chremap = 0x00fac688;
- break;
- case 2:
- case 3:
- if (force_no_swizzle)
- mc_shared_chremap = 0x00fac688;
- else
- mc_shared_chremap = 0x00bbc298;
- break;
- }
-
- if (rdev->family == CHIP_RV740)
- tcp_chan_steer = 0x00ef2a60;
- else
- tcp_chan_steer = 0x00fac688;
-
- /* RV770 CE has special chremap setup */
- if (rdev->pdev->device == 0x944e) {
- tcp_chan_steer = 0x00b08b08;
- mc_shared_chremap = 0x00b08b08;
- }
-
- WREG32(TCP_CHAN_STEER, tcp_chan_steer);
- WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
-}
-
static void rv770_gpu_init(struct radeon_device *rdev)
{
int i, j, num_qd_pipes;
@@ -785,8 +736,6 @@
WREG32(DCP_TILING_CONFIG, (gb_tiling_config & 0xffff));
WREG32(HDP_TILING_CONFIG, (gb_tiling_config & 0xffff));
- rv770_program_channel_remap(rdev);
-
WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config);
WREG32(GC_USER_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config);
diff --git a/drivers/hid/hid-picolcd.c b/drivers/hid/hid-picolcd.c
index 9d8710f..1782693 100644
--- a/drivers/hid/hid-picolcd.c
+++ b/drivers/hid/hid-picolcd.c
@@ -2409,7 +2409,7 @@
#ifdef CONFIG_PM
static int picolcd_suspend(struct hid_device *hdev, pm_message_t message)
{
- if (message.event & PM_EVENT_AUTO)
+ if (PMSG_IS_AUTO(message))
return 0;
picolcd_suspend_backlight(hid_get_drvdata(hdev));
diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
index ad978f5..a9fa294 100644
--- a/drivers/hid/usbhid/hid-core.c
+++ b/drivers/hid/usbhid/hid-core.c
@@ -1332,7 +1332,7 @@
struct usbhid_device *usbhid = hid->driver_data;
int status;
- if (message.event & PM_EVENT_AUTO) {
+ if (PMSG_IS_AUTO(message)) {
spin_lock_irq(&usbhid->lock); /* Sync with error handler */
if (!test_bit(HID_RESET_PENDING, &usbhid->iofl)
&& !test_bit(HID_CLEAR_HALT, &usbhid->iofl)
@@ -1367,7 +1367,7 @@
return -EIO;
}
- if (!ignoreled && (message.event & PM_EVENT_AUTO)) {
+ if (!ignoreled && PMSG_IS_AUTO(message)) {
spin_lock_irq(&usbhid->lock);
if (test_bit(HID_LED_ON, &usbhid->iofl)) {
spin_unlock_irq(&usbhid->lock);
@@ -1380,8 +1380,7 @@
hid_cancel_delayed_stuff(usbhid);
hid_cease_io(usbhid);
- if ((message.event & PM_EVENT_AUTO) &&
- test_bit(HID_KEYS_PRESSED, &usbhid->iofl)) {
+ if (PMSG_IS_AUTO(message) && test_bit(HID_KEYS_PRESSED, &usbhid->iofl)) {
/* lost race against keypresses */
status = hid_start_in(hid);
if (status < 0)
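PMSG_IS_AUTO(msg), from linux/pm.h, replaces the open-coded message.event & PM_EVENT_AUTO tests throughout these drivers. A hedged sketch of how a suspend callback uses it to distinguish runtime autosuspend from system sleep (busy_with_io() is a hypothetical driver predicate):

#include <linux/pm.h>

extern bool busy_with_io(struct device *dev);	/* hypothetical */

static int example_suspend(struct device *dev, pm_message_t message)
{
	if (PMSG_IS_AUTO(message) && busy_with_io(dev))
		return -EBUSY;	/* runtime autosuspend may be refused */

	/* System sleep must not fail for convenience reasons:
	 * quiesce the hardware unconditionally here. */
	return 0;
}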
diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
index 4112576..9323837 100644
--- a/drivers/hwmon/coretemp.c
+++ b/drivers/hwmon/coretemp.c
@@ -36,17 +36,25 @@
#include <linux/cpu.h>
#include <linux/pci.h>
#include <linux/smp.h>
+#include <linux/moduleparam.h>
#include <asm/msr.h>
#include <asm/processor.h>
#define DRVNAME "coretemp"
+/*
+ * force_tjmax only matters when TjMax can't be read from the CPU itself.
+ * When set, it replaces the driver's suboptimal heuristic.
+ */
+static int force_tjmax;
+module_param_named(tjmax, force_tjmax, int, 0444);
+MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
+
#define BASE_SYSFS_ATTR_NO 2 /* Sysfs Base attr no for coretemp */
#define NUM_REAL_CORES 16 /* Number of Real cores per cpu */
#define CORETEMP_NAME_LENGTH 17 /* String Length of attrs */
#define MAX_CORE_ATTRS 4 /* Maximum no of basic attrs */
-#define MAX_THRESH_ATTRS 3 /* Maximum no of Threshold attrs */
-#define TOTAL_ATTRS (MAX_CORE_ATTRS + MAX_THRESH_ATTRS)
+#define TOTAL_ATTRS (MAX_CORE_ATTRS + 1)
#define MAX_CORE_DATA (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
#ifdef CONFIG_SMP
@@ -69,8 +77,6 @@
* This value is passed as "id" field to rdmsr/wrmsr functions.
* @status_reg: One of IA32_THERM_STATUS or IA32_PACKAGE_THERM_STATUS,
* from where the temperature values should be read.
- * @intrpt_reg: One of IA32_THERM_INTERRUPT or IA32_PACKAGE_THERM_INTERRUPT,
- * from where the thresholds are read.
* @attr_size: Total number of pre-core attrs displayed in the sysfs.
* @is_pkg_data: If this is 1, the temp_data holds pkgtemp data.
* Otherwise, temp_data holds coretemp data.
@@ -79,13 +85,11 @@
struct temp_data {
int temp;
int ttarget;
- int tmin;
int tjmax;
unsigned long last_updated;
unsigned int cpu;
u32 cpu_core_id;
u32 status_reg;
- u32 intrpt_reg;
int attr_size;
bool is_pkg_data;
bool valid;
@@ -143,19 +147,6 @@
return sprintf(buf, "%d\n", (eax >> 5) & 1);
}
-static ssize_t show_max_alarm(struct device *dev,
- struct device_attribute *devattr, char *buf)
-{
- u32 eax, edx;
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct platform_data *pdata = dev_get_drvdata(dev);
- struct temp_data *tdata = pdata->core_data[attr->index];
-
- rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &eax, &edx);
-
- return sprintf(buf, "%d\n", !!(eax & THERM_STATUS_THRESHOLD1));
-}
-
static ssize_t show_tjmax(struct device *dev,
struct device_attribute *devattr, char *buf)
{
@@ -174,83 +165,6 @@
return sprintf(buf, "%d\n", pdata->core_data[attr->index]->ttarget);
}
-static ssize_t store_ttarget(struct device *dev,
- struct device_attribute *devattr,
- const char *buf, size_t count)
-{
- struct platform_data *pdata = dev_get_drvdata(dev);
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct temp_data *tdata = pdata->core_data[attr->index];
- u32 eax, edx;
- unsigned long val;
- int diff;
-
- if (strict_strtoul(buf, 10, &val))
- return -EINVAL;
-
- /*
- * THERM_MASK_THRESHOLD1 is 7 bits wide. Values are entered in terms
- * of milli degree celsius. Hence don't accept val > (127 * 1000)
- */
- if (val > tdata->tjmax || val > 127000)
- return -EINVAL;
-
- diff = (tdata->tjmax - val) / 1000;
-
- mutex_lock(&tdata->update_lock);
- rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
- eax = (eax & ~THERM_MASK_THRESHOLD1) |
- (diff << THERM_SHIFT_THRESHOLD1);
- wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
- tdata->ttarget = val;
- mutex_unlock(&tdata->update_lock);
-
- return count;
-}
-
-static ssize_t show_tmin(struct device *dev,
- struct device_attribute *devattr, char *buf)
-{
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct platform_data *pdata = dev_get_drvdata(dev);
-
- return sprintf(buf, "%d\n", pdata->core_data[attr->index]->tmin);
-}
-
-static ssize_t store_tmin(struct device *dev,
- struct device_attribute *devattr,
- const char *buf, size_t count)
-{
- struct platform_data *pdata = dev_get_drvdata(dev);
- struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
- struct temp_data *tdata = pdata->core_data[attr->index];
- u32 eax, edx;
- unsigned long val;
- int diff;
-
- if (strict_strtoul(buf, 10, &val))
- return -EINVAL;
-
- /*
- * THERM_MASK_THRESHOLD0 is 7 bits wide. Values are entered in terms
- * of milli degree celsius. Hence don't accept val > (127 * 1000)
- */
- if (val > tdata->tjmax || val > 127000)
- return -EINVAL;
-
- diff = (tdata->tjmax - val) / 1000;
-
- mutex_lock(&tdata->update_lock);
- rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
- eax = (eax & ~THERM_MASK_THRESHOLD0) |
- (diff << THERM_SHIFT_THRESHOLD0);
- wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
- tdata->tmin = val;
- mutex_unlock(&tdata->update_lock);
-
- return count;
-}
-
static ssize_t show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
@@ -374,7 +288,6 @@
static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
{
- /* The 100C is default for both mobile and non mobile CPUs */
int err;
u32 eax, edx;
u32 val;
@@ -385,7 +298,8 @@
*/
err = rdmsr_safe_on_cpu(id, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
if (err) {
- dev_warn(dev, "Unable to read TjMax from CPU.\n");
+ if (c->x86_model > 0xe && c->x86_model != 0x1c)
+ dev_warn(dev, "Unable to read TjMax from CPU %u\n", id);
} else {
val = (eax >> 16) & 0xff;
/*
@@ -393,11 +307,17 @@
* will be used
*/
if (val) {
- dev_info(dev, "TjMax is %d C.\n", val);
+ dev_dbg(dev, "TjMax is %d degrees C\n", val);
return val * 1000;
}
}
+ if (force_tjmax) {
+ dev_notice(dev, "TjMax forced to %d degrees C by user\n",
+ force_tjmax);
+ return force_tjmax * 1000;
+ }
+
/*
* An assumption is made for early CPUs and unreadable MSR.
* NOTE: the calculated value may not be correct.
@@ -414,21 +334,6 @@
rdmsr(MSR_IA32_UCODE_REV, eax, *(u32 *)edx);
}
-static int get_pkg_tjmax(unsigned int cpu, struct device *dev)
-{
- int err;
- u32 eax, edx, val;
-
- err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
- if (!err) {
- val = (eax >> 16) & 0xff;
- if (val)
- return val * 1000;
- }
- dev_warn(dev, "Unable to read Pkg-TjMax from CPU:%u\n", cpu);
- return 100000; /* Default TjMax: 100 degree celsius */
-}
-
static int create_name_attr(struct platform_data *pdata, struct device *dev)
{
sysfs_attr_init(&pdata->name_attr.attr);
@@ -442,19 +347,14 @@
int attr_no)
{
int err, i;
- static ssize_t (*rd_ptr[TOTAL_ATTRS]) (struct device *dev,
+ static ssize_t (*const rd_ptr[TOTAL_ATTRS]) (struct device *dev,
struct device_attribute *devattr, char *buf) = {
show_label, show_crit_alarm, show_temp, show_tjmax,
- show_max_alarm, show_ttarget, show_tmin };
- static ssize_t (*rw_ptr[TOTAL_ATTRS]) (struct device *dev,
- struct device_attribute *devattr, const char *buf,
- size_t count) = { NULL, NULL, NULL, NULL, NULL,
- store_ttarget, store_tmin };
- static const char *names[TOTAL_ATTRS] = {
+ show_ttarget };
+ static const char *const names[TOTAL_ATTRS] = {
"temp%d_label", "temp%d_crit_alarm",
"temp%d_input", "temp%d_crit",
- "temp%d_max_alarm", "temp%d_max",
- "temp%d_max_hyst" };
+ "temp%d_max" };
for (i = 0; i < tdata->attr_size; i++) {
snprintf(tdata->attr_name[i], CORETEMP_NAME_LENGTH, names[i],
@@ -462,10 +362,6 @@
sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
tdata->sd_attrs[i].dev_attr.attr.mode = S_IRUGO;
- if (rw_ptr[i]) {
- tdata->sd_attrs[i].dev_attr.attr.mode |= S_IWUSR;
- tdata->sd_attrs[i].dev_attr.store = rw_ptr[i];
- }
tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
tdata->sd_attrs[i].index = attr_no;
err = device_create_file(dev, &tdata->sd_attrs[i].dev_attr);
@@ -481,9 +377,9 @@
}
-static int __devinit chk_ucode_version(struct platform_device *pdev)
+static int __cpuinit chk_ucode_version(unsigned int cpu)
{
- struct cpuinfo_x86 *c = &cpu_data(pdev->id);
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
int err;
u32 edx;
@@ -494,17 +390,15 @@
*/
if (c->x86_model == 0xe && c->x86_mask < 0xc) {
/* check for microcode update */
- err = smp_call_function_single(pdev->id, get_ucode_rev_on_cpu,
+ err = smp_call_function_single(cpu, get_ucode_rev_on_cpu,
&edx, 1);
if (err) {
- dev_err(&pdev->dev,
- "Cannot determine microcode revision of "
- "CPU#%u (%d)!\n", pdev->id, err);
+ pr_err("Cannot determine microcode revision of "
+ "CPU#%u (%d)!\n", cpu, err);
return -ENODEV;
} else if (edx < 0x39) {
- dev_err(&pdev->dev,
- "Errata AE18 not fixed, update BIOS or "
- "microcode of the CPU!\n");
+ pr_err("Errata AE18 not fixed, update BIOS or "
+ "microcode of the CPU!\n");
return -ENODEV;
}
}
@@ -538,8 +432,6 @@
tdata->status_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_STATUS :
MSR_IA32_THERM_STATUS;
- tdata->intrpt_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_INTERRUPT :
- MSR_IA32_THERM_INTERRUPT;
tdata->is_pkg_data = pkg_flag;
tdata->cpu = cpu;
tdata->cpu_core_id = TO_CORE_ID(cpu);
@@ -548,11 +440,11 @@
return tdata;
}
-static int create_core_data(struct platform_data *pdata,
- struct platform_device *pdev,
+static int create_core_data(struct platform_device *pdev,
unsigned int cpu, int pkg_flag)
{
struct temp_data *tdata;
+ struct platform_data *pdata = platform_get_drvdata(pdev);
struct cpuinfo_x86 *c = &cpu_data(cpu);
u32 eax, edx;
int err, attr_no;
@@ -588,25 +480,21 @@
goto exit_free;
/* We can access status register. Get Critical Temperature */
- if (pkg_flag)
- tdata->tjmax = get_pkg_tjmax(pdev->id, &pdev->dev);
- else
- tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);
+ tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);
/*
- * Test if we can access the intrpt register. If so, increase the
- * 'size' enough to have ttarget/tmin/max_alarm interfaces.
- * Initialize ttarget with bits 16:22 of MSR_IA32_THERM_INTERRUPT
+ * Read the still undocumented bits 8:15 of IA32_TEMPERATURE_TARGET.
+ * The target temperature is available on older CPUs but not in this
+ * register. Atoms don't have the register at all.
*/
- err = rdmsr_safe_on_cpu(cpu, tdata->intrpt_reg, &eax, &edx);
- if (!err) {
- tdata->attr_size += MAX_THRESH_ATTRS;
- tdata->tmin = tdata->tjmax -
- ((eax & THERM_MASK_THRESHOLD0) >>
- THERM_SHIFT_THRESHOLD0) * 1000;
- tdata->ttarget = tdata->tjmax -
- ((eax & THERM_MASK_THRESHOLD1) >>
- THERM_SHIFT_THRESHOLD1) * 1000;
+ if (c->x86_model > 0xe && c->x86_model != 0x1c) {
+ err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET,
+ &eax, &edx);
+ if (!err) {
+ tdata->ttarget
+ = tdata->tjmax - ((eax >> 8) & 0xff) * 1000;
+ tdata->attr_size++;
+ }
}
pdata->core_data[attr_no] = tdata;
@@ -618,22 +506,20 @@
return 0;
exit_free:
+ pdata->core_data[attr_no] = NULL;
kfree(tdata);
return err;
}
static void coretemp_add_core(unsigned int cpu, int pkg_flag)
{
- struct platform_data *pdata;
struct platform_device *pdev = coretemp_get_pdev(cpu);
int err;
if (!pdev)
return;
- pdata = platform_get_drvdata(pdev);
-
- err = create_core_data(pdata, pdev, cpu, pkg_flag);
+ err = create_core_data(pdev, cpu, pkg_flag);
if (err)
dev_err(&pdev->dev, "Adding Core %u failed\n", cpu);
}
@@ -657,11 +543,6 @@
struct platform_data *pdata;
int err;
- /* Check the microcode version of the CPU */
- err = chk_ucode_version(pdev);
- if (err)
- return err;
-
/* Initialize the per-package data structures */
pdata = kzalloc(sizeof(struct platform_data), GFP_KERNEL);
if (!pdata)
@@ -671,7 +552,7 @@
if (err)
goto exit_free;
- pdata->phys_proc_id = TO_PHYS_ID(pdev->id);
+ pdata->phys_proc_id = pdev->id;
platform_set_drvdata(pdev, pdata);
pdata->hwmon_dev = hwmon_device_register(&pdev->dev);
@@ -723,7 +604,7 @@
mutex_lock(&pdev_list_mutex);
- pdev = platform_device_alloc(DRVNAME, cpu);
+ pdev = platform_device_alloc(DRVNAME, TO_PHYS_ID(cpu));
if (!pdev) {
err = -ENOMEM;
pr_err("Device allocation failed\n");
@@ -743,7 +624,7 @@
}
pdev_entry->pdev = pdev;
- pdev_entry->phys_proc_id = TO_PHYS_ID(cpu);
+ pdev_entry->phys_proc_id = pdev->id;
list_add_tail(&pdev_entry->list, &pdev_list);
mutex_unlock(&pdev_list_mutex);
@@ -804,6 +685,10 @@
return;
if (!pdev) {
+ /* Check the microcode version of the CPU */
+ if (chk_ucode_version(cpu))
+ return;
+
/*
* Alright, we have DTS support.
* We are bringing the _first_ core in this pkg
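After this patch the driver resolves TjMax in a fixed order: the MSR value if readable and non-zero, then the user's tjmax= module parameter, then the heuristic. A condensed sketch of that fallback chain (simplified; the helpers stand in for rdmsr_safe_on_cpu() plus bit extraction and for the model-based guess):

extern int read_tjmax_msr(void);	/* 0 if MSR unreadable or field unset */
extern int heuristic_tjmax(void);	/* millidegrees; may be wrong */

static int resolve_tjmax(int force_tjmax_param)
{
	int val = read_tjmax_msr();

	if (val)
		return val * 1000;		 /* authoritative, degrees -> mC */
	if (force_tjmax_param)
		return force_tjmax_param * 1000; /* user override via tjmax= */
	return heuristic_tjmax();		 /* last resort */
}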
diff --git a/drivers/hwmon/ds620.c b/drivers/hwmon/ds620.c
index 257957c..4f7c3fc 100644
--- a/drivers/hwmon/ds620.c
+++ b/drivers/hwmon/ds620.c
@@ -72,7 +72,7 @@
char valid; /* !=0 if following fields are valid */
unsigned long last_updated; /* In jiffies */
- u16 temp[3]; /* Register values, word */
+ s16 temp[3]; /* Register values, word */
};
/*
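The ds620 change is a one-character fix with real consequences: the register holds a two's-complement word, so storing it in u16 loses negative temperatures once later arithmetic widens the value. A runnable demonstration:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint16_t raw = 0xFF00;		/* -256 as two's complement */
	uint16_t as_u16 = raw;
	int16_t  as_s16 = (int16_t)raw;

	/* Widening to int: unsigned stays huge, signed sign-extends. */
	printf("u16 -> int: %d\n", (int)as_u16);  /* 65280 (wrong) */
	printf("s16 -> int: %d\n", (int)as_s16);  /* -256 (correct) */
	return 0;
}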
diff --git a/drivers/hwmon/w83791d.c b/drivers/hwmon/w83791d.c
index 17cf1ab..8c2844e 100644
--- a/drivers/hwmon/w83791d.c
+++ b/drivers/hwmon/w83791d.c
@@ -329,8 +329,8 @@
struct i2c_board_info *info);
static int w83791d_remove(struct i2c_client *client);
-static int w83791d_read(struct i2c_client *client, u8 register);
-static int w83791d_write(struct i2c_client *client, u8 register, u8 value);
+static int w83791d_read(struct i2c_client *client, u8 reg);
+static int w83791d_write(struct i2c_client *client, u8 reg, u8 value);
static struct w83791d_data *w83791d_update_device(struct device *dev);
#ifdef DEBUG
diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 2747980..16f69be 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -435,7 +435,12 @@
if (!(rq->cmd_flags & REQ_FLUSH))
return BLKPREP_OK;
- cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
+ if (rq->special) {
+ cmd = rq->special;
+ memset(cmd, 0, sizeof(*cmd));
+ } else {
+ cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
+ }
/* FIXME: map struct ide_taskfile on rq->cmd[] */
BUG_ON(cmd == NULL);
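The flush path previously relied on kzalloc(GFP_ATOMIC) succeeding, with only a BUG_ON as backstop; the fix reuses the command structure already attached to the request when one exists. The pattern in isolation (field names follow the hunk; the types and calloc are stand-ins for the kernel structures and kzalloc):

#include <stdlib.h>
#include <string.h>

struct ide_cmd { char buf[64]; };	/* stand-in for the real struct */
struct request { void *special; };	/* stand-in; may hold a preallocated cmd */

static struct ide_cmd *get_flush_cmd(struct request *rq)
{
	struct ide_cmd *cmd;

	if (rq->special) {
		/* Preallocated storage: reuse it, no allocation can fail. */
		cmd = rq->special;
		memset(cmd, 0, sizeof(*cmd));
	} else {
		/* Fallback: atomic allocation, may fail under pressure. */
		cmd = calloc(1, sizeof(*cmd));
	}
	return cmd;
}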
diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c
index 17bf9d9..6cd642a 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_cm.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c
@@ -287,7 +287,7 @@
if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) {
cxgb3_remove_tid(ep->com.tdev, (void *)ep, ep->hwtid);
dst_release(ep->dst);
- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
+ l2t_release(ep->com.tdev, ep->l2t);
}
kfree(ep);
}
@@ -1178,7 +1178,7 @@
release_tid(ep->com.tdev, GET_TID(rpl), NULL);
cxgb3_free_atid(ep->com.tdev, ep->atid);
dst_release(ep->dst);
- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
+ l2t_release(ep->com.tdev, ep->l2t);
put_ep(&ep->com);
return CPL_RET_BUF_DONE;
}
@@ -1377,7 +1377,7 @@
if (!child_ep) {
printk(KERN_ERR MOD "%s - failed to allocate ep entry!\n",
__func__);
- l2t_release(L2DATA(tdev), l2t);
+ l2t_release(tdev, l2t);
dst_release(dst);
goto reject;
}
@@ -1956,7 +1956,7 @@
if (!err)
goto out;
- l2t_release(L2DATA(h->rdev.t3cdev_p), ep->l2t);
+ l2t_release(h->rdev.t3cdev_p, ep->l2t);
fail4:
dst_release(ep->dst);
fail3:
@@ -2127,7 +2127,7 @@
PDBG("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
l2t);
dst_hold(new);
- l2t_release(L2DATA(ep->com.tdev), ep->l2t);
+ l2t_release(ep->com.tdev, ep->l2t);
ep->l2t = l2t;
dst_release(old);
ep->dst = new;
diff --git a/drivers/input/tablet/wacom_wac.c b/drivers/input/tablet/wacom_wac.c
index 0dc97ec..9dea718 100644
--- a/drivers/input/tablet/wacom_wac.c
+++ b/drivers/input/tablet/wacom_wac.c
@@ -1124,11 +1124,8 @@
for (i = 0; i < 8; i++)
__set_bit(BTN_0 + i, input_dev->keybit);
- if (wacom_wac->features.type != WACOM_21UX2) {
- input_set_abs_params(input_dev, ABS_RX, 0, 4096, 0, 0);
- input_set_abs_params(input_dev, ABS_RY, 0, 4096, 0, 0);
- }
-
+ input_set_abs_params(input_dev, ABS_RX, 0, 4096, 0, 0);
+ input_set_abs_params(input_dev, ABS_RY, 0, 4096, 0, 0);
input_set_abs_params(input_dev, ABS_Z, -900, 899, 0, 0);
__set_bit(INPUT_PROP_DIRECT, input_dev->propbit);
diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
index 49da55c..8c2a000 100644
--- a/drivers/md/dm-crypt.c
+++ b/drivers/md/dm-crypt.c
@@ -1698,6 +1698,8 @@
}
ti->num_flush_requests = 1;
+ ti->discard_zeroes_data_unsupported = 1;
+
return 0;
bad:
diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
index 89f73ca..f84c080 100644
--- a/drivers/md/dm-flakey.c
+++ b/drivers/md/dm-flakey.c
@@ -81,8 +81,10 @@
* corrupt_bio_byte <Nth_byte> <direction> <value> <bio_flags>
*/
if (!strcasecmp(arg_name, "corrupt_bio_byte")) {
- if (!argc)
+ if (!argc) {
ti->error = "Feature corrupt_bio_byte requires parameters";
+ return -EINVAL;
+ }
r = dm_read_arg(_args + 1, as, &fc->corrupt_bio_byte, &ti->error);
if (r)
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index a002dd8..86df8b2 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -449,7 +449,7 @@
rs->ti->error = "write_mostly option is only valid for RAID1";
return -EINVAL;
}
- if (value > rs->md.raid_disks) {
+ if (value >= rs->md.raid_disks) {
rs->ti->error = "Invalid write_mostly drive index given";
return -EINVAL;
}
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 986b875..bc04518 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1238,14 +1238,15 @@
return;
template_disk = dm_table_get_integrity_disk(t, true);
- if (!template_disk &&
- blk_integrity_is_initialized(dm_disk(t->md))) {
+ if (template_disk)
+ blk_integrity_register(dm_disk(t->md),
+ blk_get_integrity(template_disk));
+ else if (blk_integrity_is_initialized(dm_disk(t->md)))
DMWARN("%s: device no longer has a valid integrity profile",
dm_device_name(t->md));
- return;
- }
- blk_integrity_register(dm_disk(t->md),
- blk_get_integrity(template_disk));
+ else
+ DMWARN("%s: unable to establish an integrity profile",
+ dm_device_name(t->md));
}
static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
@@ -1282,6 +1283,22 @@
return 0;
}
+static bool dm_table_discard_zeroes_data(struct dm_table *t)
+{
+ struct dm_target *ti;
+ unsigned i = 0;
+
+ /* Ensure that all targets support discard_zeroes_data. */
+ while (i < dm_table_get_num_targets(t)) {
+ ti = dm_table_get_target(t, i++);
+
+ if (ti->discard_zeroes_data_unsupported)
+ return 0;
+ }
+
+ return 1;
+}
+
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits)
{
@@ -1304,6 +1321,9 @@
}
blk_queue_flush(q, flush);
+ if (!dm_table_discard_zeroes_data(t))
+ q->limits.discard_zeroes_data = 0;
+
dm_table_set_integrity(t);
/*
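dm_table_discard_zeroes_data() is a conjunction over targets: a single opting-out target clears the queue flag for the whole table. The generic shape of that propagation (stand-in types, not the dm API):

#include <stdbool.h>

struct target { bool discard_zeroes_data_unsupported; };

/* Sketch: a capability flag survives only if every target supports it. */
static bool all_targets_zero_discards(const struct target *t, unsigned n)
{
	unsigned i;

	for (i = 0; i < n; i++)
		if (t[i].discard_zeroes_data_unsupported)
			return false;	/* one opt-out disables the feature */
	return true;
}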
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 5404b22..5c95ccb 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -61,6 +61,11 @@
static void autostart_arrays(int part);
#endif
+/* pers_list is a list of registered personalities protected
+ * by pers_lock.
+ * pers_lock does extra service to protect accesses to
+ * mddev->thread when the mutex cannot be held.
+ */
static LIST_HEAD(pers_list);
static DEFINE_SPINLOCK(pers_lock);
@@ -739,7 +744,12 @@
} else
mutex_unlock(&mddev->reconfig_mutex);
+ /* Now we've dropped the mutex we need a spinlock to
+ * make sure the thread doesn't disappear
+ */
+ spin_lock(&pers_lock);
md_wakeup_thread(mddev->thread);
+ spin_unlock(&pers_lock);
}
static mdk_rdev_t * find_rdev_nr(mddev_t *mddev, int nr)
@@ -6429,11 +6439,18 @@
return thread;
}
-void md_unregister_thread(mdk_thread_t *thread)
+void md_unregister_thread(mdk_thread_t **threadp)
{
+ mdk_thread_t *thread = *threadp;
if (!thread)
return;
dprintk("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
+ /* Locking ensures that mddev_unlock does not wake_up a
+ * non-existent thread
+ */
+ spin_lock(&pers_lock);
+ *threadp = NULL;
+ spin_unlock(&pers_lock);
kthread_stop(thread->tsk);
kfree(thread);
@@ -7340,8 +7357,7 @@
mdk_rdev_t *rdev;
/* resync has finished, collect result */
- md_unregister_thread(mddev->sync_thread);
- mddev->sync_thread = NULL;
+ md_unregister_thread(&mddev->sync_thread);
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
/* success...*/
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 1e586bb..0a309dc 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -560,7 +560,7 @@
extern int unregister_md_personality(struct mdk_personality *p);
extern mdk_thread_t * md_register_thread(void (*run) (mddev_t *mddev),
mddev_t *mddev, const char *name);
-extern void md_unregister_thread(mdk_thread_t *thread);
+extern void md_unregister_thread(mdk_thread_t **threadp);
extern void md_wakeup_thread(mdk_thread_t *thread);
extern void md_check_recovery(mddev_t *mddev);
extern void md_write_start(mddev_t *mddev, struct bio *bi);
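Passing mdk_thread_t ** lets the unregister path NULL the caller's pointer under pers_lock before sleeping in kthread_stop(), so a concurrent md_wakeup_thread() (which now also takes pers_lock) can never dereference a freed thread. A compressed sketch of the handshake (spin_lock(), kthread_stop(), kfree() and wake_up() are the real kernel APIs; struct mthread is a stand-in for mdk_thread_t):

struct mthread {
	struct task_struct *tsk;
	wait_queue_head_t wqueue;
};

static void unregister_thread(struct mthread **threadp, spinlock_t *lock)
{
	struct mthread *t = *threadp;

	if (!t)
		return;
	spin_lock(lock);
	*threadp = NULL;	/* wakers see NULL from here on */
	spin_unlock(lock);
	kthread_stop(t->tsk);	/* may sleep; pointer already unpublished */
	kfree(t);
}

static void wakeup_thread(struct mthread **threadp, spinlock_t *lock)
{
	spin_lock(lock);
	if (*threadp)		/* NULL or valid, never dangling */
		wake_up(&(*threadp)->wqueue);
	spin_unlock(lock);
}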
diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c
index 3535c23..d5b5fb3 100644
--- a/drivers/md/multipath.c
+++ b/drivers/md/multipath.c
@@ -514,8 +514,7 @@
{
multipath_conf_t *conf = mddev->private;
- md_unregister_thread(mddev->thread);
- mddev->thread = NULL;
+ md_unregister_thread(&mddev->thread);
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
mempool_destroy(conf->pool);
kfree(conf->multipaths);
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index f4622dd..d9587df 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2562,8 +2562,7 @@
raise_barrier(conf);
lower_barrier(conf);
- md_unregister_thread(mddev->thread);
- mddev->thread = NULL;
+ md_unregister_thread(&mddev->thread);
if (conf->r1bio_pool)
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index d7a8468..0cd9672 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -2955,7 +2955,7 @@
return 0;
out_free_conf:
- md_unregister_thread(mddev->thread);
+ md_unregister_thread(&mddev->thread);
if (conf->r10bio_pool)
mempool_destroy(conf->r10bio_pool);
safe_put_page(conf->tmppage);
@@ -2973,8 +2973,7 @@
raise_barrier(conf, 0);
lower_barrier(conf);
- md_unregister_thread(mddev->thread);
- mddev->thread = NULL;
+ md_unregister_thread(&mddev->thread);
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
if (conf->r10bio_pool)
mempool_destroy(conf->r10bio_pool);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 43709fa..ac5e8b5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -4941,8 +4941,7 @@
return 0;
abort:
- md_unregister_thread(mddev->thread);
- mddev->thread = NULL;
+ md_unregister_thread(&mddev->thread);
if (conf) {
print_raid5_conf(conf);
free_conf(conf);
@@ -4956,8 +4955,7 @@
{
raid5_conf_t *conf = mddev->private;
- md_unregister_thread(mddev->thread);
- mddev->thread = NULL;
+ md_unregister_thread(&mddev->thread);
if (mddev->queue)
mddev->queue->backing_dev_info.congested_fn = NULL;
free_conf(conf);
diff --git a/drivers/media/video/omap/omap_vout.c b/drivers/media/video/omap/omap_vout.c
index b5ef362..b3a5ecd 100644
--- a/drivers/media/video/omap/omap_vout.c
+++ b/drivers/media/video/omap/omap_vout.c
@@ -2194,19 +2194,6 @@
"'%s' Display already enabled\n",
def_display->name);
}
- /* set the update mode */
- if (def_display->caps &
- OMAP_DSS_DISPLAY_CAP_MANUAL_UPDATE) {
- if (dssdrv->enable_te)
- dssdrv->enable_te(def_display, 0);
- if (dssdrv->set_update_mode)
- dssdrv->set_update_mode(def_display,
- OMAP_DSS_UPDATE_MANUAL);
- } else {
- if (dssdrv->set_update_mode)
- dssdrv->set_update_mode(def_display,
- OMAP_DSS_UPDATE_AUTO);
- }
}
}
diff --git a/drivers/media/video/omap3isp/ispccdc.c b/drivers/media/video/omap3isp/ispccdc.c
index 9d3459d..80796eb 100644
--- a/drivers/media/video/omap3isp/ispccdc.c
+++ b/drivers/media/video/omap3isp/ispccdc.c
@@ -31,6 +31,7 @@
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/sched.h>
+#include <linux/slab.h>
#include <media/v4l2-event.h>
#include "isp.h"
diff --git a/drivers/media/video/uvc/uvc_driver.c b/drivers/media/video/uvc/uvc_driver.c
index d29f9c2..e4100b1 100644
--- a/drivers/media/video/uvc/uvc_driver.c
+++ b/drivers/media/video/uvc/uvc_driver.c
@@ -1961,7 +1961,7 @@
list_for_each_entry(stream, &dev->streams, list) {
if (stream->intf == intf)
- return uvc_video_resume(stream);
+ return uvc_video_resume(stream, reset);
}
uvc_trace(UVC_TRACE_SUSPEND, "Resume: video streaming USB interface "
diff --git a/drivers/media/video/uvc/uvc_entity.c b/drivers/media/video/uvc/uvc_entity.c
index 48fea37..29e2399 100644
--- a/drivers/media/video/uvc/uvc_entity.c
+++ b/drivers/media/video/uvc/uvc_entity.c
@@ -49,7 +49,7 @@
if (remote == NULL)
return -EINVAL;
- source = (UVC_ENTITY_TYPE(remote) != UVC_TT_STREAMING)
+ source = (UVC_ENTITY_TYPE(remote) == UVC_TT_STREAMING)
? (remote->vdev ? &remote->vdev->entity : NULL)
: &remote->subdev.entity;
if (source == NULL)
diff --git a/drivers/media/video/uvc/uvc_video.c b/drivers/media/video/uvc/uvc_video.c
index 8244167..ffd1158 100644
--- a/drivers/media/video/uvc/uvc_video.c
+++ b/drivers/media/video/uvc/uvc_video.c
@@ -1104,10 +1104,18 @@
* buffers, making sure userspace applications are notified of the problem
* instead of waiting forever.
*/
-int uvc_video_resume(struct uvc_streaming *stream)
+int uvc_video_resume(struct uvc_streaming *stream, int reset)
{
int ret;
+ /* If the bus has been reset on resume, set the alternate setting to 0.
+ * This should be the default value, but some devices crash or otherwise
+ * misbehave if they don't receive a SET_INTERFACE request before any
+ * other video control request.
+ */
+ if (reset)
+ usb_set_interface(stream->dev->udev, stream->intfnum, 0);
+
stream->frozen = 0;
ret = uvc_commit_video(stream, &stream->ctrl);
diff --git a/drivers/media/video/uvc/uvcvideo.h b/drivers/media/video/uvc/uvcvideo.h
index df32a43..cbdd49b 100644
--- a/drivers/media/video/uvc/uvcvideo.h
+++ b/drivers/media/video/uvc/uvcvideo.h
@@ -638,7 +638,7 @@
/* Video */
extern int uvc_video_init(struct uvc_streaming *stream);
extern int uvc_video_suspend(struct uvc_streaming *stream);
-extern int uvc_video_resume(struct uvc_streaming *stream);
+extern int uvc_video_resume(struct uvc_streaming *stream, int reset);
extern int uvc_video_enable(struct uvc_streaming *stream, int enable);
extern int uvc_probe_video(struct uvc_streaming *stream,
struct uvc_streaming_control *probe);
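On a reset-resume the device has lost its state, so the driver first issues usb_set_interface(..., 0) to restore a known alternate setting before any class request. Sketched below (usb_set_interface() is the real USB core call; struct stream and commit_video() are stand-ins):

struct stream {
	struct usb_device *udev;
	int intfnum;
	int frozen;
};

extern int commit_video(struct stream *s);

static int stream_resume(struct stream *s, int reset)
{
	if (reset) {
		/* Alt setting 0 should already be in effect after reset,
		 * but some devices crash without an explicit SET_INTERFACE
		 * before the first video control request. */
		usb_set_interface(s->udev, s->intfnum, 0);
	}
	s->frozen = 0;
	return commit_video(s);	/* replay the negotiated parameters */
}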
diff --git a/drivers/media/video/v4l2-dev.c b/drivers/media/video/v4l2-dev.c
index 06f1400..d721565 100644
--- a/drivers/media/video/v4l2-dev.c
+++ b/drivers/media/video/v4l2-dev.c
@@ -173,6 +173,17 @@
media_device_unregister_entity(&vdev->entity);
#endif
+ /* Do not call v4l2_device_put if there is no release callback set.
+ * Drivers that have no v4l2_device release callback might free the
+ * v4l2_dev instance in the video_device release callback below, so we
+ * must perform this check here.
+ *
+ * TODO: In the long run all drivers that use v4l2_device should use the
+ * v4l2_device release callback. This check will then be unnecessary.
+ */
+ if (v4l2_dev->release == NULL)
+ v4l2_dev = NULL;
+
/* Release video_device and perform other
cleanups as needed. */
vdev->release(vdev);
diff --git a/drivers/media/video/v4l2-device.c b/drivers/media/video/v4l2-device.c
index c72856c..e6a2c3b 100644
--- a/drivers/media/video/v4l2-device.c
+++ b/drivers/media/video/v4l2-device.c
@@ -38,6 +38,7 @@
mutex_init(&v4l2_dev->ioctl_lock);
v4l2_prio_init(&v4l2_dev->prio);
kref_init(&v4l2_dev->ref);
+ get_device(dev);
v4l2_dev->dev = dev;
if (dev == NULL) {
/* If dev == NULL, then name must be filled in by the caller */
@@ -93,6 +94,7 @@
if (dev_get_drvdata(v4l2_dev->dev) == v4l2_dev)
dev_set_drvdata(v4l2_dev->dev, NULL);
+ put_device(v4l2_dev->dev);
v4l2_dev->dev = NULL;
}
EXPORT_SYMBOL_GPL(v4l2_device_disconnect);
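Taking a reference on the parent struct device at registration and dropping it in v4l2_device_disconnect() keeps the parent alive for as long as the v4l2_device points at it. The pairing, reduced to its skeleton (get_device()/put_device() are the real driver-core refcount calls; struct vdev_owner is a stand-in):

struct vdev_owner { struct device *dev; };

static void attach_parent(struct vdev_owner *v, struct device *dev)
{
	get_device(dev);	/* pin before publishing the pointer */
	v->dev = dev;
}

static void detach_parent(struct vdev_owner *v)
{
	put_device(v->dev);	/* unpin where the pointer is cleared */
	v->dev = NULL;
}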
diff --git a/drivers/media/video/via-camera.c b/drivers/media/video/via-camera.c
index bb7f17f..cbf13d0 100644
--- a/drivers/media/video/via-camera.c
+++ b/drivers/media/video/via-camera.c
@@ -21,7 +21,7 @@
#include <media/videobuf-dma-sg.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/via-core.h>
#include <linux/via-gpio.h>
#include <linux/via_i2c.h>
@@ -69,7 +69,7 @@
struct mutex lock;
enum viacam_opstate opstate;
unsigned long flags;
- struct pm_qos_request_list qos_request;
+ struct pm_qos_request qos_request;
/*
* GPIO info for power/reset management
*/
diff --git a/drivers/mfd/jz4740-adc.c b/drivers/mfd/jz4740-adc.c
index 21131c7..563654c 100644
--- a/drivers/mfd/jz4740-adc.c
+++ b/drivers/mfd/jz4740-adc.c
@@ -273,7 +273,7 @@
ct->regs.ack = JZ_REG_ADC_STATUS;
ct->chip.irq_mask = irq_gc_mask_set_bit;
ct->chip.irq_unmask = irq_gc_mask_clr_bit;
- ct->chip.irq_ack = irq_gc_ack;
+ ct->chip.irq_ack = irq_gc_ack_set_bit;
irq_setup_generic_chip(gc, IRQ_MSK(5), 0, 0, IRQ_NOPROBE | IRQ_LEVEL);
diff --git a/drivers/misc/lis3lv02d/lis3lv02d.c b/drivers/misc/lis3lv02d/lis3lv02d.c
index b928bc1..8b51cd6 100644
--- a/drivers/misc/lis3lv02d/lis3lv02d.c
+++ b/drivers/misc/lis3lv02d/lis3lv02d.c
@@ -375,12 +375,14 @@
* both have been read. So the value read will always be correct.
* Set BOOT bit to refresh factory tuning values.
*/
- lis3->read(lis3, CTRL_REG2, &reg);
- if (lis3->whoami == WAI_12B)
- reg |= CTRL2_BDU | CTRL2_BOOT;
- else
- reg |= CTRL2_BOOT_8B;
- lis3->write(lis3, CTRL_REG2, reg);
+ if (lis3->pdata) {
+ lis3->read(lis3, CTRL_REG2, &reg);
+ if (lis3->whoami == WAI_12B)
+ reg |= CTRL2_BDU | CTRL2_BOOT;
+ else
+ reg |= CTRL2_BOOT_8B;
+ lis3->write(lis3, CTRL_REG2, reg);
+ }
/* LIS3 power on delay is quite long */
msleep(lis3->pwron_delay / lis3lv02d_get_odr());
diff --git a/drivers/net/bnx2x/bnx2x_dcb.c b/drivers/net/bnx2x/bnx2x_dcb.c
index a1e004a..0b4acf67 100644
--- a/drivers/net/bnx2x/bnx2x_dcb.c
+++ b/drivers/net/bnx2x/bnx2x_dcb.c
@@ -2120,6 +2120,7 @@
break;
case DCB_CAP_ATTR_DCBX:
*cap = BNX2X_DCBX_CAPS;
+ break;
default:
rval = -EINVAL;
break;
diff --git a/drivers/net/bnx2x/bnx2x_main.c b/drivers/net/bnx2x/bnx2x_main.c
index c027e934..15f8000 100644
--- a/drivers/net/bnx2x/bnx2x_main.c
+++ b/drivers/net/bnx2x/bnx2x_main.c
@@ -4943,7 +4943,7 @@
int igu_seg_id;
int port = BP_PORT(bp);
int func = BP_FUNC(bp);
- int reg_offset;
+ int reg_offset, reg_offset_en5;
u64 section;
int index;
struct hc_sp_status_block_data sp_sb_data;
@@ -4966,6 +4966,8 @@
reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
+ reg_offset_en5 = (port ? MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 :
+ MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0);
for (index = 0; index < MAX_DYNAMIC_ATTN_GRPS; index++) {
int sindex;
/* take care of sig[0]..sig[4] */
@@ -4980,7 +4982,7 @@
* and not 16 between the different groups
*/
bp->attn_group[index].sig[4] = REG_RD(bp,
- reg_offset + 0x10 + 0x4*index);
+ reg_offset_en5 + 0x4*index);
else
bp->attn_group[index].sig[4] = 0;
}
@@ -7625,8 +7627,11 @@
u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
u8 *mac_addr = bp->dev->dev_addr;
u32 val;
+ u16 pmc;
+
/* The mac address is written to entries 1-4 to
- preserve entry 0 which is used by the PMF */
+ * preserve entry 0 which is used by the PMF
+ */
u8 entry = (BP_VN(bp) + 1)*8;
val = (mac_addr[0] << 8) | mac_addr[1];
@@ -7636,6 +7641,11 @@
(mac_addr[4] << 8) | mac_addr[5];
EMAC_WR(bp, EMAC_REG_EMAC_MAC_MATCH + entry + 4, val);
+ /* Enable the PME and clear the status */
+ pci_read_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, &pmc);
+ pmc |= PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS;
+ pci_write_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, pmc);
+
reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_EN;
} else
diff --git a/drivers/net/bnx2x/bnx2x_reg.h b/drivers/net/bnx2x/bnx2x_reg.h
index 750e844..fc7bd0f 100644
--- a/drivers/net/bnx2x/bnx2x_reg.h
+++ b/drivers/net/bnx2x/bnx2x_reg.h
@@ -1384,6 +1384,18 @@
Latched ump_tx_parity; [31] MCP Latched scpad_parity; */
#define MISC_REG_AEU_ENABLE4_PXP_0 0xa108
#define MISC_REG_AEU_ENABLE4_PXP_1 0xa1a8
+/* [RW 32] fifth 32b for enabling the output for function 0 output0. Mapped
+ * as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
+ * attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
+ * mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
+ * parity; [31-10] Reserved; */
+#define MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0 0xa688
+/* [RW 32] Fifth 32b for enabling the output for function 1 output0. Mapped
+ * as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
+ * attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
+ * mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
+ * parity; [31-10] Reserved; */
+#define MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 0xa6b0
/* [RW 1] set/clr general attention 0; this will set/clr bit 94 in the aeu
128 bit vector */
#define MISC_REG_AEU_GENERAL_ATTN_0 0xa000
diff --git a/drivers/net/bonding/bond_3ad.c b/drivers/net/bonding/bond_3ad.c
index a047eb9..47b928e 100644
--- a/drivers/net/bonding/bond_3ad.c
+++ b/drivers/net/bonding/bond_3ad.c
@@ -2168,7 +2168,8 @@
}
re_arm:
- queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
+ if (!bond->kill_timers)
+ queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
out:
read_unlock(&bond->lock);
}
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 7f8b20a3..d4fbd2e 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -1440,7 +1440,8 @@
}
re_arm:
- queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
+ if (!bond->kill_timers)
+ queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
out:
read_unlock(&bond->lock);
}
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 43f2ea5..6d79b78 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -777,6 +777,9 @@
read_lock(&bond->lock);
+ if (bond->kill_timers)
+ goto out;
+
/* rejoin all groups on bond device */
__bond_resend_igmp_join_requests(bond->dev);
@@ -790,9 +793,9 @@
__bond_resend_igmp_join_requests(vlan_dev);
}
- if (--bond->igmp_retrans > 0)
+ if ((--bond->igmp_retrans > 0) && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5);
-
+out:
read_unlock(&bond->lock);
}
@@ -2538,7 +2541,7 @@
}
re_arm:
- if (bond->params.miimon)
+ if (bond->params.miimon && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->mii_work,
msecs_to_jiffies(bond->params.miimon));
out:
@@ -2886,7 +2889,7 @@
}
re_arm:
- if (bond->params.arp_interval)
+ if (bond->params.arp_interval && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
out:
read_unlock(&bond->lock);
@@ -3154,7 +3157,7 @@
bond_ab_arp_probe(bond);
re_arm:
- if (bond->params.arp_interval)
+ if (bond->params.arp_interval && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
out:
read_unlock(&bond->lock);
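Every bonding work handler now checks bond->kill_timers before re-queueing itself, closing the window where teardown cancels the work but the running handler re-arms it anyway. The guard pattern, generically (queue_delayed_work() and the locks are real kernel APIs; struct bond here is a reduced stand-in):

struct bond {
	rwlock_t lock;
	int kill_timers;		/* set by teardown before cancelling */
	struct workqueue_struct *wq;
	struct delayed_work work;
	unsigned long interval;
};

extern void do_monitoring(struct bond *bond);	/* stand-in */

static void monitor_work(struct bond *bond)
{
	read_lock(&bond->lock);
	if (bond->kill_timers)		/* teardown in progress: bail out */
		goto out;

	do_monitoring(bond);

	if (!bond->kill_timers)		/* re-check before re-arming */
		queue_delayed_work(bond->wq, &bond->work, bond->interval);
out:
	read_unlock(&bond->lock);
}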
diff --git a/drivers/net/cxgb3/cxgb3_offload.c b/drivers/net/cxgb3/cxgb3_offload.c
index 805076c..da5a5d9 100644
--- a/drivers/net/cxgb3/cxgb3_offload.c
+++ b/drivers/net/cxgb3/cxgb3_offload.c
@@ -1146,12 +1146,14 @@
if (te && te->ctx && te->client && te->client->redirect) {
update_tcb = te->client->redirect(te->ctx, old, new, e);
if (update_tcb) {
+ rcu_read_lock();
l2t_hold(L2DATA(tdev), e);
+ rcu_read_unlock();
set_l2t_ix(tdev, tid, e);
}
}
}
- l2t_release(L2DATA(tdev), e);
+ l2t_release(tdev, e);
}
/*
@@ -1264,7 +1266,7 @@
goto out_free;
err = -ENOMEM;
- L2DATA(dev) = t3_init_l2t(l2t_capacity);
+ RCU_INIT_POINTER(dev->l2opt, t3_init_l2t(l2t_capacity));
if (!L2DATA(dev))
goto out_free;
@@ -1298,16 +1300,24 @@
out_free_l2t:
t3_free_l2t(L2DATA(dev));
- L2DATA(dev) = NULL;
+ rcu_assign_pointer(dev->l2opt, NULL);
out_free:
kfree(t);
return err;
}
+static void clean_l2_data(struct rcu_head *head)
+{
+ struct l2t_data *d = container_of(head, struct l2t_data, rcu_head);
+ t3_free_l2t(d);
+}
+
+
void cxgb3_offload_deactivate(struct adapter *adapter)
{
struct t3cdev *tdev = &adapter->tdev;
struct t3c_data *t = T3C_DATA(tdev);
+ struct l2t_data *d;
remove_adapter(adapter);
if (list_empty(&adapter_list))
@@ -1315,8 +1325,11 @@
free_tid_maps(&t->tid_maps);
T3C_DATA(tdev) = NULL;
- t3_free_l2t(L2DATA(tdev));
- L2DATA(tdev) = NULL;
+ rcu_read_lock();
+ d = L2DATA(tdev);
+ rcu_read_unlock();
+ rcu_assign_pointer(tdev->l2opt, NULL);
+ call_rcu(&d->rcu_head, clean_l2_data);
if (t->nofail_skb)
kfree_skb(t->nofail_skb);
kfree(t);
diff --git a/drivers/net/cxgb3/l2t.c b/drivers/net/cxgb3/l2t.c
index f452c40..4154097 100644
--- a/drivers/net/cxgb3/l2t.c
+++ b/drivers/net/cxgb3/l2t.c
@@ -300,14 +300,21 @@
struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
struct net_device *dev)
{
- struct l2t_entry *e;
- struct l2t_data *d = L2DATA(cdev);
+ struct l2t_entry *e = NULL;
+ struct l2t_data *d;
+ int hash;
u32 addr = *(u32 *) neigh->primary_key;
int ifidx = neigh->dev->ifindex;
- int hash = arp_hash(addr, ifidx, d);
struct port_info *p = netdev_priv(dev);
int smt_idx = p->port_id;
+ rcu_read_lock();
+ d = L2DATA(cdev);
+ if (!d)
+ goto done_rcu;
+
+ hash = arp_hash(addr, ifidx, d);
+
write_lock_bh(&d->lock);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx &&
@@ -338,6 +345,8 @@
}
done:
write_unlock_bh(&d->lock);
+done_rcu:
+ rcu_read_unlock();
return e;
}
diff --git a/drivers/net/cxgb3/l2t.h b/drivers/net/cxgb3/l2t.h
index 7a12d52e..c5f5479 100644
--- a/drivers/net/cxgb3/l2t.h
+++ b/drivers/net/cxgb3/l2t.h
@@ -76,6 +76,7 @@
atomic_t nfree; /* number of free entries */
rwlock_t lock;
struct l2t_entry l2tab[0];
+ struct rcu_head rcu_head; /* to handle rcu cleanup */
};
typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
@@ -99,7 +100,7 @@
/*
* Getting to the L2 data from an offload device.
*/
-#define L2DATA(dev) ((dev)->l2opt)
+#define L2DATA(cdev) (rcu_dereference((cdev)->l2opt))
#define W_TCB_L2T_IX 0
#define S_TCB_L2T_IX 7
@@ -126,15 +127,22 @@
return t3_l2t_send_slow(dev, skb, e);
}
-static inline void l2t_release(struct l2t_data *d, struct l2t_entry *e)
+static inline void l2t_release(struct t3cdev *t, struct l2t_entry *e)
{
- if (atomic_dec_and_test(&e->refcnt))
+ struct l2t_data *d;
+
+ rcu_read_lock();
+ d = L2DATA(t);
+
+ if (atomic_dec_and_test(&e->refcnt) && d)
t3_l2e_free(d, e);
+
+ rcu_read_unlock();
}
static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
{
- if (atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
+ if (d && atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
atomic_dec(&d->nfree);
}
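The cxgb3 series converts dev->l2opt into an RCU-managed pointer: readers bracket L2DATA() with rcu_read_lock(), and teardown publishes NULL before freeing the old table via call_rcu(). The canonical shape of that lifecycle (kernel RCU API; the payload struct is a stand-in for l2t_data):

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct payload {
	struct rcu_head rcu_head;
	/* ... table data ... */
};

struct owner { struct payload __rcu *ptr; };

static void payload_free_rcu(struct rcu_head *head)
{
	kfree(container_of(head, struct payload, rcu_head));
}

static void reader(struct owner *o)
{
	struct payload *p;

	rcu_read_lock();
	p = rcu_dereference(o->ptr);	/* may be NULL after teardown */
	if (p)
		; /* use p only inside the read-side critical section */
	rcu_read_unlock();
}

static void teardown(struct owner *o)
{
	struct payload *p = rcu_dereference_protected(o->ptr, 1);

	rcu_assign_pointer(o->ptr, NULL);	/* unpublish first */
	if (p)
		call_rcu(&p->rcu_head, payload_free_rcu); /* free after grace period */
}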
diff --git a/drivers/net/cxgb4/cxgb4_main.c b/drivers/net/cxgb4/cxgb4_main.c
index c9957b7..b4efa29 100644
--- a/drivers/net/cxgb4/cxgb4_main.c
+++ b/drivers/net/cxgb4/cxgb4_main.c
@@ -3712,6 +3712,9 @@
setup_debugfs(adapter);
}
+ /* PCIe EEH recovery on powerpc platforms needs fundamental reset */
+ pdev->needs_freset = 1;
+
if (is_offload(adapter))
attach_ulds(adapter);
diff --git a/drivers/net/e1000e/netdev.c b/drivers/net/e1000e/netdev.c
index 2198e61..07031af 100644
--- a/drivers/net/e1000e/netdev.c
+++ b/drivers/net/e1000e/netdev.c
@@ -47,7 +47,7 @@
#include <linux/if_vlan.h>
#include <linux/cpu.h>
#include <linux/smp.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/pm_runtime.h>
#include <linux/aer.h>
#include <linux/prefetch.h>
diff --git a/drivers/net/ibmveth.c b/drivers/net/ibmveth.c
index 8dd5fcc..d393f1e 100644
--- a/drivers/net/ibmveth.c
+++ b/drivers/net/ibmveth.c
@@ -636,8 +636,8 @@
netdev_err(netdev, "unable to request irq 0x%x, rc %d\n",
netdev->irq, rc);
do {
- rc = h_free_logical_lan(adapter->vdev->unit_address);
- } while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
+ lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
+ } while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));
goto err_out;
}
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index 05172c3..376e3e9 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -239,7 +239,7 @@
dest = macvlan_hash_lookup(port, eth->h_dest);
if (dest && dest->mode == MACVLAN_MODE_BRIDGE) {
/* send to lowerdev first for its network taps */
- vlan->forward(vlan->lowerdev, skb);
+ dev_forward_skb(vlan->lowerdev, skb);
return NET_XMIT_SUCCESS;
}
diff --git a/drivers/net/pch_gbe/pch_gbe_main.c b/drivers/net/pch_gbe/pch_gbe_main.c
index 567ff10..b8b4ba2 100644
--- a/drivers/net/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/pch_gbe/pch_gbe_main.c
@@ -1199,6 +1199,8 @@
iowrite32((int_en & ~PCH_GBE_INT_RX_FIFO_ERR),
&hw->reg->INT_EN);
pch_gbe_stop_receive(adapter);
+ int_st |= ioread32(&hw->reg->INT_ST);
+ int_st = int_st & ioread32(&hw->reg->INT_EN);
}
if (int_st & PCH_GBE_INT_RX_DMA_ERR)
adapter->stats.intr_rx_dma_err_count++;
@@ -1218,14 +1220,11 @@
/* Set Pause packet */
pch_gbe_mac_set_pause_packet(hw);
}
- if ((int_en & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))
- == 0) {
- return IRQ_HANDLED;
- }
}
/* When request status is Receive interruption */
- if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))) {
+ if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT)) ||
+ (adapter->rx_stop_flag == true)) {
if (likely(napi_schedule_prep(&adapter->napi))) {
/* Enable only Rx Descriptor empty */
atomic_inc(&adapter->irq_sem);
@@ -1385,7 +1384,7 @@
struct sk_buff *skb;
unsigned int i;
unsigned int cleaned_count = 0;
- bool cleaned = false;
+ bool cleaned = true;
pr_debug("next_to_clean : %d\n", tx_ring->next_to_clean);
@@ -1396,7 +1395,6 @@
while ((tx_desc->gbec_status & DSC_INIT16) == 0x0000) {
pr_debug("gbec_status:0x%04x\n", tx_desc->gbec_status);
- cleaned = true;
buffer_info = &tx_ring->buffer_info[i];
skb = buffer_info->skb;
@@ -1439,8 +1437,10 @@
tx_desc = PCH_GBE_TX_DESC(*tx_ring, i);
/* weight of a sort for tx, to avoid endless transmit cleanup */
- if (cleaned_count++ == PCH_GBE_TX_WEIGHT)
+ if (cleaned_count++ == PCH_GBE_TX_WEIGHT) {
+ cleaned = false;
break;
+ }
}
pr_debug("called pch_gbe_unmap_and_free_tx_resource() %d count\n",
cleaned_count);
@@ -2168,7 +2168,6 @@
{
struct pch_gbe_adapter *adapter =
container_of(napi, struct pch_gbe_adapter, napi);
- struct net_device *netdev = adapter->netdev;
int work_done = 0;
bool poll_end_flag = false;
bool cleaned = false;
@@ -2176,33 +2175,32 @@
pr_debug("budget : %d\n", budget);
- /* Keep link state information with original netdev */
- if (!netif_carrier_ok(netdev)) {
+ pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
+ cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
+
+ if (!cleaned)
+ work_done = budget;
+ /* If no Tx and not enough Rx work done,
+ * exit the polling mode
+ */
+ if (work_done < budget)
poll_end_flag = true;
- } else {
- pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
+
+ if (poll_end_flag) {
+ napi_complete(napi);
+ if (adapter->rx_stop_flag) {
+ adapter->rx_stop_flag = false;
+ pch_gbe_start_receive(&adapter->hw);
+ }
+ pch_gbe_irq_enable(adapter);
+ } else
if (adapter->rx_stop_flag) {
adapter->rx_stop_flag = false;
pch_gbe_start_receive(&adapter->hw);
int_en = ioread32(&adapter->hw.reg->INT_EN);
iowrite32((int_en | PCH_GBE_INT_RX_FIFO_ERR),
- &adapter->hw.reg->INT_EN);
+ &adapter->hw.reg->INT_EN);
}
- cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
-
- if (cleaned)
- work_done = budget;
- /* If no Tx and not enough Rx work done,
- * exit the polling mode
- */
- if ((work_done < budget) || !netif_running(netdev))
- poll_end_flag = true;
- }
-
- if (poll_end_flag) {
- napi_complete(napi);
- pch_gbe_irq_enable(adapter);
- }
pr_debug("poll_end_flag : %d work_done : %d budget : %d\n",
poll_end_flag, work_done, budget);
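The reworked poll routine converges on the standard NAPI contract: if TX cleanup could not finish within its weight, report the full budget so the core polls again; only when work_done < budget call napi_complete() and re-enable interrupts. The contract in outline (napi_complete() is the real API; the other helpers are stand-ins for the driver specifics):

extern void clean_rx(struct napi_struct *napi, int *work_done, int budget);
extern bool clean_tx(struct napi_struct *napi);	/* false if work remains */
extern void enable_device_irqs(struct napi_struct *napi);

static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	clean_rx(napi, &work_done, budget);

	if (!clean_tx(napi))
		work_done = budget;	/* TX pending: ask to be polled again */

	if (work_done < budget) {
		napi_complete(napi);	/* quiescent: leave polled mode */
		enable_device_irqs(napi);
	}
	return work_done;
}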
diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
index cb6e0b4..edd7304 100644
--- a/drivers/net/phy/dp83640.c
+++ b/drivers/net/phy/dp83640.c
@@ -589,7 +589,7 @@
prune_rx_ts(dp83640);
if (list_empty(&dp83640->rxpool)) {
- pr_warning("dp83640: rx timestamp pool is empty\n");
+ pr_debug("dp83640: rx timestamp pool is empty\n");
goto out;
}
rxts = list_first_entry(&dp83640->rxpool, struct rxts, list);
@@ -612,7 +612,7 @@
skb = skb_dequeue(&dp83640->tx_queue);
if (!skb) {
- pr_warning("dp83640: have timestamp but tx_queue empty\n");
+ pr_debug("dp83640: have timestamp but tx_queue empty\n");
return;
}
ns = phy2txts(phy_txts);
diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
index ce395fe..f1c435b 100644
--- a/drivers/net/usb/usbnet.c
+++ b/drivers/net/usb/usbnet.c
@@ -1470,7 +1470,7 @@
if (!dev->suspend_count++) {
spin_lock_irq(&dev->txq.lock);
/* don't autosuspend while transmitting */
- if (dev->txq.qlen && (message.event & PM_EVENT_AUTO)) {
+ if (dev->txq.qlen && PMSG_IS_AUTO(message)) {
spin_unlock_irq(&dev->txq.lock);
return -EBUSY;
} else {
diff --git a/drivers/net/wimax/i2400m/usb.c b/drivers/net/wimax/i2400m/usb.c
index 298f2b0..9a644d0 100644
--- a/drivers/net/wimax/i2400m/usb.c
+++ b/drivers/net/wimax/i2400m/usb.c
@@ -599,7 +599,7 @@
*
* As well, the device might refuse going to sleep for whichever
* reason. In this case we just fail. For system suspend/hibernate,
- * we *can't* fail. We check PM_EVENT_AUTO to see if the
+ * we *can't* fail. We check PMSG_IS_AUTO to see if the
* suspend call comes from the USB stack or from the system and act
* in consequence.
*
@@ -615,7 +615,7 @@
struct i2400m *i2400m = &i2400mu->i2400m;
#ifdef CONFIG_PM
- if (pm_msg.event & PM_EVENT_AUTO)
+ if (PMSG_IS_AUTO(pm_msg))
is_autosuspend = 1;
#endif
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
index 2339728..3e69c63 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
+++ b/drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
@@ -1514,7 +1514,7 @@
{0x00008258, 0x00000000},
{0x0000825c, 0x40000000},
{0x00008260, 0x00080922},
- {0x00008264, 0x9bc00010},
+ {0x00008264, 0x9d400010},
{0x00008268, 0xffffffff},
{0x0000826c, 0x0000ffff},
{0x00008270, 0x00000000},
diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
index 9a48501..4c21f8c 100644
--- a/drivers/net/wireless/ath/ath9k/recv.c
+++ b/drivers/net/wireless/ath/ath9k/recv.c
@@ -205,14 +205,22 @@
static void ath_rx_edma_cleanup(struct ath_softc *sc)
{
+ struct ath_hw *ah = sc->sc_ah;
+ struct ath_common *common = ath9k_hw_common(ah);
struct ath_buf *bf;
ath_rx_remove_buffer(sc, ATH9K_RX_QUEUE_LP);
ath_rx_remove_buffer(sc, ATH9K_RX_QUEUE_HP);
list_for_each_entry(bf, &sc->rx.rxbuf, list) {
- if (bf->bf_mpdu)
+ if (bf->bf_mpdu) {
+ dma_unmap_single(sc->dev, bf->bf_buf_addr,
+ common->rx_bufsize,
+ DMA_BIDIRECTIONAL);
dev_kfree_skb_any(bf->bf_mpdu);
+ bf->bf_buf_addr = 0;
+ bf->bf_mpdu = NULL;
+ }
}
INIT_LIST_HEAD(&sc->rx.rxbuf);
diff --git a/drivers/net/wireless/ipw2x00/ipw2100.c b/drivers/net/wireless/ipw2x00/ipw2100.c
index ef9ad79..127e9c6 100644
--- a/drivers/net/wireless/ipw2x00/ipw2100.c
+++ b/drivers/net/wireless/ipw2x00/ipw2100.c
@@ -161,7 +161,7 @@
#include <linux/firmware.h>
#include <linux/acpi.h>
#include <linux/ctype.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <net/lib80211.h>
@@ -174,7 +174,7 @@
#define DRV_DESCRIPTION "Intel(R) PRO/Wireless 2100 Network Driver"
#define DRV_COPYRIGHT "Copyright(c) 2003-2006 Intel Corporation"
-static struct pm_qos_request_list ipw2100_pm_qos_req;
+static struct pm_qos_request ipw2100_pm_qos_req;
/* Debugging stuff */
#ifdef CONFIG_IPW2100_DEBUG
diff --git a/drivers/net/wireless/iwlegacy/iwl-core.c b/drivers/net/wireless/iwlegacy/iwl-core.c
index 35cd2537..e5971fe 100644
--- a/drivers/net/wireless/iwlegacy/iwl-core.c
+++ b/drivers/net/wireless/iwlegacy/iwl-core.c
@@ -937,7 +937,7 @@
&priv->contexts[IWL_RXON_CTX_BSS]);
#endif
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
/* Keep the restart process from trying to send host
* commands by clearing the INIT status bit */
@@ -1746,7 +1746,7 @@
/* Set the FW error flag -- cleared on iwl_down */
set_bit(STATUS_FW_ERROR, &priv->status);
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
/*
* Keep the restart process from trying to send host
* commands by clearing the INIT status bit
diff --git a/drivers/net/wireless/iwlegacy/iwl-hcmd.c b/drivers/net/wireless/iwlegacy/iwl-hcmd.c
index 62b4b09..ce1fc9f 100644
--- a/drivers/net/wireless/iwlegacy/iwl-hcmd.c
+++ b/drivers/net/wireless/iwlegacy/iwl-hcmd.c
@@ -167,7 +167,7 @@
goto out;
}
- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+ ret = wait_event_timeout(priv->wait_command_queue,
!test_bit(STATUS_HCMD_ACTIVE, &priv->status),
HOST_COMPLETE_TIMEOUT);
if (!ret) {
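
Dropping the _interruptible variants here (and throughout the iwlegacy hunks below) matters because a pending signal would otherwise abort the wait with -ERESTARTSYS while the firmware command is still in flight, leaving STATUS_HCMD_ACTIVE set behind. wait_event_timeout() returns only on the condition or on timeout, and the completion side pairs with plain wake_up(). A sketch of the resulting contract:

/* waiter: ret is 0 on timeout, remaining jiffies otherwise */
ret = wait_event_timeout(priv->wait_command_queue,
			 !test_bit(STATUS_HCMD_ACTIVE, &priv->status),
			 HOST_COMPLETE_TIMEOUT);

/* completer (response handling path): clear the flag, then wake */
clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
wake_up(&priv->wait_command_queue);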
diff --git a/drivers/net/wireless/iwlegacy/iwl-tx.c b/drivers/net/wireless/iwlegacy/iwl-tx.c
index 4fff995..ef9e268 100644
--- a/drivers/net/wireless/iwlegacy/iwl-tx.c
+++ b/drivers/net/wireless/iwlegacy/iwl-tx.c
@@ -625,6 +625,8 @@
cmd = txq->cmd[cmd_index];
meta = &txq->meta[cmd_index];
+ txq->time_stamp = jiffies;
+
pci_unmap_single(priv->pci_dev,
dma_unmap_addr(meta, mapping),
dma_unmap_len(meta, len),
@@ -645,7 +647,7 @@
clear_bit(STATUS_HCMD_ACTIVE, &priv->status);
IWL_DEBUG_INFO(priv, "Clearing HCMD_ACTIVE for command %s\n",
iwl_legacy_get_cmd_string(cmd->hdr.cmd));
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
}
/* Mark as unmapped */
diff --git a/drivers/net/wireless/iwlegacy/iwl3945-base.c b/drivers/net/wireless/iwlegacy/iwl3945-base.c
index 795826a..66ee1562 100644
--- a/drivers/net/wireless/iwlegacy/iwl3945-base.c
+++ b/drivers/net/wireless/iwlegacy/iwl3945-base.c
@@ -841,7 +841,7 @@
wiphy_rfkill_set_hw_state(priv->hw->wiphy,
test_bit(STATUS_RF_KILL_HW, &priv->status));
else
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
}
/**
@@ -2269,7 +2269,7 @@
iwl3945_reg_txpower_periodic(priv);
IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
return;
@@ -2300,7 +2300,7 @@
iwl_legacy_clear_driver_stations(priv);
/* Unblock any waiting calls */
- wake_up_interruptible_all(&priv->wait_command_queue);
+ wake_up_all(&priv->wait_command_queue);
/* Wipe out the EXIT_PENDING status bit if we are not actually
* exiting the module */
@@ -2853,7 +2853,7 @@
/* Wait for START_ALIVE from ucode. Otherwise callbacks from
* mac80211 will not be run successfully. */
- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+ ret = wait_event_timeout(priv->wait_command_queue,
test_bit(STATUS_READY, &priv->status),
UCODE_READY_TIMEOUT);
if (!ret) {
diff --git a/drivers/net/wireless/iwlegacy/iwl4965-base.c b/drivers/net/wireless/iwlegacy/iwl4965-base.c
index 1433466..aa0c253 100644
--- a/drivers/net/wireless/iwlegacy/iwl4965-base.c
+++ b/drivers/net/wireless/iwlegacy/iwl4965-base.c
@@ -576,7 +576,7 @@
wiphy_rfkill_set_hw_state(priv->hw->wiphy,
test_bit(STATUS_RF_KILL_HW, &priv->status));
else
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
}
/**
@@ -926,7 +926,7 @@
handled |= CSR_INT_BIT_FH_TX;
/* Wake up uCode load routine, now that load is complete */
priv->ucode_write_complete = 1;
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
}
if (inta & ~handled) {
@@ -1795,7 +1795,7 @@
iwl4965_rf_kill_ct_config(priv);
IWL_DEBUG_INFO(priv, "ALIVE processing complete.\n");
- wake_up_interruptible(&priv->wait_command_queue);
+ wake_up(&priv->wait_command_queue);
iwl_legacy_power_update_mode(priv, true);
IWL_DEBUG_INFO(priv, "Updated power mode\n");
@@ -1828,7 +1828,7 @@
iwl_legacy_clear_driver_stations(priv);
/* Unblock any waiting calls */
- wake_up_interruptible_all(&priv->wait_command_queue);
+ wake_up_all(&priv->wait_command_queue);
/* Wipe out the EXIT_PENDING status bit if we are not actually
* exiting the module */
@@ -2266,7 +2266,7 @@
/* Wait for START_ALIVE from Run Time ucode. Otherwise callbacks from
* mac80211 will not be run successfully. */
- ret = wait_event_interruptible_timeout(priv->wait_command_queue,
+ ret = wait_event_timeout(priv->wait_command_queue,
test_bit(STATUS_READY, &priv->status),
UCODE_READY_TIMEOUT);
if (!ret) {
diff --git a/drivers/net/wireless/iwlwifi/iwl-scan.c b/drivers/net/wireless/iwlwifi/iwl-scan.c
index dd6937e..77e528f 100644
--- a/drivers/net/wireless/iwlwifi/iwl-scan.c
+++ b/drivers/net/wireless/iwlwifi/iwl-scan.c
@@ -405,31 +405,33 @@
mutex_lock(&priv->mutex);
- if (test_bit(STATUS_SCANNING, &priv->status) &&
- priv->scan_type != IWL_SCAN_NORMAL) {
- IWL_DEBUG_SCAN(priv, "Scan already in progress.\n");
- ret = -EAGAIN;
- goto out_unlock;
- }
-
- /* mac80211 will only ask for one band at a time */
- priv->scan_request = req;
- priv->scan_vif = vif;
-
/*
* If an internal scan is in progress, just set
* up the scan_request as per above.
*/
if (priv->scan_type != IWL_SCAN_NORMAL) {
- IWL_DEBUG_SCAN(priv, "SCAN request during internal scan\n");
+ IWL_DEBUG_SCAN(priv,
+ "SCAN request during internal scan - defer\n");
+ priv->scan_request = req;
+ priv->scan_vif = vif;
ret = 0;
- } else
+ } else {
+ priv->scan_request = req;
+ priv->scan_vif = vif;
+ /*
+ * mac80211 will only ask for one band at a time
+ * so using channels[0] here is ok
+ */
ret = iwl_scan_initiate(priv, vif, IWL_SCAN_NORMAL,
req->channels[0]->band);
+ if (ret) {
+ priv->scan_request = NULL;
+ priv->scan_vif = NULL;
+ }
+ }
IWL_DEBUG_MAC80211(priv, "leave\n");
-out_unlock:
mutex_unlock(&priv->mutex);
return ret;
diff --git a/drivers/net/wireless/rtlwifi/usb.c b/drivers/net/wireless/rtlwifi/usb.c
index 8b1cef0..4bf3cf4 100644
--- a/drivers/net/wireless/rtlwifi/usb.c
+++ b/drivers/net/wireless/rtlwifi/usb.c
@@ -863,6 +863,7 @@
u8 tid = 0;
u16 seq_number = 0;
+ memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc));
if (ieee80211_is_auth(fc)) {
RT_TRACE(rtlpriv, COMP_SEND, DBG_DMESG, ("MAC80211_LINKING\n"));
rtl_ips_nic_on(hw);
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 0ca86f9..1825629 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -327,12 +327,12 @@
xenvif_get(vif);
rtnl_lock();
- if (netif_running(vif->dev))
- xenvif_up(vif);
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
netdev_update_features(vif->dev);
netif_carrier_on(vif->dev);
+ if (netif_running(vif->dev))
+ xenvif_up(vif);
rtnl_unlock();
return 0;
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 4e84fd4..e9651f0 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -77,7 +77,7 @@
unsigned long pci_hotplug_io_size = DEFAULT_HOTPLUG_IO_SIZE;
unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;
-enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_SAFE;
+enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_TUNE_OFF;
/*
* The default CLS is used if arch didn't set CLS explicitly and not
@@ -3568,10 +3568,14 @@
pci_hotplug_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "hpmemsize=", 10)) {
pci_hotplug_mem_size = memparse(str + 10, &str);
+ } else if (!strncmp(str, "pcie_bus_tune_off", 17)) {
+ pcie_bus_config = PCIE_BUS_TUNE_OFF;
} else if (!strncmp(str, "pcie_bus_safe", 13)) {
pcie_bus_config = PCIE_BUS_SAFE;
} else if (!strncmp(str, "pcie_bus_perf", 13)) {
pcie_bus_config = PCIE_BUS_PERFORMANCE;
+ } else if (!strncmp(str, "pcie_bus_peer2peer", 18)) {
+ pcie_bus_config = PCIE_BUS_PEER2PEER;
} else {
printk(KERN_ERR "PCI: Unknown option `%s'\n",
str);
diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index f3f94a5..6ab6bd3 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -1458,12 +1458,24 @@
*/
void pcie_bus_configure_settings(struct pci_bus *bus, u8 mpss)
{
- u8 smpss = mpss;
+ u8 smpss;
if (!pci_is_pcie(bus->self))
return;
+ if (pcie_bus_config == PCIE_BUS_TUNE_OFF)
+ return;
+
+ /* FIXME - Peer to peer DMA is possible, though the endpoint would need
+	 * to be aware of the MPS of the destination. To work around this,
+ * simply force the MPS of the entire system to the smallest possible.
+ */
+ if (pcie_bus_config == PCIE_BUS_PEER2PEER)
+ smpss = 0;
+
if (pcie_bus_config == PCIE_BUS_SAFE) {
+ smpss = mpss;
+
pcie_find_smpss(bus->self, &smpss);
pci_walk_bus(bus, pcie_find_smpss, &smpss);
}
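
Together with the pci.c hunk above, the order of these checks encodes the policy: PCIE_BUS_TUNE_OFF returns before any Max Payload Size (MPS) register is touched, PCIE_BUS_PEER2PEER forces the smallest setting everywhere, and PCIE_BUS_SAFE scans the fabric for the smallest value any device supports. smpss is an MPS code, not a byte count; a one-line sketch of the usual PCIe encoding, assuming code 0 means 128 bytes:

/* PCIe Max_Payload_Size encoding: 0 -> 128B, 1 -> 256B, ... 5 -> 4096B */
static inline unsigned int mps_code_to_bytes(u8 code)
{
	return 128 << code;
}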
diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
index cbde448..eb3140e 100644
--- a/drivers/s390/cio/cio.c
+++ b/drivers/s390/cio/cio.c
@@ -654,8 +654,8 @@
static int console_subchannel_in_use;
/*
- * Use tpi to get a pending interrupt, call the interrupt handler and
- * return a pointer to the subchannel structure.
+ * Use cio_tpi to get a pending interrupt and call the interrupt handler.
+ * Return non-zero if an interrupt was processed, zero otherwise.
*/
static int cio_tpi(void)
{
@@ -667,6 +667,10 @@
tpi_info = (struct tpi_info *)&S390_lowcore.subchannel_id;
if (tpi(NULL) != 1)
return 0;
+ if (tpi_info->adapter_IO) {
+ do_adapter_IO(tpi_info->isc);
+ return 1;
+ }
irb = (struct irb *)&S390_lowcore.irb;
/* Store interrupt response block to lowcore. */
if (tsch(tpi_info->schid, irb) != 0)
diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
index b7bd5b0..3868ab2 100644
--- a/drivers/scsi/3w-9xxx.c
+++ b/drivers/scsi/3w-9xxx.c
@@ -1800,10 +1800,12 @@
switch (retval) {
case SCSI_MLQUEUE_HOST_BUSY:
twa_free_request_id(tw_dev, request_id);
+ twa_unmap_scsi_data(tw_dev, request_id);
break;
case 1:
tw_dev->state[request_id] = TW_S_COMPLETED;
twa_free_request_id(tw_dev, request_id);
+ twa_unmap_scsi_data(tw_dev, request_id);
SCpnt->result = (DID_ERROR << 16);
done(SCpnt);
retval = 0;
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 3c08f53..6153a66 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -88,7 +88,7 @@
obj-$(CONFIG_PCMCIA_QLOGIC) += qlogicfas408.o
obj-$(CONFIG_SCSI_QLOGIC_1280) += qla1280.o
obj-$(CONFIG_SCSI_QLA_FC) += qla2xxx/
-obj-$(CONFIG_SCSI_QLA_ISCSI) += qla4xxx/
+obj-$(CONFIG_SCSI_QLA_ISCSI) += libiscsi.o qla4xxx/
obj-$(CONFIG_SCSI_LPFC) += lpfc/
obj-$(CONFIG_SCSI_BFA_FC) += bfa/
obj-$(CONFIG_SCSI_PAS16) += pas16.o
diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
index e7d0d47..e5f2d7d 100644
--- a/drivers/scsi/aacraid/commsup.c
+++ b/drivers/scsi/aacraid/commsup.c
@@ -1283,6 +1283,8 @@
kfree(aac->queues);
aac->queues = NULL;
free_irq(aac->pdev->irq, aac);
+ if (aac->msi)
+ pci_disable_msi(aac->pdev);
kfree(aac->fsa_dev);
aac->fsa_dev = NULL;
quirks = aac_get_driver_ident(index)->quirks;
diff --git a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
index bd22041..f586448 100644
--- a/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
+++ b/drivers/scsi/cxgbi/cxgb3i/cxgb3i.c
@@ -913,7 +913,7 @@
struct t3cdev *t3dev = (struct t3cdev *)csk->cdev->lldev;
if (csk->l2t) {
- l2t_release(L2DATA(t3dev), csk->l2t);
+ l2t_release(t3dev, csk->l2t);
csk->l2t = NULL;
cxgbi_sock_put(csk);
}
diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
index f84084b..16ad97d 100644
--- a/drivers/scsi/libsas/sas_expander.c
+++ b/drivers/scsi/libsas/sas_expander.c
@@ -1721,7 +1721,7 @@
list_for_each_entry(ch, &ex->children, siblings) {
if (ch->dev_type == EDGE_DEV || ch->dev_type == FANOUT_DEV) {
res = sas_find_bcast_dev(ch, src_dev);
- if (src_dev)
+ if (*src_dev)
return res;
}
}
@@ -1769,10 +1769,12 @@
sas_disable_routing(parent, phy->attached_sas_addr);
}
memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
- sas_port_delete_phy(phy->port, phy->phy);
- if (phy->port->num_phys == 0)
- sas_port_delete(phy->port);
- phy->port = NULL;
+ if (phy->port) {
+ sas_port_delete_phy(phy->port, phy->phy);
+ if (phy->port->num_phys == 0)
+ sas_port_delete(phy->port);
+ phy->port = NULL;
+ }
}
static int sas_discover_bfs_by_root_level(struct domain_device *root,
diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
index 4cace3f..1e69527 100644
--- a/drivers/scsi/qla2xxx/qla_os.c
+++ b/drivers/scsi/qla2xxx/qla_os.c
@@ -1328,10 +1328,9 @@
qla2x00_sp_compl(ha, sp);
} else {
ctx = sp->ctx;
- if (ctx->type == SRB_LOGIN_CMD ||
- ctx->type == SRB_LOGOUT_CMD) {
- ctx->u.iocb_cmd->free(sp);
- } else {
+ if (ctx->type == SRB_ELS_CMD_RPT ||
+ ctx->type == SRB_ELS_CMD_HST ||
+ ctx->type == SRB_CT_CMD) {
struct fc_bsg_job *bsg_job =
ctx->u.bsg_job;
if (bsg_job->request->msgcode
@@ -1343,6 +1342,8 @@
kfree(sp->ctx);
mempool_free(sp,
ha->srb_mempool);
+ } else {
+ ctx->u.iocb_cmd->free(sp);
}
}
}
diff --git a/drivers/spi/spi-topcliff-pch.c b/drivers/spi/spi-topcliff-pch.c
index 1d23f38..6a80749 100644
--- a/drivers/spi/spi-topcliff-pch.c
+++ b/drivers/spi/spi-topcliff-pch.c
@@ -50,6 +50,8 @@
#define PCH_RX_THOLD 7
#define PCH_RX_THOLD_MAX 15
+#define PCH_TX_THOLD 2
+
#define PCH_MAX_BAUDRATE 5000000
#define PCH_MAX_FIFO_DEPTH 16
@@ -58,6 +60,7 @@
#define PCH_SLEEP_TIME 10
#define SSN_LOW 0x02U
+#define SSN_HIGH 0x03U
#define SSN_NO_CONTROL 0x00U
#define PCH_MAX_CS 0xFF
#define PCI_DEVICE_ID_GE_SPI 0x8816
@@ -316,16 +319,19 @@
/* if transfer complete interrupt */
if (reg_spsr_val & SPSR_FI_BIT) {
- if (tx_index < bpw_len)
+ if ((tx_index == bpw_len) && (rx_index == tx_index)) {
+ /* disable interrupts */
+ pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
+
+ /* transfer is completed;
+ inform pch_spi_process_messages */
+ data->transfer_complete = true;
+ data->transfer_active = false;
+ wake_up(&data->wait);
+ } else {
dev_err(&data->master->dev,
"%s : Transfer is not completed", __func__);
- /* disable interrupts */
- pch_spi_setclr_reg(data->master, PCH_SPCR, 0, PCH_ALL);
-
- /* transfer is completed;inform pch_spi_process_messages */
- data->transfer_complete = true;
- data->transfer_active = false;
- wake_up(&data->wait);
+ }
}
}
@@ -348,16 +354,26 @@
"%s returning due to suspend\n", __func__);
return IRQ_NONE;
}
- if (data->use_dma)
- return IRQ_NONE;
io_remap_addr = data->io_remap_addr;
spsr = io_remap_addr + PCH_SPSR;
reg_spsr_val = ioread32(spsr);
- if (reg_spsr_val & SPSR_ORF_BIT)
- dev_err(&board_dat->pdev->dev, "%s Over run error", __func__);
+ if (reg_spsr_val & SPSR_ORF_BIT) {
+ dev_err(&board_dat->pdev->dev, "%s Over run error\n", __func__);
+ if (data->current_msg->complete != 0) {
+ data->transfer_complete = true;
+ data->current_msg->status = -EIO;
+ data->current_msg->complete(data->current_msg->context);
+ data->bcurrent_msg_processing = false;
+ data->current_msg = NULL;
+ data->cur_trans = NULL;
+ }
+ }
+
+ if (data->use_dma)
+ return IRQ_NONE;
/* Check if the interrupt is for SPI device */
if (reg_spsr_val & (SPSR_FI_BIT | SPSR_RFI_BIT)) {
@@ -756,10 +772,6 @@
wait_event_interruptible(data->wait, data->transfer_complete);
- pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
- dev_dbg(&data->master->dev,
- "%s:no more control over SSN-writing 0 to SSNXCR.", __func__);
-
/* clear all interrupts */
pch_spi_writereg(data->master, PCH_SPSR,
pch_spi_readreg(data->master, PCH_SPSR));
@@ -815,10 +827,11 @@
}
}
-static void pch_spi_start_transfer(struct pch_spi_data *data)
+static int pch_spi_start_transfer(struct pch_spi_data *data)
{
struct pch_spi_dma_ctrl *dma;
unsigned long flags;
+ int rtn;
dma = &data->dma;
@@ -833,19 +846,23 @@
initiating the transfer. */
dev_dbg(&data->master->dev,
"%s:waiting for transfer to get over\n", __func__);
- wait_event_interruptible(data->wait, data->transfer_complete);
+ rtn = wait_event_interruptible_timeout(data->wait,
+ data->transfer_complete,
+ msecs_to_jiffies(2 * HZ));
dma_sync_sg_for_cpu(&data->master->dev, dma->sg_rx_p, dma->nent,
DMA_FROM_DEVICE);
+
+ dma_sync_sg_for_cpu(&data->master->dev, dma->sg_tx_p, dma->nent,
+ DMA_FROM_DEVICE);
+ memset(data->dma.tx_buf_virt, 0, PAGE_SIZE);
+
async_tx_ack(dma->desc_rx);
async_tx_ack(dma->desc_tx);
kfree(dma->sg_tx_p);
kfree(dma->sg_rx_p);
spin_lock_irqsave(&data->lock, flags);
- pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
- dev_dbg(&data->master->dev,
- "%s:no more control over SSN-writing 0 to SSNXCR.", __func__);
/* clear fifo threshold, disable interrupts, disable SPI transfer */
pch_spi_setclr_reg(data->master, PCH_SPCR, 0,
@@ -858,6 +875,8 @@
pch_spi_clear_fifo(data->master);
spin_unlock_irqrestore(&data->lock, flags);
+
+ return rtn;
}
static void pch_dma_rx_complete(void *arg)
@@ -1023,8 +1042,7 @@
/* set receive fifo threshold and transmit fifo threshold */
pch_spi_setclr_reg(data->master, PCH_SPCR,
((size - 1) << SPCR_RFIC_FIELD) |
- ((PCH_MAX_FIFO_DEPTH - PCH_DMA_TRANS_SIZE) <<
- SPCR_TFIC_FIELD),
+ (PCH_TX_THOLD << SPCR_TFIC_FIELD),
MASK_RFIC_SPCR_BITS | MASK_TFIC_SPCR_BITS);
spin_unlock_irqrestore(&data->lock, flags);
@@ -1035,13 +1053,20 @@
/* offset, length setting */
sg = dma->sg_rx_p;
for (i = 0; i < num; i++, sg++) {
- if (i == 0) {
- sg->offset = 0;
+ if (i == (num - 2)) {
+ sg->offset = size * i;
+ sg->offset = sg->offset * (*bpw / 8);
sg_set_page(sg, virt_to_page(dma->rx_buf_virt), rem,
sg->offset);
sg_dma_len(sg) = rem;
+ } else if (i == (num - 1)) {
+ sg->offset = size * (i - 1) + rem;
+ sg->offset = sg->offset * (*bpw / 8);
+ sg_set_page(sg, virt_to_page(dma->rx_buf_virt), size,
+ sg->offset);
+ sg_dma_len(sg) = size;
} else {
- sg->offset = rem + size * (i - 1);
+ sg->offset = size * i;
sg->offset = sg->offset * (*bpw / 8);
sg_set_page(sg, virt_to_page(dma->rx_buf_virt), size,
sg->offset);
@@ -1065,6 +1090,16 @@
dma->desc_rx = desc_rx;
/* TX */
+ if (data->bpw_len > PCH_DMA_TRANS_SIZE) {
+ num = data->bpw_len / PCH_DMA_TRANS_SIZE;
+ size = PCH_DMA_TRANS_SIZE;
+ rem = 16;
+ } else {
+ num = 1;
+ size = data->bpw_len;
+ rem = data->bpw_len;
+ }
+
dma->sg_tx_p = kzalloc(sizeof(struct scatterlist)*num, GFP_ATOMIC);
sg_init_table(dma->sg_tx_p, num); /* Initialize SG table */
/* offset, length setting */
@@ -1162,6 +1197,7 @@
if (data->use_dma)
pch_spi_request_dma(data,
data->current_msg->spi->bits_per_word);
+ pch_spi_writereg(data->master, PCH_SSNXCR, SSN_NO_CONTROL);
do {
/* If we are already processing a message get the next
transfer structure from the message otherwise retrieve
@@ -1184,7 +1220,8 @@
if (data->use_dma) {
pch_spi_handle_dma(data, &bpw);
- pch_spi_start_transfer(data);
+ if (!pch_spi_start_transfer(data))
+ goto out;
pch_spi_copy_rx_data_for_dma(data, bpw);
} else {
pch_spi_set_tx(data, &bpw);
@@ -1222,6 +1259,8 @@
} while (data->cur_trans != NULL);
+out:
+ pch_spi_writereg(data->master, PCH_SSNXCR, SSN_HIGH);
if (data->use_dma)
pch_spi_release_dma(data);
}
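
The rx scatterlist rework above changes where the odd-sized remainder chunk lands: it is now the second-to-last entry, so the final descriptor (whose completion raises the interrupt) always covers a full size-word chunk. A comment-only restatement of the layout produced by the loop, as read from the code above:

/*
 * With num chunks of size words plus a remainder of rem words:
 *   sg[0] .. sg[num-3]:  offset = size * i,             length = size
 *   sg[num-2]:           offset = size * (num-2),       length = rem
 *   sg[num-1]:           offset = size * (num-2) + rem, length = size
 * Offsets are computed in words, then scaled to bytes by *bpw / 8.
 */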
diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
index bd7cc05..a7188a0 100644
--- a/drivers/tty/Kconfig
+++ b/drivers/tty/Kconfig
@@ -60,6 +60,10 @@
If unsure, say Y.
+config VT_CONSOLE_SLEEP
+ def_bool y
+ depends on VT_CONSOLE && PM_SLEEP
+
config HW_CONSOLE
bool
depends on VT && !S390 && !UML
diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
index dac7676..94e6c5c 100644
--- a/drivers/usb/class/cdc-acm.c
+++ b/drivers/usb/class/cdc-acm.c
@@ -1305,7 +1305,7 @@
struct acm *acm = usb_get_intfdata(intf);
int cnt;
- if (message.event & PM_EVENT_AUTO) {
+ if (PMSG_IS_AUTO(message)) {
int b;
spin_lock_irq(&acm->write_lock);
diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
index 2b9ff51..42f180a 100644
--- a/drivers/usb/class/cdc-wdm.c
+++ b/drivers/usb/class/cdc-wdm.c
@@ -798,11 +798,11 @@
dev_dbg(&desc->intf->dev, "wdm%d_suspend\n", intf->minor);
/* if this is an autosuspend the caller does the locking */
- if (!(message.event & PM_EVENT_AUTO))
+ if (!PMSG_IS_AUTO(message))
mutex_lock(&desc->lock);
spin_lock_irq(&desc->iuspin);
- if ((message.event & PM_EVENT_AUTO) &&
+ if (PMSG_IS_AUTO(message) &&
(test_bit(WDM_IN_USE, &desc->flags)
|| test_bit(WDM_RESPONDING, &desc->flags))) {
spin_unlock_irq(&desc->iuspin);
@@ -815,7 +815,7 @@
kill_urbs(desc);
cancel_work_sync(&desc->rxwork);
}
- if (!(message.event & PM_EVENT_AUTO))
+ if (!PMSG_IS_AUTO(message))
mutex_unlock(&desc->lock);
return rv;
diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c
index 34e3da5..e030428 100644
--- a/drivers/usb/core/driver.c
+++ b/drivers/usb/core/driver.c
@@ -1046,8 +1046,7 @@
/* Non-root devices on a full/low-speed bus must wait for their
* companion high-speed root hub, in case a handoff is needed.
*/
- if (!(msg.event & PM_EVENT_AUTO) && udev->parent &&
- udev->bus->hs_companion)
+ if (!PMSG_IS_AUTO(msg) && udev->parent && udev->bus->hs_companion)
device_pm_wait_for_dev(&udev->dev,
&udev->bus->hs_companion->root_hub->dev);
@@ -1075,7 +1074,7 @@
if (driver->suspend) {
status = driver->suspend(intf, msg);
- if (status && !(msg.event & PM_EVENT_AUTO))
+ if (status && !PMSG_IS_AUTO(msg))
dev_err(&intf->dev, "%s error %d\n",
"suspend", status);
} else {
@@ -1189,7 +1188,7 @@
status = usb_suspend_interface(udev, intf, msg);
/* Ignore errors during system sleep transitions */
- if (!(msg.event & PM_EVENT_AUTO))
+ if (!PMSG_IS_AUTO(msg))
status = 0;
if (status != 0)
break;
@@ -1199,7 +1198,7 @@
status = usb_suspend_device(udev, msg);
/* Again, ignore errors during system sleep transitions */
- if (!(msg.event & PM_EVENT_AUTO))
+ if (!PMSG_IS_AUTO(msg))
status = 0;
}
diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
index 73cbbd8..3b9d906 100644
--- a/drivers/usb/core/hcd.c
+++ b/drivers/usb/core/hcd.c
@@ -1961,8 +1961,9 @@
int status;
int old_state = hcd->state;
- dev_dbg(&rhdev->dev, "bus %s%s\n",
- (msg.event & PM_EVENT_AUTO ? "auto-" : ""), "suspend");
+ dev_dbg(&rhdev->dev, "bus %ssuspend, wakeup %d\n",
+ (PMSG_IS_AUTO(msg) ? "auto-" : ""),
+ rhdev->do_remote_wakeup);
if (HCD_DEAD(hcd)) {
dev_dbg(&rhdev->dev, "skipped %s of dead bus\n", "suspend");
return 0;
@@ -1997,8 +1998,8 @@
int status;
int old_state = hcd->state;
- dev_dbg(&rhdev->dev, "usb %s%s\n",
- (msg.event & PM_EVENT_AUTO ? "auto-" : ""), "resume");
+ dev_dbg(&rhdev->dev, "usb %sresume\n",
+ (PMSG_IS_AUTO(msg) ? "auto-" : ""));
if (HCD_DEAD(hcd)) {
dev_dbg(&rhdev->dev, "skipped %s of dead bus\n", "resume");
return 0;
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index a428aa0..13bc832 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -2324,8 +2324,6 @@
int port1 = udev->portnum;
int status;
- // dev_dbg(hub->intfdev, "suspend port %d\n", port1);
-
/* enable remote wakeup when appropriate; this lets the device
* wake up the upstream hub (including maybe the root hub).
*
@@ -2342,7 +2340,7 @@
dev_dbg(&udev->dev, "won't remote wakeup, status %d\n",
status);
/* bail if autosuspend is requested */
- if (msg.event & PM_EVENT_AUTO)
+ if (PMSG_IS_AUTO(msg))
return status;
}
}
@@ -2367,12 +2365,13 @@
USB_CTRL_SET_TIMEOUT);
/* System sleep transitions should never fail */
- if (!(msg.event & PM_EVENT_AUTO))
+ if (!PMSG_IS_AUTO(msg))
status = 0;
} else {
/* device has up to 10 msec to fully suspend */
- dev_dbg(&udev->dev, "usb %ssuspend\n",
- (msg.event & PM_EVENT_AUTO ? "auto-" : ""));
+ dev_dbg(&udev->dev, "usb %ssuspend, wakeup %d\n",
+ (PMSG_IS_AUTO(msg) ? "auto-" : ""),
+ udev->do_remote_wakeup);
usb_set_device_state(udev, USB_STATE_SUSPENDED);
msleep(10);
}
@@ -2523,7 +2522,7 @@
} else {
/* drive resume for at least 20 msec */
dev_dbg(&udev->dev, "usb %sresume\n",
- (msg.event & PM_EVENT_AUTO ? "auto-" : ""));
+ (PMSG_IS_AUTO(msg) ? "auto-" : ""));
msleep(25);
/* Virtual root hubs can trigger on GET_PORT_STATUS to
@@ -2625,7 +2624,7 @@
udev = hdev->children [port1-1];
if (udev && udev->can_submit) {
dev_warn(&intf->dev, "port %d nyet suspended\n", port1);
- if (msg.event & PM_EVENT_AUTO)
+ if (PMSG_IS_AUTO(msg))
return -EBUSY;
}
}
diff --git a/drivers/usb/serial/sierra.c b/drivers/usb/serial/sierra.c
index d5d136a..b18179b 100644
--- a/drivers/usb/serial/sierra.c
+++ b/drivers/usb/serial/sierra.c
@@ -1009,7 +1009,7 @@
struct sierra_intf_private *intfdata;
int b;
- if (message.event & PM_EVENT_AUTO) {
+ if (PMSG_IS_AUTO(message)) {
intfdata = serial->private;
spin_lock_irq(&intfdata->susp_lock);
b = intfdata->in_flight;
diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
index e4fad5e..d555ca9 100644
--- a/drivers/usb/serial/usb_wwan.c
+++ b/drivers/usb/serial/usb_wwan.c
@@ -651,7 +651,7 @@
dbg("%s entered", __func__);
- if (message.event & PM_EVENT_AUTO) {
+ if (PMSG_IS_AUTO(message)) {
spin_lock_irq(&intfdata->susp_lock);
b = intfdata->in_flight;
spin_unlock_irq(&intfdata->susp_lock);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index a381cd2..e4e57d5 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1036,11 +1036,13 @@
* on error we return an unlocked page and the error value
* on success we return a locked page and 0
*/
-static int prepare_uptodate_page(struct page *page, u64 pos)
+static int prepare_uptodate_page(struct page *page, u64 pos,
+ bool force_uptodate)
{
int ret = 0;
- if ((pos & (PAGE_CACHE_SIZE - 1)) && !PageUptodate(page)) {
+ if (((pos & (PAGE_CACHE_SIZE - 1)) || force_uptodate) &&
+ !PageUptodate(page)) {
ret = btrfs_readpage(NULL, page);
if (ret)
return ret;
@@ -1061,7 +1063,7 @@
static noinline int prepare_pages(struct btrfs_root *root, struct file *file,
struct page **pages, size_t num_pages,
loff_t pos, unsigned long first_index,
- size_t write_bytes)
+ size_t write_bytes, bool force_uptodate)
{
struct extent_state *cached_state = NULL;
int i;
@@ -1086,10 +1088,11 @@
}
if (i == 0)
- err = prepare_uptodate_page(pages[i], pos);
+ err = prepare_uptodate_page(pages[i], pos,
+ force_uptodate);
if (i == num_pages - 1)
err = prepare_uptodate_page(pages[i],
- pos + write_bytes);
+ pos + write_bytes, false);
if (err) {
page_cache_release(pages[i]);
faili = i - 1;
@@ -1158,6 +1161,7 @@
size_t num_written = 0;
int nrptrs;
int ret = 0;
+ bool force_page_uptodate = false;
nrptrs = min((iov_iter_count(i) + PAGE_CACHE_SIZE - 1) /
PAGE_CACHE_SIZE, PAGE_CACHE_SIZE /
@@ -1200,7 +1204,8 @@
* contents of pages from loop to loop
*/
ret = prepare_pages(root, file, pages, num_pages,
- pos, first_index, write_bytes);
+ pos, first_index, write_bytes,
+ force_page_uptodate);
if (ret) {
btrfs_delalloc_release_space(inode,
num_pages << PAGE_CACHE_SHIFT);
@@ -1217,12 +1222,15 @@
if (copied < write_bytes)
nrptrs = 1;
- if (copied == 0)
+ if (copied == 0) {
+ force_page_uptodate = true;
dirty_pages = 0;
- else
+ } else {
+ force_page_uptodate = false;
dirty_pages = (copied + offset +
PAGE_CACHE_SIZE - 1) >>
PAGE_CACHE_SHIFT;
+ }
/*
* If we had a short copy we need to release the excess delaloc
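
The new force_page_uptodate flag closes a corner case in the buffered write loop: when the copy from userspace faults and copies zero bytes, the same page is retried, and the retry must read the page fully up to date first so the eventual partial write cannot expose stale page contents. A hedged sketch of the loop shape:

/* inside the buffered-write copy loop (sketch) */
ret = prepare_pages(root, file, pages, num_pages, pos,
		    first_index, write_bytes, force_page_uptodate);
copied = btrfs_copy_from_user(pos, num_pages, write_bytes, pages, i);
/* a zero-byte copy means we faulted: force a full read on the retry */
force_page_uptodate = (copied == 0);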
diff --git a/fs/namei.c b/fs/namei.c
index f478836..0b3138d 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -721,12 +721,6 @@
if (!path->dentry->d_op || !path->dentry->d_op->d_automount)
return -EREMOTE;
- /* We don't want to mount if someone supplied AT_NO_AUTOMOUNT
- * and this is the terminal part of the path.
- */
- if ((flags & LOOKUP_NO_AUTOMOUNT) && !(flags & LOOKUP_PARENT))
- return -EISDIR; /* we actually want to stop here */
-
/* We don't want to mount if someone's just doing a stat -
* unless they're stat'ing a directory and appended a '/' to
* the name.
@@ -739,7 +733,7 @@
* of the daemon to instantiate them before they can be used.
*/
if (!(flags & (LOOKUP_PARENT | LOOKUP_DIRECTORY |
- LOOKUP_OPEN | LOOKUP_CREATE)) &&
+ LOOKUP_OPEN | LOOKUP_CREATE | LOOKUP_AUTOMOUNT)) &&
path->dentry->d_inode)
return -EISDIR;
diff --git a/fs/namespace.c b/fs/namespace.c
index 22bfe82..b4febb2 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -1757,7 +1757,7 @@
return err;
if (!old_name || !*old_name)
return -EINVAL;
- err = kern_path(old_name, LOOKUP_FOLLOW, &old_path);
+ err = kern_path(old_name, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &old_path);
if (err)
return err;
diff --git a/fs/nfs/super.c b/fs/nfs/super.c
index 9b7dd70..5b19b6a 100644
--- a/fs/nfs/super.c
+++ b/fs/nfs/super.c
@@ -2798,7 +2798,7 @@
goto out_put_mnt_ns;
ret = vfs_path_lookup(root_mnt->mnt_root, root_mnt,
- export_path, LOOKUP_FOLLOW, &path);
+ export_path, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &path);
nfs_referral_loop_unprotect();
put_mnt_ns(ns_private);
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index b34bdb2..10b6be3 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -355,7 +355,7 @@
* resolution (think about autofs) and thus deadlocks could arise.
*/
if (cmds == Q_QUOTAON) {
- ret = user_path_at(AT_FDCWD, addr, LOOKUP_FOLLOW, &path);
+ ret = user_path_at(AT_FDCWD, addr, LOOKUP_FOLLOW|LOOKUP_AUTOMOUNT, &path);
if (ret)
pathp = ERR_PTR(ret);
else
diff --git a/fs/stat.c b/fs/stat.c
index ba5316f..78a3aa8 100644
--- a/fs/stat.c
+++ b/fs/stat.c
@@ -81,8 +81,6 @@
if (!(flag & AT_SYMLINK_NOFOLLOW))
lookup_flags |= LOOKUP_FOLLOW;
- if (flag & AT_NO_AUTOMOUNT)
- lookup_flags |= LOOKUP_NO_AUTOMOUNT;
if (flag & AT_EMPTY_PATH)
lookup_flags |= LOOKUP_EMPTY;
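
These VFS hunks invert the automount opt-out into an opt-in: LOOKUP_NO_AUTOMOUNT is gone, and the terminal component is automounted only when the caller passes LOOKUP_AUTOMOUNT (or implies it via LOOKUP_PARENT, LOOKUP_DIRECTORY, LOOKUP_OPEN or LOOKUP_CREATE), which is why mount(2), the NFS referral code and quotactl(2) above now pass the flag explicitly. A caller that must cross an automount point asks for it; the path below is illustrative only:

struct path path;
int err;

/* force automount traversal on the last component */
err = kern_path("/net/server/export", LOOKUP_FOLLOW | LOOKUP_AUTOMOUNT,
		&path);
if (!err)
	path_put(&path);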
diff --git a/include/linux/devfreq.h b/include/linux/devfreq.h
new file mode 100644
index 0000000..afb9458
--- /dev/null
+++ b/include/linux/devfreq.h
@@ -0,0 +1,238 @@
+/*
+ * devfreq: Generic Dynamic Voltage and Frequency Scaling (DVFS) Framework
+ * for Non-CPU Devices.
+ *
+ * Copyright (C) 2011 Samsung Electronics
+ * MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __LINUX_DEVFREQ_H__
+#define __LINUX_DEVFREQ_H__
+
+#include <linux/device.h>
+#include <linux/notifier.h>
+#include <linux/opp.h>
+
+#define DEVFREQ_NAME_LEN 16
+
+struct devfreq;
+
+/**
+ * struct devfreq_dev_status - Data given from devfreq user device to
+ * governors. Represents the performance
+ * statistics.
+ * @total_time The total time represented by this instance of
+ * devfreq_dev_status
+ * @busy_time		The time that the device was busy within the
+ *			total_time.
+ * @current_frequency The operating frequency.
+ * @private_data An entry not specified by the devfreq framework.
+ * A device and a specific governor may have their
+ * own protocol with private_data. However, because
+ * this is governor-specific, a governor using this
+ *			will only be compatible with devices aware of it.
+ */
+struct devfreq_dev_status {
+ /* both since the last measure */
+ unsigned long total_time;
+ unsigned long busy_time;
+ unsigned long current_frequency;
+	void *private_data;
+};
+
+/**
+ * struct devfreq_dev_profile - Devfreq's user device profile
+ * @initial_freq The operating frequency when devfreq_add_device() is
+ * called.
+ * @polling_ms The polling interval in ms. 0 disables polling.
+ * @target		The device should set its operating frequency to
+ *			freq or to the lowest supported frequency above freq.
+ *			If freq is higher than any operable frequency, set
+ *			the maximum. Before returning, the target function
+ *			should update freq to the frequency actually set.
+ * @get_dev_status The device should provide the current performance
+ * status to devfreq, which is used by governors.
+ * @exit An optional callback that is called when devfreq
+ * is removing the devfreq object due to error or
+ * from devfreq_remove_device() call. If the user
+ * has registered devfreq->nb at a notifier-head,
+ * this is the time to unregister it.
+ */
+struct devfreq_dev_profile {
+ unsigned long initial_freq;
+ unsigned int polling_ms;
+
+ int (*target)(struct device *dev, unsigned long *freq);
+ int (*get_dev_status)(struct device *dev,
+ struct devfreq_dev_status *stat);
+ void (*exit)(struct device *dev);
+};
+
+/**
+ * struct devfreq_governor - Devfreq policy governor
+ * @name Governor's name
+ * @get_target_freq Returns desired operating frequency for the device.
+ * Basically, get_target_freq will run
+ * devfreq_dev_profile.get_dev_status() to get the
+ * status of the device (load = busy_time / total_time).
+ *			If no_central_polling is set, this callback is called
+ *			only from update_devfreq() when notified by OPP.
+ * @init Called when the devfreq is being attached to a device
+ * @exit Called when the devfreq is being removed from a
+ *			device. The governor should stop any internal
+ *			routines before returning, because related data
+ *			may be freed after exit().
+ * @no_central_polling	Do not use devfreq's central polling mechanism.
+ *			When this is set, devfreq will not call
+ *			get_target_freq from devfreq_monitor(). However,
+ *			devfreq will call get_target_freq from
+ *			devfreq_update() when notified by the OPP framework.
+ *
+ * Note that the callbacks are called with devfreq->lock locked by devfreq.
+ */
+struct devfreq_governor {
+ const char name[DEVFREQ_NAME_LEN];
+ int (*get_target_freq)(struct devfreq *this, unsigned long *freq);
+ int (*init)(struct devfreq *this);
+ void (*exit)(struct devfreq *this);
+ const bool no_central_polling;
+};
+
+/**
+ * struct devfreq - Device devfreq structure
+ * @node list node - contains the devices with devfreq that have been
+ * registered.
+ * @lock a mutex to protect accessing devfreq.
+ * @dev device registered by devfreq class. dev.parent is the device
+ * using devfreq.
+ * @profile device-specific devfreq profile
+ * @governor method how to choose frequency based on the usage.
+ * @nb notifier block used to notify devfreq object that it should
+ *		reevaluate operable frequencies. Devfreq users may register
+ *		devfreq.nb with the corresponding notifier call chain.
+ * @polling_jiffies interval in jiffies.
+ * @previous_freq previously configured frequency value.
+ * @next_polling	the number of remaining jiffies until the next
+ *			"devfreq_monitor" execution reevaluates the
+ *			frequency/voltage of the device. Set from the
+ *			profile's polling_ms interval.
+ * @data Private data of the governor. The devfreq framework does not
+ * touch this.
+ * @being_removed a flag to mark that this object is being removed in
+ * order to prevent trying to remove the object multiple times.
+ *
+ * This structure stores the devfreq information for a given device.
+ *
+ * Note that when a governor accesses entries in struct devfreq outside
+ * of the context of the callbacks defined in struct devfreq_governor,
+ * the governor should protect the access with the mutex lock in struct
+ * devfreq. A governor may use this mutex to protect its own private
+ * data in void *data as well.
+ */
+struct devfreq {
+ struct list_head node;
+
+ struct mutex lock;
+ struct device dev;
+ struct devfreq_dev_profile *profile;
+ const struct devfreq_governor *governor;
+ struct notifier_block nb;
+
+ unsigned long polling_jiffies;
+ unsigned long previous_freq;
+ unsigned int next_polling;
+
+ void *data; /* private data for governors */
+
+ bool being_removed;
+};
+
+#if defined(CONFIG_PM_DEVFREQ)
+extern struct devfreq *devfreq_add_device(struct device *dev,
+ struct devfreq_dev_profile *profile,
+ const struct devfreq_governor *governor,
+ void *data);
+extern int devfreq_remove_device(struct devfreq *devfreq);
+
+/* Helper functions for devfreq user device driver with OPP. */
+extern struct opp *devfreq_recommended_opp(struct device *dev,
+ unsigned long *freq);
+extern int devfreq_register_opp_notifier(struct device *dev,
+ struct devfreq *devfreq);
+extern int devfreq_unregister_opp_notifier(struct device *dev,
+ struct devfreq *devfreq);
+
+#ifdef CONFIG_DEVFREQ_GOV_POWERSAVE
+extern const struct devfreq_governor devfreq_powersave;
+#endif
+#ifdef CONFIG_DEVFREQ_GOV_PERFORMANCE
+extern const struct devfreq_governor devfreq_performance;
+#endif
+#ifdef CONFIG_DEVFREQ_GOV_USERSPACE
+extern const struct devfreq_governor devfreq_userspace;
+#endif
+#ifdef CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND
+extern const struct devfreq_governor devfreq_simple_ondemand;
+/**
+ * struct devfreq_simple_ondemand_data - void *data fed to struct devfreq
+ *	and devfreq_add_device
+ * @upthreshold		If the load is over this value, the frequency jumps.
+ *			Specify 0 to use the default. Valid value = 0 to 100.
+ * @downdifferential	If the load is under upthreshold - downdifferential,
+ *			the governor may consider slowing the frequency down.
+ *			Specify 0 to use the default. Valid value = 0 to 100.
+ *			downdifferential < upthreshold must hold.
+ *
+ * If the devfreq_simple_ondemand_data pointer fed to the governor is NULL,
+ * the governor uses the default values.
+ */
+struct devfreq_simple_ondemand_data {
+ unsigned int upthreshold;
+ unsigned int downdifferential;
+};
+#endif
+
+#else /* !CONFIG_PM_DEVFREQ */
+static inline struct devfreq *devfreq_add_device(struct device *dev,
+					  struct devfreq_dev_profile *profile,
+					  const struct devfreq_governor *governor,
+					  void *data)
+{
+	return NULL;
+}
+
+static inline int devfreq_remove_device(struct devfreq *devfreq)
+{
+	return 0;
+}
+
+static inline struct opp *devfreq_recommended_opp(struct device *dev,
+					   unsigned long *freq)
+{
+	return ERR_PTR(-EINVAL);
+}
+
+static inline int devfreq_register_opp_notifier(struct device *dev,
+					 struct devfreq *devfreq)
+{
+	return -EINVAL;
+}
+
+static inline int devfreq_unregister_opp_notifier(struct device *dev,
+					   struct devfreq *devfreq)
+{
+	return -EINVAL;
+}
+
+#define devfreq_powersave NULL
+#define devfreq_performance NULL
+#define devfreq_userspace NULL
+#define devfreq_simple_ondemand NULL
+
+#endif /* CONFIG_PM_DEVFREQ */
+
+#endif /* __LINUX_DEVFREQ_H__ */
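
For a driver, using this header boils down to filling in a struct devfreq_dev_profile and handing it to devfreq_add_device() along with a governor. A minimal sketch, assuming the simple_ondemand governor is built in; the foo_* helpers and the numbers are hypothetical:

static int foo_target(struct device *dev, unsigned long *freq)
{
	struct opp *opp = devfreq_recommended_opp(dev, freq);

	if (IS_ERR(opp))
		return PTR_ERR(opp);
	/* program clocks/regulators for *freq here */
	return 0;
}

static int foo_get_dev_status(struct device *dev,
			      struct devfreq_dev_status *stat)
{
	stat->busy_time = foo_busy_cycles(dev);		/* hypothetical */
	stat->total_time = foo_total_cycles(dev);	/* hypothetical */
	stat->current_frequency = foo_cur_freq(dev);	/* hypothetical */
	return 0;
}

static struct devfreq_dev_profile foo_profile = {
	.initial_freq	= 200000000,
	.polling_ms	= 100,		/* poll every 100 ms */
	.target		= foo_target,
	.get_dev_status	= foo_get_dev_status,
};

/* at probe time */
devfreq = devfreq_add_device(dev, &foo_profile,
			     &devfreq_simple_ondemand, NULL);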
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index 3fa1f3d..99e3e50 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -197,6 +197,11 @@
* whether or not its underlying devices have support.
*/
unsigned discards_supported:1;
+
+ /*
+ * Set if this target does not return zeroes on discarded blocks.
+ */
+ unsigned discard_zeroes_data_unsupported:1;
};
/* Each target can link one of these into the table */
diff --git a/include/linux/freezer.h b/include/linux/freezer.h
index 1effc8b..aa56cf3 100644
--- a/include/linux/freezer.h
+++ b/include/linux/freezer.h
@@ -49,6 +49,7 @@
extern void refrigerator(void);
extern int freeze_processes(void);
+extern int freeze_kernel_threads(void);
extern void thaw_processes(void);
static inline int try_to_freeze(void)
@@ -171,7 +172,8 @@
static inline int thaw_process(struct task_struct *p) { return 1; }
static inline void refrigerator(void) {}
-static inline int freeze_processes(void) { BUG(); return 0; }
+static inline int freeze_processes(void) { return -ENOSYS; }
+static inline int freeze_kernel_threads(void) { return -ENOSYS; }
static inline void thaw_processes(void) {}
static inline int try_to_freeze(void) { return 0; }
diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
index e807ad6..3ad553e 100644
--- a/include/linux/irqdomain.h
+++ b/include/linux/irqdomain.h
@@ -80,6 +80,7 @@
#endif /* CONFIG_IRQ_DOMAIN */
#if defined(CONFIG_IRQ_DOMAIN) && defined(CONFIG_OF_IRQ)
+extern struct irq_domain_ops irq_domain_simple_ops;
extern void irq_domain_add_simple(struct device_node *controller, int irq_base);
extern void irq_domain_generate_simple(const struct of_device_id *match,
u64 phys_base, unsigned int irq_start);
diff --git a/include/linux/namei.h b/include/linux/namei.h
index 76fe2c6..409328d 100644
--- a/include/linux/namei.h
+++ b/include/linux/namei.h
@@ -48,11 +48,12 @@
*/
#define LOOKUP_FOLLOW 0x0001
#define LOOKUP_DIRECTORY 0x0002
+#define LOOKUP_AUTOMOUNT 0x0004
#define LOOKUP_PARENT 0x0010
#define LOOKUP_REVAL 0x0020
#define LOOKUP_RCU 0x0040
-#define LOOKUP_NO_AUTOMOUNT 0x0080
+
/*
* Intent data
*/
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index ddee79b..f38ab5b 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -31,7 +31,7 @@
#include <linux/if_link.h>
#ifdef __KERNEL__
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/timer.h>
#include <linux/delay.h>
#include <linux/atomic.h>
@@ -964,7 +964,7 @@
*/
char name[IFNAMSIZ];
- struct pm_qos_request_list pm_qos_req;
+ struct pm_qos_request pm_qos_req;
/* device name hash chain */
struct hlist_node name_hlist;
diff --git a/include/linux/opp.h b/include/linux/opp.h
index 7020e97..87a9208 100644
--- a/include/linux/opp.h
+++ b/include/linux/opp.h
@@ -16,9 +16,14 @@
#include <linux/err.h>
#include <linux/cpufreq.h>
+#include <linux/notifier.h>
struct opp;
+enum opp_event {
+ OPP_EVENT_ADD, OPP_EVENT_ENABLE, OPP_EVENT_DISABLE,
+};
+
#if defined(CONFIG_PM_OPP)
unsigned long opp_get_voltage(struct opp *opp);
@@ -40,6 +45,8 @@
int opp_disable(struct device *dev, unsigned long freq);
+struct srcu_notifier_head *opp_get_notifier(struct device *dev);
+
#else
static inline unsigned long opp_get_voltage(struct opp *opp)
{
@@ -89,6 +96,11 @@
{
return 0;
}
+
+static inline struct srcu_notifier_head *opp_get_notifier(struct device *dev)
+{
+ return ERR_PTR(-EINVAL);
+}
#endif /* CONFIG_PM */
#if defined(CONFIG_CPU_FREQ) && defined(CONFIG_PM_OPP)
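
opp_get_notifier() exposes the per-device SRCU notifier head so frameworks such as devfreq can learn when OPPs are added, enabled or disabled (the OPP_EVENT_* values above). A hedged sketch of hooking into it, with illustrative foo_* names:

static int foo_opp_event(struct notifier_block *nb, unsigned long event,
			 void *data)
{
	/* event is one of OPP_EVENT_ADD/ENABLE/DISABLE */
	return NOTIFY_OK;
}

static struct notifier_block foo_opp_nb = {
	.notifier_call = foo_opp_event,
};

struct srcu_notifier_head *nh = opp_get_notifier(dev);

if (!IS_ERR(nh))
	srcu_notifier_chain_register(nh, &foo_opp_nb);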
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 8c230cb..9fc0122 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -621,8 +621,9 @@
extern void pcie_bus_configure_settings(struct pci_bus *bus, u8 smpss);
enum pcie_bus_config_types {
- PCIE_BUS_PERFORMANCE,
+ PCIE_BUS_TUNE_OFF,
PCIE_BUS_SAFE,
+ PCIE_BUS_PERFORMANCE,
PCIE_BUS_PEER2PEER,
};
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 74711a9..f15acb6 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -326,6 +326,7 @@
* requested by a driver.
*/
+#define PM_EVENT_INVALID (-1)
#define PM_EVENT_ON 0x0000
#define PM_EVENT_FREEZE 0x0001
#define PM_EVENT_SUSPEND 0x0002
@@ -346,6 +347,7 @@
#define PM_EVENT_AUTO_SUSPEND (PM_EVENT_AUTO | PM_EVENT_SUSPEND)
#define PM_EVENT_AUTO_RESUME (PM_EVENT_AUTO | PM_EVENT_RESUME)
+#define PMSG_INVALID ((struct pm_message){ .event = PM_EVENT_INVALID, })
#define PMSG_ON ((struct pm_message){ .event = PM_EVENT_ON, })
#define PMSG_FREEZE ((struct pm_message){ .event = PM_EVENT_FREEZE, })
#define PMSG_QUIESCE ((struct pm_message){ .event = PM_EVENT_QUIESCE, })
@@ -366,6 +368,8 @@
#define PMSG_AUTO_RESUME ((struct pm_message) \
{ .event = PM_EVENT_AUTO_RESUME, })
+#define PMSG_IS_AUTO(msg) (((msg).event & PM_EVENT_AUTO) != 0)
+
/**
* Device run-time power management status.
*
@@ -480,6 +484,7 @@
unsigned long accounting_timestamp;
#endif
struct pm_subsys_data *subsys_data; /* Owned by the subsystem. */
+ struct pm_qos_constraints *constraints;
};
extern void update_pm_runtime_accounting(struct device *dev);
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
new file mode 100644
index 0000000..83b0ea3
--- /dev/null
+++ b/include/linux/pm_qos.h
@@ -0,0 +1,155 @@
+#ifndef _LINUX_PM_QOS_H
+#define _LINUX_PM_QOS_H
+/* interface for the pm_qos_power infrastructure of the linux kernel.
+ *
+ * Mark Gross <mgross@linux.intel.com>
+ */
+#include <linux/plist.h>
+#include <linux/notifier.h>
+#include <linux/miscdevice.h>
+#include <linux/device.h>
+
+#define PM_QOS_RESERVED 0
+#define PM_QOS_CPU_DMA_LATENCY 1
+#define PM_QOS_NETWORK_LATENCY 2
+#define PM_QOS_NETWORK_THROUGHPUT 3
+
+#define PM_QOS_NUM_CLASSES 4
+#define PM_QOS_DEFAULT_VALUE -1
+
+#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
+#define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
+#define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0
+#define PM_QOS_DEV_LAT_DEFAULT_VALUE 0
+
+struct pm_qos_request {
+ struct plist_node node;
+ int pm_qos_class;
+};
+
+struct dev_pm_qos_request {
+ struct plist_node node;
+ struct device *dev;
+};
+
+enum pm_qos_type {
+ PM_QOS_UNITIALIZED,
+ PM_QOS_MAX, /* return the largest value */
+ PM_QOS_MIN /* return the smallest value */
+};
+
+/*
+ * Note: The lockless read path depends on the CPU accessing
+ * target_value atomically. Atomic access is only guaranteed on all CPU
+ * types Linux supports for 32-bit quantities.
+ */
+struct pm_qos_constraints {
+ struct plist_head list;
+ s32 target_value; /* Do not change to 64 bit */
+ s32 default_value;
+ enum pm_qos_type type;
+ struct blocking_notifier_head *notifiers;
+};
+
+/* Action requested to pm_qos_update_target */
+enum pm_qos_req_action {
+ PM_QOS_ADD_REQ, /* Add a new request */
+ PM_QOS_UPDATE_REQ, /* Update an existing request */
+ PM_QOS_REMOVE_REQ /* Remove an existing request */
+};
+
+static inline int dev_pm_qos_request_active(struct dev_pm_qos_request *req)
+{
+ return req->dev != 0;
+}
+
+#ifdef CONFIG_PM
+int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
+ enum pm_qos_req_action action, int value);
+void pm_qos_add_request(struct pm_qos_request *req, int pm_qos_class,
+ s32 value);
+void pm_qos_update_request(struct pm_qos_request *req,
+ s32 new_value);
+void pm_qos_remove_request(struct pm_qos_request *req);
+
+int pm_qos_request(int pm_qos_class);
+int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
+int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
+int pm_qos_request_active(struct pm_qos_request *req);
+s32 pm_qos_read_value(struct pm_qos_constraints *c);
+
+s32 dev_pm_qos_read_value(struct device *dev);
+int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
+ s32 value);
+int dev_pm_qos_update_request(struct dev_pm_qos_request *req, s32 new_value);
+int dev_pm_qos_remove_request(struct dev_pm_qos_request *req);
+int dev_pm_qos_add_notifier(struct device *dev,
+ struct notifier_block *notifier);
+int dev_pm_qos_remove_notifier(struct device *dev,
+ struct notifier_block *notifier);
+int dev_pm_qos_add_global_notifier(struct notifier_block *notifier);
+int dev_pm_qos_remove_global_notifier(struct notifier_block *notifier);
+void dev_pm_qos_constraints_init(struct device *dev);
+void dev_pm_qos_constraints_destroy(struct device *dev);
+#else
+static inline int pm_qos_update_target(struct pm_qos_constraints *c,
+ struct plist_node *node,
+ enum pm_qos_req_action action,
+ int value)
+ { return 0; }
+static inline void pm_qos_add_request(struct pm_qos_request *req,
+ int pm_qos_class, s32 value)
+ { return; }
+static inline void pm_qos_update_request(struct pm_qos_request *req,
+ s32 new_value)
+ { return; }
+static inline void pm_qos_remove_request(struct pm_qos_request *req)
+ { return; }
+
+static inline int pm_qos_request(int pm_qos_class)
+ { return 0; }
+static inline int pm_qos_add_notifier(int pm_qos_class,
+ struct notifier_block *notifier)
+ { return 0; }
+static inline int pm_qos_remove_notifier(int pm_qos_class,
+ struct notifier_block *notifier)
+ { return 0; }
+static inline int pm_qos_request_active(struct pm_qos_request *req)
+ { return 0; }
+static inline s32 pm_qos_read_value(struct pm_qos_constraints *c)
+ { return 0; }
+
+static inline s32 dev_pm_qos_read_value(struct device *dev)
+ { return 0; }
+static inline int dev_pm_qos_add_request(struct device *dev,
+ struct dev_pm_qos_request *req,
+ s32 value)
+ { return 0; }
+static inline int dev_pm_qos_update_request(struct dev_pm_qos_request *req,
+ s32 new_value)
+ { return 0; }
+static inline int dev_pm_qos_remove_request(struct dev_pm_qos_request *req)
+ { return 0; }
+static inline int dev_pm_qos_add_notifier(struct device *dev,
+ struct notifier_block *notifier)
+ { return 0; }
+static inline int dev_pm_qos_remove_notifier(struct device *dev,
+ struct notifier_block *notifier)
+ { return 0; }
+static inline int dev_pm_qos_add_global_notifier(
+ struct notifier_block *notifier)
+ { return 0; }
+static inline int dev_pm_qos_remove_global_notifier(
+ struct notifier_block *notifier)
+ { return 0; }
+static inline void dev_pm_qos_constraints_init(struct device *dev)
+{
+ dev->power.power_state = PMSG_ON;
+}
+static inline void dev_pm_qos_constraints_destroy(struct device *dev)
+{
+ dev->power.power_state = PMSG_INVALID;
+}
+#endif
+
+#endif
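
For most existing callers the conversion from pm_qos_params.h is mechanical: struct pm_qos_request_list becomes struct pm_qos_request while the add/update/remove calls keep their shape, which is exactly what the ipw2100, netdevice.h and sound/pcm.h hunks in this patch do. The canonical usage pattern, as a hedged sketch with an illustrative 50 usec bound:

static struct pm_qos_request foo_qos_req;

/* claim a CPU/DMA latency bound while the device is active */
pm_qos_add_request(&foo_qos_req, PM_QOS_CPU_DMA_LATENCY, 50);

/* relax it when idle */
pm_qos_update_request(&foo_qos_req, PM_QOS_DEFAULT_VALUE);

/* drop it on teardown */
pm_qos_remove_request(&foo_qos_req);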
diff --git a/include/linux/pm_qos_params.h b/include/linux/pm_qos_params.h
deleted file mode 100644
index a7d87f9..0000000
--- a/include/linux/pm_qos_params.h
+++ /dev/null
@@ -1,38 +0,0 @@
-#ifndef _LINUX_PM_QOS_PARAMS_H
-#define _LINUX_PM_QOS_PARAMS_H
-/* interface for the pm_qos_power infrastructure of the linux kernel.
- *
- * Mark Gross <mgross@linux.intel.com>
- */
-#include <linux/plist.h>
-#include <linux/notifier.h>
-#include <linux/miscdevice.h>
-
-#define PM_QOS_RESERVED 0
-#define PM_QOS_CPU_DMA_LATENCY 1
-#define PM_QOS_NETWORK_LATENCY 2
-#define PM_QOS_NETWORK_THROUGHPUT 3
-
-#define PM_QOS_NUM_CLASSES 4
-#define PM_QOS_DEFAULT_VALUE -1
-
-#define PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
-#define PM_QOS_NETWORK_LAT_DEFAULT_VALUE (2000 * USEC_PER_SEC)
-#define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE 0
-
-struct pm_qos_request_list {
- struct plist_node list;
- int pm_qos_class;
-};
-
-void pm_qos_add_request(struct pm_qos_request_list *l, int pm_qos_class, s32 value);
-void pm_qos_update_request(struct pm_qos_request_list *pm_qos_req,
- s32 new_value);
-void pm_qos_remove_request(struct pm_qos_request_list *pm_qos_req);
-
-int pm_qos_request(int pm_qos_class);
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier);
-int pm_qos_request_active(struct pm_qos_request_list *req);
-
-#endif
diff --git a/include/linux/ptp_classify.h b/include/linux/ptp_classify.h
index e07e274..1dc420b 100644
--- a/include/linux/ptp_classify.h
+++ b/include/linux/ptp_classify.h
@@ -51,6 +51,7 @@
#define PTP_CLASS_V2_VLAN (PTP_CLASS_V2 | PTP_CLASS_VLAN)
#define PTP_EV_PORT 319
+#define PTP_GEN_BIT 0x08 /* indicates general message, if set in message type */
#define OFF_ETYPE 12
#define OFF_IHL 14
@@ -116,14 +117,20 @@
{OP_OR, 0, 0, PTP_CLASS_IPV6 }, /* */ \
{OP_RETA, 0, 0, 0 }, /* */ \
/*L3x*/ {OP_RETK, 0, 0, PTP_CLASS_NONE }, /* */ \
-/*L40*/ {OP_JEQ, 0, 6, ETH_P_8021Q }, /* f goto L50 */ \
+/*L40*/ {OP_JEQ, 0, 9, ETH_P_8021Q }, /* f goto L50 */ \
{OP_LDH, 0, 0, OFF_ETYPE + 4 }, /* */ \
- {OP_JEQ, 0, 9, ETH_P_1588 }, /* f goto L60 */ \
+ {OP_JEQ, 0, 15, ETH_P_1588 }, /* f goto L60 */ \
+ {OP_LDB, 0, 0, ETH_HLEN + VLAN_HLEN }, /* */ \
+ {OP_AND, 0, 0, PTP_GEN_BIT }, /* */ \
+ {OP_JEQ, 0, 12, 0 }, /* f goto L6x */ \
{OP_LDH, 0, 0, ETH_HLEN + VLAN_HLEN }, /* */ \
{OP_AND, 0, 0, PTP_CLASS_VMASK }, /* */ \
{OP_OR, 0, 0, PTP_CLASS_VLAN }, /* */ \
{OP_RETA, 0, 0, 0 }, /* */ \
-/*L50*/ {OP_JEQ, 0, 4, ETH_P_1588 }, /* f goto L61 */ \
+/*L50*/ {OP_JEQ, 0, 7, ETH_P_1588 }, /* f goto L61 */ \
+ {OP_LDB, 0, 0, ETH_HLEN }, /* */ \
+ {OP_AND, 0, 0, PTP_GEN_BIT }, /* */ \
+ {OP_JEQ, 0, 4, 0 }, /* f goto L6x */ \
{OP_LDH, 0, 0, ETH_HLEN }, /* */ \
{OP_AND, 0, 0, PTP_CLASS_VMASK }, /* */ \
{OP_OR, 0, 0, PTP_CLASS_L2 }, /* */ \
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4ac2c05..41d0237 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1956,7 +1956,6 @@
extern unsigned long long
task_sched_runtime(struct task_struct *task);
-extern unsigned long long thread_group_sched_runtime(struct task_struct *task);
/* sched_exec is called by processes performing an exec */
#ifdef CONFIG_SMP
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 6bbcef2..57a6924 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -8,15 +8,18 @@
#include <linux/mm.h>
#include <asm/errno.h>
-#if defined(CONFIG_PM_SLEEP) && defined(CONFIG_VT) && defined(CONFIG_VT_CONSOLE)
+#ifdef CONFIG_VT
extern void pm_set_vt_switch(int);
-extern int pm_prepare_console(void);
-extern void pm_restore_console(void);
#else
static inline void pm_set_vt_switch(int do_switch)
{
}
+#endif
+#ifdef CONFIG_VT_CONSOLE_SLEEP
+extern int pm_prepare_console(void);
+extern void pm_restore_console(void);
+#else
static inline int pm_prepare_console(void)
{
return 0;
@@ -34,6 +37,58 @@
#define PM_SUSPEND_MEM ((__force suspend_state_t) 3)
#define PM_SUSPEND_MAX ((__force suspend_state_t) 4)
+enum suspend_stat_step {
+ SUSPEND_FREEZE = 1,
+ SUSPEND_PREPARE,
+ SUSPEND_SUSPEND,
+ SUSPEND_SUSPEND_NOIRQ,
+ SUSPEND_RESUME_NOIRQ,
+ SUSPEND_RESUME
+};
+
+struct suspend_stats {
+ int success;
+ int fail;
+ int failed_freeze;
+ int failed_prepare;
+ int failed_suspend;
+ int failed_suspend_noirq;
+ int failed_resume;
+ int failed_resume_noirq;
+#define REC_FAILED_NUM 2
+ int last_failed_dev;
+ char failed_devs[REC_FAILED_NUM][40];
+ int last_failed_errno;
+ int errno[REC_FAILED_NUM];
+ int last_failed_step;
+ enum suspend_stat_step failed_steps[REC_FAILED_NUM];
+};
+
+extern struct suspend_stats suspend_stats;
+
+static inline void dpm_save_failed_dev(const char *name)
+{
+ strlcpy(suspend_stats.failed_devs[suspend_stats.last_failed_dev],
+ name,
+ sizeof(suspend_stats.failed_devs[0]));
+ suspend_stats.last_failed_dev++;
+ suspend_stats.last_failed_dev %= REC_FAILED_NUM;
+}
+
+static inline void dpm_save_failed_errno(int err)
+{
+ suspend_stats.errno[suspend_stats.last_failed_errno] = err;
+ suspend_stats.last_failed_errno++;
+ suspend_stats.last_failed_errno %= REC_FAILED_NUM;
+}
+
+static inline void dpm_save_failed_step(enum suspend_stat_step step)
+{
+ suspend_stats.failed_steps[suspend_stats.last_failed_step] = step;
+ suspend_stats.last_failed_step++;
+ suspend_stats.last_failed_step %= REC_FAILED_NUM;
+}
+
/**
* struct platform_suspend_ops - Callbacks for managing platform dependent
* system sleep states.
@@ -334,4 +389,38 @@
}
#endif
+#ifdef CONFIG_ARCH_SAVE_PAGE_KEYS
+/*
+ * The ARCH_SAVE_PAGE_KEYS functions can be used by an architecture
+ * to save/restore additional information to/from the array of page
+ * frame numbers in the hibernation image. For s390 this is used to
+ * save and restore the storage key for each page that is included
+ * in the hibernation image.
+ */
+unsigned long page_key_additional_pages(unsigned long pages);
+int page_key_alloc(unsigned long pages);
+void page_key_free(void);
+void page_key_read(unsigned long *pfn);
+void page_key_memorize(unsigned long *pfn);
+void page_key_write(void *address);
+
+#else /* !CONFIG_ARCH_SAVE_PAGE_KEYS */
+
+static inline unsigned long page_key_additional_pages(unsigned long pages)
+{
+ return 0;
+}
+
+static inline int page_key_alloc(unsigned long pages)
+{
+ return 0;
+}
+
+static inline void page_key_free(void) {}
+static inline void page_key_read(unsigned long *pfn) {}
+static inline void page_key_memorize(unsigned long *pfn) {}
+static inline void page_key_write(void *address) {}
+
+#endif /* !CONFIG_ARCH_SAVE_PAGE_KEYS */
+
#endif /* _LINUX_SUSPEND_H */
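
The dpm_save_failed_* helpers keep a small ring of the last REC_FAILED_NUM (here two) failures: each call writes the current slot and advances its index modulo the ring size. The intended call sites live in the suspend core; roughly, and as a sketch only:

int error = dpm_suspend(PMSG_SUSPEND);

if (error) {
	suspend_stats.failed_suspend++;
	dpm_save_failed_step(SUSPEND_SUSPEND);
} else {
	suspend_stats.success++;
}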
diff --git a/include/sound/pcm.h b/include/sound/pcm.h
index 57e71fa..54cb079 100644
--- a/include/sound/pcm.h
+++ b/include/sound/pcm.h
@@ -29,7 +29,7 @@
#include <linux/poll.h>
#include <linux/mm.h>
#include <linux/bitops.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#define snd_pcm_substream_chip(substream) ((substream)->private_data)
#define snd_pcm_chip(pcm) ((pcm)->private_data)
@@ -373,7 +373,7 @@
int number;
char name[32]; /* substream name */
int stream; /* stream (direction) */
- struct pm_qos_request_list latency_pm_qos_req; /* pm_qos request */
+ struct pm_qos_request latency_pm_qos_req; /* pm_qos request */
size_t buffer_bytes_max; /* limit ring buffer size */
struct snd_dma_buffer dma_buffer;
unsigned int dma_buf_id;
diff --git a/include/trace/events/rpm.h b/include/trace/events/rpm.h
new file mode 100644
index 0000000..d62c558
--- /dev/null
+++ b/include/trace/events/rpm.h
@@ -0,0 +1,99 @@
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM rpm
+
+#if !defined(_TRACE_RUNTIME_POWER_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_RUNTIME_POWER_H
+
+#include <linux/ktime.h>
+#include <linux/tracepoint.h>
+#include <linux/device.h>
+
+/*
+ * The rpm_internal events are used for tracing some important
+ * runtime pm internal functions.
+ */
+DECLARE_EVENT_CLASS(rpm_internal,
+
+ TP_PROTO(struct device *dev, int flags),
+
+ TP_ARGS(dev, flags),
+
+ TP_STRUCT__entry(
+ __string( name, dev_name(dev) )
+ __field( int, flags )
+ __field( int , usage_count )
+ __field( int , disable_depth )
+ __field( int , runtime_auto )
+ __field( int , request_pending )
+ __field( int , irq_safe )
+ __field( int , child_count )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, dev_name(dev));
+ __entry->flags = flags;
+ __entry->usage_count = atomic_read(
+ &dev->power.usage_count);
+ __entry->disable_depth = dev->power.disable_depth;
+ __entry->runtime_auto = dev->power.runtime_auto;
+ __entry->request_pending = dev->power.request_pending;
+ __entry->irq_safe = dev->power.irq_safe;
+ __entry->child_count = atomic_read(
+ &dev->power.child_count);
+ ),
+
+ TP_printk("%s flags-%x cnt-%-2d dep-%-2d auto-%-1d p-%-1d"
+ " irq-%-1d child-%d",
+ __get_str(name), __entry->flags,
+ __entry->usage_count,
+ __entry->disable_depth,
+ __entry->runtime_auto,
+ __entry->request_pending,
+ __entry->irq_safe,
+ __entry->child_count
+ )
+);
+DEFINE_EVENT(rpm_internal, rpm_suspend,
+
+ TP_PROTO(struct device *dev, int flags),
+
+ TP_ARGS(dev, flags)
+);
+DEFINE_EVENT(rpm_internal, rpm_resume,
+
+ TP_PROTO(struct device *dev, int flags),
+
+ TP_ARGS(dev, flags)
+);
+DEFINE_EVENT(rpm_internal, rpm_idle,
+
+ TP_PROTO(struct device *dev, int flags),
+
+ TP_ARGS(dev, flags)
+);
+
+TRACE_EVENT(rpm_return_int,
+ TP_PROTO(struct device *dev, unsigned long ip, int ret),
+ TP_ARGS(dev, ip, ret),
+
+ TP_STRUCT__entry(
+ __string( name, dev_name(dev))
+ __field( unsigned long, ip )
+ __field( int, ret )
+ ),
+
+ TP_fast_assign(
+ __assign_str(name, dev_name(dev));
+ __entry->ip = ip;
+ __entry->ret = ret;
+ ),
+
+ TP_printk("%pS:%s ret=%d", (void *)__entry->ip, __get_str(name),
+ __entry->ret)
+);
+
+#endif /* _TRACE_RUNTIME_POWER_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
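The rpm_internal class above is instantiated three times (rpm_suspend, rpm_resume, rpm_idle), and rpm_return_int records the return value. The emitting call sites are not part of this hunk; assuming the standard tracepoint convention of trace_<event>() wrappers, a caller in the runtime-PM core would look roughly like this sketch:

    #define CREATE_TRACE_POINTS
    #include <trace/events/rpm.h>

    static int rpm_suspend(struct device *dev, int rpmflags)
    {
            int retval;

            trace_rpm_suspend(dev, rpmflags);
            /* ... actual suspend work elided ... */
            retval = 0;
            trace_rpm_return_int(dev, _THIS_IP_, retval);
            return retval;
    }

Once built in, the events behave like any other ftrace event, e.g.
echo 1 > /sys/kernel/debug/tracing/events/rpm/enable.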
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 6bca4cc..5f17270 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -298,7 +298,7 @@
__array(char, name, 32)
__field(unsigned long, ino)
__field(unsigned long, state)
- __field(unsigned long, age)
+ __field(unsigned long, dirtied_when)
__field(unsigned long, writeback_index)
__field(long, nr_to_write)
__field(unsigned long, wrote)
@@ -309,19 +309,19 @@
dev_name(inode->i_mapping->backing_dev_info->dev), 32);
__entry->ino = inode->i_ino;
__entry->state = inode->i_state;
- __entry->age = (jiffies - inode->dirtied_when) *
- 1000 / HZ;
+ __entry->dirtied_when = inode->dirtied_when;
__entry->writeback_index = inode->i_mapping->writeback_index;
__entry->nr_to_write = nr_to_write;
__entry->wrote = nr_to_write - wbc->nr_to_write;
),
- TP_printk("bdi %s: ino=%lu state=%s age=%lu "
+ TP_printk("bdi %s: ino=%lu state=%s dirtied_when=%lu age=%lu "
"index=%lu to_write=%ld wrote=%lu",
__entry->name,
__entry->ino,
show_inode_state(__entry->state),
- __entry->age,
+ __entry->dirtied_when,
+ (jiffies - __entry->dirtied_when) / HZ,
__entry->writeback_index,
__entry->nr_to_write,
__entry->wrote
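The net effect of this hunk: the trace entry now stores the raw dirtied_when timestamp and derives the age only at print time, and the unit changes from milliseconds ((jiffies - dirtied_when) * 1000 / HZ) to whole seconds ((jiffies - dirtied_when) / HZ). For example, with HZ = 250 and an inode dirtied 1000 jiffies ago, the old format printed age=4000 while the new one prints age=4.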
diff --git a/init/main.c b/init/main.c
index 2a9b88a..03b408d 100644
--- a/init/main.c
+++ b/init/main.c
@@ -381,9 +381,6 @@
preempt_enable_no_resched();
schedule();
- /* At this point, we can enable user mode helper functionality */
- usermodehelper_enable();
-
/* Call into cpu_idle with preempt disabled */
preempt_disable();
cpu_idle();
@@ -733,6 +730,7 @@
driver_init();
init_irq_proc();
do_ctors();
+ usermodehelper_enable();
do_initcalls();
}
diff --git a/kernel/Makefile b/kernel/Makefile
index eca595e..2da48d3 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -9,7 +9,7 @@
rcupdate.o extable.o params.o posix-timers.o \
kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o \
hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
- notifier.o ksysfs.o pm_qos_params.o sched_clock.o cred.o \
+ notifier.o ksysfs.o sched_clock.o cred.o \
async.o range.o
obj-y += groups.o
diff --git a/kernel/freezer.c b/kernel/freezer.c
index 7b01de98..66a594e 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -67,7 +67,7 @@
unsigned long flags;
spin_lock_irqsave(&p->sighand->siglock, flags);
- signal_wake_up(p, 0);
+ signal_wake_up(p, 1);
spin_unlock_irqrestore(&p->sighand->siglock, flags);
}
diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
index d5828da..b57a377 100644
--- a/kernel/irq/irqdomain.c
+++ b/kernel/irq/irqdomain.c
@@ -29,7 +29,11 @@
*/
for (hwirq = 0; hwirq < domain->nr_irq; hwirq++) {
d = irq_get_irq_data(irq_domain_to_irq(domain, hwirq));
- if (d || d->domain) {
+ if (!d) {
+ WARN(1, "error: assigning domain to non existant irq_desc");
+ return;
+ }
+ if (d->domain) {
/* things are broken; just report, don't clean up */
WARN(1, "error: irq_desc already assigned to a domain");
return;
diff --git a/kernel/posix-cpu-timers.c b/kernel/posix-cpu-timers.c
index 58f405b..c8008dd 100644
--- a/kernel/posix-cpu-timers.c
+++ b/kernel/posix-cpu-timers.c
@@ -250,7 +250,7 @@
do {
times->utime = cputime_add(times->utime, t->utime);
times->stime = cputime_add(times->stime, t->stime);
- times->sum_exec_runtime += t->se.sum_exec_runtime;
+ times->sum_exec_runtime += task_sched_runtime(t);
} while_each_thread(tsk, t);
out:
rcu_read_unlock();
@@ -312,7 +312,8 @@
cpu->cpu = cputime.utime;
break;
case CPUCLOCK_SCHED:
- cpu->sched = thread_group_sched_runtime(p);
+ thread_group_cputime(p, &cputime);
+ cpu->sched = cputime.sum_exec_runtime;
break;
}
return 0;
diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
index 3744c59..cedd998 100644
--- a/kernel/power/Kconfig
+++ b/kernel/power/Kconfig
@@ -27,6 +27,7 @@
select HIBERNATE_CALLBACKS
select LZO_COMPRESS
select LZO_DECOMPRESS
+ select CRC32
---help---
Enable the suspend to disk (STD) functionality, which is usually
called "hibernation" in user interfaces. STD checkpoints the
@@ -65,6 +66,9 @@
For more information take a look at <file:Documentation/power/swsusp.txt>.
+config ARCH_SAVE_PAGE_KEYS
+ bool
+
config PM_STD_PARTITION
string "Default resume partition"
depends on HIBERNATION
diff --git a/kernel/power/Makefile b/kernel/power/Makefile
index c5ebc6a..07e0e28 100644
--- a/kernel/power/Makefile
+++ b/kernel/power/Makefile
@@ -1,8 +1,8 @@
ccflags-$(CONFIG_PM_DEBUG) := -DDEBUG
-obj-$(CONFIG_PM) += main.o
-obj-$(CONFIG_PM_SLEEP) += console.o
+obj-$(CONFIG_PM) += main.o qos.o
+obj-$(CONFIG_VT_CONSOLE_SLEEP) += console.o
obj-$(CONFIG_FREEZER) += process.o
obj-$(CONFIG_SUSPEND) += suspend.o
obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o
diff --git a/kernel/power/console.c b/kernel/power/console.c
index 218e5af..b1dc456 100644
--- a/kernel/power/console.c
+++ b/kernel/power/console.c
@@ -1,5 +1,5 @@
/*
- * drivers/power/process.c - Functions for saving/restoring console.
+ * Functions for saving/restoring console.
*
* Originally from swsusp.
*/
@@ -10,7 +10,6 @@
#include <linux/module.h>
#include "power.h"
-#if defined(CONFIG_VT) && defined(CONFIG_VT_CONSOLE)
#define SUSPEND_CONSOLE (MAX_NR_CONSOLES-1)
static int orig_fgconsole, orig_kmsg;
@@ -32,4 +31,3 @@
vt_kmsg_redirect(orig_kmsg);
}
}
-#endif
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index 8f7b1db..1c53f7f 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -14,6 +14,7 @@
#include <linux/reboot.h>
#include <linux/string.h>
#include <linux/device.h>
+#include <linux/async.h>
#include <linux/kmod.h>
#include <linux/delay.h>
#include <linux/fs.h>
@@ -29,12 +30,14 @@
#include "power.h"
-static int nocompress = 0;
-static int noresume = 0;
+static int nocompress;
+static int noresume;
+static int resume_wait;
+static int resume_delay;
static char resume_file[256] = CONFIG_PM_STD_PARTITION;
dev_t swsusp_resume_device;
sector_t swsusp_resume_block;
-int in_suspend __nosavedata = 0;
+int in_suspend __nosavedata;
enum {
HIBERNATION_INVALID,
@@ -334,13 +337,17 @@
if (error)
goto Close;
- error = dpm_prepare(PMSG_FREEZE);
- if (error)
- goto Complete_devices;
-
/* Preallocate image memory before shutting down devices. */
error = hibernate_preallocate_memory();
if (error)
+ goto Close;
+
+ error = freeze_kernel_threads();
+ if (error)
+ goto Close;
+
+ error = dpm_prepare(PMSG_FREEZE);
+ if (error)
goto Complete_devices;
suspend_console();
@@ -463,7 +470,7 @@
* @platform_mode: If set, use platform driver to prepare for the transition.
*
* This routine must be called with pm_mutex held. If it is successful, control
- * reappears in the restored target kernel in hibernation_snaphot().
+ * reappears in the restored target kernel in hibernation_snapshot().
*/
int hibernation_restore(int platform_mode)
{
@@ -650,6 +657,9 @@
flags |= SF_PLATFORM_MODE;
if (nocompress)
flags |= SF_NOCOMPRESS_MODE;
+ else
+ flags |= SF_CRC32_MODE;
+
pr_debug("PM: writing image.\n");
error = swsusp_write(flags);
swsusp_free();
@@ -724,6 +734,12 @@
pr_debug("PM: Checking hibernation image partition %s\n", resume_file);
+ if (resume_delay) {
+ printk(KERN_INFO "Waiting %dsec before reading resume device...\n",
+ resume_delay);
+ ssleep(resume_delay);
+ }
+
/* Check if the device is there */
swsusp_resume_device = name_to_dev_t(resume_file);
if (!swsusp_resume_device) {
@@ -732,6 +748,13 @@
* to wait for this to finish.
*/
wait_for_device_probe();
+
+ if (resume_wait) {
+ while ((swsusp_resume_device = name_to_dev_t(resume_file)) == 0)
+ msleep(10);
+ async_synchronize_full();
+ }
+
/*
* We can't depend on SCSI devices being available after loading
* one of their modules until scsi_complete_async_scans() is
@@ -1060,7 +1083,21 @@
return 1;
}
+static int __init resumewait_setup(char *str)
+{
+ resume_wait = 1;
+ return 1;
+}
+
+static int __init resumedelay_setup(char *str)
+{
+ resume_delay = simple_strtoul(str, NULL, 0);
+ return 1;
+}
+
__setup("noresume", noresume_setup);
__setup("resume_offset=", resume_offset_setup);
__setup("resume=", resume_setup);
__setup("hibernate=", hibernate_setup);
+__setup("resumewait", resumewait_setup);
+__setup("resumedelay=", resumedelay_setup);
diff --git a/kernel/power/main.c b/kernel/power/main.c
index 6c601f8..a52e884 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -12,6 +12,8 @@
#include <linux/string.h>
#include <linux/resume-trace.h>
#include <linux/workqueue.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
#include "power.h"
@@ -131,6 +133,101 @@
power_attr(pm_test);
#endif /* CONFIG_PM_DEBUG */
+#ifdef CONFIG_DEBUG_FS
+static char *suspend_step_name(enum suspend_stat_step step)
+{
+ switch (step) {
+ case SUSPEND_FREEZE:
+ return "freeze";
+ case SUSPEND_PREPARE:
+ return "prepare";
+ case SUSPEND_SUSPEND:
+ return "suspend";
+ case SUSPEND_SUSPEND_NOIRQ:
+ return "suspend_noirq";
+ case SUSPEND_RESUME_NOIRQ:
+ return "resume_noirq";
+ case SUSPEND_RESUME:
+ return "resume";
+ default:
+ return "";
+ }
+}
+
+static int suspend_stats_show(struct seq_file *s, void *unused)
+{
+ int i, index, last_dev, last_errno, last_step;
+
+ last_dev = suspend_stats.last_failed_dev + REC_FAILED_NUM - 1;
+ last_dev %= REC_FAILED_NUM;
+ last_errno = suspend_stats.last_failed_errno + REC_FAILED_NUM - 1;
+ last_errno %= REC_FAILED_NUM;
+ last_step = suspend_stats.last_failed_step + REC_FAILED_NUM - 1;
+ last_step %= REC_FAILED_NUM;
+ seq_printf(s, "%s: %d\n%s: %d\n%s: %d\n%s: %d\n"
+ "%s: %d\n%s: %d\n%s: %d\n%s: %d\n",
+ "success", suspend_stats.success,
+ "fail", suspend_stats.fail,
+ "failed_freeze", suspend_stats.failed_freeze,
+ "failed_prepare", suspend_stats.failed_prepare,
+ "failed_suspend", suspend_stats.failed_suspend,
+ "failed_suspend_noirq",
+ suspend_stats.failed_suspend_noirq,
+ "failed_resume", suspend_stats.failed_resume,
+ "failed_resume_noirq",
+ suspend_stats.failed_resume_noirq);
+ seq_printf(s, "failures:\n last_failed_dev:\t%-s\n",
+ suspend_stats.failed_devs[last_dev]);
+ for (i = 1; i < REC_FAILED_NUM; i++) {
+ index = last_dev + REC_FAILED_NUM - i;
+ index %= REC_FAILED_NUM;
+ seq_printf(s, "\t\t\t%-s\n",
+ suspend_stats.failed_devs[index]);
+ }
+ seq_printf(s, " last_failed_errno:\t%-d\n",
+ suspend_stats.errno[last_errno]);
+ for (i = 1; i < REC_FAILED_NUM; i++) {
+ index = last_errno + REC_FAILED_NUM - i;
+ index %= REC_FAILED_NUM;
+ seq_printf(s, "\t\t\t%-d\n",
+ suspend_stats.errno[index]);
+ }
+ seq_printf(s, " last_failed_step:\t%-s\n",
+ suspend_step_name(
+ suspend_stats.failed_steps[last_step]));
+ for (i = 1; i < REC_FAILED_NUM; i++) {
+ index = last_step + REC_FAILED_NUM - i;
+ index %= REC_FAILED_NUM;
+ seq_printf(s, "\t\t\t%-s\n",
+ suspend_step_name(
+ suspend_stats.failed_steps[index]));
+ }
+
+ return 0;
+}
+
+static int suspend_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, suspend_stats_show, NULL);
+}
+
+static const struct file_operations suspend_stats_operations = {
+ .open = suspend_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static int __init pm_debugfs_init(void)
+{
+ debugfs_create_file("suspend_stats", S_IFREG | S_IRUGO,
+ NULL, NULL, &suspend_stats_operations);
+ return 0;
+}
+
+late_initcall(pm_debugfs_init);
+#endif /* CONFIG_DEBUG_FS */
+
#endif /* CONFIG_PM_SLEEP */
struct kobject *power_kobj;
@@ -194,6 +291,11 @@
}
if (state < PM_SUSPEND_MAX && *s)
error = enter_state(state);
+ if (error) {
+ suspend_stats.fail++;
+ dpm_save_failed_errno(error);
+ } else
+ suspend_stats.success++;
#endif
Exit:
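With CONFIG_DEBUG_FS enabled, the counters and ring buffers filled in above become readable from user space. A sketch of a session (all values illustrative):

    # mount -t debugfs none /sys/kernel/debug
    # cat /sys/kernel/debug/suspend_stats
    success: 20
    fail: 2
    failed_freeze: 0
    failed_prepare: 0
    failed_suspend: 1
    failed_suspend_noirq: 0
    failed_resume: 1
    failed_resume_noirq: 0
    failures:
      last_failed_dev:      niu
                            niu
      last_failed_errno:    -16
                            -16
      last_failed_step:     suspend
                            suspend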
diff --git a/kernel/power/power.h b/kernel/power/power.h
index 9a00a0a..23a2db1 100644
--- a/kernel/power/power.h
+++ b/kernel/power/power.h
@@ -146,6 +146,7 @@
*/
#define SF_PLATFORM_MODE 1
#define SF_NOCOMPRESS_MODE 2
+#define SF_CRC32_MODE 4
/* kernel/power/hibernate.c */
extern int swsusp_check(void);
@@ -228,7 +229,8 @@
#ifdef CONFIG_SUSPEND_FREEZER
static inline int suspend_freeze_processes(void)
{
- return freeze_processes();
+ int error = freeze_processes();
+ return error ? : freeze_kernel_threads();
}
static inline void suspend_thaw_processes(void)
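The "error ? : freeze_kernel_threads()" expression relies on the GCC extension that omits the middle operand of ?:, reusing the condition value as the result. An equivalent spelling in plain C:

    static inline int suspend_freeze_processes(void)
    {
            int error = freeze_processes();

            if (error)
                    return error;
            return freeze_kernel_threads();
    }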
diff --git a/kernel/power/process.c b/kernel/power/process.c
index 0cf3a27..addbbe5 100644
--- a/kernel/power/process.c
+++ b/kernel/power/process.c
@@ -135,7 +135,7 @@
}
/**
- * freeze_processes - tell processes to enter the refrigerator
+ * freeze_processes - Signal user space processes to enter the refrigerator.
*/
int freeze_processes(void)
{
@@ -143,20 +143,30 @@
printk("Freezing user space processes ... ");
error = try_to_freeze_tasks(true);
- if (error)
- goto Exit;
- printk("done.\n");
+ if (!error) {
+ printk("done.");
+ oom_killer_disable();
+ }
+ printk("\n");
+ BUG_ON(in_atomic());
+
+ return error;
+}
+
+/**
+ * freeze_kernel_threads - Make freezable kernel threads go to the refrigerator.
+ */
+int freeze_kernel_threads(void)
+{
+ int error;
printk("Freezing remaining freezable tasks ... ");
error = try_to_freeze_tasks(false);
- if (error)
- goto Exit;
- printk("done.");
+ if (!error)
+ printk("done.");
- oom_killer_disable();
- Exit:
- BUG_ON(in_atomic());
printk("\n");
+ BUG_ON(in_atomic());
return error;
}
diff --git a/kernel/pm_qos_params.c b/kernel/power/qos.c
similarity index 70%
rename from kernel/pm_qos_params.c
rename to kernel/power/qos.c
index 37f05d0..1c1797d 100644
--- a/kernel/pm_qos_params.c
+++ b/kernel/power/qos.c
@@ -29,7 +29,7 @@
/*#define DEBUG*/
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/slab.h>
@@ -45,62 +45,57 @@
#include <linux/uaccess.h>
/*
- * locking rule: all changes to requests or notifiers lists
+ * locking rule: all changes to constraints or notifiers lists
* or pm_qos_object list and pm_qos_objects need to happen with pm_qos_lock
* held, taken with _irqsave. One lock to rule them all
*/
-enum pm_qos_type {
- PM_QOS_MAX, /* return the largest value */
- PM_QOS_MIN /* return the smallest value */
-};
-
-/*
- * Note: The lockless read path depends on the CPU accessing
- * target_value atomically. Atomic access is only guaranteed on all CPU
- * types linux supports for 32 bit quantites
- */
struct pm_qos_object {
- struct plist_head requests;
- struct blocking_notifier_head *notifiers;
+ struct pm_qos_constraints *constraints;
struct miscdevice pm_qos_power_miscdev;
char *name;
- s32 target_value; /* Do not change to 64 bit */
- s32 default_value;
- enum pm_qos_type type;
};
static DEFINE_SPINLOCK(pm_qos_lock);
static struct pm_qos_object null_pm_qos;
+
static BLOCKING_NOTIFIER_HEAD(cpu_dma_lat_notifier);
-static struct pm_qos_object cpu_dma_pm_qos = {
- .requests = PLIST_HEAD_INIT(cpu_dma_pm_qos.requests),
- .notifiers = &cpu_dma_lat_notifier,
- .name = "cpu_dma_latency",
+static struct pm_qos_constraints cpu_dma_constraints = {
+ .list = PLIST_HEAD_INIT(cpu_dma_constraints.list),
.target_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
.default_value = PM_QOS_CPU_DMA_LAT_DEFAULT_VALUE,
.type = PM_QOS_MIN,
+ .notifiers = &cpu_dma_lat_notifier,
+};
+static struct pm_qos_object cpu_dma_pm_qos = {
+ .constraints = &cpu_dma_constraints,
};
static BLOCKING_NOTIFIER_HEAD(network_lat_notifier);
-static struct pm_qos_object network_lat_pm_qos = {
- .requests = PLIST_HEAD_INIT(network_lat_pm_qos.requests),
- .notifiers = &network_lat_notifier,
- .name = "network_latency",
+static struct pm_qos_constraints network_lat_constraints = {
+ .list = PLIST_HEAD_INIT(network_lat_constraints.list),
.target_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
.default_value = PM_QOS_NETWORK_LAT_DEFAULT_VALUE,
- .type = PM_QOS_MIN
+ .type = PM_QOS_MIN,
+ .notifiers = &network_lat_notifier,
+};
+static struct pm_qos_object network_lat_pm_qos = {
+ .constraints = &network_lat_constraints,
+ .name = "network_latency",
};
static BLOCKING_NOTIFIER_HEAD(network_throughput_notifier);
-static struct pm_qos_object network_throughput_pm_qos = {
- .requests = PLIST_HEAD_INIT(network_throughput_pm_qos.requests),
- .notifiers = &network_throughput_notifier,
- .name = "network_throughput",
+static struct pm_qos_constraints network_tput_constraints = {
+ .list = PLIST_HEAD_INIT(network_tput_constraints.list),
.target_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
.default_value = PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE,
.type = PM_QOS_MAX,
+ .notifiers = &network_throughput_notifier,
+};
+static struct pm_qos_object network_throughput_pm_qos = {
+ .constraints = &network_tput_constraints,
+ .name = "network_throughput",
};
@@ -127,17 +122,17 @@
};
/* unlocked internal variant */
-static inline int pm_qos_get_value(struct pm_qos_object *o)
+static inline int pm_qos_get_value(struct pm_qos_constraints *c)
{
- if (plist_head_empty(&o->requests))
- return o->default_value;
+ if (plist_head_empty(&c->list))
+ return c->default_value;
- switch (o->type) {
+ switch (c->type) {
case PM_QOS_MIN:
- return plist_first(&o->requests)->prio;
+ return plist_first(&c->list)->prio;
case PM_QOS_MAX:
- return plist_last(&o->requests)->prio;
+ return plist_last(&c->list)->prio;
default:
/* runtime check for not using enum */
@@ -145,49 +140,217 @@
}
}
-static inline s32 pm_qos_read_value(struct pm_qos_object *o)
+s32 pm_qos_read_value(struct pm_qos_constraints *c)
{
- return o->target_value;
+ return c->target_value;
}
-static inline void pm_qos_set_value(struct pm_qos_object *o, s32 value)
+static inline void pm_qos_set_value(struct pm_qos_constraints *c, s32 value)
{
- o->target_value = value;
+ c->target_value = value;
}
-static void update_target(struct pm_qos_object *o, struct plist_node *node,
- int del, int value)
+/**
+ * pm_qos_update_target - manages the constraints list and calls the notifiers
+ * if needed
+ * @c: constraints data struct
+ * @node: request to add to the list, to update or to remove
+ * @action: action to take on the constraints list
+ * @value: value of the request to add or update
+ *
+ * This function returns 1 if the aggregated constraint value has changed, 0
+ * otherwise.
+ */
+int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
+ enum pm_qos_req_action action, int value)
{
unsigned long flags;
- int prev_value, curr_value;
+ int prev_value, curr_value, new_value;
spin_lock_irqsave(&pm_qos_lock, flags);
- prev_value = pm_qos_get_value(o);
- /* PM_QOS_DEFAULT_VALUE is a signal that the value is unchanged */
- if (value != PM_QOS_DEFAULT_VALUE) {
+ prev_value = pm_qos_get_value(c);
+ if (value == PM_QOS_DEFAULT_VALUE)
+ new_value = c->default_value;
+ else
+ new_value = value;
+
+ switch (action) {
+ case PM_QOS_REMOVE_REQ:
+ plist_del(node, &c->list);
+ break;
+ case PM_QOS_UPDATE_REQ:
/*
* to change the list, we atomically remove, reinit
* with new value and add, then see if the extremal
* changed
*/
- plist_del(node, &o->requests);
- plist_node_init(node, value);
- plist_add(node, &o->requests);
- } else if (del) {
- plist_del(node, &o->requests);
- } else {
- plist_add(node, &o->requests);
+ plist_del(node, &c->list);
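+ /* fall through: PM_QOS_ADD_REQ re-inits the node with the new value */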
+ case PM_QOS_ADD_REQ:
+ plist_node_init(node, new_value);
+ plist_add(node, &c->list);
+ break;
+ default:
+ /* no action */
+ ;
}
- curr_value = pm_qos_get_value(o);
- pm_qos_set_value(o, curr_value);
+
+ curr_value = pm_qos_get_value(c);
+ pm_qos_set_value(c, curr_value);
+
spin_unlock_irqrestore(&pm_qos_lock, flags);
- if (prev_value != curr_value)
- blocking_notifier_call_chain(o->notifiers,
+ if (prev_value != curr_value) {
+ blocking_notifier_call_chain(c->notifiers,
(unsigned long)curr_value,
NULL);
+ return 1;
+ } else {
+ return 0;
+ }
}
+/**
+ * pm_qos_request - returns current system wide qos expectation
+ * @pm_qos_class: identification of which qos value is requested
+ *
+ * This function returns the current target value.
+ */
+int pm_qos_request(int pm_qos_class)
+{
+ return pm_qos_read_value(pm_qos_array[pm_qos_class]->constraints);
+}
+EXPORT_SYMBOL_GPL(pm_qos_request);
+
+int pm_qos_request_active(struct pm_qos_request *req)
+{
+ return req->pm_qos_class != 0;
+}
+EXPORT_SYMBOL_GPL(pm_qos_request_active);
+
+/**
+ * pm_qos_add_request - inserts new qos request into the list
+ * @req: pointer to a preallocated handle
+ * @pm_qos_class: identifies which list of qos request to use
+ * @value: defines the qos request
+ *
+ * This function inserts a new entry in the pm_qos_class list of requested qos
+ * performance characteristics. It recomputes the aggregate QoS expectations
+ * for the pm_qos_class of parameters and initializes the pm_qos_request
+ * handle. Caller needs to save this handle for later use in updates and
+ * removal.
+ */
+
+void pm_qos_add_request(struct pm_qos_request *req,
+ int pm_qos_class, s32 value)
+{
+ if (!req) /* guard against callers passing in null */
+ return;
+
+ if (pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
+ return;
+ }
+ req->pm_qos_class = pm_qos_class;
+ pm_qos_update_target(pm_qos_array[pm_qos_class]->constraints,
+ &req->node, PM_QOS_ADD_REQ, value);
+}
+EXPORT_SYMBOL_GPL(pm_qos_add_request);
+
+/**
+ * pm_qos_update_request - modifies an existing qos request
+ * @req : handle to list element holding a pm_qos request to use
+ * @value: defines the qos request
+ *
+ * Updates an existing qos request for the pm_qos_class of parameters along
+ * with updating the target pm_qos_class value.
+ *
+ * Attempts are made to make this code callable on hot code paths.
+ */
+void pm_qos_update_request(struct pm_qos_request *req,
+ s32 new_value)
+{
+ if (!req) /* guard against callers passing in null */
+ return;
+
+ if (!pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
+ return;
+ }
+
+ if (new_value != req->node.prio)
+ pm_qos_update_target(
+ pm_qos_array[req->pm_qos_class]->constraints,
+ &req->node, PM_QOS_UPDATE_REQ, new_value);
+}
+EXPORT_SYMBOL_GPL(pm_qos_update_request);
+
+/**
+ * pm_qos_remove_request - removes an existing qos request
+ * @req: handle to request list element
+ *
+ * Will remove pm qos request from the list of constraints and
+ * recompute the current target value for the pm_qos_class. Call this
+ * on slow code paths.
+ */
+void pm_qos_remove_request(struct pm_qos_request *req)
+{
+ if (!req) /* guard against callers passing in null */
+ return;
+ /* silent return to keep pcm code cleaner */
+
+ if (!pm_qos_request_active(req)) {
+ WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
+ return;
+ }
+
+ pm_qos_update_target(pm_qos_array[req->pm_qos_class]->constraints,
+ &req->node, PM_QOS_REMOVE_REQ,
+ PM_QOS_DEFAULT_VALUE);
+ memset(req, 0, sizeof(*req));
+}
+EXPORT_SYMBOL_GPL(pm_qos_remove_request);
+
+/**
+ * pm_qos_add_notifier - sets notification entry for changes to target value
+ * @pm_qos_class: identifies which qos target changes should be notified.
+ * @notifier: notifier block managed by caller.
+ *
+ * will register the notifier into a notification chain that gets called
+ * upon changes to the pm_qos_class target value.
+ */
+int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
+{
+ int retval;
+
+ retval = blocking_notifier_chain_register(
+ pm_qos_array[pm_qos_class]->constraints->notifiers,
+ notifier);
+
+ return retval;
+}
+EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
+
+/**
+ * pm_qos_remove_notifier - deletes notification entry from chain.
+ * @pm_qos_class: identifies which qos target changes are notified.
+ * @notifier: notifier block to be removed.
+ *
+ * will remove the notifier from the notification chain that gets called
+ * upon changes to the pm_qos_class target value.
+ */
+int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
+{
+ int retval;
+
+ retval = blocking_notifier_chain_unregister(
+ pm_qos_array[pm_qos_class]->constraints->notifiers,
+ notifier);
+
+ return retval;
+}
+EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
+
+/* User space interface to PM QoS classes via misc devices */
static int register_pm_qos_misc(struct pm_qos_object *qos)
{
qos->pm_qos_power_miscdev.minor = MISC_DYNAMIC_MINOR;
@@ -210,165 +373,13 @@
return -1;
}
-/**
- * pm_qos_request - returns current system wide qos expectation
- * @pm_qos_class: identification of which qos value is requested
- *
- * This function returns the current target value.
- */
-int pm_qos_request(int pm_qos_class)
-{
- return pm_qos_read_value(pm_qos_array[pm_qos_class]);
-}
-EXPORT_SYMBOL_GPL(pm_qos_request);
-
-int pm_qos_request_active(struct pm_qos_request_list *req)
-{
- return req->pm_qos_class != 0;
-}
-EXPORT_SYMBOL_GPL(pm_qos_request_active);
-
-/**
- * pm_qos_add_request - inserts new qos request into the list
- * @dep: pointer to a preallocated handle
- * @pm_qos_class: identifies which list of qos request to use
- * @value: defines the qos request
- *
- * This function inserts a new entry in the pm_qos_class list of requested qos
- * performance characteristics. It recomputes the aggregate QoS expectations
- * for the pm_qos_class of parameters and initializes the pm_qos_request_list
- * handle. Caller needs to save this handle for later use in updates and
- * removal.
- */
-
-void pm_qos_add_request(struct pm_qos_request_list *dep,
- int pm_qos_class, s32 value)
-{
- struct pm_qos_object *o = pm_qos_array[pm_qos_class];
- int new_value;
-
- if (pm_qos_request_active(dep)) {
- WARN(1, KERN_ERR "pm_qos_add_request() called for already added request\n");
- return;
- }
- if (value == PM_QOS_DEFAULT_VALUE)
- new_value = o->default_value;
- else
- new_value = value;
- plist_node_init(&dep->list, new_value);
- dep->pm_qos_class = pm_qos_class;
- update_target(o, &dep->list, 0, PM_QOS_DEFAULT_VALUE);
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_request);
-
-/**
- * pm_qos_update_request - modifies an existing qos request
- * @pm_qos_req : handle to list element holding a pm_qos request to use
- * @value: defines the qos request
- *
- * Updates an existing qos request for the pm_qos_class of parameters along
- * with updating the target pm_qos_class value.
- *
- * Attempts are made to make this code callable on hot code paths.
- */
-void pm_qos_update_request(struct pm_qos_request_list *pm_qos_req,
- s32 new_value)
-{
- s32 temp;
- struct pm_qos_object *o;
-
- if (!pm_qos_req) /*guard against callers passing in null */
- return;
-
- if (!pm_qos_request_active(pm_qos_req)) {
- WARN(1, KERN_ERR "pm_qos_update_request() called for unknown object\n");
- return;
- }
-
- o = pm_qos_array[pm_qos_req->pm_qos_class];
-
- if (new_value == PM_QOS_DEFAULT_VALUE)
- temp = o->default_value;
- else
- temp = new_value;
-
- if (temp != pm_qos_req->list.prio)
- update_target(o, &pm_qos_req->list, 0, temp);
-}
-EXPORT_SYMBOL_GPL(pm_qos_update_request);
-
-/**
- * pm_qos_remove_request - modifies an existing qos request
- * @pm_qos_req: handle to request list element
- *
- * Will remove pm qos request from the list of requests and
- * recompute the current target value for the pm_qos_class. Call this
- * on slow code paths.
- */
-void pm_qos_remove_request(struct pm_qos_request_list *pm_qos_req)
-{
- struct pm_qos_object *o;
-
- if (pm_qos_req == NULL)
- return;
- /* silent return to keep pcm code cleaner */
-
- if (!pm_qos_request_active(pm_qos_req)) {
- WARN(1, KERN_ERR "pm_qos_remove_request() called for unknown object\n");
- return;
- }
-
- o = pm_qos_array[pm_qos_req->pm_qos_class];
- update_target(o, &pm_qos_req->list, 1, PM_QOS_DEFAULT_VALUE);
- memset(pm_qos_req, 0, sizeof(*pm_qos_req));
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_request);
-
-/**
- * pm_qos_add_notifier - sets notification entry for changes to target value
- * @pm_qos_class: identifies which qos target changes should be notified.
- * @notifier: notifier block managed by caller.
- *
- * will register the notifier into a notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_add_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
- int retval;
-
- retval = blocking_notifier_chain_register(
- pm_qos_array[pm_qos_class]->notifiers, notifier);
-
- return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_add_notifier);
-
-/**
- * pm_qos_remove_notifier - deletes notification entry from chain.
- * @pm_qos_class: identifies which qos target changes are notified.
- * @notifier: notifier block to be removed.
- *
- * will remove the notifier from the notification chain that gets called
- * upon changes to the pm_qos_class target value.
- */
-int pm_qos_remove_notifier(int pm_qos_class, struct notifier_block *notifier)
-{
- int retval;
-
- retval = blocking_notifier_chain_unregister(
- pm_qos_array[pm_qos_class]->notifiers, notifier);
-
- return retval;
-}
-EXPORT_SYMBOL_GPL(pm_qos_remove_notifier);
-
static int pm_qos_power_open(struct inode *inode, struct file *filp)
{
long pm_qos_class;
pm_qos_class = find_pm_qos_object_by_minor(iminor(inode));
if (pm_qos_class >= 0) {
- struct pm_qos_request_list *req = kzalloc(sizeof(*req), GFP_KERNEL);
+ struct pm_qos_request *req = kzalloc(sizeof(*req), GFP_KERNEL);
if (!req)
return -ENOMEM;
@@ -383,7 +394,7 @@
static int pm_qos_power_release(struct inode *inode, struct file *filp)
{
- struct pm_qos_request_list *req;
+ struct pm_qos_request *req;
req = filp->private_data;
pm_qos_remove_request(req);
@@ -398,17 +409,15 @@
{
s32 value;
unsigned long flags;
- struct pm_qos_object *o;
- struct pm_qos_request_list *pm_qos_req = filp->private_data;
+ struct pm_qos_request *req = filp->private_data;
- if (!pm_qos_req)
+ if (!req)
return -EINVAL;
- if (!pm_qos_request_active(pm_qos_req))
+ if (!pm_qos_request_active(req))
return -EINVAL;
- o = pm_qos_array[pm_qos_req->pm_qos_class];
spin_lock_irqsave(&pm_qos_lock, flags);
- value = pm_qos_get_value(o);
+ value = pm_qos_get_value(pm_qos_array[req->pm_qos_class]->constraints);
spin_unlock_irqrestore(&pm_qos_lock, flags);
return simple_read_from_buffer(buf, count, f_pos, &value, sizeof(s32));
@@ -418,7 +427,7 @@
size_t count, loff_t *f_pos)
{
s32 value;
- struct pm_qos_request_list *pm_qos_req;
+ struct pm_qos_request *req;
if (count == sizeof(s32)) {
if (copy_from_user(&value, buf, sizeof(s32)))
@@ -449,8 +458,8 @@
return -EINVAL;
}
- pm_qos_req = filp->private_data;
- pm_qos_update_request(pm_qos_req, value);
+ req = filp->private_data;
+ pm_qos_update_request(req, value);
return count;
}
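For kernel-side users, the externally visible change here is the type rename from struct pm_qos_request_list to struct pm_qos_request (see the include/sound/pcm.h hunk above). A minimal hedged sketch of a driver using the updated API (request name and latency values are illustrative):

    #include <linux/pm_qos.h>

    static struct pm_qos_request my_latency_req;

    static void my_driver_start(void)
    {
            /* cap CPU DMA latency at 20 usec while the device is active */
            pm_qos_add_request(&my_latency_req, PM_QOS_CPU_DMA_LATENCY, 20);
    }

    static void my_driver_adjust(void)
    {
            pm_qos_update_request(&my_latency_req, 50);
    }

    static void my_driver_stop(void)
    {
            pm_qos_remove_request(&my_latency_req);
    }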
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 06efa54..cbe2c14 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1339,6 +1339,9 @@
count += highmem;
count -= totalreserve_pages;
+ /* Add number of pages required for page keys (s390 only). */
+ size += page_key_additional_pages(saveable);
+
/* Compute the maximum number of saveable pages to leave in memory. */
max_size = (count - (size + PAGES_FOR_IO)) / 2
- 2 * DIV_ROUND_UP(reserved_size, PAGE_SIZE);
@@ -1662,6 +1665,8 @@
buf[j] = memory_bm_next_pfn(bm);
if (unlikely(buf[j] == BM_END_OF_MAP))
break;
+ /* Save page key for data page (s390 only). */
+ page_key_read(buf + j);
}
}
@@ -1821,6 +1826,9 @@
if (unlikely(buf[j] == BM_END_OF_MAP))
break;
+ /* Extract and buffer page key for data page (s390 only). */
+ page_key_memorize(buf + j);
+
if (memory_bm_pfn_present(bm, buf[j]))
memory_bm_set_bit(bm, buf[j]);
else
@@ -2223,6 +2231,11 @@
if (error)
return error;
+ /* Allocate buffer for page keys. */
+ error = page_key_alloc(nr_copy_pages);
+ if (error)
+ return error;
+
} else if (handle->cur <= nr_meta_pages + 1) {
error = unpack_orig_pfns(buffer, &copy_bm);
if (error)
@@ -2243,6 +2256,8 @@
}
} else {
copy_last_highmem_page();
+ /* Restore page key for data page (s390 only). */
+ page_key_write(handle->buffer);
handle->buffer = get_buffer(&orig_bm, &ca);
if (IS_ERR(handle->buffer))
return PTR_ERR(handle->buffer);
@@ -2264,6 +2279,9 @@
void snapshot_write_finalize(struct snapshot_handle *handle)
{
copy_last_highmem_page();
+ /* Restore page key for data page (s390 only). */
+ page_key_write(handle->buffer);
+ page_key_free();
/* Free only if we have loaded the image entirely */
if (handle->cur > 1 && handle->cur > nr_meta_pages + nr_copy_pages) {
memory_bm_free(&orig_bm, PG_UNSAFE_CLEAR);
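Taken together, the page_key_*() calls added to snapshot.c pair up as follows (all of them compile to no-ops unless CONFIG_ARCH_SAVE_PAGE_KEYS is selected; see the suspend.h hunk above):

    /* save:     page_key_read(&pfn) for every pfn packed into the image  */
    /* restore:  page_key_alloc(nr_copy_pages) once at header time, then  */
    /*           page_key_memorize(&pfn) for each pfn read back and       */
    /*           page_key_write(buffer) for each restored data page       */
    /* finalize: page_key_write(handle->buffer) for the last page,        */
    /*           then page_key_free()                                     */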
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index b6b71ad..fdd4263 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -104,7 +104,10 @@
goto Finish;
error = suspend_freeze_processes();
- if (!error)
+ if (error) {
+ suspend_stats.failed_freeze++;
+ dpm_save_failed_step(SUSPEND_FREEZE);
+ } else
return 0;
suspend_thaw_processes();
@@ -315,8 +318,16 @@
*/
int pm_suspend(suspend_state_t state)
{
- if (state > PM_SUSPEND_ON && state <= PM_SUSPEND_MAX)
- return enter_state(state);
+ int ret;
+ if (state > PM_SUSPEND_ON && state < PM_SUSPEND_MAX) {
+ ret = enter_state(state);
+ if (ret) {
+ suspend_stats.fail++;
+ dpm_save_failed_errno(ret);
+ } else
+ suspend_stats.success++;
+ return ret;
+ }
return -EINVAL;
}
EXPORT_SYMBOL(pm_suspend);
diff --git a/kernel/power/swap.c b/kernel/power/swap.c
index 7c97c3a..11a594c 100644
--- a/kernel/power/swap.c
+++ b/kernel/power/swap.c
@@ -27,6 +27,10 @@
#include <linux/slab.h>
#include <linux/lzo.h>
#include <linux/vmalloc.h>
+#include <linux/cpumask.h>
+#include <linux/atomic.h>
+#include <linux/kthread.h>
+#include <linux/crc32.h>
#include "power.h"
@@ -43,8 +47,7 @@
* allocated and populated one at a time, so we only need one memory
* page to set up the entire structure.
*
- * During resume we also only need to use one swap_map_page structure
- * at a time.
+ * During resume we pick up all swap_map_page structures into a list.
*/
#define MAP_PAGE_ENTRIES (PAGE_SIZE / sizeof(sector_t) - 1)
@@ -54,6 +57,11 @@
sector_t next_swap;
};
+struct swap_map_page_list {
+ struct swap_map_page *map;
+ struct swap_map_page_list *next;
+};
+
/**
* The swap_map_handle structure is used for handling swap in
* a file-alike way
@@ -61,13 +69,18 @@
struct swap_map_handle {
struct swap_map_page *cur;
+ struct swap_map_page_list *maps;
sector_t cur_swap;
sector_t first_sector;
unsigned int k;
+ unsigned long nr_free_pages, written;
+ u32 crc32;
};
struct swsusp_header {
- char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int)];
+ char reserved[PAGE_SIZE - 20 - sizeof(sector_t) - sizeof(int) -
+ sizeof(u32)];
+ u32 crc32;
sector_t image;
unsigned int flags; /* Flags to pass to the "boot" kernel */
char orig_sig[10];
@@ -199,6 +212,8 @@
memcpy(swsusp_header->sig, HIBERNATE_SIG, 10);
swsusp_header->image = handle->first_sector;
swsusp_header->flags = flags;
+ if (flags & SF_CRC32_MODE)
+ swsusp_header->crc32 = handle->crc32;
error = hib_bio_write_page(swsusp_resume_block,
swsusp_header, NULL);
} else {
@@ -245,6 +260,7 @@
static int write_page(void *buf, sector_t offset, struct bio **bio_chain)
{
void *src;
+ int ret;
if (!offset)
return -ENOSPC;
@@ -254,9 +270,17 @@
if (src) {
copy_page(src, buf);
} else {
- WARN_ON_ONCE(1);
- bio_chain = NULL; /* Go synchronous */
- src = buf;
+ ret = hib_wait_on_bio_chain(bio_chain); /* Free pages */
+ if (ret)
+ return ret;
+ src = (void *)__get_free_page(__GFP_WAIT | __GFP_HIGH);
+ if (src) {
+ copy_page(src, buf);
+ } else {
+ WARN_ON_ONCE(1);
+ bio_chain = NULL; /* Go synchronous */
+ src = buf;
+ }
}
} else {
src = buf;
@@ -293,6 +317,8 @@
goto err_rel;
}
handle->k = 0;
+ handle->nr_free_pages = nr_free_pages() >> 1;
+ handle->written = 0;
handle->first_sector = handle->cur_swap;
return 0;
err_rel:
@@ -316,20 +342,23 @@
return error;
handle->cur->entries[handle->k++] = offset;
if (handle->k >= MAP_PAGE_ENTRIES) {
- error = hib_wait_on_bio_chain(bio_chain);
- if (error)
- goto out;
offset = alloc_swapdev_block(root_swap);
if (!offset)
return -ENOSPC;
handle->cur->next_swap = offset;
- error = write_page(handle->cur, handle->cur_swap, NULL);
+ error = write_page(handle->cur, handle->cur_swap, bio_chain);
if (error)
goto out;
clear_page(handle->cur);
handle->cur_swap = offset;
handle->k = 0;
}
+ if (bio_chain && ++handle->written > handle->nr_free_pages) {
+ error = hib_wait_on_bio_chain(bio_chain);
+ if (error)
+ goto out;
+ handle->written = 0;
+ }
out:
return error;
}
@@ -372,6 +401,13 @@
LZO_HEADER, PAGE_SIZE)
#define LZO_CMP_SIZE (LZO_CMP_PAGES * PAGE_SIZE)
+/* Maximum number of threads for compression/decompression. */
+#define LZO_THREADS 3
+
+/* Maximum number of pages for read buffering. */
+#define LZO_READ_PAGES (MAP_PAGE_ENTRIES * 8)
+
+
/**
* save_image - save the suspend image data
*/
@@ -419,6 +455,92 @@
return ret;
}
+/**
+ * Structure used for CRC32.
+ */
+struct crc_data {
+ struct task_struct *thr; /* thread */
+ atomic_t ready; /* ready to start flag */
+ atomic_t stop; /* ready to stop flag */
+ unsigned run_threads; /* nr current threads */
+ wait_queue_head_t go; /* start crc update */
+ wait_queue_head_t done; /* crc update done */
+ u32 *crc32; /* points to handle's crc32 */
+ size_t *unc_len[LZO_THREADS]; /* uncompressed lengths */
+ unsigned char *unc[LZO_THREADS]; /* uncompressed data */
+};
+
+/**
+ * CRC32 update function that runs in its own thread.
+ */
+static int crc32_threadfn(void *data)
+{
+ struct crc_data *d = data;
+ unsigned i;
+
+ while (1) {
+ wait_event(d->go, atomic_read(&d->ready) ||
+ kthread_should_stop());
+ if (kthread_should_stop()) {
+ d->thr = NULL;
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ break;
+ }
+ atomic_set(&d->ready, 0);
+
+ for (i = 0; i < d->run_threads; i++)
+ *d->crc32 = crc32_le(*d->crc32,
+ d->unc[i], *d->unc_len[i]);
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ }
+ return 0;
+}
+/**
+ * Structure used for LZO data compression.
+ */
+struct cmp_data {
+ struct task_struct *thr; /* thread */
+ atomic_t ready; /* ready to start flag */
+ atomic_t stop; /* ready to stop flag */
+ int ret; /* return code */
+ wait_queue_head_t go; /* start compression */
+ wait_queue_head_t done; /* compression done */
+ size_t unc_len; /* uncompressed length */
+ size_t cmp_len; /* compressed length */
+ unsigned char unc[LZO_UNC_SIZE]; /* uncompressed buffer */
+ unsigned char cmp[LZO_CMP_SIZE]; /* compressed buffer */
+ unsigned char wrk[LZO1X_1_MEM_COMPRESS]; /* compression workspace */
+};
+
+/**
+ * Compression function that runs in its own thread.
+ */
+static int lzo_compress_threadfn(void *data)
+{
+ struct cmp_data *d = data;
+
+ while (1) {
+ wait_event(d->go, atomic_read(&d->ready) ||
+ kthread_should_stop());
+ if (kthread_should_stop()) {
+ d->thr = NULL;
+ d->ret = -1;
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ break;
+ }
+ atomic_set(&d->ready, 0);
+
+ d->ret = lzo1x_1_compress(d->unc, d->unc_len,
+ d->cmp + LZO_HEADER, &d->cmp_len,
+ d->wrk);
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ }
+ return 0;
+}
/**
* save_image_lzo - Save the suspend image data compressed with LZO.
@@ -437,42 +559,93 @@
struct bio *bio;
struct timeval start;
struct timeval stop;
- size_t off, unc_len, cmp_len;
- unsigned char *unc, *cmp, *wrk, *page;
+ size_t off;
+ unsigned thr, run_threads, nr_threads;
+ unsigned char *page = NULL;
+ struct cmp_data *data = NULL;
+ struct crc_data *crc = NULL;
+
+ /*
+ * Bound the number of compression threads, to limit the memory
+ * footprint.
+ */
+ nr_threads = num_online_cpus() - 1;
+ nr_threads = clamp_val(nr_threads, 1, LZO_THREADS);
page = (void *)__get_free_page(__GFP_WAIT | __GFP_HIGH);
if (!page) {
printk(KERN_ERR "PM: Failed to allocate LZO page\n");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_clean;
}
- wrk = vmalloc(LZO1X_1_MEM_COMPRESS);
- if (!wrk) {
- printk(KERN_ERR "PM: Failed to allocate LZO workspace\n");
- free_page((unsigned long)page);
- return -ENOMEM;
+ data = vmalloc(sizeof(*data) * nr_threads);
+ if (!data) {
+ printk(KERN_ERR "PM: Failed to allocate LZO data\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
+ for (thr = 0; thr < nr_threads; thr++)
+ memset(&data[thr], 0, offsetof(struct cmp_data, go));
+
+ crc = kmalloc(sizeof(*crc), GFP_KERNEL);
+ if (!crc) {
+ printk(KERN_ERR "PM: Failed to allocate crc\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
+ memset(crc, 0, offsetof(struct crc_data, go));
+
+ /*
+ * Start the compression threads.
+ */
+ for (thr = 0; thr < nr_threads; thr++) {
+ init_waitqueue_head(&data[thr].go);
+ init_waitqueue_head(&data[thr].done);
+
+ data[thr].thr = kthread_run(lzo_compress_threadfn,
+ &data[thr],
+ "image_compress/%u", thr);
+ if (IS_ERR(data[thr].thr)) {
+ data[thr].thr = NULL;
+ printk(KERN_ERR
+ "PM: Cannot start compression threads\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
}
- unc = vmalloc(LZO_UNC_SIZE);
- if (!unc) {
- printk(KERN_ERR "PM: Failed to allocate LZO uncompressed\n");
- vfree(wrk);
- free_page((unsigned long)page);
- return -ENOMEM;
+ /*
+ * Adjust number of free pages after all allocations have been done.
+ * We don't want to run out of pages when writing.
+ */
+ handle->nr_free_pages = nr_free_pages() >> 1;
+
+ /*
+ * Start the CRC32 thread.
+ */
+ init_waitqueue_head(&crc->go);
+ init_waitqueue_head(&crc->done);
+
+ handle->crc32 = 0;
+ crc->crc32 = &handle->crc32;
+ for (thr = 0; thr < nr_threads; thr++) {
+ crc->unc[thr] = data[thr].unc;
+ crc->unc_len[thr] = &data[thr].unc_len;
}
- cmp = vmalloc(LZO_CMP_SIZE);
- if (!cmp) {
- printk(KERN_ERR "PM: Failed to allocate LZO compressed\n");
- vfree(unc);
- vfree(wrk);
- free_page((unsigned long)page);
- return -ENOMEM;
+ crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32");
+ if (IS_ERR(crc->thr)) {
+ crc->thr = NULL;
+ printk(KERN_ERR "PM: Cannot start CRC32 thread\n");
+ ret = -ENOMEM;
+ goto out_clean;
}
printk(KERN_INFO
+ "PM: Using %u thread(s) for compression.\n"
"PM: Compressing and saving image data (%u pages) ... ",
- nr_to_write);
+ nr_threads, nr_to_write);
m = nr_to_write / 100;
if (!m)
m = 1;
@@ -480,55 +653,83 @@
bio = NULL;
do_gettimeofday(&start);
for (;;) {
- for (off = 0; off < LZO_UNC_SIZE; off += PAGE_SIZE) {
- ret = snapshot_read_next(snapshot);
- if (ret < 0)
- goto out_finish;
+ for (thr = 0; thr < nr_threads; thr++) {
+ for (off = 0; off < LZO_UNC_SIZE; off += PAGE_SIZE) {
+ ret = snapshot_read_next(snapshot);
+ if (ret < 0)
+ goto out_finish;
- if (!ret)
+ if (!ret)
+ break;
+
+ memcpy(data[thr].unc + off,
+ data_of(*snapshot), PAGE_SIZE);
+
+ if (!(nr_pages % m))
+ printk(KERN_CONT "\b\b\b\b%3d%%",
+ nr_pages / m);
+ nr_pages++;
+ }
+ if (!off)
break;
- memcpy(unc + off, data_of(*snapshot), PAGE_SIZE);
+ data[thr].unc_len = off;
- if (!(nr_pages % m))
- printk(KERN_CONT "\b\b\b\b%3d%%", nr_pages / m);
- nr_pages++;
+ atomic_set(&data[thr].ready, 1);
+ wake_up(&data[thr].go);
}
- if (!off)
+ if (!thr)
break;
- unc_len = off;
- ret = lzo1x_1_compress(unc, unc_len,
- cmp + LZO_HEADER, &cmp_len, wrk);
- if (ret < 0) {
- printk(KERN_ERR "PM: LZO compression failed\n");
- break;
- }
+ crc->run_threads = thr;
+ atomic_set(&crc->ready, 1);
+ wake_up(&crc->go);
- if (unlikely(!cmp_len ||
- cmp_len > lzo1x_worst_compress(unc_len))) {
- printk(KERN_ERR "PM: Invalid LZO compressed length\n");
- ret = -1;
- break;
- }
+ for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ wait_event(data[thr].done,
+ atomic_read(&data[thr].stop));
+ atomic_set(&data[thr].stop, 0);
- *(size_t *)cmp = cmp_len;
+ ret = data[thr].ret;
- /*
- * Given we are writing one page at a time to disk, we copy
- * that much from the buffer, although the last bit will likely
- * be smaller than full page. This is OK - we saved the length
- * of the compressed data, so any garbage at the end will be
- * discarded when we read it.
- */
- for (off = 0; off < LZO_HEADER + cmp_len; off += PAGE_SIZE) {
- memcpy(page, cmp + off, PAGE_SIZE);
-
- ret = swap_write_page(handle, page, &bio);
- if (ret)
+ if (ret < 0) {
+ printk(KERN_ERR "PM: LZO compression failed\n");
goto out_finish;
+ }
+
+ if (unlikely(!data[thr].cmp_len ||
+ data[thr].cmp_len >
+ lzo1x_worst_compress(data[thr].unc_len))) {
+ printk(KERN_ERR
+ "PM: Invalid LZO compressed length\n");
+ ret = -1;
+ goto out_finish;
+ }
+
+ *(size_t *)data[thr].cmp = data[thr].cmp_len;
+
+ /*
+ * Given we are writing one page at a time to disk, we
+ * copy that much from the buffer, although the last
+ * bit will likely be smaller than full page. This is
+ * OK - we saved the length of the compressed data, so
+ * any garbage at the end will be discarded when we
+ * read it.
+ */
+ for (off = 0;
+ off < LZO_HEADER + data[thr].cmp_len;
+ off += PAGE_SIZE) {
+ memcpy(page, data[thr].cmp + off, PAGE_SIZE);
+
+ ret = swap_write_page(handle, page, &bio);
+ if (ret)
+ goto out_finish;
+ }
}
+
+ wait_event(crc->done, atomic_read(&crc->stop));
+ atomic_set(&crc->stop, 0);
}
out_finish:
@@ -536,16 +737,25 @@
do_gettimeofday(&stop);
if (!ret)
ret = err2;
- if (!ret)
+ if (!ret) {
printk(KERN_CONT "\b\b\b\bdone\n");
- else
+ } else {
printk(KERN_CONT "\n");
+ }
swsusp_show_speed(&start, &stop, nr_to_write, "Wrote");
-
- vfree(cmp);
- vfree(unc);
- vfree(wrk);
- free_page((unsigned long)page);
+out_clean:
+ if (crc) {
+ if (crc->thr)
+ kthread_stop(crc->thr);
+ kfree(crc);
+ }
+ if (data) {
+ for (thr = 0; thr < nr_threads; thr++)
+ if (data[thr].thr)
+ kthread_stop(data[thr].thr);
+ vfree(data);
+ }
+ if (page)
+ free_page((unsigned long)page);
return ret;
}
@@ -625,8 +835,15 @@
static void release_swap_reader(struct swap_map_handle *handle)
{
- if (handle->cur)
- free_page((unsigned long)handle->cur);
+ struct swap_map_page_list *tmp;
+
+ while (handle->maps) {
+ if (handle->maps->map)
+ free_page((unsigned long)handle->maps->map);
+ tmp = handle->maps;
+ handle->maps = handle->maps->next;
+ kfree(tmp);
+ }
handle->cur = NULL;
}
@@ -634,22 +851,46 @@
unsigned int *flags_p)
{
int error;
+ struct swap_map_page_list *tmp, *last;
+ sector_t offset;
*flags_p = swsusp_header->flags;
if (!swsusp_header->image) /* how can this happen? */
return -EINVAL;
- handle->cur = (struct swap_map_page *)get_zeroed_page(__GFP_WAIT | __GFP_HIGH);
- if (!handle->cur)
- return -ENOMEM;
+ handle->cur = NULL;
+ last = handle->maps = NULL;
+ offset = swsusp_header->image;
+ while (offset) {
+ tmp = kmalloc(sizeof(*handle->maps), GFP_KERNEL);
+ if (!tmp) {
+ release_swap_reader(handle);
+ return -ENOMEM;
+ }
+ memset(tmp, 0, sizeof(*tmp));
+ if (!handle->maps)
+ handle->maps = tmp;
+ if (last)
+ last->next = tmp;
+ last = tmp;
- error = hib_bio_read_page(swsusp_header->image, handle->cur, NULL);
- if (error) {
- release_swap_reader(handle);
- return error;
+ tmp->map = (struct swap_map_page *)
+ __get_free_page(__GFP_WAIT | __GFP_HIGH);
+ if (!tmp->map) {
+ release_swap_reader(handle);
+ return -ENOMEM;
+ }
+
+ error = hib_bio_read_page(offset, tmp->map, NULL);
+ if (error) {
+ release_swap_reader(handle);
+ return error;
+ }
+ offset = tmp->map->next_swap;
}
handle->k = 0;
+ handle->cur = handle->maps->map;
return 0;
}
@@ -658,6 +899,7 @@
{
sector_t offset;
int error;
+ struct swap_map_page_list *tmp;
if (!handle->cur)
return -EINVAL;
@@ -668,13 +910,15 @@
if (error)
return error;
if (++handle->k >= MAP_PAGE_ENTRIES) {
- error = hib_wait_on_bio_chain(bio_chain);
handle->k = 0;
- offset = handle->cur->next_swap;
- if (!offset)
+ free_page((unsigned long)handle->maps->map);
+ tmp = handle->maps;
+ handle->maps = handle->maps->next;
+ kfree(tmp);
+ if (!handle->maps)
release_swap_reader(handle);
- else if (!error)
- error = hib_bio_read_page(offset, handle->cur, NULL);
+ else
+ handle->cur = handle->maps->map;
}
return error;
}
@@ -697,7 +941,7 @@
unsigned int nr_to_read)
{
unsigned int m;
- int error = 0;
+ int ret = 0;
struct timeval start;
struct timeval stop;
struct bio *bio;
@@ -713,15 +957,15 @@
bio = NULL;
do_gettimeofday(&start);
for ( ; ; ) {
- error = snapshot_write_next(snapshot);
- if (error <= 0)
+ ret = snapshot_write_next(snapshot);
+ if (ret <= 0)
break;
- error = swap_read_page(handle, data_of(*snapshot), &bio);
- if (error)
+ ret = swap_read_page(handle, data_of(*snapshot), &bio);
+ if (ret)
break;
if (snapshot->sync_read)
- error = hib_wait_on_bio_chain(&bio);
- if (error)
+ ret = hib_wait_on_bio_chain(&bio);
+ if (ret)
break;
if (!(nr_pages % m))
printk("\b\b\b\b%3d%%", nr_pages / m);
@@ -729,17 +973,61 @@
}
err2 = hib_wait_on_bio_chain(&bio);
do_gettimeofday(&stop);
- if (!error)
- error = err2;
- if (!error) {
+ if (!ret)
+ ret = err2;
+ if (!ret) {
printk("\b\b\b\bdone\n");
snapshot_write_finalize(snapshot);
if (!snapshot_image_loaded(snapshot))
- error = -ENODATA;
+ ret = -ENODATA;
} else
printk("\n");
swsusp_show_speed(&start, &stop, nr_to_read, "Read");
- return error;
+ return ret;
+}
+
+/**
+ * Structure used for LZO data decompression.
+ */
+struct dec_data {
+ struct task_struct *thr; /* thread */
+ atomic_t ready; /* ready to start flag */
+ atomic_t stop; /* ready to stop flag */
+ int ret; /* return code */
+ wait_queue_head_t go; /* start decompression */
+ wait_queue_head_t done; /* decompression done */
+ size_t unc_len; /* uncompressed length */
+ size_t cmp_len; /* compressed length */
+ unsigned char unc[LZO_UNC_SIZE]; /* uncompressed buffer */
+ unsigned char cmp[LZO_CMP_SIZE]; /* compressed buffer */
+};
+
+/**
+ * Decompression function that runs in its own thread.
+ */
+static int lzo_decompress_threadfn(void *data)
+{
+ struct dec_data *d = data;
+
+ while (1) {
+ wait_event(d->go, atomic_read(&d->ready) ||
+ kthread_should_stop());
+ if (kthread_should_stop()) {
+ d->thr = NULL;
+ d->ret = -1;
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ break;
+ }
+ atomic_set(&d->ready, 0);
+
+ d->unc_len = LZO_UNC_SIZE;
+ d->ret = lzo1x_decompress_safe(d->cmp + LZO_HEADER, d->cmp_len,
+ d->unc, &d->unc_len);
+ atomic_set(&d->stop, 1);
+ wake_up(&d->done);
+ }
+ return 0;
}
/**
@@ -753,50 +1041,120 @@
unsigned int nr_to_read)
{
unsigned int m;
- int error = 0;
+ int ret = 0;
+ int eof = 0;
struct bio *bio;
struct timeval start;
struct timeval stop;
unsigned nr_pages;
- size_t i, off, unc_len, cmp_len;
- unsigned char *unc, *cmp, *page[LZO_CMP_PAGES];
+ size_t off;
+ unsigned i, thr, run_threads, nr_threads;
+ unsigned ring = 0, pg = 0, ring_size = 0,
+ have = 0, want, need, asked = 0;
+ unsigned long read_pages;
+ unsigned char **page = NULL;
+ struct dec_data *data = NULL;
+ struct crc_data *crc = NULL;
- for (i = 0; i < LZO_CMP_PAGES; i++) {
- page[i] = (void *)__get_free_page(__GFP_WAIT | __GFP_HIGH);
- if (!page[i]) {
- printk(KERN_ERR "PM: Failed to allocate LZO page\n");
+ /*
+ * Bound the number of decompression threads, to limit the memory
+ * footprint.
+ */
+ nr_threads = num_online_cpus() - 1;
+ nr_threads = clamp_val(nr_threads, 1, LZO_THREADS);
- while (i)
- free_page((unsigned long)page[--i]);
+ page = vmalloc(sizeof(*page) * LZO_READ_PAGES);
+ if (!page) {
+ printk(KERN_ERR "PM: Failed to allocate LZO page\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
- return -ENOMEM;
+ data = vmalloc(sizeof(*data) * nr_threads);
+ if (!data) {
+ printk(KERN_ERR "PM: Failed to allocate LZO data\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
+ for (thr = 0; thr < nr_threads; thr++)
+ memset(&data[thr], 0, offsetof(struct dec_data, go));
+
+ crc = kmalloc(sizeof(*crc), GFP_KERNEL);
+ if (!crc) {
+ printk(KERN_ERR "PM: Failed to allocate crc\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ }
+ memset(crc, 0, offsetof(struct crc_data, go));
+
+ /*
+ * Start the decompression threads.
+ */
+ for (thr = 0; thr < nr_threads; thr++) {
+ init_waitqueue_head(&data[thr].go);
+ init_waitqueue_head(&data[thr].done);
+
+ data[thr].thr = kthread_run(lzo_decompress_threadfn,
+ &data[thr],
+ "image_decompress/%u", thr);
+ if (IS_ERR(data[thr].thr)) {
+ data[thr].thr = NULL;
+ printk(KERN_ERR
+ "PM: Cannot start decompression threads\n");
+ ret = -ENOMEM;
+ goto out_clean;
}
}
- unc = vmalloc(LZO_UNC_SIZE);
- if (!unc) {
- printk(KERN_ERR "PM: Failed to allocate LZO uncompressed\n");
+ /*
+ * Start the CRC32 thread.
+ */
+ init_waitqueue_head(&crc->go);
+ init_waitqueue_head(&crc->done);
- for (i = 0; i < LZO_CMP_PAGES; i++)
- free_page((unsigned long)page[i]);
-
- return -ENOMEM;
+ handle->crc32 = 0;
+ crc->crc32 = &handle->crc32;
+ for (thr = 0; thr < nr_threads; thr++) {
+ crc->unc[thr] = data[thr].unc;
+ crc->unc_len[thr] = &data[thr].unc_len;
}
- cmp = vmalloc(LZO_CMP_SIZE);
- if (!cmp) {
- printk(KERN_ERR "PM: Failed to allocate LZO compressed\n");
-
- vfree(unc);
- for (i = 0; i < LZO_CMP_PAGES; i++)
- free_page((unsigned long)page[i]);
-
- return -ENOMEM;
+ crc->thr = kthread_run(crc32_threadfn, crc, "image_crc32");
+ if (IS_ERR(crc->thr)) {
+ crc->thr = NULL;
+ printk(KERN_ERR "PM: Cannot start CRC32 thread\n");
+ ret = -ENOMEM;
+ goto out_clean;
}
+ /*
+ * Adjust number of pages for read buffering, in case we are short.
+ */
+ read_pages = (nr_free_pages() - snapshot_get_image_size()) >> 1;
+ read_pages = clamp_val(read_pages, LZO_CMP_PAGES, LZO_READ_PAGES);
+
+ for (i = 0; i < read_pages; i++) {
+ page[i] = (void *)__get_free_page(i < LZO_CMP_PAGES ?
+ __GFP_WAIT | __GFP_HIGH :
+ __GFP_WAIT);
+ if (!page[i]) {
+ if (i < LZO_CMP_PAGES) {
+ ring_size = i;
+ printk(KERN_ERR
+ "PM: Failed to allocate LZO pages\n");
+ ret = -ENOMEM;
+ goto out_clean;
+ } else {
+ break;
+ }
+ }
+ }
+ want = ring_size = i;
+
printk(KERN_INFO
+ "PM: Using %u thread(s) for decompression.\n"
"PM: Loading and decompressing image data (%u pages) ... ",
- nr_to_read);
+ nr_threads, nr_to_read);
m = nr_to_read / 100;
if (!m)
m = 1;
@@ -804,85 +1162,189 @@
bio = NULL;
do_gettimeofday(&start);
- error = snapshot_write_next(snapshot);
- if (error <= 0)
+ ret = snapshot_write_next(snapshot);
+ if (ret <= 0)
goto out_finish;
- for (;;) {
- error = swap_read_page(handle, page[0], NULL); /* sync */
- if (error)
- break;
-
- cmp_len = *(size_t *)page[0];
- if (unlikely(!cmp_len ||
- cmp_len > lzo1x_worst_compress(LZO_UNC_SIZE))) {
- printk(KERN_ERR "PM: Invalid LZO compressed length\n");
- error = -1;
- break;
+ for (;;) {
+ for (i = 0; !eof && i < want; i++) {
+ ret = swap_read_page(handle, page[ring], &bio);
+ if (ret) {
+ /*
+ * On real read error, finish. On end of data,
+ * set EOF flag and just exit the read loop.
+ */
+ if (handle->cur &&
+ handle->cur->entries[handle->k]) {
+ goto out_finish;
+ } else {
+ eof = 1;
+ break;
+ }
+ }
+ if (++ring >= ring_size)
+ ring = 0;
}
+ asked += i;
+ want -= i;
- for (off = PAGE_SIZE, i = 1;
- off < LZO_HEADER + cmp_len; off += PAGE_SIZE, i++) {
- error = swap_read_page(handle, page[i], &bio);
- if (error)
+ /*
+ * We are out of data, wait for some more.
+ */
+ if (!have) {
+ if (!asked)
+ break;
+
+ ret = hib_wait_on_bio_chain(&bio);
+ if (ret)
goto out_finish;
+ have += asked;
+ asked = 0;
+ if (eof)
+ eof = 2;
}
- error = hib_wait_on_bio_chain(&bio); /* need all data now */
- if (error)
- goto out_finish;
-
- for (off = 0, i = 0;
- off < LZO_HEADER + cmp_len; off += PAGE_SIZE, i++) {
- memcpy(cmp + off, page[i], PAGE_SIZE);
+ if (crc->run_threads) {
+ wait_event(crc->done, atomic_read(&crc->stop));
+ atomic_set(&crc->stop, 0);
+ crc->run_threads = 0;
}
- unc_len = LZO_UNC_SIZE;
- error = lzo1x_decompress_safe(cmp + LZO_HEADER, cmp_len,
- unc, &unc_len);
- if (error < 0) {
- printk(KERN_ERR "PM: LZO decompression failed\n");
- break;
- }
-
- if (unlikely(!unc_len ||
- unc_len > LZO_UNC_SIZE ||
- unc_len & (PAGE_SIZE - 1))) {
- printk(KERN_ERR "PM: Invalid LZO uncompressed length\n");
- error = -1;
- break;
- }
-
- for (off = 0; off < unc_len; off += PAGE_SIZE) {
- memcpy(data_of(*snapshot), unc + off, PAGE_SIZE);
-
- if (!(nr_pages % m))
- printk("\b\b\b\b%3d%%", nr_pages / m);
- nr_pages++;
-
- error = snapshot_write_next(snapshot);
- if (error <= 0)
+ for (thr = 0; have && thr < nr_threads; thr++) {
+ data[thr].cmp_len = *(size_t *)page[pg];
+ if (unlikely(!data[thr].cmp_len ||
+ data[thr].cmp_len >
+ lzo1x_worst_compress(LZO_UNC_SIZE))) {
+ printk(KERN_ERR
+ "PM: Invalid LZO compressed length\n");
+ ret = -1;
goto out_finish;
+ }
+
+ need = DIV_ROUND_UP(data[thr].cmp_len + LZO_HEADER,
+ PAGE_SIZE);
+ if (need > have) {
+ if (eof > 1) {
+ ret = -1;
+ goto out_finish;
+ }
+ break;
+ }
+
+ for (off = 0;
+ off < LZO_HEADER + data[thr].cmp_len;
+ off += PAGE_SIZE) {
+ memcpy(data[thr].cmp + off,
+ page[pg], PAGE_SIZE);
+ have--;
+ want++;
+ if (++pg >= ring_size)
+ pg = 0;
+ }
+
+ atomic_set(&data[thr].ready, 1);
+ wake_up(&data[thr].go);
}
+
+ /*
+ * Wait for more data while we are decompressing.
+ */
+ if (have < LZO_CMP_PAGES && asked) {
+ ret = hib_wait_on_bio_chain(&bio);
+ if (ret)
+ goto out_finish;
+ have += asked;
+ asked = 0;
+ if (eof)
+ eof = 2;
+ }
+
+ for (run_threads = thr, thr = 0; thr < run_threads; thr++) {
+ wait_event(data[thr].done,
+ atomic_read(&data[thr].stop));
+ atomic_set(&data[thr].stop, 0);
+
+ ret = data[thr].ret;
+
+ if (ret < 0) {
+ printk(KERN_ERR
+ "PM: LZO decompression failed\n");
+ goto out_finish;
+ }
+
+ if (unlikely(!data[thr].unc_len ||
+ data[thr].unc_len > LZO_UNC_SIZE ||
+ data[thr].unc_len & (PAGE_SIZE - 1))) {
+ printk(KERN_ERR
+ "PM: Invalid LZO uncompressed length\n");
+ ret = -1;
+ goto out_finish;
+ }
+
+ for (off = 0;
+ off < data[thr].unc_len; off += PAGE_SIZE) {
+ memcpy(data_of(*snapshot),
+ data[thr].unc + off, PAGE_SIZE);
+
+ if (!(nr_pages % m))
+ printk("\b\b\b\b%3d%%", nr_pages / m);
+ nr_pages++;
+
+ ret = snapshot_write_next(snapshot);
+ if (ret <= 0) {
+ crc->run_threads = thr + 1;
+ atomic_set(&crc->ready, 1);
+ wake_up(&crc->go);
+ goto out_finish;
+ }
+ }
+ }
+
+ crc->run_threads = thr;
+ atomic_set(&crc->ready, 1);
+ wake_up(&crc->go);
}
out_finish:
+ if (crc->run_threads) {
+ wait_event(crc->done, atomic_read(&crc->stop));
+ atomic_set(&crc->stop, 0);
+ }
do_gettimeofday(&stop);
- if (!error) {
+ if (!ret) {
printk("\b\b\b\bdone\n");
snapshot_write_finalize(snapshot);
if (!snapshot_image_loaded(snapshot))
- error = -ENODATA;
+ ret = -ENODATA;
+ if (!ret) {
+ if (swsusp_header->flags & SF_CRC32_MODE) {
+ if (handle->crc32 != swsusp_header->crc32) {
+ printk(KERN_ERR
+ "PM: Invalid image CRC32!\n");
+ ret = -ENODATA;
+ }
+ }
+ }
} else
printk("\n");
swsusp_show_speed(&start, &stop, nr_to_read, "Read");
-
- vfree(cmp);
- vfree(unc);
- for (i = 0; i < LZO_CMP_PAGES; i++)
+out_clean:
+ for (i = 0; i < ring_size; i++)
free_page((unsigned long)page[i]);
+ if (crc) {
+ if (crc->thr)
+ kthread_stop(crc->thr);
+ kfree(crc);
+ }
+ if (data) {
+ for (thr = 0; thr < nr_threads; thr++)
+ if (data[thr].thr)
+ kthread_stop(data[thr].thr);
+ vfree(data);
+ }
+ if (page) vfree(page);
- return error;
+ return ret;
}
/**
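
Note on the rework above: decompression fans out to nr_threads kthreads plus a CRC32 thread, and the feeder loop drives each of them through the same four-step handshake: set ready, wake the go queue, sleep on done, clear stop. A minimal sketch of that handshake in isolation, using hypothetical names (struct worker, worker_fn, feed_and_wait) rather than the patch's actual structures:

#include <linux/kthread.h>
#include <linux/wait.h>
#include <linux/atomic.h>

struct worker {
	struct task_struct *thr;
	wait_queue_head_t go, done;
	atomic_t ready, stop;
	int ret;
};

static int worker_fn(void *arg)
{
	struct worker *w = arg;

	while (1) {
		/* park until the feeder hands over one unit of work */
		wait_event(w->go, atomic_read(&w->ready) ||
				  kthread_should_stop());
		if (kthread_should_stop())
			break;
		atomic_set(&w->ready, 0);

		w->ret = 0;	/* ... process the handed-over buffer ... */

		/* signal completion back to the feeder */
		atomic_set(&w->stop, 1);
		wake_up(&w->done);
	}
	return 0;
}

static int feed_and_wait(struct worker *w)
{
	atomic_set(&w->ready, 1);
	wake_up(&w->go);

	wait_event(w->done, atomic_read(&w->stop));
	atomic_set(&w->stop, 0);
	return w->ret;
}

Setup is init_waitqueue_head() on both queues followed by kthread_run(worker_fn, w, "worker"), mirroring the image_crc32 start above; keeping kthread_should_stop() in the wait condition is what lets the out_clean path stop a worker that is parked on its go queue.
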
diff --git a/kernel/resource.c b/kernel/resource.c
index 3b3cedc..c8dc249 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -419,6 +419,9 @@
else
tmp.end = root->end;
+ if (tmp.end < tmp.start)
+ goto next;
+
resource_clip(&tmp, constraint->min, constraint->max);
arch_remove_reservations(&tmp);
@@ -436,8 +439,10 @@
return 0;
}
}
- if (!this)
+
+next: if (!this || this->end == root->end)
break;
+
if (this != old)
tmp.start = this->end + 1;
this = this->sibling;
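
The new guard covers adjacent siblings that leave no gap: tmp.start (one past the previous sibling's end) can then exceed tmp.end, and clipping or aligning such an inverted window is meaningless. The same skip, sketched as a generic user-space gap search over sorted, non-overlapping intervals; all names are illustrative, not kernel API:

#include <stddef.h>

struct interval { unsigned long start, end; };	/* inclusive bounds */

/* find the first gap of at least size below limit; assumes used[] is
 * sorted, non-overlapping, and ends well below ULONG_MAX */
static int find_gap(const struct interval *used, size_t n,
		    unsigned long limit, unsigned long size,
		    unsigned long *out)
{
	unsigned long start = 0;
	size_t i;

	for (i = 0; i <= n; i++) {
		unsigned long end;

		if (i < n) {
			if (used[i].start <= start) {
				/* empty (inverted) window: skip it */
				start = used[i].end + 1;
				continue;
			}
			end = used[i].start - 1;
		} else {
			end = limit;
		}
		if (end >= start && end - start + 1 >= size) {
			*out = start;
			return 0;
		}
		if (i < n)
			start = used[i].end + 1;
	}
	return -1;
}
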
diff --git a/kernel/sched.c b/kernel/sched.c
index ec5f472..b50b0f0 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3725,30 +3725,6 @@
}
/*
- * Return sum_exec_runtime for the thread group.
- * In case the task is currently running, return the sum plus current's
- * pending runtime that have not been accounted yet.
- *
- * Note that the thread group might have other running tasks as well,
- * so the return value not includes other pending runtime that other
- * running tasks might have.
- */
-unsigned long long thread_group_sched_runtime(struct task_struct *p)
-{
- struct task_cputime totals;
- unsigned long flags;
- struct rq *rq;
- u64 ns;
-
- rq = task_rq_lock(p, &flags);
- thread_group_cputime(p, &totals);
- ns = totals.sum_exec_runtime + do_task_delta_exec(p, rq);
- task_rq_unlock(rq, p, &flags);
-
- return ns;
-}
-
-/*
* Account user cpu time to a process.
* @p: the process that the cpu time gets accounted to
* @cputime: the cpu time spent in user space since the last update
@@ -4372,7 +4348,7 @@
blk_schedule_flush_plug(tsk);
}
-asmlinkage void schedule(void)
+asmlinkage void __sched schedule(void)
{
struct task_struct *tsk = current;
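
The only change to schedule() is the __sched annotation, which places the function in the .sched.text section; get_wchan() skips that section, so /proc/<pid>/wchan names the real caller of a blocked task instead of a scheduler-internal frame. The annotation belongs on any helper that sleeps on its callers' behalf; a hedged example with a hypothetical helper:

#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/atomic.h>

/* hypothetical wait helper: __sched keeps this frame out of wchan */
static void __sched my_wait_for_flag(wait_queue_head_t *wq, atomic_t *flag)
{
	wait_event(*wq, atomic_read(flag) != 0);
}
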
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 97540f0..af11778 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -1050,7 +1050,7 @@
*/
if (curr && unlikely(rt_task(curr)) &&
(curr->rt.nr_cpus_allowed < 2 ||
- curr->prio < p->prio) &&
+ curr->prio <= p->prio) &&
(p->rt.nr_cpus_allowed > 1)) {
int target = find_lowest_rq(p);
@@ -1581,7 +1581,7 @@
p->rt.nr_cpus_allowed > 1 &&
rt_task(rq->curr) &&
(rq->curr->rt.nr_cpus_allowed < 2 ||
- rq->curr->prio < p->prio))
+ rq->curr->prio <= p->prio))
push_rt_tasks(rq);
}
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index 761c510..f49405f 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -53,6 +53,9 @@
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
+ifeq ($(CONFIG_PM_RUNTIME),y)
+obj-$(CONFIG_TRACEPOINTS) += rpm-traces.o
+endif
ifeq ($(CONFIG_TRACING),y)
obj-$(CONFIG_KGDB_KDB) += trace_kdb.o
endif
diff --git a/kernel/trace/rpm-traces.c b/kernel/trace/rpm-traces.c
new file mode 100644
index 0000000..4b3b5ea
--- /dev/null
+++ b/kernel/trace/rpm-traces.c
@@ -0,0 +1,20 @@
+/*
+ * Power trace points
+ *
+ * Copyright (C) 2009 Ming Lei <ming.lei@canonical.com>
+ */
+
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/sched.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/rpm.h>
+
+EXPORT_TRACEPOINT_SYMBOL_GPL(rpm_return_int);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rpm_idle);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rpm_suspend);
+EXPORT_TRACEPOINT_SYMBOL_GPL(rpm_resume);
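
rpm-traces.c follows the standard tracepoint-instantiation pattern: exactly one translation unit defines CREATE_TRACE_POINTS before including the event header, which turns the declarations there into definitions, and EXPORT_TRACEPOINT_SYMBOL_GPL makes them attachable from modules. For reference, a minimal sketch of the shape of such an event header; the trace system (myrpm) and event below are hypothetical, not the contents of trace/events/rpm.h:

#undef TRACE_SYSTEM
#define TRACE_SYSTEM myrpm

#if !defined(_TRACE_MYRPM_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_MYRPM_H

#include <linux/tracepoint.h>

TRACE_EVENT(myrpm_idle,

	TP_PROTO(const char *name, int flags),

	TP_ARGS(name, flags),

	TP_STRUCT__entry(
		__string(name, name)
		__field(int, flags)
	),

	TP_fast_assign(
		__assign_str(name, name);
		__entry->flags = flags;
	),

	TP_printk("%s flags=%d", __get_str(name), __entry->flags)
);

#endif /* _TRACE_MYRPM_H */

/* this must stay outside the include guard */
#include <trace/define_trace.h>

Any other file emits the event by including the header normally and calling trace_myrpm_idle(name, flags).
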
diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
index 3e2f91f..05dd351 100644
--- a/net/batman-adv/soft-interface.c
+++ b/net/batman-adv/soft-interface.c
@@ -565,7 +565,7 @@
struct orig_node *orig_node = NULL;
int data_len = skb->len, ret;
short vid = -1;
- bool do_bcast = false;
+ bool do_bcast;
if (atomic_read(&bat_priv->mesh_state) != MESH_ACTIVE)
goto dropped;
@@ -598,15 +598,15 @@
tt_local_add(soft_iface, ethhdr->h_source);
orig_node = transtable_search(bat_priv, ethhdr->h_dest);
- if (is_multicast_ether_addr(ethhdr->h_dest) ||
- (orig_node && orig_node->gw_flags)) {
+ do_bcast = is_multicast_ether_addr(ethhdr->h_dest);
+ if (do_bcast || (orig_node && orig_node->gw_flags)) {
ret = gw_is_target(bat_priv, skb, orig_node);
if (ret < 0)
goto dropped;
- if (ret == 0)
- do_bcast = true;
+ if (ret)
+ do_bcast = false;
}
/* ethernet packet should be broadcasted */
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index 32b8f9f..ff3ed60 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -91,7 +91,6 @@
{
struct net_bridge *br = netdev_priv(dev);
- netif_carrier_off(dev);
netdev_update_features(dev);
netif_start_queue(dev);
br_stp_enable_bridge(br);
@@ -108,8 +107,6 @@
{
struct net_bridge *br = netdev_priv(dev);
- netif_carrier_off(dev);
-
br_stp_disable_bridge(br);
br_multicast_stop(br);
diff --git a/net/can/bcm.c b/net/can/bcm.c
index d6c8ae5..c84963d 100644
--- a/net/can/bcm.c
+++ b/net/can/bcm.c
@@ -344,6 +344,18 @@
}
}
+static void bcm_tx_start_timer(struct bcm_op *op)
+{
+ if (op->kt_ival1.tv64 && op->count)
+ hrtimer_start(&op->timer,
+ ktime_add(ktime_get(), op->kt_ival1),
+ HRTIMER_MODE_ABS);
+ else if (op->kt_ival2.tv64)
+ hrtimer_start(&op->timer,
+ ktime_add(ktime_get(), op->kt_ival2),
+ HRTIMER_MODE_ABS);
+}
+
static void bcm_tx_timeout_tsklet(unsigned long data)
{
struct bcm_op *op = (struct bcm_op *)data;
@@ -365,26 +377,12 @@
bcm_send_to_user(op, &msg_head, NULL, 0);
}
- }
-
- if (op->kt_ival1.tv64 && (op->count > 0)) {
-
- /* send (next) frame */
bcm_can_tx(op);
- hrtimer_start(&op->timer,
- ktime_add(ktime_get(), op->kt_ival1),
- HRTIMER_MODE_ABS);
- } else {
- if (op->kt_ival2.tv64) {
+ } else if (op->kt_ival2.tv64)
+ bcm_can_tx(op);
- /* send (next) frame */
- bcm_can_tx(op);
- hrtimer_start(&op->timer,
- ktime_add(ktime_get(), op->kt_ival2),
- HRTIMER_MODE_ABS);
- }
- }
+ bcm_tx_start_timer(op);
}
/*
@@ -964,23 +962,20 @@
hrtimer_cancel(&op->timer);
}
- if ((op->flags & STARTTIMER) &&
- ((op->kt_ival1.tv64 && op->count) || op->kt_ival2.tv64)) {
-
+ if (op->flags & STARTTIMER) {
+ hrtimer_cancel(&op->timer);
/* spec: send can_frame when starting timer */
op->flags |= TX_ANNOUNCE;
-
- if (op->kt_ival1.tv64 && (op->count > 0)) {
- /* op->count-- is done in bcm_tx_timeout_handler */
- hrtimer_start(&op->timer, op->kt_ival1,
- HRTIMER_MODE_REL);
- } else
- hrtimer_start(&op->timer, op->kt_ival2,
- HRTIMER_MODE_REL);
}
- if (op->flags & TX_ANNOUNCE)
+ if (op->flags & TX_ANNOUNCE) {
bcm_can_tx(op);
+ if (op->count)
+ op->count--;
+ }
+
+ if (op->flags & STARTTIMER)
+ bcm_tx_start_timer(op);
return msg_head->nframes * CFSIZ + MHSIZ;
}
diff --git a/net/ceph/ceph_common.c b/net/ceph/ceph_common.c
index 132963a..2883ea0 100644
--- a/net/ceph/ceph_common.c
+++ b/net/ceph/ceph_common.c
@@ -232,6 +232,7 @@
ceph_crypto_key_destroy(opt->key);
kfree(opt->key);
}
+ kfree(opt->mon_addr);
kfree(opt);
}
EXPORT_SYMBOL(ceph_destroy_options);
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index c340e2e..9918e9e 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -2307,6 +2307,7 @@
m->front_max = front_len;
m->front_is_vmalloc = false;
m->more_to_follow = false;
+ m->ack_stamp = 0;
m->pool = NULL;
/* middle */
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 16836a7..88ad8a2 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -217,6 +217,7 @@
INIT_LIST_HEAD(&req->r_unsafe_item);
INIT_LIST_HEAD(&req->r_linger_item);
INIT_LIST_HEAD(&req->r_linger_osd);
+ INIT_LIST_HEAD(&req->r_req_lru_item);
req->r_flags = flags;
WARN_ON((flags & (CEPH_OSD_FLAG_READ|CEPH_OSD_FLAG_WRITE)) == 0);
@@ -816,13 +817,10 @@
{
req->r_tid = ++osdc->last_tid;
req->r_request->hdr.tid = cpu_to_le64(req->r_tid);
- INIT_LIST_HEAD(&req->r_req_lru_item);
-
dout("__register_request %p tid %lld\n", req, req->r_tid);
__insert_request(osdc, req);
ceph_osdc_get_request(req);
osdc->num_requests++;
-
if (osdc->num_requests == 1) {
dout(" first request, scheduling timeout\n");
__schedule_osd_timeout(osdc);
diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
index e97c358..fd863fe7 100644
--- a/net/ceph/osdmap.c
+++ b/net/ceph/osdmap.c
@@ -339,6 +339,7 @@
struct ceph_pg_mapping *pg = NULL;
int c;
+ dout("__insert_pg_mapping %llx %p\n", *(u64 *)&new->pgid, new);
while (*p) {
parent = *p;
pg = rb_entry(parent, struct ceph_pg_mapping, node);
@@ -366,16 +367,33 @@
while (n) {
pg = rb_entry(n, struct ceph_pg_mapping, node);
c = pgid_cmp(pgid, pg->pgid);
- if (c < 0)
+ if (c < 0) {
n = n->rb_left;
- else if (c > 0)
+ } else if (c > 0) {
n = n->rb_right;
- else
+ } else {
+ dout("__lookup_pg_mapping %llx got %p\n",
+ *(u64 *)&pgid, pg);
return pg;
+ }
}
return NULL;
}
+static int __remove_pg_mapping(struct rb_root *root, struct ceph_pg pgid)
+{
+ struct ceph_pg_mapping *pg = __lookup_pg_mapping(root, pgid);
+
+ if (pg) {
+ dout("__remove_pg_mapping %llx %p\n", *(u64 *)&pgid, pg);
+ rb_erase(&pg->node, root);
+ kfree(pg);
+ return 0;
+ }
+ dout("__remove_pg_mapping %llx dne\n", *(u64 *)&pgid);
+ return -ENOENT;
+}
+
/*
* rbtree of pg pool info
*/
@@ -711,7 +729,6 @@
void *start = *p;
int err = -EINVAL;
u16 version;
- struct rb_node *rbp;
ceph_decode_16_safe(p, end, version, bad);
if (version > CEPH_OSDMAP_INC_VERSION) {
@@ -861,7 +878,6 @@
}
/* new_pg_temp */
- rbp = rb_first(&map->pg_temp);
ceph_decode_32_safe(p, end, len, bad);
while (len--) {
struct ceph_pg_mapping *pg;
@@ -872,18 +888,6 @@
ceph_decode_copy(p, &pgid, sizeof(pgid));
pglen = ceph_decode_32(p);
- /* remove any? */
- while (rbp && pgid_cmp(rb_entry(rbp, struct ceph_pg_mapping,
- node)->pgid, pgid) <= 0) {
- struct ceph_pg_mapping *cur =
- rb_entry(rbp, struct ceph_pg_mapping, node);
-
- rbp = rb_next(rbp);
- dout(" removed pg_temp %llx\n", *(u64 *)&cur->pgid);
- rb_erase(&cur->node, &map->pg_temp);
- kfree(cur);
- }
-
if (pglen) {
/* insert */
ceph_decode_need(p, end, pglen*sizeof(u32), bad);
@@ -903,17 +907,11 @@
}
dout(" added pg_temp %llx len %d\n", *(u64 *)&pgid,
pglen);
+ } else {
+ /* remove */
+ __remove_pg_mapping(&map->pg_temp, pgid);
}
}
- while (rbp) {
- struct ceph_pg_mapping *cur =
- rb_entry(rbp, struct ceph_pg_mapping, node);
-
- rbp = rb_next(rbp);
- dout(" removed pg_temp %llx\n", *(u64 *)&cur->pgid);
- rb_erase(&cur->node, &map->pg_temp);
- kfree(cur);
- }
/* ignore the rest */
*p = end;
@@ -1046,10 +1044,25 @@
struct ceph_pg_mapping *pg;
struct ceph_pg_pool_info *pool;
int ruleno;
- unsigned poolid, ps, pps;
+ unsigned poolid, ps, pps, t;
int preferred;
+ poolid = le32_to_cpu(pgid.pool);
+ ps = le16_to_cpu(pgid.ps);
+ preferred = (s16)le16_to_cpu(pgid.preferred);
+
+ pool = __lookup_pg_pool(&osdmap->pg_pools, poolid);
+ if (!pool)
+ return NULL;
+
/* pg_temp? */
+ if (preferred >= 0)
+ t = ceph_stable_mod(ps, le32_to_cpu(pool->v.lpg_num),
+ pool->lpgp_num_mask);
+ else
+ t = ceph_stable_mod(ps, le32_to_cpu(pool->v.pg_num),
+ pool->pgp_num_mask);
+ pgid.ps = cpu_to_le16(t);
pg = __lookup_pg_mapping(&osdmap->pg_temp, pgid);
if (pg) {
*num = pg->len;
@@ -1057,18 +1070,6 @@
}
/* crush */
- poolid = le32_to_cpu(pgid.pool);
- ps = le16_to_cpu(pgid.ps);
- preferred = (s16)le16_to_cpu(pgid.preferred);
-
- /* don't forcefeed bad device ids to crush */
- if (preferred >= osdmap->max_osd ||
- preferred >= osdmap->crush->max_devices)
- preferred = -1;
-
- pool = __lookup_pg_pool(&osdmap->pg_pools, poolid);
- if (!pool)
- return NULL;
ruleno = crush_find_rule(osdmap->crush, pool->v.crush_ruleset,
pool->v.type, pool->v.size);
if (ruleno < 0) {
@@ -1078,6 +1079,11 @@
return NULL;
}
+ /* don't forcefeed bad device ids to crush */
+ if (preferred >= osdmap->max_osd ||
+ preferred >= osdmap->crush->max_devices)
+ preferred = -1;
+
if (preferred >= 0)
pps = ceph_stable_mod(ps,
le32_to_cpu(pool->v.lpgp_num),
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 21fab3e..d73aab3 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1389,9 +1389,7 @@
BUG_ON(!pcount);
- /* Tweak before seqno plays */
- if (!tcp_is_fack(tp) && tcp_is_sack(tp) && tp->lost_skb_hint &&
- !before(TCP_SKB_CB(tp->lost_skb_hint)->seq, TCP_SKB_CB(skb)->seq))
+ if (skb == tp->lost_skb_hint)
tp->lost_cnt_hint += pcount;
TCP_SKB_CB(prev)->end_seq += shifted;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index c34f015..7963e03 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -927,18 +927,21 @@
}
sk_nocaps_add(sk, NETIF_F_GSO_MASK);
}
- if (tcp_alloc_md5sig_pool(sk) == NULL) {
+
+ md5sig = tp->md5sig_info;
+ if (md5sig->entries4 == 0 &&
+ tcp_alloc_md5sig_pool(sk) == NULL) {
kfree(newkey);
return -ENOMEM;
}
- md5sig = tp->md5sig_info;
if (md5sig->alloced4 == md5sig->entries4) {
keys = kmalloc((sizeof(*keys) *
(md5sig->entries4 + 1)), GFP_ATOMIC);
if (!keys) {
kfree(newkey);
- tcp_free_md5sig_pool();
+ if (md5sig->entries4 == 0)
+ tcp_free_md5sig_pool();
return -ENOMEM;
}
@@ -982,6 +985,7 @@
kfree(tp->md5sig_info->keys4);
tp->md5sig_info->keys4 = NULL;
tp->md5sig_info->alloced4 = 0;
+ tcp_free_md5sig_pool();
} else if (tp->md5sig_info->entries4 != i) {
/* Need to do some manipulation */
memmove(&tp->md5sig_info->keys4[i],
@@ -989,7 +993,6 @@
(tp->md5sig_info->entries4 - i) *
sizeof(struct tcp4_md5sig_key));
}
- tcp_free_md5sig_pool();
return 0;
}
}
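
Both tcp_ipv4.c hunks repair the reference pairing on the shared MD5 pool: tcp_alloc_md5sig_pool() takes a reference and tcp_free_md5sig_pool() drops one, so the get belongs only on the socket's 0 -> 1 key transition and the put only on 1 -> 0 (or on a failed first-key insert). The invariant in generic form; every name below is hypothetical:

#include <linux/errno.h>

struct my_sock { int nkeys; };

void *my_pool_get(void);		/* hypothetical: take a pool reference */
void my_pool_put(void);			/* hypothetical: drop a pool reference */
int my_store_key(struct my_sock *s);	/* hypothetical: may fail */

static int my_add_key(struct my_sock *s)
{
	/* first key for this socket: take the shared reference */
	if (s->nkeys == 0 && my_pool_get() == NULL)
		return -ENOMEM;		/* nothing taken, nothing to undo */

	if (my_store_key(s) < 0) {
		if (s->nkeys == 0)
			my_pool_put();	/* undo the 0 -> 1 get */
		return -ENOMEM;
	}
	s->nkeys++;
	return 0;
}

static void my_del_all_keys(struct my_sock *s)
{
	s->nkeys = 0;
	my_pool_put();			/* 1 -> 0: drop the reference */
}

The tcp_ipv6.c hunks below apply the same pairing to entries6.
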
diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
index 705c828..def0538 100644
--- a/net/ipv6/ip6mr.c
+++ b/net/ipv6/ip6mr.c
@@ -696,8 +696,10 @@
int err;
err = ip6mr_fib_lookup(net, &fl6, &mrt);
- if (err < 0)
+ if (err < 0) {
+ kfree_skb(skb);
return err;
+ }
read_lock(&mrt_lock);
dev->stats.tx_bytes += skb->len;
@@ -2052,8 +2054,10 @@
int err;
err = ip6mr_fib_lookup(net, &fl6, &mrt);
- if (err < 0)
+ if (err < 0) {
+ kfree_skb(skb);
return err;
+ }
read_lock(&mrt_lock);
cache = ip6mr_cache_find(mrt,
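
Both ip6mr.c hunks plug the same leak: these paths own the skb once entered, so returning early on a failed ip6mr_fib_lookup() without kfree_skb() loses one buffer per failure. The general rule for a transmit path, sketched with a hypothetical my_lookup() standing in for the fib lookup:

#include <linux/skbuff.h>
#include <linux/netdevice.h>

static int my_lookup(struct net_device *dev, struct sk_buff *skb);	/* hypothetical */

static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (my_lookup(dev, skb) < 0) {
		/* the stack handed the skb to us: consume it on error too */
		kfree_skb(skb);
		dev->stats.tx_dropped++;
		return NETDEV_TX_OK;
	}

	/* ... hand the skb onward ... */
	return NETDEV_TX_OK;
}
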
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 1250f90..fb545ed 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -244,7 +244,9 @@
{
struct rt6_info *rt = dst_alloc(ops, dev, 0, 0, flags);
- memset(&rt->rt6i_table, 0, sizeof(*rt) - sizeof(struct dst_entry));
+ if (rt != NULL)
+ memset(&rt->rt6i_table, 0,
+ sizeof(*rt) - sizeof(struct dst_entry));
return rt;
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 3c9fa61..7b8fc57 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -591,7 +591,8 @@
}
sk_nocaps_add(sk, NETIF_F_GSO_MASK);
}
- if (tcp_alloc_md5sig_pool(sk) == NULL) {
+ if (tp->md5sig_info->entries6 == 0 &&
+ tcp_alloc_md5sig_pool(sk) == NULL) {
kfree(newkey);
return -ENOMEM;
}
@@ -600,8 +601,9 @@
(tp->md5sig_info->entries6 + 1)), GFP_ATOMIC);
if (!keys) {
- tcp_free_md5sig_pool();
kfree(newkey);
+ if (tp->md5sig_info->entries6 == 0)
+ tcp_free_md5sig_pool();
return -ENOMEM;
}
@@ -647,6 +649,7 @@
kfree(tp->md5sig_info->keys6);
tp->md5sig_info->keys6 = NULL;
tp->md5sig_info->alloced6 = 0;
+ tcp_free_md5sig_pool();
} else {
/* shrink the database */
if (tp->md5sig_info->entries6 != i)
@@ -655,7 +658,6 @@
(tp->md5sig_info->entries6 - i)
* sizeof (tp->md5sig_info->keys6[0]));
}
- tcp_free_md5sig_pool();
return 0;
}
}
@@ -1383,6 +1385,8 @@
newtp->af_specific = &tcp_sock_ipv6_mapped_specific;
#endif
+ newnp->ipv6_ac_list = NULL;
+ newnp->ipv6_fl_list = NULL;
newnp->pktoptions = NULL;
newnp->opt = NULL;
newnp->mcast_oif = inet6_iif(skb);
@@ -1447,6 +1451,7 @@
First: no IPv4 options.
*/
newinet->inet_opt = NULL;
+ newnp->ipv6_ac_list = NULL;
newnp->ipv6_fl_list = NULL;
/* Clone RX bits */
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index acb4423..b1c23c0 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -19,7 +19,7 @@
#include <linux/if_arp.h>
#include <linux/rtnetlink.h>
#include <linux/bitmap.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/inetdevice.h>
#include <net/net_namespace.h>
#include <net/cfg80211.h>
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index d6470c7..9604abc 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -17,7 +17,7 @@
#include <linux/if_arp.h>
#include <linux/etherdevice.h>
#include <linux/rtnetlink.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/crc32.h>
#include <linux/slab.h>
#include <net/mac80211.h>
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index 6f09eca..beefb0a 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -14,7 +14,7 @@
#include <linux/if_arp.h>
#include <linux/rtnetlink.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <net/sch_generic.h>
#include <linux/slab.h>
#include <net/mac80211.h>
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index 2b771dc..5290ac3 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -3679,7 +3679,7 @@
int idx;
struct netns_ipvs *ipvs = net_ipvs(net);
- ipvs->rs_lock = __RW_LOCK_UNLOCKED(ipvs->rs_lock);
+ rwlock_init(&ipvs->rs_lock);
/* Initialize rs_table */
for (idx = 0; idx < IP_VS_RTAB_SIZE; idx++)
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
index c698cec..fabb4fa 100644
--- a/net/packet/af_packet.c
+++ b/net/packet/af_packet.c
@@ -961,7 +961,10 @@
return 0;
drop_n_acct:
- po->stats.tp_drops = atomic_inc_return(&sk->sk_drops);
+ spin_lock(&sk->sk_receive_queue.lock);
+ po->stats.tp_drops++;
+ atomic_inc(&sk->sk_drops);
+ spin_unlock(&sk->sk_receive_queue.lock);
drop_n_restore:
if (skb_head != skb->data && skb_shared(skb)) {
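
The replaced line was doubly wrong: it read-modify-wrote tp_drops with no lock held, and it overwrote the packet-socket counter with the unrelated sk_drops value. tp_drops is snapshotted and reset under sk_receive_queue.lock by the PACKET_STATISTICS getsockopt path, so the writer must take the same lock; the matching reader side looks roughly like this (hypothetical helper name):

#include <net/sock.h>

/* hypothetical snapshot-and-reset; pairs with the locked increment above */
static unsigned int my_read_and_reset_drops(struct sock *sk,
					    unsigned int *counter)
{
	unsigned int val;

	spin_lock_bh(&sk->sk_receive_queue.lock);
	val = *counter;
	*counter = 0;
	spin_unlock_bh(&sk->sk_receive_queue.lock);
	return val;
}
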
diff --git a/net/rds/iw_rdma.c b/net/rds/iw_rdma.c
index 8b77edb..4e1de17 100644
--- a/net/rds/iw_rdma.c
+++ b/net/rds/iw_rdma.c
@@ -84,7 +84,8 @@
static void rds_iw_free_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr);
static unsigned int rds_iw_unmap_fastreg_list(struct rds_iw_mr_pool *pool,
struct list_head *unmap_list,
- struct list_head *kill_list);
+ struct list_head *kill_list,
+ int *unpinned);
static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct rds_iw_mr *ibmr);
static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id)
@@ -499,7 +500,7 @@
LIST_HEAD(unmap_list);
LIST_HEAD(kill_list);
unsigned long flags;
- unsigned int nfreed = 0, ncleaned = 0, free_goal;
+ unsigned int nfreed = 0, ncleaned = 0, unpinned = 0, free_goal;
int ret = 0;
rds_iw_stats_inc(s_iw_rdma_mr_pool_flush);
@@ -524,7 +525,8 @@
* will be destroyed by the unmap function.
*/
if (!list_empty(&unmap_list)) {
- ncleaned = rds_iw_unmap_fastreg_list(pool, &unmap_list, &kill_list);
+ ncleaned = rds_iw_unmap_fastreg_list(pool, &unmap_list,
+ &kill_list, &unpinned);
/* If we've been asked to destroy all MRs, move those
* that were simply cleaned to the kill list */
if (free_all)
@@ -548,6 +550,7 @@
spin_unlock_irqrestore(&pool->list_lock, flags);
}
+ atomic_sub(unpinned, &pool->free_pinned);
atomic_sub(ncleaned, &pool->dirty_count);
atomic_sub(nfreed, &pool->item_count);
@@ -828,7 +831,8 @@
static unsigned int rds_iw_unmap_fastreg_list(struct rds_iw_mr_pool *pool,
struct list_head *unmap_list,
- struct list_head *kill_list)
+ struct list_head *kill_list,
+ int *unpinned)
{
struct rds_iw_mapping *mapping, *next;
unsigned int ncleaned = 0;
@@ -855,6 +859,7 @@
spin_lock_irqsave(&pool->list_lock, flags);
list_for_each_entry_safe(mapping, next, unmap_list, m_list) {
+ *unpinned += mapping->m_sg.len;
list_move(&mapping->m_list, &laundered);
ncleaned++;
}
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index e83e7fe..ea40d54 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -4113,9 +4113,12 @@
if (len % sizeof(u32))
return -EINVAL;
+ if (settings->n_akm_suites > NL80211_MAX_NR_AKM_SUITES)
+ return -EINVAL;
+
memcpy(settings->akm_suites, data, len);
- for (i = 0; i < settings->n_ciphers_pairwise; i++)
+ for (i = 0; i < settings->n_akm_suites; i++)
if (!nl80211_valid_akm_suite(settings->akm_suites[i]))
return -EINVAL;
}
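
akm_suites is a fixed array of NL80211_MAX_NR_AKM_SUITES entries inside the crypto settings, while len (and therefore n_akm_suites) comes straight from a userspace netlink attribute, so the count must be validated before the memcpy() or a crafted attribute overflows the struct. The generic shape of the fix in standalone C, with illustrative names:

#include <string.h>
#include <errno.h>

#define MAX_SUITES 2			/* illustrative fixed capacity */

struct my_settings {
	unsigned int n_suites;
	unsigned int suites[MAX_SUITES];
};

/* copy an attacker-controlled, length-prefixed array safely */
static int my_set_suites(struct my_settings *s,
			 const unsigned int *data, size_t len)
{
	if (len % sizeof(data[0]))
		return -EINVAL;		/* whole entries only */

	s->n_suites = len / sizeof(data[0]);
	if (s->n_suites > MAX_SUITES)
		return -EINVAL;		/* reject before writing anything */

	memcpy(s->suites, data, len);
	return 0;
}
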
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index 94fdcc7..552df27 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -1349,14 +1349,16 @@
BUG();
}
xdst = dst_alloc(dst_ops, NULL, 0, 0, 0);
- memset(&xdst->u.rt6.rt6i_table, 0, sizeof(*xdst) - sizeof(struct dst_entry));
- xfrm_policy_put_afinfo(afinfo);
- if (likely(xdst))
+ if (likely(xdst)) {
+ memset(&xdst->u.rt6.rt6i_table, 0,
+ sizeof(*xdst) - sizeof(struct dst_entry));
xdst->flo.ops = &xfrm_bundle_fc_ops;
- else
+ } else
xdst = ERR_PTR(-ENOBUFS);
+ xfrm_policy_put_afinfo(afinfo);
+
return xdst;
}
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 1c6be91..c74e228 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -23,7 +23,7 @@
#include <linux/file.h>
#include <linux/slab.h>
#include <linux/time.h>
-#include <linux/pm_qos_params.h>
+#include <linux/pm_qos.h>
#include <linux/uio.h>
#include <linux/dma-mapping.h>
#include <sound/core.h>
diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
index be69822..e9a2a87 100644
--- a/sound/pci/hda/hda_intel.c
+++ b/sound/pci/hda/hda_intel.c
@@ -1924,7 +1924,8 @@
}
static unsigned int azx_get_position(struct azx *chip,
- struct azx_dev *azx_dev)
+ struct azx_dev *azx_dev,
+ bool with_check)
{
unsigned int pos;
int stream = azx_dev->substream->stream;
@@ -1940,7 +1941,7 @@
default:
/* use the position buffer */
pos = le32_to_cpu(*azx_dev->posbuf);
- if (chip->position_fix[stream] == POS_FIX_AUTO) {
+ if (with_check && chip->position_fix[stream] == POS_FIX_AUTO) {
if (!pos || pos == (u32)-1) {
printk(KERN_WARNING
"hda-intel: Invalid position buffer, "
@@ -1964,7 +1965,7 @@
struct azx *chip = apcm->chip;
struct azx_dev *azx_dev = get_azx_dev(substream);
return bytes_to_frames(substream->runtime,
- azx_get_position(chip, azx_dev));
+ azx_get_position(chip, azx_dev, false));
}
/*
@@ -1987,7 +1988,7 @@
return -1; /* bogus (too early) interrupt */
stream = azx_dev->substream->stream;
- pos = azx_get_position(chip, azx_dev);
+ pos = azx_get_position(chip, azx_dev, true);
if (WARN_ONCE(!azx_dev->period_bytes,
"hda-intel: zero azx_dev->period_bytes"))
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 0503c99..7a73621 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -578,6 +578,10 @@
{
struct alc_spec *spec = codec->spec;
+ /* check LO jack only when it's different from HP */
+ if (spec->autocfg.line_out_pins[0] == spec->autocfg.hp_pins[0])
+ return;
+
spec->line_jack_present =
detect_jacks(codec, ARRAY_SIZE(spec->autocfg.line_out_pins),
spec->autocfg.line_out_pins);
@@ -1321,7 +1325,9 @@
* 15 : 1 --> enable the function "Mute internal speaker
* when the external headphone out jack is plugged"
*/
- if (!spec->autocfg.hp_pins[0]) {
+ if (!spec->autocfg.hp_pins[0] &&
+ !(spec->autocfg.line_out_pins[0] &&
+ spec->autocfg.line_out_type == AUTO_PIN_HP_OUT)) {
hda_nid_t nid;
tmp = (ass >> 11) & 0x3; /* HP to chassis */
if (tmp == 0)
diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
index 1b7c114..987e3cf 100644
--- a/sound/pci/hda/patch_sigmatel.c
+++ b/sound/pci/hda/patch_sigmatel.c
@@ -5630,6 +5630,7 @@
switch (codec->vendor_id) {
case 0x111d76d1:
case 0x111d76d9:
+ case 0x111d76df:
case 0x111d76e5:
case 0x111d7666:
case 0x111d7667:
diff --git a/sound/soc/codecs/ssm2602.c b/sound/soc/codecs/ssm2602.c
index 84f4ad5..9801cd7 100644
--- a/sound/soc/codecs/ssm2602.c
+++ b/sound/soc/codecs/ssm2602.c
@@ -431,7 +431,8 @@
static int ssm2602_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
- u16 reg = snd_soc_read(codec, SSM2602_PWR) & 0xff7f;
+ u16 reg = snd_soc_read(codec, SSM2602_PWR);
+ reg &= ~(PWR_POWER_OFF | PWR_OSC_PDN);
switch (level) {
case SND_SOC_BIAS_ON:
diff --git a/sound/soc/codecs/wm8753.c b/sound/soc/codecs/wm8753.c
index ffa2ffe..aa091a0 100644
--- a/sound/soc/codecs/wm8753.c
+++ b/sound/soc/codecs/wm8753.c
@@ -1454,8 +1454,8 @@
/* set the update bits */
snd_soc_update_bits(codec, WM8753_LDAC, 0x0100, 0x0100);
snd_soc_update_bits(codec, WM8753_RDAC, 0x0100, 0x0100);
- snd_soc_update_bits(codec, WM8753_LDAC, 0x0100, 0x0100);
- snd_soc_update_bits(codec, WM8753_RDAC, 0x0100, 0x0100);
+ snd_soc_update_bits(codec, WM8753_LADC, 0x0100, 0x0100);
+ snd_soc_update_bits(codec, WM8753_RADC, 0x0100, 0x0100);
snd_soc_update_bits(codec, WM8753_LOUT1V, 0x0100, 0x0100);
snd_soc_update_bits(codec, WM8753_ROUT1V, 0x0100, 0x0100);
snd_soc_update_bits(codec, WM8753_LOUT2V, 0x0100, 0x0100);
diff --git a/sound/soc/omap/mcpdm.c b/sound/soc/omap/mcpdm.c
index 928f037..50e5919 100644
--- a/sound/soc/omap/mcpdm.c
+++ b/sound/soc/omap/mcpdm.c
@@ -449,7 +449,7 @@
return ret;
}
-int __devexit omap_mcpdm_remove(struct platform_device *pdev)
+int omap_mcpdm_remove(struct platform_device *pdev)
{
struct omap_mcpdm *mcpdm_ptr = platform_get_drvdata(pdev);
diff --git a/sound/soc/omap/mcpdm.h b/sound/soc/omap/mcpdm.h
index df3e16f..20c20a8 100644
--- a/sound/soc/omap/mcpdm.h
+++ b/sound/soc/omap/mcpdm.h
@@ -150,4 +150,4 @@
extern void omap_mcpdm_free(void);
extern int omap_mcpdm_set_offset(int offset1, int offset2);
int __devinit omap_mcpdm_probe(struct platform_device *pdev);
-int __devexit omap_mcpdm_remove(struct platform_device *pdev);
+int omap_mcpdm_remove(struct platform_device *pdev);
diff --git a/sound/soc/omap/omap-mcbsp.c b/sound/soc/omap/omap-mcbsp.c
index ebcc2d4..478d607 100644
--- a/sound/soc/omap/omap-mcbsp.c
+++ b/sound/soc/omap/omap-mcbsp.c
@@ -516,6 +516,12 @@
struct omap_mcbsp_reg_cfg *regs = &mcbsp_data->regs;
int err = 0;
+ if (mcbsp_data->active)
+ if (freq == mcbsp_data->in_freq)
+ return 0;
+ else
+ return -EBUSY;
+
/* The McBSP signal muxing functions are only available on McBSP1 */
if (clk_id == OMAP_MCBSP_CLKR_SRC_CLKR ||
clk_id == OMAP_MCBSP_CLKR_SRC_CLKX ||
diff --git a/sound/soc/pxa/zylonite.c b/sound/soc/pxa/zylonite.c
index b644575..2b8350b 100644
--- a/sound/soc/pxa/zylonite.c
+++ b/sound/soc/pxa/zylonite.c
@@ -196,20 +196,20 @@
if (clk_pout) {
pout = clk_get(NULL, "CLK_POUT");
if (IS_ERR(pout)) {
- dev_err(&pdev->dev, "Unable to obtain CLK_POUT: %ld\n",
+ dev_err(card->dev, "Unable to obtain CLK_POUT: %ld\n",
PTR_ERR(pout));
return PTR_ERR(pout);
}
ret = clk_enable(pout);
if (ret != 0) {
- dev_err(&pdev->dev, "Unable to enable CLK_POUT: %d\n",
+ dev_err(card->dev, "Unable to enable CLK_POUT: %d\n",
ret);
clk_put(pout);
return ret;
}
- dev_dbg(&pdev->dev, "MCLK enabled at %luHz\n",
+ dev_dbg(card->dev, "MCLK enabled at %luHz\n",
clk_get_rate(pout));
}
@@ -241,7 +241,7 @@
if (clk_pout) {
ret = clk_enable(pout);
if (ret != 0)
- dev_err(&pdev->dev, "Unable to enable CLK_POUT: %d\n",
+ dev_err(card->dev, "Unable to enable CLK_POUT: %d\n",
ret);
}
diff --git a/sound/usb/card.c b/sound/usb/card.c
index ed120ca..3068f04 100644
--- a/sound/usb/card.c
+++ b/sound/usb/card.c
@@ -530,9 +530,11 @@
return chip;
__error:
- if (chip && !chip->num_interfaces)
- snd_card_free(chip->card);
- chip->probing = 0;
+ if (chip) {
+ if (!chip->num_interfaces)
+ snd_card_free(chip->card);
+ chip->probing = 0;
+ }
mutex_unlock(&register_mutex);
__err_val:
return NULL;
@@ -629,7 +631,7 @@
if (chip == (void *)-1L)
return 0;
- if (!(message.event & PM_EVENT_AUTO)) {
+ if (!PMSG_IS_AUTO(message)) {
snd_power_change_state(chip->card, SNDRV_CTL_POWER_D3hot);
if (!chip->num_suspended_intf++) {
list_for_each(p, &chip->pcm_list) {
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index c5748c5..e389815 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -449,6 +449,8 @@
}
if (type & PERF_SAMPLE_RAW) {
+ const u64 *pdata;
+
u.val64 = *array;
if (WARN_ONCE(swapped,
"Endianness of raw data not corrected!\n")) {
@@ -462,11 +464,12 @@
return -EFAULT;
data->raw_size = u.val32[0];
+ pdata = (void *) array + sizeof(u32);
- if (sample_overlap(event, &u.val32[1], data->raw_size))
+ if (sample_overlap(event, pdata, data->raw_size))
return -EFAULT;
- data->raw_data = &u.val32[1];
+ data->raw_data = (void *) pdata;
}
return 0;
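
The raw-sample fix replaces &u.val32[1] — whose meaning depends on how the union halves were swapped above — with a pointer computed directly from the event array: the payload simply starts sizeof(u32) bytes past the size word. A standalone sketch of that length-prefixed layout (endian correction, handled by the swapped branch above, is omitted here); the helper name is hypothetical:

#include <stdint.h>
#include <string.h>

/* hypothetical: u32 size word followed immediately by the payload */
static const void *raw_payload(const uint64_t *array, uint32_t *size)
{
	/* first 4 bytes hold the size in file byte order */
	memcpy(size, array, sizeof(uint32_t));
	return (const char *)array + sizeof(uint32_t);
}
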