Merge commit '317f394160e9beb97d19a84c39b7e5eb3d7815a8'
Conflicts:
arch/sparc/kernel/smp_32.c
With merge conflict help from Daniel Hellstrom.
Signed-off-by: David S. Miller <davem@davemloft.net>
diff --git a/Documentation/input/event-codes.txt b/Documentation/input/event-codes.txt
new file mode 100644
index 0000000..23fcb05
--- /dev/null
+++ b/Documentation/input/event-codes.txt
@@ -0,0 +1,262 @@
+The input protocol uses a map of types and codes to express input device values
+to userspace. This document describes the types and codes and how and when they
+may be used.
+
+A single hardware event generates multiple input events. Each input event
+contains the new value of a single data item. A special event type, EV_SYN, is
+used to separate input events into packets of input data changes occurring at
+the same moment in time. In the following, the term "event" refers to a single
+input event encompassing a type, code, and value.
+
+The input protocol is a stateful protocol. Events are emitted only when values
+of event codes have changed. However, the state is maintained within the Linux
+input subsystem; drivers do not need to maintain the state and may attempt to
+emit unchanged values without harm. Userspace may obtain the current state of
+event code values using the EVIOCG* ioctls defined in linux/input.h. The event
+reports supported by a device are also provided by sysfs in
+class/input/event*/device/capabilities/, and the properties of a device are
+provided in class/input/event*/device/properties.
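+
+For example, a minimal userspace sketch (assuming "fd" is an already-opened
+/dev/input/event* file descriptor; error handling omitted) that queries the
+current key state:
+
+    #include <linux/input.h>
+    #include <string.h>
+    #include <sys/ioctl.h>
+
+    unsigned char keys[KEY_MAX / 8 + 1];
+
+    memset(keys, 0, sizeof(keys));
+    ioctl(fd, EVIOCGKEY(sizeof(keys)), keys);
+    /* Bit n of keys[] is set if the key with code n is currently down. */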
+
+Types:
+==========
+Types are groupings of codes under a logical input construct. Each type has a
+set of applicable codes to be used in generating events. See the Codes section
+for details on valid codes for each type.
+
+* EV_SYN:
+ - Used as markers to separate events. Events may be separated in time or in
+ space, such as with the multitouch protocol.
+
+* EV_KEY:
+ - Used to describe state changes of keyboards, buttons, or other key-like
+ devices.
+
+* EV_REL:
+ - Used to describe relative axis value changes, e.g. moving the mouse 5 units
+ to the left.
+
+* EV_ABS:
+ - Used to describe absolute axis value changes, e.g. describing the
+ coordinates of a touch on a touchscreen.
+
+* EV_MSC:
+ - Used to describe miscellaneous input data that do not fit into other types.
+
+* EV_SW:
+ - Used to describe binary state input switches.
+
+* EV_LED:
+ - Used to turn LEDs on devices on and off.
+
+* EV_SND:
+ - Used to output sound to devices.
+
+* EV_REP:
+ - Used for autorepeating devices.
+
+* EV_FF:
+ - Used to send force feedback commands to an input device.
+
+* EV_PWR:
+ - A special type for power button and switch input.
+
+* EV_FF_STATUS:
+ - Used to receive force feedback device status.
+
+Codes:
+==========
+Codes define the precise type of event.
+
+EV_SYN:
+----------
+EV_SYN event values are undefined. Their usage is defined only by when they are
+sent in the evdev event stream.
+
+* SYN_REPORT:
+ - Used to synchronize and separate events into packets of input data changes
+ occurring at the same moment in time. For example, motion of a mouse may set
+ the REL_X and REL_Y values for one motion, then emit a SYN_REPORT. The next
+ motion will emit more REL_X and REL_Y values and send another SYN_REPORT.
+
+* SYN_CONFIG:
+ - TBD
+
+* SYN_MT_REPORT:
+ - Used to synchronize and separate touch events. See the
+ multi-touch-protocol.txt document for more information.
+
+* SYN_DROPPED:
+ - Used to indicate a buffer overrun in the evdev client's event queue.
+   The client should ignore all events up to and including the next
+   SYN_REPORT event and query the device (using EVIOCG* ioctls) to obtain
+   its current state; see the sketch below.
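+
+The following is a rough userspace sketch of the resulting read loop
+(assuming <linux/input.h> and <unistd.h>; queue_event(), process_packet(),
+discard_queued_events() and resync_state_with_eviocg() are hypothetical
+placeholders, not kernel or libc APIs):
+
+    struct input_event ev;
+    int dropped = 0;
+
+    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
+        if (ev.type == EV_SYN && ev.code == SYN_DROPPED) {
+            dropped = 1;                  /* buffer overrun */
+            discard_queued_events();
+        } else if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
+            if (dropped)                  /* skip up to this SYN_REPORT */
+                resync_state_with_eviocg();
+            else
+                process_packet();         /* one moment in time */
+            dropped = 0;
+        } else if (!dropped) {
+            queue_event(&ev);
+        }
+    }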
+
+EV_KEY:
+----------
+EV_KEY events take the form KEY_<name> or BTN_<name>. For example, KEY_A is used
+to represent the 'A' key on a keyboard. When a key is depressed, an event with
+the key's code is emitted with value 1. When the key is released, an event is
+emitted with value 0. Some hardware sends events when a key is repeated. These
+events have a value of 2. In general, KEY_<name> is used for keyboard keys, and
+BTN_<name> is used for other types of momentary switch events.
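+
+As a driver-side sketch (assuming "dev" is a registered struct input_dev),
+a press and release of the 'A' key would be reported as:
+
+    input_report_key(dev, KEY_A, 1);    /* depressed */
+    input_sync(dev);
+    input_report_key(dev, KEY_A, 0);    /* released */
+    input_sync(dev);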
+
+A few EV_KEY codes have special meanings:
+
+* BTN_TOOL_<name>:
+ - These codes are used in conjunction with input trackpads, tablets, and
+ touchscreens. These devices may be used with fingers, pens, or other tools.
+ When an event occurs and a tool is used, the corresponding BTN_TOOL_<name>
+ code should be set to a value of 1. When the tool is no longer interacting
+ with the input device, the BTN_TOOL_<name> code should be reset to 0. All
+ trackpads, tablets, and touchscreens should use at least one BTN_TOOL_<name>
+ code when events are generated.
+
+* BTN_TOUCH:
+ - BTN_TOUCH is used for touch contact. While an input tool is determined to be
+ within meaningful physical contact, the value of this property must be set
+ to 1. Meaningful physical contact may mean any contact, or it may mean
+ contact conditioned by an implementation defined property. For example, a
+ touchpad may set the value to 1 only when the touch pressure rises above a
+ certain value. BTN_TOUCH may be combined with BTN_TOOL_<name> codes. For
+ example, a pen tablet may set BTN_TOOL_PEN to 1 and BTN_TOUCH to 0 while the
+ pen is hovering over but not touching the tablet surface.
+
+Note: For the legacy mousedev emulation driver to function correctly,
+BTN_TOUCH must be the first evdev code emitted in a synchronization frame.
+
+Note: Historically a touch device with BTN_TOOL_FINGER and BTN_TOUCH was
+interpreted as a touchpad by userspace, while a similar device without
+BTN_TOOL_FINGER was interpreted as a touchscreen. For backwards compatibility
+with current userspace it is recommended to follow this distinction. In the
+future, this distinction will be deprecated and the device properties ioctl
+EVIOCGPROP, defined in linux/input.h, will be used to convey the device type.
+
+* BTN_TOOL_FINGER, BTN_TOOL_DOUBLETAP, BTN_TOOL_TRIPLETAP, BTN_TOOL_QUADTAP:
+ - These codes denote one, two, three, and four finger interaction on a
+ trackpad or touchscreen. For example, if the user uses two fingers and moves
+ them on the touchpad in an effort to scroll content on screen,
+ BTN_TOOL_DOUBLETAP should be set to value 1 for the duration of the motion.
+ Note that all BTN_TOOL_<name> codes and the BTN_TOUCH code are orthogonal in
+ purpose. A trackpad event generated by finger touches should generate events
+ for one code from each group. At most one of these BTN_TOOL_<name> codes
+ should have a value of 1 during any synchronization frame.
+
+Note: Historically some drivers emitted more than one of the finger count codes
+with a value of 1 in the same synchronization frame. This usage is deprecated.
+
+Note: In multitouch drivers, the input_mt_report_finger_count() function should
+be used to emit these codes. Please see multi-touch-protocol.txt for details.
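+
+A minimal driver-side sketch of the multitouch case (assuming "dev" is a
+registered struct input_dev and linux/input/mt.h is available):
+
+    /* e.g. two fingers currently interacting with the trackpad */
+    input_mt_report_finger_count(dev, 2);
+    input_sync(dev);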
+
+EV_REL:
+----------
+EV_REL events describe relative changes in a property. For example, a mouse may
+move to the left by a certain number of units, but its absolute position in
+space is unknown. If the absolute position is known, EV_ABS codes should be used
+instead of EV_REL codes.
+
+A few EV_REL codes have special meanings:
+
+* REL_WHEEL, REL_HWHEEL:
+ - These codes are used for vertical and horizontal scroll wheels,
+ respectively.
+
+EV_ABS:
+----------
+EV_ABS events describe absolute changes in a property. For example, a touchpad
+may emit coordinates for a touch location.
+
+A few EV_ABS codes have special meanings:
+
+* ABS_DISTANCE:
+ - Used to describe the distance of a tool from an interaction surface. This
+ event should only be emitted while the tool is hovering, meaning in close
+ proximity of the device and while the value of the BTN_TOUCH code is 0. If
+ the input device may be used freely in three dimensions, consider ABS_Z
+ instead.
+
+* ABS_MT_<name>:
+ - Used to describe multitouch input events. Please see
+ multi-touch-protocol.txt for details.
+
+EV_SW:
+----------
+EV_SW events describe stateful binary switches. For example, the SW_LID code is
+used to denote when a laptop lid is closed.
+
+Upon binding to a device or resuming from suspend, a driver must report
+the current switch state. This ensures that the device, kernel, and userspace
+state is in sync.
+
+Upon resume, if the switch state is the same as before suspend, then the input
+subsystem will filter out the duplicate switch state reports. The driver does
+not need to track the switch state itself.
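+
+A driver-side sketch (assuming "dev" is a registered struct input_dev;
+"lid_is_closed" stands for the hardware state read by the driver) reporting
+the lid state on bind or resume:
+
+    input_report_switch(dev, SW_LID, lid_is_closed);
+    input_sync(dev);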
+
+EV_MSC:
+----------
+EV_MSC events are used for input and output events that do not fall under other
+categories.
+
+EV_LED:
+----------
+EV_LED events are used for input and output to set and query the state of
+various LEDs on devices.
+
+EV_REP:
+----------
+EV_REP events are used for specifying autorepeating events.
+
+EV_SND:
+----------
+EV_SND events are used for sending sound commands to simple sound output
+devices.
+
+EV_FF:
+----------
+EV_FF events are used to initialize a force feedback capable device and to
+cause such a device to play force feedback effects.
+
+EV_PWR:
+----------
+EV_PWR events are a special type of event used specifically for power
+management. Their usage is not well defined. To be addressed later.
+
+Guidelines:
+==========
+The guidelines below ensure proper single-touch and multi-finger functionality.
+For multi-touch functionality, see the multi-touch-protocol.txt document for
+more information.
+
+Mice:
+----------
+REL_{X,Y} must be reported when the mouse moves. BTN_LEFT must be used to report
+the primary button press. BTN_{MIDDLE,RIGHT,4,5,etc.} should be used to report
+further buttons of the device. REL_WHEEL and REL_HWHEEL should be used to report
+scroll wheel events where available.
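+
+A per-motion sketch (assuming a registered struct input_dev "dev";
+"dx"/"dy"/"left_pressed" stand for hardware-provided values):
+
+    input_report_rel(dev, REL_X, dx);
+    input_report_rel(dev, REL_Y, dy);
+    input_report_key(dev, BTN_LEFT, left_pressed);
+    input_sync(dev);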
+
+Touchscreens:
+----------
+ABS_{X,Y} must be reported with the location of the touch. BTN_TOUCH must be
+used to report when a touch is active on the screen.
+BTN_{MOUSE,LEFT,MIDDLE,RIGHT} must not be reported as the result of touch
+contact. BTN_TOOL_<name> events should be reported where possible.
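+
+A sketch of one contact frame ("x"/"y" stand for hardware-provided
+coordinates; BTN_TOUCH is reported first per the mousedev note above):
+
+    input_report_key(dev, BTN_TOUCH, 1);
+    input_report_abs(dev, ABS_X, x);
+    input_report_abs(dev, ABS_Y, y);
+    input_sync(dev);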
+
+Trackpads:
+----------
+Legacy trackpads that only provide relative position information must report
+events like mice described above.
+
+Trackpads that provide absolute touch position must report ABS_{X,Y} for the
+location of the touch. BTN_TOUCH should be used to report when a touch is active
+on the trackpad. Where multi-finger support is available, BTN_TOOL_<name> should
+be used to report the number of touches active on the trackpad.
+
+Tablets:
+----------
+BTN_TOOL_<name> events must be reported when a stylus or other tool is active on
+the tablet. ABS_{X,Y} must be reported with the location of the tool. BTN_TOUCH
+should be used to report when the tool is in contact with the tablet.
+BTN_{STYLUS,STYLUS2} should be used to report buttons on the tool itself. Any
+button may be used for buttons on the tablet except BTN_{MOUSE,LEFT}.
+BTN_{0,1,2,etc} are good generic codes for unlabeled buttons. Do not use
+meaningful buttons, like BTN_FORWARD, unless the button is labeled for that
+purpose on the device.
diff --git a/MAINTAINERS b/MAINTAINERS
index 649600c..1e2724e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -151,6 +151,7 @@
F: drivers/net/hamradio/6pack.c
8169 10/100/1000 GIGABIT ETHERNET DRIVER
+M: Realtek linux nic maintainers <nic_swsd@realtek.com>
M: Francois Romieu <romieu@fr.zoreil.com>
L: netdev@vger.kernel.org
S: Maintained
@@ -184,10 +185,9 @@
F: fs/9p/
A2232 SERIAL BOARD DRIVER
-M: Enver Haase <A2232@gmx.net>
L: linux-m68k@lists.linux-m68k.org
-S: Maintained
-F: drivers/char/ser_a2232*
+S: Orphan
+F: drivers/staging/generic_serial/ser_a2232*
AACRAID SCSI RAID DRIVER
M: Adaptec OEM Raid Solutions <aacraid@adaptec.com>
@@ -877,6 +877,13 @@
F: arch/arm/mach-orion5x/
F: arch/arm/plat-orion/
+ARM/Orion SoC/Technologic Systems TS-78xx platform support
+M: Alexander Clouter <alex@digriz.org.uk>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+W: http://www.digriz.org.uk/ts78xx/kernel
+S: Maintained
+F: arch/arm/mach-orion5x/ts78xx-*
+
ARM/MIOA701 MACHINE SUPPORT
M: Robert Jarzmik <robert.jarzmik@free.fr>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -1063,7 +1070,7 @@
F: drivers/sh/
ARM/TELECHIPS ARM ARCHITECTURE
-M: "Hans J. Koch" <hjk@linutronix.de>
+M: "Hans J. Koch" <hjk@hansjkoch.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/plat-tcc/
@@ -1823,11 +1830,10 @@
F: drivers/platform/x86/compal-laptop.c
COMPUTONE INTELLIPORT MULTIPORT CARD
-M: "Michael H. Warfield" <mhw@wittsend.com>
W: http://www.wittsend.com/computone.html
-S: Maintained
+S: Orphan
F: Documentation/serial/computone.txt
-F: drivers/char/ip2/
+F: drivers/staging/tty/ip2/
CONEXANT ACCESSRUNNER USB DRIVER
M: Simon Arlott <cxacru@fire.lp0.eu>
@@ -2010,7 +2016,7 @@
CYCLADES ASYNC MUX DRIVER
W: http://www.cyclades.com/
S: Orphan
-F: drivers/char/cyclades.c
+F: drivers/tty/cyclades.c
F: include/linux/cyclades.h
CYCLADES PC300 DRIVER
@@ -2124,8 +2130,8 @@
W: http://www.digi.com
S: Orphan
F: Documentation/serial/digiepca.txt
-F: drivers/char/epca*
-F: drivers/char/digi*
+F: drivers/staging/tty/epca*
+F: drivers/staging/tty/digi*
DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <guenter.roeck@ericsson.com>
@@ -4077,7 +4083,7 @@
F: include/linux/matroxfb.h
MAX6650 HARDWARE MONITOR AND FAN CONTROLLER DRIVER
-M: "Hans J. Koch" <hjk@linutronix.de>
+M: "Hans J. Koch" <hjk@hansjkoch.de>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/max6650
@@ -4192,7 +4198,7 @@
M: Jiri Slaby <jirislaby@gmail.com>
S: Maintained
F: Documentation/serial/moxa-smartio
-F: drivers/char/mxser.*
+F: drivers/tty/mxser.*
MSI LAPTOP SUPPORT
M: "Lee, Chun-Yi" <jlee@novell.com>
@@ -4234,7 +4240,7 @@
MULTITECH MULTIPORT CARD (ISICOM)
S: Orphan
-F: drivers/char/isicom.c
+F: drivers/tty/isicom.c
F: include/linux/isicom.h
MUSB MULTIPOINT HIGH SPEED DUAL-ROLE CONTROLLER
@@ -5273,14 +5279,14 @@
RISCOM8 DRIVER
S: Orphan
F: Documentation/serial/riscom8.txt
-F: drivers/char/riscom8*
+F: drivers/staging/tty/riscom8*
ROCKETPORT DRIVER
P: Comtrol Corp.
W: http://www.comtrol.com
S: Maintained
F: Documentation/serial/rocket.txt
-F: drivers/char/rocket*
+F: drivers/tty/rocket*
ROSE NETWORK LAYER
M: Ralf Baechle <ralf@linux-mips.org>
@@ -5916,10 +5922,9 @@
F: arch/arm/mach-spear6xx/spear600_evb.c
SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
-M: Roger Wolff <R.E.Wolff@BitWizard.nl>
-S: Supported
+S: Orphan
F: Documentation/serial/specialix.txt
-F: drivers/char/specialix*
+F: drivers/staging/tty/specialix*
SPI SUBSYSTEM
M: David Brownell <dbrownell@users.sourceforge.net>
@@ -5964,7 +5969,6 @@
STABLE BRANCH
M: Greg Kroah-Hartman <greg@kroah.com>
-M: Chris Wright <chrisw@sous-sol.org>
L: stable@kernel.org
S: Maintained
@@ -6248,7 +6252,8 @@
W: http://www.uclinux.org/
L: uclinux-dev@uclinux.org (subscribers-only)
S: Maintained
-F: arch/m68knommu/
+F: arch/m68k/*/*_no.*
+F: arch/m68k/include/asm/*_no.*
UCLINUX FOR RENESAS H8/300 (H8300)
M: Yoshinori Sato <ysato@users.sourceforge.jp>
@@ -6618,7 +6623,7 @@
F: fs/hppfs/
USERSPACE I/O (UIO)
-M: "Hans J. Koch" <hjk@linutronix.de>
+M: "Hans J. Koch" <hjk@hansjkoch.de>
M: Greg Kroah-Hartman <gregkh@suse.de>
S: Maintained
F: Documentation/DocBook/uio-howto.tmpl
diff --git a/Makefile b/Makefile
index 322e733..b967b96 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 39
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
NAME = Flesh-Eating Bats with Fangs
# *DOCUMENTATION*
diff --git a/arch/alpha/kernel/Makefile b/arch/alpha/kernel/Makefile
index 9bb7b858..7a6d908 100644
--- a/arch/alpha/kernel/Makefile
+++ b/arch/alpha/kernel/Makefile
@@ -4,7 +4,7 @@
extra-y := head.o vmlinux.lds
asflags-y := $(KBUILD_CFLAGS)
-ccflags-y := -Werror -Wno-sign-compare
+ccflags-y := -Wno-sign-compare
obj-y := entry.o traps.o process.o init_task.o osf_sys.o irq.o \
irq_alpha.o signal.o setup.o ptrace.o time.o \
diff --git a/arch/alpha/kernel/core_mcpcia.c b/arch/alpha/kernel/core_mcpcia.c
index 381fec0..da7bcc3 100644
--- a/arch/alpha/kernel/core_mcpcia.c
+++ b/arch/alpha/kernel/core_mcpcia.c
@@ -88,7 +88,7 @@
{
unsigned long flags;
unsigned long mid = MCPCIA_HOSE2MID(hose->index);
- unsigned int stat0, value, temp, cpu;
+ unsigned int stat0, value, cpu;
cpu = smp_processor_id();
@@ -101,7 +101,7 @@
stat0 = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid) = stat0;
mb();
- temp = *(vuip)MCPCIA_CAP_ERR(mid);
+ *(vuip)MCPCIA_CAP_ERR(mid);
DBG_CFG(("conf_read: MCPCIA_CAP_ERR(%d) was 0x%x\n", mid, stat0));
mb();
@@ -136,7 +136,7 @@
{
unsigned long flags;
unsigned long mid = MCPCIA_HOSE2MID(hose->index);
- unsigned int stat0, temp, cpu;
+ unsigned int stat0, cpu;
cpu = smp_processor_id();
@@ -145,7 +145,7 @@
/* Reset status register to avoid losing errors. */
stat0 = *(vuip)MCPCIA_CAP_ERR(mid);
*(vuip)MCPCIA_CAP_ERR(mid) = stat0; mb();
- temp = *(vuip)MCPCIA_CAP_ERR(mid);
+ *(vuip)MCPCIA_CAP_ERR(mid);
DBG_CFG(("conf_write: MCPCIA CAP_ERR(%d) was 0x%x\n", mid, stat0));
draina();
@@ -157,7 +157,7 @@
*((vuip)addr) = value;
mb();
mb(); /* magic */
- temp = *(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */
+ *(vuip)MCPCIA_CAP_ERR(mid); /* read to force the write */
mcheck_expected(cpu) = 0;
mb();
@@ -572,12 +572,10 @@
void
mcpcia_machine_check(unsigned long vector, unsigned long la_ptr)
{
- struct el_common *mchk_header;
struct el_MCPCIA_uncorrected_frame_mcheck *mchk_logout;
unsigned int cpu = smp_processor_id();
int expected;
- mchk_header = (struct el_common *)la_ptr;
mchk_logout = (struct el_MCPCIA_uncorrected_frame_mcheck *)la_ptr;
expected = mcheck_expected(cpu);
diff --git a/arch/alpha/kernel/err_titan.c b/arch/alpha/kernel/err_titan.c
index c3b3781..14b26c4 100644
--- a/arch/alpha/kernel/err_titan.c
+++ b/arch/alpha/kernel/err_titan.c
@@ -533,8 +533,6 @@
static struct el_subpacket *
el_process_regatta_subpacket(struct el_subpacket *header)
{
- int status;
-
if (header->class != EL_CLASS__REGATTA_FAMILY) {
printk("%s ** Unexpected header CLASS %d TYPE %d, aborting\n",
err_print_prefix,
@@ -551,7 +549,7 @@
printk("%s ** Occurred on CPU %d:\n",
err_print_prefix,
(int)header->by_type.regatta_frame.cpuid);
- status = privateer_process_logout_frame((struct el_common *)
+ privateer_process_logout_frame((struct el_common *)
header->by_type.regatta_frame.data_start, 1);
break;
default:
diff --git a/arch/alpha/kernel/irq_alpha.c b/arch/alpha/kernel/irq_alpha.c
index 1479dc6..51b7fbd 100644
--- a/arch/alpha/kernel/irq_alpha.c
+++ b/arch/alpha/kernel/irq_alpha.c
@@ -228,7 +228,7 @@
void __init
init_rtc_irq(void)
{
- irq_set_chip_and_handler_name(RTC_IRQ, &no_irq_chip,
+ irq_set_chip_and_handler_name(RTC_IRQ, &dummy_irq_chip,
handle_simple_irq, "RTC");
setup_irq(RTC_IRQ, &timer_irqaction);
}
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index d2634e4..edbddcb 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -1404,8 +1404,6 @@
case PCA56_CPU:
case PCA57_CPU:
{
- unsigned long cbox_config, size;
-
if (cpu_type == PCA56_CPU) {
L1I = CSHAPE(16*1024, 6, 1);
L1D = CSHAPE(8*1024, 5, 1);
@@ -1415,10 +1413,12 @@
}
L3 = -1;
+#if 0
+ unsigned long cbox_config, size;
+
cbox_config = *(vulp) phys_to_virt (0xfffff00008UL);
size = 512*1024 * (1 << ((cbox_config >> 12) & 3));
-#if 0
L2 = ((cbox_config >> 31) & 1 ? CSHAPE (size, 6, 1) : -1);
#else
L2 = external_cache_probe(512*1024, 6);
diff --git a/arch/alpha/kernel/smc37c93x.c b/arch/alpha/kernel/smc37c93x.c
index 3e6a289..6886b83 100644
--- a/arch/alpha/kernel/smc37c93x.c
+++ b/arch/alpha/kernel/smc37c93x.c
@@ -79,7 +79,6 @@
static unsigned long __init SMCConfigState(unsigned long baseAddr)
{
unsigned char devId;
- unsigned char devRev;
unsigned long configPort;
unsigned long indexPort;
@@ -100,7 +99,7 @@
devId = inb(dataPort);
if (devId == VALID_DEVICE_ID) {
outb(DEVICE_REV, indexPort);
- devRev = inb(dataPort);
+ /* unsigned char devRev = */ inb(dataPort);
break;
}
else
diff --git a/arch/alpha/kernel/sys_wildfire.c b/arch/alpha/kernel/sys_wildfire.c
index d3cb28b..d92cdc7 100644
--- a/arch/alpha/kernel/sys_wildfire.c
+++ b/arch/alpha/kernel/sys_wildfire.c
@@ -156,7 +156,6 @@
wildfire_init_irq_per_pca(int qbbno, int pcano)
{
int i, irq_bias;
- unsigned long io_bias;
static struct irqaction isa_enable = {
.handler = no_action,
.name = "isa_enable",
@@ -165,10 +164,12 @@
irq_bias = qbbno * (WILDFIRE_PCA_PER_QBB * WILDFIRE_IRQ_PER_PCA)
+ pcano * WILDFIRE_IRQ_PER_PCA;
+#if 0
+ unsigned long io_bias;
+
/* Only need the following for first PCI bus per PCA. */
io_bias = WILDFIRE_IO(qbbno, pcano<<1) - WILDFIRE_IO_BIAS;
-#if 0
outb(0, DMA1_RESET_REG + io_bias);
outb(0, DMA2_RESET_REG + io_bias);
outb(DMA_MODE_CASCADE, DMA2_MODE_REG + io_bias);
diff --git a/arch/alpha/kernel/time.c b/arch/alpha/kernel/time.c
index a58e84f..918e8e0 100644
--- a/arch/alpha/kernel/time.c
+++ b/arch/alpha/kernel/time.c
@@ -153,6 +153,7 @@
year += 100;
ts->tv_sec = mktime(year, mon, day, hour, min, sec);
+ ts->tv_nsec = 0;
}
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index fdc9d4d..377a7a5 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1540,7 +1540,6 @@
config HIGHPTE
bool "Allocate 2nd-level pagetables from highmem"
depends on HIGHMEM
- depends on !OUTER_CACHE
config HW_PERF_EVENTS
bool "Enable hardware performance counter support for perf events"
@@ -2012,6 +2011,8 @@
config ARCH_SUSPEND_POSSIBLE
depends on !ARCH_S5P64X0 && !ARCH_S5P6442
+ depends on CPU_ARM920T || CPU_ARM926T || CPU_SA1100 || \
+ CPU_V6 || CPU_V6K || CPU_V7 || CPU_XSC3 || CPU_XSCALE
def_bool y
endmenu
diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
index 494224a..03d01d7 100644
--- a/arch/arm/Kconfig.debug
+++ b/arch/arm/Kconfig.debug
@@ -63,17 +63,6 @@
8 - SIGSEGV faults
16 - SIGBUS faults
-config DEBUG_ERRORS
- bool "Verbose kernel error messages"
- depends on DEBUG_KERNEL
- help
- This option controls verbose debugging information which can be
- printed when the kernel detects an internal error. This debugging
- information is useful to kernel hackers when tracking down problems,
- but mostly meaningless to other people. It's safe to say Y unless
- you are concerned with the code size or don't want to see these
- messages.
-
config DEBUG_STACK_USAGE
bool "Enable stack utilization instrumentation"
depends on DEBUG_KERNEL
diff --git a/arch/arm/common/Makefile b/arch/arm/common/Makefile
index e7521bca..6ea9b6f 100644
--- a/arch/arm/common/Makefile
+++ b/arch/arm/common/Makefile
@@ -16,5 +16,4 @@
obj-$(CONFIG_ARCH_IXP2000) += uengine.o
obj-$(CONFIG_ARCH_IXP23XX) += uengine.o
obj-$(CONFIG_PCI_HOST_ITE8152) += it8152.o
-obj-$(CONFIG_COMMON_CLKDEV) += clkdev.o
obj-$(CONFIG_ARM_TIMER_SP804) += timer-sp.o
diff --git a/arch/arm/include/asm/thread_notify.h b/arch/arm/include/asm/thread_notify.h
index c4391ba..1dc9806 100644
--- a/arch/arm/include/asm/thread_notify.h
+++ b/arch/arm/include/asm/thread_notify.h
@@ -43,6 +43,7 @@
#define THREAD_NOTIFY_FLUSH 0
#define THREAD_NOTIFY_EXIT 1
#define THREAD_NOTIFY_SWITCH 2
+#define THREAD_NOTIFY_COPY 3
#endif
#endif
diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
index 74554f1..8d95446 100644
--- a/arch/arm/kernel/Makefile
+++ b/arch/arm/kernel/Makefile
@@ -29,7 +29,7 @@
obj-$(CONFIG_ARTHUR) += arthur.o
obj-$(CONFIG_ISA_DMA) += dma-isa.o
obj-$(CONFIG_PCI) += bios32.o isa.o
-obj-$(CONFIG_PM) += sleep.o
+obj-$(CONFIG_PM_SLEEP) += sleep.o
obj-$(CONFIG_HAVE_SCHED_CLOCK) += sched_clock.o
obj-$(CONFIG_SMP) += smp.o smp_tlb.o
obj-$(CONFIG_HAVE_ARM_SCU) += smp_scu.o
diff --git a/arch/arm/kernel/elf.c b/arch/arm/kernel/elf.c
index d4a0da1..9b05c6a 100644
--- a/arch/arm/kernel/elf.c
+++ b/arch/arm/kernel/elf.c
@@ -40,15 +40,22 @@
void elf_set_personality(const struct elf32_hdr *x)
{
unsigned int eflags = x->e_flags;
- unsigned int personality = PER_LINUX_32BIT;
+ unsigned int personality = current->personality & ~PER_MASK;
+
+ /*
+ * We only support Linux ELF executables, so always set the
+ * personality to LINUX.
+ */
+ personality |= PER_LINUX;
/*
* APCS-26 is only valid for OABI executables
*/
- if ((eflags & EF_ARM_EABI_MASK) == EF_ARM_EABI_UNKNOWN) {
- if (eflags & EF_ARM_APCS_26)
- personality = PER_LINUX;
- }
+ if ((eflags & EF_ARM_EABI_MASK) == EF_ARM_EABI_UNKNOWN &&
+ (eflags & EF_ARM_APCS_26))
+ personality &= ~ADDR_LIMIT_32BIT;
+ else
+ personality |= ADDR_LIMIT_32BIT;
set_personality(personality);
diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
index 8dbc126..87acc25 100644
--- a/arch/arm/kernel/hw_breakpoint.c
+++ b/arch/arm/kernel/hw_breakpoint.c
@@ -868,6 +868,13 @@
*/
asm volatile("mcr p14, 0, %0, c1, c0, 4" : : "r" (0));
isb();
+
+ /*
+ * Clear any configured vector-catch events before
+ * enabling monitor mode.
+ */
+ asm volatile("mcr p14, 0, %0, c0, c7, 0" : : "r" (0));
+ isb();
}
if (enable_monitor_mode())
diff --git a/arch/arm/kernel/perf_event.c b/arch/arm/kernel/perf_event.c
index 69cfee0..979da39 100644
--- a/arch/arm/kernel/perf_event.c
+++ b/arch/arm/kernel/perf_event.c
@@ -221,7 +221,7 @@
prev_raw_count &= armpmu->max_period;
if (overflow)
- delta = armpmu->max_period - prev_raw_count + new_raw_count;
+ delta = armpmu->max_period - prev_raw_count + new_raw_count + 1;
else
delta = new_raw_count - prev_raw_count;
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 94bbedb..5e1e541 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -372,6 +372,8 @@
if (clone_flags & CLONE_SETTLS)
thread->tp_value = regs->ARM_r3;
+ thread_notify(THREAD_NOTIFY_COPY, thread);
+
return 0;
}
diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c
index f0000e1..3b54ad1 100644
--- a/arch/arm/kernel/traps.c
+++ b/arch/arm/kernel/traps.c
@@ -410,8 +410,7 @@
struct thread_info *thread = current_thread_info();
siginfo_t info;
- if (current->personality != PER_LINUX &&
- current->personality != PER_LINUX_32BIT &&
+ if ((current->personality & PER_MASK) != PER_LINUX &&
thread->exec_domain->handler) {
thread->exec_domain->handler(n, regs);
return regs->ARM_r0;
diff --git a/arch/arm/mach-mmp/include/mach/gpio.h b/arch/arm/mach-mmp/include/mach/gpio.h
index ee8b02e..7bfb827 100644
--- a/arch/arm/mach-mmp/include/mach/gpio.h
+++ b/arch/arm/mach-mmp/include/mach/gpio.h
@@ -10,7 +10,7 @@
#define BANK_OFF(n) (((n) < 3) ? (n) << 2 : 0x100 + (((n) - 3) << 2))
#define GPIO_REG(x) (*((volatile u32 *)(GPIO_REGS_VIRT + (x))))
-#define NR_BUILTIN_GPIO (192)
+#define NR_BUILTIN_GPIO IRQ_GPIO_NUM
#define gpio_to_bank(gpio) ((gpio) >> 5)
#define gpio_to_irq(gpio) (IRQ_GPIO_START + (gpio))
diff --git a/arch/arm/mach-mmp/include/mach/mfp-pxa168.h b/arch/arm/mach-mmp/include/mach/mfp-pxa168.h
index 4621067..713be15 100644
--- a/arch/arm/mach-mmp/include/mach/mfp-pxa168.h
+++ b/arch/arm/mach-mmp/include/mach/mfp-pxa168.h
@@ -8,6 +8,15 @@
#define MFP_DRIVE_MEDIUM (0x2 << 13)
#define MFP_DRIVE_FAST (0x3 << 13)
+#undef MFP_CFG
+#undef MFP_CFG_DRV
+
+#define MFP_CFG(pin, af) \
+ (MFP_LPM_INPUT | MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_DRIVE_MEDIUM)
+
+#define MFP_CFG_DRV(pin, af, drv) \
+ (MFP_LPM_INPUT | MFP_PIN(MFP_PIN_##pin) | MFP_##af | MFP_DRIVE_##drv)
+
/* GPIO */
#define GPIO0_GPIO MFP_CFG(GPIO0, AF5)
#define GPIO1_GPIO MFP_CFG(GPIO1, AF5)
diff --git a/arch/arm/mach-msm/board-qsd8x50.c b/arch/arm/mach-msm/board-qsd8x50.c
index 7f56861..6a96911 100644
--- a/arch/arm/mach-msm/board-qsd8x50.c
+++ b/arch/arm/mach-msm/board-qsd8x50.c
@@ -160,10 +160,7 @@
static void __init qsd8x50_init_mmc(void)
{
- if (machine_is_qsd8x50_ffa() || machine_is_qsd8x50a_ffa())
- vreg_mmc = vreg_get(NULL, "gp6");
- else
- vreg_mmc = vreg_get(NULL, "gp5");
+ vreg_mmc = vreg_get(NULL, "gp5");
if (IS_ERR(vreg_mmc)) {
pr_err("vreg get for vreg_mmc failed (%ld)\n",
diff --git a/arch/arm/mach-msm/timer.c b/arch/arm/mach-msm/timer.c
index 56f920c..38b95e9 100644
--- a/arch/arm/mach-msm/timer.c
+++ b/arch/arm/mach-msm/timer.c
@@ -269,7 +269,7 @@
/* Use existing clock_event for cpu 0 */
if (!smp_processor_id())
- return;
+ return 0;
writel(DGT_CLK_CTL_DIV_4, MSM_TMR_BASE + DGT_CLK_CTL);
diff --git a/arch/arm/mach-pxa/include/mach/gpio.h b/arch/arm/mach-pxa/include/mach/gpio.h
index b024a8b..c463950 100644
--- a/arch/arm/mach-pxa/include/mach/gpio.h
+++ b/arch/arm/mach-pxa/include/mach/gpio.h
@@ -99,11 +99,24 @@
#define GAFR(x) GPIO_REG(0x54 + (((x) & 0x70) >> 2))
-#define NR_BUILTIN_GPIO 128
+#define NR_BUILTIN_GPIO PXA_GPIO_IRQ_NUM
#define gpio_to_bank(gpio) ((gpio) >> 5)
#define gpio_to_irq(gpio) IRQ_GPIO(gpio)
-#define irq_to_gpio(irq) IRQ_TO_GPIO(irq)
+
+static inline int irq_to_gpio(unsigned int irq)
+{
+ int gpio;
+
+ if (irq == IRQ_GPIO0 || irq == IRQ_GPIO1)
+ return irq - IRQ_GPIO0;
+
+ gpio = irq - PXA_GPIO_IRQ_BASE;
+ if (gpio >= 2 && gpio < NR_BUILTIN_GPIO)
+ return gpio;
+
+ return -1;
+}
#ifdef CONFIG_CPU_PXA26x
/* GPIO86/87/88/89 on PXA26x have their direction bits in GPDR2 inverted,
diff --git a/arch/arm/mach-pxa/include/mach/irqs.h b/arch/arm/mach-pxa/include/mach/irqs.h
index a4285fc..0384024 100644
--- a/arch/arm/mach-pxa/include/mach/irqs.h
+++ b/arch/arm/mach-pxa/include/mach/irqs.h
@@ -93,9 +93,6 @@
#define GPIO_2_x_TO_IRQ(x) (PXA_GPIO_IRQ_BASE + (x))
#define IRQ_GPIO(x) (((x) < 2) ? (IRQ_GPIO0 + (x)) : GPIO_2_x_TO_IRQ(x))
-#define IRQ_TO_GPIO_2_x(i) ((i) - PXA_GPIO_IRQ_BASE)
-#define IRQ_TO_GPIO(i) (((i) < IRQ_GPIO(2)) ? ((i) - IRQ_GPIO0) : IRQ_TO_GPIO_2_x(i))
-
/*
* The following interrupts are for board specific purposes. Since
* the kernel can only run on one machine at a time, we can re-use
diff --git a/arch/arm/mach-pxa/pxa25x.c b/arch/arm/mach-pxa/pxa25x.c
index 6bde595..a4af8c5 100644
--- a/arch/arm/mach-pxa/pxa25x.c
+++ b/arch/arm/mach-pxa/pxa25x.c
@@ -285,7 +285,7 @@
static int pxa25x_set_wake(struct irq_data *d, unsigned int on)
{
- int gpio = IRQ_TO_GPIO(d->irq);
+ int gpio = irq_to_gpio(d->irq);
uint32_t mask = 0;
if (gpio >= 0 && gpio < 85)
diff --git a/arch/arm/mach-pxa/pxa27x.c b/arch/arm/mach-pxa/pxa27x.c
index 1cb5d0f..909756e 100644
--- a/arch/arm/mach-pxa/pxa27x.c
+++ b/arch/arm/mach-pxa/pxa27x.c
@@ -345,7 +345,7 @@
*/
static int pxa27x_set_wake(struct irq_data *d, unsigned int on)
{
- int gpio = IRQ_TO_GPIO(d->irq);
+ int gpio = irq_to_gpio(d->irq);
uint32_t mask;
if (gpio >= 0 && gpio < 128)
diff --git a/arch/arm/mach-tegra/gpio.c b/arch/arm/mach-tegra/gpio.c
index 76a3f65..65a1aba 100644
--- a/arch/arm/mach-tegra/gpio.c
+++ b/arch/arm/mach-tegra/gpio.c
@@ -257,7 +257,8 @@
void tegra_gpio_resume(void)
{
unsigned long flags;
- int b, p, i;
+ int b;
+ int p;
local_irq_save(flags);
@@ -280,7 +281,8 @@
void tegra_gpio_suspend(void)
{
unsigned long flags;
- int b, p, i;
+ int b;
+ int p;
local_irq_save(flags);
for (b = 0; b < ARRAY_SIZE(tegra_gpio_banks); b++) {
diff --git a/arch/arm/mach-tegra/tegra2_clocks.c b/arch/arm/mach-tegra/tegra2_clocks.c
index 6d7c4ee..4459470 100644
--- a/arch/arm/mach-tegra/tegra2_clocks.c
+++ b/arch/arm/mach-tegra/tegra2_clocks.c
@@ -1362,14 +1362,15 @@
{
unsigned long flags;
int ret;
+ long new_rate = rate;
- rate = clk_round_rate(c->parent, rate);
- if (rate < 0)
- return rate;
+ new_rate = clk_round_rate(c->parent, new_rate);
+ if (new_rate < 0)
+ return new_rate;
spin_lock_irqsave(&c->parent->spinlock, flags);
- c->u.shared_bus_user.rate = rate;
+ c->u.shared_bus_user.rate = new_rate;
ret = tegra_clk_shared_bus_update(c->parent);
spin_unlock_irqrestore(&c->parent->spinlock, flags);
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index afe209e..74be05f 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -7,6 +7,7 @@
#include <linux/shm.h>
#include <linux/sched.h>
#include <linux/io.h>
+#include <linux/personality.h>
#include <linux/random.h>
#include <asm/cputype.h>
#include <asm/system.h>
@@ -82,7 +83,8 @@
mm->cached_hole_size = 0;
}
/* 8 bits of randomness in 20 address space bits */
- if (current->flags & PF_RANDOMIZE)
+ if ((current->flags & PF_RANDOMIZE) &&
+ !(current->personality & ADDR_NO_RANDOMIZE))
addr += (get_random_int() % (1 << 8)) << PAGE_SHIFT;
full_search:
diff --git a/arch/arm/mm/proc-arm920.S b/arch/arm/mm/proc-arm920.S
index b46eb21..bf8a1d1 100644
--- a/arch/arm/mm/proc-arm920.S
+++ b/arch/arm/mm/proc-arm920.S
@@ -390,7 +390,7 @@
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm920_suspend_size
.equ cpu_arm920_suspend_size, 4 * 3
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_arm920_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID
diff --git a/arch/arm/mm/proc-arm926.S b/arch/arm/mm/proc-arm926.S
index 6a4bdb2..0ed85d9 100644
--- a/arch/arm/mm/proc-arm926.S
+++ b/arch/arm/mm/proc-arm926.S
@@ -404,7 +404,7 @@
/* Suspend/resume support: taken from arch/arm/plat-s3c24xx/sleep.S */
.globl cpu_arm926_suspend_size
.equ cpu_arm926_suspend_size, 4 * 3
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_arm926_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c13, c0, 0 @ PID
diff --git a/arch/arm/mm/proc-sa1100.S b/arch/arm/mm/proc-sa1100.S
index 74483d1..184a9c9 100644
--- a/arch/arm/mm/proc-sa1100.S
+++ b/arch/arm/mm/proc-sa1100.S
@@ -171,7 +171,7 @@
.globl cpu_sa1100_suspend_size
.equ cpu_sa1100_suspend_size, 4*4
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_sa1100_do_suspend)
stmfd sp!, {r4 - r7, lr}
mrc p15, 0, r4, c3, c0, 0 @ domain ID
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
index bfa0c9f..7c99cb4 100644
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -124,7 +124,7 @@
/* Suspend/resume support: taken from arch/arm/mach-s3c64xx/sleep.S */
.globl cpu_v6_suspend_size
.equ cpu_v6_suspend_size, 4 * 8
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_v6_do_suspend)
stmfd sp!, {r4 - r11, lr}
mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index c35618e..babfba09 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -211,7 +211,7 @@
/* Suspend/resume support: derived from arch/arm/mach-s5pv210/sleep.S */
.globl cpu_v7_suspend_size
.equ cpu_v7_suspend_size, 4 * 8
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_v7_do_suspend)
stmfd sp!, {r4 - r11, lr}
mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID
diff --git a/arch/arm/mm/proc-xsc3.S b/arch/arm/mm/proc-xsc3.S
index 63d8b20..5962136 100644
--- a/arch/arm/mm/proc-xsc3.S
+++ b/arch/arm/mm/proc-xsc3.S
@@ -417,7 +417,7 @@
.globl cpu_xsc3_suspend_size
.equ cpu_xsc3_suspend_size, 4 * 8
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_xsc3_do_suspend)
stmfd sp!, {r4 - r10, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
diff --git a/arch/arm/mm/proc-xscale.S b/arch/arm/mm/proc-xscale.S
index 086038c..ce233bc 100644
--- a/arch/arm/mm/proc-xscale.S
+++ b/arch/arm/mm/proc-xscale.S
@@ -518,7 +518,7 @@
.globl cpu_xscale_suspend_size
.equ cpu_xscale_suspend_size, 4 * 7
-#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
ENTRY(cpu_xscale_do_suspend)
stmfd sp!, {r4 - r10, lr}
mrc p14, 0, r4, c6, c0, 0 @ clock configuration, for turbo mode
diff --git a/arch/arm/plat-s5p/pm.c b/arch/arm/plat-s5p/pm.c
index d592b63..d15dc47 100644
--- a/arch/arm/plat-s5p/pm.c
+++ b/arch/arm/plat-s5p/pm.c
@@ -19,17 +19,6 @@
#define PFX "s5p pm: "
-/* s3c_pm_check_resume_pin
- *
- * check to see if the pin is configured correctly for sleep mode, and
- * make any necessary adjustments if it is not
-*/
-
-static void s3c_pm_check_resume_pin(unsigned int pin, unsigned int irqoffs)
-{
- /* nothing here yet */
-}
-
/* s3c_pm_configure_extint
*
* configure all external interrupt pins
diff --git a/arch/arm/plat-samsung/pm-check.c b/arch/arm/plat-samsung/pm-check.c
index e4baf76..6b733fa 100644
--- a/arch/arm/plat-samsung/pm-check.c
+++ b/arch/arm/plat-samsung/pm-check.c
@@ -164,7 +164,6 @@
*/
static u32 *s3c_pm_runcheck(struct resource *res, u32 *val)
{
- void *save_at = phys_to_virt(s3c_sleep_save_phys);
unsigned long addr;
unsigned long left;
void *stkpage;
@@ -192,11 +191,6 @@
goto skip_check;
}
- if (in_region(ptr, left, save_at, 32*4 )) {
- S3C_PMDBG("skipping %08lx, has save block in\n", addr);
- goto skip_check;
- }
-
/* calculate and check the checksum */
calc = crc32_le(~0, ptr, left);
diff --git a/arch/arm/plat-samsung/pm.c b/arch/arm/plat-samsung/pm.c
index d5b58d3..5c0a440 100644
--- a/arch/arm/plat-samsung/pm.c
+++ b/arch/arm/plat-samsung/pm.c
@@ -214,8 +214,9 @@
*
* print any IRQs asserted at resume time (ie, we woke from)
*/
-static void s3c_pm_show_resume_irqs(int start, unsigned long which,
- unsigned long mask)
+static void __maybe_unused s3c_pm_show_resume_irqs(int start,
+ unsigned long which,
+ unsigned long mask)
{
int i;
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index bbf3da0..f746950 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -78,6 +78,14 @@
put_cpu();
}
+static void vfp_thread_copy(struct thread_info *thread)
+{
+ struct thread_info *parent = current_thread_info();
+
+ vfp_sync_hwstate(parent);
+ thread->vfpstate = parent->vfpstate;
+}
+
/*
* When this function is called with the following 'cmd's, the following
* is true while this function is being run:
@@ -104,12 +112,17 @@
static int vfp_notifier(struct notifier_block *self, unsigned long cmd, void *v)
{
struct thread_info *thread = v;
+ u32 fpexc;
+#ifdef CONFIG_SMP
+ unsigned int cpu;
+#endif
- if (likely(cmd == THREAD_NOTIFY_SWITCH)) {
- u32 fpexc = fmrx(FPEXC);
+ switch (cmd) {
+ case THREAD_NOTIFY_SWITCH:
+ fpexc = fmrx(FPEXC);
#ifdef CONFIG_SMP
- unsigned int cpu = thread->cpu;
+ cpu = thread->cpu;
/*
* On SMP, if VFP is enabled, save the old state in
@@ -134,13 +147,20 @@
* old state.
*/
fmxr(FPEXC, fpexc & ~FPEXC_EN);
- return NOTIFY_DONE;
- }
+ break;
- if (cmd == THREAD_NOTIFY_FLUSH)
+ case THREAD_NOTIFY_FLUSH:
vfp_thread_flush(thread);
- else
+ break;
+
+ case THREAD_NOTIFY_EXIT:
vfp_thread_exit(thread);
+ break;
+
+ case THREAD_NOTIFY_COPY:
+ vfp_thread_copy(thread);
+ break;
+ }
return NOTIFY_DONE;
}
diff --git a/arch/blackfin/include/asm/system.h b/arch/blackfin/include/asm/system.h
index 19e2c7c..44bd0cc 100644
--- a/arch/blackfin/include/asm/system.h
+++ b/arch/blackfin/include/asm/system.h
@@ -19,11 +19,11 @@
* Force strict CPU ordering.
*/
#define nop() __asm__ __volatile__ ("nop;\n\t" : : )
-#define mb() __asm__ __volatile__ ("" : : : "memory")
-#define rmb() __asm__ __volatile__ ("" : : : "memory")
-#define wmb() __asm__ __volatile__ ("" : : : "memory")
-#define set_mb(var, value) do { (void) xchg(&var, value); } while (0)
-#define read_barrier_depends() do { } while(0)
+#define smp_mb() mb()
+#define smp_rmb() rmb()
+#define smp_wmb() wmb()
+#define set_mb(var, value) do { var = value; mb(); } while (0)
+#define smp_read_barrier_depends() read_barrier_depends()
#ifdef CONFIG_SMP
asmlinkage unsigned long __raw_xchg_1_asm(volatile void *ptr, unsigned long value);
@@ -37,16 +37,16 @@
unsigned long new, unsigned long old);
#ifdef __ARCH_SYNC_CORE_DCACHE
-# define smp_mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
-# define smp_rmb() do { barrier(); smp_check_barrier(); } while (0)
-# define smp_wmb() do { barrier(); smp_mark_barrier(); } while (0)
-#define smp_read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0)
-
+/* Force Core data cache coherence */
+# define mb() do { barrier(); smp_check_barrier(); smp_mark_barrier(); } while (0)
+# define rmb() do { barrier(); smp_check_barrier(); } while (0)
+# define wmb() do { barrier(); smp_mark_barrier(); } while (0)
+# define read_barrier_depends() do { barrier(); smp_check_barrier(); } while (0)
#else
-# define smp_mb() barrier()
-# define smp_rmb() barrier()
-# define smp_wmb() barrier()
-#define smp_read_barrier_depends() barrier()
+# define mb() barrier()
+# define rmb() barrier()
+# define wmb() barrier()
+# define read_barrier_depends() do { } while (0)
#endif
static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
@@ -99,10 +99,10 @@
#else /* !CONFIG_SMP */
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#define smp_read_barrier_depends() do { } while(0)
+#define mb() barrier()
+#define rmb() barrier()
+#define wmb() barrier()
+#define read_barrier_depends() do { } while (0)
struct __xchg_dummy {
unsigned long a[100];
diff --git a/arch/blackfin/kernel/gptimers.c b/arch/blackfin/kernel/gptimers.c
index cdbe075..8b81dc0 100644
--- a/arch/blackfin/kernel/gptimers.c
+++ b/arch/blackfin/kernel/gptimers.c
@@ -268,7 +268,7 @@
_disable_gptimers(mask);
for (i = 0; i < MAX_BLACKFIN_GPTIMERS; ++i)
if (mask & (1 << i))
- group_regs[BFIN_TIMER_OCTET(i)]->status |= trun_mask[i];
+ group_regs[BFIN_TIMER_OCTET(i)]->status = trun_mask[i];
SSYNC();
}
EXPORT_SYMBOL(disable_gptimers);
diff --git a/arch/blackfin/kernel/time-ts.c b/arch/blackfin/kernel/time-ts.c
index 8c9a43d..cdb4beb 100644
--- a/arch/blackfin/kernel/time-ts.c
+++ b/arch/blackfin/kernel/time-ts.c
@@ -206,8 +206,14 @@
{
struct clock_event_device *evt = dev_id;
smp_mb();
- evt->event_handler(evt);
+ /*
+ * We want to ACK before we handle so that we can handle smaller timer
+ * intervals. This way if the timer expires again while we're handling
+ * things, we're more likely to see that 2nd int rather than swallowing
+ * it by ACKing the int at the end of this handler.
+ */
bfin_gptmr0_ack();
+ evt->event_handler(evt);
return IRQ_HANDLED;
}
diff --git a/arch/blackfin/mach-common/smp.c b/arch/blackfin/mach-common/smp.c
index 326bb86..1fbd94c 100644
--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -109,10 +109,23 @@
struct blackfin_flush_data *fdata = info;
/* Invalidate the memory holding the bounds of the flushed region. */
- invalidate_dcache_range((unsigned long)fdata,
- (unsigned long)fdata + sizeof(*fdata));
+ blackfin_dcache_invalidate_range((unsigned long)fdata,
+ (unsigned long)fdata + sizeof(*fdata));
- flush_icache_range(fdata->start, fdata->end);
+ /* Make sure all write buffers in the data side of the core
+ * are flushed before trying to invalidate the icache. This
+ * needs to be after the data flush and before the icache
+ * flush so that the SSYNC does the right thing in preventing
+ * the instruction prefetcher from hitting things in cached
+ * memory at the wrong time -- it runs much further ahead than
+ * the pipeline.
+ */
+ SSYNC();
+
+	/* ipi_flush_icache is invoked by the generic flush_icache_range,
+ * so call blackfin arch icache flush directly here.
+ */
+ blackfin_icache_flush_range(fdata->start, fdata->end);
}
static void ipi_call_function(unsigned int cpu, struct ipi_message *msg)
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 851b3bf..eccdefe 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -6,7 +6,6 @@
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
- select USB_ARCH_HAS_EHCI
select ARCH_WANT_OPTIONAL_GPIOLIB
select HAVE_OPROFILE
select HAVE_ARCH_KGDB
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b6ff882..8f4d50b 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -209,7 +209,7 @@
config ARCH_SUSPEND_POSSIBLE
def_bool y
depends on ADB_PMU || PPC_EFIKA || PPC_LITE5200 || PPC_83xx || \
- PPC_85xx || PPC_86xx || PPC_PSERIES || 44x || 40x
+ (PPC_85xx && !SMP) || PPC_86xx || PPC_PSERIES || 44x || 40x
config PPC_DCR_NATIVE
bool
diff --git a/arch/powerpc/include/asm/cputable.h b/arch/powerpc/include/asm/cputable.h
index be3cdf9..1833d1a 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -382,10 +382,12 @@
#define CPU_FTRS_E500_2 (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
CPU_FTR_SPE_COMP | CPU_FTR_MAYBE_CAN_NAP | \
CPU_FTR_NODSISRALIGN | CPU_FTR_NOEXECUTE)
-#define CPU_FTRS_E500MC (CPU_FTR_MAYBE_CAN_DOZE | CPU_FTR_USE_TB | \
- CPU_FTR_MAYBE_CAN_NAP | CPU_FTR_NODSISRALIGN | \
+#define CPU_FTRS_E500MC (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
CPU_FTR_DBELL)
+#define CPU_FTRS_E5500 (CPU_FTR_USE_TB | CPU_FTR_NODSISRALIGN | \
+ CPU_FTR_L2CSR | CPU_FTR_LWSYNC | CPU_FTR_NOEXECUTE | \
+ CPU_FTR_DBELL | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD)
#define CPU_FTRS_GENERIC_32 (CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
/* 64-bit CPUs */
@@ -435,11 +437,15 @@
#define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2)
#ifdef __powerpc64__
+#ifdef CONFIG_PPC_BOOK3E
+#define CPU_FTRS_POSSIBLE (CPU_FTRS_E5500)
+#else
#define CPU_FTRS_POSSIBLE \
(CPU_FTRS_POWER3 | CPU_FTRS_RS64 | CPU_FTRS_POWER4 | \
CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | CPU_FTRS_POWER6 | \
CPU_FTRS_POWER7 | CPU_FTRS_CELL | CPU_FTRS_PA6T | \
CPU_FTR_1T_SEGMENT | CPU_FTR_VSX)
+#endif
#else
enum {
CPU_FTRS_POSSIBLE =
@@ -473,16 +479,21 @@
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 | CPU_FTRS_E500_2 | CPU_FTRS_E500MC |
+ CPU_FTRS_E5500 |
#endif
0,
};
#endif /* __powerpc64__ */
#ifdef __powerpc64__
+#ifdef CONFIG_PPC_BOOK3E
+#define CPU_FTRS_ALWAYS (CPU_FTRS_E5500)
+#else
#define CPU_FTRS_ALWAYS \
(CPU_FTRS_POWER3 & CPU_FTRS_RS64 & CPU_FTRS_POWER4 & \
CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & CPU_FTRS_POWER6 & \
CPU_FTRS_POWER7 & CPU_FTRS_CELL & CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE)
+#endif
#else
enum {
CPU_FTRS_ALWAYS =
@@ -513,6 +524,7 @@
#endif
#ifdef CONFIG_E500
CPU_FTRS_E500 & CPU_FTRS_E500_2 & CPU_FTRS_E500MC &
+ CPU_FTRS_E5500 &
#endif
CPU_FTRS_POSSIBLE,
};
diff --git a/arch/powerpc/include/asm/pte-common.h b/arch/powerpc/include/asm/pte-common.h
index 811f04a..8d1569c 100644
--- a/arch/powerpc/include/asm/pte-common.h
+++ b/arch/powerpc/include/asm/pte-common.h
@@ -162,7 +162,7 @@
* on platforms where such control is possible.
*/
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
- defined(CONFIG_KPROBES)
+ defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index c9b68d0..b9602ee 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -1973,7 +1973,7 @@
.pvr_mask = 0xffff0000,
.pvr_value = 0x80240000,
.cpu_name = "e5500",
- .cpu_features = CPU_FTRS_E500MC,
+ .cpu_features = CPU_FTRS_E5500,
.cpu_user_features = COMMON_USER_BOOKE,
.mmu_features = MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS |
MMU_FTR_USE_TLBILX,
diff --git a/arch/powerpc/kernel/crash.c b/arch/powerpc/kernel/crash.c
index 3d3d416..5b5e1f0 100644
--- a/arch/powerpc/kernel/crash.c
+++ b/arch/powerpc/kernel/crash.c
@@ -163,7 +163,7 @@
}
/* wait for all the CPUs to hit real mode but timeout if they don't come in */
-#if defined(CONFIG_PPC_STD_MMU_64) && defined(CONFIG_SMP)
+#ifdef CONFIG_PPC_STD_MMU_64
static void crash_kexec_wait_realmode(int cpu)
{
unsigned int msecs;
@@ -188,9 +188,7 @@
}
mb();
}
-#else
-static inline void crash_kexec_wait_realmode(int cpu) {}
-#endif
+#endif /* CONFIG_PPC_STD_MMU_64 */
/*
* This function will be called by secondary cpus or by kexec cpu
@@ -235,7 +233,9 @@
crash_ipi_callback(regs);
}
-#else
+#else /* ! CONFIG_SMP */
+static inline void crash_kexec_wait_realmode(int cpu) {}
+
static void crash_kexec_prepare_cpus(int cpu)
{
/*
@@ -255,7 +255,7 @@
{
cpus_in_sr = CPU_MASK_NONE;
}
-#endif
+#endif /* CONFIG_SMP */
/*
* Register a function to be called on shutdown. Only use this if you
diff --git a/arch/powerpc/kernel/legacy_serial.c b/arch/powerpc/kernel/legacy_serial.c
index c834757..2b97b80 100644
--- a/arch/powerpc/kernel/legacy_serial.c
+++ b/arch/powerpc/kernel/legacy_serial.c
@@ -330,9 +330,11 @@
if (!parent)
continue;
if (of_match_node(legacy_serial_parents, parent) != NULL) {
- index = add_legacy_soc_port(np, np);
- if (index >= 0 && np == stdout)
- legacy_serial_console = index;
+ if (of_device_is_available(np)) {
+ index = add_legacy_soc_port(np, np);
+ if (index >= 0 && np == stdout)
+ legacy_serial_console = index;
+ }
}
of_node_put(parent);
}
diff --git a/arch/powerpc/kernel/perf_event.c b/arch/powerpc/kernel/perf_event.c
index c4063b7..822f630 100644
--- a/arch/powerpc/kernel/perf_event.c
+++ b/arch/powerpc/kernel/perf_event.c
@@ -398,6 +398,25 @@
return 0;
}
+static u64 check_and_compute_delta(u64 prev, u64 val)
+{
+ u64 delta = (val - prev) & 0xfffffffful;
+
+ /*
+	 * POWER7 can roll back counter values; if the new value is smaller
+	 * than the previous value, it will cause the delta and the counter to
+	 * have bogus values unless we rolled a counter over. If a counter is
+	 * rolled back, it will be smaller, but within 256, which is the maximum
+	 * number of events to roll back at once. If we detect a rollback,
+	 * return 0. This can lead to a small lack of precision in the
+ * counters.
+ */
+ if (prev > val && (prev - val) < 256)
+ delta = 0;
+
+ return delta;
+}
+
static void power_pmu_read(struct perf_event *event)
{
s64 val, delta, prev;
@@ -416,10 +435,11 @@
prev = local64_read(&event->hw.prev_count);
barrier();
val = read_pmc(event->hw.idx);
+ delta = check_and_compute_delta(prev, val);
+ if (!delta)
+ return;
} while (local64_cmpxchg(&event->hw.prev_count, prev, val) != prev);
- /* The counters are only 32 bits wide */
- delta = (val - prev) & 0xfffffffful;
local64_add(delta, &event->count);
local64_sub(delta, &event->hw.period_left);
}
@@ -449,8 +469,9 @@
val = (event->hw.idx == 5) ? pmc5 : pmc6;
prev = local64_read(&event->hw.prev_count);
event->hw.idx = 0;
- delta = (val - prev) & 0xfffffffful;
- local64_add(delta, &event->count);
+ delta = check_and_compute_delta(prev, val);
+ if (delta)
+ local64_add(delta, &event->count);
}
}
@@ -458,14 +479,16 @@
unsigned long pmc5, unsigned long pmc6)
{
struct perf_event *event;
- u64 val;
+ u64 val, prev;
int i;
for (i = 0; i < cpuhw->n_limited; ++i) {
event = cpuhw->limited_counter[i];
event->hw.idx = cpuhw->limited_hwidx[i];
val = (event->hw.idx == 5) ? pmc5 : pmc6;
- local64_set(&event->hw.prev_count, val);
+ prev = local64_read(&event->hw.prev_count);
+ if (check_and_compute_delta(prev, val))
+ local64_set(&event->hw.prev_count, val);
perf_event_update_userpage(event);
}
}
@@ -1197,7 +1220,7 @@
/* we don't have to worry about interrupts here */
prev = local64_read(&event->hw.prev_count);
- delta = (val - prev) & 0xfffffffful;
+ delta = check_and_compute_delta(prev, val);
local64_add(delta, &event->count);
/*
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 375480c..f33acfd 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -229,6 +229,9 @@
u64 stolen = 0;
u64 dtb;
+ if (!dtl)
+ return 0;
+
if (i == vpa->dtl_idx)
return 0;
while (i < vpa->dtl_idx) {
diff --git a/arch/powerpc/platforms/powermac/smp.c b/arch/powerpc/platforms/powermac/smp.c
index a830c5e..bc5f0dc 100644
--- a/arch/powerpc/platforms/powermac/smp.c
+++ b/arch/powerpc/platforms/powermac/smp.c
@@ -842,6 +842,7 @@
mpic_setup_this_cpu();
}
+#ifdef CONFIG_PPC64
#ifdef CONFIG_HOTPLUG_CPU
static int smp_core99_cpu_notify(struct notifier_block *self,
unsigned long action, void *hcpu)
@@ -879,7 +880,6 @@
static void __init smp_core99_bringup_done(void)
{
-#ifdef CONFIG_PPC64
extern void g5_phy_disable_cpu1(void);
/* Close i2c bus if it was used for tb sync */
@@ -894,14 +894,14 @@
set_cpu_present(1, false);
g5_phy_disable_cpu1();
}
-#endif /* CONFIG_PPC64 */
-
#ifdef CONFIG_HOTPLUG_CPU
register_cpu_notifier(&smp_core99_cpu_nb);
#endif
+
if (ppc_md.progress)
ppc_md.progress("smp_core99_bringup_done", 0x349);
}
+#endif /* CONFIG_PPC64 */
#ifdef CONFIG_HOTPLUG_CPU
@@ -975,7 +975,9 @@
struct smp_ops_t core99_smp_ops = {
.message_pass = smp_mpic_message_pass,
.probe = smp_core99_probe,
+#ifdef CONFIG_PPC64
.bringup_done = smp_core99_bringup_done,
+#endif
.kick_cpu = smp_core99_kick_cpu,
.setup_cpu = smp_core99_setup_cpu,
.give_timebase = smp_core99_give_timebase,
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 0007241..6c42cfd 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -287,14 +287,22 @@
int cpu, ret;
struct paca_struct *pp;
struct dtl_entry *dtl;
+ struct kmem_cache *dtl_cache;
if (!firmware_has_feature(FW_FEATURE_SPLPAR))
return 0;
+ dtl_cache = kmem_cache_create("dtl", DISPATCH_LOG_BYTES,
+ DISPATCH_LOG_BYTES, 0, NULL);
+ if (!dtl_cache) {
+ pr_warn("Failed to create dispatch trace log buffer cache\n");
+ pr_warn("Stolen time statistics will be unreliable\n");
+ return 0;
+ }
+
for_each_possible_cpu(cpu) {
pp = &paca[cpu];
- dtl = kmalloc_node(DISPATCH_LOG_BYTES, GFP_KERNEL,
- cpu_to_node(cpu));
+ dtl = kmem_cache_alloc(dtl_cache, GFP_KERNEL);
if (!dtl) {
pr_warn("Failed to allocate dispatch trace log for cpu %d\n",
cpu);
diff --git a/arch/powerpc/sysdev/fsl_pci.c b/arch/powerpc/sysdev/fsl_pci.c
index f8f7f28..68ca929 100644
--- a/arch/powerpc/sysdev/fsl_pci.c
+++ b/arch/powerpc/sysdev/fsl_pci.c
@@ -324,6 +324,11 @@
struct resource rsrc;
const int *bus_range;
+ if (!of_device_is_available(dev)) {
+ pr_warning("%s: disabled\n", dev->full_name);
+ return -ENODEV;
+ }
+
pr_debug("Adding PCI host bridge %s\n", dev->full_name);
/* Fetch host bridge registers address */
diff --git a/arch/powerpc/sysdev/fsl_rio.c b/arch/powerpc/sysdev/fsl_rio.c
index 14232d5..4979853 100644
--- a/arch/powerpc/sysdev/fsl_rio.c
+++ b/arch/powerpc/sysdev/fsl_rio.c
@@ -1457,7 +1457,6 @@
port->ops = ops;
port->priv = priv;
port->phys_efptr = 0x100;
- rio_register_mport(port);
priv->regs_win = ioremap(regs.start, regs.end - regs.start + 1);
rio_regs_win = priv->regs_win;
@@ -1504,6 +1503,9 @@
dev_info(&dev->dev, "RapidIO Common Transport System size: %d\n",
port->sys_size ? 65536 : 256);
+ if (rio_register_mport(port))
+ goto err;
+
if (port->host_deviceid >= 0)
out_be32(priv->regs_win + RIO_GCCSR, RIO_PORT_GEN_HOST |
RIO_PORT_GEN_MASTER | RIO_PORT_GEN_DISCOVERED);
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index e560d10..63a027c 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -25,6 +25,10 @@
select HAVE_DMA_ATTRS
select HAVE_DMA_API_DEBUG
select HAVE_ARCH_JUMP_LABEL
+ select HAVE_GENERIC_HARDIRQS
+ select GENERIC_HARDIRQS_NO_DEPRECATED
+ select GENERIC_IRQ_SHOW
+ select USE_GENERIC_SMP_HELPERS if SMP
config SPARC32
def_bool !64BIT
@@ -43,15 +47,12 @@
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_SYSCALL_TRACEPOINTS
- select USE_GENERIC_SMP_HELPERS if SMP
select RTC_DRV_CMOS
select RTC_DRV_BQ4802
select RTC_DRV_SUN4V
select RTC_DRV_STARFIRE
select HAVE_PERF_EVENTS
select PERF_USE_VMALLOC
- select HAVE_GENERIC_HARDIRQS
- select GENERIC_IRQ_SHOW
select IRQ_PREFLOW_FASTEOI
config ARCH_DEFCONFIG
diff --git a/arch/sparc/include/asm/cpudata_32.h b/arch/sparc/include/asm/cpudata_32.h
index 31d48a0..a4c5a93 100644
--- a/arch/sparc/include/asm/cpudata_32.h
+++ b/arch/sparc/include/asm/cpudata_32.h
@@ -16,6 +16,10 @@
unsigned long clock_tick;
unsigned int multiplier;
unsigned int counter;
+#ifdef CONFIG_SMP
+ unsigned int irq_resched_count;
+ unsigned int irq_call_count;
+#endif
int prom_node;
int mid;
int next;
@@ -23,5 +27,6 @@
DECLARE_PER_CPU(cpuinfo_sparc, __cpu_data);
#define cpu_data(__cpu) per_cpu(__cpu_data, (__cpu))
+#define local_cpu_data() __get_cpu_var(__cpu_data)
#endif /* _SPARC_CPUDATA_H */
diff --git a/arch/sparc/include/asm/floppy_32.h b/arch/sparc/include/asm/floppy_32.h
index 86666f7..482c79e 100644
--- a/arch/sparc/include/asm/floppy_32.h
+++ b/arch/sparc/include/asm/floppy_32.h
@@ -281,28 +281,27 @@
pdma_areasize = pdma_size;
}
-/* Our low-level entry point in arch/sparc/kernel/entry.S */
-extern int sparc_floppy_request_irq(int irq, unsigned long flags,
- irq_handler_t irq_handler);
+extern int sparc_floppy_request_irq(unsigned int irq,
+ irq_handler_t irq_handler);
static int sun_fd_request_irq(void)
{
static int once = 0;
- int error;
- if(!once) {
+ if (!once) {
once = 1;
- error = sparc_floppy_request_irq(FLOPPY_IRQ,
- IRQF_DISABLED,
- floppy_interrupt);
- return ((error == 0) ? 0 : -1);
- } else return 0;
+ return sparc_floppy_request_irq(FLOPPY_IRQ, floppy_interrupt);
+ } else {
+ return 0;
+ }
}
static struct linux_prom_registers fd_regs[2];
static int sun_floppy_init(void)
{
+ struct platform_device *op;
+ struct device_node *dp;
char state[128];
phandle tnode, fd_node;
int num_regs;
@@ -310,7 +309,6 @@
use_virtual_dma = 1;
- FLOPPY_IRQ = 11;
/* Forget it if we aren't on a machine that could possibly
* ever have a floppy drive.
*/
@@ -349,6 +347,26 @@
sun_fdc = (struct sun_flpy_controller *)
of_ioremap(&r, 0, fd_regs[0].reg_size, "floppy");
+ /* Look up irq in platform_device.
+ * We try "SUNW,fdtwo" and "fd"
+ */
+ for_each_node_by_name(dp, "SUNW,fdtwo") {
+ op = of_find_device_by_node(dp);
+ if (op)
+ break;
+ }
+ if (!op) {
+ for_each_node_by_name(dp, "fd") {
+ op = of_find_device_by_node(dp);
+ if (op)
+ break;
+ }
+ }
+ if (!op)
+ goto no_sun_fdc;
+
+ FLOPPY_IRQ = op->archdata.irqs[0];
+
/* Last minute sanity check... */
if(sun_fdc->status_82072 == 0xff) {
sun_fdc = NULL;
diff --git a/arch/sparc/include/asm/io.h b/arch/sparc/include/asm/io.h
index a34b299..f6902cf 100644
--- a/arch/sparc/include/asm/io.h
+++ b/arch/sparc/include/asm/io.h
@@ -5,4 +5,17 @@
#else
#include <asm/io_32.h>
#endif
+
+/*
+ * Defines used for both SPARC32 and SPARC64
+ */
+
+/* Big endian versions of memory read/write routines */
+#define readb_be(__addr) __raw_readb(__addr)
+#define readw_be(__addr) __raw_readw(__addr)
+#define readl_be(__addr) __raw_readl(__addr)
+#define writeb_be(__b, __addr) __raw_writeb(__b, __addr)
+#define writel_be(__w, __addr) __raw_writel(__w, __addr)
+#define writew_be(__l, __addr) __raw_writew(__l, __addr)
+
#endif
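These wrappers are thin aliases for the __raw_* accessors, which on sparc already perform native big-endian accesses; the *_be names simply document the byte order at the call site. Illustrative use against a hypothetical ioremap()ed register block:

	/* 'regs' is a hypothetical device register window obtained via ioremap() */
	u32 id = readl_be(regs + 0x00);		/* 32-bit big-endian read */
	writel_be(id | 0x1, regs + 0x04);	/* 32-bit big-endian write */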
diff --git a/arch/sparc/include/asm/irq_32.h b/arch/sparc/include/asm/irq_32.h
index eced3e3..2ae3aca 100644
--- a/arch/sparc/include/asm/irq_32.h
+++ b/arch/sparc/include/asm/irq_32.h
@@ -6,7 +6,11 @@
#ifndef _SPARC_IRQ_H
#define _SPARC_IRQ_H
-#define NR_IRQS 16
+/* Allocated number of logical irq numbers.
+ * sun4d boxes (ss2000e) should be OK with ~32.
+ * Be on the safe side and make room for 64.
+ */
+#define NR_IRQS 64
#include <linux/interrupt.h>
diff --git a/arch/sparc/include/asm/leon.h b/arch/sparc/include/asm/leon.h
index c04f96f..6bdaf1e 100644
--- a/arch/sparc/include/asm/leon.h
+++ b/arch/sparc/include/asm/leon.h
@@ -52,29 +52,6 @@
#define LEON_DIAGF_VALID 0x2000
#define LEON_DIAGF_VALID_SHIFT 13
-/*
- * Interrupt Sources
- *
- * The interrupt source numbers directly map to the trap type and to
- * the bits used in the Interrupt Clear, Interrupt Force, Interrupt Mask,
- * and the Interrupt Pending Registers.
- */
-#define LEON_INTERRUPT_CORRECTABLE_MEMORY_ERROR 1
-#define LEON_INTERRUPT_UART_1_RX_TX 2
-#define LEON_INTERRUPT_UART_0_RX_TX 3
-#define LEON_INTERRUPT_EXTERNAL_0 4
-#define LEON_INTERRUPT_EXTERNAL_1 5
-#define LEON_INTERRUPT_EXTERNAL_2 6
-#define LEON_INTERRUPT_EXTERNAL_3 7
-#define LEON_INTERRUPT_TIMER1 8
-#define LEON_INTERRUPT_TIMER2 9
-#define LEON_INTERRUPT_EMPTY1 10
-#define LEON_INTERRUPT_EMPTY2 11
-#define LEON_INTERRUPT_OPEN_ETH 12
-#define LEON_INTERRUPT_EMPTY4 13
-#define LEON_INTERRUPT_EMPTY5 14
-#define LEON_INTERRUPT_EMPTY6 15
-
/* irq masks */
#define LEON_HARD_INT(x) (1 << (x)) /* irq 0-15 */
#define LEON_IRQMASK_R 0x0000fffe /* bit 15- 1 of lregs.irqmask */
@@ -183,7 +160,6 @@
/* macro access for leon_readnobuffer_reg() */
#define LEON_BYPASSCACHE_LOAD_VA(x) leon_readnobuffer_reg((unsigned long)(x))
-extern void sparc_leon_eirq_register(int eirq);
extern void leon_init(void);
extern void leon_switch_mm(void);
extern void leon_init_IRQ(void);
@@ -239,8 +215,8 @@
#endif /*!__ASSEMBLY__*/
#ifdef CONFIG_SMP
-# define LEON3_IRQ_RESCHEDULE 13
-# define LEON3_IRQ_TICKER (leon_percpu_timer_dev[0].irq)
+# define LEON3_IRQ_IPI_DEFAULT 13
+# define LEON3_IRQ_TICKER (leon3_ticker_irq)
# define LEON3_IRQ_CROSS_CALL 15
#endif
@@ -339,9 +315,9 @@
#include <linux/interrupt.h>
struct device_node;
-extern int sparc_leon_eirq_get(int eirq, int cpu);
-extern irqreturn_t sparc_leon_eirq_isr(int dummy, void *dev_id);
-extern void sparc_leon_eirq_register(int eirq);
+extern unsigned int leon_build_device_irq(unsigned int real_irq,
+ irq_flow_handler_t flow_handler,
+ const char *name, int do_ack);
extern void leon_clear_clock_irq(void);
extern void leon_load_profile_irq(int cpu, unsigned int limit);
extern void leon_init_timers(irq_handler_t counter_fn);
@@ -358,6 +334,7 @@
extern int leon_flush_needed(void);
extern void leon_switch_mm(void);
extern int srmmu_swprobe_trace;
+extern int leon3_ticker_irq;
#ifdef CONFIG_SMP
extern int leon_smp_nrcpus(void);
@@ -366,17 +343,19 @@
extern void leon_boot_cpus(void);
extern int leon_boot_one_cpu(int i);
void leon_init_smp(void);
-extern void cpu_probe(void);
extern void cpu_idle(void);
extern void init_IRQ(void);
extern void cpu_panic(void);
extern int __leon_processor_id(void);
void leon_enable_irq_cpu(unsigned int irq_nr, unsigned int cpu);
+extern irqreturn_t leon_percpu_timer_interrupt(int irq, void *unused);
-extern unsigned int real_irq_entry[], smpleon_ticker[];
+extern unsigned int real_irq_entry[];
+extern unsigned int smpleon_ipi[];
extern unsigned int patchme_maybe_smp_msg[];
extern unsigned int t_nmi[], linux_trap_ipi15_leon[];
extern unsigned int linux_trap_ipi15_sun4m[];
+extern int leon_ipi_irq;
#endif /* CONFIG_SMP */
diff --git a/arch/sparc/include/asm/pcic.h b/arch/sparc/include/asm/pcic.h
index f20ef56..7eb5d78 100644
--- a/arch/sparc/include/asm/pcic.h
+++ b/arch/sparc/include/asm/pcic.h
@@ -29,11 +29,17 @@
int pcic_imdim;
};
-extern int pcic_probe(void);
-/* Erm... MJ redefined pcibios_present() so that it does not work early. */
+#ifdef CONFIG_PCI
extern int pcic_present(void);
+extern int pcic_probe(void);
+extern void pci_time_init(void);
extern void sun4m_pci_init_IRQ(void);
-
+#else
+static inline int pcic_present(void) { return 0; }
+static inline int pcic_probe(void) { return 0; }
+static inline void pci_time_init(void) {}
+static inline void sun4m_pci_init_IRQ(void) {}
+#endif
#endif
/* Size of PCI I/O space which we relocate. */
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 303bd4d..5b31a8e 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -8,6 +8,8 @@
* Copyright (C) 1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
*/
+#include <linux/const.h>
+
#ifndef __ASSEMBLY__
#include <asm-generic/4level-fixup.h>
@@ -456,9 +458,9 @@
#endif /* !(__ASSEMBLY__) */
-#define VMALLOC_START 0xfe600000
+#define VMALLOC_START _AC(0xfe600000,UL)
/* XXX Alter this when I get around to fixing sun4c - Anton */
-#define VMALLOC_END 0xffc00000
+#define VMALLOC_END _AC(0xffc00000,UL)
/* We provide our own get_unmapped_area to cope with VA holes for userland */
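_AC() comes from the linux/const.h include added above: in assembly it expands to the bare constant, while in C it pastes on the given suffix, so one definition serves both languages. Abbreviated from linux/const.h:

	#ifdef __ASSEMBLY__
	#define _AC(X,Y)	X
	#else
	#define __AC(X,Y)	(X##Y)
	#define _AC(X,Y)	__AC(X,Y)
	#endif

With this, VMALLOC_START reads as 0xfe600000 in assembler files and as 0xfe600000UL in C.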
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index f8dddb7..b77128c 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -699,6 +699,9 @@
extern void paging_init(void);
extern unsigned long find_ecache_flush_span(unsigned long size);
+struct seq_file;
+extern void mmu_info(struct seq_file *);
+
/* These do nothing with the way I have things setup. */
#define mmu_lockarea(vaddr, len) (vaddr)
#define mmu_unlockarea(vaddr, len) do { } while(0)
diff --git a/arch/sparc/include/asm/setup.h b/arch/sparc/include/asm/setup.h
index 2643c62..64718ba 100644
--- a/arch/sparc/include/asm/setup.h
+++ b/arch/sparc/include/asm/setup.h
@@ -11,4 +11,16 @@
# define COMMAND_LINE_SIZE 256
#endif
+#ifdef __KERNEL__
+
+#ifdef CONFIG_SPARC32
+/* The CPU that was used for booting
+ * Only sun4d + leon may have boot_cpu_id != 0
+ */
+extern unsigned char boot_cpu_id;
+extern unsigned char boot_cpu_id4;
+#endif
+
+#endif /* __KERNEL__ */
+
#endif /* _SPARC_SETUP_H */
diff --git a/arch/sparc/include/asm/smp_32.h b/arch/sparc/include/asm/smp_32.h
index d82d7f4..093f108 100644
--- a/arch/sparc/include/asm/smp_32.h
+++ b/arch/sparc/include/asm/smp_32.h
@@ -50,42 +50,38 @@
void smp_boot_cpus(void);
void smp_store_cpu_info(int);
+void smp_resched_interrupt(void);
+void smp_call_function_single_interrupt(void);
+void smp_call_function_interrupt(void);
+
struct seq_file;
void smp_bogo(struct seq_file *);
void smp_info(struct seq_file *);
BTFIXUPDEF_CALL(void, smp_cross_call, smpfunc_t, cpumask_t, unsigned long, unsigned long, unsigned long, unsigned long)
BTFIXUPDEF_CALL(int, __hard_smp_processor_id, void)
+BTFIXUPDEF_CALL(void, smp_ipi_resched, int);
+BTFIXUPDEF_CALL(void, smp_ipi_single, int);
+BTFIXUPDEF_CALL(void, smp_ipi_mask_one, int);
BTFIXUPDEF_BLACKBOX(hard_smp_processor_id)
BTFIXUPDEF_BLACKBOX(load_current)
#define smp_cross_call(func,mask,arg1,arg2,arg3,arg4) BTFIXUP_CALL(smp_cross_call)(func,mask,arg1,arg2,arg3,arg4)
-static inline void xc0(smpfunc_t func) { smp_cross_call(func, cpu_online_map, 0, 0, 0, 0); }
+static inline void xc0(smpfunc_t func) { smp_cross_call(func, *cpu_online_mask, 0, 0, 0, 0); }
static inline void xc1(smpfunc_t func, unsigned long arg1)
-{ smp_cross_call(func, cpu_online_map, arg1, 0, 0, 0); }
+{ smp_cross_call(func, *cpu_online_mask, arg1, 0, 0, 0); }
static inline void xc2(smpfunc_t func, unsigned long arg1, unsigned long arg2)
-{ smp_cross_call(func, cpu_online_map, arg1, arg2, 0, 0); }
+{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, 0, 0); }
static inline void xc3(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3)
-{ smp_cross_call(func, cpu_online_map, arg1, arg2, arg3, 0); }
+{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, arg3, 0); }
static inline void xc4(smpfunc_t func, unsigned long arg1, unsigned long arg2,
unsigned long arg3, unsigned long arg4)
-{ smp_cross_call(func, cpu_online_map, arg1, arg2, arg3, arg4); }
+{ smp_cross_call(func, *cpu_online_mask, arg1, arg2, arg3, arg4); }
-static inline int smp_call_function(void (*func)(void *info), void *info, int wait)
-{
- xc1((smpfunc_t)func, (unsigned long)info);
- return 0;
-}
-
-static inline int smp_call_function_single(int cpuid, void (*func) (void *info),
- void *info, int wait)
-{
- smp_cross_call((smpfunc_t)func, cpumask_of_cpu(cpuid),
- (unsigned long) info, 0, 0, 0);
- return 0;
-}
+extern void arch_send_call_function_single_ipi(int cpu);
+extern void arch_send_call_function_ipi_mask(const struct cpumask *mask);
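With smp_call_function() and friends now handled generically, the architecture only supplies the IPI trigger hooks. A sketch of how smp_32.c can back the two externs, assuming the per-platform smp_ipi_* implementations have been registered through the BTFIXUP entries above:

	void arch_send_call_function_single_ipi(int cpu)
	{
		/* trigger the "single" soft-IRQ on the target CPU */
		BTFIXUP_CALL(smp_ipi_single)(cpu);
	}

	void arch_send_call_function_ipi_mask(const struct cpumask *mask)
	{
		int cpu;

		/* trigger the "mask" soft-IRQ on every CPU in the mask */
		for_each_cpu(cpu, mask)
			BTFIXUP_CALL(smp_ipi_mask_one)(cpu);
	}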
static inline int cpu_logical_map(int cpu)
{
@@ -135,6 +131,11 @@
__asm__ __volatile__("lda [%g0] ASI_M_VIKING_TMP1, %0\n\t"
"nop; nop" :
"=&r" (cpuid));
+ - leon
+ __asm__ __volatile__( "rd %asr17, %0\n\t"
+ "srl %0, 0x1c, %0\n\t"
+ "nop\n\t" :
+ "=&r" (cpuid));
See btfixup.h and btfixupprep.c to understand how a blackbox works.
*/
__asm__ __volatile__("sethi %%hi(___b_hard_smp_processor_id), %0\n\t"
diff --git a/arch/sparc/include/asm/smp_64.h b/arch/sparc/include/asm/smp_64.h
index f49e11c..20bca89 100644
--- a/arch/sparc/include/asm/smp_64.h
+++ b/arch/sparc/include/asm/smp_64.h
@@ -49,6 +49,10 @@
extern void smp_fetch_global_regs(void);
+struct seq_file;
+void smp_bogo(struct seq_file *);
+void smp_info(struct seq_file *);
+
#ifdef CONFIG_HOTPLUG_CPU
extern int __cpu_disable(void);
extern void __cpu_die(unsigned int cpu);
diff --git a/arch/sparc/include/asm/system_32.h b/arch/sparc/include/asm/system_32.h
index 890036b..47a7e86 100644
--- a/arch/sparc/include/asm/system_32.h
+++ b/arch/sparc/include/asm/system_32.h
@@ -15,11 +15,6 @@
#include <linux/irqflags.h>
-static inline unsigned int probe_irq_mask(unsigned long val)
-{
- return 0;
-}
-
/*
* Sparc (general) CPU types
*/
diff --git a/arch/sparc/include/asm/system_64.h b/arch/sparc/include/asm/system_64.h
index e3b65d8..3c96d3b 100644
--- a/arch/sparc/include/asm/system_64.h
+++ b/arch/sparc/include/asm/system_64.h
@@ -29,10 +29,6 @@
/* This cannot ever be a sun4c :) That's just history. */
#define ARCH_SUN4C 0
-extern const char *sparc_cpu_type;
-extern const char *sparc_fpu_type;
-extern const char *sparc_pmu_type;
-
extern char reboot_command[];
/* These are here in an effort to more fully work around Spitfire Errata
diff --git a/arch/sparc/include/asm/winmacro.h b/arch/sparc/include/asm/winmacro.h
index 5b0a06d..a9be04b 100644
--- a/arch/sparc/include/asm/winmacro.h
+++ b/arch/sparc/include/asm/winmacro.h
@@ -103,6 +103,7 @@
st %scratch, [%cur_reg + TI_W_SAVED];
#ifdef CONFIG_SMP
+/* Results of LOAD_CURRENT() after BTFIXUP for SUN4M, SUN4D & LEON (comments) */
#define LOAD_CURRENT4M(dest_reg, idreg) \
rd %tbr, %idreg; \
sethi %hi(current_set), %dest_reg; \
@@ -118,6 +119,14 @@
or %dest_reg, %lo(C_LABEL(current_set)), %dest_reg; \
ld [%idreg + %dest_reg], %dest_reg;
+#define LOAD_CURRENT_LEON(dest_reg, idreg) \
+ rd %asr17, %idreg; \
+ sethi %hi(current_set), %dest_reg; \
+ srl %idreg, 0x1c, %idreg; \
+ or %dest_reg, %lo(current_set), %dest_reg; \
+ sll %idreg, 0x2, %idreg; \
+ ld [%idreg + %dest_reg], %dest_reg;
+
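In C terms the macro computes roughly the following (rd_asr17() is a hypothetical helper standing in for the rd %asr17 instruction):

	unsigned int cpuid = rd_asr17() >> 28;	/* CPU index lives in the top 4 bits */
	current = current_set[cpuid];		/* the sll by 2 scales cpuid to a word offset */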
/* Blackbox - take care with this... - check smp4m and smp4d before changing this. */
#define LOAD_CURRENT(dest_reg, idreg) \
sethi %hi(___b_load_current), %idreg; \
diff --git a/arch/sparc/kernel/Makefile b/arch/sparc/kernel/Makefile
index 99aa4db..9cff270 100644
--- a/arch/sparc/kernel/Makefile
+++ b/arch/sparc/kernel/Makefile
@@ -71,10 +71,6 @@
obj-$(CONFIG_SPARC64) += nmi.o
obj-$(CONFIG_SPARC64_SMP) += cpumap.o
-# sparc32 do not use GENERIC_HARDIRQS but uses the generic devres implementation
-obj-$(CONFIG_SPARC32) += devres.o
-devres-y := ../../../kernel/irq/devres.o
-
obj-y += dma.o
obj-$(CONFIG_SPARC32_PCI) += pcic.o
diff --git a/arch/sparc/kernel/apc.c b/arch/sparc/kernel/apc.c
index f679c57..1e34f29 100644
--- a/arch/sparc/kernel/apc.c
+++ b/arch/sparc/kernel/apc.c
@@ -165,7 +165,7 @@
return 0;
}
-static struct of_device_id __initdata apc_match[] = {
+static struct of_device_id apc_match[] = {
{
.name = APC_OBPNAME,
},
diff --git a/arch/sparc/kernel/cpu.c b/arch/sparc/kernel/cpu.c
index 7925c54..138dbbc 100644
--- a/arch/sparc/kernel/cpu.c
+++ b/arch/sparc/kernel/cpu.c
@@ -4,6 +4,7 @@
* Copyright (C) 1996 David S. Miller (davem@caip.rutgers.edu)
*/
+#include <linux/seq_file.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -11,7 +12,9 @@
#include <linux/threads.h>
#include <asm/spitfire.h>
+#include <asm/pgtable.h>
#include <asm/oplib.h>
+#include <asm/setup.h>
#include <asm/page.h>
#include <asm/head.h>
#include <asm/psr.h>
@@ -23,6 +26,9 @@
DEFINE_PER_CPU(cpuinfo_sparc, __cpu_data) = { 0 };
EXPORT_PER_CPU_SYMBOL(__cpu_data);
+int ncpus_probed;
+unsigned int fsr_storage;
+
struct cpu_info {
int psr_vers;
const char *name;
@@ -247,13 +253,12 @@
* machine type value into consideration too. I will fix this.
*/
-const char *sparc_cpu_type;
-const char *sparc_fpu_type;
+static const char *sparc_cpu_type;
+static const char *sparc_fpu_type;
const char *sparc_pmu_type;
-unsigned int fsr_storage;
-static void set_cpu_and_fpu(int psr_impl, int psr_vers, int fpu_vers)
+static void __init set_cpu_and_fpu(int psr_impl, int psr_vers, int fpu_vers)
{
const struct manufacturer_info *manuf;
int i;
@@ -313,7 +318,123 @@
}
#ifdef CONFIG_SPARC32
-void __cpuinit cpu_probe(void)
+static int show_cpuinfo(struct seq_file *m, void *__unused)
+{
+ seq_printf(m,
+ "cpu\t\t: %s\n"
+ "fpu\t\t: %s\n"
+ "promlib\t\t: Version %d Revision %d\n"
+ "prom\t\t: %d.%d\n"
+ "type\t\t: %s\n"
+ "ncpus probed\t: %d\n"
+ "ncpus active\t: %d\n"
+#ifndef CONFIG_SMP
+ "CPU0Bogo\t: %lu.%02lu\n"
+ "CPU0ClkTck\t: %ld\n"
+#endif
+ ,
+ sparc_cpu_type,
+ sparc_fpu_type,
+ romvec->pv_romvers,
+ prom_rev,
+ romvec->pv_printrev >> 16,
+ romvec->pv_printrev & 0xffff,
+ &cputypval[0],
+ ncpus_probed,
+ num_online_cpus()
+#ifndef CONFIG_SMP
+ , cpu_data(0).udelay_val/(500000/HZ),
+ (cpu_data(0).udelay_val/(5000/HZ)) % 100,
+ cpu_data(0).clock_tick
+#endif
+ );
+
+#ifdef CONFIG_SMP
+ smp_bogo(m);
+#endif
+ mmu_info(m);
+#ifdef CONFIG_SMP
+ smp_info(m);
+#endif
+ return 0;
+}
+#endif /* CONFIG_SPARC32 */
+
+#ifdef CONFIG_SPARC64
+unsigned int dcache_parity_tl1_occurred;
+unsigned int icache_parity_tl1_occurred;
+
+
+static int show_cpuinfo(struct seq_file *m, void *__unused)
+{
+ seq_printf(m,
+ "cpu\t\t: %s\n"
+ "fpu\t\t: %s\n"
+ "pmu\t\t: %s\n"
+ "prom\t\t: %s\n"
+ "type\t\t: %s\n"
+ "ncpus probed\t: %d\n"
+ "ncpus active\t: %d\n"
+ "D$ parity tl1\t: %u\n"
+ "I$ parity tl1\t: %u\n"
+#ifndef CONFIG_SMP
+ "Cpu0ClkTck\t: %016lx\n"
+#endif
+ ,
+ sparc_cpu_type,
+ sparc_fpu_type,
+ sparc_pmu_type,
+ prom_version,
+ ((tlb_type == hypervisor) ?
+ "sun4v" :
+ "sun4u"),
+ ncpus_probed,
+ num_online_cpus(),
+ dcache_parity_tl1_occurred,
+ icache_parity_tl1_occurred
+#ifndef CONFIG_SMP
+ , cpu_data(0).clock_tick
+#endif
+ );
+#ifdef CONFIG_SMP
+ smp_bogo(m);
+#endif
+ mmu_info(m);
+#ifdef CONFIG_SMP
+ smp_info(m);
+#endif
+ return 0;
+}
+#endif /* CONFIG_SPARC64 */
+
+static void *c_start(struct seq_file *m, loff_t *pos)
+{
+ /* The pointer we are returning is arbitrary,
+ * it just has to be non-NULL and not IS_ERR
+ * in the success case.
+ */
+ return *pos == 0 ? &c_start : NULL;
+}
+
+static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+{
+ ++*pos;
+ return c_start(m, pos);
+}
+
+static void c_stop(struct seq_file *m, void *v)
+{
+}
+
+const struct seq_operations cpuinfo_op = {
+ .start = c_start,
+ .next = c_next,
+ .stop = c_stop,
+ .show = show_cpuinfo,
+};
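c_start()/c_next() implement the usual single-record seq_file idiom: exactly one iteration, signalled by an arbitrary non-NULL token. The generic proc code consumes cpuinfo_op along the lines of (sketch; see fs/proc/cpuinfo.c):

	static int cpuinfo_open(struct inode *inode, struct file *file)
	{
		return seq_open(file, &cpuinfo_op);
	}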
+
+#ifdef CONFIG_SPARC32
+static int __init cpu_type_probe(void)
{
int psr_impl, psr_vers, fpu_vers;
int psr;
@@ -332,8 +453,12 @@
put_psr(psr);
set_cpu_and_fpu(psr_impl, psr_vers, fpu_vers);
+
+ return 0;
}
-#else
+#endif /* CONFIG_SPARC32 */
+
+#ifdef CONFIG_SPARC64
static void __init sun4v_cpu_probe(void)
{
switch (sun4v_chip_type) {
@@ -374,6 +499,6 @@
}
return 0;
}
+#endif /* CONFIG_SPARC64 */
early_initcall(cpu_type_probe);
-#endif
diff --git a/arch/sparc/kernel/cpumap.c b/arch/sparc/kernel/cpumap.c
index 8de64c8..d91fd78 100644
--- a/arch/sparc/kernel/cpumap.c
+++ b/arch/sparc/kernel/cpumap.c
@@ -202,7 +202,7 @@
new_tree->total_nodes = n;
memcpy(&new_tree->level, tmp_level, sizeof(tmp_level));
- prev_cpu = cpu = first_cpu(cpu_online_map);
+ prev_cpu = cpu = cpumask_first(cpu_online_mask);
/* Initialize all levels in the tree with the first CPU */
for (level = CPUINFO_LVL_PROC; level >= CPUINFO_LVL_ROOT; level--) {
@@ -381,7 +381,7 @@
}
/* Impossible, since num_online_cpus() <= num_possible_cpus() */
- return first_cpu(cpu_online_map);
+ return cpumask_first(cpu_online_mask);
}
static int _map_to_cpu(unsigned int index)
diff --git a/arch/sparc/kernel/devices.c b/arch/sparc/kernel/devices.c
index d2eddd6..113c052 100644
--- a/arch/sparc/kernel/devices.c
+++ b/arch/sparc/kernel/devices.c
@@ -20,7 +20,6 @@
#include <asm/system.h>
#include <asm/cpudata.h>
-extern void cpu_probe(void);
extern void clock_stop_probe(void); /* tadpole.c */
extern void sun4c_probe_memerr_reg(void);
@@ -115,7 +114,7 @@
void __init device_scan(void)
{
- prom_printf("Booting Linux...\n");
+ printk(KERN_NOTICE "Booting Linux...\n");
#ifndef CONFIG_SMP
{
@@ -133,7 +132,6 @@
}
#endif /* !CONFIG_SMP */
- cpu_probe();
{
extern void auxio_probe(void);
extern void auxio_power_probe(void);
diff --git a/arch/sparc/kernel/ds.c b/arch/sparc/kernel/ds.c
index 3add4de..dd1342c 100644
--- a/arch/sparc/kernel/ds.c
+++ b/arch/sparc/kernel/ds.c
@@ -497,7 +497,7 @@
tag->num_records = ncpus;
i = 0;
- for_each_cpu_mask(cpu, *mask) {
+ for_each_cpu(cpu, mask) {
ent[i].cpu = cpu;
ent[i].result = DR_CPU_RES_OK;
ent[i].stat = default_stat;
@@ -534,7 +534,7 @@
int resp_len, ncpus, cpu;
unsigned long flags;
- ncpus = cpus_weight(*mask);
+ ncpus = cpumask_weight(mask);
resp_len = dr_cpu_size_response(ncpus);
resp = kzalloc(resp_len, GFP_KERNEL);
if (!resp)
@@ -547,7 +547,7 @@
mdesc_populate_present_mask(mask);
mdesc_fill_in_cpu_data(mask);
- for_each_cpu_mask(cpu, *mask) {
+ for_each_cpu(cpu, mask) {
int err;
printk(KERN_INFO "ds-%llu: Starting cpu %d...\n",
@@ -593,7 +593,7 @@
int resp_len, ncpus, cpu;
unsigned long flags;
- ncpus = cpus_weight(*mask);
+ ncpus = cpumask_weight(mask);
resp_len = dr_cpu_size_response(ncpus);
resp = kzalloc(resp_len, GFP_KERNEL);
if (!resp)
@@ -603,7 +603,7 @@
resp_len, ncpus, mask,
DR_CPU_STAT_UNCONFIGURED);
- for_each_cpu_mask(cpu, *mask) {
+ for_each_cpu(cpu, mask) {
int err;
printk(KERN_INFO "ds-%llu: Shutting down cpu %d...\n",
@@ -649,13 +649,13 @@
purge_dups(cpu_list, tag->num_records);
- cpus_clear(mask);
+ cpumask_clear(&mask);
for (i = 0; i < tag->num_records; i++) {
if (cpu_list[i] == CPU_SENTINEL)
continue;
if (cpu_list[i] < nr_cpu_ids)
- cpu_set(cpu_list[i], mask);
+ cpumask_set_cpu(cpu_list[i], &mask);
}
if (tag->type == DR_CPU_CONFIGURE)
diff --git a/arch/sparc/kernel/entry.S b/arch/sparc/kernel/entry.S
index 6da784a..8341963 100644
--- a/arch/sparc/kernel/entry.S
+++ b/arch/sparc/kernel/entry.S
@@ -269,19 +269,22 @@
/* Here is where we check for possible SMP IPI passed to us
* on some level other than 15 which is the NMI and only used
* for cross calls. That has a separate entry point below.
+ *
+ * IPIs are sent on Level 12, 13 and 14. See IRQ_IPI_*.
*/
maybe_smp4m_msg:
GET_PROCESSOR4M_ID(o3)
sethi %hi(sun4m_irq_percpu), %l5
sll %o3, 2, %o3
or %l5, %lo(sun4m_irq_percpu), %o5
- sethi %hi(0x40000000), %o2
+ sethi %hi(0x70000000), %o2 ! Check all soft-IRQs
ld [%o5 + %o3], %o1
ld [%o1 + 0x00], %o3 ! sun4m_irq_percpu[cpu]->pending
andcc %o3, %o2, %g0
be,a smp4m_ticker
cmp %l7, 14
- st %o2, [%o1 + 0x04] ! sun4m_irq_percpu[cpu]->clear=0x40000000
+ /* Soft-IRQ IPI */
+ st %o2, [%o1 + 0x04] ! sun4m_irq_percpu[cpu]->clear=0x70000000
WRITE_PAUSE
ld [%o1 + 0x00], %g0 ! sun4m_irq_percpu[cpu]->pending
WRITE_PAUSE
@@ -290,9 +293,27 @@
WRITE_PAUSE
wr %l4, PSR_ET, %psr
WRITE_PAUSE
- call smp_reschedule_irq
+ srl %o3, 28, %o2 ! shift pending bits for simpler checks below
+maybe_smp4m_msg_check_single:
+ andcc %o2, 0x1, %g0
+ beq,a maybe_smp4m_msg_check_mask
+ andcc %o2, 0x2, %g0
+ call smp_call_function_single_interrupt
nop
-
+ andcc %o2, 0x2, %g0
+maybe_smp4m_msg_check_mask:
+ beq,a maybe_smp4m_msg_check_resched
+ andcc %o2, 0x4, %g0
+ call smp_call_function_interrupt
+ nop
+ andcc %o2, 0x4, %g0
+maybe_smp4m_msg_check_resched:
+ /* rescheduling is done in RESTORE_ALL regardless, but incr stats */
+ beq,a maybe_smp4m_msg_out
+ nop
+ call smp_resched_interrupt
+ nop
+maybe_smp4m_msg_out:
RESTORE_ALL
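In C, the dispatch above amounts to testing the three soft-IRQ bits after shifting the pending word down by 28 (a sketch of the logic, not the generated code):

	unsigned int pend = pending >> 28;	/* levels 12/13/14 -> bits 0/1/2 */

	if (pend & 0x1)
		smp_call_function_single_interrupt();
	if (pend & 0x2)
		smp_call_function_interrupt();
	if (pend & 0x4)
		smp_resched_interrupt();	/* stats only; resched happens in RESTORE_ALL */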
.align 4
@@ -401,18 +422,18 @@
1: b,a 1b
#ifdef CONFIG_SPARC_LEON
-
- .globl smpleon_ticker
- /* SMP per-cpu ticker interrupts are handled specially. */
-smpleon_ticker:
+ .globl smpleon_ipi
+ .extern leon_ipi_interrupt
+ /* SMP per-cpu IPI interrupts are handled specially. */
+smpleon_ipi:
SAVE_ALL
or %l0, PSR_PIL, %g2
wr %g2, 0x0, %psr
WRITE_PAUSE
wr %g2, PSR_ET, %psr
WRITE_PAUSE
- call leon_percpu_timer_interrupt
- add %sp, STACKFRAME_SZ, %o0
+ call leonsmp_ipi_interrupt
+ add %sp, STACKFRAME_SZ, %o1 ! pt_regs
wr %l0, PSR_ET, %psr
WRITE_PAUSE
RESTORE_ALL
diff --git a/arch/sparc/kernel/head_32.S b/arch/sparc/kernel/head_32.S
index 5942349..5877857 100644
--- a/arch/sparc/kernel/head_32.S
+++ b/arch/sparc/kernel/head_32.S
@@ -810,30 +810,24 @@
got_prop:
#ifdef CONFIG_SPARC_LEON
/* no cpu-type check is needed, it is a SPARC-LEON */
+
+ sethi %hi(boot_cpu_id), %g2 ! boot-cpu index
+
#ifdef CONFIG_SMP
- ba leon_smp_init
- nop
-
- .global leon_smp_init
-leon_smp_init:
- sethi %hi(boot_cpu_id), %g1 ! master always 0
- stb %g0, [%g1 + %lo(boot_cpu_id)]
- sethi %hi(boot_cpu_id4), %g1 ! master always 0
- stb %g0, [%g1 + %lo(boot_cpu_id4)]
-
- rd %asr17,%g1
- srl %g1,28,%g1
-
- cmp %g0,%g1
- beq sun4c_continue_boot !continue with master
- nop
-
- ba leon_smp_cpu_startup
- nop
-#else
- ba sun4c_continue_boot
+ ldub [%g2 + %lo(boot_cpu_id)], %g1
+ cmp %g1, 0xff ! unset means first CPU
+ bne leon_smp_cpu_startup ! continue only with master
nop
#endif
+ /* Get CPU-ID from most significant 4-bit of ASR17 */
+ rd %asr17, %g1
+ srl %g1, 28, %g1
+
+ /* Update boot_cpu_id only on boot cpu */
+ stub %g1, [%g2 + %lo(boot_cpu_id)]
+
+ ba sun4c_continue_boot
+ nop
#endif
set cputypval, %o2
ldub [%o2 + 0x4], %l1
@@ -893,9 +887,6 @@
sta %g4, [%g0] ASI_M_VIKING_TMP1
sethi %hi(boot_cpu_id), %g5
stb %g4, [%g5 + %lo(boot_cpu_id)]
- sll %g4, 2, %g4
- sethi %hi(boot_cpu_id4), %g5
- stb %g4, [%g5 + %lo(boot_cpu_id4)]
#endif
/* Fall through to sun4m_init */
@@ -1024,14 +1015,28 @@
bl 1b
add %o0, 0x1, %o0
+ /* If boot_cpu_id has not been set up by the machine-specific
+ * init code above, we default it to zero.
+ */
+ sethi %hi(boot_cpu_id), %g2
+ ldub [%g2 + %lo(boot_cpu_id)], %g3
+ cmp %g3, 0xff
+ bne 1f
+ nop
+ mov %g0, %g3
+ stub %g3, [%g2 + %lo(boot_cpu_id)]
+
+1: /* boot_cpu_id set. calculate boot_cpu_id4 = boot_cpu_id*4 */
+ sll %g3, 2, %g3
+ sethi %hi(boot_cpu_id4), %g2
+ stub %g3, [%g2 + %lo(boot_cpu_id4)]
+
/* Initialize the uwinmask value for init task just in case.
* But first make current_set[boot_cpu_id] point to something useful.
*/
set init_thread_union, %g6
set current_set, %g2
#ifdef CONFIG_SMP
- sethi %hi(boot_cpu_id4), %g3
- ldub [%g3 + %lo(boot_cpu_id4)], %g3
st %g6, [%g2]
add %g2, %g3, %g2
#endif
diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index c6ce9a6..1c9c80a 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -50,10 +50,15 @@
#include <asm/io-unit.h>
#include <asm/leon.h>
+/* This function must make sure that caches and memory are coherent after DMA.
+ * On LEON systems without cache snooping it flushes the entire D-CACHE.
+ */
#ifndef CONFIG_SPARC_LEON
-#define mmu_inval_dma_area(p, l) /* Anton pulled it out for 2.4.0-xx */
+static inline void dma_make_coherent(unsigned long pa, unsigned long len)
+{
+}
#else
-static inline void mmu_inval_dma_area(void *va, unsigned long len)
+static inline void dma_make_coherent(unsigned long pa, unsigned long len)
{
if (!sparc_leon3_snooping_enabled())
leon_flush_dcache_all();
@@ -284,7 +289,6 @@
printk("sbus_alloc_consistent: cannot occupy 0x%lx", len_total);
goto err_nova;
}
- mmu_inval_dma_area((void *)va, len_total);
// XXX The mmu_map_dma_area does this for us below, see comments.
// sparc_mapiorange(0, virt_to_phys(va), res->start, len_total);
@@ -336,7 +340,6 @@
release_resource(res);
kfree(res);
- /* mmu_inval_dma_area(va, n); */ /* it's consistent, isn't it */
pgv = virt_to_page(p);
mmu_unmap_dma_area(dev, ba, n);
@@ -463,7 +466,6 @@
printk("pci_alloc_consistent: cannot occupy 0x%lx", len_total);
goto err_nova;
}
- mmu_inval_dma_area(va, len_total);
sparc_mapiorange(0, virt_to_phys(va), res->start, len_total);
*pba = virt_to_phys(va); /* equals virt_to_bus (R.I.P.) for us. */
@@ -489,7 +491,6 @@
dma_addr_t ba)
{
struct resource *res;
- void *pgp;
if ((res = _sparc_find_resource(&_sparc_dvma,
(unsigned long)p)) == NULL) {
@@ -509,14 +510,12 @@
return;
}
- pgp = phys_to_virt(ba); /* bus_to_virt actually */
- mmu_inval_dma_area(pgp, n);
+ dma_make_coherent(ba, n);
sparc_unmapiorange((unsigned long)p, n);
release_resource(res);
kfree(res);
-
- free_pages((unsigned long)pgp, get_order(n));
+ free_pages((unsigned long)phys_to_virt(ba), get_order(n));
}
/*
@@ -535,7 +534,7 @@
enum dma_data_direction dir, struct dma_attrs *attrs)
{
if (dir != PCI_DMA_TODEVICE)
- mmu_inval_dma_area(phys_to_virt(ba), PAGE_ALIGN(size));
+ dma_make_coherent(ba, PAGE_ALIGN(size));
}
/* Map a set of buffers described by scatterlist in streaming
@@ -562,8 +561,7 @@
/* IIep is write-through, not flushing. */
for_each_sg(sgl, sg, nents, n) {
- BUG_ON(page_address(sg_page(sg)) == NULL);
- sg->dma_address = virt_to_phys(sg_virt(sg));
+ sg->dma_address = sg_phys(sg);
sg->dma_length = sg->length;
}
return nents;
@@ -582,9 +580,7 @@
if (dir != PCI_DMA_TODEVICE) {
for_each_sg(sgl, sg, nents, n) {
- BUG_ON(page_address(sg_page(sg)) == NULL);
- mmu_inval_dma_area(page_address(sg_page(sg)),
- PAGE_ALIGN(sg->length));
+ dma_make_coherent(sg_phys(sg), PAGE_ALIGN(sg->length));
}
}
}
@@ -603,8 +599,7 @@
size_t size, enum dma_data_direction dir)
{
if (dir != PCI_DMA_TODEVICE) {
- mmu_inval_dma_area(phys_to_virt(ba),
- PAGE_ALIGN(size));
+ dma_make_coherent(ba, PAGE_ALIGN(size));
}
}
@@ -612,8 +607,7 @@
size_t size, enum dma_data_direction dir)
{
if (dir != PCI_DMA_TODEVICE) {
- mmu_inval_dma_area(phys_to_virt(ba),
- PAGE_ALIGN(size));
+ dma_make_coherent(ba, PAGE_ALIGN(size));
}
}
@@ -631,9 +625,7 @@
if (dir != PCI_DMA_TODEVICE) {
for_each_sg(sgl, sg, nents, n) {
- BUG_ON(page_address(sg_page(sg)) == NULL);
- mmu_inval_dma_area(page_address(sg_page(sg)),
- PAGE_ALIGN(sg->length));
+ dma_make_coherent(sg_phys(sg), PAGE_ALIGN(sg->length));
}
}
}
@@ -646,9 +638,7 @@
if (dir != PCI_DMA_TODEVICE) {
for_each_sg(sgl, sg, nents, n) {
- BUG_ON(page_address(sg_page(sg)) == NULL);
- mmu_inval_dma_area(page_address(sg_page(sg)),
- PAGE_ALIGN(sg->length));
+ dma_make_coherent(sg_phys(sg), PAGE_ALIGN(sg->length));
}
}
}
diff --git a/arch/sparc/kernel/irq.h b/arch/sparc/kernel/irq.h
index 008453b..100b9c2 100644
--- a/arch/sparc/kernel/irq.h
+++ b/arch/sparc/kernel/irq.h
@@ -2,6 +2,23 @@
#include <asm/btfixup.h>
+struct irq_bucket {
+ struct irq_bucket *next;
+ unsigned int real_irq;
+ unsigned int irq;
+ unsigned int pil;
+};
+
+#define SUN4D_MAX_BOARD 10
+#define SUN4D_MAX_IRQ ((SUN4D_MAX_BOARD + 2) << 5)
+
+/* Map from the irq identifier used in hw to the
+ * irq_bucket. The map is sufficiently large to hold
+ * the sun4d hw identifiers.
+ */
+extern struct irq_bucket *irq_map[SUN4D_MAX_IRQ];
+
+
/* sun4m specific type definitions */
/* This maps direct to CPU specific interrupt registers */
@@ -35,6 +52,10 @@
};
extern struct sparc_irq_config sparc_irq_config;
+unsigned int irq_alloc(unsigned int real_irq, unsigned int pil);
+void irq_link(unsigned int irq);
+void irq_unlink(unsigned int irq);
+void handler_irq(unsigned int pil, struct pt_regs *regs);
/* Dave Redman (djhr@tadpole.co.uk)
* changed these to function pointers.. it saves cycles and will allow
@@ -44,33 +65,9 @@
* Changed these to btfixup entities... It saves cycles :)
*/
-BTFIXUPDEF_CALL(void, disable_irq, unsigned int)
-BTFIXUPDEF_CALL(void, enable_irq, unsigned int)
-BTFIXUPDEF_CALL(void, disable_pil_irq, unsigned int)
-BTFIXUPDEF_CALL(void, enable_pil_irq, unsigned int)
BTFIXUPDEF_CALL(void, clear_clock_irq, void)
BTFIXUPDEF_CALL(void, load_profile_irq, int, unsigned int)
-static inline void __disable_irq(unsigned int irq)
-{
- BTFIXUP_CALL(disable_irq)(irq);
-}
-
-static inline void __enable_irq(unsigned int irq)
-{
- BTFIXUP_CALL(enable_irq)(irq);
-}
-
-static inline void disable_pil_irq(unsigned int irq)
-{
- BTFIXUP_CALL(disable_pil_irq)(irq);
-}
-
-static inline void enable_pil_irq(unsigned int irq)
-{
- BTFIXUP_CALL(enable_pil_irq)(irq);
-}
-
static inline void clear_clock_irq(void)
{
BTFIXUP_CALL(clear_clock_irq)();
@@ -89,4 +86,10 @@
#define set_cpu_int(cpu,level) BTFIXUP_CALL(set_cpu_int)(cpu,level)
#define clear_cpu_int(cpu,level) BTFIXUP_CALL(clear_cpu_int)(cpu,level)
#define set_irq_udt(cpu) BTFIXUP_CALL(set_irq_udt)(cpu)
+
+/* All SUN4D IPIs are sent on this IRQ, may be shared with hard IRQs */
+#define SUN4D_IPI_IRQ 14
+
+extern void sun4d_ipi_interrupt(void);
+
#endif
diff --git a/arch/sparc/kernel/irq_32.c b/arch/sparc/kernel/irq_32.c
index 7c93df4..9b89d842 100644
--- a/arch/sparc/kernel/irq_32.c
+++ b/arch/sparc/kernel/irq_32.c
@@ -15,6 +15,7 @@
#include <linux/seq_file.h>
#include <asm/cacheflush.h>
+#include <asm/cpudata.h>
#include <asm/pcic.h>
#include <asm/leon.h>
@@ -101,284 +102,173 @@
* directed CPU interrupts using the existing enable/disable irq code
* with tweaks.
*
+ * Sun4d complicates things even further. IRQ numbers are arbitrary
+ * 32-bit values in that case. Since this is similar to sparc64,
+ * we adopt a virtual IRQ numbering scheme as is done there.
+ * Virtual interrupt numbers are allocated by build_irq(). So NR_IRQS
+ * just becomes a limit of how many interrupt sources we can handle in
+ * a single system. Even fully loaded SS2000 machines top off at
+ * about 32 interrupt sources or so, therefore a NR_IRQS value of 64
+ * is more than enough.
+ *
+ * We keep a map of per-PIL enable interrupts. These get wired
+ * up via the irq_chip->startup() method which gets invoked by
+ * the generic IRQ layer during request_irq().
*/
+/* Table of allocated irqs. Unused entries have irq == 0 */
+static struct irq_bucket irq_table[NR_IRQS];
+/* Protect access to irq_table */
+static DEFINE_SPINLOCK(irq_table_lock);
-/*
- * Dave Redman (djhr@tadpole.co.uk)
- *
- * There used to be extern calls and hard coded values here.. very sucky!
- * instead, because some of the devices attach very early, I do something
- * equally sucky but at least we'll never try to free statically allocated
- * space or call kmalloc before kmalloc_init :(.
- *
- * In fact it's the timer10 that attaches first.. then timer14
- * then kmalloc_init is called.. then the tty interrupts attach.
- * hmmm....
- *
- */
-#define MAX_STATIC_ALLOC 4
-struct irqaction static_irqaction[MAX_STATIC_ALLOC];
-int static_irq_count;
+/* Map between the irq identifier used in hw to the irq_bucket. */
+struct irq_bucket *irq_map[SUN4D_MAX_IRQ];
+/* Protect access to irq_map */
+static DEFINE_SPINLOCK(irq_map_lock);
-static struct {
- struct irqaction *action;
- int flags;
-} sparc_irq[NR_IRQS];
-#define SPARC_IRQ_INPROGRESS 1
-
-/* Used to protect the IRQ action lists */
-DEFINE_SPINLOCK(irq_action_lock);
-
-int show_interrupts(struct seq_file *p, void *v)
+/* Allocate a new irq from the irq_table */
+unsigned int irq_alloc(unsigned int real_irq, unsigned int pil)
{
- int i = *(loff_t *)v;
- struct irqaction *action;
unsigned long flags;
-#ifdef CONFIG_SMP
- int j;
-#endif
+ unsigned int i;
- if (sparc_cpu_model == sun4d)
- return show_sun4d_interrupts(p, v);
-
- spin_lock_irqsave(&irq_action_lock, flags);
- if (i < NR_IRQS) {
- action = sparc_irq[i].action;
- if (!action)
- goto out_unlock;
- seq_printf(p, "%3d: ", i);
-#ifndef CONFIG_SMP
- seq_printf(p, "%10u ", kstat_irqs(i));
-#else
- for_each_online_cpu(j) {
- seq_printf(p, "%10u ",
- kstat_cpu(j).irqs[i]);
- }
-#endif
- seq_printf(p, " %c %s",
- (action->flags & IRQF_DISABLED) ? '+' : ' ',
- action->name);
- for (action = action->next; action; action = action->next) {
- seq_printf(p, ",%s %s",
- (action->flags & IRQF_DISABLED) ? " +" : "",
- action->name);
- }
- seq_putc(p, '\n');
+ spin_lock_irqsave(&irq_table_lock, flags);
+ for (i = 1; i < NR_IRQS; i++) {
+ if (irq_table[i].real_irq == real_irq && irq_table[i].pil == pil)
+ goto found;
}
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
+
+ for (i = 1; i < NR_IRQS; i++) {
+ if (!irq_table[i].irq)
+ break;
+ }
+
+ if (i < NR_IRQS) {
+ irq_table[i].real_irq = real_irq;
+ irq_table[i].irq = i;
+ irq_table[i].pil = pil;
+ } else {
+ printk(KERN_ERR "IRQ: Out of virtual IRQs.\n");
+ i = 0;
+ }
+found:
+ spin_unlock_irqrestore(&irq_table_lock, flags);
+
+ return i;
+}
+
+/* Based on a single pil, handler_irq may need to call several
+ * interrupt handlers. Use irq_map as the entry point into irq_table,
+ * and let each entry in irq_table point to the next one.
+ */
+void irq_link(unsigned int irq)
+{
+ struct irq_bucket *p;
+ unsigned long flags;
+ unsigned int pil;
+
+ BUG_ON(irq >= NR_IRQS);
+
+ spin_lock_irqsave(&irq_map_lock, flags);
+
+ p = &irq_table[irq];
+ pil = p->pil;
+ BUG_ON(pil > SUN4D_MAX_IRQ);
+ p->next = irq_map[pil];
+ irq_map[pil] = p;
+
+ spin_unlock_irqrestore(&irq_map_lock, flags);
+}
+
+void irq_unlink(unsigned int irq)
+{
+ struct irq_bucket *p, **pnext;
+ unsigned long flags;
+
+ BUG_ON(irq >= NR_IRQS);
+
+ spin_lock_irqsave(&irq_map_lock, flags);
+
+ p = &irq_table[irq];
+ BUG_ON(p->pil > SUN4D_MAX_IRQ);
+ pnext = &irq_map[p->pil];
+ while (*pnext != p)
+ pnext = &(*pnext)->next;
+ *pnext = p->next;
+
+ spin_unlock_irqrestore(&irq_map_lock, flags);
+}
+
+
+/* /proc/interrupts printing */
+int arch_show_interrupts(struct seq_file *p, int prec)
+{
+ int j;
+
+#ifdef CONFIG_SMP
+ seq_printf(p, "RES: ");
+ for_each_online_cpu(j)
+ seq_printf(p, "%10u ", cpu_data(j).irq_resched_count);
+ seq_printf(p, " IPI rescheduling interrupts\n");
+ seq_printf(p, "CAL: ");
+ for_each_online_cpu(j)
+ seq_printf(p, "%10u ", cpu_data(j).irq_call_count);
+ seq_printf(p, " IPI function call interrupts\n");
+#endif
+ seq_printf(p, "NMI: ");
+ for_each_online_cpu(j)
+ seq_printf(p, "%10u ", cpu_data(j).counter);
+ seq_printf(p, " Non-maskable interrupts\n");
return 0;
}
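On a two-CPU system the resulting tail of /proc/interrupts looks roughly like this (counts invented for illustration):

	RES:        123        118    IPI rescheduling interrupts
	CAL:         42         40    IPI function call interrupts
	NMI:          0          0    Non-maskable interrupts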
-void free_irq(unsigned int irq, void *dev_id)
-{
- struct irqaction *action;
- struct irqaction **actionp;
- unsigned long flags;
- unsigned int cpu_irq;
-
- if (sparc_cpu_model == sun4d) {
- sun4d_free_irq(irq, dev_id);
- return;
- }
- cpu_irq = irq & (NR_IRQS - 1);
- if (cpu_irq > 14) { /* 14 irq levels on the sparc */
- printk(KERN_ERR "Trying to free bogus IRQ %d\n", irq);
- return;
- }
-
- spin_lock_irqsave(&irq_action_lock, flags);
-
- actionp = &sparc_irq[cpu_irq].action;
- action = *actionp;
-
- if (!action->handler) {
- printk(KERN_ERR "Trying to free free IRQ%d\n", irq);
- goto out_unlock;
- }
- if (dev_id) {
- for (; action; action = action->next) {
- if (action->dev_id == dev_id)
- break;
- actionp = &action->next;
- }
- if (!action) {
- printk(KERN_ERR "Trying to free free shared IRQ%d\n",
- irq);
- goto out_unlock;
- }
- } else if (action->flags & IRQF_SHARED) {
- printk(KERN_ERR "Trying to free shared IRQ%d with NULL device ID\n",
- irq);
- goto out_unlock;
- }
- if (action->flags & SA_STATIC_ALLOC) {
- /*
- * This interrupt is marked as specially allocated
- * so it is a bad idea to free it.
- */
- printk(KERN_ERR "Attempt to free statically allocated IRQ%d (%s)\n",
- irq, action->name);
- goto out_unlock;
- }
-
- *actionp = action->next;
-
- spin_unlock_irqrestore(&irq_action_lock, flags);
-
- synchronize_irq(irq);
-
- spin_lock_irqsave(&irq_action_lock, flags);
-
- kfree(action);
-
- if (!sparc_irq[cpu_irq].action)
- __disable_irq(irq);
-
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
-}
-EXPORT_SYMBOL(free_irq);
-
-/*
- * This is called when we want to synchronize with
- * interrupts. We may for example tell a device to
- * stop sending interrupts: but to make sure there
- * are no interrupts that are executing on another
- * CPU we need to call this function.
- */
-#ifdef CONFIG_SMP
-void synchronize_irq(unsigned int irq)
-{
- unsigned int cpu_irq;
-
- cpu_irq = irq & (NR_IRQS - 1);
- while (sparc_irq[cpu_irq].flags & SPARC_IRQ_INPROGRESS)
- cpu_relax();
-}
-EXPORT_SYMBOL(synchronize_irq);
-#endif /* SMP */
-
-void unexpected_irq(int irq, void *dev_id, struct pt_regs *regs)
-{
- int i;
- struct irqaction *action;
- unsigned int cpu_irq;
-
- cpu_irq = irq & (NR_IRQS - 1);
- action = sparc_irq[cpu_irq].action;
-
- printk(KERN_ERR "IO device interrupt, irq = %d\n", irq);
- printk(KERN_ERR "PC = %08lx NPC = %08lx FP=%08lx\n", regs->pc,
- regs->npc, regs->u_regs[14]);
- if (action) {
- printk(KERN_ERR "Expecting: ");
- for (i = 0; i < 16; i++)
- if (action->handler)
- printk(KERN_CONT "[%s:%d:0x%x] ", action->name,
- i, (unsigned int)action->handler);
- }
- printk(KERN_ERR "AIEEE\n");
- panic("bogus interrupt received");
-}
-
-void handler_irq(int pil, struct pt_regs *regs)
+void handler_irq(unsigned int pil, struct pt_regs *regs)
{
struct pt_regs *old_regs;
- struct irqaction *action;
- int cpu = smp_processor_id();
+ struct irq_bucket *p;
+ BUG_ON(pil > 15);
old_regs = set_irq_regs(regs);
irq_enter();
- disable_pil_irq(pil);
-#ifdef CONFIG_SMP
- /* Only rotate on lower priority IRQs (scsi, ethernet, etc.). */
- if ((sparc_cpu_model==sun4m) && (pil < 10))
- smp4m_irq_rotate(cpu);
-#endif
- action = sparc_irq[pil].action;
- sparc_irq[pil].flags |= SPARC_IRQ_INPROGRESS;
- kstat_cpu(cpu).irqs[pil]++;
- do {
- if (!action || !action->handler)
- unexpected_irq(pil, NULL, regs);
- action->handler(pil, action->dev_id);
- action = action->next;
- } while (action);
- sparc_irq[pil].flags &= ~SPARC_IRQ_INPROGRESS;
- enable_pil_irq(pil);
+
+ p = irq_map[pil];
+ while (p) {
+ struct irq_bucket *next = p->next;
+
+ generic_handle_irq(p->irq);
+ p = next;
+ }
irq_exit();
set_irq_regs(old_regs);
}
#if defined(CONFIG_BLK_DEV_FD) || defined(CONFIG_BLK_DEV_FD_MODULE)
+static unsigned int floppy_irq;
-/*
- * Fast IRQs on the Sparc can only have one routine attached to them,
- * thus no sharing possible.
- */
-static int request_fast_irq(unsigned int irq,
- void (*handler)(void),
- unsigned long irqflags, const char *devname)
+int sparc_floppy_request_irq(unsigned int irq, irq_handler_t irq_handler)
{
- struct irqaction *action;
- unsigned long flags;
unsigned int cpu_irq;
- int ret;
+ int err;
+
#if defined CONFIG_SMP && !defined CONFIG_SPARC_LEON
struct tt_entry *trap_table;
#endif
- cpu_irq = irq & (NR_IRQS - 1);
- if (cpu_irq > 14) {
- ret = -EINVAL;
- goto out;
- }
- if (!handler) {
- ret = -EINVAL;
- goto out;
- }
- spin_lock_irqsave(&irq_action_lock, flags);
+ err = request_irq(irq, irq_handler, 0, "floppy", NULL);
+ if (err)
+ return -1;
- action = sparc_irq[cpu_irq].action;
- if (action) {
- if (action->flags & IRQF_SHARED)
- panic("Trying to register fast irq when already shared.\n");
- if (irqflags & IRQF_SHARED)
- panic("Trying to register fast irq as shared.\n");
+ /* Save for later use in floppy interrupt handler */
+ floppy_irq = irq;
- /* Anyway, someone already owns it so cannot be made fast. */
- printk(KERN_ERR "request_fast_irq: Trying to register yet already owned.\n");
- ret = -EBUSY;
- goto out_unlock;
- }
-
- /*
- * If this is flagged as statically allocated then we use our
- * private struct which is never freed.
- */
- if (irqflags & SA_STATIC_ALLOC) {
- if (static_irq_count < MAX_STATIC_ALLOC)
- action = &static_irqaction[static_irq_count++];
- else
- printk(KERN_ERR "Fast IRQ%d (%s) SA_STATIC_ALLOC failed using kmalloc\n",
- irq, devname);
- }
-
- if (action == NULL)
- action = kmalloc(sizeof(struct irqaction), GFP_ATOMIC);
- if (!action) {
- ret = -ENOMEM;
- goto out_unlock;
- }
+ cpu_irq = (irq & (NR_IRQS - 1));
/* Dork with trap table if we get this far. */
#define INSTANTIATE(table) \
table[SP_TRAP_IRQ1+(cpu_irq-1)].inst_one = SPARC_RD_PSR_L0; \
table[SP_TRAP_IRQ1+(cpu_irq-1)].inst_two = \
- SPARC_BRANCH((unsigned long) handler, \
+ SPARC_BRANCH((unsigned long) floppy_hardint, \
(unsigned long) &table[SP_TRAP_IRQ1+(cpu_irq-1)].inst_two);\
table[SP_TRAP_IRQ1+(cpu_irq-1)].inst_three = SPARC_RD_WIM_L3; \
table[SP_TRAP_IRQ1+(cpu_irq-1)].inst_four = SPARC_NOP;
@@ -399,22 +289,9 @@
* writing we have no CPU-neutral interface to fine-grained flushes.
*/
flush_cache_all();
-
- action->flags = irqflags;
- action->name = devname;
- action->dev_id = NULL;
- action->next = NULL;
-
- sparc_irq[cpu_irq].action = action;
-
- __enable_irq(irq);
-
- ret = 0;
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
-out:
- return ret;
+ return 0;
}
+EXPORT_SYMBOL(sparc_floppy_request_irq);
/*
* These variables are used to access state from the assembler
@@ -440,154 +317,23 @@
unsigned long pdma_areasize;
EXPORT_SYMBOL(pdma_areasize);
-static irq_handler_t floppy_irq_handler;
-
+/* Use the generic irq support to call floppy_interrupt,
+ * which was set up using request_irq() in sparc_floppy_request_irq().
+ * We only have one floppy interrupt, so we do not need to check
+ * for additional handlers being wired up by irq_link().
+ */
void sparc_floppy_irq(int irq, void *dev_id, struct pt_regs *regs)
{
struct pt_regs *old_regs;
- int cpu = smp_processor_id();
old_regs = set_irq_regs(regs);
- disable_pil_irq(irq);
irq_enter();
- kstat_cpu(cpu).irqs[irq]++;
- floppy_irq_handler(irq, dev_id);
+ generic_handle_irq(floppy_irq);
irq_exit();
- enable_pil_irq(irq);
set_irq_regs(old_regs);
- /*
- * XXX Eek, it's totally changed with preempt_count() and such
- * if (softirq_pending(cpu))
- * do_softirq();
- */
}
-
-int sparc_floppy_request_irq(int irq, unsigned long flags,
- irq_handler_t irq_handler)
-{
- floppy_irq_handler = irq_handler;
- return request_fast_irq(irq, floppy_hardint, flags, "floppy");
-}
-EXPORT_SYMBOL(sparc_floppy_request_irq);
-
#endif
-int request_irq(unsigned int irq,
- irq_handler_t handler,
- unsigned long irqflags, const char *devname, void *dev_id)
-{
- struct irqaction *action, **actionp;
- unsigned long flags;
- unsigned int cpu_irq;
- int ret;
-
- if (sparc_cpu_model == sun4d)
- return sun4d_request_irq(irq, handler, irqflags, devname, dev_id);
-
- cpu_irq = irq & (NR_IRQS - 1);
- if (cpu_irq > 14) {
- ret = -EINVAL;
- goto out;
- }
- if (!handler) {
- ret = -EINVAL;
- goto out;
- }
-
- spin_lock_irqsave(&irq_action_lock, flags);
-
- actionp = &sparc_irq[cpu_irq].action;
- action = *actionp;
- if (action) {
- if (!(action->flags & IRQF_SHARED) || !(irqflags & IRQF_SHARED)) {
- ret = -EBUSY;
- goto out_unlock;
- }
- if ((action->flags & IRQF_DISABLED) != (irqflags & IRQF_DISABLED)) {
- printk(KERN_ERR "Attempt to mix fast and slow interrupts on IRQ%d denied\n",
- irq);
- ret = -EBUSY;
- goto out_unlock;
- }
- for ( ; action; action = *actionp)
- actionp = &action->next;
- }
-
- /* If this is flagged as statically allocated then we use our
- * private struct which is never freed.
- */
- if (irqflags & SA_STATIC_ALLOC) {
- if (static_irq_count < MAX_STATIC_ALLOC)
- action = &static_irqaction[static_irq_count++];
- else
- printk(KERN_ERR "Request for IRQ%d (%s) SA_STATIC_ALLOC failed using kmalloc\n",
- irq, devname);
- }
- if (action == NULL)
- action = kmalloc(sizeof(struct irqaction), GFP_ATOMIC);
- if (!action) {
- ret = -ENOMEM;
- goto out_unlock;
- }
-
- action->handler = handler;
- action->flags = irqflags;
- action->name = devname;
- action->next = NULL;
- action->dev_id = dev_id;
-
- *actionp = action;
-
- __enable_irq(irq);
-
- ret = 0;
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
-out:
- return ret;
-}
-EXPORT_SYMBOL(request_irq);
-
-void disable_irq_nosync(unsigned int irq)
-{
- __disable_irq(irq);
-}
-EXPORT_SYMBOL(disable_irq_nosync);
-
-void disable_irq(unsigned int irq)
-{
- __disable_irq(irq);
-}
-EXPORT_SYMBOL(disable_irq);
-
-void enable_irq(unsigned int irq)
-{
- __enable_irq(irq);
-}
-EXPORT_SYMBOL(enable_irq);
-
-/*
- * We really don't need these at all on the Sparc. We only have
- * stubs here because they are exported to modules.
- */
-unsigned long probe_irq_on(void)
-{
- return 0;
-}
-EXPORT_SYMBOL(probe_irq_on);
-
-int probe_irq_off(unsigned long mask)
-{
- return 0;
-}
-EXPORT_SYMBOL(probe_irq_off);
-
-static unsigned int build_device_irq(struct platform_device *op,
- unsigned int real_irq)
-{
- return real_irq;
-}
-
/* djhr
* This could probably be made indirect too and assigned in the CPU
* bits of the code. That would be much nicer I think and would also
@@ -598,8 +344,6 @@
void __init init_IRQ(void)
{
- sparc_irq_config.build_device_irq = build_device_irq;
-
switch (sparc_cpu_model) {
case sun4c:
case sun4:
@@ -607,14 +351,11 @@
break;
case sun4m:
-#ifdef CONFIG_PCI
pcic_probe();
- if (pcic_present()) {
+ if (pcic_present())
sun4m_pci_init_IRQ();
- break;
- }
-#endif
- sun4m_init_IRQ();
+ else
+ sun4m_init_IRQ();
break;
case sun4d:
@@ -632,9 +373,3 @@
btfixup();
}
-#ifdef CONFIG_PROC_FS
-void init_irq_proc(void)
-{
- /* For now, nothing... */
-}
-#endif /* CONFIG_PROC_FS */
diff --git a/arch/sparc/kernel/irq_64.c b/arch/sparc/kernel/irq_64.c
index b1d275c..4e78862 100644
--- a/arch/sparc/kernel/irq_64.c
+++ b/arch/sparc/kernel/irq_64.c
@@ -224,13 +224,13 @@
int cpuid;
cpumask_copy(&mask, affinity);
- if (cpus_equal(mask, cpu_online_map)) {
+ if (cpumask_equal(&mask, cpu_online_mask)) {
cpuid = map_to_cpu(irq);
} else {
cpumask_t tmp;
- cpus_and(tmp, cpu_online_map, mask);
- cpuid = cpus_empty(tmp) ? map_to_cpu(irq) : first_cpu(tmp);
+ cpumask_and(&tmp, cpu_online_mask, &mask);
+ cpuid = cpumask_empty(&tmp) ? map_to_cpu(irq) : cpumask_first(&tmp);
}
return cpuid;
diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
index 24ad449..6f6544c 100644
--- a/arch/sparc/kernel/kernel.h
+++ b/arch/sparc/kernel/kernel.h
@@ -6,11 +6,9 @@
#include <asm/traps.h>
/* cpu.c */
-extern const char *sparc_cpu_type;
extern const char *sparc_pmu_type;
-extern const char *sparc_fpu_type;
-
extern unsigned int fsr_storage;
+extern int ncpus_probed;
#ifdef CONFIG_SPARC32
/* cpu.c */
@@ -37,6 +35,7 @@
extern unsigned int lvl14_resolution;
extern void sun4m_init_IRQ(void);
+extern void sun4m_unmask_profile_irq(void);
extern void sun4m_clear_profile_irq(int cpu);
/* sun4d_irq.c */
diff --git a/arch/sparc/kernel/leon_kernel.c b/arch/sparc/kernel/leon_kernel.c
index 2969f77..2f538ac 100644
--- a/arch/sparc/kernel/leon_kernel.c
+++ b/arch/sparc/kernel/leon_kernel.c
@@ -19,53 +19,70 @@
#include <asm/leon_amba.h>
#include <asm/traps.h>
#include <asm/cacheflush.h>
+#include <asm/smp.h>
+#include <asm/setup.h>
#include "prom.h"
#include "irq.h"
struct leon3_irqctrl_regs_map *leon3_irqctrl_regs; /* interrupt controller base address */
struct leon3_gptimer_regs_map *leon3_gptimer_regs; /* timer controller base address */
-struct amba_apb_device leon_percpu_timer_dev[16];
int leondebug_irq_disable;
int leon_debug_irqout;
static int dummy_master_l10_counter;
unsigned long amba_system_id;
+static DEFINE_SPINLOCK(leon_irq_lock);
unsigned long leon3_gptimer_irq; /* interrupt controller irq number */
unsigned long leon3_gptimer_idx; /* Timer Index (0..6) within Timer Core */
+int leon3_ticker_irq; /* Timer ticker IRQ */
unsigned int sparc_leon_eirq;
-#define LEON_IMASK ((&leon3_irqctrl_regs->mask[0]))
+#define LEON_IMASK(cpu) (&leon3_irqctrl_regs->mask[cpu])
+#define LEON_IACK (&leon3_irqctrl_regs->iclear)
+#define LEON_DO_ACK_HW 1
-/* Return the IRQ of the pending IRQ on the extended IRQ controller */
-int sparc_leon_eirq_get(int eirq, int cpu)
+/* Return the last IRQ ACKed by the Extended IRQ controller. It has already
+ * been (automatically) ACKed when the CPU takes the trap.
+ */
+static inline unsigned int leon_eirq_get(int cpu)
{
return LEON3_BYPASS_LOAD_PA(&leon3_irqctrl_regs->intid[cpu]) & 0x1f;
}
-irqreturn_t sparc_leon_eirq_isr(int dummy, void *dev_id)
+/* Handle one or multiple IRQs from the extended interrupt controller */
+static void leon_handle_ext_irq(unsigned int irq, struct irq_desc *desc)
{
- printk(KERN_ERR "sparc_leon_eirq_isr: ERROR EXTENDED IRQ\n");
- return IRQ_HANDLED;
+ unsigned int eirq;
+ int cpu = sparc_leon3_cpuid();
+
+ eirq = leon_eirq_get(cpu);
+ if ((eirq & 0x10) && irq_map[eirq]->irq) /* bit4 tells if IRQ happened */
+ generic_handle_irq(irq_map[eirq]->irq);
}
/* The extended IRQ controller has been found, this function registers it */
-void sparc_leon_eirq_register(int eirq)
+void leon_eirq_setup(unsigned int eirq)
{
- int irq;
+ unsigned long mask, oldmask;
+ unsigned int veirq;
- /* Register a "BAD" handler for this interrupt, it should never happen */
- irq = request_irq(eirq, sparc_leon_eirq_isr,
- (IRQF_DISABLED | SA_STATIC_ALLOC), "extirq", NULL);
-
- if (irq) {
- printk(KERN_ERR
- "sparc_leon_eirq_register: unable to attach IRQ%d\n",
- eirq);
- } else {
- sparc_leon_eirq = eirq;
+ if (eirq < 1 || eirq > 0xf) {
+ printk(KERN_ERR "LEON EXT IRQ NUMBER BAD: %d\n", eirq);
+ return;
}
+ veirq = leon_build_device_irq(eirq, leon_handle_ext_irq, "extirq", 0);
+
+ /*
+ * Unmask the Extended IRQ; the IRQs routed through the Ext-IRQ
+ * controller have a mask-bit of their own, so this is safe.
+ */
+ irq_link(veirq);
+ mask = 1 << eirq;
+ oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(boot_cpu_id));
+ LEON3_BYPASS_STORE_PA(LEON_IMASK(boot_cpu_id), (oldmask | mask));
+ sparc_leon_eirq = eirq;
}
static inline unsigned long get_irqmask(unsigned int irq)
@@ -83,35 +100,151 @@
return mask;
}
-static void leon_enable_irq(unsigned int irq_nr)
+#ifdef CONFIG_SMP
+static int irq_choose_cpu(const struct cpumask *affinity)
{
- unsigned long mask, flags;
- mask = get_irqmask(irq_nr);
- local_irq_save(flags);
- LEON3_BYPASS_STORE_PA(LEON_IMASK,
- (LEON3_BYPASS_LOAD_PA(LEON_IMASK) | (mask)));
- local_irq_restore(flags);
+ cpumask_t mask;
+
+ cpus_and(mask, cpu_online_map, *affinity);
+ if (cpus_equal(mask, cpu_online_map) || cpus_empty(mask))
+ return boot_cpu_id;
+ else
+ return first_cpu(mask);
+}
+#else
+#define irq_choose_cpu(affinity) boot_cpu_id
+#endif
+
+static int leon_set_affinity(struct irq_data *data, const struct cpumask *dest,
+ bool force)
+{
+ unsigned long mask, oldmask, flags;
+ int oldcpu, newcpu;
+
+ mask = (unsigned long)data->chip_data;
+ oldcpu = irq_choose_cpu(data->affinity);
+ newcpu = irq_choose_cpu(dest);
+
+ if (oldcpu == newcpu)
+ goto out;
+
+ /* unmask on old CPU first before enabling on the selected CPU */
+ spin_lock_irqsave(&leon_irq_lock, flags);
+ oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(oldcpu));
+ LEON3_BYPASS_STORE_PA(LEON_IMASK(oldcpu), (oldmask & ~mask));
+ oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(newcpu));
+ LEON3_BYPASS_STORE_PA(LEON_IMASK(newcpu), (oldmask | mask));
+ spin_unlock_irqrestore(&leon_irq_lock, flags);
+out:
+ return IRQ_SET_MASK_OK;
}
-static void leon_disable_irq(unsigned int irq_nr)
+static void leon_unmask_irq(struct irq_data *data)
{
- unsigned long mask, flags;
- mask = get_irqmask(irq_nr);
- local_irq_save(flags);
- LEON3_BYPASS_STORE_PA(LEON_IMASK,
- (LEON3_BYPASS_LOAD_PA(LEON_IMASK) & ~(mask)));
- local_irq_restore(flags);
+ unsigned long mask, oldmask, flags;
+ int cpu;
+ mask = (unsigned long)data->chip_data;
+ cpu = irq_choose_cpu(data->affinity);
+ spin_lock_irqsave(&leon_irq_lock, flags);
+ oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(cpu));
+ LEON3_BYPASS_STORE_PA(LEON_IMASK(cpu), (oldmask | mask));
+ spin_unlock_irqrestore(&leon_irq_lock, flags);
+}
+
+static void leon_mask_irq(struct irq_data *data)
+{
+ unsigned long mask, oldmask, flags;
+ int cpu;
+
+ mask = (unsigned long)data->chip_data;
+ cpu = irq_choose_cpu(data->affinity);
+ spin_lock_irqsave(&leon_irq_lock, flags);
+ oldmask = LEON3_BYPASS_LOAD_PA(LEON_IMASK(cpu));
+ LEON3_BYPASS_STORE_PA(LEON_IMASK(cpu), (oldmask & ~mask));
+ spin_unlock_irqrestore(&leon_irq_lock, flags);
+}
+
+static unsigned int leon_startup_irq(struct irq_data *data)
+{
+ irq_link(data->irq);
+ leon_unmask_irq(data);
+ return 0;
+}
+
+static void leon_shutdown_irq(struct irq_data *data)
+{
+ leon_mask_irq(data);
+ irq_unlink(data->irq);
+}
+
+/* Used by external level sensitive IRQ handlers on the LEON: ACK IRQ ctrl */
+static void leon_eoi_irq(struct irq_data *data)
+{
+ unsigned long mask = (unsigned long)data->chip_data;
+
+ if (mask & LEON_DO_ACK_HW)
+ LEON3_BYPASS_STORE_PA(LEON_IACK, mask & ~LEON_DO_ACK_HW);
+}
+
+static struct irq_chip leon_irq = {
+ .name = "leon",
+ .irq_startup = leon_startup_irq,
+ .irq_shutdown = leon_shutdown_irq,
+ .irq_mask = leon_mask_irq,
+ .irq_unmask = leon_unmask_irq,
+ .irq_eoi = leon_eoi_irq,
+ .irq_set_affinity = leon_set_affinity,
+};
+
+/*
+ * Build a LEON IRQ for the edge triggered LEON IRQ controller:
+ * Edge (normal) IRQ - handle_simple_irq, ack=DONT-CARE, never ack
+ * Level IRQ (PCI|Level-GPIO) - handle_fasteoi_irq, ack=1, ack after ISR
+ * Per-CPU Edge - handle_percpu_irq, ack=0
+ */
+unsigned int leon_build_device_irq(unsigned int real_irq,
+ irq_flow_handler_t flow_handler,
+ const char *name, int do_ack)
+{
+ unsigned int irq;
+ unsigned long mask;
+
+ irq = 0;
+ mask = get_irqmask(real_irq);
+ if (mask == 0)
+ goto out;
+
+ irq = irq_alloc(real_irq, real_irq);
+ if (irq == 0)
+ goto out;
+
+ if (do_ack)
+ mask |= LEON_DO_ACK_HW;
+
+ irq_set_chip_and_handler_name(irq, &leon_irq,
+ flow_handler, name);
+ irq_set_chip_data(irq, (void *)mask);
+
+out:
+ return irq;
+}
+
+static unsigned int _leon_build_device_irq(struct platform_device *op,
+ unsigned int real_irq)
+{
+ return leon_build_device_irq(real_irq, handle_simple_irq, "edge", 0);
}
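The timer code below uses the edge flavor via _leon_build_device_irq(), and the SMP ticker uses the per-cpu flavor. A level-sensitive source, e.g. a hypothetical PCI pin on real IRQ 5, would be built with do_ack set so the controller is ACKed after the ISR:

	unsigned int virq = leon_build_device_irq(5, handle_fasteoi_irq, "level", 1);
	/* then: request_irq(virq, my_handler, 0, "mydev", dev) as usual */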
void __init leon_init_timers(irq_handler_t counter_fn)
{
- int irq;
+ int irq, eirq;
struct device_node *rootnp, *np, *nnp;
struct property *pp;
int len;
- int cpu, icsel;
+ int icsel;
int ampopts;
+ int err;
leondebug_irq_disable = 0;
leon_debug_irqout = 0;
@@ -173,98 +306,85 @@
leon3_gptimer_irq = *(unsigned int *)pp->value;
} while (0);
- if (leon3_gptimer_regs && leon3_irqctrl_regs && leon3_gptimer_irq) {
- LEON3_BYPASS_STORE_PA(
- &leon3_gptimer_regs->e[leon3_gptimer_idx].val, 0);
- LEON3_BYPASS_STORE_PA(
- &leon3_gptimer_regs->e[leon3_gptimer_idx].rld,
- (((1000000 / HZ) - 1)));
- LEON3_BYPASS_STORE_PA(
+ if (!(leon3_gptimer_regs && leon3_irqctrl_regs && leon3_gptimer_irq))
+ goto bad;
+
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].val, 0);
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].rld,
+ (((1000000 / HZ) - 1)));
+ LEON3_BYPASS_STORE_PA(
&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl, 0);
#ifdef CONFIG_SMP
- leon_percpu_timer_dev[0].start = (int)leon3_gptimer_regs;
- leon_percpu_timer_dev[0].irq = leon3_gptimer_irq + 1 +
- leon3_gptimer_idx;
+ leon3_ticker_irq = leon3_gptimer_irq + 1 + leon3_gptimer_idx;
- if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) &
- (1<<LEON3_GPTIMER_SEPIRQ))) {
- prom_printf("irq timer not configured with separate irqs\n");
- BUG();
- }
-
- LEON3_BYPASS_STORE_PA(
- &leon3_gptimer_regs->e[leon3_gptimer_idx+1].val, 0);
- LEON3_BYPASS_STORE_PA(
- &leon3_gptimer_regs->e[leon3_gptimer_idx+1].rld,
- (((1000000/HZ) - 1)));
- LEON3_BYPASS_STORE_PA(
- &leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl, 0);
-# endif
-
- /*
- * The IRQ controller may (if implemented) consist of multiple
- * IRQ controllers, each mapped on a 4Kb boundary.
- * Each CPU may be routed to different IRQCTRLs, however
- * we assume that all CPUs (in SMP system) is routed to the
- * same IRQ Controller, and for non-SMP only one IRQCTRL is
- * accessed anyway.
- * In AMP systems, Linux must run on CPU0 for the time being.
- */
- cpu = sparc_leon3_cpuid();
- icsel = LEON3_BYPASS_LOAD_PA(&leon3_irqctrl_regs->icsel[cpu/8]);
- icsel = (icsel >> ((7 - (cpu&0x7)) * 4)) & 0xf;
- leon3_irqctrl_regs += icsel;
- } else {
- goto bad;
+ if (!(LEON3_BYPASS_LOAD_PA(&leon3_gptimer_regs->config) &
+ (1<<LEON3_GPTIMER_SEPIRQ))) {
+ printk(KERN_ERR "timer not configured with separate irqs\n");
+ BUG();
}
- irq = request_irq(leon3_gptimer_irq+leon3_gptimer_idx,
- counter_fn,
- (IRQF_DISABLED | SA_STATIC_ALLOC), "timer", NULL);
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].val,
+ 0);
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].rld,
+ (((1000000/HZ) - 1)));
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl,
+ 0);
+#endif
- if (irq) {
- printk(KERN_ERR "leon_time_init: unable to attach IRQ%d\n",
- LEON_INTERRUPT_TIMER1);
+ /*
+ * The IRQ controller may (if implemented) consist of multiple
+ * IRQ controllers, each mapped on a 4Kb boundary.
+ * Each CPU may be routed to different IRQCTRLs; however,
+ * we assume that all CPUs (in an SMP system) are routed to the
+ * same IRQ controller, and for non-SMP only one IRQCTRL is
+ * accessed anyway.
+ * In AMP systems, Linux must run on CPU0 for the time being.
+ */
+ icsel = LEON3_BYPASS_LOAD_PA(&leon3_irqctrl_regs->icsel[boot_cpu_id/8]);
+ icsel = (icsel >> ((7 - (boot_cpu_id&0x7)) * 4)) & 0xf;
+ leon3_irqctrl_regs += icsel;
+
+ /* Mask all IRQs on boot-cpu IRQ controller */
+ LEON3_BYPASS_STORE_PA(&leon3_irqctrl_regs->mask[boot_cpu_id], 0);
+
+ /* Probe extended IRQ controller */
+ eirq = (LEON3_BYPASS_LOAD_PA(&leon3_irqctrl_regs->mpstatus)
+ >> 16) & 0xf;
+ if (eirq != 0)
+ leon_eirq_setup(eirq);
+
+ irq = _leon_build_device_irq(NULL, leon3_gptimer_irq+leon3_gptimer_idx);
+ err = request_irq(irq, counter_fn, IRQF_TIMER, "timer", NULL);
+ if (err) {
+ printk(KERN_ERR "unable to attach timer IRQ%d\n", irq);
prom_halt();
}
-# ifdef CONFIG_SMP
- {
- unsigned long flags;
- struct tt_entry *trap_table = &sparc_ttable[SP_TRAP_IRQ1 + (leon_percpu_timer_dev[0].irq - 1)];
-
- /* For SMP we use the level 14 ticker, however the bootup code
- * has copied the firmwares level 14 vector into boot cpu's
- * trap table, we must fix this now or we get squashed.
- */
- local_irq_save(flags);
-
- patchme_maybe_smp_msg[0] = 0x01000000; /* NOP out the branch */
-
- /* Adjust so that we jump directly to smpleon_ticker */
- trap_table->inst_three += smpleon_ticker - real_irq_entry;
-
- local_flush_cache_all();
- local_irq_restore(flags);
- }
-# endif
-
- if (leon3_gptimer_regs) {
- LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl,
- LEON3_GPTIMER_EN |
- LEON3_GPTIMER_RL |
- LEON3_GPTIMER_LD | LEON3_GPTIMER_IRQEN);
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx].ctrl,
+ LEON3_GPTIMER_EN |
+ LEON3_GPTIMER_RL |
+ LEON3_GPTIMER_LD |
+ LEON3_GPTIMER_IRQEN);
#ifdef CONFIG_SMP
- LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl,
- LEON3_GPTIMER_EN |
- LEON3_GPTIMER_RL |
- LEON3_GPTIMER_LD |
- LEON3_GPTIMER_IRQEN);
-#endif
-
+ /* Install per-cpu IRQ handler for broadcasted ticker */
+ irq = leon_build_device_irq(leon3_ticker_irq, handle_percpu_irq,
+ "per-cpu", 0);
+ err = request_irq(irq, leon_percpu_timer_interrupt,
+ IRQF_PERCPU | IRQF_TIMER, "ticker",
+ NULL);
+ if (err) {
+ printk(KERN_ERR "unable to attach ticker IRQ%d\n", irq);
+ prom_halt();
}
+
+ LEON3_BYPASS_STORE_PA(&leon3_gptimer_regs->e[leon3_gptimer_idx+1].ctrl,
+ LEON3_GPTIMER_EN |
+ LEON3_GPTIMER_RL |
+ LEON3_GPTIMER_LD |
+ LEON3_GPTIMER_IRQEN);
+#endif
return;
bad:
printk(KERN_ERR "No Timer/irqctrl found\n");
@@ -281,9 +401,6 @@
BUG();
}
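As a sanity check on the reload value programmed above (assuming the gptimer prescaler is set for a 1 MHz tick, the usual LEON boot-loader convention), the arithmetic works out as:

	rld = (1000000 / HZ) - 1;	/* HZ == 100  ->  rld == 9999 */
	/* the counter counts rld + 1 = 10000 microseconds and then
	 * interrupts: exactly one timer IRQ per 10 ms jiffy */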
-
-
-
void __init leon_trans_init(struct device_node *dp)
{
if (strcmp(dp->type, "cpu") == 0 && strcmp(dp->name, "<NULL>") == 0) {
@@ -337,22 +454,18 @@
{
unsigned long mask, flags, *addr;
mask = get_irqmask(irq_nr);
- local_irq_save(flags);
- addr = (unsigned long *)&(leon3_irqctrl_regs->mask[cpu]);
- LEON3_BYPASS_STORE_PA(addr, (LEON3_BYPASS_LOAD_PA(addr) | (mask)));
- local_irq_restore(flags);
+ spin_lock_irqsave(&leon_irq_lock, flags);
+ addr = (unsigned long *)LEON_IMASK(cpu);
+ LEON3_BYPASS_STORE_PA(addr, (LEON3_BYPASS_LOAD_PA(addr) | mask));
+ spin_unlock_irqrestore(&leon_irq_lock, flags);
}
#endif
void __init leon_init_IRQ(void)
{
- sparc_irq_config.init_timers = leon_init_timers;
-
- BTFIXUPSET_CALL(enable_irq, leon_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_irq, leon_disable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(enable_pil_irq, leon_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_pil_irq, leon_disable_irq, BTFIXUPCALL_NORM);
+ sparc_irq_config.init_timers = leon_init_timers;
+ sparc_irq_config.build_device_irq = _leon_build_device_irq;
BTFIXUPSET_CALL(clear_clock_irq, leon_clear_clock_irq,
BTFIXUPCALL_NORM);
diff --git a/arch/sparc/kernel/leon_smp.c b/arch/sparc/kernel/leon_smp.c
index 8f5de4a..fe8fb44 100644
--- a/arch/sparc/kernel/leon_smp.c
+++ b/arch/sparc/kernel/leon_smp.c
@@ -14,6 +14,7 @@
#include <linux/smp.h>
#include <linux/interrupt.h>
#include <linux/kernel_stat.h>
+#include <linux/of.h>
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/mm.h>
@@ -29,6 +30,7 @@
#include <asm/ptrace.h>
#include <asm/atomic.h>
#include <asm/irq_regs.h>
+#include <asm/traps.h>
#include <asm/delay.h>
#include <asm/irq.h>
@@ -50,9 +52,12 @@
extern ctxd_t *srmmu_ctx_table_phys;
static int smp_processors_ready;
extern volatile unsigned long cpu_callin_map[NR_CPUS];
-extern unsigned char boot_cpu_id;
extern cpumask_t smp_commenced_mask;
void __init leon_configure_cache_smp(void);
+static void leon_ipi_init(void);
+
+/* IRQ number of LEON IPIs */
+int leon_ipi_irq = LEON3_IRQ_IPI_DEFAULT;
static inline unsigned long do_swap(volatile unsigned long *ptr,
unsigned long val)
@@ -94,8 +99,6 @@
local_flush_cache_all();
local_flush_tlb_all();
- cpu_probe();
-
/* Fix idle thread fields. */
__asm__ __volatile__("ld [%0], %%g6\n\t" : : "r"(&current_set[cpuid])
: "memory" /* paranoid */);
@@ -104,11 +107,11 @@
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
- while (!cpu_isset(cpuid, smp_commenced_mask))
+ while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
local_irq_enable();
- cpu_set(cpuid, cpu_online_map);
+ set_cpu_online(cpuid, true);
}
/*
@@ -179,13 +182,16 @@
int nrcpu = leon_smp_nrcpus();
int me = smp_processor_id();
+ /* Setup IPI */
+ leon_ipi_init();
+
printk(KERN_INFO "%d:(%d:%d) cpus mpirq at 0x%x\n", (unsigned int)me,
(unsigned int)nrcpu, (unsigned int)NR_CPUS,
(unsigned int)&(leon3_irqctrl_regs->mpstatus));
leon_enable_irq_cpu(LEON3_IRQ_CROSS_CALL, me);
leon_enable_irq_cpu(LEON3_IRQ_TICKER, me);
- leon_enable_irq_cpu(LEON3_IRQ_RESCHEDULE, me);
+ leon_enable_irq_cpu(leon_ipi_irq, me);
leon_smp_setbroadcast(1 << LEON3_IRQ_TICKER);
@@ -220,6 +226,10 @@
(unsigned int)&leon3_irqctrl_regs->mpstatus);
local_flush_cache_all();
+ /* Make sure all IRQs are off from the start for this new CPU */
+ LEON_BYPASS_STORE_PA(&leon3_irqctrl_regs->mask[i], 0);
+
+ /* Wake one CPU */
LEON_BYPASS_STORE_PA(&(leon3_irqctrl_regs->mpstatus), 1 << i);
/* wheee... it's going... */
@@ -236,7 +246,7 @@
} else {
leon_enable_irq_cpu(LEON3_IRQ_CROSS_CALL, i);
leon_enable_irq_cpu(LEON3_IRQ_TICKER, i);
- leon_enable_irq_cpu(LEON3_IRQ_RESCHEDULE, i);
+ leon_enable_irq_cpu(leon_ipi_irq, i);
}
local_flush_cache_all();
@@ -262,21 +272,21 @@
local_flush_cache_all();
/* Free unneeded trap tables */
- if (!cpu_isset(1, cpu_present_map)) {
+ if (!cpu_present(1)) {
ClearPageReserved(virt_to_page(&trapbase_cpu1));
init_page_count(virt_to_page(&trapbase_cpu1));
free_page((unsigned long)&trapbase_cpu1);
totalram_pages++;
num_physpages++;
}
- if (!cpu_isset(2, cpu_present_map)) {
+ if (!cpu_present(2)) {
ClearPageReserved(virt_to_page(&trapbase_cpu2));
init_page_count(virt_to_page(&trapbase_cpu2));
free_page((unsigned long)&trapbase_cpu2);
totalram_pages++;
num_physpages++;
}
- if (!cpu_isset(3, cpu_present_map)) {
+ if (!cpu_present(3)) {
ClearPageReserved(virt_to_page(&trapbase_cpu3));
init_page_count(virt_to_page(&trapbase_cpu3));
free_page((unsigned long)&trapbase_cpu3);
@@ -292,6 +302,99 @@
{
}
+struct leon_ipi_work {
+ int single;
+ int msk;
+ int resched;
+};
+
+static DEFINE_PER_CPU_SHARED_ALIGNED(struct leon_ipi_work, leon_ipi_work);
+
+/* Initialize IPIs on the LEON, in order to save IRQ resources only one IRQ
+ * is used for all three types of IPIs.
+ */
+static void __init leon_ipi_init(void)
+{
+ int cpu, len;
+ struct leon_ipi_work *work;
+ struct property *pp;
+ struct device_node *rootnp;
+ struct tt_entry *trap_table;
+ unsigned long flags;
+
+ /* Find IPI IRQ or stick with default value */
+ rootnp = of_find_node_by_path("/ambapp0");
+ if (rootnp) {
+ pp = of_find_property(rootnp, "ipi_num", &len);
+ if (pp && (*(int *)pp->value))
+ leon_ipi_irq = *(int *)pp->value;
+ }
+ printk(KERN_INFO "leon: SMP IPIs at IRQ %d\n", leon_ipi_irq);
+
+ /* Adjust so that we jump directly to smpleon_ipi */
+ local_irq_save(flags);
+ trap_table = &sparc_ttable[SP_TRAP_IRQ1 + (leon_ipi_irq - 1)];
+ trap_table->inst_three += smpleon_ipi - real_irq_entry;
+ local_flush_cache_all();
+ local_irq_restore(flags);
+
+ for_each_possible_cpu(cpu) {
+ work = &per_cpu(leon_ipi_work, cpu);
+ work->single = work->msk = work->resched = 0;
+ }
+}
+
+static void leon_ipi_single(int cpu)
+{
+ struct leon_ipi_work *work = &per_cpu(leon_ipi_work, cpu);
+
+ /* Mark work */
+ work->single = 1;
+
+ /* Generate IRQ on the CPU */
+ set_cpu_int(cpu, leon_ipi_irq);
+}
+
+static void leon_ipi_mask_one(int cpu)
+{
+ struct leon_ipi_work *work = &per_cpu(leon_ipi_work, cpu);
+
+ /* Mark work */
+ work->msk = 1;
+
+ /* Generate IRQ on the CPU */
+ set_cpu_int(cpu, leon_ipi_irq);
+}
+
+static void leon_ipi_resched(int cpu)
+{
+ struct leon_ipi_work *work = &per_cpu(leon_ipi_work, cpu);
+
+ /* Mark work */
+ work->resched = 1;
+
+ /* Generate IRQ on the CPU (any IRQ will cause resched) */
+ set_cpu_int(cpu, leon_ipi_irq);
+}
+
+void leonsmp_ipi_interrupt(void)
+{
+ struct leon_ipi_work *work = &__get_cpu_var(leon_ipi_work);
+
+ if (work->single) {
+ work->single = 0;
+ smp_call_function_single_interrupt();
+ }
+ if (work->msk) {
+ work->msk = 0;
+ smp_call_function_interrupt();
+ }
+ if (work->resched) {
+ work->resched = 0;
+ smp_resched_interrupt();
+ }
+}
+
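A sketch of the round trip this machinery provides, assuming CPU 1 is online; smp_call_function_single() is the generic kernel entry point, and everything below it is wired up by this patch:

	#include <linux/smp.h>

	static void remote_fn(void *info)
	{
		pr_info("ran on CPU%d\n", smp_processor_id());
	}

	static void example(void)
	{
		/* Generic code reaches leon_ipi_single(1), which sets
		 * work->single for CPU 1 and raises leon_ipi_irq there;
		 * CPU 1 traps into leonsmp_ipi_interrupt(), sees
		 * work->single, and calls
		 * smp_call_function_single_interrupt(), which runs
		 * remote_fn() and acknowledges the caller.
		 */
		smp_call_function_single(1, remote_fn, NULL, 1);
	}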
static struct smp_funcall {
smpfunc_t func;
unsigned long arg1;
@@ -337,10 +440,10 @@
{
register int i;
- cpu_clear(smp_processor_id(), mask);
- cpus_and(mask, cpu_online_map, mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
+ cpumask_and(&mask, cpu_online_mask, &mask);
for (i = 0; i <= high; i++) {
- if (cpu_isset(i, mask)) {
+ if (cpumask_test_cpu(i, &mask)) {
ccall_info.processors_in[i] = 0;
ccall_info.processors_out[i] = 0;
set_cpu_int(i, LEON3_IRQ_CROSS_CALL);
@@ -354,7 +457,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_in[i])
@@ -363,7 +466,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_out[i])
@@ -386,27 +489,23 @@
ccall_info.processors_out[i] = 1;
}
-void leon_percpu_timer_interrupt(struct pt_regs *regs)
+irqreturn_t leon_percpu_timer_interrupt(int irq, void *unused)
{
- struct pt_regs *old_regs;
int cpu = smp_processor_id();
- old_regs = set_irq_regs(regs);
-
leon_clear_profile_irq(cpu);
profile_tick(CPU_PROFILING);
if (!--prof_counter(cpu)) {
- int user = user_mode(regs);
+ int user = user_mode(get_irq_regs());
- irq_enter();
update_process_times(user);
- irq_exit();
prof_counter(cpu) = prof_multiplier(cpu);
}
- set_irq_regs(old_regs);
+
+ return IRQ_HANDLED;
}
static void __init smp_setup_percpu_timer(void)
@@ -449,6 +548,9 @@
BTFIXUPSET_CALL(smp_cross_call, leon_cross_call, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(__hard_smp_processor_id, __leon_processor_id,
BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_resched, leon_ipi_resched, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_single, leon_ipi_single, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_mask_one, leon_ipi_mask_one, BTFIXUPCALL_NORM);
}
#endif /* CONFIG_SPARC_LEON */
diff --git a/arch/sparc/kernel/mdesc.c b/arch/sparc/kernel/mdesc.c
index 56db064..42f28c7 100644
--- a/arch/sparc/kernel/mdesc.c
+++ b/arch/sparc/kernel/mdesc.c
@@ -768,7 +768,7 @@
cpuid, NR_CPUS);
continue;
}
- if (!cpu_isset(cpuid, *mask))
+ if (!cpumask_test_cpu(cpuid, mask))
continue;
#endif
diff --git a/arch/sparc/kernel/of_device_64.c b/arch/sparc/kernel/of_device_64.c
index 5c14968..3bb2eac 100644
--- a/arch/sparc/kernel/of_device_64.c
+++ b/arch/sparc/kernel/of_device_64.c
@@ -622,8 +622,9 @@
out:
nid = of_node_to_nid(dp);
if (nid != -1) {
- cpumask_t numa_mask = *cpumask_of_node(nid);
+ cpumask_t numa_mask;
+ cpumask_copy(&numa_mask, cpumask_of_node(nid));
irq_set_affinity(irq, &numa_mask);
}
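This hunk, like the matching ones in pci_msi.c and smp_32.c below, replaces struct assignment of a cpumask with cpumask_copy(). A sketch of the difference (illustrative, not from the patch):

	cpumask_t dst;
	const struct cpumask *src = cpumask_of_node(nid);

	dst = *src;		 /* struct copy: always moves NR_CPUS bits */
	cpumask_copy(&dst, src); /* moves only nr_cpumask_bits, and keeps
				  * working if cpumask_t ever stops being
				  * a plain assignable struct */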
diff --git a/arch/sparc/kernel/pci_msi.c b/arch/sparc/kernel/pci_msi.c
index 30982e9..580651a 100644
--- a/arch/sparc/kernel/pci_msi.c
+++ b/arch/sparc/kernel/pci_msi.c
@@ -284,8 +284,9 @@
nid = pbm->numa_node;
if (nid != -1) {
- cpumask_t numa_mask = *cpumask_of_node(nid);
+ cpumask_t numa_mask;
+ cpumask_copy(&numa_mask, cpumask_of_node(nid));
irq_set_affinity(irq, &numa_mask);
}
err = request_irq(irq, sparc64_msiq_interrupt, 0,
diff --git a/arch/sparc/kernel/pcic.c b/arch/sparc/kernel/pcic.c
index 2cdc131..948601a 100644
--- a/arch/sparc/kernel/pcic.c
+++ b/arch/sparc/kernel/pcic.c
@@ -164,6 +164,9 @@
volatile int pcic_speculative;
volatile int pcic_trapped;
+/* forward */
+unsigned int pcic_build_device_irq(struct platform_device *op,
+ unsigned int real_irq);
#define CONFIG_CMD(bus, device_fn, where) (0x80000000 | (((unsigned int)bus) << 16) | (((unsigned int)device_fn) << 8) | (where & ~3))
@@ -523,6 +526,7 @@
pcic_fill_irq(struct linux_pcic *pcic, struct pci_dev *dev, int node)
{
struct pcic_ca2irq *p;
+ unsigned int real_irq;
int i, ivec;
char namebuf[64];
@@ -551,26 +555,25 @@
i = p->pin;
if (i >= 0 && i < 4) {
ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_LO);
- dev->irq = ivec >> (i << 2) & 0xF;
+ real_irq = ivec >> (i << 2) & 0xF;
} else if (i >= 4 && i < 8) {
ivec = readw(pcic->pcic_regs+PCI_INT_SELECT_HI);
- dev->irq = ivec >> ((i-4) << 2) & 0xF;
+ real_irq = ivec >> ((i-4) << 2) & 0xF;
} else { /* Corrupted map */
printk("PCIC: BAD PIN %d\n", i); for (;;) {}
}
/* P3 */ /* printk("PCIC: device %s pin %d ivec 0x%x irq %x\n", namebuf, i, ivec, dev->irq); */
- /*
- * dev->irq=0 means PROM did not bother to program the upper
+ /* real_irq == 0 means PROM did not bother to program the upper
* half of PCIC. This happens on JS-E with PROM 3.11, for instance.
*/
- if (dev->irq == 0 || p->force) {
+ if (real_irq == 0 || p->force) {
if (p->irq == 0 || p->irq >= 15) { /* Corrupted map */
printk("PCIC: BAD IRQ %d\n", p->irq); for (;;) {}
}
printk("PCIC: setting irq %d at pin %d for device %02x:%02x\n",
p->irq, p->pin, dev->bus->number, dev->devfn);
- dev->irq = p->irq;
+ real_irq = p->irq;
i = p->pin;
if (i >= 4) {
@@ -584,7 +587,8 @@
ivec |= p->irq << (i << 2);
writew(ivec, pcic->pcic_regs+PCI_INT_SELECT_LO);
}
- }
+ }
+ dev->irq = pcic_build_device_irq(NULL, real_irq);
}
/*
@@ -729,6 +733,7 @@
struct linux_pcic *pcic = &pcic0;
unsigned long v;
int timer_irq, irq;
+ int err;
do_arch_gettimeoffset = pci_gettimeoffset;
@@ -740,9 +745,10 @@
timer_irq = PCI_COUNTER_IRQ_SYS(v);
writel (PCI_COUNTER_IRQ_SET(timer_irq, 0),
pcic->pcic_regs+PCI_COUNTER_IRQ);
- irq = request_irq(timer_irq, pcic_timer_handler,
- (IRQF_DISABLED | SA_STATIC_ALLOC), "timer", NULL);
- if (irq) {
+ irq = pcic_build_device_irq(NULL, timer_irq);
+ err = request_irq(irq, pcic_timer_handler,
+ IRQF_TIMER, "timer", NULL);
+ if (err) {
prom_printf("time_init: unable to attach IRQ%d\n", timer_irq);
prom_halt();
}
@@ -803,50 +809,73 @@
return 1 << irq_nr;
}
-static void pcic_disable_irq(unsigned int irq_nr)
+static void pcic_mask_irq(struct irq_data *data)
{
unsigned long mask, flags;
- mask = get_irqmask(irq_nr);
+ mask = (unsigned long)data->chip_data;
local_irq_save(flags);
writel(mask, pcic0.pcic_regs+PCI_SYS_INT_TARGET_MASK_SET);
local_irq_restore(flags);
}
-static void pcic_enable_irq(unsigned int irq_nr)
+static void pcic_unmask_irq(struct irq_data *data)
{
unsigned long mask, flags;
- mask = get_irqmask(irq_nr);
+ mask = (unsigned long)data->chip_data;
local_irq_save(flags);
writel(mask, pcic0.pcic_regs+PCI_SYS_INT_TARGET_MASK_CLEAR);
local_irq_restore(flags);
}
+static unsigned int pcic_startup_irq(struct irq_data *data)
+{
+ irq_link(data->irq);
+ pcic_unmask_irq(data);
+ return 0;
+}
+
+static struct irq_chip pcic_irq = {
+ .name = "pcic",
+ .irq_startup = pcic_startup_irq,
+ .irq_mask = pcic_mask_irq,
+ .irq_unmask = pcic_unmask_irq,
+};
+
+unsigned int pcic_build_device_irq(struct platform_device *op,
+ unsigned int real_irq)
+{
+ unsigned int irq;
+ unsigned long mask;
+
+ irq = 0;
+ mask = get_irqmask(real_irq);
+ if (mask == 0)
+ goto out;
+
+ irq = irq_alloc(real_irq, real_irq);
+ if (irq == 0)
+ goto out;
+
+ irq_set_chip_and_handler_name(irq, &pcic_irq,
+ handle_level_irq, "PCIC");
+ irq_set_chip_data(irq, (void *)mask);
+
+out:
+ return irq;
+}
+
+
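pcic_startup_irq() above pairs irq_link(), the helper this series uses to hook a virtual irq into the per-PIL irq_map[] bucket list consulted by the low-level trap code, with an unmask. After startup, dispatch for a PIL roughly looks like the sketch below (modeled on the sun4d handler later in this patch; illustrative only):

	struct irq_bucket *b;

	for (b = irq_map[pil]; b; b = b->next)
		generic_handle_irq(b->irq);	/* handle_level_irq masks,
						 * runs the ISR, unmasks */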
static void pcic_load_profile_irq(int cpu, unsigned int limit)
{
printk("PCIC: unimplemented code: FILE=%s LINE=%d", __FILE__, __LINE__);
}
-/* We assume the caller has disabled local interrupts when these are called,
- * or else very bizarre behavior will result.
- */
-static void pcic_disable_pil_irq(unsigned int pil)
-{
- writel(get_irqmask(pil), pcic0.pcic_regs+PCI_SYS_INT_TARGET_MASK_SET);
-}
-
-static void pcic_enable_pil_irq(unsigned int pil)
-{
- writel(get_irqmask(pil), pcic0.pcic_regs+PCI_SYS_INT_TARGET_MASK_CLEAR);
-}
-
void __init sun4m_pci_init_IRQ(void)
{
- BTFIXUPSET_CALL(enable_irq, pcic_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_irq, pcic_disable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(enable_pil_irq, pcic_enable_pil_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_pil_irq, pcic_disable_pil_irq, BTFIXUPCALL_NORM);
+ sparc_irq_config.build_device_irq = pcic_build_device_irq;
+
BTFIXUPSET_CALL(clear_clock_irq, pcic_clear_clock_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(load_profile_irq, pcic_load_profile_irq, BTFIXUPCALL_NORM);
}
diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
index ee8426e..2cb0e1c 100644
--- a/arch/sparc/kernel/perf_event.c
+++ b/arch/sparc/kernel/perf_event.c
@@ -26,6 +26,7 @@
#include <asm/nmi.h>
#include <asm/pcr.h>
+#include "kernel.h"
#include "kstack.h"
/* Sparc64 chips have two performance counters, 32-bits each, with
diff --git a/arch/sparc/kernel/pmc.c b/arch/sparc/kernel/pmc.c
index 93d7b44..6a585d3 100644
--- a/arch/sparc/kernel/pmc.c
+++ b/arch/sparc/kernel/pmc.c
@@ -69,7 +69,7 @@
return 0;
}
-static struct of_device_id __initdata pmc_match[] = {
+static struct of_device_id pmc_match[] = {
{
.name = PMC_OBPNAME,
},
diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
index 1752929..c8cc461 100644
--- a/arch/sparc/kernel/process_32.c
+++ b/arch/sparc/kernel/process_32.c
@@ -128,8 +128,16 @@
set_thread_flag(TIF_POLLING_NRFLAG);
/* endless idle loop with no priority at all */
while(1) {
- while (!need_resched())
- cpu_relax();
+#ifdef CONFIG_SPARC_LEON
+ if (pm_idle) {
+ while (!need_resched())
+ (*pm_idle)();
+ } else
+#endif
+ {
+ while (!need_resched())
+ cpu_relax();
+ }
preempt_enable_no_resched();
schedule();
preempt_disable();
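This makes the sparc32 idle loop honor a pm_idle hook on LEON. A hypothetical hook, purely for illustration (a real one would execute the LEON power-down sequence; cpu_relax() stands in for it here):

	static void example_leon_idle(void)
	{
		/* enter a low-power wait until the next interrupt */
		cpu_relax();
	}

	static int __init example_idle_setup(void)
	{
		pm_idle = example_leon_idle;
		return 0;
	}
	arch_initcall(example_idle_setup);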
diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
index 05fb253..5ce3d15 100644
--- a/arch/sparc/kernel/prom_32.c
+++ b/arch/sparc/kernel/prom_32.c
@@ -326,7 +326,6 @@
of_console_options = NULL;
}
- prom_printf(msg, of_console_path);
printk(msg, of_console_path);
}
diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
index 7b8b76c..3609bde 100644
--- a/arch/sparc/kernel/setup_32.c
+++ b/arch/sparc/kernel/setup_32.c
@@ -103,16 +103,20 @@
/* Exported for mm/init.c:paging_init. */
unsigned long cmdline_memory_size __initdata = 0;
+/* which CPU booted us (0xff = not set) */
+unsigned char boot_cpu_id = 0xff; /* 0xff will make it into DATA section... */
+unsigned char boot_cpu_id4; /* boot_cpu_id << 2 */
+
static void
prom_console_write(struct console *con, const char *s, unsigned n)
{
prom_write(s, n);
}
-static struct console prom_debug_console = {
- .name = "debug",
+static struct console prom_early_console = {
+ .name = "earlyprom",
.write = prom_console_write,
- .flags = CON_PRINTBUFFER,
+ .flags = CON_PRINTBUFFER | CON_BOOT,
.index = -1,
};
@@ -133,8 +137,7 @@
prom_halt();
break;
case 'p':
- /* Use PROM debug console. */
- register_console(&prom_debug_console);
+ /* Just ignore, this behavior is now the default. */
break;
default:
printk("Unknown boot switch (-%c)\n", c);
@@ -215,6 +218,10 @@
strcpy(boot_command_line, *cmdline_p);
parse_early_param();
+ boot_flags_init(*cmdline_p);
+
+ register_console(&prom_early_console);
+
/* Set sparc_cpu_model */
sparc_cpu_model = sun_unknown;
if (!strcmp(&cputypval[0], "sun4 "))
@@ -265,7 +272,6 @@
#ifdef CONFIG_DUMMY_CONSOLE
conswitchp = &dummy_con;
#endif
- boot_flags_init(*cmdline_p);
idprom_init();
if (ARCH_SUN4C)
@@ -311,75 +317,6 @@
smp_setup_cpu_possible_map();
}
-static int ncpus_probed;
-
-static int show_cpuinfo(struct seq_file *m, void *__unused)
-{
- seq_printf(m,
- "cpu\t\t: %s\n"
- "fpu\t\t: %s\n"
- "promlib\t\t: Version %d Revision %d\n"
- "prom\t\t: %d.%d\n"
- "type\t\t: %s\n"
- "ncpus probed\t: %d\n"
- "ncpus active\t: %d\n"
-#ifndef CONFIG_SMP
- "CPU0Bogo\t: %lu.%02lu\n"
- "CPU0ClkTck\t: %ld\n"
-#endif
- ,
- sparc_cpu_type,
- sparc_fpu_type ,
- romvec->pv_romvers,
- prom_rev,
- romvec->pv_printrev >> 16,
- romvec->pv_printrev & 0xffff,
- &cputypval[0],
- ncpus_probed,
- num_online_cpus()
-#ifndef CONFIG_SMP
- , cpu_data(0).udelay_val/(500000/HZ),
- (cpu_data(0).udelay_val/(5000/HZ)) % 100,
- cpu_data(0).clock_tick
-#endif
- );
-
-#ifdef CONFIG_SMP
- smp_bogo(m);
-#endif
- mmu_info(m);
-#ifdef CONFIG_SMP
- smp_info(m);
-#endif
- return 0;
-}
-
-static void *c_start(struct seq_file *m, loff_t *pos)
-{
- /* The pointer we are returning is arbitrary,
- * it just has to be non-NULL and not IS_ERR
- * in the success case.
- */
- return *pos == 0 ? &c_start : NULL;
-}
-
-static void *c_next(struct seq_file *m, void *v, loff_t *pos)
-{
- ++*pos;
- return c_start(m, pos);
-}
-
-static void c_stop(struct seq_file *m, void *v)
-{
-}
-
-const struct seq_operations cpuinfo_op = {
- .start =c_start,
- .next = c_next,
- .stop = c_stop,
- .show = show_cpuinfo,
-};
-
extern int stop_a_enabled;
void sun_do_break(void)
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 29bafe0..f3b6850 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -339,84 +339,6 @@
paging_init();
}
-/* BUFFER is PAGE_SIZE bytes long. */
-
-extern void smp_info(struct seq_file *);
-extern void smp_bogo(struct seq_file *);
-extern void mmu_info(struct seq_file *);
-
-unsigned int dcache_parity_tl1_occurred;
-unsigned int icache_parity_tl1_occurred;
-
-int ncpus_probed;
-
-static int show_cpuinfo(struct seq_file *m, void *__unused)
-{
- seq_printf(m,
- "cpu\t\t: %s\n"
- "fpu\t\t: %s\n"
- "pmu\t\t: %s\n"
- "prom\t\t: %s\n"
- "type\t\t: %s\n"
- "ncpus probed\t: %d\n"
- "ncpus active\t: %d\n"
- "D$ parity tl1\t: %u\n"
- "I$ parity tl1\t: %u\n"
-#ifndef CONFIG_SMP
- "Cpu0ClkTck\t: %016lx\n"
-#endif
- ,
- sparc_cpu_type,
- sparc_fpu_type,
- sparc_pmu_type,
- prom_version,
- ((tlb_type == hypervisor) ?
- "sun4v" :
- "sun4u"),
- ncpus_probed,
- num_online_cpus(),
- dcache_parity_tl1_occurred,
- icache_parity_tl1_occurred
-#ifndef CONFIG_SMP
- , cpu_data(0).clock_tick
-#endif
- );
-#ifdef CONFIG_SMP
- smp_bogo(m);
-#endif
- mmu_info(m);
-#ifdef CONFIG_SMP
- smp_info(m);
-#endif
- return 0;
-}
-
-static void *c_start(struct seq_file *m, loff_t *pos)
-{
- /* The pointer we are returning is arbitrary,
- * it just has to be non-NULL and not IS_ERR
- * in the success case.
- */
- return *pos == 0 ? &c_start : NULL;
-}
-
-static void *c_next(struct seq_file *m, void *v, loff_t *pos)
-{
- ++*pos;
- return c_start(m, pos);
-}
-
-static void c_stop(struct seq_file *m, void *v)
-{
-}
-
-const struct seq_operations cpuinfo_op = {
- .start =c_start,
- .next = c_next,
- .stop = c_stop,
- .show = show_cpuinfo,
-};
-
extern int stop_a_enabled;
void sun_do_break(void)
diff --git a/arch/sparc/kernel/smp_32.c b/arch/sparc/kernel/smp_32.c
index f95690c..d5b3958 100644
--- a/arch/sparc/kernel/smp_32.c
+++ b/arch/sparc/kernel/smp_32.c
@@ -37,8 +37,6 @@
#include "irq.h"
volatile unsigned long cpu_callin_map[NR_CPUS] __cpuinitdata = {0,};
-unsigned char boot_cpu_id = 0;
-unsigned char boot_cpu_id4 = 0; /* boot_cpu_id << 2 */
cpumask_t smp_commenced_mask = CPU_MASK_NONE;
@@ -53,6 +51,7 @@
void __cpuinit smp_store_cpu_info(int id)
{
int cpu_node;
+ int mid;
cpu_data(id).udelay_val = loops_per_jiffy;
@@ -60,10 +59,13 @@
cpu_data(id).clock_tick = prom_getintdefault(cpu_node,
"clock-frequency", 0);
cpu_data(id).prom_node = cpu_node;
- cpu_data(id).mid = cpu_get_hwmid(cpu_node);
+ mid = cpu_get_hwmid(cpu_node);
- if (cpu_data(id).mid < 0)
- panic("No MID found for CPU%d at node 0x%08d", id, cpu_node);
+ if (mid < 0) {
+ printk(KERN_NOTICE "No MID found for CPU%d at node 0x%08x\n", id, cpu_node);
+ mid = 0;
+ }
+ cpu_data(id).mid = mid;
}
void __init smp_cpus_done(unsigned int max_cpus)
@@ -126,14 +128,57 @@
void smp_send_reschedule(int cpu)
{
/*
- * XXX missing reschedule IPI, see scheduler_ipi()
+ * CPU model dependent way of implementing IPI generation targeting
+ * a single CPU. The trap handler needs only to do trap entry/return
+ * to call schedule.
*/
+ BTFIXUP_CALL(smp_ipi_resched)(cpu);
}
void smp_send_stop(void)
{
}
+void arch_send_call_function_single_ipi(int cpu)
+{
+ /* trigger one IPI single call on one CPU */
+ BTFIXUP_CALL(smp_ipi_single)(cpu);
+}
+
+void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+{
+ int cpu;
+
+ /* trigger IPI mask call on each CPU */
+ for_each_cpu(cpu, mask)
+ BTFIXUP_CALL(smp_ipi_mask_one)(cpu);
+}
+
+void smp_resched_interrupt(void)
+{
+ irq_enter();
+ scheduler_ipi();
+ local_cpu_data().irq_resched_count++;
+ irq_exit();
+ /* re-schedule routine called by interrupt return code. */
+}
+
+void smp_call_function_single_interrupt(void)
+{
+ irq_enter();
+ generic_smp_call_function_single_interrupt();
+ local_cpu_data().irq_call_count++;
+ irq_exit();
+}
+
+void smp_call_function_interrupt(void)
+{
+ irq_enter();
+ generic_smp_call_function_interrupt();
+ local_cpu_data().irq_call_count++;
+ irq_exit();
+}
+
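Taken together, the three bodies above complete the generic SMP plumbing for sparc32. The path for a cross-CPU function call, as an illustrative call chain (BTFIXUP resolves the per-machine hook at boot):

	/* generic code:            smp_call_function(fn, info, wait)
	 * arch glue (this file):   arch_send_call_function_ipi_mask()
	 * per-model BTFIXUP hook:  leon_ipi_mask_one() or smp4d_ipi_mask_one()
	 * target CPU trap path:    leonsmp_ipi_interrupt() or sun4d_ipi_interrupt()
	 * back in this file:       smp_call_function_interrupt()
	 *                          -> generic_smp_call_function_interrupt() -> fn()
	 */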
void smp_flush_cache_all(void)
{
xc0((smpfunc_t) BTFIXUP_CALL(local_flush_cache_all));
@@ -149,9 +194,10 @@
void smp_flush_cache_mm(struct mm_struct *mm)
{
if(mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc1((smpfunc_t) BTFIXUP_CALL(local_flush_cache_mm), (unsigned long) mm);
local_flush_cache_mm(mm);
}
@@ -160,9 +206,10 @@
void smp_flush_tlb_mm(struct mm_struct *mm)
{
if(mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask)) {
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask)) {
xc1((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_mm), (unsigned long) mm);
if(atomic_read(&mm->mm_users) == 1 && current->active_mm == mm)
cpumask_copy(mm_cpumask(mm),
@@ -178,9 +225,10 @@
struct mm_struct *mm = vma->vm_mm;
if (mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc3((smpfunc_t) BTFIXUP_CALL(local_flush_cache_range), (unsigned long) vma, start, end);
local_flush_cache_range(vma, start, end);
}
@@ -192,9 +240,10 @@
struct mm_struct *mm = vma->vm_mm;
if (mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc3((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_range), (unsigned long) vma, start, end);
local_flush_tlb_range(vma, start, end);
}
@@ -205,9 +254,10 @@
struct mm_struct *mm = vma->vm_mm;
if(mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_cache_page), (unsigned long) vma, page);
local_flush_cache_page(vma, page);
}
@@ -218,19 +268,15 @@
struct mm_struct *mm = vma->vm_mm;
if(mm->context != NO_CONTEXT) {
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_tlb_page), (unsigned long) vma, page);
local_flush_tlb_page(vma, page);
}
}
-void smp_reschedule_irq(void)
-{
- set_need_resched();
-}
-
void smp_flush_page_to_ram(unsigned long page)
{
/* Current theory is that those who call this are the one's
@@ -247,9 +293,10 @@
void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr)
{
- cpumask_t cpu_mask = *mm_cpumask(mm);
- cpu_clear(smp_processor_id(), cpu_mask);
- if (!cpus_empty(cpu_mask))
+ cpumask_t cpu_mask;
+ cpumask_copy(&cpu_mask, mm_cpumask(mm));
+ cpumask_clear_cpu(smp_processor_id(), &cpu_mask);
+ if (!cpumask_empty(&cpu_mask))
xc2((smpfunc_t) BTFIXUP_CALL(local_flush_sig_insns), (unsigned long) mm, insn_addr);
local_flush_sig_insns(mm, insn_addr);
}
@@ -403,7 +450,7 @@
};
if (!ret) {
- cpu_set(cpu, smp_commenced_mask);
+ cpumask_set_cpu(cpu, &smp_commenced_mask);
while (!cpu_online(cpu))
mb();
}
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index 9478da7..99cb172 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -121,11 +121,11 @@
/* inform the notifiers about the new cpu */
notify_cpu_starting(cpuid);
- while (!cpu_isset(cpuid, smp_commenced_mask))
+ while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
rmb();
ipi_call_lock_irq();
- cpu_set(cpuid, cpu_online_map);
+ set_cpu_online(cpuid, true);
ipi_call_unlock_irq();
/* idle thread is expected to have preempt disabled */
@@ -785,7 +785,7 @@
/* Send cross call to all processors mentioned in MASK_P
* except self. Really, there are only two cases currently,
- * "&cpu_online_map" and "&mm->cpu_vm_mask".
+ * "cpu_online_mask" and "mm_cpumask(mm)".
*/
static void smp_cross_call_masked(unsigned long *func, u32 ctx, u64 data1, u64 data2, const cpumask_t *mask)
{
@@ -797,7 +797,7 @@
/* Send cross call to all processors except self. */
static void smp_cross_call(unsigned long *func, u32 ctx, u64 data1, u64 data2)
{
- smp_cross_call_masked(func, ctx, data1, data2, &cpu_online_map);
+ smp_cross_call_masked(func, ctx, data1, data2, cpu_online_mask);
}
extern unsigned long xcall_sync_tick;
@@ -805,7 +805,7 @@
static void smp_start_sync_tick_client(int cpu)
{
xcall_deliver((u64) &xcall_sync_tick, 0, 0,
- &cpumask_of_cpu(cpu));
+ cpumask_of(cpu));
}
extern unsigned long xcall_call_function;
@@ -820,7 +820,7 @@
void arch_send_call_function_single_ipi(int cpu)
{
xcall_deliver((u64) &xcall_call_function_single, 0, 0,
- &cpumask_of_cpu(cpu));
+ cpumask_of(cpu));
}
void __irq_entry smp_call_function_client(int irq, struct pt_regs *regs)
@@ -918,7 +918,7 @@
}
if (data0) {
xcall_deliver(data0, __pa(pg_addr),
- (u64) pg_addr, &cpumask_of_cpu(cpu));
+ (u64) pg_addr, cpumask_of(cpu));
#ifdef CONFIG_DEBUG_DCFLUSH
atomic_inc(&dcpage_flushes_xcall);
#endif
@@ -954,7 +954,7 @@
}
if (data0) {
xcall_deliver(data0, __pa(pg_addr),
- (u64) pg_addr, &cpu_online_map);
+ (u64) pg_addr, cpu_online_mask);
#ifdef CONFIG_DEBUG_DCFLUSH
atomic_inc(&dcpage_flushes_xcall);
#endif
@@ -1197,32 +1197,32 @@
for_each_present_cpu(i) {
unsigned int j;
- cpus_clear(cpu_core_map[i]);
+ cpumask_clear(&cpu_core_map[i]);
if (cpu_data(i).core_id == 0) {
- cpu_set(i, cpu_core_map[i]);
+ cpumask_set_cpu(i, &cpu_core_map[i]);
continue;
}
for_each_present_cpu(j) {
if (cpu_data(i).core_id ==
cpu_data(j).core_id)
- cpu_set(j, cpu_core_map[i]);
+ cpumask_set_cpu(j, &cpu_core_map[i]);
}
}
for_each_present_cpu(i) {
unsigned int j;
- cpus_clear(per_cpu(cpu_sibling_map, i));
+ cpumask_clear(&per_cpu(cpu_sibling_map, i));
if (cpu_data(i).proc_id == -1) {
- cpu_set(i, per_cpu(cpu_sibling_map, i));
+ cpumask_set_cpu(i, &per_cpu(cpu_sibling_map, i));
continue;
}
for_each_present_cpu(j) {
if (cpu_data(i).proc_id ==
cpu_data(j).proc_id)
- cpu_set(j, per_cpu(cpu_sibling_map, i));
+ cpumask_set_cpu(j, &per_cpu(cpu_sibling_map, i));
}
}
}
@@ -1232,10 +1232,10 @@
int ret = smp_boot_one_cpu(cpu);
if (!ret) {
- cpu_set(cpu, smp_commenced_mask);
- while (!cpu_isset(cpu, cpu_online_map))
+ cpumask_set_cpu(cpu, &smp_commenced_mask);
+ while (!cpu_online(cpu))
mb();
- if (!cpu_isset(cpu, cpu_online_map)) {
+ if (!cpu_online(cpu)) {
ret = -ENODEV;
} else {
/* On SUN4V, writes to %tick and %stick are
@@ -1269,7 +1269,7 @@
tb->nonresum_mondo_pa, 0);
}
- cpu_clear(cpu, smp_commenced_mask);
+ cpumask_clear_cpu(cpu, &smp_commenced_mask);
membar_safe("#Sync");
local_irq_disable();
@@ -1290,13 +1290,13 @@
cpuinfo_sparc *c;
int i;
- for_each_cpu_mask(i, cpu_core_map[cpu])
- cpu_clear(cpu, cpu_core_map[i]);
- cpus_clear(cpu_core_map[cpu]);
+ for_each_cpu(i, &cpu_core_map[cpu])
+ cpumask_clear_cpu(cpu, &cpu_core_map[i]);
+ cpumask_clear(&cpu_core_map[cpu]);
- for_each_cpu_mask(i, per_cpu(cpu_sibling_map, cpu))
- cpu_clear(cpu, per_cpu(cpu_sibling_map, i));
- cpus_clear(per_cpu(cpu_sibling_map, cpu));
+ for_each_cpu(i, &per_cpu(cpu_sibling_map, cpu))
+ cpumask_clear_cpu(cpu, &per_cpu(cpu_sibling_map, i));
+ cpumask_clear(&per_cpu(cpu_sibling_map, cpu));
c = &cpu_data(cpu);
@@ -1313,7 +1313,7 @@
local_irq_disable();
ipi_call_lock();
- cpu_clear(cpu, cpu_online_map);
+ set_cpu_online(cpu, false);
ipi_call_unlock();
cpu_map_rebuild();
@@ -1327,11 +1327,11 @@
for (i = 0; i < 100; i++) {
smp_rmb();
- if (!cpu_isset(cpu, smp_commenced_mask))
+ if (!cpumask_test_cpu(cpu, &smp_commenced_mask))
break;
msleep(100);
}
- if (cpu_isset(cpu, smp_commenced_mask)) {
+ if (cpumask_test_cpu(cpu, &smp_commenced_mask)) {
printk(KERN_ERR "CPU %u didn't die...\n", cpu);
} else {
#if defined(CONFIG_SUN_LDOMS)
@@ -1341,7 +1341,7 @@
do {
hv_err = sun4v_cpu_stop(cpu);
if (hv_err == HV_EOK) {
- cpu_clear(cpu, cpu_present_map);
+ set_cpu_present(cpu, false);
break;
}
} while (--limit > 0);
@@ -1362,7 +1362,7 @@
void smp_send_reschedule(int cpu)
{
xcall_deliver((u64) &xcall_receive_signal, 0, 0,
- &cpumask_of_cpu(cpu));
+ cpumask_of(cpu));
}
void __irq_entry smp_receive_signal_client(int irq, struct pt_regs *regs)
diff --git a/arch/sparc/kernel/sun4c_irq.c b/arch/sparc/kernel/sun4c_irq.c
index 90eea38..f6bf25a 100644
--- a/arch/sparc/kernel/sun4c_irq.c
+++ b/arch/sparc/kernel/sun4c_irq.c
@@ -65,62 +65,94 @@
*/
unsigned char __iomem *interrupt_enable;
-static void sun4c_disable_irq(unsigned int irq_nr)
+static void sun4c_mask_irq(struct irq_data *data)
{
- unsigned long flags;
- unsigned char current_mask, new_mask;
+ unsigned long mask = (unsigned long)data->chip_data;
- local_irq_save(flags);
- irq_nr &= (NR_IRQS - 1);
- current_mask = sbus_readb(interrupt_enable);
- switch (irq_nr) {
- case 1:
- new_mask = ((current_mask) & (~(SUN4C_INT_E1)));
- break;
- case 8:
- new_mask = ((current_mask) & (~(SUN4C_INT_E8)));
- break;
- case 10:
- new_mask = ((current_mask) & (~(SUN4C_INT_E10)));
- break;
- case 14:
- new_mask = ((current_mask) & (~(SUN4C_INT_E14)));
- break;
- default:
+ if (mask) {
+ unsigned long flags;
+
+ local_irq_save(flags);
+ mask = sbus_readb(interrupt_enable) & ~mask;
+ sbus_writeb(mask, interrupt_enable);
local_irq_restore(flags);
- return;
}
- sbus_writeb(new_mask, interrupt_enable);
- local_irq_restore(flags);
}
-static void sun4c_enable_irq(unsigned int irq_nr)
+static void sun4c_unmask_irq(struct irq_data *data)
{
- unsigned long flags;
- unsigned char current_mask, new_mask;
+ unsigned long mask = (unsigned long)data->chip_data;
- local_irq_save(flags);
- irq_nr &= (NR_IRQS - 1);
- current_mask = sbus_readb(interrupt_enable);
- switch (irq_nr) {
- case 1:
- new_mask = ((current_mask) | SUN4C_INT_E1);
- break;
- case 8:
- new_mask = ((current_mask) | SUN4C_INT_E8);
- break;
- case 10:
- new_mask = ((current_mask) | SUN4C_INT_E10);
- break;
- case 14:
- new_mask = ((current_mask) | SUN4C_INT_E14);
- break;
- default:
+ if (mask) {
+ unsigned long flags;
+
+ local_irq_save(flags);
+ mask = sbus_readb(interrupt_enable) | mask;
+ sbus_writeb(mask, interrupt_enable);
local_irq_restore(flags);
- return;
}
- sbus_writeb(new_mask, interrupt_enable);
- local_irq_restore(flags);
+}
+
+static unsigned int sun4c_startup_irq(struct irq_data *data)
+{
+ irq_link(data->irq);
+ sun4c_unmask_irq(data);
+
+ return 0;
+}
+
+static void sun4c_shutdown_irq(struct irq_data *data)
+{
+ sun4c_mask_irq(data);
+ irq_unlink(data->irq);
+}
+
+static struct irq_chip sun4c_irq = {
+ .name = "sun4c",
+ .irq_startup = sun4c_startup_irq,
+ .irq_shutdown = sun4c_shutdown_irq,
+ .irq_mask = sun4c_mask_irq,
+ .irq_unmask = sun4c_unmask_irq,
+};
+
+static unsigned int sun4c_build_device_irq(struct platform_device *op,
+ unsigned int real_irq)
+{
+ unsigned int irq;
+
+ if (real_irq >= 16) {
+ prom_printf("Bogus sun4c IRQ %u\n", real_irq);
+ prom_halt();
+ }
+
+ irq = irq_alloc(real_irq, real_irq);
+ if (irq) {
+ unsigned long mask = 0UL;
+
+ switch (real_irq) {
+ case 1:
+ mask = SUN4C_INT_E1;
+ break;
+ case 8:
+ mask = SUN4C_INT_E8;
+ break;
+ case 10:
+ mask = SUN4C_INT_E10;
+ break;
+ case 14:
+ mask = SUN4C_INT_E14;
+ break;
+ default:
+ /* All the rest are either always enabled,
+ * or are for signalling software interrupts.
+ */
+ break;
+ }
+ irq_set_chip_and_handler_name(irq, &sun4c_irq,
+ handle_level_irq, "level");
+ irq_set_chip_data(irq, (void *)mask);
+ }
+ return irq;
}
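A worked example of the mapping above (illustrative):

	unsigned int irq = sun4c_build_device_irq(NULL, 8);
	/* chip_data now holds SUN4C_INT_E8, so sun4c_mask_irq() clears
	 * that bit in *interrupt_enable and sun4c_unmask_irq() sets it;
	 * real_irqs outside {1, 8, 10, 14} get mask == 0 and both
	 * operations become no-ops, matching the old switch default */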
struct sun4c_timer_info {
@@ -144,8 +176,9 @@
static void __init sun4c_init_timers(irq_handler_t counter_fn)
{
- const struct linux_prom_irqs *irq;
+ const struct linux_prom_irqs *prom_irqs;
struct device_node *dp;
+ unsigned int irq;
const u32 *addr;
int err;
@@ -163,9 +196,9 @@
sun4c_timers = (void __iomem *) (unsigned long) addr[0];
- irq = of_get_property(dp, "intr", NULL);
+ prom_irqs = of_get_property(dp, "intr", NULL);
of_node_put(dp);
- if (!irq) {
+ if (!prom_irqs) {
prom_printf("sun4c_init_timers: No intr property\n");
prom_halt();
}
@@ -178,15 +211,15 @@
master_l10_counter = &sun4c_timers->l10_count;
- err = request_irq(irq[0].pri, counter_fn,
- (IRQF_DISABLED | SA_STATIC_ALLOC),
- "timer", NULL);
+ irq = sun4c_build_device_irq(NULL, prom_irqs[0].pri);
+ err = request_irq(irq, counter_fn, IRQF_TIMER, "timer", NULL);
if (err) {
prom_printf("sun4c_init_timers: request_irq() fails with %d\n", err);
prom_halt();
}
- sun4c_disable_irq(irq[1].pri);
+ /* disable timer interrupt */
+ sun4c_mask_irq(irq_get_irq_data(irq));
}
#ifdef CONFIG_SMP
@@ -215,14 +248,11 @@
interrupt_enable = (void __iomem *) (unsigned long) addr[0];
- BTFIXUPSET_CALL(enable_irq, sun4c_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_irq, sun4c_disable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(enable_pil_irq, sun4c_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_pil_irq, sun4c_disable_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(clear_clock_irq, sun4c_clear_clock_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(load_profile_irq, sun4c_load_profile_irq, BTFIXUPCALL_NOP);
- sparc_irq_config.init_timers = sun4c_init_timers;
+ sparc_irq_config.init_timers = sun4c_init_timers;
+ sparc_irq_config.build_device_irq = sun4c_build_device_irq;
#ifdef CONFIG_SMP
BTFIXUPSET_CALL(set_cpu_int, sun4c_nop, BTFIXUPCALL_NOP);
diff --git a/arch/sparc/kernel/sun4d_irq.c b/arch/sparc/kernel/sun4d_irq.c
index 77b4a89..a9ea60e 100644
--- a/arch/sparc/kernel/sun4d_irq.c
+++ b/arch/sparc/kernel/sun4d_irq.c
@@ -14,6 +14,7 @@
#include <asm/io.h>
#include <asm/sbi.h>
#include <asm/cacheflush.h>
+#include <asm/setup.h>
#include "kernel.h"
#include "irq.h"
@@ -22,22 +23,20 @@
* cpu local. CPU local interrupts cover the timer interrupts
* and whatnot, and we encode those as normal PILs between
* 0 and 15.
- *
- * SBUS interrupts are encoded integers including the board number
- * (plus one), the SBUS level, and the SBUS slot number. Sun4D
- * IRQ dispatch is done by:
- *
- * 1) Reading the BW local interrupt table in order to get the bus
- * interrupt mask.
- *
- * This table is indexed by SBUS interrupt level which can be
- * derived from the PIL we got interrupted on.
- *
- * 2) For each bus showing interrupt pending from #1, read the
- * SBI interrupt state register. This will indicate which slots
- * have interrupts pending for that SBUS interrupt level.
+ * SBUS interrupts are encoded as a combination of board, level, and slot.
*/
+struct sun4d_handler_data {
+ unsigned int cpuid; /* target cpu */
+ unsigned int real_irq; /* interrupt level */
+};
+
+
+static unsigned int sun4d_encode_irq(int board, int lvl, int slot)
+{
+ return (board + 1) << 5 | (lvl << 2) | slot;
+}
+
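A worked example of the encoding (numbers are illustrative):

	unsigned int pil = sun4d_encode_irq(2, 3, 1);
	/* = (2 + 1) << 5 | (3 << 2) | 1 = 96 | 12 | 1 = 109; every
	 * encoded board/level/slot triple lands above 31, clear of the
	 * CPU-local PILs 0-15 described in the comment above */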
struct sun4d_timer_regs {
u32 l10_timer_limit;
u32 l10_cur_countx;
@@ -48,17 +47,12 @@
static struct sun4d_timer_regs __iomem *sun4d_timers;
-#define TIMER_IRQ 10
+#define SUN4D_TIMER_IRQ 10
-#define MAX_STATIC_ALLOC 4
-static unsigned char sbus_tid[32];
-
-static struct irqaction *irq_action[NR_IRQS];
-
-static struct sbus_action {
- struct irqaction *action;
- /* For SMP this needs to be extended */
-} *sbus_actions;
+/* Specify which cpu handles interrupts from which board.
+ * Index is the board number; value is the cpu.
+ */
+static unsigned char board_to_cpu[32];
static int pil_to_sbus[] = {
0,
@@ -79,152 +73,81 @@
0,
};
-static int sbus_to_pil[] = {
- 0,
- 2,
- 3,
- 5,
- 7,
- 9,
- 11,
- 13,
-};
-
-static int nsbi;
-
/* Exported for sun4d_smp.c */
DEFINE_SPINLOCK(sun4d_imsk_lock);
-int show_sun4d_interrupts(struct seq_file *p, void *v)
+/* SBUS interrupts are encoded integers including the board number
+ * (plus one), the SBUS level, and the SBUS slot number. Sun4D
+ * IRQ dispatch is done by:
+ *
+ * 1) Reading the BW local interrupt table in order to get the bus
+ * interrupt mask.
+ *
+ * This table is indexed by SBUS interrupt level which can be
+ * derived from the PIL we got interrupted on.
+ *
+ * 2) For each bus showing interrupt pending from #1, read the
+ * SBI interrupt state register. This will indicate which slots
+ * have interrupts pending for that SBUS interrupt level.
+ *
+ * 3) Call the generic IRQ support.
+ */
+static void sun4d_sbus_handler_irq(int sbusl)
{
- int i = *(loff_t *) v, j = 0, k = 0, sbusl;
- struct irqaction *action;
- unsigned long flags;
-#ifdef CONFIG_SMP
- int x;
-#endif
+ unsigned int bus_mask;
+ unsigned int sbino, slot;
+ unsigned int sbil;
- spin_lock_irqsave(&irq_action_lock, flags);
- if (i < NR_IRQS) {
- sbusl = pil_to_sbus[i];
- if (!sbusl) {
- action = *(i + irq_action);
- if (!action)
- goto out_unlock;
- } else {
- for (j = 0; j < nsbi; j++) {
- for (k = 0; k < 4; k++)
- action = sbus_actions[(j << 5) + (sbusl << 2) + k].action;
- if (action)
- goto found_it;
- }
- goto out_unlock;
- }
-found_it: seq_printf(p, "%3d: ", i);
-#ifndef CONFIG_SMP
- seq_printf(p, "%10u ", kstat_irqs(i));
-#else
- for_each_online_cpu(x)
- seq_printf(p, "%10u ",
- kstat_cpu(cpu_logical_map(x)).irqs[i]);
-#endif
- seq_printf(p, "%c %s",
- (action->flags & IRQF_DISABLED) ? '+' : ' ',
- action->name);
- action = action->next;
- for (;;) {
- for (; action; action = action->next) {
- seq_printf(p, ",%s %s",
- (action->flags & IRQF_DISABLED) ? " +" : "",
- action->name);
- }
- if (!sbusl)
- break;
- k++;
- if (k < 4) {
- action = sbus_actions[(j << 5) + (sbusl << 2) + k].action;
- } else {
- j++;
- if (j == nsbi)
- break;
- k = 0;
- action = sbus_actions[(j << 5) + (sbusl << 2)].action;
- }
- }
- seq_putc(p, '\n');
- }
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
- return 0;
-}
+ bus_mask = bw_get_intr_mask(sbusl) & 0x3ffff;
+ bw_clear_intr_mask(sbusl, bus_mask);
-void sun4d_free_irq(unsigned int irq, void *dev_id)
-{
- struct irqaction *action, **actionp;
- struct irqaction *tmp = NULL;
- unsigned long flags;
+ sbil = (sbusl << 2);
+ /* Loop for each pending SBI */
+ for (sbino = 0; bus_mask; sbino++) {
+ unsigned int idx, mask;
- spin_lock_irqsave(&irq_action_lock, flags);
- if (irq < 15)
- actionp = irq + irq_action;
- else
- actionp = &(sbus_actions[irq - (1 << 5)].action);
- action = *actionp;
- if (!action) {
- printk(KERN_ERR "Trying to free free IRQ%d\n", irq);
- goto out_unlock;
- }
- if (dev_id) {
- for (; action; action = action->next) {
- if (action->dev_id == dev_id)
- break;
- tmp = action;
- }
- if (!action) {
- printk(KERN_ERR "Trying to free free shared IRQ%d\n",
- irq);
- goto out_unlock;
- }
- } else if (action->flags & IRQF_SHARED) {
- printk(KERN_ERR "Trying to free shared IRQ%d with NULL device ID\n",
- irq);
- goto out_unlock;
- }
- if (action->flags & SA_STATIC_ALLOC) {
- /*
- * This interrupt is marked as specially allocated
- * so it is a bad idea to free it.
+ bus_mask >>= 1;
+ if (!(bus_mask & 1))
+ continue;
+ /* XXX This seems to ACK the irq twice. acquire_sbi()
+ * XXX uses swap, therefore this writes 0xf << sbil,
+ * XXX then later release_sbi() will write the individual
+ * XXX bits which were set again.
*/
- printk(KERN_ERR "Attempt to free statically allocated IRQ%d (%s)\n",
- irq, action->name);
- goto out_unlock;
+ mask = acquire_sbi(SBI2DEVID(sbino), 0xf << sbil);
+ mask &= (0xf << sbil);
+
+ /* Loop for each pending SBI slot */
+ idx = 0;
+ slot = (1 << sbil);
+ while (mask != 0) {
+ unsigned int pil;
+ struct irq_bucket *p;
+
+ idx++;
+ slot <<= 1;
+ if (!(mask & slot))
+ continue;
+
+ mask &= ~slot;
+ pil = sun4d_encode_irq(sbino, sbusl, idx);
+
+ p = irq_map[pil];
+ while (p) {
+ struct irq_bucket *next;
+
+ next = p->next;
+ generic_handle_irq(p->irq);
+ p = next;
+ }
+ release_sbi(SBI2DEVID(sbino), slot);
+ }
}
-
- if (tmp)
- tmp->next = action->next;
- else
- *actionp = action->next;
-
- spin_unlock_irqrestore(&irq_action_lock, flags);
-
- synchronize_irq(irq);
-
- spin_lock_irqsave(&irq_action_lock, flags);
-
- kfree(action);
-
- if (!(*actionp))
- __disable_irq(irq);
-
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
}
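Both dispatch loops above and below walk irq_map[pil], a table introduced earlier in this series and not visible in this hunk. Judging from its use here, each bucket is roughly:

	struct irq_bucket {
		struct irq_bucket *next;	/* chain: PILs are shared */
		unsigned int real_irq;		/* hardware number */
		unsigned int irq;		/* virtual irq passed to
						 * generic_handle_irq() */
		unsigned int pil;		/* index into irq_map[] */
	};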
void sun4d_handler_irq(int pil, struct pt_regs *regs)
{
struct pt_regs *old_regs;
- struct irqaction *action;
- int cpu = smp_processor_id();
/* SBUS IRQ level (1 - 7) */
int sbusl = pil_to_sbus[pil];
@@ -233,160 +156,96 @@
cc_set_iclr(1 << pil);
+#ifdef CONFIG_SMP
+ /*
+ * Check IPI data structures after IRQ has been cleared. Hard and Soft
+ * IRQ can happen at the same time, so both cases are always handled.
+ */
+ if (pil == SUN4D_IPI_IRQ)
+ sun4d_ipi_interrupt();
+#endif
+
old_regs = set_irq_regs(regs);
irq_enter();
- kstat_cpu(cpu).irqs[pil]++;
- if (!sbusl) {
- action = *(pil + irq_action);
- if (!action)
- unexpected_irq(pil, NULL, regs);
- do {
- action->handler(pil, action->dev_id);
- action = action->next;
- } while (action);
+ if (sbusl == 0) {
+ /* cpu interrupt */
+ struct irq_bucket *p;
+
+ p = irq_map[pil];
+ while (p) {
+ struct irq_bucket *next;
+
+ next = p->next;
+ generic_handle_irq(p->irq);
+ p = next;
+ }
} else {
- int bus_mask = bw_get_intr_mask(sbusl) & 0x3ffff;
- int sbino;
- struct sbus_action *actionp;
- unsigned mask, slot;
- int sbil = (sbusl << 2);
-
- bw_clear_intr_mask(sbusl, bus_mask);
-
- /* Loop for each pending SBI */
- for (sbino = 0; bus_mask; sbino++, bus_mask >>= 1)
- if (bus_mask & 1) {
- mask = acquire_sbi(SBI2DEVID(sbino), 0xf << sbil);
- mask &= (0xf << sbil);
- actionp = sbus_actions + (sbino << 5) + (sbil);
- /* Loop for each pending SBI slot */
- for (slot = (1 << sbil); mask; slot <<= 1, actionp++)
- if (mask & slot) {
- mask &= ~slot;
- action = actionp->action;
-
- if (!action)
- unexpected_irq(pil, NULL, regs);
- do {
- action->handler(pil, action->dev_id);
- action = action->next;
- } while (action);
- release_sbi(SBI2DEVID(sbino), slot);
- }
- }
+ /* SBUS interrupt */
+ sun4d_sbus_handler_irq(sbusl);
}
irq_exit();
set_irq_regs(old_regs);
}
-int sun4d_request_irq(unsigned int irq,
- irq_handler_t handler,
- unsigned long irqflags, const char *devname, void *dev_id)
+
+static void sun4d_mask_irq(struct irq_data *data)
{
- struct irqaction *action, *tmp = NULL, **actionp;
+ struct sun4d_handler_data *handler_data = data->handler_data;
+ unsigned int real_irq;
+#ifdef CONFIG_SMP
+ int cpuid = handler_data->cpuid;
unsigned long flags;
- int ret;
-
- if (irq > 14 && irq < (1 << 5)) {
- ret = -EINVAL;
- goto out;
- }
-
- if (!handler) {
- ret = -EINVAL;
- goto out;
- }
-
- spin_lock_irqsave(&irq_action_lock, flags);
-
- if (irq >= (1 << 5))
- actionp = &(sbus_actions[irq - (1 << 5)].action);
- else
- actionp = irq + irq_action;
- action = *actionp;
-
- if (action) {
- if ((action->flags & IRQF_SHARED) && (irqflags & IRQF_SHARED)) {
- for (tmp = action; tmp->next; tmp = tmp->next)
- /* find last entry - tmp used below */;
- } else {
- ret = -EBUSY;
- goto out_unlock;
- }
- if ((action->flags & IRQF_DISABLED) ^ (irqflags & IRQF_DISABLED)) {
- printk(KERN_ERR "Attempt to mix fast and slow interrupts on IRQ%d denied\n",
- irq);
- ret = -EBUSY;
- goto out_unlock;
- }
- action = NULL; /* Or else! */
- }
-
- /* If this is flagged as statically allocated then we use our
- * private struct which is never freed.
- */
- if (irqflags & SA_STATIC_ALLOC) {
- if (static_irq_count < MAX_STATIC_ALLOC)
- action = &static_irqaction[static_irq_count++];
- else
- printk(KERN_ERR "Request for IRQ%d (%s) SA_STATIC_ALLOC failed using kmalloc\n",
- irq, devname);
- }
-
- if (action == NULL)
- action = kmalloc(sizeof(struct irqaction), GFP_ATOMIC);
-
- if (!action) {
- ret = -ENOMEM;
- goto out_unlock;
- }
-
- action->handler = handler;
- action->flags = irqflags;
- action->name = devname;
- action->next = NULL;
- action->dev_id = dev_id;
-
- if (tmp)
- tmp->next = action;
- else
- *actionp = action;
-
- __enable_irq(irq);
-
- ret = 0;
-out_unlock:
- spin_unlock_irqrestore(&irq_action_lock, flags);
-out:
- return ret;
-}
-
-static void sun4d_disable_irq(unsigned int irq)
-{
- int tid = sbus_tid[(irq >> 5) - 1];
- unsigned long flags;
-
- if (irq < NR_IRQS)
- return;
-
+#endif
+ real_irq = handler_data->real_irq;
+#ifdef CONFIG_SMP
spin_lock_irqsave(&sun4d_imsk_lock, flags);
- cc_set_imsk_other(tid, cc_get_imsk_other(tid) | (1 << sbus_to_pil[(irq >> 2) & 7]));
+ cc_set_imsk_other(cpuid, cc_get_imsk_other(cpuid) | (1 << real_irq));
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
+#else
+ cc_set_imsk(cc_get_imsk() | (1 << real_irq));
+#endif
}
-static void sun4d_enable_irq(unsigned int irq)
+static void sun4d_unmask_irq(struct irq_data *data)
{
- int tid = sbus_tid[(irq >> 5) - 1];
+ struct sun4d_handler_data *handler_data = data->handler_data;
+ unsigned int real_irq;
+#ifdef CONFIG_SMP
+ int cpuid = handler_data->cpuid;
unsigned long flags;
+#endif
+ real_irq = handler_data->real_irq;
- if (irq < NR_IRQS)
- return;
-
+#ifdef CONFIG_SMP
spin_lock_irqsave(&sun4d_imsk_lock, flags);
- cc_set_imsk_other(tid, cc_get_imsk_other(tid) & ~(1 << sbus_to_pil[(irq >> 2) & 7]));
+ cc_set_imsk_other(cpuid, cc_get_imsk_other(cpuid) & ~(1 << real_irq));
spin_unlock_irqrestore(&sun4d_imsk_lock, flags);
+#else
+ cc_set_imsk(cc_get_imsk() & ~(1 << real_irq));
+#endif
}
+static unsigned int sun4d_startup_irq(struct irq_data *data)
+{
+ irq_link(data->irq);
+ sun4d_unmask_irq(data);
+ return 0;
+}
+
+static void sun4d_shutdown_irq(struct irq_data *data)
+{
+ sun4d_mask_irq(data);
+ irq_unlink(data->irq);
+}
+
+struct irq_chip sun4d_irq = {
+ .name = "sun4d",
+ .irq_startup = sun4d_startup_irq,
+ .irq_shutdown = sun4d_shutdown_irq,
+ .irq_unmask = sun4d_unmask_irq,
+ .irq_mask = sun4d_mask_irq,
+};
+
#ifdef CONFIG_SMP
static void sun4d_set_cpu_int(int cpu, int level)
{
@@ -413,7 +272,7 @@
for_each_node_by_name(dp, "sbi") {
int devid = of_getintprop_default(dp, "device-id", 0);
int board = of_getintprop_default(dp, "board#", 0);
- sbus_tid[board] = cpuid;
+ board_to_cpu[board] = cpuid;
set_sbi_tid(devid, cpuid << 3);
}
printk(KERN_ERR "All sbus IRQs directed to CPU%d\n", cpuid);
@@ -443,15 +302,16 @@
unsigned int sun4d_build_device_irq(struct platform_device *op,
unsigned int real_irq)
{
- static int pil_to_sbus[] = {
- 0, 0, 1, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 0, 0,
- };
struct device_node *dp = op->dev.of_node;
struct device_node *io_unit, *sbi = dp->parent;
const struct linux_prom_registers *regs;
+ struct sun4d_handler_data *handler_data;
+ unsigned int pil;
+ unsigned int irq;
int board, slot;
int sbusl;
+ irq = 0;
while (sbi) {
if (!strcmp(sbi->name, "sbi"))
break;
@@ -484,7 +344,28 @@
sbusl = pil_to_sbus[real_irq];
if (sbusl)
- return (((board + 1) << 5) + (sbusl << 2) + slot);
+ pil = sun4d_encode_irq(board, sbusl, slot);
+ else
+ pil = real_irq;
+
+ irq = irq_alloc(real_irq, pil);
+ if (irq == 0)
+ goto err_out;
+
+ handler_data = irq_get_handler_data(irq);
+ if (unlikely(handler_data))
+ goto err_out;
+
+ handler_data = kzalloc(sizeof(struct sun4d_handler_data), GFP_ATOMIC);
+ if (unlikely(!handler_data)) {
+ prom_printf("IRQ: kzalloc(sun4d_handler_data) failed.\n");
+ prom_halt();
+ }
+ handler_data->cpuid = board_to_cpu[board];
+ handler_data->real_irq = real_irq;
+ irq_set_chip_and_handler_name(irq, &sun4d_irq,
+ handle_level_irq, "level");
+ irq_set_handler_data(irq, handler_data);
err_out:
return irq;
@@ -518,6 +399,7 @@
{
struct device_node *dp;
struct resource res;
+ unsigned int irq;
const u32 *reg;
int err;
@@ -552,9 +434,8 @@
master_l10_counter = &sun4d_timers->l10_cur_count;
- err = request_irq(TIMER_IRQ, counter_fn,
- (IRQF_DISABLED | SA_STATIC_ALLOC),
- "timer", NULL);
+ irq = sun4d_build_device_irq(NULL, SUN4D_TIMER_IRQ);
+ err = request_irq(irq, counter_fn, IRQF_TIMER, "timer", NULL);
if (err) {
prom_printf("sun4d_init_timers: request_irq() failed with %d\n",
err);
@@ -567,27 +448,16 @@
void __init sun4d_init_sbi_irq(void)
{
struct device_node *dp;
- int target_cpu = 0;
+ int target_cpu;
-#ifdef CONFIG_SMP
target_cpu = boot_cpu_id;
-#endif
-
- nsbi = 0;
- for_each_node_by_name(dp, "sbi")
- nsbi++;
- sbus_actions = kzalloc(nsbi * 8 * 4 * sizeof(struct sbus_action), GFP_ATOMIC);
- if (!sbus_actions) {
- prom_printf("SUN4D: Cannot allocate sbus_actions, halting.\n");
- prom_halt();
- }
for_each_node_by_name(dp, "sbi") {
int devid = of_getintprop_default(dp, "device-id", 0);
int board = of_getintprop_default(dp, "board#", 0);
unsigned int mask;
set_sbi_tid(devid, target_cpu << 3);
- sbus_tid[board] = target_cpu;
+ board_to_cpu[board] = target_cpu;
/* Get rid of pending irqs from PROM */
mask = acquire_sbi(devid, 0xffffffff);
@@ -603,12 +473,10 @@
{
local_irq_disable();
- BTFIXUPSET_CALL(enable_irq, sun4d_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_irq, sun4d_disable_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(clear_clock_irq, sun4d_clear_clock_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(load_profile_irq, sun4d_load_profile_irq, BTFIXUPCALL_NORM);
- sparc_irq_config.init_timers = sun4d_init_timers;
+ sparc_irq_config.init_timers = sun4d_init_timers;
sparc_irq_config.build_device_irq = sun4d_build_device_irq;
#ifdef CONFIG_SMP
diff --git a/arch/sparc/kernel/sun4d_smp.c b/arch/sparc/kernel/sun4d_smp.c
index 475d50b..1333879 100644
--- a/arch/sparc/kernel/sun4d_smp.c
+++ b/arch/sparc/kernel/sun4d_smp.c
@@ -32,6 +32,7 @@
return val;
}
+static void smp4d_ipi_init(void);
static void smp_setup_percpu_timer(void);
static unsigned char cpu_leds[32];
@@ -80,8 +81,6 @@
local_flush_cache_all();
local_flush_tlb_all();
- cpu_probe();
-
while ((unsigned long)current_set[cpuid] < PAGE_OFFSET)
barrier();
@@ -105,7 +104,7 @@
local_irq_enable(); /* We don't allow PIL 14 yet */
- while (!cpu_isset(cpuid, smp_commenced_mask))
+ while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
barrier();
spin_lock_irqsave(&sun4d_imsk_lock, flags);
@@ -120,6 +119,7 @@
*/
void __init smp4d_boot_cpus(void)
{
+ smp4d_ipi_init();
if (boot_cpu_id)
current_set[0] = NULL;
smp_setup_percpu_timer();
@@ -191,6 +191,80 @@
sun4d_distribute_irqs();
}
+/* Memory structure giving interrupt handler information about IPI generated */
+struct sun4d_ipi_work {
+ int single;
+ int msk;
+ int resched;
+};
+
+static DEFINE_PER_CPU_SHARED_ALIGNED(struct sun4d_ipi_work, sun4d_ipi_work);
+
+/* Initialize IPIs on the SUN4D SMP machine */
+static void __init smp4d_ipi_init(void)
+{
+ int cpu;
+ struct sun4d_ipi_work *work;
+
+ printk(KERN_INFO "smp4d: setup IPI at IRQ %d\n", SUN4D_IPI_IRQ);
+
+ for_each_possible_cpu(cpu) {
+ work = &per_cpu(sun4d_ipi_work, cpu);
+ work->single = work->msk = work->resched = 0;
+ }
+}
+
+void sun4d_ipi_interrupt(void)
+{
+ struct sun4d_ipi_work *work = &__get_cpu_var(sun4d_ipi_work);
+
+ if (work->single) {
+ work->single = 0;
+ smp_call_function_single_interrupt();
+ }
+ if (work->msk) {
+ work->msk = 0;
+ smp_call_function_interrupt();
+ }
+ if (work->resched) {
+ work->resched = 0;
+ smp_resched_interrupt();
+ }
+}
+
+static void smp4d_ipi_single(int cpu)
+{
+ struct sun4d_ipi_work *work = &per_cpu(sun4d_ipi_work, cpu);
+
+ /* Mark work */
+ work->single = 1;
+
+ /* Generate IRQ on the CPU */
+ sun4d_send_ipi(cpu, SUN4D_IPI_IRQ);
+}
+
+static void smp4d_ipi_mask_one(int cpu)
+{
+ struct sun4d_ipi_work *work = &per_cpu(sun4d_ipi_work, cpu);
+
+ /* Mark work */
+ work->msk = 1;
+
+ /* Generate IRQ on the CPU */
+ sun4d_send_ipi(cpu, SUN4D_IPI_IRQ);
+}
+
+static void smp4d_ipi_resched(int cpu)
+{
+ struct sun4d_ipi_work *work = &per_cpu(sun4d_ipi_work, cpu);
+
+ /* Mark work */
+ work->resched = 1;
+
+ /* Generate IRQ on the CPU (any IRQ will cause resched) */
+ sun4d_send_ipi(cpu, SUN4D_IPI_IRQ);
+}
+
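The IPI bookkeeping above leans on the per-cpu variable API of this kernel era: per_cpu() for a named CPU's copy, __get_cpu_var() for the local copy. A minimal sketch of that access pattern; the ipi_count variable is illustrative only, not part of the patch:

#include <linux/percpu.h>

static DEFINE_PER_CPU(int, ipi_count);

/* Local copy: __get_cpu_var() resolves to this CPU's instance, so the
 * caller must not migrate (e.g. it runs in interrupt context, as above). */
static void count_local_ipi(void)
{
	__get_cpu_var(ipi_count)++;
}

/* Remote copy: per_cpu(var, cpu) addresses another CPU's instance and
 * relies on whatever serialization the protocol provides (here, the IPI). */
static void count_remote_ipi(int cpu)
{
	per_cpu(ipi_count, cpu)++;
}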
static struct smp_funcall {
smpfunc_t func;
unsigned long arg1;
@@ -239,10 +313,10 @@
{
register int i;
- cpu_clear(smp_processor_id(), mask);
- cpus_and(mask, cpu_online_map, mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
+ cpumask_and(&mask, cpu_online_mask, &mask);
for (i = 0; i <= high; i++) {
- if (cpu_isset(i, mask)) {
+ if (cpumask_test_cpu(i, &mask)) {
ccall_info.processors_in[i] = 0;
ccall_info.processors_out[i] = 0;
sun4d_send_ipi(i, IRQ_CROSS_CALL);
@@ -255,7 +329,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_in[i])
barrier();
@@ -263,7 +337,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_out[i])
barrier();
@@ -356,6 +430,9 @@
BTFIXUPSET_BLACKBOX(load_current, smp4d_blackbox_current);
BTFIXUPSET_CALL(smp_cross_call, smp4d_cross_call, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(__hard_smp_processor_id, __smp4d_processor_id, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_resched, smp4d_ipi_resched, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_single, smp4d_ipi_single, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_mask_one, smp4d_ipi_mask_one, BTFIXUPCALL_NORM);
for (i = 0; i < NR_CPUS; i++) {
ccall_info.processors_in[i] = 1;
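The cross-call conversions in this file (and in sun4m_smp.c and init_64.c below) follow one mechanical mapping from the old by-value cpumask helpers to the pointer-based API: cpu_clear() becomes cpumask_clear_cpu(), cpus_and() becomes cpumask_and(), cpu_isset() becomes cpumask_test_cpu(), for_each_cpu_mask() becomes for_each_cpu(), and cpu_online_map is referenced through cpu_online_mask. A minimal sketch of the new style, assuming a caller-owned mask:

#include <linux/cpumask.h>
#include <linux/smp.h>

/* Drop the calling CPU from *mask, then intersect with the online map;
 * this mirrors the prologue of the cross-call paths converted above. */
static void prune_cross_call_mask(cpumask_t *mask)
{
	cpumask_clear_cpu(smp_processor_id(), mask);
	cpumask_and(mask, cpu_online_mask, mask);
}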
diff --git a/arch/sparc/kernel/sun4m_irq.c b/arch/sparc/kernel/sun4m_irq.c
index 69df625..422c16d 100644
--- a/arch/sparc/kernel/sun4m_irq.c
+++ b/arch/sparc/kernel/sun4m_irq.c
@@ -100,6 +100,11 @@
struct sun4m_irq_percpu __iomem *sun4m_irq_percpu[SUN4M_NCPUS];
struct sun4m_irq_global __iomem *sun4m_irq_global;
+struct sun4m_handler_data {
+ bool percpu;
+ long mask;
+};
+
/* Dave Redman (djhr@tadpole.co.uk)
* The sun4m interrupt registers.
*/
@@ -142,9 +147,9 @@
#define OBP_INT_LEVEL_VME 0x40
#define SUN4M_TIMER_IRQ (OBP_INT_LEVEL_ONBOARD | 10)
-#define SUM4M_PROFILE_IRQ (OBP_INT_LEVEL_ONBOARD | 14)
+#define SUN4M_PROFILE_IRQ (OBP_INT_LEVEL_ONBOARD | 14)
-static unsigned long irq_mask[0x50] = {
+static unsigned long sun4m_imask[0x50] = {
/* 0x00 - SMP */
0, SUN4M_SOFT_INT(1),
SUN4M_SOFT_INT(2), SUN4M_SOFT_INT(3),
@@ -169,7 +174,7 @@
SUN4M_INT_VIDEO, SUN4M_INT_MODULE,
SUN4M_INT_REALTIME, SUN4M_INT_FLOPPY,
(SUN4M_INT_SERIAL | SUN4M_INT_KBDMS),
- SUN4M_INT_AUDIO, 0, SUN4M_INT_MODULE_ERR,
+ SUN4M_INT_AUDIO, SUN4M_INT_E14, SUN4M_INT_MODULE_ERR,
/* 0x30 - sbus */
0, 0, SUN4M_INT_SBUS(0), SUN4M_INT_SBUS(1),
0, SUN4M_INT_SBUS(2), 0, SUN4M_INT_SBUS(3),
@@ -182,105 +187,110 @@
0, SUN4M_INT_VME(6), 0, 0
};
-static unsigned long sun4m_get_irqmask(unsigned int irq)
+static void sun4m_mask_irq(struct irq_data *data)
{
- unsigned long mask;
-
- if (irq < 0x50)
- mask = irq_mask[irq];
- else
- mask = 0;
-
- if (!mask)
- printk(KERN_ERR "sun4m_get_irqmask: IRQ%d has no valid mask!\n",
- irq);
-
- return mask;
-}
-
-static void sun4m_disable_irq(unsigned int irq_nr)
-{
- unsigned long mask, flags;
+ struct sun4m_handler_data *handler_data = data->handler_data;
int cpu = smp_processor_id();
- mask = sun4m_get_irqmask(irq_nr);
- local_irq_save(flags);
- if (irq_nr > 15)
- sbus_writel(mask, &sun4m_irq_global->mask_set);
- else
- sbus_writel(mask, &sun4m_irq_percpu[cpu]->set);
- local_irq_restore(flags);
-}
+ if (handler_data->mask) {
+ unsigned long flags;
-static void sun4m_enable_irq(unsigned int irq_nr)
-{
- unsigned long mask, flags;
- int cpu = smp_processor_id();
-
- /* Dreadful floppy hack. When we use 0x2b instead of
- * 0x0b the system blows (it starts to whistle!).
- * So we continue to use 0x0b. Fixme ASAP. --P3
- */
- if (irq_nr != 0x0b) {
- mask = sun4m_get_irqmask(irq_nr);
local_irq_save(flags);
- if (irq_nr > 15)
- sbus_writel(mask, &sun4m_irq_global->mask_clear);
- else
- sbus_writel(mask, &sun4m_irq_percpu[cpu]->clear);
- local_irq_restore(flags);
- } else {
- local_irq_save(flags);
- sbus_writel(SUN4M_INT_FLOPPY, &sun4m_irq_global->mask_clear);
+ if (handler_data->percpu) {
+ sbus_writel(handler_data->mask, &sun4m_irq_percpu[cpu]->set);
+ } else {
+ sbus_writel(handler_data->mask, &sun4m_irq_global->mask_set);
+ }
local_irq_restore(flags);
}
}
-static unsigned long cpu_pil_to_imask[16] = {
-/*0*/ 0x00000000,
-/*1*/ 0x00000000,
-/*2*/ SUN4M_INT_SBUS(0) | SUN4M_INT_VME(0),
-/*3*/ SUN4M_INT_SBUS(1) | SUN4M_INT_VME(1),
-/*4*/ SUN4M_INT_SCSI,
-/*5*/ SUN4M_INT_SBUS(2) | SUN4M_INT_VME(2),
-/*6*/ SUN4M_INT_ETHERNET,
-/*7*/ SUN4M_INT_SBUS(3) | SUN4M_INT_VME(3),
-/*8*/ SUN4M_INT_VIDEO,
-/*9*/ SUN4M_INT_SBUS(4) | SUN4M_INT_VME(4) | SUN4M_INT_MODULE_ERR,
-/*10*/ SUN4M_INT_REALTIME,
-/*11*/ SUN4M_INT_SBUS(5) | SUN4M_INT_VME(5) | SUN4M_INT_FLOPPY,
-/*12*/ SUN4M_INT_SERIAL | SUN4M_INT_KBDMS,
-/*13*/ SUN4M_INT_SBUS(6) | SUN4M_INT_VME(6) | SUN4M_INT_AUDIO,
-/*14*/ SUN4M_INT_E14,
-/*15*/ SUN4M_INT_ERROR,
-};
-
-/* We assume the caller has disabled local interrupts when these are called,
- * or else very bizarre behavior will result.
- */
-static void sun4m_disable_pil_irq(unsigned int pil)
+static void sun4m_unmask_irq(struct irq_data *data)
{
- sbus_writel(cpu_pil_to_imask[pil], &sun4m_irq_global->mask_set);
+ struct sun4m_handler_data *handler_data = data->handler_data;
+ int cpu = smp_processor_id();
+
+ if (handler_data->mask) {
+ unsigned long flags;
+
+ local_irq_save(flags);
+ if (handler_data->percpu) {
+ sbus_writel(handler_data->mask, &sun4m_irq_percpu[cpu]->clear);
+ } else {
+ sbus_writel(handler_data->mask, &sun4m_irq_global->mask_clear);
+ }
+ local_irq_restore(flags);
+ }
}
-static void sun4m_enable_pil_irq(unsigned int pil)
+static unsigned int sun4m_startup_irq(struct irq_data *data)
{
- sbus_writel(cpu_pil_to_imask[pil], &sun4m_irq_global->mask_clear);
+ irq_link(data->irq);
+ sun4m_unmask_irq(data);
+ return 0;
+}
+
+static void sun4m_shutdown_irq(struct irq_data *data)
+{
+ sun4m_mask_irq(data);
+ irq_unlink(data->irq);
+}
+
+static struct irq_chip sun4m_irq = {
+ .name = "sun4m",
+ .irq_startup = sun4m_startup_irq,
+ .irq_shutdown = sun4m_shutdown_irq,
+ .irq_mask = sun4m_mask_irq,
+ .irq_unmask = sun4m_unmask_irq,
+};
+
+
+static unsigned int sun4m_build_device_irq(struct platform_device *op,
+ unsigned int real_irq)
+{
+ struct sun4m_handler_data *handler_data;
+ unsigned int irq;
+ unsigned int pil;
+
+ if (real_irq >= OBP_INT_LEVEL_VME) {
+ prom_printf("Bogus sun4m IRQ %u\n", real_irq);
+ prom_halt();
+ }
+ pil = (real_irq & 0xf);
+ irq = irq_alloc(real_irq, pil);
+
+ if (irq == 0)
+ goto out;
+
+ handler_data = irq_get_handler_data(irq);
+ if (unlikely(handler_data))
+ goto out;
+
+ handler_data = kzalloc(sizeof(struct sun4m_handler_data), GFP_ATOMIC);
+ if (unlikely(!handler_data)) {
+ prom_printf("IRQ: kzalloc(sun4m_handler_data) failed.\n");
+ prom_halt();
+ }
+
+ handler_data->mask = sun4m_imask[real_irq];
+ handler_data->percpu = real_irq < OBP_INT_LEVEL_ONBOARD;
+ irq_set_chip_and_handler_name(irq, &sun4m_irq,
+ handle_level_irq, "level");
+ irq_set_handler_data(irq, handler_data);
+
+out:
+ return irq;
}
#ifdef CONFIG_SMP
static void sun4m_send_ipi(int cpu, int level)
{
- unsigned long mask = sun4m_get_irqmask(level);
-
- sbus_writel(mask, &sun4m_irq_percpu[cpu]->set);
+ sbus_writel(SUN4M_SOFT_INT(level), &sun4m_irq_percpu[cpu]->set);
}
static void sun4m_clear_ipi(int cpu, int level)
{
- unsigned long mask = sun4m_get_irqmask(level);
-
- sbus_writel(mask, &sun4m_irq_percpu[cpu]->clear);
+ sbus_writel(SUN4M_SOFT_INT(level), &sun4m_irq_percpu[cpu]->clear);
}
static void sun4m_set_udt(int cpu)
@@ -343,7 +353,15 @@
prom_halt();
}
-/* Exported for sun4m_smp.c */
+void sun4m_unmask_profile_irq(void)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ sbus_writel(sun4m_imask[SUN4M_PROFILE_IRQ], &sun4m_irq_global->mask_clear);
+ local_irq_restore(flags);
+}
+
void sun4m_clear_profile_irq(int cpu)
{
sbus_readl(&timers_percpu[cpu]->l14_limit);
@@ -358,6 +376,7 @@
{
struct device_node *dp = of_find_node_by_name(NULL, "counter");
int i, err, len, num_cpu_timers;
+ unsigned int irq;
const u32 *addr;
if (!dp) {
@@ -384,8 +403,9 @@
master_l10_counter = &timers_global->l10_count;
- err = request_irq(SUN4M_TIMER_IRQ, counter_fn,
- (IRQF_DISABLED | SA_STATIC_ALLOC), "timer", NULL);
+ irq = sun4m_build_device_irq(NULL, SUN4M_TIMER_IRQ);
+
+ err = request_irq(irq, counter_fn, IRQF_TIMER, "timer", NULL);
if (err) {
printk(KERN_ERR "sun4m_init_timers: Register IRQ error %d.\n",
err);
@@ -452,14 +472,11 @@
if (num_cpu_iregs == 4)
sbus_writel(0, &sun4m_irq_global->interrupt_target);
- BTFIXUPSET_CALL(enable_irq, sun4m_enable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_irq, sun4m_disable_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(enable_pil_irq, sun4m_enable_pil_irq, BTFIXUPCALL_NORM);
- BTFIXUPSET_CALL(disable_pil_irq, sun4m_disable_pil_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(clear_clock_irq, sun4m_clear_clock_irq, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(load_profile_irq, sun4m_load_profile_irq, BTFIXUPCALL_NORM);
sparc_irq_config.init_timers = sun4m_init_timers;
+ sparc_irq_config.build_device_irq = sun4m_build_device_irq;
#ifdef CONFIG_SMP
BTFIXUPSET_CALL(set_cpu_int, sun4m_send_ipi, BTFIXUPCALL_NORM);
diff --git a/arch/sparc/kernel/sun4m_smp.c b/arch/sparc/kernel/sun4m_smp.c
index 5cc7dc5..5947686 100644
--- a/arch/sparc/kernel/sun4m_smp.c
+++ b/arch/sparc/kernel/sun4m_smp.c
@@ -15,6 +15,9 @@
#include "irq.h"
#include "kernel.h"
+#define IRQ_IPI_SINGLE 12
+#define IRQ_IPI_MASK 13
+#define IRQ_IPI_RESCHED 14
#define IRQ_CROSS_CALL 15
static inline unsigned long
@@ -26,6 +29,7 @@
return val;
}
+static void smp4m_ipi_init(void);
static void smp_setup_percpu_timer(void);
void __cpuinit smp4m_callin(void)
@@ -59,8 +63,6 @@
local_flush_cache_all();
local_flush_tlb_all();
- cpu_probe();
-
/* Fix idle thread fields. */
__asm__ __volatile__("ld [%0], %%g6\n\t"
: : "r" (¤t_set[cpuid])
@@ -70,7 +72,7 @@
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
- while (!cpu_isset(cpuid, smp_commenced_mask))
+ while (!cpumask_test_cpu(cpuid, &smp_commenced_mask))
mb();
local_irq_enable();
@@ -83,6 +85,7 @@
*/
void __init smp4m_boot_cpus(void)
{
+ smp4m_ipi_init();
smp_setup_percpu_timer();
local_flush_cache_all();
}
@@ -150,18 +153,25 @@
/* Ok, they are spinning and ready to go. */
}
-/* At each hardware IRQ, we get this called to forward IRQ reception
- * to the next processor. The caller must disable the IRQ level being
- * serviced globally so that there are no double interrupts received.
- *
- * XXX See sparc64 irq.c.
- */
-void smp4m_irq_rotate(int cpu)
-{
- int next = cpu_data(cpu).next;
- if (next != cpu)
- set_irq_udt(next);
+/* Initialize IPIs on the SUN4M SMP machine */
+static void __init smp4m_ipi_init(void)
+{
+}
+
+static void smp4m_ipi_resched(int cpu)
+{
+ set_cpu_int(cpu, IRQ_IPI_RESCHED);
+}
+
+static void smp4m_ipi_single(int cpu)
+{
+ set_cpu_int(cpu, IRQ_IPI_SINGLE);
+}
+
+static void smp4m_ipi_mask_one(int cpu)
+{
+ set_cpu_int(cpu, IRQ_IPI_MASK);
}
static struct smp_funcall {
@@ -199,10 +209,10 @@
{
register int i;
- cpu_clear(smp_processor_id(), mask);
- cpus_and(mask, cpu_online_map, mask);
+ cpumask_clear_cpu(smp_processor_id(), &mask);
+ cpumask_and(&mask, cpu_online_mask, &mask);
for (i = 0; i < ncpus; i++) {
- if (cpu_isset(i, mask)) {
+ if (cpumask_test_cpu(i, &mask)) {
ccall_info.processors_in[i] = 0;
ccall_info.processors_out[i] = 0;
set_cpu_int(i, IRQ_CROSS_CALL);
@@ -218,7 +228,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_in[i])
barrier();
@@ -226,7 +236,7 @@
i = 0;
do {
- if (!cpu_isset(i, mask))
+ if (!cpumask_test_cpu(i, &mask))
continue;
while (!ccall_info.processors_out[i])
barrier();
@@ -277,7 +287,7 @@
load_profile_irq(cpu, lvl14_resolution);
if (cpu == boot_cpu_id)
- enable_pil_irq(14);
+ sun4m_unmask_profile_irq();
}
static void __init smp4m_blackbox_id(unsigned *addr)
@@ -306,4 +316,7 @@
BTFIXUPSET_BLACKBOX(load_current, smp4m_blackbox_current);
BTFIXUPSET_CALL(smp_cross_call, smp4m_cross_call, BTFIXUPCALL_NORM);
BTFIXUPSET_CALL(__hard_smp_processor_id, __smp4m_processor_id, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_resched, smp4m_ipi_resched, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_single, smp4m_ipi_single, BTFIXUPCALL_NORM);
+ BTFIXUPSET_CALL(smp_ipi_mask_one, smp4m_ipi_mask_one, BTFIXUPCALL_NORM);
}
diff --git a/arch/sparc/kernel/sysfs.c b/arch/sparc/kernel/sysfs.c
index 1eb8b00..7408201 100644
--- a/arch/sparc/kernel/sysfs.c
+++ b/arch/sparc/kernel/sysfs.c
@@ -103,9 +103,10 @@
unsigned long (*func)(unsigned long),
unsigned long arg)
{
- cpumask_t old_affinity = current->cpus_allowed;
+ cpumask_t old_affinity;
unsigned long ret;
+ cpumask_copy(&old_affinity, tsk_cpus_allowed(current));
/* should return -EINVAL to userspace */
if (set_cpus_allowed_ptr(current, cpumask_of(cpu)))
return 0;
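The sysfs hunk above and the cpufreq hunks below repeat one pattern: save the task's affinity via cpumask_copy()/tsk_cpus_allowed() rather than a direct struct copy, pin the task to the target CPU, do the work there, then restore. A condensed sketch, assuming a sleepable caller; run_on_cpu_sketch and fn are illustrative names:

#include <linux/cpumask.h>
#include <linux/sched.h>

static long run_on_cpu_sketch(int cpu, long (*fn)(void))
{
	cpumask_t old_affinity;
	long ret;

	cpumask_copy(&old_affinity, tsk_cpus_allowed(current));
	if (set_cpus_allowed_ptr(current, cpumask_of(cpu)))
		return -EINVAL;		/* target CPU went away */
	ret = fn();			/* now guaranteed to run on 'cpu' */
	set_cpus_allowed_ptr(current, &old_affinity);
	return ret;
}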
diff --git a/arch/sparc/kernel/time_32.c b/arch/sparc/kernel/time_32.c
index 4e23639..1060e06 100644
--- a/arch/sparc/kernel/time_32.c
+++ b/arch/sparc/kernel/time_32.c
@@ -168,7 +168,7 @@
return 0;
}
-static struct of_device_id __initdata clock_match[] = {
+static struct of_device_id clock_match[] = {
{
.name = "eeprom",
},
@@ -228,14 +228,10 @@
void __init time_init(void)
{
-#ifdef CONFIG_PCI
- extern void pci_time_init(void);
- if (pcic_present()) {
+ if (pcic_present())
pci_time_init();
- return;
- }
-#endif
- sbus_time_init();
+ else
+ sbus_time_init();
}
diff --git a/arch/sparc/kernel/us2e_cpufreq.c b/arch/sparc/kernel/us2e_cpufreq.c
index 8f982b7..531d54f 100644
--- a/arch/sparc/kernel/us2e_cpufreq.c
+++ b/arch/sparc/kernel/us2e_cpufreq.c
@@ -237,7 +237,7 @@
if (!cpu_online(cpu))
return 0;
- cpus_allowed = current->cpus_allowed;
+ cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
clock_tick = sparc64_get_clock_tick(cpu) / 1000;
@@ -258,7 +258,7 @@
if (!cpu_online(cpu))
return;
- cpus_allowed = current->cpus_allowed;
+ cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
new_freq = clock_tick = sparc64_get_clock_tick(cpu) / 1000;
diff --git a/arch/sparc/kernel/us3_cpufreq.c b/arch/sparc/kernel/us3_cpufreq.c
index f35d1e7..9a8ceb7 100644
--- a/arch/sparc/kernel/us3_cpufreq.c
+++ b/arch/sparc/kernel/us3_cpufreq.c
@@ -85,7 +85,7 @@
if (!cpu_online(cpu))
return 0;
- cpus_allowed = current->cpus_allowed;
+ cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
reg = read_safari_cfg();
@@ -105,7 +105,7 @@
if (!cpu_online(cpu))
return;
- cpus_allowed = current->cpus_allowed;
+ cpumask_copy(&cpus_allowed, tsk_cpus_allowed(current));
set_cpus_allowed_ptr(current, cpumask_of(cpu));
new_freq = sparc64_get_clock_tick(cpu) / 1000;
diff --git a/arch/sparc/lib/Makefile b/arch/sparc/lib/Makefile
index 846d1c4..7f01b8f 100644
--- a/arch/sparc/lib/Makefile
+++ b/arch/sparc/lib/Makefile
@@ -15,7 +15,6 @@
lib-$(CONFIG_SPARC32) += copy_user.o locks.o
lib-y += atomic_$(BITS).o
lib-$(CONFIG_SPARC32) += lshrdi3.o ashldi3.o
-lib-$(CONFIG_SPARC32) += rwsem_32.o
lib-$(CONFIG_SPARC32) += muldi3.o bitext.o cmpdi2.o
lib-$(CONFIG_SPARC64) += copy_page.o clear_page.o bzero.o
diff --git a/arch/sparc/lib/checksum_32.S b/arch/sparc/lib/checksum_32.S
index 3632cb3..0084c33 100644
--- a/arch/sparc/lib/checksum_32.S
+++ b/arch/sparc/lib/checksum_32.S
@@ -289,10 +289,16 @@
/* Also, handle the alignment code out of band. */
cc_dword_align:
- cmp %g1, 6
- bl,a ccte
+ cmp %g1, 16
+ bge 1f
+ srl %g1, 1, %o3
+2: cmp %o3, 0
+ be,a ccte
andcc %g1, 0xf, %o3
- andcc %o0, 0x1, %g0
+ andcc %o3, %o0, %g0 ! Check %o0 only (%o1 has the same last 2 bits)
+ be,a 2b
+ srl %o3, 1, %o3
+1: andcc %o0, 0x1, %g0
bne ccslow
andcc %o0, 0x2, %g0
be 1f
diff --git a/arch/sparc/lib/rwsem_32.S b/arch/sparc/lib/rwsem_32.S
deleted file mode 100644
index 9675268..0000000
--- a/arch/sparc/lib/rwsem_32.S
+++ /dev/null
@@ -1,204 +0,0 @@
-/*
- * Assembly part of rw semaphores.
- *
- * Copyright (C) 1999 Jakub Jelinek (jakub@redhat.com)
- */
-
-#include <asm/ptrace.h>
-#include <asm/psr.h>
-
- .section .sched.text, "ax"
- .align 4
-
- .globl ___down_read
-___down_read:
- rd %psr, %g3
- nop
- nop
- nop
- or %g3, PSR_PIL, %g7
- wr %g7, 0, %psr
- nop
- nop
- nop
-#ifdef CONFIG_SMP
-1: ldstub [%g1 + 4], %g7
- tst %g7
- bne 1b
- ld [%g1], %g7
- sub %g7, 1, %g7
- st %g7, [%g1]
- stb %g0, [%g1 + 4]
-#else
- ld [%g1], %g7
- sub %g7, 1, %g7
- st %g7, [%g1]
-#endif
- wr %g3, 0, %psr
- add %g7, 1, %g7
- nop
- nop
- subcc %g7, 1, %g7
- bneg 3f
- nop
-2: jmpl %o7, %g0
- mov %g4, %o7
-3: save %sp, -64, %sp
- mov %g1, %l1
- mov %g4, %l4
- bcs 4f
- mov %g5, %l5
- call down_read_failed
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba ___down_read
- restore %l5, %g0, %g5
-4: call down_read_failed_biased
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba 2b
- restore %l5, %g0, %g5
-
- .globl ___down_write
-___down_write:
- rd %psr, %g3
- nop
- nop
- nop
- or %g3, PSR_PIL, %g7
- wr %g7, 0, %psr
- sethi %hi(0x01000000), %g2
- nop
- nop
-#ifdef CONFIG_SMP
-1: ldstub [%g1 + 4], %g7
- tst %g7
- bne 1b
- ld [%g1], %g7
- sub %g7, %g2, %g7
- st %g7, [%g1]
- stb %g0, [%g1 + 4]
-#else
- ld [%g1], %g7
- sub %g7, %g2, %g7
- st %g7, [%g1]
-#endif
- wr %g3, 0, %psr
- add %g7, %g2, %g7
- nop
- nop
- subcc %g7, %g2, %g7
- bne 3f
- nop
-2: jmpl %o7, %g0
- mov %g4, %o7
-3: save %sp, -64, %sp
- mov %g1, %l1
- mov %g4, %l4
- bcs 4f
- mov %g5, %l5
- call down_write_failed
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba ___down_write
- restore %l5, %g0, %g5
-4: call down_write_failed_biased
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba 2b
- restore %l5, %g0, %g5
-
- .text
- .globl ___up_read
-___up_read:
- rd %psr, %g3
- nop
- nop
- nop
- or %g3, PSR_PIL, %g7
- wr %g7, 0, %psr
- nop
- nop
- nop
-#ifdef CONFIG_SMP
-1: ldstub [%g1 + 4], %g7
- tst %g7
- bne 1b
- ld [%g1], %g7
- add %g7, 1, %g7
- st %g7, [%g1]
- stb %g0, [%g1 + 4]
-#else
- ld [%g1], %g7
- add %g7, 1, %g7
- st %g7, [%g1]
-#endif
- wr %g3, 0, %psr
- nop
- nop
- nop
- cmp %g7, 0
- be 3f
- nop
-2: jmpl %o7, %g0
- mov %g4, %o7
-3: save %sp, -64, %sp
- mov %g1, %l1
- mov %g4, %l4
- mov %g5, %l5
- clr %o1
- call __rwsem_wake
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba 2b
- restore %l5, %g0, %g5
-
- .globl ___up_write
-___up_write:
- rd %psr, %g3
- nop
- nop
- nop
- or %g3, PSR_PIL, %g7
- wr %g7, 0, %psr
- sethi %hi(0x01000000), %g2
- nop
- nop
-#ifdef CONFIG_SMP
-1: ldstub [%g1 + 4], %g7
- tst %g7
- bne 1b
- ld [%g1], %g7
- add %g7, %g2, %g7
- st %g7, [%g1]
- stb %g0, [%g1 + 4]
-#else
- ld [%g1], %g7
- add %g7, %g2, %g7
- st %g7, [%g1]
-#endif
- wr %g3, 0, %psr
- sub %g7, %g2, %g7
- nop
- nop
- addcc %g7, %g2, %g7
- bcs 3f
- nop
-2: jmpl %o7, %g0
- mov %g4, %o7
-3: save %sp, -64, %sp
- mov %g1, %l1
- mov %g4, %l4
- mov %g5, %l5
- mov %g7, %o1
- call __rwsem_wake
- mov %l1, %o0
- mov %l1, %g1
- mov %l4, %g4
- ba 2b
- restore %l5, %g0, %g5
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 2f6ae1d..e10cd03 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -862,7 +862,7 @@
for (i = 0; i < NR_CPUS; i++)
numa_cpu_lookup_table[i] = 0;
- numa_cpumask_lookup_table[0] = CPU_MASK_ALL;
+ cpumask_setall(&numa_cpumask_lookup_table[0]);
}
#ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -1080,7 +1080,7 @@
{
u64 arc;
- cpus_clear(*mask);
+ cpumask_clear(mask);
mdesc_for_each_arc(arc, md, grp, MDESC_ARC_TYPE_BACK) {
u64 target = mdesc_arc_target(md, arc);
@@ -1091,7 +1091,7 @@
continue;
id = mdesc_get_property(md, target, "id", NULL);
if (*id < nr_cpu_ids)
- cpu_set(*id, *mask);
+ cpumask_set_cpu(*id, mask);
}
}
@@ -1153,13 +1153,13 @@
numa_parse_mdesc_group_cpus(md, grp, &mask);
- for_each_cpu_mask(cpu, mask)
+ for_each_cpu(cpu, &mask)
numa_cpu_lookup_table[cpu] = index;
- numa_cpumask_lookup_table[index] = mask;
+ cpumask_copy(&numa_cpumask_lookup_table[index], &mask);
if (numa_debug) {
printk(KERN_INFO "NUMA GROUP[%d]: cpus [ ", index);
- for_each_cpu_mask(cpu, mask)
+ for_each_cpu(cpu, &mask)
printk("%d ", cpu);
printk("]\n");
}
@@ -1218,7 +1218,7 @@
index = 0;
for_each_present_cpu(cpu) {
numa_cpu_lookup_table[cpu] = index;
- numa_cpumask_lookup_table[index] = cpumask_of_cpu(cpu);
+ cpumask_copy(&numa_cpumask_lookup_table[index], cpumask_of(cpu));
node_masks[index].mask = ~((1UL << 36UL) - 1UL);
node_masks[index].val = cpu << 36UL;
diff --git a/arch/um/Kconfig.x86 b/arch/um/Kconfig.x86
index 02fb017..a9da516 100644
--- a/arch/um/Kconfig.x86
+++ b/arch/um/Kconfig.x86
@@ -4,6 +4,10 @@
menu "Host processor type and features"
+config CMPXCHG_LOCAL
+ bool
+ default n
+
source "arch/x86/Kconfig.cpu"
endmenu
diff --git a/arch/um/include/asm/bug.h b/arch/um/include/asm/bug.h
new file mode 100644
index 0000000..9e33b86
--- /dev/null
+++ b/arch/um/include/asm/bug.h
@@ -0,0 +1,6 @@
+#ifndef __UM_BUG_H
+#define __UM_BUG_H
+
+#include <asm-generic/bug.h>
+
+#endif
diff --git a/arch/x86/include/asm/gart.h b/arch/x86/include/asm/gart.h
index 43085bf..156cd5d 100644
--- a/arch/x86/include/asm/gart.h
+++ b/arch/x86/include/asm/gart.h
@@ -66,7 +66,7 @@
* Don't enable translation but enable GART IO and CPU accesses.
* DISTLBWALKPRB is set later, together with GARTEN, since GART tables memory is UC.
*/
- ctl = DISTLBWALKPRB | order << 1;
+ ctl = order << 1;
pci_write_config_dword(dev, AMD64_GARTAPERTURECTL, ctl);
}
@@ -75,17 +75,17 @@
{
u32 tmp, ctl;
- /* address of the mappings table */
- addr >>= 12;
- tmp = (u32) addr<<4;
- tmp &= ~0xf;
- pci_write_config_dword(dev, AMD64_GARTTABLEBASE, tmp);
+ /* address of the mappings table */
+ addr >>= 12;
+ tmp = (u32) addr<<4;
+ tmp &= ~0xf;
+ pci_write_config_dword(dev, AMD64_GARTTABLEBASE, tmp);
- /* Enable GART translation for this hammer. */
- pci_read_config_dword(dev, AMD64_GARTAPERTURECTL, &ctl);
- ctl |= GARTEN;
- ctl &= ~(DISGARTCPU | DISGARTIO);
- pci_write_config_dword(dev, AMD64_GARTAPERTURECTL, ctl);
+ /* Enable GART translation for this hammer. */
+ pci_read_config_dword(dev, AMD64_GARTAPERTURECTL, &ctl);
+ ctl |= GARTEN | DISTLBWALKPRB;
+ ctl &= ~(DISGARTCPU | DISGARTIO);
+ pci_write_config_dword(dev, AMD64_GARTAPERTURECTL, ctl);
}
static inline int aperture_valid(u64 aper_base, u32 aper_size, u32 min_size)
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index fd5a1f3..3cce714 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -96,11 +96,15 @@
#define MSR_IA32_MC0_ADDR 0x00000402
#define MSR_IA32_MC0_MISC 0x00000403
+#define MSR_AMD64_MC0_MASK 0xc0010044
+
#define MSR_IA32_MCx_CTL(x) (MSR_IA32_MC0_CTL + 4*(x))
#define MSR_IA32_MCx_STATUS(x) (MSR_IA32_MC0_STATUS + 4*(x))
#define MSR_IA32_MCx_ADDR(x) (MSR_IA32_MC0_ADDR + 4*(x))
#define MSR_IA32_MCx_MISC(x) (MSR_IA32_MC0_MISC + 4*(x))
+#define MSR_AMD64_MCx_MASK(x) (MSR_AMD64_MC0_MASK + (x))
+
/* These are consecutive and not in the normal 4er MCE bank block */
#define MSR_IA32_MC0_CTL2 0x00000280
#define MSR_IA32_MCx_CTL2(x) (MSR_IA32_MC0_CTL2 + (x))
diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c
index 86d1ad4..73fb469 100644
--- a/arch/x86/kernel/aperture_64.c
+++ b/arch/x86/kernel/aperture_64.c
@@ -499,7 +499,7 @@
* Don't enable translation yet but enable GART IO and CPU
* accesses. DISTLBWALKPRB is set later, together with GARTEN.
*/
- u32 ctl = DISTLBWALKPRB | aper_order << 1;
+ u32 ctl = aper_order << 1;
bus = amd_nb_bus_dev_ranges[i].bus;
dev_base = amd_nb_bus_dev_ranges[i].dev_base;
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 3ecece0..3532d3b 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -615,6 +615,25 @@
/* As a rule processors have APIC timer running in deep C states */
if (c->x86 >= 0xf && !cpu_has_amd_erratum(amd_erratum_400))
set_cpu_cap(c, X86_FEATURE_ARAT);
+
+ /*
+ * Disable GART TLB Walk Errors on Fam10h. We do this here
+ * because this is always needed when GART is enabled, even in a
+ * kernel which has no MCE support built in.
+ */
+ if (c->x86 == 0x10) {
+ /*
+ * The BIOS should disable GartTlbWlk errors itself. If
+ * it doesn't, do it here, as suggested by the BKDG.
+ *
+ * Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=33012
+ */
+ u64 mask;
+
+ rdmsrl(MSR_AMD64_MCx_MASK(4), mask);
+ mask |= (1 << 10);
+ wrmsrl(MSR_AMD64_MCx_MASK(4), mask);
+ }
}
#ifdef CONFIG_X86_32
diff --git a/arch/x86/kernel/cpu/perf_event_amd.c b/arch/x86/kernel/cpu/perf_event_amd.c
index 461f62b..cf4e369 100644
--- a/arch/x86/kernel/cpu/perf_event_amd.c
+++ b/arch/x86/kernel/cpu/perf_event_amd.c
@@ -8,7 +8,7 @@
[ C(L1D) ] = {
[ C(OP_READ) ] = {
[ C(RESULT_ACCESS) ] = 0x0040, /* Data Cache Accesses */
- [ C(RESULT_MISS) ] = 0x0041, /* Data Cache Misses */
+ [ C(RESULT_MISS) ] = 0x0141, /* Data Cache Misses */
},
[ C(OP_WRITE) ] = {
[ C(RESULT_ACCESS) ] = 0x0142, /* Data Cache Refills :system */
@@ -427,7 +427,9 @@
*
* Exceptions:
*
+ * 0x000 FP PERF_CTL[3], PERF_CTL[5:3] (*)
* 0x003 FP PERF_CTL[3]
+ * 0x004 FP PERF_CTL[3], PERF_CTL[5:3] (*)
* 0x00B FP PERF_CTL[3]
* 0x00D FP PERF_CTL[3]
* 0x023 DE PERF_CTL[2:0]
@@ -448,6 +450,8 @@
* 0x0DF LS PERF_CTL[5:0]
* 0x1D6 EX PERF_CTL[5:0]
* 0x1D8 EX PERF_CTL[5:0]
+ *
+ * (*) depending on the umask all FPU counters may be used
*/
static struct event_constraint amd_f15_PMC0 = EVENT_CONSTRAINT(0, 0x01, 0);
@@ -460,18 +464,28 @@
static struct event_constraint *
amd_get_event_constraints_f15h(struct cpu_hw_events *cpuc, struct perf_event *event)
{
- unsigned int event_code = amd_get_event_code(&event->hw);
+ struct hw_perf_event *hwc = &event->hw;
+ unsigned int event_code = amd_get_event_code(hwc);
switch (event_code & AMD_EVENT_TYPE_MASK) {
case AMD_EVENT_FP:
switch (event_code) {
+ case 0x000:
+ if (!(hwc->config & 0x0000F000ULL))
+ break;
+ if (!(hwc->config & 0x00000F00ULL))
+ break;
+ return &amd_f15_PMC3;
+ case 0x004:
+ if (hweight_long(hwc->config & ARCH_PERFMON_EVENTSEL_UMASK) <= 1)
+ break;
+ return &amd_f15_PMC3;
case 0x003:
case 0x00B:
case 0x00D:
return &amd_f15_PMC3;
- default:
- return &amd_f15_PMC53;
}
+ return &amd_f15_PMC53;
case AMD_EVENT_LS:
case AMD_EVENT_DC:
case AMD_EVENT_EX_LS:
diff --git a/arch/x86/kernel/pci-gart_64.c b/arch/x86/kernel/pci-gart_64.c
index 82ada01..b117efd 100644
--- a/arch/x86/kernel/pci-gart_64.c
+++ b/arch/x86/kernel/pci-gart_64.c
@@ -81,6 +81,9 @@
#define AGPEXTERN
#endif
+/* GART can only remap to physical addresses < 1TB */
+#define GART_MAX_PHYS_ADDR (1ULL << 40)
+
/* backdoor interface to AGP driver */
AGPEXTERN int agp_memory_reserved;
AGPEXTERN __u32 *agp_gatt_table;
@@ -212,9 +215,13 @@
size_t size, int dir, unsigned long align_mask)
{
unsigned long npages = iommu_num_pages(phys_mem, size, PAGE_SIZE);
- unsigned long iommu_page = alloc_iommu(dev, npages, align_mask);
+ unsigned long iommu_page;
int i;
+ if (unlikely(phys_mem + size > GART_MAX_PHYS_ADDR))
+ return bad_dma_addr;
+
+ iommu_page = alloc_iommu(dev, npages, align_mask);
if (iommu_page == -1) {
if (!nonforced_iommu(dev, phys_mem, size))
return phys_mem;
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index c2871d3..8ed8908 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -312,6 +312,26 @@
identify_secondary_cpu(c);
}
+static void __cpuinit check_cpu_siblings_on_same_node(int cpu1, int cpu2)
+{
+ int node1 = early_cpu_to_node(cpu1);
+ int node2 = early_cpu_to_node(cpu2);
+
+ /*
+ * Our CPU scheduler assumes all logical cpus in the same physical cpu
+ * share the same node. But, buggy ACPI or NUMA emulation might assign
+ * them to different node. Fix it.
+ */
+ if (node1 != node2) {
+ pr_warning("CPU %d in node %d and CPU %d in node %d are in the same physical CPU. forcing same node %d\n",
+ cpu1, node1, cpu2, node2, node2);
+
+ numa_remove_cpu(cpu1);
+ numa_set_node(cpu1, node2);
+ numa_add_cpu(cpu1);
+ }
+}
+
static void __cpuinit link_thread_siblings(int cpu1, int cpu2)
{
cpumask_set_cpu(cpu1, cpu_sibling_mask(cpu2));
@@ -320,6 +340,7 @@
cpumask_set_cpu(cpu2, cpu_core_mask(cpu1));
cpumask_set_cpu(cpu1, cpu_llc_shared_mask(cpu2));
cpumask_set_cpu(cpu2, cpu_llc_shared_mask(cpu1));
+ check_cpu_siblings_on_same_node(cpu1, cpu2);
}
@@ -361,10 +382,12 @@
per_cpu(cpu_llc_id, cpu) == per_cpu(cpu_llc_id, i)) {
cpumask_set_cpu(i, cpu_llc_shared_mask(cpu));
cpumask_set_cpu(cpu, cpu_llc_shared_mask(i));
+ check_cpu_siblings_on_same_node(cpu, i);
}
if (c->phys_proc_id == cpu_data(i).phys_proc_id) {
cpumask_set_cpu(i, cpu_core_mask(cpu));
cpumask_set_cpu(cpu, cpu_core_mask(i));
+ check_cpu_siblings_on_same_node(cpu, i);
/*
* Does this new cpu bringup a new core?
*/
diff --git a/arch/x86/platform/ce4100/falconfalls.dts b/arch/x86/platform/ce4100/falconfalls.dts
index dc701ea..2d6d226 100644
--- a/arch/x86/platform/ce4100/falconfalls.dts
+++ b/arch/x86/platform/ce4100/falconfalls.dts
@@ -74,6 +74,7 @@
compatible = "intel,ce4100-pci", "pci";
device_type = "pci";
bus-range = <1 1>;
+ reg = <0x0800 0x0 0x0 0x0 0x0>;
ranges = <0x2000000 0 0xdffe0000 0x2000000 0 0xdffe0000 0 0x1000>;
interrupt-parent = <&ioapic2>;
@@ -412,6 +413,7 @@
#address-cells = <2>;
#size-cells = <1>;
compatible = "isa";
+ reg = <0xf800 0x0 0x0 0x0 0x0>;
ranges = <1 0 0 0 0 0x100>;
rtc@70 {
diff --git a/arch/x86/platform/mrst/mrst.c b/arch/x86/platform/mrst/mrst.c
index 5c0207b..275dbc1 100644
--- a/arch/x86/platform/mrst/mrst.c
+++ b/arch/x86/platform/mrst/mrst.c
@@ -97,11 +97,11 @@
pentry->freq_hz, pentry->irq);
if (!pentry->irq)
continue;
- mp_irq.type = MP_IOAPIC;
+ mp_irq.type = MP_INTSRC;
mp_irq.irqtype = mp_INT;
/* triggering mode edge bit 2-3, active high polarity bit 0-1 */
mp_irq.irqflag = 5;
- mp_irq.srcbus = 0;
+ mp_irq.srcbus = MP_BUS_ISA;
mp_irq.srcbusirq = pentry->irq; /* IRQ */
mp_irq.dstapic = MP_APIC_ALL;
mp_irq.dstirq = pentry->irq;
@@ -168,10 +168,10 @@
for (totallen = 0; totallen < sfi_mrtc_num; totallen++, pentry++) {
pr_debug("RTC[%d]: paddr = 0x%08x, irq = %d\n",
totallen, (u32)pentry->phys_addr, pentry->irq);
- mp_irq.type = MP_IOAPIC;
+ mp_irq.type = MP_INTSRC;
mp_irq.irqtype = mp_INT;
mp_irq.irqflag = 0xf; /* level trigger and active low */
- mp_irq.srcbus = 0;
+ mp_irq.srcbus = MP_BUS_ISA;
mp_irq.srcbusirq = pentry->irq; /* IRQ */
mp_irq.dstapic = MP_APIC_ALL;
mp_irq.dstirq = pentry->irq;
@@ -282,7 +282,7 @@
/* Avoid searching for BIOS MP tables */
x86_init.mpparse.find_smp_config = x86_init_noop;
x86_init.mpparse.get_smp_config = x86_init_uint_noop;
-
+ set_bit(MP_BUS_ISA, mp_bus_not_pci);
}
/*
diff --git a/block/blk-core.c b/block/blk-core.c
index 90f22cc..5fa3dd2 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -198,26 +198,13 @@
}
EXPORT_SYMBOL(blk_dump_rq_flags);
-/*
- * Make sure that plugs that were pending when this function was entered,
- * are now complete and requests pushed to the queue.
-*/
-static inline void queue_sync_plugs(struct request_queue *q)
-{
- /*
- * If the current process is plugged and has barriers submitted,
- * we will livelock if we don't unplug first.
- */
- blk_flush_plug(current);
-}
-
static void blk_delay_work(struct work_struct *work)
{
struct request_queue *q;
q = container_of(work, struct request_queue, delay_work.work);
spin_lock_irq(q->queue_lock);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
spin_unlock_irq(q->queue_lock);
}
@@ -233,7 +220,8 @@
*/
void blk_delay_queue(struct request_queue *q, unsigned long msecs)
{
- schedule_delayed_work(&q->delay_work, msecs_to_jiffies(msecs));
+ queue_delayed_work(kblockd_workqueue, &q->delay_work,
+ msecs_to_jiffies(msecs));
}
EXPORT_SYMBOL(blk_delay_queue);
@@ -251,7 +239,7 @@
WARN_ON(!irqs_disabled());
queue_flag_clear(QUEUE_FLAG_STOPPED, q);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
}
EXPORT_SYMBOL(blk_start_queue);
@@ -298,7 +286,6 @@
{
del_timer_sync(&q->timeout);
cancel_delayed_work_sync(&q->delay_work);
- queue_sync_plugs(q);
}
EXPORT_SYMBOL(blk_sync_queue);
@@ -310,9 +297,8 @@
* Description:
* See @blk_run_queue. This variant must be called with the queue lock
* held and interrupts disabled.
- *
*/
-void __blk_run_queue(struct request_queue *q, bool force_kblockd)
+void __blk_run_queue(struct request_queue *q)
{
if (unlikely(blk_queue_stopped(q)))
return;
@@ -321,7 +307,7 @@
* Only recurse once to avoid overrunning the stack, let the unplug
* handling reinvoke the handler shortly if we already got there.
*/
- if (!force_kblockd && !queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
+ if (!queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
q->request_fn(q);
queue_flag_clear(QUEUE_FLAG_REENTER, q);
} else
@@ -330,6 +316,20 @@
EXPORT_SYMBOL(__blk_run_queue);
/**
+ * blk_run_queue_async - run a single device queue in workqueue context
+ * @q: The queue to run
+ *
+ * Description:
+ * Tells kblockd to perform the equivalent of @blk_run_queue on our
+ * behalf.
+ */
+void blk_run_queue_async(struct request_queue *q)
+{
+ if (likely(!blk_queue_stopped(q)))
+ queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
+}
+
+/**
* blk_run_queue - run a single device queue
* @q: The queue to run
*
@@ -342,7 +342,7 @@
unsigned long flags;
spin_lock_irqsave(q->queue_lock, flags);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_run_queue);
@@ -991,7 +991,7 @@
blk_queue_end_tag(q, rq);
add_acct_request(q, rq, where);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
spin_unlock_irqrestore(q->queue_lock, flags);
}
EXPORT_SYMBOL(blk_insert_request);
@@ -1311,7 +1311,15 @@
plug = current->plug;
if (plug) {
- if (!plug->should_sort && !list_empty(&plug->list)) {
+ /*
+ * If this is the first request added after a plug, fire
+ * off a plug trace. If others have been added before, check
+ * if we have multiple devices in this plug. If so, make a
+ * note to sort the list before dispatch.
+ */
+ if (list_empty(&plug->list))
+ trace_block_plug(q);
+ else if (!plug->should_sort) {
struct request *__rq;
__rq = list_entry_rq(plug->list.prev);
@@ -1327,7 +1335,7 @@
} else {
spin_lock_irq(q->queue_lock);
add_acct_request(q, req, where);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
out_unlock:
spin_unlock_irq(q->queue_lock);
}
@@ -2644,6 +2652,7 @@
plug->magic = PLUG_MAGIC;
INIT_LIST_HEAD(&plug->list);
+ INIT_LIST_HEAD(&plug->cb_list);
plug->should_sort = 0;
/*
@@ -2668,33 +2677,93 @@
return !(rqa->q <= rqb->q);
}
-static void flush_plug_list(struct blk_plug *plug)
+/*
+ * If 'from_schedule' is true, then postpone the dispatch of requests
+ * until a safe kblockd context. We do this to avoid accidentally large
+ * additional stack usage in driver dispatch, in places where the original
+ * plugger did not intend it.
+ */
+static void queue_unplugged(struct request_queue *q, unsigned int depth,
+ bool from_schedule)
+ __releases(q->queue_lock)
+{
+ trace_block_unplug(q, depth, !from_schedule);
+
+ /*
+ * If we are punting this to kblockd, then we can safely drop
+ * the queue_lock before waking kblockd (which needs to take
+ * this lock).
+ */
+ if (from_schedule) {
+ spin_unlock(q->queue_lock);
+ blk_run_queue_async(q);
+ } else {
+ __blk_run_queue(q);
+ spin_unlock(q->queue_lock);
+ }
+
+}
+
+static void flush_plug_callbacks(struct blk_plug *plug)
+{
+ LIST_HEAD(callbacks);
+
+ if (list_empty(&plug->cb_list))
+ return;
+
+ list_splice_init(&plug->cb_list, &callbacks);
+
+ while (!list_empty(&callbacks)) {
+ struct blk_plug_cb *cb = list_first_entry(&callbacks,
+ struct blk_plug_cb,
+ list);
+ list_del(&cb->list);
+ cb->callback(cb);
+ }
+}
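flush_plug_callbacks() above walks plug->cb_list, so a subsystem hooks into an unplug by embedding a struct blk_plug_cb (a list head plus a callback, exactly as consumed above) and queueing it on the current task's plug. A sketch of that registration; my_batch and its fields are purely illustrative:

#include <linux/blkdev.h>

struct my_batch {
	struct blk_plug_cb cb;	/* .list and .callback, as used by flush_plug_callbacks() */
	int nr_queued;		/* illustrative private state */
};

static void my_batch_unplug(struct blk_plug_cb *cb)
{
	struct my_batch *b = container_of(cb, struct my_batch, cb);

	/* dispatch whatever was held back while the task was plugged */
	b->nr_queued = 0;
}

static void my_batch_arm(struct my_batch *b)
{
	struct blk_plug *plug = current->plug;

	if (!plug)
		return;		/* not plugged: the caller should dispatch directly */
	b->cb.callback = my_batch_unplug;
	list_add(&b->cb.list, &plug->cb_list);
}

Re-arming a callback that is already queued is omitted here; a real user would track that state itself, since the list entry is deleted before the callback runs.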
+
+void blk_flush_plug_list(struct blk_plug *plug, bool from_schedule)
{
struct request_queue *q;
unsigned long flags;
struct request *rq;
+ LIST_HEAD(list);
+ unsigned int depth;
BUG_ON(plug->magic != PLUG_MAGIC);
+ flush_plug_callbacks(plug);
if (list_empty(&plug->list))
return;
- if (plug->should_sort)
- list_sort(NULL, &plug->list, plug_rq_cmp);
+ list_splice_init(&plug->list, &list);
+
+ if (plug->should_sort) {
+ list_sort(NULL, &list, plug_rq_cmp);
+ plug->should_sort = 0;
+ }
q = NULL;
+ depth = 0;
+
+ /*
+ * Save and disable interrupts here, to avoid doing it for every
+ * queue lock we have to take.
+ */
local_irq_save(flags);
- while (!list_empty(&plug->list)) {
- rq = list_entry_rq(plug->list.next);
+ while (!list_empty(&list)) {
+ rq = list_entry_rq(list.next);
list_del_init(&rq->queuelist);
BUG_ON(!(rq->cmd_flags & REQ_ON_PLUG));
BUG_ON(!rq->q);
if (rq->q != q) {
- if (q) {
- __blk_run_queue(q, false);
- spin_unlock(q->queue_lock);
- }
+ /*
+ * This drops the queue lock
+ */
+ if (q)
+ queue_unplugged(q, depth, from_schedule);
q = rq->q;
+ depth = 0;
spin_lock(q->queue_lock);
}
rq->cmd_flags &= ~REQ_ON_PLUG;
@@ -2706,39 +2775,29 @@
__elv_add_request(q, rq, ELEVATOR_INSERT_FLUSH);
else
__elv_add_request(q, rq, ELEVATOR_INSERT_SORT_MERGE);
+
+ depth++;
}
- if (q) {
- __blk_run_queue(q, false);
- spin_unlock(q->queue_lock);
- }
+ /*
+ * This drops the queue lock
+ */
+ if (q)
+ queue_unplugged(q, depth, from_schedule);
- BUG_ON(!list_empty(&plug->list));
local_irq_restore(flags);
}
-
-static void __blk_finish_plug(struct task_struct *tsk, struct blk_plug *plug)
-{
- flush_plug_list(plug);
-
- if (plug == tsk->plug)
- tsk->plug = NULL;
-}
+EXPORT_SYMBOL(blk_flush_plug_list);
void blk_finish_plug(struct blk_plug *plug)
{
- if (plug)
- __blk_finish_plug(current, plug);
+ blk_flush_plug_list(plug, false);
+
+ if (plug == current->plug)
+ current->plug = NULL;
}
EXPORT_SYMBOL(blk_finish_plug);
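For reference, the plugging API these changes rework is used by bracketing a batch of submissions, and blk_finish_plug() now funnels into blk_flush_plug_list(). A sketch, assuming two already-prepared read bios:

#include <linux/blkdev.h>
#include <linux/bio.h>

static void submit_two(struct bio *bio1, struct bio *bio2)
{
	struct blk_plug plug;

	blk_start_plug(&plug);		/* requests now collect on plug.list */
	submit_bio(READ, bio1);
	submit_bio(READ, bio2);
	blk_finish_plug(&plug);		/* blk_flush_plug_list(&plug, false) */
}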
-void __blk_flush_plug(struct task_struct *tsk, struct blk_plug *plug)
-{
- __blk_finish_plug(tsk, plug);
- tsk->plug = plug;
-}
-EXPORT_SYMBOL(__blk_flush_plug);
-
int __init blk_dev_init(void)
{
BUILD_BUG_ON(__REQ_NR_BITS > 8 *
diff --git a/block/blk-exec.c b/block/blk-exec.c
index 7482b7f..81e3181 100644
--- a/block/blk-exec.c
+++ b/block/blk-exec.c
@@ -55,7 +55,7 @@
WARN_ON(irqs_disabled());
spin_lock_irq(q->queue_lock);
__elv_add_request(q, rq, where);
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
/* the queue is stopped so it won't be plugged+unplugged */
if (rq->cmd_type == REQ_TYPE_PM_RESUME)
q->request_fn(q);
diff --git a/block/blk-flush.c b/block/blk-flush.c
index eba4a27..6c9b5e1 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -218,7 +218,7 @@
* request_fn may confuse the driver. Always use kblockd.
*/
if (queued)
- __blk_run_queue(q, true);
+ blk_run_queue_async(q);
}
/**
@@ -274,7 +274,7 @@
* the comment in flush_end_io().
*/
if (blk_flush_complete_seq(rq, REQ_FSEQ_DATA, error))
- __blk_run_queue(q, true);
+ blk_run_queue_async(q);
}
/**
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 261c75c..6d73512 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -498,7 +498,6 @@
{
int ret;
struct device *dev = disk_to_dev(disk);
-
struct request_queue *q = disk->queue;
if (WARN_ON(!q))
@@ -521,7 +520,7 @@
if (ret) {
kobject_uevent(&q->kobj, KOBJ_REMOVE);
kobject_del(&q->kobj);
- blk_trace_remove_sysfs(disk_to_dev(disk));
+ blk_trace_remove_sysfs(dev);
kobject_put(&dev->kobj);
return ret;
}
diff --git a/block/blk.h b/block/blk.h
index 6126346..c9df8fc 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -22,6 +22,7 @@
void blk_delete_timer(struct request *);
void blk_add_timer(struct request *);
void __generic_unplug_device(struct request_queue *);
+void blk_run_queue_async(struct request_queue *q);
/*
* Internal atomic flags for request handling
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 3be881e..46b0a1d 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -3368,7 +3368,7 @@
cfqd->busy_queues > 1) {
cfq_del_timer(cfqd, cfqq);
cfq_clear_cfqq_wait_request(cfqq);
- __blk_run_queue(cfqd->queue, false);
+ __blk_run_queue(cfqd->queue);
} else {
cfq_blkiocg_update_idle_time_stats(
&cfqq->cfqg->blkg);
@@ -3383,7 +3383,7 @@
* this new queue is RT and the current one is BE
*/
cfq_preempt_queue(cfqd, cfqq);
- __blk_run_queue(cfqd->queue, false);
+ __blk_run_queue(cfqd->queue);
}
}
@@ -3743,7 +3743,7 @@
struct request_queue *q = cfqd->queue;
spin_lock_irq(q->queue_lock);
- __blk_run_queue(cfqd->queue, false);
+ __blk_run_queue(cfqd->queue);
spin_unlock_irq(q->queue_lock);
}
diff --git a/block/elevator.c b/block/elevator.c
index 0cdb4e7..6f6abc0 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -642,7 +642,7 @@
*/
elv_drain_elevator(q);
while (q->rq.elvpriv) {
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
spin_unlock_irq(q->queue_lock);
msleep(10);
spin_lock_irq(q->queue_lock);
@@ -695,7 +695,7 @@
* with anything. There's no point in delaying queue
* processing.
*/
- __blk_run_queue(q, false);
+ __blk_run_queue(q);
break;
case ELEVATOR_INSERT_SORT_MERGE:
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index d770058..219d88a 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -142,6 +142,7 @@
cbq->callback(msg, nsp);
kfree_skb(skb);
cn_queue_release_callback(cbq);
+ err = 0;
}
return err;
diff --git a/drivers/gpu/drm/nouveau/nouveau_dma.c b/drivers/gpu/drm/nouveau/nouveau_dma.c
index ce38e97..568caed 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dma.c
@@ -83,7 +83,7 @@
return ret;
/* NV_MEMORY_TO_MEMORY_FORMAT requires a notifier object */
- ret = nouveau_notifier_alloc(chan, NvNotify0, 32, 0xfd0, 0x1000,
+ ret = nouveau_notifier_alloc(chan, NvNotify0, 32, 0xfe0, 0x1000,
&chan->m2mf_ntfy);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 856d56a..a76514a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -682,6 +682,9 @@
/* For PFIFO and PGRAPH. */
spinlock_t context_switch_lock;
+ /* VM/PRAMIN flush, legacy PRAMIN aperture */
+ spinlock_t vm_lock;
+
/* RAMIN configuration, RAMFC, RAMHT and RAMRO offsets */
struct nouveau_ramht *ramht;
struct nouveau_gpuobj *ramfc;
diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
index 889c445..39aee6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
@@ -181,13 +181,13 @@
OUT_RING (chan, 0);
}
- nouveau_bo_wr32(chan->notifier_bo, chan->m2mf_ntfy + 3, 0xffffffff);
+ nouveau_bo_wr32(chan->notifier_bo, chan->m2mf_ntfy/4 + 3, 0xffffffff);
FIRE_RING(chan);
mutex_unlock(&chan->mutex);
ret = -EBUSY;
for (i = 0; i < 100000; i++) {
- if (!nouveau_bo_rd32(chan->notifier_bo, chan->m2mf_ntfy + 3)) {
+ if (!nouveau_bo_rd32(chan->notifier_bo, chan->m2mf_ntfy/4 + 3)) {
ret = 0;
break;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 78f467f..5045f8b 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -398,7 +398,7 @@
dma_bits = 40;
} else
if (drm_pci_device_is_pcie(dev) &&
- dev_priv->chipset != 0x40 &&
+ dev_priv->chipset > 0x40 &&
dev_priv->chipset != 0x45) {
if (pci_dma_supported(dev->pdev, DMA_BIT_MASK(39)))
dma_bits = 39;
diff --git a/drivers/gpu/drm/nouveau/nouveau_notifier.c b/drivers/gpu/drm/nouveau/nouveau_notifier.c
index 7ba3fc0..5b39718 100644
--- a/drivers/gpu/drm/nouveau/nouveau_notifier.c
+++ b/drivers/gpu/drm/nouveau/nouveau_notifier.c
@@ -35,19 +35,22 @@
{
struct drm_device *dev = chan->dev;
struct nouveau_bo *ntfy = NULL;
- uint32_t flags;
+ uint32_t flags, ttmpl;
int ret;
- if (nouveau_vram_notify)
+ if (nouveau_vram_notify) {
flags = NOUVEAU_GEM_DOMAIN_VRAM;
- else
+ ttmpl = TTM_PL_FLAG_VRAM;
+ } else {
flags = NOUVEAU_GEM_DOMAIN_GART;
+ ttmpl = TTM_PL_FLAG_TT;
+ }
ret = nouveau_gem_new(dev, NULL, PAGE_SIZE, 0, flags, 0, 0, &ntfy);
if (ret)
return ret;
- ret = nouveau_bo_pin(ntfy, flags);
+ ret = nouveau_bo_pin(ntfy, ttmpl);
if (ret)
goto out_err;
diff --git a/drivers/gpu/drm/nouveau/nouveau_object.c b/drivers/gpu/drm/nouveau/nouveau_object.c
index 4f00c87..67a16e0 100644
--- a/drivers/gpu/drm/nouveau/nouveau_object.c
+++ b/drivers/gpu/drm/nouveau/nouveau_object.c
@@ -1039,19 +1039,20 @@
{
struct drm_nouveau_private *dev_priv = gpuobj->dev->dev_private;
struct drm_device *dev = gpuobj->dev;
+ unsigned long flags;
if (gpuobj->pinst == ~0 || !dev_priv->ramin_available) {
u64 ptr = gpuobj->vinst + offset;
u32 base = ptr >> 16;
u32 val;
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
if (dev_priv->ramin_base != base) {
dev_priv->ramin_base = base;
nv_wr32(dev, 0x001700, dev_priv->ramin_base);
}
val = nv_rd32(dev, 0x700000 + (ptr & 0xffff));
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
return val;
}
@@ -1063,18 +1064,19 @@
{
struct drm_nouveau_private *dev_priv = gpuobj->dev->dev_private;
struct drm_device *dev = gpuobj->dev;
+ unsigned long flags;
if (gpuobj->pinst == ~0 || !dev_priv->ramin_available) {
u64 ptr = gpuobj->vinst + offset;
u32 base = ptr >> 16;
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
if (dev_priv->ramin_base != base) {
dev_priv->ramin_base = base;
nv_wr32(dev, 0x001700, dev_priv->ramin_base);
}
nv_wr32(dev, 0x700000 + (ptr & 0xffff), val);
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
return;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index a33fe40..4bce801 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -55,6 +55,7 @@
be->func->clear(be);
return -EFAULT;
}
+ nvbe->ttm_alloced[nvbe->nr_pages] = false;
}
nvbe->nr_pages++;
@@ -427,7 +428,7 @@
u32 aper_size, align;
int ret;
- if (dev_priv->card_type >= NV_50 || drm_pci_device_is_pcie(dev))
+ if (dev_priv->card_type >= NV_40 && drm_pci_device_is_pcie(dev))
aper_size = 512 * 1024 * 1024;
else
aper_size = 64 * 1024 * 1024;
@@ -457,7 +458,7 @@
dev_priv->gart_info.func = &nv50_sgdma_backend;
} else
if (drm_pci_device_is_pcie(dev) &&
- dev_priv->chipset != 0x40 && dev_priv->chipset != 0x45) {
+ dev_priv->chipset > 0x40 && dev_priv->chipset != 0x45) {
if (nv44_graph_class(dev)) {
dev_priv->gart_info.func = &nv44_sgdma_backend;
align = 512 * 1024;
diff --git a/drivers/gpu/drm/nouveau/nouveau_state.c b/drivers/gpu/drm/nouveau/nouveau_state.c
index 6e2b1a6..a30adec 100644
--- a/drivers/gpu/drm/nouveau/nouveau_state.c
+++ b/drivers/gpu/drm/nouveau/nouveau_state.c
@@ -608,6 +608,7 @@
spin_lock_init(&dev_priv->channels.lock);
spin_lock_init(&dev_priv->tile.lock);
spin_lock_init(&dev_priv->context_switch_lock);
+ spin_lock_init(&dev_priv->vm_lock);
/* Make the CRTCs and I2C buses accessible */
ret = engine->display.early_init(dev);
diff --git a/drivers/gpu/drm/nouveau/nv50_instmem.c b/drivers/gpu/drm/nouveau/nv50_instmem.c
index a6f8aa6..4f95a1e 100644
--- a/drivers/gpu/drm/nouveau/nv50_instmem.c
+++ b/drivers/gpu/drm/nouveau/nv50_instmem.c
@@ -404,23 +404,25 @@
nv50_instmem_flush(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
+ unsigned long flags;
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
nv_wr32(dev, 0x00330c, 0x00000001);
if (!nv_wait(dev, 0x00330c, 0x00000002, 0x00000000))
NV_ERROR(dev, "PRAMIN flush timeout\n");
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
}
void
nv84_instmem_flush(struct drm_device *dev)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
+ unsigned long flags;
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
nv_wr32(dev, 0x070000, 0x00000001);
if (!nv_wait(dev, 0x070000, 0x00000002, 0x00000000))
NV_ERROR(dev, "PRAMIN flush timeout\n");
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
}
diff --git a/drivers/gpu/drm/nouveau/nv50_vm.c b/drivers/gpu/drm/nouveau/nv50_vm.c
index 4fd3432..6c26944 100644
--- a/drivers/gpu/drm/nouveau/nv50_vm.c
+++ b/drivers/gpu/drm/nouveau/nv50_vm.c
@@ -174,10 +174,11 @@
nv50_vm_flush_engine(struct drm_device *dev, int engine)
{
struct drm_nouveau_private *dev_priv = dev->dev_private;
+ unsigned long flags;
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
nv_wr32(dev, 0x100c80, (engine << 16) | 1);
if (!nv_wait(dev, 0x100c80, 0x00000001, 0x00000000))
NV_ERROR(dev, "vm flush timeout: engine %d\n", engine);
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
}
diff --git a/drivers/gpu/drm/nouveau/nvc0_vm.c b/drivers/gpu/drm/nouveau/nvc0_vm.c
index a0a2a02..a179e6c 100644
--- a/drivers/gpu/drm/nouveau/nvc0_vm.c
+++ b/drivers/gpu/drm/nouveau/nvc0_vm.c
@@ -104,11 +104,12 @@
struct nouveau_instmem_engine *pinstmem = &dev_priv->engine.instmem;
struct drm_device *dev = vm->dev;
struct nouveau_vm_pgd *vpgd;
+ unsigned long flags;
u32 engine = (dev_priv->chan_vm == vm) ? 1 : 5;
pinstmem->flush(vm->dev);
- spin_lock(&dev_priv->ramin_lock);
+ spin_lock_irqsave(&dev_priv->vm_lock, flags);
list_for_each_entry(vpgd, &vm->pgd_list, head) {
/* looks like maybe a "free flush slots" counter, the
* faster you write to 0x100cbc the more it decreases
@@ -125,5 +126,5 @@
nv_rd32(dev, 0x100c80), engine);
}
}
- spin_unlock(&dev_priv->ramin_lock);
+ spin_unlock_irqrestore(&dev_priv->vm_lock, flags);
}
diff --git a/drivers/gpu/drm/radeon/atom.c b/drivers/gpu/drm/radeon/atom.c
index d71d375..7bd7456 100644
--- a/drivers/gpu/drm/radeon/atom.c
+++ b/drivers/gpu/drm/radeon/atom.c
@@ -135,7 +135,7 @@
case ATOM_IIO_MOVE_INDEX:
temp &=
~((0xFFFFFFFF >> (32 - CU8(base + 1))) <<
- CU8(base + 2));
+ CU8(base + 3));
temp |=
((index >> CU8(base + 2)) &
(0xFFFFFFFF >> (32 - CU8(base + 1)))) << CU8(base +
@@ -145,7 +145,7 @@
case ATOM_IIO_MOVE_DATA:
temp &=
~((0xFFFFFFFF >> (32 - CU8(base + 1))) <<
- CU8(base + 2));
+ CU8(base + 3));
temp |=
((data >> CU8(base + 2)) &
(0xFFFFFFFF >> (32 - CU8(base + 1)))) << CU8(base +
@@ -155,7 +155,7 @@
case ATOM_IIO_MOVE_ATTR:
temp &=
~((0xFFFFFFFF >> (32 - CU8(base + 1))) <<
- CU8(base + 2));
+ CU8(base + 3));
temp |=
((ctx->
io_attr >> CU8(base + 2)) & (0xFFFFFFFF >> (32 -
diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
index 9d516a8..529a3a7 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -532,10 +532,7 @@
else
pll->flags |= RADEON_PLL_PREFER_LOW_REF_DIV;
- if ((rdev->family == CHIP_R600) ||
- (rdev->family == CHIP_RV610) ||
- (rdev->family == CHIP_RV630) ||
- (rdev->family == CHIP_RV670))
+ if (rdev->family < CHIP_RV770)
pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
} else {
pll->flags |= RADEON_PLL_LEGACY;
@@ -565,7 +562,6 @@
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
if (ss_enabled) {
if (ss->refdiv) {
- pll->flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP;
pll->flags |= RADEON_PLL_USE_REF_DIV;
pll->reference_div = ss->refdiv;
if (ASIC_IS_AVIVO(rdev))
diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index 3453910..43fd016 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -353,7 +353,7 @@
struct drm_display_mode *mode,
struct drm_display_mode *other_mode)
{
- u32 tmp = 0;
+ u32 tmp;
/*
* Line Buffer Setup
* There are 3 line buffers, each one shared by 2 display controllers.
@@ -363,64 +363,63 @@
* first display controller
* 0 - first half of lb (3840 * 2)
* 1 - first 3/4 of lb (5760 * 2)
- * 2 - whole lb (7680 * 2)
+ * 2 - whole lb (7680 * 2), other crtc must be disabled
* 3 - first 1/4 of lb (1920 * 2)
* second display controller
* 4 - second half of lb (3840 * 2)
* 5 - second 3/4 of lb (5760 * 2)
- * 6 - whole lb (7680 * 2)
+ * 6 - whole lb (7680 * 2), other crtc must be disabled
* 7 - last 1/4 of lb (1920 * 2)
*/
- if (mode && other_mode) {
- if (mode->hdisplay > other_mode->hdisplay) {
- if (mode->hdisplay > 2560)
- tmp = 1; /* 3/4 */
- else
- tmp = 0; /* 1/2 */
- } else if (other_mode->hdisplay > mode->hdisplay) {
- if (other_mode->hdisplay > 2560)
- tmp = 3; /* 1/4 */
- else
- tmp = 0; /* 1/2 */
- } else
+ /* this can get tricky if we have two large displays on a paired group
+ * of crtcs. Ideally for multiple large displays we'd assign them to
+ * non-linked crtcs for maximum line buffer allocation.
+ */
+ if (radeon_crtc->base.enabled && mode) {
+ if (other_mode)
tmp = 0; /* 1/2 */
- } else if (mode)
- tmp = 2; /* whole */
- else if (other_mode)
- tmp = 3; /* 1/4 */
+ else
+ tmp = 2; /* whole */
+ } else
+ tmp = 0;
/* second controller of the pair uses second half of the lb */
if (radeon_crtc->crtc_id % 2)
tmp += 4;
WREG32(DC_LB_MEMORY_SPLIT + radeon_crtc->crtc_offset, tmp);
- switch (tmp) {
- case 0:
- case 4:
- default:
- if (ASIC_IS_DCE5(rdev))
- return 4096 * 2;
- else
- return 3840 * 2;
- case 1:
- case 5:
- if (ASIC_IS_DCE5(rdev))
- return 6144 * 2;
- else
- return 5760 * 2;
- case 2:
- case 6:
- if (ASIC_IS_DCE5(rdev))
- return 8192 * 2;
- else
- return 7680 * 2;
- case 3:
- case 7:
- if (ASIC_IS_DCE5(rdev))
- return 2048 * 2;
- else
- return 1920 * 2;
+ if (radeon_crtc->base.enabled && mode) {
+ switch (tmp) {
+ case 0:
+ case 4:
+ default:
+ if (ASIC_IS_DCE5(rdev))
+ return 4096 * 2;
+ else
+ return 3840 * 2;
+ case 1:
+ case 5:
+ if (ASIC_IS_DCE5(rdev))
+ return 6144 * 2;
+ else
+ return 5760 * 2;
+ case 2:
+ case 6:
+ if (ASIC_IS_DCE5(rdev))
+ return 8192 * 2;
+ else
+ return 7680 * 2;
+ case 3:
+ case 7:
+ if (ASIC_IS_DCE5(rdev))
+ return 2048 * 2;
+ else
+ return 1920 * 2;
+ }
}
+
+ /* controller not enabled, so no lb used */
+ return 0;
}
static u32 evergreen_get_number_of_dram_channels(struct radeon_device *rdev)
diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
index 2ef6d51..5f45fa1 100644
--- a/drivers/gpu/drm/radeon/radeon_connectors.c
+++ b/drivers/gpu/drm/radeon/radeon_connectors.c
@@ -1199,7 +1199,7 @@
if (router->ddc_valid || router->cd_valid) {
radeon_connector->router_bus = radeon_i2c_lookup(rdev, &router->i2c_info);
if (!radeon_connector->router_bus)
- goto failed;
+ DRM_ERROR("Failed to assign router i2c bus! Check dmesg for i2c errors.\n");
}
switch (connector_type) {
case DRM_MODE_CONNECTOR_VGA:
@@ -1208,7 +1208,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("VGA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
radeon_connector->dac_load_detect = true;
drm_connector_attach_property(&radeon_connector->base,
@@ -1226,7 +1226,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("DVIA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
radeon_connector->dac_load_detect = true;
drm_connector_attach_property(&radeon_connector->base,
@@ -1249,7 +1249,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("DVI: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
subpixel_order = SubPixelHorizontalRGB;
drm_connector_attach_property(&radeon_connector->base,
@@ -1290,7 +1290,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("HDMI: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
drm_connector_attach_property(&radeon_connector->base,
rdev->mode_info.coherent_mode_property,
@@ -1329,10 +1329,10 @@
else
radeon_dig_connector->dp_i2c_bus = radeon_i2c_create_dp(dev, i2c_bus, "DP-auxch");
if (!radeon_dig_connector->dp_i2c_bus)
- goto failed;
+ DRM_ERROR("DP: Failed to assign dp ddc bus! Check dmesg for i2c errors.\n");
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("DP: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
subpixel_order = SubPixelHorizontalRGB;
drm_connector_attach_property(&radeon_connector->base,
@@ -1381,7 +1381,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("LVDS: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
drm_connector_attach_property(&radeon_connector->base,
dev->mode_config.scaling_mode_property,
@@ -1457,7 +1457,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("VGA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
radeon_connector->dac_load_detect = true;
drm_connector_attach_property(&radeon_connector->base,
@@ -1475,7 +1475,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("DVIA: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
radeon_connector->dac_load_detect = true;
drm_connector_attach_property(&radeon_connector->base,
@@ -1493,7 +1493,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("DVI: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
if (connector_type == DRM_MODE_CONNECTOR_DVII) {
radeon_connector->dac_load_detect = true;
@@ -1538,7 +1538,7 @@
if (i2c_bus->valid) {
radeon_connector->ddc_bus = radeon_i2c_lookup(rdev, i2c_bus);
if (!radeon_connector->ddc_bus)
- goto failed;
+ DRM_ERROR("LVDS: Failed to assign ddc bus! Check dmesg for i2c errors.\n");
}
drm_connector_attach_property(&radeon_connector->base,
dev->mode_config.scaling_mode_property,
@@ -1567,9 +1567,4 @@
radeon_legacy_backlight_init(radeon_encoder, connector);
}
}
- return;
-
-failed:
- drm_connector_cleanup(connector);
- kfree(connector);
}
diff --git a/drivers/gpu/drm/radeon/radeon_i2c.c b/drivers/gpu/drm/radeon/radeon_i2c.c
index ccbabf7..983cbac7 100644
--- a/drivers/gpu/drm/radeon/radeon_i2c.c
+++ b/drivers/gpu/drm/radeon/radeon_i2c.c
@@ -1096,6 +1096,9 @@
if (!radeon_connector->router.ddc_valid)
return;
+ if (!radeon_connector->router_bus)
+ return;
+
radeon_i2c_get_byte(radeon_connector->router_bus,
radeon_connector->router.i2c_addr,
0x3, &val);
@@ -1121,6 +1124,9 @@
if (!radeon_connector->router.cd_valid)
return;
+ if (!radeon_connector->router_bus)
+ return;
+
radeon_i2c_get_byte(radeon_connector->router_bus,
radeon_connector->router.i2c_addr,
0x3, &val);
diff --git a/drivers/i2c/algos/i2c-algo-bit.c b/drivers/i2c/algos/i2c-algo-bit.c
index 38319a6..d6d5868 100644
--- a/drivers/i2c/algos/i2c-algo-bit.c
+++ b/drivers/i2c/algos/i2c-algo-bit.c
@@ -232,9 +232,17 @@
* Sanity check for the adapter hardware - check the reaction of
* the bus lines only if it seems to be idle.
*/
-static int test_bus(struct i2c_algo_bit_data *adap, char *name)
+static int test_bus(struct i2c_adapter *i2c_adap)
{
- int scl, sda;
+ struct i2c_algo_bit_data *adap = i2c_adap->algo_data;
+ const char *name = i2c_adap->name;
+ int scl, sda, ret;
+
+ if (adap->pre_xfer) {
+ ret = adap->pre_xfer(i2c_adap);
+ if (ret < 0)
+ return -ENODEV;
+ }
if (adap->getscl == NULL)
pr_info("%s: Testing SDA only, SCL is not readable\n", name);
@@ -297,11 +305,19 @@
"while pulling SCL high!\n", name);
goto bailout;
}
+
+ if (adap->post_xfer)
+ adap->post_xfer(i2c_adap);
+
pr_info("%s: Test OK\n", name);
return 0;
bailout:
sdahi(adap);
sclhi(adap);
+
+ if (adap->post_xfer)
+ adap->post_xfer(i2c_adap);
+
return -ENODEV;
}
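
Note for adapter authors: test_bus() now brackets the line probing with the
adapter's pre_xfer/post_xfer hooks, because some adapters (the radeon GPIO
buses above being one motivation) keep their pads powered down between
transfers and would read garbage otherwise. A minimal sketch of an algo-bit
adapter that depends on these hooks; all mychip_* names are hypothetical
stand-ins:

	#include <linux/i2c.h>
	#include <linux/i2c-algo-bit.h>

	struct mychip { void __iomem *gpio; };	/* hypothetical device */

	/* Stubbed pad-power and GPIO helpers; a real driver would touch
	 * hardware here. */
	static int  mychip_power_pads(struct mychip *c, int on) { return 0; }
	static void mychip_setsda(void *data, int state) { }
	static void mychip_setscl(void *data, int state) { }
	static int  mychip_getsda(void *data) { return 1; }
	static int  mychip_getscl(void *data) { return 1; }

	static int mychip_pre_xfer(struct i2c_adapter *adap)
	{
		struct i2c_algo_bit_data *bit = adap->algo_data;

		/* Power the pads before any line is sampled -- including by
		 * test_bus(), which is why it must call this hook. */
		return mychip_power_pads(bit->data, 1);
	}

	static void mychip_post_xfer(struct i2c_adapter *adap)
	{
		struct i2c_algo_bit_data *bit = adap->algo_data;

		mychip_power_pads(bit->data, 0);
	}

	static struct mychip chip;

	static struct i2c_algo_bit_data mychip_bit_data = {
		.data		= &chip,
		.pre_xfer	= mychip_pre_xfer,	/* now also run by test_bus() */
		.post_xfer	= mychip_post_xfer,
		.setsda		= mychip_setsda,
		.setscl		= mychip_setscl,
		.getsda		= mychip_getsda,
		.getscl		= mychip_getscl,
		.udelay		= 10,
		.timeout	= HZ / 10,
	};

With bit_test set, i2c_bit_add_bus() runs test_bus() before registering the
adapter, so without the hook calls above the self-test would fail on such
hardware.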
@@ -607,7 +623,7 @@
int ret;
if (bit_test) {
- ret = test_bus(bit_adap, adap->name);
+ ret = test_bus(adap);
if (ret < 0)
return -ENODEV;
}
diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
index 70c30e6..9a58994 100644
--- a/drivers/i2c/i2c-core.c
+++ b/drivers/i2c/i2c-core.c
@@ -797,7 +797,8 @@
/* Let legacy drivers scan this bus for matching devices */
if (driver->attach_adapter) {
- dev_warn(&adap->dev, "attach_adapter method is deprecated\n");
+ dev_warn(&adap->dev, "%s: attach_adapter method is deprecated\n",
+ driver->driver.name);
dev_warn(&adap->dev, "Please use another way to instantiate "
"your i2c_client\n");
/* We ignore the return code; if it fails, too bad */
@@ -984,7 +985,8 @@
if (!driver->detach_adapter)
return 0;
- dev_warn(&adapter->dev, "detach_adapter method is deprecated\n");
+ dev_warn(&adapter->dev, "%s: detach_adapter method is deprecated\n",
+ driver->driver.name);
res = driver->detach_adapter(adapter);
if (res)
dev_err(&adapter->dev, "detach_adapter failed (%d) "
diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
index 7f42d3a..88d8e4c 100644
--- a/drivers/input/evdev.c
+++ b/drivers/input/evdev.c
@@ -39,13 +39,13 @@
};
struct evdev_client {
- int head;
- int tail;
+ unsigned int head;
+ unsigned int tail;
spinlock_t buffer_lock; /* protects access to buffer, head and tail */
struct fasync_struct *fasync;
struct evdev *evdev;
struct list_head node;
- int bufsize;
+ unsigned int bufsize;
struct input_event buffer[];
};
@@ -55,16 +55,25 @@
static void evdev_pass_event(struct evdev_client *client,
struct input_event *event)
{
- /*
- * Interrupts are disabled, just acquire the lock.
- * Make sure we don't leave with the client buffer
- * "empty" by having client->head == client->tail.
- */
+ /* Interrupts are disabled, just acquire the lock. */
spin_lock(&client->buffer_lock);
- do {
- client->buffer[client->head++] = *event;
- client->head &= client->bufsize - 1;
- } while (client->head == client->tail);
+
+ client->buffer[client->head++] = *event;
+ client->head &= client->bufsize - 1;
+
+ if (unlikely(client->head == client->tail)) {
+ /*
+ * This effectively "drops" all unconsumed events, leaving
+ * EV_SYN/SYN_DROPPED plus the newest event in the queue.
+ */
+ client->tail = (client->head - 2) & (client->bufsize - 1);
+
+ client->buffer[client->tail].time = event->time;
+ client->buffer[client->tail].type = EV_SYN;
+ client->buffer[client->tail].code = SYN_DROPPED;
+ client->buffer[client->tail].value = 0;
+ }
+
spin_unlock(&client->buffer_lock);
if (event->type == EV_SYN)
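
Since bufsize is a power of two, head and tail wrap with a mask rather than a
modulo. A standalone user-space illustration of the overrun arithmetic above,
assuming a hypothetical bufsize of 8:

	#include <stdio.h>

	#define BUFSIZE 8u	/* power of two, as evdev guarantees */

	int main(void)
	{
		unsigned int head = 5, tail = 5;	/* head just caught tail */

		if (head == tail) {	/* the queue overran */
			/* Rewind tail two slots: the older slot takes the
			 * EV_SYN/SYN_DROPPED marker, the newer one keeps the
			 * event that was just stored. */
			tail = (head - 2) & (BUFSIZE - 1);
			printf("SYN_DROPPED in slot %u, newest event in slot %u\n",
			       tail, (head - 1) & (BUFSIZE - 1));	/* 3 and 4 */
		}
		return 0;
	}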
diff --git a/drivers/input/input.c b/drivers/input/input.c
index d6e8bd8..ebbceed 100644
--- a/drivers/input/input.c
+++ b/drivers/input/input.c
@@ -1746,6 +1746,42 @@
}
EXPORT_SYMBOL(input_set_capability);
+static unsigned int input_estimate_events_per_packet(struct input_dev *dev)
+{
+ int mt_slots;
+ int i;
+ unsigned int events;
+
+ if (dev->mtsize) {
+ mt_slots = dev->mtsize;
+ } else if (test_bit(ABS_MT_TRACKING_ID, dev->absbit)) {
+ mt_slots = dev->absinfo[ABS_MT_TRACKING_ID].maximum -
+ dev->absinfo[ABS_MT_TRACKING_ID].minimum + 1;
+ mt_slots = clamp(mt_slots, 2, 32);
+ } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) {
+ mt_slots = 2;
+ } else {
+ mt_slots = 0;
+ }
+
+ events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */
+
+ for (i = 0; i < ABS_CNT; i++) {
+ if (test_bit(i, dev->absbit)) {
+ if (input_is_mt_axis(i))
+ events += mt_slots;
+ else
+ events++;
+ }
+ }
+
+ for (i = 0; i < REL_CNT; i++)
+ if (test_bit(i, dev->relbit))
+ events++;
+
+ return events;
+}
+
#define INPUT_CLEANSE_BITMASK(dev, type, bits) \
do { \
if (!test_bit(EV_##type, dev->evbit)) \
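
A worked example for input_estimate_events_per_packet(), using a hypothetical
10-slot multitouch screen that reports ABS_MT_POSITION_X, ABS_MT_POSITION_Y
and ABS_MT_TRACKING_ID plus plain ABS_X/ABS_Y: the estimate starts at
10 + 1 = 11 (one SYN_MT_REPORT per slot plus the closing SYN_REPORT), adds
3 * 10 = 30 for the three per-slot MT axes and 2 for the two ordinary axes,
for a hint of 43 events per packet.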
@@ -1793,6 +1829,10 @@
/* Make sure that bitmasks not mentioned in dev->evbit are clean. */
input_cleanse_bitmasks(dev);
+ if (!dev->hint_events_per_packet)
+ dev->hint_events_per_packet =
+ input_estimate_events_per_packet(dev);
+
/*
* If delay and period are pre-set by the driver, then autorepeating
* is handled by the driver itself and we don't do it in input.c.
diff --git a/drivers/input/keyboard/twl4030_keypad.c b/drivers/input/keyboard/twl4030_keypad.c
index 09bef79..a26922c 100644
--- a/drivers/input/keyboard/twl4030_keypad.c
+++ b/drivers/input/keyboard/twl4030_keypad.c
@@ -332,18 +332,20 @@
static int __devinit twl4030_kp_probe(struct platform_device *pdev)
{
struct twl4030_keypad_data *pdata = pdev->dev.platform_data;
- const struct matrix_keymap_data *keymap_data = pdata->keymap_data;
+ const struct matrix_keymap_data *keymap_data;
struct twl4030_keypad *kp;
struct input_dev *input;
u8 reg;
int error;
- if (!pdata || !pdata->rows || !pdata->cols ||
+ if (!pdata || !pdata->rows || !pdata->cols || !pdata->keymap_data ||
pdata->rows > TWL4030_MAX_ROWS || pdata->cols > TWL4030_MAX_COLS) {
dev_err(&pdev->dev, "Invalid platform_data\n");
return -EINVAL;
}
+ keymap_data = pdata->keymap_data;
+
kp = kzalloc(sizeof(*kp), GFP_KERNEL);
input = input_allocate_device();
if (!kp || !input) {
diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
index 7077f9b..62bae99 100644
--- a/drivers/input/misc/xen-kbdfront.c
+++ b/drivers/input/misc/xen-kbdfront.c
@@ -303,7 +303,7 @@
enum xenbus_state backend_state)
{
struct xenkbd_info *info = dev_get_drvdata(&dev->dev);
- int val;
+ int ret, val;
switch (backend_state) {
case XenbusStateInitialising:
@@ -316,6 +316,17 @@
case XenbusStateInitWait:
InitWait:
+ ret = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+ "feature-abs-pointer", "%d", &val);
+ if (ret < 0)
+ val = 0;
+ if (val) {
+ ret = xenbus_printf(XBT_NIL, info->xbdev->nodename,
+ "request-abs-pointer", "1");
+ if (ret)
+ pr_warning("xenkbd: can't request abs-pointer\n");
+ }
+
xenbus_switch_state(dev, XenbusStateConnected);
break;
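
This follows the usual xenstore feature handshake: the backend advertises
feature-abs-pointer under its otherend directory, and the frontend opts in by
writing request-abs-pointer under its own node before switching to
XenbusStateConnected.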
diff --git a/drivers/input/touchscreen/h3600_ts_input.c b/drivers/input/touchscreen/h3600_ts_input.c
index efa0688..45f93d0 100644
--- a/drivers/input/touchscreen/h3600_ts_input.c
+++ b/drivers/input/touchscreen/h3600_ts_input.c
@@ -399,31 +399,34 @@
IRQF_SHARED | IRQF_DISABLED, "h3600_action", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Action Button IRQ!\n");
err = -EBUSY;
- goto fail2;
+ goto fail1;
}
if (request_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, npower_button_handler,
IRQF_SHARED | IRQF_DISABLED, "h3600_suspend", &ts->dev)) {
printk(KERN_ERR "h3600ts.c: Could not allocate Power Button IRQ!\n");
err = -EBUSY;
- goto fail3;
+ goto fail2;
}
serio_set_drvdata(serio, ts);
err = serio_open(serio, drv);
if (err)
- return err;
+ goto fail3;
//h3600_flite_control(1, 25); /* default brightness */
- input_register_device(ts->dev);
+ err = input_register_device(ts->dev);
+ if (err)
+ goto fail4;
return 0;
-fail3: free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev);
+fail4: serio_close(serio);
+fail3: serio_set_drvdata(serio, NULL);
+ free_irq(IRQ_GPIO_BITSY_NPOWER_BUTTON, ts->dev);
fail2: free_irq(IRQ_GPIO_BITSY_ACTION_BUTTON, ts->dev);
-fail1: serio_set_drvdata(serio, NULL);
- input_free_device(input_dev);
+fail1: input_free_device(input_dev);
kfree(ts);
return err;
}
diff --git a/drivers/leds/leds-regulator.c b/drivers/leds/leds-regulator.c
index 3790816..8497f56 100644
--- a/drivers/leds/leds-regulator.c
+++ b/drivers/leds/leds-regulator.c
@@ -178,6 +178,10 @@
led->cdev.flags |= LED_CORE_SUSPENDRESUME;
led->vcc = vcc;
+ /* to handle correctly an already enabled regulator */
+ if (regulator_is_enabled(led->vcc))
+ led->enabled = 1;
+
mutex_init(&led->mutex);
INIT_WORK(&led->work, led_work);
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index 5ef136c..e5d8904 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -390,13 +390,6 @@
return md_raid5_congested(&rs->md, bits);
}
-static void raid_unplug(struct dm_target_callbacks *cb)
-{
- struct raid_set *rs = container_of(cb, struct raid_set, callbacks);
-
- md_raid5_kick_device(rs->md.private);
-}
-
/*
* Construct a RAID4/5/6 mapping:
* Args:
@@ -487,7 +480,6 @@
}
rs->callbacks.congested_fn = raid_is_congested;
- rs->callbacks.unplug_fn = raid_unplug;
dm_table_add_target_callbacks(ti->table, &rs->callbacks);
return 0;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index b12b377..6e853c6 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -447,48 +447,59 @@
/* Support for plugging.
* This mirrors the plugging support in request_queue, but does not
- * require having a whole queue
+ * require having a whole queue or request structures.
+ * We allocate an md_plug_cb for each md device and each thread it gets
+ * plugged on. Each callback holds a reference to the mddev and bumps
+ * mddev->plug_cnt, so other code can see how many plugs are outstanding
+ * and whether a plug is currently active.
*/
-static void plugger_work(struct work_struct *work)
-{
- struct plug_handle *plug =
- container_of(work, struct plug_handle, unplug_work);
- plug->unplug_fn(plug);
-}
-static void plugger_timeout(unsigned long data)
-{
- struct plug_handle *plug = (void *)data;
- kblockd_schedule_work(NULL, &plug->unplug_work);
-}
-void plugger_init(struct plug_handle *plug,
- void (*unplug_fn)(struct plug_handle *))
-{
- plug->unplug_flag = 0;
- plug->unplug_fn = unplug_fn;
- init_timer(&plug->unplug_timer);
- plug->unplug_timer.function = plugger_timeout;
- plug->unplug_timer.data = (unsigned long)plug;
- INIT_WORK(&plug->unplug_work, plugger_work);
-}
-EXPORT_SYMBOL_GPL(plugger_init);
+struct md_plug_cb {
+ struct blk_plug_cb cb;
+ mddev_t *mddev;
+};
-void plugger_set_plug(struct plug_handle *plug)
+static void plugger_unplug(struct blk_plug_cb *cb)
{
- if (!test_and_set_bit(PLUGGED_FLAG, &plug->unplug_flag))
- mod_timer(&plug->unplug_timer, jiffies + msecs_to_jiffies(3)+1);
+ struct md_plug_cb *mdcb = container_of(cb, struct md_plug_cb, cb);
+ if (atomic_dec_and_test(&mdcb->mddev->plug_cnt))
+ md_wakeup_thread(mdcb->mddev->thread);
+ kfree(mdcb);
}
-EXPORT_SYMBOL_GPL(plugger_set_plug);
-int plugger_remove_plug(struct plug_handle *plug)
+/* Check whether an unplug wakeup will come shortly. Returns 0 if not,
+ * in which case the caller should wake the md thread itself.
+ */
+int mddev_check_plugged(mddev_t *mddev)
{
- if (test_and_clear_bit(PLUGGED_FLAG, &plug->unplug_flag)) {
- del_timer(&plug->unplug_timer);
- return 1;
- } else
+ struct blk_plug *plug = current->plug;
+ struct md_plug_cb *mdcb;
+
+ if (!plug)
return 0;
-}
-EXPORT_SYMBOL_GPL(plugger_remove_plug);
+ list_for_each_entry(mdcb, &plug->cb_list, cb.list) {
+ if (mdcb->cb.callback == plugger_unplug &&
+ mdcb->mddev == mddev) {
+ /* Already on the list, move to top */
+ if (mdcb != list_first_entry(&plug->cb_list,
+ struct md_plug_cb,
+ cb.list))
+ list_move(&mdcb->cb.list, &plug->cb_list);
+ return 1;
+ }
+ }
+ /* Not currently on the callback list */
+ mdcb = kmalloc(sizeof(*mdcb), GFP_ATOMIC);
+ if (!mdcb)
+ return 0;
+
+ mdcb->mddev = mddev;
+ mdcb->cb.callback = plugger_unplug;
+ atomic_inc(&mddev->plug_cnt);
+ list_add(&mdcb->cb.list, &plug->cb_list);
+ return 1;
+}
+EXPORT_SYMBOL_GPL(mddev_check_plugged);
static inline mddev_t *mddev_get(mddev_t *mddev)
{
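
A hedged sketch of the submitter side, showing when plugger_unplug() above
actually runs; example_submit() is a hypothetical caller:

	#include <linux/bio.h>
	#include <linux/blkdev.h>

	static void example_submit(struct bio *bio)
	{
		struct blk_plug plug;

		blk_start_plug(&plug);		/* sets current->plug */

		/* md's make_request() calls mddev_check_plugged(), linking
		 * an md_plug_cb onto plug.cb_list and bumping
		 * mddev->plug_cnt the first time through for this thread. */
		generic_make_request(bio);

		blk_finish_plug(&plug);		/* fires plugger_unplug():
						 * plug_cnt drops and the md
						 * thread is woken */
	}

The raid1/raid5 make_request changes below use the return value of
mddev_check_plugged() to decide whether they must wake the md thread
themselves.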
@@ -538,6 +549,7 @@
atomic_set(&mddev->active, 1);
atomic_set(&mddev->openers, 0);
atomic_set(&mddev->active_io, 0);
+ atomic_set(&mddev->plug_cnt, 0);
spin_lock_init(&mddev->write_lock);
atomic_set(&mddev->flush_pending, 0);
init_waitqueue_head(&mddev->sb_wait);
@@ -4723,7 +4735,6 @@
mddev->bitmap_info.chunksize = 0;
mddev->bitmap_info.daemon_sleep = 0;
mddev->bitmap_info.max_write_behind = 0;
- mddev->plug = NULL;
}
static void __md_stop_writes(mddev_t *mddev)
@@ -6688,12 +6699,6 @@
}
EXPORT_SYMBOL_GPL(md_allow_write);
-void md_unplug(mddev_t *mddev)
-{
- if (mddev->plug)
- mddev->plug->unplug_fn(mddev->plug);
-}
-
#define SYNC_MARKS 10
#define SYNC_MARK_STEP (3*HZ)
void md_do_sync(mddev_t *mddev)
diff --git a/drivers/md/md.h b/drivers/md/md.h
index 52b4073..0b1fd3f 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -29,26 +29,6 @@
typedef struct mddev_s mddev_t;
typedef struct mdk_rdev_s mdk_rdev_t;
-/* generic plugging support - like that provided with request_queue,
- * but does not require a request_queue
- */
-struct plug_handle {
- void (*unplug_fn)(struct plug_handle *);
- struct timer_list unplug_timer;
- struct work_struct unplug_work;
- unsigned long unplug_flag;
-};
-#define PLUGGED_FLAG 1
-void plugger_init(struct plug_handle *plug,
- void (*unplug_fn)(struct plug_handle *));
-void plugger_set_plug(struct plug_handle *plug);
-int plugger_remove_plug(struct plug_handle *plug);
-static inline void plugger_flush(struct plug_handle *plug)
-{
- del_timer_sync(&plug->unplug_timer);
- cancel_work_sync(&plug->unplug_work);
-}
-
/*
* MD's 'extended' device
*/
@@ -199,6 +179,9 @@
int delta_disks, new_level, new_layout;
int new_chunk_sectors;
+ atomic_t plug_cnt; /* If device is expecting
+ * more bios soon.
+ */
struct mdk_thread_s *thread; /* management thread */
struct mdk_thread_s *sync_thread; /* doing resync or reconstruct */
sector_t curr_resync; /* last block scheduled */
@@ -336,7 +319,6 @@
struct list_head all_mddevs;
struct attribute_group *to_remove;
- struct plug_handle *plug; /* if used by personality */
struct bio_set *bio_set;
@@ -516,7 +498,6 @@
extern void md_integrity_add_rdev(mdk_rdev_t *rdev, mddev_t *mddev);
extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
extern void restore_bitmap_write_access(struct file *file);
-extern void md_unplug(mddev_t *mddev);
extern void mddev_init(mddev_t *mddev);
extern int md_run(mddev_t *mddev);
@@ -530,4 +511,5 @@
mddev_t *mddev);
extern struct bio *bio_alloc_mddev(gfp_t gfp_mask, int nr_iovecs,
mddev_t *mddev);
+extern int mddev_check_plugged(mddev_t *mddev);
#endif /* _MD_MD_H */
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index c2a21ae5..2b7a7ff 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -565,12 +565,6 @@
spin_unlock_irq(&conf->device_lock);
}
-static void md_kick_device(mddev_t *mddev)
-{
- blk_flush_plug(current);
- md_wakeup_thread(mddev->thread);
-}
-
/* Barriers....
* Sometimes we need to suspend IO while we do something else,
* either some resync/recovery, or reconfigure the array.
@@ -600,7 +594,7 @@
/* Wait until no block IO is waiting */
wait_event_lock_irq(conf->wait_barrier, !conf->nr_waiting,
- conf->resync_lock, md_kick_device(conf->mddev));
+ conf->resync_lock, );
/* block any new IO from starting */
conf->barrier++;
@@ -608,7 +602,7 @@
/* Now wait for all pending IO to complete */
wait_event_lock_irq(conf->wait_barrier,
!conf->nr_pending && conf->barrier < RESYNC_DEPTH,
- conf->resync_lock, md_kick_device(conf->mddev));
+ conf->resync_lock, );
spin_unlock_irq(&conf->resync_lock);
}
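
The now-empty fourth argument of wait_event_lock_irq() is the macro's
optional command, executed after the lock is dropped and before scheduling;
with per-task block plugging the scheduler flushes current->plug itself when
the task sleeps, so the explicit md_kick_device() nudge is no longer needed
there.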
@@ -630,7 +624,7 @@
conf->nr_waiting++;
wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
conf->resync_lock,
- md_kick_device(conf->mddev));
+ );
conf->nr_waiting--;
}
conf->nr_pending++;
@@ -666,8 +660,7 @@
wait_event_lock_irq(conf->wait_barrier,
conf->nr_pending == conf->nr_queued+1,
conf->resync_lock,
- ({ flush_pending_writes(conf);
- md_kick_device(conf->mddev); }));
+ flush_pending_writes(conf));
spin_unlock_irq(&conf->resync_lock);
}
static void unfreeze_array(conf_t *conf)
@@ -729,6 +722,7 @@
const unsigned long do_sync = (bio->bi_rw & REQ_SYNC);
const unsigned long do_flush_fua = (bio->bi_rw & (REQ_FLUSH | REQ_FUA));
mdk_rdev_t *blocked_rdev;
+ int plugged;
/*
* Register the new request and wait if the reconstruction
@@ -820,6 +814,8 @@
* inc refcount on their rdev. Record them by setting
* bios[x] to bio
*/
+ plugged = mddev_check_plugged(mddev);
+
disks = conf->raid_disks;
retry_write:
blocked_rdev = NULL;
@@ -925,7 +921,7 @@
/* In case raid1d snuck in to freeze_array */
wake_up(&conf->wait_barrier);
- if (do_sync || !bitmap)
+ if (do_sync || !bitmap || !plugged)
md_wakeup_thread(mddev->thread);
return 0;
@@ -1516,13 +1512,16 @@
conf_t *conf = mddev->private;
struct list_head *head = &conf->retry_list;
mdk_rdev_t *rdev;
+ struct blk_plug plug;
md_check_recovery(mddev);
-
+
+ blk_start_plug(&plug);
for (;;) {
char b[BDEVNAME_SIZE];
- flush_pending_writes(conf);
+ if (atomic_read(&mddev->plug_cnt) == 0)
+ flush_pending_writes(conf);
spin_lock_irqsave(&conf->device_lock, flags);
if (list_empty(head)) {
@@ -1593,6 +1592,7 @@
}
cond_resched();
}
+ blk_finish_plug(&plug);
}
@@ -2039,7 +2039,6 @@
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
- blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
if (conf->r1bio_pool)
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 2da83d5..8e94626 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -634,12 +634,6 @@
spin_unlock_irq(&conf->device_lock);
}
-static void md_kick_device(mddev_t *mddev)
-{
- blk_flush_plug(current);
- md_wakeup_thread(mddev->thread);
-}
-
/* Barriers....
* Sometimes we need to suspend IO while we do something else,
* either some resync/recovery, or reconfigure the array.
@@ -669,15 +663,15 @@
/* Wait until no block IO is waiting (unless 'force') */
wait_event_lock_irq(conf->wait_barrier, force || !conf->nr_waiting,
- conf->resync_lock, md_kick_device(conf->mddev));
+ conf->resync_lock, );
/* block any new IO from starting */
conf->barrier++;
- /* No wait for all pending IO to complete */
+ /* Now wait for all pending IO to complete */
wait_event_lock_irq(conf->wait_barrier,
!conf->nr_pending && conf->barrier < RESYNC_DEPTH,
- conf->resync_lock, md_kick_device(conf->mddev));
+ conf->resync_lock, );
spin_unlock_irq(&conf->resync_lock);
}
@@ -698,7 +692,7 @@
conf->nr_waiting++;
wait_event_lock_irq(conf->wait_barrier, !conf->barrier,
conf->resync_lock,
- md_kick_device(conf->mddev));
+ );
conf->nr_waiting--;
}
conf->nr_pending++;
@@ -734,8 +728,8 @@
wait_event_lock_irq(conf->wait_barrier,
conf->nr_pending == conf->nr_queued+1,
conf->resync_lock,
- ({ flush_pending_writes(conf);
- md_kick_device(conf->mddev); }));
+ flush_pending_writes(conf));
+
spin_unlock_irq(&conf->resync_lock);
}
@@ -762,6 +756,7 @@
const unsigned long do_fua = (bio->bi_rw & REQ_FUA);
unsigned long flags;
mdk_rdev_t *blocked_rdev;
+ int plugged;
if (unlikely(bio->bi_rw & REQ_FLUSH)) {
md_flush_request(mddev, bio);
@@ -870,6 +865,8 @@
* inc refcount on their rdev. Record them by setting
* bios[x] to bio
*/
+ plugged = mddev_check_plugged(mddev);
+
raid10_find_phys(conf, r10_bio);
retry_write:
blocked_rdev = NULL;
@@ -946,9 +943,8 @@
/* In case raid10d snuck in to freeze_array */
wake_up(&conf->wait_barrier);
- if (do_sync || !mddev->bitmap)
+ if (do_sync || !mddev->bitmap || !plugged)
md_wakeup_thread(mddev->thread);
-
return 0;
}
@@ -1640,9 +1636,11 @@
conf_t *conf = mddev->private;
struct list_head *head = &conf->retry_list;
mdk_rdev_t *rdev;
+ struct blk_plug plug;
md_check_recovery(mddev);
+ blk_start_plug(&plug);
for (;;) {
char b[BDEVNAME_SIZE];
@@ -1716,6 +1714,7 @@
}
cond_resched();
}
+ blk_finish_plug(&plug);
}
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index e867ee4..f301e6a 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -27,12 +27,12 @@
*
* We group bitmap updates into batches. Each batch has a number.
* We may write out several batches at once, but that isn't very important.
- * conf->bm_write is the number of the last batch successfully written.
- * conf->bm_flush is the number of the last batch that was closed to
+ * conf->seq_write is the number of the last batch successfully written.
+ * conf->seq_flush is the number of the last batch that was closed to
* new additions.
* When we discover that we will need to write to any block in a stripe
* (in add_stripe_bio) we update the in-memory bitmap and record in sh->bm_seq
- * the number of the batch it will be in. This is bm_flush+1.
+ * the number of the batch it will be in. This is seq_flush+1.
* When we are ready to do a write, if that batch hasn't been written yet,
* we plug the array and queue the stripe for later.
* When an unplug happens, we increment seq_flush, thus closing the current
@@ -199,14 +199,12 @@
BUG_ON(!list_empty(&sh->lru));
BUG_ON(atomic_read(&conf->active_stripes)==0);
if (test_bit(STRIPE_HANDLE, &sh->state)) {
- if (test_bit(STRIPE_DELAYED, &sh->state)) {
+ if (test_bit(STRIPE_DELAYED, &sh->state))
list_add_tail(&sh->lru, &conf->delayed_list);
- plugger_set_plug(&conf->plug);
- } else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
- sh->bm_seq - conf->seq_write > 0) {
+ else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
+ sh->bm_seq - conf->seq_write > 0)
list_add_tail(&sh->lru, &conf->bitmap_list);
- plugger_set_plug(&conf->plug);
- } else {
+ else {
clear_bit(STRIPE_BIT_DELAY, &sh->state);
list_add_tail(&sh->lru, &conf->handle_list);
}
@@ -461,7 +459,7 @@
< (conf->max_nr_stripes *3/4)
|| !conf->inactive_blocked),
conf->device_lock,
- md_raid5_kick_device(conf));
+ );
conf->inactive_blocked = 0;
} else
init_stripe(sh, sector, previous);
@@ -1470,7 +1468,7 @@
wait_event_lock_irq(conf->wait_for_stripe,
!list_empty(&conf->inactive_list),
conf->device_lock,
- blk_flush_plug(current));
+ );
osh = get_free_stripe(conf);
spin_unlock_irq(&conf->device_lock);
atomic_set(&nsh->count, 1);
@@ -3623,8 +3621,7 @@
atomic_inc(&conf->preread_active_stripes);
list_add_tail(&sh->lru, &conf->hold_list);
}
- } else
- plugger_set_plug(&conf->plug);
+ }
}
static void activate_bit_delay(raid5_conf_t *conf)
@@ -3641,21 +3638,6 @@
}
}
-void md_raid5_kick_device(raid5_conf_t *conf)
-{
- blk_flush_plug(current);
- raid5_activate_delayed(conf);
- md_wakeup_thread(conf->mddev->thread);
-}
-EXPORT_SYMBOL_GPL(md_raid5_kick_device);
-
-static void raid5_unplug(struct plug_handle *plug)
-{
- raid5_conf_t *conf = container_of(plug, raid5_conf_t, plug);
-
- md_raid5_kick_device(conf);
-}
-
int md_raid5_congested(mddev_t *mddev, int bits)
{
raid5_conf_t *conf = mddev->private;
@@ -3945,6 +3927,7 @@
struct stripe_head *sh;
const int rw = bio_data_dir(bi);
int remaining;
+ int plugged;
if (unlikely(bi->bi_rw & REQ_FLUSH)) {
md_flush_request(mddev, bi);
@@ -3963,6 +3946,7 @@
bi->bi_next = NULL;
bi->bi_phys_segments = 1; /* over-loaded to count active stripes */
+ plugged = mddev_check_plugged(mddev);
for (;logical_sector < last_sector; logical_sector += STRIPE_SECTORS) {
DEFINE_WAIT(w);
int disks, data_disks;
@@ -4057,7 +4041,7 @@
* add failed due to overlap. Flush everything
* and wait a while
*/
- md_raid5_kick_device(conf);
+ md_wakeup_thread(mddev->thread);
release_stripe(sh);
schedule();
goto retry;
@@ -4077,6 +4061,9 @@
}
}
+ if (!plugged)
+ md_wakeup_thread(mddev->thread);
+
spin_lock_irq(&conf->device_lock);
remaining = raid5_dec_bi_phys_segments(bi);
spin_unlock_irq(&conf->device_lock);
@@ -4478,24 +4465,30 @@
struct stripe_head *sh;
raid5_conf_t *conf = mddev->private;
int handled;
+ struct blk_plug plug;
pr_debug("+++ raid5d active\n");
md_check_recovery(mddev);
+ blk_start_plug(&plug);
handled = 0;
spin_lock_irq(&conf->device_lock);
while (1) {
struct bio *bio;
- if (conf->seq_flush != conf->seq_write) {
- int seq = conf->seq_flush;
+ if (atomic_read(&mddev->plug_cnt) == 0 &&
+ !list_empty(&conf->bitmap_list)) {
+ /* Now is a good time to flush some bitmap updates */
+ conf->seq_flush++;
spin_unlock_irq(&conf->device_lock);
bitmap_unplug(mddev->bitmap);
spin_lock_irq(&conf->device_lock);
- conf->seq_write = seq;
+ conf->seq_write = conf->seq_flush;
activate_bit_delay(conf);
}
+ if (atomic_read(&mddev->plug_cnt) == 0)
+ raid5_activate_delayed(conf);
while ((bio = remove_bio_from_retry(conf))) {
int ok;
@@ -4525,6 +4518,7 @@
spin_unlock_irq(&conf->device_lock);
async_tx_issue_pending_all();
+ blk_finish_plug(&plug);
pr_debug("--- raid5d inactive\n");
}
@@ -5141,8 +5135,6 @@
mdname(mddev));
md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
- plugger_init(&conf->plug, raid5_unplug);
- mddev->plug = &conf->plug;
if (mddev->queue) {
int chunk_size;
/* read-ahead size must cover two whole stripes, which
@@ -5192,7 +5184,6 @@
mddev->thread = NULL;
if (mddev->queue)
mddev->queue->backing_dev_info.congested_fn = NULL;
- plugger_flush(&conf->plug); /* the unplug fn references 'conf'*/
free_conf(conf);
mddev->private = NULL;
mddev->to_remove = &raid5_attrs_group;
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 8d563a4..3ca77a2 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -400,8 +400,6 @@
* Cleared when a sync completes.
*/
- struct plug_handle plug;
-
/* per cpu variables */
struct raid5_percpu {
struct page *spare_page; /* Used when checking P/Q in raid6 */
diff --git a/drivers/media/video/videobuf-dma-contig.c b/drivers/media/video/videobuf-dma-contig.c
index c4742fc..c969111 100644
--- a/drivers/media/video/videobuf-dma-contig.c
+++ b/drivers/media/video/videobuf-dma-contig.c
@@ -300,7 +300,7 @@
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
retval = remap_pfn_range(vma, vma->vm_start,
- PFN_DOWN(virt_to_phys(mem->vaddr)),
+ mem->dma_handle >> PAGE_SHIFT,
size, vma->vm_page_prot);
if (retval) {
dev_err(q->dev, "mmap: remap failed with error %d. ", retval);
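
The dma_handle is the right source for the PFN here: the CPU address returned
by dma_alloc_coherent() may come from an ioremap-style area that
virt_to_phys() cannot translate, whereas dma_handle >> PAGE_SHIFT names the
page frame the buffer was actually allocated at.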
diff --git a/drivers/misc/sgi-gru/grufile.c b/drivers/misc/sgi-gru/grufile.c
index 20e4e93..ecafa4b 100644
--- a/drivers/misc/sgi-gru/grufile.c
+++ b/drivers/misc/sgi-gru/grufile.c
@@ -348,15 +348,15 @@
static int gru_irq_count[GRU_CHIPLETS_PER_BLADE];
-static void gru_noop(unsigned int irq)
+static void gru_noop(struct irq_data *d)
{
}
static struct irq_chip gru_chip[GRU_CHIPLETS_PER_BLADE] = {
[0 ... GRU_CHIPLETS_PER_BLADE - 1] {
- .mask = gru_noop,
- .unmask = gru_noop,
- .ack = gru_noop
+ .irq_mask = gru_noop,
+ .irq_unmask = gru_noop,
+ .irq_ack = gru_noop
}
};
diff --git a/drivers/net/bna/bfa_ioc.c b/drivers/net/bna/bfa_ioc.c
index e3de0b8..7581518ec 100644
--- a/drivers/net/bna/bfa_ioc.c
+++ b/drivers/net/bna/bfa_ioc.c
@@ -38,6 +38,8 @@
#define bfa_ioc_map_port(__ioc) ((__ioc)->ioc_hwif->ioc_map_port(__ioc))
#define bfa_ioc_notify_fail(__ioc) \
((__ioc)->ioc_hwif->ioc_notify_fail(__ioc))
+#define bfa_ioc_sync_start(__ioc) \
+ ((__ioc)->ioc_hwif->ioc_sync_start(__ioc))
#define bfa_ioc_sync_join(__ioc) \
((__ioc)->ioc_hwif->ioc_sync_join(__ioc))
#define bfa_ioc_sync_leave(__ioc) \
@@ -602,7 +604,7 @@
switch (event) {
case IOCPF_E_SEMLOCKED:
if (bfa_ioc_firmware_lock(ioc)) {
- if (bfa_ioc_sync_complete(ioc)) {
+ if (bfa_ioc_sync_start(ioc)) {
iocpf->retry_count = 0;
bfa_ioc_sync_join(ioc);
bfa_fsm_set_state(iocpf, bfa_iocpf_sm_hwinit);
@@ -1314,7 +1316,7 @@
* execution context (driver/bios) must match.
*/
static bool
-bfa_ioc_fwver_valid(struct bfa_ioc *ioc)
+bfa_ioc_fwver_valid(struct bfa_ioc *ioc, u32 boot_env)
{
struct bfi_ioc_image_hdr fwhdr, *drv_fwhdr;
@@ -1325,7 +1327,7 @@
if (fwhdr.signature != drv_fwhdr->signature)
return false;
- if (fwhdr.exec != drv_fwhdr->exec)
+ if (swab32(fwhdr.param) != boot_env)
return false;
return bfa_nw_ioc_fwver_cmp(ioc, &fwhdr);
@@ -1352,9 +1354,12 @@
{
enum bfi_ioc_state ioc_fwstate;
bool fwvalid;
+ u32 boot_env;
ioc_fwstate = readl(ioc->ioc_regs.ioc_fwstate);
+ boot_env = BFI_BOOT_LOADER_OS;
+
if (force)
ioc_fwstate = BFI_IOC_UNINIT;
@@ -1362,10 +1367,10 @@
* check if firmware is valid
*/
fwvalid = (ioc_fwstate == BFI_IOC_UNINIT) ?
- false : bfa_ioc_fwver_valid(ioc);
+ false : bfa_ioc_fwver_valid(ioc, boot_env);
if (!fwvalid) {
- bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, ioc->pcidev.device_id);
+ bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, boot_env);
return;
}
@@ -1396,7 +1401,7 @@
/**
* Initialize the h/w for any other states.
*/
- bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, ioc->pcidev.device_id);
+ bfa_ioc_boot(ioc, BFI_BOOT_TYPE_NORMAL, boot_env);
}
void
@@ -1506,7 +1511,7 @@
*/
static void
bfa_ioc_download_fw(struct bfa_ioc *ioc, u32 boot_type,
- u32 boot_param)
+ u32 boot_env)
{
u32 *fwimg;
u32 pgnum, pgoff;
@@ -1558,10 +1563,10 @@
/*
* Set boot type and boot param at the end.
*/
- writel((swab32(swab32(boot_type))), ((ioc->ioc_regs.smem_page_start)
+ writel(boot_type, ((ioc->ioc_regs.smem_page_start)
+ (BFI_BOOT_TYPE_OFF)));
- writel((swab32(swab32(boot_param))), ((ioc->ioc_regs.smem_page_start)
- + (BFI_BOOT_PARAM_OFF)));
+ writel(boot_env, ((ioc->ioc_regs.smem_page_start)
+ + (BFI_BOOT_LOADER_OFF)));
}
static void
@@ -1721,7 +1726,7 @@
* as the entry vector.
*/
static void
-bfa_ioc_boot(struct bfa_ioc *ioc, u32 boot_type, u32 boot_param)
+bfa_ioc_boot(struct bfa_ioc *ioc, u32 boot_type, u32 boot_env)
{
void __iomem *rb;
@@ -1734,7 +1739,7 @@
* Initialize IOC state of all functions on a chip reset.
*/
rb = ioc->pcidev.pci_bar_kva;
- if (boot_param == BFI_BOOT_TYPE_MEMTEST) {
+ if (boot_type == BFI_BOOT_TYPE_MEMTEST) {
writel(BFI_IOC_MEMTEST, (rb + BFA_IOC0_STATE_REG));
writel(BFI_IOC_MEMTEST, (rb + BFA_IOC1_STATE_REG));
} else {
@@ -1743,7 +1748,7 @@
}
bfa_ioc_msgflush(ioc);
- bfa_ioc_download_fw(ioc, boot_type, boot_param);
+ bfa_ioc_download_fw(ioc, boot_type, boot_env);
/**
* Enable interrupts just before starting LPU
diff --git a/drivers/net/bna/bfa_ioc.h b/drivers/net/bna/bfa_ioc.h
index e4974bc..bd48abe 100644
--- a/drivers/net/bna/bfa_ioc.h
+++ b/drivers/net/bna/bfa_ioc.h
@@ -194,6 +194,7 @@
bool msix);
void (*ioc_notify_fail) (struct bfa_ioc *ioc);
void (*ioc_ownership_reset) (struct bfa_ioc *ioc);
+ bool (*ioc_sync_start) (struct bfa_ioc *ioc);
void (*ioc_sync_join) (struct bfa_ioc *ioc);
void (*ioc_sync_leave) (struct bfa_ioc *ioc);
void (*ioc_sync_ack) (struct bfa_ioc *ioc);
diff --git a/drivers/net/bna/bfa_ioc_ct.c b/drivers/net/bna/bfa_ioc_ct.c
index 469997c..87aecdf 100644
--- a/drivers/net/bna/bfa_ioc_ct.c
+++ b/drivers/net/bna/bfa_ioc_ct.c
@@ -41,6 +41,7 @@
static void bfa_ioc_ct_isr_mode_set(struct bfa_ioc *ioc, bool msix);
static void bfa_ioc_ct_notify_fail(struct bfa_ioc *ioc);
static void bfa_ioc_ct_ownership_reset(struct bfa_ioc *ioc);
+static bool bfa_ioc_ct_sync_start(struct bfa_ioc *ioc);
static void bfa_ioc_ct_sync_join(struct bfa_ioc *ioc);
static void bfa_ioc_ct_sync_leave(struct bfa_ioc *ioc);
static void bfa_ioc_ct_sync_ack(struct bfa_ioc *ioc);
@@ -63,6 +64,7 @@
nw_hwif_ct.ioc_isr_mode_set = bfa_ioc_ct_isr_mode_set;
nw_hwif_ct.ioc_notify_fail = bfa_ioc_ct_notify_fail;
nw_hwif_ct.ioc_ownership_reset = bfa_ioc_ct_ownership_reset;
+ nw_hwif_ct.ioc_sync_start = bfa_ioc_ct_sync_start;
nw_hwif_ct.ioc_sync_join = bfa_ioc_ct_sync_join;
nw_hwif_ct.ioc_sync_leave = bfa_ioc_ct_sync_leave;
nw_hwif_ct.ioc_sync_ack = bfa_ioc_ct_sync_ack;
@@ -345,6 +347,32 @@
/**
* Synchronized IOC failure processing routines
*/
+static bool
+bfa_ioc_ct_sync_start(struct bfa_ioc *ioc)
+{
+ u32 r32 = readl(ioc->ioc_regs.ioc_fail_sync);
+ u32 sync_reqd = bfa_ioc_ct_get_sync_reqd(r32);
+
+ /*
+ * Driver load time. If the sync required bit for this PCI fn
+ * is set, it is due to an unclean exit by the driver for this
+ * PCI fn in the previous incarnation. Whoever comes here first
+ * should clean it up, no matter which PCI fn.
+ */
+
+ if (sync_reqd & bfa_ioc_ct_sync_pos(ioc)) {
+ writel(0, ioc->ioc_regs.ioc_fail_sync);
+ writel(1, ioc->ioc_regs.ioc_usage_reg);
+ writel(BFI_IOC_UNINIT, ioc->ioc_regs.ioc_fwstate);
+ writel(BFI_IOC_UNINIT, ioc->ioc_regs.alt_ioc_fwstate);
+ return true;
+ }
+
+ return bfa_ioc_ct_sync_complete(ioc);
+}
+/**
+ * Synchronized IOC failure processing routines
+ */
static void
bfa_ioc_ct_sync_join(struct bfa_ioc *ioc)
{
diff --git a/drivers/net/bna/bfi.h b/drivers/net/bna/bfi.h
index a973968..6050379 100644
--- a/drivers/net/bna/bfi.h
+++ b/drivers/net/bna/bfi.h
@@ -184,12 +184,14 @@
#define BFI_IOC_MSGLEN_MAX 32 /* 32 bytes */
#define BFI_BOOT_TYPE_OFF 8
-#define BFI_BOOT_PARAM_OFF 12
+#define BFI_BOOT_LOADER_OFF 12
-#define BFI_BOOT_TYPE_NORMAL 0 /* param is device id */
+#define BFI_BOOT_TYPE_NORMAL 0
#define BFI_BOOT_TYPE_FLASH 1
#define BFI_BOOT_TYPE_MEMTEST 2
+#define BFI_BOOT_LOADER_OS 0
+
#define BFI_BOOT_MEMTEST_RES_ADDR 0x900
#define BFI_BOOT_MEMTEST_RES_SIG 0xA0A1A2A3
diff --git a/drivers/net/bna/bnad.c b/drivers/net/bna/bnad.c
index 9f356d5..8e6ceab 100644
--- a/drivers/net/bna/bnad.c
+++ b/drivers/net/bna/bnad.c
@@ -1837,7 +1837,6 @@
/* Initialize the Rx event handlers */
rx_cbfn.rcb_setup_cbfn = bnad_cb_rcb_setup;
rx_cbfn.rcb_destroy_cbfn = bnad_cb_rcb_destroy;
- rx_cbfn.rcb_destroy_cbfn = NULL;
rx_cbfn.ccb_setup_cbfn = bnad_cb_ccb_setup;
rx_cbfn.ccb_destroy_cbfn = bnad_cb_ccb_destroy;
rx_cbfn.rx_cleanup_cbfn = bnad_cb_rx_cleanup;
diff --git a/drivers/net/bnx2x/bnx2x_ethtool.c b/drivers/net/bnx2x/bnx2x_ethtool.c
index f505015..89cb977 100644
--- a/drivers/net/bnx2x/bnx2x_ethtool.c
+++ b/drivers/net/bnx2x/bnx2x_ethtool.c
@@ -2114,19 +2114,18 @@
for (i = 0; i < (data * 2); i++) {
if ((i % 2) == 0)
bnx2x_set_led(&bp->link_params, &bp->link_vars,
- LED_MODE_OPER, SPEED_1000);
+ LED_MODE_ON, SPEED_1000);
else
bnx2x_set_led(&bp->link_params, &bp->link_vars,
- LED_MODE_OFF, 0);
+ LED_MODE_FRONT_PANEL_OFF, 0);
msleep_interruptible(500);
if (signal_pending(current))
break;
}
- if (bp->link_vars.link_up)
- bnx2x_set_led(&bp->link_params, &bp->link_vars, LED_MODE_OPER,
- bp->link_vars.line_speed);
+ bnx2x_set_led(&bp->link_params, &bp->link_vars,
+ LED_MODE_OPER, bp->link_vars.line_speed);
return 0;
}
diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
index 9bc5de3..ba71582 100644
--- a/drivers/net/bonding/bond_alb.c
+++ b/drivers/net/bonding/bond_alb.c
@@ -176,7 +176,7 @@
bond_info->tx_hashtbl = new_hashtbl;
for (i = 0; i < TLB_HASH_TABLE_SIZE; i++) {
- tlb_init_table_entry(&bond_info->tx_hashtbl[i], 1);
+ tlb_init_table_entry(&bond_info->tx_hashtbl[i], 0);
}
_unlock_tx_hashtbl(bond);
@@ -701,7 +701,7 @@
*/
rlb_choose_channel(skb, bond);
- /* The ARP relpy packets must be delayed so that
+ /* The ARP reply packets must be delayed so that
* they can cancel out the influence of the ARP request.
*/
bond->alb_info.rlb_update_delay_counter = RLB_UPDATE_DELAY;
@@ -1042,7 +1042,7 @@
*
* If the permanent hw address of @slave is @bond's hw address, we need to
* find a different hw address to give @slave, that isn't in use by any other
- * slave in the bond. This address must be, of course, one of the premanent
+ * slave in the bond. This address must be, of course, one of the permanent
* addresses of the other slaves.
*
* We go over the slave list, and for each slave there we compare its
diff --git a/drivers/net/bonding/bond_alb.h b/drivers/net/bonding/bond_alb.h
index 86861f0..8ca7158 100644
--- a/drivers/net/bonding/bond_alb.h
+++ b/drivers/net/bonding/bond_alb.h
@@ -75,7 +75,7 @@
* gave this entry index.
*/
u32 tx_bytes; /* Each Client accumulates the BytesTx that
- * were tranmitted to it, and after each
+ * were transmitted to it, and after each
* CallBack the LoadHistory is divided
* by the balance interval
*/
@@ -122,7 +122,6 @@
};
struct alb_bond_info {
- struct timer_list alb_timer;
struct tlb_client_info *tx_hashtbl; /* Dynamically allocated */
spinlock_t tx_hashtbl_lock;
u32 unbalanced_load;
@@ -140,7 +139,6 @@
struct slave *next_rx_slave;/* next slave to be assigned
* to a new rx client for
*/
- u32 rlb_interval_counter;
u8 primary_is_promisc; /* boolean */
u32 rlb_promisc_timeout_counter;/* counts primary
* promiscuity time
diff --git a/drivers/net/can/mscan/mpc5xxx_can.c b/drivers/net/can/mscan/mpc5xxx_can.c
index c0a1bc5..bd1d811 100644
--- a/drivers/net/can/mscan/mpc5xxx_can.c
+++ b/drivers/net/can/mscan/mpc5xxx_can.c
@@ -260,7 +260,7 @@
if (!ofdev->dev.of_match)
return -EINVAL;
- data = (struct mpc5xxx_can_data *)of_dev->dev.of_match->data;
+ data = (struct mpc5xxx_can_data *)ofdev->dev.of_match->data;
base = of_iomap(np, 0);
if (!base) {
diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
index ea0dc45..d70fb76 100644
--- a/drivers/net/loopback.c
+++ b/drivers/net/loopback.c
@@ -173,7 +173,8 @@
| NETIF_F_RXCSUM
| NETIF_F_HIGHDMA
| NETIF_F_LLTX
- | NETIF_F_NETNS_LOCAL;
+ | NETIF_F_NETNS_LOCAL
+ | NETIF_F_VLAN_CHALLENGED;
dev->ethtool_ops = &loopback_ethtool_ops;
dev->header_ops = ð_header_ops;
dev->netdev_ops = &loopback_ops;
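
NETIF_F_VLAN_CHALLENGED marks the device as unable to carry VLAN frames,
which makes the 8021q code refuse to stack a VLAN device on top of loopback.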
diff --git a/drivers/net/natsemi.c b/drivers/net/natsemi.c
index aa2813e..1074231 100644
--- a/drivers/net/natsemi.c
+++ b/drivers/net/natsemi.c
@@ -860,6 +860,9 @@
prev_eedata = eedata;
}
+ /* Store MAC Address in perm_addr */
+ memcpy(dev->perm_addr, dev->dev_addr, ETH_ALEN);
+
dev->base_addr = (unsigned long __force) ioaddr;
dev->irq = irq;
diff --git a/drivers/net/netxen/netxen_nic.h b/drivers/net/netxen/netxen_nic.h
index d7299f1..679dc85 100644
--- a/drivers/net/netxen/netxen_nic.h
+++ b/drivers/net/netxen/netxen_nic.h
@@ -174,7 +174,7 @@
#define MAX_NUM_CARDS 4
-#define MAX_BUFFERS_PER_CMD 32
+#define NETXEN_MAX_FRAGS_PER_TX 14
#define MAX_TSO_HEADER_DESC 2
#define MGMT_CMD_DESC_RESV 4
#define TX_STOP_THRESH ((MAX_SKB_FRAGS >> 2) + MAX_TSO_HEADER_DESC \
@@ -558,7 +558,7 @@
*/
struct netxen_cmd_buffer {
struct sk_buff *skb;
- struct netxen_skb_frag frag_array[MAX_BUFFERS_PER_CMD + 1];
+ struct netxen_skb_frag frag_array[MAX_SKB_FRAGS + 1];
u32 frag_count;
};
diff --git a/drivers/net/netxen/netxen_nic_main.c b/drivers/net/netxen/netxen_nic_main.c
index 83348dc..e8a4b66 100644
--- a/drivers/net/netxen/netxen_nic_main.c
+++ b/drivers/net/netxen/netxen_nic_main.c
@@ -1844,6 +1844,8 @@
struct cmd_desc_type0 *hwdesc, *first_desc;
struct pci_dev *pdev;
int i, k;
+ int delta = 0;
+ struct skb_frag_struct *frag;
u32 producer;
int frag_count, no_of_desc;
@@ -1851,6 +1853,21 @@
frag_count = skb_shinfo(skb)->nr_frags + 1;
+ /* 14 frags are supported for a normal packet and
+ * 32 frags for a TSO packet
+ */
+ if (!skb_is_gso(skb) && frag_count > NETXEN_MAX_FRAGS_PER_TX) {
+
+ for (i = 0; i < (frag_count - NETXEN_MAX_FRAGS_PER_TX); i++) {
+ frag = &skb_shinfo(skb)->frags[i];
+ delta += frag->size;
+ }
+
+ if (!__pskb_pull_tail(skb, delta))
+ goto drop_packet;
+
+ frag_count = 1 + skb_shinfo(skb)->nr_frags;
+ }
/* 4 fragments per cmd desc */
no_of_desc = (frag_count + 3) >> 2;
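
As a worked example (hypothetical skb): a non-TSO skb with 18 fragments has
frag_count = 19, so the sizes of the first 19 - 14 = 5 fragments are summed
into delta, __pskb_pull_tail() copies that many bytes into the linear area,
and frag_count is recomputed from the shrunken nr_frags so the packet now
fits the 14-fragment limit.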
diff --git a/drivers/net/qlcnic/qlcnic.h b/drivers/net/qlcnic/qlcnic.h
index dc44564..b0dead0 100644
--- a/drivers/net/qlcnic/qlcnic.h
+++ b/drivers/net/qlcnic/qlcnic.h
@@ -99,6 +99,7 @@
#define TX_UDPV6_PKT 0x0c
/* Tx defines */
+#define QLCNIC_MAX_FRAGS_PER_TX 14
#define MAX_TSO_HEADER_DESC 2
#define MGMT_CMD_DESC_RESV 4
#define TX_STOP_THRESH ((MAX_SKB_FRAGS >> 2) + MAX_TSO_HEADER_DESC \
diff --git a/drivers/net/qlcnic/qlcnic_main.c b/drivers/net/qlcnic/qlcnic_main.c
index cd88c7e..cb1a1ef 100644
--- a/drivers/net/qlcnic/qlcnic_main.c
+++ b/drivers/net/qlcnic/qlcnic_main.c
@@ -2099,6 +2099,7 @@
struct cmd_desc_type0 *hwdesc, *first_desc;
struct pci_dev *pdev;
struct ethhdr *phdr;
+ int delta = 0;
int i, k;
u32 producer;
@@ -2118,6 +2119,19 @@
}
frag_count = skb_shinfo(skb)->nr_frags + 1;
+ /* 14 frags are supported for a normal packet and
+ * 32 frags for a TSO packet
+ */
+ if (!skb_is_gso(skb) && frag_count > QLCNIC_MAX_FRAGS_PER_TX) {
+
+ for (i = 0; i < (frag_count - QLCNIC_MAX_FRAGS_PER_TX); i++)
+ delta += skb_shinfo(skb)->frags[i].size;
+
+ if (!__pskb_pull_tail(skb, delta))
+ goto drop_packet;
+
+ frag_count = 1 + skb_shinfo(skb)->nr_frags;
+ }
/* 4 fragments per cmd desc */
no_of_desc = (frag_count + 3) >> 2;
diff --git a/drivers/net/sfc/efx.c b/drivers/net/sfc/efx.c
index d890679..a3c2aab 100644
--- a/drivers/net/sfc/efx.c
+++ b/drivers/net/sfc/efx.c
@@ -328,7 +328,8 @@
* processing to finish, then directly poll (and ack) the eventq.
* Finally reenable NAPI and interrupts.
*
- * Since we are touching interrupts the caller should hold the suspend lock
+ * This is for use only during a loopback self-test. It must not
+ * deliver any packets up the stack as this can result in deadlock.
*/
void efx_process_channel_now(struct efx_channel *channel)
{
@@ -336,6 +337,7 @@
BUG_ON(channel->channel >= efx->n_channels);
BUG_ON(!channel->enabled);
+ BUG_ON(!efx->loopback_selftest);
/* Disable interrupts and wait for ISRs to complete */
efx_nic_disable_interrupts(efx);
@@ -1436,7 +1438,7 @@
* restart the transmit interface early so the watchdog timer stops */
efx_start_port(efx);
- if (efx_dev_registered(efx))
+ if (efx_dev_registered(efx) && !efx->port_inhibited)
netif_tx_wake_all_queues(efx->net_dev);
efx_for_each_channel(channel, efx)
diff --git a/drivers/net/sfc/io.h b/drivers/net/sfc/io.h
index d9d8c2e..cc97880 100644
--- a/drivers/net/sfc/io.h
+++ b/drivers/net/sfc/io.h
@@ -152,6 +152,7 @@
spin_lock_irqsave(&efx->biu_lock, flags);
value->u32[0] = _efx_readd(efx, reg + 0);
+ rmb();
value->u32[1] = _efx_readd(efx, reg + 4);
value->u32[2] = _efx_readd(efx, reg + 8);
value->u32[3] = _efx_readd(efx, reg + 12);
@@ -174,6 +175,7 @@
value->u64[0] = (__force __le64)__raw_readq(membase + addr);
#else
value->u32[0] = (__force __le32)__raw_readl(membase + addr);
+ rmb();
value->u32[1] = (__force __le32)__raw_readl(membase + addr + 4);
#endif
spin_unlock_irqrestore(&efx->biu_lock, flags);
diff --git a/drivers/net/sfc/net_driver.h b/drivers/net/sfc/net_driver.h
index 9ffa9a6..191a311 100644
--- a/drivers/net/sfc/net_driver.h
+++ b/drivers/net/sfc/net_driver.h
@@ -330,7 +330,6 @@
* @eventq_mask: Event queue pointer mask
* @eventq_read_ptr: Event queue read pointer
* @last_eventq_read_ptr: Last event queue read pointer value.
- * @magic_count: Event queue test event count
* @irq_count: Number of IRQs since last adaptive moderation decision
* @irq_mod_score: IRQ moderation score
* @rx_alloc_level: Watermark based heuristic counter for pushing descriptors
@@ -360,7 +359,6 @@
unsigned int eventq_mask;
unsigned int eventq_read_ptr;
unsigned int last_eventq_read_ptr;
- unsigned int magic_count;
unsigned int irq_count;
unsigned int irq_mod_score;
diff --git a/drivers/net/sfc/nic.c b/drivers/net/sfc/nic.c
index e839661..10f1cb7 100644
--- a/drivers/net/sfc/nic.c
+++ b/drivers/net/sfc/nic.c
@@ -84,7 +84,8 @@
static inline efx_qword_t *efx_event(struct efx_channel *channel,
unsigned int index)
{
- return ((efx_qword_t *) (channel->eventq.addr)) + index;
+ return ((efx_qword_t *) (channel->eventq.addr)) +
+ (index & channel->eventq_mask);
}
/* See if an event is present
@@ -673,7 +674,8 @@
efx_dword_t reg;
struct efx_nic *efx = channel->efx;
- EFX_POPULATE_DWORD_1(reg, FRF_AZ_EVQ_RPTR, channel->eventq_read_ptr);
+ EFX_POPULATE_DWORD_1(reg, FRF_AZ_EVQ_RPTR,
+ channel->eventq_read_ptr & channel->eventq_mask);
efx_writed_table(efx, ®, efx->type->evq_rptr_tbl_base,
channel->channel);
}
@@ -908,7 +910,7 @@
code = EFX_QWORD_FIELD(*event, FSF_AZ_DRV_GEN_EV_MAGIC);
if (code == EFX_CHANNEL_MAGIC_TEST(channel))
- ++channel->magic_count;
+ ; /* ignore */
else if (code == EFX_CHANNEL_MAGIC_FILL(channel))
/* The queue must be empty, so we won't receive any rx
* events, so efx_process_channel() won't refill the
@@ -1015,8 +1017,7 @@
/* Clear this event by marking it all ones */
EFX_SET_QWORD(*p_event);
- /* Increment read pointer */
- read_ptr = (read_ptr + 1) & channel->eventq_mask;
+ ++read_ptr;
ev_code = EFX_QWORD_FIELD(event, FSF_AZ_EV_CODE);
@@ -1060,6 +1061,13 @@
return spent;
}
+/* Check whether an event is present in the eventq at the current
+ * read pointer. Only useful for self-test.
+ */
+bool efx_nic_event_present(struct efx_channel *channel)
+{
+ return efx_event_present(efx_event(channel, channel->eventq_read_ptr));
+}
/* Allocate buffer table entries for event queue */
int efx_nic_probe_eventq(struct efx_channel *channel)
@@ -1165,7 +1173,7 @@
struct efx_tx_queue *tx_queue;
struct efx_rx_queue *rx_queue;
unsigned int read_ptr = channel->eventq_read_ptr;
- unsigned int end_ptr = (read_ptr - 1) & channel->eventq_mask;
+ unsigned int end_ptr = read_ptr + channel->eventq_mask - 1;
do {
efx_qword_t *event = efx_event(channel, read_ptr);
@@ -1205,7 +1213,7 @@
* it's ok to throw away every non-flush event */
EFX_SET_QWORD(*event);
- read_ptr = (read_ptr + 1) & channel->eventq_mask;
+ ++read_ptr;
} while (read_ptr != end_ptr);
channel->eventq_read_ptr = read_ptr;
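
With the read pointer free-running, "has anything arrived" checks such as the
selftest's ACCESS_ONCE() comparison stay correct across 2^32 wraparound, and
the mask is applied only where the ring is actually indexed. A standalone
illustration, assuming a hypothetical 512-entry queue:

	#include <stdio.h>

	int main(void)
	{
		const unsigned int mask = 0x1ff;	/* 512-entry queue */
		unsigned int read_ptr = 0xfffffffeu;	/* free-running, near wrap */

		printf("slot %u\n", read_ptr & mask);	/* slot 510 */
		++read_ptr;				/* slot 511 */
		++read_ptr;				/* counter wraps to 0: slot 0 */
		printf("slot %u\n", read_ptr & mask);	/* slot 0 */
		return 0;
	}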
diff --git a/drivers/net/sfc/nic.h b/drivers/net/sfc/nic.h
index d9de1b6..a42db6e 100644
--- a/drivers/net/sfc/nic.h
+++ b/drivers/net/sfc/nic.h
@@ -184,6 +184,7 @@
extern void efx_nic_remove_eventq(struct efx_channel *channel);
extern int efx_nic_process_eventq(struct efx_channel *channel, int rx_quota);
extern void efx_nic_eventq_read_ack(struct efx_channel *channel);
+extern bool efx_nic_event_present(struct efx_channel *channel);
/* MAC/PHY */
extern void falcon_drain_tx_fifo(struct efx_nic *efx);
diff --git a/drivers/net/sfc/selftest.c b/drivers/net/sfc/selftest.c
index a0f49b3..50ad3bc 100644
--- a/drivers/net/sfc/selftest.c
+++ b/drivers/net/sfc/selftest.c
@@ -131,8 +131,6 @@
static int efx_test_interrupts(struct efx_nic *efx,
struct efx_self_tests *tests)
{
- struct efx_channel *channel;
-
netif_dbg(efx, drv, efx->net_dev, "testing interrupts\n");
tests->interrupt = -1;
@@ -140,15 +138,6 @@
efx->last_irq_cpu = -1;
smp_wmb();
- /* ACK each interrupting event queue. Receiving an interrupt due to
- * traffic before a test event is raised is considered a pass */
- efx_for_each_channel(channel, efx) {
- if (channel->work_pending)
- efx_process_channel_now(channel);
- if (efx->last_irq_cpu >= 0)
- goto success;
- }
-
efx_nic_generate_interrupt(efx);
/* Wait for arrival of test interrupt. */
@@ -173,13 +162,13 @@
struct efx_self_tests *tests)
{
struct efx_nic *efx = channel->efx;
- unsigned int magic_count, count;
+ unsigned int read_ptr, count;
tests->eventq_dma[channel->channel] = -1;
tests->eventq_int[channel->channel] = -1;
tests->eventq_poll[channel->channel] = -1;
- magic_count = channel->magic_count;
+ read_ptr = channel->eventq_read_ptr;
channel->efx->last_irq_cpu = -1;
smp_wmb();
@@ -190,10 +179,7 @@
do {
schedule_timeout_uninterruptible(HZ / 100);
- if (channel->work_pending)
- efx_process_channel_now(channel);
-
- if (channel->magic_count != magic_count)
+ if (ACCESS_ONCE(channel->eventq_read_ptr) != read_ptr)
goto eventq_ok;
} while (++count < 2);
@@ -211,8 +197,7 @@
}
/* Check to see if event was received even if interrupt wasn't */
- efx_process_channel_now(channel);
- if (channel->magic_count != magic_count) {
+ if (efx_nic_event_present(channel)) {
netif_err(efx, drv, efx->net_dev,
"channel %d event was generated, but "
"failed to trigger an interrupt\n", channel->channel);
@@ -770,6 +755,8 @@
__efx_reconfigure_port(efx);
mutex_unlock(&efx->mac_lock);
+ netif_tx_wake_all_queues(efx->net_dev);
+
return rc_test;
}
diff --git a/drivers/net/sfc/tx.c b/drivers/net/sfc/tx.c
index 1398019..d2c85df 100644
--- a/drivers/net/sfc/tx.c
+++ b/drivers/net/sfc/tx.c
@@ -435,7 +435,8 @@
* queue state. */
smp_mb();
if (unlikely(netif_tx_queue_stopped(tx_queue->core_txq)) &&
- likely(efx->port_enabled)) {
+ likely(efx->port_enabled) &&
+ likely(!efx->port_inhibited)) {
fill_level = tx_queue->insert_count - tx_queue->read_count;
if (fill_level < EFX_TXQ_THRESHOLD(efx)) {
EFX_BUG_ON_PARANOID(!efx_dev_registered(efx));
diff --git a/drivers/net/sis900.c b/drivers/net/sis900.c
index cb317cd..484f795 100644
--- a/drivers/net/sis900.c
+++ b/drivers/net/sis900.c
@@ -240,7 +240,8 @@
* @net_dev: the net device to get address for
*
* Older SiS900 and friends, use EEPROM to store MAC address.
- * MAC address is read from read_eeprom() into @net_dev->dev_addr.
+ * MAC address is read from read_eeprom() into @net_dev->dev_addr and
+ * @net_dev->perm_addr.
*/
static int __devinit sis900_get_mac_addr(struct pci_dev * pci_dev, struct net_device *net_dev)
@@ -261,6 +262,9 @@
for (i = 0; i < 3; i++)
((u16 *)(net_dev->dev_addr))[i] = read_eeprom(ioaddr, i+EEPROMMACAddr);
+ /* Store MAC Address in perm_addr */
+ memcpy(net_dev->perm_addr, net_dev->dev_addr, ETH_ALEN);
+
return 1;
}
@@ -271,7 +275,8 @@
*
* SiS630E model, use APC CMOS RAM to store MAC address.
* APC CMOS RAM is accessed through ISA bridge.
- * MAC address is read into @net_dev->dev_addr.
+ * MAC address is read into @net_dev->dev_addr and
+ * @net_dev->perm_addr.
*/
static int __devinit sis630e_get_mac_addr(struct pci_dev * pci_dev,
@@ -296,6 +301,10 @@
outb(0x09 + i, 0x70);
((u8 *)(net_dev->dev_addr))[i] = inb(0x71);
}
+
+ /* Store MAC Address in perm_addr */
+ memcpy(net_dev->perm_addr, net_dev->dev_addr, ETH_ALEN);
+
pci_write_config_byte(isa_bridge, 0x48, reg & ~0x40);
pci_dev_put(isa_bridge);
@@ -310,7 +319,7 @@
*
* SiS635 model, set MAC Reload Bit to load Mac address from APC
* to rfdr. rfdr is accessed through rfcr. MAC address is read into
- * @net_dev->dev_addr.
+ * @net_dev->dev_addr and @net_dev->perm_addr.
*/
static int __devinit sis635_get_mac_addr(struct pci_dev * pci_dev,
@@ -334,6 +343,9 @@
*( ((u16 *)net_dev->dev_addr) + i) = inw(ioaddr + rfdr);
}
+ /* Store MAC Address in perm_addr */
+ memcpy(net_dev->perm_addr, net_dev->dev_addr, ETH_ALEN);
+
/* enable packet filtering */
outl(rfcrSave | RFEN, rfcr + ioaddr);
@@ -353,7 +365,7 @@
* EEDONE signal to refuse EEPROM access by LAN.
* The EEPROM map of SiS962 or SiS963 is different to SiS900.
* The signature field in SiS962 or SiS963 spec is meaningless.
- * MAC address is read into @net_dev->dev_addr.
+ * MAC address is read into @net_dev->dev_addr and @net_dev->perm_addr.
*/
static int __devinit sis96x_get_mac_addr(struct pci_dev * pci_dev,
@@ -372,6 +384,9 @@
for (i = 0; i < 3; i++)
((u16 *)(net_dev->dev_addr))[i] = read_eeprom(ioaddr, i+EEPROMMACAddr);
+ /* Store MAC Address in perm_addr */
+ memcpy(net_dev->perm_addr, net_dev->dev_addr, ETH_ALEN);
+
outl(EEDONE, ee_addr);
return 1;
} else {
diff --git a/drivers/net/stmmac/dwmac_lib.c b/drivers/net/stmmac/dwmac_lib.c
index d65fab1..e250935 100644
--- a/drivers/net/stmmac/dwmac_lib.c
+++ b/drivers/net/stmmac/dwmac_lib.c
@@ -26,9 +26,9 @@
#undef DWMAC_DMA_DEBUG
#ifdef DWMAC_DMA_DEBUG
-#define DBG(fmt, args...) printk(fmt, ## args)
+#define DWMAC_LIB_DBG(fmt, args...) printk(fmt, ## args)
#else
-#define DBG(fmt, args...) do { } while (0)
+#define DWMAC_LIB_DBG(fmt, args...) do { } while (0)
#endif
/* CSR1 enables the transmit DMA to check for new descriptor */
@@ -152,7 +152,7 @@
/* read the status register (CSR5) */
u32 intr_status = readl(ioaddr + DMA_STATUS);
- DBG(INFO, "%s: [CSR5: 0x%08x]\n", __func__, intr_status);
+ DWMAC_LIB_DBG(KERN_INFO "%s: [CSR5: 0x%08x]\n", __func__, intr_status);
#ifdef DWMAC_DMA_DEBUG
/* It displays the DMA process states (CSR5 register) */
show_tx_process_state(intr_status);
@@ -160,43 +160,43 @@
#endif
/* ABNORMAL interrupts */
if (unlikely(intr_status & DMA_STATUS_AIS)) {
- DBG(INFO, "CSR5[15] DMA ABNORMAL IRQ: ");
+ DWMAC_LIB_DBG(KERN_INFO "CSR5[15] DMA ABNORMAL IRQ: ");
if (unlikely(intr_status & DMA_STATUS_UNF)) {
- DBG(INFO, "transmit underflow\n");
+ DWMAC_LIB_DBG(KERN_INFO "transmit underflow\n");
ret = tx_hard_error_bump_tc;
x->tx_undeflow_irq++;
}
if (unlikely(intr_status & DMA_STATUS_TJT)) {
- DBG(INFO, "transmit jabber\n");
+ DWMAC_LIB_DBG(KERN_INFO "transmit jabber\n");
x->tx_jabber_irq++;
}
if (unlikely(intr_status & DMA_STATUS_OVF)) {
- DBG(INFO, "recv overflow\n");
+ DWMAC_LIB_DBG(KERN_INFO "recv overflow\n");
x->rx_overflow_irq++;
}
if (unlikely(intr_status & DMA_STATUS_RU)) {
- DBG(INFO, "receive buffer unavailable\n");
+ DWMAC_LIB_DBG(KERN_INFO "receive buffer unavailable\n");
x->rx_buf_unav_irq++;
}
if (unlikely(intr_status & DMA_STATUS_RPS)) {
- DBG(INFO, "receive process stopped\n");
+ DWMAC_LIB_DBG(KERN_INFO "receive process stopped\n");
x->rx_process_stopped_irq++;
}
if (unlikely(intr_status & DMA_STATUS_RWT)) {
- DBG(INFO, "receive watchdog\n");
+ DWMAC_LIB_DBG(KERN_INFO "receive watchdog\n");
x->rx_watchdog_irq++;
}
if (unlikely(intr_status & DMA_STATUS_ETI)) {
- DBG(INFO, "transmit early interrupt\n");
+ DWMAC_LIB_DBG(KERN_INFO "transmit early interrupt\n");
x->tx_early_irq++;
}
if (unlikely(intr_status & DMA_STATUS_TPS)) {
- DBG(INFO, "transmit process stopped\n");
+ DWMAC_LIB_DBG(KERN_INFO "transmit process stopped\n");
x->tx_process_stopped_irq++;
ret = tx_hard_error;
}
if (unlikely(intr_status & DMA_STATUS_FBI)) {
- DBG(INFO, "fatal bus error\n");
+ DWMAC_LIB_DBG(KERN_INFO "fatal bus error\n");
x->fatal_bus_error_irq++;
ret = tx_hard_error;
}
@@ -215,7 +215,7 @@
/* Clear the interrupt by writing a logic 1 to the CSR5[15-0] */
writel((intr_status & 0x1ffff), ioaddr + DMA_STATUS);
- DBG(INFO, "\n\n");
+ DWMAC_LIB_DBG(KERN_INFO "\n\n");
return ret;
}
diff --git a/drivers/net/stmmac/stmmac_main.c b/drivers/net/stmmac/stmmac_main.c
index 0e5f031..cc973fc 100644
--- a/drivers/net/stmmac/stmmac_main.c
+++ b/drivers/net/stmmac/stmmac_main.c
@@ -750,7 +750,6 @@
priv->hw->dma->dma_mode(priv->ioaddr, tc, SF_DMA_MODE);
priv->xstats.threshold = tc;
}
- stmmac_tx_err(priv);
} else if (unlikely(status == tx_hard_error))
stmmac_tx_err(priv);
}
@@ -781,21 +780,6 @@
stmmac_verify_args();
- ret = stmmac_init_phy(dev);
- if (unlikely(ret)) {
- pr_err("%s: Cannot attach to PHY (error: %d)\n", __func__, ret);
- return ret;
- }
-
- /* Request the IRQ lines */
- ret = request_irq(dev->irq, stmmac_interrupt,
- IRQF_SHARED, dev->name, dev);
- if (unlikely(ret < 0)) {
- pr_err("%s: ERROR: allocating the IRQ %d (error: %d)\n",
- __func__, dev->irq, ret);
- return ret;
- }
-
#ifdef CONFIG_STMMAC_TIMER
priv->tm = kzalloc(sizeof(struct stmmac_timer *), GFP_KERNEL);
if (unlikely(priv->tm == NULL)) {
@@ -814,6 +798,11 @@
} else
priv->tm->enable = 1;
#endif
+ ret = stmmac_init_phy(dev);
+ if (unlikely(ret)) {
+ pr_err("%s: Cannot attach to PHY (error: %d)\n", __func__, ret);
+ goto open_error;
+ }
/* Create and initialize the TX/RX descriptors chains. */
priv->dma_tx_size = STMMAC_ALIGN(dma_txsize);
@@ -822,12 +811,11 @@
init_dma_desc_rings(dev);
/* DMA initialization and SW reset */
- if (unlikely(priv->hw->dma->init(priv->ioaddr, priv->plat->pbl,
- priv->dma_tx_phy,
- priv->dma_rx_phy) < 0)) {
-
+ ret = priv->hw->dma->init(priv->ioaddr, priv->plat->pbl,
+ priv->dma_tx_phy, priv->dma_rx_phy);
+ if (ret < 0) {
pr_err("%s: DMA initialization failed\n", __func__);
- return -1;
+ goto open_error;
}
/* Copy the MAC addr into the HW */
@@ -848,6 +836,15 @@
writel(0xffffffff, priv->ioaddr + MMC_HIGH_INTR_MASK);
writel(0xffffffff, priv->ioaddr + MMC_LOW_INTR_MASK);
+ /* Request the IRQ lines */
+ ret = request_irq(dev->irq, stmmac_interrupt,
+ IRQF_SHARED, dev->name, dev);
+ if (unlikely(ret < 0)) {
+ pr_err("%s: ERROR: allocating the IRQ %d (error: %d)\n",
+ __func__, dev->irq, ret);
+ goto open_error;
+ }
+
/* Enable the MAC Rx/Tx */
stmmac_enable_mac(priv->ioaddr);
@@ -878,7 +875,17 @@
napi_enable(&priv->napi);
skb_queue_head_init(&priv->rx_recycle);
netif_start_queue(dev);
+
return 0;
+
+open_error:
+#ifdef CONFIG_STMMAC_TIMER
+ kfree(priv->tm);
+#endif
+ if (priv->phydev)
+ phy_disconnect(priv->phydev);
+
+ return ret;
}
/**
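Note on the reordering above: request_irq() is delayed until the DMA and MAC are set up, and every failure now funnels through the new open_error label, which frees the timer and disconnects the PHY. A minimal sketch of the unwind idiom, with hypothetical alloc_timer()/attach_phy() helpers standing in for the driver-specific steps:

	static int example_open(struct net_device *dev)
	{
		int ret;

		ret = alloc_timer(dev);			/* first resource */
		if (ret)
			return ret;

		ret = attach_phy(dev);			/* second resource */
		if (ret)
			goto err_timer;

		/* request the IRQ last, once the hardware can take it */
		ret = request_irq(dev->irq, example_isr, IRQF_SHARED,
				  dev->name, dev);
		if (ret < 0)
			goto err_phy;

		return 0;

	err_phy:
		detach_phy(dev);
	err_timer:
		free_timer(dev);
		return ret;
	}

The patch itself uses a single open_error label that checks what was acquired; the label-per-resource ladder sketched here is the more common variant of the same idiom.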
diff --git a/drivers/net/tokenring/3c359.c b/drivers/net/tokenring/3c359.c
index 8a3b191..ff32bef 100644
--- a/drivers/net/tokenring/3c359.c
+++ b/drivers/net/tokenring/3c359.c
@@ -1251,7 +1251,7 @@
/*
* The NIC has told us that a packet has been downloaded onto the card, we must
* find out which packet it has done, clear the skb and information for the packet
- * then advance around the ring for all tranmitted packets
+ * then advance around the ring for all transmitted packets
*/
static void xl_dn_comp(struct net_device *dev)
@@ -1568,7 +1568,7 @@
if (lan_status_diff & LSC_SOFT_ERR)
printk(KERN_WARNING "%s: Adapter transmitted Soft Error Report Mac Frame\n",dev->name);
if (lan_status_diff & LSC_TRAN_BCN)
- printk(KERN_INFO "%s: We are tranmitting the beacon, aaah\n",dev->name);
+ printk(KERN_INFO "%s: We are transmitting the beacon, aaah\n",dev->name);
if (lan_status_diff & LSC_SS)
printk(KERN_INFO "%s: Single Station on the ring\n", dev->name);
if (lan_status_diff & LSC_RING_REC)
diff --git a/drivers/net/tokenring/lanstreamer.c b/drivers/net/tokenring/lanstreamer.c
index 5bd1407..9354ca9 100644
--- a/drivers/net/tokenring/lanstreamer.c
+++ b/drivers/net/tokenring/lanstreamer.c
@@ -1675,7 +1675,7 @@
if (lan_status_diff & LSC_SOFT_ERR)
printk(KERN_WARNING "%s: Adapter transmitted Soft Error Report Mac Frame\n", dev->name);
if (lan_status_diff & LSC_TRAN_BCN)
- printk(KERN_INFO "%s: We are tranmitting the beacon, aaah\n", dev->name);
+ printk(KERN_INFO "%s: We are transmitting the beacon, aaah\n", dev->name);
if (lan_status_diff & LSC_SS)
printk(KERN_INFO "%s: Single Station on the ring\n", dev->name);
if (lan_status_diff & LSC_RING_REC)
diff --git a/drivers/net/tokenring/olympic.c b/drivers/net/tokenring/olympic.c
index 3d2fbe6..2684003 100644
--- a/drivers/net/tokenring/olympic.c
+++ b/drivers/net/tokenring/olympic.c
@@ -1500,7 +1500,7 @@
if (lan_status_diff & LSC_SOFT_ERR)
printk(KERN_WARNING "%s: Adapter transmitted Soft Error Report Mac Frame\n",dev->name);
if (lan_status_diff & LSC_TRAN_BCN)
- printk(KERN_INFO "%s: We are tranmitting the beacon, aaah\n",dev->name);
+ printk(KERN_INFO "%s: We are transmitting the beacon, aaah\n",dev->name);
if (lan_status_diff & LSC_SS)
printk(KERN_INFO "%s: Single Station on the ring\n", dev->name);
if (lan_status_diff & LSC_RING_REC)
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
index f1b8af6..2d10239 100644
--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
+++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
@@ -1040,7 +1040,7 @@
}
ret = ath9k_htc_hw_init(hif_dev->htc_handle,
- &hif_dev->udev->dev, hif_dev->device_id,
+ &interface->dev, hif_dev->device_id,
hif_dev->udev->product, id->driver_info);
if (ret) {
ret = -EINVAL;
@@ -1158,7 +1158,7 @@
#endif
static struct usb_driver ath9k_hif_usb_driver = {
- .name = "ath9k_hif_usb",
+ .name = KBUILD_MODNAME,
.probe = ath9k_hif_usb_probe,
.disconnect = ath9k_hif_usb_disconnect,
#ifdef CONFIG_PM
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 1ec9bcd..c95bc5c 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -1254,15 +1254,6 @@
ah->txchainmask = common->tx_chainmask;
ah->rxchainmask = common->rx_chainmask;
- if ((common->bus_ops->ath_bus_type != ATH_USB) && !ah->chip_fullsleep) {
- ath9k_hw_abortpcurecv(ah);
- if (!ath9k_hw_stopdmarecv(ah)) {
- ath_dbg(common, ATH_DBG_XMIT,
- "Failed to stop receive dma\n");
- bChannelChange = false;
- }
- }
-
if (!ath9k_hw_setpower(ah, ATH9K_PM_AWAKE))
return -EIO;
diff --git a/drivers/net/wireless/ath/ath9k/mac.c b/drivers/net/wireless/ath/ath9k/mac.c
index 562257a..edc1cbb 100644
--- a/drivers/net/wireless/ath/ath9k/mac.c
+++ b/drivers/net/wireless/ath/ath9k/mac.c
@@ -751,28 +751,47 @@
}
EXPORT_SYMBOL(ath9k_hw_abortpcurecv);
-bool ath9k_hw_stopdmarecv(struct ath_hw *ah)
+bool ath9k_hw_stopdmarecv(struct ath_hw *ah, bool *reset)
{
#define AH_RX_STOP_DMA_TIMEOUT 10000 /* usec */
#define AH_RX_TIME_QUANTUM 100 /* usec */
struct ath_common *common = ath9k_hw_common(ah);
+ u32 mac_status, last_mac_status = 0;
int i;
+ /* Enable access to the DMA observation bus */
+ REG_WRITE(ah, AR_MACMISC,
+ ((AR_MACMISC_DMA_OBS_LINE_8 << AR_MACMISC_DMA_OBS_S) |
+ (AR_MACMISC_MISC_OBS_BUS_1 <<
+ AR_MACMISC_MISC_OBS_BUS_MSB_S)));
+
REG_WRITE(ah, AR_CR, AR_CR_RXD);
/* Wait for rx enable bit to go low */
for (i = AH_RX_STOP_DMA_TIMEOUT / AH_TIME_QUANTUM; i != 0; i--) {
if ((REG_READ(ah, AR_CR) & AR_CR_RXE) == 0)
break;
+
+ if (!AR_SREV_9300_20_OR_LATER(ah)) {
+ mac_status = REG_READ(ah, AR_DMADBG_7) & 0x7f0;
+ if (mac_status == 0x1c0 && mac_status == last_mac_status) {
+ *reset = true;
+ break;
+ }
+
+ last_mac_status = mac_status;
+ }
+
udelay(AH_TIME_QUANTUM);
}
if (i == 0) {
ath_err(common,
- "DMA failed to stop in %d ms AR_CR=0x%08x AR_DIAG_SW=0x%08x\n",
+ "DMA failed to stop in %d ms AR_CR=0x%08x AR_DIAG_SW=0x%08x DMADBG_7=0x%08x\n",
AH_RX_STOP_DMA_TIMEOUT / 1000,
REG_READ(ah, AR_CR),
- REG_READ(ah, AR_DIAG_SW));
+ REG_READ(ah, AR_DIAG_SW),
+ REG_READ(ah, AR_DMADBG_7));
return false;
} else {
return true;
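On pre-AR9300 parts the loop above samples the DMA observation bus while waiting for AR_CR_RXE to drop; reading the same stuck state (0x1c0) on two consecutive polls means RX DMA is wedged, so the caller is told to reset rather than spin to the timeout (and ath_stoprecv() below then returns `stopped || reset`). A minimal sketch of the pattern, with made-up register and mask names:

	/* Hedged sketch: poll for DMA stop, but bail out early and
	 * request a reset if the observation register reports the same
	 * stuck state twice in a row. All names here are placeholders. */
	static bool stop_rx_dma(void __iomem *regs, bool *reset)
	{
		u32 status, last = 0;
		int tries;

		for (tries = TIMEOUT_US / POLL_US; tries != 0; tries--) {
			if (!(readl(regs + DMA_CTRL) & DMA_RX_ENABLED))
				return true;	/* stopped cleanly */

			status = readl(regs + DMA_OBS) & STUCK_MASK;
			if (status == STUCK_STATE && status == last) {
				*reset = true;	/* wedged, needs reset */
				return false;
			}
			last = status;
			udelay(POLL_US);
		}
		return false;			/* plain timeout */
	}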
diff --git a/drivers/net/wireless/ath/ath9k/mac.h b/drivers/net/wireless/ath/ath9k/mac.h
index b2b2ff8..c2a5938 100644
--- a/drivers/net/wireless/ath/ath9k/mac.h
+++ b/drivers/net/wireless/ath/ath9k/mac.h
@@ -695,7 +695,7 @@
void ath9k_hw_putrxbuf(struct ath_hw *ah, u32 rxdp);
void ath9k_hw_startpcureceive(struct ath_hw *ah, bool is_scanning);
void ath9k_hw_abortpcurecv(struct ath_hw *ah);
-bool ath9k_hw_stopdmarecv(struct ath_hw *ah);
+bool ath9k_hw_stopdmarecv(struct ath_hw *ah, bool *reset);
int ath9k_hw_beaconq_setup(struct ath_hw *ah);
/* Interrupt Handling */
diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
index dddb85d..17d04ff 100644
--- a/drivers/net/wireless/ath/ath9k/main.c
+++ b/drivers/net/wireless/ath/ath9k/main.c
@@ -1376,7 +1376,6 @@
ath9k_calculate_iter_data(hw, vif, &iter_data);
- ath9k_ps_wakeup(sc);
/* Set BSSID mask. */
memcpy(common->bssidmask, iter_data.mask, ETH_ALEN);
ath_hw_setbssidmask(common);
@@ -1411,7 +1410,6 @@
}
ath9k_hw_set_interrupts(ah, ah->imask);
- ath9k_ps_restore(sc);
/* Set up ANI */
if ((iter_data.naps + iter_data.nadhocs) > 0) {
@@ -1457,6 +1455,7 @@
struct ath_vif *avp = (void *)vif->drv_priv;
int ret = 0;
+ ath9k_ps_wakeup(sc);
mutex_lock(&sc->mutex);
switch (vif->type) {
@@ -1503,6 +1502,7 @@
ath9k_do_vif_add_setup(hw, vif);
out:
mutex_unlock(&sc->mutex);
+ ath9k_ps_restore(sc);
return ret;
}
@@ -1517,6 +1517,7 @@
ath_dbg(common, ATH_DBG_CONFIG, "Change Interface\n");
mutex_lock(&sc->mutex);
+ ath9k_ps_wakeup(sc);
/* See if new interface type is valid. */
if ((new_type == NL80211_IFTYPE_ADHOC) &&
@@ -1546,6 +1547,7 @@
ath9k_do_vif_add_setup(hw, vif);
out:
+ ath9k_ps_restore(sc);
mutex_unlock(&sc->mutex);
return ret;
}
@@ -1558,6 +1560,7 @@
ath_dbg(common, ATH_DBG_CONFIG, "Detach Interface\n");
+ ath9k_ps_wakeup(sc);
mutex_lock(&sc->mutex);
sc->nvifs--;
@@ -1569,6 +1572,7 @@
ath9k_calculate_summary_state(hw, NULL);
mutex_unlock(&sc->mutex);
+ ath9k_ps_restore(sc);
}
static void ath9k_enable_ps(struct ath_softc *sc)
@@ -1809,6 +1813,7 @@
txq = sc->tx.txq_map[queue];
+ ath9k_ps_wakeup(sc);
mutex_lock(&sc->mutex);
memset(&qi, 0, sizeof(struct ath9k_tx_queue_info));
@@ -1832,6 +1837,7 @@
ath_beaconq_config(sc);
mutex_unlock(&sc->mutex);
+ ath9k_ps_restore(sc);
return ret;
}
@@ -1894,6 +1900,7 @@
int slottime;
int error;
+ ath9k_ps_wakeup(sc);
mutex_lock(&sc->mutex);
if (changed & BSS_CHANGED_BSSID) {
@@ -1994,6 +2001,7 @@
}
mutex_unlock(&sc->mutex);
+ ath9k_ps_restore(sc);
}
static u64 ath9k_get_tsf(struct ieee80211_hw *hw)
diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
index a9c3f46..dcd19bc 100644
--- a/drivers/net/wireless/ath/ath9k/recv.c
+++ b/drivers/net/wireless/ath/ath9k/recv.c
@@ -486,12 +486,12 @@
bool ath_stoprecv(struct ath_softc *sc)
{
struct ath_hw *ah = sc->sc_ah;
- bool stopped;
+ bool stopped, reset = false;
spin_lock_bh(&sc->rx.rxbuflock);
ath9k_hw_abortpcurecv(ah);
ath9k_hw_setrxfilter(ah, 0);
- stopped = ath9k_hw_stopdmarecv(ah);
+ stopped = ath9k_hw_stopdmarecv(ah, &reset);
if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA)
ath_edma_stop_recv(sc);
@@ -506,7 +506,7 @@
"confusing the DMA engine when we start RX up\n");
ATH_DBG_WARN_ON_ONCE(!stopped);
}
- return stopped;
+ return stopped || reset;
}
void ath_flushrecv(struct ath_softc *sc)
diff --git a/drivers/net/wireless/ath/regd_common.h b/drivers/net/wireless/ath/regd_common.h
index 248c670..5c2cfe6 100644
--- a/drivers/net/wireless/ath/regd_common.h
+++ b/drivers/net/wireless/ath/regd_common.h
@@ -195,6 +195,7 @@
{APL9_WORLD, CTL_ETSI, CTL_ETSI},
{APL3_FCCA, CTL_FCC, CTL_FCC},
+ {APL7_FCCA, CTL_FCC, CTL_FCC},
{APL1_ETSIC, CTL_FCC, CTL_ETSI},
{APL2_ETSIC, CTL_FCC, CTL_ETSI},
{APL2_APLD, CTL_FCC, NO_CTL},
diff --git a/drivers/net/wireless/iwlegacy/Kconfig b/drivers/net/wireless/iwlegacy/Kconfig
index 2a45dd4..aef65cd 100644
--- a/drivers/net/wireless/iwlegacy/Kconfig
+++ b/drivers/net/wireless/iwlegacy/Kconfig
@@ -1,6 +1,5 @@
config IWLWIFI_LEGACY
- tristate "Intel Wireless Wifi legacy devices"
- depends on PCI && MAC80211
+ tristate
select FW_LOADER
select NEW_LEDS
select LEDS_CLASS
@@ -65,7 +64,8 @@
config IWL4965
tristate "Intel Wireless WiFi 4965AGN (iwl4965)"
- depends on IWLWIFI_LEGACY
+ depends on PCI && MAC80211
+ select IWLWIFI_LEGACY
---help---
This option enables support for
@@ -92,7 +92,8 @@
config IWL3945
tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)"
- depends on IWLWIFI_LEGACY
+ depends on PCI && MAC80211
+ select IWLWIFI_LEGACY
---help---
Select to build the driver supporting the:
diff --git a/drivers/net/wireless/iwlegacy/iwl-3945-hw.h b/drivers/net/wireless/iwlegacy/iwl-3945-hw.h
index 779d3cb..5c3a68d 100644
--- a/drivers/net/wireless/iwlegacy/iwl-3945-hw.h
+++ b/drivers/net/wireless/iwlegacy/iwl-3945-hw.h
@@ -74,8 +74,6 @@
/* RSSI to dBm */
#define IWL39_RSSI_OFFSET 95
-#define IWL_DEFAULT_TX_POWER 0x0F
-
/*
* EEPROM related constants, enums, and structures.
*/
diff --git a/drivers/net/wireless/iwlegacy/iwl-4965-hw.h b/drivers/net/wireless/iwlegacy/iwl-4965-hw.h
index 08b189c..fc6fa28 100644
--- a/drivers/net/wireless/iwlegacy/iwl-4965-hw.h
+++ b/drivers/net/wireless/iwlegacy/iwl-4965-hw.h
@@ -804,9 +804,6 @@
#define IWL4965_DEFAULT_TX_RETRY 15
-/* Limit range of txpower output target to be between these values */
-#define IWL4965_TX_POWER_TARGET_POWER_MIN (0) /* 0 dBm: 1 milliwatt */
-
/* EEPROM */
#define IWL4965_FIRST_AMPDU_QUEUE 10
diff --git a/drivers/net/wireless/iwlegacy/iwl-core.c b/drivers/net/wireless/iwlegacy/iwl-core.c
index 7007d61..c1511b1 100644
--- a/drivers/net/wireless/iwlegacy/iwl-core.c
+++ b/drivers/net/wireless/iwlegacy/iwl-core.c
@@ -160,6 +160,7 @@
struct ieee80211_channel *geo_ch;
struct ieee80211_rate *rates;
int i = 0;
+ s8 max_tx_power = 0;
if (priv->bands[IEEE80211_BAND_2GHZ].n_bitrates ||
priv->bands[IEEE80211_BAND_5GHZ].n_bitrates) {
@@ -235,8 +236,8 @@
geo_ch->flags |= ch->ht40_extension_channel;
- if (ch->max_power_avg > priv->tx_power_device_lmt)
- priv->tx_power_device_lmt = ch->max_power_avg;
+ if (ch->max_power_avg > max_tx_power)
+ max_tx_power = ch->max_power_avg;
} else {
geo_ch->flags |= IEEE80211_CHAN_DISABLED;
}
@@ -249,6 +250,10 @@
geo_ch->flags);
}
+ priv->tx_power_device_lmt = max_tx_power;
+ priv->tx_power_user_lmt = max_tx_power;
+ priv->tx_power_next = max_tx_power;
+
if ((priv->bands[IEEE80211_BAND_5GHZ].n_channels == 0) &&
priv->cfg->sku & IWL_SKU_A) {
IWL_INFO(priv, "Incorrectly detected BG card as ABG. "
@@ -1124,11 +1129,11 @@
if (!priv->cfg->ops->lib->send_tx_power)
return -EOPNOTSUPP;
- if (tx_power < IWL4965_TX_POWER_TARGET_POWER_MIN) {
+	/* 0 dBm means 1 milliwatt */
+ if (tx_power < 0) {
IWL_WARN(priv,
- "Requested user TXPOWER %d below lower limit %d.\n",
- tx_power,
- IWL4965_TX_POWER_TARGET_POWER_MIN);
+ "Requested user TXPOWER %d below 1 mW.\n",
+ tx_power);
return -EINVAL;
}
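The hunks above replace the per-device TX power constants with one rule: every limit is the highest max_power_avg seen while building the channel map, and user requests are rejected only when negative (below 0 dBm, i.e. 1 mW). A rough userspace re-run of that arithmetic with made-up channel data:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical per-channel max_power_avg values, in dBm */
		signed char chan_max[] = { 14, 15, 17, 16 };
		signed char max_tx_power = 0;
		int i, request = -3;

		for (i = 0; i < 4; i++)
			if (chan_max[i] > max_tx_power)
				max_tx_power = chan_max[i];

		/* device, user, and next limits all start here: 17 dBm */
		printf("limit: %d dBm\n", max_tx_power);

		if (request < 0)	/* 0 dBm == 1 mW is the floor */
			printf("reject %d dBm: below 1 mW\n", request);
		return 0;
	}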
diff --git a/drivers/net/wireless/iwlegacy/iwl-eeprom.c b/drivers/net/wireless/iwlegacy/iwl-eeprom.c
index 04c5648..cb346d1 100644
--- a/drivers/net/wireless/iwlegacy/iwl-eeprom.c
+++ b/drivers/net/wireless/iwlegacy/iwl-eeprom.c
@@ -471,13 +471,6 @@
flags & EEPROM_CHANNEL_RADAR))
? "" : "not ");
- /* Set the tx_power_user_lmt to the highest power
- * supported by any channel */
- if (eeprom_ch_info[ch].max_power_avg >
- priv->tx_power_user_lmt)
- priv->tx_power_user_lmt =
- eeprom_ch_info[ch].max_power_avg;
-
ch_info++;
}
}
diff --git a/drivers/net/wireless/iwlegacy/iwl3945-base.c b/drivers/net/wireless/iwlegacy/iwl3945-base.c
index 28eb3d8..cc7ebce 100644
--- a/drivers/net/wireless/iwlegacy/iwl3945-base.c
+++ b/drivers/net/wireless/iwlegacy/iwl3945-base.c
@@ -3825,10 +3825,6 @@
priv->force_reset[IWL_FW_RESET].reset_duration =
IWL_DELAY_NEXT_FORCE_FW_RELOAD;
-
- priv->tx_power_user_lmt = IWL_DEFAULT_TX_POWER;
- priv->tx_power_next = IWL_DEFAULT_TX_POWER;
-
if (eeprom->version < EEPROM_3945_EEPROM_VERSION) {
IWL_WARN(priv, "Unsupported EEPROM version: 0x%04X\n",
eeprom->version);
diff --git a/drivers/net/wireless/iwlegacy/iwl4965-base.c b/drivers/net/wireless/iwlegacy/iwl4965-base.c
index 91b3d8b..d484c36 100644
--- a/drivers/net/wireless/iwlegacy/iwl4965-base.c
+++ b/drivers/net/wireless/iwlegacy/iwl4965-base.c
@@ -3140,12 +3140,6 @@
iwl_legacy_init_scan_params(priv);
- /* Set the tx_power_user_lmt to the lowest power level
- * this value will get overwritten by channel max power avg
- * from eeprom */
- priv->tx_power_user_lmt = IWL4965_TX_POWER_TARGET_POWER_MIN;
- priv->tx_power_next = IWL4965_TX_POWER_TARGET_POWER_MIN;
-
ret = iwl_legacy_init_channel_map(priv);
if (ret) {
IWL_ERR(priv, "initializing regulatory failed: %d\n", ret);
diff --git a/drivers/net/wireless/iwlwifi/iwl-5000.c b/drivers/net/wireless/iwlwifi/iwl-5000.c
index 3ea31b6..22e045b 100644
--- a/drivers/net/wireless/iwlwifi/iwl-5000.c
+++ b/drivers/net/wireless/iwlwifi/iwl-5000.c
@@ -530,6 +530,9 @@
struct iwl_cfg iwl5300_agn_cfg = {
.name = "Intel(R) Ultimate N WiFi Link 5300 AGN",
IWL_DEVICE_5000,
+ /* at least EEPROM 0x11A has wrong info */
+ .valid_tx_ant = ANT_ABC, /* .cfg overwrite */
+ .valid_rx_ant = ANT_ABC, /* .cfg overwrite */
.ht_params = &iwl5000_ht_params,
};
diff --git a/drivers/net/wireless/mwl8k.c b/drivers/net/wireless/mwl8k.c
index 3695227..c1ceb4b 100644
--- a/drivers/net/wireless/mwl8k.c
+++ b/drivers/net/wireless/mwl8k.c
@@ -137,6 +137,7 @@
struct mwl8k_priv {
struct ieee80211_hw *hw;
struct pci_dev *pdev;
+ int irq;
struct mwl8k_device_info *device_info;
@@ -3761,9 +3762,11 @@
rc = request_irq(priv->pdev->irq, mwl8k_interrupt,
IRQF_SHARED, MWL8K_NAME, hw);
if (rc) {
+ priv->irq = -1;
wiphy_err(hw->wiphy, "failed to register IRQ handler\n");
return -EIO;
}
+ priv->irq = priv->pdev->irq;
/* Enable TX reclaim and RX tasklets. */
tasklet_enable(&priv->poll_tx_task);
@@ -3800,6 +3803,7 @@
if (rc) {
iowrite32(0, priv->regs + MWL8K_HIU_A2H_INTERRUPT_MASK);
free_irq(priv->pdev->irq, hw);
+ priv->irq = -1;
tasklet_disable(&priv->poll_tx_task);
tasklet_disable(&priv->poll_rx_task);
}
@@ -3818,7 +3822,10 @@
/* Disable interrupts */
iowrite32(0, priv->regs + MWL8K_HIU_A2H_INTERRUPT_MASK);
- free_irq(priv->pdev->irq, hw);
+ if (priv->irq != -1) {
+ free_irq(priv->pdev->irq, hw);
+ priv->irq = -1;
+ }
/* Stop finalize join worker */
cancel_work_sync(&priv->finalize_join_worker);
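The new priv->irq field above exists purely so teardown can tell whether request_irq() ever succeeded; -1 means "not held". A minimal sketch of the resulting idempotent release, under the same convention:

	/* Hedged sketch: guard free_irq() with the -1 sentinel so stop
	 * is safe after a failed start or a second stop. */
	static void example_release_irq(struct mwl8k_priv *priv,
					struct ieee80211_hw *hw)
	{
		if (priv->irq != -1) {
			free_irq(priv->pdev->irq, hw);
			priv->irq = -1;
		}
	}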
diff --git a/drivers/net/wireless/p54/txrx.c b/drivers/net/wireless/p54/txrx.c
index 7834c26..042842e 100644
--- a/drivers/net/wireless/p54/txrx.c
+++ b/drivers/net/wireless/p54/txrx.c
@@ -703,7 +703,7 @@
struct p54_tx_info *p54info;
struct p54_hdr *hdr;
struct p54_tx_data *txhdr;
- unsigned int padding, len, extra_len;
+ unsigned int padding, len, extra_len = 0;
int i, j, ridx;
u16 hdr_flags = 0, aid = 0;
u8 rate, queue = 0, crypt_offset = 0;
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index c8ff646..0fa466a 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -88,4 +88,6 @@
depends on HOTPLUG
default y
-select NLS if (DMI || ACPI)
+config PCI_LABEL
+ def_bool y if (DMI || ACPI)
+ select NLS
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 98d61c8..c85f744 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -56,10 +56,10 @@
# ACPI Related PCI FW Functions
# ACPI _DSM provided firmware instance and string name
#
-obj-$(CONFIG_ACPI) += pci-acpi.o pci-label.o
+obj-$(CONFIG_ACPI) += pci-acpi.o
# SMBIOS provided firmware instance and labels
-obj-$(CONFIG_DMI) += pci-label.o
+obj-$(CONFIG_PCI_LABEL) += pci-label.o
# Cardbus & CompactPCI use setup-bus
obj-$(CONFIG_HOTPLUG) += setup-bus.o
diff --git a/drivers/pcmcia/pxa2xx_balloon3.c b/drivers/pcmcia/pxa2xx_balloon3.c
index 453c54c..4c3e94c 100644
--- a/drivers/pcmcia/pxa2xx_balloon3.c
+++ b/drivers/pcmcia/pxa2xx_balloon3.c
@@ -25,6 +25,8 @@
#include <mach/balloon3.h>
+#include <asm/mach-types.h>
+
#include "soc_common.h"
/*
@@ -127,6 +129,9 @@
{
int ret;
+ if (!machine_is_balloon3())
+ return -ENODEV;
+
balloon3_pcmcia_device = platform_device_alloc("pxa2xx-pcmcia", -1);
if (!balloon3_pcmcia_device)
return -ENOMEM;
diff --git a/drivers/pcmcia/pxa2xx_trizeps4.c b/drivers/pcmcia/pxa2xx_trizeps4.c
index b7e5966..b829e65 100644
--- a/drivers/pcmcia/pxa2xx_trizeps4.c
+++ b/drivers/pcmcia/pxa2xx_trizeps4.c
@@ -69,15 +69,15 @@
for (i = 0; i < ARRAY_SIZE(irqs); i++) {
if (irqs[i].sock != skt->nr)
continue;
- if (gpio_request(IRQ_TO_GPIO(irqs[i].irq), irqs[i].str) < 0) {
+ if (gpio_request(irq_to_gpio(irqs[i].irq), irqs[i].str) < 0) {
pr_err("%s: sock %d unable to request gpio %d\n",
- __func__, skt->nr, IRQ_TO_GPIO(irqs[i].irq));
+ __func__, skt->nr, irq_to_gpio(irqs[i].irq));
ret = -EBUSY;
goto error;
}
- if (gpio_direction_input(IRQ_TO_GPIO(irqs[i].irq)) < 0) {
+ if (gpio_direction_input(irq_to_gpio(irqs[i].irq)) < 0) {
pr_err("%s: sock %d unable to set input gpio %d\n",
- __func__, skt->nr, IRQ_TO_GPIO(irqs[i].irq));
+ __func__, skt->nr, irq_to_gpio(irqs[i].irq));
ret = -EINVAL;
goto error;
}
@@ -86,7 +86,7 @@
error:
for (; i >= 0; i--) {
- gpio_free(IRQ_TO_GPIO(irqs[i].irq));
+ gpio_free(irq_to_gpio(irqs[i].irq));
}
return (ret);
}
@@ -97,7 +97,7 @@
/* free allocated gpio's */
gpio_free(GPIO_PRDY);
for (i = 0; i < ARRAY_SIZE(irqs); i++)
- gpio_free(IRQ_TO_GPIO(irqs[i].irq));
+ gpio_free(irq_to_gpio(irqs[i].irq));
}
static unsigned long trizeps_pcmcia_status[2];
@@ -226,6 +226,9 @@
{
int ret;
+ if (!machine_is_trizeps4() && !machine_is_trizeps4wl())
+ return -ENODEV;
+
trizeps_pcmcia_device = platform_device_alloc("pxa2xx-pcmcia", -1);
if (!trizeps_pcmcia_device)
return -ENOMEM;
diff --git a/drivers/rapidio/rio.c b/drivers/rapidio/rio.c
index c29719c..86c9a09 100644
--- a/drivers/rapidio/rio.c
+++ b/drivers/rapidio/rio.c
@@ -1171,16 +1171,17 @@
__setup("riohdid=", rio_hdid_setup);
-void rio_register_mport(struct rio_mport *port)
+int rio_register_mport(struct rio_mport *port)
{
if (next_portid >= RIO_MAX_MPORTS) {
pr_err("RIO: reached specified max number of mports\n");
- return;
+ return 1;
}
port->id = next_portid++;
port->host_deviceid = rio_get_hdid(port->id);
list_add_tail(&port->node, &rio_mports);
+ return 0;
}
EXPORT_SYMBOL_GPL(rio_local_get_device_id);
diff --git a/drivers/rapidio/switches/idt_gen2.c b/drivers/rapidio/switches/idt_gen2.c
index 095016a..ac2701b 100644
--- a/drivers/rapidio/switches/idt_gen2.c
+++ b/drivers/rapidio/switches/idt_gen2.c
@@ -418,3 +418,4 @@
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1616, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTVPS1616, idtg2_switch_init);
DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTSPS1616, idtg2_switch_init);
+DECLARE_RIO_SWITCH_INIT(RIO_VID_IDT, RIO_DID_IDTCPS1432, idtg2_switch_init);
diff --git a/drivers/rtc/class.c b/drivers/rtc/class.c
index 09b4437b..3901386 100644
--- a/drivers/rtc/class.c
+++ b/drivers/rtc/class.c
@@ -171,7 +171,7 @@
err = __rtc_read_alarm(rtc, &alrm);
if (!err && !rtc_valid_tm(&alrm.time))
- rtc_set_alarm(rtc, &alrm);
+ rtc_initialize_alarm(rtc, &alrm);
strlcpy(rtc->name, name, RTC_DEVICE_NAME_SIZE);
dev_set_name(&rtc->dev, "rtc%d", id);
diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 23719f0..ef6316a 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -375,6 +375,32 @@
}
EXPORT_SYMBOL_GPL(rtc_set_alarm);
+/* Called once per device from rtc_device_register */
+int rtc_initialize_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
+{
+ int err;
+
+ err = rtc_valid_tm(&alarm->time);
+ if (err != 0)
+ return err;
+
+ err = mutex_lock_interruptible(&rtc->ops_lock);
+ if (err)
+ return err;
+
+ rtc->aie_timer.node.expires = rtc_tm_to_ktime(alarm->time);
+ rtc->aie_timer.period = ktime_set(0, 0);
+ if (alarm->enabled) {
+ rtc->aie_timer.enabled = 1;
+ timerqueue_add(&rtc->timerqueue, &rtc->aie_timer.node);
+ }
+ mutex_unlock(&rtc->ops_lock);
+ return err;
+}
+EXPORT_SYMBOL_GPL(rtc_initialize_alarm);
+
int rtc_alarm_irq_enable(struct rtc_device *rtc, unsigned int enabled)
{
int err = mutex_lock_interruptible(&rtc->ops_lock);
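Unlike rtc_set_alarm(), the helper above never touches the hardware: it only mirrors an alarm that is already programmed into the aie_timer timerqueue. The class.c hunk earlier is the sole intended caller; its sequence is, roughly:

	/* Hedged usage sketch, following the class.c hunk above: read
	 * back whatever alarm the RTC already holds and seed the
	 * timerqueue with it -- no register writes involved. */
	struct rtc_wkalrm alrm;
	int err;

	err = __rtc_read_alarm(rtc, &alrm);
	if (!err && !rtc_valid_tm(&alrm.time))
		rtc_initialize_alarm(rtc, &alrm);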
diff --git a/drivers/rtc/rtc-bfin.c b/drivers/rtc/rtc-bfin.c
index a0fc4cf..90d8662 100644
--- a/drivers/rtc/rtc-bfin.c
+++ b/drivers/rtc/rtc-bfin.c
@@ -250,6 +250,8 @@
bfin_rtc_int_set_alarm(rtc);
else
bfin_rtc_int_clear(~(RTC_ISTAT_ALARM | RTC_ISTAT_ALARM_DAY));
+
+ return 0;
}
static int bfin_rtc_read_time(struct device *dev, struct rtc_time *tm)
diff --git a/drivers/rtc/rtc-mc13xxx.c b/drivers/rtc/rtc-mc13xxx.c
index c420064..c5ac037 100644
--- a/drivers/rtc/rtc-mc13xxx.c
+++ b/drivers/rtc/rtc-mc13xxx.c
@@ -401,6 +401,7 @@
}, {
.name = "mc13892-rtc",
},
+ { }
};
static struct platform_driver mc13xxx_rtc_driver = {
diff --git a/drivers/rtc/rtc-omap.c b/drivers/rtc/rtc-omap.c
index de0dd7b..bcae8dd 100644
--- a/drivers/rtc/rtc-omap.c
+++ b/drivers/rtc/rtc-omap.c
@@ -394,7 +394,7 @@
return 0;
fail2:
- free_irq(omap_rtc_timer, NULL);
+ free_irq(omap_rtc_timer, rtc);
fail1:
rtc_device_unregister(rtc);
fail0:
diff --git a/drivers/rtc/rtc-s3c.c b/drivers/rtc/rtc-s3c.c
index 7149649..b3466c4 100644
--- a/drivers/rtc/rtc-s3c.c
+++ b/drivers/rtc/rtc-s3c.c
@@ -336,7 +336,6 @@
/* do not clear AIE here, it may be needed for wake */
- s3c_rtc_setpie(dev, 0);
free_irq(s3c_rtc_alarmno, rtc_dev);
free_irq(s3c_rtc_tickno, rtc_dev);
}
@@ -408,7 +407,6 @@
platform_set_drvdata(dev, NULL);
rtc_device_unregister(rtc);
- s3c_rtc_setpie(&dev->dev, 0);
s3c_rtc_setaie(&dev->dev, 0);
clk_disable(rtc_clk);
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 6d5c7ff..ab55c2f 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -443,7 +443,7 @@
&sdev->request_queue->queue_flags);
if (flagset)
queue_flag_set(QUEUE_FLAG_REENTER, sdev->request_queue);
- __blk_run_queue(sdev->request_queue, false);
+ __blk_run_queue(sdev->request_queue);
if (flagset)
queue_flag_clear(QUEUE_FLAG_REENTER, sdev->request_queue);
spin_unlock(sdev->request_queue->queue_lock);
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index fdf3fa6..28c3350 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -3829,7 +3829,7 @@
!test_bit(QUEUE_FLAG_REENTER, &rport->rqst_q->queue_flags);
if (flagset)
queue_flag_set(QUEUE_FLAG_REENTER, rport->rqst_q);
- __blk_run_queue(rport->rqst_q, false);
+ __blk_run_queue(rport->rqst_q);
if (flagset)
queue_flag_clear(QUEUE_FLAG_REENTER, rport->rqst_q);
spin_unlock_irqrestore(rport->rqst_q->queue_lock, flags);
diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig
index 41b6e51..006489d 100644
--- a/drivers/usb/Kconfig
+++ b/drivers/usb/Kconfig
@@ -66,6 +66,7 @@
default y if ARCH_VT8500
default y if PLAT_SPEAR
default y if ARCH_MSM
+ default y if MICROBLAZE
default PCI
# ARM SA1111 chips have a non-PCI based "OHCI-compatible" USB host interface.
diff --git a/drivers/usb/core/devices.c b/drivers/usb/core/devices.c
index a3d2e23..96fdfb8 100644
--- a/drivers/usb/core/devices.c
+++ b/drivers/usb/core/devices.c
@@ -221,7 +221,7 @@
break;
case USB_ENDPOINT_XFER_INT:
type = "Int.";
- if (speed == USB_SPEED_HIGH)
+ if (speed == USB_SPEED_HIGH || speed == USB_SPEED_SUPER)
interval = 1 << (desc->bInterval - 1);
else
interval = desc->bInterval;
@@ -229,7 +229,8 @@
default: /* "can't happen" */
return start;
}
- interval *= (speed == USB_SPEED_HIGH) ? 125 : 1000;
+ interval *= (speed == USB_SPEED_HIGH ||
+ speed == USB_SPEED_SUPER) ? 125 : 1000;
if (interval % 1000)
unit = 'u';
else {
@@ -542,8 +543,9 @@
if (level == 0) {
int max;
- /* high speed reserves 80%, full/low reserves 90% */
- if (usbdev->speed == USB_SPEED_HIGH)
+ /* super/high speed reserves 80%, full/low reserves 90% */
+ if (usbdev->speed == USB_SPEED_HIGH ||
+ usbdev->speed == USB_SPEED_SUPER)
max = 800;
else
max = FRAME_TIME_MAX_USECS_ALLOC;
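With the fix above, SuperSpeed endpoints take the same two branches as HighSpeed: exponential decoding (1 << (bInterval - 1)) and 125 us microframe units. A standalone check of the arithmetic for bInterval = 4:

	#include <stdio.h>

	int main(void)
	{
		int bInterval = 4, hs_like;

		for (hs_like = 0; hs_like <= 1; hs_like++) {
			/* HS/SS: 2^(bInterval-1) units; FS/LS: bInterval */
			int units = hs_like ? 1 << (bInterval - 1) : bInterval;
			int usec = units * (hs_like ? 125 : 1000);

			printf("%s: %d us\n",
			       hs_like ? "high/super" : "full/low", usec);
		}
		return 0;	/* prints 1000 us vs 4000 us */
	}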
diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
index 8eed05d2..77a7fae 100644
--- a/drivers/usb/core/hcd.c
+++ b/drivers/usb/core/hcd.c
@@ -1908,7 +1908,7 @@
/* Streams only apply to bulk endpoints. */
for (i = 0; i < num_eps; i++)
- if (!usb_endpoint_xfer_bulk(&eps[i]->desc))
+ if (!eps[i] || !usb_endpoint_xfer_bulk(&eps[i]->desc))
return;
hcd->driver->free_streams(hcd, dev, eps, num_eps, mem_flags);
diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
index 8fb7549..93720bdc 100644
--- a/drivers/usb/core/hub.c
+++ b/drivers/usb/core/hub.c
@@ -2285,7 +2285,17 @@
}
/* see 7.1.7.6 */
- status = set_port_feature(hub->hdev, port1, USB_PORT_FEAT_SUSPEND);
+	/* Clear PORT_POWER if it's a USB 3.0 device connected to a USB 3.0
+ * external hub.
+ * FIXME: this is a temporary workaround to make the system able
+ * to suspend/resume.
+ */
+ if ((hub->hdev->parent != NULL) && hub_is_superspeed(hub->hdev))
+ status = clear_port_feature(hub->hdev, port1,
+ USB_PORT_FEAT_POWER);
+ else
+ status = set_port_feature(hub->hdev, port1,
+ USB_PORT_FEAT_SUSPEND);
if (status) {
dev_dbg(hub->intfdev, "can't suspend port %d, status %d\n",
port1, status);
diff --git a/drivers/usb/gadget/f_audio.c b/drivers/usb/gadget/f_audio.c
index 9abecfd..0111f8a 100644
--- a/drivers/usb/gadget/f_audio.c
+++ b/drivers/usb/gadget/f_audio.c
@@ -706,6 +706,7 @@
struct f_audio *audio = func_to_audio(f);
usb_free_descriptors(f->descriptors);
+ usb_free_descriptors(f->hs_descriptors);
kfree(audio);
}
diff --git a/drivers/usb/gadget/f_eem.c b/drivers/usb/gadget/f_eem.c
index 95dd466..b3c3042 100644
--- a/drivers/usb/gadget/f_eem.c
+++ b/drivers/usb/gadget/f_eem.c
@@ -314,6 +314,9 @@
static void eem_cmd_complete(struct usb_ep *ep, struct usb_request *req)
{
+ struct sk_buff *skb = (struct sk_buff *)req->context;
+
+ dev_kfree_skb_any(skb);
}
/*
@@ -428,10 +431,11 @@
skb_trim(skb2, len);
put_unaligned_le16(BIT(15) | BIT(11) | len,
skb_push(skb2, 2));
- skb_copy_bits(skb, 0, req->buf, skb->len);
- req->length = skb->len;
+ skb_copy_bits(skb2, 0, req->buf, skb2->len);
+ req->length = skb2->len;
req->complete = eem_cmd_complete;
req->zero = 1;
+ req->context = skb2;
if (usb_ep_queue(port->in_ep, req, GFP_ATOMIC))
DBG(cdev, "echo response queue fail\n");
break;
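The fix above queues the freshly built skb2 (the old code copied from the wrong skb) and hands ownership to the request via req->context, so the completion callback can free it from any context. A minimal sketch of that hand-off, with hypothetical example_* names:

	/* Hedged sketch: the queuer stashes the buffer in req->context;
	 * the completion -- possibly in IRQ context -- frees it. */
	static void example_complete(struct usb_ep *ep,
				     struct usb_request *req)
	{
		dev_kfree_skb_any((struct sk_buff *)req->context);
	}

	static int example_queue(struct usb_ep *ep, struct usb_request *req,
				 struct sk_buff *skb2)
	{
		skb_copy_bits(skb2, 0, req->buf, skb2->len);
		req->length = skb2->len;
		req->complete = example_complete;
		req->context = skb2;	/* ownership moves to the request */
		return usb_ep_queue(ep, req, GFP_ATOMIC);
	}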
diff --git a/drivers/usb/gadget/fsl_qe_udc.c b/drivers/usb/gadget/fsl_qe_udc.c
index aee7e3c..36613b3 100644
--- a/drivers/usb/gadget/fsl_qe_udc.c
+++ b/drivers/usb/gadget/fsl_qe_udc.c
@@ -1148,6 +1148,12 @@
static int txcomplete(struct qe_ep *ep, unsigned char restart)
{
if (ep->tx_req != NULL) {
+ struct qe_req *req = ep->tx_req;
+ unsigned zlp = 0, last_len = 0;
+
+ last_len = min_t(unsigned, req->req.length - ep->sent,
+ ep->ep.maxpacket);
+
if (!restart) {
int asent = ep->last;
ep->sent += asent;
@@ -1156,9 +1162,18 @@
ep->last = 0;
}
+		/* zlp needed when req->req.zero is set */
+ if (req->req.zero) {
+ if (last_len == 0 ||
+ (req->req.length % ep->ep.maxpacket) != 0)
+ zlp = 0;
+ else
+ zlp = 1;
+ } else
+ zlp = 0;
+
/* a request already were transmitted completely */
- if ((ep->tx_req->req.length - ep->sent) <= 0) {
- ep->tx_req->req.actual = (unsigned int)ep->sent;
+ if (((ep->tx_req->req.length - ep->sent) <= 0) && !zlp) {
done(ep, ep->tx_req, 0);
ep->tx_req = NULL;
ep->last = 0;
@@ -1191,6 +1206,7 @@
buf = (u8 *)ep->tx_req->req.buf + ep->sent;
if (buf && size) {
ep->last = size;
+ ep->tx_req->req.actual += size;
frame_set_data(frame, buf);
frame_set_length(frame, size);
frame_set_status(frame, FRAME_OK);
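The ZLP logic above arms a zero-length packet only when the gadget set req.zero, the last chunk was a full packet, and the total length lands exactly on a packet boundary -- the USB rule that a boundary-aligned transfer needs a ZLP terminator. A standalone check with made-up sizes:

	#include <stdio.h>

	static int needs_zlp(unsigned len, unsigned maxpacket,
			     int zero_flag, unsigned last_len)
	{
		if (!zero_flag)
			return 0;
		if (last_len == 0 || (len % maxpacket) != 0)
			return 0;
		return 1;
	}

	int main(void)
	{
		printf("%d\n", needs_zlp(512, 64, 1, 64)); /* 1: on boundary */
		printf("%d\n", needs_zlp(500, 64, 1, 52)); /* 0: short tail */
		return 0;
	}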
diff --git a/drivers/usb/gadget/inode.c b/drivers/usb/gadget/inode.c
index 3ed73f4..a01383f 100644
--- a/drivers/usb/gadget/inode.c
+++ b/drivers/usb/gadget/inode.c
@@ -386,8 +386,10 @@
/* halt any endpoint by doing a "wrong direction" i/o call */
if (usb_endpoint_dir_in(&data->desc)) {
- if (usb_endpoint_xfer_isoc(&data->desc))
+ if (usb_endpoint_xfer_isoc(&data->desc)) {
+ mutex_unlock(&data->lock);
return -EINVAL;
+ }
DBG (data->dev, "%s halt\n", data->name);
spin_lock_irq (&data->dev->lock);
if (likely (data->ep != NULL))
diff --git a/drivers/usb/gadget/pch_udc.c b/drivers/usb/gadget/pch_udc.c
index 3e4b35e..68dbcc3 100644
--- a/drivers/usb/gadget/pch_udc.c
+++ b/drivers/usb/gadget/pch_udc.c
@@ -1608,7 +1608,7 @@
return -EINVAL;
if (!dev->driver || (dev->gadget.speed == USB_SPEED_UNKNOWN))
return -ESHUTDOWN;
- spin_lock_irqsave(&ep->dev->lock, iflags);
+ spin_lock_irqsave(&dev->lock, iflags);
/* map the buffer for dma */
if (usbreq->length &&
((usbreq->dma == DMA_ADDR_INVALID) || !usbreq->dma)) {
@@ -1625,8 +1625,10 @@
DMA_FROM_DEVICE);
} else {
req->buf = kzalloc(usbreq->length, GFP_ATOMIC);
- if (!req->buf)
- return -ENOMEM;
+ if (!req->buf) {
+ retval = -ENOMEM;
+ goto probe_end;
+ }
if (ep->in) {
memcpy(req->buf, usbreq->buf, usbreq->length);
req->dma = dma_map_single(&dev->pdev->dev,
diff --git a/drivers/usb/gadget/r8a66597-udc.c b/drivers/usb/gadget/r8a66597-udc.c
index 0151185..6dcc1f6 100644
--- a/drivers/usb/gadget/r8a66597-udc.c
+++ b/drivers/usb/gadget/r8a66597-udc.c
@@ -1083,7 +1083,9 @@
if (dvsq == DS_DFLT) {
/* bus reset */
+ spin_unlock(&r8a66597->lock);
r8a66597->driver->disconnect(&r8a66597->gadget);
+ spin_lock(&r8a66597->lock);
r8a66597_update_usb_speed(r8a66597);
}
if (r8a66597->old_dvsq == DS_CNFG && dvsq != DS_CNFG)
diff --git a/drivers/usb/host/ehci-q.c b/drivers/usb/host/ehci-q.c
index 98ded66..42abd0f 100644
--- a/drivers/usb/host/ehci-q.c
+++ b/drivers/usb/host/ehci-q.c
@@ -1247,24 +1247,27 @@
static void scan_async (struct ehci_hcd *ehci)
{
+ bool stopped;
struct ehci_qh *qh;
enum ehci_timer_action action = TIMER_IO_WATCHDOG;
ehci->stamp = ehci_readl(ehci, &ehci->regs->frame_index);
timer_action_done (ehci, TIMER_ASYNC_SHRINK);
rescan:
+ stopped = !HC_IS_RUNNING(ehci_to_hcd(ehci)->state);
qh = ehci->async->qh_next.qh;
if (likely (qh != NULL)) {
do {
/* clean any finished work for this qh */
- if (!list_empty (&qh->qtd_list)
- && qh->stamp != ehci->stamp) {
+ if (!list_empty(&qh->qtd_list) && (stopped ||
+ qh->stamp != ehci->stamp)) {
int temp;
/* unlinks could happen here; completion
* reporting drops the lock. rescan using
* the latest schedule, but don't rescan
- * qhs we already finished (no looping).
+ * qhs we already finished (no looping)
+ * unless the controller is stopped.
*/
qh = qh_get (qh);
qh->stamp = ehci->stamp;
@@ -1285,9 +1288,9 @@
*/
if (list_empty(&qh->qtd_list)
&& qh->qh_state == QH_STATE_LINKED) {
- if (!ehci->reclaim
- && ((ehci->stamp - qh->stamp) & 0x1fff)
- >= (EHCI_SHRINK_FRAMES * 8))
+ if (!ehci->reclaim && (stopped ||
+ ((ehci->stamp - qh->stamp) & 0x1fff)
+ >= EHCI_SHRINK_FRAMES * 8))
start_unlink_async(ehci, qh);
else
action = TIMER_ASYNC_SHRINK;
diff --git a/drivers/usb/host/isp1760-hcd.c b/drivers/usb/host/isp1760-hcd.c
index f50e84a..795345a 100644
--- a/drivers/usb/host/isp1760-hcd.c
+++ b/drivers/usb/host/isp1760-hcd.c
@@ -295,7 +295,7 @@
}
dev_err(hcd->self.controller,
- "%s: Can not allocate %lu bytes of memory\n"
+ "%s: Cannot allocate %zu bytes of memory\n"
"Current memory map:\n",
__func__, qtd->length);
for (i = 0; i < BLOCKS; i++) {
diff --git a/drivers/usb/host/ohci-au1xxx.c b/drivers/usb/host/ohci-au1xxx.c
index 17a6043..958d985f 100644
--- a/drivers/usb/host/ohci-au1xxx.c
+++ b/drivers/usb/host/ohci-au1xxx.c
@@ -33,7 +33,7 @@
#ifdef __LITTLE_ENDIAN
#define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C)
-#elif __BIG_ENDIAN
+#elif defined(__BIG_ENDIAN)
#define USBH_ENABLE_INIT (USBH_ENABLE_CE | USBH_ENABLE_E | USBH_ENABLE_C | \
USBH_ENABLE_BE)
#else
diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
index 1d586d4..9b166d7 100644
--- a/drivers/usb/host/pci-quirks.c
+++ b/drivers/usb/host/pci-quirks.c
@@ -84,65 +84,92 @@
{
u8 rev = 0;
unsigned long flags;
+ struct amd_chipset_info info;
+ int ret;
spin_lock_irqsave(&amd_lock, flags);
- amd_chipset.probe_count++;
/* probe only once */
- if (amd_chipset.probe_count > 1) {
+ if (amd_chipset.probe_count > 0) {
+ amd_chipset.probe_count++;
spin_unlock_irqrestore(&amd_lock, flags);
return amd_chipset.probe_result;
}
+ memset(&info, 0, sizeof(info));
+ spin_unlock_irqrestore(&amd_lock, flags);
- amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL);
- if (amd_chipset.smbus_dev) {
- rev = amd_chipset.smbus_dev->revision;
+ info.smbus_dev = pci_get_device(PCI_VENDOR_ID_ATI, 0x4385, NULL);
+ if (info.smbus_dev) {
+ rev = info.smbus_dev->revision;
if (rev >= 0x40)
- amd_chipset.sb_type = 1;
+ info.sb_type = 1;
else if (rev >= 0x30 && rev <= 0x3b)
- amd_chipset.sb_type = 3;
+ info.sb_type = 3;
} else {
- amd_chipset.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD,
- 0x780b, NULL);
- if (!amd_chipset.smbus_dev) {
- spin_unlock_irqrestore(&amd_lock, flags);
- return 0;
+ info.smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD,
+ 0x780b, NULL);
+ if (!info.smbus_dev) {
+ ret = 0;
+ goto commit;
}
- rev = amd_chipset.smbus_dev->revision;
+
+ rev = info.smbus_dev->revision;
if (rev >= 0x11 && rev <= 0x18)
- amd_chipset.sb_type = 2;
+ info.sb_type = 2;
}
- if (amd_chipset.sb_type == 0) {
- if (amd_chipset.smbus_dev) {
- pci_dev_put(amd_chipset.smbus_dev);
- amd_chipset.smbus_dev = NULL;
+ if (info.sb_type == 0) {
+ if (info.smbus_dev) {
+ pci_dev_put(info.smbus_dev);
+ info.smbus_dev = NULL;
}
- spin_unlock_irqrestore(&amd_lock, flags);
- return 0;
+ ret = 0;
+ goto commit;
}
- amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL);
- if (amd_chipset.nb_dev) {
- amd_chipset.nb_type = 1;
+ info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x9601, NULL);
+ if (info.nb_dev) {
+ info.nb_type = 1;
} else {
- amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
- 0x1510, NULL);
- if (amd_chipset.nb_dev) {
- amd_chipset.nb_type = 2;
- } else {
- amd_chipset.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
- 0x9600, NULL);
- if (amd_chipset.nb_dev)
- amd_chipset.nb_type = 3;
+ info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD, 0x1510, NULL);
+ if (info.nb_dev) {
+ info.nb_type = 2;
+ } else {
+ info.nb_dev = pci_get_device(PCI_VENDOR_ID_AMD,
+ 0x9600, NULL);
+ if (info.nb_dev)
+ info.nb_type = 3;
}
}
- amd_chipset.probe_result = 1;
+ ret = info.probe_result = 1;
printk(KERN_DEBUG "QUIRK: Enable AMD PLL fix\n");
- spin_unlock_irqrestore(&amd_lock, flags);
- return amd_chipset.probe_result;
+commit:
+
+ spin_lock_irqsave(&amd_lock, flags);
+ if (amd_chipset.probe_count > 0) {
+ /* race - someone else was faster - drop devices */
+
+		/* Mark that we were here */
+ amd_chipset.probe_count++;
+ ret = amd_chipset.probe_result;
+
+ spin_unlock_irqrestore(&amd_lock, flags);
+
+ if (info.nb_dev)
+ pci_dev_put(info.nb_dev);
+ if (info.smbus_dev)
+ pci_dev_put(info.smbus_dev);
+
+ } else {
+ /* no race - commit the result */
+ info.probe_count++;
+ amd_chipset = info;
+ spin_unlock_irqrestore(&amd_lock, flags);
+ }
+
+ return ret;
}
EXPORT_SYMBOL_GPL(usb_amd_find_chipset_info);
@@ -284,6 +311,7 @@
void usb_amd_dev_put(void)
{
+ struct pci_dev *nb, *smbus;
unsigned long flags;
spin_lock_irqsave(&amd_lock, flags);
@@ -294,20 +322,23 @@
return;
}
- if (amd_chipset.nb_dev) {
- pci_dev_put(amd_chipset.nb_dev);
- amd_chipset.nb_dev = NULL;
- }
- if (amd_chipset.smbus_dev) {
- pci_dev_put(amd_chipset.smbus_dev);
- amd_chipset.smbus_dev = NULL;
- }
+	/* save them so we can pci_dev_put() outside the spinlock */
+ nb = amd_chipset.nb_dev;
+ smbus = amd_chipset.smbus_dev;
+
+ amd_chipset.nb_dev = NULL;
+ amd_chipset.smbus_dev = NULL;
amd_chipset.nb_type = 0;
amd_chipset.sb_type = 0;
amd_chipset.isoc_reqs = 0;
amd_chipset.probe_result = 0;
spin_unlock_irqrestore(&amd_lock, flags);
+
+ if (nb)
+ pci_dev_put(nb);
+ if (smbus)
+ pci_dev_put(smbus);
}
EXPORT_SYMBOL_GPL(usb_amd_dev_put);
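The rewrite above is a lock-drop-and-recheck pattern: the slow pci_get_device() walk runs on a stack-local copy with amd_lock released, then the lock is retaken and the result is either committed or, if another CPU probed first, its device references are dropped outside the lock. A minimal sketch of the shape, all names hypothetical:

	static struct state cached;	/* guarded by g_lock */
	static int probed;

	static int probe_once(void)
	{
		struct state local = { 0 };
		unsigned long flags;
		int ret;

		spin_lock_irqsave(&g_lock, flags);
		if (probed) {			/* fast path */
			ret = cached.result;
			spin_unlock_irqrestore(&g_lock, flags);
			return ret;
		}
		spin_unlock_irqrestore(&g_lock, flags);

		ret = do_slow_probe(&local);	/* may sleep, takes refs */

		spin_lock_irqsave(&g_lock, flags);
		if (probed) {			/* lost the race */
			ret = cached.result;
			spin_unlock_irqrestore(&g_lock, flags);
			drop_refs(&local);	/* put refs unlocked */
		} else {
			cached = local;		/* commit our result */
			probed = 1;
			spin_unlock_irqrestore(&g_lock, flags);
		}
		return ret;
	}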
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index a003e79..627f343 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -846,7 +846,7 @@
* Skip ports that don't have known speeds, or have duplicate
* Extended Capabilities port speed entries.
*/
- if (port_speed == 0 || port_speed == -1)
+ if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
continue;
/*
@@ -974,6 +974,47 @@
return 0;
}
+/*
+ * Convert interval expressed as 2^(bInterval - 1) == interval into
+ * straight exponent value 2^n == interval.
+ */
+static unsigned int xhci_parse_exponent_interval(struct usb_device *udev,
+ struct usb_host_endpoint *ep)
+{
+ unsigned int interval;
+
+ interval = clamp_val(ep->desc.bInterval, 1, 16) - 1;
+ if (interval != ep->desc.bInterval - 1)
+ dev_warn(&udev->dev,
+ "ep %#x - rounding interval to %d microframes\n",
+ ep->desc.bEndpointAddress,
+ 1 << interval);
+
+ return interval;
+}
+
+/*
+ * Convert bInterval expressed in frames (in 1-255 range) to exponent of
+ * microframes, rounded down to nearest power of 2.
+ */
+static unsigned int xhci_parse_frame_interval(struct usb_device *udev,
+ struct usb_host_endpoint *ep)
+{
+ unsigned int interval;
+
+ interval = fls(8 * ep->desc.bInterval) - 1;
+ interval = clamp_val(interval, 3, 10);
+ if ((1 << interval) != 8 * ep->desc.bInterval)
+ dev_warn(&udev->dev,
+ "ep %#x - rounding interval to %d microframes, ep desc says %d microframes\n",
+ ep->desc.bEndpointAddress,
+ 1 << interval,
+ 8 * ep->desc.bInterval);
+
+ return interval;
+}
+
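Worked numbers for the two helpers above: the exponent form clamps bInterval to 1..16 and subtracts one (so anything above 16 rounds down to 2^15 with a warning), while the frame form converts frames to microframes (x8) and rounds down to a power of two clamped between 2^3 and 2^10. A rough userspace re-run:

	#include <stdio.h>

	/* fls(x): 1-based index of the highest set bit, as in the kernel */
	static int fls_(unsigned x)
	{
		int n = 0;

		while (x) {
			n++;
			x >>= 1;
		}
		return n;
	}

	int main(void)
	{
		unsigned b = 9;			/* full-speed isoc, frames */
		unsigned i = fls_(8 * b) - 1;	/* fls(72) - 1 == 6 */

		if (i < 3)
			i = 3;
		if (i > 10)
			i = 10;
		printf("%u frames -> 2^%u = %u uframes\n", b, i, 1u << i);

		b = 20;				/* HS/SS exponent form */
		if (b > 16)
			b = 16;			/* clamp, then subtract 1 */
		printf("bInterval 20 -> 2^%u uframes\n", b - 1);
		return 0;
	}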
/* Return the polling or NAK interval.
*
* The polling interval is expressed in "microframes". If xHCI's Interval field
@@ -982,7 +1023,7 @@
* The NAK interval is one NAK per 1 to 255 microframes, or no NAKs if interval
* is set to 0.
*/
-static inline unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
+static unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
unsigned int interval = 0;
@@ -991,45 +1032,38 @@
case USB_SPEED_HIGH:
/* Max NAK rate */
if (usb_endpoint_xfer_control(&ep->desc) ||
- usb_endpoint_xfer_bulk(&ep->desc))
+ usb_endpoint_xfer_bulk(&ep->desc)) {
interval = ep->desc.bInterval;
+ break;
+ }
/* Fall through - SS and HS isoc/int have same decoding */
+
case USB_SPEED_SUPER:
if (usb_endpoint_xfer_int(&ep->desc) ||
- usb_endpoint_xfer_isoc(&ep->desc)) {
- if (ep->desc.bInterval == 0)
- interval = 0;
- else
- interval = ep->desc.bInterval - 1;
- if (interval > 15)
- interval = 15;
- if (interval != ep->desc.bInterval + 1)
- dev_warn(&udev->dev, "ep %#x - rounding interval to %d microframes\n",
- ep->desc.bEndpointAddress, 1 << interval);
+ usb_endpoint_xfer_isoc(&ep->desc)) {
+ interval = xhci_parse_exponent_interval(udev, ep);
}
break;
- /* Convert bInterval (in 1-255 frames) to microframes and round down to
- * nearest power of 2.
- */
+
case USB_SPEED_FULL:
+ if (usb_endpoint_xfer_int(&ep->desc)) {
+ interval = xhci_parse_exponent_interval(udev, ep);
+ break;
+ }
+ /*
+ * Fall through for isochronous endpoint interval decoding
+ * since it uses the same rules as low speed interrupt
+ * endpoints.
+ */
+
case USB_SPEED_LOW:
if (usb_endpoint_xfer_int(&ep->desc) ||
- usb_endpoint_xfer_isoc(&ep->desc)) {
- interval = fls(8*ep->desc.bInterval) - 1;
- if (interval > 10)
- interval = 10;
- if (interval < 3)
- interval = 3;
- if ((1 << interval) != 8*ep->desc.bInterval)
- dev_warn(&udev->dev,
- "ep %#x - rounding interval"
- " to %d microframes, "
- "ep desc says %d microframes\n",
- ep->desc.bEndpointAddress,
- 1 << interval,
- 8*ep->desc.bInterval);
+ usb_endpoint_xfer_isoc(&ep->desc)) {
+
+ interval = xhci_parse_frame_interval(udev, ep);
}
break;
+
default:
BUG();
}
@@ -1041,7 +1075,7 @@
* transaction opportunities per microframe", but that goes in the Max Burst
* endpoint context field.
*/
-static inline u32 xhci_get_endpoint_mult(struct usb_device *udev,
+static u32 xhci_get_endpoint_mult(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
if (udev->speed != USB_SPEED_SUPER ||
@@ -1050,7 +1084,7 @@
return ep->ss_ep_comp.bmAttributes;
}
-static inline u32 xhci_get_endpoint_type(struct usb_device *udev,
+static u32 xhci_get_endpoint_type(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
int in;
@@ -1084,7 +1118,7 @@
* Basically, this is the maxpacket size, multiplied by the burst size
* and mult size.
*/
-static inline u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
+static u32 xhci_get_max_esit_payload(struct xhci_hcd *xhci,
struct usb_device *udev,
struct usb_host_endpoint *ep)
{
@@ -1727,12 +1761,12 @@
* found a similar duplicate.
*/
if (xhci->port_array[i] != major_revision &&
- xhci->port_array[i] != (u8) -1) {
+ xhci->port_array[i] != DUPLICATE_ENTRY) {
if (xhci->port_array[i] == 0x03)
xhci->num_usb3_ports--;
else
xhci->num_usb2_ports--;
- xhci->port_array[i] = (u8) -1;
+ xhci->port_array[i] = DUPLICATE_ENTRY;
}
/* FIXME: Should we disable the port? */
continue;
@@ -1831,7 +1865,7 @@
for (i = 0; i < num_ports; i++) {
if (xhci->port_array[i] == 0x03 ||
xhci->port_array[i] == 0 ||
- xhci->port_array[i] == -1)
+ xhci->port_array[i] == DUPLICATE_ENTRY)
continue;
xhci->usb2_ports[port_index] =
diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
index ceea9f3..a10494c 100644
--- a/drivers/usb/host/xhci-pci.c
+++ b/drivers/usb/host/xhci-pci.c
@@ -114,6 +114,10 @@
if (pdev->vendor == PCI_VENDOR_ID_NEC)
xhci->quirks |= XHCI_NEC_HOST;
+ /* AMD PLL quirk */
+ if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
+ xhci->quirks |= XHCI_AMD_PLL_FIX;
+
/* Make sure the HC is halted. */
retval = xhci_halt(xhci);
if (retval)
diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index cfc1ad9..7437386 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -93,7 +93,7 @@
/* Does this link TRB point to the first segment in a ring,
* or was the previous TRB the last TRB on the last segment in the ERST?
*/
-static inline bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
+static bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
struct xhci_segment *seg, union xhci_trb *trb)
{
if (ring == xhci->event_ring)
@@ -107,7 +107,7 @@
* segment? I.e. would the updated event TRB pointer step off the end of the
* event seg?
*/
-static inline int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
+static int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
struct xhci_segment *seg, union xhci_trb *trb)
{
if (ring == xhci->event_ring)
@@ -116,7 +116,7 @@
return (trb->link.control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK);
}
-static inline int enqueue_is_link_trb(struct xhci_ring *ring)
+static int enqueue_is_link_trb(struct xhci_ring *ring)
{
struct xhci_link_trb *link = &ring->enqueue->link;
return ((link->control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK));
@@ -592,7 +592,7 @@
ep->ep_state |= SET_DEQ_PENDING;
}
-static inline void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
+static void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
struct xhci_virt_ep *ep)
{
ep->ep_state &= ~EP_HALT_PENDING;
@@ -619,6 +619,13 @@
/* Only giveback urb when this is the last td in urb */
if (urb_priv->td_cnt == urb_priv->length) {
+ if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+ xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+ if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
+ if (xhci->quirks & XHCI_AMD_PLL_FIX)
+ usb_amd_quirk_pll_enable();
+ }
+ }
usb_hcd_unlink_urb_from_ep(hcd, urb);
xhci_dbg(xhci, "Giveback %s URB %p\n", adjective, urb);
@@ -1209,7 +1216,7 @@
* Skip ports that don't have known speeds, or have duplicate
* Extended Capabilities port speed entries.
*/
- if (port_speed == 0 || port_speed == -1)
+ if (port_speed == 0 || port_speed == DUPLICATE_ENTRY)
continue;
/*
@@ -1235,6 +1242,7 @@
u8 major_revision;
struct xhci_bus_state *bus_state;
u32 __iomem **port_array;
+ bool bogus_port_status = false;
/* Port status change events always have a successful completion code */
if (GET_COMP_CODE(event->generic.field[2]) != COMP_SUCCESS) {
@@ -1247,6 +1255,7 @@
max_ports = HCS_MAX_PORTS(xhci->hcs_params1);
if ((port_id <= 0) || (port_id > max_ports)) {
xhci_warn(xhci, "Invalid port id %d\n", port_id);
+ bogus_port_status = true;
goto cleanup;
}
@@ -1258,12 +1267,14 @@
xhci_warn(xhci, "Event for port %u not in "
"Extended Capabilities, ignoring.\n",
port_id);
+ bogus_port_status = true;
goto cleanup;
}
- if (major_revision == (u8) -1) {
+ if (major_revision == DUPLICATE_ENTRY) {
xhci_warn(xhci, "Event for port %u duplicated in"
"Extended Capabilities, ignoring.\n",
port_id);
+ bogus_port_status = true;
goto cleanup;
}
@@ -1335,6 +1346,13 @@
/* Update event ring dequeue pointer before dropping the lock */
inc_deq(xhci, xhci->event_ring, true);
+ /* Don't make the USB core poll the roothub if we got a bad port status
+ * change event. Besides, at that point we can't tell which roothub
+ * (USB 2.0 or USB 3.0) to kick.
+ */
+ if (bogus_port_status)
+ return;
+
spin_unlock(&xhci->lock);
/* Pass this up to the core */
usb_hcd_poll_rh_status(hcd);
@@ -1554,8 +1572,17 @@
urb_priv->td_cnt++;
/* Giveback the urb when all the tds are completed */
- if (urb_priv->td_cnt == urb_priv->length)
+ if (urb_priv->td_cnt == urb_priv->length) {
ret = 1;
+ if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) {
+ xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs--;
+ if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs
+ == 0) {
+ if (xhci->quirks & XHCI_AMD_PLL_FIX)
+ usb_amd_quirk_pll_enable();
+ }
+ }
+ }
}
return ret;
@@ -1675,71 +1702,52 @@
struct urb_priv *urb_priv;
int idx;
int len = 0;
- int skip_td = 0;
union xhci_trb *cur_trb;
struct xhci_segment *cur_seg;
+ struct usb_iso_packet_descriptor *frame;
u32 trb_comp_code;
+ bool skip_td = false;
ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer);
trb_comp_code = GET_COMP_CODE(event->transfer_len);
urb_priv = td->urb->hcpriv;
idx = urb_priv->td_cnt;
+ frame = &td->urb->iso_frame_desc[idx];
- if (ep->skip) {
- /* The transfer is partly done */
- *status = -EXDEV;
- td->urb->iso_frame_desc[idx].status = -EXDEV;
- } else {
- /* handle completion code */
- switch (trb_comp_code) {
- case COMP_SUCCESS:
- td->urb->iso_frame_desc[idx].status = 0;
- xhci_dbg(xhci, "Successful isoc transfer!\n");
- break;
- case COMP_SHORT_TX:
- if (td->urb->transfer_flags & URB_SHORT_NOT_OK)
- td->urb->iso_frame_desc[idx].status =
- -EREMOTEIO;
- else
- td->urb->iso_frame_desc[idx].status = 0;
- break;
- case COMP_BW_OVER:
- td->urb->iso_frame_desc[idx].status = -ECOMM;
- skip_td = 1;
- break;
- case COMP_BUFF_OVER:
- case COMP_BABBLE:
- td->urb->iso_frame_desc[idx].status = -EOVERFLOW;
- skip_td = 1;
- break;
- case COMP_STALL:
- td->urb->iso_frame_desc[idx].status = -EPROTO;
- skip_td = 1;
- break;
- case COMP_STOP:
- case COMP_STOP_INVAL:
- break;
- default:
- td->urb->iso_frame_desc[idx].status = -1;
- break;
- }
+ /* handle completion code */
+ switch (trb_comp_code) {
+ case COMP_SUCCESS:
+ frame->status = 0;
+ xhci_dbg(xhci, "Successful isoc transfer!\n");
+ break;
+ case COMP_SHORT_TX:
+ frame->status = td->urb->transfer_flags & URB_SHORT_NOT_OK ?
+ -EREMOTEIO : 0;
+ break;
+ case COMP_BW_OVER:
+ frame->status = -ECOMM;
+ skip_td = true;
+ break;
+ case COMP_BUFF_OVER:
+ case COMP_BABBLE:
+ frame->status = -EOVERFLOW;
+ skip_td = true;
+ break;
+ case COMP_STALL:
+ frame->status = -EPROTO;
+ skip_td = true;
+ break;
+ case COMP_STOP:
+ case COMP_STOP_INVAL:
+ break;
+ default:
+ frame->status = -1;
+ break;
}
- /* calc actual length */
- if (ep->skip) {
- td->urb->iso_frame_desc[idx].actual_length = 0;
- /* Update ring dequeue pointer */
- while (ep_ring->dequeue != td->last_trb)
- inc_deq(xhci, ep_ring, false);
- inc_deq(xhci, ep_ring, false);
- return finish_td(xhci, td, event_trb, event, ep, status, true);
- }
-
- if (trb_comp_code == COMP_SUCCESS || skip_td == 1) {
- td->urb->iso_frame_desc[idx].actual_length =
- td->urb->iso_frame_desc[idx].length;
- td->urb->actual_length +=
- td->urb->iso_frame_desc[idx].length;
+ if (trb_comp_code == COMP_SUCCESS || skip_td) {
+ frame->actual_length = frame->length;
+ td->urb->actual_length += frame->length;
} else {
for (cur_trb = ep_ring->dequeue,
cur_seg = ep_ring->deq_seg; cur_trb != event_trb;
@@ -1755,7 +1763,7 @@
TRB_LEN(event->transfer_len);
if (trb_comp_code != COMP_STOP_INVAL) {
- td->urb->iso_frame_desc[idx].actual_length = len;
+ frame->actual_length = len;
td->urb->actual_length += len;
}
}
@@ -1766,6 +1774,35 @@
return finish_td(xhci, td, event_trb, event, ep, status, false);
}
+static int skip_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ struct xhci_transfer_event *event,
+ struct xhci_virt_ep *ep, int *status)
+{
+ struct xhci_ring *ep_ring;
+ struct urb_priv *urb_priv;
+ struct usb_iso_packet_descriptor *frame;
+ int idx;
+
+ ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer);
+ urb_priv = td->urb->hcpriv;
+ idx = urb_priv->td_cnt;
+ frame = &td->urb->iso_frame_desc[idx];
+
+ /* The transfer is partly done */
+ *status = -EXDEV;
+ frame->status = -EXDEV;
+
+ /* calc actual length */
+ frame->actual_length = 0;
+
+ /* Update ring dequeue pointer */
+ while (ep_ring->dequeue != td->last_trb)
+ inc_deq(xhci, ep_ring, false);
+ inc_deq(xhci, ep_ring, false);
+
+ return finish_td(xhci, td, NULL, event, ep, status, true);
+}
+
/*
* Process bulk and interrupt tds, update urb status and actual_length.
*/
@@ -2024,36 +2061,42 @@
}
td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list);
+
/* Is this a TRB in the currently executing TD? */
event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
td->last_trb, event_dma);
- if (event_seg && ep->skip) {
+ if (!event_seg) {
+ if (!ep->skip ||
+ !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
+ /* HC is busted, give up! */
+ xhci_err(xhci,
+ "ERROR Transfer event TRB DMA ptr not "
+ "part of current TD\n");
+ return -ESHUTDOWN;
+ }
+
+ ret = skip_isoc_td(xhci, td, event, ep, &status);
+ goto cleanup;
+ }
+
+ if (ep->skip) {
xhci_dbg(xhci, "Found td. Clear skip flag.\n");
ep->skip = false;
}
- if (!event_seg &&
- (!ep->skip || !usb_endpoint_xfer_isoc(&td->urb->ep->desc))) {
- /* HC is busted, give up! */
- xhci_err(xhci, "ERROR Transfer event TRB DMA ptr not "
- "part of current TD\n");
- return -ESHUTDOWN;
- }
- if (event_seg) {
- event_trb = &event_seg->trbs[(event_dma -
- event_seg->dma) / sizeof(*event_trb)];
- /*
- * No-op TRB should not trigger interrupts.
- * If event_trb is a no-op TRB, it means the
- * corresponding TD has been cancelled. Just ignore
- * the TD.
- */
- if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK)
- == TRB_TYPE(TRB_TR_NOOP)) {
- xhci_dbg(xhci, "event_trb is a no-op TRB. "
- "Skip it\n");
- goto cleanup;
- }
+ event_trb = &event_seg->trbs[(event_dma - event_seg->dma) /
+ sizeof(*event_trb)];
+ /*
+ * No-op TRB should not trigger interrupts.
+ * If event_trb is a no-op TRB, it means the
+ * corresponding TD has been cancelled. Just ignore
+ * the TD.
+ */
+ if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK)
+ == TRB_TYPE(TRB_TR_NOOP)) {
+ xhci_dbg(xhci,
+ "event_trb is a no-op TRB. Skip it\n");
+ goto cleanup;
}
/* Now update the urb's actual_length and give back to
@@ -3126,6 +3169,12 @@
}
}
+ if (xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs == 0) {
+ if (xhci->quirks & XHCI_AMD_PLL_FIX)
+ usb_amd_quirk_pll_disable();
+ }
+ xhci_to_hcd(xhci)->self.bandwidth_isoc_reqs++;
+
giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
start_cycle, start_trb);
return 0;
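The AMD PLL quirk above is edge-triggered on the shared bandwidth_isoc_reqs counter: the PLL is switched off when the first isoc URB is queued (0 -> 1) and back on when the last TD of the last isoc URB is given back (1 -> 0), under xhci->lock in both paths. A condensed sketch of the two edges, using the real pci-quirks exports:

	/* Hedged sketch of the gating above; locking elided. */
	static void isoc_submit(struct usb_hcd *hcd, struct xhci_hcd *xhci)
	{
		if (hcd->self.bandwidth_isoc_reqs == 0 &&
		    (xhci->quirks & XHCI_AMD_PLL_FIX))
			usb_amd_quirk_pll_disable();	/* first stream */
		hcd->self.bandwidth_isoc_reqs++;
	}

	static void isoc_giveback(struct usb_hcd *hcd, struct xhci_hcd *xhci)
	{
		if (--hcd->self.bandwidth_isoc_reqs == 0 &&
		    (xhci->quirks & XHCI_AMD_PLL_FIX))
			usb_amd_quirk_pll_enable();	/* last stream done */
	}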
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 196e018..81b976e 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -550,6 +550,9 @@
del_timer_sync(&xhci->event_ring_timer);
#endif
+ if (xhci->quirks & XHCI_AMD_PLL_FIX)
+ usb_amd_dev_put();
+
xhci_dbg(xhci, "// Disabling event ring interrupts\n");
temp = xhci_readl(xhci, &xhci->op_regs->status);
xhci_writel(xhci, temp & ~STS_EINT, &xhci->op_regs->status);
@@ -771,7 +774,9 @@
/* If restore operation fails, re-initialize the HC during resume */
if ((temp & STS_SRE) || hibernated) {
- usb_root_hub_lost_power(hcd->self.root_hub);
+ /* Let the USB core know _both_ roothubs lost power. */
+ usb_root_hub_lost_power(xhci->main_hcd->self.root_hub);
+ usb_root_hub_lost_power(xhci->shared_hcd->self.root_hub);
xhci_dbg(xhci, "Stop HCD\n");
xhci_halt(xhci);
@@ -2386,10 +2391,18 @@
/* Everything but endpoint 0 is disabled, so free or cache the rings. */
last_freed_endpoint = 1;
for (i = 1; i < 31; ++i) {
- if (!virt_dev->eps[i].ring)
- continue;
- xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
- last_freed_endpoint = i;
+ struct xhci_virt_ep *ep = &virt_dev->eps[i];
+
+ if (ep->ep_state & EP_HAS_STREAMS) {
+ xhci_free_stream_info(xhci, ep->stream_info);
+ ep->stream_info = NULL;
+ ep->ep_state &= ~EP_HAS_STREAMS;
+ }
+
+ if (ep->ring) {
+ xhci_free_or_cache_endpoint_ring(xhci, virt_dev, i);
+ last_freed_endpoint = i;
+ }
}
xhci_dbg(xhci, "Output context after successful reset device cmd:\n");
xhci_dbg_ctx(xhci, virt_dev->out_ctx, last_freed_endpoint);
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 07e2630..ba1be6b 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -30,6 +30,7 @@
/* Code sharing between pci-quirks and xhci hcd */
#include "xhci-ext-caps.h"
+#include "pci-quirks.h"
/* xHCI PCI Configuration Registers */
#define XHCI_SBRN_OFFSET (0x60)
@@ -232,7 +233,7 @@
* notification type that matches a bit set in this bit field.
*/
#define DEV_NOTE_MASK (0xffff)
-#define ENABLE_DEV_NOTE(x) (1 << x)
+#define ENABLE_DEV_NOTE(x) (1 << (x))
/* Most of the device notification types should only be used for debug.
* SW does need to pay attention to function wake notifications.
*/
@@ -348,6 +349,9 @@
/* Initiate a warm port reset - complete when PORT_WRC is '1' */
#define PORT_WR (1 << 31)
+/* We mark duplicate entries with -1 */
+#define DUPLICATE_ENTRY ((u8)(-1))
+
/* Port Power Management Status and Control - port_power_base bitmasks */
/* Inactivity timer value for transitions into U1, in microseconds.
* Timeout can be up to 127us. 0xFF means an infinite timeout.
@@ -601,11 +605,11 @@
#define EP_STATE_STOPPED 3
#define EP_STATE_ERROR 4
/* Mult - Max number of bursts within an interval, in EP companion desc. */
-#define EP_MULT(p) ((p & 0x3) << 8)
+#define EP_MULT(p) (((p) & 0x3) << 8)
/* bits 10:14 are Max Primary Streams */
/* bit 15 is Linear Stream Array */
/* Interval - period between requests to an endpoint - 125u increments. */
-#define EP_INTERVAL(p) ((p & 0xff) << 16)
+#define EP_INTERVAL(p) (((p) & 0xff) << 16)
#define EP_INTERVAL_TO_UFRAMES(p) (1 << (((p) >> 16) & 0xff))
#define EP_MAXPSTREAMS_MASK (0x1f << 10)
#define EP_MAXPSTREAMS(p) (((p) << 10) & EP_MAXPSTREAMS_MASK)
@@ -1276,6 +1280,7 @@
#define XHCI_LINK_TRB_QUIRK (1 << 0)
#define XHCI_RESET_EP_QUIRK (1 << 1)
#define XHCI_NEC_HOST (1 << 2)
+#define XHCI_AMD_PLL_FIX (1 << 3)
/* There are two roothubs to keep track of bus suspend info for */
struct xhci_bus_state bus_state[2];
/* Is each xHCI roothub port a USB 3.0, USB 2.0, or USB 1.1 port? */
diff --git a/drivers/usb/musb/Kconfig b/drivers/usb/musb/Kconfig
index 4cbb7e4..74073b3 100644
--- a/drivers/usb/musb/Kconfig
+++ b/drivers/usb/musb/Kconfig
@@ -14,7 +14,7 @@
select TWL4030_USB if MACH_OMAP_3430SDP
select TWL6030_USB if MACH_OMAP_4430SDP || MACH_OMAP4_PANDA
select USB_OTG_UTILS
- tristate 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'
+ bool 'Inventra Highspeed Dual Role Controller (TI, ADI, ...)'
help
Say Y here if your system has a dual role high speed USB
controller based on the Mentor Graphics silicon IP. Then
@@ -30,8 +30,8 @@
If you do not know what this is, please say N.
- To compile this driver as a module, choose M here; the
- module will be called "musb-hdrc".
+# To compile this driver as a module, choose M here; the
+# module will be called "musb-hdrc".
choice
prompt "Platform Glue Layer"
diff --git a/drivers/usb/musb/blackfin.c b/drivers/usb/musb/blackfin.c
index 52312e8..8e2a1ff 100644
--- a/drivers/usb/musb/blackfin.c
+++ b/drivers/usb/musb/blackfin.c
@@ -21,6 +21,7 @@
#include <asm/cacheflush.h>
#include "musb_core.h"
+#include "musbhsdma.h"
#include "blackfin.h"
struct bfin_glue {
@@ -332,6 +333,27 @@
return -EIO;
}
+static int bfin_musb_adjust_channel_params(struct dma_channel *channel,
+ u16 packet_sz, u8 *mode,
+ dma_addr_t *dma_addr, u32 *len)
+{
+ struct musb_dma_channel *musb_channel = channel->private_data;
+
+ /*
+ * Anomaly 05000450 might cause data corruption when using DMA
+ * MODE 1 transmits with a short packet. So to work around this,
+ * we truncate all MODE 1 transfers down to a multiple of the
+ * max packet size, and then do the last short packet transfer
+ * (if there is any) using MODE 0.
+ */
+ if (ANOMALY_05000450) {
+ if (musb_channel->transmit && *mode == 1)
+ *len = *len - (*len % packet_sz);
+ }
+
+ return 0;
+}
+
static void bfin_musb_reg_init(struct musb *musb)
{
if (ANOMALY_05000346) {
@@ -430,6 +452,8 @@
.vbus_status = bfin_musb_vbus_status,
.set_vbus = bfin_musb_set_vbus,
+
+ .adjust_channel_params = bfin_musb_adjust_channel_params,
};
static u64 bfin_dmamask = DMA_BIT_MASK(32);
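
The workaround above is pure arithmetic: MODE 1 transmit lengths are rounded
down to a whole number of max-size packets so the trailing short packet (if
any) can go out via MODE 0. A quick standalone check of that rounding (the
function name is hypothetical):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Round a transfer length down to a whole number of max-size packets;
 * the remainder would be sent separately as a short packet. */
static uint32_t truncate_to_packets(uint32_t len, uint16_t packet_sz)
{
	return len - (len % packet_sz);
}

int main(void)
{
	/* 1000 bytes with 512-byte packets: DMA moves 512 bytes, the
	 * 488-byte short packet is handled afterwards in MODE 0. */
	assert(truncate_to_packets(1000, 512) == 512);
	/* Exact multiples are untouched. */
	assert(truncate_to_packets(1024, 512) == 1024);
	printf("truncated: %u\n", truncate_to_packets(1000, 512));
	return 0;
}
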
diff --git a/drivers/usb/musb/cppi_dma.c b/drivers/usb/musb/cppi_dma.c
index de55a3c..ab434fb 100644
--- a/drivers/usb/musb/cppi_dma.c
+++ b/drivers/usb/musb/cppi_dma.c
@@ -597,12 +597,12 @@
length = min(n_bds * maxpacket, length);
}
- DBG(4, "TX DMA%d, pktSz %d %s bds %d dma 0x%x len %u\n",
+ DBG(4, "TX DMA%d, pktSz %d %s bds %d dma 0x%llx len %u\n",
tx->index,
maxpacket,
rndis ? "rndis" : "transparent",
n_bds,
- addr, length);
+ (unsigned long long)addr, length);
cppi_rndis_update(tx, 0, musb->ctrl_base, rndis);
@@ -820,7 +820,7 @@
length = min(n_bds * maxpacket, length);
DBG(4, "RX DMA%d seg, maxp %d %s bds %d (cnt %d) "
- "dma 0x%x len %u %u/%u\n",
+ "dma 0x%llx len %u %u/%u\n",
rx->index, maxpacket,
onepacket
? (is_rndis ? "rndis" : "onepacket")
@@ -829,7 +829,8 @@
musb_readl(tibase,
DAVINCI_RXCPPI_BUFCNT0_REG + (rx->index * 4))
& 0xffff,
- addr, length, rx->channel.actual_len, rx->buf_len);
+ (unsigned long long)addr, length,
+ rx->channel.actual_len, rx->buf_len);
/* only queue one segment at a time, since the hardware prevents
* correct queue shutdown after unexpected short packets
@@ -1039,9 +1040,9 @@
if (!completed && (bd->hw_options & CPPI_OWN_SET))
break;
- DBG(5, "C/RXBD %08x: nxt %08x buf %08x "
+ DBG(5, "C/RXBD %llx: nxt %08x buf %08x "
"off.len %08x opt.len %08x (%d)\n",
- bd->dma, bd->hw_next, bd->hw_bufp,
+ (unsigned long long)bd->dma, bd->hw_next, bd->hw_bufp,
bd->hw_off_len, bd->hw_options,
rx->channel.actual_len);
@@ -1111,11 +1112,12 @@
musb_ep_select(cppi->mregs, rx->index + 1);
csr = musb_readw(regs, MUSB_RXCSR);
if (csr & MUSB_RXCSR_DMAENAB) {
- DBG(4, "list%d %p/%p, last %08x%s, csr %04x\n",
+ DBG(4, "list%d %p/%p, last %llx%s, csr %04x\n",
rx->index,
rx->head, rx->tail,
rx->last_processed
- ? rx->last_processed->dma
+ ? (unsigned long long)
+ rx->last_processed->dma
: 0,
completed ? ", completed" : "",
csr);
@@ -1167,8 +1169,11 @@
tx = musb_readl(tibase, DAVINCI_TXCPPI_MASKED_REG);
rx = musb_readl(tibase, DAVINCI_RXCPPI_MASKED_REG);
- if (!tx && !rx)
+ if (!tx && !rx) {
+ if (cppi->irq)
+ spin_unlock_irqrestore(&musb->lock, flags);
return IRQ_NONE;
+ }
DBG(4, "CPPI IRQ Tx%x Rx%x\n", tx, rx);
@@ -1199,7 +1204,7 @@
*/
if (NULL == bd) {
DBG(1, "null BD\n");
- tx_ram->tx_complete = 0;
+ musb_writel(&tx_ram->tx_complete, 0, 0);
continue;
}
@@ -1452,7 +1457,7 @@
* compare mode by writing 1 to the tx_complete register.
*/
cppi_reset_tx(tx_ram, 1);
- cppi_ch->head = 0;
+ cppi_ch->head = NULL;
musb_writel(&tx_ram->tx_complete, 0, 1);
cppi_dump_tx(5, cppi_ch, " (done teardown)");
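
The DBG() changes in this file are all one portability fix: dma_addr_t may be
32 or 64 bits wide depending on the configuration, so it cannot be printed
with a fixed %x/%08x. Casting to unsigned long long and printing with %llx
matches both widths. A userspace sketch (the 64-bit typedef is an assumption
for illustration only):

#include <stdio.h>
#include <stdint.h>

/* dma_addr_t's width is configuration dependent; assume a 64-bit
 * build here so the truncation is visible. */
typedef uint64_t dma_addr_t;

int main(void)
{
	dma_addr_t addr = 0x1ffff0000ULL;

	/* "%x" would truncate (and mismatches a 64-bit argument, which
	 * is undefined); the cast pins the type the format expects. */
	printf("dma 0x%llx\n", (unsigned long long)addr);
	return 0;
}
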
diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
index 630ae7f..f10ff00 100644
--- a/drivers/usb/musb/musb_core.c
+++ b/drivers/usb/musb/musb_core.c
@@ -1030,6 +1030,7 @@
struct musb *musb = dev_to_musb(&pdev->dev);
unsigned long flags;
+ pm_runtime_get_sync(musb->controller);
spin_lock_irqsave(&musb->lock, flags);
musb_platform_disable(musb);
musb_generic_disable(musb);
@@ -1040,6 +1041,7 @@
musb_writeb(musb->mregs, MUSB_DEVCTL, 0);
musb_platform_exit(musb);
+ pm_runtime_put(musb->controller);
/* FIXME power down */
}
diff --git a/drivers/usb/musb/musb_core.h b/drivers/usb/musb/musb_core.h
index 4bd9e21..0e053b5 100644
--- a/drivers/usb/musb/musb_core.h
+++ b/drivers/usb/musb/musb_core.h
@@ -261,6 +261,7 @@
* @try_idle: tries to idle the IP
* @vbus_status: returns vbus status if possible
* @set_vbus: forces vbus status
+ * @adjust_channel_params: pre check for standard dma channel_program func
*/
struct musb_platform_ops {
int (*init)(struct musb *musb);
@@ -274,6 +275,10 @@
int (*vbus_status)(struct musb *musb);
void (*set_vbus)(struct musb *musb, int on);
+
+ int (*adjust_channel_params)(struct dma_channel *channel,
+ u16 packet_sz, u8 *mode,
+ dma_addr_t *dma_addr, u32 *len);
};
/*
diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
index 98519c5..6dfbf9f 100644
--- a/drivers/usb/musb/musb_gadget.c
+++ b/drivers/usb/musb/musb_gadget.c
@@ -535,7 +535,7 @@
is_dma = 1;
csr |= MUSB_TXCSR_P_WZC_BITS;
csr &= ~(MUSB_TXCSR_DMAENAB | MUSB_TXCSR_P_UNDERRUN |
- MUSB_TXCSR_TXPKTRDY);
+ MUSB_TXCSR_TXPKTRDY | MUSB_TXCSR_AUTOSET);
musb_writew(epio, MUSB_TXCSR, csr);
/* Ensure writebuffer is empty. */
csr = musb_readw(epio, MUSB_TXCSR);
@@ -1296,7 +1296,7 @@
}
/* if the hardware doesn't have the request, easy ... */
- if (musb_ep->req_list.next != &request->list || musb_ep->busy)
+ if (musb_ep->req_list.next != &req->list || musb_ep->busy)
musb_g_giveback(musb_ep, request, -ECONNRESET);
/* ... else abort the dma transfer ... */
diff --git a/drivers/usb/musb/musbhsdma.c b/drivers/usb/musb/musbhsdma.c
index 0144a2d..d281792 100644
--- a/drivers/usb/musb/musbhsdma.c
+++ b/drivers/usb/musb/musbhsdma.c
@@ -169,6 +169,14 @@
BUG_ON(channel->status == MUSB_DMA_STATUS_UNKNOWN ||
channel->status == MUSB_DMA_STATUS_BUSY);
+ /* Let targets check/tweak the arguments */
+ if (musb->ops->adjust_channel_params) {
+ int ret = musb->ops->adjust_channel_params(channel,
+ packet_sz, &mode, &dma_addr, &len);
+ if (ret)
+ return ret;
+ }
+
/*
* The DMA engine in RTL1.8 and above cannot handle
* DMA addresses that are not aligned to a 4 byte boundary.
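
musbhsdma now consults an optional glue-layer callback before programming a
channel; that hook is how the Blackfin truncation earlier in this series gets
applied without touching the generic code. A standalone sketch of the
nullable-hook pattern (all names hypothetical):

#include <stdint.h>
#include <stdio.h>

struct channel { int id; };

struct platform_ops {
	/* Optional: lets a platform tweak mode/len before the generic
	 * programming runs; a nonzero return aborts the transfer. */
	int (*adjust_channel_params)(struct channel *ch, uint16_t packet_sz,
				     uint8_t *mode, uint32_t *len);
};

static int program_channel(const struct platform_ops *ops, struct channel *ch,
			   uint16_t packet_sz, uint8_t mode, uint32_t len)
{
	if (ops->adjust_channel_params) {
		int ret = ops->adjust_channel_params(ch, packet_sz,
						     &mode, &len);
		if (ret)
			return ret;
	}
	printf("ch%d: mode %u len %u\n", ch->id, (unsigned)mode,
	       (unsigned)len);
	return 0;
}

/* A glue-layer hook that trims MODE 1 lengths, like the Blackfin one. */
static int trim_hook(struct channel *ch, uint16_t packet_sz,
		     uint8_t *mode, uint32_t *len)
{
	(void)ch;
	if (*mode == 1)
		*len -= *len % packet_sz;
	return 0;
}

int main(void)
{
	struct platform_ops ops = { .adjust_channel_params = trim_hook };
	struct channel ch = { .id = 0 };

	return program_channel(&ops, &ch, 512, 1, 1000); /* prints len 512 */
}
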
diff --git a/drivers/usb/musb/omap2430.c b/drivers/usb/musb/omap2430.c
index 25cb8b0..57a27fa 100644
--- a/drivers/usb/musb/omap2430.c
+++ b/drivers/usb/musb/omap2430.c
@@ -259,9 +259,10 @@
case USB_EVENT_VBUS:
DBG(4, "VBUS Connect\n");
+#ifdef CONFIG_USB_GADGET_MUSB_HDRC
if (musb->gadget_driver)
pm_runtime_get_sync(musb->controller);
-
+#endif
otg_init(musb->xceiv);
break;
diff --git a/drivers/usb/musb/ux500.c b/drivers/usb/musb/ux500.c
index d6384e4..f7e04bf 100644
--- a/drivers/usb/musb/ux500.c
+++ b/drivers/usb/musb/ux500.c
@@ -93,6 +93,8 @@
}
musb->dev.parent = &pdev->dev;
+ musb->dev.dma_mask = pdev->dev.dma_mask;
+ musb->dev.coherent_dma_mask = pdev->dev.coherent_dma_mask;
glue->dev = &pdev->dev;
glue->musb = musb;
diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
index a973c7a..4de6ef0 100644
--- a/drivers/usb/serial/ftdi_sio.c
+++ b/drivers/usb/serial/ftdi_sio.c
@@ -151,6 +151,8 @@
* /sys/bus/usb/ftdi_sio/new_id, then send patch/report!
*/
static struct usb_device_id id_table_combined [] = {
+ { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_AMC232_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANUSB_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_CANDAPTER_PID) },
@@ -525,6 +527,7 @@
{ USB_DEVICE(SEALEVEL_VID, SEALEVEL_2803_8_PID) },
{ USB_DEVICE(IDTECH_VID, IDTECH_IDT1221U_PID) },
{ USB_DEVICE(OCT_VID, OCT_US101_PID) },
+ { USB_DEVICE(OCT_VID, OCT_DK201_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_HE_TIRA1_PID),
.driver_info = (kernel_ulong_t)&ftdi_HE_TIRA1_quirk },
{ USB_DEVICE(FTDI_VID, FTDI_USB_UIRT_PID),
@@ -787,6 +790,8 @@
{ USB_DEVICE(FTDI_VID, MARVELL_OPENRD_PID),
.driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
{ USB_DEVICE(FTDI_VID, HAMEG_HO820_PID) },
+ { USB_DEVICE(FTDI_VID, HAMEG_HO720_PID) },
+ { USB_DEVICE(FTDI_VID, HAMEG_HO730_PID) },
{ USB_DEVICE(FTDI_VID, HAMEG_HO870_PID) },
{ USB_DEVICE(FTDI_VID, MJSG_GENERIC_PID) },
{ USB_DEVICE(FTDI_VID, MJSG_SR_RADIO_PID) },
diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
index c543e55..efffc237 100644
--- a/drivers/usb/serial/ftdi_sio_ids.h
+++ b/drivers/usb/serial/ftdi_sio_ids.h
@@ -300,6 +300,8 @@
* Hameg HO820 and HO870 interface (using VID 0x0403)
*/
#define HAMEG_HO820_PID 0xed74
+#define HAMEG_HO730_PID 0xed73
+#define HAMEG_HO720_PID 0xed72
#define HAMEG_HO870_PID 0xed71
/*
@@ -572,6 +574,7 @@
/* Note: OCT US101 is also rebadged as Dick Smith Electronics (NZ) XH6381 */
/* Also rebadged as Dick Smith Electronics (Aus) XH6451 */
/* Also rebadged as SIIG Inc. model US2308 hardware version 1 */
+#define OCT_DK201_PID 0x0103 /* OCT DK201 USB docking station */
#define OCT_US101_PID 0x0421 /* OCT US101 USB to RS-232 */
/*
@@ -1141,3 +1144,12 @@
#define QIHARDWARE_VID 0x20B7
#define MILKYMISTONE_JTAGSERIAL_PID 0x0713
+/*
+ * CTI GmbH RS485 Converter http://www.cti-lean.com/
+ */
+/* USB-485-Mini */
+#define FTDI_CTI_MINI_PID 0xF608
+/* USB-Nano-485 */
+#define FTDI_CTI_NANO_PID 0xF60B
+
+
diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
index 75c7f45..d77ff04 100644
--- a/drivers/usb/serial/option.c
+++ b/drivers/usb/serial/option.c
@@ -407,6 +407,10 @@
/* ONDA MT825UP HSDPA 14.2 modem */
#define ONDA_MT825UP 0x000b
+/* Samsung products */
+#define SAMSUNG_VENDOR_ID 0x04e8
+#define SAMSUNG_PRODUCT_GT_B3730 0x6889
+
/* some device interfaces need special handling due to a number of reasons */
enum option_blacklist_reason {
OPTION_BLACKLIST_NONE = 0,
@@ -968,6 +972,7 @@
{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
{ USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
{ USB_DEVICE(ONDA_VENDOR_ID, ONDA_MT825UP) }, /* ONDA MT825UP modem */
+ { USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730/GT-B3710 LTE USB modem */
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
index 8858201..54a9dab 100644
--- a/drivers/usb/serial/qcserial.c
+++ b/drivers/usb/serial/qcserial.c
@@ -111,7 +111,7 @@
ifnum = intf->desc.bInterfaceNumber;
dbg("This Interface = %d", ifnum);
- data = serial->private = kzalloc(sizeof(struct usb_wwan_intf_private),
+ data = kzalloc(sizeof(struct usb_wwan_intf_private),
GFP_KERNEL);
if (!data)
return -ENOMEM;
@@ -134,8 +134,10 @@
usb_endpoint_is_bulk_out(&intf->endpoint[1].desc)) {
dbg("QDL port found");
- if (serial->interface->num_altsetting == 1)
- return 0;
+ if (serial->interface->num_altsetting == 1) {
+ retval = 0; /* Success */
+ break;
+ }
retval = usb_set_interface(serial->dev, ifnum, 1);
if (retval < 0) {
@@ -145,7 +147,6 @@
retval = -ENODEV;
kfree(data);
}
- return retval;
}
break;
@@ -166,6 +167,7 @@
"Could not set interface, error %d\n",
retval);
retval = -ENODEV;
+ kfree(data);
}
} else if (ifnum == 2) {
dbg("Modem port found");
@@ -177,7 +179,6 @@
retval = -ENODEV;
kfree(data);
}
- return retval;
} else if (ifnum==3) {
/*
* NMEA (serial line 9600 8N1)
@@ -191,6 +192,7 @@
"Could not set interface, error %d\n",
retval);
retval = -ENODEV;
+ kfree(data);
}
}
break;
@@ -199,12 +201,27 @@
dev_err(&serial->dev->dev,
"unknown number of interfaces: %d\n", nintf);
kfree(data);
- return -ENODEV;
+ retval = -ENODEV;
}
+ /* Set serial->private if not returning -ENODEV */
+ if (retval != -ENODEV)
+ usb_set_serial_data(serial, data);
return retval;
}
+static void qc_release(struct usb_serial *serial)
+{
+ struct usb_wwan_intf_private *priv = usb_get_serial_data(serial);
+
+ dbg("%s", __func__);
+
+ /* Call usb_wwan release & free the private data allocated in qcprobe */
+ usb_wwan_release(serial);
+ usb_set_serial_data(serial, NULL);
+ kfree(priv);
+}
+
static struct usb_serial_driver qcdevice = {
.driver = {
.owner = THIS_MODULE,
@@ -222,7 +239,7 @@
.chars_in_buffer = usb_wwan_chars_in_buffer,
.attach = usb_wwan_startup,
.disconnect = usb_wwan_disconnect,
- .release = usb_wwan_release,
+ .release = qc_release,
#ifdef CONFIG_PM
.suspend = usb_wwan_suspend,
.resume = usb_wwan_resume,
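
The qcserial changes fix an ownership bug: the allocation was stored in
serial->private before the error paths ran, so a failed probe could leave a
stale pointer behind while also freeing the memory. The fix publishes the
pointer only on success and frees it in a matching release. A reduced sketch
of the pattern (types hypothetical):

#include <stdio.h>
#include <stdlib.h>

struct serial { void *private_data; };

/* Allocate in probe; publish the pointer only when probe succeeds.
 * Every failure path frees locally instead. */
static int probe(struct serial *s, int fail)
{
	int *data = calloc(1, sizeof(*data));

	if (!data)
		return -1;
	if (fail) {
		free(data);		/* the error path owns the memory */
		return -1;
	}
	s->private_data = data;		/* success: hand ownership over */
	return 0;
}

/* The matching release frees exactly what probe published. */
static void release(struct serial *s)
{
	free(s->private_data);
	s->private_data = NULL;
}

int main(void)
{
	struct serial s = { 0 };

	if (probe(&s, 0) == 0)
		release(&s);
	printf("no stale pointer, no double free\n");
	return 0;
}
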
diff --git a/drivers/video/pxafb.c b/drivers/video/pxafb.c
index a2e5b51..0f4e8c9 100644
--- a/drivers/video/pxafb.c
+++ b/drivers/video/pxafb.c
@@ -1648,7 +1648,9 @@
switch (val) {
case CPUFREQ_PRECHANGE:
- if (!fbi->overlay[0].usage && !fbi->overlay[1].usage)
+#ifdef CONFIG_FB_PXA_OVERLAY
+ if (!(fbi->overlay[0].usage || fbi->overlay[1].usage))
+#endif
set_ctrlr_state(fbi, C_DISABLE_CLKCHANGE);
break;
diff --git a/fs/9p/fid.c b/fs/9p/fid.c
index 0ee5945..85b67ff 100644
--- a/fs/9p/fid.c
+++ b/fs/9p/fid.c
@@ -286,11 +286,9 @@
struct p9_fid *v9fs_writeback_fid(struct dentry *dentry)
{
- int err, flags;
+ int err;
struct p9_fid *fid;
- struct v9fs_session_info *v9ses;
- v9ses = v9fs_dentry2v9ses(dentry);
fid = v9fs_fid_clone_with_uid(dentry, 0);
if (IS_ERR(fid))
goto error_out;
@@ -299,17 +297,8 @@
* dirty pages. We always request for the open fid in read-write
* mode so that a partial page write which results in a page
* read can work.
- *
- * we don't have a tsyncfs operation for older version
- * of protocol. So make sure the write back fid is
- * opened in O_SYNC mode.
*/
- if (!v9fs_proto_dotl(v9ses))
- flags = O_RDWR | O_SYNC;
- else
- flags = O_RDWR;
-
- err = p9_client_open(fid, flags);
+ err = p9_client_open(fid, O_RDWR);
if (err < 0) {
p9_client_clunk(fid);
fid = ERR_PTR(err);
diff --git a/fs/9p/v9fs.h b/fs/9p/v9fs.h
index 9665c2b..e5ebedf 100644
--- a/fs/9p/v9fs.h
+++ b/fs/9p/v9fs.h
@@ -116,7 +116,6 @@
struct list_head slist; /* list of sessions registered with v9fs */
struct backing_dev_info bdi;
struct rw_semaphore rename_sem;
- struct p9_fid *root_fid; /* Used for file system sync */
};
/* cache_validity flags */
diff --git a/fs/9p/vfs_dentry.c b/fs/9p/vfs_dentry.c
index b6a3b9f..e022890 100644
--- a/fs/9p/vfs_dentry.c
+++ b/fs/9p/vfs_dentry.c
@@ -126,7 +126,9 @@
retval = v9fs_refresh_inode_dotl(fid, inode);
else
retval = v9fs_refresh_inode(fid, inode);
- if (retval <= 0)
+ if (retval == -ENOENT)
+ return 0;
+ if (retval < 0)
return retval;
}
out_valid:
diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
index ffbb113..82a7c38 100644
--- a/fs/9p/vfs_inode_dotl.c
+++ b/fs/9p/vfs_inode_dotl.c
@@ -811,7 +811,7 @@
fid = v9fs_fid_lookup(dentry);
if (IS_ERR(fid)) {
__putname(link);
- link = ERR_PTR(PTR_ERR(fid));
+ link = ERR_CAST(fid);
goto ndset;
}
retval = p9_client_readlink(fid, &target);
diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c
index f3eed33..feef6cd 100644
--- a/fs/9p/vfs_super.c
+++ b/fs/9p/vfs_super.c
@@ -154,6 +154,7 @@
retval = PTR_ERR(inode);
goto release_sb;
}
+
root = d_alloc_root(inode);
if (!root) {
iput(inode);
@@ -185,21 +186,10 @@
p9stat_free(st);
kfree(st);
}
- v9fs_fid_add(root, fid);
retval = v9fs_get_acl(inode, fid);
if (retval)
goto release_sb;
- /*
- * Add the root fid to session info. This is used
- * for file system sync. We want a cloned fid here
- * so that we can do a sync_filesystem after a
- * shrink_dcache_for_umount
- */
- v9ses->root_fid = v9fs_fid_clone(root);
- if (IS_ERR(v9ses->root_fid)) {
- retval = PTR_ERR(v9ses->root_fid);
- goto release_sb;
- }
+ v9fs_fid_add(root, fid);
P9_DPRINTK(P9_DEBUG_VFS, " simple set mount, return 0\n");
return dget(sb->s_root);
@@ -210,11 +200,15 @@
v9fs_session_close(v9ses);
kfree(v9ses);
return ERR_PTR(retval);
+
release_sb:
/*
- * we will do the session_close and root dentry
- * release in the below call.
+ * we will do the session_close and root dentry release
+ * in the below call. But we need to clunk the fid, because we haven't
+ * attached the fid to the dentry, so it won't get clunked
+ * automatically.
*/
+ p9_client_clunk(fid);
deactivate_locked_super(sb);
return ERR_PTR(retval);
}
@@ -232,7 +226,7 @@
P9_DPRINTK(P9_DEBUG_VFS, " %p\n", s);
kill_anon_super(s);
- p9_client_clunk(v9ses->root_fid);
+
v9fs_session_cancel(v9ses);
v9fs_session_close(v9ses);
kfree(v9ses);
@@ -285,14 +279,6 @@
return res;
}
-static int v9fs_sync_fs(struct super_block *sb, int wait)
-{
- struct v9fs_session_info *v9ses = sb->s_fs_info;
-
- P9_DPRINTK(P9_DEBUG_VFS, "v9fs_sync_fs: super_block %p\n", sb);
- return p9_client_sync_fs(v9ses->root_fid);
-}
-
static int v9fs_drop_inode(struct inode *inode)
{
struct v9fs_session_info *v9ses;
@@ -307,6 +293,51 @@
return 1;
}
+static int v9fs_write_inode(struct inode *inode,
+ struct writeback_control *wbc)
+{
+ int ret;
+ struct p9_wstat wstat;
+ struct v9fs_inode *v9inode;
+ /*
+ * send an fsync request to the server irrespective of
+ * wbc->sync_mode.
+ */
+ P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
+ v9inode = V9FS_I(inode);
+ if (!v9inode->writeback_fid)
+ return 0;
+ v9fs_blank_wstat(&wstat);
+
+ ret = p9_client_wstat(v9inode->writeback_fid, &wstat);
+ if (ret < 0) {
+ __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+ return ret;
+ }
+ return 0;
+}
+
+static int v9fs_write_inode_dotl(struct inode *inode,
+ struct writeback_control *wbc)
+{
+ int ret;
+ struct v9fs_inode *v9inode;
+ /*
+ * send an fsync request to the server irrespective of
+ * wbc->sync_mode.
+ */
+ P9_DPRINTK(P9_DEBUG_VFS, "%s: inode %p\n", __func__, inode);
+ v9inode = V9FS_I(inode);
+ if (!v9inode->writeback_fid)
+ return 0;
+ ret = p9_client_fsync(v9inode->writeback_fid, 0);
+ if (ret < 0) {
+ __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+ return ret;
+ }
+ return 0;
+}
+
static const struct super_operations v9fs_super_ops = {
.alloc_inode = v9fs_alloc_inode,
.destroy_inode = v9fs_destroy_inode,
@@ -314,17 +345,18 @@
.evict_inode = v9fs_evict_inode,
.show_options = generic_show_options,
.umount_begin = v9fs_umount_begin,
+ .write_inode = v9fs_write_inode,
};
static const struct super_operations v9fs_super_ops_dotl = {
.alloc_inode = v9fs_alloc_inode,
.destroy_inode = v9fs_destroy_inode,
- .sync_fs = v9fs_sync_fs,
.statfs = v9fs_statfs,
.drop_inode = v9fs_drop_inode,
.evict_inode = v9fs_evict_inode,
.show_options = generic_show_options,
.umount_begin = v9fs_umount_begin,
+ .write_inode = v9fs_write_inode_dotl,
};
struct file_system_type v9fs_fs_type = {
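
Replacing ->sync_fs (and the root fid kept around only for it) with per-inode
->write_inode hooks means dirty metadata is flushed through the inode's
writeback fid, and the inode is re-dirtied when the server round-trip fails so
writeback will retry. A loose userspace analog of that contract (all names
hypothetical):

#include <stdbool.h>
#include <stdio.h>

struct inode {
	bool dirty;
	int  writeback_fd;	/* stand-in for the writeback fid */
};

/* Pretend server round-trip; returns 0 or a negative errno. */
static int server_fsync(int fd)
{
	(void)fd;
	return 0;		/* or, say, -5 (EIO) on failure */
}

/* Mirror of the patch's contract: no writeback fid means nothing to
 * flush; on failure, re-mark the inode dirty so writeback retries. */
static int write_inode(struct inode *ino)
{
	int ret;

	if (ino->writeback_fd < 0)
		return 0;
	ret = server_fsync(ino->writeback_fd);
	if (ret < 0) {
		ino->dirty = true;	/* __mark_inode_dirty() analog */
		return ret;
	}
	ino->dirty = false;
	return 0;
}

int main(void)
{
	struct inode ino = { .dirty = true, .writeback_fd = 3 };

	printf("write_inode -> %d, dirty=%d\n",
	       write_inode(&ino), ino.dirty);
	return 0;
}
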
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index f34078d..303983f 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -941,9 +941,13 @@
current->mm->start_stack = bprm->p;
#ifdef arch_randomize_brk
- if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1))
+ if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) {
current->mm->brk = current->mm->start_brk =
arch_randomize_brk(current->mm);
+#ifdef CONFIG_COMPAT_BRK
+ current->brk_randomized = 1;
+#endif
+ }
#endif
if (current->personality & MMAP_PAGE_ZERO) {
diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c
index de34bfa..5d505aa 100644
--- a/fs/btrfs/acl.c
+++ b/fs/btrfs/acl.c
@@ -178,16 +178,17 @@
if (value) {
acl = posix_acl_from_xattr(value, size);
- if (acl == NULL) {
- value = NULL;
- size = 0;
+ if (acl) {
+ ret = posix_acl_valid(acl);
+ if (ret)
+ goto out;
} else if (IS_ERR(acl)) {
return PTR_ERR(acl);
}
}
ret = btrfs_set_acl(NULL, dentry->d_inode, acl, type);
-
+out:
posix_acl_release(acl);
return ret;
diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 3458b57..2e61fe1 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -740,8 +740,10 @@
*/
unsigned long reservation_progress;
- int full; /* indicates that we cannot allocate any more
+ int full:1; /* indicates that we cannot allocate any more
chunks for this space */
+ int chunk_alloc:1; /* set if we are allocating a chunk */
+
int force_alloc; /* set if we need to force a chunk alloc for
this space */
@@ -2576,6 +2578,11 @@
int btrfs_mark_extent_written(struct btrfs_trans_handle *trans,
struct inode *inode, u64 start, u64 end);
int btrfs_release_file(struct inode *inode, struct file *file);
+void btrfs_drop_pages(struct page **pages, size_t num_pages);
+int btrfs_dirty_pages(struct btrfs_root *root, struct inode *inode,
+ struct page **pages, size_t num_pages,
+ loff_t pos, size_t write_bytes,
+ struct extent_state **cached);
/* tree-defrag.c */
int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 8f1d44b..68c84c8 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3057,7 +3057,7 @@
btrfs_destroy_pinned_extent(root,
root->fs_info->pinned_extents);
- t->use_count = 0;
+ atomic_set(&t->use_count, 0);
list_del_init(&t->list);
memset(t, 0, sizeof(*t));
kmem_cache_free(btrfs_transaction_cachep, t);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index f619c3c..31f33ba 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -33,6 +33,25 @@
#include "locking.h"
#include "free-space-cache.h"
+/*
+ * control flags for do_chunk_alloc's force field
+ *
+ * CHUNK_ALLOC_NO_FORCE means to only allocate a chunk
+ * if we really need one.
+ *
+ * CHUNK_ALLOC_FORCE means it must try to allocate one.
+ *
+ * CHUNK_ALLOC_LIMITED means to only try and allocate one
+ * if we have very few chunks already allocated. This is
+ * used as part of the clustering code to help make sure
+ * we have a good pool of storage to cluster in, without
+ * filling the FS with empty chunks.
+ */
+enum {
+ CHUNK_ALLOC_NO_FORCE = 0,
+ CHUNK_ALLOC_FORCE = 1,
+ CHUNK_ALLOC_LIMITED = 2,
+};
+
static int update_block_group(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
u64 bytenr, u64 num_bytes, int alloc);
@@ -3019,7 +3038,8 @@
found->bytes_readonly = 0;
found->bytes_may_use = 0;
found->full = 0;
- found->force_alloc = 0;
+ found->force_alloc = CHUNK_ALLOC_NO_FORCE;
+ found->chunk_alloc = 0;
*space_info = found;
list_add_rcu(&found->list, &info->space_info);
atomic_set(&found->caching_threads, 0);
@@ -3150,7 +3170,7 @@
if (!data_sinfo->full && alloc_chunk) {
u64 alloc_target;
- data_sinfo->force_alloc = 1;
+ data_sinfo->force_alloc = CHUNK_ALLOC_FORCE;
spin_unlock(&data_sinfo->lock);
alloc:
alloc_target = btrfs_get_alloc_profile(root, 1);
@@ -3160,7 +3180,8 @@
ret = do_chunk_alloc(trans, root->fs_info->extent_root,
bytes + 2 * 1024 * 1024,
- alloc_target, 0);
+ alloc_target,
+ CHUNK_ALLOC_NO_FORCE);
btrfs_end_transaction(trans, root);
if (ret < 0) {
if (ret != -ENOSPC)
@@ -3239,31 +3260,56 @@
rcu_read_lock();
list_for_each_entry_rcu(found, head, list) {
if (found->flags & BTRFS_BLOCK_GROUP_METADATA)
- found->force_alloc = 1;
+ found->force_alloc = CHUNK_ALLOC_FORCE;
}
rcu_read_unlock();
}
static int should_alloc_chunk(struct btrfs_root *root,
- struct btrfs_space_info *sinfo, u64 alloc_bytes)
+ struct btrfs_space_info *sinfo, u64 alloc_bytes,
+ int force)
{
u64 num_bytes = sinfo->total_bytes - sinfo->bytes_readonly;
+ u64 num_allocated = sinfo->bytes_used + sinfo->bytes_reserved;
u64 thresh;
- if (sinfo->bytes_used + sinfo->bytes_reserved +
- alloc_bytes + 256 * 1024 * 1024 < num_bytes)
+ if (force == CHUNK_ALLOC_FORCE)
+ return 1;
+
+ /*
+ * in limited mode, we want to have some free space up to
+ * about 1% of the FS size.
+ */
+ if (force == CHUNK_ALLOC_LIMITED) {
+ thresh = btrfs_super_total_bytes(&root->fs_info->super_copy);
+ thresh = max_t(u64, 64 * 1024 * 1024,
+ div_factor_fine(thresh, 1));
+
+ if (num_bytes - num_allocated < thresh)
+ return 1;
+ }
+
+ /*
+ * we have two similar checks here, one based on a percentage
+ * and one based on a hard number of 256MB. The idea
+ * is that if we have a good amount of free
+ * room, don't allocate a chunk. A good amount is when the
+ * chunks we have allocated are less than 80% utilized,
+ * or when more than 256MB is free
+ */
+ if (num_allocated + alloc_bytes + 256 * 1024 * 1024 < num_bytes)
return 0;
- if (sinfo->bytes_used + sinfo->bytes_reserved +
- alloc_bytes < div_factor(num_bytes, 8))
+ if (num_allocated + alloc_bytes < div_factor(num_bytes, 8))
return 0;
thresh = btrfs_super_total_bytes(&root->fs_info->super_copy);
+
+ /* 256MB or 5% of the FS */
thresh = max_t(u64, 256 * 1024 * 1024, div_factor_fine(thresh, 5));
if (num_bytes > thresh && sinfo->bytes_used < div_factor(num_bytes, 3))
return 0;
-
return 1;
}
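
should_alloc_chunk() now folds the force level into the decision:
CHUNK_ALLOC_FORCE always allocates, CHUNK_ALLOC_LIMITED tops the pool up once
free space drops below roughly 1% of the FS (with a 64MB floor), and the
default path declines while there is still 256MB or 20% of headroom. A
simplified userspace restatement of the arithmetic (the final 5%-of-FS
heuristic is omitted; the div_factor helpers follow the btrfs n*f/10
convention):

#include <stdint.h>
#include <stdio.h>

enum { ALLOC_NO_FORCE, ALLOC_FORCE, ALLOC_LIMITED };

static uint64_t max_u64(uint64_t a, uint64_t b) { return a > b ? a : b; }
static uint64_t div_factor(uint64_t num, int f) { return num * f / 10; }
static uint64_t div_factor_fine(uint64_t num, int f) { return num * f / 100; }

static int should_alloc_chunk(uint64_t total, uint64_t allocated,
			      uint64_t alloc_bytes, uint64_t fs_bytes,
			      int force)
{
	if (force == ALLOC_FORCE)
		return 1;

	/* Limited mode: allocate once free space dips under ~1% of the
	 * FS size, but never under a 64MB floor. */
	if (force == ALLOC_LIMITED) {
		uint64_t thresh = max_u64(64ULL << 20,
					  div_factor_fine(fs_bytes, 1));
		if (total - allocated < thresh)
			return 1;
	}

	/* Plenty of room (more than 256MB free after this allocation,
	 * or existing chunks under 80% utilized): don't allocate. */
	if (allocated + alloc_bytes + (256ULL << 20) < total)
		return 0;
	if (allocated + alloc_bytes < div_factor(total, 8))
		return 0;
	return 1;
}

int main(void)
{
	/* 10240MB of chunks with 10100MB allocated: headroom is under
	 * 256MB and utilization is over 80%, so allocate. */
	printf("%d\n", should_alloc_chunk(10ULL << 30, 10100ULL << 20,
					  1ULL << 20, 100ULL << 30,
					  ALLOC_NO_FORCE));
	return 0;
}
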
@@ -3273,10 +3319,9 @@
{
struct btrfs_space_info *space_info;
struct btrfs_fs_info *fs_info = extent_root->fs_info;
+ int wait_for_alloc = 0;
int ret = 0;
- mutex_lock(&fs_info->chunk_mutex);
-
flags = btrfs_reduce_alloc_profile(extent_root, flags);
space_info = __find_space_info(extent_root->fs_info, flags);
@@ -3287,21 +3332,40 @@
}
BUG_ON(!space_info);
+again:
spin_lock(&space_info->lock);
if (space_info->force_alloc)
- force = 1;
+ force = space_info->force_alloc;
if (space_info->full) {
spin_unlock(&space_info->lock);
- goto out;
+ return 0;
}
- if (!force && !should_alloc_chunk(extent_root, space_info,
- alloc_bytes)) {
+ if (!should_alloc_chunk(extent_root, space_info, alloc_bytes, force)) {
spin_unlock(&space_info->lock);
- goto out;
+ return 0;
+ } else if (space_info->chunk_alloc) {
+ wait_for_alloc = 1;
+ } else {
+ space_info->chunk_alloc = 1;
}
+
spin_unlock(&space_info->lock);
+ mutex_lock(&fs_info->chunk_mutex);
+
+ /*
+ * The chunk_mutex is held throughout the entirety of a chunk
+ * allocation, so once we've acquired the chunk_mutex we know that the
+ * other guy is done and we need to recheck and see if we should
+ * allocate.
+ */
+ if (wait_for_alloc) {
+ mutex_unlock(&fs_info->chunk_mutex);
+ wait_for_alloc = 0;
+ goto again;
+ }
+
/*
* If we have mixed data/metadata chunks we want to make sure we keep
* allocating mixed chunks instead of individual chunks.
@@ -3327,9 +3391,10 @@
space_info->full = 1;
else
ret = 1;
- space_info->force_alloc = 0;
+
+ space_info->force_alloc = CHUNK_ALLOC_NO_FORCE;
+ space_info->chunk_alloc = 0;
spin_unlock(&space_info->lock);
-out:
mutex_unlock(&extent_root->fs_info->chunk_mutex);
return ret;
}
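
The restructured do_chunk_alloc() marks the space_info as allocating under its
spinlock; when another allocator already owns it, the caller waits by taking
chunk_mutex (held across the whole allocation) and then rechecks from the top
instead of allocating blindly. A condensed pthreads sketch of that
flag-then-recheck shape (userspace stand-ins, not the kernel primitives):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t chunk_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t info_lock = PTHREAD_MUTEX_INITIALIZER;
static int chunk_alloc_running;
static int space_full;

static int do_chunk_alloc(void)
{
	int wait_for_alloc;

again:
	wait_for_alloc = 0;
	pthread_mutex_lock(&info_lock);
	if (space_full) {			/* nothing left to do */
		pthread_mutex_unlock(&info_lock);
		return 0;
	}
	if (chunk_alloc_running)
		wait_for_alloc = 1;		/* another caller owns it */
	else
		chunk_alloc_running = 1;	/* we own the allocation */
	pthread_mutex_unlock(&info_lock);

	pthread_mutex_lock(&chunk_mutex);
	if (wait_for_alloc) {
		/* chunk_mutex is held for the entire allocation, so once
		 * we get it the other caller is done; recheck the state
		 * from the top rather than allocating again. */
		pthread_mutex_unlock(&chunk_mutex);
		goto again;
	}

	/* ... the actual chunk allocation would happen here ... */

	pthread_mutex_lock(&info_lock);
	chunk_alloc_running = 0;
	pthread_mutex_unlock(&info_lock);
	pthread_mutex_unlock(&chunk_mutex);
	return 1;
}

int main(void)
{
	printf("allocated: %d\n", do_chunk_alloc());
	return 0;
}
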
@@ -5303,11 +5368,13 @@
if (allowed_chunk_alloc) {
ret = do_chunk_alloc(trans, root, num_bytes +
- 2 * 1024 * 1024, data, 1);
+ 2 * 1024 * 1024, data,
+ CHUNK_ALLOC_LIMITED);
allowed_chunk_alloc = 0;
done_chunk_alloc = 1;
- } else if (!done_chunk_alloc) {
- space_info->force_alloc = 1;
+ } else if (!done_chunk_alloc &&
+ space_info->force_alloc == CHUNK_ALLOC_NO_FORCE) {
+ space_info->force_alloc = CHUNK_ALLOC_LIMITED;
}
if (loop < LOOP_NO_EMPTY_SIZE) {
@@ -5393,7 +5460,8 @@
*/
if (empty_size || root->ref_cows)
ret = do_chunk_alloc(trans, root->fs_info->extent_root,
- num_bytes + 2 * 1024 * 1024, data, 0);
+ num_bytes + 2 * 1024 * 1024, data,
+ CHUNK_ALLOC_NO_FORCE);
WARN_ON(num_bytes < root->sectorsize);
ret = find_free_extent(trans, root, num_bytes, empty_size,
@@ -5405,7 +5473,7 @@
num_bytes = num_bytes & ~(root->sectorsize - 1);
num_bytes = max(num_bytes, min_alloc_size);
do_chunk_alloc(trans, root->fs_info->extent_root,
- num_bytes, data, 1);
+ num_bytes, data, CHUNK_ALLOC_FORCE);
goto again;
}
if (ret == -ENOSPC && btrfs_test_opt(root, ENOSPC_DEBUG)) {
@@ -8109,13 +8177,15 @@
alloc_flags = update_block_group_flags(root, cache->flags);
if (alloc_flags != cache->flags)
- do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+ do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+ CHUNK_ALLOC_FORCE);
ret = set_block_group_ro(cache);
if (!ret)
goto out;
alloc_flags = get_alloc_profile(root, cache->space_info->flags);
- ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+ ret = do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+ CHUNK_ALLOC_FORCE);
if (ret < 0)
goto out;
ret = set_block_group_ro(cache);
@@ -8128,7 +8198,8 @@
struct btrfs_root *root, u64 type)
{
u64 alloc_flags = get_alloc_profile(root, type);
- return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags, 1);
+ return do_chunk_alloc(trans, root, 2 * 1024 * 1024, alloc_flags,
+ CHUNK_ALLOC_FORCE);
}
/*
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 20ddb28..3151386 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -690,6 +690,15 @@
}
}
+static void uncache_state(struct extent_state **cached_ptr)
+{
+ if (cached_ptr && (*cached_ptr)) {
+ struct extent_state *state = *cached_ptr;
+ *cached_ptr = NULL;
+ free_extent_state(state);
+ }
+}
+
/*
* set some bits on a range in the tree. This may require allocations or
* sleeping, so the gfp mask is used to indicate what is allowed.
@@ -940,10 +949,10 @@
}
int set_extent_uptodate(struct extent_io_tree *tree, u64 start, u64 end,
- gfp_t mask)
+ struct extent_state **cached_state, gfp_t mask)
{
- return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0, NULL,
- NULL, mask);
+ return set_extent_bit(tree, start, end, EXTENT_UPTODATE, 0,
+ NULL, cached_state, mask);
}
static int clear_extent_uptodate(struct extent_io_tree *tree, u64 start,
@@ -1012,8 +1021,7 @@
mask);
}
-int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end,
- gfp_t mask)
+int unlock_extent(struct extent_io_tree *tree, u64 start, u64 end, gfp_t mask)
{
return clear_extent_bit(tree, start, end, EXTENT_LOCKED, 1, 0, NULL,
mask);
@@ -1735,6 +1743,9 @@
do {
struct page *page = bvec->bv_page;
+ struct extent_state *cached = NULL;
+ struct extent_state *state;
+
tree = &BTRFS_I(page->mapping->host)->io_tree;
start = ((u64)page->index << PAGE_CACHE_SHIFT) +
@@ -1749,9 +1760,20 @@
if (++bvec <= bvec_end)
prefetchw(&bvec->bv_page->flags);
+ spin_lock(&tree->lock);
+ state = find_first_extent_bit_state(tree, start, EXTENT_LOCKED);
+ if (state && state->start == start) {
+ /*
+ * take a reference on the state, unlock will drop
+ * the ref
+ */
+ cache_state(state, &cached);
+ }
+ spin_unlock(&tree->lock);
+
if (uptodate && tree->ops && tree->ops->readpage_end_io_hook) {
ret = tree->ops->readpage_end_io_hook(page, start, end,
- NULL);
+ state);
if (ret)
uptodate = 0;
}
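
The end_io path above looks the extent state up once under tree->lock, takes a
reference via cache_state(), and hands it to the readpage hook and to
unlock_extent_cached(), saving a second tree search. A reduced sketch of that
cache/uncache reference dance (struct layout hypothetical):

#include <stdio.h>
#include <stdlib.h>

struct extent_state {
	int refs;
	long start, end;
};

/* Grab an extra reference and stash the state; a later unlock can
 * then reuse it instead of searching the tree again. */
static void cache_state(struct extent_state *s, struct extent_state **cached)
{
	if (cached && !*cached) {
		s->refs++;
		*cached = s;
	}
}

static void free_extent_state(struct extent_state *s)
{
	if (--s->refs == 0)
		free(s);
}

/* The patch's uncache_state(): drop the stashed reference. */
static void uncache_state(struct extent_state **cached)
{
	if (cached && *cached) {
		struct extent_state *s = *cached;
		*cached = NULL;
		free_extent_state(s);
	}
}

int main(void)
{
	/* Stand-in for the state found under tree->lock. */
	struct extent_state *s = calloc(1, sizeof(*s));
	struct extent_state *cached = NULL;

	s->refs = 1;
	s->end = 4095;
	cache_state(s, &cached);	/* refs == 2 */
	printf("cached [%ld, %ld]\n", cached->start, cached->end);
	uncache_state(&cached);		/* refs back to 1 */
	free_extent_state(s);		/* freed here */
	return 0;
}
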
@@ -1764,15 +1786,16 @@
test_bit(BIO_UPTODATE, &bio->bi_flags);
if (err)
uptodate = 0;
+ uncache_state(&cached);
continue;
}
}
if (uptodate) {
- set_extent_uptodate(tree, start, end,
+ set_extent_uptodate(tree, start, end, &cached,
GFP_ATOMIC);
}
- unlock_extent(tree, start, end, GFP_ATOMIC);
+ unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC);
if (whole_page) {
if (uptodate) {
@@ -1811,6 +1834,7 @@
do {
struct page *page = bvec->bv_page;
+ struct extent_state *cached = NULL;
tree = &BTRFS_I(page->mapping->host)->io_tree;
start = ((u64)page->index << PAGE_CACHE_SHIFT) +
@@ -1821,13 +1845,14 @@
prefetchw(&bvec->bv_page->flags);
if (uptodate) {
- set_extent_uptodate(tree, start, end, GFP_ATOMIC);
+ set_extent_uptodate(tree, start, end, &cached,
+ GFP_ATOMIC);
} else {
ClearPageUptodate(page);
SetPageError(page);
}
- unlock_extent(tree, start, end, GFP_ATOMIC);
+ unlock_extent_cached(tree, start, end, &cached, GFP_ATOMIC);
} while (bvec >= bio->bi_io_vec);
@@ -2016,14 +2041,17 @@
while (cur <= end) {
if (cur >= last_byte) {
char *userpage;
+ struct extent_state *cached = NULL;
+
iosize = PAGE_CACHE_SIZE - page_offset;
userpage = kmap_atomic(page, KM_USER0);
memset(userpage + page_offset, 0, iosize);
flush_dcache_page(page);
kunmap_atomic(userpage, KM_USER0);
set_extent_uptodate(tree, cur, cur + iosize - 1,
- GFP_NOFS);
- unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS);
+ &cached, GFP_NOFS);
+ unlock_extent_cached(tree, cur, cur + iosize - 1,
+ &cached, GFP_NOFS);
break;
}
em = get_extent(inode, page, page_offset, cur,
@@ -2063,14 +2091,17 @@
/* we've found a hole, just zero and go on */
if (block_start == EXTENT_MAP_HOLE) {
char *userpage;
+ struct extent_state *cached = NULL;
+
userpage = kmap_atomic(page, KM_USER0);
memset(userpage + page_offset, 0, iosize);
flush_dcache_page(page);
kunmap_atomic(userpage, KM_USER0);
set_extent_uptodate(tree, cur, cur + iosize - 1,
- GFP_NOFS);
- unlock_extent(tree, cur, cur + iosize - 1, GFP_NOFS);
+ &cached, GFP_NOFS);
+ unlock_extent_cached(tree, cur, cur + iosize - 1,
+ &cached, GFP_NOFS);
cur = cur + iosize;
page_offset += iosize;
continue;
@@ -2789,9 +2820,12 @@
iocount++;
block_start = block_start + iosize;
} else {
- set_extent_uptodate(tree, block_start, cur_end,
+ struct extent_state *cached = NULL;
+
+ set_extent_uptodate(tree, block_start, cur_end, &cached,
GFP_NOFS);
- unlock_extent(tree, block_start, cur_end, GFP_NOFS);
+ unlock_extent_cached(tree, block_start, cur_end,
+ &cached, GFP_NOFS);
block_start = cur_end + 1;
}
page_offset = block_start & (PAGE_CACHE_SIZE - 1);
@@ -3457,7 +3491,7 @@
num_pages = num_extent_pages(eb->start, eb->len);
set_extent_uptodate(tree, eb->start, eb->start + eb->len - 1,
- GFP_NOFS);
+ NULL, GFP_NOFS);
for (i = 0; i < num_pages; i++) {
page = extent_buffer_page(eb, i);
if ((i == 0 && (eb->start & (PAGE_CACHE_SIZE - 1))) ||
@@ -3885,6 +3919,12 @@
kunmap_atomic(dst_kaddr, KM_USER0);
}
+static inline bool areas_overlap(unsigned long src, unsigned long dst, unsigned long len)
+{
+ unsigned long distance = (src > dst) ? src - dst : dst - src;
+ return distance < len;
+}
+
static void copy_pages(struct page *dst_page, struct page *src_page,
unsigned long dst_off, unsigned long src_off,
unsigned long len)
@@ -3892,10 +3932,12 @@
char *dst_kaddr = kmap_atomic(dst_page, KM_USER0);
char *src_kaddr;
- if (dst_page != src_page)
+ if (dst_page != src_page) {
src_kaddr = kmap_atomic(src_page, KM_USER1);
- else
+ } else {
src_kaddr = dst_kaddr;
+ BUG_ON(areas_overlap(src_off, dst_off, len));
+ }
memcpy(dst_kaddr + dst_off, src_kaddr + src_off, len);
kunmap_atomic(dst_kaddr, KM_USER0);
@@ -3970,7 +4012,7 @@
"len %lu len %lu\n", dst_offset, len, dst->len);
BUG_ON(1);
}
- if (dst_offset < src_offset) {
+ if (!areas_overlap(src_offset, dst_offset, len)) {
memcpy_extent_buffer(dst, dst_offset, src_offset, len);
return;
}
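
memmove_extent_buffer previously fell back to the plain forward copy whenever
dst_offset < src_offset, which is not sufficient: the two ranges can still
overlap. The new areas_overlap() compares the distance between the offsets
against the length instead. Standalone version of the check:

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Two ranges of length len overlap iff their start offsets are closer
 * together than len (the same check the patch adds). */
static bool areas_overlap(unsigned long src, unsigned long dst,
			  unsigned long len)
{
	unsigned long distance = (src > dst) ? src - dst : dst - src;
	return distance < len;
}

int main(void)
{
	char buf[17] = "abcdefghijklmnop";

	/* src=4, dst=2: dst < src, so the old test picked the plain
	 * copy even though the 8-byte ranges overlap. */
	assert(areas_overlap(4, 2, 8));
	memmove(buf + 2, buf + 4, 8);	/* overlapping: must move */

	assert(!areas_overlap(0, 8, 8));
	memcpy(buf + 8, buf, 8);	/* disjoint: memcpy is safe */
	printf("%s\n", buf);
	return 0;
}
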
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index f62c5442..af2d717 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -208,7 +208,7 @@
int bits, int exclusive_bits, u64 *failed_start,
struct extent_state **cached_state, gfp_t mask);
int set_extent_uptodate(struct extent_io_tree *tree, u64 start, u64 end,
- gfp_t mask);
+ struct extent_state **cached_state, gfp_t mask);
int set_extent_new(struct extent_io_tree *tree, u64 start, u64 end,
gfp_t mask);
int set_extent_dirty(struct extent_io_tree *tree, u64 start, u64 end,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index e621ea5..75899a0 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -104,7 +104,7 @@
/*
* unlocks pages after btrfs_file_write is done with them
*/
-static noinline void btrfs_drop_pages(struct page **pages, size_t num_pages)
+void btrfs_drop_pages(struct page **pages, size_t num_pages)
{
size_t i;
for (i = 0; i < num_pages; i++) {
@@ -127,16 +127,13 @@
* this also makes the decision about creating an inline extent vs
* doing real data extents, marking pages dirty and delalloc as required.
*/
-static noinline int dirty_and_release_pages(struct btrfs_root *root,
- struct file *file,
- struct page **pages,
- size_t num_pages,
- loff_t pos,
- size_t write_bytes)
+int btrfs_dirty_pages(struct btrfs_root *root, struct inode *inode,
+ struct page **pages, size_t num_pages,
+ loff_t pos, size_t write_bytes,
+ struct extent_state **cached)
{
int err = 0;
int i;
- struct inode *inode = fdentry(file)->d_inode;
u64 num_bytes;
u64 start_pos;
u64 end_of_last_block;
@@ -149,7 +146,7 @@
end_of_last_block = start_pos + num_bytes - 1;
err = btrfs_set_extent_delalloc(inode, start_pos, end_of_last_block,
- NULL);
+ cached);
if (err)
return err;
@@ -992,9 +989,9 @@
}
if (copied > 0) {
- ret = dirty_and_release_pages(root, file, pages,
- dirty_pages, pos,
- copied);
+ ret = btrfs_dirty_pages(root, inode, pages,
+ dirty_pages, pos, copied,
+ NULL);
if (ret) {
btrfs_delalloc_release_space(inode,
dirty_pages << PAGE_CACHE_SHIFT);
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index f561c95..11d2e9c 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -508,6 +508,7 @@
struct inode *inode;
struct rb_node *node;
struct list_head *pos, *n;
+ struct page **pages;
struct page *page;
struct extent_state *cached_state = NULL;
struct btrfs_free_cluster *cluster = NULL;
@@ -517,13 +518,13 @@
u64 start, end, len;
u64 bytes = 0;
u32 *crc, *checksums;
- pgoff_t index = 0, last_index = 0;
unsigned long first_page_offset;
- int num_checksums;
+ int index = 0, num_pages = 0;
int entries = 0;
int bitmaps = 0;
int ret = 0;
bool next_page = false;
+ bool out_of_space = false;
root = root->fs_info->tree_root;
@@ -551,24 +552,31 @@
return 0;
}
- last_index = (i_size_read(inode) - 1) >> PAGE_CACHE_SHIFT;
+ num_pages = (i_size_read(inode) + PAGE_CACHE_SIZE - 1) >>
+ PAGE_CACHE_SHIFT;
filemap_write_and_wait(inode->i_mapping);
btrfs_wait_ordered_range(inode, inode->i_size &
~(root->sectorsize - 1), (u64)-1);
/* We need a checksum per page. */
- num_checksums = i_size_read(inode) / PAGE_CACHE_SIZE;
- crc = checksums = kzalloc(sizeof(u32) * num_checksums, GFP_NOFS);
+ crc = checksums = kzalloc(sizeof(u32) * num_pages, GFP_NOFS);
if (!crc) {
iput(inode);
return 0;
}
+ pages = kzalloc(sizeof(struct page *) * num_pages, GFP_NOFS);
+ if (!pages) {
+ kfree(crc);
+ iput(inode);
+ return 0;
+ }
+
/* Since the first page has all of our checksums and our generation we
* need to calculate the offset into the page that we can start writing
* our entries.
*/
- first_page_offset = (sizeof(u32) * num_checksums) + sizeof(u64);
+ first_page_offset = (sizeof(u32) * num_pages) + sizeof(u64);
/* Get the cluster for this block_group if it exists */
if (!list_empty(&block_group->cluster_list))
@@ -590,20 +598,18 @@
* after find_get_page at this point. Just putting this here so people
* know and don't freak out.
*/
- while (index <= last_index) {
+ while (index < num_pages) {
page = grab_cache_page(inode->i_mapping, index);
if (!page) {
- pgoff_t i = 0;
+ int i;
- while (i < index) {
- page = find_get_page(inode->i_mapping, i);
- unlock_page(page);
- page_cache_release(page);
- page_cache_release(page);
- i++;
+ for (i = 0; i < num_pages; i++) {
+ unlock_page(pages[i]);
+ page_cache_release(pages[i]);
}
goto out_free;
}
+ pages[index] = page;
index++;
}
@@ -631,7 +637,12 @@
offset = start_offset;
}
- page = find_get_page(inode->i_mapping, index);
+ if (index >= num_pages) {
+ out_of_space = true;
+ break;
+ }
+
+ page = pages[index];
addr = kmap(page);
entry = addr + start_offset;
@@ -708,23 +719,6 @@
bytes += PAGE_CACHE_SIZE;
- ClearPageChecked(page);
- set_page_extent_mapped(page);
- SetPageUptodate(page);
- set_page_dirty(page);
-
- /*
- * We need to release our reference we got for grab_cache_page,
- * except for the first page which will hold our checksums, we
- * do that below.
- */
- if (index != 0) {
- unlock_page(page);
- page_cache_release(page);
- }
-
- page_cache_release(page);
-
index++;
} while (node || next_page);
@@ -734,7 +728,11 @@
struct btrfs_free_space *entry =
list_entry(pos, struct btrfs_free_space, list);
- page = find_get_page(inode->i_mapping, index);
+ if (index >= num_pages) {
+ out_of_space = true;
+ break;
+ }
+ page = pages[index];
addr = kmap(page);
memcpy(addr, entry->bitmap, PAGE_CACHE_SIZE);
@@ -745,64 +743,58 @@
crc++;
bytes += PAGE_CACHE_SIZE;
- ClearPageChecked(page);
- set_page_extent_mapped(page);
- SetPageUptodate(page);
- set_page_dirty(page);
- unlock_page(page);
- page_cache_release(page);
- page_cache_release(page);
list_del_init(&entry->list);
index++;
}
+ if (out_of_space) {
+ btrfs_drop_pages(pages, num_pages);
+ unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
+ i_size_read(inode) - 1, &cached_state,
+ GFP_NOFS);
+ ret = 0;
+ goto out_free;
+ }
+
/* Zero out the rest of the pages just to make sure */
- while (index <= last_index) {
+ while (index < num_pages) {
void *addr;
- page = find_get_page(inode->i_mapping, index);
-
+ page = pages[index];
addr = kmap(page);
memset(addr, 0, PAGE_CACHE_SIZE);
kunmap(page);
- ClearPageChecked(page);
- set_page_extent_mapped(page);
- SetPageUptodate(page);
- set_page_dirty(page);
- unlock_page(page);
- page_cache_release(page);
- page_cache_release(page);
bytes += PAGE_CACHE_SIZE;
index++;
}
- btrfs_set_extent_delalloc(inode, 0, bytes - 1, &cached_state);
-
/* Write the checksums and trans id to the first page */
{
void *addr;
u64 *gen;
- page = find_get_page(inode->i_mapping, 0);
+ page = pages[0];
addr = kmap(page);
- memcpy(addr, checksums, sizeof(u32) * num_checksums);
- gen = addr + (sizeof(u32) * num_checksums);
+ memcpy(addr, checksums, sizeof(u32) * num_pages);
+ gen = addr + (sizeof(u32) * num_pages);
*gen = trans->transid;
kunmap(page);
- ClearPageChecked(page);
- set_page_extent_mapped(page);
- SetPageUptodate(page);
- set_page_dirty(page);
- unlock_page(page);
- page_cache_release(page);
- page_cache_release(page);
}
- BTRFS_I(inode)->generation = trans->transid;
+ ret = btrfs_dirty_pages(root, inode, pages, num_pages, 0,
+ bytes, &cached_state);
+ btrfs_drop_pages(pages, num_pages);
unlock_extent_cached(&BTRFS_I(inode)->io_tree, 0,
i_size_read(inode) - 1, &cached_state, GFP_NOFS);
+ if (ret) {
+ ret = 0;
+ goto out_free;
+ }
+
+ BTRFS_I(inode)->generation = trans->transid;
+
filemap_write_and_wait(inode->i_mapping);
key.objectid = BTRFS_FREE_SPACE_OBJECTID;
@@ -853,6 +845,7 @@
BTRFS_I(inode)->generation = 0;
}
kfree(checksums);
+ kfree(pages);
btrfs_update_inode(trans, root, inode);
iput(inode);
return ret;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 5cc64ab..fcd66b6 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -1770,9 +1770,12 @@
add_pending_csums(trans, inode, ordered_extent->file_offset,
&ordered_extent->list);
- btrfs_ordered_update_i_size(inode, 0, ordered_extent);
- ret = btrfs_update_inode(trans, root, inode);
- BUG_ON(ret);
+ ret = btrfs_ordered_update_i_size(inode, 0, ordered_extent);
+ if (!ret) {
+ ret = btrfs_update_inode(trans, root, inode);
+ BUG_ON(ret);
+ }
+ ret = 0;
out:
if (nolock) {
if (trans)
@@ -2590,6 +2593,13 @@
struct btrfs_inode_item *item,
struct inode *inode)
{
+ if (!leaf->map_token)
+ map_private_extent_buffer(leaf, (unsigned long)item,
+ sizeof(struct btrfs_inode_item),
+ &leaf->map_token, &leaf->kaddr,
+ &leaf->map_start, &leaf->map_len,
+ KM_USER1);
+
btrfs_set_inode_uid(leaf, item, inode->i_uid);
btrfs_set_inode_gid(leaf, item, inode->i_gid);
btrfs_set_inode_size(leaf, item, BTRFS_I(inode)->disk_i_size);
@@ -2618,6 +2628,11 @@
btrfs_set_inode_rdev(leaf, item, inode->i_rdev);
btrfs_set_inode_flags(leaf, item, BTRFS_I(inode)->flags);
btrfs_set_inode_block_group(leaf, item, BTRFS_I(inode)->block_group);
+
+ if (leaf->map_token) {
+ unmap_extent_buffer(leaf, leaf->map_token, KM_USER1);
+ leaf->map_token = NULL;
+ }
}
/*
@@ -4207,10 +4222,8 @@
struct btrfs_key found_key;
struct btrfs_path *path;
int ret;
- u32 nritems;
struct extent_buffer *leaf;
int slot;
- int advance;
unsigned char d_type;
int over = 0;
u32 di_cur;
@@ -4253,27 +4266,19 @@
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
if (ret < 0)
goto err;
- advance = 0;
while (1) {
leaf = path->nodes[0];
- nritems = btrfs_header_nritems(leaf);
slot = path->slots[0];
- if (advance || slot >= nritems) {
- if (slot >= nritems - 1) {
- ret = btrfs_next_leaf(root, path);
- if (ret)
- break;
- leaf = path->nodes[0];
- nritems = btrfs_header_nritems(leaf);
- slot = path->slots[0];
- } else {
- slot++;
- path->slots[0]++;
- }
+ if (slot >= btrfs_header_nritems(leaf)) {
+ ret = btrfs_next_leaf(root, path);
+ if (ret < 0)
+ goto err;
+ else if (ret > 0)
+ break;
+ continue;
}
- advance = 1;
item = btrfs_item_nr(leaf, slot);
btrfs_item_key_to_cpu(leaf, &found_key, slot);
@@ -4282,7 +4287,7 @@
if (btrfs_key_type(&found_key) != key_type)
break;
if (found_key.offset < filp->f_pos)
- continue;
+ goto next;
filp->f_pos = found_key.offset;
@@ -4335,6 +4340,8 @@
di_cur += di_len;
di = (struct btrfs_dir_item *)((char *)di + di_len);
}
+next:
+ path->slots[0]++;
}
/* Reached end of directory/root. Bump pos past the last item. */
@@ -4527,14 +4534,17 @@
BUG_ON(!path);
inode = new_inode(root->fs_info->sb);
- if (!inode)
+ if (!inode) {
+ btrfs_free_path(path);
return ERR_PTR(-ENOMEM);
+ }
if (dir) {
trace_btrfs_inode_request(dir);
ret = btrfs_set_inode_index(dir, index);
if (ret) {
+ btrfs_free_path(path);
iput(inode);
return ERR_PTR(ret);
}
@@ -4834,9 +4844,6 @@
if (inode->i_nlink == ~0U)
return -EMLINK;
- btrfs_inc_nlink(inode);
- inode->i_ctime = CURRENT_TIME;
-
err = btrfs_set_inode_index(dir, &index);
if (err)
goto fail;
@@ -4852,6 +4859,9 @@
goto fail;
}
+ btrfs_inc_nlink(inode);
+ inode->i_ctime = CURRENT_TIME;
+
btrfs_set_trans_block_group(trans, dir);
ihold(inode);
@@ -5221,7 +5231,7 @@
btrfs_mark_buffer_dirty(leaf);
}
set_extent_uptodate(io_tree, em->start,
- extent_map_end(em) - 1, GFP_NOFS);
+ extent_map_end(em) - 1, NULL, GFP_NOFS);
goto insert;
} else {
printk(KERN_ERR "btrfs unknown found_type %d\n", found_type);
@@ -5428,17 +5438,30 @@
}
static struct extent_map *btrfs_new_extent_direct(struct inode *inode,
+ struct extent_map *em,
u64 start, u64 len)
{
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_trans_handle *trans;
- struct extent_map *em;
struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
struct btrfs_key ins;
u64 alloc_hint;
int ret;
+ bool insert = false;
- btrfs_drop_extent_cache(inode, start, start + len - 1, 0);
+ /*
+ * Ok, if the extent map we looked up is a hole and is for the exact
+ * range we want, there is no reason to allocate a new one. However,
+ * if it is not right, then we need to free this one and drop the
+ * cache for our range.
+ */
+ if (em->block_start != EXTENT_MAP_HOLE || em->start != start ||
+ em->len != len) {
+ free_extent_map(em);
+ em = NULL;
+ insert = true;
+ btrfs_drop_extent_cache(inode, start, start + len - 1, 0);
+ }
trans = btrfs_join_transaction(root, 0);
if (IS_ERR(trans))
@@ -5454,10 +5477,12 @@
goto out;
}
- em = alloc_extent_map(GFP_NOFS);
if (!em) {
- em = ERR_PTR(-ENOMEM);
- goto out;
+ em = alloc_extent_map(GFP_NOFS);
+ if (!em) {
+ em = ERR_PTR(-ENOMEM);
+ goto out;
+ }
}
em->start = start;
@@ -5467,9 +5492,15 @@
em->block_start = ins.objectid;
em->block_len = ins.offset;
em->bdev = root->fs_info->fs_devices->latest_bdev;
+
+ /*
+ * We need to do this because if we're using the original em we searched
+ * for, we could have EXTENT_FLAG_VACANCY set, and we don't want that.
+ */
+ em->flags = 0;
set_bit(EXTENT_FLAG_PINNED, &em->flags);
- while (1) {
+ while (insert) {
write_lock(&em_tree->lock);
ret = add_extent_mapping(em_tree, em);
write_unlock(&em_tree->lock);
@@ -5687,8 +5718,7 @@
* it above
*/
len = bh_result->b_size;
- free_extent_map(em);
- em = btrfs_new_extent_direct(inode, start, len);
+ em = btrfs_new_extent_direct(inode, em, start, len);
if (IS_ERR(em))
return PTR_ERR(em);
len = min(len, em->len - (start - em->start));
@@ -5851,8 +5881,10 @@
}
add_pending_csums(trans, inode, ordered->file_offset, &ordered->list);
- btrfs_ordered_update_i_size(inode, 0, ordered);
- btrfs_update_inode(trans, root, inode);
+ ret = btrfs_ordered_update_i_size(inode, 0, ordered);
+ if (!ret)
+ btrfs_update_inode(trans, root, inode);
+ ret = 0;
out_unlock:
unlock_extent_cached(&BTRFS_I(inode)->io_tree, ordered->file_offset,
ordered->file_offset + ordered->len - 1,
@@ -5938,7 +5970,7 @@
static inline int __btrfs_submit_dio_bio(struct bio *bio, struct inode *inode,
int rw, u64 file_offset, int skip_sum,
- u32 *csums)
+ u32 *csums, int async_submit)
{
int write = rw & REQ_WRITE;
struct btrfs_root *root = BTRFS_I(inode)->root;
@@ -5949,13 +5981,24 @@
if (ret)
goto err;
- if (write && !skip_sum) {
+ if (skip_sum)
+ goto map;
+
+ if (write && async_submit) {
ret = btrfs_wq_submit_bio(root->fs_info,
inode, rw, bio, 0, 0,
file_offset,
__btrfs_submit_bio_start_direct_io,
__btrfs_submit_bio_done);
goto err;
+ } else if (write) {
+ /*
+ * If we aren't doing async submit, calculate the csum of the
+ * bio now.
+ */
+ ret = btrfs_csum_one_bio(root, inode, bio, file_offset, 1);
+ if (ret)
+ goto err;
} else if (!skip_sum) {
ret = btrfs_lookup_bio_sums_dio(root, inode, bio,
file_offset, csums);
@@ -5963,7 +6006,8 @@
goto err;
}
- ret = btrfs_map_bio(root, rw, bio, 0, 1);
+map:
+ ret = btrfs_map_bio(root, rw, bio, 0, async_submit);
err:
bio_put(bio);
return ret;
@@ -5985,15 +6029,9 @@
int nr_pages = 0;
u32 *csums = dip->csums;
int ret = 0;
+ int async_submit = 0;
int write = rw & REQ_WRITE;
- bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS);
- if (!bio)
- return -ENOMEM;
- bio->bi_private = dip;
- bio->bi_end_io = btrfs_end_dio_bio;
- atomic_inc(&dip->pending_bios);
-
map_length = orig_bio->bi_size;
ret = btrfs_map_block(map_tree, READ, start_sector << 9,
&map_length, NULL, 0);
@@ -6002,6 +6040,19 @@
return -EIO;
}
+ if (map_length >= orig_bio->bi_size) {
+ bio = orig_bio;
+ goto submit;
+ }
+
+ async_submit = 1;
+ bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS);
+ if (!bio)
+ return -ENOMEM;
+ bio->bi_private = dip;
+ bio->bi_end_io = btrfs_end_dio_bio;
+ atomic_inc(&dip->pending_bios);
+
while (bvec <= (orig_bio->bi_io_vec + orig_bio->bi_vcnt - 1)) {
if (unlikely(map_length < submit_len + bvec->bv_len ||
bio_add_page(bio, bvec->bv_page, bvec->bv_len,
@@ -6015,7 +6066,7 @@
atomic_inc(&dip->pending_bios);
ret = __btrfs_submit_dio_bio(bio, inode, rw,
file_offset, skip_sum,
- csums);
+ csums, async_submit);
if (ret) {
bio_put(bio);
atomic_dec(&dip->pending_bios);
@@ -6052,8 +6103,9 @@
}
}
+submit:
ret = __btrfs_submit_dio_bio(bio, inode, rw, file_offset, skip_sum,
- csums);
+ csums, async_submit);
if (!ret)
return 0;
@@ -6148,6 +6200,7 @@
unsigned long nr_segs)
{
int seg;
+ int i;
size_t size;
unsigned long addr;
unsigned blocksize_mask = root->sectorsize - 1;
@@ -6162,8 +6215,22 @@
addr = (unsigned long)iov[seg].iov_base;
size = iov[seg].iov_len;
end += size;
- if ((addr & blocksize_mask) || (size & blocksize_mask))
+ if ((addr & blocksize_mask) || (size & blocksize_mask))
goto out;
+
+ /* If this is a write we don't need to check anymore */
+ if (rw & WRITE)
+ continue;
+
+ /*
+ * Check to make sure we don't have duplicate iov_base's in this
+ * iovec; if so, return EINVAL, since otherwise we'll get csum
+ * errors when reading back.
+ */
+ for (i = seg + 1; i < nr_segs; i++) {
+ if (iov[seg].iov_base == iov[i].iov_base)
+ goto out;
+ }
}
retval = 0;
out:
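
The added loop rejects direct-IO reads whose iovec repeats an iov_base: two
reads landing in the same page would clobber each other and then fail checksum
verification. The scan is quadratic, but nr_segs is small. Standalone version:

#include <stdio.h>
#include <sys/uio.h>

/* Return nonzero if any two segments share an iov_base, mirroring the
 * scan the patch adds for direct-IO reads. */
static int has_duplicate_bases(const struct iovec *iov, int nr_segs)
{
	for (int seg = 0; seg < nr_segs; seg++)
		for (int i = seg + 1; i < nr_segs; i++)
			if (iov[seg].iov_base == iov[i].iov_base)
				return 1;
	return 0;
}

int main(void)
{
	char a[512], b[512];
	struct iovec ok[2]  = { { a, sizeof(a) }, { b, sizeof(b) } };
	struct iovec dup[2] = { { a, sizeof(a) }, { a, sizeof(a) } };

	printf("ok: %d dup: %d\n", has_duplicate_bases(ok, 2),
	       has_duplicate_bases(dup, 2));	/* ok: 0 dup: 1 */
	return 0;
}
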
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index cfc264f..ffb48d6c 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2287,7 +2287,7 @@
struct btrfs_ioctl_space_info space;
struct btrfs_ioctl_space_info *dest;
struct btrfs_ioctl_space_info *dest_orig;
- struct btrfs_ioctl_space_info *user_dest;
+ struct btrfs_ioctl_space_info __user *user_dest;
struct btrfs_space_info *info;
u64 types[] = {BTRFS_BLOCK_GROUP_DATA,
BTRFS_BLOCK_GROUP_SYSTEM,
diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index 58e7de9..0ac712e 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -159,7 +159,7 @@
Opt_compress_type, Opt_compress_force, Opt_compress_force_type,
Opt_notreelog, Opt_ratio, Opt_flushoncommit, Opt_discard,
Opt_space_cache, Opt_clear_cache, Opt_user_subvol_rm_allowed,
- Opt_enospc_debug, Opt_err,
+ Opt_enospc_debug, Opt_subvolrootid, Opt_err,
};
static match_table_t tokens = {
@@ -189,6 +189,7 @@
{Opt_clear_cache, "clear_cache"},
{Opt_user_subvol_rm_allowed, "user_subvol_rm_allowed"},
{Opt_enospc_debug, "enospc_debug"},
+ {Opt_subvolrootid, "subvolrootid=%d"},
{Opt_err, NULL},
};
@@ -232,6 +233,7 @@
break;
case Opt_subvol:
case Opt_subvolid:
+ case Opt_subvolrootid:
case Opt_device:
/*
* These are parsed by btrfs_parse_early_options
@@ -388,7 +390,7 @@
*/
static int btrfs_parse_early_options(const char *options, fmode_t flags,
void *holder, char **subvol_name, u64 *subvol_objectid,
- struct btrfs_fs_devices **fs_devices)
+ u64 *subvol_rootid, struct btrfs_fs_devices **fs_devices)
{
substring_t args[MAX_OPT_ARGS];
char *opts, *orig, *p;
@@ -429,6 +431,18 @@
*subvol_objectid = intarg;
}
break;
+ case Opt_subvolrootid:
+ intarg = 0;
+ error = match_int(&args[0], &intarg);
+ if (!error) {
+ /* we want the original fs_tree */
+ if (!intarg)
+ *subvol_rootid =
+ BTRFS_FS_TREE_OBJECTID;
+ else
+ *subvol_rootid = intarg;
+ }
+ break;
case Opt_device:
error = btrfs_scan_one_device(match_strdup(&args[0]),
flags, holder, fs_devices);
@@ -736,6 +750,7 @@
fmode_t mode = FMODE_READ;
char *subvol_name = NULL;
u64 subvol_objectid = 0;
+ u64 subvol_rootid = 0;
int error = 0;
if (!(flags & MS_RDONLY))
@@ -743,7 +758,7 @@
error = btrfs_parse_early_options(data, mode, fs_type,
&subvol_name, &subvol_objectid,
- &fs_devices);
+ &subvol_rootid, &fs_devices);
if (error)
return ERR_PTR(error);
@@ -807,15 +822,17 @@
s->s_flags |= MS_ACTIVE;
}
- root = get_default_root(s, subvol_objectid);
- if (IS_ERR(root)) {
- error = PTR_ERR(root);
- deactivate_locked_super(s);
- goto error_free_subvol_name;
- }
/* if they gave us a subvolume name bind mount into that */
if (strcmp(subvol_name, ".")) {
struct dentry *new_root;
+
+ root = get_default_root(s, subvol_rootid);
+ if (IS_ERR(root)) {
+ error = PTR_ERR(root);
+ deactivate_locked_super(s);
+ goto error_free_subvol_name;
+ }
+
mutex_lock(&root->d_inode->i_mutex);
new_root = lookup_one_len(subvol_name, root,
strlen(subvol_name));
@@ -836,6 +853,13 @@
}
dput(root);
root = new_root;
+ } else {
+ root = get_default_root(s, subvol_objectid);
+ if (IS_ERR(root)) {
+ error = PTR_ERR(root);
+ deactivate_locked_super(s);
+ goto error_free_subvol_name;
+ }
}
kfree(subvol_name);
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 5b158da..c571734 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -32,10 +32,8 @@
static noinline void put_transaction(struct btrfs_transaction *transaction)
{
- WARN_ON(transaction->use_count == 0);
- transaction->use_count--;
- if (transaction->use_count == 0) {
- list_del_init(&transaction->list);
+ WARN_ON(atomic_read(&transaction->use_count) == 0);
+ if (atomic_dec_and_test(&transaction->use_count)) {
memset(transaction, 0, sizeof(*transaction));
kmem_cache_free(btrfs_transaction_cachep, transaction);
}
@@ -60,14 +58,14 @@
if (!cur_trans)
return -ENOMEM;
root->fs_info->generation++;
- cur_trans->num_writers = 1;
+ atomic_set(&cur_trans->num_writers, 1);
cur_trans->num_joined = 0;
cur_trans->transid = root->fs_info->generation;
init_waitqueue_head(&cur_trans->writer_wait);
init_waitqueue_head(&cur_trans->commit_wait);
cur_trans->in_commit = 0;
cur_trans->blocked = 0;
- cur_trans->use_count = 1;
+ atomic_set(&cur_trans->use_count, 1);
cur_trans->commit_done = 0;
cur_trans->start_time = get_seconds();
@@ -88,7 +86,7 @@
root->fs_info->running_transaction = cur_trans;
spin_unlock(&root->fs_info->new_trans_lock);
} else {
- cur_trans->num_writers++;
+ atomic_inc(&cur_trans->num_writers);
cur_trans->num_joined++;
}
@@ -145,7 +143,7 @@
cur_trans = root->fs_info->running_transaction;
if (cur_trans && cur_trans->blocked) {
DEFINE_WAIT(wait);
- cur_trans->use_count++;
+ atomic_inc(&cur_trans->use_count);
while (1) {
prepare_to_wait(&root->fs_info->transaction_wait, &wait,
TASK_UNINTERRUPTIBLE);
@@ -181,6 +179,7 @@
{
struct btrfs_trans_handle *h;
struct btrfs_transaction *cur_trans;
+ int retries = 0;
int ret;
if (root->fs_info->fs_state & BTRFS_SUPER_FLAG_ERROR)
@@ -204,7 +203,7 @@
}
cur_trans = root->fs_info->running_transaction;
- cur_trans->use_count++;
+ atomic_inc(&cur_trans->use_count);
if (type != TRANS_JOIN_NOLOCK)
mutex_unlock(&root->fs_info->trans_mutex);
@@ -224,10 +223,18 @@
if (num_items > 0) {
ret = btrfs_trans_reserve_metadata(h, root, num_items);
- if (ret == -EAGAIN) {
+ if (ret == -EAGAIN && !retries) {
+ retries++;
btrfs_commit_transaction(h, root);
goto again;
+ } else if (ret == -EAGAIN) {
+ /*
+ * We have already retried and gotten EAGAIN, so we really
+ * are out of space; set ret to -ENOSPC.
+ */
+ ret = -ENOSPC;
}
+
if (ret < 0) {
btrfs_end_transaction(h, root);
return ERR_PTR(ret);
@@ -327,7 +334,7 @@
goto out_unlock; /* nothing committing|committed */
}
- cur_trans->use_count++;
+ atomic_inc(&cur_trans->use_count);
mutex_unlock(&root->fs_info->trans_mutex);
wait_for_commit(root, cur_trans);
@@ -457,18 +464,14 @@
wake_up_process(info->transaction_kthread);
}
- if (lock)
- mutex_lock(&info->trans_mutex);
WARN_ON(cur_trans != info->running_transaction);
- WARN_ON(cur_trans->num_writers < 1);
- cur_trans->num_writers--;
+ WARN_ON(atomic_read(&cur_trans->num_writers) < 1);
+ atomic_dec(&cur_trans->num_writers);
smp_mb();
if (waitqueue_active(&cur_trans->writer_wait))
wake_up(&cur_trans->writer_wait);
put_transaction(cur_trans);
- if (lock)
- mutex_unlock(&info->trans_mutex);
if (current->journal_info == trans)
current->journal_info = NULL;
@@ -1178,7 +1181,7 @@
/* take transaction reference */
mutex_lock(&root->fs_info->trans_mutex);
cur_trans = trans->transaction;
- cur_trans->use_count++;
+ atomic_inc(&cur_trans->use_count);
mutex_unlock(&root->fs_info->trans_mutex);
btrfs_end_transaction(trans, root);
@@ -1237,7 +1240,7 @@
mutex_lock(&root->fs_info->trans_mutex);
if (cur_trans->in_commit) {
- cur_trans->use_count++;
+ atomic_inc(&cur_trans->use_count);
mutex_unlock(&root->fs_info->trans_mutex);
btrfs_end_transaction(trans, root);
@@ -1259,7 +1262,7 @@
prev_trans = list_entry(cur_trans->list.prev,
struct btrfs_transaction, list);
if (!prev_trans->commit_done) {
- prev_trans->use_count++;
+ atomic_inc(&prev_trans->use_count);
mutex_unlock(&root->fs_info->trans_mutex);
wait_for_commit(root, prev_trans);
@@ -1300,14 +1303,14 @@
TASK_UNINTERRUPTIBLE);
smp_mb();
- if (cur_trans->num_writers > 1)
+ if (atomic_read(&cur_trans->num_writers) > 1)
schedule_timeout(MAX_SCHEDULE_TIMEOUT);
else if (should_grow)
schedule_timeout(1);
mutex_lock(&root->fs_info->trans_mutex);
finish_wait(&cur_trans->writer_wait, &wait);
- } while (cur_trans->num_writers > 1 ||
+ } while (atomic_read(&cur_trans->num_writers) > 1 ||
(should_grow && cur_trans->num_joined != joined));
ret = create_pending_snapshots(trans, root->fs_info);
@@ -1394,6 +1397,7 @@
wake_up(&cur_trans->commit_wait);
+ list_del_init(&cur_trans->list);
put_transaction(cur_trans);
put_transaction(cur_trans);
diff --git a/fs/btrfs/transaction.h b/fs/btrfs/transaction.h
index 229a594..e441acc 100644
--- a/fs/btrfs/transaction.h
+++ b/fs/btrfs/transaction.h
@@ -27,11 +27,11 @@
* total writers in this transaction, it must be zero before the
* transaction can end
*/
- unsigned long num_writers;
+ atomic_t num_writers;
unsigned long num_joined;
int in_commit;
- int use_count;
+ atomic_t use_count;
int commit_done;
int blocked;
struct list_head list;
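
The transaction.c and transaction.h hunks above convert num_writers and
use_count from plain integers guarded by trans_mutex into atomic_t, so
references can be taken and dropped without the mutex; atomic_dec_and_test()
makes "decrement and check for zero" one indivisible step, which is what lets
the lock/unlock pair disappear from the end-transaction path. A hedged
userspace sketch of the same refcount idiom with C11 stdatomic (struct and
helper names are made up for illustration):

#include <stdatomic.h>
#include <stdlib.h>

struct trans {
        atomic_int use_count;
};

static struct trans *get_trans(struct trans *t)
{
        atomic_fetch_add(&t->use_count, 1);
        return t;
}

static void put_trans(struct trans *t)
{
        /* atomic_fetch_sub returns the old value: 1 means this call
         * dropped the last reference, mirroring atomic_dec_and_test(). */
        if (atomic_fetch_sub(&t->use_count, 1) == 1)
                free(t);
}

int main(void)
{
        struct trans *t = calloc(1, sizeof(*t));

        if (!t)
                return 1;
        atomic_init(&t->use_count, 1);
        get_trans(t);
        put_trans(t);           /* 2 -> 1: still live */
        put_trans(t);           /* 1 -> 0: freed */
        return 0;
}
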
diff --git a/fs/btrfs/xattr.c b/fs/btrfs/xattr.c
index a5303b8..cfd6605 100644
--- a/fs/btrfs/xattr.c
+++ b/fs/btrfs/xattr.c
@@ -180,11 +180,10 @@
struct btrfs_path *path;
struct extent_buffer *leaf;
struct btrfs_dir_item *di;
- int ret = 0, slot, advance;
+ int ret = 0, slot;
size_t total_size = 0, size_left = size;
unsigned long name_ptr;
size_t name_len;
- u32 nritems;
/*
* ok we want all objects associated with this id.
@@ -204,34 +203,24 @@
ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
if (ret < 0)
goto err;
- advance = 0;
+
while (1) {
leaf = path->nodes[0];
- nritems = btrfs_header_nritems(leaf);
slot = path->slots[0];
/* this is where we start walking through the path */
- if (advance || slot >= nritems) {
+ if (slot >= btrfs_header_nritems(leaf)) {
/*
* if we've reached the last slot in this leaf we need
* to go to the next leaf and reset everything
*/
- if (slot >= nritems-1) {
- ret = btrfs_next_leaf(root, path);
- if (ret)
- break;
- leaf = path->nodes[0];
- nritems = btrfs_header_nritems(leaf);
- slot = path->slots[0];
- } else {
- /*
- * just walking through the slots on this leaf
- */
- slot++;
- path->slots[0]++;
- }
+ ret = btrfs_next_leaf(root, path);
+ if (ret < 0)
+ goto err;
+ else if (ret > 0)
+ break;
+ continue;
}
- advance = 1;
btrfs_item_key_to_cpu(leaf, &found_key, slot);
@@ -250,7 +239,7 @@
/* we are just looking for how big our buffer needs to be */
if (!size)
- continue;
+ goto next;
if (!buffer || (name_len + 1) > size_left) {
ret = -ERANGE;
@@ -263,6 +252,8 @@
size_left -= name_len + 1;
buffer += name_len + 1;
+next:
+ path->slots[0]++;
}
ret = total_size;
diff --git a/fs/dcache.c b/fs/dcache.c
index ad25c4c..129a357 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -2131,7 +2131,7 @@
*/
void dentry_update_name_case(struct dentry *dentry, struct qstr *name)
{
- BUG_ON(!mutex_is_locked(&dentry->d_inode->i_mutex));
+ BUG_ON(!mutex_is_locked(&dentry->d_parent->d_inode->i_mutex));
BUG_ON(dentry->d_name.len != name->len); /* d_lookup gives this */
spin_lock(&dentry->d_lock);
diff --git a/fs/fhandle.c b/fs/fhandle.c
index bf93ad2..6b08864 100644
--- a/fs/fhandle.c
+++ b/fs/fhandle.c
@@ -7,6 +7,7 @@
#include <linux/exportfs.h>
#include <linux/fs_struct.h>
#include <linux/fsnotify.h>
+#include <linux/personality.h>
#include <asm/uaccess.h>
#include "internal.h"
diff --git a/fs/filesystems.c b/fs/filesystems.c
index 751d6b2..0845f84 100644
--- a/fs/filesystems.c
+++ b/fs/filesystems.c
@@ -110,14 +110,13 @@
*tmp = fs->next;
fs->next = NULL;
write_unlock(&file_systems_lock);
+ synchronize_rcu();
return 0;
}
tmp = &(*tmp)->next;
}
write_unlock(&file_systems_lock);
- synchronize_rcu();
-
return -EINVAL;
}
diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c
index c71995b..0f5c4f9 100644
--- a/fs/gfs2/aops.c
+++ b/fs/gfs2/aops.c
@@ -884,8 +884,8 @@
}
brelse(dibh);
- gfs2_trans_end(sdp);
failed:
+ gfs2_trans_end(sdp);
if (al) {
gfs2_inplace_release(ip);
gfs2_quota_unlock(ip);
diff --git a/fs/gfs2/dir.c b/fs/gfs2/dir.c
index 5c356d0..f789c57 100644
--- a/fs/gfs2/dir.c
+++ b/fs/gfs2/dir.c
@@ -1506,7 +1506,7 @@
inode = gfs2_inode_lookup(dir->i_sb,
be16_to_cpu(dent->de_type),
be64_to_cpu(dent->de_inum.no_addr),
- be64_to_cpu(dent->de_inum.no_formal_ino));
+ be64_to_cpu(dent->de_inum.no_formal_ino), 0);
brelse(bh);
return inode;
}
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index b2682e0..e483108 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -617,18 +617,51 @@
return generic_file_aio_write(iocb, iov, nr_segs, pos);
}
-static void empty_write_end(struct page *page, unsigned from,
- unsigned to)
+static int empty_write_end(struct page *page, unsigned from,
+ unsigned to, int mode)
{
- struct gfs2_inode *ip = GFS2_I(page->mapping->host);
+ struct inode *inode = page->mapping->host;
+ struct gfs2_inode *ip = GFS2_I(inode);
+ struct buffer_head *bh;
+ unsigned offset, blksize = 1 << inode->i_blkbits;
+ pgoff_t end_index = i_size_read(inode) >> PAGE_CACHE_SHIFT;
zero_user(page, from, to-from);
mark_page_accessed(page);
- if (!gfs2_is_writeback(ip))
- gfs2_page_add_databufs(ip, page, from, to);
+ if (page->index < end_index || !(mode & FALLOC_FL_KEEP_SIZE)) {
+ if (!gfs2_is_writeback(ip))
+ gfs2_page_add_databufs(ip, page, from, to);
- block_commit_write(page, from, to);
+ block_commit_write(page, from, to);
+ return 0;
+ }
+
+ offset = 0;
+ bh = page_buffers(page);
+ while (offset < to) {
+ if (offset >= from) {
+ set_buffer_uptodate(bh);
+ mark_buffer_dirty(bh);
+ clear_buffer_new(bh);
+ write_dirty_buffer(bh, WRITE);
+ }
+ offset += blksize;
+ bh = bh->b_this_page;
+ }
+
+ offset = 0;
+ bh = page_buffers(page);
+ while (offset < to) {
+ if (offset >= from) {
+ wait_on_buffer(bh);
+ if (!buffer_uptodate(bh))
+ return -EIO;
+ }
+ offset += blksize;
+ bh = bh->b_this_page;
+ }
+ return 0;
}
static int needs_empty_write(sector_t block, struct inode *inode)
@@ -643,7 +676,8 @@
return !buffer_mapped(&bh_map);
}
-static int write_empty_blocks(struct page *page, unsigned from, unsigned to)
+static int write_empty_blocks(struct page *page, unsigned from, unsigned to,
+ int mode)
{
struct inode *inode = page->mapping->host;
unsigned start, end, next, blksize;
@@ -668,7 +702,9 @@
gfs2_block_map);
if (unlikely(ret))
return ret;
- empty_write_end(page, start, end);
+ ret = empty_write_end(page, start, end, mode);
+ if (unlikely(ret))
+ return ret;
end = 0;
}
start = next;
@@ -682,7 +718,9 @@
ret = __block_write_begin(page, start, end - start, gfs2_block_map);
if (unlikely(ret))
return ret;
- empty_write_end(page, start, end);
+ ret = empty_write_end(page, start, end, mode);
+ if (unlikely(ret))
+ return ret;
}
return 0;
@@ -731,7 +769,7 @@
if (curr == end)
to = end_offset;
- error = write_empty_blocks(page, from, to);
+ error = write_empty_blocks(page, from, to, mode);
if (!error && offset + to > inode->i_size &&
!(mode & FALLOC_FL_KEEP_SIZE)) {
i_size_write(inode, offset + to);
diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
index 3754e3c..25eeb2b 100644
--- a/fs/gfs2/glops.c
+++ b/fs/gfs2/glops.c
@@ -385,6 +385,10 @@
static void iopen_go_callback(struct gfs2_glock *gl)
{
struct gfs2_inode *ip = (struct gfs2_inode *)gl->gl_object;
+ struct gfs2_sbd *sdp = gl->gl_sbd;
+
+ if (sdp->sd_vfs->s_flags & MS_RDONLY)
+ return;
if (gl->gl_demote_state == LM_ST_UNLOCKED &&
gl->gl_state == LM_ST_SHARED && ip) {
diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
index 97d54a2..9134dcb 100644
--- a/fs/gfs2/inode.c
+++ b/fs/gfs2/inode.c
@@ -40,37 +40,61 @@
u64 ir_length;
};
+struct gfs2_skip_data {
+ u64 no_addr;
+ int skipped;
+ int non_block;
+};
+
static int iget_test(struct inode *inode, void *opaque)
{
struct gfs2_inode *ip = GFS2_I(inode);
- u64 *no_addr = opaque;
+ struct gfs2_skip_data *data = opaque;
- if (ip->i_no_addr == *no_addr)
+ if (ip->i_no_addr == data->no_addr) {
+ if (data->non_block &&
+ inode->i_state & (I_FREEING|I_CLEAR|I_WILL_FREE)) {
+ data->skipped = 1;
+ return 0;
+ }
return 1;
-
+ }
return 0;
}
static int iget_set(struct inode *inode, void *opaque)
{
struct gfs2_inode *ip = GFS2_I(inode);
- u64 *no_addr = opaque;
+ struct gfs2_skip_data *data = opaque;
- inode->i_ino = (unsigned long)*no_addr;
- ip->i_no_addr = *no_addr;
+ if (data->skipped)
+ return -ENOENT;
+ inode->i_ino = (unsigned long)(data->no_addr);
+ ip->i_no_addr = data->no_addr;
return 0;
}
struct inode *gfs2_ilookup(struct super_block *sb, u64 no_addr)
{
unsigned long hash = (unsigned long)no_addr;
- return ilookup5(sb, hash, iget_test, &no_addr);
+ struct gfs2_skip_data data;
+
+ data.no_addr = no_addr;
+ data.skipped = 0;
+ data.non_block = 0;
+ return ilookup5(sb, hash, iget_test, &data);
}
-static struct inode *gfs2_iget(struct super_block *sb, u64 no_addr)
+static struct inode *gfs2_iget(struct super_block *sb, u64 no_addr,
+ int non_block)
{
+ struct gfs2_skip_data data;
unsigned long hash = (unsigned long)no_addr;
- return iget5_locked(sb, hash, iget_test, iget_set, &no_addr);
+
+ data.no_addr = no_addr;
+ data.skipped = 0;
+ data.non_block = non_block;
+ return iget5_locked(sb, hash, iget_test, iget_set, &data);
}
/**
@@ -111,19 +135,20 @@
* @sb: The super block
* @no_addr: The inode number
* @type: The type of the inode
+ * @non_block: Can we block on inodes that are being freed?
*
* Returns: A VFS inode, or an error
*/
struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned int type,
- u64 no_addr, u64 no_formal_ino)
+ u64 no_addr, u64 no_formal_ino, int non_block)
{
struct inode *inode;
struct gfs2_inode *ip;
struct gfs2_glock *io_gl = NULL;
int error;
- inode = gfs2_iget(sb, no_addr);
+ inode = gfs2_iget(sb, no_addr, non_block);
ip = GFS2_I(inode);
if (!inode)
@@ -185,11 +210,12 @@
{
struct super_block *sb = sdp->sd_vfs;
struct gfs2_holder i_gh;
- struct inode *inode;
+ struct inode *inode = NULL;
int error;
+ /* Must not read in the inode block until the block type is verified */
error = gfs2_glock_nq_num(sdp, no_addr, &gfs2_inode_glops,
- LM_ST_SHARED, LM_FLAG_ANY, &i_gh);
+ LM_ST_EXCLUSIVE, GL_SKIP, &i_gh);
if (error)
return ERR_PTR(error);
@@ -197,7 +223,7 @@
if (error)
goto fail;
- inode = gfs2_inode_lookup(sb, DT_UNKNOWN, no_addr, 0);
+ inode = gfs2_inode_lookup(sb, DT_UNKNOWN, no_addr, 0, 1);
if (IS_ERR(inode))
goto fail;
@@ -843,7 +869,7 @@
goto fail_gunlock2;
inode = gfs2_inode_lookup(dir->i_sb, IF2DT(mode), inum.no_addr,
- inum.no_formal_ino);
+ inum.no_formal_ino, 0);
if (IS_ERR(inode))
goto fail_gunlock2;
diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h
index 3e00a66..099ca30 100644
--- a/fs/gfs2/inode.h
+++ b/fs/gfs2/inode.h
@@ -97,7 +97,8 @@
}
extern struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned type,
- u64 no_addr, u64 no_formal_ino);
+ u64 no_addr, u64 no_formal_ino,
+ int non_block);
extern struct inode *gfs2_lookup_by_inum(struct gfs2_sbd *sdp, u64 no_addr,
u64 *no_formal_ino,
unsigned int blktype);
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 42ef243..d3c69eb 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -430,7 +430,7 @@
struct dentry *dentry;
struct inode *inode;
- inode = gfs2_inode_lookup(sb, DT_DIR, no_addr, 0);
+ inode = gfs2_inode_lookup(sb, DT_DIR, no_addr, 0, 0);
if (IS_ERR(inode)) {
fs_err(sdp, "can't read in %s inode: %ld\n", name, PTR_ERR(inode));
return PTR_ERR(inode);
diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index cf930cd..6fcae84 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -945,7 +945,7 @@
/* rgblk_search can return a block < goal, so we need to
keep it marching forward. */
no_addr = block + rgd->rd_data0;
- goal++;
+ goal = max(block + 1, goal + 1);
if (*last_unlinked != NO_BLOCK && no_addr <= *last_unlinked)
continue;
if (no_addr == skip)
@@ -971,7 +971,7 @@
found++;
/* Limit reclaim to sensible number of tasks */
- if (found > 2*NR_CPUS)
+ if (found > NR_CPUS)
return;
}
diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
index a4e23d6..b9f28e6 100644
--- a/fs/gfs2/super.c
+++ b/fs/gfs2/super.c
@@ -1318,15 +1318,17 @@
static void gfs2_evict_inode(struct inode *inode)
{
- struct gfs2_sbd *sdp = inode->i_sb->s_fs_info;
+ struct super_block *sb = inode->i_sb;
+ struct gfs2_sbd *sdp = sb->s_fs_info;
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_holder gh;
int error;
- if (inode->i_nlink)
+ if (inode->i_nlink || (sb->s_flags & MS_RDONLY))
goto out;
- error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, 0, &gh);
+ /* Must not read inode block until block type has been verified */
+ error = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, &gh);
if (unlikely(error)) {
gfs2_glock_dq_uninit(&ip->i_iopen_gh);
goto out;
@@ -1336,6 +1338,12 @@
if (error)
goto out_truncate;
+ if (test_bit(GIF_INVALID, &ip->i_flags)) {
+ error = gfs2_inode_refresh(ip);
+ if (error)
+ goto out_truncate;
+ }
+
ip->i_iopen_gh.gh_flags |= GL_NOCACHE;
gfs2_glock_dq_wait(&ip->i_iopen_gh);
gfs2_holder_reinit(LM_ST_EXCLUSIVE, LM_FLAG_TRY_1CB | GL_NOCACHE, &ip->i_iopen_gh);
diff --git a/fs/namei.c b/fs/namei.c
index e6cd611..54fc993 100644
--- a/fs/namei.c
+++ b/fs/namei.c
@@ -697,6 +697,7 @@
do {
seq = read_seqcount_begin(&fs->seq);
nd->root = fs->root;
+ nd->seq = __read_seqcount_begin(&nd->root.dentry->d_seq);
} while (read_seqcount_retry(&fs->seq, seq));
}
}
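
The namei.c hunk above samples the root dentry's d_seq inside the fs->seq
retry loop, so an RCU-walk starts from a root snapshot consistent with
fs->root. For reference, a simplified, hedged userspace sketch of the
seqcount read side this relies on (the kernel version adds stricter barriers,
lockdep and preemption handling that this omits):

#include <stdatomic.h>

struct seqcount {
        atomic_uint sequence;   /* odd while a writer is mid-update */
};

static unsigned read_seqcount_begin(struct seqcount *s)
{
        unsigned seq;

        /* Spin past in-flight writers, then snapshot the counter. */
        while ((seq = atomic_load_explicit(&s->sequence,
                                           memory_order_acquire)) & 1)
                ;
        return seq;
}

static int read_seqcount_retry(struct seqcount *s, unsigned start)
{
        /* Any movement of the counter invalidates what was read. */
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&s->sequence,
                                    memory_order_relaxed) != start;
}
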
diff --git a/fs/partitions/ldm.c b/fs/partitions/ldm.c
index b10e354..ce4f624 100644
--- a/fs/partitions/ldm.c
+++ b/fs/partitions/ldm.c
@@ -1299,6 +1299,11 @@
BUG_ON (!data || !frags);
+ if (size < 2 * VBLK_SIZE_HEAD) {
+ ldm_error("Value of size is too small.");
+ return false;
+ }
+
group = get_unaligned_be32(data + 0x08);
rec = get_unaligned_be16(data + 0x0C);
num = get_unaligned_be16(data + 0x0E);
@@ -1306,6 +1311,10 @@
ldm_error ("A VBLK claims to have %d parts.", num);
return false;
}
+ if (rec >= num) {
+ ldm_error("REC value (%d) exceeds NUM value (%d)", rec, num);
+ return false;
+ }
list_for_each (item, frags) {
f = list_entry (item, struct frag, list);
@@ -1334,10 +1343,9 @@
f->map |= (1 << rec);
- if (num > 0) {
- data += VBLK_SIZE_HEAD;
- size -= VBLK_SIZE_HEAD;
- }
+ data += VBLK_SIZE_HEAD;
+ size -= VBLK_SIZE_HEAD;
+
memcpy (f->data+rec*(size-VBLK_SIZE_HEAD)+VBLK_SIZE_HEAD, data, size);
return true;
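
The ldm.c hunks above reject undersized buffers and out-of-range rec values
before they are used to index fragment storage, the standard
validate-before-index defense for on-disk metadata. A hedged sketch of the
pattern (SLOT_SIZE and frag_store are illustrative, not the LDM layout):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define SLOT_SIZE 64

static bool frag_store(char *slots, size_t nslots, size_t rec,
                       const void *data, size_t len)
{
        /* Reject before indexing: rec and len come from untrusted input. */
        if (rec >= nslots || len > SLOT_SIZE)
                return false;
        memcpy(slots + rec * SLOT_SIZE, data, len);
        return true;
}
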
diff --git a/fs/proc/base.c b/fs/proc/base.c
index dd6628d..dfa5327 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3124,11 +3124,16 @@
/* for the /proc/ directory itself, after non-process stuff has been done */
int proc_pid_readdir(struct file * filp, void * dirent, filldir_t filldir)
{
- unsigned int nr = filp->f_pos - FIRST_PROCESS_ENTRY;
- struct task_struct *reaper = get_proc_task(filp->f_path.dentry->d_inode);
+ unsigned int nr;
+ struct task_struct *reaper;
struct tgid_iter iter;
struct pid_namespace *ns;
+ if (filp->f_pos >= PID_MAX_LIMIT + TGID_OFFSET)
+ goto out_no_task;
+ nr = filp->f_pos - FIRST_PROCESS_ENTRY;
+
+ reaper = get_proc_task(filp->f_path.dentry->d_inode);
if (!reaper)
goto out_no_task;
diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
index 9eead2c..fbb0b47 100644
--- a/fs/ramfs/file-nommu.c
+++ b/fs/ramfs/file-nommu.c
@@ -112,6 +112,7 @@
SetPageDirty(page);
unlock_page(page);
+ put_page(page);
}
return 0;
diff --git a/fs/ubifs/debug.h b/fs/ubifs/debug.h
index 919f0de..e6493ca 100644
--- a/fs/ubifs/debug.h
+++ b/fs/ubifs/debug.h
@@ -23,6 +23,12 @@
#ifndef __UBIFS_DEBUG_H__
#define __UBIFS_DEBUG_H__
+/* Checking helper functions */
+typedef int (*dbg_leaf_callback)(struct ubifs_info *c,
+ struct ubifs_zbranch *zbr, void *priv);
+typedef int (*dbg_znode_callback)(struct ubifs_info *c,
+ struct ubifs_znode *znode, void *priv);
+
#ifdef CONFIG_UBIFS_FS_DEBUG
/**
@@ -270,11 +276,6 @@
void dbg_dump_index(struct ubifs_info *c);
void dbg_dump_lpt_lebs(const struct ubifs_info *c);
-/* Checking helper functions */
-typedef int (*dbg_leaf_callback)(struct ubifs_info *c,
- struct ubifs_zbranch *zbr, void *priv);
-typedef int (*dbg_znode_callback)(struct ubifs_info *c,
- struct ubifs_znode *znode, void *priv);
int dbg_walk_index(struct ubifs_info *c, dbg_leaf_callback leaf_cb,
dbg_znode_callback znode_cb, void *priv);
@@ -295,7 +296,6 @@
int dbg_check_filesystem(struct ubifs_info *c);
void dbg_check_heap(struct ubifs_info *c, struct ubifs_lpt_heap *heap, int cat,
int add_pos);
-int dbg_check_lprops(struct ubifs_info *c);
int dbg_check_lpt_nodes(struct ubifs_info *c, struct ubifs_cnode *cnode,
int row, int col);
int dbg_check_inode_size(struct ubifs_info *c, const struct inode *inode,
@@ -401,58 +401,94 @@
#define DBGKEY(key) ((char *)(key))
#define DBGKEY1(key) ((char *)(key))
-#define ubifs_debugging_init(c) 0
-#define ubifs_debugging_exit(c) ({})
+static inline int ubifs_debugging_init(struct ubifs_info *c) { return 0; }
+static inline void ubifs_debugging_exit(struct ubifs_info *c) { return; }
+static inline const char *dbg_ntype(int type) { return ""; }
+static inline const char *dbg_cstate(int cmt_state) { return ""; }
+static inline const char *dbg_jhead(int jhead) { return ""; }
+static inline const char *
+dbg_get_key_dump(const struct ubifs_info *c,
+ const union ubifs_key *key) { return ""; }
+static inline void dbg_dump_inode(const struct ubifs_info *c,
+ const struct inode *inode) { return; }
+static inline void dbg_dump_node(const struct ubifs_info *c,
+ const void *node) { return; }
+static inline void dbg_dump_lpt_node(const struct ubifs_info *c,
+ void *node, int lnum,
+ int offs) { return; }
+static inline void
+dbg_dump_budget_req(const struct ubifs_budget_req *req) { return; }
+static inline void
+dbg_dump_lstats(const struct ubifs_lp_stats *lst) { return; }
+static inline void dbg_dump_budg(struct ubifs_info *c) { return; }
+static inline void dbg_dump_lprop(const struct ubifs_info *c,
+ const struct ubifs_lprops *lp) { return; }
+static inline void dbg_dump_lprops(struct ubifs_info *c) { return; }
+static inline void dbg_dump_lpt_info(struct ubifs_info *c) { return; }
+static inline void dbg_dump_leb(const struct ubifs_info *c,
+ int lnum) { return; }
+static inline void
+dbg_dump_znode(const struct ubifs_info *c,
+ const struct ubifs_znode *znode) { return; }
+static inline void dbg_dump_heap(struct ubifs_info *c,
+ struct ubifs_lpt_heap *heap,
+ int cat) { return; }
+static inline void dbg_dump_pnode(struct ubifs_info *c,
+ struct ubifs_pnode *pnode,
+ struct ubifs_nnode *parent,
+ int iip) { return; }
+static inline void dbg_dump_tnc(struct ubifs_info *c) { return; }
+static inline void dbg_dump_index(struct ubifs_info *c) { return; }
+static inline void dbg_dump_lpt_lebs(const struct ubifs_info *c) { return; }
-#define dbg_ntype(type) ""
-#define dbg_cstate(cmt_state) ""
-#define dbg_jhead(jhead) ""
-#define dbg_get_key_dump(c, key) ({})
-#define dbg_dump_inode(c, inode) ({})
-#define dbg_dump_node(c, node) ({})
-#define dbg_dump_lpt_node(c, node, lnum, offs) ({})
-#define dbg_dump_budget_req(req) ({})
-#define dbg_dump_lstats(lst) ({})
-#define dbg_dump_budg(c) ({})
-#define dbg_dump_lprop(c, lp) ({})
-#define dbg_dump_lprops(c) ({})
-#define dbg_dump_lpt_info(c) ({})
-#define dbg_dump_leb(c, lnum) ({})
-#define dbg_dump_znode(c, znode) ({})
-#define dbg_dump_heap(c, heap, cat) ({})
-#define dbg_dump_pnode(c, pnode, parent, iip) ({})
-#define dbg_dump_tnc(c) ({})
-#define dbg_dump_index(c) ({})
-#define dbg_dump_lpt_lebs(c) ({})
+static inline int dbg_walk_index(struct ubifs_info *c,
+ dbg_leaf_callback leaf_cb,
+ dbg_znode_callback znode_cb,
+ void *priv) { return 0; }
+static inline void dbg_save_space_info(struct ubifs_info *c) { return; }
+static inline int dbg_check_space_info(struct ubifs_info *c) { return 0; }
+static inline int dbg_check_lprops(struct ubifs_info *c) { return 0; }
+static inline int
+dbg_old_index_check_init(struct ubifs_info *c,
+ struct ubifs_zbranch *zroot) { return 0; }
+static inline int
+dbg_check_old_index(struct ubifs_info *c,
+ struct ubifs_zbranch *zroot) { return 0; }
+static inline int dbg_check_cats(struct ubifs_info *c) { return 0; }
+static inline int dbg_check_ltab(struct ubifs_info *c) { return 0; }
+static inline int dbg_chk_lpt_free_spc(struct ubifs_info *c) { return 0; }
+static inline int dbg_chk_lpt_sz(struct ubifs_info *c,
+ int action, int len) { return 0; }
+static inline int dbg_check_synced_i_size(struct inode *inode) { return 0; }
+static inline int dbg_check_dir_size(struct ubifs_info *c,
+ const struct inode *dir) { return 0; }
+static inline int dbg_check_tnc(struct ubifs_info *c, int extra) { return 0; }
+static inline int dbg_check_idx_size(struct ubifs_info *c,
+ long long idx_size) { return 0; }
+static inline int dbg_check_filesystem(struct ubifs_info *c) { return 0; }
+static inline void dbg_check_heap(struct ubifs_info *c,
+ struct ubifs_lpt_heap *heap,
+ int cat, int add_pos) { return; }
+static inline int dbg_check_lpt_nodes(struct ubifs_info *c,
+ struct ubifs_cnode *cnode, int row, int col) { return 0; }
+static inline int dbg_check_inode_size(struct ubifs_info *c,
+ const struct inode *inode,
+ loff_t size) { return 0; }
+static inline int
+dbg_check_data_nodes_order(struct ubifs_info *c,
+ struct list_head *head) { return 0; }
+static inline int
+dbg_check_nondata_nodes_order(struct ubifs_info *c,
+ struct list_head *head) { return 0; }
-#define dbg_walk_index(c, leaf_cb, znode_cb, priv) 0
-#define dbg_old_index_check_init(c, zroot) 0
-#define dbg_save_space_info(c) ({})
-#define dbg_check_space_info(c) 0
-#define dbg_check_old_index(c, zroot) 0
-#define dbg_check_cats(c) 0
-#define dbg_check_ltab(c) 0
-#define dbg_chk_lpt_free_spc(c) 0
-#define dbg_chk_lpt_sz(c, action, len) 0
-#define dbg_check_synced_i_size(inode) 0
-#define dbg_check_dir_size(c, dir) 0
-#define dbg_check_tnc(c, x) 0
-#define dbg_check_idx_size(c, idx_size) 0
-#define dbg_check_filesystem(c) 0
-#define dbg_check_heap(c, heap, cat, add_pos) ({})
-#define dbg_check_lprops(c) 0
-#define dbg_check_lpt_nodes(c, cnode, row, col) 0
-#define dbg_check_inode_size(c, inode, size) 0
-#define dbg_check_data_nodes_order(c, head) 0
-#define dbg_check_nondata_nodes_order(c, head) 0
-#define dbg_force_in_the_gaps_enabled 0
-#define dbg_force_in_the_gaps() 0
-#define dbg_failure_mode 0
+static inline int dbg_force_in_the_gaps(void) { return 0; }
+#define dbg_force_in_the_gaps_enabled 0
+#define dbg_failure_mode 0
-#define dbg_debugfs_init() 0
-#define dbg_debugfs_exit()
-#define dbg_debugfs_init_fs(c) 0
-#define dbg_debugfs_exit_fs(c) 0
+static inline int dbg_debugfs_init(void) { return 0; }
+static inline void dbg_debugfs_exit(void) { return; }
+static inline int dbg_debugfs_init_fs(struct ubifs_info *c) { return 0; }
+static inline int dbg_debugfs_exit_fs(struct ubifs_info *c) { return 0; }
#endif /* !CONFIG_UBIFS_FS_DEBUG */
#endif /* !__UBIFS_DEBUG_H__ */
diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index 28be1e6..b286db7 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -1312,6 +1312,9 @@
dbg_gen("syncing inode %lu", inode->i_ino);
+ if (inode->i_sb->s_flags & MS_RDONLY)
+ return 0;
+
/*
* VFS has already synchronized dirty pages for this inode. Synchronize
* the inode unless this is a 'datasync()' call.
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 32176cc..cbbfd98 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -697,7 +697,7 @@
extern void blk_stop_queue(struct request_queue *q);
extern void blk_sync_queue(struct request_queue *q);
extern void __blk_stop_queue(struct request_queue *q);
-extern void __blk_run_queue(struct request_queue *q, bool force_kblockd);
+extern void __blk_run_queue(struct request_queue *q);
extern void blk_run_queue(struct request_queue *);
extern int blk_rq_map_user(struct request_queue *, struct request *,
struct rq_map_data *, void __user *, unsigned long,
@@ -857,26 +857,39 @@
struct blk_plug {
unsigned long magic;
struct list_head list;
+ struct list_head cb_list;
unsigned int should_sort;
};
+struct blk_plug_cb {
+ struct list_head list;
+ void (*callback)(struct blk_plug_cb *);
+};
extern void blk_start_plug(struct blk_plug *);
extern void blk_finish_plug(struct blk_plug *);
-extern void __blk_flush_plug(struct task_struct *, struct blk_plug *);
+extern void blk_flush_plug_list(struct blk_plug *, bool);
static inline void blk_flush_plug(struct task_struct *tsk)
{
struct blk_plug *plug = tsk->plug;
- if (unlikely(plug))
- __blk_flush_plug(tsk, plug);
+ if (plug)
+ blk_flush_plug_list(plug, false);
+}
+
+static inline void blk_schedule_flush_plug(struct task_struct *tsk)
+{
+ struct blk_plug *plug = tsk->plug;
+
+ if (plug)
+ blk_flush_plug_list(plug, true);
}
static inline bool blk_needs_flush_plug(struct task_struct *tsk)
{
struct blk_plug *plug = tsk->plug;
- return plug && !list_empty(&plug->list);
+ return plug && (!list_empty(&plug->list) || !list_empty(&plug->cb_list));
}
/*
@@ -1314,6 +1327,11 @@
{
}
+static inline void blk_schedule_flush_plug(struct task_struct *task)
+{
+}
+
+
static inline bool blk_needs_flush_plug(struct task_struct *tsk)
{
return false;
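
The blkdev.h hunk above grows struct blk_plug with a cb_list of blk_plug_cb
callbacks, so other subsystems can attach work that runs when the plug is
flushed, and blk_needs_flush_plug now considers both lists. A hedged, generic
sketch of that callback-list idea (plain C with illustrative names, not the
block-layer implementation):

#include <stddef.h>

struct plug_cb {
        struct plug_cb *next;
        void (*callback)(struct plug_cb *);
};

struct plug {
        struct plug_cb *cb_list;
};

static void plug_add_cb(struct plug *p, struct plug_cb *cb)
{
        cb->next = p->cb_list;
        p->cb_list = cb;
}

static void plug_flush(struct plug *p)
{
        struct plug_cb *cb = p->cb_list;

        /* Detach first so a callback may re-arm itself safely. */
        p->cb_list = NULL;
        while (cb) {
                struct plug_cb *next = cb->next;

                cb->callback(cb);
                cb = next;
        }
}
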
diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h
index e276883..32a4423 100644
--- a/include/linux/device-mapper.h
+++ b/include/linux/device-mapper.h
@@ -197,7 +197,6 @@
struct dm_target_callbacks {
struct list_head list;
int (*congested_fn) (struct dm_target_callbacks *, int);
- void (*unplug_fn)(struct dm_target_callbacks *);
};
int dm_register_target(struct target_type *t);
diff --git a/include/linux/input.h b/include/linux/input.h
index f3a7794..771d6d8 100644
--- a/include/linux/input.h
+++ b/include/linux/input.h
@@ -167,6 +167,7 @@
#define SYN_REPORT 0
#define SYN_CONFIG 1
#define SYN_MT_REPORT 2
+#define SYN_DROPPED 3
/*
* Keys and buttons
@@ -553,8 +554,8 @@
#define KEY_DVD 0x185 /* Media Select DVD */
#define KEY_AUX 0x186
#define KEY_MP3 0x187
-#define KEY_AUDIO 0x188
-#define KEY_VIDEO 0x189
+#define KEY_AUDIO 0x188 /* AL Audio Browser */
+#define KEY_VIDEO 0x189 /* AL Movie Browser */
#define KEY_DIRECTORY 0x18a
#define KEY_LIST 0x18b
#define KEY_MEMO 0x18c /* Media Select Messages */
@@ -603,8 +604,9 @@
#define KEY_FRAMEFORWARD 0x1b5
#define KEY_CONTEXT_MENU 0x1b6 /* GenDesc - system context menu */
#define KEY_MEDIA_REPEAT 0x1b7 /* Consumer - transport control */
-#define KEY_10CHANNELSUP 0x1b8 /* 10 channels up (10+) */
-#define KEY_10CHANNELSDOWN 0x1b9 /* 10 channels down (10-) */
+#define KEY_10CHANNELSUP 0x1b8 /* 10 channels up (10+) */
+#define KEY_10CHANNELSDOWN 0x1b9 /* 10 channels down (10-) */
+#define KEY_IMAGES 0x1ba /* AL Image Browser */
#define KEY_DEL_EOL 0x1c0
#define KEY_DEL_EOS 0x1c1
diff --git a/include/linux/input/mt.h b/include/linux/input/mt.h
index b3ac06a..318bb82 100644
--- a/include/linux/input/mt.h
+++ b/include/linux/input/mt.h
@@ -48,6 +48,12 @@
input_event(dev, EV_ABS, ABS_MT_SLOT, slot);
}
+static inline bool input_is_mt_axis(int axis)
+{
+ return axis == ABS_MT_SLOT ||
+ (axis >= ABS_MT_FIRST && axis <= ABS_MT_LAST);
+}
+
void input_mt_report_slot_state(struct input_dev *dev,
unsigned int tool_type, bool active);
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5a5ce70..5e9840f5 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -216,7 +216,7 @@
return ;
}
-static inline inline void mem_cgroup_rotate_reclaimable_page(struct page *page)
+static inline void mem_cgroup_rotate_reclaimable_page(struct page *page)
{
return ;
}
diff --git a/include/linux/pid.h b/include/linux/pid.h
index 31afb7e..cdced84 100644
--- a/include/linux/pid.h
+++ b/include/linux/pid.h
@@ -117,7 +117,7 @@
*/
extern struct pid *find_get_pid(int nr);
extern struct pid *find_ge_pid(int nr, struct pid_namespace *);
-int next_pidmap(struct pid_namespace *pid_ns, int last);
+int next_pidmap(struct pid_namespace *pid_ns, unsigned int last);
extern struct pid *alloc_pid(struct pid_namespace *ns);
extern void free_pid(struct pid *pid);
diff --git a/include/linux/posix-clock.h b/include/linux/posix-clock.h
index 369e19d..7f1183dc 100644
--- a/include/linux/posix-clock.h
+++ b/include/linux/posix-clock.h
@@ -24,6 +24,7 @@
#include <linux/fs.h>
#include <linux/poll.h>
#include <linux/posix-timers.h>
+#include <linux/rwsem.h>
struct posix_clock;
@@ -104,7 +105,7 @@
* @ops: Functional interface to the clock
* @cdev: Character device instance for this clock
* @kref: Reference count.
- * @mutex: Protects the 'zombie' field from concurrent access.
+ * @rwsem: Protects the 'zombie' field from concurrent access.
* @zombie: If 'zombie' is true, then the hardware has disappeared.
* @release: A function to free the structure when the reference count reaches
* zero. May be NULL if structure is statically allocated.
@@ -117,7 +118,7 @@
struct posix_clock_operations ops;
struct cdev cdev;
struct kref kref;
- struct mutex mutex;
+ struct rw_semaphore rwsem;
bool zombie;
void (*release)(struct posix_clock *clk);
};
diff --git a/include/linux/rio.h b/include/linux/rio.h
index 4e37a7cf..4d50611 100644
--- a/include/linux/rio.h
+++ b/include/linux/rio.h
@@ -396,7 +396,7 @@
};
/* Architecture and hardware-specific functions */
-extern void rio_register_mport(struct rio_mport *);
+extern int rio_register_mport(struct rio_mport *);
extern int rio_open_inb_mbox(struct rio_mport *, void *, int, int);
extern void rio_close_inb_mbox(struct rio_mport *, int);
extern int rio_open_outb_mbox(struct rio_mport *, void *, int, int);
diff --git a/include/linux/rio_ids.h b/include/linux/rio_ids.h
index 7410d33..0cee015 100644
--- a/include/linux/rio_ids.h
+++ b/include/linux/rio_ids.h
@@ -35,6 +35,7 @@
#define RIO_DID_IDTCPS6Q 0x035f
#define RIO_DID_IDTCPS10Q 0x035e
#define RIO_DID_IDTCPS1848 0x0374
+#define RIO_DID_IDTCPS1432 0x0375
#define RIO_DID_IDTCPS1616 0x0379
#define RIO_DID_IDTVPS1616 0x0377
#define RIO_DID_IDTSPS1616 0x0378
diff --git a/include/linux/rtc.h b/include/linux/rtc.h
index 2ca7e8a..877ece4 100644
--- a/include/linux/rtc.h
+++ b/include/linux/rtc.h
@@ -228,6 +228,8 @@
struct rtc_wkalrm *alrm);
extern int rtc_set_alarm(struct rtc_device *rtc,
struct rtc_wkalrm *alrm);
+extern int rtc_initialize_alarm(struct rtc_device *rtc,
+ struct rtc_wkalrm *alrm);
extern void rtc_update_irq(struct rtc_device *rtc,
unsigned long num, unsigned long events);
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e09dafa..94107a2 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1257,6 +1257,9 @@
#endif
struct mm_struct *mm, *active_mm;
+#ifdef CONFIG_COMPAT_BRK
+ unsigned brk_randomized:1;
+#endif
#if defined(SPLIT_RSS_COUNTING)
struct task_rss_stat rss_stat;
#endif
diff --git a/include/linux/usb/usbnet.h b/include/linux/usb/usbnet.h
index 3c7329b..0e18550 100644
--- a/include/linux/usb/usbnet.h
+++ b/include/linux/usb/usbnet.h
@@ -103,8 +103,8 @@
* Indicates to usbnet that the USB driver accumulates multiple IP packets.
* Affects statistics (counters) and short packet handling.
*/
-#define FLAG_MULTI_PACKET 0x1000
-#define FLAG_RX_ASSEMBLE 0x2000 /* rx packets may span >1 frames */
+#define FLAG_MULTI_PACKET 0x2000
+#define FLAG_RX_ASSEMBLE 0x4000 /* rx packets may span >1 frames */
/* init device ... can sleep, or cause probe() failure */
int (*bind)(struct usbnet *, struct usb_interface *);
diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index 461c011..2b3831b 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -58,6 +58,13 @@
UNEVICTABLE_PGCLEARED, /* on COW, page truncate */
UNEVICTABLE_PGSTRANDED, /* unable to isolate on unlock */
UNEVICTABLE_MLOCKFREED,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ THP_FAULT_ALLOC,
+ THP_FAULT_FALLBACK,
+ THP_COLLAPSE_ALLOC,
+ THP_COLLAPSE_ALLOC_FAILED,
+ THP_SPLIT,
+#endif
NR_VM_EVENT_ITEMS
};
diff --git a/include/net/9p/9p.h b/include/net/9p/9p.h
index cdf2e8ac..d2df55b 100644
--- a/include/net/9p/9p.h
+++ b/include/net/9p/9p.h
@@ -139,8 +139,6 @@
*/
enum p9_msg_t {
- P9_TSYNCFS = 0,
- P9_RSYNCFS,
P9_TLERROR = 6,
P9_RLERROR,
P9_TSTATFS = 8,
diff --git a/include/net/9p/client.h b/include/net/9p/client.h
index 85c1413..051a99f 100644
--- a/include/net/9p/client.h
+++ b/include/net/9p/client.h
@@ -218,8 +218,8 @@
void p9_client_begin_disconnect(struct p9_client *clnt);
struct p9_fid *p9_client_attach(struct p9_client *clnt, struct p9_fid *afid,
char *uname, u32 n_uname, char *aname);
-struct p9_fid *p9_client_walk(struct p9_fid *oldfid, int nwname, char **wnames,
- int clone);
+struct p9_fid *p9_client_walk(struct p9_fid *oldfid, uint16_t nwname,
+ char **wnames, int clone);
int p9_client_open(struct p9_fid *fid, int mode);
int p9_client_fcreate(struct p9_fid *fid, char *name, u32 perm, int mode,
char *extension);
@@ -230,7 +230,6 @@
gid_t gid, struct p9_qid *qid);
int p9_client_clunk(struct p9_fid *fid);
int p9_client_fsync(struct p9_fid *fid, int datasync);
-int p9_client_sync_fs(struct p9_fid *fid);
int p9_client_remove(struct p9_fid *fid);
int p9_client_read(struct p9_fid *fid, char *data, char __user *udata,
u64 offset, u32 count);
diff --git a/include/trace/events/block.h b/include/trace/events/block.h
index 78f18ad..bf36654 100644
--- a/include/trace/events/block.h
+++ b/include/trace/events/block.h
@@ -401,9 +401,9 @@
DECLARE_EVENT_CLASS(block_unplug,
- TP_PROTO(struct request_queue *q),
+ TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit),
- TP_ARGS(q),
+ TP_ARGS(q, depth, explicit),
TP_STRUCT__entry(
__field( int, nr_rq )
@@ -411,7 +411,7 @@
),
TP_fast_assign(
- __entry->nr_rq = q->rq.count[READ] + q->rq.count[WRITE];
+ __entry->nr_rq = depth;
memcpy(__entry->comm, current->comm, TASK_COMM_LEN);
),
@@ -419,31 +419,19 @@
);
/**
- * block_unplug_timer - timed release of operations requests in queue to device driver
+ * block_unplug - release of operations requests in request queue
* @q: request queue to unplug
- *
- * Unplug the request queue @q because a timer expired and allow block
- * operation requests to be sent to the device driver.
- */
-DEFINE_EVENT(block_unplug, block_unplug_timer,
-
- TP_PROTO(struct request_queue *q),
-
- TP_ARGS(q)
-);
-
-/**
- * block_unplug_io - release of operations requests in request queue
- * @q: request queue to unplug
+ * @depth: number of requests just added to the queue
+ * @explicit: whether this was an explicit unplug, or one from schedule()
*
* Unplug request queue @q because device driver is scheduled to work
* on elements in the request queue.
*/
-DEFINE_EVENT(block_unplug, block_unplug_io,
+DEFINE_EVENT(block_unplug, block_unplug,
- TP_PROTO(struct request_queue *q),
+ TP_PROTO(struct request_queue *q, unsigned int depth, bool explicit),
- TP_ARGS(q)
+ TP_ARGS(q, depth, explicit)
);
/**
diff --git a/kernel/futex.c b/kernel/futex.c
index dfb924f..fe28dc2 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -1886,7 +1886,7 @@
restart->futex.val = val;
restart->futex.time = abs_time->tv64;
restart->futex.bitset = bitset;
- restart->futex.flags = flags;
+ restart->futex.flags = flags | FLAGS_HAS_TIMEOUT;
ret = -ERESTART_RESTARTBLOCK;
diff --git a/kernel/perf_event.c b/kernel/perf_event.c
index 27960f1..8e81a98 100644
--- a/kernel/perf_event.c
+++ b/kernel/perf_event.c
@@ -364,6 +364,7 @@
}
if (mode & PERF_CGROUP_SWIN) {
+ WARN_ON_ONCE(cpuctx->cgrp);
/* set cgrp before ctxsw in to
* allow event_filter_match() to not
* have to pass task around
@@ -2423,6 +2424,14 @@
if (!ctx || !ctx->nr_events)
goto out;
+ /*
+ * We must ctxsw out cgroup events to avoid conflict
+ * when invoking perf_task_event_sched_in() later on
+ * in this function. Otherwise we end up trying to
+ * ctxswin cgroup events which are already scheduled
+ * in.
+ */
+ perf_cgroup_sched_out(current);
task_ctx_sched_out(ctx, EVENT_ALL);
raw_spin_lock(&ctx->lock);
@@ -2447,6 +2456,9 @@
raw_spin_unlock(&ctx->lock);
+ /*
+ * Also calls ctxswin for cgroup events, if any:
+ */
perf_event_context_sched_in(ctx, ctx->task);
out:
local_irq_restore(flags);
diff --git a/kernel/pid.c b/kernel/pid.c
index 02f2212..57a8346 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -217,11 +217,14 @@
return -1;
}
-int next_pidmap(struct pid_namespace *pid_ns, int last)
+int next_pidmap(struct pid_namespace *pid_ns, unsigned int last)
{
int offset;
struct pidmap *map, *end;
+ if (last >= PID_MAX_LIMIT)
+ return -1;
+
offset = (last + 1) & BITS_PER_PAGE_MASK;
map = &pid_ns->pidmap[(last + 1)/BITS_PER_PAGE];
end = &pid_ns->pidmap[PIDMAP_ENTRIES];
diff --git a/kernel/sched.c b/kernel/sched.c
index 9e3ede1..8c9d804 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4224,7 +4224,7 @@
*/
if (blk_needs_flush_plug(prev)) {
raw_spin_unlock(&rq->lock);
- blk_flush_plug(prev);
+ blk_schedule_flush_plug(prev);
raw_spin_lock(&rq->lock);
}
}
diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
index 25028dd..c340ca6 100644
--- a/kernel/time/posix-clock.c
+++ b/kernel/time/posix-clock.c
@@ -19,7 +19,6 @@
*/
#include <linux/device.h>
#include <linux/file.h>
-#include <linux/mutex.h>
#include <linux/posix-clock.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
@@ -34,19 +33,19 @@
{
struct posix_clock *clk = fp->private_data;
- mutex_lock(&clk->mutex);
+ down_read(&clk->rwsem);
if (!clk->zombie)
return clk;
- mutex_unlock(&clk->mutex);
+ up_read(&clk->rwsem);
return NULL;
}
static void put_posix_clock(struct posix_clock *clk)
{
- mutex_unlock(&clk->mutex);
+ up_read(&clk->rwsem);
}
static ssize_t posix_clock_read(struct file *fp, char __user *buf,
@@ -156,7 +155,7 @@
struct posix_clock *clk =
container_of(inode->i_cdev, struct posix_clock, cdev);
- mutex_lock(&clk->mutex);
+ down_read(&clk->rwsem);
if (clk->zombie) {
err = -ENODEV;
@@ -172,7 +171,7 @@
fp->private_data = clk;
}
out:
- mutex_unlock(&clk->mutex);
+ up_read(&clk->rwsem);
return err;
}
@@ -211,25 +210,20 @@
int err;
kref_init(&clk->kref);
- mutex_init(&clk->mutex);
+ init_rwsem(&clk->rwsem);
cdev_init(&clk->cdev, &posix_clock_file_operations);
clk->cdev.owner = clk->ops.owner;
err = cdev_add(&clk->cdev, devid, 1);
- if (err)
- goto no_cdev;
return err;
-no_cdev:
- mutex_destroy(&clk->mutex);
- return err;
}
EXPORT_SYMBOL_GPL(posix_clock_register);
static void delete_clock(struct kref *kref)
{
struct posix_clock *clk = container_of(kref, struct posix_clock, kref);
- mutex_destroy(&clk->mutex);
+
if (clk->release)
clk->release(clk);
}
@@ -238,9 +232,9 @@
{
cdev_del(&clk->cdev);
- mutex_lock(&clk->mutex);
+ down_write(&clk->rwsem);
clk->zombie = true;
- mutex_unlock(&clk->mutex);
+ up_write(&clk->rwsem);
kref_put(&clk->kref, delete_clock);
}
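
The posix-clock conversion above swaps the clock's mutex for an rw_semaphore:
get/put on the file-operation paths take it shared, so concurrent readers no
longer serialize, while unregistration takes it exclusively to flip zombie. A
hedged pthread sketch of the same shape (illustrative only; the kernel's
rwsem differs from pthread rwlocks in detail):

#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct clk {
        pthread_rwlock_t rwsem;
        bool zombie;
};

static struct clk *get_clk(struct clk *c)
{
        pthread_rwlock_rdlock(&c->rwsem);       /* shared: readers overlap */
        if (!c->zombie)
                return c;                       /* lock stays held until put */
        pthread_rwlock_unlock(&c->rwsem);
        return NULL;
}

static void put_clk(struct clk *c)
{
        pthread_rwlock_unlock(&c->rwsem);
}

static void kill_clk(struct clk *c)
{
        pthread_rwlock_wrlock(&c->rwsem);       /* exclusive: teardown */
        c->zombie = true;
        pthread_rwlock_unlock(&c->rwsem);
}
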
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 7aa40f8..6957aa2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -850,29 +850,21 @@
__blk_add_trace(bt, 0, 0, 0, BLK_TA_PLUG, 0, 0, NULL);
}
-static void blk_add_trace_unplug_io(void *ignore, struct request_queue *q)
+static void blk_add_trace_unplug(void *ignore, struct request_queue *q,
+ unsigned int depth, bool explicit)
{
struct blk_trace *bt = q->blk_trace;
if (bt) {
- unsigned int pdu = q->rq.count[READ] + q->rq.count[WRITE];
- __be64 rpdu = cpu_to_be64(pdu);
+ __be64 rpdu = cpu_to_be64(depth);
+ u32 what;
- __blk_add_trace(bt, 0, 0, 0, BLK_TA_UNPLUG_IO, 0,
- sizeof(rpdu), &rpdu);
- }
-}
+ if (explicit)
+ what = BLK_TA_UNPLUG_IO;
+ else
+ what = BLK_TA_UNPLUG_TIMER;
-static void blk_add_trace_unplug_timer(void *ignore, struct request_queue *q)
-{
- struct blk_trace *bt = q->blk_trace;
-
- if (bt) {
- unsigned int pdu = q->rq.count[READ] + q->rq.count[WRITE];
- __be64 rpdu = cpu_to_be64(pdu);
-
- __blk_add_trace(bt, 0, 0, 0, BLK_TA_UNPLUG_TIMER, 0,
- sizeof(rpdu), &rpdu);
+ __blk_add_trace(bt, 0, 0, 0, what, 0, sizeof(rpdu), &rpdu);
}
}
@@ -1015,9 +1007,7 @@
WARN_ON(ret);
ret = register_trace_block_plug(blk_add_trace_plug, NULL);
WARN_ON(ret);
- ret = register_trace_block_unplug_timer(blk_add_trace_unplug_timer, NULL);
- WARN_ON(ret);
- ret = register_trace_block_unplug_io(blk_add_trace_unplug_io, NULL);
+ ret = register_trace_block_unplug(blk_add_trace_unplug, NULL);
WARN_ON(ret);
ret = register_trace_block_split(blk_add_trace_split, NULL);
WARN_ON(ret);
@@ -1032,8 +1022,7 @@
unregister_trace_block_rq_remap(blk_add_trace_rq_remap, NULL);
unregister_trace_block_bio_remap(blk_add_trace_bio_remap, NULL);
unregister_trace_block_split(blk_add_trace_split, NULL);
- unregister_trace_block_unplug_io(blk_add_trace_unplug_io, NULL);
- unregister_trace_block_unplug_timer(blk_add_trace_unplug_timer, NULL);
+ unregister_trace_block_unplug(blk_add_trace_unplug, NULL);
unregister_trace_block_plug(blk_add_trace_plug, NULL);
unregister_trace_block_sleeprq(blk_add_trace_sleeprq, NULL);
unregister_trace_block_getrq(blk_add_trace_getrq, NULL);
diff --git a/lib/kstrtox.c b/lib/kstrtox.c
index 05672e8..a235f3c 100644
--- a/lib/kstrtox.c
+++ b/lib/kstrtox.c
@@ -49,12 +49,9 @@
val = *s - '0';
else if ('a' <= _tolower(*s) && _tolower(*s) <= 'f')
val = _tolower(*s) - 'a' + 10;
- else if (*s == '\n') {
- if (*(s + 1) == '\0')
- break;
- else
- return -EINVAL;
- } else
+ else if (*s == '\n' && *(s + 1) == '\0')
+ break;
+ else
return -EINVAL;
if (val >= base)
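
The kstrtox.c rewrite above collapses the newline handling into one
condition: a '\n' terminates parsing only when it is the final character;
anything else is -EINVAL. A hedged userspace analogue built on strtoul
(parse_uint is an illustrative name):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

static int parse_uint(const char *s, unsigned long *res)
{
        char *end;

        errno = 0;
        *res = strtoul(s, &end, 10);
        if (errno || end == s)
                return -EINVAL;
        if (*end == '\n')       /* one trailing newline is tolerated */
                end++;
        return *end ? -EINVAL : 0;
}

int main(void)
{
        unsigned long v;

        printf("%d\n", parse_uint("42\n", &v)); /* 0: accepted */
        printf("%d\n", parse_uint("4\n2", &v)); /* -22: embedded newline */
        return 0;
}
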
diff --git a/lib/test-kstrtox.c b/lib/test-kstrtox.c
index 325c2f9..d55769d 100644
--- a/lib/test-kstrtox.c
+++ b/lib/test-kstrtox.c
@@ -315,12 +315,12 @@
{"65537", 10, 65537},
{"2147483646", 10, 2147483646},
{"2147483647", 10, 2147483647},
- {"2147483648", 10, 2147483648},
- {"2147483649", 10, 2147483649},
- {"4294967294", 10, 4294967294},
- {"4294967295", 10, 4294967295},
- {"4294967296", 10, 4294967296},
- {"4294967297", 10, 4294967297},
+ {"2147483648", 10, 2147483648ULL},
+ {"2147483649", 10, 2147483649ULL},
+ {"4294967294", 10, 4294967294ULL},
+ {"4294967295", 10, 4294967295ULL},
+ {"4294967296", 10, 4294967296ULL},
+ {"4294967297", 10, 4294967297ULL},
{"9223372036854775806", 10, 9223372036854775806ULL},
{"9223372036854775807", 10, 9223372036854775807ULL},
{"9223372036854775808", 10, 9223372036854775808ULL},
@@ -369,12 +369,12 @@
{"65537", 10, 65537},
{"2147483646", 10, 2147483646},
{"2147483647", 10, 2147483647},
- {"2147483648", 10, 2147483648},
- {"2147483649", 10, 2147483649},
- {"4294967294", 10, 4294967294},
- {"4294967295", 10, 4294967295},
- {"4294967296", 10, 4294967296},
- {"4294967297", 10, 4294967297},
+ {"2147483648", 10, 2147483648LL},
+ {"2147483649", 10, 2147483649LL},
+ {"4294967294", 10, 4294967294LL},
+ {"4294967295", 10, 4294967295LL},
+ {"4294967296", 10, 4294967296LL},
+ {"4294967297", 10, 4294967297LL},
{"9223372036854775806", 10, 9223372036854775806LL},
{"9223372036854775807", 10, 9223372036854775807LL},
};
@@ -418,10 +418,10 @@
{"65537", 10, 65537},
{"2147483646", 10, 2147483646},
{"2147483647", 10, 2147483647},
- {"2147483648", 10, 2147483648},
- {"2147483649", 10, 2147483649},
- {"4294967294", 10, 4294967294},
- {"4294967295", 10, 4294967295},
+ {"2147483648", 10, 2147483648U},
+ {"2147483649", 10, 2147483649U},
+ {"4294967294", 10, 4294967294U},
+ {"4294967295", 10, 4294967295U},
};
TEST_OK(kstrtou32, u32, "%u", test_u32_ok);
}
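
The test-kstrtox.c changes add ULL/LL/U suffixes because 2147483648 no longer
fits in int: an unsuffixed constant silently takes the next wider type the
compiler finds (long or long long, depending on the ABI), which can mismatch
the u64/s64/u32 field being initialized and draw warnings. A small
illustration:

#include <stdio.h>

int main(void)
{
        /* 2147483647 is INT_MAX; one more and the constant is silently
         * given a wider type, which may not match the field it fills. */
        printf("%zu\n", sizeof(2147483647));    /* typically 4 */
        printf("%zu\n", sizeof(2147483648));    /* 8 on LP64: long */
        printf("%zu\n", sizeof(2147483648ULL)); /* 8, explicitly unsigned */
        return 0;
}
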
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 0a619e0..470dcda 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -244,25 +244,29 @@
struct kobj_attribute *attr, char *buf,
enum transparent_hugepage_flag flag)
{
- if (test_bit(flag, &transparent_hugepage_flags))
- return sprintf(buf, "[yes] no\n");
- else
- return sprintf(buf, "yes [no]\n");
+ return sprintf(buf, "%d\n",
+ !!test_bit(flag, &transparent_hugepage_flags));
}
+
static ssize_t single_flag_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count,
enum transparent_hugepage_flag flag)
{
- if (!memcmp("yes", buf,
- min(sizeof("yes")-1, count))) {
- set_bit(flag, &transparent_hugepage_flags);
- } else if (!memcmp("no", buf,
- min(sizeof("no")-1, count))) {
- clear_bit(flag, &transparent_hugepage_flags);
- } else
+ unsigned long value;
+ int ret;
+
+ ret = kstrtoul(buf, 10, &value);
+ if (ret < 0)
+ return ret;
+ if (value > 1)
return -EINVAL;
+ if (value)
+ set_bit(flag, &transparent_hugepage_flags);
+ else
+ clear_bit(flag, &transparent_hugepage_flags);
+
return count;
}
@@ -680,8 +684,11 @@
return VM_FAULT_OOM;
page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
vma, haddr, numa_node_id(), 0);
- if (unlikely(!page))
+ if (unlikely(!page)) {
+ count_vm_event(THP_FAULT_FALLBACK);
goto out;
+ }
+ count_vm_event(THP_FAULT_ALLOC);
if (unlikely(mem_cgroup_newpage_charge(page, mm, GFP_KERNEL))) {
put_page(page);
goto out;
@@ -909,11 +916,13 @@
new_page = NULL;
if (unlikely(!new_page)) {
+ count_vm_event(THP_FAULT_FALLBACK);
ret = do_huge_pmd_wp_page_fallback(mm, vma, address,
pmd, orig_pmd, page, haddr);
put_page(page);
goto out;
}
+ count_vm_event(THP_FAULT_ALLOC);
if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
put_page(new_page);
@@ -1390,6 +1399,7 @@
BUG_ON(!PageSwapBacked(page));
__split_huge_page(page, anon_vma);
+ count_vm_event(THP_SPLIT);
BUG_ON(PageCompound(page));
out_unlock:
@@ -1784,9 +1794,11 @@
node, __GFP_OTHER_NODE);
if (unlikely(!new_page)) {
up_read(&mm->mmap_sem);
+ count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
*hpage = ERR_PTR(-ENOMEM);
return;
}
+ count_vm_event(THP_COLLAPSE_ALLOC);
if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
up_read(&mm->mmap_sem);
put_page(new_page);
@@ -2151,8 +2163,11 @@
#ifndef CONFIG_NUMA
if (!*hpage) {
*hpage = alloc_hugepage(khugepaged_defrag());
- if (unlikely(!*hpage))
+ if (unlikely(!*hpage)) {
+ count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
break;
+ }
+ count_vm_event(THP_COLLAPSE_ALLOC);
}
#else
if (IS_ERR(*hpage))
@@ -2192,8 +2207,11 @@
do {
hpage = alloc_hugepage(khugepaged_defrag());
- if (!hpage)
+ if (!hpage) {
+ count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
khugepaged_alloc_sleep();
+ } else
+ count_vm_event(THP_COLLAPSE_ALLOC);
} while (unlikely(!hpage) &&
likely(khugepaged_enabled()));
return hpage;
@@ -2210,8 +2228,11 @@
while (likely(khugepaged_enabled())) {
#ifndef CONFIG_NUMA
hpage = khugepaged_alloc_hugepage();
- if (unlikely(!hpage))
+ if (unlikely(!hpage)) {
+ count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
break;
+ }
+ count_vm_event(THP_COLLAPSE_ALLOC);
#else
if (IS_ERR(hpage)) {
khugepaged_alloc_sleep();
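
Earlier in the huge_memory.c diff, single_flag_store drops the "yes"/"no"
strings for a strict numeric parse: kstrtoul, then reject anything but 0 or
1. A hedged userspace analogue of that parsing rule (parse_bool01 is an
illustrative name):

#include <errno.h>
#include <stdlib.h>

static int parse_bool01(const char *buf, int *out)
{
        char *end;
        unsigned long v;

        errno = 0;
        v = strtoul(buf, &end, 10);
        if (errno || end == buf || (*end && *end != '\n'))
                return -EINVAL;
        if (v > 1)
                return -EINVAL; /* only "0" and "1" are accepted */
        *out = (int)v;
        return 0;
}
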
diff --git a/mm/memory.c b/mm/memory.c
index b623a24..ce22a25 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3688,7 +3688,7 @@
*/
#ifdef CONFIG_HAVE_IOREMAP_PROT
vma = find_vma(mm, addr);
- if (!vma)
+ if (!vma || vma->vm_start > addr)
break;
if (vma->vm_ops && vma->vm_ops->access)
ret = vma->vm_ops->access(vma, addr, buf,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a2acaf8..9ca1d60 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -375,7 +375,7 @@
#endif
#ifdef CONFIG_FLATMEM
- max_mapnr = max(page_to_pfn(page), max_mapnr);
+ max_mapnr = max(pfn, max_mapnr);
#endif
ClearPageReserved(page);
diff --git a/mm/mmap.c b/mm/mmap.c
index 8c05e5b..e27e0cf 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -259,7 +259,7 @@
* randomize_va_space to 2, which will still cause mm->start_brk
* to be arbitrarily shifted
*/
- if (mm->start_brk > PAGE_ALIGN(mm->end_data))
+ if (current->brk_randomized)
min_brk = mm->start_brk;
else
min_brk = mm->end_data;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 6a819d1..83fb72c1 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -84,24 +84,6 @@
#endif /* CONFIG_NUMA */
/*
- * If this is a system OOM (not a memcg OOM) and the task selected to be
- * killed is not already running at high (RT) priorities, speed up the
- * recovery by boosting the dying task to the lowest FIFO priority.
- * That helps with the recovery and avoids interfering with RT tasks.
- */
-static void boost_dying_task_prio(struct task_struct *p,
- struct mem_cgroup *mem)
-{
- struct sched_param param = { .sched_priority = 1 };
-
- if (mem)
- return;
-
- if (!rt_task(p))
- sched_setscheduler_nocheck(p, SCHED_FIFO, &param);
-}
-
-/*
* The process p may have detached its own ->mm while exiting or through
* use_mm(), but one or more of its subthreads may still have a valid
* pointer. Return p, or any of its subthreads with a valid ->mm, with
@@ -452,13 +434,6 @@
set_tsk_thread_flag(p, TIF_MEMDIE);
force_sig(SIGKILL, p);
- /*
- * We give our sacrificial lamb high priority and access to
- * all the memory it needs. That way it should be able to
- * exit() and clear out its resources quickly...
- */
- boost_dying_task_prio(p, mem);
-
return 0;
}
#undef K
@@ -482,7 +457,6 @@
*/
if (p->flags & PF_EXITING) {
set_tsk_thread_flag(p, TIF_MEMDIE);
- boost_dying_task_prio(p, mem);
return 0;
}
@@ -556,7 +530,6 @@
*/
if (fatal_signal_pending(current)) {
set_thread_flag(TIF_MEMDIE);
- boost_dying_task_prio(current, NULL);
return;
}
@@ -712,7 +685,6 @@
*/
if (fatal_signal_pending(current)) {
set_thread_flag(TIF_MEMDIE);
- boost_dying_task_prio(current, NULL);
return;
}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2747f5e5a..9f8a97b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3176,7 +3176,7 @@
* Called with zonelists_mutex held always
* unless system_state == SYSTEM_BOOTING.
*/
-void build_all_zonelists(void *data)
+void __ref build_all_zonelists(void *data)
{
set_zonelist_order();
diff --git a/mm/shmem.c b/mm/shmem.c
index 58da7c1..8fa27e4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -421,7 +421,8 @@
* a waste to allocate index if we cannot allocate data.
*/
if (sbinfo->max_blocks) {
- if (percpu_counter_compare(&sbinfo->used_blocks, (sbinfo->max_blocks - 1)) > 0)
+ if (percpu_counter_compare(&sbinfo->used_blocks,
+ sbinfo->max_blocks - 1) >= 0)
return ERR_PTR(-ENOSPC);
percpu_counter_inc(&sbinfo->used_blocks);
spin_lock(&inode->i_lock);
@@ -1397,7 +1398,8 @@
shmem_swp_unmap(entry);
sbinfo = SHMEM_SB(inode->i_sb);
if (sbinfo->max_blocks) {
- if ((percpu_counter_compare(&sbinfo->used_blocks, sbinfo->max_blocks) > 0) ||
+ if (percpu_counter_compare(&sbinfo->used_blocks,
+ sbinfo->max_blocks) >= 0 ||
shmem_acct_block(info->flags)) {
spin_unlock(&info->lock);
error = -ENOSPC;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c7f5a6d..f6b435c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -41,6 +41,7 @@
#include <linux/memcontrol.h>
#include <linux/delayacct.h>
#include <linux/sysctl.h>
+#include <linux/oom.h>
#include <asm/tlbflush.h>
#include <asm/div64.h>
@@ -1988,17 +1989,12 @@
return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
}
-/*
- * As hibernation is going on, kswapd is freezed so that it can't mark
- * the zone into all_unreclaimable. It can't handle OOM during hibernation.
- * So let's check zone's unreclaimable in direct reclaim as well as kswapd.
- */
+/* All zones in zonelist are unreclaimable? */
static bool all_unreclaimable(struct zonelist *zonelist,
struct scan_control *sc)
{
struct zoneref *z;
struct zone *zone;
- bool all_unreclaimable = true;
for_each_zone_zonelist_nodemask(zone, z, zonelist,
gfp_zone(sc->gfp_mask), sc->nodemask) {
@@ -2006,13 +2002,11 @@
continue;
if (!cpuset_zone_allowed_hardwall(zone, GFP_KERNEL))
continue;
- if (zone_reclaimable(zone)) {
- all_unreclaimable = false;
- break;
- }
+ if (!zone->all_unreclaimable)
+ return false;
}
- return all_unreclaimable;
+ return true;
}
/*
@@ -2108,6 +2102,14 @@
if (sc->nr_reclaimed)
return sc->nr_reclaimed;
+ /*
+ * As hibernation is going on, kswapd is frozen so that it can't mark
+ * the zone as all_unreclaimable. Thus we bypass the all_unreclaimable
+ * check.
+ */
+ if (oom_killer_disabled)
+ return 0;
+
/* top priority shrink_zones still had more to do? don't OOM, then */
if (scanning_global_lru(sc) && !all_unreclaimable(zonelist, sc))
return 1;
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 772b39b..897ea9e 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -321,9 +321,12 @@
/*
* The fetching of the stat_threshold is racy. We may apply
* a counter threshold to the wrong cpu if we get
- * rescheduled while executing here. However, the following
- * will apply the threshold again and therefore bring the
- * counter under the threshold.
+ * rescheduled while executing here. However, the next
+ * counter update will apply the threshold again and
+ * therefore bring the counter under the threshold again.
+ *
+ * Most of the time the thresholds are the same anyway
+ * for all cpus in a zone.
*/
t = this_cpu_read(pcp->stat_threshold);
@@ -945,7 +948,16 @@
"unevictable_pgs_cleared",
"unevictable_pgs_stranded",
"unevictable_pgs_mlockfreed",
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ "thp_fault_alloc",
+ "thp_fault_fallback",
+ "thp_collapse_alloc",
+ "thp_collapse_alloc_failed",
+ "thp_split",
#endif
+
+#endif /* CONFIG_VM_EVENT_COUNTERS */
};
static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
diff --git a/net/9p/client.c b/net/9p/client.c
index 48b8e08..7736774 100644
--- a/net/9p/client.c
+++ b/net/9p/client.c
@@ -929,15 +929,15 @@
}
EXPORT_SYMBOL(p9_client_attach);
-struct p9_fid *p9_client_walk(struct p9_fid *oldfid, int nwname, char **wnames,
- int clone)
+struct p9_fid *p9_client_walk(struct p9_fid *oldfid, uint16_t nwname,
+ char **wnames, int clone)
{
int err;
struct p9_client *clnt;
struct p9_fid *fid;
struct p9_qid *wqids;
struct p9_req_t *req;
- int16_t nwqids, count;
+ uint16_t nwqids, count;
err = 0;
wqids = NULL;
@@ -955,7 +955,7 @@
fid = oldfid;
- P9_DPRINTK(P9_DEBUG_9P, ">>> TWALK fids %d,%d nwname %d wname[0] %s\n",
+ P9_DPRINTK(P9_DEBUG_9P, ">>> TWALK fids %d,%d nwname %u wname[0] %s\n",
oldfid->fid, fid->fid, nwname, wnames ? wnames[0] : NULL);
req = p9_client_rpc(clnt, P9_TWALK, "ddT", oldfid->fid, fid->fid,
@@ -1220,27 +1220,6 @@
}
EXPORT_SYMBOL(p9_client_fsync);
-int p9_client_sync_fs(struct p9_fid *fid)
-{
- int err = 0;
- struct p9_req_t *req;
- struct p9_client *clnt;
-
- P9_DPRINTK(P9_DEBUG_9P, ">>> TSYNC_FS fid %d\n", fid->fid);
-
- clnt = fid->clnt;
- req = p9_client_rpc(clnt, P9_TSYNCFS, "d", fid->fid);
- if (IS_ERR(req)) {
- err = PTR_ERR(req);
- goto error;
- }
- P9_DPRINTK(P9_DEBUG_9P, "<<< RSYNCFS fid %d\n", fid->fid);
- p9_free_req(clnt, req);
-error:
- return err;
-}
-EXPORT_SYMBOL(p9_client_sync_fs);
-
int p9_client_clunk(struct p9_fid *fid)
{
int err;
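
The p9_client_walk change above widens nwname from int16_t to uint16_t: a
count parsed off the wire must not be signed, or a crafted value can appear
negative and slip past upper-bound checks. A small, hedged illustration of
the failure mode:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint16_t raw = 0xffff;          /* as decoded off the wire */
        int16_t  s = (int16_t)raw;      /* reinterpreted signed: -1 */
        uint16_t u = raw;               /* kept unsigned: 65535 */

        /* A "count < limit" sanity check behaves very differently: */
        printf("signed   %6d < 16 ? %s\n", s, s < 16 ? "passes" : "fails");
        printf("unsigned %6u < 16 ? %s\n", (unsigned)u,
               u < 16 ? "passes" : "fails");
        return 0;
}
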
diff --git a/net/9p/protocol.c b/net/9p/protocol.c
index 8a4084f..b58a501 100644
--- a/net/9p/protocol.c
+++ b/net/9p/protocol.c
@@ -265,7 +265,7 @@
}
break;
case 'T':{
- int16_t *nwname = va_arg(ap, int16_t *);
+ uint16_t *nwname = va_arg(ap, uint16_t *);
char ***wnames = va_arg(ap, char ***);
errcode = p9pdu_readf(pdu, proto_version,
@@ -468,7 +468,8 @@
case 'E':{
int32_t cnt = va_arg(ap, int32_t);
const char *k = va_arg(ap, const void *);
- const char *u = va_arg(ap, const void *);
+ const char __user *u = va_arg(ap,
+ const void __user *);
errcode = p9pdu_writef(pdu, proto_version, "d",
cnt);
if (!errcode && pdu_write_urw(pdu, k, u, cnt))
@@ -495,7 +496,7 @@
}
break;
case 'T':{
- int16_t nwname = va_arg(ap, int);
+ uint16_t nwname = va_arg(ap, int);
const char **wnames = va_arg(ap, const char **);
errcode = p9pdu_writef(pdu, proto_version, "w",
diff --git a/net/9p/trans_common.c b/net/9p/trans_common.c
index d47880e..e883172 100644
--- a/net/9p/trans_common.c
+++ b/net/9p/trans_common.c
@@ -66,7 +66,7 @@
uint32_t pdata_mapped_pages;
struct trans_rpage_info *rpinfo;
- *pdata_off = (size_t)req->tc->pubuf & (PAGE_SIZE-1);
+ *pdata_off = (__force size_t)req->tc->pubuf & (PAGE_SIZE-1);
if (*pdata_off)
first_page_bytes = min(((size_t)PAGE_SIZE - *pdata_off),
diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index e8f046b..244e707 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -326,8 +326,11 @@
outp = pack_sg_list_p(chan->sg, out, VIRTQUEUE_NUM,
pdata_off, rpinfo->rp_data, pdata_len);
} else {
- char *pbuf = req->tc->pubuf ? req->tc->pubuf :
- req->tc->pkbuf;
+ char *pbuf;
+ if (req->tc->pubuf)
+ pbuf = (__force char *) req->tc->pubuf;
+ else
+ pbuf = req->tc->pkbuf;
outp = pack_sg_list(chan->sg, out, VIRTQUEUE_NUM, pbuf,
req->tc->pbuf_size);
}
@@ -352,8 +355,12 @@
in = pack_sg_list_p(chan->sg, out+inp, VIRTQUEUE_NUM,
pdata_off, rpinfo->rp_data, pdata_len);
} else {
- char *pbuf = req->tc->pubuf ? req->tc->pubuf :
- req->tc->pkbuf;
+ char *pbuf;
+ if (req->tc->pubuf)
+ pbuf = (__force char *) req->tc->pubuf;
+ else
+ pbuf = req->tc->pkbuf;
+
in = pack_sg_list(chan->sg, out+inp, VIRTQUEUE_NUM,
pbuf, req->tc->pbuf_size);
}
diff --git a/net/bridge/br_netfilter.c b/net/bridge/br_netfilter.c
index 008ff6c..f3bc322 100644
--- a/net/bridge/br_netfilter.c
+++ b/net/bridge/br_netfilter.c
@@ -249,11 +249,9 @@
goto drop;
}
- /* Zero out the CB buffer if no options present */
- if (iph->ihl == 5) {
- memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ memset(IPCB(skb), 0, sizeof(struct inet_skb_parm));
+ if (iph->ihl == 5)
return 0;
- }
opt->optlen = iph->ihl*4 - sizeof(struct iphdr);
if (ip_options_compile(dev_net(dev), opt, skb))
diff --git a/net/caif/cfdgml.c b/net/caif/cfdgml.c
index 27dab26..054fdb5 100644
--- a/net/caif/cfdgml.c
+++ b/net/caif/cfdgml.c
@@ -13,6 +13,7 @@
#include <net/caif/cfsrvl.h>
#include <net/caif/cfpkt.h>
+
#define container_obj(layr) ((struct cfsrvl *) layr)
#define DGM_CMD_BIT 0x80
@@ -83,6 +84,7 @@
static int cfdgml_transmit(struct cflayer *layr, struct cfpkt *pkt)
{
+ u8 packet_type;
u32 zero = 0;
struct caif_payload_info *info;
struct cfsrvl *service = container_obj(layr);
@@ -94,7 +96,9 @@
if (cfpkt_getlen(pkt) > DGM_MTU)
return -EMSGSIZE;
- cfpkt_add_head(pkt, &zero, 4);
+ cfpkt_add_head(pkt, &zero, 3);
+ packet_type = 0x08; /* B9 set - UNCLASSIFIED */
+ cfpkt_add_head(pkt, &packet_type, 1);
/* Add info for MUX-layer to route the packet out. */
info = cfpkt_info(pkt);
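
The hunk above splits the 4-byte datagram header so that its first byte carries a packet-type value (0x08) instead of being zero. The same prepend-in-two-steps shape on a plain byte buffer (sketch; add_head() is a hypothetical stand-in for cfpkt_add_head()):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helper: prepend len bytes to a buffer whose
     * payload starts at buf + *head and grows downwards. */
    static void add_head(uint8_t *buf, size_t *head,
                         const void *data, size_t len)
    {
        *head -= len;
        memcpy(buf + *head, data, len);
    }

    static void build_dgm_header(uint8_t *buf, size_t *head)
    {
        uint32_t zero = 0;
        uint8_t packet_type = 0x08;    /* as in the hunk above */

        add_head(buf, head, &zero, 3);         /* three zero bytes */
        add_head(buf, head, &packet_type, 1);  /* type byte first */
    }
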
diff --git a/net/caif/cfmuxl.c b/net/caif/cfmuxl.c
index 46f34b2..24f1ffa 100644
--- a/net/caif/cfmuxl.c
+++ b/net/caif/cfmuxl.c
@@ -244,9 +244,9 @@
int phyid)
{
struct cfmuxl *muxl = container_obj(layr);
- struct list_head *node;
+ struct list_head *node, *next;
struct cflayer *layer;
- list_for_each(node, &muxl->srvl_list) {
+ list_for_each_safe(node, next, &muxl->srvl_list) {
layer = list_entry(node, struct cflayer, node);
if (cfsrvl_phyid_match(layer, phyid))
layer->ctrlcmd(layer, ctrl, phyid);
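
list_for_each_safe() caches the next pointer before the loop body runs, so the ctrlcmd() callback may unlink the current layer without breaking the walk. The same discipline on a minimal userspace list (sketch, not the kernel API):

    #include <stdlib.h>

    struct node {
        struct node *next;
        int phyid;
    };

    /* Remove every node matching phyid. Saving next before the body
     * runs is exactly what list_for_each_safe() does. */
    static void remove_matching(struct node **head, int phyid)
    {
        struct node **prev = head;
        struct node *n, *next;

        for (n = *head; n; n = next) {
            next = n->next;    /* still valid after free(n) */
            if (n->phyid == phyid) {
                *prev = next;
                free(n);
            } else {
                prev = &n->next;
            }
        }
    }
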
diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
index 50af027..5a80f41 100644
--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -579,9 +579,15 @@
list_for_each_entry_safe(req, nreq, &osd->o_linger_requests,
r_linger_osd) {
- __unregister_linger_request(osdc, req);
+ /*
+ * Reregister the request before unregistering the linger
+ * entry so that r_osd is preserved.
+ */
+ BUG_ON(!list_empty(&req->r_req_lru_item));
__register_request(osdc, req);
- list_move(&req->r_req_lru_item, &osdc->req_unsent);
+ list_add(&req->r_req_lru_item, &osdc->req_unsent);
+ list_add(&req->r_osd_item, &req->r_osd->o_requests);
+ __unregister_linger_request(osdc, req);
dout("requeued lingering %p tid %llu osd%d\n", req, req->r_tid,
osd->o_osd);
}
@@ -798,7 +804,7 @@
req->r_request->hdr.tid = cpu_to_le64(req->r_tid);
INIT_LIST_HEAD(&req->r_req_lru_item);
- dout("register_request %p tid %lld\n", req, req->r_tid);
+ dout("__register_request %p tid %lld\n", req, req->r_tid);
__insert_request(osdc, req);
ceph_osdc_get_request(req);
osdc->num_requests++;
diff --git a/net/core/dev.c b/net/core/dev.c
index 956d3b0..c2ac599 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5203,11 +5203,15 @@
}
/* TSO requires that SG is present as well. */
- if ((features & NETIF_F_TSO) && !(features & NETIF_F_SG)) {
- netdev_info(dev, "Dropping NETIF_F_TSO since no SG feature.\n");
- features &= ~NETIF_F_TSO;
+ if ((features & NETIF_F_ALL_TSO) && !(features & NETIF_F_SG)) {
+ netdev_info(dev, "Dropping TSO features since no SG feature.\n");
+ features &= ~NETIF_F_ALL_TSO;
}
+ /* TSO ECN requires that TSO is present as well. */
+ if ((features & NETIF_F_ALL_TSO) == NETIF_F_TSO_ECN)
+ features &= ~NETIF_F_TSO_ECN;
+
/* Software GSO depends on SG. */
if ((features & NETIF_F_GSO) && !(features & NETIF_F_SG)) {
netdev_info(dev, "Dropping NETIF_F_GSO since no SG feature.\n");
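
The check is widened from the single NETIF_F_TSO bit to the whole NETIF_F_ALL_TSO group, and a second rule drops TSO_ECN when it is the only TSO bit left. The dependency logic in isolation (flag values are made up; only the rules mirror the hunk):

    #include <stdint.h>

    #define F_SG      (1u << 0)
    #define F_TSO     (1u << 1)
    #define F_TSO6    (1u << 2)
    #define F_TSO_ECN (1u << 3)
    #define F_ALL_TSO (F_TSO | F_TSO6 | F_TSO_ECN)

    static uint32_t fix_features(uint32_t features)
    {
        /* Every TSO variant depends on scatter-gather. */
        if ((features & F_ALL_TSO) && !(features & F_SG))
            features &= ~F_ALL_TSO;

        /* TSO_ECN alone, without a base TSO flavour, is useless. */
        if ((features & F_ALL_TSO) == F_TSO_ECN)
            features &= ~F_TSO_ECN;

        return features;
    }
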
diff --git a/net/ieee802154/Makefile b/net/ieee802154/Makefile
index ce2d335..5761185 100644
--- a/net/ieee802154/Makefile
+++ b/net/ieee802154/Makefile
@@ -1,5 +1,3 @@
obj-$(CONFIG_IEEE802154) += ieee802154.o af_802154.o
ieee802154-y := netlink.o nl-mac.o nl-phy.o nl_policy.o wpan-class.o
af_802154-y := af_ieee802154.o raw.o dgram.o
-
-ccflags-y += -Wall -DDEBUG
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 6c0b7f4..38f23e7 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -73,7 +73,7 @@
!sk2->sk_bound_dev_if ||
sk->sk_bound_dev_if == sk2->sk_bound_dev_if)) {
if (!reuse || !sk2->sk_reuse ||
- ((1 << sk2->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))) {
+ sk2->sk_state == TCP_LISTEN) {
const __be32 sk2_rcv_saddr = sk_rcv_saddr(sk2);
if (!sk2_rcv_saddr || !sk_rcv_saddr(sk) ||
sk2_rcv_saddr == sk_rcv_saddr(sk))
@@ -122,8 +122,7 @@
(tb->num_owners < smallest_size || smallest_size == -1)) {
smallest_size = tb->num_owners;
smallest_rover = rover;
- if (atomic_read(&hashinfo->bsockets) > (high - low) + 1 &&
- !inet_csk(sk)->icsk_af_ops->bind_conflict(sk, tb)) {
+ if (atomic_read(&hashinfo->bsockets) > (high - low) + 1) {
spin_unlock(&head->lock);
snum = smallest_rover;
goto have_snum;
diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
index dd1b20e..9df4e63 100644
--- a/net/ipv4/inetpeer.c
+++ b/net/ipv4/inetpeer.c
@@ -354,7 +354,8 @@
}
/* May be called with local BH enabled. */
-static void unlink_from_pool(struct inet_peer *p, struct inet_peer_base *base)
+static void unlink_from_pool(struct inet_peer *p, struct inet_peer_base *base,
+ struct inet_peer __rcu **stack[PEER_MAXDEPTH])
{
int do_free;
@@ -368,7 +369,6 @@
* We use refcnt=-1 to alert lockless readers this entry is deleted.
*/
if (atomic_cmpxchg(&p->refcnt, 1, -1) == 1) {
- struct inet_peer __rcu **stack[PEER_MAXDEPTH];
struct inet_peer __rcu ***stackptr, ***delp;
if (lookup(&p->daddr, stack, base) != p)
BUG();
@@ -422,7 +422,7 @@
}
/* May be called with local BH enabled. */
-static int cleanup_once(unsigned long ttl)
+static int cleanup_once(unsigned long ttl, struct inet_peer __rcu **stack[PEER_MAXDEPTH])
{
struct inet_peer *p = NULL;
@@ -454,7 +454,7 @@
* happen because of entry limits in route cache. */
return -1;
- unlink_from_pool(p, peer_to_base(p));
+ unlink_from_pool(p, peer_to_base(p), stack);
return 0;
}
@@ -524,7 +524,7 @@
if (base->total >= inet_peer_threshold)
/* Remove one less-recently-used entry. */
- cleanup_once(0);
+ cleanup_once(0, stack);
return p;
}
@@ -540,6 +540,7 @@
{
unsigned long now = jiffies;
int ttl, total;
+ struct inet_peer __rcu **stack[PEER_MAXDEPTH];
total = compute_total();
if (total >= inet_peer_threshold)
@@ -548,7 +549,7 @@
ttl = inet_peer_maxttl
- (inet_peer_maxttl - inet_peer_minttl) / HZ *
total / inet_peer_threshold * HZ;
- while (!cleanup_once(ttl)) {
+ while (!cleanup_once(ttl, stack)) {
if (jiffies != now)
break;
}
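
Rather than unlink_from_pool() and cleanup_once() each carrying their own PEER_MAXDEPTH-sized array, the array now lives once in the outermost callers and is lent down the call chain, trimming kernel stack usage on this path. The shape of the refactor (sketch with a placeholder depth):

    #define MAXDEPTH 40    /* placeholder, standing in for PEER_MAXDEPTH */

    /* Before the change, a MAXDEPTH-sized pointer array lived in
     * this frame; now the caller provides the scratch space. */
    static int cleanup_one(unsigned long ttl, void *stack[MAXDEPTH])
    {
        (void)ttl;
        (void)stack;
        return -1;    /* placeholder: nothing left to clean */
    }

    void garbage_collect(unsigned long ttl)
    {
        void *stack[MAXDEPTH];    /* allocated once, reused per call */

        while (!cleanup_one(ttl, stack))
            ;
    }
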
diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
index 28a736f..2391b24 100644
--- a/net/ipv4/ip_options.c
+++ b/net/ipv4/ip_options.c
@@ -329,7 +329,7 @@
pp_ptr = optptr + 2;
goto error;
}
- if (skb) {
+ if (rt) {
memcpy(&optptr[optptr[2]-1], &rt->rt_spec_dst, 4);
opt->is_changed = 1;
}
@@ -371,7 +371,7 @@
goto error;
}
opt->ts = optptr - iph;
- if (skb) {
+ if (rt) {
memcpy(&optptr[optptr[2]-1], &rt->rt_spec_dst, 4);
timeptr = (__be32*)&optptr[optptr[2]+3];
}
@@ -603,7 +603,7 @@
unsigned long orefdst;
int err;
- if (!opt->srr)
+ if (!opt->srr || !rt)
return 0;
if (skb->pkt_type != PACKET_HOST)
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 1a45665..321e6e8 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -311,7 +311,6 @@
.mode = 0644,
.proc_handler = proc_do_large_bitmap,
},
-#ifdef CONFIG_IP_MULTICAST
{
.procname = "igmp_max_memberships",
.data = &sysctl_igmp_max_memberships,
@@ -319,8 +318,6 @@
.mode = 0644,
.proc_handler = proc_dointvec
},
-
-#endif
{
.procname = "igmp_max_msf",
.data = &sysctl_igmp_max_msf,
diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
index 1660546..f2c5b0f 100644
--- a/net/ipv6/inet6_connection_sock.c
+++ b/net/ipv6/inet6_connection_sock.c
@@ -44,7 +44,7 @@
!sk2->sk_bound_dev_if ||
sk->sk_bound_dev_if == sk2->sk_bound_dev_if) &&
(!sk->sk_reuse || !sk2->sk_reuse ||
- ((1 << sk2->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))) &&
+ sk2->sk_state == TCP_LISTEN) &&
ipv6_rcv_saddr_equal(sk, sk2))
break;
}
diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c
index c9890e2..cc61697 100644
--- a/net/irda/af_irda.c
+++ b/net/irda/af_irda.c
@@ -1297,8 +1297,7 @@
/* Note : socket.c set MSG_EOR on SEQPACKET sockets */
if (msg->msg_flags & ~(MSG_DONTWAIT | MSG_EOR | MSG_CMSG_COMPAT |
MSG_NOSIGNAL)) {
- err = -EINVAL;
- goto out;
+ return -EINVAL;
}
lock_sock(sk);
diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
index 058f1e9..9032421 100644
--- a/net/llc/llc_input.c
+++ b/net/llc/llc_input.c
@@ -121,8 +121,7 @@
s32 data_size = ntohs(pdulen) - llc_len;
if (data_size < 0 ||
- ((skb_tail_pointer(skb) -
- (u8 *)pdu) - llc_len) < data_size)
+ !pskb_may_pull(skb, data_size))
return 0;
if (unlikely(pskb_trim_rcsum(skb, data_size)))
return 0;
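
pskb_may_pull() both validates the claimed length against the data the skb actually holds and pulls non-linear data into the header, replacing the open-coded tail-pointer arithmetic. The underlying rule, applied to a plain buffer (sketch):

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    /* A length field from the wire is untrusted input: check it
     * against the bytes actually received before honouring it. */
    static int check_pdu_len(const uint8_t *buf, size_t buflen,
                             size_t llc_len)
    {
        uint16_t pdulen;

        if (buflen < llc_len + sizeof(pdulen))
            return -1;
        memcpy(&pdulen, buf + llc_len, sizeof(pdulen));

        if ((size_t)ntohs(pdulen) > buflen)    /* claims too much */
            return -1;
        return 0;
    }
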
diff --git a/net/netfilter/ipset/ip_set_bitmap_ipmac.c b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
index 00a3324..a274300 100644
--- a/net/netfilter/ipset/ip_set_bitmap_ipmac.c
+++ b/net/netfilter/ipset/ip_set_bitmap_ipmac.c
@@ -343,6 +343,10 @@
ipset_adtfn adtfn = set->variant->adt[adt];
struct ipmac data;
+ /* MAC can be src only */
+ if (!(flags & IPSET_DIM_TWO_SRC))
+ return 0;
+
data.id = ntohl(ip4addr(skb, flags & IPSET_DIM_ONE_SRC));
if (data.id < map->first_ip || data.id > map->last_ip)
return -IPSET_ERR_BITMAP_RANGE;
diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
index 9152e69..72d1ac6 100644
--- a/net/netfilter/ipset/ip_set_core.c
+++ b/net/netfilter/ipset/ip_set_core.c
@@ -1022,8 +1022,9 @@
if (cb->args[1] >= ip_set_max)
goto out;
- pr_debug("args[0]: %ld args[1]: %ld\n", cb->args[0], cb->args[1]);
max = cb->args[0] == DUMP_ONE ? cb->args[1] + 1 : ip_set_max;
+dump_last:
+ pr_debug("args[0]: %ld args[1]: %ld\n", cb->args[0], cb->args[1]);
for (; cb->args[1] < max; cb->args[1]++) {
index = (ip_set_id_t) cb->args[1];
set = ip_set_list[index];
@@ -1038,8 +1039,8 @@
* so that lists (unions of sets) are dumped last.
*/
if (cb->args[0] != DUMP_ONE &&
- !((cb->args[0] == DUMP_ALL) ^
- (set->type->features & IPSET_DUMP_LAST)))
+ ((cb->args[0] == DUMP_ALL) ==
+ !!(set->type->features & IPSET_DUMP_LAST)))
continue;
pr_debug("List set: %s\n", set->name);
if (!cb->args[2]) {
@@ -1083,6 +1084,12 @@
goto release_refcount;
}
}
+ /* If we dump all sets, continue with dumping last ones */
+ if (cb->args[0] == DUMP_ALL) {
+ cb->args[0] = DUMP_LAST;
+ cb->args[1] = 0;
+ goto dump_last;
+ }
goto out;
nla_put_failure:
@@ -1093,11 +1100,6 @@
pr_debug("release set %s\n", ip_set_list[index]->name);
ip_set_put_byindex(index);
}
-
- /* If we dump all sets, continue with dumping last ones */
- if (cb->args[0] == DUMP_ALL && cb->args[1] >= max && !cb->args[2])
- cb->args[0] = DUMP_LAST;
-
out:
if (nlh) {
nlmsg_end(skb, nlh);
diff --git a/net/netfilter/xt_set.c b/net/netfilter/xt_set.c
index 061d48c..b3babae 100644
--- a/net/netfilter/xt_set.c
+++ b/net/netfilter/xt_set.c
@@ -81,6 +81,7 @@
if (info->match_set.u.flags[IPSET_DIM_MAX-1] != 0) {
pr_warning("Protocol error: set match dimension "
"is over the limit!\n");
+ ip_set_nfnl_put(info->match_set.index);
return -ERANGE;
}
@@ -135,6 +136,8 @@
if (index == IPSET_INVALID_ID) {
pr_warning("Cannot find del_set index %u as target\n",
info->del_set.index);
+ if (info->add_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->add_set.index);
return -ENOENT;
}
}
@@ -142,6 +145,10 @@
info->del_set.u.flags[IPSET_DIM_MAX-1] != 0) {
pr_warning("Protocol error: SET target dimension "
"is over the limit!\n");
+ if (info->add_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->add_set.index);
+ if (info->del_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->del_set.index);
return -ERANGE;
}
@@ -192,6 +199,7 @@
if (info->match_set.dim > IPSET_DIM_MAX) {
pr_warning("Protocol error: set match dimension "
"is over the limit!\n");
+ ip_set_nfnl_put(info->match_set.index);
return -ERANGE;
}
@@ -219,7 +227,7 @@
if (info->del_set.index != IPSET_INVALID_ID)
ip_set_del(info->del_set.index,
skb, par->family,
- info->add_set.dim,
+ info->del_set.dim,
info->del_set.flags);
return XT_CONTINUE;
@@ -245,13 +253,19 @@
if (index == IPSET_INVALID_ID) {
pr_warning("Cannot find del_set index %u as target\n",
info->del_set.index);
+ if (info->add_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->add_set.index);
return -ENOENT;
}
}
if (info->add_set.dim > IPSET_DIM_MAX ||
- info->del_set.flags > IPSET_DIM_MAX) {
+ info->del_set.dim > IPSET_DIM_MAX) {
pr_warning("Protocol error: SET target dimension "
"is over the limit!\n");
+ if (info->add_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->add_set.index);
+ if (info->del_set.index != IPSET_INVALID_ID)
+ ip_set_nfnl_put(info->del_set.index);
return -ERANGE;
}
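
Each early return in the checkentry paths now drops whatever set references were already taken, so a malformed ruleset no longer leaks them. The general acquire/undo pattern, with stub helpers standing in for ip_set_nfnl_get/put:

    /* Stubs standing in for the real get/put of set references. */
    static int  res_get(int id) { (void)id; return 0; }
    static void res_put(int id) { (void)id; }

    static int check_entry(int add_id, int del_id, int dim, int dim_max)
    {
        int got_add = 0, got_del = 0;

        if (add_id >= 0) {
            if (res_get(add_id))
                return -1;    /* nothing to undo yet */
            got_add = 1;
        }
        if (del_id >= 0) {
            if (res_get(del_id))
                goto err;     /* undo the add ref */
            got_del = 1;
        }
        if (dim > dim_max)
            goto err;         /* undo both refs */
        return 0;
    err:
        if (got_del)
            res_put(del_id);
        if (got_add)
            res_put(add_id);
        return -1;
    }
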
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index 0698cad..1a21c57 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -569,6 +569,8 @@
sctp_assoc_set_primary(asoc, transport);
if (asoc->peer.active_path == peer)
asoc->peer.active_path = transport;
+ if (asoc->peer.retran_path == peer)
+ asoc->peer.retran_path = transport;
if (asoc->peer.last_data_from == peer)
asoc->peer.last_data_from = transport;
@@ -1323,6 +1325,8 @@
if (t)
asoc->peer.retran_path = t;
+ else
+ t = asoc->peer.retran_path;
SCTP_DEBUG_PRINTK_IPADDR("sctp_assoc_update_retran_path:association"
" %p addr: ",
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 17d1dcb..4165382 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -163,6 +163,7 @@
struct perf_event_attr *attr = &evsel->attr;
int track = !evsel->idx; /* only the first counter needs these */
+ attr->inherit = !no_inherit;
attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
PERF_FORMAT_TOTAL_TIME_RUNNING |
PERF_FORMAT_ID;
@@ -251,6 +252,9 @@
{
struct perf_evsel *pos;
+ if (evlist->cpus->map[0] < 0)
+ no_inherit = true;
+
list_for_each_entry(pos, &evlist->entries, node) {
struct perf_event_attr *attr = &pos->attr;
/*
@@ -271,8 +275,7 @@
retry_sample_id:
attr->sample_id_all = sample_id_all_avail ? 1 : 0;
try_again:
- if (perf_evsel__open(pos, evlist->cpus, evlist->threads, group,
- !no_inherit) < 0) {
+ if (perf_evsel__open(pos, evlist->cpus, evlist->threads, group) < 0) {
int err = errno;
if (err == EPERM || err == EACCES) {
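
The inherit decision now travels inside perf_event_attr (attr->inherit is set once, up front) instead of being threaded as an extra bool through every perf_evsel__open*() signature. A sketch of that kind of API change (types simplified, names illustrative):

    #include <stdbool.h>

    struct attr {
        bool inherit;
        /* ...other event configuration... */
    };

    struct evsel {
        struct attr attr;
    };

    /* After the change: open() reads its options from the struct the
     * caller already fills in, not from per-option parameters. */
    static int evsel_open(const struct evsel *ev)
    {
        (void)ev;    /* would consult ev->attr.inherit here */
        return 0;
    }

    static int record_open(struct evsel *ev, bool no_inherit)
    {
        ev->attr.inherit = !no_inherit;    /* set once, up front */
        return evsel_open(ev);
    }
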
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index e2109f9..03f0e45 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -167,16 +167,17 @@
attr->read_format = PERF_FORMAT_TOTAL_TIME_ENABLED |
PERF_FORMAT_TOTAL_TIME_RUNNING;
- if (system_wide)
- return perf_evsel__open_per_cpu(evsel, evsel_list->cpus, false, false);
-
attr->inherit = !no_inherit;
+
+ if (system_wide)
+ return perf_evsel__open_per_cpu(evsel, evsel_list->cpus, false);
+
if (target_pid == -1 && target_tid == -1) {
attr->disabled = 1;
attr->enable_on_exec = 1;
}
- return perf_evsel__open_per_thread(evsel, evsel_list->threads, false, false);
+ return perf_evsel__open_per_thread(evsel, evsel_list->threads, false);
}
/*
diff --git a/tools/perf/builtin-test.c b/tools/perf/builtin-test.c
index 1b2106c..11e3c84 100644
--- a/tools/perf/builtin-test.c
+++ b/tools/perf/builtin-test.c
@@ -290,7 +290,7 @@
goto out_thread_map_delete;
}
- if (perf_evsel__open_per_thread(evsel, threads, false, false) < 0) {
+ if (perf_evsel__open_per_thread(evsel, threads, false) < 0) {
pr_debug("failed to open counter: %s, "
"tweak /proc/sys/kernel/perf_event_paranoid?\n",
strerror(errno));
@@ -303,7 +303,7 @@
}
if (perf_evsel__read_on_cpu(evsel, 0, 0) < 0) {
- pr_debug("perf_evsel__open_read_on_cpu\n");
+ pr_debug("perf_evsel__read_on_cpu\n");
goto out_close_fd;
}
@@ -365,7 +365,7 @@
goto out_thread_map_delete;
}
- if (perf_evsel__open(evsel, cpus, threads, false, false) < 0) {
+ if (perf_evsel__open(evsel, cpus, threads, false) < 0) {
pr_debug("failed to open counter: %s, "
"tweak /proc/sys/kernel/perf_event_paranoid?\n",
strerror(errno));
@@ -418,7 +418,7 @@
continue;
if (perf_evsel__read_on_cpu(evsel, cpu, 0) < 0) {
- pr_debug("perf_evsel__open_read_on_cpu\n");
+ pr_debug("perf_evsel__read_on_cpu\n");
err = -1;
break;
}
@@ -529,7 +529,7 @@
perf_evlist__add(evlist, evsels[i]);
- if (perf_evsel__open(evsels[i], cpus, threads, false, false) < 0) {
+ if (perf_evsel__open(evsels[i], cpus, threads, false) < 0) {
pr_debug("failed to open counter: %s, "
"tweak /proc/sys/kernel/perf_event_paranoid?\n",
strerror(errno));
diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
index fc1273e..7e3d6e3 100644
--- a/tools/perf/builtin-top.c
+++ b/tools/perf/builtin-top.c
@@ -845,9 +845,10 @@
}
attr->mmap = 1;
+ attr->inherit = inherit;
try_again:
if (perf_evsel__open(counter, top.evlist->cpus,
- top.evlist->threads, group, inherit) < 0) {
+ top.evlist->threads, group) < 0) {
int err = errno;
if (err == EPERM || err == EACCES) {
diff --git a/tools/perf/util/cgroup.c b/tools/perf/util/cgroup.c
index 9fea755..96bee5c 100644
--- a/tools/perf/util/cgroup.c
+++ b/tools/perf/util/cgroup.c
@@ -13,7 +13,7 @@
{
FILE *fp;
char mountpoint[MAX_PATH+1], tokens[MAX_PATH+1], type[MAX_PATH+1];
- char *token, *saved_ptr;
+ char *token, *saved_ptr = NULL;
int found = 0;
fp = fopen("/proc/mounts", "r");
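
Initializing saved_ptr silences a may-be-used-uninitialized warning; strtok_r() only writes through it on the first call and reads it back on each later one. Typical use against a /proc/mounts-style line (plain C):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[] = "cgroup /sys/fs/cgroup/cpu cgroup rw,cpu 0 0";
        char *token, *saved_ptr = NULL;

        /* First call passes the string; later calls pass NULL and
         * resume from saved_ptr. */
        for (token = strtok_r(line, " ", &saved_ptr);
             token;
             token = strtok_r(NULL, " ", &saved_ptr))
            printf("token: %s\n", token);
        return 0;
    }
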
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index d852cef..45da8d1 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -12,6 +12,7 @@
#include "evlist.h"
#include "evsel.h"
#include "util.h"
+#include "debug.h"
#include <sys/mman.h>
@@ -250,15 +251,19 @@
return evlist->mmap != NULL ? 0 : -ENOMEM;
}
-static int __perf_evlist__mmap(struct perf_evlist *evlist, int cpu, int prot,
- int mask, int fd)
+static int __perf_evlist__mmap(struct perf_evlist *evlist, struct perf_evsel *evsel,
+ int cpu, int prot, int mask, int fd)
{
evlist->mmap[cpu].prev = 0;
evlist->mmap[cpu].mask = mask;
evlist->mmap[cpu].base = mmap(NULL, evlist->mmap_len, prot,
MAP_SHARED, fd, 0);
- if (evlist->mmap[cpu].base == MAP_FAILED)
+ if (evlist->mmap[cpu].base == MAP_FAILED) {
+ if (evlist->cpus->map[cpu] == -1 && evsel->attr.inherit)
+ ui__warning("Inherit is not allowed on per-task "
+ "events using mmap.\n");
return -1;
+ }
perf_evlist__add_pollfd(evlist, fd);
return 0;
@@ -312,7 +317,8 @@
if (ioctl(fd, PERF_EVENT_IOC_SET_OUTPUT,
FD(first_evsel, cpu, 0)) != 0)
goto out_unmap;
- } else if (__perf_evlist__mmap(evlist, cpu, prot, mask, fd) < 0)
+ } else if (__perf_evlist__mmap(evlist, evsel, cpu,
+ prot, mask, fd) < 0)
goto out_unmap;
if ((evsel->attr.read_format & PERF_FORMAT_ID) &&
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 662596a..d6fd59b 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -175,7 +175,7 @@
}
static int __perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
- struct thread_map *threads, bool group, bool inherit)
+ struct thread_map *threads, bool group)
{
int cpu, thread;
unsigned long flags = 0;
@@ -192,19 +192,6 @@
for (cpu = 0; cpu < cpus->nr; cpu++) {
int group_fd = -1;
- /*
- * Don't allow mmap() of inherited per-task counters. This
- * would create a performance issue due to all children writing
- * to the same buffer.
- *
- * FIXME:
- * Proper fix is not to pass 'inherit' to perf_evsel__open*,
- * but a 'flags' parameter, with 'group' folded there as well,
- * then introduce a PERF_O_{MMAP,GROUP,INHERIT} enum, and if
- * O_MMAP is set, emit a warning if cpu < 0 and O_INHERIT is
- * set. Lets go for the minimal fix first tho.
- */
- evsel->attr.inherit = (cpus->map[cpu] >= 0) && inherit;
for (thread = 0; thread < threads->nr; thread++) {
@@ -253,7 +240,7 @@
};
int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
- struct thread_map *threads, bool group, bool inherit)
+ struct thread_map *threads, bool group)
{
if (cpus == NULL) {
/* Work around old compiler warnings about strict aliasing */
@@ -263,19 +250,19 @@
if (threads == NULL)
threads = &empty_thread_map.map;
- return __perf_evsel__open(evsel, cpus, threads, group, inherit);
+ return __perf_evsel__open(evsel, cpus, threads, group);
}
int perf_evsel__open_per_cpu(struct perf_evsel *evsel,
- struct cpu_map *cpus, bool group, bool inherit)
+ struct cpu_map *cpus, bool group)
{
- return __perf_evsel__open(evsel, cpus, &empty_thread_map.map, group, inherit);
+ return __perf_evsel__open(evsel, cpus, &empty_thread_map.map, group);
}
int perf_evsel__open_per_thread(struct perf_evsel *evsel,
- struct thread_map *threads, bool group, bool inherit)
+ struct thread_map *threads, bool group)
{
- return __perf_evsel__open(evsel, &empty_cpu_map.map, threads, group, inherit);
+ return __perf_evsel__open(evsel, &empty_cpu_map.map, threads, group);
}
static int perf_event__parse_id_sample(const union perf_event *event, u64 type,
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 6710ab5..f79bb2c 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -81,11 +81,11 @@
void perf_evsel__close_fd(struct perf_evsel *evsel, int ncpus, int nthreads);
int perf_evsel__open_per_cpu(struct perf_evsel *evsel,
- struct cpu_map *cpus, bool group, bool inherit);
+ struct cpu_map *cpus, bool group);
int perf_evsel__open_per_thread(struct perf_evsel *evsel,
- struct thread_map *threads, bool group, bool inherit);
+ struct thread_map *threads, bool group);
int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
- struct thread_map *threads, bool group, bool inherit);
+ struct thread_map *threads, bool group);
#define perf_evsel__match(evsel, t, c) \
(evsel->attr.type == PERF_TYPE_##t && \
diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
index a9f2d7e..f5e3845 100644
--- a/tools/perf/util/python.c
+++ b/tools/perf/util/python.c
@@ -498,11 +498,11 @@
struct cpu_map *cpus = NULL;
struct thread_map *threads = NULL;
PyObject *pcpus = NULL, *pthreads = NULL;
- int group = 0, overwrite = 0;
- static char *kwlist[] = {"cpus", "threads", "group", "overwrite", NULL, NULL};
+ int group = 0, inherit = 0;
+ static char *kwlist[] = {"cpus", "threads", "group", "inherit", NULL, NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|OOii", kwlist,
- &pcpus, &pthreads, &group, &overwrite))
+ &pcpus, &pthreads, &group, &inherit))
return NULL;
if (pthreads != NULL)
@@ -511,7 +511,8 @@
if (pcpus != NULL)
cpus = ((struct pyrf_cpu_map *)pcpus)->cpus;
- if (perf_evsel__open(evsel, cpus, threads, group, overwrite) < 0) {
+ evsel->attr.inherit = inherit;
+ if (perf_evsel__open(evsel, cpus, threads, group) < 0) {
PyErr_SetFromErrno(PyExc_OSError);
return NULL;
}
diff --git a/tools/perf/util/ui/browsers/annotate.c b/tools/perf/util/ui/browsers/annotate.c
index 8c17a87..15633d6 100644
--- a/tools/perf/util/ui/browsers/annotate.c
+++ b/tools/perf/util/ui/browsers/annotate.c
@@ -256,10 +256,9 @@
int refresh)
{
struct objdump_line *pos, *n;
- struct annotation *notes = symbol__annotation(sym);
+ struct annotation *notes;
struct annotate_browser browser = {
.b = {
- .entries = &notes->src->source,
.refresh = ui_browser__list_head_refresh,
.seek = ui_browser__list_head_seek,
.write = annotate_browser__write,
@@ -281,6 +280,8 @@
ui_helpline__push("Press <- or ESC to exit");
+ notes = symbol__annotation(sym);
+
list_for_each_entry(pos, &notes->src->source, node) {
struct objdump_line_rb_node *rbpos;
size_t line_len = strlen(pos->line);
@@ -291,6 +292,7 @@
rbpos->idx = browser.b.nr_entries++;
}
+ browser.b.entries = &notes->src->source;
browser.b.width += 18; /* Percentage */
ret = annotate_browser__run(&browser, evidx, refresh);
list_for_each_entry_safe(pos, n, &notes->src->source, node) {
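
The old code dereferenced notes->src inside the struct initializer; the reordering defers both the symbol__annotation() lookup and the entries assignment until the source list has been set up, so no state is read before it is valid. The same defer-the-dereference shape in miniature (sketch):

    #include <stddef.h>

    struct list { struct list *next; };
    struct source { struct list head; };
    struct notes { struct source *src; };    /* may be NULL early on */
    struct browser { struct list *entries; };

    /* Don't read notes->src in an initializer; fill in entries only
     * once src is known to be valid. */
    static int setup(struct notes *notes, struct browser *b)
    {
        b->entries = NULL;

        if (!notes->src)
            return -1;
        b->entries = &notes->src->head;    /* deferred dereference */
        return 0;
    }
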
diff --git a/tools/perf/util/ui/browsers/hists.c b/tools/perf/util/ui/browsers/hists.c
index 798efdc..5d767c6 100644
--- a/tools/perf/util/ui/browsers/hists.c
+++ b/tools/perf/util/ui/browsers/hists.c
@@ -851,7 +851,7 @@
goto out_free_stack;
case 'a':
if (browser->selection == NULL ||
- browser->selection->map == NULL ||
+ browser->selection->sym == NULL ||
browser->selection->map->dso->annotate_warned)
continue;
goto do_annotate;