Merge tag 'char-misc-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc driver updates from Greg KH:
 "Here's the big char and misc driver update for 4.7-rc1.

  Lots of different tiny driver subsystems have updates here with new
  drivers and functionality.  Details in the shortlog.

  All have been in linux-next with no reported issues for a while"

* tag 'char-misc-4.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (125 commits)
  mcb: Delete num_cells variable which is not required
  mcb: Fixed bar number assignment for the gdd
  mcb: Replace ioremap and request_region with the devm version
  mcb: Implement bus->dev.release callback
  mcb: export bus information via sysfs
  mcb: Correctly initialize the bus's device
  mei: bus: call mei_cl_read_start under device lock
  coresight: etb10: adjust read pointer only when needed
  coresight: configuring ETF in FIFO mode when acting as link
  coresight: tmc: implementing TMC-ETF AUX space API
  coresight: moving struct cs_buffers to header file
  coresight: tmc: keep track of memory width
  coresight: tmc: make sysFS and Perf mode mutually exclusive
  coresight: tmc: dump system memory content only when needed
  coresight: tmc: adding mode of operation for link/sinks
  coresight: tmc: getting rid of multiple read access
  coresight: tmc: allocating memory when needed
  coresight: tmc: making prepare/unprepare functions generic
  coresight: tmc: splitting driver in ETB/ETF and ETR components
  coresight: tmc: cleaning up header file
  ...
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10 b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
index 4b8d6ec..b5f5260 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
@@ -6,13 +6,6 @@
 		source for a single sink.
 		ex: echo 1 > /sys/bus/coresight/devices/20010000.etb/enable_sink
 
-What:		/sys/bus/coresight/devices/<memory_map>.etb/status
-Date:		November 2014
-KernelVersion:	3.19
-Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
-Description:	(R) List various control and status registers.  The specific
-		layout and content is driver specific.
-
 What:		/sys/bus/coresight/devices/<memory_map>.etb/trigger_cntr
 Date:		November 2014
 KernelVersion:	3.19
@@ -22,3 +15,65 @@
 		following the trigger event. The number of 32-bit words written
 		into the Trace RAM following the trigger event is equal to the
 		value stored in this register+1 (from ARM ETB-TRM).
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/rdp
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Defines the depth, in words, of the trace RAM in powers of
+		2.  The value is read directly from HW register RDP, 0x004.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/sts
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB status register.  The value
+		is read directly from HW register STS, 0x00C.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/rrp
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB RAM Read Pointer register
+		that is used to read entries from the Trace RAM over the APB
+		interface.  The value is read directly from HW register RRP,
+		0x014.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/rwp
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB RAM Write Pointer register
+		that is used to set the write pointer for writing entries from
+		the CoreSight bus into the Trace RAM. The value is read directly
+		from HW register RWP, 0x018.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/trg
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Similar to "trigger_cntr" above except that this value is
+		read directly from HW register TRG, 0x01C.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/ctl
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB Control register. The value
+		is read directly from HW register CTL, 0x020.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffsr
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB Formatter and Flush Status
+		register.  The value is read directly from HW register FFSR,
+		0x300.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffcr
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the ETB Formatter and Flush Control
+		register.  The value is read directly from HW register FFCR,
+		0x304.
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
index 2355ed8..36258bc 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
@@ -359,6 +359,19 @@
 Description:	(R) Print the content of the Peripheral ID3 Register
 		(0xFEC).  The value is taken directly from the HW.
 
+What:		/sys/bus/coresight/devices/<memory_map>.etm/mgmt/trcconfig
+Date:		February 2016
+KernelVersion:	4.07
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Print the content of the trace configuration register
+		(0x010) as currently set by SW.
+
+What:		/sys/bus/coresight/devices/<memory_map>.etm/mgmt/trctraceid
+Date:		February 2016
+KernelVersion:	4.07
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Print the content of the trace ID register (0x040).
+
 What:		/sys/bus/coresight/devices/<memory_map>.etm/trcidr/trcidr0
 Date:		April 2015
 KernelVersion:	4.01
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm b/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
new file mode 100644
index 0000000..1dffabe
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
@@ -0,0 +1,53 @@
+What:		/sys/bus/coresight/devices/<memory_map>.stm/enable_source
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Enable/disable tracing on this specific trace macrocell.
+		Enabling the trace macrocell implies it has been configured
+		properly and a sink has been identified for it.  The path
+		of coresight components linking the source to the sink is
+		configured and managed automatically by the coresight framework.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/hwevent_enable
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Provides access to the HW event enable register, used in
+		conjunction with HW event bank select register.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/hwevent_select
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Gives access to the HW event block select register
+		(STMHEBSR) in order to configure up to 256 channels.  Used in
+		conjunction with "hwevent_enable" register as described above.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/port_enable
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Provides access to the stimulus port enable register
+		(STMSPER).  Used in conjunction with "port_select" described
+		below.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/port_select
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Used to determine which bank of stimulus ports the bits
+		in register STMSPER (see above) apply to.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/status
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Lists various control and status registers.  The specific
+		layout and content are driver specific.
+
+What:		/sys/bus/coresight/devices/<memory_map>.stm/traceid
+Date:		April 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Holds the trace ID that will appear in the trace stream
+		coming from this trace entity.
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc b/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
index f38cded5..4fe677e 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
@@ -6,3 +6,80 @@
 		formatter after a defined number of words have been stored
 		following the trigger event. Additional interface for this
 		driver are expected to be added as it matures.
+
+What:           /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rsz
+Date:           March 2016
+KernelVersion:  4.7
+Contact:        Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:    (R) Defines the size, in 32-bit words, of the local RAM buffer.
+                The value is read directly from HW register RSZ, 0x004.
+
+What:           /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/sts
+Date:           March 2016
+KernelVersion:  4.7
+Contact:        Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC status register.  The value
+                is read directly from HW register STS, 0x00C.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rrp
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC RAM Read Pointer register
+		that is used to read entries from the Trace RAM over the APB
+		interface.  The value is read directly from HW register RRP,
+		0x014.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rwp
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC RAM Write Pointer register
+		that is used to set the write pointer for writing entries from
+		the CoreSight bus into the Trace RAM. The value is read directly
+		from HW register RWP, 0x018.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/trg
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Similar to "trigger_cntr" above except that this value is
+		read directly from HW register TRG, 0x01C.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ctl
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC Control register. The value
+		is read directly from HW register CTL, 0x020.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffsr
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC Formatter and Flush Status
+		register.  The value is read directly from HW register FFSR,
+		0x300.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffcr
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC Formatter and Flush Control
+		register.  The value is read directly from HW register FFCR,
+		0x304.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/mode
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC Mode register, which
+		indicates the mode the device has been configured to enact.
+		The value is read directly from the MODE register, 0x028.
+
+What:		/sys/bus/coresight/devices/<memory_map>.tmc/mgmt/devid
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Indicates the capabilities of the CoreSight TMC.
+		The value is read directly from the DEVID register, 0xFC8.
diff --git a/Documentation/ABI/testing/sysfs-bus-mcb b/Documentation/ABI/testing/sysfs-bus-mcb
new file mode 100644
index 0000000..77947c5
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-mcb
@@ -0,0 +1,29 @@
+What:		/sys/bus/mcb/devices/mcb:X
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Johannes Thumshirn <jth@kernel.org>
+Description:	Hardware chip or device hosting the MEN chameleon bus
+
+What:		/sys/bus/mcb/devices/mcb:X/revision
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Johannes Thumshirn <jth@kernel.org>
+Description:	The FPGA's revision number
+
+What:		/sys/bus/mcb/devices/mcb:X/minor
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Johannes Thumshirn <jth@kernel.org>
+Description:	The FPGA's minor number
+
+What:		/sys/bus/mcb/devices/mcb:X/model
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Johannes Thumshirn <jth@kernel.org>
+Description:	The FPGA's model number
+
+What:		/sys/bus/mcb/devices/mcb:X/name
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Johannes Thumshirn <jth@kernel.org>
+Description:	The FPGA's name
diff --git a/Documentation/ABI/testing/sysfs-class-stm b/Documentation/ABI/testing/sysfs-class-stm
index c9aa4f3..77ed3da 100644
--- a/Documentation/ABI/testing/sysfs-class-stm
+++ b/Documentation/ABI/testing/sysfs-class-stm
@@ -12,3 +12,13 @@
 Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description:
 		Shows the number of channels per master on this STM device.
+
+What:		/sys/class/stm/<stm>/hw_override
+Date:		March 2016
+KernelVersion:	4.7
+Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
+Description:
+		Reads as 0 if master numbers in the STP stream produced by
+		this stm device will match the master numbers assigned by
+		the software, or 1 if the stm hardware overrides the
+		software-assigned masters.
diff --git a/Documentation/devicetree/bindings/arm/coresight.txt b/Documentation/devicetree/bindings/arm/coresight.txt
index 62938eb..93147c0c 100644
--- a/Documentation/devicetree/bindings/arm/coresight.txt
+++ b/Documentation/devicetree/bindings/arm/coresight.txt
@@ -19,6 +19,7 @@
 		- "arm,coresight-etm3x", "arm,primecell";
 		- "arm,coresight-etm4x", "arm,primecell";
 		- "qcom,coresight-replicator1x", "arm,primecell";
+		- "arm,coresight-stm", "arm,primecell"; [1]
 
 	* reg: physical base address and length of the register
 	  set(s) of the component.
@@ -36,6 +37,14 @@
 	  layout using the generic DT graph presentation found in
 	  "bindings/graph.txt".
 
+* Additional required properties for System Trace Macrocells (STM):
+	* reg: along with the physical base address and length of the register
+	  set as described above, another entry is required to describe the
+	  mapping of the extended stimulus port area.
+
+	* reg-names: the only acceptable values are "stm-base" and
+	  "stm-stimulus-base", each corresponding to the areas defined in "reg".
+
 * Required properties for devices that don't show up on the AMBA bus, such as
   non-configurable replicators:
 
@@ -202,3 +211,22 @@
 			};
 		};
 	};
+
+4. STM
+	stm@20100000 {
+		compatible = "arm,coresight-stm", "arm,primecell";
+		reg = <0 0x20100000 0 0x1000>,
+		      <0 0x28000000 0 0x180000>;
+		reg-names = "stm-base", "stm-stimulus-base";
+
+		clocks = <&soc_smc50mhz>;
+		clock-names = "apb_pclk";
+		port {
+			stm_out_port: endpoint {
+				remote-endpoint = <&main_funnel_in_port2>;
+			};
+		};
+	};
+
+[1]. There are currently two versions of the STM: STM32 and STM500.  Both
+have the same HW interface and as such don't need an explicit binding name.
diff --git a/Documentation/trace/coresight.txt b/Documentation/trace/coresight.txt
index 0a5c329..a33c88c 100644
--- a/Documentation/trace/coresight.txt
+++ b/Documentation/trace/coresight.txt
@@ -190,8 +190,8 @@
 Last but not least, "struct module *owner" is expected to be set to reflect
 the information carried in "THIS_MODULE".
 
-How to use
-----------
+How to use the tracer modules
+-----------------------------
 
 Before trace collection can start, a coresight sink needs to be identify.
 There is no limit on the amount of sinks (nor sources) that can be enabled at
@@ -297,3 +297,36 @@
 Instruction     13570831        0x8026B584      E28DD00C        false   ADD      sp,sp,#0xc
 Instruction     0       0x8026B588      E8BD8000        true    LDM      sp!,{pc}
 Timestamp                                       Timestamp: 17107041535
+
+How to use the STM module
+-------------------------
+
+Using the System Trace Macrocell module is the same as using the tracers - the
+only difference is that clients are driving the trace capture rather
+than the program flow through the code.
+
+As with any other CoreSight component, specifics about the STM tracer can be
+found in sysfs, with more information on each entry available in [1]:
+
+root@genericarmv8:~# ls /sys/bus/coresight/devices/20100000.stm
+enable_source   hwevent_select  port_enable     subsystem       uevent
+hwevent_enable  mgmt            port_select     traceid
+root@genericarmv8:~#
+
+Like any other source, a sink needs to be identified and the STM enabled before
+being used:
+
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20010000.etf/enable_sink
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20100000.stm/enable_source
+
+From there user space applications can request and use channels using the devfs
+interface provided for that purpose by the generic STM API:
+
+root@genericarmv8:~# ls -l /dev/20100000.stm
+crw-------    1 root     root       10,  61 Jan  3 18:11 /dev/20100000.stm
+root@genericarmv8:~#
+
+Details on how to use the generic STM API can be found here [2].
+
+[1]. Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
+[2]. Documentation/trace/stm.txt
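
For completeness, here is a rough user-space sketch of driving the devfs node
shown above through the generic STM API [2].  It is only an illustration: the
device path is taken from the example session above, and it assumes an
stp-policy "default" entry has been configured as described in stm.txt.

	/*
	 * Minimal user-space sketch: send a string through the STM
	 * character device.  Device path and policy setup are assumptions,
	 * not part of this patch.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <unistd.h>

	int main(void)
	{
		const char msg[] = "hello from user space";
		/* Path taken from the example session above; adjust for your SoC. */
		int fd = open("/dev/20100000.stm", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/*
		 * A plain write() is turned into STP packets on a master/channel
		 * pair picked according to the configured stp-policy (see [2]).
		 */
		if (write(fd, msg, strlen(msg)) < 0)
			perror("write");

		close(fd);
		return 0;
	}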
diff --git a/Documentation/w1/slaves/w1_therm b/Documentation/w1/slaves/w1_therm
index 13411fe..d1f93af 100644
--- a/Documentation/w1/slaves/w1_therm
+++ b/Documentation/w1/slaves/w1_therm
@@ -33,7 +33,15 @@
 powered it would be possible to convert all the devices at the same
 time and then go back to read individual sensors.  That isn't
 currently supported.  The driver also doesn't support reduced
-precision (which would also reduce the conversion time).
+precision (which would also reduce the conversion time) when reading values.
+
+Writing a value between 9 and 12 to the sysfs w1_slave file will change the
+precision of the sensor for the next readings. This value is in (volatile)
+SRAM, so it is reset when the sensor gets power-cycled.
+
+To store the current precision configuration into EEPROM, the value 0
+has to be written to the sysfs w1_slave file. Since the EEPROM has a limited
+number of write cycles (>50k), this command should be used sparingly.
 
 The module parameter strong_pullup can be set to 0 to disable the
 strong pullup, 1 to enable autodetection or 2 to force strong pullup.
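
As an illustration of the precision interface described in the hunk above, a
minimal user-space sketch follows.  The device id "28-0000075f0000" is a
made-up example; only the 9-12 and 0 values come from the text above.

	/*
	 * Sketch: set a DS18B20-style sensor to 10-bit precision, then persist
	 * the setting to EEPROM by writing 0.  The device id is hypothetical.
	 */
	#include <stdio.h>

	static int w1_write(const char *path, const char *val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fputs(val, f);
		return fclose(f);
	}

	int main(void)
	{
		const char *slave = "/sys/bus/w1/devices/28-0000075f0000/w1_slave";

		w1_write(slave, "10");	/* volatile: lost on power cycle */
		w1_write(slave, "0");	/* store current precision to EEPROM */
		return 0;
	}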
diff --git a/MAINTAINERS b/MAINTAINERS
index 832f070..3dc5d89 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9843,6 +9843,7 @@
 SYSTEM TRACE MODULE CLASS
 M:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 S:	Maintained
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/ash/stm.git
 F:	Documentation/trace/stm.txt
 F:	drivers/hwtracing/stm/
 F:	include/linux/stm.h
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index 3ec0766..601f64f 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -279,8 +279,7 @@
 
 config RTC
 	tristate "Enhanced Real Time Clock Support (legacy PC RTC driver)"
-	depends on !PPC && !PARISC && !IA64 && !M68K && !SPARC && !FRV \
-			&& !ARM && !SUPERH && !S390 && !AVR32 && !BLACKFIN && !UML
+	depends on ALPHA || (MIPS && MACH_LOONGSON64) || MN10300
 	---help---
 	  If you say Y here and create a character special file /dev/rtc with
 	  major number 10 and minor number 135 using mknod ("man mknod"), you
@@ -585,7 +584,6 @@
 
 config DEVPORT
 	bool
-	depends on !M68K
 	depends on ISA || PCI
 	default y
 
diff --git a/drivers/char/xillybus/xillybus_of.c b/drivers/char/xillybus/xillybus_of.c
index 7818650..78a492f 100644
--- a/drivers/char/xillybus/xillybus_of.c
+++ b/drivers/char/xillybus/xillybus_of.c
@@ -81,7 +81,6 @@
 {
 	dma_addr_t addr;
 	struct xilly_mapping *this;
-	int rc;
 
 	this = kzalloc(sizeof(*this), GFP_KERNEL);
 	if (!this)
@@ -101,15 +100,7 @@
 
 	*ret_dma_handle = addr;
 
-	rc = devm_add_action(ep->dev, xilly_of_unmap, this);
-
-	if (rc) {
-		dma_unmap_single(ep->dev, addr, size, direction);
-		kfree(this);
-		return rc;
-	}
-
-	return 0;
+	return devm_add_action_or_reset(ep->dev, xilly_of_unmap, this);
 }
 
 static struct xilly_endpoint_hardware of_hw = {
diff --git a/drivers/char/xillybus/xillybus_pcie.c b/drivers/char/xillybus/xillybus_pcie.c
index 9418300..dff2d15 100644
--- a/drivers/char/xillybus/xillybus_pcie.c
+++ b/drivers/char/xillybus/xillybus_pcie.c
@@ -98,7 +98,6 @@
 	int pci_direction;
 	dma_addr_t addr;
 	struct xilly_mapping *this;
-	int rc;
 
 	this = kzalloc(sizeof(*this), GFP_KERNEL);
 	if (!this)
@@ -120,14 +119,7 @@
 
 	*ret_dma_handle = addr;
 
-	rc = devm_add_action(ep->dev, xilly_pci_unmap, this);
-	if (rc) {
-		pci_unmap_single(ep->pdev, addr, size, pci_direction);
-		kfree(this);
-		return rc;
-	}
-
-	return 0;
+	return devm_add_action_or_reset(ep->dev, xilly_pci_unmap, this);
 }
 
 static struct xilly_endpoint_hardware pci_hw = {
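Both xillybus hunks above replace the open-coded "register a cleanup action,
undo by hand if registration fails" sequence with devm_add_action_or_reset(),
which invokes the action itself when registration fails.  A minimal sketch of
that pattern in a hypothetical driver follows; the my_* names are invented for
illustration, only the devres call is the real API.

	#include <linux/device.h>
	#include <linux/slab.h>

	struct my_ctx {
		void *buf;
	};

	static void my_ctx_release(void *data)
	{
		struct my_ctx *ctx = data;

		kfree(ctx->buf);
		kfree(ctx);
	}

	static int my_setup(struct device *dev)
	{
		struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

		if (!ctx)
			return -ENOMEM;

		ctx->buf = kmalloc(64, GFP_KERNEL);
		if (!ctx->buf) {
			kfree(ctx);
			return -ENOMEM;
		}

		/*
		 * On success the release runs automatically at driver detach;
		 * on failure devm_add_action_or_reset() calls my_ctx_release()
		 * for us, so no manual unwind is needed here.
		 */
		return devm_add_action_or_reset(dev, my_ctx_release, ctx);
	}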
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 38b682ba..b6c1211 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -597,27 +597,55 @@
 
 static void vmbus_wait_for_unload(void)
 {
-	int cpu = smp_processor_id();
-	void *page_addr = hv_context.synic_message_page[cpu];
-	struct hv_message *msg = (struct hv_message *)page_addr +
-				  VMBUS_MESSAGE_SINT;
+	int cpu;
+	void *page_addr;
+	struct hv_message *msg;
 	struct vmbus_channel_message_header *hdr;
-	bool unloaded = false;
+	u32 message_type;
 
+	/*
+	 * CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was
+	 * used for initial contact or to CPU0 depending on host version. When
+	 * we're crashing on a different CPU let's hope that IRQ handler on
+	 * the cpu which receives CHANNELMSG_UNLOAD_RESPONSE is still
+	 * functional and vmbus_unload_response() will complete
+	 * vmbus_connection.unload_event. If not, the last thing we can do is
+	 * read message pages for all CPUs directly.
+	 */
 	while (1) {
-		if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
-			mdelay(10);
-			continue;
+		if (completion_done(&vmbus_connection.unload_event))
+			break;
+
+		for_each_online_cpu(cpu) {
+			page_addr = hv_context.synic_message_page[cpu];
+			msg = (struct hv_message *)page_addr +
+				VMBUS_MESSAGE_SINT;
+
+			message_type = READ_ONCE(msg->header.message_type);
+			if (message_type == HVMSG_NONE)
+				continue;
+
+			hdr = (struct vmbus_channel_message_header *)
+				msg->u.payload;
+
+			if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
+				complete(&vmbus_connection.unload_event);
+
+			vmbus_signal_eom(msg, message_type);
 		}
 
-		hdr = (struct vmbus_channel_message_header *)msg->u.payload;
-		if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
-			unloaded = true;
+		mdelay(10);
+	}
 
-		vmbus_signal_eom(msg);
-
-		if (unloaded)
-			break;
+	/*
+	 * We're crashing and already got the UNLOAD_RESPONSE; clean up all
+	 * maybe-pending messages on all CPUs so that we are able to receive
+	 * new messages after we reconnect.
+	 */
+	for_each_online_cpu(cpu) {
+		page_addr = hv_context.synic_message_page[cpu];
+		msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+		msg->header.message_type = HVMSG_NONE;
 	}
 }
 
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index d02f137..fcf8a02 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -495,3 +495,4 @@
 
 	hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL);
 }
+EXPORT_SYMBOL_GPL(vmbus_set_event);
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index b853b4b..df35fb7 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -714,7 +714,7 @@
 		 * If the pfn range we are dealing with is not in the current
 		 * "hot add block", move on.
 		 */
-		if ((start_pfn >= has->end_pfn))
+		if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn)
 			continue;
 		/*
 		 * If the current hot add-request extends beyond
@@ -768,7 +768,7 @@
 		 * If the pfn range we are dealing with is not in the current
 		 * "hot add block", move on.
 		 */
-		if ((start_pfn >= has->end_pfn))
+		if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn)
 			continue;
 
 		old_covered_state = has->covered_end_pfn;
@@ -1400,6 +1400,7 @@
 				 * This is a normal hot-add request specifying
 				 * hot-add memory.
 				 */
+				dm->host_specified_ha_region = false;
 				ha_pg_range = &ha_msg->range;
 				dm->ha_wrk.ha_page_range = *ha_pg_range;
 				dm->ha_wrk.ha_region_range.page_range = 0;
diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
index 9b9b370..cb1a916 100644
--- a/drivers/hv/hv_kvp.c
+++ b/drivers/hv/hv_kvp.c
@@ -78,9 +78,11 @@
 
 static void kvp_respond_to_host(struct hv_kvp_msg *msg, int error);
 static void kvp_timeout_func(struct work_struct *dummy);
+static void kvp_host_handshake_func(struct work_struct *dummy);
 static void kvp_register(int);
 
 static DECLARE_DELAYED_WORK(kvp_timeout_work, kvp_timeout_func);
+static DECLARE_DELAYED_WORK(kvp_host_handshake_work, kvp_host_handshake_func);
 static DECLARE_WORK(kvp_sendkey_work, kvp_send_key);
 
 static const char kvp_devname[] = "vmbus/hv_kvp";
@@ -130,6 +132,11 @@
 	hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper);
 }
 
+static void kvp_host_handshake_func(struct work_struct *dummy)
+{
+	hv_poll_channel(kvp_transaction.recv_channel, hv_kvp_onchannelcallback);
+}
+
 static int kvp_handle_handshake(struct hv_kvp_msg *msg)
 {
 	switch (msg->kvp_hdr.operation) {
@@ -154,6 +161,12 @@
 	pr_debug("KVP: userspace daemon ver. %d registered\n",
 		 KVP_OP_REGISTER);
 	kvp_register(dm_reg_value);
+
+	/*
+	 * If we're still negotiating with the host, cancel the timeout
+	 * work so that we don't poll the channel twice.
+	 */
+	cancel_delayed_work_sync(&kvp_host_handshake_work);
 	hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper);
 
 	return 0;
@@ -594,7 +607,22 @@
 	struct icmsg_negotiate *negop = NULL;
 	int util_fw_version;
 	int kvp_srv_version;
+	static enum {NEGO_NOT_STARTED,
+		     NEGO_IN_PROGRESS,
+		     NEGO_FINISHED} host_negotiatied = NEGO_NOT_STARTED;
 
+	if (host_negotiatied == NEGO_NOT_STARTED &&
+	    kvp_transaction.state < HVUTIL_READY) {
+		/*
+		 * If the userspace daemon is not connected and the host is
+		 * asking us to negotiate, delay so that we don't lose
+		 * messages.  This is important for the Failover IP setting.
+		 */
+		host_negotiatied = NEGO_IN_PROGRESS;
+		schedule_delayed_work(&kvp_host_handshake_work,
+				      HV_UTIL_NEGO_TIMEOUT * HZ);
+		return;
+	}
 	if (kvp_transaction.state > HVUTIL_READY)
 		return;
 
@@ -672,6 +700,8 @@
 		vmbus_sendpacket(channel, recv_buffer,
 				       recvlen, requestid,
 				       VM_PKT_DATA_INBAND, 0);
+
+		host_negotiatied = NEGO_FINISHED;
 	}
 
 }
@@ -708,6 +738,7 @@
 void hv_kvp_deinit(void)
 {
 	kvp_transaction.state = HVUTIL_DEVICE_DYING;
+	cancel_delayed_work_sync(&kvp_host_handshake_work);
 	cancel_delayed_work_sync(&kvp_timeout_work);
 	cancel_work_sync(&kvp_sendkey_work);
 	hvutil_transport_destroy(hvt);
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 12321b9..718b5c7 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -36,6 +36,11 @@
 #define HV_UTIL_TIMEOUT 30
 
 /*
+ * Timeout for guest-host handshake for services.
+ */
+#define HV_UTIL_NEGO_TIMEOUT 60
+
+/*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
  * is set by CPUID(HVCPUID_VERSION_FEATURES).
  */
@@ -620,9 +625,21 @@
 	channel_message_table[CHANNELMSG_COUNT];
 
 /* Free the message slot and signal end-of-message if required */
-static inline void vmbus_signal_eom(struct hv_message *msg)
+static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
 {
-	msg->header.message_type = HVMSG_NONE;
+	/*
+	 * On crash we're reading some other CPU's message page and we need
+	 * to be careful: this other CPU may have already cleared the header
+	 * and the host may have already delivered some other message there.
+	 * In case we blindly write msg->header.message_type we're going
+	 * to lose it. We can still lose a message of the same type but
+	 * we count on the fact that there can only be one
+	 * CHANNELMSG_UNLOAD_RESPONSE and we don't care about other messages
+	 * on crash.
+	 */
+	if (cmpxchg(&msg->header.message_type, old_msg_type,
+		    HVMSG_NONE) != old_msg_type)
+		return;
 
 	/*
 	 * Make sure the write to MessageType (ie set to
@@ -667,8 +684,6 @@
 
 int vmbus_post_msg(void *buffer, size_t buflen);
 
-void vmbus_set_event(struct vmbus_channel *channel);
-
 void vmbus_on_event(unsigned long data);
 void vmbus_on_msg_dpc(unsigned long data);
 
diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index a40a73a..fe586bf 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -33,25 +33,21 @@
 void hv_begin_read(struct hv_ring_buffer_info *rbi)
 {
 	rbi->ring_buffer->interrupt_mask = 1;
-	mb();
+	virt_mb();
 }
 
 u32 hv_end_read(struct hv_ring_buffer_info *rbi)
 {
-	u32 read;
-	u32 write;
 
 	rbi->ring_buffer->interrupt_mask = 0;
-	mb();
+	virt_mb();
 
 	/*
 	 * Now check to see if the ring buffer is still empty.
 	 * If it is not, we raced and we need to process new
 	 * incoming messages.
 	 */
-	hv_get_ringbuffer_availbytes(rbi, &read, &write);
-
-	return read;
+	return hv_get_bytes_to_read(rbi);
 }
 
 /*
@@ -72,69 +68,17 @@
 
 static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
 {
-	mb();
-	if (rbi->ring_buffer->interrupt_mask)
+	virt_mb();
+	if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
 		return false;
 
 	/* check interrupt_mask before read_index */
-	rmb();
+	virt_rmb();
 	/*
 	 * This is the only case we need to signal when the
 	 * ring transitions from being empty to non-empty.
 	 */
-	if (old_write == rbi->ring_buffer->read_index)
-		return true;
-
-	return false;
-}
-
-/*
- * To optimize the flow management on the send-side,
- * when the sender is blocked because of lack of
- * sufficient space in the ring buffer, potential the
- * consumer of the ring buffer can signal the producer.
- * This is controlled by the following parameters:
- *
- * 1. pending_send_sz: This is the size in bytes that the
- *    producer is trying to send.
- * 2. The feature bit feat_pending_send_sz set to indicate if
- *    the consumer of the ring will signal when the ring
- *    state transitions from being full to a state where
- *    there is room for the producer to send the pending packet.
- */
-
-static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
-{
-	u32 cur_write_sz;
-	u32 r_size;
-	u32 write_loc;
-	u32 read_loc = rbi->ring_buffer->read_index;
-	u32 pending_sz;
-
-	/*
-	 * Issue a full memory barrier before making the signaling decision.
-	 * Here is the reason for having this barrier:
-	 * If the reading of the pend_sz (in this function)
-	 * were to be reordered and read before we commit the new read
-	 * index (in the calling function)  we could
-	 * have a problem. If the host were to set the pending_sz after we
-	 * have sampled pending_sz and go to sleep before we commit the
-	 * read index, we could miss sending the interrupt. Issue a full
-	 * memory barrier to address this.
-	 */
-	mb();
-
-	pending_sz = rbi->ring_buffer->pending_send_sz;
-	write_loc = rbi->ring_buffer->write_index;
-	/* If the other end is not blocked on write don't bother. */
-	if (pending_sz == 0)
-		return false;
-
-	r_size = rbi->ring_datasize;
-	cur_write_sz = write_loc >= read_loc ? r_size - (write_loc - read_loc) :
-			read_loc - write_loc;
-
-	if (cur_write_sz >= pending_sz)
+	if (old_write == READ_ONCE(rbi->ring_buffer->read_index))
 		return true;
 
 	return false;
@@ -188,17 +132,9 @@
 		    u32 next_read_location)
 {
 	ring_info->ring_buffer->read_index = next_read_location;
+	ring_info->priv_read_index = next_read_location;
 }
 
-
-/* Get the start of the ring buffer. */
-static inline void *
-hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info)
-{
-	return (void *)ring_info->ring_buffer->buffer;
-}
-
-
 /* Get the size of the ring buffer. */
 static inline u32
 hv_get_ring_buffersize(struct hv_ring_buffer_info *ring_info)
@@ -332,7 +268,6 @@
 {
 	int i = 0;
 	u32 bytes_avail_towrite;
-	u32 bytes_avail_toread;
 	u32 totalbytes_towrite = 0;
 
 	u32 next_write_location;
@@ -348,9 +283,7 @@
 	if (lock)
 		spin_lock_irqsave(&outring_info->ring_lock, flags);
 
-	hv_get_ringbuffer_availbytes(outring_info,
-				&bytes_avail_toread,
-				&bytes_avail_towrite);
+	bytes_avail_towrite = hv_get_bytes_to_write(outring_info);
 
 	/*
 	 * If there is only room for the packet, assume it is full.
@@ -384,7 +317,7 @@
 					     sizeof(u64));
 
 	/* Issue a full memory barrier before updating the write index */
-	mb();
+	virt_mb();
 
 	/* Now, update the write location */
 	hv_set_next_write_location(outring_info, next_write_location);
@@ -401,7 +334,6 @@
 		       void *buffer, u32 buflen, u32 *buffer_actual_len,
 		       u64 *requestid, bool *signal, bool raw)
 {
-	u32 bytes_avail_towrite;
 	u32 bytes_avail_toread;
 	u32 next_read_location = 0;
 	u64 prev_indices = 0;
@@ -417,10 +349,7 @@
 	*buffer_actual_len = 0;
 	*requestid = 0;
 
-	hv_get_ringbuffer_availbytes(inring_info,
-				&bytes_avail_toread,
-				&bytes_avail_towrite);
-
+	bytes_avail_toread = hv_get_bytes_to_read(inring_info);
 	/* Make sure there is something to read */
 	if (bytes_avail_toread < sizeof(desc)) {
 		/*
@@ -464,7 +393,7 @@
 	 * the writer may start writing to the read area once the read index
 	 * is updated.
 	 */
-	mb();
+	virt_mb();
 
 	/* Update the read index */
 	hv_set_next_read_location(inring_info, next_read_location);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 64713ff..952f20f 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -41,6 +41,7 @@
 #include <linux/ptrace.h>
 #include <linux/screen_info.h>
 #include <linux/kdebug.h>
+#include <linux/efi.h>
 #include "hyperv_vmbus.h"
 
 static struct acpi_device  *hv_acpi_dev;
@@ -101,7 +102,10 @@
 	.notifier_call = hyperv_panic_event,
 };
 
+static const char *fb_mmio_name = "fb_range";
+static struct resource *fb_mmio;
 struct resource *hyperv_mmio;
+DEFINE_SEMAPHORE(hyperv_mmio_lock);
 
 static int vmbus_exists(void)
 {
@@ -708,7 +712,7 @@
 	if (dev->event_handler)
 		dev->event_handler(dev);
 
-	vmbus_signal_eom(msg);
+	vmbus_signal_eom(msg, HVMSG_TIMER_EXPIRED);
 }
 
 void vmbus_on_msg_dpc(unsigned long data)
@@ -720,8 +724,9 @@
 	struct vmbus_channel_message_header *hdr;
 	struct vmbus_channel_message_table_entry *entry;
 	struct onmessage_work_context *ctx;
+	u32 message_type = msg->header.message_type;
 
-	if (msg->header.message_type == HVMSG_NONE)
+	if (message_type == HVMSG_NONE)
 		/* no msg */
 		return;
 
@@ -746,7 +751,7 @@
 		entry->message_handler(hdr);
 
 msg_handled:
-	vmbus_signal_eom(msg);
+	vmbus_signal_eom(msg, message_type);
 }
 
 static void vmbus_isr(void)
@@ -1048,7 +1053,6 @@
 	new_res->end = end;
 
 	/*
-	 * Stick ranges from higher in address space at the front of the list.
 	 * If two ranges are adjacent, merge them.
 	 */
 	do {
@@ -1069,7 +1073,7 @@
 			break;
 		}
 
-		if ((*old_res)->end < new_res->start) {
+		if ((*old_res)->start > new_res->end) {
 			new_res->sibling = *old_res;
 			if (prev_res)
 				(*prev_res)->sibling = new_res;
@@ -1091,6 +1095,12 @@
 	struct resource *next_res;
 
 	if (hyperv_mmio) {
+		if (fb_mmio) {
+			__release_region(hyperv_mmio, fb_mmio->start,
+					 resource_size(fb_mmio));
+			fb_mmio = NULL;
+		}
+
 		for (cur_res = hyperv_mmio; cur_res; cur_res = next_res) {
 			next_res = cur_res->sibling;
 			kfree(cur_res);
@@ -1100,6 +1110,30 @@
 	return 0;
 }
 
+static void vmbus_reserve_fb(void)
+{
+	int size;
+	/*
+	 * Make a claim for the frame buffer in the resource tree under the
+	 * first node, which will be the one below 4GB.  The length seems to
+	 * be underreported, particularly in a Generation 1 VM.  So start out
+	 * reserving a larger area and make it smaller until it succeeds.
+	 */
+
+	if (screen_info.lfb_base) {
+		if (efi_enabled(EFI_BOOT))
+			size = max_t(__u32, screen_info.lfb_size, 0x800000);
+		else
+			size = max_t(__u32, screen_info.lfb_size, 0x4000000);
+
+		for (; !fb_mmio && (size >= 0x100000); size >>= 1) {
+			fb_mmio = __request_region(hyperv_mmio,
+						   screen_info.lfb_base, size,
+						   fb_mmio_name, 0);
+		}
+	}
+}
+
 /**
  * vmbus_allocate_mmio() - Pick a memory-mapped I/O range.
  * @new:		If successful, supplied a pointer to the
@@ -1128,11 +1162,33 @@
 			resource_size_t size, resource_size_t align,
 			bool fb_overlap_ok)
 {
-	struct resource *iter;
-	resource_size_t range_min, range_max, start, local_min, local_max;
+	struct resource *iter, *shadow;
+	resource_size_t range_min, range_max, start;
 	const char *dev_n = dev_name(&device_obj->device);
-	u32 fb_end = screen_info.lfb_base + (screen_info.lfb_size << 1);
-	int i;
+	int retval;
+
+	retval = -ENXIO;
+	down(&hyperv_mmio_lock);
+
+	/*
+	 * If overlaps with frame buffers are allowed, then first attempt to
+	 * make the allocation from within the reserved region.  Because it
+	 * is already reserved, no shadow allocation is necessary.
+	 */
+	if (fb_overlap_ok && fb_mmio && !(min > fb_mmio->end) &&
+	    !(max < fb_mmio->start)) {
+
+		range_min = fb_mmio->start;
+		range_max = fb_mmio->end;
+		start = (range_min + align - 1) & ~(align - 1);
+		for (; start + size - 1 <= range_max; start += align) {
+			*new = request_mem_region_exclusive(start, size, dev_n);
+			if (*new) {
+				retval = 0;
+				goto exit;
+			}
+		}
+	}
 
 	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
 		if ((iter->start >= max) || (iter->end <= min))
@@ -1140,46 +1196,56 @@
 
 		range_min = iter->start;
 		range_max = iter->end;
+		start = (range_min + align - 1) & ~(align - 1);
+		for (; start + size - 1 <= range_max; start += align) {
+			shadow = __request_region(iter, start, size, NULL,
+						  IORESOURCE_BUSY);
+			if (!shadow)
+				continue;
 
-		/* If this range overlaps the frame buffer, split it into
-		   two tries. */
-		for (i = 0; i < 2; i++) {
-			local_min = range_min;
-			local_max = range_max;
-			if (fb_overlap_ok || (range_min >= fb_end) ||
-			    (range_max <= screen_info.lfb_base)) {
-				i++;
-			} else {
-				if ((range_min <= screen_info.lfb_base) &&
-				    (range_max >= screen_info.lfb_base)) {
-					/*
-					 * The frame buffer is in this window,
-					 * so trim this into the part that
-					 * preceeds the frame buffer.
-					 */
-					local_max = screen_info.lfb_base - 1;
-					range_min = fb_end;
-				} else {
-					range_min = fb_end;
-					continue;
-				}
+			*new = request_mem_region_exclusive(start, size, dev_n);
+			if (*new) {
+				shadow->name = (char *)*new;
+				retval = 0;
+				goto exit;
 			}
 
-			start = (local_min + align - 1) & ~(align - 1);
-			for (; start + size - 1 <= local_max; start += align) {
-				*new = request_mem_region_exclusive(start, size,
-								    dev_n);
-				if (*new)
-					return 0;
-			}
+			__release_region(iter, start, size);
 		}
 	}
 
-	return -ENXIO;
+exit:
+	up(&hyperv_mmio_lock);
+	return retval;
 }
 EXPORT_SYMBOL_GPL(vmbus_allocate_mmio);
 
 /**
+ * vmbus_free_mmio() - Free a memory-mapped I/O range.
+ * @start:		Base address of region to release.
+ * @size:		Size of the region to release.
+ *
+ * This function releases anything requested by
+ * vmbus_allocate_mmio().
+ */
+void vmbus_free_mmio(resource_size_t start, resource_size_t size)
+{
+	struct resource *iter;
+
+	down(&hyperv_mmio_lock);
+	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
+		if ((iter->start >= start + size) || (iter->end <= start))
+			continue;
+
+		__release_region(iter, start, size);
+	}
+	release_mem_region(start, size);
+	up(&hyperv_mmio_lock);
+
+}
+EXPORT_SYMBOL_GPL(vmbus_free_mmio);
+
+/**
  * vmbus_cpu_number_to_vp_number() - Map CPU to VP.
  * @cpu_number: CPU number in Linux terms
  *
@@ -1219,8 +1285,10 @@
 
 		if (ACPI_FAILURE(result))
 			continue;
-		if (hyperv_mmio)
+		if (hyperv_mmio) {
+			vmbus_reserve_fb();
 			break;
+		}
 	}
 	ret_val = 0;
 
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
index db05410..130cb21 100644
--- a/drivers/hwtracing/coresight/Kconfig
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -78,4 +78,15 @@
 	  programmable ATB replicator sends the ATB trace stream from the
 	  ETB/ETF to the TPIUi and ETR.
 
+config CORESIGHT_STM
+	bool "CoreSight System Trace Macrocell driver"
+	depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
+	select CORESIGHT_LINKS_AND_SINKS
+	select STM
+	help
+	  This driver provides support for hardware-assisted software
+	  instrumentation-based tracing. This is primarily used for
+	  logging useful software events or data coming from various entities
+	  in the system, possibly running different OSs.
+
 endif
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index cf8c6d6..af480d9 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -1,15 +1,18 @@
 #
 # Makefile for CoreSight drivers.
 #
-obj-$(CONFIG_CORESIGHT) += coresight.o
+obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o
 obj-$(CONFIG_OF) += of_coresight.o
-obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
+obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
+					     coresight-tmc-etf.o \
+					     coresight-tmc-etr.o
 obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o
 obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
 obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
 					   coresight-replicator.o
 obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
-					coresight-etm3x-sysfs.o \
-					coresight-etm-perf.o
-obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
+					coresight-etm3x-sysfs.o
+obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
+					coresight-etm4x-sysfs.o
 obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
+obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
index acbce79..4d20b0b 100644
--- a/drivers/hwtracing/coresight/coresight-etb10.c
+++ b/drivers/hwtracing/coresight/coresight-etb10.c
@@ -71,26 +71,6 @@
 #define ETB_FRAME_SIZE_WORDS	4
 
 /**
- * struct cs_buffer - keep track of a recording session' specifics
- * @cur:	index of the current buffer
- * @nr_pages:	max number of pages granted to us
- * @offset:	offset within the current buffer
- * @data_size:	how much we collected in this run
- * @lost:	other than zero if we had a HW buffer wrap around
- * @snapshot:	is this run in snapshot mode
- * @data_pages:	a handle the ring buffer
- */
-struct cs_buffers {
-	unsigned int		cur;
-	unsigned int		nr_pages;
-	unsigned long		offset;
-	local_t			data_size;
-	local_t			lost;
-	bool			snapshot;
-	void			**data_pages;
-};
-
-/**
  * struct etb_drvdata - specifics associated to an ETB component
  * @base:	memory mapped base address for this component.
  * @dev:	the device entity associated to this component.
@@ -440,7 +420,7 @@
 		u32 mask = ~(ETB_FRAME_SIZE_WORDS - 1);
 
 		/* The new read pointer must be frame size aligned */
-		to_read -= handle->size & mask;
+		to_read = handle->size & mask;
 		/*
 		 * Move the RAM read pointer up, keeping in mind that
 		 * everything is in frame size units.
@@ -448,7 +428,8 @@
 		read_ptr = (write_ptr + drvdata->buffer_depth) -
 					to_read / ETB_FRAME_SIZE_WORDS;
 		/* Wrap around if need be*/
-		read_ptr &= ~(drvdata->buffer_depth - 1);
+		if (read_ptr > (drvdata->buffer_depth - 1))
+			read_ptr -= drvdata->buffer_depth;
 		/* let the decoder know we've skipped ahead */
 		local_inc(&buf->lost);
 	}
@@ -579,47 +560,29 @@
 	.llseek		= no_llseek,
 };
 
-static ssize_t status_show(struct device *dev,
-			   struct device_attribute *attr, char *buf)
-{
-	unsigned long flags;
-	u32 etb_rdr, etb_sr, etb_rrp, etb_rwp;
-	u32 etb_trg, etb_cr, etb_ffsr, etb_ffcr;
-	struct etb_drvdata *drvdata = dev_get_drvdata(dev->parent);
+#define coresight_etb10_simple_func(name, offset)                       \
+	coresight_simple_func(struct etb_drvdata, name, offset)
 
-	pm_runtime_get_sync(drvdata->dev);
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	CS_UNLOCK(drvdata->base);
+coresight_etb10_simple_func(rdp, ETB_RAM_DEPTH_REG);
+coresight_etb10_simple_func(sts, ETB_STATUS_REG);
+coresight_etb10_simple_func(rrp, ETB_RAM_READ_POINTER);
+coresight_etb10_simple_func(rwp, ETB_RAM_WRITE_POINTER);
+coresight_etb10_simple_func(trg, ETB_TRG);
+coresight_etb10_simple_func(ctl, ETB_CTL_REG);
+coresight_etb10_simple_func(ffsr, ETB_FFSR);
+coresight_etb10_simple_func(ffcr, ETB_FFCR);
 
-	etb_rdr = readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
-	etb_sr = readl_relaxed(drvdata->base + ETB_STATUS_REG);
-	etb_rrp = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER);
-	etb_rwp = readl_relaxed(drvdata->base + ETB_RAM_WRITE_POINTER);
-	etb_trg = readl_relaxed(drvdata->base + ETB_TRG);
-	etb_cr = readl_relaxed(drvdata->base + ETB_CTL_REG);
-	etb_ffsr = readl_relaxed(drvdata->base + ETB_FFSR);
-	etb_ffcr = readl_relaxed(drvdata->base + ETB_FFCR);
-
-	CS_LOCK(drvdata->base);
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
-	pm_runtime_put(drvdata->dev);
-
-	return sprintf(buf,
-		       "Depth:\t\t0x%x\n"
-		       "Status:\t\t0x%x\n"
-		       "RAM read ptr:\t0x%x\n"
-		       "RAM wrt ptr:\t0x%x\n"
-		       "Trigger cnt:\t0x%x\n"
-		       "Control:\t0x%x\n"
-		       "Flush status:\t0x%x\n"
-		       "Flush ctrl:\t0x%x\n",
-		       etb_rdr, etb_sr, etb_rrp, etb_rwp,
-		       etb_trg, etb_cr, etb_ffsr, etb_ffcr);
-
-	return -EINVAL;
-}
-static DEVICE_ATTR_RO(status);
+static struct attribute *coresight_etb_mgmt_attrs[] = {
+	&dev_attr_rdp.attr,
+	&dev_attr_sts.attr,
+	&dev_attr_rrp.attr,
+	&dev_attr_rwp.attr,
+	&dev_attr_trg.attr,
+	&dev_attr_ctl.attr,
+	&dev_attr_ffsr.attr,
+	&dev_attr_ffcr.attr,
+	NULL,
+};
 
 static ssize_t trigger_cntr_show(struct device *dev,
 			    struct device_attribute *attr, char *buf)
@@ -649,10 +612,23 @@
 
 static struct attribute *coresight_etb_attrs[] = {
 	&dev_attr_trigger_cntr.attr,
-	&dev_attr_status.attr,
 	NULL,
 };
-ATTRIBUTE_GROUPS(coresight_etb);
+
+static const struct attribute_group coresight_etb_group = {
+	.attrs = coresight_etb_attrs,
+};
+
+static const struct attribute_group coresight_etb_mgmt_group = {
+	.attrs = coresight_etb_mgmt_attrs,
+	.name = "mgmt",
+};
+
+const struct attribute_group *coresight_etb_groups[] = {
+	&coresight_etb_group,
+	&coresight_etb_mgmt_group,
+	NULL,
+};
 
 static int etb_probe(struct amba_device *adev, const struct amba_id *id)
 {
@@ -729,7 +705,6 @@
 	if (ret)
 		goto err_misc_register;
 
-	dev_info(dev, "ETB initialized\n");
 	return 0;
 
 err_misc_register:
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
index cbb4046..02d4b62 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
@@ -1221,26 +1221,19 @@
 	NULL,
 };
 
-#define coresight_simple_func(name, offset)                             \
-static ssize_t name##_show(struct device *_dev,                         \
-			   struct device_attribute *attr, char *buf)    \
-{                                                                       \
-	struct etm_drvdata *drvdata = dev_get_drvdata(_dev->parent);    \
-	return scnprintf(buf, PAGE_SIZE, "0x%x\n",                      \
-			 readl_relaxed(drvdata->base + offset));        \
-}                                                                       \
-DEVICE_ATTR_RO(name)
+#define coresight_etm3x_simple_func(name, offset)			\
+	coresight_simple_func(struct etm_drvdata, name, offset)
 
-coresight_simple_func(etmccr, ETMCCR);
-coresight_simple_func(etmccer, ETMCCER);
-coresight_simple_func(etmscr, ETMSCR);
-coresight_simple_func(etmidr, ETMIDR);
-coresight_simple_func(etmcr, ETMCR);
-coresight_simple_func(etmtraceidr, ETMTRACEIDR);
-coresight_simple_func(etmteevr, ETMTEEVR);
-coresight_simple_func(etmtssvr, ETMTSSCR);
-coresight_simple_func(etmtecr1, ETMTECR1);
-coresight_simple_func(etmtecr2, ETMTECR2);
+coresight_etm3x_simple_func(etmccr, ETMCCR);
+coresight_etm3x_simple_func(etmccer, ETMCCER);
+coresight_etm3x_simple_func(etmscr, ETMSCR);
+coresight_etm3x_simple_func(etmidr, ETMIDR);
+coresight_etm3x_simple_func(etmcr, ETMCR);
+coresight_etm3x_simple_func(etmtraceidr, ETMTRACEIDR);
+coresight_etm3x_simple_func(etmteevr, ETMTEEVR);
+coresight_etm3x_simple_func(etmtssvr, ETMTSSCR);
+coresight_etm3x_simple_func(etmtecr1, ETMTECR1);
+coresight_etm3x_simple_func(etmtecr2, ETMTECR2);
 
 static struct attribute *coresight_etm_mgmt_attrs[] = {
 	&dev_attr_etmccr.attr,
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
new file mode 100644
index 0000000..7c84308
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
@@ -0,0 +1,2126 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/pm_runtime.h>
+#include <linux/sysfs.h>
+#include "coresight-etm4x.h"
+
+static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude)
+{
+	u8 idx;
+	struct etmv4_config *config = &drvdata->config;
+
+	idx = config->addr_idx;
+
+	/*
+	 * TRCACATRn.TYPE bit[1:0]: type of comparison
+	 * the trace unit performs
+	 */
+	if (BMVAL(config->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) {
+		if (idx % 2 != 0)
+			return -EINVAL;
+
+		/*
+		 * We are performing instruction address comparison. Set the
+		 * relevant bit of ViewInst Include/Exclude Control register
+		 * for corresponding address comparator pair.
+		 */
+		if (config->addr_type[idx] != ETM_ADDR_TYPE_RANGE ||
+		    config->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE)
+			return -EINVAL;
+
+		if (exclude == true) {
+			/*
+			 * Set exclude bit and unset the include bit
+			 * corresponding to comparator pair
+			 */
+			config->viiectlr |= BIT(idx / 2 + 16);
+			config->viiectlr &= ~BIT(idx / 2);
+		} else {
+			/*
+			 * Set include bit and unset exclude bit
+			 * corresponding to comparator pair
+			 */
+			config->viiectlr |= BIT(idx / 2);
+			config->viiectlr &= ~BIT(idx / 2 + 16);
+		}
+	}
+	return 0;
+}
+
+static ssize_t nr_pe_cmp_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_pe_cmp;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_pe_cmp);
+
+static ssize_t nr_addr_cmp_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_addr_cmp;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_addr_cmp);
+
+static ssize_t nr_cntr_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_cntr;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_cntr);
+
+static ssize_t nr_ext_inp_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_ext_inp;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_ext_inp);
+
+static ssize_t numcidc_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->numcidc;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(numcidc);
+
+static ssize_t numvmidc_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->numvmidc;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(numvmidc);
+
+static ssize_t nrseqstate_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nrseqstate;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nrseqstate);
+
+static ssize_t nr_resource_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_resource;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_resource);
+
+static ssize_t nr_ss_cmp_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->nr_ss_cmp;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_ss_cmp);
+
+static ssize_t reset_store(struct device *dev,
+			   struct device_attribute *attr,
+			   const char *buf, size_t size)
+{
+	int i;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	if (val)
+		config->mode = 0x0;
+
+	/* Disable data tracing: do not trace load and store data transfers */
+	config->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE);
+	config->cfg &= ~(BIT(1) | BIT(2));
+
+	/* Disable data value and data address tracing */
+	config->mode &= ~(ETM_MODE_DATA_TRACE_ADDR |
+			   ETM_MODE_DATA_TRACE_VAL);
+	config->cfg &= ~(BIT(16) | BIT(17));
+
+	/* Disable all events tracing */
+	config->eventctrl0 = 0x0;
+	config->eventctrl1 = 0x0;
+
+	/* Disable timestamp event */
+	config->ts_ctrl = 0x0;
+
+	/* Disable stalling */
+	config->stall_ctrl = 0x0;
+
+	/* Reset trace synchronization period  to 2^8 = 256 bytes*/
+	if (drvdata->syncpr == false)
+		config->syncfreq = 0x8;
+
+	/*
+	 * Enable ViewInst to trace everything with start-stop logic in
+	 * started state. ARM recommends start-stop logic is set before
+	 * each trace run.
+	 */
+	config->vinst_ctrl |= BIT(0);
+	if (drvdata->nr_addr_cmp == true) {
+		config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
+		/* SSSTATUS, bit[9] */
+		config->vinst_ctrl |= BIT(9);
+	}
+
+	/* No address range filtering for ViewInst */
+	config->viiectlr = 0x0;
+
+	/* No start-stop filtering for ViewInst */
+	config->vissctlr = 0x0;
+
+	/* Disable seq events */
+	for (i = 0; i < drvdata->nrseqstate-1; i++)
+		config->seq_ctrl[i] = 0x0;
+	config->seq_rst = 0x0;
+	config->seq_state = 0x0;
+
+	/* Disable external input events */
+	config->ext_inp = 0x0;
+
+	config->cntr_idx = 0x0;
+	for (i = 0; i < drvdata->nr_cntr; i++) {
+		config->cntrldvr[i] = 0x0;
+		config->cntr_ctrl[i] = 0x0;
+		config->cntr_val[i] = 0x0;
+	}
+
+	config->res_idx = 0x0;
+	for (i = 0; i < drvdata->nr_resource; i++)
+		config->res_ctrl[i] = 0x0;
+
+	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
+		config->ss_ctrl[i] = 0x0;
+		config->ss_pe_cmp[i] = 0x0;
+	}
+
+	config->addr_idx = 0x0;
+	for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+		config->addr_val[i] = 0x0;
+		config->addr_acc[i] = 0x0;
+		config->addr_type[i] = ETM_ADDR_TYPE_NONE;
+	}
+
+	config->ctxid_idx = 0x0;
+	for (i = 0; i < drvdata->numcidc; i++) {
+		config->ctxid_pid[i] = 0x0;
+		config->ctxid_vpid[i] = 0x0;
+	}
+
+	config->ctxid_mask0 = 0x0;
+	config->ctxid_mask1 = 0x0;
+
+	config->vmid_idx = 0x0;
+	for (i = 0; i < drvdata->numvmidc; i++)
+		config->vmid_val[i] = 0x0;
+	config->vmid_mask0 = 0x0;
+	config->vmid_mask1 = 0x0;
+
+	drvdata->trcid = drvdata->cpu + 1;
+
+	spin_unlock(&drvdata->spinlock);
+
+	return size;
+}
+static DEVICE_ATTR_WO(reset);
+
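+/*
+ * 'mode' takes a bitmask of ETM_MODE_* / ETMv4_MODE_* flags (masked with
+ * ETMv4_MODE_ALL) and translates them into the corresponding TRCCONFIGR,
+ * event control, stall control and ViewInst control bits, honouring only
+ * the features the hardware actually implements (branch broadcast, cycle
+ * counting, return stack, Q elements, etc.).
+ */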
+static ssize_t mode_show(struct device *dev,
+			 struct device_attribute *attr,
+			 char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->mode;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t mode_store(struct device *dev,
+			  struct device_attribute *attr,
+			  const char *buf, size_t size)
+{
+	unsigned long val, mode;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	config->mode = val & ETMv4_MODE_ALL;
+
+	if (config->mode & ETM_MODE_EXCLUDE)
+		etm4_set_mode_exclude(drvdata, true);
+	else
+		etm4_set_mode_exclude(drvdata, false);
+
+	if (drvdata->instrp0 == true) {
+		/* start by clearing instruction P0 field */
+		config->cfg  &= ~(BIT(1) | BIT(2));
+		if (config->mode & ETM_MODE_LOAD)
+			/* 0b01 Trace load instructions as P0 instructions */
+			config->cfg  |= BIT(1);
+		if (config->mode & ETM_MODE_STORE)
+			/* 0b10 Trace store instructions as P0 instructions */
+			config->cfg  |= BIT(2);
+		if (config->mode & ETM_MODE_LOAD_STORE)
+			/*
+			 * 0b11 Trace load and store instructions
+			 * as P0 instructions
+			 */
+			config->cfg  |= BIT(1) | BIT(2);
+	}
+
+	/* bit[3], Branch broadcast mode */
+	if ((config->mode & ETM_MODE_BB) && (drvdata->trcbb == true))
+		config->cfg |= BIT(3);
+	else
+		config->cfg &= ~BIT(3);
+
+	/* bit[4], Cycle counting instruction trace bit */
+	if ((config->mode & ETMv4_MODE_CYCACC) &&
+		(drvdata->trccci == true))
+		config->cfg |= BIT(4);
+	else
+		config->cfg &= ~BIT(4);
+
+	/* bit[6], Context ID tracing bit */
+	if ((config->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size))
+		config->cfg |= BIT(6);
+	else
+		config->cfg &= ~BIT(6);
+
+	if ((config->mode & ETM_MODE_VMID) && (drvdata->vmid_size))
+		config->cfg |= BIT(7);
+	else
+		config->cfg &= ~BIT(7);
+
+	/* bits[10:8], Conditional instruction tracing bit */
+	mode = ETM_MODE_COND(config->mode);
+	if (drvdata->trccond == true) {
+		config->cfg &= ~(BIT(8) | BIT(9) | BIT(10));
+		config->cfg |= mode << 8;
+	}
+
+	/* bit[11], Global timestamp tracing bit */
+	if ((config->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size))
+		config->cfg |= BIT(11);
+	else
+		config->cfg &= ~BIT(11);
+
+	/* bit[12], Return stack enable bit */
+	if ((config->mode & ETM_MODE_RETURNSTACK) &&
+					(drvdata->retstack == true))
+		config->cfg |= BIT(12);
+	else
+		config->cfg &= ~BIT(12);
+
+	/* bits[14:13], Q element enable field */
+	mode = ETM_MODE_QELEM(config->mode);
+	/* start by clearing QE bits */
+	config->cfg &= ~(BIT(13) | BIT(14));
+	/* if supported, Q elements with instruction counts are enabled */
+	if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
+		config->cfg |= BIT(13);
+	/*
+	 * if supported, Q elements with and without instruction
+	 * counts are enabled
+	 */
+	if ((mode & BIT(1)) && (drvdata->q_support & BIT(1)))
+		config->cfg |= BIT(14);
+
+	/* bit[11], AMBA Trace Bus (ATB) trigger enable bit */
+	if ((config->mode & ETM_MODE_ATB_TRIGGER) &&
+	    (drvdata->atbtrig == true))
+		config->eventctrl1 |= BIT(11);
+	else
+		config->eventctrl1 &= ~BIT(11);
+
+	/* bit[12], Low-power state behavior override bit */
+	if ((config->mode & ETM_MODE_LPOVERRIDE) &&
+	    (drvdata->lpoverride == true))
+		config->eventctrl1 |= BIT(12);
+	else
+		config->eventctrl1 &= ~BIT(12);
+
+	/* bit[8], Instruction stall bit */
+	if (config->mode & ETM_MODE_ISTALL_EN)
+		config->stall_ctrl |= BIT(8);
+	else
+		config->stall_ctrl &= ~BIT(8);
+
+	/* bit[10], Prioritize instruction trace bit */
+	if (config->mode & ETM_MODE_INSTPRIO)
+		config->stall_ctrl |= BIT(10);
+	else
+		config->stall_ctrl &= ~BIT(10);
+
+	/* bit[13], Trace overflow prevention bit */
+	if ((config->mode & ETM_MODE_NOOVERFLOW) &&
+		(drvdata->nooverflow == true))
+		config->stall_ctrl |= BIT(13);
+	else
+		config->stall_ctrl &= ~BIT(13);
+
+	/* bit[9] Start/stop logic control bit */
+	if (config->mode & ETM_MODE_VIEWINST_STARTSTOP)
+		config->vinst_ctrl |= BIT(9);
+	else
+		config->vinst_ctrl &= ~BIT(9);
+
+	/* bit[10], Whether a trace unit must trace a Reset exception */
+	if (config->mode & ETM_MODE_TRACE_RESET)
+		config->vinst_ctrl |= BIT(10);
+	else
+		config->vinst_ctrl &= ~BIT(10);
+
+	/* bit[11], Whether a trace unit must trace a system error exception */
+	if ((config->mode & ETM_MODE_TRACE_ERR) &&
+		(drvdata->trc_error == true))
+		config->vinst_ctrl |= BIT(11);
+	else
+		config->vinst_ctrl &= ~BIT(11);
+
+	if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
+		etm4_config_trace_mode(config);
+
+	spin_unlock(&drvdata->spinlock);
+
+	return size;
+}
+static DEVICE_ATTR_RW(mode);
+
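+/*
+ * 'pe' selects the processing element to trace (TRCPROCSELR); values
+ * larger than the number of PEs available to the trace unit are rejected.
+ */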
+static ssize_t pe_show(struct device *dev,
+		       struct device_attribute *attr,
+		       char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->pe_sel;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t pe_store(struct device *dev,
+			struct device_attribute *attr,
+			const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	if (val > drvdata->nr_pe) {
+		spin_unlock(&drvdata->spinlock);
+		return -EINVAL;
+	}
+
+	config->pe_sel = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(pe);
+
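+/*
+ * 'event' programs the event selectors in TRCEVENTCTL0R.  The number of
+ * writable 8-bit EVENT fields depends on how many events the implementation
+ * supports (nr_event).
+ */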
+static ssize_t event_show(struct device *dev,
+			  struct device_attribute *attr,
+			  char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->eventctrl0;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_store(struct device *dev,
+			   struct device_attribute *attr,
+			   const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	switch (drvdata->nr_event) {
+	case 0x0:
+		/* EVENT0, bits[7:0] */
+		config->eventctrl0 = val & 0xFF;
+		break;
+	case 0x1:
+		/* EVENT1, bits[15:8] */
+		config->eventctrl0 = val & 0xFFFF;
+		break;
+	case 0x2:
+		/* EVENT2, bits[23:16] */
+		config->eventctrl0 = val & 0xFFFFFF;
+		break;
+	case 0x3:
+		/* EVENT3, bits[31:24] */
+		config->eventctrl0 = val;
+		break;
+	default:
+		break;
+	}
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(event);
+
+static ssize_t event_instren_show(struct device *dev,
+				  struct device_attribute *attr,
+				  char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = BMVAL(config->eventctrl1, 0, 3);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_instren_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	/* start by clearing all instruction event enable bits */
+	config->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3));
+	switch (drvdata->nr_event) {
+	case 0x0:
+		/* generate Event element for event 1 */
+		config->eventctrl1 |= val & BIT(1);
+		break;
+	case 0x1:
+		/* generate Event element for event 1 and 2 */
+		config->eventctrl1 |= val & (BIT(0) | BIT(1));
+		break;
+	case 0x2:
+		/* generate Event element for event 1, 2 and 3 */
+		config->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2));
+		break;
+	case 0x3:
+		/* generate Event element for all 4 events */
+		config->eventctrl1 |= val & 0xF;
+		break;
+	default:
+		break;
+	}
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(event_instren);
+
+static ssize_t event_ts_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->ts_ctrl;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_ts_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (!drvdata->ts_size)
+		return -EINVAL;
+
+	config->ts_ctrl = val & ETMv4_EVENT_MASK;
+	return size;
+}
+static DEVICE_ATTR_RW(event_ts);
+
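+/*
+ * 'syncfreq' sets the trace synchronization period (TRCSYNCPR).  Writes are
+ * rejected when the implementation fixes the synchronization period
+ * (syncpr set).
+ */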
+static ssize_t syncfreq_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->syncfreq;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t syncfreq_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (drvdata->syncpr == true)
+		return -EINVAL;
+
+	config->syncfreq = val & ETMv4_SYNC_MASK;
+	return size;
+}
+static DEVICE_ATTR_RW(syncfreq);
+
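+/*
+ * 'cyc_threshold' sets the cycle count threshold (TRCCCCTLR); values below
+ * the implementation-defined minimum (ccitmin) are rejected.
+ */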
+static ssize_t cyc_threshold_show(struct device *dev,
+				  struct device_attribute *attr,
+				  char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->ccctlr;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cyc_threshold_store(struct device *dev,
+				   struct device_attribute *attr,
+				   const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val < drvdata->ccitmin)
+		return -EINVAL;
+
+	config->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
+	return size;
+}
+static DEVICE_ATTR_RW(cyc_threshold);
+
+static ssize_t bb_ctrl_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->bb_ctrl;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t bb_ctrl_store(struct device *dev,
+			     struct device_attribute *attr,
+			     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (drvdata->trcbb == false)
+		return -EINVAL;
+	if (!drvdata->nr_addr_cmp)
+		return -EINVAL;
+	/*
+	 * Bits[7:0] select which address range comparator is used for
+	 * branch broadcast control.
+	 */
+	if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
+		return -EINVAL;
+
+	config->bb_ctrl = val;
+	return size;
+}
+static DEVICE_ATTR_RW(bb_ctrl);
+
+static ssize_t event_vinst_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->vinst_ctrl & ETMv4_EVENT_MASK;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_vinst_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	val &= ETMv4_EVENT_MASK;
+	config->vinst_ctrl &= ~ETMv4_EVENT_MASK;
+	config->vinst_ctrl |= val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(event_vinst);
+
+static ssize_t s_exlevel_vinst_show(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = BMVAL(config->vinst_ctrl, 16, 19);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t s_exlevel_vinst_store(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	/* clear all EXLEVEL_S bits (bit[18] is never implemented) */
+	config->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19));
+	/* enable instruction tracing for corresponding exception level */
+	val &= drvdata->s_ex_level;
+	config->vinst_ctrl |= (val << 16);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(s_exlevel_vinst);
+
+static ssize_t ns_exlevel_vinst_show(struct device *dev,
+				     struct device_attribute *attr,
+				     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	/* EXLEVEL_NS, bits[23:20] */
+	val = BMVAL(config->vinst_ctrl, 20, 23);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ns_exlevel_vinst_store(struct device *dev,
+				      struct device_attribute *attr,
+				      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	/* clear EXLEVEL_NS bits (bit[23] is never implemented) */
+	config->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22));
+	/* enable instruction tracing for corresponding exception level */
+	val &= drvdata->ns_ex_level;
+	config->vinst_ctrl |= (val << 20);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(ns_exlevel_vinst);
+
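+/*
+ * 'addr_idx' selects which of the 2 * nr_addr_cmp single address comparators
+ * the addr_* files below operate on.
+ */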
+static ssize_t addr_idx_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->addr_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_idx_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->nr_addr_cmp * 2)
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->addr_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_idx);
+
+static ssize_t addr_instdatatype_show(struct device *dev,
+				      struct device_attribute *attr,
+				      char *buf)
+{
+	ssize_t len;
+	u8 val, idx;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	val = BMVAL(config->addr_acc[idx], 0, 1);
+	len = scnprintf(buf, PAGE_SIZE, "%s\n",
+			val == ETM_INSTR_ADDR ? "instr" :
+			(val == ETM_DATA_LOAD_ADDR ? "data_load" :
+			(val == ETM_DATA_STORE_ADDR ? "data_store" :
+			"data_load_store")));
+	spin_unlock(&drvdata->spinlock);
+	return len;
+}
+
+static ssize_t addr_instdatatype_store(struct device *dev,
+				       struct device_attribute *attr,
+				       const char *buf, size_t size)
+{
+	u8 idx;
+	char str[20] = "";
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (strlen(buf) >= 20)
+		return -EINVAL;
+	if (sscanf(buf, "%s", str) != 1)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!strcmp(str, "instr"))
+		/* TYPE, bits[1:0] */
+		config->addr_acc[idx] &= ~(BIT(0) | BIT(1));
+
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_instdatatype);
+
+static ssize_t addr_single_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	      config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+	val = (unsigned long)config->addr_val[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_single_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	      config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	config->addr_val[idx] = (u64)val;
+	config->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_single);
+
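+/*
+ * 'addr_range' programs an address range comparator pair: the selected index
+ * must be even and the two boundaries are written in hex as "<start> <end>",
+ * e.g. (addresses purely illustrative):
+ *	echo 0x1000 0x2000 > addr_range
+ */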
+static ssize_t addr_range_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	u8 idx;
+	unsigned long val1, val2;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (idx % 2 != 0) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+	if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+	       config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+	      (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+	       config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	val1 = (unsigned long)config->addr_val[idx];
+	val2 = (unsigned long)config->addr_val[idx + 1];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t addr_range_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val1, val2;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+		return -EINVAL;
+	/* lower address comparator cannot have a higher address value */
+	if (val1 > val2)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (idx % 2 != 0) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+	       config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+	      (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+	       config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	config->addr_val[idx] = (u64)val1;
+	config->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
+	config->addr_val[idx + 1] = (u64)val2;
+	config->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
+	/*
+	 * Program include or exclude control bits for vinst or vdata
+	 * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
+	 */
+	if (config->mode & ETM_MODE_EXCLUDE)
+		etm4_set_mode_exclude(drvdata, true);
+	else
+		etm4_set_mode_exclude(drvdata, false);
+
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_range);
+
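+/*
+ * 'addr_start' and 'addr_stop' program single address comparators used by
+ * the ViewInst start-stop logic (TRCVISSCTLR); writing either one also turns
+ * the start-stop logic on (SSSTATUS, TRCVICTLR bit[9]).
+ */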
+static ssize_t addr_start_show(struct device *dev,
+			       struct device_attribute *attr,
+			       char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	      config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	val = (unsigned long)config->addr_val[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_start_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!drvdata->nr_addr_cmp) {
+		spin_unlock(&drvdata->spinlock);
+		return -EINVAL;
+	}
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	      config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	config->addr_val[idx] = (u64)val;
+	config->addr_type[idx] = ETM_ADDR_TYPE_START;
+	config->vissctlr |= BIT(idx);
+	/* SSSTATUS, bit[9] - turn on start/stop logic */
+	config->vinst_ctrl |= BIT(9);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_start);
+
+static ssize_t addr_stop_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	      config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	val = (unsigned long)config->addr_val[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_stop_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!drvdata->nr_addr_cmp) {
+		spin_unlock(&drvdata->spinlock);
+		return -EINVAL;
+	}
+	if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+	       config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+		spin_unlock(&drvdata->spinlock);
+		return -EPERM;
+	}
+
+	config->addr_val[idx] = (u64)val;
+	config->addr_type[idx] = ETM_ADDR_TYPE_STOP;
+	config->vissctlr |= BIT(idx + 16);
+	/* SSSTATUS, bit[9] - turn on start/stop logic */
+	config->vinst_ctrl |= BIT(9);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_stop);
+
+static ssize_t addr_ctxtype_show(struct device *dev,
+				 struct device_attribute *attr,
+				 char *buf)
+{
+	ssize_t len;
+	u8 idx, val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	/* CONTEXTTYPE, bits[3:2] */
+	val = BMVAL(config->addr_acc[idx], 2, 3);
+	len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
+			(val == ETM_CTX_CTXID ? "ctxid" :
+			(val == ETM_CTX_VMID ? "vmid" : "all")));
+	spin_unlock(&drvdata->spinlock);
+	return len;
+}
+
+static ssize_t addr_ctxtype_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t size)
+{
+	u8 idx;
+	char str[10] = "";
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (strlen(buf) >= 10)
+		return -EINVAL;
+	if (sscanf(buf, "%s", str) != 1)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	if (!strcmp(str, "none"))
+		/* start by clearing context type bits */
+		config->addr_acc[idx] &= ~(BIT(2) | BIT(3));
+	else if (!strcmp(str, "ctxid")) {
+		/* 0b01 The trace unit performs a Context ID comparison */
+		if (drvdata->numcidc) {
+			config->addr_acc[idx] |= BIT(2);
+			config->addr_acc[idx] &= ~BIT(3);
+		}
+	} else if (!strcmp(str, "vmid")) {
+		/* 0b10 The trace unit performs a VMID comparison */
+		if (drvdata->numvmidc) {
+			config->addr_acc[idx] &= ~BIT(2);
+			config->addr_acc[idx] |= BIT(3);
+		}
+	} else if (!strcmp(str, "all")) {
+		/*
+		 * 0b11 The trace unit performs a Context ID
+		 * comparison and a VMID comparison
+		 */
+		if (drvdata->numcidc)
+			config->addr_acc[idx] |= BIT(2);
+		if (drvdata->numvmidc)
+			config->addr_acc[idx] |= BIT(3);
+	}
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_ctxtype);
+
+static ssize_t addr_context_show(struct device *dev,
+				 struct device_attribute *attr,
+				 char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	/* context ID comparator bits[6:4] */
+	val = BMVAL(config->addr_acc[idx], 4, 6);
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_context_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1))
+		return -EINVAL;
+	if (val >=  (drvdata->numcidc >= drvdata->numvmidc ?
+		     drvdata->numcidc : drvdata->numvmidc))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->addr_idx;
+	/* clear context ID comparator bits[6:4] */
+	config->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
+	config->addr_acc[idx] |= (val << 4);
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(addr_context);
+
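+/*
+ * 'seq_idx' selects which of the nrseqstate - 1 sequencer state transition
+ * control registers 'seq_event' operates on.
+ */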
+static ssize_t seq_idx_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->seq_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_idx_store(struct device *dev,
+			     struct device_attribute *attr,
+			     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->nrseqstate - 1)
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->seq_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(seq_idx);
+
+static ssize_t seq_state_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->seq_state;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_state_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->nrseqstate)
+		return -EINVAL;
+
+	config->seq_state = val;
+	return size;
+}
+static DEVICE_ATTR_RW(seq_state);
+
+static ssize_t seq_event_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->seq_idx;
+	val = config->seq_ctrl[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_event_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->seq_idx;
+	/* RST, bits[7:0] */
+	config->seq_ctrl[idx] = val & 0xFF;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(seq_event);
+
+static ssize_t seq_reset_event_show(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->seq_rst;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_reset_event_store(struct device *dev,
+				     struct device_attribute *attr,
+				     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (!(drvdata->nrseqstate))
+		return -EINVAL;
+
+	config->seq_rst = val & ETMv4_EVENT_MASK;
+	return size;
+}
+static DEVICE_ATTR_RW(seq_reset_event);
+
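+/*
+ * 'cntr_idx' selects which of the nr_cntr counters the cntrldvr, cntr_val
+ * and cntr_ctrl files operate on.
+ */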
+static ssize_t cntr_idx_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->cntr_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_idx_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->nr_cntr)
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->cntr_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(cntr_idx);
+
+static ssize_t cntrldvr_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	val = config->cntrldvr[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntrldvr_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val > ETM_CNTR_MAX_VAL)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	config->cntrldvr[idx] = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(cntrldvr);
+
+static ssize_t cntr_val_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	val = config->cntr_val[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_val_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val > ETM_CNTR_MAX_VAL)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	config->cntr_val[idx] = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(cntr_val);
+
+static ssize_t cntr_ctrl_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	val = config->cntr_ctrl[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_ctrl_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->cntr_idx;
+	config->cntr_ctrl[idx] = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(cntr_ctrl);
+
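+/*
+ * 'res_idx' selects the resource selector 'res_ctrl' operates on.  Pair 0 is
+ * always implemented and reserved, so index 0 is rejected.
+ */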
+static ssize_t res_idx_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->res_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t res_idx_store(struct device *dev,
+			     struct device_attribute *attr,
+			     const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	/* Resource selector pair 0 is always implemented and reserved */
+	if ((val == 0) || (val >= drvdata->nr_resource))
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->res_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(res_idx);
+
+static ssize_t res_ctrl_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->res_idx;
+	val = config->res_ctrl[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t res_ctrl_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->res_idx;
+	/* For an odd idx the pair inversion (PAIRINV) bit is RES0 */
+	if (idx % 2 != 0)
+		/* PAIRINV, bit[21] */
+		val &= ~BIT(21);
+	config->res_ctrl[idx] = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(res_ctrl);
+
+static ssize_t ctxid_idx_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->ctxid_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ctxid_idx_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->numcidc)
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->ctxid_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(ctxid_idx);
+
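+/*
+ * 'ctxid_pid' takes a PID as seen from user space (a vPID).  The value is
+ * kept as written and also translated to the kernel view of the PID before
+ * being programmed in the selected context ID comparator.
+ */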
+static ssize_t ctxid_pid_show(struct device *dev,
+			      struct device_attribute *attr,
+			      char *buf)
+{
+	u8 idx;
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->ctxid_idx;
+	val = (unsigned long)config->ctxid_vpid[idx];
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ctxid_pid_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t size)
+{
+	u8 idx;
+	unsigned long vpid, pid;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	/*
+	 * only implemented when ctxid tracing is enabled, i.e. at least one
+	 * ctxid comparator is implemented and ctxid is greater than 0 bits
+	 * in length
+	 */
+	if (!drvdata->ctxid_size || !drvdata->numcidc)
+		return -EINVAL;
+	if (kstrtoul(buf, 16, &vpid))
+		return -EINVAL;
+
+	pid = coresight_vpid_to_pid(vpid);
+
+	spin_lock(&drvdata->spinlock);
+	idx = config->ctxid_idx;
+	config->ctxid_pid[idx] = (u64)pid;
+	config->ctxid_vpid[idx] = (u64)vpid;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(ctxid_pid);
+
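+/*
+ * 'ctxid_masks' takes two mask words, "<mask0> <mask1>".  Each byte of a
+ * mask controls one context ID comparator and each bit within that byte
+ * masks one byte of the comparator value; masked bytes of the stored
+ * comparator values are cleared accordingly.
+ */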
+static ssize_t ctxid_masks_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	unsigned long val1, val2;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	val1 = config->ctxid_mask0;
+	val2 = config->ctxid_mask1;
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t ctxid_masks_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t size)
+{
+	u8 i, j, maskbyte;
+	unsigned long val1, val2, mask;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	/*
+	 * only implemented when ctxid tracing is enabled, i.e. at least one
+	 * ctxid comparator is implemented and ctxid is greater than 0 bits
+	 * in length
+	 */
+	if (!drvdata->ctxid_size || !drvdata->numcidc)
+		return -EINVAL;
+	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	/*
+	 * each byte[0..3] controls mask value applied to ctxid
+	 * comparator[0..3]
+	 */
+	switch (drvdata->numcidc) {
+	case 0x1:
+		/* COMP0, bits[7:0] */
+		config->ctxid_mask0 = val1 & 0xFF;
+		break;
+	case 0x2:
+		/* COMP1, bits[15:8] */
+		config->ctxid_mask0 = val1 & 0xFFFF;
+		break;
+	case 0x3:
+		/* COMP2, bits[23:16] */
+		config->ctxid_mask0 = val1 & 0xFFFFFF;
+		break;
+	case 0x4:
+		/* COMP3, bits[31:24] */
+		config->ctxid_mask0 = val1;
+		break;
+	case 0x5:
+		/* COMP4, bits[7:0] */
+		config->ctxid_mask0 = val1;
+		config->ctxid_mask1 = val2 & 0xFF;
+		break;
+	case 0x6:
+		/* COMP5, bits[15:8] */
+		config->ctxid_mask0 = val1;
+		config->ctxid_mask1 = val2 & 0xFFFF;
+		break;
+	case 0x7:
+		/* COMP6, bits[23:16] */
+		config->ctxid_mask0 = val1;
+		config->ctxid_mask1 = val2 & 0xFFFFFF;
+		break;
+	case 0x8:
+		/* COMP7, bits[31:24] */
+		config->ctxid_mask0 = val1;
+		config->ctxid_mask1 = val2;
+		break;
+	default:
+		break;
+	}
+	/*
+	 * If software sets a mask bit to 1, it must program the relevant byte
+	 * of the ctxid comparator value to 0x0, otherwise the behavior is
+	 * unpredictable. For example, if bit[3] of ctxid_mask0 is 1, we must
+	 * clear bits[31:24] (byte 3) of the ctxid comparator 0 value register.
+	 */
+	mask = config->ctxid_mask0;
+	for (i = 0; i < drvdata->numcidc; i++) {
+		/* mask value of corresponding ctxid comparator */
+		maskbyte = mask & ETMv4_EVENT_MASK;
+		/*
+		 * each bit corresponds to a byte of respective ctxid comparator
+		 * value register
+		 */
+		for (j = 0; j < 8; j++) {
+			if (maskbyte & 1)
+				config->ctxid_pid[i] &= ~(0xFF << (j * 8));
+			maskbyte >>= 1;
+		}
+		/* Select the next ctxid comparator mask value */
+		if (i == 3)
+			/* ctxid comparators[4-7] */
+			mask = config->ctxid_mask1;
+		else
+			mask >>= 0x8;
+	}
+
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(ctxid_masks);
+
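+/* The vmid_* files mirror the ctxid_* files for the VMID comparators. */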
+static ssize_t vmid_idx_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = config->vmid_idx;
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t vmid_idx_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+	if (val >= drvdata->numvmidc)
+		return -EINVAL;
+
+	/*
+	 * Use spinlock to ensure index doesn't change while it gets
+	 * dereferenced multiple times within a spinlock block elsewhere.
+	 */
+	spin_lock(&drvdata->spinlock);
+	config->vmid_idx = val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(vmid_idx);
+
+static ssize_t vmid_val_show(struct device *dev,
+			     struct device_attribute *attr,
+			     char *buf)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	val = (unsigned long)config->vmid_val[config->vmid_idx];
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t vmid_val_store(struct device *dev,
+			      struct device_attribute *attr,
+			      const char *buf, size_t size)
+{
+	unsigned long val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	/*
+	 * only implemented when vmid tracing is enabled, i.e. at least one
+	 * vmid comparator is implemented and at least 8 bit vmid size
+	 */
+	if (!drvdata->vmid_size || !drvdata->numvmidc)
+		return -EINVAL;
+	if (kstrtoul(buf, 16, &val))
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+	config->vmid_val[config->vmid_idx] = (u64)val;
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(vmid_val);
+
+static ssize_t vmid_masks_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
+{
+	unsigned long val1, val2;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	spin_lock(&drvdata->spinlock);
+	val1 = config->vmid_mask0;
+	val2 = config->vmid_mask1;
+	spin_unlock(&drvdata->spinlock);
+	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t vmid_masks_store(struct device *dev,
+				struct device_attribute *attr,
+				const char *buf, size_t size)
+{
+	u8 i, j, maskbyte;
+	unsigned long val1, val2, mask;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	struct etmv4_config *config = &drvdata->config;
+
+	/*
+	 * only implemented when vmid tracing is enabled, i.e. at least one
+	 * vmid comparator is implemented and at least 8 bit vmid size
+	 */
+	if (!drvdata->vmid_size || !drvdata->numvmidc)
+		return -EINVAL;
+	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+		return -EINVAL;
+
+	spin_lock(&drvdata->spinlock);
+
+	/*
+	 * each byte[0..3] controls mask value applied to vmid
+	 * comparator[0..3]
+	 */
+	switch (drvdata->numvmidc) {
+	case 0x1:
+		/* COMP0, bits[7:0] */
+		config->vmid_mask0 = val1 & 0xFF;
+		break;
+	case 0x2:
+		/* COMP1, bits[15:8] */
+		config->vmid_mask0 = val1 & 0xFFFF;
+		break;
+	case 0x3:
+		/* COMP2, bits[23:16] */
+		config->vmid_mask0 = val1 & 0xFFFFFF;
+		break;
+	case 0x4:
+		/* COMP3, bits[31:24] */
+		config->vmid_mask0 = val1;
+		break;
+	case 0x5:
+		/* COMP4, bits[7:0] */
+		config->vmid_mask0 = val1;
+		config->vmid_mask1 = val2 & 0xFF;
+		break;
+	case 0x6:
+		/* COMP5, bits[15:8] */
+		config->vmid_mask0 = val1;
+		config->vmid_mask1 = val2 & 0xFFFF;
+		break;
+	case 0x7:
+		/* COMP6, bits[23:16] */
+		config->vmid_mask0 = val1;
+		config->vmid_mask1 = val2 & 0xFFFFFF;
+		break;
+	case 0x8:
+		/* COMP7, bits[31:24] */
+		config->vmid_mask0 = val1;
+		config->vmid_mask1 = val2;
+		break;
+	default:
+		break;
+	}
+
+	/*
+	 * If software sets a mask bit to 1, it must program the relevant byte
+	 * of the vmid comparator value to 0x0, otherwise the behavior is
+	 * unpredictable. For example, if bit[3] of vmid_mask0 is 1, we must
+	 * clear bits[31:24] (byte 3) of the vmid comparator 0 value register.
+	 */
+	mask = config->vmid_mask0;
+	for (i = 0; i < drvdata->numvmidc; i++) {
+		/* mask value of corresponding vmid comparator */
+		maskbyte = mask & ETMv4_EVENT_MASK;
+		/*
+		 * each bit corresponds to a byte of respective vmid comparator
+		 * value register
+		 */
+		for (j = 0; j < 8; j++) {
+			if (maskbyte & 1)
+				config->vmid_val[i] &= ~(0xFF << (j * 8));
+			maskbyte >>= 1;
+		}
+		/* Select the next vmid comparator mask value */
+		if (i == 3)
+			/* vmid comparators[4-7] */
+			mask = config->vmid_mask1;
+		else
+			mask >>= 0x8;
+	}
+	spin_unlock(&drvdata->spinlock);
+	return size;
+}
+static DEVICE_ATTR_RW(vmid_masks);
+
+static ssize_t cpu_show(struct device *dev,
+			struct device_attribute *attr, char *buf)
+{
+	int val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->cpu;
+	return scnprintf(buf, PAGE_SIZE, "%d\n", val);
+}
+static DEVICE_ATTR_RO(cpu);
+
+static struct attribute *coresight_etmv4_attrs[] = {
+	&dev_attr_nr_pe_cmp.attr,
+	&dev_attr_nr_addr_cmp.attr,
+	&dev_attr_nr_cntr.attr,
+	&dev_attr_nr_ext_inp.attr,
+	&dev_attr_numcidc.attr,
+	&dev_attr_numvmidc.attr,
+	&dev_attr_nrseqstate.attr,
+	&dev_attr_nr_resource.attr,
+	&dev_attr_nr_ss_cmp.attr,
+	&dev_attr_reset.attr,
+	&dev_attr_mode.attr,
+	&dev_attr_pe.attr,
+	&dev_attr_event.attr,
+	&dev_attr_event_instren.attr,
+	&dev_attr_event_ts.attr,
+	&dev_attr_syncfreq.attr,
+	&dev_attr_cyc_threshold.attr,
+	&dev_attr_bb_ctrl.attr,
+	&dev_attr_event_vinst.attr,
+	&dev_attr_s_exlevel_vinst.attr,
+	&dev_attr_ns_exlevel_vinst.attr,
+	&dev_attr_addr_idx.attr,
+	&dev_attr_addr_instdatatype.attr,
+	&dev_attr_addr_single.attr,
+	&dev_attr_addr_range.attr,
+	&dev_attr_addr_start.attr,
+	&dev_attr_addr_stop.attr,
+	&dev_attr_addr_ctxtype.attr,
+	&dev_attr_addr_context.attr,
+	&dev_attr_seq_idx.attr,
+	&dev_attr_seq_state.attr,
+	&dev_attr_seq_event.attr,
+	&dev_attr_seq_reset_event.attr,
+	&dev_attr_cntr_idx.attr,
+	&dev_attr_cntrldvr.attr,
+	&dev_attr_cntr_val.attr,
+	&dev_attr_cntr_ctrl.attr,
+	&dev_attr_res_idx.attr,
+	&dev_attr_res_ctrl.attr,
+	&dev_attr_ctxid_idx.attr,
+	&dev_attr_ctxid_pid.attr,
+	&dev_attr_ctxid_masks.attr,
+	&dev_attr_vmid_idx.attr,
+	&dev_attr_vmid_val.attr,
+	&dev_attr_vmid_masks.attr,
+	&dev_attr_cpu.attr,
+	NULL,
+};
+
+#define coresight_etm4x_simple_func(name, offset)			\
+	coresight_simple_func(struct etmv4_drvdata, name, offset)
+
+coresight_etm4x_simple_func(trcoslsr, TRCOSLSR);
+coresight_etm4x_simple_func(trcpdcr, TRCPDCR);
+coresight_etm4x_simple_func(trcpdsr, TRCPDSR);
+coresight_etm4x_simple_func(trclsr, TRCLSR);
+coresight_etm4x_simple_func(trcconfig, TRCCONFIGR);
+coresight_etm4x_simple_func(trctraceid, TRCTRACEIDR);
+coresight_etm4x_simple_func(trcauthstatus, TRCAUTHSTATUS);
+coresight_etm4x_simple_func(trcdevid, TRCDEVID);
+coresight_etm4x_simple_func(trcdevtype, TRCDEVTYPE);
+coresight_etm4x_simple_func(trcpidr0, TRCPIDR0);
+coresight_etm4x_simple_func(trcpidr1, TRCPIDR1);
+coresight_etm4x_simple_func(trcpidr2, TRCPIDR2);
+coresight_etm4x_simple_func(trcpidr3, TRCPIDR3);
+
+static struct attribute *coresight_etmv4_mgmt_attrs[] = {
+	&dev_attr_trcoslsr.attr,
+	&dev_attr_trcpdcr.attr,
+	&dev_attr_trcpdsr.attr,
+	&dev_attr_trclsr.attr,
+	&dev_attr_trcconfig.attr,
+	&dev_attr_trctraceid.attr,
+	&dev_attr_trcauthstatus.attr,
+	&dev_attr_trcdevid.attr,
+	&dev_attr_trcdevtype.attr,
+	&dev_attr_trcpidr0.attr,
+	&dev_attr_trcpidr1.attr,
+	&dev_attr_trcpidr2.attr,
+	&dev_attr_trcpidr3.attr,
+	NULL,
+};
+
+coresight_etm4x_simple_func(trcidr0, TRCIDR0);
+coresight_etm4x_simple_func(trcidr1, TRCIDR1);
+coresight_etm4x_simple_func(trcidr2, TRCIDR2);
+coresight_etm4x_simple_func(trcidr3, TRCIDR3);
+coresight_etm4x_simple_func(trcidr4, TRCIDR4);
+coresight_etm4x_simple_func(trcidr5, TRCIDR5);
+/* trcidr[6,7] are reserved */
+coresight_etm4x_simple_func(trcidr8, TRCIDR8);
+coresight_etm4x_simple_func(trcidr9, TRCIDR9);
+coresight_etm4x_simple_func(trcidr10, TRCIDR10);
+coresight_etm4x_simple_func(trcidr11, TRCIDR11);
+coresight_etm4x_simple_func(trcidr12, TRCIDR12);
+coresight_etm4x_simple_func(trcidr13, TRCIDR13);
+
+static struct attribute *coresight_etmv4_trcidr_attrs[] = {
+	&dev_attr_trcidr0.attr,
+	&dev_attr_trcidr1.attr,
+	&dev_attr_trcidr2.attr,
+	&dev_attr_trcidr3.attr,
+	&dev_attr_trcidr4.attr,
+	&dev_attr_trcidr5.attr,
+	/* trcidr[6,7] are reserved */
+	&dev_attr_trcidr8.attr,
+	&dev_attr_trcidr9.attr,
+	&dev_attr_trcidr10.attr,
+	&dev_attr_trcidr11.attr,
+	&dev_attr_trcidr12.attr,
+	&dev_attr_trcidr13.attr,
+	NULL,
+};
+
+static const struct attribute_group coresight_etmv4_group = {
+	.attrs = coresight_etmv4_attrs,
+};
+
+static const struct attribute_group coresight_etmv4_mgmt_group = {
+	.attrs = coresight_etmv4_mgmt_attrs,
+	.name = "mgmt",
+};
+
+static const struct attribute_group coresight_etmv4_trcidr_group = {
+	.attrs = coresight_etmv4_trcidr_attrs,
+	.name = "trcidr",
+};
+
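+/* Base directory attributes plus the mgmt/ and trcidr/ register groups. */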
+const struct attribute_group *coresight_etmv4_groups[] = {
+	&coresight_etmv4_group,
+	&coresight_etmv4_mgmt_group,
+	&coresight_etmv4_trcidr_group,
+	NULL,
+};
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
index 1c59bd3..462f0dc 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
@@ -26,15 +26,19 @@
 #include <linux/clk.h>
 #include <linux/cpu.h>
 #include <linux/coresight.h>
+#include <linux/coresight-pmu.h>
 #include <linux/pm_wakeup.h>
 #include <linux/amba/bus.h>
 #include <linux/seq_file.h>
 #include <linux/uaccess.h>
 #include <linux/pm_runtime.h>
 #include <linux/perf_event.h>
 #include <asm/sections.h>
+#include <asm/local.h>
 
 #include "coresight-etm4x.h"
+#include "coresight-etm-perf.h"
 
 static int boot_enable;
 module_param_named(boot_enable, boot_enable, int, S_IRUGO);
@@ -42,13 +46,13 @@
 /* The number of ETMv4 currently registered */
 static int etm4_count;
 static struct etmv4_drvdata *etmdrvdata[NR_CPUS];
+static void etm4_set_default(struct etmv4_config *config);
 
-static void etm4_os_unlock(void *info)
+static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
 {
-	struct etmv4_drvdata *drvdata = (struct etmv4_drvdata *)info;
-
 	/* Writing any value to ETMOSLAR unlocks the trace registers */
 	writel_relaxed(0x0, drvdata->base + TRCOSLAR);
+	drvdata->os_unlock = true;
 	isb();
 }
 
@@ -76,7 +80,7 @@
 	unsigned long flags;
 	int trace_id = -1;
 
-	if (!drvdata->enable)
+	if (!local_read(&drvdata->mode))
 		return drvdata->trcid;
 
 	spin_lock_irqsave(&drvdata->spinlock, flags);
@@ -95,6 +99,7 @@
 {
 	int i;
 	struct etmv4_drvdata *drvdata = info;
+	struct etmv4_config *config = &drvdata->config;
 
 	CS_UNLOCK(drvdata->base);
 
@@ -109,69 +114,69 @@
 			"timeout observed when probing at offset %#x\n",
 			TRCSTATR);
 
-	writel_relaxed(drvdata->pe_sel, drvdata->base + TRCPROCSELR);
-	writel_relaxed(drvdata->cfg, drvdata->base + TRCCONFIGR);
+	writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR);
+	writel_relaxed(config->cfg, drvdata->base + TRCCONFIGR);
 	/* nothing specific implemented */
 	writel_relaxed(0x0, drvdata->base + TRCAUXCTLR);
-	writel_relaxed(drvdata->eventctrl0, drvdata->base + TRCEVENTCTL0R);
-	writel_relaxed(drvdata->eventctrl1, drvdata->base + TRCEVENTCTL1R);
-	writel_relaxed(drvdata->stall_ctrl, drvdata->base + TRCSTALLCTLR);
-	writel_relaxed(drvdata->ts_ctrl, drvdata->base + TRCTSCTLR);
-	writel_relaxed(drvdata->syncfreq, drvdata->base + TRCSYNCPR);
-	writel_relaxed(drvdata->ccctlr, drvdata->base + TRCCCCTLR);
-	writel_relaxed(drvdata->bb_ctrl, drvdata->base + TRCBBCTLR);
+	writel_relaxed(config->eventctrl0, drvdata->base + TRCEVENTCTL0R);
+	writel_relaxed(config->eventctrl1, drvdata->base + TRCEVENTCTL1R);
+	writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR);
+	writel_relaxed(config->ts_ctrl, drvdata->base + TRCTSCTLR);
+	writel_relaxed(config->syncfreq, drvdata->base + TRCSYNCPR);
+	writel_relaxed(config->ccctlr, drvdata->base + TRCCCCTLR);
+	writel_relaxed(config->bb_ctrl, drvdata->base + TRCBBCTLR);
 	writel_relaxed(drvdata->trcid, drvdata->base + TRCTRACEIDR);
-	writel_relaxed(drvdata->vinst_ctrl, drvdata->base + TRCVICTLR);
-	writel_relaxed(drvdata->viiectlr, drvdata->base + TRCVIIECTLR);
-	writel_relaxed(drvdata->vissctlr,
+	writel_relaxed(config->vinst_ctrl, drvdata->base + TRCVICTLR);
+	writel_relaxed(config->viiectlr, drvdata->base + TRCVIIECTLR);
+	writel_relaxed(config->vissctlr,
 		       drvdata->base + TRCVISSCTLR);
-	writel_relaxed(drvdata->vipcssctlr,
+	writel_relaxed(config->vipcssctlr,
 		       drvdata->base + TRCVIPCSSCTLR);
 	for (i = 0; i < drvdata->nrseqstate - 1; i++)
-		writel_relaxed(drvdata->seq_ctrl[i],
+		writel_relaxed(config->seq_ctrl[i],
 			       drvdata->base + TRCSEQEVRn(i));
-	writel_relaxed(drvdata->seq_rst, drvdata->base + TRCSEQRSTEVR);
-	writel_relaxed(drvdata->seq_state, drvdata->base + TRCSEQSTR);
-	writel_relaxed(drvdata->ext_inp, drvdata->base + TRCEXTINSELR);
+	writel_relaxed(config->seq_rst, drvdata->base + TRCSEQRSTEVR);
+	writel_relaxed(config->seq_state, drvdata->base + TRCSEQSTR);
+	writel_relaxed(config->ext_inp, drvdata->base + TRCEXTINSELR);
 	for (i = 0; i < drvdata->nr_cntr; i++) {
-		writel_relaxed(drvdata->cntrldvr[i],
+		writel_relaxed(config->cntrldvr[i],
 			       drvdata->base + TRCCNTRLDVRn(i));
-		writel_relaxed(drvdata->cntr_ctrl[i],
+		writel_relaxed(config->cntr_ctrl[i],
 			       drvdata->base + TRCCNTCTLRn(i));
-		writel_relaxed(drvdata->cntr_val[i],
+		writel_relaxed(config->cntr_val[i],
 			       drvdata->base + TRCCNTVRn(i));
 	}
 
 	/* Resource selector pair 0 is always implemented and reserved */
-	for (i = 2; i < drvdata->nr_resource * 2; i++)
-		writel_relaxed(drvdata->res_ctrl[i],
+	for (i = 0; i < drvdata->nr_resource * 2; i++)
+		writel_relaxed(config->res_ctrl[i],
 			       drvdata->base + TRCRSCTLRn(i));
 
 	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
-		writel_relaxed(drvdata->ss_ctrl[i],
+		writel_relaxed(config->ss_ctrl[i],
 			       drvdata->base + TRCSSCCRn(i));
-		writel_relaxed(drvdata->ss_status[i],
+		writel_relaxed(config->ss_status[i],
 			       drvdata->base + TRCSSCSRn(i));
-		writel_relaxed(drvdata->ss_pe_cmp[i],
+		writel_relaxed(config->ss_pe_cmp[i],
 			       drvdata->base + TRCSSPCICRn(i));
 	}
 	for (i = 0; i < drvdata->nr_addr_cmp; i++) {
-		writeq_relaxed(drvdata->addr_val[i],
+		writeq_relaxed(config->addr_val[i],
 			       drvdata->base + TRCACVRn(i));
-		writeq_relaxed(drvdata->addr_acc[i],
+		writeq_relaxed(config->addr_acc[i],
 			       drvdata->base + TRCACATRn(i));
 	}
 	for (i = 0; i < drvdata->numcidc; i++)
-		writeq_relaxed(drvdata->ctxid_pid[i],
+		writeq_relaxed(config->ctxid_pid[i],
 			       drvdata->base + TRCCIDCVRn(i));
-	writel_relaxed(drvdata->ctxid_mask0, drvdata->base + TRCCIDCCTLR0);
-	writel_relaxed(drvdata->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
+	writel_relaxed(config->ctxid_mask0, drvdata->base + TRCCIDCCTLR0);
+	writel_relaxed(config->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
 
 	for (i = 0; i < drvdata->numvmidc; i++)
-		writeq_relaxed(drvdata->vmid_val[i],
+		writeq_relaxed(config->vmid_val[i],
 			       drvdata->base + TRCVMIDCVRn(i));
-	writel_relaxed(drvdata->vmid_mask0, drvdata->base + TRCVMIDCCTLR0);
-	writel_relaxed(drvdata->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
+	writel_relaxed(config->vmid_mask0, drvdata->base + TRCVMIDCCTLR0);
+	writel_relaxed(config->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
 
 	/* Enable the trace unit */
 	writel_relaxed(1, drvdata->base + TRCPRGCTLR);
@@ -187,8 +192,59 @@
 	dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
 }
 
-static int etm4_enable(struct coresight_device *csdev,
-		       struct perf_event_attr *attr, u32 mode)
+static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
+				   struct perf_event_attr *attr)
+{
+	struct etmv4_config *config = &drvdata->config;
+
+	if (!attr)
+		return -EINVAL;
+
+	/* Clear configuration from previous run */
+	memset(config, 0, sizeof(struct etmv4_config));
+
+	if (attr->exclude_kernel)
+		config->mode = ETM_MODE_EXCL_KERN;
+
+	if (attr->exclude_user)
+		config->mode = ETM_MODE_EXCL_USER;
+
+	/* Always start from the default config */
+	etm4_set_default(config);
+
+	/*
+	 * By default the tracers are configured to trace the whole address
+	 * range.  Narrow the field only if requested by user space.
+	 */
+	if (config->mode)
+		etm4_config_trace_mode(config);
+
+	/* Go from generic option to ETMv4 specifics */
+	if (attr->config & BIT(ETM_OPT_CYCACC))
+		config->cfg |= ETMv4_MODE_CYCACC;
+	if (attr->config & BIT(ETM_OPT_TS))
+		config->cfg |= ETMv4_MODE_TIMESTAMP;
+
+	return 0;
+}
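
The helper above is where a perf session's generic attributes are folded into the ETMv4 programming model: exclude_kernel/exclude_user pick the trace mode, and the ETM_OPT_CYCACC / ETM_OPT_TS bits carried in attr->config become ETMv4_MODE_CYCACC / ETMv4_MODE_TIMESTAMP in config->cfg. Below is a minimal user-space sketch of that mapping; the OPT_*/MODE_* values, the struct names and parse_event_config() are mocks standing in for the kernel definitions, not the driver's API.

#include <stdint.h>
#include <stdio.h>

/* Mock stand-ins for the kernel definitions; the values are illustrative only. */
#define OPT_CYCACC	(1u << 0)	/* cycle-accurate tracing requested */
#define OPT_TS		(1u << 1)	/* global timestamps requested */
#define MODE_CYCACC	(1u << 4)
#define MODE_TS		(1u << 11)

struct mock_attr { uint64_t config; int exclude_kernel; int exclude_user; };
struct mock_cfg  { uint32_t mode; uint32_t cfg; };

/* Mirrors the shape of etm4_parse_event_config(): wipe, then narrow as requested. */
static int parse_event_config(struct mock_cfg *c, const struct mock_attr *a)
{
	if (!a)
		return -1;

	*c = (struct mock_cfg){ 0 };	/* clear configuration from previous run */

	if (a->exclude_kernel)
		c->mode = 1;		/* trace user space only */
	if (a->exclude_user)
		c->mode = 2;		/* trace kernel space only */

	if (a->config & OPT_CYCACC)
		c->cfg |= MODE_CYCACC;
	if (a->config & OPT_TS)
		c->cfg |= MODE_TS;

	return 0;
}

int main(void)
{
	struct mock_attr attr = { .config = OPT_CYCACC | OPT_TS, .exclude_kernel = 1 };
	struct mock_cfg cfg;

	parse_event_config(&cfg, &attr);
	printf("mode=%#x cfg=%#x\n", cfg.mode, cfg.cfg);
	return 0;
}
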
+
+static int etm4_enable_perf(struct coresight_device *csdev,
+			    struct perf_event_attr *attr)
+{
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
+		return -EINVAL;
+
+	/* Configure the tracer based on the session's specifics */
+	etm4_parse_event_config(drvdata, attr);
+	/* And enable it */
+	etm4_enable_hw(drvdata);
+
+	return 0;
+}
+
+static int etm4_enable_sysfs(struct coresight_device *csdev)
 {
 	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 	int ret;
@@ -203,18 +259,49 @@
 				       etm4_enable_hw, drvdata, 1);
 	if (ret)
 		goto err;
-	drvdata->enable = true;
-	drvdata->sticky_enable = true;
 
+	drvdata->sticky_enable = true;
 	spin_unlock(&drvdata->spinlock);
 
 	dev_info(drvdata->dev, "ETM tracing enabled\n");
 	return 0;
+
 err:
 	spin_unlock(&drvdata->spinlock);
 	return ret;
 }
 
+static int etm4_enable(struct coresight_device *csdev,
+		       struct perf_event_attr *attr, u32 mode)
+{
+	int ret;
+	u32 val;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode);
+
+	/* Someone is already using the tracer */
+	if (val)
+		return -EBUSY;
+
+	switch (mode) {
+	case CS_MODE_SYSFS:
+		ret = etm4_enable_sysfs(csdev);
+		break;
+	case CS_MODE_PERF:
+		ret = etm4_enable_perf(csdev, attr);
+		break;
+	default:
+		ret = -EINVAL;
+	}
+
+	/* The tracer didn't start */
+	if (ret)
+		local_set(&drvdata->mode, CS_MODE_DISABLED);
+
+	return ret;
+}
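
etm4_enable() above arbitrates between the sysFS and perf interfaces with a single local_cmpxchg() on drvdata->mode: only the caller that wins the transition away from CS_MODE_DISABLED owns the tracer, every other caller gets -EBUSY, and the mode is rolled back if the chosen enable path fails. The stand-alone sketch below shows the same try-lock pattern with C11 atomics in place of the kernel's local_t; the tracer_enable()/tracer_disable() names and mode values are illustrative.

#include <stdatomic.h>
#include <stdio.h>

enum { MODE_DISABLED = 0, MODE_SYSFS = 1, MODE_PERF = 2 };

static atomic_int mode = MODE_DISABLED;

/*
 * Claim the tracer for 'want'.  Only the winner of the DISABLED -> want
 * transition proceeds, mirroring the local_cmpxchg() in etm4_enable().
 */
static int tracer_enable(int want)
{
	int expected = MODE_DISABLED;

	if (!atomic_compare_exchange_strong(&mode, &expected, want))
		return -1;		/* someone is already using the tracer */

	/* ... program the hardware here; on failure, roll the mode back ... */
	return 0;
}

static void tracer_disable(void)
{
	/* Only the current owner calls this, so a plain store is enough. */
	atomic_store(&mode, MODE_DISABLED);
}

int main(void)
{
	printf("perf:  %d\n", tracer_enable(MODE_PERF));	/* 0: claimed */
	printf("sysfs: %d\n", tracer_enable(MODE_SYSFS));	/* -1: busy */
	tracer_disable();
	printf("sysfs: %d\n", tracer_enable(MODE_SYSFS));	/* 0: claimed */
	return 0;
}
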
+
 static void etm4_disable_hw(void *info)
 {
 	u32 control;
@@ -237,7 +324,18 @@
 	dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
 }
 
-static void etm4_disable(struct coresight_device *csdev)
+static int etm4_disable_perf(struct coresight_device *csdev)
+{
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
+		return -EINVAL;
+
+	etm4_disable_hw(drvdata);
+	return 0;
+}
+
+static void etm4_disable_sysfs(struct coresight_device *csdev)
 {
 	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
 
@@ -255,7 +353,6 @@
 	 * ensures that register writes occur when cpu is powered.
 	 */
 	smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
-	drvdata->enable = false;
 
 	spin_unlock(&drvdata->spinlock);
 	put_online_cpus();
@@ -263,6 +360,33 @@
 	dev_info(drvdata->dev, "ETM tracing disabled\n");
 }
 
+static void etm4_disable(struct coresight_device *csdev)
+{
+	u32 mode;
+	struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	/*
+	 * For as long as the tracer isn't disabled another entity can't
+	 * change its status.  As such we can read the status here without
+	 * fearing it will change under us.
+	 */
+	mode = local_read(&drvdata->mode);
+
+	switch (mode) {
+	case CS_MODE_DISABLED:
+		break;
+	case CS_MODE_SYSFS:
+		etm4_disable_sysfs(csdev);
+		break;
+	case CS_MODE_PERF:
+		etm4_disable_perf(csdev);
+		break;
+	}
+
+	if (mode)
+		local_set(&drvdata->mode, CS_MODE_DISABLED);
+}
+
 static const struct coresight_ops_source etm4_source_ops = {
 	.cpu_id		= etm4_cpu_id,
 	.trace_id	= etm4_trace_id,
@@ -274,2035 +398,6 @@
 	.source_ops	= &etm4_source_ops,
 };
 
-static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude)
-{
-	u8 idx = drvdata->addr_idx;
-
-	/*
-	 * TRCACATRn.TYPE bit[1:0]: type of comparison
-	 * the trace unit performs
-	 */
-	if (BMVAL(drvdata->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) {
-		if (idx % 2 != 0)
-			return -EINVAL;
-
-		/*
-		 * We are performing instruction address comparison. Set the
-		 * relevant bit of ViewInst Include/Exclude Control register
-		 * for corresponding address comparator pair.
-		 */
-		if (drvdata->addr_type[idx] != ETM_ADDR_TYPE_RANGE ||
-		    drvdata->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE)
-			return -EINVAL;
-
-		if (exclude == true) {
-			/*
-			 * Set exclude bit and unset the include bit
-			 * corresponding to comparator pair
-			 */
-			drvdata->viiectlr |= BIT(idx / 2 + 16);
-			drvdata->viiectlr &= ~BIT(idx / 2);
-		} else {
-			/*
-			 * Set include bit and unset exclude bit
-			 * corresponding to comparator pair
-			 */
-			drvdata->viiectlr |= BIT(idx / 2);
-			drvdata->viiectlr &= ~BIT(idx / 2 + 16);
-		}
-	}
-	return 0;
-}
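
etm4_set_mode_exclude(), removed from this file together with the rest of the sysFS handling, encodes a simple rule: address comparators are consumed in even/odd pairs, pair n is included by setting bit n of TRCVIIECTLR and excluded by setting bit n + 16, and the two bits are mutually exclusive. A small stand-alone sketch of that bit arithmetic follows; set_range_exclude() is an illustrative helper, not driver API.

#include <stdint.h>
#include <stdio.h>

/*
 * Flip the ViewInst include/exclude bits for the address-range comparator
 * pair starting at even index 'idx', using the same layout as
 * etm4_set_mode_exclude(): bit[pair] = include, bit[pair + 16] = exclude.
 */
static int set_range_exclude(uint32_t *viiectlr, unsigned int idx, int exclude)
{
	unsigned int pair = idx / 2;

	if (idx % 2 != 0)		/* ranges must start on the even comparator */
		return -1;

	if (exclude) {
		*viiectlr |= 1u << (pair + 16);
		*viiectlr &= ~(1u << pair);
	} else {
		*viiectlr |= 1u << pair;
		*viiectlr &= ~(1u << (pair + 16));
	}
	return 0;
}

int main(void)
{
	uint32_t viiectlr = 0;

	set_range_exclude(&viiectlr, 2, 1);	/* exclude the range on comparators 2/3 */
	printf("TRCVIIECTLR = %#x\n", viiectlr);	/* expect 0x20000 */
	return 0;
}
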
-
-static ssize_t nr_pe_cmp_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_pe_cmp;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_pe_cmp);
-
-static ssize_t nr_addr_cmp_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_addr_cmp;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_addr_cmp);
-
-static ssize_t nr_cntr_show(struct device *dev,
-			    struct device_attribute *attr,
-			    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_cntr;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_cntr);
-
-static ssize_t nr_ext_inp_show(struct device *dev,
-			       struct device_attribute *attr,
-			       char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_ext_inp;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_ext_inp);
-
-static ssize_t numcidc_show(struct device *dev,
-			    struct device_attribute *attr,
-			    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->numcidc;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(numcidc);
-
-static ssize_t numvmidc_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->numvmidc;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(numvmidc);
-
-static ssize_t nrseqstate_show(struct device *dev,
-			       struct device_attribute *attr,
-			       char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nrseqstate;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nrseqstate);
-
-static ssize_t nr_resource_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_resource;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_resource);
-
-static ssize_t nr_ss_cmp_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->nr_ss_cmp;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_ss_cmp);
-
-static ssize_t reset_store(struct device *dev,
-			   struct device_attribute *attr,
-			   const char *buf, size_t size)
-{
-	int i;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	if (val)
-		drvdata->mode = 0x0;
-
-	/* Disable data tracing: do not trace load and store data transfers */
-	drvdata->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE);
-	drvdata->cfg &= ~(BIT(1) | BIT(2));
-
-	/* Disable data value and data address tracing */
-	drvdata->mode &= ~(ETM_MODE_DATA_TRACE_ADDR |
-			   ETM_MODE_DATA_TRACE_VAL);
-	drvdata->cfg &= ~(BIT(16) | BIT(17));
-
-	/* Disable all events tracing */
-	drvdata->eventctrl0 = 0x0;
-	drvdata->eventctrl1 = 0x0;
-
-	/* Disable timestamp event */
-	drvdata->ts_ctrl = 0x0;
-
-	/* Disable stalling */
-	drvdata->stall_ctrl = 0x0;
-
-	/* Reset trace synchronization period to 2^8 = 256 bytes */
-	if (drvdata->syncpr == false)
-		drvdata->syncfreq = 0x8;
-
-	/*
-	 * Enable ViewInst to trace everything with start-stop logic in
-	 * started state. ARM recommends start-stop logic is set before
-	 * each trace run.
-	 */
-	drvdata->vinst_ctrl |= BIT(0);
-	if (drvdata->nr_addr_cmp == true) {
-		drvdata->mode |= ETM_MODE_VIEWINST_STARTSTOP;
-		/* SSSTATUS, bit[9] */
-		drvdata->vinst_ctrl |= BIT(9);
-	}
-
-	/* No address range filtering for ViewInst */
-	drvdata->viiectlr = 0x0;
-
-	/* No start-stop filtering for ViewInst */
-	drvdata->vissctlr = 0x0;
-
-	/* Disable seq events */
-	for (i = 0; i < drvdata->nrseqstate-1; i++)
-		drvdata->seq_ctrl[i] = 0x0;
-	drvdata->seq_rst = 0x0;
-	drvdata->seq_state = 0x0;
-
-	/* Disable external input events */
-	drvdata->ext_inp = 0x0;
-
-	drvdata->cntr_idx = 0x0;
-	for (i = 0; i < drvdata->nr_cntr; i++) {
-		drvdata->cntrldvr[i] = 0x0;
-		drvdata->cntr_ctrl[i] = 0x0;
-		drvdata->cntr_val[i] = 0x0;
-	}
-
-	/* Resource selector pair 0 is always implemented and reserved */
-	drvdata->res_idx = 0x2;
-	for (i = 2; i < drvdata->nr_resource * 2; i++)
-		drvdata->res_ctrl[i] = 0x0;
-
-	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
-		drvdata->ss_ctrl[i] = 0x0;
-		drvdata->ss_pe_cmp[i] = 0x0;
-	}
-
-	drvdata->addr_idx = 0x0;
-	for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
-		drvdata->addr_val[i] = 0x0;
-		drvdata->addr_acc[i] = 0x0;
-		drvdata->addr_type[i] = ETM_ADDR_TYPE_NONE;
-	}
-
-	drvdata->ctxid_idx = 0x0;
-	for (i = 0; i < drvdata->numcidc; i++) {
-		drvdata->ctxid_pid[i] = 0x0;
-		drvdata->ctxid_vpid[i] = 0x0;
-	}
-
-	drvdata->ctxid_mask0 = 0x0;
-	drvdata->ctxid_mask1 = 0x0;
-
-	drvdata->vmid_idx = 0x0;
-	for (i = 0; i < drvdata->numvmidc; i++)
-		drvdata->vmid_val[i] = 0x0;
-	drvdata->vmid_mask0 = 0x0;
-	drvdata->vmid_mask1 = 0x0;
-
-	drvdata->trcid = drvdata->cpu + 1;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_WO(reset);
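
The large block of attributes being removed from here appears to be part of splitting the sysFS handling out of the main driver file; the knobs themselves remain part of the user-visible interface. As a reminder of how the reset attribute is driven from user space, here is a hedged C sketch that writes "1" to it; the 20040000.etm device name is only an example, since the <memory_map> component depends on the platform.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Example path only: the "<memory_map>.etm" component is platform specific. */
	const char *path = "/sys/bus/coresight/devices/20040000.etm/reset";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "1", 1) != 1)	/* value is parsed as hex; 1 resets the config */
		perror("write");
	close(fd);
	return 0;
}
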
-
-static ssize_t mode_show(struct device *dev,
-			 struct device_attribute *attr,
-			 char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->mode;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t mode_store(struct device *dev,
-			  struct device_attribute *attr,
-			  const char *buf, size_t size)
-{
-	unsigned long val, mode;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	drvdata->mode = val & ETMv4_MODE_ALL;
-
-	if (drvdata->mode & ETM_MODE_EXCLUDE)
-		etm4_set_mode_exclude(drvdata, true);
-	else
-		etm4_set_mode_exclude(drvdata, false);
-
-	if (drvdata->instrp0 == true) {
-		/* start by clearing instruction P0 field */
-		drvdata->cfg  &= ~(BIT(1) | BIT(2));
-		if (drvdata->mode & ETM_MODE_LOAD)
-			/* 0b01 Trace load instructions as P0 instructions */
-			drvdata->cfg  |= BIT(1);
-		if (drvdata->mode & ETM_MODE_STORE)
-			/* 0b10 Trace store instructions as P0 instructions */
-			drvdata->cfg  |= BIT(2);
-		if (drvdata->mode & ETM_MODE_LOAD_STORE)
-			/*
-			 * 0b11 Trace load and store instructions
-			 * as P0 instructions
-			 */
-			drvdata->cfg  |= BIT(1) | BIT(2);
-	}
-
-	/* bit[3], Branch broadcast mode */
-	if ((drvdata->mode & ETM_MODE_BB) && (drvdata->trcbb == true))
-		drvdata->cfg |= BIT(3);
-	else
-		drvdata->cfg &= ~BIT(3);
-
-	/* bit[4], Cycle counting instruction trace bit */
-	if ((drvdata->mode & ETMv4_MODE_CYCACC) &&
-		(drvdata->trccci == true))
-		drvdata->cfg |= BIT(4);
-	else
-		drvdata->cfg &= ~BIT(4);
-
-	/* bit[6], Context ID tracing bit */
-	if ((drvdata->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size))
-		drvdata->cfg |= BIT(6);
-	else
-		drvdata->cfg &= ~BIT(6);
-
-	if ((drvdata->mode & ETM_MODE_VMID) && (drvdata->vmid_size))
-		drvdata->cfg |= BIT(7);
-	else
-		drvdata->cfg &= ~BIT(7);
-
-	/* bits[10:8], Conditional instruction tracing bit */
-	mode = ETM_MODE_COND(drvdata->mode);
-	if (drvdata->trccond == true) {
-		drvdata->cfg &= ~(BIT(8) | BIT(9) | BIT(10));
-		drvdata->cfg |= mode << 8;
-	}
-
-	/* bit[11], Global timestamp tracing bit */
-	if ((drvdata->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size))
-		drvdata->cfg |= BIT(11);
-	else
-		drvdata->cfg &= ~BIT(11);
-
-	/* bit[12], Return stack enable bit */
-	if ((drvdata->mode & ETM_MODE_RETURNSTACK) &&
-		(drvdata->retstack == true))
-		drvdata->cfg |= BIT(12);
-	else
-		drvdata->cfg &= ~BIT(12);
-
-	/* bits[14:13], Q element enable field */
-	mode = ETM_MODE_QELEM(drvdata->mode);
-	/* start by clearing QE bits */
-	drvdata->cfg &= ~(BIT(13) | BIT(14));
-	/* if supported, Q elements with instruction counts are enabled */
-	if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
-		drvdata->cfg |= BIT(13);
-	/*
-	 * if supported, Q elements with and without instruction
-	 * counts are enabled
-	 */
-	if ((mode & BIT(1)) && (drvdata->q_support & BIT(1)))
-		drvdata->cfg |= BIT(14);
-
-	/* bit[11], AMBA Trace Bus (ATB) trigger enable bit */
-	if ((drvdata->mode & ETM_MODE_ATB_TRIGGER) &&
-	    (drvdata->atbtrig == true))
-		drvdata->eventctrl1 |= BIT(11);
-	else
-		drvdata->eventctrl1 &= ~BIT(11);
-
-	/* bit[12], Low-power state behavior override bit */
-	if ((drvdata->mode & ETM_MODE_LPOVERRIDE) &&
-	    (drvdata->lpoverride == true))
-		drvdata->eventctrl1 |= BIT(12);
-	else
-		drvdata->eventctrl1 &= ~BIT(12);
-
-	/* bit[8], Instruction stall bit */
-	if (drvdata->mode & ETM_MODE_ISTALL_EN)
-		drvdata->stall_ctrl |= BIT(8);
-	else
-		drvdata->stall_ctrl &= ~BIT(8);
-
-	/* bit[10], Prioritize instruction trace bit */
-	if (drvdata->mode & ETM_MODE_INSTPRIO)
-		drvdata->stall_ctrl |= BIT(10);
-	else
-		drvdata->stall_ctrl &= ~BIT(10);
-
-	/* bit[13], Trace overflow prevention bit */
-	if ((drvdata->mode & ETM_MODE_NOOVERFLOW) &&
-		(drvdata->nooverflow == true))
-		drvdata->stall_ctrl |= BIT(13);
-	else
-		drvdata->stall_ctrl &= ~BIT(13);
-
-	/* bit[9] Start/stop logic control bit */
-	if (drvdata->mode & ETM_MODE_VIEWINST_STARTSTOP)
-		drvdata->vinst_ctrl |= BIT(9);
-	else
-		drvdata->vinst_ctrl &= ~BIT(9);
-
-	/* bit[10], Whether a trace unit must trace a Reset exception */
-	if (drvdata->mode & ETM_MODE_TRACE_RESET)
-		drvdata->vinst_ctrl |= BIT(10);
-	else
-		drvdata->vinst_ctrl &= ~BIT(10);
-
-	/* bit[11], Whether a trace unit must trace a system error exception */
-	if ((drvdata->mode & ETM_MODE_TRACE_ERR) &&
-		(drvdata->trc_error == true))
-		drvdata->vinst_ctrl |= BIT(11);
-	else
-		drvdata->vinst_ctrl &= ~BIT(11);
-
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(mode);
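
mode_store() is essentially a translation table: each requested ETM_MODE_* / ETMv4_MODE_* flag only reaches the corresponding TRCCONFIGR bit when the capability flag read out at probe time (trcbb, trccci, ctxid_size and friends) says the hardware implements the feature. A stand-alone sketch of that gating for two of the mappings above; the MODE_* values, the caps struct and build_trcconfigr() are mocks.

#include <stdint.h>
#include <stdio.h>

/* Mock request flags; the real ETM_MODE_ and ETMv4_MODE_ macros live in the driver header. */
#define MODE_BB		(1u << 0)	/* branch broadcast requested */
#define MODE_CYCACC	(1u << 1)	/* cycle-accurate trace requested */

struct caps { int trcbb; int trccci; };

/*
 * Mirror the shape of mode_store(): a requested feature only lands in
 * TRCCONFIGR (bit[3] = branch broadcast, bit[4] = cycle counting) when the
 * hardware advertises it.
 */
static uint32_t build_trcconfigr(uint32_t mode, const struct caps *c)
{
	uint32_t cfg = 0;

	if ((mode & MODE_BB) && c->trcbb)
		cfg |= 1u << 3;
	if ((mode & MODE_CYCACC) && c->trccci)
		cfg |= 1u << 4;
	return cfg;
}

int main(void)
{
	struct caps c = { .trcbb = 1, .trccci = 0 };

	/* Branch broadcast is honoured, cycle counting is silently dropped. */
	printf("TRCCONFIGR = %#x\n", build_trcconfigr(MODE_BB | MODE_CYCACC, &c));
	return 0;
}
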
-
-static ssize_t pe_show(struct device *dev,
-		       struct device_attribute *attr,
-		       char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->pe_sel;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t pe_store(struct device *dev,
-			struct device_attribute *attr,
-			const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	if (val > drvdata->nr_pe) {
-		spin_unlock(&drvdata->spinlock);
-		return -EINVAL;
-	}
-
-	drvdata->pe_sel = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(pe);
-
-static ssize_t event_show(struct device *dev,
-			  struct device_attribute *attr,
-			  char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->eventctrl0;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_store(struct device *dev,
-			   struct device_attribute *attr,
-			   const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	switch (drvdata->nr_event) {
-	case 0x0:
-		/* EVENT0, bits[7:0] */
-		drvdata->eventctrl0 = val & 0xFF;
-		break;
-	case 0x1:
-		 /* EVENT1, bits[15:8] */
-		drvdata->eventctrl0 = val & 0xFFFF;
-		break;
-	case 0x2:
-		/* EVENT2, bits[23:16] */
-		drvdata->eventctrl0 = val & 0xFFFFFF;
-		break;
-	case 0x3:
-		/* EVENT3, bits[31:24] */
-		drvdata->eventctrl0 = val;
-		break;
-	default:
-		break;
-	}
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(event);
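
event_store() keeps only as many 8-bit EVENT selector fields of TRCEVENTCTL0R as the hardware implements; judging by the switch above, nr_event is a zero-based count, so 0x1 means two selectors (mask 0xFFFF) and 0x3 keeps the whole register. A tiny sketch of that masking rule; mask_event_ctrl() is an illustrative helper, not driver API.

#include <stdint.h>
#include <stdio.h>

/*
 * Keep only the implemented EVENT fields of a TRCEVENTCTL0R-style value.
 * nr_event is zero based, as in the driver: 0 -> one selector, 3 -> four.
 */
static uint32_t mask_event_ctrl(uint32_t val, unsigned int nr_event)
{
	unsigned int selectors = nr_event + 1;

	if (selectors >= 4)
		return val;		/* all four byte-wide fields are valid */
	return val & ((1u << (selectors * 8)) - 1);
}

int main(void)
{
	printf("%#x\n", mask_event_ctrl(0xDEADBEEF, 1));	/* two selectors: 0xbeef */
	return 0;
}
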
-
-static ssize_t event_instren_show(struct device *dev,
-				  struct device_attribute *attr,
-				  char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = BMVAL(drvdata->eventctrl1, 0, 3);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_instren_store(struct device *dev,
-				   struct device_attribute *attr,
-				   const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	/* start by clearing all instruction event enable bits */
-	drvdata->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3));
-	switch (drvdata->nr_event) {
-	case 0x0:
-		/* generate Event element for event 1 */
-		drvdata->eventctrl1 |= val & BIT(1);
-		break;
-	case 0x1:
-		/* generate Event element for event 1 and 2 */
-		drvdata->eventctrl1 |= val & (BIT(0) | BIT(1));
-		break;
-	case 0x2:
-		/* generate Event element for event 1, 2 and 3 */
-		drvdata->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2));
-		break;
-	case 0x3:
-		/* generate Event element for all 4 events */
-		drvdata->eventctrl1 |= val & 0xF;
-		break;
-	default:
-		break;
-	}
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(event_instren);
-
-static ssize_t event_ts_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->ts_ctrl;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_ts_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (!drvdata->ts_size)
-		return -EINVAL;
-
-	drvdata->ts_ctrl = val & ETMv4_EVENT_MASK;
-	return size;
-}
-static DEVICE_ATTR_RW(event_ts);
-
-static ssize_t syncfreq_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->syncfreq;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t syncfreq_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (drvdata->syncpr == true)
-		return -EINVAL;
-
-	drvdata->syncfreq = val & ETMv4_SYNC_MASK;
-	return size;
-}
-static DEVICE_ATTR_RW(syncfreq);
-
-static ssize_t cyc_threshold_show(struct device *dev,
-				  struct device_attribute *attr,
-				  char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->ccctlr;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cyc_threshold_store(struct device *dev,
-				   struct device_attribute *attr,
-				   const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val < drvdata->ccitmin)
-		return -EINVAL;
-
-	drvdata->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
-	return size;
-}
-static DEVICE_ATTR_RW(cyc_threshold);
-
-static ssize_t bb_ctrl_show(struct device *dev,
-			    struct device_attribute *attr,
-			    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->bb_ctrl;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t bb_ctrl_store(struct device *dev,
-			     struct device_attribute *attr,
-			     const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (drvdata->trcbb == false)
-		return -EINVAL;
-	if (!drvdata->nr_addr_cmp)
-		return -EINVAL;
-	/*
-	 * Bit[7:0] selects which address range comparator is used for
-	 * branch broadcast control.
-	 */
-	if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
-		return -EINVAL;
-
-	drvdata->bb_ctrl = val;
-	return size;
-}
-static DEVICE_ATTR_RW(bb_ctrl);
-
-static ssize_t event_vinst_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->vinst_ctrl & ETMv4_EVENT_MASK;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_vinst_store(struct device *dev,
-				 struct device_attribute *attr,
-				 const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	val &= ETMv4_EVENT_MASK;
-	drvdata->vinst_ctrl &= ~ETMv4_EVENT_MASK;
-	drvdata->vinst_ctrl |= val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(event_vinst);
-
-static ssize_t s_exlevel_vinst_show(struct device *dev,
-				    struct device_attribute *attr,
-				    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = BMVAL(drvdata->vinst_ctrl, 16, 19);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t s_exlevel_vinst_store(struct device *dev,
-				     struct device_attribute *attr,
-				     const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	/* clear all EXLEVEL_S bits (bit[18] is never implemented) */
-	drvdata->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19));
-	/* enable instruction tracing for corresponding exception level */
-	val &= drvdata->s_ex_level;
-	drvdata->vinst_ctrl |= (val << 16);
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(s_exlevel_vinst);
-
-static ssize_t ns_exlevel_vinst_show(struct device *dev,
-				     struct device_attribute *attr,
-				     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	/* EXLEVEL_NS, bits[23:20] */
-	val = BMVAL(drvdata->vinst_ctrl, 20, 23);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t ns_exlevel_vinst_store(struct device *dev,
-				      struct device_attribute *attr,
-				      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	/* clear EXLEVEL_NS bits (bit[23] is never implemented) */
-	drvdata->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22));
-	/* enable instruction tracing for corresponding exception level */
-	val &= drvdata->ns_ex_level;
-	drvdata->vinst_ctrl |= (val << 20);
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(ns_exlevel_vinst);
-
-static ssize_t addr_idx_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->addr_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_idx_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->nr_addr_cmp * 2)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->addr_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_idx);
-
-static ssize_t addr_instdatatype_show(struct device *dev,
-				      struct device_attribute *attr,
-				      char *buf)
-{
-	ssize_t len;
-	u8 val, idx;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	val = BMVAL(drvdata->addr_acc[idx], 0, 1);
-	len = scnprintf(buf, PAGE_SIZE, "%s\n",
-			val == ETM_INSTR_ADDR ? "instr" :
-			(val == ETM_DATA_LOAD_ADDR ? "data_load" :
-			(val == ETM_DATA_STORE_ADDR ? "data_store" :
-			"data_load_store")));
-	spin_unlock(&drvdata->spinlock);
-	return len;
-}
-
-static ssize_t addr_instdatatype_store(struct device *dev,
-				       struct device_attribute *attr,
-				       const char *buf, size_t size)
-{
-	u8 idx;
-	char str[20] = "";
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (strlen(buf) >= 20)
-		return -EINVAL;
-	if (sscanf(buf, "%s", str) != 1)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (!strcmp(str, "instr"))
-		/* TYPE, bits[1:0] */
-		drvdata->addr_acc[idx] &= ~(BIT(0) | BIT(1));
-
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_instdatatype);
-
-static ssize_t addr_single_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	idx = drvdata->addr_idx;
-	spin_lock(&drvdata->spinlock);
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	      drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-	val = (unsigned long)drvdata->addr_val[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_single_store(struct device *dev,
-				 struct device_attribute *attr,
-				 const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	      drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	drvdata->addr_val[idx] = (u64)val;
-	drvdata->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_single);
-
-static ssize_t addr_range_show(struct device *dev,
-			       struct device_attribute *attr,
-			       char *buf)
-{
-	u8 idx;
-	unsigned long val1, val2;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (idx % 2 != 0) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-	if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
-	       drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
-	      (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
-	       drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	val1 = (unsigned long)drvdata->addr_val[idx];
-	val2 = (unsigned long)drvdata->addr_val[idx + 1];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
-
-static ssize_t addr_range_store(struct device *dev,
-				struct device_attribute *attr,
-				const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val1, val2;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
-		return -EINVAL;
-	/* lower address comparator cannot have a higher address value */
-	if (val1 > val2)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (idx % 2 != 0) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
-	       drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
-	      (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
-	       drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	drvdata->addr_val[idx] = (u64)val1;
-	drvdata->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
-	drvdata->addr_val[idx + 1] = (u64)val2;
-	drvdata->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
-	/*
-	 * Program include or exclude control bits for vinst or vdata
-	 * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
-	 */
-	if (drvdata->mode & ETM_MODE_EXCLUDE)
-		etm4_set_mode_exclude(drvdata, true);
-	else
-		etm4_set_mode_exclude(drvdata, false);
-
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_range);
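
addr_range_store() enforces the pairing rules for range comparators: the selected index must be the even half of a pair, the start address cannot be above the stop address, and both halves of the pair must be unused or already typed as a range before they are claimed. A compact stand-alone validator expressing the same checks; the enum and range_ok() helper are illustrative.

#include <stdint.h>
#include <stdio.h>

enum addr_type { TYPE_NONE, TYPE_SINGLE, TYPE_RANGE, TYPE_START, TYPE_STOP };

/* Same acceptance rules as addr_range_store(), minus the locking and register work. */
static int range_ok(const enum addr_type *type, unsigned int idx,
		    uint64_t start, uint64_t stop)
{
	if (idx % 2 != 0)
		return 0;	/* ranges start on the even comparator of a pair */
	if (start > stop)
		return 0;	/* lower comparator cannot hold the higher address */
	if (!((type[idx] == TYPE_NONE && type[idx + 1] == TYPE_NONE) ||
	      (type[idx] == TYPE_RANGE && type[idx + 1] == TYPE_RANGE)))
		return 0;	/* both halves must be free or already a range */
	return 1;
}

int main(void)
{
	enum addr_type type[4] = { TYPE_NONE, TYPE_NONE, TYPE_SINGLE, TYPE_NONE };

	printf("%d\n", range_ok(type, 0, 0x1000, 0x2000));	/* 1: accepted */
	printf("%d\n", range_ok(type, 2, 0x1000, 0x2000));	/* 0: pair 2/3 partly in use */
	return 0;
}
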
-
-static ssize_t addr_start_show(struct device *dev,
-			       struct device_attribute *attr,
-			       char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	      drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	val = (unsigned long)drvdata->addr_val[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_start_store(struct device *dev,
-				struct device_attribute *attr,
-				const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (!drvdata->nr_addr_cmp) {
-		spin_unlock(&drvdata->spinlock);
-		return -EINVAL;
-	}
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	      drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	drvdata->addr_val[idx] = (u64)val;
-	drvdata->addr_type[idx] = ETM_ADDR_TYPE_START;
-	drvdata->vissctlr |= BIT(idx);
-	/* SSSTATUS, bit[9] - turn on start/stop logic */
-	drvdata->vinst_ctrl |= BIT(9);
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_start);
-
-static ssize_t addr_stop_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	      drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	val = (unsigned long)drvdata->addr_val[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_stop_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (!drvdata->nr_addr_cmp) {
-		spin_unlock(&drvdata->spinlock);
-		return -EINVAL;
-	}
-	if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
-	       drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
-		spin_unlock(&drvdata->spinlock);
-		return -EPERM;
-	}
-
-	drvdata->addr_val[idx] = (u64)val;
-	drvdata->addr_type[idx] = ETM_ADDR_TYPE_STOP;
-	drvdata->vissctlr |= BIT(idx + 16);
-	/* SSSTATUS, bit[9] - turn on start/stop logic */
-	drvdata->vinst_ctrl |= BIT(9);
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_stop);
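
addr_start_store() and addr_stop_store() program the same register from opposite ends: comparator idx becomes a start point by setting bit[idx] of TRCVISSCTLR and a stop point by setting bit[idx + 16], and both paths also set SSSTATUS (TRCVICTLR bit[9]) to turn on the start/stop logic. A compact sketch of that encoding; the struct and set_start_stop() helper are illustrative.

#include <stdint.h>
#include <stdio.h>

struct vi_regs {
	uint32_t vissctlr;	/* start/stop comparator selection (TRCVISSCTLR) */
	uint32_t vinst_ctrl;	/* ViewInst main control (TRCVICTLR) */
};

/* Mark comparator 'idx' as a start or stop point, as the two stores above do. */
static void set_start_stop(struct vi_regs *r, unsigned int idx, int stop)
{
	r->vissctlr |= 1u << (stop ? idx + 16 : idx);
	r->vinst_ctrl |= 1u << 9;	/* SSSTATUS: turn on the start/stop logic */
}

int main(void)
{
	struct vi_regs r = { 0, 0 };

	set_start_stop(&r, 1, 0);	/* comparator 1 starts trace */
	set_start_stop(&r, 2, 1);	/* comparator 2 stops trace */
	printf("TRCVISSCTLR=%#x TRCVICTLR=%#x\n", r.vissctlr, r.vinst_ctrl);
	return 0;
}
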
-
-static ssize_t addr_ctxtype_show(struct device *dev,
-				 struct device_attribute *attr,
-				 char *buf)
-{
-	ssize_t len;
-	u8 idx, val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	/* CONTEXTTYPE, bits[3:2] */
-	val = BMVAL(drvdata->addr_acc[idx], 2, 3);
-	len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
-			(val == ETM_CTX_CTXID ? "ctxid" :
-			(val == ETM_CTX_VMID ? "vmid" : "all")));
-	spin_unlock(&drvdata->spinlock);
-	return len;
-}
-
-static ssize_t addr_ctxtype_store(struct device *dev,
-				  struct device_attribute *attr,
-				  const char *buf, size_t size)
-{
-	u8 idx;
-	char str[10] = "";
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (strlen(buf) >= 10)
-		return -EINVAL;
-	if (sscanf(buf, "%s", str) != 1)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	if (!strcmp(str, "none"))
-		/* start by clearing context type bits */
-		drvdata->addr_acc[idx] &= ~(BIT(2) | BIT(3));
-	else if (!strcmp(str, "ctxid")) {
-		/* 0b01 The trace unit performs a Context ID comparison */
-		if (drvdata->numcidc) {
-			drvdata->addr_acc[idx] |= BIT(2);
-			drvdata->addr_acc[idx] &= ~BIT(3);
-		}
-	} else if (!strcmp(str, "vmid")) {
-		/* 0b10 The trace unit performs a VMID comparison */
-		if (drvdata->numvmidc) {
-			drvdata->addr_acc[idx] &= ~BIT(2);
-			drvdata->addr_acc[idx] |= BIT(3);
-		}
-	} else if (!strcmp(str, "all")) {
-		/*
-		 * 0b11 The trace unit performs both a Context ID
-		 * comparison and a VMID comparison
-		 */
-		if (drvdata->numcidc)
-			drvdata->addr_acc[idx] |= BIT(2);
-		if (drvdata->numvmidc)
-			drvdata->addr_acc[idx] |= BIT(3);
-	}
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_ctxtype);
-
-static ssize_t addr_context_show(struct device *dev,
-				 struct device_attribute *attr,
-				 char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	/* context ID comparator bits[6:4] */
-	val = BMVAL(drvdata->addr_acc[idx], 4, 6);
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_context_store(struct device *dev,
-				  struct device_attribute *attr,
-				  const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1))
-		return -EINVAL;
-	if (val >=  (drvdata->numcidc >= drvdata->numvmidc ?
-		     drvdata->numcidc : drvdata->numvmidc))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->addr_idx;
-	/* clear context ID comparator bits[6:4] */
-	drvdata->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
-	drvdata->addr_acc[idx] |= (val << 4);
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(addr_context);
-
-static ssize_t seq_idx_show(struct device *dev,
-			    struct device_attribute *attr,
-			    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->seq_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_idx_store(struct device *dev,
-			     struct device_attribute *attr,
-			     const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->nrseqstate - 1)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->seq_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(seq_idx);
-
-static ssize_t seq_state_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->seq_state;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_state_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->nrseqstate)
-		return -EINVAL;
-
-	drvdata->seq_state = val;
-	return size;
-}
-static DEVICE_ATTR_RW(seq_state);
-
-static ssize_t seq_event_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->seq_idx;
-	val = drvdata->seq_ctrl[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_event_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->seq_idx;
-	/* RST, bits[7:0] */
-	drvdata->seq_ctrl[idx] = val & 0xFF;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(seq_event);
-
-static ssize_t seq_reset_event_show(struct device *dev,
-				    struct device_attribute *attr,
-				    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->seq_rst;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_reset_event_store(struct device *dev,
-				     struct device_attribute *attr,
-				     const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (!(drvdata->nrseqstate))
-		return -EINVAL;
-
-	drvdata->seq_rst = val & ETMv4_EVENT_MASK;
-	return size;
-}
-static DEVICE_ATTR_RW(seq_reset_event);
-
-static ssize_t cntr_idx_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->cntr_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_idx_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->nr_cntr)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->cntr_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(cntr_idx);
-
-static ssize_t cntrldvr_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	val = drvdata->cntrldvr[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntrldvr_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val > ETM_CNTR_MAX_VAL)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	drvdata->cntrldvr[idx] = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(cntrldvr);
-
-static ssize_t cntr_val_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	val = drvdata->cntr_val[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_val_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val > ETM_CNTR_MAX_VAL)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	drvdata->cntr_val[idx] = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(cntr_val);
-
-static ssize_t cntr_ctrl_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	val = drvdata->cntr_ctrl[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_ctrl_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->cntr_idx;
-	drvdata->cntr_ctrl[idx] = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(cntr_ctrl);
-
-static ssize_t res_idx_show(struct device *dev,
-			    struct device_attribute *attr,
-			    char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->res_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t res_idx_store(struct device *dev,
-			     struct device_attribute *attr,
-			     const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	/* Resource selector pair 0 is always implemented and reserved */
-	if (val < 2 || val >= drvdata->nr_resource * 2)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->res_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(res_idx);
-
-static ssize_t res_ctrl_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->res_idx;
-	val = drvdata->res_ctrl[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t res_ctrl_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->res_idx;
-	/* For an odd idx, the pair inversion bit is RES0 */
-	if (idx % 2 != 0)
-		/* PAIRINV, bit[21] */
-		val &= ~BIT(21);
-	drvdata->res_ctrl[idx] = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(res_ctrl);
-
-static ssize_t ctxid_idx_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->ctxid_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t ctxid_idx_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->numcidc)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->ctxid_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(ctxid_idx);
-
-static ssize_t ctxid_pid_show(struct device *dev,
-			      struct device_attribute *attr,
-			      char *buf)
-{
-	u8 idx;
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->ctxid_idx;
-	val = (unsigned long)drvdata->ctxid_vpid[idx];
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t ctxid_pid_store(struct device *dev,
-			       struct device_attribute *attr,
-			       const char *buf, size_t size)
-{
-	u8 idx;
-	unsigned long vpid, pid;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	/*
-	 * only implemented when ctxid tracing is enabled, i.e. at least one
-	 * ctxid comparator is implemented and ctxid is greater than 0 bits
-	 * in length
-	 */
-	if (!drvdata->ctxid_size || !drvdata->numcidc)
-		return -EINVAL;
-	if (kstrtoul(buf, 16, &vpid))
-		return -EINVAL;
-
-	pid = coresight_vpid_to_pid(vpid);
-
-	spin_lock(&drvdata->spinlock);
-	idx = drvdata->ctxid_idx;
-	drvdata->ctxid_pid[idx] = (u64)pid;
-	drvdata->ctxid_vpid[idx] = (u64)vpid;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(ctxid_pid);
-
-static ssize_t ctxid_masks_show(struct device *dev,
-				struct device_attribute *attr,
-				char *buf)
-{
-	unsigned long val1, val2;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	val1 = drvdata->ctxid_mask0;
-	val2 = drvdata->ctxid_mask1;
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
-
-static ssize_t ctxid_masks_store(struct device *dev,
-				struct device_attribute *attr,
-				const char *buf, size_t size)
-{
-	u8 i, j, maskbyte;
-	unsigned long val1, val2, mask;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	/*
-	 * only implemented when ctxid tracing is enabled, i.e. at least one
-	 * ctxid comparator is implemented and ctxid is greater than 0 bits
-	 * in length
-	 */
-	if (!drvdata->ctxid_size || !drvdata->numcidc)
-		return -EINVAL;
-	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	/*
-	 * each byte[0..3] controls mask value applied to ctxid
-	 * comparator[0..3]
-	 */
-	switch (drvdata->numcidc) {
-	case 0x1:
-		/* COMP0, bits[7:0] */
-		drvdata->ctxid_mask0 = val1 & 0xFF;
-		break;
-	case 0x2:
-		/* COMP1, bits[15:8] */
-		drvdata->ctxid_mask0 = val1 & 0xFFFF;
-		break;
-	case 0x3:
-		/* COMP2, bits[23:16] */
-		drvdata->ctxid_mask0 = val1 & 0xFFFFFF;
-		break;
-	case 0x4:
-		 /* COMP3, bits[31:24] */
-		drvdata->ctxid_mask0 = val1;
-		break;
-	case 0x5:
-		/* COMP4, bits[7:0] */
-		drvdata->ctxid_mask0 = val1;
-		drvdata->ctxid_mask1 = val2 & 0xFF;
-		break;
-	case 0x6:
-		/* COMP5, bits[15:8] */
-		drvdata->ctxid_mask0 = val1;
-		drvdata->ctxid_mask1 = val2 & 0xFFFF;
-		break;
-	case 0x7:
-		/* COMP6, bits[23:16] */
-		drvdata->ctxid_mask0 = val1;
-		drvdata->ctxid_mask1 = val2 & 0xFFFFFF;
-		break;
-	case 0x8:
-		/* COMP7, bits[31:24] */
-		drvdata->ctxid_mask0 = val1;
-		drvdata->ctxid_mask1 = val2;
-		break;
-	default:
-		break;
-	}
-	/*
-	 * If software sets a mask bit to 1, it must program the relevant byte
-	 * of the ctxid comparator value to 0x0, otherwise the behavior is
-	 * unpredictable.  For example, byte 0 of ctxid_mask0 masks comparator 0;
-	 * if bit[3] of that byte is 1, bits[31:24] of the ctxid comparator 0
-	 * value register must be cleared.
-	 */
-	mask = drvdata->ctxid_mask0;
-	for (i = 0; i < drvdata->numcidc; i++) {
-		/* mask value of corresponding ctxid comparator */
-		maskbyte = mask & ETMv4_EVENT_MASK;
-		/*
-		 * each bit corresponds to a byte of respective ctxid comparator
-		 * value register
-		 */
-		for (j = 0; j < 8; j++) {
-			if (maskbyte & 1)
-				drvdata->ctxid_pid[i] &= ~(0xFF << (j * 8));
-			maskbyte >>= 1;
-		}
-		/* Select the next ctxid comparator mask value */
-		if (i == 3)
-			/* ctxid comparators[4-7] */
-			mask = drvdata->ctxid_mask1;
-		else
-			mask >>= 0x8;
-	}
-
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(ctxid_masks);
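
The loop at the end of ctxid_masks_store() is the subtle part: each mask bit covers one byte of a comparator value, so whenever a mask bit is set the matching byte of the stored Context ID must be zeroed to keep the comparison predictable. A stand-alone sketch of that byte-clearing walk for a single comparator; apply_ctxid_mask() is an illustrative helper.

#include <stdint.h>
#include <stdio.h>

/*
 * Clear every byte of a comparator value whose corresponding mask bit is set,
 * as ctxid_masks_store() does for each implemented Context ID comparator.
 */
static uint64_t apply_ctxid_mask(uint64_t pid, uint8_t maskbyte)
{
	unsigned int j;

	for (j = 0; j < 8; j++) {
		if (maskbyte & 1)
			pid &= ~((uint64_t)0xFF << (j * 8));
		maskbyte >>= 1;
	}
	return pid;
}

int main(void)
{
	/* Mask bit[0] set: byte 0 of the comparator value gets programmed to 0. */
	printf("%#llx\n", (unsigned long long)apply_ctxid_mask(0x12345678, 0x01));
	return 0;
}
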
-
-static ssize_t vmid_idx_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->vmid_idx;
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t vmid_idx_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-	if (val >= drvdata->numvmidc)
-		return -EINVAL;
-
-	/*
-	 * Use spinlock to ensure index doesn't change while it gets
-	 * dereferenced multiple times within a spinlock block elsewhere.
-	 */
-	spin_lock(&drvdata->spinlock);
-	drvdata->vmid_idx = val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(vmid_idx);
-
-static ssize_t vmid_val_show(struct device *dev,
-			     struct device_attribute *attr,
-			     char *buf)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = (unsigned long)drvdata->vmid_val[drvdata->vmid_idx];
-	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t vmid_val_store(struct device *dev,
-			      struct device_attribute *attr,
-			      const char *buf, size_t size)
-{
-	unsigned long val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	/*
-	 * only implemented when vmid tracing is enabled, i.e. at least one
-	 * vmid comparator is implemented and at least 8 bit vmid size
-	 */
-	if (!drvdata->vmid_size || !drvdata->numvmidc)
-		return -EINVAL;
-	if (kstrtoul(buf, 16, &val))
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-	drvdata->vmid_val[drvdata->vmid_idx] = (u64)val;
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(vmid_val);
-
-static ssize_t vmid_masks_show(struct device *dev,
-			       struct device_attribute *attr, char *buf)
-{
-	unsigned long val1, val2;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	spin_lock(&drvdata->spinlock);
-	val1 = drvdata->vmid_mask0;
-	val2 = drvdata->vmid_mask1;
-	spin_unlock(&drvdata->spinlock);
-	return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
-
-static ssize_t vmid_masks_store(struct device *dev,
-				struct device_attribute *attr,
-				const char *buf, size_t size)
-{
-	u8 i, j, maskbyte;
-	unsigned long val1, val2, mask;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-	/*
-	 * only implemented when vmid tracing is enabled, i.e. at least one
-	 * vmid comparator is implemented and at least 8 bit vmid size
-	 */
-	if (!drvdata->vmid_size || !drvdata->numvmidc)
-		return -EINVAL;
-	if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
-		return -EINVAL;
-
-	spin_lock(&drvdata->spinlock);
-
-	/*
-	 * each byte[0..3] controls mask value applied to vmid
-	 * comparator[0..3]
-	 */
-	switch (drvdata->numvmidc) {
-	case 0x1:
-		/* COMP0, bits[7:0] */
-		drvdata->vmid_mask0 = val1 & 0xFF;
-		break;
-	case 0x2:
-		/* COMP1, bits[15:8] */
-		drvdata->vmid_mask0 = val1 & 0xFFFF;
-		break;
-	case 0x3:
-		/* COMP2, bits[23:16] */
-		drvdata->vmid_mask0 = val1 & 0xFFFFFF;
-		break;
-	case 0x4:
-		/* COMP3, bits[31:24] */
-		drvdata->vmid_mask0 = val1;
-		break;
-	case 0x5:
-		/* COMP4, bits[7:0] */
-		drvdata->vmid_mask0 = val1;
-		drvdata->vmid_mask1 = val2 & 0xFF;
-		break;
-	case 0x6:
-		/* COMP5, bits[15:8] */
-		drvdata->vmid_mask0 = val1;
-		drvdata->vmid_mask1 = val2 & 0xFFFF;
-		break;
-	case 0x7:
-		/* COMP6, bits[23:16] */
-		drvdata->vmid_mask0 = val1;
-		drvdata->vmid_mask1 = val2 & 0xFFFFFF;
-		break;
-	case 0x8:
-		/* COMP7, bits[31:24] */
-		drvdata->vmid_mask0 = val1;
-		drvdata->vmid_mask1 = val2;
-		break;
-	default:
-		break;
-	}
-
-	/*
-	 * If software sets a mask bit to 1, it must program the relevant byte
-	 * of the vmid comparator value to 0x0, otherwise the behavior is
-	 * unpredictable.  For example, byte 0 of vmid_mask0 masks comparator 0;
-	 * if bit[3] of that byte is 1, bits[31:24] of the vmid comparator 0
-	 * value register must be cleared.
-	 */
-	mask = drvdata->vmid_mask0;
-	for (i = 0; i < drvdata->numvmidc; i++) {
-		/* mask value of corresponding vmid comparator */
-		maskbyte = mask & ETMv4_EVENT_MASK;
-		/*
-		 * each bit corresponds to a byte of respective vmid comparator
-		 * value register
-		 */
-		for (j = 0; j < 8; j++) {
-			if (maskbyte & 1)
-				drvdata->vmid_val[i] &= ~(0xFF << (j * 8));
-			maskbyte >>= 1;
-		}
-		/* Select the next vmid comparator mask value */
-		if (i == 3)
-			/* vmid comparators[4-7] */
-			mask = drvdata->vmid_mask1;
-		else
-			mask >>= 0x8;
-	}
-	spin_unlock(&drvdata->spinlock);
-	return size;
-}
-static DEVICE_ATTR_RW(vmid_masks);
-
-static ssize_t cpu_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
-{
-	int val;
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
-	val = drvdata->cpu;
-	return scnprintf(buf, PAGE_SIZE, "%d\n", val);
-
-}
-static DEVICE_ATTR_RO(cpu);
-
-static struct attribute *coresight_etmv4_attrs[] = {
-	&dev_attr_nr_pe_cmp.attr,
-	&dev_attr_nr_addr_cmp.attr,
-	&dev_attr_nr_cntr.attr,
-	&dev_attr_nr_ext_inp.attr,
-	&dev_attr_numcidc.attr,
-	&dev_attr_numvmidc.attr,
-	&dev_attr_nrseqstate.attr,
-	&dev_attr_nr_resource.attr,
-	&dev_attr_nr_ss_cmp.attr,
-	&dev_attr_reset.attr,
-	&dev_attr_mode.attr,
-	&dev_attr_pe.attr,
-	&dev_attr_event.attr,
-	&dev_attr_event_instren.attr,
-	&dev_attr_event_ts.attr,
-	&dev_attr_syncfreq.attr,
-	&dev_attr_cyc_threshold.attr,
-	&dev_attr_bb_ctrl.attr,
-	&dev_attr_event_vinst.attr,
-	&dev_attr_s_exlevel_vinst.attr,
-	&dev_attr_ns_exlevel_vinst.attr,
-	&dev_attr_addr_idx.attr,
-	&dev_attr_addr_instdatatype.attr,
-	&dev_attr_addr_single.attr,
-	&dev_attr_addr_range.attr,
-	&dev_attr_addr_start.attr,
-	&dev_attr_addr_stop.attr,
-	&dev_attr_addr_ctxtype.attr,
-	&dev_attr_addr_context.attr,
-	&dev_attr_seq_idx.attr,
-	&dev_attr_seq_state.attr,
-	&dev_attr_seq_event.attr,
-	&dev_attr_seq_reset_event.attr,
-	&dev_attr_cntr_idx.attr,
-	&dev_attr_cntrldvr.attr,
-	&dev_attr_cntr_val.attr,
-	&dev_attr_cntr_ctrl.attr,
-	&dev_attr_res_idx.attr,
-	&dev_attr_res_ctrl.attr,
-	&dev_attr_ctxid_idx.attr,
-	&dev_attr_ctxid_pid.attr,
-	&dev_attr_ctxid_masks.attr,
-	&dev_attr_vmid_idx.attr,
-	&dev_attr_vmid_val.attr,
-	&dev_attr_vmid_masks.attr,
-	&dev_attr_cpu.attr,
-	NULL,
-};
-
-#define coresight_simple_func(name, offset)				\
-static ssize_t name##_show(struct device *_dev,				\
-			   struct device_attribute *attr, char *buf)	\
-{									\
-	struct etmv4_drvdata *drvdata = dev_get_drvdata(_dev->parent);	\
-	return scnprintf(buf, PAGE_SIZE, "0x%x\n",			\
-			 readl_relaxed(drvdata->base + offset));	\
-}									\
-static DEVICE_ATTR_RO(name)
-
-coresight_simple_func(trcoslsr, TRCOSLSR);
-coresight_simple_func(trcpdcr, TRCPDCR);
-coresight_simple_func(trcpdsr, TRCPDSR);
-coresight_simple_func(trclsr, TRCLSR);
-coresight_simple_func(trcauthstatus, TRCAUTHSTATUS);
-coresight_simple_func(trcdevid, TRCDEVID);
-coresight_simple_func(trcdevtype, TRCDEVTYPE);
-coresight_simple_func(trcpidr0, TRCPIDR0);
-coresight_simple_func(trcpidr1, TRCPIDR1);
-coresight_simple_func(trcpidr2, TRCPIDR2);
-coresight_simple_func(trcpidr3, TRCPIDR3);
-
-static struct attribute *coresight_etmv4_mgmt_attrs[] = {
-	&dev_attr_trcoslsr.attr,
-	&dev_attr_trcpdcr.attr,
-	&dev_attr_trcpdsr.attr,
-	&dev_attr_trclsr.attr,
-	&dev_attr_trcauthstatus.attr,
-	&dev_attr_trcdevid.attr,
-	&dev_attr_trcdevtype.attr,
-	&dev_attr_trcpidr0.attr,
-	&dev_attr_trcpidr1.attr,
-	&dev_attr_trcpidr2.attr,
-	&dev_attr_trcpidr3.attr,
-	NULL,
-};
-
-coresight_simple_func(trcidr0, TRCIDR0);
-coresight_simple_func(trcidr1, TRCIDR1);
-coresight_simple_func(trcidr2, TRCIDR2);
-coresight_simple_func(trcidr3, TRCIDR3);
-coresight_simple_func(trcidr4, TRCIDR4);
-coresight_simple_func(trcidr5, TRCIDR5);
-/* trcidr[6,7] are reserved */
-coresight_simple_func(trcidr8, TRCIDR8);
-coresight_simple_func(trcidr9, TRCIDR9);
-coresight_simple_func(trcidr10, TRCIDR10);
-coresight_simple_func(trcidr11, TRCIDR11);
-coresight_simple_func(trcidr12, TRCIDR12);
-coresight_simple_func(trcidr13, TRCIDR13);
-
-static struct attribute *coresight_etmv4_trcidr_attrs[] = {
-	&dev_attr_trcidr0.attr,
-	&dev_attr_trcidr1.attr,
-	&dev_attr_trcidr2.attr,
-	&dev_attr_trcidr3.attr,
-	&dev_attr_trcidr4.attr,
-	&dev_attr_trcidr5.attr,
-	/* trcidr[6,7] are reserved */
-	&dev_attr_trcidr8.attr,
-	&dev_attr_trcidr9.attr,
-	&dev_attr_trcidr10.attr,
-	&dev_attr_trcidr11.attr,
-	&dev_attr_trcidr12.attr,
-	&dev_attr_trcidr13.attr,
-	NULL,
-};
-
-static const struct attribute_group coresight_etmv4_group = {
-	.attrs = coresight_etmv4_attrs,
-};
-
-static const struct attribute_group coresight_etmv4_mgmt_group = {
-	.attrs = coresight_etmv4_mgmt_attrs,
-	.name = "mgmt",
-};
-
-static const struct attribute_group coresight_etmv4_trcidr_group = {
-	.attrs = coresight_etmv4_trcidr_attrs,
-	.name = "trcidr",
-};
-
-static const struct attribute_group *coresight_etmv4_groups[] = {
-	&coresight_etmv4_group,
-	&coresight_etmv4_mgmt_group,
-	&coresight_etmv4_trcidr_group,
-	NULL,
-};
-
 static void etm4_init_arch_data(void *info)
 {
 	u32 etmidr0;
@@ -2313,6 +408,9 @@
 	u32 etmidr5;
 	struct etmv4_drvdata *drvdata = info;
 
+	/* Make sure all registers are accessible */
+	etm4_os_unlock(drvdata);
+
 	CS_UNLOCK(drvdata->base);
 
 	/* find all capabilities of the tracing unit */
@@ -2464,93 +562,115 @@
 	CS_LOCK(drvdata->base);
 }
 
-static void etm4_init_default_data(struct etmv4_drvdata *drvdata)
+static void etm4_set_default(struct etmv4_config *config)
 {
-	int i;
+	if (WARN_ON_ONCE(!config))
+		return;
 
-	drvdata->pe_sel = 0x0;
-	drvdata->cfg = (ETMv4_MODE_CTXID | ETM_MODE_VMID |
-			ETMv4_MODE_TIMESTAMP | ETM_MODE_RETURNSTACK);
+	/*
+	 * Make the default initialisation trace everything.
+	 *
+	 * Select the "always true" resource selector on the
+	 * "Enabling Event" line and configure address range comparator
+	 * '0' to trace all the possible address range.  From there
+	 * configure the "include/exclude" engine to include address
+	 * range comparator '0'.
+	 */
 
 	/* disable all events tracing */
-	drvdata->eventctrl0 = 0x0;
-	drvdata->eventctrl1 = 0x0;
+	config->eventctrl0 = 0x0;
+	config->eventctrl1 = 0x0;
 
 	/* disable stalling */
-	drvdata->stall_ctrl = 0x0;
+	config->stall_ctrl = 0x0;
+
+	/* enable trace synchronization every 4096 bytes, if available */
+	config->syncfreq = 0xC;
 
 	/* disable timestamp event */
-	drvdata->ts_ctrl = 0x0;
+	config->ts_ctrl = 0x0;
 
-	/* enable trace synchronization every 4096 bytes for trace */
-	if (drvdata->syncpr == false)
-		drvdata->syncfreq = 0xC;
+	/* TRCVICTLR::EVENT = 0x01, select the always on logic */
+	config->vinst_ctrl |= BIT(0);
 
 	/*
-	 *  enable viewInst to trace everything with start-stop logic in
-	 *  started state
+	 * TRCVICTLR::SSSTATUS == 1, the start-stop logic is
+	 * in the started state
 	 */
-	drvdata->vinst_ctrl |= BIT(0);
-	/* set initial state of start-stop logic */
-	if (drvdata->nr_addr_cmp)
-		drvdata->vinst_ctrl |= BIT(9);
+	config->vinst_ctrl |= BIT(9);
 
-	/* no address range filtering for ViewInst */
-	drvdata->viiectlr = 0x0;
+	/*
+	 * Configure address range comparator '0' to encompass all
+	 * possible addresses.
+	 */
+
+	/* First half of default address comparator: start at address 0 */
+	config->addr_val[ETM_DEFAULT_ADDR_COMP] = 0x0;
+	/* trace instruction addresses */
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP] &= ~(BIT(0) | BIT(1));
+	/* EXLEVEL_NS, bits[15:12], only trace application and kernel space */
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= ETM_EXLEVEL_NS_HYP;
+	/* EXLEVEL_S, bits[11:8], don't trace anything in secure state */
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= (ETM_EXLEVEL_S_APP |
+						    ETM_EXLEVEL_S_OS |
+						    ETM_EXLEVEL_S_HYP);
+	config->addr_type[ETM_DEFAULT_ADDR_COMP] = ETM_ADDR_TYPE_RANGE;
+
+	/*
+	 * Second half of default address comparator: go all
+	 * the way to the top.
+	 */
+	config->addr_val[ETM_DEFAULT_ADDR_COMP + 1] = ~0x0;
+	/* trace instruction addresses */
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] &= ~(BIT(0) | BIT(1));
+	/* Address comparator type must be equal for both halves */
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] =
+					config->addr_acc[ETM_DEFAULT_ADDR_COMP];
+	config->addr_type[ETM_DEFAULT_ADDR_COMP + 1] = ETM_ADDR_TYPE_RANGE;
+
+	/*
+	 * Configure the ViewInst function to filter on address range
+	 * comparator '0'.
+	 */
+	config->viiectlr = BIT(0);
+
 	/* no start-stop filtering for ViewInst */
-	drvdata->vissctlr = 0x0;
+	config->vissctlr = 0x0;
+}
 
-	/* disable seq events */
-	for (i = 0; i < drvdata->nrseqstate-1; i++)
-		drvdata->seq_ctrl[i] = 0x0;
-	drvdata->seq_rst = 0x0;
-	drvdata->seq_state = 0x0;
+void etm4_config_trace_mode(struct etmv4_config *config)
+{
+	u32 addr_acc, mode;
 
-	/* disable external input events */
-	drvdata->ext_inp = 0x0;
+	mode = config->mode;
+	mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER);
 
-	for (i = 0; i < drvdata->nr_cntr; i++) {
-		drvdata->cntrldvr[i] = 0x0;
-		drvdata->cntr_ctrl[i] = 0x0;
-		drvdata->cntr_val[i] = 0x0;
-	}
+	/* excluding kernel AND user space doesn't make sense */
+	WARN_ON_ONCE(mode == (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER));
 
-	/* Resource selector pair 0 is always implemented and reserved */
-	drvdata->res_idx = 0x2;
-	for (i = 2; i < drvdata->nr_resource * 2; i++)
-		drvdata->res_ctrl[i] = 0x0;
+	/* nothing to do if neither flag is set */
+	if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER))
+		return;
 
-	for (i = 0; i < drvdata->nr_ss_cmp; i++) {
-		drvdata->ss_ctrl[i] = 0x0;
-		drvdata->ss_pe_cmp[i] = 0x0;
-	}
-
-	if (drvdata->nr_addr_cmp >= 1) {
-		drvdata->addr_val[0] = (unsigned long)_stext;
-		drvdata->addr_val[1] = (unsigned long)_etext;
-		drvdata->addr_type[0] = ETM_ADDR_TYPE_RANGE;
-		drvdata->addr_type[1] = ETM_ADDR_TYPE_RANGE;
-	}
-
-	for (i = 0; i < drvdata->numcidc; i++) {
-		drvdata->ctxid_pid[i] = 0x0;
-		drvdata->ctxid_vpid[i] = 0x0;
-	}
-
-	drvdata->ctxid_mask0 = 0x0;
-	drvdata->ctxid_mask1 = 0x0;
-
-	for (i = 0; i < drvdata->numvmidc; i++)
-		drvdata->vmid_val[i] = 0x0;
-	drvdata->vmid_mask0 = 0x0;
-	drvdata->vmid_mask1 = 0x0;
+	addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP];
+	/* clear default config */
+	addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS);
 
 	/*
-	 * A trace ID value of 0 is invalid, so let's start at some
-	 * random value that fits in 7 bits.  ETMv3.x has 0x10 so let's
-	 * start at 0x20.
+	 * EXLEVEL_NS, bits[15:12]
+	 * The Exception levels are:
+	 *   Bit[12] Exception level 0 - Application
+	 *   Bit[13] Exception level 1 - OS
+	 *   Bit[14] Exception level 2 - Hypervisor
+	 *   Bit[15] Never implemented
 	 */
-	drvdata->trcid = 0x20 + drvdata->cpu;
+	if (mode & ETM_MODE_EXCL_KERN)
+		addr_acc |= ETM_EXLEVEL_NS_OS;
+	else
+		addr_acc |= ETM_EXLEVEL_NS_APP;
+
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc;
+	config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;
 }
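
Taken together, etm4_set_default() and etm4_config_trace_mode() give the sysFS and perf paths a common way to build a configuration.  A minimal sketch of a caller that restricts tracing to user space follows; the example_* name and the locking shown are illustrative assumptions, not part of this patch.

static void example_trace_user_only(struct etmv4_drvdata *drvdata)
{
	struct etmv4_config *config = &drvdata->config;

	spin_lock(&drvdata->spinlock);
	/* Start from the "trace everything" defaults. */
	etm4_set_default(config);
	/* Ask for the kernel (non-secure EL1) to be excluded... */
	config->mode |= ETM_MODE_EXCL_KERN;
	/* ...which is applied as EXLEVEL_NS_OS on both halves of
	 * address range comparator 0. */
	etm4_config_trace_mode(config);
	spin_unlock(&drvdata->spinlock);
}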
 
 static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action,
@@ -2569,7 +689,7 @@
 			etmdrvdata[cpu]->os_unlock = true;
 		}
 
-		if (etmdrvdata[cpu]->enable)
+		if (local_read(&etmdrvdata[cpu]->mode))
 			etm4_enable_hw(etmdrvdata[cpu]);
 		spin_unlock(&etmdrvdata[cpu]->spinlock);
 		break;
@@ -2582,7 +702,7 @@
 
 	case CPU_DYING:
 		spin_lock(&etmdrvdata[cpu]->spinlock);
-		if (etmdrvdata[cpu]->enable)
+		if (local_read(&etmdrvdata[cpu]->mode))
 			etm4_disable_hw(etmdrvdata[cpu]);
 		spin_unlock(&etmdrvdata[cpu]->spinlock);
 		break;
@@ -2595,6 +715,11 @@
 	.notifier_call = etm4_cpu_callback,
 };
 
+static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
+{
+	drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
+}
+
 static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
 {
 	int ret;
@@ -2638,9 +763,6 @@
 	get_online_cpus();
 	etmdrvdata[drvdata->cpu] = drvdata;
 
-	if (!smp_call_function_single(drvdata->cpu, etm4_os_unlock, drvdata, 1))
-		drvdata->os_unlock = true;
-
 	if (smp_call_function_single(drvdata->cpu,
 				etm4_init_arch_data,  drvdata, 1))
 		dev_err(dev, "ETM arch init failed\n");
@@ -2654,9 +776,9 @@
 		ret = -EINVAL;
 		goto err_arch_supported;
 	}
-	etm4_init_default_data(drvdata);
 
-	pm_runtime_put(&adev->dev);
+	etm4_init_trace_id(drvdata);
+	etm4_set_default(&drvdata->config);
 
 	desc->type = CORESIGHT_DEV_TYPE_SOURCE;
 	desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
@@ -2667,9 +789,16 @@
 	drvdata->csdev = coresight_register(desc);
 	if (IS_ERR(drvdata->csdev)) {
 		ret = PTR_ERR(drvdata->csdev);
-		goto err_coresight_register;
+		goto err_arch_supported;
 	}
 
+	ret = etm_perf_symlink(drvdata->csdev, true);
+	if (ret) {
+		coresight_unregister(drvdata->csdev);
+		goto err_arch_supported;
+	}
+
+	pm_runtime_put(&adev->dev);
 	dev_info(dev, "%s initialized\n", (char *)id->data);
 
 	if (boot_enable) {
@@ -2680,8 +809,6 @@
 	return 0;
 
 err_arch_supported:
-	pm_runtime_put(&adev->dev);
-err_coresight_register:
 	if (--etm4_count == 0)
 		unregister_hotcpu_notifier(&etm4_cpu_notifier);
 	return ret;
@@ -2698,6 +825,11 @@
 		.mask	= 0x000fffff,
 		.data	= "ETM 4.0",
 	},
+	{       /* ETM 4.0 - A72, Maia, HiSilicon */
+		.id = 0x000bb95a,
+		.mask = 0x000fffff,
+		.data = "ETM 4.0",
+	},
 	{ 0, 0},
 };
 
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index c341002..5359c51 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -13,6 +13,7 @@
 #ifndef _CORESIGHT_CORESIGHT_ETM_H
 #define _CORESIGHT_CORESIGHT_ETM_H
 
+#include <asm/local.h>
 #include <linux/spinlock.h>
 #include "coresight-priv.h"
 
@@ -175,71 +176,38 @@
 #define ETM_MODE_TRACE_RESET		BIT(25)
 #define ETM_MODE_TRACE_ERR		BIT(26)
 #define ETM_MODE_VIEWINST_STARTSTOP	BIT(27)
-#define ETMv4_MODE_ALL			0xFFFFFFF
+#define ETMv4_MODE_ALL			(GENMASK(27, 0) | \
+					 ETM_MODE_EXCL_KERN | \
+					 ETM_MODE_EXCL_USER)
 
 #define TRCSTATR_IDLE_BIT		0
+#define ETM_DEFAULT_ADDR_COMP		0
+
+/* secure state access levels */
+#define ETM_EXLEVEL_S_APP		BIT(8)
+#define ETM_EXLEVEL_S_OS		BIT(9)
+#define ETM_EXLEVEL_S_NA		BIT(10)
+#define ETM_EXLEVEL_S_HYP		BIT(11)
+/* non-secure state access levels */
+#define ETM_EXLEVEL_NS_APP		BIT(12)
+#define ETM_EXLEVEL_NS_OS		BIT(13)
+#define ETM_EXLEVEL_NS_HYP		BIT(14)
+#define ETM_EXLEVEL_NS_NA		BIT(15)
 
 /**
- * struct etm4_drvdata - specifics associated to an ETM component
- * @base:       Memory mapped base address for this component.
- * @dev:        The device entity associated to this component.
- * @csdev:      Component vitals needed by the framework.
- * @spinlock:   Only one at a time pls.
- * @cpu:        The cpu this component is affined to.
- * @arch:       ETM version number.
- * @enable:	Is this ETM currently tracing.
- * @sticky_enable: true if ETM base configuration has been done.
- * @boot_enable:True if we should start tracing at boot time.
- * @os_unlock:  True if access to management registers is allowed.
- * @nr_pe:	The number of processing entity available for tracing.
- * @nr_pe_cmp:	The number of processing entity comparator inputs that are
- *		available for tracing.
- * @nr_addr_cmp:Number of pairs of address comparators available
- *		as found in ETMIDR4 0-3.
- * @nr_cntr:    Number of counters as found in ETMIDR5 bit 28-30.
- * @nr_ext_inp: Number of external input.
- * @numcidc:	Number of contextID comparators.
- * @numvmidc:	Number of VMID comparators.
- * @nrseqstate: The number of sequencer states that are implemented.
- * @nr_event:	Indicates how many events the trace unit support.
- * @nr_resource:The number of resource selection pairs available for tracing.
- * @nr_ss_cmp:	Number of single-shot comparator controls that are available.
+ * struct etmv4_config - configuration information related to an ETMv4
  * @mode:	Controls various modes supported by this ETM.
- * @trcid:	value of the current ID for this component.
- * @trcid_size: Indicates the trace ID width.
- * @instrp0:	Tracing of load and store instructions
- *		as P0 elements is supported.
- * @trccond:	If the trace unit supports conditional
- *		instruction tracing.
- * @retstack:	Indicates if the implementation supports a return stack.
- * @trc_error:	Whether a trace unit can trace a system
- *		error exception.
- * @atbtrig:	If the implementation can support ATB triggers
- * @lpoverride:	If the implementation can support low-power state over.
  * @pe_sel:	Controls which PE to trace.
  * @cfg:	Controls the tracing options.
  * @eventctrl0: Controls the tracing of arbitrary events.
  * @eventctrl1: Controls the behavior of the events that @eventctrl0 selects.
  * @stall_ctrl:	Enables trace unit functionality that prevents trace
  *		unit buffer overflows.
- * @sysstall:	Does the system support stall control of the PE?
- * @nooverflow:	Indicate if overflow prevention is supported.
- * @stall_ctrl:	Enables trace unit functionality that prevents trace
- *		unit buffer overflows.
- * @ts_size:	Global timestamp size field.
  * @ts_ctrl:	Controls the insertion of global timestamps in the
  *		trace streams.
- * @syncpr:	Indicates if an implementation has a fixed
- *		synchronization period.
  * @syncfreq:	Controls how often trace synchronization requests occur.
- * @trccci:	Indicates if the trace unit supports cycle counting
- *		for instruction.
- * @ccsize:	Indicates the size of the cycle counter in bits.
- * @ccitmin:	minimum value that can be programmed in
- *		the TRCCCCTLR register.
  * @ccctlr:	Sets the threshold value for cycle counting.
- * @trcbb:	Indicates if the trace unit supports branch broadcast tracing.
- * @q_support:	Q element support characteristics.
  * @vinst_ctrl:	Controls instruction trace filtering.
  * @viiectlr:	Set or read, the address range comparators.
  * @vissctlr:	Set, or read, the single address comparators that control the
@@ -264,73 +232,28 @@
  * @addr_acc:	Address comparator access type.
  * @addr_type:	Current status of the comparator register.
  * @ctxid_idx:	Context ID index selector.
- * @ctxid_size:	Size of the context ID field to consider.
  * @ctxid_pid:	Value of the context ID comparator.
  * @ctxid_vpid:	Virtual PID seen by users if PID namespace is enabled, otherwise
  *		the same value of ctxid_pid.
  * @ctxid_mask0:Context ID comparator mask for comparator 0-3.
  * @ctxid_mask1:Context ID comparator mask for comparator 4-7.
  * @vmid_idx:	VM ID index selector.
- * @vmid_size:	Size of the VM ID comparator to consider.
  * @vmid_val:	Value of the VM ID comparator.
  * @vmid_mask0:	VM ID comparator mask for comparator 0-3.
  * @vmid_mask1:	VM ID comparator mask for comparator 4-7.
- * @s_ex_level:	In secure state, indicates whether instruction tracing is
- *		supported for the corresponding Exception level.
- * @ns_ex_level:In non-secure state, indicates whether instruction tracing is
- *		supported for the corresponding Exception level.
  * @ext_inp:	External input selection.
  */
-struct etmv4_drvdata {
-	void __iomem			*base;
-	struct device			*dev;
-	struct coresight_device		*csdev;
-	spinlock_t			spinlock;
-	int				cpu;
-	u8				arch;
-	bool				enable;
-	bool				sticky_enable;
-	bool				boot_enable;
-	bool				os_unlock;
-	u8				nr_pe;
-	u8				nr_pe_cmp;
-	u8				nr_addr_cmp;
-	u8				nr_cntr;
-	u8				nr_ext_inp;
-	u8				numcidc;
-	u8				numvmidc;
-	u8				nrseqstate;
-	u8				nr_event;
-	u8				nr_resource;
-	u8				nr_ss_cmp;
+struct etmv4_config {
 	u32				mode;
-	u8				trcid;
-	u8				trcid_size;
-	bool				instrp0;
-	bool				trccond;
-	bool				retstack;
-	bool				trc_error;
-	bool				atbtrig;
-	bool				lpoverride;
 	u32				pe_sel;
 	u32				cfg;
 	u32				eventctrl0;
 	u32				eventctrl1;
-	bool				stallctl;
-	bool				sysstall;
-	bool				nooverflow;
 	u32				stall_ctrl;
-	u8				ts_size;
 	u32				ts_ctrl;
-	bool				syncpr;
 	u32				syncfreq;
-	bool				trccci;
-	u8				ccsize;
-	u8				ccitmin;
 	u32				ccctlr;
-	bool				trcbb;
 	u32				bb_ctrl;
-	bool				q_support;
 	u32				vinst_ctrl;
 	u32				viiectlr;
 	u32				vissctlr;
@@ -353,19 +276,119 @@
 	u64				addr_acc[ETM_MAX_SINGLE_ADDR_CMP];
 	u8				addr_type[ETM_MAX_SINGLE_ADDR_CMP];
 	u8				ctxid_idx;
-	u8				ctxid_size;
 	u64				ctxid_pid[ETMv4_MAX_CTXID_CMP];
 	u64				ctxid_vpid[ETMv4_MAX_CTXID_CMP];
 	u32				ctxid_mask0;
 	u32				ctxid_mask1;
 	u8				vmid_idx;
-	u8				vmid_size;
 	u64				vmid_val[ETM_MAX_VMID_CMP];
 	u32				vmid_mask0;
 	u32				vmid_mask1;
+	u32				ext_inp;
+};
+
+/**
+ * struct etm4_drvdata - specifics associated to an ETM component
+ * @base:       Memory mapped base address for this component.
+ * @dev:        The device entity associated to this component.
+ * @csdev:      Component vitals needed by the framework.
+ * @spinlock:   Only one at a time pls.
+ * @mode:	This tracer's mode, i.e. sysFS, Perf or disabled.
+ * @cpu:        The cpu this component is affined to.
+ * @arch:       ETM version number.
+ * @nr_pe:	The number of processing entity available for tracing.
+ * @nr_pe_cmp:	The number of processing entity comparator inputs that are
+ *		available for tracing.
+ * @nr_addr_cmp:Number of pairs of address comparators available
+ *		as found in ETMIDR4 0-3.
+ * @nr_cntr:    Number of counters as found in ETMIDR5 bit 28-30.
+ * @nr_ext_inp: Number of external input.
+ * @numcidc:	Number of contextID comparators.
+ * @numvmidc:	Number of VMID comparators.
+ * @nrseqstate: The number of sequencer states that are implemented.
+ * @nr_event:	Indicates how many events the trace unit supports.
+ * @nr_resource:The number of resource selection pairs available for tracing.
+ * @nr_ss_cmp:	Number of single-shot comparator controls that are available.
+ * @trcid:	value of the current ID for this component.
+ * @trcid_size: Indicates the trace ID width.
+ * @ts_size:	Global timestamp size field.
+ * @ctxid_size:	Size of the context ID field to consider.
+ * @vmid_size:	Size of the VM ID comparator to consider.
+ * @ccsize:	Indicates the size of the cycle counter in bits.
+ * @ccitmin:	minimum value that can be programmed in
+ *		the TRCCCCTLR register.
+ * @s_ex_level:	In secure state, indicates whether instruction tracing is
+ *		supported for the corresponding Exception level.
+ * @ns_ex_level:In non-secure state, indicates whether instruction tracing is
+ *		supported for the corresponding Exception level.
+ * @sticky_enable: true if ETM base configuration has been done.
+ * @boot_enable:True if we should start tracing at boot time.
+ * @os_unlock:  True if access to management registers is allowed.
+ * @instrp0:	Tracing of load and store instructions
+ *		as P0 elements is supported.
+ * @trcbb:	Indicates if the trace unit supports branch broadcast tracing.
+ * @trccond:	If the trace unit supports conditional
+ *		instruction tracing.
+ * @retstack:	Indicates if the implementation supports a return stack.
+ * @trccci:	Indicates if the trace unit supports cycle counting
+ *		for instruction.
+ * @q_support:	Q element support characteristics.
+ * @trc_error:	Whether a trace unit can trace a system
+ *		error exception.
+ * @syncpr:	Indicates if an implementation has a fixed
+ *		synchronization period.
+ * @stallctl:	If functionality that prevents trace unit buffer
+ *		overflows is available.
+ * @sysstall:	Does the system support stall control of the PE?
+ * @nooverflow:	Indicate if overflow prevention is supported.
+ * @atbtrig:	If the implementation can support ATB triggers.
+ * @lpoverride:	If the implementation can support low-power state override.
+ * @config:	structure holding configuration parameters.
+ */
+struct etmv4_drvdata {
+	void __iomem			*base;
+	struct device			*dev;
+	struct coresight_device		*csdev;
+	spinlock_t			spinlock;
+	local_t				mode;
+	int				cpu;
+	u8				arch;
+	u8				nr_pe;
+	u8				nr_pe_cmp;
+	u8				nr_addr_cmp;
+	u8				nr_cntr;
+	u8				nr_ext_inp;
+	u8				numcidc;
+	u8				numvmidc;
+	u8				nrseqstate;
+	u8				nr_event;
+	u8				nr_resource;
+	u8				nr_ss_cmp;
+	u8				trcid;
+	u8				trcid_size;
+	u8				ts_size;
+	u8				ctxid_size;
+	u8				vmid_size;
+	u8				ccsize;
+	u8				ccitmin;
 	u8				s_ex_level;
 	u8				ns_ex_level;
-	u32				ext_inp;
+	u8				q_support;
+	bool				sticky_enable;
+	bool				boot_enable;
+	bool				os_unlock;
+	bool				instrp0;
+	bool				trcbb;
+	bool				trccond;
+	bool				retstack;
+	bool				trccci;
+	bool				trc_error;
+	bool				syncpr;
+	bool				stallctl;
+	bool				sysstall;
+	bool				nooverflow;
+	bool				atbtrig;
+	bool				lpoverride;
+	struct etmv4_config		config;
 };
 
 /* Address comparator access types */
@@ -391,4 +414,7 @@
 	ETM_ADDR_TYPE_START,
 	ETM_ADDR_TYPE_STOP,
 };
+
+extern const struct attribute_group *coresight_etmv4_groups[];
+void etm4_config_trace_mode(struct etmv4_config *config);
 #endif
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
index 0600ca3..05df789 100644
--- a/drivers/hwtracing/coresight/coresight-funnel.c
+++ b/drivers/hwtracing/coresight/coresight-funnel.c
@@ -221,7 +221,6 @@
 	if (IS_ERR(drvdata->csdev))
 		return PTR_ERR(drvdata->csdev);
 
-	dev_info(dev, "FUNNEL initialized\n");
 	return 0;
 }
 
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index 333edda..ad975c5 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -37,12 +37,42 @@
 #define ETM_MODE_EXCL_KERN	BIT(30)
 #define ETM_MODE_EXCL_USER	BIT(31)
 
+#define coresight_simple_func(type, name, offset)			\
+static ssize_t name##_show(struct device *_dev,				\
+			   struct device_attribute *attr, char *buf)	\
+{									\
+	type *drvdata = dev_get_drvdata(_dev->parent);			\
+	return scnprintf(buf, PAGE_SIZE, "0x%x\n",			\
+			 readl_relaxed(drvdata->base + offset));	\
+}									\
+static DEVICE_ATTR_RO(name)
+
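
The generalised coresight_simple_func() macro now takes the drvdata type as a parameter so every driver can stamp out read-only register attributes.  As a rough illustration, instantiating it for the STM driver's TCSR register (the pairing below is only an example, mirroring the coresight_stm_simple_func() wrapper added later in this series) expands to:

/* coresight_simple_func(struct stm_drvdata, tcsr, STMTCSR) expands to: */
static ssize_t tcsr_show(struct device *_dev,
			 struct device_attribute *attr, char *buf)
{
	struct stm_drvdata *drvdata = dev_get_drvdata(_dev->parent);
	return scnprintf(buf, PAGE_SIZE, "0x%x\n",
			 readl_relaxed(drvdata->base + STMTCSR));
}
static DEVICE_ATTR_RO(tcsr);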
 enum cs_mode {
 	CS_MODE_DISABLED,
 	CS_MODE_SYSFS,
 	CS_MODE_PERF,
 };
 
+/**
+ * struct cs_buffers - keep track of a recording session's specifics
+ * @cur:	index of the current buffer
+ * @nr_pages:	max number of pages granted to us
+ * @offset:	offset within the current buffer
+ * @data_size:	how much we collected in this run
+ * @lost:	non-zero if the HW buffer wrapped around
+ * @snapshot:	is this run in snapshot mode
+ * @data_pages:	a handle to the ring buffer's pages
+ */
+struct cs_buffers {
+	unsigned int		cur;
+	unsigned int		nr_pages;
+	unsigned long		offset;
+	local_t			data_size;
+	local_t			lost;
+	bool			snapshot;
+	void			**data_pages;
+};
+
 static inline void CS_LOCK(void __iomem *addr)
 {
 	do {
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
index 4299c05..c6982e3 100644
--- a/drivers/hwtracing/coresight/coresight-replicator.c
+++ b/drivers/hwtracing/coresight/coresight-replicator.c
@@ -114,7 +114,6 @@
 
 	pm_runtime_put(&pdev->dev);
 
-	dev_info(dev, "REPLICATOR initialized\n");
 	return 0;
 
 out_disable_pm:
diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
new file mode 100644
index 0000000..73be58a
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-stm.c
@@ -0,0 +1,920 @@
+/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+ *
+ * Description: CoreSight System Trace Macrocell driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * Initial implementation by Pratik Patel
+ * (C) 2014-2015 Pratik Patel <pratikp@codeaurora.org>
+ *
+ * Serious refactoring, code cleanup and upgrading to the Coresight upstream
+ * framework by Mathieu Poirier
+ * (C) 2015-2016 Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * Guaranteed timing and support for various packet type coming from the
+ * generic STM API by Chunyan Zhang
+ * (C) 2015-2016 Chunyan Zhang <zhang.chunyan@linaro.org>
+ */
+#include <asm/local.h>
+#include <linux/amba/bus.h>
+#include <linux/bitmap.h>
+#include <linux/clk.h>
+#include <linux/coresight.h>
+#include <linux/coresight-stm.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/moduleparam.h>
+#include <linux/of_address.h>
+#include <linux/perf_event.h>
+#include <linux/pm_runtime.h>
+#include <linux/stm.h>
+
+#include "coresight-priv.h"
+
+#define STMDMASTARTR			0xc04
+#define STMDMASTOPR			0xc08
+#define STMDMASTATR			0xc0c
+#define STMDMACTLR			0xc10
+#define STMDMAIDR			0xcfc
+#define STMHEER				0xd00
+#define STMHETER			0xd20
+#define STMHEBSR			0xd60
+#define STMHEMCR			0xd64
+#define STMHEMASTR			0xdf4
+#define STMHEFEAT1R			0xdf8
+#define STMHEIDR			0xdfc
+#define STMSPER				0xe00
+#define STMSPTER			0xe20
+#define STMPRIVMASKR			0xe40
+#define STMSPSCR			0xe60
+#define STMSPMSCR			0xe64
+#define STMSPOVERRIDER			0xe68
+#define STMSPMOVERRIDER			0xe6c
+#define STMSPTRIGCSR			0xe70
+#define STMTCSR				0xe80
+#define STMTSSTIMR			0xe84
+#define STMTSFREQR			0xe8c
+#define STMSYNCR			0xe90
+#define STMAUXCR			0xe94
+#define STMSPFEAT1R			0xea0
+#define STMSPFEAT2R			0xea4
+#define STMSPFEAT3R			0xea8
+#define STMITTRIGGER			0xee8
+#define STMITATBDATA0			0xeec
+#define STMITATBCTR2			0xef0
+#define STMITATBID			0xef4
+#define STMITATBCTR0			0xef8
+
+#define STM_32_CHANNEL			32
+#define BYTES_PER_CHANNEL		256
+#define STM_TRACE_BUF_SIZE		4096
+#define STM_SW_MASTER_END		127
+
+/* Register bit definition */
+#define STMTCSR_BUSY_BIT		23
+/* Reserve channels for kernel usage (currently none are reserved) */
+#define STM_CHANNEL_OFFSET		0
+
+enum stm_pkt_type {
+	STM_PKT_TYPE_DATA	= 0x98,
+	STM_PKT_TYPE_FLAG	= 0xE8,
+	STM_PKT_TYPE_TRIG	= 0xF8,
+};
+
+#define stm_channel_addr(drvdata, ch)	(drvdata->chs.base +	\
+					(ch * BYTES_PER_CHANNEL))
+#define stm_channel_off(type, opts)	(type & ~opts)
+
+static int boot_nr_channel;
+
+/*
+ * Not really modular but using module_param is the easiest way to
+ * remain consistent with existing use cases for now.
+ */
+module_param_named(
+	boot_nr_channel, boot_nr_channel, int, S_IRUGO
+);
+
+/**
+ * struct channel_space - central management entity for extended ports
+ * @base:		memory mapped base address where channels start.
+ * @guaranteed:	is the channel delivery guaranteed.
+ */
+struct channel_space {
+	void __iomem		*base;
+	unsigned long		*guaranteed;
+};
+
+/**
+ * struct stm_drvdata - specifics associated to an STM component
+ * @base:		memory mapped base address for this component.
+ * @dev:		the device entity associated to this component.
+ * @atclk:		optional clock for the core parts of the STM.
+ * @csdev:		component vitals needed by the framework.
+ * @spinlock:		only one at a time pls.
+ * @chs:		the channels associated with this STM.
+ * @stm:		structure associated with the generic STM interface.
+ * @mode:		this tracer's mode, i.e. sysFS or disabled.
+ * @traceid:		value of the current ID for this component.
+ * @write_bytes:	maximum number of bytes this STM can write at a time.
+ * @stmsper:		settings for register STMSPER.
+ * @stmspscr:		settings for register STMSPSCR.
+ * @numsp:		the total number of stimulus ports supported by this STM.
+ * @stmheer:		settings for register STMHEER.
+ * @stmheter:		settings for register STMHETER.
+ * @stmhebsr:		settings for register STMHEBSR.
+ */
+struct stm_drvdata {
+	void __iomem		*base;
+	struct device		*dev;
+	struct clk		*atclk;
+	struct coresight_device	*csdev;
+	spinlock_t		spinlock;
+	struct channel_space	chs;
+	struct stm_data		stm;
+	local_t			mode;
+	u8			traceid;
+	u32			write_bytes;
+	u32			stmsper;
+	u32			stmspscr;
+	u32			numsp;
+	u32			stmheer;
+	u32			stmheter;
+	u32			stmhebsr;
+};
+
+static void stm_hwevent_enable_hw(struct stm_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	writel_relaxed(drvdata->stmhebsr, drvdata->base + STMHEBSR);
+	writel_relaxed(drvdata->stmheter, drvdata->base + STMHETER);
+	writel_relaxed(drvdata->stmheer, drvdata->base + STMHEER);
+	writel_relaxed(0x01 |	/* Enable HW event tracing */
+		       0x04,	/* Error detection on event tracing */
+		       drvdata->base + STMHEMCR);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void stm_port_enable_hw(struct stm_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+	/* ATB trigger enable on direct writes to TRIG locations */
+	writel_relaxed(0x10,
+		       drvdata->base + STMSPTRIGCSR);
+	writel_relaxed(drvdata->stmspscr, drvdata->base + STMSPSCR);
+	writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void stm_enable_hw(struct stm_drvdata *drvdata)
+{
+	if (drvdata->stmheer)
+		stm_hwevent_enable_hw(drvdata);
+
+	stm_port_enable_hw(drvdata);
+
+	CS_UNLOCK(drvdata->base);
+
+	/* 4096 bytes between synchronisation packets */
+	writel_relaxed(0xFFF, drvdata->base + STMSYNCR);
+	writel_relaxed((drvdata->traceid << 16 | /* trace id */
+			0x02 |			 /* timestamp enable */
+			0x01),			 /* global STM enable */
+			drvdata->base + STMTCSR);
+
+	CS_LOCK(drvdata->base);
+}
+
+static int stm_enable(struct coresight_device *csdev,
+		      struct perf_event_attr *attr, u32 mode)
+{
+	u32 val;
+	struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	if (mode != CS_MODE_SYSFS)
+		return -EINVAL;
+
+	val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode);
+
+	/* Someone is already using the tracer */
+	if (val)
+		return -EBUSY;
+
+	pm_runtime_get_sync(drvdata->dev);
+
+	spin_lock(&drvdata->spinlock);
+	stm_enable_hw(drvdata);
+	spin_unlock(&drvdata->spinlock);
+
+	dev_info(drvdata->dev, "STM tracing enabled\n");
+	return 0;
+}
+
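
The local_cmpxchg() on ->mode above is the ownership idiom used across these drivers: the tracer is claimed atomically before the hardware is touched and handed back once it has been quiesced.  A condensed sketch of that pairing, with purely illustrative function names:

static int example_claim_tracer(struct stm_drvdata *drvdata)
{
	/* Succeeds only when nobody owns the tracer yet. */
	if (local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, CS_MODE_SYSFS))
		return -EBUSY;

	return 0;
}

static void example_release_tracer(struct stm_drvdata *drvdata)
{
	/* Hardware is assumed to be quiesced at this point. */
	local_set(&drvdata->mode, CS_MODE_DISABLED);
}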
+static void stm_hwevent_disable_hw(struct stm_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	writel_relaxed(0x0, drvdata->base + STMHEMCR);
+	writel_relaxed(0x0, drvdata->base + STMHEER);
+	writel_relaxed(0x0, drvdata->base + STMHETER);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void stm_port_disable_hw(struct stm_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	writel_relaxed(0x0, drvdata->base + STMSPER);
+	writel_relaxed(0x0, drvdata->base + STMSPTRIGCSR);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void stm_disable_hw(struct stm_drvdata *drvdata)
+{
+	u32 val;
+
+	CS_UNLOCK(drvdata->base);
+
+	val = readl_relaxed(drvdata->base + STMTCSR);
+	val &= ~0x1; /* clear global STM enable [0] */
+	writel_relaxed(val, drvdata->base + STMTCSR);
+
+	CS_LOCK(drvdata->base);
+
+	stm_port_disable_hw(drvdata);
+	if (drvdata->stmheer)
+		stm_hwevent_disable_hw(drvdata);
+}
+
+static void stm_disable(struct coresight_device *csdev)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	/*
+	 * For as long as the tracer isn't disabled another entity can't
+	 * change its status.  As such we can read the status here without
+	 * fearing it will change under us.
+	 */
+	if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+		spin_lock(&drvdata->spinlock);
+		stm_disable_hw(drvdata);
+		spin_unlock(&drvdata->spinlock);
+
+		/* Wait until the engine has completely stopped */
+		coresight_timeout(drvdata, STMTCSR, STMTCSR_BUSY_BIT, 0);
+
+		pm_runtime_put(drvdata->dev);
+
+		local_set(&drvdata->mode, CS_MODE_DISABLED);
+		dev_info(drvdata->dev, "STM tracing disabled\n");
+	}
+}
+
+static int stm_trace_id(struct coresight_device *csdev)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	return drvdata->traceid;
+}
+
+static const struct coresight_ops_source stm_source_ops = {
+	.trace_id	= stm_trace_id,
+	.enable		= stm_enable,
+	.disable	= stm_disable,
+};
+
+static const struct coresight_ops stm_cs_ops = {
+	.source_ops	= &stm_source_ops,
+};
+
+static inline bool stm_addr_unaligned(const void *addr, u8 write_bytes)
+{
+	return ((unsigned long)addr & (write_bytes - 1));
+}
+
+static void stm_send(void *addr, const void *data, u32 size, u8 write_bytes)
+{
+	u8 payload[8];
+
+	if (stm_addr_unaligned(data, write_bytes)) {
+		memcpy(payload, data, size);
+		data = payload;
+	}
+
+	/* now we are 64bit/32bit aligned */
+	switch (size) {
+#ifdef CONFIG_64BIT
+	case 8:
+		writeq_relaxed(*(u64 *)data, addr);
+		break;
+#endif
+	case 4:
+		writel_relaxed(*(u32 *)data, addr);
+		break;
+	case 2:
+		writew_relaxed(*(u16 *)data, addr);
+		break;
+	case 1:
+		writeb_relaxed(*(u8 *)data, addr);
+		break;
+	default:
+		break;
+	}
+}
+
+static int stm_generic_link(struct stm_data *stm_data,
+			    unsigned int master,  unsigned int channel)
+{
+	struct stm_drvdata *drvdata = container_of(stm_data,
+						   struct stm_drvdata, stm);
+	if (!drvdata || !drvdata->csdev)
+		return -EINVAL;
+
+	return coresight_enable(drvdata->csdev);
+}
+
+static void stm_generic_unlink(struct stm_data *stm_data,
+			       unsigned int master,  unsigned int channel)
+{
+	struct stm_drvdata *drvdata = container_of(stm_data,
+						   struct stm_drvdata, stm);
+	if (!drvdata || !drvdata->csdev)
+		return;
+
+	stm_disable(drvdata->csdev);
+}
+
+static long stm_generic_set_options(struct stm_data *stm_data,
+				    unsigned int master,
+				    unsigned int channel,
+				    unsigned int nr_chans,
+				    unsigned long options)
+{
+	struct stm_drvdata *drvdata = container_of(stm_data,
+						   struct stm_drvdata, stm);
+	if (!(drvdata && local_read(&drvdata->mode)))
+		return -EINVAL;
+
+	if (channel >= drvdata->numsp)
+		return -EINVAL;
+
+	switch (options) {
+	case STM_OPTION_GUARANTEED:
+		set_bit(channel, drvdata->chs.guaranteed);
+		break;
+
+	case STM_OPTION_INVARIANT:
+		clear_bit(channel, drvdata->chs.guaranteed);
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static ssize_t stm_generic_packet(struct stm_data *stm_data,
+				  unsigned int master,
+				  unsigned int channel,
+				  unsigned int packet,
+				  unsigned int flags,
+				  unsigned int size,
+				  const unsigned char *payload)
+{
+	unsigned long ch_addr;
+	struct stm_drvdata *drvdata = container_of(stm_data,
+						   struct stm_drvdata, stm);
+
+	if (!(drvdata && local_read(&drvdata->mode)))
+		return 0;
+
+	if (channel >= drvdata->numsp)
+		return 0;
+
+	ch_addr = (unsigned long)stm_channel_addr(drvdata, channel);
+
+	flags = (flags == STP_PACKET_TIMESTAMPED) ? STM_FLAG_TIMESTAMPED : 0;
+	flags |= test_bit(channel, drvdata->chs.guaranteed) ?
+			   STM_FLAG_GUARANTEED : 0;
+
+	if (size > drvdata->write_bytes)
+		size = drvdata->write_bytes;
+	else
+		size = rounddown_pow_of_two(size);
+
+	switch (packet) {
+	case STP_PACKET_FLAG:
+		ch_addr |= stm_channel_off(STM_PKT_TYPE_FLAG, flags);
+
+		/*
+		 * The generic STM core sets a size of '0' on flag packets.
+		 * As such send a flag packet of size '1' and tell the
+		 * core we did so.
+		 */
+		stm_send((void *)ch_addr, payload, 1, drvdata->write_bytes);
+		size = 1;
+		break;
+
+	case STP_PACKET_DATA:
+		ch_addr |= stm_channel_off(STM_PKT_TYPE_DATA, flags);
+		stm_send((void *)ch_addr, payload, size,
+				drvdata->write_bytes);
+		break;
+
+	default:
+		return -ENOTSUPP;
+	}
+
+	return size;
+}
+
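
To make the extended stimulus port arithmetic above concrete, here is a small sketch (the helper name and channel number are arbitrary) of how the MMIO address for a guaranteed, timestamped FLAG packet would be formed:

static void __iomem *example_flag_addr(struct stm_drvdata *drvdata)
{
	unsigned long ch_addr;

	/* Channel slot: chs.base + 2 * BYTES_PER_CHANNEL. */
	ch_addr = (unsigned long)stm_channel_addr(drvdata, 2);
	/* Packet variant: clear the flag bits out of the base type. */
	ch_addr |= stm_channel_off(STM_PKT_TYPE_FLAG,
				   STM_FLAG_TIMESTAMPED | STM_FLAG_GUARANTEED);

	return (void __iomem *)ch_addr;
}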
+static ssize_t hwevent_enable_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val = drvdata->stmheer;
+
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t hwevent_enable_store(struct device *dev,
+				    struct device_attribute *attr,
+				    const char *buf, size_t size)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val;
+	int ret = 0;
+
+	ret = kstrtoul(buf, 16, &val);
+	if (ret)
+		return -EINVAL;
+
+	drvdata->stmheer = val;
+	/* HW event enable and trigger go hand in hand */
+	drvdata->stmheter = val;
+
+	return size;
+}
+static DEVICE_ATTR_RW(hwevent_enable);
+
+static ssize_t hwevent_select_show(struct device *dev,
+				   struct device_attribute *attr, char *buf)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val = drvdata->stmhebsr;
+
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t hwevent_select_store(struct device *dev,
+				    struct device_attribute *attr,
+				    const char *buf, size_t size)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val;
+	int ret = 0;
+
+	ret = kstrtoul(buf, 16, &val);
+	if (ret)
+		return -EINVAL;
+
+	drvdata->stmhebsr = val;
+
+	return size;
+}
+static DEVICE_ATTR_RW(hwevent_select);
+
+static ssize_t port_select_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val;
+
+	if (!local_read(&drvdata->mode)) {
+		val = drvdata->stmspscr;
+	} else {
+		spin_lock(&drvdata->spinlock);
+		val = readl_relaxed(drvdata->base + STMSPSCR);
+		spin_unlock(&drvdata->spinlock);
+	}
+
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t port_select_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t size)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val, stmsper;
+	int ret = 0;
+
+	ret = kstrtoul(buf, 16, &val);
+	if (ret)
+		return ret;
+
+	spin_lock(&drvdata->spinlock);
+	drvdata->stmspscr = val;
+
+	if (local_read(&drvdata->mode)) {
+		CS_UNLOCK(drvdata->base);
+		/* Process as per ARM's TRM recommendation */
+		stmsper = readl_relaxed(drvdata->base + STMSPER);
+		writel_relaxed(0x0, drvdata->base + STMSPER);
+		writel_relaxed(drvdata->stmspscr, drvdata->base + STMSPSCR);
+		writel_relaxed(stmsper, drvdata->base + STMSPER);
+		CS_LOCK(drvdata->base);
+	}
+	spin_unlock(&drvdata->spinlock);
+
+	return size;
+}
+static DEVICE_ATTR_RW(port_select);
+
+static ssize_t port_enable_show(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val;
+
+	if (!local_read(&drvdata->mode)) {
+		val = drvdata->stmsper;
+	} else {
+		spin_lock(&drvdata->spinlock);
+		val = readl_relaxed(drvdata->base + STMSPER);
+		spin_unlock(&drvdata->spinlock);
+	}
+
+	return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t port_enable_store(struct device *dev,
+				 struct device_attribute *attr,
+				 const char *buf, size_t size)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	unsigned long val;
+	int ret = 0;
+
+	ret = kstrtoul(buf, 16, &val);
+	if (ret)
+		return ret;
+
+	spin_lock(&drvdata->spinlock);
+	drvdata->stmsper = val;
+
+	if (local_read(&drvdata->mode)) {
+		CS_UNLOCK(drvdata->base);
+		writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER);
+		CS_LOCK(drvdata->base);
+	}
+	spin_unlock(&drvdata->spinlock);
+
+	return size;
+}
+static DEVICE_ATTR_RW(port_enable);
+
+static ssize_t traceid_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	unsigned long val;
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	val = drvdata->traceid;
+	return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t traceid_store(struct device *dev,
+			     struct device_attribute *attr,
+			     const char *buf, size_t size)
+{
+	int ret;
+	unsigned long val;
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+	ret = kstrtoul(buf, 16, &val);
+	if (ret)
+		return ret;
+
+	/* traceid field is 7bit wide on STM32 */
+	drvdata->traceid = val & 0x7f;
+	return size;
+}
+static DEVICE_ATTR_RW(traceid);
+
+#define coresight_stm_simple_func(name, offset)	\
+	coresight_simple_func(struct stm_drvdata, name, offset)
+
+coresight_stm_simple_func(tcsr, STMTCSR);
+coresight_stm_simple_func(tsfreqr, STMTSFREQR);
+coresight_stm_simple_func(syncr, STMSYNCR);
+coresight_stm_simple_func(sper, STMSPER);
+coresight_stm_simple_func(spter, STMSPTER);
+coresight_stm_simple_func(privmaskr, STMPRIVMASKR);
+coresight_stm_simple_func(spscr, STMSPSCR);
+coresight_stm_simple_func(spmscr, STMSPMSCR);
+coresight_stm_simple_func(spfeat1r, STMSPFEAT1R);
+coresight_stm_simple_func(spfeat2r, STMSPFEAT2R);
+coresight_stm_simple_func(spfeat3r, STMSPFEAT3R);
+coresight_stm_simple_func(devid, CORESIGHT_DEVID);
+
+static struct attribute *coresight_stm_attrs[] = {
+	&dev_attr_hwevent_enable.attr,
+	&dev_attr_hwevent_select.attr,
+	&dev_attr_port_enable.attr,
+	&dev_attr_port_select.attr,
+	&dev_attr_traceid.attr,
+	NULL,
+};
+
+static struct attribute *coresight_stm_mgmt_attrs[] = {
+	&dev_attr_tcsr.attr,
+	&dev_attr_tsfreqr.attr,
+	&dev_attr_syncr.attr,
+	&dev_attr_sper.attr,
+	&dev_attr_spter.attr,
+	&dev_attr_privmaskr.attr,
+	&dev_attr_spscr.attr,
+	&dev_attr_spmscr.attr,
+	&dev_attr_spfeat1r.attr,
+	&dev_attr_spfeat2r.attr,
+	&dev_attr_spfeat3r.attr,
+	&dev_attr_devid.attr,
+	NULL,
+};
+
+static const struct attribute_group coresight_stm_group = {
+	.attrs = coresight_stm_attrs,
+};
+
+static const struct attribute_group coresight_stm_mgmt_group = {
+	.attrs = coresight_stm_mgmt_attrs,
+	.name = "mgmt",
+};
+
+static const struct attribute_group *coresight_stm_groups[] = {
+	&coresight_stm_group,
+	&coresight_stm_mgmt_group,
+	NULL,
+};
+
+static int stm_get_resource_byname(struct device_node *np,
+				   char *ch_base, struct resource *res)
+{
+	const char *name = NULL;
+	int index = 0, found = 0;
+
+	while (!of_property_read_string_index(np, "reg-names", index, &name)) {
+		if (strcmp(ch_base, name)) {
+			index++;
+			continue;
+		}
+
+		/* We have a match and @index is where it's at */
+		found = 1;
+		break;
+	}
+
+	if (!found)
+		return -EINVAL;
+
+	return of_address_to_resource(np, index, res);
+}
+
+static u32 stm_fundamental_data_size(struct stm_drvdata *drvdata)
+{
+	u32 stmspfeat2r;
+
+	if (!IS_ENABLED(CONFIG_64BIT))
+		return 4;
+
+	stmspfeat2r = readl_relaxed(drvdata->base + STMSPFEAT2R);
+
+	/*
+	 * bit[15:12] represents the fundamental data size
+	 * 0 - 32-bit data
+	 * 1 - 64-bit data
+	 */
+	return BMVAL(stmspfeat2r, 12, 15) ? 8 : 4;
+}
+
+static u32 stm_num_stimulus_port(struct stm_drvdata *drvdata)
+{
+	u32 numsp;
+
+	numsp = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
+	/*
+	 * NUMSP in STMDEVID is 17 bits long and if equal to 0x0,
+	 * 32 stimulus ports are supported.
+	 */
+	numsp &= 0x1ffff;
+	if (!numsp)
+		numsp = STM_32_CHANNEL;
+	return numsp;
+}
+
+static void stm_init_default_data(struct stm_drvdata *drvdata)
+{
+	/* Don't use port selection */
+	drvdata->stmspscr = 0x0;
+	/*
+	 * Enable all channels regardless of their number.  When port
+	 * selection isn't used (see above) STMSPER applies to all the
+	 * 32 channel groups available, hence setting all 32 bits to 1.
+	 */
+	drvdata->stmsper = ~0x0;
+
+	/*
+	 * The trace ID values for *ETM* tracers start at CPU_ID * 2 + 0x10 and
+	 * anything equal to or higher than 0x70 is reserved.  Since 0x00 is
+	 * also reserved the STM trace ID needs to be higher than 0x00 and
+	 * lower than 0x10.
+	 */
+	drvdata->traceid = 0x1;
+
+	/* Set invariant transaction timing on all channels */
+	bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp);
+}
+
+static void stm_init_generic_data(struct stm_drvdata *drvdata)
+{
+	drvdata->stm.name = dev_name(drvdata->dev);
+
+	/*
+	 * MasterIDs are assigned at the HW design phase. As such the core is
+	 * using a single master for interaction with this device.
+	 */
+	drvdata->stm.sw_start = 1;
+	drvdata->stm.sw_end = 1;
+	drvdata->stm.hw_override = true;
+	drvdata->stm.sw_nchannels = drvdata->numsp;
+	drvdata->stm.packet = stm_generic_packet;
+	drvdata->stm.link = stm_generic_link;
+	drvdata->stm.unlink = stm_generic_unlink;
+	drvdata->stm.set_options = stm_generic_set_options;
+}
+
+static int stm_probe(struct amba_device *adev, const struct amba_id *id)
+{
+	int ret;
+	void __iomem *base;
+	unsigned long *guaranteed;
+	struct device *dev = &adev->dev;
+	struct coresight_platform_data *pdata = NULL;
+	struct stm_drvdata *drvdata;
+	struct resource *res = &adev->res;
+	struct resource ch_res;
+	size_t res_size, bitmap_size;
+	struct coresight_desc *desc;
+	struct device_node *np = adev->dev.of_node;
+
+	if (np) {
+		pdata = of_get_coresight_platform_data(dev, np);
+		if (IS_ERR(pdata))
+			return PTR_ERR(pdata);
+		adev->dev.platform_data = pdata;
+	}
+	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+	if (!drvdata)
+		return -ENOMEM;
+
+	drvdata->dev = &adev->dev;
+	drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
+	if (!IS_ERR(drvdata->atclk)) {
+		ret = clk_prepare_enable(drvdata->atclk);
+		if (ret)
+			return ret;
+	}
+	dev_set_drvdata(dev, drvdata);
+
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+	drvdata->base = base;
+
+	ret = stm_get_resource_byname(np, "stm-stimulus-base", &ch_res);
+	if (ret)
+		return ret;
+
+	base = devm_ioremap_resource(dev, &ch_res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+	drvdata->chs.base = base;
+
+	drvdata->write_bytes = stm_fundamental_data_size(drvdata);
+
+	if (boot_nr_channel) {
+		drvdata->numsp = boot_nr_channel;
+		res_size = min((resource_size_t)(boot_nr_channel *
+				  BYTES_PER_CHANNEL), resource_size(res));
+	} else {
+		drvdata->numsp = stm_num_stimulus_port(drvdata);
+		res_size = min((resource_size_t)(drvdata->numsp *
+				 BYTES_PER_CHANNEL), resource_size(res));
+	}
+	bitmap_size = BITS_TO_LONGS(drvdata->numsp) * sizeof(long);
+
+	guaranteed = devm_kzalloc(dev, bitmap_size, GFP_KERNEL);
+	if (!guaranteed)
+		return -ENOMEM;
+	drvdata->chs.guaranteed = guaranteed;
+
+	spin_lock_init(&drvdata->spinlock);
+
+	stm_init_default_data(drvdata);
+	stm_init_generic_data(drvdata);
+
+	if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) {
+		dev_info(dev,
+			 "stm_register_device failed, probing deferred\n");
+		return -EPROBE_DEFER;
+	}
+
+	desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
+	if (!desc) {
+		ret = -ENOMEM;
+		goto stm_unregister;
+	}
+
+	desc->type = CORESIGHT_DEV_TYPE_SOURCE;
+	desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE;
+	desc->ops = &stm_cs_ops;
+	desc->pdata = pdata;
+	desc->dev = dev;
+	desc->groups = coresight_stm_groups;
+	drvdata->csdev = coresight_register(desc);
+	if (IS_ERR(drvdata->csdev)) {
+		ret = PTR_ERR(drvdata->csdev);
+		goto stm_unregister;
+	}
+
+	pm_runtime_put(&adev->dev);
+
+	dev_info(dev, "%s initialized\n", (char *)id->data);
+	return 0;
+
+stm_unregister:
+	stm_unregister_device(&drvdata->stm);
+	return ret;
+}
+
+#ifdef CONFIG_PM
+static int stm_runtime_suspend(struct device *dev)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev);
+
+	if (drvdata && !IS_ERR(drvdata->atclk))
+		clk_disable_unprepare(drvdata->atclk);
+
+	return 0;
+}
+
+static int stm_runtime_resume(struct device *dev)
+{
+	struct stm_drvdata *drvdata = dev_get_drvdata(dev);
+
+	if (drvdata && !IS_ERR(drvdata->atclk))
+		clk_prepare_enable(drvdata->atclk);
+
+	return 0;
+}
+#endif
+
+static const struct dev_pm_ops stm_dev_pm_ops = {
+	SET_RUNTIME_PM_OPS(stm_runtime_suspend, stm_runtime_resume, NULL)
+};
+
+static struct amba_id stm_ids[] = {
+	{
+		.id     = 0x0003b962,
+		.mask   = 0x0003ffff,
+		.data	= "STM32",
+	},
+	{ 0, 0},
+};
+
+static struct amba_driver stm_driver = {
+	.drv = {
+		.name   = "coresight-stm",
+		.owner	= THIS_MODULE,
+		.pm	= &stm_dev_pm_ops,
+		.suppress_bind_attrs = true,
+	},
+	.probe          = stm_probe,
+	.id_table	= stm_ids,
+};
+
+builtin_amba_driver(stm_driver);
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
new file mode 100644
index 0000000..466af86
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
@@ -0,0 +1,604 @@
+/*
+ * Copyright(C) 2016 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/circ_buf.h>
+#include <linux/coresight.h>
+#include <linux/perf_event.h>
+#include <linux/slab.h>
+#include "coresight-priv.h"
+#include "coresight-tmc.h"
+
+void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	/* Wait for TMCSReady bit to be set */
+	tmc_wait_for_tmcready(drvdata);
+
+	writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
+	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
+		       TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
+		       TMC_FFCR_TRIGON_TRIGIN,
+		       drvdata->base + TMC_FFCR);
+
+	writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
+	tmc_enable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
+{
+	char *bufp;
+	u32 read_data;
+	int i;
+
+	bufp = drvdata->buf;
+	while (1) {
+		for (i = 0; i < drvdata->memwidth; i++) {
+			read_data = readl_relaxed(drvdata->base + TMC_RRD);
+			if (read_data == 0xFFFFFFFF)
+				return;
+			memcpy(bufp, &read_data, 4);
+			bufp += 4;
+		}
+	}
+}
+
+static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	tmc_flush_and_stop(drvdata);
+	/*
+	 * When operating in sysFS mode the content of the buffer needs to be
+	 * read before the TMC is disabled.
+	 */
+	if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
+		tmc_etb_dump_hw(drvdata);
+	tmc_disable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	/* Wait for TMCSReady bit to be set */
+	tmc_wait_for_tmcready(drvdata);
+
+	writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE);
+	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI,
+		       drvdata->base + TMC_FFCR);
+	writel_relaxed(0x0, drvdata->base + TMC_BUFWM);
+	tmc_enable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
+
+static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	tmc_flush_and_stop(drvdata);
+	tmc_disable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
+
+static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode)
+{
+	int ret = 0;
+	bool used = false;
+	char *buf = NULL;
+	long val;
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	 /* This shouldn't be happening */
+	if (WARN_ON(mode != CS_MODE_SYSFS))
+		return -EINVAL;
+
+	/*
+	 * If we don't have a buffer release the lock and allocate memory.
+	 * Otherwise keep the lock and move along.
+	 */
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (!drvdata->buf) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+		/* Allocating the memory here while outside of the spinlock */
+		buf = kzalloc(drvdata->size, GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+
+		/* Let's try again */
+		spin_lock_irqsave(&drvdata->spinlock, flags);
+	}
+
+	if (drvdata->reading) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	val = local_xchg(&drvdata->mode, mode);
+	/*
+	 * In sysFS mode we can have multiple writers per sink.  Since this
+	 * sink is already enabled no memory is needed and the HW need not be
+	 * touched.
+	 */
+	if (val == CS_MODE_SYSFS)
+		goto out;
+
+	/*
+	 * If drvdata::buf isn't NULL, memory was allocated for a previous
+	 * trace run but wasn't read.  If so simply zero-out the memory.
+	 * Otherwise use the memory allocated above.
+	 *
+	 * The memory is freed when users read the buffer using the
+	 * /dev/xyz.{etf|etb} interface.  See tmc_read_unprepare_etf() for
+	 * details.
+	 */
+	if (drvdata->buf) {
+		memset(drvdata->buf, 0, drvdata->size);
+	} else {
+		used = true;
+		drvdata->buf = buf;
+	}
+
+	tmc_etb_enable_hw(drvdata);
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	/* Free memory outside the spinlock if need be */
+	if (!used && buf)
+		kfree(buf);
+
+	if (!ret)
+		dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n");
+
+	return ret;
+}
+
+static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, u32 mode)
+{
+	int ret = 0;
+	long val;
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	 /* This shouldn't be happening */
+	if (WARN_ON(mode != CS_MODE_PERF))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	val = local_xchg(&drvdata->mode, mode);
+	/*
+	 * In Perf mode there can be only one writer per sink.  There
+	 * is also no need to continue if the ETB/ETF is already being
+	 * operated from sysFS.
+	 */
+	if (val != CS_MODE_DISABLED) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	tmc_etb_enable_hw(drvdata);
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	return ret;
+}
+
+static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
+{
+	switch (mode) {
+	case CS_MODE_SYSFS:
+		return tmc_enable_etf_sink_sysfs(csdev, mode);
+	case CS_MODE_PERF:
+		return tmc_enable_etf_sink_perf(csdev, mode);
+	}
+
+	/* We shouldn't be here */
+	return -EINVAL;
+}
+
+static void tmc_disable_etf_sink(struct coresight_device *csdev)
+{
+	long val;
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return;
+	}
+
+	val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
+	/* Disable the TMC only if it was actually enabled */
+	if (val != CS_MODE_DISABLED)
+		tmc_etb_disable_hw(drvdata);
+
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	dev_info(drvdata->dev, "TMC-ETB/ETF disabled\n");
+}
+
+static int tmc_enable_etf_link(struct coresight_device *csdev,
+			       int inport, int outport)
+{
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return -EBUSY;
+	}
+
+	tmc_etf_enable_hw(drvdata);
+	local_set(&drvdata->mode, CS_MODE_SYSFS);
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	dev_info(drvdata->dev, "TMC-ETF enabled\n");
+	return 0;
+}
+
+static void tmc_disable_etf_link(struct coresight_device *csdev,
+				 int inport, int outport)
+{
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return;
+	}
+
+	tmc_etf_disable_hw(drvdata);
+	local_set(&drvdata->mode, CS_MODE_DISABLED);
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	dev_info(drvdata->dev, "TMC disabled\n");
+}
+
+static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu,
+				  void **pages, int nr_pages, bool overwrite)
+{
+	int node;
+	struct cs_buffers *buf;
+
+	if (cpu == -1)
+		cpu = smp_processor_id();
+	node = cpu_to_node(cpu);
+
+	/* Allocate memory structure for interaction with Perf */
+	buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
+	if (!buf)
+		return NULL;
+
+	buf->snapshot = overwrite;
+	buf->nr_pages = nr_pages;
+	buf->data_pages = pages;
+
+	return buf;
+}
+
+static void tmc_free_etf_buffer(void *config)
+{
+	struct cs_buffers *buf = config;
+
+	kfree(buf);
+}
+
+static int tmc_set_etf_buffer(struct coresight_device *csdev,
+			      struct perf_output_handle *handle,
+			      void *sink_config)
+{
+	int ret = 0;
+	unsigned long head;
+	struct cs_buffers *buf = sink_config;
+
+	/* wrap head around to the amount of space we have */
+	head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
+
+	/* find the page to write to */
+	buf->cur = head / PAGE_SIZE;
+
+	/* and offset within that page */
+	buf->offset = head % PAGE_SIZE;
+
+	local_set(&buf->data_size, 0);
+
+	return ret;
+}
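
The head computation above is plain modular arithmetic on a power-of-two sized AUX area; with concrete numbers:

/*
 * Worked example, assuming PAGE_SIZE = 4096 (PAGE_SHIFT = 12) and a
 * 4-page AUX buffer (nr_pages must be a power of two for the masking to
 * work, see the "cur &= buf->nr_pages - 1" wrap further down):
 *
 *   handle->head = 20480
 *   head         = 20480 & (4 * 4096 - 1) = 20480 & 16383 = 4096
 *   buf->cur     = 4096 / 4096 = 1        (write into the second page)
 *   buf->offset  = 4096 % 4096 = 0        (from the start of that page)
 */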
+
+static unsigned long tmc_reset_etf_buffer(struct coresight_device *csdev,
+					  struct perf_output_handle *handle,
+					  void *sink_config, bool *lost)
+{
+	long size = 0;
+	struct cs_buffers *buf = sink_config;
+
+	if (buf) {
+		/*
+		 * In snapshot mode ->data_size holds the new address of the
+		 * ring buffer's head.  The size itself is the whole address
+		 * range since we want the latest information.
+		 */
+		if (buf->snapshot)
+			handle->head = local_xchg(&buf->data_size,
+						  buf->nr_pages << PAGE_SHIFT);
+		/*
+		 * Tell the tracer PMU how much we got in this run and if
+		 * something went wrong along the way.  Nobody else can use
+		 * this cs_buffers instance until we are done.  As such
+		 * resetting parameters here and squaring off with the ring
+		 * buffer API in the tracer PMU is fine.
+		 */
+		*lost = !!local_xchg(&buf->lost, 0);
+		size = local_xchg(&buf->data_size, 0);
+	}
+
+	return size;
+}
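
To make the snapshot handshake concrete, here is the same sequence with invented numbers (4-page AUX buffer, tmc_update_etf_buffer() ran first and left the new head position in ->data_size):

/*
 * buf->nr_pages = 4, PAGE_SIZE = 4096, buf->data_size = 0x1340
 *
 *   handle->head = local_xchg(&buf->data_size, 16384) = 0x1340
 *   size         = local_xchg(&buf->data_size, 0)     = 16384
 *
 * i.e. the head is moved to where the hardware stopped writing and the
 * whole 16 KiB range is reported, since in snapshot mode the entire
 * buffer content is considered valid.
 */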
+
+static void tmc_update_etf_buffer(struct coresight_device *csdev,
+				  struct perf_output_handle *handle,
+				  void *sink_config)
+{
+	int i, cur;
+	u32 *buf_ptr;
+	u32 read_ptr, write_ptr;
+	u32 status, to_read;
+	unsigned long offset;
+	struct cs_buffers *buf = sink_config;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	if (!buf)
+		return;
+
+	/* This shouldn't happen */
+	if (WARN_ON_ONCE(local_read(&drvdata->mode) != CS_MODE_PERF))
+		return;
+
+	CS_UNLOCK(drvdata->base);
+
+	tmc_flush_and_stop(drvdata);
+
+	read_ptr = readl_relaxed(drvdata->base + TMC_RRP);
+	write_ptr = readl_relaxed(drvdata->base + TMC_RWP);
+
+	/*
+	 * Get a hold of the status register and see if a wrap around
+	 * has occurred.  If so adjust things accordingly.
+	 */
+	status = readl_relaxed(drvdata->base + TMC_STS);
+	if (status & TMC_STS_FULL) {
+		local_inc(&buf->lost);
+		to_read = drvdata->size;
+	} else {
+		to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->size);
+	}
+
+	/*
+	 * The TMC RAM buffer may be bigger than the space available in the
+	 * perf ring buffer (handle->size).  If so advance the RRP so that we
+	 * get the latest trace data.
+	 */
+	if (to_read > handle->size) {
+		u32 mask = 0;
+
+		/*
+		 * The value written to RRP must be byte-address aligned to
+		 * the width of the trace memory databus _and_ to a frame
+		 * boundary (16 byte), whichever is the biggest. For example,
+		 * for 32-bit, 64-bit and 128-bit wide trace memory, the four
+		 * LSBs must be 0s. For 256-bit wide trace memory, the five
+		 * LSBs must be 0s.
+		 */
+		switch (drvdata->memwidth) {
+		case TMC_MEM_INTF_WIDTH_32BITS:
+		case TMC_MEM_INTF_WIDTH_64BITS:
+		case TMC_MEM_INTF_WIDTH_128BITS:
+			mask = GENMASK(31, 5);
+			break;
+		case TMC_MEM_INTF_WIDTH_256BITS:
+			mask = GENMASK(31, 6);
+			break;
+		}
+
+		/*
+		 * Make sure the new size is aligned in accordance with the
+		 * requirement explained above.
+		 */
+		to_read = handle->size & mask;
+		/* Move the RAM read pointer up */
+		read_ptr = (write_ptr + drvdata->size) - to_read;
+		/* Make sure we are still within our limits */
+		if (read_ptr > (drvdata->size - 1))
+			read_ptr -= drvdata->size;
+		/* Tell the HW */
+		writel_relaxed(read_ptr, drvdata->base + TMC_RRP);
+		local_inc(&buf->lost);
+	}
+
+	cur = buf->cur;
+	offset = buf->offset;
+
+	/* Read the trace data one 32-bit word at a time */
+	for (i = 0; i < to_read; i += 4) {
+		buf_ptr = buf->data_pages[cur] + offset;
+		*buf_ptr = readl_relaxed(drvdata->base + TMC_RRD);
+
+		offset += 4;
+		if (offset >= PAGE_SIZE) {
+			offset = 0;
+			cur++;
+			/* wrap around at the end of the buffer */
+			cur &= buf->nr_pages - 1;
+		}
+	}
+
+	/*
+	 * In snapshot mode all we have to do is communicate to
+	 * perf_aux_output_end() the address of the current head.  In full
+	 * trace mode the same function expects a size to move rb->aux_head
+	 * forward.
+	 */
+	if (buf->snapshot)
+		local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
+	else
+		local_add(to_read, &buf->data_size);
+
+	CS_LOCK(drvdata->base);
+}
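
GENMASK(31, 5) evaluates to 0xffffffe0 and GENMASK(31, 6) to 0xffffffc0, so the shrunk read size is rounded down to a 32- or 64-byte multiple. A worked example with invented values, for a 2 KiB ETF RAM and a 32-bit wide memory interface:

/*
 * drvdata->size = 0x800, read_ptr = 0x700, write_ptr = 0x100, not full:
 *
 *   to_read  = CIRC_CNT(0x100, 0x700, 0x800)
 *            = (0x100 - 0x700) & 0x7ff              = 0x200 bytes
 *
 * If only 0x150 bytes are left in the perf handle:
 *
 *   to_read  = 0x150 & GENMASK(31, 5)               = 0x140
 *   read_ptr = (0x100 + 0x800) - 0x140              = 0x7c0
 *
 * 0x7c0 is still below drvdata->size, so no wrap correction is applied;
 * RRP is advanced so that only the newest 0x140 bytes are copied out,
 * and buf->lost is bumped to flag the truncation to perf.
 */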
+
+static const struct coresight_ops_sink tmc_etf_sink_ops = {
+	.enable		= tmc_enable_etf_sink,
+	.disable	= tmc_disable_etf_sink,
+	.alloc_buffer	= tmc_alloc_etf_buffer,
+	.free_buffer	= tmc_free_etf_buffer,
+	.set_buffer	= tmc_set_etf_buffer,
+	.reset_buffer	= tmc_reset_etf_buffer,
+	.update_buffer	= tmc_update_etf_buffer,
+};
+
+static const struct coresight_ops_link tmc_etf_link_ops = {
+	.enable		= tmc_enable_etf_link,
+	.disable	= tmc_disable_etf_link,
+};
+
+const struct coresight_ops tmc_etb_cs_ops = {
+	.sink_ops	= &tmc_etf_sink_ops,
+};
+
+const struct coresight_ops tmc_etf_cs_ops = {
+	.sink_ops	= &tmc_etf_sink_ops,
+	.link_ops	= &tmc_etf_link_ops,
+};
+
+int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+{
+	long val;
+	enum tmc_mode mode;
+	int ret = 0;
+	unsigned long flags;
+
+	/* config types are set at boot time and never change */
+	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+			 drvdata->config_type != TMC_CONFIG_TYPE_ETF))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+
+	if (drvdata->reading) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	/* There is no point in reading a TMC in HW FIFO mode */
+	mode = readl_relaxed(drvdata->base + TMC_MODE);
+	if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	val = local_read(&drvdata->mode);
+	/* Don't interfere if operated from Perf */
+	if (val == CS_MODE_PERF) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/* If drvdata::buf is NULL the trace data has been read already */
+	if (drvdata->buf == NULL) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/* Disable the TMC if need be */
+	if (val == CS_MODE_SYSFS)
+		tmc_etb_disable_hw(drvdata);
+
+	drvdata->reading = true;
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	return ret;
+}
+
+int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+{
+	char *buf = NULL;
+	enum tmc_mode mode;
+	unsigned long flags;
+
+	/* config types are set at boot time and never change */
+	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+			 drvdata->config_type != TMC_CONFIG_TYPE_ETF))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+
+	/* There is no point in reading a TMC in HW FIFO mode */
+	mode = readl_relaxed(drvdata->base + TMC_MODE);
+	if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return -EINVAL;
+	}
+
+	/* Re-enable the TMC if need be */
+	if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+		/*
+		 * The trace run will continue with the same allocated trace
+		 * buffer. As such zero-out the buffer so that we don't end
+		 * up with stale data.
+		 *
+		 * Since the tracer is still enabled drvdata::buf
+		 * can't be NULL.
+		 */
+		memset(drvdata->buf, 0, drvdata->size);
+		tmc_etb_enable_hw(drvdata);
+	} else {
+		/*
+		 * The ETB/ETF is not tracing and the buffer was just read.
+		 * As such prepare to free the trace buffer.
+		 */
+		buf = drvdata->buf;
+		drvdata->buf = NULL;
+	}
+
+	drvdata->reading = false;
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	/*
+	 * Free allocated memory outside of the spinlock.  There is no need
+	 * to assert the validity of 'buf' since calling kfree(NULL) is safe.
+	 */
+	kfree(buf);
+
+	return 0;
+}
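
These prepare/unprepare helpers are driven by the open()/release() handlers of the TMC character device (see the coresight-tmc.c changes below), so a sysFS trace session is typically drained from user space roughly like this; the node name is platform specific, "20010000.etf" is only an example:

/* Minimal user-space sketch; error handling trimmed. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static void drain_etf(void)
{
	char page[4096];
	ssize_t n;
	/* open() -> tmc_open() -> tmc_read_prepare_etb() */
	int fd = open("/dev/20010000.etf", O_RDONLY);

	if (fd < 0)
		return;
	while ((n = read(fd, page, sizeof(page))) > 0)
		fwrite(page, 1, n, stdout);	/* store the trace data */
	/* close() -> tmc_release() -> tmc_read_unprepare_etb() */
	close(fd);
}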
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
new file mode 100644
index 0000000..847d1b5
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
@@ -0,0 +1,329 @@
+/*
+ * Copyright(C) 2016 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/coresight.h>
+#include <linux/dma-mapping.h>
+#include "coresight-priv.h"
+#include "coresight-tmc.h"
+
+void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
+{
+	u32 axictl;
+
+	/* Zero out the memory to help with debug */
+	memset(drvdata->vaddr, 0, drvdata->size);
+
+	CS_UNLOCK(drvdata->base);
+
+	/* Wait for the TMCReady bit to be set */
+	tmc_wait_for_tmcready(drvdata);
+
+	writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ);
+	writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
+
+	axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
+	axictl |= TMC_AXICTL_WR_BURST_16;
+	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
+	axictl &= ~TMC_AXICTL_SCT_GAT_MODE;
+	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
+	axictl = (axictl &
+		  ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) |
+		  TMC_AXICTL_PROT_CTL_B1;
+	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
+
+	writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO);
+	writel_relaxed(0x0, drvdata->base + TMC_DBAHI);
+	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
+		       TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
+		       TMC_FFCR_TRIGON_TRIGIN,
+		       drvdata->base + TMC_FFCR);
+	writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
+	tmc_enable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
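
Two details worth spelling out: TMC_RSZ is programmed in 32-bit words, hence the division by 4, and only the low word of the DMA address is handed to the controller (TMC_DBAHI is forced to zero), so this version implicitly assumes the coherent buffer sits below 4 GiB. A quick check with a hypothetical 1 MiB ETR buffer:

/*
 * drvdata->size = 0x100000 bytes (1 MiB)
 * TMC_RSZ       = 0x100000 / 4   = 0x40000 (32-bit words)
 * TMC_DBALO     = lower 32 bits of drvdata->paddr
 * TMC_DBAHI     = 0
 */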
+
+static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
+{
+	u32 rwp, val;
+
+	rwp = readl_relaxed(drvdata->base + TMC_RWP);
+	val = readl_relaxed(drvdata->base + TMC_STS);
+
+	/* Figure out where the valid trace data starts */
+	if (val & BIT(0))
+		drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
+	else
+		drvdata->buf = drvdata->vaddr;
+}
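
RWP holds a bus address inside the DMA buffer, so subtracting @paddr turns it into an offset into the kernel mapping; the STS.Full bit (bit 0) tells us whether the buffer wrapped. With made-up addresses:

/*
 * paddr = 0x80000000, RWP = 0x80000400
 *
 *   Full set  : buf = vaddr + (0x80000400 - 0x80000000) = vaddr + 0x400
 *               (the RAM wrapped, the oldest data starts at the write
 *                pointer)
 *   Full clear: buf = vaddr
 *               (no wrap, valid data starts at the beginning)
 */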
+
+static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
+{
+	CS_UNLOCK(drvdata->base);
+
+	tmc_flush_and_stop(drvdata);
+	/*
+	 * When operating in sysFS mode the content of the buffer needs to be
+	 * read before the TMC is disabled.
+	 */
+	if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
+		tmc_etr_dump_hw(drvdata);
+	tmc_disable_hw(drvdata);
+
+	CS_LOCK(drvdata->base);
+}
+
+static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode)
+{
+	int ret = 0;
+	bool used = false;
+	long val;
+	unsigned long flags;
+	void __iomem *vaddr = NULL;
+	dma_addr_t paddr;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	 /* This shouldn't be happening */
+	if (WARN_ON(mode != CS_MODE_SYSFS))
+		return -EINVAL;
+
+	/*
+	 * If we don't have a buffer, release the lock and allocate memory.
+	 * Otherwise keep the lock and move along.
+	 */
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (!drvdata->vaddr) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+		/*
+		 * Contiguous memory can't be allocated while a spinlock is
+		 * held.  As such allocate memory here and free it if a buffer
+		 * has already been allocated (from a previous session).
+		 */
+		vaddr = dma_alloc_coherent(drvdata->dev, drvdata->size,
+					   &paddr, GFP_KERNEL);
+		if (!vaddr)
+			return -ENOMEM;
+
+		/* Let's try again */
+		spin_lock_irqsave(&drvdata->spinlock, flags);
+	}
+
+	if (drvdata->reading) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	val = local_xchg(&drvdata->mode, mode);
+	/*
+	 * In sysFS mode we can have multiple writers per sink.  Since this
+	 * sink is already enabled, no memory is needed and the HW need not be
+	 * touched.
+	 */
+	if (val == CS_MODE_SYSFS)
+		goto out;
+
+	/*
+	 * If drvdata::buf == NULL, use the memory allocated above.
+	 * Otherwise a buffer still exists from a previous session, so
+	 * simply use that.
+	 */
+	if (drvdata->buf == NULL) {
+		used = true;
+		drvdata->vaddr = vaddr;
+		drvdata->paddr = paddr;
+		drvdata->buf = drvdata->vaddr;
+	}
+
+	memset(drvdata->vaddr, 0, drvdata->size);
+
+	tmc_etr_enable_hw(drvdata);
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	/* Free memory outside the spinlock if need be */
+	if (!used && vaddr)
+		dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr);
+
+	if (!ret)
+		dev_info(drvdata->dev, "TMC-ETR enabled\n");
+
+	return ret;
+}
+
+static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, u32 mode)
+{
+	int ret = 0;
+	long val;
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	 /* This shouldn't be happening */
+	if (WARN_ON(mode != CS_MODE_PERF))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	val = local_xchg(&drvdata->mode, mode);
+	/*
+	 * In Perf mode there can be only one writer per sink.  There
+	 * is also no need to continue if the ETR is already operated
+	 * from sysFS.
+	 */
+	if (val != CS_MODE_DISABLED) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	tmc_etr_enable_hw(drvdata);
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	return ret;
+}
+
+static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode)
+{
+	switch (mode) {
+	case CS_MODE_SYSFS:
+		return tmc_enable_etr_sink_sysfs(csdev, mode);
+	case CS_MODE_PERF:
+		return tmc_enable_etr_sink_perf(csdev, mode);
+	}
+
+	/* We shouldn't be here */
+	return -EINVAL;
+}
+
+static void tmc_disable_etr_sink(struct coresight_device *csdev)
+{
+	long val;
+	unsigned long flags;
+	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		spin_unlock_irqrestore(&drvdata->spinlock, flags);
+		return;
+	}
+
+	val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
+	/* Disable the TMC only if it was actually enabled */
+	if (val != CS_MODE_DISABLED)
+		tmc_etr_disable_hw(drvdata);
+
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	dev_info(drvdata->dev, "TMC-ETR disabled\n");
+}
+
+static const struct coresight_ops_sink tmc_etr_sink_ops = {
+	.enable		= tmc_enable_etr_sink,
+	.disable	= tmc_disable_etr_sink,
+};
+
+const struct coresight_ops tmc_etr_cs_ops = {
+	.sink_ops	= &tmc_etr_sink_ops,
+};
+
+int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
+{
+	int ret = 0;
+	long val;
+	unsigned long flags;
+
+	/* config types are set at boot time and never change */
+	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+	if (drvdata->reading) {
+		ret = -EBUSY;
+		goto out;
+	}
+
+	val = local_read(&drvdata->mode);
+	/* Don't interfere if operated from Perf */
+	if (val == CS_MODE_PERF) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/* If drvdata::buf is NULL the trace data has been read already */
+	if (drvdata->buf == NULL) {
+		ret = -EINVAL;
+		goto out;
+	}
+
+	/* Disable the TMC if need be */
+	if (val == CS_MODE_SYSFS)
+		tmc_etr_disable_hw(drvdata);
+
+	drvdata->reading = true;
+out:
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	return ret;
+}
+
+int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata)
+{
+	unsigned long flags;
+	dma_addr_t paddr;
+	void __iomem *vaddr = NULL;
+
+	/* config types are set at boot time and never change */
+	if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR))
+		return -EINVAL;
+
+	spin_lock_irqsave(&drvdata->spinlock, flags);
+
+	/* Re-enable the TMC if need be */
+	if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+		/*
+		 * The trace run will continue with the same allocated trace
+		 * buffer. As such zero-out the buffer so that we don't end
+		 * up with stale data.
+		 *
+		 * Since the tracer is still enabled drvdata::buf
+		 * can't be NULL.
+		 */
+		memset(drvdata->buf, 0, drvdata->size);
+		tmc_etr_enable_hw(drvdata);
+	} else {
+		/*
+		 * The ETR is not tracing and the buffer was just read.
+		 * As such prepare to free the trace buffer.
+		 */
+		vaddr = drvdata->vaddr;
+		paddr = drvdata->paddr;
+		drvdata->buf = NULL;
+	}
+
+	drvdata->reading = false;
+	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	/* Free allocated memory outside of the spinlock */
+	if (vaddr)
+		dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr);
+
+	return 0;
+}
diff --git a/drivers/hwtracing/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
index 1be191f..9e02ac9 100644
--- a/drivers/hwtracing/coresight/coresight-tmc.c
+++ b/drivers/hwtracing/coresight/coresight-tmc.c
@@ -30,127 +30,27 @@
 #include <linux/amba/bus.h>
 
 #include "coresight-priv.h"
+#include "coresight-tmc.h"
 
-#define TMC_RSZ			0x004
-#define TMC_STS			0x00c
-#define TMC_RRD			0x010
-#define TMC_RRP			0x014
-#define TMC_RWP			0x018
-#define TMC_TRG			0x01c
-#define TMC_CTL			0x020
-#define TMC_RWD			0x024
-#define TMC_MODE		0x028
-#define TMC_LBUFLEVEL		0x02c
-#define TMC_CBUFLEVEL		0x030
-#define TMC_BUFWM		0x034
-#define TMC_RRPHI		0x038
-#define TMC_RWPHI		0x03c
-#define TMC_AXICTL		0x110
-#define TMC_DBALO		0x118
-#define TMC_DBAHI		0x11c
-#define TMC_FFSR		0x300
-#define TMC_FFCR		0x304
-#define TMC_PSCR		0x308
-#define TMC_ITMISCOP0		0xee0
-#define TMC_ITTRFLIN		0xee8
-#define TMC_ITATBDATA0		0xeec
-#define TMC_ITATBCTR2		0xef0
-#define TMC_ITATBCTR1		0xef4
-#define TMC_ITATBCTR0		0xef8
-
-/* register description */
-/* TMC_CTL - 0x020 */
-#define TMC_CTL_CAPT_EN		BIT(0)
-/* TMC_STS - 0x00C */
-#define TMC_STS_TRIGGERED	BIT(1)
-/* TMC_AXICTL - 0x110 */
-#define TMC_AXICTL_PROT_CTL_B0	BIT(0)
-#define TMC_AXICTL_PROT_CTL_B1	BIT(1)
-#define TMC_AXICTL_SCT_GAT_MODE	BIT(7)
-#define TMC_AXICTL_WR_BURST_LEN 0xF00
-/* TMC_FFCR - 0x304 */
-#define TMC_FFCR_EN_FMT		BIT(0)
-#define TMC_FFCR_EN_TI		BIT(1)
-#define TMC_FFCR_FON_FLIN	BIT(4)
-#define TMC_FFCR_FON_TRIG_EVT	BIT(5)
-#define TMC_FFCR_FLUSHMAN	BIT(6)
-#define TMC_FFCR_TRIGON_TRIGIN	BIT(8)
-#define TMC_FFCR_STOP_ON_FLUSH	BIT(12)
-
-#define TMC_STS_TRIGGERED_BIT	2
-#define TMC_FFCR_FLUSHMAN_BIT	6
-
-enum tmc_config_type {
-	TMC_CONFIG_TYPE_ETB,
-	TMC_CONFIG_TYPE_ETR,
-	TMC_CONFIG_TYPE_ETF,
-};
-
-enum tmc_mode {
-	TMC_MODE_CIRCULAR_BUFFER,
-	TMC_MODE_SOFTWARE_FIFO,
-	TMC_MODE_HARDWARE_FIFO,
-};
-
-enum tmc_mem_intf_width {
-	TMC_MEM_INTF_WIDTH_32BITS	= 0x2,
-	TMC_MEM_INTF_WIDTH_64BITS	= 0x3,
-	TMC_MEM_INTF_WIDTH_128BITS	= 0x4,
-	TMC_MEM_INTF_WIDTH_256BITS	= 0x5,
-};
-
-/**
- * struct tmc_drvdata - specifics associated to an TMC component
- * @base:	memory mapped base address for this component.
- * @dev:	the device entity associated to this component.
- * @csdev:	component vitals needed by the framework.
- * @miscdev:	specifics to handle "/dev/xyz.tmc" entry.
- * @spinlock:	only one at a time pls.
- * @read_count:	manages preparation of buffer for reading.
- * @buf:	area of memory where trace data get sent.
- * @paddr:	DMA start location in RAM.
- * @vaddr:	virtual representation of @paddr.
- * @size:	@buf size.
- * @enable:	this TMC is being used.
- * @config_type: TMC variant, must be of type @tmc_config_type.
- * @trigger_cntr: amount of words to store after a trigger.
- */
-struct tmc_drvdata {
-	void __iomem		*base;
-	struct device		*dev;
-	struct coresight_device	*csdev;
-	struct miscdevice	miscdev;
-	spinlock_t		spinlock;
-	int			read_count;
-	bool			reading;
-	char			*buf;
-	dma_addr_t		paddr;
-	void			*vaddr;
-	u32			size;
-	bool			enable;
-	enum tmc_config_type	config_type;
-	u32			trigger_cntr;
-};
-
-static void tmc_wait_for_ready(struct tmc_drvdata *drvdata)
+void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
 {
 	/* Ensure formatter, unformatter and hardware fifo are empty */
 	if (coresight_timeout(drvdata->base,
-			      TMC_STS, TMC_STS_TRIGGERED_BIT, 1)) {
+			      TMC_STS, TMC_STS_TMCREADY_BIT, 1)) {
 		dev_err(drvdata->dev,
 			"timeout observed when probing at offset %#x\n",
 			TMC_STS);
 	}
 }
 
-static void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
+void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
 {
 	u32 ffcr;
 
 	ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
 	ffcr |= TMC_FFCR_STOP_ON_FLUSH;
 	writel_relaxed(ffcr, drvdata->base + TMC_FFCR);
-	ffcr |= TMC_FFCR_FLUSHMAN;
+	ffcr |= BIT(TMC_FFCR_FLUSHMAN_BIT);
 	writel_relaxed(ffcr, drvdata->base + TMC_FFCR);
 	/* Ensure flush completes */
 	if (coresight_timeout(drvdata->base,
@@ -160,338 +60,73 @@
 			TMC_FFCR);
 	}
 
-	tmc_wait_for_ready(drvdata);
+	tmc_wait_for_tmcready(drvdata);
 }
 
-static void tmc_enable_hw(struct tmc_drvdata *drvdata)
+void tmc_enable_hw(struct tmc_drvdata *drvdata)
 {
 	writel_relaxed(TMC_CTL_CAPT_EN, drvdata->base + TMC_CTL);
 }
 
-static void tmc_disable_hw(struct tmc_drvdata *drvdata)
+void tmc_disable_hw(struct tmc_drvdata *drvdata)
 {
 	writel_relaxed(0x0, drvdata->base + TMC_CTL);
 }
 
-static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
-{
-	/* Zero out the memory to help with debug */
-	memset(drvdata->buf, 0, drvdata->size);
-
-	CS_UNLOCK(drvdata->base);
-
-	writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
-	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
-		       TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
-		       TMC_FFCR_TRIGON_TRIGIN,
-		       drvdata->base + TMC_FFCR);
-
-	writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
-	tmc_enable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
-{
-	u32 axictl;
-
-	/* Zero out the memory to help with debug */
-	memset(drvdata->vaddr, 0, drvdata->size);
-
-	CS_UNLOCK(drvdata->base);
-
-	writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ);
-	writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
-
-	axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
-	axictl |= TMC_AXICTL_WR_BURST_LEN;
-	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
-	axictl &= ~TMC_AXICTL_SCT_GAT_MODE;
-	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
-	axictl = (axictl &
-		  ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) |
-		  TMC_AXICTL_PROT_CTL_B1;
-	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
-
-	writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO);
-	writel_relaxed(0x0, drvdata->base + TMC_DBAHI);
-	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
-		       TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
-		       TMC_FFCR_TRIGON_TRIGIN,
-		       drvdata->base + TMC_FFCR);
-	writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
-	tmc_enable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
-{
-	CS_UNLOCK(drvdata->base);
-
-	writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE);
-	writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI,
-		       drvdata->base + TMC_FFCR);
-	writel_relaxed(0x0, drvdata->base + TMC_BUFWM);
-	tmc_enable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static int tmc_enable(struct tmc_drvdata *drvdata, enum tmc_mode mode)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	if (drvdata->reading) {
-		spin_unlock_irqrestore(&drvdata->spinlock, flags);
-		return -EBUSY;
-	}
-
-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
-		tmc_etb_enable_hw(drvdata);
-	} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
-		tmc_etr_enable_hw(drvdata);
-	} else {
-		if (mode == TMC_MODE_CIRCULAR_BUFFER)
-			tmc_etb_enable_hw(drvdata);
-		else
-			tmc_etf_enable_hw(drvdata);
-	}
-	drvdata->enable = true;
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
-	dev_info(drvdata->dev, "TMC enabled\n");
-	return 0;
-}
-
-static int tmc_enable_sink(struct coresight_device *csdev, u32 mode)
-{
-	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	return tmc_enable(drvdata, TMC_MODE_CIRCULAR_BUFFER);
-}
-
-static int tmc_enable_link(struct coresight_device *csdev, int inport,
-			   int outport)
-{
-	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	return tmc_enable(drvdata, TMC_MODE_HARDWARE_FIFO);
-}
-
-static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
-{
-	enum tmc_mem_intf_width memwidth;
-	u8 memwords;
-	char *bufp;
-	u32 read_data;
-	int i;
-
-	memwidth = BMVAL(readl_relaxed(drvdata->base + CORESIGHT_DEVID), 8, 10);
-	if (memwidth == TMC_MEM_INTF_WIDTH_32BITS)
-		memwords = 1;
-	else if (memwidth == TMC_MEM_INTF_WIDTH_64BITS)
-		memwords = 2;
-	else if (memwidth == TMC_MEM_INTF_WIDTH_128BITS)
-		memwords = 4;
-	else
-		memwords = 8;
-
-	bufp = drvdata->buf;
-	while (1) {
-		for (i = 0; i < memwords; i++) {
-			read_data = readl_relaxed(drvdata->base + TMC_RRD);
-			if (read_data == 0xFFFFFFFF)
-				return;
-			memcpy(bufp, &read_data, 4);
-			bufp += 4;
-		}
-	}
-}
-
-static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
-{
-	CS_UNLOCK(drvdata->base);
-
-	tmc_flush_and_stop(drvdata);
-	tmc_etb_dump_hw(drvdata);
-	tmc_disable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
-{
-	u32 rwp, val;
-
-	rwp = readl_relaxed(drvdata->base + TMC_RWP);
-	val = readl_relaxed(drvdata->base + TMC_STS);
-
-	/* How much memory do we still have */
-	if (val & BIT(0))
-		drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
-	else
-		drvdata->buf = drvdata->vaddr;
-}
-
-static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
-{
-	CS_UNLOCK(drvdata->base);
-
-	tmc_flush_and_stop(drvdata);
-	tmc_etr_dump_hw(drvdata);
-	tmc_disable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
-{
-	CS_UNLOCK(drvdata->base);
-
-	tmc_flush_and_stop(drvdata);
-	tmc_disable_hw(drvdata);
-
-	CS_LOCK(drvdata->base);
-}
-
-static void tmc_disable(struct tmc_drvdata *drvdata, enum tmc_mode mode)
-{
-	unsigned long flags;
-
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	if (drvdata->reading)
-		goto out;
-
-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
-		tmc_etb_disable_hw(drvdata);
-	} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
-		tmc_etr_disable_hw(drvdata);
-	} else {
-		if (mode == TMC_MODE_CIRCULAR_BUFFER)
-			tmc_etb_disable_hw(drvdata);
-		else
-			tmc_etf_disable_hw(drvdata);
-	}
-out:
-	drvdata->enable = false;
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
-	dev_info(drvdata->dev, "TMC disabled\n");
-}
-
-static void tmc_disable_sink(struct coresight_device *csdev)
-{
-	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	tmc_disable(drvdata, TMC_MODE_CIRCULAR_BUFFER);
-}
-
-static void tmc_disable_link(struct coresight_device *csdev, int inport,
-			     int outport)
-{
-	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
-	tmc_disable(drvdata, TMC_MODE_HARDWARE_FIFO);
-}
-
-static const struct coresight_ops_sink tmc_sink_ops = {
-	.enable		= tmc_enable_sink,
-	.disable	= tmc_disable_sink,
-};
-
-static const struct coresight_ops_link tmc_link_ops = {
-	.enable		= tmc_enable_link,
-	.disable	= tmc_disable_link,
-};
-
-static const struct coresight_ops tmc_etb_cs_ops = {
-	.sink_ops	= &tmc_sink_ops,
-};
-
-static const struct coresight_ops tmc_etr_cs_ops = {
-	.sink_ops	= &tmc_sink_ops,
-};
-
-static const struct coresight_ops tmc_etf_cs_ops = {
-	.sink_ops	= &tmc_sink_ops,
-	.link_ops	= &tmc_link_ops,
-};
-
 static int tmc_read_prepare(struct tmc_drvdata *drvdata)
 {
-	int ret;
-	unsigned long flags;
-	enum tmc_mode mode;
+	int ret = 0;
 
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	if (!drvdata->enable)
-		goto out;
-
-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
-		tmc_etb_disable_hw(drvdata);
-	} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
-		tmc_etr_disable_hw(drvdata);
-	} else {
-		mode = readl_relaxed(drvdata->base + TMC_MODE);
-		if (mode == TMC_MODE_CIRCULAR_BUFFER) {
-			tmc_etb_disable_hw(drvdata);
-		} else {
-			ret = -ENODEV;
-			goto err;
-		}
+	switch (drvdata->config_type) {
+	case TMC_CONFIG_TYPE_ETB:
+	case TMC_CONFIG_TYPE_ETF:
+		ret = tmc_read_prepare_etb(drvdata);
+		break;
+	case TMC_CONFIG_TYPE_ETR:
+		ret = tmc_read_prepare_etr(drvdata);
+		break;
+	default:
+		ret = -EINVAL;
 	}
-out:
-	drvdata->reading = true;
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
-	dev_info(drvdata->dev, "TMC read start\n");
-	return 0;
-err:
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
+	if (!ret)
+		dev_info(drvdata->dev, "TMC read start\n");
+
 	return ret;
 }
 
-static void tmc_read_unprepare(struct tmc_drvdata *drvdata)
+static int tmc_read_unprepare(struct tmc_drvdata *drvdata)
 {
-	unsigned long flags;
-	enum tmc_mode mode;
+	int ret = 0;
 
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	if (!drvdata->enable)
-		goto out;
-
-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
-		tmc_etb_enable_hw(drvdata);
-	} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
-		tmc_etr_enable_hw(drvdata);
-	} else {
-		mode = readl_relaxed(drvdata->base + TMC_MODE);
-		if (mode == TMC_MODE_CIRCULAR_BUFFER)
-			tmc_etb_enable_hw(drvdata);
+	switch (drvdata->config_type) {
+	case TMC_CONFIG_TYPE_ETB:
+	case TMC_CONFIG_TYPE_ETF:
+		ret = tmc_read_unprepare_etb(drvdata);
+		break;
+	case TMC_CONFIG_TYPE_ETR:
+		ret = tmc_read_unprepare_etr(drvdata);
+		break;
+	default:
+		ret = -EINVAL;
 	}
-out:
-	drvdata->reading = false;
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
 
-	dev_info(drvdata->dev, "TMC read end\n");
+	if (!ret)
+		dev_info(drvdata->dev, "TMC read end\n");
+
+	return ret;
 }
 
 static int tmc_open(struct inode *inode, struct file *file)
 {
+	int ret;
 	struct tmc_drvdata *drvdata = container_of(file->private_data,
 						   struct tmc_drvdata, miscdev);
-	int ret = 0;
-
-	if (drvdata->read_count++)
-		goto out;
 
 	ret = tmc_read_prepare(drvdata);
 	if (ret)
 		return ret;
-out:
+
 	nonseekable_open(inode, file);
 
 	dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
@@ -531,19 +166,14 @@
 
 static int tmc_release(struct inode *inode, struct file *file)
 {
+	int ret;
 	struct tmc_drvdata *drvdata = container_of(file->private_data,
 						   struct tmc_drvdata, miscdev);
 
-	if (--drvdata->read_count) {
-		if (drvdata->read_count < 0) {
-			dev_err(drvdata->dev, "mismatched close\n");
-			drvdata->read_count = 0;
-		}
-		goto out;
-	}
+	ret = tmc_read_unprepare(drvdata);
+	if (ret)
+		return ret;
 
-	tmc_read_unprepare(drvdata);
-out:
 	dev_dbg(drvdata->dev, "%s: released\n", __func__);
 	return 0;
 }
@@ -556,56 +186,71 @@
 	.llseek		= no_llseek,
 };
 
-static ssize_t status_show(struct device *dev,
-			   struct device_attribute *attr, char *buf)
+static enum tmc_mem_intf_width tmc_get_memwidth(u32 devid)
 {
-	unsigned long flags;
-	u32 tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg;
-	u32 tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr;
-	u32 devid;
-	struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
+	enum tmc_mem_intf_width memwidth;
 
-	pm_runtime_get_sync(drvdata->dev);
-	spin_lock_irqsave(&drvdata->spinlock, flags);
-	CS_UNLOCK(drvdata->base);
+	/*
+	 * Excerpt from the TRM:
+	 *
+	 * DEVID::MEMWIDTH[10:8]
+	 * 0x2 Memory interface databus is 32 bits wide.
+	 * 0x3 Memory interface databus is 64 bits wide.
+	 * 0x4 Memory interface databus is 128 bits wide.
+	 * 0x5 Memory interface databus is 256 bits wide.
+	 */
+	switch (BMVAL(devid, 8, 10)) {
+	case 0x2:
+		memwidth = TMC_MEM_INTF_WIDTH_32BITS;
+		break;
+	case 0x3:
+		memwidth = TMC_MEM_INTF_WIDTH_64BITS;
+		break;
+	case 0x4:
+		memwidth = TMC_MEM_INTF_WIDTH_128BITS;
+		break;
+	case 0x5:
+		memwidth = TMC_MEM_INTF_WIDTH_256BITS;
+		break;
+	default:
+		memwidth = 0;
+	}
 
-	tmc_rsz = readl_relaxed(drvdata->base + TMC_RSZ);
-	tmc_sts = readl_relaxed(drvdata->base + TMC_STS);
-	tmc_rrp = readl_relaxed(drvdata->base + TMC_RRP);
-	tmc_rwp = readl_relaxed(drvdata->base + TMC_RWP);
-	tmc_trg = readl_relaxed(drvdata->base + TMC_TRG);
-	tmc_ctl = readl_relaxed(drvdata->base + TMC_CTL);
-	tmc_ffsr = readl_relaxed(drvdata->base + TMC_FFSR);
-	tmc_ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
-	tmc_mode = readl_relaxed(drvdata->base + TMC_MODE);
-	tmc_pscr = readl_relaxed(drvdata->base + TMC_PSCR);
-	devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
-
-	CS_LOCK(drvdata->base);
-	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(drvdata->dev);
-
-	return sprintf(buf,
-		       "Depth:\t\t0x%x\n"
-		       "Status:\t\t0x%x\n"
-		       "RAM read ptr:\t0x%x\n"
-		       "RAM wrt ptr:\t0x%x\n"
-		       "Trigger cnt:\t0x%x\n"
-		       "Control:\t0x%x\n"
-		       "Flush status:\t0x%x\n"
-		       "Flush ctrl:\t0x%x\n"
-		       "Mode:\t\t0x%x\n"
-		       "PSRC:\t\t0x%x\n"
-		       "DEVID:\t\t0x%x\n",
-			tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg,
-			tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr, devid);
-
-	return -EINVAL;
+	return memwidth;
 }
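
The memory interface width and the configuration type are decoded from the same DEVID register; tmc_probe() below pulls the type out of bits [7:6] with BMVAL(devid, 6, 7) while this helper decodes bits [10:8]. A worked example with an invented DEVID value:

/*
 * devid = 0x280 (0b10_1000_0000)
 *
 *   BMVAL(devid, 6, 7)  = 0x2 -> TMC_CONFIG_TYPE_ETF
 *   BMVAL(devid, 8, 10) = 0x2 -> TMC_MEM_INTF_WIDTH_32BITS
 */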
-static DEVICE_ATTR_RO(status);
 
-static ssize_t trigger_cntr_show(struct device *dev,
-			    struct device_attribute *attr, char *buf)
+#define coresight_tmc_simple_func(name, offset)			\
+	coresight_simple_func(struct tmc_drvdata, name, offset)
+
+coresight_tmc_simple_func(rsz, TMC_RSZ);
+coresight_tmc_simple_func(sts, TMC_STS);
+coresight_tmc_simple_func(rrp, TMC_RRP);
+coresight_tmc_simple_func(rwp, TMC_RWP);
+coresight_tmc_simple_func(trg, TMC_TRG);
+coresight_tmc_simple_func(ctl, TMC_CTL);
+coresight_tmc_simple_func(ffsr, TMC_FFSR);
+coresight_tmc_simple_func(ffcr, TMC_FFCR);
+coresight_tmc_simple_func(mode, TMC_MODE);
+coresight_tmc_simple_func(pscr, TMC_PSCR);
+coresight_tmc_simple_func(devid, CORESIGHT_DEVID);
+
+static struct attribute *coresight_tmc_mgmt_attrs[] = {
+	&dev_attr_rsz.attr,
+	&dev_attr_sts.attr,
+	&dev_attr_rrp.attr,
+	&dev_attr_rwp.attr,
+	&dev_attr_trg.attr,
+	&dev_attr_ctl.attr,
+	&dev_attr_ffsr.attr,
+	&dev_attr_ffcr.attr,
+	&dev_attr_mode.attr,
+	&dev_attr_pscr.attr,
+	&dev_attr_devid.attr,
+	NULL,
+};
+
+ssize_t trigger_cntr_show(struct device *dev,
+			  struct device_attribute *attr, char *buf)
 {
 	struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
 	unsigned long val = drvdata->trigger_cntr;
@@ -630,26 +275,25 @@
 }
 static DEVICE_ATTR_RW(trigger_cntr);
 
-static struct attribute *coresight_etb_attrs[] = {
+static struct attribute *coresight_tmc_attrs[] = {
 	&dev_attr_trigger_cntr.attr,
-	&dev_attr_status.attr,
 	NULL,
 };
-ATTRIBUTE_GROUPS(coresight_etb);
 
-static struct attribute *coresight_etr_attrs[] = {
-	&dev_attr_trigger_cntr.attr,
-	&dev_attr_status.attr,
-	NULL,
+static const struct attribute_group coresight_tmc_group = {
+	.attrs = coresight_tmc_attrs,
 };
-ATTRIBUTE_GROUPS(coresight_etr);
 
-static struct attribute *coresight_etf_attrs[] = {
-	&dev_attr_trigger_cntr.attr,
-	&dev_attr_status.attr,
+static const struct attribute_group coresight_tmc_mgmt_group = {
+	.attrs = coresight_tmc_mgmt_attrs,
+	.name = "mgmt",
+};
+
+const struct attribute_group *coresight_tmc_groups[] = {
+	&coresight_tmc_group,
+	&coresight_tmc_mgmt_group,
 	NULL,
 };
-ATTRIBUTE_GROUPS(coresight_etf);
 
 static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
 {
@@ -688,6 +332,7 @@
 
 	devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
 	drvdata->config_type = BMVAL(devid, 6, 7);
+	drvdata->memwidth = tmc_get_memwidth(devid);
 
 	if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
 		if (np)
@@ -702,20 +347,6 @@
 
 	pm_runtime_put(&adev->dev);
 
-	if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
-		drvdata->vaddr = dma_alloc_coherent(dev, drvdata->size,
-						&drvdata->paddr, GFP_KERNEL);
-		if (!drvdata->vaddr)
-			return -ENOMEM;
-
-		memset(drvdata->vaddr, 0, drvdata->size);
-		drvdata->buf = drvdata->vaddr;
-	} else {
-		drvdata->buf = devm_kzalloc(dev, drvdata->size, GFP_KERNEL);
-		if (!drvdata->buf)
-			return -ENOMEM;
-	}
-
 	desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
 	if (!desc) {
 		ret = -ENOMEM;
@@ -725,20 +356,18 @@
 	desc->pdata = pdata;
 	desc->dev = dev;
 	desc->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
+	desc->groups = coresight_tmc_groups;
 
 	if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
 		desc->type = CORESIGHT_DEV_TYPE_SINK;
 		desc->ops = &tmc_etb_cs_ops;
-		desc->groups = coresight_etb_groups;
 	} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
 		desc->type = CORESIGHT_DEV_TYPE_SINK;
 		desc->ops = &tmc_etr_cs_ops;
-		desc->groups = coresight_etr_groups;
 	} else {
 		desc->type = CORESIGHT_DEV_TYPE_LINKSINK;
 		desc->subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;
 		desc->ops = &tmc_etf_cs_ops;
-		desc->groups = coresight_etf_groups;
 	}
 
 	drvdata->csdev = coresight_register(desc);
@@ -754,7 +383,6 @@
 	if (ret)
 		goto err_misc_register;
 
-	dev_info(dev, "TMC initialized\n");
 	return 0;
 
 err_misc_register:
diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h
new file mode 100644
index 0000000..5c5fe2a
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc.h
@@ -0,0 +1,140 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CORESIGHT_TMC_H
+#define _CORESIGHT_TMC_H
+
+#include <linux/miscdevice.h>
+
+#define TMC_RSZ			0x004
+#define TMC_STS			0x00c
+#define TMC_RRD			0x010
+#define TMC_RRP			0x014
+#define TMC_RWP			0x018
+#define TMC_TRG			0x01c
+#define TMC_CTL			0x020
+#define TMC_RWD			0x024
+#define TMC_MODE		0x028
+#define TMC_LBUFLEVEL		0x02c
+#define TMC_CBUFLEVEL		0x030
+#define TMC_BUFWM		0x034
+#define TMC_RRPHI		0x038
+#define TMC_RWPHI		0x03c
+#define TMC_AXICTL		0x110
+#define TMC_DBALO		0x118
+#define TMC_DBAHI		0x11c
+#define TMC_FFSR		0x300
+#define TMC_FFCR		0x304
+#define TMC_PSCR		0x308
+#define TMC_ITMISCOP0		0xee0
+#define TMC_ITTRFLIN		0xee8
+#define TMC_ITATBDATA0		0xeec
+#define TMC_ITATBCTR2		0xef0
+#define TMC_ITATBCTR1		0xef4
+#define TMC_ITATBCTR0		0xef8
+
+/* register description */
+/* TMC_CTL - 0x020 */
+#define TMC_CTL_CAPT_EN		BIT(0)
+/* TMC_STS - 0x00C */
+#define TMC_STS_TMCREADY_BIT	2
+#define TMC_STS_FULL		BIT(0)
+#define TMC_STS_TRIGGERED	BIT(1)
+/* TMC_AXICTL - 0x110 */
+#define TMC_AXICTL_PROT_CTL_B0	BIT(0)
+#define TMC_AXICTL_PROT_CTL_B1	BIT(1)
+#define TMC_AXICTL_SCT_GAT_MODE	BIT(7)
+#define TMC_AXICTL_WR_BURST_16	0xF00
+/* TMC_FFCR - 0x304 */
+#define TMC_FFCR_FLUSHMAN_BIT	6
+#define TMC_FFCR_EN_FMT		BIT(0)
+#define TMC_FFCR_EN_TI		BIT(1)
+#define TMC_FFCR_FON_FLIN	BIT(4)
+#define TMC_FFCR_FON_TRIG_EVT	BIT(5)
+#define TMC_FFCR_TRIGON_TRIGIN	BIT(8)
+#define TMC_FFCR_STOP_ON_FLUSH	BIT(12)
+
+
+enum tmc_config_type {
+	TMC_CONFIG_TYPE_ETB,
+	TMC_CONFIG_TYPE_ETR,
+	TMC_CONFIG_TYPE_ETF,
+};
+
+enum tmc_mode {
+	TMC_MODE_CIRCULAR_BUFFER,
+	TMC_MODE_SOFTWARE_FIFO,
+	TMC_MODE_HARDWARE_FIFO,
+};
+
+enum tmc_mem_intf_width {
+	TMC_MEM_INTF_WIDTH_32BITS	= 1,
+	TMC_MEM_INTF_WIDTH_64BITS	= 2,
+	TMC_MEM_INTF_WIDTH_128BITS	= 4,
+	TMC_MEM_INTF_WIDTH_256BITS	= 8,
+};
+
+/**
+ * struct tmc_drvdata - specifics associated to an TMC component
+ * @base:	memory mapped base address for this component.
+ * @dev:	the device entity associated to this component.
+ * @csdev:	component vitals needed by the framework.
+ * @miscdev:	specifics to handle "/dev/xyz.tmc" entry.
+ * @spinlock:	serialize accesses to this TMC's configuration.
+ * @buf:	area of memory where trace data get sent.
+ * @paddr:	DMA start location in RAM.
+ * @vaddr:	virtual representation of @paddr.
+ * @size:	@buf size.
+ * @mode:	how this TMC is being used.
+ * @config_type: TMC variant, must be of type @tmc_config_type.
+ * @memwidth:	width of the memory interface databus, in 32-bit words.
+ * @trigger_cntr: amount of words to store after a trigger.
+ */
+struct tmc_drvdata {
+	void __iomem		*base;
+	struct device		*dev;
+	struct coresight_device	*csdev;
+	struct miscdevice	miscdev;
+	spinlock_t		spinlock;
+	bool			reading;
+	char			*buf;
+	dma_addr_t		paddr;
+	void __iomem		*vaddr;
+	u32			size;
+	local_t			mode;
+	enum tmc_config_type	config_type;
+	enum tmc_mem_intf_width	memwidth;
+	u32			trigger_cntr;
+};
+
+/* Generic functions */
+void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata);
+void tmc_flush_and_stop(struct tmc_drvdata *drvdata);
+void tmc_enable_hw(struct tmc_drvdata *drvdata);
+void tmc_disable_hw(struct tmc_drvdata *drvdata);
+
+/* ETB/ETF functions */
+int tmc_read_prepare_etb(struct tmc_drvdata *drvdata);
+int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata);
+extern const struct coresight_ops tmc_etb_cs_ops;
+extern const struct coresight_ops tmc_etf_cs_ops;
+
+/* ETR functions */
+int tmc_read_prepare_etr(struct tmc_drvdata *drvdata);
+int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata);
+extern const struct coresight_ops tmc_etr_cs_ops;
+#endif
diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
index 8fb09d9..4e471e2 100644
--- a/drivers/hwtracing/coresight/coresight-tpiu.c
+++ b/drivers/hwtracing/coresight/coresight-tpiu.c
@@ -167,7 +167,6 @@
 	if (IS_ERR(drvdata->csdev))
 		return PTR_ERR(drvdata->csdev);
 
-	dev_info(dev, "TPIU initialized\n");
 	return 0;
 }
 
diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
index 2ea5961..5443d03 100644
--- a/drivers/hwtracing/coresight/coresight.c
+++ b/drivers/hwtracing/coresight/coresight.c
@@ -43,7 +43,15 @@
  * When operating Coresight drivers from the sysFS interface, only a single
  * path can exist from a tracer (associated to a CPU) to a sink.
  */
-static DEFINE_PER_CPU(struct list_head *, sysfs_path);
+static DEFINE_PER_CPU(struct list_head *, tracer_path);
+
+/*
+ * As of this writing only a single STM can be found in CS topologies.  Since
+ * there is no way to know if we'll ever see more and what kind of
+ * configuration they will enact, for the time being only define a single path
+ * for STM.
+ */
+static struct list_head *stm_path;
 
 static int coresight_id_match(struct device *dev, void *data)
 {
@@ -257,15 +265,27 @@
 
 void coresight_disable_path(struct list_head *path)
 {
+	u32 type;
 	struct coresight_node *nd;
 	struct coresight_device *csdev, *parent, *child;
 
 	list_for_each_entry(nd, path, link) {
 		csdev = nd->csdev;
+		type = csdev->type;
 
-		switch (csdev->type) {
+		/*
+		 * ETF devices are tricky... They can be a link or a sink,
+		 * depending on how they are configured.  If an ETF has been
+		 * "activated" it will be configured as a sink, otherwise
+		 * go ahead with the link configuration.
+		 */
+		if (type == CORESIGHT_DEV_TYPE_LINKSINK)
+			type = (csdev == coresight_get_sink(path)) ?
+						CORESIGHT_DEV_TYPE_SINK :
+						CORESIGHT_DEV_TYPE_LINK;
+
+		switch (type) {
 		case CORESIGHT_DEV_TYPE_SINK:
-		case CORESIGHT_DEV_TYPE_LINKSINK:
 			coresight_disable_sink(csdev);
 			break;
 		case CORESIGHT_DEV_TYPE_SOURCE:
@@ -286,15 +306,27 @@
 {
 
 	int ret = 0;
+	u32 type;
 	struct coresight_node *nd;
 	struct coresight_device *csdev, *parent, *child;
 
 	list_for_each_entry_reverse(nd, path, link) {
 		csdev = nd->csdev;
+		type = csdev->type;
 
-		switch (csdev->type) {
+		/*
+		 * ETF devices are tricky... They can be a link or a sink,
+		 * depending on how they are configured.  If an ETF has been
+		 * "activated" it will be configured as a sink, otherwise
+		 * go ahead with the link configuration.
+		 */
+		if (type == CORESIGHT_DEV_TYPE_LINKSINK)
+			type = (csdev == coresight_get_sink(path)) ?
+						CORESIGHT_DEV_TYPE_SINK :
+						CORESIGHT_DEV_TYPE_LINK;
+
+		switch (type) {
 		case CORESIGHT_DEV_TYPE_SINK:
-		case CORESIGHT_DEV_TYPE_LINKSINK:
 			ret = coresight_enable_sink(csdev, mode);
 			if (ret)
 				goto err;
@@ -432,18 +464,45 @@
 	path = NULL;
 }
 
+/** coresight_validate_source - make sure a source has the right credentials
+ *  @csdev:	the device structure for a source.
+ *  @function:	the function this was called from.
+ *
+ * Assumes the coresight_mutex is held.
+ */
+static int coresight_validate_source(struct coresight_device *csdev,
+				     const char *function)
+{
+	u32 type, subtype;
+
+	type = csdev->type;
+	subtype = csdev->subtype.source_subtype;
+
+	if (type != CORESIGHT_DEV_TYPE_SOURCE) {
+		dev_err(&csdev->dev, "wrong device type in %s\n", function);
+		return -EINVAL;
+	}
+
+	if (subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_PROC &&
+	    subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE) {
+		dev_err(&csdev->dev, "wrong device subtype in %s\n", function);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 int coresight_enable(struct coresight_device *csdev)
 {
-	int ret = 0;
-	int cpu;
+	int cpu, ret = 0;
 	struct list_head *path;
 
 	mutex_lock(&coresight_mutex);
-	if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
-		ret = -EINVAL;
-		dev_err(&csdev->dev, "wrong device type in %s\n", __func__);
+
+	ret = coresight_validate_source(csdev, __func__);
+	if (ret)
 		goto out;
-	}
+
 	if (csdev->enable)
 		goto out;
 
@@ -461,15 +520,25 @@
 	if (ret)
 		goto err_source;
 
-	/*
-	 * When working from sysFS it is important to keep track
-	 * of the paths that were created so that they can be
-	 * undone in 'coresight_disable()'.  Since there can only
-	 * be a single session per tracer (when working from sysFS)
-	 * a per-cpu variable will do just fine.
-	 */
-	cpu = source_ops(csdev)->cpu_id(csdev);
-	per_cpu(sysfs_path, cpu) = path;
+	switch (csdev->subtype.source_subtype) {
+	case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
+		/*
+		 * When working from sysFS it is important to keep track
+		 * of the paths that were created so that they can be
+		 * undone in 'coresight_disable()'.  Since there can only
+		 * be a single session per tracer (when working from sysFS)
+		 * a per-cpu variable will do just fine.
+		 */
+		cpu = source_ops(csdev)->cpu_id(csdev);
+		per_cpu(tracer_path, cpu) = path;
+		break;
+	case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE:
+		stm_path = path;
+		break;
+	default:
+		/* We can't be here */
+		break;
+	}
 
 out:
 	mutex_unlock(&coresight_mutex);
@@ -486,23 +555,36 @@
 
 void coresight_disable(struct coresight_device *csdev)
 {
-	int cpu;
-	struct list_head *path;
+	int cpu, ret;
+	struct list_head *path = NULL;
 
 	mutex_lock(&coresight_mutex);
-	if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
-		dev_err(&csdev->dev, "wrong device type in %s\n", __func__);
+
+	ret = coresight_validate_source(csdev, __func__);
+	if (ret)
 		goto out;
-	}
+
 	if (!csdev->enable)
 		goto out;
 
-	cpu = source_ops(csdev)->cpu_id(csdev);
-	path = per_cpu(sysfs_path, cpu);
+	switch (csdev->subtype.source_subtype) {
+	case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
+		cpu = source_ops(csdev)->cpu_id(csdev);
+		path = per_cpu(tracer_path, cpu);
+		per_cpu(tracer_path, cpu) = NULL;
+		break;
+	case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE:
+		path = stm_path;
+		stm_path = NULL;
+		break;
+	default:
+		/* We can't be here */
+		break;
+	}
+
 	coresight_disable_source(csdev);
 	coresight_disable_path(path);
 	coresight_release_path(path);
-	per_cpu(sysfs_path, cpu) = NULL;
 
 out:
 	mutex_unlock(&coresight_mutex);
@@ -514,7 +596,7 @@
 {
 	struct coresight_device *csdev = to_coresight_device(dev);
 
-	return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->activated);
+	return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->activated);
 }
 
 static ssize_t enable_sink_store(struct device *dev,
@@ -544,7 +626,7 @@
 {
 	struct coresight_device *csdev = to_coresight_device(dev);
 
-	return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->enable);
+	return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->enable);
 }
 
 static ssize_t enable_source_store(struct device *dev,
diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
index 4272f2c..1be543e 100644
--- a/drivers/hwtracing/intel_th/core.c
+++ b/drivers/hwtracing/intel_th/core.c
@@ -71,6 +71,15 @@
 	if (ret)
 		return ret;
 
+	if (thdrv->attr_group) {
+		ret = sysfs_create_group(&thdev->dev.kobj, thdrv->attr_group);
+		if (ret) {
+			thdrv->remove(thdev);
+
+			return ret;
+		}
+	}
+
 	if (thdev->type == INTEL_TH_OUTPUT &&
 	    !intel_th_output_assigned(thdev))
 		ret = hubdrv->assign(hub, thdev);
@@ -91,6 +100,9 @@
 			return err;
 	}
 
+	if (thdrv->attr_group)
+		sysfs_remove_group(&thdev->dev.kobj, thdrv->attr_group);
+
 	thdrv->remove(thdev);
 
 	if (intel_th_output_assigned(thdev)) {
@@ -171,7 +183,14 @@
 
 static int intel_th_output_activate(struct intel_th_device *thdev)
 {
-	struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver);
+	struct intel_th_driver *thdrv =
+		to_intel_th_driver_or_null(thdev->dev.driver);
+
+	if (!thdrv)
+		return -ENODEV;
+
+	if (!try_module_get(thdrv->driver.owner))
+		return -ENODEV;
 
 	if (thdrv->activate)
 		return thdrv->activate(thdev);
@@ -183,12 +202,18 @@
 
 static void intel_th_output_deactivate(struct intel_th_device *thdev)
 {
-	struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver);
+	struct intel_th_driver *thdrv =
+		to_intel_th_driver_or_null(thdev->dev.driver);
+
+	if (!thdrv)
+		return;
 
 	if (thdrv->deactivate)
 		thdrv->deactivate(thdev);
 	else
 		intel_th_trace_disable(thdev);
+
+	module_put(thdrv->driver.owner);
 }
 
 static ssize_t active_show(struct device *dev, struct device_attribute *attr,
diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h
index eedd093..0df22e3 100644
--- a/drivers/hwtracing/intel_th/intel_th.h
+++ b/drivers/hwtracing/intel_th/intel_th.h
@@ -115,6 +115,7 @@
  * @enable:	enable tracing for a given output device
  * @disable:	disable tracing for a given output device
  * @fops:	file operations for device nodes
+ * @attr_group:	attributes provided by the driver
  *
  * Callbacks @probe and @remove are required for all device types.
  * Switch device driver needs to fill in @assign, @enable and @disable
@@ -139,6 +140,8 @@
 	void			(*deactivate)(struct intel_th_device *thdev);
 	/* file_operations for those who want a device node */
 	const struct file_operations *fops;
+	/* optional attributes */
+	struct attribute_group	*attr_group;
 
 	/* source ops */
 	int			(*set_output)(struct intel_th_device *thdev,
@@ -148,6 +151,9 @@
 #define to_intel_th_driver(_d)					\
 	container_of((_d), struct intel_th_driver, driver)
 
+#define to_intel_th_driver_or_null(_d)		\
+	((_d) ? to_intel_th_driver(_d) : NULL)
+
 static inline struct intel_th_device *
 to_intel_th_hub(struct intel_th_device *thdev)
 {
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index d220914..e8d55a1 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -122,7 +122,6 @@
 	atomic_t		mmap_count;
 	struct mutex		buf_mutex;
 
-	struct mutex		iter_mutex;
 	struct list_head	iter_list;
 
 	/* config */
@@ -257,23 +256,37 @@
 
 	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
 	if (!iter)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
+
+	mutex_lock(&msc->buf_mutex);
+
+	/*
+	 * Reading and tracing are mutually exclusive; if msc is
+	 * enabled, open() will fail; otherwise existing readers
+	 * will prevent enabling the msc and the rest of fops don't
+	 * need to worry about it.
+	 */
+	if (msc->enabled) {
+		kfree(iter);
+		iter = ERR_PTR(-EBUSY);
+		goto unlock;
+	}
 
 	msc_iter_init(iter);
 	iter->msc = msc;
 
-	mutex_lock(&msc->iter_mutex);
 	list_add_tail(&iter->entry, &msc->iter_list);
-	mutex_unlock(&msc->iter_mutex);
+unlock:
+	mutex_unlock(&msc->buf_mutex);
 
 	return iter;
 }
 
 static void msc_iter_remove(struct msc_iter *iter, struct msc *msc)
 {
-	mutex_lock(&msc->iter_mutex);
+	mutex_lock(&msc->buf_mutex);
 	list_del(&iter->entry);
-	mutex_unlock(&msc->iter_mutex);
+	mutex_unlock(&msc->buf_mutex);
 
 	kfree(iter);
 }
@@ -454,7 +467,6 @@
 {
 	struct msc_window *win;
 
-	mutex_lock(&msc->buf_mutex);
 	list_for_each_entry(win, &msc->win_list, entry) {
 		unsigned int blk;
 		size_t hw_sz = sizeof(struct msc_block_desc) -
@@ -466,7 +478,6 @@
 			memset(&bdesc->hw_tag, 0, hw_sz);
 		}
 	}
-	mutex_unlock(&msc->buf_mutex);
 }
 
 /**
@@ -474,12 +485,15 @@
  * @msc:	the MSC device to configure
  *
  * Program storage mode, wrapping, burst length and trace buffer address
- * into a given MSC. If msc::enabled is set, enable the trace, too.
+ * into a given MSC. Then, enable tracing and set msc::enabled.
+ * The latter is serialized on msc::buf_mutex, so make sure to hold it.
  */
 static int msc_configure(struct msc *msc)
 {
 	u32 reg;
 
+	lockdep_assert_held(&msc->buf_mutex);
+
 	if (msc->mode > MSC_MODE_MULTI)
 		return -ENOTSUPP;
 
@@ -497,21 +511,19 @@
 	reg = ioread32(msc->reg_base + REG_MSU_MSC0CTL);
 	reg &= ~(MSC_MODE | MSC_WRAPEN | MSC_EN | MSC_RD_HDR_OVRD);
 
+	reg |= MSC_EN;
 	reg |= msc->mode << __ffs(MSC_MODE);
 	reg |= msc->burst_len << __ffs(MSC_LEN);
-	/*if (msc->mode == MSC_MODE_MULTI)
-	  reg |= MSC_RD_HDR_OVRD; */
+
 	if (msc->wrap)
 		reg |= MSC_WRAPEN;
-	if (msc->enabled)
-		reg |= MSC_EN;
 
 	iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL);
 
-	if (msc->enabled) {
-		msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI;
-		intel_th_trace_enable(msc->thdev);
-	}
+	msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI;
+	intel_th_trace_enable(msc->thdev);
+	msc->enabled = 1;
+
 
 	return 0;
 }
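
MSC_MODE and MSC_LEN are multi-bit field masks, so __ffs() converts the mask into the shift needed to place the value. The idiom in isolation, with a hypothetical register and field (REG_CTL and FIELD_MASK are made up for the example):

#define REG_CTL		0x20		/* hypothetical control register  */
#define FIELD_MASK	GENMASK(6, 4)	/* hypothetical 3-bit field, 0x70 */

static void set_field(void __iomem *base, u32 value)
{
	u32 reg = ioread32(base + REG_CTL);

	reg &= ~FIELD_MASK;			/* clear the field          */
	reg |= value << __ffs(FIELD_MASK);	/* __ffs(0x70) == 4         */
	iowrite32(reg, base + REG_CTL);
}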
@@ -521,15 +533,14 @@
  * @msc:	MSC device to disable
  *
  * If @msc is enabled, disable tracing on the switch and then disable MSC
- * storage.
+ * storage. Caller must hold msc::buf_mutex.
  */
 static void msc_disable(struct msc *msc)
 {
 	unsigned long count;
 	u32 reg;
 
-	if (!msc->enabled)
-		return;
+	lockdep_assert_held(&msc->buf_mutex);
 
 	intel_th_trace_disable(msc->thdev);
 
@@ -569,33 +580,35 @@
 static int intel_th_msc_activate(struct intel_th_device *thdev)
 {
 	struct msc *msc = dev_get_drvdata(&thdev->dev);
-	int ret = 0;
+	int ret = -EBUSY;
 
 	if (!atomic_inc_unless_negative(&msc->user_count))
 		return -ENODEV;
 
-	mutex_lock(&msc->iter_mutex);
-	if (!list_empty(&msc->iter_list))
-		ret = -EBUSY;
-	mutex_unlock(&msc->iter_mutex);
+	mutex_lock(&msc->buf_mutex);
 
-	if (ret) {
+	/* if there are readers, refuse */
+	if (list_empty(&msc->iter_list))
+		ret = msc_configure(msc);
+
+	mutex_unlock(&msc->buf_mutex);
+
+	if (ret)
 		atomic_dec(&msc->user_count);
-		return ret;
-	}
 
-	msc->enabled = 1;
-
-	return msc_configure(msc);
+	return ret;
 }
 
 static void intel_th_msc_deactivate(struct intel_th_device *thdev)
 {
 	struct msc *msc = dev_get_drvdata(&thdev->dev);
 
-	msc_disable(msc);
-
-	atomic_dec(&msc->user_count);
+	mutex_lock(&msc->buf_mutex);
+	if (msc->enabled) {
+		msc_disable(msc);
+		atomic_dec(&msc->user_count);
+	}
+	mutex_unlock(&msc->buf_mutex);
 }
 
 /**
@@ -1035,8 +1048,8 @@
 		return -EPERM;
 
 	iter = msc_iter_install(msc);
-	if (!iter)
-		return -ENOMEM;
+	if (IS_ERR(iter))
+		return PTR_ERR(iter);
 
 	file->private_data = iter;
 
@@ -1101,11 +1114,6 @@
 	if (!atomic_inc_unless_negative(&msc->user_count))
 		return 0;
 
-	if (msc->enabled) {
-		ret = -EBUSY;
-		goto put_count;
-	}
-
 	if (msc->mode == MSC_MODE_SINGLE && !msc->single_wrap)
 		size = msc->single_sz;
 	else
@@ -1245,6 +1253,7 @@
 	.read		= intel_th_msc_read,
 	.mmap		= intel_th_msc_mmap,
 	.llseek		= no_llseek,
+	.owner		= THIS_MODULE,
 };
 
 static int intel_th_msc_init(struct msc *msc)
@@ -1254,8 +1263,6 @@
 	msc->mode = MSC_MODE_MULTI;
 	mutex_init(&msc->buf_mutex);
 	INIT_LIST_HEAD(&msc->win_list);
-
-	mutex_init(&msc->iter_mutex);
 	INIT_LIST_HEAD(&msc->iter_list);
 
 	msc->burst_len =
@@ -1393,6 +1400,11 @@
 	do {
 		end = memchr(p, ',', len);
 		s = kstrndup(p, end ? end - p : len, GFP_KERNEL);
+		if (!s) {
+			ret = -ENOMEM;
+			goto free_win;
+		}
+
 		ret = kstrtoul(s, 10, &val);
 		kfree(s);
 
@@ -1473,10 +1485,6 @@
 	if (err)
 		return err;
 
-	err = sysfs_create_group(&dev->kobj, &msc_output_group);
-	if (err)
-		return err;
-
 	dev_set_drvdata(dev, msc);
 
 	return 0;
@@ -1484,7 +1492,18 @@
 
 static void intel_th_msc_remove(struct intel_th_device *thdev)
 {
-	sysfs_remove_group(&thdev->dev.kobj, &msc_output_group);
+	struct msc *msc = dev_get_drvdata(&thdev->dev);
+	int ret;
+
+	intel_th_msc_deactivate(thdev);
+
+	/*
+	 * Buffers should not be used at this point except if the
+	 * output character device is still open and the parent
+	 * device gets detached from its bus, which is a FIXME.
+	 */
+	ret = msc_buffer_free_unless_used(msc);
+	WARN_ON_ONCE(ret);
 }
 
 static struct intel_th_driver intel_th_msc_driver = {
@@ -1493,6 +1512,7 @@
 	.activate	= intel_th_msc_activate,
 	.deactivate	= intel_th_msc_deactivate,
 	.fops	= &intel_th_msc_fops,
+	.attr_group	= &msc_output_group,
 	.driver	= {
 		.name	= "msc",
 		.owner	= THIS_MODULE,
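
The msu.c rework above folds iter_mutex into buf_mutex and turns msc_configure()/msc_disable() into functions that expect the caller to already hold the lock, documenting that with lockdep_assert_held(). A minimal sketch of that locking convention, with hypothetical names:

#include <linux/lockdep.h>
#include <linux/mutex.h>

struct foo {
	struct mutex lock;
	bool enabled;
};

/* Caller must hold foo::lock; lockdep (when enabled) flags violations. */
static int foo_configure(struct foo *f)
{
	lockdep_assert_held(&f->lock);
	f->enabled = true;
	return 0;
}

static int foo_activate(struct foo *f)
{
	int ret;

	mutex_lock(&f->lock);
	ret = foo_configure(f);
	mutex_unlock(&f->lock);

	return ret;
}

With a single lock guarding both the buffer state and the reader list, open(), activate() and deactivate() can make their mutual-exclusion checks without juggling a second mutex.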
diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
index bca7a2a..5e25c7e 100644
--- a/drivers/hwtracing/intel_th/pci.c
+++ b/drivers/hwtracing/intel_th/pci.c
@@ -75,6 +75,11 @@
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0a80),
 		.driver_data = (kernel_ulong_t)0,
 	},
+	{
+		/* Broxton B-step */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1a8e),
+		.driver_data = (kernel_ulong_t)0,
+	},
 	{ 0 },
 };
 
diff --git a/drivers/hwtracing/intel_th/pti.c b/drivers/hwtracing/intel_th/pti.c
index 57cbfdc..35738b5 100644
--- a/drivers/hwtracing/intel_th/pti.c
+++ b/drivers/hwtracing/intel_th/pti.c
@@ -200,7 +200,6 @@
 	struct resource *res;
 	struct pti_device *pti;
 	void __iomem *base;
-	int ret;
 
 	res = intel_th_device_get_resource(thdev, IORESOURCE_MEM, 0);
 	if (!res)
@@ -219,10 +218,6 @@
 
 	read_hw_config(pti);
 
-	ret = sysfs_create_group(&dev->kobj, &pti_output_group);
-	if (ret)
-		return ret;
-
 	dev_set_drvdata(dev, pti);
 
 	return 0;
@@ -237,6 +232,7 @@
 	.remove	= intel_th_pti_remove,
 	.activate	= intel_th_pti_activate,
 	.deactivate	= intel_th_pti_deactivate,
+	.attr_group	= &pti_output_group,
 	.driver	= {
 		.name	= "pti",
 		.owner	= THIS_MODULE,
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index de80d45..ff31108 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -67,9 +67,24 @@
 
 static DEVICE_ATTR_RO(channels);
 
+static ssize_t hw_override_show(struct device *dev,
+				struct device_attribute *attr,
+				char *buf)
+{
+	struct stm_device *stm = to_stm_device(dev);
+	int ret;
+
+	ret = sprintf(buf, "%u\n", stm->data->hw_override);
+
+	return ret;
+}
+
+static DEVICE_ATTR_RO(hw_override);
+
 static struct attribute *stm_attrs[] = {
 	&dev_attr_masters.attr,
 	&dev_attr_channels.attr,
+	&dev_attr_hw_override.attr,
 	NULL,
 };
 
@@ -546,8 +561,6 @@
 	if (ret)
 		goto err_free;
 
-	ret = 0;
-
 	if (stm->data->link)
 		ret = stm->data->link(stm->data, stmf->output.master,
 				      stmf->output.channel);
@@ -668,6 +681,18 @@
 	stm->dev.parent = parent;
 	stm->dev.release = stm_device_release;
 
+	mutex_init(&stm->link_mutex);
+	spin_lock_init(&stm->link_lock);
+	INIT_LIST_HEAD(&stm->link_list);
+
+	/* initialize the object before it is accessible via sysfs */
+	spin_lock_init(&stm->mc_lock);
+	mutex_init(&stm->policy_mutex);
+	stm->sw_nmasters = nmasters;
+	stm->owner = owner;
+	stm->data = stm_data;
+	stm_data->stm = stm;
+
 	err = kobject_set_name(&stm->dev.kobj, "%s", stm_data->name);
 	if (err)
 		goto err_device;
@@ -676,20 +701,11 @@
 	if (err)
 		goto err_device;
 
-	mutex_init(&stm->link_mutex);
-	spin_lock_init(&stm->link_lock);
-	INIT_LIST_HEAD(&stm->link_list);
-
-	spin_lock_init(&stm->mc_lock);
-	mutex_init(&stm->policy_mutex);
-	stm->sw_nmasters = nmasters;
-	stm->owner = owner;
-	stm->data = stm_data;
-	stm_data->stm = stm;
-
 	return 0;
 
 err_device:
+	unregister_chrdev(stm->major, stm_data->name);
+
 	/* matches device_initialize() above */
 	put_device(&stm->dev);
 err_free:
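
The stm/core.c hunk above moves the field and lock initialization ahead of kobject_set_name() and character-device registration, so the attributes (including the new hw_override) never observe a half-initialized stm_device, and the error path now also unregisters the chrdev region it had already claimed. The ordering it relies on, roughly (hypothetical names):

#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct example {
	struct device dev;
	struct mutex lock;
	bool ready;
};

static void example_release(struct device *dev)
{
	kfree(container_of(dev, struct example, dev));
}

static int example_register(struct example *ex, struct device *parent)
{
	device_initialize(&ex->dev);	/* refcounted, but not in sysfs yet */
	ex->dev.parent = parent;
	ex->dev.release = example_release;

	mutex_init(&ex->lock);		/* set up everything a show() may touch */
	ex->ready = true;

	return device_add(&ex->dev);	/* attributes become reachable here */
}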
diff --git a/drivers/hwtracing/stm/dummy_stm.c b/drivers/hwtracing/stm/dummy_stm.c
index 310adf5..a86612d 100644
--- a/drivers/hwtracing/stm/dummy_stm.c
+++ b/drivers/hwtracing/stm/dummy_stm.c
@@ -46,9 +46,7 @@
 
 static int nr_dummies = 4;
 
-module_param(nr_dummies, int, 0600);
-
-static unsigned int dummy_stm_nr;
+module_param(nr_dummies, int, 0400);
 
 static unsigned int fail_mode;
 
@@ -65,12 +63,12 @@
 
 static int dummy_stm_init(void)
 {
-	int i, ret = -ENOMEM, __nr_dummies = ACCESS_ONCE(nr_dummies);
+	int i, ret = -ENOMEM;
 
-	if (__nr_dummies < 0 || __nr_dummies > DUMMY_STM_MAX)
+	if (nr_dummies < 0 || nr_dummies > DUMMY_STM_MAX)
 		return -EINVAL;
 
-	for (i = 0; i < __nr_dummies; i++) {
+	for (i = 0; i < nr_dummies; i++) {
 		dummy_stm[i].name = kasprintf(GFP_KERNEL, "dummy_stm.%d", i);
 		if (!dummy_stm[i].name)
 			goto fail_unregister;
@@ -86,8 +84,6 @@
 			goto fail_free;
 	}
 
-	dummy_stm_nr = __nr_dummies;
-
 	return 0;
 
 fail_unregister:
@@ -105,7 +101,7 @@
 {
 	int i;
 
-	for (i = 0; i < dummy_stm_nr; i++) {
+	for (i = 0; i < nr_dummies; i++) {
 		stm_unregister_device(&dummy_stm[i]);
 		kfree(dummy_stm[i].name);
 	}
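
dummy_stm can drop its dummy_stm_nr snapshot because nr_dummies is now registered with mode 0400: it stays readable under /sys/module/.../parameters/ but can no longer change after module load, so init and exit iterate over the same value and the ACCESS_ONCE() copy is unnecessary. The stm_heartbeat hunk below applies the same reasoning to its nr_devs parameter. The pattern, as a tiny hypothetical module:

#include <linux/module.h>
#include <linux/moduleparam.h>

static int nr_devs = 4;
module_param(nr_devs, int, 0400);	/* visible in sysfs, not writable at runtime */

static int __init example_init(void)
{
	return (nr_devs < 0 || nr_devs > 16) ? -EINVAL : 0;
}
module_init(example_init);

static void __exit example_exit(void)
{
	/* safe to use nr_devs here: it cannot have changed since init */
}
module_exit(example_exit);

MODULE_LICENSE("GPL");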
diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c
index 0133571..3da7b67 100644
--- a/drivers/hwtracing/stm/heartbeat.c
+++ b/drivers/hwtracing/stm/heartbeat.c
@@ -26,7 +26,7 @@
 static int nr_devs = 4;
 static int interval_ms = 10;
 
-module_param(nr_devs, int, 0600);
+module_param(nr_devs, int, 0400);
 module_param(interval_ms, int, 0600);
 
 static struct stm_heartbeat {
@@ -35,8 +35,6 @@
 	unsigned int		active;
 } stm_heartbeat[STM_HEARTBEAT_MAX];
 
-static unsigned int nr_instances;
-
 static const char str[] = "heartbeat stm source driver is here to serve you";
 
 static enum hrtimer_restart stm_heartbeat_hrtimer_handler(struct hrtimer *hr)
@@ -74,12 +72,12 @@
 
 static int stm_heartbeat_init(void)
 {
-	int i, ret = -ENOMEM, __nr_instances = ACCESS_ONCE(nr_devs);
+	int i, ret = -ENOMEM;
 
-	if (__nr_instances < 0 || __nr_instances > STM_HEARTBEAT_MAX)
+	if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX)
 		return -EINVAL;
 
-	for (i = 0; i < __nr_instances; i++) {
+	for (i = 0; i < nr_devs; i++) {
 		stm_heartbeat[i].data.name =
 			kasprintf(GFP_KERNEL, "heartbeat.%d", i);
 		if (!stm_heartbeat[i].data.name)
@@ -98,8 +96,6 @@
 			goto fail_free;
 	}
 
-	nr_instances = __nr_instances;
-
 	return 0;
 
 fail_unregister:
@@ -116,7 +112,7 @@
 {
 	int i;
 
-	for (i = 0; i < nr_instances; i++) {
+	for (i = 0; i < nr_devs; i++) {
 		stm_source_unregister_device(&stm_heartbeat[i].data);
 		kfree(stm_heartbeat[i].data.name);
 	}
diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c
index 1db1896..6c0ae29 100644
--- a/drivers/hwtracing/stm/policy.c
+++ b/drivers/hwtracing/stm/policy.c
@@ -107,8 +107,7 @@
 		goto unlock;
 
 	/* must be within [sw_start..sw_end], which is an inclusive range */
-	if (first > INT_MAX || last > INT_MAX || first > last ||
-	    first < stm->data->sw_start ||
+	if (first > last || first < stm->data->sw_start ||
 	    last > stm->data->sw_end) {
 		ret = -ERANGE;
 		goto unlock;
@@ -342,7 +341,7 @@
 		return ERR_PTR(-EINVAL);
 	}
 
-	*p++ = '\0';
+	*p = '\0';
 
 	stm = stm_find_device(devname);
 	kfree(devname);
diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
index a4be451..b73c6e7 100644
--- a/drivers/mcb/mcb-core.c
+++ b/drivers/mcb/mcb-core.c
@@ -83,13 +83,67 @@
 
 static void mcb_shutdown(struct device *dev)
 {
+	struct mcb_driver *mdrv = to_mcb_driver(dev->driver);
 	struct mcb_device *mdev = to_mcb_device(dev);
-	struct mcb_driver *mdrv = mdev->driver;
 
 	if (mdrv && mdrv->shutdown)
 		mdrv->shutdown(mdev);
 }
 
+static ssize_t revision_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct mcb_bus *bus = to_mcb_bus(dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%d\n", bus->revision);
+}
+static DEVICE_ATTR_RO(revision);
+
+static ssize_t model_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct mcb_bus *bus = to_mcb_bus(dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%c\n", bus->model);
+}
+static DEVICE_ATTR_RO(model);
+
+static ssize_t minor_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct mcb_bus *bus = to_mcb_bus(dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%d\n", bus->minor);
+}
+static DEVICE_ATTR_RO(minor);
+
+static ssize_t name_show(struct device *dev, struct device_attribute *attr,
+			 char *buf)
+{
+	struct mcb_bus *bus = to_mcb_bus(dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%s\n", bus->name);
+}
+static DEVICE_ATTR_RO(name);
+
+static struct attribute *mcb_bus_attrs[] = {
+	&dev_attr_revision.attr,
+	&dev_attr_model.attr,
+	&dev_attr_minor.attr,
+	&dev_attr_name.attr,
+	NULL,
+};
+
+static const struct attribute_group mcb_carrier_group = {
+	.attrs = mcb_bus_attrs,
+};
+
+static const struct attribute_group *mcb_carrier_groups[] = {
+	&mcb_carrier_group,
+	NULL,
+};
+
+
 static struct bus_type mcb_bus_type = {
 	.name = "mcb",
 	.match = mcb_match,
@@ -99,6 +153,11 @@
 	.shutdown = mcb_shutdown,
 };
 
+static struct device_type mcb_carrier_device_type = {
+	.name = "mcb-carrier",
+	.groups = mcb_carrier_groups,
+};
+
 /**
  * __mcb_register_driver() - Register a @mcb_driver at the system
  * @drv: The @mcb_driver
@@ -155,6 +214,7 @@
 	int device_id;
 
 	device_initialize(&dev->dev);
+	mcb_bus_get(bus);
 	dev->dev.bus = &mcb_bus_type;
 	dev->dev.parent = bus->dev.parent;
 	dev->dev.release = mcb_release_dev;
@@ -178,6 +238,15 @@
 }
 EXPORT_SYMBOL_GPL(mcb_device_register);
 
+static void mcb_free_bus(struct device *dev)
+{
+	struct mcb_bus *bus = to_mcb_bus(dev);
+
+	put_device(bus->carrier);
+	ida_simple_remove(&mcb_ida, bus->bus_nr);
+	kfree(bus);
+}
+
 /**
  * mcb_alloc_bus() - Allocate a new @mcb_bus
  *
@@ -187,6 +256,7 @@
 {
 	struct mcb_bus *bus;
 	int bus_nr;
+	int rc;
 
 	bus = kzalloc(sizeof(struct mcb_bus), GFP_KERNEL);
 	if (!bus)
@@ -194,14 +264,29 @@
 
 	bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL);
 	if (bus_nr < 0) {
-		kfree(bus);
-		return ERR_PTR(bus_nr);
+		rc = bus_nr;
+		goto err_free;
 	}
 
-	INIT_LIST_HEAD(&bus->children);
 	bus->bus_nr = bus_nr;
-	bus->carrier = carrier;
+	bus->carrier = get_device(carrier);
+
+	device_initialize(&bus->dev);
+	bus->dev.parent = carrier;
+	bus->dev.bus = &mcb_bus_type;
+	bus->dev.type = &mcb_carrier_device_type;
+	bus->dev.release = &mcb_free_bus;
+
+	dev_set_name(&bus->dev, "mcb:%d", bus_nr);
+	rc = device_add(&bus->dev);
+	if (rc)
+		goto err_free;
+
 	return bus;
+err_free:
+	put_device(carrier);
+	kfree(bus);
+	return ERR_PTR(rc);
 }
 EXPORT_SYMBOL_GPL(mcb_alloc_bus);
 
@@ -224,10 +309,6 @@
 void mcb_release_bus(struct mcb_bus *bus)
 {
 	mcb_devices_unregister(bus);
-
-	ida_simple_remove(&mcb_ida, bus->bus_nr);
-
-	kfree(bus);
 }
 EXPORT_SYMBOL_GPL(mcb_release_bus);
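
With the bus now carrying a real struct device, freeing moves into the dev.release callback (mcb_free_bus): mcb_release_bus() only unregisters the children, and the bus memory, its IDA slot and the carrier reference are dropped when the last reference to bus->dev goes away. The release-callback idiom, sketched with hypothetical names:

#include <linux/device.h>
#include <linux/slab.h>

struct mybus {
	struct device dev;
	struct device *carrier;		/* reference taken with get_device() */
};

/* Called by the driver core when the last reference to &bus->dev is put. */
static void mybus_release(struct device *dev)
{
	struct mybus *bus = container_of(dev, struct mybus, dev);

	put_device(bus->carrier);	/* balances get_device() at allocation */
	kfree(bus);
}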
 
diff --git a/drivers/mcb/mcb-internal.h b/drivers/mcb/mcb-internal.h
index fb7493d..5254e02 100644
--- a/drivers/mcb/mcb-internal.h
+++ b/drivers/mcb/mcb-internal.h
@@ -5,7 +5,6 @@
 
 #define PCI_VENDOR_ID_MEN		0x1a88
 #define PCI_DEVICE_ID_MEN_CHAMELEON	0x4d45
-#define CHAMELEON_FILENAME_LEN		12
 #define CHAMELEONV2_MAGIC		0xabce
 #define CHAM_HEADER_SIZE		0x200
 
diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
index 0049269..dbecbed 100644
--- a/drivers/mcb/mcb-parse.c
+++ b/drivers/mcb/mcb-parse.c
@@ -57,7 +57,7 @@
 	mdev->id = GDD_DEV(reg1);
 	mdev->rev = GDD_REV(reg1);
 	mdev->var = GDD_VAR(reg1);
-	mdev->bar = GDD_BAR(reg1);
+	mdev->bar = GDD_BAR(reg2);
 	mdev->group = GDD_GRP(reg2);
 	mdev->inst = GDD_INS(reg2);
 
@@ -113,16 +113,11 @@
 	}
 	p += hsize;
 
-	pr_debug("header->revision = %d\n", header->revision);
-	pr_debug("header->model = 0x%x ('%c')\n", header->model,
-		header->model);
-	pr_debug("header->minor = %d\n", header->minor);
-	pr_debug("header->bus_type = 0x%x\n", header->bus_type);
-
-
-	pr_debug("header->magic = 0x%x\n", header->magic);
-	pr_debug("header->filename = \"%.*s\"\n", CHAMELEON_FILENAME_LEN,
-		header->filename);
+	bus->revision = header->revision;
+	bus->model = header->model;
+	bus->minor = header->minor;
+	snprintf(bus->name, CHAMELEON_FILENAME_LEN + 1, "%s",
+		 header->filename);
 
 	for_each_chameleon_cell(dtype, p) {
 		switch (dtype) {
diff --git a/drivers/mcb/mcb-pci.c b/drivers/mcb/mcb-pci.c
index 67d5e7d..b15a034 100644
--- a/drivers/mcb/mcb-pci.c
+++ b/drivers/mcb/mcb-pci.c
@@ -35,7 +35,6 @@
 	struct resource *res;
 	struct priv *priv;
 	int ret;
-	int num_cells;
 	unsigned long flags;
 
 	priv = devm_kzalloc(&pdev->dev, sizeof(struct priv), GFP_KERNEL);
@@ -55,19 +54,20 @@
 		goto out_disable;
 	}
 
-	res = request_mem_region(priv->mapbase, CHAM_HEADER_SIZE,
-				 KBUILD_MODNAME);
+	res = devm_request_mem_region(&pdev->dev, priv->mapbase,
+				      CHAM_HEADER_SIZE,
+				      KBUILD_MODNAME);
 	if (!res) {
 		dev_err(&pdev->dev, "Failed to request PCI memory\n");
 		ret = -EBUSY;
 		goto out_disable;
 	}
 
-	priv->base = ioremap(priv->mapbase, CHAM_HEADER_SIZE);
+	priv->base = devm_ioremap(&pdev->dev, priv->mapbase, CHAM_HEADER_SIZE);
 	if (!priv->base) {
 		dev_err(&pdev->dev, "Cannot ioremap\n");
 		ret = -ENOMEM;
-		goto out_release;
+		goto out_disable;
 	}
 
 	flags = pci_resource_flags(pdev, 0);
@@ -75,7 +75,7 @@
 		ret = -ENOTSUPP;
 		dev_err(&pdev->dev,
 			"IO mapped PCI devices are not supported\n");
-		goto out_iounmap;
+		goto out_disable;
 	}
 
 	pci_set_drvdata(pdev, priv);
@@ -83,7 +83,7 @@
 	priv->bus = mcb_alloc_bus(&pdev->dev);
 	if (IS_ERR(priv->bus)) {
 		ret = PTR_ERR(priv->bus);
-		goto out_iounmap;
+		goto out_disable;
 	}
 
 	priv->bus->get_irq = mcb_pci_get_irq;
@@ -91,9 +91,8 @@
 	ret = chameleon_parse_cells(priv->bus, priv->mapbase, priv->base);
 	if (ret < 0)
 		goto out_mcb_bus;
-	num_cells = ret;
 
-	dev_dbg(&pdev->dev, "Found %d cells\n", num_cells);
+	dev_dbg(&pdev->dev, "Found %d cells\n", ret);
 
 	mcb_bus_add_devices(priv->bus);
 
@@ -101,10 +100,6 @@
 
 out_mcb_bus:
 	mcb_release_bus(priv->bus);
-out_iounmap:
-	iounmap(priv->base);
-out_release:
-	pci_release_region(pdev, 0);
 out_disable:
 	pci_disable_device(pdev);
 	return ret;
@@ -116,8 +111,6 @@
 
 	mcb_release_bus(priv->bus);
 
-	iounmap(priv->base);
-	release_region(priv->mapbase, CHAM_HEADER_SIZE);
 	pci_disable_device(pdev);
 }
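
mcb-pci switches to the device-managed variants, so the explicit iounmap()/release_region() calls and their error-path labels disappear; the mapping and the memory region are torn down automatically when the PCI device is unbound or probe fails after the allocation. A sketch of the devm pattern, with hypothetical names:

#include <linux/device.h>
#include <linux/io.h>
#include <linux/ioport.h>

static void __iomem *example_map(struct device *dev, resource_size_t start,
				 resource_size_t len)
{
	if (!devm_request_mem_region(dev, start, len, "example"))
		return NULL;			/* region already claimed */

	/* Note: devm_ioremap() returns NULL on failure, not an ERR_PTR */
	return devm_ioremap(dev, start, len);
}

The same NULL-versus-ERR_PTR distinction is what the sram.c hunk further down corrects.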
 
diff --git a/drivers/memory/of_memory.c b/drivers/memory/of_memory.c
index 6007435..9daf94b 100644
--- a/drivers/memory/of_memory.c
+++ b/drivers/memory/of_memory.c
@@ -109,7 +109,7 @@
 	struct lpddr2_timings	*timings = NULL;
 	u32			arr_sz = 0, i = 0;
 	struct device_node	*np_tim;
-	char			*tim_compat;
+	char			*tim_compat = NULL;
 
 	switch (device_type) {
 	case DDR_TYPE_LPDDR2_S2:
diff --git a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig
index cfc493c..c4e41c2 100644
--- a/drivers/misc/eeprom/Kconfig
+++ b/drivers/misc/eeprom/Kconfig
@@ -3,7 +3,6 @@
 config EEPROM_AT24
 	tristate "I2C EEPROMs / RAMs / ROMs from most vendors"
 	depends on I2C && SYSFS
-	select REGMAP
 	select NVMEM
 	help
 	  Enable this driver to get read/write support to most I2C EEPROMs
@@ -32,7 +31,6 @@
 config EEPROM_AT25
 	tristate "SPI EEPROMs from most vendors"
 	depends on SPI && SYSFS
-	select REGMAP
 	select NVMEM
 	help
 	  Enable this driver to get read/write support to most SPI EEPROMs,
diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
index 6cc17b77..9ceb63b 100644
--- a/drivers/misc/eeprom/at24.c
+++ b/drivers/misc/eeprom/at24.c
@@ -23,7 +23,6 @@
 #include <linux/acpi.h>
 #include <linux/i2c.h>
 #include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
 #include <linux/platform_data/at24.h>
 
 /*
@@ -69,7 +68,6 @@
 	unsigned write_max;
 	unsigned num_addresses;
 
-	struct regmap_config regmap_config;
 	struct nvmem_config nvmem_config;
 	struct nvmem_device *nvmem;
 
@@ -251,10 +249,10 @@
 	return -ETIMEDOUT;
 }
 
-static ssize_t at24_read(struct at24_data *at24,
-		char *buf, loff_t off, size_t count)
+static int at24_read(void *priv, unsigned int off, void *val, size_t count)
 {
-	ssize_t retval = 0;
+	struct at24_data *at24 = priv;
+	char *buf = val;
 
 	if (unlikely(!count))
 		return count;
@@ -266,23 +264,21 @@
 	mutex_lock(&at24->lock);
 
 	while (count) {
-		ssize_t	status;
+		int	status;
 
 		status = at24_eeprom_read(at24, buf, off, count);
-		if (status <= 0) {
-			if (retval == 0)
-				retval = status;
-			break;
+		if (status < 0) {
+			mutex_unlock(&at24->lock);
+			return status;
 		}
 		buf += status;
 		off += status;
 		count -= status;
-		retval += status;
 	}
 
 	mutex_unlock(&at24->lock);
 
-	return retval;
+	return 0;
 }
 
 /*
@@ -370,13 +366,13 @@
 	return -ETIMEDOUT;
 }
 
-static ssize_t at24_write(struct at24_data *at24, const char *buf, loff_t off,
-			  size_t count)
+static int at24_write(void *priv, unsigned int off, void *val, size_t count)
 {
-	ssize_t retval = 0;
+	struct at24_data *at24 = priv;
+	char *buf = val;
 
 	if (unlikely(!count))
-		return count;
+		return -EINVAL;
 
 	/*
 	 * Write data to chip, protecting against concurrent updates
@@ -385,70 +381,23 @@
 	mutex_lock(&at24->lock);
 
 	while (count) {
-		ssize_t	status;
+		int status;
 
 		status = at24_eeprom_write(at24, buf, off, count);
-		if (status <= 0) {
-			if (retval == 0)
-				retval = status;
-			break;
+		if (status < 0) {
+			mutex_unlock(&at24->lock);
+			return status;
 		}
 		buf += status;
 		off += status;
 		count -= status;
-		retval += status;
 	}
 
 	mutex_unlock(&at24->lock);
 
-	return retval;
-}
-
-/*-------------------------------------------------------------------------*/
-
-/*
- * Provide a regmap interface, which is registered with the NVMEM
- * framework
-*/
-static int at24_regmap_read(void *context, const void *reg, size_t reg_size,
-			    void *val, size_t val_size)
-{
-	struct at24_data *at24 = context;
-	off_t offset = *(u32 *)reg;
-	int err;
-
-	err = at24_read(at24, val, offset, val_size);
-	if (err)
-		return err;
 	return 0;
 }
 
-static int at24_regmap_write(void *context, const void *data, size_t count)
-{
-	struct at24_data *at24 = context;
-	const char *buf;
-	u32 offset;
-	size_t len;
-	int err;
-
-	memcpy(&offset, data, sizeof(offset));
-	buf = (const char *)data + sizeof(offset);
-	len = count - sizeof(offset);
-
-	err = at24_write(at24, buf, offset, len);
-	if (err)
-		return err;
-	return 0;
-}
-
-static const struct regmap_bus at24_regmap_bus = {
-	.read = at24_regmap_read,
-	.write = at24_regmap_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-/*-------------------------------------------------------------------------*/
-
 #ifdef CONFIG_OF
 static void at24_get_ofdata(struct i2c_client *client,
 		struct at24_platform_data *chip)
@@ -480,7 +429,6 @@
 	struct at24_data *at24;
 	int err;
 	unsigned i, num_addresses;
-	struct regmap *regmap;
 
 	if (client->dev.platform_data) {
 		chip = *(struct at24_platform_data *)client->dev.platform_data;
@@ -607,19 +555,6 @@
 		}
 	}
 
-	at24->regmap_config.reg_bits = 32;
-	at24->regmap_config.val_bits = 8;
-	at24->regmap_config.reg_stride = 1;
-	at24->regmap_config.max_register = chip.byte_len - 1;
-
-	regmap = devm_regmap_init(&client->dev, &at24_regmap_bus, at24,
-				  &at24->regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(&client->dev, "regmap init failed\n");
-		err = PTR_ERR(regmap);
-		goto err_clients;
-	}
-
 	at24->nvmem_config.name = dev_name(&client->dev);
 	at24->nvmem_config.dev = &client->dev;
 	at24->nvmem_config.read_only = !writable;
@@ -627,6 +562,12 @@
 	at24->nvmem_config.owner = THIS_MODULE;
 	at24->nvmem_config.compat = true;
 	at24->nvmem_config.base_dev = &client->dev;
+	at24->nvmem_config.reg_read = at24_read;
+	at24->nvmem_config.reg_write = at24_write;
+	at24->nvmem_config.priv = at24;
+	at24->nvmem_config.stride = 4;
+	at24->nvmem_config.word_size = 1;
+	at24->nvmem_config.size = chip.byte_len;
 
 	at24->nvmem = nvmem_register(&at24->nvmem_config);
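
This at24 conversion is the heart of the series: instead of wrapping the EEPROM behind a one-off regmap bus, the driver hands nvmem_register() its own reg_read/reg_write callbacks together with a priv pointer, stride, word size and total size. A minimal provider using that interface might look like this (hypothetical device with an in-memory backing store):

#include <linux/device.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/string.h>

struct example_eeprom {
	struct device *dev;
	u8 shadow[256];			/* stand-in for the real storage */
};

static int example_reg_read(void *priv, unsigned int off, void *val,
			    size_t count)
{
	struct example_eeprom *ee = priv;

	memcpy(val, ee->shadow + off, count);
	return 0;			/* 0 on success, negative errno on error */
}

static int example_reg_write(void *priv, unsigned int off, void *val,
			     size_t count)
{
	struct example_eeprom *ee = priv;

	memcpy(ee->shadow + off, val, count);
	return 0;
}

static struct nvmem_device *example_register(struct example_eeprom *ee)
{
	struct nvmem_config cfg = {
		.name		= "example-eeprom",
		.dev		= ee->dev,
		.owner		= THIS_MODULE,
		.reg_read	= example_reg_read,
		.reg_write	= example_reg_write,
		.priv		= ee,
		.stride		= 4,
		.word_size	= 1,
		.size		= sizeof(ee->shadow),
	};

	return nvmem_register(&cfg);
}

The at25, eeprom_93xx46 and nvmem provider conversions below follow the same shape, which is why the REGMAP selects can be dropped from the Kconfig entries.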
 
diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
index fa36a6e..2c6c7c8 100644
--- a/drivers/misc/eeprom/at25.c
+++ b/drivers/misc/eeprom/at25.c
@@ -17,7 +17,6 @@
 #include <linux/sched.h>
 
 #include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
 #include <linux/spi/spi.h>
 #include <linux/spi/eeprom.h>
 #include <linux/property.h>
@@ -34,7 +33,6 @@
 	struct mutex		lock;
 	struct spi_eeprom	chip;
 	unsigned		addrlen;
-	struct regmap_config	regmap_config;
 	struct nvmem_config	nvmem_config;
 	struct nvmem_device	*nvmem;
 };
@@ -65,14 +63,11 @@
 
 #define	io_limit	PAGE_SIZE	/* bytes */
 
-static ssize_t
-at25_ee_read(
-	struct at25_data	*at25,
-	char			*buf,
-	unsigned		offset,
-	size_t			count
-)
+static int at25_ee_read(void *priv, unsigned int offset,
+			void *val, size_t count)
 {
+	struct at25_data *at25 = priv;
+	char *buf = val;
 	u8			command[EE_MAXADDRLEN + 1];
 	u8			*cp;
 	ssize_t			status;
@@ -81,11 +76,11 @@
 	u8			instr;
 
 	if (unlikely(offset >= at25->chip.byte_len))
-		return 0;
+		return -EINVAL;
 	if ((offset + count) > at25->chip.byte_len)
 		count = at25->chip.byte_len - offset;
 	if (unlikely(!count))
-		return count;
+		return -EINVAL;
 
 	cp = command;
 
@@ -131,28 +126,14 @@
 		count, offset, (int) status);
 
 	mutex_unlock(&at25->lock);
-	return status ? status : count;
+	return status;
 }
 
-static int at25_regmap_read(void *context, const void *reg, size_t reg_size,
-			    void *val, size_t val_size)
+static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count)
 {
-	struct at25_data *at25 = context;
-	off_t offset = *(u32 *)reg;
-	int err;
-
-	err = at25_ee_read(at25, val, offset, val_size);
-	if (err)
-		return err;
-	return 0;
-}
-
-static ssize_t
-at25_ee_write(struct at25_data *at25, const char *buf, loff_t off,
-	      size_t count)
-{
-	ssize_t			status = 0;
-	unsigned		written = 0;
+	struct at25_data *at25 = priv;
+	const char *buf = val;
+	int			status = 0;
 	unsigned		buf_size;
 	u8			*bounce;
 
@@ -161,7 +142,7 @@
 	if ((off + count) > at25->chip.byte_len)
 		count = at25->chip.byte_len - off;
 	if (unlikely(!count))
-		return count;
+		return -EINVAL;
 
 	/* Temp buffer starts with command and address */
 	buf_size = at25->chip.page_size;
@@ -256,40 +237,15 @@
 		off += segment;
 		buf += segment;
 		count -= segment;
-		written += segment;
 
 	} while (count > 0);
 
 	mutex_unlock(&at25->lock);
 
 	kfree(bounce);
-	return written ? written : status;
+	return status;
 }
 
-static int at25_regmap_write(void *context, const void *data, size_t count)
-{
-	struct at25_data *at25 = context;
-	const char *buf;
-	u32 offset;
-	size_t len;
-	int err;
-
-	memcpy(&offset, data, sizeof(offset));
-	buf = (const char *)data + sizeof(offset);
-	len = count - sizeof(offset);
-
-	err = at25_ee_write(at25, buf, offset, len);
-	if (err)
-		return err;
-	return 0;
-}
-
-static const struct regmap_bus at25_regmap_bus = {
-	.read = at25_regmap_read,
-	.write = at25_regmap_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
 /*-------------------------------------------------------------------------*/
 
 static int at25_fw_to_chip(struct device *dev, struct spi_eeprom *chip)
@@ -349,7 +305,6 @@
 {
 	struct at25_data	*at25 = NULL;
 	struct spi_eeprom	chip;
-	struct regmap		*regmap;
 	int			err;
 	int			sr;
 	int			addrlen;
@@ -390,22 +345,10 @@
 
 	mutex_init(&at25->lock);
 	at25->chip = chip;
-	at25->spi = spi_dev_get(spi);
+	at25->spi = spi;
 	spi_set_drvdata(spi, at25);
 	at25->addrlen = addrlen;
 
-	at25->regmap_config.reg_bits = 32;
-	at25->regmap_config.val_bits = 8;
-	at25->regmap_config.reg_stride = 1;
-	at25->regmap_config.max_register = chip.byte_len - 1;
-
-	regmap = devm_regmap_init(&spi->dev, &at25_regmap_bus, at25,
-				  &at25->regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(&spi->dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
-
 	at25->nvmem_config.name = dev_name(&spi->dev);
 	at25->nvmem_config.dev = &spi->dev;
 	at25->nvmem_config.read_only = chip.flags & EE_READONLY;
@@ -413,6 +356,12 @@
 	at25->nvmem_config.owner = THIS_MODULE;
 	at25->nvmem_config.compat = true;
 	at25->nvmem_config.base_dev = &spi->dev;
+	at25->nvmem_config.reg_read = at25_ee_read;
+	at25->nvmem_config.reg_write = at25_ee_write;
+	at25->nvmem_config.priv = at25;
+	at25->nvmem_config.stride = 4;
+	at25->nvmem_config.word_size = 1;
+	at25->nvmem_config.size = chip.byte_len;
 
 	at25->nvmem = nvmem_register(&at25->nvmem_config);
 	if (IS_ERR(at25->nvmem))
diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
index 426fe2f..94cc035 100644
--- a/drivers/misc/eeprom/eeprom_93xx46.c
+++ b/drivers/misc/eeprom/eeprom_93xx46.c
@@ -20,7 +20,6 @@
 #include <linux/slab.h>
 #include <linux/spi/spi.h>
 #include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
 #include <linux/eeprom_93xx46.h>
 
 #define OP_START	0x4
@@ -43,7 +42,6 @@
 	struct spi_device *spi;
 	struct eeprom_93xx46_platform_data *pdata;
 	struct mutex lock;
-	struct regmap_config regmap_config;
 	struct nvmem_config nvmem_config;
 	struct nvmem_device *nvmem;
 	int addrlen;
@@ -60,11 +58,12 @@
 	return edev->pdata->quirks & EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH;
 }
 
-static ssize_t
-eeprom_93xx46_read(struct eeprom_93xx46_dev *edev, char *buf,
-		   unsigned off, size_t count)
+static int eeprom_93xx46_read(void *priv, unsigned int off,
+			      void *val, size_t count)
 {
-	ssize_t ret = 0;
+	struct eeprom_93xx46_dev *edev = priv;
+	char *buf = val;
+	int err = 0;
 
 	if (unlikely(off >= edev->size))
 		return 0;
@@ -84,7 +83,6 @@
 		u16 cmd_addr = OP_READ << edev->addrlen;
 		size_t nbytes = count;
 		int bits;
-		int err;
 
 		if (edev->addrlen == 7) {
 			cmd_addr |= off & 0x7f;
@@ -120,21 +118,20 @@
 		if (err) {
 			dev_err(&edev->spi->dev, "read %zu bytes at %d: err. %d\n",
 				nbytes, (int)off, err);
-			ret = err;
 			break;
 		}
 
 		buf += nbytes;
 		off += nbytes;
 		count -= nbytes;
-		ret += nbytes;
 	}
 
 	if (edev->pdata->finish)
 		edev->pdata->finish(edev);
 
 	mutex_unlock(&edev->lock);
-	return ret;
+
+	return err;
 }
 
 static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on)
@@ -230,10 +227,11 @@
 	return ret;
 }
 
-static ssize_t
-eeprom_93xx46_write(struct eeprom_93xx46_dev *edev, const char *buf,
-		    loff_t off, size_t count)
+static int eeprom_93xx46_write(void *priv, unsigned int off,
+				   void *val, size_t count)
 {
+	struct eeprom_93xx46_dev *edev = priv;
+	char *buf = val;
 	int i, ret, step = 1;
 
 	if (unlikely(off >= edev->size))
@@ -275,52 +273,9 @@
 
 	/* erase/write disable */
 	eeprom_93xx46_ew(edev, 0);
-	return ret ? : count;
+	return ret;
 }
 
-/*
- * Provide a regmap interface, which is registered with the NVMEM
- * framework
-*/
-static int eeprom_93xx46_regmap_read(void *context, const void *reg,
-				     size_t reg_size, void *val,
-				     size_t val_size)
-{
-	struct eeprom_93xx46_dev *eeprom_93xx46 = context;
-	off_t offset = *(u32 *)reg;
-	int err;
-
-	err = eeprom_93xx46_read(eeprom_93xx46, val, offset, val_size);
-	if (err)
-		return err;
-	return 0;
-}
-
-static int eeprom_93xx46_regmap_write(void *context, const void *data,
-				      size_t count)
-{
-	struct eeprom_93xx46_dev *eeprom_93xx46 = context;
-	const char *buf;
-	u32 offset;
-	size_t len;
-	int err;
-
-	memcpy(&offset, data, sizeof(offset));
-	buf = (const char *)data + sizeof(offset);
-	len = count - sizeof(offset);
-
-	err = eeprom_93xx46_write(eeprom_93xx46, buf, offset, len);
-	if (err)
-		return err;
-	return 0;
-}
-
-static const struct regmap_bus eeprom_93xx46_regmap_bus = {
-	.read = eeprom_93xx46_regmap_read,
-	.write = eeprom_93xx46_regmap_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
 static int eeprom_93xx46_eral(struct eeprom_93xx46_dev *edev)
 {
 	struct eeprom_93xx46_platform_data *pd = edev->pdata;
@@ -480,7 +435,6 @@
 {
 	struct eeprom_93xx46_platform_data *pd;
 	struct eeprom_93xx46_dev *edev;
-	struct regmap *regmap;
 	int err;
 
 	if (spi->dev.of_node) {
@@ -511,24 +465,10 @@
 
 	mutex_init(&edev->lock);
 
-	edev->spi = spi_dev_get(spi);
+	edev->spi = spi;
 	edev->pdata = pd;
 
 	edev->size = 128;
-
-	edev->regmap_config.reg_bits = 32;
-	edev->regmap_config.val_bits = 8;
-	edev->regmap_config.reg_stride = 1;
-	edev->regmap_config.max_register = edev->size - 1;
-
-	regmap = devm_regmap_init(&spi->dev, &eeprom_93xx46_regmap_bus, edev,
-				  &edev->regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(&spi->dev, "regmap init failed\n");
-		err = PTR_ERR(regmap);
-		goto fail;
-	}
-
 	edev->nvmem_config.name = dev_name(&spi->dev);
 	edev->nvmem_config.dev = &spi->dev;
 	edev->nvmem_config.read_only = pd->flags & EE_READONLY;
@@ -536,6 +476,12 @@
 	edev->nvmem_config.owner = THIS_MODULE;
 	edev->nvmem_config.compat = true;
 	edev->nvmem_config.base_dev = &spi->dev;
+	edev->nvmem_config.reg_read = eeprom_93xx46_read;
+	edev->nvmem_config.reg_write = eeprom_93xx46_write;
+	edev->nvmem_config.priv = edev;
+	edev->nvmem_config.stride = 4;
+	edev->nvmem_config.word_size = 1;
+	edev->nvmem_config.size = edev->size;
 
 	edev->nvmem = nvmem_register(&edev->nvmem_config);
 	if (IS_ERR(edev->nvmem)) {
diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
index 194360a..a039a5d 100644
--- a/drivers/misc/mei/amthif.c
+++ b/drivers/misc/mei/amthif.c
@@ -380,8 +380,10 @@
 
 	dev = cl->dev;
 
-	if (dev->iamthif_state != MEI_IAMTHIF_READING)
+	if (dev->iamthif_state != MEI_IAMTHIF_READING) {
+		mei_irq_discard_msg(dev, mei_hdr);
 		return 0;
+	}
 
 	ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
 	if (ret)
diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
index 5d5996e..1f33fea 100644
--- a/drivers/misc/mei/bus.c
+++ b/drivers/misc/mei/bus.c
@@ -220,17 +220,23 @@
 static void mei_cl_bus_event_work(struct work_struct *work)
 {
 	struct mei_cl_device *cldev;
+	struct mei_device *bus;
 
 	cldev = container_of(work, struct mei_cl_device, event_work);
 
+	bus = cldev->bus;
+
 	if (cldev->event_cb)
 		cldev->event_cb(cldev, cldev->events, cldev->event_context);
 
 	cldev->events = 0;
 
 	/* Prepare for the next read */
-	if (cldev->events_mask & BIT(MEI_CL_EVENT_RX))
+	if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
+		mutex_lock(&bus->device_lock);
 		mei_cl_read_start(cldev->cl, 0, NULL);
+		mutex_unlock(&bus->device_lock);
+	}
 }
 
 /**
@@ -304,6 +310,7 @@
 				unsigned long events_mask,
 				mei_cldev_event_cb_t event_cb, void *context)
 {
+	struct mei_device *bus = cldev->bus;
 	int ret;
 
 	if (cldev->event_cb)
@@ -316,15 +323,17 @@
 	INIT_WORK(&cldev->event_work, mei_cl_bus_event_work);
 
 	if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
+		mutex_lock(&bus->device_lock);
 		ret = mei_cl_read_start(cldev->cl, 0, NULL);
+		mutex_unlock(&bus->device_lock);
 		if (ret && ret != -EBUSY)
 			return ret;
 	}
 
 	if (cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)) {
-		mutex_lock(&cldev->cl->dev->device_lock);
+		mutex_lock(&bus->device_lock);
 		ret = mei_cl_notify_request(cldev->cl, NULL, event_cb ? 1 : 0);
-		mutex_unlock(&cldev->cl->dev->device_lock);
+		mutex_unlock(&bus->device_lock);
 		if (ret)
 			return ret;
 	}
@@ -580,6 +589,7 @@
 	struct mei_cl_device *cldev;
 	struct mei_cl_driver *cldrv;
 	const struct mei_cl_device_id *id;
+	int ret;
 
 	cldev = to_mei_cl_device(dev);
 	cldrv = to_mei_cl_driver(dev->driver);
@@ -594,9 +604,12 @@
 	if (!id)
 		return -ENODEV;
 
-	__module_get(THIS_MODULE);
+	ret = cldrv->probe(cldev, id);
+	if (ret)
+		return ret;
 
-	return cldrv->probe(cldev, id);
+	__module_get(THIS_MODULE);
+	return 0;
 }
 
 /**
@@ -634,11 +647,8 @@
 			     char *buf)
 {
 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
-	size_t len;
 
-	len = snprintf(buf, PAGE_SIZE, "%s", cldev->name);
-
-	return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+	return scnprintf(buf, PAGE_SIZE, "%s", cldev->name);
 }
 static DEVICE_ATTR_RO(name);
 
@@ -647,11 +657,8 @@
 {
 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
 	const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl);
-	size_t len;
 
-	len = snprintf(buf, PAGE_SIZE, "%pUl", uuid);
-
-	return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+	return scnprintf(buf, PAGE_SIZE, "%pUl", uuid);
 }
 static DEVICE_ATTR_RO(uuid);
 
@@ -660,11 +667,8 @@
 {
 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
 	u8 version = mei_me_cl_ver(cldev->me_cl);
-	size_t len;
 
-	len = snprintf(buf, PAGE_SIZE, "%02X", version);
-
-	return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+	return scnprintf(buf, PAGE_SIZE, "%02X", version);
 }
 static DEVICE_ATTR_RO(version);
 
@@ -673,10 +677,8 @@
 {
 	struct mei_cl_device *cldev = to_mei_cl_device(dev);
 	const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl);
-	size_t len;
 
-	len = snprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid);
-	return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+	return scnprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid);
 }
 static DEVICE_ATTR_RO(modalias);
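
The attribute show() helpers above switch from snprintf() plus a manual clamp to scnprintf(): snprintf() returns the length the output would have had, possibly larger than the buffer, while scnprintf() returns the number of bytes actually stored (at most PAGE_SIZE - 1), which is exactly what a sysfs show() should hand back. A minimal sketch:

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t label_show(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	/* scnprintf() never reports more than what fits in the page */
	return scnprintf(buf, PAGE_SIZE, "%s\n", dev_name(dev));
}
static DEVICE_ATTR_RO(label);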
 
diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
index bab17e4..eed254d 100644
--- a/drivers/misc/mei/client.c
+++ b/drivers/misc/mei/client.c
@@ -727,6 +727,11 @@
 		cl_dbg(dev, cl, "Waking up waiting for event clients!\n");
 		wake_up_interruptible(&cl->ev_wait);
 	}
+	/* synchronized under device mutex */
+	if (waitqueue_active(&cl->wait)) {
+		cl_dbg(dev, cl, "Waking up ctrl write clients!\n");
+		wake_up_interruptible(&cl->wait);
+	}
 }
 
 /**
@@ -879,12 +884,15 @@
 	}
 
 	mutex_unlock(&dev->device_lock);
-	wait_event_timeout(cl->wait, cl->state == MEI_FILE_DISCONNECT_REPLY,
+	wait_event_timeout(cl->wait,
+			   cl->state == MEI_FILE_DISCONNECT_REPLY ||
+			   cl->state == MEI_FILE_DISCONNECTED,
 			   mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
 	mutex_lock(&dev->device_lock);
 
 	rets = cl->status;
-	if (cl->state != MEI_FILE_DISCONNECT_REPLY) {
+	if (cl->state != MEI_FILE_DISCONNECT_REPLY &&
+	    cl->state != MEI_FILE_DISCONNECTED) {
 		cl_dbg(dev, cl, "timeout on disconnect from FW client.\n");
 		rets = -ETIME;
 	}
@@ -1085,6 +1093,7 @@
 	mutex_unlock(&dev->device_lock);
 	wait_event_timeout(cl->wait,
 			(cl->state == MEI_FILE_CONNECTED ||
+			 cl->state == MEI_FILE_DISCONNECTED ||
 			 cl->state == MEI_FILE_DISCONNECT_REQUIRED ||
 			 cl->state == MEI_FILE_DISCONNECT_REPLY),
 			mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
@@ -1333,16 +1342,13 @@
 	}
 
 	mutex_unlock(&dev->device_lock);
-	wait_event_timeout(cl->wait, cl->notify_en == request,
-			mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
+	wait_event_timeout(cl->wait,
+			   cl->notify_en == request || !mei_cl_is_connected(cl),
+			   mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
 	mutex_lock(&dev->device_lock);
 
-	if (cl->notify_en != request) {
-		mei_io_list_flush(&dev->ctrl_rd_list, cl);
-		mei_io_list_flush(&dev->ctrl_wr_list, cl);
-		if (!cl->status)
-			cl->status = -EFAULT;
-	}
+	if (cl->notify_en != request && !cl->status)
+		cl->status = -EFAULT;
 
 	rets = cl->status;
 
@@ -1767,6 +1773,10 @@
 			wake_up(&cl->wait);
 
 		break;
+	case MEI_FOP_DISCONNECT_RSP:
+		mei_io_cb_free(cb);
+		mei_cl_set_disconnected(cl);
+		break;
 	default:
 		BUG_ON(0);
 	}
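
client.c now treats MEI_FILE_DISCONNECTED as a terminal state in the connect, disconnect and notify waits, and mei_cl_wake_all() also wakes cl->wait, so a forced disconnect (for example on link reset) no longer leaves a waiter sleeping out the full timeout. The general shape of such a wait, roughly:

#include <linux/wait.h>

/*
 * Wait for either the expected reply or a terminal state; whoever
 * changes *state must wake the queue, otherwise the sleeper only
 * returns once the timeout elapses.
 */
static long wait_for_reply(wait_queue_head_t *wq, int *state, int reply,
			   int dead, unsigned long timeout)
{
	return wait_event_timeout(*wq, *state == reply || *state == dead,
				  timeout);
}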
diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
index 5e305d2..5aa606c 100644
--- a/drivers/misc/mei/hbm.c
+++ b/drivers/misc/mei/hbm.c
@@ -113,8 +113,6 @@
  */
 void mei_hbm_reset(struct mei_device *dev)
 {
-	dev->me_client_index = 0;
-
 	mei_me_cl_rm_all(dev);
 
 	mei_hbm_idle(dev);
@@ -530,24 +528,22 @@
  * mei_hbm_prop_req - request property for a single client
  *
  * @dev: the device structure
+ * @start_idx: client index to start search
  *
  * Return: 0 on success and < 0 on failure
  */
-
-static int mei_hbm_prop_req(struct mei_device *dev)
+static int mei_hbm_prop_req(struct mei_device *dev, unsigned long start_idx)
 {
-
 	struct mei_msg_hdr *mei_hdr = &dev->wr_msg.hdr;
 	struct hbm_props_request *prop_req;
 	const size_t len = sizeof(struct hbm_props_request);
-	unsigned long next_client_index;
+	unsigned long addr;
 	int ret;
 
-	next_client_index = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX,
-					  dev->me_client_index);
+	addr = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX, start_idx);
 
 	/* We got all client properties */
-	if (next_client_index == MEI_CLIENTS_MAX) {
+	if (addr == MEI_CLIENTS_MAX) {
 		dev->hbm_state = MEI_HBM_STARTED;
 		mei_host_client_init(dev);
 
@@ -560,7 +556,7 @@
 	memset(prop_req, 0, sizeof(struct hbm_props_request));
 
 	prop_req->hbm_cmd = HOST_CLIENT_PROPERTIES_REQ_CMD;
-	prop_req->me_addr = next_client_index;
+	prop_req->me_addr = addr;
 
 	ret = mei_write_message(dev, mei_hdr, dev->wr_msg.data);
 	if (ret) {
@@ -570,7 +566,6 @@
 	}
 
 	dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
-	dev->me_client_index = next_client_index;
 
 	return 0;
 }
@@ -882,8 +877,7 @@
 		cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT_RSP, NULL);
 		if (!cb)
 			return -ENOMEM;
-		cl_dbg(dev, cl, "add disconnect response as first\n");
-		list_add(&cb->list, &dev->ctrl_wr_list.list);
+		list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
 	}
 	return 0;
 }
@@ -1152,10 +1146,8 @@
 
 		mei_hbm_me_cl_add(dev, props_res);
 
-		dev->me_client_index++;
-
 		/* request property for the next client */
-		if (mei_hbm_prop_req(dev))
+		if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
 			return -EIO;
 
 		break;
@@ -1181,7 +1173,7 @@
 		dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES;
 
 		/* first property request */
-		if (mei_hbm_prop_req(dev))
+		if (mei_hbm_prop_req(dev, 0))
 			return -EIO;
 
 		break;
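
Dropping dev->me_client_index means each properties request derives the next firmware client straight from the bitmap and the address returned in the previous response, so a reset in the middle of enumeration cannot leave a stale cursor behind. The bitmap walk relies on find_next_bit(), roughly (MAX_CLIENTS is a hypothetical bound):

#include <linux/bitmap.h>
#include <linux/bitops.h>

#define MAX_CLIENTS 256

/* Return the index of the next set bit at or after start, or MAX_CLIENTS. */
static unsigned long next_client(const unsigned long *map, unsigned long start)
{
	return find_next_bit(map, MAX_CLIENTS, start);
}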
diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
index 1e5cb1f..3831a7b 100644
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -76,7 +76,6 @@
  * @dev: mei device
  * @hdr: message header
  */
-static inline
 void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr)
 {
 	/*
@@ -194,10 +193,7 @@
 		return -EMSGSIZE;
 
 	ret = mei_hbm_cl_disconnect_rsp(dev, cl);
-	mei_cl_set_disconnected(cl);
-	mei_io_cb_free(cb);
-	mei_me_cl_put(cl->me_cl);
-	cl->me_cl = NULL;
+	list_move_tail(&cb->list, &cmpl_list->list);
 
 	return ret;
 }
diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
index db78e6d..c9e0102 100644
--- a/drivers/misc/mei/mei_dev.h
+++ b/drivers/misc/mei/mei_dev.h
@@ -396,7 +396,6 @@
  * @me_clients  : list of FW clients
  * @me_clients_map : FW clients bit map
  * @host_clients_map : host clients id pool
- * @me_client_index : last FW client index in enumeration
  *
  * @allow_fixed_address: allow user space to connect a fixed client
  * @override_fixed_address: force allow fixed address behavior
@@ -486,7 +485,6 @@
 	struct list_head me_clients;
 	DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
 	DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX);
-	unsigned long me_client_index;
 
 	bool allow_fixed_address;
 	bool override_fixed_address;
@@ -704,6 +702,8 @@
 
 bool mei_write_is_idle(struct mei_device *dev);
 
+void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr);
+
 #if IS_ENABLED(CONFIG_DEBUG_FS)
 int mei_dbgfs_register(struct mei_device *dev, const char *name);
 void mei_dbgfs_deregister(struct mei_device *dev);
diff --git a/drivers/misc/mic/Kconfig b/drivers/misc/mic/Kconfig
index 2e4f3ba..89e5917 100644
--- a/drivers/misc/mic/Kconfig
+++ b/drivers/misc/mic/Kconfig
@@ -132,6 +132,7 @@
 	tristate "VOP Driver"
 	depends on 64BIT && PCI && X86 && VOP_BUS
 	select VHOST_RING
+	select VIRTIO
 	help
 	  This enables VOP (Virtio over PCIe) Driver support for the Intel
 	  Many Integrated Core (MIC) family of PCIe form factor coprocessor
diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c
index 8c91c99..e047efd 100644
--- a/drivers/misc/mic/host/mic_boot.c
+++ b/drivers/misc/mic/host/mic_boot.c
@@ -76,7 +76,7 @@
 {
 	struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
 
-	return mic_free_irq(mdev, cookie, data);
+	mic_free_irq(mdev, cookie, data);
 }
 
 static void __mic_ack_interrupt(struct vop_device *vpdev, int num)
@@ -272,7 +272,7 @@
 {
 	struct mic_device *mdev = scdev_to_mdev(scdev);
 
-	return mic_free_irq(mdev, cookie, data);
+	mic_free_irq(mdev, cookie, data);
 }
 
 static void ___mic_ack_interrupt(struct scif_hw_dev *scdev, int num)
@@ -362,7 +362,7 @@
 static void _mic_free_irq(struct mbus_device *mbdev,
 			  struct mic_irq *cookie, void *data)
 {
-	return mic_free_irq(mbdev_to_mdev(mbdev), cookie, data);
+	mic_free_irq(mbdev_to_mdev(mbdev), cookie, data);
 }
 
 static void _mic_ack_interrupt(struct mbus_device *mbdev, int num)
diff --git a/drivers/misc/mic/scif/scif_fence.c b/drivers/misc/mic/scif/scif_fence.c
index 7f2c96f..cac3bcc 100644
--- a/drivers/misc/mic/scif/scif_fence.c
+++ b/drivers/misc/mic/scif/scif_fence.c
@@ -27,7 +27,8 @@
 void scif_recv_mark(struct scif_dev *scifdev, struct scifmsg *msg)
 {
 	struct scif_endpt *ep = (struct scif_endpt *)msg->payload[0];
-	int mark, err;
+	int mark = 0;
+	int err;
 
 	err = _scif_fence_mark(ep, &mark);
 	if (err)
diff --git a/drivers/misc/qcom-coincell.c b/drivers/misc/qcom-coincell.c
index 7b4a2da..829a61d 100644
--- a/drivers/misc/qcom-coincell.c
+++ b/drivers/misc/qcom-coincell.c
@@ -94,7 +94,8 @@
 {
 	struct device_node *node = pdev->dev.of_node;
 	struct qcom_coincell chgr;
-	u32 rset, vset;
+	u32 rset = 0;
+	u32 vset = 0;
 	bool enable;
 	int rc;
 
diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
index 69cdabe..f84b53d 100644
--- a/drivers/misc/sram.c
+++ b/drivers/misc/sram.c
@@ -364,8 +364,8 @@
 		sram->virt_base = devm_ioremap(sram->dev, res->start, size);
 	else
 		sram->virt_base = devm_ioremap_wc(sram->dev, res->start, size);
-	if (IS_ERR(sram->virt_base))
-		return PTR_ERR(sram->virt_base);
+	if (!sram->virt_base)
+		return -ENOMEM;
 
 	sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY),
 					  NUMA_NO_NODE, NULL);
diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c
index 71b6455..bf0d770 100644
--- a/drivers/misc/ti-st/st_kim.c
+++ b/drivers/misc/ti-st/st_kim.c
@@ -78,7 +78,6 @@
 		memcpy(kim_gdata->resp_buffer,
 				kim_gdata->rx_skb->data,
 				kim_gdata->rx_skb->len);
-		complete_all(&kim_gdata->kim_rcvd);
 		kim_gdata->rx_state = ST_W4_PACKET_TYPE;
 		kim_gdata->rx_skb = NULL;
 		kim_gdata->rx_count = 0;
diff --git a/drivers/nvmem/Kconfig b/drivers/nvmem/Kconfig
index ca52952..3041d48 100644
--- a/drivers/nvmem/Kconfig
+++ b/drivers/nvmem/Kconfig
@@ -1,6 +1,5 @@
 menuconfig NVMEM
 	tristate "NVMEM Support"
-	select REGMAP
 	help
 	  Support for NVMEM(Non Volatile Memory) devices like EEPROM, EFUSES...
 
@@ -28,6 +27,7 @@
 config NVMEM_LPC18XX_EEPROM
 	tristate "NXP LPC18XX EEPROM Memory Support"
 	depends on ARCH_LPC18XX || COMPILE_TEST
+	depends on HAS_IOMEM
 	help
 	  Say Y here to include support for NXP LPC18xx EEPROM memory found in
 	  NXP LPC185x/3x and LPC435x/3x/2x/1x devices.
@@ -49,6 +49,7 @@
 config MTK_EFUSE
 	tristate "Mediatek SoCs EFUSE support"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
+	depends on HAS_IOMEM
 	select REGMAP_MMIO
 	help
 	  This is a driver to access hardware related data like sensor
@@ -61,7 +62,6 @@
 	tristate "QCOM QFPROM Support"
 	depends on ARCH_QCOM || COMPILE_TEST
 	depends on HAS_IOMEM
-	select REGMAP_MMIO
 	help
 	  Say y here to enable QFPROM support. The QFPROM provides access
 	  functions for QFPROM data to rest of the drivers via nvmem interface.
@@ -83,7 +83,6 @@
 config NVMEM_SUNXI_SID
 	tristate "Allwinner SoCs SID support"
 	depends on ARCH_SUNXI
-	select REGMAP_MMIO
 	help
 	  This is a driver for the 'security ID' available on various Allwinner
 	  devices.
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index 0de3d87..bb4ea12 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -23,12 +23,10 @@
 #include <linux/nvmem-consumer.h>
 #include <linux/nvmem-provider.h>
 #include <linux/of.h>
-#include <linux/regmap.h>
 #include <linux/slab.h>
 
 struct nvmem_device {
 	const char		*name;
-	struct regmap		*regmap;
 	struct module		*owner;
 	struct device		dev;
 	int			stride;
@@ -41,6 +39,9 @@
 	int			flags;
 	struct bin_attribute	eeprom;
 	struct device		*base_dev;
+	nvmem_reg_read_t	reg_read;
+	nvmem_reg_write_t	reg_write;
+	void *priv;
 };
 
 #define FLAG_COMPAT		BIT(0)
@@ -66,6 +67,23 @@
 #endif
 
 #define to_nvmem_device(d) container_of(d, struct nvmem_device, dev)
+static int nvmem_reg_read(struct nvmem_device *nvmem, unsigned int offset,
+			  void *val, size_t bytes)
+{
+	if (nvmem->reg_read)
+		return nvmem->reg_read(nvmem->priv, offset, val, bytes);
+
+	return -EINVAL;
+}
+
+static int nvmem_reg_write(struct nvmem_device *nvmem, unsigned int offset,
+			   void *val, size_t bytes)
+{
+	if (nvmem->reg_write)
+		return nvmem->reg_write(nvmem->priv, offset, val, bytes);
+
+	return -EINVAL;
+}
 
 static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj,
 				    struct bin_attribute *attr,
@@ -93,7 +111,7 @@
 
 	count = round_down(count, nvmem->word_size);
 
-	rc = regmap_raw_read(nvmem->regmap, pos, buf, count);
+	rc = nvmem_reg_read(nvmem, pos, buf, count);
 
 	if (IS_ERR_VALUE(rc))
 		return rc;
@@ -127,7 +145,7 @@
 
 	count = round_down(count, nvmem->word_size);
 
-	rc = regmap_raw_write(nvmem->regmap, pos, buf, count);
+	rc = nvmem_reg_write(nvmem, pos, buf, count);
 
 	if (IS_ERR_VALUE(rc))
 		return rc;
@@ -421,18 +439,11 @@
 {
 	struct nvmem_device *nvmem;
 	struct device_node *np;
-	struct regmap *rm;
 	int rval;
 
 	if (!config->dev)
 		return ERR_PTR(-EINVAL);
 
-	rm = dev_get_regmap(config->dev, NULL);
-	if (!rm) {
-		dev_err(config->dev, "Regmap not found\n");
-		return ERR_PTR(-EINVAL);
-	}
-
 	nvmem = kzalloc(sizeof(*nvmem), GFP_KERNEL);
 	if (!nvmem)
 		return ERR_PTR(-ENOMEM);
@@ -444,14 +455,16 @@
 	}
 
 	nvmem->id = rval;
-	nvmem->regmap = rm;
 	nvmem->owner = config->owner;
-	nvmem->stride = regmap_get_reg_stride(rm);
-	nvmem->word_size = regmap_get_val_bytes(rm);
-	nvmem->size = regmap_get_max_register(rm) + nvmem->stride;
+	nvmem->stride = config->stride;
+	nvmem->word_size = config->word_size;
+	nvmem->size = config->size;
 	nvmem->dev.type = &nvmem_provider_type;
 	nvmem->dev.bus = &nvmem_bus_type;
 	nvmem->dev.parent = config->dev;
+	nvmem->priv = config->priv;
+	nvmem->reg_read = config->reg_read;
+	nvmem->reg_write = config->reg_write;
 	np = config->dev->of_node;
 	nvmem->dev.of_node = np;
 	dev_set_name(&nvmem->dev, "%s%d",
@@ -948,7 +961,7 @@
 {
 	int rc;
 
-	rc = regmap_raw_read(nvmem->regmap, cell->offset, buf, cell->bytes);
+	rc = nvmem_reg_read(nvmem, cell->offset, buf, cell->bytes);
 
 	if (IS_ERR_VALUE(rc))
 		return rc;
@@ -977,7 +990,7 @@
 	u8 *buf;
 	int rc;
 
-	if (!nvmem || !nvmem->regmap)
+	if (!nvmem)
 		return ERR_PTR(-EINVAL);
 
 	buf = kzalloc(cell->bytes, GFP_KERNEL);
@@ -1014,7 +1027,7 @@
 		*b <<= bit_offset;
 
 		/* setup the first byte with lsb bits from nvmem */
-		rc = regmap_raw_read(nvmem->regmap, cell->offset, &v, 1);
+		rc = nvmem_reg_read(nvmem, cell->offset, &v, 1);
 		*b++ |= GENMASK(bit_offset - 1, 0) & v;
 
 		/* setup rest of the byte if any */
@@ -1031,7 +1044,7 @@
 	/* if it's not end on byte boundary */
 	if ((nbits + bit_offset) % BITS_PER_BYTE) {
 		/* setup the last byte with msb bits from nvmem */
-		rc = regmap_raw_read(nvmem->regmap,
+		rc = nvmem_reg_read(nvmem,
 				    cell->offset + cell->bytes - 1, &v, 1);
 		*p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v;
 
@@ -1054,7 +1067,7 @@
 	struct nvmem_device *nvmem = cell->nvmem;
 	int rc;
 
-	if (!nvmem || !nvmem->regmap || nvmem->read_only ||
+	if (!nvmem || nvmem->read_only ||
 	    (cell->bit_offset == 0 && len != cell->bytes))
 		return -EINVAL;
 
@@ -1064,7 +1077,7 @@
 			return PTR_ERR(buf);
 	}
 
-	rc = regmap_raw_write(nvmem->regmap, cell->offset, buf, cell->bytes);
+	rc = nvmem_reg_write(nvmem, cell->offset, buf, cell->bytes);
 
 	/* free the tmp buffer */
 	if (cell->bit_offset || cell->nbits)
@@ -1094,7 +1107,7 @@
 	int rc;
 	ssize_t len;
 
-	if (!nvmem || !nvmem->regmap)
+	if (!nvmem)
 		return -EINVAL;
 
 	rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell);
@@ -1124,7 +1137,7 @@
 	struct nvmem_cell cell;
 	int rc;
 
-	if (!nvmem || !nvmem->regmap)
+	if (!nvmem)
 		return -EINVAL;
 
 	rc = nvmem_cell_info_to_nvmem_cell(nvmem, info, &cell);
@@ -1152,10 +1165,10 @@
 {
 	int rc;
 
-	if (!nvmem || !nvmem->regmap)
+	if (!nvmem)
 		return -EINVAL;
 
-	rc = regmap_raw_read(nvmem->regmap, offset, buf, bytes);
+	rc = nvmem_reg_read(nvmem, offset, buf, bytes);
 
 	if (IS_ERR_VALUE(rc))
 		return rc;
@@ -1180,10 +1193,10 @@
 {
 	int rc;
 
-	if (!nvmem || !nvmem->regmap)
+	if (!nvmem)
 		return -EINVAL;
 
-	rc = regmap_raw_write(nvmem->regmap, offset, buf, bytes);
+	rc = nvmem_reg_write(nvmem, offset, buf, bytes);
 
 	if (IS_ERR_VALUE(rc))
 		return rc;
diff --git a/drivers/nvmem/imx-ocotp.c b/drivers/nvmem/imx-ocotp.c
index d7796eb..75e66ef 100644
--- a/drivers/nvmem/imx-ocotp.c
+++ b/drivers/nvmem/imx-ocotp.c
@@ -22,7 +22,6 @@
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 #include <linux/slab.h>
 
 struct ocotp_priv {
@@ -31,59 +30,34 @@
 	unsigned int nregs;
 };
 
-static int imx_ocotp_read(void *context, const void *reg, size_t reg_size,
-			  void *val, size_t val_size)
+static int imx_ocotp_read(void *context, unsigned int offset,
+			  void *val, size_t bytes)
 {
 	struct ocotp_priv *priv = context;
-	unsigned int offset = *(u32 *)reg;
 	unsigned int count;
+	u32 *buf = val;
 	int i;
 	u32 index;
 
 	index = offset >> 2;
-	count = val_size >> 2;
+	count = bytes >> 2;
 
 	if (count > (priv->nregs - index))
 		count = priv->nregs - index;
 
-	for (i = index; i < (index + count); i++) {
-		*(u32 *)val = readl(priv->base + 0x400 + i * 0x10);
-		val += 4;
-	}
+	for (i = index; i < (index + count); i++)
+		*buf++ = readl(priv->base + 0x400 + i * 0x10);
 
 	return 0;
 }
 
-static int imx_ocotp_write(void *context, const void *data, size_t count)
-{
-	/* Not implemented */
-	return 0;
-}
-
-static struct regmap_bus imx_ocotp_bus = {
-	.read = imx_ocotp_read,
-	.write = imx_ocotp_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-static bool imx_ocotp_writeable_reg(struct device *dev, unsigned int reg)
-{
-	return false;
-}
-
-static struct regmap_config imx_ocotp_regmap_config = {
-	.reg_bits = 32,
-	.val_bits = 32,
-	.reg_stride = 4,
-	.writeable_reg = imx_ocotp_writeable_reg,
-	.name = "imx-ocotp",
-};
-
 static struct nvmem_config imx_ocotp_nvmem_config = {
 	.name = "imx-ocotp",
 	.read_only = true,
+	.word_size = 4,
+	.stride = 4,
 	.owner = THIS_MODULE,
+	.reg_read = imx_ocotp_read,
 };
 
 static const struct of_device_id imx_ocotp_dt_ids[] = {
@@ -99,7 +73,6 @@
 	const struct of_device_id *of_id;
 	struct device *dev = &pdev->dev;
 	struct resource *res;
-	struct regmap *regmap;
 	struct ocotp_priv *priv;
 	struct nvmem_device *nvmem;
 
@@ -114,15 +87,9 @@
 
 	of_id = of_match_device(imx_ocotp_dt_ids, dev);
 	priv->nregs = (unsigned int)of_id->data;
-	imx_ocotp_regmap_config.max_register = 4 * priv->nregs - 4;
-
-	regmap = devm_regmap_init(dev, &imx_ocotp_bus, priv,
-				  &imx_ocotp_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
+	imx_ocotp_nvmem_config.size = 4 * priv->nregs;
 	imx_ocotp_nvmem_config.dev = dev;
+	imx_ocotp_nvmem_config.priv = priv;
 	nvmem = nvmem_register(&imx_ocotp_nvmem_config);
 	if (IS_ERR(nvmem))
 		return PTR_ERR(nvmem);
diff --git a/drivers/nvmem/lpc18xx_eeprom.c b/drivers/nvmem/lpc18xx_eeprom.c
index 878fce7..c81ae4c 100644
--- a/drivers/nvmem/lpc18xx_eeprom.c
+++ b/drivers/nvmem/lpc18xx_eeprom.c
@@ -16,7 +16,6 @@
 #include <linux/module.h>
 #include <linux/nvmem-provider.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 #include <linux/reset.h>
 
 /* Registers */
@@ -51,12 +50,7 @@
 	struct nvmem_device *nvmem;
 	unsigned reg_bytes;
 	unsigned val_bytes;
-};
-
-static struct regmap_config lpc18xx_regmap_config = {
-	.reg_bits = 32,
-	.reg_stride = 4,
-	.val_bits = 32,
+	int size;
 };
 
 static inline void lpc18xx_eeprom_writel(struct lpc18xx_eeprom_dev *eeprom,
@@ -95,30 +89,35 @@
 	return -ETIMEDOUT;
 }
 
-static int lpc18xx_eeprom_gather_write(void *context, const void *reg,
-				       size_t reg_size, const void *val,
-				       size_t val_size)
+static int lpc18xx_eeprom_gather_write(void *context, unsigned int reg,
+				       void *val, size_t bytes)
 {
 	struct lpc18xx_eeprom_dev *eeprom = context;
-	unsigned int offset = *(u32 *)reg;
+	unsigned int offset = reg;
 	int ret;
 
-	if (offset % lpc18xx_regmap_config.reg_stride)
+	/*
+	 * The last page contains the EEPROM initialization data and is not
+	 * writable.
+	 */
+	if ((reg > eeprom->size - LPC18XX_EEPROM_PAGE_SIZE) ||
+			(reg + bytes > eeprom->size - LPC18XX_EEPROM_PAGE_SIZE))
 		return -EINVAL;
 
+
 	lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
 			      LPC18XX_EEPROM_PWRDWN_NO);
 
 	/* Wait 100 us while the EEPROM wakes up */
 	usleep_range(100, 200);
 
-	while (val_size) {
+	while (bytes) {
 		writel(*(u32 *)val, eeprom->mem_base + offset);
 		ret = lpc18xx_eeprom_busywait_until_prog(eeprom);
 		if (ret < 0)
 			return ret;
 
-		val_size -= eeprom->val_bytes;
+		bytes -= eeprom->val_bytes;
 		val += eeprom->val_bytes;
 		offset += eeprom->val_bytes;
 	}
@@ -129,23 +128,10 @@
 	return 0;
 }
 
-static int lpc18xx_eeprom_write(void *context, const void *data, size_t count)
+static int lpc18xx_eeprom_read(void *context, unsigned int offset,
+			       void *val, size_t bytes)
 {
 	struct lpc18xx_eeprom_dev *eeprom = context;
-	unsigned int offset = eeprom->reg_bytes;
-
-	if (count <= offset)
-		return -EINVAL;
-
-	return lpc18xx_eeprom_gather_write(context, data, eeprom->reg_bytes,
-					   data + offset, count - offset);
-}
-
-static int lpc18xx_eeprom_read(void *context, const void *reg, size_t reg_size,
-			       void *val, size_t val_size)
-{
-	struct lpc18xx_eeprom_dev *eeprom = context;
-	unsigned int offset = *(u32 *)reg;
 
 	lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
 			      LPC18XX_EEPROM_PWRDWN_NO);
@@ -153,9 +139,9 @@
 	/* Wait 100 us while the EEPROM wakes up */
 	usleep_range(100, 200);
 
-	while (val_size) {
+	while (bytes) {
 		*(u32 *)val = readl(eeprom->mem_base + offset);
-		val_size -= eeprom->val_bytes;
+		bytes -= eeprom->val_bytes;
 		val += eeprom->val_bytes;
 		offset += eeprom->val_bytes;
 	}
@@ -166,31 +152,13 @@
 	return 0;
 }
 
-static struct regmap_bus lpc18xx_eeprom_bus = {
-	.write = lpc18xx_eeprom_write,
-	.gather_write = lpc18xx_eeprom_gather_write,
-	.read = lpc18xx_eeprom_read,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-static bool lpc18xx_eeprom_writeable_reg(struct device *dev, unsigned int reg)
-{
-	/*
-	 * The last page contains the EEPROM initialization data and is not
-	 * writable.
-	 */
-	return reg <= lpc18xx_regmap_config.max_register -
-						LPC18XX_EEPROM_PAGE_SIZE;
-}
-
-static bool lpc18xx_eeprom_readable_reg(struct device *dev, unsigned int reg)
-{
-	return reg <= lpc18xx_regmap_config.max_register;
-}
 
 static struct nvmem_config lpc18xx_nvmem_config = {
 	.name = "lpc18xx-eeprom",
+	.stride = 4,
+	.word_size = 4,
+	.reg_read = lpc18xx_eeprom_read,
+	.reg_write = lpc18xx_eeprom_gather_write,
 	.owner = THIS_MODULE,
 };
 
@@ -200,7 +168,6 @@
 	struct device *dev = &pdev->dev;
 	struct reset_control *rst;
 	unsigned long clk_rate;
-	struct regmap *regmap;
 	struct resource *res;
 	int ret;
 
@@ -243,8 +210,8 @@
 		goto err_clk;
 	}
 
-	eeprom->val_bytes = lpc18xx_regmap_config.val_bits / BITS_PER_BYTE;
-	eeprom->reg_bytes = lpc18xx_regmap_config.reg_bits / BITS_PER_BYTE;
+	eeprom->val_bytes = 4;
+	eeprom->reg_bytes = 4;
 
 	/*
 	 * Clock rate is generated by dividing the system bus clock by the
@@ -264,19 +231,10 @@
 	lpc18xx_eeprom_writel(eeprom, LPC18XX_EEPROM_PWRDWN,
 			      LPC18XX_EEPROM_PWRDWN_YES);
 
-	lpc18xx_regmap_config.max_register = resource_size(res) - 1;
-	lpc18xx_regmap_config.writeable_reg = lpc18xx_eeprom_writeable_reg;
-	lpc18xx_regmap_config.readable_reg = lpc18xx_eeprom_readable_reg;
-
-	regmap = devm_regmap_init(dev, &lpc18xx_eeprom_bus, eeprom,
-				  &lpc18xx_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(dev, "regmap init failed: %ld\n", PTR_ERR(regmap));
-		ret = PTR_ERR(regmap);
-		goto err_clk;
-	}
-
+	eeprom->size = resource_size(res);
+	lpc18xx_nvmem_config.size = resource_size(res);
 	lpc18xx_nvmem_config.dev = dev;
+	lpc18xx_nvmem_config.priv = eeprom;
 
 	eeprom->nvmem = nvmem_register(&lpc18xx_nvmem_config);
 	if (IS_ERR(eeprom->nvmem)) {
diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
index 3829e5f..b5305f0 100644
--- a/drivers/nvmem/qfprom.c
+++ b/drivers/nvmem/qfprom.c
@@ -13,21 +13,35 @@
 
 #include <linux/device.h>
 #include <linux/module.h>
+#include <linux/io.h>
 #include <linux/nvmem-provider.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 
-static struct regmap_config qfprom_regmap_config = {
-	.reg_bits = 32,
-	.val_bits = 8,
-	.reg_stride = 1,
-	.val_format_endian = REGMAP_ENDIAN_LITTLE,
-};
+static int qfprom_reg_read(void *context,
+			unsigned int reg, void *_val, size_t bytes)
+{
+	void __iomem *base = context;
+	u32 *val = _val;
+	int i = 0, words = bytes / 4;
 
-static struct nvmem_config econfig = {
-	.name = "qfprom",
-	.owner = THIS_MODULE,
-};
+	while (words--)
+		*val++ = readl(base + reg + (i++ * 4));
+
+	return 0;
+}
+
+static int qfprom_reg_write(void *context,
+			 unsigned int reg, void *_val, size_t bytes)
+{
+	void __iomem *base = context;
+	u32 *val = _val;
+	int i = 0, words = bytes / 4;
+
+	while (words--)
+		writel(*val++, base + reg + (i++ * 4));
+
+	return 0;
+}
 
 static int qfprom_remove(struct platform_device *pdev)
 {
@@ -36,12 +50,20 @@
 	return nvmem_unregister(nvmem);
 }
 
+static struct nvmem_config econfig = {
+	.name = "qfprom",
+	.owner = THIS_MODULE,
+	.stride = 4,
+	.word_size = 1,
+	.reg_read = qfprom_reg_read,
+	.reg_write = qfprom_reg_write,
+};
+
 static int qfprom_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct resource *res;
 	struct nvmem_device *nvmem;
-	struct regmap *regmap;
 	void __iomem *base;
 
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -49,14 +71,10 @@
 	if (IS_ERR(base))
 		return PTR_ERR(base);
 
-	qfprom_regmap_config.max_register = resource_size(res) - 1;
-
-	regmap = devm_regmap_init_mmio(dev, base, &qfprom_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
+	econfig.size = resource_size(res);
 	econfig.dev = dev;
+	econfig.priv = base;
+
 	nvmem = nvmem_register(&econfig);
 	if (IS_ERR(nvmem))
 		return PTR_ERR(nvmem);
diff --git a/drivers/nvmem/rockchip-efuse.c b/drivers/nvmem/rockchip-efuse.c
index a009795..4d3f391 100644
--- a/drivers/nvmem/rockchip-efuse.c
+++ b/drivers/nvmem/rockchip-efuse.c
@@ -23,7 +23,6 @@
 #include <linux/slab.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 
 #define EFUSE_A_SHIFT			6
 #define EFUSE_A_MASK			0x3ff
@@ -41,17 +40,9 @@
 	struct clk *clk;
 };
 
-static int rockchip_efuse_write(void *context, const void *data, size_t count)
+static int rockchip_efuse_read(void *context, unsigned int offset,
+			       void *val, size_t bytes)
 {
-	/* Nothing TBD, Read-Only */
-	return 0;
-}
-
-static int rockchip_efuse_read(void *context,
-			       const void *reg, size_t reg_size,
-			       void *val, size_t val_size)
-{
-	unsigned int offset = *(u32 *)reg;
 	struct rockchip_efuse_chip *efuse = context;
 	u8 *buf = val;
 	int ret;
@@ -64,12 +55,12 @@
 
 	writel(EFUSE_LOAD | EFUSE_PGENB, efuse->base + REG_EFUSE_CTRL);
 	udelay(1);
-	while (val_size) {
+	while (bytes--) {
 		writel(readl(efuse->base + REG_EFUSE_CTRL) &
 			     (~(EFUSE_A_MASK << EFUSE_A_SHIFT)),
 			     efuse->base + REG_EFUSE_CTRL);
 		writel(readl(efuse->base + REG_EFUSE_CTRL) |
-			     ((offset & EFUSE_A_MASK) << EFUSE_A_SHIFT),
+			     ((offset++ & EFUSE_A_MASK) << EFUSE_A_SHIFT),
 			     efuse->base + REG_EFUSE_CTRL);
 		udelay(1);
 		writel(readl(efuse->base + REG_EFUSE_CTRL) |
@@ -79,9 +70,6 @@
 		writel(readl(efuse->base + REG_EFUSE_CTRL) &
 		     (~EFUSE_STROBE), efuse->base + REG_EFUSE_CTRL);
 		udelay(1);
-
-		val_size -= 1;
-		offset += 1;
 	}
 
 	/* Switch to standby mode */
@@ -92,22 +80,11 @@
 	return 0;
 }
 
-static struct regmap_bus rockchip_efuse_bus = {
-	.read = rockchip_efuse_read,
-	.write = rockchip_efuse_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-static struct regmap_config rockchip_efuse_regmap_config = {
-	.reg_bits = 32,
-	.reg_stride = 1,
-	.val_bits = 8,
-};
-
 static struct nvmem_config econfig = {
 	.name = "rockchip-efuse",
 	.owner = THIS_MODULE,
+	.stride = 1,
+	.word_size = 1,
 	.read_only = true,
 };
 
@@ -121,7 +98,6 @@
 {
 	struct resource *res;
 	struct nvmem_device *nvmem;
-	struct regmap *regmap;
 	struct rockchip_efuse_chip *efuse;
 
 	efuse = devm_kzalloc(&pdev->dev, sizeof(struct rockchip_efuse_chip),
@@ -139,16 +115,9 @@
 		return PTR_ERR(efuse->clk);
 
 	efuse->dev = &pdev->dev;
-
-	rockchip_efuse_regmap_config.max_register = resource_size(res) - 1;
-
-	regmap = devm_regmap_init(efuse->dev, &rockchip_efuse_bus,
-				  efuse, &rockchip_efuse_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(efuse->dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
-
+	econfig.size = resource_size(res);
+	econfig.reg_read = rockchip_efuse_read;
+	econfig.priv = efuse;
 	econfig.dev = efuse->dev;
 	nvmem = nvmem_register(&econfig);
 	if (IS_ERR(nvmem))
diff --git a/drivers/nvmem/sunxi_sid.c b/drivers/nvmem/sunxi_sid.c
index bc88b40..1567ccc 100644
--- a/drivers/nvmem/sunxi_sid.c
+++ b/drivers/nvmem/sunxi_sid.c
@@ -21,13 +21,14 @@
 #include <linux/nvmem-provider.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 #include <linux/slab.h>
 #include <linux/random.h>
 
 static struct nvmem_config econfig = {
 	.name = "sunxi-sid",
 	.read_only = true,
+	.stride = 4,
+	.word_size = 1,
 	.owner = THIS_MODULE,
 };
 
@@ -51,54 +52,23 @@
 	return sid_key; /* Only return the last byte */
 }
 
-static int sunxi_sid_read(void *context,
-			  const void *reg, size_t reg_size,
-			  void *val, size_t val_size)
+static int sunxi_sid_read(void *context, unsigned int offset,
+			  void *val, size_t bytes)
 {
 	struct sunxi_sid *sid = context;
-	unsigned int offset = *(u32 *)reg;
 	u8 *buf = val;
 
-	while (val_size) {
-		*buf++ = sunxi_sid_read_byte(sid, offset);
-		val_size--;
-		offset++;
-	}
+	while (bytes--)
+		*buf++ = sunxi_sid_read_byte(sid, offset++);
 
 	return 0;
 }
 
-static int sunxi_sid_write(void *context, const void *data, size_t count)
-{
-	/* Unimplemented, dummy to keep regmap core happy */
-	return 0;
-}
-
-static struct regmap_bus sunxi_sid_bus = {
-	.read = sunxi_sid_read,
-	.write = sunxi_sid_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-static bool sunxi_sid_writeable_reg(struct device *dev, unsigned int reg)
-{
-	return false;
-}
-
-static struct regmap_config sunxi_sid_regmap_config = {
-	.reg_bits = 32,
-	.val_bits = 8,
-	.reg_stride = 1,
-	.writeable_reg = sunxi_sid_writeable_reg,
-};
-
 static int sunxi_sid_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
 	struct resource *res;
 	struct nvmem_device *nvmem;
-	struct regmap *regmap;
 	struct sunxi_sid *sid;
 	int ret, i, size;
 	char *randomness;
@@ -113,16 +83,10 @@
 		return PTR_ERR(sid->base);
 
 	size = resource_size(res) - 1;
-	sunxi_sid_regmap_config.max_register = size;
-
-	regmap = devm_regmap_init(dev, &sunxi_sid_bus, sid,
-				  &sunxi_sid_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
-
+	econfig.size = resource_size(res);
 	econfig.dev = dev;
+	econfig.reg_read = sunxi_sid_read;
+	econfig.priv = sid;
 	nvmem = nvmem_register(&econfig);
 	if (IS_ERR(nvmem))
 		return PTR_ERR(nvmem);
diff --git a/drivers/nvmem/vf610-ocotp.c b/drivers/nvmem/vf610-ocotp.c
index 8641319..72e4faa 100644
--- a/drivers/nvmem/vf610-ocotp.c
+++ b/drivers/nvmem/vf610-ocotp.c
@@ -25,7 +25,6 @@
 #include <linux/nvmem-provider.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
-#include <linux/regmap.h>
 #include <linux/slab.h>
 
 /* OCOTP Register Offsets */
@@ -152,23 +151,16 @@
 	return -EINVAL;
 }
 
-static int vf610_ocotp_write(void *context, const void *data, size_t count)
-{
-	return 0;
-}
-
-static int vf610_ocotp_read(void *context,
-			const void *off, size_t reg_size,
-			void *val, size_t val_size)
+static int vf610_ocotp_read(void *context, unsigned int offset,
+			void *val, size_t bytes)
 {
 	struct vf610_ocotp *ocotp = context;
 	void __iomem *base = ocotp->base;
-	unsigned int offset = *(u32 *)off;
 	u32 reg, *buf = val;
 	int fuse_addr;
 	int ret;
 
-	while (val_size > 0) {
+	while (bytes > 0) {
 		fuse_addr = vf610_get_fuse_address(offset);
 		if (fuse_addr > 0) {
 			writel(ocotp->timing, base + OCOTP_TIMING);
@@ -205,29 +197,19 @@
 		}
 
 		buf++;
-		val_size--;
-		offset += reg_size;
+		bytes -= 4;
+		offset += 4;
 	}
 
 	return 0;
 }
 
-static struct regmap_bus vf610_ocotp_bus = {
-	.read = vf610_ocotp_read,
-	.write = vf610_ocotp_write,
-	.reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-	.val_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-static struct regmap_config ocotp_regmap_config = {
-	.reg_bits = 32,
-	.val_bits = 32,
-	.reg_stride = 4,
-};
-
 static struct nvmem_config ocotp_config = {
 	.name = "ocotp",
 	.owner = THIS_MODULE,
+	.stride = 4,
+	.word_size = 4,
+	.reg_read = vf610_ocotp_read,
 };
 
 static const struct of_device_id ocotp_of_match[] = {
@@ -247,7 +229,6 @@
 {
 	struct device *dev = &pdev->dev;
 	struct resource *res;
-	struct regmap *regmap;
 	struct vf610_ocotp *ocotp_dev;
 
 	ocotp_dev = devm_kzalloc(&pdev->dev,
@@ -267,13 +248,8 @@
 		return PTR_ERR(ocotp_dev->clk);
 	}
 
-	ocotp_regmap_config.max_register = resource_size(res);
-	regmap = devm_regmap_init(dev,
-		&vf610_ocotp_bus, ocotp_dev, &ocotp_regmap_config);
-	if (IS_ERR(regmap)) {
-		dev_err(dev, "regmap init failed\n");
-		return PTR_ERR(regmap);
-	}
+	ocotp_config.size = resource_size(res);
+	ocotp_config.priv = ocotp_dev;
 	ocotp_config.dev = dev;
 
 	ocotp_dev->nvmem = nvmem_register(&ocotp_config);
diff --git a/drivers/parport/procfs.c b/drivers/parport/procfs.c
index c776333..74ed3e4 100644
--- a/drivers/parport/procfs.c
+++ b/drivers/parport/procfs.c
@@ -617,5 +617,5 @@
 }
 #endif
 
-module_init(parport_default_proc_register)
+subsys_initcall(parport_default_proc_register)
 module_exit(parport_default_proc_unregister)
diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
index 58f7eeb..7e9b2de 100644
--- a/drivers/pci/host/pci-hyperv.c
+++ b/drivers/pci/host/pci-hyperv.c
@@ -1809,14 +1809,14 @@
 
 	if (hbus->low_mmio_space && hbus->low_mmio_res) {
 		hbus->low_mmio_res->flags |= IORESOURCE_BUSY;
-		release_mem_region(hbus->low_mmio_res->start,
-				   resource_size(hbus->low_mmio_res));
+		vmbus_free_mmio(hbus->low_mmio_res->start,
+				resource_size(hbus->low_mmio_res));
 	}
 
 	if (hbus->high_mmio_space && hbus->high_mmio_res) {
 		hbus->high_mmio_res->flags |= IORESOURCE_BUSY;
-		release_mem_region(hbus->high_mmio_res->start,
-				   resource_size(hbus->high_mmio_res));
+		vmbus_free_mmio(hbus->high_mmio_res->start,
+				resource_size(hbus->high_mmio_res));
 	}
 }
 
@@ -1894,8 +1894,8 @@
 
 release_low_mmio:
 	if (hbus->low_mmio_res) {
-		release_mem_region(hbus->low_mmio_res->start,
-				   resource_size(hbus->low_mmio_res));
+		vmbus_free_mmio(hbus->low_mmio_res->start,
+				resource_size(hbus->low_mmio_res));
 	}
 
 	return ret;
@@ -1938,7 +1938,7 @@
 
 static void hv_free_config_window(struct hv_pcibus_device *hbus)
 {
-	release_mem_region(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH);
+	vmbus_free_mmio(hbus->mem_config->start, PCI_CONFIG_MMIO_LENGTH);
 }
 
 /**
diff --git a/drivers/spmi/spmi.c b/drivers/spmi/spmi.c
index 6b3da1b..2b9b094 100644
--- a/drivers/spmi/spmi.c
+++ b/drivers/spmi/spmi.c
@@ -25,6 +25,7 @@
 #define CREATE_TRACE_POINTS
 #include <trace/events/spmi.h>
 
+static bool is_registered;
 static DEFINE_IDA(ctrl_ida);
 
 static void spmi_dev_release(struct device *dev)
@@ -507,7 +508,7 @@
 	int ret;
 
 	/* Can't register until after driver model init */
-	if (WARN_ON(!spmi_bus_type.p))
+	if (WARN_ON(!is_registered))
 		return -EAGAIN;
 
 	ret = device_add(&ctrl->dev);
@@ -576,7 +577,14 @@
 
 static int __init spmi_init(void)
 {
-	return bus_register(&spmi_bus_type);
+	int ret;
+
+	ret = bus_register(&spmi_bus_type);
+	if (ret)
+		return ret;
+
+	is_registered = true;
+	return 0;
 }
 postcore_initcall(spmi_init);
 
diff --git a/drivers/uio/uio.c b/drivers/uio/uio.c
index bcc1fc0..fba021f 100644
--- a/drivers/uio/uio.c
+++ b/drivers/uio/uio.c
@@ -271,12 +271,16 @@
 			map_found = 1;
 			idev->map_dir = kobject_create_and_add("maps",
 							&idev->dev->kobj);
-			if (!idev->map_dir)
+			if (!idev->map_dir) {
+				ret = -ENOMEM;
 				goto err_map;
+			}
 		}
 		map = kzalloc(sizeof(*map), GFP_KERNEL);
-		if (!map)
+		if (!map) {
+			ret = -ENOMEM;
 			goto err_map_kobj;
+		}
 		kobject_init(&map->kobj, &map_attr_type);
 		map->mem = mem;
 		mem->map = map;
@@ -296,12 +300,16 @@
 			portio_found = 1;
 			idev->portio_dir = kobject_create_and_add("portio",
 							&idev->dev->kobj);
-			if (!idev->portio_dir)
+			if (!idev->portio_dir) {
+				ret = -ENOMEM;
 				goto err_portio;
+			}
 		}
 		portio = kzalloc(sizeof(*portio), GFP_KERNEL);
-		if (!portio)
+		if (!portio) {
+			ret = -ENOMEM;
 			goto err_portio_kobj;
+		}
 		kobject_init(&portio->kobj, &portio_attr_type);
 		portio->port = port;
 		port->portio = portio;
diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
index e2451bd..2fd49b2 100644
--- a/drivers/video/fbdev/hyperv_fb.c
+++ b/drivers/video/fbdev/hyperv_fb.c
@@ -743,7 +743,7 @@
 err3:
 	iounmap(fb_virt);
 err2:
-	release_mem_region(par->mem->start, screen_fb_size);
+	vmbus_free_mmio(par->mem->start, screen_fb_size);
 	par->mem = NULL;
 err1:
 	if (!gen2vm)
@@ -758,7 +758,7 @@
 	struct hvfb_par *par = info->par;
 
 	iounmap(info->screen_base);
-	release_mem_region(par->mem->start, screen_fb_size);
+	vmbus_free_mmio(par->mem->start, screen_fb_size);
 	par->mem = NULL;
 }
 
diff --git a/drivers/vme/bridges/vme_ca91cx42.c b/drivers/vme/bridges/vme_ca91cx42.c
index 5fbeab3..9f2c834 100644
--- a/drivers/vme/bridges/vme_ca91cx42.c
+++ b/drivers/vme/bridges/vme_ca91cx42.c
@@ -204,10 +204,6 @@
 	/* Need pdev */
 	pdev = to_pci_dev(ca91cx42_bridge->parent);
 
-	INIT_LIST_HEAD(&ca91cx42_bridge->vme_error_handlers);
-
-	mutex_init(&ca91cx42_bridge->irq_mtx);
-
 	/* Disable interrupts from PCI to VME */
 	iowrite32(0, bridge->base + VINT_EN);
 
@@ -1626,6 +1622,7 @@
 		retval = -ENOMEM;
 		goto err_struct;
 	}
+	vme_init_bridge(ca91cx42_bridge);
 
 	ca91cx42_device = kzalloc(sizeof(struct ca91cx42_driver), GFP_KERNEL);
 
@@ -1686,7 +1683,6 @@
 	}
 
 	/* Add master windows to list */
-	INIT_LIST_HEAD(&ca91cx42_bridge->master_resources);
 	for (i = 0; i < CA91C142_MAX_MASTER; i++) {
 		master_image = kmalloc(sizeof(struct vme_master_resource),
 			GFP_KERNEL);
@@ -1713,7 +1709,6 @@
 	}
 
 	/* Add slave windows to list */
-	INIT_LIST_HEAD(&ca91cx42_bridge->slave_resources);
 	for (i = 0; i < CA91C142_MAX_SLAVE; i++) {
 		slave_image = kmalloc(sizeof(struct vme_slave_resource),
 			GFP_KERNEL);
@@ -1741,7 +1736,6 @@
 	}
 
 	/* Add dma engines to list */
-	INIT_LIST_HEAD(&ca91cx42_bridge->dma_resources);
 	for (i = 0; i < CA91C142_MAX_DMA; i++) {
 		dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource),
 			GFP_KERNEL);
@@ -1764,7 +1758,6 @@
 	}
 
 	/* Add location monitor to list */
-	INIT_LIST_HEAD(&ca91cx42_bridge->lm_resources);
 	lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL);
 	if (lm == NULL) {
 		dev_err(&pdev->dev, "Failed to allocate memory for "
diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
index 6052483..4bc5d45 100644
--- a/drivers/vme/bridges/vme_tsi148.c
+++ b/drivers/vme/bridges/vme_tsi148.c
@@ -314,10 +314,6 @@
 
 	bridge = tsi148_bridge->driver_priv;
 
-	INIT_LIST_HEAD(&tsi148_bridge->vme_error_handlers);
-
-	mutex_init(&tsi148_bridge->irq_mtx);
-
 	result = request_irq(pdev->irq,
 			     tsi148_irqhandler,
 			     IRQF_SHARED,
@@ -2301,6 +2297,7 @@
 		retval = -ENOMEM;
 		goto err_struct;
 	}
+	vme_init_bridge(tsi148_bridge);
 
 	tsi148_device = kzalloc(sizeof(struct tsi148_driver), GFP_KERNEL);
 	if (tsi148_device == NULL) {
@@ -2387,7 +2384,6 @@
 	}
 
 	/* Add master windows to list */
-	INIT_LIST_HEAD(&tsi148_bridge->master_resources);
 	for (i = 0; i < master_num; i++) {
 		master_image = kmalloc(sizeof(struct vme_master_resource),
 			GFP_KERNEL);
@@ -2417,7 +2413,6 @@
 	}
 
 	/* Add slave windows to list */
-	INIT_LIST_HEAD(&tsi148_bridge->slave_resources);
 	for (i = 0; i < TSI148_MAX_SLAVE; i++) {
 		slave_image = kmalloc(sizeof(struct vme_slave_resource),
 			GFP_KERNEL);
@@ -2442,7 +2437,6 @@
 	}
 
 	/* Add dma engines to list */
-	INIT_LIST_HEAD(&tsi148_bridge->dma_resources);
 	for (i = 0; i < TSI148_MAX_DMA; i++) {
 		dma_ctrlr = kmalloc(sizeof(struct vme_dma_resource),
 			GFP_KERNEL);
@@ -2467,7 +2461,6 @@
 	}
 
 	/* Add location monitor to list */
-	INIT_LIST_HEAD(&tsi148_bridge->lm_resources);
 	lm = kmalloc(sizeof(struct vme_lm_resource), GFP_KERNEL);
 	if (lm == NULL) {
 		dev_err(&pdev->dev, "Failed to allocate memory for "
diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c
index 72924b0..37ac0a5 100644
--- a/drivers/vme/vme.c
+++ b/drivers/vme/vme.c
@@ -782,7 +782,7 @@
 
 	dma_list = kmalloc(sizeof(struct vme_dma_list), GFP_KERNEL);
 	if (dma_list == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for new dma list\n");
+		printk(KERN_ERR "Unable to allocate memory for new DMA list\n");
 		return NULL;
 	}
 	INIT_LIST_HEAD(&dma_list->entries);
@@ -846,7 +846,7 @@
 
 	pci_attr = kmalloc(sizeof(struct vme_dma_pci), GFP_KERNEL);
 	if (pci_attr == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for pci attributes\n");
+		printk(KERN_ERR "Unable to allocate memory for PCI attributes\n");
 		goto err_pci;
 	}
 
@@ -884,7 +884,7 @@
 
 	vme_attr = kmalloc(sizeof(struct vme_dma_vme), GFP_KERNEL);
 	if (vme_attr == NULL) {
-		printk(KERN_ERR "Unable to allocate memory for vme attributes\n");
+		printk(KERN_ERR "Unable to allocate memory for VME attributes\n");
 		goto err_vme;
 	}
 
@@ -975,8 +975,8 @@
 	}
 
 	/*
-	 * Empty out all of the entries from the dma list. We need to go to the
-	 * low level driver as dma entries are driver specific.
+	 * Empty out all of the entries from the DMA list. We need to go to the
+	 * low level driver as DMA entries are driver specific.
 	 */
 	retval = bridge->dma_list_empty(list);
 	if (retval) {
@@ -1091,7 +1091,7 @@
 	if (call != NULL)
 		call(level, statid, priv_data);
 	else
-		printk(KERN_WARNING "Spurilous VME interrupt, level:%x, vector:%x\n",
+		printk(KERN_WARNING "Spurious VME interrupt, level:%x, vector:%x\n",
 		       level, statid);
 }
 EXPORT_SYMBOL(vme_irq_handler);
@@ -1429,6 +1429,20 @@
 	kfree(dev_to_vme_dev(dev));
 }
 
+/* Common bridge initialization */
+struct vme_bridge *vme_init_bridge(struct vme_bridge *bridge)
+{
+	INIT_LIST_HEAD(&bridge->vme_error_handlers);
+	INIT_LIST_HEAD(&bridge->master_resources);
+	INIT_LIST_HEAD(&bridge->slave_resources);
+	INIT_LIST_HEAD(&bridge->dma_resources);
+	INIT_LIST_HEAD(&bridge->lm_resources);
+	mutex_init(&bridge->irq_mtx);
+
+	return bridge;
+}
+EXPORT_SYMBOL(vme_init_bridge);
+
 int vme_register_bridge(struct vme_bridge *bridge)
 {
 	int i;
diff --git a/drivers/vme/vme_bridge.h b/drivers/vme/vme_bridge.h
index b59cbee..cb8246f 100644
--- a/drivers/vme/vme_bridge.h
+++ b/drivers/vme/vme_bridge.h
@@ -177,6 +177,7 @@
 			   unsigned long long address, int am);
 void vme_irq_handler(struct vme_bridge *, int, int);
 
+struct vme_bridge *vme_init_bridge(struct vme_bridge *);
 int vme_register_bridge(struct vme_bridge *);
 void vme_unregister_bridge(struct vme_bridge *);
 struct vme_error_handler *vme_register_error_handler(
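
The ca91cx42 and tsi148 hunks above drop their open-coded INIT_LIST_HEAD()/mutex_init() calls in favour of the new vme_init_bridge() helper. A minimal sketch of how a bridge driver would use it is shown below; foo_pci_probe() and its surrounding driver are hypothetical and only illustrate the call order, they are not taken from either bridge.

#include <linux/pci.h>
#include <linux/slab.h>

#include "vme_bridge.h"

static int foo_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct vme_bridge *bridge;

	bridge = kzalloc(sizeof(*bridge), GFP_KERNEL);
	if (!bridge)
		return -ENOMEM;

	/* Initializes every resource list plus irq_mtx in one place. */
	vme_init_bridge(bridge);

	/*
	 * ...populate master/slave/DMA/location-monitor resources and
	 * call vme_register_bridge() as before; error paths omitted...
	 */
	return 0;
}
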
diff --git a/drivers/w1/masters/ds2482.c b/drivers/w1/masters/ds2482.c
index b05e8fe..2e30db1 100644
--- a/drivers/w1/masters/ds2482.c
+++ b/drivers/w1/masters/ds2482.c
@@ -24,6 +24,19 @@
 #include "../w1_int.h"
 
 /**
+ * Allow the active pullup to be disabled; the default is enabled.
+ *
+ * Note from the DS2482 datasheet:
+ * The APU bit controls whether an active pullup (controlled slew-rate
+ * transistor) or a passive pullup (Rwpu resistor) will be used to drive
+ * a 1-Wire line from low to high. When APU = 0, active pullup is disabled
+ * (resistor mode). Active Pullup should always be selected unless there is
+ * only a single slave on the 1-Wire line.
+ */
+static int ds2482_active_pullup = 1;
+module_param_named(active_pullup, ds2482_active_pullup, int, 0644);
+
+/**
  * The DS2482 registers - there are 3 registers that are addressed by a read
  * pointer. The read pointer is set by the last command executed.
  *
@@ -138,6 +151,9 @@
  */
 static inline u8 ds2482_calculate_config(u8 conf)
 {
+	if (ds2482_active_pullup)
+		conf |= DS2482_REG_CFG_APU;
+
 	return conf | ((~conf & 0x0f) << 4);
 }
 
@@ -546,6 +562,8 @@
 
 module_i2c_driver(ds2482_driver);
 
+MODULE_PARM_DESC(active_pullup, "Active pullup (apply to all buses): " \
+				"0-disable, 1-enable (default)");
 MODULE_AUTHOR("Ben Gardner <bgardner@wabtec.com>");
 MODULE_DESCRIPTION("DS2482 driver");
 MODULE_LICENSE("GPL");
diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
index 2f029e8..581a300 100644
--- a/drivers/w1/slaves/w1_therm.c
+++ b/drivers/w1/slaves/w1_therm.c
@@ -92,10 +92,13 @@
 static ssize_t w1_slave_show(struct device *device,
 	struct device_attribute *attr, char *buf);
 
+static ssize_t w1_slave_store(struct device *device,
+	struct device_attribute *attr, const char *buf, size_t size);
+
 static ssize_t w1_seq_show(struct device *device,
 	struct device_attribute *attr, char *buf);
 
-static DEVICE_ATTR_RO(w1_slave);
+static DEVICE_ATTR_RW(w1_slave);
 static DEVICE_ATTR_RO(w1_seq);
 
 static struct attribute *w1_therm_attrs[] = {
@@ -154,8 +157,17 @@
 	u16			reserved;
 	struct w1_family	*f;
 	int			(*convert)(u8 rom[9]);
+	int			(*precision)(struct device *device, int val);
+	int			(*eeprom)(struct device *device);
 };
 
+/* write configuration to eeprom */
+static inline int w1_therm_eeprom(struct device *device);
+
+/* Set precision for conversion */
+static inline int w1_DS18B20_precision(struct device *device, int val);
+static inline int w1_DS18S20_precision(struct device *device, int val);
+
 /* The return value is millidegrees Centigrade. */
 static inline int w1_DS18B20_convert_temp(u8 rom[9]);
 static inline int w1_DS18S20_convert_temp(u8 rom[9]);
@@ -163,26 +175,194 @@
 static struct w1_therm_family_converter w1_therm_families[] = {
 	{
 		.f		= &w1_therm_family_DS18S20,
-		.convert 	= w1_DS18S20_convert_temp
+		.convert	= w1_DS18S20_convert_temp,
+		.precision	= w1_DS18S20_precision,
+		.eeprom		= w1_therm_eeprom
 	},
 	{
 		.f		= &w1_therm_family_DS1822,
-		.convert 	= w1_DS18B20_convert_temp
+		.convert	= w1_DS18B20_convert_temp,
+		.precision	= w1_DS18S20_precision,
+		.eeprom		= w1_therm_eeprom
 	},
 	{
 		.f		= &w1_therm_family_DS18B20,
-		.convert 	= w1_DS18B20_convert_temp
+		.convert	= w1_DS18B20_convert_temp,
+		.precision	= w1_DS18B20_precision,
+		.eeprom		= w1_therm_eeprom
 	},
 	{
 		.f		= &w1_therm_family_DS28EA00,
-		.convert	= w1_DS18B20_convert_temp
+		.convert	= w1_DS18B20_convert_temp,
+		.precision	= w1_DS18S20_precision,
+		.eeprom		= w1_therm_eeprom
 	},
 	{
 		.f		= &w1_therm_family_DS1825,
-		.convert	= w1_DS18B20_convert_temp
+		.convert	= w1_DS18B20_convert_temp,
+		.precision	= w1_DS18S20_precision,
+		.eeprom		= w1_therm_eeprom
 	}
 };
 
+static inline int w1_therm_eeprom(struct device *device)
+{
+	struct w1_slave *sl = dev_to_w1_slave(device);
+	struct w1_master *dev = sl->master;
+	u8 rom[9], external_power;
+	int ret, max_trying = 10;
+	u8 *family_data = sl->family_data;
+
+	ret = mutex_lock_interruptible(&dev->bus_mutex);
+	if (ret != 0)
+		goto post_unlock;
+
+	if (!sl->family_data) {
+		ret = -ENODEV;
+		goto pre_unlock;
+	}
+
+	/* prevent the slave from going away in sleep */
+	atomic_inc(THERM_REFCNT(family_data));
+	memset(rom, 0, sizeof(rom));
+
+	while (max_trying--) {
+		if (!w1_reset_select_slave(sl)) {
+			unsigned int tm = 10;
+			unsigned long sleep_rem;
+
+			/* check if in parasite mode */
+			w1_write_8(dev, W1_READ_PSUPPLY);
+			external_power = w1_read_8(dev);
+
+			if (w1_reset_select_slave(sl))
+				continue;
+
+			/* 10ms strong pullup/delay after the copy command */
+			if (w1_strong_pullup == 2 ||
+			    (!external_power && w1_strong_pullup))
+				w1_next_pullup(dev, tm);
+
+			w1_write_8(dev, W1_COPY_SCRATCHPAD);
+
+			if (external_power) {
+				mutex_unlock(&dev->bus_mutex);
+
+				sleep_rem = msleep_interruptible(tm);
+				if (sleep_rem != 0) {
+					ret = -EINTR;
+					goto post_unlock;
+				}
+
+				ret = mutex_lock_interruptible(&dev->bus_mutex);
+				if (ret != 0)
+					goto post_unlock;
+			} else if (!w1_strong_pullup) {
+				sleep_rem = msleep_interruptible(tm);
+				if (sleep_rem != 0) {
+					ret = -EINTR;
+					goto pre_unlock;
+				}
+			}
+
+			break;
+		}
+	}
+
+pre_unlock:
+	mutex_unlock(&dev->bus_mutex);
+
+post_unlock:
+	atomic_dec(THERM_REFCNT(family_data));
+	return ret;
+}
+
+/* DS18S20 does not feature a configuration register */
+static inline int w1_DS18S20_precision(struct device *device, int val)
+{
+	return 0;
+}
+
+static inline int w1_DS18B20_precision(struct device *device, int val)
+{
+	struct w1_slave *sl = dev_to_w1_slave(device);
+	struct w1_master *dev = sl->master;
+	u8 rom[9], crc;
+	int ret, max_trying = 10;
+	u8 *family_data = sl->family_data;
+	uint8_t precision_bits;
+	uint8_t mask = 0x60;
+
+	if (val > 12 || val < 9) {
+		pr_warn("Unsupported precision\n");
+		return -1;
+	}
+
+	ret = mutex_lock_interruptible(&dev->bus_mutex);
+	if (ret != 0)
+		goto post_unlock;
+
+	if (!sl->family_data) {
+		ret = -ENODEV;
+		goto pre_unlock;
+	}
+
+	/* prevent the slave from going away in sleep */
+	atomic_inc(THERM_REFCNT(family_data));
+	memset(rom, 0, sizeof(rom));
+
+	/* translate precision to bitmask (see datasheet page 9) */
+	switch (val) {
+	case 9:
+		precision_bits = 0x00;
+		break;
+	case 10:
+		precision_bits = 0x20;
+		break;
+	case 11:
+		precision_bits = 0x40;
+		break;
+	case 12:
+	default:
+		precision_bits = 0x60;
+		break;
+	}
+
+	while (max_trying--) {
+		crc = 0;
+
+		if (!w1_reset_select_slave(sl)) {
+			int count = 0;
+
+			/* read values to only alter precision bits */
+			w1_write_8(dev, W1_READ_SCRATCHPAD);
+			if ((count = w1_read_block(dev, rom, 9)) != 9)
+				dev_warn(device, "w1_read_block() returned %u instead of 9.\n", count);
+
+			crc = w1_calc_crc8(rom, 8);
+			if (rom[8] == crc) {
+				rom[4] = (rom[4] & ~mask) | (precision_bits & mask);
+
+				if (!w1_reset_select_slave(sl)) {
+					w1_write_8(dev, W1_WRITE_SCRATCHPAD);
+					w1_write_8(dev, rom[2]);
+					w1_write_8(dev, rom[3]);
+					w1_write_8(dev, rom[4]);
+
+					break;
+				}
+			}
+		}
+	}
+
+pre_unlock:
+	mutex_unlock(&dev->bus_mutex);
+
+post_unlock:
+	atomic_dec(THERM_REFCNT(family_data));
+	return ret;
+}
+
 static inline int w1_DS18B20_convert_temp(u8 rom[9])
 {
 	s16 t = le16_to_cpup((__le16 *)rom);
@@ -220,6 +400,30 @@
 	return 0;
 }
 
+static ssize_t w1_slave_store(struct device *device,
+			      struct device_attribute *attr, const char *buf,
+			      size_t size)
+{
+	int val, ret;
+	struct w1_slave *sl = dev_to_w1_slave(device);
+	int i;
+
+	ret = kstrtoint(buf, 0, &val);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < ARRAY_SIZE(w1_therm_families); ++i) {
+		if (w1_therm_families[i].f->fid == sl->family->fid) {
+			/* a value of 0 writes the configuration to EEPROM */
+			if (val == 0)
+				ret = w1_therm_families[i].eeprom(device);
+			else
+				ret = w1_therm_families[i].precision(device, val);
+			break;
+		}
+	}
+	return ret ? : size;
+}
 
 static ssize_t w1_slave_show(struct device *device,
 	struct device_attribute *attr, char *buf)
@@ -311,7 +515,7 @@
 	for (i = 0; i < 9; ++i)
 		c -= snprintf(buf + PAGE_SIZE - c, c, "%02x ", rom[i]);
 	c -= snprintf(buf + PAGE_SIZE - c, c, ": crc=%02x %s\n",
-			   crc, (verdict) ? "YES" : "NO");
+		      crc, (verdict) ? "YES" : "NO");
 	if (verdict)
 		memcpy(family_data, rom, sizeof(rom));
 	else
diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
index 89a7847..bb34362 100644
--- a/drivers/w1/w1.c
+++ b/drivers/w1/w1.c
@@ -335,7 +335,7 @@
 	int tmp;
 	struct w1_master *md = dev_to_w1_master(dev);
 
-	if (kstrtoint(buf, 0, &tmp) == -EINVAL || tmp < 1)
+	if (kstrtoint(buf, 0, &tmp) || tmp < 1)
 		return -EINVAL;
 
 	mutex_lock(&md->mutex);
diff --git a/drivers/w1/w1.h b/drivers/w1/w1.h
index 56a49ba..129895f 100644
--- a/drivers/w1/w1.h
+++ b/drivers/w1/w1.h
@@ -58,6 +58,8 @@
 #define W1_ALARM_SEARCH		0xEC
 #define W1_CONVERT_TEMP		0x44
 #define W1_SKIP_ROM		0xCC
+#define W1_COPY_SCRATCHPAD	0x48
+#define W1_WRITE_SCRATCHPAD	0x4E
 #define W1_READ_SCRATCHPAD	0xBE
 #define W1_READ_ROM		0x33
 #define W1_READ_PSUPPLY		0xB4
diff --git a/include/linux/coresight-stm.h b/include/linux/coresight-stm.h
new file mode 100644
index 0000000..a978bb8
--- /dev/null
+++ b/include/linux/coresight-stm.h
@@ -0,0 +1,6 @@
+#ifndef __LINUX_CORESIGHT_STM_H_
+#define __LINUX_CORESIGHT_STM_H_
+
+#include <uapi/linux/coresight-stm.h>
+
+#endif
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index aa0fadc..b10954a 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -126,6 +126,8 @@
 
 	u32 ring_datasize;		/* < ring_size */
 	u32 ring_data_startoffset;
+	u32 priv_write_index;
+	u32 priv_read_index;
 };
 
 /*
@@ -151,6 +153,33 @@
 	*read = dsize - *write;
 }
 
+static inline u32 hv_get_bytes_to_read(struct hv_ring_buffer_info *rbi)
+{
+	u32 read_loc, write_loc, dsize, read;
+
+	dsize = rbi->ring_datasize;
+	read_loc = rbi->ring_buffer->read_index;
+	write_loc = READ_ONCE(rbi->ring_buffer->write_index);
+
+	read = write_loc >= read_loc ? (write_loc - read_loc) :
+		(dsize - read_loc) + write_loc;
+
+	return read;
+}
+
+static inline u32 hv_get_bytes_to_write(struct hv_ring_buffer_info *rbi)
+{
+	u32 read_loc, write_loc, dsize, write;
+
+	dsize = rbi->ring_datasize;
+	read_loc = READ_ONCE(rbi->ring_buffer->read_index);
+	write_loc = rbi->ring_buffer->write_index;
+
+	write = write_loc >= read_loc ? dsize - (write_loc - read_loc) :
+		read_loc - write_loc;
+	return write;
+}
+
 /*
  * VMBUS version is 32 bit entity broken up into
  * two 16 bit quantities: major_number. minor_number.
@@ -1091,7 +1120,7 @@
 			resource_size_t min, resource_size_t max,
 			resource_size_t size, resource_size_t align,
 			bool fb_overlap_ok);
-
+void vmbus_free_mmio(resource_size_t start, resource_size_t size);
 int vmbus_cpu_number_to_vp_number(int cpu_number);
 u64 hv_do_hypercall(u64 control, void *input, void *output);
 
@@ -1338,4 +1367,143 @@
 
 int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
 				  const uuid_le *shv_host_servie_id);
+void vmbus_set_event(struct vmbus_channel *channel);
+
+/* Get the start of the ring buffer. */
+static inline void *
+hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info)
+{
+	return (void *)ring_info->ring_buffer->buffer;
+}
+
+/*
+ * To optimize the flow management on the send-side,
+ * when the sender is blocked because of lack of
+ * sufficient space in the ring buffer, the consumer of
+ * the ring buffer can potentially signal the producer.
+ * This is controlled by the following parameters:
+ *
+ * 1. pending_send_sz: This is the size in bytes that the
+ *    producer is trying to send.
+ * 2. The feature bit feat_pending_send_sz is set to indicate if
+ *    the consumer of the ring will signal when the ring
+ *    state transitions from being full to a state where
+ *    there is room for the producer to send the pending packet.
+ */
+
+static inline bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
+{
+	u32 cur_write_sz;
+	u32 pending_sz;
+
+	/*
+	 * Issue a full memory barrier before making the signaling decision.
+	 * Here is the reason for having this barrier:
+	 * If the read of pending_sz (in this function)
+	 * were to be reordered and happen before we commit the new read
+	 * index (in the calling function), we could
+	 * have a problem: if the host were to set pending_sz after we
+	 * have sampled pending_sz and go to sleep before we commit the
+	 * read index, we could miss sending the interrupt. Issue a full
+	 * memory barrier to address this.
+	 */
+	virt_mb();
+
+	pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
+	/* If the other end is not blocked on write don't bother. */
+	if (pending_sz == 0)
+		return false;
+
+	cur_write_sz = hv_get_bytes_to_write(rbi);
+
+	if (cur_write_sz >= pending_sz)
+		return true;
+
+	return false;
+}
+
+/*
+ * An API to support in-place processing of incoming VMBUS packets.
+ */
+#define VMBUS_PKT_TRAILER	8
+
+static inline struct vmpacket_descriptor *
+get_next_pkt_raw(struct vmbus_channel *channel)
+{
+	struct hv_ring_buffer_info *ring_info = &channel->inbound;
+	u32 read_loc = ring_info->priv_read_index;
+	void *ring_buffer = hv_get_ring_buffer(ring_info);
+	struct vmpacket_descriptor *cur_desc;
+	u32 packetlen;
+	u32 dsize = ring_info->ring_datasize;
+	u32 delta = read_loc - ring_info->ring_buffer->read_index;
+	u32 bytes_avail_toread = (hv_get_bytes_to_read(ring_info) - delta);
+
+	if (bytes_avail_toread < sizeof(struct vmpacket_descriptor))
+		return NULL;
+
+	if ((read_loc + sizeof(*cur_desc)) > dsize)
+		return NULL;
+
+	cur_desc = ring_buffer + read_loc;
+	packetlen = cur_desc->len8 << 3;
+
+	/*
+	 * If the packet under consideration is wrapping around,
+	 * return failure.
+	 */
+	if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > (dsize - 1))
+		return NULL;
+
+	return cur_desc;
+}
+
+/*
+ * A helper function to step through packets "in-place"
+ * This API is to be called after each successful call
+ * get_next_pkt_raw().
+ */
+static inline void put_pkt_raw(struct vmbus_channel *channel,
+				struct vmpacket_descriptor *desc)
+{
+	struct hv_ring_buffer_info *ring_info = &channel->inbound;
+	u32 read_loc = ring_info->priv_read_index;
+	u32 packetlen = desc->len8 << 3;
+	u32 dsize = ring_info->ring_datasize;
+
+	if ((read_loc + packetlen + VMBUS_PKT_TRAILER) > dsize)
+		BUG();
+	/*
+	 * Include the packet trailer.
+	 */
+	ring_info->priv_read_index += packetlen + VMBUS_PKT_TRAILER;
+}
+
+/*
+ * This call commits the read index and potentially signals the host.
+ * Here is the pattern for using the "in-place" consumption APIs:
+ *
+ * while (get_next_pkt_raw()) {
+ *	process the packet "in-place";
+ *	put_pkt_raw();
+ * }
+ * if (packets processed in place)
+ *	commit_rd_index();
+ */
+static inline void commit_rd_index(struct vmbus_channel *channel)
+{
+	struct hv_ring_buffer_info *ring_info = &channel->inbound;
+	/*
+	 * Make sure all reads are done before we update the read index since
+	 * the writer may start writing to the read area once the read index
+	 * is updated.
+	 */
+	virt_rmb();
+	ring_info->ring_buffer->read_index = ring_info->priv_read_index;
+
+	if (hv_need_to_signal_on_read(ring_info))
+		vmbus_set_event(channel);
+}
+
+
 #endif /* _HYPERV_H */
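
The comment above documents the usage pattern for the in-place consumption APIs. The sketch below shows a consumer loop built on get_next_pkt_raw()/put_pkt_raw()/commit_rd_index(); my_process_packet() and my_drain_channel() are illustrative placeholders rather than part of the VMBus API.

#include <linux/hyperv.h>

/* Placeholder for driver-specific handling of one descriptor. */
static void my_process_packet(struct vmpacket_descriptor *desc);

static void my_drain_channel(struct vmbus_channel *channel)
{
	struct vmpacket_descriptor *desc;
	bool processed = false;

	while ((desc = get_next_pkt_raw(channel)) != NULL) {
		/* Payload is consumed directly from the ring buffer. */
		my_process_packet(desc);
		/* Advance priv_read_index past the packet and trailer. */
		put_pkt_raw(channel, desc);
		processed = true;
	}

	/* Commit the read index and signal the host if it is waiting. */
	if (processed)
		commit_rd_index(channel);
}
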
diff --git a/include/linux/mcb.h b/include/linux/mcb.h
index ed06e15..ead13d2 100644
--- a/include/linux/mcb.h
+++ b/include/linux/mcb.h
@@ -15,22 +15,30 @@
 #include <linux/device.h>
 #include <linux/irqreturn.h>
 
+#define CHAMELEON_FILENAME_LEN 12
+
 struct mcb_driver;
 struct mcb_device;
 
 /**
  * struct mcb_bus - MEN Chameleon Bus
  *
- * @dev: pointer to carrier device
- * @children: the child busses
+ * @dev: bus device
+ * @carrier: pointer to carrier device
  * @bus_nr: mcb bus number
  * @get_irq: callback to get IRQ number
+ * @revision: the FPGA's revision number
+ * @model: the FPGA's model number
+ * @minor: the FPGA's minor number
+ * @name: the FPGA's name
  */
 struct mcb_bus {
-	struct list_head children;
 	struct device dev;
 	struct device *carrier;
 	int bus_nr;
+	u8 revision;
+	char model;
+	u8 minor;
+	char name[CHAMELEON_FILENAME_LEN + 1];
 	int (*get_irq)(struct mcb_device *dev);
 };
 #define to_mcb_bus(b) container_of((b), struct mcb_bus, dev)
diff --git a/include/linux/nvmem-provider.h b/include/linux/nvmem-provider.h
index a4fcc90..cd93416 100644
--- a/include/linux/nvmem-provider.h
+++ b/include/linux/nvmem-provider.h
@@ -14,6 +14,10 @@
 
 struct nvmem_device;
 struct nvmem_cell_info;
+typedef int (*nvmem_reg_read_t)(void *priv, unsigned int offset,
+				void *val, size_t bytes);
+typedef int (*nvmem_reg_write_t)(void *priv, unsigned int offset,
+				 void *val, size_t bytes);
 
 struct nvmem_config {
 	struct device		*dev;
@@ -24,6 +28,12 @@
 	int			ncells;
 	bool			read_only;
 	bool			root_only;
+	nvmem_reg_read_t	reg_read;
+	nvmem_reg_write_t	reg_write;
+	int	size;
+	int	word_size;
+	int	stride;
+	void	*priv;
 	/* To be only used by old driver/misc/eeprom drivers */
 	bool			compat;
 	struct device		*base_dev;
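
The reg_read/reg_write callbacks, together with the new size, word_size, stride and priv fields, are the interface the nvmem drivers above are converted to in place of regmap. A minimal read-only MMIO provider might look like the sketch below; all foo_* names are hypothetical and the error handling is abbreviated.

#include <linux/io.h>
#include <linux/module.h>
#include <linux/nvmem-provider.h>
#include <linux/platform_device.h>

static int foo_reg_read(void *priv, unsigned int offset, void *val, size_t bytes)
{
	void __iomem *base = priv;
	u32 *buf = val;

	/* Word-wise copy, mirroring the qfprom conversion above. */
	while (bytes >= 4) {
		*buf++ = readl(base + offset);
		offset += 4;
		bytes -= 4;
	}
	return 0;
}

static struct nvmem_config foo_nvmem_config = {
	.name = "foo-nvmem",
	.owner = THIS_MODULE,
	.read_only = true,
	.stride = 4,
	.word_size = 4,
	.reg_read = foo_reg_read,
};

static int foo_probe(struct platform_device *pdev)
{
	struct nvmem_device *nvmem;
	struct resource *res;
	void __iomem *base;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	foo_nvmem_config.dev = &pdev->dev;
	foo_nvmem_config.priv = base;
	foo_nvmem_config.size = resource_size(res);

	/* The handle would normally be saved for nvmem_unregister(). */
	nvmem = nvmem_register(&foo_nvmem_config);
	return PTR_ERR_OR_ZERO(nvmem);
}
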
diff --git a/include/linux/stm.h b/include/linux/stm.h
index 1a79ed8..8369d8a 100644
--- a/include/linux/stm.h
+++ b/include/linux/stm.h
@@ -50,6 +50,8 @@
  * @sw_end:		last STP master available to software
  * @sw_nchannels:	number of STP channels per master
  * @sw_mmiosz:		size of one channel's IO space, for mmap, optional
+ * @hw_override:	masters in the STP stream will not match the ones
+ *			assigned by software; they are determined by the STM hardware
  * @packet:		callback that sends an STP packet
  * @mmio_addr:		mmap callback, optional
  * @link:		called when a new stm_source gets linked to us, optional
@@ -85,6 +87,7 @@
 	unsigned int		sw_end;
 	unsigned int		sw_nchannels;
 	unsigned int		sw_mmiosz;
+	unsigned int		hw_override;
 	ssize_t			(*packet)(struct stm_data *, unsigned int,
 					  unsigned int, unsigned int,
 					  unsigned int, unsigned int,
diff --git a/include/uapi/linux/coresight-stm.h b/include/uapi/linux/coresight-stm.h
new file mode 100644
index 0000000..7e4272c
--- /dev/null
+++ b/include/uapi/linux/coresight-stm.h
@@ -0,0 +1,21 @@
+#ifndef __UAPI_CORESIGHT_STM_H_
+#define __UAPI_CORESIGHT_STM_H_
+
+#define STM_FLAG_TIMESTAMPED   BIT(3)
+#define STM_FLAG_GUARANTEED    BIT(7)
+
+/*
+ * The CoreSight STM supports guaranteed and invariant timing
+ * transactions.  Guaranteed transactions are guaranteed to be
+ * traced; this might involve stalling the bus or system to
+ * ensure the transaction is accepted by the STM.  Invariant
+ * timing transactions are not guaranteed to be traced, but they
+ * will take an invariant amount of time regardless of the
+ * state of the STM.
+ */
+enum {
+	STM_OPTION_GUARANTEED = 0,
+	STM_OPTION_INVARIANT,
+};
+
+#endif
diff --git a/scripts/checkkconfigsymbols.py b/scripts/checkkconfigsymbols.py
index d8f6c09..df643f6 100755
--- a/scripts/checkkconfigsymbols.py
+++ b/scripts/checkkconfigsymbols.py
@@ -89,7 +89,7 @@
 
     if opts.diff and not re.match(r"^[\w\-\.]+\.\.[\w\-\.]+$", opts.diff):
         sys.exit("Please specify valid input in the following format: "
-                 "\'commmit1..commit2\'")
+                 "\'commit1..commit2\'")
 
     if opts.commit or opts.diff:
         if not opts.force and tree_is_dirty():
diff --git a/tools/hv/lsvmbus b/tools/hv/lsvmbus
index 162a378..e8fecd6 100644
--- a/tools/hv/lsvmbus
+++ b/tools/hv/lsvmbus
@@ -35,6 +35,7 @@
 	'{ba6163d9-04a1-4d29-b605-72e2ffb1dc7f}' : 'Synthetic SCSI Controller',
 	'{2f9bcc4a-0069-4af3-b76b-6fd0be528cda}' : 'Synthetic fiber channel adapter',
 	'{8c2eaf3d-32a7-4b09-ab99-bd1f1c86b501}' : 'Synthetic RDMA adapter',
+	'{44c4f61d-4444-4400-9d52-802e27ede19f}' : 'PCI Express pass-through',
 	'{276aacf4-ac15-426c-98dd-7521ad3f01fe}' : '[Reserved system device]',
 	'{f8e65716-3cb3-4a06-9a60-1889c5cccab5}' : '[Reserved system device]',
 	'{3375baf4-9e15-4b30-b765-67acb10d607b}' : '[Reserved system device]',