Merge branch 'percpu-cpumask-x86-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

* 'percpu-cpumask-x86-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (682 commits)
  percpu: fix spurious alignment WARN in legacy SMP percpu allocator
  percpu: generalize embedding first chunk setup helper
  percpu: more flexibility for @dyn_size of pcpu_setup_first_chunk()
  percpu: make x86 addr <-> pcpu ptr conversion macros generic
  linker script: define __per_cpu_load on all SMP capable archs
  x86: UV: remove uv_flush_tlb_others() WARN_ON
  percpu: finer grained locking to break deadlock and allow atomic free
  percpu: move fully free chunk reclamation into a work
  percpu: move chunk area map extension out of area allocation
  percpu: replace pcpu_realloc() with pcpu_mem_alloc() and pcpu_mem_free()
  x86, percpu: setup reserved percpu area for x86_64
  percpu, module: implement reserved allocation and use it for module percpu variables
  percpu: add an indirection ptr for chunk page map access
  x86: make embedding percpu allocator return excessive free space
  percpu: use negative for auto for pcpu_setup_first_chunk() arguments
  percpu: improve first chunk initial area map handling
  percpu: cosmetic renames in pcpu_setup_first_chunk()
  percpu: clean up percpu constants
  x86: un-__init fill_pud/pmd/pte
  x86: remove vestigial fix_ioremap prototypes
  ...

Manually merge conflicts in arch/ia64/kernel/irq_ia64.c
diff --git a/Documentation/devices.txt b/Documentation/devices.txt
index 2be0824..62254d4 100644
--- a/Documentation/devices.txt
+++ b/Documentation/devices.txt
@@ -3145,6 +3145,12 @@
 		  1 = /dev/blockrom1	Second ROM card's translation layer interface
 		  ...
 
+260 char	OSD (Object-based-device) SCSI Device
+		  0 = /dev/osd0		First OSD Device
+		  1 = /dev/osd1		Second OSD Device
+		  ...
+		  255 = /dev/osd255	256th OSD Device
+
  ****	ADDITIONAL /dev DIRECTORY ENTRIES
 
 This section details additional entries that should or may exist in
diff --git a/Documentation/scsi/osd.txt b/Documentation/scsi/osd.txt
new file mode 100644
index 0000000..da162f7
--- /dev/null
+++ b/Documentation/scsi/osd.txt
@@ -0,0 +1,198 @@
+The OSD Standard
+================
+OSD (Object-Based Storage Device) is a T10 SCSI command set that is designed
+to provide efficient operation of input/output logical units that manage the
+allocation, placement, and accessing of variable-size data-storage containers,
+called objects. Objects are intended to contain operating system and application
+constructs. Each object has associated attributes attached to it, which are an
+integral part of the object and provide metadata about it. The standard
+defines some common obligatory attributes, but user attributes can be added as
+needed.
+
+See http://www.t10.org/ftp/t10/drafts/osd2/ for the latest OSD-2 draft, or
+search the web for "OSD SCSI".
+
+OSD in the Linux Kernel
+=======================
+osd-initiator:
+  The main component of OSD support in the kernel is the osd-initiator library.
+Its primary intended user is the pNFS-over-objects layout driver, which uses
+objects as its back-end data storage. The other OSD pieces listed below are
+also clients of this library.
+
+osd-uld:
+  This is a SCSI ULD that registers for OSD-type devices and provides a testing
+platform, both for the in-kernel initiator and for connected targets. It
+currently has no useful user-mode API, though one could be added if the need
+arises.
+
+exofs:
+  An OSD-based Linux file system. It uses the osd-initiator and osd-uld to
+export a usable file system to users.
+See Documentation/filesystems/exofs.txt for more details.
+
+osd target:
+  There are currently no plans for an in-kernel OSD target implementation. A
+user-mode target based on the SCSI tgt framework is available from the Ohio
+Supercomputer Center (OSC) at:
+http://www.open-osd.org/bin/view/Main/OscOsdProject
+Several other target implementations exist; see http://open-osd.org for more
+links.
+
+Files and Folders
+=================
+This is the complete list of files included in this work:
+include/scsi/
+	osd_initiator.h   Main API for the initiator library
+	osd_types.h	  Common OSD types
+	osd_sec.h	  Security Manager API
+	osd_protocol.h	  Wire definitions of the OSD standard protocol
+	osd_attributes.h  Wire definitions of OSD attributes
+
+drivers/scsi/osd/
+	osd_initiator.c   OSD-Initiator library implementation
+	osd_uld.c	  The OSD scsi ULD
+	osd_ktest.{h,c}	  In-kernel test suite (called by osd_uld)
+	osd_debug.h	  Some printk macros
+	Makefile	  For both in-tree and out-of-tree compilation
+	Kconfig		  Enables inclusion of the different pieces
+	osd_test.c	  User-mode application to call the kernel tests
+
+The OSD-Initiator Library
+=========================
+osd_initiator is a low-level implementation of an OSD initiator encoder.
+Even so, it should be intuitive and easy to use. Perhaps over time a
+higher-level layer will form that automates some of the more common recipes.
+
+init/fini:
+- osd_dev_init() associates a scsi_device with an osd_dev structure
+  and initializes some global pools. This should be done once per scsi_device
+  (OSD LUN). The osd_dev structure is needed for calling osd_start_request().
+
+- osd_dev_fini() cleans up before an osd_dev/scsi_device is destroyed.
+
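+A minimal sketch of the init/fini pairing described above. The exact
+prototypes are assumptions here, and the helper names (my_attach, my_detach,
+my_osd) are made up for illustration; consult include/scsi/osd_initiator.h
+for the authoritative declarations.
+
+	#include <scsi/osd_initiator.h>
+
+	/* hypothetical example: one osd_dev per OSD LUN (scsi_device) */
+	static struct osd_dev my_osd;
+
+	static void my_attach(struct scsi_device *sdev)
+	{
+		osd_dev_init(&my_osd, sdev);	/* sets up the global pools */
+	}
+
+	static void my_detach(void)
+	{
+		osd_dev_fini(&my_osd);	/* call before the scsi_device goes away */
+	}
+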
+OSD commands encoding, execution, and decoding of results:
+
+A struct osd_request is used to iteratively encode an OSD command and to carry
+its state throughout execution. Each request goes through the stages below (a
+sketch that strings them together follows the list):
+
+a. osd_start_request() allocates the request.
+
+b. Any of the osd_req_* methods is used to encode a request of the specified
+   type.
+
+c. osd_req_add_{get,set}_attr_* may be called to add get/set attributes to the
+   CDB. Either "List" or "Page" mode can be used, but not both on the same
+   request. The attribute-list API can be called multiple times on the same
+   request. However, only one attribute-page can be read, as mandated by the
+   OSD standard.
+
+d. osd_finalize_request() computes offsets into the data-in and data-out buffers
+   and signs the request using the provided capability key and integrity-
+   check parameters.
+
+e. osd_execute_request() may be called to execute the request via the block
+   layer and wait for its completion.  The request can be executed
+   asynchronously by calling the block layer API directly.
+
+f. After execution, osd_req_decode_sense() can be called to decode the request's
+   sense information.
+
+g. osd_req_decode_get_attr() may be called to retrieve the values returned for
+   the attributes that were requested with the osd_req_add_get_attr_* calls.
+
+h. osd_end_request() must be called to deallocate the request and any resources
+   associated with it. Note that osd_end_request() cleans up the request at any
+   stage, and it must always be called after a successful osd_start_request().
+
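+The sketch below strings the stages above together for a synchronous command.
+It is illustrative only: the prototypes and the hypothetical helper
+my_osd_roundtrip() are assumptions and should be checked against
+include/scsi/osd_initiator.h.
+
+	#include <scsi/osd_initiator.h>
+
+	static int my_osd_roundtrip(struct osd_dev *od, const void *caps,
+				    const u8 *cap_key)
+	{
+		struct osd_sense_info osi;
+		struct osd_request *or;
+		int ret;
+
+		/* a. allocate the request */
+		or = osd_start_request(od, GFP_KERNEL);
+		if (!or)
+			return -ENOMEM;
+
+		/* b. encode the command with one of the osd_req_* helpers,
+		 *    e.g. osd_req_read()/osd_req_write(); arguments omitted
+		 *    here, see osd_initiator.h for the exact form.
+		 */
+
+		/* c. optionally add attributes (osd_req_add_{get,set}_attr_*) */
+
+		/* d. compute buffer offsets and sign the CDB */
+		ret = osd_finalize_request(or, 0, caps, cap_key);
+		if (ret)
+			goto out;
+
+		/* e. execute through the block layer and wait for completion */
+		ret = osd_execute_request(or);
+
+		/* f. decode any returned sense information */
+		osd_req_decode_sense(or, &osi);
+
+		/* g. osd_req_decode_get_attr() would go here to read back
+		 *    any requested get-attribute values.
+		 */
+	out:
+		/* h. always free the request, no matter where we stopped */
+		osd_end_request(or);
+		return ret;
+	}
+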
+osd_request's structure:
+
+The OSD standard defines a complex structure of IO segments pointed to by
+members in the CDB. Up to 3 segments can be deployed in the IN-Buffer and up to
+4 in the OUT-Buffer. The ASCII illustration below depicts a secure-read with
+associated get+set of attribute-lists. Other combinations vary on the same
+basic theme, from no segments used up to all segments used.
+
+|________OSD-CDB__________|
+|                         |
+|read_len (offset=0)     -|---------\
+|                         |         |
+|get_attrs_list_length    |         |
+|get_attrs_list_offset   -|----\    |
+|                         |    |    |
+|retrieved_attrs_alloc_len|    |    |
+|retrieved_attrs_offset  -|----|----|-\
+|                         |    |    | |
+|set_attrs_list_length    |    |    | |
+|set_attrs_list_offset   -|-\  |    | |
+|                         | |  |    | |
+|in_data_integ_offset    -|-|--|----|-|-\
+|out_data_integ_offset   -|-|--|--\ | | |
+\_________________________/ |  |  | | | |
+                            |  |  | | | |
+|_______OUT-BUFFER________| |  |  | | | |
+|      Set attr list      |</  |  | | | |
+|                         |    |  | | | |
+|-------------------------|    |  | | | |
+|   Get attr descriptors  |<---/  | | | |
+|                         |       | | | |
+|-------------------------|       | | | |
+|    Out-data integrity   |<------/ | | |
+|                         |         | | |
+\_________________________/         | | |
+                                    | | |
+|________IN-BUFFER________|         | | |
+|      In-Data read       |<--------/ | |
+|                         |           | |
+|-------------------------|           | |
+|      Get attr list      |<----------/ |
+|                         |             |
+|-------------------------|             |
+|    In-data integrity    |<------------/
+|                         |
+\_________________________/
+
+A block-device request can carry a bidirectional payload by associating a
+bidi_read request with the main write request. Each in/out request is described
+by a chain of BIOs associated with it.
+The CDB uses the SCSI VARLEN CDB format, as described by the OSD standard.
+The OSD standard also mandates alignment restrictions at the start of each
+segment.
+
+In the code, struct osd_request contains two _osd_io_info structures that
+describe the IN/OUT buffers above, two BIOs for the data payload, and up to
+five _osd_req_data_segment structures that hold the allocation and bookkeeping
+information for the different segments.
+
+Important: We have chosen to disregard the assumption that a BIO chain (and
+the resulting sg-list) describes a linear memory buffer, i.e. that only the
+first and last scatter entries may be partial while all the middle entries
+span a full PAGE_SIZE. For us, a scatter-gather list, as its name implies and
+as used by the networking layer, describes a vector of buffers that will be
+transferred to/from the wire. This works very well with the current iSCSI
+transport, which is today the only deployed OSD transport. In the future we
+anticipate SAS- and FC-attached OSD devices as well.
+
+The OSD Testing ULD
+===================
+TODO: More user-mode control over the tests.
+
+Authors, Mailing list
+=====================
+Please let us know about any deployment of OSD, whether it uses this code
+or not.
+
+For any problems, questions, bug reports, or lonely OSD nights, please email:
+   OSD Dev List <osd-dev@open-osd.org>
+
+More up-to-date information can be found at:
+http://open-osd.org
+
+Boaz Harrosh <bharrosh@panasas.com>
+Benny Halevy <bhalevy@panasas.com>
+
+References
+==========
+Weber, R., "SCSI Object-Based Storage Device Commands",
+T10/1355-D ANSI/INCITS 400-2004,
+http://www.t10.org/ftp/t10/drafts/osd/osd-r10.pdf
+
+Weber, R., "SCSI Object-Based Storage Device Commands -2 (OSD-2)"
+T10/1729-D, Working Draft, rev. 3
+http://www.t10.org/ftp/t10/drafts/osd2/osd2r03.pdf
diff --git a/MAINTAINERS b/MAINTAINERS
index 64c89c2..d8a4c8d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3310,6 +3310,16 @@
 W:	http://www.nongnu.org/orinoco/
 S:	Maintained
 
+OSD LIBRARY
+P:	Boaz Harrosh
+M:	bharrosh@panasas.com
+P:	Benny Halevy
+M:	bhalevy@panasas.com
+L:	osd-dev@open-osd.org
+W:	http://open-osd.org
+T:	git://git.open-osd.org/open-osd.git
+S:	Maintained
+
 P54 WIRELESS DRIVER
 P:	Michael Wu
 M:	flamingice@sourmilk.net
diff --git a/arch/ia64/kernel/irq_ia64.c b/arch/ia64/kernel/irq_ia64.c
index 927ad02..acc4d19 100644
--- a/arch/ia64/kernel/irq_ia64.c
+++ b/arch/ia64/kernel/irq_ia64.c
@@ -493,16 +493,15 @@
 	saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
 	ia64_srlz_d();
 	while (vector != IA64_SPURIOUS_INT_VECTOR) {
-		struct irq_desc *desc = irq_to_desc(vector);
+		int irq = local_vector_to_irq(vector);
+		struct irq_desc *desc = irq_to_desc(irq);
 
 		if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
 			smp_local_flush_tlb();
-			kstat_incr_irqs_this_cpu(vector, desc);
-		} else if (unlikely(IS_RESCHEDULE(vector)))
-			kstat_incr_irqs_this_cpu(vector, desc);
-		else {
-			int irq = local_vector_to_irq(vector);
-
+			kstat_incr_irqs_this_cpu(irq, desc);
+		} else if (unlikely(IS_RESCHEDULE(vector))) {
+			kstat_incr_irqs_this_cpu(irq, desc);
+		} else {
 			ia64_setreg(_IA64_REG_CR_TPR, vector);
 			ia64_srlz_d();
 
@@ -545,24 +544,24 @@
 
 	vector = ia64_get_ivr();
 
-	 irq_enter();
-	 saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
-	 ia64_srlz_d();
+	irq_enter();
+	saved_tpr = ia64_getreg(_IA64_REG_CR_TPR);
+	ia64_srlz_d();
 
 	 /*
 	  * Perform normal interrupt style processing
 	  */
 	while (vector != IA64_SPURIOUS_INT_VECTOR) {
-		struct irq_desc *desc = irq_to_desc(vector);
+		int irq = local_vector_to_irq(vector);
+		struct irq_desc *desc = irq_to_desc(irq);
 
 		if (unlikely(IS_LOCAL_TLB_FLUSH(vector))) {
 			smp_local_flush_tlb();
-			kstat_incr_irqs_this_cpu(vector, desc);
-		} else if (unlikely(IS_RESCHEDULE(vector)))
-			kstat_incr_irqs_this_cpu(vector, desc);
-		else {
+			kstat_incr_irqs_this_cpu(irq, desc);
+		} else if (unlikely(IS_RESCHEDULE(vector))) {
+			kstat_incr_irqs_this_cpu(irq, desc);
+		} else {
 			struct pt_regs *old_regs = set_irq_regs(NULL);
-			int irq = local_vector_to_irq(vector);
 
 			ia64_setreg(_IA64_REG_CR_TPR, vector);
 			ia64_srlz_d();
diff --git a/block/cmd-filter.c b/block/cmd-filter.c
index 504b275..572bbc2 100644
--- a/block/cmd-filter.c
+++ b/block/cmd-filter.c
@@ -22,6 +22,7 @@
 #include <linux/spinlock.h>
 #include <linux/capability.h>
 #include <linux/bitops.h>
+#include <linux/blkdev.h>
 
 #include <scsi/scsi.h>
 #include <linux/cdrom.h>
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.c b/drivers/infiniband/ulp/iser/iscsi_iser.c
index 1287639..13d7674 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.c
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.c
@@ -168,7 +168,7 @@
 {
 	int error = 0;
 
-	debug_scsi("task deq [cid %d itt 0x%x]\n", conn->id, task->itt);
+	iser_dbg("task deq [cid %d itt 0x%x]\n", conn->id, task->itt);
 
 	error = iser_send_control(conn, task);
 
@@ -195,7 +195,7 @@
 	/* Send data-out PDUs while there's still unsolicited data to send */
 	while (iscsi_task_has_unsol_data(task)) {
 		iscsi_prep_data_out_pdu(task, r2t, &hdr);
-		debug_scsi("Sending data-out: itt 0x%x, data count %d\n",
+		iser_dbg("Sending data-out: itt 0x%x, data count %d\n",
 			   hdr.itt, r2t->data_count);
 
 		/* the buffer description has been passed with the command */
@@ -206,7 +206,7 @@
 			goto iscsi_iser_task_xmit_unsol_data_exit;
 		}
 		r2t->sent += r2t->data_count;
-		debug_scsi("Need to send %d more as data-out PDUs\n",
+		iser_dbg("Need to send %d more as data-out PDUs\n",
 			   r2t->data_length - r2t->sent);
 	}
 
@@ -227,12 +227,12 @@
 	if (task->sc->sc_data_direction == DMA_TO_DEVICE) {
 		BUG_ON(scsi_bufflen(task->sc) == 0);
 
-		debug_scsi("cmd [itt %x total %d imm %d unsol_data %d\n",
+		iser_dbg("cmd [itt %x total %d imm %d unsol_data %d\n",
 			   task->itt, scsi_bufflen(task->sc),
 			   task->imm_count, task->unsol_r2t.data_length);
 	}
 
-	debug_scsi("task deq [cid %d itt 0x%x]\n",
+	iser_dbg("task deq [cid %d itt 0x%x]\n",
 		   conn->id, task->itt);
 
 	/* Send the cmd PDU */
@@ -397,14 +397,14 @@
 static struct iscsi_cls_session *
 iscsi_iser_session_create(struct iscsi_endpoint *ep,
 			  uint16_t cmds_max, uint16_t qdepth,
-			  uint32_t initial_cmdsn, uint32_t *hostno)
+			  uint32_t initial_cmdsn)
 {
 	struct iscsi_cls_session *cls_session;
 	struct iscsi_session *session;
 	struct Scsi_Host *shost;
 	struct iser_conn *ib_conn;
 
-	shost = iscsi_host_alloc(&iscsi_iser_sht, 0, ISCSI_MAX_CMD_PER_LUN);
+	shost = iscsi_host_alloc(&iscsi_iser_sht, 0, 1);
 	if (!shost)
 		return NULL;
 	shost->transportt = iscsi_iser_scsi_transport;
@@ -423,7 +423,6 @@
 	if (iscsi_host_add(shost,
 			   ep ? ib_conn->device->ib_device->dma_device : NULL))
 		goto free_host;
-	*hostno = shost->host_no;
 
 	/*
 	 * we do not support setting can_queue cmd_per_lun from userspace yet
@@ -596,7 +595,7 @@
 	.change_queue_depth	= iscsi_change_queue_depth,
 	.sg_tablesize           = ISCSI_ISER_SG_TABLESIZE,
 	.max_sectors		= 1024,
-	.cmd_per_lun            = ISCSI_MAX_CMD_PER_LUN,
+	.cmd_per_lun            = ISER_DEF_CMD_PER_LUN,
 	.eh_abort_handler       = iscsi_eh_abort,
 	.eh_device_reset_handler= iscsi_eh_device_reset,
 	.eh_target_reset_handler= iscsi_eh_target_reset,
diff --git a/drivers/infiniband/ulp/iser/iscsi_iser.h b/drivers/infiniband/ulp/iser/iscsi_iser.h
index 8611195..9d529ca 100644
--- a/drivers/infiniband/ulp/iser/iscsi_iser.h
+++ b/drivers/infiniband/ulp/iser/iscsi_iser.h
@@ -93,7 +93,7 @@
 
 					/* support upto 512KB in one RDMA */
 #define ISCSI_ISER_SG_TABLESIZE         (0x80000 >> SHIFT_4K)
-#define ISCSI_ISER_MAX_LUN		256
+#define ISER_DEF_CMD_PER_LUN		128
 
 /* QP settings */
 /* Maximal bounds on received asynchronous PDUs */
diff --git a/drivers/infiniband/ulp/iser/iser_initiator.c b/drivers/infiniband/ulp/iser/iser_initiator.c
index e209cb8d..9de6402 100644
--- a/drivers/infiniband/ulp/iser/iser_initiator.c
+++ b/drivers/infiniband/ulp/iser/iser_initiator.c
@@ -661,7 +661,7 @@
 
 	if (resume_tx) {
 		iser_dbg("%ld resuming tx\n",jiffies);
-		scsi_queue_work(conn->session->host, &conn->xmitwork);
+		iscsi_conn_queue_work(conn);
 	}
 
 	if (tx_desc->type == ISCSI_TX_CONTROL) {
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 9f102a6..f673253 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1511,7 +1511,7 @@
 static void xennet_set_features(struct net_device *dev)
 {
 	/* Turn off all GSO bits except ROBUST. */
-	dev->features &= (1 << NETIF_F_GSO_SHIFT) - 1;
+	dev->features &= ~NETIF_F_GSO_MASK;
 	dev->features |= NETIF_F_GSO_ROBUST;
 	xennet_set_sg(dev, 0);
 
diff --git a/drivers/s390/scsi/zfcp_aux.c b/drivers/s390/scsi/zfcp_aux.c
index 8af7dfb..616c60f 100644
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -3,7 +3,7 @@
  *
  * Module interface and handling of zfcp data structures.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 /*
@@ -249,8 +249,8 @@
 	struct zfcp_port *port;
 
 	list_for_each_entry(port, &adapter->port_list_head, list)
-		if ((port->wwpn == wwpn) && !(atomic_read(&port->status) &
-		      (ZFCP_STATUS_PORT_NO_WWPN | ZFCP_STATUS_COMMON_REMOVE)))
+		if ((port->wwpn == wwpn) &&
+		    !(atomic_read(&port->status) & ZFCP_STATUS_COMMON_REMOVE))
 			return port;
 	return NULL;
 }
@@ -421,7 +421,8 @@
 	while (atomic_read(&adapter->stat_miss) > 0)
 		if (zfcp_fsf_status_read(adapter)) {
 			if (atomic_read(&adapter->stat_miss) >= 16) {
-				zfcp_erp_adapter_reopen(adapter, 0, 103, NULL);
+				zfcp_erp_adapter_reopen(adapter, 0, "axsref1",
+							NULL);
 				return 1;
 			}
 			break;
@@ -501,6 +502,7 @@
 	spin_lock_init(&adapter->scsi_dbf_lock);
 	spin_lock_init(&adapter->rec_dbf_lock);
 	spin_lock_init(&adapter->req_q_lock);
+	spin_lock_init(&adapter->qdio_stat_lock);
 
 	rwlock_init(&adapter->erp_lock);
 	rwlock_init(&adapter->abort_lock);
@@ -522,7 +524,6 @@
 		goto sysfs_failed;
 
 	atomic_clear_mask(ZFCP_STATUS_COMMON_REMOVE, &adapter->status);
-	zfcp_fc_nameserver_init(adapter);
 
 	if (!zfcp_adapter_scsi_register(adapter))
 		return 0;
@@ -552,6 +553,7 @@
 
 	cancel_work_sync(&adapter->scan_work);
 	cancel_work_sync(&adapter->stat_work);
+	cancel_delayed_work_sync(&adapter->nsp.work);
 	zfcp_adapter_scsi_unregister(adapter);
 	sysfs_remove_group(&adapter->ccw_device->dev.kobj,
 			   &zfcp_sysfs_adapter_attrs);
@@ -603,10 +605,13 @@
 	init_waitqueue_head(&port->remove_wq);
 	INIT_LIST_HEAD(&port->unit_list_head);
 	INIT_WORK(&port->gid_pn_work, zfcp_erp_port_strategy_open_lookup);
+	INIT_WORK(&port->test_link_work, zfcp_fc_link_test_work);
+	INIT_WORK(&port->rport_work, zfcp_scsi_rport_work);
 
 	port->adapter = adapter;
 	port->d_id = d_id;
 	port->wwpn = wwpn;
+	port->rport_task = RPORT_NONE;
 
 	/* mark port unusable as long as sysfs registration is not complete */
 	atomic_set_mask(status | ZFCP_STATUS_COMMON_REMOVE, &port->status);
@@ -620,11 +625,10 @@
 	dev_set_drvdata(&port->sysfs_device, port);
 
 	read_lock_irq(&zfcp_data.config_lock);
-	if (!(status & ZFCP_STATUS_PORT_NO_WWPN))
-		if (zfcp_get_port_by_wwpn(adapter, wwpn)) {
-			read_unlock_irq(&zfcp_data.config_lock);
-			goto err_out_free;
-		}
+	if (zfcp_get_port_by_wwpn(adapter, wwpn)) {
+		read_unlock_irq(&zfcp_data.config_lock);
+		goto err_out_free;
+	}
 	read_unlock_irq(&zfcp_data.config_lock);
 
 	if (device_register(&port->sysfs_device))
diff --git a/drivers/s390/scsi/zfcp_ccw.c b/drivers/s390/scsi/zfcp_ccw.c
index 285881f..1fe1e2e 100644
--- a/drivers/s390/scsi/zfcp_ccw.c
+++ b/drivers/s390/scsi/zfcp_ccw.c
@@ -3,7 +3,7 @@
  *
  * Registration and callback for the s390 common I/O layer.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -72,8 +72,7 @@
 
 	list_for_each_entry_safe(port, p, &port_remove_lh, list) {
 		list_for_each_entry_safe(unit, u, &unit_remove_lh, list) {
-			if (atomic_read(&unit->status) &
-			    ZFCP_STATUS_UNIT_REGISTERED)
+			if (unit->device)
 				scsi_remove_device(unit->device);
 			zfcp_unit_dequeue(unit);
 		}
@@ -109,11 +108,12 @@
 	/* initialize request counter */
 	BUG_ON(!zfcp_reqlist_isempty(adapter));
 	adapter->req_no = 0;
+	zfcp_fc_nameserver_init(adapter);
 
-	zfcp_erp_modify_adapter_status(adapter, 10, NULL,
+	zfcp_erp_modify_adapter_status(adapter, "ccsonl1", NULL,
 				       ZFCP_STATUS_COMMON_RUNNING, ZFCP_SET);
-	zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 85,
-				NULL);
+	zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
+				"ccsonl2", NULL);
 	zfcp_erp_wait(adapter);
 	up(&zfcp_data.config_sema);
 	flush_work(&adapter->scan_work);
@@ -137,7 +137,7 @@
 
 	down(&zfcp_data.config_sema);
 	adapter = dev_get_drvdata(&ccw_device->dev);
-	zfcp_erp_adapter_shutdown(adapter, 0, 86, NULL);
+	zfcp_erp_adapter_shutdown(adapter, 0, "ccsoff1", NULL);
 	zfcp_erp_wait(adapter);
 	zfcp_erp_thread_kill(adapter);
 	up(&zfcp_data.config_sema);
@@ -160,21 +160,21 @@
 	case CIO_GONE:
 		dev_warn(&adapter->ccw_device->dev,
 			 "The FCP device has been detached\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 87, NULL);
+		zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti1", NULL);
 		break;
 	case CIO_NO_PATH:
 		dev_warn(&adapter->ccw_device->dev,
 			 "The CHPID for the FCP device is offline\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 88, NULL);
+		zfcp_erp_adapter_shutdown(adapter, 0, "ccnoti2", NULL);
 		break;
 	case CIO_OPER:
 		dev_info(&adapter->ccw_device->dev,
 			 "The FCP device is operational again\n");
-		zfcp_erp_modify_adapter_status(adapter, 11, NULL,
+		zfcp_erp_modify_adapter_status(adapter, "ccnoti3", NULL,
 					       ZFCP_STATUS_COMMON_RUNNING,
 					       ZFCP_SET);
 		zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
-					89, NULL);
+					"ccnoti4", NULL);
 		break;
 	}
 	return 1;
@@ -190,7 +190,7 @@
 
 	down(&zfcp_data.config_sema);
 	adapter = dev_get_drvdata(&cdev->dev);
-	zfcp_erp_adapter_shutdown(adapter, 0, 90, NULL);
+	zfcp_erp_adapter_shutdown(adapter, 0, "ccshut1", NULL);
 	zfcp_erp_wait(adapter);
 	up(&zfcp_data.config_sema);
 }
diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c
index cb6df60..0a1a5dd 100644
--- a/drivers/s390/scsi/zfcp_dbf.c
+++ b/drivers/s390/scsi/zfcp_dbf.c
@@ -490,172 +490,17 @@
 	[ZFCP_REC_DBF_ID_ACTION] = "action",
 };
 
-static const char *zfcp_rec_dbf_ids[] = {
-	[1]	= "new",
-	[2]	= "ready",
-	[3]	= "kill",
-	[4]	= "down sleep",
-	[5]	= "down wakeup",
-	[6]	= "down sleep ecd",
-	[7]	= "down wakeup ecd",
-	[8]	= "down sleep epd",
-	[9]	= "down wakeup epd",
-	[10]	= "online",
-	[11]	= "operational",
-	[12]	= "scsi slave destroy",
-	[13]	= "propagate failed adapter",
-	[14]	= "propagate failed port",
-	[15]	= "block adapter",
-	[16]	= "unblock adapter",
-	[17]	= "block port",
-	[18]	= "unblock port",
-	[19]	= "block unit",
-	[20]	= "unblock unit",
-	[21]	= "unit recovery failed",
-	[22]	= "port recovery failed",
-	[23]	= "adapter recovery failed",
-	[24]	= "qdio queues down",
-	[25]	= "p2p failed",
-	[26]	= "nameserver lookup failed",
-	[27]	= "nameserver port failed",
-	[28]	= "link up",
-	[29]	= "link down",
-	[30]	= "link up status read",
-	[31]	= "open port failed",
-	[32]	= "",
-	[33]	= "close port",
-	[34]	= "open unit failed",
-	[35]	= "exclusive open unit failed",
-	[36]	= "shared open unit failed",
-	[37]	= "link down",
-	[38]	= "link down status read no link",
-	[39]	= "link down status read fdisc login",
-	[40]	= "link down status read firmware update",
-	[41]	= "link down status read unknown reason",
-	[42]	= "link down ecd incomplete",
-	[43]	= "link down epd incomplete",
-	[44]	= "sysfs adapter recovery",
-	[45]	= "sysfs port recovery",
-	[46]	= "sysfs unit recovery",
-	[47]	= "port boxed abort",
-	[48]	= "unit boxed abort",
-	[49]	= "port boxed ct",
-	[50]	= "port boxed close physical",
-	[51]	= "port boxed open unit",
-	[52]	= "port boxed close unit",
-	[53]	= "port boxed fcp",
-	[54]	= "unit boxed fcp",
-	[55]	= "port access denied",
-	[56]	= "",
-	[57]	= "",
-	[58]	= "",
-	[59]	= "unit access denied",
-	[60]	= "shared unit access denied open unit",
-	[61]	= "",
-	[62]	= "request timeout",
-	[63]	= "adisc link test reject or timeout",
-	[64]	= "adisc link test d_id changed",
-	[65]	= "adisc link test failed",
-	[66]	= "recovery out of memory",
-	[67]	= "adapter recovery repeated after state change",
-	[68]	= "port recovery repeated after state change",
-	[69]	= "unit recovery repeated after state change",
-	[70]	= "port recovery follow-up after successful adapter recovery",
-	[71]	= "adapter recovery escalation after failed adapter recovery",
-	[72]	= "port recovery follow-up after successful physical port "
-		  "recovery",
-	[73]	= "adapter recovery escalation after failed physical port "
-		  "recovery",
-	[74]	= "unit recovery follow-up after successful port recovery",
-	[75]	= "physical port recovery escalation after failed port "
-		  "recovery",
-	[76]	= "port recovery escalation after failed unit recovery",
-	[77]	= "",
-	[78]	= "duplicate request id",
-	[79]	= "link down",
-	[80]	= "exclusive read-only unit access unsupported",
-	[81]	= "shared read-write unit access unsupported",
-	[82]	= "incoming rscn",
-	[83]	= "incoming wwpn",
-	[84]	= "wka port handle not valid close port",
-	[85]	= "online",
-	[86]	= "offline",
-	[87]	= "ccw device gone",
-	[88]	= "ccw device no path",
-	[89]	= "ccw device operational",
-	[90]	= "ccw device shutdown",
-	[91]	= "sysfs port addition",
-	[92]	= "sysfs port removal",
-	[93]	= "sysfs adapter recovery",
-	[94]	= "sysfs unit addition",
-	[95]	= "sysfs unit removal",
-	[96]	= "sysfs port recovery",
-	[97]	= "sysfs unit recovery",
-	[98]	= "sequence number mismatch",
-	[99]	= "link up",
-	[100]	= "error state",
-	[101]	= "status read physical port closed",
-	[102]	= "link up status read",
-	[103]	= "too many failed status read buffers",
-	[104]	= "port handle not valid abort",
-	[105]	= "lun handle not valid abort",
-	[106]	= "port handle not valid ct",
-	[107]	= "port handle not valid close port",
-	[108]	= "port handle not valid close physical port",
-	[109]	= "port handle not valid open unit",
-	[110]	= "port handle not valid close unit",
-	[111]	= "lun handle not valid close unit",
-	[112]	= "port handle not valid fcp",
-	[113]	= "lun handle not valid fcp",
-	[114]	= "handle mismatch fcp",
-	[115]	= "lun not valid fcp",
-	[116]	= "qdio send failed",
-	[117]	= "version mismatch",
-	[118]	= "incompatible qtcb type",
-	[119]	= "unknown protocol status",
-	[120]	= "unknown fsf command",
-	[121]	= "no recommendation for status qualifier",
-	[122]	= "status read physical port closed in error",
-	[123]	= "fc service class not supported",
-	[124]	= "",
-	[125]	= "need newer zfcp",
-	[126]	= "need newer microcode",
-	[127]	= "arbitrated loop not supported",
-	[128]	= "",
-	[129]	= "qtcb size mismatch",
-	[130]	= "unknown fsf status ecd",
-	[131]	= "fcp request too big",
-	[132]	= "",
-	[133]	= "data direction not valid fcp",
-	[134]	= "command length not valid fcp",
-	[135]	= "status read act update",
-	[136]	= "status read cfdc update",
-	[137]	= "hbaapi port open",
-	[138]	= "hbaapi unit open",
-	[139]	= "hbaapi unit shutdown",
-	[140]	= "qdio error outbound",
-	[141]	= "scsi host reset",
-	[142]	= "dismissing fsf request for recovery action",
-	[143]	= "recovery action timed out",
-	[144]	= "recovery action gone",
-	[145]	= "recovery action being processed",
-	[146]	= "recovery action ready for next step",
-	[147]	= "qdio error inbound",
-	[148]   = "nameserver needed for port scan",
-	[149]   = "port scan",
-	[150]	= "ptp attach",
-	[151]   = "port validation failed",
-};
-
 static int zfcp_rec_dbf_view_format(debug_info_t *id, struct debug_view *view,
 				    char *buf, const char *_rec)
 {
 	struct zfcp_rec_dbf_record *r = (struct zfcp_rec_dbf_record *)_rec;
 	char *p = buf;
+	char hint[ZFCP_DBF_ID_SIZE + 1];
 
+	memcpy(hint, r->id2, ZFCP_DBF_ID_SIZE);
+	hint[ZFCP_DBF_ID_SIZE] = 0;
 	zfcp_dbf_outs(&p, "tag", zfcp_rec_dbf_tags[r->id]);
-	zfcp_dbf_outs(&p, "hint", zfcp_rec_dbf_ids[r->id2]);
-	zfcp_dbf_out(&p, "id", "%d", r->id2);
+	zfcp_dbf_outs(&p, "hint", hint);
 	switch (r->id) {
 	case ZFCP_REC_DBF_ID_THREAD:
 		zfcp_dbf_out(&p, "total", "%d", r->u.thread.total);
@@ -707,7 +552,7 @@
  * @adapter: adapter
  * This function assumes that the caller is holding erp_lock.
  */
-void zfcp_rec_dbf_event_thread(u8 id2, struct zfcp_adapter *adapter)
+void zfcp_rec_dbf_event_thread(char *id2, struct zfcp_adapter *adapter)
 {
 	struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf;
 	unsigned long flags = 0;
@@ -723,7 +568,7 @@
 	spin_lock_irqsave(&adapter->rec_dbf_lock, flags);
 	memset(r, 0, sizeof(*r));
 	r->id = ZFCP_REC_DBF_ID_THREAD;
-	r->id2 = id2;
+	memcpy(r->id2, id2, ZFCP_DBF_ID_SIZE);
 	r->u.thread.total = total;
 	r->u.thread.ready = ready;
 	r->u.thread.running = running;
@@ -737,7 +582,7 @@
  * @adapter: adapter
  * This function assumes that the caller does not hold erp_lock.
  */
-void zfcp_rec_dbf_event_thread_lock(u8 id2, struct zfcp_adapter *adapter)
+void zfcp_rec_dbf_event_thread_lock(char *id2, struct zfcp_adapter *adapter)
 {
 	unsigned long flags;
 
@@ -746,7 +591,7 @@
 	read_unlock_irqrestore(&adapter->erp_lock, flags);
 }
 
-static void zfcp_rec_dbf_event_target(u8 id2, void *ref,
+static void zfcp_rec_dbf_event_target(char *id2, void *ref,
 				      struct zfcp_adapter *adapter,
 				      atomic_t *status, atomic_t *erp_count,
 				      u64 wwpn, u32 d_id, u64 fcp_lun)
@@ -757,7 +602,7 @@
 	spin_lock_irqsave(&adapter->rec_dbf_lock, flags);
 	memset(r, 0, sizeof(*r));
 	r->id = ZFCP_REC_DBF_ID_TARGET;
-	r->id2 = id2;
+	memcpy(r->id2, id2, ZFCP_DBF_ID_SIZE);
 	r->u.target.ref = (unsigned long)ref;
 	r->u.target.status = atomic_read(status);
 	r->u.target.wwpn = wwpn;
@@ -774,7 +619,8 @@
  * @ref: additional reference (e.g. request)
  * @adapter: adapter
  */
-void zfcp_rec_dbf_event_adapter(u8 id, void *ref, struct zfcp_adapter *adapter)
+void zfcp_rec_dbf_event_adapter(char *id, void *ref,
+				struct zfcp_adapter *adapter)
 {
 	zfcp_rec_dbf_event_target(id, ref, adapter, &adapter->status,
 				  &adapter->erp_counter, 0, 0, 0);
@@ -786,7 +632,7 @@
  * @ref: additional reference (e.g. request)
  * @port: port
  */
-void zfcp_rec_dbf_event_port(u8 id, void *ref, struct zfcp_port *port)
+void zfcp_rec_dbf_event_port(char *id, void *ref, struct zfcp_port *port)
 {
 	struct zfcp_adapter *adapter = port->adapter;
 
@@ -801,7 +647,7 @@
  * @ref: additional reference (e.g. request)
  * @unit: unit
  */
-void zfcp_rec_dbf_event_unit(u8 id, void *ref, struct zfcp_unit *unit)
+void zfcp_rec_dbf_event_unit(char *id, void *ref, struct zfcp_unit *unit)
 {
 	struct zfcp_port *port = unit->port;
 	struct zfcp_adapter *adapter = port->adapter;
@@ -822,7 +668,7 @@
  * @port: port
  * @unit: unit
  */
-void zfcp_rec_dbf_event_trigger(u8 id2, void *ref, u8 want, u8 need,
+void zfcp_rec_dbf_event_trigger(char *id2, void *ref, u8 want, u8 need,
 				void *action, struct zfcp_adapter *adapter,
 				struct zfcp_port *port, struct zfcp_unit *unit)
 {
@@ -832,7 +678,7 @@
 	spin_lock_irqsave(&adapter->rec_dbf_lock, flags);
 	memset(r, 0, sizeof(*r));
 	r->id = ZFCP_REC_DBF_ID_TRIGGER;
-	r->id2 = id2;
+	memcpy(r->id2, id2, ZFCP_DBF_ID_SIZE);
 	r->u.trigger.ref = (unsigned long)ref;
 	r->u.trigger.want = want;
 	r->u.trigger.need = need;
@@ -855,7 +701,7 @@
  * @id2: identifier
  * @erp_action: error recovery action struct pointer
  */
-void zfcp_rec_dbf_event_action(u8 id2, struct zfcp_erp_action *erp_action)
+void zfcp_rec_dbf_event_action(char *id2, struct zfcp_erp_action *erp_action)
 {
 	struct zfcp_adapter *adapter = erp_action->adapter;
 	struct zfcp_rec_dbf_record *r = &adapter->rec_dbf_buf;
@@ -864,7 +710,7 @@
 	spin_lock_irqsave(&adapter->rec_dbf_lock, flags);
 	memset(r, 0, sizeof(*r));
 	r->id = ZFCP_REC_DBF_ID_ACTION;
-	r->id2 = id2;
+	memcpy(r->id2, id2, ZFCP_DBF_ID_SIZE);
 	r->u.action.action = (unsigned long)erp_action;
 	r->u.action.status = erp_action->status;
 	r->u.action.step = erp_action->step;
diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h
index 74998ff..a573f73 100644
--- a/drivers/s390/scsi/zfcp_dbf.h
+++ b/drivers/s390/scsi/zfcp_dbf.h
@@ -25,6 +25,7 @@
 #include "zfcp_fsf.h"
 
 #define ZFCP_DBF_TAG_SIZE      4
+#define ZFCP_DBF_ID_SIZE       7
 
 struct zfcp_dbf_dump {
 	u8 tag[ZFCP_DBF_TAG_SIZE];
@@ -70,7 +71,7 @@
 
 struct zfcp_rec_dbf_record {
 	u8 id;
-	u8 id2;
+	char id2[7];
 	union {
 		struct zfcp_rec_dbf_record_action action;
 		struct zfcp_rec_dbf_record_thread thread;
diff --git a/drivers/s390/scsi/zfcp_def.h b/drivers/s390/scsi/zfcp_def.h
index 5106627..a031863 100644
--- a/drivers/s390/scsi/zfcp_def.h
+++ b/drivers/s390/scsi/zfcp_def.h
@@ -3,7 +3,7 @@
  *
  * Global definitions for the zfcp device driver.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #ifndef ZFCP_DEF_H
@@ -243,9 +243,6 @@
 
 /* remote port status */
 #define ZFCP_STATUS_PORT_PHYS_OPEN		0x00000001
-#define ZFCP_STATUS_PORT_PHYS_CLOSING		0x00000004
-#define ZFCP_STATUS_PORT_NO_WWPN		0x00000008
-#define ZFCP_STATUS_PORT_INVALID_WWPN		0x00000020
 
 /* well known address (WKA) port status*/
 enum zfcp_wka_status {
@@ -258,7 +255,6 @@
 /* logical unit status */
 #define ZFCP_STATUS_UNIT_SHARED			0x00000004
 #define ZFCP_STATUS_UNIT_READONLY		0x00000008
-#define ZFCP_STATUS_UNIT_REGISTERED		0x00000010
 #define ZFCP_STATUS_UNIT_SCSI_WORK_PENDING	0x00000020
 
 /* FSF request status (this does not have a common part) */
@@ -447,8 +443,9 @@
 	spinlock_t		req_list_lock;	   /* request list lock */
 	struct zfcp_qdio_queue	req_q;		   /* request queue */
 	spinlock_t		req_q_lock;	   /* for operations on queue */
-	int			req_q_pci_batch;   /* SBALs since PCI indication
-						      was last set */
+	ktime_t			req_q_time; /* time of last fill level change */
+	u64			req_q_util; /* for accounting */
+	spinlock_t		qdio_stat_lock;
 	u32			fsf_req_seq_no;	   /* FSF cmnd seq number */
 	wait_queue_head_t	request_wq;	   /* can be used to wait for
 						      more avaliable SBALs */
@@ -514,6 +511,9 @@
 	u32                    maxframe_size;
 	u32                    supported_classes;
 	struct work_struct     gid_pn_work;
+	struct work_struct     test_link_work;
+	struct work_struct     rport_work;
+	enum { RPORT_NONE, RPORT_ADD, RPORT_DEL }  rport_task;
 };
 
 struct zfcp_unit {
@@ -587,9 +587,6 @@
 
 /********************** ZFCP SPECIFIC DEFINES ********************************/
 
-#define ZFCP_REQ_AUTO_CLEANUP	0x00000002
-#define ZFCP_REQ_NO_QTCB	0x00000008
-
 #define ZFCP_SET                0x00000100
 #define ZFCP_CLEAR              0x00000200
 
diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
index 387a3af..631bdb1 100644
--- a/drivers/s390/scsi/zfcp_erp.c
+++ b/drivers/s390/scsi/zfcp_erp.c
@@ -3,7 +3,7 @@
  *
  * Error Recovery Procedures (ERP).
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -55,7 +55,7 @@
 
 static void zfcp_erp_adapter_block(struct zfcp_adapter *adapter, int mask)
 {
-	zfcp_erp_modify_adapter_status(adapter, 15, NULL,
+	zfcp_erp_modify_adapter_status(adapter, "erablk1", NULL,
 				       ZFCP_STATUS_COMMON_UNBLOCKED | mask,
 				       ZFCP_CLEAR);
 }
@@ -75,9 +75,9 @@
 	struct zfcp_adapter *adapter = act->adapter;
 
 	list_move(&act->list, &act->adapter->erp_ready_head);
-	zfcp_rec_dbf_event_action(146, act);
+	zfcp_rec_dbf_event_action("erardy1", act);
 	up(&adapter->erp_ready_sem);
-	zfcp_rec_dbf_event_thread(2, adapter);
+	zfcp_rec_dbf_event_thread("erardy2", adapter);
 }
 
 static void zfcp_erp_action_dismiss(struct zfcp_erp_action *act)
@@ -208,7 +208,7 @@
 
 static int zfcp_erp_action_enqueue(int want, struct zfcp_adapter *adapter,
 				   struct zfcp_port *port,
-				   struct zfcp_unit *unit, u8 id, void *ref)
+				   struct zfcp_unit *unit, char *id, void *ref)
 {
 	int retval = 1, need;
 	struct zfcp_erp_action *act = NULL;
@@ -228,7 +228,7 @@
 	++adapter->erp_total_count;
 	list_add_tail(&act->list, &adapter->erp_ready_head);
 	up(&adapter->erp_ready_sem);
-	zfcp_rec_dbf_event_thread(1, adapter);
+	zfcp_rec_dbf_event_thread("eracte1", adapter);
 	retval = 0;
  out:
 	zfcp_rec_dbf_event_trigger(id, ref, want, need, act,
@@ -237,13 +237,14 @@
 }
 
 static int _zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter,
-				    int clear_mask, u8 id, void *ref)
+				    int clear_mask, char *id, void *ref)
 {
 	zfcp_erp_adapter_block(adapter, clear_mask);
+	zfcp_scsi_schedule_rports_block(adapter);
 
 	/* ensure propagation of failed status to new devices */
 	if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
-		zfcp_erp_adapter_failed(adapter, 13, NULL);
+		zfcp_erp_adapter_failed(adapter, "erareo1", NULL);
 		return -EIO;
 	}
 	return zfcp_erp_action_enqueue(ZFCP_ERP_ACTION_REOPEN_ADAPTER,
@@ -258,7 +259,7 @@
  * @ref: Reference for debug trace event.
  */
 void zfcp_erp_adapter_reopen(struct zfcp_adapter *adapter, int clear,
-			     u8 id, void *ref)
+			     char *id, void *ref)
 {
 	unsigned long flags;
 
@@ -277,7 +278,7 @@
  * @ref: Reference for debug trace event.
  */
 void zfcp_erp_adapter_shutdown(struct zfcp_adapter *adapter, int clear,
-			       u8 id, void *ref)
+			       char *id, void *ref)
 {
 	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;
 	zfcp_erp_adapter_reopen(adapter, clear | flags, id, ref);
@@ -290,7 +291,8 @@
  * @id: Id for debug trace event.
  * @ref: Reference for debug trace event.
  */
-void zfcp_erp_port_shutdown(struct zfcp_port *port, int clear, u8 id, void *ref)
+void zfcp_erp_port_shutdown(struct zfcp_port *port, int clear, char *id,
+			    void *ref)
 {
 	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;
 	zfcp_erp_port_reopen(port, clear | flags, id, ref);
@@ -303,7 +305,8 @@
  * @id: Id for debug trace event.
  * @ref: Reference for debug trace event.
  */
-void zfcp_erp_unit_shutdown(struct zfcp_unit *unit, int clear, u8 id, void *ref)
+void zfcp_erp_unit_shutdown(struct zfcp_unit *unit, int clear, char *id,
+			    void *ref)
 {
 	int flags = ZFCP_STATUS_COMMON_RUNNING | ZFCP_STATUS_COMMON_ERP_FAILED;
 	zfcp_erp_unit_reopen(unit, clear | flags, id, ref);
@@ -311,15 +314,16 @@
 
 static void zfcp_erp_port_block(struct zfcp_port *port, int clear)
 {
-	zfcp_erp_modify_port_status(port, 17, NULL,
+	zfcp_erp_modify_port_status(port, "erpblk1", NULL,
 				    ZFCP_STATUS_COMMON_UNBLOCKED | clear,
 				    ZFCP_CLEAR);
 }
 
 static void _zfcp_erp_port_forced_reopen(struct zfcp_port *port,
-					 int clear, u8 id, void *ref)
+					 int clear, char *id, void *ref)
 {
 	zfcp_erp_port_block(port, clear);
+	zfcp_scsi_schedule_rport_block(port);
 
 	if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED)
 		return;
@@ -334,7 +338,7 @@
  * @id: Id for debug trace event.
  * @ref: Reference for debug trace event.
  */
-void zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear, u8 id,
+void zfcp_erp_port_forced_reopen(struct zfcp_port *port, int clear, char *id,
 				 void *ref)
 {
 	unsigned long flags;
@@ -347,14 +351,15 @@
 	read_unlock_irqrestore(&zfcp_data.config_lock, flags);
 }
 
-static int _zfcp_erp_port_reopen(struct zfcp_port *port, int clear, u8 id,
+static int _zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id,
 				 void *ref)
 {
 	zfcp_erp_port_block(port, clear);
+	zfcp_scsi_schedule_rport_block(port);
 
 	if (atomic_read(&port->status) & ZFCP_STATUS_COMMON_ERP_FAILED) {
 		/* ensure propagation of failed status to new devices */
-		zfcp_erp_port_failed(port, 14, NULL);
+		zfcp_erp_port_failed(port, "erpreo1", NULL);
 		return -EIO;
 	}
 
@@ -369,7 +374,7 @@
  *
  * Returns 0 if recovery has been triggered, < 0 if not.
  */
-int zfcp_erp_port_reopen(struct zfcp_port *port, int clear, u8 id, void *ref)
+int zfcp_erp_port_reopen(struct zfcp_port *port, int clear, char *id, void *ref)
 {
 	unsigned long flags;
 	int retval;
@@ -386,12 +391,12 @@
 
 static void zfcp_erp_unit_block(struct zfcp_unit *unit, int clear_mask)
 {
-	zfcp_erp_modify_unit_status(unit, 19, NULL,
+	zfcp_erp_modify_unit_status(unit, "erublk1", NULL,
 				    ZFCP_STATUS_COMMON_UNBLOCKED | clear_mask,
 				    ZFCP_CLEAR);
 }
 
-static void _zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear, u8 id,
+static void _zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear, char *id,
 				  void *ref)
 {
 	struct zfcp_adapter *adapter = unit->port->adapter;
@@ -411,7 +416,8 @@
  * @clear_mask: specifies flags in unit status to be cleared
  * Return: 0 on success, < 0 on error
  */
-void zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear, u8 id, void *ref)
+void zfcp_erp_unit_reopen(struct zfcp_unit *unit, int clear, char *id,
+			  void *ref)
 {
 	unsigned long flags;
 	struct zfcp_port *port = unit->port;
@@ -437,28 +443,28 @@
 static void zfcp_erp_adapter_unblock(struct zfcp_adapter *adapter)
 {
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status))
-		zfcp_rec_dbf_event_adapter(16, NULL, adapter);
+		zfcp_rec_dbf_event_adapter("eraubl1", NULL, adapter);
 	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status);
 }
 
 static void zfcp_erp_port_unblock(struct zfcp_port *port)
 {
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status))
-		zfcp_rec_dbf_event_port(18, NULL, port);
+		zfcp_rec_dbf_event_port("erpubl1", NULL, port);
 	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status);
 }
 
 static void zfcp_erp_unit_unblock(struct zfcp_unit *unit)
 {
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status))
-		zfcp_rec_dbf_event_unit(20, NULL, unit);
+		zfcp_rec_dbf_event_unit("eruubl1", NULL, unit);
 	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &unit->status);
 }
 
 static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action)
 {
 	list_move(&erp_action->list, &erp_action->adapter->erp_running_head);
-	zfcp_rec_dbf_event_action(145, erp_action);
+	zfcp_rec_dbf_event_action("erator1", erp_action);
 }
 
 static void zfcp_erp_strategy_check_fsfreq(struct zfcp_erp_action *act)
@@ -474,11 +480,11 @@
 		if (act->status & (ZFCP_STATUS_ERP_DISMISSED |
 				   ZFCP_STATUS_ERP_TIMEDOUT)) {
 			act->fsf_req->status |= ZFCP_STATUS_FSFREQ_DISMISSED;
-			zfcp_rec_dbf_event_action(142, act);
+			zfcp_rec_dbf_event_action("erscf_1", act);
 			act->fsf_req->erp_action = NULL;
 		}
 		if (act->status & ZFCP_STATUS_ERP_TIMEDOUT)
-			zfcp_rec_dbf_event_action(143, act);
+			zfcp_rec_dbf_event_action("erscf_2", act);
 		if (act->fsf_req->status & (ZFCP_STATUS_FSFREQ_COMPLETED |
 					    ZFCP_STATUS_FSFREQ_DISMISSED))
 			act->fsf_req = NULL;
@@ -530,7 +536,7 @@
 }
 
 static void _zfcp_erp_port_reopen_all(struct zfcp_adapter *adapter,
-				      int clear, u8 id, void *ref)
+				      int clear, char *id, void *ref)
 {
 	struct zfcp_port *port;
 
@@ -538,8 +544,8 @@
 		_zfcp_erp_port_reopen(port, clear, id, ref);
 }
 
-static void _zfcp_erp_unit_reopen_all(struct zfcp_port *port, int clear, u8 id,
-				      void *ref)
+static void _zfcp_erp_unit_reopen_all(struct zfcp_port *port, int clear,
+				      char *id, void *ref)
 {
 	struct zfcp_unit *unit;
 
@@ -559,28 +565,28 @@
 
 	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
 		if (status == ZFCP_ERP_SUCCEEDED)
-			_zfcp_erp_port_reopen_all(adapter, 0, 70, NULL);
+			_zfcp_erp_port_reopen_all(adapter, 0, "ersfa_1", NULL);
 		else
-			_zfcp_erp_adapter_reopen(adapter, 0, 71, NULL);
+			_zfcp_erp_adapter_reopen(adapter, 0, "ersfa_2", NULL);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
 		if (status == ZFCP_ERP_SUCCEEDED)
-			_zfcp_erp_port_reopen(port, 0, 72, NULL);
+			_zfcp_erp_port_reopen(port, 0, "ersfa_3", NULL);
 		else
-			_zfcp_erp_adapter_reopen(adapter, 0, 73, NULL);
+			_zfcp_erp_adapter_reopen(adapter, 0, "ersfa_4", NULL);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_PORT:
 		if (status == ZFCP_ERP_SUCCEEDED)
-			_zfcp_erp_unit_reopen_all(port, 0, 74, NULL);
+			_zfcp_erp_unit_reopen_all(port, 0, "ersfa_5", NULL);
 		else
-			_zfcp_erp_port_forced_reopen(port, 0, 75, NULL);
+			_zfcp_erp_port_forced_reopen(port, 0, "ersfa_6", NULL);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_UNIT:
 		if (status != ZFCP_ERP_SUCCEEDED)
-			_zfcp_erp_port_reopen(unit->port, 0, 76, NULL);
+			_zfcp_erp_port_reopen(unit->port, 0, "ersfa_7", NULL);
 		break;
 	}
 }
@@ -617,7 +623,7 @@
 				 adapter->peer_d_id);
 	if (IS_ERR(port)) /* error or port already attached */
 		return;
-	_zfcp_erp_port_reopen(port, 0, 150, NULL);
+	_zfcp_erp_port_reopen(port, 0, "ereptp1", NULL);
 }
 
 static int zfcp_erp_adapter_strat_fsf_xconf(struct zfcp_erp_action *erp_action)
@@ -640,9 +646,9 @@
 			return ZFCP_ERP_FAILED;
 		}
 
-		zfcp_rec_dbf_event_thread_lock(6, adapter);
+		zfcp_rec_dbf_event_thread_lock("erasfx1", adapter);
 		down(&adapter->erp_ready_sem);
-		zfcp_rec_dbf_event_thread_lock(7, adapter);
+		zfcp_rec_dbf_event_thread_lock("erasfx2", adapter);
 		if (erp_action->status & ZFCP_STATUS_ERP_TIMEDOUT)
 			break;
 
@@ -681,9 +687,9 @@
 	if (ret)
 		return ZFCP_ERP_FAILED;
 
-	zfcp_rec_dbf_event_thread_lock(8, adapter);
+	zfcp_rec_dbf_event_thread_lock("erasox1", adapter);
 	down(&adapter->erp_ready_sem);
-	zfcp_rec_dbf_event_thread_lock(9, adapter);
+	zfcp_rec_dbf_event_thread_lock("erasox2", adapter);
 	if (act->status & ZFCP_STATUS_ERP_TIMEDOUT)
 		return ZFCP_ERP_FAILED;
 
@@ -705,60 +711,59 @@
 	return ZFCP_ERP_SUCCEEDED;
 }
 
-static int zfcp_erp_adapter_strategy_generic(struct zfcp_erp_action *act,
-					     int close)
+static void zfcp_erp_adapter_strategy_close(struct zfcp_erp_action *act)
 {
-	int retval = ZFCP_ERP_SUCCEEDED;
 	struct zfcp_adapter *adapter = act->adapter;
 
-	if (close)
-		goto close_only;
-
-	retval = zfcp_erp_adapter_strategy_open_qdio(act);
-	if (retval != ZFCP_ERP_SUCCEEDED)
-		goto failed_qdio;
-
-	retval = zfcp_erp_adapter_strategy_open_fsf(act);
-	if (retval != ZFCP_ERP_SUCCEEDED)
-		goto failed_openfcp;
-
-	atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &act->adapter->status);
-
-	return ZFCP_ERP_SUCCEEDED;
-
- close_only:
-	atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN,
-			  &act->adapter->status);
-
- failed_openfcp:
 	/* close queues to ensure that buffers are not accessed by adapter */
 	zfcp_qdio_close(adapter);
 	zfcp_fsf_req_dismiss_all(adapter);
 	adapter->fsf_req_seq_no = 0;
 	/* all ports and units are closed */
-	zfcp_erp_modify_adapter_status(adapter, 24, NULL,
+	zfcp_erp_modify_adapter_status(adapter, "erascl1", NULL,
 				       ZFCP_STATUS_COMMON_OPEN, ZFCP_CLEAR);
- failed_qdio:
+
 	atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
-			  ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED,
-			  &act->adapter->status);
-	return retval;
+			  ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
+}
+
+static int zfcp_erp_adapter_strategy_open(struct zfcp_erp_action *act)
+{
+	struct zfcp_adapter *adapter = act->adapter;
+
+	if (zfcp_erp_adapter_strategy_open_qdio(act)) {
+		atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
+				  ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED,
+				  &adapter->status);
+		return ZFCP_ERP_FAILED;
+	}
+
+	if (zfcp_erp_adapter_strategy_open_fsf(act)) {
+		zfcp_erp_adapter_strategy_close(act);
+		return ZFCP_ERP_FAILED;
+	}
+
+	atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &adapter->status);
+
+	return ZFCP_ERP_SUCCEEDED;
 }
 
 static int zfcp_erp_adapter_strategy(struct zfcp_erp_action *act)
 {
-	int retval;
+	struct zfcp_adapter *adapter = act->adapter;
 
-	zfcp_erp_adapter_strategy_generic(act, 1); /* close */
-	if (act->status & ZFCP_STATUS_ERP_CLOSE_ONLY)
-		return ZFCP_ERP_EXIT;
+	if (atomic_read(&adapter->status) & ZFCP_STATUS_COMMON_OPEN) {
+		zfcp_erp_adapter_strategy_close(act);
+		if (act->status & ZFCP_STATUS_ERP_CLOSE_ONLY)
+			return ZFCP_ERP_EXIT;
+	}
 
-	retval = zfcp_erp_adapter_strategy_generic(act, 0); /* open */
-
-	if (retval == ZFCP_ERP_FAILED)
+	if (zfcp_erp_adapter_strategy_open(act)) {
 		ssleep(8);
+		return ZFCP_ERP_FAILED;
+	}
 
-	return retval;
+	return ZFCP_ERP_SUCCEEDED;
 }
 
 static int zfcp_erp_port_forced_strategy_close(struct zfcp_erp_action *act)
@@ -777,10 +782,7 @@
 
 static void zfcp_erp_port_strategy_clearstati(struct zfcp_port *port)
 {
-	atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED |
-			  ZFCP_STATUS_PORT_PHYS_CLOSING |
-			  ZFCP_STATUS_PORT_INVALID_WWPN,
-			  &port->status);
+	atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED, &port->status);
 }
 
 static int zfcp_erp_port_forced_strategy(struct zfcp_erp_action *erp_action)
@@ -836,7 +838,7 @@
 	struct zfcp_port *port = act->port;
 
 	if (port->wwpn != adapter->peer_wwpn) {
-		zfcp_erp_port_failed(port, 25, NULL);
+		zfcp_erp_port_failed(port, "eroptp1", NULL);
 		return ZFCP_ERP_FAILED;
 	}
 	port->d_id = adapter->peer_d_id;
@@ -855,7 +857,7 @@
 	port->erp_action.step = ZFCP_ERP_STEP_NAMESERVER_LOOKUP;
 	if (retval)
 		zfcp_erp_notify(&port->erp_action, ZFCP_ERP_FAILED);
-
+	zfcp_port_put(port);
 }
 
 static int zfcp_erp_port_strategy_open_common(struct zfcp_erp_action *act)
@@ -871,17 +873,15 @@
 		if (fc_host_port_type(adapter->scsi_host) == FC_PORTTYPE_PTP)
 			return zfcp_erp_open_ptp_port(act);
 		if (!port->d_id) {
-			queue_work(zfcp_data.work_queue, &port->gid_pn_work);
+			zfcp_port_get(port);
+			if (!queue_work(zfcp_data.work_queue,
+					&port->gid_pn_work))
+				zfcp_port_put(port);
 			return ZFCP_ERP_CONTINUES;
 		}
 	case ZFCP_ERP_STEP_NAMESERVER_LOOKUP:
-		if (!port->d_id) {
-			if (p_status & (ZFCP_STATUS_PORT_INVALID_WWPN)) {
-				zfcp_erp_port_failed(port, 26, NULL);
-				return ZFCP_ERP_EXIT;
-			}
+		if (!port->d_id)
 			return ZFCP_ERP_FAILED;
-		}
 		return zfcp_erp_port_strategy_open_port(act);
 
 	case ZFCP_ERP_STEP_PORT_OPENING:
@@ -995,7 +995,7 @@
 				"port 0x%016Lx\n",
 				(unsigned long long)unit->fcp_lun,
 				(unsigned long long)unit->port->wwpn);
-			zfcp_erp_unit_failed(unit, 21, NULL);
+			zfcp_erp_unit_failed(unit, "erusck1", NULL);
 		}
 		break;
 	}
@@ -1025,7 +1025,7 @@
 			dev_err(&port->adapter->ccw_device->dev,
 				"ERP failed for remote port 0x%016Lx\n",
 				(unsigned long long)port->wwpn);
-			zfcp_erp_port_failed(port, 22, NULL);
+			zfcp_erp_port_failed(port, "erpsck1", NULL);
 		}
 		break;
 	}
@@ -1052,7 +1052,7 @@
 			dev_err(&adapter->ccw_device->dev,
 				"ERP cannot recover an error "
 				"on the FCP device\n");
-			zfcp_erp_adapter_failed(adapter, 23, NULL);
+			zfcp_erp_adapter_failed(adapter, "erasck1", NULL);
 		}
 		break;
 	}
@@ -1117,7 +1117,7 @@
 		if (zfcp_erp_strat_change_det(&adapter->status, erp_status)) {
 			_zfcp_erp_adapter_reopen(adapter,
 						 ZFCP_STATUS_COMMON_ERP_FAILED,
-						 67, NULL);
+						 "ersscg1", NULL);
 			return ZFCP_ERP_EXIT;
 		}
 		break;
@@ -1127,7 +1127,7 @@
 		if (zfcp_erp_strat_change_det(&port->status, erp_status)) {
 			_zfcp_erp_port_reopen(port,
 					      ZFCP_STATUS_COMMON_ERP_FAILED,
-					      68, NULL);
+					      "ersscg2", NULL);
 			return ZFCP_ERP_EXIT;
 		}
 		break;
@@ -1136,7 +1136,7 @@
 		if (zfcp_erp_strat_change_det(&unit->status, erp_status)) {
 			_zfcp_erp_unit_reopen(unit,
 					      ZFCP_STATUS_COMMON_ERP_FAILED,
-					      69, NULL);
+					      "ersscg3", NULL);
 			return ZFCP_ERP_EXIT;
 		}
 		break;
@@ -1155,7 +1155,7 @@
 	}
 
 	list_del(&erp_action->list);
-	zfcp_rec_dbf_event_action(144, erp_action);
+	zfcp_rec_dbf_event_action("eractd1", erp_action);
 
 	switch (erp_action->action) {
 	case ZFCP_ERP_ACTION_REOPEN_UNIT:
@@ -1214,38 +1214,8 @@
 	atomic_set_mask(ZFCP_STATUS_UNIT_SCSI_WORK_PENDING, &unit->status);
 	INIT_WORK(&p->work, zfcp_erp_scsi_scan);
 	p->unit = unit;
-	queue_work(zfcp_data.work_queue, &p->work);
-}
-
-static void zfcp_erp_rport_register(struct zfcp_port *port)
-{
-	struct fc_rport_identifiers ids;
-	ids.node_name = port->wwnn;
-	ids.port_name = port->wwpn;
-	ids.port_id = port->d_id;
-	ids.roles = FC_RPORT_ROLE_FCP_TARGET;
-	port->rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids);
-	if (!port->rport) {
-		dev_err(&port->adapter->ccw_device->dev,
-			"Registering port 0x%016Lx failed\n",
-			(unsigned long long)port->wwpn);
-		return;
-	}
-
-	scsi_target_unblock(&port->rport->dev);
-	port->rport->maxframe_size = port->maxframe_size;
-	port->rport->supported_classes = port->supported_classes;
-}
-
-static void zfcp_erp_rports_del(struct zfcp_adapter *adapter)
-{
-	struct zfcp_port *port;
-	list_for_each_entry(port, &adapter->port_list_head, list) {
-		if (!port->rport)
-			continue;
-		fc_remote_port_delete(port->rport);
-		port->rport = NULL;
-	}
+	if (!queue_work(zfcp_data.work_queue, &p->work))
+		zfcp_unit_put(unit);
 }
 
 static void zfcp_erp_action_cleanup(struct zfcp_erp_action *act, int result)
@@ -1256,10 +1226,8 @@
 
 	switch (act->action) {
 	case ZFCP_ERP_ACTION_REOPEN_UNIT:
-		if ((result == ZFCP_ERP_SUCCEEDED) &&
-		    !unit->device && port->rport) {
-			atomic_set_mask(ZFCP_STATUS_UNIT_REGISTERED,
-					&unit->status);
+		flush_work(&port->rport_work);
+		if ((result == ZFCP_ERP_SUCCEEDED) && !unit->device) {
 			if (!(atomic_read(&unit->status) &
 			      ZFCP_STATUS_UNIT_SCSI_WORK_PENDING))
 				zfcp_erp_schedule_work(unit);
@@ -1269,27 +1237,17 @@
 
 	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
 	case ZFCP_ERP_ACTION_REOPEN_PORT:
-		if (atomic_read(&port->status) & ZFCP_STATUS_PORT_NO_WWPN) {
-			zfcp_port_put(port);
-			return;
-		}
-		if ((result == ZFCP_ERP_SUCCEEDED) && !port->rport)
-			zfcp_erp_rport_register(port);
-		if ((result != ZFCP_ERP_SUCCEEDED) && port->rport) {
-			fc_remote_port_delete(port->rport);
-			port->rport = NULL;
-		}
+		if (result == ZFCP_ERP_SUCCEEDED)
+			zfcp_scsi_schedule_rport_register(port);
 		zfcp_port_put(port);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
-		if (result != ZFCP_ERP_SUCCEEDED) {
-			unregister_service_level(&adapter->service_level);
-			zfcp_erp_rports_del(adapter);
-		} else {
+		if (result == ZFCP_ERP_SUCCEEDED) {
 			register_service_level(&adapter->service_level);
 			schedule_work(&adapter->scan_work);
-		}
+		} else
+			unregister_service_level(&adapter->service_level);
 		zfcp_adapter_put(adapter);
 		break;
 	}
@@ -1346,7 +1304,7 @@
 			erp_action->status |= ZFCP_STATUS_ERP_LOWMEM;
 		}
 		if (adapter->erp_total_count == adapter->erp_low_mem_count)
-			_zfcp_erp_adapter_reopen(adapter, 0, 66, NULL);
+			_zfcp_erp_adapter_reopen(adapter, 0, "erstgy1", NULL);
 		else {
 			zfcp_erp_strategy_memwait(erp_action);
 			retval = ZFCP_ERP_CONTINUES;
@@ -1406,9 +1364,9 @@
 				zfcp_erp_wakeup(adapter);
 		}
 
-		zfcp_rec_dbf_event_thread_lock(4, adapter);
+		zfcp_rec_dbf_event_thread_lock("erthrd1", adapter);
 		ignore = down_interruptible(&adapter->erp_ready_sem);
-		zfcp_rec_dbf_event_thread_lock(5, adapter);
+		zfcp_rec_dbf_event_thread_lock("erthrd2", adapter);
 	}
 
 	atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_UP, &adapter->status);
@@ -1453,7 +1411,7 @@
 {
 	atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_THREAD_KILL, &adapter->status);
 	up(&adapter->erp_ready_sem);
-	zfcp_rec_dbf_event_thread_lock(3, adapter);
+	zfcp_rec_dbf_event_thread_lock("erthrk1", adapter);
 
 	wait_event(adapter->erp_thread_wqh,
 		   !(atomic_read(&adapter->status) &
@@ -1469,7 +1427,7 @@
  * @id: Event id for debug trace.
  * @ref: Reference for debug trace.
  */
-void zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, u8 id, void *ref)
+void zfcp_erp_adapter_failed(struct zfcp_adapter *adapter, char *id, void *ref)
 {
 	zfcp_erp_modify_adapter_status(adapter, id, ref,
 				       ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET);
@@ -1481,7 +1439,7 @@
  * @id: Event id for debug trace.
  * @ref: Reference for debug trace.
  */
-void zfcp_erp_port_failed(struct zfcp_port *port, u8 id, void *ref)
+void zfcp_erp_port_failed(struct zfcp_port *port, char *id, void *ref)
 {
 	zfcp_erp_modify_port_status(port, id, ref,
 				    ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET);
@@ -1493,7 +1451,7 @@
  * @id: Event id for debug trace.
  * @ref: Reference for debug trace.
  */
-void zfcp_erp_unit_failed(struct zfcp_unit *unit, u8 id, void *ref)
+void zfcp_erp_unit_failed(struct zfcp_unit *unit, char *id, void *ref)
 {
 	zfcp_erp_modify_unit_status(unit, id, ref,
 				    ZFCP_STATUS_COMMON_ERP_FAILED, ZFCP_SET);
@@ -1520,7 +1478,7 @@
  *
  * Changes in common status bits are propagated to attached ports and units.
  */
-void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, u8 id,
+void zfcp_erp_modify_adapter_status(struct zfcp_adapter *adapter, char *id,
 				    void *ref, u32 mask, int set_or_clear)
 {
 	struct zfcp_port *port;
@@ -1554,7 +1512,7 @@
  *
  * Changes in common status bits are propagated to attached units.
  */
-void zfcp_erp_modify_port_status(struct zfcp_port *port, u8 id, void *ref,
+void zfcp_erp_modify_port_status(struct zfcp_port *port, char *id, void *ref,
 				 u32 mask, int set_or_clear)
 {
 	struct zfcp_unit *unit;
@@ -1586,7 +1544,7 @@
  * @mask: status bits to change
  * @set_or_clear: ZFCP_SET or ZFCP_CLEAR
  */
-void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, u8 id, void *ref,
+void zfcp_erp_modify_unit_status(struct zfcp_unit *unit, char *id, void *ref,
 				 u32 mask, int set_or_clear)
 {
 	if (set_or_clear == ZFCP_SET) {
@@ -1609,7 +1567,7 @@
  * @id: The debug trace id.
  * @ref: Reference for the debug trace.
  */
-void zfcp_erp_port_boxed(struct zfcp_port *port, u8 id, void *ref)
+void zfcp_erp_port_boxed(struct zfcp_port *port, char *id, void *ref)
 {
 	unsigned long flags;
 
@@ -1626,7 +1584,7 @@
  * @id: The debug trace id.
  * @ref: Reference for the debug trace.
  */
-void zfcp_erp_unit_boxed(struct zfcp_unit *unit, u8 id, void *ref)
+void zfcp_erp_unit_boxed(struct zfcp_unit *unit, char *id, void *ref)
 {
 	zfcp_erp_modify_unit_status(unit, id, ref,
 				    ZFCP_STATUS_COMMON_ACCESS_BOXED, ZFCP_SET);
@@ -1642,7 +1600,7 @@
  * Since the adapter has denied access, stop using the port and the
  * attached units.
  */
-void zfcp_erp_port_access_denied(struct zfcp_port *port, u8 id, void *ref)
+void zfcp_erp_port_access_denied(struct zfcp_port *port, char *id, void *ref)
 {
 	unsigned long flags;
 
@@ -1661,14 +1619,14 @@
  *
  * Since the adapter has denied access, stop using the unit.
  */
-void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, u8 id, void *ref)
+void zfcp_erp_unit_access_denied(struct zfcp_unit *unit, char *id, void *ref)
 {
 	zfcp_erp_modify_unit_status(unit, id, ref,
 				    ZFCP_STATUS_COMMON_ERP_FAILED |
 				    ZFCP_STATUS_COMMON_ACCESS_DENIED, ZFCP_SET);
 }
 
-static void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, u8 id,
+static void zfcp_erp_unit_access_changed(struct zfcp_unit *unit, char *id,
 					 void *ref)
 {
 	int status = atomic_read(&unit->status);
@@ -1679,7 +1637,7 @@
 	zfcp_erp_unit_reopen(unit, ZFCP_STATUS_COMMON_ERP_FAILED, id, ref);
 }
 
-static void zfcp_erp_port_access_changed(struct zfcp_port *port, u8 id,
+static void zfcp_erp_port_access_changed(struct zfcp_port *port, char *id,
 					 void *ref)
 {
 	struct zfcp_unit *unit;
@@ -1701,7 +1659,7 @@
  * @id: Id for debug trace
  * @ref: Reference for debug trace
  */
-void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, u8 id,
+void zfcp_erp_adapter_access_changed(struct zfcp_adapter *adapter, char *id,
 				     void *ref)
 {
 	struct zfcp_port *port;
diff --git a/drivers/s390/scsi/zfcp_ext.h b/drivers/s390/scsi/zfcp_ext.h
index b5adeda..f6399ca 100644
--- a/drivers/s390/scsi/zfcp_ext.h
+++ b/drivers/s390/scsi/zfcp_ext.h
@@ -3,7 +3,7 @@
  *
  * External function declarations.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #ifndef ZFCP_EXT_H
@@ -35,15 +35,15 @@
 /* zfcp_dbf.c */
 extern int zfcp_adapter_debug_register(struct zfcp_adapter *);
 extern void zfcp_adapter_debug_unregister(struct zfcp_adapter *);
-extern void zfcp_rec_dbf_event_thread(u8, struct zfcp_adapter *);
-extern void zfcp_rec_dbf_event_thread_lock(u8, struct zfcp_adapter *);
-extern void zfcp_rec_dbf_event_adapter(u8, void *, struct zfcp_adapter *);
-extern void zfcp_rec_dbf_event_port(u8, void *, struct zfcp_port *);
-extern void zfcp_rec_dbf_event_unit(u8, void *, struct zfcp_unit *);
-extern void zfcp_rec_dbf_event_trigger(u8, void *, u8, u8, void *,
+extern void zfcp_rec_dbf_event_thread(char *, struct zfcp_adapter *);
+extern void zfcp_rec_dbf_event_thread_lock(char *, struct zfcp_adapter *);
+extern void zfcp_rec_dbf_event_adapter(char *, void *, struct zfcp_adapter *);
+extern void zfcp_rec_dbf_event_port(char *, void *, struct zfcp_port *);
+extern void zfcp_rec_dbf_event_unit(char *, void *, struct zfcp_unit *);
+extern void zfcp_rec_dbf_event_trigger(char *, void *, u8, u8, void *,
 				       struct zfcp_adapter *,
 				       struct zfcp_port *, struct zfcp_unit *);
-extern void zfcp_rec_dbf_event_action(u8, struct zfcp_erp_action *);
+extern void zfcp_rec_dbf_event_action(char *, struct zfcp_erp_action *);
 extern void zfcp_hba_dbf_event_fsf_response(struct zfcp_fsf_req *);
 extern void zfcp_hba_dbf_event_fsf_unsol(const char *, struct zfcp_adapter *,
 					 struct fsf_status_read_buffer *);
@@ -66,31 +66,34 @@
 					 struct scsi_cmnd *);
 
 /* zfcp_erp.c */
-extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, u8, void *,
-					   u32, int);
-extern void zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, u8, void *);
-extern void zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, u8, void *);
-extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, u8, void *);
-extern void zfcp_erp_modify_port_status(struct zfcp_port *, u8, void *, u32,
+extern void zfcp_erp_modify_adapter_status(struct zfcp_adapter *, char *,
+					   void *, u32, int);
+extern void zfcp_erp_adapter_reopen(struct zfcp_adapter *, int, char *, void *);
+extern void zfcp_erp_adapter_shutdown(struct zfcp_adapter *, int, char *,
+				      void *);
+extern void zfcp_erp_adapter_failed(struct zfcp_adapter *, char *, void *);
+extern void zfcp_erp_modify_port_status(struct zfcp_port *, char *, void *, u32,
 					int);
-extern int  zfcp_erp_port_reopen(struct zfcp_port *, int, u8, void *);
-extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, u8, void *);
-extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, u8, void *);
-extern void zfcp_erp_port_failed(struct zfcp_port *, u8, void *);
-extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, u8, void *, u32,
+extern int  zfcp_erp_port_reopen(struct zfcp_port *, int, char *, void *);
+extern void zfcp_erp_port_shutdown(struct zfcp_port *, int, char *, void *);
+extern void zfcp_erp_port_forced_reopen(struct zfcp_port *, int, char *,
+					void *);
+extern void zfcp_erp_port_failed(struct zfcp_port *, char *, void *);
+extern void zfcp_erp_modify_unit_status(struct zfcp_unit *, char *, void *, u32,
 					int);
-extern void zfcp_erp_unit_reopen(struct zfcp_unit *, int, u8, void *);
-extern void zfcp_erp_unit_shutdown(struct zfcp_unit *, int, u8, void *);
-extern void zfcp_erp_unit_failed(struct zfcp_unit *, u8, void *);
+extern void zfcp_erp_unit_reopen(struct zfcp_unit *, int, char *, void *);
+extern void zfcp_erp_unit_shutdown(struct zfcp_unit *, int, char *, void *);
+extern void zfcp_erp_unit_failed(struct zfcp_unit *, char *, void *);
 extern int  zfcp_erp_thread_setup(struct zfcp_adapter *);
 extern void zfcp_erp_thread_kill(struct zfcp_adapter *);
 extern void zfcp_erp_wait(struct zfcp_adapter *);
 extern void zfcp_erp_notify(struct zfcp_erp_action *, unsigned long);
-extern void zfcp_erp_port_boxed(struct zfcp_port *, u8, void *);
-extern void zfcp_erp_unit_boxed(struct zfcp_unit *, u8, void *);
-extern void zfcp_erp_port_access_denied(struct zfcp_port *, u8, void *);
-extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, u8, void *);
-extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *, u8, void *);
+extern void zfcp_erp_port_boxed(struct zfcp_port *, char *, void *);
+extern void zfcp_erp_unit_boxed(struct zfcp_unit *, char *, void *);
+extern void zfcp_erp_port_access_denied(struct zfcp_port *, char *, void *);
+extern void zfcp_erp_unit_access_denied(struct zfcp_unit *, char *, void *);
+extern void zfcp_erp_adapter_access_changed(struct zfcp_adapter *, char *,
+					    void *);
 extern void zfcp_erp_timeout_handler(unsigned long);
 extern void zfcp_erp_port_strategy_open_lookup(struct work_struct *);
 
@@ -101,6 +104,7 @@
 extern int zfcp_fc_ns_gid_pn(struct zfcp_erp_action *);
 extern void zfcp_fc_plogi_evaluate(struct zfcp_port *, struct fsf_plogi *);
 extern void zfcp_test_link(struct zfcp_port *);
+extern void zfcp_fc_link_test_work(struct work_struct *);
 extern void zfcp_fc_nameserver_init(struct zfcp_adapter *);
 
 /* zfcp_fsf.c */
@@ -125,16 +129,13 @@
 extern int zfcp_fsf_send_ct(struct zfcp_send_ct *, mempool_t *,
 			    struct zfcp_erp_action *);
 extern int zfcp_fsf_send_els(struct zfcp_send_els *);
-extern int zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *,
-					  struct zfcp_unit *,
-					  struct scsi_cmnd *, int, int);
+extern int zfcp_fsf_send_fcp_command_task(struct zfcp_unit *,
+					  struct scsi_cmnd *);
 extern void zfcp_fsf_req_complete(struct zfcp_fsf_req *);
 extern void zfcp_fsf_req_free(struct zfcp_fsf_req *);
-extern struct zfcp_fsf_req *zfcp_fsf_send_fcp_ctm(struct zfcp_adapter *,
-						  struct zfcp_unit *, u8, int);
+extern struct zfcp_fsf_req *zfcp_fsf_send_fcp_ctm(struct zfcp_unit *, u8);
 extern struct zfcp_fsf_req *zfcp_fsf_abort_fcp_command(unsigned long,
-						       struct zfcp_adapter *,
-						       struct zfcp_unit *, int);
+						       struct zfcp_unit *);
 
 /* zfcp_qdio.c */
 extern int zfcp_qdio_allocate(struct zfcp_adapter *);
@@ -153,6 +154,10 @@
 extern void zfcp_adapter_scsi_unregister(struct zfcp_adapter *);
 extern char *zfcp_get_fcp_sns_info_ptr(struct fcp_rsp_iu *);
 extern struct fc_function_template zfcp_transport_functions;
+extern void zfcp_scsi_rport_work(struct work_struct *);
+extern void zfcp_scsi_schedule_rport_register(struct zfcp_port *);
+extern void zfcp_scsi_schedule_rport_block(struct zfcp_port *);
+extern void zfcp_scsi_schedule_rports_block(struct zfcp_adapter *);
 
 /* zfcp_sysfs.c */
 extern struct attribute_group zfcp_sysfs_unit_attrs;
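
For context on the prototype changes above (every u8 id argument becoming char *id): the numeric debug-trace ids are replaced throughout this patch by short mnemonic tags such as "fsrth_1" or "erthrd1", each seven characters long. A minimal illustrative sketch of how such fixed-width tags can be stored; the structure and helper below are invented for illustration and are not part of the zfcp sources:

	#include <string.h>

	/* Hypothetical trace record: the id is a fixed-width, seven-character
	 * tag (subsystem, function, ordinal), not a NUL-terminated string. */
	struct demo_rec {
		char id[7];
		void *ref;	/* opaque reference handed along for the trace */
	};

	static void demo_rec_set_id(struct demo_rec *rec, const char *tag)
	{
		/* callers pass seven-character literals like "erthrd1" */
		memcpy(rec->id, tag, sizeof(rec->id));
	}
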
diff --git a/drivers/s390/scsi/zfcp_fc.c b/drivers/s390/scsi/zfcp_fc.c
index eabdfe2..aab8123 100644
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -3,7 +3,7 @@
  *
  * Fibre Channel related functions for the zfcp device driver.
  *
- * Copyright IBM Corporation 2008
+ * Copyright IBM Corporation 2008, 2009
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -98,8 +98,12 @@
 	struct zfcp_wka_port *wka_port =
 			container_of(dw, struct zfcp_wka_port, work);
 
-	wait_event(wka_port->completion_wq,
-			atomic_read(&wka_port->refcount) == 0);
+	/* Don't wait forever. If the wka_port is too busy, take it offline
+	   through a new call later */
+	if (!wait_event_timeout(wka_port->completion_wq,
+				atomic_read(&wka_port->refcount) == 0,
+				HZ >> 1))
+		return;
 
 	mutex_lock(&wka_port->mutex);
 	if ((atomic_read(&wka_port->refcount) != 0) ||
@@ -145,16 +149,10 @@
 	struct zfcp_port *port;
 
 	read_lock_irqsave(&zfcp_data.config_lock, flags);
-	list_for_each_entry(port, &fsf_req->adapter->port_list_head, list) {
-		if (!(atomic_read(&port->status) & ZFCP_STATUS_PORT_PHYS_OPEN))
-			/* Try to connect to unused ports anyway. */
-			zfcp_erp_port_reopen(port,
-					     ZFCP_STATUS_COMMON_ERP_FAILED,
-					     82, fsf_req);
-		else if ((port->d_id & range) == (elem->nport_did & range))
-			/* Check connection status for connected ports */
+	list_for_each_entry(port, &fsf_req->adapter->port_list_head, list)
+		if ((port->d_id & range) == (elem->nport_did & range))
 			zfcp_test_link(port);
-	}
+
 	read_unlock_irqrestore(&zfcp_data.config_lock, flags);
 }
 
@@ -196,7 +194,7 @@
 	read_unlock_irqrestore(&zfcp_data.config_lock, flags);
 
 	if (port && (port->wwpn == wwpn))
-		zfcp_erp_port_forced_reopen(port, 0, 83, req);
+		zfcp_erp_port_forced_reopen(port, 0, "fciwwp1", req);
 }
 
 static void zfcp_fc_incoming_plogi(struct zfcp_fsf_req *req)
@@ -259,10 +257,9 @@
 
 	if (ct->status)
 		return;
-	if (ct_iu_resp->header.cmd_rsp_code != ZFCP_CT_ACCEPT) {
-		atomic_set_mask(ZFCP_STATUS_PORT_INVALID_WWPN, &port->status);
+	if (ct_iu_resp->header.cmd_rsp_code != ZFCP_CT_ACCEPT)
 		return;
-	}
+
 	/* paranoia */
 	if (ct_iu_req->wwpn != port->wwpn)
 		return;
@@ -375,16 +372,22 @@
 
 	if (adisc->els.status) {
 		/* request rejected or timed out */
-		zfcp_erp_port_forced_reopen(port, 0, 63, NULL);
+		zfcp_erp_port_forced_reopen(port, 0, "fcadh_1", NULL);
 		goto out;
 	}
 
 	if (!port->wwnn)
 		port->wwnn = ls_adisc->wwnn;
 
-	if (port->wwpn != ls_adisc->wwpn)
-		zfcp_erp_port_reopen(port, 0, 64, NULL);
+	if ((port->wwpn != ls_adisc->wwpn) ||
+	    !(atomic_read(&port->status) & ZFCP_STATUS_COMMON_OPEN)) {
+		zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED,
+				     "fcadh_2", NULL);
+		goto out;
+	}
 
+	/* port is good, unblock rport without going through erp */
+	zfcp_scsi_schedule_rport_register(port);
  out:
 	zfcp_port_put(port);
 	kfree(adisc);
@@ -422,6 +425,31 @@
 	return zfcp_fsf_send_els(&adisc->els);
 }
 
+void zfcp_fc_link_test_work(struct work_struct *work)
+{
+	struct zfcp_port *port =
+		container_of(work, struct zfcp_port, test_link_work);
+	int retval;
+
+	if (!(atomic_read(&port->status) & ZFCP_STATUS_COMMON_UNBLOCKED)) {
+		zfcp_port_put(port);
+		return; /* port erp is running and will update rport status */
+	}
+
+	zfcp_port_get(port);
+	port->rport_task = RPORT_DEL;
+	zfcp_scsi_rport_work(&port->rport_work);
+
+	retval = zfcp_fc_adisc(port);
+	if (retval == 0)
+		return;
+
+	/* send of ADISC was not possible */
+	zfcp_erp_port_forced_reopen(port, 0, "fcltwk1", NULL);
+
+	zfcp_port_put(port);
+}
+
 /**
  * zfcp_test_link - lightweight link test procedure
  * @port: port to be tested
@@ -432,17 +460,9 @@
  */
 void zfcp_test_link(struct zfcp_port *port)
 {
-	int retval;
-
 	zfcp_port_get(port);
-	retval = zfcp_fc_adisc(port);
-	if (retval == 0)
-		return;
-
-	/* send of ADISC was not possible */
-	zfcp_port_put(port);
-	if (retval != -EBUSY)
-		zfcp_erp_port_forced_reopen(port, 0, 65, NULL);
+	if (!queue_work(zfcp_data.work_queue, &port->test_link_work))
+		zfcp_port_put(port);
 }
 
 static void zfcp_free_sg_env(struct zfcp_gpn_ft *gpn_ft, int buf_num)
@@ -529,7 +549,7 @@
 		zfcp_port_put(port);
 		return;
 	}
-	zfcp_erp_port_shutdown(port, 0, 151, NULL);
+	zfcp_erp_port_shutdown(port, 0, "fcpval1", NULL);
 	zfcp_erp_wait(adapter);
 	zfcp_port_put(port);
 	zfcp_port_dequeue(port);
@@ -592,7 +612,7 @@
 		if (IS_ERR(port))
 			ret = PTR_ERR(port);
 		else
-			zfcp_erp_port_reopen(port, 0, 149, NULL);
+			zfcp_erp_port_reopen(port, 0, "fcegpf1", NULL);
 	}
 
 	zfcp_erp_wait(adapter);
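
The zfcp_fc.c changes above move the ADISC link test out of the caller's context: zfcp_test_link() now only takes a port reference and queues test_link_work, dropping the reference again when the work item was already pending, while zfcp_fc_link_test_work() performs the ADISC. A minimal sketch of that schedule-or-drop pattern, using kref and the generic workqueue API purely for illustration (zfcp itself uses its own zfcp_port_get/zfcp_port_put counters and zfcp_data.work_queue); initialization of the kref and work item is omitted:

	#include <linux/kref.h>
	#include <linux/slab.h>
	#include <linux/workqueue.h>

	struct demo_port {
		struct kref ref;
		struct work_struct test_link_work;
	};

	static void demo_port_release(struct kref *kref)
	{
		kfree(container_of(kref, struct demo_port, ref));
	}

	/* Pin the port for the work item; queue_work() returns false when the
	 * work is already queued, so the extra reference is dropped again. */
	static void demo_schedule_link_test(struct workqueue_struct *wq,
					    struct demo_port *port)
	{
		kref_get(&port->ref);
		if (!queue_work(wq, &port->test_link_work))
			kref_put(&port->ref, demo_port_release);
	}
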
diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c
index e6416f8..b29f312 100644
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -3,7 +3,7 @@
  *
  * Implementation of FSF commands.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -12,11 +12,14 @@
 #include <linux/blktrace_api.h>
 #include "zfcp_ext.h"
 
+#define ZFCP_REQ_AUTO_CLEANUP	0x00000002
+#define ZFCP_REQ_NO_QTCB	0x00000008
+
 static void zfcp_fsf_request_timeout_handler(unsigned long data)
 {
 	struct zfcp_adapter *adapter = (struct zfcp_adapter *) data;
-	zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED, 62,
-				NULL);
+	zfcp_erp_adapter_reopen(adapter, ZFCP_STATUS_COMMON_ERP_FAILED,
+				"fsrth_1", NULL);
 }
 
 static void zfcp_fsf_start_timer(struct zfcp_fsf_req *fsf_req,
@@ -75,7 +78,7 @@
 		 (unsigned long long)port->wwpn);
 	zfcp_act_eval_err(req->adapter, header->fsf_status_qual.halfword[0]);
 	zfcp_act_eval_err(req->adapter, header->fsf_status_qual.halfword[1]);
-	zfcp_erp_port_access_denied(port, 55, req);
+	zfcp_erp_port_access_denied(port, "fspad_1", req);
 	req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 }
 
@@ -89,7 +92,7 @@
 		 (unsigned long long)unit->port->wwpn);
 	zfcp_act_eval_err(req->adapter, header->fsf_status_qual.halfword[0]);
 	zfcp_act_eval_err(req->adapter, header->fsf_status_qual.halfword[1]);
-	zfcp_erp_unit_access_denied(unit, 59, req);
+	zfcp_erp_unit_access_denied(unit, "fsuad_1", req);
 	req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 }
 
@@ -97,7 +100,7 @@
 {
 	dev_err(&req->adapter->ccw_device->dev, "FCP device not "
 		"operational because of an unsupported FC class\n");
-	zfcp_erp_adapter_shutdown(req->adapter, 0, 123, req);
+	zfcp_erp_adapter_shutdown(req->adapter, 0, "fscns_1", req);
 	req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 }
 
@@ -159,20 +162,13 @@
 	list_for_each_entry(port, &adapter->port_list_head, list)
 		if (port->d_id == d_id) {
 			read_unlock_irqrestore(&zfcp_data.config_lock, flags);
-			switch (sr_buf->status_subtype) {
-			case FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT:
-				zfcp_erp_port_reopen(port, 0, 101, req);
-				break;
-			case FSF_STATUS_READ_SUB_ERROR_PORT:
-				zfcp_erp_port_shutdown(port, 0, 122, req);
-				break;
-			}
+			zfcp_erp_port_reopen(port, 0, "fssrpc1", req);
 			return;
 		}
 	read_unlock_irqrestore(&zfcp_data.config_lock, flags);
 }
 
-static void zfcp_fsf_link_down_info_eval(struct zfcp_fsf_req *req, u8 id,
+static void zfcp_fsf_link_down_info_eval(struct zfcp_fsf_req *req, char *id,
 					 struct fsf_link_down_info *link_down)
 {
 	struct zfcp_adapter *adapter = req->adapter;
@@ -181,6 +177,7 @@
 		return;
 
 	atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
+	zfcp_scsi_schedule_rports_block(adapter);
 
 	if (!link_down)
 		goto out;
@@ -261,13 +258,13 @@
 
 	switch (sr_buf->status_subtype) {
 	case FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK:
-		zfcp_fsf_link_down_info_eval(req, 38, ldi);
+		zfcp_fsf_link_down_info_eval(req, "fssrld1", ldi);
 		break;
 	case FSF_STATUS_READ_SUB_FDISC_FAILED:
-		zfcp_fsf_link_down_info_eval(req, 39, ldi);
+		zfcp_fsf_link_down_info_eval(req, "fssrld2", ldi);
 		break;
 	case FSF_STATUS_READ_SUB_FIRMWARE_UPDATE:
-		zfcp_fsf_link_down_info_eval(req, 40, NULL);
+		zfcp_fsf_link_down_info_eval(req, "fssrld3", NULL);
 	};
 }
 
@@ -307,22 +304,23 @@
 		dev_info(&adapter->ccw_device->dev,
 			 "The local link has been restored\n");
 		/* All ports should be marked as ready to run again */
-		zfcp_erp_modify_adapter_status(adapter, 30, NULL,
+		zfcp_erp_modify_adapter_status(adapter, "fssrh_1", NULL,
 					       ZFCP_STATUS_COMMON_RUNNING,
 					       ZFCP_SET);
 		zfcp_erp_adapter_reopen(adapter,
 					ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
 					ZFCP_STATUS_COMMON_ERP_FAILED,
-					102, req);
+					"fssrh_2", req);
 		break;
 	case FSF_STATUS_READ_NOTIFICATION_LOST:
 		if (sr_buf->status_subtype & FSF_STATUS_READ_SUB_ACT_UPDATED)
-			zfcp_erp_adapter_access_changed(adapter, 135, req);
+			zfcp_erp_adapter_access_changed(adapter, "fssrh_3",
+							req);
 		if (sr_buf->status_subtype & FSF_STATUS_READ_SUB_INCOMING_ELS)
 			schedule_work(&adapter->scan_work);
 		break;
 	case FSF_STATUS_READ_CFDC_UPDATED:
-		zfcp_erp_adapter_access_changed(adapter, 136, req);
+		zfcp_erp_adapter_access_changed(adapter, "fssrh_4", req);
 		break;
 	case FSF_STATUS_READ_FEATURE_UPDATE_ALERT:
 		adapter->adapter_features = sr_buf->payload.word[0];
@@ -351,7 +349,7 @@
 		dev_err(&req->adapter->ccw_device->dev,
 			"The FCP adapter reported a problem "
 			"that cannot be recovered\n");
-		zfcp_erp_adapter_shutdown(req->adapter, 0, 121, req);
+		zfcp_erp_adapter_shutdown(req->adapter, 0, "fsfsqe1", req);
 		break;
 	}
 	/* all non-return stats set FSFREQ_ERROR*/
@@ -368,7 +366,7 @@
 		dev_err(&req->adapter->ccw_device->dev,
 			"The FCP adapter does not recognize the command 0x%x\n",
 			req->qtcb->header.fsf_command);
-		zfcp_erp_adapter_shutdown(req->adapter, 0, 120, req);
+		zfcp_erp_adapter_shutdown(req->adapter, 0, "fsfse_1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_ADAPTER_STATUS_AVAILABLE:
@@ -400,17 +398,17 @@
 			"QTCB version 0x%x not supported by FCP adapter "
 			"(0x%x to 0x%x)\n", FSF_QTCB_CURRENT_VERSION,
 			psq->word[0], psq->word[1]);
-		zfcp_erp_adapter_shutdown(adapter, 0, 117, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fspse_1", req);
 		break;
 	case FSF_PROT_ERROR_STATE:
 	case FSF_PROT_SEQ_NUMB_ERROR:
-		zfcp_erp_adapter_reopen(adapter, 0, 98, req);
+		zfcp_erp_adapter_reopen(adapter, 0, "fspse_2", req);
 		req->status |= ZFCP_STATUS_FSFREQ_RETRY;
 		break;
 	case FSF_PROT_UNSUPP_QTCB_TYPE:
 		dev_err(&adapter->ccw_device->dev,
 			"The QTCB type is not supported by the FCP adapter\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 118, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fspse_3", req);
 		break;
 	case FSF_PROT_HOST_CONNECTION_INITIALIZING:
 		atomic_set_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
@@ -420,27 +418,29 @@
 		dev_err(&adapter->ccw_device->dev,
 			"0x%Lx is an ambiguous request identifier\n",
 			(unsigned long long)qtcb->bottom.support.req_handle);
-		zfcp_erp_adapter_shutdown(adapter, 0, 78, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fspse_4", req);
 		break;
 	case FSF_PROT_LINK_DOWN:
-		zfcp_fsf_link_down_info_eval(req, 37, &psq->link_down_info);
+		zfcp_fsf_link_down_info_eval(req, "fspse_5",
+					     &psq->link_down_info);
 		/* FIXME: reopening adapter now? better wait for link up */
-		zfcp_erp_adapter_reopen(adapter, 0, 79, req);
+		zfcp_erp_adapter_reopen(adapter, 0, "fspse_6", req);
 		break;
 	case FSF_PROT_REEST_QUEUE:
 		/* All ports should be marked as ready to run again */
-		zfcp_erp_modify_adapter_status(adapter, 28, NULL,
+		zfcp_erp_modify_adapter_status(adapter, "fspse_7", NULL,
 					       ZFCP_STATUS_COMMON_RUNNING,
 					       ZFCP_SET);
 		zfcp_erp_adapter_reopen(adapter,
 					ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED |
-					ZFCP_STATUS_COMMON_ERP_FAILED, 99, req);
+					ZFCP_STATUS_COMMON_ERP_FAILED,
+					"fspse_8", req);
 		break;
 	default:
 		dev_err(&adapter->ccw_device->dev,
 			"0x%x is not a valid transfer protocol status\n",
 			qtcb->prefix.prot_status);
-		zfcp_erp_adapter_shutdown(adapter, 0, 119, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fspse_9", req);
 	}
 	req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 }
@@ -526,7 +526,7 @@
 		dev_err(&adapter->ccw_device->dev,
 			"Unknown or unsupported arbitrated loop "
 			"fibre channel topology detected\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 127, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fsece_1", req);
 		return -EIO;
 	}
 
@@ -560,7 +560,7 @@
 				"FCP adapter maximum QTCB size (%d bytes) "
 				"is too small\n",
 				bottom->max_qtcb_size);
-			zfcp_erp_adapter_shutdown(adapter, 0, 129, req);
+			zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh1", req);
 			return;
 		}
 		atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
@@ -577,11 +577,11 @@
 		atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
 				&adapter->status);
 
-		zfcp_fsf_link_down_info_eval(req, 42,
+		zfcp_fsf_link_down_info_eval(req, "fsecdh2",
 			&qtcb->header.fsf_status_qual.link_down_info);
 		break;
 	default:
-		zfcp_erp_adapter_shutdown(adapter, 0, 130, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh3", req);
 		return;
 	}
 
@@ -597,14 +597,14 @@
 		dev_err(&adapter->ccw_device->dev,
 			"The FCP adapter only supports newer "
 			"control block versions\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 125, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh4", req);
 		return;
 	}
 	if (FSF_QTCB_CURRENT_VERSION > bottom->high_qtcb_version) {
 		dev_err(&adapter->ccw_device->dev,
 			"The FCP adapter only supports older "
 			"control block versions\n");
-		zfcp_erp_adapter_shutdown(adapter, 0, 126, req);
+		zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh5", req);
 	}
 }
 
@@ -617,9 +617,10 @@
 	if (req->data)
 		memcpy(req->data, bottom, sizeof(*bottom));
 
-	if (adapter->connection_features & FSF_FEATURE_NPIV_MODE)
+	if (adapter->connection_features & FSF_FEATURE_NPIV_MODE) {
 		fc_host_permanent_port_name(shost) = bottom->wwpn;
-	else
+		fc_host_port_type(shost) = FC_PORTTYPE_NPIV;
+	} else
 		fc_host_permanent_port_name(shost) = fc_host_port_name(shost);
 	fc_host_maxframe_size(shost) = bottom->maximum_frame_size;
 	fc_host_supported_speeds(shost) = bottom->supported_speed;
@@ -638,20 +639,12 @@
 		break;
 	case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE:
 		zfcp_fsf_exchange_port_evaluate(req);
-		zfcp_fsf_link_down_info_eval(req, 43,
+		zfcp_fsf_link_down_info_eval(req, "fsepdh1",
 			&qtcb->header.fsf_status_qual.link_down_info);
 		break;
 	}
 }
 
-static int zfcp_fsf_sbal_available(struct zfcp_adapter *adapter)
-{
-	if (atomic_read(&adapter->req_q.count) > 0)
-		return 1;
-	atomic_inc(&adapter->qdio_outb_full);
-	return 0;
-}
-
 static int zfcp_fsf_req_sbal_get(struct zfcp_adapter *adapter)
 	__releases(&adapter->req_q_lock)
 	__acquires(&adapter->req_q_lock)
@@ -735,7 +728,7 @@
 
 	req->adapter = adapter;
 	req->fsf_command = fsf_cmd;
-	req->req_id = adapter->req_no++;
+	req->req_id = adapter->req_no;
 	req->sbal_number = 1;
 	req->sbal_first = req_q->first;
 	req->sbal_last = req_q->first;
@@ -791,13 +784,14 @@
 		if (zfcp_reqlist_find_safe(adapter, req))
 			zfcp_reqlist_remove(adapter, req);
 		spin_unlock_irqrestore(&adapter->req_list_lock, flags);
-		zfcp_erp_adapter_reopen(adapter, 0, 116, req);
+		zfcp_erp_adapter_reopen(adapter, 0, "fsrs__1", req);
 		return -EIO;
 	}
 
 	/* Don't increase for unsolicited status */
 	if (req->qtcb)
 		adapter->fsf_req_seq_no++;
+	adapter->req_no++;
 
 	return 0;
 }
@@ -870,14 +864,14 @@
 	switch (req->qtcb->header.fsf_status) {
 	case FSF_PORT_HANDLE_NOT_VALID:
 		if (fsq->word[0] == fsq->word[1]) {
-			zfcp_erp_adapter_reopen(unit->port->adapter, 0, 104,
-						req);
+			zfcp_erp_adapter_reopen(unit->port->adapter, 0,
+						"fsafch1", req);
 			req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		}
 		break;
 	case FSF_LUN_HANDLE_NOT_VALID:
 		if (fsq->word[0] == fsq->word[1]) {
-			zfcp_erp_port_reopen(unit->port, 0, 105, req);
+			zfcp_erp_port_reopen(unit->port, 0, "fsafch2", req);
 			req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		}
 		break;
@@ -885,12 +879,12 @@
 		req->status |= ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED;
 		break;
 	case FSF_PORT_BOXED:
-		zfcp_erp_port_boxed(unit->port, 47, req);
+		zfcp_erp_port_boxed(unit->port, "fsafch3", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
 	case FSF_LUN_BOXED:
-		zfcp_erp_unit_boxed(unit, 48, req);
+		zfcp_erp_unit_boxed(unit, "fsafch4", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
                 break;
@@ -912,27 +906,22 @@
 /**
  * zfcp_fsf_abort_fcp_command - abort running SCSI command
  * @old_req_id: unsigned long
- * @adapter: pointer to struct zfcp_adapter
  * @unit: pointer to struct zfcp_unit
- * @req_flags: integer specifying the request flags
  * Returns: pointer to struct zfcp_fsf_req
- *
- * FIXME(design): should be watched by a timeout !!!
  */
 
 struct zfcp_fsf_req *zfcp_fsf_abort_fcp_command(unsigned long old_req_id,
-						struct zfcp_adapter *adapter,
-						struct zfcp_unit *unit,
-						int req_flags)
+						struct zfcp_unit *unit)
 {
 	struct qdio_buffer_element *sbale;
 	struct zfcp_fsf_req *req = NULL;
+	struct zfcp_adapter *adapter = unit->port->adapter;
 
-	spin_lock(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	spin_lock_bh(&adapter->req_q_lock);
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
 	req = zfcp_fsf_req_create(adapter, FSF_QTCB_ABORT_FCP_CMND,
-				  req_flags, adapter->pool.fsf_req_abort);
+				  0, adapter->pool.fsf_req_abort);
 	if (IS_ERR(req)) {
 		req = NULL;
 		goto out;
@@ -960,7 +949,7 @@
 	zfcp_fsf_req_free(req);
 	req = NULL;
 out:
-	spin_unlock(&adapter->req_q_lock);
+	spin_unlock_bh(&adapter->req_q_lock);
 	return req;
 }
 
@@ -998,7 +987,7 @@
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(adapter, 0, 106, req);
+		zfcp_erp_adapter_reopen(adapter, 0, "fsscth1", req);
 	case FSF_GENERIC_COMMAND_REJECTED:
 	case FSF_PAYLOAD_SIZE_MISMATCH:
 	case FSF_REQUEST_SIZE_TOO_LARGE:
@@ -1174,12 +1163,8 @@
 	struct fsf_qtcb_bottom_support *bottom;
 	int ret = -EIO;
 
-	if (unlikely(!(atomic_read(&els->port->status) &
-		       ZFCP_STATUS_COMMON_UNBLOCKED)))
-		return -EBUSY;
-
-	spin_lock(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	spin_lock_bh(&adapter->req_q_lock);
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
 	req = zfcp_fsf_req_create(adapter, FSF_QTCB_SEND_ELS,
 				  ZFCP_REQ_AUTO_CLEANUP, NULL);
@@ -1212,7 +1197,7 @@
 failed_send:
 	zfcp_fsf_req_free(req);
 out:
-	spin_unlock(&adapter->req_q_lock);
+	spin_unlock_bh(&adapter->req_q_lock);
 	return ret;
 }
 
@@ -1224,7 +1209,7 @@
 	int retval = -EIO;
 
 	spin_lock_bh(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
 	req = zfcp_fsf_req_create(adapter,
 				  FSF_QTCB_EXCHANGE_CONFIG_DATA,
@@ -1320,7 +1305,7 @@
 		return -EOPNOTSUPP;
 
 	spin_lock_bh(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
 	req = zfcp_fsf_req_create(adapter, FSF_QTCB_EXCHANGE_PORT_DATA,
 				  ZFCP_REQ_AUTO_CLEANUP,
@@ -1366,7 +1351,7 @@
 		return -EOPNOTSUPP;
 
 	spin_lock_bh(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
 
 	req = zfcp_fsf_req_create(adapter, FSF_QTCB_EXCHANGE_PORT_DATA, 0,
@@ -1416,7 +1401,7 @@
 			 "Not enough FCP adapter resources to open "
 			 "remote port 0x%016Lx\n",
 			 (unsigned long long)port->wwpn);
-		zfcp_erp_port_failed(port, 31, req);
+		zfcp_erp_port_failed(port, "fsoph_1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_ADAPTER_STATUS_AVAILABLE:
@@ -1522,13 +1507,13 @@
 
 	switch (req->qtcb->header.fsf_status) {
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(port->adapter, 0, 107, req);
+		zfcp_erp_adapter_reopen(port->adapter, 0, "fscph_1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_ADAPTER_STATUS_AVAILABLE:
 		break;
 	case FSF_GOOD:
-		zfcp_erp_modify_port_status(port, 33, req,
+		zfcp_erp_modify_port_status(port, "fscph_2", req,
 					    ZFCP_STATUS_COMMON_OPEN,
 					    ZFCP_CLEAR);
 		break;
@@ -1657,7 +1642,7 @@
 
 	if (req->qtcb->header.fsf_status == FSF_PORT_HANDLE_NOT_VALID) {
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
-		zfcp_erp_adapter_reopen(wka_port->adapter, 0, 84, req);
+		zfcp_erp_adapter_reopen(wka_port->adapter, 0, "fscwph1", req);
 	}
 
 	wka_port->status = ZFCP_WKA_PORT_OFFLINE;
@@ -1712,18 +1697,18 @@
 	struct zfcp_unit *unit;
 
 	if (req->status & ZFCP_STATUS_FSFREQ_ERROR)
-		goto skip_fsfstatus;
+		return;
 
 	switch (header->fsf_status) {
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(port->adapter, 0, 108, req);
+		zfcp_erp_adapter_reopen(port->adapter, 0, "fscpph1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_ACCESS_DENIED:
 		zfcp_fsf_access_denied_port(req, port);
 		break;
 	case FSF_PORT_BOXED:
-		zfcp_erp_port_boxed(port, 50, req);
+		zfcp_erp_port_boxed(port, "fscpph2", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		/* can't use generic zfcp_erp_modify_port_status because
@@ -1752,8 +1737,6 @@
 					  &unit->status);
 		break;
 	}
-skip_fsfstatus:
-	atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_CLOSING, &port->status);
 }
 
 /**
@@ -1789,8 +1772,6 @@
 	req->erp_action = erp_action;
 	req->handler = zfcp_fsf_close_physical_port_handler;
 	erp_action->fsf_req = req;
-	atomic_set_mask(ZFCP_STATUS_PORT_PHYS_CLOSING,
-			&erp_action->port->status);
 
 	zfcp_fsf_start_erp_timer(req);
 	retval = zfcp_fsf_req_send(req);
@@ -1825,7 +1806,7 @@
 	switch (header->fsf_status) {
 
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(unit->port->adapter, 0, 109, req);
+		zfcp_erp_adapter_reopen(unit->port->adapter, 0, "fsouh_1", req);
 		/* fall through */
 	case FSF_LUN_ALREADY_OPEN:
 		break;
@@ -1835,7 +1816,7 @@
 		atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status);
 		break;
 	case FSF_PORT_BOXED:
-		zfcp_erp_port_boxed(unit->port, 51, req);
+		zfcp_erp_port_boxed(unit->port, "fsouh_2", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
@@ -1851,7 +1832,7 @@
 		else
 			zfcp_act_eval_err(adapter,
 					  header->fsf_status_qual.word[2]);
-		zfcp_erp_unit_access_denied(unit, 60, req);
+		zfcp_erp_unit_access_denied(unit, "fsouh_3", req);
 		atomic_clear_mask(ZFCP_STATUS_UNIT_SHARED, &unit->status);
 		atomic_clear_mask(ZFCP_STATUS_UNIT_READONLY, &unit->status);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
@@ -1862,7 +1843,7 @@
 			 "0x%016Lx on port 0x%016Lx\n",
 			 (unsigned long long)unit->fcp_lun,
 			 (unsigned long long)unit->port->wwpn);
-		zfcp_erp_unit_failed(unit, 34, req);
+		zfcp_erp_unit_failed(unit, "fsouh_4", req);
 		/* fall through */
 	case FSF_INVALID_COMMAND_OPTION:
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
@@ -1911,9 +1892,9 @@
 					"port 0x%016Lx)\n",
 					(unsigned long long)unit->fcp_lun,
 					(unsigned long long)unit->port->wwpn);
-				zfcp_erp_unit_failed(unit, 35, req);
+				zfcp_erp_unit_failed(unit, "fsouh_5", req);
 				req->status |= ZFCP_STATUS_FSFREQ_ERROR;
-				zfcp_erp_unit_shutdown(unit, 0, 80, req);
+				zfcp_erp_unit_shutdown(unit, 0, "fsouh_6", req);
         		} else if (!exclusive && readwrite) {
 				dev_err(&adapter->ccw_device->dev,
 					"Shared read-write access not "
@@ -1921,9 +1902,9 @@
 					"0x%016Lx)\n",
 					(unsigned long long)unit->fcp_lun,
 					(unsigned long long)unit->port->wwpn);
-				zfcp_erp_unit_failed(unit, 36, req);
+				zfcp_erp_unit_failed(unit, "fsouh_7", req);
 				req->status |= ZFCP_STATUS_FSFREQ_ERROR;
-				zfcp_erp_unit_shutdown(unit, 0, 81, req);
+				zfcp_erp_unit_shutdown(unit, 0, "fsouh_8", req);
         		}
 		}
 		break;
@@ -1988,15 +1969,15 @@
 
 	switch (req->qtcb->header.fsf_status) {
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(unit->port->adapter, 0, 110, req);
+		zfcp_erp_adapter_reopen(unit->port->adapter, 0, "fscuh_1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_LUN_HANDLE_NOT_VALID:
-		zfcp_erp_port_reopen(unit->port, 0, 111, req);
+		zfcp_erp_port_reopen(unit->port, 0, "fscuh_2", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_PORT_BOXED:
-		zfcp_erp_port_boxed(unit->port, 52, req);
+		zfcp_erp_port_boxed(unit->port, "fscuh_3", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
@@ -2073,7 +2054,6 @@
 	struct fsf_qual_latency_info *lat_inf;
 	struct latency_cont *lat;
 	struct zfcp_unit *unit = req->unit;
-	unsigned long flags;
 
 	lat_inf = &req->qtcb->prefix.prot_status_qual.latency_info;
 
@@ -2091,11 +2071,11 @@
 		return;
 	}
 
-	spin_lock_irqsave(&unit->latencies.lock, flags);
+	spin_lock(&unit->latencies.lock);
 	zfcp_fsf_update_lat(&lat->channel, lat_inf->channel_lat);
 	zfcp_fsf_update_lat(&lat->fabric, lat_inf->fabric_lat);
 	lat->counter++;
-	spin_unlock_irqrestore(&unit->latencies.lock, flags);
+	spin_unlock(&unit->latencies.lock);
 }
 
 #ifdef CONFIG_BLK_DEV_IO_TRACE
@@ -2147,7 +2127,6 @@
 
 	if (unlikely(req->status & ZFCP_STATUS_FSFREQ_ABORTED)) {
 		set_host_byte(scpnt, DID_SOFT_ERROR);
-		set_driver_byte(scpnt, SUGGEST_RETRY);
 		goto skip_fsfstatus;
 	}
 
@@ -2237,12 +2216,12 @@
 	switch (header->fsf_status) {
 	case FSF_HANDLE_MISMATCH:
 	case FSF_PORT_HANDLE_NOT_VALID:
-		zfcp_erp_adapter_reopen(unit->port->adapter, 0, 112, req);
+		zfcp_erp_adapter_reopen(unit->port->adapter, 0, "fssfch1", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_FCPLUN_NOT_VALID:
 	case FSF_LUN_HANDLE_NOT_VALID:
-		zfcp_erp_port_reopen(unit->port, 0, 113, req);
+		zfcp_erp_port_reopen(unit->port, 0, "fssfch2", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_SERVICE_CLASS_NOT_SUPPORTED:
@@ -2258,7 +2237,8 @@
 			req->qtcb->bottom.io.data_direction,
 			(unsigned long long)unit->fcp_lun,
 			(unsigned long long)unit->port->wwpn);
-		zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 133, req);
+		zfcp_erp_adapter_shutdown(unit->port->adapter, 0, "fssfch3",
+					  req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_CMND_LENGTH_NOT_VALID:
@@ -2268,16 +2248,17 @@
 			req->qtcb->bottom.io.fcp_cmnd_length,
 			(unsigned long long)unit->fcp_lun,
 			(unsigned long long)unit->port->wwpn);
-		zfcp_erp_adapter_shutdown(unit->port->adapter, 0, 134, req);
+		zfcp_erp_adapter_shutdown(unit->port->adapter, 0, "fssfch4",
+					  req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR;
 		break;
 	case FSF_PORT_BOXED:
-		zfcp_erp_port_boxed(unit->port, 53, req);
+		zfcp_erp_port_boxed(unit->port, "fssfch5", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
 	case FSF_LUN_BOXED:
-		zfcp_erp_unit_boxed(unit, 54, req);
+		zfcp_erp_unit_boxed(unit, "fssfch6", req);
 		req->status |= ZFCP_STATUS_FSFREQ_ERROR |
 			       ZFCP_STATUS_FSFREQ_RETRY;
 		break;
@@ -2314,30 +2295,29 @@
 
 /**
  * zfcp_fsf_send_fcp_command_task - initiate an FCP command (for a SCSI command)
- * @adapter: adapter where scsi command is issued
  * @unit: unit where command is sent to
  * @scsi_cmnd: scsi command to be sent
- * @timer: timer to be started when request is initiated
- * @req_flags: flags for fsf_request
  */
-int zfcp_fsf_send_fcp_command_task(struct zfcp_adapter *adapter,
-				   struct zfcp_unit *unit,
-				   struct scsi_cmnd *scsi_cmnd,
-				   int use_timer, int req_flags)
+int zfcp_fsf_send_fcp_command_task(struct zfcp_unit *unit,
+				   struct scsi_cmnd *scsi_cmnd)
 {
 	struct zfcp_fsf_req *req;
 	struct fcp_cmnd_iu *fcp_cmnd_iu;
 	unsigned int sbtype;
 	int real_bytes, retval = -EIO;
+	struct zfcp_adapter *adapter = unit->port->adapter;
 
 	if (unlikely(!(atomic_read(&unit->status) &
 		       ZFCP_STATUS_COMMON_UNBLOCKED)))
 		return -EBUSY;
 
 	spin_lock(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	if (atomic_read(&adapter->req_q.count) <= 0) {
+		atomic_inc(&adapter->qdio_outb_full);
 		goto out;
-	req = zfcp_fsf_req_create(adapter, FSF_QTCB_FCP_CMND, req_flags,
+	}
+	req = zfcp_fsf_req_create(adapter, FSF_QTCB_FCP_CMND,
+				  ZFCP_REQ_AUTO_CLEANUP,
 				  adapter->pool.fsf_req_scsi);
 	if (IS_ERR(req)) {
 		retval = PTR_ERR(req);
@@ -2411,7 +2391,7 @@
 				"on port 0x%016Lx closed\n",
 				(unsigned long long)unit->fcp_lun,
 				(unsigned long long)unit->port->wwpn);
-			zfcp_erp_unit_shutdown(unit, 0, 131, req);
+			zfcp_erp_unit_shutdown(unit, 0, "fssfct1", req);
 			retval = -EINVAL;
 		}
 		goto failed_scsi_cmnd;
@@ -2419,9 +2399,6 @@
 
 	zfcp_set_fcp_dl(fcp_cmnd_iu, real_bytes);
 
-	if (use_timer)
-		zfcp_fsf_start_timer(req, ZFCP_FSF_REQUEST_TIMEOUT);
-
 	retval = zfcp_fsf_req_send(req);
 	if (unlikely(retval))
 		goto failed_scsi_cmnd;
@@ -2439,28 +2416,25 @@
 
 /**
  * zfcp_fsf_send_fcp_ctm - send SCSI task management command
- * @adapter: pointer to struct zfcp-adapter
  * @unit: pointer to struct zfcp_unit
  * @tm_flags: unsigned byte for task management flags
- * @req_flags: int request flags
  * Returns: on success pointer to struct fsf_req, NULL otherwise
  */
-struct zfcp_fsf_req *zfcp_fsf_send_fcp_ctm(struct zfcp_adapter *adapter,
-					   struct zfcp_unit *unit,
-					   u8 tm_flags, int req_flags)
+struct zfcp_fsf_req *zfcp_fsf_send_fcp_ctm(struct zfcp_unit *unit, u8 tm_flags)
 {
 	struct qdio_buffer_element *sbale;
 	struct zfcp_fsf_req *req = NULL;
 	struct fcp_cmnd_iu *fcp_cmnd_iu;
+	struct zfcp_adapter *adapter = unit->port->adapter;
 
 	if (unlikely(!(atomic_read(&unit->status) &
 		       ZFCP_STATUS_COMMON_UNBLOCKED)))
 		return NULL;
 
-	spin_lock(&adapter->req_q_lock);
-	if (!zfcp_fsf_sbal_available(adapter))
+	spin_lock_bh(&adapter->req_q_lock);
+	if (zfcp_fsf_req_sbal_get(adapter))
 		goto out;
-	req = zfcp_fsf_req_create(adapter, FSF_QTCB_FCP_CMND, req_flags,
+	req = zfcp_fsf_req_create(adapter, FSF_QTCB_FCP_CMND, 0,
 				  adapter->pool.fsf_req_scsi);
 	if (IS_ERR(req)) {
 		req = NULL;
@@ -2492,7 +2466,7 @@
 	zfcp_fsf_req_free(req);
 	req = NULL;
 out:
-	spin_unlock(&adapter->req_q_lock);
+	spin_unlock_bh(&adapter->req_q_lock);
 	return req;
 }
 
diff --git a/drivers/s390/scsi/zfcp_fsf.h b/drivers/s390/scsi/zfcp_fsf.h
index 8bb2002..df7f232 100644
--- a/drivers/s390/scsi/zfcp_fsf.h
+++ b/drivers/s390/scsi/zfcp_fsf.h
@@ -127,10 +127,6 @@
 #define FSF_STATUS_READ_CFDC_UPDATED		0x0000000A
 #define FSF_STATUS_READ_FEATURE_UPDATE_ALERT	0x0000000C
 
-/* status subtypes in status read buffer */
-#define FSF_STATUS_READ_SUB_CLOSE_PHYS_PORT	0x00000001
-#define FSF_STATUS_READ_SUB_ERROR_PORT		0x00000002
-
 /* status subtypes for link down */
 #define FSF_STATUS_READ_SUB_NO_PHYSICAL_LINK	0x00000000
 #define FSF_STATUS_READ_SUB_FDISC_FAILED	0x00000001
diff --git a/drivers/s390/scsi/zfcp_qdio.c b/drivers/s390/scsi/zfcp_qdio.c
index 33e0a20..e0a2153 100644
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -11,9 +11,6 @@
 
 #include "zfcp_ext.h"
 
-/* FIXME(tune): free space should be one max. SBAL chain plus what? */
-#define ZFCP_QDIO_PCI_INTERVAL	(QDIO_MAX_BUFFERS_PER_Q \
-				- (FSF_MAX_SBALS_PER_REQ + 4))
 #define QBUFF_PER_PAGE		(PAGE_SIZE / sizeof(struct qdio_buffer))
 
 static int zfcp_qdio_buffers_enqueue(struct qdio_buffer **sbal)
@@ -58,7 +55,7 @@
 	}
 }
 
-static void zfcp_qdio_handler_error(struct zfcp_adapter *adapter, u8 id)
+static void zfcp_qdio_handler_error(struct zfcp_adapter *adapter, char *id)
 {
 	dev_warn(&adapter->ccw_device->dev, "A QDIO problem occurred\n");
 
@@ -77,6 +74,23 @@
 	}
 }
 
+/* this needs to be called prior to updating the queue fill level */
+static void zfcp_qdio_account(struct zfcp_adapter *adapter)
+{
+	ktime_t now;
+	s64 span;
+	int free, used;
+
+	spin_lock(&adapter->qdio_stat_lock);
+	now = ktime_get();
+	span = ktime_us_delta(now, adapter->req_q_time);
+	free = max(0, atomic_read(&adapter->req_q.count));
+	used = QDIO_MAX_BUFFERS_PER_Q - free;
+	adapter->req_q_util += used * span;
+	adapter->req_q_time = now;
+	spin_unlock(&adapter->qdio_stat_lock);
+}
+
 static void zfcp_qdio_int_req(struct ccw_device *cdev, unsigned int qdio_err,
 			      int queue_no, int first, int count,
 			      unsigned long parm)
@@ -86,13 +100,14 @@
 
 	if (unlikely(qdio_err)) {
 		zfcp_hba_dbf_event_qdio(adapter, qdio_err, first, count);
-		zfcp_qdio_handler_error(adapter, 140);
+		zfcp_qdio_handler_error(adapter, "qdireq1");
 		return;
 	}
 
 	/* cleanup all SBALs being program-owned now */
 	zfcp_qdio_zero_sbals(queue->sbal, first, count);
 
+	zfcp_qdio_account(adapter);
 	atomic_add(count, &queue->count);
 	wake_up(&adapter->request_wq);
 }
@@ -154,7 +169,7 @@
 
 	if (unlikely(qdio_err)) {
 		zfcp_hba_dbf_event_qdio(adapter, qdio_err, first, count);
-		zfcp_qdio_handler_error(adapter, 147);
+		zfcp_qdio_handler_error(adapter, "qdires1");
 		return;
 	}
 
@@ -346,21 +361,12 @@
 	struct zfcp_qdio_queue *req_q = &adapter->req_q;
 	int first = fsf_req->sbal_first;
 	int count = fsf_req->sbal_number;
-	int retval, pci, pci_batch;
-	struct qdio_buffer_element *sbale;
+	int retval;
+	unsigned int qdio_flags = QDIO_FLAG_SYNC_OUTPUT;
 
-	/* acknowledgements for transferred buffers */
-	pci_batch = adapter->req_q_pci_batch + count;
-	if (unlikely(pci_batch >= ZFCP_QDIO_PCI_INTERVAL)) {
-		pci_batch %= ZFCP_QDIO_PCI_INTERVAL;
-		pci = first + count - (pci_batch + 1);
-		pci %= QDIO_MAX_BUFFERS_PER_Q;
-		sbale = zfcp_qdio_sbale(req_q, pci, 0);
-		sbale->flags |= SBAL_FLAGS0_PCI;
-	}
+	zfcp_qdio_account(adapter);
 
-	retval = do_QDIO(adapter->ccw_device, QDIO_FLAG_SYNC_OUTPUT, 0, first,
-			 count);
+	retval = do_QDIO(adapter->ccw_device, qdio_flags, 0, first, count);
 	if (unlikely(retval)) {
 		zfcp_qdio_zero_sbals(req_q->sbal, first, count);
 		return retval;
@@ -370,7 +376,6 @@
 	atomic_sub(count, &req_q->count);
 	req_q->first += count;
 	req_q->first %= QDIO_MAX_BUFFERS_PER_Q;
-	adapter->req_q_pci_batch = pci_batch;
 	return 0;
 }
 
@@ -441,7 +446,6 @@
 	}
 	req_q->first = 0;
 	atomic_set(&req_q->count, 0);
-	adapter->req_q_pci_batch = 0;
 	adapter->resp_q.first = 0;
 	atomic_set(&adapter->resp_q.count, 0);
 }
@@ -479,7 +483,6 @@
 	/* set index of first available SBALS / number of available SBALS */
 	adapter->req_q.first = 0;
 	atomic_set(&adapter->req_q.count, QDIO_MAX_BUFFERS_PER_Q);
-	adapter->req_q_pci_batch = 0;
 
 	return 0;
 
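
The zfcp_qdio.c hunks above drop the PCI-interval batching and instead sample the request queue fill level over time: zfcp_qdio_account() accumulates "buffers in use" multiplied by the elapsed microseconds into req_q_util, which the sysfs change further below exposes next to qdio_outb_full. A small stand-alone sketch of that accumulation, with invented names and a fixed ring size standing in for QDIO_MAX_BUFFERS_PER_Q:

	#include <stdint.h>

	#define DEMO_RING_SIZE 128	/* stands in for QDIO_MAX_BUFFERS_PER_Q */

	struct demo_q_stats {
		uint64_t util;		/* sum of used_buffers * microseconds */
		uint64_t last_us;	/* timestamp of the previous sample */
	};

	/* Call before the fill level changes: weight the buffers used during
	 * the last interval by its length; util divided by total elapsed time
	 * then yields the average number of outstanding buffers. */
	static void demo_q_account(struct demo_q_stats *s, int free_now,
				   uint64_t now_us)
	{
		int used = DEMO_RING_SIZE - (free_now > 0 ? free_now : 0);

		s->util += (uint64_t)used * (now_us - s->last_us);
		s->last_us = now_us;
	}
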
diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c
index 9dc42a6..58201e1 100644
--- a/drivers/s390/scsi/zfcp_scsi.c
+++ b/drivers/s390/scsi/zfcp_scsi.c
@@ -3,7 +3,7 @@
  *
  * Interface to Linux SCSI midlayer.
  *
- * Copyright IBM Corporation 2002, 2008
+ * Copyright IBM Corporation 2002, 2009
  */
 
 #define KMSG_COMPONENT "zfcp"
@@ -27,9 +27,7 @@
 static void zfcp_scsi_slave_destroy(struct scsi_device *sdpnt)
 {
 	struct zfcp_unit *unit = (struct zfcp_unit *) sdpnt->hostdata;
-	atomic_clear_mask(ZFCP_STATUS_UNIT_REGISTERED, &unit->status);
 	unit->device = NULL;
-	zfcp_erp_unit_failed(unit, 12, NULL);
 	zfcp_unit_put(unit);
 }
 
@@ -58,8 +56,8 @@
 {
 	struct zfcp_unit *unit;
 	struct zfcp_adapter *adapter;
-	int    status;
-	int    ret;
+	int    status, scsi_result, ret;
+	struct fc_rport *rport = starget_to_rport(scsi_target(scpnt->device));
 
 	/* reset the status for this request */
 	scpnt->result = 0;
@@ -81,6 +79,14 @@
 		return 0;
 	}
 
+	scsi_result = fc_remote_port_chkready(rport);
+	if (unlikely(scsi_result)) {
+		scpnt->result = scsi_result;
+		zfcp_scsi_dbf_event_result("fail", 4, adapter, scpnt, NULL);
+		scpnt->scsi_done(scpnt);
+		return 0;
+	}
+
 	status = atomic_read(&unit->status);
 	if (unlikely((status & ZFCP_STATUS_COMMON_ERP_FAILED) ||
 		     !(status & ZFCP_STATUS_COMMON_RUNNING))) {
@@ -88,8 +94,7 @@
 		return 0;
 	}
 
-	ret = zfcp_fsf_send_fcp_command_task(adapter, unit, scpnt, 0,
-					     ZFCP_REQ_AUTO_CLEANUP);
+	ret = zfcp_fsf_send_fcp_command_task(unit, scpnt);
 	if (unlikely(ret == -EBUSY))
 		return SCSI_MLQUEUE_DEVICE_BUSY;
 	else if (unlikely(ret < 0))
@@ -133,8 +138,7 @@
 
 	read_lock_irqsave(&zfcp_data.config_lock, flags);
 	unit = zfcp_unit_lookup(adapter, sdp->channel, sdp->id, sdp->lun);
-	if (unit &&
-	    (atomic_read(&unit->status) & ZFCP_STATUS_UNIT_REGISTERED)) {
+	if (unit) {
 		sdp->hostdata = unit;
 		unit->device = sdp;
 		zfcp_unit_get(unit);
@@ -147,79 +151,91 @@
 
 static int zfcp_scsi_eh_abort_handler(struct scsi_cmnd *scpnt)
 {
- 	struct Scsi_Host *scsi_host;
- 	struct zfcp_adapter *adapter;
-	struct zfcp_unit *unit;
-	struct zfcp_fsf_req *fsf_req;
+	struct Scsi_Host *scsi_host = scpnt->device->host;
+	struct zfcp_adapter *adapter =
+		(struct zfcp_adapter *) scsi_host->hostdata[0];
+	struct zfcp_unit *unit = scpnt->device->hostdata;
+	struct zfcp_fsf_req *old_req, *abrt_req;
 	unsigned long flags;
 	unsigned long old_req_id = (unsigned long) scpnt->host_scribble;
 	int retval = SUCCESS;
-
-	scsi_host = scpnt->device->host;
-	adapter = (struct zfcp_adapter *) scsi_host->hostdata[0];
-	unit = scpnt->device->hostdata;
+	int retry = 3;
 
 	/* avoid race condition between late normal completion and abort */
 	write_lock_irqsave(&adapter->abort_lock, flags);
 
-	/* Check whether corresponding fsf_req is still pending */
 	spin_lock(&adapter->req_list_lock);
-	fsf_req = zfcp_reqlist_find(adapter, old_req_id);
+	old_req = zfcp_reqlist_find(adapter, old_req_id);
 	spin_unlock(&adapter->req_list_lock);
-	if (!fsf_req) {
+	if (!old_req) {
 		write_unlock_irqrestore(&adapter->abort_lock, flags);
-		zfcp_scsi_dbf_event_abort("lte1", adapter, scpnt, NULL, 0);
-		return retval;
+		zfcp_scsi_dbf_event_abort("lte1", adapter, scpnt, NULL,
+					  old_req_id);
+		return SUCCESS;
 	}
-	fsf_req->data = NULL;
+	old_req->data = NULL;
 
 	/* don't access old fsf_req after releasing the abort_lock */
 	write_unlock_irqrestore(&adapter->abort_lock, flags);
 
-	fsf_req = zfcp_fsf_abort_fcp_command(old_req_id, adapter, unit, 0);
-	if (!fsf_req) {
-		zfcp_scsi_dbf_event_abort("nres", adapter, scpnt, NULL,
-					  old_req_id);
-		retval = FAILED;
-		return retval;
+	while (retry--) {
+		abrt_req = zfcp_fsf_abort_fcp_command(old_req_id, unit);
+		if (abrt_req)
+			break;
+
+		zfcp_erp_wait(adapter);
+		if (!(atomic_read(&adapter->status) &
+		      ZFCP_STATUS_COMMON_RUNNING)) {
+			zfcp_scsi_dbf_event_abort("nres", adapter, scpnt, NULL,
+						  old_req_id);
+			return SUCCESS;
+		}
 	}
+	if (!abrt_req)
+		return FAILED;
 
-	__wait_event(fsf_req->completion_wq,
-		     fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED);
+	wait_event(abrt_req->completion_wq,
+		   abrt_req->status & ZFCP_STATUS_FSFREQ_COMPLETED);
 
-	if (fsf_req->status & ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED) {
-		zfcp_scsi_dbf_event_abort("okay", adapter, scpnt, fsf_req, 0);
-	} else if (fsf_req->status & ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED) {
-		zfcp_scsi_dbf_event_abort("lte2", adapter, scpnt, fsf_req, 0);
-	} else {
-		zfcp_scsi_dbf_event_abort("fail", adapter, scpnt, fsf_req, 0);
+	if (abrt_req->status & ZFCP_STATUS_FSFREQ_ABORTSUCCEEDED)
+		zfcp_scsi_dbf_event_abort("okay", adapter, scpnt, abrt_req, 0);
+	else if (abrt_req->status & ZFCP_STATUS_FSFREQ_ABORTNOTNEEDED)
+		zfcp_scsi_dbf_event_abort("lte2", adapter, scpnt, abrt_req, 0);
+	else {
+		zfcp_scsi_dbf_event_abort("fail", adapter, scpnt, abrt_req, 0);
 		retval = FAILED;
 	}
-	zfcp_fsf_req_free(fsf_req);
-
+	zfcp_fsf_req_free(abrt_req);
 	return retval;
 }
 
-static int zfcp_task_mgmt_function(struct zfcp_unit *unit, u8 tm_flags,
-					 struct scsi_cmnd *scpnt)
+static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags)
 {
+	struct zfcp_unit *unit = scpnt->device->hostdata;
 	struct zfcp_adapter *adapter = unit->port->adapter;
 	struct zfcp_fsf_req *fsf_req;
 	int retval = SUCCESS;
+	int retry = 3;
 
-	/* issue task management function */
-	fsf_req = zfcp_fsf_send_fcp_ctm(adapter, unit, tm_flags, 0);
-	if (!fsf_req) {
-		zfcp_scsi_dbf_event_devreset("nres", tm_flags, unit, scpnt);
-		return FAILED;
+	while (retry--) {
+		fsf_req = zfcp_fsf_send_fcp_ctm(unit, tm_flags);
+		if (fsf_req)
+			break;
+
+		zfcp_erp_wait(adapter);
+		if (!(atomic_read(&adapter->status) &
+		      ZFCP_STATUS_COMMON_RUNNING)) {
+			zfcp_scsi_dbf_event_devreset("nres", tm_flags, unit,
+						     scpnt);
+			return SUCCESS;
+		}
 	}
+	if (!fsf_req)
+		return FAILED;
 
-	__wait_event(fsf_req->completion_wq,
-		     fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED);
+	wait_event(fsf_req->completion_wq,
+		   fsf_req->status & ZFCP_STATUS_FSFREQ_COMPLETED);
 
-	/*
-	 * check completion status of task management function
-	 */
 	if (fsf_req->status & ZFCP_STATUS_FSFREQ_TMFUNCFAILED) {
 		zfcp_scsi_dbf_event_devreset("fail", tm_flags, unit, scpnt);
 		retval = FAILED;
@@ -230,40 +246,25 @@
 		zfcp_scsi_dbf_event_devreset("okay", tm_flags, unit, scpnt);
 
 	zfcp_fsf_req_free(fsf_req);
-
 	return retval;
 }
 
 static int zfcp_scsi_eh_device_reset_handler(struct scsi_cmnd *scpnt)
 {
-	struct zfcp_unit *unit = scpnt->device->hostdata;
-
-	if (!unit) {
-		WARN_ON(1);
-		return SUCCESS;
-	}
-	return zfcp_task_mgmt_function(unit, FCP_LOGICAL_UNIT_RESET, scpnt);
+	return zfcp_task_mgmt_function(scpnt, FCP_LOGICAL_UNIT_RESET);
 }
 
 static int zfcp_scsi_eh_target_reset_handler(struct scsi_cmnd *scpnt)
 {
-	struct zfcp_unit *unit = scpnt->device->hostdata;
-
-	if (!unit) {
-		WARN_ON(1);
-		return SUCCESS;
-	}
-	return zfcp_task_mgmt_function(unit, FCP_TARGET_RESET, scpnt);
+	return zfcp_task_mgmt_function(scpnt, FCP_TARGET_RESET);
 }
 
 static int zfcp_scsi_eh_host_reset_handler(struct scsi_cmnd *scpnt)
 {
-	struct zfcp_unit *unit;
-	struct zfcp_adapter *adapter;
+	struct zfcp_unit *unit = scpnt->device->hostdata;
+	struct zfcp_adapter *adapter = unit->port->adapter;
 
-	unit = scpnt->device->hostdata;
-	adapter = unit->port->adapter;
-	zfcp_erp_adapter_reopen(adapter, 0, 141, scpnt);
+	zfcp_erp_adapter_reopen(adapter, 0, "schrh_1", scpnt);
 	zfcp_erp_wait(adapter);
 
 	return SUCCESS;
@@ -479,6 +480,109 @@
 	rport->dev_loss_tmo = timeout;
 }
 
+/**
+ * zfcp_scsi_dev_loss_tmo_callbk - Free any reference to rport
+ * @rport: The rport that is about to be deleted.
+ */
+static void zfcp_scsi_dev_loss_tmo_callbk(struct fc_rport *rport)
+{
+	struct zfcp_port *port = rport->dd_data;
+
+	write_lock_irq(&zfcp_data.config_lock);
+	port->rport = NULL;
+	write_unlock_irq(&zfcp_data.config_lock);
+}
+
+/**
+ * zfcp_scsi_terminate_rport_io - Terminate all I/O on a rport
+ * @rport: The FC rport on which to terminate I/O
+ *
+ * Abort all pending SCSI commands for a port by closing the
+ * port. Using a reopen avoids a conflict with a shutdown
+ * overwriting a reopen.
+ */
+static void zfcp_scsi_terminate_rport_io(struct fc_rport *rport)
+{
+	struct zfcp_port *port = rport->dd_data;
+
+	zfcp_erp_port_reopen(port, 0, "sctrpi1", NULL);
+}
+
+static void zfcp_scsi_rport_register(struct zfcp_port *port)
+{
+	struct fc_rport_identifiers ids;
+	struct fc_rport *rport;
+
+	ids.node_name = port->wwnn;
+	ids.port_name = port->wwpn;
+	ids.port_id = port->d_id;
+	ids.roles = FC_RPORT_ROLE_FCP_TARGET;
+
+	rport = fc_remote_port_add(port->adapter->scsi_host, 0, &ids);
+	if (!rport) {
+		dev_err(&port->adapter->ccw_device->dev,
+			"Registering port 0x%016Lx failed\n",
+			(unsigned long long)port->wwpn);
+		return;
+	}
+
+	rport->dd_data = port;
+	rport->maxframe_size = port->maxframe_size;
+	rport->supported_classes = port->supported_classes;
+	port->rport = rport;
+}
+
+static void zfcp_scsi_rport_block(struct zfcp_port *port)
+{
+	if (port->rport)
+		fc_remote_port_delete(port->rport);
+}
+
+void zfcp_scsi_schedule_rport_register(struct zfcp_port *port)
+{
+	zfcp_port_get(port);
+	port->rport_task = RPORT_ADD;
+
+	if (!queue_work(zfcp_data.work_queue, &port->rport_work))
+		zfcp_port_put(port);
+}
+
+void zfcp_scsi_schedule_rport_block(struct zfcp_port *port)
+{
+	zfcp_port_get(port);
+	port->rport_task = RPORT_DEL;
+
+	if (!queue_work(zfcp_data.work_queue, &port->rport_work))
+		zfcp_port_put(port);
+}
+
+void zfcp_scsi_schedule_rports_block(struct zfcp_adapter *adapter)
+{
+	struct zfcp_port *port;
+
+	list_for_each_entry(port, &adapter->port_list_head, list)
+		zfcp_scsi_schedule_rport_block(port);
+}
+
+void zfcp_scsi_rport_work(struct work_struct *work)
+{
+	struct zfcp_port *port = container_of(work, struct zfcp_port,
+					      rport_work);
+
+	while (port->rport_task) {
+		if (port->rport_task == RPORT_ADD) {
+			port->rport_task = RPORT_NONE;
+			zfcp_scsi_rport_register(port);
+		} else {
+			port->rport_task = RPORT_NONE;
+			zfcp_scsi_rport_block(port);
+		}
+	}
+
+	zfcp_port_put(port);
+}
+
+
 struct fc_function_template zfcp_transport_functions = {
 	.show_starget_port_id = 1,
 	.show_starget_port_name = 1,
@@ -497,6 +601,8 @@
 	.reset_fc_host_stats = zfcp_reset_fc_host_stats,
 	.set_rport_dev_loss_tmo = zfcp_set_rport_dev_loss_tmo,
 	.get_host_port_state = zfcp_get_host_port_state,
+	.dev_loss_tmo_callbk = zfcp_scsi_dev_loss_tmo_callbk,
+	.terminate_rport_io = zfcp_scsi_terminate_rport_io,
 	.show_host_port_state = 1,
 	/* no functions registered for following dynamic attributes but
 	   directly set by LLDD */
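
The reworked SCSI eh handlers above share one bounded-retry shape: try to create the FSF request up to three times, call zfcp_erp_wait() between attempts, and bail out early once the adapter is no longer running (nothing is left to abort, so the handler reports success). A stand-alone sketch of that control flow with invented callback names; it is not zfcp code, only the pattern under the assumption that request creation can fail transiently under memory pressure:

	#include <stdbool.h>
	#include <stddef.h>

	struct demo_req;	/* opaque stand-in for struct zfcp_fsf_req */

	struct demo_ops {	/* hypothetical hooks standing in for zfcp internals */
		struct demo_req *(*try_send)(void *arg);	/* may return NULL transiently */
		void (*wait_for_recovery)(void);		/* like zfcp_erp_wait() */
		bool (*adapter_running)(void);
	};

	/* Returns the request or NULL; *gave_up tells the caller whether the
	 * adapter went away (report success) or all attempts failed (failure). */
	static struct demo_req *demo_send_with_retry(const struct demo_ops *ops,
						     void *arg, bool *gave_up)
	{
		struct demo_req *req = NULL;
		int retry = 3;

		*gave_up = false;
		while (retry--) {
			req = ops->try_send(arg);
			if (req)
				break;
			ops->wait_for_recovery();
			if (!ops->adapter_running()) {
				*gave_up = true;
				break;
			}
		}
		return req;
	}
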
diff --git a/drivers/s390/scsi/zfcp_sysfs.c b/drivers/s390/scsi/zfcp_sysfs.c
index 899af2b..9a3b8e2 100644
--- a/drivers/s390/scsi/zfcp_sysfs.c
+++ b/drivers/s390/scsi/zfcp_sysfs.c
@@ -112,9 +112,9 @@
 		     zfcp_sysfs_##_feat##_failed_show,			       \
 		     zfcp_sysfs_##_feat##_failed_store);
 
-ZFCP_SYSFS_FAILED(zfcp_adapter, adapter, adapter, 44, 93);
-ZFCP_SYSFS_FAILED(zfcp_port, port, port->adapter, 45, 96);
-ZFCP_SYSFS_FAILED(zfcp_unit, unit, unit->port->adapter, 46, 97);
+ZFCP_SYSFS_FAILED(zfcp_adapter, adapter, adapter, "syafai1", "syafai2");
+ZFCP_SYSFS_FAILED(zfcp_port, port, port->adapter, "sypfai1", "sypfai2");
+ZFCP_SYSFS_FAILED(zfcp_unit, unit, unit->port->adapter, "syufai1", "syufai2");
 
 static ssize_t zfcp_sysfs_port_rescan_store(struct device *dev,
 					    struct device_attribute *attr,
@@ -168,7 +168,7 @@
 		goto out;
 	}
 
-	zfcp_erp_port_shutdown(port, 0, 92, NULL);
+	zfcp_erp_port_shutdown(port, 0, "syprs_1", NULL);
 	zfcp_erp_wait(adapter);
 	zfcp_port_put(port);
 	zfcp_port_dequeue(port);
@@ -222,7 +222,7 @@
 
 	retval = 0;
 
-	zfcp_erp_unit_reopen(unit, 0, 94, NULL);
+	zfcp_erp_unit_reopen(unit, 0, "syuas_1", NULL);
 	zfcp_erp_wait(unit->port->adapter);
 	zfcp_unit_put(unit);
 out:
@@ -268,7 +268,7 @@
 		goto out;
 	}
 
-	zfcp_erp_unit_shutdown(unit, 0, 95, NULL);
+	zfcp_erp_unit_shutdown(unit, 0, "syurs_1", NULL);
 	zfcp_erp_wait(unit->port->adapter);
 	zfcp_unit_put(unit);
 	zfcp_unit_dequeue(unit);
@@ -318,10 +318,9 @@
 	struct zfcp_unit *unit = sdev->hostdata;			\
 	struct zfcp_latencies *lat = &unit->latencies;			\
 	struct zfcp_adapter *adapter = unit->port->adapter;		\
-	unsigned long flags;						\
 	unsigned long long fsum, fmin, fmax, csum, cmin, cmax, cc;	\
 									\
-	spin_lock_irqsave(&lat->lock, flags);				\
+	spin_lock_bh(&lat->lock);					\
 	fsum = lat->_name.fabric.sum * adapter->timer_ticks;		\
 	fmin = lat->_name.fabric.min * adapter->timer_ticks;		\
 	fmax = lat->_name.fabric.max * adapter->timer_ticks;		\
@@ -329,7 +328,7 @@
 	cmin = lat->_name.channel.min * adapter->timer_ticks;		\
 	cmax = lat->_name.channel.max * adapter->timer_ticks;		\
 	cc  = lat->_name.counter;					\
-	spin_unlock_irqrestore(&lat->lock, flags);			\
+	spin_unlock_bh(&lat->lock);					\
 									\
 	do_div(fsum, 1000);						\
 	do_div(fmin, 1000);						\
@@ -487,7 +486,8 @@
 	struct zfcp_adapter *adapter =
 		(struct zfcp_adapter *) scsi_host->hostdata[0];
 
-	return sprintf(buf, "%d\n", atomic_read(&adapter->qdio_outb_full));
+	return sprintf(buf, "%d %llu\n", atomic_read(&adapter->qdio_outb_full),
+		       (unsigned long long)adapter->req_q_util);
 }
 static DEVICE_ATTR(queue_full, S_IRUGO, zfcp_sysfs_adapter_q_full_show, NULL);
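
The latency statistics lock now only needs to exclude softirq context (the I/O completion path), which is presumably why the irqsave variant was dropped: disabling bottom halves around the sysfs reader is sufficient and cheaper than disabling interrupts. A generic sketch of that split, with illustrative names:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long long stats_sum;

    /* runs in softirq (completion) context; softirqs never preempt themselves */
    static void stats_update(unsigned long long delta)
    {
            spin_lock(&stats_lock);
            stats_sum += delta;
            spin_unlock(&stats_lock);
    }

    /* runs in process context (e.g. a sysfs show); the _bh variant keeps the
     * softirq updater off this CPU while the lock is held */
    static unsigned long long stats_read(void)
    {
            unsigned long long v;

            spin_lock_bh(&stats_lock);
            v = stats_sum;
            spin_unlock_bh(&stats_lock);
            return v;
    }
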
 
diff --git a/drivers/scsi/3w-9xxx.c b/drivers/scsi/3w-9xxx.c
index 5311317..a12783e 100644
--- a/drivers/scsi/3w-9xxx.c
+++ b/drivers/scsi/3w-9xxx.c
@@ -4,7 +4,7 @@
    Written By: Adam Radford <linuxraid@amcc.com>
    Modifications By: Tom Couch <linuxraid@amcc.com>
 
-   Copyright (C) 2004-2008 Applied Micro Circuits Corporation.
+   Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
 
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
@@ -75,6 +75,7 @@
                  Add MSI support and "use_msi" module parameter.
                  Fix bug in twa_get_param() on 4GB+.
                  Use pci_resource_len() for ioremap().
+   2.26.02.012 - Add power management support.
 */
 
 #include <linux/module.h>
@@ -99,7 +100,7 @@
 #include "3w-9xxx.h"
 
 /* Globals */
-#define TW_DRIVER_VERSION "2.26.02.011"
+#define TW_DRIVER_VERSION "2.26.02.012"
 static TW_Device_Extension *twa_device_extension_list[TW_MAX_SLOT];
 static unsigned int twa_device_extension_count;
 static int twa_major = -1;
@@ -2182,6 +2183,98 @@
 	twa_device_extension_count--;
 } /* End twa_remove() */
 
+#ifdef CONFIG_PM
+/* This function is called on PCI suspend */
+static int twa_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+	struct Scsi_Host *host = pci_get_drvdata(pdev);
+	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+	printk(KERN_WARNING "3w-9xxx: Suspending host %d.\n", tw_dev->host->host_no);
+
+	TW_DISABLE_INTERRUPTS(tw_dev);
+	free_irq(tw_dev->tw_pci_dev->irq, tw_dev);
+
+	if (test_bit(TW_USING_MSI, &tw_dev->flags))
+		pci_disable_msi(pdev);
+
+	/* Tell the card we are shutting down */
+	if (twa_initconnection(tw_dev, 1, 0, 0, 0, 0, 0, NULL, NULL, NULL, NULL, NULL)) {
+		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x38, "Connection shutdown failed during suspend");
+	} else {
+		printk(KERN_WARNING "3w-9xxx: Suspend complete.\n");
+	}
+	TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+
+	pci_save_state(pdev);
+	pci_disable_device(pdev);
+	pci_set_power_state(pdev, pci_choose_state(pdev, state));
+
+	return 0;
+} /* End twa_suspend() */
+
+/* This function is called on PCI resume */
+static int twa_resume(struct pci_dev *pdev)
+{
+	int retval = 0;
+	struct Scsi_Host *host = pci_get_drvdata(pdev);
+	TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+	printk(KERN_WARNING "3w-9xxx: Resuming host %d.\n", tw_dev->host->host_no);
+	pci_set_power_state(pdev, PCI_D0);
+	pci_enable_wake(pdev, PCI_D0, 0);
+	pci_restore_state(pdev);
+
+	retval = pci_enable_device(pdev);
+	if (retval) {
+		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x39, "Enable device failed during resume");
+		return retval;
+	}
+
+	pci_set_master(pdev);
+	pci_try_set_mwi(pdev);
+
+	if (pci_set_dma_mask(pdev, DMA_64BIT_MASK)
+	    || pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
+		if (pci_set_dma_mask(pdev, DMA_32BIT_MASK)
+		    || pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK)) {
+			TW_PRINTK(host, TW_DRIVER, 0x40, "Failed to set dma mask during resume");
+			retval = -ENODEV;
+			goto out_disable_device;
+		}
+
+	/* Initialize the card */
+	if (twa_reset_sequence(tw_dev, 0)) {
+		retval = -ENODEV;
+		goto out_disable_device;
+	}
+
+	/* Now setup the interrupt handler */
+	retval = request_irq(pdev->irq, twa_interrupt, IRQF_SHARED, "3w-9xxx", tw_dev);
+	if (retval) {
+		TW_PRINTK(tw_dev->host, TW_DRIVER, 0x42, "Error requesting IRQ during resume");
+		retval = -ENODEV;
+		goto out_disable_device;
+	}
+
+	/* Now enable MSI if enabled */
+	if (test_bit(TW_USING_MSI, &tw_dev->flags))
+		pci_enable_msi(pdev);
+
+	/* Re-enable interrupts on the card */
+	TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+
+	printk(KERN_WARNING "3w-9xxx: Resume complete.\n");
+	return 0;
+
+out_disable_device:
+	scsi_remove_host(host);
+	pci_disable_device(pdev);
+
+	return retval;
+} /* End twa_resume() */
+#endif
+
 /* PCI Devices supported by this driver */
 static struct pci_device_id twa_pci_tbl[] __devinitdata = {
 	{ PCI_VENDOR_ID_3WARE, PCI_DEVICE_ID_3WARE_9000,
@@ -2202,6 +2295,10 @@
 	.id_table	= twa_pci_tbl,
 	.probe		= twa_probe,
 	.remove		= twa_remove,
+#ifdef CONFIG_PM
+	.suspend	= twa_suspend,
+	.resume		= twa_resume,
+#endif
 	.shutdown	= twa_shutdown
 };
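
The resume path above uses the classic try-64-bit-then-fall-back-to-32-bit DMA mask sequence. A minimal sketch of just that step, using the same (older) pci_set_dma_mask()/DMA_64BIT_MASK interface that appears in the patch:

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    static int set_dma_masks(struct pci_dev *pdev)
    {
            if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK) &&
                !pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
                    return 0;               /* 64-bit streaming + coherent DMA */

            if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK) &&
                !pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK))
                    return 0;               /* fall back to 32-bit DMA */

            return -ENODEV;                 /* no usable DMA configuration */
    }
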
 
diff --git a/drivers/scsi/3w-9xxx.h b/drivers/scsi/3w-9xxx.h
index 1729a87..2893eec 100644
--- a/drivers/scsi/3w-9xxx.h
+++ b/drivers/scsi/3w-9xxx.h
@@ -4,7 +4,7 @@
    Written By: Adam Radford <linuxraid@amcc.com>
    Modifications By: Tom Couch <linuxraid@amcc.com>
 
-   Copyright (C) 2004-2008 Applied Micro Circuits Corporation.
+   Copyright (C) 2004-2009 Applied Micro Circuits Corporation.
 
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 256c7be..e2f44e6 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -224,14 +224,15 @@
 	  can enable logging by saying Y to "/proc file system support" and
 	  "Sysctl support" below and executing the command
 
-	  echo "scsi log token [level]" > /proc/scsi/scsi
+	  echo <bitmask> > /proc/sys/dev/scsi/logging_level
 
-	  at boot time after the /proc file system has been mounted.
+	  where <bitmask> is a four byte value representing the logging type
+	  and logging level for each type of logging selected.
 
-	  There are a number of things that can be used for 'token' (you can
-	  find them in the source: <file:drivers/scsi/scsi.c>), and this
-	  allows you to select the types of information you want, and the
-	  level allows you to select the level of verbosity.
+	  There are a number of logging types and you can find them in the
+	  source at <file:drivers/scsi/scsi_logging.h>. The logging levels
+	  are also described in that file and they determine the verbosity of
+	  the logging for each logging type.
 
 	  If you say N here, it may be harder to track down some types of SCSI
 	  problems. If you say Y here your kernel will be somewhat larger, but
@@ -570,6 +571,7 @@
 	  To enable this function, choose Y here.
 
 source "drivers/scsi/megaraid/Kconfig.megaraid"
+source "drivers/scsi/mpt2sas/Kconfig"
 
 config SCSI_HPTIOP
 	tristate "HighPoint RocketRAID 3xxx/4xxx Controller support"
@@ -608,6 +610,7 @@
 config LIBFC
 	tristate "LibFC module"
 	select SCSI_FC_ATTRS
+	select CRC32
 	---help---
 	  Fibre Channel library module
 
@@ -1535,6 +1538,7 @@
 config SCSI_DEBUG
 	tristate "SCSI debugging host simulator"
 	depends on SCSI
+	select CRC_T10DIF
 	help
 	  This is a host adapter simulator that can simulate multiple hosts
 	  each with multiple dummy SCSI devices (disks). It defaults to one
@@ -1803,4 +1807,6 @@
 
 source "drivers/scsi/device_handler/Kconfig"
 
+source "drivers/scsi/osd/Kconfig"
+
 endmenu
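
The new help text points at /proc/sys/dev/scsi/logging_level, where each logging type occupies its own small bit field inside one 32-bit word. A rough sketch of composing such a value; the 3-bit field width and the shift constants mirror what scsi_logging.h defines (SCSI_LOG_ERROR_SHIFT and friends) and should be checked against that header:

    /* illustrative copies of the scsi_logging.h layout */
    #define MY_LOG_ERROR_SHIFT      0       /* cf. SCSI_LOG_ERROR_SHIFT   */
    #define MY_LOG_TIMEOUT_SHIFT    3       /* cf. SCSI_LOG_TIMEOUT_SHIFT */
    #define MY_LOG_FIELD_BITS       3       /* each type gets 3 bits      */

    static unsigned int scsi_log_field(unsigned int level, unsigned int shift)
    {
            return (level & ((1U << MY_LOG_FIELD_BITS) - 1)) << shift;
    }

    /* e.g. error logging at level 2 plus timeout logging at level 1:
     * (2 << 0) | (1 << 3) = 10, so: echo 10 > /proc/sys/dev/scsi/logging_level
     */
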
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index 7461eb0..cf79296 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -99,6 +99,7 @@
 obj-$(CONFIG_MEGARAID_LEGACY)	+= megaraid.o
 obj-$(CONFIG_MEGARAID_NEWGEN)	+= megaraid/
 obj-$(CONFIG_MEGARAID_SAS)	+= megaraid/
+obj-$(CONFIG_SCSI_MPT2SAS)	+= mpt2sas/
 obj-$(CONFIG_SCSI_ACARD)	+= atp870u.o
 obj-$(CONFIG_SCSI_SUNESP)	+= esp_scsi.o	sun_esp.o
 obj-$(CONFIG_SCSI_GDTH)		+= gdth.o
@@ -137,6 +138,8 @@
 obj-$(CONFIG_CHR_DEV_SCH)	+= ch.o
 obj-$(CONFIG_SCSI_ENCLOSURE)	+= ses.o
 
+obj-$(CONFIG_SCSI_OSD_INITIATOR) += osd/
+
 # This goes last, so that "real" scsi devices probe earlier
 obj-$(CONFIG_SCSI_DEBUG)	+= scsi_debug.o
 
diff --git a/drivers/scsi/ch.c b/drivers/scsi/ch.c
index af97254..7b1633a 100644
--- a/drivers/scsi/ch.c
+++ b/drivers/scsi/ch.c
@@ -41,6 +41,7 @@
 MODULE_AUTHOR("Gerd Knorr <kraxel@bytesex.org>");
 MODULE_LICENSE("GPL");
 MODULE_ALIAS_CHARDEV_MAJOR(SCSI_CHANGER_MAJOR);
+MODULE_ALIAS_SCSI_DEVICE(TYPE_MEDIUM_CHANGER);
 
 static int init = 1;
 module_param(init, int, 0444);
diff --git a/drivers/scsi/constants.c b/drivers/scsi/constants.c
index 4003dee..e79e181 100644
--- a/drivers/scsi/constants.c
+++ b/drivers/scsi/constants.c
@@ -1373,21 +1373,14 @@
 "DRIVER_INVALID", "DRIVER_TIMEOUT", "DRIVER_HARD", "DRIVER_SENSE"};
 #define NUM_DRIVERBYTE_STRS ARRAY_SIZE(driverbyte_table)
 
-static const char * const driversuggest_table[]={"SUGGEST_OK",
-"SUGGEST_RETRY", "SUGGEST_ABORT", "SUGGEST_REMAP", "SUGGEST_DIE",
-"SUGGEST_5", "SUGGEST_6", "SUGGEST_7", "SUGGEST_SENSE"};
-#define NUM_SUGGEST_STRS ARRAY_SIZE(driversuggest_table)
-
 void scsi_show_result(int result)
 {
 	int hb = host_byte(result);
-	int db = (driver_byte(result) & DRIVER_MASK);
-	int su = ((driver_byte(result) & SUGGEST_MASK) >> 4);
+	int db = driver_byte(result);
 
-	printk("Result: hostbyte=%s driverbyte=%s,%s\n",
+	printk("Result: hostbyte=%s driverbyte=%s\n",
 	       (hb < NUM_HOSTBYTE_STRS ? hostbyte_table[hb]     : "invalid"),
-	       (db < NUM_DRIVERBYTE_STRS ? driverbyte_table[db] : "invalid"),
-	       (su < NUM_SUGGEST_STRS ? driversuggest_table[su] : "invalid"));
+	       (db < NUM_DRIVERBYTE_STRS ? driverbyte_table[db] : "invalid"));
 }
 
 #else
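
With the SUGGEST_* values gone, scsi_show_result() indexes driverbyte_table with the whole driver byte. For reference, the 32-bit result word packs (low to high) the status, message, host and driver bytes; a minimal sketch using the host_byte()/driver_byte() macros from <scsi/scsi.h>:

    #include <linux/kernel.h>
    #include <scsi/scsi.h>

    static void show_result_bytes(int result)
    {
            unsigned int hb = host_byte(result);    /* (result >> 16) & 0xff */
            unsigned int db = driver_byte(result);  /* (result >> 24) & 0xff */

            /* e.g. DRIVER_INVALID << 24 | DID_ABORT << 16, as used in the
             * hptiop.c hunk below, yields db == DRIVER_INVALID and
             * hb == DID_ABORT */
            printk(KERN_INFO "hostbyte=0x%x driverbyte=0x%x\n", hb, db);
    }
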
diff --git a/drivers/scsi/cxgb3i/cxgb3i_ddp.c b/drivers/scsi/cxgb3i/cxgb3i_ddp.c
index a83d36e..4eb6f55 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_ddp.c
+++ b/drivers/scsi/cxgb3i/cxgb3i_ddp.c
@@ -196,7 +196,7 @@
 }
 
 /**
- * cxgb3i_ddp_find_page_index - return ddp page index for a given page size.
+ * cxgb3i_ddp_find_page_index - return ddp page index for a given page size
  * @pgsz: page size
  * return the ddp page index, if no match is found return DDP_PGIDX_MAX.
  */
@@ -355,8 +355,7 @@
  * @tdev: t3cdev adapter
  * @tid: connection id
  * @tformat: tag format
- * @tagp: the s/w tag, if ddp setup is successful, it will be updated with
- *	  ddp/hw tag
+ * @tagp: contains s/w tag initially, will be updated with ddp/hw tag
  * @gl: the page memory list
  * @gfp: allocation mode
  *
diff --git a/drivers/scsi/cxgb3i/cxgb3i_ddp.h b/drivers/scsi/cxgb3i/cxgb3i_ddp.h
index 3faae78..75a63a8 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_ddp.h
+++ b/drivers/scsi/cxgb3i/cxgb3i_ddp.h
@@ -185,12 +185,11 @@
 }
 
 /**
- * cxgb3i_sw_tag_usable - check if a given s/w tag has enough bits left for
- *			  the reserved/hw bits
+ * cxgb3i_sw_tag_usable - check if s/w tag has enough bits left for hw bits
  * @tformat: tag format information
  * @sw_tag: s/w tag to be checked
  *
- * return true if the tag is a ddp tag, false otherwise.
+ * return true if the tag can be used for hw ddp tag, false otherwise.
  */
 static inline int cxgb3i_sw_tag_usable(struct cxgb3i_tag_format *tformat,
 					u32 sw_tag)
@@ -222,8 +221,7 @@
 }
 
 /**
- * cxgb3i_ddp_tag_base - shift the s/w tag bits so that reserved bits are not
- *			 used.
+ * cxgb3i_ddp_tag_base - shift s/w tag bits so that reserved bits are not used
  * @tformat: tag format information
  * @sw_tag: s/w tag to be checked
  */
diff --git a/drivers/scsi/cxgb3i/cxgb3i_iscsi.c b/drivers/scsi/cxgb3i/cxgb3i_iscsi.c
index fa2a44f..e185ded 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_iscsi.c
+++ b/drivers/scsi/cxgb3i/cxgb3i_iscsi.c
@@ -101,8 +101,7 @@
 }
 
 /**
- * cxgb3i_adapter_remove - release all the resources held and cleanup any
- *	h/w settings
+ * cxgb3i_adapter_remove - release the resources held and cleanup h/w settings
  * @t3dev: t3cdev adapter
  */
 void cxgb3i_adapter_remove(struct t3cdev *t3dev)
@@ -135,8 +134,7 @@
 }
 
 /**
- * cxgb3i_hba_find_by_netdev - find the cxgb3i_hba structure with a given
- *	net_device
+ * cxgb3i_hba_find_by_netdev - find the cxgb3i_hba structure via net_device
  * @t3dev: t3cdev adapter
  */
 struct cxgb3i_hba *cxgb3i_hba_find_by_netdev(struct net_device *ndev)
@@ -170,8 +168,7 @@
 	int err;
 
 	shost = iscsi_host_alloc(&cxgb3i_host_template,
-				 sizeof(struct cxgb3i_hba),
-				 CXGB3I_SCSI_QDEPTH_DFLT);
+				 sizeof(struct cxgb3i_hba), 1);
 	if (!shost) {
 		cxgb3i_log_info("iscsi_host_alloc failed.\n");
 		return NULL;
@@ -335,13 +332,12 @@
  * @cmds_max:		max # of commands
  * @qdepth:		scsi queue depth
  * @initial_cmdsn:	initial iscsi CMDSN for this session
- * @host_no:		pointer to return host no
  *
  * Creates a new iSCSI session
  */
 static struct iscsi_cls_session *
 cxgb3i_session_create(struct iscsi_endpoint *ep, u16 cmds_max, u16 qdepth,
-		      u32 initial_cmdsn, u32 *host_no)
+		      u32 initial_cmdsn)
 {
 	struct cxgb3i_endpoint *cep;
 	struct cxgb3i_hba *hba;
@@ -360,8 +356,6 @@
 	cxgb3i_api_debug("ep 0x%p, cep 0x%p, hba 0x%p.\n", ep, cep, hba);
 	BUG_ON(hba != iscsi_host_priv(shost));
 
-	*host_no = shost->host_no;
-
 	cls_session = iscsi_session_setup(&cxgb3i_iscsi_transport, shost,
 					  cmds_max,
 					  sizeof(struct iscsi_tcp_task) +
@@ -394,9 +388,9 @@
 }
 
 /**
- * cxgb3i_conn_max_xmit_dlength -- check the max. xmit pdu segment size,
- * reduce it to be within the hardware limit if needed
+ * cxgb3i_conn_max_xmit_dlength -- calc the max. xmit pdu segment size
  * @conn: iscsi connection
+ * check the max. xmit pdu payload, reduce it if needed
  */
 static inline int cxgb3i_conn_max_xmit_dlength(struct iscsi_conn *conn)
 
@@ -417,8 +411,7 @@
 }
 
 /**
- * cxgb3i_conn_max_recv_dlength -- check the max. recv pdu segment size against
- * the hardware limit
+ * cxgb3i_conn_max_recv_dlength -- check the max. recv pdu segment size
  * @conn: iscsi connection
  * return 0 if the value is valid, < 0 otherwise.
  */
@@ -759,9 +752,9 @@
 
 /**
  * cxgb3i_reserve_itt - generate tag for a given task
- * Try to set up ddp for a scsi read task.
  * @task: iscsi task
  * @hdr_itt: tag, filled in by this function
+ * Set up ddp for scsi read tasks if possible.
  */
 int cxgb3i_reserve_itt(struct iscsi_task *task, itt_t *hdr_itt)
 {
@@ -809,9 +802,9 @@
 
 /**
  * cxgb3i_release_itt - release the tag for a given task
- * if the tag is a ddp tag, release the ddp setup
  * @task:	iscsi task
  * @hdr_itt:	tag
+ * If the tag is a ddp tag, release the ddp setup
  */
 void cxgb3i_release_itt(struct iscsi_task *task, itt_t hdr_itt)
 {
@@ -843,7 +836,7 @@
 	.can_queue		= CXGB3I_SCSI_QDEPTH_DFLT - 1,
 	.sg_tablesize		= SG_ALL,
 	.max_sectors		= 0xFFFF,
-	.cmd_per_lun		= ISCSI_DEF_CMD_PER_LUN,
+	.cmd_per_lun		= CXGB3I_SCSI_QDEPTH_DFLT,
 	.eh_abort_handler	= iscsi_eh_abort,
 	.eh_device_reset_handler = iscsi_eh_device_reset,
 	.eh_target_reset_handler = iscsi_eh_target_reset,
diff --git a/drivers/scsi/cxgb3i/cxgb3i_offload.c b/drivers/scsi/cxgb3i/cxgb3i_offload.c
index de3b3b6..c2e434e 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_offload.c
+++ b/drivers/scsi/cxgb3i/cxgb3i_offload.c
@@ -1417,8 +1417,7 @@
 }
 
 /**
- * cxgb3i_c3cn_release - close and release an iscsi tcp connection and any
- * 	resource held
+ * cxgb3i_c3cn_release - close and release an iscsi tcp connection
  * @c3cn: the iscsi tcp connection
  */
 void cxgb3i_c3cn_release(struct s3_conn *c3cn)
diff --git a/drivers/scsi/cxgb3i/cxgb3i_offload.h b/drivers/scsi/cxgb3i/cxgb3i_offload.h
index 6344b9e..275f23f 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_offload.h
+++ b/drivers/scsi/cxgb3i/cxgb3i_offload.h
@@ -139,6 +139,7 @@
 
 /**
  * cxgb3i_sdev_data - Per adapter data.
+ *
  * Linked off of each Ethernet device port on the adapter.
  * Also available via the t3cdev structure since we have pointers to our port
  * net_device's there ...
diff --git a/drivers/scsi/cxgb3i/cxgb3i_pdu.c b/drivers/scsi/cxgb3i/cxgb3i_pdu.c
index 17115c2..7eebc9a 100644
--- a/drivers/scsi/cxgb3i/cxgb3i_pdu.c
+++ b/drivers/scsi/cxgb3i/cxgb3i_pdu.c
@@ -479,7 +479,7 @@
 	cxgb3i_tx_debug("cn 0x%p.\n", c3cn);
 	if (conn) {
 		cxgb3i_tx_debug("cn 0x%p, cid %d.\n", c3cn, conn->id);
-		scsi_queue_work(conn->session->host, &conn->xmitwork);
+		iscsi_conn_queue_work(conn);
 	}
 }
 
diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
index e356b43..dba154c 100644
--- a/drivers/scsi/device_handler/scsi_dh_alua.c
+++ b/drivers/scsi/device_handler/scsi_dh_alua.c
@@ -247,8 +247,8 @@
 	/* Prepare the data buffer */
 	memset(h->buff, 0, stpg_len);
 	h->buff[4] = TPGS_STATE_OPTIMIZED & 0x0f;
-	h->buff[6] = (h->group_id >> 8) & 0x0f;
-	h->buff[7] = h->group_id & 0x0f;
+	h->buff[6] = (h->group_id >> 8) & 0xff;
+	h->buff[7] = h->group_id & 0xff;
 
 	rq = get_alua_req(sdev, h->buff, stpg_len, WRITE);
 	if (!rq)
@@ -461,6 +461,15 @@
 			 */
 			return ADD_TO_MLQUEUE;
 		}
+		if (sense_hdr->asc == 0x3f && sense_hdr->ascq == 0x0e) {
+			/*
+			 * REPORTED_LUNS_DATA_HAS_CHANGED is reported
+			 * when switching controllers on targets like
+			 * Intel Multi-Flex. We can just retry.
+			 */
+			return ADD_TO_MLQUEUE;
+		}
+
 		break;
 	}
 
@@ -691,6 +700,7 @@
 	{"IBM", "2107900" },
 	{"IBM", "2145" },
 	{"Pillar", "Axiom" },
+	{"Intel", "Multi-Flex"},
 	{NULL, NULL}
 };
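
The SET TARGET PORT GROUPS fix above matters because the group ID is a full 16-bit big-endian field; masking each byte with 0x0f would silently drop the upper four bits of both bytes. A minimal sketch of the packing, with put_unaligned_be16() shown as the equivalent helper:

    #include <linux/types.h>
    #include <asm/unaligned.h>

    static void pack_group_id(unsigned char *buff, u16 group_id)
    {
            /* open-coded big-endian store, as in the fixed code */
            buff[6] = (group_id >> 8) & 0xff;
            buff[7] = group_id & 0xff;

            /* equivalent helper form */
            put_unaligned_be16(group_id, &buff[6]);
    }
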
 
diff --git a/drivers/scsi/device_handler/scsi_dh_rdac.c b/drivers/scsi/device_handler/scsi_dh_rdac.c
index 5366476..43b8c51 100644
--- a/drivers/scsi/device_handler/scsi_dh_rdac.c
+++ b/drivers/scsi/device_handler/scsi_dh_rdac.c
@@ -449,28 +449,40 @@
 				    unsigned char *sensebuf)
 {
 	struct scsi_sense_hdr sense_hdr;
-	int sense, err = SCSI_DH_IO, ret;
+	int err = SCSI_DH_IO, ret;
 
 	ret = scsi_normalize_sense(sensebuf, SCSI_SENSE_BUFFERSIZE, &sense_hdr);
 	if (!ret)
 		goto done;
 
 	err = SCSI_DH_OK;
-	sense = (sense_hdr.sense_key << 16) | (sense_hdr.asc << 8) |
-			sense_hdr.ascq;
-	/* If it is retryable failure, submit the c9 inquiry again */
-	if (sense == 0x59136 || sense == 0x68b02 || sense == 0xb8b02 ||
-			    sense == 0x62900) {
-		/* 0x59136    - Command lock contention
-		 * 0x[6b]8b02 - Quiesense in progress or achieved
-		 * 0x62900    - Power On, Reset, or Bus Device Reset
-		 */
+
+	switch (sense_hdr.sense_key) {
+	case NO_SENSE:
+	case ABORTED_COMMAND:
+	case UNIT_ATTENTION:
 		err = SCSI_DH_RETRY;
+		break;
+	case NOT_READY:
+		if (sense_hdr.asc == 0x04 && sense_hdr.ascq == 0x01)
+			/* LUN Not Ready and is in the Process of Becoming
+			 * Ready
+			 */
+			err = SCSI_DH_RETRY;
+		break;
+	case ILLEGAL_REQUEST:
+		if (sense_hdr.asc == 0x91 && sense_hdr.ascq == 0x36)
+			/*
+			 * Command Lock contention
+			 */
+			err = SCSI_DH_RETRY;
+		break;
+	default:
+		sdev_printk(KERN_INFO, sdev,
+			    "MODE_SELECT failed with sense %02x/%02x/%02x.\n",
+			    sense_hdr.sense_key, sense_hdr.asc, sense_hdr.ascq);
 	}
 
-	if (sense)
-		sdev_printk(KERN_INFO, sdev,
-			"MODE_SELECT failed with sense 0x%x.\n", sense);
 done:
 	return err;
 }
@@ -562,6 +574,12 @@
 			 * Just retry and wait.
 			 */
 			return ADD_TO_MLQUEUE;
+		if (sense_hdr->asc == 0xA1  && sense_hdr->ascq == 0x02)
+			/* LUN Not Ready - Quiescence in progress
+			 * or has been achieved
+			 * Just retry.
+			 */
+			return ADD_TO_MLQUEUE;
 		break;
 	case ILLEGAL_REQUEST:
 		if (sense_hdr->asc == 0x94 && sense_hdr->ascq == 0x01) {
@@ -579,6 +597,11 @@
 			 * Power On, Reset, or Bus Device Reset, just retry.
 			 */
 			return ADD_TO_MLQUEUE;
+		if (sense_hdr->asc == 0x8b && sense_hdr->ascq == 0x02)
+			/*
+			 * Quiescence in progress, just retry.
+			 */
+			return ADD_TO_MLQUEUE;
 		break;
 	}
 	/* success just means we do not care what scsi-ml does */
diff --git a/drivers/scsi/fcoe/fcoe_sw.c b/drivers/scsi/fcoe/fcoe_sw.c
index da210eb..2bbbe3c 100644
--- a/drivers/scsi/fcoe/fcoe_sw.c
+++ b/drivers/scsi/fcoe/fcoe_sw.c
@@ -133,6 +133,13 @@
 	/* lport fc_lport related configuration */
 	fc_lport_config(lp);
 
+	/* offload related configuration */
+	lp->crc_offload = 0;
+	lp->seq_offload = 0;
+	lp->lro_enabled = 0;
+	lp->lro_xid = 0;
+	lp->lso_max = 0;
+
 	return 0;
 }
 
@@ -186,7 +193,27 @@
 	if (fc->real_dev->features & NETIF_F_SG)
 		lp->sg_supp = 1;
 
-
+#ifdef NETIF_F_FCOE_CRC
+	if (netdev->features & NETIF_F_FCOE_CRC) {
+		lp->crc_offload = 1;
+		printk(KERN_DEBUG "fcoe:%s supports FCCRC offload\n",
+		       netdev->name);
+	}
+#endif
+#ifdef NETIF_F_FSO
+	if (netdev->features & NETIF_F_FSO) {
+		lp->seq_offload = 1;
+		lp->lso_max = netdev->gso_max_size;
+		printk(KERN_DEBUG "fcoe:%s supports LSO for max len 0x%x\n",
+		       netdev->name, lp->lso_max);
+	}
+#endif
+	if (netdev->fcoe_ddp_xid) {
+		lp->lro_enabled = 1;
+		lp->lro_xid = netdev->fcoe_ddp_xid;
+		printk(KERN_DEBUG "fcoe:%s supports LRO for max xid 0x%x\n",
+		       netdev->name, lp->lro_xid);
+	}
 	skb_queue_head_init(&fc->fcoe_pending_queue);
 	fc->fcoe_pending_queue_active = 0;
 
@@ -346,8 +373,46 @@
 	return 0;
 }
 
+/*
+ * fcoe_sw_ddp_setup - calls LLD's ddp_setup through net_device
+ * @lp:	the corresponding fc_lport
+ * @xid: the exchange id for this ddp transfer
+ * @sgl: the scatterlist describing this transfer
+ * @sgc: number of sg items
+ *
+ * Returns : 0 no ddp
+ */
+static int fcoe_sw_ddp_setup(struct fc_lport *lp, u16 xid,
+			     struct scatterlist *sgl, unsigned int sgc)
+{
+	struct net_device *n = fcoe_netdev(lp);
+
+	if (n->netdev_ops && n->netdev_ops->ndo_fcoe_ddp_setup)
+		return n->netdev_ops->ndo_fcoe_ddp_setup(n, xid, sgl, sgc);
+
+	return 0;
+}
+
+/*
+ * fcoe_sw_ddp_done - calls LLD's ddp_done through net_device
+ * @lp:	the corresponding fc_lport
+ * @xid: the exchange id for this ddp transfer
+ *
+ * Returns : the length of data that have been completed by ddp
+ */
+static int fcoe_sw_ddp_done(struct fc_lport *lp, u16 xid)
+{
+	struct net_device *n = fcoe_netdev(lp);
+
+	if (n->netdev_ops && n->netdev_ops->ndo_fcoe_ddp_done)
+		return n->netdev_ops->ndo_fcoe_ddp_done(n, xid);
+	return 0;
+}
+
 static struct libfc_function_template fcoe_sw_libfc_fcn_templ = {
 	.frame_send = fcoe_xmit,
+	.ddp_setup = fcoe_sw_ddp_setup,
+	.ddp_done = fcoe_sw_ddp_done,
 };
 
 /**
diff --git a/drivers/scsi/fcoe/libfcoe.c b/drivers/scsi/fcoe/libfcoe.c
index 5548bf3..0d6f5be 100644
--- a/drivers/scsi/fcoe/libfcoe.c
+++ b/drivers/scsi/fcoe/libfcoe.c
@@ -423,7 +423,7 @@
 
 	/* crc offload */
 	if (likely(lp->crc_offload)) {
-		skb->ip_summed = CHECKSUM_COMPLETE;
+		skb->ip_summed = CHECKSUM_PARTIAL;
 		skb->csum_start = skb_headroom(skb);
 		skb->csum_offset = skb->len;
 		crc = 0;
@@ -460,7 +460,7 @@
 	skb_reset_mac_header(skb);
 	skb_reset_network_header(skb);
 	skb->mac_len = elen;
-	skb->protocol = htons(ETH_P_802_3);
+	skb->protocol = htons(ETH_P_FCOE);
 	skb->dev = fc->real_dev;
 
 	/* fill up mac and fcoe headers */
@@ -483,6 +483,16 @@
 		FC_FCOE_ENCAPS_VER(hp, FC_FCOE_VER);
 	hp->fcoe_sof = sof;
 
+#ifdef NETIF_F_FSO
+	/* fcoe lso, mss is in max_payload which is non-zero for FCP data */
+	if (lp->seq_offload && fr_max_payload(fp)) {
+		skb_shinfo(skb)->gso_type = SKB_GSO_FCOE;
+		skb_shinfo(skb)->gso_size = fr_max_payload(fp);
+	} else {
+		skb_shinfo(skb)->gso_type = 0;
+		skb_shinfo(skb)->gso_size = 0;
+	}
+#endif
 	/* update tx stats: regardless if LLD fails */
 	stats = lp->dev_stats[smp_processor_id()];
 	if (stats) {
@@ -623,7 +633,7 @@
 		 * it's solicited data, in which case, the FCP layer would
 		 * check it during the copy.
 		 */
-		if (lp->crc_offload)
+		if (lp->crc_offload && skb->ip_summed == CHECKSUM_UNNECESSARY)
 			fr_flags(fp) &= ~FCPHF_CRC_UNCHECKED;
 		else
 			fr_flags(fp) |= FCPHF_CRC_UNCHECKED;
diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
index aa670a1..89d41a4 100644
--- a/drivers/scsi/hosts.c
+++ b/drivers/scsi/hosts.c
@@ -176,7 +176,6 @@
 	transport_unregister_device(&shost->shost_gendev);
 	device_unregister(&shost->shost_dev);
 	device_del(&shost->shost_gendev);
-	scsi_proc_hostdir_rm(shost->hostt);
 }
 EXPORT_SYMBOL(scsi_remove_host);
 
@@ -270,6 +269,8 @@
 	struct Scsi_Host *shost = dev_to_shost(dev);
 	struct device *parent = dev->parent;
 
+	scsi_proc_hostdir_rm(shost->hostt);
+
 	if (shost->ehandler)
 		kthread_stop(shost->ehandler);
 	if (shost->work_q)
diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
index 34be88d..af1f0af 100644
--- a/drivers/scsi/hptiop.c
+++ b/drivers/scsi/hptiop.c
@@ -580,8 +580,7 @@
 		break;
 
 	default:
-		scp->result = ((DRIVER_INVALID|SUGGEST_ABORT)<<24) |
-					(DID_ABORT<<16);
+		scp->result = DRIVER_INVALID << 24 | DID_ABORT << 16;
 		break;
 	}
 
diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c
index ed1e728..93d1fbe 100644
--- a/drivers/scsi/ibmvscsi/ibmvfc.c
+++ b/drivers/scsi/ibmvscsi/ibmvfc.c
@@ -2767,6 +2767,40 @@
 		ibmvfc_init_tgt(tgt, job_step);
 }
 
+/* Defined in FC-LS */
+static const struct {
+	int code;
+	int retry;
+	int logged_in;
+} prli_rsp [] = {
+	{ 0, 1, 0 },
+	{ 1, 0, 1 },
+	{ 2, 1, 0 },
+	{ 3, 1, 0 },
+	{ 4, 0, 0 },
+	{ 5, 0, 0 },
+	{ 6, 0, 1 },
+	{ 7, 0, 0 },
+	{ 8, 1, 0 },
+};
+
+/**
+ * ibmvfc_get_prli_rsp - Find PRLI response index
+ * @flags:	PRLI response flags
+ *
+ **/
+static int ibmvfc_get_prli_rsp(u16 flags)
+{
+	int i;
+	int code = (flags & 0x0f00) >> 8;
+
+	for (i = 0; i < ARRAY_SIZE(prli_rsp); i++)
+		if (prli_rsp[i].code == code)
+			return i;
+
+	return 0;
+}
+
 /**
  * ibmvfc_tgt_prli_done - Completion handler for Process Login
  * @evt:	ibmvfc event struct
@@ -2777,15 +2811,36 @@
 	struct ibmvfc_target *tgt = evt->tgt;
 	struct ibmvfc_host *vhost = evt->vhost;
 	struct ibmvfc_process_login *rsp = &evt->xfer_iu->prli;
+	struct ibmvfc_prli_svc_parms *parms = &rsp->parms;
 	u32 status = rsp->common.status;
+	int index;
 
 	vhost->discovery_threads--;
 	ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_NONE);
 	switch (status) {
 	case IBMVFC_MAD_SUCCESS:
-		tgt_dbg(tgt, "Process Login succeeded\n");
-		tgt->need_login = 0;
-		ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_ADD_RPORT);
+		tgt_dbg(tgt, "Process Login succeeded: %X %02X %04X\n",
+			parms->type, parms->flags, parms->service_parms);
+
+		if (parms->type == IBMVFC_SCSI_FCP_TYPE) {
+			index = ibmvfc_get_prli_rsp(parms->flags);
+			if (prli_rsp[index].logged_in) {
+				if (parms->flags & IBMVFC_PRLI_EST_IMG_PAIR) {
+					tgt->need_login = 0;
+					tgt->ids.roles = 0;
+					if (parms->service_parms & IBMVFC_PRLI_TARGET_FUNC)
+						tgt->ids.roles |= FC_PORT_ROLE_FCP_TARGET;
+					if (parms->service_parms & IBMVFC_PRLI_INITIATOR_FUNC)
+						tgt->ids.roles |= FC_PORT_ROLE_FCP_INITIATOR;
+					ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_ADD_RPORT);
+				} else
+					ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT);
+			} else if (prli_rsp[index].retry)
+				ibmvfc_retry_tgt_init(tgt, ibmvfc_tgt_send_prli);
+			else
+				ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT);
+		} else
+			ibmvfc_set_tgt_action(tgt, IBMVFC_TGT_ACTION_DEL_RPORT);
 		break;
 	case IBMVFC_MAD_DRIVER_FAILED:
 		break;
@@ -2874,7 +2929,6 @@
 		tgt->ids.node_name = wwn_to_u64(rsp->service_parms.node_name);
 		tgt->ids.port_name = wwn_to_u64(rsp->service_parms.port_name);
 		tgt->ids.port_id = tgt->scsi_id;
-		tgt->ids.roles = FC_PORT_ROLE_FCP_TARGET;
 		memcpy(&tgt->service_parms, &rsp->service_parms,
 		       sizeof(tgt->service_parms));
 		memcpy(&tgt->service_parms_change, &rsp->service_parms_change,
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index 0782900..def473f 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -152,13 +152,13 @@
 MODULE_PARM_DESC(log_level, "Set to 0 - 4 for increasing verbosity of device driver");
 module_param_named(testmode, ipr_testmode, int, 0);
 MODULE_PARM_DESC(testmode, "DANGEROUS!!! Allows unsupported configurations");
-module_param_named(fastfail, ipr_fastfail, int, 0);
+module_param_named(fastfail, ipr_fastfail, int, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(fastfail, "Reduce timeouts and retries");
 module_param_named(transop_timeout, ipr_transop_timeout, int, 0);
 MODULE_PARM_DESC(transop_timeout, "Time in seconds to wait for adapter to come operational (default: 300)");
 module_param_named(enable_cache, ipr_enable_cache, int, 0);
 MODULE_PARM_DESC(enable_cache, "Enable adapter's non-volatile write cache (default: 1)");
-module_param_named(debug, ipr_debug, int, 0);
+module_param_named(debug, ipr_debug, int, S_IRUGO | S_IWUSR);
 MODULE_PARM_DESC(debug, "Enable device driver debugging logging. Set to 1 to enable. (default: 0)");
 module_param_named(dual_ioa_raid, ipr_dual_ioa_raid, int, 0);
 MODULE_PARM_DESC(dual_ioa_raid, "Enable dual adapter RAID support. Set to 1 to enable. (default: 1)");
@@ -354,6 +354,8 @@
 	"9076: Configuration error, missing remote IOA"},
 	{0x06679100, 0, IPR_DEFAULT_LOG_LEVEL,
 	"4050: Enclosure does not support a required multipath function"},
+	{0x06690000, 0, IPR_DEFAULT_LOG_LEVEL,
+	"4070: Logically bad block written on device"},
 	{0x06690200, 0, IPR_DEFAULT_LOG_LEVEL,
 	"9041: Array protection temporarily suspended"},
 	{0x06698200, 0, IPR_DEFAULT_LOG_LEVEL,
@@ -7147,6 +7149,7 @@
 
 	ENTER;
 	free_irq(pdev->irq, ioa_cfg);
+	pci_disable_msi(pdev);
 	iounmap(ioa_cfg->hdw_dma_regs);
 	pci_release_regions(pdev);
 	ipr_free_mem(ioa_cfg);
@@ -7432,6 +7435,11 @@
 		goto out;
 	}
 
+	if (!(rc = pci_enable_msi(pdev)))
+		dev_info(&pdev->dev, "MSI enabled\n");
+	else if (ipr_debug)
+		dev_info(&pdev->dev, "Cannot enable MSI\n");
+
 	dev_info(&pdev->dev, "Found IOA with IRQ: %d\n", pdev->irq);
 
 	host = scsi_host_alloc(&driver_template, sizeof(*ioa_cfg));
@@ -7574,6 +7582,7 @@
 out_scsi_host_put:
 	scsi_host_put(host);
 out_disable:
+	pci_disable_msi(pdev);
 	pci_disable_device(pdev);
 	goto out;
 }
diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
index 8f872f8..79a3ae4 100644
--- a/drivers/scsi/ipr.h
+++ b/drivers/scsi/ipr.h
@@ -37,8 +37,8 @@
 /*
  * Literals
  */
-#define IPR_DRIVER_VERSION "2.4.1"
-#define IPR_DRIVER_DATE "(April 24, 2007)"
+#define IPR_DRIVER_VERSION "2.4.2"
+#define IPR_DRIVER_DATE "(January 21, 2009)"
 
 /*
  * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
diff --git a/drivers/scsi/ips.c b/drivers/scsi/ips.c
index ef683f0..457d76a 100644
--- a/drivers/scsi/ips.c
+++ b/drivers/scsi/ips.c
@@ -1004,8 +1004,7 @@
 	DEBUG_VAR(1, "(%s%d) Failing active commands", ips_name, ha->host_num);
 
 	while ((scb = ips_removeq_scb_head(&ha->scb_activelist))) {
-		scb->scsi_cmd->result =
-		    (DID_RESET << 16) | (SUGGEST_RETRY << 24);
+		scb->scsi_cmd->result = DID_RESET << 16;
 		scb->scsi_cmd->scsi_done(scb->scsi_cmd);
 		ips_freescb(ha, scb);
 	}
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 23808df..b3e5e08 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -48,13 +48,6 @@
 	      "Alex Aizman <itn780@yahoo.com>");
 MODULE_DESCRIPTION("iSCSI/TCP data-path");
 MODULE_LICENSE("GPL");
-#undef DEBUG_TCP
-
-#ifdef DEBUG_TCP
-#define debug_tcp(fmt...) printk(KERN_INFO "tcp: " fmt)
-#else
-#define debug_tcp(fmt...)
-#endif
 
 static struct scsi_transport_template *iscsi_sw_tcp_scsi_transport;
 static struct scsi_host_template iscsi_sw_tcp_sht;
@@ -63,6 +56,21 @@
 static unsigned int iscsi_max_lun = 512;
 module_param_named(max_lun, iscsi_max_lun, uint, S_IRUGO);
 
+static int iscsi_sw_tcp_dbg;
+module_param_named(debug_iscsi_tcp, iscsi_sw_tcp_dbg, int,
+		   S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug_iscsi_tcp, "Turn on debugging for iscsi_tcp module. "
+		 "Set to 1 to turn on, and zero to turn off. Default is off.");
+
+#define ISCSI_SW_TCP_DBG(_conn, dbg_fmt, arg...)		\
+	do {							\
+		if (iscsi_sw_tcp_dbg)				\
+			iscsi_conn_printk(KERN_INFO, _conn,	\
+					     "%s " dbg_fmt,	\
+					     __func__, ##arg);	\
+	} while (0);
+
+
 /**
  * iscsi_sw_tcp_recv - TCP receive in sendfile fashion
  * @rd_desc: read descriptor
@@ -77,7 +85,7 @@
 	unsigned int consumed, total_consumed = 0;
 	int status;
 
-	debug_tcp("in %d bytes\n", skb->len - offset);
+	ISCSI_SW_TCP_DBG(conn, "in %d bytes\n", skb->len - offset);
 
 	do {
 		status = 0;
@@ -86,7 +94,8 @@
 		total_consumed += consumed;
 	} while (consumed != 0 && status != ISCSI_TCP_SKB_DONE);
 
-	debug_tcp("read %d bytes status %d\n", skb->len - offset, status);
+	ISCSI_SW_TCP_DBG(conn, "read %d bytes status %d\n",
+			 skb->len - offset, status);
 	return total_consumed;
 }
 
@@ -131,7 +140,8 @@
 	if ((sk->sk_state == TCP_CLOSE_WAIT ||
 	     sk->sk_state == TCP_CLOSE) &&
 	    !atomic_read(&sk->sk_rmem_alloc)) {
-		debug_tcp("iscsi_tcp_state_change: TCP_CLOSE|TCP_CLOSE_WAIT\n");
+		ISCSI_SW_TCP_DBG(conn, "iscsi_tcp_state_change: "
+				 "TCP_CLOSE|TCP_CLOSE_WAIT\n");
 		iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
 	}
 
@@ -155,8 +165,8 @@
 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
 
 	tcp_sw_conn->old_write_space(sk);
-	debug_tcp("iscsi_write_space: cid %d\n", conn->id);
-	scsi_queue_work(conn->session->host, &conn->xmitwork);
+	ISCSI_SW_TCP_DBG(conn, "iscsi_write_space\n");
+	iscsi_conn_queue_work(conn);
 }
 
 static void iscsi_sw_tcp_conn_set_callbacks(struct iscsi_conn *conn)
@@ -283,7 +293,7 @@
 		}
 	}
 
-	debug_tcp("xmit %d bytes\n", consumed);
+	ISCSI_SW_TCP_DBG(conn, "xmit %d bytes\n", consumed);
 
 	conn->txdata_octets += consumed;
 	return consumed;
@@ -291,7 +301,7 @@
 error:
 	/* Transmit error. We could initiate error recovery
 	 * here. */
-	debug_tcp("Error sending PDU, errno=%d\n", rc);
+	ISCSI_SW_TCP_DBG(conn, "Error sending PDU, errno=%d\n", rc);
 	iscsi_conn_failure(conn, rc);
 	return -EIO;
 }
@@ -334,9 +344,10 @@
 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
 
 	tcp_sw_conn->out.segment = tcp_sw_conn->out.data_segment;
-	debug_tcp("Header done. Next segment size %u total_size %u\n",
-		  tcp_sw_conn->out.segment.size,
-		  tcp_sw_conn->out.segment.total_size);
+	ISCSI_SW_TCP_DBG(tcp_conn->iscsi_conn,
+			 "Header done. Next segment size %u total_size %u\n",
+			 tcp_sw_conn->out.segment.size,
+			 tcp_sw_conn->out.segment.total_size);
 	return 0;
 }
 
@@ -346,8 +357,8 @@
 	struct iscsi_tcp_conn *tcp_conn = conn->dd_data;
 	struct iscsi_sw_tcp_conn *tcp_sw_conn = tcp_conn->dd_data;
 
-	debug_tcp("%s(%p%s)\n", __func__, tcp_conn,
-			conn->hdrdgst_en? ", digest enabled" : "");
+	ISCSI_SW_TCP_DBG(conn, "%s\n", conn->hdrdgst_en ?
+			 "digest enabled" : "digest disabled");
 
 	/* Clear the data segment - needs to be filled in by the
 	 * caller using iscsi_tcp_send_data_prep() */
@@ -389,9 +400,9 @@
 	struct hash_desc *tx_hash = NULL;
 	unsigned int hdr_spec_len;
 
-	debug_tcp("%s(%p, offset=%d, datalen=%d%s)\n", __func__,
-			tcp_conn, offset, len,
-			conn->datadgst_en? ", digest enabled" : "");
+	ISCSI_SW_TCP_DBG(conn, "offset=%d, datalen=%d %s\n", offset, len,
+			 conn->datadgst_en ?
+			 "digest enabled" : "digest disabled");
 
 	/* Make sure the datalen matches what the caller
 	   said he would send. */
@@ -415,8 +426,8 @@
 	struct hash_desc *tx_hash = NULL;
 	unsigned int hdr_spec_len;
 
-	debug_tcp("%s(%p, datalen=%d%s)\n", __func__, tcp_conn, len,
-		  conn->datadgst_en? ", digest enabled" : "");
+	ISCSI_SW_TCP_DBG(conn, "datalen=%zd %s\n", len, conn->datadgst_en ?
+			 "digest enabled" : "digest disabled");
 
 	/* Make sure the datalen matches what the caller
 	   said he would send. */
@@ -754,8 +765,7 @@
 
 static struct iscsi_cls_session *
 iscsi_sw_tcp_session_create(struct iscsi_endpoint *ep, uint16_t cmds_max,
-			    uint16_t qdepth, uint32_t initial_cmdsn,
-			    uint32_t *hostno)
+			    uint16_t qdepth, uint32_t initial_cmdsn)
 {
 	struct iscsi_cls_session *cls_session;
 	struct iscsi_session *session;
@@ -766,10 +776,11 @@
 		return NULL;
 	}
 
-	shost = iscsi_host_alloc(&iscsi_sw_tcp_sht, 0, qdepth);
+	shost = iscsi_host_alloc(&iscsi_sw_tcp_sht, 0, 1);
 	if (!shost)
 		return NULL;
 	shost->transportt = iscsi_sw_tcp_scsi_transport;
+	shost->cmd_per_lun = qdepth;
 	shost->max_lun = iscsi_max_lun;
 	shost->max_id = 0;
 	shost->max_channel = 0;
@@ -777,7 +788,6 @@
 
 	if (iscsi_host_add(shost, NULL))
 		goto free_host;
-	*hostno = shost->host_no;
 
 	cls_session = iscsi_session_setup(&iscsi_sw_tcp_transport, shost,
 					  cmds_max,
@@ -813,6 +823,12 @@
 	iscsi_host_free(shost);
 }
 
+static int iscsi_sw_tcp_slave_alloc(struct scsi_device *sdev)
+{
+	set_bit(QUEUE_FLAG_BIDI, &sdev->request_queue->queue_flags);
+	return 0;
+}
+
 static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
 {
 	blk_queue_bounce_limit(sdev->request_queue, BLK_BOUNCE_ANY);
@@ -833,6 +849,7 @@
 	.eh_device_reset_handler= iscsi_eh_device_reset,
 	.eh_target_reset_handler= iscsi_eh_target_reset,
 	.use_clustering         = DISABLE_CLUSTERING,
+	.slave_alloc            = iscsi_sw_tcp_slave_alloc,
 	.slave_configure        = iscsi_sw_tcp_slave_configure,
 	.proc_name		= "iscsi_tcp",
 	.this_id		= -1,
diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c
index 505825b..992af05 100644
--- a/drivers/scsi/libfc/fc_exch.c
+++ b/drivers/scsi/libfc/fc_exch.c
@@ -281,7 +281,7 @@
 			ep->destructor(&ep->seq, ep->arg);
 		if (ep->lp->tt.exch_put)
 			ep->lp->tt.exch_put(ep->lp, mp, ep->xid);
-		WARN_ON(!ep->esb_stat & ESB_ST_COMPLETE);
+		WARN_ON(!(ep->esb_stat & ESB_ST_COMPLETE));
 		mempool_free(ep, mp->ep_pool);
 	}
 }
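
The WARN_ON() change above is a pure operator-precedence fix: '!' binds tighter than '&', so the old expression computed (!ep->esb_stat) & ESB_ST_COMPLETE, which (for a flag above bit 0) can never be non-zero and therefore never warned. A small stand-alone illustration:

    #include <stdio.h>

    #define FLAG 0x4

    int main(void)
    {
            unsigned int stat = 0x1;        /* FLAG is not set */

            printf("%d\n", !stat & FLAG);   /* prints 0: old check never fires */
            printf("%d\n", !(stat & FLAG)); /* prints 1: intended check */
            return 0;
    }
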
@@ -489,7 +489,7 @@
 	struct fc_exch *ep = NULL;
 
 	if (mp->max_read) {
-		if (fc_frame_is_read(fp)) {
+		if (fc_fcp_is_read(fr_fsp(fp))) {
 			min = mp->min_xid;
 			max = mp->max_read;
 			plast = &mp->last_read;
@@ -1841,6 +1841,8 @@
 	fc_exch_setup_hdr(ep, fp, ep->f_ctl);
 	sp->cnt++;
 
+	fc_fcp_ddp_setup(fr_fsp(fp), ep->xid);
+
 	if (unlikely(lp->tt.frame_send(lp, fp)))
 		goto err;
 
diff --git a/drivers/scsi/libfc/fc_fcp.c b/drivers/scsi/libfc/fc_fcp.c
index 2a631d7..a5725f3 100644
--- a/drivers/scsi/libfc/fc_fcp.c
+++ b/drivers/scsi/libfc/fc_fcp.c
@@ -259,12 +259,62 @@
 	}
 
 	fsp->state &= ~FC_SRB_ABORT_PENDING;
-	fsp->io_status = SUGGEST_RETRY << 24;
+	fsp->io_status = 0;
 	fsp->status_code = FC_ERROR;
 	fc_fcp_complete_locked(fsp);
 }
 
 /*
+ * fc_fcp_ddp_setup - calls to LLD's ddp_setup to set up DDP
+ * transfer for a read I/O indicated by the fc_fcp_pkt.
+ * @fsp: ptr to the fc_fcp_pkt
+ *
+ * This is called in exch_seq_send() when we have a newly allocated
+ * exchange with a valid exchange id to setup ddp.
+ *
+ * returns: none
+ */
+void fc_fcp_ddp_setup(struct fc_fcp_pkt *fsp, u16 xid)
+{
+	struct fc_lport *lp;
+
+	if (!fsp)
+		return;
+
+	lp = fsp->lp;
+	if ((fsp->req_flags & FC_SRB_READ) &&
+	    (lp->lro_enabled) && (lp->tt.ddp_setup)) {
+		if (lp->tt.ddp_setup(lp, xid, scsi_sglist(fsp->cmd),
+				     scsi_sg_count(fsp->cmd)))
+			fsp->xfer_ddp = xid;
+	}
+}
+EXPORT_SYMBOL(fc_fcp_ddp_setup);
+
+/*
+ * fc_fcp_ddp_done - calls to LLD's ddp_done to release any
+ * DDP related resources for this I/O if it is initialized
+ * as a ddp transfer
+ * @fsp: ptr to the fc_fcp_pkt
+ *
+ * returns: none
+ */
+static void fc_fcp_ddp_done(struct fc_fcp_pkt *fsp)
+{
+	struct fc_lport *lp;
+
+	if (!fsp)
+		return;
+
+	lp = fsp->lp;
+	if (fsp->xfer_ddp && lp->tt.ddp_done) {
+		fsp->xfer_len = lp->tt.ddp_done(lp, fsp->xfer_ddp);
+		fsp->xfer_ddp = 0;
+	}
+}
+
+
+/*
  * Receive SCSI data from target.
  * Called after receiving solicited data.
  */
@@ -289,6 +339,9 @@
 	len = fr_len(fp) - sizeof(*fh);
 	buf = fc_frame_payload_get(fp, 0);
 
+	/* if this I/O is ddped, update xfer len */
+	fc_fcp_ddp_done(fsp);
+
 	if (offset + len > fsp->data_len) {
 		/* this should never happen */
 		if ((fr_flags(fp) & FCPHF_CRC_UNCHECKED) &&
@@ -435,7 +488,13 @@
 	 * burst length (t_blen) to seq_blen, otherwise set t_blen
 	 * to max FC frame payload previously set in fsp->max_payload.
 	 */
-	t_blen = lp->seq_offload ? seq_blen : fsp->max_payload;
+	t_blen = fsp->max_payload;
+	if (lp->seq_offload) {
+		t_blen = min(seq_blen, (size_t)lp->lso_max);
+		FC_DEBUG_FCP("fsp=%p:lso:blen=%zx lso_max=0x%x t_blen=%zx\n",
+			   fsp, seq_blen, lp->lso_max, t_blen);
+	}
+
 	WARN_ON(t_blen < FC_MIN_MAX_PAYLOAD);
 	if (t_blen > 512)
 		t_blen &= ~(512 - 1);	/* round down to block size */
@@ -744,6 +803,9 @@
 	fsp->scsi_comp_flags = flags;
 	expected_len = fsp->data_len;
 
+	/* if ddp, update xfer len */
+	fc_fcp_ddp_done(fsp);
+
 	if (unlikely((flags & ~FCP_CONF_REQ) || fc_rp->fr_status)) {
 		rp_ex = (void *)(fc_rp + 1);
 		if (flags & (FCP_RSP_LEN_VAL | FCP_SNS_LEN_VAL)) {
@@ -859,7 +921,7 @@
 		    (!(fsp->scsi_comp_flags & FCP_RESID_UNDER) ||
 		     fsp->xfer_len < fsp->data_len - fsp->scsi_resid)) {
 			fsp->status_code = FC_DATA_UNDRUN;
-			fsp->io_status = SUGGEST_RETRY << 24;
+			fsp->io_status = 0;
 		}
 	}
 
@@ -1006,7 +1068,7 @@
 	}
 
 	memcpy(fc_frame_payload_get(fp, len), &fsp->cdb_cmd, len);
-	fr_cmd(fp) = fsp->cmd;
+	fr_fsp(fp) = fsp;
 	rport = fsp->rport;
 	fsp->max_payload = rport->maxframe_size;
 	rp = rport->dd_data;
@@ -1267,7 +1329,7 @@
 	rp = rport->dd_data;
 	if (!fsp->seq_ptr || rp->rp_state != RPORT_ST_READY) {
 		fsp->status_code = FC_HRD_ERROR;
-		fsp->io_status = SUGGEST_RETRY << 24;
+		fsp->io_status = 0;
 		fc_fcp_complete_locked(fsp);
 		return;
 	}
@@ -1740,6 +1802,9 @@
 	struct fc_lport *lp;
 	unsigned long flags;
 
+	/* release outstanding ddp context */
+	fc_fcp_ddp_done(fsp);
+
 	fsp->state |= FC_SRB_COMPL;
 	if (!(fsp->state & FC_SRB_FCP_PROCESSING_TMO)) {
 		spin_unlock_bh(&fsp->scsi_pkt_lock);
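
With sequence offload the burst length is now additionally capped by the adapter's lso_max and then rounded down to a 512-byte multiple, as in the t_blen hunk above. A small stand-alone sketch of that computation with illustrative inputs:

    #include <stddef.h>

    static size_t calc_t_blen(size_t seq_blen, unsigned int lso_max,
                              size_t max_payload, int seq_offload)
    {
            size_t t_blen = max_payload;

            if (seq_offload)
                    t_blen = seq_blen < (size_t)lso_max ?
                             seq_blen : (size_t)lso_max;

            if (t_blen > 512)
                    t_blen &= ~(size_t)(512 - 1);   /* round down to block size */
            return t_blen;
    }

    /* e.g. seq_blen = 0x10000, lso_max = 0xfc00 -> min is 0xfc00, already a
     * 512-byte multiple, so t_blen = 0xfc00 */
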
diff --git a/drivers/scsi/libfc/fc_lport.c b/drivers/scsi/libfc/fc_lport.c
index 2ae50a1..7ef4450 100644
--- a/drivers/scsi/libfc/fc_lport.c
+++ b/drivers/scsi/libfc/fc_lport.c
@@ -762,10 +762,11 @@
 	remote_wwpn = get_unaligned_be64(&flp->fl_wwpn);
 	if (remote_wwpn == lport->wwpn) {
 		FC_DBG("FLOGI from port with same WWPN %llx "
-		       "possible configuration error\n", remote_wwpn);
+		       "possible configuration error\n",
+		       (unsigned long long)remote_wwpn);
 		goto out;
 	}
-	FC_DBG("FLOGI from port WWPN %llx\n", remote_wwpn);
+	FC_DBG("FLOGI from port WWPN %llx\n", (unsigned long long)remote_wwpn);
 
 	/*
 	 * XXX what is the right thing to do for FIDs?
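
The added casts address the usual u64 versus %llx mismatch: on some 64-bit architectures u64 is unsigned long, so handing it to %llx without a cast draws a printf-format warning. A minimal sketch of the convention:

    #include <linux/kernel.h>
    #include <linux/types.h>

    static void print_wwpn(u64 wwpn)
    {
            /* u64 may be unsigned long or unsigned long long depending on
             * the architecture; the cast makes %llx correct everywhere */
            printk(KERN_DEBUG "WWPN %llx\n", (unsigned long long)wwpn);
    }
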
diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
index dae6513..0472bb7 100644
--- a/drivers/scsi/libfc/fc_rport.c
+++ b/drivers/scsi/libfc/fc_rport.c
@@ -988,7 +988,7 @@
 	switch (rdata->rp_state) {
 	case RPORT_ST_INIT:
 		FC_DEBUG_RPORT("incoming PLOGI from %6x wwpn %llx state INIT "
-			       "- reject\n", sid, wwpn);
+			       "- reject\n", sid, (unsigned long long)wwpn);
 		reject = ELS_RJT_UNSUP;
 		break;
 	case RPORT_ST_PLOGI:
diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
index 809d32d..dfaa8ad 100644
--- a/drivers/scsi/libiscsi.c
+++ b/drivers/scsi/libiscsi.c
@@ -38,6 +38,28 @@
 #include <scsi/scsi_transport_iscsi.h>
 #include <scsi/libiscsi.h>
 
+static int iscsi_dbg_lib;
+module_param_named(debug_libiscsi, iscsi_dbg_lib, int, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug_libiscsi, "Turn on debugging for libiscsi module. "
+		 "Set to 1 to turn on, and zero to turn off. Default "
+		 "is off.");
+
+#define ISCSI_DBG_CONN(_conn, dbg_fmt, arg...)			\
+	do {							\
+		if (iscsi_dbg_lib)				\
+			iscsi_conn_printk(KERN_INFO, _conn,	\
+					     "%s " dbg_fmt,	\
+					     __func__, ##arg);	\
+	} while (0);
+
+#define ISCSI_DBG_SESSION(_session, dbg_fmt, arg...)			\
+	do {								\
+		if (iscsi_dbg_lib)					\
+			iscsi_session_printk(KERN_INFO, _session,	\
+					     "%s " dbg_fmt,		\
+					     __func__, ##arg);		\
+	} while (0);
+
 /* Serial Number Arithmetic, 32 bits, less than, RFC1982 */
 #define SNA32_CHECK 2147483648UL
 
@@ -54,6 +76,15 @@
 			    (n1 > n2 && (n2 - n1 < SNA32_CHECK)));
 }
 
+inline void iscsi_conn_queue_work(struct iscsi_conn *conn)
+{
+	struct Scsi_Host *shost = conn->session->host;
+	struct iscsi_host *ihost = shost_priv(shost);
+
+	queue_work(ihost->workq, &conn->xmitwork);
+}
+EXPORT_SYMBOL_GPL(iscsi_conn_queue_work);
+
 void
 iscsi_update_cmdsn(struct iscsi_session *session, struct iscsi_nopin *hdr)
 {
@@ -81,8 +112,7 @@
 		if (!list_empty(&session->leadconn->xmitqueue) ||
 		    !list_empty(&session->leadconn->mgmtqueue)) {
 			if (!(session->tt->caps & CAP_DATA_PATH_OFFLOAD))
-				scsi_queue_work(session->host,
-						&session->leadconn->xmitwork);
+				iscsi_conn_queue_work(session->leadconn);
 		}
 	}
 }
@@ -176,10 +206,11 @@
 	ecdb_ahdr->reserved = 0;
 	memcpy(ecdb_ahdr->ecdb, cmd->cmnd + ISCSI_CDB_SIZE, rlen);
 
-	debug_scsi("iscsi_prep_ecdb_ahs: varlen_cdb_len %d "
-		   "rlen %d pad_len %d ahs_length %d iscsi_headers_size %u\n",
-		   cmd->cmd_len, rlen, pad_len, ahslength, task->hdr_len);
-
+	ISCSI_DBG_SESSION(task->conn->session,
+			  "iscsi_prep_ecdb_ahs: varlen_cdb_len %d "
+		          "rlen %d pad_len %d ahs_length %d iscsi_headers_size "
+		          "%u\n", cmd->cmd_len, rlen, pad_len, ahslength,
+		          task->hdr_len);
 	return 0;
 }
 
@@ -201,10 +232,11 @@
 	rlen_ahdr->reserved = 0;
 	rlen_ahdr->read_length = cpu_to_be32(scsi_in(sc)->length);
 
-	debug_scsi("bidi-in rlen_ahdr->read_length(%d) "
-		   "rlen_ahdr->ahslength(%d)\n",
-		   be32_to_cpu(rlen_ahdr->read_length),
-		   be16_to_cpu(rlen_ahdr->ahslength));
+	ISCSI_DBG_SESSION(task->conn->session,
+			  "bidi-in rlen_ahdr->read_length(%d) "
+		          "rlen_ahdr->ahslength(%d)\n",
+		          be32_to_cpu(rlen_ahdr->read_length),
+		          be16_to_cpu(rlen_ahdr->ahslength));
 	return 0;
 }
 
@@ -335,13 +367,15 @@
 	list_move_tail(&task->running, &conn->run_list);
 
 	conn->scsicmd_pdus_cnt++;
-	debug_scsi("iscsi prep [%s cid %d sc %p cdb 0x%x itt 0x%x len %d "
-		   "bidi_len %d cmdsn %d win %d]\n", scsi_bidi_cmnd(sc) ?
-		   "bidirectional" : sc->sc_data_direction == DMA_TO_DEVICE ?
-		   "write" : "read", conn->id, sc, sc->cmnd[0], task->itt,
-		   scsi_bufflen(sc),
-		   scsi_bidi_cmnd(sc) ? scsi_in(sc)->length : 0,
-		   session->cmdsn, session->max_cmdsn - session->exp_cmdsn + 1);
+	ISCSI_DBG_SESSION(session, "iscsi prep [%s cid %d sc %p cdb 0x%x "
+			  "itt 0x%x len %d bidi_len %d cmdsn %d win %d]\n",
+			  scsi_bidi_cmnd(sc) ? "bidirectional" :
+			  sc->sc_data_direction == DMA_TO_DEVICE ?
+			  "write" : "read", conn->id, sc, sc->cmnd[0],
+			  task->itt, scsi_bufflen(sc),
+			  scsi_bidi_cmnd(sc) ? scsi_in(sc)->length : 0,
+			  session->cmdsn,
+			  session->max_cmdsn - session->exp_cmdsn + 1);
 	return 0;
 }
 
@@ -483,9 +517,9 @@
 
 	task->state = ISCSI_TASK_RUNNING;
 	list_move_tail(&task->running, &conn->mgmt_run_list);
-	debug_scsi("mgmtpdu [op 0x%x hdr->itt 0x%x datalen %d]\n",
-		   hdr->opcode & ISCSI_OPCODE_MASK, hdr->itt,
-		   task->data_count);
+	ISCSI_DBG_SESSION(session, "mgmtpdu [op 0x%x hdr->itt 0x%x "
+			  "datalen %d]\n", hdr->opcode & ISCSI_OPCODE_MASK,
+			  hdr->itt, task->data_count);
 	return 0;
 }
 
@@ -560,7 +594,7 @@
 			goto free_task;
 
 	} else
-		scsi_queue_work(conn->session->host, &conn->xmitwork);
+		iscsi_conn_queue_work(conn);
 
 	return task;
 
@@ -637,8 +671,9 @@
 
 		memcpy(sc->sense_buffer, data + 2,
 		       min_t(uint16_t, senselen, SCSI_SENSE_BUFFERSIZE));
-		debug_scsi("copied %d bytes of sense\n",
-			   min_t(uint16_t, senselen, SCSI_SENSE_BUFFERSIZE));
+		ISCSI_DBG_SESSION(session, "copied %d bytes of sense\n",
+				  min_t(uint16_t, senselen,
+				  SCSI_SENSE_BUFFERSIZE));
 	}
 
 	if (rhdr->flags & (ISCSI_FLAG_CMD_BIDI_UNDERFLOW |
@@ -666,8 +701,8 @@
 			sc->result = (DID_BAD_TARGET << 16) | rhdr->cmd_status;
 	}
 out:
-	debug_scsi("done [sc %lx res %d itt 0x%x]\n",
-		   (long)sc, sc->result, task->itt);
+	ISCSI_DBG_SESSION(session, "done [sc %p res %d itt 0x%x]\n",
+			  sc, sc->result, task->itt);
 	conn->scsirsp_pdus_cnt++;
 
 	__iscsi_put_task(task);
@@ -835,8 +870,8 @@
 	else
 		itt = ~0U;
 
-	debug_scsi("[op 0x%x cid %d itt 0x%x len %d]\n",
-		   opcode, conn->id, itt, datalen);
+	ISCSI_DBG_SESSION(session, "[op 0x%x cid %d itt 0x%x len %d]\n",
+			  opcode, conn->id, itt, datalen);
 
 	if (itt == ~0U) {
 		iscsi_update_cmdsn(session, (struct iscsi_nopin*)hdr);
@@ -1034,10 +1069,9 @@
 }
 EXPORT_SYMBOL_GPL(iscsi_itt_to_ctask);
 
-void iscsi_session_failure(struct iscsi_cls_session *cls_session,
+void iscsi_session_failure(struct iscsi_session *session,
 			   enum iscsi_err err)
 {
-	struct iscsi_session *session = cls_session->dd_data;
 	struct iscsi_conn *conn;
 	struct device *dev;
 	unsigned long flags;
@@ -1095,10 +1129,10 @@
 	 * Check for iSCSI window and take care of CmdSN wrap-around
 	 */
 	if (!iscsi_sna_lte(session->queued_cmdsn, session->max_cmdsn)) {
-		debug_scsi("iSCSI CmdSN closed. ExpCmdSn %u MaxCmdSN %u "
-			   "CmdSN %u/%u\n", session->exp_cmdsn,
-			   session->max_cmdsn, session->cmdsn,
-			   session->queued_cmdsn);
+		ISCSI_DBG_SESSION(session, "iSCSI CmdSN closed. ExpCmdSn "
+				  "%u MaxCmdSN %u CmdSN %u/%u\n",
+				  session->exp_cmdsn, session->max_cmdsn,
+				  session->cmdsn, session->queued_cmdsn);
 		return -ENOSPC;
 	}
 	return 0;
@@ -1133,7 +1167,7 @@
 	struct iscsi_conn *conn = task->conn;
 
 	list_move_tail(&task->running, &conn->requeue);
-	scsi_queue_work(conn->session->host, &conn->xmitwork);
+	iscsi_conn_queue_work(conn);
 }
 EXPORT_SYMBOL_GPL(iscsi_requeue_task);
 
@@ -1152,7 +1186,7 @@
 
 	spin_lock_bh(&conn->session->lock);
 	if (unlikely(conn->suspend_tx)) {
-		debug_scsi("conn %d Tx suspended!\n", conn->id);
+		ISCSI_DBG_SESSION(conn->session, "Tx suspended!\n");
 		spin_unlock_bh(&conn->session->lock);
 		return -ENODATA;
 	}
@@ -1386,7 +1420,7 @@
 			goto prepd_reject;
 		}
 	} else
-		scsi_queue_work(session->host, &conn->xmitwork);
+		iscsi_conn_queue_work(conn);
 
 	session->queued_cmdsn++;
 	spin_unlock(&session->lock);
@@ -1398,7 +1432,8 @@
 	iscsi_complete_command(task);
 reject:
 	spin_unlock(&session->lock);
-	debug_scsi("cmd 0x%x rejected (%d)\n", sc->cmnd[0], reason);
+	ISCSI_DBG_SESSION(session, "cmd 0x%x rejected (%d)\n",
+			  sc->cmnd[0], reason);
 	spin_lock(host->host_lock);
 	return SCSI_MLQUEUE_TARGET_BUSY;
 
@@ -1407,7 +1442,8 @@
 	iscsi_complete_command(task);
 fault:
 	spin_unlock(&session->lock);
-	debug_scsi("iscsi: cmd 0x%x is not queued (%d)\n", sc->cmnd[0], reason);
+	ISCSI_DBG_SESSION(session, "iscsi: cmd 0x%x is not queued (%d)\n",
+			  sc->cmnd[0], reason);
 	if (!scsi_bidi_cmnd(sc))
 		scsi_set_resid(sc, scsi_bufflen(sc));
 	else {
@@ -1422,8 +1458,6 @@
 
 int iscsi_change_queue_depth(struct scsi_device *sdev, int depth)
 {
-	if (depth > ISCSI_MAX_CMD_PER_LUN)
-		depth = ISCSI_MAX_CMD_PER_LUN;
 	scsi_adjust_queue_depth(sdev, scsi_get_tag_type(sdev), depth);
 	return sdev->queue_depth;
 }
@@ -1457,8 +1491,10 @@
 	spin_lock_bh(&session->lock);
 	if (session->state == ISCSI_STATE_TERMINATE) {
 failed:
-		debug_scsi("failing target reset: session terminated "
-			   "[CID %d age %d]\n", conn->id, session->age);
+		iscsi_session_printk(KERN_INFO, session,
+				     "failing target reset: Could not log "
+				     "back into target [age %d]\n",
+				     session->age);
 		spin_unlock_bh(&session->lock);
 		mutex_unlock(&session->eh_mutex);
 		return FAILED;
@@ -1472,7 +1508,7 @@
 	 */
 	iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
 
-	debug_scsi("iscsi_eh_target_reset wait for relogin\n");
+	ISCSI_DBG_SESSION(session, "wait for relogin\n");
 	wait_event_interruptible(conn->ehwait,
 				 session->state == ISCSI_STATE_TERMINATE ||
 				 session->state == ISCSI_STATE_LOGGED_IN ||
@@ -1501,7 +1537,7 @@
 	spin_lock(&session->lock);
 	if (conn->tmf_state == TMF_QUEUED) {
 		conn->tmf_state = TMF_TIMEDOUT;
-		debug_scsi("tmf timedout\n");
+		ISCSI_DBG_SESSION(session, "tmf timedout\n");
 		/* unblock eh_abort() */
 		wake_up(&conn->ehwait);
 	}
@@ -1521,7 +1557,7 @@
 		spin_unlock_bh(&session->lock);
 		iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
 		spin_lock_bh(&session->lock);
-		debug_scsi("tmf exec failure\n");
+		ISCSI_DBG_SESSION(session, "tmf exec failure\n");
 		return -EPERM;
 	}
 	conn->tmfcmd_pdus_cnt++;
@@ -1529,7 +1565,7 @@
 	conn->tmf_timer.function = iscsi_tmf_timedout;
 	conn->tmf_timer.data = (unsigned long)conn;
 	add_timer(&conn->tmf_timer);
-	debug_scsi("tmf set timeout\n");
+	ISCSI_DBG_SESSION(session, "tmf set timeout\n");
 
 	spin_unlock_bh(&session->lock);
 	mutex_unlock(&session->eh_mutex);
@@ -1567,22 +1603,27 @@
 {
 	struct iscsi_task *task, *tmp;
 
-	if (conn->task && (conn->task->sc->device->lun == lun || lun == -1))
-		conn->task = NULL;
+	if (conn->task) {
+		if (lun == -1 ||
+		    (conn->task->sc && conn->task->sc->device->lun == lun))
+			conn->task = NULL;
+	}
 
 	/* flush pending */
 	list_for_each_entry_safe(task, tmp, &conn->xmitqueue, running) {
 		if (lun == task->sc->device->lun || lun == -1) {
-			debug_scsi("failing pending sc %p itt 0x%x\n",
-				   task->sc, task->itt);
+			ISCSI_DBG_SESSION(conn->session,
+					  "failing pending sc %p itt 0x%x\n",
+					  task->sc, task->itt);
 			fail_command(conn, task, error << 16);
 		}
 	}
 
 	list_for_each_entry_safe(task, tmp, &conn->requeue, running) {
 		if (lun == task->sc->device->lun || lun == -1) {
-			debug_scsi("failing requeued sc %p itt 0x%x\n",
-				   task->sc, task->itt);
+			ISCSI_DBG_SESSION(conn->session,
+					  "failing requeued sc %p itt 0x%x\n",
+					  task->sc, task->itt);
 			fail_command(conn, task, error << 16);
 		}
 	}
@@ -1590,8 +1631,9 @@
 	/* fail all other running */
 	list_for_each_entry_safe(task, tmp, &conn->run_list, running) {
 		if (lun == task->sc->device->lun || lun == -1) {
-			debug_scsi("failing in progress sc %p itt 0x%x\n",
-				   task->sc, task->itt);
+			ISCSI_DBG_SESSION(conn->session,
+					 "failing in progress sc %p itt 0x%x\n",
+					 task->sc, task->itt);
 			fail_command(conn, task, error << 16);
 		}
 	}
@@ -1599,9 +1641,12 @@
 
 void iscsi_suspend_tx(struct iscsi_conn *conn)
 {
+	struct Scsi_Host *shost = conn->session->host;
+	struct iscsi_host *ihost = shost_priv(shost);
+
 	set_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
 	if (!(conn->session->tt->caps & CAP_DATA_PATH_OFFLOAD))
-		scsi_flush_work(conn->session->host);
+		flush_workqueue(ihost->workq);
 }
 EXPORT_SYMBOL_GPL(iscsi_suspend_tx);
 
@@ -1609,7 +1654,7 @@
 {
 	clear_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx);
 	if (!(conn->session->tt->caps & CAP_DATA_PATH_OFFLOAD))
-		scsi_queue_work(conn->session->host, &conn->xmitwork);
+		iscsi_conn_queue_work(conn);
 }
 
 static enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *scmd)
@@ -1622,7 +1667,7 @@
 	cls_session = starget_to_session(scsi_target(scmd->device));
 	session = cls_session->dd_data;
 
-	debug_scsi("scsi cmd %p timedout\n", scmd);
+	ISCSI_DBG_SESSION(session, "scsi cmd %p timedout\n", scmd);
 
 	spin_lock(&session->lock);
 	if (session->state != ISCSI_STATE_LOGGED_IN) {
@@ -1662,8 +1707,8 @@
 		rc = BLK_EH_RESET_TIMER;
 done:
 	spin_unlock(&session->lock);
-	debug_scsi("return %s\n", rc == BLK_EH_RESET_TIMER ?
-					"timer reset" : "nh");
+	ISCSI_DBG_SESSION(session, "return %s\n", rc == BLK_EH_RESET_TIMER ?
+			  "timer reset" : "nh");
 	return rc;
 }
 
@@ -1697,13 +1742,13 @@
 
 	if (time_before_eq(last_recv + recv_timeout, jiffies)) {
 		/* send a ping to try to provoke some traffic */
-		debug_scsi("Sending nopout as ping on conn %p\n", conn);
+		ISCSI_DBG_CONN(conn, "Sending nopout as ping\n");
 		iscsi_send_nopout(conn, NULL);
 		next_timeout = conn->last_ping + (conn->ping_timeout * HZ);
 	} else
 		next_timeout = last_recv + recv_timeout;
 
-	debug_scsi("Setting next tmo %lu\n", next_timeout);
+	ISCSI_DBG_CONN(conn, "Setting next tmo %lu\n", next_timeout);
 	mod_timer(&conn->transport_timer, next_timeout);
 done:
 	spin_unlock(&session->lock);
@@ -1740,7 +1785,8 @@
 	 * got the command.
 	 */
 	if (!sc->SCp.ptr) {
-		debug_scsi("sc never reached iscsi layer or it completed.\n");
+		ISCSI_DBG_SESSION(session, "sc never reached iscsi layer or "
+				  "it completed.\n");
 		spin_unlock_bh(&session->lock);
 		mutex_unlock(&session->eh_mutex);
 		return SUCCESS;
@@ -1762,11 +1808,13 @@
 	age = session->age;
 
 	task = (struct iscsi_task *)sc->SCp.ptr;
-	debug_scsi("aborting [sc %p itt 0x%x]\n", sc, task->itt);
+	ISCSI_DBG_SESSION(session, "aborting [sc %p itt 0x%x]\n",
+			  sc, task->itt);
 
 	/* task completed before time out */
 	if (!task->sc) {
-		debug_scsi("sc completed while abort in progress\n");
+		ISCSI_DBG_SESSION(session, "sc completed while abort in "
+				  "progress\n");
 		goto success;
 	}
 
@@ -1815,7 +1863,8 @@
 		if (!sc->SCp.ptr) {
 			conn->tmf_state = TMF_INITIAL;
 			/* task completed before tmf abort response */
-			debug_scsi("sc completed while abort in progress\n");
+			ISCSI_DBG_SESSION(session, "sc completed while abort "
+					  "in progress\n");
 			goto success;
 		}
 		/* fall through */
@@ -1827,15 +1876,16 @@
 success:
 	spin_unlock_bh(&session->lock);
 success_unlocked:
-	debug_scsi("abort success [sc %lx itt 0x%x]\n", (long)sc, task->itt);
+	ISCSI_DBG_SESSION(session, "abort success [sc %p itt 0x%x]\n",
+			  sc, task->itt);
 	mutex_unlock(&session->eh_mutex);
 	return SUCCESS;
 
 failed:
 	spin_unlock_bh(&session->lock);
 failed_unlocked:
-	debug_scsi("abort failed [sc %p itt 0x%x]\n", sc,
-		    task ? task->itt : 0);
+	ISCSI_DBG_SESSION(session, "abort failed [sc %p itt 0x%x]\n", sc,
+			  task ? task->itt : 0);
 	mutex_unlock(&session->eh_mutex);
 	return FAILED;
 }
@@ -1862,7 +1912,8 @@
 	cls_session = starget_to_session(scsi_target(sc->device));
 	session = cls_session->dd_data;
 
-	debug_scsi("LU Reset [sc %p lun %u]\n", sc, sc->device->lun);
+	ISCSI_DBG_SESSION(session, "LU Reset [sc %p lun %u]\n",
+			  sc, sc->device->lun);
 
 	mutex_lock(&session->eh_mutex);
 	spin_lock_bh(&session->lock);
@@ -1916,8 +1967,8 @@
 unlock:
 	spin_unlock_bh(&session->lock);
 done:
-	debug_scsi("iscsi_eh_device_reset %s\n",
-		  rc == SUCCESS ? "SUCCESS" : "FAILED");
+	ISCSI_DBG_SESSION(session, "dev reset result = %s\n",
+			  rc == SUCCESS ? "SUCCESS" : "FAILED");
 	mutex_unlock(&session->eh_mutex);
 	return rc;
 }
@@ -1944,7 +1995,7 @@
 		num_arrays++;
 	q->pool = kzalloc(num_arrays * max * sizeof(void*), GFP_KERNEL);
 	if (q->pool == NULL)
-		goto enomem;
+		return -ENOMEM;
 
 	q->queue = kfifo_init((void*)q->pool, max * sizeof(void*),
 			      GFP_KERNEL, NULL);
@@ -1979,8 +2030,7 @@
 
 	for (i = 0; i < q->max; i++)
 		kfree(q->pool[i]);
-	if (q->pool)
-		kfree(q->pool);
+	kfree(q->pool);
 	kfree(q->queue);
 }
 EXPORT_SYMBOL_GPL(iscsi_pool_free);
@@ -1998,6 +2048,9 @@
 	if (!shost->can_queue)
 		shost->can_queue = ISCSI_DEF_XMIT_CMDS_MAX;
 
+	if (!shost->cmd_per_lun)
+		shost->cmd_per_lun = ISCSI_DEF_CMD_PER_LUN;
+
 	if (!shost->transportt->eh_timed_out)
 		shost->transportt->eh_timed_out = iscsi_eh_cmd_timed_out;
 	return scsi_add_host(shost, pdev);
@@ -2008,13 +2061,13 @@
  * iscsi_host_alloc - allocate a host and driver data
  * @sht: scsi host template
  * @dd_data_size: driver host data size
- * @qdepth: default device queue depth
+ * @xmit_can_sleep: bool indicating if LLD will queue IO from a work queue
  *
  * This should be called by partial offload and software iscsi drivers.
  * To access the driver specific memory use the iscsi_host_priv() macro.
  */
 struct Scsi_Host *iscsi_host_alloc(struct scsi_host_template *sht,
-				   int dd_data_size, uint16_t qdepth)
+				   int dd_data_size, bool xmit_can_sleep)
 {
 	struct Scsi_Host *shost;
 	struct iscsi_host *ihost;
@@ -2022,28 +2075,31 @@
 	shost = scsi_host_alloc(sht, sizeof(struct iscsi_host) + dd_data_size);
 	if (!shost)
 		return NULL;
-
-	if (qdepth > ISCSI_MAX_CMD_PER_LUN || qdepth < 1) {
-		if (qdepth != 0)
-			printk(KERN_ERR "iscsi: invalid queue depth of %d. "
-			       "Queue depth must be between 1 and %d.\n",
-			       qdepth, ISCSI_MAX_CMD_PER_LUN);
-		qdepth = ISCSI_DEF_CMD_PER_LUN;
-	}
-	shost->cmd_per_lun = qdepth;
-
 	ihost = shost_priv(shost);
+
+	if (xmit_can_sleep) {
+		snprintf(ihost->workq_name, sizeof(ihost->workq_name),
+			"iscsi_q_%d", shost->host_no);
+		ihost->workq = create_singlethread_workqueue(ihost->workq_name);
+		if (!ihost->workq)
+			goto free_host;
+	}
+
 	spin_lock_init(&ihost->lock);
 	ihost->state = ISCSI_HOST_SETUP;
 	ihost->num_sessions = 0;
 	init_waitqueue_head(&ihost->session_removal_wq);
 	return shost;
+
+free_host:
+	scsi_host_put(shost);
+	return NULL;
 }
 EXPORT_SYMBOL_GPL(iscsi_host_alloc);
 
 static void iscsi_notify_host_removed(struct iscsi_cls_session *cls_session)
 {
-	iscsi_session_failure(cls_session, ISCSI_ERR_INVALID_HOST);
+	iscsi_session_failure(cls_session->dd_data, ISCSI_ERR_INVALID_HOST);
 }
 
 /**
@@ -2069,6 +2125,8 @@
 		flush_signals(current);
 
 	scsi_remove_host(shost);
+	if (ihost->workq)
+		destroy_workqueue(ihost->workq);
 }
 EXPORT_SYMBOL_GPL(iscsi_host_remove);
 
@@ -2467,14 +2525,16 @@
 
 	/* handle pending */
 	list_for_each_entry_safe(task, tmp, &conn->mgmtqueue, running) {
-		debug_scsi("flushing pending mgmt task itt 0x%x\n", task->itt);
+		ISCSI_DBG_SESSION(session, "flushing pending mgmt task "
+				  "itt 0x%x\n", task->itt);
 		/* release ref from prep task */
 		__iscsi_put_task(task);
 	}
 
 	/* handle running */
 	list_for_each_entry_safe(task, tmp, &conn->mgmt_run_list, running) {
-		debug_scsi("flushing running mgmt task itt 0x%x\n", task->itt);
+		ISCSI_DBG_SESSION(session, "flushing running mgmt task "
+				  "itt 0x%x\n", task->itt);
 		/* release ref from prep task */
 		__iscsi_put_task(task);
 	}
@@ -2524,7 +2584,7 @@
 		conn->datadgst_en = 0;
 		if (session->state == ISCSI_STATE_IN_RECOVERY &&
 		    old_stop_stage != STOP_CONN_RECOVER) {
-			debug_scsi("blocking session\n");
+			ISCSI_DBG_SESSION(session, "blocking session\n");
 			iscsi_block_session(session->cls_session);
 		}
 	}
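
For reference, a minimal sketch (not taken from this patch; the my_lld_* names and struct my_lld_host are hypothetical) of how a software-iSCSI LLD would use the reworked iscsi_host_alloc() above, whose third argument is now a bool saying whether the xmit path runs from the per-host workqueue rather than a default queue depth:

#include <scsi/scsi_host.h>
#include <scsi/libiscsi.h>

struct my_lld_host {
	int dummy;		/* hypothetical per-host driver data */
};

static struct Scsi_Host *my_lld_create_host(struct scsi_host_template *sht)
{
	struct Scsi_Host *shost;

	/* true: this LLD transmits from the per-host "iscsi_q_<host_no>"
	 * workqueue that iscsi_host_alloc() now creates */
	shost = iscsi_host_alloc(sht, sizeof(struct my_lld_host), true);
	if (!shost)
		return NULL;

	/* cmd_per_lun is no longer passed in; iscsi_host_add() falls back
	 * to ISCSI_DEF_CMD_PER_LUN when the LLD leaves it at zero */
	return shost;
}
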
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index e7705d3..91f8ce4 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -49,13 +49,21 @@
 	      "Alex Aizman <itn780@yahoo.com>");
 MODULE_DESCRIPTION("iSCSI/TCP data-path");
 MODULE_LICENSE("GPL");
-#undef DEBUG_TCP
 
-#ifdef DEBUG_TCP
-#define debug_tcp(fmt...) printk(KERN_INFO "tcp: " fmt)
-#else
-#define debug_tcp(fmt...)
-#endif
+static int iscsi_dbg_libtcp;
+module_param_named(debug_libiscsi_tcp, iscsi_dbg_libtcp, int,
+		   S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(debug_libiscsi_tcp, "Turn on debugging for libiscsi_tcp "
+		 "module. Set to 1 to turn on, and 0 to turn off. Default "
+		 "is off.");
+
+#define ISCSI_DBG_TCP(_conn, dbg_fmt, arg...)			\
+	do {							\
+		if (iscsi_dbg_libtcp)				\
+			iscsi_conn_printk(KERN_INFO, _conn,	\
+					  "%s " dbg_fmt,	\
+					  __func__, ##arg);	\
+	} while (0)
 
 static int iscsi_tcp_hdr_recv_done(struct iscsi_tcp_conn *tcp_conn,
 				   struct iscsi_segment *segment);
@@ -123,18 +131,13 @@
 	if (page_count(sg_page(sg)) >= 1 && !recv)
 		return;
 
-	debug_tcp("iscsi_tcp_segment_map %s %p\n", recv ? "recv" : "xmit",
-		  segment);
 	segment->sg_mapped = kmap_atomic(sg_page(sg), KM_SOFTIRQ0);
 	segment->data = segment->sg_mapped + sg->offset + segment->sg_offset;
 }
 
 void iscsi_tcp_segment_unmap(struct iscsi_segment *segment)
 {
-	debug_tcp("iscsi_tcp_segment_unmap %p\n", segment);
-
 	if (segment->sg_mapped) {
-		debug_tcp("iscsi_tcp_segment_unmap valid\n");
 		kunmap_atomic(segment->sg_mapped, KM_SOFTIRQ0);
 		segment->sg_mapped = NULL;
 		segment->data = NULL;
@@ -180,8 +183,9 @@
 	struct scatterlist sg;
 	unsigned int pad;
 
-	debug_tcp("copied %u %u size %u %s\n", segment->copied, copied,
-		  segment->size, recv ? "recv" : "xmit");
+	ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copied %u %u size %u %s\n",
+		      segment->copied, copied, segment->size,
+		      recv ? "recv" : "xmit");
 	if (segment->hash && copied) {
 		/*
 		 * If a segment is kmapd we must unmap it before sending
@@ -214,8 +218,8 @@
 	iscsi_tcp_segment_unmap(segment);
 
 	/* Do we have more scatterlist entries? */
-	debug_tcp("total copied %u total size %u\n", segment->total_copied,
-		   segment->total_size);
+	ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "total copied %u total size %u\n",
+		      segment->total_copied, segment->total_size);
 	if (segment->total_copied < segment->total_size) {
 		/* Proceed to the next entry in the scatterlist. */
 		iscsi_tcp_segment_init_sg(segment, sg_next(segment->sg),
@@ -229,7 +233,8 @@
 	if (!(tcp_conn->iscsi_conn->session->tt->caps & CAP_PADDING_OFFLOAD)) {
 		pad = iscsi_padding(segment->total_copied);
 		if (pad != 0) {
-			debug_tcp("consume %d pad bytes\n", pad);
+			ISCSI_DBG_TCP(tcp_conn->iscsi_conn,
+				      "consume %d pad bytes\n", pad);
 			segment->total_size += pad;
 			segment->size = pad;
 			segment->data = segment->padbuf;
@@ -278,13 +283,13 @@
 
 	while (!iscsi_tcp_segment_done(tcp_conn, segment, 1, copy)) {
 		if (copied == len) {
-			debug_tcp("iscsi_tcp_segment_recv copied %d bytes\n",
-				  len);
+			ISCSI_DBG_TCP(tcp_conn->iscsi_conn,
+				      "copied %d bytes\n", len);
 			break;
 		}
 
 		copy = min(len - copied, segment->size - segment->copied);
-		debug_tcp("iscsi_tcp_segment_recv copying %d\n", copy);
+		ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "copying %d\n", copy);
 		memcpy(segment->data + segment->copied, ptr + copied, copy);
 		copied += copy;
 	}
@@ -311,7 +316,7 @@
 
 	if (memcmp(segment->recv_digest, segment->digest,
 		   segment->digest_len)) {
-		debug_scsi("digest mismatch\n");
+		ISCSI_DBG_TCP(tcp_conn->iscsi_conn, "digest mismatch\n");
 		return 0;
 	}
 
@@ -355,12 +360,8 @@
 	struct scatterlist *sg;
 	unsigned int i;
 
-	debug_scsi("iscsi_segment_seek_sg offset %u size %llu\n",
-		  offset, size);
 	__iscsi_segment_init(segment, size, done, hash);
 	for_each_sg(sg_list, sg, sg_count, i) {
-		debug_scsi("sg %d, len %u offset %u\n", i, sg->length,
-			   sg->offset);
 		if (offset < sg->length) {
 			iscsi_tcp_segment_init_sg(segment, sg, offset);
 			return 0;
@@ -382,8 +383,9 @@
  */
 void iscsi_tcp_hdr_recv_prep(struct iscsi_tcp_conn *tcp_conn)
 {
-	debug_tcp("iscsi_tcp_hdr_recv_prep(%p%s)\n", tcp_conn,
-		  tcp_conn->iscsi_conn->hdrdgst_en ? ", digest enabled" : "");
+	ISCSI_DBG_TCP(tcp_conn->iscsi_conn,
+		      "(%s)\n", tcp_conn->iscsi_conn->hdrdgst_en ?
+		      "digest enabled" : "digest disabled");
 	iscsi_segment_init_linear(&tcp_conn->in.segment,
 				tcp_conn->in.hdr_buf, sizeof(struct iscsi_hdr),
 				iscsi_tcp_hdr_recv_done, NULL);
@@ -446,7 +448,7 @@
 	while (__kfifo_get(tcp_task->r2tqueue, (void*)&r2t, sizeof(void*))) {
 		__kfifo_put(tcp_task->r2tpool.queue, (void*)&r2t,
 			    sizeof(void*));
-		debug_scsi("iscsi_tcp_cleanup_task pending r2t dropped\n");
+		ISCSI_DBG_TCP(task->conn, "pending r2t dropped\n");
 	}
 
 	r2t = tcp_task->r2t;
@@ -476,8 +478,8 @@
 		return 0;
 
 	if (tcp_task->exp_datasn != datasn) {
-		debug_tcp("%s: task->exp_datasn(%d) != rhdr->datasn(%d)\n",
-		          __func__, tcp_task->exp_datasn, datasn);
+		ISCSI_DBG_TCP(conn, "task->exp_datasn(%d) != rhdr->datasn(%d)"
+			      "\n", tcp_task->exp_datasn, datasn);
 		return ISCSI_ERR_DATASN;
 	}
 
@@ -485,9 +487,9 @@
 
 	tcp_task->data_offset = be32_to_cpu(rhdr->offset);
 	if (tcp_task->data_offset + tcp_conn->in.datalen > total_in_length) {
-		debug_tcp("%s: data_offset(%d) + data_len(%d) > total_length_in(%d)\n",
-		          __func__, tcp_task->data_offset,
-		          tcp_conn->in.datalen, total_in_length);
+		ISCSI_DBG_TCP(conn, "data_offset(%d) + data_len(%d) > "
+			      "total_length_in(%d)\n", tcp_task->data_offset,
+			      tcp_conn->in.datalen, total_in_length);
 		return ISCSI_ERR_DATA_OFFSET;
 	}
 
@@ -518,8 +520,8 @@
 	}
 
 	if (tcp_task->exp_datasn != r2tsn){
-		debug_tcp("%s: task->exp_datasn(%d) != rhdr->r2tsn(%d)\n",
-		          __func__, tcp_task->exp_datasn, r2tsn);
+		ISCSI_DBG_TCP(conn, "task->exp_datasn(%d) != rhdr->r2tsn(%d)\n",
+			      tcp_task->exp_datasn, r2tsn);
 		return ISCSI_ERR_R2TSN;
 	}
 
@@ -552,9 +554,9 @@
 	}
 
 	if (r2t->data_length > session->max_burst)
-		debug_scsi("invalid R2T with data len %u and max burst %u."
-			   "Attempting to execute request.\n",
-			    r2t->data_length, session->max_burst);
+		ISCSI_DBG_TCP(conn, "invalid R2T with data len %u and max "
+			      "burst %u. Attempting to execute request.\n",
+			      r2t->data_length, session->max_burst);
 
 	r2t->data_offset = be32_to_cpu(rhdr->data_offset);
 	if (r2t->data_offset + r2t->data_length > scsi_out(task->sc)->length) {
@@ -641,8 +643,8 @@
 	if (rc)
 		return rc;
 
-	debug_tcp("opcode 0x%x ahslen %d datalen %d\n",
-		  opcode, ahslen, tcp_conn->in.datalen);
+	ISCSI_DBG_TCP(conn, "opcode 0x%x ahslen %d datalen %d\n",
+		      opcode, ahslen, tcp_conn->in.datalen);
 
 	switch(opcode) {
 	case ISCSI_OP_SCSI_DATA_IN:
@@ -674,10 +676,10 @@
 			    !(conn->session->tt->caps & CAP_DIGEST_OFFLOAD))
 				rx_hash = tcp_conn->rx_hash;
 
-			debug_tcp("iscsi_tcp_begin_data_in(%p, offset=%d, "
-				  "datalen=%d)\n", tcp_conn,
-				  tcp_task->data_offset,
-				  tcp_conn->in.datalen);
+			ISCSI_DBG_TCP(conn, "iscsi_tcp_begin_data_in("
+				      "offset=%d, datalen=%d)\n",
+				      tcp_task->data_offset,
+				      tcp_conn->in.datalen);
 			rc = iscsi_segment_seek_sg(&tcp_conn->in.segment,
 						   sdb->table.sgl,
 						   sdb->table.nents,
@@ -854,10 +856,10 @@
 	unsigned int consumed = 0;
 	int rc = 0;
 
-	debug_tcp("in %d bytes\n", skb->len - offset);
+	ISCSI_DBG_TCP(conn, "in %d bytes\n", skb->len - offset);
 
 	if (unlikely(conn->suspend_rx)) {
-		debug_tcp("conn %d Rx suspended!\n", conn->id);
+		ISCSI_DBG_TCP(conn, "Rx suspended!\n");
 		*status = ISCSI_TCP_SUSPENDED;
 		return 0;
 	}
@@ -874,15 +876,16 @@
 
 		avail = skb_seq_read(consumed, &ptr, &seq);
 		if (avail == 0) {
-			debug_tcp("no more data avail. Consumed %d\n",
-				  consumed);
+			ISCSI_DBG_TCP(conn, "no more data avail. Consumed %d\n",
+				      consumed);
 			*status = ISCSI_TCP_SKB_DONE;
 			skb_abort_seq_read(&seq);
 			goto skb_done;
 		}
 		BUG_ON(segment->copied >= segment->size);
 
-		debug_tcp("skb %p ptr=%p avail=%u\n", skb, ptr, avail);
+		ISCSI_DBG_TCP(conn, "skb %p ptr=%p avail=%u\n", skb, ptr,
+			      avail);
 		rc = iscsi_tcp_segment_recv(tcp_conn, segment, ptr, avail);
 		BUG_ON(rc == 0);
 		consumed += rc;
@@ -895,11 +898,11 @@
 
 segment_done:
 	*status = ISCSI_TCP_SEGMENT_DONE;
-	debug_tcp("segment done\n");
+	ISCSI_DBG_TCP(conn, "segment done\n");
 	rc = segment->done(tcp_conn, segment);
 	if (rc != 0) {
 		*status = ISCSI_TCP_CONN_ERR;
-		debug_tcp("Error receiving PDU, errno=%d\n", rc);
+		ISCSI_DBG_TCP(conn, "Error receiving PDU, errno=%d\n", rc);
 		iscsi_conn_failure(conn, rc);
 		return 0;
 	}
@@ -929,8 +932,7 @@
 		 * mgmt tasks do not have a scatterlist since they come
 		 * in from the iscsi interface.
 		 */
-		debug_scsi("mtask deq [cid %d itt 0x%x]\n", conn->id,
-			   task->itt);
+		ISCSI_DBG_TCP(conn, "mtask deq [itt 0x%x]\n", task->itt);
 
 		return conn->session->tt->init_pdu(task, 0, task->data_count);
 	}
@@ -939,9 +941,8 @@
 	tcp_task->exp_datasn = 0;
 
 	/* Prepare PDU, optionally w/ immediate data */
-	debug_scsi("task deq [cid %d itt 0x%x imm %d unsol %d]\n",
-		    conn->id, task->itt, task->imm_count,
-		    task->unsol_r2t.data_length);
+	ISCSI_DBG_TCP(conn, "task deq [itt 0x%x imm %d unsol %d]\n",
+		      task->itt, task->imm_count, task->unsol_r2t.data_length);
 
 	err = conn->session->tt->init_pdu(task, 0, task->imm_count);
 	if (err)
@@ -965,7 +966,8 @@
 			r2t = tcp_task->r2t;
 			/* Continue with this R2T? */
 			if (r2t->data_length <= r2t->sent) {
-				debug_scsi("  done with r2t %p\n", r2t);
+				ISCSI_DBG_TCP(task->conn,
+					      "  done with r2t %p\n", r2t);
 				__kfifo_put(tcp_task->r2tpool.queue,
 					    (void *)&tcp_task->r2t,
 					    sizeof(void *));
@@ -1019,7 +1021,7 @@
 	r2t = iscsi_tcp_get_curr_r2t(task);
 	if (r2t == NULL) {
 		/* Waiting for more R2Ts to arrive. */
-		debug_tcp("no R2Ts yet\n");
+		ISCSI_DBG_TCP(conn, "no R2Ts yet\n");
 		return 0;
 	}
 
@@ -1028,9 +1030,9 @@
 		return rc;
 	iscsi_prep_data_out_pdu(task, r2t, (struct iscsi_data *) task->hdr);
 
-	debug_scsi("sol dout %p [dsn %d itt 0x%x doff %d dlen %d]\n",
-		   r2t, r2t->datasn - 1, task->hdr->itt,
-		   r2t->data_offset + r2t->sent, r2t->data_count);
+	ISCSI_DBG_TCP(conn, "sol dout %p [dsn %d itt 0x%x doff %d dlen %d]\n",
+		      r2t, r2t->datasn - 1, task->hdr->itt,
+		      r2t->data_offset + r2t->sent, r2t->data_count);
 
 	rc = conn->session->tt->init_pdu(task, r2t->data_offset + r2t->sent,
 					 r2t->data_count);
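
The hunk at the top of this file replaces the compile-time DEBUG_TCP switch with a module parameter checked at run time, so debugging can be flipped through /sys/module/<module>/parameters/ without rebuilding. A generic sketch of the same pattern (the my_drv_* names are hypothetical, not part of this patch):

#include <linux/kernel.h>
#include <linux/module.h>

static int my_drv_dbg;
module_param_named(debug_my_drv, my_drv_dbg, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(debug_my_drv,
		 "Set to 1 to turn on debug messages, 0 to turn them off.");

#define MY_DRV_DBG(fmt, arg...)					\
	do {							\
		if (my_drv_dbg)					\
			printk(KERN_INFO "my_drv %s: " fmt,	\
			       __func__, ##arg);		\
	} while (0)
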
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index b615eda..81cdcf4 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -1132,7 +1132,7 @@
 }
 
 #undef lpfc_debugfs_op_disc_trc
-static struct file_operations lpfc_debugfs_op_disc_trc = {
+static const struct file_operations lpfc_debugfs_op_disc_trc = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_disc_trc_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1141,7 +1141,7 @@
 };
 
 #undef lpfc_debugfs_op_nodelist
-static struct file_operations lpfc_debugfs_op_nodelist = {
+static const struct file_operations lpfc_debugfs_op_nodelist = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_nodelist_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1150,7 +1150,7 @@
 };
 
 #undef lpfc_debugfs_op_hbqinfo
-static struct file_operations lpfc_debugfs_op_hbqinfo = {
+static const struct file_operations lpfc_debugfs_op_hbqinfo = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_hbqinfo_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1159,7 +1159,7 @@
 };
 
 #undef lpfc_debugfs_op_dumpHBASlim
-static struct file_operations lpfc_debugfs_op_dumpHBASlim = {
+static const struct file_operations lpfc_debugfs_op_dumpHBASlim = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_dumpHBASlim_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1168,7 +1168,7 @@
 };
 
 #undef lpfc_debugfs_op_dumpHostSlim
-static struct file_operations lpfc_debugfs_op_dumpHostSlim = {
+static const struct file_operations lpfc_debugfs_op_dumpHostSlim = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_dumpHostSlim_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1177,7 +1177,7 @@
 };
 
 #undef lpfc_debugfs_op_dumpData
-static struct file_operations lpfc_debugfs_op_dumpData = {
+static const struct file_operations lpfc_debugfs_op_dumpData = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_dumpData_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1187,7 +1187,7 @@
 };
 
 #undef lpfc_debugfs_op_dumpDif
-static struct file_operations lpfc_debugfs_op_dumpDif = {
+static const struct file_operations lpfc_debugfs_op_dumpDif = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_dumpDif_open,
 	.llseek =       lpfc_debugfs_lseek,
@@ -1197,7 +1197,7 @@
 };
 
 #undef lpfc_debugfs_op_slow_ring_trc
-static struct file_operations lpfc_debugfs_op_slow_ring_trc = {
+static const struct file_operations lpfc_debugfs_op_slow_ring_trc = {
 	.owner =        THIS_MODULE,
 	.open =         lpfc_debugfs_slow_ring_trc_open,
 	.llseek =       lpfc_debugfs_lseek,
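
The constification above is possible because debugfs only ever takes a pointer to a const file_operations table, so the tables can live in read-only data. A minimal, hypothetical illustration (the example_* names are not from lpfc):

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/module.h>

static int example_open(struct inode *inode, struct file *file)
{
	return 0;	/* nothing to set up in this sketch */
}

static const struct file_operations example_debugfs_ops = {
	.owner = THIS_MODULE,
	.open  = example_open,
};

static struct dentry *example_register(struct dentry *parent)
{
	/* debugfs_create_file() takes a const struct file_operations *,
	 * which is what lets the ops tables above be const. */
	return debugfs_create_file("example", S_IFREG | S_IRUGO, parent,
				   NULL, &example_debugfs_ops);
}
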
diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
index b103b6e..b1bd3fc 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.c
+++ b/drivers/scsi/lpfc/lpfc_scsi.c
@@ -1357,7 +1357,7 @@
 
 		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
 				0x10, 0x1);
-		cmd->result = (DRIVER_SENSE|SUGGEST_DIE) << 24
+		cmd->result = DRIVER_SENSE << 24
 			| ScsiResult(DID_ABORT, SAM_STAT_CHECK_CONDITION);
 		phba->bg_guard_err_cnt++;
 		printk(KERN_ERR "BLKGRD: guard_tag error\n");
@@ -1368,7 +1368,7 @@
 
 		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
 				0x10, 0x3);
-		cmd->result = (DRIVER_SENSE|SUGGEST_DIE) << 24
+		cmd->result = DRIVER_SENSE << 24
 			| ScsiResult(DID_ABORT, SAM_STAT_CHECK_CONDITION);
 
 		phba->bg_reftag_err_cnt++;
@@ -1380,7 +1380,7 @@
 
 		scsi_build_sense_buffer(1, cmd->sense_buffer, ILLEGAL_REQUEST,
 				0x10, 0x2);
-		cmd->result = (DRIVER_SENSE|SUGGEST_DIE) << 24
+		cmd->result = DRIVER_SENSE << 24
 			| ScsiResult(DID_ABORT, SAM_STAT_CHECK_CONDITION);
 
 		phba->bg_apptag_err_cnt++;
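
The result word that lpfc builds above packs a driver byte, host byte and SCSI status byte; ScsiResult() is lpfc's helper for the host and status bytes (assuming it expands to ((host) << 16) | (status)). A hedged, open-coded sketch of the same value follows; the dropping of SUGGEST_DIE reflects that the midlayer does not act on the suggest bits, which is my reading rather than something stated in the patch:

#include <scsi/scsi.h>

static inline unsigned int example_bg_check_condition_result(void)
{
	return (DRIVER_SENSE << 24) |		/* driver byte */
	       (DID_ABORT << 16) |		/* host byte   */
	       SAM_STAT_CHECK_CONDITION;	/* status byte */
}
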
diff --git a/drivers/scsi/mpt2sas/Kconfig b/drivers/scsi/mpt2sas/Kconfig
new file mode 100644
index 0000000..4a86855
--- /dev/null
+++ b/drivers/scsi/mpt2sas/Kconfig
@@ -0,0 +1,66 @@
+#
+# Kernel configuration file for the MPT2SAS
+#
+# This code is based on drivers/scsi/mpt2sas/Kconfig
+# Copyright (C) 2007-2008  LSI Corporation
+#  (mailto:DL-MPTFusionLinux@lsi.com)
+
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version 2
+# of the License, or (at your option) any later version.
+
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+
+# NO WARRANTY
+# THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+# CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+# LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+# MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+# solely responsible for determining the appropriateness of using and
+# distributing the Program and assumes all risks associated with its
+# exercise of rights under this Agreement, including but not limited to
+# the risks and costs of program errors, damage to or loss of data,
+# programs or equipment, and unavailability or interruption of operations.
+
+# DISCLAIMER OF LIABILITY
+# NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+# TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+# USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+# HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+# USA.
+
+config SCSI_MPT2SAS
+	tristate "LSI MPT Fusion SAS 2.0 Device Driver"
+	depends on PCI && SCSI
+	select SCSI_SAS_ATTRS
+	---help---
+	This driver supports PCI-Express SAS 6Gb/s Host Adapters.
+
+config SCSI_MPT2SAS_MAX_SGE
+	int "LSI MPT Fusion Max number of SG Entries (16 - 128)"
+	depends on PCI && SCSI && SCSI_MPT2SAS
+	default "128"
+	range 16 128
+	---help---
+	This option allows you to specify the maximum number of scatter-
+	gather entries per I/O. The driver default is 128, which matches
+	SAFE_PHYS_SEGMENTS. However, it may be decreased to as low as 16.
+	Decreasing this parameter reduces the memory requirements on a
+	per-controller-instance basis.
+
+config SCSI_MPT2SAS_LOGGING
+	bool "LSI MPT Fusion logging facility"
+	depends on PCI && SCSI && SCSI_MPT2SAS
+	---help---
+	This turns on a logging facility.
diff --git a/drivers/scsi/mpt2sas/Makefile b/drivers/scsi/mpt2sas/Makefile
new file mode 100644
index 0000000..728f047
--- /dev/null
+++ b/drivers/scsi/mpt2sas/Makefile
@@ -0,0 +1,7 @@
+# mpt2sas makefile
+obj-$(CONFIG_SCSI_MPT2SAS) += mpt2sas.o
+mpt2sas-y +=  mpt2sas_base.o        \
+		mpt2sas_config.o    \
+		mpt2sas_scsih.o     \
+		mpt2sas_transport.o \
+		mpt2sas_ctl.o
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2.h b/drivers/scsi/mpt2sas/mpi/mpi2.h
new file mode 100644
index 0000000..7bb2ece
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2.h
@@ -0,0 +1,1067 @@
+/*
+ *  Copyright (c) 2000-2009 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2.h
+ *          Title:  MPI Message independent structures and definitions
+ *                  including System Interface Register Set and
+ *                  scatter/gather formats.
+ *  Creation Date:  June 21, 2006
+ *
+ *  mpi2.h Version:  02.00.11
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  06-04-07  02.00.01  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  06-26-07  02.00.02  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  08-31-07  02.00.03  Bumped MPI2_HEADER_VERSION_UNIT.
+ *                      Moved ReplyPostHostIndex register to offset 0x6C of the
+ *                      MPI2_SYSTEM_INTERFACE_REGS and modified the define for
+ *                      MPI2_REPLY_POST_HOST_INDEX_OFFSET.
+ *                      Added union of request descriptors.
+ *                      Added union of reply descriptors.
+ *  10-31-07  02.00.04  Bumped MPI2_HEADER_VERSION_UNIT.
+ *                      Added define for MPI2_VERSION_02_00.
+ *                      Fixed the size of the FunctionDependent5 field in the
+ *                      MPI2_DEFAULT_REPLY structure.
+ *  12-18-07  02.00.05  Bumped MPI2_HEADER_VERSION_UNIT.
+ *                      Removed the MPI-defined Fault Codes and extended the
+ *                      product specific codes up to 0xEFFF.
+ *                      Added a sixth key value for the WriteSequence register
+ *                      and changed the flush value to 0x0.
+ *                      Added message function codes for Diagnostic Buffer Post
+ *                      and Diagnostic Release.
+ *                      New IOCStatus define: MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED
+ *                      Moved MPI2_VERSION_UNION from mpi2_ioc.h.
+ *  02-29-08  02.00.06  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  03-03-08  02.00.07  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  05-21-08  02.00.08  Bumped MPI2_HEADER_VERSION_UNIT.
+ *                      Added #defines for marking a reply descriptor as unused.
+ *  06-27-08  02.00.09  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  10-02-08  02.00.10  Bumped MPI2_HEADER_VERSION_UNIT.
+ *                      Moved LUN field defines from mpi2_init.h.
+ *  01-19-09  02.00.11  Bumped MPI2_HEADER_VERSION_UNIT.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_H
+#define MPI2_H
+
+
+/*****************************************************************************
+*
+*        MPI Version Definitions
+*
+*****************************************************************************/
+
+#define MPI2_VERSION_MAJOR                  (0x02)
+#define MPI2_VERSION_MINOR                  (0x00)
+#define MPI2_VERSION_MAJOR_MASK             (0xFF00)
+#define MPI2_VERSION_MAJOR_SHIFT            (8)
+#define MPI2_VERSION_MINOR_MASK             (0x00FF)
+#define MPI2_VERSION_MINOR_SHIFT            (0)
+#define MPI2_VERSION ((MPI2_VERSION_MAJOR << MPI2_VERSION_MAJOR_SHIFT) |   \
+                                      MPI2_VERSION_MINOR)
+
+#define MPI2_VERSION_02_00                  (0x0200)
+
+/* versioning for this MPI header set */
+#define MPI2_HEADER_VERSION_UNIT            (0x0B)
+#define MPI2_HEADER_VERSION_DEV             (0x00)
+#define MPI2_HEADER_VERSION_UNIT_MASK       (0xFF00)
+#define MPI2_HEADER_VERSION_UNIT_SHIFT      (8)
+#define MPI2_HEADER_VERSION_DEV_MASK        (0x00FF)
+#define MPI2_HEADER_VERSION_DEV_SHIFT       (0)
+#define MPI2_HEADER_VERSION ((MPI2_HEADER_VERSION_UNIT << 8) | MPI2_HEADER_VERSION_DEV)
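+
+/*
+ * Editorial usage sketch, not part of the original MPI header: pulling the
+ * unit field back out of a packed header version with the mask and shift
+ * defined above.
+ */
+static inline U8 example_header_version_unit(U16 header_version)
+{
+    return (U8)((header_version & MPI2_HEADER_VERSION_UNIT_MASK) >>
+                MPI2_HEADER_VERSION_UNIT_SHIFT);
+}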
+
+
+/*****************************************************************************
+*
+*        IOC State Definitions
+*
+*****************************************************************************/
+
+#define MPI2_IOC_STATE_RESET               (0x00000000)
+#define MPI2_IOC_STATE_READY               (0x10000000)
+#define MPI2_IOC_STATE_OPERATIONAL         (0x20000000)
+#define MPI2_IOC_STATE_FAULT               (0x40000000)
+
+#define MPI2_IOC_STATE_MASK                (0xF0000000)
+#define MPI2_IOC_STATE_SHIFT               (28)
+
+/* Fault state range for product specific codes */
+#define MPI2_FAULT_PRODUCT_SPECIFIC_MIN                 (0x0000)
+#define MPI2_FAULT_PRODUCT_SPECIFIC_MAX                 (0xEFFF)
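+
+/*
+ * Editorial usage sketch, not part of the original MPI header: the IOC
+ * state lives in the top nibble of the Doorbell register and is extracted
+ * with the mask defined above.
+ */
+static inline U32 example_ioc_state(U32 doorbell)
+{
+    return doorbell & MPI2_IOC_STATE_MASK;
+}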
+
+
+/*****************************************************************************
+*
+*        System Interface Register Definitions
+*
+*****************************************************************************/
+
+typedef volatile struct _MPI2_SYSTEM_INTERFACE_REGS
+{
+    U32         Doorbell;                   /* 0x00 */
+    U32         WriteSequence;              /* 0x04 */
+    U32         HostDiagnostic;             /* 0x08 */
+    U32         Reserved1;                  /* 0x0C */
+    U32         DiagRWData;                 /* 0x10 */
+    U32         DiagRWAddressLow;           /* 0x14 */
+    U32         DiagRWAddressHigh;          /* 0x18 */
+    U32         Reserved2[5];               /* 0x1C */
+    U32         HostInterruptStatus;        /* 0x30 */
+    U32         HostInterruptMask;          /* 0x34 */
+    U32         DCRData;                    /* 0x38 */
+    U32         DCRAddress;                 /* 0x3C */
+    U32         Reserved3[2];               /* 0x40 */
+    U32         ReplyFreeHostIndex;         /* 0x48 */
+    U32         Reserved4[8];               /* 0x4C */
+    U32         ReplyPostHostIndex;         /* 0x6C */
+    U32         Reserved5;                  /* 0x70 */
+    U32         HCBSize;                    /* 0x74 */
+    U32         HCBAddressLow;              /* 0x78 */
+    U32         HCBAddressHigh;             /* 0x7C */
+    U32         Reserved6[16];              /* 0x80 */
+    U32         RequestDescriptorPostLow;   /* 0xC0 */
+    U32         RequestDescriptorPostHigh;  /* 0xC4 */
+    U32         Reserved7[14];              /* 0xC8 */
+} MPI2_SYSTEM_INTERFACE_REGS, MPI2_POINTER PTR_MPI2_SYSTEM_INTERFACE_REGS,
+  Mpi2SystemInterfaceRegs_t, MPI2_POINTER pMpi2SystemInterfaceRegs_t;
+
+/*
+ * Defines for working with the Doorbell register.
+ */
+#define MPI2_DOORBELL_OFFSET                    (0x00000000)
+
+/* IOC --> System values */
+#define MPI2_DOORBELL_USED                      (0x08000000)
+#define MPI2_DOORBELL_WHO_INIT_MASK             (0x07000000)
+#define MPI2_DOORBELL_WHO_INIT_SHIFT            (24)
+#define MPI2_DOORBELL_FAULT_CODE_MASK           (0x0000FFFF)
+#define MPI2_DOORBELL_DATA_MASK                 (0x0000FFFF)
+
+/* System --> IOC values */
+#define MPI2_DOORBELL_FUNCTION_MASK             (0xFF000000)
+#define MPI2_DOORBELL_FUNCTION_SHIFT            (24)
+#define MPI2_DOORBELL_ADD_DWORDS_MASK           (0x00FF0000)
+#define MPI2_DOORBELL_ADD_DWORDS_SHIFT          (16)
+
+
+/*
+ * Defines for the WriteSequence register
+ */
+#define MPI2_WRITE_SEQUENCE_OFFSET              (0x00000004)
+#define MPI2_WRSEQ_KEY_VALUE_MASK               (0x0000000F)
+#define MPI2_WRSEQ_FLUSH_KEY_VALUE              (0x0)
+#define MPI2_WRSEQ_1ST_KEY_VALUE                (0xF)
+#define MPI2_WRSEQ_2ND_KEY_VALUE                (0x4)
+#define MPI2_WRSEQ_3RD_KEY_VALUE                (0xB)
+#define MPI2_WRSEQ_4TH_KEY_VALUE                (0x2)
+#define MPI2_WRSEQ_5TH_KEY_VALUE                (0x7)
+#define MPI2_WRSEQ_6TH_KEY_VALUE                (0xD)
+
+/*
+ * Defines for the HostDiagnostic register
+ */
+#define MPI2_HOST_DIAGNOSTIC_OFFSET             (0x00000008)
+
+#define MPI2_DIAG_BOOT_DEVICE_SELECT_MASK       (0x00001800)
+#define MPI2_DIAG_BOOT_DEVICE_SELECT_DEFAULT    (0x00000000)
+#define MPI2_DIAG_BOOT_DEVICE_SELECT_HCDW       (0x00000800)
+
+#define MPI2_DIAG_CLEAR_FLASH_BAD_SIG           (0x00000400)
+#define MPI2_DIAG_FORCE_HCB_ON_RESET            (0x00000200)
+#define MPI2_DIAG_HCB_MODE                      (0x00000100)
+#define MPI2_DIAG_DIAG_WRITE_ENABLE             (0x00000080)
+#define MPI2_DIAG_FLASH_BAD_SIG                 (0x00000040)
+#define MPI2_DIAG_RESET_HISTORY                 (0x00000020)
+#define MPI2_DIAG_DIAG_RW_ENABLE                (0x00000010)
+#define MPI2_DIAG_RESET_ADAPTER                 (0x00000004)
+#define MPI2_DIAG_HOLD_IOC_RESET                (0x00000002)
+
+/*
+ * Offsets for DiagRWData and address
+ */
+#define MPI2_DIAG_RW_DATA_OFFSET                (0x00000010)
+#define MPI2_DIAG_RW_ADDRESS_LOW_OFFSET         (0x00000014)
+#define MPI2_DIAG_RW_ADDRESS_HIGH_OFFSET        (0x00000018)
+
+/*
+ * Defines for the HostInterruptStatus register
+ */
+#define MPI2_HOST_INTERRUPT_STATUS_OFFSET       (0x00000030)
+#define MPI2_HIS_SYS2IOC_DB_STATUS              (0x80000000)
+#define MPI2_HIS_IOP_DOORBELL_STATUS            MPI2_HIS_SYS2IOC_DB_STATUS
+#define MPI2_HIS_RESET_IRQ_STATUS               (0x40000000)
+#define MPI2_HIS_REPLY_DESCRIPTOR_INTERRUPT     (0x00000008)
+#define MPI2_HIS_IOC2SYS_DB_STATUS              (0x00000001)
+#define MPI2_HIS_DOORBELL_INTERRUPT             MPI2_HIS_IOC2SYS_DB_STATUS
+
+/*
+ * Defines for the HostInterruptMask register
+ */
+#define MPI2_HOST_INTERRUPT_MASK_OFFSET         (0x00000034)
+#define MPI2_HIM_RESET_IRQ_MASK                 (0x40000000)
+#define MPI2_HIM_REPLY_INT_MASK                 (0x00000008)
+#define MPI2_HIM_RIM                            MPI2_HIM_REPLY_INT_MASK
+#define MPI2_HIM_IOC2SYS_DB_MASK                (0x00000001)
+#define MPI2_HIM_DIM                            MPI2_HIM_IOC2SYS_DB_MASK
+
+/*
+ * Offsets for DCRData and address
+ */
+#define MPI2_DCR_DATA_OFFSET                    (0x00000038)
+#define MPI2_DCR_ADDRESS_OFFSET                 (0x0000003C)
+
+/*
+ * Offset for the Reply Free Queue
+ */
+#define MPI2_REPLY_FREE_HOST_INDEX_OFFSET       (0x00000048)
+
+/*
+ * Offset for the Reply Descriptor Post Queue
+ */
+#define MPI2_REPLY_POST_HOST_INDEX_OFFSET       (0x0000006C)
+
+/*
+ * Defines for the HCBSize and address
+ */
+#define MPI2_HCB_SIZE_OFFSET                    (0x00000074)
+#define MPI2_HCB_SIZE_SIZE_MASK                 (0xFFFFF000)
+#define MPI2_HCB_SIZE_HCB_ENABLE                (0x00000001)
+
+#define MPI2_HCB_ADDRESS_LOW_OFFSET             (0x00000078)
+#define MPI2_HCB_ADDRESS_HIGH_OFFSET            (0x0000007C)
+
+/*
+ * Offsets for the Request Queue
+ */
+#define MPI2_REQUEST_DESCRIPTOR_POST_LOW_OFFSET     (0x000000C0)
+#define MPI2_REQUEST_DESCRIPTOR_POST_HIGH_OFFSET    (0x000000C4)
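+
+/*
+ * Editorial usage sketch, not part of the original MPI header: posting a
+ * 64-bit request descriptor through the two 32-bit post registers.  This
+ * is simplified; a Linux driver would normally use writel() on an
+ * ioremapped BAR and serialize the two writes so the descriptor is posted
+ * atomically.
+ */
+static inline void
+example_post_request_descriptor(MPI2_SYSTEM_INTERFACE_REGS *regs, U64 desc)
+{
+    regs->RequestDescriptorPostLow  = (U32)desc;
+    regs->RequestDescriptorPostHigh = (U32)(desc >> 32);
+}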
+
+
+/*****************************************************************************
+*
+*        Message Descriptors
+*
+*****************************************************************************/
+
+/* Request Descriptors */
+
+/* Default Request Descriptor */
+typedef struct _MPI2_DEFAULT_REQUEST_DESCRIPTOR
+{
+    U8              RequestFlags;               /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U16             LMID;                       /* 0x04 */
+    U16             DescriptorTypeDependent;    /* 0x06 */
+} MPI2_DEFAULT_REQUEST_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_DEFAULT_REQUEST_DESCRIPTOR,
+  Mpi2DefaultRequestDescriptor_t, MPI2_POINTER pMpi2DefaultRequestDescriptor_t;
+
+/* defines for the RequestFlags field */
+#define MPI2_REQ_DESCRIPT_FLAGS_TYPE_MASK               (0x0E)
+#define MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO                 (0x00)
+#define MPI2_REQ_DESCRIPT_FLAGS_SCSI_TARGET             (0x02)
+#define MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY           (0x06)
+#define MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE            (0x08)
+
+#define MPI2_REQ_DESCRIPT_FLAGS_IOC_FIFO_MARKER (0x01)
+
+
+/* High Priority Request Descriptor */
+typedef struct _MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR
+{
+    U8              RequestFlags;               /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U16             LMID;                       /* 0x04 */
+    U16             Reserved1;                  /* 0x06 */
+} MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR,
+  Mpi2HighPriorityRequestDescriptor_t,
+  MPI2_POINTER pMpi2HighPriorityRequestDescriptor_t;
+
+
+/* SCSI IO Request Descriptor */
+typedef struct _MPI2_SCSI_IO_REQUEST_DESCRIPTOR
+{
+    U8              RequestFlags;               /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U16             LMID;                       /* 0x04 */
+    U16             DevHandle;                  /* 0x06 */
+} MPI2_SCSI_IO_REQUEST_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST_DESCRIPTOR,
+  Mpi2SCSIIORequestDescriptor_t, MPI2_POINTER pMpi2SCSIIORequestDescriptor_t;
+
+
+/* SCSI Target Request Descriptor */
+typedef struct _MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR
+{
+    U8              RequestFlags;               /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U16             LMID;                       /* 0x04 */
+    U16             IoIndex;                    /* 0x06 */
+} MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR,
+  Mpi2SCSITargetRequestDescriptor_t,
+  MPI2_POINTER pMpi2SCSITargetRequestDescriptor_t;
+
+/* union of Request Descriptors */
+typedef union _MPI2_REQUEST_DESCRIPTOR_UNION
+{
+    MPI2_DEFAULT_REQUEST_DESCRIPTOR         Default;
+    MPI2_HIGH_PRIORITY_REQUEST_DESCRIPTOR   HighPriority;
+    MPI2_SCSI_IO_REQUEST_DESCRIPTOR         SCSIIO;
+    MPI2_SCSI_TARGET_REQUEST_DESCRIPTOR     SCSITarget;
+    U64                                     Words;
+} MPI2_REQUEST_DESCRIPTOR_UNION, MPI2_POINTER PTR_MPI2_REQUEST_DESCRIPTOR_UNION,
+  Mpi2RequestDescriptorUnion_t, MPI2_POINTER pMpi2RequestDescriptorUnion_t;
+
+
+/* Reply Descriptors */
+
+/* Default Reply Descriptor */
+typedef struct _MPI2_DEFAULT_REPLY_DESCRIPTOR
+{
+    U8              ReplyFlags;                 /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             DescriptorTypeDependent1;   /* 0x02 */
+    U32             DescriptorTypeDependent2;   /* 0x04 */
+} MPI2_DEFAULT_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_DEFAULT_REPLY_DESCRIPTOR,
+  Mpi2DefaultReplyDescriptor_t, MPI2_POINTER pMpi2DefaultReplyDescriptor_t;
+
+/* defines for the ReplyFlags field */
+#define MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK               (0x0F)
+#define MPI2_RPY_DESCRIPT_FLAGS_SCSI_IO_SUCCESS         (0x00)
+#define MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY           (0x01)
+#define MPI2_RPY_DESCRIPT_FLAGS_TARGETASSIST_SUCCESS    (0x02)
+#define MPI2_RPY_DESCRIPT_FLAGS_TARGET_COMMAND_BUFFER   (0x03)
+#define MPI2_RPY_DESCRIPT_FLAGS_UNUSED                  (0x0F)
+
+/* values for marking a reply descriptor as unused */
+#define MPI2_RPY_DESCRIPT_UNUSED_WORD0_MARK             (0xFFFFFFFF)
+#define MPI2_RPY_DESCRIPT_UNUSED_WORD1_MARK             (0xFFFFFFFF)
+
+/* Address Reply Descriptor */
+typedef struct _MPI2_ADDRESS_REPLY_DESCRIPTOR
+{
+    U8              ReplyFlags;                 /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U32             ReplyFrameAddress;          /* 0x04 */
+} MPI2_ADDRESS_REPLY_DESCRIPTOR, MPI2_POINTER PTR_MPI2_ADDRESS_REPLY_DESCRIPTOR,
+  Mpi2AddressReplyDescriptor_t, MPI2_POINTER pMpi2AddressReplyDescriptor_t;
+
+#define MPI2_ADDRESS_REPLY_SMID_INVALID                 (0x00)
+
+
+/* SCSI IO Success Reply Descriptor */
+typedef struct _MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR
+{
+    U8              ReplyFlags;                 /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U16             TaskTag;                    /* 0x04 */
+    U16             DevHandle;                  /* 0x06 */
+} MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR,
+  Mpi2SCSIIOSuccessReplyDescriptor_t,
+  MPI2_POINTER pMpi2SCSIIOSuccessReplyDescriptor_t;
+
+
+/* TargetAssist Success Reply Descriptor */
+typedef struct _MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR
+{
+    U8              ReplyFlags;                 /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U16             SMID;                       /* 0x02 */
+    U8              SequenceNumber;             /* 0x04 */
+    U8              Reserved1;                  /* 0x05 */
+    U16             IoIndex;                    /* 0x06 */
+} MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR,
+  Mpi2TargetAssistSuccessReplyDescriptor_t,
+  MPI2_POINTER pMpi2TargetAssistSuccessReplyDescriptor_t;
+
+
+/* Target Command Buffer Reply Descriptor */
+typedef struct _MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR
+{
+    U8              ReplyFlags;                 /* 0x00 */
+    U8              VF_ID;                      /* 0x01 */
+    U8              VP_ID;                      /* 0x02 */
+    U8              Flags;                      /* 0x03 */
+    U16             InitiatorDevHandle;         /* 0x04 */
+    U16             IoIndex;                    /* 0x06 */
+} MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR,
+  MPI2_POINTER PTR_MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR,
+  Mpi2TargetCommandBufferReplyDescriptor_t,
+  MPI2_POINTER pMpi2TargetCommandBufferReplyDescriptor_t;
+
+/* defines for Flags field */
+#define MPI2_RPY_DESCRIPT_TCB_FLAGS_PHYNUM_MASK     (0x3F)
+
+
+/* union of Reply Descriptors */
+typedef union _MPI2_REPLY_DESCRIPTORS_UNION
+{
+    MPI2_DEFAULT_REPLY_DESCRIPTOR               Default;
+    MPI2_ADDRESS_REPLY_DESCRIPTOR               AddressReply;
+    MPI2_SCSI_IO_SUCCESS_REPLY_DESCRIPTOR       SCSIIOSuccess;
+    MPI2_TARGETASSIST_SUCCESS_REPLY_DESCRIPTOR  TargetAssistSuccess;
+    MPI2_TARGET_COMMAND_BUFFER_REPLY_DESCRIPTOR TargetCommandBuffer;
+    U64                                         Words;
+} MPI2_REPLY_DESCRIPTORS_UNION, MPI2_POINTER PTR_MPI2_REPLY_DESCRIPTORS_UNION,
+  Mpi2ReplyDescriptorsUnion_t, MPI2_POINTER pMpi2ReplyDescriptorsUnion_t;
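+
+/*
+ * Editorial usage sketch, not part of the original MPI header: a host
+ * driver walking the reply post queue typically checks the type bits in
+ * ReplyFlags to see whether a descriptor slot is still unused.
+ */
+static inline int
+example_reply_descriptor_unused(MPI2_REPLY_DESCRIPTORS_UNION *d)
+{
+    return (d->Default.ReplyFlags & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK) ==
+           MPI2_RPY_DESCRIPT_FLAGS_UNUSED;
+}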
+
+
+
+/*****************************************************************************
+*
+*        Message Functions
+*              0x80 -> 0x8F reserved for private message use per product
+*
+*
+*****************************************************************************/
+
+#define MPI2_FUNCTION_SCSI_IO_REQUEST               (0x00) /* SCSI IO */
+#define MPI2_FUNCTION_SCSI_TASK_MGMT                (0x01) /* SCSI Task Management */
+#define MPI2_FUNCTION_IOC_INIT                      (0x02) /* IOC Init */
+#define MPI2_FUNCTION_IOC_FACTS                     (0x03) /* IOC Facts */
+#define MPI2_FUNCTION_CONFIG                        (0x04) /* Configuration */
+#define MPI2_FUNCTION_PORT_FACTS                    (0x05) /* Port Facts */
+#define MPI2_FUNCTION_PORT_ENABLE                   (0x06) /* Port Enable */
+#define MPI2_FUNCTION_EVENT_NOTIFICATION            (0x07) /* Event Notification */
+#define MPI2_FUNCTION_EVENT_ACK                     (0x08) /* Event Acknowledge */
+#define MPI2_FUNCTION_FW_DOWNLOAD                   (0x09) /* FW Download */
+#define MPI2_FUNCTION_TARGET_ASSIST                 (0x0B) /* Target Assist */
+#define MPI2_FUNCTION_TARGET_STATUS_SEND            (0x0C) /* Target Status Send */
+#define MPI2_FUNCTION_TARGET_MODE_ABORT             (0x0D) /* Target Mode Abort */
+#define MPI2_FUNCTION_FW_UPLOAD                     (0x12) /* FW Upload */
+#define MPI2_FUNCTION_RAID_ACTION                   (0x15) /* RAID Action */
+#define MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH      (0x16) /* SCSI IO RAID Passthrough */
+#define MPI2_FUNCTION_TOOLBOX                       (0x17) /* Toolbox */
+#define MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR      (0x18) /* SCSI Enclosure Processor */
+#define MPI2_FUNCTION_SMP_PASSTHROUGH               (0x1A) /* SMP Passthrough */
+#define MPI2_FUNCTION_SAS_IO_UNIT_CONTROL           (0x1B) /* SAS IO Unit Control */
+#define MPI2_FUNCTION_SATA_PASSTHROUGH              (0x1C) /* SATA Passthrough */
+#define MPI2_FUNCTION_DIAG_BUFFER_POST              (0x1D) /* Diagnostic Buffer Post */
+#define MPI2_FUNCTION_DIAG_RELEASE                  (0x1E) /* Diagnostic Release */
+#define MPI2_FUNCTION_TARGET_CMD_BUF_BASE_POST      (0x24) /* Target Command Buffer Post Base */
+#define MPI2_FUNCTION_TARGET_CMD_BUF_LIST_POST      (0x25) /* Target Command Buffer Post List */
+
+
+
+/* Doorbell functions */
+#define MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET        (0x40)
+/* #define MPI2_FUNCTION_IO_UNIT_RESET                 (0x41) */
+#define MPI2_FUNCTION_HANDSHAKE                     (0x42)
+
+
+/*****************************************************************************
+*
+*        IOC Status Values
+*
+*****************************************************************************/
+
+/* mask for IOCStatus status value */
+#define MPI2_IOCSTATUS_MASK                     (0x7FFF)
+
+/****************************************************************************
+*  Common IOCStatus values for all replies
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_SUCCESS                      (0x0000)
+#define MPI2_IOCSTATUS_INVALID_FUNCTION             (0x0001)
+#define MPI2_IOCSTATUS_BUSY                         (0x0002)
+#define MPI2_IOCSTATUS_INVALID_SGL                  (0x0003)
+#define MPI2_IOCSTATUS_INTERNAL_ERROR               (0x0004)
+#define MPI2_IOCSTATUS_INVALID_VPID                 (0x0005)
+#define MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES       (0x0006)
+#define MPI2_IOCSTATUS_INVALID_FIELD                (0x0007)
+#define MPI2_IOCSTATUS_INVALID_STATE                (0x0008)
+#define MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED       (0x0009)
+
+/****************************************************************************
+*  Config IOCStatus values
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_CONFIG_INVALID_ACTION        (0x0020)
+#define MPI2_IOCSTATUS_CONFIG_INVALID_TYPE          (0x0021)
+#define MPI2_IOCSTATUS_CONFIG_INVALID_PAGE          (0x0022)
+#define MPI2_IOCSTATUS_CONFIG_INVALID_DATA          (0x0023)
+#define MPI2_IOCSTATUS_CONFIG_NO_DEFAULTS           (0x0024)
+#define MPI2_IOCSTATUS_CONFIG_CANT_COMMIT           (0x0025)
+
+/****************************************************************************
+*  SCSI IO Reply
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR         (0x0040)
+#define MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE       (0x0042)
+#define MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE        (0x0043)
+#define MPI2_IOCSTATUS_SCSI_DATA_OVERRUN            (0x0044)
+#define MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN           (0x0045)
+#define MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR           (0x0046)
+#define MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR          (0x0047)
+#define MPI2_IOCSTATUS_SCSI_TASK_TERMINATED         (0x0048)
+#define MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH       (0x0049)
+#define MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED        (0x004A)
+#define MPI2_IOCSTATUS_SCSI_IOC_TERMINATED          (0x004B)
+#define MPI2_IOCSTATUS_SCSI_EXT_TERMINATED          (0x004C)
+
+/****************************************************************************
+*  For use by SCSI Initiator and SCSI Target end-to-end data protection
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_EEDP_GUARD_ERROR             (0x004D)
+#define MPI2_IOCSTATUS_EEDP_REF_TAG_ERROR           (0x004E)
+#define MPI2_IOCSTATUS_EEDP_APP_TAG_ERROR           (0x004F)
+
+/****************************************************************************
+*  SCSI Target values
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_TARGET_INVALID_IO_INDEX      (0x0062)
+#define MPI2_IOCSTATUS_TARGET_ABORTED               (0x0063)
+#define MPI2_IOCSTATUS_TARGET_NO_CONN_RETRYABLE     (0x0064)
+#define MPI2_IOCSTATUS_TARGET_NO_CONNECTION         (0x0065)
+#define MPI2_IOCSTATUS_TARGET_XFER_COUNT_MISMATCH   (0x006A)
+#define MPI2_IOCSTATUS_TARGET_DATA_OFFSET_ERROR     (0x006D)
+#define MPI2_IOCSTATUS_TARGET_TOO_MUCH_WRITE_DATA   (0x006E)
+#define MPI2_IOCSTATUS_TARGET_IU_TOO_SHORT          (0x006F)
+#define MPI2_IOCSTATUS_TARGET_ACK_NAK_TIMEOUT       (0x0070)
+#define MPI2_IOCSTATUS_TARGET_NAK_RECEIVED          (0x0071)
+
+/****************************************************************************
+*  Serial Attached SCSI values
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_SAS_SMP_REQUEST_FAILED       (0x0090)
+#define MPI2_IOCSTATUS_SAS_SMP_DATA_OVERRUN         (0x0091)
+
+/****************************************************************************
+*  Diagnostic Buffer Post / Diagnostic Release values
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED          (0x00A0)
+
+
+/****************************************************************************
+*  IOCStatus flag to indicate that log info is available
+****************************************************************************/
+
+#define MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE  (0x8000)
+
+/****************************************************************************
+*  IOCLogInfo Types
+****************************************************************************/
+
+#define MPI2_IOCLOGINFO_TYPE_MASK               (0xF0000000)
+#define MPI2_IOCLOGINFO_TYPE_SHIFT              (28)
+#define MPI2_IOCLOGINFO_TYPE_NONE               (0x0)
+#define MPI2_IOCLOGINFO_TYPE_SCSI               (0x1)
+#define MPI2_IOCLOGINFO_TYPE_FC                 (0x2)
+#define MPI2_IOCLOGINFO_TYPE_SAS                (0x3)
+#define MPI2_IOCLOGINFO_TYPE_ISCSI              (0x4)
+#define MPI2_IOCLOGINFO_LOG_DATA_MASK           (0x0FFFFFFF)
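+
+/*
+ * Editorial usage sketch, not part of the original MPI header: splitting
+ * an IOCLogInfo word into its type and log-data fields with the masks
+ * defined above.
+ */
+static inline U32 example_loginfo_type(U32 loginfo)
+{
+    return (loginfo & MPI2_IOCLOGINFO_TYPE_MASK) >> MPI2_IOCLOGINFO_TYPE_SHIFT;
+}
+
+static inline U32 example_loginfo_data(U32 loginfo)
+{
+    return loginfo & MPI2_IOCLOGINFO_LOG_DATA_MASK;
+}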
+
+
+/*****************************************************************************
+*
+*        Standard Message Structures
+*
+*****************************************************************************/
+
+/****************************************************************************
+* Request Message Header for all request messages
+****************************************************************************/
+
+typedef struct _MPI2_REQUEST_HEADER
+{
+    U16             FunctionDependent1;         /* 0x00 */
+    U8              ChainOffset;                /* 0x02 */
+    U8              Function;                   /* 0x03 */
+    U16             FunctionDependent2;         /* 0x04 */
+    U8              FunctionDependent3;         /* 0x06 */
+    U8              MsgFlags;                   /* 0x07 */
+    U8              VP_ID;                      /* 0x08 */
+    U8              VF_ID;                      /* 0x09 */
+    U16             Reserved1;                  /* 0x0A */
+} MPI2_REQUEST_HEADER, MPI2_POINTER PTR_MPI2_REQUEST_HEADER,
+  MPI2RequestHeader_t, MPI2_POINTER pMPI2RequestHeader_t;
+
+
+/****************************************************************************
+*  Default Reply
+****************************************************************************/
+
+typedef struct _MPI2_DEFAULT_REPLY
+{
+    U16             FunctionDependent1;         /* 0x00 */
+    U8              MsgLength;                  /* 0x02 */
+    U8              Function;                   /* 0x03 */
+    U16             FunctionDependent2;         /* 0x04 */
+    U8              FunctionDependent3;         /* 0x06 */
+    U8              MsgFlags;                   /* 0x07 */
+    U8              VP_ID;                      /* 0x08 */
+    U8              VF_ID;                      /* 0x09 */
+    U16             Reserved1;                  /* 0x0A */
+    U16             FunctionDependent5;         /* 0x0C */
+    U16             IOCStatus;                  /* 0x0E */
+    U32             IOCLogInfo;                 /* 0x10 */
+} MPI2_DEFAULT_REPLY, MPI2_POINTER PTR_MPI2_DEFAULT_REPLY,
+  MPI2DefaultReply_t, MPI2_POINTER pMPI2DefaultReply_t;
+
+
+/* common version structure/union used in messages and configuration pages */
+
+typedef struct _MPI2_VERSION_STRUCT
+{
+    U8                      Dev;                        /* 0x00 */
+    U8                      Unit;                       /* 0x01 */
+    U8                      Minor;                      /* 0x02 */
+    U8                      Major;                      /* 0x03 */
+} MPI2_VERSION_STRUCT;
+
+typedef union _MPI2_VERSION_UNION
+{
+    MPI2_VERSION_STRUCT     Struct;
+    U32                     Word;
+} MPI2_VERSION_UNION;
+
+
+/* LUN field defines, common to many structures */
+#define MPI2_LUN_FIRST_LEVEL_ADDRESSING             (0x0000FFFF)
+#define MPI2_LUN_SECOND_LEVEL_ADDRESSING            (0xFFFF0000)
+#define MPI2_LUN_THIRD_LEVEL_ADDRESSING             (0x0000FFFF)
+#define MPI2_LUN_FOURTH_LEVEL_ADDRESSING            (0xFFFF0000)
+#define MPI2_LUN_LEVEL_1_WORD                       (0xFF00)
+#define MPI2_LUN_LEVEL_1_DWORD                      (0x0000FF00)
+
+
+/*****************************************************************************
+*
+*        Fusion-MPT MPI Scatter Gather Elements
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  MPI Simple Element structures
+****************************************************************************/
+
+typedef struct _MPI2_SGE_SIMPLE32
+{
+    U32                     FlagsLength;
+    U32                     Address;
+} MPI2_SGE_SIMPLE32, MPI2_POINTER PTR_MPI2_SGE_SIMPLE32,
+  Mpi2SGESimple32_t, MPI2_POINTER pMpi2SGESimple32_t;
+
+typedef struct _MPI2_SGE_SIMPLE64
+{
+    U32                     FlagsLength;
+    U64                     Address;
+} MPI2_SGE_SIMPLE64, MPI2_POINTER PTR_MPI2_SGE_SIMPLE64,
+  Mpi2SGESimple64_t, MPI2_POINTER pMpi2SGESimple64_t;
+
+typedef struct _MPI2_SGE_SIMPLE_UNION
+{
+    U32                     FlagsLength;
+    union
+    {
+        U32                 Address32;
+        U64                 Address64;
+    } u;
+} MPI2_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_SGE_SIMPLE_UNION,
+  Mpi2SGESimpleUnion_t, MPI2_POINTER pMpi2SGESimpleUnion_t;
+
+
+/****************************************************************************
+*  MPI Chain Element structures
+****************************************************************************/
+
+typedef struct _MPI2_SGE_CHAIN32
+{
+    U16                     Length;
+    U8                      NextChainOffset;
+    U8                      Flags;
+    U32                     Address;
+} MPI2_SGE_CHAIN32, MPI2_POINTER PTR_MPI2_SGE_CHAIN32,
+  Mpi2SGEChain32_t, MPI2_POINTER pMpi2SGEChain32_t;
+
+typedef struct _MPI2_SGE_CHAIN64
+{
+    U16                     Length;
+    U8                      NextChainOffset;
+    U8                      Flags;
+    U64                     Address;
+} MPI2_SGE_CHAIN64, MPI2_POINTER PTR_MPI2_SGE_CHAIN64,
+  Mpi2SGEChain64_t, MPI2_POINTER pMpi2SGEChain64_t;
+
+typedef struct _MPI2_SGE_CHAIN_UNION
+{
+    U16                     Length;
+    U8                      NextChainOffset;
+    U8                      Flags;
+    union
+    {
+        U32                 Address32;
+        U64                 Address64;
+    } u;
+} MPI2_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_SGE_CHAIN_UNION,
+  Mpi2SGEChainUnion_t, MPI2_POINTER pMpi2SGEChainUnion_t;
+
+
+/****************************************************************************
+*  MPI Transaction Context Element structures
+****************************************************************************/
+
+typedef struct _MPI2_SGE_TRANSACTION32
+{
+    U8                      Reserved;
+    U8                      ContextSize;
+    U8                      DetailsLength;
+    U8                      Flags;
+    U32                     TransactionContext[1];
+    U32                     TransactionDetails[1];
+} MPI2_SGE_TRANSACTION32, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION32,
+  Mpi2SGETransaction32_t, MPI2_POINTER pMpi2SGETransaction32_t;
+
+typedef struct _MPI2_SGE_TRANSACTION64
+{
+    U8                      Reserved;
+    U8                      ContextSize;
+    U8                      DetailsLength;
+    U8                      Flags;
+    U32                     TransactionContext[2];
+    U32                     TransactionDetails[1];
+} MPI2_SGE_TRANSACTION64, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION64,
+  Mpi2SGETransaction64_t, MPI2_POINTER pMpi2SGETransaction64_t;
+
+typedef struct _MPI2_SGE_TRANSACTION96
+{
+    U8                      Reserved;
+    U8                      ContextSize;
+    U8                      DetailsLength;
+    U8                      Flags;
+    U32                     TransactionContext[3];
+    U32                     TransactionDetails[1];
+} MPI2_SGE_TRANSACTION96, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION96,
+  Mpi2SGETransaction96_t, MPI2_POINTER pMpi2SGETransaction96_t;
+
+typedef struct _MPI2_SGE_TRANSACTION128
+{
+    U8                      Reserved;
+    U8                      ContextSize;
+    U8                      DetailsLength;
+    U8                      Flags;
+    U32                     TransactionContext[4];
+    U32                     TransactionDetails[1];
+} MPI2_SGE_TRANSACTION128, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION128,
+  Mpi2SGETransaction_t128, MPI2_POINTER pMpi2SGETransaction_t128;
+
+typedef struct _MPI2_SGE_TRANSACTION_UNION
+{
+    U8                      Reserved;
+    U8                      ContextSize;
+    U8                      DetailsLength;
+    U8                      Flags;
+    union
+    {
+        U32                 TransactionContext32[1];
+        U32                 TransactionContext64[2];
+        U32                 TransactionContext96[3];
+        U32                 TransactionContext128[4];
+    } u;
+    U32                     TransactionDetails[1];
+} MPI2_SGE_TRANSACTION_UNION, MPI2_POINTER PTR_MPI2_SGE_TRANSACTION_UNION,
+  Mpi2SGETransactionUnion_t, MPI2_POINTER pMpi2SGETransactionUnion_t;
+
+
+/****************************************************************************
+*  MPI SGE union for IO SGLs
+****************************************************************************/
+
+typedef struct _MPI2_MPI_SGE_IO_UNION
+{
+    union
+    {
+        MPI2_SGE_SIMPLE_UNION   Simple;
+        MPI2_SGE_CHAIN_UNION    Chain;
+    } u;
+} MPI2_MPI_SGE_IO_UNION, MPI2_POINTER PTR_MPI2_MPI_SGE_IO_UNION,
+  Mpi2MpiSGEIOUnion_t, MPI2_POINTER pMpi2MpiSGEIOUnion_t;
+
+
+/****************************************************************************
+*  MPI SGE union for SGLs with Simple and Transaction elements
+****************************************************************************/
+
+typedef struct _MPI2_SGE_TRANS_SIMPLE_UNION
+{
+    union
+    {
+        MPI2_SGE_SIMPLE_UNION       Simple;
+        MPI2_SGE_TRANSACTION_UNION  Transaction;
+    } u;
+} MPI2_SGE_TRANS_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_SGE_TRANS_SIMPLE_UNION,
+  Mpi2SGETransSimpleUnion_t, MPI2_POINTER pMpi2SGETransSimpleUnion_t;
+
+
+/****************************************************************************
+*  All MPI SGE types union
+****************************************************************************/
+
+typedef struct _MPI2_MPI_SGE_UNION
+{
+    union
+    {
+        MPI2_SGE_SIMPLE_UNION       Simple;
+        MPI2_SGE_CHAIN_UNION        Chain;
+        MPI2_SGE_TRANSACTION_UNION  Transaction;
+    } u;
+} MPI2_MPI_SGE_UNION, MPI2_POINTER PTR_MPI2_MPI_SGE_UNION,
+  Mpi2MpiSgeUnion_t, MPI2_POINTER pMpi2MpiSgeUnion_t;
+
+
+/****************************************************************************
+*  MPI SGE field definition and masks
+****************************************************************************/
+
+/* Flags field bit definitions */
+
+#define MPI2_SGE_FLAGS_LAST_ELEMENT             (0x80)
+#define MPI2_SGE_FLAGS_END_OF_BUFFER            (0x40)
+#define MPI2_SGE_FLAGS_ELEMENT_TYPE_MASK        (0x30)
+#define MPI2_SGE_FLAGS_LOCAL_ADDRESS            (0x08)
+#define MPI2_SGE_FLAGS_DIRECTION                (0x04)
+#define MPI2_SGE_FLAGS_ADDRESS_SIZE             (0x02)
+#define MPI2_SGE_FLAGS_END_OF_LIST              (0x01)
+
+#define MPI2_SGE_FLAGS_SHIFT                    (24)
+
+#define MPI2_SGE_LENGTH_MASK                    (0x00FFFFFF)
+#define MPI2_SGE_CHAIN_LENGTH_MASK              (0x0000FFFF)
+
+/* Element Type */
+
+#define MPI2_SGE_FLAGS_TRANSACTION_ELEMENT      (0x00)
+#define MPI2_SGE_FLAGS_SIMPLE_ELEMENT           (0x10)
+#define MPI2_SGE_FLAGS_CHAIN_ELEMENT            (0x30)
+#define MPI2_SGE_FLAGS_ELEMENT_MASK             (0x30)
+
+/* Address location */
+
+#define MPI2_SGE_FLAGS_SYSTEM_ADDRESS           (0x00)
+
+/* Direction */
+
+#define MPI2_SGE_FLAGS_IOC_TO_HOST              (0x00)
+#define MPI2_SGE_FLAGS_HOST_TO_IOC              (0x04)
+
+/* Address Size */
+
+#define MPI2_SGE_FLAGS_32_BIT_ADDRESSING        (0x00)
+#define MPI2_SGE_FLAGS_64_BIT_ADDRESSING        (0x02)
+
+/* Context Size */
+
+#define MPI2_SGE_FLAGS_32_BIT_CONTEXT           (0x00)
+#define MPI2_SGE_FLAGS_64_BIT_CONTEXT           (0x02)
+#define MPI2_SGE_FLAGS_96_BIT_CONTEXT           (0x04)
+#define MPI2_SGE_FLAGS_128_BIT_CONTEXT          (0x06)
+
+#define MPI2_SGE_CHAIN_OFFSET_MASK              (0x00FF0000)
+#define MPI2_SGE_CHAIN_OFFSET_SHIFT             (16)
+
+/****************************************************************************
+*  MPI SGE operation Macros
+****************************************************************************/
+
+/* SIMPLE FlagsLength manipulations... */
+#define MPI2_SGE_SET_FLAGS(f)          ((U32)(f) << MPI2_SGE_FLAGS_SHIFT)
+#define MPI2_SGE_GET_FLAGS(f)          (((f) & ~MPI2_SGE_LENGTH_MASK) >> MPI2_SGE_FLAGS_SHIFT)
+#define MPI2_SGE_LENGTH(f)             ((f) & MPI2_SGE_LENGTH_MASK)
+#define MPI2_SGE_CHAIN_LENGTH(f)       ((f) & MPI2_SGE_CHAIN_LENGTH_MASK)
+
+#define MPI2_SGE_SET_FLAGS_LENGTH(f,l) (MPI2_SGE_SET_FLAGS(f) | MPI2_SGE_LENGTH(l))
+
+#define MPI2_pSGE_GET_FLAGS(psg)            MPI2_SGE_GET_FLAGS((psg)->FlagsLength)
+#define MPI2_pSGE_GET_LENGTH(psg)           MPI2_SGE_LENGTH((psg)->FlagsLength)
+#define MPI2_pSGE_SET_FLAGS_LENGTH(psg,f,l) (psg)->FlagsLength = MPI2_SGE_SET_FLAGS_LENGTH(f,l)
+
+/* CAUTION - The following are READ-MODIFY-WRITE! */
+#define MPI2_pSGE_SET_FLAGS(psg,f)      (psg)->FlagsLength |= MPI2_SGE_SET_FLAGS(f)
+#define MPI2_pSGE_SET_LENGTH(psg,l)     (psg)->FlagsLength |= MPI2_SGE_LENGTH(l)
+
+#define MPI2_GET_CHAIN_OFFSET(x)    ((x & MPI2_SGE_CHAIN_OFFSET_MASK) >> MPI2_SGE_CHAIN_OFFSET_SHIFT)
+
+
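+/*
+ * Editor's note: the helper below is an illustrative, non-normative sketch
+ * (it is not part of the LSI MPI headers) showing how the flag defines and
+ * FlagsLength macros above are typically combined to build a 64-bit simple
+ * SGE for a host-to-IOC transfer.  The function name is hypothetical and
+ * endian conversion of the fields is omitted for brevity.
+ */
+static inline void
+mpi2_example_build_simple_sge64(MPI2_SGE_SIMPLE64 *sge, U64 dma_addr, U32 len)
+{
+    U32 flags = MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+                MPI2_SGE_FLAGS_SYSTEM_ADDRESS |
+                MPI2_SGE_FLAGS_64_BIT_ADDRESSING |
+                MPI2_SGE_FLAGS_HOST_TO_IOC |
+                MPI2_SGE_FLAGS_LAST_ELEMENT |
+                MPI2_SGE_FLAGS_END_OF_BUFFER |
+                MPI2_SGE_FLAGS_END_OF_LIST;
+
+    /* flags land in bits 31:24 of FlagsLength, the byte count in bits 23:0 */
+    MPI2_pSGE_SET_FLAGS_LENGTH(sge, flags, len);
+    sge->Address = dma_addr;
+}
+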
+/*****************************************************************************
+*
+*        Fusion-MPT IEEE Scatter Gather Elements
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  IEEE Simple Element structures
+****************************************************************************/
+
+typedef struct _MPI2_IEEE_SGE_SIMPLE32
+{
+    U32                     Address;
+    U32                     FlagsLength;
+} MPI2_IEEE_SGE_SIMPLE32, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE32,
+  Mpi2IeeeSgeSimple32_t, MPI2_POINTER pMpi2IeeeSgeSimple32_t;
+
+typedef struct _MPI2_IEEE_SGE_SIMPLE64
+{
+    U64                     Address;
+    U32                     Length;
+    U16                     Reserved1;
+    U8                      Reserved2;
+    U8                      Flags;
+} MPI2_IEEE_SGE_SIMPLE64, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE64,
+  Mpi2IeeeSgeSimple64_t, MPI2_POINTER pMpi2IeeeSgeSimple64_t;
+
+typedef union _MPI2_IEEE_SGE_SIMPLE_UNION
+{
+    MPI2_IEEE_SGE_SIMPLE32  Simple32;
+    MPI2_IEEE_SGE_SIMPLE64  Simple64;
+} MPI2_IEEE_SGE_SIMPLE_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_SIMPLE_UNION,
+  Mpi2IeeeSgeSimpleUnion_t, MPI2_POINTER pMpi2IeeeSgeSimpleUnion_t;
+
+
+/****************************************************************************
+*  IEEE Chain Element structures
+****************************************************************************/
+
+typedef MPI2_IEEE_SGE_SIMPLE32  MPI2_IEEE_SGE_CHAIN32;
+
+typedef MPI2_IEEE_SGE_SIMPLE64  MPI2_IEEE_SGE_CHAIN64;
+
+typedef union _MPI2_IEEE_SGE_CHAIN_UNION
+{
+    MPI2_IEEE_SGE_CHAIN32   Chain32;
+    MPI2_IEEE_SGE_CHAIN64   Chain64;
+} MPI2_IEEE_SGE_CHAIN_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_CHAIN_UNION,
+  Mpi2IeeeSgeChainUnion_t, MPI2_POINTER pMpi2IeeeSgeChainUnion_t;
+
+
+/****************************************************************************
+*  All IEEE SGE types union
+****************************************************************************/
+
+typedef struct _MPI2_IEEE_SGE_UNION
+{
+    union
+    {
+        MPI2_IEEE_SGE_SIMPLE_UNION  Simple;
+        MPI2_IEEE_SGE_CHAIN_UNION   Chain;
+    } u;
+} MPI2_IEEE_SGE_UNION, MPI2_POINTER PTR_MPI2_IEEE_SGE_UNION,
+  Mpi2IeeeSgeUnion_t, MPI2_POINTER pMpi2IeeeSgeUnion_t;
+
+
+/****************************************************************************
+*  IEEE SGE field definitions and masks
+****************************************************************************/
+
+/* Flags field bit definitions */
+
+#define MPI2_IEEE_SGE_FLAGS_ELEMENT_TYPE_MASK   (0x80)
+
+#define MPI2_IEEE32_SGE_FLAGS_SHIFT             (24)
+
+#define MPI2_IEEE32_SGE_LENGTH_MASK             (0x00FFFFFF)
+
+/* Element Type */
+
+#define MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT      (0x00)
+#define MPI2_IEEE_SGE_FLAGS_CHAIN_ELEMENT       (0x80)
+
+/* Data Location Address Space */
+
+#define MPI2_IEEE_SGE_FLAGS_ADDR_MASK           (0x03)
+#define MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR         (0x00)
+#define MPI2_IEEE_SGE_FLAGS_IOCDDR_ADDR         (0x01)
+#define MPI2_IEEE_SGE_FLAGS_IOCPLB_ADDR         (0x02)
+#define MPI2_IEEE_SGE_FLAGS_IOCPLBNTA_ADDR      (0x03)
+
+
+/****************************************************************************
+*  IEEE SGE operation Macros
+****************************************************************************/
+
+/* SIMPLE FlagsLength manipulations... */
+#define MPI2_IEEE32_SGE_SET_FLAGS(f)     ((U32)(f) << MPI2_IEEE32_SGE_FLAGS_SHIFT)
+#define MPI2_IEEE32_SGE_GET_FLAGS(f)     (((f) & ~MPI2_IEEE32_SGE_LENGTH_MASK) >> MPI2_IEEE32_SGE_FLAGS_SHIFT)
+#define MPI2_IEEE32_SGE_LENGTH(f)        ((f) & MPI2_IEEE32_SGE_LENGTH_MASK)
+
+#define MPI2_IEEE32_SGE_SET_FLAGS_LENGTH(f, l)      (MPI2_IEEE32_SGE_SET_FLAGS(f) | MPI2_IEEE32_SGE_LENGTH(l))
+
+#define MPI2_IEEE32_pSGE_GET_FLAGS(psg)             MPI2_IEEE32_SGE_GET_FLAGS((psg)->FlagsLength)
+#define MPI2_IEEE32_pSGE_GET_LENGTH(psg)            MPI2_IEEE32_SGE_LENGTH((psg)->FlagsLength)
+#define MPI2_IEEE32_pSGE_SET_FLAGS_LENGTH(psg,f,l)  (psg)->FlagsLength = MPI2_IEEE32_SGE_SET_FLAGS_LENGTH(f,l)
+
+/* CAUTION - The following are READ-MODIFY-WRITE! */
+#define MPI2_IEEE32_pSGE_SET_FLAGS(psg,f)    (psg)->FlagsLength |= MPI2_IEEE32_SGE_SET_FLAGS(f)
+#define MPI2_IEEE32_pSGE_SET_LENGTH(psg,l)   (psg)->FlagsLength |= MPI2_IEEE32_SGE_LENGTH(l)
+
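+/*
+ * Editor's note: illustrative, non-normative sketch (not part of the LSI MPI
+ * headers).  Unlike the 32-bit IEEE simple element, the 64-bit form keeps
+ * Flags and Length in separate fields, so the MPI2_IEEE32_* macros above
+ * apply only to the FlagsLength form.  The function name is hypothetical and
+ * endian conversion is omitted.
+ */
+static inline void
+mpi2_example_build_ieee_sge64(MPI2_IEEE_SGE_SIMPLE64 *sge, U64 dma_addr,
+    U32 len)
+{
+    sge->Address = dma_addr;
+    sge->Length = len;
+    sge->Reserved1 = 0;
+    sge->Reserved2 = 0;
+    sge->Flags = MPI2_IEEE_SGE_FLAGS_SIMPLE_ELEMENT |
+                 MPI2_IEEE_SGE_FLAGS_SYSTEM_ADDR;
+}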
+
+
+
+/*****************************************************************************
+*
+*        Fusion-MPT MPI/IEEE Scatter Gather Unions
+*
+*****************************************************************************/
+
+typedef union _MPI2_SIMPLE_SGE_UNION
+{
+    MPI2_SGE_SIMPLE_UNION       MpiSimple;
+    MPI2_IEEE_SGE_SIMPLE_UNION  IeeeSimple;
+} MPI2_SIMPLE_SGE_UNION, MPI2_POINTER PTR_MPI2_SIMPLE_SGE_UNION,
+  Mpi2SimpleSgeUnion_t, MPI2_POINTER pMpi2SimpleSgeUnion_t;
+
+
+typedef union _MPI2_SGE_IO_UNION
+{
+    MPI2_SGE_SIMPLE_UNION       MpiSimple;
+    MPI2_SGE_CHAIN_UNION        MpiChain;
+    MPI2_IEEE_SGE_SIMPLE_UNION  IeeeSimple;
+    MPI2_IEEE_SGE_CHAIN_UNION   IeeeChain;
+} MPI2_SGE_IO_UNION, MPI2_POINTER PTR_MPI2_SGE_IO_UNION,
+  Mpi2SGEIOUnion_t, MPI2_POINTER pMpi2SGEIOUnion_t;
+
+
+/****************************************************************************
+*
+*  Values for SGLFlags field, used in many request messages with an SGL
+*
+****************************************************************************/
+
+/* values for MPI SGL Data Location Address Space subfield */
+#define MPI2_SGLFLAGS_ADDRESS_SPACE_MASK            (0x0C)
+#define MPI2_SGLFLAGS_SYSTEM_ADDRESS_SPACE          (0x00)
+#define MPI2_SGLFLAGS_IOCDDR_ADDRESS_SPACE          (0x04)
+#define MPI2_SGLFLAGS_IOCPLB_ADDRESS_SPACE          (0x08)
+#define MPI2_SGLFLAGS_IOCPLBNTA_ADDRESS_SPACE       (0x0C)
+/* values for SGL Type subfield */
+#define MPI2_SGLFLAGS_SGL_TYPE_MASK                 (0x03)
+#define MPI2_SGLFLAGS_SGL_TYPE_MPI                  (0x00)
+#define MPI2_SGLFLAGS_SGL_TYPE_IEEE32               (0x01)
+#define MPI2_SGLFLAGS_SGL_TYPE_IEEE64               (0x02)
+
+
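+/*
+ * Editor's note: illustrative sketch, not part of the LSI MPI headers.  A
+ * request's SGLFlags byte is one address-space value OR'd with one SGL-type
+ * value; the helper below (hypothetical name) tests for a 64-bit IEEE SGL.
+ */
+static inline int
+mpi2_example_sgl_is_ieee64(U8 sgl_flags)
+{
+    return (sgl_flags & MPI2_SGLFLAGS_SGL_TYPE_MASK) ==
+        MPI2_SGLFLAGS_SGL_TYPE_IEEE64;
+}
+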
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_cnfg.h b/drivers/scsi/mpt2sas/mpi/mpi2_cnfg.h
new file mode 100644
index 0000000..2f27cf6
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_cnfg.h
@@ -0,0 +1,2151 @@
+/*
+ *  Copyright (c) 2000-2009 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_cnfg.h
+ *          Title:  MPI Configuration messages and pages
+ *  Creation Date:  November 10, 2006
+ *
+ *    mpi2_cnfg.h Version:  02.00.10
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  06-04-07  02.00.01  Added defines for SAS IO Unit Page 2 PhyFlags.
+ *                      Added Manufacturing Page 11.
+ *                      Added MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE
+ *                      define.
+ *  06-26-07  02.00.02  Adding generic structure for product-specific
+ *                      Manufacturing pages: MPI2_CONFIG_PAGE_MANUFACTURING_PS.
+ *                      Rework of BIOS Page 2 configuration page.
+ *                      Fixed MPI2_BIOSPAGE2_BOOT_DEVICE to be a union of the
+ *                      forms.
+ *                      Added configuration pages IOC Page 8 and Driver
+ *                      Persistent Mapping Page 0.
+ *  08-31-07  02.00.03  Modified configuration pages dealing with Integrated
+ *                      RAID (Manufacturing Page 4, RAID Volume Pages 0 and 1,
+ *                      RAID Physical Disk Pages 0 and 1, RAID Configuration
+ *                      Page 0).
+ *                      Added new value for AccessStatus field of SAS Device
+ *                      Page 0 (_SATA_NEEDS_INITIALIZATION).
+ *  10-31-07  02.00.04  Added missing SEPDevHandle field to
+ *                      MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0.
+ *  12-18-07  02.00.05  Modified IO Unit Page 0 to use 32-bit version fields for
+ *                      NVDATA.
+ *                      Modified IOC Page 7 to use masks and added field for
+ *                      SASBroadcastPrimitiveMasks.
+ *                      Added MPI2_CONFIG_PAGE_BIOS_4.
+ *                      Added MPI2_CONFIG_PAGE_LOG_0.
+ *  02-29-08  02.00.06  Modified various names to make them 32-character unique.
+ *                      Added SAS Device IDs.
+ *                      Updated Integrated RAID configuration pages including
+ *                      Manufacturing Page 4, IOC Page 6, and RAID Configuration
+ *                      Page 0.
+ *  05-21-08  02.00.07  Added define MPI2_MANPAGE4_MIX_SSD_SAS_SATA.
+ *                      Added define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION.
+ *                      Fixed define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING.
+ *                      Added missing MaxNumRoutedSasAddresses field to
+ *                      MPI2_CONFIG_PAGE_EXPANDER_0.
+ *                      Added SAS Port Page 0.
+ *                      Modified structure layout for
+ *                      MPI2_CONFIG_PAGE_DRIVER_MAPPING_0.
+ *  06-27-08  02.00.08  Changed MPI2_CONFIG_PAGE_RD_PDISK_1 to use
+ *                      MPI2_RAID_PHYS_DISK1_PATH_MAX to size the array.
+ *  10-02-08  02.00.09  Changed MPI2_RAID_PGAD_CONFIGNUM_MASK from 0x0000FFFF
+ *                      to 0x000000FF.
+ *                      Added two new values for the Physical Disk Coercion Size
+ *                      bits in the Flags field of Manufacturing Page 4.
+ *                      Added product-specific Manufacturing pages 16 to 31.
+ *                      Modified Flags bits for controlling write cache on SATA
+ *                      drives in IO Unit Page 1.
+ *                      Added new bit to AdditionalControlFlags of SAS IO Unit
+ *                      Page 1 to control Invalid Topology Correction.
+ *                      Added additional defines for RAID Volume Page 0
+ *                      VolumeStatusFlags field.
+ *                      Modified meaning of RAID Volume Page 0 VolumeSettings
+ *                      define for auto-configure of hot-swap drives.
+ *                      Added SupportedPhysDisks field to RAID Volume Page 1 and
+ *                      added related defines.
+ *                      Added PhysDiskAttributes field (and related defines) to
+ *                      RAID Physical Disk Page 0.
+ *                      Added MPI2_SAS_PHYINFO_PHY_VACANT define.
+ *                      Added three new DiscoveryStatus bits for SAS IO Unit
+ *                      Page 0 and SAS Expander Page 0.
+ *                      Removed multiplexing information from SAS IO Unit pages.
+ *                      Added BootDeviceWaitTime field to SAS IO Unit Page 4.
+ *                      Removed Zone Address Resolved bit from PhyInfo and from
+ *                      Expander Page 0 Flags field.
+ *                      Added two new AccessStatus values to SAS Device Page 0
+ *                      for indicating routing problems. Added 3 reserved words
+ *                      to this page.
+ *  01-19-09  02.00.10  Fixed defines for GPIOVal field of IO Unit Page 3.
+ *                      Inserted missing reserved field into structure for IOC
+ *                      Page 6.
+ *                      Added more pending task bits to RAID Volume Page 0
+ *                      VolumeStatusFlags defines.
+ *                      Added MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED define.
+ *                      Added a new DiscoveryStatus bit for SAS IO Unit Page 0
+ *                      and SAS Expander Page 0 to flag a downstream initiator
+ *                      when in simplified routing mode.
+ *                      Removed SATA Init Failure defines for DiscoveryStatus
+ *                      fields of SAS IO Unit Page 0 and SAS Expander Page 0.
+ *                      Added MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED define.
+ *                      Added PortGroups, DmaGroup, and ControlGroup fields to
+ *                      SAS Device Page 0.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_CNFG_H
+#define MPI2_CNFG_H
+
+/*****************************************************************************
+*   Configuration Page Header and defines
+*****************************************************************************/
+
+/* Config Page Header */
+typedef struct _MPI2_CONFIG_PAGE_HEADER
+{
+    U8                 PageVersion;                /* 0x00 */
+    U8                 PageLength;                 /* 0x01 */
+    U8                 PageNumber;                 /* 0x02 */
+    U8                 PageType;                   /* 0x03 */
+} MPI2_CONFIG_PAGE_HEADER, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_HEADER,
+  Mpi2ConfigPageHeader_t, MPI2_POINTER pMpi2ConfigPageHeader_t;
+
+typedef union _MPI2_CONFIG_PAGE_HEADER_UNION
+{
+   MPI2_CONFIG_PAGE_HEADER  Struct;
+   U8                       Bytes[4];
+   U16                      Word16[2];
+   U32                      Word32;
+} MPI2_CONFIG_PAGE_HEADER_UNION, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_HEADER_UNION,
+  Mpi2ConfigPageHeaderUnion, MPI2_POINTER pMpi2ConfigPageHeaderUnion;
+
+/* Extended Config Page Header */
+typedef struct _MPI2_CONFIG_EXTENDED_PAGE_HEADER
+{
+    U8                  PageVersion;                /* 0x00 */
+    U8                  Reserved1;                  /* 0x01 */
+    U8                  PageNumber;                 /* 0x02 */
+    U8                  PageType;                   /* 0x03 */
+    U16                 ExtPageLength;              /* 0x04 */
+    U8                  ExtPageType;                /* 0x06 */
+    U8                  Reserved2;                  /* 0x07 */
+} MPI2_CONFIG_EXTENDED_PAGE_HEADER,
+  MPI2_POINTER PTR_MPI2_CONFIG_EXTENDED_PAGE_HEADER,
+  Mpi2ConfigExtendedPageHeader_t, MPI2_POINTER pMpi2ConfigExtendedPageHeader_t;
+
+typedef union _MPI2_CONFIG_EXT_PAGE_HEADER_UNION
+{
+   MPI2_CONFIG_PAGE_HEADER          Struct;
+   MPI2_CONFIG_EXTENDED_PAGE_HEADER Ext;
+   U8                               Bytes[8];
+   U16                              Word16[4];
+   U32                              Word32[2];
+} MPI2_CONFIG_EXT_PAGE_HEADER_UNION, MPI2_POINTER PTR_MPI2_CONFIG_EXT_PAGE_HEADER_UNION,
+  Mpi2ConfigPageExtendedHeaderUnion, MPI2_POINTER pMpi2ConfigPageExtendedHeaderUnion;
+
+
+/* PageType field values */
+#define MPI2_CONFIG_PAGEATTR_READ_ONLY              (0x00)
+#define MPI2_CONFIG_PAGEATTR_CHANGEABLE             (0x10)
+#define MPI2_CONFIG_PAGEATTR_PERSISTENT             (0x20)
+#define MPI2_CONFIG_PAGEATTR_MASK                   (0xF0)
+
+#define MPI2_CONFIG_PAGETYPE_IO_UNIT                (0x00)
+#define MPI2_CONFIG_PAGETYPE_IOC                    (0x01)
+#define MPI2_CONFIG_PAGETYPE_BIOS                   (0x02)
+#define MPI2_CONFIG_PAGETYPE_RAID_VOLUME            (0x08)
+#define MPI2_CONFIG_PAGETYPE_MANUFACTURING          (0x09)
+#define MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK          (0x0A)
+#define MPI2_CONFIG_PAGETYPE_EXTENDED               (0x0F)
+#define MPI2_CONFIG_PAGETYPE_MASK                   (0x0F)
+
+#define MPI2_CONFIG_TYPENUM_MASK                    (0x0FFF)
+
+
+/* ExtPageType field values */
+#define MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT         (0x10)
+#define MPI2_CONFIG_EXTPAGETYPE_SAS_EXPANDER        (0x11)
+#define MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE          (0x12)
+#define MPI2_CONFIG_EXTPAGETYPE_SAS_PHY             (0x13)
+#define MPI2_CONFIG_EXTPAGETYPE_LOG                 (0x14)
+#define MPI2_CONFIG_EXTPAGETYPE_ENCLOSURE           (0x15)
+#define MPI2_CONFIG_EXTPAGETYPE_RAID_CONFIG         (0x16)
+#define MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING      (0x17)
+#define MPI2_CONFIG_EXTPAGETYPE_SAS_PORT            (0x18)
+
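+/*
+ * Editor's note: illustrative sketch, not part of the LSI MPI headers.  The
+ * low nibble of PageType selects the page family; only pages of the EXTENDED
+ * family carry the extended header with ExtPageType/ExtPageLength.  The
+ * helper name is hypothetical.
+ */
+static inline int
+mpi2_example_header_is_extended(MPI2_CONFIG_PAGE_HEADER *hdr)
+{
+    return (hdr->PageType & MPI2_CONFIG_PAGETYPE_MASK) ==
+        MPI2_CONFIG_PAGETYPE_EXTENDED;
+}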
+
+/*****************************************************************************
+*   PageAddress defines
+*****************************************************************************/
+
+/* RAID Volume PageAddress format */
+#define MPI2_RAID_VOLUME_PGAD_FORM_MASK             (0xF0000000)
+#define MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE  (0x00000000)
+#define MPI2_RAID_VOLUME_PGAD_FORM_HANDLE           (0x10000000)
+
+#define MPI2_RAID_VOLUME_PGAD_HANDLE_MASK           (0x0000FFFF)
+
+
+/* RAID Physical Disk PageAddress format */
+#define MPI2_PHYSDISK_PGAD_FORM_MASK                    (0xF0000000)
+#define MPI2_PHYSDISK_PGAD_FORM_GET_NEXT_PHYSDISKNUM    (0x00000000)
+#define MPI2_PHYSDISK_PGAD_FORM_PHYSDISKNUM             (0x10000000)
+#define MPI2_PHYSDISK_PGAD_FORM_DEVHANDLE               (0x20000000)
+
+#define MPI2_PHYSDISK_PGAD_PHYSDISKNUM_MASK             (0x000000FF)
+#define MPI2_PHYSDISK_PGAD_DEVHANDLE_MASK               (0x0000FFFF)
+
+
+/* SAS Expander PageAddress format */
+#define MPI2_SAS_EXPAND_PGAD_FORM_MASK              (0xF0000000)
+#define MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL     (0x00000000)
+#define MPI2_SAS_EXPAND_PGAD_FORM_HNDL_PHY_NUM      (0x10000000)
+#define MPI2_SAS_EXPAND_PGAD_FORM_HNDL              (0x20000000)
+
+#define MPI2_SAS_EXPAND_PGAD_HANDLE_MASK            (0x0000FFFF)
+#define MPI2_SAS_EXPAND_PGAD_PHYNUM_MASK            (0x00FF0000)
+#define MPI2_SAS_EXPAND_PGAD_PHYNUM_SHIFT           (16)
+
+
+/* SAS Device PageAddress format */
+#define MPI2_SAS_DEVICE_PGAD_FORM_MASK              (0xF0000000)
+#define MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE   (0x00000000)
+#define MPI2_SAS_DEVICE_PGAD_FORM_HANDLE            (0x20000000)
+
+#define MPI2_SAS_DEVICE_PGAD_HANDLE_MASK            (0x0000FFFF)
+
+
+/* SAS PHY PageAddress format */
+#define MPI2_SAS_PHY_PGAD_FORM_MASK                 (0xF0000000)
+#define MPI2_SAS_PHY_PGAD_FORM_PHY_NUMBER           (0x00000000)
+#define MPI2_SAS_PHY_PGAD_FORM_PHY_TBL_INDEX        (0x10000000)
+
+#define MPI2_SAS_PHY_PGAD_PHY_NUMBER_MASK           (0x000000FF)
+#define MPI2_SAS_PHY_PGAD_PHY_TBL_INDEX_MASK        (0x0000FFFF)
+
+
+/* SAS Port PageAddress format */
+#define MPI2_SASPORT_PGAD_FORM_MASK                 (0xF0000000)
+#define MPI2_SASPORT_PGAD_FORM_GET_NEXT_PORT        (0x00000000)
+#define MPI2_SASPORT_PGAD_FORM_PORT_NUM             (0x10000000)
+
+#define MPI2_SASPORT_PGAD_PORTNUMBER_MASK           (0x00000FFF)
+
+
+/* SAS Enclosure PageAddress format */
+#define MPI2_SAS_ENCLOS_PGAD_FORM_MASK              (0xF0000000)
+#define MPI2_SAS_ENCLOS_PGAD_FORM_GET_NEXT_HANDLE   (0x00000000)
+#define MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE            (0x10000000)
+
+#define MPI2_SAS_ENCLOS_PGAD_HANDLE_MASK            (0x0000FFFF)
+
+
+/* RAID Configuration PageAddress format */
+#define MPI2_RAID_PGAD_FORM_MASK                    (0xF0000000)
+#define MPI2_RAID_PGAD_FORM_GET_NEXT_CONFIGNUM      (0x00000000)
+#define MPI2_RAID_PGAD_FORM_CONFIGNUM               (0x10000000)
+#define MPI2_RAID_PGAD_FORM_ACTIVE_CONFIG           (0x20000000)
+
+#define MPI2_RAID_PGAD_CONFIGNUM_MASK               (0x000000FF)
+
+
+/* Driver Persistent Mapping PageAddress format */
+#define MPI2_DPM_PGAD_FORM_MASK                     (0xF0000000)
+#define MPI2_DPM_PGAD_FORM_ENTRY_RANGE              (0x00000000)
+
+#define MPI2_DPM_PGAD_ENTRY_COUNT_MASK              (0x0FFF0000)
+#define MPI2_DPM_PGAD_ENTRY_COUNT_SHIFT             (16)
+#define MPI2_DPM_PGAD_START_ENTRY_MASK              (0x0000FFFF)
+
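+/*
+ * Editor's note: illustrative sketch, not part of the LSI MPI headers.  A
+ * PageAddress is built by OR'ing one FORM value with the form-specific
+ * fields, e.g. addressing SAS Device Page 0 by device handle.  The helper
+ * name is hypothetical.
+ */
+static inline U32
+mpi2_example_sas_device_pgad(U16 dev_handle)
+{
+    return MPI2_SAS_DEVICE_PGAD_FORM_HANDLE |
+        ((U32)dev_handle & MPI2_SAS_DEVICE_PGAD_HANDLE_MASK);
+}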
+
+/****************************************************************************
+*   Configuration messages
+****************************************************************************/
+
+/* Configuration Request Message */
+typedef struct _MPI2_CONFIG_REQUEST
+{
+    U8                      Action;                     /* 0x00 */
+    U8                      SGLFlags;                   /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     ExtPageLength;              /* 0x04 */
+    U8                      ExtPageType;                /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved1;                  /* 0x0A */
+    U32                     Reserved2;                  /* 0x0C */
+    U32                     Reserved3;                  /* 0x10 */
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x14 */
+    U32                     PageAddress;                /* 0x18 */
+    MPI2_SGE_IO_UNION       PageBufferSGE;              /* 0x1C */
+} MPI2_CONFIG_REQUEST, MPI2_POINTER PTR_MPI2_CONFIG_REQUEST,
+  Mpi2ConfigRequest_t, MPI2_POINTER pMpi2ConfigRequest_t;
+
+/* values for the Action field */
+#define MPI2_CONFIG_ACTION_PAGE_HEADER              (0x00)
+#define MPI2_CONFIG_ACTION_PAGE_READ_CURRENT        (0x01)
+#define MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT       (0x02)
+#define MPI2_CONFIG_ACTION_PAGE_DEFAULT             (0x03)
+#define MPI2_CONFIG_ACTION_PAGE_WRITE_NVRAM         (0x04)
+#define MPI2_CONFIG_ACTION_PAGE_READ_DEFAULT        (0x05)
+#define MPI2_CONFIG_ACTION_PAGE_READ_NVRAM          (0x06)
+#define MPI2_CONFIG_ACTION_PAGE_GET_CHANGEABLE      (0x07)
+
+/* values for SGLFlags field are in the SGL section of mpi2.h */
+
+
+/* Config Reply Message */
+typedef struct _MPI2_CONFIG_REPLY
+{
+    U8                      Action;                     /* 0x00 */
+    U8                      SGLFlags;                   /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     ExtPageLength;              /* 0x04 */
+    U8                      ExtPageType;                /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved1;                  /* 0x0A */
+    U16                     Reserved2;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x14 */
+} MPI2_CONFIG_REPLY, MPI2_POINTER PTR_MPI2_CONFIG_REPLY,
+  Mpi2ConfigReply_t, MPI2_POINTER pMpi2ConfigReply_t;
+
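+/*
+ * Editor's note: illustrative, non-normative sketch (not part of the LSI MPI
+ * headers).  Configuration access is normally a two-step exchange: a
+ * MPI2_CONFIG_ACTION_PAGE_HEADER request fetches the page header (and, for
+ * extended pages, ExtPageLength), and a second request echoes that header
+ * back with the desired Action, PageAddress and page buffer SGE.  The helper
+ * below only fills the first request; posting it to the IOC, SGE setup and
+ * endian handling are driver specifics and are omitted.  The function name
+ * is hypothetical and MPI2_FUNCTION_CONFIG is assumed to come from mpi2.h.
+ */
+static inline void
+mpi2_example_build_header_request(MPI2_CONFIG_REQUEST *req, U8 page_type,
+    U8 page_number)
+{
+    /* caller is assumed to have zeroed *req */
+    req->Function = MPI2_FUNCTION_CONFIG;
+    req->Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+    req->Header.PageType = page_type;     /* e.g. MPI2_CONFIG_PAGETYPE_IOC */
+    req->Header.PageNumber = page_number;
+}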
+
+
+/*****************************************************************************
+*
+*               C o n f i g u r a t i o n    P a g e s
+*
+*****************************************************************************/
+
+/****************************************************************************
+*   Manufacturing Config pages
+****************************************************************************/
+
+#define MPI2_MFGPAGE_VENDORID_LSI                   (0x1000)
+
+/* SAS */
+#define MPI2_MFGPAGE_DEVID_SAS2004                  (0x0070)
+#define MPI2_MFGPAGE_DEVID_SAS2008                  (0x0072)
+#define MPI2_MFGPAGE_DEVID_SAS2108_1                (0x0074)
+#define MPI2_MFGPAGE_DEVID_SAS2108_2                (0x0076)
+#define MPI2_MFGPAGE_DEVID_SAS2108_3                (0x0077)
+#define MPI2_MFGPAGE_DEVID_SAS2116_1                (0x0064)
+#define MPI2_MFGPAGE_DEVID_SAS2116_2                (0x0065)
+
+
+/* Manufacturing Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_0
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U8                      ChipName[16];               /* 0x04 */
+    U8                      ChipRevision[8];            /* 0x14 */
+    U8                      BoardName[16];              /* 0x1C */
+    U8                      BoardAssembly[16];          /* 0x2C */
+    U8                      BoardTracerNumber[16];      /* 0x3C */
+} MPI2_CONFIG_PAGE_MAN_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_0,
+  Mpi2ManufacturingPage0_t, MPI2_POINTER pMpi2ManufacturingPage0_t;
+
+#define MPI2_MANUFACTURING0_PAGEVERSION                (0x00)
+
+
+/* Manufacturing Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_1
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U8                      VPD[256];                   /* 0x04 */
+} MPI2_CONFIG_PAGE_MAN_1,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_1,
+  Mpi2ManufacturingPage1_t, MPI2_POINTER pMpi2ManufacturingPage1_t;
+
+#define MPI2_MANUFACTURING1_PAGEVERSION                (0x00)
+
+
+typedef struct _MPI2_CHIP_REVISION_ID
+{
+    U16 DeviceID;                                       /* 0x00 */
+    U8  PCIRevisionID;                                  /* 0x02 */
+    U8  Reserved;                                       /* 0x03 */
+} MPI2_CHIP_REVISION_ID, MPI2_POINTER PTR_MPI2_CHIP_REVISION_ID,
+  Mpi2ChipRevisionId_t, MPI2_POINTER pMpi2ChipRevisionId_t;
+
+
+/* Manufacturing Page 2 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength at runtime.
+ */
+#ifndef MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS
+#define MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS   (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_2
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    MPI2_CHIP_REVISION_ID   ChipId;                     /* 0x04 */
+    U32                     HwSettings[MPI2_MAN_PAGE_2_HW_SETTINGS_WORDS];/* 0x08 */
+} MPI2_CONFIG_PAGE_MAN_2,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_2,
+  Mpi2ManufacturingPage2_t, MPI2_POINTER pMpi2ManufacturingPage2_t;
+
+#define MPI2_MANUFACTURING2_PAGEVERSION                 (0x00)
+
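+/*
+ * Editor's note: illustrative sketch, not part of the LSI MPI headers.  As
+ * noted above, HwSettings is declared with one entry only to keep the build
+ * happy; the real word count must be derived at runtime from
+ * Header.PageLength, which (per the MPI convention) gives the whole page
+ * size in 4-byte words.  The helper name is hypothetical.
+ */
+static inline U32
+mpi2_example_man2_hw_settings_words(MPI2_CONFIG_PAGE_MAN_2 *page2)
+{
+    U32 fixed_words = (sizeof(MPI2_CONFIG_PAGE_HEADER) +
+        sizeof(MPI2_CHIP_REVISION_ID)) / 4;
+    U32 total_words = page2->Header.PageLength;
+
+    return (total_words > fixed_words) ? (total_words - fixed_words) : 0;
+}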
+
+/* Manufacturing Page 3 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength at runtime.
+ */
+#ifndef MPI2_MAN_PAGE_3_INFO_WORDS
+#define MPI2_MAN_PAGE_3_INFO_WORDS          (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_3
+{
+    MPI2_CONFIG_PAGE_HEADER             Header;         /* 0x00 */
+    MPI2_CHIP_REVISION_ID               ChipId;         /* 0x04 */
+    U32                                 Info[MPI2_MAN_PAGE_3_INFO_WORDS];/* 0x08 */
+} MPI2_CONFIG_PAGE_MAN_3,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_3,
+  Mpi2ManufacturingPage3_t, MPI2_POINTER pMpi2ManufacturingPage3_t;
+
+#define MPI2_MANUFACTURING3_PAGEVERSION                 (0x00)
+
+
+/* Manufacturing Page 4 */
+
+typedef struct _MPI2_MANPAGE4_PWR_SAVE_SETTINGS
+{
+    U8                          PowerSaveFlags;                 /* 0x00 */
+    U8                          InternalOperationsSleepTime;    /* 0x01 */
+    U8                          InternalOperationsRunTime;      /* 0x02 */
+    U8                          HostIdleTime;                   /* 0x03 */
+} MPI2_MANPAGE4_PWR_SAVE_SETTINGS,
+  MPI2_POINTER PTR_MPI2_MANPAGE4_PWR_SAVE_SETTINGS,
+  Mpi2ManPage4PwrSaveSettings_t, MPI2_POINTER pMpi2ManPage4PwrSaveSettings_t;
+
+/* defines for the PowerSaveFlags field */
+#define MPI2_MANPAGE4_MASK_POWERSAVE_MODE               (0x03)
+#define MPI2_MANPAGE4_POWERSAVE_MODE_DISABLED           (0x00)
+#define MPI2_MANPAGE4_CUSTOM_POWERSAVE_MODE             (0x01)
+#define MPI2_MANPAGE4_FULL_POWERSAVE_MODE               (0x02)
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_4
+{
+    MPI2_CONFIG_PAGE_HEADER             Header;                 /* 0x00 */
+    U32                                 Reserved1;              /* 0x04 */
+    U32                                 Flags;                  /* 0x08 */
+    U8                                  InquirySize;            /* 0x0C */
+    U8                                  Reserved2;              /* 0x0D */
+    U16                                 Reserved3;              /* 0x0E */
+    U8                                  InquiryData[56];        /* 0x10 */
+    U32                                 RAID0VolumeSettings;    /* 0x48 */
+    U32                                 RAID1EVolumeSettings;   /* 0x4C */
+    U32                                 RAID1VolumeSettings;    /* 0x50 */
+    U32                                 RAID10VolumeSettings;   /* 0x54 */
+    U32                                 Reserved4;              /* 0x58 */
+    U32                                 Reserved5;              /* 0x5C */
+    MPI2_MANPAGE4_PWR_SAVE_SETTINGS     PowerSaveSettings;      /* 0x60 */
+    U8                                  MaxOCEDisks;            /* 0x64 */
+    U8                                  ResyncRate;             /* 0x65 */
+    U16                                 DataScrubDuration;      /* 0x66 */
+    U8                                  MaxHotSpares;           /* 0x68 */
+    U8                                  MaxPhysDisksPerVol;     /* 0x69 */
+    U8                                  MaxPhysDisks;           /* 0x6A */
+    U8                                  MaxVolumes;             /* 0x6B */
+} MPI2_CONFIG_PAGE_MAN_4,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_4,
+  Mpi2ManufacturingPage4_t, MPI2_POINTER pMpi2ManufacturingPage4_t;
+
+#define MPI2_MANUFACTURING4_PAGEVERSION                 (0x0A)
+
+/* Manufacturing Page 4 Flags field */
+#define MPI2_MANPAGE4_METADATA_SIZE_MASK                (0x00030000)
+#define MPI2_MANPAGE4_METADATA_512MB                    (0x00000000)
+
+#define MPI2_MANPAGE4_MIX_SSD_SAS_SATA                  (0x00008000)
+#define MPI2_MANPAGE4_MIX_SSD_AND_NON_SSD               (0x00004000)
+#define MPI2_MANPAGE4_HIDE_PHYSDISK_NON_IR              (0x00002000)
+
+#define MPI2_MANPAGE4_MASK_PHYSDISK_COERCION            (0x00001C00)
+#define MPI2_MANPAGE4_PHYSDISK_COERCION_1GB             (0x00000000)
+#define MPI2_MANPAGE4_PHYSDISK_128MB_COERCION           (0x00000400)
+#define MPI2_MANPAGE4_PHYSDISK_ADAPTIVE_COERCION        (0x00000800)
+#define MPI2_MANPAGE4_PHYSDISK_ZERO_COERCION            (0x00000C00)
+
+#define MPI2_MANPAGE4_MASK_BAD_BLOCK_MARKING            (0x00000300)
+#define MPI2_MANPAGE4_DEFAULT_BAD_BLOCK_MARKING         (0x00000000)
+#define MPI2_MANPAGE4_TABLE_BAD_BLOCK_MARKING           (0x00000100)
+#define MPI2_MANPAGE4_WRITE_LONG_BAD_BLOCK_MARKING      (0x00000200)
+
+#define MPI2_MANPAGE4_FORCE_OFFLINE_FAILOVER            (0x00000080)
+#define MPI2_MANPAGE4_RAID10_DISABLE                    (0x00000040)
+#define MPI2_MANPAGE4_RAID1E_DISABLE                    (0x00000020)
+#define MPI2_MANPAGE4_RAID1_DISABLE                     (0x00000010)
+#define MPI2_MANPAGE4_RAID0_DISABLE                     (0x00000008)
+#define MPI2_MANPAGE4_IR_MODEPAGE8_DISABLE              (0x00000004)
+#define MPI2_MANPAGE4_IM_RESYNC_CACHE_ENABLE            (0x00000002)
+#define MPI2_MANPAGE4_IR_NO_MIX_SAS_SATA                (0x00000001)
+
+
+/* Manufacturing Page 5 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength or NumPhys at runtime.
+ */
+#ifndef MPI2_MAN_PAGE_5_PHY_ENTRIES
+#define MPI2_MAN_PAGE_5_PHY_ENTRIES         (1)
+#endif
+
+typedef struct _MPI2_MANUFACTURING5_ENTRY
+{
+    U64                                 WWID;           /* 0x00 */
+    U64                                 DeviceName;     /* 0x08 */
+} MPI2_MANUFACTURING5_ENTRY, MPI2_POINTER PTR_MPI2_MANUFACTURING5_ENTRY,
+  Mpi2Manufacturing5Entry_t, MPI2_POINTER pMpi2Manufacturing5Entry_t;
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_5
+{
+    MPI2_CONFIG_PAGE_HEADER             Header;         /* 0x00 */
+    U8                                  NumPhys;        /* 0x04 */
+    U8                                  Reserved1;      /* 0x05 */
+    U16                                 Reserved2;      /* 0x06 */
+    U32                                 Reserved3;      /* 0x08 */
+    U32                                 Reserved4;      /* 0x0C */
+    MPI2_MANUFACTURING5_ENTRY           Phy[MPI2_MAN_PAGE_5_PHY_ENTRIES];/* 0x10 */
+} MPI2_CONFIG_PAGE_MAN_5,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_5,
+  Mpi2ManufacturingPage5_t, MPI2_POINTER pMpi2ManufacturingPage5_t;
+
+#define MPI2_MANUFACTURING5_PAGEVERSION                 (0x03)
+
+
+/* Manufacturing Page 6 */
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_6
+{
+    MPI2_CONFIG_PAGE_HEADER         Header;             /* 0x00 */
+    U32                             ProductSpecificInfo;/* 0x04 */
+} MPI2_CONFIG_PAGE_MAN_6,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_6,
+  Mpi2ManufacturingPage6_t, MPI2_POINTER pMpi2ManufacturingPage6_t;
+
+#define MPI2_MANUFACTURING6_PAGEVERSION                 (0x00)
+
+
+/* Manufacturing Page 7 */
+
+typedef struct _MPI2_MANPAGE7_CONNECTOR_INFO
+{
+    U32                         Pinout;                 /* 0x00 */
+    U8                          Connector[16];          /* 0x04 */
+    U8                          Location;               /* 0x14 */
+    U8                          Reserved1;              /* 0x15 */
+    U16                         Slot;                   /* 0x16 */
+    U32                         Reserved2;              /* 0x18 */
+} MPI2_MANPAGE7_CONNECTOR_INFO, MPI2_POINTER PTR_MPI2_MANPAGE7_CONNECTOR_INFO,
+  Mpi2ManPage7ConnectorInfo_t, MPI2_POINTER pMpi2ManPage7ConnectorInfo_t;
+
+/* defines for the Pinout field */
+#define MPI2_MANPAGE7_PINOUT_SFF_8484_L4                (0x00080000)
+#define MPI2_MANPAGE7_PINOUT_SFF_8484_L3                (0x00040000)
+#define MPI2_MANPAGE7_PINOUT_SFF_8484_L2                (0x00020000)
+#define MPI2_MANPAGE7_PINOUT_SFF_8484_L1                (0x00010000)
+#define MPI2_MANPAGE7_PINOUT_SFF_8470_L4                (0x00000800)
+#define MPI2_MANPAGE7_PINOUT_SFF_8470_L3                (0x00000400)
+#define MPI2_MANPAGE7_PINOUT_SFF_8470_L2                (0x00000200)
+#define MPI2_MANPAGE7_PINOUT_SFF_8470_L1                (0x00000100)
+#define MPI2_MANPAGE7_PINOUT_SFF_8482                   (0x00000002)
+#define MPI2_MANPAGE7_PINOUT_CONNECTION_UNKNOWN         (0x00000001)
+
+/* defines for the Location field */
+#define MPI2_MANPAGE7_LOCATION_UNKNOWN                  (0x01)
+#define MPI2_MANPAGE7_LOCATION_INTERNAL                 (0x02)
+#define MPI2_MANPAGE7_LOCATION_EXTERNAL                 (0x04)
+#define MPI2_MANPAGE7_LOCATION_SWITCHABLE               (0x08)
+#define MPI2_MANPAGE7_LOCATION_AUTO                     (0x10)
+#define MPI2_MANPAGE7_LOCATION_NOT_PRESENT              (0x20)
+#define MPI2_MANPAGE7_LOCATION_NOT_CONNECTED            (0x80)
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumPhys at runtime.
+ */
+#ifndef MPI2_MANPAGE7_CONNECTOR_INFO_MAX
+#define MPI2_MANPAGE7_CONNECTOR_INFO_MAX  (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_7
+{
+    MPI2_CONFIG_PAGE_HEADER         Header;             /* 0x00 */
+    U32                             Reserved1;          /* 0x04 */
+    U32                             Reserved2;          /* 0x08 */
+    U32                             Flags;              /* 0x0C */
+    U8                              EnclosureName[16];  /* 0x10 */
+    U8                              NumPhys;            /* 0x20 */
+    U8                              Reserved3;          /* 0x21 */
+    U16                             Reserved4;          /* 0x22 */
+    MPI2_MANPAGE7_CONNECTOR_INFO    ConnectorInfo[MPI2_MANPAGE7_CONNECTOR_INFO_MAX]; /* 0x24 */
+} MPI2_CONFIG_PAGE_MAN_7,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_7,
+  Mpi2ManufacturingPage7_t, MPI2_POINTER pMpi2ManufacturingPage7_t;
+
+#define MPI2_MANUFACTURING7_PAGEVERSION                 (0x00)
+
+/* defines for the Flags field */
+#define MPI2_MANPAGE7_FLAG_USE_SLOT_INFO                (0x00000001)
+
+
+/*
+ * Generic structure to use for product-specific manufacturing pages
+ * (currently Manufacturing Page 8 through Manufacturing Page 31).
+ */
+
+typedef struct _MPI2_CONFIG_PAGE_MAN_PS
+{
+    MPI2_CONFIG_PAGE_HEADER         Header;             /* 0x00 */
+    U32                             ProductSpecificInfo;/* 0x04 */
+} MPI2_CONFIG_PAGE_MAN_PS,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_MAN_PS,
+  Mpi2ManufacturingPagePS_t, MPI2_POINTER pMpi2ManufacturingPagePS_t;
+
+#define MPI2_MANUFACTURING8_PAGEVERSION                 (0x00)
+#define MPI2_MANUFACTURING9_PAGEVERSION                 (0x00)
+#define MPI2_MANUFACTURING10_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING11_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING12_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING13_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING14_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING15_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING16_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING17_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING18_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING19_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING20_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING21_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING22_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING23_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING24_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING25_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING26_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING27_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING28_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING29_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING30_PAGEVERSION                (0x00)
+#define MPI2_MANUFACTURING31_PAGEVERSION                (0x00)
+
+
+/****************************************************************************
+*   IO Unit Config Pages
+****************************************************************************/
+
+/* IO Unit Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_0
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U64                     UniqueValue;                /* 0x04 */
+    MPI2_VERSION_UNION      NvdataVersionDefault;       /* 0x0C */
+    MPI2_VERSION_UNION      NvdataVersionPersistent;    /* 0x10 */
+} MPI2_CONFIG_PAGE_IO_UNIT_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_0,
+  Mpi2IOUnitPage0_t, MPI2_POINTER pMpi2IOUnitPage0_t;
+
+#define MPI2_IOUNITPAGE0_PAGEVERSION                    (0x02)
+
+
+/* IO Unit Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_1
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     Flags;                      /* 0x04 */
+} MPI2_CONFIG_PAGE_IO_UNIT_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_1,
+  Mpi2IOUnitPage1_t, MPI2_POINTER pMpi2IOUnitPage1_t;
+
+#define MPI2_IOUNITPAGE1_PAGEVERSION                    (0x04)
+
+/* IO Unit Page 1 Flags defines */
+#define MPI2_IOUNITPAGE1_MASK_SATA_WRITE_CACHE          (0x00000600)
+#define MPI2_IOUNITPAGE1_ENABLE_SATA_WRITE_CACHE        (0x00000000)
+#define MPI2_IOUNITPAGE1_DISABLE_SATA_WRITE_CACHE       (0x00000200)
+#define MPI2_IOUNITPAGE1_UNCHANGED_SATA_WRITE_CACHE     (0x00000400)
+#define MPI2_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE       (0x00000100)
+#define MPI2_IOUNITPAGE1_DISABLE_IR                     (0x00000040)
+#define MPI2_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING (0x00000020)
+#define MPI2_IOUNITPAGE1_IR_USE_STATIC_VOLUME_ID        (0x00000004)
+#define MPI2_IOUNITPAGE1_MULTI_PATHING                  (0x00000002)
+#define MPI2_IOUNITPAGE1_SINGLE_PATHING                 (0x00000000)
+
+
+/* IO Unit Page 3 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength at runtime.
+ */
+#ifndef MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX
+#define MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX    (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_IO_UNIT_3
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                                   /* 0x00 */
+    U8                      GPIOCount;                                /* 0x04 */
+    U8                      Reserved1;                                /* 0x05 */
+    U16                     Reserved2;                                /* 0x06 */
+    U16                     GPIOVal[MPI2_IO_UNIT_PAGE_3_GPIO_VAL_MAX];/* 0x08 */
+} MPI2_CONFIG_PAGE_IO_UNIT_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IO_UNIT_3,
+  Mpi2IOUnitPage3_t, MPI2_POINTER pMpi2IOUnitPage3_t;
+
+#define MPI2_IOUNITPAGE3_PAGEVERSION                    (0x01)
+
+/* defines for IO Unit Page 3 GPIOVal field */
+#define MPI2_IOUNITPAGE3_GPIO_FUNCTION_MASK             (0xFFFC)
+#define MPI2_IOUNITPAGE3_GPIO_FUNCTION_SHIFT            (2)
+#define MPI2_IOUNITPAGE3_GPIO_SETTING_OFF               (0x0000)
+#define MPI2_IOUNITPAGE3_GPIO_SETTING_ON                (0x0001)
+
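+/*
+ * Editor's note: illustrative sketch, not part of the LSI MPI headers.  Each
+ * GPIOVal entry packs a function code in bits 15:2 and the on/off setting in
+ * the low bits; GPIOCount gives the number of valid entries at runtime.  The
+ * helper names are hypothetical.
+ */
+static inline U16
+mpi2_example_gpio_function(U16 gpio_val)
+{
+    return (gpio_val & MPI2_IOUNITPAGE3_GPIO_FUNCTION_MASK) >>
+        MPI2_IOUNITPAGE3_GPIO_FUNCTION_SHIFT;
+}
+
+static inline int
+mpi2_example_gpio_is_on(U16 gpio_val)
+{
+    return (gpio_val & ~MPI2_IOUNITPAGE3_GPIO_FUNCTION_MASK) ==
+        MPI2_IOUNITPAGE3_GPIO_SETTING_ON;
+}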
+
+/****************************************************************************
+*   IOC Config Pages
+****************************************************************************/
+
+/* IOC Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_IOC_0
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     Reserved1;                  /* 0x04 */
+    U32                     Reserved2;                  /* 0x08 */
+    U16                     VendorID;                   /* 0x0C */
+    U16                     DeviceID;                   /* 0x0E */
+    U8                      RevisionID;                 /* 0x10 */
+    U8                      Reserved3;                  /* 0x11 */
+    U16                     Reserved4;                  /* 0x12 */
+    U32                     ClassCode;                  /* 0x14 */
+    U16                     SubsystemVendorID;          /* 0x18 */
+    U16                     SubsystemID;                /* 0x1A */
+} MPI2_CONFIG_PAGE_IOC_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_0,
+  Mpi2IOCPage0_t, MPI2_POINTER pMpi2IOCPage0_t;
+
+#define MPI2_IOCPAGE0_PAGEVERSION                       (0x02)
+
+
+/* IOC Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_IOC_1
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     Flags;                      /* 0x04 */
+    U32                     CoalescingTimeout;          /* 0x08 */
+    U8                      CoalescingDepth;            /* 0x0C */
+    U8                      PCISlotNum;                 /* 0x0D */
+    U8                      PCIBusNum;                  /* 0x0E */
+    U8                      PCIDomainSegment;           /* 0x0F */
+    U32                     Reserved1;                  /* 0x10 */
+    U32                     Reserved2;                  /* 0x14 */
+} MPI2_CONFIG_PAGE_IOC_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_1,
+  Mpi2IOCPage1_t, MPI2_POINTER pMpi2IOCPage1_t;
+
+#define MPI2_IOCPAGE1_PAGEVERSION                       (0x05)
+
+/* defines for IOC Page 1 Flags field */
+#define MPI2_IOCPAGE1_REPLY_COALESCING                  (0x00000001)
+
+#define MPI2_IOCPAGE1_PCISLOTNUM_UNKNOWN                (0xFF)
+#define MPI2_IOCPAGE1_PCIBUSNUM_UNKNOWN                 (0xFF)
+#define MPI2_IOCPAGE1_PCIDOMAIN_UNKNOWN                 (0xFF)
+
+/* IOC Page 6 */
+
+typedef struct _MPI2_CONFIG_PAGE_IOC_6
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                         /* 0x00 */
+    U32                     CapabilitiesFlags;              /* 0x04 */
+    U8                      MaxDrivesRAID0;                 /* 0x08 */
+    U8                      MaxDrivesRAID1;                 /* 0x09 */
+    U8                      MaxDrivesRAID1E;                /* 0x0A */
+    U8                      MaxDrivesRAID10;                /* 0x0B */
+    U8                      MinDrivesRAID0;                 /* 0x0C */
+    U8                      MinDrivesRAID1;                 /* 0x0D */
+    U8                      MinDrivesRAID1E;                /* 0x0E */
+    U8                      MinDrivesRAID10;                /* 0x0F */
+    U32                     Reserved1;                      /* 0x10 */
+    U8                      MaxGlobalHotSpares;             /* 0x14 */
+    U8                      MaxPhysDisks;                   /* 0x15 */
+    U8                      MaxVolumes;                     /* 0x16 */
+    U8                      MaxConfigs;                     /* 0x17 */
+    U8                      MaxOCEDisks;                    /* 0x18 */
+    U8                      Reserved2;                      /* 0x19 */
+    U16                     Reserved3;                      /* 0x1A */
+    U32                     SupportedStripeSizeMapRAID0;    /* 0x1C */
+    U32                     SupportedStripeSizeMapRAID1E;   /* 0x20 */
+    U32                     SupportedStripeSizeMapRAID10;   /* 0x24 */
+    U32                     Reserved4;                      /* 0x28 */
+    U32                     Reserved5;                      /* 0x2C */
+    U16                     DefaultMetadataSize;            /* 0x30 */
+    U16                     Reserved6;                      /* 0x32 */
+    U16                     MaxBadBlockTableEntries;        /* 0x34 */
+    U16                     Reserved7;                      /* 0x36 */
+    U32                     IRNvsramVersion;                /* 0x38 */
+} MPI2_CONFIG_PAGE_IOC_6, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_6,
+  Mpi2IOCPage6_t, MPI2_POINTER pMpi2IOCPage6_t;
+
+#define MPI2_IOCPAGE6_PAGEVERSION                       (0x04)
+
+/* defines for IOC Page 6 CapabilitiesFlags */
+#define MPI2_IOCPAGE6_CAP_FLAGS_RAID10_SUPPORT          (0x00000010)
+#define MPI2_IOCPAGE6_CAP_FLAGS_RAID1_SUPPORT           (0x00000008)
+#define MPI2_IOCPAGE6_CAP_FLAGS_RAID1E_SUPPORT          (0x00000004)
+#define MPI2_IOCPAGE6_CAP_FLAGS_RAID0_SUPPORT           (0x00000002)
+#define MPI2_IOCPAGE6_CAP_FLAGS_GLOBAL_HOT_SPARE        (0x00000001)
+
+
+/* IOC Page 7 */
+
+#define MPI2_IOCPAGE7_EVENTMASK_WORDS       (4)
+
+typedef struct _MPI2_CONFIG_PAGE_IOC_7
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     Reserved1;                  /* 0x04 */
+    U32                     EventMasks[MPI2_IOCPAGE7_EVENTMASK_WORDS];/* 0x08 */
+    U16                     SASBroadcastPrimitiveMasks; /* 0x18 */
+    U16                     Reserved2;                  /* 0x1A */
+    U32                     Reserved3;                  /* 0x1C */
+} MPI2_CONFIG_PAGE_IOC_7, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_7,
+  Mpi2IOCPage7_t, MPI2_POINTER pMpi2IOCPage7_t;
+
+#define MPI2_IOCPAGE7_PAGEVERSION                       (0x01)
+
+
+/* IOC Page 8 */
+
+typedef struct _MPI2_CONFIG_PAGE_IOC_8
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U8                      NumDevsPerEnclosure;        /* 0x04 */
+    U8                      Reserved1;                  /* 0x05 */
+    U16                     Reserved2;                  /* 0x06 */
+    U16                     MaxPersistentEntries;       /* 0x08 */
+    U16                     MaxNumPhysicalMappedIDs;    /* 0x0A */
+    U16                     Flags;                      /* 0x0C */
+    U16                     Reserved3;                  /* 0x0E */
+    U16                     IRVolumeMappingFlags;       /* 0x10 */
+    U16                     Reserved4;                  /* 0x12 */
+    U32                     Reserved5;                  /* 0x14 */
+} MPI2_CONFIG_PAGE_IOC_8, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_IOC_8,
+  Mpi2IOCPage8_t, MPI2_POINTER pMpi2IOCPage8_t;
+
+#define MPI2_IOCPAGE8_PAGEVERSION                       (0x00)
+
+/* defines for IOC Page 8 Flags field */
+#define MPI2_IOCPAGE8_FLAGS_DA_START_SLOT_1             (0x00000020)
+#define MPI2_IOCPAGE8_FLAGS_RESERVED_TARGETID_0         (0x00000010)
+
+#define MPI2_IOCPAGE8_FLAGS_MASK_MAPPING_MODE           (0x0000000E)
+#define MPI2_IOCPAGE8_FLAGS_DEVICE_PERSISTENCE_MAPPING  (0x00000000)
+#define MPI2_IOCPAGE8_FLAGS_ENCLOSURE_SLOT_MAPPING      (0x00000002)
+
+#define MPI2_IOCPAGE8_FLAGS_DISABLE_PERSISTENT_MAPPING  (0x00000001)
+#define MPI2_IOCPAGE8_FLAGS_ENABLE_PERSISTENT_MAPPING   (0x00000000)
+
+/* defines for IOC Page 8 IRVolumeMappingFlags */
+#define MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE  (0x00000003)
+#define MPI2_IOCPAGE8_IRFLAGS_LOW_VOLUME_MAPPING        (0x00000000)
+#define MPI2_IOCPAGE8_IRFLAGS_HIGH_VOLUME_MAPPING       (0x00000001)
+
+
+/****************************************************************************
+*   BIOS Config Pages
+****************************************************************************/
+
+/* BIOS Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_BIOS_1
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     BiosOptions;                /* 0x04 */
+    U32                     IOCSettings;                /* 0x08 */
+    U32                     Reserved1;                  /* 0x0C */
+    U32                     DeviceSettings;             /* 0x10 */
+    U16                     NumberOfDevices;            /* 0x14 */
+    U16                     Reserved2;                  /* 0x16 */
+    U16                     IOTimeoutBlockDevicesNonRM; /* 0x18 */
+    U16                     IOTimeoutSequential;        /* 0x1A */
+    U16                     IOTimeoutOther;             /* 0x1C */
+    U16                     IOTimeoutBlockDevicesRM;    /* 0x1E */
+} MPI2_CONFIG_PAGE_BIOS_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_1,
+  Mpi2BiosPage1_t, MPI2_POINTER pMpi2BiosPage1_t;
+
+#define MPI2_BIOSPAGE1_PAGEVERSION                      (0x04)
+
+/* values for BIOS Page 1 BiosOptions field */
+#define MPI2_BIOSPAGE1_OPTIONS_DISABLE_BIOS             (0x00000001)
+
+/* values for BIOS Page 1 IOCSettings field */
+#define MPI2_BIOSPAGE1_IOCSET_MASK_BOOT_PREFERENCE      (0x00030000)
+#define MPI2_BIOSPAGE1_IOCSET_ENCLOSURE_SLOT_BOOT       (0x00000000)
+#define MPI2_BIOSPAGE1_IOCSET_SAS_ADDRESS_BOOT          (0x00010000)
+
+#define MPI2_BIOSPAGE1_IOCSET_MASK_RM_SETTING           (0x000000C0)
+#define MPI2_BIOSPAGE1_IOCSET_NONE_RM_SETTING           (0x00000000)
+#define MPI2_BIOSPAGE1_IOCSET_BOOT_RM_SETTING           (0x00000040)
+#define MPI2_BIOSPAGE1_IOCSET_MEDIA_RM_SETTING          (0x00000080)
+
+#define MPI2_BIOSPAGE1_IOCSET_MASK_ADAPTER_SUPPORT      (0x00000030)
+#define MPI2_BIOSPAGE1_IOCSET_NO_SUPPORT                (0x00000000)
+#define MPI2_BIOSPAGE1_IOCSET_BIOS_SUPPORT              (0x00000010)
+#define MPI2_BIOSPAGE1_IOCSET_OS_SUPPORT                (0x00000020)
+#define MPI2_BIOSPAGE1_IOCSET_ALL_SUPPORT               (0x00000030)
+
+#define MPI2_BIOSPAGE1_IOCSET_ALTERNATE_CHS             (0x00000008)
+
+/* values for BIOS Page 1 DeviceSettings field */
+#define MPI2_BIOSPAGE1_DEVSET_DISABLE_SMART_POLLING     (0x00000010)
+#define MPI2_BIOSPAGE1_DEVSET_DISABLE_SEQ_LUN           (0x00000008)
+#define MPI2_BIOSPAGE1_DEVSET_DISABLE_RM_LUN            (0x00000004)
+#define MPI2_BIOSPAGE1_DEVSET_DISABLE_NON_RM_LUN        (0x00000002)
+#define MPI2_BIOSPAGE1_DEVSET_DISABLE_OTHER_LUN         (0x00000001)
+
+
+/* BIOS Page 2 */
+
+typedef struct _MPI2_BOOT_DEVICE_ADAPTER_ORDER
+{
+    U32         Reserved1;                              /* 0x00 */
+    U32         Reserved2;                              /* 0x04 */
+    U32         Reserved3;                              /* 0x08 */
+    U32         Reserved4;                              /* 0x0C */
+    U32         Reserved5;                              /* 0x10 */
+    U32         Reserved6;                              /* 0x14 */
+} MPI2_BOOT_DEVICE_ADAPTER_ORDER,
+  MPI2_POINTER PTR_MPI2_BOOT_DEVICE_ADAPTER_ORDER,
+  Mpi2BootDeviceAdapterOrder_t, MPI2_POINTER pMpi2BootDeviceAdapterOrder_t;
+
+typedef struct _MPI2_BOOT_DEVICE_SAS_WWID
+{
+    U64         SASAddress;                             /* 0x00 */
+    U8          LUN[8];                                 /* 0x08 */
+    U32         Reserved1;                              /* 0x10 */
+    U32         Reserved2;                              /* 0x14 */
+} MPI2_BOOT_DEVICE_SAS_WWID, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_SAS_WWID,
+  Mpi2BootDeviceSasWwid_t, MPI2_POINTER pMpi2BootDeviceSasWwid_t;
+
+typedef struct _MPI2_BOOT_DEVICE_ENCLOSURE_SLOT
+{
+    U64         EnclosureLogicalID;                     /* 0x00 */
+    U32         Reserved1;                              /* 0x08 */
+    U32         Reserved2;                              /* 0x0C */
+    U16         SlotNumber;                             /* 0x10 */
+    U16         Reserved3;                              /* 0x12 */
+    U32         Reserved4;                              /* 0x14 */
+} MPI2_BOOT_DEVICE_ENCLOSURE_SLOT,
+  MPI2_POINTER PTR_MPI2_BOOT_DEVICE_ENCLOSURE_SLOT,
+  Mpi2BootDeviceEnclosureSlot_t, MPI2_POINTER pMpi2BootDeviceEnclosureSlot_t;
+
+typedef struct _MPI2_BOOT_DEVICE_DEVICE_NAME
+{
+    U64         DeviceName;                             /* 0x00 */
+    U8          LUN[8];                                 /* 0x08 */
+    U32         Reserved1;                              /* 0x10 */
+    U32         Reserved2;                              /* 0x14 */
+} MPI2_BOOT_DEVICE_DEVICE_NAME, MPI2_POINTER PTR_MPI2_BOOT_DEVICE_DEVICE_NAME,
+  Mpi2BootDeviceDeviceName_t, MPI2_POINTER pMpi2BootDeviceDeviceName_t;
+
+typedef union _MPI2_BIOSPAGE2_BOOT_DEVICE
+{
+    MPI2_BOOT_DEVICE_ADAPTER_ORDER  AdapterOrder;
+    MPI2_BOOT_DEVICE_SAS_WWID       SasWwid;
+    MPI2_BOOT_DEVICE_ENCLOSURE_SLOT EnclosureSlot;
+    MPI2_BOOT_DEVICE_DEVICE_NAME    DeviceName;
+} MPI2_BIOSPAGE2_BOOT_DEVICE, MPI2_POINTER PTR_MPI2_BIOSPAGE2_BOOT_DEVICE,
+  Mpi2BiosPage2BootDevice_t, MPI2_POINTER pMpi2BiosPage2BootDevice_t;
+
+typedef struct _MPI2_CONFIG_PAGE_BIOS_2
+{
+    MPI2_CONFIG_PAGE_HEADER     Header;                 /* 0x00 */
+    U32                         Reserved1;              /* 0x04 */
+    U32                         Reserved2;              /* 0x08 */
+    U32                         Reserved3;              /* 0x0C */
+    U32                         Reserved4;              /* 0x10 */
+    U32                         Reserved5;              /* 0x14 */
+    U32                         Reserved6;              /* 0x18 */
+    U8                          ReqBootDeviceForm;      /* 0x1C */
+    U8                          Reserved7;              /* 0x1D */
+    U16                         Reserved8;              /* 0x1E */
+    MPI2_BIOSPAGE2_BOOT_DEVICE  RequestedBootDevice;    /* 0x20 */
+    U8                          ReqAltBootDeviceForm;   /* 0x38 */
+    U8                          Reserved9;              /* 0x39 */
+    U16                         Reserved10;             /* 0x3A */
+    MPI2_BIOSPAGE2_BOOT_DEVICE  RequestedAltBootDevice; /* 0x3C */
+    U8                          CurrentBootDeviceForm;  /* 0x54 */
+    U8                          Reserved11;             /* 0x55 */
+    U16                         Reserved12;             /* 0x56 */
+    MPI2_BIOSPAGE2_BOOT_DEVICE  CurrentBootDevice;      /* 0x58 */
+} MPI2_CONFIG_PAGE_BIOS_2, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_2,
+  Mpi2BiosPage2_t, MPI2_POINTER pMpi2BiosPage2_t;
+
+#define MPI2_BIOSPAGE2_PAGEVERSION                      (0x04)
+
+/* values for BIOS Page 2 BootDeviceForm fields */
+#define MPI2_BIOSPAGE2_FORM_MASK                        (0x0F)
+#define MPI2_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED         (0x00)
+#define MPI2_BIOSPAGE2_FORM_SAS_WWID                    (0x05)
+#define MPI2_BIOSPAGE2_FORM_ENCLOSURE_SLOT              (0x06)
+#define MPI2_BIOSPAGE2_FORM_DEVICE_NAME                 (0x07)
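+
+/*
+ * Illustrative example (not part of the MPI specification text): the
+ * *BootDeviceForm fields select which member of the boot device union is
+ * valid, so host code typically switches on the masked form value.  The
+ * page2, sas_address and slot names below are placeholders, not fields or
+ * helpers defined by this file:
+ *
+ *     switch (page2->ReqBootDeviceForm & MPI2_BIOSPAGE2_FORM_MASK) {
+ *     case MPI2_BIOSPAGE2_FORM_SAS_WWID:
+ *         sas_address = page2->RequestedBootDevice.SasWwid.SASAddress;
+ *         break;
+ *     case MPI2_BIOSPAGE2_FORM_ENCLOSURE_SLOT:
+ *         slot = page2->RequestedBootDevice.EnclosureSlot.SlotNumber;
+ *         break;
+ *     default:
+ *         break;
+ *     }
+ */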
+
+
+/* BIOS Page 3 */
+
+typedef struct _MPI2_ADAPTER_INFO
+{
+    U8      PciBusNumber;                               /* 0x00 */
+    U8      PciDeviceAndFunctionNumber;                 /* 0x01 */
+    U16     AdapterFlags;                               /* 0x02 */
+} MPI2_ADAPTER_INFO, MPI2_POINTER PTR_MPI2_ADAPTER_INFO,
+  Mpi2AdapterInfo_t, MPI2_POINTER pMpi2AdapterInfo_t;
+
+#define MPI2_ADAPTER_INFO_FLAGS_EMBEDDED                (0x0001)
+#define MPI2_ADAPTER_INFO_FLAGS_INIT_STATUS             (0x0002)
+
+typedef struct _MPI2_CONFIG_PAGE_BIOS_3
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U32                     GlobalFlags;                /* 0x04 */
+    U32                     BiosVersion;                /* 0x08 */
+    MPI2_ADAPTER_INFO       AdapterOrder[4];            /* 0x0C */
+    U32                     Reserved1;                  /* 0x1C */
+} MPI2_CONFIG_PAGE_BIOS_3, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_3,
+  Mpi2BiosPage3_t, MPI2_POINTER pMpi2BiosPage3_t;
+
+#define MPI2_BIOSPAGE3_PAGEVERSION                      (0x00)
+
+/* values for BIOS Page 3 GlobalFlags */
+#define MPI2_BIOSPAGE3_FLAGS_PAUSE_ON_ERROR             (0x00000002)
+#define MPI2_BIOSPAGE3_FLAGS_VERBOSE_ENABLE             (0x00000004)
+#define MPI2_BIOSPAGE3_FLAGS_HOOK_INT_40_DISABLE        (0x00000010)
+
+#define MPI2_BIOSPAGE3_FLAGS_DEV_LIST_DISPLAY_MASK      (0x000000E0)
+#define MPI2_BIOSPAGE3_FLAGS_INSTALLED_DEV_DISPLAY      (0x00000000)
+#define MPI2_BIOSPAGE3_FLAGS_ADAPTER_DISPLAY            (0x00000020)
+#define MPI2_BIOSPAGE3_FLAGS_ADAPTER_DEV_DISPLAY        (0x00000040)
+
+
+/* BIOS Page 4 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength or NumPhys at runtime.
+ */
+#ifndef MPI2_BIOS_PAGE_4_PHY_ENTRIES
+#define MPI2_BIOS_PAGE_4_PHY_ENTRIES        (1)
+#endif
+
+typedef struct _MPI2_BIOS4_ENTRY
+{
+    U64                     ReassignmentWWID;       /* 0x00 */
+    U64                     ReassignmentDeviceName; /* 0x08 */
+} MPI2_BIOS4_ENTRY, MPI2_POINTER PTR_MPI2_BIOS4_ENTRY,
+  Mpi2Bios4Entry_t, MPI2_POINTER pMpi2Bios4Entry_t;
+
+typedef struct _MPI2_CONFIG_PAGE_BIOS_4
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                             /* 0x00 */
+    U8                      NumPhys;                            /* 0x04 */
+    U8                      Reserved1;                          /* 0x05 */
+    U16                     Reserved2;                          /* 0x06 */
+    MPI2_BIOS4_ENTRY        Phy[MPI2_BIOS_PAGE_4_PHY_ENTRIES];  /* 0x08 */
+} MPI2_CONFIG_PAGE_BIOS_4, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_BIOS_4,
+  Mpi2BiosPage4_t, MPI2_POINTER pMpi2BiosPage4_t;
+
+#define MPI2_BIOSPAGE4_PAGEVERSION                      (0x01)
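+
+/*
+ * Illustrative example (not part of the MPI specification text): because
+ * MPI2_BIOS_PAGE_4_PHY_ENTRIES is only a one-element placeholder, host code
+ * normally sizes its buffer from the lengths reported by the IOC rather
+ * than from sizeof(MPI2_CONFIG_PAGE_BIOS_4).  PageLength counts 32-bit
+ * words, so with "page4" standing in for a fetched page:
+ *
+ *     bytes = page4->Header.PageLength * 4;
+ *
+ * or, once NumPhys is known:
+ *
+ *     bytes = offsetof(MPI2_CONFIG_PAGE_BIOS_4, Phy) +
+ *             page4->NumPhys * sizeof(MPI2_BIOS4_ENTRY);
+ *
+ * The same pattern applies to the other *_MAX and *_ENTRIES placeholders
+ * later in this file; the extended pages report their size in
+ * Header.ExtPageLength instead of Header.PageLength.
+ */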
+
+
+/****************************************************************************
+*   RAID Volume Config Pages
+****************************************************************************/
+
+/* RAID Volume Page 0 */
+
+typedef struct _MPI2_RAIDVOL0_PHYS_DISK
+{
+    U8                      RAIDSetNum;                 /* 0x00 */
+    U8                      PhysDiskMap;                /* 0x01 */
+    U8                      PhysDiskNum;                /* 0x02 */
+    U8                      Reserved;                   /* 0x03 */
+} MPI2_RAIDVOL0_PHYS_DISK, MPI2_POINTER PTR_MPI2_RAIDVOL0_PHYS_DISK,
+  Mpi2RaidVol0PhysDisk_t, MPI2_POINTER pMpi2RaidVol0PhysDisk_t;
+
+/* defines for the PhysDiskMap field */
+#define MPI2_RAIDVOL0_PHYSDISK_PRIMARY                  (0x01)
+#define MPI2_RAIDVOL0_PHYSDISK_SECONDARY                (0x02)
+
+typedef struct _MPI2_RAIDVOL0_SETTINGS
+{
+    U16                     Settings;                   /* 0x00 */
+    U8                      HotSparePool;               /* 0x01 */
+    U8                      Reserved;                   /* 0x02 */
+} MPI2_RAIDVOL0_SETTINGS, MPI2_POINTER PTR_MPI2_RAIDVOL0_SETTINGS,
+  Mpi2RaidVol0Settings_t, MPI2_POINTER pMpi2RaidVol0Settings_t;
+
+/* RAID Volume Page 0 HotSparePool defines, also used in RAID Physical Disk */
+#define MPI2_RAID_HOT_SPARE_POOL_0                      (0x01)
+#define MPI2_RAID_HOT_SPARE_POOL_1                      (0x02)
+#define MPI2_RAID_HOT_SPARE_POOL_2                      (0x04)
+#define MPI2_RAID_HOT_SPARE_POOL_3                      (0x08)
+#define MPI2_RAID_HOT_SPARE_POOL_4                      (0x10)
+#define MPI2_RAID_HOT_SPARE_POOL_5                      (0x20)
+#define MPI2_RAID_HOT_SPARE_POOL_6                      (0x40)
+#define MPI2_RAID_HOT_SPARE_POOL_7                      (0x80)
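+
+/*
+ * Illustrative example (not part of the MPI specification text): the
+ * HotSparePool field is a bitmap of the pools above, so membership in more
+ * than one pool can be expressed.  With "settings" standing in for a
+ * MPI2_RAIDVOL0_SETTINGS pointer:
+ *
+ *     if (settings->HotSparePool & MPI2_RAID_HOT_SPARE_POOL_2)
+ *         ...  pool 2 applies to this volume or physical disk ...
+ */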
+
+/* RAID Volume Page 0 VolumeSettings defines */
+#define MPI2_RAIDVOL0_SETTING_USE_PRODUCT_ID_SUFFIX     (0x0008)
+#define MPI2_RAIDVOL0_SETTING_AUTO_CONFIG_HSWAP_DISABLE (0x0004)
+
+#define MPI2_RAIDVOL0_SETTING_MASK_WRITE_CACHING        (0x0003)
+#define MPI2_RAIDVOL0_SETTING_UNCHANGED                 (0x0000)
+#define MPI2_RAIDVOL0_SETTING_DISABLE_WRITE_CACHING     (0x0001)
+#define MPI2_RAIDVOL0_SETTING_ENABLE_WRITE_CACHING      (0x0002)
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength at runtime.
+ */
+#ifndef MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX
+#define MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX       (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_RAID_VOL_0
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U16                     DevHandle;                  /* 0x04 */
+    U8                      VolumeState;                /* 0x06 */
+    U8                      VolumeType;                 /* 0x07 */
+    U32                     VolumeStatusFlags;          /* 0x08 */
+    MPI2_RAIDVOL0_SETTINGS  VolumeSettings;             /* 0x0C */
+    U64                     MaxLBA;                     /* 0x10 */
+    U32                     StripeSize;                 /* 0x18 */
+    U16                     BlockSize;                  /* 0x1C */
+    U16                     Reserved1;                  /* 0x1E */
+    U8                      SupportedPhysDisks;         /* 0x20 */
+    U8                      ResyncRate;                 /* 0x21 */
+    U16                     DataScrubDuration;          /* 0x22 */
+    U8                      NumPhysDisks;               /* 0x24 */
+    U8                      Reserved2;                  /* 0x25 */
+    U8                      Reserved3;                  /* 0x26 */
+    U8                      InactiveStatus;             /* 0x27 */
+    MPI2_RAIDVOL0_PHYS_DISK PhysDisk[MPI2_RAID_VOL_PAGE_0_PHYSDISK_MAX]; /* 0x28 */
+} MPI2_CONFIG_PAGE_RAID_VOL_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_VOL_0,
+  Mpi2RaidVolPage0_t, MPI2_POINTER pMpi2RaidVolPage0_t;
+
+#define MPI2_RAIDVOLPAGE0_PAGEVERSION           (0x0A)
+
+/* values for RAID VolumeState */
+#define MPI2_RAID_VOL_STATE_MISSING                         (0x00)
+#define MPI2_RAID_VOL_STATE_FAILED                          (0x01)
+#define MPI2_RAID_VOL_STATE_INITIALIZING                    (0x02)
+#define MPI2_RAID_VOL_STATE_ONLINE                          (0x03)
+#define MPI2_RAID_VOL_STATE_DEGRADED                        (0x04)
+#define MPI2_RAID_VOL_STATE_OPTIMAL                         (0x05)
+
+/* values for RAID VolumeType */
+#define MPI2_RAID_VOL_TYPE_RAID0                            (0x00)
+#define MPI2_RAID_VOL_TYPE_RAID1E                           (0x01)
+#define MPI2_RAID_VOL_TYPE_RAID1                            (0x02)
+#define MPI2_RAID_VOL_TYPE_RAID10                           (0x05)
+#define MPI2_RAID_VOL_TYPE_UNKNOWN                          (0xFF)
+
+/* values for RAID Volume Page 0 VolumeStatusFlags field */
+#define MPI2_RAIDVOL0_STATUS_FLAG_PENDING_RESYNC            (0x02000000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_BACKG_INIT_PENDING        (0x01000000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_MDC_PENDING               (0x00800000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_USER_CONSIST_PENDING      (0x00400000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_MAKE_DATA_CONSISTENT      (0x00200000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_DATA_SCRUB                (0x00100000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_CONSISTENCY_CHECK         (0x00080000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_CAPACITY_EXPANSION        (0x00040000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_BACKGROUND_INIT           (0x00020000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_RESYNC_IN_PROGRESS        (0x00010000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_OCE_ALLOWED               (0x00000040)
+#define MPI2_RAIDVOL0_STATUS_FLAG_BGI_COMPLETE              (0x00000020)
+#define MPI2_RAIDVOL0_STATUS_FLAG_1E_OFFSET_MIRROR          (0x00000000)
+#define MPI2_RAIDVOL0_STATUS_FLAG_1E_ADJACENT_MIRROR        (0x00000010)
+#define MPI2_RAIDVOL0_STATUS_FLAG_BAD_BLOCK_TABLE_FULL      (0x00000008)
+#define MPI2_RAIDVOL0_STATUS_FLAG_VOLUME_INACTIVE           (0x00000004)
+#define MPI2_RAIDVOL0_STATUS_FLAG_QUIESCED                  (0x00000002)
+#define MPI2_RAIDVOL0_STATUS_FLAG_ENABLED                   (0x00000001)
+
+/* values for RAID Volume Page 0 SupportedPhysDisks field */
+#define MPI2_RAIDVOL0_SUPPORT_SOLID_STATE_DISKS             (0x08)
+#define MPI2_RAIDVOL0_SUPPORT_HARD_DISKS                    (0x04)
+#define MPI2_RAIDVOL0_SUPPORT_SAS_PROTOCOL                  (0x02)
+#define MPI2_RAIDVOL0_SUPPORT_SATA_PROTOCOL                 (0x01)
+
+/* values for RAID Volume Page 0 InactiveStatus field */
+#define MPI2_RAIDVOLPAGE0_UNKNOWN_INACTIVE                  (0x00)
+#define MPI2_RAIDVOLPAGE0_STALE_METADATA_INACTIVE           (0x01)
+#define MPI2_RAIDVOLPAGE0_FOREIGN_VOLUME_INACTIVE           (0x02)
+#define MPI2_RAIDVOLPAGE0_INSUFFICIENT_RESOURCE_INACTIVE    (0x03)
+#define MPI2_RAIDVOLPAGE0_CLONE_VOLUME_INACTIVE             (0x04)
+#define MPI2_RAIDVOLPAGE0_INSUFFICIENT_METADATA_INACTIVE    (0x05)
+#define MPI2_RAIDVOLPAGE0_PREVIOUSLY_DELETED                (0x06)
+
+
+/* RAID Volume Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_RAID_VOL_1
+{
+    MPI2_CONFIG_PAGE_HEADER Header;                     /* 0x00 */
+    U16                     DevHandle;                  /* 0x04 */
+    U16                     Reserved0;                  /* 0x06 */
+    U8                      GUID[24];                   /* 0x08 */
+    U8                      Name[16];                   /* 0x20 */
+    U64                     WWID;                       /* 0x30 */
+    U32                     Reserved1;                  /* 0x38 */
+    U32                     Reserved2;                  /* 0x3C */
+} MPI2_CONFIG_PAGE_RAID_VOL_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_VOL_1,
+  Mpi2RaidVolPage1_t, MPI2_POINTER pMpi2RaidVolPage1_t;
+
+#define MPI2_RAIDVOLPAGE1_PAGEVERSION           (0x03)
+
+
+/****************************************************************************
+*   RAID Physical Disk Config Pages
+****************************************************************************/
+
+/* RAID Physical Disk Page 0 */
+
+typedef struct _MPI2_RAIDPHYSDISK0_SETTINGS
+{
+    U16                     Reserved1;                  /* 0x00 */
+    U8                      HotSparePool;               /* 0x02 */
+    U8                      Reserved2;                  /* 0x03 */
+} MPI2_RAIDPHYSDISK0_SETTINGS, MPI2_POINTER PTR_MPI2_RAIDPHYSDISK0_SETTINGS,
+  Mpi2RaidPhysDisk0Settings_t, MPI2_POINTER pMpi2RaidPhysDisk0Settings_t;
+
+/* use MPI2_RAID_HOT_SPARE_POOL_ defines for the HotSparePool field */
+
+typedef struct _MPI2_RAIDPHYSDISK0_INQUIRY_DATA
+{
+    U8                      VendorID[8];                /* 0x00 */
+    U8                      ProductID[16];              /* 0x08 */
+    U8                      ProductRevLevel[4];         /* 0x18 */
+    U8                      SerialNum[32];              /* 0x1C */
+} MPI2_RAIDPHYSDISK0_INQUIRY_DATA,
+  MPI2_POINTER PTR_MPI2_RAIDPHYSDISK0_INQUIRY_DATA,
+  Mpi2RaidPhysDisk0InquiryData_t, MPI2_POINTER pMpi2RaidPhysDisk0InquiryData_t;
+
+typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_0
+{
+    MPI2_CONFIG_PAGE_HEADER         Header;                     /* 0x00 */
+    U16                             DevHandle;                  /* 0x04 */
+    U8                              Reserved1;                  /* 0x06 */
+    U8                              PhysDiskNum;                /* 0x07 */
+    MPI2_RAIDPHYSDISK0_SETTINGS     PhysDiskSettings;           /* 0x08 */
+    U32                             Reserved2;                  /* 0x0C */
+    MPI2_RAIDPHYSDISK0_INQUIRY_DATA InquiryData;                /* 0x10 */
+    U32                             Reserved3;                  /* 0x4C */
+    U8                              PhysDiskState;              /* 0x50 */
+    U8                              OfflineReason;              /* 0x51 */
+    U8                              IncompatibleReason;         /* 0x52 */
+    U8                              PhysDiskAttributes;         /* 0x53 */
+    U32                             PhysDiskStatusFlags;        /* 0x54 */
+    U64                             DeviceMaxLBA;               /* 0x58 */
+    U64                             HostMaxLBA;                 /* 0x60 */
+    U64                             CoercedMaxLBA;              /* 0x68 */
+    U16                             BlockSize;                  /* 0x70 */
+    U16                             Reserved5;                  /* 0x72 */
+    U32                             Reserved6;                  /* 0x74 */
+} MPI2_CONFIG_PAGE_RD_PDISK_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RD_PDISK_0,
+  Mpi2RaidPhysDiskPage0_t, MPI2_POINTER pMpi2RaidPhysDiskPage0_t;
+
+#define MPI2_RAIDPHYSDISKPAGE0_PAGEVERSION          (0x05)
+
+/* PhysDiskState defines */
+#define MPI2_RAID_PD_STATE_NOT_CONFIGURED               (0x00)
+#define MPI2_RAID_PD_STATE_NOT_COMPATIBLE               (0x01)
+#define MPI2_RAID_PD_STATE_OFFLINE                      (0x02)
+#define MPI2_RAID_PD_STATE_ONLINE                       (0x03)
+#define MPI2_RAID_PD_STATE_HOT_SPARE                    (0x04)
+#define MPI2_RAID_PD_STATE_DEGRADED                     (0x05)
+#define MPI2_RAID_PD_STATE_REBUILDING                   (0x06)
+#define MPI2_RAID_PD_STATE_OPTIMAL                      (0x07)
+
+/* OfflineReason defines */
+#define MPI2_PHYSDISK0_ONLINE                           (0x00)
+#define MPI2_PHYSDISK0_OFFLINE_MISSING                  (0x01)
+#define MPI2_PHYSDISK0_OFFLINE_FAILED                   (0x03)
+#define MPI2_PHYSDISK0_OFFLINE_INITIALIZING             (0x04)
+#define MPI2_PHYSDISK0_OFFLINE_REQUESTED                (0x05)
+#define MPI2_PHYSDISK0_OFFLINE_FAILED_REQUESTED         (0x06)
+#define MPI2_PHYSDISK0_OFFLINE_OTHER                    (0xFF)
+
+/* IncompatibleReason defines */
+#define MPI2_PHYSDISK0_COMPATIBLE                       (0x00)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_PROTOCOL            (0x01)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_BLOCKSIZE           (0x02)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_MAX_LBA             (0x03)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_SATA_EXTENDED_CMD   (0x04)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_REMOVEABLE_MEDIA    (0x05)
+#define MPI2_PHYSDISK0_INCOMPATIBLE_UNKNOWN             (0xFF)
+
+/* PhysDiskAttributes defines */
+#define MPI2_PHYSDISK0_ATTRIB_SOLID_STATE_DRIVE         (0x08)
+#define MPI2_PHYSDISK0_ATTRIB_HARD_DISK_DRIVE           (0x04)
+#define MPI2_PHYSDISK0_ATTRIB_SAS_PROTOCOL              (0x02)
+#define MPI2_PHYSDISK0_ATTRIB_SATA_PROTOCOL             (0x01)
+
+/* PhysDiskStatusFlags defines */
+#define MPI2_PHYSDISK0_STATUS_FLAG_NOT_CERTIFIED        (0x00000040)
+#define MPI2_PHYSDISK0_STATUS_FLAG_OCE_TARGET           (0x00000020)
+#define MPI2_PHYSDISK0_STATUS_FLAG_WRITE_CACHE_ENABLED  (0x00000010)
+#define MPI2_PHYSDISK0_STATUS_FLAG_OPTIMAL_PREVIOUS     (0x00000000)
+#define MPI2_PHYSDISK0_STATUS_FLAG_NOT_OPTIMAL_PREVIOUS (0x00000008)
+#define MPI2_PHYSDISK0_STATUS_FLAG_INACTIVE_VOLUME      (0x00000004)
+#define MPI2_PHYSDISK0_STATUS_FLAG_QUIESCED             (0x00000002)
+#define MPI2_PHYSDISK0_STATUS_FLAG_OUT_OF_SYNC          (0x00000001)
+
+
+/* RAID Physical Disk Page 1 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.PageLength or NumPhysDiskPaths at runtime.
+ */
+#ifndef MPI2_RAID_PHYS_DISK1_PATH_MAX
+#define MPI2_RAID_PHYS_DISK1_PATH_MAX   (1)
+#endif
+
+typedef struct _MPI2_RAIDPHYSDISK1_PATH
+{
+    U16             DevHandle;          /* 0x00 */
+    U16             Reserved1;          /* 0x02 */
+    U64             WWID;               /* 0x04 */
+    U64             OwnerWWID;          /* 0x0C */
+    U8              OwnerIdentifier;    /* 0x14 */
+    U8              Reserved2;          /* 0x15 */
+    U16             Flags;              /* 0x16 */
+} MPI2_RAIDPHYSDISK1_PATH, MPI2_POINTER PTR_MPI2_RAIDPHYSDISK1_PATH,
+  Mpi2RaidPhysDisk1Path_t, MPI2_POINTER pMpi2RaidPhysDisk1Path_t;
+
+/* RAID Physical Disk Page 1 Physical Disk Path Flags field defines */
+#define MPI2_RAID_PHYSDISK1_FLAG_PRIMARY        (0x0004)
+#define MPI2_RAID_PHYSDISK1_FLAG_BROKEN         (0x0002)
+#define MPI2_RAID_PHYSDISK1_FLAG_INVALID        (0x0001)
+
+typedef struct _MPI2_CONFIG_PAGE_RD_PDISK_1
+{
+    MPI2_CONFIG_PAGE_HEADER         Header;                     /* 0x00 */
+    U8                              NumPhysDiskPaths;           /* 0x04 */
+    U8                              PhysDiskNum;                /* 0x05 */
+    U16                             Reserved1;                  /* 0x06 */
+    U32                             Reserved2;                  /* 0x08 */
+    MPI2_RAIDPHYSDISK1_PATH         PhysicalDiskPath[MPI2_RAID_PHYS_DISK1_PATH_MAX];/* 0x0C */
+} MPI2_CONFIG_PAGE_RD_PDISK_1,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RD_PDISK_1,
+  Mpi2RaidPhysDiskPage1_t, MPI2_POINTER pMpi2RaidPhysDiskPage1_t;
+
+#define MPI2_RAIDPHYSDISKPAGE1_PAGEVERSION          (0x02)
+
+
+/****************************************************************************
+*   values for fields used by several types of SAS Config Pages
+****************************************************************************/
+
+/* values for NegotiatedLinkRates fields */
+#define MPI2_SAS_NEG_LINK_RATE_MASK_LOGICAL             (0xF0)
+#define MPI2_SAS_NEG_LINK_RATE_SHIFT_LOGICAL            (4)
+#define MPI2_SAS_NEG_LINK_RATE_MASK_PHYSICAL            (0x0F)
+/* link rates used for Negotiated Physical and Logical Link Rate */
+#define MPI2_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE        (0x00)
+#define MPI2_SAS_NEG_LINK_RATE_PHY_DISABLED             (0x01)
+#define MPI2_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED       (0x02)
+#define MPI2_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE        (0x03)
+#define MPI2_SAS_NEG_LINK_RATE_PORT_SELECTOR            (0x04)
+#define MPI2_SAS_NEG_LINK_RATE_SMP_RESET_IN_PROGRESS    (0x05)
+#define MPI2_SAS_NEG_LINK_RATE_1_5                      (0x08)
+#define MPI2_SAS_NEG_LINK_RATE_3_0                      (0x09)
+#define MPI2_SAS_NEG_LINK_RATE_6_0                      (0x0A)
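+
+/*
+ * Illustrative example (not part of the MPI specification text): a
+ * NegotiatedLinkRate byte carries the physical rate in the low nibble and
+ * the logical rate in the high nibble.  With "nlr" standing in for such a
+ * byte:
+ *
+ *     physical = nlr & MPI2_SAS_NEG_LINK_RATE_MASK_PHYSICAL;
+ *     logical = (nlr & MPI2_SAS_NEG_LINK_RATE_MASK_LOGICAL) >>
+ *               MPI2_SAS_NEG_LINK_RATE_SHIFT_LOGICAL;
+ *
+ * Each result is then one of the MPI2_SAS_NEG_LINK_RATE_ values listed
+ * above, e.g. MPI2_SAS_NEG_LINK_RATE_3_0 for a 3.0 Gbps link.
+ */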
+
+
+/* values for AttachedPhyInfo fields */
+#define MPI2_SAS_APHYINFO_INSIDE_ZPSDS_PERSISTENT       (0x00000040)
+#define MPI2_SAS_APHYINFO_REQUESTED_INSIDE_ZPSDS        (0x00000020)
+#define MPI2_SAS_APHYINFO_BREAK_REPLY_CAPABLE           (0x00000010)
+
+#define MPI2_SAS_APHYINFO_REASON_MASK                   (0x0000000F)
+#define MPI2_SAS_APHYINFO_REASON_UNKNOWN                (0x00000000)
+#define MPI2_SAS_APHYINFO_REASON_POWER_ON               (0x00000001)
+#define MPI2_SAS_APHYINFO_REASON_HARD_RESET             (0x00000002)
+#define MPI2_SAS_APHYINFO_REASON_SMP_PHY_CONTROL        (0x00000003)
+#define MPI2_SAS_APHYINFO_REASON_LOSS_OF_SYNC           (0x00000004)
+#define MPI2_SAS_APHYINFO_REASON_MULTIPLEXING_SEQ       (0x00000005)
+#define MPI2_SAS_APHYINFO_REASON_IT_NEXUS_LOSS_TIMER    (0x00000006)
+#define MPI2_SAS_APHYINFO_REASON_BREAK_TIMEOUT          (0x00000007)
+#define MPI2_SAS_APHYINFO_REASON_PHY_TEST_STOPPED       (0x00000008)
+
+
+/* values for PhyInfo fields */
+#define MPI2_SAS_PHYINFO_PHY_VACANT                     (0x80000000)
+#define MPI2_SAS_PHYINFO_CHANGED_REQ_INSIDE_ZPSDS       (0x04000000)
+#define MPI2_SAS_PHYINFO_INSIDE_ZPSDS_PERSISTENT        (0x02000000)
+#define MPI2_SAS_PHYINFO_REQ_INSIDE_ZPSDS               (0x01000000)
+#define MPI2_SAS_PHYINFO_ZONE_GROUP_PERSISTENT          (0x00400000)
+#define MPI2_SAS_PHYINFO_INSIDE_ZPSDS                   (0x00200000)
+#define MPI2_SAS_PHYINFO_ZONING_ENABLED                 (0x00100000)
+
+#define MPI2_SAS_PHYINFO_REASON_MASK                    (0x000F0000)
+#define MPI2_SAS_PHYINFO_REASON_UNKNOWN                 (0x00000000)
+#define MPI2_SAS_PHYINFO_REASON_POWER_ON                (0x00010000)
+#define MPI2_SAS_PHYINFO_REASON_HARD_RESET              (0x00020000)
+#define MPI2_SAS_PHYINFO_REASON_SMP_PHY_CONTROL         (0x00030000)
+#define MPI2_SAS_PHYINFO_REASON_LOSS_OF_SYNC            (0x00040000)
+#define MPI2_SAS_PHYINFO_REASON_MULTIPLEXING_SEQ        (0x00050000)
+#define MPI2_SAS_PHYINFO_REASON_IT_NEXUS_LOSS_TIMER     (0x00060000)
+#define MPI2_SAS_PHYINFO_REASON_BREAK_TIMEOUT           (0x00070000)
+#define MPI2_SAS_PHYINFO_REASON_PHY_TEST_STOPPED        (0x00080000)
+
+#define MPI2_SAS_PHYINFO_MULTIPLEXING_SUPPORTED         (0x00008000)
+#define MPI2_SAS_PHYINFO_SATA_PORT_ACTIVE               (0x00004000)
+#define MPI2_SAS_PHYINFO_SATA_PORT_SELECTOR_PRESENT     (0x00002000)
+#define MPI2_SAS_PHYINFO_VIRTUAL_PHY                    (0x00001000)
+
+#define MPI2_SAS_PHYINFO_MASK_PARTIAL_PATHWAY_TIME      (0x00000F00)
+#define MPI2_SAS_PHYINFO_SHIFT_PARTIAL_PATHWAY_TIME     (8)
+
+#define MPI2_SAS_PHYINFO_MASK_ROUTING_ATTRIBUTE         (0x000000F0)
+#define MPI2_SAS_PHYINFO_DIRECT_ROUTING                 (0x00000000)
+#define MPI2_SAS_PHYINFO_SUBTRACTIVE_ROUTING            (0x00000010)
+#define MPI2_SAS_PHYINFO_TABLE_ROUTING                  (0x00000020)
+
+
+/* values for SAS ProgrammedLinkRate fields */
+#define MPI2_SAS_PRATE_MAX_RATE_MASK                    (0xF0)
+#define MPI2_SAS_PRATE_MAX_RATE_NOT_PROGRAMMABLE        (0x00)
+#define MPI2_SAS_PRATE_MAX_RATE_1_5                     (0x80)
+#define MPI2_SAS_PRATE_MAX_RATE_3_0                     (0x90)
+#define MPI2_SAS_PRATE_MAX_RATE_6_0                     (0xA0)
+#define MPI2_SAS_PRATE_MIN_RATE_MASK                    (0x0F)
+#define MPI2_SAS_PRATE_MIN_RATE_NOT_PROGRAMMABLE        (0x00)
+#define MPI2_SAS_PRATE_MIN_RATE_1_5                     (0x08)
+#define MPI2_SAS_PRATE_MIN_RATE_3_0                     (0x09)
+#define MPI2_SAS_PRATE_MIN_RATE_6_0                     (0x0A)
+
+
+/* values for SAS HwLinkRate fields */
+#define MPI2_SAS_HWRATE_MAX_RATE_MASK                   (0xF0)
+#define MPI2_SAS_HWRATE_MAX_RATE_1_5                    (0x80)
+#define MPI2_SAS_HWRATE_MAX_RATE_3_0                    (0x90)
+#define MPI2_SAS_HWRATE_MAX_RATE_6_0                    (0xA0)
+#define MPI2_SAS_HWRATE_MIN_RATE_MASK                   (0x0F)
+#define MPI2_SAS_HWRATE_MIN_RATE_1_5                    (0x08)
+#define MPI2_SAS_HWRATE_MIN_RATE_3_0                    (0x09)
+#define MPI2_SAS_HWRATE_MIN_RATE_6_0                    (0x0A)
+
+
+/****************************************************************************
+*   SAS IO Unit Config Pages
+****************************************************************************/
+
+/* SAS IO Unit Page 0 */
+
+typedef struct _MPI2_SAS_IO_UNIT0_PHY_DATA
+{
+    U8          Port;                   /* 0x00 */
+    U8          PortFlags;              /* 0x01 */
+    U8          PhyFlags;               /* 0x02 */
+    U8          NegotiatedLinkRate;     /* 0x03 */
+    U32         ControllerPhyDeviceInfo;/* 0x04 */
+    U16         AttachedDevHandle;      /* 0x08 */
+    U16         ControllerDevHandle;    /* 0x0A */
+    U32         DiscoveryStatus;        /* 0x0C */
+    U32         Reserved;               /* 0x10 */
+} MPI2_SAS_IO_UNIT0_PHY_DATA, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT0_PHY_DATA,
+  Mpi2SasIOUnit0PhyData_t, MPI2_POINTER pMpi2SasIOUnit0PhyData_t;
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.ExtPageLength or NumPhys at runtime.
+ */
+#ifndef MPI2_SAS_IOUNIT0_PHY_MAX
+#define MPI2_SAS_IOUNIT0_PHY_MAX        (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                             /* 0x00 */
+    U32                                 Reserved1;                          /* 0x08 */
+    U8                                  NumPhys;                            /* 0x0C */
+    U8                                  Reserved2;                          /* 0x0D */
+    U16                                 Reserved3;                          /* 0x0E */
+    MPI2_SAS_IO_UNIT0_PHY_DATA          PhyData[MPI2_SAS_IOUNIT0_PHY_MAX];  /* 0x10 */
+} MPI2_CONFIG_PAGE_SASIOUNIT_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_0,
+  Mpi2SasIOUnitPage0_t, MPI2_POINTER pMpi2SasIOUnitPage0_t;
+
+#define MPI2_SASIOUNITPAGE0_PAGEVERSION                     (0x05)
+
+/* values for SAS IO Unit Page 0 PortFlags */
+#define MPI2_SASIOUNIT0_PORTFLAGS_DISCOVERY_IN_PROGRESS     (0x08)
+#define MPI2_SASIOUNIT0_PORTFLAGS_AUTO_PORT_CONFIG          (0x01)
+
+/* values for SAS IO Unit Page 0 PhyFlags */
+#define MPI2_SASIOUNIT0_PHYFLAGS_ZONING_ENABLED             (0x10)
+#define MPI2_SASIOUNIT0_PHYFLAGS_PHY_DISABLED               (0x08)
+
+/* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
+
+/* see mpi2_sas.h for SAS IO Unit Page 0 ControllerPhyDeviceInfo values */
+
+/* values for SAS IO Unit Page 0 DiscoveryStatus */
+#define MPI2_SASIOUNIT0_DS_MAX_ENCLOSURES_EXCEED            (0x80000000)
+#define MPI2_SASIOUNIT0_DS_MAX_EXPANDERS_EXCEED             (0x40000000)
+#define MPI2_SASIOUNIT0_DS_MAX_DEVICES_EXCEED               (0x20000000)
+#define MPI2_SASIOUNIT0_DS_MAX_TOPO_PHYS_EXCEED             (0x10000000)
+#define MPI2_SASIOUNIT0_DS_DOWNSTREAM_INITIATOR             (0x08000000)
+#define MPI2_SASIOUNIT0_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE    (0x00008000)
+#define MPI2_SASIOUNIT0_DS_EXP_MULTI_SUBTRACTIVE            (0x00004000)
+#define MPI2_SASIOUNIT0_DS_MULTI_PORT_DOMAIN                (0x00002000)
+#define MPI2_SASIOUNIT0_DS_TABLE_TO_SUBTRACTIVE_LINK        (0x00001000)
+#define MPI2_SASIOUNIT0_DS_UNSUPPORTED_DEVICE               (0x00000800)
+#define MPI2_SASIOUNIT0_DS_TABLE_LINK                       (0x00000400)
+#define MPI2_SASIOUNIT0_DS_SUBTRACTIVE_LINK                 (0x00000200)
+#define MPI2_SASIOUNIT0_DS_SMP_CRC_ERROR                    (0x00000100)
+#define MPI2_SASIOUNIT0_DS_SMP_FUNCTION_FAILED              (0x00000080)
+#define MPI2_SASIOUNIT0_DS_INDEX_NOT_EXIST                  (0x00000040)
+#define MPI2_SASIOUNIT0_DS_OUT_ROUTE_ENTRIES                (0x00000020)
+#define MPI2_SASIOUNIT0_DS_SMP_TIMEOUT                      (0x00000010)
+#define MPI2_SASIOUNIT0_DS_MULTIPLE_PORTS                   (0x00000004)
+#define MPI2_SASIOUNIT0_DS_UNADDRESSABLE_DEVICE             (0x00000002)
+#define MPI2_SASIOUNIT0_DS_LOOP_DETECTED                    (0x00000001)
+
+
+/* SAS IO Unit Page 1 */
+
+typedef struct _MPI2_SAS_IO_UNIT1_PHY_DATA
+{
+    U8          Port;                       /* 0x00 */
+    U8          PortFlags;                  /* 0x01 */
+    U8          PhyFlags;                   /* 0x02 */
+    U8          MaxMinLinkRate;             /* 0x03 */
+    U32         ControllerPhyDeviceInfo;    /* 0x04 */
+    U16         MaxTargetPortConnectTime;   /* 0x08 */
+    U16         Reserved1;                  /* 0x0A */
+} MPI2_SAS_IO_UNIT1_PHY_DATA, MPI2_POINTER PTR_MPI2_SAS_IO_UNIT1_PHY_DATA,
+  Mpi2SasIOUnit1PhyData_t, MPI2_POINTER pMpi2SasIOUnit1PhyData_t;
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.ExtPageLength or NumPhys at runtime.
+ */
+#ifndef MPI2_SAS_IOUNIT1_PHY_MAX
+#define MPI2_SAS_IOUNIT1_PHY_MAX        (1)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_1
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                             /* 0x00 */
+    U16                                 ControlFlags;                       /* 0x08 */
+    U16                                 SASNarrowMaxQueueDepth;             /* 0x0A */
+    U16                                 AdditionalControlFlags;             /* 0x0C */
+    U16                                 SASWideMaxQueueDepth;               /* 0x0E */
+    U8                                  NumPhys;                            /* 0x10 */
+    U8                                  SATAMaxQDepth;                      /* 0x11 */
+    U8                                  ReportDeviceMissingDelay;           /* 0x12 */
+    U8                                  IODeviceMissingDelay;               /* 0x13 */
+    MPI2_SAS_IO_UNIT1_PHY_DATA          PhyData[MPI2_SAS_IOUNIT1_PHY_MAX];  /* 0x14 */
+} MPI2_CONFIG_PAGE_SASIOUNIT_1,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_1,
+  Mpi2SasIOUnitPage1_t, MPI2_POINTER pMpi2SasIOUnitPage1_t;
+
+#define MPI2_SASIOUNITPAGE1_PAGEVERSION     (0x09)
+
+/* values for SAS IO Unit Page 1 ControlFlags */
+#define MPI2_SASIOUNIT1_CONTROL_DEVICE_SELF_TEST                    (0x8000)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_3_0_MAX                        (0x4000)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_1_5_MAX                        (0x2000)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_SW_PRESERVE                    (0x1000)
+
+#define MPI2_SASIOUNIT1_CONTROL_MASK_DEV_SUPPORT                    (0x0600)
+#define MPI2_SASIOUNIT1_CONTROL_SHIFT_DEV_SUPPORT                   (9)
+#define MPI2_SASIOUNIT1_CONTROL_DEV_SUPPORT_BOTH                    (0x0)
+#define MPI2_SASIOUNIT1_CONTROL_DEV_SAS_SUPPORT                     (0x1)
+#define MPI2_SASIOUNIT1_CONTROL_DEV_SATA_SUPPORT                    (0x2)
+
+#define MPI2_SASIOUNIT1_CONTROL_SATA_48BIT_LBA_REQUIRED             (0x0080)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_SMART_REQUIRED                 (0x0040)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_NCQ_REQUIRED                   (0x0020)
+#define MPI2_SASIOUNIT1_CONTROL_SATA_FUA_REQUIRED                   (0x0010)
+#define MPI2_SASIOUNIT1_CONTROL_TABLE_SUBTRACTIVE_ILLEGAL           (0x0008)
+#define MPI2_SASIOUNIT1_CONTROL_SUBTRACTIVE_ILLEGAL                 (0x0004)
+#define MPI2_SASIOUNIT1_CONTROL_FIRST_LVL_DISC_ONLY                 (0x0002)
+#define MPI2_SASIOUNIT1_CONTROL_CLEAR_AFFILIATION                   (0x0001)
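+
+/*
+ * Illustrative example (not part of the MPI specification text): unlike most
+ * bit definitions in this file, the DEV_SUPPORT values above are given after
+ * shifting, so decoding uses both the mask and the shift.  With "flags"
+ * standing in for a ControlFlags value:
+ *
+ *     dev_support = (flags & MPI2_SASIOUNIT1_CONTROL_MASK_DEV_SUPPORT) >>
+ *                   MPI2_SASIOUNIT1_CONTROL_SHIFT_DEV_SUPPORT;
+ *
+ * dev_support then compares directly against
+ * MPI2_SASIOUNIT1_CONTROL_DEV_SUPPORT_BOTH, _DEV_SAS_SUPPORT or
+ * _DEV_SATA_SUPPORT.
+ */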
+
+/* values for SAS IO Unit Page 1 AdditionalControlFlags */
+#define MPI2_SASIOUNIT1_ACONTROL_MULTI_PORT_DOMAIN_ILLEGAL          (0x0080)
+#define MPI2_SASIOUNIT1_ACONTROL_SATA_ASYNCHROUNOUS_NOTIFICATION    (0x0040)
+#define MPI2_SASIOUNIT1_ACONTROL_INVALID_TOPOLOGY_CORRECTION        (0x0020)
+#define MPI2_SASIOUNIT1_ACONTROL_PORT_ENABLE_ONLY_SATA_LINK_RESET   (0x0010)
+#define MPI2_SASIOUNIT1_ACONTROL_OTHER_AFFILIATION_SATA_LINK_RESET  (0x0008)
+#define MPI2_SASIOUNIT1_ACONTROL_SELF_AFFILIATION_SATA_LINK_RESET   (0x0004)
+#define MPI2_SASIOUNIT1_ACONTROL_NO_AFFILIATION_SATA_LINK_RESET     (0x0002)
+#define MPI2_SASIOUNIT1_ACONTROL_ALLOW_TABLE_TO_TABLE               (0x0001)
+
+/* defines for SAS IO Unit Page 1 ReportDeviceMissingDelay */
+#define MPI2_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK                 (0x7F)
+#define MPI2_SASIOUNIT1_REPORT_MISSING_UNIT_16                      (0x80)
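+
+/*
+ * Illustrative example (not part of the MPI specification text): the low
+ * seven bits of ReportDeviceMissingDelay hold the delay count, and the
+ * UNIT_16 bit means that count is expressed in 16-second units rather than
+ * seconds.  With "rdmd" standing in for the raw byte:
+ *
+ *     delay = rdmd & MPI2_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK;
+ *     if (rdmd & MPI2_SASIOUNIT1_REPORT_MISSING_UNIT_16)
+ *         delay *= 16;
+ */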
+
+/* values for SAS IO Unit Page 1 PortFlags */
+#define MPI2_SASIOUNIT1_PORT_FLAGS_AUTO_PORT_CONFIG                 (0x01)
+
+/* values for SAS IO Unit Page 1 PhyFlags */
+#define MPI2_SASIOUNIT1_PHYFLAGS_ZONING_ENABLE                      (0x10)
+#define MPI2_SASIOUNIT1_PHYFLAGS_PHY_DISABLE                        (0x08)
+
+/* values for SAS IO Unit Page 1 MaxMinLinkRate */
+#define MPI2_SASIOUNIT1_MAX_RATE_MASK                               (0xF0)
+#define MPI2_SASIOUNIT1_MAX_RATE_1_5                                (0x80)
+#define MPI2_SASIOUNIT1_MAX_RATE_3_0                                (0x90)
+#define MPI2_SASIOUNIT1_MAX_RATE_6_0                                (0xA0)
+#define MPI2_SASIOUNIT1_MIN_RATE_MASK                               (0x0F)
+#define MPI2_SASIOUNIT1_MIN_RATE_1_5                                (0x08)
+#define MPI2_SASIOUNIT1_MIN_RATE_3_0                                (0x09)
+#define MPI2_SASIOUNIT1_MIN_RATE_6_0                                (0x0A)
+
+/* see mpi2_sas.h for SAS IO Unit Page 1 ControllerPhyDeviceInfo values */
+
+
+/* SAS IO Unit Page 4 */
+
+typedef struct _MPI2_SAS_IOUNIT4_SPINUP_GROUP
+{
+    U8          MaxTargetSpinup;            /* 0x00 */
+    U8          SpinupDelay;                /* 0x01 */
+    U16         Reserved1;                  /* 0x02 */
+} MPI2_SAS_IOUNIT4_SPINUP_GROUP, MPI2_POINTER PTR_MPI2_SAS_IOUNIT4_SPINUP_GROUP,
+  Mpi2SasIOUnit4SpinupGroup_t, MPI2_POINTER pMpi2SasIOUnit4SpinupGroup_t;
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * four and check Header.ExtPageLength or NumPhys at runtime.
+ */
+#ifndef MPI2_SAS_IOUNIT4_PHY_MAX
+#define MPI2_SAS_IOUNIT4_PHY_MAX        (4)
+#endif
+
+typedef struct _MPI2_CONFIG_PAGE_SASIOUNIT_4
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                         /* 0x00 */
+    MPI2_SAS_IOUNIT4_SPINUP_GROUP       SpinupGroupParameters[4];       /* 0x08 */
+    U32                                 Reserved1;                      /* 0x18 */
+    U32                                 Reserved2;                      /* 0x1C */
+    U32                                 Reserved3;                      /* 0x20 */
+    U8                                  BootDeviceWaitTime;             /* 0x24 */
+    U8                                  Reserved4;                      /* 0x25 */
+    U16                                 Reserved5;                      /* 0x26 */
+    U8                                  NumPhys;                        /* 0x28 */
+    U8                                  PEInitialSpinupDelay;           /* 0x29 */
+    U8                                  PEReplyDelay;                   /* 0x2A */
+    U8                                  Flags;                          /* 0x2B */
+    U8                                  PHY[MPI2_SAS_IOUNIT4_PHY_MAX];  /* 0x2C */
+} MPI2_CONFIG_PAGE_SASIOUNIT_4,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SASIOUNIT_4,
+  Mpi2SasIOUnitPage4_t, MPI2_POINTER pMpi2SasIOUnitPage4_t;
+
+#define MPI2_SASIOUNITPAGE4_PAGEVERSION     (0x02)
+
+/* defines for Flags field */
+#define MPI2_SASIOUNIT4_FLAGS_AUTO_PORTENABLE               (0x01)
+
+/* defines for PHY field */
+#define MPI2_SASIOUNIT4_PHY_SPINUP_GROUP_MASK               (0x03)
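+
+/*
+ * Illustrative example (not part of the MPI specification text): the low two
+ * bits of each PHY[] byte select which of the four SpinupGroupParameters
+ * entries governs that phy.  With "page4" standing in for a fetched SAS IO
+ * Unit Page 4:
+ *
+ *     group = page4->PHY[phy] & MPI2_SASIOUNIT4_PHY_SPINUP_GROUP_MASK;
+ *     delay = page4->SpinupGroupParameters[group].SpinupDelay;
+ */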
+
+
+/****************************************************************************
+*   SAS Expander Config Pages
+****************************************************************************/
+
+/* SAS Expander Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_EXPANDER_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U8                                  PhysicalPort;               /* 0x08 */
+    U8                                  ReportGenLength;            /* 0x09 */
+    U16                                 EnclosureHandle;            /* 0x0A */
+    U64                                 SASAddress;                 /* 0x0C */
+    U32                                 DiscoveryStatus;            /* 0x14 */
+    U16                                 DevHandle;                  /* 0x18 */
+    U16                                 ParentDevHandle;            /* 0x1A */
+    U16                                 ExpanderChangeCount;        /* 0x1C */
+    U16                                 ExpanderRouteIndexes;       /* 0x1E */
+    U8                                  NumPhys;                    /* 0x20 */
+    U8                                  SASLevel;                   /* 0x21 */
+    U16                                 Flags;                      /* 0x22 */
+    U16                                 STPBusInactivityTimeLimit;  /* 0x24 */
+    U16                                 STPMaxConnectTimeLimit;     /* 0x26 */
+    U16                                 STP_SMP_NexusLossTime;      /* 0x28 */
+    U16                                 MaxNumRoutedSasAddresses;   /* 0x2A */
+    U64                                 ActiveZoneManagerSASAddress;/* 0x2C */
+    U16                                 ZoneLockInactivityLimit;    /* 0x34 */
+    U16                                 Reserved1;                  /* 0x36 */
+} MPI2_CONFIG_PAGE_EXPANDER_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_EXPANDER_0,
+  Mpi2ExpanderPage0_t, MPI2_POINTER pMpi2ExpanderPage0_t;
+
+#define MPI2_SASEXPANDER0_PAGEVERSION       (0x05)
+
+/* values for SAS Expander Page 0 DiscoveryStatus field */
+#define MPI2_SAS_EXPANDER0_DS_MAX_ENCLOSURES_EXCEED         (0x80000000)
+#define MPI2_SAS_EXPANDER0_DS_MAX_EXPANDERS_EXCEED          (0x40000000)
+#define MPI2_SAS_EXPANDER0_DS_MAX_DEVICES_EXCEED            (0x20000000)
+#define MPI2_SAS_EXPANDER0_DS_MAX_TOPO_PHYS_EXCEED          (0x10000000)
+#define MPI2_SAS_EXPANDER0_DS_DOWNSTREAM_INITIATOR          (0x08000000)
+#define MPI2_SAS_EXPANDER0_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE (0x00008000)
+#define MPI2_SAS_EXPANDER0_DS_EXP_MULTI_SUBTRACTIVE         (0x00004000)
+#define MPI2_SAS_EXPANDER0_DS_MULTI_PORT_DOMAIN             (0x00002000)
+#define MPI2_SAS_EXPANDER0_DS_TABLE_TO_SUBTRACTIVE_LINK     (0x00001000)
+#define MPI2_SAS_EXPANDER0_DS_UNSUPPORTED_DEVICE            (0x00000800)
+#define MPI2_SAS_EXPANDER0_DS_TABLE_LINK                    (0x00000400)
+#define MPI2_SAS_EXPANDER0_DS_SUBTRACTIVE_LINK              (0x00000200)
+#define MPI2_SAS_EXPANDER0_DS_SMP_CRC_ERROR                 (0x00000100)
+#define MPI2_SAS_EXPANDER0_DS_SMP_FUNCTION_FAILED           (0x00000080)
+#define MPI2_SAS_EXPANDER0_DS_INDEX_NOT_EXIST               (0x00000040)
+#define MPI2_SAS_EXPANDER0_DS_OUT_ROUTE_ENTRIES             (0x00000020)
+#define MPI2_SAS_EXPANDER0_DS_SMP_TIMEOUT                   (0x00000010)
+#define MPI2_SAS_EXPANDER0_DS_MULTIPLE_PORTS                (0x00000004)
+#define MPI2_SAS_EXPANDER0_DS_UNADDRESSABLE_DEVICE          (0x00000002)
+#define MPI2_SAS_EXPANDER0_DS_LOOP_DETECTED                 (0x00000001)
+
+/* values for SAS Expander Page 0 Flags field */
+#define MPI2_SAS_EXPANDER0_FLAGS_ZONE_LOCKED                (0x1000)
+#define MPI2_SAS_EXPANDER0_FLAGS_SUPPORTED_PHYSICAL_PRES    (0x0800)
+#define MPI2_SAS_EXPANDER0_FLAGS_ASSERTED_PHYSICAL_PRES     (0x0400)
+#define MPI2_SAS_EXPANDER0_FLAGS_ZONING_SUPPORT             (0x0200)
+#define MPI2_SAS_EXPANDER0_FLAGS_ENABLED_ZONING             (0x0100)
+#define MPI2_SAS_EXPANDER0_FLAGS_TABLE_TO_TABLE_SUPPORT     (0x0080)
+#define MPI2_SAS_EXPANDER0_FLAGS_CONNECTOR_END_DEVICE       (0x0010)
+#define MPI2_SAS_EXPANDER0_FLAGS_OTHERS_CONFIG              (0x0004)
+#define MPI2_SAS_EXPANDER0_FLAGS_CONFIG_IN_PROGRESS         (0x0002)
+#define MPI2_SAS_EXPANDER0_FLAGS_ROUTE_TABLE_CONFIG         (0x0001)
+
+
+/* SAS Expander Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_EXPANDER_1
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U8                                  PhysicalPort;               /* 0x08 */
+    U8                                  Reserved1;                  /* 0x09 */
+    U16                                 Reserved2;                  /* 0x0A */
+    U8                                  NumPhys;                    /* 0x0C */
+    U8                                  Phy;                        /* 0x0D */
+    U16                                 NumTableEntriesProgrammed;  /* 0x0E */
+    U8                                  ProgrammedLinkRate;         /* 0x10 */
+    U8                                  HwLinkRate;                 /* 0x11 */
+    U16                                 AttachedDevHandle;          /* 0x12 */
+    U32                                 PhyInfo;                    /* 0x14 */
+    U32                                 AttachedDeviceInfo;         /* 0x18 */
+    U16                                 ExpanderDevHandle;          /* 0x1C */
+    U8                                  ChangeCount;                /* 0x1E */
+    U8                                  NegotiatedLinkRate;         /* 0x1F */
+    U8                                  PhyIdentifier;              /* 0x20 */
+    U8                                  AttachedPhyIdentifier;      /* 0x21 */
+    U8                                  Reserved3;                  /* 0x22 */
+    U8                                  DiscoveryInfo;              /* 0x23 */
+    U32                                 AttachedPhyInfo;            /* 0x24 */
+    U8                                  ZoneGroup;                  /* 0x28 */
+    U8                                  SelfConfigStatus;           /* 0x29 */
+    U16                                 Reserved4;                  /* 0x2A */
+} MPI2_CONFIG_PAGE_EXPANDER_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_EXPANDER_1,
+  Mpi2ExpanderPage1_t, MPI2_POINTER pMpi2ExpanderPage1_t;
+
+#define MPI2_SASEXPANDER1_PAGEVERSION       (0x02)
+
+/* use MPI2_SAS_PRATE_ defines for the ProgrammedLinkRate field */
+
+/* use MPI2_SAS_HWRATE_ defines for the HwLinkRate field */
+
+/* use MPI2_SAS_PHYINFO_ defines for the PhyInfo field */
+
+/* see mpi2_sas.h for the MPI2_SAS_DEVICE_INFO_ defines used for the AttachedDeviceInfo field */
+
+/* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
+
+/* use MPI2_SAS_APHYINFO_ defines for AttachedPhyInfo field */
+
+/* values for SAS Expander Page 1 DiscoveryInfo field */
+#define MPI2_SAS_EXPANDER1_DISCINFO_BAD_PHY_DISABLED    (0x04)
+#define MPI2_SAS_EXPANDER1_DISCINFO_LINK_STATUS_CHANGE  (0x02)
+#define MPI2_SAS_EXPANDER1_DISCINFO_NO_ROUTING_ENTRIES  (0x01)
+
+
+/****************************************************************************
+*   SAS Device Config Pages
+****************************************************************************/
+
+/* SAS Device Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_DEV_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                 /* 0x00 */
+    U16                                 Slot;                   /* 0x08 */
+    U16                                 EnclosureHandle;        /* 0x0A */
+    U64                                 SASAddress;             /* 0x0C */
+    U16                                 ParentDevHandle;        /* 0x14 */
+    U8                                  PhyNum;                 /* 0x16 */
+    U8                                  AccessStatus;           /* 0x17 */
+    U16                                 DevHandle;              /* 0x18 */
+    U8                                  AttachedPhyIdentifier;  /* 0x1A */
+    U8                                  ZoneGroup;              /* 0x1B */
+    U32                                 DeviceInfo;             /* 0x1C */
+    U16                                 Flags;                  /* 0x20 */
+    U8                                  PhysicalPort;           /* 0x22 */
+    U8                                  MaxPortConnections;     /* 0x23 */
+    U64                                 DeviceName;             /* 0x24 */
+    U8                                  PortGroups;             /* 0x2C */
+    U8                                  DmaGroup;               /* 0x2D */
+    U8                                  ControlGroup;           /* 0x2E */
+    U8                                  Reserved1;              /* 0x2F */
+    U32                                 Reserved2;              /* 0x30 */
+    U32                                 Reserved3;              /* 0x34 */
+} MPI2_CONFIG_PAGE_SAS_DEV_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_DEV_0,
+  Mpi2SasDevicePage0_t, MPI2_POINTER pMpi2SasDevicePage0_t;
+
+#define MPI2_SASDEVICE0_PAGEVERSION         (0x08)
+
+/* values for SAS Device Page 0 AccessStatus field */
+#define MPI2_SAS_DEVICE0_ASTATUS_NO_ERRORS                  (0x00)
+#define MPI2_SAS_DEVICE0_ASTATUS_SATA_INIT_FAILED           (0x01)
+#define MPI2_SAS_DEVICE0_ASTATUS_SATA_CAPABILITY_FAILED     (0x02)
+#define MPI2_SAS_DEVICE0_ASTATUS_SATA_AFFILIATION_CONFLICT  (0x03)
+#define MPI2_SAS_DEVICE0_ASTATUS_SATA_NEEDS_INITIALIZATION  (0x04)
+#define MPI2_SAS_DEVICE0_ASTATUS_ROUTE_NOT_ADDRESSABLE      (0x05)
+#define MPI2_SAS_DEVICE0_ASTATUS_SMP_ERROR_NOT_ADDRESSABLE  (0x06)
+#define MPI2_SAS_DEVICE0_ASTATUS_DEVICE_BLOCKED             (0x07)
+/* specific values for SATA Init failures */
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_UNKNOWN                (0x10)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_AFFILIATION_CONFLICT   (0x11)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_DIAG                   (0x12)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_IDENTIFICATION         (0x13)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_CHECK_POWER            (0x14)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_PIO_SN                 (0x15)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_MDMA_SN                (0x16)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_UDMA_SN                (0x17)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_ZONING_VIOLATION       (0x18)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_NOT_ADDRESSABLE        (0x19)
+#define MPI2_SAS_DEVICE0_ASTATUS_SIF_MAX                    (0x1F)
+
+/* see mpi2_sas.h for SAS Device Page 0 DeviceInfo values */
+
+/* values for SAS Device Page 0 Flags field */
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_ASYNCHRONOUS_NOTIFY     (0x0400)
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_SW_PRESERVE             (0x0200)
+#define MPI2_SAS_DEVICE0_FLAGS_UNSUPPORTED_DEVICE           (0x0100)
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_48BIT_LBA_SUPPORTED     (0x0080)
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED         (0x0040)
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_NCQ_SUPPORTED           (0x0020)
+#define MPI2_SAS_DEVICE0_FLAGS_SATA_FUA_SUPPORTED           (0x0010)
+#define MPI2_SAS_DEVICE0_FLAGS_PORT_SELECTOR_ATTACH         (0x0008)
+#define MPI2_SAS_DEVICE0_FLAGS_DEVICE_PRESENT               (0x0001)
+
+
+/* SAS Device Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_DEV_1
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                 /* 0x00 */
+    U32                                 Reserved1;              /* 0x08 */
+    U64                                 SASAddress;             /* 0x0C */
+    U32                                 Reserved2;              /* 0x14 */
+    U16                                 DevHandle;              /* 0x18 */
+    U16                                 Reserved3;              /* 0x1A */
+    U8                                  InitialRegDeviceFIS[20];/* 0x1C */
+} MPI2_CONFIG_PAGE_SAS_DEV_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_DEV_1,
+  Mpi2SasDevicePage1_t, MPI2_POINTER pMpi2SasDevicePage1_t;
+
+#define MPI2_SASDEVICE1_PAGEVERSION         (0x01)
+
+
+/****************************************************************************
+*   SAS PHY Config Pages
+****************************************************************************/
+
+/* SAS PHY Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                 /* 0x00 */
+    U16                                 OwnerDevHandle;         /* 0x08 */
+    U16                                 Reserved1;              /* 0x0A */
+    U16                                 AttachedDevHandle;      /* 0x0C */
+    U8                                  AttachedPhyIdentifier;  /* 0x0E */
+    U8                                  Reserved2;              /* 0x0F */
+    U32                                 AttachedPhyInfo;        /* 0x10 */
+    U8                                  ProgrammedLinkRate;     /* 0x14 */
+    U8                                  HwLinkRate;             /* 0x15 */
+    U8                                  ChangeCount;            /* 0x16 */
+    U8                                  Flags;                  /* 0x17 */
+    U32                                 PhyInfo;                /* 0x18 */
+    U8                                  NegotiatedLinkRate;     /* 0x1C */
+    U8                                  Reserved3;              /* 0x1D */
+    U16                                 Reserved4;              /* 0x1E */
+} MPI2_CONFIG_PAGE_SAS_PHY_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_0,
+  Mpi2SasPhyPage0_t, MPI2_POINTER pMpi2SasPhyPage0_t;
+
+#define MPI2_SASPHY0_PAGEVERSION            (0x03)
+
+/* use MPI2_SAS_PRATE_ defines for the ProgrammedLinkRate field */
+
+/* use MPI2_SAS_HWRATE_ defines for the HwLinkRate field */
+
+/* values for SAS PHY Page 0 Flags field */
+#define MPI2_SAS_PHY0_FLAGS_SGPIO_DIRECT_ATTACH_ENC             (0x01)
+
+/* use MPI2_SAS_APHYINFO_ defines for AttachedPhyInfo field */
+
+/* use MPI2_SAS_NEG_LINK_RATE_ defines for the NegotiatedLinkRate field */
+
+/* use MPI2_SAS_PHYINFO_ defines for the PhyInfo field */
+
+
+/* SAS PHY Page 1 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_PHY_1
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U32                                 Reserved1;                  /* 0x08 */
+    U32                                 InvalidDwordCount;          /* 0x0C */
+    U32                                 RunningDisparityErrorCount; /* 0x10 */
+    U32                                 LossDwordSynchCount;        /* 0x14 */
+    U32                                 PhyResetProblemCount;       /* 0x18 */
+} MPI2_CONFIG_PAGE_SAS_PHY_1, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PHY_1,
+  Mpi2SasPhyPage1_t, MPI2_POINTER pMpi2SasPhyPage1_t;
+
+#define MPI2_SASPHY1_PAGEVERSION            (0x01)
+
+
+/****************************************************************************
+*   SAS Port Config Pages
+****************************************************************************/
+
+/* SAS Port Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_PORT_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U8                                  PortNumber;                 /* 0x08 */
+    U8                                  PhysicalPort;               /* 0x09 */
+    U8                                  PortWidth;                  /* 0x0A */
+    U8                                  PhysicalPortWidth;          /* 0x0B */
+    U8                                  ZoneGroup;                  /* 0x0C */
+    U8                                  Reserved1;                  /* 0x0D */
+    U16                                 Reserved2;                  /* 0x0E */
+    U64                                 SASAddress;                 /* 0x10 */
+    U32                                 DeviceInfo;                 /* 0x18 */
+    U32                                 Reserved3;                  /* 0x1C */
+    U32                                 Reserved4;                  /* 0x20 */
+} MPI2_CONFIG_PAGE_SAS_PORT_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_PORT_0,
+  Mpi2SasPortPage0_t, MPI2_POINTER pMpi2SasPortPage0_t;
+
+#define MPI2_SASPORT0_PAGEVERSION           (0x00)
+
+/* see mpi2_sas.h for SAS Port Page 0 DeviceInfo values */
+
+
+/****************************************************************************
+*   SAS Enclosure Config Pages
+****************************************************************************/
+
+/* SAS Enclosure Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U32                                 Reserved1;                  /* 0x08 */
+    U64                                 EnclosureLogicalID;         /* 0x0C */
+    U16                                 Flags;                      /* 0x14 */
+    U16                                 EnclosureHandle;            /* 0x16 */
+    U16                                 NumSlots;                   /* 0x18 */
+    U16                                 StartSlot;                  /* 0x1A */
+    U16                                 Reserved2;                  /* 0x1C */
+    U16                                 SEPDevHandle;               /* 0x1E */
+    U32                                 Reserved3;                  /* 0x20 */
+    U32                                 Reserved4;                  /* 0x24 */
+} MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_SAS_ENCLOSURE_0,
+  Mpi2SasEnclosurePage0_t, MPI2_POINTER pMpi2SasEnclosurePage0_t;
+
+#define MPI2_SASENCLOSURE0_PAGEVERSION      (0x03)
+
+/* values for SAS Enclosure Page 0 Flags field */
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_MASK              (0x000F)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_UNKNOWN           (0x0000)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SES           (0x0001)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_SGPIO         (0x0002)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_EXP_SGPIO         (0x0003)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_SES_ENCLOSURE     (0x0004)
+#define MPI2_SAS_ENCLS0_FLAGS_MNG_IOC_GPIO          (0x0005)
+
+
+/****************************************************************************
+*   Log Config Page
+****************************************************************************/
+
+/* Log Page 0 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.ExtPageLength or NumLogEntries at runtime (an
+ * illustrative walk follows the page definition below).
+ */
+#ifndef MPI2_LOG_0_NUM_LOG_ENTRIES
+#define MPI2_LOG_0_NUM_LOG_ENTRIES          (1)
+#endif
+
+#define MPI2_LOG_0_LOG_DATA_LENGTH          (0x1C)
+
+typedef struct _MPI2_LOG_0_ENTRY
+{
+    U64         TimeStamp;                          /* 0x00 */
+    U32         Reserved1;                          /* 0x08 */
+    U16         LogSequence;                        /* 0x0C */
+    U16         LogEntryQualifier;                  /* 0x0E */
+    U8          VP_ID;                              /* 0x10 */
+    U8          VF_ID;                              /* 0x11 */
+    U16         Reserved2;                          /* 0x12 */
+    U8          LogData[MPI2_LOG_0_LOG_DATA_LENGTH];/* 0x14 */
+} MPI2_LOG_0_ENTRY, MPI2_POINTER PTR_MPI2_LOG_0_ENTRY,
+  Mpi2Log0Entry_t, MPI2_POINTER pMpi2Log0Entry_t;
+
+/* values for Log Page 0 LogEntry LogEntryQualifier field */
+#define MPI2_LOG_0_ENTRY_QUAL_ENTRY_UNUSED          (0x0000)
+#define MPI2_LOG_0_ENTRY_QUAL_POWER_ON_RESET        (0x0001)
+#define MPI2_LOG_0_ENTRY_QUAL_TIMESTAMP_UPDATE      (0x0002)
+#define MPI2_LOG_0_ENTRY_QUAL_MIN_IMPLEMENT_SPEC    (0x8000)
+#define MPI2_LOG_0_ENTRY_QUAL_MAX_IMPLEMENT_SPEC    (0xFFFF)
+
+typedef struct _MPI2_CONFIG_PAGE_LOG_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U32                                 Reserved1;                  /* 0x08 */
+    U32                                 Reserved2;                  /* 0x0C */
+    U16                                 NumLogEntries;              /* 0x10 */
+    U16                                 Reserved3;                  /* 0x12 */
+    MPI2_LOG_0_ENTRY                    LogEntry[MPI2_LOG_0_NUM_LOG_ENTRIES]; /* 0x14 */
+} MPI2_CONFIG_PAGE_LOG_0, MPI2_POINTER PTR_MPI2_CONFIG_PAGE_LOG_0,
+  Mpi2LogPage0_t, MPI2_POINTER pMpi2LogPage0_t;
+
+#define MPI2_LOG_0_PAGEVERSION              (0x02)
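+
+/*
+ * Illustrative sketch (not part of the MPI headers): LogEntry[] is declared
+ * with only MPI2_LOG_0_NUM_LOG_ENTRIES (1) element, so host code that has
+ * read the whole page into a sufficiently large buffer walks the number of
+ * entries actually reported by the IOC.  Byte-order handling is omitted for
+ * brevity; handle_log_entry() is a hypothetical helper.
+ */
+#if 0
+static void example_walk_log_page0(MPI2_CONFIG_PAGE_LOG_0 *page)
+{
+    U16 i;
+
+    for (i = 0; i < page->NumLogEntries; i++)
+        handle_log_entry(&page->LogEntry[i]);
+}
+#endif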
+
+
+/****************************************************************************
+*   RAID Config Page
+****************************************************************************/
+
+/* RAID Page 0 */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check Header.ExtPageLength or NumElements at runtime (an
+ * illustrative walk follows the page definition below).
+ */
+#ifndef MPI2_RAIDCONFIG0_MAX_ELEMENTS
+#define MPI2_RAIDCONFIG0_MAX_ELEMENTS       (1)
+#endif
+
+typedef struct _MPI2_RAIDCONFIG0_CONFIG_ELEMENT
+{
+    U16                     ElementFlags;               /* 0x00 */
+    U16                     VolDevHandle;               /* 0x02 */
+    U8                      HotSparePool;               /* 0x04 */
+    U8                      PhysDiskNum;                /* 0x05 */
+    U16                     PhysDiskDevHandle;          /* 0x06 */
+} MPI2_RAIDCONFIG0_CONFIG_ELEMENT,
+  MPI2_POINTER PTR_MPI2_RAIDCONFIG0_CONFIG_ELEMENT,
+  Mpi2RaidConfig0ConfigElement_t, MPI2_POINTER pMpi2RaidConfig0ConfigElement_t;
+
+/* values for the ElementFlags field */
+#define MPI2_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE       (0x000F)
+#define MPI2_RAIDCONFIG0_EFLAGS_VOLUME_ELEMENT          (0x0000)
+#define MPI2_RAIDCONFIG0_EFLAGS_VOL_PHYS_DISK_ELEMENT   (0x0001)
+#define MPI2_RAIDCONFIG0_EFLAGS_HOT_SPARE_ELEMENT       (0x0002)
+#define MPI2_RAIDCONFIG0_EFLAGS_OCE_ELEMENT             (0x0003)
+
+
+typedef struct _MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    U8                                  NumHotSpares;               /* 0x08 */
+    U8                                  NumPhysDisks;               /* 0x09 */
+    U8                                  NumVolumes;                 /* 0x0A */
+    U8                                  ConfigNum;                  /* 0x0B */
+    U32                                 Flags;                      /* 0x0C */
+    U8                                  ConfigGUID[24];             /* 0x10 */
+    U32                                 Reserved1;                  /* 0x28 */
+    U8                                  NumElements;                /* 0x2C */
+    U8                                  Reserved2;                  /* 0x2D */
+    U16                                 Reserved3;                  /* 0x2E */
+    MPI2_RAIDCONFIG0_CONFIG_ELEMENT     ConfigElement[MPI2_RAIDCONFIG0_MAX_ELEMENTS]; /* 0x30 */
+} MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0,
+  Mpi2RaidConfigurationPage0_t, MPI2_POINTER pMpi2RaidConfigurationPage0_t;
+
+#define MPI2_RAIDCONFIG0_PAGEVERSION            (0x00)
+
+/* values for RAID Configuration Page 0 Flags field */
+#define MPI2_RAIDCONFIG0_FLAG_FOREIGN_CONFIG        (0x00000001)
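+
+/*
+ * Illustrative sketch (not part of the MPI headers): ConfigElement[] is also
+ * sized to one element at build time, so host code walks NumElements entries
+ * and classifies each one by the element-type bits of ElementFlags.  The
+ * handle_*() callbacks are hypothetical.
+ */
+#if 0
+static void example_walk_raid_config0(MPI2_CONFIG_PAGE_RAID_CONFIGURATION_0 *page)
+{
+    U8 i;
+
+    for (i = 0; i < page->NumElements; i++) {
+        MPI2_RAIDCONFIG0_CONFIG_ELEMENT *element = &page->ConfigElement[i];
+
+        switch (element->ElementFlags &
+                MPI2_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE) {
+        case MPI2_RAIDCONFIG0_EFLAGS_VOLUME_ELEMENT:
+            handle_volume(element->VolDevHandle);
+            break;
+        case MPI2_RAIDCONFIG0_EFLAGS_HOT_SPARE_ELEMENT:
+            handle_hot_spare(element->PhysDiskNum);
+            break;
+        default:
+            break;
+        }
+    }
+}
+#endif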
+
+
+/****************************************************************************
+*   Driver Persistent Mapping Config Pages
+****************************************************************************/
+
+/* Driver Persistent Mapping Page 0 */
+
+typedef struct _MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY
+{
+    U64                                 PhysicalIdentifier;         /* 0x00 */
+    U16                                 MappingInformation;         /* 0x08 */
+    U16                                 DeviceIndex;                /* 0x0A */
+    U32                                 PhysicalBitsMapping;        /* 0x0C */
+    U32                                 Reserved1;                  /* 0x10 */
+} MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY,
+  Mpi2DriverMap0Entry_t, MPI2_POINTER pMpi2DriverMap0Entry_t;
+
+typedef struct _MPI2_CONFIG_PAGE_DRIVER_MAPPING_0
+{
+    MPI2_CONFIG_EXTENDED_PAGE_HEADER    Header;                     /* 0x00 */
+    MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY  Entry;                      /* 0x08 */
+} MPI2_CONFIG_PAGE_DRIVER_MAPPING_0,
+  MPI2_POINTER PTR_MPI2_CONFIG_PAGE_DRIVER_MAPPING_0,
+  Mpi2DriverMappingPage0_t, MPI2_POINTER pMpi2DriverMappingPage0_t;
+
+#define MPI2_DRIVERMAPPING0_PAGEVERSION         (0x00)
+
+/* values for Driver Persistent Mapping Page 0 MappingInformation field */
+#define MPI2_DRVMAP0_MAPINFO_SLOT_MASK              (0x07F0)
+#define MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT             (4)
+#define MPI2_DRVMAP0_MAPINFO_MISSING_MASK           (0x000F)
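+
+/*
+ * Illustrative sketch (not part of the MPI headers): the slot number is
+ * packed into MappingInformation and is recovered with the mask and shift
+ * defines above.
+ */
+#if 0
+static U16 example_drvmap0_slot(MPI2_CONFIG_PAGE_DRIVER_MAP0_ENTRY *entry)
+{
+    return (entry->MappingInformation & MPI2_DRVMAP0_MAPINFO_SLOT_MASK) >>
+            MPI2_DRVMAP0_MAPINFO_SLOT_SHIFT;
+}
+#endif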
+
+
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_init.h b/drivers/scsi/mpt2sas/mpi/mpi2_init.h
new file mode 100644
index 0000000..f1115f0
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_init.h
@@ -0,0 +1,420 @@
+/*
+ *  Copyright (c) 2000-2008 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_init.h
+ *          Title:  MPI SCSI initiator mode messages and structures
+ *  Creation Date:  June 23, 2006
+ *
+ *    mpi2_init.h Version:  02.00.06
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  10-31-07  02.00.01  Fixed name for pMpi2SCSITaskManagementRequest_t.
+ *  12-18-07  02.00.02  Modified Task Management Target Reset Method defines.
+ *  02-29-08  02.00.03  Added Query Task Set and Query Unit Attention.
+ *  03-03-08  02.00.04  Fixed name of struct _MPI2_SCSI_TASK_MANAGE_REPLY.
+ *  05-21-08  02.00.05  Fixed typo in name of Mpi2SepRequest_t.
+ *  10-02-08  02.00.06  Removed Untagged and No Disconnect values from SCSI IO
+ *                      Control field Task Attribute flags.
+ *                      Moved LUN field defines to mpi2.h because they are
+ *                      common to many structures.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_INIT_H
+#define MPI2_INIT_H
+
+/*****************************************************************************
+*
+*               SCSI Initiator Messages
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  SCSI IO messages and associated structures
+****************************************************************************/
+
+typedef struct
+{
+    U8                      CDB[20];                    /* 0x00 */
+    U32                     PrimaryReferenceTag;        /* 0x14 */
+    U16                     PrimaryApplicationTag;      /* 0x18 */
+    U16                     PrimaryApplicationTagMask;  /* 0x1A */
+    U32                     TransferLength;             /* 0x1C */
+} MPI2_SCSI_IO_CDB_EEDP32, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_EEDP32,
+  Mpi2ScsiIoCdbEedp32_t, MPI2_POINTER pMpi2ScsiIoCdbEedp32_t;
+
+/* TBD: I don't think this is needed for MPI2/Gen2 */
+#if 0
+typedef struct
+{
+    U8                      CDB[16];                    /* 0x00 */
+    U32                     DataLength;                 /* 0x10 */
+    U32                     PrimaryReferenceTag;        /* 0x14 */
+    U16                     PrimaryApplicationTag;      /* 0x18 */
+    U16                     PrimaryApplicationTagMask;  /* 0x1A */
+    U32                     TransferLength;             /* 0x1C */
+} MPI2_SCSI_IO32_CDB_EEDP16, MPI2_POINTER PTR_MPI2_SCSI_IO32_CDB_EEDP16,
+  Mpi2ScsiIo32CdbEedp16_t, MPI2_POINTER pMpi2ScsiIo32CdbEedp16_t;
+#endif
+
+typedef union
+{
+    U8                      CDB32[32];
+    MPI2_SCSI_IO_CDB_EEDP32 EEDP32;
+    MPI2_SGE_SIMPLE_UNION   SGE;
+} MPI2_SCSI_IO_CDB_UNION, MPI2_POINTER PTR_MPI2_SCSI_IO_CDB_UNION,
+  Mpi2ScsiIoCdb_t, MPI2_POINTER pMpi2ScsiIoCdb_t;
+
+/* SCSI IO Request Message */
+typedef struct _MPI2_SCSI_IO_REQUEST
+{
+    U16                     DevHandle;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved1;                      /* 0x04 */
+    U8                      Reserved2;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved3;                      /* 0x0A */
+    U32                     SenseBufferLowAddress;          /* 0x0C */
+    U16                     SGLFlags;                       /* 0x10 */
+    U8                      SenseBufferLength;              /* 0x12 */
+    U8                      Reserved4;                      /* 0x13 */
+    U8                      SGLOffset0;                     /* 0x14 */
+    U8                      SGLOffset1;                     /* 0x15 */
+    U8                      SGLOffset2;                     /* 0x16 */
+    U8                      SGLOffset3;                     /* 0x17 */
+    U32                     SkipCount;                      /* 0x18 */
+    U32                     DataLength;                     /* 0x1C */
+    U32                     BidirectionalDataLength;        /* 0x20 */
+    U16                     IoFlags;                        /* 0x24 */
+    U16                     EEDPFlags;                      /* 0x26 */
+    U32                     EEDPBlockSize;                  /* 0x28 */
+    U32                     SecondaryReferenceTag;          /* 0x2C */
+    U16                     SecondaryApplicationTag;        /* 0x30 */
+    U16                     ApplicationTagTranslationMask;  /* 0x32 */
+    U8                      LUN[8];                         /* 0x34 */
+    U32                     Control;                        /* 0x3C */
+    MPI2_SCSI_IO_CDB_UNION  CDB;                            /* 0x40 */
+    MPI2_SGE_IO_UNION       SGL;                            /* 0x60 */
+} MPI2_SCSI_IO_REQUEST, MPI2_POINTER PTR_MPI2_SCSI_IO_REQUEST,
+  Mpi2SCSIIORequest_t, MPI2_POINTER pMpi2SCSIIORequest_t;
+
+/* SCSI IO MsgFlags bits */
+
+/* MsgFlags for SenseBufferAddressSpace */
+#define MPI2_SCSIIO_MSGFLAGS_MASK_SENSE_ADDR        (0x0C)
+#define MPI2_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR      (0x00)
+#define MPI2_SCSIIO_MSGFLAGS_IOCDDR_SENSE_ADDR      (0x04)
+#define MPI2_SCSIIO_MSGFLAGS_IOCPLB_SENSE_ADDR      (0x08)
+#define MPI2_SCSIIO_MSGFLAGS_IOCPLBNTA_SENSE_ADDR   (0x0C)
+
+/* SCSI IO SGLFlags bits */
+
+/* base values for Data Location Address Space */
+#define MPI2_SCSIIO_SGLFLAGS_ADDR_MASK              (0x0C)
+#define MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR            (0x00)
+#define MPI2_SCSIIO_SGLFLAGS_IOCDDR_ADDR            (0x04)
+#define MPI2_SCSIIO_SGLFLAGS_IOCPLB_ADDR            (0x08)
+#define MPI2_SCSIIO_SGLFLAGS_IOCPLBNTA_ADDR         (0x0C)
+
+/* base values for Type */
+#define MPI2_SCSIIO_SGLFLAGS_TYPE_MASK              (0x03)
+#define MPI2_SCSIIO_SGLFLAGS_TYPE_MPI               (0x00)
+#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE32            (0x01)
+#define MPI2_SCSIIO_SGLFLAGS_TYPE_IEEE64            (0x02)
+
+/* shift values for each sub-field */
+#define MPI2_SCSIIO_SGLFLAGS_SGL3_SHIFT             (12)
+#define MPI2_SCSIIO_SGLFLAGS_SGL2_SHIFT             (8)
+#define MPI2_SCSIIO_SGLFLAGS_SGL1_SHIFT             (4)
+#define MPI2_SCSIIO_SGLFLAGS_SGL0_SHIFT             (0)
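+
+/*
+ * Illustrative sketch (not part of the MPI headers): each SGL gets a 4-bit
+ * sub-field in SGLFlags, built from one address-space value OR'd with one
+ * type value and shifted into position with the SGLn shift defines above,
+ * e.g. example_sglflags_subfield(MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR,
+ * MPI2_SCSIIO_SGLFLAGS_TYPE_MPI, MPI2_SCSIIO_SGLFLAGS_SGL0_SHIFT).
+ */
+#if 0
+static U16 example_sglflags_subfield(U16 addr_space, U16 type, U16 shift)
+{
+    return (addr_space | type) << shift;
+}
+#endif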
+
+/* SCSI IO IoFlags bits */
+
+/* Large CDB Address Space */
+#define MPI2_SCSIIO_CDB_ADDR_MASK                   (0x6000)
+#define MPI2_SCSIIO_CDB_ADDR_SYSTEM                 (0x0000)
+#define MPI2_SCSIIO_CDB_ADDR_IOCDDR                 (0x2000)
+#define MPI2_SCSIIO_CDB_ADDR_IOCPLB                 (0x4000)
+#define MPI2_SCSIIO_CDB_ADDR_IOCPLBNTA              (0x6000)
+
+#define MPI2_SCSIIO_IOFLAGS_LARGE_CDB               (0x1000)
+#define MPI2_SCSIIO_IOFLAGS_BIDIRECTIONAL           (0x0800)
+#define MPI2_SCSIIO_IOFLAGS_MULTICAST               (0x0400)
+#define MPI2_SCSIIO_IOFLAGS_CMD_DETERMINES_DATA_DIR (0x0200)
+#define MPI2_SCSIIO_IOFLAGS_CDBLENGTH_MASK          (0x01FF)
+
+/* SCSI IO EEDPFlags bits */
+
+#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG        (0x8000)
+#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_REFTAG        (0x4000)
+#define MPI2_SCSIIO_EEDPFLAGS_INC_PRI_APPTAG        (0x2000)
+#define MPI2_SCSIIO_EEDPFLAGS_INC_SEC_APPTAG        (0x1000)
+
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG          (0x0400)
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_APPTAG          (0x0200)
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD           (0x0100)
+
+#define MPI2_SCSIIO_EEDPFLAGS_PASSTHRU_REFTAG       (0x0008)
+
+#define MPI2_SCSIIO_EEDPFLAGS_MASK_OP               (0x0007)
+#define MPI2_SCSIIO_EEDPFLAGS_NOOP_OP               (0x0000)
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_OP              (0x0001)
+#define MPI2_SCSIIO_EEDPFLAGS_STRIP_OP              (0x0002)
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REMOVE_OP       (0x0003)
+#define MPI2_SCSIIO_EEDPFLAGS_INSERT_OP             (0x0004)
+#define MPI2_SCSIIO_EEDPFLAGS_REPLACE_OP            (0x0006)
+#define MPI2_SCSIIO_EEDPFLAGS_CHECK_REGEN_OP        (0x0007)
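+
+/*
+ * Illustrative sketch (not part of the MPI headers): the low bits of
+ * EEDPFlags select one EEDP operation and the CHECK_/INC_ bits qualify it.
+ * One plausible combination checks the guard and reference tags while
+ * incrementing the primary reference tag per block.
+ */
+#if 0
+static U16 example_eedpflags_check(void)
+{
+    return MPI2_SCSIIO_EEDPFLAGS_CHECK_OP |
+           MPI2_SCSIIO_EEDPFLAGS_CHECK_REFTAG |
+           MPI2_SCSIIO_EEDPFLAGS_CHECK_GUARD |
+           MPI2_SCSIIO_EEDPFLAGS_INC_PRI_REFTAG;
+}
+#endif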
+
+/* SCSI IO LUN fields: use MPI2_LUN_ from mpi2.h */
+
+/* SCSI IO Control bits */
+#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_MASK      (0xFC000000)
+#define MPI2_SCSIIO_CONTROL_ADDCDBLEN_SHIFT     (26)
+
+#define MPI2_SCSIIO_CONTROL_DATADIRECTION_MASK  (0x03000000)
+#define MPI2_SCSIIO_CONTROL_NODATATRANSFER      (0x00000000)
+#define MPI2_SCSIIO_CONTROL_WRITE               (0x01000000)
+#define MPI2_SCSIIO_CONTROL_READ                (0x02000000)
+#define MPI2_SCSIIO_CONTROL_BIDIRECTIONAL       (0x03000000)
+
+#define MPI2_SCSIIO_CONTROL_TASKPRI_MASK        (0x00007800)
+#define MPI2_SCSIIO_CONTROL_TASKPRI_SHIFT       (11)
+
+#define MPI2_SCSIIO_CONTROL_TASKATTRIBUTE_MASK  (0x00000700)
+#define MPI2_SCSIIO_CONTROL_SIMPLEQ             (0x00000000)
+#define MPI2_SCSIIO_CONTROL_HEADOFQ             (0x00000100)
+#define MPI2_SCSIIO_CONTROL_ORDEREDQ            (0x00000200)
+#define MPI2_SCSIIO_CONTROL_ACAQ                (0x00000400)
+
+#define MPI2_SCSIIO_CONTROL_TLR_MASK            (0x000000C0)
+#define MPI2_SCSIIO_CONTROL_NO_TLR              (0x00000000)
+#define MPI2_SCSIIO_CONTROL_TLR_ON              (0x00000040)
+#define MPI2_SCSIIO_CONTROL_TLR_OFF             (0x00000080)
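+
+/*
+ * Illustrative sketch (not part of the MPI headers): the Control field is
+ * built by OR-ing one value from each group of defines above, e.g. a read
+ * with a simple-queue task attribute and TLR left off.
+ */
+#if 0
+static U32 example_scsiio_control_simple_read(void)
+{
+    return MPI2_SCSIIO_CONTROL_READ |
+           MPI2_SCSIIO_CONTROL_SIMPLEQ |
+           MPI2_SCSIIO_CONTROL_NO_TLR;
+}
+#endif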
+
+
+/* SCSI IO Error Reply Message */
+typedef struct _MPI2_SCSI_IO_REPLY
+{
+    U16                     DevHandle;                      /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved1;                      /* 0x04 */
+    U8                      Reserved2;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved3;                      /* 0x0A */
+    U8                      SCSIStatus;                     /* 0x0C */
+    U8                      SCSIState;                      /* 0x0D */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+    U32                     TransferCount;                  /* 0x14 */
+    U32                     SenseCount;                     /* 0x18 */
+    U32                     ResponseInfo;                   /* 0x1C */
+    U16                     TaskTag;                        /* 0x20 */
+    U16                     Reserved4;                      /* 0x22 */
+    U32                     BidirectionalTransferCount;     /* 0x24 */
+    U32                     Reserved5;                      /* 0x28 */
+    U32                     Reserved6;                      /* 0x2C */
+} MPI2_SCSI_IO_REPLY, MPI2_POINTER PTR_MPI2_SCSI_IO_REPLY,
+  Mpi2SCSIIOReply_t, MPI2_POINTER pMpi2SCSIIOReply_t;
+
+/* SCSI IO Reply SCSIStatus values (SAM-4 status codes) */
+
+#define MPI2_SCSI_STATUS_GOOD                   (0x00)
+#define MPI2_SCSI_STATUS_CHECK_CONDITION        (0x02)
+#define MPI2_SCSI_STATUS_CONDITION_MET          (0x04)
+#define MPI2_SCSI_STATUS_BUSY                   (0x08)
+#define MPI2_SCSI_STATUS_INTERMEDIATE           (0x10)
+#define MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET   (0x14)
+#define MPI2_SCSI_STATUS_RESERVATION_CONFLICT   (0x18)
+#define MPI2_SCSI_STATUS_COMMAND_TERMINATED     (0x22) /* obsolete */
+#define MPI2_SCSI_STATUS_TASK_SET_FULL          (0x28)
+#define MPI2_SCSI_STATUS_ACA_ACTIVE             (0x30)
+#define MPI2_SCSI_STATUS_TASK_ABORTED           (0x40)
+
+/* SCSI IO Reply SCSIState flags */
+
+#define MPI2_SCSI_STATE_RESPONSE_INFO_VALID     (0x10)
+#define MPI2_SCSI_STATE_TERMINATED              (0x08)
+#define MPI2_SCSI_STATE_NO_SCSI_STATUS          (0x04)
+#define MPI2_SCSI_STATE_AUTOSENSE_FAILED        (0x02)
+#define MPI2_SCSI_STATE_AUTOSENSE_VALID         (0x01)
+
+#define MPI2_SCSI_TASKTAG_UNKNOWN               (0xFFFF)
+
+
+/****************************************************************************
+*  SCSI Task Management messages
+****************************************************************************/
+
+/* SCSI Task Management Request Message */
+typedef struct _MPI2_SCSI_TASK_MANAGE_REQUEST
+{
+    U16                     DevHandle;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U8                      Reserved1;                      /* 0x04 */
+    U8                      TaskType;                       /* 0x05 */
+    U8                      Reserved2;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved3;                      /* 0x0A */
+    U8                      LUN[8];                         /* 0x0C */
+    U32                     Reserved4[7];                   /* 0x14 */
+    U16                     TaskMID;                        /* 0x30 */
+    U16                     Reserved5;                      /* 0x32 */
+} MPI2_SCSI_TASK_MANAGE_REQUEST,
+  MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REQUEST,
+  Mpi2SCSITaskManagementRequest_t,
+  MPI2_POINTER pMpi2SCSITaskManagementRequest_t;
+
+/* TaskType values */
+
+#define MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK           (0x01)
+#define MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET        (0x02)
+#define MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET         (0x03)
+#define MPI2_SCSITASKMGMT_TASKTYPE_LOGICAL_UNIT_RESET   (0x05)
+#define MPI2_SCSITASKMGMT_TASKTYPE_CLEAR_TASK_SET       (0x06)
+#define MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK           (0x07)
+#define MPI2_SCSITASKMGMT_TASKTYPE_CLR_ACA              (0x08)
+#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_TASK_SET         (0x09)
+#define MPI2_SCSITASKMGMT_TASKTYPE_QRY_UNIT_ATTENTION   (0x0A)
+
+/* MsgFlags bits */
+
+#define MPI2_SCSITASKMGMT_MSGFLAGS_MASK_TARGET_RESET    (0x18)
+#define MPI2_SCSITASKMGMT_MSGFLAGS_LINK_RESET           (0x00)
+#define MPI2_SCSITASKMGMT_MSGFLAGS_NEXUS_RESET_SRST     (0x08)
+#define MPI2_SCSITASKMGMT_MSGFLAGS_SAS_HARD_LINK_RESET  (0x10)
+
+#define MPI2_SCSITASKMGMT_MSGFLAGS_DO_NOT_SEND_TASK_IU  (0x01)
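+
+/*
+ * Illustrative sketch (not part of the MPI headers): aborting one task fills
+ * DevHandle, TaskType and the TaskMID of the I/O to abort; the LUN field uses
+ * the MPI2_LUN_ defines from mpi2.h, and MPI2_FUNCTION_SCSI_TASK_MGMT is the
+ * Function code defined in mpi2.h.
+ */
+#if 0
+static void example_build_abort_task(MPI2_SCSI_TASK_MANAGE_REQUEST *req,
+    U16 dev_handle, U16 task_mid)
+{
+    memset(req, 0, sizeof(*req));
+    req->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
+    req->DevHandle = dev_handle;
+    req->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK;
+    req->TaskMID = task_mid;
+}
+#endif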
+
+
+
+/* SCSI Task Management Reply Message */
+typedef struct _MPI2_SCSI_TASK_MANAGE_REPLY
+{
+    U16                     DevHandle;                      /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U8                      ResponseCode;                   /* 0x04 */
+    U8                      TaskType;                       /* 0x05 */
+    U8                      Reserved1;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved2;                      /* 0x0A */
+    U16                     Reserved3;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+    U32                     TerminationCount;               /* 0x14 */
+} MPI2_SCSI_TASK_MANAGE_REPLY,
+  MPI2_POINTER PTR_MPI2_SCSI_TASK_MANAGE_REPLY,
+  Mpi2SCSITaskManagementReply_t, MPI2_POINTER pMpi2SCSITaskManagementReply_t;
+
+/* ResponseCode values */
+
+#define MPI2_SCSITASKMGMT_RSP_TM_COMPLETE               (0x00)
+#define MPI2_SCSITASKMGMT_RSP_INVALID_FRAME             (0x02)
+#define MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED          (0x04)
+#define MPI2_SCSITASKMGMT_RSP_TM_FAILED                 (0x05)
+#define MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED              (0x08)
+#define MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN            (0x09)
+#define MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC          (0x80)
+
+
+/****************************************************************************
+*  SCSI Enclosure Processor messages
+****************************************************************************/
+
+/* SCSI Enclosure Processor Request Message */
+typedef struct _MPI2_SEP_REQUEST
+{
+    U16                     DevHandle;          /* 0x00 */
+    U8                      ChainOffset;        /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U8                      Action;             /* 0x04 */
+    U8                      Flags;              /* 0x05 */
+    U8                      Reserved1;          /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved2;          /* 0x0A */
+    U32                     SlotStatus;         /* 0x0C */
+    U32                     Reserved3;          /* 0x10 */
+    U32                     Reserved4;          /* 0x14 */
+    U32                     Reserved5;          /* 0x18 */
+    U16                     Slot;               /* 0x1C */
+    U16                     EnclosureHandle;    /* 0x1E */
+} MPI2_SEP_REQUEST, MPI2_POINTER PTR_MPI2_SEP_REQUEST,
+  Mpi2SepRequest_t, MPI2_POINTER pMpi2SepRequest_t;
+
+/* Action defines */
+#define MPI2_SEP_REQ_ACTION_WRITE_STATUS                (0x00)
+#define MPI2_SEP_REQ_ACTION_READ_STATUS                 (0x01)
+
+/* Flags defines */
+#define MPI2_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS            (0x00)
+#define MPI2_SEP_REQ_FLAGS_ENCLOSURE_SLOT_ADDRESS       (0x01)
+
+/* SlotStatus defines */
+#define MPI2_SEP_REQ_SLOTSTATUS_REQUEST_REMOVE          (0x00040000)
+#define MPI2_SEP_REQ_SLOTSTATUS_IDENTIFY_REQUEST        (0x00020000)
+#define MPI2_SEP_REQ_SLOTSTATUS_REBUILD_STOPPED         (0x00000200)
+#define MPI2_SEP_REQ_SLOTSTATUS_HOT_SPARE               (0x00000100)
+#define MPI2_SEP_REQ_SLOTSTATUS_UNCONFIGURED            (0x00000080)
+#define MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT         (0x00000040)
+#define MPI2_SEP_REQ_SLOTSTATUS_DEV_REBUILDING          (0x00000004)
+#define MPI2_SEP_REQ_SLOTSTATUS_DEV_FAULTY              (0x00000002)
+#define MPI2_SEP_REQ_SLOTSTATUS_NO_ERROR                (0x00000001)
+
+
+/* SCSI Enclosure Processor Reply Message */
+typedef struct _MPI2_SEP_REPLY
+{
+    U16                     DevHandle;          /* 0x00 */
+    U8                      MsgLength;          /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U8                      Action;             /* 0x04 */
+    U8                      Flags;              /* 0x05 */
+    U8                      Reserved1;          /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved2;          /* 0x0A */
+    U16                     Reserved3;          /* 0x0C */
+    U16                     IOCStatus;          /* 0x0E */
+    U32                     IOCLogInfo;         /* 0x10 */
+    U32                     SlotStatus;         /* 0x14 */
+    U32                     Reserved4;          /* 0x18 */
+    U16                     Slot;               /* 0x1C */
+    U16                     EnclosureHandle;    /* 0x1E */
+} MPI2_SEP_REPLY, MPI2_POINTER PTR_MPI2_SEP_REPLY,
+  Mpi2SepReply_t, MPI2_POINTER pMpi2SepReply_t;
+
+/* SlotStatus defines */
+#define MPI2_SEP_REPLY_SLOTSTATUS_REMOVE_READY          (0x00040000)
+#define MPI2_SEP_REPLY_SLOTSTATUS_IDENTIFY_REQUEST      (0x00020000)
+#define MPI2_SEP_REPLY_SLOTSTATUS_REBUILD_STOPPED       (0x00000200)
+#define MPI2_SEP_REPLY_SLOTSTATUS_HOT_SPARE             (0x00000100)
+#define MPI2_SEP_REPLY_SLOTSTATUS_UNCONFIGURED          (0x00000080)
+#define MPI2_SEP_REPLY_SLOTSTATUS_PREDICTED_FAULT       (0x00000040)
+#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_REBUILDING        (0x00000004)
+#define MPI2_SEP_REPLY_SLOTSTATUS_DEV_FAULTY            (0x00000002)
+#define MPI2_SEP_REPLY_SLOTSTATUS_NO_ERROR              (0x00000001)
+
+
+#endif
+
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_ioc.h b/drivers/scsi/mpt2sas/mpi/mpi2_ioc.h
new file mode 100644
index 0000000..8c5d818
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_ioc.h
@@ -0,0 +1,1295 @@
+/*
+ *  Copyright (c) 2000-2009 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_ioc.h
+ *          Title:  MPI IOC, Port, Event, FW Download, and FW Upload messages
+ *  Creation Date:  October 11, 2006
+ *
+ *  mpi2_ioc.h Version:  02.00.10
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  06-04-07  02.00.01  In IOCFacts Reply structure, renamed MaxDevices to
+ *                      MaxTargets.
+ *                      Added TotalImageSize field to FWDownload Request.
+ *                      Added reserved words to FWUpload Request.
+ *  06-26-07  02.00.02  Added IR Configuration Change List Event.
+ *  08-31-07  02.00.03  Removed SystemReplyQueueDepth field from the IOCInit
+ *                      request and replaced it with
+ *                      ReplyDescriptorPostQueueDepth and ReplyFreeQueueDepth.
+ *                      Replaced the MinReplyQueueDepth field of the IOCFacts
+ *                      reply with MaxReplyDescriptorPostQueueDepth.
+ *                      Added MPI2_RDPQ_DEPTH_MIN define to specify the minimum
+ *                      depth for the Reply Descriptor Post Queue.
+ *                      Added SASAddress field to Initiator Device Table
+ *                      Overflow Event data.
+ *  10-31-07  02.00.04  Added ReasonCode MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING
+ *                      for SAS Initiator Device Status Change Event data.
+ *                      Modified Reason Code defines for SAS Topology Change
+ *                      List Event data, including adding a bit for PHY Vacant
+ *                      status, and adding a mask for the Reason Code.
+ *                      Added define for
+ *                      MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING.
+ *                      Added define for MPI2_EXT_IMAGE_TYPE_MEGARAID.
+ *  12-18-07  02.00.05  Added Boot Status defines for the IOCExceptions field of
+ *                      the IOCFacts Reply.
+ *                      Removed MPI2_IOCFACTS_CAPABILITY_EXTENDED_BUFFER define.
+ *                      Moved MPI2_VERSION_UNION to mpi2.h.
+ *                      Changed MPI2_EVENT_NOTIFICATION_REQUEST to use masks
+ *                      instead of enables, and added SASBroadcastPrimitiveMasks
+ *                      field.
+ *                      Added Log Entry Added Event and related structure.
+ *  02-29-08  02.00.06  Added define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID.
+ *                      Removed define MPI2_IOCFACTS_PROTOCOL_SMP_TARGET.
+ *                      Added MaxVolumes and MaxPersistentEntries fields to
+ *                      IOCFacts reply.
+ *                      Added ProtocolFlags and IOCCapabilities fields to
+ *                      MPI2_FW_IMAGE_HEADER.
+ *                      Removed MPI2_PORTENABLE_FLAGS_ENABLE_SINGLE_PORT.
+ *  03-03-08  02.00.07  Fixed MPI2_FW_IMAGE_HEADER by changing Reserved26 to
+ *                      a U16 (from a U32).
+ *                      Removed extra 's' from EventMasks name.
+ *  06-27-08  02.00.08  Fixed an offset in a comment.
+ *  10-02-08  02.00.09  Removed SystemReplyFrameSize from MPI2_IOC_INIT_REQUEST.
+ *                      Removed CurReplyFrameSize from MPI2_IOC_FACTS_REPLY and
+ *                      renamed MinReplyFrameSize to ReplyFrameSize.
+ *                      Added MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX.
+ *                      Added two new RAIDOperation values for Integrated RAID
+ *                      Operations Status Event data.
+ *                      Added four new IR Configuration Change List Event data
+ *                      ReasonCode values.
+ *                      Added two new ReasonCode defines for SAS Device Status
+ *                      Change Event data.
+ *                      Added three new DiscoveryStatus bits for the SAS
+ *                      Discovery event data.
+ *                      Added Multiplexing Status Change bit to the PhyStatus
+ *                      field of the SAS Topology Change List event data.
+ *                      Removed define for MPI2_INIT_IMAGE_BOOTFLAGS_XMEMCOPY.
+ *                      BootFlags are now product-specific.
+ *                      Added defines for the individual signature bytes
+ *                      for MPI2_INIT_IMAGE_FOOTER.
+ *  01-19-09  02.00.10  Added MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY define.
+ *                      Added MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR
+ *                      define.
+ *                      Added MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE
+ *                      define.
+ *                      Removed MPI2_EVENT_SAS_DISC_DS_SATA_INIT_FAILURE define.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_IOC_H
+#define MPI2_IOC_H
+
+/*****************************************************************************
+*
+*               IOC Messages
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  IOCInit message
+****************************************************************************/
+
+/* IOCInit Request message */
+typedef struct _MPI2_IOC_INIT_REQUEST
+{
+    U8                      WhoInit;                        /* 0x00 */
+    U8                      Reserved1;                      /* 0x01 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U16                     MsgVersion;                     /* 0x0C */
+    U16                     HeaderVersion;                  /* 0x0E */
+    U32                     Reserved5;                      /* 0x10 */
+    U32                     Reserved6;                      /* 0x14 */
+    U16                     Reserved7;                      /* 0x18 */
+    U16                     SystemRequestFrameSize;         /* 0x1A */
+    U16                     ReplyDescriptorPostQueueDepth;  /* 0x1C */
+    U16                     ReplyFreeQueueDepth;            /* 0x1E */
+    U32                     SenseBufferAddressHigh;         /* 0x20 */
+    U32                     SystemReplyAddressHigh;         /* 0x24 */
+    U64                     SystemRequestFrameBaseAddress;  /* 0x28 */
+    U64                     ReplyDescriptorPostQueueAddress;/* 0x30 */
+    U64                     ReplyFreeQueueAddress;          /* 0x38 */
+    U64                     TimeStamp;                      /* 0x40 */
+} MPI2_IOC_INIT_REQUEST, MPI2_POINTER PTR_MPI2_IOC_INIT_REQUEST,
+  Mpi2IOCInitRequest_t, MPI2_POINTER pMpi2IOCInitRequest_t;
+
+/* WhoInit values */
+#define MPI2_WHOINIT_NOT_INITIALIZED            (0x00)
+#define MPI2_WHOINIT_SYSTEM_BIOS                (0x01)
+#define MPI2_WHOINIT_ROM_BIOS                   (0x02)
+#define MPI2_WHOINIT_PCI_PEER                   (0x03)
+#define MPI2_WHOINIT_HOST_DRIVER                (0x04)
+#define MPI2_WHOINIT_MANUFACTURER               (0x05)
+
+/* MsgVersion */
+#define MPI2_IOCINIT_MSGVERSION_MAJOR_MASK      (0xFF00)
+#define MPI2_IOCINIT_MSGVERSION_MAJOR_SHIFT     (8)
+#define MPI2_IOCINIT_MSGVERSION_MINOR_MASK      (0x00FF)
+#define MPI2_IOCINIT_MSGVERSION_MINOR_SHIFT     (0)
+
+/* HeaderVersion */
+#define MPI2_IOCINIT_HDRVERSION_UNIT_MASK       (0xFF00)
+#define MPI2_IOCINIT_HDRVERSION_UNIT_SHIFT      (8)
+#define MPI2_IOCINIT_HDRVERSION_DEV_MASK        (0x00FF)
+#define MPI2_IOCINIT_HDRVERSION_DEV_SHIFT       (0)
+
+/* minimum depth for the Reply Descriptor Post Queue */
+#define MPI2_RDPQ_DEPTH_MIN                     (16)
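+
+/*
+ * Illustrative sketch (not part of the MPI headers): MsgVersion packs the MPI
+ * major/minor version (MPI2_VERSION_MAJOR/MINOR from mpi2.h) using the shifts
+ * above, and the Reply Descriptor Post Queue depth must not fall below
+ * MPI2_RDPQ_DEPTH_MIN.
+ */
+#if 0
+static void example_fill_iocinit_version(MPI2_IOC_INIT_REQUEST *req, U16 depth)
+{
+    req->WhoInit = MPI2_WHOINIT_HOST_DRIVER;
+    req->MsgVersion =
+        (MPI2_VERSION_MAJOR << MPI2_IOCINIT_MSGVERSION_MAJOR_SHIFT) |
+        (MPI2_VERSION_MINOR << MPI2_IOCINIT_MSGVERSION_MINOR_SHIFT);
+    req->ReplyDescriptorPostQueueDepth =
+        (depth < MPI2_RDPQ_DEPTH_MIN) ? MPI2_RDPQ_DEPTH_MIN : depth;
+}
+#endif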
+
+
+/* IOCInit Reply message */
+typedef struct _MPI2_IOC_INIT_REPLY
+{
+    U8                      WhoInit;                        /* 0x00 */
+    U8                      Reserved1;                      /* 0x01 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U16                     Reserved5;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+} MPI2_IOC_INIT_REPLY, MPI2_POINTER PTR_MPI2_IOC_INIT_REPLY,
+  Mpi2IOCInitReply_t, MPI2_POINTER pMpi2IOCInitReply_t;
+
+
+/****************************************************************************
+*  IOCFacts message
+****************************************************************************/
+
+/* IOCFacts Request message */
+typedef struct _MPI2_IOC_FACTS_REQUEST
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+} MPI2_IOC_FACTS_REQUEST, MPI2_POINTER PTR_MPI2_IOC_FACTS_REQUEST,
+  Mpi2IOCFactsRequest_t, MPI2_POINTER pMpi2IOCFactsRequest_t;
+
+
+/* IOCFacts Reply message */
+typedef struct _MPI2_IOC_FACTS_REPLY
+{
+    U16                     MsgVersion;                     /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     HeaderVersion;                  /* 0x04 */
+    U8                      IOCNumber;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved1;                      /* 0x0A */
+    U16                     IOCExceptions;                  /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+    U8                      MaxChainDepth;                  /* 0x14 */
+    U8                      WhoInit;                        /* 0x15 */
+    U8                      NumberOfPorts;                  /* 0x16 */
+    U8                      Reserved2;                      /* 0x17 */
+    U16                     RequestCredit;                  /* 0x18 */
+    U16                     ProductID;                      /* 0x1A */
+    U32                     IOCCapabilities;                /* 0x1C */
+    MPI2_VERSION_UNION      FWVersion;                      /* 0x20 */
+    U16                     IOCRequestFrameSize;            /* 0x24 */
+    U16                     Reserved3;                      /* 0x26 */
+    U16                     MaxInitiators;                  /* 0x28 */
+    U16                     MaxTargets;                     /* 0x2A */
+    U16                     MaxSasExpanders;                /* 0x2C */
+    U16                     MaxEnclosures;                  /* 0x2E */
+    U16                     ProtocolFlags;                  /* 0x30 */
+    U16                     HighPriorityCredit;             /* 0x32 */
+    U16                     MaxReplyDescriptorPostQueueDepth; /* 0x34 */
+    U8                      ReplyFrameSize;                 /* 0x36 */
+    U8                      MaxVolumes;                     /* 0x37 */
+    U16                     MaxDevHandle;                   /* 0x38 */
+    U16                     MaxPersistentEntries;           /* 0x3A */
+    U32                     Reserved4;                      /* 0x3C */
+} MPI2_IOC_FACTS_REPLY, MPI2_POINTER PTR_MPI2_IOC_FACTS_REPLY,
+  Mpi2IOCFactsReply_t, MPI2_POINTER pMpi2IOCFactsReply_t;
+
+/* MsgVersion */
+#define MPI2_IOCFACTS_MSGVERSION_MAJOR_MASK             (0xFF00)
+#define MPI2_IOCFACTS_MSGVERSION_MAJOR_SHIFT            (8)
+#define MPI2_IOCFACTS_MSGVERSION_MINOR_MASK             (0x00FF)
+#define MPI2_IOCFACTS_MSGVERSION_MINOR_SHIFT            (0)
+
+/* HeaderVersion */
+#define MPI2_IOCFACTS_HDRVERSION_UNIT_MASK              (0xFF00)
+#define MPI2_IOCFACTS_HDRVERSION_UNIT_SHIFT             (8)
+#define MPI2_IOCFACTS_HDRVERSION_DEV_MASK               (0x00FF)
+#define MPI2_IOCFACTS_HDRVERSION_DEV_SHIFT              (0)
+
+/* IOCExceptions */
+#define MPI2_IOCFACTS_EXCEPT_IR_FOREIGN_CONFIG_MAX      (0x0100)
+
+#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_MASK              (0x00E0)
+#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_GOOD              (0x0000)
+#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_BACKUP            (0x0020)
+#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_RESTORED          (0x0040)
+#define MPI2_IOCFACTS_EXCEPT_BOOTSTAT_CORRUPT_BACKUP    (0x0060)
+
+#define MPI2_IOCFACTS_EXCEPT_METADATA_UNSUPPORTED       (0x0010)
+#define MPI2_IOCFACTS_EXCEPT_MANUFACT_CHECKSUM_FAIL     (0x0008)
+#define MPI2_IOCFACTS_EXCEPT_FW_CHECKSUM_FAIL           (0x0004)
+#define MPI2_IOCFACTS_EXCEPT_RAID_CONFIG_INVALID        (0x0002)
+#define MPI2_IOCFACTS_EXCEPT_CONFIG_CHECKSUM_FAIL       (0x0001)
+
+/* defines for WhoInit field are after the IOCInit Request */
+
+/* ProductID field uses MPI2_FW_HEADER_PID_ */
+
+/* IOCCapabilities */
+#define MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY           (0x00002000)
+#define MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID        (0x00001000)
+#define MPI2_IOCFACTS_CAPABILITY_TLR                    (0x00000800)
+#define MPI2_IOCFACTS_CAPABILITY_MULTICAST              (0x00000100)
+#define MPI2_IOCFACTS_CAPABILITY_BIDIRECTIONAL_TARGET   (0x00000080)
+#define MPI2_IOCFACTS_CAPABILITY_EEDP                   (0x00000040)
+#define MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER        (0x00000010)
+#define MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER      (0x00000008)
+#define MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING (0x00000004)
+
+/* ProtocolFlags */
+#define MPI2_IOCFACTS_PROTOCOL_SCSI_TARGET              (0x0001)
+#define MPI2_IOCFACTS_PROTOCOL_SCSI_INITIATOR           (0x0002)
+
+
+/****************************************************************************
+*  PortFacts message
+****************************************************************************/
+
+/* PortFacts Request message */
+typedef struct _MPI2_PORT_FACTS_REQUEST
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      PortNumber;                     /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved3;                      /* 0x0A */
+} MPI2_PORT_FACTS_REQUEST, MPI2_POINTER PTR_MPI2_PORT_FACTS_REQUEST,
+  Mpi2PortFactsRequest_t, MPI2_POINTER pMpi2PortFactsRequest_t;
+
+/* PortFacts Reply message */
+typedef struct _MPI2_PORT_FACTS_REPLY
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      PortNumber;                     /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved3;                      /* 0x0A */
+    U16                     Reserved4;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+    U8                      Reserved5;                      /* 0x14 */
+    U8                      PortType;                       /* 0x15 */
+    U16                     Reserved6;                      /* 0x16 */
+    U16                     MaxPostedCmdBuffers;            /* 0x18 */
+    U16                     Reserved7;                      /* 0x1A */
+} MPI2_PORT_FACTS_REPLY, MPI2_POINTER PTR_MPI2_PORT_FACTS_REPLY,
+  Mpi2PortFactsReply_t, MPI2_POINTER pMpi2PortFactsReply_t;
+
+/* PortType values */
+#define MPI2_PORTFACTS_PORTTYPE_INACTIVE            (0x00)
+#define MPI2_PORTFACTS_PORTTYPE_FC                  (0x10)
+#define MPI2_PORTFACTS_PORTTYPE_ISCSI               (0x20)
+#define MPI2_PORTFACTS_PORTTYPE_SAS_PHYSICAL        (0x30)
+#define MPI2_PORTFACTS_PORTTYPE_SAS_VIRTUAL         (0x31)
+
+
+/****************************************************************************
+*  PortEnable message
+****************************************************************************/
+
+/* PortEnable Request message */
+typedef struct _MPI2_PORT_ENABLE_REQUEST
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U8                      Reserved2;                      /* 0x04 */
+    U8                      PortFlags;                      /* 0x05 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+} MPI2_PORT_ENABLE_REQUEST, MPI2_POINTER PTR_MPI2_PORT_ENABLE_REQUEST,
+  Mpi2PortEnableRequest_t, MPI2_POINTER pMpi2PortEnableRequest_t;
+
+
+/* PortEnable Reply message */
+typedef struct _MPI2_PORT_ENABLE_REPLY
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U8                      Reserved2;                      /* 0x04 */
+    U8                      PortFlags;                      /* 0x05 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U16                     Reserved5;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+} MPI2_PORT_ENABLE_REPLY, MPI2_POINTER PTR_MPI2_PORT_ENABLE_REPLY,
+  Mpi2PortEnableReply_t, MPI2_POINTER pMpi2PortEnableReply_t;
+
+
+/****************************************************************************
+*  EventNotification message
+****************************************************************************/
+
+/* EventNotification Request message */
+#define MPI2_EVENT_NOTIFY_EVENTMASK_WORDS           (4)
+
+typedef struct _MPI2_EVENT_NOTIFICATION_REQUEST
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U32                     Reserved5;                      /* 0x0C */
+    U32                     Reserved6;                      /* 0x10 */
+    U32                     EventMasks[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];/* 0x14 */
+    U16                     SASBroadcastPrimitiveMasks;     /* 0x24 */
+    U16                     Reserved7;                      /* 0x26 */
+    U32                     Reserved8;                      /* 0x28 */
+} MPI2_EVENT_NOTIFICATION_REQUEST,
+  MPI2_POINTER PTR_MPI2_EVENT_NOTIFICATION_REQUEST,
+  Mpi2EventNotificationRequest_t, MPI2_POINTER pMpi2EventNotificationRequest_t;
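+
+/*
+ * Illustrative sketch (not part of the MPI headers): EventMasks holds one bit
+ * per event code and a set bit masks (suppresses) that event, so a host that
+ * wants a notification typically initializes every mask word to 0xFFFFFFFF
+ * and then clears the bits for the events it cares about.
+ */
+#if 0
+static void example_unmask_event(MPI2_EVENT_NOTIFICATION_REQUEST *req,
+    U16 event)
+{
+    req->EventMasks[event / 32] &= ~(1U << (event % 32));
+}
+#endif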
+
+
+/* EventNotification Reply message */
+typedef struct _MPI2_EVENT_NOTIFICATION_REPLY
+{
+    U16                     EventDataLength;                /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved1;                      /* 0x04 */
+    U8                      AckRequired;                    /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved2;                      /* 0x0A */
+    U16                     Reserved3;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+    U16                     Event;                          /* 0x14 */
+    U16                     Reserved4;                      /* 0x16 */
+    U32                     EventContext;                   /* 0x18 */
+    U32                     EventData[1];                   /* 0x1C */
+} MPI2_EVENT_NOTIFICATION_REPLY, MPI2_POINTER PTR_MPI2_EVENT_NOTIFICATION_REPLY,
+  Mpi2EventNotificationReply_t, MPI2_POINTER pMpi2EventNotificationReply_t;
+
+/* AckRequired */
+#define MPI2_EVENT_NOTIFICATION_ACK_NOT_REQUIRED    (0x00)
+#define MPI2_EVENT_NOTIFICATION_ACK_REQUIRED        (0x01)
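+
+/*
+ * Illustrative sketch (not part of the MPI headers): EventDataLength counts
+ * 32-bit words of EventData, and an event with AckRequired set must be
+ * acknowledged with the EventAck message defined further down in this header.
+ * process_event_word() and send_event_ack() are hypothetical helpers.
+ */
+#if 0
+static void example_handle_event_reply(MPI2_EVENT_NOTIFICATION_REPLY *reply)
+{
+    U16 i;
+
+    for (i = 0; i < reply->EventDataLength; i++)
+        process_event_word(reply->EventData[i]);
+
+    if (reply->AckRequired == MPI2_EVENT_NOTIFICATION_ACK_REQUIRED)
+        send_event_ack(reply->Event, reply->EventContext);
+}
+#endif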
+
+/* Event */
+#define MPI2_EVENT_LOG_DATA                         (0x0001)
+#define MPI2_EVENT_STATE_CHANGE                     (0x0002)
+#define MPI2_EVENT_HARD_RESET_RECEIVED              (0x0005)
+#define MPI2_EVENT_EVENT_CHANGE                     (0x000A)
+#define MPI2_EVENT_TASK_SET_FULL                    (0x000E)
+#define MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE         (0x000F)
+#define MPI2_EVENT_IR_OPERATION_STATUS              (0x0014)
+#define MPI2_EVENT_SAS_DISCOVERY                    (0x0016)
+#define MPI2_EVENT_SAS_BROADCAST_PRIMITIVE          (0x0017)
+#define MPI2_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE    (0x0018)
+#define MPI2_EVENT_SAS_INIT_TABLE_OVERFLOW          (0x0019)
+#define MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST         (0x001C)
+#define MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE    (0x001D)
+#define MPI2_EVENT_IR_VOLUME                        (0x001E)
+#define MPI2_EVENT_IR_PHYSICAL_DISK                 (0x001F)
+#define MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST     (0x0020)
+#define MPI2_EVENT_LOG_ENTRY_ADDED                  (0x0021)
+
+
+/* Log Entry Added Event data */
+
+/* the following structure matches MPI2_LOG_0_ENTRY in mpi2_cnfg.h */
+#define MPI2_EVENT_DATA_LOG_DATA_LENGTH             (0x1C)
+
+typedef struct _MPI2_EVENT_DATA_LOG_ENTRY_ADDED
+{
+    U64         TimeStamp;                          /* 0x00 */
+    U32         Reserved1;                          /* 0x08 */
+    U16         LogSequence;                        /* 0x0C */
+    U16         LogEntryQualifier;                  /* 0x0E */
+    U8          VP_ID;                              /* 0x10 */
+    U8          VF_ID;                              /* 0x11 */
+    U16         Reserved2;                          /* 0x12 */
+    U8          LogData[MPI2_EVENT_DATA_LOG_DATA_LENGTH];/* 0x14 */
+} MPI2_EVENT_DATA_LOG_ENTRY_ADDED,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_LOG_ENTRY_ADDED,
+  Mpi2EventDataLogEntryAdded_t, MPI2_POINTER pMpi2EventDataLogEntryAdded_t;
+
+/* Hard Reset Received Event data */
+
+typedef struct _MPI2_EVENT_DATA_HARD_RESET_RECEIVED
+{
+    U8                      Reserved1;                      /* 0x00 */
+    U8                      Port;                           /* 0x01 */
+    U16                     Reserved2;                      /* 0x02 */
+} MPI2_EVENT_DATA_HARD_RESET_RECEIVED,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_HARD_RESET_RECEIVED,
+  Mpi2EventDataHardResetReceived_t,
+  MPI2_POINTER pMpi2EventDataHardResetReceived_t;
+
+/* Task Set Full Event data */
+
+typedef struct _MPI2_EVENT_DATA_TASK_SET_FULL
+{
+    U16                     DevHandle;                      /* 0x00 */
+    U16                     CurrentDepth;                   /* 0x02 */
+} MPI2_EVENT_DATA_TASK_SET_FULL, MPI2_POINTER PTR_MPI2_EVENT_DATA_TASK_SET_FULL,
+  Mpi2EventDataTaskSetFull_t, MPI2_POINTER pMpi2EventDataTaskSetFull_t;
+
+
+/* SAS Device Status Change Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE
+{
+    U16                     TaskTag;                        /* 0x00 */
+    U8                      ReasonCode;                     /* 0x02 */
+    U8                      Reserved1;                      /* 0x03 */
+    U8                      ASC;                            /* 0x04 */
+    U8                      ASCQ;                           /* 0x05 */
+    U16                     DevHandle;                      /* 0x06 */
+    U32                     Reserved2;                      /* 0x08 */
+    U64                     SASAddress;                     /* 0x0C */
+    U8                      LUN[8];                         /* 0x14 */
+} MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_DEVICE_STATUS_CHANGE,
+  Mpi2EventDataSasDeviceStatusChange_t,
+  MPI2_POINTER pMpi2EventDataSasDeviceStatusChange_t;
+
+/* SAS Device Status Change Event data ReasonCode values */
+#define MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA               (0x05)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED              (0x07)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET    (0x08)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL      (0x09)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL  (0x0A)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL  (0x0B)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL      (0x0C)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION       (0x0D)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET   (0x0E)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL  (0x0F)
+#define MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE        (0x10)
+
+
+/* Integrated RAID Operation Status Event data */
+
+typedef struct _MPI2_EVENT_DATA_IR_OPERATION_STATUS
+{
+    U16                     VolDevHandle;               /* 0x00 */
+    U16                     Reserved1;                  /* 0x02 */
+    U8                      RAIDOperation;              /* 0x04 */
+    U8                      PercentComplete;            /* 0x05 */
+    U16                     Reserved2;                  /* 0x06 */
+    U32                     Reserved3;                  /* 0x08 */
+} MPI2_EVENT_DATA_IR_OPERATION_STATUS,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_OPERATION_STATUS,
+  Mpi2EventDataIrOperationStatus_t,
+  MPI2_POINTER pMpi2EventDataIrOperationStatus_t;
+
+/* Integrated RAID Operation Status Event data RAIDOperation values */
+#define MPI2_EVENT_IR_RAIDOP_RESYNC                     (0x00)
+#define MPI2_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION       (0x01)
+#define MPI2_EVENT_IR_RAIDOP_CONSISTENCY_CHECK          (0x02)
+#define MPI2_EVENT_IR_RAIDOP_BACKGROUND_INIT            (0x03)
+#define MPI2_EVENT_IR_RAIDOP_MAKE_DATA_CONSISTENT       (0x04)
+
+
+/* Integrated RAID Volume Event data */
+
+typedef struct _MPI2_EVENT_DATA_IR_VOLUME
+{
+    U16                     VolDevHandle;               /* 0x00 */
+    U8                      ReasonCode;                 /* 0x02 */
+    U8                      Reserved1;                  /* 0x03 */
+    U32                     NewValue;                   /* 0x04 */
+    U32                     PreviousValue;              /* 0x08 */
+} MPI2_EVENT_DATA_IR_VOLUME, MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_VOLUME,
+  Mpi2EventDataIrVolume_t, MPI2_POINTER pMpi2EventDataIrVolume_t;
+
+/* Integrated RAID Volume Event data ReasonCode values */
+#define MPI2_EVENT_IR_VOLUME_RC_SETTINGS_CHANGED        (0x01)
+#define MPI2_EVENT_IR_VOLUME_RC_STATUS_FLAGS_CHANGED    (0x02)
+#define MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED           (0x03)
+
+
+/* Integrated RAID Physical Disk Event data */
+
+typedef struct _MPI2_EVENT_DATA_IR_PHYSICAL_DISK
+{
+    U16                     Reserved1;                  /* 0x00 */
+    U8                      ReasonCode;                 /* 0x02 */
+    U8                      PhysDiskNum;                /* 0x03 */
+    U16                     PhysDiskDevHandle;          /* 0x04 */
+    U16                     Reserved2;                  /* 0x06 */
+    U16                     Slot;                       /* 0x08 */
+    U16                     EnclosureHandle;            /* 0x0A */
+    U32                     NewValue;                   /* 0x0C */
+    U32                     PreviousValue;              /* 0x10 */
+} MPI2_EVENT_DATA_IR_PHYSICAL_DISK,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_PHYSICAL_DISK,
+  Mpi2EventDataIrPhysicalDisk_t, MPI2_POINTER pMpi2EventDataIrPhysicalDisk_t;
+
+/* Integrated RAID Physical Disk Event data ReasonCode values */
+#define MPI2_EVENT_IR_PHYSDISK_RC_SETTINGS_CHANGED      (0x01)
+#define MPI2_EVENT_IR_PHYSDISK_RC_STATUS_FLAGS_CHANGED  (0x02)
+#define MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED         (0x03)
+
+
+/* Integrated RAID Configuration Change List Event data */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumElements at runtime.
+ */
+#ifndef MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT
+#define MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT          (1)
+#endif
+
+typedef struct _MPI2_EVENT_IR_CONFIG_ELEMENT
+{
+    U16                     ElementFlags;               /* 0x00 */
+    U16                     VolDevHandle;               /* 0x02 */
+    U8                      ReasonCode;                 /* 0x04 */
+    U8                      PhysDiskNum;                /* 0x05 */
+    U16                     PhysDiskDevHandle;          /* 0x06 */
+} MPI2_EVENT_IR_CONFIG_ELEMENT, MPI2_POINTER PTR_MPI2_EVENT_IR_CONFIG_ELEMENT,
+  Mpi2EventIrConfigElement_t, MPI2_POINTER pMpi2EventIrConfigElement_t;
+
+/* IR Configuration Change List Event data ElementFlags values */
+#define MPI2_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK   (0x000F)
+#define MPI2_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT      (0x0000)
+#define MPI2_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT (0x0001)
+#define MPI2_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT    (0x0002)
+
+/* IR Configuration Change List Event data ReasonCode values */
+#define MPI2_EVENT_IR_CHANGE_RC_ADDED                   (0x01)
+#define MPI2_EVENT_IR_CHANGE_RC_REMOVED                 (0x02)
+#define MPI2_EVENT_IR_CHANGE_RC_NO_CHANGE               (0x03)
+#define MPI2_EVENT_IR_CHANGE_RC_HIDE                    (0x04)
+#define MPI2_EVENT_IR_CHANGE_RC_UNHIDE                  (0x05)
+#define MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED          (0x06)
+#define MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED          (0x07)
+#define MPI2_EVENT_IR_CHANGE_RC_PD_CREATED              (0x08)
+#define MPI2_EVENT_IR_CHANGE_RC_PD_DELETED              (0x09)
+
+typedef struct _MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST
+{
+    U8                              NumElements;        /* 0x00 */
+    U8                              Reserved1;          /* 0x01 */
+    U8                              Reserved2;          /* 0x02 */
+    U8                              ConfigNum;          /* 0x03 */
+    U32                             Flags;              /* 0x04 */
+    MPI2_EVENT_IR_CONFIG_ELEMENT    ConfigElement[MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT];    /* 0x08 */
+} MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_IR_CONFIG_CHANGE_LIST,
+  Mpi2EventDataIrConfigChangeList_t,
+  MPI2_POINTER pMpi2EventDataIrConfigChangeList_t;
+
+/* IR Configuration Change List Event data Flags values */
+#define MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG   (0x00000001)
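+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * because the build-time MPI2_EVENT_IR_CONFIG_ELEMENT_COUNT stays at one,
+ * host code walks ConfigElement[] using the NumElements reported in the event
+ * data. Here event_data and handle_ir_config_element() are hypothetical.
+ *
+ *   Mpi2EventDataIrConfigChangeList_t *list = event_data;
+ *   U8 i;
+ *
+ *   for (i = 0; i < list->NumElements; i++)
+ *       handle_ir_config_element(&list->ConfigElement[i]);
+ */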
+
+
+/* SAS Discovery Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_DISCOVERY
+{
+    U8                      Flags;                      /* 0x00 */
+    U8                      ReasonCode;                 /* 0x01 */
+    U8                      PhysicalPort;               /* 0x02 */
+    U8                      Reserved1;                  /* 0x03 */
+    U32                     DiscoveryStatus;            /* 0x04 */
+} MPI2_EVENT_DATA_SAS_DISCOVERY,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_DISCOVERY,
+  Mpi2EventDataSasDiscovery_t, MPI2_POINTER pMpi2EventDataSasDiscovery_t;
+
+/* SAS Discovery Event data Flags values */
+#define MPI2_EVENT_SAS_DISC_DEVICE_CHANGE                   (0x02)
+#define MPI2_EVENT_SAS_DISC_IN_PROGRESS                     (0x01)
+
+/* SAS Discovery Event data ReasonCode values */
+#define MPI2_EVENT_SAS_DISC_RC_STARTED                      (0x01)
+#define MPI2_EVENT_SAS_DISC_RC_COMPLETED                    (0x02)
+
+/* SAS Discovery Event data DiscoveryStatus values */
+#define MPI2_EVENT_SAS_DISC_DS_MAX_ENCLOSURES_EXCEED            (0x80000000)
+#define MPI2_EVENT_SAS_DISC_DS_MAX_EXPANDERS_EXCEED             (0x40000000)
+#define MPI2_EVENT_SAS_DISC_DS_MAX_DEVICES_EXCEED               (0x20000000)
+#define MPI2_EVENT_SAS_DISC_DS_MAX_TOPO_PHYS_EXCEED             (0x10000000)
+#define MPI2_EVENT_SAS_DISC_DS_DOWNSTREAM_INITIATOR             (0x08000000)
+#define MPI2_EVENT_SAS_DISC_DS_MULTI_SUBTRACTIVE_SUBTRACTIVE    (0x00008000)
+#define MPI2_EVENT_SAS_DISC_DS_EXP_MULTI_SUBTRACTIVE            (0x00004000)
+#define MPI2_EVENT_SAS_DISC_DS_MULTI_PORT_DOMAIN                (0x00002000)
+#define MPI2_EVENT_SAS_DISC_DS_TABLE_TO_SUBTRACTIVE_LINK        (0x00001000)
+#define MPI2_EVENT_SAS_DISC_DS_UNSUPPORTED_DEVICE               (0x00000800)
+#define MPI2_EVENT_SAS_DISC_DS_TABLE_LINK                       (0x00000400)
+#define MPI2_EVENT_SAS_DISC_DS_SUBTRACTIVE_LINK                 (0x00000200)
+#define MPI2_EVENT_SAS_DISC_DS_SMP_CRC_ERROR                    (0x00000100)
+#define MPI2_EVENT_SAS_DISC_DS_SMP_FUNCTION_FAILED              (0x00000080)
+#define MPI2_EVENT_SAS_DISC_DS_INDEX_NOT_EXIST                  (0x00000040)
+#define MPI2_EVENT_SAS_DISC_DS_OUT_ROUTE_ENTRIES                (0x00000020)
+#define MPI2_EVENT_SAS_DISC_DS_SMP_TIMEOUT                      (0x00000010)
+#define MPI2_EVENT_SAS_DISC_DS_MULTIPLE_PORTS                   (0x00000004)
+#define MPI2_EVENT_SAS_DISC_DS_UNADDRESSABLE_DEVICE             (0x00000002)
+#define MPI2_EVENT_SAS_DISC_DS_LOOP_DETECTED                    (0x00000001)
+
+
+/* SAS Broadcast Primitive Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE
+{
+    U8                      PhyNum;                     /* 0x00 */
+    U8                      Port;                       /* 0x01 */
+    U8                      PortWidth;                  /* 0x02 */
+    U8                      Primitive;                  /* 0x03 */
+} MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_BROADCAST_PRIMITIVE,
+  Mpi2EventDataSasBroadcastPrimitive_t,
+  MPI2_POINTER pMpi2EventDataSasBroadcastPrimitive_t;
+
+/* defines for the Primitive field */
+#define MPI2_EVENT_PRIMITIVE_CHANGE                         (0x01)
+#define MPI2_EVENT_PRIMITIVE_SES                            (0x02)
+#define MPI2_EVENT_PRIMITIVE_EXPANDER                       (0x03)
+#define MPI2_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT             (0x04)
+#define MPI2_EVENT_PRIMITIVE_RESERVED3                      (0x05)
+#define MPI2_EVENT_PRIMITIVE_RESERVED4                      (0x06)
+#define MPI2_EVENT_PRIMITIVE_CHANGE0_RESERVED               (0x07)
+#define MPI2_EVENT_PRIMITIVE_CHANGE1_RESERVED               (0x08)
+
+
+/* SAS Initiator Device Status Change Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE
+{
+    U8                      ReasonCode;                 /* 0x00 */
+    U8                      PhysicalPort;               /* 0x01 */
+    U16                     DevHandle;                  /* 0x02 */
+    U64                     SASAddress;                 /* 0x04 */
+} MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_INIT_DEV_STATUS_CHANGE,
+  Mpi2EventDataSasInitDevStatusChange_t,
+  MPI2_POINTER pMpi2EventDataSasInitDevStatusChange_t;
+
+/* SAS Initiator Device Status Change event ReasonCode values */
+#define MPI2_EVENT_SAS_INIT_RC_ADDED                (0x01)
+#define MPI2_EVENT_SAS_INIT_RC_NOT_RESPONDING       (0x02)
+
+
+/* SAS Initiator Device Table Overflow Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW
+{
+    U16                     MaxInit;                    /* 0x00 */
+    U16                     CurrentInit;                /* 0x02 */
+    U64                     SASAddress;                 /* 0x04 */
+} MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_INIT_TABLE_OVERFLOW,
+  Mpi2EventDataSasInitTableOverflow_t,
+  MPI2_POINTER pMpi2EventDataSasInitTableOverflow_t;
+
+
+/* SAS Topology Change List Event data */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumEntries at runtime.
+ */
+#ifndef MPI2_EVENT_SAS_TOPO_PHY_COUNT
+#define MPI2_EVENT_SAS_TOPO_PHY_COUNT           (1)
+#endif
+
+typedef struct _MPI2_EVENT_SAS_TOPO_PHY_ENTRY
+{
+    U16                     AttachedDevHandle;          /* 0x00 */
+    U8                      LinkRate;                   /* 0x02 */
+    U8                      PhyStatus;                  /* 0x03 */
+} MPI2_EVENT_SAS_TOPO_PHY_ENTRY, MPI2_POINTER PTR_MPI2_EVENT_SAS_TOPO_PHY_ENTRY,
+  Mpi2EventSasTopoPhyEntry_t, MPI2_POINTER pMpi2EventSasTopoPhyEntry_t;
+
+typedef struct _MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST
+{
+    U16                             EnclosureHandle;            /* 0x00 */
+    U16                             ExpanderDevHandle;          /* 0x02 */
+    U8                              NumPhys;                    /* 0x04 */
+    U8                              Reserved1;                  /* 0x05 */
+    U16                             Reserved2;                  /* 0x06 */
+    U8                              NumEntries;                 /* 0x08 */
+    U8                              StartPhyNum;                /* 0x09 */
+    U8                              ExpStatus;                  /* 0x0A */
+    U8                              PhysicalPort;               /* 0x0B */
+    MPI2_EVENT_SAS_TOPO_PHY_ENTRY   PHY[MPI2_EVENT_SAS_TOPO_PHY_COUNT]; /* 0x0C */
+} MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_TOPOLOGY_CHANGE_LIST,
+  Mpi2EventDataSasTopologyChangeList_t,
+  MPI2_POINTER pMpi2EventDataSasTopologyChangeList_t;
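+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * a topology change may be reported as a window of phys starting at
+ * StartPhyNum, so host code iterates using the runtime NumEntries value
+ * rather than the build-time MPI2_EVENT_SAS_TOPO_PHY_COUNT. Here event_data
+ * and handle_topo_phy() are hypothetical.
+ *
+ *   Mpi2EventDataSasTopologyChangeList_t *topo = event_data;
+ *   U8 i;
+ *
+ *   for (i = 0; i < topo->NumEntries; i++)
+ *       handle_topo_phy(topo->StartPhyNum + i, &topo->PHY[i]);
+ */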
+
+/* values for the ExpStatus field */
+#define MPI2_EVENT_SAS_TOPO_ES_ADDED                        (0x01)
+#define MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING               (0x02)
+#define MPI2_EVENT_SAS_TOPO_ES_RESPONDING                   (0x03)
+#define MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING         (0x04)
+
+/* defines for the LinkRate field */
+#define MPI2_EVENT_SAS_TOPO_LR_CURRENT_MASK                 (0xF0)
+#define MPI2_EVENT_SAS_TOPO_LR_CURRENT_SHIFT                (4)
+#define MPI2_EVENT_SAS_TOPO_LR_PREV_MASK                    (0x0F)
+#define MPI2_EVENT_SAS_TOPO_LR_PREV_SHIFT                   (0)
+
+#define MPI2_EVENT_SAS_TOPO_LR_UNKNOWN_LINK_RATE            (0x00)
+#define MPI2_EVENT_SAS_TOPO_LR_PHY_DISABLED                 (0x01)
+#define MPI2_EVENT_SAS_TOPO_LR_NEGOTIATION_FAILED           (0x02)
+#define MPI2_EVENT_SAS_TOPO_LR_SATA_OOB_COMPLETE            (0x03)
+#define MPI2_EVENT_SAS_TOPO_LR_PORT_SELECTOR                (0x04)
+#define MPI2_EVENT_SAS_TOPO_LR_SMP_RESET_IN_PROGRESS        (0x05)
+#define MPI2_EVENT_SAS_TOPO_LR_RATE_1_5                     (0x08)
+#define MPI2_EVENT_SAS_TOPO_LR_RATE_3_0                     (0x09)
+#define MPI2_EVENT_SAS_TOPO_LR_RATE_6_0                     (0x0A)
+
+/* values for the PhyStatus field */
+#define MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT                (0x80)
+#define MPI2_EVENT_SAS_TOPO_PS_MULTIPLEX_CHANGE             (0x10)
+/* values for the PhyStatus ReasonCode sub-field */
+#define MPI2_EVENT_SAS_TOPO_RC_MASK                         (0x0F)
+#define MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED                   (0x01)
+#define MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING          (0x02)
+#define MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED                  (0x03)
+#define MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE                    (0x04)
+#define MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING         (0x05)
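+
+/*
+ * Illustrative decode sketch (hypothetical, not from the MPI specification):
+ * LinkRate packs the current and previous negotiated rates into one byte, and
+ * the low nibble of PhyStatus carries the per-phy ReasonCode. Here entry is a
+ * hypothetical pointer into the PHY[] array above.
+ *
+ *   U8 current_rate = (entry->LinkRate & MPI2_EVENT_SAS_TOPO_LR_CURRENT_MASK)
+ *                      >> MPI2_EVENT_SAS_TOPO_LR_CURRENT_SHIFT;
+ *   U8 previous_rate = (entry->LinkRate & MPI2_EVENT_SAS_TOPO_LR_PREV_MASK)
+ *                       >> MPI2_EVENT_SAS_TOPO_LR_PREV_SHIFT;
+ *   U8 reason_code = entry->PhyStatus & MPI2_EVENT_SAS_TOPO_RC_MASK;
+ */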
+
+
+/* SAS Enclosure Device Status Change Event data */
+
+typedef struct _MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE
+{
+    U16                     EnclosureHandle;            /* 0x00 */
+    U8                      ReasonCode;                 /* 0x02 */
+    U8                      PhysicalPort;               /* 0x03 */
+    U64                     EnclosureLogicalID;         /* 0x04 */
+    U16                     NumSlots;                   /* 0x0C */
+    U16                     StartSlot;                  /* 0x0E */
+    U32                     PhyBits;                    /* 0x10 */
+} MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE,
+  MPI2_POINTER PTR_MPI2_EVENT_DATA_SAS_ENCL_DEV_STATUS_CHANGE,
+  Mpi2EventDataSasEnclDevStatusChange_t,
+  MPI2_POINTER pMpi2EventDataSasEnclDevStatusChange_t;
+
+/* SAS Enclosure Device Status Change event ReasonCode values */
+#define MPI2_EVENT_SAS_ENCL_RC_ADDED                (0x01)
+#define MPI2_EVENT_SAS_ENCL_RC_NOT_RESPONDING       (0x02)
+
+
+/****************************************************************************
+*  EventAck message
+****************************************************************************/
+
+/* EventAck Request message */
+typedef struct _MPI2_EVENT_ACK_REQUEST
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U16                     Event;                          /* 0x0C */
+    U16                     Reserved5;                      /* 0x0E */
+    U32                     EventContext;                   /* 0x10 */
+} MPI2_EVENT_ACK_REQUEST, MPI2_POINTER PTR_MPI2_EVENT_ACK_REQUEST,
+  Mpi2EventAckRequest_t, MPI2_POINTER pMpi2EventAckRequest_t;
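+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * an event that requires acknowledgement is typically acked by echoing the
+ * Event and EventContext values from the Event Notification reply. Here
+ * mpi_request is a hypothetical pre-zeroed request frame and notify a pointer
+ * to the notification reply.
+ *
+ *   Mpi2EventAckRequest_t *ack = mpi_request;
+ *
+ *   ack->Function = MPI2_FUNCTION_EVENT_ACK;
+ *   ack->Event = notify->Event;
+ *   ack->EventContext = notify->EventContext;
+ */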
+
+
+/* EventAck Reply message */
+typedef struct _MPI2_EVENT_ACK_REPLY
+{
+    U16                     Reserved1;                      /* 0x00 */
+    U8                      MsgLength;                      /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     Reserved2;                      /* 0x04 */
+    U8                      Reserved3;                      /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved4;                      /* 0x0A */
+    U16                     Reserved5;                      /* 0x0C */
+    U16                     IOCStatus;                      /* 0x0E */
+    U32                     IOCLogInfo;                     /* 0x10 */
+} MPI2_EVENT_ACK_REPLY, MPI2_POINTER PTR_MPI2_EVENT_ACK_REPLY,
+  Mpi2EventAckReply_t, MPI2_POINTER pMpi2EventAckReply_t;
+
+
+/****************************************************************************
+*  FWDownload message
+****************************************************************************/
+
+/* FWDownload Request message */
+typedef struct _MPI2_FW_DOWNLOAD_REQUEST
+{
+    U8                      ImageType;                  /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U32                     TotalImageSize;             /* 0x0C */
+    U32                     Reserved5;                  /* 0x10 */
+    MPI2_MPI_SGE_UNION      SGL;                        /* 0x14 */
+} MPI2_FW_DOWNLOAD_REQUEST, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_REQUEST,
+  Mpi2FWDownloadRequest, MPI2_POINTER pMpi2FWDownloadRequest;
+
+#define MPI2_FW_DOWNLOAD_MSGFLGS_LAST_SEGMENT   (0x01)
+
+#define MPI2_FW_DOWNLOAD_ITYPE_FW                   (0x01)
+#define MPI2_FW_DOWNLOAD_ITYPE_BIOS                 (0x02)
+#define MPI2_FW_DOWNLOAD_ITYPE_MANUFACTURING        (0x06)
+#define MPI2_FW_DOWNLOAD_ITYPE_CONFIG_1             (0x07)
+#define MPI2_FW_DOWNLOAD_ITYPE_CONFIG_2             (0x08)
+#define MPI2_FW_DOWNLOAD_ITYPE_MEGARAID             (0x09)
+#define MPI2_FW_DOWNLOAD_ITYPE_COMMON_BOOT_BLOCK    (0x0B)
+
+/* FWDownload TransactionContext Element */
+typedef struct _MPI2_FW_DOWNLOAD_TCSGE
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      ContextSize;                /* 0x01 */
+    U8                      DetailsLength;              /* 0x02 */
+    U8                      Flags;                      /* 0x03 */
+    U32                     Reserved2;                  /* 0x04 */
+    U32                     ImageOffset;                /* 0x08 */
+    U32                     ImageSize;                  /* 0x0C */
+} MPI2_FW_DOWNLOAD_TCSGE, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_TCSGE,
+  Mpi2FWDownloadTCSGE_t, MPI2_POINTER pMpi2FWDownloadTCSGE_t;
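+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * an image larger than a single transfer can be pushed in pieces, each
+ * request's TCSGE describing one piece and the final request carrying the
+ * last-segment flag. Here request, tcsge, offset, len and total are
+ * hypothetical locals.
+ *
+ *   tcsge->ImageOffset = offset;
+ *   tcsge->ImageSize = len;
+ *   if (offset + len >= total)
+ *       request->MsgFlags |= MPI2_FW_DOWNLOAD_MSGFLGS_LAST_SEGMENT;
+ */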
+
+/* FWDownload Reply message */
+typedef struct _MPI2_FW_DOWNLOAD_REPLY
+{
+    U8                      ImageType;                  /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U16                     Reserved5;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+} MPI2_FW_DOWNLOAD_REPLY, MPI2_POINTER PTR_MPI2_FW_DOWNLOAD_REPLY,
+  Mpi2FWDownloadReply_t, MPI2_POINTER pMpi2FWDownloadReply_t;
+
+
+/****************************************************************************
+*  FWUpload message
+****************************************************************************/
+
+/* FWUpload Request message */
+typedef struct _MPI2_FW_UPLOAD_REQUEST
+{
+    U8                      ImageType;                  /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U32                     Reserved5;                  /* 0x0C */
+    U32                     Reserved6;                  /* 0x10 */
+    MPI2_MPI_SGE_UNION      SGL;                        /* 0x14 */
+} MPI2_FW_UPLOAD_REQUEST, MPI2_POINTER PTR_MPI2_FW_UPLOAD_REQUEST,
+  Mpi2FWUploadRequest_t, MPI2_POINTER pMpi2FWUploadRequest_t;
+
+#define MPI2_FW_UPLOAD_ITYPE_FW_CURRENT         (0x00)
+#define MPI2_FW_UPLOAD_ITYPE_FW_FLASH           (0x01)
+#define MPI2_FW_UPLOAD_ITYPE_BIOS_FLASH         (0x02)
+#define MPI2_FW_UPLOAD_ITYPE_FW_BACKUP          (0x05)
+#define MPI2_FW_UPLOAD_ITYPE_MANUFACTURING      (0x06)
+#define MPI2_FW_UPLOAD_ITYPE_CONFIG_1           (0x07)
+#define MPI2_FW_UPLOAD_ITYPE_CONFIG_2           (0x08)
+#define MPI2_FW_UPLOAD_ITYPE_MEGARAID           (0x09)
+#define MPI2_FW_UPLOAD_ITYPE_COMPLETE           (0x0A)
+#define MPI2_FW_UPLOAD_ITYPE_COMMON_BOOT_BLOCK  (0x0B)
+
+typedef struct _MPI2_FW_UPLOAD_TCSGE
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      ContextSize;                /* 0x01 */
+    U8                      DetailsLength;              /* 0x02 */
+    U8                      Flags;                      /* 0x03 */
+    U32                     Reserved2;                  /* 0x04 */
+    U32                     ImageOffset;                /* 0x08 */
+    U32                     ImageSize;                  /* 0x0C */
+} MPI2_FW_UPLOAD_TCSGE, MPI2_POINTER PTR_MPI2_FW_UPLOAD_TCSGE,
+  Mpi2FWUploadTCSGE_t, MPI2_POINTER pMpi2FWUploadTCSGE_t;
+
+/* FWUpload Reply message */
+typedef struct _MPI2_FW_UPLOAD_REPLY
+{
+    U8                      ImageType;                  /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U16                     Reserved5;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+    U32                     ActualImageSize;            /* 0x14 */
+} MPI2_FW_UPLOAD_REPLY, MPI2_POINTER PTR_MPI2_FW_UPLOAD_REPLY,
+  Mpi2FWUploadReply_t, MPI2_POINTER pMpi2FWUploadReply_t;
+
+
+/* FW Image Header */
+typedef struct _MPI2_FW_IMAGE_HEADER
+{
+    U32                     Signature;                  /* 0x00 */
+    U32                     Signature0;                 /* 0x04 */
+    U32                     Signature1;                 /* 0x08 */
+    U32                     Signature2;                 /* 0x0C */
+    MPI2_VERSION_UNION      MPIVersion;                 /* 0x10 */
+    MPI2_VERSION_UNION      FWVersion;                  /* 0x14 */
+    MPI2_VERSION_UNION      NVDATAVersion;              /* 0x18 */
+    MPI2_VERSION_UNION      PackageVersion;             /* 0x1C */
+    U16                     VendorID;                   /* 0x20 */
+    U16                     ProductID;                  /* 0x22 */
+    U16                     ProtocolFlags;              /* 0x24 */
+    U16                     Reserved26;                 /* 0x26 */
+    U32                     IOCCapabilities;            /* 0x28 */
+    U32                     ImageSize;                  /* 0x2C */
+    U32                     NextImageHeaderOffset;      /* 0x30 */
+    U32                     Checksum;                   /* 0x34 */
+    U32                     Reserved38;                 /* 0x38 */
+    U32                     Reserved3C;                 /* 0x3C */
+    U32                     Reserved40;                 /* 0x40 */
+    U32                     Reserved44;                 /* 0x44 */
+    U32                     Reserved48;                 /* 0x48 */
+    U32                     Reserved4C;                 /* 0x4C */
+    U32                     Reserved50;                 /* 0x50 */
+    U32                     Reserved54;                 /* 0x54 */
+    U32                     Reserved58;                 /* 0x58 */
+    U32                     Reserved5C;                 /* 0x5C */
+    U32                     Reserved60;                 /* 0x60 */
+    U32                     FirmwareVersionNameWhat;    /* 0x64 */
+    U8                      FirmwareVersionName[32];    /* 0x68 */
+    U32                     VendorNameWhat;             /* 0x88 */
+    U8                      VendorName[32];             /* 0x8C */
+    U32                     PackageNameWhat;            /* 0xAC */
+    U8                      PackageName[32];            /* 0xB0 */
+    U32                     ReservedD0;                 /* 0xD0 */
+    U32                     ReservedD4;                 /* 0xD4 */
+    U32                     ReservedD8;                 /* 0xD8 */
+    U32                     ReservedDC;                 /* 0xDC */
+    U32                     ReservedE0;                 /* 0xE0 */
+    U32                     ReservedE4;                 /* 0xE4 */
+    U32                     ReservedE8;                 /* 0xE8 */
+    U32                     ReservedEC;                 /* 0xEC */
+    U32                     ReservedF0;                 /* 0xF0 */
+    U32                     ReservedF4;                 /* 0xF4 */
+    U32                     ReservedF8;                 /* 0xF8 */
+    U32                     ReservedFC;                 /* 0xFC */
+} MPI2_FW_IMAGE_HEADER, MPI2_POINTER PTR_MPI2_FW_IMAGE_HEADER,
+  Mpi2FWImageHeader_t, MPI2_POINTER pMpi2FWImageHeader_t;
+
+/* Signature field */
+#define MPI2_FW_HEADER_SIGNATURE_OFFSET         (0x00)
+#define MPI2_FW_HEADER_SIGNATURE_MASK           (0xFF000000)
+#define MPI2_FW_HEADER_SIGNATURE                (0xEA000000)
+
+/* Signature0 field */
+#define MPI2_FW_HEADER_SIGNATURE0_OFFSET        (0x04)
+#define MPI2_FW_HEADER_SIGNATURE0               (0x5AFAA55A)
+
+/* Signature1 field */
+#define MPI2_FW_HEADER_SIGNATURE1_OFFSET        (0x08)
+#define MPI2_FW_HEADER_SIGNATURE1               (0xA55AFAA5)
+
+/* Signature2 field */
+#define MPI2_FW_HEADER_SIGNATURE2_OFFSET        (0x0C)
+#define MPI2_FW_HEADER_SIGNATURE2               (0x5AA55AFA)
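+
+/*
+ * Illustrative validation sketch (hypothetical, not from the MPI
+ * specification): before trusting a flash image, host code can check the
+ * signature fields of the MPI2_FW_IMAGE_HEADER it points at. Here hdr is a
+ * hypothetical pointer and endianness handling is omitted.
+ *
+ *   if ((hdr->Signature & MPI2_FW_HEADER_SIGNATURE_MASK) !=
+ *       MPI2_FW_HEADER_SIGNATURE ||
+ *       hdr->Signature0 != MPI2_FW_HEADER_SIGNATURE0 ||
+ *       hdr->Signature1 != MPI2_FW_HEADER_SIGNATURE1 ||
+ *       hdr->Signature2 != MPI2_FW_HEADER_SIGNATURE2)
+ *       return -EINVAL;
+ */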
+
+
+/* defines for using the ProductID field */
+#define MPI2_FW_HEADER_PID_TYPE_MASK            (0xF000)
+#define MPI2_FW_HEADER_PID_TYPE_SAS             (0x2000)
+
+#define MPI2_FW_HEADER_PID_PROD_MASK            (0x0F00)
+#define MPI2_FW_HEADER_PID_PROD_A               (0x0000)
+
+#define MPI2_FW_HEADER_PID_FAMILY_MASK          (0x00FF)
+/* SAS */
+#define MPI2_FW_HEADER_PID_FAMILY_2108_SAS      (0x0010)
+
+/* use MPI2_IOCFACTS_PROTOCOL_ defines for ProtocolFlags field */
+
+/* use MPI2_IOCFACTS_CAPABILITY_ defines for IOCCapabilities field */
+
+
+#define MPI2_FW_HEADER_IMAGESIZE_OFFSET         (0x2C)
+#define MPI2_FW_HEADER_NEXTIMAGE_OFFSET         (0x30)
+#define MPI2_FW_HEADER_VERNMHWAT_OFFSET         (0x64)
+
+#define MPI2_FW_HEADER_WHAT_SIGNATURE           (0x29232840)
+
+#define MPI2_FW_HEADER_SIZE                     (0x100)
+
+
+/* Extended Image Header */
+typedef struct _MPI2_EXT_IMAGE_HEADER
+{
+    U8                      ImageType;                  /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U16                     Reserved2;                  /* 0x02 */
+    U32                     Checksum;                   /* 0x04 */
+    U32                     ImageSize;                  /* 0x08 */
+    U32                     NextImageHeaderOffset;      /* 0x0C */
+    U32                     PackageVersion;             /* 0x10 */
+    U32                     Reserved3;                  /* 0x14 */
+    U32                     Reserved4;                  /* 0x18 */
+    U32                     Reserved5;                  /* 0x1C */
+    U8                      IdentifyString[32];         /* 0x20 */
+} MPI2_EXT_IMAGE_HEADER, MPI2_POINTER PTR_MPI2_EXT_IMAGE_HEADER,
+  Mpi2ExtImageHeader_t, MPI2_POINTER pMpi2ExtImageHeader_t;
+
+/* useful offsets */
+#define MPI2_EXT_IMAGE_IMAGETYPE_OFFSET         (0x00)
+#define MPI2_EXT_IMAGE_IMAGESIZE_OFFSET         (0x08)
+#define MPI2_EXT_IMAGE_NEXTIMAGE_OFFSET         (0x0C)
+
+#define MPI2_EXT_IMAGE_HEADER_SIZE              (0x40)
+
+/* defines for the ImageType field */
+#define MPI2_EXT_IMAGE_TYPE_UNSPECIFIED         (0x00)
+#define MPI2_EXT_IMAGE_TYPE_FW                  (0x01)
+#define MPI2_EXT_IMAGE_TYPE_NVDATA              (0x03)
+#define MPI2_EXT_IMAGE_TYPE_BOOTLOADER          (0x04)
+#define MPI2_EXT_IMAGE_TYPE_INITIALIZATION      (0x05)
+#define MPI2_EXT_IMAGE_TYPE_FLASH_LAYOUT        (0x06)
+#define MPI2_EXT_IMAGE_TYPE_SUPPORTED_DEVICES   (0x07)
+#define MPI2_EXT_IMAGE_TYPE_MEGARAID            (0x08)
+
+#define MPI2_EXT_IMAGE_TYPE_MAX                 (MPI2_EXT_IMAGE_TYPE_MEGARAID)
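+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * extended images are chained from the firmware image header via
+ * NextImageHeaderOffset, a zero offset conventionally ending the chain. Here
+ * base points at the start of the image in memory, fw_hdr at its
+ * MPI2_FW_IMAGE_HEADER, and handle_ext_image() is a hypothetical callback.
+ *
+ *   U32 offset = fw_hdr->NextImageHeaderOffset;
+ *
+ *   while (offset) {
+ *       Mpi2ExtImageHeader_t *ext = (Mpi2ExtImageHeader_t *)(base + offset);
+ *
+ *       handle_ext_image(ext);
+ *       offset = ext->NextImageHeaderOffset;
+ *   }
+ */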
+
+
+
+/* FLASH Layout Extended Image Data */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check RegionsPerLayout at runtime.
+ */
+#ifndef MPI2_FLASH_NUMBER_OF_REGIONS
+#define MPI2_FLASH_NUMBER_OF_REGIONS        (1)
+#endif
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumberOfLayouts at runtime.
+ */
+#ifndef MPI2_FLASH_NUMBER_OF_LAYOUTS
+#define MPI2_FLASH_NUMBER_OF_LAYOUTS        (1)
+#endif
+
+typedef struct _MPI2_FLASH_REGION
+{
+    U8                      RegionType;                 /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U16                     Reserved2;                  /* 0x02 */
+    U32                     RegionOffset;               /* 0x04 */
+    U32                     RegionSize;                 /* 0x08 */
+    U32                     Reserved3;                  /* 0x0C */
+} MPI2_FLASH_REGION, MPI2_POINTER PTR_MPI2_FLASH_REGION,
+  Mpi2FlashRegion_t, MPI2_POINTER pMpi2FlashRegion_t;
+
+typedef struct _MPI2_FLASH_LAYOUT
+{
+    U32                     FlashSize;                  /* 0x00 */
+    U32                     Reserved1;                  /* 0x04 */
+    U32                     Reserved2;                  /* 0x08 */
+    U32                     Reserved3;                  /* 0x0C */
+    MPI2_FLASH_REGION       Region[MPI2_FLASH_NUMBER_OF_REGIONS];/* 0x10 */
+} MPI2_FLASH_LAYOUT, MPI2_POINTER PTR_MPI2_FLASH_LAYOUT,
+  Mpi2FlashLayout_t, MPI2_POINTER pMpi2FlashLayout_t;
+
+typedef struct _MPI2_FLASH_LAYOUT_DATA
+{
+    U8                      ImageRevision;              /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      SizeOfRegion;               /* 0x02 */
+    U8                      Reserved2;                  /* 0x03 */
+    U16                     NumberOfLayouts;            /* 0x04 */
+    U16                     RegionsPerLayout;           /* 0x06 */
+    U16                     MinimumSectorAlignment;     /* 0x08 */
+    U16                     Reserved3;                  /* 0x0A */
+    U32                     Reserved4;                  /* 0x0C */
+    MPI2_FLASH_LAYOUT       Layout[MPI2_FLASH_NUMBER_OF_LAYOUTS];/* 0x10 */
+} MPI2_FLASH_LAYOUT_DATA, MPI2_POINTER PTR_MPI2_FLASH_LAYOUT_DATA,
+  Mpi2FlashLayoutData_t, MPI2_POINTER pMpi2FlashLayoutData_t;
+
+/* defines for the RegionType field */
+#define MPI2_FLASH_REGION_UNUSED                (0x00)
+#define MPI2_FLASH_REGION_FIRMWARE              (0x01)
+#define MPI2_FLASH_REGION_BIOS                  (0x02)
+#define MPI2_FLASH_REGION_NVDATA                (0x03)
+#define MPI2_FLASH_REGION_FIRMWARE_BACKUP       (0x05)
+#define MPI2_FLASH_REGION_MFG_INFORMATION       (0x06)
+#define MPI2_FLASH_REGION_CONFIG_1              (0x07)
+#define MPI2_FLASH_REGION_CONFIG_2              (0x08)
+#define MPI2_FLASH_REGION_MEGARAID              (0x09)
+#define MPI2_FLASH_REGION_INIT                  (0x0A)
+
+/* ImageRevision */
+#define MPI2_FLASH_LAYOUT_IMAGE_REVISION        (0x00)
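+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * since the nominal array sizes above stay at one, host code walks the first
+ * layout's regions using the runtime RegionsPerLayout count (additional
+ * layouts are located using SizeOfRegion and RegionsPerLayout). Here
+ * flash_layout_image_data and handle_region() are hypothetical.
+ *
+ *   Mpi2FlashLayoutData_t *data = flash_layout_image_data;
+ *   Mpi2FlashRegion_t *region = data->Layout[0].Region;
+ *   U16 r;
+ *
+ *   for (r = 0; r < data->RegionsPerLayout; r++)
+ *       handle_region(&region[r]);
+ */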
+
+
+
+/* Supported Devices Extended Image Data */
+
+/*
+ * Host code (drivers, BIOS, utilities, etc.) should leave this define set to
+ * one and check NumberOfDevices at runtime.
+ */
+#ifndef MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES
+#define MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES    (1)
+#endif
+
+typedef struct _MPI2_SUPPORTED_DEVICE
+{
+    U16                     DeviceID;                   /* 0x00 */
+    U16                     VendorID;                   /* 0x02 */
+    U16                     DeviceIDMask;               /* 0x04 */
+    U16                     Reserved1;                  /* 0x06 */
+    U8                      LowPCIRev;                  /* 0x08 */
+    U8                      HighPCIRev;                 /* 0x09 */
+    U16                     Reserved2;                  /* 0x0A */
+    U32                     Reserved3;                  /* 0x0C */
+} MPI2_SUPPORTED_DEVICE, MPI2_POINTER PTR_MPI2_SUPPORTED_DEVICE,
+  Mpi2SupportedDevice_t, MPI2_POINTER pMpi2SupportedDevice_t;
+
+typedef struct _MPI2_SUPPORTED_DEVICES_DATA
+{
+    U8                      ImageRevision;              /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      NumberOfDevices;            /* 0x02 */
+    U8                      Reserved2;                  /* 0x03 */
+    U32                     Reserved3;                  /* 0x04 */
+    MPI2_SUPPORTED_DEVICE   SupportedDevice[MPI2_SUPPORTED_DEVICES_IMAGE_NUM_DEVICES]; /* 0x08 */
+} MPI2_SUPPORTED_DEVICES_DATA, MPI2_POINTER PTR_MPI2_SUPPORTED_DEVICES_DATA,
+  Mpi2SupportedDevicesData_t, MPI2_POINTER pMpi2SupportedDevicesData_t;
+
+/* ImageRevision */
+#define MPI2_SUPPORTED_DEVICES_IMAGE_REVISION   (0x00)
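+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * the SupportedDevice array is walked using the runtime NumberOfDevices count
+ * rather than the build-time define. Here supported_devices_image_data and
+ * handle_supported_device() are hypothetical.
+ *
+ *   Mpi2SupportedDevicesData_t *data = supported_devices_image_data;
+ *   U8 i;
+ *
+ *   for (i = 0; i < data->NumberOfDevices; i++)
+ *       handle_supported_device(&data->SupportedDevice[i]);
+ */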
+
+
+/* Init Extended Image Data */
+
+typedef struct _MPI2_INIT_IMAGE_FOOTER
+{
+    U32                     BootFlags;                  /* 0x00 */
+    U32                     ImageSize;                  /* 0x04 */
+    U32                     Signature0;                 /* 0x08 */
+    U32                     Signature1;                 /* 0x0C */
+    U32                     Signature2;                 /* 0x10 */
+    U32                     ResetVector;                /* 0x14 */
+} MPI2_INIT_IMAGE_FOOTER, MPI2_POINTER PTR_MPI2_INIT_IMAGE_FOOTER,
+  Mpi2InitImageFooter_t, MPI2_POINTER pMpi2InitImageFooter_t;
+
+/* defines for the BootFlags field */
+#define MPI2_INIT_IMAGE_BOOTFLAGS_OFFSET        (0x00)
+
+/* defines for the ImageSize field */
+#define MPI2_INIT_IMAGE_IMAGESIZE_OFFSET        (0x04)
+
+/* defines for the Signature0 field */
+#define MPI2_INIT_IMAGE_SIGNATURE0_OFFSET       (0x08)
+#define MPI2_INIT_IMAGE_SIGNATURE0              (0x5AA55AEA)
+
+/* defines for the Signature1 field */
+#define MPI2_INIT_IMAGE_SIGNATURE1_OFFSET       (0x0C)
+#define MPI2_INIT_IMAGE_SIGNATURE1              (0xA55AEAA5)
+
+/* defines for the Signature2 field */
+#define MPI2_INIT_IMAGE_SIGNATURE2_OFFSET       (0x10)
+#define MPI2_INIT_IMAGE_SIGNATURE2              (0x5AEAA55A)
+
+/* Signature fields as individual bytes */
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_0        (0xEA)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_1        (0x5A)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_2        (0xA5)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_3        (0x5A)
+
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_4        (0xA5)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_5        (0xEA)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_6        (0x5A)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_7        (0xA5)
+
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_8        (0x5A)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_9        (0xA5)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_A        (0xEA)
+#define MPI2_INIT_IMAGE_SIGNATURE_BYTE_B        (0x5A)
+
+/* defines for the ResetVector field */
+#define MPI2_INIT_IMAGE_RESETVECTOR_OFFSET      (0x14)
+
+
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_raid.h b/drivers/scsi/mpt2sas/mpi/mpi2_raid.h
new file mode 100644
index 0000000..7134816d
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_raid.h
@@ -0,0 +1,295 @@
+/*
+ *  Copyright (c) 2000-2008 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_raid.h
+ *          Title:  MPI Integrated RAID messages and structures
+ *  Creation Date:  April 26, 2007
+ *
+ *    mpi2_raid.h Version:  02.00.03
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  08-31-07  02.00.01  Modifications to RAID Action request and reply,
+ *                      including the Actions and ActionData.
+ *  02-29-08  02.00.02  Added MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD.
+ *  05-21-08  02.00.03  Added MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS so that
+ *                      the PhysDisk array in MPI2_RAID_VOLUME_CREATION_STRUCT
+ *                      can be sized by the build environment.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_RAID_H
+#define MPI2_RAID_H
+
+/*****************************************************************************
+*
+*               Integrated RAID Messages
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  RAID Action messages
+****************************************************************************/
+
+/* ActionDataWord defines for use with MPI2_RAID_ACTION_DELETE_VOLUME action */
+#define MPI2_RAID_ACTION_ADATA_KEEP_LBA0            (0x00000000)
+#define MPI2_RAID_ACTION_ADATA_ZERO_LBA0            (0x00000001)
+
+/* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
+
+/* ActionDataWord defines for use with MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES action */
+#define MPI2_RAID_ACTION_ADATA_DISABL_FULL_REBUILD  (0x00000001)
+
+/* ActionDataWord for MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE Action */
+typedef struct _MPI2_RAID_ACTION_RATE_DATA
+{
+    U8              RateToChange;               /* 0x00 */
+    U8              RateOrMode;                 /* 0x01 */
+    U16             DataScrubDuration;          /* 0x02 */
+} MPI2_RAID_ACTION_RATE_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_RATE_DATA,
+  Mpi2RaidActionRateData_t, MPI2_POINTER pMpi2RaidActionRateData_t;
+
+#define MPI2_RAID_ACTION_SET_RATE_RESYNC            (0x00)
+#define MPI2_RAID_ACTION_SET_RATE_DATA_SCRUB        (0x01)
+#define MPI2_RAID_ACTION_SET_RATE_POWERSAVE_MODE    (0x02)
+
+/* ActionDataWord for MPI2_RAID_ACTION_START_RAID_FUNCTION Action */
+typedef struct _MPI2_RAID_ACTION_START_RAID_FUNCTION
+{
+    U8              RAIDFunction;                       /* 0x00 */
+    U8              Flags;                              /* 0x01 */
+    U16             Reserved1;                          /* 0x02 */
+} MPI2_RAID_ACTION_START_RAID_FUNCTION,
+  MPI2_POINTER PTR_MPI2_RAID_ACTION_START_RAID_FUNCTION,
+  Mpi2RaidActionStartRaidFunction_t,
+  MPI2_POINTER pMpi2RaidActionStartRaidFunction_t;
+
+/* defines for the RAIDFunction field */
+#define MPI2_RAID_ACTION_START_BACKGROUND_INIT      (0x00)
+#define MPI2_RAID_ACTION_START_ONLINE_CAP_EXPANSION (0x01)
+#define MPI2_RAID_ACTION_START_CONSISTENCY_CHECK    (0x02)
+
+/* defines for the Flags field */
+#define MPI2_RAID_ACTION_START_NEW                  (0x00)
+#define MPI2_RAID_ACTION_START_RESUME               (0x01)
+
+/* ActionDataWord for MPI2_RAID_ACTION_STOP_RAID_FUNCTION Action */
+typedef struct _MPI2_RAID_ACTION_STOP_RAID_FUNCTION
+{
+    U8              RAIDFunction;                       /* 0x00 */
+    U8              Flags;                              /* 0x01 */
+    U16             Reserved1;                          /* 0x02 */
+} MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
+  MPI2_POINTER PTR_MPI2_RAID_ACTION_STOP_RAID_FUNCTION,
+  Mpi2RaidActionStopRaidFunction_t,
+  MPI2_POINTER pMpi2RaidActionStopRaidFunction_t;
+
+/* defines for the RAIDFunction field */
+#define MPI2_RAID_ACTION_STOP_BACKGROUND_INIT       (0x00)
+#define MPI2_RAID_ACTION_STOP_ONLINE_CAP_EXPANSION  (0x01)
+#define MPI2_RAID_ACTION_STOP_CONSISTENCY_CHECK     (0x02)
+
+/* defines for the Flags field */
+#define MPI2_RAID_ACTION_STOP_ABORT                 (0x00)
+#define MPI2_RAID_ACTION_STOP_PAUSE                 (0x01)
+
+/* ActionDataWord for MPI2_RAID_ACTION_CREATE_HOT_SPARE Action */
+typedef struct _MPI2_RAID_ACTION_HOT_SPARE
+{
+    U8              HotSparePool;               /* 0x00 */
+    U8              Reserved1;                  /* 0x01 */
+    U16             DevHandle;                  /* 0x02 */
+} MPI2_RAID_ACTION_HOT_SPARE, MPI2_POINTER PTR_MPI2_RAID_ACTION_HOT_SPARE,
+  Mpi2RaidActionHotSpare_t, MPI2_POINTER pMpi2RaidActionHotSpare_t;
+
+/* ActionDataWord for MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE Action */
+typedef struct _MPI2_RAID_ACTION_FW_UPDATE_MODE
+{
+    U8              Flags;                              /* 0x00 */
+    U8              DeviceFirmwareUpdateModeTimeout;    /* 0x01 */
+    U16             Reserved1;                          /* 0x02 */
+} MPI2_RAID_ACTION_FW_UPDATE_MODE,
+  MPI2_POINTER PTR_MPI2_RAID_ACTION_FW_UPDATE_MODE,
+  Mpi2RaidActionFwUpdateMode_t, MPI2_POINTER pMpi2RaidActionFwUpdateMode_t;
+
+/* ActionDataWord defines for use with MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE action */
+#define MPI2_RAID_ACTION_ADATA_DISABLE_FW_UPDATE        (0x00)
+#define MPI2_RAID_ACTION_ADATA_ENABLE_FW_UPDATE         (0x01)
+
+typedef union _MPI2_RAID_ACTION_DATA
+{
+    U32                                     Word;
+    MPI2_RAID_ACTION_RATE_DATA              Rates;
+    MPI2_RAID_ACTION_START_RAID_FUNCTION    StartRaidFunction;
+    MPI2_RAID_ACTION_STOP_RAID_FUNCTION     StopRaidFunction;
+    MPI2_RAID_ACTION_HOT_SPARE              HotSpare;
+    MPI2_RAID_ACTION_FW_UPDATE_MODE         FwUpdateMode;
+} MPI2_RAID_ACTION_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_DATA,
+  Mpi2RaidActionData_t, MPI2_POINTER pMpi2RaidActionData_t;
+
+
+/* RAID Action Request Message */
+typedef struct _MPI2_RAID_ACTION_REQUEST
+{
+    U8                      Action;                         /* 0x00 */
+    U8                      Reserved1;                      /* 0x01 */
+    U8                      ChainOffset;                    /* 0x02 */
+    U8                      Function;                       /* 0x03 */
+    U16                     VolDevHandle;                   /* 0x04 */
+    U8                      PhysDiskNum;                    /* 0x06 */
+    U8                      MsgFlags;                       /* 0x07 */
+    U8                      VP_ID;                          /* 0x08 */
+    U8                      VF_ID;                          /* 0x09 */
+    U16                     Reserved2;                      /* 0x0A */
+    U32                     Reserved3;                      /* 0x0C */
+    MPI2_RAID_ACTION_DATA   ActionDataWord;                 /* 0x10 */
+    MPI2_SGE_SIMPLE_UNION   ActionDataSGE;                  /* 0x14 */
+} MPI2_RAID_ACTION_REQUEST, MPI2_POINTER PTR_MPI2_RAID_ACTION_REQUEST,
+  Mpi2RaidActionRequest_t, MPI2_POINTER pMpi2RaidActionRequest_t;
+
+/* RAID Action request Action values */
+
+#define MPI2_RAID_ACTION_INDICATOR_STRUCT           (0x01)
+#define MPI2_RAID_ACTION_CREATE_VOLUME              (0x02)
+#define MPI2_RAID_ACTION_DELETE_VOLUME              (0x03)
+#define MPI2_RAID_ACTION_DISABLE_ALL_VOLUMES        (0x04)
+#define MPI2_RAID_ACTION_ENABLE_ALL_VOLUMES         (0x05)
+#define MPI2_RAID_ACTION_PHYSDISK_OFFLINE           (0x0A)
+#define MPI2_RAID_ACTION_PHYSDISK_ONLINE            (0x0B)
+#define MPI2_RAID_ACTION_FAIL_PHYSDISK              (0x0F)
+#define MPI2_RAID_ACTION_ACTIVATE_VOLUME            (0x11)
+#define MPI2_RAID_ACTION_DEVICE_FW_UPDATE_MODE      (0x15)
+#define MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE     (0x17)
+#define MPI2_RAID_ACTION_SET_VOLUME_NAME            (0x18)
+#define MPI2_RAID_ACTION_SET_RAID_FUNCTION_RATE     (0x19)
+#define MPI2_RAID_ACTION_ENABLE_FAILED_VOLUME       (0x1C)
+#define MPI2_RAID_ACTION_CREATE_HOT_SPARE           (0x1D)
+#define MPI2_RAID_ACTION_DELETE_HOT_SPARE           (0x1E)
+#define MPI2_RAID_ACTION_SYSTEM_SHUTDOWN_INITIATED  (0x20)
+#define MPI2_RAID_ACTION_START_RAID_FUNCTION        (0x21)
+#define MPI2_RAID_ACTION_STOP_RAID_FUNCTION         (0x22)
+
+
+/* RAID Volume Creation Structure */
+
+/*
+ * The following define can be customized for the targeted product.
+ */
+#ifndef MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS
+#define MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS        (1)
+#endif
+
+typedef struct _MPI2_RAID_VOLUME_PHYSDISK
+{
+    U8                      RAIDSetNum;                     /* 0x00 */
+    U8                      PhysDiskMap;                    /* 0x01 */
+    U16                     PhysDiskDevHandle;              /* 0x02 */
+} MPI2_RAID_VOLUME_PHYSDISK, MPI2_POINTER PTR_MPI2_RAID_VOLUME_PHYSDISK,
+  Mpi2RaidVolumePhysDisk_t, MPI2_POINTER pMpi2RaidVolumePhysDisk_t;
+
+/* defines for the PhysDiskMap field */
+#define MPI2_RAIDACTION_PHYSDISK_PRIMARY            (0x01)
+#define MPI2_RAIDACTION_PHYSDISK_SECONDARY          (0x02)
+
+typedef struct _MPI2_RAID_VOLUME_CREATION_STRUCT
+{
+    U8                          NumPhysDisks;               /* 0x00 */
+    U8                          VolumeType;                 /* 0x01 */
+    U16                         Reserved1;                  /* 0x02 */
+    U32                         VolumeCreationFlags;        /* 0x04 */
+    U32                         VolumeSettings;             /* 0x08 */
+    U8                          Reserved2;                  /* 0x0C */
+    U8                          ResyncRate;                 /* 0x0D */
+    U16                         DataScrubDuration;          /* 0x0E */
+    U64                         VolumeMaxLBA;               /* 0x10 */
+    U32                         StripeSize;                 /* 0x18 */
+    U8                          Name[16];                   /* 0x1C */
+    MPI2_RAID_VOLUME_PHYSDISK   PhysDisk[MPI2_RAID_VOL_CREATION_NUM_PHYSDISKS];/* 0x2C */
+} MPI2_RAID_VOLUME_CREATION_STRUCT,
+  MPI2_POINTER PTR_MPI2_RAID_VOLUME_CREATION_STRUCT,
+  Mpi2RaidVolumeCreationStruct_t, MPI2_POINTER pMpi2RaidVolumeCreationStruct_t;
+
+/* use MPI2_RAID_VOL_TYPE_ defines from mpi2_cnfg.h for VolumeType */
+
+/* defines for the VolumeCreationFlags field */
+#define MPI2_RAID_VOL_CREATION_USE_DEFAULT_SETTINGS (0x80)
+#define MPI2_RAID_VOL_CREATION_BACKGROUND_INIT      (0x04)
+#define MPI2_RAID_VOL_CREATION_LOW_LEVEL_INIT       (0x02)
+#define MPI2_RAID_VOL_CREATION_MIGRATE_DATA         (0x01)
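+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * since the nominal PhysDisk array size is one, a creation structure for
+ * num_pd member disks is typically sized at runtime before being filled in
+ * and passed through the RAID Action data SGE. Here num_pd is a hypothetical
+ * count.
+ *
+ *   size_t sz = offsetof(MPI2_RAID_VOLUME_CREATION_STRUCT, PhysDisk) +
+ *               num_pd * sizeof(MPI2_RAID_VOLUME_PHYSDISK);
+ *   Mpi2RaidVolumeCreationStruct_t *vcs = kzalloc(sz, GFP_KERNEL);
+ *
+ *   if (vcs)
+ *       vcs->NumPhysDisks = num_pd;
+ */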
+
+
+/* RAID Online Capacity Expansion Structure */
+
+typedef struct _MPI2_RAID_ONLINE_CAPACITY_EXPANSION
+{
+    U32                     Flags;                          /* 0x00 */
+    U16                     DevHandle0;                     /* 0x04 */
+    U16                     Reserved1;                      /* 0x06 */
+    U16                     DevHandle1;                     /* 0x08 */
+    U16                     Reserved2;                      /* 0x0A */
+} MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
+  MPI2_POINTER PTR_MPI2_RAID_ONLINE_CAPACITY_EXPANSION,
+  Mpi2RaidOnlineCapacityExpansion_t,
+  MPI2_POINTER pMpi2RaidOnlineCapacityExpansion_t;
+
+
+/* RAID Volume Indicator Structure */
+
+typedef struct _MPI2_RAID_VOL_INDICATOR
+{
+    U64                     TotalBlocks;                    /* 0x00 */
+    U64                     BlocksRemaining;                /* 0x08 */
+    U32                     Flags;                          /* 0x10 */
+} MPI2_RAID_VOL_INDICATOR, MPI2_POINTER PTR_MPI2_RAID_VOL_INDICATOR,
+  Mpi2RaidVolIndicator_t, MPI2_POINTER pMpi2RaidVolIndicator_t;
+
+/* defines for RAID Volume Indicator Flags field */
+#define MPI2_RAID_VOL_FLAGS_OP_MASK                 (0x0000000F)
+#define MPI2_RAID_VOL_FLAGS_OP_BACKGROUND_INIT      (0x00000000)
+#define MPI2_RAID_VOL_FLAGS_OP_ONLINE_CAP_EXPANSION (0x00000001)
+#define MPI2_RAID_VOL_FLAGS_OP_CONSISTENCY_CHECK    (0x00000002)
+#define MPI2_RAID_VOL_FLAGS_OP_RESYNC               (0x00000003)
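+
+/*
+ * Illustrative usage sketch (hypothetical, not from the MPI specification):
+ * for an MPI2_RAID_ACTION_INDICATOR_STRUCT reply the indicator gives rough
+ * progress of the operation selected by the Flags OP field. Here reply is a
+ * hypothetical pMpi2RaidActionReply_t and endianness handling is omitted.
+ *
+ *   Mpi2RaidVolIndicator_t *ind = &reply->ActionData.RaidVolumeIndicator;
+ *   U64 done = ind->TotalBlocks - ind->BlocksRemaining;
+ *   U8 percent = ind->TotalBlocks ?
+ *                (U8)div64_u64(done * 100, ind->TotalBlocks) : 0;
+ */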
+
+
+/* RAID Action Reply ActionData union */
+typedef union _MPI2_RAID_ACTION_REPLY_DATA
+{
+    U32                     Word[5];
+    MPI2_RAID_VOL_INDICATOR RaidVolumeIndicator;
+    U16                     VolDevHandle;
+    U8                      VolumeState;
+    U8                      PhysDiskNum;
+} MPI2_RAID_ACTION_REPLY_DATA, MPI2_POINTER PTR_MPI2_RAID_ACTION_REPLY_DATA,
+  Mpi2RaidActionReplyData_t, MPI2_POINTER pMpi2RaidActionReplyData_t;
+
+/* use MPI2_RAIDVOL0_SETTING_ defines from mpi2_cnfg.h for MPI2_RAID_ACTION_CHANGE_VOL_WRITE_CACHE action */
+
+
+/* RAID Action Reply Message */
+typedef struct _MPI2_RAID_ACTION_REPLY
+{
+    U8                          Action;                     /* 0x00 */
+    U8                          Reserved1;                  /* 0x01 */
+    U8                          MsgLength;                  /* 0x02 */
+    U8                          Function;                   /* 0x03 */
+    U16                         VolDevHandle;               /* 0x04 */
+    U8                          PhysDiskNum;                /* 0x06 */
+    U8                          MsgFlags;                   /* 0x07 */
+    U8                          VP_ID;                      /* 0x08 */
+    U8                          VF_ID;                      /* 0x09 */
+    U16                         Reserved2;                  /* 0x0A */
+    U16                         Reserved3;                  /* 0x0C */
+    U16                         IOCStatus;                  /* 0x0E */
+    U32                         IOCLogInfo;                 /* 0x10 */
+    MPI2_RAID_ACTION_REPLY_DATA ActionData;                 /* 0x14 */
+} MPI2_RAID_ACTION_REPLY, MPI2_POINTER PTR_MPI2_RAID_ACTION_REPLY,
+  Mpi2RaidActionReply_t, MPI2_POINTER pMpi2RaidActionReply_t;
+
+
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_sas.h b/drivers/scsi/mpt2sas/mpi/mpi2_sas.h
new file mode 100644
index 0000000..8a42b13
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_sas.h
@@ -0,0 +1,282 @@
+/*
+ *  Copyright (c) 2000-2007 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_sas.h
+ *          Title:  MPI Serial Attached SCSI structures and definitions
+ *  Creation Date:  February 9, 2007
+ *
+ *  mpi2_sas.h Version:  02.00.02
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  06-26-07  02.00.01  Added Clear All Persistent Operation to SAS IO Unit
+ *                      Control Request.
+ *  10-02-08  02.00.02  Added Set IOC Parameter Operation to SAS IO Unit Control
+ *                      Request.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_SAS_H
+#define MPI2_SAS_H
+
+/*
+ * Values for SASStatus.
+ */
+#define MPI2_SASSTATUS_SUCCESS                          (0x00)
+#define MPI2_SASSTATUS_UNKNOWN_ERROR                    (0x01)
+#define MPI2_SASSTATUS_INVALID_FRAME                    (0x02)
+#define MPI2_SASSTATUS_UTC_BAD_DEST                     (0x03)
+#define MPI2_SASSTATUS_UTC_BREAK_RECEIVED               (0x04)
+#define MPI2_SASSTATUS_UTC_CONNECT_RATE_NOT_SUPPORTED   (0x05)
+#define MPI2_SASSTATUS_UTC_PORT_LAYER_REQUEST           (0x06)
+#define MPI2_SASSTATUS_UTC_PROTOCOL_NOT_SUPPORTED       (0x07)
+#define MPI2_SASSTATUS_UTC_STP_RESOURCES_BUSY           (0x08)
+#define MPI2_SASSTATUS_UTC_WRONG_DESTINATION            (0x09)
+#define MPI2_SASSTATUS_SHORT_INFORMATION_UNIT           (0x0A)
+#define MPI2_SASSTATUS_LONG_INFORMATION_UNIT            (0x0B)
+#define MPI2_SASSTATUS_XFER_RDY_INCORRECT_WRITE_DATA    (0x0C)
+#define MPI2_SASSTATUS_XFER_RDY_REQUEST_OFFSET_ERROR    (0x0D)
+#define MPI2_SASSTATUS_XFER_RDY_NOT_EXPECTED            (0x0E)
+#define MPI2_SASSTATUS_DATA_INCORRECT_DATA_LENGTH       (0x0F)
+#define MPI2_SASSTATUS_DATA_TOO_MUCH_READ_DATA          (0x10)
+#define MPI2_SASSTATUS_DATA_OFFSET_ERROR                (0x11)
+#define MPI2_SASSTATUS_SDSF_NAK_RECEIVED                (0x12)
+#define MPI2_SASSTATUS_SDSF_CONNECTION_FAILED           (0x13)
+#define MPI2_SASSTATUS_INITIATOR_RESPONSE_TIMEOUT       (0x14)
+
+
+/*
+ * Values for the SAS DeviceInfo field used in SAS Device Status Change Event
+ * data and SAS Configuration pages.
+ */
+#define MPI2_SAS_DEVICE_INFO_SEP                (0x00004000)
+#define MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE       (0x00002000)
+#define MPI2_SAS_DEVICE_INFO_LSI_DEVICE         (0x00001000)
+#define MPI2_SAS_DEVICE_INFO_DIRECT_ATTACH      (0x00000800)
+#define MPI2_SAS_DEVICE_INFO_SSP_TARGET         (0x00000400)
+#define MPI2_SAS_DEVICE_INFO_STP_TARGET         (0x00000200)
+#define MPI2_SAS_DEVICE_INFO_SMP_TARGET         (0x00000100)
+#define MPI2_SAS_DEVICE_INFO_SATA_DEVICE        (0x00000080)
+#define MPI2_SAS_DEVICE_INFO_SSP_INITIATOR      (0x00000040)
+#define MPI2_SAS_DEVICE_INFO_STP_INITIATOR      (0x00000020)
+#define MPI2_SAS_DEVICE_INFO_SMP_INITIATOR      (0x00000010)
+#define MPI2_SAS_DEVICE_INFO_SATA_HOST          (0x00000008)
+
+#define MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE   (0x00000007)
+#define MPI2_SAS_DEVICE_INFO_NO_DEVICE          (0x00000000)
+#define MPI2_SAS_DEVICE_INFO_END_DEVICE         (0x00000001)
+#define MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER      (0x00000002)
+#define MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER    (0x00000003)
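+
+/*
+ * Illustrative decode sketch (hypothetical, not from the MPI specification):
+ * the low three bits of DeviceInfo select the device type, while the
+ * remaining bits are independent capability flags. Here device_info is a
+ * hypothetical U32 holding a DeviceInfo value.
+ *
+ *   U32 type = device_info & MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE;
+ *   int is_ssp_end_device = (type == MPI2_SAS_DEVICE_INFO_END_DEVICE) &&
+ *                           (device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET);
+ */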
+
+
+/*****************************************************************************
+*
+*        SAS Messages
+*
+*****************************************************************************/
+
+/****************************************************************************
+*  SMP Passthrough messages
+****************************************************************************/
+
+/* SMP Passthrough Request Message */
+typedef struct _MPI2_SMP_PASSTHROUGH_REQUEST
+{
+    U8                      PassthroughFlags;   /* 0x00 */
+    U8                      PhysicalPort;       /* 0x01 */
+    U8                      ChainOffset;        /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     RequestDataLength;  /* 0x04 */
+    U8                      SGLFlags;           /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved1;          /* 0x0A */
+    U32                     Reserved2;          /* 0x0C */
+    U64                     SASAddress;         /* 0x10 */
+    U32                     Reserved3;          /* 0x18 */
+    U32                     Reserved4;          /* 0x1C */
+    MPI2_SIMPLE_SGE_UNION   SGL;                /* 0x20 */
+} MPI2_SMP_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REQUEST,
+  Mpi2SmpPassthroughRequest_t, MPI2_POINTER pMpi2SmpPassthroughRequest_t;
+
+/* values for PassthroughFlags field */
+#define MPI2_SMP_PT_REQ_PT_FLAGS_IMMEDIATE      (0x80)
+
+/* values for SGLFlags field are in the SGL section of mpi2.h */
+
+
+/* SMP Passthrough Reply Message */
+typedef struct _MPI2_SMP_PASSTHROUGH_REPLY
+{
+    U8                      PassthroughFlags;   /* 0x00 */
+    U8                      PhysicalPort;       /* 0x01 */
+    U8                      MsgLength;          /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     ResponseDataLength; /* 0x04 */
+    U8                      SGLFlags;           /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved1;          /* 0x0A */
+    U8                      Reserved2;          /* 0x0C */
+    U8                      SASStatus;          /* 0x0D */
+    U16                     IOCStatus;          /* 0x0E */
+    U32                     IOCLogInfo;         /* 0x10 */
+    U32                     Reserved3;          /* 0x14 */
+    U8                      ResponseData[4];    /* 0x18 */
+} MPI2_SMP_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SMP_PASSTHROUGH_REPLY,
+  Mpi2SmpPassthroughReply_t, MPI2_POINTER pMpi2SmpPassthroughReply_t;
+
+/* values for PassthroughFlags field */
+#define MPI2_SMP_PT_REPLY_PT_FLAGS_IMMEDIATE    (0x80)
+
+/* values for SASStatus field are at the top of this file */
+
+
+/****************************************************************************
+*  SATA Passthrough messages
+****************************************************************************/
+
+/* SATA Passthrough Request Message */
+typedef struct _MPI2_SATA_PASSTHROUGH_REQUEST
+{
+    U16                     DevHandle;          /* 0x00 */
+    U8                      ChainOffset;        /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     PassthroughFlags;   /* 0x04 */
+    U8                      SGLFlags;           /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved1;          /* 0x0A */
+    U32                     Reserved2;          /* 0x0C */
+    U32                     Reserved3;          /* 0x10 */
+    U32                     Reserved4;          /* 0x14 */
+    U32                     DataLength;         /* 0x18 */
+    U8                      CommandFIS[20];     /* 0x1C */
+    MPI2_SIMPLE_SGE_UNION   SGL;                /* 0x20 */
+} MPI2_SATA_PASSTHROUGH_REQUEST, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REQUEST,
+  Mpi2SataPassthroughRequest_t, MPI2_POINTER pMpi2SataPassthroughRequest_t;
+
+/* values for PassthroughFlags field */
+#define MPI2_SATA_PT_REQ_PT_FLAGS_EXECUTE_DIAG      (0x0100)
+#define MPI2_SATA_PT_REQ_PT_FLAGS_DMA               (0x0020)
+#define MPI2_SATA_PT_REQ_PT_FLAGS_PIO               (0x0010)
+#define MPI2_SATA_PT_REQ_PT_FLAGS_UNSPECIFIED_VU    (0x0004)
+#define MPI2_SATA_PT_REQ_PT_FLAGS_WRITE             (0x0002)
+#define MPI2_SATA_PT_REQ_PT_FLAGS_READ              (0x0001)
+
+/* values for SGLFlags field are in the SGL section of mpi2.h */
+
+
+/* SATA Passthrough Reply Message */
+typedef struct _MPI2_SATA_PASSTHROUGH_REPLY
+{
+    U16                     DevHandle;          /* 0x00 */
+    U8                      MsgLength;          /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     PassthroughFlags;   /* 0x04 */
+    U8                      SGLFlags;           /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved1;          /* 0x0A */
+    U8                      Reserved2;          /* 0x0C */
+    U8                      SASStatus;          /* 0x0D */
+    U16                     IOCStatus;          /* 0x0E */
+    U32                     IOCLogInfo;         /* 0x10 */
+    U8                      StatusFIS[20];      /* 0x14 */
+    U32                     StatusControlRegisters; /* 0x28 */
+    U32                     TransferCount;      /* 0x2C */
+} MPI2_SATA_PASSTHROUGH_REPLY, MPI2_POINTER PTR_MPI2_SATA_PASSTHROUGH_REPLY,
+  Mpi2SataPassthroughReply_t, MPI2_POINTER pMpi2SataPassthroughReply_t;
+
+/* values for SASStatus field are at the top of this file */
+
+
+/****************************************************************************
+*  SAS IO Unit Control messages
+****************************************************************************/
+
+/* SAS IO Unit Control Request Message */
+typedef struct _MPI2_SAS_IOUNIT_CONTROL_REQUEST
+{
+    U8                      Operation;          /* 0x00 */
+    U8                      Reserved1;          /* 0x01 */
+    U8                      ChainOffset;        /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     DevHandle;          /* 0x04 */
+    U8                      IOCParameter;       /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved3;          /* 0x0A */
+    U16                     Reserved4;          /* 0x0C */
+    U8                      PhyNum;             /* 0x0E */
+    U8                      PrimFlags;          /* 0x0F */
+    U32                     Primitive;          /* 0x10 */
+    U8                      LookupMethod;       /* 0x14 */
+    U8                      Reserved5;          /* 0x15 */
+    U16                     SlotNumber;         /* 0x16 */
+    U64                     LookupAddress;      /* 0x18 */
+    U32                     IOCParameterValue;  /* 0x20 */
+    U32                     Reserved7;          /* 0x24 */
+    U32                     Reserved8;          /* 0x28 */
+} MPI2_SAS_IOUNIT_CONTROL_REQUEST,
+  MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REQUEST,
+  Mpi2SasIoUnitControlRequest_t, MPI2_POINTER pMpi2SasIoUnitControlRequest_t;
+
+/* values for the Operation field */
+#define MPI2_SAS_OP_CLEAR_ALL_PERSISTENT        (0x02)
+#define MPI2_SAS_OP_PHY_LINK_RESET              (0x06)
+#define MPI2_SAS_OP_PHY_HARD_RESET              (0x07)
+#define MPI2_SAS_OP_PHY_CLEAR_ERROR_LOG         (0x08)
+#define MPI2_SAS_OP_SEND_PRIMITIVE              (0x0A)
+#define MPI2_SAS_OP_FORCE_FULL_DISCOVERY        (0x0B)
+#define MPI2_SAS_OP_TRANSMIT_PORT_SELECT_SIGNAL (0x0C)
+#define MPI2_SAS_OP_REMOVE_DEVICE               (0x0D)
+#define MPI2_SAS_OP_LOOKUP_MAPPING              (0x0E)
+#define MPI2_SAS_OP_SET_IOC_PARAMETER           (0x0F)
+#define MPI2_SAS_OP_PRODUCT_SPECIFIC_MIN        (0x80)
+
+/* values for the PrimFlags field */
+#define MPI2_SAS_PRIMFLAGS_SINGLE               (0x08)
+#define MPI2_SAS_PRIMFLAGS_TRIPLE               (0x02)
+#define MPI2_SAS_PRIMFLAGS_REDUNDANT            (0x01)
+
+/* values for the LookupMethod field */
+#define MPI2_SAS_LOOKUP_METHOD_SAS_ADDRESS          (0x01)
+#define MPI2_SAS_LOOKUP_METHOD_SAS_ENCLOSURE_SLOT   (0x02)
+#define MPI2_SAS_LOOKUP_METHOD_SAS_DEVICE_NAME      (0x03)
+
+
+/* SAS IO Unit Control Reply Message */
+typedef struct _MPI2_SAS_IOUNIT_CONTROL_REPLY
+{
+    U8                      Operation;          /* 0x00 */
+    U8                      Reserved1;          /* 0x01 */
+    U8                      MsgLength;          /* 0x02 */
+    U8                      Function;           /* 0x03 */
+    U16                     DevHandle;          /* 0x04 */
+    U8                      IOCParameter;       /* 0x06 */
+    U8                      MsgFlags;           /* 0x07 */
+    U8                      VP_ID;              /* 0x08 */
+    U8                      VF_ID;              /* 0x09 */
+    U16                     Reserved3;          /* 0x0A */
+    U16                     Reserved4;          /* 0x0C */
+    U16                     IOCStatus;          /* 0x0E */
+    U32                     IOCLogInfo;         /* 0x10 */
+} MPI2_SAS_IOUNIT_CONTROL_REPLY,
+  MPI2_POINTER PTR_MPI2_SAS_IOUNIT_CONTROL_REPLY,
+  Mpi2SasIoUnitControlReply_t, MPI2_POINTER pMpi2SasIoUnitControlReply_t;
+
+
+#endif
+
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_tool.h b/drivers/scsi/mpt2sas/mpi/mpi2_tool.h
new file mode 100644
index 0000000..2ff4e93
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_tool.h
@@ -0,0 +1,249 @@
+/*
+ *  Copyright (c) 2000-2008 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_tool.h
+ *          Title:  MPI diagnostic tool structures and definitions
+ *  Creation Date:  March 26, 2007
+ *
+ *    mpi2_tool.h Version:  02.00.02
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  12-18-07  02.00.01  Added Diagnostic Buffer Post and Diagnostic Release
+ *                      structures and defines.
+ *  02-29-08  02.00.02  Modified various names to make them 32-character unique.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_TOOL_H
+#define MPI2_TOOL_H
+
+/*****************************************************************************
+*
+*               Toolbox Messages
+*
+*****************************************************************************/
+
+/* defines for the Tools */
+#define MPI2_TOOLBOX_CLEAN_TOOL                     (0x00)
+#define MPI2_TOOLBOX_MEMORY_MOVE_TOOL               (0x01)
+#define MPI2_TOOLBOX_BEACON_TOOL                    (0x05)
+
+/****************************************************************************
+*  Toolbox reply
+****************************************************************************/
+
+typedef struct _MPI2_TOOLBOX_REPLY
+{
+    U8                      Tool;                       /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U16                     Reserved5;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+} MPI2_TOOLBOX_REPLY, MPI2_POINTER PTR_MPI2_TOOLBOX_REPLY,
+  Mpi2ToolboxReply_t, MPI2_POINTER pMpi2ToolboxReply_t;
+
+
+/****************************************************************************
+*  Toolbox Clean Tool request
+****************************************************************************/
+
+typedef struct _MPI2_TOOLBOX_CLEAN_REQUEST
+{
+    U8                      Tool;                       /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U32                     Flags;                      /* 0x0C */
+} MPI2_TOOLBOX_CLEAN_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_CLEAN_REQUEST,
+  Mpi2ToolboxCleanRequest_t, MPI2_POINTER pMpi2ToolboxCleanRequest_t;
+
+/* values for the Flags field */
+#define MPI2_TOOLBOX_CLEAN_BOOT_SERVICES            (0x80000000)
+#define MPI2_TOOLBOX_CLEAN_PERSIST_MANUFACT_PAGES   (0x40000000)
+#define MPI2_TOOLBOX_CLEAN_OTHER_PERSIST_PAGES      (0x20000000)
+#define MPI2_TOOLBOX_CLEAN_FW_CURRENT               (0x10000000)
+#define MPI2_TOOLBOX_CLEAN_FW_BACKUP                (0x08000000)
+#define MPI2_TOOLBOX_CLEAN_MEGARAID                 (0x02000000)
+#define MPI2_TOOLBOX_CLEAN_INITIALIZATION           (0x01000000)
+#define MPI2_TOOLBOX_CLEAN_FLASH                    (0x00000004)
+#define MPI2_TOOLBOX_CLEAN_SEEPROM                  (0x00000002)
+#define MPI2_TOOLBOX_CLEAN_NVSRAM                   (0x00000001)
+
+
+/****************************************************************************
+*  Toolbox Memory Move request
+****************************************************************************/
+
+typedef struct _MPI2_TOOLBOX_MEM_MOVE_REQUEST
+{
+    U8                      Tool;                       /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    MPI2_SGE_SIMPLE_UNION   SGL;                        /* 0x0C */
+} MPI2_TOOLBOX_MEM_MOVE_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_MEM_MOVE_REQUEST,
+  Mpi2ToolboxMemMoveRequest_t, MPI2_POINTER pMpi2ToolboxMemMoveRequest_t;
+
+
+/****************************************************************************
+*  Toolbox Beacon Tool request
+****************************************************************************/
+
+typedef struct _MPI2_TOOLBOX_BEACON_REQUEST
+{
+    U8                      Tool;                       /* 0x00 */
+    U8                      Reserved1;                  /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U8                      Reserved5;                  /* 0x0C */
+    U8                      PhysicalPort;               /* 0x0D */
+    U8                      Reserved6;                  /* 0x0E */
+    U8                      Flags;                      /* 0x0F */
+} MPI2_TOOLBOX_BEACON_REQUEST, MPI2_POINTER PTR_MPI2_TOOLBOX_BEACON_REQUEST,
+  Mpi2ToolboxBeaconRequest_t, MPI2_POINTER pMpi2ToolboxBeaconRequest_t;
+
+/* values for the Flags field */
+#define MPI2_TOOLBOX_FLAGS_BEACONMODE_OFF       (0x00)
+#define MPI2_TOOLBOX_FLAGS_BEACONMODE_ON        (0x01)
+
+
+/*****************************************************************************
+*
+*       Diagnostic Buffer Messages
+*
+*****************************************************************************/
+
+
+/****************************************************************************
+*  Diagnostic Buffer Post request
+****************************************************************************/
+
+typedef struct _MPI2_DIAG_BUFFER_POST_REQUEST
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      BufferType;                 /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U64                     BufferAddress;              /* 0x0C */
+    U32                     BufferLength;               /* 0x14 */
+    U32                     Reserved5;                  /* 0x18 */
+    U32                     Reserved6;                  /* 0x1C */
+    U32                     Flags;                      /* 0x20 */
+    U32                     ProductSpecific[23];        /* 0x24 */
+} MPI2_DIAG_BUFFER_POST_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REQUEST,
+  Mpi2DiagBufferPostRequest_t, MPI2_POINTER pMpi2DiagBufferPostRequest_t;
+
+/* values for the BufferType field */
+#define MPI2_DIAG_BUF_TYPE_TRACE                    (0x00)
+#define MPI2_DIAG_BUF_TYPE_SNAPSHOT                 (0x01)
+/* count of the number of buffer types */
+#define MPI2_DIAG_BUF_TYPE_COUNT                    (0x02)
+
+
+/****************************************************************************
+*  Diagnostic Buffer Post reply
+****************************************************************************/
+
+typedef struct _MPI2_DIAG_BUFFER_POST_REPLY
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      BufferType;                 /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U16                     Reserved5;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+    U32                     TransferLength;             /* 0x14 */
+} MPI2_DIAG_BUFFER_POST_REPLY, MPI2_POINTER PTR_MPI2_DIAG_BUFFER_POST_REPLY,
+  Mpi2DiagBufferPostReply_t, MPI2_POINTER pMpi2DiagBufferPostReply_t;
+
+
+/****************************************************************************
+*  Diagnostic Release request
+****************************************************************************/
+
+typedef struct _MPI2_DIAG_RELEASE_REQUEST
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      BufferType;                 /* 0x01 */
+    U8                      ChainOffset;                /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+} MPI2_DIAG_RELEASE_REQUEST, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REQUEST,
+  Mpi2DiagReleaseRequest_t, MPI2_POINTER pMpi2DiagReleaseRequest_t;
+
+
+/****************************************************************************
+*  Diagnostic Release reply
+****************************************************************************/
+
+typedef struct _MPI2_DIAG_RELEASE_REPLY
+{
+    U8                      Reserved1;                  /* 0x00 */
+    U8                      BufferType;                 /* 0x01 */
+    U8                      MsgLength;                  /* 0x02 */
+    U8                      Function;                   /* 0x03 */
+    U16                     Reserved2;                  /* 0x04 */
+    U8                      Reserved3;                  /* 0x06 */
+    U8                      MsgFlags;                   /* 0x07 */
+    U8                      VP_ID;                      /* 0x08 */
+    U8                      VF_ID;                      /* 0x09 */
+    U16                     Reserved4;                  /* 0x0A */
+    U16                     Reserved5;                  /* 0x0C */
+    U16                     IOCStatus;                  /* 0x0E */
+    U32                     IOCLogInfo;                 /* 0x10 */
+} MPI2_DIAG_RELEASE_REPLY, MPI2_POINTER PTR_MPI2_DIAG_RELEASE_REPLY,
+  Mpi2DiagReleaseReply_t, MPI2_POINTER pMpi2DiagReleaseReply_t;
+
+
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpi/mpi2_type.h b/drivers/scsi/mpt2sas/mpi/mpi2_type.h
new file mode 100644
index 0000000..cfde017
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpi/mpi2_type.h
@@ -0,0 +1,61 @@
+/*
+ *  Copyright (c) 2000-2007 LSI Corporation.
+ *
+ *
+ *           Name:  mpi2_type.h
+ *          Title:  MPI basic type definitions
+ *  Creation Date:  August 16, 2006
+ *
+ *    mpi2_type.h Version:  02.00.00
+ *
+ *  Version History
+ *  ---------------
+ *
+ *  Date      Version   Description
+ *  --------  --------  ------------------------------------------------------
+ *  04-30-07  02.00.00  Corresponds to Fusion-MPT MPI Specification Rev A.
+ *  --------------------------------------------------------------------------
+ */
+
+#ifndef MPI2_TYPE_H
+#define MPI2_TYPE_H
+
+
+/*******************************************************************************
+ * Define MPI2_POINTER if it hasn't already been defined. By default
+ * MPI2_POINTER is defined to be a near pointer. MPI2_POINTER can be defined as
+ * a far pointer by defining MPI2_POINTER as "far *" before this header file is
+ * included.
+ */
+#ifndef MPI2_POINTER
+#define MPI2_POINTER     *
+#endif
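+
+/*
+ * Example (illustrative only): the Linux driver keeps the near-pointer
+ * default above, but an environment that needs far pointers would compile
+ * with
+ *
+ *	#define MPI2_POINTER    far *
+ *
+ * before including any MPI2 header.
+ */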
+
+/* the basic types may have already been included by mpi_type.h */
+#ifndef MPI_TYPE_H
+/*****************************************************************************
+*
+*               Basic Types
+*
+*****************************************************************************/
+
+typedef u8 U8;
+typedef __le16 U16;
+typedef __le32 U32;
+typedef __le64 U64 __attribute__((aligned(4)));
+
+/*****************************************************************************
+*
+*               Pointer Types
+*
+*****************************************************************************/
+
+typedef U8      *PU8;
+typedef U16     *PU16;
+typedef U32     *PU32;
+typedef U64     *PU64;
+
+#endif
+
+#endif
+
diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.c b/drivers/scsi/mpt2sas/mpt2sas_base.c
new file mode 100644
index 0000000..52427a8
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_base.c
@@ -0,0 +1,3435 @@
+/*
+ * This is the Fusion MPT base driver providing common API layer interface
+ * for access to MPT (Message Passing Technology) firmware.
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_base.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/dma-mapping.h>
+#include <linux/sort.h>
+#include <linux/io.h>
+
+#include "mpt2sas_base.h"
+
+static MPT_CALLBACK	mpt_callbacks[MPT_MAX_CALLBACKS];
+
+#define FAULT_POLLING_INTERVAL 1000 /* in milliseconds */
+#define MPT2SAS_MAX_REQUEST_QUEUE 500 /* maximum controller queue depth */
+
+static int max_queue_depth = -1;
+module_param(max_queue_depth, int, 0);
+MODULE_PARM_DESC(max_queue_depth, " max controller queue depth ");
+
+static int max_sgl_entries = -1;
+module_param(max_sgl_entries, int, 0);
+MODULE_PARM_DESC(max_sgl_entries, " max sg entries ");
+
+static int msix_disable = -1;
+module_param(msix_disable, int, 0);
+MODULE_PARM_DESC(msix_disable, " disable msix routed interrupts (default=0)");
+
+/**
+ * _base_fault_reset_work - workq handling ioc fault conditions
+ * @work: input argument, used to derive ioc
+ * Context: sleep.
+ *
+ * Return nothing.
+ */
+static void
+_base_fault_reset_work(struct work_struct *work)
+{
+	struct MPT2SAS_ADAPTER *ioc =
+	    container_of(work, struct MPT2SAS_ADAPTER, fault_reset_work.work);
+	unsigned long	 flags;
+	u32 doorbell;
+	int rc;
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->ioc_reset_in_progress)
+		goto rearm_timer;
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	doorbell = mpt2sas_base_get_iocstate(ioc, 0);
+	if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
+		rc = mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+		printk(MPT2SAS_WARN_FMT "%s: hard reset: %s\n", ioc->name,
+		    __func__, (rc == 0) ? "success" : "failed");
+		doorbell = mpt2sas_base_get_iocstate(ioc, 0);
+		if ((doorbell & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT)
+			mpt2sas_base_fault_info(ioc, doorbell &
+			    MPI2_DOORBELL_DATA_MASK);
+	}
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+ rearm_timer:
+	if (ioc->fault_reset_work_q)
+		queue_delayed_work(ioc->fault_reset_work_q,
+		    &ioc->fault_reset_work,
+		    msecs_to_jiffies(FAULT_POLLING_INTERVAL));
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _base_sas_ioc_info - verbose translation of the ioc status
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @request_hdr: request mf
+ *
+ * Return nothing.
+ */
+static void
+_base_sas_ioc_info(struct MPT2SAS_ADAPTER *ioc, MPI2DefaultReply_t *mpi_reply,
+     MPI2RequestHeader_t *request_hdr)
+{
+	u16 ioc_status = le16_to_cpu(mpi_reply->IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	char *desc = NULL;
+	u16 frame_sz;
+	char *func_str = NULL;
+
+	/* SCSI_IO, RAID_PASS are handled from _scsih_scsi_ioc_info */
+	if (request_hdr->Function == MPI2_FUNCTION_SCSI_IO_REQUEST ||
+	    request_hdr->Function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH ||
+	    request_hdr->Function == MPI2_FUNCTION_EVENT_NOTIFICATION)
+		return;
+
+	switch (ioc_status) {
+
+/****************************************************************************
+*  Common IOCStatus values for all replies
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_INVALID_FUNCTION:
+		desc = "invalid function";
+		break;
+	case MPI2_IOCSTATUS_BUSY:
+		desc = "busy";
+		break;
+	case MPI2_IOCSTATUS_INVALID_SGL:
+		desc = "invalid sgl";
+		break;
+	case MPI2_IOCSTATUS_INTERNAL_ERROR:
+		desc = "internal error";
+		break;
+	case MPI2_IOCSTATUS_INVALID_VPID:
+		desc = "invalid vpid";
+		break;
+	case MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES:
+		desc = "insufficient resources";
+		break;
+	case MPI2_IOCSTATUS_INVALID_FIELD:
+		desc = "invalid field";
+		break;
+	case MPI2_IOCSTATUS_INVALID_STATE:
+		desc = "invalid state";
+		break;
+	case MPI2_IOCSTATUS_OP_STATE_NOT_SUPPORTED:
+		desc = "op state not supported";
+		break;
+
+/****************************************************************************
+*  Config IOCStatus values
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_CONFIG_INVALID_ACTION:
+		desc = "config invalid action";
+		break;
+	case MPI2_IOCSTATUS_CONFIG_INVALID_TYPE:
+		desc = "config invalid type";
+		break;
+	case MPI2_IOCSTATUS_CONFIG_INVALID_PAGE:
+		desc = "config invalid page";
+		break;
+	case MPI2_IOCSTATUS_CONFIG_INVALID_DATA:
+		desc = "config invalid data";
+		break;
+	case MPI2_IOCSTATUS_CONFIG_NO_DEFAULTS:
+		desc = "config no defaults";
+		break;
+	case MPI2_IOCSTATUS_CONFIG_CANT_COMMIT:
+		desc = "config cant commit";
+		break;
+
+/****************************************************************************
+*  SCSI IO Reply
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR:
+	case MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE:
+	case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
+	case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN:
+	case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN:
+	case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR:
+	case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR:
+	case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED:
+	case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
+	case MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
+	case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED:
+	case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED:
+		break;
+
+/****************************************************************************
+*  For use by SCSI Initiator and SCSI Target end-to-end data protection
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_EEDP_GUARD_ERROR:
+		desc = "eedp guard error";
+		break;
+	case MPI2_IOCSTATUS_EEDP_REF_TAG_ERROR:
+		desc = "eedp ref tag error";
+		break;
+	case MPI2_IOCSTATUS_EEDP_APP_TAG_ERROR:
+		desc = "eedp app tag error";
+		break;
+
+/****************************************************************************
+*  SCSI Target values
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_TARGET_INVALID_IO_INDEX:
+		desc = "target invalid io index";
+		break;
+	case MPI2_IOCSTATUS_TARGET_ABORTED:
+		desc = "target aborted";
+		break;
+	case MPI2_IOCSTATUS_TARGET_NO_CONN_RETRYABLE:
+		desc = "target no conn retryable";
+		break;
+	case MPI2_IOCSTATUS_TARGET_NO_CONNECTION:
+		desc = "target no connection";
+		break;
+	case MPI2_IOCSTATUS_TARGET_XFER_COUNT_MISMATCH:
+		desc = "target xfer count mismatch";
+		break;
+	case MPI2_IOCSTATUS_TARGET_DATA_OFFSET_ERROR:
+		desc = "target data offset error";
+		break;
+	case MPI2_IOCSTATUS_TARGET_TOO_MUCH_WRITE_DATA:
+		desc = "target too much write data";
+		break;
+	case MPI2_IOCSTATUS_TARGET_IU_TOO_SHORT:
+		desc = "target iu too short";
+		break;
+	case MPI2_IOCSTATUS_TARGET_ACK_NAK_TIMEOUT:
+		desc = "target ack nak timeout";
+		break;
+	case MPI2_IOCSTATUS_TARGET_NAK_RECEIVED:
+		desc = "target nak received";
+		break;
+
+/****************************************************************************
+*  Serial Attached SCSI values
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_SAS_SMP_REQUEST_FAILED:
+		desc = "smp request failed";
+		break;
+	case MPI2_IOCSTATUS_SAS_SMP_DATA_OVERRUN:
+		desc = "smp data overrun";
+		break;
+
+/****************************************************************************
+*  Diagnostic Buffer Post / Diagnostic Release values
+****************************************************************************/
+
+	case MPI2_IOCSTATUS_DIAGNOSTIC_RELEASED:
+		desc = "diagnostic released";
+		break;
+	default:
+		break;
+	}
+
+	if (!desc)
+		return;
+
+	switch (request_hdr->Function) {
+	case MPI2_FUNCTION_CONFIG:
+		frame_sz = sizeof(Mpi2ConfigRequest_t) + ioc->sge_size;
+		func_str = "config_page";
+		break;
+	case MPI2_FUNCTION_SCSI_TASK_MGMT:
+		frame_sz = sizeof(Mpi2SCSITaskManagementRequest_t);
+		func_str = "task_mgmt";
+		break;
+	case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
+		frame_sz = sizeof(Mpi2SasIoUnitControlRequest_t);
+		func_str = "sas_iounit_ctl";
+		break;
+	case MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR:
+		frame_sz = sizeof(Mpi2SepRequest_t);
+		func_str = "enclosure";
+		break;
+	case MPI2_FUNCTION_IOC_INIT:
+		frame_sz = sizeof(Mpi2IOCInitRequest_t);
+		func_str = "ioc_init";
+		break;
+	case MPI2_FUNCTION_PORT_ENABLE:
+		frame_sz = sizeof(Mpi2PortEnableRequest_t);
+		func_str = "port_enable";
+		break;
+	case MPI2_FUNCTION_SMP_PASSTHROUGH:
+		frame_sz = sizeof(Mpi2SmpPassthroughRequest_t) + ioc->sge_size;
+		func_str = "smp_passthru";
+		break;
+	default:
+		frame_sz = 32;
+		func_str = "unknown";
+		break;
+	}
+
+	printk(MPT2SAS_WARN_FMT "ioc_status: %s(0x%04x), request(0x%p),"
+	    " (%s)\n", ioc->name, desc, ioc_status, request_hdr, func_str);
+
+	_debug_dump_mf(request_hdr, frame_sz/4);
+}
+
+/**
+ * _base_display_event_data - verbose translation of firmware async events
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ *
+ * Return nothing.
+ */
+static void
+_base_display_event_data(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventNotificationReply_t *mpi_reply)
+{
+	char *desc = NULL;
+	u16 event;
+
+	if (!(ioc->logging_level & MPT_DEBUG_EVENTS))
+		return;
+
+	event = le16_to_cpu(mpi_reply->Event);
+
+	switch (event) {
+	case MPI2_EVENT_LOG_DATA:
+		desc = "Log Data";
+		break;
+	case MPI2_EVENT_STATE_CHANGE:
+		desc = "Status Change";
+		break;
+	case MPI2_EVENT_HARD_RESET_RECEIVED:
+		desc = "Hard Reset Received";
+		break;
+	case MPI2_EVENT_EVENT_CHANGE:
+		desc = "Event Change";
+		break;
+	case MPI2_EVENT_TASK_SET_FULL:
+		desc = "Task Set Full";
+		break;
+	case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE:
+		desc = "Device Status Change";
+		break;
+	case MPI2_EVENT_IR_OPERATION_STATUS:
+		desc = "IR Operation Status";
+		break;
+	case MPI2_EVENT_SAS_DISCOVERY:
+		desc =  "Discovery";
+		break;
+	case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE:
+		desc = "SAS Broadcast Primitive";
+		break;
+	case MPI2_EVENT_SAS_INIT_DEVICE_STATUS_CHANGE:
+		desc = "SAS Init Device Status Change";
+		break;
+	case MPI2_EVENT_SAS_INIT_TABLE_OVERFLOW:
+		desc = "SAS Init Table Overflow";
+		break;
+	case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
+		desc = "SAS Topology Change List";
+		break;
+	case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
+		desc = "SAS Enclosure Device Status Change";
+		break;
+	case MPI2_EVENT_IR_VOLUME:
+		desc = "IR Volume";
+		break;
+	case MPI2_EVENT_IR_PHYSICAL_DISK:
+		desc = "IR Physical Disk";
+		break;
+	case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
+		desc = "IR Configuration Change List";
+		break;
+	case MPI2_EVENT_LOG_ENTRY_ADDED:
+		desc = "Log Entry Added";
+		break;
+	}
+
+	if (!desc)
+		return;
+
+	printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, desc);
+}
+#endif
+
+/**
+ * _base_sas_log_info - verbose translation of firmware log info
+ * @ioc: per adapter object
+ * @log_info: log info
+ *
+ * Return nothing.
+ */
+static void
+_base_sas_log_info(struct MPT2SAS_ADAPTER *ioc , u32 log_info)
+{
+	union loginfo_type {
+		u32	loginfo;
+		struct {
+			u32	subcode:16;
+			u32	code:8;
+			u32	originator:4;
+			u32	bus_type:4;
+		} dw;
+	};
+	union loginfo_type sas_loginfo;
+	char *originator_str = NULL;
+
+	sas_loginfo.loginfo = log_info;
+	if (sas_loginfo.dw.bus_type != 3 /*SAS*/)
+		return;
+
+	/* eat the loginfos associated with task aborts */
+	if (ioc->ignore_loginfos && (log_info == 0x30050000 || log_info ==
+	    0x31140000 || log_info == 0x31130000))
+		return;
+
+	switch (sas_loginfo.dw.originator) {
+	case 0:
+		originator_str = "IOP";
+		break;
+	case 1:
+		originator_str = "PL";
+		break;
+	case 2:
+		originator_str = "IR";
+		break;
+	}
+
+	printk(MPT2SAS_WARN_FMT "log_info(0x%08x): originator(%s), "
+	    "code(0x%02x), sub_code(0x%04x)\n", ioc->name, log_info,
+	     originator_str, sas_loginfo.dw.code,
+	     sas_loginfo.dw.subcode);
+}
+
+/**
+ * mpt2sas_base_fault_info - verbose translation of firmware FAULT code
+ * @ioc: per adapter object
+ * @fault_code: fault code
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_fault_info(struct MPT2SAS_ADAPTER *ioc , u16 fault_code)
+{
+	printk(MPT2SAS_ERR_FMT "fault_state(0x%04x)!\n",
+	    ioc->name, fault_code);
+}
+
+/**
+ * _base_display_reply_info - display IOCStatus and LogInfo of a completed reply
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ *
+ * Return nothing.
+ */
+static void
+_base_display_reply_info(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID,
+    u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+	u16 ioc_status;
+
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if ((ioc_status & MPI2_IOCSTATUS_MASK) &&
+	    (ioc->logging_level & MPT_DEBUG_REPLY)) {
+		_base_sas_ioc_info(ioc , mpi_reply,
+		   mpt2sas_base_get_msg_frame(ioc, smid));
+	}
+#endif
+	if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)
+		_base_sas_log_info(ioc, le32_to_cpu(mpi_reply->IOCLogInfo));
+}
+
+/**
+ * mpt2sas_base_done - base internal command completion routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (mpi_reply && mpi_reply->Function == MPI2_FUNCTION_EVENT_ACK)
+		return;
+
+	if (ioc->base_cmds.status == MPT2_CMD_NOT_USED)
+		return;
+
+	ioc->base_cmds.status |= MPT2_CMD_COMPLETE;
+	if (mpi_reply) {
+		ioc->base_cmds.status |= MPT2_CMD_REPLY_VALID;
+		memcpy(ioc->base_cmds.reply, mpi_reply, mpi_reply->MsgLength*4);
+	}
+	ioc->base_cmds.status &= ~MPT2_CMD_PENDING;
+	complete(&ioc->base_cmds.done);
+}
+
+/**
+ * _base_async_event - main callback handler for firmware async events
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ *
+ * Return nothing.
+ */
+static void
+_base_async_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, u32 reply)
+{
+	Mpi2EventNotificationReply_t *mpi_reply;
+	Mpi2EventAckRequest_t *ack_request;
+	u16 smid;
+
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (!mpi_reply)
+		return;
+	if (mpi_reply->Function != MPI2_FUNCTION_EVENT_NOTIFICATION)
+		return;
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	_base_display_event_data(ioc, mpi_reply);
+#endif
+	if (!(mpi_reply->AckRequired & MPI2_EVENT_NOTIFICATION_ACK_REQUIRED))
+		goto out;
+	smid = mpt2sas_base_get_smid(ioc, ioc->base_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		goto out;
+	}
+
+	ack_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	memset(ack_request, 0, sizeof(Mpi2EventAckRequest_t));
+	ack_request->Function = MPI2_FUNCTION_EVENT_ACK;
+	ack_request->Event = mpi_reply->Event;
+	ack_request->EventContext = mpi_reply->EventContext;
+	ack_request->VF_ID = VF_ID;
+	mpt2sas_base_put_smid_default(ioc, smid, VF_ID);
+
+ out:
+
+	/* scsih callback handler */
+	mpt2sas_scsih_event_callback(ioc, VF_ID, reply);
+
+	/* ctl callback handler */
+	mpt2sas_ctl_event_callback(ioc, VF_ID, reply);
+}
+
+/**
+ * _base_mask_interrupts - disable interrupts
+ * @ioc: per adapter object
+ *
+ * Disabling ResetIRQ, Reply and Doorbell Interrupts
+ *
+ * Return nothing.
+ */
+static void
+_base_mask_interrupts(struct MPT2SAS_ADAPTER *ioc)
+{
+	u32 him_register;
+
+	ioc->mask_interrupts = 1;
+	him_register = readl(&ioc->chip->HostInterruptMask);
+	him_register |= MPI2_HIM_DIM + MPI2_HIM_RIM + MPI2_HIM_RESET_IRQ_MASK;
+	writel(him_register, &ioc->chip->HostInterruptMask);
+	readl(&ioc->chip->HostInterruptMask);
+}
+
+/**
+ * _base_unmask_interrupts - enable interrupts
+ * @ioc: per adapter object
+ *
+ * Enabling only Reply Interrupts
+ *
+ * Return nothing.
+ */
+static void
+_base_unmask_interrupts(struct MPT2SAS_ADAPTER *ioc)
+{
+	u32 him_register;
+
+	writel(0, &ioc->chip->HostInterruptStatus);
+	him_register = readl(&ioc->chip->HostInterruptMask);
+	him_register &= ~MPI2_HIM_RIM;
+	writel(him_register, &ioc->chip->HostInterruptMask);
+	ioc->mask_interrupts = 0;
+}
+
+/**
+ * _base_interrupt - MPT adapter (IOC) specific interrupt handler.
+ * @irq: irq number (not used)
+ * @bus_id: bus identifier cookie == pointer to MPT2SAS_ADAPTER structure
+ *
+ * Return IRQ_HANDLED if processed, else IRQ_NONE.
+ */
+static irqreturn_t
+_base_interrupt(int irq, void *bus_id)
+{
+	u32 post_index, post_index_next, completed_cmds;
+	u8 request_desript_type;
+	u16 smid;
+	u8 cb_idx;
+	u32 reply;
+	u8 VF_ID;
+	int i;
+	struct MPT2SAS_ADAPTER *ioc = bus_id;
+
+	if (ioc->mask_interrupts)
+		return IRQ_NONE;
+
+	post_index = ioc->reply_post_host_index;
+	request_desript_type = ioc->reply_post_free[post_index].
+	    Default.ReplyFlags & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
+	if (request_desript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
+		return IRQ_NONE;
+
+	completed_cmds = 0;
+	do {
+		if (ioc->reply_post_free[post_index].Words == ~0ULL)
+			goto out;
+		reply = 0;
+		cb_idx = 0xFF;
+		smid = le16_to_cpu(ioc->reply_post_free[post_index].
+		    Default.DescriptorTypeDependent1);
+		VF_ID = ioc->reply_post_free[post_index].
+		    Default.VF_ID;
+		if (request_desript_type ==
+		    MPI2_RPY_DESCRIPT_FLAGS_ADDRESS_REPLY) {
+			reply = le32_to_cpu(ioc->reply_post_free[post_index].
+			    AddressReply.ReplyFrameAddress);
+		} else if (request_desript_type ==
+		    MPI2_RPY_DESCRIPT_FLAGS_TARGET_COMMAND_BUFFER)
+			goto next;
+		else if (request_desript_type ==
+		    MPI2_RPY_DESCRIPT_FLAGS_TARGETASSIST_SUCCESS)
+			goto next;
+		if (smid)
+			cb_idx = ioc->scsi_lookup[smid - 1].cb_idx;
+		if (smid && cb_idx != 0xFF) {
+			mpt_callbacks[cb_idx](ioc, smid, VF_ID, reply);
+			if (reply)
+				_base_display_reply_info(ioc, smid, VF_ID,
+				    reply);
+			mpt2sas_base_free_smid(ioc, smid);
+		}
+		if (!smid)
+			_base_async_event(ioc, VF_ID, reply);
+
+		/* reply free queue handling */
+		if (reply) {
+			ioc->reply_free_host_index =
+			    (ioc->reply_free_host_index ==
+			    (ioc->reply_free_queue_depth - 1)) ?
+			    0 : ioc->reply_free_host_index + 1;
+			ioc->reply_free[ioc->reply_free_host_index] =
+			    cpu_to_le32(reply);
+			writel(ioc->reply_free_host_index,
+			    &ioc->chip->ReplyFreeHostIndex);
+			wmb();
+		}
+
+ next:
+		post_index_next = (post_index == (ioc->reply_post_queue_depth -
+		    1)) ? 0 : post_index + 1;
+		request_desript_type =
+		    ioc->reply_post_free[post_index_next].Default.ReplyFlags
+		    & MPI2_RPY_DESCRIPT_FLAGS_TYPE_MASK;
+		completed_cmds++;
+		if (request_desript_type == MPI2_RPY_DESCRIPT_FLAGS_UNUSED)
+			goto out;
+		post_index = post_index_next;
+	} while (1);
+
+ out:
+
+	if (!completed_cmds)
+		return IRQ_NONE;
+
+	/* reply post descriptor handling */
+	post_index_next = ioc->reply_post_host_index;
+	for (i = 0 ; i < completed_cmds; i++) {
+		post_index = post_index_next;
+		/* poison the reply post descriptor */
+		ioc->reply_post_free[post_index_next].Words = ~0ULL;
+		post_index_next = (post_index ==
+		    (ioc->reply_post_queue_depth - 1))
+		    ? 0 : post_index + 1;
+	}
+	ioc->reply_post_host_index = post_index_next;
+	writel(post_index_next, &ioc->chip->ReplyPostHostIndex);
+	wmb();
+	return IRQ_HANDLED;
+}
+
+/**
+ * mpt2sas_base_release_callback_handler - clear interrupt callback handler
+ * @cb_idx: callback index
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_release_callback_handler(u8 cb_idx)
+{
+	mpt_callbacks[cb_idx] = NULL;
+}
+
+/**
+ * mpt2sas_base_register_callback_handler - obtain index for the interrupt callback handler
+ * @cb_func: callback function
+ *
+ * Returns the callback index (cb_idx) assigned to cb_func.
+ */
+u8
+mpt2sas_base_register_callback_handler(MPT_CALLBACK cb_func)
+{
+	u8 cb_idx;
+
+	for (cb_idx = MPT_MAX_CALLBACKS-1; cb_idx; cb_idx--)
+		if (mpt_callbacks[cb_idx] == NULL)
+			break;
+
+	mpt_callbacks[cb_idx] = cb_func;
+	return cb_idx;
+}
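+
+/*
+ * Registration sketch (assumed names, for illustration only): a sub-module
+ * registers a completion handler matching the prototype of
+ * mpt2sas_base_done() and tags its requests with the returned index so that
+ * _base_interrupt() can route replies back to it, e.g.
+ *
+ *	static u8 example_cb_idx;	(hypothetical)
+ *
+ *	static void example_done(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+ *	    u8 VF_ID, u32 reply)
+ *	{
+ *		... handle the reply frame for this request ...
+ *	}
+ *
+ *	example_cb_idx = mpt2sas_base_register_callback_handler(example_done);
+ *	...
+ *	mpt2sas_base_release_callback_handler(example_cb_idx);
+ */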
+
+/**
+ * mpt2sas_base_initialize_callback_handler - initialize the interrupt callback handler
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_initialize_callback_handler(void)
+{
+	u8 cb_idx;
+
+	for (cb_idx = 0; cb_idx < MPT_MAX_CALLBACKS; cb_idx++)
+		mpt2sas_base_release_callback_handler(cb_idx);
+}
+
+/**
+ * mpt2sas_base_build_zero_len_sge - build zero length sg entry
+ * @ioc: per adapter object
+ * @paddr: virtual address for SGE
+ *
+ * Create a zero length scatter gather entry to ensure the IOC's hardware has
+ * something to use if the target device goes brain dead and tries
+ * to send data even when none is asked for.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_build_zero_len_sge(struct MPT2SAS_ADAPTER *ioc, void *paddr)
+{
+	u32 flags_length = (u32)((MPI2_SGE_FLAGS_LAST_ELEMENT |
+	    MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_END_OF_LIST |
+	    MPI2_SGE_FLAGS_SIMPLE_ELEMENT) <<
+	    MPI2_SGE_FLAGS_SHIFT);
+	ioc->base_add_sg_single(paddr, flags_length, -1);
+}
+
+/**
+ * _base_add_sg_single_32 - Place a simple 32 bit SGE at address pAddr.
+ * @paddr: virtual address for SGE
+ * @flags_length: SGE flags and data transfer length
+ * @dma_addr: Physical address
+ *
+ * Return nothing.
+ */
+static void
+_base_add_sg_single_32(void *paddr, u32 flags_length, dma_addr_t dma_addr)
+{
+	Mpi2SGESimple32_t *sgel = paddr;
+
+	flags_length |= (MPI2_SGE_FLAGS_32_BIT_ADDRESSING |
+	    MPI2_SGE_FLAGS_SYSTEM_ADDRESS) << MPI2_SGE_FLAGS_SHIFT;
+	sgel->FlagsLength = cpu_to_le32(flags_length);
+	sgel->Address = cpu_to_le32(dma_addr);
+}
+
+
+/**
+ * _base_add_sg_single_64 - Place a simple 64 bit SGE at address pAddr.
+ * @paddr: virtual address for SGE
+ * @flags_length: SGE flags and data transfer length
+ * @dma_addr: Physical address
+ *
+ * Return nothing.
+ */
+static void
+_base_add_sg_single_64(void *paddr, u32 flags_length, dma_addr_t dma_addr)
+{
+	Mpi2SGESimple64_t *sgel = paddr;
+
+	flags_length |= (MPI2_SGE_FLAGS_64_BIT_ADDRESSING |
+	    MPI2_SGE_FLAGS_SYSTEM_ADDRESS) << MPI2_SGE_FLAGS_SHIFT;
+	sgel->FlagsLength = cpu_to_le32(flags_length);
+	sgel->Address = cpu_to_le64(dma_addr);
+}
+
+#define convert_to_kb(x) ((x) << (PAGE_SHIFT - 10))
+
+/**
+ * _base_config_dma_addressing - set dma addressing
+ * @ioc: per adapter object
+ * @pdev: PCI device struct
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_config_dma_addressing(struct MPT2SAS_ADAPTER *ioc, struct pci_dev *pdev)
+{
+	struct sysinfo s;
+	char *desc = NULL;
+
+	if (sizeof(dma_addr_t) > 4) {
+		const uint64_t required_mask =
+		    dma_get_required_mask(&pdev->dev);
+		if ((required_mask > DMA_32BIT_MASK) && !pci_set_dma_mask(pdev,
+		    DMA_64BIT_MASK) && !pci_set_consistent_dma_mask(pdev,
+		    DMA_64BIT_MASK)) {
+			ioc->base_add_sg_single = &_base_add_sg_single_64;
+			ioc->sge_size = sizeof(Mpi2SGESimple64_t);
+			desc = "64";
+			goto out;
+		}
+	}
+
+	if (!pci_set_dma_mask(pdev, DMA_32BIT_MASK)
+	    && !pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK)) {
+		ioc->base_add_sg_single = &_base_add_sg_single_32;
+		ioc->sge_size = sizeof(Mpi2SGESimple32_t);
+		desc = "32";
+	} else
+		return -ENODEV;
+
+ out:
+	si_meminfo(&s);
+	printk(MPT2SAS_INFO_FMT "%s BIT PCI BUS DMA ADDRESSING SUPPORTED, "
+	    "total mem (%ld kB)\n", ioc->name, desc, convert_to_kb(s.totalram));
+
+	return 0;
+}
+
+/**
+ * _base_save_msix_table - backup msix vector table
+ * @ioc: per adapter object
+ *
+ * This addresses an erratum where a diag reset clears out the table.
+ */
+static void
+_base_save_msix_table(struct MPT2SAS_ADAPTER *ioc)
+{
+	int i;
+
+	if (!ioc->msix_enable || ioc->msix_table_backup == NULL)
+		return;
+
+	for (i = 0; i < ioc->msix_vector_count; i++)
+		ioc->msix_table_backup[i] = ioc->msix_table[i];
+}
+
+/**
+ * _base_restore_msix_table - this restores the msix vector table
+ * @ioc: per adapter object
+ *
+ */
+static void
+_base_restore_msix_table(struct MPT2SAS_ADAPTER *ioc)
+{
+	int i;
+
+	if (!ioc->msix_enable || ioc->msix_table_backup == NULL)
+		return;
+
+	for (i = 0; i < ioc->msix_vector_count; i++)
+		ioc->msix_table[i] = ioc->msix_table_backup[i];
+}
+
+/**
+ * _base_check_enable_msix - checks MSIX capability
+ * @ioc: per adapter object
+ *
+ * Check to see whether the card is MSIX capable, and set the number
+ * of available msix vectors
+ */
+static int
+_base_check_enable_msix(struct MPT2SAS_ADAPTER *ioc)
+{
+	int base;
+	u16 message_control;
+	u32 msix_table_offset;
+
+	base = pci_find_capability(ioc->pdev, PCI_CAP_ID_MSIX);
+	if (!base) {
+		dfailprintk(ioc, printk(MPT2SAS_INFO_FMT "msix not "
+		    "supported\n", ioc->name));
+		return -EINVAL;
+	}
+
+	/* get msix vector count */
+	pci_read_config_word(ioc->pdev, base + 2, &message_control);
+	ioc->msix_vector_count = (message_control & 0x3FF) + 1;
+
+	/* get msix table  */
+	pci_read_config_dword(ioc->pdev, base + 4, &msix_table_offset);
+	msix_table_offset &= 0xFFFFFFF8;
+	ioc->msix_table = (u32 *)((void *)ioc->chip + msix_table_offset);
+
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "msix is supported, "
+	    "vector_count(%d), table_offset(0x%08x), table(%p)\n", ioc->name,
+	    ioc->msix_vector_count, msix_table_offset, ioc->msix_table));
+	return 0;
+}
+
+/**
+ * _base_disable_msix - disables msix
+ * @ioc: per adapter object
+ *
+ */
+static void
+_base_disable_msix(struct MPT2SAS_ADAPTER *ioc)
+{
+	if (ioc->msix_enable) {
+		pci_disable_msix(ioc->pdev);
+		kfree(ioc->msix_table_backup);
+		ioc->msix_table_backup = NULL;
+		ioc->msix_enable = 0;
+	}
+}
+
+/**
+ * _base_enable_msix - enables msix, falls back to io_apic
+ * @ioc: per adapter object
+ *
+ */
+static int
+_base_enable_msix(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct msix_entry entries;
+	int r;
+	u8 try_msix = 0;
+
+	if (msix_disable == -1 || msix_disable == 0)
+		try_msix = 1;
+
+	if (!try_msix)
+		goto try_ioapic;
+
+	if (_base_check_enable_msix(ioc) != 0)
+		goto try_ioapic;
+
+	ioc->msix_table_backup = kcalloc(ioc->msix_vector_count,
+	    sizeof(u32), GFP_KERNEL);
+	if (!ioc->msix_table_backup) {
+		dfailprintk(ioc, printk(MPT2SAS_INFO_FMT "allocation for "
+		    "msix_table_backup failed!!!\n", ioc->name));
+		goto try_ioapic;
+	}
+
+	memset(&entries, 0, sizeof(struct msix_entry));
+	r = pci_enable_msix(ioc->pdev, &entries, 1);
+	if (r) {
+		dfailprintk(ioc, printk(MPT2SAS_INFO_FMT "pci_enable_msix "
+		    "failed (r=%d) !!!\n", ioc->name, r));
+		goto try_ioapic;
+	}
+
+	r = request_irq(entries.vector, _base_interrupt, IRQF_SHARED,
+	    ioc->name, ioc);
+	if (r) {
+		dfailprintk(ioc, printk(MPT2SAS_INFO_FMT "unable to allocate "
+		    "interrupt %d !!!\n", ioc->name, entries.vector));
+		pci_disable_msix(ioc->pdev);
+		goto try_ioapic;
+	}
+
+	ioc->pci_irq = entries.vector;
+	ioc->msix_enable = 1;
+	return 0;
+
+/* fall back to io_apic interrupt routing */
+ try_ioapic:
+
+	r = request_irq(ioc->pdev->irq, _base_interrupt, IRQF_SHARED,
+	    ioc->name, ioc);
+	if (r) {
+		printk(MPT2SAS_ERR_FMT "unable to allocate interrupt %d!\n",
+		    ioc->name, ioc->pdev->irq);
+		r = -EBUSY;
+		goto out_fail;
+	}
+
+	ioc->pci_irq = ioc->pdev->irq;
+	return 0;
+
+ out_fail:
+	return r;
+}
+
+/**
+ * mpt2sas_base_map_resources - map in controller resources (io/irq/memap)
+ * @ioc: per adapter object
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_base_map_resources(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct pci_dev *pdev = ioc->pdev;
+	u32 memap_sz;
+	u32 pio_sz;
+	int i, r = 0;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n",
+	    ioc->name, __func__));
+
+	ioc->bars = pci_select_bars(pdev, IORESOURCE_MEM);
+	if (pci_enable_device_mem(pdev)) {
+		printk(MPT2SAS_WARN_FMT "pci_enable_device_mem: "
+		    "failed\n", ioc->name);
+		return -ENODEV;
+	}
+
+
+	if (pci_request_selected_regions(pdev, ioc->bars,
+	    MPT2SAS_DRIVER_NAME)) {
+		printk(MPT2SAS_WARN_FMT "pci_request_selected_regions: "
+		    "failed\n", ioc->name);
+		r = -ENODEV;
+		goto out_fail;
+	}
+
+	pci_set_master(pdev);
+
+	if (_base_config_dma_addressing(ioc, pdev) != 0) {
+		printk(MPT2SAS_WARN_FMT "no suitable DMA mask for %s\n",
+		    ioc->name, pci_name(pdev));
+		r = -ENODEV;
+		goto out_fail;
+	}
+
+	for (i = 0, memap_sz = 0, pio_sz = 0 ; i < DEVICE_COUNT_RESOURCE; i++) {
+		if (pci_resource_flags(pdev, i) & PCI_BASE_ADDRESS_SPACE_IO) {
+			if (pio_sz)
+				continue;
+			ioc->pio_chip = pci_resource_start(pdev, i);
+			pio_sz = pci_resource_len(pdev, i);
+		} else {
+			if (memap_sz)
+				continue;
+			ioc->chip_phys = pci_resource_start(pdev, i);
+			memap_sz = pci_resource_len(pdev, i);
+			ioc->chip = ioremap(ioc->chip_phys, memap_sz);
+			if (ioc->chip == NULL) {
+				printk(MPT2SAS_ERR_FMT "unable to map adapter "
+				    "memory!\n", ioc->name);
+				r = -EINVAL;
+				goto out_fail;
+			}
+		}
+	}
+
+	pci_set_drvdata(pdev, ioc->shost);
+	_base_mask_interrupts(ioc);
+	r = _base_enable_msix(ioc);
+	if (r)
+		goto out_fail;
+
+	printk(MPT2SAS_INFO_FMT "%s: IRQ %d\n",
+	    ioc->name,  ((ioc->msix_enable) ? "PCI-MSI-X enabled" :
+	    "IO-APIC enabled"), ioc->pci_irq);
+	printk(MPT2SAS_INFO_FMT "iomem(0x%lx), mapped(0x%p), size(%d)\n",
+	    ioc->name, ioc->chip_phys, ioc->chip, memap_sz);
+	printk(MPT2SAS_INFO_FMT "ioport(0x%lx), size(%d)\n",
+	    ioc->name, ioc->pio_chip, pio_sz);
+
+	return 0;
+
+ out_fail:
+	if (ioc->chip_phys)
+		iounmap(ioc->chip);
+	ioc->chip_phys = 0;
+	ioc->pci_irq = -1;
+	pci_release_selected_regions(ioc->pdev, ioc->bars);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+	return r;
+}
+
+/**
+ * mpt2sas_base_get_msg_frame_dma - obtain request mf physical address
+ * @ioc: per adapter object
+ * @smid: system request message index(smid zero is invalid)
+ *
+ * Returns phys pointer to message frame.
+ */
+dma_addr_t
+mpt2sas_base_get_msg_frame_dma(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return ioc->request_dma + (smid * ioc->request_sz);
+}
+
+/**
+ * mpt2sas_base_get_msg_frame - obtain request mf pointer
+ * @ioc: per adapter object
+ * @smid: system request message index(smid zero is invalid)
+ *
+ * Returns virt pointer to message frame.
+ */
+void *
+mpt2sas_base_get_msg_frame(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return (void *)(ioc->request + (smid * ioc->request_sz));
+}
+
+/**
+ * mpt2sas_base_get_sense_buffer - obtain a sense buffer assigned to a mf request
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns virt pointer to sense buffer.
+ */
+void *
+mpt2sas_base_get_sense_buffer(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return (void *)(ioc->sense + ((smid - 1) * SCSI_SENSE_BUFFERSIZE));
+}
+
+/**
+ * mpt2sas_base_get_sense_buffer_dma - obtain a sense buffer assigned to a mf request
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns phys pointer to sense buffer.
+ */
+dma_addr_t
+mpt2sas_base_get_sense_buffer_dma(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return ioc->sense_dma + ((smid - 1) * SCSI_SENSE_BUFFERSIZE);
+}
+
+/**
+ * mpt2sas_base_get_reply_virt_addr - obtain reply frames virt address
+ * @ioc: per adapter object
+ * @phys_addr: lower 32 physical addr of the reply
+ *
+ * Converts 32bit lower physical addr into a virt address.
+ */
+void *
+mpt2sas_base_get_reply_virt_addr(struct MPT2SAS_ADAPTER *ioc, u32 phys_addr)
+{
+	if (!phys_addr)
+		return NULL;
+	return ioc->reply + (phys_addr - (u32)ioc->reply_dma);
+}
+
+/**
+ * mpt2sas_base_get_smid - obtain a free smid
+ * @ioc: per adapter object
+ * @cb_idx: callback index
+ *
+ * Returns smid (zero is invalid)
+ */
+u16
+mpt2sas_base_get_smid(struct MPT2SAS_ADAPTER *ioc, u8 cb_idx)
+{
+	unsigned long flags;
+	struct request_tracker *request;
+	u16 smid;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	if (list_empty(&ioc->free_list)) {
+		spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+		printk(MPT2SAS_ERR_FMT "%s: smid not available\n",
+		    ioc->name, __func__);
+		return 0;
+	}
+
+	request = list_entry(ioc->free_list.next,
+	    struct request_tracker, tracker_list);
+	request->cb_idx = cb_idx;
+	smid = request->smid;
+	list_del(&request->tracker_list);
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+	return smid;
+}
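+
+/*
+ * Illustrative sketch (not part of the driver flow): callers typically pair
+ * these helpers as follows, returning the smid later via
+ * mpt2sas_base_free_smid() once the request completes:
+ *
+ *	smid = mpt2sas_base_get_smid(ioc, cb_idx);
+ *	if (!smid)
+ *		return -EAGAIN;
+ *	mf = mpt2sas_base_get_msg_frame(ioc, smid);
+ *	... build the MPI2 request in mf ...
+ *	mpt2sas_base_put_smid_default(ioc, smid, vf_id);
+ */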
+
+/**
+ * mpt2sas_base_free_smid - put smid back on free_list
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_free_smid(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	ioc->scsi_lookup[smid - 1].cb_idx = 0xFF;
+	list_add_tail(&ioc->scsi_lookup[smid - 1].tracker_list,
+	    &ioc->free_list);
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+	/*
+	 * See the _wait_for_commands_to_complete() call with regard to this code.
+	 */
+	if (ioc->shost_recovery && ioc->pending_io_count) {
+		if (ioc->pending_io_count == 1)
+			wake_up(&ioc->reset_wq);
+		ioc->pending_io_count--;
+	}
+}
+
+/**
+ * _base_writeq - 64 bit write to MMIO
+ * @b: data payload
+ * @addr: address in MMIO space
+ * @writeq_lock: spin lock
+ *
+ * Glue for writing an atomic 64 bit word to MMIO. This special handling takes
+ * care of 32 bit environments, where it is not guaranteed that the entire
+ * word is sent in one transfer.
+ */
+#ifndef writeq
+static inline void _base_writeq(__u64 b, volatile void __iomem *addr,
+    spinlock_t *writeq_lock)
+{
+	unsigned long flags;
+	__u64 data_out = cpu_to_le64(b);
+
+	spin_lock_irqsave(writeq_lock, flags);
+	writel((u32)(data_out), addr);
+	writel((u32)(data_out >> 32), (addr + 4));
+	spin_unlock_irqrestore(writeq_lock, flags);
+}
+#else
+static inline void _base_writeq(__u64 b, volatile void __iomem *addr,
+    spinlock_t *writeq_lock)
+{
+	writeq(cpu_to_le64(b), addr);
+}
+#endif
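+
+/*
+ * Note on the 32 bit fallback above: the spin lock serializes the two writel()
+ * calls so the low and high dwords of concurrent descriptor posts cannot
+ * interleave; the IOC must see each 64 bit post as one contiguous pair.
+ */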
+
+/**
+ * mpt2sas_base_put_smid_scsi_io - send SCSI_IO request to firmware
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @vf_id: virtual function id
+ * @handle: device handle
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_put_smid_scsi_io(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 vf_id,
+    u16 handle)
+{
+	Mpi2RequestDescriptorUnion_t descriptor;
+	u64 *request = (u64 *)&descriptor;
+
+	descriptor.SCSIIO.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_SCSI_IO;
+	descriptor.SCSIIO.VF_ID = vf_id;
+	descriptor.SCSIIO.SMID = cpu_to_le16(smid);
+	descriptor.SCSIIO.DevHandle = cpu_to_le16(handle);
+	descriptor.SCSIIO.LMID = 0;
+	_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
+	    &ioc->scsi_lookup_lock);
+}
+
+/**
+ * mpt2sas_base_put_smid_hi_priority - send Task Management request to firmware
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @vf_id: virtual function id
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_put_smid_hi_priority(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    u8 vf_id)
+{
+	Mpi2RequestDescriptorUnion_t descriptor;
+	u64 *request = (u64 *)&descriptor;
+
+	descriptor.HighPriority.RequestFlags =
+	    MPI2_REQ_DESCRIPT_FLAGS_HIGH_PRIORITY;
+	descriptor.HighPriority.VF_ID = vf_id;
+	descriptor.HighPriority.SMID = cpu_to_le16(smid);
+	descriptor.HighPriority.LMID = 0;
+	descriptor.HighPriority.Reserved1 = 0;
+	_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
+	    &ioc->scsi_lookup_lock);
+}
+
+/**
+ * mpt2sas_base_put_smid_default - Default, primarily used for config pages
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @vf_id: virtual function id
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_put_smid_default(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 vf_id)
+{
+	Mpi2RequestDescriptorUnion_t descriptor;
+	u64 *request = (u64 *)&descriptor;
+
+	descriptor.Default.RequestFlags = MPI2_REQ_DESCRIPT_FLAGS_DEFAULT_TYPE;
+	descriptor.Default.VF_ID = vf_id;
+	descriptor.Default.SMID = cpu_to_le16(smid);
+	descriptor.Default.LMID = 0;
+	descriptor.Default.DescriptorTypeDependent = 0;
+	_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
+	    &ioc->scsi_lookup_lock);
+}
+
+/**
+ * mpt2sas_base_put_smid_target_assist - send Target Assist/Status to firmware
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @vf_id: virtual function id
+ * @io_index: value used to track the IO
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_put_smid_target_assist(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    u8 vf_id, u16 io_index)
+{
+	Mpi2RequestDescriptorUnion_t descriptor;
+	u64 *request = (u64 *)&descriptor;
+
+	descriptor.SCSITarget.RequestFlags =
+	    MPI2_REQ_DESCRIPT_FLAGS_SCSI_TARGET;
+	descriptor.SCSITarget.VF_ID = vf_id;
+	descriptor.SCSITarget.SMID = cpu_to_le16(smid);
+	descriptor.SCSITarget.LMID = 0;
+	descriptor.SCSITarget.IoIndex = cpu_to_le16(io_index);
+	_base_writeq(*request, &ioc->chip->RequestDescriptorPostLow,
+	    &ioc->scsi_lookup_lock);
+}
+
+/**
+ * _base_display_ioc_capabilities - Display the IOC's capabilities.
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+static void
+_base_display_ioc_capabilities(struct MPT2SAS_ADAPTER *ioc)
+{
+	int i = 0;
+	char desc[16];
+	u8 revision;
+	u32 iounit_pg1_flags;
+
+	pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision);
+	strncpy(desc, ioc->manu_pg0.ChipName, 16);
+	printk(MPT2SAS_INFO_FMT "%s: FWVersion(%02d.%02d.%02d.%02d), "
+	   "ChipRevision(0x%02x), BiosVersion(%02d.%02d.%02d.%02d)\n",
+	    ioc->name, desc,
+	   (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,
+	   (ioc->facts.FWVersion.Word & 0x00FF0000) >> 16,
+	   (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
+	   ioc->facts.FWVersion.Word & 0x000000FF,
+	   revision,
+	   (ioc->bios_pg3.BiosVersion & 0xFF000000) >> 24,
+	   (ioc->bios_pg3.BiosVersion & 0x00FF0000) >> 16,
+	   (ioc->bios_pg3.BiosVersion & 0x0000FF00) >> 8,
+	    ioc->bios_pg3.BiosVersion & 0x000000FF);
+
+	printk(MPT2SAS_INFO_FMT "Protocol=(", ioc->name);
+
+	if (ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_SCSI_INITIATOR) {
+		printk("Initiator");
+		i++;
+	}
+
+	if (ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_SCSI_TARGET) {
+		printk("%sTarget", i ? "," : "");
+		i++;
+	}
+
+	i = 0;
+	printk("), ");
+	printk("Capabilities=(");
+
+	if (ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID) {
+		printk("Raid");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_TLR) {
+		printk("%sTLR", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_MULTICAST) {
+		printk("%sMulticast", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_BIDIRECTIONAL_TARGET) {
+		printk("%sBIDI Target", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_EEDP) {
+		printk("%sEEDP", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER) {
+		printk("%sSnapshot Buffer", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER) {
+		printk("%sDiag Trace Buffer", i ? "," : "");
+		i++;
+	}
+
+	if (ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING) {
+		printk("%sTask Set Full", i ? "," : "");
+		i++;
+	}
+
+	iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags);
+	if (!(iounit_pg1_flags & MPI2_IOUNITPAGE1_NATIVE_COMMAND_Q_DISABLE)) {
+		printk("%sNCQ", i ? "," : "");
+		i++;
+	}
+
+	printk(")\n");
+}
+
+/**
+ * _base_static_config_pages - static start of day config pages
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+static void
+_base_static_config_pages(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	u32 iounit_pg1_flags;
+
+	mpt2sas_config_get_manufacturing_pg0(ioc, &mpi_reply, &ioc->manu_pg0);
+	mpt2sas_config_get_bios_pg2(ioc, &mpi_reply, &ioc->bios_pg2);
+	mpt2sas_config_get_bios_pg3(ioc, &mpi_reply, &ioc->bios_pg3);
+	mpt2sas_config_get_ioc_pg8(ioc, &mpi_reply, &ioc->ioc_pg8);
+	mpt2sas_config_get_iounit_pg0(ioc, &mpi_reply, &ioc->iounit_pg0);
+	mpt2sas_config_get_iounit_pg1(ioc, &mpi_reply, &ioc->iounit_pg1);
+	_base_display_ioc_capabilities(ioc);
+
+	/*
+	 * Enable task_set_full handling in iounit_pg1 when the
+	 * facts capabilities indicate that it is supported.
+	 */
+	iounit_pg1_flags = le32_to_cpu(ioc->iounit_pg1.Flags);
+	if ((ioc->facts.IOCCapabilities &
+	    MPI2_IOCFACTS_CAPABILITY_TASK_SET_FULL_HANDLING))
+		iounit_pg1_flags &=
+		    ~MPI2_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING;
+	else
+		iounit_pg1_flags |=
+		    MPI2_IOUNITPAGE1_DISABLE_TASK_SET_FULL_HANDLING;
+	ioc->iounit_pg1.Flags = cpu_to_le32(iounit_pg1_flags);
+	mpt2sas_config_set_iounit_pg1(ioc, &mpi_reply, ioc->iounit_pg1);
+}
+
+/**
+ * _base_release_memory_pools - release memory
+ * @ioc: per adapter object
+ *
+ * Free memory allocated from _base_allocate_memory_pools.
+ *
+ * Return nothing.
+ */
+static void
+_base_release_memory_pools(struct MPT2SAS_ADAPTER *ioc)
+{
+	dexitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	if (ioc->request) {
+		pci_free_consistent(ioc->pdev, ioc->request_dma_sz,
+		    ioc->request,  ioc->request_dma);
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT "request_pool(0x%p)"
+		    ": free\n", ioc->name, ioc->request));
+		ioc->request = NULL;
+	}
+
+	if (ioc->sense) {
+		pci_pool_free(ioc->sense_dma_pool, ioc->sense, ioc->sense_dma);
+		if (ioc->sense_dma_pool)
+			pci_pool_destroy(ioc->sense_dma_pool);
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT "sense_pool(0x%p)"
+		    ": free\n", ioc->name, ioc->sense));
+		ioc->sense = NULL;
+	}
+
+	if (ioc->reply) {
+		pci_pool_free(ioc->reply_dma_pool, ioc->reply, ioc->reply_dma);
+		if (ioc->reply_dma_pool)
+			pci_pool_destroy(ioc->reply_dma_pool);
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_pool(0x%p)"
+		     ": free\n", ioc->name, ioc->reply));
+		ioc->reply = NULL;
+	}
+
+	if (ioc->reply_free) {
+		pci_pool_free(ioc->reply_free_dma_pool, ioc->reply_free,
+		    ioc->reply_free_dma);
+		if (ioc->reply_free_dma_pool)
+			pci_pool_destroy(ioc->reply_free_dma_pool);
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_free_pool"
+		    "(0x%p): free\n", ioc->name, ioc->reply_free));
+		ioc->reply_free = NULL;
+	}
+
+	if (ioc->reply_post_free) {
+		pci_pool_free(ioc->reply_post_free_dma_pool,
+		    ioc->reply_post_free, ioc->reply_post_free_dma);
+		if (ioc->reply_post_free_dma_pool)
+			pci_pool_destroy(ioc->reply_post_free_dma_pool);
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT
+		    "reply_post_free_pool(0x%p): free\n", ioc->name,
+		    ioc->reply_post_free));
+		ioc->reply_post_free = NULL;
+	}
+
+	if (ioc->config_page) {
+		dexitprintk(ioc, printk(MPT2SAS_INFO_FMT
+		    "config_page(0x%p): free\n", ioc->name,
+		    ioc->config_page));
+		pci_free_consistent(ioc->pdev, ioc->config_page_sz,
+		    ioc->config_page, ioc->config_page_dma);
+	}
+
+	kfree(ioc->scsi_lookup);
+}
+
+/**
+ * _base_allocate_memory_pools - allocate start of day memory pools
+ * @ioc: per adapter object
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 success, anything else error
+ */
+static int
+_base_allocate_memory_pools(struct MPT2SAS_ADAPTER *ioc,  int sleep_flag)
+{
+	Mpi2IOCFactsReply_t *facts;
+	u32 queue_size, queue_diff;
+	u16 max_sge_elements;
+	u16 num_of_reply_frames;
+	u16 chains_needed_per_io;
+	u32 sz, total_sz;
+	u16 i;
+	u32 retry_sz;
+	u16 max_request_credit;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	retry_sz = 0;
+	facts = &ioc->facts;
+
+	/* command line tunables  for max sgl entries */
+	if (max_sgl_entries != -1) {
+		ioc->shost->sg_tablesize = (max_sgl_entries <
+		    MPT2SAS_SG_DEPTH) ? max_sgl_entries :
+		    MPT2SAS_SG_DEPTH;
+	} else {
+		ioc->shost->sg_tablesize = MPT2SAS_SG_DEPTH;
+	}
+
+	/* command line tunables  for max controller queue depth */
+	if (max_queue_depth != -1) {
+		max_request_credit = (max_queue_depth < facts->RequestCredit)
+		    ? max_queue_depth : facts->RequestCredit;
+	} else {
+		max_request_credit = (facts->RequestCredit >
+		    MPT2SAS_MAX_REQUEST_QUEUE) ? MPT2SAS_MAX_REQUEST_QUEUE :
+		    facts->RequestCredit;
+	}
+	ioc->request_depth = max_request_credit;
+
+	/* request frame size */
+	ioc->request_sz = facts->IOCRequestFrameSize * 4;
+
+	/* reply frame size */
+	ioc->reply_sz = facts->ReplyFrameSize * 4;
+
+ retry_allocation:
+	total_sz = 0;
+	/* calculate number of sg elements left over in the 1st frame */
+	max_sge_elements = ioc->request_sz - ((sizeof(Mpi2SCSIIORequest_t) -
+	    sizeof(Mpi2SGEIOUnion_t)) + ioc->sge_size);
+	ioc->max_sges_in_main_message = max_sge_elements/ioc->sge_size;
+
+	/* now do the same for a chain buffer */
+	max_sge_elements = ioc->request_sz - ioc->sge_size;
+	ioc->max_sges_in_chain_message = max_sge_elements/ioc->sge_size;
+
+	ioc->chain_offset_value_for_main_message =
+	    ((sizeof(Mpi2SCSIIORequest_t) - sizeof(Mpi2SGEIOUnion_t)) +
+	     (ioc->max_sges_in_chain_message * ioc->sge_size)) / 4;
+
+	/*
+	 *  MPT2SAS_SG_DEPTH = CONFIG_FUSION_MAX_SGE
+	 */
+	chains_needed_per_io = ((ioc->shost->sg_tablesize -
+	   ioc->max_sges_in_main_message)/ioc->max_sges_in_chain_message)
+	    + 1;
+	if (chains_needed_per_io > facts->MaxChainDepth) {
+		chains_needed_per_io = facts->MaxChainDepth;
+		ioc->shost->sg_tablesize = min_t(u16,
+		ioc->max_sges_in_main_message + (ioc->max_sges_in_chain_message
+		* chains_needed_per_io), ioc->shost->sg_tablesize);
+	}
+	ioc->chains_needed_per_io = chains_needed_per_io;
+
+	/* reply free queue sizing - allow extra room for events */
+	num_of_reply_frames = ioc->request_depth + 32;
+
+	/* the number of reply frames can't be a multiple of 16 */
+	/* if it is, decrease the number of reply frames by 1 */
+	if (!(num_of_reply_frames % 16))
+		num_of_reply_frames--;
+
+	/* calculate the number of reply free queue entries
+	 * (must be a multiple of 16)
+	 */
+
+	/* (we know num_of_reply_frames is not a multiple of 16) */
+	queue_size = num_of_reply_frames;
+	queue_size += 16 - (queue_size % 16);
+	ioc->reply_free_queue_depth = queue_size;
+
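+	/*
+	 * Worked example (hypothetical numbers): a request_depth of 500 gives
+	 * num_of_reply_frames = 532; 532 is not a multiple of 16, so
+	 * 16 - (532 % 16) = 12 entries of padding bring the reply free queue
+	 * depth to 544.
+	 */
+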
+	/* reply descriptor post queue sizing */
+	/* this size should be the number of request frames + number of reply
+	 * frames
+	 */
+
+	queue_size = ioc->request_depth + num_of_reply_frames + 1;
+	/* round up to a multiple of 16 */
+	if (queue_size % 16)
+		queue_size += 16 - (queue_size % 16);
+
+	/* check against IOC maximum reply post queue depth */
+	if (queue_size > facts->MaxReplyDescriptorPostQueueDepth) {
+		queue_diff = queue_size -
+		    facts->MaxReplyDescriptorPostQueueDepth;
+
+		/* round queue_diff up to multiple of 16 */
+		if (queue_diff % 16)
+			queue_diff += 16 - (queue_diff % 16);
+
+		/* adjust request_depth, reply_free_queue_depth,
+		 * and queue_size
+		 */
+		ioc->request_depth -= queue_diff;
+		ioc->reply_free_queue_depth -= queue_diff;
+		queue_size -= queue_diff;
+	}
+	ioc->reply_post_queue_depth = queue_size;
+
+	/* max scsi host queue depth */
+	ioc->shost->can_queue = ioc->request_depth - INTERNAL_CMDS_COUNT;
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "scsi host queue: depth"
+	    "(%d)\n", ioc->name, ioc->shost->can_queue));
+
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "scatter gather: "
+	    "sge_in_main_msg(%d), sge_per_chain(%d), sge_per_io(%d), "
+	    "chains_per_io(%d)\n", ioc->name, ioc->max_sges_in_main_message,
+	    ioc->max_sges_in_chain_message, ioc->shost->sg_tablesize,
+	    ioc->chains_needed_per_io));
+
+	/* contiguous pool for request and chains, 16 byte align, one extra
+	 * frame for smid=0
+	 */
+	ioc->chain_depth = ioc->chains_needed_per_io * ioc->request_depth;
+	sz = ((ioc->request_depth + 1 + ioc->chain_depth) * ioc->request_sz);
+
+	ioc->request_dma_sz = sz;
+	ioc->request = pci_alloc_consistent(ioc->pdev, sz, &ioc->request_dma);
+	if (!ioc->request) {
+		printk(MPT2SAS_ERR_FMT "request pool: pci_alloc_consistent "
+		    "failed: req_depth(%d), chains_per_io(%d), frame_sz(%d), "
+		    "total(%d kB)\n", ioc->name, ioc->request_depth,
+		    ioc->chains_needed_per_io, ioc->request_sz, sz/1024);
+		if (ioc->request_depth < MPT2SAS_SAS_QUEUE_DEPTH)
+			goto out;
+		retry_sz += 64;
+		ioc->request_depth = max_request_credit - retry_sz;
+		goto retry_allocation;
+	}
+
+	if (retry_sz)
+		printk(MPT2SAS_ERR_FMT "request pool: pci_alloc_consistent "
+		    "succeeded: req_depth(%d), chains_per_io(%d), frame_sz(%d), "
+		    "total(%d kB)\n", ioc->name, ioc->request_depth,
+		    ioc->chains_needed_per_io, ioc->request_sz, sz/1024);
+
+	ioc->chain = ioc->request + ((ioc->request_depth + 1) *
+	    ioc->request_sz);
+	ioc->chain_dma = ioc->request_dma + ((ioc->request_depth + 1) *
+	    ioc->request_sz);
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "request pool(0x%p): "
+	    "depth(%d), frame_size(%d), pool_size(%d kB)\n", ioc->name,
+	    ioc->request, ioc->request_depth, ioc->request_sz,
+	    ((ioc->request_depth + 1) * ioc->request_sz)/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "chain pool(0x%p): depth"
+	    "(%d), frame_size(%d), pool_size(%d kB)\n", ioc->name, ioc->chain,
+	    ioc->chain_depth, ioc->request_sz, ((ioc->chain_depth *
+	    ioc->request_sz))/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "request pool: dma(0x%llx)\n",
+	    ioc->name, (unsigned long long) ioc->request_dma));
+	total_sz += sz;
+
+	ioc->scsi_lookup = kcalloc(ioc->request_depth,
+	    sizeof(struct request_tracker), GFP_KERNEL);
+	if (!ioc->scsi_lookup) {
+		printk(MPT2SAS_ERR_FMT "scsi_lookup: kcalloc failed\n",
+		    ioc->name);
+		goto out;
+	}
+
+	/* initialize the smid for each scsi_lookup entry */
+	for (i = 0; i < ioc->request_depth; i++)
+		ioc->scsi_lookup[i].smid = i + 1;
+
+	/* sense buffers, 4 byte align */
+	sz = ioc->request_depth * SCSI_SENSE_BUFFERSIZE;
+	ioc->sense_dma_pool = pci_pool_create("sense pool", ioc->pdev, sz, 4,
+	    0);
+	if (!ioc->sense_dma_pool) {
+		printk(MPT2SAS_ERR_FMT "sense pool: pci_pool_create failed\n",
+		    ioc->name);
+		goto out;
+	}
+	ioc->sense = pci_pool_alloc(ioc->sense_dma_pool , GFP_KERNEL,
+	    &ioc->sense_dma);
+	if (!ioc->sense) {
+		printk(MPT2SAS_ERR_FMT "sense pool: pci_pool_alloc failed\n",
+		    ioc->name);
+		goto out;
+	}
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT
+	    "sense pool(0x%p): depth(%d), element_size(%d), pool_size"
+	    "(%d kB)\n", ioc->name, ioc->sense, ioc->request_depth,
+	    SCSI_SENSE_BUFFERSIZE, sz/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "sense_dma(0x%llx)\n",
+	    ioc->name, (unsigned long long)ioc->sense_dma));
+	total_sz += sz;
+
+	/* reply pool, 4 byte align */
+	sz = ioc->reply_free_queue_depth * ioc->reply_sz;
+	ioc->reply_dma_pool = pci_pool_create("reply pool", ioc->pdev, sz, 4,
+	    0);
+	if (!ioc->reply_dma_pool) {
+		printk(MPT2SAS_ERR_FMT "reply pool: pci_pool_create failed\n",
+		    ioc->name);
+		goto out;
+	}
+	ioc->reply = pci_pool_alloc(ioc->reply_dma_pool , GFP_KERNEL,
+	    &ioc->reply_dma);
+	if (!ioc->reply) {
+		printk(MPT2SAS_ERR_FMT "reply pool: pci_pool_alloc failed\n",
+		    ioc->name);
+		goto out;
+	}
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply pool(0x%p): depth"
+	    "(%d), frame_size(%d), pool_size(%d kB)\n", ioc->name, ioc->reply,
+	    ioc->reply_free_queue_depth, ioc->reply_sz, sz/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_dma(0x%llx)\n",
+	    ioc->name, (unsigned long long)ioc->reply_dma));
+	total_sz += sz;
+
+	/* reply free queue, 16 byte align */
+	sz = ioc->reply_free_queue_depth * 4;
+	ioc->reply_free_dma_pool = pci_pool_create("reply_free pool",
+	    ioc->pdev, sz, 16, 0);
+	if (!ioc->reply_free_dma_pool) {
+		printk(MPT2SAS_ERR_FMT "reply_free pool: pci_pool_create "
+		    "failed\n", ioc->name);
+		goto out;
+	}
+	ioc->reply_free = pci_pool_alloc(ioc->reply_free_dma_pool , GFP_KERNEL,
+	    &ioc->reply_free_dma);
+	if (!ioc->reply_free) {
+		printk(MPT2SAS_ERR_FMT "reply_free pool: pci_pool_alloc "
+		    "failed\n", ioc->name);
+		goto out;
+	}
+	memset(ioc->reply_free, 0, sz);
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_free pool(0x%p): "
+	    "depth(%d), element_size(%d), pool_size(%d kB)\n", ioc->name,
+	    ioc->reply_free, ioc->reply_free_queue_depth, 4, sz/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_free_dma"
+	    "(0x%llx)\n", ioc->name, (unsigned long long)ioc->reply_free_dma));
+	total_sz += sz;
+
+	/* reply post queue, 16 byte align */
+	sz = ioc->reply_post_queue_depth * sizeof(Mpi2DefaultReplyDescriptor_t);
+	ioc->reply_post_free_dma_pool = pci_pool_create("reply_post_free pool",
+	    ioc->pdev, sz, 16, 0);
+	if (!ioc->reply_post_free_dma_pool) {
+		printk(MPT2SAS_ERR_FMT "reply_post_free pool: pci_pool_create "
+		    "failed\n", ioc->name);
+		goto out;
+	}
+	ioc->reply_post_free = pci_pool_alloc(ioc->reply_post_free_dma_pool ,
+	    GFP_KERNEL, &ioc->reply_post_free_dma);
+	if (!ioc->reply_post_free) {
+		printk(MPT2SAS_ERR_FMT "reply_post_free pool: pci_pool_alloc "
+		    "failed\n", ioc->name);
+		goto out;
+	}
+	memset(ioc->reply_post_free, 0, sz);
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply post free pool"
+	    "(0x%p): depth(%d), element_size(%d), pool_size(%d kB)\n",
+	    ioc->name, ioc->reply_post_free, ioc->reply_post_queue_depth, 8,
+	    sz/1024));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "reply_post_free_dma = "
+	    "(0x%llx)\n", ioc->name, (unsigned long long)
+	    ioc->reply_post_free_dma));
+	total_sz += sz;
+
+	ioc->config_page_sz = 512;
+	ioc->config_page = pci_alloc_consistent(ioc->pdev,
+	    ioc->config_page_sz, &ioc->config_page_dma);
+	if (!ioc->config_page) {
+		printk(MPT2SAS_ERR_FMT "config page: pci_pool_alloc "
+		    "failed\n", ioc->name);
+		goto out;
+	}
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "config page(0x%p): size"
+	    "(%d)\n", ioc->name, ioc->config_page, ioc->config_page_sz));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "config_page_dma"
+	    "(0x%llx)\n", ioc->name, (unsigned long long)ioc->config_page_dma));
+	total_sz += ioc->config_page_sz;
+
+	printk(MPT2SAS_INFO_FMT "Allocated physical memory: size(%d kB)\n",
+	    ioc->name, total_sz/1024);
+	printk(MPT2SAS_INFO_FMT "Current Controller Queue Depth(%d), "
+	    "Max Controller Queue Depth(%d)\n",
+	    ioc->name, ioc->shost->can_queue, facts->RequestCredit);
+	printk(MPT2SAS_INFO_FMT "Scatter Gather Elements per IO(%d)\n",
+	    ioc->name, ioc->shost->sg_tablesize);
+	return 0;
+
+ out:
+	_base_release_memory_pools(ioc);
+	return -ENOMEM;
+}
+
+/**
+ * mpt2sas_base_get_iocstate - Get the current state of a MPT adapter.
+ * @ioc: Pointer to MPT2SAS_ADAPTER structure
+ * @cooked: Request raw or cooked IOC state
+ *
+ * Returns all IOC Doorbell register bits if cooked==0, else just the
+ * Doorbell bits in MPI2_IOC_STATE_MASK.
+ */
+u32
+mpt2sas_base_get_iocstate(struct MPT2SAS_ADAPTER *ioc, int cooked)
+{
+	u32 s, sc;
+
+	s = readl(&ioc->chip->Doorbell);
+	sc = s & MPI2_IOC_STATE_MASK;
+	return cooked ? sc : s;
+}
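+
+/*
+ * Illustrative use (sketch): with cooked != 0 only the MPI2_IOC_STATE_MASK
+ * bits are returned, so callers can compare directly, e.g.:
+ *
+ *	if (mpt2sas_base_get_iocstate(ioc, 1) == MPI2_IOC_STATE_OPERATIONAL)
+ *		... the controller is accepting requests ...
+ */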
+
+/**
+ * _base_wait_on_iocstate - waiting on a particular ioc state
+ * @ioc: per adapter object
+ * @ioc_state: controller state { READY, OPERATIONAL, or RESET }
+ * @timeout: timeout in seconds
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_wait_on_iocstate(struct MPT2SAS_ADAPTER *ioc, u32 ioc_state, int timeout,
+    int sleep_flag)
+{
+	u32 count, cntdn;
+	u32 current_state;
+
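+	/*
+	 * Polling cadence (for illustration): CAN_SLEEP polls every 1 ms for
+	 * 1000 * timeout iterations, NO_SLEEP every 500 us for 2000 * timeout
+	 * iterations, so either loop spans roughly @timeout seconds.
+	 */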
+	count = 0;
+	cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout;
+	do {
+		current_state = mpt2sas_base_get_iocstate(ioc, 1);
+		if (current_state == ioc_state)
+			return 0;
+		if (count && current_state == MPI2_IOC_STATE_FAULT)
+			break;
+		if (sleep_flag == CAN_SLEEP)
+			msleep(1);
+		else
+			udelay(500);
+		count++;
+	} while (--cntdn);
+
+	return current_state;
+}
+
+/**
+ * _base_wait_for_doorbell_int - waiting for controller interrupt (generated by
+ * a write to the doorbell)
+ * @ioc: per adapter object
+ * @timeout: timeout in seconds
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ *
+ * Notes: MPI2_HIS_IOC2SYS_DB_STATUS - set to one when IOC writes to doorbell.
+ */
+static int
+_base_wait_for_doorbell_int(struct MPT2SAS_ADAPTER *ioc, int timeout,
+    int sleep_flag)
+{
+	u32 cntdn, count;
+	u32 int_status;
+
+	count = 0;
+	cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout;
+	do {
+		int_status = readl(&ioc->chip->HostInterruptStatus);
+		if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
+			dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+			    "successful count(%d), timeout(%d)\n", ioc->name,
+			    __func__, count, timeout));
+			return 0;
+		}
+		if (sleep_flag == CAN_SLEEP)
+			msleep(1);
+		else
+			udelay(500);
+		count++;
+	} while (--cntdn);
+
+	printk(MPT2SAS_ERR_FMT "%s: failed due to timeout count(%d), "
+	    "int_status(%x)!\n", ioc->name, __func__, count, int_status);
+	return -EFAULT;
+}
+
+/**
+ * _base_wait_for_doorbell_ack - waiting for controller to read the doorbell.
+ * @ioc: per adapter object
+ * @timeout: timeout in second
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ *
+ * Notes: MPI2_HIS_SYS2IOC_DB_STATUS - set to one when host writes to
+ * doorbell.
+ */
+static int
+_base_wait_for_doorbell_ack(struct MPT2SAS_ADAPTER *ioc, int timeout,
+    int sleep_flag)
+{
+	u32 cntdn, count;
+	u32 int_status;
+	u32 doorbell;
+
+	count = 0;
+	cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout;
+	do {
+		int_status = readl(&ioc->chip->HostInterruptStatus);
+		if (!(int_status & MPI2_HIS_SYS2IOC_DB_STATUS)) {
+			dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+			    "successful count(%d), timeout(%d)\n", ioc->name,
+			    __func__, count, timeout));
+			return 0;
+		} else if (int_status & MPI2_HIS_IOC2SYS_DB_STATUS) {
+			doorbell = readl(&ioc->chip->Doorbell);
+			if ((doorbell & MPI2_IOC_STATE_MASK) ==
+			    MPI2_IOC_STATE_FAULT) {
+				mpt2sas_base_fault_info(ioc , doorbell);
+				return -EFAULT;
+			}
+		} else if (int_status == 0xFFFFFFFF)
+			goto out;
+
+		if (sleep_flag == CAN_SLEEP)
+			msleep(1);
+		else
+			udelay(500);
+		count++;
+	} while (--cntdn);
+
+ out:
+	printk(MPT2SAS_ERR_FMT "%s: failed due to timeout count(%d), "
+	    "int_status(%x)!\n", ioc->name, __func__, count, int_status);
+	return -EFAULT;
+}
+
+/**
+ * _base_wait_for_doorbell_not_used - waiting for doorbell to not be in use
+ * @ioc: per adapter object
+ * @timeout: timeout in seconds
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ *
+ */
+static int
+_base_wait_for_doorbell_not_used(struct MPT2SAS_ADAPTER *ioc, int timeout,
+    int sleep_flag)
+{
+	u32 cntdn, count;
+	u32 doorbell_reg;
+
+	count = 0;
+	cntdn = (sleep_flag == CAN_SLEEP) ? 1000*timeout : 2000*timeout;
+	do {
+		doorbell_reg = readl(&ioc->chip->Doorbell);
+		if (!(doorbell_reg & MPI2_DOORBELL_USED)) {
+			dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+			    "successful count(%d), timeout(%d)\n", ioc->name,
+			    __func__, count, timeout));
+			return 0;
+		}
+		if (sleep_flag == CAN_SLEEP)
+			msleep(1);
+		else
+			udelay(500);
+		count++;
+	} while (--cntdn);
+
+	printk(MPT2SAS_ERR_FMT "%s: failed due to timeout count(%d), "
+	    "doorbell_reg(%x)!\n", ioc->name, __func__, count, doorbell_reg);
+	return -EFAULT;
+}
+
+/**
+ * _base_send_ioc_reset - send doorbell reset
+ * @ioc: per adapter object
+ * @reset_type: currently only supports: MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET
+ * @timeout: timeout in seconds
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_send_ioc_reset(struct MPT2SAS_ADAPTER *ioc, u8 reset_type, int timeout,
+    int sleep_flag)
+{
+	u32 ioc_state;
+	int r = 0;
+
+	if (reset_type != MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET) {
+		printk(MPT2SAS_ERR_FMT "%s: unknown reset_type\n",
+		    ioc->name, __func__);
+		return -EFAULT;
+	}
+
+	if (!(ioc->facts.IOCCapabilities &
+	   MPI2_IOCFACTS_CAPABILITY_EVENT_REPLAY))
+		return -EFAULT;
+
+	printk(MPT2SAS_INFO_FMT "sending message unit reset !!\n", ioc->name);
+
+	writel(reset_type << MPI2_DOORBELL_FUNCTION_SHIFT,
+	    &ioc->chip->Doorbell);
+	if ((_base_wait_for_doorbell_ack(ioc, 15, sleep_flag))) {
+		r = -EFAULT;
+		goto out;
+	}
+	ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY,
+	    timeout, sleep_flag);
+	if (ioc_state) {
+		printk(MPT2SAS_ERR_FMT "%s: failed going to ready state "
+		    " (ioc_state=0x%x)\n", ioc->name, __func__, ioc_state);
+		r = -EFAULT;
+		goto out;
+	}
+ out:
+	printk(MPT2SAS_INFO_FMT "message unit reset: %s\n",
+	    ioc->name, ((r == 0) ? "SUCCESS" : "FAILED"));
+	return r;
+}
+
+/**
+ * _base_handshake_req_reply_wait - send request through the doorbell interface
+ * @ioc: per adapter object
+ * @request_bytes: request length
+ * @request: pointer to the request payload
+ * @reply_bytes: reply length
+ * @reply: pointer to reply payload
+ * @timeout: timeout in seconds
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_handshake_req_reply_wait(struct MPT2SAS_ADAPTER *ioc, int request_bytes,
+    u32 *request, int reply_bytes, u16 *reply, int timeout, int sleep_flag)
+{
+	MPI2DefaultReply_t *default_reply = (MPI2DefaultReply_t *)reply;
+	int i;
+	u8 failed;
+	u16 dummy;
+	u32 *mfp;
+
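+	/*
+	 * Handshake outline (summary of the steps below): make sure the
+	 * doorbell is idle, post the handshake function and dword count,
+	 * write the request one dword at a time waiting for an ack after
+	 * each, then read the reply back one 16 bit word at a time, acking
+	 * each word by clearing the host interrupt status register.
+	 */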
+	/* make sure doorbell is not in use */
+	if ((readl(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
+		printk(MPT2SAS_ERR_FMT "doorbell is in use "
+		    " (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+
+	/* clear pending doorbell interrupts from previous state changes */
+	if (readl(&ioc->chip->HostInterruptStatus) &
+	    MPI2_HIS_IOC2SYS_DB_STATUS)
+		writel(0, &ioc->chip->HostInterruptStatus);
+
+	/* send message to ioc */
+	writel(((MPI2_FUNCTION_HANDSHAKE<<MPI2_DOORBELL_FUNCTION_SHIFT) |
+	    ((request_bytes/4)<<MPI2_DOORBELL_ADD_DWORDS_SHIFT)),
+	    &ioc->chip->Doorbell);
+
+	if ((_base_wait_for_doorbell_int(ioc, 5, sleep_flag))) {
+		printk(MPT2SAS_ERR_FMT "doorbell handshake "
+		   "int failed (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+	writel(0, &ioc->chip->HostInterruptStatus);
+
+	if ((_base_wait_for_doorbell_ack(ioc, 5, sleep_flag))) {
+		printk(MPT2SAS_ERR_FMT "doorbell handshake "
+		    "ack failed (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+
+	/* send message 32-bits at a time */
+	for (i = 0, failed = 0; i < request_bytes/4 && !failed; i++) {
+		writel(cpu_to_le32(request[i]), &ioc->chip->Doorbell);
+		if ((_base_wait_for_doorbell_ack(ioc, 5, sleep_flag)))
+			failed = 1;
+	}
+
+	if (failed) {
+		printk(MPT2SAS_ERR_FMT "doorbell handshake "
+		    "sending request failed (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+
+	/* now wait for the reply */
+	if ((_base_wait_for_doorbell_int(ioc, timeout, sleep_flag))) {
+		printk(MPT2SAS_ERR_FMT "doorbell handshake "
+		   "int failed (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+
+	/* read the first two 16-bit words, which give the reply's total length */
+	reply[0] = le16_to_cpu(readl(&ioc->chip->Doorbell)
+	    & MPI2_DOORBELL_DATA_MASK);
+	writel(0, &ioc->chip->HostInterruptStatus);
+	if ((_base_wait_for_doorbell_int(ioc, 5, sleep_flag))) {
+		printk(MPT2SAS_ERR_FMT "doorbell handshake "
+		   "int failed (line=%d)\n", ioc->name, __LINE__);
+		return -EFAULT;
+	}
+	reply[1] = le16_to_cpu(readl(&ioc->chip->Doorbell)
+	    & MPI2_DOORBELL_DATA_MASK);
+	writel(0, &ioc->chip->HostInterruptStatus);
+
+	for (i = 2; i < default_reply->MsgLength * 2; i++)  {
+		if ((_base_wait_for_doorbell_int(ioc, 5, sleep_flag))) {
+			printk(MPT2SAS_ERR_FMT "doorbell "
+			    "handshake int failed (line=%d)\n", ioc->name,
+			    __LINE__);
+			return -EFAULT;
+		}
+		if (i >=  reply_bytes/2) /* overflow case */
+			dummy = readl(&ioc->chip->Doorbell);
+		else
+			reply[i] = le16_to_cpu(readl(&ioc->chip->Doorbell)
+			    & MPI2_DOORBELL_DATA_MASK);
+		writel(0, &ioc->chip->HostInterruptStatus);
+	}
+
+	_base_wait_for_doorbell_int(ioc, 5, sleep_flag);
+	if (_base_wait_for_doorbell_not_used(ioc, 5, sleep_flag) != 0) {
+		dhsprintk(ioc, printk(MPT2SAS_INFO_FMT "doorbell is in use "
+		    " (line=%d)\n", ioc->name, __LINE__));
+	}
+	writel(0, &ioc->chip->HostInterruptStatus);
+
+	if (ioc->logging_level & MPT_DEBUG_INIT) {
+		mfp = (u32 *)reply;
+		printk(KERN_DEBUG "\toffset:data\n");
+		for (i = 0; i < reply_bytes/4; i++)
+			printk(KERN_DEBUG "\t[0x%02x]:%08x\n", i*4,
+			    le32_to_cpu(mfp[i]));
+	}
+	return 0;
+}
+
+/**
+ * mpt2sas_base_sas_iounit_control - send sas iounit control to FW
+ * @ioc: per adapter object
+ * @mpi_reply: the reply payload from FW
+ * @mpi_request: the request payload sent to FW
+ *
+ * The SAS IO Unit Control Request message allows the host to perform low-level
+ * operations, such as resets on the PHYs of the IO Unit. It also allows the
+ * host to obtain the IOC-assigned device handle for a device when it has other
+ * identifying information about the device, and to remove IOC resources
+ * associated with the device.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_base_sas_iounit_control(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2SasIoUnitControlReply_t *mpi_reply,
+    Mpi2SasIoUnitControlRequest_t *mpi_request)
+{
+	u16 smid;
+	u32 ioc_state;
+	unsigned long timeleft;
+	u8 issue_reset = 0;
+	int rc;
+	void *request;
+	u16 wait_state_count;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	mutex_lock(&ioc->base_cmds.mutex);
+
+	if (ioc->base_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: base_cmd in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == 10) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			rc = -EFAULT;
+			goto out;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->base_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	ioc->base_cmds.status = MPT2_CMD_PENDING;
+	request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->base_cmds.smid = smid;
+	memcpy(request, mpi_request, sizeof(Mpi2SasIoUnitControlRequest_t));
+	if (mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
+	    mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET)
+		ioc->ioc_link_reset_in_progress = 1;
+	mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->base_cmds.done,
+	    msecs_to_jiffies(10000));
+	if ((mpi_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET ||
+	    mpi_request->Operation == MPI2_SAS_OP_PHY_LINK_RESET) &&
+	    ioc->ioc_link_reset_in_progress)
+		ioc->ioc_link_reset_in_progress = 0;
+	if (!(ioc->base_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2SasIoUnitControlRequest_t)/4);
+		if (!(ioc->base_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+	if (ioc->base_cmds.status & MPT2_CMD_REPLY_VALID)
+		memcpy(mpi_reply, ioc->base_cmds.reply,
+		    sizeof(Mpi2SasIoUnitControlReply_t));
+	else
+		memset(mpi_reply, 0, sizeof(Mpi2SasIoUnitControlReply_t));
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	goto out;
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	rc = -EFAULT;
+ out:
+	mutex_unlock(&ioc->base_cmds.mutex);
+	return rc;
+}
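+
+/*
+ * Illustrative caller sketch (not taken from this patch): issuing a phy hard
+ * reset only needs the fields this routine itself inspects; the remaining
+ * request fields (function code, phy number, ...) come from the MPI2 headers.
+ *
+ *	Mpi2SasIoUnitControlRequest_t req;
+ *	Mpi2SasIoUnitControlReply_t reply;
+ *
+ *	memset(&req, 0, sizeof(req));
+ *	req.Operation = MPI2_SAS_OP_PHY_HARD_RESET;
+ *	req.VF_ID = 0;
+ *	... fill the remaining fields per the MPI2 specification ...
+ *	(void)mpt2sas_base_sas_iounit_control(ioc, &reply, &req);
+ */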
+
+/**
+ * mpt2sas_base_scsi_enclosure_processor - sending request to sep device
+ * @ioc: per adapter object
+ * @mpi_reply: the reply payload from FW
+ * @mpi_request: the request payload sent to FW
+ *
+ * The SCSI Enclosure Processor request message causes the IOC to
+ * communicate with SES devices to control LED status signals.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_base_scsi_enclosure_processor(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2SepReply_t *mpi_reply, Mpi2SepRequest_t *mpi_request)
+{
+	u16 smid;
+	u32 ioc_state;
+	unsigned long timeleft;
+	u8 issue_reset = 0;
+	int rc;
+	void *request;
+	u16 wait_state_count;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	mutex_lock(&ioc->base_cmds.mutex);
+
+	if (ioc->base_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: base_cmd in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == 10) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			rc = -EFAULT;
+			goto out;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->base_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	ioc->base_cmds.status = MPT2_CMD_PENDING;
+	request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->base_cmds.smid = smid;
+	memcpy(request, mpi_request, sizeof(Mpi2SepRequest_t));
+	mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->base_cmds.done,
+	    msecs_to_jiffies(10000));
+	if (!(ioc->base_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2SepRequest_t)/4);
+		if (!(ioc->base_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+	if (ioc->base_cmds.status & MPT2_CMD_REPLY_VALID)
+		memcpy(mpi_reply, ioc->base_cmds.reply,
+		    sizeof(Mpi2SepReply_t));
+	else
+		memset(mpi_reply, 0, sizeof(Mpi2SepReply_t));
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	goto out;
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	rc = -EFAULT;
+ out:
+	mutex_unlock(&ioc->base_cmds.mutex);
+	return rc;
+}
+
+/**
+ * _base_get_port_facts - obtain port facts reply and save in ioc
+ * @ioc: per adapter object
+ * @port: port number
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_get_port_facts(struct MPT2SAS_ADAPTER *ioc, int port, int sleep_flag)
+{
+	Mpi2PortFactsRequest_t mpi_request;
+	Mpi2PortFactsReply_t mpi_reply, *pfacts;
+	int mpi_reply_sz, mpi_request_sz, r;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	mpi_reply_sz = sizeof(Mpi2PortFactsReply_t);
+	mpi_request_sz = sizeof(Mpi2PortFactsRequest_t);
+	memset(&mpi_request, 0, mpi_request_sz);
+	mpi_request.Function = MPI2_FUNCTION_PORT_FACTS;
+	mpi_request.PortNumber = port;
+	r = _base_handshake_req_reply_wait(ioc, mpi_request_sz,
+	    (u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5, CAN_SLEEP);
+
+	if (r != 0) {
+		printk(MPT2SAS_ERR_FMT "%s: handshake failed (r=%d)\n",
+		    ioc->name, __func__, r);
+		return r;
+	}
+
+	pfacts = &ioc->pfacts[port];
+	memset(pfacts, 0, sizeof(Mpi2PortFactsReply_t));
+	pfacts->PortNumber = mpi_reply.PortNumber;
+	pfacts->VP_ID = mpi_reply.VP_ID;
+	pfacts->VF_ID = mpi_reply.VF_ID;
+	pfacts->MaxPostedCmdBuffers =
+	    le16_to_cpu(mpi_reply.MaxPostedCmdBuffers);
+
+	return 0;
+}
+
+/**
+ * _base_get_ioc_facts - obtain ioc facts reply and save in ioc
+ * @ioc: per adapter object
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_get_ioc_facts(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
+{
+	Mpi2IOCFactsRequest_t mpi_request;
+	Mpi2IOCFactsReply_t mpi_reply, *facts;
+	int mpi_reply_sz, mpi_request_sz, r;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	mpi_reply_sz = sizeof(Mpi2IOCFactsReply_t);
+	mpi_request_sz = sizeof(Mpi2IOCFactsRequest_t);
+	memset(&mpi_request, 0, mpi_request_sz);
+	mpi_request.Function = MPI2_FUNCTION_IOC_FACTS;
+	r = _base_handshake_req_reply_wait(ioc, mpi_request_sz,
+	    (u32 *)&mpi_request, mpi_reply_sz, (u16 *)&mpi_reply, 5, CAN_SLEEP);
+
+	if (r != 0) {
+		printk(MPT2SAS_ERR_FMT "%s: handshake failed (r=%d)\n",
+		    ioc->name, __func__, r);
+		return r;
+	}
+
+	facts = &ioc->facts;
+	memset(facts, 0, sizeof(Mpi2IOCFactsReply_t));
+	facts->MsgVersion = le16_to_cpu(mpi_reply.MsgVersion);
+	facts->HeaderVersion = le16_to_cpu(mpi_reply.HeaderVersion);
+	facts->VP_ID = mpi_reply.VP_ID;
+	facts->VF_ID = mpi_reply.VF_ID;
+	facts->IOCExceptions = le16_to_cpu(mpi_reply.IOCExceptions);
+	facts->MaxChainDepth = mpi_reply.MaxChainDepth;
+	facts->WhoInit = mpi_reply.WhoInit;
+	facts->NumberOfPorts = mpi_reply.NumberOfPorts;
+	facts->RequestCredit = le16_to_cpu(mpi_reply.RequestCredit);
+	facts->MaxReplyDescriptorPostQueueDepth =
+	    le16_to_cpu(mpi_reply.MaxReplyDescriptorPostQueueDepth);
+	facts->ProductID = le16_to_cpu(mpi_reply.ProductID);
+	facts->IOCCapabilities = le32_to_cpu(mpi_reply.IOCCapabilities);
+	if ((facts->IOCCapabilities & MPI2_IOCFACTS_CAPABILITY_INTEGRATED_RAID))
+		ioc->ir_firmware = 1;
+	facts->FWVersion.Word = le32_to_cpu(mpi_reply.FWVersion.Word);
+	facts->IOCRequestFrameSize =
+	    le16_to_cpu(mpi_reply.IOCRequestFrameSize);
+	facts->MaxInitiators = le16_to_cpu(mpi_reply.MaxInitiators);
+	facts->MaxTargets = le16_to_cpu(mpi_reply.MaxTargets);
+	ioc->shost->max_id = -1;
+	facts->MaxSasExpanders = le16_to_cpu(mpi_reply.MaxSasExpanders);
+	facts->MaxEnclosures = le16_to_cpu(mpi_reply.MaxEnclosures);
+	facts->ProtocolFlags = le16_to_cpu(mpi_reply.ProtocolFlags);
+	facts->HighPriorityCredit =
+	    le16_to_cpu(mpi_reply.HighPriorityCredit);
+	facts->ReplyFrameSize = mpi_reply.ReplyFrameSize;
+	facts->MaxDevHandle = le16_to_cpu(mpi_reply.MaxDevHandle);
+
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "hba queue depth(%d), "
+	    "max chains per io(%d)\n", ioc->name, facts->RequestCredit,
+	    facts->MaxChainDepth));
+	dinitprintk(ioc, printk(MPT2SAS_INFO_FMT "request frame size(%d), "
+	    "reply frame size(%d)\n", ioc->name,
+	    facts->IOCRequestFrameSize * 4, facts->ReplyFrameSize * 4));
+	return 0;
+}
+
+/**
+ * _base_send_ioc_init - send ioc_init to firmware
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_send_ioc_init(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, int sleep_flag)
+{
+	Mpi2IOCInitRequest_t mpi_request;
+	Mpi2IOCInitReply_t mpi_reply;
+	int r;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	memset(&mpi_request, 0, sizeof(Mpi2IOCInitRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_IOC_INIT;
+	mpi_request.WhoInit = MPI2_WHOINIT_HOST_DRIVER;
+	mpi_request.VF_ID = VF_ID;
+	mpi_request.MsgVersion = cpu_to_le16(MPI2_VERSION);
+	mpi_request.HeaderVersion = cpu_to_le16(MPI2_HEADER_VERSION);
+
+	/* In MPI Revision I (0xA), the SystemReplyFrameSize (offset 0x18) was
+	 * removed and made reserved.  Hosts running older firmware still need
+	 * this field filled in. It was decided that the Reply and Request
+	 * frame sizes are the same.
+	 */
+	if ((ioc->facts.HeaderVersion >> 8) < 0xA) {
+		mpi_request.Reserved7 = cpu_to_le16(ioc->reply_sz);
+/*		mpi_request.SystemReplyFrameSize =
+ *		 cpu_to_le16(ioc->reply_sz);
+ */
+	}
+
+	mpi_request.SystemRequestFrameSize = cpu_to_le16(ioc->request_sz/4);
+	mpi_request.ReplyDescriptorPostQueueDepth =
+	    cpu_to_le16(ioc->reply_post_queue_depth);
+	mpi_request.ReplyFreeQueueDepth =
+	    cpu_to_le16(ioc->reply_free_queue_depth);
+
+#if BITS_PER_LONG > 32
+	mpi_request.SenseBufferAddressHigh =
+	    cpu_to_le32(ioc->sense_dma >> 32);
+	mpi_request.SystemReplyAddressHigh =
+	    cpu_to_le32(ioc->reply_dma >> 32);
+	mpi_request.SystemRequestFrameBaseAddress =
+	    cpu_to_le64(ioc->request_dma);
+	mpi_request.ReplyFreeQueueAddress =
+	    cpu_to_le64(ioc->reply_free_dma);
+	mpi_request.ReplyDescriptorPostQueueAddress =
+	    cpu_to_le64(ioc->reply_post_free_dma);
+#else
+	mpi_request.SystemRequestFrameBaseAddress =
+	    cpu_to_le32(ioc->request_dma);
+	mpi_request.ReplyFreeQueueAddress =
+	    cpu_to_le32(ioc->reply_free_dma);
+	mpi_request.ReplyDescriptorPostQueueAddress =
+	    cpu_to_le32(ioc->reply_post_free_dma);
+#endif
+
+	if (ioc->logging_level & MPT_DEBUG_INIT) {
+		u32 *mfp;
+		int i;
+
+		mfp = (u32 *)&mpi_request;
+		printk(KERN_DEBUG "\toffset:data\n");
+		for (i = 0; i < sizeof(Mpi2IOCInitRequest_t)/4; i++)
+			printk(KERN_DEBUG "\t[0x%02x]:%08x\n", i*4,
+			    le32_to_cpu(mfp[i]));
+	}
+
+	r = _base_handshake_req_reply_wait(ioc,
+	    sizeof(Mpi2IOCInitRequest_t), (u32 *)&mpi_request,
+	    sizeof(Mpi2IOCInitReply_t), (u16 *)&mpi_reply, 10,
+	    sleep_flag);
+
+	if (r != 0) {
+		printk(MPT2SAS_ERR_FMT "%s: handshake failed (r=%d)\n",
+		    ioc->name, __func__, r);
+		return r;
+	}
+
+	if (mpi_reply.IOCStatus != MPI2_IOCSTATUS_SUCCESS ||
+	    mpi_reply.IOCLogInfo) {
+		printk(MPT2SAS_ERR_FMT "%s: failed\n", ioc->name, __func__);
+		r = -EIO;
+	}
+
+	return r;
+}
+
+/**
+ * _base_send_port_enable - send port_enable (device discovery) to firmware
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_send_port_enable(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, int sleep_flag)
+{
+	Mpi2PortEnableRequest_t *mpi_request;
+	u32 ioc_state;
+	unsigned long timeleft;
+	int r = 0;
+	u16 smid;
+
+	printk(MPT2SAS_INFO_FMT "sending port enable !!\n", ioc->name);
+
+	if (ioc->base_cmds.status & MPT2_CMD_PENDING) {
+		printk(MPT2SAS_ERR_FMT "%s: internal command already in use\n",
+		    ioc->name, __func__);
+		return -EAGAIN;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->base_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		return -EAGAIN;
+	}
+
+	ioc->base_cmds.status = MPT2_CMD_PENDING;
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->base_cmds.smid = smid;
+	memset(mpi_request, 0, sizeof(Mpi2PortEnableRequest_t));
+	mpi_request->Function = MPI2_FUNCTION_PORT_ENABLE;
+	mpi_request->VF_ID = VF_ID;
+
+	mpt2sas_base_put_smid_default(ioc, smid, VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->base_cmds.done,
+	    300*HZ);
+	if (!(ioc->base_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2PortEnableRequest_t)/4);
+		if (ioc->base_cmds.status & MPT2_CMD_RESET)
+			r = -EFAULT;
+		else
+			r = -ETIME;
+		goto out;
+	} else
+		dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: complete\n",
+		    ioc->name, __func__));
+
+	ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_OPERATIONAL,
+	    60, sleep_flag);
+	if (ioc_state) {
+		printk(MPT2SAS_ERR_FMT "%s: failed going to operational state "
+		    " (ioc_state=0x%x)\n", ioc->name, __func__, ioc_state);
+		r = -EFAULT;
+	}
+ out:
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	printk(MPT2SAS_INFO_FMT "port enable: %s\n",
+	    ioc->name, ((r == 0) ? "SUCCESS" : "FAILED"));
+	return r;
+}
+
+/**
+ * _base_unmask_events - turn on notification for this event
+ * @ioc: per adapter object
+ * @event: firmware event
+ *
+ * The mask is stored in ioc->event_masks.
+ */
+static void
+_base_unmask_events(struct MPT2SAS_ADAPTER *ioc, u16 event)
+{
+	u32 desired_event;
+
+	if (event >= 128)
+		return;
+
+	desired_event = (1 << (event % 32));
+
+	if (event < 32)
+		ioc->event_masks[0] &= ~desired_event;
+	else if (event < 64)
+		ioc->event_masks[1] &= ~desired_event;
+	else if (event < 96)
+		ioc->event_masks[2] &= ~desired_event;
+	else if (event < 128)
+		ioc->event_masks[3] &= ~desired_event;
+}
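+
+/*
+ * Example (for illustration): event 70 maps to desired_event =
+ * 1 << (70 % 32) = bit 6, cleared in ioc->event_masks[2] since 64 <= 70 < 96.
+ */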
+
+/**
+ * _base_event_notification - send event notification
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_event_notification(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, int sleep_flag)
+{
+	Mpi2EventNotificationRequest_t *mpi_request;
+	unsigned long timeleft;
+	u16 smid;
+	int r = 0;
+	int i;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	if (ioc->base_cmds.status & MPT2_CMD_PENDING) {
+		printk(MPT2SAS_ERR_FMT "%s: internal command already in use\n",
+		    ioc->name, __func__);
+		return -EAGAIN;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->base_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		return -EAGAIN;
+	}
+	ioc->base_cmds.status = MPT2_CMD_PENDING;
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->base_cmds.smid = smid;
+	memset(mpi_request, 0, sizeof(Mpi2EventNotificationRequest_t));
+	mpi_request->Function = MPI2_FUNCTION_EVENT_NOTIFICATION;
+	mpi_request->VF_ID = VF_ID;
+	for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++)
+		mpi_request->EventMasks[i] =
+		    cpu_to_le32(ioc->event_masks[i]);
+	mpt2sas_base_put_smid_default(ioc, smid, VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->base_cmds.done, 30*HZ);
+	if (!(ioc->base_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2EventNotificationRequest_t)/4);
+		if (ioc->base_cmds.status & MPT2_CMD_RESET)
+			r = -EFAULT;
+		else
+			r = -ETIME;
+	} else
+		dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: complete\n",
+		    ioc->name, __func__));
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+	return r;
+}
+
+/**
+ * mpt2sas_base_validate_event_type - validating event types
+ * @ioc: per adapter object
+ * @event_type: event type bitmask requested by the application
+ *
+ * This will turn on firmware event notification when an application
+ * asks for that event. We don't mask events that are already enabled.
+ */
+void
+mpt2sas_base_validate_event_type(struct MPT2SAS_ADAPTER *ioc, u32 *event_type)
+{
+	int i, j;
+	u32 event_mask, desired_event;
+	u8 send_update_to_fw;
+
+	for (i = 0, send_update_to_fw = 0; i <
+	    MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++) {
+		event_mask = ~event_type[i];
+		desired_event = 1;
+		for (j = 0; j < 32; j++) {
+			if (!(event_mask & desired_event) &&
+			    (ioc->event_masks[i] & desired_event)) {
+				ioc->event_masks[i] &= ~desired_event;
+				send_update_to_fw = 1;
+			}
+			desired_event = (desired_event << 1);
+		}
+	}
+
+	if (!send_update_to_fw)
+		return;
+
+	mutex_lock(&ioc->base_cmds.mutex);
+	_base_event_notification(ioc, 0, CAN_SLEEP);
+	mutex_unlock(&ioc->base_cmds.mutex);
+}
+
+/**
+ * _base_diag_reset - the "big hammer" start of day reset
+ * @ioc: per adapter object
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_diag_reset(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
+{
+	u32 host_diagnostic;
+	u32 ioc_state;
+	u32 count;
+	u32 hcb_size;
+
+	printk(MPT2SAS_INFO_FMT "sending diag reset !!\n", ioc->name);
+
+	_base_save_msix_table(ioc);
+
+	drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "clear interrupts\n",
+	    ioc->name));
+	writel(0, &ioc->chip->HostInterruptStatus);
+
+	count = 0;
+	do {
+		/* Write magic sequence to WriteSequence register
+		 * Loop until in diagnostic mode
+		 */
+		drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "write magic "
+		    "sequence\n", ioc->name));
+		writel(MPI2_WRSEQ_FLUSH_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_1ST_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_2ND_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_3RD_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_4TH_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_5TH_KEY_VALUE, &ioc->chip->WriteSequence);
+		writel(MPI2_WRSEQ_6TH_KEY_VALUE, &ioc->chip->WriteSequence);
+
+		/* wait 100 msec */
+		if (sleep_flag == CAN_SLEEP)
+			msleep(100);
+		else
+			mdelay(100);
+
+		if (count++ > 20)
+			goto out;
+
+		host_diagnostic = readl(&ioc->chip->HostDiagnostic);
+		drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "wrote magic "
+		    "sequence: count(%d), host_diagnostic(0x%08x)\n",
+		    ioc->name, count, host_diagnostic));
+
+	} while ((host_diagnostic & MPI2_DIAG_DIAG_WRITE_ENABLE) == 0);
+
+	hcb_size = readl(&ioc->chip->HCBSize);
+
+	drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "diag reset: issued\n",
+	    ioc->name));
+	writel(host_diagnostic | MPI2_DIAG_RESET_ADAPTER,
+	     &ioc->chip->HostDiagnostic);
+
+	/* don't access any registers for 50 milliseconds */
+	msleep(50);
+
+	/* 300 second max wait */
+	for (count = 0; count < 3000000 ; count++) {
+
+		host_diagnostic = readl(&ioc->chip->HostDiagnostic);
+
+		if (host_diagnostic == 0xFFFFFFFF)
+			goto out;
+		if (!(host_diagnostic & MPI2_DIAG_RESET_ADAPTER))
+			break;
+
+		/* wait 100 msec */
+		if (sleep_flag == CAN_SLEEP)
+			msleep(1);
+		else
+			mdelay(1);
+	}
+
+	if (host_diagnostic & MPI2_DIAG_HCB_MODE) {
+
+		drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "restart the adapter "
+		    "assuming the HCB Address points to good F/W\n",
+		    ioc->name));
+		host_diagnostic &= ~MPI2_DIAG_BOOT_DEVICE_SELECT_MASK;
+		host_diagnostic |= MPI2_DIAG_BOOT_DEVICE_SELECT_HCDW;
+		writel(host_diagnostic, &ioc->chip->HostDiagnostic);
+
+		drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "re-enable the HCDW\n", ioc->name));
+		writel(hcb_size | MPI2_HCB_SIZE_HCB_ENABLE,
+		    &ioc->chip->HCBSize);
+	}
+
+	drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "restart the adapter\n",
+	    ioc->name));
+	writel(host_diagnostic & ~MPI2_DIAG_HOLD_IOC_RESET,
+	    &ioc->chip->HostDiagnostic);
+
+	drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "disable writes to the "
+	    "diagnostic register\n", ioc->name));
+	writel(MPI2_WRSEQ_FLUSH_KEY_VALUE, &ioc->chip->WriteSequence);
+
+	drsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "Wait for FW to go to the "
+	    "READY state\n", ioc->name));
+	ioc_state = _base_wait_on_iocstate(ioc, MPI2_IOC_STATE_READY, 20,
+	    sleep_flag);
+	if (ioc_state) {
+		printk(MPT2SAS_ERR_FMT "%s: failed going to ready state "
+		    " (ioc_state=0x%x)\n", ioc->name, __func__, ioc_state);
+		goto out;
+	}
+
+	_base_restore_msix_table(ioc);
+	printk(MPT2SAS_INFO_FMT "diag reset: SUCCESS\n", ioc->name);
+	return 0;
+
+ out:
+	printk(MPT2SAS_ERR_FMT "diag reset: FAILED\n", ioc->name);
+	return -EFAULT;
+}
+
+/**
+ * _base_make_ioc_ready - put controller in READY state
+ * @ioc: per adapter object
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ * @type: FORCE_BIG_HAMMER or SOFT_RESET
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_make_ioc_ready(struct MPT2SAS_ADAPTER *ioc, int sleep_flag,
+    enum reset_type type)
+{
+	u32 ioc_state;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 0);
+	dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: ioc_state(0x%08x)\n",
+	    ioc->name, __func__, ioc_state));
+
+	if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_READY)
+		return 0;
+
+	if (ioc_state & MPI2_DOORBELL_USED) {
+		dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "unexpected doorbell "
+		    "active!\n", ioc->name));
+		goto issue_diag_reset;
+	}
+
+	if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
+		mpt2sas_base_fault_info(ioc, ioc_state &
+		    MPI2_DOORBELL_DATA_MASK);
+		goto issue_diag_reset;
+	}
+
+	if (type == FORCE_BIG_HAMMER)
+		goto issue_diag_reset;
+
+	if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_OPERATIONAL)
+		if (!(_base_send_ioc_reset(ioc,
+		    MPI2_FUNCTION_IOC_MESSAGE_UNIT_RESET, 15, CAN_SLEEP)))
+			return 0;
+
+ issue_diag_reset:
+	return _base_diag_reset(ioc, CAN_SLEEP);
+}
+
+/**
+ * _base_make_ioc_operational - put controller in OPERATIONAL state
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_base_make_ioc_operational(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    int sleep_flag)
+{
+	int r, i;
+	unsigned long	flags;
+	u32 reply_address;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	/* initialize the scsi lookup free list */
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	INIT_LIST_HEAD(&ioc->free_list);
+	for (i = 0; i < ioc->request_depth; i++) {
+		ioc->scsi_lookup[i].cb_idx = 0xFF;
+		list_add_tail(&ioc->scsi_lookup[i].tracker_list,
+		    &ioc->free_list);
+	}
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+	/* initialize Reply Free Queue */
+	for (i = 0, reply_address = (u32)ioc->reply_dma ;
+	    i < ioc->reply_free_queue_depth ; i++, reply_address +=
+	    ioc->reply_sz)
+		ioc->reply_free[i] = cpu_to_le32(reply_address);
+
+	/* initialize Reply Post Free Queue */
+	for (i = 0; i < ioc->reply_post_queue_depth; i++)
+		ioc->reply_post_free[i].Words = ~0ULL;
+
+	r = _base_send_ioc_init(ioc, VF_ID, sleep_flag);
+	if (r)
+		return r;
+
+	/* initialize the indexes */
+	ioc->reply_free_host_index = ioc->reply_free_queue_depth - 1;
+	ioc->reply_post_host_index = 0;
+	writel(ioc->reply_free_host_index, &ioc->chip->ReplyFreeHostIndex);
+	writel(0, &ioc->chip->ReplyPostHostIndex);
+
+	_base_unmask_interrupts(ioc);
+	r = _base_event_notification(ioc, VF_ID, sleep_flag);
+	if (r)
+		return r;
+
+	if (sleep_flag == CAN_SLEEP)
+		_base_static_config_pages(ioc);
+
+	r = _base_send_port_enable(ioc, VF_ID, sleep_flag);
+
+	return r;
+}
+
+/**
+ * mpt2sas_base_free_resources - free controller resources (io/irq/memmap)
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_free_resources(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct pci_dev *pdev = ioc->pdev;
+
+	dexitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	_base_mask_interrupts(ioc);
+	_base_make_ioc_ready(ioc, CAN_SLEEP, SOFT_RESET);
+	if (ioc->pci_irq) {
+		synchronize_irq(pdev->irq);
+		free_irq(ioc->pci_irq, ioc);
+	}
+	_base_disable_msix(ioc);
+	if (ioc->chip_phys)
+		iounmap(ioc->chip);
+	ioc->pci_irq = -1;
+	ioc->chip_phys = 0;
+	pci_release_selected_regions(ioc->pdev, ioc->bars);
+	pci_disable_device(pdev);
+	pci_set_drvdata(pdev, NULL);
+	return;
+}
+
+/**
+ * mpt2sas_base_attach - attach controller instance
+ * @ioc: per adapter object
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc)
+{
+	int r, i;
+	unsigned long	 flags;
+
+	dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	r = mpt2sas_base_map_resources(ioc);
+	if (r)
+		return r;
+
+	r = _base_make_ioc_ready(ioc, CAN_SLEEP, SOFT_RESET);
+	if (r)
+		goto out_free_resources;
+
+	r = _base_get_ioc_facts(ioc, CAN_SLEEP);
+	if (r)
+		goto out_free_resources;
+
+	r = _base_allocate_memory_pools(ioc, CAN_SLEEP);
+	if (r)
+		goto out_free_resources;
+
+	init_waitqueue_head(&ioc->reset_wq);
+
+	/* base internal command bits */
+	mutex_init(&ioc->base_cmds.mutex);
+	init_completion(&ioc->base_cmds.done);
+	ioc->base_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
+	ioc->base_cmds.status = MPT2_CMD_NOT_USED;
+
+	/* transport internal command bits */
+	ioc->transport_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
+	ioc->transport_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_init(&ioc->transport_cmds.mutex);
+	init_completion(&ioc->transport_cmds.done);
+
+	/* task management internal command bits */
+	ioc->tm_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
+	ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_init(&ioc->tm_cmds.mutex);
+	init_completion(&ioc->tm_cmds.done);
+
+	/* config page internal command bits */
+	ioc->config_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
+	ioc->config_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_init(&ioc->config_cmds.mutex);
+	init_completion(&ioc->config_cmds.done);
+
+	/* ctl module internal command bits */
+	ioc->ctl_cmds.reply = kzalloc(ioc->reply_sz, GFP_KERNEL);
+	ioc->ctl_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_init(&ioc->ctl_cmds.mutex);
+	init_completion(&ioc->ctl_cmds.done);
+
+	for (i = 0; i < MPI2_EVENT_NOTIFY_EVENTMASK_WORDS; i++)
+		ioc->event_masks[i] = -1;
+
+	/* here we enable the events we care about */
+	_base_unmask_events(ioc, MPI2_EVENT_SAS_DISCOVERY);
+	_base_unmask_events(ioc, MPI2_EVENT_SAS_BROADCAST_PRIMITIVE);
+	_base_unmask_events(ioc, MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST);
+	_base_unmask_events(ioc, MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE);
+	_base_unmask_events(ioc, MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE);
+	_base_unmask_events(ioc, MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST);
+	_base_unmask_events(ioc, MPI2_EVENT_IR_VOLUME);
+	_base_unmask_events(ioc, MPI2_EVENT_IR_PHYSICAL_DISK);
+	_base_unmask_events(ioc, MPI2_EVENT_IR_OPERATION_STATUS);
+	_base_unmask_events(ioc, MPI2_EVENT_TASK_SET_FULL);
+	_base_unmask_events(ioc, MPI2_EVENT_LOG_ENTRY_ADDED);
+
+	ioc->pfacts = kcalloc(ioc->facts.NumberOfPorts,
+	    sizeof(Mpi2PortFactsReply_t), GFP_KERNEL);
+	if (!ioc->pfacts)
+		goto out_free_resources;
+
+	for (i = 0 ; i < ioc->facts.NumberOfPorts; i++) {
+		r = _base_get_port_facts(ioc, i, CAN_SLEEP);
+		if (r)
+			goto out_free_resources;
+	}
+	r = _base_make_ioc_operational(ioc, 0, CAN_SLEEP);
+	if (r)
+		goto out_free_resources;
+
+	/* initialize fault polling */
+	INIT_DELAYED_WORK(&ioc->fault_reset_work, _base_fault_reset_work);
+	snprintf(ioc->fault_reset_work_q_name,
+	    sizeof(ioc->fault_reset_work_q_name), "poll_%d_status", ioc->id);
+	ioc->fault_reset_work_q =
+		create_singlethread_workqueue(ioc->fault_reset_work_q_name);
+	if (!ioc->fault_reset_work_q) {
+		printk(MPT2SAS_ERR_FMT "%s: failed (line=%d)\n",
+		    ioc->name, __func__, __LINE__);
+		goto out_free_resources;
+	}
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->fault_reset_work_q)
+		queue_delayed_work(ioc->fault_reset_work_q,
+		    &ioc->fault_reset_work,
+		    msecs_to_jiffies(FAULT_POLLING_INTERVAL));
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+	return 0;
+
+ out_free_resources:
+
+	ioc->remove_host = 1;
+	mpt2sas_base_free_resources(ioc);
+	_base_release_memory_pools(ioc);
+	kfree(ioc->tm_cmds.reply);
+	kfree(ioc->transport_cmds.reply);
+	kfree(ioc->config_cmds.reply);
+	kfree(ioc->base_cmds.reply);
+	kfree(ioc->ctl_cmds.reply);
+	kfree(ioc->pfacts);
+	ioc->ctl_cmds.reply = NULL;
+	ioc->base_cmds.reply = NULL;
+	ioc->tm_cmds.reply = NULL;
+	ioc->transport_cmds.reply = NULL;
+	ioc->config_cmds.reply = NULL;
+	ioc->pfacts = NULL;
+	return r;
+}
+
+
+/**
+ * mpt2sas_base_detach - remove controller instance
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_base_detach(struct MPT2SAS_ADAPTER *ioc)
+{
+	unsigned long	 flags;
+	struct workqueue_struct *wq;
+
+	dexitprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	wq = ioc->fault_reset_work_q;
+	ioc->fault_reset_work_q = NULL;
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+	if (!cancel_delayed_work(&ioc->fault_reset_work))
+		flush_workqueue(wq);
+	destroy_workqueue(wq);
+
+	mpt2sas_base_free_resources(ioc);
+	_base_release_memory_pools(ioc);
+	kfree(ioc->pfacts);
+	kfree(ioc->ctl_cmds.reply);
+	kfree(ioc->base_cmds.reply);
+	kfree(ioc->tm_cmds.reply);
+	kfree(ioc->transport_cmds.reply);
+	kfree(ioc->config_cmds.reply);
+}
+
+/**
+ * _base_reset_handler - reset callback handler (for base)
+ * @ioc: per adapter object
+ * @reset_phase: reset phase (MPT2_IOC_XXX)
+ *
+ * The handler for doing any required cleanup or initialization.
+ *
+ * The reset phase can be MPT2_IOC_PRE_RESET, MPT2_IOC_AFTER_RESET,
+ * or MPT2_IOC_DONE_RESET.
+ *
+ * Return nothing.
+ */
+static void
+_base_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase)
+{
+	switch (reset_phase) {
+	case MPT2_IOC_PRE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_PRE_RESET\n", ioc->name, __func__));
+		break;
+	case MPT2_IOC_AFTER_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_AFTER_RESET\n", ioc->name, __func__));
+		if (ioc->transport_cmds.status & MPT2_CMD_PENDING) {
+			ioc->transport_cmds.status |= MPT2_CMD_RESET;
+			mpt2sas_base_free_smid(ioc, ioc->transport_cmds.smid);
+			complete(&ioc->transport_cmds.done);
+		}
+		if (ioc->base_cmds.status & MPT2_CMD_PENDING) {
+			ioc->base_cmds.status |= MPT2_CMD_RESET;
+			mpt2sas_base_free_smid(ioc, ioc->base_cmds.smid);
+			complete(&ioc->base_cmds.done);
+		}
+		if (ioc->config_cmds.status & MPT2_CMD_PENDING) {
+			ioc->config_cmds.status |= MPT2_CMD_RESET;
+			mpt2sas_base_free_smid(ioc, ioc->config_cmds.smid);
+			complete(&ioc->config_cmds.done);
+		}
+		break;
+	case MPT2_IOC_DONE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_DONE_RESET\n", ioc->name, __func__));
+		break;
+	}
+	mpt2sas_scsih_reset_handler(ioc, reset_phase);
+	mpt2sas_ctl_reset_handler(ioc, reset_phase);
+}
+
+/**
+ * _wait_for_commands_to_complete - wait for pending commands to complete
+ * @ioc: Pointer to MPT2SAS_ADAPTER structure
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ *
+ * This function waits (up to 3 seconds) for all pending commands to complete
+ * prior to putting the controller into reset.
+ */
+static void
+_wait_for_commands_to_complete(struct MPT2SAS_ADAPTER *ioc, int sleep_flag)
+{
+	u32 ioc_state;
+	unsigned long flags;
+	u16 i;
+
+	ioc->pending_io_count = 0;
+	if (sleep_flag != CAN_SLEEP)
+		return;
+
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 0);
+	if ((ioc_state & MPI2_IOC_STATE_MASK) != MPI2_IOC_STATE_OPERATIONAL)
+		return;
+
+	/* pending command count */
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	for (i = 0; i < ioc->request_depth; i++)
+		if (ioc->scsi_lookup[i].cb_idx != 0xFF)
+			ioc->pending_io_count++;
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+	if (!ioc->pending_io_count)
+		return;
+
+	/* wait for pending commands to complete */
+	wait_event_timeout(ioc->reset_wq, ioc->pending_io_count == 0, 3 * HZ);
+}
+
+/**
+ * mpt2sas_base_hard_reset_handler - reset controller
+ * @ioc: Pointer to MPT2SAS_ADAPTER structure
+ * @sleep_flag: CAN_SLEEP or NO_SLEEP
+ * @type: FORCE_BIG_HAMMER or SOFT_RESET
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_base_hard_reset_handler(struct MPT2SAS_ADAPTER *ioc, int sleep_flag,
+    enum reset_type type)
+{
+	int r, i;
+	unsigned long flags;
+
+	dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->ioc_reset_in_progress) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		printk(MPT2SAS_ERR_FMT "%s: busy\n",
+		    ioc->name, __func__);
+		return -EBUSY;
+	}
+	ioc->ioc_reset_in_progress = 1;
+	ioc->shost_recovery = 1;
+	if (ioc->shost->shost_state == SHOST_RUNNING) {
+		/* set back to SHOST_RUNNING in mpt2sas_scsih.c */
+		scsi_host_set_state(ioc->shost, SHOST_RECOVERY);
+		printk(MPT2SAS_INFO_FMT "putting controller into "
+		    "SHOST_RECOVERY\n", ioc->name);
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	_base_reset_handler(ioc, MPT2_IOC_PRE_RESET);
+	_wait_for_commands_to_complete(ioc, sleep_flag);
+	_base_mask_interrupts(ioc);
+	r = _base_make_ioc_ready(ioc, sleep_flag, type);
+	if (r)
+		goto out;
+	_base_reset_handler(ioc, MPT2_IOC_AFTER_RESET);
+	for (i = 0 ; i < ioc->facts.NumberOfPorts; i++)
+		r = _base_make_ioc_operational(ioc, ioc->pfacts[i].VF_ID,
+		    sleep_flag);
+	if (!r)
+		_base_reset_handler(ioc, MPT2_IOC_DONE_RESET);
+ out:
+	dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: %s\n",
+	    ioc->name, __func__, ((r == 0) ? "SUCCESS" : "FAILED")));
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	ioc->ioc_reset_in_progress = 0;
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+	return r;
+}
diff --git a/drivers/scsi/mpt2sas/mpt2sas_base.h b/drivers/scsi/mpt2sas/mpt2sas_base.h
new file mode 100644
index 0000000..6945ff4
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_base.h
@@ -0,0 +1,779 @@
+/*
+ * This is the Fusion MPT base driver providing common API layer interface
+ * for access to MPT (Message Passing Technology) firmware.
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_base.h
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef MPT2SAS_BASE_H_INCLUDED
+#define MPT2SAS_BASE_H_INCLUDED
+
+#include "mpi/mpi2_type.h"
+#include "mpi/mpi2.h"
+#include "mpi/mpi2_ioc.h"
+#include "mpi/mpi2_cnfg.h"
+#include "mpi/mpi2_init.h"
+#include "mpi/mpi2_raid.h"
+#include "mpi/mpi2_tool.h"
+#include "mpi/mpi2_sas.h"
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_transport_sas.h>
+#include <scsi/scsi_dbg.h>
+
+#include "mpt2sas_debug.h"
+
+/* driver versioning info */
+#define MPT2SAS_DRIVER_NAME		"mpt2sas"
+#define MPT2SAS_AUTHOR	"LSI Corporation <DL-MPTFusionLinux@lsi.com>"
+#define MPT2SAS_DESCRIPTION	"LSI MPT Fusion SAS 2.0 Device Driver"
+#define MPT2SAS_DRIVER_VERSION		"00.100.11.16"
+#define MPT2SAS_MAJOR_VERSION		00
+#define MPT2SAS_MINOR_VERSION		100
+#define MPT2SAS_BUILD_VERSION		11
+#define MPT2SAS_RELEASE_VERSION		16
+
+/*
+ * Set MPT2SAS_SG_DEPTH value based on user input.
+ */
+#ifdef CONFIG_SCSI_MPT2SAS_MAX_SGE
+#if     CONFIG_SCSI_MPT2SAS_MAX_SGE  < 16
+#define MPT2SAS_SG_DEPTH       16
+#elif CONFIG_SCSI_MPT2SAS_MAX_SGE  > 128
+#define MPT2SAS_SG_DEPTH       128
+#else
+#define MPT2SAS_SG_DEPTH       CONFIG_SCSI_MPT2SAS_MAX_SGE
+#endif
+#else
+#define MPT2SAS_SG_DEPTH       128 /* MAX_HW_SEGMENTS */
+#endif
+
+
+/*
+ * Generic Defines
+ */
+#define MPT2SAS_SATA_QUEUE_DEPTH	32
+#define MPT2SAS_SAS_QUEUE_DEPTH		254
+#define MPT2SAS_RAID_QUEUE_DEPTH	128
+
+#define MPT_NAME_LENGTH			32	/* generic length of strings */
+#define MPT_STRING_LENGTH		64
+
+#define	MPT_MAX_CALLBACKS		16
+
+#define	 CAN_SLEEP			1
+#define  NO_SLEEP			0
+
+#define INTERNAL_CMDS_COUNT		10	/* reserved cmds */
+
+#define MPI2_HIM_MASK			0xFFFFFFFF /* mask every bit*/
+
+#define MPT2SAS_INVALID_DEVICE_HANDLE	0xFFFF
+
+
+/*
+ * reset phases
+ */
+#define MPT2_IOC_PRE_RESET		1 /* prior to host reset */
+#define MPT2_IOC_AFTER_RESET		2 /* just after host reset */
+#define MPT2_IOC_DONE_RESET		3 /* links re-initialized */
+
+/*
+ * logging format
+ */
+#define MPT2SAS_FMT			"%s: "
+#define MPT2SAS_DEBUG_FMT		KERN_DEBUG MPT2SAS_FMT
+#define MPT2SAS_INFO_FMT		KERN_INFO MPT2SAS_FMT
+#define MPT2SAS_NOTE_FMT		KERN_NOTICE MPT2SAS_FMT
+#define MPT2SAS_WARN_FMT		KERN_WARNING MPT2SAS_FMT
+#define MPT2SAS_ERR_FMT			KERN_ERR MPT2SAS_FMT
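+
+/*
+ * These format macros expect ioc->name as the first printk argument, e.g.
+ * (as used elsewhere in this driver):
+ *
+ *	printk(MPT2SAS_INFO_FMT "diag reset: SUCCESS\n", ioc->name);
+ */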
+
+/*
+ * per target private data
+ */
+#define MPT_TARGET_FLAGS_RAID_COMPONENT	0x01
+#define MPT_TARGET_FLAGS_VOLUME		0x02
+#define MPT_TARGET_FLAGS_DELETED	0x04
+
+/**
+ * struct MPT2SAS_TARGET - starget private hostdata
+ * @starget: starget object
+ * @sas_address: target sas address
+ * @handle: device handle
+ * @num_luns: number of luns
+ * @flags: MPT_TARGET_FLAGS_XXX flags
+ * @deleted: target flagged for deletion
+ * @tm_busy: target is busy with TM request.
+ */
+struct MPT2SAS_TARGET {
+	struct scsi_target *starget;
+	u64	sas_address;
+	u16	handle;
+	int	num_luns;
+	u32	flags;
+	u8	deleted;
+	u8	tm_busy;
+};
+
+/*
+ * per device private data
+ */
+#define MPT_DEVICE_FLAGS_INIT		0x01
+#define MPT_DEVICE_TLR_ON		0x02
+
+/**
+ * struct MPT2SAS_DEVICE - sdev private hostdata
+ * @sas_target: starget private hostdata
+ * @lun: lun number
+ * @flags: MPT_DEVICE_XXX flags
+ * @configured_lun: lun is configured
+ * @block: device is in SDEV_BLOCK state
+ * @tlr_snoop_check: flag used in determining whether to disable TLR
+ */
+struct MPT2SAS_DEVICE {
+	struct MPT2SAS_TARGET *sas_target;
+	unsigned int	lun;
+	u32	flags;
+	u8	configured_lun;
+	u8	block;
+	u8	tlr_snoop_check;
+};
+
+#define MPT2_CMD_NOT_USED	0x8000	/* free */
+#define MPT2_CMD_COMPLETE	0x0001	/* completed */
+#define MPT2_CMD_PENDING	0x0002	/* pending */
+#define MPT2_CMD_REPLY_VALID	0x0004	/* reply is valid */
+#define MPT2_CMD_RESET		0x0008	/* host reset dropped the command */
+
+/**
+ * struct _internal_cmd - internal commands struct
+ * @mutex: mutex
+ * @done: completion
+ * @reply: reply message pointer
+ * @status: MPT2_CMD_XXX status
+ * @smid: system message id
+ */
+struct _internal_cmd {
+	struct mutex mutex;
+	struct completion done;
+	void 	*reply;
+	u16	status;
+	u16	smid;
+};
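+
+/*
+ * Typical _internal_cmd life cycle (minimal sketch; the "xxx" names below
+ * are placeholders for one of the per-module command areas such as base,
+ * transport, tm, ctl or config -- see _config_request() in
+ * mpt2sas_config.c for a complete example):
+ *
+ *	mutex_lock(&ioc->xxx_cmds.mutex);
+ *	smid = mpt2sas_base_get_smid(ioc, ioc->xxx_cb_idx);
+ *	ioc->xxx_cmds.status = MPT2_CMD_PENDING;
+ *	ioc->xxx_cmds.smid = smid;
+ *	(build the request frame returned by mpt2sas_base_get_msg_frame)
+ *	mpt2sas_base_put_smid_default(ioc, smid, vf_id);
+ *	wait_for_completion_timeout(&ioc->xxx_cmds.done, timeout * HZ);
+ *	if (ioc->xxx_cmds.status & MPT2_CMD_REPLY_VALID)
+ *		(the reply frame was copied into ioc->xxx_cmds.reply)
+ *	ioc->xxx_cmds.status = MPT2_CMD_NOT_USED;
+ *	mutex_unlock(&ioc->xxx_cmds.mutex);
+ */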
+
+/*
+ * SAS Topology Structures
+ */
+
+/**
+ * struct _sas_device - attached device information
+ * @list: sas device list
+ * @starget: starget object
+ * @sas_address: device sas address
+ * @device_name: retrieved from the SAS IDENTIFY frame.
+ * @handle: device handle
+ * @parent_handle: handle to parent device
+ * @enclosure_handle: enclosure handle
+ * @enclosure_logical_id: enclosure logical identifier
+ * @volume_handle: volume handle (valid when hidden raid member)
+ * @volume_wwid: volume unique identifier
+ * @device_info: bitfield provides detailed info about the device
+ * @id: target id
+ * @channel: target channel
+ * @slot: slot number
+ * @hidden_raid_component: set to 1 when this is a raid member
+ * @responding: used in _scsih_sas_device_mark_responding
+ */
+struct _sas_device {
+	struct list_head list;
+	struct scsi_target *starget;
+	u64	sas_address;
+	u64	device_name;
+	u16	handle;
+	u16	parent_handle;
+	u16	enclosure_handle;
+	u64	enclosure_logical_id;
+	u16	volume_handle;
+	u64	volume_wwid;
+	u32	device_info;
+	int	id;
+	int	channel;
+	u16	slot;
+	u8	hidden_raid_component;
+	u8	responding;
+};
+
+/**
+ * struct _raid_device - raid volume link list
+ * @list: sas device list
+ * @starget: starget object
+ * @sdev: scsi device struct (volumes are single lun)
+ * @wwid: unique identifier for the volume
+ * @handle: device handle
+ * @id: target id
+ * @channel: target channel
+ * @volume_type: the raid level
+ * @device_info: bitfield provides detailed info about the hidden components
+ * @num_pds: number of hidden raid components
+ * @responding: used in _scsih_raid_device_mark_responding
+ */
+struct _raid_device {
+	struct list_head list;
+	struct scsi_target *starget;
+	struct scsi_device *sdev;
+	u64	wwid;
+	u16	handle;
+	int	id;
+	int	channel;
+	u8	volume_type;
+	u32	device_info;
+	u8	num_pds;
+	u8	responding;
+};
+
+/**
+ * struct _boot_device - boot device info
+ * @is_raid: flag to indicate whether this is a volume
+ * @device: holds pointer for either struct _sas_device or
+ *     struct _raid_device
+ */
+struct _boot_device {
+	u8 is_raid;
+	void *device;
+};
+
+/**
+ * struct _sas_port - wide/narrow sas port information
+ * @port_list: list of ports belonging to expander
+ * @handle: device handle for this port
+ * @sas_address: sas address of this port
+ * @num_phys: number of phys belonging to this port
+ * @remote_identify: attached device identification
+ * @rphy: sas transport rphy object
+ * @port: sas transport wide/narrow port object
+ * @phy_list: _sas_phy list objects belonging to this port
+ */
+struct _sas_port {
+	struct list_head port_list;
+	u16	handle;
+	u64	sas_address;
+	u8	num_phys;
+	struct sas_identify remote_identify;
+	struct sas_rphy *rphy;
+	struct sas_port *port;
+	struct list_head phy_list;
+};
+
+/**
+ * struct _sas_phy - phy information
+ * @port_siblings: list of phys belonging to a port
+ * @identify: phy identification
+ * @remote_identify: attached device identification
+ * @phy: sas transport phy object
+ * @phy_id: unique phy id
+ * @handle: device handle for this phy
+ * @attached_handle: device handle for attached device
+ */
+struct _sas_phy {
+	struct list_head port_siblings;
+	struct sas_identify identify;
+	struct sas_identify remote_identify;
+	struct sas_phy *phy;
+	u8	phy_id;
+	u16	handle;
+	u16	attached_handle;
+};
+
+/**
+ * struct _sas_node - sas_host/expander information
+ * @list: list of expanders
+ * @parent_dev: parent device class
+ * @num_phys: number of phys belonging to this sas_host/expander
+ * @sas_address: sas address of this sas_host/expander
+ * @handle: handle for this sas_host/expander
+ * @parent_handle: parent handle
+ * @enclosure_handle: handle of the enclosure this is a member of
+ * @enclosure_logical_id: enclosure logical identifier
+ * @device_info: bitfield defining capabilities of this sas_host/expander
+ * @responding: used in _scsih_expander_device_mark_responding
+ * @phy: a list of phys that make up this sas_host/expander
+ * @sas_port_list: list of ports attached to this sas_host/expander
+ */
+struct _sas_node {
+	struct list_head list;
+	struct device *parent_dev;
+	u8	num_phys;
+	u64	sas_address;
+	u16	handle;
+	u16	parent_handle;
+	u16	enclosure_handle;
+	u64	enclosure_logical_id;
+	u8	responding;
+	struct	_sas_phy *phy;
+	struct list_head sas_port_list;
+};
+
+/**
+ * enum reset_type - reset state
+ * @FORCE_BIG_HAMMER: issue diagnostic reset
+ * @SOFT_RESET: issue message_unit_reset; if that fails, escalate to big hammer
+ */
+enum reset_type {
+	FORCE_BIG_HAMMER,
+	SOFT_RESET,
+};
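+
+/*
+ * Illustrative use of the reset types (sketch only): a caller requesting
+ * recovery normally starts with a soft reset and relies on
+ * _base_make_ioc_ready() escalating to a diagnostic reset when the message
+ * unit reset fails:
+ *
+ *	mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET);
+ *
+ * FORCE_BIG_HAMMER skips the message unit reset and issues the diagnostic
+ * reset directly.
+ */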
+
+/**
+ * struct request_tracker - firmware request tracker
+ * @smid: system message id
+ * @scmd: scsi request pointer
+ * @cb_idx: callback index
+ * @chain_list: list of chains associated to this IO
+ * @tracker_list: list of free request (ioc->free_list)
+ */
+struct request_tracker {
+	u16	smid;
+	struct scsi_cmnd *scmd;
+	u8	cb_idx;
+	struct list_head tracker_list;
+};
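+
+/*
+ * Note: a request_tracker whose cb_idx is 0xFF sits on ioc->free_list; see
+ * the free list setup in _base_make_ioc_operational() and the pending IO
+ * count in _wait_for_commands_to_complete().
+ */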
+
+typedef void (*MPT_ADD_SGE)(void *paddr, u32 flags_length, dma_addr_t dma_addr);
+
+/**
+ * struct MPT2SAS_ADAPTER - per adapter struct
+ * @list: ioc_list
+ * @shost: shost object
+ * @id: unique adapter id
+ * @pci_irq: irq number
+ * @name: generic ioc string
+ * @tmp_string: tmp string used for logging
+ * @pdev: pci pdev object
+ * @chip: memory mapped register space
+ * @chip_phys: physical address prior to mapping
+ * @pio_chip: I/O mapped register space
+ * @logging_level: see mpt2sas_debug.h
+ * @ir_firmware: IR firmware present
+ * @bars: bitmask of BAR's that must be configured
+ * @mask_interrupts: ignore interrupt
+ * @fault_reset_work_q_name: fw fault work queue
+ * @fault_reset_work_q: ""
+ * @fault_reset_work: ""
+ * @firmware_event_name: fw event work queue
+ * @firmware_event_thread: ""
+ * @fw_events_off: flag to turn off fw event handling
+ * @fw_event_lock:
+ * @fw_event_list: list of fw events
+ * @aen_event_read_flag: event log was read
+ * @broadcast_aen_busy: broadcast aen waiting to be serviced
+ * @ioc_reset_in_progress: host reset in progress
+ * @ioc_reset_in_progress_lock:
+ * @ioc_link_reset_in_progress: phy/hard reset in progress
+ * @ignore_loginfos: ignore loginfos during task management
+ * @remove_host: flag for when driver unloads, to avoid sending dev resets
+ * @wait_for_port_enable_to_complete:
+ * @msix_enable: flag indicating msix is enabled
+ * @msix_vector_count: number of msix vectors
+ * @msix_table: virt address to the msix table
+ * @msix_table_backup: backup msix table
+ * @scsi_io_cb_idx: shost generated commands
+ * @tm_cb_idx: task management commands
+ * @transport_cb_idx: transport internal commands
+ * @ctl_cb_idx: ctl internal commands
+ * @base_cb_idx: base internal commands
+ * @config_cb_idx: config internal commands
+ * @base_cmds:
+ * @transport_cmds:
+ * @tm_cmds:
+ * @ctl_cmds:
+ * @config_cmds:
+ * @base_add_sg_single: handler for either 32/64 bit sgl's
+ * @event_type: bits indicating which events to log
+ * @event_context: unique id for each logged event
+ * @event_log: event log pointer
+ * @event_masks: events that are masked
+ * @facts: static facts data
+ * @pfacts: static port facts data
+ * @manu_pg0: static manufacturing page 0
+ * @bios_pg2: static bios page 2
+ * @bios_pg3: static bios page 3
+ * @ioc_pg8: static ioc page 8
+ * @iounit_pg0: static iounit page 0
+ * @iounit_pg1: static iounit page 1
+ * @sas_hba: sas host object
+ * @sas_expander_list: expander object list
+ * @sas_node_lock:
+ * @sas_device_list: sas device object list
+ * @sas_device_init_list: sas device object list (used only at init time)
+ * @sas_device_lock:
+ * @io_missing_delay: time for IO completed by fw when PDR enabled
+ * @device_missing_delay: time for device missing by fw when PDR enabled
+ * @config_page_sz: config page size
+ * @config_page: reserve memory for config page payload
+ * @config_page_dma:
+ * @sge_size: sg element size for either 32/64 bit
+ * @request_depth: hba request queue depth
+ * @request_sz: per request frame size
+ * @request: pool of request frames
+ * @request_dma:
+ * @request_dma_sz:
+ * @scsi_lookup: firmware request tracker list
+ * @scsi_lookup_lock:
+ * @free_list: free list of request
+ * @chain: pool of chains
+ * @pending_io_count:
+ * @reset_wq:
+ * @chain_dma:
+ * @max_sges_in_main_message: number of sg elements in main message
+ * @max_sges_in_chain_message: number of sg elements per chain
+ * @chains_needed_per_io: max chains per io
+ * @chain_offset_value_for_main_message: location of 1st sg in main message
+ * @chain_depth: total chains allocated
+ * @sense: pool of sense
+ * @sense_dma:
+ * @sense_dma_pool:
+ * @reply_depth: hba reply queue depth
+ * @reply_sz: per reply frame size
+ * @reply: pool of replies
+ * @reply_dma:
+ * @reply_dma_pool:
+ * @reply_free_queue_depth: reply free depth
+ * @reply_free: pool for reply free queue (32 bit addr)
+ * @reply_free_dma:
+ * @reply_free_dma_pool:
+ * @reply_free_host_index: tail index in pool to insert free replies
+ * @reply_post_queue_depth: reply post queue depth
+ * @reply_post_free: pool for reply post (64bit descriptor)
+ * @reply_post_free_dma:
+ * @reply_post_free_dma_pool:
+ * @reply_post_host_index: head index in the pool where FW completes IO
+ */
+struct MPT2SAS_ADAPTER {
+	struct list_head list;
+	struct Scsi_Host *shost;
+	u8		id;
+	u32		pci_irq;
+	char		name[MPT_NAME_LENGTH];
+	char		tmp_string[MPT_STRING_LENGTH];
+	struct pci_dev	*pdev;
+	Mpi2SystemInterfaceRegs_t __iomem *chip;
+	unsigned long	chip_phys;
+	unsigned long	pio_chip;
+	int		logging_level;
+	u8		ir_firmware;
+	int		bars;
+	u8		mask_interrupts;
+
+	/* fw fault handler */
+	char		fault_reset_work_q_name[20];
+	struct workqueue_struct *fault_reset_work_q;
+	struct delayed_work fault_reset_work;
+
+	/* fw event handler */
+	char		firmware_event_name[20];
+	struct workqueue_struct	*firmware_event_thread;
+	u8		fw_events_off;
+	spinlock_t	fw_event_lock;
+	struct list_head fw_event_list;
+
+	 /* misc flags */
+	int		aen_event_read_flag;
+	u8		broadcast_aen_busy;
+	u8		ioc_reset_in_progress;
+	u8		shost_recovery;
+	spinlock_t 	ioc_reset_in_progress_lock;
+	u8		ioc_link_reset_in_progress;
+	u8		ignore_loginfos;
+	u8		remove_host;
+	u8		wait_for_port_enable_to_complete;
+
+	u8		msix_enable;
+	u16		msix_vector_count;
+	u32		*msix_table;
+	u32		*msix_table_backup;
+
+	/* internal commands, callback index */
+	u8		scsi_io_cb_idx;
+	u8		tm_cb_idx;
+	u8		transport_cb_idx;
+	u8		ctl_cb_idx;
+	u8		base_cb_idx;
+	u8		config_cb_idx;
+	struct _internal_cmd base_cmds;
+	struct _internal_cmd transport_cmds;
+	struct _internal_cmd tm_cmds;
+	struct _internal_cmd ctl_cmds;
+	struct _internal_cmd config_cmds;
+
+	MPT_ADD_SGE	base_add_sg_single;
+
+	/* event log */
+	u32		event_type[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];
+	u32		event_context;
+	void		*event_log;
+	u32		event_masks[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];
+
+	/* static config pages */
+	Mpi2IOCFactsReply_t facts;
+	Mpi2PortFactsReply_t *pfacts;
+	Mpi2ManufacturingPage0_t manu_pg0;
+	Mpi2BiosPage2_t	bios_pg2;
+	Mpi2BiosPage3_t	bios_pg3;
+	Mpi2IOCPage8_t ioc_pg8;
+	Mpi2IOUnitPage0_t iounit_pg0;
+	Mpi2IOUnitPage1_t iounit_pg1;
+
+	struct _boot_device req_boot_device;
+	struct _boot_device req_alt_boot_device;
+	struct _boot_device current_boot_device;
+
+	/* sas hba, expander, and device list */
+	struct _sas_node sas_hba;
+	struct list_head sas_expander_list;
+	spinlock_t	sas_node_lock;
+	struct list_head sas_device_list;
+	struct list_head sas_device_init_list;
+	spinlock_t	sas_device_lock;
+	struct list_head raid_device_list;
+	spinlock_t	raid_device_lock;
+	u8		io_missing_delay;
+	u16		device_missing_delay;
+	int		sas_id;
+
+	/* config page */
+	u16		config_page_sz;
+	void 		*config_page;
+	dma_addr_t	config_page_dma;
+
+	/* request */
+	u16		sge_size;
+	u16 		request_depth;
+	u16		request_sz;
+	u8		*request;
+	dma_addr_t	request_dma;
+	u32		request_dma_sz;
+	struct request_tracker *scsi_lookup;
+	spinlock_t scsi_lookup_lock;
+	struct list_head free_list;
+	int		pending_io_count;
+	wait_queue_head_t reset_wq;
+
+	/* chain */
+	u8		*chain;
+	dma_addr_t	chain_dma;
+	u16 		max_sges_in_main_message;
+	u16		max_sges_in_chain_message;
+	u16		chains_needed_per_io;
+	u16		chain_offset_value_for_main_message;
+	u16		chain_depth;
+
+	/* sense */
+	u8		*sense;
+	dma_addr_t	sense_dma;
+	struct dma_pool *sense_dma_pool;
+
+	/* reply */
+	u16		reply_sz;
+	u8		*reply;
+	dma_addr_t	reply_dma;
+	struct dma_pool *reply_dma_pool;
+
+	/* reply free queue */
+	u16 		reply_free_queue_depth;
+	u32		*reply_free;
+	dma_addr_t	reply_free_dma;
+	struct dma_pool *reply_free_dma_pool;
+	u32		reply_free_host_index;
+
+	/* reply post queue */
+	u16 		reply_post_queue_depth;
+	Mpi2ReplyDescriptorsUnion_t *reply_post_free;
+	dma_addr_t	reply_post_free_dma;
+	struct dma_pool *reply_post_free_dma_pool;
+	u32		reply_post_host_index;
+
+	/* diag buffer support */
+	u8		*diag_buffer[MPI2_DIAG_BUF_TYPE_COUNT];
+	u32		diag_buffer_sz[MPI2_DIAG_BUF_TYPE_COUNT];
+	dma_addr_t	diag_buffer_dma[MPI2_DIAG_BUF_TYPE_COUNT];
+	u8		diag_buffer_status[MPI2_DIAG_BUF_TYPE_COUNT];
+	u32		unique_id[MPI2_DIAG_BUF_TYPE_COUNT];
+	u32		product_specific[MPI2_DIAG_BUF_TYPE_COUNT][23];
+	u32		diagnostic_flags[MPI2_DIAG_BUF_TYPE_COUNT];
+};
+
+typedef void (*MPT_CALLBACK)(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID,
+    u32 reply);
+
+
+/* base shared API */
+extern struct list_head mpt2sas_ioc_list;
+
+int mpt2sas_base_attach(struct MPT2SAS_ADAPTER *ioc);
+void mpt2sas_base_detach(struct MPT2SAS_ADAPTER *ioc);
+int mpt2sas_base_map_resources(struct MPT2SAS_ADAPTER *ioc);
+void mpt2sas_base_free_resources(struct MPT2SAS_ADAPTER *ioc);
+int mpt2sas_base_hard_reset_handler(struct MPT2SAS_ADAPTER *ioc, int sleep_flag,
+    enum reset_type type);
+
+void *mpt2sas_base_get_msg_frame(struct MPT2SAS_ADAPTER *ioc, u16 smid);
+void *mpt2sas_base_get_sense_buffer(struct MPT2SAS_ADAPTER *ioc, u16 smid);
+void mpt2sas_base_build_zero_len_sge(struct MPT2SAS_ADAPTER *ioc, void *paddr);
+dma_addr_t mpt2sas_base_get_msg_frame_dma(struct MPT2SAS_ADAPTER *ioc, u16 smid);
+dma_addr_t mpt2sas_base_get_sense_buffer_dma(struct MPT2SAS_ADAPTER *ioc, u16 smid);
+
+u16 mpt2sas_base_get_smid(struct MPT2SAS_ADAPTER *ioc, u8 cb_idx);
+void mpt2sas_base_free_smid(struct MPT2SAS_ADAPTER *ioc, u16 smid);
+void mpt2sas_base_put_smid_scsi_io(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 vf_id,
+    u16 handle);
+void mpt2sas_base_put_smid_hi_priority(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 vf_id);
+void mpt2sas_base_put_smid_target_assist(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    u8 vf_id, u16 io_index);
+void mpt2sas_base_put_smid_default(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 vf_id);
+void mpt2sas_base_initialize_callback_handler(void);
+u8 mpt2sas_base_register_callback_handler(MPT_CALLBACK cb_func);
+void mpt2sas_base_release_callback_handler(u8 cb_idx);
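+
+/*
+ * Callback registration sketch (illustrative; "my_done" is a placeholder
+ * for any routine matching the MPT_CALLBACK prototype): a sub-module
+ * registers its completion routine once and then passes the returned index
+ * whenever it obtains a request frame:
+ *
+ *	cb_idx = mpt2sas_base_register_callback_handler(my_done);
+ *	...
+ *	smid = mpt2sas_base_get_smid(ioc, cb_idx);
+ */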
+
+void mpt2sas_base_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply);
+void *mpt2sas_base_get_reply_virt_addr(struct MPT2SAS_ADAPTER *ioc, u32 phys_addr);
+
+u32 mpt2sas_base_get_iocstate(struct MPT2SAS_ADAPTER *ioc, int cooked);
+
+void mpt2sas_base_fault_info(struct MPT2SAS_ADAPTER *ioc , u16 fault_code);
+int mpt2sas_base_sas_iounit_control(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2SasIoUnitControlReply_t *mpi_reply, Mpi2SasIoUnitControlRequest_t
+    *mpi_request);
+int mpt2sas_base_scsi_enclosure_processor(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2SepReply_t *mpi_reply, Mpi2SepRequest_t *mpi_request);
+void mpt2sas_base_validate_event_type(struct MPT2SAS_ADAPTER *ioc, u32 *event_type);
+
+/* scsih shared API */
+void mpt2sas_scsih_issue_tm(struct MPT2SAS_ADAPTER *ioc, u16 handle, uint lun,
+    u8 type, u16 smid_task, ulong timeout);
+void mpt2sas_scsih_set_tm_flag(struct MPT2SAS_ADAPTER *ioc, u16 handle);
+void mpt2sas_scsih_clear_tm_flag(struct MPT2SAS_ADAPTER *ioc, u16 handle);
+struct _sas_node *mpt2sas_scsih_expander_find_by_handle(struct MPT2SAS_ADAPTER *ioc,
+    u16 handle);
+struct _sas_node *mpt2sas_scsih_expander_find_by_sas_address(struct MPT2SAS_ADAPTER
+    *ioc, u64 sas_address);
+struct _sas_device *mpt2sas_scsih_sas_device_find_by_sas_address(
+    struct MPT2SAS_ADAPTER *ioc, u64 sas_address);
+
+void mpt2sas_scsih_event_callback(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, u32 reply);
+void mpt2sas_scsih_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase);
+
+/* config shared API */
+void mpt2sas_config_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply);
+int mpt2sas_config_get_number_hba_phys(struct MPT2SAS_ADAPTER *ioc, u8 *num_phys);
+int mpt2sas_config_get_manufacturing_pg0(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage0_t *config_page);
+int mpt2sas_config_get_bios_pg2(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2BiosPage2_t *config_page);
+int mpt2sas_config_get_bios_pg3(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2BiosPage3_t *config_page);
+int mpt2sas_config_get_iounit_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2IOUnitPage0_t *config_page);
+int mpt2sas_config_get_sas_device_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasDevicePage0_t *config_page, u32 form, u32 handle);
+int mpt2sas_config_get_sas_device_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasDevicePage1_t *config_page, u32 form, u32 handle);
+int mpt2sas_config_get_sas_iounit_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasIOUnitPage0_t *config_page, u16 sz);
+int mpt2sas_config_get_iounit_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2IOUnitPage1_t *config_page);
+int mpt2sas_config_set_iounit_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2IOUnitPage1_t config_page);
+int mpt2sas_config_get_sas_iounit_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasIOUnitPage1_t *config_page, u16 sz);
+int mpt2sas_config_get_ioc_pg8(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2IOCPage8_t *config_page);
+int mpt2sas_config_get_expander_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2ExpanderPage0_t *config_page, u32 form, u32 handle);
+int mpt2sas_config_get_expander_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2ExpanderPage1_t *config_page, u32 phy_number, u16 handle);
+int mpt2sas_config_get_enclosure_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasEnclosurePage0_t *config_page, u32 form, u32 handle);
+int mpt2sas_config_get_phy_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasPhyPage0_t *config_page, u32 phy_number);
+int mpt2sas_config_get_phy_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasPhyPage1_t *config_page, u32 phy_number);
+int mpt2sas_config_get_raid_volume_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2RaidVolPage1_t *config_page, u32 form, u32 handle);
+int mpt2sas_config_get_number_pds(struct MPT2SAS_ADAPTER *ioc, u16 handle, u8 *num_pds);
+int mpt2sas_config_get_raid_volume_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2RaidVolPage0_t *config_page, u32 form, u32 handle, u16 sz);
+int mpt2sas_config_get_phys_disk_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2RaidPhysDiskPage0_t *config_page, u32 form,
+    u32 form_specific);
+int mpt2sas_config_get_volume_handle(struct MPT2SAS_ADAPTER *ioc, u16 pd_handle,
+    u16 *volume_handle);
+int mpt2sas_config_get_volume_wwid(struct MPT2SAS_ADAPTER *ioc, u16 volume_handle,
+    u64 *wwid);
+
+/* ctl shared API */
+extern struct device_attribute *mpt2sas_host_attrs[];
+extern struct device_attribute *mpt2sas_dev_attrs[];
+void mpt2sas_ctl_init(void);
+void mpt2sas_ctl_exit(void);
+void mpt2sas_ctl_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply);
+void mpt2sas_ctl_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase);
+void mpt2sas_ctl_event_callback(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, u32 reply);
+void mpt2sas_ctl_add_to_event_log(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventNotificationReply_t *mpi_reply);
+
+/* transport shared API */
+void mpt2sas_transport_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply);
+struct _sas_port *mpt2sas_transport_port_add(struct MPT2SAS_ADAPTER *ioc,
+    u16 handle, u16 parent_handle);
+void mpt2sas_transport_port_remove(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
+    u16 parent_handle);
+int mpt2sas_transport_add_host_phy(struct MPT2SAS_ADAPTER *ioc, struct _sas_phy
+    *mpt2sas_phy, Mpi2SasPhyPage0_t phy_pg0, struct device *parent_dev);
+int mpt2sas_transport_add_expander_phy(struct MPT2SAS_ADAPTER *ioc, struct _sas_phy
+    *mpt2sas_phy, Mpi2ExpanderPage1_t expander_pg1, struct device *parent_dev);
+void mpt2sas_transport_update_phy_link_change(struct MPT2SAS_ADAPTER *ioc, u16 handle,
+   u16 attached_handle, u8 phy_number, u8 link_rate);
+extern struct sas_function_template mpt2sas_transport_functions;
+extern struct scsi_transport_template *mpt2sas_transport_template;
+
+#endif /* MPT2SAS_BASE_H_INCLUDED */
diff --git a/drivers/scsi/mpt2sas/mpt2sas_config.c b/drivers/scsi/mpt2sas/mpt2sas_config.c
new file mode 100644
index 0000000..58cfb97
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_config.c
@@ -0,0 +1,1873 @@
+/*
+ * This module provides common API for accessing firmware configuration pages
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_base.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/blkdev.h>
+#include <linux/sched.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+
+#include "mpt2sas_base.h"
+
+/* local definitions */
+
+/* Timeout for config page request (in seconds) */
+#define MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT 15
+
+/* Common sgl flags for READING a config page. */
+#define MPT2_CONFIG_COMMON_SGLFLAGS ((MPI2_SGE_FLAGS_SIMPLE_ELEMENT | \
+    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER \
+    | MPI2_SGE_FLAGS_END_OF_LIST) << MPI2_SGE_FLAGS_SHIFT)
+
+/* Common sgl flags for WRITING a config page. */
+#define MPT2_CONFIG_COMMON_WRITE_SGLFLAGS ((MPI2_SGE_FLAGS_SIMPLE_ELEMENT | \
+    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER \
+    | MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_HOST_TO_IOC) \
+    << MPI2_SGE_FLAGS_SHIFT)
+
+/**
+ * struct config_request - obtain dma memory via _config_alloc_config_dma_memory
+ * @config_page_sz: size
+ * @config_page: virt pointer
+ * @config_page_dma: phys pointer
+ *
+ */
+struct config_request {
+	u16			config_page_sz;
+	void			*config_page;
+	dma_addr_t		config_page_dma;
+};
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _config_display_some_debug - debug routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @calling_function_name: string pass from calling function
+ * @mpi_reply: reply message frame
+ * Context: none.
+ *
+ * Function for displaying debug info helpful when debugging issues
+ * in this module.
+ */
+static void
+_config_display_some_debug(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    char *calling_function_name, MPI2DefaultReply_t *mpi_reply)
+{
+	Mpi2ConfigRequest_t *mpi_request;
+	char *desc = NULL;
+
+	if (!(ioc->logging_level & MPT_DEBUG_CONFIG))
+		return;
+
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	switch (mpi_request->Header.PageType & MPI2_CONFIG_PAGETYPE_MASK) {
+	case MPI2_CONFIG_PAGETYPE_IO_UNIT:
+		desc = "io_unit";
+		break;
+	case MPI2_CONFIG_PAGETYPE_IOC:
+		desc = "ioc";
+		break;
+	case MPI2_CONFIG_PAGETYPE_BIOS:
+		desc = "bios";
+		break;
+	case MPI2_CONFIG_PAGETYPE_RAID_VOLUME:
+		desc = "raid_volume";
+		break;
+	case MPI2_CONFIG_PAGETYPE_MANUFACTURING:
+		desc = "manufacturing";
+		break;
+	case MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK:
+		desc = "physdisk";
+		break;
+	case MPI2_CONFIG_PAGETYPE_EXTENDED:
+		switch (mpi_request->ExtPageType) {
+		case MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT:
+			desc = "sas_io_unit";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_SAS_EXPANDER:
+			desc = "sas_expander";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE:
+			desc = "sas_device";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_SAS_PHY:
+			desc = "sas_phy";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_LOG:
+			desc = "log";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_ENCLOSURE:
+			desc = "enclosure";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_RAID_CONFIG:
+			desc = "raid_config";
+			break;
+		case MPI2_CONFIG_EXTPAGETYPE_DRIVER_MAPPING:
+			desc = "driver_mapping";
+			break;
+		}
+		break;
+	}
+
+	if (!desc)
+		return;
+
+	printk(MPT2SAS_DEBUG_FMT "%s: %s(%d), action(%d), form(0x%08x), "
+	    "smid(%d)\n", ioc->name, calling_function_name, desc,
+	    mpi_request->Header.PageNumber, mpi_request->Action,
+	    le32_to_cpu(mpi_request->PageAddress), smid);
+
+	if (!mpi_reply)
+		return;
+
+	if (mpi_reply->IOCStatus || mpi_reply->IOCLogInfo)
+		printk(MPT2SAS_DEBUG_FMT
+		    "\tiocstatus(0x%04x), loginfo(0x%08x)\n",
+		    ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
+		    le32_to_cpu(mpi_reply->IOCLogInfo));
+}
+#endif
+
+/**
+ * mpt2sas_config_done - config page completion routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ * Context: none.
+ *
+ * The callback handler when using _config_request.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_config_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+
+	if (ioc->config_cmds.status == MPT2_CMD_NOT_USED)
+		return;
+	if (ioc->config_cmds.smid != smid)
+		return;
+	ioc->config_cmds.status |= MPT2_CMD_COMPLETE;
+	mpi_reply =  mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (mpi_reply) {
+		ioc->config_cmds.status |= MPT2_CMD_REPLY_VALID;
+		memcpy(ioc->config_cmds.reply, mpi_reply,
+		    mpi_reply->MsgLength*4);
+	}
+	ioc->config_cmds.status &= ~MPT2_CMD_PENDING;
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	_config_display_some_debug(ioc, smid, "config_done", mpi_reply);
+#endif
+	complete(&ioc->config_cmds.done);
+}
+
+/**
+ * _config_request - main routine for sending config page requests
+ * @ioc: per adapter object
+ * @mpi_request: request message frame
+ * @mpi_reply: reply mf payload returned from firmware
+ * @timeout: timeout in seconds
+ * Context: sleep, the calling function needs to acquire the config_cmds.mutex
+ *
+ * A generic API for config page requests to firmware.
+ *
+ * The ioc->config_cmds.status flag should be MPT2_CMD_NOT_USED before calling
+ * this API.
+ *
+ * The callback index is set inside ioc->config_cb_idx.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_config_request(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigRequest_t
+    *mpi_request, Mpi2ConfigReply_t *mpi_reply, int timeout)
+{
+	u16 smid;
+	u32 ioc_state;
+	unsigned long timeleft;
+	Mpi2ConfigRequest_t *config_request;
+	int r;
+	u8 retry_count;
+	u8 issue_reset = 0;
+	u16 wait_state_count;
+
+	if (ioc->config_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: config_cmd in use\n",
+		    ioc->name, __func__);
+		return -EAGAIN;
+	}
+	retry_count = 0;
+
+ retry_config:
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			ioc->config_cmds.status = MPT2_CMD_NOT_USED;
+			return -EFAULT;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+	if (wait_state_count)
+		printk(MPT2SAS_INFO_FMT "%s: ioc is operational\n",
+		    ioc->name, __func__);
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->config_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		ioc->config_cmds.status = MPT2_CMD_NOT_USED;
+		return -EAGAIN;
+	}
+
+	r = 0;
+	memset(mpi_reply, 0, sizeof(Mpi2ConfigReply_t));
+	ioc->config_cmds.status = MPT2_CMD_PENDING;
+	config_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->config_cmds.smid = smid;
+	memcpy(config_request, mpi_request, sizeof(Mpi2ConfigRequest_t));
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	_config_display_some_debug(ioc, smid, "config_request", NULL);
+#endif
+	mpt2sas_base_put_smid_default(ioc, smid, config_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->config_cmds.done,
+	    timeout*HZ);
+	if (!(ioc->config_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2ConfigRequest_t)/4);
+		if (!(ioc->config_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+	if (ioc->config_cmds.status & MPT2_CMD_REPLY_VALID)
+		memcpy(mpi_reply, ioc->config_cmds.reply,
+		    sizeof(Mpi2ConfigReply_t));
+	if (retry_count)
+		printk(MPT2SAS_INFO_FMT "%s: retry completed!!\n",
+		    ioc->name, __func__);
+	ioc->config_cmds.status = MPT2_CMD_NOT_USED;
+	return r;
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+	ioc->config_cmds.status = MPT2_CMD_NOT_USED;
+	if (!retry_count) {
+		printk(MPT2SAS_INFO_FMT "%s: attempting retry\n",
+		    ioc->name, __func__);
+		retry_count++;
+		goto retry_config;
+	}
+	return -EFAULT;
+}
+
+/**
+ * _config_alloc_config_dma_memory - obtain physical memory
+ * @ioc: per adapter object
+ * @mem: struct config_request
+ *
+ * A wrapper for obtaining dma-able memory for config page request.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_config_alloc_config_dma_memory(struct MPT2SAS_ADAPTER *ioc,
+    struct config_request *mem)
+{
+	int r = 0;
+
+	mem->config_page = pci_alloc_consistent(ioc->pdev, mem->config_page_sz,
+	    &mem->config_page_dma);
+	if (!mem->config_page)
+		r = -ENOMEM;
+	return r;
+}
+
+/**
+ * _config_free_config_dma_memory - wrapper to free the memory
+ * @ioc: per adapter object
+ * @mem: struct config_request
+ *
+ * A wrapper to free dma-able memory allocated by _config_alloc_config_dma_memory.
+ *
+ * Return nothing.
+ */
+static void
+_config_free_config_dma_memory(struct MPT2SAS_ADAPTER *ioc,
+    struct config_request *mem)
+{
+	pci_free_consistent(ioc->pdev, mem->config_page_sz, mem->config_page,
+	    mem->config_page_dma);
+}
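+
+/*
+ * The mpt2sas_config_get_* helpers below share a common two-step pattern
+ * (shown here as a minimal sketch): a PAGE_HEADER action with a zero
+ * length SGE to learn the page length, followed by a READ_CURRENT action
+ * with a real buffer attached through ioc->base_add_sg_single():
+ *
+ *	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+ *	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+ *	r = _config_request(ioc, &mpi_request, mpi_reply, timeout);
+ *
+ *	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+ *	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+ *	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+ *	    mem.config_page_dma);
+ *	r = _config_request(ioc, &mpi_request, mpi_reply, timeout);
+ */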
+
+/**
+ * mpt2sas_config_get_manufacturing_pg0 - obtain manufacturing page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_manufacturing_pg0(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2ManufacturingPage0_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2ManufacturingPage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_MANUFACTURING;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_MANUFACTURING0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2ManufacturingPage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_bios_pg2 - obtain bios page 2
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_bios_pg2(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2BiosPage2_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2BiosPage2_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_BIOS;
+	mpi_request.Header.PageNumber = 2;
+	mpi_request.Header.PageVersion = MPI2_BIOSPAGE2_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2BiosPage2_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_bios_pg3 - obtain bios page 3
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_bios_pg3(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2BiosPage3_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2BiosPage3_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_BIOS;
+	mpi_request.Header.PageNumber = 3;
+	mpi_request.Header.PageVersion = MPI2_BIOSPAGE3_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2BiosPage3_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_iounit_pg0 - obtain iounit page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_iounit_pg0(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage0_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2IOUnitPage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_IOUNITPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2IOUnitPage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_iounit_pg1 - obtain iounit page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_iounit_pg1(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage1_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2IOUnitPage1_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_IOUNITPAGE1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2IOUnitPage1_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_set_iounit_pg1 - set iounit page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_set_iounit_pg1(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2IOUnitPage1_t config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IO_UNIT;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_IOUNITPAGE1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_WRITE_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+
+	memset(mem.config_page, 0, mem.config_page_sz);
+	memcpy(mem.config_page, &config_page,
+	    sizeof(Mpi2IOUnitPage1_t));
+
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_WRITE_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
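+/*
+ * Illustrative sketch only (hypothetical helper): the intended
+ * read-modify-write pairing of mpt2sas_config_get_iounit_pg1() and
+ * mpt2sas_config_set_iounit_pg1() above.  Note the set variant takes the
+ * page by value.  Field updates are left out here.
+ */
+static int _config_example_update_iounit_pg1(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2IOUnitPage1_t iounit_pg1;
+	int r;
+
+	r = mpt2sas_config_get_iounit_pg1(ioc, &mpi_reply, &iounit_pg1);
+	if (r)
+		return r;
+	/* ... adjust iounit_pg1 fields here before writing back ... */
+	return mpt2sas_config_set_iounit_pg1(ioc, &mpi_reply, iounit_pg1);
+}
+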
+/**
+ * mpt2sas_config_get_ioc_pg8 - obtain ioc page 8
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_ioc_pg8(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2IOCPage8_t *config_page)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2IOCPage8_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_IOC;
+	mpi_request.Header.PageNumber = 8;
+	mpi_request.Header.PageVersion = MPI2_IOCPAGE8_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2IOCPage8_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_sas_device_pg0 - obtain sas device page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: device handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_sas_device_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasDevicePage0_t *config_page, u32 form, u32 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2SasDevicePage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE;
+	mpi_request.Header.PageVersion = MPI2_SASDEVICE0_PAGEVERSION;
+	mpi_request.Header.PageNumber = 0;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2SasDevicePage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
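+/*
+ * Illustrative sketch only (hypothetical helper): walking every attached
+ * device with the GET_NEXT_HANDLE form of sas device page 0.  The
+ * MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE constant and the DevHandle
+ * field are assumed from the MPI2 headers; starting at 0xFFFF requests
+ * the first handle.
+ */
+static void _config_example_walk_sas_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	u16 ioc_status;
+	u16 handle = 0xFFFF;
+
+	while (!(mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply,
+	    &sas_device_pg0, MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE,
+	    handle))) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
+			break;
+		handle = le16_to_cpu(sas_device_pg0.DevHandle);
+		/* sas_device_pg0 now describes the device at "handle" */
+	}
+}
+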
+/**
+ * mpt2sas_config_get_sas_device_pg1 - obtain sas device page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: device handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_sas_device_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasDevicePage1_t *config_page, u32 form, u32 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2SasDevicePage1_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_DEVICE;
+	mpi_request.Header.PageVersion = MPI2_SASDEVICE1_PAGEVERSION;
+	mpi_request.Header.PageNumber = 1;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2SasDevicePage1_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_number_hba_phys - obtain number of phys on the host
+ * @ioc: per adapter object
+ * @num_phys: pointer returned with the number of phys
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_number_hba_phys(struct MPT2SAS_ADAPTER *ioc, u8 *num_phys)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+	u16 ioc_status;
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasIOUnitPage0_t config_page;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_SASIOUNITPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply.Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply.Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply.Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply.ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply.ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply.ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status == MPI2_IOCSTATUS_SUCCESS) {
+			memcpy(&config_page, mem.config_page,
+			    min_t(u16, mem.config_page_sz,
+			    sizeof(Mpi2SasIOUnitPage0_t)));
+			*num_phys = config_page.NumPhys;
+		}
+	}
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_sas_iounit_pg0 - obtain sas iounit page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @sz: size of buffer passed in config_page
+ * Context: sleep.
+ *
+ * The calling function should call mpt2sas_config_get_number_hba_phys prior
+ * to this function, so that enough memory is allocated for config_page.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_sas_iounit_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasIOUnitPage0_t *config_page, u16 sz)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sz);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_SASIOUNITPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, sz, mem.config_page_sz));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
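+/*
+ * Illustrative sketch only (hypothetical helper): pairing
+ * mpt2sas_config_get_number_hba_phys() with
+ * mpt2sas_config_get_sas_iounit_pg0() above.  The caller supplies a
+ * buffer sized to hold per-phy data for every phy reported.
+ */
+static int _config_example_read_sas_iounit_pg0(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2SasIOUnitPage0_t *buffer, u16 buffer_sz)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	u8 num_phys;
+
+	if (mpt2sas_config_get_number_hba_phys(ioc, &num_phys) || !num_phys)
+		return -1;
+	/* buffer_sz is assumed large enough for num_phys phys */
+	return mpt2sas_config_get_sas_iounit_pg0(ioc, &mpi_reply, buffer,
+	    buffer_sz);
+}
+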
+/**
+ * mpt2sas_config_get_sas_iounit_pg1 - obtain sas iounit page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @sz: size of buffer passed in config_page
+ * Context: sleep.
+ *
+ * The calling function should call mpt2sas_config_get_number_hba_phys prior
+ * to this function, so that enough memory is allocated for config_page.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_sas_iounit_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasIOUnitPage1_t *config_page, u16 sz)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sz);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_IO_UNIT;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_SASIOUNITPAGE1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, sz, mem.config_page_sz));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_expander_pg0 - obtain expander page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: expander handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_expander_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2ExpanderPage0_t *config_page, u32 form, u32 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2ExpanderPage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_EXPANDER;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_SASEXPANDER0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2ExpanderPage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_expander_pg1 - obtain expander page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @phy_number: phy number
+ * @handle: expander handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_expander_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2ExpanderPage1_t *config_page, u32 phy_number,
+    u16 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2ExpanderPage1_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_EXPANDER;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_SASEXPANDER1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress =
+	    cpu_to_le32(MPI2_SAS_EXPAND_PGAD_FORM_HNDL_PHY_NUM |
+	    (phy_number << MPI2_SAS_EXPAND_PGAD_PHYNUM_SHIFT) | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2ExpanderPage1_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_enclosure_pg0 - obtain enclosure page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: enclosure handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_enclosure_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasEnclosurePage0_t *config_page, u32 form, u32 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2SasEnclosurePage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_ENCLOSURE;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_SASENCLOSURE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2SasEnclosurePage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_phy_pg0 - obtain phy page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @phy_number: phy number
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_phy_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasPhyPage0_t *config_page, u32 phy_number)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2SasPhyPage0_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_PHY;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_SASPHY0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress =
+	    cpu_to_le32(MPI2_SAS_PHY_PGAD_FORM_PHY_NUMBER | phy_number);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2SasPhyPage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
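+/*
+ * Illustrative sketch only (hypothetical helper): reading phy page 0 for
+ * every host phy, using mpt2sas_config_get_number_hba_phys() and
+ * mpt2sas_config_get_phy_pg0() above.
+ */
+static int _config_example_read_all_phy_pg0(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasPhyPage0_t phy_pg0;
+	u8 num_phys, i;
+	int r;
+
+	r = mpt2sas_config_get_number_hba_phys(ioc, &num_phys);
+	if (r)
+		return r;
+	for (i = 0; i < num_phys; i++) {
+		r = mpt2sas_config_get_phy_pg0(ioc, &mpi_reply, &phy_pg0, i);
+		if (r)
+			return r;
+		/* phy_pg0 now describes phy "i" */
+	}
+	return 0;
+}
+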
+/**
+ * mpt2sas_config_get_phy_pg1 - obtain phy page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @phy_number: phy number
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_phy_pg1(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2SasPhyPage1_t *config_page, u32 phy_number)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2SasPhyPage1_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_SAS_PHY;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_SASPHY1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress =
+	    cpu_to_le32(MPI2_SAS_PHY_PGAD_FORM_PHY_NUMBER | phy_number);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply->ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply->ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2SasPhyPage1_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_raid_volume_pg1 - obtain raid volume page 1
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: volume handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_raid_volume_pg1(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage1_t *config_page, u32 form,
+    u32 handle)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(config_page, 0, sizeof(Mpi2RaidVolPage1_t));
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME;
+	mpi_request.Header.PageNumber = 1;
+	mpi_request.Header.PageVersion = MPI2_RAIDVOLPAGE1_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2RaidVolPage1_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_number_pds - obtain number of phys disks assigned to volume
+ * @ioc: per adapter object
+ * @handle: volume handle
+ * @num_pds: returns pds count
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_number_pds(struct MPT2SAS_ADAPTER *ioc, u16 handle,
+    u8 *num_pds)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	Mpi2RaidVolPage0_t *config_page;
+	Mpi2ConfigReply_t mpi_reply;
+	int r;
+	struct config_request mem;
+	u16 ioc_status;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	*num_pds = 0;
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_RAIDVOLPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress =
+	    cpu_to_le32(MPI2_RAID_VOLUME_PGAD_FORM_HANDLE | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply.Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply.Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply.Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply.Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply.Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status == MPI2_IOCSTATUS_SUCCESS) {
+			config_page = mem.config_page;
+			*num_pds = config_page->NumPhysDisks;
+		}
+	}
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_raid_volume_pg0 - obtain raid volume page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_HANDLE or HANDLE
+ * @handle: volume handle
+ * @sz: size of buffer passed in config_page
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_raid_volume_pg0(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2ConfigReply_t *mpi_reply, Mpi2RaidVolPage0_t *config_page, u32 form,
+    u32 handle, u16 sz)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	memset(config_page, 0, sz);
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_VOLUME;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_RAIDVOLPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | handle);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, sz, mem.config_page_sz));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_phys_disk_pg0 - obtain phys disk page 0
+ * @ioc: per adapter object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @config_page: contents of the config page
+ * @form: GET_NEXT_PHYSDISKNUM, PHYSDISKNUM, DEVHANDLE
+ * @form_specific: specific to the form
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_phys_disk_pg0(struct MPT2SAS_ADAPTER *ioc, Mpi2ConfigReply_t
+    *mpi_reply, Mpi2RaidPhysDiskPage0_t *config_page, u32 form,
+    u32 form_specific)
+{
+	Mpi2ConfigRequest_t mpi_request;
+	int r;
+	struct config_request mem;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	memset(config_page, 0, sizeof(Mpi2RaidPhysDiskPage0_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_RAID_PHYSDISK;
+	mpi_request.Header.PageNumber = 0;
+	mpi_request.Header.PageVersion = MPI2_RAIDPHYSDISKPAGE0_PAGEVERSION;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress = cpu_to_le32(form | form_specific);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply->Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply->Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply->Header.PageType;
+	mpi_request.Header.PageLength = mpi_reply->Header.PageLength;
+	mem.config_page_sz = le16_to_cpu(mpi_reply->Header.PageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (!r)
+		memcpy(config_page, mem.config_page,
+		    min_t(u16, mem.config_page_sz,
+		    sizeof(Mpi2RaidPhysDiskPage0_t)));
+
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_volume_handle - returns volume handle for a given hidden raid component
+ * @ioc: per adapter object
+ * @pd_handle: phys disk handle
+ * @volume_handle: volume handle
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_volume_handle(struct MPT2SAS_ADAPTER *ioc, u16 pd_handle,
+    u16 *volume_handle)
+{
+	Mpi2RaidConfigurationPage0_t *config_page;
+	Mpi2ConfigRequest_t mpi_request;
+	Mpi2ConfigReply_t mpi_reply;
+	int r, i;
+	struct config_request mem;
+	u16 ioc_status;
+
+	mutex_lock(&ioc->config_cmds.mutex);
+	*volume_handle = 0;
+	memset(&mpi_request, 0, sizeof(Mpi2ConfigRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_CONFIG;
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_HEADER;
+	mpi_request.Header.PageType = MPI2_CONFIG_PAGETYPE_EXTENDED;
+	mpi_request.ExtPageType = MPI2_CONFIG_EXTPAGETYPE_RAID_CONFIG;
+	mpi_request.Header.PageVersion = MPI2_RAIDCONFIG0_PAGEVERSION;
+	mpi_request.Header.PageNumber = 0;
+	mpt2sas_base_build_zero_len_sge(ioc, &mpi_request.PageBufferSGE);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	mpi_request.PageAddress =
+	    cpu_to_le32(MPI2_RAID_PGAD_FORM_ACTIVE_CONFIG);
+	mpi_request.Action = MPI2_CONFIG_ACTION_PAGE_READ_CURRENT;
+	mpi_request.Header.PageVersion = mpi_reply.Header.PageVersion;
+	mpi_request.Header.PageNumber = mpi_reply.Header.PageNumber;
+	mpi_request.Header.PageType = mpi_reply.Header.PageType;
+	mpi_request.ExtPageLength = mpi_reply.ExtPageLength;
+	mpi_request.ExtPageType = mpi_reply.ExtPageType;
+	mem.config_page_sz = le16_to_cpu(mpi_reply.ExtPageLength) * 4;
+	if (mem.config_page_sz > ioc->config_page_sz) {
+		r = _config_alloc_config_dma_memory(ioc, &mem);
+		if (r)
+			goto out;
+	} else {
+		mem.config_page_dma = ioc->config_page_dma;
+		mem.config_page = ioc->config_page;
+	}
+	ioc->base_add_sg_single(&mpi_request.PageBufferSGE,
+	    MPT2_CONFIG_COMMON_SGLFLAGS | mem.config_page_sz,
+	    mem.config_page_dma);
+	r = _config_request(ioc, &mpi_request, &mpi_reply,
+	    MPT2_CONFIG_PAGE_DEFAULT_TIMEOUT);
+	if (r)
+		goto out;
+
+	r = -1;
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) & MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
+		goto done;
+	config_page = mem.config_page;
+	for (i = 0; i < config_page->NumElements; i++) {
+		if ((config_page->ConfigElement[i].ElementFlags &
+		    MPI2_RAIDCONFIG0_EFLAGS_MASK_ELEMENT_TYPE) !=
+		    MPI2_RAIDCONFIG0_EFLAGS_VOL_PHYS_DISK_ELEMENT)
+			continue;
+		if (config_page->ConfigElement[i].PhysDiskDevHandle ==
+		    pd_handle) {
+			*volume_handle = le16_to_cpu(config_page->
+			    ConfigElement[i].VolDevHandle);
+			r = 0;
+			goto done;
+		}
+	}
+
+ done:
+	if (mem.config_page_sz > ioc->config_page_sz)
+		_config_free_config_dma_memory(ioc, &mem);
+
+ out:
+	mutex_unlock(&ioc->config_cmds.mutex);
+	return r;
+}
+
+/**
+ * mpt2sas_config_get_volume_wwid - returns wwid given the volume handle
+ * @ioc: per adapter object
+ * @volume_handle: volume handle
+ * @wwid: volume wwid
+ * Context: sleep.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_config_get_volume_wwid(struct MPT2SAS_ADAPTER *ioc, u16 volume_handle,
+    u64 *wwid)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2RaidVolPage1_t raid_vol_pg1;
+
+	*wwid = 0;
+	if (!(mpt2sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
+	    &raid_vol_pg1, MPI2_RAID_VOLUME_PGAD_FORM_HANDLE,
+	    volume_handle))) {
+		*wwid = le64_to_cpu(raid_vol_pg1.WWID);
+		return 0;
+	} else
+		return -1;
+}
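+
+/*
+ * Illustrative sketch only (hypothetical helper): resolving the volume
+ * WWID for a hidden raid component by chaining
+ * mpt2sas_config_get_volume_handle() and
+ * mpt2sas_config_get_volume_wwid() above.
+ */
+static int _config_example_pd_to_volume_wwid(struct MPT2SAS_ADAPTER *ioc,
+    u16 pd_handle, u64 *wwid)
+{
+	u16 volume_handle;
+
+	if (mpt2sas_config_get_volume_handle(ioc, pd_handle, &volume_handle))
+		return -1;
+	if (!volume_handle)
+		return -1;
+	return mpt2sas_config_get_volume_wwid(ioc, volume_handle, wwid);
+}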
diff --git a/drivers/scsi/mpt2sas/mpt2sas_ctl.c b/drivers/scsi/mpt2sas/mpt2sas_ctl.c
new file mode 100644
index 0000000..2d4f85c
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_ctl.c
@@ -0,0 +1,2516 @@
+/*
+ * Management Module Support for MPT (Message Passing Technology) based
+ * controllers
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_ctl.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/smp_lock.h>
+#include <linux/compat.h>
+#include <linux/poll.h>
+
+#include <linux/io.h>
+#include <linux/uaccess.h>
+
+#include "mpt2sas_base.h"
+#include "mpt2sas_ctl.h"
+
+static struct fasync_struct *async_queue;
+static DECLARE_WAIT_QUEUE_HEAD(ctl_poll_wait);
+
+/**
+ * enum block_state - blocking state
+ * @NON_BLOCKING: non blocking
+ * @BLOCKING: blocking
+ *
+ * These states are for ioctls that need to wait for a response
+ * from firmware, so they probably require sleep.
+ */
+enum block_state {
+	NON_BLOCKING,
+	BLOCKING,
+};
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _ctl_display_some_debug - debug routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @calling_function_name: string pass from calling function
+ * @mpi_reply: reply message frame
+ * Context: none.
+ *
+ * Function for displaying debug info, helpful when debugging issues
+ * in this module.
+ */
+static void
+_ctl_display_some_debug(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    char *calling_function_name, MPI2DefaultReply_t *mpi_reply)
+{
+	Mpi2ConfigRequest_t *mpi_request;
+	char *desc = NULL;
+
+	if (!(ioc->logging_level & MPT_DEBUG_IOCTL))
+		return;
+
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	switch (mpi_request->Function) {
+	case MPI2_FUNCTION_SCSI_IO_REQUEST:
+	{
+		Mpi2SCSIIORequest_t *scsi_request =
+		    (Mpi2SCSIIORequest_t *)mpi_request;
+
+		snprintf(ioc->tmp_string, MPT_STRING_LENGTH,
+		    "scsi_io, cmd(0x%02x), cdb_len(%d)",
+		    scsi_request->CDB.CDB32[0],
+		    le16_to_cpu(scsi_request->IoFlags) & 0xF);
+		desc = ioc->tmp_string;
+		break;
+	}
+	case MPI2_FUNCTION_SCSI_TASK_MGMT:
+		desc = "task_mgmt";
+		break;
+	case MPI2_FUNCTION_IOC_INIT:
+		desc = "ioc_init";
+		break;
+	case MPI2_FUNCTION_IOC_FACTS:
+		desc = "ioc_facts";
+		break;
+	case MPI2_FUNCTION_CONFIG:
+	{
+		Mpi2ConfigRequest_t *config_request =
+		    (Mpi2ConfigRequest_t *)mpi_request;
+
+		snprintf(ioc->tmp_string, MPT_STRING_LENGTH,
+		    "config, type(0x%02x), ext_type(0x%02x), number(%d)",
+		    (config_request->Header.PageType &
+		     MPI2_CONFIG_PAGETYPE_MASK), config_request->ExtPageType,
+		    config_request->Header.PageNumber);
+		desc = ioc->tmp_string;
+		break;
+	}
+	case MPI2_FUNCTION_PORT_FACTS:
+		desc = "port_facts";
+		break;
+	case MPI2_FUNCTION_PORT_ENABLE:
+		desc = "port_enable";
+		break;
+	case MPI2_FUNCTION_EVENT_NOTIFICATION:
+		desc = "event_notification";
+		break;
+	case MPI2_FUNCTION_FW_DOWNLOAD:
+		desc = "fw_download";
+		break;
+	case MPI2_FUNCTION_FW_UPLOAD:
+		desc = "fw_upload";
+		break;
+	case MPI2_FUNCTION_RAID_ACTION:
+		desc = "raid_action";
+		break;
+	case MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH:
+	{
+		Mpi2SCSIIORequest_t *scsi_request =
+		    (Mpi2SCSIIORequest_t *)mpi_request;
+
+		snprintf(ioc->tmp_string, MPT_STRING_LENGTH,
+		    "raid_pass, cmd(0x%02x), cdb_len(%d)",
+		    scsi_request->CDB.CDB32[0],
+		    le16_to_cpu(scsi_request->IoFlags) & 0xF);
+		desc = ioc->tmp_string;
+		break;
+	}
+	case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
+		desc = "sas_iounit_cntl";
+		break;
+	case MPI2_FUNCTION_SATA_PASSTHROUGH:
+		desc = "sata_pass";
+		break;
+	case MPI2_FUNCTION_DIAG_BUFFER_POST:
+		desc = "diag_buffer_post";
+		break;
+	case MPI2_FUNCTION_DIAG_RELEASE:
+		desc = "diag_release";
+		break;
+	case MPI2_FUNCTION_SMP_PASSTHROUGH:
+		desc = "smp_passthrough";
+		break;
+	}
+
+	if (!desc)
+		return;
+
+	printk(MPT2SAS_DEBUG_FMT "%s: %s, smid(%d)\n",
+	    ioc->name, calling_function_name, desc, smid);
+
+	if (!mpi_reply)
+		return;
+
+	if (mpi_reply->IOCStatus || mpi_reply->IOCLogInfo)
+		printk(MPT2SAS_DEBUG_FMT
+		    "\tiocstatus(0x%04x), loginfo(0x%08x)\n",
+		    ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
+		    le32_to_cpu(mpi_reply->IOCLogInfo));
+
+	if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST ||
+	    mpi_request->Function ==
+	    MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH) {
+		Mpi2SCSIIOReply_t *scsi_reply =
+		    (Mpi2SCSIIOReply_t *)mpi_reply;
+		if (scsi_reply->SCSIState || scsi_reply->SCSIStatus)
+			printk(MPT2SAS_DEBUG_FMT
+			    "\tscsi_state(0x%02x), scsi_status"
+			    "(0x%02x)\n", ioc->name,
+			    scsi_reply->SCSIState,
+			    scsi_reply->SCSIStatus);
+	}
+}
+#endif
+
+/**
+ * mpt2sas_ctl_done - ctl module completion routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame (lower 32bit addr)
+ * Context: none.
+ *
+ * The callback handler when using ioc->ctl_cb_idx.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_ctl_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+
+	if (ioc->ctl_cmds.status == MPT2_CMD_NOT_USED)
+		return;
+	if (ioc->ctl_cmds.smid != smid)
+		return;
+	ioc->ctl_cmds.status |= MPT2_CMD_COMPLETE;
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (mpi_reply) {
+		memcpy(ioc->ctl_cmds.reply, mpi_reply, mpi_reply->MsgLength*4);
+		ioc->ctl_cmds.status |= MPT2_CMD_REPLY_VALID;
+	}
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	_ctl_display_some_debug(ioc, smid, "ctl_done", mpi_reply);
+#endif
+	ioc->ctl_cmds.status &= ~MPT2_CMD_PENDING;
+	complete(&ioc->ctl_cmds.done);
+}
+
+/**
+ * _ctl_check_event_type - determines when an event needs logging
+ * @ioc: per adapter object
+ * @event: firmware event
+ *
+ * The bitmask in ioc->event_type[] indicates which events should be
+ * saved in the driver event_log.  This bitmask is set by the application.
+ *
+ * Returns 1 when the event should be captured, or zero when there is no match.
+ */
+static int
+_ctl_check_event_type(struct MPT2SAS_ADAPTER *ioc, u16 event)
+{
+	u16 i;
+	u32 desired_event;
+
+	if (event >= 128 || !event || !ioc->event_log)
+		return 0;
+
+	desired_event = (1 << (event % 32));
+	if (!desired_event)
+		desired_event = 1;
+	i = event / 32;
+	return desired_event & ioc->event_type[i];
+}
+
+/**
+ * mpt2sas_ctl_add_to_event_log - add event
+ * @ioc: per adapter object
+ * @mpi_reply: reply message frame
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_ctl_add_to_event_log(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventNotificationReply_t *mpi_reply)
+{
+	struct MPT2_IOCTL_EVENTS *event_log;
+	u16 event;
+	int i;
+	u32 sz, event_data_sz;
+	u8 send_aen = 0;
+
+	if (!ioc->event_log)
+		return;
+
+	event = le16_to_cpu(mpi_reply->Event);
+
+	if (_ctl_check_event_type(ioc, event)) {
+
+		/* insert entry into circular event_log */
+		i = ioc->event_context % MPT2SAS_CTL_EVENT_LOG_SIZE;
+		event_log = ioc->event_log;
+		event_log[i].event = event;
+		event_log[i].context = ioc->event_context++;
+
+		event_data_sz = le16_to_cpu(mpi_reply->EventDataLength)*4;
+		sz = min_t(u32, event_data_sz, MPT2_EVENT_DATA_SIZE);
+		memset(event_log[i].data, 0, MPT2_EVENT_DATA_SIZE);
+		memcpy(event_log[i].data, mpi_reply->EventData, sz);
+		send_aen = 1;
+	}
+
+	/* The aen_event_read_flag is set until the
+	 * application has read the event log.
+	 * For MPI2_EVENT_LOG_ENTRY_ADDED, we always notify.
+	 */
+	if (event == MPI2_EVENT_LOG_ENTRY_ADDED ||
+	    (send_aen && !ioc->aen_event_read_flag)) {
+		ioc->aen_event_read_flag = 1;
+		wake_up_interruptible(&ctl_poll_wait);
+		if (async_queue)
+			kill_fasync(&async_queue, SIGIO, POLL_IN);
+	}
+}
+
+/**
+ * mpt2sas_ctl_event_callback - firmware event handler (called at ISR time)
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @reply: reply message frame (lower 32-bit addr)
+ * Context: interrupt.
+ *
+ * This function merely adds the event to the driver event log.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_ctl_event_callback(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, u32 reply)
+{
+	Mpi2EventNotificationReply_t *mpi_reply;
+
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	mpt2sas_ctl_add_to_event_log(ioc, mpi_reply);
+}
+
+/**
+ * _ctl_verify_adapter - validates ioc_number passed from application
+ * @ioc_number: ioc number passed from the application
+ * @iocpp: The ioc pointer is returned in this.
+ *
+ * Returns -1 on error, else the ioc_number.
+ */
+static int
+_ctl_verify_adapter(int ioc_number, struct MPT2SAS_ADAPTER **iocpp)
+{
+	struct MPT2SAS_ADAPTER *ioc;
+
+	list_for_each_entry(ioc, &mpt2sas_ioc_list, list) {
+		if (ioc->id != ioc_number)
+			continue;
+		*iocpp = ioc;
+		return ioc_number;
+	}
+	*iocpp = NULL;
+	return -1;
+}
+
+/**
+ * mpt2sas_ctl_reset_handler - reset callback handler (for ctl)
+ * @ioc: per adapter object
+ * @reset_phase: phase
+ *
+ * The handler for doing any required cleanup or initialization.
+ *
+ * The reset phase can be MPT2_IOC_PRE_RESET, MPT2_IOC_AFTER_RESET,
+ * MPT2_IOC_DONE_RESET
+ */
+void
+mpt2sas_ctl_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase)
+{
+	switch (reset_phase) {
+	case MPT2_IOC_PRE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_PRE_RESET\n", ioc->name, __func__));
+		break;
+	case MPT2_IOC_AFTER_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_AFTER_RESET\n", ioc->name, __func__));
+		if (ioc->ctl_cmds.status & MPT2_CMD_PENDING) {
+			ioc->ctl_cmds.status |= MPT2_CMD_RESET;
+			mpt2sas_base_free_smid(ioc, ioc->ctl_cmds.smid);
+			complete(&ioc->ctl_cmds.done);
+		}
+		break;
+	case MPT2_IOC_DONE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_DONE_RESET\n", ioc->name, __func__));
+		break;
+	}
+}
+
+/**
+ * _ctl_fasync - async notification registration
+ * @fd - file descriptor
+ * @filep - file pointer
+ * @mode - on/off flag
+ *
+ * Called when an application requests the fasync callback handler.
+ */
+static int
+_ctl_fasync(int fd, struct file *filep, int mode)
+{
+	return fasync_helper(fd, filep, mode, &async_queue);
+}
+
+/**
+ * _ctl_release - close entry point
+ * @inode - inode pointer
+ * @filep - file pointer
+ *
+ * Called when an application releases the fasync callback handler.
+ */
+static int
+_ctl_release(struct inode *inode, struct file *filep)
+{
+	return fasync_helper(-1, filep, 0, &async_queue);
+}
+
+/**
+ * _ctl_poll - poll entry point
+ * @filep - file pointer
+ * @wait - poll table
+ *
+ * Returns POLLIN | POLLRDNORM when an unread event is pending on any ioc.
+ */
+static unsigned int
+_ctl_poll(struct file *filep, poll_table *wait)
+{
+	struct MPT2SAS_ADAPTER *ioc;
+
+	poll_wait(filep, &ctl_poll_wait, wait);
+
+	list_for_each_entry(ioc, &mpt2sas_ioc_list, list) {
+		if (ioc->aen_event_read_flag)
+			return POLLIN | POLLRDNORM;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_do_task_abort - assign an active smid to the abort_task
+ * @ioc: per adapter object
+ * @karg - (struct mpt2_ioctl_command)
+ * @tm_request - pointer to mf from user space
+ *
+ * Returns 0 when an smid is found, else 1 on failure;
+ * on failure, the reply frame is filled in.
+ */
+static int
+_ctl_do_task_abort(struct MPT2SAS_ADAPTER *ioc, struct mpt2_ioctl_command *karg,
+    Mpi2SCSITaskManagementRequest_t *tm_request)
+{
+	u8 found = 0;
+	u16 i;
+	u16 handle;
+	struct scsi_cmnd *scmd;
+	struct MPT2SAS_DEVICE *priv_data;
+	unsigned long flags;
+	Mpi2SCSITaskManagementReply_t *tm_reply;
+	u32 sz;
+	u32 lun;
+
+	lun = scsilun_to_int((struct scsi_lun *)tm_request->LUN);
+
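+	/* walk the outstanding command lookup table for a request matching
+	 * this device handle and lun; its smid becomes the TaskMID to abort
+	 */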
+	handle = le16_to_cpu(tm_request->DevHandle);
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	for (i = ioc->request_depth; i && !found; i--) {
+		scmd = ioc->scsi_lookup[i - 1].scmd;
+		if (scmd == NULL || scmd->device == NULL ||
+		    scmd->device->hostdata == NULL)
+			continue;
+		if (lun != scmd->device->lun)
+			continue;
+		priv_data = scmd->device->hostdata;
+		if (priv_data->sas_target == NULL)
+			continue;
+		if (priv_data->sas_target->handle != handle)
+			continue;
+		tm_request->TaskMID = cpu_to_le16(ioc->scsi_lookup[i - 1].smid);
+		found = 1;
+	}
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+
+	if (!found) {
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "ABORT_TASK: "
+		    "DevHandle(0x%04x), lun(%d), no active mid!!\n", ioc->name,
+		    tm_request->DevHandle, lun));
+		tm_reply = ioc->ctl_cmds.reply;
+		tm_reply->DevHandle = tm_request->DevHandle;
+		tm_reply->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
+		tm_reply->TaskType = MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK;
+		tm_reply->MsgLength = sizeof(Mpi2SCSITaskManagementReply_t)/4;
+		tm_reply->VP_ID = tm_request->VP_ID;
+		tm_reply->VF_ID = tm_request->VF_ID;
+		sz = min_t(u32, karg->max_reply_bytes, ioc->reply_sz);
+		if (copy_to_user(karg->reply_frame_buf_ptr, ioc->ctl_cmds.reply,
+		    sz))
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+		return 1;
+	}
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "ABORT_TASK: "
+	    "DevHandle(0x%04x), lun(%d), smid(%d)\n", ioc->name,
+	    tm_request->DevHandle, lun, tm_request->TaskMID));
+	return 0;
+}
+
+/**
+ * _ctl_do_mpt_command - main handler for MPT2COMMAND opcode
+ * @ioc: per adapter object
+ * @karg - (struct mpt2_ioctl_command)
+ * @mf - pointer to mf in user space
+ * @state - NON_BLOCKING or BLOCKING
+ */
+static long
+_ctl_do_mpt_command(struct MPT2SAS_ADAPTER *ioc,
+    struct mpt2_ioctl_command karg, void __user *mf, enum block_state state)
+{
+	MPI2RequestHeader_t *mpi_request;
+	MPI2DefaultReply_t *mpi_reply;
+	u32 ioc_state;
+	u16 ioc_status;
+	u16 smid;
+	unsigned long timeout, timeleft;
+	u8 issue_reset;
+	u32 sz;
+	void *psge;
+	void *priv_sense = NULL;
+	void *data_out = NULL;
+	dma_addr_t data_out_dma;
+	size_t data_out_sz = 0;
+	void *data_in = NULL;
+	dma_addr_t data_in_dma;
+	size_t data_in_sz = 0;
+	u32 sgl_flags;
+	long ret;
+	u16 wait_state_count;
+
+	issue_reset = 0;
+
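+	/* O_NONBLOCK callers must not sleep on the ctl_cmds mutex; fail
+	 * with -EAGAIN instead
+	 */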
+	if (state == NON_BLOCKING && !mutex_trylock(&ioc->ctl_cmds.mutex))
+		return -EAGAIN;
+	else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex))
+		return -ERESTARTSYS;
+
+	if (ioc->ctl_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: ctl_cmd in use\n",
+		    ioc->name, __func__);
+		ret = -EAGAIN;
+		goto out;
+	}
+
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == 10) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			ret = -EFAULT;
+			goto out;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+	if (wait_state_count)
+		printk(MPT2SAS_INFO_FMT "%s: ioc is operational\n",
+		    ioc->name, __func__);
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->ctl_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		ret = -EAGAIN;
+		goto out;
+	}
+
+	ret = 0;
+	ioc->ctl_cmds.status = MPT2_CMD_PENDING;
+	memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->ctl_cmds.smid = smid;
+	data_out_sz = karg.data_out_size;
+	data_in_sz = karg.data_in_size;
+
+	/* copy in request message frame from user */
+	if (copy_from_user(mpi_request, mf, karg.data_sge_offset*4)) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__, __LINE__,
+		    __func__);
+		ret = -EFAULT;
+		mpt2sas_base_free_smid(ioc, smid);
+		goto out;
+	}
+
+	if (mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST ||
+	    mpi_request->Function == MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH) {
+		if (!mpi_request->FunctionDependent1 ||
+		    mpi_request->FunctionDependent1 >
+		    cpu_to_le16(ioc->facts.MaxDevHandle)) {
+			ret = -EINVAL;
+			mpt2sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+	}
+
+	/* obtain dma-able memory for data transfer */
+	if (data_out_sz) /* WRITE */ {
+		data_out = pci_alloc_consistent(ioc->pdev, data_out_sz,
+		    &data_out_dma);
+		if (!data_out) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret = -ENOMEM;
+			mpt2sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+		if (copy_from_user(data_out, karg.data_out_buf_ptr,
+			data_out_sz)) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret =  -EFAULT;
+			mpt2sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+	}
+
+	if (data_in_sz) /* READ */ {
+		data_in = pci_alloc_consistent(ioc->pdev, data_in_sz,
+		    &data_in_dma);
+		if (!data_in) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret = -ENOMEM;
+			mpt2sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+	}
+
+	/* add scatter gather elements */
+	psge = (void *)mpi_request + (karg.data_sge_offset*4);
+
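+	/* MPI simple SGEs carry the flags in the upper byte (shifted by
+	 * MPI2_SGE_FLAGS_SHIFT) and the buffer length in the low-order bits
+	 * of the same dword, hence the "sgl_flags | size" ORs below.
+	 */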
+	if (!data_out_sz && !data_in_sz) {
+		mpt2sas_base_build_zero_len_sge(ioc, psge);
+	} else if (data_out_sz && data_in_sz) {
+		/* WRITE sgel first */
+		sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+		    MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC);
+		sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+		ioc->base_add_sg_single(psge, sgl_flags |
+		    data_out_sz, data_out_dma);
+
+		/* incr sgel */
+		psge += ioc->sge_size;
+
+		/* READ sgel last */
+		sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+		    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER |
+		    MPI2_SGE_FLAGS_END_OF_LIST);
+		sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+		ioc->base_add_sg_single(psge, sgl_flags |
+		    data_in_sz, data_in_dma);
+	} else if (data_out_sz) /* WRITE */ {
+		sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+		    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER |
+		    MPI2_SGE_FLAGS_END_OF_LIST | MPI2_SGE_FLAGS_HOST_TO_IOC);
+		sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+		ioc->base_add_sg_single(psge, sgl_flags |
+		    data_out_sz, data_out_dma);
+	} else if (data_in_sz) /* READ */ {
+		sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+		    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER |
+		    MPI2_SGE_FLAGS_END_OF_LIST);
+		sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+		ioc->base_add_sg_single(psge, sgl_flags |
+		    data_in_sz, data_in_dma);
+	}
+
+	/* send command to firmware */
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	_ctl_display_some_debug(ioc, smid, "ctl_request", NULL);
+#endif
+
+	switch (mpi_request->Function) {
+	case MPI2_FUNCTION_SCSI_IO_REQUEST:
+	case MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH:
+	{
+		Mpi2SCSIIORequest_t *scsiio_request =
+		    (Mpi2SCSIIORequest_t *)mpi_request;
+		scsiio_request->SenseBufferLowAddress =
+		    (u32)mpt2sas_base_get_sense_buffer_dma(ioc, smid);
+		priv_sense = mpt2sas_base_get_sense_buffer(ioc, smid);
+		memset(priv_sense, 0, SCSI_SENSE_BUFFERSIZE);
+		mpt2sas_base_put_smid_scsi_io(ioc, smid, 0,
+		    le16_to_cpu(mpi_request->FunctionDependent1));
+		break;
+	}
+	case MPI2_FUNCTION_SCSI_TASK_MGMT:
+	{
+		Mpi2SCSITaskManagementRequest_t *tm_request =
+		    (Mpi2SCSITaskManagementRequest_t *)mpi_request;
+
+		if (tm_request->TaskType ==
+		    MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK) {
+			if (_ctl_do_task_abort(ioc, &karg, tm_request))
+				goto out;
+		}
+
+		mutex_lock(&ioc->tm_cmds.mutex);
+		mpt2sas_scsih_set_tm_flag(ioc, le16_to_cpu(
+		    tm_request->DevHandle));
+		mpt2sas_base_put_smid_hi_priority(ioc, smid,
+		    mpi_request->VF_ID);
+		break;
+	}
+	case MPI2_FUNCTION_SMP_PASSTHROUGH:
+	{
+		Mpi2SmpPassthroughRequest_t *smp_request =
+		    (Mpi2SmpPassthroughRequest_t *)mpi_request;
+		u8 *data;
+
+		/* ioc determines which port to use */
+		smp_request->PhysicalPort = 0xFF;
+		if (smp_request->PassthroughFlags &
+		    MPI2_SMP_PT_REQ_PT_FLAGS_IMMEDIATE)
+			data = (u8 *)&smp_request->SGL;
+		else
+			data = data_out;
+
+		if (data[1] == 0x91 && (data[10] == 1 || data[10] == 2)) {
+			ioc->ioc_link_reset_in_progress = 1;
+			ioc->ignore_loginfos = 1;
+		}
+		mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+		break;
+	}
+	case MPI2_FUNCTION_SAS_IO_UNIT_CONTROL:
+	{
+		Mpi2SasIoUnitControlRequest_t *sasiounit_request =
+		    (Mpi2SasIoUnitControlRequest_t *)mpi_request;
+
+		if (sasiounit_request->Operation == MPI2_SAS_OP_PHY_HARD_RESET
+		    || sasiounit_request->Operation ==
+		    MPI2_SAS_OP_PHY_LINK_RESET) {
+			ioc->ioc_link_reset_in_progress = 1;
+			ioc->ignore_loginfos = 1;
+		}
+		mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+		break;
+	}
+	default:
+		mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+		break;
+	}
+
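+	/* karg.timeout is in seconds; enforce the driver's minimum before
+	 * converting to jiffies
+	 */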
+	if (karg.timeout < MPT2_IOCTL_DEFAULT_TIMEOUT)
+		timeout = MPT2_IOCTL_DEFAULT_TIMEOUT;
+	else
+		timeout = karg.timeout;
+	timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done,
+	    timeout*HZ);
+	if (mpi_request->Function == MPI2_FUNCTION_SCSI_TASK_MGMT) {
+		Mpi2SCSITaskManagementRequest_t *tm_request =
+		    (Mpi2SCSITaskManagementRequest_t *)mpi_request;
+		mutex_unlock(&ioc->tm_cmds.mutex);
+		mpt2sas_scsih_clear_tm_flag(ioc, le16_to_cpu(
+		    tm_request->DevHandle));
+	} else if ((mpi_request->Function == MPI2_FUNCTION_SMP_PASSTHROUGH ||
+	    mpi_request->Function == MPI2_FUNCTION_SAS_IO_UNIT_CONTROL) &&
+		ioc->ioc_link_reset_in_progress) {
+		ioc->ioc_link_reset_in_progress = 0;
+		ioc->ignore_loginfos = 0;
+	}
+	if (!(ioc->ctl_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n", ioc->name,
+		    __func__);
+		_debug_dump_mf(mpi_request, karg.data_sge_offset);
+		if (!(ioc->ctl_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	mpi_reply = ioc->ctl_cmds.reply;
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (mpi_reply->Function == MPI2_FUNCTION_SCSI_TASK_MGMT &&
+	    (ioc->logging_level & MPT_DEBUG_TM)) {
+		Mpi2SCSITaskManagementReply_t *tm_reply =
+		    (Mpi2SCSITaskManagementReply_t *)mpi_reply;
+
+		printk(MPT2SAS_DEBUG_FMT "TASK_MGMT: "
+		    "IOCStatus(0x%04x), IOCLogInfo(0x%08x), "
+		    "TerminationCount(0x%08x)\n", ioc->name,
+		    tm_reply->IOCStatus, tm_reply->IOCLogInfo,
+		    tm_reply->TerminationCount);
+	}
+#endif
+	/* copy out xdata to user */
+	if (data_in_sz) {
+		if (copy_to_user(karg.data_in_buf_ptr, data_in,
+		    data_in_sz)) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret = -ENODATA;
+			goto out;
+		}
+	}
+
+	/* copy out reply message frame to user */
+	if (karg.max_reply_bytes) {
+		sz = min_t(u32, karg.max_reply_bytes, ioc->reply_sz);
+		if (copy_to_user(karg.reply_frame_buf_ptr, ioc->ctl_cmds.reply,
+		    sz)) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret = -ENODATA;
+			goto out;
+		}
+	}
+
+	/* copy out sense to user */
+	if (karg.max_sense_bytes && (mpi_request->Function ==
+	    MPI2_FUNCTION_SCSI_IO_REQUEST || mpi_request->Function ==
+	    MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) {
+		sz = min_t(u32, karg.max_sense_bytes, SCSI_SENSE_BUFFERSIZE);
+		if (copy_to_user(karg.sense_data_ptr, priv_sense, sz)) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+			    __LINE__, __func__);
+			ret = -ENODATA;
+			goto out;
+		}
+	}
+
+ issue_host_reset:
+	if (issue_reset) {
+		if ((mpi_request->Function == MPI2_FUNCTION_SCSI_IO_REQUEST ||
+		    mpi_request->Function ==
+		    MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH)) {
+			printk(MPT2SAS_INFO_FMT "issue target reset: handle "
+			    "= (0x%04x)\n", ioc->name,
+			    mpi_request->FunctionDependent1);
+			mutex_lock(&ioc->tm_cmds.mutex);
+			mpt2sas_scsih_issue_tm(ioc,
+			    mpi_request->FunctionDependent1, 0,
+			    MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 10);
+			ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+			mutex_unlock(&ioc->tm_cmds.mutex);
+		} else
+			mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+			    FORCE_BIG_HAMMER);
+	}
+
+ out:
+
+	/* free memory associated with sg buffers */
+	if (data_in)
+		pci_free_consistent(ioc->pdev, data_in_sz, data_in,
+		    data_in_dma);
+
+	if (data_out)
+		pci_free_consistent(ioc->pdev, data_out_sz, data_out,
+		    data_out_dma);
+
+	ioc->ctl_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->ctl_cmds.mutex);
+	return ret;
+}
+
+/**
+ * _ctl_getiocinfo - main handler for MPT2IOCINFO opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_getiocinfo(void __user *arg)
+{
+	struct mpt2_ioctl_iocinfo karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	u8 revision;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
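+	/* only hdr.ioc_number from the user copy is needed; rebuild the rest
+	 * of karg from the adapter state before copying it back out
+	 */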
+	memset(&karg, 0 , sizeof(karg));
+	karg.adapter_type = MPT2_IOCTL_INTERFACE_SAS2;
+	if (ioc->pfacts)
+		karg.port_number = ioc->pfacts[0].PortNumber;
+	pci_read_config_byte(ioc->pdev, PCI_CLASS_REVISION, &revision);
+	karg.hw_rev = revision;
+	karg.pci_id = ioc->pdev->device;
+	karg.subsystem_device = ioc->pdev->subsystem_device;
+	karg.subsystem_vendor = ioc->pdev->subsystem_vendor;
+	karg.pci_information.u.bits.bus = ioc->pdev->bus->number;
+	karg.pci_information.u.bits.device = PCI_SLOT(ioc->pdev->devfn);
+	karg.pci_information.u.bits.function = PCI_FUNC(ioc->pdev->devfn);
+	karg.pci_information.segment_id = pci_domain_nr(ioc->pdev->bus);
+	karg.firmware_version = ioc->facts.FWVersion.Word;
+	strncpy(karg.driver_version, MPT2SAS_DRIVER_VERSION,
+	    MPT2_IOCTL_VERSION_LENGTH);
+	karg.driver_version[MPT2_IOCTL_VERSION_LENGTH - 1] = '\0';
+	karg.bios_version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
+
+	if (copy_to_user(arg, &karg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_eventquery - main handler for MPT2EVENTQUERY opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_eventquery(void __user *arg)
+{
+	struct mpt2_ioctl_eventquery karg;
+	struct MPT2SAS_ADAPTER *ioc;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
+	karg.event_entries = MPT2SAS_CTL_EVENT_LOG_SIZE;
+	memcpy(karg.event_types, ioc->event_type,
+	    MPI2_EVENT_NOTIFY_EVENTMASK_WORDS * sizeof(u32));
+
+	if (copy_to_user(arg, &karg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_eventenable - main handler for MPT2EVENTENABLE opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_eventenable(void __user *arg)
+{
+	struct mpt2_ioctl_eventenable karg;
+	struct MPT2SAS_ADAPTER *ioc;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
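+	/* the event log is allocated only once; if it already exists this
+	 * request is a no-op
+	 */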
+	if (ioc->event_log)
+		return 0;
+	memcpy(ioc->event_type, karg.event_types,
+	    MPI2_EVENT_NOTIFY_EVENTMASK_WORDS * sizeof(u32));
+	mpt2sas_base_validate_event_type(ioc, ioc->event_type);
+
+	/* initialize event_log */
+	ioc->event_context = 0;
+	ioc->aen_event_read_flag = 0;
+	ioc->event_log = kcalloc(MPT2SAS_CTL_EVENT_LOG_SIZE,
+	    sizeof(struct MPT2_IOCTL_EVENTS), GFP_KERNEL);
+	if (!ioc->event_log) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -ENOMEM;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_eventreport - main handler for MPT2EVENTREPORT opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_eventreport(void __user *arg)
+{
+	struct mpt2_ioctl_eventreport karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	u32 number_bytes, max_events, max;
+	struct mpt2_ioctl_eventreport __user *uarg = arg;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
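+	/* compute how many log entries fit in the buffer supplied by the
+	 * application, capped at the size of the driver's circular log
+	 */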
+	number_bytes = karg.hdr.max_data_size -
+	    sizeof(struct mpt2_ioctl_header);
+	max_events = number_bytes/sizeof(struct MPT2_IOCTL_EVENTS);
+	max = min_t(u32, MPT2SAS_CTL_EVENT_LOG_SIZE, max_events);
+
+	/* If fewer than 1 event is requested, there must have
+	 * been some type of error.
+	 */
+	if (!max || !ioc->event_log)
+		return -ENODATA;
+
+	number_bytes = max * sizeof(struct MPT2_IOCTL_EVENTS);
+	if (copy_to_user(uarg->event_data, ioc->event_log, number_bytes)) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+
+	/* reset flag so SIGIO can restart */
+	ioc->aen_event_read_flag = 0;
+	return 0;
+}
+
+/**
+ * _ctl_do_reset - main handler for MPT2HARDRESET opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_do_reset(void __user *arg)
+{
+	struct mpt2_ioctl_diag_reset karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	int retval;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
+	retval = mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+	    FORCE_BIG_HAMMER);
+	printk(MPT2SAS_INFO_FMT "host reset: %s\n",
+	    ioc->name, ((!retval) ? "SUCCESS" : "FAILED"));
+	return 0;
+}
+
+/**
+ * _ctl_btdh_search_sas_device - searching for sas device
+ * @ioc: per adapter object
+ * @btdh: btdh ioctl payload
+ */
+static int
+_ctl_btdh_search_sas_device(struct MPT2SAS_ADAPTER *ioc,
+    struct mpt2_ioctl_btdh_mapping *btdh)
+{
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	int rc = 0;
+
+	if (list_empty(&ioc->sas_device_list))
+		return rc;
+
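+	/* bus/id of 0xFFFFFFFF means "look up bus/id by handle"; a handle
+	 * of 0xFFFF means "look up the handle by bus/id"
+	 */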
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
+		if (btdh->bus == 0xFFFFFFFF && btdh->id == 0xFFFFFFFF &&
+		    btdh->handle == sas_device->handle) {
+			btdh->bus = sas_device->channel;
+			btdh->id = sas_device->id;
+			rc = 1;
+			goto out;
+		} else if (btdh->bus == sas_device->channel && btdh->id ==
+		    sas_device->id && btdh->handle == 0xFFFF) {
+			btdh->handle = sas_device->handle;
+			rc = 1;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	return rc;
+}
+
+/**
+ * _ctl_btdh_search_raid_device - searching for raid device
+ * @ioc: per adapter object
+ * @btdh: btdh ioctl payload
+ */
+static int
+_ctl_btdh_search_raid_device(struct MPT2SAS_ADAPTER *ioc,
+    struct mpt2_ioctl_btdh_mapping *btdh)
+{
+	struct _raid_device *raid_device;
+	unsigned long flags;
+	int rc = 0;
+
+	if (list_empty(&ioc->raid_device_list))
+		return rc;
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
+		if (btdh->bus == 0xFFFFFFFF && btdh->id == 0xFFFFFFFF &&
+		    btdh->handle == raid_device->handle) {
+			btdh->bus = raid_device->channel;
+			btdh->id = raid_device->id;
+			rc = 1;
+			goto out;
+		} else if (btdh->bus == raid_device->channel && btdh->id ==
+		    raid_device->id && btdh->handle == 0xFFFF) {
+			btdh->handle = raid_device->handle;
+			rc = 1;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+	return rc;
+}
+
+/**
+ * _ctl_btdh_mapping - main handler for MPT2BTDHMAPPING opcode
+ * @arg - user space buffer containing ioctl content
+ */
+static long
+_ctl_btdh_mapping(void __user *arg)
+{
+	struct mpt2_ioctl_btdh_mapping karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	int rc;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	rc = _ctl_btdh_search_sas_device(ioc, &karg);
+	if (!rc)
+		_ctl_btdh_search_raid_device(ioc, &karg);
+
+	if (copy_to_user(arg, &karg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_diag_capability - return diag buffer capability
+ * @ioc: per adapter object
+ * @buffer_type: specifies either TRACE or SNAPSHOT
+ *
+ * returns 1 when diag buffer support is enabled in firmware
+ */
+static u8
+_ctl_diag_capability(struct MPT2SAS_ADAPTER *ioc, u8 buffer_type)
+{
+	u8 rc = 0;
+
+	switch (buffer_type) {
+	case MPI2_DIAG_BUF_TYPE_TRACE:
+		if (ioc->facts.IOCCapabilities &
+		    MPI2_IOCFACTS_CAPABILITY_DIAG_TRACE_BUFFER)
+			rc = 1;
+		break;
+	case MPI2_DIAG_BUF_TYPE_SNAPSHOT:
+		if (ioc->facts.IOCCapabilities &
+		    MPI2_IOCFACTS_CAPABILITY_SNAPSHOT_BUFFER)
+			rc = 1;
+		break;
+	}
+
+	return rc;
+}
+
+/**
+ * _ctl_diag_register - application register with driver
+ * @arg - user space buffer containing ioctl content
+ * @state - NON_BLOCKING or BLOCKING
+ *
+ * This will allow the driver to set up any required buffers that will be
+ * needed by firmware to communicate with the driver.
+ */
+static long
+_ctl_diag_register(void __user *arg, enum block_state state)
+{
+	struct mpt2_diag_register karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	int rc, i;
+	void *request_data = NULL;
+	dma_addr_t request_data_dma;
+	u32 request_data_sz = 0;
+	Mpi2DiagBufferPostRequest_t *mpi_request;
+	Mpi2DiagBufferPostReply_t *mpi_reply;
+	u8 buffer_type;
+	unsigned long timeleft;
+	u16 smid;
+	u16 ioc_status;
+	u8 issue_reset = 0;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	buffer_type = karg.buffer_type;
+	if (!_ctl_diag_capability(ioc, buffer_type)) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have capability for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -EPERM;
+	}
+
+	if (ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_REGISTERED) {
+		printk(MPT2SAS_ERR_FMT "%s: already has a registered "
+		    "buffer for buffer_type(0x%02x)\n", ioc->name, __func__,
+		    buffer_type);
+		return -EINVAL;
+	}
+
+	if (karg.requested_buffer_size % 4)  {
+		printk(MPT2SAS_ERR_FMT "%s: the requested_buffer_size "
+		    "is not 4 byte aligned\n", ioc->name, __func__);
+		return -EINVAL;
+	}
+
+	if (state == NON_BLOCKING && !mutex_trylock(&ioc->ctl_cmds.mutex))
+		return -EAGAIN;
+	else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex))
+		return -ERESTARTSYS;
+
+	if (ioc->ctl_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: ctl_cmd in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->ctl_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	ioc->ctl_cmds.status = MPT2_CMD_PENDING;
+	memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->ctl_cmds.smid = smid;
+
+	request_data = ioc->diag_buffer[buffer_type];
+	request_data_sz = karg.requested_buffer_size;
+	ioc->unique_id[buffer_type] = karg.unique_id;
+	ioc->diag_buffer_status[buffer_type] = 0;
+	memcpy(ioc->product_specific[buffer_type], karg.product_specific,
+	    MPT2_PRODUCT_SPECIFIC_DWORDS);
+	ioc->diagnostic_flags[buffer_type] = karg.diagnostic_flags;
+
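+	/* reuse a previously allocated DMA buffer when the requested size
+	 * matches; otherwise free it and allocate a new one below
+	 */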
+	if (request_data) {
+		request_data_dma = ioc->diag_buffer_dma[buffer_type];
+		if (request_data_sz != ioc->diag_buffer_sz[buffer_type]) {
+			pci_free_consistent(ioc->pdev,
+			    ioc->diag_buffer_sz[buffer_type],
+			    request_data, request_data_dma);
+			request_data = NULL;
+		}
+	}
+
+	if (request_data == NULL) {
+		ioc->diag_buffer_sz[buffer_type] = 0;
+		ioc->diag_buffer_dma[buffer_type] = 0;
+		request_data = pci_alloc_consistent(
+			ioc->pdev, request_data_sz, &request_data_dma);
+		if (request_data == NULL) {
+			printk(MPT2SAS_ERR_FMT "%s: failed allocating memory"
+			    " for diag buffers, requested size(%d)\n",
+			    ioc->name, __func__, request_data_sz);
+			mpt2sas_base_free_smid(ioc, smid);
+			return -ENOMEM;
+		}
+		ioc->diag_buffer[buffer_type] = request_data;
+		ioc->diag_buffer_sz[buffer_type] = request_data_sz;
+		ioc->diag_buffer_dma[buffer_type] = request_data_dma;
+	}
+
+	mpi_request->Function = MPI2_FUNCTION_DIAG_BUFFER_POST;
+	mpi_request->BufferType = karg.buffer_type;
+	mpi_request->Flags = cpu_to_le32(karg.diagnostic_flags);
+	mpi_request->BufferAddress = cpu_to_le64(request_data_dma);
+	mpi_request->BufferLength = cpu_to_le32(request_data_sz);
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: diag_buffer(0x%p), "
+	    "dma(0x%llx), sz(%d)\n", ioc->name, __func__, request_data,
+	    (unsigned long long)request_data_dma, mpi_request->BufferLength));
+
+	for (i = 0; i < MPT2_PRODUCT_SPECIFIC_DWORDS; i++)
+		mpi_request->ProductSpecific[i] =
+			cpu_to_le32(ioc->product_specific[buffer_type][i]);
+
+	mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done,
+	    MPT2_IOCTL_DEFAULT_TIMEOUT*HZ);
+
+	if (!(ioc->ctl_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n", ioc->name,
+		    __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2DiagBufferPostRequest_t)/4);
+		if (!(ioc->ctl_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	/* process the completed Reply Message Frame */
+	if ((ioc->ctl_cmds.status & MPT2_CMD_REPLY_VALID) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: no reply message\n",
+		    ioc->name, __func__);
+		rc = -EFAULT;
+		goto out;
+	}
+
+	mpi_reply = ioc->ctl_cmds.reply;
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+
+	if (ioc_status == MPI2_IOCSTATUS_SUCCESS) {
+		ioc->diag_buffer_status[buffer_type] |=
+			MPT2_DIAG_BUFFER_IS_REGISTERED;
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: success\n",
+		    ioc->name, __func__));
+	} else {
+		printk(MPT2SAS_DEBUG_FMT "%s: ioc_status(0x%04x) "
+		    "log_info(0x%08x)\n", ioc->name, __func__,
+		    ioc_status, mpi_reply->IOCLogInfo);
+		rc = -EFAULT;
+	}
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+
+ out:
+
+	if (rc && request_data)
+		pci_free_consistent(ioc->pdev, request_data_sz,
+		    request_data, request_data_dma);
+
+	ioc->ctl_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->ctl_cmds.mutex);
+	return rc;
+}
+
+/**
+ * _ctl_diag_unregister - application unregister with driver
+ * @arg - user space buffer containing ioctl content
+ *
+ * This will allow the driver to clean up any memory allocated for diag
+ * messages and to free up any resources.
+ */
+static long
+_ctl_diag_unregister(void __user *arg)
+{
+	struct mpt2_diag_unregister karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	void *request_data;
+	dma_addr_t request_data_dma;
+	u32 request_data_sz;
+	u8 buffer_type;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
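+	/* the low byte of unique_id encodes the buffer type */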
+	buffer_type = karg.unique_id & 0x000000ff;
+	if (!_ctl_diag_capability(ioc, buffer_type)) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have capability for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -EPERM;
+	}
+
+	if ((ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_REGISTERED) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: buffer_type(0x%02x) is not "
+		    "registered\n", ioc->name, __func__, buffer_type);
+		return -EINVAL;
+	}
+	if ((ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_RELEASED) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: buffer_type(0x%02x) has not been "
+		    "released\n", ioc->name, __func__, buffer_type);
+		return -EINVAL;
+	}
+
+	if (karg.unique_id != ioc->unique_id[buffer_type]) {
+		printk(MPT2SAS_ERR_FMT "%s: unique_id(0x%08x) is not "
+		    "registered\n", ioc->name, __func__, karg.unique_id);
+		return -EINVAL;
+	}
+
+	request_data = ioc->diag_buffer[buffer_type];
+	if (!request_data) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have memory allocated for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -ENOMEM;
+	}
+
+	request_data_sz = ioc->diag_buffer_sz[buffer_type];
+	request_data_dma = ioc->diag_buffer_dma[buffer_type];
+	pci_free_consistent(ioc->pdev, request_data_sz,
+	    request_data, request_data_dma);
+	ioc->diag_buffer[buffer_type] = NULL;
+	ioc->diag_buffer_status[buffer_type] = 0;
+	return 0;
+}
+
+/**
+ * _ctl_diag_query - query relevant info associated with diag buffers
+ * @arg - user space buffer containing ioctl content
+ *
+ * The application sends only buffer_type and unique_id.  The driver
+ * inspects unique_id first and, if valid, fills in all the info.  If
+ * unique_id is 0x00, the driver returns the info for the given buffer_type.
+ */
+static long
+_ctl_diag_query(void __user *arg)
+{
+	struct mpt2_diag_query karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	void *request_data;
+	int i;
+	u8 buffer_type;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	karg.application_flags = 0;
+	buffer_type = karg.buffer_type;
+
+	if (!_ctl_diag_capability(ioc, buffer_type)) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have capability for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -EPERM;
+	}
+
+	if ((ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_REGISTERED) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: buffer_type(0x%02x) is not "
+		    "registered\n", ioc->name, __func__, buffer_type);
+		return -EINVAL;
+	}
+
+	if (karg.unique_id & 0xffffff00) {
+		if (karg.unique_id != ioc->unique_id[buffer_type]) {
+			printk(MPT2SAS_ERR_FMT "%s: unique_id(0x%08x) is not "
+			    "registered\n", ioc->name, __func__,
+			    karg.unique_id);
+			return -EINVAL;
+		}
+	}
+
+	request_data = ioc->diag_buffer[buffer_type];
+	if (!request_data) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have buffer for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -ENOMEM;
+	}
+
+	if (ioc->diag_buffer_status[buffer_type] & MPT2_DIAG_BUFFER_IS_RELEASED)
+		karg.application_flags = (MPT2_APP_FLAGS_APP_OWNED |
+		    MPT2_APP_FLAGS_BUFFER_VALID);
+	else
+		karg.application_flags = (MPT2_APP_FLAGS_APP_OWNED |
+		    MPT2_APP_FLAGS_BUFFER_VALID |
+		    MPT2_APP_FLAGS_FW_BUFFER_ACCESS);
+
+	for (i = 0; i < MPT2_PRODUCT_SPECIFIC_DWORDS; i++)
+		karg.product_specific[i] =
+		    ioc->product_specific[buffer_type][i];
+
+	karg.total_buffer_size = ioc->diag_buffer_sz[buffer_type];
+	karg.driver_added_buffer_size = 0;
+	karg.unique_id = ioc->unique_id[buffer_type];
+	karg.diagnostic_flags = ioc->diagnostic_flags[buffer_type];
+
+	if (copy_to_user(arg, &karg, sizeof(struct mpt2_diag_query))) {
+		printk(MPT2SAS_ERR_FMT "%s: unable to write mpt2_diag_query "
+		    "data @ %p\n", ioc->name, __func__, arg);
+		return -EFAULT;
+	}
+	return 0;
+}
+
+/**
+ * _ctl_diag_release - request to send Diag Release Message to firmware
+ * @arg - user space buffer containing ioctl content
+ * @state - NON_BLOCKING or BLOCKING
+ *
+ * This allows ownership of the specified buffer to be returned to the driver,
+ * allowing an application to read the buffer without fear that firmware is
+ * overwriting information in the buffer.
+ */
+static long
+_ctl_diag_release(void __user *arg, enum block_state state)
+{
+	struct mpt2_diag_release karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	void *request_data;
+	int rc;
+	Mpi2DiagReleaseRequest_t *mpi_request;
+	Mpi2DiagReleaseReply_t *mpi_reply;
+	u8 buffer_type;
+	unsigned long timeleft;
+	u16 smid;
+	u16 ioc_status;
+	u8 issue_reset = 0;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	buffer_type = karg.unique_id & 0x000000ff;
+	if (!_ctl_diag_capability(ioc, buffer_type)) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have capability for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -EPERM;
+	}
+
+	if ((ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_REGISTERED) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: buffer_type(0x%02x) is not "
+		    "registered\n", ioc->name, __func__, buffer_type);
+		return -EINVAL;
+	}
+
+	if (karg.unique_id != ioc->unique_id[buffer_type]) {
+		printk(MPT2SAS_ERR_FMT "%s: unique_id(0x%08x) is not "
+		    "registered\n", ioc->name, __func__, karg.unique_id);
+		return -EINVAL;
+	}
+
+	if (ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_RELEASED) {
+		printk(MPT2SAS_ERR_FMT "%s: buffer_type(0x%02x) "
+		    "is already released\n", ioc->name, __func__,
+		    buffer_type);
+		return 0;
+	}
+
+	request_data = ioc->diag_buffer[buffer_type];
+
+	if (!request_data) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have memory allocated for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -ENOMEM;
+	}
+
+	if (state == NON_BLOCKING && !mutex_trylock(&ioc->ctl_cmds.mutex))
+		return -EAGAIN;
+	else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex))
+		return -ERESTARTSYS;
+
+	if (ioc->ctl_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: ctl_cmd in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->ctl_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	ioc->ctl_cmds.status = MPT2_CMD_PENDING;
+	memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->ctl_cmds.smid = smid;
+
+	mpi_request->Function = MPI2_FUNCTION_DIAG_RELEASE;
+	mpi_request->BufferType = buffer_type;
+
+	mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done,
+	    MPT2_IOCTL_DEFAULT_TIMEOUT*HZ);
+
+	if (!(ioc->ctl_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n", ioc->name,
+		    __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2DiagReleaseRequest_t)/4);
+		if (!(ioc->ctl_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	/* process the completed Reply Message Frame */
+	if ((ioc->ctl_cmds.status & MPT2_CMD_REPLY_VALID) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: no reply message\n",
+		    ioc->name, __func__);
+		rc = -EFAULT;
+		goto out;
+	}
+
+	mpi_reply = ioc->ctl_cmds.reply;
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+
+	if (ioc_status == MPI2_IOCSTATUS_SUCCESS) {
+		ioc->diag_buffer_status[buffer_type] |=
+		    MPT2_DIAG_BUFFER_IS_RELEASED;
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: success\n",
+		    ioc->name, __func__));
+	} else {
+		printk(MPT2SAS_DEBUG_FMT "%s: ioc_status(0x%04x) "
+		    "log_info(0x%08x)\n", ioc->name, __func__,
+		    ioc_status, mpi_reply->IOCLogInfo);
+		rc = -EFAULT;
+	}
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+
+ out:
+
+	ioc->ctl_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->ctl_cmds.mutex);
+	return rc;
+}
+
+/**
+ * _ctl_diag_read_buffer - request for copy of the diag buffer
+ * @arg - user space buffer containing ioctl content
+ * @state - NON_BLOCKING or BLOCKING
+ */
+static long
+_ctl_diag_read_buffer(void __user *arg, enum block_state state)
+{
+	struct mpt2_diag_read_buffer karg;
+	struct mpt2_diag_read_buffer __user *uarg = arg;
+	struct MPT2SAS_ADAPTER *ioc;
+	void *request_data, *diag_data;
+	Mpi2DiagBufferPostRequest_t *mpi_request;
+	Mpi2DiagBufferPostReply_t *mpi_reply;
+	int rc, i;
+	u8 buffer_type;
+	unsigned long timeleft;
+	u16 smid;
+	u16 ioc_status;
+	u8 issue_reset = 0;
+
+	if (copy_from_user(&karg, arg, sizeof(karg))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s\n", ioc->name,
+	    __func__));
+
+	buffer_type = karg.unique_id & 0x000000ff;
+	if (!_ctl_diag_capability(ioc, buffer_type)) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have capability for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -EPERM;
+	}
+
+	if (karg.unique_id != ioc->unique_id[buffer_type]) {
+		printk(MPT2SAS_ERR_FMT "%s: unique_id(0x%08x) is not "
+		    "registered\n", ioc->name, __func__, karg.unique_id);
+		return -EINVAL;
+	}
+
+	request_data = ioc->diag_buffer[buffer_type];
+	if (!request_data) {
+		printk(MPT2SAS_ERR_FMT "%s: doesn't have buffer for "
+		    "buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type);
+		return -ENOMEM;
+	}
+
+	if ((karg.starting_offset % 4) || (karg.bytes_to_read % 4)) {
+		printk(MPT2SAS_ERR_FMT "%s: either the starting_offset "
+		    "or bytes_to_read are not 4 byte aligned\n", ioc->name,
+		    __func__);
+		return -EINVAL;
+	}
+
+	diag_data = (void *)(request_data + karg.starting_offset);
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: diag_buffer(%p), "
+	    "offset(%d), sz(%d)\n", ioc->name, __func__,
+	    diag_data, karg.starting_offset, karg.bytes_to_read));
+
+	if (copy_to_user((void __user *)uarg->diagnostic_data,
+	    diag_data, karg.bytes_to_read)) {
+		printk(MPT2SAS_ERR_FMT "%s: Unable to write "
+		    "mpt_diag_read_buffer_t data @ %p\n", ioc->name,
+		    __func__, diag_data);
+		return -EFAULT;
+	}
+
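+	/* when the application asks for it, re-post the released buffer to
+	 * firmware so that tracing can continue
+	 */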
+	if ((karg.flags & MPT2_FLAGS_REREGISTER) == 0)
+		return 0;
+
+	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: Reregister "
+		"buffer_type(0x%02x)\n", ioc->name, __func__, buffer_type));
+	if ((ioc->diag_buffer_status[buffer_type] &
+	    MPT2_DIAG_BUFFER_IS_RELEASED) == 0) {
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "buffer_type(0x%02x) is still registered\n", ioc->name,
+		     __func__, buffer_type));
+		return 0;
+	}
+	/* Get a free request frame and save the message context. */
+	if (state == NON_BLOCKING && !mutex_trylock(&ioc->ctl_cmds.mutex))
+		return -EAGAIN;
+	else if (mutex_lock_interruptible(&ioc->ctl_cmds.mutex))
+		return -ERESTARTSYS;
+
+	if (ioc->ctl_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: ctl_cmd in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->ctl_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	ioc->ctl_cmds.status = MPT2_CMD_PENDING;
+	memset(ioc->ctl_cmds.reply, 0, ioc->reply_sz);
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->ctl_cmds.smid = smid;
+
+	mpi_request->Function = MPI2_FUNCTION_DIAG_BUFFER_POST;
+	mpi_request->BufferType = buffer_type;
+	mpi_request->BufferLength =
+	    cpu_to_le32(ioc->diag_buffer_sz[buffer_type]);
+	mpi_request->BufferAddress =
+	    cpu_to_le64(ioc->diag_buffer_dma[buffer_type]);
+	for (i = 0; i < MPT2_PRODUCT_SPECIFIC_DWORDS; i++)
+		mpi_request->ProductSpecific[i] =
+			cpu_to_le32(ioc->product_specific[buffer_type][i]);
+
+	mpt2sas_base_put_smid_default(ioc, smid, mpi_request->VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->ctl_cmds.done,
+	    MPT2_IOCTL_DEFAULT_TIMEOUT*HZ);
+
+	if (!(ioc->ctl_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n", ioc->name,
+		    __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2DiagBufferPostRequest_t)/4);
+		if (!(ioc->ctl_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	/* process the completed Reply Message Frame */
+	if ((ioc->ctl_cmds.status & MPT2_CMD_REPLY_VALID) == 0) {
+		printk(MPT2SAS_ERR_FMT "%s: no reply message\n",
+		    ioc->name, __func__);
+		rc = -EFAULT;
+		goto out;
+	}
+
+	mpi_reply = ioc->ctl_cmds.reply;
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus) & MPI2_IOCSTATUS_MASK;
+
+	if (ioc_status == MPI2_IOCSTATUS_SUCCESS) {
+		ioc->diag_buffer_status[buffer_type] |=
+		    MPT2_DIAG_BUFFER_IS_REGISTERED;
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: success\n",
+		    ioc->name, __func__));
+	} else {
+		printk(MPT2SAS_DEBUG_FMT "%s: ioc_status(0x%04x) "
+		    "log_info(0x%08x)\n", ioc->name, __func__,
+		    ioc_status, mpi_reply->IOCLogInfo);
+		rc = -EFAULT;
+	}
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+
+ out:
+
+	ioc->ctl_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->ctl_cmds.mutex);
+	return rc;
+}
+
+/**
+ * _ctl_ioctl_main - main ioctl entry point
+ * @file - (struct file)
+ * @cmd - ioctl opcode
+ * @arg - user space data pointer
+ */
+static long
+_ctl_ioctl_main(struct file *file, unsigned int cmd, void __user *arg)
+{
+	enum block_state state;
+	long ret = -EINVAL;
+	unsigned long flags;
+
+	state = (file->f_flags & O_NONBLOCK) ? NON_BLOCKING :
+	    BLOCKING;
+
+	switch (cmd) {
+	case MPT2IOCINFO:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_iocinfo))
+			ret = _ctl_getiocinfo(arg);
+		break;
+	case MPT2COMMAND:
+	{
+		struct mpt2_ioctl_command karg;
+		struct mpt2_ioctl_command __user *uarg;
+		struct MPT2SAS_ADAPTER *ioc;
+
+		if (copy_from_user(&karg, arg, sizeof(karg))) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n",
+			    __FILE__, __LINE__, __func__);
+			return -EFAULT;
+		}
+
+		if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 ||
+		    !ioc)
+			return -ENODEV;
+
+		spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+		if (ioc->shost_recovery) {
+			spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock,
+			    flags);
+			return -EAGAIN;
+		}
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_command)) {
+			uarg = arg;
+			ret = _ctl_do_mpt_command(ioc, karg, &uarg->mf, state);
+		}
+		break;
+	}
+	case MPT2EVENTQUERY:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_eventquery))
+			ret = _ctl_eventquery(arg);
+		break;
+	case MPT2EVENTENABLE:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_eventenable))
+			ret = _ctl_eventenable(arg);
+		break;
+	case MPT2EVENTREPORT:
+		ret = _ctl_eventreport(arg);
+		break;
+	case MPT2HARDRESET:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_diag_reset))
+			ret = _ctl_do_reset(arg);
+		break;
+	case MPT2BTDHMAPPING:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_ioctl_btdh_mapping))
+			ret = _ctl_btdh_mapping(arg);
+		break;
+	case MPT2DIAGREGISTER:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_diag_register))
+			ret = _ctl_diag_register(arg, state);
+		break;
+	case MPT2DIAGUNREGISTER:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_diag_unregister))
+			ret = _ctl_diag_unregister(arg);
+		break;
+	case MPT2DIAGQUERY:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_diag_query))
+			ret = _ctl_diag_query(arg);
+		break;
+	case MPT2DIAGRELEASE:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_diag_release))
+			ret = _ctl_diag_release(arg, state);
+		break;
+	case MPT2DIAGREADBUFFER:
+		if (_IOC_SIZE(cmd) == sizeof(struct mpt2_diag_read_buffer))
+			ret = _ctl_diag_read_buffer(arg, state);
+		break;
+	default:
+	{
+		struct mpt2_ioctl_command karg;
+		struct MPT2SAS_ADAPTER *ioc;
+
+		if (copy_from_user(&karg, arg, sizeof(karg))) {
+			printk(KERN_ERR "failure at %s:%d/%s()!\n",
+			    __FILE__, __LINE__, __func__);
+			return -EFAULT;
+		}
+
+		if (_ctl_verify_adapter(karg.hdr.ioc_number, &ioc) == -1 ||
+		    !ioc)
+			return -ENODEV;
+
+		dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "unsupported ioctl opcode(0x%08x)\n", ioc->name, cmd));
+		break;
+	}
+	}
+	return ret;
+}
+
+/**
+ * _ctl_ioctl - main ioctl entry point (unlocked)
+ * @file - (struct file)
+ * @cmd - ioctl opcode
+ * @arg - user space data pointer
+ */
+static long
+_ctl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	long ret;
+	lock_kernel();
+	ret = _ctl_ioctl_main(file, cmd, (void __user *)arg);
+	unlock_kernel();
+	return ret;
+}
+
+#ifdef CONFIG_COMPAT
+/**
+ * _ctl_compat_mpt_command - convert 32-bit pointers to 64-bit.
+ * @file - (struct file)
+ * @cmd - ioctl opcode
+ * @arg - (struct mpt2_ioctl_command32)
+ *
+ * MPT2COMMAND32 - handles 32-bit applications running on a 64-bit OS.
+ */
+static long
+_ctl_compat_mpt_command(struct file *file, unsigned cmd, unsigned long arg)
+{
+	struct mpt2_ioctl_command32 karg32;
+	struct mpt2_ioctl_command32 __user *uarg;
+	struct mpt2_ioctl_command karg;
+	struct MPT2SAS_ADAPTER *ioc;
+	enum block_state state;
+	unsigned long flags;
+
+	if (_IOC_SIZE(cmd) != sizeof(struct mpt2_ioctl_command32))
+		return -EINVAL;
+
+	uarg = (struct mpt2_ioctl_command32 __user *) arg;
+
+	if (copy_from_user(&karg32, (char __user *)arg, sizeof(karg32))) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n",
+		    __FILE__, __LINE__, __func__);
+		return -EFAULT;
+	}
+	if (_ctl_verify_adapter(karg32.hdr.ioc_number, &ioc) == -1 || !ioc)
+		return -ENODEV;
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->shost_recovery) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock,
+		    flags);
+		return -EAGAIN;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
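+	/* widen the 32-bit user pointers into the zeroed 64-bit karg fields */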
+	memset(&karg, 0, sizeof(struct mpt2_ioctl_command));
+	karg.hdr.ioc_number = karg32.hdr.ioc_number;
+	karg.hdr.port_number = karg32.hdr.port_number;
+	karg.hdr.max_data_size = karg32.hdr.max_data_size;
+	karg.timeout = karg32.timeout;
+	karg.max_reply_bytes = karg32.max_reply_bytes;
+	karg.data_in_size = karg32.data_in_size;
+	karg.data_out_size = karg32.data_out_size;
+	karg.max_sense_bytes = karg32.max_sense_bytes;
+	karg.data_sge_offset = karg32.data_sge_offset;
+	memcpy(&karg.reply_frame_buf_ptr, &karg32.reply_frame_buf_ptr,
+	    sizeof(uint32_t));
+	memcpy(&karg.data_in_buf_ptr, &karg32.data_in_buf_ptr,
+	    sizeof(uint32_t));
+	memcpy(&karg.data_out_buf_ptr, &karg32.data_out_buf_ptr,
+	    sizeof(uint32_t));
+	memcpy(&karg.sense_data_ptr, &karg32.sense_data_ptr,
+	    sizeof(uint32_t));
+	state = (file->f_flags & O_NONBLOCK) ? NON_BLOCKING : BLOCKING;
+	return _ctl_do_mpt_command(ioc, karg, &uarg->mf, state);
+}
+
+/**
+ * _ctl_ioctl_compat - main ioctl entry point (compat)
+ * @file - (struct file)
+ * @cmd - ioctl opcode
+ * @arg - user space data pointer
+ *
+ * This routine handles 32-bit applications running on a 64-bit OS.
+ */
+static long
+_ctl_ioctl_compat(struct file *file, unsigned cmd, unsigned long arg)
+{
+	long ret;
+	lock_kernel();
+	if (cmd == MPT2COMMAND32)
+		ret = _ctl_compat_mpt_command(file, cmd, arg);
+	else
+		ret = _ctl_ioctl_main(file, cmd, (void __user *)arg);
+	unlock_kernel();
+	return ret;
+}
+#endif
+
+/* scsi host attributes */
+
+/**
+ * _ctl_version_fw_show - firmware version
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_fw_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
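+	/* print the four bytes of FWVersion.Word from most to least
+	 * significant
+	 */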
+	return snprintf(buf, PAGE_SIZE, "%02d.%02d.%02d.%02d\n",
+	    (ioc->facts.FWVersion.Word & 0xFF000000) >> 24,
+	    (ioc->facts.FWVersion.Word & 0x00FF0000) >> 16,
+	    (ioc->facts.FWVersion.Word & 0x0000FF00) >> 8,
+	    ioc->facts.FWVersion.Word & 0x000000FF);
+}
+static DEVICE_ATTR(version_fw, S_IRUGO, _ctl_version_fw_show, NULL);
+
+/**
+ * _ctl_version_bios_show - bios version
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_bios_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	u32 version = le32_to_cpu(ioc->bios_pg3.BiosVersion);
+
+	return snprintf(buf, PAGE_SIZE, "%02d.%02d.%02d.%02d\n",
+	    (version & 0xFF000000) >> 24,
+	    (version & 0x00FF0000) >> 16,
+	    (version & 0x0000FF00) >> 8,
+	    version & 0x000000FF);
+}
+static DEVICE_ATTR(version_bios, S_IRUGO, _ctl_version_bios_show, NULL);
+
+/**
+ * _ctl_version_mpi_show - MPI (message passing interface) version
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_mpi_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%03x.%02x\n",
+	    ioc->facts.MsgVersion, ioc->facts.HeaderVersion >> 8);
+}
+static DEVICE_ATTR(version_mpi, S_IRUGO, _ctl_version_mpi_show, NULL);
+
+/**
+ * _ctl_version_product_show - product name
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_product_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, 16, "%s\n", ioc->manu_pg0.ChipName);
+}
+static DEVICE_ATTR(version_product, S_IRUGO,
+   _ctl_version_product_show, NULL);
+
+/**
+ * _ctl_version_nvdata_persistent_show - nvdata persistent version
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_nvdata_persistent_show(struct device *cdev,
+    struct device_attribute *attr, char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%02xh\n",
+	    le16_to_cpu(ioc->iounit_pg0.NvdataVersionPersistent.Word));
+}
+static DEVICE_ATTR(version_nvdata_persistent, S_IRUGO,
+    _ctl_version_nvdata_persistent_show, NULL);
+
+/**
+ * _ctl_version_nvdata_default_show - nvdata default version
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_version_nvdata_default_show(struct device *cdev,
+    struct device_attribute *attr, char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%02xh\n",
+	    le16_to_cpu(ioc->iounit_pg0.NvdataVersionDefault.Word));
+}
+static DEVICE_ATTR(version_nvdata_default, S_IRUGO,
+    _ctl_version_nvdata_default_show, NULL);
+
+/**
+ * _ctl_board_name_show - board name
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_board_name_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardName);
+}
+static DEVICE_ATTR(board_name, S_IRUGO, _ctl_board_name_show, NULL);
+
+/**
+ * _ctl_board_assembly_show - board assembly name
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_board_assembly_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardAssembly);
+}
+static DEVICE_ATTR(board_assembly, S_IRUGO,
+    _ctl_board_assembly_show, NULL);
+
+/**
+ * _ctl_board_tracer_show - board tracer number
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_board_tracer_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, 16, "%s\n", ioc->manu_pg0.BoardTracerNumber);
+}
+static DEVICE_ATTR(board_tracer, S_IRUGO,
+    _ctl_board_tracer_show, NULL);
+
+/**
+ * _ctl_io_delay_show - io missing delay
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * This is the firmware-implemented delay for debouncing device
+ * removal events.
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_io_delay_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->io_missing_delay);
+}
+static DEVICE_ATTR(io_delay, S_IRUGO,
+    _ctl_io_delay_show, NULL);
+
+/**
+ * _ctl_device_delay_show - device missing delay
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * This is the firmware-implemented delay for debouncing device
+ * removal events.
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_device_delay_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->device_missing_delay);
+}
+static DEVICE_ATTR(device_delay, S_IRUGO,
+    _ctl_device_delay_show, NULL);
+
+/**
+ * _ctl_fw_queue_depth_show - global credits
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * This is the firmware queue depth limit.
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_fw_queue_depth_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%02d\n", ioc->facts.RequestCredit);
+}
+static DEVICE_ATTR(fw_queue_depth, S_IRUGO,
+    _ctl_fw_queue_depth_show, NULL);
+
+/**
+ * _ctl_host_sas_address_show - host sas address
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * This is the controller sas address
+ *
+ * A sysfs 'read-only' shost attribute.
+ */
+static ssize_t
+_ctl_host_sas_address_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
+	    (unsigned long long)ioc->sas_hba.sas_address);
+}
+static DEVICE_ATTR(host_sas_address, S_IRUGO,
+    _ctl_host_sas_address_show, NULL);
+
+/**
+ * _ctl_logging_level_show - logging level
+ * @cdev - pointer to embedded class device
+ * @buf - the buffer returned
+ *
+ * A sysfs 'read/write' shost attribute.
+ */
+static ssize_t
+_ctl_logging_level_show(struct device *cdev, struct device_attribute *attr,
+    char *buf)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+
+	return snprintf(buf, PAGE_SIZE, "%08xh\n", ioc->logging_level);
+}
+static ssize_t
+_ctl_logging_level_store(struct device *cdev, struct device_attribute *attr,
+    const char *buf, size_t count)
+{
+	struct Scsi_Host *shost = class_to_shost(cdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	int val = 0;
+
+	if (sscanf(buf, "%x", &val) != 1)
+		return -EINVAL;
+
+	ioc->logging_level = val;
+	printk(MPT2SAS_INFO_FMT "logging_level=%08xh\n", ioc->name,
+	    ioc->logging_level);
+	return strlen(buf);
+}
+static DEVICE_ATTR(logging_level, S_IRUGO | S_IWUSR,
+    _ctl_logging_level_show, _ctl_logging_level_store);
+
+struct device_attribute *mpt2sas_host_attrs[] = {
+	&dev_attr_version_fw,
+	&dev_attr_version_bios,
+	&dev_attr_version_mpi,
+	&dev_attr_version_product,
+	&dev_attr_version_nvdata_persistent,
+	&dev_attr_version_nvdata_default,
+	&dev_attr_board_name,
+	&dev_attr_board_assembly,
+	&dev_attr_board_tracer,
+	&dev_attr_io_delay,
+	&dev_attr_device_delay,
+	&dev_attr_logging_level,
+	&dev_attr_fw_queue_depth,
+	&dev_attr_host_sas_address,
+	NULL,
+};
+
+/* device attributes */
+
+/**
+ * _ctl_device_sas_address_show - sas address
+ * @dev - pointer to embedded device
+ * @buf - the buffer returned
+ *
+ * This is the sas address for the target
+ *
+ * A sysfs 'read-only' sdev attribute.
+ */
+static ssize_t
+_ctl_device_sas_address_show(struct device *dev, struct device_attribute *attr,
+    char *buf)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct MPT2SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
+
+	return snprintf(buf, PAGE_SIZE, "0x%016llx\n",
+	    (unsigned long long)sas_device_priv_data->sas_target->sas_address);
+}
+static DEVICE_ATTR(sas_address, S_IRUGO, _ctl_device_sas_address_show, NULL);
+
+/**
+ * _ctl_device_handle_show - device handle
+ * @dev - pointer to embedded device
+ * @buf - the buffer returned
+ *
+ * This is the firmware assigned device handle
+ *
+ * A sysfs 'read-only' sdev attribute.
+ */
+static ssize_t
+_ctl_device_handle_show(struct device *dev, struct device_attribute *attr,
+    char *buf)
+{
+	struct scsi_device *sdev = to_scsi_device(dev);
+	struct MPT2SAS_DEVICE *sas_device_priv_data = sdev->hostdata;
+
+	return snprintf(buf, PAGE_SIZE, "0x%04x\n",
+	    sas_device_priv_data->sas_target->handle);
+}
+static DEVICE_ATTR(sas_device_handle, S_IRUGO, _ctl_device_handle_show, NULL);
+
+struct device_attribute *mpt2sas_dev_attrs[] = {
+	&dev_attr_sas_address,
+	&dev_attr_sas_device_handle,
+	NULL,
+};
+
+static const struct file_operations ctl_fops = {
+	.owner = THIS_MODULE,
+	.unlocked_ioctl = _ctl_ioctl,
+	.release = _ctl_release,
+	.poll = _ctl_poll,
+	.fasync = _ctl_fasync,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl = _ctl_ioctl_compat,
+#endif
+};
+
+static struct miscdevice ctl_dev = {
+	.minor  = MPT2SAS_MINOR,
+	.name   = MPT2SAS_DEV_NAME,
+	.fops   = &ctl_fops,
+};
+
+/**
+ * mpt2sas_ctl_init - main entry point for ctl.
+ *
+ */
+void
+mpt2sas_ctl_init(void)
+{
+	async_queue = NULL;
+	if (misc_register(&ctl_dev) < 0)
+		printk(KERN_ERR "%s can't register misc device [minor=%d]\n",
+		    MPT2SAS_DRIVER_NAME, MPT2SAS_MINOR);
+
+	init_waitqueue_head(&ctl_poll_wait);
+}
+
+/**
+ * mpt2sas_ctl_exit - exit point for ctl
+ *
+ */
+void
+mpt2sas_ctl_exit(void)
+{
+	struct MPT2SAS_ADAPTER *ioc;
+	int i;
+
+	list_for_each_entry(ioc, &mpt2sas_ioc_list, list) {
+
+		/* free memory associated to diag buffers */
+		for (i = 0; i < MPI2_DIAG_BUF_TYPE_COUNT; i++) {
+			if (!ioc->diag_buffer[i])
+				continue;
+			pci_free_consistent(ioc->pdev, ioc->diag_buffer_sz[i],
+			    ioc->diag_buffer[i], ioc->diag_buffer_dma[i]);
+			ioc->diag_buffer[i] = NULL;
+			ioc->diag_buffer_status[i] = 0;
+		}
+
+		kfree(ioc->event_log);
+	}
+	misc_deregister(&ctl_dev);
+}
+
diff --git a/drivers/scsi/mpt2sas/mpt2sas_ctl.h b/drivers/scsi/mpt2sas/mpt2sas_ctl.h
new file mode 100644
index 0000000..dbb6c0c
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_ctl.h
@@ -0,0 +1,416 @@
+/*
+ * Management Module Support for MPT (Message Passing Technology) based
+ * controllers
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_ctl.h
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef MPT2SAS_CTL_H_INCLUDED
+#define MPT2SAS_CTL_H_INCLUDED
+
+#ifdef __KERNEL__
+#include <linux/miscdevice.h>
+#endif
+
+#define MPT2SAS_DEV_NAME	"mpt2ctl"
+#define MPT2_MAGIC_NUMBER	'm'
+#define MPT2_IOCTL_DEFAULT_TIMEOUT (10) /* in seconds */
+
+/**
+ * IOCTL opcodes
+ */
+#define MPT2IOCINFO	_IOWR(MPT2_MAGIC_NUMBER, 17, \
+    struct mpt2_ioctl_iocinfo)
+#define MPT2COMMAND	_IOWR(MPT2_MAGIC_NUMBER, 20, \
+    struct mpt2_ioctl_command)
+#ifdef CONFIG_COMPAT
+#define MPT2COMMAND32	_IOWR(MPT2_MAGIC_NUMBER, 20, \
+    struct mpt2_ioctl_command32)
+#endif
+#define MPT2EVENTQUERY	_IOWR(MPT2_MAGIC_NUMBER, 21, \
+    struct mpt2_ioctl_eventquery)
+#define MPT2EVENTENABLE	_IOWR(MPT2_MAGIC_NUMBER, 22, \
+    struct mpt2_ioctl_eventenable)
+#define MPT2EVENTREPORT	_IOWR(MPT2_MAGIC_NUMBER, 23, \
+    struct mpt2_ioctl_eventreport)
+#define MPT2HARDRESET	_IOWR(MPT2_MAGIC_NUMBER, 24, \
+    struct mpt2_ioctl_diag_reset)
+#define MPT2BTDHMAPPING	_IOWR(MPT2_MAGIC_NUMBER, 31, \
+    struct mpt2_ioctl_btdh_mapping)
+
+/* diag buffer support */
+#define MPT2DIAGREGISTER _IOWR(MPT2_MAGIC_NUMBER, 26, \
+    struct mpt2_diag_register)
+#define MPT2DIAGRELEASE	_IOWR(MPT2_MAGIC_NUMBER, 27, \
+    struct mpt2_diag_release)
+#define MPT2DIAGUNREGISTER _IOWR(MPT2_MAGIC_NUMBER, 28, \
+    struct mpt2_diag_unregister)
+#define MPT2DIAGQUERY	_IOWR(MPT2_MAGIC_NUMBER, 29, \
+    struct mpt2_diag_query)
+#define MPT2DIAGREADBUFFER _IOWR(MPT2_MAGIC_NUMBER, 30, \
+    struct mpt2_diag_read_buffer)
+
+/**
+ * struct mpt2_ioctl_header - main header structure
+ * @ioc_number -  IOC unit number
+ * @port_number - IOC port number
+ * @max_data_size - maximum number of bytes to transfer on read
+ */
+struct mpt2_ioctl_header {
+	uint32_t ioc_number;
+	uint32_t port_number;
+	uint32_t max_data_size;
+};
+
+/**
+ * struct mpt2_ioctl_diag_reset - diagnostic reset
+ * @hdr - generic header
+ */
+struct mpt2_ioctl_diag_reset {
+	struct mpt2_ioctl_header hdr;
+};
+
+
+/**
+ * struct mpt2_ioctl_pci_info - pci device info
+ * @device - pci device id
+ * @function - pci function id
+ * @bus - pci bus id
+ * @segment_id - pci segment id
+ */
+struct mpt2_ioctl_pci_info {
+	union {
+		struct {
+			uint32_t device:5;
+			uint32_t function:3;
+			uint32_t bus:24;
+		} bits;
+		uint32_t  word;
+	} u;
+	uint32_t segment_id;
+};
+
+
+#define MPT2_IOCTL_INTERFACE_SCSI	(0x00)
+#define MPT2_IOCTL_INTERFACE_FC		(0x01)
+#define MPT2_IOCTL_INTERFACE_FC_IP	(0x02)
+#define MPT2_IOCTL_INTERFACE_SAS	(0x03)
+#define MPT2_IOCTL_INTERFACE_SAS2	(0x04)
+#define MPT2_IOCTL_VERSION_LENGTH	(32)
+
+/**
+ * struct mpt2_ioctl_iocinfo - generic controller info
+ * @hdr - generic header
+ * @adapter_type - type of adapter (spi, fc, sas)
+ * @port_number - port number
+ * @pci_id - PCI Id
+ * @hw_rev - hardware revision
+ * @subsystem_device - PCI subsystem Device ID
+ * @subsystem_vendor - PCI subsystem Vendor ID
+ * @rsvd0 - reserved
+ * @firmware_version - firmware version
+ * @bios_version - BIOS version
+ * @driver_version - driver version - 32 ASCII characters
+ * @rsvd1 - reserved
+ * @scsi_id - scsi id of adapter 0
+ * @rsvd2 - reserved
+ * @pci_information - pci info (2nd revision)
+ */
+struct mpt2_ioctl_iocinfo {
+	struct mpt2_ioctl_header hdr;
+	uint32_t adapter_type;
+	uint32_t port_number;
+	uint32_t pci_id;
+	uint32_t hw_rev;
+	uint32_t subsystem_device;
+	uint32_t subsystem_vendor;
+	uint32_t rsvd0;
+	uint32_t firmware_version;
+	uint32_t bios_version;
+	uint8_t driver_version[MPT2_IOCTL_VERSION_LENGTH];
+	uint8_t rsvd1;
+	uint8_t scsi_id;
+	uint16_t rsvd2;
+	struct mpt2_ioctl_pci_info pci_information;
+};
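+
+/*
+ * Illustrative sketch (not part of the driver): a user-space application
+ * could query controller info through the mpt2ctl character device
+ * registered by mpt2sas_ctl.c.  The device node path, the use of the first
+ * controller (ioc_number 0), and the helper name are assumptions for this
+ * example only; it also needs fcntl.h, stdio.h, string.h, sys/ioctl.h and
+ * this header.
+ *
+ *	static void example_query_iocinfo(void)
+ *	{
+ *		struct mpt2_ioctl_iocinfo info;
+ *		int fd = open("/dev/mpt2ctl", O_RDWR);
+ *
+ *		memset(&info, 0, sizeof(info));
+ *		info.hdr.ioc_number = 0;
+ *		info.hdr.max_data_size = sizeof(info);
+ *		if (fd >= 0 && ioctl(fd, MPT2IOCINFO, &info) == 0)
+ *			printf("fw version 0x%08x\n", info.firmware_version);
+ *	}
+ */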
+
+
+/* number of event log entries */
+#define MPT2SAS_CTL_EVENT_LOG_SIZE (50)
+
+/**
+ * struct mpt2_ioctl_eventquery - query event count and type
+ * @hdr - generic header
+ * @event_entries - number of events returned by get_event_report
+ * @rsvd - reserved
+ * @event_types - type of events currently being captured
+ */
+struct mpt2_ioctl_eventquery {
+	struct mpt2_ioctl_header hdr;
+	uint16_t event_entries;
+	uint16_t rsvd;
+	uint32_t event_types[MPI2_EVENT_NOTIFY_EVENTMASK_WORDS];
+};
+
+/**
+ * struct mpt2_ioctl_eventenable - enable/disable event capturing
+ * @hdr - generic header
+ * @event_types - toggle off/on type of events to be captured
+ */
+struct mpt2_ioctl_eventenable {
+	struct mpt2_ioctl_header hdr;
+	uint32_t event_types[4];
+};
+
+#define MPT2_EVENT_DATA_SIZE (192)
+/**
+ * struct MPT2_IOCTL_EVENTS - event log entry
+ * @event - the event that was reported
+ * @context - unique value for each event assigned by driver
+ * @data - event data returned in fw reply message
+ */
+struct MPT2_IOCTL_EVENTS {
+	uint32_t event;
+	uint32_t context;
+	uint8_t data[MPT2_EVENT_DATA_SIZE];
+};
+
+/**
+ * struct mpt2_ioctl_eventreport - returning event log
+ * @hdr - generic header
+ * @event_data - (see struct MPT2_IOCTL_EVENTS)
+ */
+struct mpt2_ioctl_eventreport {
+	struct mpt2_ioctl_header hdr;
+	struct MPT2_IOCTL_EVENTS event_data[1];
+};
+
+/**
+ * struct mpt2_ioctl_command - generic mpt firmware passthru ioctl
+ * @hdr - generic header
+ * @timeout - command timeout in seconds. (if zero then use driver default
+ *  value).
+ * @reply_frame_buf_ptr - reply location
+ * @data_in_buf_ptr - destination for read
+ * @data_out_buf_ptr - data source for write
+ * @sense_data_ptr - sense data location
+ * @max_reply_bytes - maximum number of reply bytes to be sent to app.
+ * @data_in_size - number of bytes for data transfer in (read)
+ * @data_out_size - number of bytes for data transfer out (write)
+ * @max_sense_bytes - maximum number of bytes for auto sense buffers
+ * @data_sge_offset - offset in words from the start of the request message to
+ * the first SGL
+ * @mf - message frame payload (variable length)
+ */
+struct mpt2_ioctl_command {
+	struct mpt2_ioctl_header hdr;
+	uint32_t timeout;
+	void __user *reply_frame_buf_ptr;
+	void __user *data_in_buf_ptr;
+	void __user *data_out_buf_ptr;
+	void __user *sense_data_ptr;
+	uint32_t max_reply_bytes;
+	uint32_t data_in_size;
+	uint32_t data_out_size;
+	uint32_t max_sense_bytes;
+	uint32_t data_sge_offset;
+	uint8_t mf[1];
+};
+
+#ifdef CONFIG_COMPAT
+struct mpt2_ioctl_command32 {
+	struct mpt2_ioctl_header hdr;
+	uint32_t timeout;
+	uint32_t reply_frame_buf_ptr;
+	uint32_t data_in_buf_ptr;
+	uint32_t data_out_buf_ptr;
+	uint32_t sense_data_ptr;
+	uint32_t max_reply_bytes;
+	uint32_t data_in_size;
+	uint32_t data_out_size;
+	uint32_t max_sense_bytes;
+	uint32_t data_sge_offset;
+	uint8_t mf[1];
+};
+#endif
+
+/**
+ * struct mpt2_ioctl_btdh_mapping - mapping info
+ * @hdr - generic header
+ * @id - target device identification number
+ * @bus - SCSI bus number that the target device exists on
+ * @handle - device handle for the target device
+ * @rsvd - reserved
+ *
+ * To obtain the bus/id, the application sets
+ * handle to a valid handle and bus/id to 0xFFFF.
+ *
+ * To obtain the device handle, the application sets
+ * bus/id to valid values and handle to 0xFFFF.
+ */
+struct mpt2_ioctl_btdh_mapping {
+	struct mpt2_ioctl_header hdr;
+	uint32_t id;
+	uint32_t bus;
+	uint16_t handle;
+	uint16_t rsvd;
+};
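+
+/*
+ * Illustrative sketch (example only, not part of the driver): resolving a
+ * known firmware handle to a bus/id via MPT2BTDHMAPPING.  The descriptor
+ * 'fd' is assumed to be open on the ctl character device and 'handle' to
+ * be a valid firmware-assigned device handle.
+ *
+ *	struct mpt2_ioctl_btdh_mapping map;
+ *
+ *	memset(&map, 0, sizeof(map));
+ *	map.hdr.ioc_number = 0;
+ *	map.handle = handle;
+ *	map.bus = 0xFFFF;
+ *	map.id = 0xFFFF;
+ *	if (ioctl(fd, MPT2BTDHMAPPING, &map) == 0)
+ *		printf("bus %u id %u\n", map.bus, map.id);
+ */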
+
+
+/* status bits for ioc->diag_buffer_status */
+#define MPT2_DIAG_BUFFER_IS_REGISTERED 	(0x01)
+#define MPT2_DIAG_BUFFER_IS_RELEASED 	(0x02)
+
+/* application flags for mpt2_diag_register, mpt2_diag_query */
+#define MPT2_APP_FLAGS_APP_OWNED	(0x0001)
+#define MPT2_APP_FLAGS_BUFFER_VALID	(0x0002)
+#define MPT2_APP_FLAGS_FW_BUFFER_ACCESS	(0x0004)
+
+/* flags for mpt2_diag_read_buffer */
+#define MPT2_FLAGS_REREGISTER		(0x0001)
+
+#define MPT2_PRODUCT_SPECIFIC_DWORDS 		23
+
+/**
+ * struct mpt2_diag_register - application register with driver
+ * @hdr - generic header
+ * @reserved -
+ * @buffer_type - specifies either TRACE or SNAPSHOT
+ * @application_flags - misc flags
+ * @diagnostic_flags - specifies flags affecting command processing
+ * @product_specific - product specific information
+ * @requested_buffer_size - buffer size in bytes
+ * @unique_id - tag specified by application that is used to signal ownership
+ *  of the buffer.
+ *
+ * This allows the driver to set up any buffers the firmware needs in
+ * order to communicate with the driver.
+ */
+struct mpt2_diag_register {
+	struct mpt2_ioctl_header hdr;
+	uint8_t reserved;
+	uint8_t buffer_type;
+	uint16_t application_flags;
+	uint32_t diagnostic_flags;
+	uint32_t product_specific[MPT2_PRODUCT_SPECIFIC_DWORDS];
+	uint32_t requested_buffer_size;
+	uint32_t unique_id;
+};
+
+/**
+ * struct mpt2_diag_unregister - application unregister with driver
+ * @hdr - generic header
+ * @unique_id - tag uniquely identifies the buffer to be unregistered
+ *
+ * This allows the driver to clean up any memory allocated for diag
+ * messages and to free up any resources.
+ */
+struct mpt2_diag_unregister {
+	struct mpt2_ioctl_header hdr;
+	uint32_t unique_id;
+};
+
+/**
+ * struct mpt2_diag_query - query relevant info associated with diag buffers
+ * @hdr - generic header
+ * @reserved -
+ * @buffer_type - specifies either TRACE or SNAPSHOT
+ * @application_flags - misc flags
+ * @diagnostic_flags - specifies flags affecting command processing
+ * @product_specific - product specific information
+ * @total_buffer_size - diag buffer size in bytes
+ * @driver_added_buffer_size - size of extra space appended to end of buffer
+ * @unique_id - unique id associated with this buffer.
+ *
+ * The application sends only buffer_type and unique_id.  The driver
+ * inspects unique_id first; if it is valid, the driver fills in all the
+ * info.  If unique_id is 0x00, the driver returns the info for the
+ * specified buffer_type.
+ */
+struct mpt2_diag_query {
+	struct mpt2_ioctl_header hdr;
+	uint8_t reserved;
+	uint8_t buffer_type;
+	uint16_t application_flags;
+	uint32_t diagnostic_flags;
+	uint32_t product_specific[MPT2_PRODUCT_SPECIFIC_DWORDS];
+	uint32_t total_buffer_size;
+	uint32_t driver_added_buffer_size;
+	uint32_t unique_id;
+};
+
+/**
+ * struct mpt2_diag_release -  request to send Diag Release Message to firmware
+ * @hdr - generic header
+ * @unique_id - tag uniquely identifies the buffer to be released
+ *
+ * This allows ownership of the specified buffer to be returned to the
+ * driver, allowing an application to read the buffer without fear that
+ * firmware is overwriting information in the buffer.
+ */
+struct mpt2_diag_release {
+	struct mpt2_ioctl_header hdr;
+	uint32_t unique_id;
+};
+
+/**
+ * struct mpt2_diag_read_buffer - request for copy of the diag buffer
+ * @hdr - generic header
+ * @status -
+ * @reserved -
+ * @flags - misc flags
+ * @starting_offset - starting offset within the driver's buffer from which
+ *  to start copying data into the specified application buffer
+ * @bytes_to_read - number of bytes to copy from the driver's buffer into the
+ *  application buffer, starting at starting_offset.
+ * @unique_id - unique id associated with this buffer.
+ * @diagnostic_data - data payload
+ */
+struct mpt2_diag_read_buffer {
+	struct mpt2_ioctl_header hdr;
+	uint8_t status;
+	uint8_t reserved;
+	uint16_t flags;
+	uint32_t starting_offset;
+	uint32_t bytes_to_read;
+	uint32_t unique_id;
+	uint32_t diagnostic_data[1];
+};
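+
+/*
+ * Illustrative flow (sketch only, based on the structures above): a typical
+ * application sequence for capturing a diagnostic buffer would be
+ *
+ *	1. MPT2DIAGREGISTER   - register a TRACE or SNAPSHOT buffer with a
+ *	                        chosen unique_id
+ *	2. run the workload of interest while firmware fills the buffer
+ *	3. MPT2DIAGRELEASE    - return ownership of the buffer to the driver
+ *	                        so firmware no longer overwrites it
+ *	4. MPT2DIAGREADBUFFER - copy the buffer contents, using
+ *	                        starting_offset/bytes_to_read to walk it
+ *	5. MPT2DIAGUNREGISTER - free the buffer once it is no longer needed
+ */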
+
+#endif /* MPT2SAS_CTL_H_INCLUDED */
diff --git a/drivers/scsi/mpt2sas/mpt2sas_debug.h b/drivers/scsi/mpt2sas/mpt2sas_debug.h
new file mode 100644
index 0000000..ad32509
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_debug.h
@@ -0,0 +1,181 @@
+/*
+ * Logging Support for MPT (Message Passing Technology) based controllers
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_debug.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#ifndef MPT2SAS_DEBUG_H_INCLUDED
+#define MPT2SAS_DEBUG_H_INCLUDED
+
+#define MPT_DEBUG			0x00000001
+#define MPT_DEBUG_MSG_FRAME		0x00000002
+#define MPT_DEBUG_SG			0x00000004
+#define MPT_DEBUG_EVENTS		0x00000008
+#define MPT_DEBUG_EVENT_WORK_TASK	0x00000010
+#define MPT_DEBUG_INIT			0x00000020
+#define MPT_DEBUG_EXIT			0x00000040
+#define MPT_DEBUG_FAIL			0x00000080
+#define MPT_DEBUG_TM			0x00000100
+#define MPT_DEBUG_REPLY			0x00000200
+#define MPT_DEBUG_HANDSHAKE		0x00000400
+#define MPT_DEBUG_CONFIG		0x00000800
+#define MPT_DEBUG_DL			0x00001000
+#define MPT_DEBUG_RESET			0x00002000
+#define MPT_DEBUG_SCSI			0x00004000
+#define MPT_DEBUG_IOCTL			0x00008000
+#define MPT_DEBUG_CSMISAS		0x00010000
+#define MPT_DEBUG_SAS			0x00020000
+#define MPT_DEBUG_TRANSPORT		0x00040000
+#define MPT_DEBUG_TASK_SET_FULL		0x00080000
+
+#define MPT_DEBUG_TARGET_MODE		0x00100000
+
+
+/*
+ * CONFIG_SCSI_MPT2SAS_LOGGING - enabled in Kconfig
+ */
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+#define MPT_CHECK_LOGGING(IOC, CMD, BITS)			\
+{								\
+	if (IOC->logging_level & BITS)				\
+		CMD;						\
+}
+#else
+#define MPT_CHECK_LOGGING(IOC, CMD, BITS)
+#endif /* CONFIG_SCSI_MPT2SAS_LOGGING */
+
+
+/*
+ * debug macros
+ */
+
+#define dprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG)
+
+#define dsgprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SG)
+
+#define devtprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENTS)
+
+#define dewtprintk(IOC, CMD)		\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EVENT_WORK_TASK)
+
+#define dinitprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_INIT)
+
+#define dexitprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_EXIT)
+
+#define dfailprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_FAIL)
+
+#define dtmprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TM)
+
+#define dreplyprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_REPLY)
+
+#define dhsprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_HANDSHAKE)
+
+#define dcprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_CONFIG)
+
+#define ddlprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_DL)
+
+#define drsprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_RESET)
+
+#define dsprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SCSI)
+
+#define dctlprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_IOCTL)
+
+#define dcsmisasprintk(IOC, CMD)		\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_CSMISAS)
+
+#define dsasprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS)
+
+#define dsastransport(IOC, CMD)		\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_SAS_WIDE)
+
+#define dmfprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_MSG_FRAME)
+
+#define dtsfprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TASK_SET_FULL)
+
+#define dtransportprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TRANSPORT)
+
+#define dTMprintk(IOC, CMD)			\
+	MPT_CHECK_LOGGING(IOC, CMD, MPT_DEBUG_TARGET_MODE)
+
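+/*
+ * Illustrative call site (example only): the CMD argument is a complete
+ * statement, typically a printk(), so a driver-side use looks like
+ *
+ *	dctlprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n",
+ *	    ioc->name, __func__));
+ *
+ * The statement compiles away when CONFIG_SCSI_MPT2SAS_LOGGING is not set,
+ * and is skipped at run time unless MPT_DEBUG_IOCTL is set in
+ * ioc->logging_level.
+ */
+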
+/* inline functions for dumping debug data*/
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _debug_dump_mf - print message frame contents
+ * @mpi_request: pointer to message frame
+ * @sz: number of dwords
+ */
+static inline void
+_debug_dump_mf(void *mpi_request, int sz)
+{
+	int i;
+	u32 *mfp = (u32 *)mpi_request;
+
+	printk(KERN_INFO "mf:\n\t");
+	for (i = 0; i < sz; i++) {
+		if (i && ((i % 8) == 0))
+			printk("\n\t");
+		printk("%08x ", le32_to_cpu(mfp[i]));
+	}
+	printk("\n");
+}
+#else
+#define _debug_dump_mf(mpi_request, sz)
+#endif /* CONFIG_SCSI_MPT2SAS_LOGGING */
+
+#endif /* MPT2SAS_DEBUG_H_INCLUDED */
diff --git a/drivers/scsi/mpt2sas/mpt2sas_scsih.c b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
new file mode 100644
index 0000000..0c463c4
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_scsih.c
@@ -0,0 +1,5687 @@
+/*
+ * Scsi Host Layer for MPT (Message Passing Technology) based controllers
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_scsih.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/blkdev.h>
+#include <linux/sched.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+
+#include "mpt2sas_base.h"
+
+MODULE_AUTHOR(MPT2SAS_AUTHOR);
+MODULE_DESCRIPTION(MPT2SAS_DESCRIPTION);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(MPT2SAS_DRIVER_VERSION);
+
+#define RAID_CHANNEL 1
+
+/* forward proto's */
+static void _scsih_expander_node_remove(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_node *sas_expander);
+static void _firmware_event_work(struct work_struct *work);
+
+/* global parameters */
+LIST_HEAD(mpt2sas_ioc_list);
+
+/* local parameters */
+static u8 scsi_io_cb_idx = -1;
+static u8 tm_cb_idx = -1;
+static u8 ctl_cb_idx = -1;
+static u8 base_cb_idx = -1;
+static u8 transport_cb_idx = -1;
+static u8 config_cb_idx = -1;
+static int mpt_ids;
+
+/* command line options */
+static u32 logging_level;
+MODULE_PARM_DESC(logging_level, " bits for enabling additional logging info "
+    "(default=0)");
+
+/* scsi-midlayer global parameter is max_report_luns, which is 511 */
+#define MPT2SAS_MAX_LUN (16895)
+static int max_lun = MPT2SAS_MAX_LUN;
+module_param(max_lun, int, 0);
+MODULE_PARM_DESC(max_lun, " max lun, default=16895 ");
+
+/**
+ * struct sense_info - common structure for obtaining sense keys
+ * @skey: sense key
+ * @asc: additional sense code
+ * @ascq: additional sense code qualifier
+ */
+struct sense_info {
+	u8 skey;
+	u8 asc;
+	u8 ascq;
+};
+
+
+#define MPT2SAS_RESCAN_AFTER_HOST_RESET (0xFFFF)
+/**
+ * struct fw_event_work - firmware event struct
+ * @list: linked list framework
+ * @work: work object (ioc->fault_reset_work_q)
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @host_reset_handling: handling events during host reset
+ * @ignore: flag meaning this event has been marked to ignore
+ * @event: firmware event MPI2_EVENT_XXX defined in mpi2_ioc.h
+ * @event_data: reply event data payload follows
+ *
+ * This object is stored on ioc->fw_event_list.
+ */
+struct fw_event_work {
+	struct list_head 	list;
+	struct delayed_work	work;
+	struct MPT2SAS_ADAPTER *ioc;
+	u8			VF_ID;
+	u8			host_reset_handling;
+	u8			ignore;
+	u16			event;
+	void			*event_data;
+};
+
+/**
+ * struct _scsi_io_transfer - scsi io transfer
+ * @handle: sas device handle (assigned by firmware)
+ * @is_raid: flag set for hidden raid components
+ * @dir: DMA_TO_DEVICE, DMA_FROM_DEVICE,
+ * @data_length: data transfer length
+ * @data_dma: dma pointer to data
+ * @sense: sense data
+ * @lun: lun number
+ * @cdb_length: cdb length
+ * @cdb: cdb contents
+ * @valid_reply: flag set for reply message
+ * @timeout: timeout for this command
+ * @sense_length: sense length
+ * @ioc_status: ioc status
+ * @scsi_state: scsi state
+ * @scsi_status: scsi status
+ * @log_info: log information
+ * @transfer_length: data length transfer when there is a reply message
+ *
+ * Used for sending internal scsi commands to devices within this module.
+ * Refer to _scsi_send_scsi_io().
+ */
+struct _scsi_io_transfer {
+	u16	handle;
+	u8	is_raid;
+	enum dma_data_direction dir;
+	u32	data_length;
+	dma_addr_t data_dma;
+	u8 	sense[SCSI_SENSE_BUFFERSIZE];
+	u32	lun;
+	u8	cdb_length;
+	u8	cdb[32];
+	u8	timeout;
+	u8	valid_reply;
+  /* the following bits are only valid when 'valid_reply = 1' */
+	u32	sense_length;
+	u16	ioc_status;
+	u8	scsi_state;
+	u8	scsi_status;
+	u32	log_info;
+	u32	transfer_length;
+};
+
+/*
+ * The pci device ids are defined in mpi/mpi2_cnfg.h.
+ */
+static struct pci_device_id scsih_pci_table[] = {
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2004,
+		PCI_ANY_ID, PCI_ANY_ID },
+	/* Falcon ~ 2008*/
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2008,
+		PCI_ANY_ID, PCI_ANY_ID },
+	/* Liberator ~ 2108 */
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2108_1,
+		PCI_ANY_ID, PCI_ANY_ID },
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2108_2,
+		PCI_ANY_ID, PCI_ANY_ID },
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2108_3,
+		PCI_ANY_ID, PCI_ANY_ID },
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2116_1,
+		PCI_ANY_ID, PCI_ANY_ID },
+	{ MPI2_MFGPAGE_VENDORID_LSI, MPI2_MFGPAGE_DEVID_SAS2116_2,
+		PCI_ANY_ID, PCI_ANY_ID },
+	{0}	/* Terminating entry */
+};
+MODULE_DEVICE_TABLE(pci, scsih_pci_table);
+
+/**
+ * scsih_set_debug_level - global setting of ioc->logging_level.
+ *
+ * Note: The logging levels are defined in mpt2sas_debug.h.
+ */
+static int
+scsih_set_debug_level(const char *val, struct kernel_param *kp)
+{
+	int ret = param_set_int(val, kp);
+	struct MPT2SAS_ADAPTER *ioc;
+
+	if (ret)
+		return ret;
+
+	printk(KERN_INFO "setting logging_level(0x%08x)\n", logging_level);
+	list_for_each_entry(ioc, &mpt2sas_ioc_list, list)
+		ioc->logging_level = logging_level;
+	return 0;
+}
+module_param_call(logging_level, scsih_set_debug_level, param_get_int,
+    &logging_level, 0644);
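+
+/*
+ * Usage note (assuming the driver is loaded as the mpt2sas module): since
+ * the parameter is registered with mode 0644 and the custom set handler
+ * above, the logging level of every active controller can be changed at
+ * run time, e.g.
+ *
+ *	echo 0x8000 > /sys/module/mpt2sas/parameters/logging_level
+ *
+ * which sets MPT_DEBUG_IOCTL on each ioc in mpt2sas_ioc_list.
+ */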
+
+/**
+ * _scsih_srch_boot_sas_address - search based on sas_address
+ * @sas_address: sas address
+ * @boot_device: boot device object from bios page 2
+ *
+ * Returns 1 when there's a match, 0 means no match.
+ */
+static inline int
+_scsih_srch_boot_sas_address(u64 sas_address,
+    Mpi2BootDeviceSasWwid_t *boot_device)
+{
+	return (sas_address == le64_to_cpu(boot_device->SASAddress)) ?  1 : 0;
+}
+
+/**
+ * _scsih_srch_boot_device_name - search based on device name
+ * @device_name: device name specified in the IDENTIFY frame
+ * @boot_device: boot device object from bios page 2
+ *
+ * Returns 1 when there's a match, 0 means no match.
+ */
+static inline int
+_scsih_srch_boot_device_name(u64 device_name,
+    Mpi2BootDeviceDeviceName_t *boot_device)
+{
+	return (device_name == le64_to_cpu(boot_device->DeviceName)) ? 1 : 0;
+}
+
+/**
+ * _scsih_srch_boot_encl_slot - search based on enclosure_logical_id/slot
+ * @enclosure_logical_id: enclosure logical id
+ * @slot_number: slot number
+ * @boot_device: boot device object from bios page 2
+ *
+ * Returns 1 when there's a match, 0 means no match.
+ */
+static inline int
+_scsih_srch_boot_encl_slot(u64 enclosure_logical_id, u16 slot_number,
+    Mpi2BootDeviceEnclosureSlot_t *boot_device)
+{
+	return (enclosure_logical_id == le64_to_cpu(boot_device->
+	    EnclosureLogicalID) && slot_number == le16_to_cpu(boot_device->
+	    SlotNumber)) ? 1 : 0;
+}
+
+/**
+ * _scsih_is_boot_device - search for matching boot device.
+ * @sas_address: sas address
+ * @device_name: device name specified in the IDENTIFY frame
+ * @enclosure_logical_id: enclosure logical id
+ * @slot: slot number
+ * @form: specifies boot device form
+ * @boot_device: boot device object from bios page 2
+ *
+ * Returns 1 when there's a match, 0 means no match.
+ */
+static int
+_scsih_is_boot_device(u64 sas_address, u64 device_name,
+    u64 enclosure_logical_id, u16 slot, u8 form,
+    Mpi2BiosPage2BootDevice_t *boot_device)
+{
+	int rc = 0;
+
+	switch (form) {
+	case MPI2_BIOSPAGE2_FORM_SAS_WWID:
+		if (!sas_address)
+			break;
+		rc = _scsih_srch_boot_sas_address(
+		    sas_address, &boot_device->SasWwid);
+		break;
+	case MPI2_BIOSPAGE2_FORM_ENCLOSURE_SLOT:
+		if (!enclosure_logical_id)
+			break;
+		rc = _scsih_srch_boot_encl_slot(
+		    enclosure_logical_id,
+		    slot, &boot_device->EnclosureSlot);
+		break;
+	case MPI2_BIOSPAGE2_FORM_DEVICE_NAME:
+		if (!device_name)
+			break;
+		rc = _scsih_srch_boot_device_name(
+		    device_name, &boot_device->DeviceName);
+		break;
+	case MPI2_BIOSPAGE2_FORM_NO_DEVICE_SPECIFIED:
+		break;
+	}
+
+	return rc;
+}
+
+/**
+ * _scsih_determine_boot_device - determine boot device.
+ * @ioc: per adapter object
+ * @device: either sas_device or raid_device object
+ * @is_raid: [flag] 1 = raid object, 0 = sas object
+ *
+ * Determines whether this device should be the first device reported to
+ * scsi-ml or the sas transport; this is used for persistent boot device
+ * support.  There are primary, alternate, and current entries in bios
+ * page 2.  The order of priority is primary, alternate, then current.
+ * This routine saves the corresponding device object and is_raid flag
+ * in the ioc object.  The saved data is used later in
+ * _scsih_probe_boot_devices().
+ */
+static void
+_scsih_determine_boot_device(struct MPT2SAS_ADAPTER *ioc,
+    void *device, u8 is_raid)
+{
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	u64 sas_address;
+	u64 device_name;
+	u64 enclosure_logical_id;
+	u16 slot;
+
+	 /* only process this function when driver loads */
+	if (!ioc->wait_for_port_enable_to_complete)
+		return;
+
+	if (!is_raid) {
+		sas_device = device;
+		sas_address = sas_device->sas_address;
+		device_name = sas_device->device_name;
+		enclosure_logical_id = sas_device->enclosure_logical_id;
+		slot = sas_device->slot;
+	} else {
+		raid_device = device;
+		sas_address = raid_device->wwid;
+		device_name = 0;
+		enclosure_logical_id = 0;
+		slot = 0;
+	}
+
+	if (!ioc->req_boot_device.device) {
+		if (_scsih_is_boot_device(sas_address, device_name,
+		    enclosure_logical_id, slot,
+		    (ioc->bios_pg2.ReqBootDeviceForm &
+		    MPI2_BIOSPAGE2_FORM_MASK),
+		    &ioc->bios_pg2.RequestedBootDevice)) {
+			dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+			   "%s: req_boot_device(0x%016llx)\n",
+			    ioc->name, __func__,
+			    (unsigned long long)sas_address));
+			ioc->req_boot_device.device = device;
+			ioc->req_boot_device.is_raid = is_raid;
+		}
+	}
+
+	if (!ioc->req_alt_boot_device.device) {
+		if (_scsih_is_boot_device(sas_address, device_name,
+		    enclosure_logical_id, slot,
+		    (ioc->bios_pg2.ReqAltBootDeviceForm &
+		    MPI2_BIOSPAGE2_FORM_MASK),
+		    &ioc->bios_pg2.RequestedAltBootDevice)) {
+			dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+			   "%s: req_alt_boot_device(0x%016llx)\n",
+			    ioc->name, __func__,
+			    (unsigned long long)sas_address));
+			ioc->req_alt_boot_device.device = device;
+			ioc->req_alt_boot_device.is_raid = is_raid;
+		}
+	}
+
+	if (!ioc->current_boot_device.device) {
+		if (_scsih_is_boot_device(sas_address, device_name,
+		    enclosure_logical_id, slot,
+		    (ioc->bios_pg2.CurrentBootDeviceForm &
+		    MPI2_BIOSPAGE2_FORM_MASK),
+		    &ioc->bios_pg2.CurrentBootDevice)) {
+			dinitprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+			   "%s: current_boot_device(0x%016llx)\n",
+			    ioc->name, __func__,
+			    (unsigned long long)sas_address));
+			ioc->current_boot_device.device = device;
+			ioc->current_boot_device.is_raid = is_raid;
+		}
+	}
+}
+
+/**
+ * mpt2sas_scsih_sas_device_find_by_sas_address - sas device search
+ * @ioc: per adapter object
+ * @sas_address: sas address
+ * Context: Calling function should acquire ioc->sas_device_lock
+ *
+ * This searches for a sas_device based on sas_address, then returns the
+ * sas_device object.
+ */
+struct _sas_device *
+mpt2sas_scsih_sas_device_find_by_sas_address(struct MPT2SAS_ADAPTER *ioc,
+    u64 sas_address)
+{
+	struct _sas_device *sas_device, *r;
+
+	r = NULL;
+	/* check the sas_device_init_list */
+	list_for_each_entry(sas_device, &ioc->sas_device_init_list,
+	    list) {
+		if (sas_device->sas_address != sas_address)
+			continue;
+		r = sas_device;
+		goto out;
+	}
+
+	/* then check the sas_device_list */
+	list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
+		if (sas_device->sas_address != sas_address)
+			continue;
+		r = sas_device;
+		goto out;
+	}
+ out:
+	return r;
+}
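+
+/*
+ * Illustrative caller pattern (example only): the lookup itself takes no
+ * lock, so per the Context note callers hold ioc->sas_device_lock around
+ * it, e.g.
+ *
+ *	unsigned long flags;
+ *	struct _sas_device *sas_device;
+ *
+ *	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+ *	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+ *	    sas_address);
+ *	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+ */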
+
+/**
+ * _scsih_sas_device_find_by_handle - sas device search
+ * @ioc: per adapter object
+ * @handle: sas device handle (assigned by firmware)
+ * Context: Calling function should acquire ioc->sas_device_lock
+ *
+ * This searches for a sas_device based on handle, then returns the
+ * sas_device object.
+ */
+static struct _sas_device *
+_scsih_sas_device_find_by_handle(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct _sas_device *sas_device, *r;
+
+	r = NULL;
+	if (ioc->wait_for_port_enable_to_complete) {
+		list_for_each_entry(sas_device, &ioc->sas_device_init_list,
+		    list) {
+			if (sas_device->handle != handle)
+				continue;
+			r = sas_device;
+			goto out;
+		}
+	} else {
+		list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
+			if (sas_device->handle != handle)
+				continue;
+			r = sas_device;
+			goto out;
+		}
+	}
+
+ out:
+	return r;
+}
+
+/**
+ * _scsih_sas_device_remove - remove sas_device from list.
+ * @ioc: per adapter object
+ * @sas_device: the sas_device object
+ * Context: This function will acquire ioc->sas_device_lock.
+ *
+ * Removing object and freeing associated memory from the ioc->sas_device_list.
+ */
+static void
+_scsih_sas_device_remove(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_device *sas_device)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	list_del(&sas_device->list);
+	memset(sas_device, 0, sizeof(struct _sas_device));
+	kfree(sas_device);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+}
+
+/**
+ * _scsih_sas_device_add - insert sas_device to the list.
+ * @ioc: per adapter object
+ * @sas_device: the sas_device object
+ * Context: This function will acquire ioc->sas_device_lock.
+ *
+ * Adding new object to the ioc->sas_device_list.
+ */
+static void
+_scsih_sas_device_add(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_device *sas_device)
+{
+	unsigned long flags;
+	u16 handle, parent_handle;
+	u64 sas_address;
+
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: handle"
+	    "(0x%04x), sas_addr(0x%016llx)\n", ioc->name, __func__,
+	    sas_device->handle, (unsigned long long)sas_device->sas_address));
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	list_add_tail(&sas_device->list, &ioc->sas_device_list);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	handle = sas_device->handle;
+	parent_handle = sas_device->parent_handle;
+	sas_address = sas_device->sas_address;
+	if (!mpt2sas_transport_port_add(ioc, handle, parent_handle)) {
+		_scsih_sas_device_remove(ioc, sas_device);
+	} else if (!sas_device->starget) {
+		mpt2sas_transport_port_remove(ioc, sas_address, parent_handle);
+		_scsih_sas_device_remove(ioc, sas_device);
+	}
+}
+
+/**
+ * _scsih_sas_device_init_add - insert sas_device to the list.
+ * @ioc: per adapter object
+ * @sas_device: the sas_device object
+ * Context: This function will acquire ioc->sas_device_lock.
+ *
+ * Adding new object at driver load time to the ioc->sas_device_init_list.
+ */
+static void
+_scsih_sas_device_init_add(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_device *sas_device)
+{
+	unsigned long flags;
+
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: handle"
+	    "(0x%04x), sas_addr(0x%016llx)\n", ioc->name, __func__,
+	    sas_device->handle, (unsigned long long)sas_device->sas_address));
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	list_add_tail(&sas_device->list, &ioc->sas_device_init_list);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	_scsih_determine_boot_device(ioc, sas_device, 0);
+}
+
+/**
+ * mpt2sas_scsih_expander_find_by_handle - expander device search
+ * @ioc: per adapter object
+ * @handle: expander handle (assigned by firmware)
+ * Context: Calling function should acquire ioc->sas_node_lock
+ *
+ * This searches for expander device based on handle, then returns the
+ * sas_node object.
+ */
+struct _sas_node *
+mpt2sas_scsih_expander_find_by_handle(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct _sas_node *sas_expander, *r;
+
+	r = NULL;
+	list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
+		if (sas_expander->handle != handle)
+			continue;
+		r = sas_expander;
+		goto out;
+	}
+ out:
+	return r;
+}
+
+/**
+ * _scsih_raid_device_find_by_id - raid device search
+ * @ioc: per adapter object
+ * @id: sas device target id
+ * @channel: sas device channel
+ * Context: Calling function should acquire ioc->raid_device_lock
+ *
+ * This searches for a raid_device based on target id, then returns the
+ * raid_device object.
+ */
+static struct _raid_device *
+_scsih_raid_device_find_by_id(struct MPT2SAS_ADAPTER *ioc, int id, int channel)
+{
+	struct _raid_device *raid_device, *r;
+
+	r = NULL;
+	list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
+		if (raid_device->id == id && raid_device->channel == channel) {
+			r = raid_device;
+			goto out;
+		}
+	}
+
+ out:
+	return r;
+}
+
+/**
+ * _scsih_raid_device_find_by_handle - raid device search
+ * @ioc: per adapter object
+ * @handle: sas device handle (assigned by firmware)
+ * Context: Calling function should acquire ioc->raid_device_lock
+ *
+ * This searches for a raid_device based on handle, then returns the
+ * raid_device object.
+ */
+static struct _raid_device *
+_scsih_raid_device_find_by_handle(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct _raid_device *raid_device, *r;
+
+	r = NULL;
+	list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
+		if (raid_device->handle != handle)
+			continue;
+		r = raid_device;
+		goto out;
+	}
+
+ out:
+	return r;
+}
+
+/**
+ * _scsih_raid_device_find_by_wwid - raid device search
+ * @ioc: per adapter object
+ * @wwid: world wide identifier of the raid volume
+ * Context: Calling function should acquire ioc->raid_device_lock
+ *
+ * This searches for a raid_device based on wwid, then returns the
+ * raid_device object.
+ */
+static struct _raid_device *
+_scsih_raid_device_find_by_wwid(struct MPT2SAS_ADAPTER *ioc, u64 wwid)
+{
+	struct _raid_device *raid_device, *r;
+
+	r = NULL;
+	list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
+		if (raid_device->wwid != wwid)
+			continue;
+		r = raid_device;
+		goto out;
+	}
+
+ out:
+	return r;
+}
+
+/**
+ * _scsih_raid_device_add - add raid_device object
+ * @ioc: per adapter object
+ * @raid_device: raid_device object
+ *
+ * This is added to the raid_device_list linked list.
+ */
+static void
+_scsih_raid_device_add(struct MPT2SAS_ADAPTER *ioc,
+    struct _raid_device *raid_device)
+{
+	unsigned long flags;
+
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: handle"
+	    "(0x%04x), wwid(0x%016llx)\n", ioc->name, __func__,
+	    raid_device->handle, (unsigned long long)raid_device->wwid));
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	list_add_tail(&raid_device->list, &ioc->raid_device_list);
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+}
+
+/**
+ * _scsih_raid_device_remove - delete raid_device object
+ * @ioc: per adapter object
+ * @raid_device: raid_device object
+ *
+ * This is removed from the raid_device_list linked list.
+ */
+static void
+_scsih_raid_device_remove(struct MPT2SAS_ADAPTER *ioc,
+    struct _raid_device *raid_device)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	list_del(&raid_device->list);
+	memset(raid_device, 0, sizeof(struct _raid_device));
+	kfree(raid_device);
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+}
+
+/**
+ * mpt2sas_scsih_expander_find_by_sas_address - expander device search
+ * @ioc: per adapter object
+ * @sas_address: sas address
+ * Context: Calling function should acquire ioc->sas_node_lock.
+ *
+ * This searches for expander device based on sas_address, then returns the
+ * sas_node object.
+ */
+struct _sas_node *
+mpt2sas_scsih_expander_find_by_sas_address(struct MPT2SAS_ADAPTER *ioc,
+    u64 sas_address)
+{
+	struct _sas_node *sas_expander, *r;
+
+	r = NULL;
+	list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
+		if (sas_expander->sas_address != sas_address)
+			continue;
+		r = sas_expander;
+		goto out;
+	}
+ out:
+	return r;
+}
+
+/**
+ * _scsih_expander_node_add - insert expander device to the list.
+ * @ioc: per adapter object
+ * @sas_expander: the sas_device object
+ * Context: This function will acquire ioc->sas_node_lock.
+ *
+ * Adding new object to the ioc->sas_expander_list.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_expander_node_add(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_node *sas_expander)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	list_add_tail(&sas_expander->list, &ioc->sas_expander_list);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+}
+
+/**
+ * _scsih_is_end_device - determines if device is an end device
+ * @device_info: bitfield providing information about the device.
+ * Context: none
+ *
+ * Returns 1 if end device.
+ */
+static int
+_scsih_is_end_device(u32 device_info)
+{
+	if (device_info & MPI2_SAS_DEVICE_INFO_END_DEVICE &&
+		((device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET) |
+		(device_info & MPI2_SAS_DEVICE_INFO_STP_TARGET) |
+		(device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE)))
+		return 1;
+	else
+		return 0;
+}
+
+/**
+ * _scsih_scsi_lookup_get - returns scmd entry
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * Context: This function will acquire ioc->scsi_lookup_lock.
+ *
+ * Returns the smid stored scmd pointer.
+ */
+static struct scsi_cmnd *
+_scsih_scsi_lookup_get(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	unsigned long	flags;
+	struct scsi_cmnd *scmd;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	scmd = ioc->scsi_lookup[smid - 1].scmd;
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+	return scmd;
+}
+
+/**
+ * _scsih_scsi_lookup_getclear - returns scmd entry
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * Context: This function will acquire ioc->scsi_lookup_lock.
+ *
+ * Returns the smid stored scmd pointer, as well as clearing the scmd pointer.
+ */
+static struct scsi_cmnd *
+_scsih_scsi_lookup_getclear(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	unsigned long	flags;
+	struct scsi_cmnd *scmd;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	scmd = ioc->scsi_lookup[smid - 1].scmd;
+	ioc->scsi_lookup[smid - 1].scmd = NULL;
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+	return scmd;
+}
+
+/**
+ * _scsih_scsi_lookup_set - updates scmd entry in lookup
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @scmd: pointer to scsi command object
+ * Context: This function will acquire ioc->scsi_lookup_lock.
+ *
+ * This will save scmd pointer in the scsi_lookup array.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_scsi_lookup_set(struct MPT2SAS_ADAPTER *ioc, u16 smid,
+    struct scsi_cmnd *scmd)
+{
+	unsigned long	flags;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	ioc->scsi_lookup[smid - 1].scmd = scmd;
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+}
+
+/**
+ * _scsih_scsi_lookup_find_by_scmd - scmd lookup
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @scmd: pointer to scsi command object
+ * Context: This function will acquire ioc->scsi_lookup_lock.
+ *
+ * This will search for a scmd pointer in the scsi_lookup array,
+ * returning the relevant smid.  A returned value of zero means invalid.
+ */
+static u16
+_scsih_scsi_lookup_find_by_scmd(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd
+    *scmd)
+{
+	u16 smid;
+	unsigned long	flags;
+	int i;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	smid = 0;
+	for (i = 0; i < ioc->request_depth; i++) {
+		if (ioc->scsi_lookup[i].scmd == scmd) {
+			smid = i + 1;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+	return smid;
+}
+
+/**
+ * _scsih_scsi_lookup_find_by_target - search for matching channel:id
+ * @ioc: per adapter object
+ * @id: target id
+ * @channel: channel
+ * Context: This function will acquire ioc->scsi_lookup_lock.
+ *
+ * This will search for a matching channel:id in the scsi_lookup array,
+ * returning 1 if found.
+ */
+static u8
+_scsih_scsi_lookup_find_by_target(struct MPT2SAS_ADAPTER *ioc, int id,
+    int channel)
+{
+	u8 found;
+	unsigned long	flags;
+	int i;
+
+	spin_lock_irqsave(&ioc->scsi_lookup_lock, flags);
+	found = 0;
+	for (i = 0 ; i < ioc->request_depth; i++) {
+		if (ioc->scsi_lookup[i].scmd &&
+		    (ioc->scsi_lookup[i].scmd->device->id == id &&
+		    ioc->scsi_lookup[i].scmd->device->channel == channel)) {
+			found = 1;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->scsi_lookup_lock, flags);
+	return found;
+}
+
+/**
+ * _scsih_get_chain_buffer_dma - obtain block of chains (dma address)
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns phys pointer to chain buffer.
+ */
+static dma_addr_t
+_scsih_get_chain_buffer_dma(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return ioc->chain_dma + ((smid - 1) * (ioc->request_sz *
+	    ioc->chains_needed_per_io));
+}
+
+/**
+ * _scsih_get_chain_buffer - obtain block of chains assigned to a mf request
+ * @ioc: per adapter object
+ * @smid: system request message index
+ *
+ * Returns virt pointer to chain buffer.
+ */
+static void *
+_scsih_get_chain_buffer(struct MPT2SAS_ADAPTER *ioc, u16 smid)
+{
+	return (void *)(ioc->chain + ((smid - 1) * (ioc->request_sz *
+	    ioc->chains_needed_per_io)));
+}
+
+/**
+ * _scsih_build_scatter_gather - main sg creation routine
+ * @ioc: per adapter object
+ * @scmd: scsi command
+ * @smid: system request message index
+ * Context: none.
+ *
+ * The main routine that builds the scatter-gather table from a given
+ * scsi request sent via the .queuecommand main handler.
+ *
+ * Returns 0 success, anything else error
+ */
+static int
+_scsih_build_scatter_gather(struct MPT2SAS_ADAPTER *ioc,
+    struct scsi_cmnd *scmd, u16 smid)
+{
+	Mpi2SCSIIORequest_t *mpi_request;
+	dma_addr_t chain_dma;
+	struct scatterlist *sg_scmd;
+	void *sg_local, *chain;
+	u32 chain_offset;
+	u32 chain_length;
+	u32 chain_flags;
+	u32 sges_left;
+	u32 sges_in_segment;
+	u32 sgl_flags;
+	u32 sgl_flags_last_element;
+	u32 sgl_flags_end_buffer;
+
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+
+	/* init scatter gather flags */
+	sgl_flags = MPI2_SGE_FLAGS_SIMPLE_ELEMENT;
+	if (scmd->sc_data_direction == DMA_TO_DEVICE)
+		sgl_flags |= MPI2_SGE_FLAGS_HOST_TO_IOC;
+	sgl_flags_last_element = (sgl_flags | MPI2_SGE_FLAGS_LAST_ELEMENT)
+	    << MPI2_SGE_FLAGS_SHIFT;
+	sgl_flags_end_buffer = (sgl_flags | MPI2_SGE_FLAGS_LAST_ELEMENT |
+	    MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_END_OF_LIST)
+	    << MPI2_SGE_FLAGS_SHIFT;
+	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+
+	sg_scmd = scsi_sglist(scmd);
+	sges_left = scsi_dma_map(scmd);
+	if (!sges_left) {
+		sdev_printk(KERN_ERR, scmd->device, "scsi_dma_map"
+		" failed: request for %d bytes!\n", scsi_bufflen(scmd));
+		return -ENOMEM;
+	}
+
+	sg_local = &mpi_request->SGL;
+	sges_in_segment = ioc->max_sges_in_main_message;
+	if (sges_left <= sges_in_segment)
+		goto fill_in_last_segment;
+
+	mpi_request->ChainOffset = (offsetof(Mpi2SCSIIORequest_t, SGL) +
+	    (sges_in_segment * ioc->sge_size))/4;
+
+	/* fill in main message segment when there is a chain following */
+	while (sges_in_segment) {
+		if (sges_in_segment == 1)
+			ioc->base_add_sg_single(sg_local,
+			    sgl_flags_last_element | sg_dma_len(sg_scmd),
+			    sg_dma_address(sg_scmd));
+		else
+			ioc->base_add_sg_single(sg_local, sgl_flags |
+			    sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
+		sg_scmd = sg_next(sg_scmd);
+		sg_local += ioc->sge_size;
+		sges_left--;
+		sges_in_segment--;
+	}
+
+	/* initializing the chain flags and pointers */
+	chain_flags = MPI2_SGE_FLAGS_CHAIN_ELEMENT << MPI2_SGE_FLAGS_SHIFT;
+	chain = _scsih_get_chain_buffer(ioc, smid);
+	chain_dma = _scsih_get_chain_buffer_dma(ioc, smid);
+	do {
+		sges_in_segment = (sges_left <=
+		    ioc->max_sges_in_chain_message) ? sges_left :
+		    ioc->max_sges_in_chain_message;
+		chain_offset = (sges_left == sges_in_segment) ?
+		    0 : (sges_in_segment * ioc->sge_size)/4;
+		chain_length = sges_in_segment * ioc->sge_size;
+		if (chain_offset) {
+			chain_offset = chain_offset <<
+			    MPI2_SGE_CHAIN_OFFSET_SHIFT;
+			chain_length += ioc->sge_size;
+		}
+		ioc->base_add_sg_single(sg_local, chain_flags | chain_offset |
+		    chain_length, chain_dma);
+		sg_local = chain;
+		if (!chain_offset)
+			goto fill_in_last_segment;
+
+		/* fill in chain segments */
+		while (sges_in_segment) {
+			if (sges_in_segment == 1)
+				ioc->base_add_sg_single(sg_local,
+				    sgl_flags_last_element |
+				    sg_dma_len(sg_scmd),
+				    sg_dma_address(sg_scmd));
+			else
+				ioc->base_add_sg_single(sg_local, sgl_flags |
+				    sg_dma_len(sg_scmd),
+				    sg_dma_address(sg_scmd));
+			sg_scmd = sg_next(sg_scmd);
+			sg_local += ioc->sge_size;
+			sges_left--;
+			sges_in_segment--;
+		}
+
+		chain_dma += ioc->request_sz;
+		chain += ioc->request_sz;
+	} while (1);
+
+
+ fill_in_last_segment:
+
+	/* fill the last segment */
+	while (sges_left) {
+		if (sges_left == 1)
+			ioc->base_add_sg_single(sg_local, sgl_flags_end_buffer |
+			    sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
+		else
+			ioc->base_add_sg_single(sg_local, sgl_flags |
+			    sg_dma_len(sg_scmd), sg_dma_address(sg_scmd));
+		sg_scmd = sg_next(sg_scmd);
+		sg_local += ioc->sge_size;
+		sges_left--;
+	}
+
+	return 0;
+}
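+
+/*
+ * Rough walk-through of the segmentation above (hypothetical limits): with
+ * max_sges_in_main_message = 8, max_sges_in_chain_message = 16 and a
+ * 30-element scatterlist, the main frame carries 8 data SGEs followed by a
+ * chain element pointing at the chain buffer, the first chain segment
+ * carries 16 data SGEs plus another chain element, and the final segment
+ * holds the remaining 6 SGEs, the last of which is flagged
+ * end-of-buffer/end-of-list.
+ */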
+
+/**
+ * scsih_change_queue_depth - setting device queue depth
+ * @sdev: scsi device struct
+ * @qdepth: requested queue depth
+ *
+ * Returns queue depth.
+ */
+static int
+scsih_change_queue_depth(struct scsi_device *sdev, int qdepth)
+{
+	struct Scsi_Host *shost = sdev->host;
+	int max_depth;
+	int tag_type;
+
+	max_depth = shost->can_queue;
+	if (!sdev->tagged_supported)
+		max_depth = 1;
+	if (qdepth > max_depth)
+		qdepth = max_depth;
+	tag_type = (qdepth == 1) ? 0 : MSG_SIMPLE_TAG;
+	scsi_adjust_queue_depth(sdev, tag_type, qdepth);
+
+	if (sdev->inquiry_len > 7)
+		sdev_printk(KERN_INFO, sdev, "qdepth(%d), tagged(%d), "
+		"simple(%d), ordered(%d), scsi_level(%d), cmd_que(%d)\n",
+		sdev->queue_depth, sdev->tagged_supported, sdev->simple_tags,
+		sdev->ordered_tags, sdev->scsi_level,
+		(sdev->inquiry[7] & 2) >> 1);
+
+	return sdev->queue_depth;
+}
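+
+/*
+ * e.g. (hypothetical numbers) a request for qdepth = 64 on a host with
+ * can_queue = 32 is clamped to 32, and an untagged device is always forced
+ * down to a queue depth of 1.
+ */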
+
+/**
+ * scsih_change_queue_type - changing device queue tag type
+ * @sdev: scsi device struct
+ * @tag_type: requested tag type
+ *
+ * Returns queue tag type.
+ */
+static int
+scsih_change_queue_type(struct scsi_device *sdev, int tag_type)
+{
+	if (sdev->tagged_supported) {
+		scsi_set_tag_type(sdev, tag_type);
+		if (tag_type)
+			scsi_activate_tcq(sdev, sdev->queue_depth);
+		else
+			scsi_deactivate_tcq(sdev, sdev->queue_depth);
+	} else
+		tag_type = 0;
+
+	return tag_type;
+}
+
+/**
+ * scsih_target_alloc - target add routine
+ * @starget: scsi target struct
+ *
+ * Returns 0 if ok. Any other return is assumed to be an error and
+ * the device is ignored.
+ */
+static int
+scsih_target_alloc(struct scsi_target *starget)
+{
+	struct Scsi_Host *shost = dev_to_shost(&starget->dev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	unsigned long flags;
+	struct sas_rphy *rphy;
+
+	sas_target_priv_data = kzalloc(sizeof(struct MPT2SAS_TARGET),
+	    GFP_KERNEL);
+	if (!sas_target_priv_data)
+		return -ENOMEM;
+
+	starget->hostdata = sas_target_priv_data;
+	sas_target_priv_data->starget = starget;
+	sas_target_priv_data->handle = MPT2SAS_INVALID_DEVICE_HANDLE;
+
+	/* RAID volumes */
+	if (starget->channel == RAID_CHANNEL) {
+		spin_lock_irqsave(&ioc->raid_device_lock, flags);
+		raid_device = _scsih_raid_device_find_by_id(ioc, starget->id,
+		    starget->channel);
+		if (raid_device) {
+			sas_target_priv_data->handle = raid_device->handle;
+			sas_target_priv_data->sas_address = raid_device->wwid;
+			sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME;
+			raid_device->starget = starget;
+		}
+		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+		return 0;
+	}
+
+	/* sas/sata devices */
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	rphy = dev_to_rphy(starget->dev.parent);
+	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+	   rphy->identify.sas_address);
+
+	if (sas_device) {
+		sas_target_priv_data->handle = sas_device->handle;
+		sas_target_priv_data->sas_address = sas_device->sas_address;
+		sas_device->starget = starget;
+		sas_device->id = starget->id;
+		sas_device->channel = starget->channel;
+		if (sas_device->hidden_raid_component)
+			sas_target_priv_data->flags |=
+			    MPT_TARGET_FLAGS_RAID_COMPONENT;
+	}
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	return 0;
+}
+
+/**
+ * scsih_target_destroy - target destroy routine
+ * @starget: scsi target struct
+ *
+ * Returns nothing.
+ */
+static void
+scsih_target_destroy(struct scsi_target *starget)
+{
+	struct Scsi_Host *shost = dev_to_shost(&starget->dev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	unsigned long flags;
+	struct sas_rphy *rphy;
+
+	sas_target_priv_data = starget->hostdata;
+	if (!sas_target_priv_data)
+		return;
+
+	if (starget->channel == RAID_CHANNEL) {
+		spin_lock_irqsave(&ioc->raid_device_lock, flags);
+		raid_device = _scsih_raid_device_find_by_id(ioc, starget->id,
+		    starget->channel);
+		if (raid_device) {
+			raid_device->starget = NULL;
+			raid_device->sdev = NULL;
+		}
+		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+		goto out;
+	}
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	rphy = dev_to_rphy(starget->dev.parent);
+	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+	   rphy->identify.sas_address);
+	if (sas_device)
+		sas_device->starget = NULL;
+
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+ out:
+	kfree(sas_target_priv_data);
+	starget->hostdata = NULL;
+}
+
+/**
+ * scsih_slave_alloc - device add routine
+ * @sdev: scsi device struct
+ *
+ * Returns 0 if ok. Any other return is assumed to be an error and
+ * the device is ignored.
+ */
+static int
+scsih_slave_alloc(struct scsi_device *sdev)
+{
+	struct Scsi_Host *shost;
+	struct MPT2SAS_ADAPTER *ioc;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct scsi_target *starget;
+	struct _raid_device *raid_device;
+	struct _sas_device *sas_device;
+	unsigned long flags;
+
+	sas_device_priv_data = kzalloc(sizeof(struct MPT2SAS_DEVICE),
+	    GFP_KERNEL);
+	if (!sas_device_priv_data)
+		return -ENOMEM;
+
+	sas_device_priv_data->lun = sdev->lun;
+	sas_device_priv_data->flags = MPT_DEVICE_FLAGS_INIT;
+
+	starget = scsi_target(sdev);
+	sas_target_priv_data = starget->hostdata;
+	sas_target_priv_data->num_luns++;
+	sas_device_priv_data->sas_target = sas_target_priv_data;
+	sdev->hostdata = sas_device_priv_data;
+	if ((sas_target_priv_data->flags & MPT_TARGET_FLAGS_RAID_COMPONENT))
+		sdev->no_uld_attach = 1;
+
+	shost = dev_to_shost(&starget->dev);
+	ioc = shost_priv(shost);
+	if (starget->channel == RAID_CHANNEL) {
+		spin_lock_irqsave(&ioc->raid_device_lock, flags);
+		raid_device = _scsih_raid_device_find_by_id(ioc,
+		    starget->id, starget->channel);
+		if (raid_device)
+			raid_device->sdev = sdev; /* raid is single lun */
+		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+	} else {
+		/* set TLR bit for SSP devices */
+		if (!(ioc->facts.IOCCapabilities &
+		     MPI2_IOCFACTS_CAPABILITY_TLR))
+			goto out;
+		spin_lock_irqsave(&ioc->sas_device_lock, flags);
+		sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+		   sas_device_priv_data->sas_target->sas_address);
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		if (sas_device && sas_device->device_info &
+		    MPI2_SAS_DEVICE_INFO_SSP_TARGET)
+			sas_device_priv_data->flags |= MPT_DEVICE_TLR_ON;
+	}
+
+ out:
+	return 0;
+}
+
+/**
+ * scsih_slave_destroy - device destroy routine
+ * @sdev: scsi device struct
+ *
+ * Returns nothing.
+ */
+static void
+scsih_slave_destroy(struct scsi_device *sdev)
+{
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct scsi_target *starget;
+
+	if (!sdev->hostdata)
+		return;
+
+	starget = scsi_target(sdev);
+	sas_target_priv_data = starget->hostdata;
+	sas_target_priv_data->num_luns--;
+	kfree(sdev->hostdata);
+	sdev->hostdata = NULL;
+}
+
+/**
+ * scsih_display_sata_capabilities - sata capabilities
+ * @ioc: per adapter object
+ * @sas_device: the sas_device object
+ * @sdev: scsi device struct
+ */
+static void
+scsih_display_sata_capabilities(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_device *sas_device, struct scsi_device *sdev)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	u32 ioc_status;
+	u16 flags;
+	u32 device_info;
+
+	if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, &sas_device_pg0,
+	    MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, sas_device->handle))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	flags = le16_to_cpu(sas_device_pg0.Flags);
+	device_info = le16_to_cpu(sas_device_pg0.DeviceInfo);
+
+	sdev_printk(KERN_INFO, sdev,
+	    "atapi(%s), ncq(%s), asyn_notify(%s), smart(%s), fua(%s), "
+	    "sw_preserve(%s)\n",
+	    (device_info & MPI2_SAS_DEVICE_INFO_ATAPI_DEVICE) ? "y" : "n",
+	    (flags & MPI2_SAS_DEVICE0_FLAGS_SATA_NCQ_SUPPORTED) ? "y" : "n",
+	    (flags & MPI2_SAS_DEVICE0_FLAGS_SATA_ASYNCHRONOUS_NOTIFY) ? "y" :
+	    "n",
+	    (flags & MPI2_SAS_DEVICE0_FLAGS_SATA_SMART_SUPPORTED) ? "y" : "n",
+	    (flags & MPI2_SAS_DEVICE0_FLAGS_SATA_FUA_SUPPORTED) ? "y" : "n",
+	    (flags & MPI2_SAS_DEVICE0_FLAGS_SATA_SW_PRESERVE) ? "y" : "n");
+}
+
+/**
+ * _scsih_get_volume_capabilities - volume capabilities
+ * @ioc: per adapter object
+ * @raid_device: the raid_device object
+ */
+static void
+_scsih_get_volume_capabilities(struct MPT2SAS_ADAPTER *ioc,
+    struct _raid_device *raid_device)
+{
+	Mpi2RaidVolPage0_t *vol_pg0;
+	Mpi2RaidPhysDiskPage0_t pd_pg0;
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	Mpi2ConfigReply_t mpi_reply;
+	u16 sz;
+	u8 num_pds;
+
+	if ((mpt2sas_config_get_number_pds(ioc, raid_device->handle,
+	    &num_pds)) || !num_pds) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	raid_device->num_pds = num_pds;
+	sz = offsetof(Mpi2RaidVolPage0_t, PhysDisk) + (num_pds *
+	    sizeof(Mpi2RaidVol0PhysDisk_t));
+	vol_pg0 = kzalloc(sz, GFP_KERNEL);
+	if (!vol_pg0) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	if ((mpt2sas_config_get_raid_volume_pg0(ioc, &mpi_reply, vol_pg0,
+	     MPI2_RAID_VOLUME_PGAD_FORM_HANDLE, raid_device->handle, sz))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		kfree(vol_pg0);
+		return;
+	}
+
+	raid_device->volume_type = vol_pg0->VolumeType;
+
+	/* figure out what the underlying devices are by
+	 * obtaining the device_info bits for the 1st device
+	 */
+	if (!(mpt2sas_config_get_phys_disk_pg0(ioc, &mpi_reply,
+	    &pd_pg0, MPI2_PHYSDISK_PGAD_FORM_PHYSDISKNUM,
+	    vol_pg0->PhysDisk[0].PhysDiskNum))) {
+		if (!(mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply,
+		    &sas_device_pg0, MPI2_SAS_DEVICE_PGAD_FORM_HANDLE,
+		    le16_to_cpu(pd_pg0.DevHandle)))) {
+			raid_device->device_info =
+			    le32_to_cpu(sas_device_pg0.DeviceInfo);
+		}
+	}
+
+	kfree(vol_pg0);
+}
+
+/**
+ * scsih_slave_configure - device configure routine.
+ * @sdev: scsi device struct
+ *
+ * Returns 0 if ok. Any other return is assumed to be an error and
+ * the device is ignored.
+ */
+static int
+scsih_slave_configure(struct scsi_device *sdev)
+{
+	struct Scsi_Host *shost = sdev->host;
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	unsigned long flags;
+	int qdepth;
+	u8 ssp_target = 0;
+	char *ds = "";
+	char *r_level = "";
+
+	qdepth = 1;
+	sas_device_priv_data = sdev->hostdata;
+	sas_device_priv_data->configured_lun = 1;
+	sas_device_priv_data->flags &= ~MPT_DEVICE_FLAGS_INIT;
+	sas_target_priv_data = sas_device_priv_data->sas_target;
+
+	/* raid volume handling */
+	if (sas_target_priv_data->flags & MPT_TARGET_FLAGS_VOLUME) {
+
+		spin_lock_irqsave(&ioc->raid_device_lock, flags);
+		raid_device = _scsih_raid_device_find_by_handle(ioc,
+		     sas_target_priv_data->handle);
+		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+		if (!raid_device) {
+			printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+			    ioc->name, __FILE__, __LINE__, __func__);
+			return 0;
+		}
+
+		_scsih_get_volume_capabilities(ioc, raid_device);
+
+		/* RAID Queue Depth Support
+		 * IS volume = underlying qdepth of drive type, either
+		 *    MPT2SAS_SAS_QUEUE_DEPTH or MPT2SAS_SATA_QUEUE_DEPTH
+		 * IM/IME/R10 = 128 (MPT2SAS_RAID_QUEUE_DEPTH)
+		 */
+		if (raid_device->device_info &
+		    MPI2_SAS_DEVICE_INFO_SSP_TARGET) {
+			qdepth = MPT2SAS_SAS_QUEUE_DEPTH;
+			ds = "SSP";
+		} else {
+			qdepth = MPT2SAS_SATA_QUEUE_DEPTH;
+			 if (raid_device->device_info &
+			    MPI2_SAS_DEVICE_INFO_SATA_DEVICE)
+				ds = "SATA";
+			else
+				ds = "STP";
+		}
+
+		switch (raid_device->volume_type) {
+		case MPI2_RAID_VOL_TYPE_RAID0:
+			r_level = "RAID0";
+			break;
+		case MPI2_RAID_VOL_TYPE_RAID1E:
+			qdepth = MPT2SAS_RAID_QUEUE_DEPTH;
+			r_level = "RAID1E";
+			break;
+		case MPI2_RAID_VOL_TYPE_RAID1:
+			qdepth = MPT2SAS_RAID_QUEUE_DEPTH;
+			r_level = "RAID1";
+			break;
+		case MPI2_RAID_VOL_TYPE_RAID10:
+			qdepth = MPT2SAS_RAID_QUEUE_DEPTH;
+			r_level = "RAID10";
+			break;
+		case MPI2_RAID_VOL_TYPE_UNKNOWN:
+		default:
+			qdepth = MPT2SAS_RAID_QUEUE_DEPTH;
+			r_level = "RAIDX";
+			break;
+		}
+
+		sdev_printk(KERN_INFO, sdev, "%s: "
+		    "handle(0x%04x), wwid(0x%016llx), pd_count(%d), type(%s)\n",
+		    r_level, raid_device->handle,
+		    (unsigned long long)raid_device->wwid,
+		    raid_device->num_pds, ds);
+		scsih_change_queue_depth(sdev, qdepth);
+		return 0;
+	}
+
+	/* non-raid handling */
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+	   sas_device_priv_data->sas_target->sas_address);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	if (sas_device) {
+		if (sas_target_priv_data->flags &
+		    MPT_TARGET_FLAGS_RAID_COMPONENT) {
+			mpt2sas_config_get_volume_handle(ioc,
+			    sas_device->handle, &sas_device->volume_handle);
+			mpt2sas_config_get_volume_wwid(ioc,
+			    sas_device->volume_handle,
+			    &sas_device->volume_wwid);
+		}
+		if (sas_device->device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET) {
+			qdepth = MPT2SAS_SAS_QUEUE_DEPTH;
+			ssp_target = 1;
+			ds = "SSP";
+		} else {
+			qdepth = MPT2SAS_SATA_QUEUE_DEPTH;
+			if (sas_device->device_info &
+			    MPI2_SAS_DEVICE_INFO_STP_TARGET)
+				ds = "STP";
+			else if (sas_device->device_info &
+			    MPI2_SAS_DEVICE_INFO_SATA_DEVICE)
+				ds = "SATA";
+		}
+
+		sdev_printk(KERN_INFO, sdev, "%s: handle(0x%04x), "
+		    "sas_addr(0x%016llx), device_name(0x%016llx)\n",
+		    ds, sas_device->handle,
+		    (unsigned long long)sas_device->sas_address,
+		    (unsigned long long)sas_device->device_name);
+		sdev_printk(KERN_INFO, sdev, "%s: "
+		    "enclosure_logical_id(0x%016llx), slot(%d)\n", ds,
+		    (unsigned long long) sas_device->enclosure_logical_id,
+		    sas_device->slot);
+
+		if (!ssp_target)
+			scsih_display_sata_capabilities(ioc, sas_device, sdev);
+	}
+
+	scsih_change_queue_depth(sdev, qdepth);
+
+	if (ssp_target)
+		sas_read_port_mode_page(sdev);
+	return 0;
+}
+
+/**
+ * scsih_bios_param - fetch head, sector, cylinder info for a disk
+ * @sdev: scsi device struct
+ * @bdev: pointer to block device context
+ * @capacity: device size (in 512 byte sectors)
+ * @params: three element array to place output:
+ *              params[0] number of heads (max 255)
+ *              params[1] number of sectors (max 63)
+ *              params[2] number of cylinders
+ *
+ * Returns 0 always.
+ */
+static int
+scsih_bios_param(struct scsi_device *sdev, struct block_device *bdev,
+    sector_t capacity, int params[])
+{
+	int		heads;
+	int		sectors;
+	sector_t	cylinders;
+	ulong 		dummy;
+
+	heads = 64;
+	sectors = 32;
+
+	dummy = heads * sectors;
+	cylinders = capacity;
+	sector_div(cylinders, dummy);
+
+	/*
+	 * Handle extended translation size for logical drives
+	 * > 1Gb
+	 */
+	if ((ulong)capacity >= 0x200000) {
+		heads = 255;
+		sectors = 63;
+		dummy = heads * sectors;
+		cylinders = capacity;
+		sector_div(cylinders, dummy);
+	}
+
+	/* return result */
+	params[0] = heads;
+	params[1] = sectors;
+	params[2] = cylinders;
+
+	return 0;
+}
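+
+/*
+ * Worked example (hypothetical capacity): a disk of 8388608 512-byte
+ * sectors (4 GB) is >= 0x200000 sectors, so the extended 255 head /
+ * 63 sector translation is used and cylinders = 8388608 / (255 * 63) = 522.
+ */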
+
+/**
+ * _scsih_response_code - translation of device response code
+ * @ioc: per adapter object
+ * @response_code: response code returned by the device
+ *
+ * Return nothing.
+ */
+static void
+_scsih_response_code(struct MPT2SAS_ADAPTER *ioc, u8 response_code)
+{
+	char *desc;
+
+	switch (response_code) {
+	case MPI2_SCSITASKMGMT_RSP_TM_COMPLETE:
+		desc = "task management request completed";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_INVALID_FRAME:
+		desc = "invalid frame";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_TM_NOT_SUPPORTED:
+		desc = "task management request not supported";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_TM_FAILED:
+		desc = "task management request failed";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED:
+		desc = "task management request succeeded";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_TM_INVALID_LUN:
+		desc = "invalid lun";
+		break;
+	case 0xA:
+		desc = "overlapped tag attempted";
+		break;
+	case MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC:
+		desc = "task queued, however not sent to target";
+		break;
+	default:
+		desc = "unknown";
+		break;
+	}
+	printk(MPT2SAS_WARN_FMT "response_code(0x%01x): %s\n",
+		ioc->name, response_code, desc);
+}
+
+/**
+ * scsih_tm_done - tm completion routine
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ * Context: none.
+ *
+ * The callback handler when using scsih_issue_tm.
+ *
+ * Return nothing.
+ */
+static void
+scsih_tm_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+
+	if (ioc->tm_cmds.status == MPT2_CMD_NOT_USED)
+		return;
+	if (ioc->tm_cmds.smid != smid)
+		return;
+	ioc->tm_cmds.status |= MPT2_CMD_COMPLETE;
+	mpi_reply =  mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (mpi_reply) {
+		memcpy(ioc->tm_cmds.reply, mpi_reply, mpi_reply->MsgLength*4);
+		ioc->tm_cmds.status |= MPT2_CMD_REPLY_VALID;
+	}
+	ioc->tm_cmds.status &= ~MPT2_CMD_PENDING;
+	complete(&ioc->tm_cmds.done);
+}
+
+/**
+ * mpt2sas_scsih_set_tm_flag - set per target tm_busy
+ * @ioc: per adapter object
+ * @handle: device handle
+ *
+ * During a task management request, we need to freeze the device queue.
+ */
+void
+mpt2sas_scsih_set_tm_flag(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct scsi_device *sdev;
+	u8 skip = 0;
+
+	shost_for_each_device(sdev, ioc->shost) {
+		if (skip)
+			continue;
+		sas_device_priv_data = sdev->hostdata;
+		if (!sas_device_priv_data)
+			continue;
+		if (sas_device_priv_data->sas_target->handle == handle) {
+			sas_device_priv_data->sas_target->tm_busy = 1;
+			skip = 1;
+			ioc->ignore_loginfos = 1;
+		}
+	}
+}
+
+/**
+ * mpt2sas_scsih_clear_tm_flag - clear per target tm_busy
+ * @ioc: per adapter object
+ * @handle: device handle
+ *
+ * The device queue is frozen for the duration of a task management
+ * request; this clears the flag once the request completes.
+ */
+void
+mpt2sas_scsih_clear_tm_flag(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct scsi_device *sdev;
+	u8 skip = 0;
+
+	shost_for_each_device(sdev, ioc->shost) {
+		if (skip)
+			continue;
+		sas_device_priv_data = sdev->hostdata;
+		if (!sas_device_priv_data)
+			continue;
+		if (sas_device_priv_data->sas_target->handle == handle) {
+			sas_device_priv_data->sas_target->tm_busy = 0;
+			skip = 1;
+			ioc->ignore_loginfos = 0;
+		}
+	}
+}
+
+/**
+ * mpt2sas_scsih_issue_tm - main routine for sending tm requests
+ * @ioc: per adapter struct
+ * @handle: device handle
+ * @lun: lun number
+ * @type: MPI2_SCSITASKMGMT_TASKTYPE_XXX (defined in mpi2_init.h)
+ * @smid_task: smid assigned to the task
+ * @timeout: timeout in seconds
+ * Context: The calling function needs to acquire the tm_cmds.mutex
+ *
+ * A generic API for sending task management requests to firmware.
+ *
+ * The ioc->tm_cmds.status flag should be MPT2_CMD_NOT_USED before calling
+ * this API.
+ *
+ * The callback index is set inside `ioc->tm_cb_idx`.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_scsih_issue_tm(struct MPT2SAS_ADAPTER *ioc, u16 handle, uint lun,
+    u8 type, u16 smid_task, ulong timeout)
+{
+	Mpi2SCSITaskManagementRequest_t *mpi_request;
+	Mpi2SCSITaskManagementReply_t *mpi_reply;
+	u16 smid = 0;
+	u32 ioc_state;
+	unsigned long timeleft;
+	u8 VF_ID = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->tm_cmds.status != MPT2_CMD_NOT_USED ||
+	    ioc->shost_recovery) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		printk(MPT2SAS_INFO_FMT "%s: host reset in progress!\n",
+		    ioc->name, __func__);
+		return;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 0);
+	if (ioc_state & MPI2_DOORBELL_USED) {
+		dhsprintk(ioc, printk(MPT2SAS_DEBUG_FMT "unexpected doorbell "
+		    "active!\n", ioc->name));
+		goto issue_host_reset;
+	}
+
+	if ((ioc_state & MPI2_IOC_STATE_MASK) == MPI2_IOC_STATE_FAULT) {
+		mpt2sas_base_fault_info(ioc, ioc_state &
+		    MPI2_DOORBELL_DATA_MASK);
+		goto issue_host_reset;
+	}
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->tm_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		return;
+	}
+
+	dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "sending tm: handle(0x%04x),"
+	    " task_type(0x%02x), smid(%d)\n", ioc->name, handle, type, smid));
+	ioc->tm_cmds.status = MPT2_CMD_PENDING;
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->tm_cmds.smid = smid;
+	memset(mpi_request, 0, sizeof(Mpi2SCSITaskManagementRequest_t));
+	mpi_request->Function = MPI2_FUNCTION_SCSI_TASK_MGMT;
+	mpi_request->DevHandle = cpu_to_le16(handle);
+	mpi_request->TaskType = type;
+	mpi_request->TaskMID = cpu_to_le16(smid_task);
+	int_to_scsilun(lun, (struct scsi_lun *)mpi_request->LUN);
+	mpt2sas_scsih_set_tm_flag(ioc, handle);
+	mpt2sas_base_put_smid_hi_priority(ioc, smid, VF_ID);
+	timeleft = wait_for_completion_timeout(&ioc->tm_cmds.done, timeout*HZ);
+	mpt2sas_scsih_clear_tm_flag(ioc, handle);
+	if (!(ioc->tm_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2SCSITaskManagementRequest_t)/4);
+		if (!(ioc->tm_cmds.status & MPT2_CMD_RESET))
+			goto issue_host_reset;
+	}
+
+	if (ioc->tm_cmds.status & MPT2_CMD_REPLY_VALID) {
+		mpi_reply = ioc->tm_cmds.reply;
+		dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "complete tm: "
+		    "ioc_status(0x%04x), loginfo(0x%08x), term_count(0x%08x)\n",
+		    ioc->name, le16_to_cpu(mpi_reply->IOCStatus),
+		    le32_to_cpu(mpi_reply->IOCLogInfo),
+		    le32_to_cpu(mpi_reply->TerminationCount)));
+		if (ioc->logging_level & MPT_DEBUG_TM)
+			_scsih_response_code(ioc, mpi_reply->ResponseCode);
+	}
+	return;
+ issue_host_reset:
+	mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP, FORCE_BIG_HAMMER);
+}
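+
+/*
+ * Typical calling sequence (a sketch, mirroring scsih_abort() below):
+ *
+ *	mutex_lock(&ioc->tm_cmds.mutex);
+ *	mpt2sas_scsih_issue_tm(ioc, handle, lun,
+ *	    MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30);
+ *	... verify the command actually completed ...
+ *	ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+ *	mutex_unlock(&ioc->tm_cmds.mutex);
+ */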
+
+/**
+ * scsih_abort - eh threads main abort routine
+ * @scmd: pointer to scsi command object
+ *
+ * Returns SUCCESS if command aborted else FAILED
+ */
+static int
+scsih_abort(struct scsi_cmnd *scmd)
+{
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	u16 smid;
+	u16 handle;
+	int r;
+	struct scsi_cmnd *scmd_lookup;
+
+	printk(MPT2SAS_INFO_FMT "attempting task abort! scmd(%p)\n",
+	    ioc->name, scmd);
+	scsi_print_command(scmd);
+
+	sas_device_priv_data = scmd->device->hostdata;
+	if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
+		printk(MPT2SAS_INFO_FMT "device been deleted! scmd(%p)\n",
+		    ioc->name, scmd);
+		scmd->result = DID_NO_CONNECT << 16;
+		scmd->scsi_done(scmd);
+		r = SUCCESS;
+		goto out;
+	}
+
+	/* search for the command */
+	smid = _scsih_scsi_lookup_find_by_scmd(ioc, scmd);
+	if (!smid) {
+		scmd->result = DID_RESET << 16;
+		r = SUCCESS;
+		goto out;
+	}
+
+	/* for hidden raid components and volumes this is not supported */
+	if (sas_device_priv_data->sas_target->flags &
+	    MPT_TARGET_FLAGS_RAID_COMPONENT ||
+	    sas_device_priv_data->sas_target->flags & MPT_TARGET_FLAGS_VOLUME) {
+		scmd->result = DID_RESET << 16;
+		r = FAILED;
+		goto out;
+	}
+
+	mutex_lock(&ioc->tm_cmds.mutex);
+	handle = sas_device_priv_data->sas_target->handle;
+	mpt2sas_scsih_issue_tm(ioc, handle, sas_device_priv_data->lun,
+	    MPI2_SCSITASKMGMT_TASKTYPE_ABORT_TASK, smid, 30);
+
+	/* sanity check - see whether command actually completed */
+	scmd_lookup = _scsih_scsi_lookup_get(ioc, smid);
+	if (scmd_lookup && (scmd_lookup->serial_number == scmd->serial_number))
+		r = FAILED;
+	else
+		r = SUCCESS;
+	ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->tm_cmds.mutex);
+
+ out:
+	printk(MPT2SAS_INFO_FMT "task abort: %s scmd(%p)\n",
+	    ioc->name, ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
+	return r;
+}
+
+
+/**
+ * scsih_dev_reset - eh threads main device reset routine
+ * @scmd: pointer to scsi command object
+ *
+ * Returns SUCCESS if the target was reset else FAILED
+ */
+static int
+scsih_dev_reset(struct scsi_cmnd *scmd)
+{
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	u16	handle;
+	int r;
+
+	printk(MPT2SAS_INFO_FMT "attempting target reset! scmd(%p)\n",
+	    ioc->name, scmd);
+	scsi_print_command(scmd);
+
+	sas_device_priv_data = scmd->device->hostdata;
+	if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
+		printk(MPT2SAS_INFO_FMT "device been deleted! scmd(%p)\n",
+		    ioc->name, scmd);
+		scmd->result = DID_NO_CONNECT << 16;
+		scmd->scsi_done(scmd);
+		r = SUCCESS;
+		goto out;
+	}
+
+	/* for hidden raid components obtain the volume_handle */
+	handle = 0;
+	if (sas_device_priv_data->sas_target->flags &
+	    MPT_TARGET_FLAGS_RAID_COMPONENT) {
+		spin_lock_irqsave(&ioc->sas_device_lock, flags);
+		sas_device = _scsih_sas_device_find_by_handle(ioc,
+		   sas_device_priv_data->sas_target->handle);
+		if (sas_device)
+			handle = sas_device->volume_handle;
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	} else
+		handle = sas_device_priv_data->sas_target->handle;
+
+	if (!handle) {
+		scmd->result = DID_RESET << 16;
+		r = FAILED;
+		goto out;
+	}
+
+	mutex_lock(&ioc->tm_cmds.mutex);
+	mpt2sas_scsih_issue_tm(ioc, handle, 0,
+	    MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 30);
+
+	/*
+	 *  sanity check - see whether all commands to this target have
+	 *  been completed
+	 */
+	if (_scsih_scsi_lookup_find_by_target(ioc, scmd->device->id,
+	    scmd->device->channel))
+		r = FAILED;
+	else
+		r = SUCCESS;
+	ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->tm_cmds.mutex);
+
+ out:
+	printk(MPT2SAS_INFO_FMT "target reset: %s scmd(%p)\n",
+	    ioc->name, ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
+	return r;
+}
+
+/**
+ * scsih_host_reset - eh threads main host reset routine
+ * @scmd: pointer to scsi command object
+ *
+ * Returns SUCCESS if the host was reset else FAILED
+ */
+static int
+scsih_host_reset(struct scsi_cmnd *scmd)
+{
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
+	int r, retval;
+
+	printk(MPT2SAS_INFO_FMT "attempting host reset! scmd(%p)\n",
+	    ioc->name, scmd);
+	scsi_print_command(scmd);
+
+	retval = mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+	    FORCE_BIG_HAMMER);
+	r = (retval < 0) ? FAILED : SUCCESS;
+	printk(MPT2SAS_INFO_FMT "host reset: %s scmd(%p)\n",
+	    ioc->name, ((r == SUCCESS) ? "SUCCESS" : "FAILED"), scmd);
+
+	return r;
+}
+
+/**
+ * _scsih_fw_event_add - insert and queue up fw_event
+ * @ioc: per adapter object
+ * @fw_event: object describing the event
+ * Context: This function will acquire ioc->fw_event_lock.
+ *
+ * This adds the firmware event object into link list, then queues it up to
+ * be processed from user context.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_fw_event_add(struct MPT2SAS_ADAPTER *ioc, struct fw_event_work *fw_event)
+{
+	unsigned long flags;
+
+	if (ioc->firmware_event_thread == NULL)
+		return;
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	list_add_tail(&fw_event->list, &ioc->fw_event_list);
+	INIT_DELAYED_WORK(&fw_event->work, _firmware_event_work);
+	queue_delayed_work(ioc->firmware_event_thread, &fw_event->work, 1);
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_fw_event_free - delete fw_event
+ * @ioc: per adapter object
+ * @fw_event: object describing the event
+ * Context: This function will acquire ioc->fw_event_lock.
+ *
+ * This removes firmware event object from link list, frees associated memory.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_fw_event_free(struct MPT2SAS_ADAPTER *ioc, struct fw_event_work
+    *fw_event)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	list_del(&fw_event->list);
+	kfree(fw_event->event_data);
+	kfree(fw_event);
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_fw_event_requeue - requeue an event
+ * @ioc: per adapter object
+ * @fw_event: object describing the event
+ * @delay: delay, in jiffies, before the work is run
+ * Context: This function will acquire ioc->fw_event_lock.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_fw_event_requeue(struct MPT2SAS_ADAPTER *ioc, struct fw_event_work
+    *fw_event, unsigned long delay)
+{
+	unsigned long flags;
+
+	if (ioc->firmware_event_thread == NULL)
+		return;
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	queue_delayed_work(ioc->firmware_event_thread, &fw_event->work, delay);
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_fw_event_off - set flag preventing firmware event handling
+ * @ioc: per adapter object
+ *
+ * Used to prevent handling of firmware events during adapter reset
+ * or driver unload.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_fw_event_off(struct MPT2SAS_ADAPTER *ioc)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	ioc->fw_events_off = 1;
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_fw_event_on - clear flag allowing firmware event handling
+ * @ioc: per adapter object
+ *
+ * Returns nothing.
+ */
+static void
+_scsih_fw_event_on(struct MPT2SAS_ADAPTER *ioc)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	ioc->fw_events_off = 0;
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_ublock_io_device - set the device state to SDEV_RUNNING
+ * @ioc: per adapter object
+ * @handle: device handle
+ *
+ * During device pull we need to appropriately set the sdev state.
+ */
+static void
+_scsih_ublock_io_device(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, ioc->shost) {
+		sas_device_priv_data = sdev->hostdata;
+		if (!sas_device_priv_data)
+			continue;
+		if (!sas_device_priv_data->block)
+			continue;
+		if (sas_device_priv_data->sas_target->handle == handle) {
+			dewtprintk(ioc, sdev_printk(KERN_INFO, sdev,
+			    MPT2SAS_INFO_FMT "SDEV_RUNNING: "
+			    "handle(0x%04x)\n", ioc->name, handle));
+			sas_device_priv_data->block = 0;
+			scsi_device_set_state(sdev, SDEV_RUNNING);
+		}
+	}
+}
+
+/**
+ * _scsih_block_io_device - set the device state to SDEV_BLOCK
+ * @ioc: per adapter object
+ * @handle: device handle
+ *
+ * During device pull we need to appropriately set the sdev state.
+ */
+static void
+_scsih_block_io_device(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct scsi_device *sdev;
+
+	shost_for_each_device(sdev, ioc->shost) {
+		sas_device_priv_data = sdev->hostdata;
+		if (!sas_device_priv_data)
+			continue;
+		if (sas_device_priv_data->block)
+			continue;
+		if (sas_device_priv_data->sas_target->handle == handle) {
+			dewtprintk(ioc, sdev_printk(KERN_INFO, sdev,
+			    MPT2SAS_INFO_FMT "SDEV_BLOCK: "
+			    "handle(0x%04x)\n", ioc->name, handle));
+			sas_device_priv_data->block = 1;
+			scsi_device_set_state(sdev, SDEV_BLOCK);
+		}
+	}
+}
+
+/**
+ * _scsih_block_io_to_children_attached_to_ex
+ * @ioc: per adapter object
+ * @sas_expander: the sas_node object
+ *
+ * This routine sets the sdev state to SDEV_BLOCK for all devices
+ * attached to this expander.  This function is called when an expander
+ * is pulled.
+ */
+static void
+_scsih_block_io_to_children_attached_to_ex(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_node *sas_expander)
+{
+	struct _sas_port *mpt2sas_port;
+	struct _sas_device *sas_device;
+	struct _sas_node *expander_sibling;
+	unsigned long flags;
+
+	if (!sas_expander)
+		return;
+
+	list_for_each_entry(mpt2sas_port,
+	   &sas_expander->sas_port_list, port_list) {
+		if (mpt2sas_port->remote_identify.device_type ==
+		    SAS_END_DEVICE) {
+			spin_lock_irqsave(&ioc->sas_device_lock, flags);
+			sas_device =
+			    mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+			   mpt2sas_port->remote_identify.sas_address);
+			spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+			if (!sas_device)
+				continue;
+			_scsih_block_io_device(ioc, sas_device->handle);
+		}
+	}
+
+	list_for_each_entry(mpt2sas_port,
+	   &sas_expander->sas_port_list, port_list) {
+
+		if (mpt2sas_port->remote_identify.device_type ==
+		    MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER ||
+		    mpt2sas_port->remote_identify.device_type ==
+		    MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER) {
+
+			spin_lock_irqsave(&ioc->sas_node_lock, flags);
+			expander_sibling =
+			    mpt2sas_scsih_expander_find_by_sas_address(
+			    ioc, mpt2sas_port->remote_identify.sas_address);
+			spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+			_scsih_block_io_to_children_attached_to_ex(ioc,
+			    expander_sibling);
+		}
+	}
+}
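+
+/*
+ * Note: the second loop above recurses into any expanders attached below
+ * this one, so every end device underneath a pulled expander gets blocked,
+ * however deep the topology.
+ */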
+
+/**
+ * _scsih_block_io_to_children_attached_directly
+ * @ioc: per adapter object
+ * @event_data: topology change event data
+ *
+ * This routine sets the sdev state to SDEV_BLOCK for all directly
+ * attached devices during device pull.
+ */
+static void
+_scsih_block_io_to_children_attached_directly(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataSasTopologyChangeList_t *event_data)
+{
+	int i;
+	u16 handle;
+	u16 reason_code;
+	u8 phy_number;
+
+	for (i = 0; i < event_data->NumEntries; i++) {
+		handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
+		if (!handle)
+			continue;
+		phy_number = event_data->StartPhyNum + i;
+		reason_code = event_data->PHY[i].PhyStatus &
+		    MPI2_EVENT_SAS_TOPO_RC_MASK;
+		if (reason_code == MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING)
+			_scsih_block_io_device(ioc, handle);
+	}
+}
+
+/**
+ * _scsih_check_topo_delete_events - sanity check on topo events
+ * @ioc: per adapter object
+ * @event_data: the event data payload
+ *
+ * This routine was added to better handle cable breaks.
+ *
+ * This handles the case where the driver receives multiple expander
+ * add and delete events in a single shot.  When there is a delete event
+ * the routine will void any pending add events waiting in the event queue.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_check_topo_delete_events(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataSasTopologyChangeList_t *event_data)
+{
+	struct fw_event_work *fw_event;
+	Mpi2EventDataSasTopologyChangeList_t *local_event_data;
+	u16 expander_handle;
+	struct _sas_node *sas_expander;
+	unsigned long flags;
+
+	expander_handle = le16_to_cpu(event_data->ExpanderDevHandle);
+	if (expander_handle < ioc->sas_hba.num_phys) {
+		_scsih_block_io_to_children_attached_directly(ioc, event_data);
+		return;
+	}
+
+	if (event_data->ExpStatus == MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING
+	 || event_data->ExpStatus == MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING) {
+		spin_lock_irqsave(&ioc->sas_node_lock, flags);
+		sas_expander = mpt2sas_scsih_expander_find_by_handle(ioc,
+		    expander_handle);
+		spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+		_scsih_block_io_to_children_attached_to_ex(ioc, sas_expander);
+	} else if (event_data->ExpStatus == MPI2_EVENT_SAS_TOPO_ES_RESPONDING)
+		_scsih_block_io_to_children_attached_directly(ioc, event_data);
+
+	if (event_data->ExpStatus != MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING)
+		return;
+
+	/* mark ignore flag for pending events */
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	list_for_each_entry(fw_event, &ioc->fw_event_list, list) {
+		if (fw_event->event != MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST ||
+		    fw_event->ignore)
+			continue;
+		local_event_data = fw_event->event_data;
+		if (local_event_data->ExpStatus ==
+		    MPI2_EVENT_SAS_TOPO_ES_ADDED ||
+		    local_event_data->ExpStatus ==
+		    MPI2_EVENT_SAS_TOPO_ES_RESPONDING) {
+			if (le16_to_cpu(local_event_data->ExpanderDevHandle) ==
+			    expander_handle) {
+				dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+				    "setting ignoring flag\n", ioc->name));
+				fw_event->ignore = 1;
+			}
+		}
+	}
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+}
+
+/**
+ * _scsih_queue_rescan - queue a topology rescan from user context
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+static void
+_scsih_queue_rescan(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct fw_event_work *fw_event;
+
+	if (ioc->wait_for_port_enable_to_complete)
+		return;
+	fw_event = kzalloc(sizeof(struct fw_event_work), GFP_ATOMIC);
+	if (!fw_event)
+		return;
+	fw_event->event = MPT2SAS_RESCAN_AFTER_HOST_RESET;
+	fw_event->ioc = ioc;
+	_scsih_fw_event_add(ioc, fw_event);
+}
+
+/**
+ * _scsih_flush_running_cmds - completing outstanding commands.
+ * @ioc: per adapter object
+ *
+ * Flushes out all pending scmds following a host reset, where all
+ * outstanding IO has been dropped on the floor; each command is completed
+ * back to the midlayer with DID_RESET.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_flush_running_cmds(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct scsi_cmnd *scmd;
+	u16 smid;
+	u16 count = 0;
+
+	for (smid = 1; smid <= ioc->request_depth; smid++) {
+		scmd = _scsih_scsi_lookup_getclear(ioc, smid);
+		if (!scmd)
+			continue;
+		count++;
+		mpt2sas_base_free_smid(ioc, smid);
+		scsi_dma_unmap(scmd);
+		scmd->result = DID_RESET << 16;
+		scmd->scsi_done(scmd);
+	}
+	dtmprintk(ioc, printk(MPT2SAS_INFO_FMT "completing %d cmds\n",
+	    ioc->name, count));
+}
+
+/**
+ * mpt2sas_scsih_reset_handler - reset callback handler (for scsih)
+ * @ioc: per adapter object
+ * @reset_phase: phase
+ *
+ * The handler for doing any required cleanup or initialization.
+ *
+ * The reset phase can be MPT2_IOC_PRE_RESET, MPT2_IOC_AFTER_RESET,
+ * MPT2_IOC_DONE_RESET
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_scsih_reset_handler(struct MPT2SAS_ADAPTER *ioc, int reset_phase)
+{
+	switch (reset_phase) {
+	case MPT2_IOC_PRE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_PRE_RESET\n", ioc->name, __func__));
+		_scsih_fw_event_off(ioc);
+		break;
+	case MPT2_IOC_AFTER_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_AFTER_RESET\n", ioc->name, __func__));
+		if (ioc->tm_cmds.status & MPT2_CMD_PENDING) {
+			ioc->tm_cmds.status |= MPT2_CMD_RESET;
+			mpt2sas_base_free_smid(ioc, ioc->tm_cmds.smid);
+			complete(&ioc->tm_cmds.done);
+		}
+		_scsih_fw_event_on(ioc);
+		_scsih_flush_running_cmds(ioc);
+		break;
+	case MPT2_IOC_DONE_RESET:
+		dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: "
+		    "MPT2_IOC_DONE_RESET\n", ioc->name, __func__));
+		_scsih_queue_rescan(ioc);
+		break;
+	}
+}
+
+/**
+ * scsih_qcmd - main scsi request entry point
+ * @scmd: pointer to scsi command object
+ * @done: function pointer to be invoked on completion
+ *
+ * The callback index is set inside `ioc->scsi_io_cb_idx`.
+ *
+ * Returns 0 on success.  If there's a failure, return either:
+ * SCSI_MLQUEUE_DEVICE_BUSY if the device queue is full, or
+ * SCSI_MLQUEUE_HOST_BUSY if the entire host queue is full
+ */
+static int
+scsih_qcmd(struct scsi_cmnd *scmd, void (*done)(struct scsi_cmnd *))
+{
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(scmd->device->host);
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	Mpi2SCSIIORequest_t *mpi_request;
+	u32 mpi_control;
+	u16 smid;
+	unsigned long flags;
+
+	scmd->scsi_done = done;
+	sas_device_priv_data = scmd->device->hostdata;
+	if (!sas_device_priv_data) {
+		scmd->result = DID_NO_CONNECT << 16;
+		scmd->scsi_done(scmd);
+		return 0;
+	}
+
+	sas_target_priv_data = sas_device_priv_data->sas_target;
+	if (!sas_target_priv_data || sas_target_priv_data->handle ==
+	    MPT2SAS_INVALID_DEVICE_HANDLE || sas_target_priv_data->deleted) {
+		scmd->result = DID_NO_CONNECT << 16;
+		scmd->scsi_done(scmd);
+		return 0;
+	}
+
+	/* see if we are busy with task management stuff */
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (sas_target_priv_data->tm_busy ||
+	    ioc->shost_recovery || ioc->ioc_link_reset_in_progress) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		return SCSI_MLQUEUE_HOST_BUSY;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	if (scmd->sc_data_direction == DMA_FROM_DEVICE)
+		mpi_control = MPI2_SCSIIO_CONTROL_READ;
+	else if (scmd->sc_data_direction == DMA_TO_DEVICE)
+		mpi_control = MPI2_SCSIIO_CONTROL_WRITE;
+	else
+		mpi_control = MPI2_SCSIIO_CONTROL_NODATATRANSFER;
+
+	/* set tags */
+	if (!(sas_device_priv_data->flags & MPT_DEVICE_FLAGS_INIT)) {
+		if (scmd->device->tagged_supported) {
+			if (scmd->device->ordered_tags)
+				mpi_control |= MPI2_SCSIIO_CONTROL_ORDEREDQ;
+			else
+				mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
+		} else
+			/*
+			 * MPI Revision I (UNIT = 0xA) removed
+			 * MPI2_SCSIIO_CONTROL_UNTAGGED, which would
+			 * otherwise be used here.
+			 */
+			mpi_control |= (0x500);
+
+	} else
+		mpi_control |= MPI2_SCSIIO_CONTROL_SIMPLEQ;
+
+	if ((sas_device_priv_data->flags & MPT_DEVICE_TLR_ON))
+		mpi_control |= MPI2_SCSIIO_CONTROL_TLR_ON;
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->scsi_io_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		goto out;
+	}
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	memset(mpi_request, 0, sizeof(Mpi2SCSIIORequest_t));
+	if (sas_device_priv_data->sas_target->flags &
+	    MPT_TARGET_FLAGS_RAID_COMPONENT)
+		mpi_request->Function = MPI2_FUNCTION_RAID_SCSI_IO_PASSTHROUGH;
+	else
+		mpi_request->Function = MPI2_FUNCTION_SCSI_IO_REQUEST;
+	mpi_request->DevHandle =
+	    cpu_to_le16(sas_device_priv_data->sas_target->handle);
+	mpi_request->DataLength = cpu_to_le32(scsi_bufflen(scmd));
+	mpi_request->Control = cpu_to_le32(mpi_control);
+	mpi_request->IoFlags = cpu_to_le16(scmd->cmd_len);
+	mpi_request->MsgFlags = MPI2_SCSIIO_MSGFLAGS_SYSTEM_SENSE_ADDR;
+	mpi_request->SenseBufferLength = SCSI_SENSE_BUFFERSIZE;
+	mpi_request->SenseBufferLowAddress =
+	    (u32)mpt2sas_base_get_sense_buffer_dma(ioc, smid);
+	mpi_request->SGLOffset0 = offsetof(Mpi2SCSIIORequest_t, SGL) / 4;
+	mpi_request->SGLFlags = cpu_to_le16(MPI2_SCSIIO_SGLFLAGS_TYPE_MPI +
+	    MPI2_SCSIIO_SGLFLAGS_SYSTEM_ADDR);
+
+	int_to_scsilun(sas_device_priv_data->lun, (struct scsi_lun *)
+	    mpi_request->LUN);
+	memcpy(mpi_request->CDB.CDB32, scmd->cmnd, scmd->cmd_len);
+
+	if (!mpi_request->DataLength) {
+		mpt2sas_base_build_zero_len_sge(ioc, &mpi_request->SGL);
+	} else {
+		if (_scsih_build_scatter_gather(ioc, scmd, smid)) {
+			mpt2sas_base_free_smid(ioc, smid);
+			goto out;
+		}
+	}
+
+	_scsih_scsi_lookup_set(ioc, smid, scmd);
+	mpt2sas_base_put_smid_scsi_io(ioc, smid, 0,
+	    sas_device_priv_data->sas_target->handle);
+	return 0;
+
+ out:
+	return SCSI_MLQUEUE_HOST_BUSY;
+}
+
+/**
+ * _scsih_normalize_sense - normalize descriptor and fixed format sense data
+ * @sense_buffer: sense data returned by target
+ * @data: normalized skey/asc/ascq
+ *
+ * Return nothing.
+ */
+static void
+_scsih_normalize_sense(char *sense_buffer, struct sense_info *data)
+{
+	if ((sense_buffer[0] & 0x7F) >= 0x72) {
+		/* descriptor format */
+		data->skey = sense_buffer[1] & 0x0F;
+		data->asc = sense_buffer[2];
+		data->ascq = sense_buffer[3];
+	} else {
+		/* fixed format */
+		data->skey = sense_buffer[2] & 0x0F;
+		data->asc = sense_buffer[12];
+		data->ascq = sense_buffer[13];
+	}
+}
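+
+/*
+ * Example (fixed format, hypothetical bytes): a sense buffer starting
+ * 0x70 0x00 0x03 with byte 12 = 0x11 and byte 13 = 0x00 normalizes to
+ * skey = 0x3 (MEDIUM ERROR), asc = 0x11, ascq = 0x00.
+ */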
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_scsi_ioc_info - translate a non-successful SCSI_IO request
+ * @ioc: per adapter object
+ * @scmd: pointer to scsi command object
+ * @mpi_reply: reply mf payload returned from firmware
+ * @smid: system request message index
+ *
+ * scsi_status - SCSI Status code returned from target device
+ * scsi_state - state info associated with SCSI_IO determined by ioc
+ * ioc_status - ioc supplied status info
+ *
+ * Return nothing.
+ */
+static void
+_scsih_scsi_ioc_info(struct MPT2SAS_ADAPTER *ioc, struct scsi_cmnd *scmd,
+    Mpi2SCSIIOReply_t *mpi_reply, u16 smid)
+{
+	u32 response_info;
+	u8 *response_bytes;
+	u16 ioc_status = le16_to_cpu(mpi_reply->IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	u8 scsi_state = mpi_reply->SCSIState;
+	u8 scsi_status = mpi_reply->SCSIStatus;
+	char *desc_ioc_state = NULL;
+	char *desc_scsi_status = NULL;
+	char *desc_scsi_state = ioc->tmp_string;
+
+	switch (ioc_status) {
+	case MPI2_IOCSTATUS_SUCCESS:
+		desc_ioc_state = "success";
+		break;
+	case MPI2_IOCSTATUS_INVALID_FUNCTION:
+		desc_ioc_state = "invalid function";
+		break;
+	case MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR:
+		desc_ioc_state = "scsi recovered error";
+		break;
+	case MPI2_IOCSTATUS_SCSI_INVALID_DEVHANDLE:
+		desc_ioc_state = "scsi invalid dev handle";
+		break;
+	case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
+		desc_ioc_state = "scsi device not there";
+		break;
+	case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN:
+		desc_ioc_state = "scsi data overrun";
+		break;
+	case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN:
+		desc_ioc_state = "scsi data underrun";
+		break;
+	case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR:
+		desc_ioc_state = "scsi io data error";
+		break;
+	case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR:
+		desc_ioc_state = "scsi protocol error";
+		break;
+	case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED:
+		desc_ioc_state = "scsi task terminated";
+		break;
+	case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
+		desc_ioc_state = "scsi residual mismatch";
+		break;
+	case MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
+		desc_ioc_state = "scsi task mgmt failed";
+		break;
+	case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED:
+		desc_ioc_state = "scsi ioc terminated";
+		break;
+	case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED:
+		desc_ioc_state = "scsi ext terminated";
+		break;
+	default:
+		desc_ioc_state = "unknown";
+		break;
+	}
+
+	switch (scsi_status) {
+	case MPI2_SCSI_STATUS_GOOD:
+		desc_scsi_status = "good";
+		break;
+	case MPI2_SCSI_STATUS_CHECK_CONDITION:
+		desc_scsi_status = "check condition";
+		break;
+	case MPI2_SCSI_STATUS_CONDITION_MET:
+		desc_scsi_status = "condition met";
+		break;
+	case MPI2_SCSI_STATUS_BUSY:
+		desc_scsi_status = "busy";
+		break;
+	case MPI2_SCSI_STATUS_INTERMEDIATE:
+		desc_scsi_status = "intermediate";
+		break;
+	case MPI2_SCSI_STATUS_INTERMEDIATE_CONDMET:
+		desc_scsi_status = "intermediate condmet";
+		break;
+	case MPI2_SCSI_STATUS_RESERVATION_CONFLICT:
+		desc_scsi_status = "reservation conflict";
+		break;
+	case MPI2_SCSI_STATUS_COMMAND_TERMINATED:
+		desc_scsi_status = "command terminated";
+		break;
+	case MPI2_SCSI_STATUS_TASK_SET_FULL:
+		desc_scsi_status = "task set full";
+		break;
+	case MPI2_SCSI_STATUS_ACA_ACTIVE:
+		desc_scsi_status = "aca active";
+		break;
+	case MPI2_SCSI_STATUS_TASK_ABORTED:
+		desc_scsi_status = "task aborted";
+		break;
+	default:
+		desc_scsi_status = "unknown";
+		break;
+	}
+
+	desc_scsi_state[0] = '\0';
+	if (!scsi_state)
+		desc_scsi_state = " ";
+	if (scsi_state & MPI2_SCSI_STATE_RESPONSE_INFO_VALID)
+		strcat(desc_scsi_state, "response info ");
+	if (scsi_state & MPI2_SCSI_STATE_TERMINATED)
+		strcat(desc_scsi_state, "state terminated ");
+	if (scsi_state & MPI2_SCSI_STATE_NO_SCSI_STATUS)
+		strcat(desc_scsi_state, "no status ");
+	if (scsi_state & MPI2_SCSI_STATE_AUTOSENSE_FAILED)
+		strcat(desc_scsi_state, "autosense failed ");
+	if (scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID)
+		strcat(desc_scsi_state, "autosense valid ");
+
+	scsi_print_command(scmd);
+	printk(MPT2SAS_WARN_FMT "\tdev handle(0x%04x), "
+	    "ioc_status(%s)(0x%04x), smid(%d)\n", ioc->name,
+	    le16_to_cpu(mpi_reply->DevHandle), desc_ioc_state,
+		ioc_status, smid);
+	printk(MPT2SAS_WARN_FMT "\trequest_len(%d), underflow(%d), "
+	    "resid(%d)\n", ioc->name, scsi_bufflen(scmd), scmd->underflow,
+	    scsi_get_resid(scmd));
+	printk(MPT2SAS_WARN_FMT "\ttag(%d), transfer_count(%d), "
+	    "sc->result(0x%08x)\n", ioc->name, le16_to_cpu(mpi_reply->TaskTag),
+	    le32_to_cpu(mpi_reply->TransferCount), scmd->result);
+	printk(MPT2SAS_WARN_FMT "\tscsi_status(%s)(0x%02x), "
+	    "scsi_state(%s)(0x%02x)\n", ioc->name, desc_scsi_status,
+	    scsi_status, desc_scsi_state, scsi_state);
+
+	if (scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID) {
+		struct sense_info data;
+		_scsih_normalize_sense(scmd->sense_buffer, &data);
+		printk(MPT2SAS_WARN_FMT "\t[sense_key,asc,ascq]: "
+		    "[0x%02x,0x%02x,0x%02x]\n", ioc->name, data.skey,
+		    data.asc, data.ascq);
+	}
+
+	if (scsi_state & MPI2_SCSI_STATE_RESPONSE_INFO_VALID) {
+		response_info = le32_to_cpu(mpi_reply->ResponseInfo);
+		response_bytes = (u8 *)&response_info;
+		_scsih_response_code(ioc, response_bytes[3]);
+	}
+}
+#endif
+
+/**
+ * _scsih_smart_predicted_fault - illuminate Fault LED
+ * @ioc: per adapter object
+ * @handle: device handle
+ *
+ * Return nothing.
+ */
+static void
+_scsih_smart_predicted_fault(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	Mpi2SepReply_t mpi_reply;
+	Mpi2SepRequest_t mpi_request;
+	struct scsi_target *starget;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	Mpi2EventNotificationReply_t *event_reply;
+	Mpi2EventDataSasDeviceStatusChange_t *event_data;
+	struct _sas_device *sas_device;
+	ssize_t sz;
+	unsigned long flags;
+
+	/* only handle non-raid devices */
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	if (!sas_device) {
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		return;
+	}
+	starget = sas_device->starget;
+	sas_target_priv_data = starget->hostdata;
+
+	if ((sas_target_priv_data->flags & MPT_TARGET_FLAGS_RAID_COMPONENT) ||
+	   ((sas_target_priv_data->flags & MPT_TARGET_FLAGS_VOLUME))) {
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		return;
+	}
+	starget_printk(KERN_WARNING, starget, "predicted fault\n");
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	if (ioc->pdev->subsystem_vendor == PCI_VENDOR_ID_IBM) {
+		memset(&mpi_request, 0, sizeof(Mpi2SepRequest_t));
+		mpi_request.Function = MPI2_FUNCTION_SCSI_ENCLOSURE_PROCESSOR;
+		mpi_request.Action = MPI2_SEP_REQ_ACTION_WRITE_STATUS;
+		mpi_request.SlotStatus =
+		    MPI2_SEP_REQ_SLOTSTATUS_PREDICTED_FAULT;
+		mpi_request.DevHandle = cpu_to_le16(handle);
+		mpi_request.Flags = MPI2_SEP_REQ_FLAGS_DEVHANDLE_ADDRESS;
+		if ((mpt2sas_base_scsi_enclosure_processor(ioc, &mpi_reply,
+		    &mpi_request)) != 0) {
+			printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+			    ioc->name, __FILE__, __LINE__, __func__);
+			return;
+		}
+
+		if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo) {
+			dewtprintk(ioc, printk(MPT2SAS_INFO_FMT
+			    "enclosure_processor: ioc_status (0x%04x), "
+			    "loginfo(0x%08x)\n", ioc->name,
+			    le16_to_cpu(mpi_reply.IOCStatus),
+			    le32_to_cpu(mpi_reply.IOCLogInfo)));
+			return;
+		}
+	}
+
+	/* insert into event log */
+	sz = offsetof(Mpi2EventNotificationReply_t, EventData) +
+	     sizeof(Mpi2EventDataSasDeviceStatusChange_t);
+	event_reply = kzalloc(sz, GFP_KERNEL);
+	if (!event_reply) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	event_reply->Function = MPI2_FUNCTION_EVENT_NOTIFICATION;
+	event_reply->Event =
+	    cpu_to_le16(MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE);
+	event_reply->MsgLength = sz/4;
+	event_reply->EventDataLength =
+	    cpu_to_le16(sizeof(Mpi2EventDataSasDeviceStatusChange_t)/4);
+	event_data = (Mpi2EventDataSasDeviceStatusChange_t *)
+	    event_reply->EventData;
+	event_data->ReasonCode = MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA;
+	event_data->ASC = 0x5D;
+	event_data->DevHandle = cpu_to_le16(handle);
+	event_data->SASAddress = cpu_to_le64(sas_target_priv_data->sas_address);
+	mpt2sas_ctl_add_to_event_log(ioc, event_reply);
+	kfree(event_reply);
+}
+
+/**
+ * scsih_io_done - scsi request callback
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ *
+ * Callback handler when using scsih_qcmd.
+ *
+ * Return nothing.
+ */
+static void
+scsih_io_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID, u32 reply)
+{
+	Mpi2SCSIIORequest_t *mpi_request;
+	Mpi2SCSIIOReply_t *mpi_reply;
+	struct scsi_cmnd *scmd;
+	u16 ioc_status;
+	u32 xfer_cnt;
+	u8 scsi_state;
+	u8 scsi_status;
+	u32 log_info;
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	u32 response_code;
+
+	mpi_reply = mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	scmd = _scsih_scsi_lookup_getclear(ioc, smid);
+	if (scmd == NULL)
+		return;
+
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+
+	if (mpi_reply == NULL) {
+		scmd->result = DID_OK << 16;
+		goto out;
+	}
+
+	sas_device_priv_data = scmd->device->hostdata;
+	if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
+	     sas_device_priv_data->sas_target->deleted) {
+		scmd->result = DID_NO_CONNECT << 16;
+		goto out;
+	}
+
+	/* turning off TLR */
+	if (!sas_device_priv_data->tlr_snoop_check) {
+		sas_device_priv_data->tlr_snoop_check++;
+		if (sas_device_priv_data->flags & MPT_DEVICE_TLR_ON) {
+			response_code = (le32_to_cpu(mpi_reply->ResponseInfo)
+			    >> 24);
+			if (response_code ==
+			    MPI2_SCSITASKMGMT_RSP_INVALID_FRAME)
+				sas_device_priv_data->flags &=
+				    ~MPT_DEVICE_TLR_ON;
+		}
+	}
+
+	xfer_cnt = le32_to_cpu(mpi_reply->TransferCount);
+	scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt);
+	ioc_status = le16_to_cpu(mpi_reply->IOCStatus);
+	if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)
+		log_info =  le32_to_cpu(mpi_reply->IOCLogInfo);
+	else
+		log_info = 0;
+	ioc_status &= MPI2_IOCSTATUS_MASK;
+	scsi_state = mpi_reply->SCSIState;
+	scsi_status = mpi_reply->SCSIStatus;
+
+	if (ioc_status == MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN && xfer_cnt == 0 &&
+	    (scsi_status == MPI2_SCSI_STATUS_BUSY ||
+	     scsi_status == MPI2_SCSI_STATUS_RESERVATION_CONFLICT ||
+	     scsi_status == MPI2_SCSI_STATUS_TASK_SET_FULL)) {
+		ioc_status = MPI2_IOCSTATUS_SUCCESS;
+	}
+
+	if (scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID) {
+		struct sense_info data;
+		const void *sense_data = mpt2sas_base_get_sense_buffer(ioc,
+		    smid);
+		memcpy(scmd->sense_buffer, sense_data,
+		    le32_to_cpu(mpi_reply->SenseCount));
+		_scsih_normalize_sense(scmd->sense_buffer, &data);
+		/* failure prediction threshold exceeded */
+		if (data.asc == 0x5D)
+			_scsih_smart_predicted_fault(ioc,
+			    le16_to_cpu(mpi_reply->DevHandle));
+	}
+
+	switch (ioc_status) {
+	case MPI2_IOCSTATUS_BUSY:
+	case MPI2_IOCSTATUS_INSUFFICIENT_RESOURCES:
+		scmd->result = SAM_STAT_BUSY;
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_DEVICE_NOT_THERE:
+		scmd->result = DID_NO_CONNECT << 16;
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_IOC_TERMINATED:
+		if (sas_device_priv_data->block) {
+			scmd->result = (DID_BUS_BUSY << 16);
+			break;
+		}
+
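+		/* else fall through and treat it as a terminated command */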
+	case MPI2_IOCSTATUS_SCSI_TASK_TERMINATED:
+	case MPI2_IOCSTATUS_SCSI_EXT_TERMINATED:
+		scmd->result = DID_RESET << 16;
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_RESIDUAL_MISMATCH:
+		if ((xfer_cnt == 0) || (scmd->underflow > xfer_cnt))
+			scmd->result = DID_SOFT_ERROR << 16;
+		else
+			scmd->result = (DID_OK << 16) | scsi_status;
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_DATA_UNDERRUN:
+		scmd->result = (DID_OK << 16) | scsi_status;
+
+		if ((scsi_state & MPI2_SCSI_STATE_AUTOSENSE_VALID))
+			break;
+
+		if (xfer_cnt < scmd->underflow) {
+			if (scsi_status == SAM_STAT_BUSY)
+				scmd->result = SAM_STAT_BUSY;
+			else
+				scmd->result = DID_SOFT_ERROR << 16;
+		} else if (scsi_state & (MPI2_SCSI_STATE_AUTOSENSE_FAILED |
+		     MPI2_SCSI_STATE_NO_SCSI_STATUS))
+			scmd->result = DID_SOFT_ERROR << 16;
+		else if (scsi_state & MPI2_SCSI_STATE_TERMINATED)
+			scmd->result = DID_RESET << 16;
+		else if (!xfer_cnt && scmd->cmnd[0] == REPORT_LUNS) {
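+			/* fake a CHECK CONDITION with ILLEGAL REQUEST,
+			 * ASC 0x20 (invalid command operation code) sense */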
+			mpi_reply->SCSIState = MPI2_SCSI_STATE_AUTOSENSE_VALID;
+			mpi_reply->SCSIStatus = SAM_STAT_CHECK_CONDITION;
+			scmd->result = (DRIVER_SENSE << 24) |
+			    SAM_STAT_CHECK_CONDITION;
+			scmd->sense_buffer[0] = 0x70;
+			scmd->sense_buffer[2] = ILLEGAL_REQUEST;
+			scmd->sense_buffer[12] = 0x20;
+			scmd->sense_buffer[13] = 0;
+		}
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_DATA_OVERRUN:
+		scsi_set_resid(scmd, 0);
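+		/* fall through */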
+	case MPI2_IOCSTATUS_SCSI_RECOVERED_ERROR:
+	case MPI2_IOCSTATUS_SUCCESS:
+		scmd->result = (DID_OK << 16) | scsi_status;
+		if (scsi_state & (MPI2_SCSI_STATE_AUTOSENSE_FAILED |
+		     MPI2_SCSI_STATE_NO_SCSI_STATUS))
+			scmd->result = DID_SOFT_ERROR << 16;
+		else if (scsi_state & MPI2_SCSI_STATE_TERMINATED)
+			scmd->result = DID_RESET << 16;
+		break;
+
+	case MPI2_IOCSTATUS_SCSI_PROTOCOL_ERROR:
+	case MPI2_IOCSTATUS_INVALID_FUNCTION:
+	case MPI2_IOCSTATUS_INVALID_SGL:
+	case MPI2_IOCSTATUS_INTERNAL_ERROR:
+	case MPI2_IOCSTATUS_INVALID_FIELD:
+	case MPI2_IOCSTATUS_INVALID_STATE:
+	case MPI2_IOCSTATUS_SCSI_IO_DATA_ERROR:
+	case MPI2_IOCSTATUS_SCSI_TASK_MGMT_FAILED:
+	default:
+		scmd->result = DID_SOFT_ERROR << 16;
+		break;
+
+	}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (scmd->result && (ioc->logging_level & MPT_DEBUG_REPLY))
+		_scsih_scsi_ioc_info(ioc, scmd, mpi_reply, smid);
+#endif
+
+ out:
+	scsi_dma_unmap(scmd);
+	scmd->scsi_done(scmd);
+}
+
+/**
+ * _scsih_link_change - process phy link changes
+ * @ioc: per adapter object
+ * @handle: phy handle
+ * @attached_handle: valid for devices attached to link
+ * @phy_number: phy number
+ * @link_rate: new link rate
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_link_change(struct MPT2SAS_ADAPTER *ioc, u16 handle, u16 attached_handle,
+   u8 phy_number, u8 link_rate)
+{
+	mpt2sas_transport_update_phy_link_change(ioc, handle, attached_handle,
+	    phy_number, link_rate);
+}
+
+/**
+ * _scsih_sas_host_refresh - refreshing sas host object contents
+ * @ioc: per adapter object
+ * @update: update link information
+ * Context: user
+ *
+ * During port enable, fw will send topology events for every device. It's
+ * possible that the handles may change from the previous settings, so this
+ * code keeps the handles updated when they change.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_host_refresh(struct MPT2SAS_ADAPTER *ioc, u8 update)
+{
+	u16 sz;
+	u16 ioc_status;
+	int i;
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasIOUnitPage0_t *sas_iounit_pg0 = NULL;
+
+	dtmprintk(ioc, printk(MPT2SAS_INFO_FMT
+	    "updating handles for sas_host(0x%016llx)\n",
+	    ioc->name, (unsigned long long)ioc->sas_hba.sas_address));
+
+	sz = offsetof(Mpi2SasIOUnitPage0_t, PhyData) + (ioc->sas_hba.num_phys
+	    * sizeof(Mpi2SasIOUnit0PhyData_t));
+	sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
+	if (!sas_iounit_pg0) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+	if (!(mpt2sas_config_get_sas_iounit_pg0(ioc, &mpi_reply,
+	    sas_iounit_pg0, sz))) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status != MPI2_IOCSTATUS_SUCCESS)
+			goto out;
+		for (i = 0; i < ioc->sas_hba.num_phys ; i++) {
+			ioc->sas_hba.phy[i].handle =
+			    le16_to_cpu(sas_iounit_pg0->PhyData[i].
+				ControllerDevHandle);
+			if (update)
+				_scsih_link_change(ioc,
+				    ioc->sas_hba.phy[i].handle,
+				    le16_to_cpu(sas_iounit_pg0->PhyData[i].
+				    AttachedDevHandle), i,
+				    sas_iounit_pg0->PhyData[i].
+				    NegotiatedLinkRate >> 4);
+		}
+	}
+
+ out:
+	kfree(sas_iounit_pg0);
+}
+
+/**
+ * _scsih_sas_host_add - create sas host object
+ * @ioc: per adapter object
+ *
+ * Creating host side data object, stored in ioc->sas_hba
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_host_add(struct MPT2SAS_ADAPTER *ioc)
+{
+	int i;
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasIOUnitPage0_t *sas_iounit_pg0 = NULL;
+	Mpi2SasIOUnitPage1_t *sas_iounit_pg1 = NULL;
+	Mpi2SasPhyPage0_t phy_pg0;
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	Mpi2SasEnclosurePage0_t enclosure_pg0;
+	u16 ioc_status;
+	u16 sz;
+	u16 device_missing_delay;
+
+	mpt2sas_config_get_number_hba_phys(ioc, &ioc->sas_hba.num_phys);
+	if (!ioc->sas_hba.num_phys) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	/* sas_iounit page 0 */
+	sz = offsetof(Mpi2SasIOUnitPage0_t, PhyData) + (ioc->sas_hba.num_phys *
+	    sizeof(Mpi2SasIOUnit0PhyData_t));
+	sas_iounit_pg0 = kzalloc(sz, GFP_KERNEL);
+	if (!sas_iounit_pg0) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
+	if ((mpt2sas_config_get_sas_iounit_pg0(ioc, &mpi_reply,
+	    sas_iounit_pg0, sz))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+
+	/* sas_iounit page 1 */
+	sz = offsetof(Mpi2SasIOUnitPage1_t, PhyData) + (ioc->sas_hba.num_phys *
+	    sizeof(Mpi2SasIOUnit1PhyData_t));
+	sas_iounit_pg1 = kzalloc(sz, GFP_KERNEL);
+	if (!sas_iounit_pg1) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+	if ((mpt2sas_config_get_sas_iounit_pg1(ioc, &mpi_reply,
+	    sas_iounit_pg1, sz))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+
+	ioc->io_missing_delay =
+	    le16_to_cpu(sas_iounit_pg1->IODeviceMissingDelay);
+	device_missing_delay =
+	    le16_to_cpu(sas_iounit_pg1->ReportDeviceMissingDelay);
+	if (device_missing_delay & MPI2_SASIOUNIT1_REPORT_MISSING_UNIT_16)
+		ioc->device_missing_delay = (device_missing_delay &
+		    MPI2_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK) * 16;
+	else
+		ioc->device_missing_delay = device_missing_delay &
+		    MPI2_SASIOUNIT1_REPORT_MISSING_TIMEOUT_MASK;
+
+	ioc->sas_hba.parent_dev = &ioc->shost->shost_gendev;
+	ioc->sas_hba.phy = kcalloc(ioc->sas_hba.num_phys,
+	    sizeof(struct _sas_phy), GFP_KERNEL);
+	if (!ioc->sas_hba.phy) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+	for (i = 0; i < ioc->sas_hba.num_phys ; i++) {
+		if ((mpt2sas_config_get_phy_pg0(ioc, &mpi_reply, &phy_pg0,
+		    i))) {
+			printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+			    ioc->name, __FILE__, __LINE__, __func__);
+			goto out;
+		}
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+			printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+			    ioc->name, __FILE__, __LINE__, __func__);
+			goto out;
+		}
+		ioc->sas_hba.phy[i].handle =
+		    le16_to_cpu(sas_iounit_pg0->PhyData[i].ControllerDevHandle);
+		ioc->sas_hba.phy[i].phy_id = i;
+		mpt2sas_transport_add_host_phy(ioc, &ioc->sas_hba.phy[i],
+		    phy_pg0, ioc->sas_hba.parent_dev);
+	}
+	if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, &sas_device_pg0,
+	    MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, ioc->sas_hba.phy[0].handle))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out;
+	}
+	ioc->sas_hba.handle = le16_to_cpu(sas_device_pg0.DevHandle);
+	ioc->sas_hba.enclosure_handle =
+	    le16_to_cpu(sas_device_pg0.EnclosureHandle);
+	ioc->sas_hba.sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
+	printk(MPT2SAS_INFO_FMT "host_add: handle(0x%04x), "
+	    "sas_addr(0x%016llx), phys(%d)\n", ioc->name, ioc->sas_hba.handle,
+	    (unsigned long long) ioc->sas_hba.sas_address,
+	    ioc->sas_hba.num_phys);
+
+	if (ioc->sas_hba.enclosure_handle) {
+		if (!(mpt2sas_config_get_enclosure_pg0(ioc, &mpi_reply,
+		    &enclosure_pg0,
+		   MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE,
+		   ioc->sas_hba.enclosure_handle))) {
+			ioc->sas_hba.enclosure_logical_id =
+			    le64_to_cpu(enclosure_pg0.EnclosureLogicalID);
+		}
+	}
+
+ out:
+	kfree(sas_iounit_pg1);
+	kfree(sas_iounit_pg0);
+}
+
+/**
+ * _scsih_expander_add - creating expander object
+ * @ioc: per adapter object
+ * @handle: expander handle
+ *
+ * Creating expander object, stored in ioc->sas_expander_list.
+ *
+ * Return 0 for success, else error.
+ */
+static int
+_scsih_expander_add(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct _sas_node *sas_expander;
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2ExpanderPage0_t expander_pg0;
+	Mpi2ExpanderPage1_t expander_pg1;
+	Mpi2SasEnclosurePage0_t enclosure_pg0;
+	u32 ioc_status;
+	u16 parent_handle;
+	u64 sas_address;
+	int i;
+	unsigned long flags;
+	struct _sas_port *mpt2sas_port;
+	int rc = 0;
+
+	if (!handle)
+		return -1;
+
+	if ((mpt2sas_config_get_expander_pg0(ioc, &mpi_reply, &expander_pg0,
+	    MPI2_SAS_EXPAND_PGAD_FORM_HNDL, handle))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	/* handle out of order topology events */
+	parent_handle = le16_to_cpu(expander_pg0.ParentDevHandle);
+	if (parent_handle >= ioc->sas_hba.num_phys) {
+		spin_lock_irqsave(&ioc->sas_node_lock, flags);
+		sas_expander = mpt2sas_scsih_expander_find_by_handle(ioc,
+		    parent_handle);
+		spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+		if (!sas_expander) {
+			rc = _scsih_expander_add(ioc, parent_handle);
+			if (rc != 0)
+				return rc;
+		}
+	}
+
+	sas_address = le64_to_cpu(expander_pg0.SASAddress);
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_expander = mpt2sas_scsih_expander_find_by_sas_address(ioc,
+	    sas_address);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+
+	if (sas_expander)
+		return 0;
+
+	sas_expander = kzalloc(sizeof(struct _sas_node),
+	    GFP_KERNEL);
+	if (!sas_expander) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	sas_expander->handle = handle;
+	sas_expander->num_phys = expander_pg0.NumPhys;
+	sas_expander->parent_handle = parent_handle;
+	sas_expander->enclosure_handle =
+	    le16_to_cpu(expander_pg0.EnclosureHandle);
+	sas_expander->sas_address = sas_address;
+
+	printk(MPT2SAS_INFO_FMT "expander_add: handle(0x%04x),"
+	    " parent(0x%04x), sas_addr(0x%016llx), phys(%d)\n", ioc->name,
+	    handle, sas_expander->parent_handle, (unsigned long long)
+	    sas_expander->sas_address, sas_expander->num_phys);
+
+	if (!sas_expander->num_phys) {
+		rc = -1;
+		goto out_fail;
+	}
+	sas_expander->phy = kcalloc(sas_expander->num_phys,
+	    sizeof(struct _sas_phy), GFP_KERNEL);
+	if (!sas_expander->phy) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		rc = -1;
+		goto out_fail;
+	}
+
+	INIT_LIST_HEAD(&sas_expander->sas_port_list);
+	mpt2sas_port = mpt2sas_transport_port_add(ioc, handle,
+	    sas_expander->parent_handle);
+	if (!mpt2sas_port) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		rc = -1;
+		goto out_fail;
+	}
+	sas_expander->parent_dev = &mpt2sas_port->rphy->dev;
+
+	for (i = 0 ; i < sas_expander->num_phys ; i++) {
+		if ((mpt2sas_config_get_expander_pg1(ioc, &mpi_reply,
+		    &expander_pg1, i, handle))) {
+			printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+			    ioc->name, __FILE__, __LINE__, __func__);
+			continue;
+		}
+		sas_expander->phy[i].handle = handle;
+		sas_expander->phy[i].phy_id = i;
+		mpt2sas_transport_add_expander_phy(ioc, &sas_expander->phy[i],
+		    expander_pg1, sas_expander->parent_dev);
+	}
+
+	if (sas_expander->enclosure_handle) {
+		if (!(mpt2sas_config_get_enclosure_pg0(ioc, &mpi_reply,
+		    &enclosure_pg0, MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE,
+		   sas_expander->enclosure_handle))) {
+			sas_expander->enclosure_logical_id =
+			    le64_to_cpu(enclosure_pg0.EnclosureLogicalID);
+		}
+	}
+
+	_scsih_expander_node_add(ioc, sas_expander);
+	return 0;
+
+ out_fail:
+
+	if (sas_expander)
+		kfree(sas_expander->phy);
+	kfree(sas_expander);
+	return rc;
+}
+
+/**
+ * _scsih_expander_remove - removing expander object
+ * @ioc: per adapter object
+ * @handle: expander handle
+ *
+ * Return nothing.
+ */
+static void
+_scsih_expander_remove(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct _sas_node *sas_expander;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_expander = mpt2sas_scsih_expander_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+	_scsih_expander_node_remove(ioc, sas_expander);
+}
+
+/**
+ * _scsih_add_device -  creating sas device object
+ * @ioc: per adapter object
+ * @handle: sas device handle
+ * @phy_num: phy number end device attached to
+ * @is_pd: is this hidden raid component
+ *
+ * Creating end device object, stored in ioc->sas_device_list.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_scsih_add_device(struct MPT2SAS_ADAPTER *ioc, u16 handle, u8 phy_num, u8 is_pd)
+{
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	Mpi2SasEnclosurePage0_t enclosure_pg0;
+	struct _sas_device *sas_device;
+	u32 ioc_status;
+	u64 sas_address;
+	u32 device_info;
+	unsigned long flags;
+
+	if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, &sas_device_pg0,
+	    MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	/* check if device is present */
+	if (!(le16_to_cpu(sas_device_pg0.Flags) &
+	    MPI2_SAS_DEVICE0_FLAGS_DEVICE_PRESENT)) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		printk(MPT2SAS_ERR_FMT "Flags = 0x%04x\n",
+		    ioc->name, le16_to_cpu(sas_device_pg0.Flags));
+		return -1;
+	}
+
+	/* check if there were any issues with discovery */
+	if (sas_device_pg0.AccessStatus ==
+	    MPI2_SAS_DEVICE0_ASTATUS_SATA_INIT_FAILED) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		printk(MPT2SAS_ERR_FMT "AccessStatus = 0x%02x\n",
+		    ioc->name, sas_device_pg0.AccessStatus);
+		return -1;
+	}
+
+	/* check if this is end device */
+	device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
+	if (!(_scsih_is_end_device(device_info))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+	    sas_address);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	if (sas_device) {
+		_scsih_ublock_io_device(ioc, handle);
+		return 0;
+	}
+
+	sas_device = kzalloc(sizeof(struct _sas_device),
+	    GFP_KERNEL);
+	if (!sas_device) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	sas_device->handle = handle;
+	sas_device->parent_handle =
+	    le16_to_cpu(sas_device_pg0.ParentDevHandle);
+	sas_device->enclosure_handle =
+	    le16_to_cpu(sas_device_pg0.EnclosureHandle);
+	sas_device->slot =
+	    le16_to_cpu(sas_device_pg0.Slot);
+	sas_device->device_info = device_info;
+	sas_device->sas_address = sas_address;
+	sas_device->hidden_raid_component = is_pd;
+
+	/* get enclosure_logical_id */
+	if (!(mpt2sas_config_get_enclosure_pg0(ioc, &mpi_reply, &enclosure_pg0,
+	   MPI2_SAS_ENCLOS_PGAD_FORM_HANDLE,
+	   sas_device->enclosure_handle))) {
+		sas_device->enclosure_logical_id =
+		    le64_to_cpu(enclosure_pg0.EnclosureLogicalID);
+	}
+
+	/* get device name */
+	sas_device->device_name = le64_to_cpu(sas_device_pg0.DeviceName);
+
+	if (ioc->wait_for_port_enable_to_complete)
+		_scsih_sas_device_init_add(ioc, sas_device);
+	else
+		_scsih_sas_device_add(ioc, sas_device);
+
+	return 0;
+}
+
+/**
+ * _scsih_remove_device -  removing sas device object
+ * @ioc: per adapter object
+ * @handle: sas device handle
+ *
+ * Return nothing.
+ */
+static void
+_scsih_remove_device(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	Mpi2SasIoUnitControlReply_t mpi_reply;
+	Mpi2SasIoUnitControlRequest_t mpi_request;
+	u16 device_handle;
+
+	/* lookup sas_device */
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	if (!sas_device) {
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		return;
+	}
+
+	dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: enter: handle"
+	    "(0x%04x)\n", ioc->name, __func__, handle));
+
+	if (sas_device->starget && sas_device->starget->hostdata) {
+		sas_target_priv_data = sas_device->starget->hostdata;
+		sas_target_priv_data->deleted = 1;
+	}
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	if (ioc->remove_host)
+		goto out;
+
+	/* Target Reset to flush out all the outstanding IO */
+	device_handle = (sas_device->hidden_raid_component) ?
+	    sas_device->volume_handle : handle;
+	if (device_handle) {
+		dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "issue target reset: "
+		    "handle(0x%04x)\n", ioc->name, device_handle));
+		mutex_lock(&ioc->tm_cmds.mutex);
+		mpt2sas_scsih_issue_tm(ioc, device_handle, 0,
+		    MPI2_SCSITASKMGMT_TASKTYPE_TARGET_RESET, 0, 10);
+		ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+		mutex_unlock(&ioc->tm_cmds.mutex);
+		dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "issue target reset "
+		    "done: handle(0x%04x)\n", ioc->name, device_handle));
+	}
+
+	/* SAS_IO_UNIT_CNTR - send REMOVE_DEVICE */
+	dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "sas_iounit: handle"
+	    "(0x%04x)\n", ioc->name, handle));
+	memset(&mpi_request, 0, sizeof(Mpi2SasIoUnitControlRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
+	mpi_request.Operation = MPI2_SAS_OP_REMOVE_DEVICE;
+	mpi_request.DevHandle = cpu_to_le16(handle);
+	mpi_request.VF_ID = 0;
+	if ((mpt2sas_base_sas_iounit_control(ioc, &mpi_reply,
+	    &mpi_request)) != 0) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+	}
+
+	dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "sas_iounit: ioc_status"
+	    "(0x%04x), loginfo(0x%08x)\n", ioc->name,
+	    le16_to_cpu(mpi_reply.IOCStatus),
+	    le32_to_cpu(mpi_reply.IOCLogInfo)));
+
+ out:
+	mpt2sas_transport_port_remove(ioc, sas_device->sas_address,
+	    sas_device->parent_handle);
+
+	printk(MPT2SAS_INFO_FMT "removing handle(0x%04x), sas_addr"
+	    "(0x%016llx)\n", ioc->name, sas_device->handle,
+	    (unsigned long long) sas_device->sas_address);
+	_scsih_sas_device_remove(ioc, sas_device);
+
+	dewtprintk(ioc, printk(MPT2SAS_INFO_FMT "%s: exit: handle"
+	    "(0x%04x)\n", ioc->name, __func__, handle));
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_sas_topology_change_event_debug - debug for topology event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ */
+static void
+_scsih_sas_topology_change_event_debug(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataSasTopologyChangeList_t *event_data)
+{
+	int i;
+	u16 handle;
+	u16 reason_code;
+	u8 phy_number;
+	char *status_str = NULL;
+	char link_rate[25];
+
+	switch (event_data->ExpStatus) {
+	case MPI2_EVENT_SAS_TOPO_ES_ADDED:
+		status_str = "add";
+		break;
+	case MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING:
+		status_str = "remove";
+		break;
+	case MPI2_EVENT_SAS_TOPO_ES_RESPONDING:
+		status_str =  "responding";
+		break;
+	case MPI2_EVENT_SAS_TOPO_ES_DELAY_NOT_RESPONDING:
+		status_str = "remove delay";
+		break;
+	default:
+		status_str = "unknown status";
+		break;
+	}
+	printk(MPT2SAS_DEBUG_FMT "sas topology change: (%s)\n",
+	    ioc->name, status_str);
+	printk(KERN_DEBUG "\thandle(0x%04x), enclosure_handle(0x%04x) "
+	    "start_phy(%02d), count(%d)\n",
+	    le16_to_cpu(event_data->ExpanderDevHandle),
+	    le16_to_cpu(event_data->EnclosureHandle),
+	    event_data->StartPhyNum, event_data->NumEntries);
+	for (i = 0; i < event_data->NumEntries; i++) {
+		handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
+		if (!handle)
+			continue;
+		phy_number = event_data->StartPhyNum + i;
+		reason_code = event_data->PHY[i].PhyStatus &
+		    MPI2_EVENT_SAS_TOPO_RC_MASK;
+		switch (reason_code) {
+		case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED:
+			snprintf(link_rate, 25, ": add, link(0x%02x)",
+			    (event_data->PHY[i].LinkRate >> 4));
+			status_str = link_rate;
+			break;
+		case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
+			status_str = ": remove";
+			break;
+		case MPI2_EVENT_SAS_TOPO_RC_DELAY_NOT_RESPONDING:
+			status_str = ": remove_delay";
+			break;
+		case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED:
+			snprintf(link_rate, 25, ": link(0x%02x)",
+			    (event_data->PHY[i].LinkRate >> 4));
+			status_str = link_rate;
+			break;
+		case MPI2_EVENT_SAS_TOPO_RC_NO_CHANGE:
+			status_str = ": responding";
+			break;
+		default:
+			status_str = ": unknown";
+			break;
+		}
+		printk(KERN_DEBUG "\tphy(%02d), attached_handle(0x%04x)%s\n",
+		    phy_number, handle, status_str);
+	}
+}
+#endif
+
+/**
+ * _scsih_sas_topology_change_event - handle topology changes
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @event_data: event data payload
+ * @fw_event: the fw_event_work object
+ * Context: user.
+ *
+ */
+static void
+_scsih_sas_topology_change_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataSasTopologyChangeList_t *event_data,
+    struct fw_event_work *fw_event)
+{
+	int i;
+	u16 parent_handle, handle;
+	u16 reason_code;
+	u8 phy_number;
+	struct _sas_node *sas_expander;
+	unsigned long flags;
+	u8 link_rate_;
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
+		_scsih_sas_topology_change_event_debug(ioc, event_data);
+#endif
+
+	if (!ioc->sas_hba.num_phys)
+		_scsih_sas_host_add(ioc);
+	else
+		_scsih_sas_host_refresh(ioc, 0);
+
+	if (fw_event->ignore) {
+		dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "ignoring expander "
+		    "event\n", ioc->name));
+		return;
+	}
+
+	parent_handle = le16_to_cpu(event_data->ExpanderDevHandle);
+
+	/* handle expander add */
+	if (event_data->ExpStatus == MPI2_EVENT_SAS_TOPO_ES_ADDED)
+		if (_scsih_expander_add(ioc, parent_handle) != 0)
+			return;
+
+	/* handle siblings events */
+	for (i = 0; i < event_data->NumEntries; i++) {
+		if (fw_event->ignore) {
+			dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "ignoring "
+			    "expander event\n", ioc->name));
+			return;
+		}
+		if (event_data->PHY[i].PhyStatus &
+		    MPI2_EVENT_SAS_TOPO_PHYSTATUS_VACANT)
+			continue;
+		handle = le16_to_cpu(event_data->PHY[i].AttachedDevHandle);
+		if (!handle)
+			continue;
+		phy_number = event_data->StartPhyNum + i;
+		reason_code = event_data->PHY[i].PhyStatus &
+		    MPI2_EVENT_SAS_TOPO_RC_MASK;
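+		/* the current negotiated link rate is in the upper nibble */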
+		link_rate_ = event_data->PHY[i].LinkRate >> 4;
+		switch (reason_code) {
+		case MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED:
+		case MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED:
+			if (!parent_handle) {
+				if (phy_number < ioc->sas_hba.num_phys)
+					_scsih_link_change(ioc,
+					   ioc->sas_hba.phy[phy_number].handle,
+					   handle, phy_number, link_rate_);
+			} else {
+				spin_lock_irqsave(&ioc->sas_node_lock, flags);
+				sas_expander =
+				    mpt2sas_scsih_expander_find_by_handle(ioc,
+					parent_handle);
+				spin_unlock_irqrestore(&ioc->sas_node_lock,
+				    flags);
+				if (sas_expander) {
+					if (phy_number < sas_expander->num_phys)
+						_scsih_link_change(ioc,
+						   sas_expander->
+						   phy[phy_number].handle,
+						   handle, phy_number,
+						   link_rate_);
+				}
+			}
+			if (reason_code == MPI2_EVENT_SAS_TOPO_RC_PHY_CHANGED) {
+				if (link_rate_ >= MPI2_SAS_NEG_LINK_RATE_1_5)
+					_scsih_ublock_io_device(ioc, handle);
+			}
+			if (reason_code == MPI2_EVENT_SAS_TOPO_RC_TARG_ADDED) {
+				if (link_rate_ < MPI2_SAS_NEG_LINK_RATE_1_5)
+					break;
+				_scsih_add_device(ioc, handle, phy_number, 0);
+			}
+			break;
+		case MPI2_EVENT_SAS_TOPO_RC_TARG_NOT_RESPONDING:
+			_scsih_remove_device(ioc, handle);
+			break;
+		}
+	}
+
+	/* handle expander removal */
+	if (event_data->ExpStatus == MPI2_EVENT_SAS_TOPO_ES_NOT_RESPONDING)
+		_scsih_expander_remove(ioc, parent_handle);
+
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_sas_device_status_change_event_debug - debug for device event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_device_status_change_event_debug(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataSasDeviceStatusChange_t *event_data)
+{
+	char *reason_str = NULL;
+
+	switch (event_data->ReasonCode) {
+	case MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA:
+		reason_str = "smart data";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_UNSUPPORTED:
+		reason_str = "unsupported device discovered";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_INTERNAL_DEVICE_RESET:
+		reason_str = "internal device reset";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_TASK_ABORT_INTERNAL:
+		reason_str = "internal task abort";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_ABORT_TASK_SET_INTERNAL:
+		reason_str = "internal task abort set";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_CLEAR_TASK_SET_INTERNAL:
+		reason_str = "internal clear task set";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_QUERY_TASK_INTERNAL:
+		reason_str = "internal query task";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_SATA_INIT_FAILURE:
+		reason_str = "sata init failure";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_CMP_INTERNAL_DEV_RESET:
+		reason_str = "internal device reset complete";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_CMP_TASK_ABORT_INTERNAL:
+		reason_str = "internal task abort complete";
+		break;
+	case MPI2_EVENT_SAS_DEV_STAT_RC_ASYNC_NOTIFICATION:
+		reason_str = "internal async notification";
+		break;
+	default:
+		reason_str = "unknown reason";
+		break;
+	}
+	printk(MPT2SAS_DEBUG_FMT "device status change: (%s)\n"
+	    "\thandle(0x%04x), sas address(0x%016llx)", ioc->name,
+	    reason_str, le16_to_cpu(event_data->DevHandle),
+	    (unsigned long long)le64_to_cpu(event_data->SASAddress));
+	if (event_data->ReasonCode == MPI2_EVENT_SAS_DEV_STAT_RC_SMART_DATA)
+		printk(MPT2SAS_DEBUG_FMT ", ASC(0x%x), ASCQ(0x%x)\n", ioc->name,
+		    event_data->ASC, event_data->ASCQ);
+	printk(KERN_INFO "\n");
+}
+#endif
+
+/**
+ * _scsih_sas_device_status_change_event - handle device status change
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_device_status_change_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataSasDeviceStatusChange_t *event_data)
+{
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
+		_scsih_sas_device_status_change_event_debug(ioc, event_data);
+#endif
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_sas_enclosure_dev_status_change_event_debug - debug for enclosure event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_enclosure_dev_status_change_event_debug(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataSasEnclDevStatusChange_t *event_data)
+{
+	char *reason_str = NULL;
+
+	switch (event_data->ReasonCode) {
+	case MPI2_EVENT_SAS_ENCL_RC_ADDED:
+		reason_str = "enclosure add";
+		break;
+	case MPI2_EVENT_SAS_ENCL_RC_NOT_RESPONDING:
+		reason_str = "enclosure remove";
+		break;
+	default:
+		reason_str = "unknown reason";
+		break;
+	}
+
+	printk(MPT2SAS_DEBUG_FMT "enclosure status change: (%s)\n"
+	    "\thandle(0x%04x), enclosure logical id(0x%016llx)"
+	    " number slots(%d)\n", ioc->name, reason_str,
+	    le16_to_cpu(event_data->EnclosureHandle),
+	    (unsigned long long)le64_to_cpu(event_data->EnclosureLogicalID),
+	    le16_to_cpu(event_data->StartSlot));
+}
+#endif
+
+/**
+ * _scsih_sas_enclosure_dev_status_change_event - handle enclosure events
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_enclosure_dev_status_change_event(struct MPT2SAS_ADAPTER *ioc,
+    u8 VF_ID, Mpi2EventDataSasEnclDevStatusChange_t *event_data)
+{
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
+		_scsih_sas_enclosure_dev_status_change_event_debug(ioc,
+		     event_data);
+#endif
+}
+
+/**
+ * _scsih_sas_broadcast_primative_event - handle broadcast events
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_broadcast_primative_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataSasBroadcastPrimitive_t *event_data)
+{
+	struct scsi_cmnd *scmd;
+	u16 smid, handle;
+	u32 lun;
+	struct MPT2SAS_DEVICE *sas_device_priv_data;
+	u32 termination_count;
+	u32 query_count;
+	Mpi2SCSITaskManagementReply_t *mpi_reply;
+
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "broadcast primitive: "
+	    "phy number(%d), width(%d)\n", ioc->name, event_data->PhyNum,
+	    event_data->PortWidth));
+
+	dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: enter\n", ioc->name,
+	    __func__));
+
+	mutex_lock(&ioc->tm_cmds.mutex);
+	termination_count = 0;
+	query_count = 0;
+	mpi_reply = ioc->tm_cmds.reply;
+	for (smid = 1; smid <= ioc->request_depth; smid++) {
+		scmd = _scsih_scsi_lookup_get(ioc, smid);
+		if (!scmd)
+			continue;
+		sas_device_priv_data = scmd->device->hostdata;
+		if (!sas_device_priv_data || !sas_device_priv_data->sas_target)
+			continue;
+		 /* skip hidden raid components */
+		if (sas_device_priv_data->sas_target->flags &
+		    MPT_TARGET_FLAGS_RAID_COMPONENT)
+			continue;
+		 /* skip volumes */
+		if (sas_device_priv_data->sas_target->flags &
+		    MPT_TARGET_FLAGS_VOLUME)
+			continue;
+
+		handle = sas_device_priv_data->sas_target->handle;
+		lun = sas_device_priv_data->lun;
+		query_count++;
+
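+		/* query the task first; the task set is aborted only when the
+		 * query does not show the task as still being handled */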
+		mpt2sas_scsih_issue_tm(ioc, handle, lun,
+		    MPI2_SCSITASKMGMT_TASKTYPE_QUERY_TASK, smid, 30);
+		termination_count += le32_to_cpu(mpi_reply->TerminationCount);
+
+		if ((mpi_reply->IOCStatus == MPI2_IOCSTATUS_SUCCESS) &&
+		    (mpi_reply->ResponseCode ==
+		     MPI2_SCSITASKMGMT_RSP_TM_SUCCEEDED ||
+		     mpi_reply->ResponseCode ==
+		     MPI2_SCSITASKMGMT_RSP_IO_QUEUED_ON_IOC))
+			continue;
+
+		mpt2sas_scsih_issue_tm(ioc, handle, lun,
+		    MPI2_SCSITASKMGMT_TASKTYPE_ABRT_TASK_SET, smid, 30);
+		termination_count += le32_to_cpu(mpi_reply->TerminationCount);
+	}
+	ioc->tm_cmds.status = MPT2_CMD_NOT_USED;
+	ioc->broadcast_aen_busy = 0;
+	mutex_unlock(&ioc->tm_cmds.mutex);
+
+	dtmprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+	    "%s - exit, query_count = %d termination_count = %d\n",
+	    ioc->name, __func__, query_count, termination_count));
+}
+
+/**
+ * _scsih_sas_discovery_event - handle discovery events
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_discovery_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataSasDiscovery_t *event_data)
+{
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK) {
+		printk(MPT2SAS_DEBUG_FMT "discovery event: (%s)", ioc->name,
+		    (event_data->ReasonCode == MPI2_EVENT_SAS_DISC_RC_STARTED) ?
+		    "start" : "stop");
+		if (event_data->DiscoveryStatus)
+			printk(MPT2SAS_DEBUG_FMT ", discovery_status(0x%08x)",
+			    ioc->name,
+			    le32_to_cpu(event_data->DiscoveryStatus));
+		printk("\n");
+	}
+#endif
+
+	if (event_data->ReasonCode == MPI2_EVENT_SAS_DISC_RC_STARTED &&
+	    !ioc->sas_hba.num_phys)
+		_scsih_sas_host_add(ioc);
+}
+
+/**
+ * _scsih_reprobe_lun - reprobing lun
+ * @sdev: scsi device struct
+ * @no_uld_attach: sdev->no_uld_attach flag setting
+ *
+ **/
+static void
+_scsih_reprobe_lun(struct scsi_device *sdev, void *no_uld_attach)
+{
+	int rc;
+
+	sdev->no_uld_attach = no_uld_attach ? 1 : 0;
+	sdev_printk(KERN_INFO, sdev, "%s raid component\n",
+	    sdev->no_uld_attach ? "hiding" : "exposing");
+	rc = scsi_device_reprobe(sdev);
+}
+
+/**
+ * _scsih_reprobe_target - reprobing target
+ * @starget: scsi target struct
+ * @no_uld_attach: sdev->no_uld_attach flag setting
+ *
+ * Note: the no_uld_attach flag determines whether the disk device is
+ * attached to the block layer. A value of 1 means do not attach.
+ **/
+static void
+_scsih_reprobe_target(struct scsi_target *starget, int no_uld_attach)
+{
+	struct MPT2SAS_TARGET *sas_target_priv_data = starget->hostdata;
+
+	if (no_uld_attach)
+		sas_target_priv_data->flags |= MPT_TARGET_FLAGS_RAID_COMPONENT;
+	else
+		sas_target_priv_data->flags &= ~MPT_TARGET_FLAGS_RAID_COMPONENT;
+
+	starget_for_each_device(starget, no_uld_attach ? (void *)1 : NULL,
+	    _scsih_reprobe_lun);
+}
+/**
+ * _scsih_sas_volume_add - add new volume
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_volume_add(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _raid_device *raid_device;
+	unsigned long flags;
+	u64 wwid;
+	u16 handle = le16_to_cpu(element->VolDevHandle);
+	int rc;
+
+#if 0 /* RAID_HACKS */
+	if (le32_to_cpu(event_data->Flags) &
+	    MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG)
+		return;
+#endif
+
+	mpt2sas_config_get_volume_wwid(ioc, handle, &wwid);
+	if (!wwid) {
+		printk(MPT2SAS_ERR_FMT
+		    "failure at %s:%d/%s()!\n", ioc->name,
+		    __FILE__, __LINE__, __func__);
+		return;
+	}
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	raid_device = _scsih_raid_device_find_by_wwid(ioc, wwid);
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+
+	if (raid_device)
+		return;
+
+	raid_device = kzalloc(sizeof(struct _raid_device), GFP_KERNEL);
+	if (!raid_device) {
+		printk(MPT2SAS_ERR_FMT
+		    "failure at %s:%d/%s()!\n", ioc->name,
+		    __FILE__, __LINE__, __func__);
+		return;
+	}
+
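+	/* assign the next driver-generated target id on the RAID channel */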
+	raid_device->id = ioc->sas_id++;
+	raid_device->channel = RAID_CHANNEL;
+	raid_device->handle = handle;
+	raid_device->wwid = wwid;
+	_scsih_raid_device_add(ioc, raid_device);
+	if (!ioc->wait_for_port_enable_to_complete) {
+		rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
+		    raid_device->id, 0);
+		if (rc)
+			_scsih_raid_device_remove(ioc, raid_device);
+	} else
+		_scsih_determine_boot_device(ioc, raid_device, 1);
+}
+
+/**
+ * _scsih_sas_volume_delete - delete volume
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_volume_delete(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _raid_device *raid_device;
+	u16 handle = le16_to_cpu(element->VolDevHandle);
+	unsigned long flags;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+
+#if 0 /* RAID_HACKS */
+	if (le32_to_cpu(event_data->Flags) &
+	    MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG)
+		return;
+#endif
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	raid_device = _scsih_raid_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+	if (!raid_device)
+		return;
+	if (raid_device->starget) {
+		sas_target_priv_data = raid_device->starget->hostdata;
+		sas_target_priv_data->deleted = 1;
+		scsi_remove_target(&raid_device->starget->dev);
+	}
+	_scsih_raid_device_remove(ioc, raid_device);
+}
+
+/**
+ * _scsih_sas_pd_expose - expose pd component to /dev/sdX
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_pd_expose(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	if (!sas_device)
+		return;
+
+	/* exposing raid component */
+	sas_device->volume_handle = 0;
+	sas_device->volume_wwid = 0;
+	sas_device->hidden_raid_component = 0;
+	_scsih_reprobe_target(sas_device->starget, 0);
+}
+
+/**
+ * _scsih_sas_pd_hide - hide pd component from /dev/sdX
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_pd_hide(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	if (!sas_device)
+		return;
+
+	/* hiding raid component */
+	mpt2sas_config_get_volume_handle(ioc, handle,
+	    &sas_device->volume_handle);
+	mpt2sas_config_get_volume_wwid(ioc, sas_device->volume_handle,
+	    &sas_device->volume_wwid);
+	sas_device->hidden_raid_component = 1;
+	_scsih_reprobe_target(sas_device->starget, 1);
+}
+
+/**
+ * _scsih_sas_pd_delete - delete pd component
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_pd_delete(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	if (!sas_device)
+		return;
+	_scsih_remove_device(ioc, handle);
+}
+
+/**
+ * _scsih_sas_pd_add - add pd component
+ * @ioc: per adapter object
+ * @element: IR config element data
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_pd_add(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventIrConfigElement_t *element)
+{
+	struct _sas_device *sas_device;
+	unsigned long flags;
+	u16 handle = le16_to_cpu(element->PhysDiskDevHandle);
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	if (sas_device)
+		sas_device->hidden_raid_component = 1;
+	else
+		_scsih_add_device(ioc, handle, 0, 1);
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_sas_ir_config_change_event_debug - debug for IR Config Change events
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_config_change_event_debug(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataIrConfigChangeList_t *event_data)
+{
+	Mpi2EventIrConfigElement_t *element;
+	u8 element_type;
+	int i;
+	char *reason_str = NULL, *element_str = NULL;
+
+	element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0];
+
+	printk(MPT2SAS_DEBUG_FMT "raid config change: (%s), elements(%d)\n",
+	    ioc->name, (le32_to_cpu(event_data->Flags) &
+	    MPI2_EVENT_IR_CHANGE_FLAGS_FOREIGN_CONFIG) ?
+	    "foreign" : "native", event_data->NumElements);
+	for (i = 0; i < event_data->NumElements; i++, element++) {
+		switch (element->ReasonCode) {
+		case MPI2_EVENT_IR_CHANGE_RC_ADDED:
+			reason_str = "add";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_REMOVED:
+			reason_str = "remove";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_NO_CHANGE:
+			reason_str = "no change";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_HIDE:
+			reason_str = "hide";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_UNHIDE:
+			reason_str = "unhide";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED:
+			reason_str = "volume_created";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED:
+			reason_str = "volume_deleted";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED:
+			reason_str = "pd_created";
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED:
+			reason_str = "pd_deleted";
+			break;
+		default:
+			reason_str = "unknown reason";
+			break;
+		}
+		element_type = le16_to_cpu(element->ElementFlags) &
+		    MPI2_EVENT_IR_CHANGE_EFLAGS_ELEMENT_TYPE_MASK;
+		switch (element_type) {
+		case MPI2_EVENT_IR_CHANGE_EFLAGS_VOLUME_ELEMENT:
+			element_str = "volume";
+			break;
+		case MPI2_EVENT_IR_CHANGE_EFLAGS_VOLPHYSDISK_ELEMENT:
+			element_str = "phys disk";
+			break;
+		case MPI2_EVENT_IR_CHANGE_EFLAGS_HOTSPARE_ELEMENT:
+			element_str = "hot spare";
+			break;
+		default:
+			element_str = "unknown element";
+			break;
+		}
+		printk(KERN_DEBUG "\t(%s:%s), vol handle(0x%04x), "
+		    "pd handle(0x%04x), pd num(0x%02x)\n", element_str,
+		    reason_str, le16_to_cpu(element->VolDevHandle),
+		    le16_to_cpu(element->PhysDiskDevHandle),
+		    element->PhysDiskNum);
+	}
+}
+#endif
+
+/**
+ * _scsih_sas_ir_config_change_event - handle ir configuration change events
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_config_change_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataIrConfigChangeList_t *event_data)
+{
+	Mpi2EventIrConfigElement_t *element;
+	int i;
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
+		_scsih_sas_ir_config_change_event_debug(ioc, event_data);
+
+#endif
+
+	element = (Mpi2EventIrConfigElement_t *)&event_data->ConfigElement[0];
+	for (i = 0; i < event_data->NumElements; i++, element++) {
+
+		switch (element->ReasonCode) {
+		case MPI2_EVENT_IR_CHANGE_RC_VOLUME_CREATED:
+		case MPI2_EVENT_IR_CHANGE_RC_ADDED:
+			_scsih_sas_volume_add(ioc, element);
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_VOLUME_DELETED:
+		case MPI2_EVENT_IR_CHANGE_RC_REMOVED:
+			_scsih_sas_volume_delete(ioc, element);
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_PD_CREATED:
+			_scsih_sas_pd_hide(ioc, element);
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_PD_DELETED:
+			_scsih_sas_pd_expose(ioc, element);
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_HIDE:
+			_scsih_sas_pd_add(ioc, element);
+			break;
+		case MPI2_EVENT_IR_CHANGE_RC_UNHIDE:
+			_scsih_sas_pd_delete(ioc, element);
+			break;
+		}
+	}
+}
+
+/**
+ * _scsih_sas_ir_volume_event - IR volume event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_volume_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataIrVolume_t *event_data)
+{
+	u64 wwid;
+	unsigned long flags;
+	struct _raid_device *raid_device;
+	u16 handle;
+	u32 state;
+	int rc;
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+
+	if (event_data->ReasonCode != MPI2_EVENT_IR_VOLUME_RC_STATE_CHANGED)
+		return;
+
+	handle = le16_to_cpu(event_data->VolDevHandle);
+	state = le32_to_cpu(event_data->NewValue);
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: handle(0x%04x), "
+	    "old(0x%08x), new(0x%08x)\n", ioc->name, __func__,  handle,
+	    le32_to_cpu(event_data->PreviousValue), state));
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	raid_device = _scsih_raid_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+
+	switch (state) {
+	case MPI2_RAID_VOL_STATE_MISSING:
+	case MPI2_RAID_VOL_STATE_FAILED:
+		if (!raid_device)
+			break;
+		if (raid_device->starget) {
+			sas_target_priv_data = raid_device->starget->hostdata;
+			sas_target_priv_data->deleted = 1;
+			scsi_remove_target(&raid_device->starget->dev);
+		}
+		_scsih_raid_device_remove(ioc, raid_device);
+		break;
+
+	case MPI2_RAID_VOL_STATE_ONLINE:
+	case MPI2_RAID_VOL_STATE_DEGRADED:
+	case MPI2_RAID_VOL_STATE_OPTIMAL:
+		if (raid_device)
+			break;
+
+		mpt2sas_config_get_volume_wwid(ioc, handle, &wwid);
+		if (!wwid) {
+			printk(MPT2SAS_ERR_FMT
+			    "failure at %s:%d/%s()!\n", ioc->name,
+			    __FILE__, __LINE__, __func__);
+			break;
+		}
+
+		raid_device = kzalloc(sizeof(struct _raid_device), GFP_KERNEL);
+		if (!raid_device) {
+			printk(MPT2SAS_ERR_FMT
+			    "failure at %s:%d/%s()!\n", ioc->name,
+			    __FILE__, __LINE__, __func__);
+			break;
+		}
+
+		raid_device->id = ioc->sas_id++;
+		raid_device->channel = RAID_CHANNEL;
+		raid_device->handle = handle;
+		raid_device->wwid = wwid;
+		_scsih_raid_device_add(ioc, raid_device);
+		rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
+		    raid_device->id, 0);
+		if (rc)
+			_scsih_raid_device_remove(ioc, raid_device);
+		break;
+
+	case MPI2_RAID_VOL_STATE_INITIALIZING:
+	default:
+		break;
+	}
+}
+
+/**
+ * _scsih_sas_ir_physical_disk_event - PD event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_physical_disk_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+   Mpi2EventDataIrPhysicalDisk_t *event_data)
+{
+	u16 handle;
+	u32 state;
+	struct _sas_device *sas_device;
+	unsigned long flags;
+
+	if (event_data->ReasonCode != MPI2_EVENT_IR_PHYSDISK_RC_STATE_CHANGED)
+		return;
+
+	handle = le16_to_cpu(event_data->PhysDiskDevHandle);
+	state = le32_to_cpu(event_data->NewValue);
+
+	dewtprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s: handle(0x%04x), "
+	    "old(0x%08x), new(0x%08x)\n", ioc->name, __func__,  handle,
+	    le32_to_cpu(event_data->PreviousValue), state));
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	switch (state) {
+#if 0
+	case MPI2_RAID_PD_STATE_OFFLINE:
+		if (sas_device)
+			_scsih_remove_device(ioc, handle);
+		break;
+#endif
+	case MPI2_RAID_PD_STATE_ONLINE:
+	case MPI2_RAID_PD_STATE_DEGRADED:
+	case MPI2_RAID_PD_STATE_REBUILDING:
+	case MPI2_RAID_PD_STATE_OPTIMAL:
+		if (sas_device)
+			sas_device->hidden_raid_component = 1;
+		else
+			_scsih_add_device(ioc, handle, 0, 1);
+		break;
+
+	case MPI2_RAID_PD_STATE_NOT_CONFIGURED:
+	case MPI2_RAID_PD_STATE_NOT_COMPATIBLE:
+	case MPI2_RAID_PD_STATE_HOT_SPARE:
+	default:
+		break;
+	}
+}
+
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+/**
+ * _scsih_sas_ir_operation_status_event_debug - debug for IR op event
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_operation_status_event_debug(struct MPT2SAS_ADAPTER *ioc,
+    Mpi2EventDataIrOperationStatus_t *event_data)
+{
+	char *reason_str = NULL;
+
+	switch (event_data->RAIDOperation) {
+	case MPI2_EVENT_IR_RAIDOP_RESYNC:
+		reason_str = "resync";
+		break;
+	case MPI2_EVENT_IR_RAIDOP_ONLINE_CAP_EXPANSION:
+		reason_str = "online capacity expansion";
+		break;
+	case MPI2_EVENT_IR_RAIDOP_CONSISTENCY_CHECK:
+		reason_str = "consistency check";
+		break;
+	default:
+		reason_str = "unknown reason";
+		break;
+	}
+
+	printk(MPT2SAS_INFO_FMT "raid operational status: (%s)"
+	    "\thandle(0x%04x), percent complete(%d)\n",
+	    ioc->name, reason_str,
+	    le16_to_cpu(event_data->VolDevHandle),
+	    event_data->PercentComplete);
+}
+#endif
+
+/**
+ * _scsih_sas_ir_operation_status_event - handle RAID operation events
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_sas_ir_operation_status_event(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataIrOperationStatus_t *event_data)
+{
+#ifdef CONFIG_SCSI_MPT2SAS_LOGGING
+	if (ioc->logging_level & MPT_DEBUG_EVENT_WORK_TASK)
+		_scsih_sas_ir_operation_status_event_debug(ioc, event_data);
+#endif
+}
+
+/**
+ * _scsih_task_set_full - handle task set full
+ * @ioc: per adapter object
+ * @event_data: event data payload
+ * Context: user.
+ *
+ * Throttle back qdepth.
+ */
+static void
+_scsih_task_set_full(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID,
+    Mpi2EventDataTaskSetFull_t *event_data)
+{
+	unsigned long flags;
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	struct scsi_device *sdev;
+	int depth;
+	u16 current_depth;
+	u16 handle;
+	int id, channel;
+	u64 sas_address;
+
+	current_depth = le16_to_cpu(event_data->CurrentDepth);
+	handle = le16_to_cpu(event_data->DevHandle);
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = _scsih_sas_device_find_by_handle(ioc, handle);
+	if (!sas_device) {
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+	id = sas_device->id;
+	channel = sas_device->channel;
+	sas_address = sas_device->sas_address;
+
+	/* if hidden raid component, then change to volume characteristics */
+	if (sas_device->hidden_raid_component && sas_device->volume_handle) {
+		spin_lock_irqsave(&ioc->raid_device_lock, flags);
+		raid_device = _scsih_raid_device_find_by_handle(
+		    ioc, sas_device->volume_handle);
+		spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+		if (raid_device) {
+			id = raid_device->id;
+			channel = raid_device->channel;
+			handle = raid_device->handle;
+			sas_address = raid_device->wwid;
+		}
+	}
+
+	if (ioc->logging_level & MPT_DEBUG_TASK_SET_FULL)
+		starget_printk(KERN_DEBUG, sas_device->starget, "task set "
+		    "full: handle(0x%04x), sas_addr(0x%016llx), depth(%d)\n",
+		    handle, (unsigned long long)sas_address, current_depth);
+
+	shost_for_each_device(sdev, ioc->shost) {
+		if (sdev->id == id && sdev->channel == channel) {
+			if (current_depth > sdev->queue_depth) {
+				if (ioc->logging_level &
+				    MPT_DEBUG_TASK_SET_FULL)
+					sdev_printk(KERN_INFO, sdev, "strange "
+					    "observation: queue depth (%d) is "
+					    "less than the fw queue depth "
+					    "(%d)\n", sdev->queue_depth,
+					    current_depth);
+				continue;
+			}
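+			/* let the midlayer throttle the queue depth down */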
+			depth = scsi_track_queue_full(sdev,
+			    current_depth - 1);
+			if (depth > 0)
+				sdev_printk(KERN_INFO, sdev, "Queue depth "
+				    "reduced to (%d)\n", depth);
+			else if (depth < 0)
+				sdev_printk(KERN_INFO, sdev, "Tagged Command "
+				    "Queueing is being disabled\n");
+			else if (depth == 0)
+				if (ioc->logging_level &
+				     MPT_DEBUG_TASK_SET_FULL)
+					sdev_printk(KERN_INFO, sdev,
+					     "Queue depth not changed yet\n");
+		}
+	}
+}
+
+/**
+ * _scsih_mark_responding_sas_device - mark a sas_device as responding
+ * @ioc: per adapter object
+ * @sas_address: sas address
+ * @slot: enclosure slot id
+ * @handle: device handle
+ *
+ * After host reset, find out whether devices are still responding.
+ * Used in _scsi_remove_unresponsive_sas_devices.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_mark_responding_sas_device(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
+    u16 slot, u16 handle)
+{
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct scsi_target *starget;
+	struct _sas_device *sas_device;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	list_for_each_entry(sas_device, &ioc->sas_device_list, list) {
+		if (sas_device->sas_address == sas_address &&
+		    sas_device->slot == slot && sas_device->starget) {
+			sas_device->responding = 1;
+			starget_printk(KERN_INFO, sas_device->starget,
+			    "handle(0x%04x), sas_addr(0x%016llx), enclosure "
+			    "logical id(0x%016llx), slot(%d)\n", handle,
+			    (unsigned long long)sas_device->sas_address,
+			    (unsigned long long)
+			    sas_device->enclosure_logical_id,
+			    sas_device->slot);
+			if (sas_device->handle == handle)
+				goto out;
+			printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n",
+			    sas_device->handle);
+			sas_device->handle = handle;
+			starget = sas_device->starget;
+			sas_target_priv_data = starget->hostdata;
+			sas_target_priv_data->handle = handle;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+}
+
+/**
+ * _scsih_search_responding_sas_devices - search for responding sas devices
+ * @ioc: per adapter object
+ *
+ * After host reset, find out whether devices are still responding.
+ * If not, remove them.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_search_responding_sas_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	Mpi2ConfigReply_t mpi_reply;
+	u16 ioc_status;
+	u64 sas_address;
+	u16 handle;
+	u32 device_info;
+	u16 slot;
+
+	printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, __func__);
+
+	if (list_empty(&ioc->sas_device_list))
+		return;
+
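+	/* iterate over all device pages; handle 0xFFFF fetches the first one */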
+	handle = 0xFFFF;
+	while (!(mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply,
+	    &sas_device_pg0, MPI2_SAS_DEVICE_PGAD_FORM_GET_NEXT_HANDLE,
+	    handle))) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
+			break;
+		handle = le16_to_cpu(sas_device_pg0.DevHandle);
+		device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
+		if (!(_scsih_is_end_device(device_info)))
+			continue;
+		sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
+		slot = le16_to_cpu(sas_device_pg0.Slot);
+		_scsih_mark_responding_sas_device(ioc, sas_address, slot,
+		    handle);
+	}
+}
+
+/**
+ * _scsih_mark_responding_raid_device - mark a raid_device as responding
+ * @ioc: per adapter object
+ * @wwid: world wide identifier for raid volume
+ * @handle: device handle
+ *
+ * After host reset, find out whether devices are still responding.
+ * Called from _scsih_search_responding_raid_devices.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_mark_responding_raid_device(struct MPT2SAS_ADAPTER *ioc, u64 wwid,
+    u16 handle)
+{
+	struct MPT2SAS_TARGET *sas_target_priv_data;
+	struct scsi_target *starget;
+	struct _raid_device *raid_device;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->raid_device_lock, flags);
+	list_for_each_entry(raid_device, &ioc->raid_device_list, list) {
+		if (raid_device->wwid == wwid && raid_device->starget) {
+			raid_device->responding = 1;
+			starget_printk(KERN_INFO, raid_device->starget,
+			    "handle(0x%04x), wwid(0x%016llx)\n", handle,
+			    (unsigned long long)raid_device->wwid);
+			if (raid_device->handle == handle)
+				goto out;
+			printk(KERN_INFO "\thandle changed from(0x%04x)!!!\n",
+			    raid_device->handle);
+			raid_device->handle = handle;
+			starget = raid_device->starget;
+			sas_target_priv_data = starget->hostdata;
+			sas_target_priv_data->handle = handle;
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
+}
+
+/**
+ * _scsih_search_responding_raid_devices - search for responding raid volumes
+ * @ioc: per adapter object
+ *
+ * After host reset, find out whether devices are still responding.
+ * Devices that do not respond are removed later by
+ * _scsih_remove_unresponding_devices.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_search_responding_raid_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2RaidVolPage1_t volume_pg1;
+	Mpi2ConfigReply_t mpi_reply;
+	u16 ioc_status;
+	u16 handle;
+
+	printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, __func__);
+
+	if (list_empty(&ioc->raid_device_list))
+		return;
+
+	handle = 0xFFFF;
+	while (!(mpt2sas_config_get_raid_volume_pg1(ioc, &mpi_reply,
+	    &volume_pg1, MPI2_RAID_VOLUME_PGAD_FORM_GET_NEXT_HANDLE, handle))) {
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
+			break;
+		handle = le16_to_cpu(volume_pg1.DevHandle);
+		_scsih_mark_responding_raid_device(ioc,
+		    le64_to_cpu(volume_pg1.WWID), handle);
+	}
+}
+
+/**
+ * _scsih_mark_responding_expander - mark an expander as responding
+ * @ioc: per adapter object
+ * @sas_address: sas address
+ * @handle: expander device handle
+ *
+ * After host reset, find out whether devices are still responding.
+ * Called from _scsih_search_responding_expanders.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_mark_responding_expander(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
+     u16 handle)
+{
+	struct _sas_node *sas_expander;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	list_for_each_entry(sas_expander, &ioc->sas_expander_list, list) {
+		if (sas_expander->sas_address == sas_address) {
+			sas_expander->responding = 1;
+			if (sas_expander->handle != handle) {
+				printk(KERN_INFO "old handle(0x%04x)\n",
+				    sas_expander->handle);
+				sas_expander->handle = handle;
+			}
+			goto out;
+		}
+	}
+ out:
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+}
+
+/**
+ * _scsih_search_responding_expanders - search for responding expanders
+ * @ioc: per adapter object
+ *
+ * After host reset, find out whether devices are still responding.
+ * Devices that do not respond are removed later by
+ * _scsih_remove_unresponding_devices.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_search_responding_expanders(struct MPT2SAS_ADAPTER *ioc)
+{
+	Mpi2ExpanderPage0_t expander_pg0;
+	Mpi2ConfigReply_t mpi_reply;
+	u16 ioc_status;
+	__le64 sas_address;
+	u16 handle;
+
+	printk(MPT2SAS_INFO_FMT "%s\n", ioc->name, __func__);
+
+	if (list_empty(&ioc->sas_expander_list))
+		return;
+
+	handle = 0xFFFF;
+	while (!(mpt2sas_config_get_expander_pg0(ioc, &mpi_reply, &expander_pg0,
+	    MPI2_SAS_EXPAND_PGAD_FORM_GET_NEXT_HNDL, handle))) {
+
+		ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+		    MPI2_IOCSTATUS_MASK;
+		if (ioc_status == MPI2_IOCSTATUS_CONFIG_INVALID_PAGE)
+			break;
+
+		handle = le16_to_cpu(expander_pg0.DevHandle);
+		sas_address = le64_to_cpu(expander_pg0.SASAddress);
+		printk(KERN_INFO "\texpander present: handle(0x%04x), "
+		    "sas_addr(0x%016llx)\n", handle,
+		    (unsigned long long)sas_address);
+		_scsih_mark_responding_expander(ioc, sas_address, handle);
+	}
+
+}
+
+/**
+ * _scsih_remove_unresponding_devices - removing unresponding devices
+ * @ioc: per adapter object
+ *
+ * Return nothing.
+ */
+static void
+_scsih_remove_unresponding_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct _sas_device *sas_device, *sas_device_next;
+	struct _sas_node *sas_expander, *sas_expander_next;
+	struct _raid_device *raid_device, *raid_device_next;
+	unsigned long flags;
+
+	_scsih_search_responding_sas_devices(ioc);
+	_scsih_search_responding_raid_devices(ioc);
+	_scsih_search_responding_expanders(ioc);
+
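+	/* devices flagged responding by the searches above are kept; anything
+	 * left unflagged is considered missing and removed below */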
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	ioc->shost_recovery = 0;
+	if (ioc->shost->shost_state == SHOST_RECOVERY) {
+		printk(MPT2SAS_INFO_FMT "putting controller into "
+		    "SHOST_RUNNING\n", ioc->name);
+		scsi_host_set_state(ioc->shost, SHOST_RUNNING);
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	list_for_each_entry_safe(sas_device, sas_device_next,
+	    &ioc->sas_device_list, list) {
+		if (sas_device->responding) {
+			sas_device->responding = 0;
+			continue;
+		}
+		if (sas_device->starget)
+			starget_printk(KERN_INFO, sas_device->starget,
+			    "removing: handle(0x%04x), sas_addr(0x%016llx), "
+			    "enclosure logical id(0x%016llx), slot(%d)\n",
+			    sas_device->handle,
+			    (unsigned long long)sas_device->sas_address,
+			    (unsigned long long)
+			    sas_device->enclosure_logical_id,
+			    sas_device->slot);
+		_scsih_remove_device(ioc, sas_device->handle);
+	}
+
+	list_for_each_entry_safe(raid_device, raid_device_next,
+	    &ioc->raid_device_list, list) {
+		if (raid_device->responding) {
+			raid_device->responding = 0;
+			continue;
+		}
+		if (raid_device->starget) {
+			starget_printk(KERN_INFO, raid_device->starget,
+			    "removing: handle(0x%04x), wwid(0x%016llx)\n",
+			      raid_device->handle,
+			    (unsigned long long)raid_device->wwid);
+			scsi_remove_target(&raid_device->starget->dev);
+		}
+		_scsih_raid_device_remove(ioc, raid_device);
+	}
+
+	list_for_each_entry_safe(sas_expander, sas_expander_next,
+	    &ioc->sas_expander_list, list) {
+		if (sas_expander->responding) {
+			sas_expander->responding = 0;
+			continue;
+		}
+		printk(KERN_INFO "\tremoving expander: handle(0x%04x), "
+		    "sas_addr(0x%016llx)\n", sas_expander->handle,
+		    (unsigned long long)sas_expander->sas_address);
+		_scsih_expander_remove(ioc, sas_expander->handle);
+	}
+}
+
+/**
+ * _firmware_event_work - delayed task for processing firmware events
+ * @ioc: per adapter object
+ * @work: equal to the fw_event_work object
+ * Context: user.
+ *
+ * Return nothing.
+ */
+static void
+_firmware_event_work(struct work_struct *work)
+{
+	struct fw_event_work *fw_event = container_of(work,
+	    struct fw_event_work, work.work);
+	unsigned long flags;
+	struct MPT2SAS_ADAPTER *ioc = fw_event->ioc;
+
+	/* This is invoked by calling _scsih_queue_rescan(). */
+	if (fw_event->event == MPT2SAS_RESCAN_AFTER_HOST_RESET) {
+		_scsih_fw_event_free(ioc, fw_event);
+		_scsih_sas_host_refresh(ioc, 1);
+		_scsih_remove_unresponding_devices(ioc);
+		return;
+	}
+
+	/* the queue is being flushed so ignore this event */
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	if (ioc->fw_events_off || ioc->remove_host) {
+		spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+		_scsih_fw_event_free(ioc, fw_event);
+		return;
+	}
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->shost_recovery) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		_scsih_fw_event_requeue(ioc, fw_event, 1000);
+		return;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	switch (fw_event->event) {
+	case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
+		_scsih_sas_topology_change_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data, fw_event);
+		break;
+	case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE:
+		_scsih_sas_device_status_change_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_SAS_DISCOVERY:
+		_scsih_sas_discovery_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE:
+		_scsih_sas_broadcast_primative_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
+		_scsih_sas_enclosure_dev_status_change_event(ioc,
+		    fw_event->VF_ID, fw_event->event_data);
+		break;
+	case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
+		_scsih_sas_ir_config_change_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_IR_VOLUME:
+		_scsih_sas_ir_volume_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_IR_PHYSICAL_DISK:
+		_scsih_sas_ir_physical_disk_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_IR_OPERATION_STATUS:
+		_scsih_sas_ir_operation_status_event(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	case MPI2_EVENT_TASK_SET_FULL:
+		_scsih_task_set_full(ioc, fw_event->VF_ID,
+		    fw_event->event_data);
+		break;
+	}
+	_scsih_fw_event_free(ioc, fw_event);
+}
+
+/**
+ * mpt2sas_scsih_event_callback - firmware event handler (called at ISR time)
+ * @ioc: per adapter object
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ * Context: interrupt.
+ *
+ * This function merely adds a new work task into ioc->firmware_event_thread.
+ * The tasks are worked from _firmware_event_work in user context.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_scsih_event_callback(struct MPT2SAS_ADAPTER *ioc, u8 VF_ID, u32 reply)
+{
+	struct fw_event_work *fw_event;
+	Mpi2EventNotificationReply_t *mpi_reply;
+	unsigned long flags;
+	u16 event;
+
+	/* events turned off due to host reset or driver unloading */
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	if (ioc->fw_events_off || ioc->remove_host) {
+		spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+		return;
+	}
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+
+	mpi_reply =  mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	event = le16_to_cpu(mpi_reply->Event);
+
+	switch (event) {
+	/* handle these */
+	case MPI2_EVENT_SAS_BROADCAST_PRIMITIVE:
+	{
+		Mpi2EventDataSasBroadcastPrimitive_t *baen_data =
+		    (Mpi2EventDataSasBroadcastPrimitive_t *)
+		    mpi_reply->EventData;
+
+		if (baen_data->Primitive !=
+		    MPI2_EVENT_PRIMITIVE_ASYNCHRONOUS_EVENT ||
+		    ioc->broadcast_aen_busy)
+			return;
+		ioc->broadcast_aen_busy = 1;
+		break;
+	}
+
+	case MPI2_EVENT_SAS_TOPOLOGY_CHANGE_LIST:
+		_scsih_check_topo_delete_events(ioc,
+		    (Mpi2EventDataSasTopologyChangeList_t *)
+		    mpi_reply->EventData);
+		break;
+
+	case MPI2_EVENT_SAS_DEVICE_STATUS_CHANGE:
+	case MPI2_EVENT_IR_OPERATION_STATUS:
+	case MPI2_EVENT_SAS_DISCOVERY:
+	case MPI2_EVENT_SAS_ENCL_DEVICE_STATUS_CHANGE:
+	case MPI2_EVENT_IR_VOLUME:
+	case MPI2_EVENT_IR_PHYSICAL_DISK:
+	case MPI2_EVENT_IR_CONFIGURATION_CHANGE_LIST:
+	case MPI2_EVENT_TASK_SET_FULL:
+		break;
+
+	default: /* ignore the rest */
+		return;
+	}
+
+	fw_event = kzalloc(sizeof(struct fw_event_work), GFP_ATOMIC);
+	if (!fw_event) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return;
+	}
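+	/* EventDataLength is in units of 32-bit dwords, hence the "* 4" */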
+	fw_event->event_data =
+	    kzalloc(mpi_reply->EventDataLength*4, GFP_ATOMIC);
+	if (!fw_event->event_data) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		kfree(fw_event);
+		return;
+	}
+
+	memcpy(fw_event->event_data, mpi_reply->EventData,
+	    mpi_reply->EventDataLength*4);
+	fw_event->ioc = ioc;
+	fw_event->VF_ID = VF_ID;
+	fw_event->event = event;
+	_scsih_fw_event_add(ioc, fw_event);
+}
+
+/* shost template */
+static struct scsi_host_template scsih_driver_template = {
+	.module				= THIS_MODULE,
+	.name				= "Fusion MPT SAS Host",
+	.proc_name			= MPT2SAS_DRIVER_NAME,
+	.queuecommand			= scsih_qcmd,
+	.target_alloc			= scsih_target_alloc,
+	.slave_alloc			= scsih_slave_alloc,
+	.slave_configure		= scsih_slave_configure,
+	.target_destroy			= scsih_target_destroy,
+	.slave_destroy			= scsih_slave_destroy,
+	.change_queue_depth 		= scsih_change_queue_depth,
+	.change_queue_type		= scsih_change_queue_type,
+	.eh_abort_handler		= scsih_abort,
+	.eh_device_reset_handler	= scsih_dev_reset,
+	.eh_host_reset_handler		= scsih_host_reset,
+	.bios_param			= scsih_bios_param,
+	.can_queue			= 1,
+	.this_id			= -1,
+	.sg_tablesize			= MPT2SAS_SG_DEPTH,
+	.max_sectors			= 8192,
+	.cmd_per_lun			= 7,
+	.use_clustering			= ENABLE_CLUSTERING,
+	.shost_attrs			= mpt2sas_host_attrs,
+	.sdev_attrs			= mpt2sas_dev_attrs,
+};
+
+/**
+ * _scsih_expander_node_remove - removing expander device from list.
+ * @ioc: per adapter object
+ * @sas_expander: the sas_node object
+ * Context: Calling function should acquire ioc->sas_node_lock.
+ *
+ * Removing object and freeing associated memory from the
+ * ioc->sas_expander_list.
+ *
+ * Return nothing.
+ */
+static void
+_scsih_expander_node_remove(struct MPT2SAS_ADAPTER *ioc,
+    struct _sas_node *sas_expander)
+{
+	struct _sas_port *mpt2sas_port;
+	struct _sas_device *sas_device;
+	struct _sas_node *expander_sibling;
+	unsigned long flags;
+
+	if (!sas_expander)
+		return;
+
+	/* remove sibling ports attached to this expander */
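+	/* the scan restarts from the head after every removal because
+	 * _scsih_remove_device deletes entries from the list being walked */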
+ retry_device_search:
+	list_for_each_entry(mpt2sas_port,
+	   &sas_expander->sas_port_list, port_list) {
+		if (mpt2sas_port->remote_identify.device_type ==
+		    SAS_END_DEVICE) {
+			spin_lock_irqsave(&ioc->sas_device_lock, flags);
+			sas_device =
+			    mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+			   mpt2sas_port->remote_identify.sas_address);
+			spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+			if (!sas_device)
+				continue;
+			_scsih_remove_device(ioc, sas_device->handle);
+			goto retry_device_search;
+		}
+	}
+
+ retry_expander_search:
+	list_for_each_entry(mpt2sas_port,
+	   &sas_expander->sas_port_list, port_list) {
+
+		if (mpt2sas_port->remote_identify.device_type ==
+		    MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER ||
+		    mpt2sas_port->remote_identify.device_type ==
+		    MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER) {
+
+			spin_lock_irqsave(&ioc->sas_node_lock, flags);
+			expander_sibling =
+			    mpt2sas_scsih_expander_find_by_sas_address(
+			    ioc, mpt2sas_port->remote_identify.sas_address);
+			spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+			if (!expander_sibling)
+				continue;
+			_scsih_expander_remove(ioc, expander_sibling->handle);
+			goto retry_expander_search;
+		}
+	}
+
+	mpt2sas_transport_port_remove(ioc, sas_expander->sas_address,
+	    sas_expander->parent_handle);
+
+	printk(MPT2SAS_INFO_FMT "expander_remove: handle"
+	   "(0x%04x), sas_addr(0x%016llx)\n", ioc->name,
+	    sas_expander->handle, (unsigned long long)
+	    sas_expander->sas_address);
+
+	list_del(&sas_expander->list);
+	kfree(sas_expander->phy);
+	kfree(sas_expander);
+}
+
+/**
+ * scsih_remove - detach and remove the scsi host
+ * @pdev: PCI device struct
+ *
+ * Return nothing.
+ */
+static void __devexit
+scsih_remove(struct pci_dev *pdev)
+{
+	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	struct _sas_port *mpt2sas_port;
+	struct _sas_device *sas_device;
+	struct _sas_node *expander_sibling;
+	struct workqueue_struct	*wq;
+	unsigned long flags;
+
+	ioc->remove_host = 1;
+	_scsih_fw_event_off(ioc);
+
+	spin_lock_irqsave(&ioc->fw_event_lock, flags);
+	wq = ioc->firmware_event_thread;
+	ioc->firmware_event_thread = NULL;
+	spin_unlock_irqrestore(&ioc->fw_event_lock, flags);
+	if (wq)
+		destroy_workqueue(wq);
+
+	/* free ports attached to the sas_host */
+ retry_again:
+	list_for_each_entry(mpt2sas_port,
+	   &ioc->sas_hba.sas_port_list, port_list) {
+		if (mpt2sas_port->remote_identify.device_type ==
+		    SAS_END_DEVICE) {
+			sas_device =
+			    mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+			   mpt2sas_port->remote_identify.sas_address);
+			if (sas_device) {
+				_scsih_remove_device(ioc, sas_device->handle);
+				goto retry_again;
+			}
+		} else {
+			expander_sibling =
+			    mpt2sas_scsih_expander_find_by_sas_address(ioc,
+			    mpt2sas_port->remote_identify.sas_address);
+			if (expander_sibling) {
+				_scsih_expander_remove(ioc,
+				    expander_sibling->handle);
+				goto retry_again;
+			}
+		}
+	}
+
+	/* free phys attached to the sas_host */
+	if (ioc->sas_hba.num_phys) {
+		kfree(ioc->sas_hba.phy);
+		ioc->sas_hba.phy = NULL;
+		ioc->sas_hba.num_phys = 0;
+	}
+
+	sas_remove_host(shost);
+	mpt2sas_base_detach(ioc);
+	list_del(&ioc->list);
+	scsi_remove_host(shost);
+	scsi_host_put(shost);
+}
+
+/**
+ * _scsih_probe_boot_devices - reports 1st device
+ * @ioc: per adapter object
+ *
+ * If specified in bios page 2, this routine reports the 1st device
+ * to scsi-ml or the sas transport layer for persistent boot device
+ * purposes.  Please refer to _scsih_determine_boot_device().
+ */
+static void
+_scsih_probe_boot_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	u8 is_raid;
+	void *device;
+	struct _sas_device *sas_device;
+	struct _raid_device *raid_device;
+	u16 handle, parent_handle;
+	u64 sas_address;
+	unsigned long flags;
+	int rc;
+
+	device = NULL;
+	if (ioc->req_boot_device.device) {
+		device =  ioc->req_boot_device.device;
+		is_raid = ioc->req_boot_device.is_raid;
+	} else if (ioc->req_alt_boot_device.device) {
+		device =  ioc->req_alt_boot_device.device;
+		is_raid = ioc->req_alt_boot_device.is_raid;
+	} else if (ioc->current_boot_device.device) {
+		device =  ioc->current_boot_device.device;
+		is_raid = ioc->current_boot_device.is_raid;
+	}
+
+	if (!device)
+		return;
+
+	if (is_raid) {
+		raid_device = device;
+		rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
+		    raid_device->id, 0);
+		if (rc)
+			_scsih_raid_device_remove(ioc, raid_device);
+	} else {
+		sas_device = device;
+		handle = sas_device->handle;
+		parent_handle = sas_device->parent_handle;
+		sas_address = sas_device->sas_address;
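+		/* move the boot device onto the main sas_device_list before
+		 * registering it with the sas transport layer */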
+		spin_lock_irqsave(&ioc->sas_device_lock, flags);
+		list_move_tail(&sas_device->list, &ioc->sas_device_list);
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+		if (!mpt2sas_transport_port_add(ioc, sas_device->handle,
+		    sas_device->parent_handle)) {
+			_scsih_sas_device_remove(ioc, sas_device);
+		} else if (!sas_device->starget) {
+			mpt2sas_transport_port_remove(ioc, sas_address,
+			    parent_handle);
+			_scsih_sas_device_remove(ioc, sas_device);
+		}
+	}
+}
+
+/**
+ * _scsih_probe_raid - reporting raid volumes to scsi-ml
+ * @ioc: per adapter object
+ *
+ * Called during initial loading of the driver.
+ */
+static void
+_scsih_probe_raid(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct _raid_device *raid_device, *raid_next;
+	int rc;
+
+	list_for_each_entry_safe(raid_device, raid_next,
+	    &ioc->raid_device_list, list) {
+		if (raid_device->starget)
+			continue;
+		rc = scsi_add_device(ioc->shost, RAID_CHANNEL,
+		    raid_device->id, 0);
+		if (rc)
+			_scsih_raid_device_remove(ioc, raid_device);
+	}
+}
+
+/**
+ * _scsih_probe_sas - reporting sas devices to sas transport
+ * @ioc: per adapter object
+ *
+ * Called during initial loading of the driver.
+ */
+static void
+_scsih_probe_sas(struct MPT2SAS_ADAPTER *ioc)
+{
+	struct _sas_device *sas_device, *next;
+	unsigned long flags;
+	u16 handle, parent_handle;
+	u64 sas_address;
+
+	/* SAS Device List */
+	list_for_each_entry_safe(sas_device, next, &ioc->sas_device_init_list,
+	    list) {
+		spin_lock_irqsave(&ioc->sas_device_lock, flags);
+		list_move_tail(&sas_device->list, &ioc->sas_device_list);
+		spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+		handle = sas_device->handle;
+		parent_handle = sas_device->parent_handle;
+		sas_address = sas_device->sas_address;
+		if (!mpt2sas_transport_port_add(ioc, handle, parent_handle)) {
+			_scsih_sas_device_remove(ioc, sas_device);
+		} else if (!sas_device->starget) {
+			mpt2sas_transport_port_remove(ioc, sas_address,
+			    parent_handle);
+			_scsih_sas_device_remove(ioc, sas_device);
+		}
+	}
+}
+
+/**
+ * _scsih_probe_devices - probing for devices
+ * @ioc: per adapter object
+ *
+ * Called during initial loading of the driver.
+ */
+static void
+_scsih_probe_devices(struct MPT2SAS_ADAPTER *ioc)
+{
+	u16 volume_mapping_flags =
+	    le16_to_cpu(ioc->ioc_pg8.IRVolumeMappingFlags) &
+	    MPI2_IOCPAGE8_IRFLAGS_MASK_VOLUME_MAPPING_MODE;
+
+	if (!(ioc->facts.ProtocolFlags & MPI2_IOCFACTS_PROTOCOL_SCSI_INITIATOR))
+		return;  /* return when IOC doesn't support initiator mode */
+
+	_scsih_probe_boot_devices(ioc);
+
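+	/* with IR firmware the probe order follows the volume mapping mode:
+	 * high volume mapping exposes sas devices first, otherwise the raid
+	 * volumes are reported first */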
+	if (ioc->ir_firmware) {
+		if ((volume_mapping_flags &
+		     MPI2_IOCPAGE8_IRFLAGS_HIGH_VOLUME_MAPPING)) {
+			_scsih_probe_sas(ioc);
+			_scsih_probe_raid(ioc);
+		} else {
+			_scsih_probe_raid(ioc);
+			_scsih_probe_sas(ioc);
+		}
+	} else
+		_scsih_probe_sas(ioc);
+}
+
+/**
+ * scsih_probe - attach and add scsi host
+ * @pdev: PCI device struct
+ * @id: pci device id
+ *
+ * Returns 0 success, anything else error.
+ */
+static int
+scsih_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct MPT2SAS_ADAPTER *ioc;
+	struct Scsi_Host *shost;
+
+	shost = scsi_host_alloc(&scsih_driver_template,
+	    sizeof(struct MPT2SAS_ADAPTER));
+	if (!shost)
+		return -ENODEV;
+
+	/* init local params */
+	ioc = shost_priv(shost);
+	memset(ioc, 0, sizeof(struct MPT2SAS_ADAPTER));
+	INIT_LIST_HEAD(&ioc->list);
+	list_add_tail(&ioc->list, &mpt2sas_ioc_list);
+	ioc->shost = shost;
+	ioc->id = mpt_ids++;
+	sprintf(ioc->name, "%s%d", MPT2SAS_DRIVER_NAME, ioc->id);
+	ioc->pdev = pdev;
+	ioc->scsi_io_cb_idx = scsi_io_cb_idx;
+	ioc->tm_cb_idx = tm_cb_idx;
+	ioc->ctl_cb_idx = ctl_cb_idx;
+	ioc->base_cb_idx = base_cb_idx;
+	ioc->transport_cb_idx = transport_cb_idx;
+	ioc->config_cb_idx = config_cb_idx;
+	ioc->logging_level = logging_level;
+	/* misc semaphores and spin locks */
+	spin_lock_init(&ioc->ioc_reset_in_progress_lock);
+	spin_lock_init(&ioc->scsi_lookup_lock);
+	spin_lock_init(&ioc->sas_device_lock);
+	spin_lock_init(&ioc->sas_node_lock);
+	spin_lock_init(&ioc->fw_event_lock);
+	spin_lock_init(&ioc->raid_device_lock);
+
+	INIT_LIST_HEAD(&ioc->sas_device_list);
+	INIT_LIST_HEAD(&ioc->sas_device_init_list);
+	INIT_LIST_HEAD(&ioc->sas_expander_list);
+	INIT_LIST_HEAD(&ioc->fw_event_list);
+	INIT_LIST_HEAD(&ioc->raid_device_list);
+	INIT_LIST_HEAD(&ioc->sas_hba.sas_port_list);
+
+	/* init shost parameters */
+	shost->max_cmd_len = 16;
+	shost->max_lun = max_lun;
+	shost->transportt = mpt2sas_transport_template;
+	shost->unique_id = ioc->id;
+
+	if ((scsi_add_host(shost, &pdev->dev))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		list_del(&ioc->list);
+		goto out_add_shost_fail;
+	}
+
+	/* event thread */
+	snprintf(ioc->firmware_event_name, sizeof(ioc->firmware_event_name),
+	    "fw_event%d", ioc->id);
+	ioc->firmware_event_thread = create_singlethread_workqueue(
+	    ioc->firmware_event_name);
+	if (!ioc->firmware_event_thread) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_thread_fail;
+	}
+
+	ioc->wait_for_port_enable_to_complete = 1;
+	if ((mpt2sas_base_attach(ioc))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_attach_fail;
+	}
+
+	ioc->wait_for_port_enable_to_complete = 0;
+	_scsih_probe_devices(ioc);
+	return 0;
+
+ out_attach_fail:
+	destroy_workqueue(ioc->firmware_event_thread);
+ out_thread_fail:
+	list_del(&ioc->list);
+	scsi_remove_host(shost);
+ out_add_shost_fail:
+	return -ENODEV;
+}
+
+#ifdef CONFIG_PM
+/**
+ * scsih_suspend - power management suspend main entry point
+ * @pdev: PCI device struct
+ * @state: PM state change to (usually PCI_D3)
+ *
+ * Returns 0 success, anything else error.
+ */
+static int
+scsih_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	u32 device_state;
+
+	flush_scheduled_work();
+	scsi_block_requests(shost);
+	device_state = pci_choose_state(pdev, state);
+	printk(MPT2SAS_INFO_FMT "pdev=0x%p, slot=%s, entering "
+	    "operating state [D%d]\n", ioc->name, pdev,
+	    pci_name(pdev), device_state);
+
+	mpt2sas_base_free_resources(ioc);
+	pci_save_state(pdev);
+	pci_disable_device(pdev);
+	pci_set_power_state(pdev, device_state);
+	return 0;
+}
+
+/**
+ * scsih_resume - power management resume main entry point
+ * @pdev: PCI device struct
+ *
+ * Returns 0 success, anything else error.
+ */
+static int
+scsih_resume(struct pci_dev *pdev)
+{
+	struct Scsi_Host *shost = pci_get_drvdata(pdev);
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	u32 device_state = pdev->current_state;
+	int r;
+
+	printk(MPT2SAS_INFO_FMT "pdev=0x%p, slot=%s, previous "
+	    "operating state [D%d]\n", ioc->name, pdev,
+	    pci_name(pdev), device_state);
+
+	pci_set_power_state(pdev, PCI_D0);
+	pci_enable_wake(pdev, PCI_D0, 0);
+	pci_restore_state(pdev);
+	ioc->pdev = pdev;
+	r = mpt2sas_base_map_resources(ioc);
+	if (r)
+		return r;
+
+	mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP, SOFT_RESET);
+	scsi_unblock_requests(shost);
+	return 0;
+}
+#endif /* CONFIG_PM */
+
+
+static struct pci_driver scsih_driver = {
+	.name		= MPT2SAS_DRIVER_NAME,
+	.id_table	= scsih_pci_table,
+	.probe		= scsih_probe,
+	.remove		= __devexit_p(scsih_remove),
+#ifdef CONFIG_PM
+	.suspend	= scsih_suspend,
+	.resume		= scsih_resume,
+#endif
+};
+
+
+/**
+ * scsih_init - main entry point for this driver.
+ *
+ * Returns 0 success, anything else error.
+ */
+static int __init
+scsih_init(void)
+{
+	int error;
+
+	mpt_ids = 0;
+	printk(KERN_INFO "%s version %s loaded\n", MPT2SAS_DRIVER_NAME,
+	    MPT2SAS_DRIVER_VERSION);
+
+	mpt2sas_transport_template =
+	    sas_attach_transport(&mpt2sas_transport_functions);
+	if (!mpt2sas_transport_template)
+		return -ENODEV;
+
+	mpt2sas_base_initialize_callback_handler();
+
+	/* queuecommand callback handler */
+	scsi_io_cb_idx = mpt2sas_base_register_callback_handler(scsih_io_done);
+
+	/* task management callback handler */
+	tm_cb_idx = mpt2sas_base_register_callback_handler(scsih_tm_done);
+
+	/* base internal commands callback handler */
+	base_cb_idx = mpt2sas_base_register_callback_handler(mpt2sas_base_done);
+
+	/* transport internal commands callback handler */
+	transport_cb_idx = mpt2sas_base_register_callback_handler(
+	    mpt2sas_transport_done);
+
+	/* configuration page API internal commands callback handler */
+	config_cb_idx = mpt2sas_base_register_callback_handler(
+	    mpt2sas_config_done);
+
+	/* ctl module callback handler */
+	ctl_cb_idx = mpt2sas_base_register_callback_handler(mpt2sas_ctl_done);
+
+	mpt2sas_ctl_init();
+
+	error = pci_register_driver(&scsih_driver);
+	if (error)
+		sas_release_transport(mpt2sas_transport_template);
+
+	return error;
+}
+
+/**
+ * scsih_exit - exit point for this driver (when it is a module).
+ *
+ * Return nothing.
+ */
+static void __exit
+scsih_exit(void)
+{
+	printk(KERN_INFO "mpt2sas version %s unloading\n",
+	    MPT2SAS_DRIVER_VERSION);
+
+	pci_unregister_driver(&scsih_driver);
+
+	sas_release_transport(mpt2sas_transport_template);
+	mpt2sas_base_release_callback_handler(scsi_io_cb_idx);
+	mpt2sas_base_release_callback_handler(tm_cb_idx);
+	mpt2sas_base_release_callback_handler(base_cb_idx);
+	mpt2sas_base_release_callback_handler(transport_cb_idx);
+	mpt2sas_base_release_callback_handler(config_cb_idx);
+	mpt2sas_base_release_callback_handler(ctl_cb_idx);
+
+	mpt2sas_ctl_exit();
+}
+
+module_init(scsih_init);
+module_exit(scsih_exit);
diff --git a/drivers/scsi/mpt2sas/mpt2sas_transport.c b/drivers/scsi/mpt2sas/mpt2sas_transport.c
new file mode 100644
index 0000000..e03dc0b
--- /dev/null
+++ b/drivers/scsi/mpt2sas/mpt2sas_transport.c
@@ -0,0 +1,1211 @@
+/*
+ * SAS Transport Layer for MPT (Message Passing Technology) based controllers
+ *
+ * This code is based on drivers/scsi/mpt2sas/mpt2_transport.c
+ * Copyright (C) 2007-2008  LSI Corporation
+ *  (mailto:DL-MPTFusionLinux@lsi.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * NO WARRANTY
+ * THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ * CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ * LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ * solely responsible for determining the appropriateness of using and
+ * distributing the Program and assumes all risks associated with its
+ * exercise of rights under this Agreement, including but not limited to
+ * the risks and costs of program errors, damage to or loss of data,
+ * programs or equipment, and unavailability or interruption of operations.
+
+ * DISCLAIMER OF LIABILITY
+ * NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ * TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ * USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ * HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301,
+ * USA.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/workqueue.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_transport_sas.h>
+#include <scsi/scsi_dbg.h>
+
+#include "mpt2sas_base.h"
+
+/**
+ * _transport_sas_node_find_by_handle - sas node search
+ * @ioc: per adapter object
+ * @handle: expander or hba handle (assigned by firmware)
+ * Context: Calling function should acquire ioc->sas_node_lock.
+ *
+ * Search for either hba phys or expander device based on handle, then returns
+ * the sas_node object.
+ */
+static struct _sas_node *
+_transport_sas_node_find_by_handle(struct MPT2SAS_ADAPTER *ioc, u16 handle)
+{
+	int i;
+
+	for (i = 0; i < ioc->sas_hba.num_phys; i++)
+		if (ioc->sas_hba.phy[i].handle == handle)
+			return &ioc->sas_hba;
+
+	return mpt2sas_scsih_expander_find_by_handle(ioc, handle);
+}
+
+/**
+ * _transport_convert_phy_link_rate - convert firmware link rate to transport form
+ * @link_rate: link rate returned from mpt firmware
+ *
+ * Convert link_rate from mpi fusion into sas_transport form.
+ */
+static enum sas_linkrate
+_transport_convert_phy_link_rate(u8 link_rate)
+{
+	enum sas_linkrate rc;
+
+	switch (link_rate) {
+	case MPI2_SAS_NEG_LINK_RATE_1_5:
+		rc = SAS_LINK_RATE_1_5_GBPS;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_3_0:
+		rc = SAS_LINK_RATE_3_0_GBPS;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_6_0:
+		rc = SAS_LINK_RATE_6_0_GBPS;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_PHY_DISABLED:
+		rc = SAS_PHY_DISABLED;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_NEGOTIATION_FAILED:
+		rc = SAS_LINK_RATE_FAILED;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_PORT_SELECTOR:
+		rc = SAS_SATA_PORT_SELECTOR;
+		break;
+	case MPI2_SAS_NEG_LINK_RATE_SMP_RESET_IN_PROGRESS:
+		rc = SAS_PHY_RESET_IN_PROGRESS;
+		break;
+	default:
+	case MPI2_SAS_NEG_LINK_RATE_SATA_OOB_COMPLETE:
+	case MPI2_SAS_NEG_LINK_RATE_UNKNOWN_LINK_RATE:
+		rc = SAS_LINK_RATE_UNKNOWN;
+		break;
+	}
+	return rc;
+}
+
+/**
+ * _transport_set_identify - set identify for phys and end devices
+ * @ioc: per adapter object
+ * @handle: device handle
+ * @identify: sas identify info
+ *
+ * Populates sas identify info.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+_transport_set_identify(struct MPT2SAS_ADAPTER *ioc, u16 handle,
+    struct sas_identify *identify)
+{
+	Mpi2SasDevicePage0_t sas_device_pg0;
+	Mpi2ConfigReply_t mpi_reply;
+	u32 device_info;
+	u32 ioc_status;
+
+	if ((mpt2sas_config_get_sas_device_pg0(ioc, &mpi_reply, &sas_device_pg0,
+	    MPI2_SAS_DEVICE_PGAD_FORM_HANDLE, handle))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	ioc_status = le16_to_cpu(mpi_reply.IOCStatus) &
+	    MPI2_IOCSTATUS_MASK;
+	if (ioc_status != MPI2_IOCSTATUS_SUCCESS) {
+		printk(MPT2SAS_ERR_FMT "handle(0x%04x), ioc_status(0x%04x)"
+		    "\nfailure at %s:%d/%s()!\n", ioc->name, handle, ioc_status,
+		     __FILE__, __LINE__, __func__);
+		return -1;
+	}
+
+	memset(identify, 0, sizeof(struct sas_identify));
+	device_info = le32_to_cpu(sas_device_pg0.DeviceInfo);
+
+	/* sas_address */
+	identify->sas_address = le64_to_cpu(sas_device_pg0.SASAddress);
+
+	/* device_type */
+	switch (device_info & MPI2_SAS_DEVICE_INFO_MASK_DEVICE_TYPE) {
+	case MPI2_SAS_DEVICE_INFO_NO_DEVICE:
+		identify->device_type = SAS_PHY_UNUSED;
+		break;
+	case MPI2_SAS_DEVICE_INFO_END_DEVICE:
+		identify->device_type = SAS_END_DEVICE;
+		break;
+	case MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER:
+		identify->device_type = SAS_EDGE_EXPANDER_DEVICE;
+		break;
+	case MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER:
+		identify->device_type = SAS_FANOUT_EXPANDER_DEVICE;
+		break;
+	}
+
+	/* initiator_port_protocols */
+	if (device_info & MPI2_SAS_DEVICE_INFO_SSP_INITIATOR)
+		identify->initiator_port_protocols |= SAS_PROTOCOL_SSP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_STP_INITIATOR)
+		identify->initiator_port_protocols |= SAS_PROTOCOL_STP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_SMP_INITIATOR)
+		identify->initiator_port_protocols |= SAS_PROTOCOL_SMP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_SATA_HOST)
+		identify->initiator_port_protocols |= SAS_PROTOCOL_SATA;
+
+	/* target_port_protocols */
+	if (device_info & MPI2_SAS_DEVICE_INFO_SSP_TARGET)
+		identify->target_port_protocols |= SAS_PROTOCOL_SSP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_STP_TARGET)
+		identify->target_port_protocols |= SAS_PROTOCOL_STP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_SMP_TARGET)
+		identify->target_port_protocols |= SAS_PROTOCOL_SMP;
+	if (device_info & MPI2_SAS_DEVICE_INFO_SATA_DEVICE)
+		identify->target_port_protocols |= SAS_PROTOCOL_SATA;
+
+	return 0;
+}
+
+/**
+ * mpt2sas_transport_done -  internal transport layer callback handler.
+ * @ioc: per adapter object
+ * @smid: system request message index
+ * @VF_ID: virtual function id
+ * @reply: reply message frame(lower 32bit addr)
+ *
+ * Callback handler when sending internal generated transport cmds.
+ * The callback index passed is `ioc->transport_cb_idx`
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_transport_done(struct MPT2SAS_ADAPTER *ioc, u16 smid, u8 VF_ID,
+    u32 reply)
+{
+	MPI2DefaultReply_t *mpi_reply;
+
+	mpi_reply =  mpt2sas_base_get_reply_virt_addr(ioc, reply);
+	if (ioc->transport_cmds.status == MPT2_CMD_NOT_USED)
+		return;
+	if (ioc->transport_cmds.smid != smid)
+		return;
+	ioc->transport_cmds.status |= MPT2_CMD_COMPLETE;
+	if (mpi_reply) {
+		memcpy(ioc->transport_cmds.reply, mpi_reply,
+		    mpi_reply->MsgLength*4);
+		ioc->transport_cmds.status |= MPT2_CMD_REPLY_VALID;
+	}
+	ioc->transport_cmds.status &= ~MPT2_CMD_PENDING;
+	complete(&ioc->transport_cmds.done);
+}
+
+/* report manufacture request structure */
+struct rep_manu_request {
+	u8 smp_frame_type;
+	u8 function;
+	u8 reserved;
+	u8 request_length;
+};
+
+/* report manufacture reply structure */
+struct rep_manu_reply {
+	u8 smp_frame_type; /* 0x41 */
+	u8 function; /* 0x01 */
+	u8 function_result;
+	u8 response_length;
+	u16 expander_change_count;
+	u8 reserved0[2];
+	u8 sas_format:1;
+	u8 reserved1:7;
+	u8 reserved2[3];
+	u8 vendor_id[SAS_EXPANDER_VENDOR_ID_LEN];
+	u8 product_id[SAS_EXPANDER_PRODUCT_ID_LEN];
+	u8 product_rev[SAS_EXPANDER_PRODUCT_REV_LEN];
+	u8 component_vendor_id[SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN];
+	u16 component_id;
+	u8 component_revision_id;
+	u8 reserved3;
+	u8 vendor_specific[8];
+};
+
+/**
+ * transport_expander_report_manufacture - obtain SMP report_manufacture
+ * @ioc: per adapter object
+ * @sas_address: expander sas address
+ * @edev: the sas_expander_device object
+ *
+ * Fills in the sas_expander_device object when SMP port is created.
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+transport_expander_report_manufacture(struct MPT2SAS_ADAPTER *ioc,
+    u64 sas_address, struct sas_expander_device *edev)
+{
+	Mpi2SmpPassthroughRequest_t *mpi_request;
+	Mpi2SmpPassthroughReply_t *mpi_reply;
+	struct rep_manu_reply *manufacture_reply;
+	struct rep_manu_request *manufacture_request;
+	int rc;
+	u16 smid;
+	u32 ioc_state;
+	unsigned long timeleft;
+	void *psge;
+	u32 sgl_flags;
+	u8 issue_reset = 0;
+	unsigned long flags;
+	void *data_out = NULL;
+	dma_addr_t data_out_dma;
+	u32 sz;
+	u64 *sas_address_le;
+	u16 wait_state_count;
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->ioc_reset_in_progress) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		printk(MPT2SAS_INFO_FMT "%s: host reset in progress!\n",
+		    ioc->name, __func__);
+		return -EFAULT;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	mutex_lock(&ioc->transport_cmds.mutex);
+
+	if (ioc->transport_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: transport_cmds in use\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+	ioc->transport_cmds.status = MPT2_CMD_PENDING;
+
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == 10) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			rc = -EFAULT;
+			goto out;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+	if (wait_state_count)
+		printk(MPT2SAS_INFO_FMT "%s: ioc is operational\n",
+		    ioc->name, __func__);
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->transport_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->transport_cmds.smid = smid;
+
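+	/* a single DMA buffer holds the SMP request immediately followed by
+	 * the space reserved for the reply */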
+	sz = sizeof(struct rep_manu_request) + sizeof(struct rep_manu_reply);
+	data_out = pci_alloc_consistent(ioc->pdev, sz, &data_out_dma);
+
+	if (!data_out) {
+		printk(KERN_ERR "failure at %s:%d/%s()!\n", __FILE__,
+		    __LINE__, __func__);
+		rc = -ENOMEM;
+		mpt2sas_base_free_smid(ioc, smid);
+		goto out;
+	}
+
+	manufacture_request = data_out;
+	manufacture_request->smp_frame_type = 0x40;
+	manufacture_request->function = 1;
+	manufacture_request->reserved = 0;
+	manufacture_request->request_length = 0;
+
+	memset(mpi_request, 0, sizeof(Mpi2SmpPassthroughRequest_t));
+	mpi_request->Function = MPI2_FUNCTION_SMP_PASSTHROUGH;
+	mpi_request->PhysicalPort = 0xFF;
+	sas_address_le = (u64 *)&mpi_request->SASAddress;
+	*sas_address_le = cpu_to_le64(sas_address);
+	mpi_request->RequestDataLength = sizeof(struct rep_manu_request);
+	psge = &mpi_request->SGL;
+
+	/* WRITE sgel first */
+	sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+	    MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC);
+	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+	ioc->base_add_sg_single(psge, sgl_flags |
+	    sizeof(struct rep_manu_request), data_out_dma);
+
+	/* incr sgel */
+	psge += ioc->sge_size;
+
+	/* READ sgel last */
+	sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+	    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER |
+	    MPI2_SGE_FLAGS_END_OF_LIST);
+	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+	ioc->base_add_sg_single(psge, sgl_flags |
+	    sizeof(struct rep_manu_reply), data_out_dma +
+	    sizeof(struct rep_manu_request));
+
+	dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT "report_manufacture - "
+	    "send to sas_addr(0x%016llx)\n", ioc->name,
+	    (unsigned long long)sas_address));
+	mpt2sas_base_put_smid_default(ioc, smid, 0 /* VF_ID */);
+	timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done,
+	    10*HZ);
+
+	if (!(ioc->transport_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2SmpPassthroughRequest_t)/4);
+		if (!(ioc->transport_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT "report_manufacture - "
+	    "complete\n", ioc->name));
+
+	if (ioc->transport_cmds.status & MPT2_CMD_REPLY_VALID) {
+		u8 *tmp;
+
+		mpi_reply = ioc->transport_cmds.reply;
+
+		dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "report_manufacture - reply data transfer size(%d)\n",
+		    ioc->name, le16_to_cpu(mpi_reply->ResponseDataLength)));
+
+		if (le16_to_cpu(mpi_reply->ResponseDataLength) !=
+		    sizeof(struct rep_manu_reply))
+			goto out;
+
+		manufacture_reply = data_out + sizeof(struct rep_manu_request);
+		strncpy(edev->vendor_id, manufacture_reply->vendor_id,
+		     SAS_EXPANDER_VENDOR_ID_LEN);
+		strncpy(edev->product_id, manufacture_reply->product_id,
+		     SAS_EXPANDER_PRODUCT_ID_LEN);
+		strncpy(edev->product_rev, manufacture_reply->product_rev,
+		     SAS_EXPANDER_PRODUCT_REV_LEN);
+		edev->level = manufacture_reply->sas_format;
+		if (manufacture_reply->sas_format) {
+			strncpy(edev->component_vendor_id,
+			    manufacture_reply->component_vendor_id,
+			     SAS_EXPANDER_COMPONENT_VENDOR_ID_LEN);
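+			/* component_id is big-endian in the SMP reply;
+			 * assemble it into host order a byte at a time */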
+			tmp = (u8 *)&manufacture_reply->component_id;
+			edev->component_id = tmp[0] << 8 | tmp[1];
+			edev->component_revision_id =
+			    manufacture_reply->component_revision_id;
+		}
+	} else
+		dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "report_manufacture - no reply\n", ioc->name));
+
+ issue_host_reset:
+	if (issue_reset)
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+ out:
+	ioc->transport_cmds.status = MPT2_CMD_NOT_USED;
+	if (data_out)
+		pci_free_consistent(ioc->pdev, sz, data_out, data_out_dma);
+
+	mutex_unlock(&ioc->transport_cmds.mutex);
+	return rc;
+}
+
+/**
+ * mpt2sas_transport_port_add - insert port to the list
+ * @ioc: per adapter object
+ * @handle: handle of attached device
+ * @parent_handle: parent handle(either hba or expander)
+ * Context: This function will acquire ioc->sas_node_lock.
+ *
+ * Adding new port object to the sas_node->sas_port_list.
+ *
+ * Returns mpt2sas_port.
+ */
+struct _sas_port *
+mpt2sas_transport_port_add(struct MPT2SAS_ADAPTER *ioc, u16 handle,
+    u16 parent_handle)
+{
+	struct _sas_phy *mpt2sas_phy, *next;
+	struct _sas_port *mpt2sas_port;
+	unsigned long flags;
+	struct _sas_node *sas_node;
+	struct sas_rphy *rphy;
+	int i;
+	struct sas_port *port;
+
+	if (!parent_handle)
+		return NULL;
+
+	mpt2sas_port = kzalloc(sizeof(struct _sas_port),
+	    GFP_KERNEL);
+	if (!mpt2sas_port) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return NULL;
+	}
+
+	INIT_LIST_HEAD(&mpt2sas_port->port_list);
+	INIT_LIST_HEAD(&mpt2sas_port->phy_list);
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_node = _transport_sas_node_find_by_handle(ioc, parent_handle);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+
+	if (!sas_node) {
+		printk(MPT2SAS_ERR_FMT "%s: Could not find parent(0x%04x)!\n",
+		    ioc->name, __func__, parent_handle);
+		goto out_fail;
+	}
+
+	mpt2sas_port->handle = parent_handle;
+	mpt2sas_port->sas_address = sas_node->sas_address;
+	if ((_transport_set_identify(ioc, handle,
+	    &mpt2sas_port->remote_identify))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_fail;
+	}
+
+	if (mpt2sas_port->remote_identify.device_type == SAS_PHY_UNUSED) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_fail;
+	}
+
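+	/* gather every parent phy whose attached sas address matches this
+	 * port; more than one matching phy forms a wide port */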
+	for (i = 0; i < sas_node->num_phys; i++) {
+		if (sas_node->phy[i].remote_identify.sas_address !=
+		    mpt2sas_port->remote_identify.sas_address)
+			continue;
+		list_add_tail(&sas_node->phy[i].port_siblings,
+		    &mpt2sas_port->phy_list);
+		mpt2sas_port->num_phys++;
+	}
+
+	if (!mpt2sas_port->num_phys) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_fail;
+	}
+
+	port = sas_port_alloc_num(sas_node->parent_dev);
+	if ((sas_port_add(port))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		goto out_fail;
+	}
+
+	list_for_each_entry(mpt2sas_phy, &mpt2sas_port->phy_list,
+	    port_siblings) {
+		if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+			dev_printk(KERN_INFO, &port->dev, "add: handle(0x%04x)"
+			    ", sas_addr(0x%016llx), phy(%d)\n", handle,
+			    (unsigned long long)
+			    mpt2sas_port->remote_identify.sas_address,
+			    mpt2sas_phy->phy_id);
+		sas_port_add_phy(port, mpt2sas_phy->phy);
+	}
+
+	mpt2sas_port->port = port;
+	if (mpt2sas_port->remote_identify.device_type == SAS_END_DEVICE)
+		rphy = sas_end_device_alloc(port);
+	else
+		rphy = sas_expander_alloc(port,
+		    mpt2sas_port->remote_identify.device_type);
+
+	rphy->identify = mpt2sas_port->remote_identify;
+	if ((sas_rphy_add(rphy))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+	}
+	if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+		dev_printk(KERN_INFO, &rphy->dev, "add: handle(0x%04x), "
+		    "sas_addr(0x%016llx)\n", handle,
+		    (unsigned long long)
+		    mpt2sas_port->remote_identify.sas_address);
+	mpt2sas_port->rphy = rphy;
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	list_add_tail(&mpt2sas_port->port_list, &sas_node->sas_port_list);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+
+	/* fill in report manufacture */
+	if (mpt2sas_port->remote_identify.device_type ==
+	    MPI2_SAS_DEVICE_INFO_EDGE_EXPANDER ||
+	    mpt2sas_port->remote_identify.device_type ==
+	    MPI2_SAS_DEVICE_INFO_FANOUT_EXPANDER)
+		transport_expander_report_manufacture(ioc,
+		    mpt2sas_port->remote_identify.sas_address,
+		    rphy_to_expander_device(rphy));
+
+	return mpt2sas_port;
+
+ out_fail:
+	list_for_each_entry_safe(mpt2sas_phy, next, &mpt2sas_port->phy_list,
+	    port_siblings)
+		list_del(&mpt2sas_phy->port_siblings);
+	kfree(mpt2sas_port);
+	return NULL;
+}
+
+/**
+ * mpt2sas_transport_port_remove - remove port from the list
+ * @ioc: per adapter object
+ * @sas_address: sas address of attached device
+ * @parent_handle: handle to the upstream parent(either hba or expander)
+ * Context: This function will acquire ioc->sas_node_lock.
+ *
+ * Removing object and freeing associated memory from the
+ * ioc->sas_port_list.
+ *
+ * Return nothing.
+ */
+void
+mpt2sas_transport_port_remove(struct MPT2SAS_ADAPTER *ioc, u64 sas_address,
+    u16 parent_handle)
+{
+	int i;
+	unsigned long flags;
+	struct _sas_port *mpt2sas_port, *next;
+	struct _sas_node *sas_node;
+	u8 found = 0;
+	struct _sas_phy *mpt2sas_phy, *next_phy;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_node = _transport_sas_node_find_by_handle(ioc, parent_handle);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+	if (!sas_node)
+		return;
+	list_for_each_entry_safe(mpt2sas_port, next, &sas_node->sas_port_list,
+	    port_list) {
+		if (mpt2sas_port->remote_identify.sas_address != sas_address)
+			continue;
+		found = 1;
+		list_del(&mpt2sas_port->port_list);
+		goto out;
+	}
+ out:
+	if (!found)
+		return;
+
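+	/* clear the cached remote identify info on every parent phy that was
+	 * pointing at the removed device */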
+	for (i = 0; i < sas_node->num_phys; i++) {
+		if (sas_node->phy[i].remote_identify.sas_address == sas_address)
+			memset(&sas_node->phy[i].remote_identify, 0,
+			    sizeof(struct sas_identify));
+	}
+
+	list_for_each_entry_safe(mpt2sas_phy, next_phy,
+	    &mpt2sas_port->phy_list, port_siblings) {
+		if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+			dev_printk(KERN_INFO, &mpt2sas_port->port->dev,
+			    "remove: parent_handle(0x%04x), "
+			    "sas_addr(0x%016llx), phy(%d)\n", parent_handle,
+			    (unsigned long long)
+			    mpt2sas_port->remote_identify.sas_address,
+			    mpt2sas_phy->phy_id);
+		sas_port_delete_phy(mpt2sas_port->port, mpt2sas_phy->phy);
+		list_del(&mpt2sas_phy->port_siblings);
+	}
+	sas_port_delete(mpt2sas_port->port);
+	kfree(mpt2sas_port);
+}
+
+/**
+ * mpt2sas_transport_add_host_phy - report sas_host phy to transport
+ * @ioc: per adapter object
+ * @mpt2sas_phy: mpt2sas per phy object
+ * @phy_pg0: sas phy page 0
+ * @parent_dev: parent device class object
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_transport_add_host_phy(struct MPT2SAS_ADAPTER *ioc, struct _sas_phy
+    *mpt2sas_phy, Mpi2SasPhyPage0_t phy_pg0, struct device *parent_dev)
+{
+	struct sas_phy *phy;
+	int phy_index = mpt2sas_phy->phy_id;
+
+	INIT_LIST_HEAD(&mpt2sas_phy->port_siblings);
+	phy = sas_phy_alloc(parent_dev, phy_index);
+	if (!phy) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+	if ((_transport_set_identify(ioc, mpt2sas_phy->handle,
+	    &mpt2sas_phy->identify))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+	phy->identify = mpt2sas_phy->identify;
+	mpt2sas_phy->attached_handle = le16_to_cpu(phy_pg0.AttachedDevHandle);
+	if (mpt2sas_phy->attached_handle)
+		_transport_set_identify(ioc, mpt2sas_phy->attached_handle,
+		    &mpt2sas_phy->remote_identify);
+	phy->identify.phy_identifier = mpt2sas_phy->phy_id;
+	phy->negotiated_linkrate = _transport_convert_phy_link_rate(
+	    phy_pg0.NegotiatedLinkRate & MPI2_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
+	phy->minimum_linkrate_hw = _transport_convert_phy_link_rate(
+	    phy_pg0.HwLinkRate & MPI2_SAS_HWRATE_MIN_RATE_MASK);
+	phy->maximum_linkrate_hw = _transport_convert_phy_link_rate(
+	    phy_pg0.HwLinkRate >> 4);
+	phy->minimum_linkrate = _transport_convert_phy_link_rate(
+	    phy_pg0.ProgrammedLinkRate & MPI2_SAS_PRATE_MIN_RATE_MASK);
+	phy->maximum_linkrate = _transport_convert_phy_link_rate(
+	    phy_pg0.ProgrammedLinkRate >> 4);
+
+	if ((sas_phy_add(phy))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		sas_phy_free(phy);
+		return -1;
+	}
+	if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+		dev_printk(KERN_INFO, &phy->dev,
+		    "add: handle(0x%04x), sas_addr(0x%016llx)\n"
+		    "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
+		    mpt2sas_phy->handle, (unsigned long long)
+		    mpt2sas_phy->identify.sas_address,
+		    mpt2sas_phy->attached_handle,
+		    (unsigned long long)
+		    mpt2sas_phy->remote_identify.sas_address);
+	mpt2sas_phy->phy = phy;
+	return 0;
+}
+
+
+/**
+ * mpt2sas_transport_add_expander_phy - report expander phy to transport
+ * @ioc: per adapter object
+ * @mpt2sas_phy: mpt2sas per phy object
+ * @expander_pg1: expander page 1
+ * @parent_dev: parent device class object
+ *
+ * Returns 0 for success, non-zero for failure.
+ */
+int
+mpt2sas_transport_add_expander_phy(struct MPT2SAS_ADAPTER *ioc, struct _sas_phy
+    *mpt2sas_phy, Mpi2ExpanderPage1_t expander_pg1, struct device *parent_dev)
+{
+	struct sas_phy *phy;
+	int phy_index = mpt2sas_phy->phy_id;
+
+	INIT_LIST_HEAD(&mpt2sas_phy->port_siblings);
+	phy = sas_phy_alloc(parent_dev, phy_index);
+	if (!phy) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+	if ((_transport_set_identify(ioc, mpt2sas_phy->handle,
+	    &mpt2sas_phy->identify))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -1;
+	}
+	phy->identify = mpt2sas_phy->identify;
+	mpt2sas_phy->attached_handle =
+	    le16_to_cpu(expander_pg1.AttachedDevHandle);
+	if (mpt2sas_phy->attached_handle)
+		_transport_set_identify(ioc, mpt2sas_phy->attached_handle,
+		    &mpt2sas_phy->remote_identify);
+	phy->identify.phy_identifier = mpt2sas_phy->phy_id;
+	phy->negotiated_linkrate = _transport_convert_phy_link_rate(
+	    expander_pg1.NegotiatedLinkRate &
+	    MPI2_SAS_NEG_LINK_RATE_MASK_PHYSICAL);
+	phy->minimum_linkrate_hw = _transport_convert_phy_link_rate(
+	    expander_pg1.HwLinkRate & MPI2_SAS_HWRATE_MIN_RATE_MASK);
+	phy->maximum_linkrate_hw = _transport_convert_phy_link_rate(
+	    expander_pg1.HwLinkRate >> 4);
+	phy->minimum_linkrate = _transport_convert_phy_link_rate(
+	    expander_pg1.ProgrammedLinkRate & MPI2_SAS_PRATE_MIN_RATE_MASK);
+	phy->maximum_linkrate = _transport_convert_phy_link_rate(
+	    expander_pg1.ProgrammedLinkRate >> 4);
+
+	if ((sas_phy_add(phy))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		sas_phy_free(phy);
+		return -1;
+	}
+	if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+		dev_printk(KERN_INFO, &phy->dev,
+		    "add: handle(0x%04x), sas_addr(0x%016llx)\n"
+		    "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
+		    mpt2sas_phy->handle, (unsigned long long)
+		    mpt2sas_phy->identify.sas_address,
+		    mpt2sas_phy->attached_handle,
+		    (unsigned long long)
+		    mpt2sas_phy->remote_identify.sas_address);
+	mpt2sas_phy->phy = phy;
+	return 0;
+}
+
+/**
+ * mpt2sas_transport_update_phy_link_change - refreshing phy link changes and attached devices
+ * @ioc: per adapter object
+ * @handle: handle to sas_host or expander
+ * @attached_handle: attached device handle
+ * @phy_number: phy number
+ * @link_rate: new link rate
+ *
+ * Returns nothing.
+ */
+void
+mpt2sas_transport_update_phy_link_change(struct MPT2SAS_ADAPTER *ioc,
+    u16 handle, u16 attached_handle, u8 phy_number, u8 link_rate)
+{
+	unsigned long flags;
+	struct _sas_node *sas_node;
+	struct _sas_phy *mpt2sas_phy;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_node = _transport_sas_node_find_by_handle(ioc, handle);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+	if (!sas_node)
+		return;
+
+	mpt2sas_phy = &sas_node->phy[phy_number];
+	mpt2sas_phy->attached_handle = attached_handle;
+	if (attached_handle && (link_rate >= MPI2_SAS_NEG_LINK_RATE_1_5))
+		_transport_set_identify(ioc, mpt2sas_phy->attached_handle,
+		    &mpt2sas_phy->remote_identify);
+	else
+		memset(&mpt2sas_phy->remote_identify, 0, sizeof(struct
+		    sas_identify));
+
+	if (mpt2sas_phy->phy)
+		mpt2sas_phy->phy->negotiated_linkrate =
+		    _transport_convert_phy_link_rate(link_rate);
+
+	if ((ioc->logging_level & MPT_DEBUG_TRANSPORT))
+		dev_printk(KERN_INFO, &mpt2sas_phy->phy->dev,
+		    "refresh: handle(0x%04x), sas_addr(0x%016llx),\n"
+		    "\tlink_rate(0x%02x), phy(%d)\n"
+		    "\tattached_handle(0x%04x), sas_addr(0x%016llx)\n",
+		    handle, (unsigned long long)
+		    mpt2sas_phy->identify.sas_address, link_rate,
+		    phy_number, attached_handle,
+		    (unsigned long long)
+		    mpt2sas_phy->remote_identify.sas_address);
+}
+
+static inline void *
+phy_to_ioc(struct sas_phy *phy)
+{
+	struct Scsi_Host *shost = dev_to_shost(phy->dev.parent);
+	return shost_priv(shost);
+}
+
+static inline void *
+rphy_to_ioc(struct sas_rphy *rphy)
+{
+	struct Scsi_Host *shost = dev_to_shost(rphy->dev.parent->parent);
+	return shost_priv(shost);
+}
+
+/**
+ * transport_get_linkerrors - fetch the phy error counters from firmware
+ * @phy: The sas phy object
+ *
+ * Only supports phys directly attached to the sas_host.
+ * Returns 0 for success, non-zero for failure.
+ *
+ */
+static int
+transport_get_linkerrors(struct sas_phy *phy)
+{
+	struct MPT2SAS_ADAPTER *ioc = phy_to_ioc(phy);
+	struct _sas_phy *mpt2sas_phy;
+	Mpi2ConfigReply_t mpi_reply;
+	Mpi2SasPhyPage1_t phy_pg1;
+	int i;
+
+	for (i = 0, mpt2sas_phy = NULL; i < ioc->sas_hba.num_phys &&
+	    !mpt2sas_phy; i++) {
+		if (ioc->sas_hba.phy[i].phy != phy)
+			continue;
+		mpt2sas_phy = &ioc->sas_hba.phy[i];
+	}
+
+	if (!mpt2sas_phy) /* this phy not on sas_host */
+		return -EINVAL;
+
+	if ((mpt2sas_config_get_phy_pg1(ioc, &mpi_reply, &phy_pg1,
+		    mpt2sas_phy->phy_id))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -ENXIO;
+	}
+
+	if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo)
+		printk(MPT2SAS_INFO_FMT "phy(%d), ioc_status"
+		    "(0x%04x), loginfo(0x%08x)\n", ioc->name,
+		    mpt2sas_phy->phy_id,
+		    le16_to_cpu(mpi_reply.IOCStatus),
+		    le32_to_cpu(mpi_reply.IOCLogInfo));
+
+	phy->invalid_dword_count = le32_to_cpu(phy_pg1.InvalidDwordCount);
+	phy->running_disparity_error_count =
+	    le32_to_cpu(phy_pg1.RunningDisparityErrorCount);
+	phy->loss_of_dword_sync_count =
+	    le32_to_cpu(phy_pg1.LossDwordSynchCount);
+	phy->phy_reset_problem_count =
+	    le32_to_cpu(phy_pg1.PhyResetProblemCount);
+	return 0;
+}
+
+/**
+ * transport_get_enclosure_identifier - obtain the enclosure logical id
+ * @rphy: The sas transport rphy object
+ * @identifier: returned enclosure logical id
+ *
+ * Obtain the enclosure logical id for an expander.
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+transport_get_enclosure_identifier(struct sas_rphy *rphy, u64 *identifier)
+{
+	struct MPT2SAS_ADAPTER *ioc = rphy_to_ioc(rphy);
+	struct _sas_node *sas_expander;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_node_lock, flags);
+	sas_expander = mpt2sas_scsih_expander_find_by_sas_address(ioc,
+	    rphy->identify.sas_address);
+	spin_unlock_irqrestore(&ioc->sas_node_lock, flags);
+
+	if (!sas_expander)
+		return -ENXIO;
+
+	*identifier = sas_expander->enclosure_logical_id;
+	return 0;
+}
+
+/**
+ * transport_get_bay_identifier - return the device slot id
+ * @rphy: The sas transport rphy object
+ *
+ * Returns the slot id for a device that resides inside an enclosure.
+ */
+static int
+transport_get_bay_identifier(struct sas_rphy *rphy)
+{
+	struct MPT2SAS_ADAPTER *ioc = rphy_to_ioc(rphy);
+	struct _sas_device *sas_device;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ioc->sas_device_lock, flags);
+	sas_device = mpt2sas_scsih_sas_device_find_by_sas_address(ioc,
+	    rphy->identify.sas_address);
+	spin_unlock_irqrestore(&ioc->sas_device_lock, flags);
+
+	if (!sas_device)
+		return -ENXIO;
+
+	return sas_device->slot;
+}
+
+/**
+ * transport_phy_reset - issue a link reset or hard reset on a phy
+ * @phy: The sas phy object
+ * @hard_reset: non-zero requests a hard reset, zero a link reset
+ *
+ * Only sas_host direct-attached phys are supported.
+ * Returns 0 for success, non-zero for failure.
+ */
+static int
+transport_phy_reset(struct sas_phy *phy, int hard_reset)
+{
+	struct MPT2SAS_ADAPTER *ioc = phy_to_ioc(phy);
+	struct _sas_phy *mpt2sas_phy;
+	Mpi2SasIoUnitControlReply_t mpi_reply;
+	Mpi2SasIoUnitControlRequest_t mpi_request;
+	int i;
+
+	for (i = 0, mpt2sas_phy = NULL; i < ioc->sas_hba.num_phys &&
+	    !mpt2sas_phy; i++) {
+		if (ioc->sas_hba.phy[i].phy != phy)
+			continue;
+		mpt2sas_phy = &ioc->sas_hba.phy[i];
+	}
+
+	if (!mpt2sas_phy) /* this phy not on sas_host */
+		return -EINVAL;
+
+	memset(&mpi_request, 0, sizeof(Mpi2SasIoUnitControlRequest_t));
+	mpi_request.Function = MPI2_FUNCTION_SAS_IO_UNIT_CONTROL;
+	mpi_request.Operation = hard_reset ?
+	    MPI2_SAS_OP_PHY_HARD_RESET : MPI2_SAS_OP_PHY_LINK_RESET;
+	mpi_request.PhyNum = mpt2sas_phy->phy_id;
+
+	if ((mpt2sas_base_sas_iounit_control(ioc, &mpi_reply, &mpi_request))) {
+		printk(MPT2SAS_ERR_FMT "failure at %s:%d/%s()!\n",
+		    ioc->name, __FILE__, __LINE__, __func__);
+		return -ENXIO;
+	}
+
+	if (mpi_reply.IOCStatus || mpi_reply.IOCLogInfo)
+		printk(MPT2SAS_INFO_FMT "phy(%d), ioc_status"
+		    "(0x%04x), loginfo(0x%08x)\n", ioc->name,
+		    mpt2sas_phy->phy_id,
+		    le16_to_cpu(mpi_reply.IOCStatus),
+		    le32_to_cpu(mpi_reply.IOCLogInfo));
+
+	return 0;
+}
+
+/**
+ * transport_smp_handler - transport portal for smp passthru
+ * @shost: shost object
+ * @rphy: sas transport rphy object
+ * @req: the bsg request carrying the smp frame (response in req->next_rq)
+ *
+ * This is used primarily for smp_utils.
+ * Example:
+ *           smp_rep_general /sys/class/bsg/expander-5:0
+ */
+static int
+transport_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
+    struct request *req)
+{
+	struct MPT2SAS_ADAPTER *ioc = shost_priv(shost);
+	Mpi2SmpPassthroughRequest_t *mpi_request;
+	Mpi2SmpPassthroughReply_t *mpi_reply;
+	int rc;
+	u16 smid;
+	u32 ioc_state;
+	unsigned long timeleft;
+	void *psge;
+	u32 sgl_flags;
+	u8 issue_reset = 0;
+	unsigned long flags;
+	dma_addr_t dma_addr_in = 0;
+	dma_addr_t dma_addr_out = 0;
+	u16 wait_state_count;
+	struct request *rsp = req->next_rq;
+
+	if (!rsp) {
+		printk(MPT2SAS_ERR_FMT "%s: the smp response space is "
+		    "missing\n", ioc->name, __func__);
+		return -EINVAL;
+	}
+
+	/* do we need to support multiple segments? */
+	if (req->bio->bi_vcnt > 1 || rsp->bio->bi_vcnt > 1) {
+		printk(MPT2SAS_ERR_FMT "%s: multiple segments req %u %u, "
+		    "rsp %u %u\n", ioc->name, __func__, req->bio->bi_vcnt,
+		    req->data_len, rsp->bio->bi_vcnt, rsp->data_len);
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&ioc->ioc_reset_in_progress_lock, flags);
+	if (ioc->ioc_reset_in_progress) {
+		spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+		printk(MPT2SAS_INFO_FMT "%s: host reset in progress!\n",
+		    ioc->name, __func__);
+		return -EFAULT;
+	}
+	spin_unlock_irqrestore(&ioc->ioc_reset_in_progress_lock, flags);
+
+	rc = mutex_lock_interruptible(&ioc->transport_cmds.mutex);
+	if (rc)
+		return rc;
+
+	if (ioc->transport_cmds.status != MPT2_CMD_NOT_USED) {
+		printk(MPT2SAS_ERR_FMT "%s: transport_cmds in use\n", ioc->name,
+		    __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+	ioc->transport_cmds.status = MPT2_CMD_PENDING;
+
+	wait_state_count = 0;
+	ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+	while (ioc_state != MPI2_IOC_STATE_OPERATIONAL) {
+		if (wait_state_count++ == 10) {
+			printk(MPT2SAS_ERR_FMT
+			    "%s: failed due to ioc not operational\n",
+			    ioc->name, __func__);
+			rc = -EFAULT;
+			goto out;
+		}
+		ssleep(1);
+		ioc_state = mpt2sas_base_get_iocstate(ioc, 1);
+		printk(MPT2SAS_INFO_FMT "%s: waiting for "
+		    "operational state(count=%d)\n", ioc->name,
+		    __func__, wait_state_count);
+	}
+	if (wait_state_count)
+		printk(MPT2SAS_INFO_FMT "%s: ioc is operational\n",
+		    ioc->name, __func__);
+
+	smid = mpt2sas_base_get_smid(ioc, ioc->transport_cb_idx);
+	if (!smid) {
+		printk(MPT2SAS_ERR_FMT "%s: failed obtaining a smid\n",
+		    ioc->name, __func__);
+		rc = -EAGAIN;
+		goto out;
+	}
+
+	rc = 0;
+	mpi_request = mpt2sas_base_get_msg_frame(ioc, smid);
+	ioc->transport_cmds.smid = smid;
+
+	memset(mpi_request, 0, sizeof(Mpi2SmpPassthroughRequest_t));
+	mpi_request->Function = MPI2_FUNCTION_SMP_PASSTHROUGH;
+	mpi_request->PhysicalPort = 0xFF;
+	*((u64 *)&mpi_request->SASAddress) = (rphy) ?
+	    cpu_to_le64(rphy->identify.sas_address) :
+	    cpu_to_le64(ioc->sas_hba.sas_address);
+	mpi_request->RequestDataLength = cpu_to_le16(req->data_len - 4);
+	psge = &mpi_request->SGL;
+
+	/* WRITE sgel first */
+	sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+	    MPI2_SGE_FLAGS_END_OF_BUFFER | MPI2_SGE_FLAGS_HOST_TO_IOC);
+	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+	dma_addr_out = pci_map_single(ioc->pdev, bio_data(req->bio),
+	      req->data_len, PCI_DMA_BIDIRECTIONAL);
+	if (!dma_addr_out) {
+		mpt2sas_base_free_smid(ioc, smid);
+		rc = -ENOMEM;
+		goto unmap;
+	}
+
+	ioc->base_add_sg_single(psge, sgl_flags | (req->data_len - 4),
+	    dma_addr_out);
+
+	/* incr sgel */
+	psge += ioc->sge_size;
+
+	/* READ sgel last */
+	sgl_flags = (MPI2_SGE_FLAGS_SIMPLE_ELEMENT |
+	    MPI2_SGE_FLAGS_LAST_ELEMENT | MPI2_SGE_FLAGS_END_OF_BUFFER |
+	    MPI2_SGE_FLAGS_END_OF_LIST);
+	sgl_flags = sgl_flags << MPI2_SGE_FLAGS_SHIFT;
+	dma_addr_in =  pci_map_single(ioc->pdev, bio_data(rsp->bio),
+	      rsp->data_len, PCI_DMA_BIDIRECTIONAL);
+	if (!dma_addr_in) {
+		mpt2sas_base_free_smid(ioc, smid);
+		rc = -ENOMEM;
+		goto unmap;
+	}
+
+	ioc->base_add_sg_single(psge, sgl_flags | (rsp->data_len + 4),
+	    dma_addr_in);
+
+	dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s - "
+	    "sending smp request\n", ioc->name, __func__));
+
+	mpt2sas_base_put_smid_default(ioc, smid, 0 /* VF_ID */);
+	timeleft = wait_for_completion_timeout(&ioc->transport_cmds.done,
+	    10*HZ);
+
+	if (!(ioc->transport_cmds.status & MPT2_CMD_COMPLETE)) {
+		printk(MPT2SAS_ERR_FMT "%s: timeout\n",
+		    ioc->name, __func__);
+		_debug_dump_mf(mpi_request,
+		    sizeof(Mpi2SmpPassthroughRequest_t)/4);
+		if (!(ioc->transport_cmds.status & MPT2_CMD_RESET))
+			issue_reset = 1;
+		goto issue_host_reset;
+	}
+
+	dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT "%s - "
+	    "complete\n", ioc->name, __func__));
+
+	if (ioc->transport_cmds.status & MPT2_CMD_REPLY_VALID) {
+
+		mpi_reply = ioc->transport_cmds.reply;
+
+		dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "%s - reply data transfer size(%d)\n",
+		    ioc->name, __func__,
+		    le16_to_cpu(mpi_reply->ResponseDataLength)));
+
+		memcpy(req->sense, mpi_reply, sizeof(*mpi_reply));
+		req->sense_len = sizeof(*mpi_reply);
+		req->data_len = 0;
+		rsp->data_len -= le16_to_cpu(mpi_reply->ResponseDataLength);
+
+	} else {
+		dtransportprintk(ioc, printk(MPT2SAS_DEBUG_FMT
+		    "%s - no reply\n", ioc->name, __func__));
+		rc = -ENXIO;
+	}
+
+ issue_host_reset:
+	if (issue_reset) {
+		mpt2sas_base_hard_reset_handler(ioc, CAN_SLEEP,
+		    FORCE_BIG_HAMMER);
+		rc = -ETIMEDOUT;
+	}
+
+ unmap:
+	if (dma_addr_out)
+		pci_unmap_single(ioc->pdev, dma_addr_out, req->data_len,
+		    PCI_DMA_BIDIRECTIONAL);
+	if (dma_addr_in)
+		pci_unmap_single(ioc->pdev, dma_addr_in, rsp->data_len,
+		    PCI_DMA_BIDIRECTIONAL);
+
+ out:
+	ioc->transport_cmds.status = MPT2_CMD_NOT_USED;
+	mutex_unlock(&ioc->transport_cmds.mutex);
+	return rc;
+}
+
+struct sas_function_template mpt2sas_transport_functions = {
+	.get_linkerrors		= transport_get_linkerrors,
+	.get_enclosure_identifier = transport_get_enclosure_identifier,
+	.get_bay_identifier	= transport_get_bay_identifier,
+	.phy_reset		= transport_phy_reset,
+	.smp_handler		= transport_smp_handler,
+};
+
+struct scsi_transport_template *mpt2sas_transport_template;
diff --git a/drivers/scsi/osd/Kbuild b/drivers/scsi/osd/Kbuild
new file mode 100644
index 0000000..0e207aa
--- /dev/null
+++ b/drivers/scsi/osd/Kbuild
@@ -0,0 +1,45 @@
+#
+# Kbuild for the OSD modules
+#
+# Copyright (C) 2008 Panasas Inc.  All rights reserved.
+#
+# Authors:
+#   Boaz Harrosh <bharrosh@panasas.com>
+#   Benny Halevy <bhalevy@panasas.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2
+#
+
+ifneq ($(OSD_INC),)
+# We are built out-of-tree, so Kconfigure everything as enabled (modules)
+
+CONFIG_SCSI_OSD_INITIATOR=m
+ccflags-y += -DCONFIG_SCSI_OSD_INITIATOR -DCONFIG_SCSI_OSD_INITIATOR_MODULE
+
+CONFIG_SCSI_OSD_ULD=m
+ccflags-y += -DCONFIG_SCSI_OSD_ULD -DCONFIG_SCSI_OSD_ULD_MODULE
+
+# CONFIG_SCSI_OSD_DPRINT_SENSE =
+#	0 - no print of errors
+#	1 - print errors
+#	2 - errors + warnings
+ccflags-y += -DCONFIG_SCSI_OSD_DPRINT_SENSE=1
+
+# Uncomment to turn debug on
+# ccflags-y += -DCONFIG_SCSI_OSD_DEBUG
+
+# If we are built out-of-tree and the hosting kernel has OSD headers,
+# then "ccflags-y +=" will not pick up the out-of-tree headers. Only by
+# prepending to LINUXINCLUDE does it work. This might break in future kernels.
+LINUXINCLUDE := -I$(OSD_INC) $(LINUXINCLUDE)
+
+endif
+
+# libosd.ko - osd-initiator library
+libosd-y := osd_initiator.o
+obj-$(CONFIG_SCSI_OSD_INITIATOR) += libosd.o
+
+# osd.ko - SCSI ULD and char-device
+osd-y := osd_uld.o
+obj-$(CONFIG_SCSI_OSD_ULD) += osd.o
diff --git a/drivers/scsi/osd/Kconfig b/drivers/scsi/osd/Kconfig
new file mode 100644
index 0000000..861b5ce
--- /dev/null
+++ b/drivers/scsi/osd/Kconfig
@@ -0,0 +1,53 @@
+#
+# Kernel configuration file for the OSD scsi protocol
+#
+# Copyright (C) 2008 Panasas Inc.  All rights reserved.
+#
+# Authors:
+#   Boaz Harrosh <bharrosh@panasas.com>
+#   Benny Halevy <bhalevy@panasas.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public version 2 License as
+# published by the Free Software Foundation
+#
+# FIXME: SCSI_OSD_INITIATOR should select CONFIG (HMAC) SHA1 somehow.
+#        How is it done properly?
+#
+
+config SCSI_OSD_INITIATOR
+	tristate "OSD-Initiator library"
+	depends on SCSI
+	help
+		Enable the OSD-Initiator library (libosd.ko).
+		NOTE: You must also select CRYPTO_SHA1 + CRYPTO_HMAC and their
+		dependencies
+
+config SCSI_OSD_ULD
+	tristate "OSD Upper Level driver"
+	depends on SCSI_OSD_INITIATOR
+	help
+		Build a SCSI upper layer driver that exports /dev/osdX devices
+		to user-mode for testing and controlling OSD devices. It is also
+		needed by exofs, for mounting an OSD based file system.
+
+config SCSI_OSD_DPRINT_SENSE
+	int "(0-2) When sense is returned, DEBUG print all sense descriptors"
+	default 1
+	depends on SCSI_OSD_INITIATOR
+	help
+		When a CHECK_CONDITION status is returned from a target, and a
+		sense-buffer is retrieved, turning this on will dump a full
+		sense-decoding message. Setting to 2 will also print recoverable
+		errors that might be regularly returned for some filesystem
+		operations.
+
+config SCSI_OSD_DEBUG
+	bool "Compile All OSD modules with lots of DEBUG prints"
+	default n
+	depends on SCSI_OSD_INITIATOR
+	help
+		OSD Code is populated with lots of OSD_DEBUG(..) printouts to
+		dmesg. Enable this if you found a bug and you want to help us
+		track the problem (see also MAINTAINERS). Setting this will also
+		force SCSI_OSD_DPRINT_SENSE=2.
diff --git a/drivers/scsi/osd/Makefile b/drivers/scsi/osd/Makefile
new file mode 100755
index 0000000..d905344
--- /dev/null
+++ b/drivers/scsi/osd/Makefile
@@ -0,0 +1,37 @@
+#
+# Makefile for the OSD modules (out of tree)
+#
+# Copyright (C) 2008 Panasas Inc.  All rights reserved.
+#
+# Authors:
+#   Boaz Harrosh <bharrosh@panasas.com>
+#   Benny Halevy <bhalevy@panasas.com>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2
+#
+# This Makefile is used to call the kernel Makefile in case of an out-of-tree
+# build.
+# $KSRC should point to a kernel source tree, otherwise the running host's
+# default is used (e.g. /lib/modules/`uname -r`/build).
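+#
+# Example invocations (illustrative only; adjust the paths to your setup):
+#   make KSRC=/usr/src/linux V=1    # build the modules against a given tree
+#   make clean                      # clean the out-of-tree build products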
+
+# Include path for the out-of-tree headers
+OSD_INC ?= `pwd`/../../../include
+
+# allow users to override these
+# e.g. to compile for a kernel that you aren't currently running
+KSRC ?= /lib/modules/$(shell uname -r)/build
+KBUILD_OUTPUT ?=
+ARCH ?=
+V ?= 0
+
+# this is the basic Kbuild out-of-tree invocation, with the M= option
+KBUILD_BASE = +$(MAKE) -C $(KSRC) M=`pwd` KBUILD_OUTPUT=$(KBUILD_OUTPUT) ARCH=$(ARCH) V=$(V)
+
+all: libosd
+
+libosd: ;
+	$(KBUILD_BASE) OSD_INC=$(OSD_INC) modules
+
+clean:
+	$(KBUILD_BASE) clean
diff --git a/drivers/scsi/osd/osd_debug.h b/drivers/scsi/osd/osd_debug.h
new file mode 100644
index 0000000..579e491
--- /dev/null
+++ b/drivers/scsi/osd/osd_debug.h
@@ -0,0 +1,30 @@
+/*
+ * osd_debug.h - Some kprintf macros
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ */
+#ifndef __OSD_DEBUG_H__
+#define __OSD_DEBUG_H__
+
+#define OSD_ERR(fmt, a...) printk(KERN_ERR "osd: " fmt, ##a)
+#define OSD_INFO(fmt, a...) printk(KERN_NOTICE "osd: " fmt, ##a)
+
+#ifdef CONFIG_SCSI_OSD_DEBUG
+#define OSD_DEBUG(fmt, a...) \
+	printk(KERN_NOTICE "osd @%s:%d: " fmt, __func__, __LINE__, ##a)
+#else
+#define OSD_DEBUG(fmt, a...) do {} while (0)
+#endif
+
+/* u64 is problematic with printk; this casts it to unsigned long long */
+#define _LLU(x) (unsigned long long)(x)
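+
+/*
+ * Illustrative use (the oid variable below is a placeholder, not part of
+ * this header):
+ *	u64 oid = 0x10004;
+ *
+ *	OSD_DEBUG("created object 0x%llx\n", _LLU(oid));
+ */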
+
+#endif /* ndef __OSD_DEBUG_H__ */
diff --git a/drivers/scsi/osd/osd_initiator.c b/drivers/scsi/osd/osd_initiator.c
new file mode 100644
index 0000000..552f58b
--- /dev/null
+++ b/drivers/scsi/osd/osd_initiator.c
@@ -0,0 +1,1657 @@
+/*
+ * osd_initiator - Main body of the osd initiator library.
+ *
+ * Note: The file does not contain the advanced security functionality which
+ * is only needed by the security_manager's initiators.
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  1. Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *  2. Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in the
+ *     documentation and/or other materials provided with the distribution.
+ *  3. Neither the name of the Panasas company nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <scsi/osd_initiator.h>
+#include <scsi/osd_sec.h>
+#include <scsi/osd_attributes.h>
+#include <scsi/osd_sense.h>
+
+#include <scsi/scsi_device.h>
+
+#include "osd_debug.h"
+
+#ifndef __unused
+#    define __unused			__attribute__((unused))
+#endif
+
+enum { OSD_REQ_RETRIES = 1 };
+
+MODULE_AUTHOR("Boaz Harrosh <bharrosh@panasas.com>");
+MODULE_DESCRIPTION("open-osd initiator library libosd.ko");
+MODULE_LICENSE("GPL");
+
+static inline void build_test(void)
+{
+	/* structures were not packed */
+	BUILD_BUG_ON(sizeof(struct osd_capability) != OSD_CAP_LEN);
+	BUILD_BUG_ON(sizeof(struct osdv2_cdb) != OSD_TOTAL_CDB_LEN);
+	BUILD_BUG_ON(sizeof(struct osdv1_cdb) != OSDv1_TOTAL_CDB_LEN);
+}
+
+static const char *_osd_ver_desc(struct osd_request *or)
+{
+	return osd_req_is_ver1(or) ? "OSD1" : "OSD2";
+}
+
+#define ATTR_DEF_RI(id, len) ATTR_DEF(OSD_APAGE_ROOT_INFORMATION, id, len)
+
+static int _osd_print_system_info(struct osd_dev *od, void *caps)
+{
+	struct osd_request *or;
+	struct osd_attr get_attrs[] = {
+		ATTR_DEF_RI(OSD_ATTR_RI_VENDOR_IDENTIFICATION, 8),
+		ATTR_DEF_RI(OSD_ATTR_RI_PRODUCT_IDENTIFICATION, 16),
+		ATTR_DEF_RI(OSD_ATTR_RI_PRODUCT_MODEL, 32),
+		ATTR_DEF_RI(OSD_ATTR_RI_PRODUCT_REVISION_LEVEL, 4),
+		ATTR_DEF_RI(OSD_ATTR_RI_PRODUCT_SERIAL_NUMBER, 64 /*variable*/),
+		ATTR_DEF_RI(OSD_ATTR_RI_OSD_NAME, 64 /*variable*/),
+		ATTR_DEF_RI(OSD_ATTR_RI_TOTAL_CAPACITY, 8),
+		ATTR_DEF_RI(OSD_ATTR_RI_USED_CAPACITY, 8),
+		ATTR_DEF_RI(OSD_ATTR_RI_NUMBER_OF_PARTITIONS, 8),
+		ATTR_DEF_RI(OSD_ATTR_RI_CLOCK, 6),
+		/* IBM-OSD-SIM has a bug with this one; put it last */
+		ATTR_DEF_RI(OSD_ATTR_RI_OSD_SYSTEM_ID, 20),
+	};
+	void *iter = NULL, *pFirst;
+	int nelem = ARRAY_SIZE(get_attrs), a = 0;
+	int ret;
+
+	or = osd_start_request(od, GFP_KERNEL);
+	if (!or)
+		return -ENOMEM;
+
+	/* get attrs */
+	osd_req_get_attributes(or, &osd_root_object);
+	osd_req_add_get_attr_list(or, get_attrs, ARRAY_SIZE(get_attrs));
+
+	ret = osd_finalize_request(or, 0, caps, NULL);
+	if (ret)
+		goto out;
+
+	ret = osd_execute_request(or);
+	if (ret) {
+		OSD_ERR("Failed to detect %s => %d\n", _osd_ver_desc(or), ret);
+		goto out;
+	}
+
+	osd_req_decode_get_attr_list(or, get_attrs, &nelem, &iter);
+
+	OSD_INFO("Detected %s device\n",
+		_osd_ver_desc(or));
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_VENDOR_IDENTIFICATION [%s]\n",
+		(char *)pFirst);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_PRODUCT_IDENTIFICATION [%s]\n",
+		(char *)pFirst);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_PRODUCT_MODEL [%s]\n",
+		(char *)pFirst);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_PRODUCT_REVISION_LEVEL [%u]\n",
+		pFirst ? get_unaligned_be32(pFirst) : ~0U);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_PRODUCT_SERIAL_NUMBER [%s]\n",
+		(char *)pFirst);
+
+	pFirst = get_attrs[a].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_OSD_NAME [%s]\n", (char *)pFirst);
+	a++;
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_TOTAL_CAPACITY [0x%llx]\n",
+		pFirst ? _LLU(get_unaligned_be64(pFirst)) : ~0ULL);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_USED_CAPACITY [0x%llx]\n",
+		pFirst ? _LLU(get_unaligned_be64(pFirst)) : ~0ULL);
+
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_NUMBER_OF_PARTITIONS [%llu]\n",
+		pFirst ? _LLU(get_unaligned_be64(pFirst)) : ~0ULL);
+
+	if (a >= nelem)
+		goto out;
+
+	/* FIXME: Where are the time utilities */
+	pFirst = get_attrs[a++].val_ptr;
+	OSD_INFO("OSD_ATTR_RI_CLOCK [0x%02x%02x%02x%02x%02x%02x]\n",
+		((char *)pFirst)[0], ((char *)pFirst)[1],
+		((char *)pFirst)[2], ((char *)pFirst)[3],
+		((char *)pFirst)[4], ((char *)pFirst)[5]);
+
+	if (a < nelem) { /* IBM-OSD-SIM bug; it might be missing */
+		unsigned len = get_attrs[a].len;
+		char sid_dump[32*4 + 2]; /* 2nibbles+space+ASCII */
+
+		hex_dump_to_buffer(get_attrs[a].val_ptr, len, 32, 1,
+				   sid_dump, sizeof(sid_dump), true);
+		OSD_INFO("OSD_ATTR_RI_OSD_SYSTEM_ID(%d) [%s]\n", len, sid_dump);
+		a++;
+	}
+out:
+	osd_end_request(or);
+	return ret;
+}
+
+int osd_auto_detect_ver(struct osd_dev *od, void *caps)
+{
+	int ret;
+
+	/* Auto-detect the osd version */
+	ret = _osd_print_system_info(od, caps);
+	if (ret) {
+		osd_dev_set_ver(od, OSD_VER1);
+		OSD_DEBUG("converting to OSD1\n");
+		ret = _osd_print_system_info(od, caps);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(osd_auto_detect_ver);
+
+static unsigned _osd_req_cdb_len(struct osd_request *or)
+{
+	return osd_req_is_ver1(or) ? OSDv1_TOTAL_CDB_LEN : OSD_TOTAL_CDB_LEN;
+}
+
+static unsigned _osd_req_alist_elem_size(struct osd_request *or, unsigned len)
+{
+	return osd_req_is_ver1(or) ?
+		osdv1_attr_list_elem_size(len) :
+		osdv2_attr_list_elem_size(len);
+}
+
+static unsigned _osd_req_alist_size(struct osd_request *or, void *list_head)
+{
+	return osd_req_is_ver1(or) ?
+		osdv1_list_size(list_head) :
+		osdv2_list_size(list_head);
+}
+
+static unsigned _osd_req_sizeof_alist_header(struct osd_request *or)
+{
+	return osd_req_is_ver1(or) ?
+		sizeof(struct osdv1_attributes_list_header) :
+		sizeof(struct osdv2_attributes_list_header);
+}
+
+static void _osd_req_set_alist_type(struct osd_request *or,
+	void *list, int list_type)
+{
+	if (osd_req_is_ver1(or)) {
+		struct osdv1_attributes_list_header *attr_list = list;
+
+		memset(attr_list, 0, sizeof(*attr_list));
+		attr_list->type = list_type;
+	} else {
+		struct osdv2_attributes_list_header *attr_list = list;
+
+		memset(attr_list, 0, sizeof(*attr_list));
+		attr_list->type = list_type;
+	}
+}
+
+static bool _osd_req_is_alist_type(struct osd_request *or,
+	void *list, int list_type)
+{
+	if (!list)
+		return false;
+
+	if (osd_req_is_ver1(or)) {
+		struct osdv1_attributes_list_header *attr_list = list;
+
+		return attr_list->type == list_type;
+	} else {
+		struct osdv2_attributes_list_header *attr_list = list;
+
+		return attr_list->type == list_type;
+	}
+}
+
+/* This is for List-objects not Attributes-Lists */
+static void _osd_req_encode_olist(struct osd_request *or,
+	struct osd_obj_id_list *list)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+
+	if (osd_req_is_ver1(or)) {
+		cdbh->v1.list_identifier = list->list_identifier;
+		cdbh->v1.start_address = list->continuation_id;
+	} else {
+		cdbh->v2.list_identifier = list->list_identifier;
+		cdbh->v2.start_address = list->continuation_id;
+	}
+}
+
+static osd_cdb_offset osd_req_encode_offset(struct osd_request *or,
+	u64 offset, unsigned *padding)
+{
+	return __osd_encode_offset(offset, padding,
+			osd_req_is_ver1(or) ?
+				OSDv1_OFFSET_MIN_SHIFT : OSD_OFFSET_MIN_SHIFT,
+			OSD_OFFSET_MAX_SHIFT);
+}
+
+static struct osd_security_parameters *
+_osd_req_sec_params(struct osd_request *or)
+{
+	struct osd_cdb *ocdb = &or->cdb;
+
+	if (osd_req_is_ver1(or))
+		return &ocdb->v1.sec_params;
+	else
+		return &ocdb->v2.sec_params;
+}
+
+void osd_dev_init(struct osd_dev *osdd, struct scsi_device *scsi_device)
+{
+	memset(osdd, 0, sizeof(*osdd));
+	osdd->scsi_device = scsi_device;
+	osdd->def_timeout = BLK_DEFAULT_SG_TIMEOUT;
+#ifdef OSD_VER1_SUPPORT
+	osdd->version = OSD_VER2;
+#endif
+	/* TODO: Allocate pools for osd_request attributes ... */
+}
+EXPORT_SYMBOL(osd_dev_init);
+
+void osd_dev_fini(struct osd_dev *osdd)
+{
+	/* TODO: De-allocate pools */
+
+	osdd->scsi_device = NULL;
+}
+EXPORT_SYMBOL(osd_dev_fini);
+
+static struct osd_request *_osd_request_alloc(gfp_t gfp)
+{
+	struct osd_request *or;
+
+	/* TODO: Use mempool with one saved request */
+	or = kzalloc(sizeof(*or), gfp);
+	return or;
+}
+
+static void _osd_request_free(struct osd_request *or)
+{
+	kfree(or);
+}
+
+struct osd_request *osd_start_request(struct osd_dev *dev, gfp_t gfp)
+{
+	struct osd_request *or;
+
+	or = _osd_request_alloc(gfp);
+	if (!or)
+		return NULL;
+
+	or->osd_dev = dev;
+	or->alloc_flags = gfp;
+	or->timeout = dev->def_timeout;
+	or->retries = OSD_REQ_RETRIES;
+
+	return or;
+}
+EXPORT_SYMBOL(osd_start_request);
+
+/*
+ * If osd_finalize_request() was called but the request was not executed through
+ * the block layer, then we must release BIOs.
+ */
+static void _abort_unexecuted_bios(struct request *rq)
+{
+	struct bio *bio;
+
+	while ((bio = rq->bio) != NULL) {
+		rq->bio = bio->bi_next;
+		bio_endio(bio, 0);
+	}
+}
+
+static void _osd_free_seg(struct osd_request *or __unused,
+	struct _osd_req_data_segment *seg)
+{
+	if (!seg->buff || !seg->alloc_size)
+		return;
+
+	kfree(seg->buff);
+	seg->buff = NULL;
+	seg->alloc_size = 0;
+}
+
+void osd_end_request(struct osd_request *or)
+{
+	struct request *rq = or->request;
+
+	_osd_free_seg(or, &or->set_attr);
+	_osd_free_seg(or, &or->enc_get_attr);
+	_osd_free_seg(or, &or->get_attr);
+
+	if (rq) {
+		if (rq->next_rq) {
+			_abort_unexecuted_bios(rq->next_rq);
+			blk_put_request(rq->next_rq);
+		}
+
+		_abort_unexecuted_bios(rq);
+		blk_put_request(rq);
+	}
+	_osd_request_free(or);
+}
+EXPORT_SYMBOL(osd_end_request);
+
+int osd_execute_request(struct osd_request *or)
+{
+	return blk_execute_rq(or->request->q, NULL, or->request, 0);
+}
+EXPORT_SYMBOL(osd_execute_request);
+
+static void osd_request_async_done(struct request *req, int error)
+{
+	struct osd_request *or = req->end_io_data;
+
+	or->async_error = error;
+
+	if (error)
+		OSD_DEBUG("osd_request_async_done error recieved %d\n", error);
+
+	if (or->async_done)
+		or->async_done(or, or->async_private);
+	else
+		osd_end_request(or);
+}
+
+int osd_execute_request_async(struct osd_request *or,
+	osd_req_done_fn *done, void *private)
+{
+	or->request->end_io_data = or;
+	or->async_private = private;
+	or->async_done = done;
+
+	blk_execute_rq_nowait(or->request->q, NULL, or->request, 0,
+			      osd_request_async_done);
+	return 0;
+}
+EXPORT_SYMBOL(osd_execute_request_async);
+
+u8 sg_out_pad_buffer[1 << OSDv1_OFFSET_MIN_SHIFT];
+u8 sg_in_pad_buffer[1 << OSDv1_OFFSET_MIN_SHIFT];
+
+static int _osd_realloc_seg(struct osd_request *or,
+	struct _osd_req_data_segment *seg, unsigned max_bytes)
+{
+	void *buff;
+
+	if (seg->alloc_size >= max_bytes)
+		return 0;
+
+	buff = krealloc(seg->buff, max_bytes, or->alloc_flags);
+	if (!buff) {
+		OSD_ERR("Failed to Realloc %d-bytes was-%d\n", max_bytes,
+			seg->alloc_size);
+		return -ENOMEM;
+	}
+
+	memset(buff + seg->alloc_size, 0, max_bytes - seg->alloc_size);
+	seg->buff = buff;
+	seg->alloc_size = max_bytes;
+	return 0;
+}
+
+static int _alloc_set_attr_list(struct osd_request *or,
+	const struct osd_attr *oa, unsigned nelem, unsigned add_bytes)
+{
+	unsigned total_bytes = add_bytes;
+
+	for (; nelem; --nelem, ++oa)
+		total_bytes += _osd_req_alist_elem_size(or, oa->len);
+
+	OSD_DEBUG("total_bytes=%d\n", total_bytes);
+	return _osd_realloc_seg(or, &or->set_attr, total_bytes);
+}
+
+static int _alloc_get_attr_desc(struct osd_request *or, unsigned max_bytes)
+{
+	OSD_DEBUG("total_bytes=%d\n", max_bytes);
+	return _osd_realloc_seg(or, &or->enc_get_attr, max_bytes);
+}
+
+static int _alloc_get_attr_list(struct osd_request *or)
+{
+	OSD_DEBUG("total_bytes=%d\n", or->get_attr.total_bytes);
+	return _osd_realloc_seg(or, &or->get_attr, or->get_attr.total_bytes);
+}
+
+/*
+ * Common to all OSD commands
+ */
+
+static void _osdv1_req_encode_common(struct osd_request *or,
+	__be16 act, const struct osd_obj_id *obj, u64 offset, u64 len)
+{
+	struct osdv1_cdb *ocdb = &or->cdb.v1;
+
+	/*
+	 * For speed, the commands
+	 *	OSD_ACT_PERFORM_SCSI_COMMAND	, V1 0x8F7E, V2 0x8F7C
+	 *	OSD_ACT_SCSI_TASK_MANAGEMENT	, V1 0x8F7F, V2 0x8F7D
+	 * are not supported here. Pass zero and set the action after this call.
+	 */
+	act &= cpu_to_be16(~0x0080); /* V1 action code */
+
+	OSD_DEBUG("OSDv1 execute opcode 0x%x\n", be16_to_cpu(act));
+
+	ocdb->h.varlen_cdb.opcode = VARIABLE_LENGTH_CMD;
+	ocdb->h.varlen_cdb.additional_cdb_length = OSD_ADDITIONAL_CDB_LENGTH;
+	ocdb->h.varlen_cdb.service_action = act;
+
+	ocdb->h.partition = cpu_to_be64(obj->partition);
+	ocdb->h.object = cpu_to_be64(obj->id);
+	ocdb->h.v1.length = cpu_to_be64(len);
+	ocdb->h.v1.start_address = cpu_to_be64(offset);
+}
+
+static void _osdv2_req_encode_common(struct osd_request *or,
+	 __be16 act, const struct osd_obj_id *obj, u64 offset, u64 len)
+{
+	struct osdv2_cdb *ocdb = &or->cdb.v2;
+
+	OSD_DEBUG("OSDv2 execute opcode 0x%x\n", be16_to_cpu(act));
+
+	ocdb->h.varlen_cdb.opcode = VARIABLE_LENGTH_CMD;
+	ocdb->h.varlen_cdb.additional_cdb_length = OSD_ADDITIONAL_CDB_LENGTH;
+	ocdb->h.varlen_cdb.service_action = act;
+
+	ocdb->h.partition = cpu_to_be64(obj->partition);
+	ocdb->h.object = cpu_to_be64(obj->id);
+	ocdb->h.v2.length = cpu_to_be64(len);
+	ocdb->h.v2.start_address = cpu_to_be64(offset);
+}
+
+static void _osd_req_encode_common(struct osd_request *or,
+	__be16 act, const struct osd_obj_id *obj, u64 offset, u64 len)
+{
+	if (osd_req_is_ver1(or))
+		_osdv1_req_encode_common(or, act, obj, offset, len);
+	else
+		_osdv2_req_encode_common(or, act, obj, offset, len);
+}
+
+/*
+ * Device commands
+ */
+/*TODO: void osd_req_set_master_seed_xchg(struct osd_request *, ...); */
+/*TODO: void osd_req_set_master_key(struct osd_request *, ...); */
+
+void osd_req_format(struct osd_request *or, u64 tot_capacity)
+{
+	_osd_req_encode_common(or, OSD_ACT_FORMAT_OSD, &osd_root_object, 0,
+				tot_capacity);
+}
+EXPORT_SYMBOL(osd_req_format);
+
+int osd_req_list_dev_partitions(struct osd_request *or,
+	osd_id initial_id, struct osd_obj_id_list *list, unsigned nelem)
+{
+	return osd_req_list_partition_objects(or, 0, initial_id, list, nelem);
+}
+EXPORT_SYMBOL(osd_req_list_dev_partitions);
+
+static void _osd_req_encode_flush(struct osd_request *or,
+	enum osd_options_flush_scope_values op)
+{
+	struct osd_cdb_head *ocdb = osd_cdb_head(&or->cdb);
+
+	ocdb->command_specific_options = op;
+}
+
+void osd_req_flush_obsd(struct osd_request *or,
+	enum osd_options_flush_scope_values op)
+{
+	_osd_req_encode_common(or, OSD_ACT_FLUSH_OSD, &osd_root_object, 0, 0);
+	_osd_req_encode_flush(or, op);
+}
+EXPORT_SYMBOL(osd_req_flush_obsd);
+
+/*TODO: void osd_req_perform_scsi_command(struct osd_request *,
+	const u8 *cdb, ...); */
+/*TODO: void osd_req_task_management(struct osd_request *, ...); */
+
+/*
+ * Partition commands
+ */
+static void _osd_req_encode_partition(struct osd_request *or,
+	__be16 act, osd_id partition)
+{
+	struct osd_obj_id par = {
+		.partition = partition,
+		.id = 0,
+	};
+
+	_osd_req_encode_common(or, act, &par, 0, 0);
+}
+
+void osd_req_create_partition(struct osd_request *or, osd_id partition)
+{
+	_osd_req_encode_partition(or, OSD_ACT_CREATE_PARTITION, partition);
+}
+EXPORT_SYMBOL(osd_req_create_partition);
+
+void osd_req_remove_partition(struct osd_request *or, osd_id partition)
+{
+	_osd_req_encode_partition(or, OSD_ACT_REMOVE_PARTITION, partition);
+}
+EXPORT_SYMBOL(osd_req_remove_partition);
+
+/*TODO: void osd_req_set_partition_key(struct osd_request *,
+	osd_id partition, u8 new_key_id[OSD_CRYPTO_KEYID_SIZE],
+	u8 seed[OSD_CRYPTO_SEED_SIZE]); */
+
+static int _osd_req_list_objects(struct osd_request *or,
+	__be16 action, const struct osd_obj_id *obj, osd_id initial_id,
+	struct osd_obj_id_list *list, unsigned nelem)
+{
+	struct request_queue *q = or->osd_dev->scsi_device->request_queue;
+	u64 len = nelem * sizeof(osd_id) + sizeof(*list);
+	struct bio *bio;
+
+	_osd_req_encode_common(or, action, obj, (u64)initial_id, len);
+
+	if (list->list_identifier)
+		_osd_req_encode_olist(or, list);
+
+	WARN_ON(or->in.bio);
+	bio = bio_map_kern(q, list, len, or->alloc_flags);
+	if (IS_ERR(bio)) {
+		OSD_ERR("!!! Failed to allocate list_objects BIO\n");
+		return PTR_ERR(bio);
+	}
+
+	bio->bi_rw &= ~(1 << BIO_RW);
+	or->in.bio = bio;
+	or->in.total_bytes = bio->bi_size;
+	return 0;
+}
+
+int osd_req_list_partition_collections(struct osd_request *or,
+	osd_id partition, osd_id initial_id, struct osd_obj_id_list *list,
+	unsigned nelem)
+{
+	struct osd_obj_id par = {
+		.partition = partition,
+		.id = 0,
+	};
+
+	return osd_req_list_collection_objects(or, &par, initial_id, list,
+					       nelem);
+}
+EXPORT_SYMBOL(osd_req_list_partition_collections);
+
+int osd_req_list_partition_objects(struct osd_request *or,
+	osd_id partition, osd_id initial_id, struct osd_obj_id_list *list,
+	unsigned nelem)
+{
+	struct osd_obj_id par = {
+		.partition = partition,
+		.id = 0,
+	};
+
+	return _osd_req_list_objects(or, OSD_ACT_LIST, &par, initial_id, list,
+				     nelem);
+}
+EXPORT_SYMBOL(osd_req_list_partition_objects);
+
+void osd_req_flush_partition(struct osd_request *or,
+	osd_id partition, enum osd_options_flush_scope_values op)
+{
+	_osd_req_encode_partition(or, OSD_ACT_FLUSH_PARTITION, partition);
+	_osd_req_encode_flush(or, op);
+}
+EXPORT_SYMBOL(osd_req_flush_partition);
+
+/*
+ * Collection commands
+ */
+/*TODO: void osd_req_create_collection(struct osd_request *,
+	const struct osd_obj_id *); */
+/*TODO: void osd_req_remove_collection(struct osd_request *,
+	const struct osd_obj_id *); */
+
+int osd_req_list_collection_objects(struct osd_request *or,
+	const struct osd_obj_id *obj, osd_id initial_id,
+	struct osd_obj_id_list *list, unsigned nelem)
+{
+	return _osd_req_list_objects(or, OSD_ACT_LIST_COLLECTION, obj,
+				     initial_id, list, nelem);
+}
+EXPORT_SYMBOL(osd_req_list_collection_objects);
+
+/*TODO: void query(struct osd_request *, ...); V2 */
+
+void osd_req_flush_collection(struct osd_request *or,
+	const struct osd_obj_id *obj, enum osd_options_flush_scope_values op)
+{
+	_osd_req_encode_common(or, OSD_ACT_FLUSH_PARTITION, obj, 0, 0);
+	_osd_req_encode_flush(or, op);
+}
+EXPORT_SYMBOL(osd_req_flush_collection);
+
+/*TODO: void get_member_attrs(struct osd_request *, ...); V2 */
+/*TODO: void set_member_attrs(struct osd_request *, ...); V2 */
+
+/*
+ * Object commands
+ */
+void osd_req_create_object(struct osd_request *or, struct osd_obj_id *obj)
+{
+	_osd_req_encode_common(or, OSD_ACT_CREATE, obj, 0, 0);
+}
+EXPORT_SYMBOL(osd_req_create_object);
+
+void osd_req_remove_object(struct osd_request *or, struct osd_obj_id *obj)
+{
+	_osd_req_encode_common(or, OSD_ACT_REMOVE, obj, 0, 0);
+}
+EXPORT_SYMBOL(osd_req_remove_object);
+
+
+/*TODO: void osd_req_create_multi(struct osd_request *or,
+	struct osd_obj_id *first, struct osd_obj_id_list *list, unsigned nelem);
+*/
+
+void osd_req_write(struct osd_request *or,
+	const struct osd_obj_id *obj, struct bio *bio, u64 offset)
+{
+	_osd_req_encode_common(or, OSD_ACT_WRITE, obj, offset, bio->bi_size);
+	WARN_ON(or->out.bio || or->out.total_bytes);
+	bio->bi_rw |= (1 << BIO_RW);
+	or->out.bio = bio;
+	or->out.total_bytes = bio->bi_size;
+}
+EXPORT_SYMBOL(osd_req_write);
+
+/*TODO: void osd_req_append(struct osd_request *,
+	const struct osd_obj_id *, struct bio *data_out); */
+/*TODO: void osd_req_create_write(struct osd_request *,
+	const struct osd_obj_id *, struct bio *data_out, u64 offset); */
+/*TODO: void osd_req_clear(struct osd_request *,
+	const struct osd_obj_id *, u64 offset, u64 len); */
+/*TODO: void osd_req_punch(struct osd_request *,
+	const struct osd_obj_id *, u64 offset, u64 len); V2 */
+
+void osd_req_flush_object(struct osd_request *or,
+	const struct osd_obj_id *obj, enum osd_options_flush_scope_values op,
+	/*V2*/ u64 offset, /*V2*/ u64 len)
+{
+	if (unlikely(osd_req_is_ver1(or) && (offset || len))) {
+		OSD_DEBUG("OSD Ver1 flush on specific range ignored\n");
+		offset = 0;
+		len = 0;
+	}
+
+	_osd_req_encode_common(or, OSD_ACT_FLUSH, obj, offset, len);
+	_osd_req_encode_flush(or, op);
+}
+EXPORT_SYMBOL(osd_req_flush_object);
+
+void osd_req_read(struct osd_request *or,
+	const struct osd_obj_id *obj, struct bio *bio, u64 offset)
+{
+	_osd_req_encode_common(or, OSD_ACT_READ, obj, offset, bio->bi_size);
+	WARN_ON(or->in.bio || or->in.total_bytes);
+	bio->bi_rw &= ~(1 << BIO_RW);
+	or->in.bio = bio;
+	or->in.total_bytes = bio->bi_size;
+}
+EXPORT_SYMBOL(osd_req_read);
+
+void osd_req_get_attributes(struct osd_request *or,
+	const struct osd_obj_id *obj)
+{
+	_osd_req_encode_common(or, OSD_ACT_GET_ATTRIBUTES, obj, 0, 0);
+}
+EXPORT_SYMBOL(osd_req_get_attributes);
+
+void osd_req_set_attributes(struct osd_request *or,
+	const struct osd_obj_id *obj)
+{
+	_osd_req_encode_common(or, OSD_ACT_SET_ATTRIBUTES, obj, 0, 0);
+}
+EXPORT_SYMBOL(osd_req_set_attributes);
+
+/*
+ * Attributes List-mode
+ */
+
+int osd_req_add_set_attr_list(struct osd_request *or,
+	const struct osd_attr *oa, unsigned nelem)
+{
+	unsigned total_bytes = or->set_attr.total_bytes;
+	void *attr_last;
+	int ret;
+
+	if (or->attributes_mode &&
+	    or->attributes_mode != OSD_CDB_GET_SET_ATTR_LISTS) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+	or->attributes_mode = OSD_CDB_GET_SET_ATTR_LISTS;
+
+	if (!total_bytes) { /* first-time: allocate and put list header */
+		total_bytes = _osd_req_sizeof_alist_header(or);
+		ret = _alloc_set_attr_list(or, oa, nelem, total_bytes);
+		if (ret)
+			return ret;
+		_osd_req_set_alist_type(or, or->set_attr.buff,
+					OSD_ATTR_LIST_SET_RETRIEVE);
+	}
+	attr_last = or->set_attr.buff + total_bytes;
+
+	for (; nelem; --nelem) {
+		struct osd_attributes_list_element *attr;
+		unsigned elem_size = _osd_req_alist_elem_size(or, oa->len);
+
+		total_bytes += elem_size;
+		if (unlikely(or->set_attr.alloc_size < total_bytes)) {
+			or->set_attr.total_bytes = total_bytes - elem_size;
+			ret = _alloc_set_attr_list(or, oa, nelem, total_bytes);
+			if (ret)
+				return ret;
+			attr_last =
+				or->set_attr.buff + or->set_attr.total_bytes;
+		}
+
+		attr = attr_last;
+		attr->attr_page = cpu_to_be32(oa->attr_page);
+		attr->attr_id = cpu_to_be32(oa->attr_id);
+		attr->attr_bytes = cpu_to_be16(oa->len);
+		memcpy(attr->attr_val, oa->val_ptr, oa->len);
+
+		attr_last += elem_size;
+		++oa;
+	}
+
+	or->set_attr.total_bytes = total_bytes;
+	return 0;
+}
+EXPORT_SYMBOL(osd_req_add_set_attr_list);
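+
+/*
+ * Illustrative sketch of a caller (attribute page/id and val below are
+ * placeholders, not defined by this file):
+ *	struct osd_attr attr = {
+ *		.attr_page = my_page, .attr_id = my_id,
+ *		.len = sizeof(val), .val_ptr = &val,
+ *	};
+ *
+ *	osd_req_set_attributes(or, obj);
+ *	osd_req_add_set_attr_list(or, &attr, 1);
+ */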
+
+static int _append_map_kern(struct request *req,
+	void *buff, unsigned len, gfp_t flags)
+{
+	struct bio *bio;
+	int ret;
+
+	bio = bio_map_kern(req->q, buff, len, flags);
+	if (IS_ERR(bio)) {
+		OSD_ERR("Failed bio_map_kern(%p, %d) => %ld\n", buff, len,
+			PTR_ERR(bio));
+		return PTR_ERR(bio);
+	}
+	ret = blk_rq_append_bio(req->q, req, bio);
+	if (ret) {
+		OSD_ERR("Failed blk_rq_append_bio(%p) => %d\n", bio, ret);
+		bio_put(bio);
+	}
+	return ret;
+}
+
+static int _req_append_segment(struct osd_request *or,
+	unsigned padding, struct _osd_req_data_segment *seg,
+	struct _osd_req_data_segment *last_seg, struct _osd_io_info *io)
+{
+	void *pad_buff;
+	int ret;
+
+	if (padding) {
+		/* check if we can just add it to last buffer */
+		if (last_seg &&
+		    (padding <= last_seg->alloc_size - last_seg->total_bytes))
+			pad_buff = last_seg->buff + last_seg->total_bytes;
+		else
+			pad_buff = io->pad_buff;
+
+		ret = _append_map_kern(io->req, pad_buff, padding,
+				       or->alloc_flags);
+		if (ret)
+			return ret;
+		io->total_bytes += padding;
+	}
+
+	ret = _append_map_kern(io->req, seg->buff, seg->total_bytes,
+			       or->alloc_flags);
+	if (ret)
+		return ret;
+
+	io->total_bytes += seg->total_bytes;
+	OSD_DEBUG("padding=%d buff=%p total_bytes=%d\n", padding, seg->buff,
+		  seg->total_bytes);
+	return 0;
+}
+
+static int _osd_req_finalize_set_attr_list(struct osd_request *or)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+	unsigned padding;
+	int ret;
+
+	if (!or->set_attr.total_bytes) {
+		cdbh->attrs_list.set_attr_offset = OSD_OFFSET_UNUSED;
+		return 0;
+	}
+
+	cdbh->attrs_list.set_attr_bytes = cpu_to_be32(or->set_attr.total_bytes);
+	cdbh->attrs_list.set_attr_offset =
+		osd_req_encode_offset(or, or->out.total_bytes, &padding);
+
+	ret = _req_append_segment(or, padding, &or->set_attr,
+				  or->out.last_seg, &or->out);
+	if (ret)
+		return ret;
+
+	or->out.last_seg = &or->set_attr;
+	return 0;
+}
+
+int osd_req_add_get_attr_list(struct osd_request *or,
+	const struct osd_attr *oa, unsigned nelem)
+{
+	unsigned total_bytes = or->enc_get_attr.total_bytes;
+	void *attr_last;
+	int ret;
+
+	if (or->attributes_mode &&
+	    or->attributes_mode != OSD_CDB_GET_SET_ATTR_LISTS) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+	or->attributes_mode = OSD_CDB_GET_SET_ATTR_LISTS;
+
+	/* first time calc data-in list header size */
+	if (!or->get_attr.total_bytes)
+		or->get_attr.total_bytes = _osd_req_sizeof_alist_header(or);
+
+	/* calc data-out info */
+	if (!total_bytes) { /* first-time: allocate and put list header */
+		unsigned max_bytes;
+
+		total_bytes = _osd_req_sizeof_alist_header(or);
+		max_bytes = total_bytes +
+			nelem * sizeof(struct osd_attributes_list_attrid);
+		ret = _alloc_get_attr_desc(or, max_bytes);
+		if (ret)
+			return ret;
+
+		_osd_req_set_alist_type(or, or->enc_get_attr.buff,
+					OSD_ATTR_LIST_GET);
+	}
+	attr_last = or->enc_get_attr.buff + total_bytes;
+
+	for (; nelem; --nelem) {
+		struct osd_attributes_list_attrid *attrid;
+		const unsigned cur_size = sizeof(*attrid);
+
+		total_bytes += cur_size;
+		if (unlikely(or->enc_get_attr.alloc_size < total_bytes)) {
+			or->enc_get_attr.total_bytes = total_bytes - cur_size;
+			ret = _alloc_get_attr_desc(or,
+					total_bytes + nelem * sizeof(*attrid));
+			if (ret)
+				return ret;
+			attr_last = or->enc_get_attr.buff +
+				or->enc_get_attr.total_bytes;
+		}
+
+		attrid = attr_last;
+		attrid->attr_page = cpu_to_be32(oa->attr_page);
+		attrid->attr_id = cpu_to_be32(oa->attr_id);
+
+		attr_last += cur_size;
+
+		/* calc data-in size */
+		or->get_attr.total_bytes +=
+			_osd_req_alist_elem_size(or, oa->len);
+		++oa;
+	}
+
+	or->enc_get_attr.total_bytes = total_bytes;
+
+	OSD_DEBUG(
+	       "get_attr.total_bytes=%u(%u) enc_get_attr.total_bytes=%u(%Zu)\n",
+	       or->get_attr.total_bytes,
+	       or->get_attr.total_bytes - _osd_req_sizeof_alist_header(or),
+	       or->enc_get_attr.total_bytes,
+	       (or->enc_get_attr.total_bytes - _osd_req_sizeof_alist_header(or))
+			/ sizeof(struct osd_attributes_list_attrid));
+
+	return 0;
+}
+EXPORT_SYMBOL(osd_req_add_get_attr_list);
+
+static int _osd_req_finalize_get_attr_list(struct osd_request *or)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+	unsigned out_padding;
+	unsigned in_padding;
+	int ret;
+
+	if (!or->enc_get_attr.total_bytes) {
+		cdbh->attrs_list.get_attr_desc_offset = OSD_OFFSET_UNUSED;
+		cdbh->attrs_list.get_attr_offset = OSD_OFFSET_UNUSED;
+		return 0;
+	}
+
+	ret = _alloc_get_attr_list(or);
+	if (ret)
+		return ret;
+
+	/* The out-going buffer info update */
+	OSD_DEBUG("out-going\n");
+	cdbh->attrs_list.get_attr_desc_bytes =
+		cpu_to_be32(or->enc_get_attr.total_bytes);
+
+	cdbh->attrs_list.get_attr_desc_offset =
+		osd_req_encode_offset(or, or->out.total_bytes, &out_padding);
+
+	ret = _req_append_segment(or, out_padding, &or->enc_get_attr,
+				  or->out.last_seg, &or->out);
+	if (ret)
+		return ret;
+	or->out.last_seg = &or->enc_get_attr;
+
+	/* The incoming buffer info update */
+	OSD_DEBUG("in-coming\n");
+	cdbh->attrs_list.get_attr_alloc_length =
+		cpu_to_be32(or->get_attr.total_bytes);
+
+	cdbh->attrs_list.get_attr_offset =
+		osd_req_encode_offset(or, or->in.total_bytes, &in_padding);
+
+	ret = _req_append_segment(or, in_padding, &or->get_attr, NULL,
+				  &or->in);
+	if (ret)
+		return ret;
+	or->in.last_seg = &or->get_attr;
+
+	return 0;
+}
+
+int osd_req_decode_get_attr_list(struct osd_request *or,
+	struct osd_attr *oa, int *nelem, void **iterator)
+{
+	unsigned cur_bytes, returned_bytes;
+	int n;
+	const unsigned sizeof_attr_list = _osd_req_sizeof_alist_header(or);
+	void *cur_p;
+
+	if (!_osd_req_is_alist_type(or, or->get_attr.buff,
+				    OSD_ATTR_LIST_SET_RETRIEVE)) {
+		oa->attr_page = 0;
+		oa->attr_id = 0;
+		oa->val_ptr = NULL;
+		oa->len = 0;
+		*iterator = NULL;
+		return 0;
+	}
+
+	if (*iterator) {
+		BUG_ON((*iterator < or->get_attr.buff) ||
+		     (or->get_attr.buff + or->get_attr.alloc_size < *iterator));
+		cur_p = *iterator;
+		cur_bytes = (*iterator - or->get_attr.buff) - sizeof_attr_list;
+		returned_bytes = or->get_attr.total_bytes;
+	} else { /* first time decode the list header */
+		cur_bytes = sizeof_attr_list;
+		returned_bytes = _osd_req_alist_size(or, or->get_attr.buff) +
+					sizeof_attr_list;
+
+		cur_p = or->get_attr.buff + sizeof_attr_list;
+
+		if (returned_bytes > or->get_attr.alloc_size) {
+			OSD_DEBUG("target report: space was not big enough! "
+				  "Allocate=%u Needed=%u\n",
+				  or->get_attr.alloc_size,
+				  returned_bytes + sizeof_attr_list);
+
+			returned_bytes =
+				or->get_attr.alloc_size - sizeof_attr_list;
+		}
+		or->get_attr.total_bytes = returned_bytes;
+	}
+
+	for (n = 0; (n < *nelem) && (cur_bytes < returned_bytes); ++n) {
+		struct osd_attributes_list_element *attr = cur_p;
+		unsigned inc;
+
+		oa->len = be16_to_cpu(attr->attr_bytes);
+		inc = _osd_req_alist_elem_size(or, oa->len);
+		OSD_DEBUG("oa->len=%d inc=%d cur_bytes=%d\n",
+			  oa->len, inc, cur_bytes);
+		cur_bytes += inc;
+		if (cur_bytes > returned_bytes) {
+			OSD_ERR("BAD FOOD from target. list not valid!"
+				"c=%d r=%d n=%d\n",
+				cur_bytes, returned_bytes, n);
+			oa->val_ptr = NULL;
+			break;
+		}
+
+		oa->attr_page = be32_to_cpu(attr->attr_page);
+		oa->attr_id = be32_to_cpu(attr->attr_id);
+		oa->val_ptr = attr->attr_val;
+
+		cur_p += inc;
+		++oa;
+	}
+
+	*iterator = (returned_bytes - cur_bytes) ? cur_p : NULL;
+	*nelem = n;
+	return returned_bytes - cur_bytes;
+}
+EXPORT_SYMBOL(osd_req_decode_get_attr_list);
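+
+/*
+ * Illustrative call pattern (a sketch; see _osd_print_system_info above for
+ * an in-tree user):
+ *	void *iter = NULL;
+ *	int nelem = ARRAY_SIZE(get_attrs);
+ *
+ *	osd_req_decode_get_attr_list(or, get_attrs, &nelem, &iter);
+ *	On return nelem holds the number of decoded attributes and each
+ *	get_attrs[i].val_ptr points into the returned attribute list buffer.
+ */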
+
+/*
+ * Attributes Page-mode
+ */
+
+int osd_req_add_get_attr_page(struct osd_request *or,
+	u32 page_id, void *attar_page, unsigned max_page_len,
+	const struct osd_attr *set_one_attr)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+
+	if (or->attributes_mode &&
+	    or->attributes_mode != OSD_CDB_GET_ATTR_PAGE_SET_ONE) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+	or->attributes_mode = OSD_CDB_GET_ATTR_PAGE_SET_ONE;
+
+	or->get_attr.buff = attar_page;
+	or->get_attr.total_bytes = max_page_len;
+
+	or->set_attr.buff = set_one_attr->val_ptr;
+	or->set_attr.total_bytes = set_one_attr->len;
+
+	cdbh->attrs_page.get_attr_page = cpu_to_be32(page_id);
+	cdbh->attrs_page.get_attr_alloc_length = cpu_to_be32(max_page_len);
+	/* ocdb->attrs_page.get_attr_offset; */
+
+	cdbh->attrs_page.set_attr_page = cpu_to_be32(set_one_attr->attr_page);
+	cdbh->attrs_page.set_attr_id = cpu_to_be32(set_one_attr->attr_id);
+	cdbh->attrs_page.set_attr_length = cpu_to_be32(set_one_attr->len);
+	/* ocdb->attrs_page.set_attr_offset; */
+	return 0;
+}
+EXPORT_SYMBOL(osd_req_add_get_attr_page);
+
+static int _osd_req_finalize_attr_page(struct osd_request *or)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+	unsigned in_padding, out_padding;
+	int ret;
+
+	/* returned page */
+	cdbh->attrs_page.get_attr_offset =
+		osd_req_encode_offset(or, or->in.total_bytes, &in_padding);
+
+	ret = _req_append_segment(or, in_padding, &or->get_attr, NULL,
+				  &or->in);
+	if (ret)
+		return ret;
+
+	/* set one value */
+	cdbh->attrs_page.set_attr_offset =
+		osd_req_encode_offset(or, or->out.total_bytes, &out_padding);
+
+	ret = _req_append_segment(or, out_padding, &or->enc_get_attr, NULL,
+				  &or->out);
+	return ret;
+}
+
+static int _osd_req_finalize_data_integrity(struct osd_request *or,
+	bool has_in, bool has_out, const u8 *cap_key)
+{
+	struct osd_security_parameters *sec_parms = _osd_req_sec_params(or);
+	int ret;
+
+	if (!osd_is_sec_alldata(sec_parms))
+		return 0;
+
+	if (has_out) {
+		struct _osd_req_data_segment seg = {
+			.buff = &or->out_data_integ,
+			.total_bytes = sizeof(or->out_data_integ),
+		};
+		unsigned pad;
+
+		or->out_data_integ.data_bytes = cpu_to_be64(
+			or->out.bio ? or->out.bio->bi_size : 0);
+		or->out_data_integ.set_attributes_bytes = cpu_to_be64(
+			or->set_attr.total_bytes);
+		or->out_data_integ.get_attributes_bytes = cpu_to_be64(
+			or->enc_get_attr.total_bytes);
+
+		sec_parms->data_out_integrity_check_offset =
+			osd_req_encode_offset(or, or->out.total_bytes, &pad);
+
+		ret = _req_append_segment(or, pad, &seg, or->out.last_seg,
+					  &or->out);
+		if (ret)
+			return ret;
+		or->out.last_seg = NULL;
+
+		/* all are now chained to the request; sign them together */
+		osd_sec_sign_data(&or->out_data_integ, or->out.req->bio,
+				  cap_key);
+	}
+
+	if (has_in) {
+		struct _osd_req_data_segment seg = {
+			.buff = &or->in_data_integ,
+			.total_bytes = sizeof(or->in_data_integ),
+		};
+		unsigned pad;
+
+		sec_parms->data_in_integrity_check_offset =
+			osd_req_encode_offset(or, or->in.total_bytes, &pad);
+
+		ret = _req_append_segment(or, pad, &seg, or->in.last_seg,
+					  &or->in);
+		if (ret)
+			return ret;
+
+		or->in.last_seg = NULL;
+	}
+
+	return 0;
+}
+
+/*
+ * osd_finalize_request and helpers
+ */
+
+static int _init_blk_request(struct osd_request *or,
+	bool has_in, bool has_out)
+{
+	gfp_t flags = or->alloc_flags;
+	struct scsi_device *scsi_device = or->osd_dev->scsi_device;
+	struct request_queue *q = scsi_device->request_queue;
+	struct request *req;
+	int ret = -ENOMEM;
+
+	req = blk_get_request(q, has_out, flags);
+	if (!req)
+		goto out;
+
+	or->request = req;
+	req->cmd_type = REQ_TYPE_BLOCK_PC;
+	req->timeout = or->timeout;
+	req->retries = or->retries;
+	req->sense = or->sense;
+	req->sense_len = 0;
+
+	if (has_out) {
+		or->out.req = req;
+		if (has_in) {
+			/* allocate bidi request */
+			req = blk_get_request(q, READ, flags);
+			if (!req) {
+				OSD_DEBUG("blk_get_request for bidi failed\n");
+				goto out;
+			}
+			req->cmd_type = REQ_TYPE_BLOCK_PC;
+			or->in.req = or->request->next_rq = req;
+		}
+	} else if (has_in)
+		or->in.req = req;
+
+	ret = 0;
+out:
+	OSD_DEBUG("or=%p has_in=%d has_out=%d => %d, %p\n",
+			or, has_in, has_out, ret, or->request);
+	return ret;
+}
+
+int osd_finalize_request(struct osd_request *or,
+	u8 options, const void *cap, const u8 *cap_key)
+{
+	struct osd_cdb_head *cdbh = osd_cdb_head(&or->cdb);
+	bool has_in, has_out;
+	int ret;
+
+	if (options & OSD_REQ_FUA)
+		cdbh->options |= OSD_CDB_FUA;
+
+	if (options & OSD_REQ_DPO)
+		cdbh->options |= OSD_CDB_DPO;
+
+	if (options & OSD_REQ_BYPASS_TIMESTAMPS)
+		cdbh->timestamp_control = OSD_CDB_BYPASS_TIMESTAMPS;
+
+	osd_set_caps(&or->cdb, cap);
+
+	has_in = or->in.bio || or->get_attr.total_bytes;
+	has_out = or->out.bio || or->set_attr.total_bytes ||
+		or->enc_get_attr.total_bytes;
+
+	ret = _init_blk_request(or, has_in, has_out);
+	if (ret) {
+		OSD_DEBUG("_init_blk_request failed\n");
+		return ret;
+	}
+
+	if (or->out.bio) {
+		ret = blk_rq_append_bio(or->request->q, or->out.req,
+					or->out.bio);
+		if (ret) {
+			OSD_DEBUG("blk_rq_append_bio out failed\n");
+			return ret;
+		}
+		OSD_DEBUG("out bytes=%llu (bytes_req=%u)\n",
+			_LLU(or->out.total_bytes), or->out.req->data_len);
+	}
+	if (or->in.bio) {
+		ret = blk_rq_append_bio(or->request->q, or->in.req, or->in.bio);
+		if (ret) {
+			OSD_DEBUG("blk_rq_append_bio in failed\n");
+			return ret;
+		}
+		OSD_DEBUG("in bytes=%llu (bytes_req=%u)\n",
+			_LLU(or->in.total_bytes), or->in.req->data_len);
+	}
+
+	or->out.pad_buff = sg_out_pad_buffer;
+	or->in.pad_buff = sg_in_pad_buffer;
+
+	if (!or->attributes_mode)
+		or->attributes_mode = OSD_CDB_GET_SET_ATTR_LISTS;
+	cdbh->command_specific_options |= or->attributes_mode;
+	if (or->attributes_mode == OSD_CDB_GET_ATTR_PAGE_SET_ONE) {
+		ret = _osd_req_finalize_attr_page(or);
+	} else {
+		/* TODO: I think that for the GET_ATTR command these 2 should
+		 * be reversed to keep them in execution order (for embedded
+		 * targets with low memory footprint)
+		 */
+		ret = _osd_req_finalize_set_attr_list(or);
+		if (ret) {
+			OSD_DEBUG("_osd_req_finalize_set_attr_list failed\n");
+			return ret;
+		}
+
+		ret = _osd_req_finalize_get_attr_list(or);
+		if (ret) {
+			OSD_DEBUG("_osd_req_finalize_get_attr_list failed\n");
+			return ret;
+		}
+	}
+
+	ret = _osd_req_finalize_data_integrity(or, has_in, has_out, cap_key);
+	if (ret)
+		return ret;
+
+	osd_sec_sign_cdb(&or->cdb, cap_key);
+
+	or->request->cmd = or->cdb.buff;
+	or->request->cmd_len = _osd_req_cdb_len(or);
+
+	return 0;
+}
+EXPORT_SYMBOL(osd_finalize_request);
+
+#define OSD_SENSE_PRINT1(fmt, a...) \
+	do { \
+		if (__cur_sense_need_output) \
+			OSD_ERR(fmt, ##a); \
+	} while (0)
+
+#define OSD_SENSE_PRINT2(fmt, a...) OSD_SENSE_PRINT1("    " fmt, ##a)
+
+int osd_req_decode_sense_full(struct osd_request *or,
+	struct osd_sense_info *osi, bool silent,
+	struct osd_obj_id *bad_obj_list __unused, int max_obj __unused,
+	struct osd_attr *bad_attr_list, int max_attr)
+{
+	int sense_len, original_sense_len;
+	struct osd_sense_info local_osi;
+	struct scsi_sense_descriptor_based *ssdb;
+	void *cur_descriptor;
+#if (CONFIG_SCSI_OSD_DPRINT_SENSE == 0)
+	const bool __cur_sense_need_output = false;
+#else
+	bool __cur_sense_need_output = !silent;
+#endif
+
+	if (!or->request->errors)
+		return 0;
+
+	ssdb = or->request->sense;
+	sense_len = or->request->sense_len;
+	if ((sense_len < (int)sizeof(*ssdb) || !ssdb->sense_key)) {
+		OSD_ERR("Block-layer returned error(0x%x) but "
+			"sense_len(%u) || key(%d) is empty\n",
+			or->request->errors, sense_len, ssdb->sense_key);
+		return -EIO;
+	}
+
+	if ((ssdb->response_code != 0x72) && (ssdb->response_code != 0x73)) {
+		OSD_ERR("Unrecognized scsi sense: rcode=%x length=%d\n",
+			ssdb->response_code, sense_len);
+		return -EIO;
+	}
+
+	osi = osi ? : &local_osi;
+	memset(osi, 0, sizeof(*osi));
+	osi->key = ssdb->sense_key;
+	osi->additional_code = be16_to_cpu(ssdb->additional_sense_code);
+	original_sense_len = ssdb->additional_sense_length + 8;
+
+#if (CONFIG_SCSI_OSD_DPRINT_SENSE == 1)
+	if (__cur_sense_need_output)
+		__cur_sense_need_output = (osi->key > scsi_sk_recovered_error);
+#endif
+	OSD_SENSE_PRINT1("Main Sense information key=0x%x length(%d, %d) "
+			"additional_code=0x%x\n",
+			osi->key, original_sense_len, sense_len,
+			osi->additional_code);
+
+	if (original_sense_len < sense_len)
+		sense_len = original_sense_len;
+
+	cur_descriptor = ssdb->ssd;
+	sense_len -= sizeof(*ssdb);
+	while (sense_len > 0) {
+		struct scsi_sense_descriptor *ssd = cur_descriptor;
+		int cur_len = ssd->additional_length + 2;
+
+		sense_len -= cur_len;
+
+		if (sense_len < 0)
+			break; /* sense was truncated */
+
+		switch (ssd->descriptor_type) {
+		case scsi_sense_information:
+		case scsi_sense_command_specific_information:
+		{
+			struct scsi_sense_command_specific_data_descriptor
+				*sscd = cur_descriptor;
+
+			osi->command_info =
+				get_unaligned_be64(&sscd->information) ;
+			OSD_SENSE_PRINT2(
+				"command_specific_information 0x%llx \n",
+				_LLU(osi->command_info));
+			break;
+		}
+		case scsi_sense_key_specific:
+		{
+			struct scsi_sense_key_specific_data_descriptor
+				*ssks = cur_descriptor;
+
+			osi->sense_info = get_unaligned_be16(&ssks->value);
+			OSD_SENSE_PRINT2(
+				"sense_key_specific_information %u"
+				"sksv_cd_bpv_bp (0x%x)\n",
+				osi->sense_info, ssks->sksv_cd_bpv_bp);
+			break;
+		}
+		case osd_sense_object_identification:
+		{ /*FIXME: Keep first not last, Store in array*/
+			struct osd_sense_identification_data_descriptor
+				*osidd = cur_descriptor;
+
+			osi->not_initiated_command_functions =
+				le32_to_cpu(osidd->not_initiated_functions);
+			osi->completed_command_functions =
+				le32_to_cpu(osidd->completed_functions);
+			osi->obj.partition = be64_to_cpu(osidd->partition_id);
+			osi->obj.id = be64_to_cpu(osidd->object_id);
+			OSD_SENSE_PRINT2(
+				"object_identification pid=0x%llx oid=0x%llx\n",
+				_LLU(osi->obj.partition), _LLU(osi->obj.id));
+			OSD_SENSE_PRINT2(
+				"not_initiated_bits(%x) "
+				"completed_command_bits(%x)\n",
+				osi->not_initiated_command_functions,
+				osi->completed_command_functions);
+			break;
+		}
+		case osd_sense_response_integrity_check:
+		{
+			struct osd_sense_response_integrity_check_descriptor
+				*osricd = cur_descriptor;
+			const unsigned len =
+					  sizeof(osricd->integrity_check_value);
+			char key_dump[len*4 + 2]; /* 2nibbles+space+ASCII */
+
+			hex_dump_to_buffer(osricd->integrity_check_value, len,
+				       32, 1, key_dump, sizeof(key_dump), true);
+			OSD_SENSE_PRINT2("response_integrity [%s]\n", key_dump);
+			break;
+		}
+		case osd_sense_attribute_identification:
+		{
+			struct osd_sense_attributes_data_descriptor
+				*osadd = cur_descriptor;
+			int len = min(cur_len, sense_len);
+			int i = 0;
+			struct osd_sense_attr *pattr = osadd->sense_attrs;
+
+			while (len >= (int)sizeof(*pattr)) {
+				u32 attr_page = be32_to_cpu(pattr->attr_page);
+				u32 attr_id = be32_to_cpu(pattr->attr_id);
+
+				if (i++ == 0) {
+					osi->attr.attr_page = attr_page;
+					osi->attr.attr_id = attr_id;
+				}
+
+				if (bad_attr_list && max_attr) {
+					bad_attr_list->attr_page = attr_page;
+					bad_attr_list->attr_id = attr_id;
+					bad_attr_list++;
+					max_attr--;
+				}
+				OSD_SENSE_PRINT2(
+					"osd_sense_attribute_identification "
+					"attr_page=0x%x attr_id=0x%x\n",
+					attr_page, attr_id);
+				len -= sizeof(*pattr);
+				pattr++;
+			}
+			break;
+		}
+		/*These are not legal for OSD*/
+		case scsi_sense_field_replaceable_unit:
+			OSD_SENSE_PRINT2("scsi_sense_field_replaceable_unit\n");
+			break;
+		case scsi_sense_stream_commands:
+			OSD_SENSE_PRINT2("scsi_sense_stream_commands\n");
+			break;
+		case scsi_sense_block_commands:
+			OSD_SENSE_PRINT2("scsi_sense_block_commands\n");
+			break;
+		case scsi_sense_ata_return:
+			OSD_SENSE_PRINT2("scsi_sense_ata_return\n");
+			break;
+		default:
+			if (ssd->descriptor_type <= scsi_sense_Reserved_last)
+				OSD_SENSE_PRINT2(
+					"scsi_sense Reserved descriptor (0x%x)\n",
+					ssd->descriptor_type);
+			else
+				OSD_SENSE_PRINT2(
+					"scsi_sense Vendor descriptor (0x%x)\n",
+					ssd->descriptor_type);
+		}
+
+		cur_descriptor += cur_len;
+	}
+
+	return (osi->key > scsi_sk_recovered_error) ? -EIO : 0;
+}
+EXPORT_SYMBOL(osd_req_decode_sense_full);
+
+/*
+ * Implementation of osd_sec.h API
+ * TODO: Move to a separate osd_sec.c file at a later stage.
+ */
+
+enum { OSD_SEC_CAP_V1_ALL_CAPS =
+	OSD_SEC_CAP_APPEND | OSD_SEC_CAP_OBJ_MGMT | OSD_SEC_CAP_REMOVE   |
+	OSD_SEC_CAP_CREATE | OSD_SEC_CAP_SET_ATTR | OSD_SEC_CAP_GET_ATTR |
+	OSD_SEC_CAP_WRITE  | OSD_SEC_CAP_READ     | OSD_SEC_CAP_POL_SEC  |
+	OSD_SEC_CAP_GLOBAL | OSD_SEC_CAP_DEV_MGMT
+};
+
+enum { OSD_SEC_CAP_V2_ALL_CAPS =
+	OSD_SEC_CAP_V1_ALL_CAPS | OSD_SEC_CAP_QUERY | OSD_SEC_CAP_M_OBJECT
+};
+
+void osd_sec_init_nosec_doall_caps(void *caps,
+	const struct osd_obj_id *obj, bool is_collection, const bool is_v1)
+{
+	struct osd_capability *cap = caps;
+	u8 type;
+	u8 descriptor_type;
+
+	if (likely(obj->id)) {
+		if (unlikely(is_collection)) {
+			type = OSD_SEC_OBJ_COLLECTION;
+			descriptor_type = is_v1 ? OSD_SEC_OBJ_DESC_OBJ :
+						  OSD_SEC_OBJ_DESC_COL;
+		} else {
+			type = OSD_SEC_OBJ_USER;
+			descriptor_type = OSD_SEC_OBJ_DESC_OBJ;
+		}
+		WARN_ON(!obj->partition);
+	} else {
+		type = obj->partition ? OSD_SEC_OBJ_PARTITION :
+					OSD_SEC_OBJ_ROOT;
+		descriptor_type = OSD_SEC_OBJ_DESC_PAR;
+	}
+
+	memset(cap, 0, sizeof(*cap));
+
+	cap->h.format = OSD_SEC_CAP_FORMAT_VER1;
+	cap->h.integrity_algorithm__key_version = 0; /* MAKE_BYTE(0, 0); */
+	cap->h.security_method = OSD_SEC_NOSEC;
+/*	cap->expiration_time;
+	cap->AUDIT[30-10];
+	cap->discriminator[42-30];
+	cap->object_created_time; */
+	cap->h.object_type = type;
+	osd_sec_set_caps(&cap->h, OSD_SEC_CAP_V1_ALL_CAPS);
+	cap->h.object_descriptor_type = descriptor_type;
+	cap->od.obj_desc.policy_access_tag = 0;
+	cap->od.obj_desc.allowed_partition_id = cpu_to_be64(obj->partition);
+	cap->od.obj_desc.allowed_object_id = cpu_to_be64(obj->id);
+}
+EXPORT_SYMBOL(osd_sec_init_nosec_doall_caps);
+
+/* FIXME: Extract version from caps pointer.
+ *        Also Pete's target only supports caps from OSDv1 for now
+ */
+void osd_set_caps(struct osd_cdb *cdb, const void *caps)
+{
+	bool is_ver1 = true;
+	/* NOTE: They start at the same address */
+	memcpy(&cdb->v1.caps, caps, is_ver1 ? OSDv1_CAP_LEN : OSD_CAP_LEN);
+}
+
+bool osd_is_sec_alldata(struct osd_security_parameters *sec_parms __unused)
+{
+	return false;
+}
+
+void osd_sec_sign_cdb(struct osd_cdb *ocdb __unused, const u8 *cap_key __unused)
+{
+}
+
+void osd_sec_sign_data(void *data_integ __unused,
+		       struct bio *bio __unused, const u8 *cap_key __unused)
+{
+}
+
+/*
+ * Declared in osd_protocol.h
+ * 4.12.5 Data-In and Data-Out buffer offsets
+ * byte offset = mantissa * (2^(exponent+8))
+ * Returns the smallest allowed encoded offset that contains the given @offset.
+ * The actual encoded offset returned is @offset + *@padding.
+ */
+osd_cdb_offset __osd_encode_offset(
+	u64 offset, unsigned *padding, int min_shift, int max_shift)
+{
+	u64 try_offset = -1, mod, align;
+	osd_cdb_offset be32_offset;
+	int shift;
+
+	*padding = 0;
+	if (!offset)
+		return 0;
+
+	for (shift = min_shift; shift < max_shift; ++shift) {
+		try_offset = offset >> shift;
+		if (try_offset < (1 << OSD_OFFSET_MAX_BITS))
+			break;
+	}
+
+	BUG_ON(shift == max_shift);
+
+	align = 1 << shift;
+	mod = offset & (align - 1);
+	if (mod) {
+		*padding = align - mod;
+		try_offset += 1;
+	}
+
+	try_offset |= ((shift - 8) & 0xf) << 28;
+	be32_offset = cpu_to_be32((u32)try_offset);
+
+	OSD_DEBUG("offset=%llu mantissa=%llu exp=%d encoded=%x pad=%d\n",
+		 _LLU(offset), _LLU(try_offset & 0x0FFFFFFF), shift,
+		 be32_offset, *padding);
+	return be32_offset;
+}
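
A quick worked example of the encoding above (illustration only, not part of the
patch): with an assumed minimum shift of 8, an offset of 300 bytes does not sit on
a 256-byte boundary, so it is rounded up to 512, i.e. mantissa 2 with exponent
nibble 0, and *padding reports the extra 212 bytes. The sketch below re-does that
arithmetic in user space; the real helper additionally walks larger shifts when
the mantissa would overflow the low 28 bits and byte-swaps the result with
cpu_to_be32().

	/* Illustration only: user-space re-check of __osd_encode_offset() math. */
	#include <stdio.h>
	#include <stdint.h>

	static uint32_t encode_offset(uint64_t offset, unsigned *padding)
	{
		int shift = 8;			/* assumed minimum shift */
		uint64_t mantissa = offset >> shift;
		uint64_t align = 1ULL << shift;
		uint64_t mod = offset & (align - 1);

		*padding = 0;
		if (mod) {			/* round up to the next aligned offset */
			*padding = align - mod;
			mantissa++;
		}
		return (((uint32_t)(shift - 8) & 0xf) << 28) | (uint32_t)mantissa;
	}

	int main(void)
	{
		unsigned pad;
		uint32_t enc = encode_offset(300, &pad);

		/* mantissa=2, exponent=0 -> 2 * 2^(0+8) = 512 = 300 + 212 of padding */
		printf("encoded=0x%08x padding=%u\n", (unsigned)enc, pad);
		return 0;
	}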
diff --git a/drivers/scsi/osd/osd_uld.c b/drivers/scsi/osd/osd_uld.c
new file mode 100644
index 0000000..f8b1a74
--- /dev/null
+++ b/drivers/scsi/osd/osd_uld.c
@@ -0,0 +1,487 @@
+/*
+ * osd_uld.c - OSD Upper Layer Driver
+ *
+ * A Linux driver module that registers as a SCSI ULD and probes
+ * for OSD type SCSI devices.
+ * Its main function is to export osd devices to in-kernel users like
+ * osdfs and pNFS-objects-LD. It also provides one ioctl for running
+ * in-kernel tests.
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  1. Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *  2. Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in the
+ *     documentation and/or other materials provided with the distribution.
+ *  3. Neither the name of the Panasas company nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ * DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
+ * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/namei.h>
+#include <linux/cdev.h>
+#include <linux/fs.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/idr.h>
+#include <linux/major.h>
+
+#include <scsi/scsi.h>
+#include <scsi/scsi_driver.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi_ioctl.h>
+
+#include <scsi/osd_initiator.h>
+#include <scsi/osd_sec.h>
+
+#include "osd_debug.h"
+
+#ifndef TYPE_OSD
+#  define TYPE_OSD 0x11
+#endif
+
+#ifndef SCSI_OSD_MAJOR
+#  define SCSI_OSD_MAJOR 260
+#endif
+#define SCSI_OSD_MAX_MINOR 64
+
+static const char osd_name[] = "osd";
+static const char *osd_version_string = "open-osd 0.1.0";
+const char osd_symlink[] = "scsi_osd";
+
+MODULE_AUTHOR("Boaz Harrosh <bharrosh@panasas.com>");
+MODULE_DESCRIPTION("open-osd Upper-Layer-Driver osd.ko");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CHARDEV_MAJOR(SCSI_OSD_MAJOR);
+MODULE_ALIAS_SCSI_DEVICE(TYPE_OSD);
+
+struct osd_uld_device {
+	int minor;
+	struct kref kref;
+	struct cdev cdev;
+	struct osd_dev od;
+	struct gendisk *disk;
+	struct device *class_member;
+};
+
+static void __uld_get(struct osd_uld_device *oud);
+static void __uld_put(struct osd_uld_device *oud);
+
+/*
+ * Char Device operations
+ */
+
+static int osd_uld_open(struct inode *inode, struct file *file)
+{
+	struct osd_uld_device *oud = container_of(inode->i_cdev,
+					struct osd_uld_device, cdev);
+
+	__uld_get(oud);
+	/* cache osd_uld_device on file handle */
+	file->private_data = oud;
+	OSD_DEBUG("osd_uld_open %p\n", oud);
+	return 0;
+}
+
+static int osd_uld_release(struct inode *inode, struct file *file)
+{
+	struct osd_uld_device *oud = file->private_data;
+
+	OSD_DEBUG("osd_uld_release %p\n", file->private_data);
+	file->private_data = NULL;
+	__uld_put(oud);
+	return 0;
+}
+
+/* FIXME: Only one vector for now */
+unsigned g_test_ioctl;
+do_test_fn *g_do_test;
+
+int osduld_register_test(unsigned ioctl, do_test_fn *do_test)
+{
+	if (g_test_ioctl)
+		return -EINVAL;
+
+	g_test_ioctl = ioctl;
+	g_do_test = do_test;
+	return 0;
+}
+EXPORT_SYMBOL(osduld_register_test);
+
+void osduld_unregister_test(unsigned ioctl)
+{
+	if (ioctl == g_test_ioctl) {
+		g_test_ioctl = 0;
+		g_do_test = NULL;
+	}
+}
+EXPORT_SYMBOL(osduld_unregister_test);
+
+static do_test_fn *_find_ioctl(unsigned cmd)
+{
+	if (g_test_ioctl == cmd)
+		return g_do_test;
+	else
+		return NULL;
+}
+
+static long osd_uld_ioctl(struct file *file, unsigned int cmd,
+	unsigned long arg)
+{
+	struct osd_uld_device *oud = file->private_data;
+	int ret;
+	do_test_fn *do_test;
+
+	do_test = _find_ioctl(cmd);
+	if (do_test)
+		ret = do_test(&oud->od, cmd, arg);
+	else {
+		OSD_ERR("Unknown ioctl %d: osd_uld_device=%p\n", cmd, oud);
+		ret = -ENOIOCTLCMD;
+	}
+	return ret;
+}
+
+static const struct file_operations osd_fops = {
+	.owner          = THIS_MODULE,
+	.open           = osd_uld_open,
+	.release        = osd_uld_release,
+	.unlocked_ioctl = osd_uld_ioctl,
+};
+
+struct osd_dev *osduld_path_lookup(const char *path)
+{
+	struct nameidata nd;
+	struct inode *inode;
+	struct cdev *cdev;
+	struct osd_uld_device *uninitialized_var(oud);
+	int error;
+
+	if (!path || !*path) {
+		OSD_ERR("Mount with !path || !*path\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	error = path_lookup(path, LOOKUP_FOLLOW, &nd);
+	if (error) {
+		OSD_ERR("path_lookup of %s failed => %d\n", path, error);
+		return ERR_PTR(error);
+	}
+
+	inode = nd.path.dentry->d_inode;
+	error = -EINVAL; /* Not the right device, e.g. not an osd_uld_device */
+	if (!S_ISCHR(inode->i_mode)) {
+		OSD_DEBUG("!S_ISCHR()\n");
+		goto out;
+	}
+
+	cdev = inode->i_cdev;
+	if (!cdev) {
+		OSD_ERR("Before mounting an OSD Based filesystem\n");
+		OSD_ERR("  user-mode must open+close the %s device\n", path);
+		OSD_ERR("  Example: bash: echo < %s\n", path);
+		goto out;
+	}
+
+	/* The magic wand: is it our char-dev? */
+	/* TODO: Support sg devices */
+	if (cdev->owner != THIS_MODULE) {
+		OSD_ERR("Error mounting %s - is not an OSD device\n", path);
+		goto out;
+	}
+
+	oud = container_of(cdev, struct osd_uld_device, cdev);
+
+	__uld_get(oud);
+	error = 0;
+
+out:
+	path_put(&nd.path);
+	return error ? ERR_PTR(error) : &oud->od;
+}
+EXPORT_SYMBOL(osduld_path_lookup);
+
+void osduld_put_device(struct osd_dev *od)
+{
+	if (od) {
+		struct osd_uld_device *oud = container_of(od,
+						struct osd_uld_device, od);
+
+		__uld_put(oud);
+	}
+}
+EXPORT_SYMBOL(osduld_put_device);
+
+/*
+ * Scsi Device operations
+ */
+
+static int __detect_osd(struct osd_uld_device *oud)
+{
+	struct scsi_device *scsi_device = oud->od.scsi_device;
+	char caps[OSD_CAP_LEN];
+	int error;
+
+	/* sending a test_unit_ready as the first command seems to be needed
+	 * by some targets
+	 */
+	OSD_DEBUG("start scsi_test_unit_ready %p %p %p\n",
+			oud, scsi_device, scsi_device->request_queue);
+	error = scsi_test_unit_ready(scsi_device, 10*HZ, 5, NULL);
+	if (error)
+		OSD_ERR("warning: scsi_test_unit_ready failed\n");
+
+	osd_sec_init_nosec_doall_caps(caps, &osd_root_object, false, true);
+	if (osd_auto_detect_ver(&oud->od, caps))
+		return -ENODEV;
+
+	return 0;
+}
+
+static struct class *osd_sysfs_class;
+static DEFINE_IDA(osd_minor_ida);
+
+static int osd_probe(struct device *dev)
+{
+	struct scsi_device *scsi_device = to_scsi_device(dev);
+	struct gendisk *disk;
+	struct osd_uld_device *oud;
+	int minor;
+	int error;
+
+	if (scsi_device->type != TYPE_OSD)
+		return -ENODEV;
+
+	do {
+		if (!ida_pre_get(&osd_minor_ida, GFP_KERNEL))
+			return -ENODEV;
+
+		error = ida_get_new(&osd_minor_ida, &minor);
+	} while (error == -EAGAIN);
+
+	if (error)
+		return error;
+	if (minor >= SCSI_OSD_MAX_MINOR) {
+		error = -EBUSY;
+		goto err_retract_minor;
+	}
+
+	error = -ENOMEM;
+	oud = kzalloc(sizeof(*oud), GFP_KERNEL);
+	if (NULL == oud)
+		goto err_retract_minor;
+
+	kref_init(&oud->kref);
+	dev_set_drvdata(dev, oud);
+	oud->minor = minor;
+
+	/* allocate a disk and set it up */
+	/* FIXME: do we need this since sg has already done that */
+	disk = alloc_disk(1);
+	if (!disk) {
+		OSD_ERR("alloc_disk failed\n");
+		goto err_free_osd;
+	}
+	disk->major = SCSI_OSD_MAJOR;
+	disk->first_minor = oud->minor;
+	sprintf(disk->disk_name, "osd%d", oud->minor);
+	oud->disk = disk;
+
+	/* hold one more reference to the scsi_device that will get released
+	 * in __remove(), in case a logout is happening while fs is mounted
+	 */
+	scsi_device_get(scsi_device);
+	osd_dev_init(&oud->od, scsi_device);
+
+	/* Detect the OSD Version */
+	error = __detect_osd(oud);
+	if (error) {
+		OSD_ERR("osd detection failed, non-compatible OSD device\n");
+		goto err_put_disk;
+	}
+
+	/* init the char-device for communication with user-mode */
+	cdev_init(&oud->cdev, &osd_fops);
+	oud->cdev.owner = THIS_MODULE;
+	error = cdev_add(&oud->cdev,
+			 MKDEV(SCSI_OSD_MAJOR, oud->minor), 1);
+	if (error) {
+		OSD_ERR("cdev_add failed\n");
+		goto err_put_disk;
+	}
+	kobject_get(&oud->cdev.kobj); /* 2nd ref see osd_remove() */
+
+	/* class_member */
+	oud->class_member = device_create(osd_sysfs_class, dev,
+		MKDEV(SCSI_OSD_MAJOR, oud->minor), "%s", disk->disk_name);
+	if (IS_ERR(oud->class_member)) {
+		OSD_ERR("device_create failed\n");
+		error = PTR_ERR(oud->class_member);
+		goto err_put_cdev;
+	}
+
+	dev_set_drvdata(oud->class_member, oud);
+	error = sysfs_create_link(&scsi_device->sdev_gendev.kobj,
+				  &oud->class_member->kobj, osd_symlink);
+	if (error)
+		OSD_ERR("warning: unable to make symlink\n");
+
+	OSD_INFO("osd_probe %s\n", disk->disk_name);
+	return 0;
+
+err_put_cdev:
+	cdev_del(&oud->cdev);
+err_put_disk:
+	scsi_device_put(scsi_device);
+	put_disk(disk);
+err_free_osd:
+	dev_set_drvdata(dev, NULL);
+	kfree(oud);
+err_retract_minor:
+	ida_remove(&osd_minor_ida, minor);
+	return error;
+}
+
+static int osd_remove(struct device *dev)
+{
+	struct scsi_device *scsi_device = to_scsi_device(dev);
+	struct osd_uld_device *oud = dev_get_drvdata(dev);
+
+	if (!oud || (oud->od.scsi_device != scsi_device)) {
+		OSD_ERR("Half cooked osd-device %p,%p || %p!=%p\n",
+			dev, oud, oud ? oud->od.scsi_device : NULL,
+			scsi_device);
+	}
+
+	sysfs_remove_link(&oud->od.scsi_device->sdev_gendev.kobj, osd_symlink);
+
+	if (oud->class_member)
+		device_destroy(osd_sysfs_class,
+			       MKDEV(SCSI_OSD_MAJOR, oud->minor));
+
+	/* We have 2 references to the cdev. One is released here
+	 * and also takes down the /dev/osdX mapping. The second
+	 * will be released in __remove() after all users have released
+	 * the osd_uld_device.
+	 */
+	if (oud->cdev.owner)
+		cdev_del(&oud->cdev);
+
+	__uld_put(oud);
+	return 0;
+}
+
+static void __remove(struct kref *kref)
+{
+	struct osd_uld_device *oud = container_of(kref,
+					struct osd_uld_device, kref);
+	struct scsi_device *scsi_device = oud->od.scsi_device;
+
+	/* now we can delete the char_dev */
+	kobject_put(&oud->cdev.kobj);
+
+	osd_dev_fini(&oud->od);
+	scsi_device_put(scsi_device);
+
+	OSD_INFO("osd_remove %s\n",
+		 oud->disk ? oud->disk->disk_name : NULL);
+
+	if (oud->disk)
+		put_disk(oud->disk);
+
+	ida_remove(&osd_minor_ida, oud->minor);
+	kfree(oud);
+}
+
+static void __uld_get(struct osd_uld_device *oud)
+{
+	kref_get(&oud->kref);
+}
+
+static void __uld_put(struct osd_uld_device *oud)
+{
+	kref_put(&oud->kref, __remove);
+}
+
+/*
+ * Global driver and scsi registration
+ */
+
+static struct scsi_driver osd_driver = {
+	.owner			= THIS_MODULE,
+	.gendrv = {
+		.name		= osd_name,
+		.probe		= osd_probe,
+		.remove		= osd_remove,
+	}
+};
+
+static int __init osd_uld_init(void)
+{
+	int err;
+
+	osd_sysfs_class = class_create(THIS_MODULE, osd_symlink);
+	if (IS_ERR(osd_sysfs_class)) {
+		OSD_ERR("Unable to register sysfs class => %ld\n",
+			PTR_ERR(osd_sysfs_class));
+		return PTR_ERR(osd_sysfs_class);
+	}
+
+	err = register_chrdev_region(MKDEV(SCSI_OSD_MAJOR, 0),
+				     SCSI_OSD_MAX_MINOR, osd_name);
+	if (err) {
+		OSD_ERR("Unable to register major %d for osd ULD => %d\n",
+			SCSI_OSD_MAJOR, err);
+		goto err_out;
+	}
+
+	err = scsi_register_driver(&osd_driver.gendrv);
+	if (err) {
+		OSD_ERR("scsi_register_driver failed => %d\n", err);
+		goto err_out_chrdev;
+	}
+
+	OSD_INFO("LOADED %s\n", osd_version_string);
+	return 0;
+
+err_out_chrdev:
+	unregister_chrdev_region(MKDEV(SCSI_OSD_MAJOR, 0), SCSI_OSD_MAX_MINOR);
+err_out:
+	class_destroy(osd_sysfs_class);
+	return err;
+}
+
+static void __exit osd_uld_exit(void)
+{
+	scsi_unregister_driver(&osd_driver.gendrv);
+	unregister_chrdev_region(MKDEV(SCSI_OSD_MAJOR, 0), SCSI_OSD_MAX_MINOR);
+	class_destroy(osd_sysfs_class);
+	OSD_INFO("UNLOADED %s\n", osd_version_string);
+}
+
+module_init(osd_uld_init);
+module_exit(osd_uld_exit);
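
For context, a minimal sketch of how an in-kernel user of this ULD (exofs, or the
planned pNFS objects layout driver) could obtain and later release an osd_dev.
Illustration only, not part of the patch: the module boilerplate, the /dev/osd0
path, and the location of the osduld_* declarations (assumed here to be
<scsi/osd_initiator.h>, which osd_uld.c itself includes) are assumptions of this
sketch. The user-space open+close requirement noted in osduld_path_lookup() still
applies before such a lookup can succeed.

	/* Hypothetical client module -- illustration only. */
	#include <linux/module.h>
	#include <linux/err.h>
	#include <scsi/osd_initiator.h>

	static struct osd_dev *od;

	static int __init osd_client_init(void)
	{
		/* exofs would take this path from its mount options */
		od = osduld_path_lookup("/dev/osd0");
		if (IS_ERR(od))
			return PTR_ERR(od);

		/* ... build and execute osd requests against 'od' here ... */
		return 0;
	}

	static void __exit osd_client_exit(void)
	{
		osduld_put_device(od);	/* drops the reference taken by the lookup */
	}

	module_init(osd_client_init);
	module_exit(osd_client_exit);
	MODULE_LICENSE("GPL");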
diff --git a/drivers/scsi/osst.c b/drivers/scsi/osst.c
index 0ea78d9..acb8358 100644
--- a/drivers/scsi/osst.c
+++ b/drivers/scsi/osst.c
@@ -280,8 +280,8 @@
 			static	int	notyetprinted = 1;
 
 			printk(KERN_WARNING
-			     "%s:W: Warning %x (sugg. bt 0x%x, driver bt 0x%x, host bt 0x%x).\n",
-			     name, result, suggestion(result), driver_byte(result) & DRIVER_MASK,
+			     "%s:W: Warning %x (driver bt 0x%x, host bt 0x%x).\n",
+			     name, result, driver_byte(result),
 			     host_byte(result));
 			if (notyetprinted) {
 				notyetprinted = 0;
@@ -317,18 +317,25 @@
 
 
 /* Wakeup from interrupt */
-static void osst_sleep_done(void *data, char *sense, int result, int resid)
+static void osst_end_async(struct request *req, int update)
 {
-	struct osst_request *SRpnt = data;
+	struct osst_request *SRpnt = req->end_io_data;
 	struct osst_tape *STp = SRpnt->stp;
+	struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;
 
-	memcpy(SRpnt->sense, sense, SCSI_SENSE_BUFFERSIZE);
-	STp->buffer->cmdstat.midlevel_result = SRpnt->result = result;
+	STp->buffer->cmdstat.midlevel_result = SRpnt->result = req->errors;
 #if DEBUG
 	STp->write_pending = 0;
 #endif
 	if (SRpnt->waiting)
 		complete(SRpnt->waiting);
+
+	if (SRpnt->bio) {
+		kfree(mdata->pages);
+		blk_rq_unmap_user(SRpnt->bio);
+	}
+
+	__blk_put_request(req->q, req);
 }
 
 /* osst_request memory management */
@@ -342,6 +349,74 @@
 	kfree(streq);
 }
 
+static int osst_execute(struct osst_request *SRpnt, const unsigned char *cmd,
+			int cmd_len, int data_direction, void *buffer, unsigned bufflen,
+			int use_sg, int timeout, int retries)
+{
+	struct request *req;
+	struct page **pages = NULL;
+	struct rq_map_data *mdata = &SRpnt->stp->buffer->map_data;
+
+	int err = 0;
+	int write = (data_direction == DMA_TO_DEVICE);
+
+	req = blk_get_request(SRpnt->stp->device->request_queue, write, GFP_KERNEL);
+	if (!req)
+		return DRIVER_ERROR << 24;
+
+	req->cmd_type = REQ_TYPE_BLOCK_PC;
+	req->cmd_flags |= REQ_QUIET;
+
+	SRpnt->bio = NULL;
+
+	if (use_sg) {
+		struct scatterlist *sg, *sgl = (struct scatterlist *)buffer;
+		int i;
+
+		pages = kzalloc(use_sg * sizeof(struct page *), GFP_KERNEL);
+		if (!pages)
+			goto free_req;
+
+		for_each_sg(sgl, sg, use_sg, i)
+			pages[i] = sg_page(sg);
+
+		mdata->null_mapped = 1;
+
+		mdata->page_order = get_order(sgl[0].length);
+		mdata->nr_entries =
+			DIV_ROUND_UP(bufflen, PAGE_SIZE << mdata->page_order);
+		mdata->offset = 0;
+
+		err = blk_rq_map_user(req->q, req, mdata, NULL, bufflen, GFP_KERNEL);
+		if (err) {
+			kfree(pages);
+			goto free_req;
+		}
+		SRpnt->bio = req->bio;
+		mdata->pages = pages;
+
+	} else if (bufflen) {
+		err = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
+		if (err)
+			goto free_req;
+	}
+
+	req->cmd_len = cmd_len;
+	memset(req->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
+	memcpy(req->cmd, cmd, req->cmd_len);
+	req->sense = SRpnt->sense;
+	req->sense_len = 0;
+	req->timeout = timeout;
+	req->retries = retries;
+	req->end_io_data = SRpnt;
+
+	blk_execute_rq_nowait(req->q, NULL, req, 1, osst_end_async);
+	return 0;
+free_req:
+	blk_put_request(req);
+	return DRIVER_ERROR << 24;
+}
+
 /* Do the scsi command. Waits until command performed if do_wait is true.
    Otherwise osst_write_behind_check() is used to check that the command
    has finished. */
@@ -403,8 +478,8 @@
 	STp->buffer->cmdstat.have_sense = 0;
 	STp->buffer->syscall_result = 0;
 
-	if (scsi_execute_async(STp->device, cmd, COMMAND_SIZE(cmd[0]), direction, bp, bytes,
-			use_sg, timeout, retries, SRpnt, osst_sleep_done, GFP_KERNEL))
+	if (osst_execute(SRpnt, cmd, COMMAND_SIZE(cmd[0]), direction, bp, bytes,
+			 use_sg, timeout, retries))
 		/* could not allocate the buffer or request was too large */
 		(STp->buffer)->syscall_result = (-EBUSY);
 	else if (do_wait) {
@@ -5286,11 +5361,6 @@
 		struct page *page = alloc_pages(priority, (OS_FRAME_SIZE - got <= PAGE_SIZE) ? 0 : order);
 		STbuffer->sg[segs].offset = 0;
 		if (page == NULL) {
-			if (OS_FRAME_SIZE - got <= (max_segs - segs) * b_size / 2 && order) {
-				b_size /= 2;  /* Large enough for the rest of the buffers */
-				order--;
-				continue;
-			}
 			printk(KERN_WARNING "osst :W: Failed to enlarge buffer to %d bytes.\n",
 						OS_FRAME_SIZE);
 #if DEBUG
diff --git a/drivers/scsi/osst.h b/drivers/scsi/osst.h
index 5aa2274..11d26c5 100644
--- a/drivers/scsi/osst.h
+++ b/drivers/scsi/osst.h
@@ -520,6 +520,7 @@
   int syscall_result;
   struct osst_request *last_SRpnt;
   struct st_cmdstatus cmdstat;
+  struct rq_map_data map_data;
   unsigned char *b_data;
   os_aux_t *aux;               /* onstream AUX structure at end of each block     */
   unsigned short use_sg;       /* zero or number of s/g segments for this adapter */
@@ -634,6 +635,7 @@
 	int result;
 	struct osst_tape *stp;
 	struct completion *waiting;
+	struct bio *bio;
 };
 
 /* Values of write_type */
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index cbcd3f6..a2ef032 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -967,6 +967,110 @@
 EXPORT_SYMBOL(scsi_track_queue_full);
 
 /**
+ * scsi_vpd_inquiry - Request a device provide us with a VPD page
+ * @sdev: The device to ask
+ * @buffer: Where to put the result
+ * @page: Which Vital Product Data to return
+ * @len: The length of the buffer
+ *
+ * This is an internal helper function.  You probably want to use
+ * scsi_get_vpd_page instead.
+ *
+ * Returns 0 on success or a negative error number.
+ */
+static int scsi_vpd_inquiry(struct scsi_device *sdev, unsigned char *buffer,
+							u8 page, unsigned len)
+{
+	int result;
+	unsigned char cmd[16];
+
+	cmd[0] = INQUIRY;
+	cmd[1] = 1;		/* EVPD */
+	cmd[2] = page;
+	cmd[3] = len >> 8;
+	cmd[4] = len & 0xff;
+	cmd[5] = 0;		/* Control byte */
+
+	/*
+	 * I'm not convinced we need to try quite this hard to get VPD, but
+	 * all the existing users tried this hard.
+	 */
+	result = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buffer,
+				  len + 4, NULL, 30 * HZ, 3, NULL);
+	if (result)
+		return result;
+
+	/* Sanity check that we got the page back that we asked for */
+	if (buffer[1] != page)
+		return -EIO;
+
+	return 0;
+}
+
+/**
+ * scsi_get_vpd_page - Get Vital Product Data from a SCSI device
+ * @sdev: The device to ask
+ * @page: Which Vital Product Data to return
+ *
+ * SCSI devices may optionally supply Vital Product Data.  Each 'page'
+ * of VPD is defined in the appropriate SCSI document (e.g. SPC, SBC).
+ * If the device supports this VPD page, this routine returns a pointer
+ * to a buffer containing the data from that page.  The caller is
+ * responsible for calling kfree() on this pointer when it is no longer
+ * needed.  If we cannot retrieve the VPD page this routine returns %NULL.
+ */
+unsigned char *scsi_get_vpd_page(struct scsi_device *sdev, u8 page)
+{
+	int i, result;
+	unsigned int len;
+	unsigned char *buf = kmalloc(259, GFP_KERNEL);
+
+	if (!buf)
+		return NULL;
+
+	/* Ask for all the pages supported by this device */
+	result = scsi_vpd_inquiry(sdev, buf, 0, 255);
+	if (result)
+		goto fail;
+
+	/* If the user actually wanted this page, we can skip the rest */
+	if (page == 0)
+		return buf;
+
+	for (i = 0; i < buf[3]; i++)
+		if (buf[i + 4] == page)
+			goto found;
+	/* The device claims it doesn't support the requested page */
+	goto fail;
+
+ found:
+	result = scsi_vpd_inquiry(sdev, buf, page, 255);
+	if (result)
+		goto fail;
+
+	/*
+	 * Some pages are longer than 255 bytes.  The actual length of
+	 * the page is returned in the header.
+	 */
+	len = (buf[2] << 8) | buf[3];
+	if (len <= 255)
+		return buf;
+
+	kfree(buf);
+	buf = kmalloc(len + 4, GFP_KERNEL);
+	if (!buf)
+		return NULL;
+	result = scsi_vpd_inquiry(sdev, buf, page, len);
+	if (result)
+		goto fail;
+
+	return buf;
+
+ fail:
+	kfree(buf);
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(scsi_get_vpd_page);
+
+/**
  * scsi_device_get  -  get an additional reference to a scsi_device
  * @sdev:	device to get a reference to
  *
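
A minimal caller sketch for the new scsi_get_vpd_page() helper (illustration only,
not part of the patch). It assumes a valid sdev in scope and uses the Unit Serial
Number page (0x80); the page length is taken from bytes 2-3 of the header, just as
the helper itself does, and the buffer is freed by the caller as the kernel-doc
above requires.

	/* Illustration only: fetch and print VPD page 0x80 (Unit Serial Number). */
	unsigned char *vpd = scsi_get_vpd_page(sdev, 0x80);

	if (vpd) {
		int len = (vpd[2] << 8) | vpd[3];	/* page length from the header */

		sdev_printk(KERN_INFO, sdev, "serial: %.*s\n",
			    len, (char *)vpd + 4);
		kfree(vpd);				/* caller owns the buffer */
	}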
diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
index 6eebd0b..213123b 100644
--- a/drivers/scsi/scsi_debug.c
+++ b/drivers/scsi/scsi_debug.c
@@ -40,6 +40,9 @@
 #include <linux/moduleparam.h>
 #include <linux/scatterlist.h>
 #include <linux/blkdev.h>
+#include <linux/crc-t10dif.h>
+
+#include <net/checksum.h>
 
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
@@ -48,8 +51,7 @@
 #include <scsi/scsicam.h>
 #include <scsi/scsi_eh.h>
 
-#include <linux/stat.h>
-
+#include "sd.h"
 #include "scsi_logging.h"
 
 #define SCSI_DEBUG_VERSION "1.81"
@@ -95,6 +97,10 @@
 #define DEF_FAKE_RW	0
 #define DEF_VPD_USE_HOSTNO 1
 #define DEF_SECTOR_SIZE 512
+#define DEF_DIX 0
+#define DEF_DIF 0
+#define DEF_GUARD 0
+#define DEF_ATO 1
 
 /* bit mask values for scsi_debug_opts */
 #define SCSI_DEBUG_OPT_NOISE   1
@@ -102,6 +108,8 @@
 #define SCSI_DEBUG_OPT_TIMEOUT   4
 #define SCSI_DEBUG_OPT_RECOVERED_ERR   8
 #define SCSI_DEBUG_OPT_TRANSPORT_ERR   16
+#define SCSI_DEBUG_OPT_DIF_ERR   32
+#define SCSI_DEBUG_OPT_DIX_ERR   64
 /* When "every_nth" > 0 then modulo "every_nth" commands:
  *   - a no response is simulated if SCSI_DEBUG_OPT_TIMEOUT is set
  *   - a RECOVERED_ERROR is simulated on successful read and write
@@ -144,6 +152,10 @@
 static int scsi_debug_fake_rw = DEF_FAKE_RW;
 static int scsi_debug_vpd_use_hostno = DEF_VPD_USE_HOSTNO;
 static int scsi_debug_sector_size = DEF_SECTOR_SIZE;
+static int scsi_debug_dix = DEF_DIX;
+static int scsi_debug_dif = DEF_DIF;
+static int scsi_debug_guard = DEF_GUARD;
+static int scsi_debug_ato = DEF_ATO;
 
 static int scsi_debug_cmnd_count = 0;
 
@@ -204,11 +216,15 @@
 static struct sdebug_queued_cmd queued_arr[SCSI_DEBUG_CANQUEUE];
 
 static unsigned char * fake_storep;	/* ramdisk storage */
+static unsigned char *dif_storep;	/* protection info */
 
 static int num_aborts = 0;
 static int num_dev_resets = 0;
 static int num_bus_resets = 0;
 static int num_host_resets = 0;
+static int dix_writes;
+static int dix_reads;
+static int dif_errors;
 
 static DEFINE_SPINLOCK(queued_arr_lock);
 static DEFINE_RWLOCK(atomic_rw);
@@ -217,6 +233,11 @@
 
 static struct bus_type pseudo_lld_bus;
 
+static inline sector_t dif_offset(sector_t sector)
+{
+	return sector << 3;
+}
+
 static struct device_driver sdebug_driverfs_driver = {
 	.name 		= sdebug_proc_name,
 	.bus		= &pseudo_lld_bus,
@@ -225,6 +246,9 @@
 static const int check_condition_result =
 		(DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
 
+static const int illegal_condition_result =
+	(DRIVER_SENSE << 24) | (DID_ABORT << 16) | SAM_STAT_CHECK_CONDITION;
+
 static unsigned char ctrl_m_pg[] = {0xa, 10, 2, 0, 0, 0, 0, 0,
 				    0, 0, 0x2, 0x4b};
 static unsigned char iec_m_pg[] = {0x1c, 0xa, 0x08, 0, 0, 0, 0, 0,
@@ -726,7 +750,12 @@
 		} else if (0x86 == cmd[2]) { /* extended inquiry */
 			arr[1] = cmd[2];	/*sanity */
 			arr[3] = 0x3c;	/* number of following entries */
-			arr[4] = 0x0;   /* no protection stuff */
+			if (scsi_debug_dif == SD_DIF_TYPE3_PROTECTION)
+				arr[4] = 0x4;	/* SPT: GRD_CHK:1 */
+			else if (scsi_debug_dif)
+				arr[4] = 0x5;   /* SPT: GRD_CHK:1, REF_CHK:1 */
+			else
+				arr[4] = 0x0;   /* no protection stuff */
 			arr[5] = 0x7;   /* head of q, ordered + simple q's */
 		} else if (0x87 == cmd[2]) { /* mode page policy */
 			arr[1] = cmd[2];	/*sanity */
@@ -767,6 +796,7 @@
 	arr[2] = scsi_debug_scsi_level;
 	arr[3] = 2;    /* response_data_format==2 */
 	arr[4] = SDEBUG_LONG_INQ_SZ - 5;
+	arr[5] = scsi_debug_dif ? 1 : 0; /* PROTECT bit */
 	if (0 == scsi_debug_vpd_use_hostno)
 		arr[5] = 0x10; /* claim: implicit TGPS */
 	arr[6] = 0x10; /* claim: MultiP */
@@ -915,6 +945,12 @@
 	arr[9] = (scsi_debug_sector_size >> 16) & 0xff;
 	arr[10] = (scsi_debug_sector_size >> 8) & 0xff;
 	arr[11] = scsi_debug_sector_size & 0xff;
+
+	if (scsi_debug_dif) {
+		arr[12] = (scsi_debug_dif - 1) << 1; /* P_TYPE */
+		arr[12] |= 1; /* PROT_EN */
+	}
+
 	return fill_from_dev_buffer(scp, arr,
 				    min(alloc_len, SDEBUG_READCAP16_ARR_SZ));
 }
@@ -1066,6 +1102,10 @@
 		ctrl_m_pg[2] |= 0x4;
 	else
 		ctrl_m_pg[2] &= ~0x4;
+
+	if (scsi_debug_ato)
+		ctrl_m_pg[5] |= 0x80; /* ATO=1 */
+
 	memcpy(p, ctrl_m_pg, sizeof(ctrl_m_pg));
 	if (1 == pcontrol)
 		memcpy(p + 2, ch_ctrl_m_pg, sizeof(ch_ctrl_m_pg));
@@ -1536,6 +1576,87 @@
 	return ret;
 }
 
+static int prot_verify_read(struct scsi_cmnd *SCpnt, sector_t start_sec,
+			    unsigned int sectors)
+{
+	unsigned int i, resid;
+	struct scatterlist *psgl;
+	struct sd_dif_tuple *sdt;
+	sector_t sector;
+	sector_t tmp_sec = start_sec;
+	void *paddr;
+
+	start_sec = do_div(tmp_sec, sdebug_store_sectors);
+
+	sdt = (struct sd_dif_tuple *)(dif_storep + dif_offset(start_sec));
+
+	for (i = 0 ; i < sectors ; i++) {
+		u16 csum;
+
+		if (sdt[i].app_tag == 0xffff)
+			continue;
+
+		sector = start_sec + i;
+
+		switch (scsi_debug_guard) {
+		case 1:
+			csum = ip_compute_csum(fake_storep +
+					       sector * scsi_debug_sector_size,
+					       scsi_debug_sector_size);
+			break;
+		case 0:
+			csum = crc_t10dif(fake_storep +
+					  sector * scsi_debug_sector_size,
+					  scsi_debug_sector_size);
+			csum = cpu_to_be16(csum);
+			break;
+		default:
+			BUG();
+		}
+
+		if (sdt[i].guard_tag != csum) {
+			printk(KERN_ERR "%s: GUARD check failed on sector %lu" \
+			       " rcvd 0x%04x, data 0x%04x\n", __func__,
+			       (unsigned long)sector,
+			       be16_to_cpu(sdt[i].guard_tag),
+			       be16_to_cpu(csum));
+			dif_errors++;
+			return 0x01;
+		}
+
+		if (scsi_debug_dif != SD_DIF_TYPE3_PROTECTION &&
+		    be32_to_cpu(sdt[i].ref_tag) != (sector & 0xffffffff)) {
+			printk(KERN_ERR "%s: REF check failed on sector %lu\n",
+			       __func__, (unsigned long)sector);
+			dif_errors++;
+			return 0x03;
+		}
+	}
+
+	resid = sectors * 8; /* Bytes of protection data to copy into sgl */
+	sector = start_sec;
+
+	scsi_for_each_prot_sg(SCpnt, psgl, scsi_prot_sg_count(SCpnt), i) {
+		int len = min(psgl->length, resid);
+
+		paddr = kmap_atomic(sg_page(psgl), KM_IRQ0) + psgl->offset;
+		memcpy(paddr, dif_storep + dif_offset(sector), len);
+
+		sector += len >> 3;
+		if (sector >= sdebug_store_sectors) {
+			/* Force wrap */
+			tmp_sec = sector;
+			sector = do_div(tmp_sec, sdebug_store_sectors);
+		}
+		resid -= len;
+		kunmap_atomic(paddr, KM_IRQ0);
+	}
+
+	dix_reads++;
+
+	return 0;
+}
+
 static int resp_read(struct scsi_cmnd *SCpnt, unsigned long long lba,
 		     unsigned int num, struct sdebug_dev_info *devip)
 {
@@ -1563,12 +1684,162 @@
 		}
 		return check_condition_result;
 	}
+
+	/* DIX + T10 DIF */
+	if (scsi_debug_dix && scsi_prot_sg_count(SCpnt)) {
+		int prot_ret = prot_verify_read(SCpnt, lba, num);
+
+		if (prot_ret) {
+			mk_sense_buffer(devip, ABORTED_COMMAND, 0x10, prot_ret);
+			return illegal_condition_result;
+		}
+	}
+
 	read_lock_irqsave(&atomic_rw, iflags);
 	ret = do_device_access(SCpnt, devip, lba, num, 0);
 	read_unlock_irqrestore(&atomic_rw, iflags);
 	return ret;
 }
 
+void dump_sector(unsigned char *buf, int len)
+{
+	int i, j;
+
+	printk(KERN_ERR ">>> Sector Dump <<<\n");
+
+	for (i = 0 ; i < len ; i += 16) {
+		printk(KERN_ERR "%04d: ", i);
+
+		for (j = 0 ; j < 16 ; j++) {
+			unsigned char c = buf[i+j];
+			if (c >= 0x20 && c < 0x7e)
+				printk(" %c ", buf[i+j]);
+			else
+				printk("%02x ", buf[i+j]);
+		}
+
+		printk("\n");
+	}
+}
+
+static int prot_verify_write(struct scsi_cmnd *SCpnt, sector_t start_sec,
+			     unsigned int sectors)
+{
+	int i, j, ret;
+	struct sd_dif_tuple *sdt;
+	struct scatterlist *dsgl = scsi_sglist(SCpnt);
+	struct scatterlist *psgl = scsi_prot_sglist(SCpnt);
+	void *daddr, *paddr;
+	sector_t tmp_sec = start_sec;
+	sector_t sector;
+	int ppage_offset;
+	unsigned short csum;
+
+	sector = do_div(tmp_sec, sdebug_store_sectors);
+
+	if (((SCpnt->cmnd[1] >> 5) & 7) != 1) {
+		printk(KERN_WARNING "scsi_debug: WRPROTECT != 1\n");
+		return 0;
+	}
+
+	BUG_ON(scsi_sg_count(SCpnt) == 0);
+	BUG_ON(scsi_prot_sg_count(SCpnt) == 0);
+
+	paddr = kmap_atomic(sg_page(psgl), KM_IRQ1) + psgl->offset;
+	ppage_offset = 0;
+
+	/* For each data page */
+	scsi_for_each_sg(SCpnt, dsgl, scsi_sg_count(SCpnt), i) {
+		daddr = kmap_atomic(sg_page(dsgl), KM_IRQ0) + dsgl->offset;
+
+		/* For each sector-sized chunk in data page */
+		for (j = 0 ; j < dsgl->length ; j += scsi_debug_sector_size) {
+
+			/* If we're at the end of the current
+			 * protection page advance to the next one
+			 */
+			if (ppage_offset >= psgl->length) {
+				kunmap_atomic(paddr, KM_IRQ1);
+				psgl = sg_next(psgl);
+				BUG_ON(psgl == NULL);
+				paddr = kmap_atomic(sg_page(psgl), KM_IRQ1)
+					+ psgl->offset;
+				ppage_offset = 0;
+			}
+
+			sdt = paddr + ppage_offset;
+
+			switch (scsi_debug_guard) {
+			case 1:
+				csum = ip_compute_csum(daddr,
+						       scsi_debug_sector_size);
+				break;
+			case 0:
+				csum = cpu_to_be16(crc_t10dif(daddr,
+						      scsi_debug_sector_size));
+				break;
+			default:
+				BUG();
+				ret = 0;
+				goto out;
+			}
+
+			if (sdt->guard_tag != csum) {
+				printk(KERN_ERR
+				       "%s: GUARD check failed on sector %lu " \
+				       "rcvd 0x%04x, calculated 0x%04x\n",
+				       __func__, (unsigned long)sector,
+				       be16_to_cpu(sdt->guard_tag),
+				       be16_to_cpu(csum));
+				ret = 0x01;
+				dump_sector(daddr, scsi_debug_sector_size);
+				goto out;
+			}
+
+			if (scsi_debug_dif != SD_DIF_TYPE3_PROTECTION &&
+			    be32_to_cpu(sdt->ref_tag)
+			    != (start_sec & 0xffffffff)) {
+				printk(KERN_ERR
+				       "%s: REF check failed on sector %lu\n",
+				       __func__, (unsigned long)sector);
+				ret = 0x03;
+				dump_sector(daddr, scsi_debug_sector_size);
+				goto out;
+			}
+
+			/* Would be great to copy this in bigger
+			 * chunks.  However, for the sake of
+			 * correctness we need to verify each sector
+			 * before writing it to "stable" storage
+			 */
+			memcpy(dif_storep + dif_offset(sector), sdt, 8);
+
+			sector++;
+
+			if (sector == sdebug_store_sectors)
+				sector = 0;	/* Force wrap */
+
+			start_sec++;
+			daddr += scsi_debug_sector_size;
+			ppage_offset += sizeof(struct sd_dif_tuple);
+		}
+
+		kunmap_atomic(daddr, KM_IRQ0);
+	}
+
+	kunmap_atomic(paddr, KM_IRQ1);
+
+	dix_writes++;
+
+	return 0;
+
+out:
+	dif_errors++;
+	kunmap_atomic(daddr, KM_IRQ0);
+	kunmap_atomic(paddr, KM_IRQ1);
+	return ret;
+}
+
 static int resp_write(struct scsi_cmnd *SCpnt, unsigned long long lba,
 		      unsigned int num, struct sdebug_dev_info *devip)
 {
@@ -1579,6 +1850,16 @@
 	if (ret)
 		return ret;
 
+	/* DIX + T10 DIF */
+	if (scsi_debug_dix && scsi_prot_sg_count(SCpnt)) {
+		int prot_ret = prot_verify_write(SCpnt, lba, num);
+
+		if (prot_ret) {
+			mk_sense_buffer(devip, ILLEGAL_REQUEST, 0x10, prot_ret);
+			return illegal_condition_result;
+		}
+	}
+
 	write_lock_irqsave(&atomic_rw, iflags);
 	ret = do_device_access(SCpnt, devip, lba, num, 1);
 	write_unlock_irqrestore(&atomic_rw, iflags);
@@ -2095,6 +2376,10 @@
 module_param_named(vpd_use_hostno, scsi_debug_vpd_use_hostno, int,
 		   S_IRUGO | S_IWUSR);
 module_param_named(sector_size, scsi_debug_sector_size, int, S_IRUGO);
+module_param_named(dix, scsi_debug_dix, int, S_IRUGO);
+module_param_named(dif, scsi_debug_dif, int, S_IRUGO);
+module_param_named(guard, scsi_debug_guard, int, S_IRUGO);
+module_param_named(ato, scsi_debug_ato, int, S_IRUGO);
 
 MODULE_AUTHOR("Eric Youngdale + Douglas Gilbert");
 MODULE_DESCRIPTION("SCSI debug adapter driver");
@@ -2117,7 +2402,10 @@
 MODULE_PARM_DESC(virtual_gb, "virtual gigabyte size (def=0 -> use dev_size_mb)");
 MODULE_PARM_DESC(vpd_use_hostno, "0 -> dev ids ignore hostno (def=1 -> unique dev ids)");
 MODULE_PARM_DESC(sector_size, "hardware sector size in bytes (def=512)");
-
+MODULE_PARM_DESC(dix, "data integrity extensions mask (def=0)");
+MODULE_PARM_DESC(dif, "data integrity field type: 0-3 (def=0)");
+MODULE_PARM_DESC(guard, "protection checksum: 0=crc, 1=ip (def=0)");
+MODULE_PARM_DESC(ato, "application tag ownership: 0=disk 1=host (def=1)");
 
 static char sdebug_info[256];
 
@@ -2164,14 +2452,14 @@
 	    "delay=%d, max_luns=%d, scsi_level=%d\n"
 	    "sector_size=%d bytes, cylinders=%d, heads=%d, sectors=%d\n"
 	    "number of aborts=%d, device_reset=%d, bus_resets=%d, "
-	    "host_resets=%d\n",
+	    "host_resets=%d\ndix_reads=%d dix_writes=%d dif_errors=%d\n",
 	    SCSI_DEBUG_VERSION, scsi_debug_version_date, scsi_debug_num_tgts,
 	    scsi_debug_dev_size_mb, scsi_debug_opts, scsi_debug_every_nth,
 	    scsi_debug_cmnd_count, scsi_debug_delay,
 	    scsi_debug_max_luns, scsi_debug_scsi_level,
 	    scsi_debug_sector_size, sdebug_cylinders_per, sdebug_heads,
 	    sdebug_sectors_per, num_aborts, num_dev_resets, num_bus_resets,
-	    num_host_resets);
+	    num_host_resets, dix_reads, dix_writes, dif_errors);
 	if (pos < offset) {
 		len = 0;
 		begin = pos;
@@ -2452,6 +2740,31 @@
 }
 DRIVER_ATTR(sector_size, S_IRUGO, sdebug_sector_size_show, NULL);
 
+static ssize_t sdebug_dix_show(struct device_driver *ddp, char *buf)
+{
+	return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dix);
+}
+DRIVER_ATTR(dix, S_IRUGO, sdebug_dix_show, NULL);
+
+static ssize_t sdebug_dif_show(struct device_driver *ddp, char *buf)
+{
+	return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_dif);
+}
+DRIVER_ATTR(dif, S_IRUGO, sdebug_dif_show, NULL);
+
+static ssize_t sdebug_guard_show(struct device_driver *ddp, char *buf)
+{
+	return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_guard);
+}
+DRIVER_ATTR(guard, S_IRUGO, sdebug_guard_show, NULL);
+
+static ssize_t sdebug_ato_show(struct device_driver *ddp, char *buf)
+{
+	return scnprintf(buf, PAGE_SIZE, "%d\n", scsi_debug_ato);
+}
+DRIVER_ATTR(ato, S_IRUGO, sdebug_ato_show, NULL);
+
+
 /* Note: The following function creates attribute files in the
    /sys/bus/pseudo/drivers/scsi_debug directory. The advantage of these
    files (over those found in the /sys/module/scsi_debug/parameters
@@ -2478,11 +2791,19 @@
 	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_virtual_gb);
 	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_vpd_use_hostno);
 	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_sector_size);
+	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dix);
+	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_dif);
+	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_guard);
+	ret |= driver_create_file(&sdebug_driverfs_driver, &driver_attr_ato);
 	return ret;
 }
 
 static void do_remove_driverfs_files(void)
 {
+	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_ato);
+	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_guard);
+	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dif);
+	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_dix);
 	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_sector_size);
 	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_vpd_use_hostno);
 	driver_remove_file(&sdebug_driverfs_driver, &driver_attr_virtual_gb);
@@ -2526,11 +2847,33 @@
 	case 4096:
 		break;
 	default:
-		printk(KERN_ERR "scsi_debug_init: invalid sector_size %u\n",
+		printk(KERN_ERR "scsi_debug_init: invalid sector_size %d\n",
 		       scsi_debug_sector_size);
 		return -EINVAL;
 	}
 
+	switch (scsi_debug_dif) {
+
+	case SD_DIF_TYPE0_PROTECTION:
+	case SD_DIF_TYPE1_PROTECTION:
+	case SD_DIF_TYPE3_PROTECTION:
+		break;
+
+	default:
+		printk(KERN_ERR "scsi_debug_init: dif must be 0, 1 or 3\n");
+		return -EINVAL;
+	}
+
+	if (scsi_debug_guard > 1) {
+		printk(KERN_ERR "scsi_debug_init: guard must be 0 or 1\n");
+		return -EINVAL;
+	}
+
+	if (scsi_debug_ato > 1) {
+		printk(KERN_ERR "scsi_debug_init: ato must be 0 or 1\n");
+		return -EINVAL;
+	}
+
 	if (scsi_debug_dev_size_mb < 1)
 		scsi_debug_dev_size_mb = 1;  /* force minimum 1 MB ramdisk */
 	sz = (unsigned long)scsi_debug_dev_size_mb * 1048576;
@@ -2563,6 +2906,24 @@
 	if (scsi_debug_num_parts > 0)
 		sdebug_build_parts(fake_storep, sz);
 
+	if (scsi_debug_dif) {
+		int dif_size;
+
+		dif_size = sdebug_store_sectors * sizeof(struct sd_dif_tuple);
+		dif_storep = vmalloc(dif_size);
+
+		printk(KERN_ERR "scsi_debug_init: dif_storep %u bytes @ %p\n",
+		       dif_size, dif_storep);
+
+		if (dif_storep == NULL) {
+			printk(KERN_ERR "scsi_debug_init: out of mem. (DIX)\n");
+			ret = -ENOMEM;
+			goto free_vm;
+		}
+
+		memset(dif_storep, 0xff, dif_size);
+	}
+
 	ret = device_register(&pseudo_primary);
 	if (ret < 0) {
 		printk(KERN_WARNING "scsi_debug: device_register error: %d\n",
@@ -2615,6 +2976,8 @@
 dev_unreg:
 	device_unregister(&pseudo_primary);
 free_vm:
+	if (dif_storep)
+		vfree(dif_storep);
 	vfree(fake_storep);
 
 	return ret;
@@ -2632,6 +2995,9 @@
 	bus_unregister(&pseudo_lld_bus);
 	device_unregister(&pseudo_primary);
 
+	if (dif_storep)
+		vfree(dif_storep);
+
 	vfree(fake_storep);
 }
 
@@ -2732,6 +3098,8 @@
 	struct sdebug_dev_info *devip = NULL;
 	int inj_recovered = 0;
 	int inj_transport = 0;
+	int inj_dif = 0;
+	int inj_dix = 0;
 	int delay_override = 0;
 
 	scsi_set_resid(SCpnt, 0);
@@ -2769,6 +3137,10 @@
 			inj_recovered = 1; /* to reads and writes below */
 		else if (SCSI_DEBUG_OPT_TRANSPORT_ERR & scsi_debug_opts)
 			inj_transport = 1; /* to reads and writes below */
+		else if (SCSI_DEBUG_OPT_DIF_ERR & scsi_debug_opts)
+			inj_dif = 1; /* to reads and writes below */
+		else if (SCSI_DEBUG_OPT_DIX_ERR & scsi_debug_opts)
+			inj_dix = 1; /* to reads and writes below */
 	}
 
 	if (devip->wlun) {
@@ -2870,6 +3242,12 @@
 			mk_sense_buffer(devip, ABORTED_COMMAND,
 					TRANSPORT_PROBLEM, ACK_NAK_TO);
 			errsts = check_condition_result;
+		} else if (inj_dif && (0 == errsts)) {
+			mk_sense_buffer(devip, ABORTED_COMMAND, 0x10, 1);
+			errsts = illegal_condition_result;
+		} else if (inj_dix && (0 == errsts)) {
+			mk_sense_buffer(devip, ILLEGAL_REQUEST, 0x10, 1);
+			errsts = illegal_condition_result;
 		}
 		break;
 	case REPORT_LUNS:	/* mandatory, ignore unit attention */
@@ -2894,6 +3272,12 @@
 			mk_sense_buffer(devip, RECOVERED_ERROR,
 					THRESHOLD_EXCEEDED, 0);
 			errsts = check_condition_result;
+		} else if (inj_dif && (0 == errsts)) {
+			mk_sense_buffer(devip, ABORTED_COMMAND, 0x10, 1);
+			errsts = illegal_condition_result;
+		} else if (inj_dix && (0 == errsts)) {
+			mk_sense_buffer(devip, ILLEGAL_REQUEST, 0x10, 1);
+			errsts = illegal_condition_result;
 		}
 		break;
 	case MODE_SENSE:
@@ -2982,6 +3366,7 @@
         int error = 0;
         struct sdebug_host_info *sdbg_host;
         struct Scsi_Host *hpnt;
+	int host_prot;
 
 	sdbg_host = to_sdebug_host(dev);
 
@@ -3000,6 +3385,50 @@
 		hpnt->max_id = scsi_debug_num_tgts;
 	hpnt->max_lun = SAM2_WLUN_REPORT_LUNS;	/* = scsi_debug_max_luns; */
 
+	host_prot = 0;
+
+	switch (scsi_debug_dif) {
+
+	case SD_DIF_TYPE1_PROTECTION:
+		host_prot = SHOST_DIF_TYPE1_PROTECTION;
+		if (scsi_debug_dix)
+			host_prot |= SHOST_DIX_TYPE1_PROTECTION;
+		break;
+
+	case SD_DIF_TYPE2_PROTECTION:
+		host_prot = SHOST_DIF_TYPE2_PROTECTION;
+		if (scsi_debug_dix)
+			host_prot |= SHOST_DIX_TYPE2_PROTECTION;
+		break;
+
+	case SD_DIF_TYPE3_PROTECTION:
+		host_prot = SHOST_DIF_TYPE3_PROTECTION;
+		if (scsi_debug_dix)
+			host_prot |= SHOST_DIX_TYPE3_PROTECTION;
+		break;
+
+	default:
+		if (scsi_debug_dix)
+			host_prot |= SHOST_DIX_TYPE0_PROTECTION;
+		break;
+	}
+
+	scsi_host_set_prot(hpnt, host_prot);
+
+	printk(KERN_INFO "scsi_debug: host protection%s%s%s%s%s%s%s\n",
+	       (host_prot & SHOST_DIF_TYPE1_PROTECTION) ? " DIF1" : "",
+	       (host_prot & SHOST_DIF_TYPE2_PROTECTION) ? " DIF2" : "",
+	       (host_prot & SHOST_DIF_TYPE3_PROTECTION) ? " DIF3" : "",
+	       (host_prot & SHOST_DIX_TYPE0_PROTECTION) ? " DIX0" : "",
+	       (host_prot & SHOST_DIX_TYPE1_PROTECTION) ? " DIX1" : "",
+	       (host_prot & SHOST_DIX_TYPE2_PROTECTION) ? " DIX2" : "",
+	       (host_prot & SHOST_DIX_TYPE3_PROTECTION) ? " DIX3" : "");
+
+	if (scsi_debug_guard == 1)
+		scsi_host_set_guard(hpnt, SHOST_DIX_GUARD_IP);
+	else
+		scsi_host_set_guard(hpnt, SHOST_DIX_GUARD_CRC);
+
         error = scsi_add_host(hpnt, &sdbg_host->dev);
         if (error) {
                 printk(KERN_ERR "%s: scsi_add_host failed\n", __func__);
diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
index ad6a137..0c2c73b 100644
--- a/drivers/scsi/scsi_error.c
+++ b/drivers/scsi/scsi_error.c
@@ -1441,6 +1441,11 @@
 	}
 }
 
+static void eh_lock_door_done(struct request *req, int uptodate)
+{
+	__blk_put_request(req->q, req);
+}
+
 /**
  * scsi_eh_lock_door - Prevent medium removal for the specified device
  * @sdev:	SCSI device to prevent medium removal
@@ -1463,20 +1468,29 @@
  */
 static void scsi_eh_lock_door(struct scsi_device *sdev)
 {
-	unsigned char cmnd[MAX_COMMAND_SIZE];
+	struct request *req;
 
-	cmnd[0] = ALLOW_MEDIUM_REMOVAL;
-	cmnd[1] = 0;
-	cmnd[2] = 0;
-	cmnd[3] = 0;
-	cmnd[4] = SCSI_REMOVAL_PREVENT;
-	cmnd[5] = 0;
+	req = blk_get_request(sdev->request_queue, READ, GFP_KERNEL);
+	if (!req)
+		return;
 
-	scsi_execute_async(sdev, cmnd, 6, DMA_NONE, NULL, 0, 0, 10 * HZ,
-			   5, NULL, NULL, GFP_KERNEL);
+	req->cmd[0] = ALLOW_MEDIUM_REMOVAL;
+	req->cmd[1] = 0;
+	req->cmd[2] = 0;
+	req->cmd[3] = 0;
+	req->cmd[4] = SCSI_REMOVAL_PREVENT;
+	req->cmd[5] = 0;
+
+	req->cmd_len = COMMAND_SIZE(req->cmd[0]);
+
+	req->cmd_type = REQ_TYPE_BLOCK_PC;
+	req->cmd_flags |= REQ_QUIET;
+	req->timeout = 10 * HZ;
+	req->retries = 5;
+
+	blk_execute_rq_nowait(req->q, NULL, req, 1, eh_lock_door_done);
 }
 
-
 /**
  * scsi_restart_operations - restart io operations to the specified host.
  * @shost:	Host we are restarting.
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index b82ffd9..4b13e36 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -277,196 +277,6 @@
 }
 EXPORT_SYMBOL(scsi_execute_req);
 
-struct scsi_io_context {
-	void *data;
-	void (*done)(void *data, char *sense, int result, int resid);
-	char sense[SCSI_SENSE_BUFFERSIZE];
-};
-
-static struct kmem_cache *scsi_io_context_cache;
-
-static void scsi_end_async(struct request *req, int uptodate)
-{
-	struct scsi_io_context *sioc = req->end_io_data;
-
-	if (sioc->done)
-		sioc->done(sioc->data, sioc->sense, req->errors, req->data_len);
-
-	kmem_cache_free(scsi_io_context_cache, sioc);
-	__blk_put_request(req->q, req);
-}
-
-static int scsi_merge_bio(struct request *rq, struct bio *bio)
-{
-	struct request_queue *q = rq->q;
-
-	bio->bi_flags &= ~(1 << BIO_SEG_VALID);
-	if (rq_data_dir(rq) == WRITE)
-		bio->bi_rw |= (1 << BIO_RW);
-	blk_queue_bounce(q, &bio);
-
-	return blk_rq_append_bio(q, rq, bio);
-}
-
-static void scsi_bi_endio(struct bio *bio, int error)
-{
-	bio_put(bio);
-}
-
-/**
- * scsi_req_map_sg - map a scatterlist into a request
- * @rq:		request to fill
- * @sgl:	scatterlist
- * @nsegs:	number of elements
- * @bufflen:	len of buffer
- * @gfp:	memory allocation flags
- *
- * scsi_req_map_sg maps a scatterlist into a request so that the
- * request can be sent to the block layer. We do not trust the scatterlist
- * sent to use, as some ULDs use that struct to only organize the pages.
- */
-static int scsi_req_map_sg(struct request *rq, struct scatterlist *sgl,
-			   int nsegs, unsigned bufflen, gfp_t gfp)
-{
-	struct request_queue *q = rq->q;
-	int nr_pages = (bufflen + sgl[0].offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	unsigned int data_len = bufflen, len, bytes, off;
-	struct scatterlist *sg;
-	struct page *page;
-	struct bio *bio = NULL;
-	int i, err, nr_vecs = 0;
-
-	for_each_sg(sgl, sg, nsegs, i) {
-		page = sg_page(sg);
-		off = sg->offset;
-		len = sg->length;
-
-		while (len > 0 && data_len > 0) {
-			/*
-			 * sg sends a scatterlist that is larger than
-			 * the data_len it wants transferred for certain
-			 * IO sizes
-			 */
-			bytes = min_t(unsigned int, len, PAGE_SIZE - off);
-			bytes = min(bytes, data_len);
-
-			if (!bio) {
-				nr_vecs = min_t(int, BIO_MAX_PAGES, nr_pages);
-				nr_pages -= nr_vecs;
-
-				bio = bio_alloc(gfp, nr_vecs);
-				if (!bio) {
-					err = -ENOMEM;
-					goto free_bios;
-				}
-				bio->bi_end_io = scsi_bi_endio;
-			}
-
-			if (bio_add_pc_page(q, bio, page, bytes, off) !=
-			    bytes) {
-				bio_put(bio);
-				err = -EINVAL;
-				goto free_bios;
-			}
-
-			if (bio->bi_vcnt >= nr_vecs) {
-				err = scsi_merge_bio(rq, bio);
-				if (err) {
-					bio_endio(bio, 0);
-					goto free_bios;
-				}
-				bio = NULL;
-			}
-
-			page++;
-			len -= bytes;
-			data_len -=bytes;
-			off = 0;
-		}
-	}
-
-	rq->buffer = rq->data = NULL;
-	rq->data_len = bufflen;
-	return 0;
-
-free_bios:
-	while ((bio = rq->bio) != NULL) {
-		rq->bio = bio->bi_next;
-		/*
-		 * call endio instead of bio_put incase it was bounced
-		 */
-		bio_endio(bio, 0);
-	}
-
-	return err;
-}
-
-/**
- * scsi_execute_async - insert request
- * @sdev:	scsi device
- * @cmd:	scsi command
- * @cmd_len:	length of scsi cdb
- * @data_direction: DMA_TO_DEVICE, DMA_FROM_DEVICE, or DMA_NONE
- * @buffer:	data buffer (this can be a kernel buffer or scatterlist)
- * @bufflen:	len of buffer
- * @use_sg:	if buffer is a scatterlist this is the number of elements
- * @timeout:	request timeout in seconds
- * @retries:	number of times to retry request
- * @privdata:	data passed to done()
- * @done:	callback function when done
- * @gfp:	memory allocation flags
- */
-int scsi_execute_async(struct scsi_device *sdev, const unsigned char *cmd,
-		       int cmd_len, int data_direction, void *buffer, unsigned bufflen,
-		       int use_sg, int timeout, int retries, void *privdata,
-		       void (*done)(void *, char *, int, int), gfp_t gfp)
-{
-	struct request *req;
-	struct scsi_io_context *sioc;
-	int err = 0;
-	int write = (data_direction == DMA_TO_DEVICE);
-
-	sioc = kmem_cache_zalloc(scsi_io_context_cache, gfp);
-	if (!sioc)
-		return DRIVER_ERROR << 24;
-
-	req = blk_get_request(sdev->request_queue, write, gfp);
-	if (!req)
-		goto free_sense;
-	req->cmd_type = REQ_TYPE_BLOCK_PC;
-	req->cmd_flags |= REQ_QUIET;
-
-	if (use_sg)
-		err = scsi_req_map_sg(req, buffer, use_sg, bufflen, gfp);
-	else if (bufflen)
-		err = blk_rq_map_kern(req->q, req, buffer, bufflen, gfp);
-
-	if (err)
-		goto free_req;
-
-	req->cmd_len = cmd_len;
-	memset(req->cmd, 0, BLK_MAX_CDB); /* ATAPI hates garbage after CDB */
-	memcpy(req->cmd, cmd, req->cmd_len);
-	req->sense = sioc->sense;
-	req->sense_len = 0;
-	req->timeout = timeout;
-	req->retries = retries;
-	req->end_io_data = sioc;
-
-	sioc->data = privdata;
-	sioc->done = done;
-
-	blk_execute_rq_nowait(req->q, NULL, req, 1, scsi_end_async);
-	return 0;
-
-free_req:
-	blk_put_request(req);
-free_sense:
-	kmem_cache_free(scsi_io_context_cache, sioc);
-	return DRIVER_ERROR << 24;
-}
-EXPORT_SYMBOL_GPL(scsi_execute_async);
-
 /*
  * Function:    scsi_init_cmd_errh()
  *
@@ -1920,20 +1730,12 @@
 {
 	int i;
 
-	scsi_io_context_cache = kmem_cache_create("scsi_io_context",
-					sizeof(struct scsi_io_context),
-					0, 0, NULL);
-	if (!scsi_io_context_cache) {
-		printk(KERN_ERR "SCSI: can't init scsi io context cache\n");
-		return -ENOMEM;
-	}
-
 	scsi_sdb_cache = kmem_cache_create("scsi_data_buffer",
 					   sizeof(struct scsi_data_buffer),
 					   0, 0, NULL);
 	if (!scsi_sdb_cache) {
 		printk(KERN_ERR "SCSI: can't init scsi sdb cache\n");
-		goto cleanup_io_context;
+		return -ENOMEM;
 	}
 
 	for (i = 0; i < SG_MEMPOOL_NR; i++) {
@@ -1968,8 +1770,6 @@
 			kmem_cache_destroy(sgp->slab);
 	}
 	kmem_cache_destroy(scsi_sdb_cache);
-cleanup_io_context:
-	kmem_cache_destroy(scsi_io_context_cache);
 
 	return -ENOMEM;
 }
@@ -1978,7 +1778,6 @@
 {
 	int i;
 
-	kmem_cache_destroy(scsi_io_context_cache);
 	kmem_cache_destroy(scsi_sdb_cache);
 
 	for (i = 0; i < SG_MEMPOOL_NR; i++) {
diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
index 8f4de20..a14d245 100644
--- a/drivers/scsi/scsi_scan.c
+++ b/drivers/scsi/scsi_scan.c
@@ -797,6 +797,7 @@
 	case TYPE_ENCLOSURE:
 	case TYPE_COMM:
 	case TYPE_RAID:
+	case TYPE_OSD:
 		sdev->writeable = 1;
 		break;
 	case TYPE_ROM:
diff --git a/drivers/scsi/scsi_sysfs.c b/drivers/scsi/scsi_sysfs.c
index da63802..fa4711d 100644
--- a/drivers/scsi/scsi_sysfs.c
+++ b/drivers/scsi/scsi_sysfs.c
@@ -1043,7 +1043,6 @@
 /**
  * scsi_sysfs_add_host - add scsi host to subsystem
  * @shost:     scsi host struct to add to subsystem
- * @dev:       parent struct device pointer
  **/
 int scsi_sysfs_add_host(struct Scsi_Host *shost)
 {
diff --git a/drivers/scsi/scsi_transport_fc.c b/drivers/scsi/scsi_transport_fc.c
index 3ee4eb4..a152f89 100644
--- a/drivers/scsi/scsi_transport_fc.c
+++ b/drivers/scsi/scsi_transport_fc.c
@@ -95,7 +95,7 @@
 	{ FC_PORTTYPE_NPORT,	"NPort (fabric via point-to-point)" },
 	{ FC_PORTTYPE_NLPORT,	"NLPort (fabric via loop)" },
 	{ FC_PORTTYPE_LPORT,	"LPort (private loop)" },
-	{ FC_PORTTYPE_PTP,	"Point-To-Point (direct nport connection" },
+	{ FC_PORTTYPE_PTP,	"Point-To-Point (direct nport connection)" },
 	{ FC_PORTTYPE_NPIV,		"NPIV VPORT" },
 };
 fc_enum_name_search(port_type, fc_port_type, fc_port_type_names)
diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
index 2adfab8..09479545 100644
--- a/drivers/scsi/scsi_transport_iscsi.c
+++ b/drivers/scsi/scsi_transport_iscsi.c
@@ -246,30 +246,13 @@
 	memset(ihost, 0, sizeof(*ihost));
 	atomic_set(&ihost->nr_scans, 0);
 	mutex_init(&ihost->mutex);
-
-	snprintf(ihost->scan_workq_name, sizeof(ihost->scan_workq_name),
-		 "iscsi_scan_%d", shost->host_no);
-	ihost->scan_workq = create_singlethread_workqueue(
-						ihost->scan_workq_name);
-	if (!ihost->scan_workq)
-		return -ENOMEM;
-	return 0;
-}
-
-static int iscsi_remove_host(struct transport_container *tc, struct device *dev,
-			     struct device *cdev)
-{
-	struct Scsi_Host *shost = dev_to_shost(dev);
-	struct iscsi_cls_host *ihost = shost->shost_data;
-
-	destroy_workqueue(ihost->scan_workq);
 	return 0;
 }
 
 static DECLARE_TRANSPORT_CLASS(iscsi_host_class,
 			       "iscsi_host",
 			       iscsi_setup_host,
-			       iscsi_remove_host,
+			       NULL,
 			       NULL);
 
 static DECLARE_TRANSPORT_CLASS(iscsi_session_class,
@@ -568,7 +551,7 @@
 	 * scanning from userspace).
 	 */
 	if (shost->hostt->scan_finished) {
-		if (queue_work(ihost->scan_workq, &session->scan_work))
+		if (scsi_queue_work(shost, &session->scan_work))
 			atomic_inc(&ihost->nr_scans);
 	}
 }
@@ -636,14 +619,6 @@
 	iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
 }
 
-static int iscsi_unbind_session(struct iscsi_cls_session *session)
-{
-	struct Scsi_Host *shost = iscsi_session_to_shost(session);
-	struct iscsi_cls_host *ihost = shost->shost_data;
-
-	return queue_work(ihost->scan_workq, &session->unbind_work);
-}
-
 struct iscsi_cls_session *
 iscsi_alloc_session(struct Scsi_Host *shost, struct iscsi_transport *transport,
 		    int dd_size)
@@ -796,7 +771,6 @@
 void iscsi_remove_session(struct iscsi_cls_session *session)
 {
 	struct Scsi_Host *shost = iscsi_session_to_shost(session);
-	struct iscsi_cls_host *ihost = shost->shost_data;
 	unsigned long flags;
 	int err;
 
@@ -821,7 +795,7 @@
 
 	scsi_target_unblock(&session->dev);
 	/* flush running scans then delete devices */
-	flush_workqueue(ihost->scan_workq);
+	scsi_flush_work(shost);
 	__iscsi_unbind_session(&session->unbind_work);
 
 	/* hw iscsi may not have removed all connections from session */
@@ -1215,14 +1189,15 @@
 {
 	struct iscsi_transport *transport = priv->iscsi_transport;
 	struct iscsi_cls_session *session;
-	uint32_t host_no;
+	struct Scsi_Host *shost;
 
 	session = transport->create_session(ep, cmds_max, queue_depth,
-					    initial_cmdsn, &host_no);
+					    initial_cmdsn);
 	if (!session)
 		return -ENOMEM;
 
-	ev->r.c_session_ret.host_no = host_no;
+	shost = iscsi_session_to_shost(session);
+	ev->r.c_session_ret.host_no = shost->host_no;
 	ev->r.c_session_ret.sid = session->sid;
 	return 0;
 }
@@ -1439,7 +1414,8 @@
 	case ISCSI_UEVENT_UNBIND_SESSION:
 		session = iscsi_session_lookup(ev->u.d_session.sid);
 		if (session)
-			iscsi_unbind_session(session);
+			scsi_queue_work(iscsi_session_to_shost(session),
+					&session->unbind_work);
 		else
 			err = -EINVAL;
 		break;
@@ -1801,8 +1777,7 @@
 	priv->daemon_pid = -1;
 	priv->iscsi_transport = tt;
 	priv->t.user_scan = iscsi_user_scan;
-	if (!(tt->caps & CAP_DATA_PATH_OFFLOAD))
-		priv->t.create_work_queue = 1;
+	priv->t.create_work_queue = 1;
 
 	priv->dev.class = &iscsi_transport_class;
 	dev_set_name(&priv->dev, "%s", tt->name);
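[Editorial note] The iSCSI transport changes above drop the private per-host scan workqueue and route scan and unbind work through the host's shared work queue (scsi_queue_work()/scsi_flush_work()), flushing it before teardown. Below is a hedged kernel-style sketch of that general pattern — queue work on an existing queue, flush it before going away — written against the generic workqueue API rather than the transport helpers; the demo_* names are illustrative only.

/*
 * Sketch of "reuse a shared workqueue and flush it before teardown",
 * the shape the iSCSI transport switches to above.  Not transport code.
 */
#include <linux/module.h>
#include <linux/workqueue.h>

static void demo_scan_fn(struct work_struct *work)
{
	pr_info("demo: scanning...\n");
}

static DECLARE_WORK(demo_scan_work, demo_scan_fn);

static int __init demo_init(void)
{
	/* Queue onto the system-wide queue instead of creating a private one. */
	schedule_work(&demo_scan_work);
	return 0;
}

static void __exit demo_exit(void)
{
	/* Like scsi_flush_work(): make sure queued work has finished first. */
	flush_scheduled_work();
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");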
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4970ae4..aeab5d9 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -1273,42 +1273,126 @@
 	sdkp->capacity = 0;
 }
 
-/*
- * read disk capacity
- */
-static void
-sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer)
+static void read_capacity_error(struct scsi_disk *sdkp, struct scsi_device *sdp,
+			struct scsi_sense_hdr *sshdr, int sense_valid,
+			int the_result)
+{
+	sd_print_result(sdkp, the_result);
+	if (driver_byte(the_result) & DRIVER_SENSE)
+		sd_print_sense_hdr(sdkp, sshdr);
+	else
+		sd_printk(KERN_NOTICE, sdkp, "Sense not available.\n");
+
+	/*
+	 * Set dirty bit for removable devices if not ready -
+	 * sometimes drives will not report this properly.
+	 */
+	if (sdp->removable &&
+	    sense_valid && sshdr->sense_key == NOT_READY)
+		sdp->changed = 1;
+
+	/*
+	 * We used to set media_present to 0 here to indicate no media
+	 * in the drive, but some drives fail read capacity even with
+	 * media present, so we can't do that.
+	 */
+	sdkp->capacity = 0; /* unknown mapped to zero - as usual */
+}
+
+#define RC16_LEN 32
+#if RC16_LEN > SD_BUF_SIZE
+#error RC16_LEN must not be more than SD_BUF_SIZE
+#endif
+
+static int read_capacity_16(struct scsi_disk *sdkp, struct scsi_device *sdp,
+						unsigned char *buffer)
 {
 	unsigned char cmd[16];
-	int the_result, retries;
-	int sector_size = 0;
-	/* Force READ CAPACITY(16) when PROTECT=1 */
-	int longrc = scsi_device_protection(sdkp->device) ? 1 : 0;
 	struct scsi_sense_hdr sshdr;
 	int sense_valid = 0;
-	struct scsi_device *sdp = sdkp->device;
+	int the_result;
+	int retries = 3;
+	unsigned long long lba;
+	unsigned sector_size;
 
-repeat:
-	retries = 3;
 	do {
-		if (longrc) {
-			memset((void *) cmd, 0, 16);
-			cmd[0] = SERVICE_ACTION_IN;
-			cmd[1] = SAI_READ_CAPACITY_16;
-			cmd[13] = 13;
-			memset((void *) buffer, 0, 13);
-		} else {
-			cmd[0] = READ_CAPACITY;
-			memset((void *) &cmd[1], 0, 9);
-			memset((void *) buffer, 0, 8);
-		}
-		
+		memset(cmd, 0, 16);
+		cmd[0] = SERVICE_ACTION_IN;
+		cmd[1] = SAI_READ_CAPACITY_16;
+		cmd[13] = RC16_LEN;
+		memset(buffer, 0, RC16_LEN);
+
 		the_result = scsi_execute_req(sdp, cmd, DMA_FROM_DEVICE,
-					      buffer, longrc ? 13 : 8, &sshdr,
-					      SD_TIMEOUT, SD_MAX_RETRIES, NULL);
+					buffer, RC16_LEN, &sshdr,
+					SD_TIMEOUT, SD_MAX_RETRIES, NULL);
 
 		if (media_not_present(sdkp, &sshdr))
-			return;
+			return -ENODEV;
+
+		if (the_result) {
+			sense_valid = scsi_sense_valid(&sshdr);
+			if (sense_valid &&
+			    sshdr.sense_key == ILLEGAL_REQUEST &&
+			    (sshdr.asc == 0x20 || sshdr.asc == 0x24) &&
+			    sshdr.ascq == 0x00)
+				/* Invalid Command Operation Code or
+				 * Invalid Field in CDB, just retry
+				 * silently with RC10 */
+				return -EINVAL;
+		}
+		retries--;
+
+	} while (the_result && retries);
+
+	if (the_result) {
+		sd_printk(KERN_NOTICE, sdkp, "READ CAPACITY(16) failed\n");
+		read_capacity_error(sdkp, sdp, &sshdr, sense_valid, the_result);
+		return -EINVAL;
+	}
+
+	sector_size =	(buffer[8] << 24) | (buffer[9] << 16) |
+			(buffer[10] << 8) | buffer[11];
+	lba =  (((u64)buffer[0] << 56) | ((u64)buffer[1] << 48) |
+		((u64)buffer[2] << 40) | ((u64)buffer[3] << 32) |
+		((u64)buffer[4] << 24) | ((u64)buffer[5] << 16) |
+		((u64)buffer[6] << 8) | (u64)buffer[7]);
+
+	sd_read_protection_type(sdkp, buffer);
+
+	if ((sizeof(sdkp->capacity) == 4) && (lba >= 0xffffffffULL)) {
+		sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
+			"kernel compiled with support for large block "
+			"devices.\n");
+		sdkp->capacity = 0;
+		return -EOVERFLOW;
+	}
+
+	sdkp->capacity = lba + 1;
+	return sector_size;
+}
+
+static int read_capacity_10(struct scsi_disk *sdkp, struct scsi_device *sdp,
+						unsigned char *buffer)
+{
+	unsigned char cmd[16];
+	struct scsi_sense_hdr sshdr;
+	int sense_valid = 0;
+	int the_result;
+	int retries = 3;
+	sector_t lba;
+	unsigned sector_size;
+
+	do {
+		cmd[0] = READ_CAPACITY;
+		memset(&cmd[1], 0, 9);
+		memset(buffer, 0, 8);
+
+		the_result = scsi_execute_req(sdp, cmd, DMA_FROM_DEVICE,
+					buffer, 8, &sshdr,
+					SD_TIMEOUT, SD_MAX_RETRIES, NULL);
+
+		if (media_not_present(sdkp, &sshdr))
+			return -ENODEV;
 
 		if (the_result)
 			sense_valid = scsi_sense_valid(&sshdr);
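[Editorial note] read_capacity_16() above decodes the RC16 parameter data by hand: bytes 0-7 carry the last LBA in big-endian order, bytes 8-11 the logical block length, and the capacity is lba + 1. A self-contained userspace sketch of the same decoding, run on an invented reply buffer:

/*
 * Userspace sketch of the READ CAPACITY(16) parameter-data decoding done
 * in read_capacity_16() above.  The sample reply is made up.
 */
#include <stdio.h>
#include <stdint.h>

static uint64_t get_be64(const unsigned char *p)
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

static uint32_t get_be32(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
}

int main(void)
{
	/* Fake RC16 reply: last LBA = 0x1dcf32af, block length = 512. */
	unsigned char buf[32] = {
		0, 0, 0, 0, 0x1d, 0xcf, 0x32, 0xaf,
		0, 0, 0x02, 0x00,
	};
	uint64_t lba = get_be64(buf);
	uint32_t sector_size = get_be32(buf + 8);

	printf("capacity = %llu blocks of %u bytes\n",
	       (unsigned long long)(lba + 1), sector_size);
	return 0;
}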
@@ -1316,85 +1400,96 @@
 
 	} while (the_result && retries);
 
-	if (the_result && !longrc) {
+	if (the_result) {
 		sd_printk(KERN_NOTICE, sdkp, "READ CAPACITY failed\n");
-		sd_print_result(sdkp, the_result);
-		if (driver_byte(the_result) & DRIVER_SENSE)
-			sd_print_sense_hdr(sdkp, &sshdr);
-		else
-			sd_printk(KERN_NOTICE, sdkp, "Sense not available.\n");
+		read_capacity_error(sdkp, sdp, &sshdr, sense_valid, the_result);
+		return -EINVAL;
+	}
 
-		/* Set dirty bit for removable devices if not ready -
-		 * sometimes drives will not report this properly. */
-		if (sdp->removable &&
-		    sense_valid && sshdr.sense_key == NOT_READY)
-			sdp->changed = 1;
+	sector_size =	(buffer[4] << 24) | (buffer[5] << 16) |
+			(buffer[6] << 8) | buffer[7];
+	lba =	(buffer[0] << 24) | (buffer[1] << 16) |
+		(buffer[2] << 8) | buffer[3];
 
-		/* Either no media are present but the drive didn't tell us,
-		   or they are present but the read capacity command fails */
-		/* sdkp->media_present = 0; -- not always correct */
-		sdkp->capacity = 0; /* unknown mapped to zero - as usual */
+	if ((sizeof(sdkp->capacity) == 4) && (lba == 0xffffffff)) {
+		sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
+			"kernel compiled with support for large block "
+			"devices.\n");
+		sdkp->capacity = 0;
+		return -EOVERFLOW;
+	}
 
-		return;
-	} else if (the_result && longrc) {
-		/* READ CAPACITY(16) has been failed */
-		sd_printk(KERN_NOTICE, sdkp, "READ CAPACITY(16) failed\n");
-		sd_print_result(sdkp, the_result);
-		sd_printk(KERN_NOTICE, sdkp, "Use 0xffffffff as device size\n");
+	sdkp->capacity = lba + 1;
+	return sector_size;
+}
 
-		sdkp->capacity = 1 + (sector_t) 0xffffffff;		
-		goto got_data;
-	}	
-	
-	if (!longrc) {
-		sector_size = (buffer[4] << 24) |
-			(buffer[5] << 16) | (buffer[6] << 8) | buffer[7];
-		if (buffer[0] == 0xff && buffer[1] == 0xff &&
-		    buffer[2] == 0xff && buffer[3] == 0xff) {
-			if(sizeof(sdkp->capacity) > 4) {
-				sd_printk(KERN_NOTICE, sdkp, "Very big device. "
-					  "Trying to use READ CAPACITY(16).\n");
-				longrc = 1;
-				goto repeat;
-			}
-			sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use "
-				  "a kernel compiled with support for large "
-				  "block devices.\n");
-			sdkp->capacity = 0;
+static int sd_try_rc16_first(struct scsi_device *sdp)
+{
+	if (sdp->scsi_level > SCSI_SPC_2)
+		return 1;
+	if (scsi_device_protection(sdp))
+		return 1;
+	return 0;
+}
+
+/*
+ * read disk capacity
+ */
+static void
+sd_read_capacity(struct scsi_disk *sdkp, unsigned char *buffer)
+{
+	int sector_size;
+	struct scsi_device *sdp = sdkp->device;
+	sector_t old_capacity = sdkp->capacity;
+
+	if (sd_try_rc16_first(sdp)) {
+		sector_size = read_capacity_16(sdkp, sdp, buffer);
+		if (sector_size == -EOVERFLOW)
 			goto got_data;
+		if (sector_size == -ENODEV)
+			return;
+		if (sector_size < 0)
+			sector_size = read_capacity_10(sdkp, sdp, buffer);
+		if (sector_size < 0)
+			return;
+	} else {
+		sector_size = read_capacity_10(sdkp, sdp, buffer);
+		if (sector_size == -EOVERFLOW)
+			goto got_data;
+		if (sector_size < 0)
+			return;
+		if ((sizeof(sdkp->capacity) > 4) &&
+		    (sdkp->capacity > 0xffffffffULL)) {
+			int old_sector_size = sector_size;
+			sd_printk(KERN_NOTICE, sdkp, "Very big device. "
+					"Trying to use READ CAPACITY(16).\n");
+			sector_size = read_capacity_16(sdkp, sdp, buffer);
+			if (sector_size < 0) {
+				sd_printk(KERN_NOTICE, sdkp,
+					"Using 0xffffffff as device size\n");
+				sdkp->capacity = 1 + (sector_t) 0xffffffff;
+				sector_size = old_sector_size;
+				goto got_data;
+			}
 		}
-		sdkp->capacity = 1 + (((sector_t)buffer[0] << 24) |
-			(buffer[1] << 16) |
-			(buffer[2] << 8) |
-			buffer[3]);			
-	} else {
-		sdkp->capacity = 1 + (((u64)buffer[0] << 56) |
-			((u64)buffer[1] << 48) |
-			((u64)buffer[2] << 40) |
-			((u64)buffer[3] << 32) |
-			((sector_t)buffer[4] << 24) |
-			((sector_t)buffer[5] << 16) |
-			((sector_t)buffer[6] << 8)  |
-			(sector_t)buffer[7]);
-			
-		sector_size = (buffer[8] << 24) |
-			(buffer[9] << 16) | (buffer[10] << 8) | buffer[11];
+	}
 
-		sd_read_protection_type(sdkp, buffer);
-	}	
-
-	/* Some devices return the total number of sectors, not the
-	 * highest sector number.  Make the necessary adjustment. */
-	if (sdp->fix_capacity) {
+	/* Some devices are known to return the total number of blocks,
+	 * not the highest block number.  Some devices have versions
+	 * which do this and others which do not.  Some devices we might
+	 * suspect of doing this but we don't know for certain.
+	 *
+	 * If we know the reported capacity is wrong, decrement it.  If
+	 * we can only guess, then assume the number of blocks is even
+	 * (usually true but not always) and err on the side of lowering
+	 * the capacity.
+	 */
+	if (sdp->fix_capacity ||
+	    (sdp->guess_capacity && (sdkp->capacity & 0x01))) {
+		sd_printk(KERN_INFO, sdkp, "Adjusting the sector count "
+				"from its reported value: %llu\n",
+				(unsigned long long) sdkp->capacity);
 		--sdkp->capacity;
-
-	/* Some devices have version which report the correct sizes
-	 * and others which do not. We guess size according to a heuristic
-	 * and err on the side of lowering the capacity. */
-	} else {
-		if (sdp->guess_capacity)
-			if (sdkp->capacity & 0x01) /* odd sizes are odd */
-				--sdkp->capacity;
 	}
 
 got_data:
@@ -1437,10 +1532,11 @@
 		string_get_size(sz, STRING_UNITS_10, cap_str_10,
 				sizeof(cap_str_10));
 
-		sd_printk(KERN_NOTICE, sdkp,
-			  "%llu %d-byte hardware sectors: (%s/%s)\n",
-			  (unsigned long long)sdkp->capacity,
-			  sector_size, cap_str_10, cap_str_2);
+		if (sdkp->first_scan || old_capacity != sdkp->capacity)
+			sd_printk(KERN_NOTICE, sdkp,
+				  "%llu %d-byte hardware sectors: (%s/%s)\n",
+				  (unsigned long long)sdkp->capacity,
+				  sector_size, cap_str_10, cap_str_2);
 	}
 
 	/* Rescale capacity to 512-byte units */
@@ -1477,6 +1573,7 @@
 	int res;
 	struct scsi_device *sdp = sdkp->device;
 	struct scsi_mode_data data;
+	int old_wp = sdkp->write_prot;
 
 	set_disk_ro(sdkp->disk, 0);
 	if (sdp->skip_ms_page_3f) {
@@ -1517,11 +1614,13 @@
 	} else {
 		sdkp->write_prot = ((data.device_specific & 0x80) != 0);
 		set_disk_ro(sdkp->disk, sdkp->write_prot);
-		sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n",
-			  sdkp->write_prot ? "on" : "off");
-		sd_printk(KERN_DEBUG, sdkp,
-			  "Mode Sense: %02x %02x %02x %02x\n",
-			  buffer[0], buffer[1], buffer[2], buffer[3]);
+		if (sdkp->first_scan || old_wp != sdkp->write_prot) {
+			sd_printk(KERN_NOTICE, sdkp, "Write Protect is %s\n",
+				  sdkp->write_prot ? "on" : "off");
+			sd_printk(KERN_DEBUG, sdkp,
+				  "Mode Sense: %02x %02x %02x %02x\n",
+				  buffer[0], buffer[1], buffer[2], buffer[3]);
+		}
 	}
 }
 
@@ -1539,6 +1638,9 @@
 	int modepage;
 	struct scsi_mode_data data;
 	struct scsi_sense_hdr sshdr;
+	int old_wce = sdkp->WCE;
+	int old_rcd = sdkp->RCD;
+	int old_dpofua = sdkp->DPOFUA;
 
 	if (sdp->skip_ms_page_8)
 		goto defaults;
@@ -1610,12 +1712,14 @@
 			sdkp->DPOFUA = 0;
 		}
 
-		sd_printk(KERN_NOTICE, sdkp,
-		       "Write cache: %s, read cache: %s, %s\n",
-		       sdkp->WCE ? "enabled" : "disabled",
-		       sdkp->RCD ? "disabled" : "enabled",
-		       sdkp->DPOFUA ? "supports DPO and FUA"
-		       : "doesn't support DPO or FUA");
+		if (sdkp->first_scan || old_wce != sdkp->WCE ||
+		    old_rcd != sdkp->RCD || old_dpofua != sdkp->DPOFUA)
+			sd_printk(KERN_NOTICE, sdkp,
+				  "Write cache: %s, read cache: %s, %s\n",
+				  sdkp->WCE ? "enabled" : "disabled",
+				  sdkp->RCD ? "disabled" : "enabled",
+				  sdkp->DPOFUA ? "supports DPO and FUA"
+				  : "doesn't support DPO or FUA");
 
 		return;
 	}
@@ -1711,15 +1815,6 @@
 		goto out;
 	}
 
-	/* defaults, until the device tells us otherwise */
-	sdp->sector_size = 512;
-	sdkp->capacity = 0;
-	sdkp->media_present = 1;
-	sdkp->write_prot = 0;
-	sdkp->WCE = 0;
-	sdkp->RCD = 0;
-	sdkp->ATO = 0;
-
 	sd_spinup_disk(sdkp);
 
 	/*
@@ -1733,6 +1828,8 @@
 		sd_read_app_tag_own(sdkp, buffer);
 	}
 
+	sdkp->first_scan = 0;
+
 	/*
 	 * We now have all cache related info, determine how we deal
 	 * with ordered requests.  Note that as the current SCSI
@@ -1843,6 +1940,16 @@
 	gd->private_data = &sdkp->driver;
 	gd->queue = sdkp->device->request_queue;
 
+	/* defaults, until the device tells us otherwise */
+	sdp->sector_size = 512;
+	sdkp->capacity = 0;
+	sdkp->media_present = 1;
+	sdkp->write_prot = 0;
+	sdkp->WCE = 0;
+	sdkp->RCD = 0;
+	sdkp->ATO = 0;
+	sdkp->first_scan = 1;
+
 	sd_revalidate_disk(gd);
 
 	blk_queue_prep_rq(sdp->request_queue, sd_prep_fn);
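[Editorial note] Several sd.c paths above now remember the previous value (old_capacity, old_wp, old_wce, old_rcd, old_dpofua) and print only on the first scan or when the value actually changed, so periodic revalidation no longer repeats identical log lines. A tiny userspace sketch of that first_scan/change-detection pattern, with an invented struct and helper name:

/*
 * Sketch of the "log only on first scan or when the value changed"
 * pattern added to sd.c above.  demo_* names are illustrative.
 */
#include <stdio.h>

struct demo_disk {
	unsigned long long capacity;
	int first_scan;
};

static void demo_report_capacity(struct demo_disk *d, unsigned long long new_capacity)
{
	unsigned long long old_capacity = d->capacity;

	d->capacity = new_capacity;
	if (d->first_scan || old_capacity != new_capacity)
		printf("capacity is now %llu blocks\n", new_capacity);
	d->first_scan = 0;
}

int main(void)
{
	struct demo_disk d = { .capacity = 0, .first_scan = 1 };

	demo_report_capacity(&d, 1000);	/* first scan: prints */
	demo_report_capacity(&d, 1000);	/* unchanged: silent  */
	demo_report_capacity(&d, 2000);	/* changed: prints    */
	return 0;
}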
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 75638e7..708778c 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -53,6 +53,7 @@
 	unsigned	WCE : 1;	/* state of disk WCE bit */
 	unsigned	RCD : 1;	/* state of disk RCD bit, unused */
 	unsigned	DPOFUA : 1;	/* state of disk DPOFUA bit */
+	unsigned	first_scan : 1;
 };
 #define to_scsi_disk(obj) container_of(obj,struct scsi_disk,dev)
 
diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
index e946e05..c9146d7 100644
--- a/drivers/scsi/ses.c
+++ b/drivers/scsi/ses.c
@@ -345,44 +345,21 @@
 	return 0;
 }
 
-#define VPD_INQUIRY_SIZE 36
-
 static void ses_match_to_enclosure(struct enclosure_device *edev,
 				   struct scsi_device *sdev)
 {
-	unsigned char *buf = kmalloc(VPD_INQUIRY_SIZE, GFP_KERNEL);
+	unsigned char *buf;
 	unsigned char *desc;
-	u16 vpd_len;
+	unsigned int vpd_len;
 	struct efd efd = {
 		.addr = 0,
 	};
-	unsigned char cmd[] = {
-		INQUIRY,
-		1,
-		0x83,
-		VPD_INQUIRY_SIZE >> 8,
-		VPD_INQUIRY_SIZE & 0xff,
-		0
-	};
 
+	buf = scsi_get_vpd_page(sdev, 0x83);
 	if (!buf)
 		return;
 
-	if (scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf,
-			     VPD_INQUIRY_SIZE, NULL, SES_TIMEOUT, SES_RETRIES,
-			     NULL))
-		goto free;
-
-	vpd_len = (buf[2] << 8) + buf[3];
-	kfree(buf);
-	buf = kmalloc(vpd_len, GFP_KERNEL);
-	if (!buf)
-		return;
-	cmd[3] = vpd_len >> 8;
-	cmd[4] = vpd_len & 0xff;
-	if (scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf,
-			     vpd_len, NULL, SES_TIMEOUT, SES_RETRIES, NULL))
-		goto free;
+	vpd_len = ((buf[2] << 8) | buf[3]) + 4;
 
 	desc = buf + 4;
 	while (desc < buf + vpd_len) {
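[Editorial note] ses_match_to_enclosure() now fetches the whole Device Identification VPD page (0x83) via scsi_get_vpd_page(), derives vpd_len from the two page-length bytes plus the four-byte header, and walks the designation descriptors looking for an 8-byte NAA SAS address. A userspace sketch of that walk on an invented page follows; the field checks mirror the loop above, and the literal 6 stands in for SCSI_PROTOCOL_SAS.

/*
 * Userspace sketch of the VPD page 0x83 descriptor walk above: page length
 * from bytes 2-3 of the header, then <4-byte header><designator> entries.
 * The sample page is invented.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned char page[] = {
		0x00, 0x83, 0x00, 0x0c,			/* header: page 0x83, length 12 */
		0x61, 0x93, 0x00, 0x08,			/* SAS proto, binary, PIV, target port, NAA, len 8 */
		0x50, 0x00, 0xc5, 0x00, 0x12, 0x34, 0x56, 0x78,	/* 8-byte NAA identifier */
	};
	unsigned int vpd_len = ((page[2] << 8) | page[3]) + 4;
	unsigned char *desc = page + 4;

	while (desc < page + vpd_len) {
		unsigned int proto = desc[0] >> 4;
		unsigned int code_set = desc[0] & 0x0f;
		unsigned int piv = desc[1] & 0x80;
		unsigned int assoc = (desc[1] >> 4) & 0x3;
		unsigned int type = desc[1] & 0x0f;
		unsigned int len = desc[3];

		if (piv && code_set == 1 && assoc == 1 &&
		    proto == 6 /* SAS */ && type == 3 /* NAA */ && len == 8) {
			uint64_t addr = 0;
			int i;

			for (i = 0; i < 8; i++)
				addr = (addr << 8) | desc[4 + i];
			printf("SAS address 0x%016llx\n", (unsigned long long)addr);
		}
		desc += len + 4;
	}
	return 0;
}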
@@ -393,7 +370,7 @@
 		u8 type = desc[1] & 0x0f;
 		u8 len = desc[3];
 
-		if (piv && code_set == 1 && assoc == 1 && code_set == 1
+		if (piv && code_set == 1 && assoc == 1
 		    && proto == SCSI_PROTOCOL_SAS && type == 3 && len == 8)
 			efd.addr = (u64)desc[4] << 56 |
 				(u64)desc[5] << 48 |
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index b4ef2f8..ffc8785 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -98,7 +98,6 @@
 static int scatter_elem_sz_prev = SG_SCATTER_SZ;
 
 #define SG_SECTOR_SZ 512
-#define SG_SECTOR_MSK (SG_SECTOR_SZ - 1)
 
 static int sg_add(struct device *, struct class_interface *);
 static void sg_remove(struct device *, struct class_interface *);
@@ -137,10 +136,11 @@
 	volatile char done;	/* 0->before bh, 1->before read, 2->read */
 	struct request *rq;
 	struct bio *bio;
+	struct execute_work ew;
 } Sg_request;
 
 typedef struct sg_fd {		/* holds the state of a file descriptor */
-	struct sg_fd *nextfp;	/* NULL when last opened fd on this device */
+	struct list_head sfd_siblings;
 	struct sg_device *parentdp;	/* owning device */
 	wait_queue_head_t read_wait;	/* queue read until command done */
 	rwlock_t rq_list_lock;	/* protect access to list in req_arr */
@@ -158,6 +158,8 @@
 	char next_cmd_len;	/* 0 -> automatic (def), >0 -> use on next write() */
 	char keep_orphan;	/* 0 -> drop orphan (def), 1 -> keep for read() */
 	char mmap_called;	/* 0 -> mmap() never called on this fd */
+	struct kref f_ref;
+	struct execute_work ew;
 } Sg_fd;
 
 typedef struct sg_device { /* holds the state of each scsi generic device */
@@ -165,27 +167,25 @@
 	wait_queue_head_t o_excl_wait;	/* queue open() when O_EXCL in use */
 	int sg_tablesize;	/* adapter's max scatter-gather table size */
 	u32 index;		/* device index number */
-	Sg_fd *headfp;		/* first open fd belonging to this device */
+	struct list_head sfds;
 	volatile char detached;	/* 0->attached, 1->detached pending removal */
 	volatile char exclude;	/* opened for exclusive access */
 	char sgdebug;		/* 0->off, 1->sense, 9->dump dev, 10-> all devs */
 	struct gendisk *disk;
 	struct cdev * cdev;	/* char_dev [sysfs: /sys/cdev/major/sg<n>] */
+	struct kref d_ref;
 } Sg_device;
 
-static int sg_fasync(int fd, struct file *filp, int mode);
 /* tasklet or soft irq callback */
 static void sg_rq_end_io(struct request *rq, int uptodate);
 static int sg_start_req(Sg_request *srp, unsigned char *cmd);
 static void sg_finish_rem_req(Sg_request * srp);
 static int sg_build_indirect(Sg_scatter_hold * schp, Sg_fd * sfp, int buff_size);
-static int sg_build_sgat(Sg_scatter_hold * schp, const Sg_fd * sfp,
-			 int tablesize);
 static ssize_t sg_new_read(Sg_fd * sfp, char __user *buf, size_t count,
 			   Sg_request * srp);
 static ssize_t sg_new_write(Sg_fd *sfp, struct file *file,
 			const char __user *buf, size_t count, int blocking,
-			int read_only, Sg_request **o_srp);
+			int read_only, int sg_io_owned, Sg_request **o_srp);
 static int sg_common_write(Sg_fd * sfp, Sg_request * srp,
 			   unsigned char *cmnd, int timeout, int blocking);
 static int sg_read_oxfer(Sg_request * srp, char __user *outp, int num_read_xfer);
@@ -194,16 +194,13 @@
 static void sg_link_reserve(Sg_fd * sfp, Sg_request * srp, int size);
 static void sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp);
 static Sg_fd *sg_add_sfp(Sg_device * sdp, int dev);
-static int sg_remove_sfp(Sg_device * sdp, Sg_fd * sfp);
-static void __sg_remove_sfp(Sg_device * sdp, Sg_fd * sfp);
+static void sg_remove_sfp(struct kref *);
 static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id);
 static Sg_request *sg_add_request(Sg_fd * sfp);
 static int sg_remove_request(Sg_fd * sfp, Sg_request * srp);
 static int sg_res_in_use(Sg_fd * sfp);
 static Sg_device *sg_get_dev(int dev);
-#ifdef CONFIG_SCSI_PROC_FS
-static int sg_last_dev(void);
-#endif
+static void sg_put_dev(Sg_device *sdp);
 
 #define SZ_SG_HEADER sizeof(struct sg_header)
 #define SZ_SG_IO_HDR sizeof(sg_io_hdr_t)
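[Editorial note] The structure changes above replace the hand-maintained headfp/nextfp chain of open file descriptors with a standard struct list_head (sdp->sfds / sfp->sfd_siblings), so later hunks can use list_add_tail(), list_del(), list_empty() and list_for_each_entry(). A hedged kernel-style sketch of that conversion with illustrative demo_* names, not sg.c code:

/*
 * Sketch of the list_head conversion above: the hand-rolled singly linked
 * "headfp/nextfp" chain becomes a standard doubly linked list.
 */
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

struct demo_dev {
	struct list_head fds;		/* like sdp->sfds */
};

struct demo_fd {
	struct list_head siblings;	/* like sfp->sfd_siblings */
	int id;
};

static void demo_dev_init(struct demo_dev *dev)
{
	INIT_LIST_HEAD(&dev->fds);	/* like INIT_LIST_HEAD(&sdp->sfds) in sg_alloc() */
}

static struct demo_fd *demo_add_fd(struct demo_dev *dev, int id)
{
	struct demo_fd *fd = kzalloc(sizeof(*fd), GFP_KERNEL);

	if (!fd)
		return NULL;
	fd->id = id;
	list_add_tail(&fd->siblings, &dev->fds);	/* replaces the nextfp tail walk */
	return fd;
}

static void demo_remove_fd(struct demo_fd *fd)
{
	list_del(&fd->siblings);	/* replaces the prev_fp/nextfp unlink loop */
	kfree(fd);
}

static bool demo_dev_busy(struct demo_dev *dev)
{
	return !list_empty(&dev->fds);	/* replaces "sdp->headfp != NULL" */
}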
@@ -237,22 +234,17 @@
 	nonseekable_open(inode, filp);
 	SCSI_LOG_TIMEOUT(3, printk("sg_open: dev=%d, flags=0x%x\n", dev, flags));
 	sdp = sg_get_dev(dev);
-	if ((!sdp) || (!sdp->device)) {
-		unlock_kernel();
-		return -ENXIO;
-	}
-	if (sdp->detached) {
-		unlock_kernel();
-		return -ENODEV;
+	if (IS_ERR(sdp)) {
+		retval = PTR_ERR(sdp);
+		sdp = NULL;
+		goto sg_put;
 	}
 
 	/* This driver's module count bumped by fops_get in <linux/fs.h> */
 	/* Prevent the device driver from vanishing while we sleep */
 	retval = scsi_device_get(sdp->device);
-	if (retval) {
-		unlock_kernel();
-		return retval;
-	}
+	if (retval)
+		goto sg_put;
 
 	if (!((flags & O_NONBLOCK) ||
 	      scsi_block_when_processing_errors(sdp->device))) {
@@ -266,13 +258,13 @@
 			retval = -EPERM; /* Can't lock it with read only access */
 			goto error_out;
 		}
-		if (sdp->headfp && (flags & O_NONBLOCK)) {
+		if (!list_empty(&sdp->sfds) && (flags & O_NONBLOCK)) {
 			retval = -EBUSY;
 			goto error_out;
 		}
 		res = 0;
 		__wait_event_interruptible(sdp->o_excl_wait,
-			((sdp->headfp || sdp->exclude) ? 0 : (sdp->exclude = 1)), res);
+					   ((!list_empty(&sdp->sfds) || sdp->exclude) ? 0 : (sdp->exclude = 1)), res);
 		if (res) {
 			retval = res;	/* -ERESTARTSYS because signal hit process */
 			goto error_out;
@@ -294,7 +286,7 @@
 		retval = -ENODEV;
 		goto error_out;
 	}
-	if (!sdp->headfp) {	/* no existing opens on this device */
+	if (list_empty(&sdp->sfds)) {	/* no existing opens on this device */
 		sdp->sgdebug = 0;
 		q = sdp->device->request_queue;
 		sdp->sg_tablesize = min(q->max_hw_segments,
@@ -303,16 +295,20 @@
 	if ((sfp = sg_add_sfp(sdp, dev)))
 		filp->private_data = sfp;
 	else {
-		if (flags & O_EXCL)
+		if (flags & O_EXCL) {
 			sdp->exclude = 0;	/* undo if error */
+			wake_up_interruptible(&sdp->o_excl_wait);
+		}
 		retval = -ENOMEM;
 		goto error_out;
 	}
-	unlock_kernel();
-	return 0;
-
-      error_out:
-	scsi_device_put(sdp->device);
+	retval = 0;
+error_out:
+	if (retval)
+		scsi_device_put(sdp->device);
+sg_put:
+	if (sdp)
+		sg_put_dev(sdp);
 	unlock_kernel();
 	return retval;
 }
@@ -327,13 +323,13 @@
 	if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
 		return -ENXIO;
 	SCSI_LOG_TIMEOUT(3, printk("sg_release: %s\n", sdp->disk->disk_name));
-	if (0 == sg_remove_sfp(sdp, sfp)) {	/* Returns 1 when sdp gone */
-		if (!sdp->detached) {
-			scsi_device_put(sdp->device);
-		}
-		sdp->exclude = 0;
-		wake_up_interruptible(&sdp->o_excl_wait);
-	}
+
+	sfp->closed = 1;
+
+	sdp->exclude = 0;
+	wake_up_interruptible(&sdp->o_excl_wait);
+
+	kref_put(&sfp->f_ref, sg_remove_sfp);
 	return 0;
 }
 
@@ -557,7 +553,8 @@
 		return -EFAULT;
 	blocking = !(filp->f_flags & O_NONBLOCK);
 	if (old_hdr.reply_len < 0)
-		return sg_new_write(sfp, filp, buf, count, blocking, 0, NULL);
+		return sg_new_write(sfp, filp, buf, count,
+				    blocking, 0, 0, NULL);
 	if (count < (SZ_SG_HEADER + 6))
 		return -EIO;	/* The minimum scsi command length is 6 bytes. */
 
@@ -638,7 +635,7 @@
 
 static ssize_t
 sg_new_write(Sg_fd *sfp, struct file *file, const char __user *buf,
-		 size_t count, int blocking, int read_only,
+		 size_t count, int blocking, int read_only, int sg_io_owned,
 		 Sg_request **o_srp)
 {
 	int k;
@@ -658,6 +655,7 @@
 		SCSI_LOG_TIMEOUT(1, printk("sg_new_write: queue full\n"));
 		return -EDOM;
 	}
+	srp->sg_io_owned = sg_io_owned;
 	hp = &srp->header;
 	if (__copy_from_user(hp, buf, SZ_SG_IO_HDR)) {
 		sg_remove_request(sfp, srp);
@@ -755,24 +753,13 @@
 	hp->duration = jiffies_to_msecs(jiffies);
 
 	srp->rq->timeout = timeout;
+	kref_get(&sfp->f_ref); /* sg_rq_end_io() does kref_put(). */
 	blk_execute_rq_nowait(sdp->device->request_queue, sdp->disk,
 			      srp->rq, 1, sg_rq_end_io);
 	return 0;
 }
 
 static int
-sg_srp_done(Sg_request *srp, Sg_fd *sfp)
-{
-	unsigned long iflags;
-	int done;
-
-	read_lock_irqsave(&sfp->rq_list_lock, iflags);
-	done = srp->done;
-	read_unlock_irqrestore(&sfp->rq_list_lock, iflags);
-	return done;
-}
-
-static int
 sg_ioctl(struct inode *inode, struct file *filp,
 	 unsigned int cmd_in, unsigned long arg)
 {
@@ -804,27 +791,26 @@
 				return -EFAULT;
 			result =
 			    sg_new_write(sfp, filp, p, SZ_SG_IO_HDR,
-					 blocking, read_only, &srp);
+					 blocking, read_only, 1, &srp);
 			if (result < 0)
 				return result;
-			srp->sg_io_owned = 1;
 			while (1) {
 				result = 0;	/* following macro to beat race condition */
 				__wait_event_interruptible(sfp->read_wait,
-					(sdp->detached || sfp->closed || sg_srp_done(srp, sfp)),
-							   result);
+					(srp->done || sdp->detached),
+					result);
 				if (sdp->detached)
 					return -ENODEV;
-				if (sfp->closed)
-					return 0;	/* request packet dropped already */
-				if (0 == result)
+				write_lock_irq(&sfp->rq_list_lock);
+				if (srp->done) {
+					srp->done = 2;
+					write_unlock_irq(&sfp->rq_list_lock);
 					break;
+				}
 				srp->orphan = 1;
+				write_unlock_irq(&sfp->rq_list_lock);
 				return result;	/* -ERESTARTSYS because signal hit process */
 			}
-			write_lock_irqsave(&sfp->rq_list_lock, iflags);
-			srp->done = 2;
-			write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
 			result = sg_new_read(sfp, p, SZ_SG_IO_HDR, srp);
 			return (result < 0) ? result : 0;
 		}
@@ -1238,6 +1224,15 @@
 	return 0;
 }
 
+static void sg_rq_end_io_usercontext(struct work_struct *work)
+{
+	struct sg_request *srp = container_of(work, struct sg_request, ew.work);
+	struct sg_fd *sfp = srp->parentfp;
+
+	sg_finish_rem_req(srp);
+	kref_put(&sfp->f_ref, sg_remove_sfp);
+}
+
 /*
  * This function is a "bottom half" handler that is called by the mid
  * level when a command is completed (or has failed).
@@ -1245,24 +1240,23 @@
 static void sg_rq_end_io(struct request *rq, int uptodate)
 {
 	struct sg_request *srp = rq->end_io_data;
-	Sg_device *sdp = NULL;
+	Sg_device *sdp;
 	Sg_fd *sfp;
 	unsigned long iflags;
 	unsigned int ms;
 	char *sense;
-	int result, resid;
+	int result, resid, done = 1;
 
-	if (NULL == srp) {
-		printk(KERN_ERR "sg_cmd_done: NULL request\n");
+	if (WARN_ON(srp->done != 0))
 		return;
-	}
+
 	sfp = srp->parentfp;
-	if (sfp)
-		sdp = sfp->parentdp;
-	if ((NULL == sdp) || sdp->detached) {
-		printk(KERN_INFO "sg_cmd_done: device detached\n");
+	if (WARN_ON(sfp == NULL))
 		return;
-	}
+
+	sdp = sfp->parentdp;
+	if (unlikely(sdp->detached))
+		printk(KERN_INFO "sg_rq_end_io: device detached\n");
 
 	sense = rq->sense;
 	result = rq->errors;
@@ -1301,33 +1295,25 @@
 	}
 	/* Rely on write phase to clean out srp status values, so no "else" */
 
-	if (sfp->closed) {	/* whoops this fd already released, cleanup */
-		SCSI_LOG_TIMEOUT(1, printk("sg_cmd_done: already closed, freeing ...\n"));
-		sg_finish_rem_req(srp);
-		srp = NULL;
-		if (NULL == sfp->headrp) {
-			SCSI_LOG_TIMEOUT(1, printk("sg_cmd_done: already closed, final cleanup\n"));
-			if (0 == sg_remove_sfp(sdp, sfp)) {	/* device still present */
-				scsi_device_put(sdp->device);
-			}
-			sfp = NULL;
-		}
-	} else if (srp && srp->orphan) {
+	write_lock_irqsave(&sfp->rq_list_lock, iflags);
+	if (unlikely(srp->orphan)) {
 		if (sfp->keep_orphan)
 			srp->sg_io_owned = 0;
-		else {
-			sg_finish_rem_req(srp);
-			srp = NULL;
-		}
+		else
+			done = 0;
 	}
-	if (sfp && srp) {
-		/* Now wake up any sg_read() that is waiting for this packet. */
-		kill_fasync(&sfp->async_qp, SIGPOLL, POLL_IN);
-		write_lock_irqsave(&sfp->rq_list_lock, iflags);
-		srp->done = 1;
+	srp->done = done;
+	write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+
+	if (likely(done)) {
+		/* Now wake up any sg_read() that is waiting for this
+		 * packet.
+		 */
 		wake_up_interruptible(&sfp->read_wait);
-		write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
-	}
+		kill_fasync(&sfp->async_qp, SIGPOLL, POLL_IN);
+		kref_put(&sfp->f_ref, sg_remove_sfp);
+	} else
+		execute_in_process_context(sg_rq_end_io_usercontext, &srp->ew);
 }
 
 static struct file_operations sg_fops = {
@@ -1362,17 +1348,18 @@
 		printk(KERN_WARNING "kmalloc Sg_device failure\n");
 		return ERR_PTR(-ENOMEM);
 	}
-	error = -ENOMEM;
+
 	if (!idr_pre_get(&sg_index_idr, GFP_KERNEL)) {
 		printk(KERN_WARNING "idr expansion Sg_device failure\n");
+		error = -ENOMEM;
 		goto out;
 	}
 
 	write_lock_irqsave(&sg_index_lock, iflags);
-	error = idr_get_new(&sg_index_idr, sdp, &k);
-	write_unlock_irqrestore(&sg_index_lock, iflags);
 
+	error = idr_get_new(&sg_index_idr, sdp, &k);
 	if (error) {
+		write_unlock_irqrestore(&sg_index_lock, iflags);
 		printk(KERN_WARNING "idr allocation Sg_device failure: %d\n",
 		       error);
 		goto out;
@@ -1386,9 +1373,13 @@
 	disk->first_minor = k;
 	sdp->disk = disk;
 	sdp->device = scsidp;
+	INIT_LIST_HEAD(&sdp->sfds);
 	init_waitqueue_head(&sdp->o_excl_wait);
 	sdp->sg_tablesize = min(q->max_hw_segments, q->max_phys_segments);
 	sdp->index = k;
+	kref_init(&sdp->d_ref);
+
+	write_unlock_irqrestore(&sg_index_lock, iflags);
 
 	error = 0;
  out:
@@ -1399,6 +1390,8 @@
 	return sdp;
 
  overflow:
+	idr_remove(&sg_index_idr, k);
+	write_unlock_irqrestore(&sg_index_lock, iflags);
 	sdev_printk(KERN_WARNING, scsidp,
 		    "Unable to attach sg device type=%d, minor "
 		    "number exceeds %d\n", scsidp->type, SG_MAX_DEVS - 1);
@@ -1486,49 +1479,46 @@
 	return error;
 }
 
-static void
-sg_remove(struct device *cl_dev, struct class_interface *cl_intf)
+static void sg_device_destroy(struct kref *kref)
+{
+	struct sg_device *sdp = container_of(kref, struct sg_device, d_ref);
+	unsigned long flags;
+
+	/* CAUTION!  Note that the device can still be found via idr_find()
+	 * even though the refcount is 0.  Therefore, do idr_remove() BEFORE
+	 * any other cleanup.
+	 */
+
+	write_lock_irqsave(&sg_index_lock, flags);
+	idr_remove(&sg_index_idr, sdp->index);
+	write_unlock_irqrestore(&sg_index_lock, flags);
+
+	SCSI_LOG_TIMEOUT(3,
+		printk("sg_device_destroy: %s\n",
+			sdp->disk->disk_name));
+
+	put_disk(sdp->disk);
+	kfree(sdp);
+}
+
+static void sg_remove(struct device *cl_dev, struct class_interface *cl_intf)
 {
 	struct scsi_device *scsidp = to_scsi_device(cl_dev->parent);
 	Sg_device *sdp = dev_get_drvdata(cl_dev);
 	unsigned long iflags;
 	Sg_fd *sfp;
-	Sg_fd *tsfp;
-	Sg_request *srp;
-	Sg_request *tsrp;
-	int delay;
 
-	if (!sdp)
+	if (!sdp || sdp->detached)
 		return;
 
-	delay = 0;
+	SCSI_LOG_TIMEOUT(3, printk("sg_remove: %s\n", sdp->disk->disk_name));
+
+	/* Need a write lock to set sdp->detached. */
 	write_lock_irqsave(&sg_index_lock, iflags);
-	if (sdp->headfp) {
-		sdp->detached = 1;
-		for (sfp = sdp->headfp; sfp; sfp = tsfp) {
-			tsfp = sfp->nextfp;
-			for (srp = sfp->headrp; srp; srp = tsrp) {
-				tsrp = srp->nextrp;
-				if (sfp->closed || (0 == sg_srp_done(srp, sfp)))
-					sg_finish_rem_req(srp);
-			}
-			if (sfp->closed) {
-				scsi_device_put(sdp->device);
-				__sg_remove_sfp(sdp, sfp);
-			} else {
-				delay = 1;
-				wake_up_interruptible(&sfp->read_wait);
-				kill_fasync(&sfp->async_qp, SIGPOLL,
-					    POLL_HUP);
-			}
-		}
-		SCSI_LOG_TIMEOUT(3, printk("sg_remove: dev=%d, dirty\n", sdp->index));
-		if (NULL == sdp->headfp) {
-			idr_remove(&sg_index_idr, sdp->index);
-		}
-	} else {	/* nothing active, simple case */
-		SCSI_LOG_TIMEOUT(3, printk("sg_remove: dev=%d\n", sdp->index));
-		idr_remove(&sg_index_idr, sdp->index);
+	sdp->detached = 1;
+	list_for_each_entry(sfp, &sdp->sfds, sfd_siblings) {
+		wake_up_interruptible(&sfp->read_wait);
+		kill_fasync(&sfp->async_qp, SIGPOLL, POLL_HUP);
 	}
 	write_unlock_irqrestore(&sg_index_lock, iflags);
 
@@ -1536,13 +1526,8 @@
 	device_destroy(sg_sysfs_class, MKDEV(SCSI_GENERIC_MAJOR, sdp->index));
 	cdev_del(sdp->cdev);
 	sdp->cdev = NULL;
-	put_disk(sdp->disk);
-	sdp->disk = NULL;
-	if (NULL == sdp->headfp)
-		kfree(sdp);
 
-	if (delay)
-		msleep(10);	/* dirty detach so delay device destruction */
+	sg_put_dev(sdp);
 }
 
 module_param_named(scatter_elem_sz, scatter_elem_sz, int, S_IRUGO | S_IWUSR);
@@ -1736,8 +1721,8 @@
 		return -EFAULT;
 	if (0 == blk_size)
 		++blk_size;	/* don't know why */
-/* round request up to next highest SG_SECTOR_SZ byte boundary */
-	blk_size = (blk_size + SG_SECTOR_MSK) & (~SG_SECTOR_MSK);
+	/* round request up to next highest SG_SECTOR_SZ byte boundary */
+	blk_size = ALIGN(blk_size, SG_SECTOR_SZ);
 	SCSI_LOG_TIMEOUT(4, printk("sg_build_indirect: buff_size=%d, blk_size=%d\n",
 				   buff_size, blk_size));
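[Editorial note] The hunk above swaps the open-coded rounding (blk_size + SG_SECTOR_MSK) & ~SG_SECTOR_MSK for ALIGN(blk_size, SG_SECTOR_SZ). A short userspace check that the two forms agree for a power-of-two alignment; the ALIGN() below is a simplified stand-in for the kernel macro.

/*
 * Check that ALIGN(x, a) matches the old mask arithmetic for a
 * power-of-two alignment such as SG_SECTOR_SZ (512).
 */
#include <stdio.h>
#include <assert.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned int sz = 512, msk = sz - 1;
	unsigned int x;

	for (x = 0; x < 4096; x++)
		assert(ALIGN(x, sz) == ((x + msk) & ~msk));
	printf("old mask arithmetic and ALIGN() agree for 512-byte rounding\n");
	return 0;
}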
 
@@ -1939,22 +1924,6 @@
 	return resp;
 }
 
-#ifdef CONFIG_SCSI_PROC_FS
-static Sg_request *
-sg_get_nth_request(Sg_fd * sfp, int nth)
-{
-	Sg_request *resp;
-	unsigned long iflags;
-	int k;
-
-	read_lock_irqsave(&sfp->rq_list_lock, iflags);
-	for (k = 0, resp = sfp->headrp; resp && (k < nth);
-	     ++k, resp = resp->nextrp) ;
-	read_unlock_irqrestore(&sfp->rq_list_lock, iflags);
-	return resp;
-}
-#endif
-
 /* always adds to end of list */
 static Sg_request *
 sg_add_request(Sg_fd * sfp)
@@ -2030,22 +1999,6 @@
 	return res;
 }
 
-#ifdef CONFIG_SCSI_PROC_FS
-static Sg_fd *
-sg_get_nth_sfp(Sg_device * sdp, int nth)
-{
-	Sg_fd *resp;
-	unsigned long iflags;
-	int k;
-
-	read_lock_irqsave(&sg_index_lock, iflags);
-	for (k = 0, resp = sdp->headfp; resp && (k < nth);
-	     ++k, resp = resp->nextfp) ;
-	read_unlock_irqrestore(&sg_index_lock, iflags);
-	return resp;
-}
-#endif
-
 static Sg_fd *
 sg_add_sfp(Sg_device * sdp, int dev)
 {
@@ -2060,6 +2013,7 @@
 	init_waitqueue_head(&sfp->read_wait);
 	rwlock_init(&sfp->rq_list_lock);
 
+	kref_init(&sfp->f_ref);
 	sfp->timeout = SG_DEFAULT_TIMEOUT;
 	sfp->timeout_user = SG_DEFAULT_TIMEOUT_USER;
 	sfp->force_packid = SG_DEF_FORCE_PACK_ID;
@@ -2069,14 +2023,7 @@
 	sfp->keep_orphan = SG_DEF_KEEP_ORPHAN;
 	sfp->parentdp = sdp;
 	write_lock_irqsave(&sg_index_lock, iflags);
-	if (!sdp->headfp)
-		sdp->headfp = sfp;
-	else {			/* add to tail of existing list */
-		Sg_fd *pfp = sdp->headfp;
-		while (pfp->nextfp)
-			pfp = pfp->nextfp;
-		pfp->nextfp = sfp;
-	}
+	list_add_tail(&sfp->sfd_siblings, &sdp->sfds);
 	write_unlock_irqrestore(&sg_index_lock, iflags);
 	SCSI_LOG_TIMEOUT(3, printk("sg_add_sfp: sfp=0x%p\n", sfp));
 	if (unlikely(sg_big_buff != def_reserved_size))
@@ -2087,75 +2034,52 @@
 	sg_build_reserve(sfp, bufflen);
 	SCSI_LOG_TIMEOUT(3, printk("sg_add_sfp:   bufflen=%d, k_use_sg=%d\n",
 			   sfp->reserve.bufflen, sfp->reserve.k_use_sg));
+
+	kref_get(&sdp->d_ref);
+	__module_get(THIS_MODULE);
 	return sfp;
 }
 
-static void
-__sg_remove_sfp(Sg_device * sdp, Sg_fd * sfp)
+static void sg_remove_sfp_usercontext(struct work_struct *work)
 {
-	Sg_fd *fp;
-	Sg_fd *prev_fp;
+	struct sg_fd *sfp = container_of(work, struct sg_fd, ew.work);
+	struct sg_device *sdp = sfp->parentdp;
 
-	prev_fp = sdp->headfp;
-	if (sfp == prev_fp)
-		sdp->headfp = prev_fp->nextfp;
-	else {
-		while ((fp = prev_fp->nextfp)) {
-			if (sfp == fp) {
-				prev_fp->nextfp = fp->nextfp;
-				break;
-			}
-			prev_fp = fp;
-		}
-	}
+	/* Cleanup any responses which were never read(). */
+	while (sfp->headrp)
+		sg_finish_rem_req(sfp->headrp);
+
 	if (sfp->reserve.bufflen > 0) {
-		SCSI_LOG_TIMEOUT(6, 
-			printk("__sg_remove_sfp:    bufflen=%d, k_use_sg=%d\n",
-			(int) sfp->reserve.bufflen, (int) sfp->reserve.k_use_sg));
+		SCSI_LOG_TIMEOUT(6,
+			printk("sg_remove_sfp:    bufflen=%d, k_use_sg=%d\n",
+				(int) sfp->reserve.bufflen,
+				(int) sfp->reserve.k_use_sg));
 		sg_remove_scat(&sfp->reserve);
 	}
-	sfp->parentdp = NULL;
-	SCSI_LOG_TIMEOUT(6, printk("__sg_remove_sfp:    sfp=0x%p\n", sfp));
+
+	SCSI_LOG_TIMEOUT(6,
+		printk("sg_remove_sfp: %s, sfp=0x%p\n",
+			sdp->disk->disk_name,
+			sfp));
 	kfree(sfp);
+
+	scsi_device_put(sdp->device);
+	sg_put_dev(sdp);
+	module_put(THIS_MODULE);
 }
 
-/* Returns 0 in normal case, 1 when detached and sdp object removed */
-static int
-sg_remove_sfp(Sg_device * sdp, Sg_fd * sfp)
+static void sg_remove_sfp(struct kref *kref)
 {
-	Sg_request *srp;
-	Sg_request *tsrp;
-	int dirty = 0;
-	int res = 0;
+	struct sg_fd *sfp = container_of(kref, struct sg_fd, f_ref);
+	struct sg_device *sdp = sfp->parentdp;
+	unsigned long iflags;
 
-	for (srp = sfp->headrp; srp; srp = tsrp) {
-		tsrp = srp->nextrp;
-		if (sg_srp_done(srp, sfp))
-			sg_finish_rem_req(srp);
-		else
-			++dirty;
-	}
-	if (0 == dirty) {
-		unsigned long iflags;
+	write_lock_irqsave(&sg_index_lock, iflags);
+	list_del(&sfp->sfd_siblings);
+	write_unlock_irqrestore(&sg_index_lock, iflags);
+	wake_up_interruptible(&sdp->o_excl_wait);
 
-		write_lock_irqsave(&sg_index_lock, iflags);
-		__sg_remove_sfp(sdp, sfp);
-		if (sdp->detached && (NULL == sdp->headfp)) {
-			idr_remove(&sg_index_idr, sdp->index);
-			kfree(sdp);
-			res = 1;
-		}
-		write_unlock_irqrestore(&sg_index_lock, iflags);
-	} else {
-		/* MOD_INC's to inhibit unloading sg and associated adapter driver */
-		/* only bump the access_count if we actually succeeded in
-		 * throwing another counter on the host module */
-		scsi_device_get(sdp->device);	/* XXX: retval ignored? */	
-		sfp->closed = 1;	/* flag dirty state on this fd */
-		SCSI_LOG_TIMEOUT(1, printk("sg_remove_sfp: worrisome, %d writes pending\n",
-				  dirty));
-	}
-	return res;
+	execute_in_process_context(sg_remove_sfp_usercontext, &sfp->ew);
 }
 
 static int
@@ -2197,19 +2121,38 @@
 }
 #endif
 
-static Sg_device *
-sg_get_dev(int dev)
+/* must be called with sg_index_lock held */
+static Sg_device *sg_lookup_dev(int dev)
 {
-	Sg_device *sdp;
-	unsigned long iflags;
+	return idr_find(&sg_index_idr, dev);
+}
 
-	read_lock_irqsave(&sg_index_lock, iflags);
-	sdp = idr_find(&sg_index_idr, dev);
-	read_unlock_irqrestore(&sg_index_lock, iflags);
+static Sg_device *sg_get_dev(int dev)
+{
+	struct sg_device *sdp;
+	unsigned long flags;
+
+	read_lock_irqsave(&sg_index_lock, flags);
+	sdp = sg_lookup_dev(dev);
+	if (!sdp)
+		sdp = ERR_PTR(-ENXIO);
+	else if (sdp->detached) {
+		/* If sdp->detached, then the refcount may already be 0, in
+		 * which case it would be a bug to do kref_get().
+		 */
+		sdp = ERR_PTR(-ENODEV);
+	} else
+		kref_get(&sdp->d_ref);
+	read_unlock_irqrestore(&sg_index_lock, flags);
 
 	return sdp;
 }
 
+static void sg_put_dev(struct sg_device *sdp)
+{
+	kref_put(&sdp->d_ref, sg_device_destroy);
+}
+
 #ifdef CONFIG_SCSI_PROC_FS
 
 static struct proc_dir_entry *sg_proc_sgp = NULL;
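[Editorial note] sg_get_dev() now reports why a lookup failed by returning ERR_PTR(-ENXIO) or ERR_PTR(-ENODEV) instead of NULL, and sg_open() tests the result with IS_ERR()/PTR_ERR(). A short kernel-style sketch of that error-pointer convention with illustrative demo_* names:

/*
 * Sketch of the ERR_PTR/IS_ERR convention sg_get_dev() switches to above:
 * the error code travels inside the pointer, so callers need no separate
 * status variable.
 */
#include <linux/err.h>
#include <linux/errno.h>

struct demo_dev {
	int detached;
};

static struct demo_dev *demo_lookup(struct demo_dev *table[], int idx, int max)
{
	struct demo_dev *dev;

	if (idx < 0 || idx >= max || !table[idx])
		return ERR_PTR(-ENXIO);		/* never existed */
	dev = table[idx];
	if (dev->detached)
		return ERR_PTR(-ENODEV);	/* existed, but is going away */
	return dev;
}

static int demo_open(struct demo_dev *table[], int idx, int max)
{
	struct demo_dev *dev = demo_lookup(table, idx, max);

	if (IS_ERR(dev))
		return PTR_ERR(dev);		/* -ENXIO or -ENODEV */
	/* ... use dev ... */
	return 0;
}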
@@ -2466,8 +2409,10 @@
 	struct sg_proc_deviter * it = (struct sg_proc_deviter *) v;
 	Sg_device *sdp;
 	struct scsi_device *scsidp;
+	unsigned long iflags;
 
-	sdp = it ? sg_get_dev(it->index) : NULL;
+	read_lock_irqsave(&sg_index_lock, iflags);
+	sdp = it ? sg_lookup_dev(it->index) : NULL;
 	if (sdp && (scsidp = sdp->device) && (!sdp->detached))
 		seq_printf(s, "%d\t%d\t%d\t%d\t%d\t%d\t%d\t%d\t%d\n",
 			      scsidp->host->host_no, scsidp->channel,
@@ -2478,6 +2423,7 @@
 			      (int) scsi_device_online(scsidp));
 	else
 		seq_printf(s, "-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\n");
+	read_unlock_irqrestore(&sg_index_lock, iflags);
 	return 0;
 }
 
@@ -2491,16 +2437,20 @@
 	struct sg_proc_deviter * it = (struct sg_proc_deviter *) v;
 	Sg_device *sdp;
 	struct scsi_device *scsidp;
+	unsigned long iflags;
 
-	sdp = it ? sg_get_dev(it->index) : NULL;
+	read_lock_irqsave(&sg_index_lock, iflags);
+	sdp = it ? sg_lookup_dev(it->index) : NULL;
 	if (sdp && (scsidp = sdp->device) && (!sdp->detached))
 		seq_printf(s, "%8.8s\t%16.16s\t%4.4s\n",
 			   scsidp->vendor, scsidp->model, scsidp->rev);
 	else
 		seq_printf(s, "<no active device>\n");
+	read_unlock_irqrestore(&sg_index_lock, iflags);
 	return 0;
 }
 
+/* must be called while holding sg_index_lock */
 static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp)
 {
 	int k, m, new_interface, blen, usg;
@@ -2510,9 +2460,12 @@
 	const char * cp;
 	unsigned int ms;
 
-	for (k = 0; (fp = sg_get_nth_sfp(sdp, k)); ++k) {
+	k = 0;
+	list_for_each_entry(fp, &sdp->sfds, sfd_siblings) {
+		k++;
+		read_lock(&fp->rq_list_lock); /* irqs already disabled */
 		seq_printf(s, "   FD(%d): timeout=%dms bufflen=%d "
-			   "(res)sgat=%d low_dma=%d\n", k + 1,
+			   "(res)sgat=%d low_dma=%d\n", k,
 			   jiffies_to_msecs(fp->timeout),
 			   fp->reserve.bufflen,
 			   (int) fp->reserve.k_use_sg,
@@ -2520,7 +2473,9 @@
 		seq_printf(s, "   cmd_q=%d f_packid=%d k_orphan=%d closed=%d\n",
 			   (int) fp->cmd_q, (int) fp->force_packid,
 			   (int) fp->keep_orphan, (int) fp->closed);
-		for (m = 0; (srp = sg_get_nth_request(fp, m)); ++m) {
+		for (m = 0, srp = fp->headrp;
+				srp != NULL;
+				++m, srp = srp->nextrp) {
 			hp = &srp->header;
 			new_interface = (hp->interface_id == '\0') ? 0 : 1;
 			if (srp->res_used) {
@@ -2557,6 +2512,7 @@
 		}
 		if (0 == m)
 			seq_printf(s, "     No requests active\n");
+		read_unlock(&fp->rq_list_lock);
 	}
 }
 
@@ -2569,39 +2525,34 @@
 {
 	struct sg_proc_deviter * it = (struct sg_proc_deviter *) v;
 	Sg_device *sdp;
+	unsigned long iflags;
 
 	if (it && (0 == it->index)) {
 		seq_printf(s, "max_active_device=%d(origin 1)\n",
 			   (int)it->max);
 		seq_printf(s, " def_reserved_size=%d\n", sg_big_buff);
 	}
-	sdp = it ? sg_get_dev(it->index) : NULL;
-	if (sdp) {
+
+	read_lock_irqsave(&sg_index_lock, iflags);
+	sdp = it ? sg_lookup_dev(it->index) : NULL;
+	if (sdp && !list_empty(&sdp->sfds)) {
 		struct scsi_device *scsidp = sdp->device;
 
-		if (NULL == scsidp) {
-			seq_printf(s, "device %d detached ??\n", 
-				   (int)it->index);
-			return 0;
-		}
-
-		if (sg_get_nth_sfp(sdp, 0)) {
-			seq_printf(s, " >>> device=%s ",
-				sdp->disk->disk_name);
-			if (sdp->detached)
-				seq_printf(s, "detached pending close ");
-			else
-				seq_printf
-				    (s, "scsi%d chan=%d id=%d lun=%d   em=%d",
-				     scsidp->host->host_no,
-				     scsidp->channel, scsidp->id,
-				     scsidp->lun,
-				     scsidp->host->hostt->emulated);
-			seq_printf(s, " sg_tablesize=%d excl=%d\n",
-				   sdp->sg_tablesize, sdp->exclude);
-		}
+		seq_printf(s, " >>> device=%s ", sdp->disk->disk_name);
+		if (sdp->detached)
+			seq_printf(s, "detached pending close ");
+		else
+			seq_printf
+			    (s, "scsi%d chan=%d id=%d lun=%d   em=%d",
+			     scsidp->host->host_no,
+			     scsidp->channel, scsidp->id,
+			     scsidp->lun,
+			     scsidp->host->hostt->emulated);
+		seq_printf(s, " sg_tablesize=%d excl=%d\n",
+			   sdp->sg_tablesize, sdp->exclude);
 		sg_proc_debug_helper(s, sdp);
 	}
+	read_unlock_irqrestore(&sg_index_lock, iflags);
 	return 0;
 }
 
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index c6f19ee..eb24efe 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -374,9 +374,9 @@
 	if (!debugging) { /* Abnormal conditions for tape */
 		if (!cmdstatp->have_sense)
 			printk(KERN_WARNING
-			       "%s: Error %x (sugg. bt 0x%x, driver bt 0x%x, host bt 0x%x).\n",
-			       name, result, suggestion(result),
-			       driver_byte(result) & DRIVER_MASK, host_byte(result));
+			       "%s: Error %x (driver bt 0x%x, host bt 0x%x).\n",
+			       name, result, driver_byte(result),
+			       host_byte(result));
 		else if (cmdstatp->have_sense &&
 			 scode != NO_SENSE &&
 			 scode != RECOVERED_ERROR &&
diff --git a/drivers/scsi/stex.c b/drivers/scsi/stex.c
index a3a18ad..47b614e 100644
--- a/drivers/scsi/stex.c
+++ b/drivers/scsi/stex.c
@@ -1,7 +1,7 @@
 /*
  * SuperTrak EX Series Storage Controller driver for Linux
  *
- *	Copyright (C) 2005, 2006 Promise Technology Inc.
+ *	Copyright (C) 2005-2009 Promise Technology Inc.
  *
  *	This program is free software; you can redistribute it and/or
  *	modify it under the terms of the GNU General Public License
@@ -36,8 +36,8 @@
 #include <scsi/scsi_eh.h>
 
 #define DRV_NAME "stex"
-#define ST_DRIVER_VERSION "3.6.0000.1"
-#define ST_VER_MAJOR 		3
+#define ST_DRIVER_VERSION "4.6.0000.1"
+#define ST_VER_MAJOR 		4
 #define ST_VER_MINOR 		6
 #define ST_OEM 			0
 #define ST_BUILD_VER 		1
@@ -103,7 +103,7 @@
 	MU_REQ_COUNT				= (MU_MAX_REQUEST + 1),
 	MU_STATUS_COUNT				= (MU_MAX_REQUEST + 1),
 
-	STEX_CDB_LENGTH				= MAX_COMMAND_SIZE,
+	STEX_CDB_LENGTH				= 16,
 	REQ_VARIABLE_LEN			= 1024,
 	STATUS_VAR_LEN				= 128,
 	ST_CAN_QUEUE				= MU_MAX_REQUEST,
@@ -114,15 +114,19 @@
 	SG_CF_EOT				= 0x80,	/* end of table */
 	SG_CF_64B				= 0x40,	/* 64 bit item */
 	SG_CF_HOST				= 0x20,	/* sg in host memory */
+	MSG_DATA_DIR_ND				= 0,
+	MSG_DATA_DIR_IN				= 1,
+	MSG_DATA_DIR_OUT			= 2,
 
 	st_shasta				= 0,
 	st_vsc					= 1,
 	st_vsc1					= 2,
 	st_yosemite				= 3,
+	st_seq					= 4,
 
 	PASSTHRU_REQ_TYPE			= 0x00000001,
 	PASSTHRU_REQ_NO_WAKEUP			= 0x00000100,
-	ST_INTERNAL_TIMEOUT			= 30,
+	ST_INTERNAL_TIMEOUT			= 180,
 
 	ST_TO_CMD				= 0,
 	ST_FROM_CMD				= 1,
@@ -152,35 +156,6 @@
 	ST_ADDITIONAL_MEM			= 0x200000,
 };
 
-/* SCSI inquiry data */
-typedef struct st_inq {
-	u8 DeviceType			:5;
-	u8 DeviceTypeQualifier		:3;
-	u8 DeviceTypeModifier		:7;
-	u8 RemovableMedia		:1;
-	u8 Versions;
-	u8 ResponseDataFormat		:4;
-	u8 HiSupport			:1;
-	u8 NormACA			:1;
-	u8 ReservedBit			:1;
-	u8 AERC				:1;
-	u8 AdditionalLength;
-	u8 Reserved[2];
-	u8 SoftReset			:1;
-	u8 CommandQueue			:1;
-	u8 Reserved2			:1;
-	u8 LinkedCommands		:1;
-	u8 Synchronous			:1;
-	u8 Wide16Bit			:1;
-	u8 Wide32Bit			:1;
-	u8 RelativeAddressing		:1;
-	u8 VendorId[8];
-	u8 ProductId[16];
-	u8 ProductRevisionLevel[4];
-	u8 VendorSpecific[20];
-	u8 Reserved3[40];
-} ST_INQ;
-
 struct st_sgitem {
 	u8 ctrl;	/* SG_CF_xxx */
 	u8 reserved[3];
@@ -222,7 +197,7 @@
 	u8 target;
 	u8 task_attr;
 	u8 task_manage;
-	u8 prd_entry;
+	u8 data_dir;
 	u8 payload_sz;		/* payload size in 4-byte, not used */
 	u8 cdb[STEX_CDB_LENGTH];
 	u8 variable[REQ_VARIABLE_LEN];
@@ -284,7 +259,7 @@
 #define MU_REQ_BUFFER_SIZE	(MU_REQ_COUNT * sizeof(struct req_msg))
 #define MU_STATUS_BUFFER_SIZE	(MU_STATUS_COUNT * sizeof(struct status_msg))
 #define MU_BUFFER_SIZE		(MU_REQ_BUFFER_SIZE + MU_STATUS_BUFFER_SIZE)
-#define STEX_EXTRA_SIZE		max(sizeof(struct st_frame), sizeof(ST_INQ))
+#define STEX_EXTRA_SIZE		sizeof(struct st_frame)
 #define STEX_BUFFER_SIZE	(MU_BUFFER_SIZE + STEX_EXTRA_SIZE)
 
 struct st_ccb {
@@ -346,8 +321,8 @@
 static void stex_gettime(__le32 *time)
 {
 	struct timeval tv;
-	do_gettimeofday(&tv);
 
+	do_gettimeofday(&tv);
 	*time = cpu_to_le32(tv.tv_sec & 0xffffffff);
 	*(time + 1) = cpu_to_le32((tv.tv_sec >> 16) >> 16);
 }
@@ -368,7 +343,7 @@
 {
 	cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
 
-	/* "Invalid field in cbd" */
+	/* "Invalid field in cdb" */
 	scsi_build_sense_buffer(0, cmd->sense_buffer, ILLEGAL_REQUEST, 0x24,
 				0x0);
 	done(cmd);
@@ -497,6 +472,7 @@
 	unsigned int id,lun;
 	struct req_msg *req;
 	u16 tag;
+
 	host = cmd->device->host;
 	id = cmd->device->id;
 	lun = cmd->device->lun;
@@ -508,6 +484,7 @@
 		static char ms10_caching_page[12] =
 			{ 0, 0x12, 0, 0, 0, 0, 0, 0, 0x8, 0xa, 0x4, 0 };
 		unsigned char page;
+
 		page = cmd->cmnd[2] & 0x3f;
 		if (page == 0x8 || page == 0x3f) {
 			scsi_sg_copy_from_buffer(cmd, ms10_caching_page,
@@ -551,6 +528,7 @@
 		if (cmd->cmnd[1] == PASSTHRU_GET_DRVVER) {
 			struct st_drvver ver;
 			size_t cp_len = sizeof(ver);
+
 			ver.major = ST_VER_MAJOR;
 			ver.minor = ST_VER_MINOR;
 			ver.oem = ST_OEM;
@@ -584,6 +562,13 @@
 	/* cdb */
 	memcpy(req->cdb, cmd->cmnd, STEX_CDB_LENGTH);
 
+	if (cmd->sc_data_direction == DMA_FROM_DEVICE)
+		req->data_dir = MSG_DATA_DIR_IN;
+	else if (cmd->sc_data_direction == DMA_TO_DEVICE)
+		req->data_dir = MSG_DATA_DIR_OUT;
+	else
+		req->data_dir = MSG_DATA_DIR_ND;
+
 	hba->ccb[tag].cmd = cmd;
 	hba->ccb[tag].sense_bufflen = SCSI_SENSE_BUFFERSIZE;
 	hba->ccb[tag].sense_buffer = cmd->sense_buffer;
@@ -642,6 +627,7 @@
 	struct status_msg *resp, unsigned int variable)
 {
 	size_t count = variable;
+
 	if (resp->scsi_status != SAM_STAT_GOOD) {
 		if (ccb->sense_buffer != NULL)
 			memcpy(ccb->sense_buffer, resp->variable,
@@ -661,24 +647,6 @@
 		resp->scsi_status != SAM_STAT_CHECK_CONDITION) {
 		scsi_set_resid(ccb->cmd, scsi_bufflen(ccb->cmd) -
 			le32_to_cpu(*(__le32 *)&resp->variable[0]));
-		return;
-	}
-
-	if (resp->srb_status != 0)
-		return;
-
-	/* determine inquiry command status by DeviceTypeQualifier */
-	if (ccb->cmd->cmnd[0] == INQUIRY &&
-		resp->scsi_status == SAM_STAT_GOOD) {
-		ST_INQ *inq_data;
-
-		scsi_sg_copy_to_buffer(ccb->cmd, hba->copy_buffer,
-				       STEX_EXTRA_SIZE);
-		inq_data = (ST_INQ *)hba->copy_buffer;
-		if (inq_data->DeviceTypeQualifier != 0)
-			ccb->srb_status = SRB_STATUS_SELECTION_TIMEOUT;
-		else
-			ccb->srb_status = SRB_STATUS_SUCCESS;
 	}
 }
 
@@ -746,6 +714,7 @@
 				stex_copy_data(ccb, resp, size);
 		}
 
+		ccb->req = NULL;
 		ccb->srb_status = resp->srb_status;
 		ccb->scsi_status = resp->scsi_status;
 
@@ -983,6 +952,7 @@
 	struct st_hba *hba;
 	unsigned long flags;
 	unsigned long before;
+
 	hba = (struct st_hba *) &cmd->device->host->hostdata[0];
 
 	printk(KERN_INFO DRV_NAME
@@ -1067,6 +1037,7 @@
 static int stex_set_dma_mask(struct pci_dev * pdev)
 {
 	int ret;
+
 	if (!pci_set_dma_mask(pdev, DMA_64BIT_MASK)
 		&& !pci_set_consistent_dma_mask(pdev, DMA_64BIT_MASK))
 		return 0;
@@ -1124,9 +1095,9 @@
 	}
 
 	hba->cardtype = (unsigned int) id->driver_data;
-	if (hba->cardtype == st_vsc && (pdev->subsystem_device & 0xf) == 0x1)
+	if (hba->cardtype == st_vsc && (pdev->subsystem_device & 1))
 		hba->cardtype = st_vsc1;
-	hba->dma_size = (hba->cardtype == st_vsc1) ?
+	hba->dma_size = (hba->cardtype == st_vsc1 || hba->cardtype == st_seq) ?
 		(STEX_BUFFER_SIZE + ST_ADDITIONAL_MEM) : (STEX_BUFFER_SIZE);
 	hba->dma_mem = dma_alloc_coherent(&pdev->dev,
 		hba->dma_size, &hba->dma_handle, GFP_KERNEL);
@@ -1146,10 +1117,10 @@
 		host->max_lun = 8;
 		host->max_id = 16 + 1;
 	} else if (hba->cardtype == st_yosemite) {
-		host->max_lun = 128;
+		host->max_lun = 256;
 		host->max_id = 1 + 1;
 	} else {
-		/* st_vsc and st_vsc1 */
+		/* st_vsc, st_vsc1 and st_seq */
 		host->max_lun = 1;
 		host->max_id = 128 + 1;
 	}
@@ -1299,18 +1270,10 @@
 	{ 0x105a, 0x7250, PCI_ANY_ID, PCI_ANY_ID, 0, 0, st_vsc },
 
 	/* st_yosemite */
-	{ 0x105a, 0x8650, PCI_ANY_ID, 0x4600, 0, 0,
-		st_yosemite }, /* SuperTrak EX4650 */
-	{ 0x105a, 0x8650, PCI_ANY_ID, 0x4610, 0, 0,
-		st_yosemite }, /* SuperTrak EX4650o */
-	{ 0x105a, 0x8650, PCI_ANY_ID, 0x8600, 0, 0,
-		st_yosemite }, /* SuperTrak EX8650EL */
-	{ 0x105a, 0x8650, PCI_ANY_ID, 0x8601, 0, 0,
-		st_yosemite }, /* SuperTrak EX8650 */
-	{ 0x105a, 0x8650, PCI_ANY_ID, 0x8602, 0, 0,
-		st_yosemite }, /* SuperTrak EX8654 */
-	{ 0x105a, 0x8650, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
-		st_yosemite }, /* generic st_yosemite */
+	{ 0x105a, 0x8650, PCI_ANY_ID, PCI_ANY_ID, 0, 0, st_yosemite },
+
+	/* st_seq */
+	{ 0x105a, 0x3360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, st_seq },
 	{ }	/* terminate list */
 };
 MODULE_DEVICE_TABLE(pci, stex_pci_tbl);
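[Editorial note] stex now tells the firmware the transfer direction explicitly by translating cmd->sc_data_direction into the MSG_DATA_DIR_{IN,OUT,ND} values added above. A userspace sketch of that mapping; the local enum mirrors the kernel's dma_data_direction numbering and msg_dir_for() is an illustrative helper, not driver code.

/*
 * Sketch of the direction mapping stex_queuecommand() adds above: the
 * midlayer's DMA direction is translated to the firmware's encoding.
 */
#include <stdio.h>

enum demo_dma_dir { DMA_BIDIRECTIONAL = 0, DMA_TO_DEVICE = 1, DMA_FROM_DEVICE = 2, DMA_NONE = 3 };
enum { MSG_DATA_DIR_ND = 0, MSG_DATA_DIR_IN = 1, MSG_DATA_DIR_OUT = 2 };

static int msg_dir_for(enum demo_dma_dir dir)
{
	if (dir == DMA_FROM_DEVICE)
		return MSG_DATA_DIR_IN;		/* device -> host, e.g. READ  */
	if (dir == DMA_TO_DEVICE)
		return MSG_DATA_DIR_OUT;	/* host -> device, e.g. WRITE */
	return MSG_DATA_DIR_ND;			/* no data phase */
}

int main(void)
{
	printf("READ(10)  -> %d\n", msg_dir_for(DMA_FROM_DEVICE));
	printf("WRITE(10) -> %d\n", msg_dir_for(DMA_TO_DEVICE));
	printf("TUR       -> %d\n", msg_dir_for(DMA_NONE));
	return 0;
}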
diff --git a/drivers/scsi/sym53c8xx_2/sym_glue.c b/drivers/scsi/sym53c8xx_2/sym_glue.c
index f4e6cde..23e7820 100644
--- a/drivers/scsi/sym53c8xx_2/sym_glue.c
+++ b/drivers/scsi/sym53c8xx_2/sym_glue.c
@@ -792,9 +792,9 @@
 
 	/*
 	 *  Select queue depth from driver setup.
-	 *  Donnot use more than configured by user.
-	 *  Use at least 2.
-	 *  Donnot use more than our maximum.
+	 *  Do not use more than configured by user.
+	 *  Use at least 1.
+	 *  Do not use more than our maximum.
 	 */
 	reqtags = sym_driver_setup.max_tag;
 	if (reqtags > tp->usrtags)
@@ -803,7 +803,7 @@
 		reqtags = 0;
 	if (reqtags > SYM_CONF_MAX_TAG)
 		reqtags = SYM_CONF_MAX_TAG;
-	depth_to_use = reqtags ? reqtags : 2;
+	depth_to_use = reqtags ? reqtags : 1;
 	scsi_adjust_queue_depth(sdev,
 				sdev->tagged_supported ? MSG_SIMPLE_TAG : 0,
 				depth_to_use);
@@ -1236,14 +1236,29 @@
 #endif /* SYM_LINUX_PROC_INFO_SUPPORT */
 
 /*
+ * Free resources claimed by sym_iomap_device().  Note that
+ * sym_free_resources() should be used instead of this function after calling
+ * sym_attach().
+ */
+static void __devinit
+sym_iounmap_device(struct sym_device *device)
+{
+	if (device->s.ioaddr)
+		pci_iounmap(device->pdev, device->s.ioaddr);
+	if (device->s.ramaddr)
+		pci_iounmap(device->pdev, device->s.ramaddr);
+}
+
+/*
  *	Free controller resources.
  */
-static void sym_free_resources(struct sym_hcb *np, struct pci_dev *pdev)
+static void sym_free_resources(struct sym_hcb *np, struct pci_dev *pdev,
+		int do_free_irq)
 {
 	/*
 	 *  Free O/S specific resources.
 	 */
-	if (pdev->irq)
+	if (do_free_irq)
 		free_irq(pdev->irq, np->s.host);
 	if (np->s.ioaddr)
 		pci_iounmap(pdev, np->s.ioaddr);
@@ -1271,10 +1286,11 @@
 {
 	struct sym_data *sym_data;
 	struct sym_hcb *np = NULL;
-	struct Scsi_Host *shost;
+	struct Scsi_Host *shost = NULL;
 	struct pci_dev *pdev = dev->pdev;
 	unsigned long flags;
 	struct sym_fw *fw;
+	int do_free_irq = 0;
 
 	printk(KERN_INFO "sym%d: <%s> rev 0x%x at pci %s irq %u\n",
 		unit, dev->chip.name, pdev->revision, pci_name(pdev),
@@ -1285,11 +1301,11 @@
 	 */
 	fw = sym_find_firmware(&dev->chip);
 	if (!fw)
-		return NULL;
+		goto attach_failed;
 
 	shost = scsi_host_alloc(tpnt, sizeof(*sym_data));
 	if (!shost)
-		return NULL;
+		goto attach_failed;
 	sym_data = shost_priv(shost);
 
 	/*
@@ -1319,6 +1335,10 @@
 	np->maxoffs	= dev->chip.offset_max;
 	np->maxburst	= dev->chip.burst_max;
 	np->myaddr	= dev->host_id;
+	np->mmio_ba	= (u32)dev->mmio_base;
+	np->ram_ba	= (u32)dev->ram_base;
+	np->s.ioaddr	= dev->s.ioaddr;
+	np->s.ramaddr	= dev->s.ramaddr;
 
 	/*
 	 *  Edit its name.
@@ -1334,22 +1354,6 @@
 		goto attach_failed;
 	}
 
-	/*
-	 *  Try to map the controller chip to
-	 *  virtual and physical memory.
-	 */
-	np->mmio_ba = (u32)dev->mmio_base;
-	np->s.ioaddr	= dev->s.ioaddr;
-	np->s.ramaddr	= dev->s.ramaddr;
-
-	/*
-	 *  Map on-chip RAM if present and supported.
-	 */
-	if (!(np->features & FE_RAM))
-		dev->ram_base = 0;
-	if (dev->ram_base)
-		np->ram_ba = (u32)dev->ram_base;
-
 	if (sym_hcb_attach(shost, fw, dev->nvram))
 		goto attach_failed;
 
@@ -1364,6 +1368,7 @@
 			sym_name(np), pdev->irq);
 		goto attach_failed;
 	}
+	do_free_irq = 1;
 
 	/*
 	 *  After SCSI devices have been opened, we cannot
@@ -1416,12 +1421,13 @@
 		   "TERMINATION, DEVICE POWER etc.!\n", sym_name(np));
 	spin_unlock_irqrestore(shost->host_lock, flags);
  attach_failed:
-	if (!shost)
-		return NULL;
-	printf_info("%s: giving up ...\n", sym_name(np));
+	printf_info("sym%d: giving up ...\n", unit);
 	if (np)
-		sym_free_resources(np, pdev);
-	scsi_host_put(shost);
+		sym_free_resources(np, pdev, do_free_irq);
+	else
+		sym_iounmap_device(dev);
+	if (shost)
+		scsi_host_put(shost);
 
 	return NULL;
  }
@@ -1550,30 +1556,28 @@
 }
 
 /*
- *  Read and check the PCI configuration for any detected NCR 
- *  boards and save data for attaching after all boards have 
- *  been detected.
+ * Map HBA registers and on-chip SRAM (if present).
  */
-static void __devinit
-sym_init_device(struct pci_dev *pdev, struct sym_device *device)
+static int __devinit
+sym_iomap_device(struct sym_device *device)
 {
-	int i = 2;
+	struct pci_dev *pdev = device->pdev;
 	struct pci_bus_region bus_addr;
-
-	device->host_id = SYM_SETUP_HOST_ID;
-	device->pdev = pdev;
+	int i = 2;
 
 	pcibios_resource_to_bus(pdev, &bus_addr, &pdev->resource[1]);
 	device->mmio_base = bus_addr.start;
 
-	/*
-	 * If the BAR is 64-bit, resource 2 will be occupied by the
-	 * upper 32 bits
-	 */
-	if (!pdev->resource[i].flags)
-		i++;
-	pcibios_resource_to_bus(pdev, &bus_addr, &pdev->resource[i]);
-	device->ram_base = bus_addr.start;
+	if (device->chip.features & FE_RAM) {
+		/*
+		 * If the BAR is 64-bit, resource 2 will be occupied by the
+		 * upper 32 bits
+		 */
+		if (!pdev->resource[i].flags)
+			i++;
+		pcibios_resource_to_bus(pdev, &bus_addr, &pdev->resource[i]);
+		device->ram_base = bus_addr.start;
+	}
 
 #ifdef CONFIG_SCSI_SYM53C8XX_MMIO
 	if (device->mmio_base)
@@ -1583,9 +1587,21 @@
 	if (!device->s.ioaddr)
 		device->s.ioaddr = pci_iomap(pdev, 0,
 						pci_resource_len(pdev, 0));
-	if (device->ram_base)
+	if (!device->s.ioaddr) {
+		dev_err(&pdev->dev, "could not map registers; giving up.\n");
+		return -EIO;
+	}
+	if (device->ram_base) {
 		device->s.ramaddr = pci_iomap(pdev, i,
 						pci_resource_len(pdev, i));
+		if (!device->s.ramaddr) {
+			dev_warn(&pdev->dev,
+				"could not map SRAM; continuing anyway.\n");
+			device->ram_base = 0;
+		}
+	}
+
+	return 0;
 }
 
 /*
@@ -1659,7 +1675,8 @@
 	udelay(10);
 	OUTB(np, nc_istat, 0);
 
-	sym_free_resources(np, pdev);
+	sym_free_resources(np, pdev, 1);
+	scsi_host_put(shost);
 
 	return 1;
 }
@@ -1696,9 +1713,13 @@
 	struct sym_device sym_dev;
 	struct sym_nvram nvram;
 	struct Scsi_Host *shost;
+	int do_iounmap = 0;
+	int do_disable_device = 1;
 
 	memset(&sym_dev, 0, sizeof(sym_dev));
 	memset(&nvram, 0, sizeof(nvram));
+	sym_dev.pdev = pdev;
+	sym_dev.host_id = SYM_SETUP_HOST_ID;
 
 	if (pci_enable_device(pdev))
 		goto leave;
@@ -1708,12 +1729,17 @@
 	if (pci_request_regions(pdev, NAME53C8XX))
 		goto disable;
 
-	sym_init_device(pdev, &sym_dev);
 	if (sym_check_supported(&sym_dev))
 		goto free;
 
-	if (sym_check_raid(&sym_dev))
-		goto leave;	/* Don't disable the device */
+	if (sym_iomap_device(&sym_dev))
+		goto free;
+	do_iounmap = 1;
+
+	if (sym_check_raid(&sym_dev)) {
+		do_disable_device = 0;	/* Don't disable the device */
+		goto free;
+	}
 
 	if (sym_set_workarounds(&sym_dev))
 		goto free;
@@ -1722,6 +1748,7 @@
 
 	sym_get_nvram(&sym_dev, &nvram);
 
+	do_iounmap = 0; /* Don't sym_iounmap_device() after sym_attach(). */
 	shost = sym_attach(&sym2_template, attach_count, &sym_dev);
 	if (!shost)
 		goto free;
@@ -1737,9 +1764,12 @@
  detach:
 	sym_detach(pci_get_drvdata(pdev), pdev);
  free:
+	if (do_iounmap)
+		sym_iounmap_device(&sym_dev);
 	pci_release_regions(pdev);
  disable:
-	pci_disable_device(pdev);
+	if (do_disable_device)
+		pci_disable_device(pdev);
  leave:
 	return -ENODEV;
 }
@@ -1749,7 +1779,6 @@
 	struct Scsi_Host *shost = pci_get_drvdata(pdev);
 
 	scsi_remove_host(shost);
-	scsi_host_put(shost);
 	sym_detach(shost, pdev);
 	pci_release_regions(pdev);
 	pci_disable_device(pdev);
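
The probe-path rework above boils down to a flag-guarded unwind: each step either records that it still owns a resource or hands ownership on to sym_attach()/sym_detach(), and the error path releases only what is still owned. Distilled to its control flow as a stand-alone sketch (the example_* helpers are stubs standing in for the driver's own steps, not real functions):

#include <errno.h>

/* Stand-ins for the driver's own steps; 0 means success. */
static int example_enable(void)            { return 0; }
static int example_request_regions(void)   { return 0; }
static int example_iomap(void)             { return 0; }
static int example_is_raid(void)           { return 0; }
static int example_attach(void)            { return 0; }
static void example_iounmap(void)          { }
static void example_release_regions(void)  { }
static void example_disable(void)          { }

static int example_probe_shape(void)
{
	int do_iounmap = 0;		/* do we still own the iomaps?  */
	int do_disable_device = 1;	/* may we disable the device?   */

	if (example_enable())
		goto leave;
	if (example_request_regions())
		goto disable;
	if (example_iomap())
		goto release;
	do_iounmap = 1;			/* iomaps are ours to undo now  */

	if (example_is_raid()) {
		do_disable_device = 0;	/* leave the device enabled     */
		goto release;
	}

	do_iounmap = 0;			/* attach/detach own them now   */
	if (example_attach())
		goto release;
	return 0;

release:
	if (do_iounmap)
		example_iounmap();
	example_release_regions();
disable:
	if (do_disable_device)
		example_disable();
leave:
	return -ENODEV;
}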
diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.c b/drivers/scsi/sym53c8xx_2/sym_hipd.c
index 98df165..ccea7db 100644
--- a/drivers/scsi/sym53c8xx_2/sym_hipd.c
+++ b/drivers/scsi/sym53c8xx_2/sym_hipd.c
@@ -1433,13 +1433,12 @@
 	 * Many devices implement PPR in a buggy way, so only use it if we
 	 * really want to.
 	 */
-	if (goal->offset &&
-	    (goal->iu || goal->dt || goal->qas || (goal->period < 0xa))) {
+	if (goal->renego == NS_PPR || (goal->offset &&
+	    (goal->iu || goal->dt || goal->qas || (goal->period < 0xa)))) {
 		nego = NS_PPR;
-	} else if (spi_width(starget) != goal->width) {
+	} else if (goal->renego == NS_WIDE || goal->width) {
 		nego = NS_WIDE;
-	} else if (spi_period(starget) != goal->period ||
-		   spi_offset(starget) != goal->offset) {
+	} else if (goal->renego == NS_SYNC || goal->offset) {
 		nego = NS_SYNC;
 	} else {
 		goal->check_nego = 0;
@@ -2040,6 +2039,29 @@
 	}
 }
 
+static void sym_announce_transfer_rate(struct sym_tcb *tp)
+{
+	struct scsi_target *starget = tp->starget;
+
+	if (tp->tprint.period != spi_period(starget) ||
+	    tp->tprint.offset != spi_offset(starget) ||
+	    tp->tprint.width != spi_width(starget) ||
+	    tp->tprint.iu != spi_iu(starget) ||
+	    tp->tprint.dt != spi_dt(starget) ||
+	    tp->tprint.qas != spi_qas(starget) ||
+	    !tp->tprint.check_nego) {
+		tp->tprint.period = spi_period(starget);
+		tp->tprint.offset = spi_offset(starget);
+		tp->tprint.width = spi_width(starget);
+		tp->tprint.iu = spi_iu(starget);
+		tp->tprint.dt = spi_dt(starget);
+		tp->tprint.qas = spi_qas(starget);
+		tp->tprint.check_nego = 1;
+
+		spi_display_xfer_agreement(starget);
+	}
+}
+
 /*
  *  We received a WDTR.
  *  Let everything be aware of the changes.
@@ -2049,11 +2071,13 @@
 	struct sym_tcb *tp = &np->target[target];
 	struct scsi_target *starget = tp->starget;
 
-	if (spi_width(starget) == wide)
-		return;
-
 	sym_settrans(np, target, 0, 0, 0, wide, 0, 0);
 
+	if (wide)
+		tp->tgoal.renego = NS_WIDE;
+	else
+		tp->tgoal.renego = 0;
+	tp->tgoal.check_nego = 0;
 	tp->tgoal.width = wide;
 	spi_offset(starget) = 0;
 	spi_period(starget) = 0;
@@ -2063,7 +2087,7 @@
 	spi_qas(starget) = 0;
 
 	if (sym_verbose >= 3)
-		spi_display_xfer_agreement(starget);
+		sym_announce_transfer_rate(tp);
 }
 
 /*
@@ -2080,6 +2104,12 @@
 
 	sym_settrans(np, target, 0, ofs, per, wide, div, fak);
 
+	if (wide)
+		tp->tgoal.renego = NS_WIDE;
+	else if (ofs)
+		tp->tgoal.renego = NS_SYNC;
+	else
+		tp->tgoal.renego = 0;
 	spi_period(starget) = per;
 	spi_offset(starget) = ofs;
 	spi_iu(starget) = spi_dt(starget) = spi_qas(starget) = 0;
@@ -2090,7 +2120,7 @@
 		tp->tgoal.check_nego = 0;
 	}
 
-	spi_display_xfer_agreement(starget);
+	sym_announce_transfer_rate(tp);
 }
 
 /*
@@ -2106,6 +2136,10 @@
 
 	sym_settrans(np, target, opts, ofs, per, wide, div, fak);
 
+	if (wide || ofs)
+		tp->tgoal.renego = NS_PPR;
+	else
+		tp->tgoal.renego = 0;
 	spi_width(starget) = tp->tgoal.width = wide;
 	spi_period(starget) = tp->tgoal.period = per;
 	spi_offset(starget) = tp->tgoal.offset = ofs;
@@ -2114,7 +2148,7 @@
 	spi_qas(starget) = tp->tgoal.qas = !!(opts & PPR_OPT_QAS);
 	tp->tgoal.check_nego = 0;
 
-	spi_display_xfer_agreement(starget);
+	sym_announce_transfer_rate(tp);
 }
 
 /*
@@ -3516,6 +3550,7 @@
 			spi_dt(starget) = 0;
 			spi_qas(starget) = 0;
 			tp->tgoal.check_nego = 1;
+			tp->tgoal.renego = 0;
 		}
 
 		/*
@@ -5135,9 +5170,14 @@
 	/*
 	 *  Build a negotiation message if needed.
 	 *  (nego_status is filled by sym_prepare_nego())
+	 *
+	 *  Always negotiate on INQUIRY and REQUEST SENSE.
+	 *
 	 */
 	cp->nego_status = 0;
-	if (tp->tgoal.check_nego && !tp->nego_cp && lp) {
+	if ((tp->tgoal.check_nego ||
+	     cmd->cmnd[0] == INQUIRY || cmd->cmnd[0] == REQUEST_SENSE) &&
+	    !tp->nego_cp && lp) {
 		msglen += sym_prepare_nego(np, cp, msgptr + msglen);
 	}
 
diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.h b/drivers/scsi/sym53c8xx_2/sym_hipd.h
index ad07880..61d28fc 100644
--- a/drivers/scsi/sym53c8xx_2/sym_hipd.h
+++ b/drivers/scsi/sym53c8xx_2/sym_hipd.h
@@ -354,6 +354,7 @@
 	unsigned int dt:1;
 	unsigned int qas:1;
 	unsigned int check_nego:1;
+	unsigned int renego:2;
 };
 
 /*
@@ -419,6 +420,9 @@
 	/* Transfer goal */
 	struct sym_trans tgoal;
 
+	/* Last printed transfer speed */
+	struct sym_trans tprint;
+
 	/*
 	 * Keep track of the CCB used for the negotiation in order
 	 * to ensure that only 1 negotiation is queued at a time.
diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c
index d48c855..49aedb3 100644
--- a/drivers/usb/storage/transport.c
+++ b/drivers/usb/storage/transport.c
@@ -787,7 +787,7 @@
 	/* Did we transfer less than the minimum amount required? */
 	if ((srb->result == SAM_STAT_GOOD || srb->sense_buffer[2] == 0) &&
 			scsi_bufflen(srb) - scsi_get_resid(srb) < srb->underflow)
-		srb->result = (DID_ERROR << 16) | (SUGGEST_RETRY << 24);
+		srb->result = DID_ERROR << 16;
 
 	last_sector_hacks(us, srb);
 	return;
diff --git a/include/linux/bsg.h b/include/linux/bsg.h
index 3f0c64a..ecb4730 100644
--- a/include/linux/bsg.h
+++ b/include/linux/bsg.h
@@ -1,6 +1,8 @@
 #ifndef BSG_H
 #define BSG_H
 
+#include <linux/types.h>
+
 #define BSG_PROTOCOL_SCSI		0
 
 #define BSG_SUB_PROTOCOL_SCSI_CMD	0
diff --git a/include/linux/if_ether.h b/include/linux/if_ether.h
index 0216e1b..cfe4fe1 100644
--- a/include/linux/if_ether.h
+++ b/include/linux/if_ether.h
@@ -78,6 +78,7 @@
 #define ETH_P_PAE	0x888E		/* Port Access Entity (IEEE 802.1X) */
 #define ETH_P_AOE	0x88A2		/* ATA over Ethernet		*/
 #define ETH_P_TIPC	0x88CA		/* TIPC 			*/
+#define ETH_P_FCOE	0x8906		/* Fibre Channel over Ethernet  */
 #define ETH_P_EDSA	0xDADA		/* Ethertype DSA [ NOT AN OFFICIALLY REGISTERED ID ] */
 
 /*
diff --git a/include/linux/major.h b/include/linux/major.h
index 8824945..058ec15 100644
--- a/include/linux/major.h
+++ b/include/linux/major.h
@@ -171,5 +171,6 @@
 #define VIOTAPE_MAJOR		230
 
 #define BLOCK_EXT_MAJOR		259
+#define SCSI_OSD_MAJOR		260	/* open-osd's OSD scsi device */
 
 #endif
diff --git a/include/linux/miscdevice.h b/include/linux/miscdevice.h
index a820f81..beb6ec9 100644
--- a/include/linux/miscdevice.h
+++ b/include/linux/miscdevice.h
@@ -26,6 +26,7 @@
 #define TUN_MINOR		200
 #define MWAVE_MINOR		219	/* ACP/Mwave Modem */
 #define MPT_MINOR		220
+#define MPT2SAS_MINOR		221
 #define HPET_MINOR		228
 #define FUSE_MINOR		229
 #define KVM_MINOR		232
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 1b55952..2e7783f 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -594,6 +594,14 @@
 #define HAVE_NETDEV_POLL
 	void                    (*ndo_poll_controller)(struct net_device *dev);
 #endif
+#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
+	int			(*ndo_fcoe_ddp_setup)(struct net_device *dev,
+						      u16 xid,
+						      struct scatterlist *sgl,
+						      unsigned int sgc);
+	int			(*ndo_fcoe_ddp_done)(struct net_device *dev,
+						     u16 xid);
+#endif
 };
 
 /*
@@ -662,14 +670,17 @@
 #define NETIF_F_GRO		16384	/* Generic receive offload */
 #define NETIF_F_LRO		32768	/* large receive offload */
 
+#define NETIF_F_FCOE_CRC	(1 << 24) /* FCoE CRC32 */
+
 	/* Segmentation offload features */
 #define NETIF_F_GSO_SHIFT	16
-#define NETIF_F_GSO_MASK	0xffff0000
+#define NETIF_F_GSO_MASK	0x00ff0000
 #define NETIF_F_TSO		(SKB_GSO_TCPV4 << NETIF_F_GSO_SHIFT)
 #define NETIF_F_UFO		(SKB_GSO_UDP << NETIF_F_GSO_SHIFT)
 #define NETIF_F_GSO_ROBUST	(SKB_GSO_DODGY << NETIF_F_GSO_SHIFT)
 #define NETIF_F_TSO_ECN		(SKB_GSO_TCP_ECN << NETIF_F_GSO_SHIFT)
 #define NETIF_F_TSO6		(SKB_GSO_TCPV6 << NETIF_F_GSO_SHIFT)
+#define NETIF_F_FSO		(SKB_GSO_FCOE << NETIF_F_GSO_SHIFT)
 
 	/* List of features with software fallbacks. */
 #define NETIF_F_GSO_SOFTWARE	(NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6)
@@ -852,6 +863,11 @@
 	struct dcbnl_rtnl_ops *dcbnl_ops;
 #endif
 
+#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
+	/* max exchange id for FCoE LRO by ddp */
+	unsigned int		fcoe_ddp_xid;
+#endif
+
 #ifdef CONFIG_COMPAT_NET_DEV_OPS
 	struct {
 		int			(*init)(struct net_device *dev);
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index bb1981f..55d6730 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -236,6 +236,8 @@
 	SKB_GSO_TCP_ECN = 1 << 3,
 
 	SKB_GSO_TCPV6 = 1 << 4,
+
+	SKB_GSO_FCOE = 1 << 5,
 };
 
 #if BITS_PER_LONG > 32
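
The netdevice.h and skbuff.h hunks above interlock numerically: NETIF_F_FSO is SKB_GSO_FCOE shifted into the GSO feature range, while NETIF_F_FCOE_CRC sits at bit 24, just above the narrowed NETIF_F_GSO_MASK. A small stand-alone check of that arithmetic, written as ordinary userspace C rather than kernel code (the rationale for narrowing the mask is inferred here, not stated in the diff):

#include <stdio.h>

#define NETIF_F_GSO_SHIFT	16
#define NETIF_F_GSO_MASK	0x00ff0000	/* narrowed from 0xffff0000 */
#define SKB_GSO_FCOE		(1 << 5)
#define NETIF_F_FSO		(SKB_GSO_FCOE << NETIF_F_GSO_SHIFT)
#define NETIF_F_FCOE_CRC	(1 << 24)

int main(void)
{
	/* NETIF_F_FSO = 0x00200000: inside the GSO feature range */
	printf("NETIF_F_FSO      = 0x%08x, in GSO mask: %d\n",
	       NETIF_F_FSO, !!(NETIF_F_FSO & NETIF_F_GSO_MASK));
	/* NETIF_F_FCOE_CRC = 0x01000000: outside it, which is presumably
	 * why the mask shrank so bit 24 is free for a non-GSO feature flag */
	printf("NETIF_F_FCOE_CRC = 0x%08x, in GSO mask: %d\n",
	       NETIF_F_FCOE_CRC, !!(NETIF_F_FCOE_CRC & NETIF_F_GSO_MASK));
	return 0;
}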
diff --git a/include/scsi/fc/fc_fcoe.h b/include/scsi/fc/fc_fcoe.h
index f271d9c..ccb3dbe 100644
--- a/include/scsi/fc/fc_fcoe.h
+++ b/include/scsi/fc/fc_fcoe.h
@@ -25,13 +25,6 @@
  */
 
 /*
- * The FCoE ethertype eventually goes in net/if_ether.h.
- */
-#ifndef ETH_P_FCOE
-#define	ETH_P_FCOE	0x8906		/* FCOE ether type */
-#endif
-
-/*
  * FC_FCOE_OUI hasn't been standardized yet.   XXX TBD.
  */
 #ifndef FC_FCOE_OUI
diff --git a/include/scsi/fc_frame.h b/include/scsi/fc_frame.h
index 04d34a7..5951105 100644
--- a/include/scsi/fc_frame.h
+++ b/include/scsi/fc_frame.h
@@ -54,8 +54,7 @@
 #define fr_eof(fp)	(fr_cb(fp)->fr_eof)
 #define fr_flags(fp)	(fr_cb(fp)->fr_flags)
 #define fr_max_payload(fp)	(fr_cb(fp)->fr_max_payload)
-#define fr_cmd(fp)	(fr_cb(fp)->fr_cmd)
-#define fr_dir(fp)	(fr_cmd(fp)->sc_data_direction)
+#define fr_fsp(fp)	(fr_cb(fp)->fr_fsp)
 #define fr_crc(fp)	(fr_cb(fp)->fr_crc)
 
 struct fc_frame {
@@ -66,7 +65,7 @@
 	struct packet_type  *ptype;
 	struct fc_lport	*fr_dev;	/* transport layer private pointer */
 	struct fc_seq	*fr_seq;	/* for use with exchange manager */
-	struct scsi_cmnd *fr_cmd;	/* for use of scsi command */
+	struct fc_fcp_pkt *fr_fsp;	/* for the corresponding fcp I/O */
 	u32		fr_crc;
 	u16		fr_max_payload;	/* max FC payload */
 	enum fc_sof	fr_sof;		/* start of frame delimiter */
@@ -218,20 +217,6 @@
 	return fc_frame_rctl(fp) == FC_RCTL_DD_UNSOL_CMD;
 }
 
-static inline bool fc_frame_is_read(const struct fc_frame *fp)
-{
-	if (fc_frame_is_cmd(fp) && fr_cmd(fp))
-		return fr_dir(fp) == DMA_FROM_DEVICE;
-	return false;
-}
-
-static inline bool fc_frame_is_write(const struct fc_frame *fp)
-{
-	if (fc_frame_is_cmd(fp) && fr_cmd(fp))
-		return fr_dir(fp) == DMA_TO_DEVICE;
-	return false;
-}
-
 /*
  * Check for leaks.
  * Print the frame header of any currently allocated frame, assuming there
diff --git a/include/scsi/libfc.h b/include/scsi/libfc.h
index a2e126b..a70eafa 100644
--- a/include/scsi/libfc.h
+++ b/include/scsi/libfc.h
@@ -245,6 +245,7 @@
 	 */
 	struct fcp_cmnd cdb_cmd;
 	size_t		xfer_len;
+	u16		xfer_ddp;	/* this xfer is ddped */
 	u32		xfer_contig_end; /* offset of end of contiguous xfer */
 	u16		max_payload;	/* max payload size in bytes */
 
@@ -267,6 +268,15 @@
 	u8		recov_retry;	/* count of recovery retries */
 	struct fc_seq	*recov_seq;	/* sequence for REC or SRR */
 };
+/*
+ * FC_FCP HELPER FUNCTIONS
+ *****************************/
+static inline bool fc_fcp_is_read(const struct fc_fcp_pkt *fsp)
+{
+	if (fsp && fsp->cmd)
+		return fsp->cmd->sc_data_direction == DMA_FROM_DEVICE;
+	return false;
+}
 
 /*
  * Structure and function definitions for managing Fibre Channel Exchanges
@@ -400,6 +410,21 @@
 					void *arg, unsigned int timer_msec);
 
 	/*
+	 * Sets up the DDP context for a given exchange id on the given
+	 * scatterlist if the LLD supports DDP for large receive.
+	 *
+	 * STATUS: OPTIONAL
+	 */
+	int (*ddp_setup)(struct fc_lport *lp, u16 xid,
+			 struct scatterlist *sgl, unsigned int sgc);
+	/*
+	 * Completes the DDP transfer and returns the length of data DDPed
+	 * for the given exchange id.
+	 *
+	 * STATUS: OPTIONAL
+	 */
+	int (*ddp_done)(struct fc_lport *lp, u16 xid);
+	/*
 	 * Send a frame using an existing sequence and exchange.
 	 *
 	 * STATUS: OPTIONAL
@@ -654,6 +679,7 @@
 	u16			link_speed;
 	u16			link_supported_speeds;
 	u16			lro_xid;	/* max xid for fcoe lro */
+	unsigned int		lso_max;	/* max large send size */
 	struct fc_ns_fts	fcts;	        /* FC-4 type masks */
 	struct fc_els_rnid_gen	rnid_gen;	/* RNID information */
 
@@ -821,6 +847,11 @@
 void fc_fcp_destroy(struct fc_lport *);
 
 /*
+ * Set up direct-data placement for this I/O request
+ */
+void fc_fcp_ddp_setup(struct fc_fcp_pkt *fsp, u16 xid);
+
+/*
  * ELS/CT interface
  *****************************/
 /*
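
The new optional ddp_setup()/ddp_done() hooks in the libfc template pair up with the ndo_fcoe_ddp_setup()/ndo_fcoe_ddp_done() netdev ops added to netdevice.h earlier in this patch. A sketch of how an FCoE transport might bridge the two; example_netdev() is a hypothetical stand-in for however the transport resolves its net_device, and the zero returns only mean "nothing was offloaded":

/* Hypothetical: resolve the net_device behind this lport. */
static struct net_device *example_netdev(struct fc_lport *lp);

static int example_ddp_setup(struct fc_lport *lp, u16 xid,
			     struct scatterlist *sgl, unsigned int sgc)
{
	struct net_device *netdev = example_netdev(lp);

	if (netdev->netdev_ops && netdev->netdev_ops->ndo_fcoe_ddp_setup)
		return netdev->netdev_ops->ndo_fcoe_ddp_setup(netdev, xid,
							      sgl, sgc);
	return 0;	/* no DDP context set up for this xid */
}

static int example_ddp_done(struct fc_lport *lp, u16 xid)
{
	struct net_device *netdev = example_netdev(lp);

	if (netdev->netdev_ops && netdev->netdev_ops->ndo_fcoe_ddp_done)
		return netdev->netdev_ops->ndo_fcoe_ddp_done(netdev, xid);
	return 0;	/* no data was directly placed */
}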
diff --git a/include/scsi/libfcoe.h b/include/scsi/libfcoe.h
index 941818f..c41f7d0 100644
--- a/include/scsi/libfcoe.h
+++ b/include/scsi/libfcoe.h
@@ -124,24 +124,6 @@
 	return be16_to_cpu(skb_fc_header(skb)->fh_rx_id);
 }
 
-/* FIXME - DMA_BIDIRECTIONAL ? */
-#define skb_cb(skb)	((struct fcoe_rcv_info *)&((skb)->cb[0]))
-#define skb_cmd(skb)	(skb_cb(skb)->fr_cmd)
-#define skb_dir(skb)	(skb_cmd(skb)->sc_data_direction)
-static inline bool skb_fc_is_read(const struct sk_buff *skb)
-{
-	if (skb_fc_is_cmd(skb) && skb_cmd(skb))
-		return skb_dir(skb) == DMA_FROM_DEVICE;
-	return false;
-}
-
-static inline bool skb_fc_is_write(const struct sk_buff *skb)
-{
-	if (skb_fc_is_cmd(skb) && skb_cmd(skb))
-		return skb_dir(skb) == DMA_TO_DEVICE;
-	return false;
-}
-
 /* libfcoe funcs */
 int fcoe_reset(struct Scsi_Host *shost);
 u64 fcoe_wwn_from_mac(unsigned char mac[MAX_ADDR_LEN],
diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
index 7360e19..7ffaed2 100644
--- a/include/scsi/libiscsi.h
+++ b/include/scsi/libiscsi.h
@@ -45,18 +45,10 @@
 struct iscsi_nopin;
 struct device;
 
-/* #define DEBUG_SCSI */
-#ifdef DEBUG_SCSI
-#define debug_scsi(fmt...) printk(KERN_INFO "iscsi: " fmt)
-#else
-#define debug_scsi(fmt...)
-#endif
-
 #define ISCSI_DEF_XMIT_CMDS_MAX	128	/* must be power of 2 */
 #define ISCSI_MGMT_CMDS_MAX	15
 
-#define ISCSI_DEF_CMD_PER_LUN		32
-#define ISCSI_MAX_CMD_PER_LUN		128
+#define ISCSI_DEF_CMD_PER_LUN	32
 
 /* Task Mgmt states */
 enum {
@@ -326,6 +318,9 @@
 	spinlock_t		lock;
 	int			num_sessions;
 	int			state;
+
+	struct workqueue_struct	*workq;
+	char			workq_name[20];
 };
 
 /*
@@ -351,7 +346,8 @@
 				enum iscsi_host_param param, char *buf);
 extern int iscsi_host_add(struct Scsi_Host *shost, struct device *pdev);
 extern struct Scsi_Host *iscsi_host_alloc(struct scsi_host_template *sht,
-					  int dd_data_size, uint16_t qdepth);
+					  int dd_data_size,
+					  bool xmit_can_sleep);
 extern void iscsi_host_remove(struct Scsi_Host *shost);
 extern void iscsi_host_free(struct Scsi_Host *shost);
 
@@ -382,11 +378,12 @@
 extern int iscsi_conn_bind(struct iscsi_cls_session *, struct iscsi_cls_conn *,
 			   int);
 extern void iscsi_conn_failure(struct iscsi_conn *conn, enum iscsi_err err);
-extern void iscsi_session_failure(struct iscsi_cls_session *cls_session,
+extern void iscsi_session_failure(struct iscsi_session *session,
 				  enum iscsi_err err);
 extern int iscsi_conn_get_param(struct iscsi_cls_conn *cls_conn,
 				enum iscsi_param param, char *buf);
 extern void iscsi_suspend_tx(struct iscsi_conn *conn);
+extern void iscsi_conn_queue_work(struct iscsi_conn *conn);
 
 #define iscsi_conn_printk(prefix, _c, fmt, a...) \
 	iscsi_cls_conn_printk(prefix, ((struct iscsi_conn *)_c)->cls_conn, \
diff --git a/include/scsi/osd_attributes.h b/include/scsi/osd_attributes.h
new file mode 100644
index 0000000..f888a6f
--- /dev/null
+++ b/include/scsi/osd_attributes.h
@@ -0,0 +1,327 @@
+#ifndef __OSD_ATTRIBUTES_H__
+#define __OSD_ATTRIBUTES_H__
+
+#include "osd_protocol.h"
+
+/*
+ * Contains types and constants that define attribute pages and attribute
+ * numbers and their data types.
+ */
+
+#define ATTR_SET(pg, id, l, ptr) \
+	{ .attr_page = pg, .attr_id = id, .len = l, .val_ptr = ptr }
+
+#define ATTR_DEF(pg, id, l) ATTR_SET(pg, id, l, NULL)
+
+/* osd-r10 4.7.3 Attributes pages */
+enum {
+	OSD_APAGE_OBJECT_FIRST		= 0x0,
+	OSD_APAGE_OBJECT_DIRECTORY	= 0,
+	OSD_APAGE_OBJECT_INFORMATION	= 1,
+	OSD_APAGE_OBJECT_QUOTAS		= 2,
+	OSD_APAGE_OBJECT_TIMESTAMP	= 3,
+	OSD_APAGE_OBJECT_COLLECTIONS	= 4,
+	OSD_APAGE_OBJECT_SECURITY	= 5,
+	OSD_APAGE_OBJECT_LAST		= 0x2fffffff,
+
+	OSD_APAGE_PARTITION_FIRST	= 0x30000000,
+	OSD_APAGE_PARTITION_DIRECTORY	= OSD_APAGE_PARTITION_FIRST + 0,
+	OSD_APAGE_PARTITION_INFORMATION = OSD_APAGE_PARTITION_FIRST + 1,
+	OSD_APAGE_PARTITION_QUOTAS	= OSD_APAGE_PARTITION_FIRST + 2,
+	OSD_APAGE_PARTITION_TIMESTAMP	= OSD_APAGE_PARTITION_FIRST + 3,
+	OSD_APAGE_PARTITION_SECURITY	= OSD_APAGE_PARTITION_FIRST + 5,
+	OSD_APAGE_PARTITION_LAST	= 0x5FFFFFFF,
+
+	OSD_APAGE_COLLECTION_FIRST	= 0x60000000,
+	OSD_APAGE_COLLECTION_DIRECTORY	= OSD_APAGE_COLLECTION_FIRST + 0,
+	OSD_APAGE_COLLECTION_INFORMATION = OSD_APAGE_COLLECTION_FIRST + 1,
+	OSD_APAGE_COLLECTION_TIMESTAMP	= OSD_APAGE_COLLECTION_FIRST + 3,
+	OSD_APAGE_COLLECTION_SECURITY	= OSD_APAGE_COLLECTION_FIRST + 5,
+	OSD_APAGE_COLLECTION_LAST	= 0x8FFFFFFF,
+
+	OSD_APAGE_ROOT_FIRST		= 0x90000000,
+	OSD_APAGE_ROOT_DIRECTORY	= OSD_APAGE_ROOT_FIRST + 0,
+	OSD_APAGE_ROOT_INFORMATION	= OSD_APAGE_ROOT_FIRST + 1,
+	OSD_APAGE_ROOT_QUOTAS		= OSD_APAGE_ROOT_FIRST + 2,
+	OSD_APAGE_ROOT_TIMESTAMP	= OSD_APAGE_ROOT_FIRST + 3,
+	OSD_APAGE_ROOT_SECURITY		= OSD_APAGE_ROOT_FIRST + 5,
+	OSD_APAGE_ROOT_LAST		= 0xBFFFFFFF,
+
+	OSD_APAGE_RESERVED_TYPE_FIRST	= 0xC0000000,
+	OSD_APAGE_RESERVED_TYPE_LAST	= 0xEFFFFFFF,
+
+	OSD_APAGE_COMMON_FIRST		= 0xF0000000,
+	OSD_APAGE_COMMON_LAST		= 0xFFFFFFFE,
+
+	OSD_APAGE_REQUEST_ALL		= 0xFFFFFFFF,
+};
+
+/* subcategories of attr pages within each range above */
+enum {
+	OSD_APAGE_STD_FIRST		= 0x0,
+	OSD_APAGE_STD_DIRECTORY		= 0,
+	OSD_APAGE_STD_INFORMATION	= 1,
+	OSD_APAGE_STD_QUOTAS		= 2,
+	OSD_APAGE_STD_TIMESTAMP		= 3,
+	OSD_APAGE_STD_COLLECTIONS	= 4,
+	OSD_APAGE_STD_POLICY_SECURITY	= 5,
+	OSD_APAGE_STD_LAST		= 0x0000007F,
+
+	OSD_APAGE_RESERVED_FIRST	= 0x00000080,
+	OSD_APAGE_RESERVED_LAST		= 0x00007FFF,
+
+	OSD_APAGE_OTHER_STD_FIRST	= 0x00008000,
+	OSD_APAGE_OTHER_STD_LAST	= 0x0000EFFF,
+
+	OSD_APAGE_PUBLIC_FIRST		= 0x0000F000,
+	OSD_APAGE_PUBLIC_LAST		= 0x0000FFFF,
+
+	OSD_APAGE_APP_DEFINED_FIRST	= 0x00010000,
+	OSD_APAGE_APP_DEFINED_LAST	= 0x1FFFFFFF,
+
+	OSD_APAGE_VENDOR_SPECIFIC_FIRST	= 0x20000000,
+	OSD_APAGE_VENDOR_SPECIFIC_LAST	= 0x2FFFFFFF,
+};
+
+enum {
+	OSD_ATTR_PAGE_IDENTIFICATION = 0, /* in all pages 40 bytes */
+};
+
+struct page_identification {
+	u8 vendor_identification[8];
+	u8 page_identification[32];
+}  __packed;
+
+struct osd_attr_page_header {
+	__be32 page_number;
+	__be32 page_length;
+} __packed;
+
+/* 7.1.2.8 Root Information attributes page (OSD_APAGE_ROOT_INFORMATION) */
+enum {
+	OSD_ATTR_RI_OSD_SYSTEM_ID            = 0x3,   /* 20       */
+	OSD_ATTR_RI_VENDOR_IDENTIFICATION    = 0x4,   /* 8        */
+	OSD_ATTR_RI_PRODUCT_IDENTIFICATION   = 0x5,   /* 16       */
+	OSD_ATTR_RI_PRODUCT_MODEL            = 0x6,   /* 32       */
+	OSD_ATTR_RI_PRODUCT_REVISION_LEVEL   = 0x7,   /* 4        */
+	OSD_ATTR_RI_PRODUCT_SERIAL_NUMBER    = 0x8,   /* variable */
+	OSD_ATTR_RI_OSD_NAME                 = 0x9,   /* variable */
+	OSD_ATTR_RI_TOTAL_CAPACITY           = 0x80,  /* 8        */
+	OSD_ATTR_RI_USED_CAPACITY            = 0x81,  /* 8        */
+	OSD_ATTR_RI_NUMBER_OF_PARTITIONS     = 0xC0,  /* 8        */
+	OSD_ATTR_RI_CLOCK                    = 0x100, /* 6        */
+};
+/* Root_Information_attributes_page does not have a get_page structure */
+
+/* 7.1.2.9 Partition Information attributes page
+ * (OSD_APAGE_PARTITION_INFORMATION)
+ */
+enum {
+	OSD_ATTR_PI_PARTITION_ID            = 0x1,     /* 8        */
+	OSD_ATTR_PI_USERNAME                = 0x9,     /* variable */
+	OSD_ATTR_PI_USED_CAPACITY           = 0x81,    /* 8        */
+	OSD_ATTR_PI_NUMBER_OF_OBJECTS       = 0xC1,    /* 8        */
+};
+/* Partition Information attributes page does not have a get_page structure */
+
+/* 7.1.2.10 Collection Information attributes page
+ * (OSD_APAGE_COLLECTION_INFORMATION)
+ */
+enum {
+	OSD_ATTR_CI_PARTITION_ID           = 0x1,       /* 8        */
+	OSD_ATTR_CI_COLLECTION_OBJECT_ID   = 0x2,       /* 8        */
+	OSD_ATTR_CI_USERNAME               = 0x9,       /* variable */
+	OSD_ATTR_CI_USED_CAPACITY          = 0x81,      /* 8        */
+};
+/* Collection Information attributes page does not have a get_page structure */
+
+/* 7.1.2.11 User Object Information attributes page
+ * (OSD_APAGE_OBJECT_INFORMATION)
+ */
+enum {
+	OSD_ATTR_OI_PARTITION_ID         = 0x1,       /* 8        */
+	OSD_ATTR_OI_OBJECT_ID            = 0x2,       /* 8        */
+	OSD_ATTR_OI_USERNAME             = 0x9,       /* variable */
+	OSD_ATTR_OI_USED_CAPACITY        = 0x81,      /* 8        */
+	OSD_ATTR_OI_LOGICAL_LENGTH       = 0x82,      /* 8        */
+};
+/* Object Information attributes page does not have a get_page structure */
+
+/* 7.1.2.12 Root Quotas attributes page (OSD_APAGE_ROOT_QUOTAS) */
+enum {
+	OSD_ATTR_RQ_DEFAULT_MAXIMUM_USER_OBJECT_LENGTH     = 0x1,      /* 8  */
+	OSD_ATTR_RQ_PARTITION_CAPACITY_QUOTA               = 0x10001,  /* 8  */
+	OSD_ATTR_RQ_PARTITION_OBJECT_COUNT                 = 0x10002,  /* 8  */
+	OSD_ATTR_RQ_PARTITION_COLLECTIONS_PER_USER_OBJECT  = 0x10081,  /* 4  */
+	OSD_ATTR_RQ_PARTITION_COUNT                        = 0x20002,  /* 8  */
+};
+
+struct Root_Quotas_attributes_page {
+	struct osd_attr_page_header hdr; /* id=R+2, size=0x24 */
+	__be64 default_maximum_user_object_length;
+	__be64 partition_capacity_quota;
+	__be64 partition_object_count;
+	__be64 partition_collections_per_user_object;
+	__be64 partition_count;
+}  __packed;
+
+/* 7.1.2.13 Partition Quotas attributes page (OSD_APAGE_PARTITION_QUOTAS)*/
+enum {
+	OSD_ATTR_PQ_DEFAULT_MAXIMUM_USER_OBJECT_LENGTH  = 0x1,        /* 8 */
+	OSD_ATTR_PQ_CAPACITY_QUOTA                      = 0x10001,    /* 8 */
+	OSD_ATTR_PQ_OBJECT_COUNT                        = 0x10002,    /* 8 */
+	OSD_ATTR_PQ_COLLECTIONS_PER_USER_OBJECT         = 0x10081,    /* 4 */
+};
+
+struct Partition_Quotas_attributes_page {
+	struct osd_attr_page_header hdr; /* id=P+2, size=0x1C */
+	__be64 default_maximum_user_object_length;
+	__be64 capacity_quota;
+	__be64 object_count;
+	__be64 collections_per_user_object;
+}  __packed;
+
+/* 7.1.2.14 User Object Quotas attributes page (OSD_APAGE_OBJECT_QUOTAS) */
+enum {
+	OSD_ATTR_OQ_MAXIMUM_LENGTH  = 0x1,        /* 8 */
+};
+
+struct Object_Quotas_attributes_page {
+	struct osd_attr_page_header hdr; /* id=U+2, size=0x8 */
+	__be64 maximum_length;
+}  __packed;
+
+/* 7.1.2.15 Root Timestamps attributes page (OSD_APAGE_ROOT_TIMESTAMP) */
+enum {
+	OSD_ATTR_RT_ATTRIBUTES_ACCESSED_TIME  = 0x2,        /* 6 */
+	OSD_ATTR_RT_ATTRIBUTES_MODIFIED_TIME  = 0x3,        /* 6 */
+	OSD_ATTR_RT_TIMESTAMP_BYPASS          = 0xFFFFFFFE, /* 1 */
+};
+
+struct root_timestamps_attributes_page {
+	struct osd_attr_page_header hdr; /* id=R+3, size=0xD */
+	struct osd_timestamp attributes_accessed_time;
+	struct osd_timestamp attributes_modified_time;
+	u8 timestamp_bypass;
+}  __packed;
+
+/* 7.1.2.16 Partition Timestamps attributes page
+ * (OSD_APAGE_PARTITION_TIMESTAMP)
+ */
+enum {
+	OSD_ATTR_PT_CREATED_TIME              = 0x1,        /* 6 */
+	OSD_ATTR_PT_ATTRIBUTES_ACCESSED_TIME  = 0x2,        /* 6 */
+	OSD_ATTR_PT_ATTRIBUTES_MODIFIED_TIME  = 0x3,        /* 6 */
+	OSD_ATTR_PT_DATA_ACCESSED_TIME        = 0x4,        /* 6 */
+	OSD_ATTR_PT_DATA_MODIFIED_TIME        = 0x5,        /* 6 */
+	OSD_ATTR_PT_TIMESTAMP_BYPASS          = 0xFFFFFFFE, /* 1 */
+};
+
+struct partition_timestamps_attributes_page {
+	struct osd_attr_page_header hdr; /* id=P+3, size=0x1F */
+	struct osd_timestamp created_time;
+	struct osd_timestamp attributes_accessed_time;
+	struct osd_timestamp attributes_modified_time;
+	struct osd_timestamp data_accessed_time;
+	struct osd_timestamp data_modified_time;
+	u8 timestamp_bypass;
+}  __packed;
+
+/* 7.1.2.17/18 Collection/Object Timestamps attributes page
+ * (OSD_APAGE_COLLECTION_TIMESTAMP/OSD_APAGE_OBJECT_TIMESTAMP)
+ */
+enum {
+	OSD_ATTR_OT_CREATED_TIME              = 0x1,        /* 6 */
+	OSD_ATTR_OT_ATTRIBUTES_ACCESSED_TIME  = 0x2,        /* 6 */
+	OSD_ATTR_OT_ATTRIBUTES_MODIFIED_TIME  = 0x3,        /* 6 */
+	OSD_ATTR_OT_DATA_ACCESSED_TIME        = 0x4,        /* 6 */
+	OSD_ATTR_OT_DATA_MODIFIED_TIME        = 0x5,        /* 6 */
+};
+
+/* same for collection */
+struct object_timestamps_attributes_page {
+	struct osd_attr_page_header hdr; /* id=C+3/3, size=0x1E */
+	struct osd_timestamp created_time;
+	struct osd_timestamp attributes_accessed_time;
+	struct osd_timestamp attributes_modified_time;
+	struct osd_timestamp data_accessed_time;
+	struct osd_timestamp data_modified_time;
+}  __packed;
+
+/* 7.1.2.19 Collections attributes page */
+/* TBD */
+
+/* 7.1.2.20 Root Policy/Security attributes page (OSD_APAGE_ROOT_SECURITY) */
+enum {
+	OSD_ATTR_RS_DEFAULT_SECURITY_METHOD           = 0x1,       /* 1      */
+	OSD_ATTR_RS_OLDEST_VALID_NONCE_LIMIT          = 0x2,       /* 6      */
+	OSD_ATTR_RS_NEWEST_VALID_NONCE_LIMIT          = 0x3,       /* 6      */
+	OSD_ATTR_RS_PARTITION_DEFAULT_SECURITY_METHOD = 0x6,       /* 1      */
+	OSD_ATTR_RS_SUPPORTED_SECURITY_METHODS        = 0x7,       /* 2      */
+	OSD_ATTR_RS_ADJUSTABLE_CLOCK                  = 0x9,       /* 6      */
+	OSD_ATTR_RS_MASTER_KEY_IDENTIFIER             = 0x7FFD,    /* 0 or 7 */
+	OSD_ATTR_RS_ROOT_KEY_IDENTIFIER               = 0x7FFE,    /* 0 or 7 */
+	OSD_ATTR_RS_SUPPORTED_INTEGRITY_ALGORITHM_0   = 0x80000000,/* 1,(x16)*/
+	OSD_ATTR_RS_SUPPORTED_DH_GROUP_0              = 0x80000010,/* 1,(x16)*/
+};
+
+struct root_security_attributes_page {
+	struct osd_attr_page_header hdr; /* id=R+5, size=0x3F */
+	u8 default_security_method;
+	u8 partition_default_security_method;
+	__be16 supported_security_methods;
+	u8 mki_valid_rki_valid;
+	struct osd_timestamp oldest_valid_nonce_limit;
+	struct osd_timestamp newest_valid_nonce_limit;
+	struct osd_timestamp adjustable_clock;
+	u8 master_key_identifier[32-25];
+	u8 root_key_identifier[39-32];
+	u8 supported_integrity_algorithm[16];
+	u8 supported_dh_group[16];
+}  __packed;
+
+/* 7.1.2.21 Partition Policy/Security attributes page
+ * (OSD_APAGE_PARTITION_SECURITY)
+ */
+enum {
+	OSD_ATTR_PS_DEFAULT_SECURITY_METHOD        = 0x1,        /* 1      */
+	OSD_ATTR_PS_OLDEST_VALID_NONCE             = 0x2,        /* 6      */
+	OSD_ATTR_PS_NEWEST_VALID_NONCE             = 0x3,        /* 6      */
+	OSD_ATTR_PS_REQUEST_NONCE_LIST_DEPTH       = 0x4,        /* 2      */
+	OSD_ATTR_PS_FROZEN_WORKING_KEY_BIT_MASK    = 0x5,        /* 2      */
+	OSD_ATTR_PS_PARTITION_KEY_IDENTIFIER       = 0x7FFF,     /* 0 or 7 */
+	OSD_ATTR_PS_WORKING_KEY_IDENTIFIER_FIRST   = 0x8000,     /* 0 or 7 */
+	OSD_ATTR_PS_WORKING_KEY_IDENTIFIER_LAST    = 0x800F,     /* 0 or 7 */
+	OSD_ATTR_PS_POLICY_ACCESS_TAG              = 0x40000001, /* 4      */
+	OSD_ATTR_PS_USER_OBJECT_POLICY_ACCESS_TAG  = 0x40000002, /* 4      */
+};
+
+struct partition_security_attributes_page {
+	struct osd_attr_page_header hdr; /* id=p+5, size=0x8f */
+	u8 reserved[3];
+	u8 default_security_method;
+	struct osd_timestamp oldest_valid_nonce;
+	struct osd_timestamp newest_valid_nonce;
+	__be16 request_nonce_list_depth;
+	__be16 frozen_working_key_bit_mask;
+	__be32 policy_access_tag;
+	__be32 user_object_policy_access_tag;
+	u8 pki_valid;
+	__be16 wki_00_0f_vld;
+	struct osd_key_identifier partition_key_identifier;
+	struct osd_key_identifier working_key_identifiers[16];
+}  __packed;
+
+/* 7.1.2.22/23 Collection/Object Policy-Security attributes page
+ * (OSD_APAGE_COLLECTION_SECURITY/OSD_APAGE_OBJECT_SECURITY)
+ */
+enum {
+	OSD_ATTR_OS_POLICY_ACCESS_TAG              = 0x40000001, /* 4      */
+};
+
+struct object_security_attributes_page {
+	struct osd_attr_page_header hdr; /* id=C+5/5, size=4 */
+	__be32 policy_access_tag;
+}  __packed;
+
+#endif /*ndef __OSD_ATTRIBUTES_H__*/
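
As a usage illustration for the ATTR_SET()/ATTR_DEF() helpers and the page/attribute-id constants above (struct osd_attr itself comes from osd_types.h, which is not shown here; the particular page/attribute choices are arbitrary examples):

static void example_describe_attrs(void)
{
	__be64 quota = cpu_to_be64(1 << 20);

	/* describe setting the partition capacity quota (8-byte value) */
	struct osd_attr set_it = ATTR_SET(OSD_APAGE_PARTITION_QUOTAS,
					  OSD_ATTR_PQ_CAPACITY_QUOTA,
					  sizeof(quota), &quota);

	/* describe fetching a user object's logical length (8 bytes) */
	struct osd_attr get_it = ATTR_DEF(OSD_APAGE_OBJECT_INFORMATION,
					  OSD_ATTR_OI_LOGICAL_LENGTH, 8);

	/* set_it/get_it would then be handed to the osd_req_add_*_attr_list()
	 * helpers declared in osd_initiator.h */
	(void)set_it;
	(void)get_it;
}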
diff --git a/include/scsi/osd_initiator.h b/include/scsi/osd_initiator.h
new file mode 100644
index 0000000..b24d961
--- /dev/null
+++ b/include/scsi/osd_initiator.h
@@ -0,0 +1,433 @@
+/*
+ * osd_initiator.h - OSD initiator API definition
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ */
+#ifndef __OSD_INITIATOR_H__
+#define __OSD_INITIATOR_H__
+
+#include "osd_protocol.h"
+#include "osd_types.h"
+
+#include <linux/blkdev.h>
+
+/* Note: "NI" in comments below means "Not Implemented yet" */
+
+/* Code configuration:
+ * #undef this if you *don't* want OSD v1 support at runtime.
+ * If #defined, the initiator dynamically configures itself to encode OSD v1
+ * CDBs when the target is detected to be OSD v1 only.
+ * OSD v2-only commands, options, and attributes are ignored if the target
+ * is v1 only.
+ * If #defined, this results in somewhat bigger (and perhaps slower) code.
+ * Q: Should this be CONFIG_SCSI_OSD_VER1_SUPPORT and set from Kconfig?
+ */
+#define OSD_VER1_SUPPORT y
+
+enum osd_std_version {
+	OSD_VER_NONE = 0,
+	OSD_VER1 = 1,
+	OSD_VER2 = 2,
+};
+
+/*
+ * Object-based Storage Device.
+ * This object represents an OSD device.
+ * It is not a full linux device in any way. It is only
+ * a place to hang resources associated with a Linux
+ * request Q and some default properties.
+ */
+struct osd_dev {
+	struct scsi_device *scsi_device;
+	unsigned def_timeout;
+
+#ifdef OSD_VER1_SUPPORT
+	enum osd_std_version version;
+#endif
+};
+
+/* Retrieve/return osd_dev(s) for use by Kernel clients */
+struct osd_dev *osduld_path_lookup(const char *dev_name); /*Use IS_ERR/ERR_PTR*/
+void osduld_put_device(struct osd_dev *od);
+
+/* Add/remove test ioctls from external modules */
+typedef int (do_test_fn)(struct osd_dev *od, unsigned cmd, unsigned long arg);
+int osduld_register_test(unsigned ioctl, do_test_fn *do_test);
+void osduld_unregister_test(unsigned ioctl);
+
+/* These are called by uld at probe time */
+void osd_dev_init(struct osd_dev *od, struct scsi_device *scsi_device);
+void osd_dev_fini(struct osd_dev *od);
+
+/* some hi level device operations */
+int osd_auto_detect_ver(struct osd_dev *od, void *caps);    /* GFP_KERNEL */
+
+/* we might want to use function vector in the future */
+static inline void osd_dev_set_ver(struct osd_dev *od, enum osd_std_version v)
+{
+#ifdef OSD_VER1_SUPPORT
+	od->version = v;
+#endif
+}
+
+struct osd_request;
+typedef void (osd_req_done_fn)(struct osd_request *or, void *private);
+
+struct osd_request {
+	struct osd_cdb cdb;
+	struct osd_data_out_integrity_info out_data_integ;
+	struct osd_data_in_integrity_info in_data_integ;
+
+	struct osd_dev *osd_dev;
+	struct request *request;
+
+	struct _osd_req_data_segment {
+		void *buff;
+		unsigned alloc_size; /* 0 here means: don't call kfree */
+		unsigned total_bytes;
+	} set_attr, enc_get_attr, get_attr;
+
+	struct _osd_io_info {
+		struct bio *bio;
+		u64 total_bytes;
+		struct request *req;
+		struct _osd_req_data_segment *last_seg;
+		u8 *pad_buff;
+	} out, in;
+
+	gfp_t alloc_flags;
+	unsigned timeout;
+	unsigned retries;
+	u8 sense[OSD_MAX_SENSE_LEN];
+	enum osd_attributes_mode attributes_mode;
+
+	osd_req_done_fn *async_done;
+	void *async_private;
+	int async_error;
+};
+
+/* OSD Version control */
+static inline bool osd_req_is_ver1(struct osd_request *or)
+{
+#ifdef OSD_VER1_SUPPORT
+	return or->osd_dev->version == OSD_VER1;
+#else
+	return false;
+#endif
+}
+
+/*
+ * How to use the osd library:
+ *
+ * osd_start_request
+ *	Allocates a request.
+ *
+ * osd_req_*
+ *	Call one of, to encode the desired operation.
+ *
+ * osd_add_{get,set}_attr
+ *	Optionally add attributes to the CDB, list or page mode.
+ *
+ * osd_finalize_request
+ *	Computes final data out/in offsets and signs the request,
+ *	making it ready for execution.
+ *
+ * osd_execute_request
+ *	May be called to execute it through the block layer. Otherwise, submit
+ *	the associated block request in some other way.
+ *
+ * After execution:
+ * osd_req_decode_sense
+ *	Decodes sense information to verify execution results.
+ *
+ * osd_req_decode_get_attr
+ *	Retrieve osd_add_get_attr_list() values if used.
+ *
+ * osd_end_request
+ *	Must be called to deallocate the request.
+ */
+
+/**
+ * osd_start_request - Allocate and initialize an osd_request
+ *
+ * @osd_dev:    OSD device that holds the scsi-device and default values
+ *              that the request is associated with.
+ * @gfp:        The allocation flags to use for request allocation and all
+ *              subsequent allocations. This is stored in
+ *              osd_request->alloc_flags and can be changed by the user later.
+ *
+ * Allocate osd_request and initialize all members to the
+ * default/initial state.
+ */
+struct osd_request *osd_start_request(struct osd_dev *od, gfp_t gfp);
+
+enum osd_req_options {
+	OSD_REQ_FUA = 0x08,	/* Force Unit Access */
+	OSD_REQ_DPO = 0x10,	/* Disable Page Out */
+
+	OSD_REQ_BYPASS_TIMESTAMPS = 0x80,
+};
+
+/**
+ * osd_finalize_request - Sign request and prepare request for execution
+ *
+ * @or:		osd_request to prepare
+ * @options:	combination of osd_req_options bit flags or 0.
+ * @cap:	A pointer to an OSD_CAP_LEN-byte buffer received from the
+ *              security manager as the capabilities for this cdb.
+ * @cap_key:	The cryptographic key used to sign the cdb/data. Can be null
+ *              if NOSEC is used.
+ *
+ * The actual request and bios are only allocated here, as are the get_attr
+ * buffers that will receive the returned attributes. Copies @cap into the
+ * cdb and signs the cdb/data with @cap_key.
+ */
+int osd_finalize_request(struct osd_request *or,
+	u8 options, const void *cap, const u8 *cap_key);
+
+/**
+ * osd_execute_request - Execute the request synchronously through block-layer
+ *
+ * @or:		osd_request to be executed
+ *
+ * Calls blk_execute_rq to queue the command and waits for completion.
+ */
+int osd_execute_request(struct osd_request *or);
+
+/**
+ * osd_execute_request_async - Execute the request without waiting.
+ *
+ * @or:                      - osd_request to be executed
+ * @done: (Optional)         - Called at end of execution
+ * @private:                 - Will be passed to @done function
+ *
+ * Calls blk_execute_rq_nowait to queue the command. When execution is done,
+ * optionally calls @done with @private as parameter. @or->async_error will
+ * hold the return code.
+ */
+int osd_execute_request_async(struct osd_request *or,
+	osd_req_done_fn *done, void *private);
+
+/**
+ * osd_req_decode_sense_full - Decode sense information after execution.
+ *
+ * @or:           - osd_request to examine
+ * @osi           - Receives a more detailed error report (optional).
+ * @silent        - Do not print to dmesg (even if enabled)
+ * @bad_obj_list  - Some commands act on multiple objects. Failed objects will
+ *                  be received here (optional)
+ * @max_obj       - Size of @bad_obj_list.
+ * @bad_attr_list - List of failing attributes (optional)
+ * @max_attr      - Size of @bad_attr_list.
+ *
+ * After execution, the sense data and return code can be analyzed with this
+ * function. The return code is the final disposition on the error, so it is
+ * possible that a CHECK_CONDITION was returned from the target but this
+ * returns NO_ERROR, for example on recovered errors. All parameters are
+ * optional if the caller does not need any returned information.
+ * Note: This function will also dump the error to dmesg according to the
+ * SCSI_OSD_DPRINT_SENSE Kconfig setting. Set @silent if you know the
+ * command would routinely fail, so as not to spam dmesg.
+ */
+struct osd_sense_info {
+	int key;		/* one of enum scsi_sense_keys */
+	int additional_code;	/* enum osd_additional_sense_codes */
+	union { /* Sense specific information */
+		u16 sense_info;
+		u16 cdb_field_offset; 	/* scsi_invalid_field_in_cdb */
+	};
+	union { /* Command specific information */
+		u64 command_info;
+	};
+
+	u32 not_initiated_command_functions; /* osd_command_functions_bits */
+	u32 completed_command_functions; /* osd_command_functions_bits */
+	struct osd_obj_id obj;
+	struct osd_attr attr;
+};
+
+int osd_req_decode_sense_full(struct osd_request *or,
+	struct osd_sense_info *osi, bool silent,
+	struct osd_obj_id *bad_obj_list, int max_obj,
+	struct osd_attr *bad_attr_list, int max_attr);
+
+static inline int osd_req_decode_sense(struct osd_request *or,
+	struct osd_sense_info *osi)
+{
+	return osd_req_decode_sense_full(or, osi, false, NULL, 0, NULL, 0);
+}
+
+/**
+ * osd_end_request - return osd_request to free store
+ *
+ * @or:		osd_request to free
+ *
+ * Deallocate all osd_request resources (struct req's, BIOs, buffers, etc.)
+ */
+void osd_end_request(struct osd_request *or);
+
+/*
+ * CDB Encoding
+ *
+ * Note: call only one of the following methods.
+ */
+
+/*
+ * Device commands
+ */
+void osd_req_set_master_seed_xchg(struct osd_request *or, ...);/* NI */
+void osd_req_set_master_key(struct osd_request *or, ...);/* NI */
+
+void osd_req_format(struct osd_request *or, u64 tot_capacity);
+
+/* list all partitions
+ * @list header must be initialized to zero on first run.
+ *
+ * Call osd_is_obj_list_done() to find if we got the complete list.
+ */
+int osd_req_list_dev_partitions(struct osd_request *or,
+	osd_id initial_id, struct osd_obj_id_list *list, unsigned nelem);
+
+void osd_req_flush_obsd(struct osd_request *or,
+	enum osd_options_flush_scope_values);
+
+void osd_req_perform_scsi_command(struct osd_request *or,
+	const u8 *cdb, ...);/* NI */
+void osd_req_task_management(struct osd_request *or, ...);/* NI */
+
+/*
+ * Partition commands
+ */
+void osd_req_create_partition(struct osd_request *or, osd_id partition);
+void osd_req_remove_partition(struct osd_request *or, osd_id partition);
+
+void osd_req_set_partition_key(struct osd_request *or,
+	osd_id partition, u8 new_key_id[OSD_CRYPTO_KEYID_SIZE],
+	u8 seed[OSD_CRYPTO_SEED_SIZE]);/* NI */
+
+/* list all collections in the partition
+ * @list header must be initialized to zero on first run.
+ *
+ * Call osd_is_obj_list_done() to find if we got the complete list.
+ */
+int osd_req_list_partition_collections(struct osd_request *or,
+	osd_id partition, osd_id initial_id, struct osd_obj_id_list *list,
+	unsigned nelem);
+
+/* list all objects in the partition
+ * @list header must be initialized to zero on first run.
+ *
+ * Call osd_is_obj_list_done() to find if we got the complete list.
+ */
+int osd_req_list_partition_objects(struct osd_request *or,
+	osd_id partition, osd_id initial_id, struct osd_obj_id_list *list,
+	unsigned nelem);
+
+void osd_req_flush_partition(struct osd_request *or,
+	osd_id partition, enum osd_options_flush_scope_values);
+
+/*
+ * Collection commands
+ */
+void osd_req_create_collection(struct osd_request *or,
+	const struct osd_obj_id *);/* NI */
+void osd_req_remove_collection(struct osd_request *or,
+	const struct osd_obj_id *);/* NI */
+
+/* list all objects in the collection */
+int osd_req_list_collection_objects(struct osd_request *or,
+	const struct osd_obj_id *, osd_id initial_id,
+	struct osd_obj_id_list *list, unsigned nelem);
+
+/* V2 only filtered list of objects in the collection */
+void osd_req_query(struct osd_request *or, ...);/* NI */
+
+void osd_req_flush_collection(struct osd_request *or,
+	const struct osd_obj_id *, enum osd_options_flush_scope_values);
+
+void osd_req_get_member_attrs(struct osd_request *or, ...);/* V2-only NI */
+void osd_req_set_member_attrs(struct osd_request *or, ...);/* V2-only NI */
+
+/*
+ * Object commands
+ */
+void osd_req_create_object(struct osd_request *or, struct osd_obj_id *);
+void osd_req_remove_object(struct osd_request *or, struct osd_obj_id *);
+
+void osd_req_write(struct osd_request *or,
+	const struct osd_obj_id *, struct bio *data_out, u64 offset);
+void osd_req_append(struct osd_request *or,
+	const struct osd_obj_id *, struct bio *data_out);/* NI */
+void osd_req_create_write(struct osd_request *or,
+	const struct osd_obj_id *, struct bio *data_out, u64 offset);/* NI */
+void osd_req_clear(struct osd_request *or,
+	const struct osd_obj_id *, u64 offset, u64 len);/* NI */
+void osd_req_punch(struct osd_request *or,
+	const struct osd_obj_id *, u64 offset, u64 len);/* V2-only NI */
+
+void osd_req_flush_object(struct osd_request *or,
+	const struct osd_obj_id *, enum osd_options_flush_scope_values,
+	/*V2*/ u64 offset, /*V2*/ u64 len);
+
+void osd_req_read(struct osd_request *or,
+	const struct osd_obj_id *, struct bio *data_in, u64 offset);
+
+/*
+ * Root/Partition/Collection/Object Attributes commands
+ */
+
+/* get before set */
+void osd_req_get_attributes(struct osd_request *or, const struct osd_obj_id *);
+
+/* set before get */
+void osd_req_set_attributes(struct osd_request *or, const struct osd_obj_id *);
+
+/*
+ * Attributes appended to most commands
+ */
+
+/* Attributes List mode (or V2 CDB) */
+  /*
+   * TODO: In ver2 if at finalize time only one attr was set and no gets,
+   * then the Attributes CDB mode is used automatically to save IO.
+   */
+
+/* set a list of attributes. */
+int osd_req_add_set_attr_list(struct osd_request *or,
+	const struct osd_attr *, unsigned nelem);
+
+/* get a list of attributes */
+int osd_req_add_get_attr_list(struct osd_request *or,
+	const struct osd_attr *, unsigned nelem);
+
+/*
+ * Attributes list decoding
+ * Must be called after osd_request.request was executed
+ * It is called in a loop to decode the returned get_attr
+ * (see osd_add_get_attr)
+ */
+int osd_req_decode_get_attr_list(struct osd_request *or,
+	struct osd_attr *, int *nelem, void **iterator);
+
+/* Attributes Page mode */
+
+/*
+ * Read an attribute page and optionally set one attribute
+ *
+ * Retrieves the attribute page directly to a user buffer.
+ * @attr_page_data shall stay valid until end of execution.
+ * See osd_attributes.h for common page structures
+ */
+int osd_req_add_get_attr_page(struct osd_request *or,
+	u32 page_id, void *attr_page_data, unsigned max_page_len,
+	const struct osd_attr *set_one);
+
+#endif /* __OSD_INITIATOR_H__ */
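
Putting the pieces of this header together, here is a minimal sketch of the request lifecycle that the "How to use the osd library" comment above describes, issuing a FORMAT OSD; it assumes @caps is an OSD_CAP_LEN capability buffer prepared elsewhere and that the caller does not need detailed sense output (so NULL is passed for the osd_sense_info):

static int example_format_osd(struct osd_dev *od, const void *caps,
			      u64 capacity)
{
	struct osd_request *or;
	int ret;

	or = osd_start_request(od, GFP_KERNEL);		/* allocate            */
	if (!or)
		return -ENOMEM;

	osd_req_format(or, capacity);			/* encode one command  */

	ret = osd_finalize_request(or, 0, caps, NULL);	/* sign; NOSEC, no key */
	if (!ret)
		ret = osd_execute_request(or);		/* run synchronously   */
	if (!ret)
		ret = osd_req_decode_sense(or, NULL);	/* check the outcome   */

	osd_end_request(or);				/* free everything     */
	return ret;
}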
diff --git a/include/scsi/osd_protocol.h b/include/scsi/osd_protocol.h
new file mode 100644
index 0000000..cd3cbf7
--- /dev/null
+++ b/include/scsi/osd_protocol.h
@@ -0,0 +1,579 @@
+/*
+ * osd_protocol.h - OSD T10 standard C definitions.
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ * This file contains types and constants that are defined by the protocol
+ * Note: All names and symbols are taken from the OSD standard's text.
+ */
+#ifndef __OSD_PROTOCOL_H__
+#define __OSD_PROTOCOL_H__
+
+#include <linux/types.h>
+#include <asm/unaligned.h>
+#include <scsi/scsi.h>
+
+enum {
+	OSDv1_ADDITIONAL_CDB_LENGTH = 192,
+	OSDv1_TOTAL_CDB_LEN = OSDv1_ADDITIONAL_CDB_LENGTH + 8,
+	OSDv1_CAP_LEN = 80,
+	/* Latest supported version */
+/* 	OSD_ADDITIONAL_CDB_LENGTH = 216,*/
+	OSD_ADDITIONAL_CDB_LENGTH =
+		OSDv1_ADDITIONAL_CDB_LENGTH, /* FIXME: Pete rev-001 sup */
+	OSD_TOTAL_CDB_LEN = OSD_ADDITIONAL_CDB_LENGTH + 8,
+/* 	OSD_CAP_LEN = 104,*/
+	OSD_CAP_LEN = OSDv1_CAP_LEN,/* FIXME: Pete rev-001 sup */
+
+	OSD_SYSTEMID_LEN = 20,
+	OSD_CRYPTO_KEYID_SIZE = 20,
+	/*FIXME: OSDv2_CRYPTO_KEYID_SIZE = 32,*/
+	OSD_CRYPTO_SEED_SIZE = 4,
+	OSD_CRYPTO_NONCE_SIZE = 12,
+	OSD_MAX_SENSE_LEN = 252, /* from SPC-3 */
+
+	OSD_PARTITION_FIRST_ID = 0x10000,
+	OSD_OBJECT_FIRST_ID = 0x10000,
+};
+
+/* (osd-r10 5.2.4)
+ * osd2r03: 5.2.3 Caching control bits
+ */
+enum osd_options_byte {
+	OSD_CDB_FUA = 0x08,	/* Force Unit Access */
+	OSD_CDB_DPO = 0x10,	/* Disable Page Out */
+};
+
+/*
+ * osd2r03: 5.2.5 Isolation.
+ * First 3 bits, V2-only.
+ * Also for attr 110h "default isolation method" at Root Information page
+ */
+enum osd_options_byte_isolation {
+	OSD_ISOLATION_DEFAULT = 0,
+	OSD_ISOLATION_NONE = 1,
+	OSD_ISOLATION_STRICT = 2,
+	OSD_ISOLATION_RANGE = 4,
+	OSD_ISOLATION_FUNCTIONAL = 5,
+	OSD_ISOLATION_VENDOR = 7,
+};
+
+/* (osd-r10: 6.7)
+ * osd2r03: 6.8 FLUSH, FLUSH COLLECTION, FLUSH OSD, FLUSH PARTITION
+ */
+enum osd_options_flush_scope_values {
+	OSD_CDB_FLUSH_ALL = 0,
+	OSD_CDB_FLUSH_ATTR_ONLY = 1,
+
+	OSD_CDB_FLUSH_ALL_RECURSIVE = 2,
+	/* V2-only */
+	OSD_CDB_FLUSH_ALL_RANGE = 2,
+};
+
+/* osd2r03: 5.2.10 Timestamps control */
+enum {
+	OSD_CDB_NORMAL_TIMESTAMPS = 0,
+	OSD_CDB_BYPASS_TIMESTAMPS = 0x7f,
+};
+
+/* (osd-r10: 5.2.2.1)
+ * osd2r03: 5.2.4.1 Get and set attributes CDB format selection
+ *	2 bits at second nibble of command_specific_options byte
+ */
+enum osd_attributes_mode {
+	/* V2-only */
+	OSD_CDB_SET_ONE_ATTR = 0x10,
+
+	OSD_CDB_GET_ATTR_PAGE_SET_ONE = 0x20,
+	OSD_CDB_GET_SET_ATTR_LISTS = 0x30,
+
+	OSD_CDB_GET_SET_ATTR_MASK = 0x30,
+};
+
+/* (osd-r10: 4.12.5)
+ * osd2r03: 4.14.5 Data-In and Data-Out buffer offsets
+ *	byte offset = mantissa * (2^(exponent+8))
+ *	struct {
+ *		unsigned mantissa: 28;
+ *		int exponent: 04;
+ *	}
+ */
+typedef __be32 __bitwise osd_cdb_offset;
+
+enum {
+	OSD_OFFSET_UNUSED = 0xFFFFFFFF,
+	OSD_OFFSET_MAX_BITS = 28,
+
+	OSDv1_OFFSET_MIN_SHIFT = 8,
+	OSD_OFFSET_MIN_SHIFT = 3,
+	OSD_OFFSET_MAX_SHIFT = 16,
+};
+
+/* Return the smallest allowed encoded offset that contains @offset.
+ *
+ * The byte offset actually encoded is @offset + *padding.
+ * (The shift used ranges from min_shift up to max_shift, non-inclusive.)
+ */
+osd_cdb_offset __osd_encode_offset(u64 offset, unsigned *padding,
+	int min_shift, int max_shift);
+
+/* Minimum alignment is 256 bytes
+ * Note: Seems from std v1 that exponent can be from 0+8 to 0xE+8 (inclusive)
+ * which is 8 to 23 but IBM code restricts it to 16, so be it.
+ */
+static inline osd_cdb_offset osd_encode_offset_v1(u64 offset, unsigned *padding)
+{
+	return __osd_encode_offset(offset, padding,
+				OSDv1_OFFSET_MIN_SHIFT, OSD_OFFSET_MAX_SHIFT);
+}
+
+/* Minimum 8-byte alignment
+ * Same as v1, but since the exponent can be signed, an alignment of less
+ * than 256 can be reached for small offsets (<2GB)
+ */
+static inline osd_cdb_offset osd_encode_offset_v2(u64 offset, unsigned *padding)
+{
+	return __osd_encode_offset(offset, padding,
+				   OSD_OFFSET_MIN_SHIFT, OSD_OFFSET_MAX_SHIFT);
+}
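
/* A worked example of the mantissa/exponent encoding described above
 * (illustrative only, not part of the interface):
 *
 *	byte offset = mantissa * 2^(exponent + 8)
 *
 * so a 512 KiB offset (0x80000) can be encoded as
 *
 *	mantissa = 0x800, exponent = 0   ->  0x800 * 2^8  = 0x80000
 *	mantissa = 0x80,  exponent = 4   ->  0x80  * 2^12 = 0x80000
 *
 * The v1 helper starts at OSDv1_OFFSET_MIN_SHIFT (8), i.e. exponent 0 and
 * 256-byte granularity; the v2 helper can go down to OSD_OFFSET_MIN_SHIFT
 * (3), i.e. a negative exponent and 8-byte granularity, matching the
 * alignment comments above.
 */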
+
+/* osd2r03: 5.2.1 Overview */
+struct osd_cdb_head {
+	struct scsi_varlen_cdb_hdr varlen_cdb;
+/*10*/	u8		options;
+	u8		command_specific_options;
+	u8		timestamp_control;
+/*13*/	u8		reserved1[3];
+/*16*/	__be64		partition;
+/*24*/	__be64		object;
+/*32*/	union { /* V1 vs V2 alignment differences */
+		struct __osdv1_cdb_addr_len {
+/*32*/			__be32 		list_identifier;/* Rarely used */
+/*36*/			__be64		length;
+/*44*/			__be64		start_address;
+		} __packed v1;
+
+		struct __osdv2_cdb_addr_len {
+			/* called allocation_length in some commands */
+/*32*/			__be64	length;
+/*40*/			__be64	start_address;
+/*48*/			__be32 list_identifier;/* Rarely used */
+		} __packed v2;
+	};
+/*52*/	union { /* selected attributes mode Page/List/Single */
+		struct osd_attributes_page_mode {
+/*52*/			__be32		get_attr_page;
+/*56*/			__be32		get_attr_alloc_length;
+/*60*/			osd_cdb_offset	get_attr_offset;
+
+/*64*/			__be32		set_attr_page;
+/*68*/			__be32		set_attr_id;
+/*72*/			__be32		set_attr_length;
+/*76*/			osd_cdb_offset	set_attr_offset;
+/*80*/		} __packed attrs_page;
+
+		struct osd_attributes_list_mode {
+/*52*/			__be32		get_attr_desc_bytes;
+/*56*/			osd_cdb_offset	get_attr_desc_offset;
+
+/*60*/			__be32		get_attr_alloc_length;
+/*64*/			osd_cdb_offset	get_attr_offset;
+
+/*68*/			__be32		set_attr_bytes;
+/*72*/			osd_cdb_offset	set_attr_offset;
+			__be32 not_used;
+/*80*/		} __packed attrs_list;
+
+		/* osd2r03:5.2.4.2 Set one attribute value using CDB fields */
+		struct osd_attributes_cdb_mode {
+/*52*/			__be32		set_attr_page;
+/*56*/			__be32		set_attr_id;
+/*60*/			__be16		set_attr_len;
+/*62*/			u8		set_attr_val[18];
+/*80*/		} __packed attrs_cdb;
+/*52*/		u8 get_set_attributes_parameters[28];
+	};
+} __packed;
+/*80*/
+
+/*160 v1*/
+/*184 v2*/
+struct osd_security_parameters {
+/*160*/u8	integrity_check_value[OSD_CRYPTO_KEYID_SIZE];
+/*180*/u8	request_nonce[OSD_CRYPTO_NONCE_SIZE];
+/*192*/osd_cdb_offset	data_in_integrity_check_offset;
+/*196*/osd_cdb_offset	data_out_integrity_check_offset;
+} __packed;
+/*200 v1*/
+/*224 v2*/
+
+/* FIXME: osdv2_security_parameters */
+
+struct osdv1_cdb {
+	struct osd_cdb_head h;
+	u8 caps[OSDv1_CAP_LEN];
+	struct osd_security_parameters sec_params;
+} __packed;
+
+struct osdv2_cdb {
+	struct osd_cdb_head h;
+	u8 caps[OSD_CAP_LEN];
+	struct osd_security_parameters sec_params;
+	/* FIXME: osdv2_security_parameters */
+} __packed;
+
+struct osd_cdb {
+	union {
+		struct osdv1_cdb v1;
+		struct osdv2_cdb v2;
+		u8 buff[OSD_TOTAL_CDB_LEN];
+	};
+} __packed;
+
+static inline struct osd_cdb_head *osd_cdb_head(struct osd_cdb *ocdb)
+{
+	return (struct osd_cdb_head *)ocdb->buff;
+}
+
+/* Define the actions for both versions.
+ * E.g. Name = FORMAT_OSD yields both OSD_ACT_FORMAT_OSD and OSDv1_ACT_FORMAT_OSD
+ */
+#define OSD_ACT___(Name, Num) \
+	OSD_ACT_##Name = __constant_cpu_to_be16(0x8880 + Num), \
+	OSDv1_ACT_##Name = __constant_cpu_to_be16(0x8800 + Num),
+
+/* V2 only actions */
+#define OSD_ACT_V2(Name, Num) \
+	OSD_ACT_##Name = __constant_cpu_to_be16(0x8880 + Num),
+
+#define OSD_ACT_V1_V2(Name, Num1, Num2) \
+	OSD_ACT_##Name = __constant_cpu_to_be16(Num2), \
+	OSDv1_ACT_##Name = __constant_cpu_to_be16(Num1),
+
+enum osd_service_actions {
+	OSD_ACT_V2(OBJECT_STRUCTURE_CHECK,	0x00)
+	OSD_ACT___(FORMAT_OSD,			0x01)
+	OSD_ACT___(CREATE,			0x02)
+	OSD_ACT___(LIST,			0x03)
+	OSD_ACT_V2(PUNCH,			0x04)
+	OSD_ACT___(READ,			0x05)
+	OSD_ACT___(WRITE,			0x06)
+	OSD_ACT___(APPEND,			0x07)
+	OSD_ACT___(FLUSH,			0x08)
+	OSD_ACT_V2(CLEAR,			0x09)
+	OSD_ACT___(REMOVE,			0x0A)
+	OSD_ACT___(CREATE_PARTITION,		0x0B)
+	OSD_ACT___(REMOVE_PARTITION,		0x0C)
+	OSD_ACT___(GET_ATTRIBUTES,		0x0E)
+	OSD_ACT___(SET_ATTRIBUTES,		0x0F)
+	OSD_ACT___(CREATE_AND_WRITE,		0x12)
+	OSD_ACT___(CREATE_COLLECTION,		0x15)
+	OSD_ACT___(REMOVE_COLLECTION,		0x16)
+	OSD_ACT___(LIST_COLLECTION,		0x17)
+	OSD_ACT___(SET_KEY,			0x18)
+	OSD_ACT___(SET_MASTER_KEY,		0x19)
+	OSD_ACT___(FLUSH_COLLECTION,		0x1A)
+	OSD_ACT___(FLUSH_PARTITION,		0x1B)
+	OSD_ACT___(FLUSH_OSD,			0x1C)
+
+	OSD_ACT_V2(QUERY,			0x20)
+	OSD_ACT_V2(REMOVE_MEMBER_OBJECTS,	0x21)
+	OSD_ACT_V2(GET_MEMBER_ATTRIBUTES,	0x22)
+	OSD_ACT_V2(SET_MEMBER_ATTRIBUTES,	0x23)
+	OSD_ACT_V2(READ_MAP,			0x31)
+
+	OSD_ACT_V1_V2(PERFORM_SCSI_COMMAND,	0x8F7E, 0x8F7C)
+	OSD_ACT_V1_V2(SCSI_TASK_MANAGEMENT,	0x8F7F, 0x8F7D)
+	/* 0x8F80 to 0x8FFF are Vendor specific */
+};
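
/* Worked expansion of one entry above, to make the OSD_ACT___() macro
 * concrete (the values follow directly from the macro definitions):
 *
 *	OSD_ACT___(FORMAT_OSD, 0x01)
 * expands to
 *	OSD_ACT_FORMAT_OSD   = __constant_cpu_to_be16(0x8880 + 0x01)  [0x8881]
 *	OSDv1_ACT_FORMAT_OSD = __constant_cpu_to_be16(0x8800 + 0x01)  [0x8801]
 *
 * i.e. each OSD_ACT___() name carries both the OSD2 (0x888x) and the
 * OSD1 (0x880x) service action code.
 */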
+
+/* osd2r03: 7.1.3.2 List entry format for retrieving attributes */
+struct osd_attributes_list_attrid {
+	__be32 attr_page;
+	__be32 attr_id;
+} __packed;
+
+/*
+ * osd2r03: 7.1.3.3 List entry format for retrieved attributes and
+ *                  for setting attributes
+ * NOTE: v2 is 8-bytes aligned, v1 is not aligned.
+ */
+struct osd_attributes_list_element {
+	__be32 attr_page;
+	__be32 attr_id;
+	__be16 attr_bytes;
+	u8 attr_val[0];
+} __packed;
+
+enum {
+	OSDv1_ATTRIBUTES_ELEM_ALIGN = 1,
+	OSD_ATTRIBUTES_ELEM_ALIGN = 8,
+};
+
+enum {
+	OSD_ATTR_LIST_ALL_PAGES = 0xFFFFFFFF,
+	OSD_ATTR_LIST_ALL_IN_PAGE = 0xFFFFFFFF,
+};
+
+static inline unsigned osdv1_attr_list_elem_size(unsigned len)
+{
+	return ALIGN(len + sizeof(struct osd_attributes_list_element),
+		     OSDv1_ATTRIBUTES_ELEM_ALIGN);
+}
+
+static inline unsigned osdv2_attr_list_elem_size(unsigned len)
+{
+	return ALIGN(len + sizeof(struct osd_attributes_list_element),
+		     OSD_ATTRIBUTES_ELEM_ALIGN);
+}
+
+/*
+ * osd2r03: 7.1.3 OSD attributes lists (Table 184) — List type values
+ */
+enum osd_attr_list_types {
+	OSD_ATTR_LIST_GET = 0x1, 	/* descriptors only */
+	OSD_ATTR_LIST_SET_RETRIEVE = 0x9, /*descriptors/values variable-length*/
+	OSD_V2_ATTR_LIST_MULTIPLE = 0xE,  /* ver2, Multiple Objects lists*/
+	OSD_V1_ATTR_LIST_CREATE_MULTIPLE = 0xF,/*ver1, used by create_multiple*/
+};
+
+/* osd2r03: 7.1.3.4 Multi-object retrieved attributes format */
+struct osd_attributes_list_multi_header {
+	__be64 object_id;
+	u8 object_type; /* object_type enum below */
+	u8 reserved[5];
+	__be16 list_bytes;
+	/* followed by struct osd_attributes_list_element's */
+};
+
+struct osdv1_attributes_list_header {
+	u8 type;	/* low 4-bit only */
+	u8 pad;
+	__be16 list_bytes; /* Initiator shall set to Zero. Only set by target */
+	/*
+	 * type=9 followed by struct osd_attributes_list_element's
+	 * type=E followed by struct osd_attributes_list_multi_header's
+	 */
+} __packed;
+
+static inline unsigned osdv1_list_size(struct osdv1_attributes_list_header *h)
+{
+	return be16_to_cpu(h->list_bytes);
+}
+
+struct osdv2_attributes_list_header {
+	u8 type;	/* lower 4-bits only */
+	u8 pad[3];
+/*4*/	__be32 list_bytes; /* Initiator shall set to zero. Only set by target */
+	/*
+	 * type=9 followed by struct osd_attributes_list_element's
+	 * type=E followed by struct osd_attributes_list_multi_header's
+	 */
+} __packed;
+
+static inline unsigned osdv2_list_size(struct osdv2_attributes_list_header *h)
+{
+	return be32_to_cpu(h->list_bytes);
+}
+
+/* (osd-r10 6.13)
+ * osd2r03: 6.15 LIST (Table 79) LIST command parameter data.
+ *	for root_lstchg below
+ */
+enum {
+	OSD_OBJ_ID_LIST_PAR = 0x1, /* V1-only. Not used in V2 */
+	OSD_OBJ_ID_LIST_LSTCHG = 0x2,
+};
+
+/*
+ * osd2r03: 6.15.2 LIST command parameter data
+ * (Also for LIST COLLECTION)
+ */
+struct osd_obj_id_list {
+	__be64 list_bytes; /* bytes in list excluding list_bytes (-8) */
+	__be64 continuation_id;
+	__be32 list_identifier;
+	u8 pad[3];
+	u8 root_lstchg;
+	__be64 object_ids[0];
+} __packed;
+
+static inline bool osd_is_obj_list_done(struct osd_obj_id_list *list,
+	bool *is_changed)
+{
+	*is_changed = (0 != (list->root_lstchg & OSD_OBJ_ID_LIST_LSTCHG));
+	return 0 != list->continuation_id;
+}
+
+/*
+ * osd2r03: 4.12.4.5 The ALLDATA security method
+ */
+struct osd_data_out_integrity_info {
+	__be64 data_bytes;
+	__be64 set_attributes_bytes;
+	__be64 get_attributes_bytes;
+	__be64 integrity_check_value;
+} __packed;
+
+struct osd_data_in_integrity_info {
+	__be64 data_bytes;
+	__be64 retrieved_attributes_bytes;
+	__be64 integrity_check_value;
+} __packed;
+
+struct osd_timestamp {
+	u8 time[6]; /* number of milliseconds since 1/1/1970 UT (big endian) */
+} __packed;
+/* FIXME: define helper functions to convert to/from osd time format */
+
+/*
+ * Capability & Security definitions
+ * osd2r03: 4.11.2.2 Capability format
+ * osd2r03: 5.2.8 Security parameters
+ */
+
+struct osd_key_identifier {
+	u8 id[7]; /* if you know why 7 please email bharrosh@panasas.com */
+} __packed;
+
+/* for osd_capability.format */
+enum {
+	OSD_SEC_CAP_FORMAT_NO_CAPS = 0,
+	OSD_SEC_CAP_FORMAT_VER1 = 1,
+	OSD_SEC_CAP_FORMAT_VER2 = 2,
+};
+
+/* security_method */
+enum {
+	OSD_SEC_NOSEC = 0,
+	OSD_SEC_CAPKEY = 1,
+	OSD_SEC_CMDRSP = 2,
+	OSD_SEC_ALLDATA = 3,
+};
+
+enum object_type {
+	OSD_SEC_OBJ_ROOT = 0x1,
+	OSD_SEC_OBJ_PARTITION = 0x2,
+	OSD_SEC_OBJ_COLLECTION = 0x40,
+	OSD_SEC_OBJ_USER = 0x80,
+};
+
+enum osd_capability_bit_masks {
+	OSD_SEC_CAP_APPEND	= BIT(0),
+	OSD_SEC_CAP_OBJ_MGMT	= BIT(1),
+	OSD_SEC_CAP_REMOVE	= BIT(2),
+	OSD_SEC_CAP_CREATE	= BIT(3),
+	OSD_SEC_CAP_SET_ATTR	= BIT(4),
+	OSD_SEC_CAP_GET_ATTR	= BIT(5),
+	OSD_SEC_CAP_WRITE	= BIT(6),
+	OSD_SEC_CAP_READ	= BIT(7),
+
+	OSD_SEC_CAP_NONE1	= BIT(8),
+	OSD_SEC_CAP_NONE2	= BIT(9),
+	OSD_SEC_CAP_NONE3	= BIT(10),
+	OSD_SEC_CAP_QUERY	= BIT(11), /*v2 only*/
+	OSD_SEC_CAP_M_OBJECT	= BIT(12), /*v2 only*/
+	OSD_SEC_CAP_POL_SEC	= BIT(13),
+	OSD_SEC_CAP_GLOBAL	= BIT(14),
+	OSD_SEC_CAP_DEV_MGMT	= BIT(15),
+};
+
+/* for object_descriptor_type (hi nibble used) */
+enum {
+	OSD_SEC_OBJ_DESC_NONE = 0,     /* Not allowed */
+	OSD_SEC_OBJ_DESC_OBJ = 1 << 4, /* v1: also collection */
+	OSD_SEC_OBJ_DESC_PAR = 2 << 4, /* also root */
+	OSD_SEC_OBJ_DESC_COL = 3 << 4, /* v2 only */
+};
+
+/* (osd-r10:4.9.2.2)
+ * osd2r03:4.11.2.2 Capability format
+ */
+struct osd_capability_head {
+	u8 format; /* low nibble */
+	u8 integrity_algorithm__key_version; /* MAKE_BYTE(integ_alg, key_ver) */
+	u8 security_method;
+	u8 reserved1;
+/*04*/	struct osd_timestamp expiration_time;
+/*10*/	u8 audit[20];
+/*30*/	u8 discriminator[12];
+/*42*/	struct osd_timestamp object_created_time;
+/*48*/	u8 object_type;
+/*49*/	u8 permissions_bit_mask[5];
+/*54*/	u8 reserved2;
+/*55*/	u8 object_descriptor_type; /* high nibble */
+} __packed;
+
+/*56 v1*/
+struct osdv1_cap_object_descriptor {
+	union {
+		struct {
+/*56*/			__be32 policy_access_tag;
+/*60*/			__be64 allowed_partition_id;
+/*68*/			__be64 allowed_object_id;
+/*76*/			__be32 reserved;
+		} __packed obj_desc;
+
+/*56*/		u8 object_descriptor[24];
+	};
+} __packed;
+/*80 v1*/
+
+/*56 v2*/
+struct osd_cap_object_descriptor {
+	union {
+		struct {
+/*56*/			__be32 allowed_attributes_access;
+/*60*/			__be32 policy_access_tag;
+/*64*/			__be16 boot_epoch;
+/*66*/			u8 reserved[6];
+/*72*/			__be64 allowed_partition_id;
+/*80*/			__be64 allowed_object_id;
+/*88*/			__be64 allowed_range_length;
+/*96*/			__be64 allowed_range_start;
+		} __packed obj_desc;
+
+/*56*/		u8 object_descriptor[48];
+	};
+} __packed;
+/*104 v2*/
+
+struct osdv1_capability {
+	struct osd_capability_head h;
+	struct osdv1_cap_object_descriptor od;
+} __packed;
+
+struct osd_capability {
+	struct osd_capability_head h;
+/* 	struct osd_cap_object_descriptor od;*/
+	struct osdv1_cap_object_descriptor od; /* FIXME: Pete rev-001 sup */
+} __packed;
+
+/**
+ * osd_sec_set_caps - set cap-bits into the capabilities header
+ *
+ * @cap:	The osd_capability_head to set cap bits to.
+ * @bit_mask: 	Use an ORed list of enum osd_capability_bit_masks values
+ *
+ * permissions_bit_mask is unaligned; use this helper to set the caps
+ * bits in a version-independent way.
+ */
+static inline void osd_sec_set_caps(struct osd_capability_head *cap,
+	u16 bit_mask)
+{
+	/*
+	 * Note: The bits above are defined in LE order so that they can
+	 *       grow beyond 16 in the future and still retain their
+	 *       constant values.
+	 */
+	put_unaligned_le16(bit_mask, &cap->permissions_bit_mask);
+}
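+
+/*
+ * Example (hypothetical helper, illustrative sketch only): grant read
+ * and get-attributes rights on an already initialized capability header.
+ */
+static inline void example_allow_read(struct osd_capability_head *cap)
+{
+	osd_sec_set_caps(cap, OSD_SEC_CAP_READ | OSD_SEC_CAP_GET_ATTR);
+}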
+
+#endif /* ndef __OSD_PROTOCOL_H__ */
diff --git a/include/scsi/osd_sec.h b/include/scsi/osd_sec.h
new file mode 100644
index 0000000..4c09fee
--- /dev/null
+++ b/include/scsi/osd_sec.h
@@ -0,0 +1,45 @@
+/*
+ * osd_sec.h - OSD security manager API
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ */
+#ifndef __OSD_SEC_H__
+#define __OSD_SEC_H__
+
+#include "osd_protocol.h"
+#include "osd_types.h"
+
+/*
+ * Contains types and constants for osd capabilities and security
+ * encoding/decoding.
+ * The API tries to keep security abstract, so the initiator of an
+ * object-based pNFS client knows as little as possible about security
+ * and capabilities. It is the server's osd-initiator's place to know
+ * more. It can also be used by an osd-target.
+ */
+void osd_sec_encode_caps(void *caps, ...);/* NI */
+void osd_sec_init_nosec_doall_caps(void *caps,
+	const struct osd_obj_id *obj, bool is_collection, const bool is_v1);
+
+bool osd_is_sec_alldata(struct osd_security_parameters *sec_params);
+
+/* Conditionally sign the CDB, according to the security setting in ocdb,
+ * using cap_key */
+void osd_sec_sign_cdb(struct osd_cdb *ocdb, const u8 *cap_key);
+
+/* Unconditionally sign the BIO data with cap_key.
+ * The caller must have checked osd_is_sec_alldata() before calling this. */
+void osd_sec_sign_data(void *data_integ, struct bio *bio, const u8 *cap_key);
+
+/* Version independent copy of caps into the cdb */
+void osd_set_caps(struct osd_cdb *cdb, const void *caps);
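+
+/*
+ * Example (hypothetical helper, illustrative sketch only): build an
+ * all-permissions, no-security capability for the root object and copy
+ * it into a caller-prepared CDB.
+ */
+static inline void example_fill_nosec_cdb(struct osd_cdb *ocdb, bool is_v1)
+{
+	struct osd_capability caps;
+
+	osd_sec_init_nosec_doall_caps(&caps, &osd_root_object, false, is_v1);
+	osd_set_caps(ocdb, &caps);
+}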
+
+#endif /* ndef __OSD_SEC_H__ */
diff --git a/include/scsi/osd_sense.h b/include/scsi/osd_sense.h
new file mode 100644
index 0000000..ff9b33c
--- /dev/null
+++ b/include/scsi/osd_sense.h
@@ -0,0 +1,260 @@
+/*
+ * osd_sense.h - OSD Related sense handling definitions.
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ * This file contains types and constants that are defined by the protocol.
+ * Note: All names and symbols are taken from the OSD standard's text.
+ */
+#ifndef __OSD_SENSE_H__
+#define __OSD_SENSE_H__
+
+#include <scsi/osd_protocol.h>
+
+/* SPC3r23 4.5.6 Sense key and sense code definitions table 27 */
+enum scsi_sense_keys {
+	scsi_sk_no_sense        = 0x0,
+	scsi_sk_recovered_error = 0x1,
+	scsi_sk_not_ready       = 0x2,
+	scsi_sk_medium_error    = 0x3,
+	scsi_sk_hardware_error  = 0x4,
+	scsi_sk_illegal_request = 0x5,
+	scsi_sk_unit_attention  = 0x6,
+	scsi_sk_data_protect    = 0x7,
+	scsi_sk_blank_check     = 0x8,
+	scsi_sk_vendor_specific = 0x9,
+	scsi_sk_copy_aborted    = 0xa,
+	scsi_sk_aborted_command = 0xb,
+	scsi_sk_volume_overflow = 0xd,
+	scsi_sk_miscompare      = 0xe,
+	scsi_sk_reserved        = 0xf,
+};
+
+/* SPC3r23 4.5.6 Sense key and sense code definitions table 28 */
+/* Note: only those which can be returned by an OSD target. Most of
+ *       these errors are taken care of by the generic scsi layer.
+ */
+enum osd_additional_sense_codes {
+	scsi_no_additional_sense_information			= 0x0000,
+	scsi_operation_in_progress				= 0x0016,
+	scsi_cleaning_requested					= 0x0017,
+	scsi_lunr_cause_not_reportable				= 0x0400,
+	scsi_logical_unit_is_in_process_of_becoming_ready	= 0x0401,
+	scsi_lunr_initializing_command_required			= 0x0402,
+	scsi_lunr_manual_intervention_required			= 0x0403,
+	scsi_lunr_operation_in_progress				= 0x0407,
+	scsi_lunr_selftest_in_progress				= 0x0409,
+	scsi_luna_asymmetric_access_state_transition		= 0x040a,
+	scsi_luna_target_port_in_standby_state			= 0x040b,
+	scsi_luna_target_port_in_unavailable_state		= 0x040c,
+	scsi_lunr_notify_enable_spinup_required			= 0x0411,
+	scsi_logical_unit_does_not_respond_to_selection		= 0x0500,
+	scsi_logical_unit_communication_failure			= 0x0800,
+	scsi_logical_unit_communication_timeout			= 0x0801,
+	scsi_logical_unit_communication_parity_error		= 0x0802,
+	scsi_error_log_overflow					= 0x0a00,
+	scsi_warning						= 0x0b00,
+	scsi_warning_specified_temperature_exceeded		= 0x0b01,
+	scsi_warning_enclosure_degraded				= 0x0b02,
+	scsi_write_error_unexpected_unsolicited_data		= 0x0c0c,
+	scsi_write_error_not_enough_unsolicited_data		= 0x0c0d,
+	scsi_invalid_information_unit				= 0x0e00,
+	scsi_invalid_field_in_command_information_unit		= 0x0e03,
+	scsi_read_error_failed_retransmission_request		= 0x1113,
+	scsi_parameter_list_length_error			= 0x1a00,
+	scsi_invalid_command_operation_code			= 0x2000,
+	scsi_invalid_field_in_cdb				= 0x2400,
+	osd_security_audit_value_frozen				= 0x2404,
+	osd_security_working_key_frozen				= 0x2405,
+	osd_nonce_not_unique					= 0x2406,
+	osd_nonce_timestamp_out_of_range			= 0x2407,
+	scsi_logical_unit_not_supported				= 0x2500,
+	scsi_invalid_field_in_parameter_list			= 0x2600,
+	scsi_parameter_not_supported				= 0x2601,
+	scsi_parameter_value_invalid				= 0x2602,
+	scsi_invalid_release_of_persistent_reservation		= 0x2604,
+	osd_invalid_dataout_buffer_integrity_check_value	= 0x260f,
+	scsi_not_ready_to_ready_change_medium_may_have_changed	= 0x2800,
+	scsi_power_on_reset_or_bus_device_reset_occurred	= 0x2900,
+	scsi_power_on_occurred					= 0x2901,
+	scsi_scsi_bus_reset_occurred				= 0x2902,
+	scsi_bus_device_reset_function_occurred			= 0x2903,
+	scsi_device_internal_reset				= 0x2904,
+	scsi_transceiver_mode_changed_to_single_ended		= 0x2905,
+	scsi_transceiver_mode_changed_to_lvd			= 0x2906,
+	scsi_i_t_nexus_loss_occurred				= 0x2907,
+	scsi_parameters_changed					= 0x2a00,
+	scsi_mode_parameters_changed				= 0x2a01,
+	scsi_asymmetric_access_state_changed			= 0x2a06,
+	scsi_priority_changed					= 0x2a08,
+	scsi_command_sequence_error				= 0x2c00,
+	scsi_previous_busy_status				= 0x2c07,
+	scsi_previous_task_set_full_status			= 0x2c08,
+	scsi_previous_reservation_conflict_status		= 0x2c09,
+	osd_partition_or_collection_contains_user_objects	= 0x2c0a,
+	scsi_commands_cleared_by_another_initiator		= 0x2f00,
+	scsi_cleaning_failure					= 0x3007,
+	scsi_enclosure_failure					= 0x3400,
+	scsi_enclosure_services_failure				= 0x3500,
+	scsi_unsupported_enclosure_function			= 0x3501,
+	scsi_enclosure_services_unavailable			= 0x3502,
+	scsi_enclosure_services_transfer_failure		= 0x3503,
+	scsi_enclosure_services_transfer_refused		= 0x3504,
+	scsi_enclosure_services_checksum_error			= 0x3505,
+	scsi_rounded_parameter					= 0x3700,
+	osd_read_past_end_of_user_object			= 0x3b17,
+	scsi_logical_unit_has_not_self_configured_yet		= 0x3e00,
+	scsi_logical_unit_failure				= 0x3e01,
+	scsi_timeout_on_logical_unit				= 0x3e02,
+	scsi_logical_unit_failed_selftest			= 0x3e03,
+	scsi_logical_unit_unable_to_update_selftest_log		= 0x3e04,
+	scsi_target_operating_conditions_have_changed		= 0x3f00,
+	scsi_microcode_has_been_changed				= 0x3f01,
+	scsi_inquiry_data_has_changed				= 0x3f03,
+	scsi_echo_buffer_overwritten				= 0x3f0f,
+	scsi_diagnostic_failure_on_component_nn_first		= 0x4080,
+	scsi_diagnostic_failure_on_component_nn_last		= 0x40ff,
+	scsi_message_error					= 0x4300,
+	scsi_internal_target_failure				= 0x4400,
+	scsi_select_or_reselect_failure				= 0x4500,
+	scsi_scsi_parity_error					= 0x4700,
+	scsi_data_phase_crc_error_detected			= 0x4701,
+	scsi_scsi_parity_error_detected_during_st_data_phase	= 0x4702,
+	scsi_asynchronous_information_protection_error_detected	= 0x4704,
+	scsi_protocol_service_crc_error				= 0x4705,
+	scsi_phy_test_function_in_progress			= 0x4706,
+	scsi_invalid_message_error				= 0x4900,
+	scsi_command_phase_error				= 0x4a00,
+	scsi_data_phase_error					= 0x4b00,
+	scsi_logical_unit_failed_self_configuration		= 0x4c00,
+	scsi_overlapped_commands_attempted			= 0x4e00,
+	osd_quota_error						= 0x5507,
+	scsi_failure_prediction_threshold_exceeded		= 0x5d00,
+	scsi_failure_prediction_threshold_exceeded_false	= 0x5dff,
+	scsi_voltage_fault					= 0x6500,
+};
+
+enum scsi_descriptor_types {
+	scsi_sense_information			= 0x0,
+	scsi_sense_command_specific_information	= 0x1,
+	scsi_sense_key_specific			= 0x2,
+	scsi_sense_field_replaceable_unit	= 0x3,
+	scsi_sense_stream_commands		= 0x4,
+	scsi_sense_block_commands		= 0x5,
+	osd_sense_object_identification		= 0x6,
+	osd_sense_response_integrity_check	= 0x7,
+	osd_sense_attribute_identification	= 0x8,
+	scsi_sense_ata_return			= 0x9,
+
+	scsi_sense_Reserved_first		= 0x0A,
+	scsi_sense_Reserved_last		= 0x7F,
+	scsi_sense_Vendor_specific_first	= 0x80,
+	scsi_sense_Vendor_specific_last		= 0xFF,
+};
+
+struct scsi_sense_descriptor { /* for peeking into the descriptor type */
+	u8	descriptor_type; /* one of enum scsi_descriptor_types */
+	u8	additional_length; /* n - 1 */
+	u8	data[];
+} __packed;
+
+/* OSD deploys only scsi descriptor_based sense buffers */
+struct scsi_sense_descriptor_based {
+/*0*/	u8 	response_code; /* 0x72 or 0x73 */
+/*1*/	u8 	sense_key; /* one of enum scsi_sense_keys (4 lower bits) */
+/*2*/	__be16	additional_sense_code; /* enum osd_additional_sense_codes */
+/*4*/	u8	Reserved[3];
+/*7*/	u8	additional_sense_length; /* n - 7 */
+/*8*/	struct	scsi_sense_descriptor ssd[0]; /* variable length, 1 or more */
+} __packed;
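+
+/*
+ * Example (hypothetical helper, illustrative sketch only): walk the
+ * variable-length descriptors that follow a descriptor-based sense
+ * header.  'handle' is a hypothetical per-descriptor callback supplied
+ * by the caller.
+ */
+static inline void example_for_each_sense_descriptor(
+	struct scsi_sense_descriptor_based *ssdb,
+	void (*handle)(struct scsi_sense_descriptor *ssd))
+{
+	u8 *p = (u8 *)ssdb->ssd;
+	u8 *end = p + ssdb->additional_sense_length;
+
+	while (p + sizeof(struct scsi_sense_descriptor) <= end) {
+		struct scsi_sense_descriptor *ssd =
+					(struct scsi_sense_descriptor *)p;
+
+		handle(ssd);
+		/* each descriptor is additional_length + 2 bytes long */
+		p += ssd->additional_length + 2;
+	}
+}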
+
+/* some descriptors deployed by OSD */
+
+/* SPC3r23 4.5.2.3 Command-specific information sense data descriptor */
+/* Note: the layout is the same for descriptor_type=00h, except that with
+ *       type=00h Reserved[0] == 0x80 (i.e. bit 7 is set)
+ */
+struct scsi_sense_command_specific_data_descriptor {
+/*0*/	u8	descriptor_type; /* (00h/01h) */
+/*1*/	u8	additional_length; /* (0Ah) */
+/*2*/	u8	Reserved[2];
+/*4*/	__be64  information;
+} __packed;
+/*12*/
+
+struct scsi_sense_key_specific_data_descriptor {
+/*0*/	u8	descriptor_type; /* (02h) */
+/*1*/	u8	additional_length; /* (06h) */
+/*2*/	u8	Reserved[2];
+/* SKSV, C/D, Reserved (2), BPV, BIT POINTER (3) */
+/*4*/	u8	sksv_cd_bpv_bp;
+/*5*/	__be16	value; /* field-pointer/progress-value/retry-count/... */
+/*7*/	u8	Reserved2;
+} __packed;
+/*8*/
+
+/* 4.16.2.1 OSD error identification sense data descriptor - table 52 */
+/* Note: these bits are defined in LE order for easy definition; this way
+ * the BIT() number is the same as in the documentation. The members below
+ * in osd_sense_identification_data_descriptor are therefore defined __le32.
+ */
+enum osd_command_functions_bits {
+	OSD_CFB_COMMAND		 = BIT(4),
+	OSD_CFB_CMD_CAP_VERIFIED = BIT(5),
+	OSD_CFB_VALIDATION	 = BIT(7),
+	OSD_CFB_IMP_ST_ATT	 = BIT(12),
+	OSD_CFB_SET_ATT		 = BIT(20),
+	OSD_CFB_SA_CAP_VERIFIED	 = BIT(21),
+	OSD_CFB_GET_ATT		 = BIT(28),
+	OSD_CFB_GA_CAP_VERIFIED	 = BIT(29),
+};
+
+struct osd_sense_identification_data_descriptor {
+/*0*/	u8	descriptor_type; /* (06h) */
+/*1*/	u8	additional_length; /* (1Eh) */
+/*2*/	u8	Reserved[6];
+/*8*/	__le32	not_initiated_functions; /*osd_command_functions_bits*/
+/*12*/	__le32	completed_functions; /*osd_command_functions_bits*/
+/*16*/ 	__be64	partition_id;
+/*24*/	__be64	object_id;
+} __packed;
+/*32*/
+
+struct osd_sense_response_integrity_check_descriptor {
+/*0*/	u8	descriptor_type; /* (07h) */
+/*1*/	u8	additional_length; /* (20h) */
+/*2*/	u8	integrity_check_value[32]; /*FIXME: OSDv2_CRYPTO_KEYID_SIZE*/
+} __packed;
+/*34*/
+
+struct osd_sense_attributes_data_descriptor {
+/*0*/	u8	descriptor_type; /* (08h) */
+/*1*/	u8	additional_length; /* (n-2) */
+/*2*/	u8	Reserved[6];
+	struct osd_sense_attr {
+/*8*/		__be32	attr_page;
+/*12*/		__be32	attr_id;
+/*16*/	} sense_attrs[0]; /* 1 or more */
+} __packed;
+/*variable*/
+
+/* Dig into scsi_sk_illegal_request/scsi_invalid_field_in_cdb errors */
+
+/* FIXME: Also support fields in CAPS */
+#define OSD_CDB_OFFSET(F) offsetof(struct osd_cdb_head, F)
+
+enum osdv2_cdb_field_offset {
+	OSDv1_CFO_STARTING_BYTE	= OSD_CDB_OFFSET(v1.start_address),
+	OSD_CFO_STARTING_BYTE	= OSD_CDB_OFFSET(v2.start_address),
+	OSD_CFO_PARTITION_ID	= OSD_CDB_OFFSET(partition),
+	OSD_CFO_OBJECT_ID	= OSD_CDB_OFFSET(object),
+};
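+
+/*
+ * Example (hypothetical helper, illustrative sketch only): decode the
+ * sense-key specific field pointer of an invalid-field-in-cdb error into
+ * one of the CDB field offsets above.  Returns (u16)-1 if SKSV is clear.
+ */
+static inline u16 example_cdb_field_offset(
+	struct scsi_sense_key_specific_data_descriptor *sks)
+{
+	if (!(sks->sksv_cd_bpv_bp & 0x80))	/* SKSV bit not set */
+		return (u16)-1;
+
+	return be16_to_cpu(sks->value);	/* an enum osdv2_cdb_field_offset */
+}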
+
+#endif /* ndef __OSD_SENSE_H__ */
diff --git a/include/scsi/osd_types.h b/include/scsi/osd_types.h
new file mode 100644
index 0000000..3f5e88c
--- /dev/null
+++ b/include/scsi/osd_types.h
@@ -0,0 +1,40 @@
+/*
+ * osd_types.h - Types and constants which are not part of the protocol.
+ *
+ * Copyright (C) 2008 Panasas Inc.  All rights reserved.
+ *
+ * Authors:
+ *   Boaz Harrosh <bharrosh@panasas.com>
+ *   Benny Halevy <bhalevy@panasas.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ *
+ * Contains types and constants that are implementation specific and are
+ * used by more than one part of the osd library.
+ *     (Eg initiator/target/security_manager/...)
+ */
+#ifndef __OSD_TYPES_H__
+#define __OSD_TYPES_H__
+
+struct osd_systemid {
+	u8 data[OSD_SYSTEMID_LEN];
+};
+
+typedef u64 __bitwise osd_id;
+
+struct osd_obj_id {
+	osd_id partition;
+	osd_id id;
+};
+
+static const struct __weak osd_obj_id osd_root_object = {0, 0};
+
+struct osd_attr {
+	u32 attr_page;
+	u32 attr_id;
+	u16 len;		/* byte count of operand */
+	void *val_ptr;		/* in network order */
+};
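+
+/*
+ * Example (hypothetical helper, illustrative sketch only): convenience
+ * initializer for an osd_attr descriptor.  'val' must already be in
+ * network order.
+ */
+static inline struct osd_attr example_osd_attr(u32 page, u32 id, u16 len,
+					       void *val)
+{
+	struct osd_attr attr = {
+		.attr_page = page,
+		.attr_id = id,
+		.len = len,
+		.val_ptr = val,
+	};
+
+	return attr;
+}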
+
+#endif /* ndef __OSD_TYPES_H__ */
diff --git a/include/scsi/scsi.h b/include/scsi/scsi.h
index a109165..084478e 100644
--- a/include/scsi/scsi.h
+++ b/include/scsi/scsi.h
@@ -9,7 +9,8 @@
 #define _SCSI_SCSI_H
 
 #include <linux/types.h>
-#include <scsi/scsi_cmnd.h>
+
+struct scsi_cmnd;
 
 /*
  * The maximum number of SG segments that we will put inside a
@@ -263,6 +264,7 @@
 #define TYPE_RAID           0x0c
 #define TYPE_ENCLOSURE      0x0d    /* Enclosure Services Device */
 #define TYPE_RBC	    0x0e
+#define TYPE_OSD            0x11
 #define TYPE_NO_LUN         0x7f
 
 /* SCSI protocols; these are taken from SPC-3 section 7.5 */
@@ -402,16 +404,6 @@
 #define DRIVER_HARD         0x07
 #define DRIVER_SENSE	    0x08
 
-#define SUGGEST_RETRY       0x10
-#define SUGGEST_ABORT       0x20
-#define SUGGEST_REMAP       0x30
-#define SUGGEST_DIE         0x40
-#define SUGGEST_SENSE       0x80
-#define SUGGEST_IS_OK       0xff
-
-#define DRIVER_MASK         0x0f
-#define SUGGEST_MASK        0xf0
-
 /*
  * Internal return values.
  */
@@ -447,23 +439,6 @@
 #define msg_byte(result)    (((result) >> 8) & 0xff)
 #define host_byte(result)   (((result) >> 16) & 0xff)
 #define driver_byte(result) (((result) >> 24) & 0xff)
-#define suggestion(result)  (driver_byte(result) & SUGGEST_MASK)
-
-static inline void set_msg_byte(struct scsi_cmnd *cmd, char status)
-{
-	cmd->result |= status << 8;
-}
-
-static inline void set_host_byte(struct scsi_cmnd *cmd, char status)
-{
-	cmd->result |= status << 16;
-}
-
-static inline void set_driver_byte(struct scsi_cmnd *cmd, char status)
-{
-	cmd->result |= status << 24;
-}
-
 
 #define sense_class(sense)  (((sense) >> 4) & 0x7)
 #define sense_error(sense)  ((sense) & 0xf)
diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
index 855bf95..43b50d3 100644
--- a/include/scsi/scsi_cmnd.h
+++ b/include/scsi/scsi_cmnd.h
@@ -291,4 +291,19 @@
 #define scsi_for_each_prot_sg(cmd, sg, nseg, __i)		\
 	for_each_sg(scsi_prot_sglist(cmd), sg, nseg, __i)
 
+/*
+ * cmd->result is composed of four bytes: status (bits 0-7), message
+ * (bits 8-15), host (bits 16-23) and driver (bits 24-31).  The helpers
+ * below OR a value into the message, host and driver bytes respectively.
+ */
+static inline void set_msg_byte(struct scsi_cmnd *cmd, char status)
+{
+	cmd->result |= status << 8;
+}
+
+static inline void set_host_byte(struct scsi_cmnd *cmd, char status)
+{
+	cmd->result |= status << 16;
+}
+
+static inline void set_driver_byte(struct scsi_cmnd *cmd, char status)
+{
+	cmd->result |= status << 24;
+}
+
 #endif /* _SCSI_SCSI_CMND_H */
diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
index 01a4c58..3f566af 100644
--- a/include/scsi/scsi_device.h
+++ b/include/scsi/scsi_device.h
@@ -340,6 +340,7 @@
 			    struct scsi_sense_hdr *);
 extern int scsi_test_unit_ready(struct scsi_device *sdev, int timeout,
 				int retries, struct scsi_sense_hdr *sshdr);
+extern unsigned char *scsi_get_vpd_page(struct scsi_device *, u8 page);
 extern int scsi_device_set_state(struct scsi_device *sdev,
 				 enum scsi_device_state state);
 extern struct scsi_event *sdev_evt_alloc(enum scsi_device_event evt_type,
@@ -370,12 +371,6 @@
 			    int data_direction, void *buffer, unsigned bufflen,
 			    struct scsi_sense_hdr *, int timeout, int retries,
 			    int *resid);
-extern int scsi_execute_async(struct scsi_device *sdev,
-			      const unsigned char *cmd, int cmd_len, int data_direction,
-			      void *buffer, unsigned bufflen, int use_sg,
-			      int timeout, int retries, void *privdata,
-			      void (*done)(void *, char *, int, int),
-			      gfp_t gfp);
 
 static inline int __must_check scsi_device_reprobe(struct scsi_device *sdev)
 {
@@ -400,7 +395,8 @@
  */
 static inline int scsi_device_online(struct scsi_device *sdev)
 {
-	return sdev->sdev_state != SDEV_OFFLINE;
+	return (sdev->sdev_state != SDEV_OFFLINE &&
+		sdev->sdev_state != SDEV_DEL);
 }
 static inline int scsi_device_blocked(struct scsi_device *sdev)
 {
diff --git a/include/scsi/scsi_transport_iscsi.h b/include/scsi/scsi_transport_iscsi.h
index b50aabe..457588e 100644
--- a/include/scsi/scsi_transport_iscsi.h
+++ b/include/scsi/scsi_transport_iscsi.h
@@ -88,7 +88,7 @@
 	uint64_t host_param_mask;
 	struct iscsi_cls_session *(*create_session) (struct iscsi_endpoint *ep,
 					uint16_t cmds_max, uint16_t qdepth,
-					uint32_t sn, uint32_t *hn);
+					uint32_t sn);
 	void (*destroy_session) (struct iscsi_cls_session *session);
 	struct iscsi_cls_conn *(*create_conn) (struct iscsi_cls_session *sess,
 				uint32_t cid);
@@ -206,8 +206,6 @@
 struct iscsi_cls_host {
 	atomic_t nr_scans;
 	struct mutex mutex;
-	struct workqueue_struct *scan_workq;
-	char scan_workq_name[20];
 };
 
 extern void iscsi_host_for_each_session(struct Scsi_Host *shost,
diff --git a/net/core/dev.c b/net/core/dev.c
index 63ec4bf..52fea5b 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1457,7 +1457,9 @@
 		((features & NETIF_F_IP_CSUM) &&
 		 protocol == htons(ETH_P_IP)) ||
 		((features & NETIF_F_IPV6_CSUM) &&
-		 protocol == htons(ETH_P_IPV6)));
+		 protocol == htons(ETH_P_IPV6)) ||
+		((features & NETIF_F_FCOE_CRC) &&
+		 protocol == htons(ETH_P_FCOE)));
 }
 
 static bool dev_can_checksum(struct net_device *dev, struct sk_buff *skb)