Initial import of eigen 3.1.1

Added a README.android and a MODULE_LICENSE_MPL2 file.
Added empty Android.mk and CleanSpec.mk to optimize Android build.
Non-MPL2 licensed code is disabled in ./Eigen/src/Core/util/NonMPL2.h.
Trying to include such files will lead to an error.

Change-Id: I0e148b7c3e83999bcc4dfaa5809d33bfac2aac32
diff --git a/doc/A05_PortingFrom2To3.dox b/doc/A05_PortingFrom2To3.dox
new file mode 100644
index 0000000..10ce968
--- /dev/null
+++ b/doc/A05_PortingFrom2To3.dox
@@ -0,0 +1,319 @@
+namespace Eigen {
+
+/** \page Eigen2ToEigen3 Porting from Eigen2 to Eigen3
+
+This page lists the most important API changes between Eigen2 and Eigen3,
+and gives tips to help porting your application from Eigen2 to Eigen3.
+
+\b Table \b of \b contents
+  - \ref CompatibilitySupport
+  - \ref Using
+  - \ref ComplexDot
+  - \ref VectorBlocks
+  - \ref Corners
+  - \ref CoefficientWiseOperations
+  - \ref PartAndExtract
+  - \ref TriangularSolveInPlace
+  - \ref Decompositions
+  - \ref LinearSolvers
+  - \ref GeometryModule
+  - \ref Transform
+  - \ref LazyVsNoalias
+  - \ref AlignMacros
+  - \ref AlignedMap
+  - \ref StdContainers
+  - \ref eiPrefix
+
+\section CompatibilitySupport Eigen2 compatibility support
+
+In order to ease the switch from Eigen2 to Eigen3, Eigen3 features \ref Eigen2SupportModes "Eigen2 support modes".
+
+The quick way to enable this is to define the \c EIGEN2_SUPPORT preprocessor token \b before including any Eigen header (typically it should be set in your project options).
+
+A more powerful, \em staged migration path is also provided, which may be useful to migrate larger projects from Eigen2 to Eigen3. This is explained in the \ref Eigen2SupportModes "Eigen 2 support modes" page. 
+
+\section Using The USING_PART_OF_NAMESPACE_EIGEN macro
+
+The USING_PART_OF_NAMESPACE_EIGEN macro has been removed. In Eigen 3, just do:
+\code
+using namespace Eigen;
+\endcode
+
+\section ComplexDot Dot products over complex numbers
+
+This is the single trickiest change between Eigen 2 and Eigen 3. It only affects code using \c std::complex numbers as scalar type.
+
+Eigen 2's dot product was linear in the first variable. Eigen 3's dot product is linear in the second variable. In other words, the Eigen 2 code \code x.dot(y) \endcode is equivalent to the Eigen 3 code \code y.dot(x) \endcode In yet other words, dot products are complex-conjugated in Eigen 3 compared to Eigen 2. The switch to the new convention was commanded by common usage, especially with the notation \f$ x^Ty \f$ for dot products of column-vectors.
+
+\section VectorBlocks Vector blocks
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th></tr>
+<tr><td>\code
+vector.start(length)
+vector.start<length>()
+vector.end(length)
+vector.end<length>()
+\endcode</td><td>\code
+vector.head(length)
+vector.head<length>()
+vector.tail(length)
+vector.tail<length>()
+\endcode</td></tr>
+</table>
+
+
+\section Corners Matrix Corners
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th></tr>
+<tr><td>\code
+matrix.corner(TopLeft,r,c)
+matrix.corner(TopRight,r,c)
+matrix.corner(BottomLeft,r,c)
+matrix.corner(BottomRight,r,c)
+matrix.corner<r,c>(TopLeft)
+matrix.corner<r,c>(TopRight)
+matrix.corner<r,c>(BottomLeft)
+matrix.corner<r,c>(BottomRight)
+\endcode</td><td>\code
+matrix.topLeftCorner(r,c)
+matrix.topRightCorner(r,c)
+matrix.bottomLeftCorner(r,c)
+matrix.bottomRightCorner(r,c)
+matrix.topLeftCorner<r,c>()
+matrix.topRightCorner<r,c>()
+matrix.bottomLeftCorner<r,c>()
+matrix.bottomRightCorner<r,c>()
+\endcode</td>
+</tr>
+</table>
+
+Notice that Eigen3 also provides these new convenience methods: topRows(), bottomRows(), leftCols(), rightCols(). See class DenseBase.
+
+\section CoefficientWiseOperations Coefficient wise operations
+
+In Eigen2, coefficient-wise operations which have no proper mathematical definition (such as the coefficient-wise product)
+were achieved using the .cwise() prefix, e.g.:
+\code a.cwise() * b \endcode
+In Eigen3 this .cwise() prefix has been superseded by a new kind of matrix type called
+Array for which all operations are performed coefficient wise. You can easily view a matrix as an array and vice versa using
+the MatrixBase::array() and ArrayBase::matrix() functions respectively. Here is an example:
+\code
+Vector4f a, b, c;
+c = a.array() * b.array();
+\endcode
+Note that the .array() function is not a synonym for the deprecated .cwise() prefix.
+While the .cwise() prefix changed the behavior of the following operator only, the array() function performs
+a permanent conversion to the array world. Therefore, for binary operations such as the coefficient wise product,
+both sides must be converted to an \em array as in the above example. On the other hand, when you
+concatenate multiple coefficient wise operations you only have to do the conversion once, e.g.:
+\code
+Vector4f a, b, c;
+c = a.array().abs().pow(3) * b.array().abs().sin();
+\endcode
+With Eigen2 you would have written:
+\code
+c = (a.cwise().abs().cwise().pow(3)).cwise() * (b.cwise().abs().cwise().sin());
+\endcode
+
+\section PartAndExtract Triangular and self-adjoint matrices
+
+In Eigen 2 you had to play with the part, extract, and marked functions to deal with triangular and selfadjoint matrices. In Eigen 3, all these functions have been removed in favor of the concept of \em views:
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th></tr>
+<tr><td>\code
+A.part<UpperTriangular>();
+A.part<StrictlyLowerTriangular>(); \endcode</td>
+<td>\code
+A.triangularView<Upper>()
+A.triangularView<StrictlyLower>()\endcode</td></tr>
+<tr><td>\code
+A.extract<UpperTriangular>();
+A.extract<StrictlyLowerTriangular>();\endcode</td>
+<td>\code
+A.triangularView<Upper>()
+A.triangularView<StrictlyLower>()\endcode</td></tr>
+<tr><td>\code
+A.marked<UpperTriangular>();
+A.marked<StrictlyLowerTriangular>();\endcode</td>
+<td>\code
+A.triangularView<Upper>()
+A.triangularView<StrictlyLower>()\endcode</td></tr>
+<tr><td colspan="2"></td></tr>
+<tr><td>\code
+A.part<SelfAdjoint|UpperTriangular>();
+A.extract<SelfAdjoint|LowerTriangular>();\endcode</td>
+<td>\code
+A.selfadjointView<Upper>()
+A.selfadjointView<Lower>()\endcode</td></tr>
+<tr><td colspan="2"></td></tr>
+<tr><td>\code
+UpperTriangular
+LowerTriangular
+UnitUpperTriangular
+UnitLowerTriangular
+StrictlyUpperTriangular
+StrictlyLowerTriangular
+\endcode</td><td>\code
+Upper
+Lower
+UnitUpper
+UnitLower
+StrictlyUpper
+StrictlyLower
+\endcode</td>
+</tr>
+</table>
+
+\sa class TriangularView, class SelfAdjointView
+
+\section TriangularSolveInPlace Triangular in-place solving
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th></tr>
+<tr><td>\code A.triangularSolveInPlace<XxxTriangular>(Y);\endcode</td><td>\code A.triangularView<Xxx>().solveInPlace(Y);\endcode</td></tr>
+</table>
+
+
+\section Decompositions Matrix decompositions
+
+Some of Eigen 2's matrix decompositions have been renamed in Eigen 3, while some others have been removed and are replaced by other decompositions in Eigen 3.
+
+<table class="manual">
+  <tr>
+    <th>Eigen 2</th>
+    <th>Eigen 3</th>
+    <th>Notes</th>
+  </tr>
+  <tr>
+    <td>LU</td>
+    <td>FullPivLU</td>
+    <td class="alt">See also the new PartialPivLU, it's much faster</td>
+  </tr>
+  <tr>
+    <td>QR</td>
+    <td>HouseholderQR</td>
+    <td class="alt">See also the new ColPivHouseholderQR, it's more reliable</td>
+  </tr>
+  <tr>
+    <td>SVD</td>
+    <td>JacobiSVD</td>
+    <td class="alt">We currently don't have a bidiagonalizing SVD; of course this is planned.</td>
+  </tr>
+  <tr>
+    <td>EigenSolver and friends</td>
+    <td>\code #include<Eigen/Eigenvalues> \endcode </td>
+    <td class="alt">Moved to separate module</td>
+  </tr>
+</table>
+
+\section LinearSolvers Linear solvers
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th><th>Notes</th></tr>
+<tr><td>\code A.lu();\endcode</td>
+<td>\code A.fullPivLu();\endcode</td>
+<td class="alt">Now A.lu() returns a PartialPivLU</td></tr>
+<tr><td>\code A.lu().solve(B,&X);\endcode</td>
+<td>\code X = A.lu().solve(B);
+ X = A.fullPivLu().solve(B);\endcode</td>
+<td class="alt">Returning by value is fully optimized</td></tr>
+<tr><td>\code A.llt().solve(B,&X);\endcode</td>
+<td>\code X = A.llt().solve(B);
+ X = A.selfadjointView<Lower>().llt().solve(B);
+ X = A.selfadjointView<Upper>().llt().solve(B);\endcode</td>
+<td class="alt">Returning by value is fully optimized and \n
+the selfadjointView API allows you to select the \n
+triangular part to work on (default is lower part)</td></tr>
+<tr><td>\code A.llt().solveInPlace(B);\endcode</td>
+<td>\code B = A.llt().solve(B);
+ B = A.selfadjointView<Lower>().llt().solve(B);
+ B = A.selfadjointView<Upper>().llt().solve(B);\endcode</td>
+<td class="alt">In place solving</td></tr>
+<tr><td>\code A.ldlt().solve(B,&X);\endcode</td>
+<td>\code X = A.ldlt().solve(B);
+ X = A.selfadjointView<Lower>().ldlt().solve(B);
+ X = A.selfadjointView<Upper>().ldlt().solve(B);\endcode</td>
+<td class="alt">Returning by value is fully optimized and \n
+the selfadjointView API allows you to select the \n
+triangular part to work on</td></tr>
+</table>
+
+\section GeometryModule Changes in the Geometry module
+
+The Geometry module is the one that changed the most. If you rely heavily on it, it's probably a good idea to use the \ref Eigen2SupportModes "Eigen 2 support modes" to perform your migration.
+
+\section Transform The Transform class
+
+In Eigen 2, the Transform class didn't really know whether it was a projective or an affine transformation. In Eigen 3, it takes a new \a Mode template parameter, which indicates whether it is a \a Projective or an \a Affine transform. There is no default value.
+
+The Transform3f (etc) typedefs are no more. In Eigen 3, the Transform typedefs explicitly refer to the \a Projective and \a Affine modes:
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th><th>Notes</th></tr>
+<tr>
+  <td> Transform3f </td>
+  <td> Affine3f or Projective3f </td>
+  <td> Of course 3f is just an example here </td>
+</tr>
+</table>
+
+
+\section LazyVsNoalias Lazy evaluation and noalias
+
+In Eigen, all operations are performed lazily, except for matrix products, which are always evaluated into a temporary by default.
+In Eigen2, lazy evaluation could be enforced by tagging a product using the .lazy() function. However, in complex expressions it was not
+easy to determine where to put the lazy() function. In Eigen3, the lazy() feature has been superseded by the MatrixBase::noalias() function
+which can be used on the left hand side of an assignment when no aliasing can occur. Here is an example:
+\code
+MatrixXf a, b, c;
+...
+c.noalias() += 2 * a.transpose() * b;
+\endcode
+However, the noalias mechanism does not cover all the features of the old .lazy(). Indeed, in some extremely rare cases,
+it might be useful to explicitly request a lazy product, i.e., a product which will be evaluated one coefficient at a time, on demand,
+just like any other expression. To this end you can use the MatrixBase::lazyProduct() function, but we strongly discourage you from
+using it unless you are sure of what you are doing, i.e., you have rigorously measured a speed improvement.
+
+\section AlignMacros Alignment-related macros
+
+The EIGEN_ALIGN_128 macro has been renamed to EIGEN_ALIGN16. Don't be surprised, it's just that we switched to counting in bytes ;-)
+
+The EIGEN_DONT_ALIGN option still exists in Eigen 3, but it has a new cousin: EIGEN_DONT_ALIGN_STATICALLY. It allows you to get rid of all static alignment issues while keeping the alignment of dynamic-size heap-allocated arrays, thus preserving vectorization for dynamic-size objects.
+
+\section AlignedMap Aligned Map objects
+
+A common issue with Eigen 2 was that when mapping an array with Map, there was no way to tell Eigen that your array was aligned. There was a ForceAligned option but it didn't mean that; it was just confusing and has been removed.
+
+New in Eigen3 is the #Aligned option. See the documentation of class Map. Use it like this:
+\code
+Map<Vector4f, Aligned> myMappedVector(some_aligned_array);
+\endcode
+There are also related convenience static methods, which are actually the preferred way as they take care of details such as constness:
+\code
+result = Vector4f::MapAligned(some_aligned_array);
+\endcode
+
+\section StdContainers STL Containers
+
+In Eigen2, #include<Eigen/StdVector> tweaked std::vector to automatically align elements. The problem was that this was quite invasive. In Eigen3, we only override the standard behavior if you use Eigen::aligned_allocator<T> as your allocator type. So, for example, if you use std::vector<Matrix4f>, you need to make the following change (note that aligned_allocator is in namespace Eigen):
+
+<table class="manual">
+<tr><th>Eigen 2</th><th>Eigen 3</th></tr>
+<tr>
+  <td> \code std::vector<Matrix4f> \endcode </td>
+  <td> \code std::vector<Matrix4f, aligned_allocator<Matrix4f> > \endcode </td>
+</tr>
+</table>
+
+\section eiPrefix Internal ei_ prefix
+
+In Eigen2, global internal functions and structures were prefixed by \c ei_. In Eigen3, they all have been moved into the more explicit \c internal namespace. So, e.g., \c ei_sqrt(x) now becomes \c internal::sqrt(x). Of course it is not recommended to rely on Eigen's internal features.
+
+
+
+*/
+
+}
diff --git a/doc/A10_Eigen2SupportModes.dox b/doc/A10_Eigen2SupportModes.dox
new file mode 100644
index 0000000..c20b64e
--- /dev/null
+++ b/doc/A10_Eigen2SupportModes.dox
@@ -0,0 +1,101 @@
+namespace Eigen {
+
+/** \page Eigen2SupportModes Eigen 2 support modes
+
+This page documents the Eigen2 support modes, a powerful tool to help migrating your project from Eigen 2 to Eigen 3.
+Don't miss our page on \ref Eigen2ToEigen3 "API changes" between Eigen 2 and Eigen 3.
+
+\b Table \b of \b contents
+  - \ref EIGEN2_SUPPORT_Macro
+  - \ref StagedMigrationPathOverview
+  - \ref Stage10
+  - \ref Stage20
+  - \ref Stage30
+  - \ref Stage40
+  - \ref FinallyDropAllEigen2Support
+  - \ref ABICompatibility
+
+\section EIGEN2_SUPPORT_Macro The quick way: define EIGEN2_SUPPORT
+
+By defining EIGEN2_SUPPORT before including any Eigen 3 header, you get back a large part of the Eigen 2 API, while keeping the Eigen 3 API and ABI unchanged.
+
+This defaults to the \ref Stage30 "stage 30" described below.
+
+The rest of this page describes an optional, more powerful \em staged migration path.
+
+\section StagedMigrationPathOverview Overview of the staged migration path
+
+The primary reason why EIGEN2_SUPPORT alone may not be enough to migrate a large project from Eigen 2 to Eigen 3 is that some of the Eigen 2 API is inherently incompatible with the Eigen 3 API. This happens when the same identifier is used in Eigen 2 and in Eigen 3 with different meanings. To help migrate projects that rely on such API, we provide a staged migration path allowing you to perform the migration \em incrementally.
+
+It goes as follows:
+\li Step 0: start with a project using Eigen 2.
+\li Step 1: build your project against Eigen 3 with \ref Stage10 "Eigen 2 support stage 10". This mode enables maximum compatibility with the Eigen 2 API, with just a few exceptions.
+\li Step 2: build your project against Eigen 3 with \ref Stage20 "Eigen 2 support stage 20". This mode forces you to add eigen2_ prefixes to the Eigen2 identifiers that conflict with Eigen 3 API.
+\li Step 3: build your project against Eigen 3 with \ref Stage30 "Eigen 2 support stage 30". This mode enables the full Eigen 3 API.
+\li Step 4: build your project against Eigen 3 with \ref Stage40 "Eigen 2 support stage 40". This mode enables the full Eigen 3 strictness on matters such as const-correctness, where Eigen 2 was looser.
+\li Step 5: build your project against Eigen 3 without any Eigen 2 support mode.
+
+\section Stage10 Stage 10: define EIGEN2_SUPPORT_STAGE10_FULL_EIGEN2_API
+
+Enable this mode by defining the EIGEN2_SUPPORT_STAGE10_FULL_EIGEN2_API preprocessor macro before including any Eigen 3 header.
+
+This mode maximizes support for the Eigen 2 API. As a result, it does not offer the full Eigen 3 API. Also, it doesn't offer quite 100% of the Eigen 2 API.
+
+The part of the Eigen 3 API that is not present in this mode is Eigen 3's Geometry module. Indeed, this mode completely replaces it with a copy of Eigen 2's Geometry module.
+
+The parts of the API that are still not 100% Eigen 2 compatible in this mode are:
+\li Dot products over complex numbers. Eigen 2's dot product was linear in the first variable. Eigen 3's dot product is linear in the second variable. In other words, the Eigen 2 code \code x.dot(y) \endcode is equivalent to the Eigen 3 code \code y.dot(x) \endcode In yet other words, dot products are complex-conjugated in Eigen 3 compared to Eigen 2. The switch to the new convention was commanded by common usage, especially with the notation \f$ x^Ty \f$ for dot products of column-vectors.
+\li The Sparse module.
+\li Certain fine details of linear algebraic decompositions. For example, LDLT decomposition is now pivoting in Eigen 3 whereas it wasn't in Eigen 2, so code that was relying on its underlying matrix structure will break.
+\li Usage of Eigen types in STL containers, \ref Eigen2ToEigen3 "as explained on this page".
+
+\section Stage20 Stage 20: define EIGEN2_SUPPORT_STAGE20_RESOLVE_API_CONFLICTS
+
+Enable this mode by defining the EIGEN2_SUPPORT_STAGE20_RESOLVE_API_CONFLICTS preprocessor macro before including any Eigen 3 header.
+
+This mode removes the Eigen 2 API that is directly conflicting with Eigen 3 API. Instead, these bits of Eigen 2 API remain available with eigen2_ prefixes. The main examples of such API are:
+\li the whole Geometry module. For example, replace \c Quaternion by \c eigen2_Quaternion, replace \c Transform3f by \c eigen2_Transform3f, etc.
+\li the lu() method to obtain an LU decomposition. Replace it by eigen2_lu().
+
+There is also one more eigen2_-prefixed identifier that you should know about, even though its use is not checked at compile time by this mode: the dot() method. As was discussed above, over complex numbers, its meaning is different between Eigen 2 and Eigen 3. You can use eigen2_dot() to get the Eigen 2 behavior.
+
+\section Stage30 Stage 30: define EIGEN2_SUPPORT_STAGE30_FULL_EIGEN3_API
+
+Enable this mode by defining the EIGEN2_SUPPORT_STAGE30_FULL_EIGEN3_API preprocessor macro before including any Eigen 3 header. Also, this mode is what you get by default when you just define EIGEN2_SUPPORT.
+
+This mode gives you the full unaltered Eigen 3 API, while still keeping as much support as possible for the Eigen 2 API.
+
+The eigen2_-prefixed identifiers are still available, but at this stage you should now replace them by Eigen 3 identifiers. Have a look at our page on \ref Eigen2ToEigen3 "API changes" between Eigen 2 and Eigen 3.
+
+\section Stage40 Stage 40: define EIGEN2_SUPPORT_STAGE40_FULL_EIGEN3_STRICTNESS
+
+Enable this mode by defining the EIGEN2_SUPPORT_STAGE40_FULL_EIGEN3_STRICTNESS preprocessor macro before including any Eigen 3 header.
+
+This mode tightens the last bits of strictness, especially const-correctness, that had to be loosened to support what Eigen 2 allowed. For example, this code compiled in Eigen 2:
+\code
+const float array[4];
+x = Map<Vector4f>(array);
+\endcode
+That allowed circumventing constness. This is no longer allowed in Eigen 3. If you have to map const data in Eigen 3, map it as a const-qualified type. However, rather than explicitly constructing Map objects, we strongly encourage you to use the static Map methods instead, as they take care of all of this for you:
+\code
+const float array[4];
+x = Vector4f::Map(array);
+\endcode
+This lets Eigen do the right thing for you and works equally well in Eigen 2 and in Eigen 3.
+
+\section FinallyDropAllEigen2Support Finally drop all Eigen 2 support
+
+Stage 40 is the first stage where it's "comfortable" to stay for a little while, since it preserves 100% Eigen 3 compatibility. However, we still encourage you to complete your migration as quickly as possible. While we do run the Eigen 2 test suite against Eigen 3's stage 10 support mode, we can't guarantee the same level of support and quality assurance for Eigen 2 support as we do for Eigen 3 itself, especially not in the long term. \ref Eigen2ToEigen3 "This page" describes a large part of the changes that you may need to perform.
+
+\section ABICompatibility What about ABI compatibility?
+
+It goes as follows:
+\li Stage 10 already is ABI compatible with Eigen 3 for the basic (Matrix, Array, SparseMatrix...) types. However, since this stage uses a copy of Eigen 2's Geometry module instead of Eigen 3's own Geometry module, the ABI in the Geometry module is not Eigen 3 compatible.
+\li Stage 20 removes the Eigen 3-incompatible Eigen 2 Geometry module (it remains available with eigen2_ prefix). So at this stage, all the identifiers that exist in Eigen 3 have the Eigen 3 ABI (and API).
+\li Stage 30 introduces the remaining Eigen 3 identifiers. So at this stage, you have the full Eigen 3 ABI.
+\li Stage 40 is no different than Stage 30 in these matters.
+
+
+*/
+
+}
diff --git a/doc/AsciiQuickReference.txt b/doc/AsciiQuickReference.txt
new file mode 100644
index 0000000..d2e973f
--- /dev/null
+++ b/doc/AsciiQuickReference.txt
@@ -0,0 +1,170 @@
+// A simple quickref for Eigen. Add anything that's missing.
+// Main author: Keir Mierle
+
+#include <Eigen/Core>
+#include <Eigen/Dense>
+
+Matrix<double, 3, 3> A;               // Fixed rows and cols. Same as Matrix3d.
+Matrix<double, 3, Dynamic> B;         // Fixed rows, dynamic cols.
+Matrix<double, Dynamic, Dynamic> C;   // Full dynamic. Same as MatrixXd.
+Matrix<double, 3, 3, RowMajor> E;     // Row major; default is column-major.
+Matrix3f P, Q, R;                     // 3x3 float matrix.
+Vector3f x, y, z;                     // 3x1 float matrix.
+RowVector3f a, b, c;                  // 1x3 float matrix.
+double s;                            
+
+// Basic usage
+// Eigen          // Matlab           // comments
+x.size()          // length(x)        // vector size
+C.rows()          // size(C,1)        // number of rows
+C.cols()          // size(C,2)        // number of columns
+x(i)              // x(i+1)           // Matlab is 1-based
+C(i,j)            // C(i+1,j+1)       //
+
+A.resize(4, 4);   // Runtime error if assertions are on.
+B.resize(4, 9);   // Runtime error if assertions are on.
+A.resize(3, 3);   // Ok; size didn't change.
+B.resize(3, 9);   // Ok; only dynamic cols changed.
+                  
+A << 1, 2, 3,     // Initialize A. The elements can also be
+     4, 5, 6,     // matrices, which are stacked along cols
+     7, 8, 9;     // and then the rows are stacked.
+B << A, A, A;     // B is three horizontally stacked A's.
+A.fill(10);       // Fill A with all 10's.
+A.setRandom();    // Fill A with uniform random numbers in (-1, 1).
+A.setIdentity();  // Fill A with the identity.
+
+// Matrix slicing and blocks. All expressions listed here are read/write.
+// Templated size versions are faster. Note that Matlab is 1-based (a size N
+// vector is x(1)...x(N)).
+// Eigen                           // Matlab
+x.head(n)                          // x(1:n)
+x.head<n>()                        // x(1:n)
+x.tail(n)                          // x(end - n + 1 : end)
+x.tail<n>()                        // x(end - n + 1 : end)
+x.segment(i, n)                    // x(i+1 : i+n)
+x.segment<n>(i)                    // x(i+1 : i+n)
+P.block(i, j, rows, cols)          // P(i+1 : i+rows, j+1 : j+cols)
+P.block<rows, cols>(i, j)          // P(i+1 : i+rows, j+1 : j+cols)
+P.topLeftCorner(rows, cols)        // P(1:rows, 1:cols)
+P.topRightCorner(rows, cols)       // [m n]=size(P); P(1:rows, n-cols+1:n)
+P.bottomLeftCorner(rows, cols)     // [m n]=size(P); P(m-rows+1:m, 1:cols)
+P.bottomRightCorner(rows, cols)    // [m n]=size(P); P(m-rows+1:m, n-cols+1:n)
+P.topLeftCorner<rows,cols>()       // P(1:rows, 1:cols)
+P.topRightCorner<rows,cols>()      // [m n]=size(P); P(1:rows, n-cols+1:n)
+P.bottomLeftCorner<rows,cols>()    // [m n]=size(P); P(m-rows+1:m, 1:cols)
+P.bottomRightCorner<rows,cols>()   // [m n]=size(P); P(m-rows+1:m, n-cols+1:n)
+
+// Of particular note is Eigen's swap function which is highly optimized.
+// Eigen                           // Matlab
+R.row(i) = P.col(j);               // R(i, :) = P(:, j)'
+R.col(j1).swap(R.col(j2));         // R(:, [j1 j2]) = R(:, [j2, j1])
+
+// Views, transpose, etc; all read-write except for .adjoint().
+// Eigen                           // Matlab
+R.adjoint()                        // R'
+R.transpose()                      // R.' or conj(R')
+R.diagonal()                       // diag(R)
+x.asDiagonal()                     // diag(x)
+
+// All the same as Matlab, but matlab doesn't have *= style operators.
+// Matrix-vector.  Matrix-matrix.   Matrix-scalar.
+y  = M*x;          R  = P*Q;        R  = P*s;
+a  = b*M;          R  = P - Q;      R  = s*P;
+a *= M;            R  = P + Q;      R  = P/s;
+                   R *= Q;          R  = s*P;
+                   R += Q;          R *= s;
+                   R -= Q;          R /= s;
+
+// Vectorized operations on each element independently
+// Eigen                  // Matlab
+R = P.cwiseProduct(Q);    // R = P .* Q
+R = P.array() * Q.array();// R = P .* Q
+R = P.cwiseQuotient(Q);   // R = P ./ Q
+R = P.array() / Q.array();// R = P ./ Q
+R = P.array() + s;        // R = P + s
+R = P.array() - s;        // R = P - s
+R.array() += s;           // R = R + s
+R.array() -= s;           // R = R - s
+R.array() < Q.array();    // R < Q
+R.array() <= Q.array();   // R <= Q
+R.cwiseInverse();         // 1 ./ R
+R.array().inverse();      // 1 ./ R
+R.array().sin()           // sin(R)
+R.array().cos()           // cos(R)
+R.array().pow(s)          // R .^ s
+R.array().square()        // R .^ 2
+R.array().cube()          // R .^ 3
+R.cwiseSqrt()             // sqrt(R)
+R.array().sqrt()          // sqrt(R)
+R.array().exp()           // exp(R)
+R.array().log()           // log(R)
+R.cwiseMax(P)             // max(R, P)
+R.array().max(P.array())  // max(R, P)
+R.cwiseMin(P)             // min(R, P)
+R.array().min(P.array())  // min(R, P)
+R.cwiseAbs()              // abs(R)
+R.array().abs()           // abs(R)
+R.cwiseAbs2()             // abs(R).^2
+R.array().abs2()          // abs(R).^2
+(R.array() < s).select(P,Q);  // (R < s ? P : Q)
+
+// Reductions.
+int r, c;
+// Eigen                  // Matlab
+R.minCoeff()              // min(R(:))
+R.maxCoeff()              // max(R(:))
+s = R.minCoeff(&r, &c)    // [aa, bb] = min(R); [cc, dd] = min(aa);
+                          // r = bb(dd); c = dd; s = cc
+s = R.maxCoeff(&r, &c)    // [aa, bb] = max(R); [cc, dd] = max(aa);
+                          // row = bb(dd); col = dd; s = cc
+R.sum()                   // sum(R(:))
+R.colwise().sum()         // sum(R)
+R.rowwise().sum()         // sum(R, 2) or sum(R')'
+R.prod()                  // prod(R(:))
+R.colwise().prod()        // prod(R)
+R.rowwise().prod()        // prod(R, 2) or prod(R')'
+R.trace()                 // trace(R)
+R.all()                   // all(R(:))
+R.colwise().all()         // all(R)
+R.rowwise().all()         // all(R, 2)
+R.any()                   // any(R(:))
+R.colwise().any()         // any(R)
+R.rowwise().any()         // any(R, 2)
+
+// Dot products, norms, etc.
+// Eigen                  // Matlab
+x.norm()                  // norm(x).    Note that norm(R) doesn't work in Eigen.
+x.squaredNorm()           // dot(x, x)   Note: this equivalence does not hold for complex vectors.
+x.dot(y)                  // dot(x, y)
+x.cross(y)                // cross(x, y) Requires #include <Eigen/Geometry>
+
+// Eigen can map existing memory into Eigen matrices.
+float array[3];
+Map<Vector3f>(array, 3).fill(10);
+int data[4] = {1, 2, 3, 4};
+Matrix2i mat2x2(data);              // copies data
+MatrixXi mat2x2b = Map<Matrix2i>(data);
+MatrixXi mat2x2c = Map<MatrixXi>(data, 2, 2);
+
+// Solve Ax = b. Result stored in x. Matlab: x = A \ b.
+x = A.ldlt().solve(b);            // A sym. p.s.d.    #include <Eigen/Cholesky>
+x = A.llt().solve(b);             // A sym. p.d.      #include <Eigen/Cholesky>
+x = A.lu().solve(b);              // Stable and fast. #include <Eigen/LU>
+x = A.householderQr().solve(b);   // No pivoting.     #include <Eigen/QR>
+x = A.jacobiSvd(ComputeThinU | ComputeThinV).solve(b);  // Stable, slowest. #include <Eigen/SVD>
+// .ldlt()         -> .matrixL() and .vectorD()
+// .llt()          -> .matrixL()
+// .lu()           -> .matrixLU()
+// .householderQr()-> .householderQ() and .matrixQR()
+// .jacobiSvd()    -> .matrixU(), .singularValues(), and .matrixV()
+
+// Eigenvalue problems
+// Eigen                          // Matlab
+A.eigenvalues();                  // eig(A);
+EigenSolver<Matrix3d> eig(A);     // [vec val] = eig(A)
+eig.eigenvalues();                // diag(val)
+eig.eigenvectors();               // vec
diff --git a/doc/B01_Experimental.dox b/doc/B01_Experimental.dox
new file mode 100644
index 0000000..6d8b90d
--- /dev/null
+++ b/doc/B01_Experimental.dox
@@ -0,0 +1,55 @@
+namespace Eigen {
+
+/** \page Experimental Experimental parts of Eigen
+
+\b Table \b of \b contents
+  - \ref summary
+  - \ref modules
+  - \ref core
+
+\section summary Summary
+
+With the 2.0 release, Eigen's API is, to a large extent, stable. However, we wish to retain the freedom to make API incompatible changes. To that effect, we call many parts of Eigen "experimental" which means that they are not subject to API stability guarantee.
+
+Our goal is that for the 2.1 release (expected in July 2009) most of these parts become API-stable too.
+
+We are aware that API stability is a major concern for our users. That's why it's a priority for us to reach it, but at the same time we're being serious about not calling Eigen API-stable too early.
+
+Experimental features may at any time:
+\li be removed;
+\li be subject to an API incompatible change;
+\li introduce API or ABI incompatible changes in your own code if you let them affect your API or ABI.
+
+\section modules Experimental modules
+
+The following modules are considered entirely experimental, and we make no firm API stability guarantee about them for the time being:
+\li SVD
+\li QR
+\li Cholesky
+\li Sparse
+\li Geometry (this one should be mostly stable, but it's a little too early to make a formal guarantee)
+
+\section core Experimental parts of the Core module
+
+In the Core module, the only classes subject to the ABI stability guarantee (meaning that you can use them for data members in your public ABI) are:
+\li Matrix
+\li Map
+
+All other classes offer no ABI guarantee, e.g. the layout of their data can be changed.
+
+The only classes subject to (even partial) API stability guarantee (meaning that you can safely construct and use objects) are:
+\li MatrixBase : partial API stability (see below)
+\li Matrix : full API stability (except for experimental stuff inherited from MatrixBase)
+\li Map : full API stability (except for experimental stuff inherited from MatrixBase)
+
+All other classes offer no direct API guarantee, e.g. their methods can be changed; however notice that most classes inherit MatrixBase and that this is where most of their API comes from -- so in practice most of the API is stable.
+
+A few MatrixBase methods are considered experimental, hence not part of any API stability guarantee:
+\li all methods documented as internal
+\li all methods hidden in the Doxygen documentation
+\li all methods marked as experimental
+\li all methods defined in experimental modules
+
+*/
+
+}
diff --git a/doc/C00_QuickStartGuide.dox b/doc/C00_QuickStartGuide.dox
new file mode 100644
index 0000000..8534cb0
--- /dev/null
+++ b/doc/C00_QuickStartGuide.dox
@@ -0,0 +1,99 @@
+namespace Eigen {
+
+/** \page GettingStarted Getting started
+    \ingroup Tutorial
+
+This is a very short guide on how to get started with Eigen. It has a dual purpose. It serves as a minimal introduction to the Eigen library for people who want to start coding as soon as possible. You can also read this page as the first part of the Tutorial, which explains the library in more detail; in this case you will continue with \ref TutorialMatrixClass.
+
+\section GettingStartedInstallation How to "install" Eigen?
+
+In order to use Eigen, you just need to download and extract Eigen's source code (see <a href="http://eigen.tuxfamily.org/index.php?title=Main_Page#Download">the wiki</a> for download instructions). In fact, the header files in the \c Eigen subdirectory are the only files required to compile programs using Eigen. The header files are the same for all platforms. It is not necessary to use CMake or install anything.
+
+
+\section GettingStartedFirstProgram A simple first program
+
+Here is a rather simple program to get you started.
+
+\include QuickStart_example.cpp
+
+We will explain the program after telling you how to compile it.
+
+
+\section GettingStartedCompiling Compiling and running your first program
+
+There is no library to link to. The only thing that you need to keep in mind when compiling the above program is that the compiler must be able to find the Eigen header files. The directory in which you placed Eigen's source code must be in the include path. With GCC you use the -I option to achieve this, so you can compile the program with a command like this:
+
+\code g++ -I /path/to/eigen/ my_program.cpp -o my_program \endcode
+
+On Linux or Mac OS X, another option is to symlink or copy the Eigen folder into /usr/local/include/. This way, you can compile the program with:
+
+\code g++ my_program.cpp -o my_program \endcode
+
+When you run the program, it produces the following output:
+
+\include QuickStart_example.out
+
+
+\section GettingStartedExplanation Explanation of the first program
+
+The Eigen header files define many types, but for simple applications it may be enough to use only the \c MatrixXd type. This represents a matrix of arbitrary size (hence the \c X in \c MatrixXd), in which every entry is a \c double (hence the \c d in \c MatrixXd). See the \ref QuickRef_Types "quick reference guide" for an overview of the different types you can use to represent a matrix.
+
+The \c Eigen/Dense header file defines all member functions for the MatrixXd type and related types (see also the \ref QuickRef_Headers "table of header files"). All classes and functions defined in this header file (and other Eigen header files) are in the \c Eigen namespace. 
+
+The first line of the \c main function declares a variable of type \c MatrixXd and specifies that it is a matrix with 2 rows and 2 columns (the entries are not initialized). The statement <tt>m(0,0) = 3</tt> sets the entry in the top-left corner to 3. You need to use round parentheses to refer to entries in the matrix. As usual in computer science, the first index is 0, as opposed to the convention in mathematics that the first index is 1.
+
+The following three statements set the other three entries. The final line outputs the matrix \c m to the standard output stream.
+
+
+\section GettingStartedExample2 Example 2: Matrices and vectors
+
+Here is another example, which combines matrices with vectors. Concentrate on the left-hand program for now; we will talk about the right-hand program later.
+
+<table class="manual">
+<tr><th>Size set at run time:</th><th>Size set at compile time:</th></tr>
+<tr><td>
+\include QuickStart_example2_dynamic.cpp
+</td>
+<td>
+\include QuickStart_example2_fixed.cpp
+</td></tr></table>
+
+The output is as follows:
+
+\include QuickStart_example2_dynamic.out
+
+
+\section GettingStartedExplanation2 Explanation of the second example
+
+The second example starts by declaring a 3-by-3 matrix \c m which is initialized using the \link DenseBase::Random(Index,Index) Random() \endlink method with random values between -1 and 1. The next line applies a linear mapping such that the values are between 10 and 110. The function call \link DenseBase::Constant(Index,Index,const Scalar&) MatrixXd::Constant\endlink(3,3,1.2) returns a 3-by-3 matrix expression having all coefficients equal to 1.2. The rest is standard arithmetic.
+
+The next line of the \c main function introduces a new type: \c VectorXd. This represents a (column) vector of arbitrary size. Here, the vector \c v is created to contain 3 coefficients which are left uninitialized. The second-to-last line uses the so-called comma-initializer, explained in \ref TutorialAdvancedInitialization, to set all coefficients of the vector \c v to be as follows:
+
+\f[
+v =
+\begin{bmatrix}
+  1 \\
+  2 \\
+  3
+\end{bmatrix}.
+\f]
+
+The final line of the program multiplies the matrix \c m with the vector \c v and outputs the result.
+
+Now look back at the second example program. We presented two versions of it. In the version in the left column, the matrix is of type \c MatrixXd which represents matrices of arbitrary size. The version in the right column is similar, except that the matrix is of type \c Matrix3d, which represents matrices of a fixed size (here 3-by-3). Because the type already encodes the size of the matrix, it is not necessary to specify the size in the constructor; compare <tt>MatrixXd m(3,3)</tt> with <tt>Matrix3d m</tt>. Similarly, we have \c VectorXd on the left (arbitrary size) versus \c Vector3d on the right (fixed size). Note that here the coefficients of vector \c v are directly set in the constructor, though the same syntax as in the left example could be used too.
+
+The use of fixed-size matrices and vectors has two advantages. The compiler emits better (faster) code because it knows the size of the matrices and vectors. Specifying the size in the type also allows for more rigorous checking at compile-time. For instance, the compiler will complain if you try to multiply a \c Matrix4d (a 4-by-4 matrix) with a \c Vector3d (a vector of size 3). However, the use of many types increases compilation time and the size of the executable. The size of the matrix may also not be known at compile-time. A rule of thumb is to use fixed-size matrices for size 4-by-4 and smaller.
+
+
+\section GettingStartedConclusion Where to go from here?
+
+It's worth taking the time to read the  \ref TutorialMatrixClass "long tutorial".
+
+However if you think you don't need it, you can directly use the classes documentation and our \ref QuickRefPage.
+
+\li \b Next: \ref TutorialMatrixClass
+
+*/
+
+}
+
diff --git a/doc/C01_TutorialMatrixClass.dox b/doc/C01_TutorialMatrixClass.dox
new file mode 100644
index 0000000..4860616
--- /dev/null
+++ b/doc/C01_TutorialMatrixClass.dox
@@ -0,0 +1,284 @@
+namespace Eigen {
+
+/** \page TutorialMatrixClass Tutorial page 1 - The %Matrix class
+
+\ingroup Tutorial
+
+\li \b Previous: \ref GettingStarted
+\li \b Next: \ref TutorialMatrixArithmetic
+
+We assume that you have already read the quick \link GettingStarted "getting started" \endlink tutorial.
+This page is the first one in a much longer multi-page tutorial.
+
+\b Table \b of \b contents
+  - \ref TutorialMatrixFirst3Params
+  - \ref TutorialMatrixVectors
+  - \ref TutorialMatrixDynamic
+  - \ref TutorialMatrixConstructors
+  - \ref TutorialMatrixCoeffAccessors
+  - \ref TutorialMatrixCommaInitializer
+  - \ref TutorialMatrixSizesResizing
+  - \ref TutorialMatrixAssignment
+  - \ref TutorialMatrixFixedVsDynamic
+  - \ref TutorialMatrixOptTemplParams
+  - \ref TutorialMatrixTypedefs
+
+In Eigen, all matrices and vectors are objects of the Matrix template class.
+Vectors are just a special case of matrices, with either 1 row or 1 column.
+
+\section TutorialMatrixFirst3Params The first three template parameters of Matrix
+
+The Matrix class takes six template parameters, but for now it's enough to
+learn about the first three. The three remaining parameters have default
+values, which for now we will leave untouched, and which we
+\ref TutorialMatrixOptTemplParams "discuss below".
+
+The three mandatory template parameters of Matrix are:
+\code
+Matrix<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime>
+\endcode
+\li \c Scalar is the scalar type, i.e. the type of the coefficients.
+    That is, if you want a matrix of floats, choose \c float here.
+    See \ref TopicScalarTypes "Scalar types" for a list of all supported
+    scalar types and for how to extend support to new types.
+\li \c RowsAtCompileTime and \c ColsAtCompileTime are the number of rows
+    and columns of the matrix as known at compile time (see 
+    \ref TutorialMatrixDynamic "below" for what to do if the number is not
+    known at compile time).
+
+We offer a lot of convenience typedefs to cover the usual cases. For example, \c Matrix4f is
+a 4x4 matrix of floats. Here is how it is defined by Eigen:
+\code
+typedef Matrix<float, 4, 4> Matrix4f;
+\endcode
+We discuss these convenience typedefs \ref TutorialMatrixTypedefs "below".
+
+\section TutorialMatrixVectors Vectors
+
+As mentioned above, in Eigen, vectors are just a special case of
+matrices, with either 1 row or 1 column. The case where they have 1 column is the most common;
+such vectors are called column-vectors, often abbreviated as just vectors. In the other case
+where they have 1 row, they are called row-vectors.
+
+For example, the convenience typedef \c Vector3f is a (column) vector of 3 floats. It is defined as follows by Eigen:
+\code
+typedef Matrix<float, 3, 1> Vector3f;
+\endcode
+We also offer convenience typedefs for row-vectors, for example:
+\code
+typedef Matrix<int, 1, 2> RowVector2i;
+\endcode
+
+\section TutorialMatrixDynamic The special value Dynamic
+
+Of course, Eigen is not limited to matrices whose dimensions are known at compile time.
+The \c RowsAtCompileTime and \c ColsAtCompileTime template parameters can take the special
+value \c Dynamic, which indicates that the size is unknown at compile time and must
+therefore be handled as a run-time variable. In Eigen terminology, such a size is referred to as a
+\em dynamic \em size; while a size that is known at compile time is called a
+\em fixed \em size. For example, the convenience typedef \c MatrixXd, meaning
+a matrix of doubles with dynamic size, is defined as follows:
+\code
+typedef Matrix<double, Dynamic, Dynamic> MatrixXd;
+\endcode
+And similarly, we define a self-explanatory typedef \c VectorXi as follows:
+\code
+typedef Matrix<int, Dynamic, 1> VectorXi;
+\endcode
+You can perfectly well have e.g. a fixed number of rows with a dynamic number of columns, as in:
+\code
+Matrix<float, 3, Dynamic>
+\endcode
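+
+A minimal sketch of such a mixed fixed/dynamic type in action (the sizes used here are arbitrary):
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  // 3 rows fixed at compile time, a dynamic number of columns.
+  Eigen::Matrix<float, 3, Eigen::Dynamic> m(3, 5);
+  assert(m.rows() == 3 && m.cols() == 5);
+  m.resize(3, 8);  // only the column count may actually change
+  assert(m.cols() == 8);
+  return 0;
+}
+```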
+
+\section TutorialMatrixConstructors Constructors
+
+A default constructor is always available, never performs any dynamic memory allocation, and never initializes the matrix coefficients. You can do:
+\code
+Matrix3f a;
+MatrixXf b;
+\endcode
+Here,
+\li \c a is a 3x3 matrix, with a static float[9] array of uninitialized coefficients,
+\li \c b is a dynamic-size matrix whose size is currently 0x0, and whose array of
+coefficients hasn't yet been allocated at all.
+
+Constructors taking sizes are also available. For matrices, the number of rows is always passed first.
+For vectors, just pass the vector size. They allocate the array of coefficients
+with the given size, but don't initialize the coefficients themselves:
+\code
+MatrixXf a(10,15);
+VectorXf b(30);
+\endcode
+Here,
+\li \c a is a 10x15 dynamic-size matrix, with allocated but currently uninitialized coefficients.
+\li \c b is a dynamic-size vector of size 30, with allocated but currently uninitialized coefficients.
+
+In order to offer a uniform API across fixed-size and dynamic-size matrices, it is legal to use these
+constructors on fixed-size matrices, even if passing the sizes is useless in this case. So this is legal:
+\code
+Matrix3f a(3,3);
+\endcode
+and is a no-operation.
+
+Finally, we also offer some constructors to initialize the coefficients of small fixed-size vectors up to size 4:
+\code
+Vector2d a(5.0, 6.0);
+Vector3d b(5.0, 6.0, 7.0);
+Vector4d c(5.0, 6.0, 7.0, 8.0);
+\endcode
+
+\section TutorialMatrixCoeffAccessors Coefficient accessors
+
+The primary coefficient accessors and mutators in Eigen are the overloaded parenthesis operators.
+For matrices, the row index is always passed first. For vectors, just pass one index.
+The numbering starts at 0. This example is self-explanatory:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_matrix_coefficient_accessors.cpp
+</td>
+<td>
+\verbinclude tut_matrix_coefficient_accessors.out
+</td></tr></table>
+
+Note that the syntax <tt> m(index) </tt>
+is not restricted to vectors; it is also available for general matrices, meaning index-based access
+in the array of coefficients. This, however, depends on the matrix's storage order. All Eigen matrices default to
+column-major storage order, but this can be changed to row-major, see \ref TopicStorageOrders "Storage orders".
+
+Operator[] is also overloaded for index-based access in vectors, but keep in mind that C++ doesn't allow operator[] to
+take more than one argument. We restrict operator[] to vectors, because an awkwardness in the C++ language
+would make <tt>matrix[i,j]</tt> compile to the same thing as <tt>matrix[j]</tt> (the comma operator evaluates \c i and discards it)!
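+
+To illustrate, here is a small sketch of index-based access, assuming the default column-major storage order:
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  Eigen::Matrix2f m;       // column-major by default
+  m(0,0) = 1; m(1,0) = 2;  // first column
+  m(0,1) = 3; m(1,1) = 4;  // second column
+  assert(m(2) == 3);       // linear index walks the storage: 1, 2, 3, 4
+
+  Eigen::Vector3f v(1, 2, 3);
+  assert(v[1] == 2);       // operator[] is available on vectors
+  return 0;
+}
+```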
+
+\section TutorialMatrixCommaInitializer Comma-initialization
+
+%Matrix and vector coefficients can be conveniently set using the so-called \em comma-initializer syntax.
+For now, it is enough to know this example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include Tutorial_commainit_01.cpp </td>
+<td>\verbinclude Tutorial_commainit_01.out </td>
+</tr></table>
+
+
+The right-hand side can also contain matrix expressions as discussed in \ref TutorialAdvancedInitialization "this page".
+
+\section TutorialMatrixSizesResizing Resizing
+
+The current size of a matrix can be retrieved by \link EigenBase::rows() rows()\endlink, \link EigenBase::cols() cols() \endlink and \link EigenBase::size() size()\endlink. These methods return the number of rows, the number of columns and the number of coefficients, respectively. Resizing a dynamic-size matrix is done by the \link PlainObjectBase::resize(Index,Index) resize() \endlink method.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include tut_matrix_resize.cpp </td>
+<td>\verbinclude tut_matrix_resize.out </td>
+</tr></table>
+
+The resize() method is a no-operation if the actual matrix size doesn't change; otherwise it is destructive: the values of the coefficients may change.
+If you want a conservative variant of resize() which does not change the coefficients, use \link PlainObjectBase::conservativeResize() conservativeResize()\endlink, see \ref TopicResizing "this page" for more details.
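+
+For instance, the difference between the two can be sketched as follows (the values are arbitrary):
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  Eigen::MatrixXd m(2, 2);
+  m << 1, 2,
+       3, 4;
+  m.conservativeResize(3, 3);  // the old 2x2 block is preserved
+  assert(m(0,0) == 1 && m(1,1) == 4);
+
+  m.resize(4, 4);              // destructive: coefficient values are unspecified
+  assert(m.rows() == 4 && m.cols() == 4);
+  return 0;
+}
+```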
+
+All these methods are still available on fixed-size matrices, for the sake of API uniformity. Of course, you can't actually
+resize a fixed-size matrix. Trying to change a fixed size to an actually different value will trigger an assertion failure;
+but the following code is legal:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include tut_matrix_resize_fixed_size.cpp </td>
+<td>\verbinclude tut_matrix_resize_fixed_size.out </td>
+</tr></table>
+
+
+\section TutorialMatrixAssignment Assignment and resizing
+
+Assignment is the action of copying a matrix into another, using \c operator=. Eigen resizes the matrix on the left-hand side automatically so that it matches the size of the matrix on the right-hand side. For example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include tut_matrix_assignment_resizing.cpp </td>
+<td>\verbinclude tut_matrix_assignment_resizing.out </td>
+</tr></table>
+
+Of course, if the left-hand side is of fixed size, resizing it is not allowed.
+
+If you do not want this automatic resizing to happen (for example for debugging purposes), you can disable it, see
+\ref TopicResizing "this page".
+
+
+\section TutorialMatrixFixedVsDynamic Fixed vs. Dynamic size
+
+When should one use fixed sizes (e.g. \c Matrix4f), and when should one prefer dynamic sizes (e.g. \c MatrixXf)?
+The simple answer is: use fixed
+sizes for very small sizes where you can, and use dynamic sizes for larger sizes or where you have to. For small sizes,
+especially for sizes smaller than (roughly) 16, using fixed sizes is hugely beneficial
+to performance, as it allows Eigen to avoid dynamic memory allocation and to unroll
+loops. Internally, a fixed-size Eigen matrix is just a plain static array, i.e. doing
+\code Matrix4f mymatrix; \endcode
+really amounts to just doing
+\code float mymatrix[16]; \endcode
+so this really has zero runtime cost. By contrast, the array of a dynamic-size matrix
+is always allocated on the heap, so doing
+\code MatrixXf mymatrix(rows,columns); \endcode
+amounts to doing
+\code float *mymatrix = new float[rows*columns]; \endcode
+and in addition to that, the MatrixXf object stores its number of rows and columns as
+member variables.
+
+The limitation of using fixed sizes, of course, is that this is only possible
+when you know the sizes at compile time. Also, for large enough sizes, say for sizes
+greater than (roughly) 32, the performance benefit of using fixed sizes becomes negligible.
+Worse, trying to create a very large matrix using fixed sizes could result in a stack overflow,
+since Eigen will try to allocate the array as a static array, which by default goes on the stack.
+Finally, depending on circumstances, Eigen can also be more aggressive trying to vectorize
+(use SIMD instructions) when dynamic sizes are used, see \ref TopicVectorization "Vectorization".
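+
+A rough way to see the storage difference (a sketch; exact object sizes can depend on alignment and on the index type):
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  // A fixed-size matrix stores its coefficients as a plain array inside the object.
+  assert(sizeof(Eigen::Matrix4f) == 16 * sizeof(float));
+
+  // A dynamic-size matrix only holds a pointer plus its dimensions,
+  // so the object itself has the same size regardless of the runtime size.
+  Eigen::MatrixXf small_m(2, 2), big_m(1000, 1000);
+  assert(sizeof(small_m) == sizeof(big_m));
+  return 0;
+}
+```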
+
+\section TutorialMatrixOptTemplParams Optional template parameters
+
+We mentioned at the beginning of this page that the Matrix class takes six template parameters,
+but so far we only discussed the first three. The remaining three parameters are optional. Here is
+the complete list of template parameters:
+\code
+Matrix<typename Scalar,
+       int RowsAtCompileTime,
+       int ColsAtCompileTime,
+       int Options = 0,
+       int MaxRowsAtCompileTime = RowsAtCompileTime,
+       int MaxColsAtCompileTime = ColsAtCompileTime>
+\endcode
+\li \c Options is a bit field. Here, we discuss only one bit: \c RowMajor. It specifies that the matrices
+      of this type use row-major storage order; by default, the storage order is column-major. See the page on
+      \ref TopicStorageOrders "storage orders". For example, this type means row-major 3x3 matrices:
+      \code
+      Matrix<float, 3, 3, RowMajor>
+      \endcode
+\li \c MaxRowsAtCompileTime and \c MaxColsAtCompileTime are useful when you want to specify that, even though
+      the exact sizes of your matrices are not known at compile time, a fixed upper bound is known at
+      compile time. The biggest reason why you might want to do that is to avoid dynamic memory allocation.
+      For example the following matrix type uses a static array of 12 floats, without dynamic memory allocation:
+      \code
+      Matrix<float, Dynamic, Dynamic, 0, 3, 4>
+      \endcode
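+
+Such a bounded-size matrix can still be resized at run time, as long as it stays within the compile-time bounds (a small sketch):
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  // Dynamic sizes, but bounded by 3x4: the 12 floats live inside the object,
+  // so no heap allocation is ever needed.
+  Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, 0, 3, 4> m(2, 3);
+  assert(m.rows() == 2 && m.cols() == 3);
+  m.resize(3, 4);  // fine: within the compile-time bounds
+  assert(m.rows() == 3 && m.cols() == 4);
+  return 0;
+}
+```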
+
+\section TutorialMatrixTypedefs Convenience typedefs
+
+Eigen defines the following Matrix typedefs:
+\li MatrixNt for Matrix<type, N, N>. For example, MatrixXi for Matrix<int, Dynamic, Dynamic>.
+\li VectorNt for Matrix<type, N, 1>. For example, Vector2f for Matrix<float, 2, 1>.
+\li RowVectorNt for Matrix<type, 1, N>. For example, RowVector3d for Matrix<double, 1, 3>.
+
+Where:
+\li N can be any one of \c 2, \c 3, \c 4, or \c X (meaning \c Dynamic).
+\li t can be any one of \c i (meaning int), \c f (meaning float), \c d (meaning double),
+      \c cf (meaning complex<float>), or \c cd (meaning complex<double>). The fact that typedefs are only
+    defined for these five types doesn't mean that they are the only supported scalar types. For example,
+    all standard integer types are supported, see \ref TopicScalarTypes "Scalar types".
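+
+These equivalences can be checked at compile time; a sketch, assuming a C++11 compiler for \c static_assert:
+
+```cpp
+#include <Eigen/Dense>
+#include <type_traits>
+
+int main()
+{
+  // The convenience typedefs are exactly the Matrix instantiations listed above.
+  static_assert(std::is_same<Eigen::MatrixXi,
+                             Eigen::Matrix<int, Eigen::Dynamic, Eigen::Dynamic> >::value,
+                "MatrixXi is Matrix<int, Dynamic, Dynamic>");
+  static_assert(std::is_same<Eigen::Vector2f,
+                             Eigen::Matrix<float, 2, 1> >::value,
+                "Vector2f is Matrix<float, 2, 1>");
+  static_assert(std::is_same<Eigen::RowVector3d,
+                             Eigen::Matrix<double, 1, 3> >::value,
+                "RowVector3d is Matrix<double, 1, 3>");
+  return 0;
+}
+```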
+
+\li \b Next: \ref TutorialMatrixArithmetic
+
+*/
+
+}
diff --git a/doc/C02_TutorialMatrixArithmetic.dox b/doc/C02_TutorialMatrixArithmetic.dox
new file mode 100644
index 0000000..b04821a
--- /dev/null
+++ b/doc/C02_TutorialMatrixArithmetic.dox
@@ -0,0 +1,229 @@
+namespace Eigen {
+
+/** \page TutorialMatrixArithmetic Tutorial page 2 - %Matrix and vector arithmetic
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialMatrixClass
+\li \b Next: \ref TutorialArrayClass
+
+This tutorial aims to provide an overview and some details on how to perform arithmetic
+between matrices, vectors and scalars with Eigen.
+
+\b Table \b of \b contents
+  - \ref TutorialArithmeticIntroduction
+  - \ref TutorialArithmeticAddSub
+  - \ref TutorialArithmeticScalarMulDiv
+  - \ref TutorialArithmeticMentionXprTemplates
+  - \ref TutorialArithmeticTranspose
+  - \ref TutorialArithmeticMatrixMul
+  - \ref TutorialArithmeticDotAndCross
+  - \ref TutorialArithmeticRedux
+  - \ref TutorialArithmeticValidity
+
+\section TutorialArithmeticIntroduction Introduction
+
+Eigen offers matrix/vector arithmetic operations either through overloads of common C++ arithmetic operators such as +, -, *,
+or through special methods such as dot(), cross(), etc.
+For the Matrix class (matrices and vectors), operators are only overloaded to support
+linear-algebraic operations. For example, \c matrix1 \c * \c matrix2 means matrix-matrix product,
+and \c vector \c + \c scalar is just not allowed. If you want to perform all kinds of array operations,
+not linear algebra, see the \ref TutorialArrayClass "next page".
+
+\section TutorialArithmeticAddSub Addition and subtraction
+
+The left-hand side and right-hand side must, of course, have the same number of rows and the same number of columns. They must
+also have the same \c Scalar type, as Eigen doesn't do automatic type promotion. The operators at hand here are:
+\li binary operator + as in \c a+b
+\li binary operator - as in \c a-b
+\li unary operator - as in \c -a
+\li compound operator += as in \c a+=b
+\li compound operator -= as in \c a-=b
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_add_sub.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_add_sub.out
+</td></tr></table>
+
+\section TutorialArithmeticScalarMulDiv Scalar multiplication and division
+
+Multiplication and division by a scalar is very simple too. The operators at hand here are:
+\li binary operator * as in \c matrix*scalar
+\li binary operator * as in \c scalar*matrix
+\li binary operator / as in \c matrix/scalar
+\li compound operator *= as in \c matrix*=scalar
+\li compound operator /= as in \c matrix/=scalar
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_scalar_mul_div.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_scalar_mul_div.out
+</td></tr></table>
+
+
+\section TutorialArithmeticMentionXprTemplates A note about expression templates
+
+This is an advanced topic that we explain on \ref TopicEigenExpressionTemplates "this page",
+but it is useful to just mention it now. In Eigen, arithmetic operators such as \c operator+ don't
+perform any computation by themselves, they just return an "expression object" describing the computation to be
+performed. The actual computation happens later, when the whole expression is evaluated, typically in \c operator=.
+While this might sound heavy, any modern optimizing compiler is able to optimize away that abstraction and
+the result is perfectly optimized code. For example, when you do:
+\code
+VectorXf a(50), b(50), c(50), d(50);
+...
+a = 3*b + 4*c + 5*d;
+\endcode
+Eigen compiles it to just one for loop, so that the arrays are traversed only once. Simplifying (e.g. ignoring
+SIMD optimizations), this loop looks like this:
+\code
+for(int i = 0; i < 50; ++i)
+  a[i] = 3*b[i] + 4*c[i] + 5*d[i];
+\endcode
+Thus, you should not be afraid of using relatively large arithmetic expressions with Eigen: it only gives Eigen
+more opportunities for optimization.
+
+\section TutorialArithmeticTranspose Transposition and conjugation
+
+The transpose \f$ a^T \f$, conjugate \f$ \bar{a} \f$, and adjoint (i.e., conjugate transpose) \f$ a^* \f$ of a matrix or vector \f$ a \f$ are obtained by the member functions \link DenseBase::transpose() transpose()\endlink, \link MatrixBase::conjugate() conjugate()\endlink, and \link MatrixBase::adjoint() adjoint()\endlink, respectively.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_transpose_conjugate.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_transpose_conjugate.out
+</td></tr></table>
+
+For real matrices, \c conjugate() is a no-operation, and so \c adjoint() is equivalent to \c transpose().
+
+As for basic arithmetic operators, \c transpose() and \c adjoint() simply return a proxy object without doing the actual transposition. If you do <tt>b = a.transpose()</tt>, then the transpose is evaluated at the same time as the result is written into \c b. However, there is a complication here. If you do <tt>a = a.transpose()</tt>, then Eigen starts writing the result into \c a before the evaluation of the transpose is finished. Therefore, the instruction <tt>a = a.transpose()</tt> does not replace \c a with its transpose, as one would expect:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_transpose_aliasing.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_transpose_aliasing.out
+</td></tr></table>
+This is the so-called \ref TopicAliasing "aliasing issue". In "debug mode", i.e., when \ref TopicAssertions "assertions" have not been disabled, such common pitfalls are automatically detected. 
+
+For \em in-place transposition, as for instance in <tt>a = a.transpose()</tt>, simply use the \link DenseBase::transposeInPlace() transposeInPlace()\endlink  function:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_transpose_inplace.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_transpose_inplace.out
+</td></tr></table>
+There is also the \link MatrixBase::adjointInPlace() adjointInPlace()\endlink function for complex matrices.
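+
+For instance, a small square example:
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  Eigen::Matrix2f a;
+  a << 1, 2,
+       3, 4;
+  a.transposeInPlace();  // safe, unlike a = a.transpose()
+  // a is now its transpose: [1 3; 2 4]
+  assert(a(0,1) == 3 && a(1,0) == 2);
+  return 0;
+}
+```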
+
+\section TutorialArithmeticMatrixMul Matrix-matrix and matrix-vector multiplication
+
+Matrix-matrix multiplication is again done with \c operator*. Since vectors are a special
+case of matrices, they are implicitly handled there too, so matrix-vector product is really just a special
+case of matrix-matrix product, and so is vector-vector outer product. Thus, all these cases are handled by just
+two operators:
+\li binary operator * as in \c a*b
+\li compound operator *= as in \c a*=b (this multiplies on the right: \c a*=b is equivalent to <tt>a = a*b</tt>)
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_matrix_mul.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_matrix_mul.out
+</td></tr></table>
+
+Note: if you read the above paragraph on expression templates and are worried that doing \c m=m*m might cause
+aliasing issues, be reassured for now: Eigen treats matrix multiplication as a special case and takes care of
+introducing a temporary here, so it will compile \c m=m*m as:
+\code
+tmp = m*m;
+m = tmp;
+\endcode
+If you know your matrix product can be safely evaluated into the destination matrix without aliasing issue, then you can use the \link MatrixBase::noalias() noalias()\endlink function to avoid the temporary, e.g.:
+\code
+c.noalias() += a * b;
+\endcode
+For more details on this topic, see the page on \ref TopicAliasing "aliasing".
+
+\b Note: for BLAS users worried about performance, expressions such as <tt>c.noalias() -= 2 * a.adjoint() * b;</tt> are fully optimized and trigger a single gemm-like function call.
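+
+Putting this together, a minimal runnable sketch of \c noalias() (the values are arbitrary):
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  Eigen::Matrix2d a, b, c, expected;
+  a << 1, 2,
+       3, 4;
+  b << 5, 6,
+       7, 8;
+  c.setZero();
+  expected = a * b;    // reference result, evaluated through a temporary
+  c.noalias() += a * b;  // no temporary: c does not alias a or b
+  assert(c.isApprox(expected));
+  assert(c(0,0) == 19);  // 1*5 + 2*7
+  return 0;
+}
+```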
+
+\section TutorialArithmeticDotAndCross Dot product and cross product
+
+For dot product and cross product, you need the \link MatrixBase::dot() dot()\endlink and \link MatrixBase::cross() cross()\endlink methods. Of course, the dot product can also be obtained as a 1x1 matrix as u.adjoint()*v.
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_dot_cross.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_dot_cross.out
+</td></tr></table>
+
+Remember that the cross product is only for vectors of size 3, while the dot product works for vectors of any size.
+When using complex numbers, Eigen's dot product is conjugate-linear in the first variable and linear in the
+second variable.
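+
+A small sketch of this conjugation convention:
+
+```cpp
+#include <Eigen/Dense>
+#include <complex>
+#include <cassert>
+
+int main()
+{
+  typedef std::complex<float> cf;
+  Eigen::Vector2cf u, v;
+  u << cf(0, 1), cf(0, 0);  // u = (i, 0)
+  v << cf(0, 1), cf(0, 0);  // v = (i, 0)
+  // dot() conjugates its first argument: u.dot(v) = conj(i)*i = 1
+  assert(u.dot(v) == cf(1, 0));
+  // The same value, obtained as a 1x1 matrix via u.adjoint()*v:
+  Eigen::Matrix<cf, 1, 1> s = u.adjoint() * v;
+  assert(s(0, 0) == cf(1, 0));
+  return 0;
+}
+```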
+
+\section TutorialArithmeticRedux Basic arithmetic reduction operations
+Eigen also provides some reduction operations to reduce a given matrix or vector to a single value such as the sum (computed by \link DenseBase::sum() sum()\endlink), product (\link DenseBase::prod() prod()\endlink), or the maximum (\link DenseBase::maxCoeff() maxCoeff()\endlink) and minimum (\link DenseBase::minCoeff() minCoeff()\endlink) of all its coefficients.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_redux_basic.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_redux_basic.out
+</td></tr></table>
+
+The \em trace of a matrix, as returned by the function \link MatrixBase::trace() trace()\endlink, is the sum of the diagonal coefficients and can also be computed just as efficiently using <tt>a.diagonal().sum()</tt>, as we will see later on.
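+
+For instance:
+
+```cpp
+#include <Eigen/Dense>
+#include <cassert>
+
+int main()
+{
+  Eigen::Matrix3d m;
+  m << 1, 2, 3,
+       4, 5, 6,
+       7, 8, 9;
+  assert(m.trace() == 15.0);                // 1 + 5 + 9
+  assert(m.trace() == m.diagonal().sum());  // equivalent formulation
+  return 0;
+}
+```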
+
+There also exist variants of the \c minCoeff and \c maxCoeff functions that additionally return the coordinates of the respective coefficient through pointer arguments:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_redux_minmax.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_redux_minmax.out
+</td></tr></table>
+
+
+\section TutorialArithmeticValidity Validity of operations
+Eigen checks the validity of the operations that you perform. When possible,
+it checks them at compile time, producing compilation errors. These error messages can be long and ugly,
+but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
+\code
+  Matrix3f m;
+  Vector4f v;
+  v = m*v;      // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
+\endcode
+
+Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time.
+Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off.
+
+\code
+  MatrixXf m(3,3);
+  VectorXf v(4);
+  v = m * v; // Run-time assertion failure here: "invalid matrix product"
+\endcode
+
+For more details on this topic, see \ref TopicAssertions "this page".
+
+\li \b Next: \ref TutorialArrayClass
+
+*/
+
+}
diff --git a/doc/C03_TutorialArrayClass.dox b/doc/C03_TutorialArrayClass.dox
new file mode 100644
index 0000000..a1d8d69
--- /dev/null
+++ b/doc/C03_TutorialArrayClass.dox
@@ -0,0 +1,205 @@
+namespace Eigen {
+
+/** \page TutorialArrayClass Tutorial page 3 - The %Array class and coefficient-wise operations
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialMatrixArithmetic
+\li \b Next: \ref TutorialBlockOperations
+
+This tutorial aims to provide an overview and explanations on how to use
+Eigen's Array class.
+
+\b Table \b of \b contents
+  - \ref TutorialArrayClassIntro
+  - \ref TutorialArrayClassTypes
+  - \ref TutorialArrayClassAccess
+  - \ref TutorialArrayClassAddSub
+  - \ref TutorialArrayClassMult
+  - \ref TutorialArrayClassCwiseOther
+  - \ref TutorialArrayClassConvert
+
+\section TutorialArrayClassIntro What is the Array class?
+
+The Array class provides general-purpose arrays, as opposed to the Matrix class which
+is intended for linear algebra. Furthermore, the Array class provides an easy way to
+perform coefficient-wise operations, which might not have a linear algebraic meaning,
+such as adding a constant to every coefficient in the array or multiplying two arrays coefficient-wise.
+
+
+\section TutorialArrayClassTypes Array types
+Array is a class template taking the same template parameters as Matrix.
+As with Matrix, the first three template parameters are mandatory:
+\code
+Array<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime>
+\endcode
+The last three template parameters are optional. Since these are exactly the same as for Matrix,
+we won't explain them again here; see \ref TutorialMatrixClass instead.
+
+Eigen also provides typedefs for some common cases, in a way that is similar to the Matrix typedefs
+but with some slight differences, as the word "array" is used for both 1-dimensional and 2-dimensional arrays.
+We adopt the convention that typedefs of the form ArrayNt stand for 1-dimensional arrays, where N and t are
+the size and the scalar type, as in the Matrix typedefs explained on \ref TutorialMatrixClass "this page". For 2-dimensional arrays, we
+use typedefs of the form ArrayNNt. Some examples are shown in the following table:
+
+<table class="manual">
+  <tr>
+    <th>Type </th>
+    <th>Typedef </th>
+  </tr>
+  <tr>
+    <td> \code Array<float,Dynamic,1> \endcode </td>
+    <td> \code ArrayXf \endcode </td>
+  </tr>
+  <tr>
+    <td> \code Array<float,3,1> \endcode </td>
+    <td> \code Array3f \endcode </td>
+  </tr>
+  <tr>
+    <td> \code Array<double,Dynamic,Dynamic> \endcode </td>
+    <td> \code ArrayXXd \endcode </td>
+  </tr>
+  <tr>
+    <td> \code Array<double,3,3> \endcode </td>
+    <td> \code Array33d \endcode </td>
+  </tr>
+</table>
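+
+For illustration, here is a minimal sketch declaring arrays with these typedefs (the variable names are arbitrary):
+\code
+ArrayXf  a(10);   // dynamic-size, 1-dimensional array of 10 floats
+Array3f  b;       // fixed-size, 1-dimensional array of 3 floats
+ArrayXXd c(4,4);  // dynamic-size, 2-dimensional array of doubles
+Array33d d;       // fixed-size, 3x3 array of doubles
+\endcode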
+
+
+\section TutorialArrayClassAccess Accessing values inside an Array
+
+The parenthesis operator is overloaded to provide write and read access to the coefficients of an array, just as with matrices.
+Furthermore, the \c << operator can be used to initialize arrays (via the comma initializer) or to print them.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_accessors.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_accessors.out
+</td></tr></table>
+
+For more information about the comma initializer, see \ref TutorialAdvancedInitialization.
+
+
+\section TutorialArrayClassAddSub Addition and subtraction
+
+Adding and subtracting two arrays is the same as for matrices.
+The operation is valid if both arrays have the same size, and the addition or subtraction is done coefficient-wise.
+
+Arrays also support expressions of the form <tt>array + scalar</tt> which add a scalar to each coefficient in the array.
+This provides functionality that is not directly available for Matrix objects.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_addition.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_addition.out
+</td></tr></table>
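+
+As a quick sketch of the scalar case (the values are arbitrary):
+\code
+ArrayXf a(3);
+a << 1, 2, 3;
+a = a + 2;  // adds 2 to each coefficient: a is now 3, 4, 5
+\endcode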
+
+
+\section TutorialArrayClassMult Array multiplication
+
+First of all, you can of course multiply an array by a scalar; this works in the same way as for
+matrices. Where arrays are fundamentally different from matrices is when you multiply two of them
+together. Matrices interpret multiplication as the matrix product, while arrays interpret it as the
+coefficient-wise product. Thus, two arrays can be multiplied if and only if they have the same dimensions.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_mult.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_mult.out
+</td></tr></table>
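+
+To make the contrast concrete, here is a small sketch (not part of the included program):
+\code
+Array22f a, b;
+a << 1, 2,
+     3, 4;
+b << 5, 6,
+     7, 8;
+// a * b is the coefficient-wise product:
+//   5  12
+//  21  32
+\endcode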
+
+
+\section TutorialArrayClassCwiseOther Other coefficient-wise operations
+
+The Array class defines other coefficient-wise operations besides the addition, subtraction and multiplication
+operators described above. For example, the \link ArrayBase::abs() .abs() \endlink method takes the absolute
+value of each coefficient, while \link ArrayBase::sqrt() .sqrt() \endlink computes the square root of the
+coefficients. If you have two arrays of the same size, you can call \link ArrayBase::min() .min() \endlink to
+construct the array whose coefficients are the minimum of the corresponding coefficients of the two given
+arrays. These operations are illustrated in the following example.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_cwise_other.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_cwise_other.out
+</td></tr></table>
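+
+As a minimal sketch of these methods (the values are arbitrary):
+\code
+ArrayXf a(3), b(3);
+a << -1, -4, 9;
+b <<  2,  3, 4;
+// a.abs()        yields 1, 4, 9
+// a.abs().sqrt() yields 1, 2, 3
+// a.abs().min(b) yields 1, 3, 4  (coefficient-wise minimum)
+\endcode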
+
+More coefficient-wise operations can be found in the \ref QuickRefPage.
+
+
+\section TutorialArrayClassConvert Converting between array and matrix expressions
+
+When should you use objects of the Matrix class and when should you use objects of the Array class? You cannot
+apply Matrix operations to arrays, or Array operations to matrices. Thus, if you need to do linear algebraic
+operations such as matrix multiplication, then you should use matrices; if you need to do coefficient-wise
+operations, then you should use arrays. Sometimes, however, it is not that simple, and you need to use both
+Matrix and Array operations. In that case, you need to convert a matrix to an array or vice versa. This gives
+access to all operations regardless of whether objects are declared as arrays or as matrices.
+
+\link MatrixBase Matrix expressions \endlink have an \link MatrixBase::array() .array() \endlink method that
+'converts' them into \link ArrayBase array expressions\endlink, so that coefficient-wise operations
+can be applied easily. Conversely, \link ArrayBase array expressions \endlink
+have a \link ArrayBase::matrix() .matrix() \endlink method. As with all Eigen expression abstractions,
+this doesn't have any runtime cost (provided that you let your compiler optimize).
+Both \link MatrixBase::array() .array() \endlink and \link ArrayBase::matrix() .matrix() \endlink 
+can be used as rvalues and as lvalues.
+
+Mixing matrices and arrays in an expression is forbidden in Eigen. For instance, you cannot add a matrix and
+an array directly; the operands of a \c + operator should either both be matrices or both be arrays. However,
+it is easy to convert from one to the other with \link MatrixBase::array() .array() \endlink and 
+\link ArrayBase::matrix() .matrix()\endlink. The exception to this rule is the assignment operator: it is
+allowed to assign a matrix expression to an array variable, or to assign an array expression to a matrix
+variable.
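+
+These rules can be sketched as follows (the sizes and values here are arbitrary):
+\code
+MatrixXf m(2,2);
+ArrayXXf a(2,2);
+m << 1, 2,
+     3, 4;
+a.setOnes();
+// m + a;           // error: cannot mix a matrix and an array in one expression
+m = a;              // OK: assigning an array expression to a matrix variable
+m = m.array() + a;  // OK: both operands of + are arrays; the result is assigned back to a matrix
+\endcode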
+
+The following example shows how to use array operations on a Matrix object by employing the 
+\link MatrixBase::array() .array() \endlink method. For example, the statement 
+<tt>result = m.array() * n.array()</tt> takes two matrices \c m and \c n, converts them both to arrays, multiplies
+them coefficient-wise with \c *, and assigns the result to the matrix variable \c result (this is legal
+because Eigen allows assigning array expressions to matrix variables). 
+
+As a matter of fact, this usage case is so common that Eigen provides a \link MatrixBase::cwiseProduct()
+.cwiseProduct() \endlink method for matrices to compute the coefficient-wise product. This is also shown in
+the example program.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_interop_matrix.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_interop_matrix.out
+</td></tr></table>
+
+Similarly, if \c array1 and \c array2 are arrays, then the expression <tt>array1.matrix() * array2.matrix()</tt>
+computes their matrix product.
+
+Here is a more advanced example. The expression <tt>(m.array() + 4).matrix() * m</tt> adds 4 to every
+coefficient in the matrix \c m and then computes the matrix product of the result with \c m. Similarly, the
+expression <tt>(m.array() * n.array()).matrix() * m</tt> computes the coefficient-wise product of the matrices
+\c m and \c n and then the matrix product of the result with \c m.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ArrayClass_interop.cpp
+</td>
+<td>
+\verbinclude Tutorial_ArrayClass_interop.out
+</td></tr></table>
+
+\li \b Next: \ref TutorialBlockOperations
+
+*/
+
+}
diff --git a/doc/C04_TutorialBlockOperations.dox b/doc/C04_TutorialBlockOperations.dox
new file mode 100644
index 0000000..eac0eaa
--- /dev/null
+++ b/doc/C04_TutorialBlockOperations.dox
@@ -0,0 +1,239 @@
+namespace Eigen {
+
+/** \page TutorialBlockOperations Tutorial page 4 - %Block operations
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialArrayClass
+\li \b Next: \ref TutorialAdvancedInitialization
+
+This tutorial page explains the essentials of block operations.
+A block is a rectangular part of a matrix or array. Block expressions can be used both
+as rvalues and as lvalues. As usual with Eigen expressions, this abstraction has zero runtime cost
+provided that you let your compiler optimize.
+
+\b Table \b of \b contents
+  - \ref TutorialBlockOperationsUsing
+  - \ref TutorialBlockOperationsSyntaxColumnRows
+  - \ref TutorialBlockOperationsSyntaxCorners
+  - \ref TutorialBlockOperationsSyntaxVectors
+
+
+\section TutorialBlockOperationsUsing Using block operations
+
+The most general block operation in Eigen is called \link DenseBase::block() .block() \endlink.
+There are two versions, whose syntax is as follows:
+
+<table class="manual">
+<tr><th>\b %Block \b operation</th>
+<th>Version constructing a \n dynamic-size block expression</th>
+<th>Version constructing a \n fixed-size block expression</th></tr>
+<tr><td>%Block of size <tt>(p,q)</tt>, starting at <tt>(i,j)</tt></td>
+    <td>\code
+matrix.block(i,j,p,q);\endcode </td>
+    <td>\code 
+matrix.block<p,q>(i,j);\endcode </td>
+</tr>
+</table>
+
+As always in Eigen, indices start at 0.
+
+Both versions can be used on fixed-size and dynamic-size matrices and arrays.
+These two expressions are semantically equivalent.
+The only difference is that the fixed-size version will typically give you faster code if the block size is small,
+but requires this size to be known at compile time.
+
+The following program uses the dynamic-size and fixed-size versions to print the values of several blocks inside a
+matrix.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_BlockOperations_print_block.cpp
+</td>
+<td>
+\verbinclude Tutorial_BlockOperations_print_block.out
+</td></tr></table>
+
+In the above example the \link DenseBase::block() .block() \endlink function was employed as an \em rvalue, i.e.
+it was only read from. However, blocks can also be used as \em lvalues, meaning that you can assign to a block.
+
+This is illustrated in the following example. This example also demonstrates blocks in arrays, which work exactly like the blocks in matrices demonstrated above.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_BlockOperations_block_assignment.cpp
+</td>
+<td>
+\verbinclude Tutorial_BlockOperations_block_assignment.out
+</td></tr></table>
+
+While the \link DenseBase::block() .block() \endlink method can be used for any block operation, there are
+other methods for special cases, providing a more specialized API and/or better performance. Regarding performance, all that
+matters is that you give Eigen as much information as possible at compile time. For example, if your block is a single whole column of a matrix,
+using the specialized \link DenseBase::col() .col() \endlink function described below lets Eigen know that, which can give it optimization opportunities.
+
+The rest of this page describes these specialized methods.
+
+\section TutorialBlockOperationsSyntaxColumnRows Columns and rows
+
+Individual columns and rows are special cases of blocks. Eigen provides methods to easily address them:
+\link DenseBase::col() .col() \endlink and \link DenseBase::row() .row()\endlink.
+
+<table class="manual">
+<tr><th>%Block operation</th>
+<th>Method</th></tr>
+<tr><td>i<sup>th</sup> row
+                    \link DenseBase::row() * \endlink</td>
+    <td>\code
+matrix.row(i);\endcode </td>
+</tr>
+<tr><td>j<sup>th</sup> column
+                    \link DenseBase::col() * \endlink</td>
+    <td>\code
+matrix.col(j);\endcode </td>
+</tr>
+</table>
+
+The argument for \p col() and \p row() is the index of the column or row to be accessed. As always in Eigen, indices start at 0.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_BlockOperations_colrow.cpp
+</td>
+<td>
+\verbinclude Tutorial_BlockOperations_colrow.out
+</td></tr></table>
+
+That example also demonstrates that block expressions (here columns) can be used in arithmetic like any other expression.
+
+
+\section TutorialBlockOperationsSyntaxCorners Corner-related operations
+
+Eigen also provides special methods for blocks that are flushed against one of the corners or sides of a
+matrix or array. For instance, \link DenseBase::topLeftCorner() .topLeftCorner() \endlink can be used to refer
+to a block in the top-left corner of a matrix.
+
+The different possibilities are summarized in the following table:
+
+<table class="manual">
+<tr><th>%Block \b operation</th>
+<th>Version constructing a \n dynamic-size block expression</th>
+<th>Version constructing a \n fixed-size block expression</th></tr>
+<tr><td>Top-left p by q block \link DenseBase::topLeftCorner() * \endlink</td>
+    <td>\code
+matrix.topLeftCorner(p,q);\endcode </td>
+    <td>\code 
+matrix.topLeftCorner<p,q>();\endcode </td>
+</tr>
+<tr><td>Bottom-left p by q block
+              \link DenseBase::bottomLeftCorner() * \endlink</td>
+    <td>\code
+matrix.bottomLeftCorner(p,q);\endcode </td>
+    <td>\code 
+matrix.bottomLeftCorner<p,q>();\endcode </td>
+</tr>
+<tr><td>Top-right p by q block
+              \link DenseBase::topRightCorner() * \endlink</td>
+    <td>\code
+matrix.topRightCorner(p,q);\endcode </td>
+    <td>\code 
+matrix.topRightCorner<p,q>();\endcode </td>
+</tr>
+<tr><td>Bottom-right p by q block
+               \link DenseBase::bottomRightCorner() * \endlink</td>
+    <td>\code
+matrix.bottomRightCorner(p,q);\endcode </td>
+    <td>\code 
+matrix.bottomRightCorner<p,q>();\endcode </td>
+</tr>
+<tr><td>%Block containing the first q rows
+                   \link DenseBase::topRows() * \endlink</td>
+    <td>\code
+matrix.topRows(q);\endcode </td>
+    <td>\code 
+matrix.topRows<q>();\endcode </td>
+</tr>
+<tr><td>%Block containing the last q rows
+                    \link DenseBase::bottomRows() * \endlink</td>
+    <td>\code
+matrix.bottomRows(q);\endcode </td>
+    <td>\code 
+matrix.bottomRows<q>();\endcode </td>
+</tr>
+<tr><td>%Block containing the first p columns
+                    \link DenseBase::leftCols() * \endlink</td>
+    <td>\code
+matrix.leftCols(p);\endcode </td>
+    <td>\code 
+matrix.leftCols<p>();\endcode </td>
+</tr>
+<tr><td>%Block containing the last q columns
+                    \link DenseBase::rightCols() * \endlink</td>
+    <td>\code
+matrix.rightCols(q);\endcode </td>
+    <td>\code 
+matrix.rightCols<q>();\endcode </td>
+</tr>
+</table>
+
+Here is a simple example illustrating the use of the operations presented above:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_BlockOperations_corner.cpp
+</td>
+<td>
+\verbinclude Tutorial_BlockOperations_corner.out
+</td></tr></table>
+
+
+\section TutorialBlockOperationsSyntaxVectors Block operations for vectors
+
+Eigen also provides a set of block operations designed specifically for the special case of vectors and one-dimensional arrays:
+
+<table class="manual">
+<tr><th> %Block operation</th>
+<th>Version constructing a \n dynamic-size block expression</th>
+<th>Version constructing a \n fixed-size block expression</th></tr>
+<tr><td>%Block containing the first \p n elements 
+                    \link DenseBase::head() * \endlink</td>
+    <td>\code
+vector.head(n);\endcode </td>
+    <td>\code 
+vector.head<n>();\endcode </td>
+</tr>
+<tr><td>%Block containing the last \p n elements
+                    \link DenseBase::tail() * \endlink</td>
+    <td>\code
+vector.tail(n);\endcode </td>
+    <td>\code 
+vector.tail<n>();\endcode </td>
+</tr>
+<tr><td>%Block containing \p n elements, starting at position \p i
+                    \link DenseBase::segment() * \endlink</td>
+    <td>\code
+vector.segment(i,n);\endcode </td>
+    <td>\code 
+vector.segment<n>(i);\endcode </td>
+</tr>
+</table>
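+
+As a quick sketch (the values are arbitrary):
+\code
+VectorXf v(6);
+v << 1, 2, 3, 4, 5, 6;
+// v.head(3)       is 1, 2, 3
+// v.tail<2>()     is 5, 6
+// v.segment(1,3)  is 2, 3, 4
+\endcode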
+
+
+An example is presented below:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_BlockOperations_vector.cpp
+</td>
+<td>
+\verbinclude Tutorial_BlockOperations_vector.out
+</td></tr></table>
+
+\li \b Next: \ref TutorialAdvancedInitialization
+
+*/
+
+}
diff --git a/doc/C05_TutorialAdvancedInitialization.dox b/doc/C05_TutorialAdvancedInitialization.dox
new file mode 100644
index 0000000..4f27f1e
--- /dev/null
+++ b/doc/C05_TutorialAdvancedInitialization.dox
@@ -0,0 +1,172 @@
+namespace Eigen {
+
+/** \page TutorialAdvancedInitialization Tutorial page 5 - Advanced initialization
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialBlockOperations
+\li \b Next: \ref TutorialLinearAlgebra
+
+This page discusses several advanced methods for initializing matrices. It gives more details on the
+comma-initializer, which was introduced before. It also explains how to get special matrices such as the
+identity matrix and the zero matrix.
+
+\b Table \b of \b contents
+  - \ref TutorialAdvancedInitializationCommaInitializer
+  - \ref TutorialAdvancedInitializationSpecialMatrices
+  - \ref TutorialAdvancedInitializationTemporaryObjects
+
+
+\section TutorialAdvancedInitializationCommaInitializer The comma initializer
+
+Eigen offers a comma initializer syntax which allows the user to easily set all the coefficients of a matrix,
+vector or array. Simply list the coefficients, starting at the top-left corner and moving from left to right
+and from the top to the bottom. The size of the object needs to be specified beforehand. If you list too few
+or too many coefficients, Eigen will complain.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_commainit_01.cpp
+</td>
+<td>
+\verbinclude Tutorial_commainit_01.out
+</td></tr></table>
+
+Moreover, the elements of the initialization list may themselves be vectors or matrices. A common use is
+to join vectors or matrices together. For example, here is how to join two row vectors together. Remember
+that you have to set the size before you can use the comma initializer.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_Join.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_Join.out
+</td></tr></table>
+
+We can use the same technique to initialize matrices with a block structure.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_Block.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_Block.out
+</td></tr></table>
+
+The comma initializer can also be used to fill block expressions such as <tt>m.row(i)</tt>. Here is a more
+complicated way to get the same result as in the first example above:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_commainit_01b.cpp
+</td>
+<td>
+\verbinclude Tutorial_commainit_01b.out
+</td></tr></table>
+
+
+\section TutorialAdvancedInitializationSpecialMatrices Special matrices and arrays
+
+The Matrix and Array classes have static methods like \link DenseBase::Zero() Zero()\endlink, which can be
+used to initialize all coefficients to zero. There are three variants. The first variant takes no arguments
+and can only be used for fixed-size objects. If you want to initialize a dynamic-size object to zero, you need
+to specify the size. Thus, the second variant requires one argument and can be used for one-dimensional
+dynamic-size objects, while the third variant requires two arguments and can be used for two-dimensional
+objects. All three variants are illustrated in the following example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_Zero.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_Zero.out
+</td></tr></table>
+
+Similarly, the static method \link DenseBase::Constant() Constant\endlink(value) sets all coefficients to \c value.
+If the size of the object needs to be specified, the additional arguments go before the \c value
+argument, as in <tt>MatrixXd::Constant(rows, cols, value)</tt>. The method \link DenseBase::Random() Random()
+\endlink fills the matrix or array with random coefficients. The identity matrix can be obtained by calling
+\link MatrixBase::Identity() Identity()\endlink; this method is only available for Matrix, not for Array,
+because "identity matrix" is a linear algebra concept.  The method
+\link DenseBase::LinSpaced LinSpaced\endlink(size, low, high) is only available for vectors and
+one-dimensional arrays; it yields a vector of the specified size whose coefficients are equally spaced between
+\c low and \c high. The method \c LinSpaced() is illustrated in the following example, which prints a table
+with angles in degrees, the corresponding angle in radians, and their sine and cosine.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_LinSpaced.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_LinSpaced.out
+</td></tr></table>
+
+This example shows that objects like the ones returned by LinSpaced() can be assigned to variables (and
+expressions). Eigen defines utility functions like \link DenseBase::setZero() setZero()\endlink, 
+\link MatrixBase::setIdentity() setIdentity()\endlink and \link DenseBase::setLinSpaced() setLinSpaced()\endlink to do this
+conveniently. The following example contrasts three ways to construct the matrix
+\f$ J = \bigl[ \begin{smallmatrix} O & I \\ I & O \end{smallmatrix} \bigr] \f$: using static methods and
+assignment, using static methods and the comma-initializer, or using the setXxx() methods.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_ThreeWays.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_ThreeWays.out
+</td></tr></table>
+
+A summary of all pre-defined matrix, vector and array objects can be found in the \ref QuickRefPage.
+
+
+\section TutorialAdvancedInitializationTemporaryObjects Usage as temporary objects
+
+As shown above, static methods such as Zero() and Constant() can be used to initialize variables at the time of
+declaration or at the right-hand side of an assignment operator. You can think of these methods as returning a
+matrix or array; in fact, they return so-called \ref TopicEigenExpressionTemplates "expression objects" which
+evaluate to a matrix or array when needed, so that this syntax does not incur any overhead.
+
+These expressions can also be used as a temporary object. The second example in
+the \ref GettingStarted guide, which we reproduce here, already illustrates this.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include QuickStart_example2_dynamic.cpp
+</td>
+<td>
+\verbinclude QuickStart_example2_dynamic.out
+</td></tr></table>
+
+The expression <tt>m + MatrixXf::Constant(3,3,1.2)</tt> constructs a 3-by-3 matrix expression whose
+coefficients are those of \a m plus 1.2.
+
+The comma-initializer can also be used to construct temporary objects. The following example constructs a random
+matrix of size 2-by-3, and then multiplies this matrix on the left with 
+\f$ \bigl[ \begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix} \bigr] \f$.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_AdvancedInitialization_CommaTemporary.cpp
+</td>
+<td>
+\verbinclude Tutorial_AdvancedInitialization_CommaTemporary.out
+</td></tr></table>
+
+The \link CommaInitializer::finished() finished() \endlink method is necessary here to get the actual matrix
+object once the comma initialization of our temporary submatrix is done.
+
+
+\li \b Next: \ref TutorialLinearAlgebra
+
+*/
+
+}
diff --git a/doc/C06_TutorialLinearAlgebra.dox b/doc/C06_TutorialLinearAlgebra.dox
new file mode 100644
index 0000000..e8b3b79
--- /dev/null
+++ b/doc/C06_TutorialLinearAlgebra.dox
@@ -0,0 +1,269 @@
+namespace Eigen {
+
+/** \page TutorialLinearAlgebra Tutorial page 6 - Linear algebra and decompositions
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialAdvancedInitialization
+\li \b Next: \ref TutorialReductionsVisitorsBroadcasting
+
+This tutorial explains how to solve linear systems and compute various decompositions such as LU,
+QR, %SVD and eigendecompositions. For more advanced topics, don't miss our special page on
+\ref TopicLinearAlgebraDecompositions "this topic".
+
+\b Table \b of \b contents
+  - \ref TutorialLinAlgBasicSolve
+  - \ref TutorialLinAlgSolutionExists
+  - \ref TutorialLinAlgEigensolving
+  - \ref TutorialLinAlgInverse
+  - \ref TutorialLinAlgLeastsquares
+  - \ref TutorialLinAlgSeparateComputation
+  - \ref TutorialLinAlgRankRevealing
+
+
+\section TutorialLinAlgBasicSolve Basic linear solving
+
+\b The \b problem: You have a system of equations that you have written as a single matrix equation
+    \f[ Ax \: = \: b \f]
+where \a A and \a b are matrices (\a b could be a vector, as a special case). You want to find a solution \a x.
+
+\b The \b solution: You can choose between various decompositions, depending on what your matrix \a A looks like,
+and depending on whether you favor speed or accuracy. However, let's start with an example that works in all cases,
+and is a good compromise:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgExSolveColPivHouseholderQR.cpp </td>
+  <td>\verbinclude TutorialLinAlgExSolveColPivHouseholderQR.out </td>
+</tr>
+</table>
+
+In this example, the colPivHouseholderQr() method returns an object of class ColPivHouseholderQR. Since here the
+matrix is of type Matrix3f, this line could have been replaced by:
+\code
+ColPivHouseholderQR<Matrix3f> dec(A);
+Vector3f x = dec.solve(b);
+\endcode
+
+Here, ColPivHouseholderQR is a QR decomposition with column pivoting. It's a good compromise for this tutorial, as it
+works for all matrices while being quite fast. Here is a table of some other decompositions that you can choose from,
+depending on your matrix and the trade-off you want to make:
+
+<table class="manual">
+    <tr>
+        <th>Decomposition</th>
+        <th>Method</th>
+        <th>Requirements on the matrix</th>
+        <th>Speed</th>
+        <th>Accuracy</th>
+    </tr>
+    <tr>
+        <td>PartialPivLU</td>
+        <td>partialPivLu()</td>
+        <td>Invertible</td>
+        <td>++</td>
+        <td>+</td>
+    </tr>
+    <tr class="alt">
+        <td>FullPivLU</td>
+        <td>fullPivLu()</td>
+        <td>None</td>
+        <td>-</td>
+        <td>+++</td>
+    </tr>
+    <tr>
+        <td>HouseholderQR</td>
+        <td>householderQr()</td>
+        <td>None</td>
+        <td>++</td>
+        <td>+</td>
+    </tr>
+    <tr class="alt">
+        <td>ColPivHouseholderQR</td>
+        <td>colPivHouseholderQr()</td>
+        <td>None</td>
+        <td>+</td>
+        <td>++</td>
+    </tr>
+    <tr>
+        <td>FullPivHouseholderQR</td>
+        <td>fullPivHouseholderQr()</td>
+        <td>None</td>
+        <td>-</td>
+        <td>+++</td>
+    </tr>
+    <tr class="alt">
+        <td>LLT</td>
+        <td>llt()</td>
+        <td>Positive definite</td>
+        <td>+++</td>
+        <td>+</td>
+    </tr>
+    <tr>
+        <td>LDLT</td>
+        <td>ldlt()</td>
+        <td>Positive or negative semidefinite</td>
+        <td>+++</td>
+        <td>++</td>
+    </tr>
+</table>
+
+All of these decompositions offer a solve() method that works as in the above example.
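+Assuming appropriately sized \c A and \c b, the calling pattern is identical for all of them; a minimal sketch:
+\code
+Matrix3f A = Matrix3f::Random();
+Vector3f b = Vector3f::Random();
+Vector3f x = A.householderQr().solve(b);  // same pattern with any decomposition
+\endcode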
+
+For example, if your matrix is positive definite, the above table says that a very good
+choice is then the LDLT decomposition. Here's an example, also demonstrating that using a general
+matrix (not a vector) as the right-hand side is possible.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgExSolveLDLT.cpp </td>
+  <td>\verbinclude TutorialLinAlgExSolveLDLT.out </td>
+</tr>
+</table>
+
+For a \ref TopicLinearAlgebraDecompositions "much more complete table" comparing all decompositions supported by Eigen (notice that Eigen
+supports many other decompositions), see our special page on
+\ref TopicLinearAlgebraDecompositions "this topic".
+
+\section TutorialLinAlgSolutionExists Checking if a solution really exists
+
+Only you know what error margin you want to allow for a solution to be considered valid.
+So Eigen lets you do this computation for yourself, if you want to, as in this example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgExComputeSolveError.cpp </td>
+  <td>\verbinclude TutorialLinAlgExComputeSolveError.out </td>
+</tr>
+</table>
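+
+The computation in that example boils down to the following sketch (the matrices here are illustrative stand-ins):
+\code
+MatrixXd A(2,2);
+VectorXd b(2);
+A << 1, 0,
+     0, 1;
+b << 1, 2;
+VectorXd x = A.colPivHouseholderQr().solve(b);
+double relative_error = (A*x - b).norm() / b.norm();  // norm() is the L2 (Euclidean) norm
+\endcode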
+
+\section TutorialLinAlgEigensolving Computing eigenvalues and eigenvectors
+
+You need an eigendecomposition here, see available such decompositions on \ref TopicLinearAlgebraDecompositions "this page".
+Make sure to check whether your matrix is self-adjoint, as is often the case in these problems. Here's an example using
+SelfAdjointEigenSolver; it could easily be adapted to general matrices using EigenSolver or ComplexEigenSolver.
+
+The computation of eigenvalues and eigenvectors does not necessarily converge, but such failure to converge is
+very rare. The call to info() is to check for this possibility.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgSelfAdjointEigenSolver.cpp </td>
+  <td>\verbinclude TutorialLinAlgSelfAdjointEigenSolver.out </td>
+</tr>
+</table>
+
+\section TutorialLinAlgInverse Computing inverse and determinant
+
+First of all, make sure that you really want this. While inverse and determinant are fundamental mathematical concepts,
+in \em numerical linear algebra they are not as popular as in pure mathematics. Inverse computations are often
+advantageously replaced by solve() operations, and the determinant is often \em not a good way of checking if a matrix
+is invertible.
+
+However, for \em very \em small matrices, the above is not true, and inverse and determinant can be very useful.
+
+While certain decompositions, such as PartialPivLU and FullPivLU, offer inverse() and determinant() methods, you can also
+call inverse() and determinant() directly on a matrix. If your matrix is of a very small fixed size (at most 4x4), this
+allows Eigen to avoid performing an LU decomposition, and instead use formulas that are more efficient on such small matrices.
+
+Here is an example:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgInverseDeterminant.cpp </td>
+  <td>\verbinclude TutorialLinAlgInverseDeterminant.out </td>
+</tr>
+</table>
+
+\section TutorialLinAlgLeastsquares Least squares solving
+
+The best way to do least-squares solving is with an SVD decomposition. Eigen provides one in the JacobiSVD class, and its solve()
+method does least-squares solving.
+
+Here is an example:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgSVDSolve.cpp </td>
+  <td>\verbinclude TutorialLinAlgSVDSolve.out </td>
+</tr>
+</table>
+
+Another way, potentially faster but less reliable, is to use an LDLT decomposition
+of the normal matrix. In any case, just read any reference text on least squares, and it will be easy for you
+to implement any linear least-squares computation on top of Eigen.
+
+\section TutorialLinAlgSeparateComputation Separating the computation from the construction
+
+In the above examples, the decomposition was computed at the same time that the decomposition object was constructed.
+There are however situations where you might want to separate these two things, for example if you don't know,
+at the time of the construction, the matrix that you will want to decompose; or if you want to reuse an existing
+decomposition object.
+
+What makes this possible is that:
+\li all decompositions have a default constructor,
+\li all decompositions have a compute(matrix) method that does the computation, and that may be called again
+    on an already-computed decomposition, reinitializing it.
+
+For example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgComputeTwice.cpp </td>
+  <td>\verbinclude TutorialLinAlgComputeTwice.out </td>
+</tr>
+</table>
+
+Finally, you can tell the decomposition constructor to preallocate storage for decomposing matrices of a given size,
+so that when you subsequently decompose such matrices, no dynamic memory allocation is performed (of course, if you
+are using fixed-size matrices, no dynamic memory allocation happens at all). This is done by just
+passing the size to the decomposition constructor, as in this example:
+\code
+HouseholderQR<MatrixXf> qr(50,50);
+MatrixXf A = MatrixXf::Random(50,50);
+qr.compute(A); // no dynamic memory allocation
+\endcode
+
+\section TutorialLinAlgRankRevealing Rank-revealing decompositions
+
+Certain decompositions are rank-revealing, i.e. are able to compute the rank of a matrix. These are typically
+also the decompositions that behave best in the face of a non-full-rank matrix (which in the square case means a
+singular matrix). \ref TopicLinearAlgebraDecompositions "This table" shows, for each of our decompositions,
+whether it is rank-revealing or not.
+
+Rank-revealing decompositions offer at least a rank() method. They can also offer convenience methods such as isInvertible(),
+and some also provide methods to compute the kernel (null-space) and image (column-space) of the matrix, as is the
+case with FullPivLU:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgRankRevealing.cpp </td>
+  <td>\verbinclude TutorialLinAlgRankRevealing.out </td>
+</tr>
+</table>
+
+Of course, any rank computation depends on the choice of an arbitrary threshold, since practically no
+floating-point matrix is \em exactly rank-deficient. Eigen picks a sensible default threshold, which depends
+on the decomposition but is typically the diagonal size times machine epsilon. While this is the best default we
+could pick, only you know the right threshold for your application. You can set this by calling setThreshold()
+on your decomposition object before calling rank() or any other method that needs to use such a threshold.
+The decomposition itself, i.e. the compute() method, is independent of the threshold. You don't need to recompute the
+decomposition after you've changed the threshold.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+  <td>\include TutorialLinAlgSetThreshold.cpp </td>
+  <td>\verbinclude TutorialLinAlgSetThreshold.out </td>
+</tr>
+</table>
+
+\li \b Next: \ref TutorialReductionsVisitorsBroadcasting
+
+*/
+
+}
diff --git a/doc/C07_TutorialReductionsVisitorsBroadcasting.dox b/doc/C07_TutorialReductionsVisitorsBroadcasting.dox
new file mode 100644
index 0000000..f3879b8
--- /dev/null
+++ b/doc/C07_TutorialReductionsVisitorsBroadcasting.dox
@@ -0,0 +1,273 @@
+namespace Eigen {
+
+/** \page TutorialReductionsVisitorsBroadcasting Tutorial page 7 - Reductions, visitors and broadcasting
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialLinearAlgebra
+\li \b Next: \ref TutorialGeometry
+
+This tutorial explains Eigen's reductions, visitors and broadcasting and how they are used with
+\link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink.
+
+\b Table \b of \b contents
+  - \ref TutorialReductionsVisitorsBroadcastingReductions
+    - \ref TutorialReductionsVisitorsBroadcastingReductionsNorm
+    - \ref TutorialReductionsVisitorsBroadcastingReductionsBool
+    - \ref TutorialReductionsVisitorsBroadcastingReductionsUserdefined
+  - \ref TutorialReductionsVisitorsBroadcastingVisitors
+  - \ref TutorialReductionsVisitorsBroadcastingPartialReductions
+    - \ref TutorialReductionsVisitorsBroadcastingPartialReductionsCombined
+  - \ref TutorialReductionsVisitorsBroadcastingBroadcasting
+    - \ref TutorialReductionsVisitorsBroadcastingBroadcastingCombined
+
+
+\section TutorialReductionsVisitorsBroadcastingReductions Reductions
+In Eigen, a reduction is a function taking a matrix or array, and returning a single
+scalar value. One of the most used reductions is \link DenseBase::sum() .sum() \endlink,
+returning the sum of all the coefficients inside a given matrix or array.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include tut_arithmetic_redux_basic.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_redux_basic.out
+</td></tr></table>
+
+The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can equivalently be computed as <tt>a.diagonal().sum()</tt>.
+
+
+\subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations
+
+The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained with \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector with itself, and equivalently to the sum of squared absolute values of its coefficients.
+
+Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink.
+
+These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things.
+
+If you want other \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm() lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients.
+
+The following example demonstrates these methods.
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
+</td></tr></table>
+
+\subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions
+
+The following reductions operate on boolean values:
+  - \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true .
+  - \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true .
+  - \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to  \b true.
+
+These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, <tt>array > 0</tt> is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, <tt>(array > 0).all()</tt> tests whether all coefficients of \c array are positive. This can be seen in the following example:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
+</td></tr></table>
+
+\subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions
+
+TODO
+
+In the meantime you can have a look at the DenseBase::redux() function.
+
+\section TutorialReductionsVisitorsBroadcastingVisitors Visitors
+Visitors are useful when one wants to obtain the location of a coefficient inside 
+a Matrix or Array. The simplest examples are 
+\link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and 
+\link MatrixBase::minCoeff() minCoeff(&x,&y)\endlink, which can be used to find
+the location of the greatest or smallest coefficient in a Matrix or 
+Array.
+
+The arguments passed to a visitor are pointers to the variables where the
+row and column position are to be stored. These variables should be of type
+\link DenseBase::Index Index \endlink, as shown below:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
+</td></tr></table>
+
+Note that both functions also return the value of the minimum or maximum coefficient if needed,
+as if it were a typical reduction operation.
+
+\section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions
+Partial reductions are reductions that can operate column- or row-wise on a Matrix or 
+Array, applying the reduction operation on each column or row and 
+returning a column or row-vector with the corresponding values. Partial reductions are applied 
+with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink.
+
+A simple example is obtaining the maximum of the elements 
+in each column in a given matrix, storing the result in a row-vector:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
+</td></tr></table>
+
+The same operation can be performed row-wise:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
+</td></tr></table>
+
+<b>Note that column-wise operations return a 'row-vector', while row-wise operations
+return a 'column-vector'.</b>
+
+\subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations
+It is also possible to use the result of a partial reduction to do further processing.
+Here is another example that finds the column in a matrix whose sum of elements is the maximum.
+With column-wise partial reductions this can be coded as:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
+</td></tr></table>
+
+The previous example applies the \link DenseBase::sum() sum() \endlink reduction on each column
+through the \link DenseBase::colwise() colwise() \endlink operation, obtaining a new matrix whose
+size is 1x4.
+
+Therefore, if
+\f[
+\mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\
+                    3 & 1 & 7 & 2 \end{bmatrix}
+\f]
+
+then
+
+\f[
+\mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix}
+\f]
+
+The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied 
+to obtain the column index where the maximum sum is found, 
+which is the column index 2 (third column) in this case.
+
+
+\section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting
+The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting 
+constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in 
+one direction.
+
+A simple example is to add a certain column-vector to each column in a matrix. 
+This can be accomplished with:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
+</td></tr></table>
+
+We can interpret the instruction <tt>mat.colwise() += v</tt> in two equivalent ways. It adds the vector \c v
+to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to
+form a two-by-four matrix which is then added to \c mat:
+\f[
+\begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix}
++ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
+= \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}.
+\f]
+The operators <tt>-=</tt>, <tt>+</tt> and <tt>-</tt> can also be used column-wise and row-wise. On arrays, we 
+can also use the operators <tt>*=</tt>, <tt>/=</tt>, <tt>*</tt> and <tt>/</tt> to perform coefficient-wise 
+multiplication and division column-wise or row-wise. These operators are not available on matrices because it
+is not clear what they would do. If you want to multiply column 0 of a matrix \c mat with \c v(0), column 1 with 
+\c v(1), and so on, then use <tt>mat = mat * v.asDiagonal()</tt>.
+
+It is important to point out that the vector to be added column-wise or row-wise must be of type Vector,
+and cannot be a Matrix. If this is not met then you will get a compile-time error. This also means that
+broadcasting operations can only be applied with an object of type Vector, when operating with Matrix.
+The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should
+not mix arrays and matrices in the same expression.
+
+To perform the same operation row-wise we can do:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
+</td></tr></table>
+
+\subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations
+Broadcasting can also be combined with other operations, such as Matrix or Array operations, 
+reductions and partial reductions.
+
+Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds
+the nearest neighbour of a vector <tt>v</tt> within the columns of matrix <tt>m</tt>. The Euclidean distance will be used in this example,
+computing the squared Euclidean distance with the partial reduction named \link MatrixBase::squaredNorm() squaredNorm() \endlink:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
+</td>
+<td>
+\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
+</td></tr></table>
+
+The line that does the job is 
+\code
+  (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
+\endcode
+
+We will go step by step to understand what is happening:
+
+  - <tt>m.colwise() - v</tt> is a broadcasting operation, subtracting <tt>v</tt> from each column in <tt>m</tt>. The result of this operation
+is a new matrix whose size is the same as matrix <tt>m</tt>: \f[
+  \mbox{m.colwise() - v} = 
+  \begin{bmatrix}
+    -1 & 21 & 4 & 7 \\
+     0 & 8  & 4 & -1
+  \end{bmatrix}
+\f]
+
+  - <tt>(m.colwise() - v).colwise().squaredNorm()</tt> is a partial reduction, computing the squared norm column-wise. The result of
+this operation is a row-vector where each coefficient is the squared Euclidean distance between each column in <tt>m</tt> and <tt>v</tt>: \f[
+  \mbox{(m.colwise() - v).colwise().squaredNorm()} =
+  \begin{bmatrix}
+     1 & 505 & 32 & 50
+  \end{bmatrix}
+\f]
+
+  - Finally, <tt>minCoeff(&index)</tt> is used to obtain the index of the column in <tt>m</tt> that is closest to <tt>v</tt> in terms of Euclidean
+distance.
+
+\li \b Next: \ref TutorialGeometry
+
+*/
+
+}
diff --git a/doc/C08_TutorialGeometry.dox b/doc/C08_TutorialGeometry.dox
new file mode 100644
index 0000000..b9e9eba
--- /dev/null
+++ b/doc/C08_TutorialGeometry.dox
@@ -0,0 +1,251 @@
+namespace Eigen {
+
+/** \page TutorialGeometry Tutorial page 8 - Geometry
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialReductionsVisitorsBroadcasting
+\li \b Next: \ref TutorialSparse
+
+In this tutorial, we will briefly introduce the many possibilities offered by the \ref Geometry_Module "geometry module", namely 2D and 3D rotations and projective or affine transformations.
+
+\b Table \b of \b contents
+  - \ref TutorialGeoElementaryTransformations
+  - \ref TutorialGeoCommontransformationAPI
+  - \ref TutorialGeoTransform
+  - \ref TutorialGeoEulerAngles
+
+Eigen's Geometry module provides two different kinds of geometric transformations:
+  - Abstract transformations, such as rotations (represented by \ref AngleAxis "angle and axis" or by a \ref Quaternion "quaternion"), \ref Translation "translations", \ref Scaling "scalings". These transformations are NOT represented as matrices, but you can nevertheless mix them with matrices and vectors in expressions, and convert them to matrices if you wish.
+  - Projective or affine transformation matrices: see the Transform class. These are really matrices.
+
+\note If you are working with OpenGL 4x4 matrices then Affine3f and Affine3d are what you want. Since Eigen defaults to column-major storage, you can directly use the Transform::data() method to pass your transformation matrix to OpenGL.
+
+You can construct a Transform from an abstract transformation, like this:
+\code
+  Transform t(AngleAxis(angle,axis));
+\endcode
+or like this:
+\code
+  Transform t;
+  t = AngleAxis(angle,axis);
+\endcode
+But note that unfortunately, because of how C++ works, you can \b not do this:
+\code
+  Transform t = AngleAxis(angle,axis);
+\endcode
+<span class="note">\b Explanation: In the C++ language, this would require Transform to have a non-explicit conversion constructor from AngleAxis, but we really don't want to allow implicit casting here.
+</span>
+
+\section TutorialGeoElementaryTransformations Transformation types
+
+<table class="manual">
+<tr><th>Transformation type</th><th>Typical initialization code</th></tr>
+<tr><td>
+\ref Rotation2D "2D rotation" from an angle</td><td>\code
+Rotation2D<float> rot2(angle_in_radian);\endcode</td></tr>
+<tr class="alt"><td>
+3D rotation as an \ref AngleAxis "angle + axis"</td><td>\code
+AngleAxis<float> aa(angle_in_radian, Vector3f(ax,ay,az));\endcode
+<span class="note">The axis vector must be normalized.</span></td></tr>
+<tr><td>
+3D rotation as a \ref Quaternion "quaternion"</td><td>\code
+Quaternion<float> q;  q = AngleAxis<float>(angle_in_radian, axis);\endcode</td></tr>
+<tr class="alt"><td>
+N-D Scaling</td><td>\code
+Scaling(sx, sy)
+Scaling(sx, sy, sz)
+Scaling(s)
+Scaling(vecN)\endcode</td></tr>
+<tr><td>
+N-D Translation</td><td>\code
+Translation<float,2>(tx, ty)
+Translation<float,3>(tx, ty, tz)
+Translation<float,N>(s)
+Translation<float,N>(vecN)\endcode</td></tr>
+<tr class="alt"><td>
+N-D \ref TutorialGeoTransform "Affine transformation"</td><td>\code
+Transform<float,N,Affine> t = concatenation_of_any_transformations;
+Transform<float,3,Affine> t = Translation3f(p) * AngleAxisf(a,axis) * Scaling(s);\endcode</td></tr>
+<tr><td>
+N-D Linear transformations \n
+<em class=note>(pure rotations, \n scaling, etc.)</em></td><td>\code
+Matrix<float,N> t = concatenation_of_rotations_and_scalings;
+Matrix<float,2> t = Rotation2Df(a) * Scaling(s);
+Matrix<float,3> t = AngleAxisf(a,axis) * Scaling(s);\endcode</td></tr>
+</table>
+
+<strong>Notes on rotations</strong>\n To transform more than a single vector the preferred
+representation is a rotation matrix, while for other usages Quaternion is the
+representation of choice as it is compact, fast and stable. Finally, Rotation2D and
+AngleAxis are mainly convenience types for creating other rotation objects.
+
+<strong>Notes on Translation and Scaling</strong>\n Like AngleAxis, these classes were
+designed to simplify the creation/initialization of linear (Matrix) and affine (Transform)
+transformations. Nevertheless, unlike AngleAxis, which is inefficient to use, these classes
+might still be interesting when writing generic and efficient algorithms taking as input any
+kind of transformation.
+
+Any of the above transformation types can be converted to any other type of the same nature,
+or to a more generic type. Here are some additional examples:
+<table class="manual">
+<tr><td>\code
+Rotation2Df r;  r  = Matrix2f(..);       // assumes a pure rotation matrix
+AngleAxisf aa;  aa = Quaternionf(..);
+AngleAxisf aa;  aa = Matrix3f(..);       // assumes a pure rotation matrix
+Matrix2f m;     m  = Rotation2Df(..);
+Matrix3f m;     m  = Quaternionf(..);       Matrix3f m;   m = Scaling(..);
+Affine3f m;     m  = AngleAxis3f(..);       Affine3f m;   m = Scaling(..);
+Affine3f m;     m  = Translation3f(..);     Affine3f m;   m = Matrix3f(..);
+\endcode</td></tr>
+</table>
+
+
+<a href="#" class="top">top</a>\section TutorialGeoCommontransformationAPI Common API across transformation types
+
+To some extent, Eigen's \ref Geometry_Module "geometry module" allows you to write
+generic algorithms working on any kind of transformation representations:
+<table class="manual">
+<tr><td>
+Concatenation of two transformations</td><td>\code
+gen1 * gen2;\endcode</td></tr>
+<tr class="alt"><td>Apply the transformation to a vector</td><td>\code
+vec2 = gen1 * vec1;\endcode</td></tr>
+<tr><td>Get the inverse of the transformation</td><td>\code
+gen2 = gen1.inverse();\endcode</td></tr>
+<tr class="alt"><td>Spherical interpolation \n (Rotation2D and Quaternion only)</td><td>\code
+rot3 = rot1.slerp(alpha,rot2);\endcode</td></tr>
+</table>
+
+
+
+<a href="#" class="top">top</a>\section TutorialGeoTransform Affine transformations
+Generic affine transformations are represented by the Transform class which internally
+is a (Dim+1)^2 matrix. In Eigen we have chosen not to distinguish between points and
+vectors such that all points are actually represented by displacement vectors from the
+origin ( \f$ \mathbf{p} \equiv \mathbf{p}-0 \f$ ). With that in mind, points and
+vectors are distinguished only by how the transformation is applied to them.
+<table class="manual">
+<tr><td>
+Apply the transformation to a \b point </td><td>\code
+VectorNf p1, p2;
+p2 = t * p1;\endcode</td></tr>
+<tr class="alt"><td>
+Apply the transformation to a \b vector </td><td>\code
+VectorNf vec1, vec2;
+vec2 = t.linear() * vec1;\endcode</td></tr>
+<tr><td>
+Apply a \em general transformation \n to a \b normal \b vector
+(<a href="http://www.cgafaq.info/wiki/Transforming_normals">explanations</a>)</td><td>\code
+VectorNf n1, n2;
+MatrixNf normalMatrix = t.linear().inverse().transpose();
+n2 = (normalMatrix * n1).normalized();\endcode</td></tr>
+<tr class="alt"><td>
+Apply a transformation with \em pure \em rotation \n to a \b normal \b vector
+(no scaling, no shear)</td><td>\code
+n2 = t.linear() * n1;\endcode</td></tr>
+<tr><td>
+OpenGL compatibility \b 3D </td><td>\code
+glLoadMatrixf(t.data());\endcode</td></tr>
+<tr class="alt"><td>
+OpenGL compatibility \b 2D </td><td>\code
+Affine3f aux(Affine3f::Identity());
+aux.linear().topLeftCorner<2,2>() = t.linear();
+aux.translation().head<2>() = t.translation();
+glLoadMatrixf(aux.data());\endcode</td></tr>
+</table>
+
+\b Component \b accessors
+<table class="manual">
+<tr><td>
+full read-write access to the internal matrix</td><td>\code
+t.matrix() = matN1xN1;    // N1 means N+1
+matN1xN1 = t.matrix();
+\endcode</td></tr>
+<tr class="alt"><td>
+coefficient accessors</td><td>\code
+t(i,j) = scalar;   <=>   t.matrix()(i,j) = scalar;
+scalar = t(i,j);   <=>   scalar = t.matrix()(i,j);
+\endcode</td></tr>
+<tr><td>
+translation part</td><td>\code
+t.translation() = vecN;
+vecN = t.translation();
+\endcode</td></tr>
+<tr class="alt"><td>
+linear part</td><td>\code
+t.linear() = matNxN;
+matNxN = t.linear();
+\endcode</td></tr>
+<tr><td>
+extract the rotation matrix</td><td>\code
+matNxN = t.rotation();
+\endcode</td></tr>
+</table>
+
+
+\b Transformation \b creation \n
+While transformation objects can be created and updated concatenating elementary transformations,
+the Transform class also features a procedural API:
+<table class="manual">
+<tr><th></th><th>procedural API</th><th>equivalent natural API </th></tr>
+<tr><td>Translation</td><td>\code
+t.translate(Vector_(tx,ty,..));
+t.pretranslate(Vector_(tx,ty,..));
+\endcode</td><td>\code
+t *= Translation_(tx,ty,..);
+t = Translation_(tx,ty,..) * t;
+\endcode</td></tr>
+<tr class="alt"><td>\b Rotation \n <em class="note">In 2D and for the procedural API, any_rotation can also \n be an angle in radians</em></td><td>\code
+t.rotate(any_rotation);
+t.prerotate(any_rotation);
+\endcode</td><td>\code
+t *= any_rotation;
+t = any_rotation * t;
+\endcode</td></tr>
+<tr><td>Scaling</td><td>\code
+t.scale(Vector_(sx,sy,..));
+t.scale(s);
+t.prescale(Vector_(sx,sy,..));
+t.prescale(s);
+\endcode</td><td>\code
+t *= Scaling(sx,sy,..);
+t *= Scaling(s);
+t = Scaling(sx,sy,..) * t;
+t = Scaling(s) * t;
+\endcode</td></tr>
+<tr class="alt"><td>Shear transformation \n ( \b 2D \b only ! )</td><td>\code
+t.shear(sx,sy);
+t.preshear(sx,sy);
+\endcode</td><td></td></tr>
+</table>
+
+Note that in both APIs, many transformations can be concatenated in a single expression, as shown in the two following equivalent examples:
+<table class="manual">
+<tr><td>\code
+t.pretranslate(..).rotate(..).translate(..).scale(..);
+\endcode</td></tr>
+<tr><td>\code
+t = Translation_(..) * t * RotationType(..) * Translation_(..) * Scaling(..);
+\endcode</td></tr>
+</table>
+
+
+
+<a href="#" class="top">top</a>\section TutorialGeoEulerAngles Euler angles
+<table class="manual">
+<tr><td style="max-width:30em;">
+Euler angles might be convenient to create rotation objects.
+On the other hand, since there exist 24 different conventions, they are pretty confusing to use. This example shows how
+to create a rotation matrix according to the 2-1-2 convention.</td><td>\code
+Matrix3f m;
+m = AngleAxisf(angle1, Vector3f::UnitZ())
+  * AngleAxisf(angle2, Vector3f::UnitY())
+  * AngleAxisf(angle3, Vector3f::UnitZ());
+\endcode</td></tr>
+</table>
+
+\li \b Next: \ref TutorialSparse
+
+*/
+
+}
diff --git a/doc/C09_TutorialSparse.dox b/doc/C09_TutorialSparse.dox
new file mode 100644
index 0000000..34154bd
--- /dev/null
+++ b/doc/C09_TutorialSparse.dox
@@ -0,0 +1,455 @@
+namespace Eigen {
+
+/** \page TutorialSparse Tutorial page 9 - Sparse Matrix
+    \ingroup Tutorial
+
+\li \b Previous: \ref TutorialGeometry
+\li \b Next: \ref TutorialMapClass
+
+\b Table \b of \b contents \n
+  - \ref TutorialSparseIntro
+  - \ref TutorialSparseExample "Example"
+  - \ref TutorialSparseSparseMatrix
+  - \ref TutorialSparseFilling
+  - \ref TutorialSparseDirectSolvers
+  - \ref TutorialSparseFeatureSet
+    - \ref TutorialSparse_BasicOps
+    - \ref TutorialSparse_Products
+    - \ref TutorialSparse_TriangularSelfadjoint
+    - \ref TutorialSparse_Submat
+
+
+<hr>
+
+Manipulating and solving sparse problems involves various modules which are summarized below:
+
+<table class="manual">
+<tr><th>Module</th><th>Header file</th><th>Contents</th></tr>
+<tr><td>\link Sparse_Module SparseCore \endlink</td><td>\code#include <Eigen/SparseCore>\endcode</td><td>SparseMatrix and SparseVector classes, matrix assembly, basic sparse linear algebra (including sparse triangular solvers)</td></tr>
+<tr><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>\code#include <Eigen/SparseCholesky>\endcode</td><td>Direct sparse LLT and LDLT Cholesky factorization to solve sparse self-adjoint positive definite problems</td></tr>
+<tr><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>\code#include <Eigen/IterativeLinearSolvers>\endcode</td><td>Iterative solvers to solve large general linear square problems (including self-adjoint positive definite problems)</td></tr>
+<tr><td></td><td>\code#include <Eigen/Sparse>\endcode</td><td>Includes all the above modules</td></tr>
+</table>
+
+\section TutorialSparseIntro Sparse matrix representation
+
+In many applications (e.g., finite element methods) it is common to deal with very large matrices where only a few coefficients are different from zero.  In such cases, memory consumption can be reduced and performance increased by using a specialized representation storing only the nonzero coefficients. Such a matrix is called a sparse matrix.
+
+\b The \b %SparseMatrix \b class
+
+The class SparseMatrix is the main sparse matrix representation of Eigen's sparse module; it offers high performance and low memory usage.
+It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme.
+It consists of four compact arrays:
+ - \c Values: stores the coefficient values of the non-zeros.
+ - \c InnerIndices: stores the row (resp. column) indices of the non-zeros.
+ - \c OuterStarts: stores for each column (resp. row) the index of the first non-zero in the previous two arrays.
+ - \c InnerNNZs: stores the number of non-zeros of each column (resp. row).
+The word \c inner refers to an \em inner \em vector that is a column for a column-major matrix, or a row for a row-major matrix.
+The word \c outer refers to the other direction.
+
+This storage scheme is best explained with an example. The following matrix
+<table class="manual">
+<tr><td> 0</td><td>3</td><td> 0</td><td>0</td><td> 0</td></tr>
+<tr><td>22</td><td>0</td><td> 0</td><td>0</td><td>17</td></tr>
+<tr><td> 7</td><td>5</td><td> 0</td><td>1</td><td> 0</td></tr>
+<tr><td> 0</td><td>0</td><td> 0</td><td>0</td><td> 0</td></tr>
+<tr><td> 0</td><td>0</td><td>14</td><td>0</td><td> 8</td></tr>
+</table>
+
+and one of its possible sparse, \b column \b major representations:
+<table class="manual">
+<tr><td>Values:</td>        <td>22</td><td>7</td><td>_</td><td>3</td><td>5</td><td>14</td><td>_</td><td>_</td><td>1</td><td>_</td><td>17</td><td>8</td></tr>
+<tr><td>InnerIndices:</td>  <td> 1</td><td>2</td><td>_</td><td>0</td><td>2</td><td> 4</td><td>_</td><td>_</td><td>2</td><td>_</td><td> 1</td><td>4</td></tr>
+</table>
+<table class="manual">
+<tr><td>OuterStarts:</td><td>0</td><td>3</td><td>5</td><td>8</td><td>10</td><td>\em 12 </td></tr>
+<tr><td>InnerNNZs:</td>    <td>2</td><td>2</td><td>1</td><td>1</td><td> 2</td><td></td></tr>
+</table>
+
+Currently the elements of a given inner vector are guaranteed to be always sorted by increasing inner indices.
+The \c "_" indicates available free space to quickly insert new elements.
+Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j) where nnz_j is the number of nonzeros of the respective inner vector.
+On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient since this only requires increasing the respective \c InnerNNZs entry, which is an O(1) operation.
+
+The case where no empty space is available is a special case, and is referred to as the \em compressed mode.
+It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS).
+Any SparseMatrix can be turned into this form by calling the SparseMatrix::makeCompressed() function.
+In this case, one can remark that the \c InnerNNZs array is redundant with \c OuterStarts because of the equality: \c InnerNNZs[j] = \c OuterStarts[j+1]-\c OuterStarts[j].
+Therefore, in practice a call to SparseMatrix::makeCompressed() frees this buffer.
+
+It is worth noting that most of our wrappers to external libraries require compressed matrices as inputs.
+
+%Eigen's operations always produce \b compressed sparse matrices.
+On the other hand, the insertion of a new element into a SparseMatrix converts it to the \b uncompressed mode.
+
+Here is the previous matrix represented in compressed mode:
+<table class="manual">
+<tr><td>Values:</td>        <td>22</td><td>7</td><td>3</td><td>5</td><td>14</td><td>1</td><td>17</td><td>8</td></tr>
+<tr><td>InnerIndices:</td>  <td> 1</td><td>2</td><td>0</td><td>2</td><td> 4</td><td>2</td><td> 1</td><td>4</td></tr>
+</table>
+<table class="manual">
+<tr><td>OuterStarts:</td><td>0</td><td>2</td><td>4</td><td>5</td><td>6</td><td>\em 8 </td></tr>
+</table>
+
+A SparseVector is a special case of a SparseMatrix where only the \c Values and \c InnerIndices arrays are stored.
+There is no notion of compressed/uncompressed mode for a SparseVector.
+
+
+\section TutorialSparseExample First example
+
+Before describing each individual class, let's start with the following typical example: solving the Laplace equation \f$ \Delta u = 0 \f$ on a regular 2D grid using a finite difference scheme and Dirichlet boundary conditions.
+Such a problem can be mathematically expressed as a linear problem of the form \f$ Ax=b \f$ where \f$ x \f$ is the vector of \c m unknowns (in our case, the values of the pixels), \f$ b \f$ is the right-hand side vector resulting from the boundary conditions, and \f$ A \f$ is an \f$ m \times m \f$ matrix containing only a few non-zero elements resulting from the discretization of the Laplacian operator.
+
+<table class="manual">
+<tr><td>
+\include Tutorial_sparse_example.cpp
+</td>
+<td>
+\image html Tutorial_sparse_example.jpeg
+</td></tr></table>
+
+In this example, we start by defining a column-major sparse matrix type of double \c SparseMatrix<double>, and a triplet list of the same scalar type \c Triplet<double>. A triplet is a simple object representing a non-zero entry as a (row index, column index, value) triple.
+
+In the main function, we declare a list \c coefficients of triplets (as a std::vector) and the right-hand side vector \f$ b \f$, which are filled by the \a buildProblem function.
+The raw and flat list of non-zero entries is then converted to a true SparseMatrix object \c A.
+Note that the elements of the list do not have to be sorted, and possible duplicate entries will be summed up.
+
+The last step consists of effectively solving the assembled problem.
+Since the resulting matrix \c A is symmetric by construction, we can perform a direct Cholesky factorization via the SimplicialLDLT class which behaves like its LDLT counterpart for dense objects.
+
+The resulting vector \c x contains the pixel values as a 1D array which is saved to a jpeg file shown on the right of the code above.
+
+Describing the \a buildProblem and \a save functions is out of the scope of this tutorial. They are given \ref TutorialSparse_example_details "here" for the curious and for reproducibility purposes.
+
+
+
+
+\section TutorialSparseSparseMatrix The SparseMatrix class
+
+\b %Matrix \b and \b vector \b properties \n
+
+The SparseMatrix and SparseVector classes take three template arguments:
+ * the scalar type (e.g., double),
+ * the storage order (ColMajor or RowMajor, the default is ColMajor),
+ * the inner index type (default is \c int).
+
+As for dense Matrix objects, constructors take the size of the object.
+Here are some examples:
+
+\code
+SparseMatrix<std::complex<float> > mat(1000,2000);         // declares a 1000x2000 column-major compressed sparse matrix of complex<float>
+SparseMatrix<double,RowMajor> mat(1000,2000);              // declares a 1000x2000 row-major compressed sparse matrix of double
+SparseVector<std::complex<float> > vec(1000);              // declares a column sparse vector of complex<float> of size 1000
+SparseVector<double,RowMajor> vec(1000);                   // declares a row sparse vector of double of size 1000
+\endcode
+
+In the rest of the tutorial, \c mat and \c vec represent any sparse-matrix and sparse-vector objects, respectively.
+
+The dimensions of a matrix can be queried using the following functions:
+<table class="manual">
+<tr><td>Standard \n dimensions</td><td>\code
+mat.rows()
+mat.cols()\endcode</td>
+<td>\code
+vec.size() \endcode</td>
+</tr>
+<tr><td>Sizes along the \n inner/outer dimensions</td><td>\code
+mat.innerSize()
+mat.outerSize()\endcode</td>
+<td></td>
+</tr>
+<tr><td>Number of non \n zero coefficients</td><td>\code
+mat.nonZeros() \endcode</td>
+<td>\code
+vec.nonZeros() \endcode</td></tr>
+</table>
+
+
+\b Iterating \b over \b the \b nonzero \b coefficients \n
+
+Random access to the elements of a sparse object can be done through the \c coeffRef(i,j) function.
+However, this function involves a quite expensive binary search.
+In most cases, one only wants to iterate over the non-zero elements. This is achieved by a standard loop over the outer dimension, and then by iterating over the non-zeros of the current inner vector via an InnerIterator. Thus, the non-zero entries have to be visited in the same order as the storage order.
+Here is an example:
+<table class="manual">
+<tr><td>
+\code
+SparseMatrix<double> mat(rows,cols);
+for (int k=0; k<mat.outerSize(); ++k)
+  for (SparseMatrix<double>::InnerIterator it(mat,k); it; ++it)
+  {
+    it.value();
+    it.row();   // row index
+    it.col();   // col index (here it is equal to k)
+    it.index(); // inner index, here it is equal to it.row()
+  }
+\endcode
+</td><td>
+\code
+SparseVector<double> vec(size);
+for (SparseVector<double>::InnerIterator it(vec); it; ++it)
+{
+  it.value(); // == vec[ it.index() ]
+  it.index();
+}
+\endcode
+</td></tr>
+</table>
+For a writable expression, the referenced value can be modified using the valueRef() function.
+If the type of the sparse matrix or vector depends on a template parameter, then the \c typename keyword is
+required to indicate that \c InnerIterator denotes a type; see \ref TopicTemplateKeyword for details.
+
+
+\section TutorialSparseFilling Filling a sparse matrix
+
+Because of the special storage scheme of a SparseMatrix, special care has to be taken when adding new nonzero entries.
+For instance, the cost of a single purely random insertion into a SparseMatrix is \c O(nnz), where \c nnz is the current number of non-zero coefficients.
+
+The simplest way to create a sparse matrix while guaranteeing good performance is thus to first build a list of so-called \em triplets, and then convert it to a SparseMatrix.
+
+Here is a typical usage example:
+\code
+typedef Eigen::Triplet<double> T;
+std::vector<T> tripletList;
+tripletList.reserve(estimation_of_entries);
+for(...)
+{
+  // ...
+  tripletList.push_back(T(i,j,v_ij));
+}
+SparseMatrixType mat(rows,cols);
+mat.setFromTriplets(tripletList.begin(), tripletList.end());
+// mat is ready to go!
+\endcode
+The \c std::vector of triplets might contain the elements in arbitrary order, and might even contain duplicated elements that will be summed up by setFromTriplets().
+See the SparseMatrix::setFromTriplets() function and class Triplet for more details.
+
+
+In some cases, however, slightly higher performance and lower memory consumption can be reached by directly inserting the non-zeros into the destination matrix.
+A typical scenario of this approach is illustrated below:
+\code
+1: SparseMatrix<double> mat(rows,cols);         // default is column major
+2: mat.reserve(VectorXi::Constant(cols,6));
+3: for each i,j such that v_ij != 0
+4:   mat.insert(i,j) = v_ij;                    // alternative: mat.coeffRef(i,j) += v_ij;
+5: mat.makeCompressed();                        // optional
+\endcode
+
+- The key ingredient here is line 2, where we reserve room for 6 non-zeros per column. In many cases, the number of non-zeros per column or row can easily be known in advance. If it varies significantly for each inner vector, then it is possible to specify a reserve size for each inner vector by providing a vector object with an operator[](int j) returning the reserve size of the \c j-th inner vector (e.g., via a VectorXi or std::vector<int>). If only a rough estimate of the number of non-zeros per inner vector can be obtained, it is highly recommended to overestimate it rather than underestimate it. If this line is omitted, then the first insertion of a new element will reserve room for 2 elements per inner vector.
+- Line 4 performs a sorted insertion. In this example, the ideal case is when the \c j-th column is not full and contains non-zeros whose inner indices are smaller than \c i. In this case, this operation boils down to a trivial O(1) operation.
+- When calling insert(i,j), the element (i,j) must not already exist; otherwise use the coeffRef(i,j) method, which allows, e.g., accumulating values. This method first performs a binary search and finally calls insert(i,j) if the element does not already exist. It is more flexible than insert() but also more costly.
+- Line 5 suppresses the remaining empty space and transforms the matrix into compressed column storage.
+
+
+\section TutorialSparseDirectSolvers Solving linear problems
+
+%Eigen currently provides a limited set of built-in solvers, as well as wrappers to external solver libraries.
+They are summarized in the following table:
+
+<table class="manual">
+<tr><th>Class</th><th>Module</th><th>Solver kind</th><th>Matrix kind</th><th>Features related to performance</th>
+    <th>Dependencies,License</th><th class="width20em"><p>Notes</p></th></tr>
+<tr><td>SimplicialLLT    </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+    <td>built-in, LGPL</td>
+    <td>SimplicialLDLT is often preferable</td></tr>
+<tr><td>SimplicialLDLT   </td><td>\link SparseCholesky_Module SparseCholesky \endlink</td><td>Direct LDLt factorization</td><td>SPD</td><td>Fill-in reducing</td>
+    <td>built-in, LGPL</td>
+    <td>Recommended for very sparse and not too large problems (e.g., 2D Poisson eq.)</td></tr>
+<tr><td>ConjugateGradient</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Classic iterative CG</td><td>SPD</td><td>Preconditioning</td>
+    <td>built-in, LGPL</td>
+    <td>Recommended for large symmetric problems (e.g., 3D Poisson eq.)</td></tr>
+<tr><td>BiCGSTAB</td><td>\link IterativeLinearSolvers_Module IterativeLinearSolvers \endlink</td><td>Iterative stabilized bi-conjugate gradient</td><td>Square</td><td>Preconditioning</td>
+    <td>built-in, LGPL</td>
+    <td>Might not always converge</td></tr>
+
+
+<tr><td>PastixLLT \n PastixLDLT \n PastixLU</td><td>\link PaStiXSupport_Module PaStiXSupport \endlink</td><td>Direct LLt, LDLt, LU factorizations</td><td>SPD \n SPD \n Square</td><td>Fill-in reducing, Leverage fast dense algebra, Multithreading</td>
+    <td>Requires the <a href="http://pastix.gforge.inria.fr">PaStiX</a> package, \b CeCILL-C </td>
+    <td>optimized for tough problems and symmetric patterns</td></tr>
+<tr><td>CholmodSupernodalLLT</td><td>\link CholmodSupport_Module CholmodSupport \endlink</td><td>Direct LLt factorization</td><td>SPD</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+    <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+    <td></td></tr>
+<tr><td>UmfPackLU</td><td>\link UmfPackSupport_Module UmfPackSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+    <td>Requires the <a href="http://www.cise.ufl.edu/research/sparse/SuiteSparse/">SuiteSparse</a> package, \b GPL </td>
+    <td></td></tr>
+<tr><td>SuperLU</td><td>\link SuperLUSupport_Module SuperLUSupport \endlink</td><td>Direct LU factorization</td><td>Square</td><td>Fill-in reducing, Leverage fast dense algebra</td>
+    <td>Requires the <a href="http://crd-legacy.lbl.gov/~xiaoye/SuperLU/">SuperLU</a> library, (BSD-like)</td>
+    <td></td></tr>
+</table>
+
+Here \c SPD means symmetric positive definite.
+
+All these solvers follow the same general concept.
+Here is a typical and general example:
+\code
+#include <Eigen/RequiredModuleName>
+// ...
+SparseMatrix<double> A;
+// fill A
+VectorXd b, x;
+// fill b
+// solve Ax = b
+SolverClassName<SparseMatrix<double> > solver;
+solver.compute(A);
+if(solver.info()!=Success) {
+  // decomposition failed
+  return;
+}
+x = solver.solve(b);
+if(solver.info()!=Success) {
+  // solving failed
+  return;
+}
+// solve for another right hand side:
+x1 = solver.solve(b1);
+\endcode
+
+For \c SPD solvers, a second optional template argument allows specifying which triangular part has to be used, e.g.:
+
+\code
+#include <Eigen/IterativeLinearSolvers>
+
+ConjugateGradient<SparseMatrix<double>, Eigen::Upper> solver;
+x = solver.compute(A).solve(b);
+\endcode
+In the above example, only the upper triangular part of the input matrix A is considered for solving. The opposite triangle might either be empty or contain arbitrary values.
+
+In the case where multiple problems with the same sparsity pattern have to be solved, the "compute" step can be decomposed as follows:
+\code
+SolverClassName<SparseMatrix<double> > solver;
+solver.analyzePattern(A);   // for this step the numerical values of A are not used
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+A = ...;                    // modify the values of the nonzeros of A, the nonzeros pattern must stay unchanged
+solver.factorize(A);
+x1 = solver.solve(b1);
+x2 = solver.solve(b2);
+...
+\endcode
+The compute() method is equivalent to calling both analyzePattern() and factorize().
+
+Finally, each solver provides some specific features, such as determinant computation, access to the factors, control of the iterations, and so on.
+More details are available in the documentation of the respective classes.
+
+
+\section TutorialSparseFeatureSet Supported operators and functions
+
+Because of their special storage format, sparse matrices cannot offer the same level of flexibility as dense matrices.
+In Eigen's sparse module we chose to expose only the subset of the dense matrix API which can be efficiently implemented.
+In the following \em sm denotes a sparse matrix, \em sv a sparse vector, \em dm a dense matrix, and \em dv a dense vector.
+
+\subsection TutorialSparse_BasicOps Basic operations
+
+%Sparse expressions support most of the unary and binary coefficient wise operations:
+\code
+sm1.real()   sm1.imag()   -sm1                    0.5*sm1
+sm1+sm2      sm1-sm2      sm1.cwiseProduct(sm2)
+\endcode
+However, a strong restriction is that the storage orders must match. For instance, in the following example:
+\code
+sm4 = sm1 + sm2 + sm3;
+\endcode
+sm1, sm2, and sm3 must all be row-major or all column-major.
+On the other hand, there is no restriction on the target matrix sm4.
+For instance, this means that for computing \f$ A^T + A \f$, the matrix \f$ A^T \f$ must be evaluated into a temporary matrix of compatible storage order:
+\code
+SparseMatrix<double> A, B;
+B = SparseMatrix<double>(A.transpose()) + A;
+\endcode
+
+Binary coefficient wise operators can also mix sparse and dense expressions:
+\code
+sm2 = sm1.cwiseProduct(dm1);
+dm2 = sm1 + dm1;
+\endcode
+
+
+%Sparse expressions also support transposition:
+\code
+sm1 = sm2.transpose();
+sm1 = sm2.adjoint();
+\endcode
+However, there is no transposeInPlace() method.
+
+
+\subsection TutorialSparse_Products Matrix products
+
+%Eigen supports various kinds of sparse matrix products, which are summarized below:
+  - \b sparse-dense:
+    \code
+dv2 = sm1 * dv1;
+dm2 = dm1 * sm1.adjoint();
+dm2 = 2. * sm1 * dm1;
+    \endcode
+  - \b symmetric \b sparse-dense. The product of a sparse symmetric matrix with a dense matrix (or vector) can also be optimized by specifying the symmetry with selfadjointView():
+    \code
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
+    \endcode
+  - \b sparse-sparse. For sparse-sparse products, two different algorithms are available. The default one is conservative and preserves the explicit zeros that might appear:
+    \code
+sm3 = sm1 * sm2;
+sm3 = 4 * sm1.adjoint() * sm2;
+    \endcode
+    The second algorithm prunes, on the fly, the explicit zeros and the values smaller than a given threshold. It is enabled and controlled through the prune() functions:
+    \code
+sm3 = (sm1 * sm2).prune();                  // removes numerical zeros
+sm3 = (sm1 * sm2).prune(ref);               // removes elements much smaller than ref
+sm3 = (sm1 * sm2).prune(ref,epsilon);       // removes elements smaller than ref*epsilon
+    \endcode
+
+  - \b permutations. Finally, permutations can be applied to sparse matrices too:
+    \code
+PermutationMatrix<Dynamic,Dynamic> P = ...;
+sm2 = P * sm1;
+sm2 = sm1 * P.inverse();
+sm2 = sm1.transpose() * P;
+    \endcode
+
+
+\subsection TutorialSparse_TriangularSelfadjoint Triangular and selfadjoint views
+
+Just as with dense matrices, the triangularView() function can be used to address a triangular part of the matrix, and perform triangular solves with a dense right-hand side:
+\code
+dm2 = sm1.triangularView<Lower>().solve(dm1);
+dv2 = sm1.transpose().triangularView<Upper>().solve(dv1);
+\endcode
+
+The selfadjointView() function permits various operations:
+ - optimized sparse-dense matrix products:
+    \code
+dm2 = sm1.selfadjointView<>() * dm1;        // if all coefficients of sm1 are stored
+dm2 = sm1.selfadjointView<Upper>() * dm1;   // if only the upper part of sm1 is stored
+dm2 = sm1.selfadjointView<Lower>() * dm1;   // if only the lower part of sm1 is stored
+    \endcode
+ - copy of triangular parts:
+    \code
+sm2 = sm1.selfadjointView<Upper>();                               // makes a full selfadjoint matrix from the upper triangular part
+sm2.selfadjointView<Lower>() = sm1.selfadjointView<Upper>();      // copies the upper triangular part to the lower triangular part
+    \endcode
+ - application of symmetric permutations:
+ \code
+PermutationMatrix<Dynamic,Dynamic> P = ...;
+sm2 = sm1.selfadjointView<Upper>().twistedBy(P);                              // compute P S P' from the upper triangular part of sm1, and make it a full matrix
+sm2.selfadjointView<Lower>() = sm1.selfadjointView<Lower>().twistedBy(P);     // compute P S P' from the lower triangular part of sm1, and then only compute the lower part
+ \endcode
+
+\subsection TutorialSparse_Submat Sub-matrices
+
+%Sparse matrices do not yet support the addressing of arbitrary sub-matrices. Currently, one can only reference a set of contiguous \em inner vectors, i.e., a set of contiguous rows for a row-major matrix, or a set of contiguous columns for a column-major matrix:
+\code
+  sm1.innerVector(j);       // returns an expression of the j-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
+  sm1.innerVectors(j, nb);  // returns an expression of the nb columns (resp. rows) starting from the j-th column (resp. row)
+                            // of the matrix if sm1 is col-major (resp. row-major)
+  sm1.middleRows(j, nb);    // for row major matrices only, get a range of nb rows
+  sm1.middleCols(j, nb);    // for column major matrices only, get a range of nb columns
+\endcode
+
+\li \b Next: \ref TutorialMapClass
+
+*/
+
+}
diff --git a/doc/C10_TutorialMapClass.dox b/doc/C10_TutorialMapClass.dox
new file mode 100644
index 0000000..09e7927
--- /dev/null
+++ b/doc/C10_TutorialMapClass.dox
@@ -0,0 +1,96 @@
+namespace Eigen {
+
+/** \page TutorialMapClass Tutorial page 10 - Interfacing with C/C++ arrays and external libraries: the %Map class
+
+\ingroup Tutorial
+
+\li \b Previous: \ref TutorialSparse
+\li \b Next: \ref TODO
+
+This tutorial page explains how to work with "raw" C++ arrays.  This can be useful in a variety of contexts, particularly when "importing" vectors and matrices from other libraries into Eigen.
+
+\b Table \b of \b contents
+  - \ref TutorialMapIntroduction
+  - \ref TutorialMapTypes
+  - \ref TutorialMapUsing
+  - \ref TutorialMapPlacementNew
+
+\section TutorialMapIntroduction Introduction
+
+Occasionally you may have a pre-defined array of numbers that you want to use within Eigen as a vector or matrix. While one option is to make a copy of the data, most commonly you will want to re-use this memory as an Eigen type. Fortunately, this is very easy with the Map class.
+
+\section TutorialMapTypes Map types and declaring Map variables
+
+A Map object has a type defined by its Eigen equivalent:
+\code
+Map<Matrix<typename Scalar, int RowsAtCompileTime, int ColsAtCompileTime> >
+\endcode
+Note that, in this default case, a Map requires just a single template parameter.  
+
+To construct a Map variable, you need two other pieces of information: a pointer to the region of memory defining the array of coefficients, and the desired shape of the matrix or vector.  For example, to define a matrix of \c float with sizes determined at compile time, you might do the following:
+\code
+Map<MatrixXf> mf(pf,rows,columns);
+\endcode
+where \c pf is a \c float \c * pointing to the array of memory.  A fixed-size read-only vector of integers might be declared as
+\code
+Map<const Vector4i> mi(pi);
+\endcode
+where \c pi is an \c int \c *. In this case the size does not have to be passed to the constructor, because it is already specified by the Matrix/Array type.
+
+Note that Map does not have a default constructor; you \em must pass a pointer to initialize the object. However, you can work around this requirement (see \ref TutorialMapPlacementNew).
+
+Map is flexible enough to accommodate a variety of different data representations.  There are two other (optional) template parameters:
+\code
+Map<typename MatrixType,
+    int MapOptions,
+    typename StrideType>
+\endcode
+\li \c MapOptions specifies whether the pointer is \c #Aligned or \c #Unaligned.  The default is \c #Unaligned.
+\li \c StrideType allows you to specify a custom layout for the memory array, using the Stride class.  One example would be to specify that the data array is organized in row-major format:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include Tutorial_Map_rowmajor.cpp </td>
+<td>\verbinclude Tutorial_Map_rowmajor.out </td>
+</tr></table>
+However, Stride is even more flexible than this; for details, see the documentation for the Map and Stride classes.
+
+\section TutorialMapUsing Using Map variables
+
+You can use a Map object just like any other Eigen type:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include Tutorial_Map_using.cpp </td>
+<td>\verbinclude Tutorial_Map_using.out </td>
+</tr></table>
+
+However, when writing functions taking Eigen types, it is important to realize that a Map type is \em not identical to its Dense equivalent.  See \ref TopicFunctionTakingEigenTypesMultiarguments for details.
+
+\section TutorialMapPlacementNew Changing the mapped array
+
+It is possible to change the array of a Map object after declaration, using the C++ "placement new" syntax:
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr>
+<td>\include Map_placement_new.cpp </td>
+<td>\verbinclude Map_placement_new.out </td>
+</tr></table>
+Despite appearances, this does not invoke the memory allocator, because the syntax specifies the location for storing the result.
+
+This syntax makes it possible to declare a Map object without first knowing the mapped array's location in memory:
+\code
+Map<Matrix3f> A(NULL);  // don't try to use this matrix yet!
+VectorXf b(n_matrices);
+for (int i = 0; i < n_matrices; i++)
+{
+  new (&A) Map<Matrix3f>(get_matrix_pointer(i));
+  b(i) = A.trace();
+}
+\endcode
+
+\li \b Next: \ref TODO
+
+*/
+
+}
diff --git a/doc/CMakeLists.txt b/doc/CMakeLists.txt
new file mode 100644
index 0000000..96bff41
--- /dev/null
+++ b/doc/CMakeLists.txt
@@ -0,0 +1,78 @@
+project(EigenDoc)
+
+set_directory_properties(PROPERTIES EXCLUDE_FROM_ALL TRUE)
+
+if(CMAKE_COMPILER_IS_GNUCXX)
+  if(CMAKE_SYSTEM_NAME MATCHES Linux)
+    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O1 -g1")
+  endif(CMAKE_SYSTEM_NAME MATCHES Linux)
+endif(CMAKE_COMPILER_IS_GNUCXX)
+
+configure_file(
+  ${Eigen_SOURCE_DIR}/unsupported/doc/Doxyfile.in
+  ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile-unsupported
+)
+
+configure_file(
+  ${CMAKE_CURRENT_SOURCE_DIR}/Doxyfile.in
+  ${CMAKE_CURRENT_BINARY_DIR}/Doxyfile
+)
+
+configure_file(
+  ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_header.html.in
+  ${CMAKE_CURRENT_BINARY_DIR}/eigendoxy_header.html
+)
+
+configure_file(
+  ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_footer.html.in
+  ${CMAKE_CURRENT_BINARY_DIR}/eigendoxy_footer.html
+)
+
+set(examples_targets "")
+set(snippets_targets "")
+
+add_definitions("-DEIGEN_MAKING_DOCS")
+
+add_subdirectory(examples)
+add_subdirectory(special_examples)
+add_subdirectory(snippets)
+
+add_custom_target(
+  doc-eigen-prerequisites
+  ALL
+  COMMAND ${CMAKE_COMMAND} -E make_directory ${CMAKE_CURRENT_BINARY_DIR}/html/
+  COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/eigendoxy_tabs.css
+                                   ${CMAKE_CURRENT_BINARY_DIR}/html/
+  COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/Eigen_Silly_Professor_64x64.png
+                                   ${CMAKE_CURRENT_BINARY_DIR}/html/
+  COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/AsciiQuickReference.txt
+                                   ${CMAKE_CURRENT_BINARY_DIR}/html/
+  WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}
+)
+
+add_custom_target(
+  doc-unsupported-prerequisites
+  ALL
+  COMMAND ${CMAKE_COMMAND} -E make_directory ${Eigen_BINARY_DIR}/doc/html/unsupported
+  COMMAND ${CMAKE_COMMAND} -E copy ${Eigen_SOURCE_DIR}/doc/eigendoxy_tabs.css
+                                   ${Eigen_BINARY_DIR}/doc/html/unsupported/
+  COMMAND ${CMAKE_COMMAND} -E copy ${Eigen_SOURCE_DIR}/doc/Eigen_Silly_Professor_64x64.png
+                                   ${Eigen_BINARY_DIR}/doc/html/unsupported/
+  WORKING_DIRECTORY ${Eigen_BINARY_DIR}/doc
+)
+
+add_dependencies(doc-eigen-prerequisites all_snippets all_examples)
+add_dependencies(doc-unsupported-prerequisites unsupported_snippets unsupported_examples)
+
+add_custom_target(doc ALL
+  COMMAND doxygen Doxyfile-unsupported
+  COMMAND doxygen
+  COMMAND doxygen Doxyfile-unsupported # run doxygen twice to get proper eigen <=> unsupported cross references
+  COMMAND ${CMAKE_COMMAND} -E rename html eigen-doc
+  COMMAND ${CMAKE_COMMAND} -E tar cvfz eigen-doc/eigen-doc.tgz eigen-doc/*.html eigen-doc/*.map eigen-doc/*.png eigen-doc/*.css eigen-doc/*.js eigen-doc/*.txt eigen-doc/unsupported
+  COMMAND ${CMAKE_COMMAND} -E rename eigen-doc html
+  WORKING_DIRECTORY ${Eigen_BINARY_DIR}/doc)
+
+add_dependencies(doc doc-eigen-prerequisites doc-unsupported-prerequisites)
diff --git a/doc/D01_StlContainers.dox b/doc/D01_StlContainers.dox
new file mode 100644
index 0000000..b5dbf06
--- /dev/null
+++ b/doc/D01_StlContainers.dox
@@ -0,0 +1,65 @@
+namespace Eigen {
+
+/** \page TopicStlContainers Using STL Containers with Eigen
+
+\b Table \b of \b contents
+  - \ref summary
+  - \ref allocator
+  - \ref vector
+
+\section summary Executive summary
+
+Using STL containers on \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", or classes having members of such types, requires taking the following two steps:
+
+\li A 16-byte-aligned allocator must be used. Eigen does provide one ready for use: aligned_allocator.
+\li If you want to use the std::vector container, you need to \#include <Eigen/StdVector>.
+
+These issues arise only with \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" and \ref TopicStructHavingEigenMembers "structures having such Eigen objects as member". For other Eigen types, such as Vector3f or MatrixXd, no special care is needed when using STL containers.
+
+\section allocator Using an aligned allocator
+
+STL containers take an optional template parameter, the allocator type. When using STL containers on \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", you need to tell the container to use an allocator that will always allocate memory at 16-byte-aligned locations. Fortunately, Eigen does provide such an allocator: Eigen::aligned_allocator.
+
+For example, instead of
+\code
+std::map<int, Eigen::Vector4f>
+\endcode
+you need to use
+\code
+std::map<int, Eigen::Vector4f, std::less<int>, 
+         Eigen::aligned_allocator<std::pair<const int, Eigen::Vector4f> > >
+\endcode
+Note that the third parameter "std::less<int>" is just the default value, but we have to include it because we want to specify the fourth parameter, which is the allocator type.
+
+\section vector The case of std::vector
+
+The situation with std::vector was even worse (explanation below) so we had to specialize it for the Eigen::aligned_allocator type. In practice you \b must use the Eigen::aligned_allocator (not another aligned allocator), \b and \#include <Eigen/StdVector>.
+
+Here is an example:
+\code
+#include<Eigen/StdVector>
+\/* ... *\/
+std::vector<Eigen::Vector4f,Eigen::aligned_allocator<Eigen::Vector4f> >
+\endcode
+
+\subsection vector_spec An alternative - specializing std::vector for Eigen types
+
+As an alternative to the recommended approach described above, you have the option to specialize std::vector for Eigen types requiring alignment. 
+The advantage is that you won't need to declare std::vector all over with Eigen::aligned_allocator. One drawback, on the other hand, is that
+the specialization needs to be defined before all code pieces in which e.g. std::vector<Vector2d> is used. Otherwise, without knowing the specialization,
+the compiler will compile that particular instance with the default std::allocator, and your program will most likely crash.
+
+Here is an example:
+\code
+#include<Eigen/StdVector>
+\/* ... *\/
+EIGEN_DEFINE_STL_VECTOR_SPECIALIZATION(Vector2d)
+std::vector<Eigen::Vector2d>
+\endcode
+
+<span class="note">\b Explanation: The resize() method of std::vector takes a value_type argument (defaulting to value_type()). So with std::vector<Eigen::Vector4f>, some Eigen::Vector4f objects will be passed by value, which discards any alignment modifiers, so an Eigen::Vector4f can be created at an unaligned location. In order to avoid that, the only solution we saw was to specialize std::vector to make it work on a slight modification of, here, Eigen::Vector4f, that is able to deal properly with this situation.
+</span>
+
+*/
+
+}
diff --git a/doc/D03_WrongStackAlignment.dox b/doc/D03_WrongStackAlignment.dox
new file mode 100644
index 0000000..b0e42ed
--- /dev/null
+++ b/doc/D03_WrongStackAlignment.dox
@@ -0,0 +1,56 @@
+namespace Eigen {
+
+/** \page TopicWrongStackAlignment Compiler making a wrong assumption on stack alignment
+
+<h4>It appears that this was a GCC bug that has been fixed in GCC 4.5.
+If you hit this issue, please upgrade to GCC 4.5 and report to us, so we can update this page.</h4>
+
+This is an issue that, so far, we have encountered only with GCC on Windows: for instance, MinGW and TDM-GCC.
+
+By default, in a function like this,
+
+\code
+void foo()
+{
+  Eigen::Quaternionf q;
+  //...
+}
+\endcode
+
+GCC assumes that the stack is already 16-byte-aligned so that the object \a q will be created at a 16-byte-aligned location. For this reason, it doesn't take any special care to explicitly align the object \a q, as Eigen requires.
+
+The problem is that, in some particular cases, this assumption can be wrong on Windows, where the stack is only guaranteed to have 4-byte alignment. Indeed, even though GCC takes care of aligning the stack in the main function and does its best to keep it aligned, when a function is called from another thread or from a binary compiled with another compiler, the stack alignment can be corrupted. This results in the object 'q' being created at an unaligned location, making your program crash with the \ref TopicUnalignedArrayAssert "assertion on unaligned arrays". So far we found the three following solutions.
+
+
+\section sec_sol1 Local solution
+
+A local solution is to mark such a function with this attribute:
+\code
+__attribute__((force_align_arg_pointer)) void foo()
+{
+  Eigen::Quaternionf q;
+  //...
+}
+\endcode
+Read <a href="http://gcc.gnu.org/onlinedocs/gcc-4.4.0/gcc/Function-Attributes.html#Function-Attributes">this GCC documentation</a> to understand what this does. Of course this should only be done with GCC on Windows, so for portability you'll have to encapsulate this in a macro which you leave empty on other platforms. The advantage of this solution is that you can finely select which functions might have a corrupted stack alignment. On the downside, this has to be done for every such function, so you may prefer one of the following two global solutions.
+
+
+\section sec_sol2 Global solutions
+
+A global solution is to edit your project so that when compiling with GCC on Windows, you pass this option to GCC:
+\code
+-mincoming-stack-boundary=2
+\endcode
+Explanation: this tells GCC that the stack is only required to be aligned to 2^2=4 bytes, so that GCC now knows that it really must take extra care to honor the 16-byte alignment of \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" when needed.
+
+Another global solution is to pass this option to gcc:
+\code
+-mstackrealign
+\endcode
+which has the same effect as adding the \c force_align_arg_pointer attribute to all functions.
+
+These global solutions are easy to use, but note that they may slow down your program, because they add extra prologue/epilogue instructions to every function.
+
+*/
+
+}
diff --git a/doc/D07_PassingByValue.dox b/doc/D07_PassingByValue.dox
new file mode 100644
index 0000000..b1e5e68
--- /dev/null
+++ b/doc/D07_PassingByValue.dox
@@ -0,0 +1,40 @@
+namespace Eigen {
+
+/** \page TopicPassingByValue Passing Eigen objects by value to functions
+
+Passing objects by value is almost always a very bad idea in C++, as it means useless copies; one should pass them by reference instead.
+
+With Eigen, this is even more important: passing \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen objects" by value is not only inefficient, it can be illegal or make your program crash! And the reason is that these Eigen objects have alignment modifiers that aren't respected when they are passed by value.
+
+So for example, a function like this, where v is passed by value:
+
+\code
+void my_function(Eigen::Vector2d v);
+\endcode
+
+needs to be rewritten as follows, passing v by reference:
+
+\code
+void my_function(const Eigen::Vector2d& v);
+\endcode
+
+Likewise, if you have a class with an Eigen object as a member:
+
+\code
+struct Foo
+{
+  Eigen::Vector2d v;
+};
+void my_function(Foo v);
+\endcode
+
+This function also needs to be rewritten like this:
+\code
+void my_function(const Foo& v);
+\endcode
+
+Note that on the other hand, there is no problem with functions that return objects by value.
+
+*/
+
+}
diff --git a/doc/D09_StructHavingEigenMembers.dox b/doc/D09_StructHavingEigenMembers.dox
new file mode 100644
index 0000000..51789ca
--- /dev/null
+++ b/doc/D09_StructHavingEigenMembers.dox
@@ -0,0 +1,198 @@
+namespace Eigen {
+
+/** \page TopicStructHavingEigenMembers Structures Having Eigen Members
+
+\b Table \b of \b contents
+  - \ref summary
+  - \ref what
+  - \ref how
+  - \ref why
+  - \ref movetotop
+  - \ref bugineigen
+  - \ref conditional
+  - \ref othersolutions
+
+\section summary Executive Summary
+
+If you define a structure having members of \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types", you must overload its "operator new" so that it generates 16-byte-aligned pointers. Fortunately, Eigen provides you with a macro EIGEN_MAKE_ALIGNED_OPERATOR_NEW that does that for you.
+
+\section what What kind of code needs to be changed?
+
+The kind of code that needs to be changed is this:
+
+\code
+class Foo
+{
+  ...
+  Eigen::Vector2d v;
+  ...
+};
+
+...
+
+Foo *foo = new Foo;
+\endcode
+
+In other words: you have a class that has as a member a \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen object", and then you dynamically create an object of that class.
+
+\section how How should such code be modified?
+
+It is very easy: you just need to put an EIGEN_MAKE_ALIGNED_OPERATOR_NEW macro in a public part of your class, like this:
+
+\code
+class Foo
+{
+  ...
+  Eigen::Vector2d v;
+  ...
+public:
+  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
+};
+
+...
+
+Foo *foo = new Foo;
+\endcode
+
+This macro makes "new Foo" always return an aligned pointer.
+
+If this approach is too intrusive, see also the \ref othersolutions.
+
+\section why Why is this needed?
+
+OK let's say that your code looks like this:
+
+\code
+class Foo
+{
+  ...
+  Eigen::Vector2d v;
+  ...
+};
+
+...
+
+Foo *foo = new Foo;
+\endcode
+
+An Eigen::Vector2d consists of two doubles, which is 128 bits: exactly the size of an SSE packet, which makes it possible to use SSE for all sorts of operations on this vector. But SSE instructions (at least the ones that Eigen uses, which are the fast ones) require 128-bit alignment; otherwise you get a segmentation fault.
+
+For this reason, Eigen takes care by itself to require 128-bit alignment for Eigen::Vector2d, by doing two things:
+\li Eigen requires 128-bit alignment for the Eigen::Vector2d's array (of 2 doubles). With GCC, this is done with an __attribute__((aligned(16))).
+\li Eigen overloads the "operator new" of Eigen::Vector2d so it will always return 128-bit aligned pointers.
+
+Thus, normally, you don't have to worry about anything, Eigen handles alignment for you...
+
+... except in one case: when you have a class Foo like above and you dynamically allocate a new Foo as above. Then, since Foo doesn't have an aligned "operator new", the returned pointer foo is not necessarily 128-bit aligned.
+
+The alignment attribute of the member v is then relative to the start of the class, foo. If the foo pointer wasn't aligned, then foo->v won't be aligned either!
+
+The solution is to let class Foo have an aligned "operator new", as we showed in the previous section.
+
+\section movetotop Should I then put all the members of Eigen types at the beginning of my class?
+
+That's not required. Since Eigen takes care of declaring 128-bit alignment, all members that need it are automatically 128-bit aligned relative to the class. So code like this works fine:
+
+\code
+class Foo
+{
+  double x;
+  Eigen::Vector2d v;
+public:
+  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
+};
+\endcode
+
+\section dynamicsize What about dynamic-size matrices and vectors?
+
+Dynamic-size matrices and vectors, such as Eigen::VectorXd, dynamically allocate their own array of coefficients, so they take care of requiring absolute alignment automatically. Hence, they don't cause this issue. The issue discussed here is only with \ref TopicFixedSizeVectorizable "fixed-size vectorizable matrices and vectors".
+
+\section bugineigen So is this a bug in Eigen?
+
+No, it's not our bug. It's more like an inherent problem of the C++98 language specification, and seems to be taken care of in the upcoming language revision: <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2007/n2341.pdf">see this document</a>.
+
+\section conditional What if I want to do this conditionally (depending on template parameters)?
+
+For this situation, we offer the macro EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign). It will generate aligned operators like EIGEN_MAKE_ALIGNED_OPERATOR_NEW if NeedsToAlign is true. It will generate operators with the default alignment if NeedsToAlign is false.
+
+Example:
+
+\code
+template<int n> class Foo
+{
+  typedef Eigen::Matrix<float,n,1> Vector;
+  enum { NeedsToAlign = (sizeof(Vector)%16)==0 };
+  ...
+  Vector v;
+  ...
+public:
+  EIGEN_MAKE_ALIGNED_OPERATOR_NEW_IF(NeedsToAlign)
+};
+
+...
+
+Foo<4> *foo4 = new Foo<4>; // foo4 is guaranteed to be 128-bit aligned
+Foo<3> *foo3 = new Foo<3>; // foo3 has only the system default alignment guarantee
+\endcode
+
+
+\section othersolutions Other solutions
+
+In case putting the EIGEN_MAKE_ALIGNED_OPERATOR_NEW macro everywhere is too intrusive, there exist at least two other solutions.
+
+\subsection othersolutions1 Disabling alignment
+
+The first is to disable the alignment requirement for the fixed-size members:
+\code
+class Foo
+{
+  ...
+  Eigen::Matrix<double,2,1,Eigen::DontAlign> v;
+  ...
+};
+\endcode
+This has the effect of disabling vectorization when using \c v.
+If a member function of Foo uses it several times, it is still possible to re-enable vectorization by copying it into an aligned temporary vector:
+\code
+void Foo::bar()
+{
+  Eigen::Vector2d av(v);
+  // use av instead of v
+  ...
+  // if av changed, then do:
+  v = av;
+}
+\endcode
+
+\subsection othersolutions2 Private structure
+
+The second consists of storing the fixed-size objects in a private struct which is dynamically allocated when the main object is constructed:
+
+\code
+struct Foo_d
+{
+  EIGEN_MAKE_ALIGNED_OPERATOR_NEW
+  Vector2d v;
+  ...
+};
+
+
+struct Foo {
+  Foo() { init_d(); }
+  ~Foo() { delete d; }
+  void bar()
+  {
+    // use d->v instead of v
+    ...
+  }
+private:
+  void init_d() { d = new Foo_d; }
+  Foo_d* d;
+};
+\endcode
+
+The clear advantage here is that the class Foo remains unchanged regarding alignment issues. The drawback is that a heap allocation will be required in any case.
+
+*/
+
+}
diff --git a/doc/D11_UnalignedArrayAssert.dox b/doc/D11_UnalignedArrayAssert.dox
new file mode 100644
index 0000000..d173ee5
--- /dev/null
+++ b/doc/D11_UnalignedArrayAssert.dox
@@ -0,0 +1,121 @@
+namespace Eigen {
+
+/** \page TopicUnalignedArrayAssert Explanation of the assertion on unaligned arrays
+
+Hello! You are seeing this webpage because your program terminated on an assertion failure like this one:
+<pre>
+my_program: path/to/eigen/Eigen/src/Core/DenseStorage.h:44:
+Eigen::internal::matrix_array<T, Size, MatrixOptions, Align>::internal::matrix_array()
+[with T = double, int Size = 2, int MatrixOptions = 2, bool Align = true]:
+Assertion `(reinterpret_cast<size_t>(array) & 0xf) == 0 && "this assertion
+is explained here: http://eigen.tuxfamily.org/dox/UnalignedArrayAssert.html
+**** READ THIS WEB PAGE !!! ****"' failed.
+</pre>
+
+There are 4 known causes for this issue. Please read on to understand them and learn how to fix them.
+
+\b Table \b of \b contents
+ - \ref where
+ - \ref c1
+ - \ref c2
+ - \ref c3
+ - \ref c4
+ - \ref explanation
+ - \ref getrid
+
+\section where Where in my own code is the cause of the problem?
+
+First of all, you need to find out where in your own code this assertion was triggered from. At first glance, the error message doesn't look helpful, as it refers to a file inside Eigen! However, since your program crashed, if you can reproduce the crash, you can get a backtrace using any debugger. For example, if you're using GCC, you can use the GDB debugger as follows:
+\code
+$ gdb ./my_program          # Start GDB on your program
+> run                       # Start running your program
+...                         # Now reproduce the crash!
+> bt                        # Obtain the backtrace
+\endcode
+Now that you know precisely where in your own code the problem is happening, read on to understand what you need to change.
+
+\section c1 Cause 1: Structures having Eigen objects as members
+
+If you have code like this,
+
+\code
+class Foo
+{
+  //...
+  Eigen::Vector2d v;
+  //...
+};
+//...
+Foo *foo = new Foo;
+\endcode
+
+then you need to read this separate page: \ref TopicStructHavingEigenMembers "Structures Having Eigen Members".
+
+Note that Eigen::Vector2d is only used here as an example; more generally, the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types".
+
+\section c2 Cause 2: STL Containers
+
+If you use STL Containers such as std::vector, std::map, ..., with Eigen objects, or with classes containing Eigen objects, like this,
+
+\code
+std::vector<Eigen::Matrix2f> my_vector;
+struct my_class { ... Eigen::Matrix2f m; ... };
+std::map<int, my_class> my_map;
+\endcode
+
+then you need to read this separate page: \ref TopicStlContainers "Using STL Containers with Eigen".
+
+Note that Eigen::Matrix2f is only used here as an example; more generally, the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types" and \ref TopicStructHavingEigenMembers "structures having such Eigen objects as members".
+
+\section c3 Cause 3: Passing Eigen objects by value
+
+If some function in your code is getting an Eigen object passed by value, like this,
+
+\code
+void func(Eigen::Vector4d v);
+\endcode
+
+then you need to read this separate page: \ref TopicPassingByValue "Passing Eigen objects by value to functions".
+
+Note that Eigen::Vector4d is only used here as an example; more generally, the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types".
+
+\section c4 Cause 4: Compiler making a wrong assumption on stack alignment (for instance GCC on Windows)
+
+This is a must-read for people using GCC on Windows (like MinGW or TDM-GCC). If you have this assertion failure in an innocent function declaring a local variable like this:
+
+\code
+void foo()
+{
+  Eigen::Quaternionf q;
+  //...
+}
+\endcode
+
+then you need to read this separate page: \ref TopicWrongStackAlignment "Compiler making a wrong assumption on stack alignment".
+
+Note that Eigen::Quaternionf is only used here as an example; more generally, the issue arises for all \ref TopicFixedSizeVectorizable "fixed-size vectorizable Eigen types".
+
+\section explanation General explanation of this assertion
+
+\ref TopicFixedSizeVectorizable "Fixed-size vectorizable Eigen objects" must absolutely be created at 16-byte-aligned locations, otherwise SIMD instructions addressing them will crash.
+
+Eigen normally takes care of these alignment issues for you, by setting an alignment attribute on them and by overloading their "operator new".
+
+However there are a few corner cases where these alignment settings get overridden: they are the possible causes for this assertion.
+
+\section getrid I don't care about vectorization, how do I get rid of that stuff?
+
+Two possibilities:
+<ul>
+  <li>Define EIGEN_DONT_ALIGN_STATICALLY. That disables all 128-bit static alignment code, while keeping 128-bit heap alignment. This has the effect of
+      disabling vectorization for fixed-size objects (like Matrix4d) while keeping vectorization of dynamic-size objects
+      (like MatrixXd). But do note that this breaks ABI compatibility with the default behavior of 128-bit static alignment.</li>
+  <li>Or define both EIGEN_DONT_VECTORIZE and EIGEN_DISABLE_UNALIGNED_ARRAY_ASSERT. This keeps the
+      128-bit alignment code and thus preserves ABI compatibility, but completely disables vectorization.</li>
+</ul>
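As a sketch (assuming Eigen's headers are on your include path), these tokens must be defined before any Eigen header is included, for example at the top of your source file or via compiler \c -D flags:

```cpp
// Pick ONE of the two options above, *before* including any Eigen header.

// Option 1: disable static alignment (breaks ABI compatibility):
#define EIGEN_DONT_ALIGN_STATICALLY

// Option 2: keep alignment code but disable vectorization entirely
// (define both tokens together instead of the one above):
// #define EIGEN_DONT_VECTORIZE
// #define EIGEN_DISABLE_UNALIGNED_ARRAY_ASSERT

#include <Eigen/Core>
```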
+
+For more information, see <a href="http://eigen.tuxfamily.org/index.php?title=FAQ#I_disabled_vectorization.2C_but_I.27m_still_getting_annoyed_about_alignment_issues.21">this FAQ</a>.
+
+*/
+
+}
diff --git a/doc/Doxyfile.in b/doc/Doxyfile.in
new file mode 100644
index 0000000..e9e89d4
--- /dev/null
+++ b/doc/Doxyfile.in
@@ -0,0 +1,1484 @@
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project
+#
+# All text after a hash (#) is considered a comment and will be ignored
+# The format is:
+#       TAG = value [value, ...]
+# For lists items can also be appended using:
+#       TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (" ")
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all
+# text before the first occurrence of this tag. Doxygen uses libiconv (or the
+# iconv built into libc) for the transcoding. See
+# http://www.gnu.org/software/libiconv for the list of possible encodings.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded
+# by quotes) that should identify the project.
+
+PROJECT_NAME           = Eigen
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number.
+# This could be handy for archiving the generated documentation or
+# if some version control system is used.
+
+#EIGEN_VERSION is set in the root CMakeLists.txt
+PROJECT_NUMBER         = "${EIGEN_VERSION}"
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute)
+# base path where the generated documentation will be put.
+# If a relative path is entered, it will be relative to the location
+# where doxygen was started. If left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = "${Eigen_BINARY_DIR}/doc"
+
+# If the CREATE_SUBDIRS tag is set to YES, then doxygen will create
+# 4096 sub-directories (in 2 levels) under the output directory of each output
+# format and will distribute the generated files over these directories.
+# Enabling this option can be useful when feeding doxygen a huge amount of
+# source files, where putting all generated files in the same directory would
+# otherwise cause performance problems for the file system.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# The default language is English, other supported languages are:
+# Afrikaans, Arabic, Brazilian, Catalan, Chinese, Chinese-Traditional,
+# Croatian, Czech, Danish, Dutch, Farsi, Finnish, French, German, Greek,
+# Hungarian, Italian, Japanese, Japanese-en (Japanese with English messages),
+# Korean, Korean-en, Lithuanian, Norwegian, Macedonian, Persian, Polish,
+# Portuguese, Romanian, Russian, Serbian, Slovak, Slovene, Spanish, Swedish,
+# and Ukrainian.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES (the default) Doxygen will
+# include brief member descriptions after the members that are listed in
+# the file and class documentation (similar to JavaDoc).
+# Set to NO to disable this.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES (the default) Doxygen will prepend
+# the brief description of a member or function before the detailed description.
+# Note: if both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator
+# that is used to form the text in various listings. Each string
+# in this list, if found as the leading text of the brief description, will be
+# stripped from the text and the result after processing the whole list, is
+# used as the annotated text. Otherwise, the brief description is used as-is.
+# If left blank, the following values are used ("$name" is automatically
+# replaced with the name of the entity): "The $name class" "The $name widget"
+# "The $name file" "is" "provides" "specifies" "contains"
+# "represents" "a" "an" "the"
+
+ABBREVIATE_BRIEF       = "The $name class" \
+                         "The $name widget" \
+                         "The $name file" \
+                         is \
+                         provides \
+                         specifies \
+                         contains \
+                         represents \
+                         a \
+                         an \
+                         the
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# Doxygen will generate a detailed section even if there is only a brief
+# description.
+
+ALWAYS_DETAILED_SEC    = NO
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+
+INLINE_INHERITED_MEMB  = YES
+
+# If the FULL_PATH_NAMES tag is set to YES then Doxygen will prepend the full
+# path before files name in the file list and in the header files. If set
+# to NO the shortest path that makes the file name unique will be used.
+
+FULL_PATH_NAMES        = NO
+
+# If the FULL_PATH_NAMES tag is set to YES then the STRIP_FROM_PATH tag
+# can be used to strip a user-defined part of the path. Stripping is
+# only done if one of the specified strings matches the left-hand part of
+# the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the
+# path to strip.
+
+STRIP_FROM_PATH        =
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of
+# the path mentioned in the documentation of a class, which tells
+# the reader which header file to include in order to use a class.
+# If left blank only the name of the header file containing the class
+# definition is used. Otherwise one should specify the include paths that
+# are normally passed to the compiler using the -I flag.
+
+STRIP_FROM_INC_PATH    =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter
+# (but less readable) file names. This can be useful is your file systems
+# doesn't support long names like on DOS, Mac, or CD-ROM.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then Doxygen
+# will interpret the first line (until the first dot) of a JavaDoc-style
+# comment as the brief description. If set to NO, the JavaDoc
+# comments will behave just like regular Qt-style comments
+# (thus requiring an explicit @brief command for a brief description.)
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then Doxygen will
+# interpret the first line (until the first dot) of a Qt-style
+# comment as the brief description. If set to NO, the comments
+# will behave just like regular Qt-style comments (thus requiring
+# an explicit \brief command for a brief description.)
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make Doxygen
+# treat a multi-line C++ special comment block (i.e. a block of //! or ///
+# comments) as a brief description. This used to be the default behaviour.
+# The new default is to treat a multi-line C++ comment block as a detailed
+# description. Set this tag to YES if you prefer the old behaviour instead.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the DETAILS_AT_TOP tag is set to YES then Doxygen
+# will output the detailed description near the top, like JavaDoc.
+# If set to NO, the detailed description appears after the member
+# documentation.
+
+DETAILS_AT_TOP         = YES
+
+# If the INHERIT_DOCS tag is set to YES (the default) then an undocumented
+# member inherits the documentation from any documented member that it
+# re-implements.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES, then doxygen will produce
+# a new page for each member. If set to NO, the documentation of a member will
+# be part of the file/class/namespace that contains it.
+
+SEPARATE_MEMBER_PAGES  = NO
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab.
+# Doxygen uses this value to replace tabs by spaces in code fragments.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that acts
+# as commands in the documentation. An alias has the form "name=value".
+# For example adding "sideeffect=\par Side Effects:\n" will allow you to
+# put the command \sideeffect (or @sideeffect) in the documentation, which
+# will result in a user-defined paragraph with heading "Side Effects:".
+# You can put \n's in the value part of an alias to insert newlines.
+
+ALIASES                = "only_for_vectors=This is only for vectors (either row-vectors or column-vectors), i.e. matrices which are known at compile-time to have either one row or one column." \
+                         "array_module=This is defined in the %Array module. \code #include <Eigen/Array> \endcode" \
+                         "cholesky_module=This is defined in the %Cholesky module. \code #include <Eigen/Cholesky> \endcode" \
+                         "eigenvalues_module=This is defined in the %Eigenvalues module. \code #include <Eigen/Eigenvalues> \endcode" \
+                         "geometry_module=This is defined in the %Geometry module. \code #include <Eigen/Geometry> \endcode" \
+                         "householder_module=This is defined in the %Householder module. \code #include <Eigen/Householder> \endcode" \
+                         "jacobi_module=This is defined in the %Jacobi module. \code #include <Eigen/Jacobi> \endcode" \
+                         "lu_module=This is defined in the %LU module. \code #include <Eigen/LU> \endcode" \
+                         "qr_module=This is defined in the %QR module. \code #include <Eigen/QR> \endcode" \
+                         "svd_module=This is defined in the %SVD module. \code #include <Eigen/SVD> \endcode" \
+                         "label=\bug" \
+                         "matrixworld=<a href='#matrixonly' style='color:green;text-decoration: none;'>*</a>" \
+                         "arrayworld=<a href='#arrayonly' style='color:blue;text-decoration: none;'>*</a>" \
+                         "note_about_arbitrary_choice_of_solution=If there exists more than one solution, this method will arbitrarily choose one." \
+                         "note_about_using_kernel_to_study_multiple_solutions=If you need a complete analysis of the space of solutions, take the one solution obtained by this method and add to it elements of the kernel, as determined by kernel()." \
+                         "note_about_checking_solutions=This method just tries to find as good a solution as possible. If you want to check whether a solution exists or if it is accurate, just call this function to get a result and then compute the error of this result, or use MatrixBase::isApprox() directly, for instance like this: \code bool a_solution_exists = (A*result).isApprox(b, precision); \endcode This method avoids dividing by zero, so that the non-existence of a solution doesn't by itself mean that you'll get \c inf or \c nan values." \
+                         "note_try_to_help_rvo=This function returns the result by value. In order to make that efficient, it is implemented as just a return statement using a special constructor, hopefully allowing the compiler to perform a RVO (return value optimization)."
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C
+# sources only. Doxygen will then generate output that is more tailored for C.
+# For instance, some of the names that are used will be different. The list
+# of all members will be omitted, etc.
+
+OPTIMIZE_OUTPUT_FOR_C  = NO
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java
+# sources only. Doxygen will then generate output that is more tailored for
+# Java. For instance, namespaces will be presented as packages, qualified
+# scopes will look different, etc.
+
+OPTIMIZE_OUTPUT_JAVA   = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources only. Doxygen will then generate output that is more tailored for
+# Fortran.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for
+# VHDL.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should
+# set this tag to YES in order to let doxygen match functions declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string); v.s.
+# func(std::string) {}). This also make the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+
+CPP_CLI_SUPPORT        = NO
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip sources only.
+# Doxygen will parse them like normal C++ but will assume all classes use public
+# instead of private inheritance when no explicit protection keyword is present.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate getter
+# and setter methods for a property. Setting this option to YES (the default)
+# will make doxygen to replace the get and set methods by a property in the
+# documentation. This will only work if the methods are indeed getting or
+# setting a simple type. If this is not the case, or you want to show the
+# methods anyway, you should set this option to NO.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES, then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES (the default) to allow class member groups of
+# the same type (for instance a group of public functions) to be put as a
+# subgroup of that type (e.g. under the Public Functions section). Set it to
+# NO to prevent subgrouping. Alternatively, this can be done per class using
+# the \nosubgrouping command.
+
+SUBGROUPING            = YES
+
+# When TYPEDEF_HIDES_STRUCT is enabled, a typedef of a struct, union, or enum
+# is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically
+# be useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES doxygen will assume all entities in
+# documentation are documented, even if no documentation was available.
+# Private class members and static file members will be hidden unless
+# the EXTRACT_PRIVATE and EXTRACT_STATIC tags are set to YES
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES all private members of a class
+# will be included in the documentation.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_STATIC tag is set to YES all static members of a file
+# will be included in the documentation.
+
+EXTRACT_STATIC         = NO
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES classes (and structs)
+# defined locally in source files will be included in the documentation.
+# If set to NO only classes defined in header files are included.
+
+EXTRACT_LOCAL_CLASSES  = NO
+
+# This flag is only useful for Objective-C code. When set to YES local
+# methods, which are defined in the implementation section but not in
+# the interface are included in the documentation.
+# If set to NO (the default) only methods in the interface are included.
+
+EXTRACT_LOCAL_METHODS  = NO
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base
+# name of the file that contains the anonymous namespace. By default
+# anonymous namespace are hidden.
+# anonymous namespaces are hidden.
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, Doxygen will hide all
+# undocumented members of documented classes, files or namespaces.
+# If set to NO (the default) these members will be included in the
+# various overviews, but no documentation section is generated.
+# This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_MEMBERS     = YES
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, Doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy.
+# If set to NO (the default) these classes will be included in the various
+# overviews. This option has no effect if EXTRACT_ALL is enabled.
+
+HIDE_UNDOC_CLASSES     = YES
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, Doxygen will hide all
+# friend (class|struct|union) declarations.
+# If set to NO (the default) these declarations will be included in the
+# documentation.
+
+HIDE_FRIEND_COMPOUNDS  = YES
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, Doxygen will hide any
+# documentation blocks found inside the body of a function.
+# If set to NO (the default) these blocks will be appended to the
+# function's detailed documentation block.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation
+# that is typed after a \internal command is included. If the tag is set
+# to NO (the default) then the documentation will be excluded.
+# Set it to YES to include the internal documentation.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then Doxygen will only generate
+# file names in lower-case letters. If set to YES upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO (the default) then Doxygen
+# will show members with their full class and namespace scopes in the
+# documentation. If set to YES the scope will be hidden.
+
+HIDE_SCOPE_NAMES       = YES
+
+# If the SHOW_INCLUDE_FILES tag is set to YES (the default) then Doxygen
+# will put a list of the files that are included by a file in the documentation
+# of that file.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the INLINE_INFO tag is set to YES (the default) then a tag [inline]
+# is inserted in the documentation for inline members.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES (the default) then doxygen
+# will sort the (detailed) documentation of file and class members
+# alphabetically by member name. If set to NO the members will appear in
+# declaration order.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the
+# brief documentation of file, namespace and class members alphabetically
+# by member name. If set to NO (the default) the members will appear in
+# declaration order.
+
+SORT_BRIEF_DOCS        = YES
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the
+# hierarchy of group names into alphabetical order. If set to NO (the default)
+# the group names will appear in their defined order.
+
+SORT_GROUP_NAMES       = NO
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be
+# sorted by fully-qualified names, including namespaces. If set to
+# NO (the default), the class list will be sorted only by class name,
+# not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the
+# alphabetical list.
+
+SORT_BY_SCOPE_NAME     = NO
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or
+# disable (NO) the todo list. This list is created by putting \todo
+# commands in the documentation.
+
+GENERATE_TODOLIST      = NO
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or
+# disable (NO) the test list. This list is created by putting \test
+# commands in the documentation.
+
+GENERATE_TESTLIST      = NO
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or
+# disable (NO) the bug list. This list is created by putting \bug
+# commands in the documentation.
+
+GENERATE_BUGLIST       = NO
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or
+# disable (NO) the deprecated list. This list is created by putting
+# \deprecated commands in the documentation.
+
+GENERATE_DEPRECATEDLIST= NO
+
+# The ENABLED_SECTIONS tag can be used to enable conditional
+# documentation sections, marked by \if sectionname ... \endif.
+
+ENABLED_SECTIONS       =
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines
+# the initial value of a variable or define consists of for it to appear in
+# the documentation. If the initializer consists of more lines than specified
+# here it will be hidden. Use a value of 0 to hide initializers completely.
+# The appearance of the initializer of individual variables and defines in the
+# documentation can be controlled using \showinitializer or \hideinitializer
+# command in the documentation regardless of this setting.
+
+MAX_INITIALIZER_LINES  = 0
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated
+# at the bottom of the documentation of classes and structs. If set to YES the
+# list will mention the files that were used to generate the documentation.
+
+SHOW_USED_FILES        = YES
+
+# If the sources in your project are distributed over multiple directories
+# then setting the SHOW_DIRECTORIES tag to YES will show the directory hierarchy
+# in the documentation. The default is NO.
+
+SHOW_DIRECTORIES       = NO
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page.
+# This will remove the Files entry from the Quick Index and from the
+# Folder Tree View (if specified). The default is YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the
+# Namespaces page.  This will remove the Namespaces entry from the Quick Index
+# and from the Folder Tree View (if specified). The default is YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# provided by doxygen. Whatever the program writes to standard output
+# is used as the file version. See the manual for examples.
+
+FILE_VERSION_FILTER    =
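+
+# As an illustrative, commented-out example only (the command shown is an
+# assumption and is not used by this configuration): for a git checkout,
+# a filter along these lines would report the abbreviated hash of the
+# last commit touching each input file:
+#
+# FILE_VERSION_FILTER = "git log -n 1 --pretty=format:%h --"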
+
+#---------------------------------------------------------------------------
+# configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated
+# by doxygen. Possible values are YES and NO. If left blank NO is used.
+
+QUIET                  = NO
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated by doxygen. Possible values are YES and NO. If left blank
+# NO is used.
+
+WARNINGS               = YES
+
+# If WARN_IF_UNDOCUMENTED is set to YES, then doxygen will generate warnings
+# for undocumented members. If EXTRACT_ALL is set to YES then this flag will
+# automatically be disabled.
+
+WARN_IF_UNDOCUMENTED   = NO
+
+# If WARN_IF_DOC_ERROR is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some
+# parameters in a documented function, or documenting parameters that
+# don't exist or using markup commands wrongly.
+
+WARN_IF_DOC_ERROR      = YES
+
+# The WARN_NO_PARAMDOC option can be enabled to get warnings for
+# functions that are documented, but have no documentation for their parameters
+# or return value. If set to NO (the default) doxygen will only warn about
+# wrong or incomplete parameter documentation, but not about the absence of
+# documentation.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that
+# doxygen can produce. The string should contain the $file, $line, and $text
+# tags, which will be replaced by the file and line number from which the
+# warning originated and the warning text. Optionally the format may contain
+# $version, which will be replaced by the version of the file (if it could
+# be obtained via FILE_VERSION_FILTER)
+
+WARN_FORMAT            = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning
+# and error messages should be written. If left blank the output is written
+# to stderr.
+
+WARN_LOGFILE           =
+
+#---------------------------------------------------------------------------
+# configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag can be used to specify the files and/or directories that contain
+# documented source files. You may enter file names like "myfile.cpp" or
+# directories like "/usr/src/myproject". Separate the files or directories
+# with spaces.
+
+INPUT                  = "${Eigen_SOURCE_DIR}/Eigen" \
+                         "${Eigen_SOURCE_DIR}/doc"
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding, which is
+# also the default input encoding. Doxygen uses libiconv (or the iconv built
+# into libc) for the transcoding. See http://www.gnu.org/software/libiconv for
+# the list of possible encodings.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank the following patterns are tested:
+# *.c *.cc *.cxx *.cpp *.c++ *.java *.ii *.ixx *.ipp *.i++ *.inl *.h *.hh *.hxx
+# *.hpp *.h++ *.idl *.odl *.cs *.php *.php3 *.inc *.m *.mm *.py *.f90
+
+FILE_PATTERNS          = *
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories
+# should be searched for input files as well. Possible values are YES and NO.
+# If left blank NO is used.
+
+RECURSIVE              = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should
+# be excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+
+EXCLUDE                = "${Eigen_SOURCE_DIR}/Eigen/Eigen2Support" \
+                         "${Eigen_SOURCE_DIR}/Eigen/src/Eigen2Support" \
+                         "${Eigen_SOURCE_DIR}/doc/examples" \
+                         "${Eigen_SOURCE_DIR}/doc/special_examples" \
+                         "${Eigen_SOURCE_DIR}/doc/snippets" 
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix filesystem feature) are excluded
+# from the input.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories. Note that the wildcards are matched
+# against the file with absolute path, so to exclude all test directories
+# for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       =  CMake* \
+                          *.txt \
+                          *.sh \
+                          *.orig \
+                          *.diff \
+                          diff \
+                          *~ \
+                          *. \
+                          *.sln \
+                          *.sdf \
+                          *.tmp \
+                          *.vcxproj \
+                          *.filters \
+                          *.user \
+                          *.suo
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+
+# This can be used to clean up the "class hierarchy" page.
+EXCLUDE_SYMBOLS        = internal::* Flagged* *InnerIterator* DenseStorage<*
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or
+# directories that contain example code fragments that are included (see
+# the \include command).
+
+EXAMPLE_PATH           = "${Eigen_SOURCE_DIR}/doc/snippets" \
+                         "${Eigen_BINARY_DIR}/doc/snippets" \
+                         "${Eigen_SOURCE_DIR}/doc/examples" \
+                         "${Eigen_BINARY_DIR}/doc/examples" \
+                         "${Eigen_SOURCE_DIR}/doc/special_examples" \
+                         "${Eigen_BINARY_DIR}/doc/special_examples"
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp
+# and *.h) to filter out the source-files in the directories. If left
+# blank all files are included.
+
+EXAMPLE_PATTERNS       = *
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude
+# commands irrespective of the value of the RECURSIVE tag.
+# Possible values are YES and NO. If left blank NO is used.
+
+EXAMPLE_RECURSIVE      = NO
+
+# The IMAGE_PATH tag can be used to specify one or more files or
+# directories that contain images that are included in the documentation (see
+# the \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command <filter> <input-file>, where <filter>
+# is the value of the INPUT_FILTER tag, and <input-file> is the name of an
+# input file. Doxygen will then use the output that the filter program writes
+# to standard output.  If FILTER_PATTERNS is specified, this tag will be
+# ignored.
+
+INPUT_FILTER           =
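+
+# As an illustrative, commented-out example only (not part of this
+# configuration): a sed command used as the filter would rewrite each
+# input file on standard output before doxygen parses it:
+#
+# INPUT_FILTER = "sed -e 's/FIXME/TODO/g'"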
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis.  Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match.  The filters are a list of the form:
+# pattern=filter (like *.cpp=my_cpp_filter). See INPUT_FILTER for further
+# info on how filters are used. If FILTER_PATTERNS is empty, INPUT_FILTER
+# is applied to all files.
+
+FILTER_PATTERNS        =
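+
+# As an illustrative, commented-out example only (py_filter is a
+# hypothetical script, not provided here): this would run a dedicated
+# filter for Python sources while leaving all other files unfiltered:
+#
+# FILTER_PATTERNS = *.py=py_filter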
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will be used to filter the input files when producing source
+# files to browse (i.e. when SOURCE_BROWSER is set to YES).
+
+FILTER_SOURCE_FILES    = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will
+# be generated. Documented entities will be cross-referenced with these sources.
+# Note: To get rid of all source code in the generated output, make sure also
+# VERBATIM_HEADERS is set to NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body
+# of functions and classes directly in the documentation.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES (the default) will instruct
+# doxygen to hide any special comment blocks from generated source code
+# fragments. Normal C and C++ comments will always remain visible.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES
+# then for each documented function all documented
+# functions referencing it will be listed.
+
+REFERENCED_BY_RELATION = YES
+
+# If the REFERENCES_RELATION tag is set to YES
+# then for each documented function all documented entities
+# called/used by that function will be listed.
+
+REFERENCES_RELATION    = YES
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES (the default)
+# and SOURCE_BROWSER tag is set to YES, then the hyperlinks from
+# functions in REFERENCES_RELATION and REFERENCED_BY_RELATION lists will
+# link to the source code. Otherwise they will link to the documentation.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code
+# will point to the HTML generated by the htags(1) tool instead of doxygen's
+# built-in source browser. The htags tool is part of GNU's global source
+# tagging system (see http://www.gnu.org/software/global/global.html). You
+# will need version 4.8.6 or higher.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES (the default) then Doxygen
+# will generate a verbatim copy of the header file for each class for
+# which an include is specified. Set to NO to disable this.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index
+# of all compounds will be generated. Enable this if the project
+# contains a lot of classes, structs, unions or interfaces.
+
+ALPHABETICAL_INDEX     = NO
+
+# If the alphabetical index is enabled (see ALPHABETICAL_INDEX) then
+# the COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns
+# in which this list will be split (can be a number in the range [1..20])
+
+COLS_IN_ALPHA_INDEX    = 5
+
+# In case all classes in a project start with a common prefix, all
+# classes will be put under the same header in the alphabetical index.
+# The IGNORE_PREFIX tag can be used to specify one or more prefixes that
+# should be ignored while generating the index headers.
+
+IGNORE_PREFIX          =
+
+#---------------------------------------------------------------------------
+# configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES (the default) Doxygen will
+# generate HTML output.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `html' will be used as the default path.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for
+# each generated HTML page (for example: .htm,.php,.asp). If it is left blank
+# doxygen will generate files with .html extension.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a personal HTML header for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard header.
+
+HTML_HEADER             = "${Eigen_BINARY_DIR}/doc/eigendoxy_header.html"
+
+# The HTML_FOOTER tag can be used to specify a personal HTML footer for
+# each generated HTML page. If it is left blank doxygen will generate a
+# standard footer.
+
+# the footer has not been customized yet, so let's use the default one
+# ${Eigen_BINARY_DIR}/doc/eigendoxy_footer.html
+HTML_FOOTER             =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading
+# style sheet that is used by each HTML page. It can be used to
+# fine-tune the look of the HTML output. If the tag is left blank doxygen
+# will generate a default style sheet. Note that doxygen will try to copy
+# the style sheet file to the HTML output directory, so don't put your own
+# stylesheet in the HTML output directory as well, or it will be erased!
+
+HTML_STYLESHEET         = "${Eigen_SOURCE_DIR}/doc/eigendoxy.css"
+
+# If the HTML_ALIGN_MEMBERS tag is set to YES, the members of classes,
+# files or namespaces will be aligned in HTML using tables. If set to
+# NO a bullet list will be used.
+
+HTML_ALIGN_MEMBERS     = YES
+
+# If the GENERATE_HTMLHELP tag is set to YES, additional index files
+# will be generated that can be used as input for tools like the
+# Microsoft HTML help workshop to generate a compiled HTML help file (.chm)
+# of the generated HTML documentation.
+
+GENERATE_HTMLHELP      = NO
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files
+# will be generated that can be used as input for Apple's Xcode 3
+# integrated development environment, introduced with OSX 10.5 (Leopard).
+# To create a documentation set, doxygen will generate a Makefile in the
+# HTML output directory. Running make will produce the docset in that
+# directory and running "make install" will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find
+# it at startup.
+
+GENERATE_DOCSET        = NO
+
+# When GENERATE_DOCSET tag is set to YES, this tag determines the name of the
+# feed. A documentation feed provides an umbrella under which multiple
+# documentation sets from a single provider (such as a company or product suite)
+# can be grouped.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# When GENERATE_DOCSET tag is set to YES, this tag specifies a string that
+# should uniquely identify the documentation set bundle. This should be a
+# reverse domain-name style string, e.g. com.mycompany.MyDocSet. Doxygen
+# will append .docset to the name.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded. For this to work a browser that supports
+# JavaScript and DHTML is required (for instance Mozilla 1.0+, Firefox,
+# Netscape 6.0+, Internet Explorer 5.0+, Konqueror, or Safari).
+
+HTML_DYNAMIC_SECTIONS  = YES
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_FILE tag can
+# be used to specify the file name of the resulting .chm file. You
+# can add a path in front of the file if the result should not be
+# written to the html output directory.
+
+CHM_FILE               =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the HHC_LOCATION tag can
+# be used to specify the location (absolute path including file name) of
+# the HTML help compiler (hhc.exe). If non-empty doxygen will try to run
+# the HTML help compiler on the generated index.hhp.
+
+HHC_LOCATION           =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the GENERATE_CHI flag
+# controls if a separate .chi index file is generated (YES) or that
+# it should be included in the master .chm file (NO).
+
+GENERATE_CHI           = NO
+
+# If the GENERATE_HTMLHELP tag is set to YES, the CHM_INDEX_ENCODING
+# is used to encode HtmlHelp index (hhk), content (hhc) and project file
+# content.
+
+CHM_INDEX_ENCODING     =
+
+# If the GENERATE_HTMLHELP tag is set to YES, the BINARY_TOC flag
+# controls whether a binary table of contents is generated (YES) or a
+# normal table of contents (NO) in the .chm file.
+
+BINARY_TOC             = NO
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members
+# to the contents of the HTML help documentation and to the tree view.
+
+TOC_EXPAND             = NO
+
+# The DISABLE_INDEX tag can be used to turn on/off the condensed index at
+# top of each HTML page. The value NO (the default) enables the index and
+# the value YES disables it.
+
+DISABLE_INDEX          = NO
+
+# This tag can be used to set the number of enum values (range [1..20])
+# that doxygen will group on one line in the generated HTML documentation.
+
+ENUM_VALUES_PER_LINE   = 1
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information.
+# If the tag value is set to FRAME, a side panel will be generated
+# containing a tree-like index structure (just like the one that
+# is generated for HTML Help). For this to work a browser that supports
+# JavaScript, DHTML, CSS and frames is required (for instance Mozilla 1.0+,
+# Netscape 6.0+, Internet Explorer 5.0+, or Konqueror). Windows users are
+# probably better off using the HTML help feature. Other possible values
+# for this tag are: HIERARCHIES, which will generate the Groups, Directories,
+# and Class Hierarchy pages using a tree view instead of an ordered list;
+# ALL, which combines the behavior of FRAME and HIERARCHIES; and NONE, which
+# disables this behavior completely. For backwards compatibility with previous
+# releases of Doxygen, the values YES and NO are equivalent to FRAME and NONE
+# respectively.
+
+GENERATE_TREEVIEW      = NO
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be
+# used to set the initial width (in pixels) of the frame in which the tree
+# is shown.
+
+TREEVIEW_WIDTH         = 250
+
+# Use this tag to change the font size of Latex formulas included
+# as images in the HTML documentation. The default is 10. Note that
+# when you change the font size after a successful doxygen run you need
+# to manually remove any form_*.png images from the HTML output directory
+# to force them to be regenerated.
+
+FORMULA_FONTSIZE       = 12
+
+#---------------------------------------------------------------------------
+# configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES (the default) Doxygen will
+# generate Latex output.
+
+GENERATE_LATEX         = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `latex' will be used as the default path.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked. If left blank `latex' will be used as the default command name.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to
+# generate index for LaTeX. If left blank `makeindex' will be used as the
+# default command name.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES Doxygen generates more compact
+# LaTeX documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used
+# by the printer. Possible values are: a4, a4wide, letter, legal and
+# executive. If left blank a4wide will be used.
+
+PAPER_TYPE             = a4wide
+
+# The EXTRA_PACKAGES tag can be used to specify one or more names of LaTeX
+# packages that should be included in the LaTeX output.
+
+EXTRA_PACKAGES         = amssymb \
+                         amsmath
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for
+# the generated latex document. The header should contain everything until
+# the first chapter. If it is left blank doxygen will generate a
+# standard header. Notice: only use this tag if you know what you are doing!
+
+LATEX_HEADER           =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated
+# is prepared for conversion to pdf (using ps2pdf). The pdf file will
+# contain links (just like the HTML output) instead of page references
+# This makes the output suitable for online browsing using a pdf viewer.
+
+PDF_HYPERLINKS         = NO
+
+# If the USE_PDFLATEX tag is set to YES, pdflatex will be used instead of
+# plain latex in the generated Makefile. Set this option to YES to get a
+# higher quality PDF documentation.
+
+USE_PDFLATEX           = NO
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \\batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep
+# running if errors occur, instead of asking the user for help.
+# This option is also used when generating formulas in HTML.
+
+LATEX_BATCHMODE        = NO
+
+# If LATEX_HIDE_INDICES is set to YES then doxygen will not
+# include the index chapters (such as File Index, Compound Index, etc.)
+# in the output.
+
+LATEX_HIDE_INDICES     = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES Doxygen will generate RTF output
+# The RTF output is optimized for Word 97 and may not look very pretty with
+# other RTF readers or editors.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `rtf' will be used as the default path.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES Doxygen generates more compact
+# RTF documents. This may be useful for small projects and may help to
+# save some trees in general.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated
+# will contain hyperlink fields. The RTF file will
+# contain links (just like the HTML output) instead of page references.
+# This makes the output suitable for online browsing using WORD or other
+# programs which support those fields.
+# Note: wordpad (write) and others do not support links.
+
+RTF_HYPERLINKS         = NO
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's
+# config file, i.e. a series of assignments. You only have to provide
+# replacements, missing definitions are set to their default value.
+
+RTF_STYLESHEET_FILE    =
+
+# Set optional variables used in the generation of an rtf document.
+# Syntax is similar to doxygen's config file.
+
+RTF_EXTENSIONS_FILE    =
+
+#---------------------------------------------------------------------------
+# configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES (the default) Doxygen will
+# generate man pages
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `man' will be used as the default path.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to
+# the generated man pages (default is the subroutine's section .3)
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and Doxygen generates man output,
+# then it will generate one additional man file for each entity
+# documented in the real man page(s). These additional files
+# only source the real man page, but without them the man command
+# would be unable to find the correct page. The default is NO.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES Doxygen will
+# generate an XML file that captures the structure of
+# the code including all documentation.
+
+GENERATE_XML           = NO
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be
+# put in front of it. If left blank `xml' will be used as the default path.
+
+XML_OUTPUT             = xml
+
+# The XML_SCHEMA tag can be used to specify an XML schema,
+# which can be used by a validating XML parser to check the
+# syntax of the XML files.
+
+XML_SCHEMA             =
+
+# The XML_DTD tag can be used to specify an XML DTD,
+# which can be used by a validating XML parser to check the
+# syntax of the XML files.
+
+XML_DTD                =
+
+# If the XML_PROGRAMLISTING tag is set to YES Doxygen will
+# dump the program listings (including syntax highlighting
+# and cross-referencing information) to the XML output. Note that
+# enabling this will significantly increase the size of the XML output.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES Doxygen will
+# generate an AutoGen Definitions (see autogen.sf.net) file
+# that captures the structure of the code including all
+# documentation. Note that this feature is still experimental
+# and incomplete at the moment.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES Doxygen will
+# generate a Perl module file that captures the structure of
+# the code including all documentation. Note that this
+# feature is still experimental and incomplete at the
+# moment.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES Doxygen will generate
+# the necessary Makefile rules, Perl scripts and LaTeX code to be able
+# to generate PDF and DVI output from the Perl module output.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES the Perl module output will be
+# nicely formatted so it can be parsed by a human reader.  This is useful
+# if you want to understand what is going on.  On the other hand, if this
+# tag is set to NO the size of the Perl module output will be much smaller
+# and Perl will parse it just the same.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file
+# are prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX.
+# This is useful so different doxyrules.make files included by the same
+# Makefile don't overwrite each other's variables.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES (the default) Doxygen will
+# evaluate all C-preprocessor directives found in the sources and include
+# files.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES Doxygen will expand all macro
+# names in the source code. If set to NO (the default) only conditional
+# compilation will be performed. Macro expansion can be done in a controlled
+# way by setting EXPAND_ONLY_PREDEF to YES.
+
+MACRO_EXPANSION        = YES
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES
+# then the macro expansion is limited to the macros specified with the
+# PREDEFINED and EXPAND_AS_DEFINED tags.
+
+EXPAND_ONLY_PREDEF     = YES
+
+# If the SEARCH_INCLUDES tag is set to YES (the default) the include files
+# in the INCLUDE_PATH (see below) will be searched if a #include is found.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by
+# the preprocessor.
+
+INCLUDE_PATH           = "${Eigen_SOURCE_DIR}/Eigen/src/plugins" 
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will
+# be used.
+
+INCLUDE_FILE_PATTERNS  =
+
+# The PREDEFINED tag can be used to specify one or more macro names that
+# are defined before the preprocessor is started (similar to the -D option of
+# gcc). The argument of the tag is a list of macros of the form: name
+# or name=definition (no spaces). If the definition and the = are
+# omitted =1 is assumed. To prevent a macro definition from being
+# undefined via #undef or recursively expanded use the := operator
+# instead of the = operator.
+
+PREDEFINED             = EIGEN_EMPTY_STRUCT \
+                         EIGEN_PARSED_BY_DOXYGEN \
+                         EIGEN_VECTORIZE \
+                         EIGEN_QT_SUPPORT \
+                         EIGEN_STRONG_INLINE=inline \
+                         EIGEN2_SUPPORT_STAGE=99
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then
+# this tag can be used to specify a list of macro names that should be expanded.
+# The macro definition that is found in the sources will be used.
+# Use the PREDEFINED tag if you want to use a different macro definition.
+
+EXPAND_AS_DEFINED      = EIGEN_MAKE_TYPEDEFS \
+                         EIGEN_MAKE_FIXED_TYPEDEFS \
+                         EIGEN_MAKE_TYPEDEFS_ALL_SIZES \
+                         EIGEN_MAKE_CWISE_BINARY_OP \
+                         EIGEN_CWISE_UNOP_RETURN_TYPE \
+                         EIGEN_CWISE_BINOP_RETURN_TYPE \
+                         EIGEN_CWISE_PRODUCT_RETURN_TYPE \
+                         EIGEN_CURRENT_STORAGE_BASE_CLASS \
+                         _EIGEN_GENERIC_PUBLIC_INTERFACE \
+                         EIGEN2_SUPPORT
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES (the default) then
+# doxygen's preprocessor will remove all function-like macros that are alone
+# on a line, have an all uppercase name, and do not end with a semicolon. Such
+# function macros are typically used for boiler-plate code, and will confuse
+# the parser if not removed.
+
+SKIP_FUNCTION_MACROS   = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES option can be used to specify one or more tagfiles.
+# Optionally an initial location of the external documentation
+# can be added for each tagfile. The format of a tag file without
+# this location is as follows:
+#   TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+#   TAGFILES = file1=loc1 "file2 = loc2" ...
+# where "loc1" and "loc2" can be relative or absolute paths or
+# URLs. If a location is present for each tag, the installdox tool
+# does not have to be run to correct the links.
+# Note that each tag file must have a unique name
+# (where the name does NOT include the path)
+# If a tag file is not located in the directory in which doxygen
+# is run, you must also specify the path to the tagfile here.
+
+TAGFILES               = "${Eigen_BINARY_DIR}/doc/eigen-unsupported.doxytags"=unsupported
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create
+# a tag file that is based on the input files it reads.
+
+GENERATE_TAGFILE       = "${Eigen_BINARY_DIR}/doc/eigen.doxytags"
+
+# If the ALLEXTERNALS tag is set to YES all external classes will be listed
+# in the class index. If set to NO only the inherited external classes
+# will be listed.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will
+# be listed.
+
+EXTERNAL_GROUPS        = YES
+
+# The PERL_PATH should be the absolute path and name of the perl script
+# interpreter (i.e. the result of `which perl').
+
+PERL_PATH              = /usr/bin/perl
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES (the default) Doxygen will
+# generate an inheritance diagram (in HTML, RTF and LaTeX) for classes with base
+# or super classes. Setting the tag to NO turns the diagrams off. Note that
+# this option is superseded by the HAVE_DOT option below. This is only a
+# fallback. It is recommended to install and use dot, since it yields more
+# powerful graphs.
+
+CLASS_DIAGRAMS         = YES
+
+# You can define message sequence charts within doxygen comments using the \msc
+# command. Doxygen will then run the mscgen tool (see
+# http://www.mcternan.me.uk/mscgen/) to produce the chart and insert it in the
+# documentation. The MSCGEN_PATH tag allows you to specify the directory where
+# the mscgen tool resides. If left empty the tool is assumed to be found in the
+# default search path.
+
+MSCGEN_PATH            =
+
+# If set to YES, the inheritance and collaboration graphs will hide
+# inheritance and usage relations if the target is undocumented
+# or is not a class.
+
+HIDE_UNDOC_RELATIONS   = NO
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz, a graph visualization
+# toolkit from AT&T and Lucent Bell Labs. The other options in this section
+# have no effect if this option is set to NO (the default)
+
+HAVE_DOT               = YES
+
+# By default doxygen will write a font called FreeSans.ttf to the output
+# directory and reference it in all dot files that doxygen generates. This
+# font does not include all possible unicode characters however, so when you need
+# these (or just want a differently looking font) you can specify the font name
+# using DOT_FONTNAME. You need to make sure dot is able to find the font,
+# which can be done by putting it in a standard location or by setting the
+# DOTFONTPATH environment variable or by setting DOT_FONTPATH to the directory
+# containing the font.
+
+DOT_FONTNAME           = FreeSans
+
+# By default doxygen will tell dot to use the output directory to look for the
+# FreeSans.ttf font (which doxygen will put there itself). If you specify a
+# different font using DOT_FONTNAME you can set the path where dot
+# can find it using this tag.
+
+DOT_FONTPATH           =
+
+# If the CLASS_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect inheritance relations. Setting this tag to YES will force the
+# CLASS_DIAGRAMS tag to NO.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for each documented class showing the direct and
+# indirect implementation dependencies (inheritance, containment, and
+# class references variables) of the class with other documented classes.
+
+COLLABORATION_GRAPH    = NO
+
+# If the GROUP_GRAPHS and HAVE_DOT tags are set to YES then doxygen
+# will generate a graph for groups, showing the direct groups dependencies
+
+GROUP_GRAPHS           = NO
+
+# If the UML_LOOK tag is set to YES doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+
+UML_LOOK               = YES
+
+# If set to YES, the inheritance and collaboration graphs will show the
+# relations between templates and their instances.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDE_GRAPH, and HAVE_DOT
+# tags are set to YES then doxygen will generate a graph for each documented
+# file showing the direct and indirect include dependencies of the file with
+# other documented files.
+
+INCLUDE_GRAPH          = NO
+
+# If the ENABLE_PREPROCESSING, SEARCH_INCLUDES, INCLUDED_BY_GRAPH, and
+# HAVE_DOT tags are set to YES then doxygen will generate a graph for each
+# documented header file showing the documented files that directly or
+# indirectly include this file.
+
+INCLUDED_BY_GRAPH      = NO
+
+# If the CALL_GRAPH and HAVE_DOT options are set to YES then
+# doxygen will generate a call dependency graph for every global function
+# or class method. Note that enabling this option will significantly increase
+# the time of a run. So in most cases it will be better to enable call graphs
+# for selected functions only using the \callgraph command.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH and HAVE_DOT tags are set to YES then
+# doxygen will generate a caller dependency graph for every global function
+# or class method. Note that enabling this option will significantly increase
+# the time of a run. So in most cases it will be better to enable caller
+# graphs for selected functions only using the \callergraph command.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY and HAVE_DOT tags are set to YES then doxygen
+# will show a graphical hierarchy of all classes instead of a textual one.
+
+GRAPHICAL_HIERARCHY    = NO
+
+# If the DIRECTORY_GRAPH, SHOW_DIRECTORIES and HAVE_DOT tags are set to YES
+# then doxygen will show the dependencies a directory has on other directories
+# in a graphical way. The dependency relations are determined by the #include
+# relations between the files in the directories.
+
+DIRECTORY_GRAPH        = NO
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. Possible values are png, jpg, or gif
+# If left blank png will be used.
+
+DOT_IMAGE_FORMAT       = png
+
+# The tag DOT_PATH can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+
+DOT_PATH               =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the
+# \dotfile command).
+
+DOTFILE_DIRS           =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of
+# nodes that will be shown in the graph. If the number of nodes in a graph
+# becomes larger than this value, doxygen will truncate the graph, which is
+# visualized by representing a node as a red box. Note that if the
+# number of direct children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note
+# that the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the
+# graphs generated by dot. A depth value of 3 means that only nodes reachable
+# from the root by following a path via at most 3 edges will be shown. Nodes
+# that lay further from the root node will be omitted. Note that setting this
+# option to 1 or 2 may greatly reduce the computation time needed for large
+# code bases. Also note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is enabled by default, which results in a transparent
+# background. Warning: Depending on the platform used, enabling this option
+# may lead to badly anti-aliased labels on the edges of a graph (i.e. they
+# become hard to read).
+
+DOT_TRANSPARENT        = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10)
+# support this, this feature is disabled by default.
+
+DOT_MULTI_TARGETS      = NO
+
+# If the GENERATE_LEGEND tag is set to YES (the default) Doxygen will
+# generate a legend page explaining the meaning of the various boxes and
+# arrows in the dot generated graphs.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES (the default) Doxygen will
+# remove the intermediate dot files that are used to generate
+# the various graphs.
+
+DOT_CLEANUP            = YES
+
+#---------------------------------------------------------------------------
+# Configuration::additions related to the search engine
+#---------------------------------------------------------------------------
+
+# The SEARCHENGINE tag specifies whether or not a search engine should be
+# used. If set to NO the values of all tags below this one will be ignored.
+
+SEARCHENGINE           = NO
diff --git a/doc/Eigen_Silly_Professor_64x64.png b/doc/Eigen_Silly_Professor_64x64.png
new file mode 100644
index 0000000..079d45b
--- /dev/null
+++ b/doc/Eigen_Silly_Professor_64x64.png
Binary files differ
diff --git a/doc/I00_CustomizingEigen.dox b/doc/I00_CustomizingEigen.dox
new file mode 100644
index 0000000..aa4514d
--- /dev/null
+++ b/doc/I00_CustomizingEigen.dox
@@ -0,0 +1,191 @@
+namespace Eigen {
+
+/** \page TopicCustomizingEigen Customizing/Extending Eigen
+
+Eigen can be extended in several ways, for instance, by defining global methods, \ref ExtendingMatrixBase "by adding custom methods to MatrixBase", by adding support for \ref CustomScalarType "custom types", etc.
+
+\b Table \b of \b contents
+  - \ref ExtendingMatrixBase
+  - \ref InheritingFromMatrix
+  - \ref CustomScalarType
+
+\section ExtendingMatrixBase Extending MatrixBase (and other classes)
+
+In this section we will see how to add custom methods to MatrixBase. Since all expressions and matrix types inherit MatrixBase, adding a method to MatrixBase makes it immediately available to all expressions! A typical use case is, for instance, making Eigen compatible with another API.
+
+You certainly know that in C++ it is not possible to add methods to an existing class. So how is that possible? The trick is to include, in the declaration of MatrixBase, a file defined by the preprocessor token \c EIGEN_MATRIXBASE_PLUGIN:
+\code
+class MatrixBase {
+  // ...
+  #ifdef EIGEN_MATRIXBASE_PLUGIN
+  #include EIGEN_MATRIXBASE_PLUGIN
+  #endif
+};
+\endcode
+Therefore, to extend MatrixBase with your own methods, you just have to create a file with your method declarations and define \c EIGEN_MATRIXBASE_PLUGIN before including any Eigen header file.
+
+You can extend many of the other classes used in Eigen by defining similarly named preprocessor symbols. For instance, define \c EIGEN_ARRAYBASE_PLUGIN if you want to extend the ArrayBase class. A full list of classes that can be extended in this way and the corresponding preprocessor symbols can be found on our page \ref TopicPreprocessorDirectives.
+
+Here is an example of an extension file for adding methods to MatrixBase: \n
+\b MatrixBaseAddons.h
+\code
+inline Scalar at(uint i, uint j) const { return this->operator()(i,j); }
+inline Scalar& at(uint i, uint j) { return this->operator()(i,j); }
+inline Scalar at(uint i) const { return this->operator[](i); }
+inline Scalar& at(uint i) { return this->operator[](i); }
+
+inline RealScalar squaredLength() const { return squaredNorm(); }
+inline RealScalar length() const { return norm(); }
+inline RealScalar invLength(void) const { return fast_inv_sqrt(squaredNorm()); }
+
+template<typename OtherDerived>
+inline Scalar squaredDistanceTo(const MatrixBase<OtherDerived>& other) const
+{ return (derived() - other.derived()).squaredNorm(); }
+
+template<typename OtherDerived>
+inline RealScalar distanceTo(const MatrixBase<OtherDerived>& other) const
+{ return internal::sqrt(derived().squaredDistanceTo(other)); }
+
+inline void scaleTo(RealScalar l) { RealScalar vl = norm(); if (vl>1e-9) derived() *= (l/vl); }
+
+inline Transpose<Derived> transposed() {return this->transpose();}
+inline const Transpose<Derived> transposed() const {return this->transpose();}
+
+inline uint minComponentId(void) const  { int i; this->minCoeff(&i); return i; }
+inline uint maxComponentId(void) const  { int i; this->maxCoeff(&i); return i; }
+
+template<typename OtherDerived>
+void makeFloor(const MatrixBase<OtherDerived>& other) { derived() = derived().cwiseMin(other.derived()); }
+template<typename OtherDerived>
+void makeCeil(const MatrixBase<OtherDerived>& other) { derived() = derived().cwiseMax(other.derived()); }
+
+const CwiseUnaryOp<internal::scalar_add_op<Scalar>, Derived>
+operator+(const Scalar& scalar) const
+{ return CwiseUnaryOp<internal::scalar_add_op<Scalar>, Derived>(derived(), internal::scalar_add_op<Scalar>(scalar)); }
+
+friend const CwiseUnaryOp<internal::scalar_add_op<Scalar>, Derived>
+operator+(const Scalar& scalar, const MatrixBase<Derived>& mat)
+{ return CwiseUnaryOp<internal::scalar_add_op<Scalar>, Derived>(mat.derived(), internal::scalar_add_op<Scalar>(scalar)); }
+\endcode
+
+Then one can put the following declaration in the config.h, or whatever prerequisite header file, of the project:
+\code
+#define EIGEN_MATRIXBASE_PLUGIN "MatrixBaseAddons.h"
+\endcode
+
+\section InheritingFromMatrix Inheriting from Matrix
+
+Before inheriting from Matrix, be really, I mean REALLY, sure that using
+EIGEN_MATRIX_PLUGIN is not what you really want (see the previous section).
+If you just need to add a few members to Matrix, that is the way to go.
+
+An example of when you actually need to inherit from Matrix is when you have
+several layers of inheritance, such as MyVerySpecificVector1, MyVerySpecificVector2 -> MyVector1 -> Matrix and
+MyVerySpecificVector3, MyVerySpecificVector4 -> MyVector2 -> Matrix.
+
+In order for your object to work within the %Eigen framework, you need to
+define a few members in your inherited class.
+
+Here is a minimalistic example:\n
+\code
+class MyVectorType : public  Eigen::VectorXd
+{
+public:
+    MyVectorType(void):Eigen::VectorXd() {}
+
+    typedef Eigen::VectorXd Base;
+
+    // This constructor allows you to construct MyVectorType from Eigen expressions
+    template<typename OtherDerived>
+    MyVectorType(const Eigen::MatrixBase<OtherDerived>& other)
+        : Eigen::VectorXd(other)
+    { }
+
+    // This method allows you to assign Eigen expressions to MyVectorType
+    template<typename OtherDerived>
+    MyVectorType & operator= (const Eigen::MatrixBase <OtherDerived>& other)
+    {
+        this->Base::operator=(other);
+        return *this;
+    }
+};
+\endcode
+
+This is the kind of error you can get if you don't provide those members:
+\code
+error: no match for ‘operator=’ in ‘delta =
+(((Eigen::MatrixBase<Eigen::Matrix<std::complex<float>, 10000, 1, 2, 10000,
+1> >*)(& delta)) + 8u)->Eigen::MatrixBase<Derived>::cwise [with Derived =
+Eigen::Matrix<std::complex<float>, 10000, 1, 2, 10000,
+1>]().Eigen::Cwise<ExpressionType>::operator* [with OtherDerived =
+Eigen::Matrix<std::complex<float>, 10000, 1, 2, 10000, 1>, ExpressionType =
+Eigen::Matrix<std::complex<float>, 10000, 1, 2, 10000, 1>](((const
+Eigen::MatrixBase<Eigen::Matrix<std::complex<float>, 10000, 1, 2, 10000, 1>
+>&)(((const Eigen::MatrixBase<Eigen::Matrix<std::complex<float>, 10000, 1,
+>2, 10000, 1> >*)((const spectral1d*)where)) + 8u)))’                                                    
+\endcode
+
+\anchor user_defined_scalars \section CustomScalarType Using custom scalar types
+
+By default, Eigen currently supports standard floating-point types (\c float, \c double, \c std::complex<float>, \c std::complex<double>, \c long \c double), as well as all integral types (e.g., \c int, \c unsigned \c int, \c short, etc.), and \c bool.
+On x86-64 systems, \c long \c double makes it possible to locally enforce the use of x87 registers with extended accuracy (compared to SSE).
+
+In order to add support for a custom type \c T you need to:
+-# make sure the common operators (+,-,*,/,etc.) are supported by the type \c T
+-# add a specialization of struct Eigen::NumTraits<T> (see \ref NumTraits)
+-# define the math functions that make sense for your type. This includes standard ones like sqrt, pow, sin, tan, conj, real, imag, etc., as well as abs2, which is Eigen specific
+     (see the file Eigen/src/Core/MathFunctions.h).
+
+The math functions should be defined in the same namespace as \c T, or in the \c std namespace, though this second approach is not recommended.
+
+Here is a concrete example adding support for Adolc's \c adouble type. <a href="https://projects.coin-or.org/ADOL-C">Adolc</a> is an automatic differentiation library. The type \c adouble is basically a real value tracking the values of any number of partial derivatives.
+
+\code
+#ifndef ADOLCSUPPORT_H
+#define ADOLCSUPPORT_H
+
+#define ADOLC_TAPELESS
+#include <adolc/adouble.h>
+#include <Eigen/Core>
+
+namespace Eigen {
+
+template<> struct NumTraits<adtl::adouble>
+ : NumTraits<double> // provides the epsilon, dummy_precision, lowest, highest functions
+{
+  typedef adtl::adouble Real;
+  typedef adtl::adouble NonInteger;
+  typedef adtl::adouble Nested;
+
+  enum {
+    IsComplex = 0,
+    IsInteger = 0,
+    IsSigned = 1,
+    RequireInitialization = 1,
+    ReadCost = 1,
+    AddCost = 3,
+    MulCost = 3
+  };
+};
+
+}
+
+namespace adtl {
+
+inline const adouble& conj(const adouble& x)  { return x; }
+inline const adouble& real(const adouble& x)  { return x; }
+inline adouble imag(const adouble&)    { return 0.; }
+inline adouble abs(const adouble&  x)  { return fabs(x); }
+inline adouble abs2(const adouble& x)  { return x*x; }
+
+}
+
+#endif // ADOLCSUPPORT_H
+\endcode
+
+
+\sa \ref TopicPreprocessorDirectives
+
+*/
+
+}
diff --git a/doc/I01_TopicLazyEvaluation.dox b/doc/I01_TopicLazyEvaluation.dox
new file mode 100644
index 0000000..393bc41
--- /dev/null
+++ b/doc/I01_TopicLazyEvaluation.dox
@@ -0,0 +1,65 @@
+namespace Eigen {
+
+/** \page TopicLazyEvaluation Lazy Evaluation and Aliasing
+
+Executive summary: Eigen has intelligent compile-time mechanisms to enable lazy evaluation and to remove temporaries where appropriate.
+It handles aliasing automatically in most cases, for example with matrix products. The automatic behavior can be overridden
+manually by using the MatrixBase::eval() and MatrixBase::noalias() methods.
+
+When you write a line of code involving a complex expression such as
+
+\code mat1 = mat2 + mat3 * (mat4 + mat5); \endcode
+
+Eigen determines automatically, for each sub-expression, whether to evaluate it into a temporary variable. Indeed, in certain cases it is better to evaluate a sub-expression immediately into a temporary variable, while in other cases it is better to avoid that.
+
+A traditional math library without expression templates always evaluates all sub-expressions into temporaries. So with this code,
+
+\code vec1 = vec2 + vec3; \endcode
+
+a traditional library would evaluate \c vec2 + vec3 into a temporary \c vec4 and then copy \c vec4  into \c vec1. This is of course inefficient: the arrays are traversed twice, so there are a lot of useless load/store operations.
+
+Expression-templates-based libraries can avoid evaluating sub-expressions into temporaries, which in many cases results in large speed improvements. This is called <i>lazy evaluation</i>, as an expression is evaluated as late as possible instead of immediately. However, most other expression-templates-based libraries <i>always</i> choose lazy evaluation. There are two problems with that: first, lazy evaluation is not always a good choice for performance; second, lazy evaluation can be very dangerous, for example with matrix products: doing <tt>matrix = matrix*matrix</tt> gives a wrong result if the matrix product is lazy-evaluated, because of the way matrix products work.
+
+For these reasons, Eigen has intelligent compile-time mechanisms to determine automatically when to use lazy evaluation, and when on the contrary it should evaluate immediately into a temporary variable.
+
+So in the basic example,
+
+\code matrix1 = matrix2 + matrix3; \endcode
+
+Eigen chooses lazy evaluation. Thus the arrays are traversed only once, producing optimized code. If you really want to force immediate evaluation, use \link MatrixBase::eval() eval()\endlink:
+
+\code matrix1 = (matrix2 + matrix3).eval(); \endcode
+
+Here is now a more involved example:
+
+\code matrix1 = -matrix2 + matrix3 + 5 * matrix4; \endcode
+
+Eigen chooses lazy evaluation at every stage in that example, which is clearly the correct choice. In fact, lazy evaluation is the "default choice" and Eigen will choose it except in a few circumstances.
+
+<b>The first circumstance</b> in which Eigen chooses immediate evaluation, is when it sees an assignment <tt>a = b;</tt> and the expression \c b has the evaluate-before-assigning \link flags flag\endlink. The most important example of such an expression is the \link GeneralProduct matrix product expression\endlink. For example, when you do
+
+\code matrix = matrix * matrix; \endcode
+
+Eigen first evaluates <tt>matrix * matrix</tt> into a temporary matrix, and then copies it into the original \c matrix. This guarantees a correct result as we saw above that lazy evaluation gives wrong results with matrix products. It also doesn't cost much, as the cost of the matrix product itself is much higher.
+
+What if you know that the result does not alias the operands of the product and want to force lazy evaluation? Then use \link MatrixBase::noalias() .noalias()\endlink instead. Here is an example:
+
+\code matrix1.noalias() = matrix2 * matrix2; \endcode
+
+Here, since we know that matrix2 is not the same matrix as matrix1, we know that lazy evaluation is not dangerous, so we may force lazy evaluation. Concretely, the effect of noalias() here is to bypass the evaluate-before-assigning \link flags flag\endlink.
+
+<b>The second circumstance</b> in which Eigen chooses immediate evaluation, is when it sees a nested expression such as <tt>a + b</tt> where \c b is already an expression having the evaluate-before-nesting \link flags flag\endlink. Again, the most important example of such an expression is the \link GeneralProduct matrix product expression\endlink. For example, when you do
+
+\code matrix1 = matrix2 + matrix3 * matrix4; \endcode
+
+the product <tt>matrix3 * matrix4</tt> gets evaluated immediately into a temporary matrix. Indeed, experiments showed that it is often beneficial for performance to evaluate immediately matrix products when they are nested into bigger expressions.
+
+<b>The third circumstance</b> in which Eigen chooses immediate evaluation, is when its cost model shows that the total cost of an operation is reduced if a sub-expression gets evaluated into a temporary. Indeed, in certain cases, an intermediate result is sufficiently costly to compute and is reused sufficiently many times that it is worth "caching". Here is an example:
+
+\code matrix1 = matrix2 * (matrix3 + matrix4); \endcode
+
+Here, provided the matrices have at least 2 rows and 2 columns, each coefficient of the expression <tt>matrix3 + matrix4</tt> is going to be used several times in the matrix product. Instead of computing the sum every time, it is much better to compute it once and store it in a temporary variable. Eigen understands this and evaluates <tt>matrix3 + matrix4</tt> into a temporary variable before evaluating the product.
+
+*/
+
+}
diff --git a/doc/I02_HiPerformance.dox b/doc/I02_HiPerformance.dox
new file mode 100644
index 0000000..ac1c2ca
--- /dev/null
+++ b/doc/I02_HiPerformance.dox
@@ -0,0 +1,128 @@
+
+namespace Eigen {
+
+/** \page TopicWritingEfficientProductExpression Writing efficient matrix product expressions
+
+In general, achieving good performance with Eigen does not require any special effort:
+simply write your expressions in the most high-level way. This is especially true
+for small fixed-size matrices. For large matrices, however, it might be useful to
+take some care when writing your expressions in order to minimize useless evaluations
+and optimize performance.
+In this page we will give a brief overview of Eigen's internal mechanism for simplifying
+and evaluating complex product expressions, and discuss its current limitations.
+In particular, we will focus on expressions matching level 2 and 3 BLAS routines, i.e.,
+all kinds of matrix products and triangular solvers.
+
+Indeed, Eigen implements a set of highly optimized routines which are very similar
+to BLAS routines. Unlike BLAS, these routines are made available to the user via a high-level and
+natural API. Each of these routines can compute, in a single evaluation, a wide variety of expressions.
+Given an expression, the challenge is then to map it to a minimal set of such routines.
+As explained later, this mechanism has some limitations, and knowing them will allow
+you to write faster code by making your expressions more Eigen-friendly.
+
+\section GEMM General Matrix-Matrix product (GEMM)
+
+Let's start with the most common primitive: the matrix product of general dense matrices.
+In the BLAS world this corresponds to the GEMM routine. Our equivalent primitive can
+perform the following operation:
+\f$ C.noalias() += \alpha op1(A) op2(B) \f$
+where A, B, and C are column and/or row major matrices (or sub-matrices),
+alpha is a scalar value, and op1, op2 can be transpose, adjoint, conjugate, or the identity.
+When Eigen detects a matrix product, it analyzes both sides of the product to extract a
+unique scalar factor alpha, and for each side, its effective storage order, shape, and conjugation states.
+More precisely, each side is simplified by iteratively removing trivial expressions such as scalar multiples,
+negation, and conjugation. Transpose and Block expressions are not evaluated; they only modify the storage order
+and shape. All other expressions are immediately evaluated.
+For instance, the following expression:
+\code m1.noalias() -= s4 * (s1 * m2.adjoint() * (-(s3*m3).conjugate()*s2))  \endcode
+is automatically simplified to:
+\code m1.noalias() += (s1*s2*conj(s3)*s4) * m2.adjoint() * m3.conjugate() \endcode
+which exactly matches our GEMM routine.
+
+\subsection GEMM_Limitations Limitations
+Unfortunately, this simplification mechanism is not perfect yet and not all expressions which could be
+handled by a single GEMM-like call are correctly detected.
+<table class="manual" style="width:100%">
+<tr>
+<th>Not optimal expression</th>
+<th>Evaluated as</th>
+<th>Optimal version (single evaluation)</th>
+<th>Comments</th>
+</tr>
+<tr>
+<td>\code
+m1 += m2 * m3; \endcode</td>
+<td>\code
+temp = m2 * m3;
+m1 += temp; \endcode</td>
+<td>\code
+m1.noalias() += m2 * m3; \endcode</td>
+<td>Use .noalias() to tell Eigen the result and right-hand-sides do not alias. 
+    Otherwise the product m2 * m3 is evaluated into a temporary.</td>
+</tr>
+<tr class="alt">
+<td></td>
+<td></td>
+<td>\code
+m1.noalias() += s1 * (m2 * m3); \endcode</td>
+<td>This is a special feature of Eigen. Here the product between a scalar
+    and a matrix product does not evaluate the matrix product but instead it
+    returns a matrix product expression tracking the scalar scaling factor. <br>
+    Without this optimization, the matrix product would be evaluated into a
+    temporary as in the next example.</td>
+</tr>
+<tr>
+<td>\code
+m1.noalias() += (m2 * m3).adjoint(); \endcode</td>
+<td>\code
+temp = m2 * m3;
+m1 += temp.adjoint(); \endcode</td>
+<td>\code
+m1.noalias() += m3.adjoint()
+              * m2.adjoint(); \endcode</td>
+<td>This is because the product expression has the EvalBeforeNesting bit which
+    enforces the evaluation of the product by the Transpose expression.</td>
+</tr>
+<tr class="alt">
+<td>\code
+m1 = m1 + m2 * m3; \endcode</td>
+<td>\code
+temp = m2 * m3;
+m1 = m1 + temp; \endcode</td>
+<td>\code m1.noalias() += m2 * m3; \endcode</td>
+<td>Here there is no way to detect at compile time that the two m1 are the same,
+    and so the matrix product will be immediately evaluated.</td>
+</tr>
+<tr>
+<td>\code
+m1.noalias() = m4 + m2 * m3; \endcode</td>
+<td>\code
+temp = m2 * m3;
+m1 = m4 + temp; \endcode</td>
+<td>\code
+m1 = m4;
+m1.noalias() += m2 * m3; \endcode</td>
+<td>First of all, here the .noalias() in the first expression is useless because
+    m2*m3 will be evaluated anyway. However, note how this expression can be rewritten
+    so that no temporary is required. (Tip: for very small fixed-size matrices,
+    it is slightly better to rewrite it like this: m1.noalias() = m2 * m3; m1 += m4;)</td>
+</tr>
+<tr class="alt">
+<td>\code
+m1.noalias() += (s1*m2).block(..) * m3; \endcode</td>
+<td>\code
+temp = (s1*m2).block(..);
+m1 += temp * m3; \endcode</td>
+<td>\code
+m1.noalias() += s1 * m2.block(..) * m3; \endcode</td>
+<td>This is because our expression analyzer is currently not able to extract trivial
+    expressions nested in a Block expression. Therefore the nested scalar
+    multiple cannot be properly extracted.</td>
+</tr>
+</table>
+
+Of course, all these remarks hold for all other kinds of products involving triangular or selfadjoint matrices.
+
+*/
+
+}
diff --git a/doc/I03_InsideEigenExample.dox b/doc/I03_InsideEigenExample.dox
new file mode 100644
index 0000000..3245a01
--- /dev/null
+++ b/doc/I03_InsideEigenExample.dox
@@ -0,0 +1,500 @@
+namespace Eigen {
+
+/** \page TopicInsideEigenExample What happens inside Eigen, on a simple example
+
+\b Table \b of \b contents
+  - \ref WhyInteresting
+  - \ref ConstructingVectors
+  - \ref ConstructionOfSumXpr
+  - \ref Assignment
+\n
+
+<hr>
+
+
+Consider the following example program:
+
+\code
+#include<Eigen/Core>
+
+int main()
+{
+  int size = 50;
+  // VectorXf is a vector of floats, with dynamic size.
+  Eigen::VectorXf u(size), v(size), w(size);
+  u = v + w;
+}
+\endcode
+
+The goal of this page is to understand how Eigen compiles it, assuming that SSE2 vectorization is enabled (GCC option -msse2).
+
+\section WhyInteresting Why it's interesting
+
+You might think that the above example program is so simple that compiling it shouldn't involve anything interesting. So before starting, let us explain what is nontrivial in compiling it correctly -- that is, producing optimized code -- so that the complexity of Eigen, which we'll explain here, is really useful.
+
+Look at the line of code
+\code
+  u = v + w;   //   (*)
+\endcode
+
+The first important thing about compiling it, is that the arrays should be traversed only once, like
+\code
+  for(int i = 0; i < size; i++) u[i] = v[i] + w[i];
+\endcode
+The problem is that if we make a naive C++ library where the VectorXf class has an operator+ returning a VectorXf, then the line of code (*) will amount to:
+\code
+  VectorXf tmp = v + w;
+  VectorXf u = tmp;
+\endcode
+Obviously, the introduction of the temporary \a tmp here is useless. It has a very bad effect on performance, first because the creation of \a tmp requires a dynamic memory allocation in this context, and second because there are now two for loops:
+\code
+  for(int i = 0; i < size; i++) tmp[i] = v[i] + w[i];
+  for(int i = 0; i < size; i++) u[i] = tmp[i];
+\endcode
+Traversing the arrays twice instead of once is terrible for performance, as it means that we do many redundant memory accesses.
+
+The second important thing about compiling the above program is to make correct use of SSE2 instructions. Notice that Eigen also supports AltiVec, and that all of the discussion here applies to AltiVec as well.
+
+SSE2, like AltiVec, is a set of instructions for performing computations on packets of 128 bits at once. Since a float is 32 bits, this means that SSE2 instructions can handle 4 floats at once. This means that, if correctly used, they can make our computation go up to 4x faster.
+
+However, in the above program, we have chosen size=50, so our vectors consist of 50 floats, and 50 is not a multiple of 4. This means that we cannot hope to do all of the computation using SSE2 instructions. The second best thing, which we should aim for, is to handle the first 48 coefficients with SSE2 instructions, since 48 is the biggest multiple of 4 below 50, and then handle separately, without SSE2, the 49th and 50th coefficients. Something like this:
+
+\code
+  for(int i = 0; i < 4*(size/4); i+=4) u.packet(i)  = v.packet(i) + w.packet(i);
+  for(int i = 4*(size/4); i < size; i++) u[i] = v[i] + w[i];
+\endcode
+
+So let us look line by line at our example program, and let's follow Eigen as it compiles it.
+
+\section ConstructingVectors Constructing vectors
+
+Let's analyze the first line:
+
+\code
+  Eigen::VectorXf u(size), v(size), w(size);
+\endcode
+
+First of all, VectorXf is the following typedef:
+\code
+  typedef Matrix<float, Dynamic, 1> VectorXf;
+\endcode
+
+The class template Matrix is declared in src/Core/util/ForwardDeclarations.h with 6 template parameters, but the last 3 are automatically determined by the first 3. So you don't need to worry about them for now. Here, Matrix\<float, Dynamic, 1\> means a matrix of floats, with a dynamic number of rows and 1 column.
+
+The Matrix class inherits a base class, MatrixBase. Don't worry about it, for now it suffices to say that MatrixBase is what unifies matrices/vectors and all the expressions types -- more on that below.
+
+When we do
+\code
+  Eigen::VectorXf u(size);
+\endcode
+the constructor that is called is Matrix::Matrix(int), in src/Core/Matrix.h. Besides some assertions, all it does is to construct the \a m_storage member, which is of type DenseStorage\<float, Dynamic, Dynamic, 1\>.
+
+You may wonder, isn't it overengineering to have the storage in a separate class? The reason is that the Matrix class template covers all kinds of matrices and vectors: both fixed-size and dynamic-size. The storage method is not the same in these two cases. For fixed-size, the matrix coefficients are stored as a plain member array. For dynamic-size, the coefficients are stored as a pointer to a dynamically-allocated array. Because of this, we need to abstract storage away from the Matrix class. That's DenseStorage.
+
+Let's look at this constructor, in src/Core/DenseStorage.h. You can see that there are many partial template specializations of DenseStorage here, treating separately the cases where dimensions are Dynamic or fixed at compile-time. The partial specialization that we are looking at is:
+\code
+template<typename T, int _Cols> class DenseStorage<T, Dynamic, Dynamic, _Cols>
+\endcode
+
+Here, the constructor called is DenseStorage::DenseStorage(int size, int rows, int columns)
+with size=50, rows=50, columns=1.
+
+Here is this constructor:
+\code
+inline DenseStorage(int size, int rows, int) : m_data(internal::aligned_new<T>(size)), m_rows(rows) {}
+\endcode
+
+Here, the \a m_data member is the actual array of coefficients of the matrix. As you see, it is dynamically allocated. Rather than calling new[] or malloc(), as you can see, we have our own internal::aligned_new defined in src/Core/util/Memory.h. What it does is that if vectorization is enabled, then it uses a platform-specific call to allocate a 128-bit-aligned array, as that is very useful for vectorization with both SSE2 and AltiVec. If vectorization is disabled, it amounts to the standard new[].
+
+As you can see, the constructor also sets the \a m_rows member to \a size. Notice that there is no \a m_columns member: indeed, in this partial specialization of DenseStorage, we know the number of columns at compile-time, since the _Cols template parameter is different from Dynamic. Namely, in our case, _Cols is 1, which is to say that our vector is just a matrix with 1 column. Hence, there is no need to store the number of columns as a runtime variable.
+
+When you call VectorXf::data() to get the pointer to the array of coefficients, it returns DenseStorage::data() which returns the \a m_data member.
+
+When you call VectorXf::size() to get the size of the vector, this is actually a method in the base class MatrixBase. It determines that the vector is a column-vector, since ColsAtCompileTime==1 (this comes from the template parameters in the typedef VectorXf). It deduces that the size is the number of rows, so it returns VectorXf::rows(), which returns DenseStorage::rows(), which returns the \a m_rows member, which was set to \a size by the constructor.
+
+\section ConstructionOfSumXpr Construction of the sum expression
+
+Now that our vectors are constructed, let's move on to the next line:
+
+\code
+u = v + w;
+\endcode
+
+The executive summary is that operator+ returns a "sum of vectors" expression, but doesn't actually perform the computation. It is the operator=, whose call occurs thereafter, that does the computation.
+
+Let us now see what Eigen does when it sees this:
+
+\code
+v + w
+\endcode
+
+Here, v and w are of type VectorXf, which is a typedef for a specialization of Matrix (as we explained above), which is a subclass of MatrixBase. So what is being called is
+
+\code
+MatrixBase::operator+(const MatrixBase&)
+\endcode
+
+The return type of this operator is
+\code
+CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf>
+\endcode
+The CwiseBinaryOp class is our first encounter with an expression template. As we said, the operator+ doesn't by itself perform any computation; it just returns an abstract "sum of vectors" expression. Since there are also "difference of vectors" and "coefficient-wise product of vectors" expressions, we unify them all as "coefficient-wise binary operations", which we abbreviate as "CwiseBinaryOp". "Coefficient-wise" means that the operation is performed coefficient by coefficient. "Binary" means that there are two operands -- we are adding two vectors together.
+
+Now you might ask, what if we did something like
+
+\code
+v + w + u;
+\endcode
+
+The first v + w would return a CwiseBinaryOp as above, so in order for this to compile, we'd need to define an operator+ also in the class CwiseBinaryOp... at this point it starts looking like a nightmare: are we going to have to define all operators in each of the expression classes (as you guessed, CwiseBinaryOp is only one of many) ? This looks like a dead end!
+
+The solution is that CwiseBinaryOp itself, as well as Matrix and all the other expression types, is a subclass of MatrixBase. So it is enough to define once and for all the operators in class MatrixBase.
+
+Since MatrixBase is the common base class of different subclasses, the aspects that depend on the subclass must be abstracted from MatrixBase. This is called polymorphism.
+
+The classical approach to polymorphism in C++ is by means of virtual functions. This is dynamic polymorphism. Here we don't want dynamic polymorphism because the whole design of Eigen is based around the assumption that all the complexity, all the abstraction, gets resolved at compile-time. This is crucial: if the abstraction can't get resolved at compile-time, Eigen's compile-time optimization mechanisms become useless, not to mention that if that abstraction has to be resolved at runtime it'll incur an overhead by itself.
+
+Here, what we want is to have a single class MatrixBase as the base of many subclasses, in such a way that each MatrixBase object (be it a matrix, or vector, or any kind of expression) knows at compile-time (as opposed to run-time) of which particular subclass it is an object (i.e. whether it is a matrix, or an expression, and what kind of expression).
+
+The solution is the <a href="http://en.wikipedia.org/wiki/Curiously_Recurring_Template_Pattern">Curiously Recurring Template Pattern</a>. If this pattern is new to you, the linked Wikipedia page is a good short introduction.
+
+In short, MatrixBase takes a template parameter \a Derived. Whenever we define a subclass Subclass, we actually make Subclass inherit MatrixBase\<Subclass\>. The point is that different subclasses inherit different MatrixBase types. Thanks to this, whenever we have an object of a subclass, and we call on it some MatrixBase method, we still remember even from inside the MatrixBase method which particular subclass we're talking about.
+
+This means that we can put almost all the methods and operators in the base class MatrixBase, and have only the bare minimum in the subclasses. If you look at the subclasses in Eigen, like for instance the CwiseBinaryOp class, they have very few methods. There are coeff() and sometimes coeffRef() methods for access to the coefficients, there are rows() and cols() methods returning the number of rows and columns, but there isn't much more than that. All the meat is in MatrixBase, so it only needs to be coded once for all kinds of expressions, matrices, and vectors.
+
+So let's end this digression and come back to the piece of code from our example program that we were currently analyzing,
+
+\code
+v + w
+\endcode
+
+Now that MatrixBase is a good friend, let's write fully the prototype of the operator+ that gets called here (this code is from src/Core/MatrixBase.h):
+
+\code
+template<typename Derived>
+class MatrixBase
+{
+  // ...
+
+  template<typename OtherDerived>
+  const CwiseBinaryOp<internal::scalar_sum_op<typename internal::traits<Derived>::Scalar>, Derived, OtherDerived>
+  operator+(const MatrixBase<OtherDerived> &other) const;
+
+  // ...
+};
+\endcode
+
+Here of course, \a Derived and \a OtherDerived are VectorXf.
+
+As we said, CwiseBinaryOp is also used for other operations such as subtraction, so it takes another template parameter determining the operation that will be applied to the coefficients. This template parameter is a functor, that is, a class with an operator() so that it behaves like a function. Here, the functor used is internal::scalar_sum_op. It is defined in src/Core/Functors.h.
+
+Let us now explain the internal::traits here. The internal::scalar_sum_op class takes one template parameter: the type of the numbers to handle. Here of course we want to pass the scalar type (a.k.a. numeric type) of VectorXf, which is \c float. How do we determine the scalar type of \a Derived? Throughout Eigen, all matrix and expression types define a typedef \a Scalar which gives their scalar type. For example, VectorXf::Scalar is a typedef for \c float. So here, if life were easy, we could find the numeric type of \a Derived as just
+\code
+typename Derived::Scalar
+\endcode
+Unfortunately, we can't do that here, as the compiler would complain that the type Derived hasn't yet been defined. So we use a workaround: in src/Core/util/ForwardDeclarations.h, we declared (not defined!) all our subclasses, like Matrix, and we also declared the following class template:
+\code
+template<typename T> struct internal::traits;
+\endcode
+In src/Core/Matrix.h, right \em before the definition of class Matrix, we define a partial specialization of internal::traits for T=Matrix\<any template parameters\>. In this specialization of internal::traits, we define the Scalar typedef. So when we actually define Matrix, it is legal to refer to "typename internal::traits\<Matrix\>::Scalar".
+
+Anyway, we have declared our operator+. In our case, where \a Derived and \a OtherDerived are VectorXf, the above declaration amounts to:
+\code
+class MatrixBase<VectorXf>
+{
+  // ...
+
+  const CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf>
+  operator+(const MatrixBase<VectorXf> &other) const;
+
+  // ...
+};
+\endcode
+
+Let's now jump to src/Core/CwiseBinaryOp.h to see how it is defined. As you can see there, all it does is to return a CwiseBinaryOp object, and this object is just storing references to the left-hand-side and right-hand-side expressions -- here, these are the vectors \a v and \a w. Well, the CwiseBinaryOp object is also storing an instance of the (empty) functor class, but you shouldn't worry about it as that is a minor implementation detail.
+
+Thus, the operator+ hasn't performed any actual computation. To summarize, the operation \a v + \a w just returned an object of type CwiseBinaryOp which did nothing else than just storing references to \a v and \a w.
+
+\section Assignment The assignment
+
+At this point, the expression \a v + \a w has finished evaluating, so, in the process of compiling the line of code
+\code
+u = v + w;
+\endcode
+we now enter the operator=.
+
+What operator= is being called here? The vector u is an object of class VectorXf, i.e. Matrix. In src/Core/Matrix.h, inside the definition of class Matrix, we see this:
+\code
+    template<typename OtherDerived>
+    inline Matrix& operator=(const MatrixBase<OtherDerived>& other)
+    {
+      eigen_assert(m_storage.data()!=0 && "you cannot use operator= with a non initialized matrix (instead use set()");
+      return Base::operator=(other.derived());
+    }
+\endcode
+Here, Base is a typedef for MatrixBase\<Matrix\>. So, what is being called is the operator= of MatrixBase. Let's see its prototype in src/Core/MatrixBase.h:
+\code
+    template<typename OtherDerived>
+    Derived& operator=(const MatrixBase<OtherDerived>& other);
+\endcode
+Here, \a Derived is VectorXf (since u is a VectorXf) and \a OtherDerived is CwiseBinaryOp. More specifically, as explained in the previous section, \a OtherDerived is:
+\code
+CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf>
+\endcode
+So the full prototype of the operator= being called is:
+\code
+VectorXf& MatrixBase<VectorXf>::operator=(const MatrixBase<CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf> > & other);
+\endcode
+This operator= literally reads "copying a sum of two VectorXf's into another VectorXf".
+
+Let's now look at the implementation of this operator=. It resides in the file src/Core/Assign.h.
+
+What we can see there is:
+\code
+template<typename Derived>
+template<typename OtherDerived>
+inline Derived& MatrixBase<Derived>
+  ::operator=(const MatrixBase<OtherDerived>& other)
+{
+  return internal::assign_selector<Derived,OtherDerived>::run(derived(), other.derived());
+}
+\endcode
+
+OK so our next task is to understand internal::assign_selector :)
+
+Here is its declaration (all that is still in the same file src/Core/Assign.h)
+\code
+template<typename Derived, typename OtherDerived,
+         bool EvalBeforeAssigning = int(OtherDerived::Flags) & EvalBeforeAssigningBit,
+         bool NeedToTranspose = Derived::IsVectorAtCompileTime
+                && OtherDerived::IsVectorAtCompileTime
+                && int(Derived::RowsAtCompileTime) == int(OtherDerived::ColsAtCompileTime)
+                && int(Derived::ColsAtCompileTime) == int(OtherDerived::RowsAtCompileTime)
+                && int(Derived::SizeAtCompileTime) != 1>
+struct internal::assign_selector;
+\endcode
+
+So internal::assign_selector takes 4 template parameters, but the last two are automatically determined by the first two.
+
+EvalBeforeAssigning is here to enforce the EvalBeforeAssigningBit. As explained <a href="TopicLazyEvaluation.html">here</a>, certain expressions have this flag which makes them automatically evaluate into temporaries before assigning them to another expression. This is the case of the Product expression, in order to avoid strange aliasing effects when doing "m = m * m;" However, of course here our CwiseBinaryOp expression doesn't have the EvalBeforeAssigningBit: we said since the beginning that we didn't want a temporary to be introduced here. So if you go to src/Core/CwiseBinaryOp.h, you'll see that the Flags in internal::traits\<CwiseBinaryOp\> don't include the EvalBeforeAssigningBit. The Flags member of CwiseBinaryOp is then imported from the internal::traits by the EIGEN_GENERIC_PUBLIC_INTERFACE macro. Anyway, here the template parameter EvalBeforeAssigning has the value \c false.
+
+NeedToTranspose is here for the case where the user wants to copy a row-vector into a column-vector. We allow this as a special exception to the general rule that in assignments we require the dimensions to match. Anyway, here both the left-hand and right-hand sides are column vectors, in the sense that ColsAtCompileTime is equal to 1. So NeedToTranspose is \c false too.
+
+So, here we are in the partial specialization:
+\code
+internal::assign_selector<Derived, OtherDerived, false, false>
+\endcode
+
+Here's how it is defined:
+\code
+template<typename Derived, typename OtherDerived>
+struct internal::assign_selector<Derived,OtherDerived,false,false> {
+  static Derived& run(Derived& dst, const OtherDerived& other) { return dst.lazyAssign(other.derived()); }
+};
+\endcode
+
+OK so now our next job is to understand how lazyAssign works :)
+
+\code
+template<typename Derived>
+template<typename OtherDerived>
+inline Derived& MatrixBase<Derived>
+  ::lazyAssign(const MatrixBase<OtherDerived>& other)
+{
+  EIGEN_STATIC_ASSERT_SAME_MATRIX_SIZE(Derived,OtherDerived)
+  eigen_assert(rows() == other.rows() && cols() == other.cols());
+  internal::assign_impl<Derived, OtherDerived>::run(derived(),other.derived());
+  return derived();
+}
+\endcode
+
+What do we see here? Some assertions, and then the only interesting line is:
+\code
+  internal::assign_impl<Derived, OtherDerived>::run(derived(),other.derived());
+\endcode
+
+OK so now we want to know what is inside internal::assign_impl.
+
+Here is its declaration:
+\code
+template<typename Derived1, typename Derived2,
+         int Vectorization = internal::assign_traits<Derived1, Derived2>::Vectorization,
+         int Unrolling = internal::assign_traits<Derived1, Derived2>::Unrolling>
+struct internal::assign_impl;
+\endcode
+Again, internal::assign_impl takes 4 template parameters, but the last two are automatically determined by the first two.
+
+These two parameters \a Vectorization and \a Unrolling are determined by a helper class internal::assign_traits. Its job is to determine which vectorization strategy to use (that is \a Vectorization) and which unrolling strategy to use (that is \a Unrolling).
+
+We'll not enter into the details of how these strategies are chosen (this is in the implementation of internal::assign_traits at the top of the same file). Let's just say that here \a Vectorization has the value \a LinearVectorization, and \a Unrolling has the value \a NoUnrolling (the latter is obvious since our vectors have dynamic size so there's no way to unroll the loop at compile-time).
+
+So the partial specialization of internal::assign_impl that we're looking at is:
+\code
+internal::assign_impl<Derived1, Derived2, LinearVectorization, NoUnrolling>
+\endcode
+
+Here is how it's defined:
+\code
+template<typename Derived1, typename Derived2>
+struct internal::assign_impl<Derived1, Derived2, LinearVectorization, NoUnrolling>
+{
+  static void run(Derived1 &dst, const Derived2 &src)
+  {
+    const int size = dst.size();
+    const int packetSize = internal::packet_traits<typename Derived1::Scalar>::size;
+    const int alignedStart = internal::assign_traits<Derived1,Derived2>::DstIsAligned ? 0
+                           : internal::first_aligned(&dst.coeffRef(0), size);
+    const int alignedEnd = alignedStart + ((size-alignedStart)/packetSize)*packetSize;
+
+    for(int index = 0; index < alignedStart; index++)
+      dst.copyCoeff(index, src);
+
+    for(int index = alignedStart; index < alignedEnd; index += packetSize)
+    {
+      dst.template copyPacket<Derived2, Aligned, internal::assign_traits<Derived1,Derived2>::SrcAlignment>(index, src);
+    }
+
+    for(int index = alignedEnd; index < size; index++)
+      dst.copyCoeff(index, src);
+  }
+};
+\endcode
+
+Here's how it works. \a LinearVectorization means that the left-hand and right-hand side expressions can be accessed linearly, i.e. you can refer to their coefficients by a single integer \a index, as opposed to having to refer to them by two integers \a row, \a column.
+
+As we said at the beginning, vectorization works with blocks of 4 floats. Here, \a packetSize is 4.
+
+There are two potential problems that we need to deal with:
+\li first, vectorization works much better if the packets are 128-bit-aligned. This is especially important for write access. So when writing to the coefficients of \a dst, we want to group these coefficients by packets of 4 such that each of these packets is 128-bit-aligned. In general, this requires skipping a few coefficients at the beginning of \a dst. This is the purpose of \a alignedStart. We then copy these first few coefficients one by one, not by packets. However, in our case, the \a dst expression is a VectorXf and remember that in the construction of the vectors we allocated aligned arrays. Thanks to \a DstIsAligned, Eigen remembers that without having to do any runtime check, so \a alignedStart is zero and this part is avoided altogether.
+\li second, the number of coefficients to copy is not in general a multiple of \a packetSize. Here, there are 50 coefficients to copy and \a packetSize is 4. So we'll have to copy the last 2 coefficients one by one, not by packets. Here, \a alignedEnd is 48.
+
+Now come the actual loops.
+
+First, the vectorized part: the 48 first coefficients out of 50 will be copied by packets of 4:
+\code
+  for(int index = alignedStart; index < alignedEnd; index += packetSize)
+  {
+    dst.template copyPacket<Derived2, Aligned, internal::assign_traits<Derived1,Derived2>::SrcAlignment>(index, src);
+  }
+\endcode
+
+What is copyPacket? It is defined in src/Core/Coeffs.h:
+\code
+template<typename Derived>
+template<typename OtherDerived, int StoreMode, int LoadMode>
+inline void MatrixBase<Derived>::copyPacket(int index, const MatrixBase<OtherDerived>& other)
+{
+  eigen_internal_assert(index >= 0 && index < size());
+  derived().template writePacket<StoreMode>(index,
+    other.derived().template packet<LoadMode>(index));
+}
+\endcode
+
+OK, what are writePacket() and packet() here?
+
+First, writePacket() here is a method on the left-hand side VectorXf. So we go to src/Core/Matrix.h to look at its definition:
+\code
+template<int StoreMode>
+inline void writePacket(int index, const PacketScalar& x)
+{
+  internal::pstoret<Scalar, PacketScalar, StoreMode>(m_storage.data() + index, x);
+}
+\endcode
+Here, \a StoreMode is \a #Aligned, indicating that we are doing a 128-bit-aligned write access, \a PacketScalar is a type representing a "SSE packet of 4 floats" and internal::pstoret is a function writing such a packet in memory. Their definitions are architecture-specific, we find them in src/Core/arch/SSE/PacketMath.h:
+
+The line in src/Core/arch/SSE/PacketMath.h that determines the PacketScalar type (via a typedef in Matrix.h) is:
+\code
+template<> struct internal::packet_traits<float>  { typedef __m128  type; enum {size=4}; };
+\endcode
+Here, __m128 is an SSE-specific type. Notice that the enum \a size here is what was used to define \a packetSize above.
+
+And here is the implementation of internal::pstoret:
+\code
+template<> inline void internal::pstore(float*  to, const __m128&  from) { _mm_store_ps(to, from); }
+\endcode
+Here, _mm_store_ps is an SSE-specific intrinsic function, representing a single SSE instruction. The difference between internal::pstore and internal::pstoret is that internal::pstoret is a dispatcher handling both the aligned and unaligned cases; you find its definition in src/Core/GenericPacketMath.h:
+\code
+template<typename Scalar, typename Packet, int LoadMode>
+inline void internal::pstoret(Scalar* to, const Packet& from)
+{
+  if(LoadMode == Aligned)
+    internal::pstore(to, from);
+  else
+    internal::pstoreu(to, from);
+}
+\endcode
+
+OK, that explains how writePacket() works. Now let's look into the packet() call. Remember that we are analyzing this line of code inside copyPacket():
+\code
+derived().template writePacket<StoreMode>(index,
+    other.derived().template packet<LoadMode>(index));
+\endcode
+
+Here, \a other is our sum expression \a v + \a w. The .derived() is just casting from MatrixBase to the subclass which here is CwiseBinaryOp. So let's go to src/Core/CwiseBinaryOp.h:
+\code
+class CwiseBinaryOp
+{
+  // ...
+    template<int LoadMode>
+    inline PacketScalar packet(int index) const
+    {
+      return m_functor.packetOp(m_lhs.template packet<LoadMode>(index), m_rhs.template packet<LoadMode>(index));
+    }
+};
+\endcode
+Here, \a m_lhs is the vector \a v, and \a m_rhs is the vector \a w. So the packet() function here is Matrix::packet(). The template parameter \a LoadMode is \a #Aligned. So we're looking at
+\code
+class Matrix
+{
+  // ...
+    template<int LoadMode>
+    inline PacketScalar packet(int index) const
+    {
+      return internal::ploadt<Scalar, LoadMode>(m_storage.data() + index);
+    }
+};
+\endcode
+We leave it to you to look up the definitions of internal::ploadt in GenericPacketMath.h and of internal::pload in src/Core/arch/SSE/PacketMath.h; they are very similar to internal::pstoret and internal::pstore above.
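If you don't want to chase those definitions down, here is a sketch of the dispatch logic of internal::ploadt. It is simplified from GenericPacketMath.h, and the packet is reduced to a plain portable struct so that the sketch does not depend on SSE; on SSE, pload and ploadu would be _mm_load_ps and _mm_loadu_ps.

```cpp
#include <cstring>

enum { Aligned = 2, Unaligned = 0 };  // illustrative values only

// Stand-in for the SSE __m128 packet of 4 floats.
struct Packet4f { float v[4]; };

// "Aligned load": on SSE this would be _mm_load_ps.
inline Packet4f pload(const float* from)
{ Packet4f p; std::memcpy(p.v, from, sizeof p.v); return p; }

// "Unaligned load": on SSE this would be _mm_loadu_ps.
inline Packet4f ploadu(const float* from)
{ Packet4f p; std::memcpy(p.v, from, sizeof p.v); return p; }

// The dispatcher, mirroring internal::pstoret: pick the aligned or
// unaligned primitive depending on the compile-time LoadMode.
template<int LoadMode>
inline Packet4f ploadt(const float* from)
{
  if(LoadMode == Aligned)
    return pload(from);
  else
    return ploadu(from);
}
```

Since \a LoadMode is a compile-time constant, the branch is resolved at compile time and the dispatcher costs nothing at run time.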
+
+Let's go back to CwiseBinaryOp::packet(). Once the packets from the vectors \a v and \a w have been returned, what does this function do? It calls m_functor.packetOp() on them. What is m_functor? Here we must remember what particular template specialization of CwiseBinaryOp we're dealing with:
+\code
+CwiseBinaryOp<internal::scalar_sum_op<float>, VectorXf, VectorXf>
+\endcode
+So m_functor is an object of the empty class internal::scalar_sum_op<float>. As we mentioned above, don't worry about why we constructed an object of this empty class at all -- it's an implementation detail; the point is that some other functors need to store member data.
+
+Anyway, internal::scalar_sum_op is defined in src/Core/Functors.h:
+\code
+template<typename Scalar> struct internal::scalar_sum_op EIGEN_EMPTY_STRUCT {
+  inline const Scalar operator() (const Scalar& a, const Scalar& b) const { return a + b; }
+  template<typename PacketScalar>
+  inline const PacketScalar packetOp(const PacketScalar& a, const PacketScalar& b) const
+  { return internal::padd(a,b); }
+};
+\endcode
+As you can see, all packetOp() does is call internal::padd on the two packets. Here is the definition of internal::padd from src/Core/arch/SSE/PacketMath.h:
+\code
+template<> inline __m128  internal::padd(const __m128&  a, const __m128&  b) { return _mm_add_ps(a,b); }
+\endcode
+Here, _mm_add_ps is an SSE-specific intrinsic function, representing a single SSE instruction.
+
+To summarize, the loop
+\code
+  for(int index = alignedStart; index < alignedEnd; index += packetSize)
+  {
+    dst.template copyPacket<Derived2, Aligned, internal::assign_traits<Derived1,Derived2>::SrcAlignment>(index, src);
+  }
+\endcode
+has been compiled to the following code: for each packet index \a i going from 0 to 11 ( = 48/4 - 1), read the i-th packet (of 4 floats) from the vector v and the i-th packet from the vector w using two _mm_load_ps SSE instructions, then add them together using an _mm_add_ps instruction, then store the result using an _mm_store_ps instruction.
+
+There remains the second loop handling the last few (here, the last 2) coefficients:
+\code
+  for(int index = alignedEnd; index < size; index++)
+    dst.copyCoeff(index, src);
+\endcode
+It works just like the loop we just explained; it is just simpler because there is no SSE vectorization involved here. copyPacket() becomes copyCoeff(), packet() becomes coeff(), writePacket() becomes coeffRef(). If you have followed us this far, you can probably understand this part by yourself.
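Putting both loops together, the generated code is equivalent to the following portable sketch. The packet is again a plain 4-float struct so the sketch runs anywhere; in the real SSE code, the marked operations are single _mm_load_ps, _mm_add_ps and _mm_store_ps instructions.

```cpp
struct Packet4f { float v[4]; };

inline Packet4f padd(const Packet4f& a, const Packet4f& b)
{
  Packet4f r;
  for(int k = 0; k < 4; ++k) r.v[k] = a.v[k] + b.v[k];  // one _mm_add_ps in real SSE code
  return r;
}

// dst = v + w for `size` floats: vectorized main loop plus scalar tail,
// mirroring the two loops generated by Eigen's assignment code.
// (Sketch: for simplicity we assume the arrays are aligned at index 0.)
void add_vectors(float* dst, const float* v, const float* w, int size)
{
  const int packetSize = 4;
  const int alignedEnd = size - size % packetSize;
  for(int index = 0; index < alignedEnd; index += packetSize)
  {
    Packet4f a, b;                                         // two packet loads (_mm_load_ps)
    for(int k = 0; k < 4; ++k) { a.v[k] = v[index+k]; b.v[k] = w[index+k]; }
    Packet4f r = padd(a, b);                               // one packet add   (_mm_add_ps)
    for(int k = 0; k < 4; ++k) dst[index+k] = r.v[k];      // one packet store (_mm_store_ps)
  }
  for(int index = alignedEnd; index < size; ++index)       // scalar tail, like copyCoeff()
    dst[index] = v[index] + w[index];
}
```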
+
+We see that all the C++ abstraction of Eigen goes away during compilation and that we indeed are precisely controlling which assembly instructions we emit. Such is the beauty of C++! Since we have such precise control over the emitted assembly instructions, but such complex logic to choose the right instructions, we can say that Eigen really behaves like an optimizing compiler. If you prefer, you could say that Eigen behaves like a script for the compiler. In a sense, C++ template metaprogramming is scripting the compiler -- and it's been shown that this scripting language is Turing-complete. See <a href="http://en.wikipedia.org/wiki/Template_metaprogramming"> Wikipedia</a>.
+
+*/
+
+}
diff --git a/doc/I05_FixedSizeVectorizable.dox b/doc/I05_FixedSizeVectorizable.dox
new file mode 100644
index 0000000..192ea74
--- /dev/null
+++ b/doc/I05_FixedSizeVectorizable.dox
@@ -0,0 +1,38 @@
+namespace Eigen {
+
+/** \page TopicFixedSizeVectorizable Fixed-size vectorizable Eigen objects
+
+The goal of this page is to explain what we mean by "fixed-size vectorizable".
+
+\section summary Executive Summary
+
+An Eigen object is called "fixed-size vectorizable" if it has fixed size and that size is a multiple of 16 bytes.
+
+Examples include:
+\li Eigen::Vector2d
+\li Eigen::Vector4d
+\li Eigen::Vector4f
+\li Eigen::Matrix2d
+\li Eigen::Matrix2f
+\li Eigen::Matrix4d
+\li Eigen::Matrix4f
+\li Eigen::Affine3d
+\li Eigen::Affine3f
+\li Eigen::Quaterniond
+\li Eigen::Quaternionf
+
+\section explanation Explanation
+
+First, "fixed-size" should be clear: an Eigen object has fixed size if its number of rows and its number of columns are fixed at compile-time. So for example Matrix3f has fixed size, but MatrixXf doesn't (the opposite of fixed-size is dynamic-size).
+
+The array of coefficients of a fixed-size Eigen object is a plain "static array", it is not dynamically allocated. For example, the data behind a Matrix4f is just a "float array[16]".
+
+Fixed-size objects are typically very small, which means that we want to handle them with zero runtime overhead -- both in terms of memory usage and of speed.
+
+Now, vectorization (both SSE and AltiVec) works with 128-bit packets. Moreover, for performance reasons, these packets need to have 128-bit alignment.
+
+So it turns out that the only way that fixed-size Eigen objects can be vectorized is if their size is a multiple of 128 bits, or 16 bytes. Eigen will then request 16-byte alignment for these objects, and henceforth rely on these objects being aligned, so no runtime check for alignment is performed.
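To make the size and alignment requirements concrete, here is a small sketch that does not use Eigen itself. It mimics a fixed-size vectorizable type with C++11's alignas; Eigen actually uses compiler-specific attributes to request the alignment.

```cpp
#include <cstdint>

// Stand-in for a fixed-size vectorizable object like Vector4f:
// four floats, 16 bytes total, with the 16-byte alignment Eigen
// would request.
struct alignas(16) Vec4fLike { float data[4]; };

// A 3-float object is fixed-size but NOT vectorizable: 12 bytes
// is not a multiple of the 16-byte packet size.
struct Vec3fLike { float data[3]; };

static_assert(sizeof(Vec4fLike) == 16, "a multiple of the packet size");
static_assert(sizeof(Vec3fLike) == 12, "not a multiple of 16 bytes");

// Check at run time that a pointer satisfies the 16-byte alignment
// that the aligned packet loads/stores rely on.
inline bool is_16_byte_aligned(const void* p)
{ return (reinterpret_cast<std::uintptr_t>(p) % 16) == 0; }
```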
+
+*/
+
+}
diff --git a/doc/I06_TopicEigenExpressionTemplates.dox b/doc/I06_TopicEigenExpressionTemplates.dox
new file mode 100644
index 0000000..b31fd47
--- /dev/null
+++ b/doc/I06_TopicEigenExpressionTemplates.dox
@@ -0,0 +1,12 @@
+namespace Eigen {
+
+/** \page TopicEigenExpressionTemplates Expression templates in Eigen
+
+
+TODO: write this dox page!
+
+Is linked from the tutorial on arithmetic ops.
+
+*/
+
+}
diff --git a/doc/I07_TopicScalarTypes.dox b/doc/I07_TopicScalarTypes.dox
new file mode 100644
index 0000000..2ff03c1
--- /dev/null
+++ b/doc/I07_TopicScalarTypes.dox
@@ -0,0 +1,12 @@
+namespace Eigen {
+
+/** \page TopicScalarTypes Scalar types
+
+
+TODO: write this dox page!
+
+Is linked from the tutorial on the Matrix class.
+
+*/
+
+}
diff --git a/doc/I08_Resizing.dox b/doc/I08_Resizing.dox
new file mode 100644
index 0000000..c323e17
--- /dev/null
+++ b/doc/I08_Resizing.dox
@@ -0,0 +1,11 @@
+namespace Eigen {
+
+/** \page TopicResizing Resizing
+
+
+TODO: write this dox page!
+
+Is linked from the tutorial on the Matrix class.
+
+*/
+}
diff --git a/doc/I09_Vectorization.dox b/doc/I09_Vectorization.dox
new file mode 100644
index 0000000..274d045
--- /dev/null
+++ b/doc/I09_Vectorization.dox
@@ -0,0 +1,9 @@
+namespace Eigen {
+
+/** \page TopicVectorization Vectorization
+
+
+TODO: write this dox page!
+
+*/
+}
diff --git a/doc/I10_Assertions.dox b/doc/I10_Assertions.dox
new file mode 100644
index 0000000..d5697fc
--- /dev/null
+++ b/doc/I10_Assertions.dox
@@ -0,0 +1,13 @@
+namespace Eigen {
+
+/** \page TopicAssertions Assertions
+
+
+TODO: write this dox page!
+
+Is linked from the tutorial on matrix arithmetic.
+
+\sa Section \ref TopicPreprocessorDirectivesAssertions on page \ref TopicPreprocessorDirectives.
+
+*/
+}
diff --git a/doc/I11_Aliasing.dox b/doc/I11_Aliasing.dox
new file mode 100644
index 0000000..7c11199
--- /dev/null
+++ b/doc/I11_Aliasing.dox
@@ -0,0 +1,214 @@
+namespace Eigen {
+
+/** \page TopicAliasing Aliasing
+
+In Eigen, aliasing refers to an assignment statement in which the same matrix (or array or vector) appears on the
+left and on the right of the assignment operator. Statements like <tt>mat = 2 * mat;</tt> or <tt>mat =
+mat.transpose();</tt> exhibit aliasing. The aliasing in the first example is harmless, but the aliasing in the
+second example leads to unexpected results. This page explains what aliasing is, when it is harmful, and what
+to do about it.
+
+<b>Table of contents</b>
+  - \ref TopicAliasingExamples
+  - \ref TopicAliasingSolution
+  - \ref TopicAliasingCwise
+  - \ref TopicAliasingMatrixMult
+  - \ref TopicAliasingSummary
+
+
+\section TopicAliasingExamples Examples
+
+Here is a simple example exhibiting aliasing:
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_block.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_block.out
+</td></tr></table>
+
+The output is not what one would expect. The problem is the assignment
+\code
+mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2);
+\endcode
+This assignment exhibits aliasing: the coefficient \c mat(1,1) appears both in the block
+<tt>mat.bottomRightCorner(2,2)</tt> on the left-hand side of the assignment and the block
+<tt>mat.topLeftCorner(2,2)</tt> on the right-hand side. After the assignment, the (2,2) entry in the bottom
+right corner should have the value of \c mat(1,1) before the assignment, which is 5. However, the output shows
+that \c mat(2,2) is actually 1. The problem is that Eigen uses lazy evaluation (see 
+\ref TopicEigenExpressionTemplates) for <tt>mat.topLeftCorner(2,2)</tt>. The result is similar to
+\code
+mat(1,1) = mat(0,0);
+mat(1,2) = mat(0,1);
+mat(2,1) = mat(1,0);
+mat(2,2) = mat(1,1);
+\endcode
+Thus, \c mat(2,2) is assigned the \e new value of \c mat(1,1) instead of the old value. The next section
+explains how to solve this problem by calling \link DenseBase::eval() eval()\endlink.
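The effect is easy to reproduce without Eigen. The following sketch performs the same lazy, coefficient-by-coefficient block copy on a plain 3x3 array holding the values 1..9, so the middle coefficient is 5 before the copy:

```cpp
// Reproduces the aliasing effect with a plain array: copying the
// top-left 2x2 block onto the bottom-right 2x2 block coefficient by
// coefficient reads mat[1][1] after it has already been overwritten.
int bottom_right_after_aliased_copy()
{
  int mat[3][3] = { {1, 2, 3},
                    {4, 5, 6},
                    {7, 8, 9} };
  // mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2), evaluated lazily:
  for(int i = 0; i < 2; ++i)
    for(int j = 0; j < 2; ++j)
      mat[1 + i][1 + j] = mat[i][j];  // overwrites mat[1][1] before it is read
  return mat[2][2];  // 1, not the expected 5
}
```

The function returns 1: the bottom-right coefficient picks up the \e new value of mat[1][1], which was overwritten in an earlier iteration.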
+
+Note that if \c mat were bigger, then the blocks would not overlap, and there would be no aliasing
+problem. This means that in general aliasing cannot be detected at compile time. However, Eigen does detect
+some instances of aliasing, albeit at run time.  The following example exhibiting aliasing was mentioned in
+\ref TutorialMatrixArithmetic :
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include tut_arithmetic_transpose_aliasing.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_transpose_aliasing.out
+</td></tr></table>
+
+Again, the output shows the aliasing issue. However, by default Eigen uses a run-time assertion to detect this
+and exits with a message like
+
+\verbatim
+void Eigen::DenseBase<Derived>::checkTransposeAliasing(const OtherDerived&) const 
+[with OtherDerived = Eigen::Transpose<Eigen::Matrix<int, 2, 2, 0, 2, 2> >, Derived = Eigen::Matrix<int, 2, 2, 0, 2, 2>]: 
+Assertion `(!internal::check_transpose_aliasing_selector<Scalar,internal::blas_traits<Derived>::IsTransposed,OtherDerived>::run(internal::extract_data(derived()), other)) 
+&& "aliasing detected during tranposition, use transposeInPlace() or evaluate the rhs into a temporary using .eval()"' failed.
+\endverbatim
+
+You can turn off Eigen's run-time assertions, such as the one detecting this aliasing problem, by defining the
+EIGEN_NO_DEBUG macro. The above program was compiled with assertions turned off in order to illustrate the
+aliasing problem. See \ref TopicAssertions for more information about Eigen's run-time assertions.
+
+
+\section TopicAliasingSolution Resolving aliasing issues
+
+If you understand the cause of the aliasing issue, then it is obvious what must happen to solve it: Eigen has
+to evaluate the right-hand side fully into a temporary matrix/array and then assign it to the left-hand
+side. The function \link DenseBase::eval() eval() \endlink does precisely that.
+
+For example, here is the corrected version of the first example above:
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_block_correct.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_block_correct.out
+</td></tr></table>
+
+Now, \c mat(2,2) equals 5 after the assignment, as it should be.
+
+The same solution also works for the second example, with the transpose: simply replace the line 
+<tt>a = a.transpose();</tt> with <tt>a = a.transpose().eval();</tt>. However, in this common case there is a
+better solution. Eigen provides the special-purpose function 
+\link DenseBase::transposeInPlace() transposeInPlace() \endlink which replaces a matrix by its transpose. 
+This is shown below:
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include tut_arithmetic_transpose_inplace.cpp
+</td>
+<td>
+\verbinclude tut_arithmetic_transpose_inplace.out
+</td></tr></table>
+
+If an xxxInPlace() function is available, then it is best to use it, because it indicates more clearly what you
+are doing. This may also allow Eigen to optimize more aggressively. These are some of the xxxInPlace()
+functions provided: 
+
+<table class="manual">
+<tr><th>Original function</th><th>In-place function</th></tr>
+<tr> <td> MatrixBase::adjoint() </td> <td> MatrixBase::adjointInPlace() </td> </tr>
+<tr class="alt"> <td> DenseBase::reverse() </td> <td> DenseBase::reverseInPlace() </td> </tr>
+<tr> <td> LDLT::solve() </td> <td> LDLT::solveInPlace() </td> </tr>
+<tr class="alt"> <td> LLT::solve() </td> <td> LLT::solveInPlace() </td> </tr>
+<tr> <td> TriangularView::solve() </td> <td> TriangularView::solveInPlace() </td> </tr>
+<tr class="alt"> <td> DenseBase::transpose() </td> <td> DenseBase::transposeInPlace() </td> </tr>
+</table>
+
+
+\section TopicAliasingCwise Aliasing and component-wise operations
+
+As explained above, it may be dangerous if the same matrix or array occurs on both the left-hand side and the
+right-hand side of an assignment operator, and it is then often necessary to evaluate the right-hand side
+explicitly. However, applying component-wise operations (such as matrix addition, scalar multiplication and
+array multiplication) is safe. 
+
+The following example has only component-wise operations. Thus, there is no need for .eval() even though
+the same matrix appears on both sides of the assignments.
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_cwise.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_cwise.out
+</td></tr></table>
+
+In general, an assignment is safe if the (i,j) entry of the expression on the right-hand side depends only on
+the (i,j) entry of the matrix or array on the left-hand side and not on any other entries. In that case it is
+not necessary to evaluate the right-hand side explicitly.
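This criterion is why component-wise updates can always be evaluated in place. A minimal sketch with a plain array: entry \c i of the result depends only on entry \c i of the operand, so no value is read after being overwritten.

```cpp
// Alias-safe component-wise update: each output entry depends only on
// the same entry of the input, so in-place evaluation (as in
// mat = 2 * mat + SomeMatrix::Constant(...)) needs no temporary.
void cwise_update_in_place(float* a, int n)
{
  for(int i = 0; i < n; ++i)
    a[i] = 2.0f * a[i] + 1.0f;
}
```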
+
+
+\section TopicAliasingMatrixMult Aliasing and matrix multiplication
+
+Matrix multiplication is the only operation in Eigen that assumes aliasing by default. Thus, if \c matA is a
+matrix, then the statement <tt>matA = matA * matA;</tt> is safe. All other operations in Eigen assume that
+there are no aliasing problems, either because the result is assigned to a different matrix or because it is a
+component-wise operation.
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_mult1.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_mult1.out
+</td></tr></table>
+
+However, this comes at a price. When executing the expression <tt>matA = matA * matA</tt>, Eigen evaluates the
+product in a temporary matrix which is assigned to \c matA after the computation. This is fine. But Eigen does
+the same when the product is assigned to a different matrix (e.g., <tt>matB = matA * matA</tt>). In that case,
+it is more efficient to evaluate the product directly into \c matB instead of evaluating it first into a
+temporary matrix and copying that matrix to \c matB.
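The following sketch, using plain 2x2 arrays rather than Eigen's actual product kernel, shows the strategy Eigen applies by default: evaluate the product into a temporary, then copy the temporary into the destination, so that an aliased assignment like <tt>matA = matA * matA</tt> stays correct.

```cpp
#include <cstring>

// Naive matrix product out = a * a. Writing directly into `a` itself
// would be wrong: out[i][j] reads entries of `a` that an in-place
// evaluation would already have overwritten.
void square_into(const float a[2][2], float out[2][2])
{
  for(int i = 0; i < 2; ++i)
    for(int j = 0; j < 2; ++j)
    {
      out[i][j] = 0.0f;
      for(int k = 0; k < 2; ++k)
        out[i][j] += a[i][k] * a[k][j];
    }
}

// What Eigen does by default for matA = matA * matA: evaluate into a
// temporary, then copy the temporary back into the destination.
void square_in_place_with_temp(float a[2][2])
{
  float tmp[2][2];
  square_into(a, tmp);
  std::memcpy(a, tmp, sizeof tmp);
}
```

With <tt>noalias()</tt> you promise the destination is distinct, and Eigen can skip the temporary and write the product directly into it.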
+
+The user can indicate with the \link MatrixBase::noalias() noalias()\endlink function that there is no
+aliasing, as follows: <tt>matB.noalias() = matA * matA</tt>. This allows Eigen to evaluate the matrix product
+<tt>matA * matA</tt> directly into \c matB.
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_mult2.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_mult2.out
+</td></tr></table>
+
+Of course, you should not use \c noalias() when there is in fact aliasing taking place. If you do, then you
+may get wrong results:
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicAliasing_mult3.cpp
+</td>
+<td>
+\verbinclude TopicAliasing_mult3.out
+</td></tr></table>
+
+
+\section TopicAliasingSummary Summary
+
+Aliasing occurs when the same matrix or array coefficients appear both on the left- and the right-hand side of
+an assignment operator.
+ - Aliasing is harmless with coefficient-wise computations; this includes scalar multiplication and matrix or
+   array addition.
+ - When you multiply two matrices, Eigen assumes that aliasing occurs. If you know that there is no aliasing,
+   then you can use \link MatrixBase::noalias() noalias()\endlink.
+ - In all other situations, Eigen assumes that there is no aliasing issue and thus gives the wrong result if
+   aliasing does in fact occur. To prevent this, you have to use \link DenseBase::eval() eval() \endlink or
+   one of the xxxInPlace() functions.
+
+*/
+}
diff --git a/doc/I12_ClassHierarchy.dox b/doc/I12_ClassHierarchy.dox
new file mode 100644
index 0000000..700d018
--- /dev/null
+++ b/doc/I12_ClassHierarchy.dox
@@ -0,0 +1,133 @@
+namespace Eigen {
+
+/** \page TopicClassHierarchy The class hierarchy
+
+This page explains the design of the core classes in Eigen's class hierarchy and how they fit together. Casual
+users probably need not concern themselves with these details, but it may be useful for both advanced users
+and Eigen developers.
+
+<b>Table of contents</b>
+  - \ref TopicClassHierarchyPrinciples
+  - \ref TopicClassHierarchyCoreClasses
+  - \ref TopicClassHierarchyBaseClasses
+  - \ref TopicClassHierarchyInheritanceDiagrams 
+
+
+\section TopicClassHierarchyPrinciples Principles
+
+Eigen's class hierarchy is designed so that virtual functions are avoided where their overhead would
+significantly impair performance. Instead, Eigen achieves polymorphism with the Curiously Recurring Template
+Pattern (CRTP). In this pattern, the base class (for instance, \c MatrixBase) is in fact a template class, and
+the derived class (for instance, \c Matrix) inherits the base class with the derived class itself as a
+template argument (in this case, \c Matrix inherits from \c MatrixBase&lt;Matrix&gt;). This allows Eigen to
+resolve the polymorphic function calls at compile time.
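Here is a minimal, self-contained sketch of the CRTP as Eigen uses it; the names \c XprBase and \c PlainMatrix are illustrative, not Eigen's real interface:

```cpp
#include <string>

// The base class is a template taking the derived type as a parameter,
// so "polymorphic" calls can be resolved at compile time, without
// virtual functions.
template<typename Derived>
struct XprBase
{
  // The equivalent of MatrixBase::derived(): a compile-time downcast.
  Derived& derived() { return *static_cast<Derived*>(this); }
  const Derived& derived() const { return *static_cast<const Derived*>(this); }

  // A call dispatched statically to the derived class:
  std::string describe() const { return "expression: " + derived().name(); }
};

// The derived class inherits the base with itself as template argument,
// just like Matrix inherits MatrixBase<Matrix>.
struct PlainMatrix : XprBase<PlainMatrix>
{
  std::string name() const { return "PlainMatrix"; }
};
```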
+
+In addition, the design avoids multiple inheritance. One reason for this is that in our experience, some
+compilers (like MSVC) fail to perform empty base class optimization, which is crucial for our fixed-size
+types.
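The following sketch shows what is at stake. On a compiler that performs the empty base class optimization (as mainstream compilers do for single inheritance with standard-layout classes), an empty base contributes no storage, so a 4-float object stays exactly 16 bytes, which is what the fixed-size vectorization relies on:

```cpp
// An empty base class, like the functor and helper bases Eigen uses.
struct EmptyBase {};

// With the empty base class optimization applied, the base adds no
// storage and the object is exactly sizeof(float) * 4 == 16 bytes.
// A compiler that fails to apply the optimization would inflate the
// object and break the 16-byte packet-size assumption.
struct Vec4WithBase : EmptyBase { float data[4]; };

static_assert(sizeof(Vec4WithBase) == sizeof(float) * 4,
              "empty base contributes no storage when EBO is applied");
```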
+
+
+\section TopicClassHierarchyCoreClasses The core classes
+
+These are the classes that you need to know about if you want to write functions that accept or return Eigen
+objects.
+
+  - Matrix means plain dense matrix. If \c m is a \c %Matrix, then, for instance, \c m+m is no longer a 
+    \c %Matrix, it is a "matrix expression".
+  - MatrixBase means dense matrix expression. This means that a \c %MatrixBase is something that can be
+    added, matrix-multiplied, LU-decomposed, QR-decomposed... All matrix expression classes, including 
+    \c %Matrix itself, inherit \c %MatrixBase.
+  - Array means plain dense array. If \c x is an \c %Array, then, for instance, \c x+x is no longer an 
+    \c %Array, it is an "array expression".
+  - ArrayBase means dense array expression. This means that an \c %ArrayBase is something that can be
+    added, array-multiplied, and on which you can perform all sorts of array operations... All array
+    expression classes, including \c %Array itself, inherit \c %ArrayBase.
+  - DenseBase means dense (matrix or array) expression. Both \c %ArrayBase and \c %MatrixBase inherit
+    \c %DenseBase. \c %DenseBase is where all the methods go that apply to dense expressions regardless of
+    whether they are matrix or array expressions. For example, the \link DenseBase::block() block(...) \endlink
+    methods are in \c %DenseBase.
+
+\section TopicClassHierarchyBaseClasses Base classes
+
+These classes serve as base classes for the five core classes mentioned above. They are more internal and so
+less interesting for users of the Eigen library.
+
+  - PlainObjectBase means dense (matrix or array) plain object, i.e. something that stores its own dense
+    array of coefficients. This is where, for instance, the \link PlainObjectBase::resize() resize() \endlink
+    methods go. \c %PlainObjectBase is inherited by \c %Matrix and by \c %Array. But above, we said that 
+    \c %Matrix inherits \c %MatrixBase and \c %Array inherits \c %ArrayBase. So does that mean multiple
+    inheritance? No, because \c %PlainObjectBase \e itself inherits \c %MatrixBase or \c %ArrayBase depending
+    on whether we are in the matrix or array case. When we said above that \c %Matrix inherited 
+    \c %MatrixBase, we omitted to say it does so indirectly via \c %PlainObjectBase. Same for \c %Array.
+  - DenseCoeffsBase means something that has dense coefficient accessors. It is a base class for
+    \c %DenseBase. The reason for \c %DenseCoeffsBase to exist is that the set of available coefficient
+    accessors is very different depending on whether a dense expression has direct memory access or not (the
+    \c DirectAccessBit flag). For example, if \c x is a plain matrix, then \c x has direct access, and 
+    \c x.transpose() and \c x.block(...) also have direct access, because their coefficients can be read right
+    off memory, but for example, \c x+x does not have direct memory access, because obtaining any of its
+coefficients requires a computation (an addition); it can't just be read off memory.
+  - EigenBase means anything that can be evaluated into a plain dense matrix or array (even if that would
+    be a bad idea). \c %EigenBase is really the absolute base class for anything that remotely looks like a
+    matrix or array. It is a base class for \c %DenseCoeffsBase, so it sits below all our dense class
+    hierarchy, but it is not limited to dense expressions. For example, \c %EigenBase is also inherited by
+    diagonal matrices, sparse matrices, etc...
+
+
+\section TopicClassHierarchyInheritanceDiagrams Inheritance diagrams
+
+The inheritance diagram for Matrix looks as follows:
+
+<pre>
+EigenBase&lt;%Matrix&gt;
+  <-- DenseCoeffsBase&lt;%Matrix&gt;    (direct access case)
+    <-- DenseBase&lt;%Matrix&gt;
+      <-- MatrixBase&lt;%Matrix&gt;
+        <-- PlainObjectBase&lt;%Matrix&gt;    (matrix case)
+          <-- Matrix
+</pre>
+
+The inheritance diagram for Array looks as follows:
+
+<pre>
+EigenBase&lt;%Array&gt;
+  <-- DenseCoeffsBase&lt;%Array&gt;    (direct access case)
+    <-- DenseBase&lt;%Array&gt;
+      <-- ArrayBase&lt;%Array&gt;
+        <-- PlainObjectBase&lt;%Array&gt;    (array case)
+          <-- Array
+</pre>
+
+The inheritance diagram for some other matrix expression class, here denoted by \c SomeMatrixXpr, looks as
+follows:
+
+<pre>
+EigenBase&lt;SomeMatrixXpr&gt;
+  <-- DenseCoeffsBase&lt;SomeMatrixXpr&gt;    (direct access or no direct access case)
+    <-- DenseBase&lt;SomeMatrixXpr&gt;
+      <-- MatrixBase&lt;SomeMatrixXpr&gt;
+        <-- SomeMatrixXpr
+</pre>
+
+The inheritance diagram for some other array expression class, here denoted by \c SomeArrayXpr, looks as
+follows:
+
+<pre>
+EigenBase&lt;SomeArrayXpr&gt;
+  <-- DenseCoeffsBase&lt;SomeArrayXpr&gt;    (direct access or no direct access case)
+    <-- DenseBase&lt;SomeArrayXpr&gt;
+      <-- ArrayBase&lt;SomeArrayXpr&gt;
+        <-- SomeArrayXpr
+</pre>
+
+Finally, consider an example of something that is not a dense expression, for instance a diagonal matrix. The
+corresponding inheritance diagram is:
+
+<pre>
+EigenBase&lt;%DiagonalMatrix&gt;
+  <-- DiagonalBase&lt;%DiagonalMatrix&gt;
+    <-- DiagonalMatrix
+</pre>
+
+
+*/
+}
diff --git a/doc/I13_FunctionsTakingEigenTypes.dox b/doc/I13_FunctionsTakingEigenTypes.dox
new file mode 100644
index 0000000..f9e6faf
--- /dev/null
+++ b/doc/I13_FunctionsTakingEigenTypes.dox
@@ -0,0 +1,188 @@
+namespace Eigen {
+
+/** \page TopicFunctionTakingEigenTypes Writing Functions Taking Eigen Types as Parameters
+
+Eigen's use of expression templates results in potentially every expression being of a different type. If you pass such an expression to a function taking a parameter of type Matrix, your expression will implicitly be evaluated into a temporary Matrix, which will then be passed to the function. This means that you lose the benefit of expression templates. Concretely, this has two drawbacks:
+ \li The evaluation into a temporary may be useless and inefficient;
+ \li This only allows the function to read from the expression, not to write to it.
+
+Fortunately, this myriad of expression types has one thing in common: they all inherit a few common, templated base classes. By letting your function take templated parameters of these base types, you can let them play nicely with Eigen's expression templates.
+
+<b>Table of contents</b>
+  - \ref TopicFirstExamples
+  - \ref TopicPlainFunctionsWorking
+  - \ref TopicPlainFunctionsFailing
+  - \ref TopicResizingInGenericImplementations
+  - \ref TopicSummary
+
+\section TopicFirstExamples Some First Examples
+
+This section will provide simple examples for different types of objects Eigen is offering. Before starting with the actual examples, we need to recapitulate which base objects we can work with (see also \ref TopicClassHierarchy).
+
+ \li MatrixBase: The common base class for all dense matrix expressions (as opposed to array expressions, as opposed to sparse and special matrix classes). Use it in functions that are meant to work only on dense matrices.
+ \li ArrayBase: The common base class for all dense array expressions (as opposed to matrix expressions, etc). Use it in functions that are meant to work only on arrays.
+ \li DenseBase: The common base class for all dense expressions, that is, the base class for both \c MatrixBase and \c ArrayBase. It can be used in functions that are meant to work on both matrices and arrays.
+ \li EigenBase: The base class unifying all types of objects that can be evaluated into dense matrices or arrays, for example special matrix classes such as diagonal matrices, permutation matrices, etc. It can be used in functions that are meant to work on any such general type.
+
+<b> %EigenBase Example </b><br/><br/>
+Prints the dimensions of the most generic object present in Eigen. It could be any matrix expression, any dense or sparse matrix, or any array.
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include function_taking_eigenbase.cpp
+</td>
+<td>
+\verbinclude function_taking_eigenbase.out
+</td></tr></table>
+<b> %DenseBase Example </b><br/><br/>
+Prints a sub-block of the dense expression. Accepts any dense matrix or array expression, but no sparse objects and no special matrix classes such as DiagonalMatrix.
+\code
+template <typename Derived>
+void print_block(const DenseBase<Derived>& b, int x, int y, int r, int c)
+{
+  std::cout << "block: " << b.block(x,y,r,c) << std::endl;
+}
+\endcode
+<b> %ArrayBase Example </b><br/><br/>
+Prints the maximum coefficient of the array or array-expression.
+\code
+template <typename Derived>
+void print_max_coeff(const ArrayBase<Derived> &a)
+{
+  std::cout << "max: " << a.maxCoeff() << std::endl;
+}
+\endcode
+<b> %MatrixBase Example </b><br/><br/>
+Prints the inverse condition number of the given matrix or matrix-expression.
+\code
+template <typename Derived>
+void print_inv_cond(const MatrixBase<Derived>& a)
+{
+  const typename JacobiSVD<typename Derived::PlainObject>::SingularValuesType&
+    sing_vals = a.jacobiSvd().singularValues();
+  std::cout << "inv cond: " << sing_vals(sing_vals.size()-1) / sing_vals(0) << std::endl;
+}
+\endcode
+<b> Multiple templated arguments example </b><br/><br/>
+Calculate the Euclidean distance between two points.
+\code
+template <typename DerivedA,typename DerivedB>
+typename DerivedA::Scalar squaredist(const MatrixBase<DerivedA>& p1,const MatrixBase<DerivedB>& p2)
+{
+  return (p1-p2).squaredNorm();
+}
+\endcode
+Notice that we used two template parameters, one per argument. This permits the function to handle inputs of different types, e.g.,
+\code
+squaredist(v1,2*v2)
+\endcode
+where the first argument \c v1 is a vector and the second argument \c 2*v2 is an expression.
+<br/><br/>
+
+These examples are just intended to give the reader a first impression of how functions can be written which take a plain and constant Matrix or Array argument. They are also intended to give the reader an idea of which base classes are the best candidates for such functions. In the next section we will look in more detail at an example and the different ways it can be implemented, while discussing each implementation's problems and advantages. For the discussion below, Matrix and Array as well as MatrixBase and ArrayBase can be exchanged and all arguments still hold.
+
+\section TopicPlainFunctionsWorking In which cases do functions taking plain Matrix or Array arguments work?
+
+Let's assume one wants to write a function computing the covariance matrix of two input matrices where each row is an observation. The implementation of this function might look like this
+\code
+MatrixXf cov(const MatrixXf& x, const MatrixXf& y)
+{
+  const float num_observations = static_cast<float>(x.rows());
+  const RowVectorXf x_mean = x.colwise().sum() / num_observations;
+  const RowVectorXf y_mean = y.colwise().sum() / num_observations;
+  return (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
+}
+\endcode
+and contrary to what one might think at first, this implementation is fine unless you require a generic implementation that also works with double matrices, or unless you care about the temporary objects involved. Why is that the case? Where are temporaries involved? How can code like the one given below compile?
+\code
+MatrixXf x,y,z;
+MatrixXf C = cov(x,y+z);
+\endcode
+In this special case, the example is fine and will work because both parameters are declared as \e const references. The compiler creates a temporary and evaluates the expression y+z into this temporary. Once the function returns, the temporary is released and the result is assigned to C.
+
+\b Note: Functions taking \e const references to Matrix (or Array) can process expressions at the cost of temporaries.
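The underlying language rule can be illustrated without Eigen: a temporary can bind to a \e const reference parameter (and lives until the call returns), while it cannot bind to a non-const reference. A sketch:

```cpp
#include <string>

// A const reference parameter accepts temporaries, just like
// cov(const MatrixXf& x, const MatrixXf& y) accepts the temporary
// produced by evaluating y+z.
std::string greet(const std::string& s)
{ return "hello, " + s; }

// A non-const reference parameter would NOT accept a temporary;
// uncommenting this overload and calling it with the expression below
// would be a compile error:
// std::string shout(std::string& s);
```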
+
+\section TopicPlainFunctionsFailing In which cases do functions taking a plain Matrix or Array argument fail?
+
+Here, we consider a slightly modified version of the function given above. This time, we do not want to return the result but pass an additional non-const parameter which allows us to store the result. A first naive implementation might look as follows.
+\code
+// Note: This code is flawed!
+void cov(const MatrixXf& x, const MatrixXf& y, MatrixXf& C)
+{
+  const float num_observations = static_cast<float>(x.rows());
+  const RowVectorXf x_mean = x.colwise().sum() / num_observations;
+  const RowVectorXf y_mean = y.colwise().sum() / num_observations;
+  C = (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
+}
+\endcode
+When trying to execute the following code
+\code
+MatrixXf C = MatrixXf::Zero(3,6);
+cov(x,y, C.block(0,0,3,3));
+\endcode
+the compiler will emit an error, because it is not possible to convert the expression returned by \c MatrixXf::block() into a non-const \c MatrixXf&. This happens because the compiler wants to protect you from writing your result to a temporary object. In this special case this protection is not intended -- we want to write to a temporary object. So how can we overcome this problem?
+
+The solution which is preferred at the moment is based on a little \em hack. One needs to pass a const reference to the matrix and internally the constness needs to be cast away. The correct implementation for C++98 compliant compilers would be
+\code
+template <typename Derived, typename OtherDerived>
+void cov(const MatrixBase<Derived>& x, const MatrixBase<Derived>& y, MatrixBase<OtherDerived> const & C)
+{
+  typedef typename Derived::Scalar Scalar;
+  typedef typename internal::plain_row_type<Derived>::type RowVectorType;
+
+  const Scalar num_observations = static_cast<Scalar>(x.rows());
+
+  const RowVectorType x_mean = x.colwise().sum() / num_observations;
+  const RowVectorType y_mean = y.colwise().sum() / num_observations;
+
+  const_cast< MatrixBase<OtherDerived>& >(C) =
+    (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
+}
+\endcode
+The implementation above now not only works with temporary expressions but also allows the function to be used with matrices of arbitrary floating point scalar types.
+
+\b Note: The const cast hack will only work with templated functions. It will not work with the MatrixXf implementation because it is not possible to cast a Block expression to a Matrix reference!
+
+
+
+\section TopicResizingInGenericImplementations How to resize matrices in generic implementations?
+
+One might think we are done now, right? This is not completely true because in order for our covariance function to be generically applicable, we want the following code to work
+\code
+MatrixXf x = MatrixXf::Random(100,3);
+MatrixXf y = MatrixXf::Random(100,3);
+MatrixXf C;
+cov(x, y, C);
+\endcode
+This is no longer the case when using an implementation taking MatrixBase as a parameter. In general, Eigen supports automatic resizing but it is not possible to do so on expressions. Why should resizing of a matrix Block be allowed? It is a reference to a sub-matrix and we definitely do not want to resize that. So how can we incorporate resizing if we cannot resize on MatrixBase? The solution is to resize the derived object as in this implementation.
+\code
+template <typename Derived, typename OtherDerived>
+void cov(const MatrixBase<Derived>& x, const MatrixBase<Derived>& y, MatrixBase<OtherDerived> const & C_)
+{
+  typedef typename Derived::Scalar Scalar;
+  typedef typename internal::plain_row_type<Derived>::type RowVectorType;
+
+  const Scalar num_observations = static_cast<Scalar>(x.rows());
+
+  const RowVectorType x_mean = x.colwise().sum() / num_observations;
+  const RowVectorType y_mean = y.colwise().sum() / num_observations;
+
+  MatrixBase<OtherDerived>& C = const_cast< MatrixBase<OtherDerived>& >(C_);
+  
+  C.derived().resize(x.cols(),x.cols()); // resize the derived object
+  C = (x.rowwise() - x_mean).transpose() * (y.rowwise() - y_mean) / num_observations;
+}
+\endcode
+This implementation now works for parameters that are expressions as well as for matrices that have the wrong size. Resizing an expression does no harm in this case as long as it does not actually require resizing. That means, passing an expression with the wrong dimensions will result in a run-time error (in debug mode only) while passing an expression of the correct size will just work fine.
+
+\b Note: In the above discussion the terms Matrix and Array and MatrixBase and ArrayBase can be exchanged and all arguments still hold.
+
+\section TopicSummary Summary
+
+  - To summarize, the implementation of functions taking non-writable (const referenced) objects is not a big issue and does not lead to problematic situations in terms of compiling and running your program. However, a naive implementation is likely to introduce unnecessary temporary objects in your code. In order to avoid evaluating parameters into temporaries, pass them as (const) references to MatrixBase or ArrayBase (so templatize your function).
+
+  - Functions taking writable (non-const) parameters must take const references and cast away constness within the function body.
+
+  - Functions that take as parameters MatrixBase (or ArrayBase) objects, and potentially need to resize them (in the case where they are resizable), must call resize() on the derived class, as returned by derived().
+*/
+}
diff --git a/doc/I14_PreprocessorDirectives.dox b/doc/I14_PreprocessorDirectives.dox
new file mode 100644
index 0000000..f29f072
--- /dev/null
+++ b/doc/I14_PreprocessorDirectives.dox
@@ -0,0 +1,112 @@
+namespace Eigen {
+
+/** \page TopicPreprocessorDirectives Preprocessor directives
+
+You can control some aspects of %Eigen by defining the preprocessor tokens using \c \#define. These macros
+should be defined before any %Eigen headers are included. Often they are best set in the project options.
+
+This page lists the preprocessor tokens recognised by %Eigen.
+
+<b>Table of contents</b>
+  - \ref TopicPreprocessorDirectivesMajor
+  - \ref TopicPreprocessorDirectivesAssertions
+  - \ref TopicPreprocessorDirectivesPerformance
+  - \ref TopicPreprocessorDirectivesPlugins
+  - \ref TopicPreprocessorDirectivesDevelopers
+
+
+\section TopicPreprocessorDirectivesMajor Macros with major effects
+
+These macros have a major effect and typically break the API (Application Programming Interface) and/or the
+ABI (Application Binary Interface). This can be rather dangerous: if parts of your program are compiled with
+one option, and other parts (or libraries that you use) are compiled with another option, your program may
+fail to link or exhibit subtle bugs. Nevertheless, these options can be useful for people who know what they
+are doing.
+
+ - \b EIGEN2_SUPPORT - if defined, enables the Eigen2 compatibility mode. This is meant to ease the transition
+   from Eigen2 to Eigen3 (see \ref Eigen2ToEigen3). Not defined by default.
+ - \b EIGEN2_SUPPORT_STAGEnn_xxx (for various values of nn and xxx) - staged migration path from Eigen2 to
+   Eigen3; see \ref Eigen2SupportModes.
+ - \b EIGEN_DEFAULT_DENSE_INDEX_TYPE - the type for column and row indices in matrices, vectors and arrays
+   (DenseBase::Index). Set to \c std::ptrdiff_t by default.
+ - \b EIGEN_DEFAULT_IO_FORMAT - the IOFormat to use when printing a matrix if no #IOFormat is specified.
+   Defaults to the #IOFormat constructed by the default constructor IOFormat().
+ - \b EIGEN_INITIALIZE_MATRICES_BY_ZERO - if defined, all entries of newly constructed matrices and arrays are
+   initialized to zero, as are new entries in matrices and arrays after resizing. Not defined by default.
+ - \b EIGEN_NO_AUTOMATIC_RESIZING - if defined, the matrices (or arrays) on both sides of an assignment 
+   <tt>a = b</tt> have to be of the same size; otherwise, %Eigen automatically resizes \c a so that it is of
+   the correct size. Not defined by default.
+
+
+\section TopicPreprocessorDirectivesAssertions Assertions
+
+The %Eigen library contains many assertions to guard against programming errors, both at compile time and at
+run time. However, these assertions do cost time and can thus be turned off.
+
+ - \b EIGEN_NO_DEBUG - disables %Eigen's assertions if defined. Not defined by default, unless the
+   \c NDEBUG macro is defined (this is a standard C++ macro which disables all asserts). 
+ - \b EIGEN_NO_STATIC_ASSERT - if defined, compile-time static assertions are replaced by runtime assertions; 
+   this saves compilation time. Not defined by default.
+ - \b eigen_assert - macro with one argument that is used inside %Eigen for assertions. By default, it is
+   basically defined to be \c assert, which aborts the program if the assertion is violated. Redefine this
+   macro if you want to do something else, like throwing an exception.
+ - \b EIGEN_MPL2_ONLY - disable non MPL2 compatible features, or in other words disable the features which
+   are still under the LGPL.
+
+
+\section TopicPreprocessorDirectivesPerformance Alignment, vectorization and performance tweaking
+
+ - \b EIGEN_DONT_ALIGN - disables alignment completely. %Eigen will not try to align its objects and does not
+   expect that any objects passed to it are aligned. This will turn off vectorization. Not defined by default.
+ - \b EIGEN_DONT_ALIGN_STATICALLY - disables alignment of arrays on the stack. Not defined by default, unless
+   \c EIGEN_DONT_ALIGN is defined.
+ - \b EIGEN_DONT_VECTORIZE - disables explicit vectorization when defined. Not defined by default, unless
+   alignment is disabled by %Eigen's platform test or by the user defining \c EIGEN_DONT_ALIGN.
+ - \b EIGEN_FAST_MATH - enables some optimizations which might affect the accuracy of the result. The only
+   optimization this currently includes is single precision sin() and cos() in the presence of SSE
+   vectorization. Defined by default. 
+ - \b EIGEN_UNROLLING_LIMIT - defines the size of a loop to enable meta unrolling. Set it to zero to disable
+   unrolling. The size of a loop here is expressed in %Eigen's own notion of "number of FLOPS", it does not
+   correspond to the number of iterations or the number of instructions. The default value is 100.
+
+
+\section TopicPreprocessorDirectivesPlugins Plugins
+
+It is possible to add new methods to many fundamental classes in %Eigen by writing a plugin. As explained in
+the section \ref ExtendingMatrixBase, the plugin is specified by defining a \c EIGEN_xxx_PLUGIN macro. The
+following macros are supported; none of them are defined by default.
+
+ - \b EIGEN_ARRAY_PLUGIN - filename of plugin for extending the Array class.
+ - \b EIGEN_ARRAYBASE_PLUGIN - filename of plugin for extending the ArrayBase class.
+ - \b EIGEN_CWISE_PLUGIN - filename of plugin for extending the Cwise class.
+ - \b EIGEN_DENSEBASE_PLUGIN - filename of plugin for extending the DenseBase class.
+ - \b EIGEN_DYNAMICSPARSEMATRIX_PLUGIN - filename of plugin for extending the DynamicSparseMatrix class.
+ - \b EIGEN_MATRIX_PLUGIN - filename of plugin for extending the Matrix class.
+ - \b EIGEN_MATRIXBASE_PLUGIN - filename of plugin for extending the MatrixBase class.
+ - \b EIGEN_PLAINOBJECTBASE_PLUGIN - filename of plugin for extending the PlainObjectBase class.
+ - \b EIGEN_QUATERNIONBASE_PLUGIN - filename of plugin for extending the QuaternionBase class.
+ - \b EIGEN_SPARSEMATRIX_PLUGIN - filename of plugin for extending the SparseMatrix class.
+ - \b EIGEN_SPARSEMATRIXBASE_PLUGIN - filename of plugin for extending the SparseMatrixBase class.
+ - \b EIGEN_SPARSEVECTOR_PLUGIN - filename of plugin for extending the SparseVector class.
+ - \b EIGEN_TRANSFORM_PLUGIN - filename of plugin for extending the Transform class.
+ - \b EIGEN_FUNCTORS_PLUGIN - filename of plugin for adding new functors and specializations of functor_traits.
+
+
+\section TopicPreprocessorDirectivesDevelopers Macros for Eigen developers
+
+These macros are mainly meant for people developing %Eigen and for testing purposes. Although they might be useful for power users and the curious for debugging and testing purposes, they \b should \b not \b be \b used by real-world code.
+
+ - \b EIGEN_DEFAULT_TO_ROW_MAJOR - when defined, the default storage order for matrices becomes row-major
+   instead of column-major. Not defined by default.
+ - \b EIGEN_INTERNAL_DEBUGGING - if defined, enables assertions in %Eigen's internal routines. This is useful
+   for debugging %Eigen itself. Not defined by default.
+ - \b EIGEN_NO_MALLOC - if defined, any request from inside %Eigen to allocate memory from the heap
+   results in an assertion failure. This is useful to check that some routine does not allocate memory
+   dynamically. Not defined by default.
+ - \b EIGEN_RUNTIME_NO_MALLOC - if defined, a new switch is introduced which can be turned on and off by
+   calling <tt>set_is_malloc_allowed(bool)</tt>. If malloc is not allowed and %Eigen tries to allocate memory
+   dynamically anyway, an assertion failure results. Not defined by default.
+
+*/
+
+}
diff --git a/doc/I15_StorageOrders.dox b/doc/I15_StorageOrders.dox
new file mode 100644
index 0000000..7418912
--- /dev/null
+++ b/doc/I15_StorageOrders.dox
@@ -0,0 +1,89 @@
+namespace Eigen {
+
+/** \page TopicStorageOrders Storage orders
+
+There are two different storage orders for matrices and two-dimensional arrays: column-major and row-major.
+This page explains these storage orders and how to specify which one should be used.
+
+<b>Table of contents</b>
+  - \ref TopicStorageOrdersIntro
+  - \ref TopicStorageOrdersInEigen
+  - \ref TopicStorageOrdersWhich
+
+
+\section TopicStorageOrdersIntro Column-major and row-major storage
+
+The entries of a matrix form a two-dimensional grid. However, when the matrix is stored in memory, the entries
+have to somehow be laid out linearly. There are two main ways to do this, by row and by column.
+
+We say that a matrix is stored in \b row-major order if it is stored row by row. The entire first row is
+stored first, followed by the entire second row, and so on. Consider for example the matrix
+
+\f[
+A = \begin{bmatrix}
+8 & 2 & 2 & 9 \\
+9 & 1 & 4 & 4 \\
+3 & 5 & 4 & 5
+\end{bmatrix}.
+\f]
+
+If this matrix is stored in row-major order, then the entries are laid out in memory as follows:
+
+\code 8 2 2 9 9 1 4 4 3 5 4 5 \endcode
+
+On the other hand, a matrix is stored in \b column-major order if it is stored column by column, starting with
+the entire first column, followed by the entire second column, and so on. If the above matrix is stored in
+column-major order, it is laid out as follows:
+
+\code 8 9 3 2 1 5 2 4 4 9 4 5 \endcode
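The two orders correspond to two different linear index formulas, \c r*cols+c for row-major and \c c*rows+r for column-major. A brief sketch using the matrix above (the helper names are ours, not part of %Eigen):

```cpp
// Linear index of entry (r, c) in a rows x cols matrix, per storage order.
inline int rowMajorIndex(int r, int c, int cols) { return r * cols + c; }
inline int colMajorIndex(int r, int c, int rows) { return c * rows + r; }

// Flattens A (3x4, as in the example above) into 'out' in the given order.
inline void flatten(const int A[3][4], int out[12], bool rowMajor)
{
  for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 4; ++c)
      out[rowMajor ? rowMajorIndex(r, c, 4) : colMajorIndex(r, c, 3)] = A[r][c];
}
```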
+
+This example is illustrated by the following Eigen code. It uses the PlainObjectBase::data() function, which
+returns a pointer to the memory location of the first entry of the matrix.
+
+<table class="example">
+<tr><th>Example</th><th>Output</th></tr>
+<tr><td>
+\include TopicStorageOrders_example.cpp
+</td>
+<td>
+\verbinclude TopicStorageOrders_example.out
+</td></tr></table>
+
+
+\section TopicStorageOrdersInEigen Storage orders in Eigen
+
+The storage order of a matrix or a two-dimensional array can be set by specifying the \c Options template
+parameter for Matrix or Array. As \ref TutorialMatrixClass explains, the %Matrix class template has six
+template parameters, of which three are compulsory (\c Scalar, \c RowsAtCompileTime and \c ColsAtCompileTime)
+and three are optional (\c Options, \c MaxRowsAtCompileTime and \c MaxColsAtCompileTime). If the \c Options
+parameter is set to \c RowMajor, then the matrix or array is stored in row-major order; if it is set to 
+\c ColMajor, then it is stored in column-major order. This mechanism is used in the above Eigen program to
+specify the storage order.
+
+If the storage order is not specified, then Eigen defaults to storing the entries in column-major order. This is also
+the case if one of the convenience typedefs (\c Matrix3f, \c ArrayXXd, etc.) is used.
+
+Matrices and arrays using one storage order can be assigned to matrices and arrays using the other storage
+order, as happens in the above program when \c Arowmajor is initialized using \c Acolmajor. Eigen will reorder
+the entries automatically. More generally, row-major and column-major matrices can be mixed in an expression
+at will.
+
+
+\section TopicStorageOrdersWhich Which storage order to choose?
+
+So, which storage order should you use in your program? There is no simple answer to this question; it depends
+on your application. Here are some points to keep in mind:
+
+  - Your users may expect you to use a specific storage order. Alternatively, you may use other libraries than
+    Eigen, and these other libraries may expect a certain storage order. In these cases it may be easiest and
+    fastest to use this storage order in your whole program.
+  - Algorithms that traverse a matrix row by row will go faster when the matrix is stored in row-major order
+    because of better data locality. Similarly, column-by-column traversal is faster for column-major
+    matrices. It may be worthwhile to experiment a bit to find out what is faster for your particular
+    application.
+  - The default in Eigen is column-major. Naturally, most of the development and testing of the Eigen library
+    is thus done with column-major matrices. This means that, even though we aim to support column-major and
+    row-major storage orders transparently, the Eigen library may well work best with column-major matrices.
+
+*/
+}
diff --git a/doc/I16_TemplateKeyword.dox b/doc/I16_TemplateKeyword.dox
new file mode 100644
index 0000000..3245323
--- /dev/null
+++ b/doc/I16_TemplateKeyword.dox
@@ -0,0 +1,136 @@
+namespace Eigen {
+
+/** \page TopicTemplateKeyword The template and typename keywords in C++
+
+There are two uses for the \c template and \c typename keywords in C++. One of them is fairly well known
+amongst programmers: to define templates. The other use is more obscure: to specify that an expression refers
+to a template function or a type. This regularly trips up programmers that use the %Eigen library, often
+leading to error messages from the compiler that are difficult to understand.
+
+<b>Table of contents</b>
+  - \ref TopicTemplateKeywordToDefineTemplates
+  - \ref TopicTemplateKeywordExample
+  - \ref TopicTemplateKeywordExplanation
+  - \ref TopicTemplateKeywordResources
+
+
+\section TopicTemplateKeywordToDefineTemplates Using the template and typename keywords to define templates
+
+The \c template and \c typename keywords are routinely used to define templates. This is not the topic of this
+page as we assume that the reader is aware of this (otherwise consult a C++ book). The following example
+should illustrate this use of the \c template keyword.
+
+\code
+template <typename T>
+bool isPositive(T x)
+{
+    return x > 0;
+}
+\endcode
+
+We could just as well have written <tt>template &lt;class T&gt;</tt>; the keywords \c typename and \c class have the
+same meaning in this context.
+
+
+\section TopicTemplateKeywordExample An example showing the second use of the template keyword
+
+Let us illustrate the second use of the \c template keyword with an example. Suppose we want to write a
+function which copies all entries in the upper triangular part of a matrix into another matrix, while keeping
+the lower triangular part unchanged. A straightforward implementation would be as follows:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include TemplateKeyword_simple.cpp
+</td>
+<td>
+\verbinclude TemplateKeyword_simple.out
+</td></tr></table>
+
+That works fine, but it is not very flexible. First, it only works with dynamic-size matrices of
+single-precision floats; the function \c copyUpperTriangularPart() does not accept static-size matrices or
+matrices with double-precision numbers. Second, if you use an expression such as
+<tt>mat.topLeftCorner(3,3)</tt> as the parameter \c src, then this is copied into a temporary variable of type
+MatrixXf; this copy can be avoided.
+
+As explained in \ref TopicFunctionTakingEigenTypes, both issues can be resolved by making 
+\c copyUpperTriangularPart() accept any object of type MatrixBase. This leads to the following code:
+
+<table class="example">
+<tr><th>Example:</th><th>Output:</th></tr>
+<tr><td>
+\include TemplateKeyword_flexible.cpp
+</td>
+<td>
+\verbinclude TemplateKeyword_flexible.out
+</td></tr></table>
+
+The one line in the body of the function \c copyUpperTriangularPart() shows the second, more obscure use of
+the \c template keyword in C++. Even though it may look strange, the \c template keyword is necessary
+according to the standard. Without it, the compiler may reject the code with an error message like "no match
+for operator<".
+
+
+\section TopicTemplateKeywordExplanation Explanation
+
+The reason that the \c template keyword is necessary in the last example has to do with the rules for how
+templates are supposed to be compiled in C++. The compiler has to check the code for correct syntax at the
+point where the template is defined, without knowing the actual value of the template arguments (\c Derived1
+and \c Derived2 in the example). That means that the compiler cannot know that <tt>dst.triangularPart</tt> is
+a member template and that the following &lt; symbol is part of the delimiter for the template
+parameter. Another possibility would be that <tt>dst.triangularPart</tt> is a member variable with the &lt;
+symbol referring to the <tt>operator&lt;()</tt> function. In fact, the compiler should choose the second
+possibility, according to the standard. If <tt>dst.triangularPart</tt> is a member template (as in our case),
+the programmer should specify this explicitly with the \c template keyword and write <tt>dst.template
+triangularPart</tt>.
+
+The precise rules are rather complicated, but ignoring some subtleties we can summarize them as follows:
+- A <em>dependent name</em> is a name that depends (directly or indirectly) on a template parameter. In the
+  example, \c dst is a dependent name because it is of type <tt>MatrixBase&lt;Derived1&gt;</tt> which depends
+  on the template parameter \c Derived1.
+- If the code contains either one of the constructions <tt>xxx.yyy</tt> or <tt>xxx-&gt;yyy</tt> and \c xxx is a
+  dependent name and \c yyy refers to a member template, then the \c template keyword must be used before 
+  \c yyy, leading to <tt>xxx.template yyy</tt> or <tt>xxx-&gt;template yyy</tt>.
+- If the code contains the construction <tt>xxx::yyy</tt> and \c xxx is a dependent name and \c yyy refers to a
+  member typedef, then the \c typename keyword must be used before the whole construction, leading to
+  <tt>typename xxx::yyy</tt>.
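Both rules can also be illustrated without %Eigen at all; in the following sketch the class \c Holder and the function \c twice() are hypothetical:

```cpp
template <typename T>
struct Holder
{
  typedef T value_type;                                    // member typedef
  template <int N> T scaled() const { return value * N; }  // member template
  T value;
};

template <typename T>
typename Holder<T>::value_type twice(const Holder<T>& h)   // typename: dependent typedef
{
  // h has a type depending on the template parameter T, so 'scaled' must be
  // announced as a member template with the template keyword:
  return h.template scaled<2>();
}
```

Dropping either keyword makes the code ill-formed: without \c typename the return type is parsed as an expression, and without \c template the <tt>&lt;2&gt;</tt> is parsed as a comparison.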
+
+As an example where the \c typename keyword is required, consider the following code in \ref TutorialSparse
+for iterating over the non-zero entries of a sparse matrix type:
+
+\code
+SparseMatrixType mat(rows,cols);
+for (int k=0; k<mat.outerSize(); ++k)
+  for (SparseMatrixType::InnerIterator it(mat,k); it; ++it)
+  {
+    /* ... */
+  }
+\endcode
+
+If \c SparseMatrixType depends on a template parameter, then the \c typename keyword is required:
+
+\code
+template <typename T>
+void iterateOverSparseMatrix(const SparseMatrix<T>& mat)
+{
+  for (int k=0; k<mat.outerSize(); ++k)
+    for (typename SparseMatrix<T>::InnerIterator it(mat,k); it; ++it)
+    {
+      /* ... */
+    }
+}
+\endcode
+
+
+\section TopicTemplateKeywordResources Resources for further reading
+
+For more information and a fuller explanation of this topic, the reader may consult the following sources:
+- The book "C++ Template Metaprogramming" by David Abrahams and Aleksey Gurtovoy contains a very good
+  explanation in Appendix B ("The typename and template Keywords") which formed the basis for this page.
+- http://pages.cs.wisc.edu/~driscoll/typename.html
+- http://www.parashift.com/c++-faq-lite/templates.html#faq-35.18
+- http://www.comeaucomputing.com/techtalk/templates/#templateprefix
+- http://www.comeaucomputing.com/techtalk/templates/#typename
+
+*/
+}
diff --git a/doc/Overview.dox b/doc/Overview.dox
new file mode 100644
index 0000000..2657c85
--- /dev/null
+++ b/doc/Overview.dox
@@ -0,0 +1,60 @@
+namespace Eigen {
+
+/** \mainpage Eigen
+
+<div class="eimainmenu">
+    \ref GettingStarted       "Getting started"
+  | \ref TutorialMatrixClass  "Tutorial"
+  | \ref QuickRefPage         "Short reference"
+</div>
+
+This is the API documentation for Eigen3. You can <a href="eigen-doc.tgz">download</a> it as a tgz archive for offline reading.
+
+Eigen2 users: here is a \ref Eigen2ToEigen3 guide to help porting your application.
+
+For a first contact with Eigen, the best place to start is the \ref GettingStarted "tutorial". The \ref QuickRefPage "short reference" page gives you a quite complete description of the API in a very condensed format that is especially useful to recall the syntax of a particular feature, or to have a quick look at the API. For Matlab users, there is also an <a href="AsciiQuickReference.txt">ASCII quick reference</a> with Matlab translations. The \e Modules and \e Classes tabs at the top of this page give you access to the API documentation of individual classes and functions.
+
+\b Table \b of \b contents
+  - \ref Eigen2ToEigen3
+  - \ref GettingStarted
+  - \b Tutorial
+    - \ref TutorialMatrixClass
+    - \ref TutorialMatrixArithmetic
+    - \ref TutorialArrayClass
+    - \ref TutorialBlockOperations
+    - \ref TutorialAdvancedInitialization
+    - \ref TutorialLinearAlgebra
+    - \ref TutorialReductionsVisitorsBroadcasting
+    - \ref TutorialGeometry
+    - \ref TutorialSparse
+    - \ref TutorialMapClass
+  - \ref QuickRefPage
+  - <b>Advanced topics</b>
+    - \ref TopicAliasing
+    - \ref TopicLazyEvaluation
+    - \ref TopicLinearAlgebraDecompositions
+    - \ref TopicCustomizingEigen
+    - \ref TopicMultiThreading
+    - \ref TopicPreprocessorDirectives
+    - \ref TopicStorageOrders
+    - \ref TopicInsideEigenExample
+    - \ref TopicWritingEfficientProductExpression
+    - \ref TopicClassHierarchy
+    - \ref TopicFunctionTakingEigenTypes
+    - \ref TopicTemplateKeyword
+    - \ref TopicUsingIntelMKL
+  - <b>Topics related to alignment issues</b>
+    - \ref TopicUnalignedArrayAssert
+    - \ref TopicFixedSizeVectorizable
+    - \ref TopicStlContainers
+    - \ref TopicStructHavingEigenMembers
+    - \ref TopicPassingByValue
+    - \ref TopicWrongStackAlignment
+  
+   
+
+Want more? Check out the \ref Unsupported_modules "unsupported modules" <a href="unsupported/index.html">documentation</a>.
+
+*/
+
+}
diff --git a/doc/QuickReference.dox b/doc/QuickReference.dox
new file mode 100644
index 0000000..3310d39
--- /dev/null
+++ b/doc/QuickReference.dox
@@ -0,0 +1,737 @@
+namespace Eigen {
+
+/** \page QuickRefPage Quick reference guide
+
+\b Table \b of \b contents
+  - \ref QuickRef_Headers
+  - \ref QuickRef_Types
+  - \ref QuickRef_Map
+  - \ref QuickRef_ArithmeticOperators
+  - \ref QuickRef_Coeffwise
+  - \ref QuickRef_Reductions
+  - \ref QuickRef_Blocks
+  - \ref QuickRef_Misc
+  - \ref QuickRef_DiagTriSymm
+\n
+
+<hr>
+
+<a href="#" class="top">top</a>
+\section QuickRef_Headers Modules and Header files
+
+The Eigen library is divided into a Core module and several additional modules. Each module has a corresponding header file which has to be included in order to use the module. The \c %Dense and \c Eigen header files are provided to conveniently gain access to several modules at once.
+
+<table class="manual">
+<tr><th>Module</th><th>Header file</th><th>Contents</th></tr>
+<tr><td>\link Core_Module Core \endlink</td><td>\code#include <Eigen/Core>\endcode</td><td>Matrix and Array classes, basic linear algebra (including triangular and selfadjoint products), array manipulation</td></tr>
+<tr class="alt"><td>\link Geometry_Module Geometry \endlink</td><td>\code#include <Eigen/Geometry>\endcode</td><td>Transform, Translation, Scaling, Rotation2D and 3D rotations (Quaternion, AngleAxis)</td></tr>
+<tr><td>\link LU_Module LU \endlink</td><td>\code#include <Eigen/LU>\endcode</td><td>Inverse, determinant, LU decompositions with solver (FullPivLU, PartialPivLU)</td></tr>
+<tr><td>\link Cholesky_Module Cholesky \endlink</td><td>\code#include <Eigen/Cholesky>\endcode</td><td>LLT and LDLT Cholesky factorization with solver</td></tr>
+<tr class="alt"><td>\link Householder_Module Householder \endlink</td><td>\code#include <Eigen/Householder>\endcode</td><td>Householder transformations; this module is used by several linear algebra modules</td></tr>
+<tr><td>\link SVD_Module SVD \endlink</td><td>\code#include <Eigen/SVD>\endcode</td><td>SVD decomposition with least-squares solver (JacobiSVD)</td></tr>
+<tr class="alt"><td>\link QR_Module QR \endlink</td><td>\code#include <Eigen/QR>\endcode</td><td>QR decomposition with solver (HouseholderQR, ColPivHouseholderQR, FullPivHouseholderQR)</td></tr>
+<tr><td>\link Eigenvalues_Module Eigenvalues \endlink</td><td>\code#include <Eigen/Eigenvalues>\endcode</td><td>Eigenvalue, eigenvector decompositions (EigenSolver, SelfAdjointEigenSolver, ComplexEigenSolver)</td></tr>
+<tr class="alt"><td>\link Sparse_Module Sparse \endlink</td><td>\code#include <Eigen/Sparse>\endcode</td><td>%Sparse matrix storage and related basic linear algebra (SparseMatrix, DynamicSparseMatrix, SparseVector)</td></tr>
+<tr><td></td><td>\code#include <Eigen/Dense>\endcode</td><td>Includes Core, Geometry, LU, Cholesky, SVD, QR, and Eigenvalues header files</td></tr>
+<tr class="alt"><td></td><td>\code#include <Eigen/Eigen>\endcode</td><td>Includes %Dense and %Sparse header files (the whole Eigen library)</td></tr>
+</table>
+
+<a href="#" class="top">top</a>
+\section QuickRef_Types Array, matrix and vector types
+
+
+\b Recall: Eigen provides two kinds of dense objects: mathematical matrices and vectors which are both represented by the template class Matrix, and general 1D and 2D arrays represented by the template class Array:
+\code
+typedef Matrix<Scalar, RowsAtCompileTime, ColsAtCompileTime, Options> MyMatrixType;
+typedef Array<Scalar, RowsAtCompileTime, ColsAtCompileTime, Options> MyArrayType;
+\endcode
+
+\li \c Scalar is the scalar type of the coefficients (e.g., \c float, \c double, \c bool, \c int, etc.).
+\li \c RowsAtCompileTime and \c ColsAtCompileTime are the number of rows and columns of the matrix as known at compile-time or \c Dynamic.
+\li \c Options can be \c ColMajor or \c RowMajor, default is \c ColMajor. (see class Matrix for more options)
+
+All combinations are allowed: you can have a matrix with a fixed number of rows and a dynamic number of columns, etc. The following are all valid:
+\code
+Matrix<double, 6, Dynamic>                  // Dynamic number of columns (heap allocation)
+Matrix<double, Dynamic, 2>                  // Dynamic number of rows (heap allocation)
+Matrix<double, Dynamic, Dynamic, RowMajor>  // Fully dynamic, row major (heap allocation)
+Matrix<double, 13, 3>                       // Fully fixed (static allocation)
+\endcode
+
+In most cases, you can simply use one of the convenience typedefs for \ref matrixtypedefs "matrices" and \ref arraytypedefs "arrays". Some examples:
+<table class="example">
+<tr><th>Matrices</th><th>Arrays</th></tr>
+<tr><td>\code
+Matrix<float,Dynamic,Dynamic>   <=>   MatrixXf
+Matrix<double,Dynamic,1>        <=>   VectorXd
+Matrix<int,1,Dynamic>           <=>   RowVectorXi
+Matrix<float,3,3>               <=>   Matrix3f
+Matrix<float,4,1>               <=>   Vector4f
+\endcode</td><td>\code
+Array<float,Dynamic,Dynamic>    <=>   ArrayXXf
+Array<double,Dynamic,1>         <=>   ArrayXd
+Array<int,1,Dynamic>            <=>   RowArrayXi
+Array<float,3,3>                <=>   Array33f
+Array<float,4,1>                <=>   Array4f
+\endcode</td></tr>
+</table>
+
+Conversion between the matrix and array worlds:
+\code
+Array44f a1, a2;
+Matrix4f m1, m2;
+m1 = a1 * a2;                     // coeffwise product, implicit conversion from array to matrix.
+a1 = m1 * m2;                     // matrix product, implicit conversion from matrix to array.
+a2 = a1 + m1.array();             // mixing array and matrix is forbidden
+m2 = a1.matrix() + m1;            // and explicit conversion is required.
+ArrayWrapper<Matrix4f> m1a(m1);   // m1a is an alias for m1.array(), they share the same coefficients
+MatrixWrapper<Array44f> a1m(a1);
+\endcode
+
+In the rest of this document we will use the following symbols to emphasize the features which are specific to a given kind of object:
+\li <a name="matrixonly"></a>\matrixworld linear algebra matrix and vector only
+\li <a name="arrayonly"></a>\arrayworld array objects only
+
+\subsection QuickRef_Basics Basic matrix manipulation
+
+<table class="manual">
+<tr><th></th><th>1D objects</th><th>2D objects</th><th>Notes</th></tr>
+<tr><td>Constructors</td>
+<td>\code
+Vector4d  v4;
+Vector2f  v1(x, y);
+Array3i   v2(x, y, z);
+Vector4d  v3(x, y, z, w);
+
+VectorXf  v5; // empty object
+ArrayXf   v6(size);
+\endcode</td><td>\code
+Matrix4f  m1;
+
+
+
+
+MatrixXf  m5; // empty object
+MatrixXf  m6(nb_rows, nb_columns);
+\endcode</td><td class="note">
+By default, the coefficients \n are left uninitialized</td></tr>
+<tr class="alt"><td>Comma initializer</td>
+<td>\code
+Vector3f  v1;     v1 << x, y, z;
+ArrayXf   v2(4);  v2 << 1, 2, 3, 4;
+
+\endcode</td><td>\code
+Matrix3f  m1;   m1 << 1, 2, 3,
+                      4, 5, 6,
+                      7, 8, 9;
+\endcode</td><td></td></tr>
+
+<tr><td>Comma initializer (bis)</td>
+<td colspan="2">
+\include Tutorial_commainit_02.cpp
+</td>
+<td>
+output:
+\verbinclude Tutorial_commainit_02.out
+</td>
+</tr>
+
+<tr class="alt"><td>Runtime info</td>
+<td>\code
+vector.size();
+
+vector.innerStride();
+vector.data();
+\endcode</td><td>\code
+matrix.rows();          matrix.cols();
+matrix.innerSize();     matrix.outerSize();
+matrix.innerStride();   matrix.outerStride();
+matrix.data();
+\endcode</td><td class="note">Inner/Outer* are storage order dependent</td></tr>
+<tr><td>Compile-time info</td>
+<td colspan="2">\code
+ObjectType::Scalar              ObjectType::RowsAtCompileTime
+ObjectType::RealScalar          ObjectType::ColsAtCompileTime
+ObjectType::Index               ObjectType::SizeAtCompileTime
+\endcode</td><td></td></tr>
+<tr class="alt"><td>Resizing</td>
+<td>\code
+vector.resize(size);
+
+
+vector.resizeLike(other_vector);
+vector.conservativeResize(size);
+\endcode</td><td>\code
+matrix.resize(nb_rows, nb_cols);
+matrix.resize(Eigen::NoChange, nb_cols);
+matrix.resize(nb_rows, Eigen::NoChange);
+matrix.resizeLike(other_matrix);
+matrix.conservativeResize(nb_rows, nb_cols);
+\endcode</td><td class="note">no-op if the new sizes match,<br/>otherwise data are lost<br/><br/>resizing with data preservation</td></tr>
+
+<tr><td>Coeff access with \n range checking</td>
+<td>\code
+vector(i)     vector.x()
+vector[i]     vector.y()
+              vector.z()
+              vector.w()
+\endcode</td><td>\code
+matrix(i,j)
+\endcode</td><td class="note">Range checking is disabled if \n NDEBUG or EIGEN_NO_DEBUG is defined</td></tr>
+
+<tr class="alt"><td>Coeff access without \n range checking</td>
+<td>\code
+vector.coeff(i)
+vector.coeffRef(i)
+\endcode</td><td>\code
+matrix.coeff(i,j)
+matrix.coeffRef(i,j)
+\endcode</td><td></td></tr>
+
+<tr><td>Assignment/copy</td>
+<td colspan="2">\code
+object = expression;
+object_of_float = expression_of_double.cast<float>();
+\endcode</td><td class="note">the destination is automatically resized (if possible)</td></tr>
+
+</table>
+
+\subsection QuickRef_PredefMat Predefined Matrices
+
+<table class="manual">
+<tr>
+  <th>Fixed-size matrix or vector</th>
+  <th>Dynamic-size matrix</th>
+  <th>Dynamic-size vector</th>
+</tr>
+<tr style="border-bottom-style: none;">
+  <td>
+\code
+typedef {Matrix3f|Array33f} FixedXD;
+FixedXD x;
+
+x = FixedXD::Zero();
+x = FixedXD::Ones();
+x = FixedXD::Constant(value);
+x = FixedXD::Random();
+x = FixedXD::LinSpaced(size, low, high);
+
+x.setZero();
+x.setOnes();
+x.setConstant(value);
+x.setRandom();
+x.setLinSpaced(size, low, high);
+\endcode
+  </td>
+  <td>
+\code
+typedef {MatrixXf|ArrayXXf} Dynamic2D;
+Dynamic2D x;
+
+x = Dynamic2D::Zero(rows, cols);
+x = Dynamic2D::Ones(rows, cols);
+x = Dynamic2D::Constant(rows, cols, value);
+x = Dynamic2D::Random(rows, cols);
+N/A
+
+x.setZero(rows, cols);
+x.setOnes(rows, cols);
+x.setConstant(rows, cols, value);
+x.setRandom(rows, cols);
+N/A
+\endcode
+  </td>
+  <td>
+\code
+typedef {VectorXf|ArrayXf} Dynamic1D;
+Dynamic1D x;
+
+x = Dynamic1D::Zero(size);
+x = Dynamic1D::Ones(size);
+x = Dynamic1D::Constant(size, value);
+x = Dynamic1D::Random(size);
+x = Dynamic1D::LinSpaced(size, low, high);
+
+x.setZero(size);
+x.setOnes(size);
+x.setConstant(size, value);
+x.setRandom(size);
+x.setLinSpaced(size, low, high);
+\endcode
+  </td>
+</tr>
+
+<tr><td colspan="3">Identity and \link MatrixBase::Unit basis vectors \endlink \matrixworld</td></tr>
+<tr style="border-bottom-style: none;">
+  <td>
+\code
+x = FixedXD::Identity();
+x.setIdentity();
+
+Vector3f::UnitX() // 1 0 0
+Vector3f::UnitY() // 0 1 0
+Vector3f::UnitZ() // 0 0 1
+\endcode
+  </td>
+  <td>
+\code
+x = Dynamic2D::Identity(rows, cols);
+x.setIdentity(rows, cols);
+
+
+
+N/A
+\endcode
+  </td>
+  <td>\code
+N/A
+
+
+VectorXf::Unit(size,i)
+VectorXf::Unit(4,1) == Vector4f(0,1,0,0)
+                    == Vector4f::UnitY()
+\endcode
+  </td>
+</tr>
+</table>
+
+
+
+\subsection QuickRef_Map Mapping external arrays
+
+<table class="manual">
+<tr>
+<td>Contiguous \n memory</td>
+<td>\code
+float data[] = {1,2,3,4};
+Map<Vector3f> v1(data);       // uses v1 as a Vector3f object
+Map<ArrayXf>  v2(data,3);     // uses v2 as a ArrayXf object
+Map<Array22f> m1(data);       // uses m1 as a Array22f object
+Map<MatrixXf> m2(data,2,2);   // uses m2 as a MatrixXf object
+\endcode</td>
+</tr>
+<tr>
+<td>Typical usage \n of strides</td>
+<td>\code
+float data[] = {1,2,3,4,5,6,7,8,9};
+Map<VectorXf,0,InnerStride<2> >  v1(data,3);                      // = [1,3,5]
+Map<VectorXf,0,InnerStride<> >   v2(data,3,InnerStride<>(3));     // = [1,4,7]
+Map<MatrixXf,0,OuterStride<3> >  m2(data,2,3);                    // both lines     |1,4,7|
+Map<MatrixXf,0,OuterStride<> >   m1(data,2,3,OuterStride<>(3));   // are equal to:  |2,5,8|
+\endcode</td>
+</tr>
+</table>
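The stride examples above boil down to plain pointer arithmetic. A minimal sketch of the indexing rule they document (not Eigen's implementation; helper names are illustrative):

```cpp
#include <cassert>

// Read element i of a strided 1-D view: data[i * stride].
float strided_get(const float* data, int i, int stride) {
    return data[i * stride];
}

// Read (row, col) of a column-major 2-D view with an outer (column) stride.
float outer_strided_get(const float* data, int row, int col, int outer_stride) {
    return data[col * outer_stride + row];
}
```

For data = {1,...,9}, an inner stride of 2 yields 1,3,5 and a stride of 3 yields 1,4,7, matching the Map examples above.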
+
+
+<a href="#" class="top">top</a>
+\section QuickRef_ArithmeticOperators Arithmetic Operators
+
+<table class="manual">
+<tr><td>
+add \n subtract</td><td>\code
+mat3 = mat1 + mat2;           mat3 += mat1;
+mat3 = mat1 - mat2;           mat3 -= mat1;\endcode
+</td></tr>
+<tr class="alt"><td>
+scalar product</td><td>\code
+mat3 = mat1 * s1;             mat3 *= s1;           mat3 = s1 * mat1;
+mat3 = mat1 / s1;             mat3 /= s1;\endcode
+</td></tr>
+<tr><td>
+matrix/vector \n products \matrixworld</td><td>\code
+col2 = mat1 * col1;
+row2 = row1 * mat1;           row1 *= mat1;
+mat3 = mat1 * mat2;           mat3 *= mat1; \endcode
+</td></tr>
+<tr class="alt"><td>
+transposition \n adjoint \matrixworld</td><td>\code
+mat1 = mat2.transpose();      mat1.transposeInPlace();
+mat1 = mat2.adjoint();        mat1.adjointInPlace();
+\endcode
+</td></tr>
+<tr><td>
+\link MatrixBase::dot() dot \endlink product \n inner product \matrixworld</td><td>\code
+scalar = vec1.dot(vec2);
+scalar = col1.adjoint() * col2;
+scalar = (col1.adjoint() * col2).value();\endcode
+</td></tr>
+<tr class="alt"><td>
+outer product \matrixworld</td><td>\code
+mat = col1 * col2.transpose();\endcode
+</td></tr>
+
+<tr><td>
+\link MatrixBase::norm() norm \endlink \n \link MatrixBase::normalized() normalization \endlink \matrixworld</td><td>\code
+scalar = vec1.norm();         scalar = vec1.squaredNorm()
+vec2 = vec1.normalized();     vec1.normalize(); // inplace \endcode
+</td></tr>
+
+<tr class="alt"><td>
+\link MatrixBase::cross() cross product \endlink \matrixworld</td><td>\code
+#include <Eigen/Geometry>
+vec3 = vec1.cross(vec2);\endcode</td></tr>
+</table>
+
+<a href="#" class="top">top</a>
+\section QuickRef_Coeffwise Coefficient-wise \& Array operators
+Coefficient-wise operators for matrices and vectors:
+<table class="manual">
+<tr><th>Matrix API \matrixworld</th><th>Via Array conversions</th></tr>
+<tr><td>\code
+mat1.cwiseMin(mat2)
+mat1.cwiseMax(mat2)
+mat1.cwiseAbs2()
+mat1.cwiseAbs()
+mat1.cwiseSqrt()
+mat1.cwiseProduct(mat2)
+mat1.cwiseQuotient(mat2)\endcode
+</td><td>\code
+mat1.array().min(mat2.array())
+mat1.array().max(mat2.array())
+mat1.array().abs2()
+mat1.array().abs()
+mat1.array().sqrt()
+mat1.array() * mat2.array()
+mat1.array() / mat2.array()
+\endcode</td></tr>
+</table>
+
+It is also very simple to apply any user-defined function \c foo using DenseBase::unaryExpr together with std::ptr_fun:
+\code mat1.unaryExpr(std::ptr_fun(foo))\endcode
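Conceptually, unaryExpr applies the function to every coefficient, i.e. a transform over the data. A plain-C++ sketch of that idea (illustrative only, not Eigen code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Any user-defined scalar function (hypothetical example).
double foo(double x) { return 2.0 * x + 1.0; }

// Conceptual equivalent of mat1.unaryExpr(std::ptr_fun(foo)):
// apply foo to every coefficient and return the result.
std::vector<double> apply_unary(std::vector<double> v, double (*f)(double)) {
    std::transform(v.begin(), v.end(), v.begin(), f);
    return v;
}
```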
+
+Array operators:\arrayworld
+
+<table class="manual">
+<tr><td>Arithmetic operators</td><td>\code
+array1 * array2     array1 / array2     array1 *= array2    array1 /= array2
+array1 + scalar     array1 - scalar     array1 += scalar    array1 -= scalar
+\endcode</td></tr>
+<tr><td>Comparisons</td><td>\code
+array1 < array2     array1 > array2     array1 < scalar     array1 > scalar
+array1 <= array2    array1 >= array2    array1 <= scalar    array1 >= scalar
+array1 == array2    array1 != array2    array1 == scalar    array1 != scalar
+\endcode</td></tr>
+<tr><td>Trigo, power, and \n misc functions \n and the STL variants</td><td>\code
+array1.min(array2)            
+array1.max(array2)            
+array1.abs2()
+array1.abs()                  std::abs(array1)
+array1.sqrt()                 std::sqrt(array1)
+array1.log()                  std::log(array1)
+array1.exp()                  std::exp(array1)
+array1.pow(exponent)          std::pow(array1,exponent)
+array1.square()
+array1.cube()
+array1.inverse()
+array1.sin()                  std::sin(array1)
+array1.cos()                  std::cos(array1)
+array1.tan()                  std::tan(array1)
+array1.asin()                 std::asin(array1)
+array1.acos()                 std::acos(array1)
+\endcode
+</td></tr>
+</table>
+
+<a href="#" class="top">top</a>
+\section QuickRef_Reductions Reductions
+
+Eigen provides several reduction methods such as:
+\link DenseBase::minCoeff() minCoeff() \endlink, \link DenseBase::maxCoeff() maxCoeff() \endlink,
+\link DenseBase::sum() sum() \endlink, \link DenseBase::prod() prod() \endlink,
+\link MatrixBase::trace() trace() \endlink \matrixworld,
+\link MatrixBase::norm() norm() \endlink \matrixworld, \link MatrixBase::squaredNorm() squaredNorm() \endlink \matrixworld,
+\link DenseBase::all() all() \endlink, and \link DenseBase::any() any() \endlink.
+All reduction operations can be done matrix-wise,
+\link DenseBase::colwise() column-wise \endlink or
+\link DenseBase::rowwise() row-wise \endlink. Usage example:
+<table class="manual">
+<tr><td rowspan="3" style="border-right-style:dashed;vertical-align:middle">\code
+      5 3 1
+mat = 2 7 8
+      9 4 6 \endcode
+</td> <td>\code mat.minCoeff(); \endcode</td><td>\code 1 \endcode</td></tr>
+<tr class="alt"><td>\code mat.colwise().minCoeff(); \endcode</td><td>\code 2 3 1 \endcode</td></tr>
+<tr style="vertical-align:middle"><td>\code mat.rowwise().minCoeff(); \endcode</td><td>\code
+1
+2
+4
+\endcode</td></tr>
+</table>
+
+Special versions of \link DenseBase::minCoeff(Index*,Index*) minCoeff \endlink and \link DenseBase::maxCoeff(Index*,Index*) maxCoeff \endlink:
+\code
+int i, j;
+s = vector.minCoeff(&i);        // s == vector[i]
+s = matrix.maxCoeff(&i, &j);    // s == matrix(i,j)
+\endcode
+Typical use cases of all() and any():
+\code
+if((array1 > 0).all()) ...      // if all coefficients of array1 are greater than 0 ...
+if((array1 < array2).any()) ... // if there exists a pair i,j such that array1(i,j) < array2(i,j) ...
+\endcode
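These reductions map directly onto standard algorithms. A plain-C++ sketch of what vector.minCoeff(&i), (array > 0).all() and (array < 0).any() compute (illustrative only, not Eigen code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Analogue of s = vector.minCoeff(&i): returns the minimum and writes its index.
double min_coeff(const std::vector<double>& v, int* i) {
    auto it = std::min_element(v.begin(), v.end());
    *i = static_cast<int>(it - v.begin());
    return *it;
}

// Analogue of (array > 0).all().
bool all_positive(const std::vector<double>& v) {
    return std::all_of(v.begin(), v.end(), [](double x) { return x > 0; });
}

// Analogue of (array < 0).any().
bool any_negative(const std::vector<double>& v) {
    return std::any_of(v.begin(), v.end(), [](double x) { return x < 0; });
}
```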
+
+
+<a href="#" class="top">top</a>\section QuickRef_Blocks Sub-matrices
+
+Read-write access to a \link DenseBase::col(Index) column \endlink
+or a \link DenseBase::row(Index) row \endlink of a matrix (or array):
+\code
+mat1.row(i) = mat2.col(j);
+mat1.col(j1).swap(mat1.col(j2));
+\endcode
+
+Read-write access to sub-vectors:
+<table class="manual">
+<tr>
+<th>Default versions</th>
+<th>Optimized versions when the size \n is known at compile time</th>
+<th></th></tr>
+
+<tr><td>\code vec1.head(n)\endcode</td><td>\code vec1.head<n>()\endcode</td><td>the first \c n coeffs </td></tr>
+<tr><td>\code vec1.tail(n)\endcode</td><td>\code vec1.tail<n>()\endcode</td><td>the last \c n coeffs </td></tr>
+<tr><td>\code vec1.segment(pos,n)\endcode</td><td>\code vec1.segment<n>(pos)\endcode</td>
+    <td>the \c n coeffs in \n the range [\c pos : \c pos + \c n [</td></tr>
+<tr class="alt"><td colspan="3">
+
+Read-write access to sub-matrices:</td></tr>
+<tr>
+  <td>\code mat1.block(i,j,rows,cols)\endcode
+      \link DenseBase::block(Index,Index,Index,Index) (more) \endlink</td>
+  <td>\code mat1.block<rows,cols>(i,j)\endcode
+      \link DenseBase::block(Index,Index) (more) \endlink</td>
+  <td>the \c rows x \c cols sub-matrix \n starting from position (\c i,\c j)</td></tr>
+<tr><td>\code
+ mat1.topLeftCorner(rows,cols)
+ mat1.topRightCorner(rows,cols)
+ mat1.bottomLeftCorner(rows,cols)
+ mat1.bottomRightCorner(rows,cols)\endcode
+ <td>\code
+ mat1.topLeftCorner<rows,cols>()
+ mat1.topRightCorner<rows,cols>()
+ mat1.bottomLeftCorner<rows,cols>()
+ mat1.bottomRightCorner<rows,cols>()\endcode
+ <td>the \c rows x \c cols sub-matrix \n taken in one of the four corners</td></tr>
+ <tr><td>\code
+ mat1.topRows(rows)
+ mat1.bottomRows(rows)
+ mat1.leftCols(cols)
+ mat1.rightCols(cols)\endcode
+ <td>\code
+ mat1.topRows<rows>()
+ mat1.bottomRows<rows>()
+ mat1.leftCols<cols>()
+ mat1.rightCols<cols>()\endcode
+ <td>specialized versions of block() \n when the block fits two corners</td></tr>
+</table>
+
+
+
+<a href="#" class="top">top</a>\section QuickRef_Misc Miscellaneous operations
+
+\subsection QuickRef_Reverse Reverse
+Vectors, rows, and/or columns of a matrix can be reversed (see DenseBase::reverse(), DenseBase::reverseInPlace(), VectorwiseOp::reverse()).
+\code
+vec.reverse()           mat.colwise().reverse()   mat.rowwise().reverse()
+vec.reverseInPlace()
+\endcode
+
+\subsection QuickRef_Replicate Replicate
+Vectors, matrices, rows, and/or columns can be replicated in any direction (see DenseBase::replicate(), VectorwiseOp::replicate())
+\code
+vec.replicate(times)                                          vec.replicate<Times>
+mat.replicate(vertical_times, horizontal_times)               mat.replicate<VerticalTimes, HorizontalTimes>()
+mat.colwise().replicate(vertical_times, horizontal_times)     mat.colwise().replicate<VerticalTimes, HorizontalTimes>()
+mat.rowwise().replicate(vertical_times, horizontal_times)     mat.rowwise().replicate<VerticalTimes, HorizontalTimes>()
+\endcode
+
+
+<a href="#" class="top">top</a>\section QuickRef_DiagTriSymm Diagonal, Triangular, and Self-adjoint matrices
+(matrix world \matrixworld)
+
+\subsection QuickRef_Diagonal Diagonal matrices
+
+<table class="example">
+<tr><th>Operation</th><th>Code</th></tr>
+<tr><td>
+view a vector \link MatrixBase::asDiagonal() as a diagonal matrix \endlink \n </td><td>\code
+mat1 = vec1.asDiagonal();\endcode
+</td></tr>
+<tr><td>
+Declare a diagonal matrix</td><td>\code
+DiagonalMatrix<Scalar,SizeAtCompileTime> diag1(size);
+diag1.diagonal() = vector;\endcode
+</td></tr>
+<tr><td>Access the \link MatrixBase::diagonal() diagonal \endlink and \link MatrixBase::diagonal(Index) super/sub diagonals \endlink of a matrix as a vector (read/write)</td>
+ <td>\code
+vec1 = mat1.diagonal();        mat1.diagonal() = vec1;      // main diagonal
+vec1 = mat1.diagonal(+n);      mat1.diagonal(+n) = vec1;    // n-th super diagonal
+vec1 = mat1.diagonal(-n);      mat1.diagonal(-n) = vec1;    // n-th sub diagonal
+vec1 = mat1.diagonal<1>();     mat1.diagonal<1>() = vec1;   // first super diagonal
+vec1 = mat1.diagonal<-2>();    mat1.diagonal<-2>() = vec1;  // second sub diagonal
+\endcode</td>
+</tr>
+
+<tr><td>Optimized products and inverse</td>
+ <td>\code
+mat3  = scalar * diag1 * mat1;
+mat3 += scalar * mat1 * vec1.asDiagonal();
+mat3 = vec1.asDiagonal().inverse() * mat1;
+mat3 = mat1 * diag1.inverse();
+\endcode</td>
+</tr>
+
+</table>
+
+\subsection QuickRef_TriangularView Triangular views
+
+TriangularView gives a view on a triangular part of a dense matrix and allows optimized operations to be performed on it. The opposite triangular part is never referenced and can be used to store other information.
+
+\note The .triangularView() template member function requires the \c template keyword if it is used on an
+object of a type that depends on a template parameter; see \ref TopicTemplateKeyword for details.
+
+<table class="example">
+<tr><th>Operation</th><th>Code</th></tr>
+<tr><td>
+Reference to a triangular part with optional \n
+unit or null diagonal (read/write):
+</td><td>\code
+m.triangularView<Xxx>()
+\endcode \n
+\c Xxx = ::Upper, ::Lower, ::StrictlyUpper, ::StrictlyLower, ::UnitUpper, ::UnitLower
+</td></tr>
+<tr><td>
+Writing to a specific triangular part:\n (only the referenced triangular part is evaluated)
+</td><td>\code
+m1.triangularView<Eigen::Lower>() = m2 + m3 \endcode
+</td></tr>
+<tr><td>
+Conversion to a dense matrix setting the opposite triangular part to zero:
+</td><td>\code
+m2 = m1.triangularView<Eigen::UnitUpper>()\endcode
+</td></tr>
+<tr><td>
+Products:
+</td><td>\code
+m3 += s1 * m1.adjoint().triangularView<Eigen::UnitUpper>() * m2
+m3 -= s1 * m2.conjugate() * m1.adjoint().triangularView<Eigen::Lower>() \endcode
+</td></tr>
+<tr><td>
+Solving linear equations:\n
+\f$ M_2 := L_1^{-1} M_2 \f$ \n
+\f$ M_3 := {L_1^*}^{-1} M_3 \f$ \n
+\f$ M_4 := M_4 U_1^{-1} \f$
+</td><td>\n \code
+L1.triangularView<Eigen::UnitLower>().solveInPlace(M2)
+L1.triangularView<Eigen::Lower>().adjoint().solveInPlace(M3)
+U1.triangularView<Eigen::Upper>().solveInPlace<OnTheRight>(M4)\endcode
+</td></tr>
+</table>
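These triangular solves reduce to substitution. A minimal forward-substitution sketch for a lower-triangular system L x = b (plain C++, row-major storage; Eigen dispatches to optimized kernels, this loop is only the underlying recurrence):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Solve L x = b in place (b becomes x), with L lower triangular,
// stored row-major as an n x n array.
void solve_lower_in_place(const std::vector<double>& L,
                          std::vector<double>& b, int n) {
    for (int i = 0; i < n; ++i) {
        double s = b[i];
        for (int j = 0; j < i; ++j)
            s -= L[i * n + j] * b[j];
        b[i] = s / L[i * n + i];  // a UnitLower view would take this diagonal as 1
    }
}
```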
+
+\subsection QuickRef_SelfadjointMatrix Symmetric/selfadjoint views
+
+Just as for triangular matrices, you can reference any triangular part of a square matrix to see it as a selfadjoint
+matrix and perform special and optimized operations. Again the opposite triangular part is never referenced and can be
+used to store other information.
+
+\note The .selfadjointView() template member function requires the \c template keyword if it is used on an
+object of a type that depends on a template parameter; see \ref TopicTemplateKeyword for details.
+
+<table class="example">
+<tr><th>Operation</th><th>Code</th></tr>
+<tr><td>
+Conversion to a dense matrix:
+</td><td>\code
+m2 = m.selfadjointView<Eigen::Lower>();\endcode
+</td></tr>
+<tr><td>
+Product with another general matrix or vector:
+</td><td>\code
+m3  = s1 * m1.conjugate().selfadjointView<Eigen::Upper>() * m3;
+m3 -= s1 * m3.adjoint() * m1.selfadjointView<Eigen::Lower>();\endcode
+</td></tr>
+<tr><td>
+Rank 1 and rank K update: \n
+\f$ upper(M_1) \mathrel{{+}{=}} s_1 M_2 M_2^* \f$ \n
+\f$ lower(M_1) \mathbin{{-}{=}} M_2^* M_2 \f$
+</td><td>\n \code
+M1.selfadjointView<Eigen::Upper>().rankUpdate(M2,s1);
+M1.selfadjointView<Eigen::Lower>().rankUpdate(M2.adjoint(),-1); \endcode
+</td></tr>
+<tr><td>
+Rank 2 update: (\f$ M \mathrel{{+}{=}} s u v^* + s v u^* \f$)
+</td><td>\code
+M.selfadjointView<Eigen::Upper>().rankUpdate(u,v,s);
+\endcode
+</td></tr>
+<tr><td>
+Solving linear equations:\n(\f$ M_2 := M_1^{-1} M_2 \f$)
+</td><td>\code
+// via a standard Cholesky factorization
+m2 = m1.selfadjointView<Eigen::Upper>().llt().solve(m2);
+// via a Cholesky factorization with pivoting
+m2 = m1.selfadjointView<Eigen::Lower>().ldlt().solve(m2);
+\endcode
+</td></tr>
+</table>
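The llt().solve() call above amounts to a Cholesky factorization A = L L<sup>T</sup> followed by two triangular substitutions. A minimal unblocked sketch (illustrative only; Eigen's LLT is blocked and handles errors, this is just the textbook recurrence):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Unblocked Cholesky: overwrite the lower triangle of A (row-major n x n)
// with the factor L such that A = L * L^T. Assumes A is positive definite.
void cholesky(std::vector<double>& A, int n) {
    for (int j = 0; j < n; ++j) {
        for (int k = 0; k < j; ++k)
            for (int i = j; i < n; ++i)
                A[i * n + j] -= A[i * n + k] * A[j * n + k];
        A[j * n + j] = std::sqrt(A[j * n + j]);
        for (int i = j + 1; i < n; ++i)
            A[i * n + j] /= A[j * n + j];
    }
}

// Solve A x = b via the factor L: forward then backward substitution.
void llt_solve(const std::vector<double>& L, std::vector<double>& b, int n) {
    for (int i = 0; i < n; ++i) {               // L y = b
        for (int j = 0; j < i; ++j) b[i] -= L[i * n + j] * b[j];
        b[i] /= L[i * n + i];
    }
    for (int i = n - 1; i >= 0; --i) {          // L^T x = y
        for (int j = i + 1; j < n; ++j) b[i] -= L[j * n + i] * b[j];
        b[i] /= L[i * n + i];
    }
}
```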
+
+*/
+
+}
diff --git a/doc/SparseQuickReference.dox b/doc/SparseQuickReference.dox
new file mode 100644
index 0000000..7d6eb0f
--- /dev/null
+++ b/doc/SparseQuickReference.dox
@@ -0,0 +1,198 @@
+namespace Eigen {
+/** \page SparseQuickRefPage Quick reference guide for sparse matrices
+
+\b Table \b of \b contents
+  - \ref Constructors
+  - \ref SparseMatrixInsertion
+  - \ref SparseBasicInfos
+  - \ref SparseBasicOps
+  - \ref SparseInterops
+  - \ref sparsepermutation
+  - \ref sparsesubmatrices
+  - \ref sparseselfadjointview
+\n 
+
+<hr>
+
+In this page, we give a quick summary of the main operations available for sparse matrices in the class SparseMatrix. It is recommended to first read the introductory tutorial at \ref TutorialSparse. The important point to keep in mind when working with sparse matrices is how they are stored:
+either row major or column major. The default is column major. Most arithmetic operations on sparse matrices will assert that they have the same storage order. Moreover, when interacting with external libraries that are not yet supported by Eigen, it is important to know how to pass the required matrix pointers. 
+
+\section Constructors Constructors and assignments
+SparseMatrix is the core class to build and manipulate sparse matrices in Eigen. It takes as template parameters the Scalar type and the storage order, either RowMajor or ColumnMajor. The default is ColumnMajor.
+
+\code
+  SparseMatrix<double> sm1(1000,1000);              // 1000x1000 compressed sparse matrix of double. 
+  SparseMatrix<std::complex<double>,RowMajor> sm2; // Compressed row major matrix of complex double.
+\endcode
+The copy constructor and assignment can be used to convert matrices from one storage order to another:
+\code 
+  SparseMatrix<double,ColMajor> sm1;
+  // Eventually fill the matrix sm1 ...
+  SparseMatrix<double,RowMajor> sm2(sm1), sm3;         // Initialize sm2 with sm1.
+  sm3 = sm1; // Assignment and evaluation change the storage order.
+ \endcode
+
+\section SparseMatrixInsertion  Allocating and inserting values
+resize() and reserve() are used to set the size and allocate space for nonzero elements
+ \code
+    sm1.resize(m,n);      // Change sm1 to an m x n matrix. 
+    sm1.reserve(nnz);     // Allocate room for nnz nonzero elements.   
+  \endcode 
+Note that when calling reserve(), it is not required that nnz be the exact number of nonzero elements in the final matrix. However, a good estimate will avoid multiple reallocations during the insertion phase. 
+
+Values can be inserted into the sparse matrix directly by looping over the nonzero elements and using the insert() function:
+\code 
+// Direct insertion of the value v_ij; 
+  sm1.insert(i, j) = v_ij;   // It is assumed that v_ij does not already exist in the matrix. 
+\endcode
+
+After insertion, a value at (i,j) can be modified using coeffRef()
+\code
+  // Update the value v_ij
+  sm1.coeffRef(i,j) = v_ij;
+  sm1.coeffRef(i,j) += v_ij;
+  sm1.coeffRef(i,j) -= v_ij;
+  ...
+\endcode
+
+The recommended way to insert values is to build a list of triplets (row, col, val) and then call setFromTriplets(). 
+\code
+  sm1.setFromTriplets(TripletList.begin(), TripletList.end());
+\endcode
+A complete example is available at \ref TutorialSparseFilling.
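Conceptually, setFromTriplets() sorts the triplet list and accumulates duplicates into compressed storage. A simplified plain-C++ sketch of that assembly (names and layout are illustrative; Eigen's actual implementation differs):

```cpp
#include <algorithm>
#include <cassert>
#include <tuple>
#include <vector>

struct Triplet { int row, col; double value; };

// Minimal column-major compressed storage, mimicking what
// setFromTriplets() produces (duplicate entries are summed).
struct Csc {
    std::vector<int> outer;    // column starts, size cols+1
    std::vector<int> inner;    // row index of each stored value
    std::vector<double> vals;  // stored values
};

Csc from_triplets(std::vector<Triplet> t, int cols) {
    std::sort(t.begin(), t.end(), [](const Triplet& a, const Triplet& b) {
        return std::tie(a.col, a.row) < std::tie(b.col, b.row);
    });
    Csc m;
    m.outer.assign(cols + 1, 0);
    int last_col = -1, last_row = -1;
    for (const Triplet& x : t) {
        if (x.col == last_col && x.row == last_row) {
            m.vals.back() += x.value;          // accumulate duplicates
        } else {
            m.inner.push_back(x.row);
            m.vals.push_back(x.value);
            m.outer[x.col + 1] += 1;           // count entries per column
            last_col = x.col; last_row = x.row;
        }
    }
    for (int c = 0; c < cols; ++c)             // prefix sum -> column starts
        m.outer[c + 1] += m.outer[c];
    return m;
}
```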
+
+The following functions can be used to set constant or random values in the matrix.
+\code
+  sm1.setZero(); // Reset the matrix with zero elements
+  ...
+\endcode
+
+\section SparseBasicInfos Matrix properties
+Beyond rows() and cols(), which return the number of rows and columns, some useful functions are available to easily retrieve information from the matrix. 
+<table class="manual">
+<tr>
+  <td> \code
+  sm1.rows();         // Number of rows
+  sm1.cols();         // Number of columns 
+  sm1.nonZeros();     // Number of non zero values   
+  sm1.outerSize();    // Number of columns (resp. rows) for a column major (resp. row major )
+  sm1.innerSize();    // Number of rows (resp. columns) for a column major (resp. row major)
+  sm1.norm();         // Euclidean (Frobenius) norm of the matrix
+  sm1.squaredNorm();  // Squared (Frobenius) norm
+  sm1.isVector();     // Check if sm1 is a sparse vector or a sparse matrix
+  ...
+  \endcode </td>
+</tr>
+</table>
+
+\section SparseBasicOps Arithmetic operations
+It is easy to perform arithmetic operations on sparse matrices provided that the dimensions are adequate and that the matrices have the same storage order. Note that the evaluation can always be done in a matrix with a different storage order. 
+<table class="manual">
+<tr><th> Operations </th> <th> Code </th> <th> Notes </th></tr>
+
+<tr>
+  <td> add subtract </td> 
+  <td> \code
+  sm3 = sm1 + sm2; 
+  sm3 = sm1 - sm2;
+  sm2 += sm1; 
+  sm2 -= sm1; \endcode
+  </td>
+  <td> 
+  sm1 and sm2 should have the same storage order
+  </td> 
+</tr>
+
+<tr class="alt"><td>
+  scalar product</td><td>\code
+  sm3 = sm1 * s1;   sm3 *= s1; 
+  sm3 = s1 * sm1 + s2 * sm2; sm3 /= s1;\endcode
+  </td>
+  <td>
+    Many combinations are possible if the dimensions and the storage order agree.
+  </td>
+</tr>
+
+<tr>
+  <td> Product </td>
+  <td> \code
+  sm3 = sm1 * sm2;
+  dm2 = sm1 * dm1;
+  dv2 = sm1 * dv1;
+  \endcode </td>
+  <td>
+  </td>
+</tr> 
+
+<tr class='alt'>
+  <td> transposition, adjoint</td>
+  <td> \code
+  sm2 = sm1.transpose();
+  sm2 = sm1.adjoint();
+  \endcode </td>
+  <td>
+  Note that transposition changes the storage order. There is no support for transposeInPlace().
+  </td>
+</tr> 
+
+<tr>
+  <td>
+  Component-wise ops
+  </td>
+  <td>\code 
+  sm1.cwiseProduct(sm2);
+  sm1.cwiseQuotient(sm2);
+  sm1.cwiseMin(sm2);
+  sm1.cwiseMax(sm2);
+  sm1.cwiseAbs();
+  sm1.cwiseSqrt();
+  \endcode</td>
+  <td>
+  sm1 and sm2 should have the same storage order
+  </td>
+</tr>
+</table>
+
+
+\section SparseInterops Low-level storage
+A set of low-level functions is available to get the standard compressed storage pointers. The matrix should be in compressed mode, which can be checked by calling isCompressed(); otherwise, call makeCompressed() first. 
+\code
+  // Scalar pointer to the values of the matrix, size nnz
+  sm1.valuePtr();  
+  // Index pointer to get the row indices (resp. column indices) for column major (resp. row major) matrix, size nnz
+  sm1.innerIndexPtr();
+  // Index pointer to the beginning of each column (resp. row) in valuePtr() and innerIndexPtr() for column major (resp. row major). The size is outerSize()+1. 
+  sm1.outerIndexPtr();  
+\endcode
+These pointers can therefore be easily used to send the matrix to some external libraries/solvers that are not yet supported by Eigen.
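A typical consumer of these three arrays is a matrix-vector product written directly against the compressed column format. A sketch (plain C++, not Eigen code; argument names are illustrative):

```cpp
#include <cassert>
#include <vector>

// y = A * x for a column-major compressed matrix described by the three
// arrays returned by valuePtr(), innerIndexPtr(), outerIndexPtr().
std::vector<double> csc_spmv(const std::vector<double>& vals,
                             const std::vector<int>& inner,
                             const std::vector<int>& outer,
                             const std::vector<double>& x, int rows) {
    std::vector<double> y(rows, 0.0);
    int cols = static_cast<int>(outer.size()) - 1;
    for (int c = 0; c < cols; ++c)
        for (int k = outer[c]; k < outer[c + 1]; ++k)
            y[inner[k]] += vals[k] * x[c];   // A(inner[k], c) contributes to y
    return y;
}
```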
+
+\section sparsepermutation Permutations, submatrices and Selfadjoint Views
+In many cases, it is necessary to reorder the rows and/or the columns of a sparse matrix for several purposes: reducing fill-in during matrix decomposition, improving data locality for sparse matrix-vector products, etc. The class PermutationMatrix is available to this end. 
+ \code
+  PermutationMatrix<Dynamic, Dynamic, int> perm;
+  // Reserve and fill the values of perm; 
+  perm.inverse(); // Returns the inverse permutation (if needed)
+  sm1.twistedBy(perm); // Apply the permutation on both rows and columns 
+  sm2 = sm1 * perm; // Apply the permutation on the columns 
+  sm2 = perm * sm1; // Apply the permutation on the rows 
+  \endcode
+
+\section sparsesubmatrices Sub-matrices
+The following functions are useful to extract a block of rows (resp. columns) from a row-major (resp. column-major) sparse matrix. Note that because of the particular storage, it is not efficient to extract an arbitrary submatrix made of a subset of rows and a subset of columns.
+ \code
+  sm1.innerVector(outer); // Returns the outer-th column (resp. row) of the matrix if sm1 is col-major (resp. row-major)
+  sm1.innerVectors(start, size); // Returns a block of size consecutive columns (resp. rows) if sm1 is col-major (resp. row-major)
+  sm1.middleRows(start, numRows); // For row major matrices, get a range of numRows rows
+  sm1.middleCols(start, numCols); // For column major matrices, get a range of numCols cols
+ \endcode 
+
+\section sparseselfadjointview Sparse triangular and selfadjoint Views
+ \code
+  sm2 = sm1.triangularView<Lower>(); // Get the lower triangular part of the matrix. 
+  dv2 = sm1.triangularView<Upper>().solve(dv1); // Solve the linear system with the upper triangular part. 
+  sm2 = sm1.selfadjointView<Lower>(); // Build a selfadjoint matrix from the lower part of sm1. 
+  \endcode
+
+
+*/
+}
diff --git a/doc/TopicLinearAlgebraDecompositions.dox b/doc/TopicLinearAlgebraDecompositions.dox
new file mode 100644
index 0000000..faa564b
--- /dev/null
+++ b/doc/TopicLinearAlgebraDecompositions.dox
@@ -0,0 +1,260 @@
+namespace Eigen {
+
+/** \page TopicLinearAlgebraDecompositions Linear algebra and decompositions
+
+
+\section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen
+
+<table class="manual-vl">
+
+    <tr>
+        <th class="meta"></th>
+        <th class="meta" colspan="5">Generic information, not Eigen-specific</th>
+        <th class="meta" colspan="3">Eigen-specific</th>
+    </tr>
+
+    <tr>
+        <th>Decomposition</th>
+        <th>Requirements on the matrix</th>
+        <th>Speed</th>
+        <th>Algorithm reliability and accuracy</th>
+        <th>Rank-revealing</th>
+        <th>Allows computing (besides linear solving)</th>
+        <th>Linear solver provided by Eigen</th>
+        <th>Maturity of Eigen's implementation</th>
+        <th>Optimizations</th>
+    </tr>
+
+    <tr>
+        <td>PartialPivLU</td>
+        <td>Invertible</td>
+        <td>Fast</td>
+        <td>Depends on condition number</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td>Blocking, Implicit MT</td>
+    </tr>
+
+    <tr class="alt">
+        <td>FullPivLU</td>
+        <td>-</td>
+        <td>Slow</td>
+        <td>Proven</td>
+        <td>Yes</td>
+        <td>-</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td>-</td>
+    </tr>
+
+    <tr>
+        <td>HouseholderQR</td>
+        <td>-</td>
+        <td>Fast</td>
+        <td>Depends on condition number</td>
+        <td>-</td>
+        <td>Orthogonalization</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td>Blocking</td>
+    </tr>
+
+    <tr class="alt">
+        <td>ColPivHouseholderQR</td>
+        <td>-</td>
+        <td>Fast</td>
+        <td>Good</td>
+        <td>Yes</td>
+        <td>Orthogonalization</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td><em>Soon: blocking</em></td>
+    </tr>
+
+    <tr>
+        <td>FullPivHouseholderQR</td>
+        <td>-</td>
+        <td>Slow</td>
+        <td>Proven</td>
+        <td>Yes</td>
+        <td>Orthogonalization</td>
+        <td>Yes</td>
+        <td>Average</td>
+        <td>-</td>
+    </tr>
+
+    <tr class="alt">
+        <td>LLT</td>
+        <td>Positive definite</td>
+        <td>Very fast</td>
+        <td>Depends on condition number</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td>Blocking</td>
+    </tr>
+
+    <tr>
+        <td>LDLT</td>
+        <td>Positive or negative semidefinite<sup><a href="#note1">1</a></sup></td>
+        <td>Very fast</td>
+        <td>Good</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Yes</td>
+        <td>Excellent</td>
+        <td><em>Soon: blocking</em></td>
+    </tr>
+
+    <tr><th class="inter" colspan="9">\n Singular values and eigenvalues decompositions</th></tr>
+
+    <tr>
+        <td>JacobiSVD (two-sided)</td>
+        <td>-</td>
+        <td>Slow (but fast for small matrices)</td>
+        <td>Excellent-Proven<sup><a href="#note3">3</a></sup></td>
+        <td>Yes</td>
+        <td>Singular values/vectors, least squares</td>
+        <td>Yes (and does least squares)</td>
+        <td>Excellent</td>
+        <td>R-SVD</td>
+    </tr>
+
+    <tr class="alt">
+        <td>SelfAdjointEigenSolver</td>
+        <td>Self-adjoint</td>
+        <td>Fast-average<sup><a href="#note2">2</a></sup></td>
+        <td>Good</td>
+        <td>Yes</td>
+        <td>Eigenvalues/vectors</td>
+        <td>-</td>
+        <td>Good</td>
+        <td><em>Closed forms for 2x2 and 3x3</em></td>
+    </tr>
+
+    <tr>
+        <td>ComplexEigenSolver</td>
+        <td>Square</td>
+        <td>Slow-very slow<sup><a href="#note2">2</a></sup></td>
+        <td>Depends on condition number</td>
+        <td>Yes</td>
+        <td>Eigenvalues/vectors</td>
+        <td>-</td>
+        <td>Average</td>
+        <td>-</td>
+    </tr>
+
+    <tr class="alt">
+        <td>EigenSolver</td>
+        <td>Square and real</td>
+        <td>Average-slow<sup><a href="#note2">2</a></sup></td>
+        <td>Depends on condition number</td>
+        <td>Yes</td>
+        <td>Eigenvalues/vectors</td>
+        <td>-</td>
+        <td>Average</td>
+        <td>-</td>
+    </tr>
+
+    <tr>
+        <td>GeneralizedSelfAdjointEigenSolver</td>
+        <td>Square</td>
+        <td>Fast-average<sup><a href="#note2">2</a></sup></td>
+        <td>Depends on condition number</td>
+        <td>-</td>
+        <td>Generalized eigenvalues/vectors</td>
+        <td>-</td>
+        <td>Good</td>
+        <td>-</td>
+    </tr>
+
+    <tr><th class="inter" colspan="9">\n Helper decompositions</th></tr>
+
+    <tr>
+        <td>RealSchur</td>
+        <td>Square and real</td>
+        <td>Average-slow<sup><a href="#note2">2</a></sup></td>
+        <td>Depends on condition number</td>
+        <td>Yes</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Average</td>
+        <td>-</td>
+    </tr>
+
+    <tr class="alt">
+        <td>ComplexSchur</td>
+        <td>Square</td>
+        <td>Slow-very slow<sup><a href="#note2">2</a></sup></td>
+        <td>Depends on condition number</td>
+        <td>Yes</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Average</td>
+        <td>-</td>
+    </tr>
+
+    <tr>
+        <td>Tridiagonalization</td>
+        <td>Self-adjoint</td>
+        <td>Fast</td>
+        <td>Good</td>
+        <td>-</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Good</td>
+        <td><em>Soon: blocking</em></td>
+    </tr>
+
+    <tr class="alt">
+        <td>HessenbergDecomposition</td>
+        <td>Square</td>
+        <td>Average</td>
+        <td>Good</td>
+        <td>-</td>
+        <td>-</td>
+        <td>-</td>
+        <td>Good</td>
+        <td><em>Soon: blocking</em></td>
+    </tr>
+
+</table>
+
+\b Notes:
+<ul>
+<li><a name="note1">\b 1: </a>There exist two variants of the LDLT algorithm. Eigen's variant produces a pure diagonal D matrix, and therefore it cannot handle indefinite matrices, unlike LAPACK's variant, which produces a block-diagonal D matrix.</li>
+<li><a name="note2">\b 2: </a>Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
+<li><a name="note3">\b 3: </a>Our JacobiSVD is two-sided, giving proven and optimal precision for square matrices. For non-square matrices, a QR preconditioner must be applied first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want proven accuracy, use FullPivHouseholderQR instead.</li>
+</ul>
+
+\section TopicLinAlgTerminology Terminology
+
+<dl>
+  <dt><b>Selfadjoint</b></dt>
+    <dd>For a real matrix, selfadjoint is a synonym for \em symmetric. For a complex matrix, selfadjoint is a synonym for \em Hermitian.
+        More generally, a matrix \f$ A \f$ is selfadjoint if and only if it is equal to its adjoint \f$ A^* \f$. The adjoint is also called the \em conjugate \em transpose. </dd>
+  <dt><b>Positive/negative definite</b></dt>
+    <dd>A selfadjoint matrix \f$ A \f$ is positive definite if \f$ v^* A v > 0 \f$ for every nonzero vector \f$ v \f$.
+        Similarly, it is negative definite if \f$ v^* A v < 0 \f$ for every nonzero vector \f$ v \f$.</dd>
+  <dt><b>Positive/negative semidefinite</b></dt>
+    <dd>A selfadjoint matrix \f$ A \f$ is positive semidefinite if \f$ v^* A v \ge 0 \f$ for every vector \f$ v \f$.
+        Similarly, it is negative semidefinite if \f$ v^* A v \le 0 \f$ for every vector \f$ v \f$.</dd>
+
+  <dt><b>Blocking</b></dt>
+    <dd>Means the algorithm can work block-wise, thus guaranteeing good performance scaling for large matrices.</dd>
+  <dt><b>Implicit Multi Threading (MT)</b></dt>
+    <dd>Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but that it relies on parallelized matrix-matrix product routines.</dd>
+  <dt><b>Explicit Multi Threading (MT)</b></dt>
+    <dd>Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.</dd>
+  <dt><b>Meta-unroller</b></dt>
+    <dd>Means the algorithm is automatically and explicitly unrolled for very small fixed size matrices.</dd>
+</dl>
+
+*/
+
+}
diff --git a/doc/TopicMultithreading.dox b/doc/TopicMultithreading.dox
new file mode 100644
index 0000000..f7d0826
--- /dev/null
+++ b/doc/TopicMultithreading.dox
@@ -0,0 +1,46 @@
+namespace Eigen {
+
+/** \page TopicMultiThreading Eigen and multi-threading
+
+\section TopicMultiThreading_MakingEigenMT Make Eigen run in parallel
+
+Some of Eigen's algorithms can exploit the multiple cores present in your hardware. To this end, it is enough to enable OpenMP on your compiler, for instance:
+ * GCC: \c -fopenmp
+ * ICC: \c -openmp
+ * MSVC: check the respective option in the build properties.
+You can control the number of threads that will be used using either the OpenMP API or Eigen's API, with the following order of priority:
+\code
+ OMP_NUM_THREADS=n ./my_program
+ omp_set_num_threads(n);
+ Eigen::setNbThreads(n);
+\endcode
+Unless setNbThreads has been called, Eigen uses the number of threads specified by OpenMP. You can restore this behavior by calling \code setNbThreads(0); \endcode
+You can query the number of threads that will be used with:
+\code
+n = Eigen::nbThreads();
+\endcode
+You can disable Eigen's multi threading at compile time by defining the EIGEN_DONT_PARALLELIZE preprocessor token.
+
+Currently, the following algorithms can make use of multi-threading:
+ * general matrix - matrix products
+ * PartialPivLU
+
+\section TopicMultiThreading_UsingEigenWithMT Using Eigen in a multi-threaded application
+
+If your own application is multithreaded, and multiple threads make calls to Eigen, then you have to initialize Eigen by calling the following routine \b before creating the threads:
+\code
+#include <Eigen/Core>
+
+int main(int argc, char** argv)
+{
+  Eigen::initParallel();
+  
+  ...
+}
+\endcode
+
+If your application is parallelized with OpenMP, you might want to disable Eigen's own parallelization as detailed in the previous section.
+
+*/
+
+}
diff --git a/doc/TutorialSparse_example_details.dox b/doc/TutorialSparse_example_details.dox
new file mode 100644
index 0000000..0438da8
--- /dev/null
+++ b/doc/TutorialSparse_example_details.dox
@@ -0,0 +1,4 @@
+/**
+\page TutorialSparse_example_details
+\include Tutorial_sparse_example_details.cpp
+*/
diff --git a/doc/UsingIntelMKL.dox b/doc/UsingIntelMKL.dox
new file mode 100644
index 0000000..379ee3f
--- /dev/null
+++ b/doc/UsingIntelMKL.dox
@@ -0,0 +1,168 @@
+/*
+ Copyright (c) 2011, Intel Corporation. All rights reserved.
+ Copyright (C) 2011 Gael Guennebaud <gael.guennebaud@inria.fr>
+
+ Redistribution and use in source and binary forms, with or without modification,
+ are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice, this
+   list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright notice,
+   this list of conditions and the following disclaimer in the documentation
+   and/or other materials provided with the distribution.
+ * Neither the name of Intel Corporation nor the names of its contributors may
+   be used to endorse or promote products derived from this software without
+   specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+ DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+ (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+ LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ ********************************************************************************
+ *   Content : Documentation on the use of Intel MKL through Eigen
+ ********************************************************************************
+*/
+
+namespace Eigen {
+
+/** \page TopicUsingIntelMKL Using Intel® Math Kernel Library from Eigen
+
+\section TopicUsingIntelMKL_Intro Eigen and Intel® Math Kernel Library (Intel® MKL)
+
+Since Eigen version 3.1, users can benefit from built-in Intel MKL optimizations with an installed copy of Intel MKL 10.3 (or later).
+<a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php"> Intel MKL </a> provides highly optimized multi-threaded mathematical routines for x86-compatible architectures.
+Intel MKL is available on Linux, Mac and Windows for both Intel64 and IA32 architectures.
+
+\warning Be aware that Intel® MKL is proprietary software. It is the responsibility of users to buy MKL licenses for their products. Moreover, the license of the user product has to allow linking to proprietary software that excludes any unmodified versions of the GPL. As a consequence, this also means that Eigen has to be used through the LGPL3+ license.
+
+Using Intel MKL through Eigen is easy:
+-# define the \c EIGEN_USE_MKL_ALL macro before including any Eigen header
+-# link your program to MKL libraries (see the <a href="http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/">MKL linking advisor</a>)
+-# on a 64-bit system, you must use the LP64 interface (not the ILP64 one)
+
+When doing so, a number of Eigen's algorithms are silently substituted with calls to Intel MKL routines.
+These substitutions apply only for \b Dynamic \b or \b large enough objects with one of the following four standard scalar types: \c float, \c double, \c complex<float>, and \c complex<double>.
+Operations on other scalar types or mixing reals and complexes will continue to use the built-in algorithms.
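Putting the steps together, a typical translation unit would look like the following sketch. Note that it only builds when MKL is installed and the link line obtained from the MKL linking advisor is used, so it is shown as a configuration fragment:

```cpp
// Must be defined before any Eigen header is included.
#define EIGEN_USE_MKL_ALL

#include <Eigen/Dense>

int main()
{
  // Large dynamic-size double matrices: this product is routed to MKL's dgemm.
  Eigen::MatrixXd m1 = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd m2 = Eigen::MatrixXd::Random(512, 512);
  Eigen::MatrixXd m3 = m1 * m2;
  return 0;
}
```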
+
+In addition, you can coarsely select which parts will be substituted by defining one or multiple of the following macros:
+
+<table class="manual">
+<tr><td>\c EIGEN_USE_BLAS </td><td>Enables the use of external BLAS level 2 and 3 routines (currently works with Intel MKL only)</td></tr>
+<tr class="alt"><td>\c EIGEN_USE_LAPACKE </td><td>Enables the use of external Lapack routines via the <a href="http://www.netlib.org/lapack/lapacke.html">Intel Lapacke</a> C interface to Lapack (currently works with Intel MKL only)</td></tr>
+<tr><td>\c EIGEN_USE_LAPACKE_STRICT </td><td>Same as \c EIGEN_USE_LAPACKE but algorithms of lower robustness are disabled. This currently concerns only JacobiSVD, which would otherwise be replaced by \c gesvd, which is less robust than Jacobi rotations.</td></tr>
+<tr class="alt"><td>\c EIGEN_USE_MKL_VML </td><td>Enables the use of Intel VML (vector operations)</td></tr>
+<tr><td>\c EIGEN_USE_MKL_ALL </td><td>Defines \c EIGEN_USE_BLAS, \c EIGEN_USE_LAPACKE, and \c EIGEN_USE_MKL_VML </td></tr>
+</table>
+
+Finally, the PARDISO sparse solver shipped with Intel MKL can be used through the \ref PardisoLU, \ref PardisoLLT and \ref PardisoLDLT classes of the \ref PARDISOSupport_Module.
+
+
+\section TopicUsingIntelMKL_SupportedFeatures List of supported features
+
+The breadth of Eigen functionality covered by Intel MKL is listed in the table below.
+<table class="manual">
+<tr><th>Functional domain</th><th>Code example</th><th>MKL routines</th></tr>
+<tr><td>Matrix-matrix operations \n \c EIGEN_USE_BLAS </td><td>\code
+m1*m2.transpose();
+m1.selfadjointView<Lower>()*m2;
+m1*m2.triangularView<Upper>();
+m1.selfadjointView<Lower>().rankUpdate(m2,1.0);
+\endcode</td><td>\code
+?gemm
+?symm/?hemm
+?trmm
+dsyrk/ssyrk
+\endcode</td></tr>
+<tr class="alt"><td>Matrix-vector operations \n \c EIGEN_USE_BLAS </td><td>\code
+m1.adjoint()*b;
+m1.selfadjointView<Lower>()*b;
+m1.triangularView<Upper>()*b;
+\endcode</td><td>\code
+?gemv
+?symv/?hemv
+?trmv
+\endcode</td></tr>
+<tr><td>LU decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT </td><td>\code
+v1 = m1.lu().solve(v2);
+\endcode</td><td>\code
+?getrf
+\endcode</td></tr>
+<tr class="alt"><td>Cholesky decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT </td><td>\code
+v1 = m2.selfadjointView<Upper>().llt().solve(v2);
+\endcode</td><td>\code
+?potrf
+\endcode</td></tr>
+<tr><td>QR decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT </td><td>\code
+m1.householderQr();
+m1.colPivHouseholderQr();
+\endcode</td><td>\code
+?geqrf
+?geqp3
+\endcode</td></tr>
+<tr class="alt"><td>Singular value decomposition \n \c EIGEN_USE_LAPACKE </td><td>\code
+JacobiSVD<MatrixXd> svd;
+svd.compute(m1, ComputeThinV);
+\endcode</td><td>\code
+?gesvd
+\endcode</td></tr>
+<tr><td>Eigen-value decompositions \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT </td><td>\code
+EigenSolver<MatrixXd> es(m1);
+ComplexEigenSolver<MatrixXcd> ces(m1);
+SelfAdjointEigenSolver<MatrixXd> saes(m1+m1.transpose());
+GeneralizedSelfAdjointEigenSolver<MatrixXd>
+    gsaes(m1+m1.transpose(),m2+m2.transpose());
+\endcode</td><td>\code
+?gees
+?gees
+?syev/?heev
+?syev/?heev,
+?potrf
+\endcode</td></tr>
+<tr class="alt"><td>Schur decomposition \n \c EIGEN_USE_LAPACKE \n \c EIGEN_USE_LAPACKE_STRICT </td><td>\code
+RealSchur<MatrixXd> schurR(m1);
+ComplexSchur<MatrixXcd> schurC(m1);
+\endcode</td><td>\code
+?gees
+\endcode</td></tr>
+<tr><td>Vector Math \n \c EIGEN_USE_MKL_VML </td><td>\code
+v2=v1.array().sin();
+v2=v1.array().asin();
+v2=v1.array().cos();
+v2=v1.array().acos();
+v2=v1.array().tan();
+v2=v1.array().exp();
+v2=v1.array().log();
+v2=v1.array().sqrt();
+v2=v1.array().square();
+v2=v1.array().pow(1.5);
+\endcode</td><td>\code
+v?Sin
+v?Asin
+v?Cos
+v?Acos
+v?Tan
+v?Exp
+v?Ln
+v?Sqrt
+v?Sqr
+v?Powx
+\endcode</td></tr>
+</table>
+In the examples, \c m1 and \c m2 are dense matrices and \c v1 and \c v2 are dense vectors.
+
+
+\section TopicUsingIntelMKL_Links Links
+- Intel MKL can be purchased and downloaded <a href="http://eigen.tuxfamily.org/Counter/redirect_to_mkl.php">here</a>.
+- Intel MKL is also bundled with <a href="http://software.intel.com/en-us/articles/intel-composer-xe/">Intel Composer XE</a>.
+
+
+*/
+
+}
\ No newline at end of file
diff --git a/doc/eigendoxy.css b/doc/eigendoxy.css
new file mode 100644
index 0000000..c6c1628
--- /dev/null
+++ b/doc/eigendoxy.css
@@ -0,0 +1,911 @@
+/* The standard CSS for doxygen */
+
+body, table, div, p, dl {
+  font-family: Lucida Grande, Verdana, Geneva, Arial, sans-serif;
+  font-size: 12px;
+}
+
+/* @group Heading Levels */
+
+h1 {
+  font-size: 150%;
+}
+
+h2 {
+  font-size: 120%;
+}
+
+h3 {
+  font-size: 100%;
+}
+
+dt {
+  font-weight: bold;
+}
+
+div.multicol {
+  -moz-column-gap: 1em;
+  -webkit-column-gap: 1em;
+  -moz-column-count: 3;
+  -webkit-column-count: 3;
+}
+
+p.startli, p.startdd, p.starttd {
+  margin-top: 2px;
+}
+
+p.endli {
+  margin-bottom: 0px;
+}
+
+p.enddd {
+  margin-bottom: 4px;
+}
+
+p.endtd {
+  margin-bottom: 2px;
+}
+
+/* @end */
+
+caption {
+  font-weight: bold;
+}
+
+span.legend {
+        font-size: 70%;
+        text-align: center;
+}
+
+h3.version {
+        font-size: 90%;
+        text-align: center;
+}
+
+div.qindex, div.navtab{
+  background-color: #EBEFF6;
+  border: 1px solid #A3B4D7;
+  text-align: center;
+  margin: 2px;
+  padding: 2px;
+}
+
+div.qindex, div.navpath {
+  width: 100%;
+  line-height: 140%;
+}
+
+div.navtab {
+  margin-right: 15px;
+}
+
+/* @group Link Styling */
+
+a {
+  color: #3D578C;
+  font-weight: normal;
+  text-decoration: none;
+}
+
+.contents a:visited {
+  color: #4665A2;
+}
+
+a:hover {
+  text-decoration: underline;
+}
+
+a.qindex {
+  font-weight: bold;
+}
+
+a.qindexHL {
+  font-weight: bold;
+  background-color: #9CAFD4;
+  color: #ffffff;
+  border: 1px double #869DCA;
+}
+
+.contents a.qindexHL:visited {
+        color: #ffffff;
+}
+
+a.el {
+  font-weight: bold;
+}
+
+a.elRef {
+}
+
+a.code {
+  color: #4665A2;
+}
+
+a.codeRef {
+  color: #4665A2;
+}
+
+/* @end */
+
+dl.el {
+  margin-left: -1cm;
+}
+
+.fragment {
+  font-family: monospace, fixed;
+  font-size: 105%;
+}
+
+pre.fragment {
+  border: 1px solid #C4CFE5;
+  background-color: #FBFCFD;
+  padding: 4px 6px;
+  margin: 4px 8px 4px 2px;
+  overflow: auto;
+  /*word-wrap: break-word;*/
+  font-size:  9pt;
+  line-height: 125%;
+}
+
+div.ah {
+  background-color: black;
+  font-weight: bold;
+  color: #ffffff;
+  margin-bottom: 3px;
+  margin-top: 3px;
+  padding: 0.2em;
+  border: solid thin #333;
+  border-radius: 0.5em;
+  -webkit-border-radius: .5em;
+  -moz-border-radius: .5em;
+  box-shadow: 2px 2px 3px #999;
+  -webkit-box-shadow: 2px 2px 3px #999;
+  -moz-box-shadow: rgba(0, 0, 0, 0.15) 2px 2px 2px;
+  background-image: -webkit-gradient(linear, left top, left bottom, from(#eee), to(#000),color-stop(0.3, #444));
+  background-image: -moz-linear-gradient(center top, #eee 0%, #444 40%, #000);
+}
+
+div.groupHeader {
+  margin-left: 16px;
+  margin-top: 12px;
+  font-weight: bold;
+}
+
+div.groupText {
+  margin-left: 16px;
+  font-style: italic;
+}
+
+body {
+  background: white;
+  color: black;
+        margin: 0;
+}
+
+div.contents {
+  margin-top: 10px;
+  margin-left: 10px;
+  margin-right: 10px;
+}
+
+td.indexkey {
+  background-color: #EBEFF6;
+  font-weight: bold;
+  border: 1px solid #C4CFE5;
+  margin: 2px 0px 2px 0;
+  padding: 2px 10px;
+}
+
+td.indexvalue {
+  background-color: #EBEFF6;
+  border: 1px solid #C4CFE5;
+  padding: 2px 10px;
+  margin: 2px 0px;
+}
+
+tr.memlist {
+  background-color: #EEF1F7;
+}
+
+p.formulaDsp {
+  text-align: center;
+}
+
+img.formulaDsp {
+  
+}
+
+img.formulaInl {
+  vertical-align: middle;
+}
+
+div.center {
+  text-align: center;
+        margin-top: 0px;
+        margin-bottom: 0px;
+        padding: 0px;
+}
+
+div.center img {
+  border: 0px;
+}
+
+address.footer {
+  text-align: right;
+  padding-right: 12px;
+}
+
+img.footer {
+  border: 0px;
+  vertical-align: middle;
+}
+
+/* @group Code Colorization */
+
+span.keyword {
+  color: #008000
+}
+
+span.keywordtype {
+  color: #604020
+}
+
+span.keywordflow {
+  color: #e08000
+}
+
+span.comment {
+  color: #800000
+}
+
+span.preprocessor {
+  color: #806020
+}
+
+span.stringliteral {
+  color: #002080
+}
+
+span.charliteral {
+  color: #008080
+}
+
+span.vhdldigit { 
+  color: #ff00ff 
+}
+
+span.vhdlchar { 
+  color: #000000 
+}
+
+span.vhdlkeyword { 
+  color: #700070 
+}
+
+span.vhdllogic { 
+  color: #ff0000 
+}
+
+/* @end */
+
+/*
+.search {
+  color: #003399;
+  font-weight: bold;
+}
+
+form.search {
+  margin-bottom: 0px;
+  margin-top: 0px;
+}
+
+input.search {
+  font-size: 75%;
+  color: #000080;
+  font-weight: normal;
+  background-color: #e8eef2;
+}
+*/
+
+td.tiny {
+  font-size: 75%;
+}
+
+.dirtab {
+  padding: 4px;
+  border-collapse: collapse;
+  border: 1px solid #A3B4D7;
+}
+
+th.dirtab {
+  background: #EBEFF6;
+  font-weight: bold;
+}
+
+hr {
+  height: 0px;
+  border: none;
+  border-top: 1px solid #4A6AAA;
+}
+
+hr.footer {
+  height: 1px;
+}
+
+/* @group Member Descriptions */
+
+table.memberdecls {
+  border-spacing: 0px;
+  padding: 0px;
+}
+
+.mdescLeft, .mdescRight,
+.memItemLeft, .memItemRight,
+.memTemplItemLeft, .memTemplItemRight, .memTemplParams {
+  background-color: #F9FAFC;
+  border: none;
+  margin: 4px;
+  padding: 1px 0 0 8px;
+}
+
+.mdescLeft, .mdescRight {
+  padding: 0px 8px 4px 8px;
+  color: #555;
+}
+
+.memItemLeft, .memItemRight, .memTemplParams {
+  border-top: 1px solid #C4CFE5;
+}
+
+.memItemLeft, .memTemplItemLeft {
+        white-space: nowrap;
+}
+
+.memTemplParams {
+  color: #4665A2;
+        white-space: nowrap;
+}
+
+/* @end */
+
+/* @group Member Details */
+
+/* Styles for detailed member documentation */
+
+.memtemplate {
+  font-size: 80%;
+  color: #4665A2;
+  font-weight: normal;
+  margin-left: 9px;
+}
+
+.memnav {
+  background-color: #EBEFF6;
+  border: 1px solid #A3B4D7;
+  text-align: center;
+  margin: 2px;
+  margin-right: 15px;
+  padding: 2px;
+}
+
+.memitem {
+  padding: 0;
+  margin-bottom: 10px;
+}
+
+.memname {
+        white-space: nowrap;
+        font-weight: bold;
+        margin-left: 6px;
+}
+
+.memproto {
+        border-top: 1px solid #A8B8D9;
+        border-left: 1px solid #A8B8D9;
+        border-right: 1px solid #A8B8D9;
+        padding: 6px 0px 6px 0px;
+        color: #253555;
+        font-weight: bold;
+        text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9);
+        /* opera specific markup */
+        box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+        border-top-right-radius: 8px;
+        border-top-left-radius: 8px;
+        /* firefox specific markup */
+        -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
+        -moz-border-radius-topright: 8px;
+        -moz-border-radius-topleft: 8px;
+        /* webkit specific markup */
+        -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+        -webkit-border-top-right-radius: 8px;
+        -webkit-border-top-left-radius: 8px;
+        background-image:url('nav_f.png');
+        background-repeat:repeat-x;
+        background-color: #E2E8F2;
+
+}
+
+.memdoc {
+        border-bottom: 1px solid #A8B8D9;      
+        border-left: 1px solid #A8B8D9;      
+        border-right: 1px solid #A8B8D9; 
+        padding: 2px 5px;
+        background-color: #FBFCFD;
+        border-top-width: 0;
+        /* opera specific markup */
+        border-bottom-left-radius: 8px;
+        border-bottom-right-radius: 8px;
+        box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+        /* firefox specific markup */
+        -moz-border-radius-bottomleft: 8px;
+        -moz-border-radius-bottomright: 8px;
+        -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
+        background-image: -moz-linear-gradient(center top, #FFFFFF 0%, #FFFFFF 60%, #F7F8FB 95%, #EEF1F7);
+        /* webkit specific markup */
+        -webkit-border-bottom-left-radius: 8px;
+        -webkit-border-bottom-right-radius: 8px;
+        -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+        background-image: -webkit-gradient(linear,center top,center bottom,from(#FFFFFF), color-stop(0.6,#FFFFFF), color-stop(0.60,#FFFFFF), color-stop(0.95,#F7F8FB), to(#EEF1F7));
+}
+
+.paramkey {
+  text-align: right;
+}
+
+.paramtype {
+  white-space: nowrap;
+}
+
+.paramname {
+  color: #602020;
+  white-space: nowrap;
+}
+.paramname em {
+  font-style: normal;
+}
+
+.params, .retval, .exception, .tparams {
+        border-spacing: 6px 2px;
+}       
+
+.params .paramname, .retval .paramname {
+        font-weight: bold;
+        vertical-align: top;
+}
+        
+.params .paramtype {
+        font-style: italic;
+        vertical-align: top;
+}       
+        
+.params .paramdir {
+        font-family: "courier new",courier,monospace;
+        vertical-align: top;
+}
+
+
+
+
+/* @end */
+
+/* @group Directory (tree) */
+
+/* for the tree view */
+
+.ftvtree {
+  font-family: sans-serif;
+  margin: 0px;
+}
+
+/* these are for tree view when used as main index */
+
+.directory {
+  font-size: 9pt;
+  font-weight: bold;
+  margin: 5px;
+}
+
+.directory h3 {
+  margin: 0px;
+  margin-top: 1em;
+  font-size: 11pt;
+}
+
+/*
+The following two styles can be used to replace the root node title
+with an image of your choice.  Simply uncomment the next two styles,
+specify the name of your image and be sure to set 'height' to the
+proper pixel height of your image.
+*/
+
+/*
+.directory h3.swap {
+  height: 61px;
+  background-repeat: no-repeat;
+  background-image: url("yourimage.gif");
+}
+.directory h3.swap span {
+  display: none;
+}
+*/
+
+.directory > h3 {
+  margin-top: 0;
+}
+
+.directory p {
+  margin: 0px;
+  white-space: nowrap;
+}
+
+.directory div {
+  display: none;
+  margin: 0px;
+}
+
+.directory img {
+  vertical-align: -30%;
+}
+
+/* these are for tree view when not used as main index */
+
+.directory-alt {
+  font-size: 100%;
+  font-weight: bold;
+}
+
+.directory-alt h3 {
+  margin: 0px;
+  margin-top: 1em;
+  font-size: 11pt;
+}
+
+.directory-alt > h3 {
+  margin-top: 0;
+}
+
+.directory-alt p {
+  margin: 0px;
+  white-space: nowrap;
+}
+
+.directory-alt div {
+  display: none;
+  margin: 0px;
+}
+
+.directory-alt img {
+  vertical-align: -30%;
+}
+
+/* @end */
+
+div.dynheader {
+        margin-top: 8px;
+}
+
+address {
+  font-style: normal;
+  color: #2A3D61;
+}
+
+table.doxtable {
+  border-collapse:collapse;
+}
+
+table.doxtable td, table.doxtable th {
+  border: 1px solid #2D4068;
+  padding: 3px 7px 2px;
+}
+
+table.doxtable th {
+  background-color: #374F7F;
+  color: #FFFFFF;
+  font-size: 110%;
+  padding-bottom: 4px;
+  padding-top: 5px;
+  text-align:left;
+}
+
+.tabsearch {
+  top: 0px;
+  left: 10px;
+  height: 36px;
+  background-image: url('tab_b.png');
+  z-index: 101;
+  overflow: hidden;
+  font-size: 13px;
+}
+
+.navpath ul
+{
+  font-size: 11px;
+  background-image:url('tab_b.png');
+  background-repeat:repeat-x;
+  height:30px;
+  line-height:30px;
+  color:#8AA0CC;
+  border:solid 1px #C2CDE4;
+  overflow:hidden;
+  margin:0px;
+  padding:0px;
+}
+
+.navpath li
+{
+  list-style-type:none;
+  float:left;
+  padding-left:10px;
+  padding-right: 15px;
+  background-image:url('bc_s.png');
+  background-repeat:no-repeat;
+  background-position:right;
+  color:#364D7C;
+}
+
+.navpath a
+{
+  height:32px;
+  display:block;
+  text-decoration: none;
+  outline: none;
+}
+
+.navpath a:hover
+{
+  color:#6884BD;
+}
+
+div.summary
+{
+  float: right;
+  font-size: 8pt;
+  padding-right: 5px;
+  width: 50%;
+  text-align: right;
+}       
+
+div.summary a
+{
+  white-space: nowrap;
+}
+
+div.header
+{
+        background-image:url('nav_h.png');
+        background-repeat:repeat-x;
+  background-color: #F9FAFC;
+  margin:  0px;
+  border-bottom: 1px solid #C4CFE5;
+}
+
+div.headertitle
+{
+  padding: 5px 5px 5px 10px;
+}
+
+
+
+/******** Eigen specific CSS code ************/
+
+
+body {
+  max-width:60em;
+  margin-left:5%;
+  margin-top:2%;
+  font-family: Lucida Grande, Verdana, Geneva, Arial, sans-serif;
+}
+
+img {
+    border: 0;
+}
+
+a.logo {
+  float:right;
+  margin:10px;
+}
+
+div.fragment {
+  display:table; /* this allows the element to be larger than its parent */
+  padding: 0pt;
+}
+pre.fragment {
+  border: 1px solid #cccccc;
+
+  margin: 2px 0px 2px 0px ;
+  padding: 3px 5px 3px 5px;
+}
+
+/* Common style for all Eigen's tables */
+
+table.example, table.manual, table.manual-vl {
+    max-width:100%;
+    border-collapse: collapse;
+    border-style: solid;
+    border-width: 1px;
+    border-color: #cccccc;
+    font-size: 1em;
+    
+    box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+    -moz-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+    -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+}
+
+table.example th, table.manual th, table.manual-vl th {
+  padding: 0.5em 0.5em 0.5em 0.5em;
+  text-align: left;
+  padding-right: 1em;
+  color: #555555;
+  background-color: #F4F4E5;
+  
+  background-image: -webkit-gradient(linear,center top,center bottom,from(#FFFFFF), color-stop(0.3,#FFFFFF), color-stop(0.30,#FFFFFF), color-stop(0.98,#F4F4E5), to(#ECECDE));
+  background-image: -moz-linear-gradient(center top, #FFFFFF 0%, #FFFFFF 30%, #F4F4E5 98%, #ECECDE);
+  filter: progid:DXImageTransform.Microsoft.gradient(startColorstr='#FFFFFF', endColorstr='#F4F4E5');
+}
+
+table.example td, table.manual td, table.manual-vl td {
+  vertical-align:top;
+  border-width: 1px;
+  border-color: #cccccc;
+}
+
+/* header of headers */
+table th.meta {
+  text-align:center;
+  font-size: 1.2em;
+  background-color:#FFFFFF;
+}
+
+/* intermediate header */
+table th.inter {
+  text-align:left;
+  background-color:#FFFFFF;
+  background-image:none;
+  border-style:solid solid solid solid;
+  border-width: 1px;
+	border-color: #cccccc;
+}
+
+/** class for example / output tables **/
+
+table.example {
+}
+
+table.example th {
+}
+
+table.example td {
+  padding: 0.5em 0.5em 0.5em 0.5em;
+  vertical-align:top;
+}
+
+/* standard class for the manual */
+
+table.manual, table.manual-vl {
+    padding: 0.2em 0em 0.5em 0em;
+}
+
+table.manual th, table.manual-vl th {
+  margin: 0em 0em 0.3em 0em;
+}
+
+table.manual td, table.manual-vl td {
+  padding: 0.3em 0.5em 0.3em 0.5em;
+  vertical-align:top;
+  border-width: 1px;
+}
+
+table.manual td.alt, table.manual tr.alt, table.manual-vl td.alt, table.manual-vl tr.alt {
+  background-color: #F4F4E5;
+}
+
+table.manual-vl th, table.manual-vl td, table.manual-vl td.alt {
+  border-color: #cccccc;
+  border-width: 1px;
+  border-style: none solid none solid;
+}
+
+table.manual-vl th.inter {
+  border-style: solid solid solid solid;
+}
+
+h2 {
+  margin-top:2em;
+  border-style: none none solid none;
+  border-width: 1px;
+  border-color: #cccccc;
+}
+
+
+/**** old Eigen's styles ****/
+
+th {
+    /*text-align: left;
+    padding-right: 1em;*/
+    /* border: #cccccc dashed; */
+    /* border-style: dashed; */
+    /* border-width: 0 0 3px 0; */
+}
+/*
+table.noborder {
+  border-collapse: separate;
+  border-bottom-style : none;
+  border-left-style   : none;
+  border-right-style  : none;
+  border-top-style : none ;
+  border-spacing : 0px 0px;
+  margin: 4pt 0 0 0;
+  padding: 0 0 0 0;
+  
+    -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+    -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
+}
+
+table.noborder td {
+  border-bottom-style : none;
+  border-left-style   : none;
+  border-right-style  : none;
+  border-top-style : none;
+  border-spacing : 0px 0px;
+  margin: 0 0 0 0;
+  vertical-align: top;
+}
+
+table.tutorial_code {
+  width: 90%;
+    -webkit-box-shadow: 5px 5px 5px rgba(0, 0, 0, 0.15);
+    -moz-box-shadow: rgba(0, 0, 0, 0.15) 5px 5px 5px;
+}
+
+table.tutorial_code tr {
+  border: 1px dashed #888888;
+}
+*/
+
+table.tutorial_code td {
+  border-color: transparent; /* required for Firefox */
+  padding: 3pt 5pt 3pt 5pt;
+  vertical-align: top;
+}
+
+
+/* Whenever doxygen meets a '\n' or a '<BR/>', it will put
+ * the text containing the character into a <p class="starttd">.
+ * This little hack together with table.tutorial_code td.note
+ * aims at fixing this issue. */
+table.tutorial_code td.note p.starttd {
+  margin: 0px;
+  border: none;
+  padding: 0px;
+}
+/*
+div.fragment {
+  font-family: monospace, fixed;
+  font-size: 95%;
+  
+  border: none;
+  padding: 0pt;
+}
+
+pre.fragment {
+  margin: 0pt;
+  border: 1px solid #cccccc;
+  padding: 2px 5px 2px 5px;
+  
+  background-color: #f5f5f5;
+}
+*/
+
+div.eimainmenu {
+  text-align:     center;
+}
+
+/* center version number on main page */
+h3.version { 
+  text-align:     center;
+}
+
+
+td.width20em p.endtd {
+  width:  20em;
+}
diff --git a/doc/eigendoxy_footer.html.in b/doc/eigendoxy_footer.html.in
new file mode 100644
index 0000000..e70829f
--- /dev/null
+++ b/doc/eigendoxy_footer.html.in
@@ -0,0 +1,5 @@
+
+<hr class="footer"/><address class="footer"><small>
+<a href="http://www.doxygen.org/index.html"><img class="footer" src="$relpath$doxygen.png" alt="doxygen"/></a></small></address>
+</body>
+</html>
\ No newline at end of file
diff --git a/doc/eigendoxy_header.html.in b/doc/eigendoxy_header.html.in
new file mode 100644
index 0000000..a4fe47f
--- /dev/null
+++ b/doc/eigendoxy_header.html.in
@@ -0,0 +1,14 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<title>$title</title>
+<link href="$relpath$eigendoxy_tabs.css" rel="stylesheet" type="text/css"/>
+<link href="$relpath$search/search.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="$relpath$search/search.js"></script>
+<link href="$relpath$eigendoxy.css" rel="stylesheet" type="text/css"/>
+</head>
+<body onload='searchBox.OnSelectItem(0);'>
+<a name="top"></a>
+<a class="logo" href="http://eigen.tuxfamily.org/">
+<img class="logo" src="Eigen_Silly_Professor_64x64.png" width="64" height="64" alt="Eigen's silly professor"/></a>
diff --git a/doc/eigendoxy_tabs.css b/doc/eigendoxy_tabs.css
new file mode 100644
index 0000000..2192056
--- /dev/null
+++ b/doc/eigendoxy_tabs.css
@@ -0,0 +1,59 @@
+.tabs, .tabs2, .tabs3 {
+    background-image: url('tab_b.png');
+    width: 100%;
+    z-index: 101;
+    font-size: 13px;
+}
+
+.tabs2 {
+    font-size: 10px;
+}
+.tabs3 {
+    font-size: 9px;
+}
+
+.tablist {
+    margin: 0;
+    padding: 0;
+    display: table;
+}
+
+.tablist li {
+    float: left;
+    display: table-cell;
+    background-image: url('tab_b.png');
+    line-height: 36px;
+    list-style: none;
+}
+
+.tablist a {
+    display: block;
+    padding: 0 20px;
+    font-weight: bold;
+    background-image:url('tab_s.png');
+    background-repeat:no-repeat;
+    background-position:right;
+    color: #283A5D;
+    text-shadow: 0px 1px 1px rgba(255, 255, 255, 0.9);
+    text-decoration: none;
+    outline: none;
+}
+
+.tabs3 .tablist a {
+    padding: 0 10px;
+}
+
+.tablist a:hover {
+    background-image: url('tab_h.png');
+    background-repeat:repeat-x;
+    color: #fff;
+    text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0);
+    text-decoration: none;
+}
+
+.tablist li.current a {
+    background-image: url('tab_a.png');
+    background-repeat:repeat-x;
+    color: #fff;
+    text-shadow: 0px 1px 1px rgba(0, 0, 0, 1.0);
+}
diff --git a/doc/examples/.krazy b/doc/examples/.krazy
new file mode 100644
index 0000000..00b9940
--- /dev/null
+++ b/doc/examples/.krazy
@@ -0,0 +1,2 @@
+EXCLUDE copyright
+EXCLUDE license
diff --git a/doc/examples/CMakeLists.txt b/doc/examples/CMakeLists.txt
new file mode 100644
index 0000000..13ec0c1
--- /dev/null
+++ b/doc/examples/CMakeLists.txt
@@ -0,0 +1,20 @@
+file(GLOB examples_SRCS "*.cpp")
+
+add_custom_target(all_examples)
+
+foreach(example_src ${examples_SRCS})
+  get_filename_component(example ${example_src} NAME_WE)
+  add_executable(${example} ${example_src})
+  if(EIGEN_STANDARD_LIBRARIES_TO_LINK_TO)
+    target_link_libraries(${example} ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO})
+  endif()
+  get_target_property(example_executable
+                      ${example} LOCATION)
+  add_custom_command(
+    TARGET ${example}
+    POST_BUILD
+    COMMAND ${example_executable}
+    ARGS >${CMAKE_CURRENT_BINARY_DIR}/${example}.out
+  )
+  add_dependencies(all_examples ${example})
+endforeach(example_src)
diff --git a/doc/examples/DenseBase_middleCols_int.cpp b/doc/examples/DenseBase_middleCols_int.cpp
new file mode 100644
index 0000000..0ebd955
--- /dev/null
+++ b/doc/examples/DenseBase_middleCols_int.cpp
@@ -0,0 +1,15 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main(void)
+{
+    int const N = 5;
+    MatrixXi A(N,N);
+    A.setRandom();
+    cout << "A =\n" << A << '\n' << endl;
+    cout << "A(1..3,:) =\n" << A.middleCols(1,3) << endl;
+    return 0;
+}
diff --git a/doc/examples/DenseBase_middleRows_int.cpp b/doc/examples/DenseBase_middleRows_int.cpp
new file mode 100644
index 0000000..a6fe9e8
--- /dev/null
+++ b/doc/examples/DenseBase_middleRows_int.cpp
@@ -0,0 +1,15 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main(void)
+{
+    int const N = 5;
+    MatrixXi A(N,N);
+    A.setRandom();
+    cout << "A =\n" << A << '\n' << endl;
+    cout << "A(2..3,:) =\n" << A.middleRows(2,2) << endl;
+    return 0;
+}
diff --git a/doc/examples/DenseBase_template_int_middleCols.cpp b/doc/examples/DenseBase_template_int_middleCols.cpp
new file mode 100644
index 0000000..6191d79
--- /dev/null
+++ b/doc/examples/DenseBase_template_int_middleCols.cpp
@@ -0,0 +1,15 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main(void)
+{
+    int const N = 5;
+    MatrixXi A(N,N);
+    A.setRandom();
+    cout << "A =\n" << A << '\n' << endl;
+    cout << "A(:,1..3) =\n" << A.middleCols<3>(1) << endl;
+    return 0;
+}
diff --git a/doc/examples/DenseBase_template_int_middleRows.cpp b/doc/examples/DenseBase_template_int_middleRows.cpp
new file mode 100644
index 0000000..7e8b657
--- /dev/null
+++ b/doc/examples/DenseBase_template_int_middleRows.cpp
@@ -0,0 +1,15 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main(void)
+{
+    int const N = 5;
+    MatrixXi A(N,N);
+    A.setRandom();
+    cout << "A =\n" << A << '\n' << endl;
+    cout << "A(1..3,:) =\n" << A.middleRows<3>(1) << endl;
+    return 0;
+}
diff --git a/doc/examples/MatrixBase_cwise_const.cpp b/doc/examples/MatrixBase_cwise_const.cpp
new file mode 100644
index 0000000..23700e0
--- /dev/null
+++ b/doc/examples/MatrixBase_cwise_const.cpp
@@ -0,0 +1,18 @@
+#define EIGEN2_SUPPORT
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  Matrix3i m = Matrix3i::Random();
+  cout << "Here is the matrix m:" << endl << m << endl;
+  Matrix3i n = Matrix3i::Random();
+  cout << "And here is the matrix n:" << endl << n << endl;
+  cout << "The coefficient-wise product of m and n is:" << endl;
+  cout << m.cwise() * n << endl;
+  cout << "Taking the cube of the coefficients of m yields:" << endl;
+  cout << m.cwise().pow(3) << endl;
+}
diff --git a/doc/examples/QuickStart_example.cpp b/doc/examples/QuickStart_example.cpp
new file mode 100644
index 0000000..7238c0c
--- /dev/null
+++ b/doc/examples/QuickStart_example.cpp
@@ -0,0 +1,14 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using Eigen::MatrixXd;
+
+int main()
+{
+  MatrixXd m(2,2);
+  m(0,0) = 3;
+  m(1,0) = 2.5;
+  m(0,1) = -1;
+  m(1,1) = m(1,0) + m(0,1);
+  std::cout << m << std::endl;
+}
diff --git a/doc/examples/QuickStart_example2_dynamic.cpp b/doc/examples/QuickStart_example2_dynamic.cpp
new file mode 100644
index 0000000..672ac82
--- /dev/null
+++ b/doc/examples/QuickStart_example2_dynamic.cpp
@@ -0,0 +1,15 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  MatrixXf m = MatrixXf::Random(3,3);
+  m = (m + MatrixXf::Constant(3,3,1.2)) * 50;
+  cout << "m =" << endl << m << endl;
+  VectorXf v(3);
+  v << 1, 2, 3;
+  cout << "m * v =" << endl << m * v << endl;
+}
diff --git a/doc/examples/QuickStart_example2_fixed.cpp b/doc/examples/QuickStart_example2_fixed.cpp
new file mode 100644
index 0000000..edf3268
--- /dev/null
+++ b/doc/examples/QuickStart_example2_fixed.cpp
@@ -0,0 +1,15 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  Matrix3f m = Matrix3f::Random();
+  m = (m + Matrix3f::Constant(1.2)) * 50;
+  cout << "m =" << endl << m << endl;
+  Vector3f v(1,2,3);
+  
+  cout << "m * v =" << endl << m * v << endl;
+}
diff --git a/doc/examples/TemplateKeyword_flexible.cpp b/doc/examples/TemplateKeyword_flexible.cpp
new file mode 100644
index 0000000..9d85292
--- /dev/null
+++ b/doc/examples/TemplateKeyword_flexible.cpp
@@ -0,0 +1,22 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+
+template <typename Derived1, typename Derived2>
+void copyUpperTriangularPart(MatrixBase<Derived1>& dst, const MatrixBase<Derived2>& src)
+{
+  /* Note the 'template' keywords in the following line! */
+  dst.template triangularView<Upper>() = src.template triangularView<Upper>();
+}
+
+int main()
+{
+  MatrixXi m1 = MatrixXi::Ones(5,5);
+  MatrixXi m2 = MatrixXi::Random(4,4);
+  std::cout << "m2 before copy:" << std::endl;
+  std::cout << m2 << std::endl << std::endl;
+  copyUpperTriangularPart(m2, m1.topLeftCorner(4,4));
+  std::cout << "m2 after copy:" << std::endl;
+  std::cout << m2 << std::endl << std::endl;
+}
diff --git a/doc/examples/TemplateKeyword_simple.cpp b/doc/examples/TemplateKeyword_simple.cpp
new file mode 100644
index 0000000..6998c17
--- /dev/null
+++ b/doc/examples/TemplateKeyword_simple.cpp
@@ -0,0 +1,20 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+
+void copyUpperTriangularPart(MatrixXf& dst, const MatrixXf& src)
+{
+  dst.triangularView<Upper>() = src.triangularView<Upper>();
+}
+
+int main()
+{
+  MatrixXf m1 = MatrixXf::Ones(4,4);
+  MatrixXf m2 = MatrixXf::Random(4,4);
+  std::cout << "m2 before copy:" << std::endl;
+  std::cout << m2 << std::endl << std::endl;
+  copyUpperTriangularPart(m2, m1);
+  std::cout << "m2 after copy:" << std::endl;
+  std::cout << m2 << std::endl << std::endl;
+}
diff --git a/doc/examples/TutorialLinAlgComputeTwice.cpp b/doc/examples/TutorialLinAlgComputeTwice.cpp
new file mode 100644
index 0000000..06ba646
--- /dev/null
+++ b/doc/examples/TutorialLinAlgComputeTwice.cpp
@@ -0,0 +1,23 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix2f A, b;
+   LLT<Matrix2f> llt;
+   A << 2, -1, -1, 3;
+   b << 1, 2, 3, 1;
+   cout << "Here is the matrix A:\n" << A << endl;
+   cout << "Here is the right hand side b:\n" << b << endl;
+   cout << "Computing LLT decomposition..." << endl;
+   llt.compute(A);
+   cout << "The solution is:\n" << llt.solve(b) << endl;
+   A(1,1)++;
+   cout << "The matrix A is now:\n" << A << endl;
+   cout << "Computing LLT decomposition..." << endl;
+   llt.compute(A);
+   cout << "The solution is now:\n" << llt.solve(b) << endl;
+}
diff --git a/doc/examples/TutorialLinAlgExComputeSolveError.cpp b/doc/examples/TutorialLinAlgExComputeSolveError.cpp
new file mode 100644
index 0000000..f362fb7
--- /dev/null
+++ b/doc/examples/TutorialLinAlgExComputeSolveError.cpp
@@ -0,0 +1,14 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   MatrixXd A = MatrixXd::Random(100,100);
+   MatrixXd b = MatrixXd::Random(100,50);
+   MatrixXd x = A.fullPivLu().solve(b);
+   double relative_error = (A*x - b).norm() / b.norm(); // norm() is L2 norm
+   cout << "The relative error is:\n" << relative_error << endl;
+}
diff --git a/doc/examples/TutorialLinAlgExSolveColPivHouseholderQR.cpp b/doc/examples/TutorialLinAlgExSolveColPivHouseholderQR.cpp
new file mode 100644
index 0000000..3a99a94
--- /dev/null
+++ b/doc/examples/TutorialLinAlgExSolveColPivHouseholderQR.cpp
@@ -0,0 +1,17 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix3f A;
+   Vector3f b;
+   A << 1,2,3,  4,5,6,  7,8,10;
+   b << 3, 3, 4;
+   cout << "Here is the matrix A:\n" << A << endl;
+   cout << "Here is the vector b:\n" << b << endl;
+   Vector3f x = A.colPivHouseholderQr().solve(b);
+   cout << "The solution is:\n" << x << endl;
+}
diff --git a/doc/examples/TutorialLinAlgExSolveLDLT.cpp b/doc/examples/TutorialLinAlgExSolveLDLT.cpp
new file mode 100644
index 0000000..f8beacd
--- /dev/null
+++ b/doc/examples/TutorialLinAlgExSolveLDLT.cpp
@@ -0,0 +1,16 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix2f A, b;
+   A << 2, -1, -1, 3;
+   b << 1, 2, 3, 1;
+   cout << "Here is the matrix A:\n" << A << endl;
+   cout << "Here is the right hand side b:\n" << b << endl;
+   Matrix2f x = A.ldlt().solve(b);
+   cout << "The solution is:\n" << x << endl;
+}
diff --git a/doc/examples/TutorialLinAlgInverseDeterminant.cpp b/doc/examples/TutorialLinAlgInverseDeterminant.cpp
new file mode 100644
index 0000000..43970ff
--- /dev/null
+++ b/doc/examples/TutorialLinAlgInverseDeterminant.cpp
@@ -0,0 +1,16 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix3f A;
+   A << 1, 2, 1,
+        2, 1, 0,
+        -1, 1, 2;
+   cout << "Here is the matrix A:\n" << A << endl;
+   cout << "The determinant of A is " << A.determinant() << endl;
+   cout << "The inverse of A is:\n" << A.inverse() << endl;
+}
\ No newline at end of file
diff --git a/doc/examples/TutorialLinAlgRankRevealing.cpp b/doc/examples/TutorialLinAlgRankRevealing.cpp
new file mode 100644
index 0000000..c516507
--- /dev/null
+++ b/doc/examples/TutorialLinAlgRankRevealing.cpp
@@ -0,0 +1,20 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix3f A;
+   A << 1, 2, 5,
+        2, 1, 4,
+        3, 0, 3;
+   cout << "Here is the matrix A:\n" << A << endl;
+   FullPivLU<Matrix3f> lu_decomp(A);
+   cout << "The rank of A is " << lu_decomp.rank() << endl;
+   cout << "Here is a matrix whose columns form a basis of the null-space of A:\n"
+        << lu_decomp.kernel() << endl;
+   cout << "Here is a matrix whose columns form a basis of the column-space of A:\n"
+        << lu_decomp.image(A) << endl; // yes, have to pass the original A
+}
diff --git a/doc/examples/TutorialLinAlgSVDSolve.cpp b/doc/examples/TutorialLinAlgSVDSolve.cpp
new file mode 100644
index 0000000..9fbc031
--- /dev/null
+++ b/doc/examples/TutorialLinAlgSVDSolve.cpp
@@ -0,0 +1,15 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   MatrixXf A = MatrixXf::Random(3, 2);
+   cout << "Here is the matrix A:\n" << A << endl;
+   VectorXf b = VectorXf::Random(3);
+   cout << "Here is the right hand side b:\n" << b << endl;
+   cout << "The least-squares solution is:\n"
+        << A.jacobiSvd(ComputeThinU | ComputeThinV).solve(b) << endl;
+}
diff --git a/doc/examples/TutorialLinAlgSelfAdjointEigenSolver.cpp b/doc/examples/TutorialLinAlgSelfAdjointEigenSolver.cpp
new file mode 100644
index 0000000..8d1d1ed
--- /dev/null
+++ b/doc/examples/TutorialLinAlgSelfAdjointEigenSolver.cpp
@@ -0,0 +1,18 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix2f A;
+   A << 1, 2, 2, 3;
+   cout << "Here is the matrix A:\n" << A << endl;
+   SelfAdjointEigenSolver<Matrix2f> eigensolver(A);
+   if (eigensolver.info() != Success) abort();
+   cout << "The eigenvalues of A are:\n" << eigensolver.eigenvalues() << endl;
+   cout << "Here's a matrix whose columns are eigenvectors of A \n"
+        << "corresponding to these eigenvalues:\n"
+        << eigensolver.eigenvectors() << endl;
+}
diff --git a/doc/examples/TutorialLinAlgSetThreshold.cpp b/doc/examples/TutorialLinAlgSetThreshold.cpp
new file mode 100644
index 0000000..3956b13
--- /dev/null
+++ b/doc/examples/TutorialLinAlgSetThreshold.cpp
@@ -0,0 +1,16 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix2d A;
+   A << 2, 1,
+        2, 0.9999999999;
+   FullPivLU<Matrix2d> lu(A);
+   cout << "By default, the rank of A is found to be " << lu.rank() << endl;
+   lu.setThreshold(1e-5);
+   cout << "With threshold 1e-5, the rank of A is found to be " << lu.rank() << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_accessors.cpp b/doc/examples/Tutorial_ArrayClass_accessors.cpp
new file mode 100644
index 0000000..dc720ff
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_accessors.cpp
@@ -0,0 +1,24 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  ArrayXXf  m(2,2);
+  
+  // assign some values coefficient by coefficient
+  m(0,0) = 1.0; m(0,1) = 2.0;
+  m(1,0) = 3.0; m(1,1) = m(0,1) + m(1,0);
+  
+  // print values to standard output
+  cout << m << endl << endl;
+ 
+  // using the comma-initializer is also allowed
+  m << 1.0,2.0,
+       3.0,4.0;
+     
+  // print values to standard output
+  cout << m << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_addition.cpp b/doc/examples/Tutorial_ArrayClass_addition.cpp
new file mode 100644
index 0000000..480ffb0
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_addition.cpp
@@ -0,0 +1,23 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  ArrayXXf a(3,3);
+  ArrayXXf b(3,3);
+  a << 1,2,3,
+       4,5,6,
+       7,8,9;
+  b << 1,2,3,
+       1,2,3,
+       1,2,3;
+       
+  // Adding two arrays
+  cout << "a + b = " << endl << a + b << endl << endl;
+
+  // Subtracting a scalar from an array
+  cout << "a - 2 = " << endl << a - 2 << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_cwise_other.cpp b/doc/examples/Tutorial_ArrayClass_cwise_other.cpp
new file mode 100644
index 0000000..d9046c6
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_cwise_other.cpp
@@ -0,0 +1,19 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  ArrayXf a = ArrayXf::Random(5);
+  a *= 2;
+  cout << "a =" << endl 
+       << a << endl;
+  cout << "a.abs() =" << endl 
+       << a.abs() << endl;
+  cout << "a.abs().sqrt() =" << endl 
+       << a.abs().sqrt() << endl;
+  cout << "a.min(a.abs().sqrt()) =" << endl 
+       << a.min(a.abs().sqrt()) << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_interop.cpp b/doc/examples/Tutorial_ArrayClass_interop.cpp
new file mode 100644
index 0000000..371f070
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_interop.cpp
@@ -0,0 +1,22 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  MatrixXf m(2,2);
+  MatrixXf n(2,2);
+  MatrixXf result(2,2);
+
+  m << 1,2,
+       3,4;
+  n << 5,6,
+       7,8;
+  
+  result = (m.array() + 4).matrix() * m;
+  cout << "-- Combination 1: --" << endl << result << endl << endl;
+  result = (m.array() * n.array()).matrix() * m;
+  cout << "-- Combination 2: --" << endl << result << endl << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_interop_matrix.cpp b/doc/examples/Tutorial_ArrayClass_interop_matrix.cpp
new file mode 100644
index 0000000..1014275
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_interop_matrix.cpp
@@ -0,0 +1,26 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  MatrixXf m(2,2);
+  MatrixXf n(2,2);
+  MatrixXf result(2,2);
+
+  m << 1,2,
+       3,4;
+  n << 5,6,
+       7,8;
+
+  result = m * n;
+  cout << "-- Matrix m*n: --" << endl << result << endl << endl;
+  result = m.array() * n.array();
+  cout << "-- Array m*n: --" << endl << result << endl << endl;
+  result = m.cwiseProduct(n);
+  cout << "-- With cwiseProduct: --" << endl << result << endl << endl;
+  result = m.array() + 4;
+  cout << "-- Array m + 4: --" << endl << result << endl << endl;
+}
diff --git a/doc/examples/Tutorial_ArrayClass_mult.cpp b/doc/examples/Tutorial_ArrayClass_mult.cpp
new file mode 100644
index 0000000..6cb439f
--- /dev/null
+++ b/doc/examples/Tutorial_ArrayClass_mult.cpp
@@ -0,0 +1,16 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main()
+{
+  ArrayXXf a(2,2);
+  ArrayXXf b(2,2);
+  a << 1,2,
+       3,4;
+  b << 5,6,
+       7,8;
+  cout << "a * b = " << endl << a * b << endl;
+}
diff --git a/doc/examples/Tutorial_BlockOperations_block_assignment.cpp b/doc/examples/Tutorial_BlockOperations_block_assignment.cpp
new file mode 100644
index 0000000..76f49f2
--- /dev/null
+++ b/doc/examples/Tutorial_BlockOperations_block_assignment.cpp
@@ -0,0 +1,18 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+  Array22f m;
+  m << 1,2,
+       3,4;
+  Array44f a = Array44f::Constant(0.6);
+  cout << "Here is the array a:" << endl << a << endl << endl;
+  a.block<2,2>(1,1) = m;
+  cout << "Here is now a with m copied into its central 2x2 block:" << endl << a << endl << endl;
+  a.block(0,0,2,3) = a.block(2,1,2,3);
+  cout << "Here is now a with bottom-right 2x3 block copied into top-left 2x3 block:" << endl << a << endl << endl;
+}
diff --git a/doc/examples/Tutorial_BlockOperations_colrow.cpp b/doc/examples/Tutorial_BlockOperations_colrow.cpp
new file mode 100644
index 0000000..2e7eb00
--- /dev/null
+++ b/doc/examples/Tutorial_BlockOperations_colrow.cpp
@@ -0,0 +1,17 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+
+int main()
+{
+  Eigen::MatrixXf m(3,3);
+  m << 1,2,3,
+       4,5,6,
+       7,8,9;
+  cout << "Here is the matrix m:" << endl << m << endl;
+  cout << "2nd Row: " << m.row(1) << endl;
+  m.col(2) += 3 * m.col(0);
+  cout << "After adding 3 times the first column into the third column, the matrix m is:\n";
+  cout << m << endl;
+}
diff --git a/doc/examples/Tutorial_BlockOperations_corner.cpp b/doc/examples/Tutorial_BlockOperations_corner.cpp
new file mode 100644
index 0000000..3a31507
--- /dev/null
+++ b/doc/examples/Tutorial_BlockOperations_corner.cpp
@@ -0,0 +1,17 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+
+int main()
+{
+  Eigen::Matrix4f m;
+  m << 1, 2, 3, 4,
+       5, 6, 7, 8,
+       9, 10,11,12,
+       13,14,15,16;
+  cout << "m.leftCols(2) =" << endl << m.leftCols(2) << endl << endl;
+  cout << "m.bottomRows<2>() =" << endl << m.bottomRows<2>() << endl << endl;
+  m.topLeftCorner(1,3) = m.bottomRightCorner(3,1).transpose();
+  cout << "After assignment, m = " << endl << m << endl;
+}
diff --git a/doc/examples/Tutorial_BlockOperations_print_block.cpp b/doc/examples/Tutorial_BlockOperations_print_block.cpp
new file mode 100644
index 0000000..edea4ae
--- /dev/null
+++ b/doc/examples/Tutorial_BlockOperations_print_block.cpp
@@ -0,0 +1,20 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+
+int main()
+{
+  Eigen::MatrixXf m(4,4);
+  m <<  1, 2, 3, 4,
+        5, 6, 7, 8,
+        9,10,11,12,
+       13,14,15,16;
+  cout << "Block in the middle" << endl;
+  cout << m.block<2,2>(1,1) << endl << endl;
+  for (int i = 1; i <= 3; ++i)
+  {
+    cout << "Block of size " << i << "x" << i << endl;
+    cout << m.block(0,0,i,i) << endl << endl;
+  }
+}
diff --git a/doc/examples/Tutorial_BlockOperations_vector.cpp b/doc/examples/Tutorial_BlockOperations_vector.cpp
new file mode 100644
index 0000000..4a0b023
--- /dev/null
+++ b/doc/examples/Tutorial_BlockOperations_vector.cpp
@@ -0,0 +1,14 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+
+int main()
+{
+  Eigen::ArrayXf v(6);
+  v << 1, 2, 3, 4, 5, 6;
+  cout << "v.head(3) =" << endl << v.head(3) << endl << endl;
+  cout << "v.tail<3>() = " << endl << v.tail<3>() << endl << endl;
+  v.segment(1,4) *= 2;
+  cout << "after 'v.segment(1,4) *= 2', v =" << endl << v << endl;
+}
diff --git a/doc/examples/Tutorial_PartialLU_solve.cpp b/doc/examples/Tutorial_PartialLU_solve.cpp
new file mode 100644
index 0000000..a560879
--- /dev/null
+++ b/doc/examples/Tutorial_PartialLU_solve.cpp
@@ -0,0 +1,18 @@
+#include <Eigen/Core>
+#include <Eigen/LU>
+#include <iostream>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+   Matrix3f A;
+   Vector3f b;
+   A << 1,2,3,  4,5,6,  7,8,10;
+   b << 3, 3, 4;
+   cout << "Here is the matrix A:" << endl << A << endl;
+   cout << "Here is the vector b:" << endl << b << endl;
+   Vector3f x = A.lu().solve(b);
+   cout << "The solution is:" << endl << x << endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
new file mode 100644
index 0000000..334b4d8
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
@@ -0,0 +1,24 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+  Eigen::MatrixXf m(2,4);
+  Eigen::VectorXf v(2);
+  
+  m << 1, 23, 6, 9,
+       3, 11, 7, 2;
+       
+  v << 2,
+       3;
+
+  MatrixXf::Index index;
+  // find nearest neighbour
+  (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
+
+  cout << "Nearest neighbour is column " << index << ":" << endl;
+  cout << m.col(index) << endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
new file mode 100644
index 0000000..e6c87c6
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
@@ -0,0 +1,21 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+int main()
+{
+  Eigen::MatrixXf mat(2,4);
+  Eigen::VectorXf v(2);
+  
+  mat << 1, 2, 6, 9,
+         3, 1, 7, 2;
+         
+  v << 0,
+       1;
+       
+  // add v to each column of mat
+  mat.colwise() += v;
+  
+  std::cout << "Broadcasting result: " << std::endl;
+  std::cout << mat << std::endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
new file mode 100644
index 0000000..d87c96a
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
@@ -0,0 +1,20 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+int main()
+{
+  Eigen::MatrixXf mat(2,4);
+  Eigen::VectorXf v(4);
+  
+  mat << 1, 2, 6, 9,
+         3, 1, 7, 2;
+         
+  v << 0,1,2,3;
+       
+  // add v to each row of mat
+  mat.rowwise() += v.transpose();
+  
+  std::cout << "Broadcasting result: " << std::endl;
+  std::cout << mat << std::endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
new file mode 100644
index 0000000..df68256
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
@@ -0,0 +1,13 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+int main()
+{
+  Eigen::MatrixXf mat(2,4);
+  mat << 1, 2, 6, 9,
+         3, 1, 7, 2;
+  
+  std::cout << "Column's maximum: " << std::endl
+   << mat.colwise().maxCoeff() << std::endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
new file mode 100644
index 0000000..049c747
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
@@ -0,0 +1,20 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+int main()
+{
+  MatrixXf mat(2,4);
+  mat << 1, 2, 6, 9,
+         3, 1, 7, 2;
+  
+  MatrixXf::Index   maxIndex;
+  float maxNorm = mat.colwise().sum().maxCoeff(&maxIndex);
+  
+  std::cout << "Maximum sum at position " << maxIndex << std::endl;
+
+  std::cout << "The corresponding vector is: " << std::endl;
+  std::cout << mat.col( maxIndex ) << std::endl;
+  std::cout << "And its sum is: " << maxNorm << std::endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
new file mode 100644
index 0000000..0cca37f
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
@@ -0,0 +1,21 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+  ArrayXXf a(2,2);
+  
+  a << 1,2,
+       3,4;
+
+  cout << "(a > 0).all()   = " << (a > 0).all() << endl;
+  cout << "(a > 0).any()   = " << (a > 0).any() << endl;
+  cout << "(a > 0).count() = " << (a > 0).count() << endl;
+  cout << endl;
+  cout << "(a > 2).all()   = " << (a > 2).all() << endl;
+  cout << "(a > 2).any()   = " << (a > 2).any() << endl;
+  cout << "(a > 2).count() = " << (a > 2).count() << endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
new file mode 100644
index 0000000..740439f
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
@@ -0,0 +1,28 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+  VectorXf v(2);
+  MatrixXf m(2,2), n(2,2);
+  
+  v << -1,
+       2;
+  
+  m << 1,-2,
+       -3,4;
+
+  cout << "v.squaredNorm() = " << v.squaredNorm() << endl;
+  cout << "v.norm() = " << v.norm() << endl;
+  cout << "v.lpNorm<1>() = " << v.lpNorm<1>() << endl;
+  cout << "v.lpNorm<Infinity>() = " << v.lpNorm<Infinity>() << endl;
+
+  cout << endl;
+  cout << "m.squaredNorm() = " << m.squaredNorm() << endl;
+  cout << "m.norm() = " << m.norm() << endl;
+  cout << "m.lpNorm<1>() = " << m.lpNorm<1>() << endl;
+  cout << "m.lpNorm<Infinity>() = " << m.lpNorm<Infinity>() << endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
new file mode 100644
index 0000000..80427c9
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
@@ -0,0 +1,13 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+int main()
+{
+  Eigen::MatrixXf mat(2,4);
+  mat << 1, 2, 6, 9,
+         3, 1, 7, 2;
+  
+  std::cout << "Row's maximum: " << std::endl
+   << mat.rowwise().maxCoeff() << std::endl;
+}
diff --git a/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
new file mode 100644
index 0000000..b54e9aa
--- /dev/null
+++ b/doc/examples/Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
@@ -0,0 +1,26 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+using namespace Eigen;
+
+int main()
+{
+  Eigen::MatrixXf m(2,2);
+  
+  m << 1, 2,
+       3, 4;
+
+  // get location of maximum
+  MatrixXf::Index maxRow, maxCol;
+  float max = m.maxCoeff(&maxRow, &maxCol);
+
+  // get location of minimum
+  MatrixXf::Index minRow, minCol;
+  float min = m.minCoeff(&minRow, &minCol);
+
+  cout << "Max: " << max <<  ", at: " <<
+     maxRow << "," << maxCol << endl;
+  cout << "Min: " << min << ", at: " <<
+     minRow << "," << minCol << endl;
+}
diff --git a/doc/examples/Tutorial_simple_example_dynamic_size.cpp b/doc/examples/Tutorial_simple_example_dynamic_size.cpp
new file mode 100644
index 0000000..0f0280e
--- /dev/null
+++ b/doc/examples/Tutorial_simple_example_dynamic_size.cpp
@@ -0,0 +1,22 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+
+int main()
+{
+  for (int size=1; size<=4; ++size)
+  {
+    MatrixXi m(size,size+1);         // a (size)x(size+1)-matrix of int's
+    for (int j=0; j<m.cols(); ++j)   // loop over columns
+      for (int i=0; i<m.rows(); ++i) // loop over rows
+        m(i,j) = i+j*m.rows();       // to access matrix coefficients,
+                                     // use operator()(int,int)
+    std::cout << m << "\n\n";
+  }
+
+  VectorXf v(4); // a vector of 4 float's
+  // to access vector coefficients, use either operator () or operator []
+  v[0] = 1; v[1] = 2; v(2) = 3; v(3) = 4;
+  std::cout << "\nv:\n" << v << std::endl;
+}
diff --git a/doc/examples/Tutorial_simple_example_fixed_size.cpp b/doc/examples/Tutorial_simple_example_fixed_size.cpp
new file mode 100644
index 0000000..bc4f95d
--- /dev/null
+++ b/doc/examples/Tutorial_simple_example_fixed_size.cpp
@@ -0,0 +1,15 @@
+#include <Eigen/Core>
+#include <iostream>
+
+using namespace Eigen;
+
+int main()
+{
+  Matrix3f m3;
+  m3 << 1, 2, 3, 4, 5, 6, 7, 8, 9;
+  Matrix4f m4 = Matrix4f::Identity();
+  Vector4i v4(1, 2, 3, 4);
+
+  std::cout << "m3\n" << m3 << "\nm4:\n"
+    << m4 << "\nv4:\n" << v4 << std::endl;
+}
diff --git a/doc/examples/class_Block.cpp b/doc/examples/class_Block.cpp
new file mode 100644
index 0000000..ace719a
--- /dev/null
+++ b/doc/examples/class_Block.cpp
@@ -0,0 +1,27 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+template<typename Derived>
+Eigen::Block<Derived>
+topLeftCorner(MatrixBase<Derived>& m, int rows, int cols)
+{
+  return Eigen::Block<Derived>(m.derived(), 0, 0, rows, cols);
+}
+
+template<typename Derived>
+const Eigen::Block<const Derived>
+topLeftCorner(const MatrixBase<Derived>& m, int rows, int cols)
+{
+  return Eigen::Block<const Derived>(m.derived(), 0, 0, rows, cols);
+}
+
+int main(int, char**)
+{
+  Matrix4d m = Matrix4d::Identity();
+  cout << topLeftCorner(4*m, 2, 3) << endl; // calls the const version
+  topLeftCorner(m, 2, 3) *= 5;              // calls the non-const version
+  cout << "Now the matrix m is:" << endl << m << endl;
+  return 0;
+}
diff --git a/doc/examples/class_CwiseBinaryOp.cpp b/doc/examples/class_CwiseBinaryOp.cpp
new file mode 100644
index 0000000..682af46
--- /dev/null
+++ b/doc/examples/class_CwiseBinaryOp.cpp
@@ -0,0 +1,18 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+// define a custom template binary functor
+template<typename Scalar> struct MakeComplexOp {
+  EIGEN_EMPTY_STRUCT_CTOR(MakeComplexOp)
+  typedef complex<Scalar> result_type;
+  complex<Scalar> operator()(const Scalar& a, const Scalar& b) const { return complex<Scalar>(a,b); }
+};
+
+int main(int, char**)
+{
+  Matrix4d m1 = Matrix4d::Random(), m2 = Matrix4d::Random();
+  cout << m1.binaryExpr(m2, MakeComplexOp<double>()) << endl;
+  return 0;
+}
diff --git a/doc/examples/class_CwiseUnaryOp.cpp b/doc/examples/class_CwiseUnaryOp.cpp
new file mode 100644
index 0000000..a5fcc15
--- /dev/null
+++ b/doc/examples/class_CwiseUnaryOp.cpp
@@ -0,0 +1,19 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+// define a custom template unary functor
+template<typename Scalar>
+struct CwiseClampOp {
+  CwiseClampOp(const Scalar& inf, const Scalar& sup) : m_inf(inf), m_sup(sup) {}
+  const Scalar operator()(const Scalar& x) const { return x<m_inf ? m_inf : (x>m_sup ? m_sup : x); }
+  Scalar m_inf, m_sup;
+};
+
+int main(int, char**)
+{
+  Matrix4d m1 = Matrix4d::Random();
+  cout << m1 << endl << "becomes: " << endl << m1.unaryExpr(CwiseClampOp<double>(-0.5,0.5)) << endl;
+  return 0;
+}
diff --git a/doc/examples/class_CwiseUnaryOp_ptrfun.cpp b/doc/examples/class_CwiseUnaryOp_ptrfun.cpp
new file mode 100644
index 0000000..36706d8
--- /dev/null
+++ b/doc/examples/class_CwiseUnaryOp_ptrfun.cpp
@@ -0,0 +1,20 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+// define function to be applied coefficient-wise
+double ramp(double x)
+{
+  if (x > 0)
+    return x;
+  else 
+    return 0;
+}
+
+int main(int, char**)
+{
+  Matrix4d m1 = Matrix4d::Random();
+  cout << m1 << endl << "becomes: " << endl << m1.unaryExpr(ptr_fun(ramp)) << endl;
+  return 0;
+}
diff --git a/doc/examples/class_FixedBlock.cpp b/doc/examples/class_FixedBlock.cpp
new file mode 100644
index 0000000..9978b32
--- /dev/null
+++ b/doc/examples/class_FixedBlock.cpp
@@ -0,0 +1,27 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+template<typename Derived>
+Eigen::Block<Derived, 2, 2>
+topLeft2x2Corner(MatrixBase<Derived>& m)
+{
+  return Eigen::Block<Derived, 2, 2>(m.derived(), 0, 0);
+}
+
+template<typename Derived>
+const Eigen::Block<const Derived, 2, 2>
+topLeft2x2Corner(const MatrixBase<Derived>& m)
+{
+  return Eigen::Block<const Derived, 2, 2>(m.derived(), 0, 0);
+}
+
+int main(int, char**)
+{
+  Matrix3d m = Matrix3d::Identity();
+  cout << topLeft2x2Corner(4*m) << endl; // calls the const version
+  topLeft2x2Corner(m) *= 2;              // calls the non-const version
+  cout << "Now the matrix m is:" << endl << m << endl;
+  return 0;
+}
diff --git a/doc/examples/class_FixedVectorBlock.cpp b/doc/examples/class_FixedVectorBlock.cpp
new file mode 100644
index 0000000..c88c9fb
--- /dev/null
+++ b/doc/examples/class_FixedVectorBlock.cpp
@@ -0,0 +1,27 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+template<typename Derived>
+Eigen::VectorBlock<Derived, 2>
+firstTwo(MatrixBase<Derived>& v)
+{
+  return Eigen::VectorBlock<Derived, 2>(v.derived(), 0);
+}
+
+template<typename Derived>
+const Eigen::VectorBlock<const Derived, 2>
+firstTwo(const MatrixBase<Derived>& v)
+{
+  return Eigen::VectorBlock<const Derived, 2>(v.derived(), 0);
+}
+
+int main(int, char**)
+{
+  Matrix<int,1,6> v; v << 1,2,3,4,5,6;
+  cout << firstTwo(4*v) << endl; // calls the const version
+  firstTwo(v) *= 2;              // calls the non-const version
+  cout << "Now the vector v is:" << endl << v << endl;
+  return 0;
+}
diff --git a/doc/examples/class_VectorBlock.cpp b/doc/examples/class_VectorBlock.cpp
new file mode 100644
index 0000000..dc213df
--- /dev/null
+++ b/doc/examples/class_VectorBlock.cpp
@@ -0,0 +1,27 @@
+#include <Eigen/Core>
+#include <iostream>
+using namespace Eigen;
+using namespace std;
+
+template<typename Derived>
+Eigen::VectorBlock<Derived>
+segmentFromRange(MatrixBase<Derived>& v, int start, int end)
+{
+  return Eigen::VectorBlock<Derived>(v.derived(), start, end-start);
+}
+
+template<typename Derived>
+const Eigen::VectorBlock<const Derived>
+segmentFromRange(const MatrixBase<Derived>& v, int start, int end)
+{
+  return Eigen::VectorBlock<const Derived>(v.derived(), start, end-start);
+}
+
+int main(int, char**)
+{
+  Matrix<int,1,6> v; v << 1,2,3,4,5,6;
+  cout << segmentFromRange(2*v, 2, 4) << endl; // calls the const version
+  segmentFromRange(v, 1, 3) *= 5;              // calls the non-const version
+  cout << "Now the vector v is:" << endl << v << endl;
+  return 0;
+}
diff --git a/doc/examples/function_taking_eigenbase.cpp b/doc/examples/function_taking_eigenbase.cpp
new file mode 100644
index 0000000..49d94b3
--- /dev/null
+++ b/doc/examples/function_taking_eigenbase.cpp
@@ -0,0 +1,18 @@
+#include <iostream>
+#include <Eigen/Core>
+using namespace Eigen;
+
+template <typename Derived>
+void print_size(const EigenBase<Derived>& b)
+{
+  std::cout << "size (rows, cols): " << b.size() << " (" << b.rows()
+            << ", " << b.cols() << ")" << std::endl;
+}
+
+int main()
+{
+    Vector3f v;
+    print_size(v);
+    // v.asDiagonal() returns a 3x3 diagonal matrix pseudo-expression
+    print_size(v.asDiagonal());
+}
diff --git a/doc/examples/tut_arithmetic_add_sub.cpp b/doc/examples/tut_arithmetic_add_sub.cpp
new file mode 100644
index 0000000..e97477b
--- /dev/null
+++ b/doc/examples/tut_arithmetic_add_sub.cpp
@@ -0,0 +1,22 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+
+int main()
+{
+  Matrix2d a;
+  a << 1, 2,
+       3, 4;
+  MatrixXd b(2,2);
+  b << 2, 3,
+       1, 4;
+  std::cout << "a + b =\n" << a + b << std::endl;
+  std::cout << "a - b =\n" << a - b << std::endl;
+  std::cout << "Doing a += b;" << std::endl;
+  a += b;
+  std::cout << "Now a =\n" << a << std::endl;
+  Vector3d v(1,2,3);
+  Vector3d w(1,0,0);
+  std::cout << "-v + w - v =\n" << -v + w - v << std::endl;
+}
diff --git a/doc/examples/tut_arithmetic_dot_cross.cpp b/doc/examples/tut_arithmetic_dot_cross.cpp
new file mode 100644
index 0000000..631c9a5
--- /dev/null
+++ b/doc/examples/tut_arithmetic_dot_cross.cpp
@@ -0,0 +1,15 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+using namespace std;
+int main()
+{
+  Vector3d v(1,2,3);
+  Vector3d w(0,1,2);
+
+  cout << "Dot product: " << v.dot(w) << endl;
+  double dp = v.adjoint()*w; // automatic conversion of the inner product to a scalar
+  cout << "Dot product via a matrix product: " << dp << endl;
+  cout << "Cross product:\n" << v.cross(w) << endl;
+}
diff --git a/doc/examples/tut_arithmetic_matrix_mul.cpp b/doc/examples/tut_arithmetic_matrix_mul.cpp
new file mode 100644
index 0000000..f213902
--- /dev/null
+++ b/doc/examples/tut_arithmetic_matrix_mul.cpp
@@ -0,0 +1,19 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+int main()
+{
+  Matrix2d mat;
+  mat << 1, 2,
+         3, 4;
+  Vector2d u(-1,1), v(2,0);
+  std::cout << "Here is mat*mat:\n" << mat*mat << std::endl;
+  std::cout << "Here is mat*u:\n" << mat*u << std::endl;
+  std::cout << "Here is u^T*mat:\n" << u.transpose()*mat << std::endl;
+  std::cout << "Here is u^T*v:\n" << u.transpose()*v << std::endl;
+  std::cout << "Here is u*v^T:\n" << u*v.transpose() << std::endl;
+  std::cout << "Let's multiply mat by itself" << std::endl;
+  mat = mat*mat;
+  std::cout << "Now mat is:\n" << mat << std::endl;
+}
diff --git a/doc/examples/tut_arithmetic_redux_basic.cpp b/doc/examples/tut_arithmetic_redux_basic.cpp
new file mode 100644
index 0000000..5632fb5
--- /dev/null
+++ b/doc/examples/tut_arithmetic_redux_basic.cpp
@@ -0,0 +1,16 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace std;
+int main()
+{
+  Eigen::Matrix2d mat;
+  mat << 1, 2,
+         3, 4;
+  cout << "Here is mat.sum():       " << mat.sum()       << endl;
+  cout << "Here is mat.prod():      " << mat.prod()      << endl;
+  cout << "Here is mat.mean():      " << mat.mean()      << endl;
+  cout << "Here is mat.minCoeff():  " << mat.minCoeff()  << endl;
+  cout << "Here is mat.maxCoeff():  " << mat.maxCoeff()  << endl;
+  cout << "Here is mat.trace():     " << mat.trace()     << endl;
+}
diff --git a/doc/examples/tut_arithmetic_scalar_mul_div.cpp b/doc/examples/tut_arithmetic_scalar_mul_div.cpp
new file mode 100644
index 0000000..d5f65b5
--- /dev/null
+++ b/doc/examples/tut_arithmetic_scalar_mul_div.cpp
@@ -0,0 +1,17 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+
+int main()
+{
+  Matrix2d a;
+  a << 1, 2,
+       3, 4;
+  Vector3d v(1,2,3);
+  std::cout << "a * 2.5 =\n" << a * 2.5 << std::endl;
+  std::cout << "0.1 * v =\n" << 0.1 * v << std::endl;
+  std::cout << "Doing v *= 2;" << std::endl;
+  v *= 2;
+  std::cout << "Now v =\n" << v << std::endl;
+}
diff --git a/doc/examples/tut_matrix_coefficient_accessors.cpp b/doc/examples/tut_matrix_coefficient_accessors.cpp
new file mode 100644
index 0000000..c2da171
--- /dev/null
+++ b/doc/examples/tut_matrix_coefficient_accessors.cpp
@@ -0,0 +1,18 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+
+int main()
+{
+  MatrixXd m(2,2);
+  m(0,0) = 3;
+  m(1,0) = 2.5;
+  m(0,1) = -1;
+  m(1,1) = m(1,0) + m(0,1);
+  std::cout << "Here is the matrix m:\n" << m << std::endl;
+  VectorXd v(2);
+  v(0) = 4;
+  v(1) = v(0) - 1;
+  std::cout << "Here is the vector v:\n" << v << std::endl;
+}
diff --git a/doc/examples/tut_matrix_resize.cpp b/doc/examples/tut_matrix_resize.cpp
new file mode 100644
index 0000000..0392c3a
--- /dev/null
+++ b/doc/examples/tut_matrix_resize.cpp
@@ -0,0 +1,18 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+
+int main()
+{
+  MatrixXd m(2,5);
+  m.resize(4,3);
+  std::cout << "The matrix m is of size "
+            << m.rows() << "x" << m.cols() << std::endl;
+  std::cout << "It has " << m.size() << " coefficients" << std::endl;
+  VectorXd v(2);
+  v.resize(5);
+  std::cout << "The vector v is of size " << v.size() << std::endl;
+  std::cout << "As a matrix, v is of size "
+            << v.rows() << "x" << v.cols() << std::endl;
+}
diff --git a/doc/examples/tut_matrix_resize_fixed_size.cpp b/doc/examples/tut_matrix_resize_fixed_size.cpp
new file mode 100644
index 0000000..dcbdfa7
--- /dev/null
+++ b/doc/examples/tut_matrix_resize_fixed_size.cpp
@@ -0,0 +1,12 @@
+#include <iostream>
+#include <Eigen/Dense>
+
+using namespace Eigen;
+
+int main()
+{
+  Matrix4d m;
+  m.resize(4,4); // no operation
+  std::cout << "The matrix m is of size "
+            << m.rows() << "x" << m.cols() << std::endl;
+}
diff --git a/doc/snippets/.krazy b/doc/snippets/.krazy
new file mode 100644
index 0000000..00b9940
--- /dev/null
+++ b/doc/snippets/.krazy
@@ -0,0 +1,2 @@
+EXCLUDE copyright
+EXCLUDE license
diff --git a/doc/snippets/AngleAxis_mimic_euler.cpp b/doc/snippets/AngleAxis_mimic_euler.cpp
new file mode 100644
index 0000000..456de7f
--- /dev/null
+++ b/doc/snippets/AngleAxis_mimic_euler.cpp
@@ -0,0 +1,5 @@
+Matrix3f m;
+m = AngleAxisf(0.25*M_PI, Vector3f::UnitX())
+  * AngleAxisf(0.5*M_PI,  Vector3f::UnitY())
+  * AngleAxisf(0.33*M_PI, Vector3f::UnitZ());
+cout << m << endl << "is unitary: " << m.isUnitary() << endl;
diff --git a/doc/snippets/CMakeLists.txt b/doc/snippets/CMakeLists.txt
new file mode 100644
index 0000000..92a22ea
--- /dev/null
+++ b/doc/snippets/CMakeLists.txt
@@ -0,0 +1,30 @@
+file(GLOB snippets_SRCS "*.cpp")
+
+add_custom_target(all_snippets)
+
+foreach(snippet_src ${snippets_SRCS})
+  get_filename_component(snippet ${snippet_src} NAME_WE)
+  set(compile_snippet_target compile_${snippet})
+  set(compile_snippet_src ${compile_snippet_target}.cpp)
+  file(READ ${snippet_src} snippet_source_code)
+  configure_file(${CMAKE_CURRENT_SOURCE_DIR}/compile_snippet.cpp.in
+                 ${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src})
+  add_executable(${compile_snippet_target}
+                 ${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src})
+  if(EIGEN_STANDARD_LIBRARIES_TO_LINK_TO)
+    target_link_libraries(${compile_snippet_target} ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO})
+  endif()
+  get_target_property(compile_snippet_executable
+                      ${compile_snippet_target} LOCATION)
+  add_custom_command(
+    TARGET ${compile_snippet_target}
+    POST_BUILD
+    COMMAND ${compile_snippet_executable}
+    ARGS >${CMAKE_CURRENT_BINARY_DIR}/${snippet}.out
+  )
+  add_dependencies(all_snippets ${compile_snippet_target})
+  set_source_files_properties(${CMAKE_CURRENT_BINARY_DIR}/${compile_snippet_src}
+                              PROPERTIES OBJECT_DEPENDS ${snippet_src})
+endforeach(snippet_src)
+
+ei_add_target_property(compile_tut_arithmetic_transpose_aliasing COMPILE_FLAGS -DEIGEN_NO_DEBUG)
\ No newline at end of file
diff --git a/doc/snippets/ColPivHouseholderQR_solve.cpp b/doc/snippets/ColPivHouseholderQR_solve.cpp
new file mode 100644
index 0000000..b7b204a
--- /dev/null
+++ b/doc/snippets/ColPivHouseholderQR_solve.cpp
@@ -0,0 +1,8 @@
+Matrix3f m = Matrix3f::Random();
+Matrix3f y = Matrix3f::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the matrix y:" << endl << y << endl;
+Matrix3f x;
+x = m.colPivHouseholderQr().solve(y);
+assert(y.isApprox(m*x));
+cout << "Here is a solution x to the equation mx=y:" << endl << x << endl;
diff --git a/doc/snippets/ComplexEigenSolver_compute.cpp b/doc/snippets/ComplexEigenSolver_compute.cpp
new file mode 100644
index 0000000..11d6bd3
--- /dev/null
+++ b/doc/snippets/ComplexEigenSolver_compute.cpp
@@ -0,0 +1,16 @@
+MatrixXcf A = MatrixXcf::Random(4,4);
+cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl;
+
+ComplexEigenSolver<MatrixXcf> ces;
+ces.compute(A);
+cout << "The eigenvalues of A are:" << endl << ces.eigenvalues() << endl;
+cout << "The matrix of eigenvectors, V, is:" << endl << ces.eigenvectors() << endl << endl;
+
+complex<float> lambda = ces.eigenvalues()[0];
+cout << "Consider the first eigenvalue, lambda = " << lambda << endl;
+VectorXcf v = ces.eigenvectors().col(0);
+cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl;
+cout << "... and A * v = " << endl << A * v << endl << endl;
+
+cout << "Finally, V * D * V^(-1) = " << endl
+     << ces.eigenvectors() * ces.eigenvalues().asDiagonal() * ces.eigenvectors().inverse() << endl;
diff --git a/doc/snippets/ComplexEigenSolver_eigenvalues.cpp b/doc/snippets/ComplexEigenSolver_eigenvalues.cpp
new file mode 100644
index 0000000..5509bd8
--- /dev/null
+++ b/doc/snippets/ComplexEigenSolver_eigenvalues.cpp
@@ -0,0 +1,4 @@
+MatrixXcf ones = MatrixXcf::Ones(3,3);
+ComplexEigenSolver<MatrixXcf> ces(ones, /* computeEigenvectors = */ false);
+cout << "The eigenvalues of the 3x3 matrix of ones are:" 
+     << endl << ces.eigenvalues() << endl;
diff --git a/doc/snippets/ComplexEigenSolver_eigenvectors.cpp b/doc/snippets/ComplexEigenSolver_eigenvectors.cpp
new file mode 100644
index 0000000..bb1c2cc
--- /dev/null
+++ b/doc/snippets/ComplexEigenSolver_eigenvectors.cpp
@@ -0,0 +1,4 @@
+MatrixXcf ones = MatrixXcf::Ones(3,3);
+ComplexEigenSolver<MatrixXcf> ces(ones);
+cout << "The first eigenvector of the 3x3 matrix of ones is:" 
+     << endl << ces.eigenvectors().col(0) << endl;
diff --git a/doc/snippets/ComplexSchur_compute.cpp b/doc/snippets/ComplexSchur_compute.cpp
new file mode 100644
index 0000000..3a51701
--- /dev/null
+++ b/doc/snippets/ComplexSchur_compute.cpp
@@ -0,0 +1,6 @@
+MatrixXcf A = MatrixXcf::Random(4,4);
+ComplexSchur<MatrixXcf> schur(4);
+schur.compute(A);
+cout << "The matrix T in the decomposition of A is:" << endl << schur.matrixT() << endl;
+schur.compute(A.inverse());
+cout << "The matrix T in the decomposition of A^(-1) is:" << endl << schur.matrixT() << endl;
diff --git a/doc/snippets/ComplexSchur_matrixT.cpp b/doc/snippets/ComplexSchur_matrixT.cpp
new file mode 100644
index 0000000..8380571
--- /dev/null
+++ b/doc/snippets/ComplexSchur_matrixT.cpp
@@ -0,0 +1,4 @@
+MatrixXcf A = MatrixXcf::Random(4,4);
+cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl;
+ComplexSchur<MatrixXcf> schurOfA(A, false); // false means do not compute U
+cout << "The triangular matrix T is:" << endl << schurOfA.matrixT() << endl;
diff --git a/doc/snippets/ComplexSchur_matrixU.cpp b/doc/snippets/ComplexSchur_matrixU.cpp
new file mode 100644
index 0000000..ba3d9c2
--- /dev/null
+++ b/doc/snippets/ComplexSchur_matrixU.cpp
@@ -0,0 +1,4 @@
+MatrixXcf A = MatrixXcf::Random(4,4);
+cout << "Here is a random 4x4 matrix, A:" << endl << A << endl << endl;
+ComplexSchur<MatrixXcf> schurOfA(A);
+cout << "The unitary matrix U is:" << endl << schurOfA.matrixU() << endl;
diff --git a/doc/snippets/Cwise_abs.cpp b/doc/snippets/Cwise_abs.cpp
new file mode 100644
index 0000000..0aeec3a
--- /dev/null
+++ b/doc/snippets/Cwise_abs.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,-2,-3);
+cout << v.abs() << endl;
diff --git a/doc/snippets/Cwise_abs2.cpp b/doc/snippets/Cwise_abs2.cpp
new file mode 100644
index 0000000..2c4f9b3
--- /dev/null
+++ b/doc/snippets/Cwise_abs2.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,-2,-3);
+cout << v.abs2() << endl;
diff --git a/doc/snippets/Cwise_acos.cpp b/doc/snippets/Cwise_acos.cpp
new file mode 100644
index 0000000..34432cb
--- /dev/null
+++ b/doc/snippets/Cwise_acos.cpp
@@ -0,0 +1,2 @@
+Array3d v(0, sqrt(2.)/2, 1);
+cout << v.acos() << endl;
diff --git a/doc/snippets/Cwise_boolean_and.cpp b/doc/snippets/Cwise_boolean_and.cpp
new file mode 100644
index 0000000..df6b60d
--- /dev/null
+++ b/doc/snippets/Cwise_boolean_and.cpp
@@ -0,0 +1,2 @@
+Array3d v(-1,2,1), w(-3,2,3);
+cout << ((v<w) && (v<0)) << endl;
diff --git a/doc/snippets/Cwise_boolean_or.cpp b/doc/snippets/Cwise_boolean_or.cpp
new file mode 100644
index 0000000..83eb006
--- /dev/null
+++ b/doc/snippets/Cwise_boolean_or.cpp
@@ -0,0 +1,2 @@
+Array3d v(-1,2,1), w(-3,2,3);
+cout << ((v<w) || (v<0)) << endl;
diff --git a/doc/snippets/Cwise_cos.cpp b/doc/snippets/Cwise_cos.cpp
new file mode 100644
index 0000000..f589f07
--- /dev/null
+++ b/doc/snippets/Cwise_cos.cpp
@@ -0,0 +1,2 @@
+Array3d v(M_PI, M_PI/2, M_PI/3);
+cout << v.cos() << endl;
diff --git a/doc/snippets/Cwise_cube.cpp b/doc/snippets/Cwise_cube.cpp
new file mode 100644
index 0000000..85e41dc
--- /dev/null
+++ b/doc/snippets/Cwise_cube.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4);
+cout << v.cube() << endl;
diff --git a/doc/snippets/Cwise_equal_equal.cpp b/doc/snippets/Cwise_equal_equal.cpp
new file mode 100644
index 0000000..0ba96f6
--- /dev/null
+++ b/doc/snippets/Cwise_equal_equal.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v==w) << endl;
diff --git a/doc/snippets/Cwise_exp.cpp b/doc/snippets/Cwise_exp.cpp
new file mode 100644
index 0000000..db23618
--- /dev/null
+++ b/doc/snippets/Cwise_exp.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3);
+cout << v.exp() << endl;
diff --git a/doc/snippets/Cwise_greater.cpp b/doc/snippets/Cwise_greater.cpp
new file mode 100644
index 0000000..40ad029
--- /dev/null
+++ b/doc/snippets/Cwise_greater.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v>w) << endl;
diff --git a/doc/snippets/Cwise_greater_equal.cpp b/doc/snippets/Cwise_greater_equal.cpp
new file mode 100644
index 0000000..6a08f89
--- /dev/null
+++ b/doc/snippets/Cwise_greater_equal.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v>=w) << endl;
diff --git a/doc/snippets/Cwise_inverse.cpp b/doc/snippets/Cwise_inverse.cpp
new file mode 100644
index 0000000..3967a7e
--- /dev/null
+++ b/doc/snippets/Cwise_inverse.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4);
+cout << v.inverse() << endl;
diff --git a/doc/snippets/Cwise_less.cpp b/doc/snippets/Cwise_less.cpp
new file mode 100644
index 0000000..cafd3b6
--- /dev/null
+++ b/doc/snippets/Cwise_less.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v<w) << endl;
diff --git a/doc/snippets/Cwise_less_equal.cpp b/doc/snippets/Cwise_less_equal.cpp
new file mode 100644
index 0000000..1600e39
--- /dev/null
+++ b/doc/snippets/Cwise_less_equal.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v<=w) << endl;
diff --git a/doc/snippets/Cwise_log.cpp b/doc/snippets/Cwise_log.cpp
new file mode 100644
index 0000000..f7aca72
--- /dev/null
+++ b/doc/snippets/Cwise_log.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3);
+cout << v.log() << endl;
diff --git a/doc/snippets/Cwise_max.cpp b/doc/snippets/Cwise_max.cpp
new file mode 100644
index 0000000..6602881
--- /dev/null
+++ b/doc/snippets/Cwise_max.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4), w(4,2,3);
+cout << v.max(w) << endl;
diff --git a/doc/snippets/Cwise_min.cpp b/doc/snippets/Cwise_min.cpp
new file mode 100644
index 0000000..1c01c76
--- /dev/null
+++ b/doc/snippets/Cwise_min.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4), w(4,2,3);
+cout << v.min(w) << endl;
diff --git a/doc/snippets/Cwise_minus.cpp b/doc/snippets/Cwise_minus.cpp
new file mode 100644
index 0000000..b89b9fb
--- /dev/null
+++ b/doc/snippets/Cwise_minus.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3);
+cout << v-5 << endl;
diff --git a/doc/snippets/Cwise_minus_equal.cpp b/doc/snippets/Cwise_minus_equal.cpp
new file mode 100644
index 0000000..dfde49d
--- /dev/null
+++ b/doc/snippets/Cwise_minus_equal.cpp
@@ -0,0 +1,3 @@
+Array3d v(1,2,3);
+v -= 5;
+cout << v << endl;
diff --git a/doc/snippets/Cwise_not_equal.cpp b/doc/snippets/Cwise_not_equal.cpp
new file mode 100644
index 0000000..57a407a
--- /dev/null
+++ b/doc/snippets/Cwise_not_equal.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3), w(3,2,1);
+cout << (v!=w) << endl;
diff --git a/doc/snippets/Cwise_plus.cpp b/doc/snippets/Cwise_plus.cpp
new file mode 100644
index 0000000..9d47327
--- /dev/null
+++ b/doc/snippets/Cwise_plus.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,3);
+cout << v+5 << endl;
diff --git a/doc/snippets/Cwise_plus_equal.cpp b/doc/snippets/Cwise_plus_equal.cpp
new file mode 100644
index 0000000..d744b1e
--- /dev/null
+++ b/doc/snippets/Cwise_plus_equal.cpp
@@ -0,0 +1,3 @@
+Array3d v(1,2,3);
+v += 5;
+cout << v << endl;
diff --git a/doc/snippets/Cwise_pow.cpp b/doc/snippets/Cwise_pow.cpp
new file mode 100644
index 0000000..a723ed8
--- /dev/null
+++ b/doc/snippets/Cwise_pow.cpp
@@ -0,0 +1,2 @@
+Array3d v(8,27,64);
+cout << v.pow(0.333333) << endl;
diff --git a/doc/snippets/Cwise_product.cpp b/doc/snippets/Cwise_product.cpp
new file mode 100644
index 0000000..714d66d
--- /dev/null
+++ b/doc/snippets/Cwise_product.cpp
@@ -0,0 +1,4 @@
+Array33i a = Array33i::Random(), b = Array33i::Random();
+Array33i c = a * b;
+cout << "a:\n" << a << "\nb:\n" << b << "\nc:\n" << c << endl;
+
diff --git a/doc/snippets/Cwise_quotient.cpp b/doc/snippets/Cwise_quotient.cpp
new file mode 100644
index 0000000..7cb9f7f
--- /dev/null
+++ b/doc/snippets/Cwise_quotient.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4), w(4,2,3);
+cout << v/w << endl;
diff --git a/doc/snippets/Cwise_sin.cpp b/doc/snippets/Cwise_sin.cpp
new file mode 100644
index 0000000..46fa908
--- /dev/null
+++ b/doc/snippets/Cwise_sin.cpp
@@ -0,0 +1,2 @@
+Array3d v(M_PI, M_PI/2, M_PI/3);
+cout << v.sin() << endl;
diff --git a/doc/snippets/Cwise_slash_equal.cpp b/doc/snippets/Cwise_slash_equal.cpp
new file mode 100644
index 0000000..2efd32d
--- /dev/null
+++ b/doc/snippets/Cwise_slash_equal.cpp
@@ -0,0 +1,3 @@
+Array3d v(3,2,4), w(5,4,2);
+v /= w;
+cout << v << endl;
diff --git a/doc/snippets/Cwise_sqrt.cpp b/doc/snippets/Cwise_sqrt.cpp
new file mode 100644
index 0000000..97bafe8
--- /dev/null
+++ b/doc/snippets/Cwise_sqrt.cpp
@@ -0,0 +1,2 @@
+Array3d v(1,2,4);
+cout << v.sqrt() << endl;
diff --git a/doc/snippets/Cwise_square.cpp b/doc/snippets/Cwise_square.cpp
new file mode 100644
index 0000000..f704c5e
--- /dev/null
+++ b/doc/snippets/Cwise_square.cpp
@@ -0,0 +1,2 @@
+Array3d v(2,3,4);
+cout << v.square() << endl;
diff --git a/doc/snippets/Cwise_tan.cpp b/doc/snippets/Cwise_tan.cpp
new file mode 100644
index 0000000..b758ef0
--- /dev/null
+++ b/doc/snippets/Cwise_tan.cpp
@@ -0,0 +1,2 @@
+Array3d v(M_PI, M_PI/2, M_PI/3);
+cout << v.tan() << endl;
diff --git a/doc/snippets/Cwise_times_equal.cpp b/doc/snippets/Cwise_times_equal.cpp
new file mode 100644
index 0000000..147556c
--- /dev/null
+++ b/doc/snippets/Cwise_times_equal.cpp
@@ -0,0 +1,3 @@
+Array3d v(1,2,3), w(2,3,0);
+v *= w;
+cout << v << endl;
diff --git a/doc/snippets/DenseBase_LinSpaced.cpp b/doc/snippets/DenseBase_LinSpaced.cpp
new file mode 100644
index 0000000..8e54b17
--- /dev/null
+++ b/doc/snippets/DenseBase_LinSpaced.cpp
@@ -0,0 +1,2 @@
+cout << VectorXi::LinSpaced(4,7,10).transpose() << endl;
+cout << VectorXd::LinSpaced(5,0.0,1.0).transpose() << endl;
diff --git a/doc/snippets/DenseBase_LinSpaced_seq.cpp b/doc/snippets/DenseBase_LinSpaced_seq.cpp
new file mode 100644
index 0000000..f55c508
--- /dev/null
+++ b/doc/snippets/DenseBase_LinSpaced_seq.cpp
@@ -0,0 +1,2 @@
+cout << VectorXi::LinSpaced(Sequential,4,7,10).transpose() << endl;
+cout << VectorXd::LinSpaced(Sequential,5,0.0,1.0).transpose() << endl;
diff --git a/doc/snippets/DenseBase_setLinSpaced.cpp b/doc/snippets/DenseBase_setLinSpaced.cpp
new file mode 100644
index 0000000..50871df
--- /dev/null
+++ b/doc/snippets/DenseBase_setLinSpaced.cpp
@@ -0,0 +1,3 @@
+VectorXf v;
+v.setLinSpaced(5,0.5f,1.5f); // resizes v to 5 and fills it
+cout << v << endl;
diff --git a/doc/snippets/DirectionWise_replicate.cpp b/doc/snippets/DirectionWise_replicate.cpp
new file mode 100644
index 0000000..d92d4a3
--- /dev/null
+++ b/doc/snippets/DirectionWise_replicate.cpp
@@ -0,0 +1,4 @@
+MatrixXi m = MatrixXi::Random(2,3);
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "m.colwise().replicate<3>() = ..." << endl;
+cout << m.colwise().replicate<3>() << endl;
diff --git a/doc/snippets/DirectionWise_replicate_int.cpp b/doc/snippets/DirectionWise_replicate_int.cpp
new file mode 100644
index 0000000..f9b1b53
--- /dev/null
+++ b/doc/snippets/DirectionWise_replicate_int.cpp
@@ -0,0 +1,4 @@
+Vector3i v = Vector3i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "v.rowwise().replicate(5) = ..." << endl;
+cout << v.rowwise().replicate(5) << endl;
diff --git a/doc/snippets/EigenSolver_EigenSolver_MatrixType.cpp b/doc/snippets/EigenSolver_EigenSolver_MatrixType.cpp
new file mode 100644
index 0000000..c1d9fa8
--- /dev/null
+++ b/doc/snippets/EigenSolver_EigenSolver_MatrixType.cpp
@@ -0,0 +1,16 @@
+MatrixXd A = MatrixXd::Random(6,6);
+cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl;
+
+EigenSolver<MatrixXd> es(A);
+cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl;
+cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl;
+
+complex<double> lambda = es.eigenvalues()[0];
+cout << "Consider the first eigenvalue, lambda = " << lambda << endl;
+VectorXcd v = es.eigenvectors().col(0);
+cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl;
+cout << "... and A * v = " << endl << A.cast<complex<double> >() * v << endl << endl;
+
+MatrixXcd D = es.eigenvalues().asDiagonal();
+MatrixXcd V = es.eigenvectors();
+cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl;
diff --git a/doc/snippets/EigenSolver_compute.cpp b/doc/snippets/EigenSolver_compute.cpp
new file mode 100644
index 0000000..a5c96e9
--- /dev/null
+++ b/doc/snippets/EigenSolver_compute.cpp
@@ -0,0 +1,6 @@
+EigenSolver<MatrixXf> es;
+MatrixXf A = MatrixXf::Random(4,4);
+es.compute(A, /* computeEigenvectors = */ false);
+cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
+es.compute(A + MatrixXf::Identity(4,4), false); // re-use es to compute eigenvalues of A+I
+cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;
diff --git a/doc/snippets/EigenSolver_eigenvalues.cpp b/doc/snippets/EigenSolver_eigenvalues.cpp
new file mode 100644
index 0000000..ed28869
--- /dev/null
+++ b/doc/snippets/EigenSolver_eigenvalues.cpp
@@ -0,0 +1,4 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+EigenSolver<MatrixXd> es(ones, false);
+cout << "The eigenvalues of the 3x3 matrix of ones are:" 
+     << endl << es.eigenvalues() << endl;
diff --git a/doc/snippets/EigenSolver_eigenvectors.cpp b/doc/snippets/EigenSolver_eigenvectors.cpp
new file mode 100644
index 0000000..0fad4da
--- /dev/null
+++ b/doc/snippets/EigenSolver_eigenvectors.cpp
@@ -0,0 +1,4 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+EigenSolver<MatrixXd> es(ones);
+cout << "The first eigenvector of the 3x3 matrix of ones is:" 
+     << endl << es.eigenvectors().col(0) << endl;
diff --git a/doc/snippets/EigenSolver_pseudoEigenvectors.cpp b/doc/snippets/EigenSolver_pseudoEigenvectors.cpp
new file mode 100644
index 0000000..85e2569
--- /dev/null
+++ b/doc/snippets/EigenSolver_pseudoEigenvectors.cpp
@@ -0,0 +1,9 @@
+MatrixXd A = MatrixXd::Random(6,6);
+cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl;
+
+EigenSolver<MatrixXd> es(A);
+MatrixXd D = es.pseudoEigenvalueMatrix();
+MatrixXd V = es.pseudoEigenvectors();
+cout << "The pseudo-eigenvalue matrix D is:" << endl << D << endl;
+cout << "The pseudo-eigenvector matrix V is:" << endl << V << endl;
+cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl;
diff --git a/doc/snippets/FullPivHouseholderQR_solve.cpp b/doc/snippets/FullPivHouseholderQR_solve.cpp
new file mode 100644
index 0000000..23bc074
--- /dev/null
+++ b/doc/snippets/FullPivHouseholderQR_solve.cpp
@@ -0,0 +1,8 @@
+Matrix3f m = Matrix3f::Random();
+Matrix3f y = Matrix3f::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the matrix y:" << endl << y << endl;
+Matrix3f x;
+x = m.fullPivHouseholderQr().solve(y);
+assert(y.isApprox(m*x));
+cout << "Here is a solution x to the equation mx=y:" << endl << x << endl;
diff --git a/doc/snippets/FullPivLU_image.cpp b/doc/snippets/FullPivLU_image.cpp
new file mode 100644
index 0000000..817bc1e
--- /dev/null
+++ b/doc/snippets/FullPivLU_image.cpp
@@ -0,0 +1,9 @@
+Matrix3d m;
+m << 1,1,0,
+     1,3,2,
+     0,1,1;
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Notice that the middle column is the sum of the two others, so the "
+     << "columns are linearly dependent." << endl;
+cout << "Here is a matrix whose columns have the same span but are linearly independent:"
+     << endl << m.fullPivLu().image(m) << endl;
diff --git a/doc/snippets/FullPivLU_kernel.cpp b/doc/snippets/FullPivLU_kernel.cpp
new file mode 100644
index 0000000..7086e01
--- /dev/null
+++ b/doc/snippets/FullPivLU_kernel.cpp
@@ -0,0 +1,7 @@
+MatrixXf m = MatrixXf::Random(3,5);
+cout << "Here is the matrix m:" << endl << m << endl;
+MatrixXf ker = m.fullPivLu().kernel();
+cout << "Here is a matrix whose columns form a basis of the kernel of m:"
+     << endl << ker << endl;
+cout << "By definition of the kernel, m*ker is zero:"
+     << endl << m*ker << endl;
diff --git a/doc/snippets/FullPivLU_solve.cpp b/doc/snippets/FullPivLU_solve.cpp
new file mode 100644
index 0000000..c1f8823
--- /dev/null
+++ b/doc/snippets/FullPivLU_solve.cpp
@@ -0,0 +1,11 @@
+Matrix<float,2,3> m = Matrix<float,2,3>::Random();
+Matrix2f y = Matrix2f::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the matrix y:" << endl << y << endl;
+Matrix<float,3,2> x = m.fullPivLu().solve(y);
+if((m*x).isApprox(y))
+{
+  cout << "Here is a solution x to the equation mx=y:" << endl << x << endl;
+}
+else
+  cout << "The equation mx=y does not have any solution." << endl;
diff --git a/doc/snippets/HessenbergDecomposition_compute.cpp b/doc/snippets/HessenbergDecomposition_compute.cpp
new file mode 100644
index 0000000..50e3783
--- /dev/null
+++ b/doc/snippets/HessenbergDecomposition_compute.cpp
@@ -0,0 +1,6 @@
+MatrixXcf A = MatrixXcf::Random(4,4);
+HessenbergDecomposition<MatrixXcf> hd(4);
+hd.compute(A);
+cout << "The matrix H in the decomposition of A is:" << endl << hd.matrixH() << endl;
+hd.compute(2*A); // re-use hd to compute and store decomposition of 2A
+cout << "The matrix H in the decomposition of 2A is:" << endl << hd.matrixH() << endl;
diff --git a/doc/snippets/HessenbergDecomposition_matrixH.cpp b/doc/snippets/HessenbergDecomposition_matrixH.cpp
new file mode 100644
index 0000000..af01366
--- /dev/null
+++ b/doc/snippets/HessenbergDecomposition_matrixH.cpp
@@ -0,0 +1,8 @@
+Matrix4f A = Matrix4f::Random();
+cout << "Here is a random 4x4 matrix:" << endl << A << endl;
+HessenbergDecomposition<MatrixXf> hessOfA(A);
+MatrixXf H = hessOfA.matrixH();
+cout << "The Hessenberg matrix H is:" << endl << H << endl;
+MatrixXf Q = hessOfA.matrixQ();
+cout << "The orthogonal matrix Q is:" << endl << Q << endl;
+cout << "Q H Q^T is:" << endl << Q * H * Q.transpose() << endl;
diff --git a/doc/snippets/HessenbergDecomposition_packedMatrix.cpp b/doc/snippets/HessenbergDecomposition_packedMatrix.cpp
new file mode 100644
index 0000000..4fa5957
--- /dev/null
+++ b/doc/snippets/HessenbergDecomposition_packedMatrix.cpp
@@ -0,0 +1,9 @@
+Matrix4d A = Matrix4d::Random();
+cout << "Here is a random 4x4 matrix:" << endl << A << endl;
+HessenbergDecomposition<Matrix4d> hessOfA(A);
+Matrix4d pm = hessOfA.packedMatrix();
+cout << "The packed matrix M is:" << endl << pm << endl;
+cout << "The upper Hessenberg part corresponds to the matrix H, which is:" 
+     << endl << hessOfA.matrixH() << endl;
+Vector3d hc = hessOfA.householderCoefficients();
+cout << "The vector of Householder coefficients is:" << endl << hc << endl;
diff --git a/doc/snippets/HouseholderQR_solve.cpp b/doc/snippets/HouseholderQR_solve.cpp
new file mode 100644
index 0000000..8cce6ce
--- /dev/null
+++ b/doc/snippets/HouseholderQR_solve.cpp
@@ -0,0 +1,9 @@
+typedef Matrix<float,3,3> Matrix3x3;
+Matrix3x3 m = Matrix3x3::Random();
+Matrix3f y = Matrix3f::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the matrix y:" << endl << y << endl;
+Matrix3f x;
+x = m.householderQr().solve(y);
+assert(y.isApprox(m*x));
+cout << "Here is a solution x to the equation mx=y:" << endl << x << endl;
diff --git a/doc/snippets/HouseholderSequence_HouseholderSequence.cpp b/doc/snippets/HouseholderSequence_HouseholderSequence.cpp
new file mode 100644
index 0000000..2632b83
--- /dev/null
+++ b/doc/snippets/HouseholderSequence_HouseholderSequence.cpp
@@ -0,0 +1,31 @@
+Matrix3d v = Matrix3d::Random();
+cout << "The matrix v is:" << endl;
+cout << v << endl;
+
+Vector3d v0(1, v(1,0), v(2,0));
+cout << "The first Householder vector is: v_0 = " << v0.transpose() << endl;
+Vector3d v1(0, 1, v(2,1));
+cout << "The second Householder vector is: v_1 = " << v1.transpose()  << endl;
+Vector3d v2(0, 0, 1);
+cout << "The third Householder vector is: v_2 = " << v2.transpose() << endl;
+
+Vector3d h = Vector3d::Random();
+cout << "The Householder coefficients are: h = " << h.transpose() << endl;
+
+Matrix3d H0 = Matrix3d::Identity() - h(0) * v0 * v0.adjoint();
+cout << "The first Householder reflection is represented by H_0 = " << endl;
+cout << H0 << endl;
+Matrix3d H1 = Matrix3d::Identity() - h(1) * v1 * v1.adjoint();
+cout << "The second Householder reflection is represented by H_1 = " << endl;
+cout << H1 << endl;
+Matrix3d H2 = Matrix3d::Identity() - h(2) * v2 * v2.adjoint();
+cout << "The third Householder reflection is represented by H_2 = " << endl;
+cout << H2 << endl;
+cout << "Their product is H_0 H_1 H_2 = " << endl;
+cout << H0 * H1 * H2 << endl;
+
+HouseholderSequence<Matrix3d, Vector3d> hhSeq(v, h);
+Matrix3d hhSeqAsMatrix(hhSeq);
+cout << "If we construct a HouseholderSequence from v and h" << endl;
+cout << "and convert it to a matrix, we get:" << endl;
+cout << hhSeqAsMatrix << endl;
diff --git a/doc/snippets/IOFormat.cpp b/doc/snippets/IOFormat.cpp
new file mode 100644
index 0000000..735f5dd
--- /dev/null
+++ b/doc/snippets/IOFormat.cpp
@@ -0,0 +1,14 @@
+std::string sep = "\n----------------------------------------\n";
+Matrix3d m1;
+m1 << 1.111111, 2, 3.33333, 4, 5, 6, 7, 8.888888, 9;
+
+IOFormat CommaInitFmt(StreamPrecision, DontAlignCols, ", ", ", ", "", "", " << ", ";");
+IOFormat CleanFmt(4, 0, ", ", "\n", "[", "]");
+IOFormat OctaveFmt(StreamPrecision, 0, ", ", ";\n", "", "", "[", "]");
+IOFormat HeavyFmt(FullPrecision, 0, ", ", ";\n", "[", "]", "[", "]");
+
+std::cout << m1 << sep;
+std::cout << m1.format(CommaInitFmt) << sep;
+std::cout << m1.format(CleanFmt) << sep;
+std::cout << m1.format(OctaveFmt) << sep;
+std::cout << m1.format(HeavyFmt) << sep;
diff --git a/doc/snippets/JacobiSVD_basic.cpp b/doc/snippets/JacobiSVD_basic.cpp
new file mode 100644
index 0000000..ab24b9b
--- /dev/null
+++ b/doc/snippets/JacobiSVD_basic.cpp
@@ -0,0 +1,9 @@
+MatrixXf m = MatrixXf::Random(3,2);
+cout << "Here is the matrix m:" << endl << m << endl;
+JacobiSVD<MatrixXf> svd(m, ComputeThinU | ComputeThinV);
+cout << "Its singular values are:" << endl << svd.singularValues() << endl;
+cout << "Its left singular vectors are the columns of the thin U matrix:" << endl << svd.matrixU() << endl;
+cout << "Its right singular vectors are the columns of the thin V matrix:" << endl << svd.matrixV() << endl;
+Vector3f rhs(1, 0, 0);
+cout << "Now consider this rhs vector:" << endl << rhs << endl;
+cout << "A least-squares solution of m*x = rhs is:" << endl << svd.solve(rhs) << endl;
diff --git a/doc/snippets/Jacobi_makeGivens.cpp b/doc/snippets/Jacobi_makeGivens.cpp
new file mode 100644
index 0000000..4b733c3
--- /dev/null
+++ b/doc/snippets/Jacobi_makeGivens.cpp
@@ -0,0 +1,6 @@
+Vector2f v = Vector2f::Random();
+JacobiRotation<float> G;
+G.makeGivens(v.x(), v.y());
+cout << "Here is the vector v:" << endl << v << endl;
+v.applyOnTheLeft(0, 1, G.adjoint());
+cout << "Here is the vector G' * v:" << endl << v << endl;
\ No newline at end of file
diff --git a/doc/snippets/Jacobi_makeJacobi.cpp b/doc/snippets/Jacobi_makeJacobi.cpp
new file mode 100644
index 0000000..0cc331d
--- /dev/null
+++ b/doc/snippets/Jacobi_makeJacobi.cpp
@@ -0,0 +1,8 @@
+Matrix2f m = Matrix2f::Random();
+m = (m + m.adjoint()).eval();
+JacobiRotation<float> J;
+J.makeJacobi(m, 0, 1);
+cout << "Here is the matrix m:" << endl << m << endl;
+m.applyOnTheLeft(0, 1, J.adjoint());
+m.applyOnTheRight(0, 1, J);
+cout << "Here is the matrix J' * m * J:" << endl << m << endl;
\ No newline at end of file
diff --git a/doc/snippets/LLT_example.cpp b/doc/snippets/LLT_example.cpp
new file mode 100644
index 0000000..46fb407
--- /dev/null
+++ b/doc/snippets/LLT_example.cpp
@@ -0,0 +1,12 @@
+MatrixXd A(3,3);
+A << 4,-1,2, -1,6,0, 2,0,5;
+cout << "The matrix A is" << endl << A << endl;
+
+LLT<MatrixXd> lltOfA(A); // compute the Cholesky decomposition of A
+MatrixXd L = lltOfA.matrixL(); // retrieve factor L  in the decomposition
+// The previous two lines can also be written as "L = A.llt().matrixL()"
+
+cout << "The Cholesky factor L is" << endl << L << endl;
+cout << "To check this, let us compute L * L.transpose()" << endl;
+cout << L * L.transpose() << endl;
+cout << "This should equal the matrix A" << endl;
diff --git a/doc/snippets/LLT_solve.cpp b/doc/snippets/LLT_solve.cpp
new file mode 100644
index 0000000..7095d2c
--- /dev/null
+++ b/doc/snippets/LLT_solve.cpp
@@ -0,0 +1,8 @@
+typedef Matrix<float,Dynamic,2> DataMatrix;
+// let's generate some samples on the plane z = 2x+3y (with some noise)
+DataMatrix samples = DataMatrix::Random(12,2);
+VectorXf elevations = 2*samples.col(0) + 3*samples.col(1) + VectorXf::Random(12)*0.1;
+// and let's solve samples * [x y]^T = elevations in the least-squares sense:
+Matrix<float,2,1> xy
+ = (samples.adjoint() * samples).llt().solve((samples.adjoint()*elevations));
+cout << xy << endl;
diff --git a/doc/snippets/Map_general_stride.cpp b/doc/snippets/Map_general_stride.cpp
new file mode 100644
index 0000000..0657e7f
--- /dev/null
+++ b/doc/snippets/Map_general_stride.cpp
@@ -0,0 +1,5 @@
+int array[24];
+for(int i = 0; i < 24; ++i) array[i] = i;
+cout << Map<MatrixXi, 0, Stride<Dynamic,2> >
+         (array, 3, 3, Stride<Dynamic,2>(8, 2))
+     << endl;
diff --git a/doc/snippets/Map_inner_stride.cpp b/doc/snippets/Map_inner_stride.cpp
new file mode 100644
index 0000000..d95ae9b
--- /dev/null
+++ b/doc/snippets/Map_inner_stride.cpp
@@ -0,0 +1,5 @@
+int array[12];
+for(int i = 0; i < 12; ++i) array[i] = i;
+cout << Map<VectorXi, 0, InnerStride<2> >
+         (array, 6) // the inner stride has already been passed as template parameter
+     << endl;
diff --git a/doc/snippets/Map_outer_stride.cpp b/doc/snippets/Map_outer_stride.cpp
new file mode 100644
index 0000000..2f6f052
--- /dev/null
+++ b/doc/snippets/Map_outer_stride.cpp
@@ -0,0 +1,3 @@
+int array[12];
+for(int i = 0; i < 12; ++i) array[i] = i;
+cout << Map<MatrixXi, 0, OuterStride<> >(array, 3, 3, OuterStride<>(4)) << endl;
diff --git a/doc/snippets/Map_placement_new.cpp b/doc/snippets/Map_placement_new.cpp
new file mode 100644
index 0000000..2e40eca
--- /dev/null
+++ b/doc/snippets/Map_placement_new.cpp
@@ -0,0 +1,5 @@
+int data[] = {1,2,3,4,5,6,7,8,9};
+Map<RowVectorXi> v(data,4);
+cout << "The mapped vector v is: " << v << "\n";
+new (&v) Map<RowVectorXi>(data+4,5);
+cout << "Now v is: " << v << "\n";
\ No newline at end of file
diff --git a/doc/snippets/Map_simple.cpp b/doc/snippets/Map_simple.cpp
new file mode 100644
index 0000000..423bb52
--- /dev/null
+++ b/doc/snippets/Map_simple.cpp
@@ -0,0 +1,3 @@
+int array[9];
+for(int i = 0; i < 9; ++i) array[i] = i;
+cout << Map<Matrix3i>(array) << endl;
diff --git a/doc/snippets/MatrixBase_adjoint.cpp b/doc/snippets/MatrixBase_adjoint.cpp
new file mode 100644
index 0000000..4680d59
--- /dev/null
+++ b/doc/snippets/MatrixBase_adjoint.cpp
@@ -0,0 +1,3 @@
+Matrix2cf m = Matrix2cf::Random();
+cout << "Here is the 2x2 complex matrix m:" << endl << m << endl;
+cout << "Here is the adjoint of m:" << endl << m.adjoint() << endl;
diff --git a/doc/snippets/MatrixBase_all.cpp b/doc/snippets/MatrixBase_all.cpp
new file mode 100644
index 0000000..46f26f1
--- /dev/null
+++ b/doc/snippets/MatrixBase_all.cpp
@@ -0,0 +1,7 @@
+Vector3f boxMin(Vector3f::Zero()), boxMax(Vector3f::Ones());
+Vector3f p0 = Vector3f::Random(), p1 = Vector3f::Random().cwiseAbs();
+// let's check if p0 and p1 are inside the axis-aligned box defined by the corners boxMin and boxMax:
+cout << "Is (" << p0.transpose() << ") inside the box: "
+     << ((boxMin.array()<p0.array()).all() && (boxMax.array()>p0.array()).all()) << endl;
+cout << "Is (" << p1.transpose() << ") inside the box: "
+     << ((boxMin.array()<p1.array()).all() && (boxMax.array()>p1.array()).all()) << endl;
diff --git a/doc/snippets/MatrixBase_array.cpp b/doc/snippets/MatrixBase_array.cpp
new file mode 100644
index 0000000..f215086
--- /dev/null
+++ b/doc/snippets/MatrixBase_array.cpp
@@ -0,0 +1,4 @@
+Vector3d v(1,2,3);
+v.array() += 3;
+v.array() -= 2;
+cout << v << endl;
diff --git a/doc/snippets/MatrixBase_array_const.cpp b/doc/snippets/MatrixBase_array_const.cpp
new file mode 100644
index 0000000..cd3b26a
--- /dev/null
+++ b/doc/snippets/MatrixBase_array_const.cpp
@@ -0,0 +1,4 @@
+Vector3d v(-1,2,-3);
+cout << "the absolute values:" << endl << v.array().abs() << endl;
+cout << "the absolute values plus one:" << endl << v.array().abs()+1 << endl;
+cout << "sum of the squares: " << v.array().square().sum() << endl;
diff --git a/doc/snippets/MatrixBase_asDiagonal.cpp b/doc/snippets/MatrixBase_asDiagonal.cpp
new file mode 100644
index 0000000..b01082d
--- /dev/null
+++ b/doc/snippets/MatrixBase_asDiagonal.cpp
@@ -0,0 +1 @@
+cout << Matrix3i(Vector3i(2,5,6).asDiagonal()) << endl;
diff --git a/doc/snippets/MatrixBase_block_int_int.cpp b/doc/snippets/MatrixBase_block_int_int.cpp
new file mode 100644
index 0000000..f99b6d4
--- /dev/null
+++ b/doc/snippets/MatrixBase_block_int_int.cpp
@@ -0,0 +1,5 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.block<2,2>(1,1):" << endl << m.block<2,2>(1,1) << endl;
+m.block<2,2>(1,1).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_block_int_int_int_int.cpp b/doc/snippets/MatrixBase_block_int_int_int_int.cpp
new file mode 100644
index 0000000..7238cbb
--- /dev/null
+++ b/doc/snippets/MatrixBase_block_int_int_int_int.cpp
@@ -0,0 +1,5 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.block(1, 1, 2, 2):" << endl << m.block(1, 1, 2, 2) << endl;
+m.block(1, 1, 2, 2).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_bottomLeftCorner_int_int.cpp b/doc/snippets/MatrixBase_bottomLeftCorner_int_int.cpp
new file mode 100644
index 0000000..ebae95e
--- /dev/null
+++ b/doc/snippets/MatrixBase_bottomLeftCorner_int_int.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.bottomLeftCorner(2, 2):" << endl;
+cout << m.bottomLeftCorner(2, 2) << endl;
+m.bottomLeftCorner(2, 2).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_bottomRightCorner_int_int.cpp b/doc/snippets/MatrixBase_bottomRightCorner_int_int.cpp
new file mode 100644
index 0000000..bf05093
--- /dev/null
+++ b/doc/snippets/MatrixBase_bottomRightCorner_int_int.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.bottomRightCorner(2, 2):" << endl;
+cout << m.bottomRightCorner(2, 2) << endl;
+m.bottomRightCorner(2, 2).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_bottomRows_int.cpp b/doc/snippets/MatrixBase_bottomRows_int.cpp
new file mode 100644
index 0000000..47ca92e
--- /dev/null
+++ b/doc/snippets/MatrixBase_bottomRows_int.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.bottomRows(2):" << endl;
+cout << a.bottomRows(2) << endl;
+a.bottomRows(2).setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_cast.cpp b/doc/snippets/MatrixBase_cast.cpp
new file mode 100644
index 0000000..016880b
--- /dev/null
+++ b/doc/snippets/MatrixBase_cast.cpp
@@ -0,0 +1,3 @@
+Matrix2d md = Matrix2d::Identity() * 0.45;
+Matrix2f mf = Matrix2f::Identity();
+cout << md + mf.cast<double>() << endl;
diff --git a/doc/snippets/MatrixBase_col.cpp b/doc/snippets/MatrixBase_col.cpp
new file mode 100644
index 0000000..87c91b1
--- /dev/null
+++ b/doc/snippets/MatrixBase_col.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Identity();
+m.col(1) = Vector3d(4,5,6);
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_colwise.cpp b/doc/snippets/MatrixBase_colwise.cpp
new file mode 100644
index 0000000..a048bef
--- /dev/null
+++ b/doc/snippets/MatrixBase_colwise.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the sum of each column:" << endl << m.colwise().sum() << endl;
+cout << "Here is the maximum absolute value of each column:"
+     << endl << m.cwiseAbs().colwise().maxCoeff() << endl;
diff --git a/doc/snippets/MatrixBase_computeInverseAndDetWithCheck.cpp b/doc/snippets/MatrixBase_computeInverseAndDetWithCheck.cpp
new file mode 100644
index 0000000..a7b084f
--- /dev/null
+++ b/doc/snippets/MatrixBase_computeInverseAndDetWithCheck.cpp
@@ -0,0 +1,13 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+Matrix3d inverse;
+bool invertible;
+double determinant;
+m.computeInverseAndDetWithCheck(inverse,determinant,invertible);
+cout << "Its determinant is " << determinant << endl;
+if(invertible) {
+  cout << "It is invertible, and its inverse is:" << endl << inverse << endl;
+}
+else {
+  cout << "It is not invertible." << endl;
+}
diff --git a/doc/snippets/MatrixBase_computeInverseWithCheck.cpp b/doc/snippets/MatrixBase_computeInverseWithCheck.cpp
new file mode 100644
index 0000000..873a9f8
--- /dev/null
+++ b/doc/snippets/MatrixBase_computeInverseWithCheck.cpp
@@ -0,0 +1,11 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+Matrix3d inverse;
+bool invertible;
+m.computeInverseWithCheck(inverse,invertible);
+if(invertible) {
+  cout << "It is invertible, and its inverse is:" << endl << inverse << endl;
+}
+else {
+  cout << "It is not invertible." << endl;
+}
diff --git a/doc/snippets/MatrixBase_cwiseAbs.cpp b/doc/snippets/MatrixBase_cwiseAbs.cpp
new file mode 100644
index 0000000..28a3160
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseAbs.cpp
@@ -0,0 +1,4 @@
+MatrixXd m(2,3);
+m << 2, -4, 6,   
+     -5, 1, 0;
+cout << m.cwiseAbs() << endl;
diff --git a/doc/snippets/MatrixBase_cwiseAbs2.cpp b/doc/snippets/MatrixBase_cwiseAbs2.cpp
new file mode 100644
index 0000000..889a2e2
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseAbs2.cpp
@@ -0,0 +1,4 @@
+MatrixXd m(2,3);
+m << 2, -4, 6,   
+     -5, 1, 0;
+cout << m.cwiseAbs2() << endl;
diff --git a/doc/snippets/MatrixBase_cwiseEqual.cpp b/doc/snippets/MatrixBase_cwiseEqual.cpp
new file mode 100644
index 0000000..eb3656f
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseEqual.cpp
@@ -0,0 +1,7 @@
+MatrixXi m(2,2);
+m << 1, 0,
+     1, 1;
+cout << "Comparing m with identity matrix:" << endl;
+cout << m.cwiseEqual(MatrixXi::Identity(2,2)) << endl;
+int count = m.cwiseEqual(MatrixXi::Identity(2,2)).count();
+cout << "Number of coefficients that are equal: " << count << endl;
diff --git a/doc/snippets/MatrixBase_cwiseInverse.cpp b/doc/snippets/MatrixBase_cwiseInverse.cpp
new file mode 100644
index 0000000..23e08f7
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseInverse.cpp
@@ -0,0 +1,4 @@
+MatrixXd m(2,3);
+m << 2, 0.5, 1,   
+     3, 0.25, 1;
+cout << m.cwiseInverse() << endl;
diff --git a/doc/snippets/MatrixBase_cwiseMax.cpp b/doc/snippets/MatrixBase_cwiseMax.cpp
new file mode 100644
index 0000000..3c95681
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseMax.cpp
@@ -0,0 +1,2 @@
+Vector3d v(2,3,4), w(4,2,3);
+cout << v.cwiseMax(w) << endl;
diff --git a/doc/snippets/MatrixBase_cwiseMin.cpp b/doc/snippets/MatrixBase_cwiseMin.cpp
new file mode 100644
index 0000000..82fc761
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseMin.cpp
@@ -0,0 +1,2 @@
+Vector3d v(2,3,4), w(4,2,3);
+cout << v.cwiseMin(w) << endl;
diff --git a/doc/snippets/MatrixBase_cwiseNotEqual.cpp b/doc/snippets/MatrixBase_cwiseNotEqual.cpp
new file mode 100644
index 0000000..6a2e4fb
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseNotEqual.cpp
@@ -0,0 +1,7 @@
+MatrixXi m(2,2);
+m << 1, 0,
+     1, 1;
+cout << "Comparing m with identity matrix:" << endl;
+cout << m.cwiseNotEqual(MatrixXi::Identity(2,2)) << endl;
+int count = m.cwiseNotEqual(MatrixXi::Identity(2,2)).count();
+cout << "Number of coefficients that are not equal: " << count << endl;
diff --git a/doc/snippets/MatrixBase_cwiseProduct.cpp b/doc/snippets/MatrixBase_cwiseProduct.cpp
new file mode 100644
index 0000000..1db3a11
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseProduct.cpp
@@ -0,0 +1,4 @@
+Matrix3i a = Matrix3i::Random(), b = Matrix3i::Random();
+Matrix3i c = a.cwiseProduct(b);
+cout << "a:\n" << a << "\nb:\n" << b << "\nc:\n" << c << endl;
+
diff --git a/doc/snippets/MatrixBase_cwiseQuotient.cpp b/doc/snippets/MatrixBase_cwiseQuotient.cpp
new file mode 100644
index 0000000..9691212
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseQuotient.cpp
@@ -0,0 +1,2 @@
+Vector3d v(2,3,4), w(4,2,3);
+cout << v.cwiseQuotient(w) << endl;
diff --git a/doc/snippets/MatrixBase_cwiseSqrt.cpp b/doc/snippets/MatrixBase_cwiseSqrt.cpp
new file mode 100644
index 0000000..4bfd75d
--- /dev/null
+++ b/doc/snippets/MatrixBase_cwiseSqrt.cpp
@@ -0,0 +1,2 @@
+Vector3d v(1,2,4);
+cout << v.cwiseSqrt() << endl;
diff --git a/doc/snippets/MatrixBase_diagonal.cpp b/doc/snippets/MatrixBase_diagonal.cpp
new file mode 100644
index 0000000..cd63413
--- /dev/null
+++ b/doc/snippets/MatrixBase_diagonal.cpp
@@ -0,0 +1,4 @@
+Matrix3i m = Matrix3i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here are the coefficients on the main diagonal of m:" << endl
+     << m.diagonal() << endl;
diff --git a/doc/snippets/MatrixBase_diagonal_int.cpp b/doc/snippets/MatrixBase_diagonal_int.cpp
new file mode 100644
index 0000000..7b66abf
--- /dev/null
+++ b/doc/snippets/MatrixBase_diagonal_int.cpp
@@ -0,0 +1,5 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here are the coefficients on the 1st super-diagonal and 2nd sub-diagonal of m:" << endl
+     << m.diagonal(1).transpose() << endl
+     << m.diagonal(-2).transpose() << endl;
diff --git a/doc/snippets/MatrixBase_diagonal_template_int.cpp b/doc/snippets/MatrixBase_diagonal_template_int.cpp
new file mode 100644
index 0000000..0e73d1c
--- /dev/null
+++ b/doc/snippets/MatrixBase_diagonal_template_int.cpp
@@ -0,0 +1,5 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here are the coefficients on the 1st super-diagonal and 2nd sub-diagonal of m:" << endl
+     << m.diagonal<1>().transpose() << endl
+     << m.diagonal<-2>().transpose() << endl;
diff --git a/doc/snippets/MatrixBase_eigenvalues.cpp b/doc/snippets/MatrixBase_eigenvalues.cpp
new file mode 100644
index 0000000..039f887
--- /dev/null
+++ b/doc/snippets/MatrixBase_eigenvalues.cpp
@@ -0,0 +1,3 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+VectorXcd eivals = ones.eigenvalues();
+cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << eivals << endl;
diff --git a/doc/snippets/MatrixBase_end_int.cpp b/doc/snippets/MatrixBase_end_int.cpp
new file mode 100644
index 0000000..03c54a9
--- /dev/null
+++ b/doc/snippets/MatrixBase_end_int.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.tail(2):" << endl << v.tail(2) << endl;
+v.tail(2).setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_eval.cpp b/doc/snippets/MatrixBase_eval.cpp
new file mode 100644
index 0000000..1df3aa0
--- /dev/null
+++ b/doc/snippets/MatrixBase_eval.cpp
@@ -0,0 +1,12 @@
+Matrix2f M = Matrix2f::Random();
+Matrix2f m;
+m = M;
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Now we want to copy a row into a column." << endl;
+cout << "If we do m.col(1) = m.row(0), then m becomes:" << endl;
+m.col(1) = m.row(0);
+cout << m << endl << "which is wrong!" << endl;
+cout << "Now let us instead do m.col(1) = m.row(0).eval(). Then m becomes" << endl;
+m = M;
+m.col(1) = m.row(0).eval();
+cout << m << endl << "which is right." << endl;
diff --git a/doc/snippets/MatrixBase_extract.cpp b/doc/snippets/MatrixBase_extract.cpp
new file mode 100644
index 0000000..c96220f
--- /dev/null
+++ b/doc/snippets/MatrixBase_extract.cpp
@@ -0,0 +1,13 @@
+#ifndef _MSC_VER
+  #warning deprecated
+#endif
+/* deprecated
+Matrix3i m = Matrix3i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the upper-triangular matrix extracted from m:" << endl
+     << m.part<Eigen::UpperTriangular>() << endl;
+cout << "Here is the strictly-upper-triangular matrix extracted from m:" << endl
+     << m.part<Eigen::StrictlyUpperTriangular>() << endl;
+cout << "Here is the unit-lower-triangular matrix extracted from m:" << endl
+     << m.part<Eigen::UnitLowerTriangular>() << endl;
+*/
\ No newline at end of file
diff --git a/doc/snippets/MatrixBase_fixedBlock_int_int.cpp b/doc/snippets/MatrixBase_fixedBlock_int_int.cpp
new file mode 100644
index 0000000..3201127
--- /dev/null
+++ b/doc/snippets/MatrixBase_fixedBlock_int_int.cpp
@@ -0,0 +1,5 @@
+Matrix4d m = Vector4d(1,2,3,4).asDiagonal();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.block<2, 2>(2, 2):" << endl << m.block<2, 2>(2, 2) << endl;
+m.block<2, 2>(2, 0) = m.block<2, 2>(2, 2);
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_identity.cpp b/doc/snippets/MatrixBase_identity.cpp
new file mode 100644
index 0000000..b5c1e59
--- /dev/null
+++ b/doc/snippets/MatrixBase_identity.cpp
@@ -0,0 +1 @@
+cout << Matrix<double, 3, 4>::Identity() << endl;
diff --git a/doc/snippets/MatrixBase_identity_int_int.cpp b/doc/snippets/MatrixBase_identity_int_int.cpp
new file mode 100644
index 0000000..918649d
--- /dev/null
+++ b/doc/snippets/MatrixBase_identity_int_int.cpp
@@ -0,0 +1 @@
+cout << MatrixXd::Identity(4, 3) << endl;
diff --git a/doc/snippets/MatrixBase_inverse.cpp b/doc/snippets/MatrixBase_inverse.cpp
new file mode 100644
index 0000000..a56142e
--- /dev/null
+++ b/doc/snippets/MatrixBase_inverse.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Its inverse is:" << endl << m.inverse() << endl;
diff --git a/doc/snippets/MatrixBase_isDiagonal.cpp b/doc/snippets/MatrixBase_isDiagonal.cpp
new file mode 100644
index 0000000..5b1d599
--- /dev/null
+++ b/doc/snippets/MatrixBase_isDiagonal.cpp
@@ -0,0 +1,6 @@
+Matrix3d m = 10000 * Matrix3d::Identity();
+m(0,2) = 1;
+cout << "Here's the matrix m:" << endl << m << endl;
+cout << "m.isDiagonal() returns: " << m.isDiagonal() << endl;
+cout << "m.isDiagonal(1e-3) returns: " << m.isDiagonal(1e-3) << endl;
+
diff --git a/doc/snippets/MatrixBase_isIdentity.cpp b/doc/snippets/MatrixBase_isIdentity.cpp
new file mode 100644
index 0000000..17b756c
--- /dev/null
+++ b/doc/snippets/MatrixBase_isIdentity.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Identity();
+m(0,2) = 1e-4;
+cout << "Here's the matrix m:" << endl << m << endl;
+cout << "m.isIdentity() returns: " << m.isIdentity() << endl;
+cout << "m.isIdentity(1e-3) returns: " << m.isIdentity(1e-3) << endl;
diff --git a/doc/snippets/MatrixBase_isOnes.cpp b/doc/snippets/MatrixBase_isOnes.cpp
new file mode 100644
index 0000000..f82f628
--- /dev/null
+++ b/doc/snippets/MatrixBase_isOnes.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Ones();
+m(0,2) += 1e-4;
+cout << "Here's the matrix m:" << endl << m << endl;
+cout << "m.isOnes() returns: " << m.isOnes() << endl;
+cout << "m.isOnes(1e-3) returns: " << m.isOnes(1e-3) << endl;
diff --git a/doc/snippets/MatrixBase_isOrthogonal.cpp b/doc/snippets/MatrixBase_isOrthogonal.cpp
new file mode 100644
index 0000000..b22af06
--- /dev/null
+++ b/doc/snippets/MatrixBase_isOrthogonal.cpp
@@ -0,0 +1,6 @@
+Vector3d v(1,0,0);
+Vector3d w(1e-4,0,1);
+cout << "Here's the vector v:" << endl << v << endl;
+cout << "Here's the vector w:" << endl << w << endl;
+cout << "v.isOrthogonal(w) returns: " << v.isOrthogonal(w) << endl;
+cout << "v.isOrthogonal(w,1e-3) returns: " << v.isOrthogonal(w,1e-3) << endl;
diff --git a/doc/snippets/MatrixBase_isUnitary.cpp b/doc/snippets/MatrixBase_isUnitary.cpp
new file mode 100644
index 0000000..3877da3
--- /dev/null
+++ b/doc/snippets/MatrixBase_isUnitary.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Identity();
+m(0,2) = 1e-4;
+cout << "Here's the matrix m:" << endl << m << endl;
+cout << "m.isUnitary() returns: " << m.isUnitary() << endl;
+cout << "m.isUnitary(1e-3) returns: " << m.isUnitary(1e-3) << endl;
diff --git a/doc/snippets/MatrixBase_isZero.cpp b/doc/snippets/MatrixBase_isZero.cpp
new file mode 100644
index 0000000..c2cfe22
--- /dev/null
+++ b/doc/snippets/MatrixBase_isZero.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Zero();
+m(0,2) = 1e-4;
+cout << "Here's the matrix m:" << endl << m << endl;
+cout << "m.isZero() returns: " << m.isZero() << endl;
+cout << "m.isZero(1e-3) returns: " << m.isZero(1e-3) << endl;
diff --git a/doc/snippets/MatrixBase_leftCols_int.cpp b/doc/snippets/MatrixBase_leftCols_int.cpp
new file mode 100644
index 0000000..6ea984e
--- /dev/null
+++ b/doc/snippets/MatrixBase_leftCols_int.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.leftCols(2):" << endl;
+cout << a.leftCols(2) << endl;
+a.leftCols(2).setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_marked.cpp b/doc/snippets/MatrixBase_marked.cpp
new file mode 100644
index 0000000..f607121
--- /dev/null
+++ b/doc/snippets/MatrixBase_marked.cpp
@@ -0,0 +1,14 @@
+#ifndef _MSC_VER
+  #warning deprecated
+#endif
+/*
+Matrix3d m = Matrix3d::Zero();
+m.part<Eigen::UpperTriangular>().setOnes();
+cout << "Here is the matrix m:" << endl << m << endl;
+Matrix3d n = Matrix3d::Ones();
+n.part<Eigen::LowerTriangular>() *= 2;
+cout << "Here is the matrix n:" << endl << n << endl;
+cout << "And now here is m.inverse()*n, taking advantage of the fact that"
+        " m is upper-triangular:" << endl
+     << m.marked<Eigen::UpperTriangular>().solveTriangular(n);
+*/
\ No newline at end of file
diff --git a/doc/snippets/MatrixBase_noalias.cpp b/doc/snippets/MatrixBase_noalias.cpp
new file mode 100644
index 0000000..3b54a79
--- /dev/null
+++ b/doc/snippets/MatrixBase_noalias.cpp
@@ -0,0 +1,3 @@
+Matrix2d a, b, c; a << 1,2,3,4; b << 5,6,7,8;
+c.noalias() = a * b; // this computes the product directly to c
+cout << c << endl;
diff --git a/doc/snippets/MatrixBase_ones.cpp b/doc/snippets/MatrixBase_ones.cpp
new file mode 100644
index 0000000..02c767c
--- /dev/null
+++ b/doc/snippets/MatrixBase_ones.cpp
@@ -0,0 +1,2 @@
+cout << Matrix2d::Ones() << endl;
+cout << 6 * RowVector4i::Ones() << endl;
diff --git a/doc/snippets/MatrixBase_ones_int.cpp b/doc/snippets/MatrixBase_ones_int.cpp
new file mode 100644
index 0000000..2ef188e
--- /dev/null
+++ b/doc/snippets/MatrixBase_ones_int.cpp
@@ -0,0 +1,2 @@
+cout << 6 * RowVectorXi::Ones(4) << endl;
+cout << VectorXf::Ones(2) << endl;
diff --git a/doc/snippets/MatrixBase_ones_int_int.cpp b/doc/snippets/MatrixBase_ones_int_int.cpp
new file mode 100644
index 0000000..60f5a31
--- /dev/null
+++ b/doc/snippets/MatrixBase_ones_int_int.cpp
@@ -0,0 +1 @@
+cout << MatrixXi::Ones(2,3) << endl;
diff --git a/doc/snippets/MatrixBase_operatorNorm.cpp b/doc/snippets/MatrixBase_operatorNorm.cpp
new file mode 100644
index 0000000..355246f
--- /dev/null
+++ b/doc/snippets/MatrixBase_operatorNorm.cpp
@@ -0,0 +1,3 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+cout << "The operator norm of the 3x3 matrix of ones is "
+     << ones.operatorNorm() << endl;
diff --git a/doc/snippets/MatrixBase_part.cpp b/doc/snippets/MatrixBase_part.cpp
new file mode 100644
index 0000000..d3e7f48
--- /dev/null
+++ b/doc/snippets/MatrixBase_part.cpp
@@ -0,0 +1,13 @@
+#ifndef _MSC_VER
+  #warning deprecated
+#endif
+/*
+Matrix3d m = Matrix3d::Zero();
+m.part<Eigen::StrictlyUpperTriangular>().setOnes();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "And let us now compute m*m.adjoint() in a very optimized way" << endl
+     << "taking advantage of the symmetry." << endl;
+Matrix3d n;
+n.part<Eigen::SelfAdjoint>() = (m*m.adjoint()).lazy();
+cout << "The result is:" << endl << n << endl;
+*/
\ No newline at end of file
diff --git a/doc/snippets/MatrixBase_prod.cpp b/doc/snippets/MatrixBase_prod.cpp
new file mode 100644
index 0000000..d2f27bd
--- /dev/null
+++ b/doc/snippets/MatrixBase_prod.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the product of all the coefficients:" << endl << m.prod() << endl;
diff --git a/doc/snippets/MatrixBase_random.cpp b/doc/snippets/MatrixBase_random.cpp
new file mode 100644
index 0000000..65fc524
--- /dev/null
+++ b/doc/snippets/MatrixBase_random.cpp
@@ -0,0 +1 @@
+cout << 100 * Matrix2i::Random() << endl;
diff --git a/doc/snippets/MatrixBase_random_int.cpp b/doc/snippets/MatrixBase_random_int.cpp
new file mode 100644
index 0000000..f161d03
--- /dev/null
+++ b/doc/snippets/MatrixBase_random_int.cpp
@@ -0,0 +1 @@
+cout << VectorXi::Random(2) << endl;
diff --git a/doc/snippets/MatrixBase_random_int_int.cpp b/doc/snippets/MatrixBase_random_int_int.cpp
new file mode 100644
index 0000000..3f0f7dd
--- /dev/null
+++ b/doc/snippets/MatrixBase_random_int_int.cpp
@@ -0,0 +1 @@
+cout << MatrixXi::Random(2,3) << endl;
diff --git a/doc/snippets/MatrixBase_replicate.cpp b/doc/snippets/MatrixBase_replicate.cpp
new file mode 100644
index 0000000..3ce52bc
--- /dev/null
+++ b/doc/snippets/MatrixBase_replicate.cpp
@@ -0,0 +1,4 @@
+MatrixXi m = MatrixXi::Random(2,3);
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "m.replicate<3,2>() = ..." << endl;
+cout << m.replicate<3,2>() << endl;
diff --git a/doc/snippets/MatrixBase_replicate_int_int.cpp b/doc/snippets/MatrixBase_replicate_int_int.cpp
new file mode 100644
index 0000000..b1dbc70
--- /dev/null
+++ b/doc/snippets/MatrixBase_replicate_int_int.cpp
@@ -0,0 +1,4 @@
+Vector3i v = Vector3i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "v.replicate(2,5) = ..." << endl;
+cout << v.replicate(2,5) << endl;
diff --git a/doc/snippets/MatrixBase_reverse.cpp b/doc/snippets/MatrixBase_reverse.cpp
new file mode 100644
index 0000000..f545a28
--- /dev/null
+++ b/doc/snippets/MatrixBase_reverse.cpp
@@ -0,0 +1,8 @@
+MatrixXi m = MatrixXi::Random(3,4);
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the reverse of m:" << endl << m.reverse() << endl;
+cout << "Here is the coefficient (1,0) in the reverse of m:" << endl
+     << m.reverse()(1,0) << endl;
+cout << "Let us overwrite this coefficient with the value 4." << endl;
+m.reverse()(1,0) = 4;
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_rightCols_int.cpp b/doc/snippets/MatrixBase_rightCols_int.cpp
new file mode 100644
index 0000000..cb51340
--- /dev/null
+++ b/doc/snippets/MatrixBase_rightCols_int.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.rightCols(2):" << endl;
+cout << a.rightCols(2) << endl;
+a.rightCols(2).setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_row.cpp b/doc/snippets/MatrixBase_row.cpp
new file mode 100644
index 0000000..b15e626
--- /dev/null
+++ b/doc/snippets/MatrixBase_row.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Identity();
+m.row(1) = Vector3d(4,5,6);
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_rowwise.cpp b/doc/snippets/MatrixBase_rowwise.cpp
new file mode 100644
index 0000000..ae93964
--- /dev/null
+++ b/doc/snippets/MatrixBase_rowwise.cpp
@@ -0,0 +1,5 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the sum of each row:" << endl << m.rowwise().sum() << endl;
+cout << "Here is the maximum absolute value of each row:"
+     << endl << m.cwiseAbs().rowwise().maxCoeff() << endl;
diff --git a/doc/snippets/MatrixBase_segment_int_int.cpp b/doc/snippets/MatrixBase_segment_int_int.cpp
new file mode 100644
index 0000000..70cd6d2
--- /dev/null
+++ b/doc/snippets/MatrixBase_segment_int_int.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.segment(1, 2):" << endl << v.segment(1, 2) << endl;
+v.segment(1, 2).setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_select.cpp b/doc/snippets/MatrixBase_select.cpp
new file mode 100644
index 0000000..ae5477f
--- /dev/null
+++ b/doc/snippets/MatrixBase_select.cpp
@@ -0,0 +1,6 @@
+MatrixXi m(3, 3);
+m << 1, 2, 3,
+     4, 5, 6,
+     7, 8, 9;
+m = (m.array() >= 5).select(-m, m);
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_set.cpp b/doc/snippets/MatrixBase_set.cpp
new file mode 100644
index 0000000..50ecf5f
--- /dev/null
+++ b/doc/snippets/MatrixBase_set.cpp
@@ -0,0 +1,13 @@
+Matrix3i m1;
+m1 << 1, 2, 3,
+      4, 5, 6,
+      7, 8, 9;
+cout << m1 << endl << endl;
+Matrix3i m2 = Matrix3i::Identity();
+m2.block(0,0, 2,2) << 10, 11, 12, 13;
+cout << m2 << endl << endl;
+Vector2i v1;
+v1 << 14, 15;
+m2 << v1.transpose(), 16,
+      v1, m1.block(1,1,2,2);
+cout << m2 << endl;
diff --git a/doc/snippets/MatrixBase_setIdentity.cpp b/doc/snippets/MatrixBase_setIdentity.cpp
new file mode 100644
index 0000000..4fd0aa2
--- /dev/null
+++ b/doc/snippets/MatrixBase_setIdentity.cpp
@@ -0,0 +1,3 @@
+Matrix4i m = Matrix4i::Zero();
+m.block<3,3>(1,0).setIdentity();
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_setOnes.cpp b/doc/snippets/MatrixBase_setOnes.cpp
new file mode 100644
index 0000000..4cef9c1
--- /dev/null
+++ b/doc/snippets/MatrixBase_setOnes.cpp
@@ -0,0 +1,3 @@
+Matrix4i m = Matrix4i::Random();
+m.row(1).setOnes();
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_setRandom.cpp b/doc/snippets/MatrixBase_setRandom.cpp
new file mode 100644
index 0000000..e2c257d
--- /dev/null
+++ b/doc/snippets/MatrixBase_setRandom.cpp
@@ -0,0 +1,3 @@
+Matrix4i m = Matrix4i::Zero();
+m.col(1).setRandom();
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_setZero.cpp b/doc/snippets/MatrixBase_setZero.cpp
new file mode 100644
index 0000000..9b5b958
--- /dev/null
+++ b/doc/snippets/MatrixBase_setZero.cpp
@@ -0,0 +1,3 @@
+Matrix4i m = Matrix4i::Random();
+m.row(1).setZero();
+cout << m << endl;
diff --git a/doc/snippets/MatrixBase_start_int.cpp b/doc/snippets/MatrixBase_start_int.cpp
new file mode 100644
index 0000000..c261d2b
--- /dev/null
+++ b/doc/snippets/MatrixBase_start_int.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.head(2):" << endl << v.head(2) << endl;
+v.head(2).setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_template_int_bottomRows.cpp b/doc/snippets/MatrixBase_template_int_bottomRows.cpp
new file mode 100644
index 0000000..f9ea892
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_bottomRows.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.bottomRows<2>():" << endl;
+cout << a.bottomRows<2>() << endl;
+a.bottomRows<2>().setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_template_int_end.cpp b/doc/snippets/MatrixBase_template_int_end.cpp
new file mode 100644
index 0000000..f5ccb00
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_end.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.tail<2>():" << endl << v.tail<2>() << endl;
+v.tail<2>().setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_template_int_int_bottomLeftCorner.cpp b/doc/snippets/MatrixBase_template_int_int_bottomLeftCorner.cpp
new file mode 100644
index 0000000..847892a
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_int_bottomLeftCorner.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.bottomLeftCorner<2,2>():" << endl;
+cout << m.bottomLeftCorner<2,2>() << endl;
+m.bottomLeftCorner<2,2>().setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_template_int_int_bottomRightCorner.cpp b/doc/snippets/MatrixBase_template_int_int_bottomRightCorner.cpp
new file mode 100644
index 0000000..abacb01
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_int_bottomRightCorner.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.bottomRightCorner<2,2>():" << endl;
+cout << m.bottomRightCorner<2,2>() << endl;
+m.bottomRightCorner<2,2>().setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_template_int_int_topLeftCorner.cpp b/doc/snippets/MatrixBase_template_int_int_topLeftCorner.cpp
new file mode 100644
index 0000000..1899d90
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_int_topLeftCorner.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.topLeftCorner<2,2>():" << endl;
+cout << m.topLeftCorner<2,2>() << endl;
+m.topLeftCorner<2,2>().setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_template_int_int_topRightCorner.cpp b/doc/snippets/MatrixBase_template_int_int_topRightCorner.cpp
new file mode 100644
index 0000000..c3a1771
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_int_topRightCorner.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.topRightCorner<2,2>():" << endl;
+cout << m.topRightCorner<2,2>() << endl;
+m.topRightCorner<2,2>().setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_template_int_leftCols.cpp b/doc/snippets/MatrixBase_template_int_leftCols.cpp
new file mode 100644
index 0000000..1c425d9
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_leftCols.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.leftCols<2>():" << endl;
+cout << a.leftCols<2>() << endl;
+a.leftCols<2>().setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_template_int_rightCols.cpp b/doc/snippets/MatrixBase_template_int_rightCols.cpp
new file mode 100644
index 0000000..fc8c0d9
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_rightCols.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.rightCols<2>():" << endl;
+cout << a.rightCols<2>() << endl;
+a.rightCols<2>().setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_template_int_segment.cpp b/doc/snippets/MatrixBase_template_int_segment.cpp
new file mode 100644
index 0000000..e448b40
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_segment.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.segment<2>(1):" << endl << v.segment<2>(1) << endl;
+v.segment<2>(1).setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_template_int_start.cpp b/doc/snippets/MatrixBase_template_int_start.cpp
new file mode 100644
index 0000000..d336b37
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_start.cpp
@@ -0,0 +1,5 @@
+RowVector4i v = RowVector4i::Random();
+cout << "Here is the vector v:" << endl << v << endl;
+cout << "Here is v.head<2>():" << endl << v.head<2>() << endl;
+v.head<2>().setZero();
+cout << "Now the vector v is:" << endl << v << endl;
diff --git a/doc/snippets/MatrixBase_template_int_topRows.cpp b/doc/snippets/MatrixBase_template_int_topRows.cpp
new file mode 100644
index 0000000..0110251
--- /dev/null
+++ b/doc/snippets/MatrixBase_template_int_topRows.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.topRows<2>():" << endl;
+cout << a.topRows<2>() << endl;
+a.topRows<2>().setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_topLeftCorner_int_int.cpp b/doc/snippets/MatrixBase_topLeftCorner_int_int.cpp
new file mode 100644
index 0000000..e52cb3b
--- /dev/null
+++ b/doc/snippets/MatrixBase_topLeftCorner_int_int.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.topLeftCorner(2, 2):" << endl;
+cout << m.topLeftCorner(2, 2) << endl;
+m.topLeftCorner(2, 2).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_topRightCorner_int_int.cpp b/doc/snippets/MatrixBase_topRightCorner_int_int.cpp
new file mode 100644
index 0000000..811fa56
--- /dev/null
+++ b/doc/snippets/MatrixBase_topRightCorner_int_int.cpp
@@ -0,0 +1,6 @@
+Matrix4i m = Matrix4i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is m.topRightCorner(2, 2):" << endl;
+cout << m.topRightCorner(2, 2) << endl;
+m.topRightCorner(2, 2).setZero();
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_topRows_int.cpp b/doc/snippets/MatrixBase_topRows_int.cpp
new file mode 100644
index 0000000..f2d75f1
--- /dev/null
+++ b/doc/snippets/MatrixBase_topRows_int.cpp
@@ -0,0 +1,6 @@
+Array44i a = Array44i::Random();
+cout << "Here is the array a:" << endl << a << endl;
+cout << "Here is a.topRows(2):" << endl;
+cout << a.topRows(2) << endl;
+a.topRows(2).setZero();
+cout << "Now the array a is:" << endl << a << endl;
diff --git a/doc/snippets/MatrixBase_transpose.cpp b/doc/snippets/MatrixBase_transpose.cpp
new file mode 100644
index 0000000..88eea83
--- /dev/null
+++ b/doc/snippets/MatrixBase_transpose.cpp
@@ -0,0 +1,8 @@
+Matrix2i m = Matrix2i::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the transpose of m:" << endl << m.transpose() << endl;
+cout << "Here is the coefficient (1,0) in the transpose of m:" << endl
+     << m.transpose()(1,0) << endl;
+cout << "Let us overwrite this coefficient with the value 0." << endl;
+m.transpose()(1,0) = 0;
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/MatrixBase_zero.cpp b/doc/snippets/MatrixBase_zero.cpp
new file mode 100644
index 0000000..6064936
--- /dev/null
+++ b/doc/snippets/MatrixBase_zero.cpp
@@ -0,0 +1,2 @@
+cout << Matrix2d::Zero() << endl;
+cout << RowVector4i::Zero() << endl;
diff --git a/doc/snippets/MatrixBase_zero_int.cpp b/doc/snippets/MatrixBase_zero_int.cpp
new file mode 100644
index 0000000..370a9ba
--- /dev/null
+++ b/doc/snippets/MatrixBase_zero_int.cpp
@@ -0,0 +1,2 @@
+cout << RowVectorXi::Zero(4) << endl;
+cout << VectorXf::Zero(2) << endl;
diff --git a/doc/snippets/MatrixBase_zero_int_int.cpp b/doc/snippets/MatrixBase_zero_int_int.cpp
new file mode 100644
index 0000000..4099c5d
--- /dev/null
+++ b/doc/snippets/MatrixBase_zero_int_int.cpp
@@ -0,0 +1 @@
+cout << MatrixXi::Zero(2,3) << endl;
diff --git a/doc/snippets/Matrix_resize_NoChange_int.cpp b/doc/snippets/Matrix_resize_NoChange_int.cpp
new file mode 100644
index 0000000..acdf18c
--- /dev/null
+++ b/doc/snippets/Matrix_resize_NoChange_int.cpp
@@ -0,0 +1,3 @@
+MatrixXd m(3,4);
+m.resize(NoChange, 5);
+cout << "m: " << m.rows() << " rows, " << m.cols() << " cols" << endl;
diff --git a/doc/snippets/Matrix_resize_int.cpp b/doc/snippets/Matrix_resize_int.cpp
new file mode 100644
index 0000000..044c789
--- /dev/null
+++ b/doc/snippets/Matrix_resize_int.cpp
@@ -0,0 +1,6 @@
+VectorXd v(10);
+v.resize(3);
+RowVector3d w;
+w.resize(3); // this is legal, but has no effect
+cout << "v: " << v.rows() << " rows, " << v.cols() << " cols" << endl;
+cout << "w: " << w.rows() << " rows, " << w.cols() << " cols" << endl;
diff --git a/doc/snippets/Matrix_resize_int_NoChange.cpp b/doc/snippets/Matrix_resize_int_NoChange.cpp
new file mode 100644
index 0000000..5c37c90
--- /dev/null
+++ b/doc/snippets/Matrix_resize_int_NoChange.cpp
@@ -0,0 +1,3 @@
+MatrixXd m(3,4);
+m.resize(5, NoChange);
+cout << "m: " << m.rows() << " rows, " << m.cols() << " cols" << endl;
diff --git a/doc/snippets/Matrix_resize_int_int.cpp b/doc/snippets/Matrix_resize_int_int.cpp
new file mode 100644
index 0000000..bfd4741
--- /dev/null
+++ b/doc/snippets/Matrix_resize_int_int.cpp
@@ -0,0 +1,9 @@
+MatrixXd m(2,3);
+m << 1,2,3,4,5,6;
+cout << "here's the 2x3 matrix m:" << endl << m << endl;
+cout << "let's resize m to 3x2. This is a conservative resizing because 2*3==3*2." << endl;
+m.resize(3,2);
+cout << "here's the 3x2 matrix m:" << endl << m << endl;
+cout << "now let's resize m to size 2x2. This is NOT a conservative resizing, so it becomes uninitialized:" << endl;
+m.resize(2,2);
+cout << m << endl;
diff --git a/doc/snippets/Matrix_setConstant_int.cpp b/doc/snippets/Matrix_setConstant_int.cpp
new file mode 100644
index 0000000..ff5a86c
--- /dev/null
+++ b/doc/snippets/Matrix_setConstant_int.cpp
@@ -0,0 +1,3 @@
+VectorXf v;
+v.setConstant(3, 5);
+cout << v << endl;
diff --git a/doc/snippets/Matrix_setConstant_int_int.cpp b/doc/snippets/Matrix_setConstant_int_int.cpp
new file mode 100644
index 0000000..32b950c
--- /dev/null
+++ b/doc/snippets/Matrix_setConstant_int_int.cpp
@@ -0,0 +1,3 @@
+MatrixXf m;
+m.setConstant(3, 3, 5);
+cout << m << endl;
diff --git a/doc/snippets/Matrix_setIdentity_int_int.cpp b/doc/snippets/Matrix_setIdentity_int_int.cpp
new file mode 100644
index 0000000..a659671
--- /dev/null
+++ b/doc/snippets/Matrix_setIdentity_int_int.cpp
@@ -0,0 +1,3 @@
+MatrixXf m;
+m.setIdentity(3, 3);
+cout << m << endl;
diff --git a/doc/snippets/Matrix_setOnes_int.cpp b/doc/snippets/Matrix_setOnes_int.cpp
new file mode 100644
index 0000000..752cb35
--- /dev/null
+++ b/doc/snippets/Matrix_setOnes_int.cpp
@@ -0,0 +1,3 @@
+VectorXf v;
+v.setOnes(3);
+cout << v << endl;
diff --git a/doc/snippets/Matrix_setOnes_int_int.cpp b/doc/snippets/Matrix_setOnes_int_int.cpp
new file mode 100644
index 0000000..1ffb66b
--- /dev/null
+++ b/doc/snippets/Matrix_setOnes_int_int.cpp
@@ -0,0 +1,3 @@
+MatrixXf m;
+m.setOnes(3, 3);
+cout << m << endl;
diff --git a/doc/snippets/Matrix_setRandom_int.cpp b/doc/snippets/Matrix_setRandom_int.cpp
new file mode 100644
index 0000000..e160dd7
--- /dev/null
+++ b/doc/snippets/Matrix_setRandom_int.cpp
@@ -0,0 +1,3 @@
+VectorXf v;
+v.setRandom(3);
+cout << v << endl;
diff --git a/doc/snippets/Matrix_setRandom_int_int.cpp b/doc/snippets/Matrix_setRandom_int_int.cpp
new file mode 100644
index 0000000..80cda11
--- /dev/null
+++ b/doc/snippets/Matrix_setRandom_int_int.cpp
@@ -0,0 +1,3 @@
+MatrixXf m;
+m.setRandom(3, 3);
+cout << m << endl;
diff --git a/doc/snippets/Matrix_setZero_int.cpp b/doc/snippets/Matrix_setZero_int.cpp
new file mode 100644
index 0000000..0fb16c1
--- /dev/null
+++ b/doc/snippets/Matrix_setZero_int.cpp
@@ -0,0 +1,3 @@
+VectorXf v;
+v.setZero(3);
+cout << v << endl;
diff --git a/doc/snippets/Matrix_setZero_int_int.cpp b/doc/snippets/Matrix_setZero_int_int.cpp
new file mode 100644
index 0000000..ad883b9
--- /dev/null
+++ b/doc/snippets/Matrix_setZero_int_int.cpp
@@ -0,0 +1,3 @@
+MatrixXf m;
+m.setZero(3, 3);
+cout << m << endl;
diff --git a/doc/snippets/PartialPivLU_solve.cpp b/doc/snippets/PartialPivLU_solve.cpp
new file mode 100644
index 0000000..fa3570a
--- /dev/null
+++ b/doc/snippets/PartialPivLU_solve.cpp
@@ -0,0 +1,7 @@
+MatrixXd A = MatrixXd::Random(3,3);
+MatrixXd B = MatrixXd::Random(3,2);
+cout << "Here is the invertible matrix A:" << endl << A << endl;
+cout << "Here is the matrix B:" << endl << B << endl;
+MatrixXd X = A.lu().solve(B);
+cout << "Here is the (unique) solution X to the equation AX=B:" << endl << X << endl;
+cout << "Relative error: " << (A*X-B).norm() / B.norm() << endl;
diff --git a/doc/snippets/PartialRedux_count.cpp b/doc/snippets/PartialRedux_count.cpp
new file mode 100644
index 0000000..c7b3097
--- /dev/null
+++ b/doc/snippets/PartialRedux_count.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the count of elements greater than or equal to 0.5 in each row:" << endl << (m.array() >= 0.5).rowwise().count() << endl;
diff --git a/doc/snippets/PartialRedux_maxCoeff.cpp b/doc/snippets/PartialRedux_maxCoeff.cpp
new file mode 100644
index 0000000..e8fd382
--- /dev/null
+++ b/doc/snippets/PartialRedux_maxCoeff.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the maximum of each column:" << endl << m.colwise().maxCoeff() << endl;
diff --git a/doc/snippets/PartialRedux_minCoeff.cpp b/doc/snippets/PartialRedux_minCoeff.cpp
new file mode 100644
index 0000000..d717bc0
--- /dev/null
+++ b/doc/snippets/PartialRedux_minCoeff.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the minimum of each column:" << endl << m.colwise().minCoeff() << endl;
diff --git a/doc/snippets/PartialRedux_norm.cpp b/doc/snippets/PartialRedux_norm.cpp
new file mode 100644
index 0000000..dbcf290
--- /dev/null
+++ b/doc/snippets/PartialRedux_norm.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the norm of each column:" << endl << m.colwise().norm() << endl;
diff --git a/doc/snippets/PartialRedux_prod.cpp b/doc/snippets/PartialRedux_prod.cpp
new file mode 100644
index 0000000..aacf09c
--- /dev/null
+++ b/doc/snippets/PartialRedux_prod.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the product of each row:" << endl << m.rowwise().prod() << endl;
diff --git a/doc/snippets/PartialRedux_squaredNorm.cpp b/doc/snippets/PartialRedux_squaredNorm.cpp
new file mode 100644
index 0000000..9f3293e
--- /dev/null
+++ b/doc/snippets/PartialRedux_squaredNorm.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the squared norm of each row:" << endl << m.rowwise().squaredNorm() << endl;
diff --git a/doc/snippets/PartialRedux_sum.cpp b/doc/snippets/PartialRedux_sum.cpp
new file mode 100644
index 0000000..ec82d3e
--- /dev/null
+++ b/doc/snippets/PartialRedux_sum.cpp
@@ -0,0 +1,3 @@
+Matrix3d m = Matrix3d::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the sum of each row:" << endl << m.rowwise().sum() << endl;
diff --git a/doc/snippets/RealSchur_RealSchur_MatrixType.cpp b/doc/snippets/RealSchur_RealSchur_MatrixType.cpp
new file mode 100644
index 0000000..a5530dc
--- /dev/null
+++ b/doc/snippets/RealSchur_RealSchur_MatrixType.cpp
@@ -0,0 +1,10 @@
+MatrixXd A = MatrixXd::Random(6,6);
+cout << "Here is a random 6x6 matrix, A:" << endl << A << endl << endl;
+
+RealSchur<MatrixXd> schur(A);
+cout << "The orthogonal matrix U is:" << endl << schur.matrixU() << endl;
+cout << "The quasi-triangular matrix T is:" << endl << schur.matrixT() << endl << endl;
+
+MatrixXd U = schur.matrixU();
+MatrixXd T = schur.matrixT();
+cout << "U * T * U^T = " << endl << U * T * U.transpose() << endl;
diff --git a/doc/snippets/RealSchur_compute.cpp b/doc/snippets/RealSchur_compute.cpp
new file mode 100644
index 0000000..20c2611
--- /dev/null
+++ b/doc/snippets/RealSchur_compute.cpp
@@ -0,0 +1,6 @@
+MatrixXf A = MatrixXf::Random(4,4);
+RealSchur<MatrixXf> schur(4);
+schur.compute(A, /* computeU = */ false);
+cout << "The matrix T in the decomposition of A is:" << endl << schur.matrixT() << endl;
+schur.compute(A.inverse(), /* computeU = */ false);
+cout << "The matrix T in the decomposition of A^(-1) is:" << endl << schur.matrixT() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver.cpp b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver.cpp
new file mode 100644
index 0000000..73a7f62
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver.cpp
@@ -0,0 +1,7 @@
+SelfAdjointEigenSolver<Matrix4f> es;
+Matrix4f X = Matrix4f::Random(4,4);
+Matrix4f A = X + X.transpose();
+es.compute(A);
+cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
+es.compute(A + Matrix4f::Identity(4,4)); // re-use es to compute eigenvalues of A+I
+cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.cpp b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.cpp
new file mode 100644
index 0000000..3599b17
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType.cpp
@@ -0,0 +1,17 @@
+MatrixXd X = MatrixXd::Random(5,5);
+MatrixXd A = X + X.transpose();
+cout << "Here is a random symmetric 5x5 matrix, A:" << endl << A << endl << endl;
+
+SelfAdjointEigenSolver<MatrixXd> es(A);
+cout << "The eigenvalues of A are:" << endl << es.eigenvalues() << endl;
+cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl;
+
+double lambda = es.eigenvalues()[0];
+cout << "Consider the first eigenvalue, lambda = " << lambda << endl;
+VectorXd v = es.eigenvectors().col(0);
+cout << "If v is the corresponding eigenvector, then lambda * v = " << endl << lambda * v << endl;
+cout << "... and A * v = " << endl << A * v << endl << endl;
+
+MatrixXd D = es.eigenvalues().asDiagonal();
+MatrixXd V = es.eigenvectors();
+cout << "Finally, V * D * V^(-1) = " << endl << V * D * V.inverse() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.cpp b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.cpp
new file mode 100644
index 0000000..bbb821e
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_SelfAdjointEigenSolver_MatrixType2.cpp
@@ -0,0 +1,16 @@
+MatrixXd X = MatrixXd::Random(5,5);
+MatrixXd A = X + X.transpose();
+cout << "Here is a random symmetric matrix, A:" << endl << A << endl;
+X = MatrixXd::Random(5,5);
+MatrixXd B = X * X.transpose();
+cout << "and a random positive-definite matrix, B:" << endl << B << endl << endl;
+
+GeneralizedSelfAdjointEigenSolver<MatrixXd> es(A,B);
+cout << "The eigenvalues of the pencil (A,B) are:" << endl << es.eigenvalues() << endl;
+cout << "The matrix of eigenvectors, V, is:" << endl << es.eigenvectors() << endl << endl;
+
+double lambda = es.eigenvalues()[0];
+cout << "Consider the first eigenvalue, lambda = " << lambda << endl;
+VectorXd v = es.eigenvectors().col(0);
+cout << "If v is the corresponding eigenvector, then A * v = " << endl << A * v << endl;
+cout << "... and lambda * B * v = " << endl << lambda * B * v << endl << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType.cpp b/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType.cpp
new file mode 100644
index 0000000..2975cc3
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType.cpp
@@ -0,0 +1,7 @@
+SelfAdjointEigenSolver<MatrixXf> es(4);
+MatrixXf X = MatrixXf::Random(4,4);
+MatrixXf A = X + X.transpose();
+es.compute(A);
+cout << "The eigenvalues of A are: " << es.eigenvalues().transpose() << endl;
+es.compute(A + MatrixXf::Identity(4,4)); // re-use es to compute eigenvalues of A+I
+cout << "The eigenvalues of A+I are: " << es.eigenvalues().transpose() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType2.cpp b/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType2.cpp
new file mode 100644
index 0000000..07c92a1
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_compute_MatrixType2.cpp
@@ -0,0 +1,9 @@
+MatrixXd X = MatrixXd::Random(5,5);
+MatrixXd A = X * X.transpose();
+X = MatrixXd::Random(5,5);
+MatrixXd B = X * X.transpose();
+
+GeneralizedSelfAdjointEigenSolver<MatrixXd> es(A,B,EigenvaluesOnly);
+cout << "The eigenvalues of the pencil (A,B) are:" << endl << es.eigenvalues() << endl;
+es.compute(B,A,EigenvaluesOnly);
+cout << "The eigenvalues of the pencil (B,A) are:" << endl << es.eigenvalues() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_eigenvalues.cpp b/doc/snippets/SelfAdjointEigenSolver_eigenvalues.cpp
new file mode 100644
index 0000000..0ff33c6
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_eigenvalues.cpp
@@ -0,0 +1,4 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+SelfAdjointEigenSolver<MatrixXd> es(ones);
+cout << "The eigenvalues of the 3x3 matrix of ones are:" 
+     << endl << es.eigenvalues() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_eigenvectors.cpp b/doc/snippets/SelfAdjointEigenSolver_eigenvectors.cpp
new file mode 100644
index 0000000..cfc8b0d
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_eigenvectors.cpp
@@ -0,0 +1,4 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+SelfAdjointEigenSolver<MatrixXd> es(ones);
+cout << "The first eigenvector of the 3x3 matrix of ones is:"
+     << endl << es.eigenvectors().col(0) << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_operatorInverseSqrt.cpp b/doc/snippets/SelfAdjointEigenSolver_operatorInverseSqrt.cpp
new file mode 100644
index 0000000..114c65f
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_operatorInverseSqrt.cpp
@@ -0,0 +1,9 @@
+MatrixXd X = MatrixXd::Random(4,4);
+MatrixXd A = X * X.transpose();
+cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl;
+
+SelfAdjointEigenSolver<MatrixXd> es(A);
+cout << "The inverse square root of A is: " << endl;
+cout << es.operatorInverseSqrt() << endl;
+cout << "We can also compute it with operatorSqrt() and inverse(). That yields: " << endl;
+cout << es.operatorSqrt().inverse() << endl;
diff --git a/doc/snippets/SelfAdjointEigenSolver_operatorSqrt.cpp b/doc/snippets/SelfAdjointEigenSolver_operatorSqrt.cpp
new file mode 100644
index 0000000..eeacca7
--- /dev/null
+++ b/doc/snippets/SelfAdjointEigenSolver_operatorSqrt.cpp
@@ -0,0 +1,8 @@
+MatrixXd X = MatrixXd::Random(4,4);
+MatrixXd A = X * X.transpose();
+cout << "Here is a random positive-definite matrix, A:" << endl << A << endl << endl;
+
+SelfAdjointEigenSolver<MatrixXd> es(A);
+MatrixXd sqrtA = es.operatorSqrt();
+cout << "The square root of A is: " << endl << sqrtA << endl;
+cout << "If we square this, we get: " << endl << sqrtA*sqrtA << endl;
diff --git a/doc/snippets/SelfAdjointView_eigenvalues.cpp b/doc/snippets/SelfAdjointView_eigenvalues.cpp
new file mode 100644
index 0000000..be19867
--- /dev/null
+++ b/doc/snippets/SelfAdjointView_eigenvalues.cpp
@@ -0,0 +1,3 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+VectorXd eivals = ones.selfadjointView<Lower>().eigenvalues();
+cout << "The eigenvalues of the 3x3 matrix of ones are:" << endl << eivals << endl;
diff --git a/doc/snippets/SelfAdjointView_operatorNorm.cpp b/doc/snippets/SelfAdjointView_operatorNorm.cpp
new file mode 100644
index 0000000..f380f55
--- /dev/null
+++ b/doc/snippets/SelfAdjointView_operatorNorm.cpp
@@ -0,0 +1,3 @@
+MatrixXd ones = MatrixXd::Ones(3,3);
+cout << "The operator norm of the 3x3 matrix of ones is "
+     << ones.selfadjointView<Lower>().operatorNorm() << endl;
diff --git a/doc/snippets/TopicAliasing_block.cpp b/doc/snippets/TopicAliasing_block.cpp
new file mode 100644
index 0000000..03282f4
--- /dev/null
+++ b/doc/snippets/TopicAliasing_block.cpp
@@ -0,0 +1,7 @@
+MatrixXi mat(3,3); 
+mat << 1, 2, 3,   4, 5, 6,   7, 8, 9;
+cout << "Here is the matrix mat:\n" << mat << endl;
+
+// This assignment shows the aliasing problem
+mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2);
+cout << "After the assignment, mat = \n" << mat << endl;
diff --git a/doc/snippets/TopicAliasing_block_correct.cpp b/doc/snippets/TopicAliasing_block_correct.cpp
new file mode 100644
index 0000000..6fee580
--- /dev/null
+++ b/doc/snippets/TopicAliasing_block_correct.cpp
@@ -0,0 +1,7 @@
+MatrixXi mat(3,3); 
+mat << 1, 2, 3,   4, 5, 6,   7, 8, 9;
+cout << "Here is the matrix mat:\n" << mat << endl;
+
+// The eval() solves the aliasing problem
+mat.bottomRightCorner(2,2) = mat.topLeftCorner(2,2).eval();
+cout << "After the assignment, mat = \n" << mat << endl;
diff --git a/doc/snippets/TopicAliasing_cwise.cpp b/doc/snippets/TopicAliasing_cwise.cpp
new file mode 100644
index 0000000..7049f6c
--- /dev/null
+++ b/doc/snippets/TopicAliasing_cwise.cpp
@@ -0,0 +1,20 @@
+MatrixXf mat(2,2); 
+mat << 1, 2,  4, 7;
+cout << "Here is the matrix mat:\n" << mat << endl << endl;
+
+mat = 2 * mat;
+cout << "After 'mat = 2 * mat', mat = \n" << mat << endl << endl;
+
+
+mat = mat - MatrixXf::Identity(2,2);
+cout << "After the subtraction, it becomes\n" << mat << endl << endl;
+
+
+ArrayXXf arr = mat;
+arr = arr.square();
+cout << "After squaring, it becomes\n" << arr << endl << endl;
+
+// Combining all operations in one statement:
+mat << 1, 2,  4, 7;
+mat = (2 * mat - MatrixXf::Identity(2,2)).array().square();
+cout << "Doing everything at once yields\n" << mat << endl << endl;
diff --git a/doc/snippets/TopicAliasing_mult1.cpp b/doc/snippets/TopicAliasing_mult1.cpp
new file mode 100644
index 0000000..cd7e900
--- /dev/null
+++ b/doc/snippets/TopicAliasing_mult1.cpp
@@ -0,0 +1,4 @@
+MatrixXf matA(2,2); 
+matA << 2, 0,  0, 2;
+matA = matA * matA;
+cout << matA;
diff --git a/doc/snippets/TopicAliasing_mult2.cpp b/doc/snippets/TopicAliasing_mult2.cpp
new file mode 100644
index 0000000..a3ff568
--- /dev/null
+++ b/doc/snippets/TopicAliasing_mult2.cpp
@@ -0,0 +1,10 @@
+MatrixXf matA(2,2), matB(2,2); 
+matA << 2, 0,  0, 2;
+
+// Simple but not quite as efficient
+matB = matA * matA;
+cout << matB << endl << endl;
+
+// More complicated but also more efficient
+matB.noalias() = matA * matA;
+cout << matB;
diff --git a/doc/snippets/TopicAliasing_mult3.cpp b/doc/snippets/TopicAliasing_mult3.cpp
new file mode 100644
index 0000000..1d12a6c
--- /dev/null
+++ b/doc/snippets/TopicAliasing_mult3.cpp
@@ -0,0 +1,4 @@
+MatrixXf matA(2,2); 
+matA << 2, 0,  0, 2;
+matA.noalias() = matA * matA;
+cout << matA;
diff --git a/doc/snippets/TopicStorageOrders_example.cpp b/doc/snippets/TopicStorageOrders_example.cpp
new file mode 100644
index 0000000..0623ef0
--- /dev/null
+++ b/doc/snippets/TopicStorageOrders_example.cpp
@@ -0,0 +1,18 @@
+Matrix<int, 3, 4, ColMajor> Acolmajor;
+Acolmajor << 8, 2, 2, 9,
+             9, 1, 4, 4,
+             3, 5, 4, 5;
+cout << "The matrix A:" << endl;
+cout << Acolmajor << endl << endl; 
+
+cout << "In memory (column-major):" << endl;
+for (int i = 0; i < Acolmajor.size(); i++)
+  cout << *(Acolmajor.data() + i) << "  ";
+cout << endl << endl;
+
+Matrix<int, 3, 4, RowMajor> Arowmajor = Acolmajor;
+cout << "In memory (row-major):" << endl;
+for (int i = 0; i < Arowmajor.size(); i++)
+  cout << *(Arowmajor.data() + i) << "  ";
+cout << endl;
+
diff --git a/doc/snippets/Tridiagonalization_Tridiagonalization_MatrixType.cpp b/doc/snippets/Tridiagonalization_Tridiagonalization_MatrixType.cpp
new file mode 100644
index 0000000..a260124
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_Tridiagonalization_MatrixType.cpp
@@ -0,0 +1,9 @@
+MatrixXd X = MatrixXd::Random(5,5);
+MatrixXd A = X + X.transpose();
+cout << "Here is a random symmetric 5x5 matrix:" << endl << A << endl << endl;
+Tridiagonalization<MatrixXd> triOfA(A);
+MatrixXd Q = triOfA.matrixQ();
+cout << "The orthogonal matrix Q is:" << endl << Q << endl;
+MatrixXd T = triOfA.matrixT();
+cout << "The tridiagonal matrix T is:" << endl << T << endl << endl;
+cout << "Q * T * Q^T = " << endl << Q * T * Q.transpose() << endl;
diff --git a/doc/snippets/Tridiagonalization_compute.cpp b/doc/snippets/Tridiagonalization_compute.cpp
new file mode 100644
index 0000000..0062a99
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_compute.cpp
@@ -0,0 +1,9 @@
+Tridiagonalization<MatrixXf> tri;
+MatrixXf X = MatrixXf::Random(4,4);
+MatrixXf A = X + X.transpose();
+tri.compute(A);
+cout << "The matrix T in the tridiagonal decomposition of A is: " << endl;
+cout << tri.matrixT() << endl;
+tri.compute(2*A); // re-use tri to compute the tridiagonalization of 2A
+cout << "The matrix T in the tridiagonal decomposition of 2A is: " << endl;
+cout << tri.matrixT() << endl;
diff --git a/doc/snippets/Tridiagonalization_decomposeInPlace.cpp b/doc/snippets/Tridiagonalization_decomposeInPlace.cpp
new file mode 100644
index 0000000..93dcfca
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_decomposeInPlace.cpp
@@ -0,0 +1,10 @@
+MatrixXd X = MatrixXd::Random(5,5);
+MatrixXd A = X + X.transpose();
+cout << "Here is a random symmetric 5x5 matrix:" << endl << A << endl << endl;
+
+VectorXd diag(5);
+VectorXd subdiag(4);
+internal::tridiagonalization_inplace(A, diag, subdiag, true);
+cout << "The orthogonal matrix Q is:" << endl << A << endl;
+cout << "The diagonal of the tridiagonal matrix T is:" << endl << diag << endl;
+cout << "The subdiagonal of the tridiagonal matrix T is:" << endl << subdiag << endl;
diff --git a/doc/snippets/Tridiagonalization_diagonal.cpp b/doc/snippets/Tridiagonalization_diagonal.cpp
new file mode 100644
index 0000000..6eec821
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_diagonal.cpp
@@ -0,0 +1,13 @@
+MatrixXcd X = MatrixXcd::Random(4,4);
+MatrixXcd A = X + X.adjoint();
+cout << "Here is a random self-adjoint 4x4 matrix:" << endl << A << endl << endl;
+
+Tridiagonalization<MatrixXcd> triOfA(A);
+MatrixXd T = triOfA.matrixT();
+cout << "The tridiagonal matrix T is:" << endl << T << endl << endl;
+
+cout << "We can also extract the diagonals of T directly ..." << endl;
+VectorXd diag = triOfA.diagonal();
+cout << "The diagonal is:" << endl << diag << endl; 
+VectorXd subdiag = triOfA.subDiagonal();
+cout << "The subdiagonal is:" << endl << subdiag << endl;
diff --git a/doc/snippets/Tridiagonalization_householderCoefficients.cpp b/doc/snippets/Tridiagonalization_householderCoefficients.cpp
new file mode 100644
index 0000000..e5d8728
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_householderCoefficients.cpp
@@ -0,0 +1,6 @@
+Matrix4d X = Matrix4d::Random(4,4);
+Matrix4d A = X + X.transpose();
+cout << "Here is a random symmetric 4x4 matrix:" << endl << A << endl;
+Tridiagonalization<Matrix4d> triOfA(A);
+Vector3d hc = triOfA.householderCoefficients();
+cout << "The vector of Householder coefficients is:" << endl << hc << endl;
diff --git a/doc/snippets/Tridiagonalization_packedMatrix.cpp b/doc/snippets/Tridiagonalization_packedMatrix.cpp
new file mode 100644
index 0000000..0f55d0c
--- /dev/null
+++ b/doc/snippets/Tridiagonalization_packedMatrix.cpp
@@ -0,0 +1,8 @@
+Matrix4d X = Matrix4d::Random(4,4);
+Matrix4d A = X + X.transpose();
+cout << "Here is a random symmetric 4x4 matrix:" << endl << A << endl;
+Tridiagonalization<Matrix4d> triOfA(A);
+Matrix4d pm = triOfA.packedMatrix();
+cout << "The packed matrix M is:" << endl << pm << endl;
+cout << "The diagonal and subdiagonal correspond to the matrix T, which is:"
+     << endl << triOfA.matrixT() << endl;
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_Block.cpp b/doc/snippets/Tutorial_AdvancedInitialization_Block.cpp
new file mode 100644
index 0000000..96e40ac
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_Block.cpp
@@ -0,0 +1,5 @@
+MatrixXf matA(2, 2);
+matA << 1, 2, 3, 4;
+MatrixXf matB(4, 4);
+matB << matA, matA/10, matA/10, matA;
+std::cout << matB << std::endl;
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_CommaTemporary.cpp b/doc/snippets/Tutorial_AdvancedInitialization_CommaTemporary.cpp
new file mode 100644
index 0000000..50cff4c
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_CommaTemporary.cpp
@@ -0,0 +1,4 @@
+MatrixXf mat = MatrixXf::Random(2, 3);
+std::cout << mat << std::endl << std::endl;
+mat = (MatrixXf(2,2) << 0, 1, 1, 0).finished() * mat;
+std::cout << mat << std::endl;
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_Join.cpp b/doc/snippets/Tutorial_AdvancedInitialization_Join.cpp
new file mode 100644
index 0000000..84e8715
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_Join.cpp
@@ -0,0 +1,11 @@
+RowVectorXd vec1(3);
+vec1 << 1, 2, 3;
+std::cout << "vec1 = " << vec1 << std::endl;
+
+RowVectorXd vec2(4);
+vec2 << 1, 4, 9, 16;
+std::cout << "vec2 = " << vec2 << std::endl;
+
+RowVectorXd joined(7);
+joined << vec1, vec2;
+std::cout << "joined = " << joined << std::endl;
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_LinSpaced.cpp b/doc/snippets/Tutorial_AdvancedInitialization_LinSpaced.cpp
new file mode 100644
index 0000000..c6a73ab
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_LinSpaced.cpp
@@ -0,0 +1,7 @@
+ArrayXXf table(10, 4);
+table.col(0) = ArrayXf::LinSpaced(10, 0, 90);
+table.col(1) = M_PI / 180 * table.col(0);
+table.col(2) = table.col(1).sin();
+table.col(3) = table.col(1).cos();
+std::cout << "  Degrees   Radians      Sine    Cosine\n";
+std::cout << table << std::endl;
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_ThreeWays.cpp b/doc/snippets/Tutorial_AdvancedInitialization_ThreeWays.cpp
new file mode 100644
index 0000000..cb74576
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_ThreeWays.cpp
@@ -0,0 +1,20 @@
+const int size = 6;
+MatrixXd mat1(size, size);
+mat1.topLeftCorner(size/2, size/2)     = MatrixXd::Zero(size/2, size/2);
+mat1.topRightCorner(size/2, size/2)    = MatrixXd::Identity(size/2, size/2);
+mat1.bottomLeftCorner(size/2, size/2)  = MatrixXd::Identity(size/2, size/2);
+mat1.bottomRightCorner(size/2, size/2) = MatrixXd::Zero(size/2, size/2);
+std::cout << mat1 << std::endl << std::endl;
+
+MatrixXd mat2(size, size);
+mat2.topLeftCorner(size/2, size/2).setZero();
+mat2.topRightCorner(size/2, size/2).setIdentity();
+mat2.bottomLeftCorner(size/2, size/2).setIdentity();
+mat2.bottomRightCorner(size/2, size/2).setZero();
+std::cout << mat2 << std::endl << std::endl;
+
+MatrixXd mat3(size, size);
+mat3 << MatrixXd::Zero(size/2, size/2), MatrixXd::Identity(size/2, size/2),
+        MatrixXd::Identity(size/2, size/2), MatrixXd::Zero(size/2, size/2);
+std::cout << mat3 << std::endl;
+
diff --git a/doc/snippets/Tutorial_AdvancedInitialization_Zero.cpp b/doc/snippets/Tutorial_AdvancedInitialization_Zero.cpp
new file mode 100644
index 0000000..76a36a3
--- /dev/null
+++ b/doc/snippets/Tutorial_AdvancedInitialization_Zero.cpp
@@ -0,0 +1,13 @@
+std::cout << "A fixed-size array:\n";
+Array33f a1 = Array33f::Zero();
+std::cout << a1 << "\n\n";
+
+
+std::cout << "A one-dimensional dynamic-size array:\n";
+ArrayXf a2 = ArrayXf::Zero(3);
+std::cout << a2 << "\n\n";
+
+
+std::cout << "A two-dimensional dynamic-size array:\n";
+ArrayXXf a3 = ArrayXXf::Zero(3, 4);
+std::cout << a3 << "\n";
diff --git a/doc/snippets/Tutorial_Map_rowmajor.cpp b/doc/snippets/Tutorial_Map_rowmajor.cpp
new file mode 100644
index 0000000..fd45ace
--- /dev/null
+++ b/doc/snippets/Tutorial_Map_rowmajor.cpp
@@ -0,0 +1,7 @@
+int array[8];
+for(int i = 0; i < 8; ++i) array[i] = i;
+cout << "Column-major:\n" << Map<Matrix<int,2,4> >(array) << endl;
+cout << "Row-major:\n" << Map<Matrix<int,2,4,RowMajor> >(array) << endl;
+cout << "Row-major using stride:\n" <<
+  Map<Matrix<int,2,4>, Unaligned, Stride<1,4> >(array) << endl;
+
diff --git a/doc/snippets/Tutorial_Map_using.cpp b/doc/snippets/Tutorial_Map_using.cpp
new file mode 100644
index 0000000..e5e499f
--- /dev/null
+++ b/doc/snippets/Tutorial_Map_using.cpp
@@ -0,0 +1,21 @@
+typedef Matrix<float,1,Dynamic> MatrixType;
+typedef Map<MatrixType> MapType;
+typedef Map<const MatrixType> MapTypeConst;   // a read-only map
+const int n_dims = 5;
+  
+MatrixType m1(n_dims), m2(n_dims);
+m1.setRandom();
+m2.setRandom();
+float *p = &m2(0);  // get the address storing the data for m2
+MapType m2map(p,m2.size());   // m2map shares data with m2
+MapTypeConst m2mapconst(p,m2.size());  // a read-only accessor for m2
+
+cout << "m1: " << m1 << endl;
+cout << "m2: " << m2 << endl;
+cout << "Squared euclidean distance: " << (m1-m2).squaredNorm() << endl;
+cout << "Squared euclidean distance, using map: " <<
+  (m1-m2map).squaredNorm() << endl;
+m2map(3) = 7;   // this will change m2, since they share the same array
+cout << "Updated m2: " << m2 << endl;
+cout << "m2 coefficient 2, constant accessor: " << m2mapconst(2) << endl;
+/* m2mapconst(2) = 5; */   // this yields a compile-time error
diff --git a/doc/snippets/Tutorial_commainit_01.cpp b/doc/snippets/Tutorial_commainit_01.cpp
new file mode 100644
index 0000000..47ba31d
--- /dev/null
+++ b/doc/snippets/Tutorial_commainit_01.cpp
@@ -0,0 +1,5 @@
+Matrix3f m;
+m << 1, 2, 3,
+     4, 5, 6,
+     7, 8, 9;
+std::cout << m;
diff --git a/doc/snippets/Tutorial_commainit_01b.cpp b/doc/snippets/Tutorial_commainit_01b.cpp
new file mode 100644
index 0000000..2adb2e2
--- /dev/null
+++ b/doc/snippets/Tutorial_commainit_01b.cpp
@@ -0,0 +1,5 @@
+Matrix3f m;
+m.row(0) << 1, 2, 3;
+m.block(1,0,2,2) << 4, 5, 7, 8;
+m.col(2).tail(2) << 6, 9;		    
+std::cout << m;
diff --git a/doc/snippets/Tutorial_commainit_02.cpp b/doc/snippets/Tutorial_commainit_02.cpp
new file mode 100644
index 0000000..c960d6a
--- /dev/null
+++ b/doc/snippets/Tutorial_commainit_02.cpp
@@ -0,0 +1,7 @@
+int rows=5, cols=5;
+MatrixXf m(rows,cols);
+m << (Matrix3f() << 1, 2, 3, 4, 5, 6, 7, 8, 9).finished(),
+     MatrixXf::Zero(3,cols-3),
+     MatrixXf::Zero(rows-3,3),
+     MatrixXf::Identity(rows-3,cols-3);
+cout << m;
diff --git a/doc/snippets/Tutorial_solve_matrix_inverse.cpp b/doc/snippets/Tutorial_solve_matrix_inverse.cpp
new file mode 100644
index 0000000..fff3244
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_matrix_inverse.cpp
@@ -0,0 +1,6 @@
+Matrix3f A;
+Vector3f b;
+A << 1,2,3,  4,5,6,  7,8,10;
+b << 3, 3, 4;
+Vector3f x = A.inverse() * b;
+cout << "The solution is:" << endl << x << endl;
diff --git a/doc/snippets/Tutorial_solve_multiple_rhs.cpp b/doc/snippets/Tutorial_solve_multiple_rhs.cpp
new file mode 100644
index 0000000..5411a44
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_multiple_rhs.cpp
@@ -0,0 +1,10 @@
+Matrix3f A(3,3);
+A << 1,2,3,  4,5,6,  7,8,10;
+Matrix<float,3,2> B;
+B << 3,1, 3,1, 4,1;
+Matrix<float,3,2> X;
+X = A.fullPivLu().solve(B);
+cout << "The solution with right-hand side (3,3,4) is:" << endl;
+cout << X.col(0) << endl;
+cout << "The solution with right-hand side (1,1,1) is:" << endl;
+cout << X.col(1) << endl;
diff --git a/doc/snippets/Tutorial_solve_reuse_decomposition.cpp b/doc/snippets/Tutorial_solve_reuse_decomposition.cpp
new file mode 100644
index 0000000..3ca0645
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_reuse_decomposition.cpp
@@ -0,0 +1,13 @@
+Matrix3f A(3,3);
+A << 1,2,3,  4,5,6,  7,8,10;
+PartialPivLU<Matrix3f> luOfA(A); // compute LU decomposition of A
+Vector3f b;
+b << 3,3,4;
+Vector3f x;
+x = luOfA.solve(b);
+cout << "The solution with right-hand side (3,3,4) is:" << endl;
+cout << x << endl;
+b << 1,1,1;
+x = luOfA.solve(b);
+cout << "The solution with right-hand side (1,1,1) is:" << endl;
+cout << x << endl;
diff --git a/doc/snippets/Tutorial_solve_singular.cpp b/doc/snippets/Tutorial_solve_singular.cpp
new file mode 100644
index 0000000..abff1ef
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_singular.cpp
@@ -0,0 +1,9 @@
+Matrix3f A;
+Vector3f b;
+A << 1,2,3,  4,5,6,  7,8,9;
+b << 3, 3, 4;
+cout << "Here is the matrix A:" << endl << A << endl;
+cout << "Here is the vector b:" << endl << b << endl;
+Vector3f x;
+x = A.lu().solve(b);
+cout << "The solution is:" << endl << x << endl;
diff --git a/doc/snippets/Tutorial_solve_triangular.cpp b/doc/snippets/Tutorial_solve_triangular.cpp
new file mode 100644
index 0000000..9d13f22
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_triangular.cpp
@@ -0,0 +1,8 @@
+Matrix3f A;
+Vector3f b;
+A << 1,2,3,  0,5,6,  0,0,10;
+b << 3, 3, 4;
+cout << "Here is the matrix A:" << endl << A << endl;
+cout << "Here is the vector b:" << endl << b << endl;
+Vector3f x = A.triangularView<Upper>().solve(b);
+cout << "The solution is:" << endl << x << endl;
diff --git a/doc/snippets/Tutorial_solve_triangular_inplace.cpp b/doc/snippets/Tutorial_solve_triangular_inplace.cpp
new file mode 100644
index 0000000..16ae633
--- /dev/null
+++ b/doc/snippets/Tutorial_solve_triangular_inplace.cpp
@@ -0,0 +1,6 @@
+Matrix3f A;
+Vector3f b;
+A << 1,2,3,  0,5,6,  0,0,10;
+b << 3, 3, 4;
+A.triangularView<Upper>().solveInPlace(b);
+cout << "The solution is:" << endl << b << endl;
diff --git a/doc/snippets/Vectorwise_reverse.cpp b/doc/snippets/Vectorwise_reverse.cpp
new file mode 100644
index 0000000..2f6a350
--- /dev/null
+++ b/doc/snippets/Vectorwise_reverse.cpp
@@ -0,0 +1,10 @@
+MatrixXi m = MatrixXi::Random(3,4);
+cout << "Here is the matrix m:" << endl << m << endl;
+cout << "Here is the rowwise reverse of m:" << endl << m.rowwise().reverse() << endl;
+cout << "Here is the colwise reverse of m:" << endl << m.colwise().reverse() << endl;
+
+cout << "Here is the coefficient (1,0) in the rowwise reverse of m:" << endl
+<< m.rowwise().reverse()(1,0) << endl;
+cout << "Let us overwrite this coefficient with the value 4." << endl;
+//m.rowwise().reverse()(1,0) = 4;
+cout << "Now the matrix m is:" << endl << m << endl;
diff --git a/doc/snippets/class_FullPivLU.cpp b/doc/snippets/class_FullPivLU.cpp
new file mode 100644
index 0000000..fce7fac
--- /dev/null
+++ b/doc/snippets/class_FullPivLU.cpp
@@ -0,0 +1,16 @@
+typedef Matrix<double, 5, 3> Matrix5x3;
+typedef Matrix<double, 5, 5> Matrix5x5;
+Matrix5x3 m = Matrix5x3::Random();
+cout << "Here is the matrix m:" << endl << m << endl;
+Eigen::FullPivLU<Matrix5x3> lu(m);
+cout << "Here is, up to permutations, its LU decomposition matrix:"
+     << endl << lu.matrixLU() << endl;
+cout << "Here is the L part:" << endl;
+Matrix5x5 l = Matrix5x5::Identity();
+l.block<5,3>(0,0).triangularView<StrictlyLower>() = lu.matrixLU();
+cout << l << endl;
+cout << "Here is the U part:" << endl;
+Matrix5x3 u = lu.matrixLU().triangularView<Upper>();
+cout << u << endl;
+cout << "Let us now reconstruct the original matrix m:" << endl;
+cout << lu.permutationP().inverse() * l * u * lu.permutationQ().inverse() << endl;
diff --git a/doc/snippets/compile_snippet.cpp.in b/doc/snippets/compile_snippet.cpp.in
new file mode 100644
index 0000000..894cd52
--- /dev/null
+++ b/doc/snippets/compile_snippet.cpp.in
@@ -0,0 +1,12 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+using namespace Eigen;
+using namespace std;
+
+int main(int, char**)
+{
+  cout.precision(3);
+  ${snippet_source_code}
+  return 0;
+}
diff --git a/doc/snippets/tut_arithmetic_redux_minmax.cpp b/doc/snippets/tut_arithmetic_redux_minmax.cpp
new file mode 100644
index 0000000..f4ae7f4
--- /dev/null
+++ b/doc/snippets/tut_arithmetic_redux_minmax.cpp
@@ -0,0 +1,12 @@
+  Matrix3f m = Matrix3f::Random();
+  std::ptrdiff_t i, j;
+  float minOfM = m.minCoeff(&i,&j);
+  cout << "Here is the matrix m:\n" << m << endl;
+  cout << "Its minimum coefficient (" << minOfM 
+       << ") is at position (" << i << "," << j << ")\n\n";
+
+  RowVector4i v = RowVector4i::Random();
+  int maxOfV = v.maxCoeff(&i);
+  cout << "Here is the vector v: " << v << endl;
+  cout << "Its maximum coefficient (" << maxOfV 
+       << ") is at position " << i << endl;
diff --git a/doc/snippets/tut_arithmetic_transpose_aliasing.cpp b/doc/snippets/tut_arithmetic_transpose_aliasing.cpp
new file mode 100644
index 0000000..c8e4746
--- /dev/null
+++ b/doc/snippets/tut_arithmetic_transpose_aliasing.cpp
@@ -0,0 +1,5 @@
+Matrix2i a; a << 1, 2, 3, 4;
+cout << "Here is the matrix a:\n" << a << endl;
+
+a = a.transpose(); // !!! do NOT do this !!!
+cout << "and the result of the aliasing effect:\n" << a << endl;
\ No newline at end of file
diff --git a/doc/snippets/tut_arithmetic_transpose_conjugate.cpp b/doc/snippets/tut_arithmetic_transpose_conjugate.cpp
new file mode 100644
index 0000000..88496b2
--- /dev/null
+++ b/doc/snippets/tut_arithmetic_transpose_conjugate.cpp
@@ -0,0 +1,12 @@
+MatrixXcf a = MatrixXcf::Random(2,2);
+cout << "Here is the matrix a\n" << a << endl;
+
+cout << "Here is the matrix a^T\n" << a.transpose() << endl;
+
+
+cout << "Here is the conjugate of a\n" << a.conjugate() << endl;
+
+
+cout << "Here is the matrix a^*\n" << a.adjoint() << endl;
+
+
diff --git a/doc/snippets/tut_arithmetic_transpose_inplace.cpp b/doc/snippets/tut_arithmetic_transpose_inplace.cpp
new file mode 100644
index 0000000..7a069ff
--- /dev/null
+++ b/doc/snippets/tut_arithmetic_transpose_inplace.cpp
@@ -0,0 +1,6 @@
+MatrixXf a(2,3); a << 1, 2, 3, 4, 5, 6;
+cout << "Here is the initial matrix a:\n" << a << endl;
+
+
+a.transposeInPlace();
+cout << "and after being transposed:\n" << a << endl;
\ No newline at end of file
diff --git a/doc/snippets/tut_matrix_assignment_resizing.cpp b/doc/snippets/tut_matrix_assignment_resizing.cpp
new file mode 100644
index 0000000..cf18998
--- /dev/null
+++ b/doc/snippets/tut_matrix_assignment_resizing.cpp
@@ -0,0 +1,5 @@
+MatrixXf a(2,2);
+std::cout << "a is of size " << a.rows() << "x" << a.cols() << std::endl;
+MatrixXf b(3,3);
+a = b;
+std::cout << "a is now of size " << a.rows() << "x" << a.cols() << std::endl;
diff --git a/doc/special_examples/CMakeLists.txt b/doc/special_examples/CMakeLists.txt
new file mode 100644
index 0000000..eeeae1d
--- /dev/null
+++ b/doc/special_examples/CMakeLists.txt
@@ -0,0 +1,20 @@
+
+if(NOT EIGEN_TEST_NOQT)
+  find_package(Qt4)
+  if(QT4_FOUND)
+    include(${QT_USE_FILE})
+  endif()
+endif(NOT EIGEN_TEST_NOQT)
+
+
+if(QT4_FOUND)
+  add_executable(Tutorial_sparse_example Tutorial_sparse_example.cpp Tutorial_sparse_example_details.cpp)
+  target_link_libraries(Tutorial_sparse_example ${EIGEN_STANDARD_LIBRARIES_TO_LINK_TO} ${QT_QTCORE_LIBRARY} ${QT_QTGUI_LIBRARY})
+
+  add_custom_command(
+      TARGET Tutorial_sparse_example
+      POST_BUILD
+      COMMAND Tutorial_sparse_example
+      ARGS ${CMAKE_CURRENT_BINARY_DIR}/../html/Tutorial_sparse_example.jpeg
+  )
+endif(QT4_FOUND)
diff --git a/doc/special_examples/Tutorial_sparse_example.cpp b/doc/special_examples/Tutorial_sparse_example.cpp
new file mode 100644
index 0000000..002f19f
--- /dev/null
+++ b/doc/special_examples/Tutorial_sparse_example.cpp
@@ -0,0 +1,38 @@
+#include <Eigen/Sparse>
+#include <iostream>
+#include <vector>
+
+typedef Eigen::SparseMatrix<double> SpMat; // declares a column-major sparse matrix type of double
+typedef Eigen::Triplet<double> T;
+
+void buildProblem(std::vector<T>& coefficients, Eigen::VectorXd& b, int n);
+void saveAsBitmap(const Eigen::VectorXd& x, int n, const char* filename);
+
+int main(int argc, char** argv)
+{
+  if(argc != 2) {
+    std::cerr << "Usage: " << argv[0] << " <output-image-file>" << std::endl;
+    return 1;
+  }
+
+  int n = 300;  // size of the image
+  int m = n*n;  // number of unknowns (= number of pixels)
+
+  // Assembly:
+  std::vector<T> coefficients;            // list of non-zeros coefficients
+  Eigen::VectorXd b(m);                   // the right hand side-vector resulting from the constraints
+  buildProblem(coefficients, b, n);
+
+  SpMat A(m,m);
+  A.setFromTriplets(coefficients.begin(), coefficients.end());
+
+  // Solving:
+  Eigen::SimplicialCholesky<SpMat> chol(A);  // performs a Cholesky factorization of A
+  Eigen::VectorXd x = chol.solve(b);         // use the factorization to solve for the given right hand side
+
+  // Export the result to a file:
+  saveAsBitmap(x, n, argv[1]);
+
+  return 0;
+}
+
diff --git a/doc/special_examples/Tutorial_sparse_example_details.cpp b/doc/special_examples/Tutorial_sparse_example_details.cpp
new file mode 100644
index 0000000..8c3020b
--- /dev/null
+++ b/doc/special_examples/Tutorial_sparse_example_details.cpp
@@ -0,0 +1,45 @@
+#include <Eigen/Sparse>
+#include <vector>
+#include <cmath>   // for M_PI (non-standard but widely available)
+#include <QImage>
+
+typedef Eigen::SparseMatrix<double> SpMat; // declares a column-major sparse matrix type of double
+typedef Eigen::Triplet<double> T;
+
+void insertCoefficient(int id, int i, int j, double w, std::vector<T>& coeffs,
+                       Eigen::VectorXd& b, const Eigen::VectorXd& boundary)
+{
+  int n = boundary.size();
+  int id1 = i+j*n;
+
+        if(i==-1 || i==n) b(id) -= w * boundary(j); // constrained coefficient
+  else  if(j==-1 || j==n) b(id) -= w * boundary(i); // constrained coefficient
+  else  coeffs.push_back(T(id,id1,w));              // unknown coefficient
+}
+
+void buildProblem(std::vector<T>& coefficients, Eigen::VectorXd& b, int n)
+{
+  b.setZero();
+  Eigen::ArrayXd boundary = Eigen::ArrayXd::LinSpaced(n, 0,M_PI).sin().pow(2);
+  for(int j=0; j<n; ++j)
+  {
+    for(int i=0; i<n; ++i)
+    {
+      int id = i+j*n;
+      insertCoefficient(id, i-1,j, -1, coefficients, b, boundary);
+      insertCoefficient(id, i+1,j, -1, coefficients, b, boundary);
+      insertCoefficient(id, i,j-1, -1, coefficients, b, boundary);
+      insertCoefficient(id, i,j+1, -1, coefficients, b, boundary);
+      insertCoefficient(id, i,j,    4, coefficients, b, boundary);
+    }
+  }
+}
+
+void saveAsBitmap(const Eigen::VectorXd& x, int n, const char* filename)
+{
+  Eigen::Array<unsigned char,Eigen::Dynamic,Eigen::Dynamic> bits = (x*255).cast<unsigned char>();
+  QImage img(bits.data(), n,n,QImage::Format_Indexed8);
+  img.setColorCount(256);
+  for(int i=0;i<256;i++) img.setColor(i,qRgb(i,i,i));
+  img.save(filename);
+}
diff --git a/doc/tutorial.cpp b/doc/tutorial.cpp
new file mode 100644
index 0000000..62be7c2
--- /dev/null
+++ b/doc/tutorial.cpp
@@ -0,0 +1,63 @@
+#include <Eigen/Dense>
+#include <iostream>
+
+int main(int argc, char *argv[])
+{
+  std::cout.precision(2);
+
+  // demo static functions
+  Eigen::Matrix3f m3 = Eigen::Matrix3f::Random();
+  Eigen::Matrix4f m4 = Eigen::Matrix4f::Identity();
+
+  std::cout << "*** Step 1 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl;
+
+  // demo non-static set... functions
+  m4.setZero();
+  m3.diagonal().setOnes();
+  
+  std::cout << "*** Step 2 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl;
+
+  // demo fixed-size block() expression as lvalue and as rvalue
+  m4.block<3,3>(0,1) = m3;
+  m3.row(2) = m4.block<1,3>(2,0);
+
+  std::cout << "*** Step 3 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl;
+
+  // demo dynamic-size block()
+  {
+    int rows = 3, cols = 3;
+    m4.block(0,1,3,3).setIdentity();
+    std::cout << "*** Step 4 ***\nm4:\n" << m4 << std::endl;
+  }
+
+  // demo vector blocks
+  m4.diagonal().segment(1,2).setOnes();
+  std::cout << "*** Step 5 ***\nm4.diagonal():\n" << m4.diagonal() << std::endl;
+  std::cout << "m4.diagonal().head(3)\n" << m4.diagonal().head(3) << std::endl;
+
+  // demo coeff-wise operations
+  m4 = m4.cwiseProduct(m4);
+  m3 = m3.array().cos().matrix();
+  std::cout << "*** Step 6 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl;
+
+  // sums of coefficients
+  std::cout << "*** Step 7 ***\n m4.sum(): " << m4.sum() << std::endl;
+  std::cout << "m4.col(2).sum(): " << m4.col(2).sum() << std::endl;
+  std::cout << "m4.colwise().sum():\n" << m4.colwise().sum() << std::endl;
+  std::cout << "m4.rowwise().sum():\n" << m4.rowwise().sum() << std::endl;
+
+  // demo intelligent auto-evaluation
+  m4 = m4 * m4; // auto-evaluates so no aliasing problem (performance penalty is low)
+  Eigen::Matrix4f other; other.noalias() = m4 * m4; // noalias() assigns the product directly, skipping the temporary
+  m4 = m4 + m4; // here Eigen goes for lazy evaluation, as with most expressions
+  m4 = -m4 + m4 + 5 * m4; // same here, Eigen chooses lazy evaluation for all that.
+  m4 = m4 * (m4 + m4); // here Eigen chooses to first evaluate m4 + m4 into a temporary.
+                       // indeed, here it is an optimization to cache this intermediate result.
+  m3 = m3 * m4.block<3,3>(1,1); // here Eigen chooses NOT to evaluate block() into a temporary
+    // because accessing coefficients of that block expression is not more costly than accessing
+    // coefficients of a plain matrix.
+  m4 = m4 * m4.transpose(); // same here, lazy evaluation of the transpose.
+  m4 = m4 * m4.transpose().eval(); // forces immediate evaluation of the transpose
+
+  std::cout << "*** Step 8 ***\nm3:\n" << m3 << "\nm4:\n" << m4 << std::endl;
+}