Matrices and Vectors
Math.NET Numerics includes rich types for matrices and vectors. They support both single and double precision, real and complex floating point numbers.
\[\mathbf{A}= \begin{bmatrix} a_{0,0} & a_{0,1} & \cdots & a_{0,(n-1)} \\ a_{1,0} & a_{1,1} & \cdots & a_{1,(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(m-1),0} & a_{(m-1),1} & \cdots & a_{(m-1),(n-1)} \end{bmatrix},\quad \mathbf{v}=\begin{bmatrix}v_0\\v_1\\ \vdots \\v_{n-1}\end{bmatrix}\]
Like all data structures in .NET they are 0-indexed, i.e. the top left cell has index (0,0). In matrices, the first index always refers to the row and the second to the column. Empty matrices and vectors are not supported: each dimension must have a length of at least 1.
Context: Linear Algebra
The context and primary scenario for these types is linear algebra. Their API is broad enough to use them in other contexts as well, but they are not optimized for geometry or for use as a general-purpose storage structure, as is common in MATLAB. This is intentional: spatial problems, geography and geometry have quite different usage patterns and requirements from linear algebra. Math.NET Numerics is always used from within a full programming language with its own rich data structures. For example, if you have a collection of vectors, consider storing them in a list or array of vectors, not in a matrix (unless you need matrix operations, of course).
Storage Layout
Both dense and sparse vectors are supported:
 Dense Vector uses a single array of the same length as the vector.
 Sparse Vector uses two arrays which are usually much shorter than the vector. One array stores all values that are not zero, the other stores their indices, sorted in ascending order by index.
Matrices can be either dense, diagonal or sparse:
 Dense Matrix uses a single array in column-major order.
 Diagonal Matrix stores only the diagonal values, in a single array.
 Sparse Matrix stores non-zero values in 3 arrays in the standard compressed sparse row (CSR) format. One array stores all values that are not zero, another array of the same length stores their corresponding column indices. The third array, whose length is the number of rows plus one, stores the offsets where each row starts, with the total number of non-zero values in the last field.
If your data contains only very few zeros, the sparse variants are orders of magnitude slower than their dense counterparts, so consider using dense types unless the data is very sparse (i.e. almost all zeros).
Creating Matrices and Vectors
The Matrix<T> and Vector<T> types are defined in the MathNet.Numerics.LinearAlgebra namespace. For technical and performance reasons there are distinct implementations for each data type. For example, for double-precision numbers there is a DenseMatrix class in the MathNet.Numerics.LinearAlgebra.Double namespace. You do not normally need to be aware of that, but as a consequence the generic Matrix<T> type is abstract and we need other ways to create a matrix or vector instance.
The matrix and vector builders provide functions to create instances from a variety of formats or approaches.
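For example (a minimal sketch using the double-precision builders):

```csharp
using MathNet.Numerics.LinearAlgebra;

// The generic types are abstract, so instances are created through builders:
Matrix<double> m = Matrix<double>.Build.Dense(3, 4);  // 3x4 dense matrix, all zeros
Vector<double> v = Vector<double>.Build.Dense(10);    // dense vector of length 10, all zeros

// Equivalent builders exist for the other data types,
// e.g. Matrix<float>.Build or Matrix<System.Numerics.Complex>.Build.
```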
Since within an application you often only work with one specific data type, a common trick to keep this a bit shorter is to define shortcuts to the builders:
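A sketch of the shortcut pattern:

```csharp
using MathNet.Numerics.LinearAlgebra;

// Define the shortcuts once:
var M = Matrix<double>.Build;
var V = Vector<double>.Build;

// Creating instances then becomes much shorter:
var m = M.Dense(3, 4);
var v = V.Dense(10);
```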
The builder functions usually start with the layout (Dense, Sparse, Diagonal), so if we'd like to build a sparse matrix, IntelliSense will list all available options once you type M.Sparse.
There are variants to generate synthetic matrices, for example:
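A few such generators, sketched with the M builder shortcut (variable names are illustrative):

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

var zeros = M.Dense(3, 4);                            // all cells 0.0
var filled = M.Dense(3, 4, 20.5);                     // all cells 20.5
var computed = M.Dense(3, 4, (i, j) => 100.0*i + j);  // cell value computed from its indices
var identity = M.DenseIdentity(5);                    // 5x5 identity matrix
var random = M.Random(3, 4);                          // random values, standard normal by default
```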
But often we already have data available in some format and need a matrix representing the same data. Whenever a function contains "Of" in its name, it creates a copy of the original data.
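A few of the dense "Of" variants, sketched:

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

// From a 2D array (the data is copied):
var a = M.DenseOfArray(new[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

// From row or column arrays:
var r = M.DenseOfRowArrays(new[] { 1.0, 2.0, 3.0 }, new[] { 4.0, 5.0, 6.0 });
var c = M.DenseOfColumnArrays(new[] { 1.0, 2.0 }, new[] { 3.0, 4.0 });
```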
Very similar variants also exist for sparse and diagonal matrices, prefixed with Sparse and Diagonal respectively.
The approach for vectors is exactly the same:
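For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var V = Vector<double>.Build;

var zeros = V.Dense(5);                                // all 0.0
var filled = V.Dense(5, 20.5);                         // all 20.5
var computed = V.Dense(5, i => (double)(i * i));       // value computed from the index
var copied = V.DenseOfArray(new[] { 1.0, 2.0, 3.0 });  // copies the array
```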
Creating matrices and vectors in F#
In F# we can use the builders just like in C#, but we can also use the F# modules:
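A sketch using both styles (the matrix, vector and DenseMatrix.init functions are from the MathNet.Numerics.FSharp package):

```fsharp
open MathNet.Numerics.LinearAlgebra

// Using the builders, just like in C#:
let m1 = Matrix<float>.Build.Dense(3, 4)

// Or using the F# modules and functions:
let m2 = matrix [ [ 1.0; 2.0 ]
                  [ 3.0; 4.0 ] ]
let v2 = vector [ 1.0; 2.0; 3.0 ]
let m3 = DenseMatrix.init 3 4 (fun i j -> 100.0 * float i + float j)
```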
Or use any of the other available functions.
Arithmetics
All the common arithmetic operators like +, -, *, / and % are provided between matrices, vectors and scalars. In F# there are additional pointwise operators .*, ./ and .% available for convenience.
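For example (a sketch; variable names are illustrative):

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

var m = M.Random(3, 4);
var n = M.Random(3, 4);
var v = Vector<double>.Build.Random(4);

var sum = m + n;       // element-wise sum
var scaled = 2.0 * m;  // scalar multiplication
var product = m * v;   // matrix-vector product, a vector of length 3
```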
Arithmetic Instance Methods
All other operations are covered by methods, like Transpose
and Conjugate
,
or in F# as functions in the Matrix module, e.g. Matrix.transpose
.
But even the operators have equivalent methods. The equivalent code from
above when using instance methods:
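A sketch of the method equivalents of the operator expressions:

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

var m = M.Random(3, 3);
var n = M.Random(3, 3);

var sum = m.Add(n);          // equivalent to m + n
var product = m.Multiply(n); // equivalent to m * n
```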
These methods also have an overload that accepts the result data structure as the last argument, which avoids allocating new structures for every single operation. Provided the dimensions match, most also allow one of the arguments to be passed as the result, yielding an in-place application. For example, an in-place version of the code above:
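A sketch of both the explicit-result and the in-place variants:

```csharp
using MathNet.Numerics.LinearAlgebra;

var M = Matrix<double>.Build;

var m = M.Random(3, 3);
var n = M.Random(3, 3);

// Write the result into an existing structure:
var result = M.Dense(3, 3);
m.Add(n, result);

// Pass one of the arguments as the result for an in-place application:
m.Add(n, m);  // m now holds the sum
```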
Shortcut Methods
A typical linear algebra problem is the regression normal equation \(\mathbf{X}^T\mathbf y = \mathbf{X}^T\mathbf X \mathbf p\), which we would like to solve for \(\mathbf p\). By matrix inversion we get \(\mathbf p = (\mathbf{X}^T\mathbf X)^{-1}(\mathbf{X}^T\mathbf y)\). This can be translated directly to the following code:
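A sketch of that translation (X and y are illustrative):

```csharp
using MathNet.Numerics.LinearAlgebra;

var X = Matrix<double>.Build.Random(10, 3);
var y = Vector<double>.Build.Random(10);

// p = (X^T X)^-1 (X^T y)
var p = (X.Transpose() * X).Inverse() * (X.Transpose() * y);
```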
Since products where one of the arguments is transposed are common, there are a few shortcut routines that are more efficient:
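For example, TransposeThisAndMultiply computes \(A^T B\) without explicitly forming the transpose:

```csharp
using MathNet.Numerics.LinearAlgebra;

var X = Matrix<double>.Build.Random(10, 3);
var y = Vector<double>.Build.Random(10);

var p = X.TransposeThisAndMultiply(X).Inverse() * X.TransposeThisAndMultiply(y);
```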
Of course in practice you would not use the matrix inverse but a decomposition:
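A sketch of the decomposition-based alternatives:

```csharp
using MathNet.Numerics.LinearAlgebra;

var X = Matrix<double>.Build.Random(10, 3);
var y = Vector<double>.Build.Random(10);

// Solve the normal equation via Cholesky of X^T X:
var p = X.TransposeThisAndMultiply(X).Cholesky().Solve(X.TransposeThisAndMultiply(y));

// Or, often numerically preferable, apply QR directly to X:
var p2 = X.QR().Solve(y);
```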
Norms
With norms we assign a "size" to vectors and matrices, satisfying certain properties pertaining to scalability and additivity. Except for the zero element, the norm is strictly positive.
Vectors support the following norms:
 L1Norm or Manhattan norm (p=1): the sum of the absolute values.
 L2Norm or Euclidean norm (p=2): the square root of the sum of the squared values. This is the most common norm and assumed if nothing else is stated.
 InfinityNorm (p=infinity): the maximum absolute value.
 Norm(p): generalized norm, essentially the p-th root of the sum of the absolute values raised to the p-th power.
Similarly, matrices support the following norms:
 L1Norm (induced): the maximum absolute column sum.
 L2Norm (induced): the largest singular value of the matrix (expensive).
 InfinityNorm (induced): the maximum absolute row sum.
 FrobeniusNorm (entrywise): the square root of the sum of the squared values.
 RowNorms(p): the generalized p-norm for each row vector.
 ColumnNorms(p): the generalized p-norm for each column vector.
Vectors can be normalized to unit p-norm with the Normalize method; matrices can normalize all rows or all columns to unit p-norm with NormalizeRows and NormalizeColumns.
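A few of these norms in use (the commented values follow from the definitions above):

```csharp
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.DenseOfArray(new[] { 1.0, -2.0, 2.0 });

var l1 = v.L1Norm();          // 1 + 2 + 2 = 5
var l2 = v.L2Norm();          // sqrt(1 + 4 + 4) = 3
var inf = v.InfinityNorm();   // 2
var unit = v.Normalize(2.0);  // rescaled to unit Euclidean norm

var m = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, -2.0 }, { 3.0, 4.0 } });
var fro = m.FrobeniusNorm();  // sqrt(1 + 4 + 9 + 16) = sqrt(30)
```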
Sums
Closely related to the norms are the sum functions. Vectors have a Sum function that returns the sum of all vector elements, and SumMagnitudes that returns the sum of the absolute vector elements (identical to the L1-norm). Matrices provide RowSums and ColumnSums functions that return the sum of each row or column vector, and RowAbsoluteSums and ColumnAbsoluteSums for the sums of the absolute elements.
Condition Number
The condition number of a function measures how much the output value can change for a small change in the input arguments. A problem with a low condition number is said to be wellconditioned, with a high condition number illconditioned. For a linear equation \(Ax=b\) the condition number is the maximum ratio of the relative error in \(x\) divided by the relative error in \(b\). It therefore gives a bound on how inaccurate the solution \(x\) will be after approximation.
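For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

// Ratio of the largest to the smallest singular value:
var cond = A.ConditionNumber();
```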
Trace and Determinant
For a square matrix, the trace of a matrix is the sum of the elements on the main diagonal, which is equal to the sum of all its eigenvalues with multiplicities. Similarly, the determinant of a square matrix is the product of all its eigenvalues with multiplicities. A matrix is said to be singular if its determinant is zero and nonsingular otherwise. In the latter case the matrix is invertible and the linear equation system it represents has a single unique solution.
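For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

var trace = A.Trace();      // 1 + 4 = 5
var det = A.Determinant();  // 1*4 - 2*3 = -2, non-zero, so A is invertible
```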
Column Space, Rank and Range
The rank of a matrix is the dimension of its column and row space, i.e. the maximum number of linearly independent column and row vectors of the matrix. It is a measure of the non-degeneracy of the linear equation system the matrix represents.
An orthonormal basis of the column space can be computed with the Range method.
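For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

// The second row is a multiple of the first, so the rank is 1:
var A = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 2.0 }, { 2.0, 4.0 } });

var rank = A.Rank();    // 1
var basis = A.Range();  // orthonormal basis of the column space
```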
Null Space, Nullity and Kernel
The null space or kernel of a matrix \(A\) is the set of solutions to the equation \(Ax=0\). It is the orthogonal complement to the row space of the matrix.
The nullity of a matrix is the dimension of its null space. An orthonormal basis of the null space can be computed with the Kernel method.
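For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var A = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 2.0 }, { 2.0, 4.0 } });

var nullity = A.Nullity();  // 2 columns minus rank 1 = 1
var kernel = A.Kernel();    // orthonormal basis of the null space
```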
Matrix Decompositions
Most common matrix decompositions are directly available as instance methods. Computing a decomposition can be expensive for large matrices, so if you need to access multiple properties of a decomposition, consider reusing the returned instance.
All decompositions provide Solve methods that can be used to solve linear equations of the form \(Ax=b\) or \(AX=B\). For simplicity the Matrix class also provides direct Solve methods that automatically choose a decomposition. See Linear Equation Systems for details.
Currently these decompositions are optimized for dense matrices only, and can leverage native providers like Intel MKL if available. For sparse data, consider using the iterative solvers instead if appropriate, or convert to dense if small enough.
 Cholesky: Cholesky decomposition of symmetric positive definite matrices
 LU: LU decomposition of square matrices
 QR(method): QR by Householder transformation. Thin by default (Q: m×n, R: n×n) but can optionally be computed fully (Q: m×m, R: m×n).
 GramSchmidt: QR by Modified GramSchmidt Orthogonalization
 Svd(computeVectors): Singular Value Decomposition. Computation of the singular U and VT vectors can optionally be disabled.
 Evd(symmetricity): Eigenvalue Decomposition. If the symmetricity of the matrix is known, the algorithm can optionally skip its own check.
Manipulating Matrices and Vectors
Individual values can be read and written in matrices and vectors using the indexers or the At methods. Using At instead of the indexers is slightly faster but skips some range checks, so use it only after checking the range yourself.
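For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Dense(3, 3);

m[0, 0] = 1.0;          // indexer, with range checks
double a = m[0, 0];

m.At(1, 2, 4.0);        // At, slightly faster but skips some range checks
double b = m.At(1, 2);
```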
In F#:
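For example:

```fsharp
open MathNet.Numerics.LinearAlgebra

let m = Matrix<float>.Build.Dense(3, 3)
m.[0, 0] <- 1.0
let a = m.[0, 0]
```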
We can also get entire column or row vectors, or a new matrix from parts of an existing one.
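For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(4, 5);

var row = m.Row(1);                 // second row, as a new vector
var col = m.Column(2);              // third column, as a new vector
var sub = m.SubMatrix(1, 2, 0, 3);  // 2x3 block starting at row 1, column 0
```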
For each of these methods there is also a variant prefixed with Set that can be used to overwrite those elements with the provided data.
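For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Dense(4, 5);

m.SetRow(1, Vector<double>.Build.Dense(5, 1.0));              // overwrite the second row
m.SetColumn(2, new[] { 1.0, 2.0, 3.0, 4.0 });                 // overwrite the third column
m.SetSubMatrix(0, 0, Matrix<double>.Build.DenseIdentity(2));  // overwrite the top-left block
```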
In F# we can also use its slicing syntax:
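For example (the slicing operators are provided by the MathNet.Numerics.FSharp package):

```fsharp
open MathNet.Numerics.LinearAlgebra

let m = Matrix<float>.Build.Random(4, 5)

let sub = m.[1..2, 0..2]    // rows 1 to 2, columns 0 to 2
m.[0..1, 0..1] <- Matrix<float>.Build.DenseIdentity(2)
```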
To set the whole matrix or some of its columns or rows to zero, use one of the clear methods:
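For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(4, 5);

m.ClearRows(1, 3);             // set rows 1 and 3 to zero
m.ClearColumns(0);             // set column 0 to zero
m.ClearSubMatrix(0, 2, 0, 2);  // zero the top-left 2x2 block
m.Clear();                     // set the whole matrix to zero
```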
Because of the limitations of floating point numbers, we may want to set very small numbers to zero:
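For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 1e-15 }, { -1e-18, 2.0 } });

m.CoerceZero(1e-14);  // values with magnitude below 1e-14 become exactly 0.0
```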
Even though matrices and vectors are mutable, their dimension is fixed and cannot be changed after creation. However, we can still insert or remove rows or columns, or concatenate matrices together. But all these operations will create and return a new instance.
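For example:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(3, 3);
var v = Vector<double>.Build.Dense(3, 1.0);

var taller = m.InsertRow(1, v);   // new 4x3 matrix
var smaller = m.RemoveColumn(0);  // new 3x2 matrix
var wide = m.Append(m);           // side by side: new 3x6 matrix
var tall = m.Stack(m);            // on top of each other: new 6x3 matrix
```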
Enumerators and Higher Order Functions
Looping over all entries of a matrix or vector with direct access is inefficient, especially with a sparse storage layout, and working with the raw structures is non-trivial. Both vectors and matrices therefore provide specialized enumerators and higher-order functions that understand the actual layout and can exploit it more efficiently.
Most of these functions can optionally skip zero-value entries. If you do not need to handle zero-value elements, skipping them can massively speed up execution on sparse layouts.
Iterate
Both vectors and matrices have Enumerate methods that return an IEnumerable<T> that can be used to iterate through all elements. All these methods optionally accept a Zeros enumeration to control whether zero values may be skipped or not.
 Enumerate: returns a straightforward enumerator over all values.
 EnumerateIndexed: returns an enumerable of index-value tuples.
Matrices can also enumerate over all column or row vectors, or all of them within a range:
 EnumerateColumns: returns an enumerable with all or a range of the column vectors.
 EnumerateColumnsIndexed: like EnumerateColumns but returns index-column tuples.
 EnumerateRows: returns an enumerable with all or a range of the row vectors.
 EnumerateRowsIndexed: like EnumerateRows but returns index-row tuples.
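A sketch of the enumerators in use:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(3, 3);

// Zero values may be skipped here, which is fast on sparse layouts:
foreach (var value in m.Enumerate(Zeros.AllowSkip))
{
    Console.WriteLine(value);
}

foreach (var (i, j, value) in m.EnumerateIndexed(Zeros.AllowSkip))
{
    Console.WriteLine($"[{i},{j}] = {value}");
}
```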
Map
Similarly, there are Map methods that replace each element with the result of applying a function to its value or, in the indexed variants, to its index and value.
 MapInplace(f,zeros): map in place with a function on the element's value.
 MapIndexedInplace(f,zeros): map in place with a function on the element's index and value.
 Map(f,result,zeros): map into a result structure provided as argument.
 MapIndexed(f,result,zeros): indexed variant of Map.
 MapConvert(f,result,zeros): variant where the function can return a different type.
 MapIndexedConvert(f,result,zeros): indexed variant of MapConvert.
 Map(f,zeros): like MapConvert but returns a new structure instead of the result argument.
 MapIndexed(f,zeros): indexed variant of Map.
Example: Convert a complex vector to a real vector containing only the real parts in C#:
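A sketch using the converting Map overload:

```csharp
using System.Numerics;
using MathNet.Numerics.LinearAlgebra;

var cv = Vector<Complex>.Build.Dense(3, i => new Complex(i, -i));

// Map with a Complex -> double function returns a real vector:
Vector<double> rv = cv.Map(c => c.Real, Zeros.AllowSkip);
```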
Or in F#:
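A sketch of the same conversion in F# (using the converting Map overload; the type annotation guides overload resolution):

```fsharp
open System.Numerics
open MathNet.Numerics.LinearAlgebra

let cv = Vector<Complex>.Build.Dense(3, fun i -> Complex(float i, -float i))
let rv : Vector<float> = cv.Map(fun (c: Complex) -> c.Real)
```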
Fold and Reduce
Matrices also provide column/row fold and reduce routines:
 FoldByRow(f,state,zeros): fold through the values of each row; returns an array with one result per row.
 FoldRows(f,state): fold over all row vectors, returns a row vector.
 ReduceRows(f): reduce all row vectors, returns a row vector.
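For example, summing up each row with FoldByRow:

```csharp
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.DenseOfArray(new[,] { { 1.0, 2.0 }, { 3.0, 4.0 } });

// One result per row: { 3.0, 7.0 }
double[] rowTotals = m.FoldByRow((acc, x) => acc + x, 0.0);
```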
Printing and Strings
Matrices and vectors try to print themselves to a string with the ToString method in a reasonable way, without overflowing the output device on a large matrix.
Note that this function is not intended to export a data structure to a string or file, but to give an informative summary about it. For data import/export, use one of the MathNet.Numerics.Data packages instead.
Some matrix examples:
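For example:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(100, 100);

// Prints a short type header plus an excerpt of the values;
// large matrices are truncated instead of flooding the output:
Console.WriteLine(m.ToString());
```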
Vectors are printed as a column that can wrap over to multiple columns if needed:
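For example:

```csharp
using System;
using MathNet.Numerics.LinearAlgebra;

var v = Vector<double>.Build.Random(50);
Console.WriteLine(v.ToString());
```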
The format is customizable to some degree, for example we can choose the floating point format and culture, or how many rows or columns should be shown:
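A sketch, assuming the standard format-and-culture ToString overload:

```csharp
using System;
using System.Globalization;
using MathNet.Numerics.LinearAlgebra;

var m = Matrix<double>.Build.Random(10, 10);

// Choose the floating point format and culture:
Console.WriteLine(m.ToString("0.00", CultureInfo.InvariantCulture));
```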
If you are using Math.NET Numerics from within F# interactive, you may want to load the MathNet.Numerics.fsx script of the F# package. Besides loading the assemblies it also adds proper FSI printers for both matrices and vectors.