# Linear algebra

## Adjoint

Another term for the conjugate transpose. Identical to the transpose if the matrix is real.

## Affine combination

A linear combination of vectors where the weights sum to 1. Unlike a convex combination, the weights can be negative.

## Condition number

The condition number of a matrix $A$ is defined as:

$$\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$$

where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the largest and smallest singular values of $A$ respectively.

If $\kappa(A)$ is high, the matrix is said to be **ill-conditioned**. Conversely, if the condition number is low (i.e. close to 1) we say $A$ is **well-conditioned**.

Since $\sigma_{\max}(A) \geq \sigma_{\min}(A) \geq 0$, the condition number is always at least 1.
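A quick NumPy check of this definition (a sketch; the matrix here is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Condition number as the ratio of largest to smallest singular value.
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma.max() / sigma.min()

# numpy computes the same quantity directly (2-norm condition number).
assert np.isclose(kappa, np.linalg.cond(A, 2))
```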

## Conjugate transpose

The matrix obtained by taking the transpose followed by the complex conjugate of each entry.

## Eigenvalues and eigenvectors

Let $A$ be a square matrix. Then the eigenvectors and eigenvalues of the matrix are the vectors $v$ and scalars $\lambda$ respectively that satisfy the equation:

$$Av = \lambda v$$

### Properties

The trace of $A$ is the sum of its eigenvalues:

$$\operatorname{tr}(A) = \sum_i \lambda_i$$

The determinant of $A$ is the product of its eigenvalues:

$$\det(A) = \prod_i \lambda_i$$
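Both properties can be verified numerically; a small NumPy sketch on an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eig(A)

# Each eigenpair satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)

# Trace is the sum of eigenvalues; determinant is their product.
assert np.isclose(np.trace(A), eigvals.sum())
assert np.isclose(np.linalg.det(A), eigvals.prod())
```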

## Gaussian elimination

An algorithm for solving SLEs that iteratively transforms the augmented matrix into an upper triangular one in row echelon form, after which the solution can be read off by back substitution.
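A minimal sketch of the algorithm (with partial pivoting for numerical stability; the function name and example system are illustrative, not from the original text):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by reducing the system to row echelon form,
    then back-substituting. Uses partial pivoting for stability."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: zero out entries below the diagonal.
    for k in range(n):
        pivot = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        A[[k, pivot]] = A[[pivot, k]]
        b[[k, pivot]] = b[[pivot, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution on the upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = gaussian_elimination(A, b)
assert np.allclose(A @ x, b)
```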

## Hadamard product

Synonymous with element-wise multiplication: $(A \circ B)_{ij} = A_{ij} B_{ij}$.

## Inverse

The inverse of a matrix $A$ is written as $A^{-1}$.

A matrix $A$ is invertible if and only if there exists a matrix $B$ such that $AB = BA = I$.

The inverse can be found using:

- Gaussian elimination
- LU decomposition
- Gauss-Jordan elimination
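A NumPy sketch of the defining property (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)

# A A^{-1} = A^{-1} A = I, the defining property of the inverse.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```

In practice, solving $Ax = b$ with `np.linalg.solve(A, b)` is usually preferable to forming the inverse explicitly.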

## Matrix decomposition

Also known as matrix factorization.

### Cholesky decomposition

$$A = LL^*$$

where $A$ is Hermitian and positive-definite, $L$ is lower-triangular and $L^*$ is its conjugate transpose. Can be used for solving SLEs.
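A NumPy sketch for the real case (so $L^*$ is just $L^\top$); the matrix and right-hand side are arbitrary:

```python
import numpy as np

# A symmetric positive-definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(A)          # lower-triangular factor
assert np.allclose(L @ L.T, A)

# Solving Ax = b via the factor: solve the two triangular systems.
b = np.array([1.0, 2.0])
y = np.linalg.solve(L, b)          # L y = b
x = np.linalg.solve(L.T, y)        # L^T x = y
assert np.allclose(A @ x, b)
```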

### Eigendecomposition

$$A = Q \Lambda Q^{-1}$$

where the columns of $Q$ are the eigenvectors of $A$. $\Lambda$ is a diagonal matrix in which $\Lambda_{ii}$ is the $i$'th eigenvalue of $A$.

### LU decomposition

A = LU, where L is lower triangular and U is upper triangular. Can be used to solve SLEs.

### QR decomposition

Decomposes a real square matrix $A$ such that $A = QR$. $Q$ is an orthogonal matrix and $R$ is upper triangular.

### Singular value decomposition (SVD)

Let $A$ be the $m \times n$ matrix to be decomposed. SVD is:

$$A = U \Sigma V^*$$

where $U$ is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ rectangular diagonal matrix containing the singular values and $V$ is an $n \times n$ unitary matrix.

Can be used for computing least-squares solutions or the pseudoinverse.
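A NumPy sketch of the decomposition and of building the pseudoinverse from it (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 1.0]])          # 3 x 2, so Sigma is rectangular

U, s, Vt = np.linalg.svd(A)         # s holds the singular values
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(U @ Sigma @ Vt, A)

# Pseudoinverse from the SVD: V Sigma^+ U^T, matching np.linalg.pinv.
Sigma_pinv = np.zeros(A.shape).T
np.fill_diagonal(Sigma_pinv, 1.0 / s)
assert np.allclose(Vt.T @ Sigma_pinv @ U.T, np.linalg.pinv(A))
```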

## Orthonormal vectors

Two vectors are orthonormal if they are orthogonal and both unit vectors.

## Principal Component Analysis (PCA)

Decomposes a matrix $X$ into a set of orthogonal vectors. The matrix represents a dataset with $n$ examples and $d$ features.

Method for PCA via eigendecomposition:

- Center the data by subtracting the mean for each dimension.
- Compute the covariance matrix $C$ on the centered data $X_c$.
- Do eigendecomposition of the covariance matrix to get $C = Q \Lambda Q^{-1}$.
- Take the $k$ largest eigenvalues and their associated eigenvectors. These eigenvectors are the 'principal components'.
- Construct the new matrix from the principal components by multiplying the centered $X_c$ by the truncated $Q$.

PCA can also be done via SVD.
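The eigendecomposition route can be sketched in NumPy (the data here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))       # 100 examples, 3 features
k = 2                               # number of components to keep

# 1. Center the data.
Xc = X - X.mean(axis=0)
# 2. Covariance matrix of the centered data.
C = Xc.T @ Xc / (len(X) - 1)
# 3. Eigendecomposition (eigh: C is symmetric).
eigvals, eigvecs = np.linalg.eigh(C)
# 4. Keep the eigenvectors of the k largest eigenvalues.
order = np.argsort(eigvals)[::-1][:k]
components = eigvecs[:, order]
# 5. Project the centered data onto the principal components.
X_reduced = Xc @ components

assert X_reduced.shape == (100, k)
```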

## Rank

### Matrix rank

The number of linearly independent columns (equivalently, rows).

### Tensor rank

When the term is applied to tensors, the rank refers to the dimensionality:

- Rank 0 is a scalar
- Rank 1 is a vector
- Rank 2 is a matrix

etc.
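Both senses of "rank" can be illustrated in NumPy, where the dimensionality sense is `ndim`:

```python
import numpy as np

# Matrix rank: number of linearly independent columns.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # = 2 * row 0, so not independent
              [0.0, 1.0, 1.0]])
assert np.linalg.matrix_rank(A) == 2

# Tensor "rank" in the dimensionality sense is ndim in numpy.
assert np.array(5.0).ndim == 0          # scalar
assert np.array([1.0, 2.0]).ndim == 1   # vector
assert A.ndim == 2                      # matrix
```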

## Singular values

For an $m \times n$ matrix $A$ the singular values are the set of numbers:

$$\sigma_i = \sqrt{\lambda_i}$$

where $i = 1, \ldots, \min(m, n)$ and $\lambda_i$ is an eigenvalue of the matrix $A^* A$.
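This relationship can be checked numerically for a real matrix (so $A^* A = A^\top A$):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Singular values from the SVD...
sigma = np.linalg.svd(A, compute_uv=False)      # descending order

# ...equal the square roots of the eigenvalues of A^T A.
eigvals = np.linalg.eigvalsh(A.T @ A)           # ascending order
assert np.allclose(sigma, np.sqrt(eigvals[::-1]))
```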

## Span

The span of a matrix is the set of all vectors that can be obtained as a linear combination of its columns.

## Spectral norm

The maximum singular value of a matrix.

## Spectral radius

The maximum of the magnitudes of the eigenvalues.
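The spectral norm and the spectral radius coincide for some matrices (e.g. symmetric ones) but differ in general; a NumPy check on a matrix where they differ:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, 0.0]])

# Spectral norm: largest singular value, also the induced 2-norm.
sigma_max = np.linalg.svd(A, compute_uv=False).max()
assert np.isclose(np.linalg.norm(A, 2), sigma_max)

# Spectral radius: largest absolute eigenvalue.
rho = np.abs(np.linalg.eigvals(A)).max()
assert np.isclose(rho, np.sqrt(2.0))   # eigenvalues are +/- i*sqrt(2)

# Here the two differ: sigma_max == 2, rho == sqrt(2).
assert rho < sigma_max
```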

## Spectrum

The set of eigenvalues of a matrix.

## System of Linear Equations (SLE)

A set of linear equations using a common set of variables. For example:

$$\begin{aligned} 3x + 2y &= 7 \\ x - y &= 1 \end{aligned}$$

In matrix form an SLE can be written as:

$$Ax = b$$

where $x$ is the vector of unknowns to be determined, $A$ is a matrix of the coefficients from the left-hand side and the vector $b$ contains the numbers from the right-hand side of the equations.

Systems of linear equations can be solved in many ways. Gaussian elimination is one.
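A NumPy sketch of solving a small system in matrix form (the system here is arbitrary):

```python
import numpy as np

# The system  3x + 2y = 7,  x - y = 1  in matrix form Ax = b.
A = np.array([[3.0, 2.0],
              [1.0, -1.0]])
b = np.array([7.0, 1.0])

x = np.linalg.solve(A, b)        # LAPACK solver based on LU factorization
assert np.allclose(A @ x, b)
assert np.allclose(x, [1.8, 0.8])
```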

### Underdetermined and overdetermined systems

- If the number of variables exceeds the number of equations the system is **underdetermined**.
- If the number of variables is less than the number of equations the system is **overdetermined**.

## Trace

The sum of the elements along the main diagonal of a square matrix:

$$\operatorname{tr}(A) = \sum_i A_{ii}$$

Satisfies the following properties:

- $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$
- $\operatorname{tr}(cA) = c \operatorname{tr}(A)$
- $\operatorname{tr}(A) = \operatorname{tr}(A^\top)$
- $\operatorname{tr}(AB) = \operatorname{tr}(BA)$

## Types of matrix

This table summarises the relationship between types of real and complex matrices. The concept in the complex column is the same as the concept in the same row of the real column if the matrix is real-valued.

| Real | Complex |
|---|---|
| Symmetric | Hermitian |
| Orthogonal | Unitary |
| Transpose | Conjugate transpose |

### Degenerate

A matrix that is not invertible.

### Diagonal matrix

A matrix where $A_{ij} = 0$ if $i \neq j$.

Can be written as $\operatorname{diag}(v)$ where $v$ is a vector of values specifying the diagonal entries.

Diagonal matrices have the following properties:

- The eigenvalues of a diagonal matrix are the set of its values on the diagonal.
- Products, powers and inverses (when the diagonal entries are non-zero) of diagonal matrices are also diagonal.

### Hermitian matrix

The complex equivalent of a symmetric matrix. $A = A^*$, where $*$ represents the conjugate transpose.

Also known as a self-adjoint matrix.

### Normal matrix

A matrix $A$ satisfying $A A^* = A^* A$, where $A^*$ is the conjugate transpose of $A$.

### Orthogonal matrix

A real square matrix $Q$ whose transpose is its inverse: $Q^\top Q = Q Q^\top = I$.

### Positive and negative (semi-)definite

A matrix $A$ is positive definite if:

$$x^\top A x > 0 \quad \text{for all } x \neq 0$$

Positive semi-definite matrices are defined analogously, except with $x^\top A x \geq 0$.

Negative definite and negative semi-definite matrices are the same but with the inequality reversed.
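For symmetric matrices, positive definiteness is equivalent to all eigenvalues being positive; a sketch using that criterion (the helper name is illustrative):

```python
import numpy as np

def is_positive_definite(A):
    """A symmetric matrix is positive definite iff all eigenvalues are > 0."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

assert is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]]))      # eigs 1, 3
assert not is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]]))  # eigs -1, 3
```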

### Singular matrix

A square matrix which is not invertible. A matrix is singular if and only if the determinant is zero.

### Symmetric matrix

A square matrix where $A = A^\top$.

Some properties of symmetric matrices are:

- All the eigenvalues of the matrix are real.
- Eigenvectors corresponding to distinct eigenvalues are orthogonal.

### Triangular matrix

Either a lower triangular or an upper triangular matrix.

#### Lower triangular matrix

A square matrix where only the lower triangle is not composed of zeros. Formally:

$$A_{ij} = 0 \quad \text{for } i < j$$

#### Upper triangular matrix

A square matrix where only the upper triangle is not composed of zeros. Formally:

$$A_{ij} = 0 \quad \text{for } i > j$$

### Unitary matrix

A matrix whose inverse is its conjugate transpose: $U^* U = U U^* = I$. The complex version of an orthogonal matrix.

## ZCA

Like PCA, ZCA transforms the data to have zero mean and an identity covariance matrix. Unlike PCA, it does not reduce the dimensionality of the data; instead it produces a whitened version that is minimally different from the original.
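A sketch of ZCA whitening via the eigendecomposition of the covariance matrix (the function name, the `eps` regulariser and the random data are illustrative assumptions):

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Whiten X (rows = examples) so it has zero mean and identity
    covariance, while staying as close as possible to the original."""
    Xc = X - X.mean(axis=0)
    C = Xc.T @ Xc / (len(X) - 1)              # covariance matrix
    eigvals, Q = np.linalg.eigh(C)            # eigendecomposition of C
    # ZCA transform: rotate into the eigenbasis, rescale, rotate back.
    W = Q @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ Q.T
    return Xc @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.5, 0.3]])
Xw = zca_whiten(X)
# Whitened data: zero mean, (approximately) identity covariance.
assert np.allclose(Xw.mean(axis=0), 0.0, atol=1e-10)
cov = Xw.T @ Xw / (len(Xw) - 1)
assert np.allclose(cov, np.eye(3), atol=1e-4)
```

The final rotation back by $Q$ is what distinguishes ZCA from PCA whitening and keeps the result close to the original data.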