Another term for the conjugate transpose. Identical to the transpose if the matrix is real.
A linear combination of vectors where the weights sum to 1. Unlike a convex combination, the weights can be negative.
The condition number of a matrix $A$ is defined as:

$$\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)}$$

where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the largest and smallest singular values of $A$ respectively.
If $\kappa(A)$ is high, the matrix is said to be ill-conditioned. Conversely, if the condition number is low (i.e. close to 1) we say $A$ is well-conditioned.
Since singular values are always non-negative, condition numbers are also always non-negative; in fact $\kappa(A) \geq 1$, because $\sigma_{\max}(A) \geq \sigma_{\min}(A)$.
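As a quick sketch (the matrix here is only an illustrative example), the condition number can be computed directly from the singular values with NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Condition number as the ratio of the largest to smallest singular value.
sigma = np.linalg.svd(A, compute_uv=False)
kappa = sigma.max() / sigma.min()

# NumPy's built-in 2-norm condition number agrees with the ratio.
assert np.isclose(kappa, np.linalg.cond(A, 2))
assert kappa >= 1.0  # condition numbers are never below 1
```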
The matrix obtained by taking the transpose followed by the complex conjugate of each entry.
The dot product of two vectors $\mathbf{a}$ and $\mathbf{b}$ is:

$$\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i$$
Eigenvalues and eigenvectors¶
Let $A$ be a square matrix. Then the eigenvalues and eigenvectors of the matrix are the vectors $\mathbf{v}$ and scalars $\lambda$ respectively that satisfy the equation:

$$A\mathbf{v} = \lambda\mathbf{v}$$
The trace of $A$ is the sum of its eigenvalues:

$$\operatorname{tr}(A) = \sum_{i} \lambda_i$$
The determinant of $A$ is the product of its eigenvalues: $\det(A) = \prod_{i} \lambda_i$.
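Both identities are easy to check numerically; a small sketch with an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues = np.linalg.eigvals(A)

# The trace equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), eigenvalues.sum())

# The determinant equals the product of the eigenvalues.
assert np.isclose(np.linalg.det(A), eigenvalues.prod())
```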
An algorithm for solving SLEs that iteratively transforms the matrix into an upper triangular one in row echelon form.
Synonymous with element-wise multiplication.
The inverse of a matrix $A$ is written as $A^{-1}$.
A matrix $A$ is invertible if and only if there exists a matrix $B$ such that $AB = BA = I$.
The inverse can be found using:
- Gaussian elimination
- LU decomposition
- Gauss-Jordan elimination
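In practice the inverse is rarely formed by hand; a minimal NumPy sketch (with an arbitrary invertible example matrix) checks the defining property $AB = BA = I$:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

A_inv = np.linalg.inv(A)

# The inverse satisfies A A^{-1} = A^{-1} A = I.
assert np.allclose(A @ A_inv, np.eye(2))
assert np.allclose(A_inv @ A, np.eye(2))
```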
Also known as matrix factorization.
$$A = LL^*$$

where $A$ is Hermitian and positive-definite, $L$ is lower-triangular and $L^*$ is its conjugate transpose. Can be used for solving SLEs.
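A short sketch of solving an SLE via the Cholesky factor (the matrix and right-hand side are illustrative; for a real matrix $L^* = L^T$):

```python
import numpy as np

# A symmetric positive-definite example matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

# Cholesky factorization: A = L L^T.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Solve A x = b with two triangular solves: L y = b, then L^T x = y.
y = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, y)
assert np.allclose(A @ x, b)
```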
$$A = Q\Lambda Q^{-1}$$

where the columns of $Q$ are the eigenvectors of $A$. $\Lambda$ is a diagonal matrix in which $\Lambda_{ii}$ is the $i$'th eigenvalue of $A$.
A = LU, where L is lower triangular and U is upper triangular. Can be used to solve SLEs.
$$A = UP$$

where $U$ is unitary and $P$ is positive semi-definite and Hermitian.
Decomposes a real square matrix $A$ such that $A = QR$. $Q$ is an orthogonal matrix and $R$ is upper triangular.
Singular value decomposition (SVD)¶
Let $A$ be the $m \times n$ matrix to be decomposed. SVD is:

$$A = U\Sigma V^*$$

where $U$ is an $m \times m$ unitary matrix, $\Sigma$ is an $m \times n$ rectangular diagonal matrix containing the singular values and $V$ is an $n \times n$ unitary matrix.
Can be used for computing least-squares solutions or the pseudoinverse.
Two vectors are orthonormal if they are orthogonal and both unit vectors.
The outer product of two column vectors $\mathbf{u}$ and $\mathbf{v}$ is:

$$\mathbf{u} \otimes \mathbf{v} = \mathbf{u}\mathbf{v}^T$$

a matrix whose $(i, j)$ entry is $u_i v_j$.
Principal Component Analysis (PCA)¶
Decomposes a matrix into a set of orthogonal vectors. The matrix $X$ represents a dataset with $n$ examples and $m$ features.
Method for PCA via eigendecomposition:
- Center the data by subtracting the mean for each dimension.
- Compute the covariance matrix $C$ on the centered data $X$.
- Do eigendecomposition of the covariance matrix to get $C = Q\Lambda Q^{-1}$.
- Take the $k$ largest eigenvalues and their associated eigenvectors. These eigenvectors are the 'principal components'.
- Construct the new matrix from the principal components by multiplying the centered $X$ by the truncated $Q$.
PCA can also be done via SVD.
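The eigendecomposition route can be sketched in a few lines of NumPy (the data and the choice of $k$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # 100 examples, 3 features
k = 2

# 1. Center the data by subtracting the mean of each dimension.
Xc = X - X.mean(axis=0)

# 2. Covariance matrix of the centered data.
C = np.cov(Xc, rowvar=False)

# 3. Eigendecomposition (eigh, since C is symmetric).
eigvals, eigvecs = np.linalg.eigh(C)

# 4. Keep the eigenvectors of the k largest eigenvalues.
order = np.argsort(eigvals)[::-1][:k]
components = eigvecs[:, order]

# 5. Project the centered data onto the principal components.
X_new = Xc @ components
assert X_new.shape == (100, k)
```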
The number of linearly independent columns.
When the term is applied to tensors, the rank refers to the dimensionality:

- Rank 0 is a scalar
- Rank 1 is a vector
- Rank 2 is a matrix

etc.
For a matrix $A$ the singular values are the set of numbers:

$$\sigma_1, \ldots, \sigma_n$$

where $\sigma_i = \sqrt{\lambda_i}$ and $\lambda_i$ is an eigenvalue of the matrix $A^*A$.
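This relationship is easy to verify numerically; a sketch with an arbitrary real example matrix (so $A^* = A^T$):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 3.0]])

# Singular values of A (returned in descending order).
sigma = np.linalg.svd(A, compute_uv=False)

# Square roots of the eigenvalues of A^T A, sorted to match.
lam = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(sigma, np.sqrt(lam))
```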
The span of a matrix is the set of all points that can be obtained as a linear combination of its columns.
System of Linear Equations (SLE)¶
A set of linear equations using a common set of variables. For example:

$$\begin{aligned} x + 2y &= 5 \\ 3x - y &= 1 \end{aligned}$$
In matrix form an SLE can be written as:

$$A\mathbf{x} = \mathbf{b}$$

where $\mathbf{x}$ is the vector of unknowns to be determined, $A$ is a matrix of the coefficients from the left-hand side and the vector $\mathbf{b}$ contains the numbers from the right-hand side of the equations.
Systems of linear equations can be solved in many ways. Gaussian elimination is one.
Underdetermined and overdetermined systems¶
- If the number of variables exceeds the number of equations the system is underdetermined.
- If the number of variables is less than the number of equations the system is overdetermined.
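An overdetermined system generally has no exact solution; one standard approach (sketched here with an illustrative 3-equation, 2-unknown system) is a least-squares fit:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# lstsq returns the x minimizing ||Ax - b||, plus residuals,
# the rank of A and its singular values.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
assert x.shape == (2,)
assert rank == 2
```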
The sum of the elements along the main diagonal of a square matrix.
The trace satisfies the following properties:

- $\operatorname{tr}(A + B) = \operatorname{tr}(A) + \operatorname{tr}(B)$
- $\operatorname{tr}(cA) = c\operatorname{tr}(A)$
- $\operatorname{tr}(AB) = \operatorname{tr}(BA)$
The transpose $A^T$ satisfies the following properties:

- $(A^T)^T = A$
- $(A + B)^T = A^T + B^T$
- $(AB)^T = B^T A^T$
Types of matrix¶
This table summarises the relationship between types of real and complex matrices. The concept in the complex column is the same as the concept in the same row of the real column if the matrix is real-valued.

| Real | Complex |
|---|---|
| Transpose | Conjugate transpose |
| Symmetric matrix | Hermitian matrix |
| Orthogonal matrix | Unitary matrix |
A matrix that is not invertible.
A matrix where $a_{ij} = 0$ if $i \neq j$.
Can be written as $\operatorname{diag}(\mathbf{v})$ where $\mathbf{v}$ is a vector of values specifying the diagonal entries.
Diagonal matrices have the following properties:
- The eigenvalues of a diagonal matrix are the set of its values on the diagonal.
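A quick sketch with an arbitrary example vector, using NumPy's `diag` to build $\operatorname{diag}(\mathbf{v})$:

```python
import numpy as np

v = np.array([2.0, 5.0, 7.0])
D = np.diag(v)  # builds the diagonal matrix diag(v)

# All off-diagonal entries are zero.
assert np.allclose(D - np.diag(np.diag(D)), 0.0)

# The eigenvalues are exactly the diagonal entries.
assert np.allclose(np.sort(np.linalg.eigvals(D)), np.sort(v))
```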
The complex equivalent of a symmetric matrix: $A = A^*$, where $*$ represents the conjugate transpose.
Also known as a self-adjoint matrix.
$$A = A^*$$

where $A^*$ is the conjugate transpose of $A$.
Positive and negative (semi-)definite¶
A matrix $A$ is positive definite if:

$$\mathbf{x}^T A \mathbf{x} > 0 \quad \text{for all non-zero vectors } \mathbf{x}$$

Positive semi-definite matrices are defined analogously, except with $\mathbf{x}^T A \mathbf{x} \geq 0$.
Negative definite and negative semi-definite matrices are defined in the same way but with the inequality reversed.
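For a symmetric matrix, positive definiteness is equivalent to all eigenvalues being strictly positive, which gives a simple numerical check (a sketch; `is_positive_definite` is a hypothetical helper name):

```python
import numpy as np

def is_positive_definite(A):
    """Check x^T A x > 0 for all nonzero x by testing that every
    eigenvalue of the symmetric matrix A is strictly positive."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

assert is_positive_definite(np.array([[2.0, 0.0], [0.0, 3.0]]))
assert not is_positive_definite(np.array([[1.0, 0.0], [0.0, -1.0]]))
```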
A square matrix which is not invertible. A matrix is singular if and only if the determinant is zero.
A square matrix where $A = A^T$.
Some properties of symmetric matrices are:
- All the eigenvalues of the matrix are real.
Either a lower triangular or an upper triangular matrix.
Lower triangular matrix¶
A square matrix where only the lower triangle is not composed of zeros. Formally: $a_{ij} = 0$ for $i < j$.
Upper triangular matrix¶
A square matrix where only the upper triangle is not composed of zeros. Formally: $a_{ij} = 0$ for $i > j$.
A matrix whose inverse is the same as its conjugate transpose: $U^{-1} = U^*$. The complex version of an orthogonal matrix.
Like PCA, ZCA converts the data to have zero mean and an identity covariance matrix. Unlike PCA, it does not reduce the dimensionality of the data; instead it produces a whitened version that is minimally different from the original.
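One common way to build the ZCA whitening matrix is $C^{-1/2}$ from the eigendecomposition of the covariance; a sketch on synthetic correlated data (the data, `eps`, and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with correlated features.
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [1.0, 1.0, 0.0],
                                          [0.0, 0.5, 0.5]])

# Center, then whiten with C^{-1/2} built from the eigendecomposition.
Xc = X - X.mean(axis=0)
C = np.cov(Xc, rowvar=False)
eigvals, E = np.linalg.eigh(C)
eps = 1e-8  # small constant to avoid division by zero
W = E @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ E.T  # ZCA whitening matrix
X_zca = Xc @ W

# The whitened data has (approximately) identity covariance.
assert np.allclose(np.cov(X_zca, rowvar=False), np.eye(3), atol=1e-2)
```

Unlike PCA whitening, the extra rotation back by $E$ keeps `X_zca` as close as possible to the original centered data.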