Cholesky Decomposition

Cholesky Decomposition Calculator

Factor any symmetric positive definite matrix into A = LLᵀ with detailed step-by-step solutions. The fastest decomposition for SPD matrices — half the cost of LU.


What is Cholesky Decomposition?

The Cholesky decomposition (or Cholesky factorization) is a method of decomposing a symmetric positive definite matrix into the product of a lower triangular matrix and its transpose. Named after the French military officer and geodesist André-Louis Cholesky, it is one of the most efficient and numerically stable ways to solve linear systems involving symmetric positive definite (SPD) matrices.

Given a symmetric positive definite matrix A, the Cholesky decomposition finds a unique lower triangular matrix L with strictly positive diagonal entries such that:

A = LLᵀ

Here L is lower triangular (all entries above the main diagonal are zero) and Lᵀ is its transpose, which is upper triangular. The matrix L is sometimes called the Cholesky factor or the square root of the matrix A, because L plays a role analogous to the square root of a positive number.

Requirements: Symmetric Positive Definite

The Cholesky decomposition exists if and only if the matrix A is symmetric positive definite. A real matrix A is symmetric positive definite when:

  1. A is symmetric: A = Aᵀ.
  2. xᵀAx > 0 for every nonzero vector x.

If A is only positive semidefinite (xᵀAx ≥ 0, with equality allowed for some nonzero x), a decomposition A = LLᵀ still exists, but L may have zeros on the diagonal and is no longer unique.

How to Check if a Matrix is Positive Definite

Before applying the Cholesky decomposition, you need to verify that your matrix is SPD. There are several equivalent ways to test positive definiteness:

  1. Eigenvalue test: All eigenvalues of A are strictly positive. This is the definition-level check but can be expensive to compute.
  2. Leading principal minors (Sylvester's criterion): Every leading principal minor (the determinant of the top-left k x k submatrix for k = 1, 2, ..., n) is strictly positive.
  3. Cholesky test: Attempt the Cholesky decomposition itself. If the algorithm completes without encountering a zero or negative value under a square root, the matrix is positive definite. This is often the most practical check.
  4. Diagonal dominance (sufficient condition): If A is symmetric and strictly diagonally dominant (each diagonal entry exceeds the sum of absolute values of other entries in its row), then A is positive definite.
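The Cholesky test (check 3) is usually the most practical of these; a minimal sketch using NumPy, where `np.linalg.cholesky` raises `LinAlgError` when the factorization breaks down:

```python
import numpy as np

def is_positive_definite(A, tol=1e-12):
    """Check symmetry, then apply the Cholesky test (check 3 above)."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T, atol=tol):
        return False
    try:
        np.linalg.cholesky(A)   # raises LinAlgError if not positive definite
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite([[4, 2, 2], [2, 5, 3], [2, 3, 6]]))  # True
print(is_positive_definite([[1, 2], [2, 1]]))  # False (eigenvalues 3 and -1)
```

Note that the symmetry check must come first: `np.linalg.cholesky` only reads one triangle of the matrix and would silently accept a nonsymmetric input.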

The Cholesky Algorithm Step by Step

The entries of the lower triangular matrix L are computed column by column, from left to right. For each column j = 1, 2, ..., n:

  1. Diagonal entry: Compute the diagonal element L_jj using the formula:
    L_jj = sqrt( A_jj − ∑_{k=1}^{j-1} L_jk^2 )

    The value under the square root must be positive. If it is zero or negative, the matrix is not positive definite.

  2. Below-diagonal entries: For each row i > j, compute:
    L_ij = ( A_ij − ∑_{k=1}^{j-1} L_ik * L_jk ) / L_jj

This process is repeated for every column until the entire lower triangle of L is filled. The entries above the diagonal are all zero by construction.
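The two formulas above translate almost directly into code; a pure-Python sketch of the column-by-column algorithm (illustrative, not optimized):

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T, computed column by column.

    Raises ValueError if A is not positive definite (a non-positive
    value appears under the square root).
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):                     # columns, left to right
        # Diagonal entry: L_jj = sqrt(A_jj - sum_k L_jk^2)
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        if d <= 0:
            raise ValueError("matrix is not positive definite")
        L[j][j] = math.sqrt(d)
        # Below-diagonal entries: L_ij = (A_ij - sum_k L_ik * L_jk) / L_jj
        for i in range(j + 1, n):
            s = A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = s / L[j][j]
    return L

A = [[4, 2, 2], [2, 5, 3], [2, 3, 6]]
print(cholesky(A))  # [[2.0, 0.0, 0.0], [1.0, 2.0, 0.0], [1.0, 1.0, 2.0]]
```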

Worked Example: 3x3 Matrix

Consider the symmetric positive definite matrix:

A = [4, 2, 2; 2, 5, 3; 2, 3, 6]

Column 1:

  L_11 = sqrt(A_11) = sqrt(4) = 2
  L_21 = A_21 / L_11 = 2 / 2 = 1
  L_31 = A_31 / L_11 = 2 / 2 = 1

Column 2:

  L_22 = sqrt(A_22 − L_21^2) = sqrt(5 − 1) = 2
  L_32 = (A_32 − L_31 * L_21) / L_22 = (3 − 1) / 2 = 1

Column 3:

  L_33 = sqrt(A_33 − L_31^2 − L_32^2) = sqrt(6 − 1 − 1) = 2

L = [2, 0, 0; 1, 2, 0; 1, 1, 2]

You can verify: LLᵀ reconstructs the original matrix A.
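The reconstruction can be checked numerically, for example with NumPy:

```python
import numpy as np

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [1.0, 1.0, 2.0]])
A = L @ L.T   # reconstructs [[4, 2, 2], [2, 5, 3], [2, 3, 6]]

# NumPy's built-in routine recovers the same lower-triangular factor
assert np.allclose(np.linalg.cholesky(A), L)
```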

Computational Complexity

The Cholesky decomposition requires approximately n³/3 floating-point operations for an n x n matrix. This is roughly half the cost of LU decomposition (which needs about 2n³/3 flops) and is one of the primary reasons Cholesky is preferred whenever the matrix is known to be SPD.

In addition to being faster, the Cholesky decomposition is also more numerically stable than LU for SPD matrices because it requires no pivoting. The entries of L are bounded in terms of the entries of A (no element growth can occur), which makes the algorithm backward stable.
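The typical workflow is to pay the O(n³/3) factorization cost once and then solve each right-hand side in O(n²) by forward and back substitution; a sketch using SciPy's cho_factor / cho_solve routines:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Solve A x = b by factoring once, then substituting.
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
b = np.array([8.0, 10.0, 11.0])

c, low = cho_factor(A)       # O(n^3/3) factorization, done once
x = cho_solve((c, low), b)   # O(n^2) per right-hand side

assert np.allclose(A @ x, b)
```

For many right-hand sides sharing the same A, the factor `(c, low)` is reused, which is where the savings over refactoring (or over computing an explicit inverse) come from.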

When to Use Cholesky vs LU

Both Cholesky and LU decomposition factor square matrices and are used to solve linear systems, but they apply in different situations:

  1. Cholesky: requires A to be symmetric positive definite. It costs about half the flops of LU, needs no pivoting, and its failure doubles as a test of positive definiteness.
  2. LU: works for any nonsingular square matrix (with partial pivoting), including nonsymmetric and indefinite matrices.

In practice, covariance matrices, kernel matrices (Gram matrices), normal equation matrices (AᵀA), and stiffness matrices in finite element analysis are all SPD, making Cholesky the preferred factorization in these contexts.

Connection to the Square Root of a Matrix

The Cholesky factor L is sometimes called the "matrix square root" because it satisfies A = LLᵀ, analogous to a = (sqrt(a))^2 for positive scalars. However, the Cholesky factor is not the unique symmetric positive definite square root of A (which satisfies S^2 = A with S symmetric positive definite). Instead, it is a triangular square root, which is computationally much cheaper to obtain.

The unique symmetric square root can be found via the eigendecomposition A = QDQᵀ as S = QD^{1/2}Qᵀ, but this is significantly more expensive. The Cholesky factor is preferred in practice because its triangular structure makes forward and backward substitution trivial.
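The difference between the two square roots can be seen directly; a NumPy sketch comparing the triangular (Cholesky) and symmetric (eigendecomposition) square roots:

```python
import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])

# Triangular square root: the Cholesky factor, A = L L^T
L = np.linalg.cholesky(A)

# Symmetric square root via the eigendecomposition A = Q D Q^T,
# giving S = Q D^{1/2} Q^T with S^2 = A
w, Q = np.linalg.eigh(A)
S = Q @ np.diag(np.sqrt(w)) @ Q.T

assert np.allclose(L @ L.T, A)      # both reproduce A...
assert np.allclose(S @ S, A)
assert not np.allclose(L, S)        # ...but they are different matrices
```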

Applications of Cholesky Decomposition

The Cholesky decomposition appears across many areas of applied mathematics, statistics, and engineering:

  1. Solving linear systems: factor A = LLᵀ once, then solve Ly = b by forward substitution and Lᵀx = y by back substitution.
  2. Linear least squares: solving the normal equations AᵀAx = Aᵀb, since AᵀA is SPD whenever A has full column rank.
  3. Monte Carlo simulation: generating correlated Gaussian samples from independent ones via x = Lz.
  4. Kalman filtering: propagating and updating covariance matrices in a numerically stable way.
  5. Optimization: solving the Newton step and testing whether a Hessian is positive definite.
  6. Gaussian processes: factoring kernel (Gram) matrices to solve systems and compute log-determinants efficiently.
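As one concrete example, sampling from a multivariate normal distribution with covariance Σ uses the Cholesky factor to introduce the desired correlations into independent standard normals; a NumPy sketch (the covariance matrix here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target covariance (SPD) and its Cholesky factor
Sigma = np.array([[4.0, 2.0],
                  [2.0, 3.0]])
L = np.linalg.cholesky(Sigma)

# If z has identity covariance, x = L z has covariance
# E[L z zᵀ Lᵀ] = L I Lᵀ = Sigma
z = rng.standard_normal((2, 100_000))
x = L @ z

sample_cov = np.cov(x)   # close to Sigma for large sample sizes
```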

Variants of the Cholesky Decomposition

  1. LDLᵀ decomposition: A = LDLᵀ with unit lower triangular L and diagonal D. It avoids square roots and extends (with symmetric pivoting) to symmetric indefinite matrices.
  2. Pivoted Cholesky: PᵀAP = LLᵀ for positive semidefinite matrices; the pivoting order also yields low-rank approximations.
  3. Incomplete Cholesky: a sparse approximation of L used as a preconditioner for iterative solvers such as conjugate gradient.
  4. Rank-one update/downdate: updating the factor of A ± xxᵀ in O(n²) operations instead of refactoring from scratch.

Other Decomposition Types

Explore the other matrix decomposition calculators available on Decomposition.ai.

LU

LU Decomposition

Factor a matrix into Lower and Upper triangular matrices. Essential for solving systems of linear equations efficiently.

Open calculator →
QR

QR Decomposition

Decompose into an orthogonal matrix Q and upper triangular R. Used in least squares regression and eigenvalue algorithms.

Open calculator →
SVD

Singular Value Decomposition

The most general matrix decomposition. Factorize any matrix into U, Σ, Vᵀ. Powers recommendation systems and data compression.

Open calculator →
λ

Eigendecomposition

Find eigenvalues and eigenvectors. Fundamental to PCA, quantum mechanics, vibration analysis, and stability theory.

Open calculator →