Free Online Tool

Gram-Schmidt Orthogonalization Calculator

Transform any set of linearly independent vectors into an orthonormal basis using the Gram-Schmidt process with step-by-step explanations.


What is the Gram-Schmidt Process?

The Gram-Schmidt process is a fundamental algorithm in linear algebra that takes a finite set of linearly independent vectors and produces an orthonormal set of vectors spanning the same subspace. Named after Jørgen Pedersen Gram and Erhard Schmidt, it is one of the most important constructive procedures in mathematics, providing both a theoretical proof that every finite-dimensional inner product space has an orthonormal basis and a practical algorithm for computing one.

Given vectors {v1, v2, …, vn} in an inner product space, the Gram-Schmidt process produces orthonormal vectors {e1, e2, …, en} such that for every k, the span of {e1, …, ek} equals the span of {v1, …, vk}. This incremental spanning property is what makes Gram-Schmidt so useful: the first k orthonormal vectors always span the same subspace as the first k original vectors.

Step-by-Step Algorithm

The Gram-Schmidt process operates in three conceptual stages for each vector: projection, subtraction, and normalization. Here is the complete algorithm:

  1. Initialize — Take the first vector v1. Compute u1 = v1, then normalize it to obtain the first orthonormal vector: e1 = u1 / ‖u1‖.
  2. Compute projections — For each subsequent vector vk (k = 2, 3, …, n), calculate its projection onto each previously computed orthonormal vector ej: proj(vk) = ⟨vk, ej⟩ ej for j = 1, 2, …, k−1.
  3. Subtract projections — Remove all these projection components from vk to get the orthogonal residual: uk = vk − ∑j ⟨vk, ej⟩ ej, where the sum runs over j = 1, …, k−1.
  4. Normalize — Divide by the norm to obtain the next orthonormal vector: ek = uk / ‖uk‖.
  5. Repeat — Continue until all n vectors have been processed.

The key insight is that at each step, subtracting the projections removes exactly the components of vk that lie in the subspace spanned by the previously computed orthonormal vectors. What remains — the vector uk — is orthogonal to all of them by construction.
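The steps above can be sketched in a few lines of NumPy. This is an illustrative implementation of the classical process, not the calculator's own code; the function name and tolerance are our choices:

```python
import numpy as np

def gram_schmidt(V, tol=1e-12):
    """Classical Gram-Schmidt: orthonormalize the columns of V."""
    m, n = V.shape
    Q = np.zeros((m, n))
    for k in range(n):
        u = V[:, k].astype(float)          # start from the original vector v_k
        for j in range(k):
            # subtract the projection of v_k onto each previous e_j
            u -= (V[:, k] @ Q[:, j]) * Q[:, j]
        norm = np.linalg.norm(u)
        if norm < tol:
            raise ValueError("input vectors are linearly dependent")
        Q[:, k] = u / norm                 # normalize the residual
    return Q

# Columns are v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1)
V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
```

Afterwards Q.T @ Q equals the identity matrix (up to rounding), confirming the columns are orthonormal.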

Worked Example with Three Vectors

Let us orthogonalize the vectors v1 = (1, 1, 0), v2 = (1, 0, 1), and v3 = (0, 1, 1) using the Gram-Schmidt process.

Step 1: Process v1

Set u1 = v1 = (1, 1, 0). Compute the norm: ‖u1‖ = √(1² + 1² + 0²) = √2. Normalize: e1 = (1/√2, 1/√2, 0).

Step 2: Process v2

Compute the projection of v2 onto e1: ⟨v2, e1⟩ = (1)(1/√2) + (0)(1/√2) + (1)(0) = 1/√2. Subtract: u2 = v2 − (1/√2) e1 = (1, 0, 1) − (1/2, 1/2, 0) = (1/2, −1/2, 1). Compute the norm: ‖u2‖ = √(1/4 + 1/4 + 1) = √(3/2). Normalize: e2 = (1/√6, −1/√6, 2/√6).

Step 3: Process v3

Compute projections: ⟨v3, e1⟩ = (0)(1/√2) + (1)(1/√2) + (1)(0) = 1/√2 and ⟨v3, e2⟩ = (0)(1/√6) + (1)(−1/√6) + (1)(2/√6) = 1/√6. Subtract both projections: u3 = v3 − (1/√2) e1 − (1/√6) e2 = (0, 1, 1) − (1/2, 1/2, 0) − (1/6, −1/6, 2/6) = (−2/3, 2/3, 2/3). Compute the norm: ‖u3‖ = √(4/9 + 4/9 + 4/9) = 2/√3. Normalize: e3 = (−1/√3, 1/√3, 1/√3).

The resulting orthonormal set {e1, e2, e3} forms a basis for R³. You can verify orthonormality by checking that ⟨ei, ej⟩ = 0 for i ≠ j and ‖ei‖ = 1 for all i.
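The hand computation above can be double-checked numerically. A small NumPy snippet (illustrative; variable names are our own) that verifies the resulting basis is orthonormal:

```python
import numpy as np

s2, s3, s6 = np.sqrt(2), np.sqrt(3), np.sqrt(6)
e1 = np.array([1/s2, 1/s2, 0.0])
e2 = np.array([1/s6, -1/s6, 2/s6])
e3 = np.array([-1/s3, 1/s3, 1/s3])
E = np.column_stack([e1, e2, e3])

# Orthonormality: E^T E should be the 3x3 identity matrix.
print(np.allclose(E.T @ E, np.eye(3)))  # True
```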

Classical vs. Modified Gram-Schmidt

The algorithm described above is the classical Gram-Schmidt (CGS) process. While mathematically correct, it suffers from numerical instability in floating-point arithmetic. When the input vectors are nearly linearly dependent, rounding errors accumulate and the computed vectors can lose orthogonality dramatically.

The modified Gram-Schmidt (MGS) process addresses this by changing the order of operations. Instead of computing every projection coefficient from the original vector vk, MGS updates vk sequentially, computing each coefficient from the partially orthogonalized residual:

  1. Set uk(0) = vk.
  2. For j = 1, 2, …, k−1: compute uk(j) = uk(j-1) − ⟨uk(j-1), ej⟩ ej.
  3. Set uk = uk(k-1) and normalize: ek = uk / ‖uk‖.

In exact arithmetic, CGS and MGS produce identical results. In floating-point arithmetic, MGS is significantly more stable because each subtraction uses the most recently updated (and therefore most orthogonal) intermediate vector. The loss of orthogonality in MGS is proportional to the machine epsilon ε times the condition number of the input matrix, compared to ε times the square of the condition number for CGS.
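The stability gap is easy to demonstrate. The sketch below is an illustrative experiment (the matrix construction and random seed are arbitrary choices, not from the calculator): it runs both variants on an ill-conditioned matrix and measures how far QᵀQ drifts from the identity.

```python
import numpy as np

def cgs(V):
    """Classical GS: coefficients computed from the original column v_k."""
    Q = np.zeros_like(V, dtype=float)
    for k in range(V.shape[1]):
        u = V[:, k].astype(float)
        for j in range(k):
            u = u - (V[:, k] @ Q[:, j]) * Q[:, j]
        Q[:, k] = u / np.linalg.norm(u)
    return Q

def mgs(V):
    """Modified GS: coefficients computed from the updated residual u."""
    Q = np.zeros_like(V, dtype=float)
    for k in range(V.shape[1]):
        u = V[:, k].astype(float)
        for j in range(k):
            u = u - (u @ Q[:, j]) * Q[:, j]
        Q[:, k] = u / np.linalg.norm(u)
    return Q

# Build a matrix with condition number ~1e8 by prescribing singular values.
rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((50, 4)))
Q2, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q1 @ np.diag([1.0, 1e-3, 1e-6, 1e-8]) @ Q2

err = lambda Q: np.linalg.norm(Q.T @ Q - np.eye(4))
print(f"CGS orthogonality error: {err(cgs(A)):.1e}")
print(f"MGS orthogonality error: {err(mgs(A)):.1e}")
```

On a matrix this ill-conditioned, the CGS error is typically many orders of magnitude larger than the MGS error, matching the εκ² versus εκ bounds quoted above.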

For applications demanding even greater orthogonality, CGS with re-orthogonalization (CGS2) runs the classical process twice. This achieves orthogonality at the level of machine epsilon regardless of the condition number, at the cost of doubling the work.

Connection to QR Decomposition

The Gram-Schmidt process is intimately connected to QR decomposition. When you apply Gram-Schmidt to the columns of a matrix A, you simultaneously compute two matrices: a matrix Q with orthonormal columns and an upper triangular matrix R satisfying

A = Q R

This means Gram-Schmidt orthogonalization is not merely related to QR decomposition — it is a QR decomposition algorithm. The orthonormal vectors form Q, and the bookkeeping of projection coefficients and norms forms R. This is why our calculator displays both Q and R: the orthonormal basis you seek is exactly the Q matrix, and R records exactly how each original vector decomposes into the orthonormal basis.
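Assuming a matrix with linearly independent columns, the correspondence can be checked directly: run (modified) Gram-Schmidt on the columns of A, then recover R as QᵀA. A sketch using the vectors from the worked example:

```python
import numpy as np

# Columns are v1 = (1, 1, 0), v2 = (1, 0, 1), v3 = (0, 1, 1)
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Modified Gram-Schmidt on the columns of A
Q = np.zeros_like(A)
for k in range(3):
    u = A[:, k].copy()
    for j in range(k):
        u -= (u @ Q[:, j]) * Q[:, j]
    Q[:, k] = u / np.linalg.norm(u)

# R collects the projection coefficients and norms: R = Q^T A
R = Q.T @ A
print(np.allclose(A, Q @ R))          # True: A = QR
print(np.allclose(R, np.triu(R)))     # True: R is upper triangular
```

The entries of R are exactly the numbers computed in the worked example: the diagonal holds the norms ‖uk‖ and the strictly upper part holds the projection coefficients ⟨vk, ej⟩.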

Geometric Interpretation

The Gram-Schmidt process has an elegant geometric interpretation. At each step, you are projecting out the components of a vector that lie along previously established directions, leaving only the component that points in a genuinely new direction.

Consider vectors in R³. The first vector v1 establishes a line. The second vector v2 generally points in a different direction; by subtracting its projection onto e1, you extract the component of v2 that is perpendicular to the line, establishing a plane. The third vector v3 is then projected onto that plane, and the perpendicular residual gives the component that extends into the third dimension.

Each orthogonal residual uk represents the "new information" that vk brings — the part of vk that cannot be explained by the previous vectors. If ‖uk‖ is large, then vk contributes substantial new directional content. If ‖uk‖ is very small (or zero), then vk is nearly (or exactly) a linear combination of the previous vectors.
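This "new information" reading can be made quantitative. The helper below (a hypothetical name; it uses NumPy's QR routine to build the projection) returns ‖uk‖, the norm of the component of a vector outside the span of the others:

```python
import numpy as np

def residual_norm(vs, v):
    """Norm of the component of v orthogonal to span(vs)."""
    Q, _ = np.linalg.qr(np.column_stack(vs))  # orthonormal basis for span(vs)
    return np.linalg.norm(v - Q @ (Q.T @ v))

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])

# A vector far outside span{v1, v2} leaves a large residual ...
big = residual_norm([v1, v2], np.array([0.0, 1.0, 1.0]))
# ... while a near-combination of v1 and v2 leaves almost nothing.
small = residual_norm([v1, v2], v1 + 2 * v2 + np.array([0.0, 0.0, 1e-9]))
print(big, small)
```

The first residual is 2/√3 ≈ 1.155 (the ‖u3‖ from the worked example), while the second is on the order of 10⁻⁹: that vector contributes essentially no new direction.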

Applications

The Gram-Schmidt process and the orthonormal bases it produces appear throughout mathematics, science, and engineering: in least-squares fitting, in QR-based eigenvalue algorithms, in the construction of orthogonal polynomial families, in signal processing, and in Krylov subspace methods such as the Arnoldi and Lanczos iterations, which are built around repeated orthogonalization.

When Vectors Are Linearly Dependent

If the input vectors {v1, …, vn} are linearly dependent, then at some step k the orthogonal residual uk will be the zero vector. This happens because vk lies entirely in the span of the previous vectors, so subtracting all projections removes everything.

When uk = 0, normalization is undefined (division by zero). In practice, the algorithm detects that ‖uk‖ is below some threshold and either skips vk, continuing with the remaining vectors to produce an orthonormal basis for the span of the input, or halts and reports that the input set is linearly dependent.

In floating-point arithmetic, uk will rarely be exactly zero even for linearly dependent vectors. Instead, ‖uk‖ will be very small (on the order of machine epsilon times the norms of the input vectors), and the resulting "orthonormal" vector will be dominated by rounding errors and point in an essentially random direction. Robust implementations use a tolerance to detect this situation.
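A robust, rank-revealing variant might look like the following sketch; the tolerance and its scaling by the input vector's norm are typical choices, not the calculator's exact rule:

```python
import numpy as np

def gram_schmidt_rank_revealing(V, tol=1e-10):
    """Orthonormalize the columns of V, skipping (near-)dependent vectors.

    Returns Q with one column per independent input vector, so the number
    of columns of Q is the numerical rank of V at tolerance tol.
    """
    basis = []
    for k in range(V.shape[1]):
        u = V[:, k].astype(float)
        for e in basis:
            u = u - (u @ e) * e
        norm = np.linalg.norm(u)
        # Compare against the original vector's norm, not an absolute threshold.
        if norm > tol * np.linalg.norm(V[:, k]):
            basis.append(u / norm)
    return np.column_stack(basis) if basis else np.zeros((V.shape[0], 0))

# Third column is the sum of the first two, so only two directions survive.
V = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0],
              [0.0, 1.0, 1.0]])
print(gram_schmidt_rank_revealing(V).shape)  # (3, 2)
```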

Computational Complexity

The Gram-Schmidt process applied to n vectors in Rm (or equivalently, to an m×n matrix) requires roughly 2mn² floating-point operations: each of the n vectors is projected against up to n−1 previously computed vectors, and each projection costs O(m) operations for the inner product plus O(m) for the subtraction.

For a square n×n matrix, the overall complexity is O(n³). This is the same asymptotic complexity as Householder-based QR decomposition, though the constant factors and numerical stability differ.

Gram-Schmidt in General Inner Product Spaces

One of the most powerful aspects of the Gram-Schmidt process is that it works in any inner product space, not just Rn with the standard dot product. The algorithm only requires the ability to compute inner products ⟨·, ·⟩ and norms ‖·‖ = √⟨·, ·⟩.

This generality means Gram-Schmidt applies to function spaces (orthogonalizing the monomials 1, x, x², … under the L² inner product on [−1, 1] produces the Legendre polynomials), to spaces with weighted inner products, and to complex vector spaces equipped with the Hermitian inner product.
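For example, orthogonalizing the monomials 1, x, x², x³ under the inner product ⟨p, q⟩ = ∫₋₁¹ p(x)q(x) dx reproduces the (normalized) Legendre polynomials. A sketch using NumPy's polynomial class, with the inner product computed exactly via antiderivatives:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def inner(p, q):
    """L2 inner product <p, q> = integral of p(x)q(x) over [-1, 1]."""
    antideriv = (p * q).integ()
    return antideriv(1.0) - antideriv(-1.0)

# Gram-Schmidt on the monomials 1, x, x^2, x^3
monomials = [P([1]), P([0, 1]), P([0, 0, 1]), P([0, 0, 0, 1])]
ortho = []
for v in monomials:
    u = v
    for e in ortho:
        u = u - inner(u, e) * e          # subtract projection onto e
    ortho.append(u / np.sqrt(inner(u, u)))  # normalize in the L2 norm

for e in ortho:
    print(e.coef.round(4))
```

The resulting coefficient arrays are scalar multiples of the Legendre polynomials 1, x, (3x² − 1)/2, (5x³ − 3x)/2, scaled so each has unit L² norm.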

Comparison with Householder Reflections and Givens Rotations

While Gram-Schmidt is the most intuitive orthogonalization algorithm, it is not the only one. Two other methods are commonly used to achieve the same QR factorization:

Householder reflections apply a sequence of orthogonal reflections Hk = I − 2vkvkᵀ that zero out all sub-diagonal entries in one column at a time. Compared to Gram-Schmidt, Householder QR is backward stable: the computed Q is orthogonal to machine precision regardless of how ill-conditioned the input is. It is the method behind standard library routines (for example, LAPACK's QR), though Q is usually stored implicitly as a product of reflectors rather than formed explicitly.

Givens rotations zero out individual sub-diagonal entries using 2×2 orthogonal rotations. Compared to Gram-Schmidt, they are more expensive for dense matrices but excel on sparse or structured problems, since each rotation touches only two rows; they also parallelize well and support cheap incremental updates when new rows arrive one at a time.

In summary: use Gram-Schmidt when you need an intuitive, easily implemented algorithm or when working in abstract inner product spaces; use Householder reflections for production-quality dense matrix computations; and use Givens rotations for sparse or streaming problems that require incremental updates.

Other Decomposition Types

Explore other matrix decomposition methods with dedicated calculators and step-by-step explanations.

LU

LU Decomposition

Factor a matrix into Lower and Upper triangular matrices. Essential for solving systems of linear equations efficiently.

Open calculator →
QR

QR Decomposition

Factor any matrix into an orthogonal Q and upper triangular R. The standard output of the Gram-Schmidt process applied to matrix columns.

Open calculator →
SVD

Singular Value Decomposition

The most general matrix decomposition. Factorize any matrix into U, Σ, Vᵀ. Powers recommendation systems and data compression.

Open calculator →
LL

Cholesky Decomposition

Efficient factorization for symmetric positive definite matrices into LLᵀ. Widely used in Monte Carlo simulations and optimization.

Open calculator →
λ

Eigendecomposition

Find eigenvalues and eigenvectors. Fundamental to PCA, quantum mechanics, vibration analysis, and stability theory.

Open calculator →

Why Decomposition.ai?

Step-by-Step Solutions

Every calculation shows the full Gram-Schmidt derivation, including projections, subtractions, and normalizations at each step.

Instant Results

All computations run in your browser. No server round-trips, no waiting, no sign-up.

Verified Results

Every orthogonalization is verified by checking that the output vectors are orthonormal and span the same subspace.

Works Everywhere

Responsive design works on phones, tablets, and desktops. Orthogonalize vectors on the go.