Solve any system of linear equations using LU decomposition with forward and back substitution. See every step explained.
The equation Ax = b is the matrix form of a system of linear equations. Here A is an n×n coefficient matrix, x is the unknown vector of n variables we want to find, and b is a known right-hand side vector. Each row of the matrix equation corresponds to one linear equation in the system. For example, the system

2x₁ + x₂ = 5
x₁ + 3x₂ = 7
can be written as Ax = b where A = [[2, 1], [1, 3]], x = [x₁, x₂]ᵀ, and b = [5, 7]ᵀ. Solving this system means finding the values of x₁ and x₂ that satisfy every equation simultaneously. The matrix formulation is not just compact notation—it opens the door to powerful computational methods that scale to systems with thousands or even millions of unknowns.
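As a quick check, this small system can be solved directly with NumPy (a sketch; under the hood, numpy.linalg.solve calls an LU-based LAPACK routine):

```python
import numpy as np

# The example system: 2x1 + x2 = 5 and x1 + 3x2 = 7.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 7.0])

x = np.linalg.solve(A, b)  # solves Ax = b
print(x)  # x1 = 1.6, x2 = 1.8
```

Substituting back confirms the solution: 2(1.6) + 1.8 = 5 and 1.6 + 3(1.8) = 7.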
LU decomposition is one of the most efficient and widely used methods for solving Ax = b. The key insight is to split the problem into two simpler triangular systems. First, decompose the coefficient matrix A into a lower triangular matrix L and an upper triangular matrix U, with partial pivoting recorded in a permutation matrix P:

PA = LU
Once this factorization is computed, multiplying both sides of Ax = b by P gives PAx = Pb, which becomes LUx = Pb. We introduce an intermediate variable y = Ux and solve in two stages:

1. Forward substitution: solve Ly = Pb for the intermediate vector y.
2. Back substitution: solve Ux = y for x.
Each of these triangular solves is straightforward and fast. The beauty of this approach is that the expensive part—the LU factorization itself—only needs to be done once. If you later need to solve Ax = b for a different b, you can reuse the same L, U, and P and simply run the two substitution steps again.
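The whole pipeline can be sketched with SciPy (an assumption: SciPy is available; scipy.linalg.lu returns P, L, U with A = PLU, so Pᵀb is the permuted right-hand side):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 7.0])

# Stage 0: factor. SciPy returns A = P @ L @ U, i.e. L @ U = P.T @ A.
P, L, U = lu(A)

# Stage 1: forward substitution, solving L y = P^T b.
y = solve_triangular(L, P.T @ b, lower=True)

# Stage 2: back substitution, solving U x = y.
x = solve_triangular(U, y)
```

The same L, U, and P can be reused with a different b by rerunning only stages 1 and 2.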
Forward substitution solves a lower triangular system by computing each unknown in order from top to bottom. Because L is lower triangular (all entries above the diagonal are zero) and the diagonal entries are 1 (in Doolittle form), the first equation gives y₁ directly. Each subsequent equation has only one new unknown, since all the previous y values are already known.
The algorithm for an n×n system is:

y₁ = b₁
yᵢ = bᵢ − Σⱼ₌₁ⁱ⁻¹ Lᵢⱼ yⱼ,  for i = 2, …, n
Because each step involves a simple sum and subtraction, forward substitution is very fast—O(n²) operations for an n-dimensional system. There is no division by diagonal elements (since they are all 1), which also avoids any numerical issues at this stage.
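The loop above can be written in a few lines of NumPy (a minimal sketch for a unit lower triangular L; the function name is illustrative, not from a library):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b where L is unit lower triangular (Doolittle form)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # Every y[j] with j < i is already known, so y[i] follows directly.
        y[i] = b[i] - L[i, :i] @ y[:i]
    return y
```

For the example matrix A = [[2, 1], [1, 3]] (no row swaps needed), L = [[1, 0], [0.5, 1]], and solving Ly = [5, 7]ᵀ gives y = [5, 4.5]ᵀ.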
Back substitution works in the opposite direction, solving the upper triangular system from bottom to top. The last equation in Ux = y involves only xₙ, so it can be solved immediately. Then xₙ₋₁ is found using xₙ, and so on upward:

xₙ = yₙ / Uₙₙ
xᵢ = (yᵢ − Σⱼ₌ᵢ₊₁ⁿ Uᵢⱼ xⱼ) / Uᵢᵢ,  for i = n−1, …, 1
Like forward substitution, this requires O(n²) operations. The division by diagonal elements Uᵢᵢ is safe as long as A is non-singular—a zero diagonal in U would mean the matrix is singular and the system either has no solution or infinitely many solutions.
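A matching sketch of back substitution (again, an illustrative helper, not a library routine):

```python
import numpy as np

def back_substitution(U, y):
    """Solve U x = y where U is upper triangular with a nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Every x[j] with j > i is already known; divide by the pivot U[i, i].
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```

Continuing the running example, U = [[2, 1], [0, 2.5]] and y = [5, 4.5]ᵀ give x = [1.6, 1.8]ᵀ, matching the direct solve.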
The total cost of solving Ax = b via LU decomposition breaks down into two parts:

1. Factorization: computing P, L, and U costs O(n³) operations (about 2n³/3 floating-point operations).
2. Substitution: forward and back substitution each cost O(n²) operations.
This separation is what makes LU decomposition so powerful in practice. Consider a scenario where you need to solve Ax = b for 1000 different b vectors (common in engineering simulations). With Gaussian elimination you would do 1000 × O(n³) work. With LU decomposition you do one O(n³) factorization plus 1000 × O(n²) substitutions—potentially hundreds of times faster for large n.
One of the primary advantages of the LU approach is efficient handling of multiple right-hand sides. In many practical applications, the coefficient matrix A remains the same while the right-hand side b changes. Examples include:

- Structural analysis, where the stiffness matrix is fixed while the load vector b changes for each load case
- Circuit simulation, where the same network is solved for many different source configurations
- Implicit time-stepping schemes, where the same system matrix is solved at every time step
Once you have stored L, U, and P, each new b requires only O(n²) additional work. This is why numerical libraries like LAPACK separate the factorization step (e.g., dgetrf) from the solve step (e.g., dgetrs).
Not every system Ax = b has a solution. Two main failure modes can occur:

- No solution: A is singular and b lies outside the column space of A, so the equations are inconsistent.
- Infinitely many solutions: A is singular and b lies inside the column space of A, so at least one variable can be chosen freely.
In floating-point arithmetic, a matrix might not be exactly singular but very close to it (ill-conditioned). In such cases the solution may exist mathematically but be extremely sensitive to rounding errors, making it practically unreliable. The condition number of A measures this sensitivity—a large condition number warns that the solution may not be trustworthy.
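NumPy can compute the condition number directly; the nearly singular matrix below is an illustrative example (its rows are almost parallel):

```python
import numpy as np

# A nearly singular 2x2 matrix: the second row barely differs from the first.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

kappa = np.linalg.cond(A)
print(f"condition number: {kappa:.0f}")  # on the order of 4e4
```

As a rule of thumb, a condition number around 10ᵏ means roughly k decimal digits of accuracy can be lost in the solution.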
Several other methods exist for solving Ax = b. Here is how they compare to LU decomposition:

- Gaussian elimination: the same O(n³) cost for a single solve, but it produces no reusable factorization for additional right-hand sides.
- Matrix inversion (x = A⁻¹b): roughly three times the work of LU and typically less accurate; explicitly forming A⁻¹ is almost never necessary.
- Cramer's rule: fine for 2×2 or 3×3 systems, but its cost grows explosively with n and it is numerically unreliable for larger systems.
- QR decomposition: more numerically stable, at roughly twice the cost; the standard choice for least squares problems.
- Iterative methods (e.g., conjugate gradient, GMRES): preferable for very large sparse systems where an O(n³) factorization is infeasible.
Solving Ax = b is one of the most fundamental operations in applied mathematics and engineering. Some key application areas include:

- Engineering: structural analysis, circuit simulation, and finite element methods
- Scientific computing: discretized differential equations in physics and chemistry
- Data science: least squares regression and curve fitting
- Optimization: Newton-type methods solve a linear system at every iteration
In exact arithmetic, LU decomposition works for any non-singular matrix. In floating-point arithmetic, however, numerical errors can accumulate and degrade the solution. Two key techniques address this:

- Partial pivoting: at each elimination step, rows are swapped so that the pivot is the largest entry (in absolute value) available in its column. This keeps the multipliers stored in L bounded by 1 and prevents small pivots from amplifying rounding errors.
- Iterative refinement: the residual of a computed solution is used to correct it.
The condition number κ(A) quantifies how much errors in b or A are amplified in the solution x. If κ(A) is large (the matrix is ill-conditioned), even small rounding errors can produce a highly inaccurate solution. In such cases, iterative refinement—solving the system, computing the residual r = b − Ax, then solving Aδx = r to correct x—can improve accuracy without repeating the full factorization.
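One refinement step can be sketched as follows (the function name is illustrative; note that the factorization is computed only once):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_with_refinement(A, b, iters=2):
    """Solve Ax = b by LU, then apply a few iterative refinement steps."""
    lu_piv = lu_factor(A)       # factor once: O(n^3)
    x = lu_solve(lu_piv, b)     # initial solution
    for _ in range(iters):
        r = b - A @ x           # residual of the current solution
        dx = lu_solve(lu_piv, r)  # correction, reusing the factorization
        x = x + dx
    return x
```

In production code the residual is ideally accumulated in higher precision than the factorization, which is where refinement gains the most accuracy.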
For well-conditioned systems, LU decomposition with partial pivoting is reliable and efficient. It is the workhorse algorithm behind MATLAB’s backslash operator, NumPy’s numpy.linalg.solve, and LAPACK’s dgesv routine, handling the vast majority of linear system solves in scientific computing.
Related calculators:

- Factor any square matrix into lower and upper triangular matrices with partial pivoting. Open calculator →
- Decompose into orthogonal Q and upper triangular R. Ideal for least squares problems. Open calculator →
- The most general decomposition. Factor any matrix into UΣVᵀ. Open calculator →
- Efficient factorization for symmetric positive definite matrices. Open calculator →
- Find eigenvalues and eigenvectors of square matrices. Open calculator →