Iterative Solvers (Jacobi)

Visualize the convergence of the Jacobi iterative method for solving systems of linear equations.

Concept Overview

The Jacobi method is an iterative algorithm for determining the solutions of a strictly diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization. Unlike direct methods such as Gaussian elimination, iterative solvers guess a solution and continually refine it.

Mathematical Definition

Consider a square system of n linear equations, where:

Ax = b

We can decompose the matrix A into a diagonal component D and the remainder R:

A = D + R

Substituting this into the original equation yields:

(D + R)x = b
Dx + Rx = b
Dx = b - Rx

By isolating x and adding an iteration index (k), we get the Jacobi iterative formula:

x^(k+1) = D^(-1)(b - Rx^(k))

Because D is a diagonal matrix, its inverse D^(-1) is trivial to compute — we just take the reciprocals of the diagonal elements. The component-wise formula for the i-th element of the vector at iteration k+1 is:

x_i^(k+1) = (1 / a_ii) [b_i - Σ (a_ij x_j^(k))]

where the sum is over all j ≠ i.
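The component-wise update above translates almost directly into code. Below is a minimal NumPy sketch (the function name, tolerance, iteration cap, and example system are illustrative choices, not part of any particular library):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Solve Ax = b via the Jacobi iteration x^(k+1) = D^(-1)(b - R x^(k))."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)              # diagonal entries a_ii
    R = A - np.diagflat(D)      # remainder: A with its diagonal zeroed
    for k in range(max_iter):
        x_new = (b - R @ x) / D  # all components updated from the old x
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Example: a strictly diagonally dominant 3x3 system
A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]
x, iters = jacobi(A, b)
```

Note that every component of `x_new` is computed from the previous iterate `x`, which is exactly what makes the Jacobi update fully parallelizable.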

Key Concepts

Strict Diagonal Dominance

A matrix is said to be strictly diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in that row is larger than the sum of the magnitudes of all the other (non-diagonal) entries in that row.

|a_ii| > Σ |a_ij| for all i, where j ≠ i.

If a matrix is strictly diagonally dominant, the Jacobi method is guaranteed to converge to the unique solution of the system, regardless of the initial guess x(0). The method may still converge even if this condition is not met, but convergence is not guaranteed.
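The dominance condition is a simple row-by-row check. A short NumPy sketch (the function name and example matrices are illustrative):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Return True if |a_ii| > sum of |a_ij| over j != i, for every row i."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = A.diagonal()
    off_diag_sums = A.sum(axis=1) - diag   # row sums excluding the diagonal
    return bool(np.all(diag > off_diag_sums))

# Dominant: |10| > 1+2, |11| > 1+1, |10| > 2+1
A_good = [[10, -1, 2], [-1, 11, -1], [2, -1, 10]]
# Not dominant: first row has |1| < |4|
A_bad = [[1, 4], [2, 5]]
```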

Convergence Criteria

The iterative method can be written in the form x^(k+1) = Bx^(k) + c. Here, B is the iteration matrix: B = -D^(-1)R. The sequence converges for any initial guess if and only if the spectral radius ρ(B) of the iteration matrix B is less than 1. The spectral radius is defined as the largest absolute value of its eigenvalues.
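The spectral radius of the Jacobi iteration matrix can be computed directly, which makes the convergence criterion easy to verify numerically. A sketch (the function name and example matrix are illustrative):

```python
import numpy as np

def jacobi_spectral_radius(A):
    """Spectral radius of the Jacobi iteration matrix B = -D^(-1) R."""
    A = np.asarray(A, dtype=float)
    D_inv = np.diag(1.0 / np.diag(A))      # inverse of the diagonal part
    R = A - np.diagflat(np.diag(A))        # off-diagonal remainder
    B = -D_inv @ R
    return float(np.max(np.abs(np.linalg.eigvals(B))))

A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
rho = jacobi_spectral_radius(A)  # < 1 here, so Jacobi converges for this system
```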

Historical Context

The method is named after the prominent 19th-century German mathematician Carl Gustav Jacob Jacobi. It was published in 1845 alongside his research into planetary orbits. Before the advent of high-speed digital computers, the primary advantage of iterative methods was that they were highly amenable to hand calculation, as errors introduced at any stage were "self-correcting" provided the method still converged.

In modern computing, the Jacobi method is rarely used directly as a standalone solver for large systems because its convergence is relatively slow compared to other iterative methods like Gauss-Seidel or Conjugate Gradient. However, it is fundamentally important pedagogically and serves as a preconditioner for more advanced algorithms. Furthermore, its completely parallelizable nature (each component x_i^(k+1) can be updated independently of the others) makes it historically significant in the development of parallel computing algorithms.

Real-world Applications

  • Discretized PDEs — Solving large sparse linear systems arising from finite-difference or finite-element discretizations of elliptic partial differential equations, such as steady-state heat conduction or Poisson problems.
  • Parallel Computing — Serving as a simple, fully parallelizable testbed for designing and benchmarking parallel solvers on multicore CPUs and GPUs, since each component can be updated independently.
  • Preconditioning — Acting as a diagonal scaling preconditioner inside more advanced iterative methods (e.g., Conjugate Gradient) to accelerate convergence on ill-conditioned systems.

Related Concepts

  • Gauss-Seidel Method — A modification of the Jacobi method that uses the newly computed values x_i^(k+1) as soon as they are available, rather than waiting for the next iteration step. This typically results in faster convergence.
  • Successive Over-Relaxation (SOR) — An extrapolation of the Gauss-Seidel method that introduces an over-relaxation parameter to further accelerate convergence.
  • Krylov Subspace Methods — More sophisticated iterative solvers, such as the Conjugate Gradient (CG) method or GMRES, which project the problem onto a smaller subspace to iteratively approximate the solution.
  • Preconditioners — Matrices applied to a system of equations to lower the condition number, effectively transforming a system into one that iterative solvers can converge on more quickly. The Jacobi method is often used as a simple preconditioner (diagonal scaling).
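To make the Jacobi/Gauss-Seidel distinction concrete: a Gauss-Seidel sweep overwrites each component in place, so later components in the same sweep already see the updated earlier ones. A minimal sketch (the function name and example system are illustrative):

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: each x[i] uses the freshest available values."""
    A = np.asarray(A, dtype=float)
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    for i in range(n):
        s = A[i] @ x - A[i, i] * x[i]   # sum over j != i, with latest values
        x[i] = (b[i] - s) / A[i, i]
    return x

A = [[10.0, -1.0,  2.0],
     [-1.0, 11.0, -1.0],
     [ 2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]
x = np.zeros(3)
for _ in range(50):
    x = gauss_seidel_step(A, b, x)
```

The trade-off: reusing fresh values typically speeds convergence, but the loop over i becomes inherently sequential, unlike the fully parallel Jacobi update.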

Experience it interactively

Adjust parameters, observe in real time, and build deep intuition with Riano’s interactive Iterative Solvers (Jacobi) module.

Try Iterative Solvers (Jacobi) on Riano →
