Taylor Series
Approximating functions with polynomials.
Taylor Series Expansion
Concept Overview
A Taylor series represents a function as an infinite sum of terms calculated from the values of its derivatives at a single point. This powerful technique allows us to approximate complex functions—such as trigonometric, exponential, and logarithmic functions—using simple polynomial arithmetic. Taylor series underpin much of numerical computing, from pocket calculators evaluating sin(x) to physics engines simulating orbital mechanics.
Mathematical Definition
Given a function f that is infinitely differentiable at a point a, its Taylor series expansion about a is:

f(x) = Σₙ₌₀^∞ f⁽ⁿ⁾(a)/n! · (x − a)ⁿ

Written out term by term, this becomes:

f(x) = f(a) + f′(a)(x − a) + f″(a)/2! · (x − a)² + f‴(a)/3! · (x − a)³ + ⋯
Each successive term incorporates higher-order derivative information, capturing increasingly fine details of the function's behavior near the expansion point.
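As a minimal sketch of the definition, the partial sum can be evaluated directly from a list of derivative values at the expansion point. The function name `taylor_eval` and the choice of f(x) = ln(x) about a = 1 (where f⁽ⁿ⁾(1) = (−1)ⁿ⁺¹(n − 1)!) are illustrative, not from the original module:

```python
import math

def taylor_eval(derivs_at_a, a, x):
    """Sum f^(n)(a)/n! * (x - a)^n over the supplied derivative values at a."""
    return sum(d * (x - a) ** n / math.factorial(n)
               for n, d in enumerate(derivs_at_a))

# Example: f(x) = ln(x) expanded about a = 1, where f(1) = 0 and
# f^(n)(1) = (-1)^(n+1) * (n-1)! for n >= 1.
derivs = [0.0] + [(-1) ** (n + 1) * math.factorial(n - 1) for n in range(1, 12)]
print(taylor_eval(derivs, 1.0, 1.5), math.log(1.5))  # values agree closely
```

Each extra entry in `derivs_at_a` adds one more term of the series, tightening the approximation near a.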
Maclaurin Series (a = 0)
When the expansion point is a = 0, the Taylor series is called a Maclaurin series. This is the most commonly used form and simplifies the general formula to:

f(x) = Σₙ₌₀^∞ f⁽ⁿ⁾(0)/n! · xⁿ = f(0) + f′(0)x + f″(0)/2! · x² + f‴(0)/3! · x³ + ⋯
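A classic Maclaurin example is eˣ, every derivative of which equals 1 at 0, so the series is Σ xⁿ/n!. A short Python sketch (the name `exp_maclaurin` is our own) sums it iteratively, computing each term from the previous one rather than recomputing factorials:

```python
import math

def exp_maclaurin(x: float, n_terms: int) -> float:
    """Partial sum of the Maclaurin series for e^x: sum of x^n / n!."""
    total = 0.0
    term = 1.0  # x^0 / 0!
    for n in range(n_terms):
        total += term
        term *= x / (n + 1)  # next term: multiply by x/(n+1)
    return total

print(exp_maclaurin(1.0, 15), math.exp(1.0))  # partial sum is close to e
```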
Taylor Expansion of sin(x)
The interactive visualization in this module uses sin(x), whose Maclaurin series is one of the most elegant in mathematics. Because the derivatives of sin(x) cycle through sin, cos, −sin, −cos, only the odd-degree terms survive:

sin(x) = x − x³/3! + x⁵/5! − x⁷/7! + ⋯ = Σₙ₌₀^∞ (−1)ⁿ x²ⁿ⁺¹/(2n + 1)!
With just three terms (up to x⁵), the approximation is remarkably accurate for |x| < 2. Adding more terms extends the region of accuracy outward. This is exactly what the interactive demonstrates: as you increase the number of terms, the polynomial "hugs" the true sine curve over a wider interval.
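The behavior described above can be checked numerically. This sketch (the helper name `sin_taylor` is ours) sums the alternating odd-degree terms and compares against `math.sin`:

```python
import math

def sin_taylor(x: float, n_terms: int) -> float:
    """Maclaurin partial sum for sin(x): x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    term = x  # first term: x^1 / 1!
    for n in range(n_terms):
        total += term
        # next odd-degree term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}: 3-term approx {sin_taylor(x, 3):.6f}, sin(x) {math.sin(x):.6f}")
```

With only three terms the two columns agree to a few decimal places for |x| < 2, and adding terms shrinks the gap further out.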
Convergence and Radius of Convergence
A Taylor series does not necessarily converge for all values of x. The radius of convergence R defines the interval around the expansion point within which the series converges to the function. It can be determined using the ratio test:

R = limₙ→∞ |aₙ/aₙ₊₁|, where aₙ is the coefficient of (x − a)ⁿ.
For sin(x), cos(x), and eˣ, the radius of convergence is infinite—the series converges for all real numbers. For other functions like 1/(1−x), the radius is finite (R = 1). When x lies outside the radius, the partial sums diverge, which is visible in the interactive as the approximation swings wildly away from the true curve far from the expansion center.
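The finite radius of 1/(1−x) is easy to observe numerically: its Taylor series about 0 is the geometric series 1 + x + x² + ⋯. A quick sketch (the function name `geometric_partial` is ours):

```python
def geometric_partial(x: float, n_terms: int) -> float:
    """Partial sum of 1 + x + x^2 + ..., the Taylor series of 1/(1-x) about 0."""
    return sum(x ** n for n in range(n_terms))

# Inside the radius (|x| < 1) the partial sums settle toward 1/(1 - x):
print(geometric_partial(0.5, 30), 1 / (1 - 0.5))  # both near 2.0
# Outside the radius (|x| > 1) the partial sums grow without bound:
print(geometric_partial(1.5, 30))
```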
Key Concepts
- Truncation error: The difference between the true function value and the partial sum. For a degree-N approximation, the error is on the order of (x − a)^(N+1), bounded by the Lagrange remainder term.
- Analytic functions: A function is analytic at a point if its Taylor series converges to the function in some neighborhood of that point. Most functions encountered in physics and engineering are analytic on their domains.
- Polynomial approximation: Taylor polynomials provide the best local approximation to a function in the sense that they match the function and its first N derivatives at the expansion point.
- Alternating series: The sin(x) expansion is an alternating series, which means the partial sums alternately overshoot and undershoot the true value, making error estimation straightforward.
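The alternating-series error bound mentioned above—that the truncation error is no larger than the first omitted term—can be verified for sin(x). This sketch (the helper name is ours) returns both the partial sum and the magnitude of the next term:

```python
import math

def sin_partial_and_next_term(x: float, n_terms: int):
    """Return (partial sum of sin's series, magnitude of first omitted term)."""
    total = 0.0
    term = x
    for n in range(n_terms):
        total += term
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total, abs(term)  # after the loop, `term` is the first omitted term

x = 1.2
approx, next_term = sin_partial_and_next_term(x, 3)
error = abs(math.sin(x) - approx)
# For a convergent alternating series with decreasing terms,
# the error is bounded by the first omitted term.
print(error <= next_term)  # True
```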
Historical Context
The series is named after Brook Taylor (1685–1731), an English mathematician who published the general formula in his 1715 work Methodus Incrementorum Directa et Inversa. However, the idea of representing functions as power series predates Taylor: James Gregory and Isaac Newton had already used specific instances of such expansions decades earlier.
The special case at a = 0 is named after Colin Maclaurin (1698–1746), a Scottish mathematician who made extensive use of the expansion in his Treatise of Fluxions (1742). Indian mathematicians of the Kerala school, particularly Madhava of Sangamagrama (c. 1350–1425), had discovered series expansions for trigonometric functions centuries earlier, though their work was not known in Europe at the time.
Real-world Applications
- Numerical computation: Calculators and computer math libraries evaluate transcendental functions (sin, cos, exp, log) using truncated Taylor or related polynomial approximations.
- Physics approximations: Small-angle approximations (sin θ ≈ θ) used in pendulum analysis and optics are first-order Taylor approximations.
- Signal processing: Taylor expansions are used to analyze filter responses and to linearize nonlinear systems around operating points.
- Error analysis: Propagation of uncertainty formulas in experimental science rely on first-order Taylor expansion of measurement functions.
- Machine learning: Second-order optimization methods (Newton's method, L-BFGS) use Taylor expansions of the loss function to compute update steps.
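The small-angle approximation mentioned above is easy to quantify: since sin θ = θ − θ³/6 + ⋯, the error of sin θ ≈ θ is bounded by θ³/6. A brief check (the helper name is ours):

```python
import math

def small_angle_error(theta: float) -> float:
    """Absolute error of the first-order approximation sin(theta) ≈ theta."""
    return abs(math.sin(theta) - theta)

# The error shrinks like theta^3, so the approximation is excellent
# for the small angles typical of pendulum and optics problems.
for theta in (0.01, 0.05, 0.1, 0.2):
    print(f"θ = {theta}: error {small_angle_error(theta):.2e}")
```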
Related Concepts
- Fourier Series — representing functions as sums of sinusoids
- Gradient Descent — uses first-order Taylor approximation of cost functions
- Harmonic Oscillator — small-angle pendulum analysis relies on Taylor approximation of sin(θ)
- Numerical Methods — finite difference schemes derived from Taylor expansions
Experience it interactively
Adjust parameters, observe in real time, and build deep intuition with Riano’s interactive Taylor Series module.
Try Taylor Series on Riano →