Linear Transformations
Visualizing 2D transformations via matrix operations.
Linear Transformations and Matrix Representation
Concept Overview
A linear transformation is a function between vector spaces that preserves vector addition and scalar multiplication. In two dimensions, every linear transformation can be represented by a 2×2 matrix, and the columns of that matrix are exactly the images of the standard basis vectors e1 = (1,0) and e2 = (0,1). By watching how e1 and e2 move, you can understand exactly what any matrix does to the entire plane: rotations, reflections, scaling, shearing, and projections are all linear transformations.
Mathematical Definition
A function T: ℝⁿ → ℝᵐ is a linear transformation if, for all vectors u, v and every scalar c:
T(u + v) = T(u) + T(v)  (additivity)
T(cu) = cT(u)  (homogeneity)
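These two conditions can be checked numerically. In the sketch below, the matrix A and the vectors u, v are arbitrary illustrative choices, not values from the text; any 2×2 matrix would satisfy the same checks, since T(v) = Av is always linear.

```python
import numpy as np

# Any 2x2 matrix A defines a linear map T(v) = A @ v.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # illustrative choice

u = np.array([1.0, -2.0])
v = np.array([0.5, 4.0])
c = 3.0

# Additivity: T(u + v) == T(u) + T(v)
assert np.allclose(A @ (u + v), A @ u + A @ v)
# Homogeneity: T(c*u) == c*T(u)
assert np.allclose(A @ (c * u), c * (A @ u))
```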
Key Concepts
The Determinant
For a matrix A = [a, b; c, d], the determinant det(A) = ad − bc measures how the transformation changes area: the absolute value |det(A)| is the area scale factor. A determinant of 2 means areas double; a determinant of −1 means areas are preserved but orientation is reversed (like a mirror). A determinant of 0 means the transformation collapses the plane onto a line or point: the matrix is singular and cannot be inverted.
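A brief numerical check of the area interpretation, using two hypothetical matrices (a stretch along x and a projection onto the x-axis):

```python
import numpy as np

# Stretch x by a factor of 2, leave y alone: areas should double.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
a, b = A[0]
c, d = A[1]
det_manual = a * d - b * c        # the ad - bc formula
assert np.isclose(det_manual, np.linalg.det(A))
print(det_manual)                 # 2.0: the unit square becomes a 2x1 rectangle

# A projection onto the x-axis is singular: det = 0, no inverse exists.
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])
print(np.linalg.det(P))           # 0.0
```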
Common Transformations
- Rotation by θ: Matrix [cos θ, −sin θ; sin θ, cos θ]. Determinant = 1 (area preserved, orientation preserved).
- Scaling by (sx, sy): Diagonal matrix [sx, 0; 0, sy]. Determinant = sx · sy.
- Reflection across x-axis: Matrix [1, 0; 0, −1]. Determinant = −1 (orientation reversed).
- Shear along x: Matrix [1, k; 0, 1]. Determinant = 1 (area preserved, shape distorted).
- Projection onto x-axis: Matrix [1, 0; 0, 0]. Determinant = 0 (singular, collapses y-dimension).
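The matrices in the list above can be built directly and their determinants confirmed. The specific parameter values here (θ = 90°, sx = 2, sy = 3, k = 1.5) are illustrative choices only:

```python
import numpy as np

theta = np.pi / 2  # 90-degree rotation

rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
scaling    = np.array([[2.0, 0.0],
                       [0.0, 3.0]])    # sx = 2, sy = 3
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])   # across the x-axis
shear      = np.array([[1.0, 1.5],
                       [0.0, 1.0]])    # k = 1.5 along x
projection = np.array([[1.0, 0.0],
                       [0.0, 0.0]])    # onto the x-axis

for name, M in [("rotation", rotation), ("scaling", scaling),
                ("reflection", reflection), ("shear", shear),
                ("projection", projection)]:
    # Each matrix's first column is exactly where e1 = (1, 0) lands.
    print(f"{name:10s} det = {np.linalg.det(M):+.1f}")
```

Running this prints det = +1.0 for the rotation and shear, +6.0 for the scaling (sx · sy), −1.0 for the reflection, and +0.0 for the projection, matching the list above.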
Composition and Matrix Multiplication
Applying transformation B followed by A is equivalent to multiplying by the matrix AB. This is why matrix multiplication is not commutative — rotating then scaling gives a different result than scaling then rotating. The order of operations matters geometrically.
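A small sketch of this non-commutativity, using a 90° rotation R and a hypothetical scaling S. Note the convention: in the product R @ S, the rightmost matrix acts first.

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotate 90 degrees counterclockwise
S = np.array([[2.0, 0.0],
              [0.0, 1.0]])    # stretch x by 2

v = np.array([1.0, 0.0])

# Scale first, then rotate: v -> (2, 0) -> (0, 2)
print(R @ S @ v)   # [0. 2.]
# Rotate first, then scale: v -> (0, 1) -> (0, 1)
print(S @ R @ v)   # [0. 1.]

# The two composite matrices differ:
assert not np.allclose(R @ S, S @ R)
```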
Eigenvalues and Eigenvectors
An eigenvector of a transformation is a vector whose direction doesn't change — it only gets scaled by a factor called the eigenvalue. Finding eigenvalues reveals the "natural axes" of a transformation and is fundamental to understanding matrix behavior, stability of systems, and principal component analysis.
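The defining property Av = λv can be verified with numpy. The symmetric matrix below is an illustrative choice whose natural axes are the diagonals y = x and y = −x:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` is an eigenvector v satisfying A @ v = lambda * v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

# This matrix stretches by 3 along y = x and by 1 along y = -x.
print(np.sort(eigenvalues))
```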
Historical Context
The concept of linear transformations emerged from the work of Arthur Cayley and James Sylvester in the mid-19th century, who developed matrix algebra as a systematic way to represent and compose linear maps. Cayley's 1858 "A Memoir on the Theory of Matrices" established matrices as algebraic objects in their own right.
The geometric interpretation was deepened by Hermann Grassmann and later formalized in the abstract framework of vector spaces by Giuseppe Peano and Stefan Banach. Today, linear algebra is considered the most universally applicable branch of mathematics, forming the backbone of computer graphics, quantum mechanics, machine learning, and engineering.
Real-world Applications
- Computer graphics: Every rotation, scaling, and projection in 3D rendering is a matrix multiplication. GPUs are essentially massively parallel matrix multiplication engines.
- Machine learning: Neural networks are compositions of linear transformations (weight matrices) with nonlinear activations. Understanding how matrices transform space is key to understanding what networks learn.
- Quantum mechanics: Quantum states are vectors and observables are linear operators (matrices). Measurement, evolution, and entanglement are all described by linear algebra.
- Data compression: SVD and PCA use eigendecomposition to find the most important directions in data, enabling lossy compression of images, audio, and high-dimensional datasets.
- Control systems: Stability analysis of dynamical systems relies on eigenvalues of the system matrix — eigenvalues with negative real parts indicate stability.
Related Concepts
- Gradient Descent — optimizes over parameter spaces where the Hessian matrix (second-order linear approximation) determines convergence
- Fourier Transform — represents signals as linear combinations of basis functions, a change-of-basis operation
- Probability Distributions — covariance matrices describe the shape of multivariate distributions via linear algebra
- Taylor Series — the first-order Taylor polynomial gives the best affine (linear-plus-constant) approximation of a function near a point
Experience it interactively
Adjust parameters, observe in real time, and build deep intuition with Riano’s interactive Linear Transformations module.
Try Linear Transformations on Riano →