Naive Bayes Classifier

Interactively explore how a Naive Bayes classifier learns probabilities and makes predictions.

Concept Overview

The Naive Bayes classifier is a simple but highly effective probabilistic machine learning model used for classification tasks. It is based on applying Bayes' Theorem with a strong (and often unrealistic, hence "naive") assumption: it assumes that all the features in a dataset are mutually independent given the class label. Despite this oversimplification, Naive Bayes performs surprisingly well in many real-world scenarios, particularly in natural language processing and document classification.

Mathematical Definition

The classifier relies on Bayes' Theorem to calculate the posterior probability of a class C_k given a feature vector x = (x_1, x_2, ..., x_n). Bayes' Theorem is stated as:

P(C_k | x) = (P(x | C_k) · P(C_k)) / P(x)

Because the denominator P(x) is constant for all classes, we can ignore it when finding the most likely class. The "naive" independence assumption states that the features are independent given the class, meaning:

P(x | C_k) = P(x_1, x_2, ..., x_n | C_k) = ∏_{i=1}^{n} P(x_i | C_k)

Substituting this into the proportional form of Bayes' Theorem, the Naive Bayes decision rule becomes:

ŷ = argmax_k P(C_k) · ∏_{i=1}^{n} P(x_i | C_k)
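The decision rule above can be sketched in a few lines of Python. This is a minimal illustration, not a full implementation: the priors and per-feature likelihoods are hypothetical toy values (in practice they would be estimated from training data), and the product is computed in log space, a standard trick to avoid numerical underflow when many small probabilities are multiplied.

```python
import math

# Hypothetical toy estimates for a two-class spam filter.
# priors[c] plays the role of P(C_k); likelihoods[c][f] plays P(x_i | C_k).
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": {"offer": 0.7, "meeting": 0.1},
    "ham": {"offer": 0.1, "meeting": 0.6},
}

def predict(features):
    """Return the class maximizing log P(C_k) + sum_i log P(x_i | C_k)."""
    scores = {}
    for cls, prior in priors.items():
        score = math.log(prior)
        for f in features:
            score += math.log(likelihoods[cls][f])
        scores[cls] = score
    return max(scores, key=scores.get)

print(predict(["offer"]))    # -> spam
print(predict(["meeting"]))  # -> ham
```

Note that because the argmax is unaffected by monotone transformations, taking logarithms changes nothing about which class wins, while keeping the arithmetic stable.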

Key Concepts

Prior Probability

The prior probability, P(C_k), represents our initial belief about the likelihood of a class before observing any evidence. It is typically estimated by counting the frequency of each class in the training dataset.
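Estimating the priors really is just frequency counting, as a short sketch with a made-up label list shows:

```python
from collections import Counter

# Toy training labels (hypothetical); P(C_k) = count(C_k) / total samples.
labels = ["spam", "ham", "ham", "spam", "ham"]
counts = Counter(labels)
priors = {cls: n / len(labels) for cls, n in counts.items()}
print(priors)  # -> {'spam': 0.4, 'ham': 0.6}
```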

Likelihood

The likelihood, P(x_i | C_k), is the probability of observing a specific feature value x_i given that the sample belongs to class C_k. The way this is calculated depends on the type of data (e.g., Gaussian for continuous data, Multinomial for word counts, or Bernoulli for boolean features).
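For the continuous (Gaussian) case, the likelihood is the normal probability density evaluated at the feature value, using a per-class mean and variance estimated from training data. A minimal sketch, with hypothetical mean and variance values:

```python
import math

def gaussian_likelihood(x, mean, var):
    """P(x_i | C_k) under a Gaussian model with class-specific mean and variance."""
    coef = 1.0 / math.sqrt(2 * math.pi * var)
    return coef * math.exp(-((x - mean) ** 2) / (2 * var))

# Feature value 5.0 under a class whose feature has mean 4.0, variance 1.0:
print(gaussian_likelihood(5.0, 4.0, 1.0))
```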

The "Zero-Frequency" Problem and Laplace Smoothing

If a specific feature value never occurs with a particular class in the training data, its likelihood estimate becomes zero. Because Naive Bayes multiplies the probabilities, a single zero will wipe out the entire probability for that class. To prevent this, we use an additive smoothing technique (most commonly Laplace smoothing), which adds a small constant to all counts so no probability is ever strictly zero.
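Laplace smoothing amounts to adding a constant alpha (commonly 1) to every count before normalizing. A minimal sketch for categorical likelihoods, where V is the number of possible feature values:

```python
def smoothed_likelihood(value_count, class_count, n_values, alpha=1.0):
    """P(x_i = v | C_k) = (count(v, C_k) + alpha) / (count(C_k) + alpha * V)."""
    return (value_count + alpha) / (class_count + alpha * n_values)

# A value never observed with the class still gets a small nonzero probability:
print(smoothed_likelihood(0, 10, 5))  # -> 1/15, not 0
```

Because the same alpha is added to every value's count, the smoothed probabilities still sum to 1 across the V possible values, and no single unseen value can zero out the class's overall score.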

Historical Context

The underlying theorem was developed by the Reverend Thomas Bayes in the 18th century (published posthumously in 1763) and independently formalized by Pierre-Simon Laplace shortly after.

The application of Bayes' Theorem as the "Naive Bayes" machine learning classifier emerged much later. It gained significant popularity in the 1950s for medical diagnosis systems and became a cornerstone of automated text categorization and spam filtering in the late 1990s and early 2000s, proving that complex probability distributions could often be successfully approximated by assuming independence.

Real-world Applications

  • Spam Filtering: One of the most famous applications; it classifies emails as "spam" or "ham" by evaluating the probabilities of individual words appearing in the email.
  • Sentiment Analysis: Categorizing text (like product reviews or tweets) as having positive, negative, or neutral sentiment based on the words used.
  • Medical Diagnosis: Estimating the probability of a patient having a specific disease given a set of independent symptoms.
  • Document Categorization: Automatically organizing news articles or web pages into predefined categories (e.g., Sports, Politics, Technology).

Related Concepts

  • Probability Distributions — the theoretical foundation for how feature likelihoods are modeled (e.g., Gaussian, Binomial).
  • Logistic Regression — a discriminative classifier that often competes with the generative Naive Bayes model.
  • Decision Boundaries — understanding how the independence assumption shapes the separation between classes in the feature space.

Experience it interactively

Adjust parameters, observe in real time, and build deep intuition with Riano’s interactive Naive Bayes Classifier module.

Try Naive Bayes Classifier on Riano →
