Probabilistic Inference

Before discussing likelihood estimation, it's important to clarify what "likelihood" actually means. In this framework, we focus on the relationship between a cause and its observed outcome.

$$ \overset{z}{\text{cause}} \rightarrow \overset{y}{\text{observed outcome}} $$

At its core, inference involves estimating potential causes based on observed outcomes. To do this, we rely on four foundational components:

  1. Prior ($p(z)$): Our initial belief about the cause, $z$, before observing any data, $y$. It encapsulates our baseline assumptions or existing knowledge.
  2. Evidence ($p(y)$): The overall probability of observing the outcome, $y$, across all possible causes. It integrates every potential scenario and serves as a normalization constant to ensure our probabilities sum to one.
  3. Likelihood ($p(y|z)$): The probability of observing the outcome, $y$, given a specific cause, $z$. It measures how well the assumed cause explains the observed effect.
  4. Posterior ($p(z|y)$): Our updated belief about the cause, $z$, after observing the outcome, $y$. It is the refined estimate resulting from the inference process.
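These four components are tied together by Bayes' theorem:

$$ p(z|y) = \frac{p(y|z)\,p(z)}{p(y)} $$

where the evidence expands, for discrete causes, as $p(y) = \sum_z p(y|z)\,p(z)$.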

Using Bayes’ Theorem, we synthesize these four components to transition from initial assumptions (the Prior) to refined conclusions (the Posterior) by incorporating observed data (the Likelihood and Evidence).
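As a concrete illustration of this update, here is a minimal sketch of a discrete Bayesian update in Python. The two causes and their probabilities are made-up assumptions chosen for the example, not values from the text.

```python
# Minimal discrete Bayesian update: combine a prior p(z) with a
# likelihood p(y|z) to obtain the posterior p(z|y).

def posterior(prior, likelihood):
    """Return p(z|y) for each cause z via Bayes' theorem."""
    # Evidence p(y): total probability of the outcome over all causes.
    evidence = sum(prior[z] * likelihood[z] for z in prior)
    # Posterior p(z|y) = p(y|z) * p(z) / p(y), normalized by the evidence.
    return {z: prior[z] * likelihood[z] / evidence for z in prior}

# Hypothetical scenario: two candidate causes for one observed outcome y.
prior = {"z1": 0.8, "z2": 0.2}        # p(z): initial belief in each cause
likelihood = {"z1": 0.1, "z2": 0.9}   # p(y|z): how well each cause explains y

print(posterior(prior, likelihood))
```

Even though `z1` starts out far more probable a priori, the outcome is much better explained by `z2`, so the posterior shifts most of the belief toward `z2`. This is exactly the prior-to-posterior transition described above.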