— Ch. 1 · Foundations of Bayes' Theorem —
Bayesian inference.
~5 min read · Ch. 1 of 6
Thomas Bayes's An Essay Towards Solving a Problem in the Doctrine of Chances was published between 1763 and 1764, though Bayes died in 1761 before seeing it printed. His work showed that probabilistic limits could be placed on an unknown event using what we now call the posterior probability. A geometric visualization accompanying such arguments assigns relative weights, such as 2, 3, 6, and 9, to the cells of a contingency table; each probability then becomes the fraction of shaded area corresponding to those cells.

Bayesian inference derives the posterior probability from two antecedents: a prior probability and a likelihood function derived from a statistical model for the observed data. The theorem computes the posterior by multiplying the prior estimate by the likelihood of observing the evidence given the hypothesis, then dividing by the overall probability of the evidence: P(H | E) = P(E | H) · P(H) / P(E). This process updates belief when new information arrives without discarding previous assumptions entirely.
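The update rule described above can be sketched in a few lines of Python. The scenario and all numbers below (a 1% base rate, a 95% true-positive rate, a 5% false-positive rate) are illustrative assumptions, not figures from the text:

```python
# A minimal sketch of one Bayesian update: posterior = likelihood * prior / evidence.
# All numbers are illustrative assumptions for a hypothetical diagnostic test.

def bayes_update(prior, likelihood, evidence):
    """Return P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Prior P(H): base rate of the condition in the population.
prior = 0.01
# Likelihood P(E | H): the test flags true cases 95% of the time.
likelihood = 0.95
# Evidence P(E): total probability of a positive test, summing true
# positives and false positives (5% false-positive rate among the rest).
evidence = likelihood * prior + 0.05 * (1 - prior)

posterior = bayes_update(prior, likelihood, evidence)
print(round(posterior, 3))  # → 0.161
```

Note how the posterior (about 16%) is far above the 1% prior but far below the test's 95% accuracy: the prior is updated by the evidence, not replaced by it, which is exactly the "without discarding previous assumptions" behavior described above.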
Historical Development and Evolution
Pierre-Simon Laplace introduced what is now called Bayes' theorem as Principle VI in his own work in the late eighteenth century. He used it to address problems in celestial mechanics, medical statistics, reliability, and jurisprudence. Early Bayesian inference, which used uniform priors following Laplace's principle of insufficient reason, was called inverse probability because it infers backwards from observations to parameters, or from effects to causes. After the 1920s, inverse probability was largely supplanted by a collection of methods that came to be called frequentist statistics.

In the twentieth century, Laplace's ideas were developed in two different directions, giving rise to objective and subjective currents in Bayesian practice. In the objective current, the analysis depends only on the model assumed, the data analyzed, and the method used to assign the prior; in the subjective current, the specification of the prior depends on degrees of belief, that is, on propositions one is prepared to act upon. A dramatic growth in research occurred in the 1980s, mostly attributed to the discovery of Markov chain Monte Carlo (MCMC) methods, which removed many computational barriers.