— Ch. 1 · Foundations And Definitions —
Unsupervised learning.
In 1974, W. A. Little proposed the Ising model of magnetism as a model of cognition, early work that laid the groundwork for understanding how machines might learn without explicit labels. Unsupervised learning is a framework in which algorithms extract patterns from data that lacks human annotation. Unlike supervised methods, which rely on manually labeled datasets such as ImageNet1000, unsupervised approaches can harvest data cheaply from sources such as Common Crawl; such corpora are often massive and require only light filtering before analysis begins. Some researchers classify self-supervised learning as a subset of this broader paradigm. Other points along the supervision spectrum include weak supervision and semi-supervision, in which only a small portion of the available data is labeled to guide the process.
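To make the definition concrete, here is a minimal sketch of extracting structure from unlabeled data, using k-means clustering written in plain NumPy. The synthetic two-blob data, the choice of k = 2, and k-means itself are illustrative assumptions, not something the text prescribes.

```python
# A minimal sketch of learning without labels (assumes NumPy is installed).
# The data and k = 2 are hypothetical; k-means stands in for "pattern extraction".
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: two Gaussian blobs, but the algorithm never sees
# which blob any point came from.
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

def kmeans(X, k=2, n_iter=50):
    # Initialize centroids from k distinct random data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign every point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(X)
print("Discovered cluster centers:")
print(centroids)
```

Run as-is, the script recovers centers near (0, 0) and (3, 3) without ever being told that two groups exist, which is the essence of the unsupervised setting.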
Historical Evolution Of Algorithms
John Hopfield described an Ising-model variant now known as the Hopfield network in 1982. Kunihiko Fukushima had introduced the neocognitron in 1980, later recognized as an early convolutional neural network. Hinton and Sejnowski described the Boltzmann machine, with probabilistic neurons, in 1983, building on the earlier spin-glass work of Sherrington and Kirkpatrick. Paul Smolensky published Harmony Theory in 1986, describing what is now called the restricted Boltzmann machine (RBM), which shares the Boltzmann energy function. Dayan and Hinton introduced the Helmholtz machine in 1995, and in the same year Hochreiter and Schmidhuber proposed the long short-term memory (LSTM) unit, later central to sequence and language modeling. Kingma, Rezende, and colleagues introduced variational autoencoders in 2013, framed as Bayesian probabilistic graphical networks with neural components. Modern large-scale unsupervised learning instead trains general-purpose neural architectures by gradient descent, adapted to unsupervised objectives through the design of the training procedure.
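Since several of the models above (the Hopfield network, the Boltzmann machine, and Smolensky's RBM) are organized around a Boltzmann-style energy function, a standard formulation may help; the notation below (visible units v, hidden units h, weight matrix W, biases a and b, partition function Z) is the conventional RBM notation, not notation taken from this text.

```latex
E(v, h) = -a^{\top} v - b^{\top} h - v^{\top} W h,
\qquad
p(v, h) = \frac{e^{-E(v, h)}}{Z},
\qquad
Z = \sum_{v, h} e^{-E(v, h)}
```

Lower-energy configurations receive higher probability, so learning amounts to shaping the energy landscape so that observed data sits in its valleys.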