— Ch. 1 · Hybrid Cognitive Models —
Neuro-symbolic AI
In 1992, researchers began exploring how to merge fast intuition with slow, deliberate reasoning. Gary Marcus argued that rich cognitive models cannot be built without a triumvirate of hybrid architecture, rich prior knowledge, and sophisticated techniques for reasoning; without tools for symbol manipulation, he claimed, useful abstract knowledge remains out of reach. Daniel Kahneman, in his book Thinking, Fast and Slow, described human thought as two distinct systems: System 1 handles fast, reflexive pattern recognition, while System 2 manages planning and deduction through step-by-step logic. Deep learning maps naturally onto the first system, and symbolic reasoning onto the second. Leslie Valiant likewise argued that effective computational models of cognition demand this combination, and Angelo Dalli, Henry Kautz, Francesca Rossi, and Bart Selman have all advocated for such a synthesis to address the limitations inherent in using either approach alone.
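The two-system split can be sketched in miniature: a stand-in "System 1" classifier emits a symbol, and a "System 2" rule table reasons over it. Everything here (the perceive stub, the rule table, the day/night example) is illustrative and invented for this sketch, not drawn from any system the chapter names:

```python
# Minimal sketch of a neuro-symbolic pipeline: "System 1" perception
# produces symbols, "System 2" symbolic rules draw conclusions.
# All names and rules here are hypothetical illustrations.

def perceive(pixels):
    """Stand-in for a neural classifier mapping raw input to a symbol.

    A real System 1 component would be a trained network; here we
    fake it with a threshold on mean brightness.
    """
    return "bright" if sum(pixels) / len(pixels) > 0.5 else "dark"

# Symbolic knowledge base: condition -> conclusion. Unlike network
# weights, these rules are explicit and inspectable.
RULES = {
    "bright": "daytime",
    "dark": "nighttime",
}

def reason(symbol):
    """Stand-in for System 2: deterministic, step-by-step rule lookup."""
    return RULES[symbol]

def hybrid_pipeline(pixels):
    """Perception feeds symbols into reasoning: the hybrid loop."""
    return reason(perceive(pixels))

print(hybrid_pipeline([0.9, 0.8, 0.7]))  # -> daytime
print(hybrid_pipeline([0.1, 0.0, 0.2]))  # -> nighttime
```

The point of the split is that the rule table stays auditable even if the perception step is replaced by an opaque learned model.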
Architectural Taxonomies
Henry Kautz developed a taxonomy of integration methods for these hybrid systems. Symbolic Neural symbolic approaches treat words or subword tokens as both the input and the output of large language models such as BERT. Symbolic[Neural] techniques are exemplified by AlphaGo, where symbolic Monte Carlo tree search invokes a neural evaluation function. Neural | Symbolic architectures use a neural network to interpret perceptual data as symbols, which are then passed to logical reasoning; the Neuro-Symbolic Concept Learner demonstrates this pattern. A fourth method uses symbolic reasoning to generate training data that a deep learning model subsequently learns from. Logic Tensor Networks encode logical formulas directly within neural networks. Garcez, Lamb, and Gabbay conducted early work on connectionist modal logics. Sepp Hochreiter identified Graph Neural Networks as the predominant models of neural-symbolic computing, since they can describe, for example, molecular properties, social networks, or physical systems with particle-particle interactions. Bader and Hitzler presented a finer-grained categorization in 2005, considering, among other dimensions, whether the logic involved is propositional.
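The core idea behind Logic Tensor Networks, that logical connectives become differentiable operations on truth degrees in [0, 1], can be sketched without any deep-learning library. The connectives below use the product t-norm, one common choice; the predicate scores are made up for illustration, whereas a real LTN would obtain them from learnable neural predicates over tensors:

```python
# Sketch of differentiable logic in the Logic Tensor Networks style:
# truth values live in [0, 1] and connectives are smooth functions,
# so gradients can flow through logical formulas during training.
# Connectives here use the product t-norm (one of several choices).

def t_and(a, b):
    """Conjunction via the product t-norm."""
    return a * b

def t_or(a, b):
    """Disjunction via the probabilistic sum."""
    return a + b - a * b

def t_not(a):
    """Negation as complement."""
    return 1.0 - a

def implies(a, b):
    """Material implication: a -> b is (not a) or b."""
    return t_or(t_not(a), b)

# Hypothetical scores a neural predicate might assign to groundings:
smokes_alice, cancer_alice = 0.9, 0.7

# Degree to which "smokes(alice) -> cancer(alice)" holds:
print(round(implies(smokes_alice, cancer_alice), 3))  # -> 0.73
```

Because every connective is differentiable, the satisfaction degree of a whole formula can serve as a training signal, which is how logical knowledge constrains the network's weights.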