— Ch. 1 · Foundations Of Planning Theory —
Automated planning and scheduling.
~4 min read · Ch. 1 of 7
The year 1971 marked a turning point when the STRIPS planning system introduced plans as sequences of named actions for a robot to execute. This early framework defined the simplest planning setting, known as the classical planning problem: a unique, fully known initial state and durationless actions applied one at a time. A single agent operated in a deterministic environment where every outcome was predictable before execution began, so the goal was to synthesize a plan guaranteed to transform the given initial state into a desired goal state. Researchers later distinguished classes of planning problems along dimensions such as determinism, observability, and the number of agents; these dimensions determine whether a solution can be computed offline or must be revised online during dynamic operation.
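The classical setting described above can be sketched in a few lines: states as sets of facts, actions with preconditions, add lists, and delete lists (as in the original STRIPS formulation), and a forward breadth-first search from the unique initial state. The two-action domain below is invented purely for illustration.

```python
# Minimal STRIPS-style classical planner sketch: deterministic,
# durationless actions applied one at a time from a known initial state.
from collections import deque

def plan(initial, goal, actions):
    """Breadth-first forward search; returns a sequence of action names."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # every goal fact holds
            return steps
        for name, pre, add, delete in actions:
            if pre <= state:                   # preconditions satisfied
                nxt = (state - delete) | add   # apply delete/add lists
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # no plan exists

# Hypothetical two-action domain: pick up a block, then stack it.
actions = [
    ("pickup", frozenset({"on_table", "hand_empty"}),
               frozenset({"holding"}), frozenset({"on_table", "hand_empty"})),
    ("stack",  frozenset({"holding"}),
               frozenset({"on_b", "hand_empty"}), frozenset({"holding"})),
]
print(plan({"on_table", "hand_empty"}, frozenset({"on_b"}), actions))
# → ['pickup', 'stack']
```

Because the environment is deterministic and fully observable, the whole search happens offline; the returned sequence can be executed without any revision.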
Classical And Temporal Models
Discrete-time Markov decision processes (MDPs) extend planning with nondeterministic actions whose outcomes occur with known probabilities; under full observability, a single agent chooses actions to maximize a reward function. When full observability gives way to partial observability, the model becomes a partially observable Markov decision process (POMDP). Temporal planning adds complexity through actions with measurable durations that may overlap in time, so a state must record the current absolute time and the progress of each active action. Allowing rational- or real-valued time yields an infinite state space, unlike classical planning or integer-time models. The Simple Temporal Network with Uncertainty (STNU) frames scheduling as a problem involving controllable actions alongside uncertain events; such a network is dynamically controllable when a temporal strategy exists that activates actions reactively, as observations arrive, while still satisfying all constraints.