— Ch. 1 · Origins And Foundational Theory —
Affective Computing
Rosalind Picard's 1995 MIT technical report defined the modern field of affective computing. Her work established the core goal of giving machines emotional intelligence: a system should interpret human emotional states, adapt its behavior accordingly, and provide an appropriate response to the emotion it detects. The approach aims to simulate empathy within computational devices. Philosophical inquiry into emotion gave the field distant roots, but the discipline itself emerged from computer science rather than pure psychology. Recent experimental research extends the output side of this loop, showing that subtle haptic feedback can shape reward learning and that mobile vibrations, as emotion-laden outputs, influence consumer choice.
Multimodal Detection Technologies
A video camera captures facial expressions, body posture, and gestures; microphones record speech to detect changes in pitch or volume; physiological sensors directly measure signals such as skin temperature and galvanic skin response. These sensors are passive: they gather raw data analogous to the cues humans use to perceive others, leaving interpretation to later processing stages.

The vocal channel carries distinctive signatures. Speech produced in fear becomes fast, loud, and precisely enunciated, with a higher pitch range, while tiredness produces slow, low-pitched, and slurred speech. Research from 2003 and 2006 reports average accuracies of 70 to 80 percent for such systems, which outperforms average human accuracy of roughly 60 percent. Facial detection, by contrast, works well with frontal views but degrades once the head rotates more than 20 degrees.

Physiological signals are read in their own terms. Blood volume pulse graphs show the cardiac cycle, with heart rate increasing during fear or startle events, and infra-red light measures reflected signals to track blood flow through the extremities.
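To make the physiological channel concrete, here is a minimal sketch of the peak-detection idea behind reading heart rate off a blood volume pulse trace. The 64 Hz sampling rate, the synthetic waveform, and the detector thresholds are illustrative assumptions, not parameters from the studies cited above.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic blood volume pulse (BVP) trace. Assumption: a 64 Hz sensor
# and a resting heart rate near 72 bpm (1.2 Hz cardiac component).
FS = 64  # sampling rate in Hz (hypothetical sensor)
t = np.arange(0, 30, 1 / FS)                 # 30 seconds of signal
bvp = np.sin(2 * np.pi * 1.2 * t)            # cardiac cycle component
bvp += 0.3 * np.sin(2 * np.pi * 0.25 * t)    # slow respiratory drift
bvp += 0.05 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

# Each cardiac cycle produces one systolic peak in the BVP waveform, so
# beats per minute follows from the spacing between detected peaks.
# A refractory distance of 0.33 s (max ~180 bpm) skips noise peaks.
peaks, _ = find_peaks(bvp, distance=int(0.33 * FS), prominence=0.5)
ibi = np.diff(peaks) / FS          # inter-beat intervals in seconds
heart_rate = 60.0 / ibi.mean()     # average beats per minute

print(f"detected beats: {len(peaks)}, estimated HR: {heart_rate:.1f} bpm")
# A rise in this estimate over a short window is the kind of cue an
# affective system would read as arousal (e.g., fear or startle).
```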
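The vocal cues work the same way in miniature. The sketch below estimates the two features the text names, loudness (RMS energy) and pitch (via autocorrelation), on synthetic tones standing in for fearful and tired speech. The frame length, pitch range, and toy signals are assumptions for illustration; a real recognizer would add features such as speaking rate and spectral shape, with a trained classifier on top.

```python
import numpy as np

FS = 16_000  # assumed audio sampling rate in Hz

def pitch_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency of one frame via autocorrelation."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # plausible pitch-period lags
    lag = lo + np.argmax(corr[lo:hi])        # strongest periodicity in range
    return fs / lag

def prosody(signal, fs=FS, frame_ms=40):
    """Per-frame loudness (RMS) and pitch: the cues described in the text."""
    n = int(fs * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n, n)]
    rms = [np.sqrt(np.mean(f ** 2)) for f in frames]
    f0 = [pitch_autocorr(f, fs) for f in frames]
    return float(np.mean(rms)), float(np.mean(f0))

# Two synthetic "utterances" standing in for recorded speech:
t = np.arange(0, 1.0, 1 / FS)
fearful = 0.9 * np.sin(2 * np.pi * 260 * t)  # loud, high-pitched
tired = 0.3 * np.sin(2 * np.pi * 110 * t)    # quiet, low-pitched

for label, sig in [("fearful", fearful), ("tired", tired)]:
    loudness, pitch = prosody(sig)
    print(f"{label}: RMS={loudness:.2f}, mean F0={pitch:.0f} Hz")
```

On these toy inputs the fearful signal comes out louder and roughly an octave higher than the tired one, which is exactly the separation a speech-based detector exploits.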