In 1957, a man named Max Mathews sat before a massive mainframe computer at Bell Labs and made it produce sound from code alone. He was not playing a violin or striking a piano key, yet he was composing music. This moment marked the birth of MUSIC-N, the first family of digital sound synthesis programs, and it proved that a machine could generate sounds without a single physical instrument being touched. Mathews introduced a table-lookup oscillator in his second iteration, MUSIC II, and later the unit generator in MUSIC III, a fundamental building block that reappears in virtually all subsequent music programming software. Because unit generators could be patched together freely, an unlimited number of sound synthesis structures could be created within the computer, effectively turning abstract code into tangible audio waves.
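The idea behind MUSIC II's table-lookup oscillator is simple enough to sketch in a few lines of modern Python. This is an illustrative reconstruction of the general technique, not Mathews's original code; the names and constants are assumptions.

```python
import math

TABLE_SIZE = 1024
SAMPLE_RATE = 44100

# Precompute one cycle of a sine wave: the "stored function" the
# oscillator reads from instead of recomputing sin() every sample.
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def table_lookup_osc(freq_hz, num_samples, table=SINE_TABLE):
    """Generate samples by stepping a phase index through the wavetable."""
    phase = 0.0
    # How many table entries to advance per output sample.
    increment = freq_hz * len(table) / SAMPLE_RATE
    out = []
    for _ in range(num_samples):
        # Truncating lookup, in the spirit of early MUSIC-N oscillators.
        out.append(table[int(phase) % len(table)])
        phase += increment
    return out

samples = table_lookup_osc(440.0, 1000)  # about 23 ms of a 440 Hz tone
```

Because the table can hold any single-cycle waveform, not just a sine, one oscillator design yields an open-ended palette of timbres, which is exactly the economy that made the unit-generator idea so powerful.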
The Rhythm Revolution
The 1950s saw electric rhythm machines begin to infiltrate popular music, offering artists a way to create percussion sounds with unprecedented efficiency. At the end of the 1970s, guitarist Roger Linn developed the LM-1 Drum Computer, a device that achieved realistic-sounding drums by playing back digital samples of real percussion at sample rates up to 28 kHz. The machine featured twelve drum sounds, including kick drum, snare, hi-hat, cabasa, tambourine, two tom toms, two congas, cowbell, clave, and handclaps, all of which could be programmed individually to mimic the nuance of a live drummer. Artists such as Peter Gabriel, Stevie Wonder, Michael Jackson, and Madonna adopted this technology, while earlier figures like J. J. Cale, Sly Stone, Phil Collins, Marvin Gaye, and Prince had used predecessors like the Side Man, Ace Tone's Rhythm Ace, Korg's Doncamatic, and Maestro's Rhythm King. These machines took their place in a longer lineage of electric and electronic instruments, including the Theremin, Hammond organ, electric guitar, synthesizer, and digital sampler, all of which allowed creators to produce sounds without the need for live musicians.
The Language of Sound
Music coding languages serve as the bridge between human intent and electronic execution, each with its own level of difficulty and function. The language known as Alda was designed, in its creators' words, for "musicians who don't know how to program, and programmers who don't know how to music," and its website provides a tutorial, cheat sheet, and community for newcomers. In contrast, the LC computer music programming language is a more complex system meant for experienced coders who require granular control. Unlike existing unit-generator languages, LC provides objects, library functions, and methods that can directly represent microsounds and the manipulations involved in microsound synthesis. Languages like these let a musician build a sound or patch from scratch, or with the aid of a synthesizer or sampler, arranging a song through the precise manipulation of digital data rather than physical vibration.
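Microsound synthesis of the kind LC targets works by assembling many very short "grains" of sound, each a few milliseconds long, into a larger texture. The sketch below is a generic illustration of that technique in Python, not LC's actual API; all names and parameter ranges here are assumptions.

```python
import math
import random

SAMPLE_RATE = 44100

def hann(n, length):
    """Hann window: a smooth envelope so each grain fades in and out."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (length - 1))

def make_grain(freq_hz, dur_ms):
    """A single windowed sine 'microsound', a few milliseconds long."""
    length = int(SAMPLE_RATE * dur_ms / 1000)
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE) * hann(n, length)
            for n in range(length)]

def granular_cloud(num_grains, total_ms, seed=0):
    """Overlap-add grains at random onsets to form a sound texture."""
    rng = random.Random(seed)
    out = [0.0] * int(SAMPLE_RATE * total_ms / 1000)
    for _ in range(num_grains):
        grain = make_grain(rng.uniform(200, 2000), rng.uniform(5, 30))
        onset = rng.randrange(len(out) - len(grain))
        for i, s in enumerate(grain):
            out[onset + i] += s  # mix the grain into the output buffer
    return out

cloud = granular_cloud(50, 500)  # a half-second cloud of 50 grains
```

The point of a language like LC is that objects for grains, onsets, and clouds are first-class, so structures like this can be expressed and manipulated directly rather than rebuilt by hand each time.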