Sound: the story on HearLore
Sound
Sound cannot exist in the void of space, a fact that defies the common intuition that the universe is filled with the hum of distant stars. This fundamental limitation arises because sound is a mechanical wave that requires a physical medium to propagate, whether that medium is a solid, liquid, or gas. Without particles to collide and transfer energy, the vibration of a source simply has nowhere to go, rendering the vacuum of space perfectly silent to human ears. This absence of a transmission medium explains why astronauts must rely on radio waves, which are electromagnetic and do not require matter to travel, to communicate with one another. The inability of sound to cross the vacuum of space was a pivotal realization in the history of physics, distinguishing mechanical waves from electromagnetic radiation and reshaping our understanding of how energy moves through the cosmos.

Isaac Newton, who made the first analytical calculation of the speed of sound, initially held that the speed of sound in a substance equals the square root of the pressure acting on it divided by its density, a formula that proved incorrect because it assumed the compression process was isothermal rather than adiabatic. It was the French mathematician Pierre-Simon Laplace who corrected this error by introducing the adiabatic factor, gamma, into the equation, creating what is now known as the Newton-Laplace equation. This correction established that the speed of sound equals the square root of the ratio of the bulk modulus of the medium to its density, a relationship that holds across all types of matter. The speed of sound varies dramatically with the material it travels through: approximately 343 meters per second in air at sea level, about 1,480 meters per second in fresh water, and roughly 5,960 meters per second in steel.
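The Newton-Laplace relation, and the size of Laplace's correction, can be checked with a few lines of Python; this is a minimal sketch, where the function name and the gas constants below are standard textbook values chosen for illustration:

```python
import math

def speed_of_sound(bulk_modulus_pa, density_kg_m3):
    """Newton-Laplace equation: c = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# For an ideal gas the effective (adiabatic) bulk modulus is K = gamma * p.
gamma_air = 1.4      # adiabatic index of air
p_air = 101_325      # sea-level pressure, Pa
rho_air = 1.225      # density of air at sea level, kg/m^3

# Laplace's corrected value versus Newton's original isothermal formula c = sqrt(p / rho):
c_laplace = speed_of_sound(gamma_air * p_air, rho_air)  # ~340 m/s, close to the measured 343 m/s
c_newton = speed_of_sound(p_air, rho_air)               # ~288 m/s, about 15% too low

print(f"Laplace: {c_laplace:.0f} m/s, Newton: {c_newton:.0f} m/s")
```

The same `speed_of_sound` helper applies to liquids and solids once their bulk modulus and density are known, which is why the relationship holds across all types of matter.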
In the most extreme cases sound moves faster still, reaching roughly 12,000 meters per second in diamond, and theoretical work places the upper limit, approached in solid atomic hydrogen under enormous pressure, at about 36,000 meters per second, demonstrating that the density and elasticity of a material dictate the velocity of the wave. This variation in speed is not merely a theoretical curiosity; it has practical implications for everything from the design of aircraft to the detection of volcanic eruptions. When sound moves through a medium that is not uniform in its physical properties, it may be refracted, either dispersed or focused, altering its path and intensity. The viscosity of the medium also plays a critical role, determining the rate at which sound is attenuated, although for many media, such as air or water, attenuation due to viscosity is negligible. The motion of the medium itself can further influence the speed of sound, with wind increasing the propagation speed when sound and wind move in the same direction and decreasing it when they move in opposite directions. These complex interactions between density, pressure, temperature, and motion create a dynamic environment for sound waves, one far more intricate than the simple vibration of a tuning fork might suggest. Theoretical work suggesting that sound waves may carry an extremely small effective mass and be associated with a weak gravitational field adds another layer of complexity to our understanding of how sound interacts with the fabric of reality. This interplay between mechanical waves and the properties of matter underscores the importance of acoustics as a scientific discipline, bridging the gap between the abstract mathematics of wave propagation and the tangible experience of hearing.
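The effect of a moving medium can be sketched numerically. Assuming the simple model in which the wind's component along the propagation direction adds to the still-air speed, a hypothetical helper (the function name and the 10 m/s wind are illustrative) might look like:

```python
import math

def effective_sound_speed(c, wind_speed, angle_deg):
    """Ground-relative speed of sound when the medium itself moves.

    angle_deg is the angle between the wind and the direction of propagation:
    0 degrees is a pure tailwind (adds to c), 180 degrees a pure headwind.
    """
    return c + wind_speed * math.cos(math.radians(angle_deg))

c = 343.0  # m/s in still air at sea level
print(effective_sound_speed(c, 10.0, 0))    # tailwind: ~353 m/s
print(effective_sound_speed(c, 10.0, 180))  # headwind: ~333 m/s
print(effective_sound_speed(c, 10.0, 90))   # crosswind: ~343 m/s, no first-order change
```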
Common questions
Can sound travel through the void of space?
Sound cannot exist in the void of space because it requires a physical medium to propagate. This fundamental limitation arises because sound is a mechanical wave that needs particles to collide and transfer energy, rendering the vacuum of space perfectly silent to human ears.
Who corrected Isaac Newton's formula for the speed of sound?
The French mathematician Pierre-Simon Laplace corrected Isaac Newton's formula by introducing the adiabatic factor, gamma, into the equation. This correction created what is now known as the Newton-Laplace equation and established that the speed of sound is proportional to the square root of the ratio of the bulk modulus of the medium to its density.
What is the frequency range of human hearing?
The human ear is sensitive to frequencies ranging from 20 Hz to 20 kHz, a range that defines the boundaries of what we can hear and shapes our perception of the world. Below 20 Hz, sound waves are heard as discrete stuttering sounds or rumbles, while above 20 kHz, sound waves are classified as ultrasound.
How does the speed of sound vary across different materials?
The speed of sound varies dramatically depending on the material it travels through, moving at approximately 343 meters per second in air at sea level. It rises to about 1,480 meters per second in fresh water and roughly 5,960 meters per second in steel, with even higher speeds in stiff solids such as diamond, at roughly 12,000 meters per second, and a theoretical maximum of about 36,000 meters per second in solid atomic hydrogen.
What is the difference between longitudinal and transverse sound waves?
Sound waves are transmitted through fluids as longitudinal waves, also called compression waves, which consist of alternating pressure deviations from the equilibrium pressure. Through solids, however, sound can be transmitted as both longitudinal waves and transverse waves, with transverse waves being waves of alternating shear stress perpendicular to the direction of propagation.
What is the scientific study of mechanical waves and sound called?
Acoustics is the interdisciplinary scientific study of mechanical waves, vibrations, sound, ultrasound, and infrasound in gaseous, liquid, or solid media. A scientist who works in the field of acoustics is called an acoustician, while an individual specializing in acoustical engineering may be referred to as an acoustical engineer.
The physical nature of sound is defined by the oscillation of pressure, stress, particle displacement, and particle velocity within a medium, creating a wave that propagates energy without transporting matter. When a sound source, such as the vibrating diaphragm of a loudspeaker, moves, it creates mechanical disturbances that travel away from the source at the local speed of sound. At a fixed distance from the source, the pressure, velocity, and displacement of the medium's particles vary in time, while at an instant in time, these properties vary spatially. The particles of the medium do not travel with the sound wave; instead, the disturbance and its mechanical energy propagate through the medium, a concept that is intuitively obvious for solids but equally applies to liquids and gases. Sound waves are transmitted through fluids, such as gases, plasmas, and liquids, as longitudinal waves, also called compression waves, which consist of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction. Through solids, however, sound can be transmitted as both longitudinal waves and transverse waves, with transverse waves being waves of alternating shear stress perpendicular to the direction of propagation. Unlike longitudinal sound waves, transverse sound waves have the property of polarization, a characteristic that allows them to be filtered or oriented in specific directions. The energy carried by a periodic sound wave alternates between the potential energy of the extra compression in the case of longitudinal waves or lateral displacement strain in the case of transverse waves, and the kinetic energy of the particles' displacement velocity in the medium. This exchange of energy between potential and kinetic forms is what allows sound to travel over distances, sustaining the wave as it moves through the medium. 
Although sound transmission involves many physical processes, the signal received at a point, such as a microphone or the ear, can be fully described as a time-varying pressure. This pressure-versus-time waveform provides a complete representation of any sound or audio signal detected at that location, allowing scientists and engineers to analyze and manipulate sound with precision. Sound waves are often simplified as sinusoidal plane waves, which are characterized by generic properties such as frequency, wavelength, amplitude, speed of sound, and direction. Sometimes speed and direction are combined as a velocity vector, while wave number and direction are combined as a wave vector, creating a mathematical framework for understanding the behavior of sound. To analyze audio, a complicated waveform can be represented as a linear combination of sinusoidal components of different frequencies, amplitudes, and phases, a process that forms the basis of modern audio signal processing. This mathematical representation allows complex sounds, such as the timbre of a musical instrument or the texture of a busy cafe, to be analyzed by breaking them down into their constituent frequencies and phases. The ability to decompose sound into its fundamental components has revolutionized the field of acoustics, enabling the development of technologies such as noise cancellation, audio compression, and medical imaging. The study of sound waves has also led to the discovery of phenomena such as the Prandtl-Glauert singularity, in which a white halo of condensed water droplets, thought to result from a drop in air pressure around an aircraft approaching the speed of sound, forms around the aircraft. This visual phenomenon, captured in photographs of U.S. Navy F/A-18 jets, illustrates the complex interaction between sound waves and the surrounding medium, highlighting the power of sound to affect the physical world.
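The decomposition of a waveform into sinusoidal components can be made concrete with a small discrete Fourier transform. This is a minimal pure-Python sketch; the 3 Hz and 7 Hz test signal and the 0.1 magnitude threshold are illustrative choices, not anything from the text:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive discrete Fourier transform; returns |X[k]| / n for each bin k."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A "complicated" waveform: a 3 Hz sinusoid plus a quieter 7 Hz one,
# sampled at 64 Hz for exactly one second (so bin k corresponds to k Hz).
n, fs = 64, 64
signal = [math.sin(2 * math.pi * 3 * t / fs) + 0.5 * math.sin(2 * math.pi * 7 * t / fs)
          for t in range(n)]

mags = dft_magnitudes(signal)
peaks = [k for k in range(n // 2) if mags[k] > 0.1]
print(peaks)  # recovers the two component frequencies: [3, 7]
```

In practice a fast Fourier transform replaces this O(n²) loop, but the principle is the same: the complicated waveform is a linear combination of sinusoids, and the transform reads off their frequencies and amplitudes.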
The Human Ear's Range
The human ear is sensitive to frequencies ranging from 20 Hz to 20 kHz, a range that defines the boundaries of what we can hear and shapes our perception of the world. Below 20 Hz, discrete pulses are heard as separate stuttering sounds, while continuous sounds such as sine waves are heard as a fast 'wow-wow-wow', a phenomenon often experienced as a rumble or vibration rather than a clear pitch. Above 20 kHz, sound waves are classified as ultrasound, which is no different from audible sound in its physical properties but cannot be heard by humans. Ultrasound devices operate with frequencies from 20 kHz up to several gigahertz and are commonly used for medical diagnostics and treatment, allowing doctors to visualize the inside of the body without invasive surgery. The upper limit of human hearing decreases with age, a process known as presbycusis, which gradually reduces the ability to hear high-frequency sounds. Other species have different ranges of hearing: dogs can perceive vibrations higher than 20 kHz, while whales and elephants can detect infrasound, sound waves with frequencies lower than 20 Hz. Infrasound is too low for humans to hear as a pitch, but such sounds can be heard as discrete pulses, like the 'popping' of an idling motorcycle, and can be used to detect volcanic eruptions or to communicate over long distances. The ability to detect sound is a critical survival mechanism for many species, used for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produce, and are characterized by, their unique sounds, creating a soundscape that is perceived by humans and other animals. Many species, such as frogs, birds, and marine and terrestrial mammals, have developed special organs to produce sound, and in some species these produce song and speech.
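The 20 Hz and 20 kHz boundaries amount to a trivial three-way classification; a minimal sketch, with the function name and the example frequencies chosen for illustration:

```python
def classify_frequency(hz):
    """Rough bands relative to the nominal 20 Hz to 20 kHz range of human hearing."""
    if hz < 20:
        return "infrasound"
    if hz <= 20_000:
        return "audible"
    return "ultrasound"

print(classify_frequency(5))        # infrasound: felt by whales and elephants
print(classify_frequency(440))      # audible: concert pitch A4
print(classify_frequency(40_000))   # ultrasound: within the range of medical imaging devices
```

Note that the boundaries are nominal: presbycusis lowers the upper limit with age, so a fixed 20 kHz cutoff describes a young, healthy ear rather than any individual listener.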
Humans have developed culture and technology, such as music, the telephone, and radio, that allow them to generate, record, transmit, and broadcast sound, transforming the way we interact with the world. The field of psychoacoustics is dedicated to the study of how sound is perceived by the brain, exploring the relationship between the physical properties of sound and the subjective experience of hearing. Webster's dictionary defined sound as both the sensation of hearing and the vibrational energy that occasions such a sensation, highlighting the dual nature of sound as both a physical phenomenon and a psychological experience. The correct response to the question, 'if a tree falls in a forest and no one is around to hear it, does it make a sound?' is 'yes' under the physical definition and 'no' under the psychophysical definition, a paradox that has sparked philosophical debate for centuries. The human ear does not have a flat spectral response, so sound pressures are often frequency weighted so that measured levels match perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes, with A-weighting attempting to match the response of the human ear to noise and C-weighting used to measure peak levels. The sound pressure level, or SPL, is expressed on a logarithmic decibel scale, where the reference sound pressure is 20 μPa in air and 1 μPa in water, allowing sounds with a wide range of amplitudes to be measured. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level, emphasizing the importance of standardization in the field of acoustics. The ability to measure and manipulate sound has led to technologies such as noise control, audio signal processing, and architectural acoustics, which are used to improve the quality of life in modern society.
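The SPL scale lends itself to a short numerical sketch. Only the 20 μPa and 1 μPa references come from the text; the helper name and the example pressures are illustrative:

```python
import math

P_REF_AIR = 20e-6    # reference sound pressure in air: 20 micropascals
P_REF_WATER = 1e-6   # reference sound pressure in water: 1 micropascal

def spl_db(p_rms, p_ref=P_REF_AIR):
    """Sound pressure level in decibels: SPL = 20 * log10(p / p_ref)."""
    return 20 * math.log10(p_rms / p_ref)

# The logarithmic scale compresses an enormous range of pressures:
print(f"{spl_db(20e-6):.0f} dB")  # the reference itself: 0 dB, threshold of hearing in air
print(f"{spl_db(2e-2):.0f} dB")   # a thousand times that pressure: 60 dB
print(f"{spl_db(20.0):.0f} dB")   # a million times the reference: 120 dB
```

The same pressure yields a different decibel value in water than in air because the reference differs, which is exactly why a decibel figure is meaningless without its stated reference.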
The study of sound has also led to the discovery of phenomena such as the Doppler effect, where the frequency of a sound wave changes due to the relative motion of the source and the observer, and the phenomenon of resonance, where a system oscillates at a greater amplitude at certain frequencies. These phenomena have applications in fields ranging from music to medicine, from engineering to environmental science, demonstrating the versatility and importance of sound in the modern world.
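The Doppler effect has a standard formula for motion along the line between source and observer. This is a minimal sketch; the sign convention, function name, and siren example are assumptions chosen for illustration:

```python
def doppler_frequency(f_source, c=343.0, v_source=0.0, v_observer=0.0):
    """Observed frequency for motion along the source-observer line.

    Sign convention used in this sketch: positive v_source means the source
    moves toward the observer; positive v_observer means the observer moves
    toward the source. c is the speed of sound in the medium (m/s).
    """
    return f_source * (c + v_observer) / (c - v_source)

# A 700 Hz siren approaching a stationary listener at 30 m/s sounds higher...
print(round(doppler_frequency(700, v_source=30.0)))   # 767
# ...and lower once it has passed and is receding at 30 m/s.
print(round(doppler_frequency(700, v_source=-30.0)))  # 644
```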
The Perception of Sound
The perception of sound is a complex process that involves the brain's interpretation of the physical properties of sound waves, creating a subjective experience that is unique to each individual. Pitch is perceived as how 'low' or 'high' a sound is and represents the cyclic, repetitive nature of the vibrations that make up sound, with the fundamental harmonic determining the pitch of simple sounds. In the case of complex sounds, pitch perception can vary, with individuals identifying different pitches for the same sound based on their personal experience of particular sound patterns. Selection of a particular pitch is determined by pre-conscious examination of vibrations, including their frequencies and the balance between them, with specific attention given to recognizing potential harmonics. Duration is perceived as how 'long' or 'short' a sound is and relates to onset and offset signals created by nerve responses to sounds, with the duration of a sound usually lasting from the time the sound is first noticed until the sound is identified as having changed or ceased. Sometimes this is not directly related to the physical duration of a sound, as gapped sounds can sound as if they are continuous because the offset messages are missed owing to disruptions from noises in the same general bandwidth. Loudness is perceived as how 'loud' or 'soft' a sound is, and reflects the overall pattern of auditory-nerve activity produced by a sound, with louder sounds creating greater displacement of the basilar membrane, which stimulates more auditory-nerve fibres and results in a stronger neural representation of loudness. Perceived loudness also depends on how sound energy is distributed over time, with temporal summation operating over a window of roughly 200 ms, beyond which increasing the length of the sound no longer increases its perceived loudness. 
The spectral complexity of a sound can also influence loudness perception, with complex tones often judged as louder than simple tones even when matched for physical amplitude. Timbre is perceived as the quality of different sounds, such as the thud of a fallen rock, the whir of a drill, the tone of a musical instrument, or the quality of a voice, and represents the pre-conscious allocation of a sonic identity to a sound. This identity is based on information gained from frequency transients, noisiness, unsteadiness, perceived pitch, and the spread and intensity of overtones in the sound over an extended time frame, with the way a sound changes over time providing most of the information for timbre identification. For example, even though a small section of the waveform from a clarinet and a piano can look very similar, differences in how the two sounds change over time are evident in both loudness and harmonic content, along with subtler cues such as the air hiss of the clarinet and the hammer strike of the piano. Texture relates to the number of sound sources and the interaction between them; the word texture, in this context, refers to the cognitive separation of auditory objects. In music, texture is often described as the difference between unison, polyphony, and homophony, but it can also describe a busy cafe, a sound that might be called a cacophony. Spatial location represents the cognitive placement of a sound in an environmental context, including its position on the horizontal and vertical planes, the distance from the sound source, and the characteristics of the sonic environment. In a thick texture, it is possible to identify multiple sound sources using a combination of spatial location and timbre identification, allowing the brain to create a three-dimensional map of the acoustic environment.
There are, historically, six experimentally separable ways in which sound waves are analyzed: pitch, duration, loudness, timbre, sonic texture, and spatial location, with some of these terms having a standardized definition in the ANSI Acoustical Terminology. More recent approaches have also considered temporal envelope and temporal fine structure as perceptually relevant analyses, expanding our understanding of how the brain processes sound. The study of sound perception has led to the development of technologies such as audio compression, noise cancellation, and spatial audio, which are used to improve the quality of life in modern society. The field of psychoacoustics continues to explore the relationship between the physical properties of sound and the subjective experience of hearing, with researchers seeking to understand how the brain creates a coherent auditory world from the complex array of sound waves that reach the ear.
The Science of Acoustics
Acoustics is the interdisciplinary scientific study of mechanical waves, vibrations, sound, ultrasound, and infrasound in gaseous, liquid, or solid media; a scientist who works in the field of acoustics is called an acoustician. An individual specializing in acoustical engineering may be referred to as an acoustical engineer, while an audio engineer is concerned with the recording, manipulation, mixing, and reproduction of sound. Applications of acoustics are found in many areas of modern society, with subdisciplines including aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electroacoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics, and vibration. The study of sound has led to technologies such as sonar, which uses sound waves to detect objects underwater, and medical ultrasound, which is commonly used for diagnostics and treatment. The field also includes the study of noise, a term often used for an unwanted sound; in science and engineering, noise is an undesirable component that obscures a wanted signal, yet in sound perception noise can help identify the source of a sound and is an important component of timbre perception. The soundscape is the component of the acoustic environment that can be perceived by humans, the acoustic environment being the combination of all sounds within a given area, whether audible to humans or not, as modified by the environment and understood by people in the context of their surroundings.

The field of acoustics continues to evolve, with researchers seeking to understand how sound interacts with matter, how it is perceived by the brain, and how it can be used to improve the quality of life in modern society, bridging the gap between the abstract mathematics of wave propagation and the tangible experience of hearing.