Waves & Frequencies
TL;DR
Drop a pebble in a pond and ripples move outward. Pluck a string and sound travels to your ear. Flip on a radio transmitter and electromagnetic energy spreads across the sky. These all look different — water, air, empty space — but they are all the same kind of phenomenon: a wave. A wave is what you get when a disturbance in some medium (or the electromagnetic field) propagates through space and time, carrying energy but not carrying the medium itself along with it. Ocean water does not travel from the middle of the Atlantic to the shore; the water bobs up and down while the wave moves.
Waves are everywhere because the same piece of mathematics — the wave equation — shows up in almost every branch of physics. Solving that equation in water, in air, in a guitar string, in a transmission line, in a laser cavity, or in the electromagnetic field of empty space gives different kinds of waves with different speeds and different behaviours, but the toolkit is the same: decompose complicated waves into sums of simple sinusoids (Fourier), predict where the waves reinforce and cancel (interference), find the frequencies at which systems ring loudest (resonance), and understand how waves at different frequencies travel at slightly different speeds (dispersion).
This handbook walks the toolkit. The wave equation — the single PDE that describes every wave. Superposition, interference, and standing waves — what happens when two waves meet. Fourier decomposition — any signal is a sum of pure tones. The electromagnetic spectrum — the same wave equation from radio to gamma rays. Resonance and Q — why some systems ring loudest at specific frequencies. Optics and diffraction — what light does when it meets edges, slits, and small objects. Acoustics — longitudinal pressure waves in air and other fluids. Dispersion — why the speed of a wave can depend on its frequency and what that causes.
You will be able to
- Write the wave equation, name its two key parameters (speed, dispersion relation), and say what determines each in a given medium.
- Convert between frequency, wavelength, wavenumber, period, and photon energy for any region of the EM spectrum without tables.
- Explain resonance, the quality factor Q, and why every amplifier, every instrument, every laser works the way it does.
The Map
- You will be able to
- The Map
- Station 1 — The wave equation
- Station 2 — Superposition, interference, and standing waves
- Station 3 — Fourier decomposition
- Station 4 — The electromagnetic spectrum
- Station 5 — Resonance, damping, and the quality factor Q
- Station 6 — Optics and diffraction
- Station 7 — Acoustics: longitudinal pressure waves
- Station 8 — Dispersion: when c depends on frequency
- How the stations connect
- Standards & Specs
- Test yourself
Read the map bottom-up once (all waves are solutions of the wave equation), then top-down for phenomenology (specific waves differ in their medium, their dispersion relation, and how they interact with matter). The Physics & Energy handbook provides the Maxwell and Schrödinger background; this handbook zooms in on the wave phenomena those theories describe.
Station 1 — The wave equation
The reason waves are a unified subject rather than a dozen separate subjects is that a remarkably wide range of physical situations satisfy the same differential equation. If a quantity u(x, t) (displacement of a string, pressure in air, electric field in empty space, voltage on a wire) satisfies ∂²u/∂t² = c² · ∂²u/∂x², we say it obeys the wave equation, and its solutions will always include travelling-wave forms f(x − ct) and f(x + ct) — shapes that propagate in time without changing form, at a characteristic speed c determined by the medium.
The constant c is different in every medium. In a stretched string it is √(T/μ) — the square root of tension over linear mass density. In air it is √(γP/ρ) — related to pressure and density. In empty space for electromagnetic waves it is 1/√(μ₀ε₀), a combination of the permittivity and permeability of the vacuum, which happens to equal 299,792,458 m/s. The wave equation is what makes a medium a wave-supporting medium; anything that does not satisfy it (a thick viscous fluid, a dissipative conductor) either does not support waves at all or supports them only weakly.
Once you believe the wave equation, everything else in this handbook is consequences. Fourier says the general solution is a sum of sinusoids. Boundary conditions select which sinusoids are allowed (standing waves). Two superposed waves can reinforce or cancel (interference). And if c in the equation depends on frequency, different components travel at different speeds (dispersion).
Push on a string and a disturbance travels along it; compress a volume of air and a sound pulse propagates; shake an electron and EM radiation streams out. The one equation describing all of these (and countless more) is the wave equation:
Classical wave equation (non-dispersive, homogeneous medium):
∂² u / ∂ t² = c² · ∇² u
u(x, y, z, t) the disturbance (displacement, pressure, field)
c wave speed in the medium
∇² Laplacian (spatial second derivative)
General plane-wave solution in 1D:
u(x, t) = f(x − c t) + g(x + c t)
Monochromatic plane wave (the Fourier building block):
u(x, t) = A · cos( k x − ω t + φ )
k = 2π / λ (angular wavenumber)
ω = 2π f (angular frequency)
c = ω / k (phase speed)
λ · f = c (the schoolbook relation)
- Mechanical wave speeds come from medium properties: on a string, c = √(T/μ) (tension over mass per length); in a solid bar, c = √(E/ρ) (Young's modulus over density); in air at 20 °C, sound speed ≈ 343 m/s; in water ≈ 1482 m/s; in steel ≈ 5100 m/s; longitudinal P-waves in Earth's mantle ~7–14 km/s (how seismology tells us what's down there).
- EM wave speed in vacuum is exactly c = 299 792 458 m/s. In a medium with relative permittivity ε_r and permeability μ_r, the phase speed is v = c/√(ε_r μ_r). Most glasses have ε_r ≈ 2.25, so v ≈ 2 × 10⁸ m/s — visible light slows by ~1/3 when it enters window glass, which is why lenses refract.
- Dispersion relation ω(k) describes how phase speed depends on frequency. Non-dispersive waves (ideal strings, EM in vacuum, sound in air under most conditions) have ω = c k, a straight line. Dispersive media (ocean waves, EM in glass, electrons in a crystal) have a curved ω(k), and different frequencies travel at different speeds — which is why a prism spreads white light into colours.
- Wave packet group velocity is v_g = dω/dk, different from phase velocity v_p = ω/k in dispersive media. A pulse travels at v_g; the carrier oscillations ride at v_p. In a laser, the pulse (information) moves at v_g; in a fibre, v_g determines how a bit spreads out over kilometres.
The model you want: a wave is a solution of a wave equation, characterized by frequency, wavelength, speed, and dispersion relation. Everything else — reflection, refraction, diffraction, interference — is boundary conditions and superposition applied to this basic picture.
TIP
When a new wave phenomenon puzzles you, ask three questions: what's oscillating, what's the restoring force, what's the dispersion relation? "What's oscillating?" gives you u. "What's the restoring force?" gives you c (or the full ω(k)). Everything else follows.
Go deeper: Crawford, Waves (Berkeley Physics Course Vol. 3 — still the clearest introduction); French, Vibrations and Waves; Main, Vibrations and Waves in Physics; a half-day writing a 1D wave-equation simulator in 50 lines of Python (u[i, t+1] = 2·u[i, t] − u[i, t−1] + c²·Δt²/Δx² · (u[i+1, t] − 2·u[i, t] + u[i−1, t])) and watching reflections off the ends.
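That simulator takes only a dozen lines. A minimal sketch, assuming fixed (u = 0) ends, a Gaussian bump released from rest, and a stable CFL number c·Δt/Δx < 1:

```python
import numpy as np

# 1D wave equation, explicit finite differences, fixed (u = 0) ends.
nx, nt = 200, 400
dx, dt, c = 1.0, 0.5, 1.0            # CFL number c*dt/dx = 0.5 < 1 → stable
r2 = (c * dt / dx) ** 2

x = np.arange(nx) * dx
u_prev = np.exp(-0.01 * (x - nx * dx / 2) ** 2)   # Gaussian bump at rest
u = u_prev.copy()                                  # equal first two levels → zero velocity

for _ in range(nt):
    u_next = np.zeros_like(u)        # ends stay 0: hard (inverting) reflection
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(f"max |u| after {nt} steps: {np.abs(u).max():.3f}")
```

The bump splits into two half-amplitude pulses travelling in opposite directions; each reflects off an end with inverted sign, exactly as the f(x − ct) + g(x + ct) picture predicts.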
Station 2 — Superposition, interference, and standing waves
The wave equation is linear, which means its solutions add. If u₁(x, t) and u₂(x, t) are both solutions, so is any sum a·u₁ + b·u₂ for constants a and b. This property is called the principle of superposition, and it is why two people can talk in a room without their sound waves corrupting each other — the pressure pattern at your ear is simply the sum of the two waves, and your auditory system separates them.
Superposition produces interference. Two waves of the same frequency meeting in phase reinforce each other (constructive, amplitudes add); 180° out of phase they cancel (destructive, amplitudes subtract to zero). Thomas Young's double-slit experiment (1801) used this to prove light was a wave; the same phenomenon powers interferometers, noise-cancelling headphones, antenna arrays, and LIGO's detection of gravitational waves (a tiny interferometer-arm-length change measured to 1/10,000 the width of a proton).
When a wave is confined to a bounded region (a string with two fixed ends, a tube with closed ends, a microwave cavity), only a discrete set of frequencies — the normal modes — can form self-reinforcing standing waves. A guitar string of length L has modes at frequencies nv/2L; an organ pipe of length L closed at one end has modes at (2n−1)v/4L. These are the notes the instrument can make. Every acoustic instrument, every microwave oven cavity, every laser resonator is a specific boundary-value problem picking specific modes of an underlying wave equation.
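The two mode formulas are easy to tabulate. A sketch, using an illustrative 0.65 m string with wave speed chosen so the fundamental is a guitar low E (82.4 Hz), and a 0.5 m pipe in air:

```python
def string_modes(L, v, n_max=4):
    """Fixed-fixed string: f_n = n * v / (2L) — all integer harmonics."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

def closed_pipe_modes(L, v, n_max=4):
    """Pipe closed at one end: f_n = (2n - 1) * v / (4L) — odd harmonics only."""
    return [(2 * n - 1) * v / (4 * L) for n in range(1, n_max + 1)]

# 0.65 m string, v = 107.1 m/s → fundamental ≈ 82.4 Hz (guitar low E)
print(string_modes(0.65, 107.1))      # ≈ [82.4, 164.8, 247.2, 329.5]
# 0.5 m closed pipe, sound in air
print(closed_pipe_modes(0.5, 343.0))  # [171.5, 514.5, 857.5, 1200.5]
```

Note the closed pipe skips the even harmonics entirely — that missing-harmonic spectrum is a large part of why a clarinet sounds hollow next to a flute.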
The wave equation is linear: if u₁ and u₂ are solutions, so is any linear combination α u₁ + β u₂. This single property — superposition — makes interference, diffraction, Fourier analysis, and virtually everything else in wave physics work.
Two waves of equal amplitude, opposite directions → standing wave:
cos(kx − ωt) + cos(kx + ωt) = 2 · cos(kx) · cos(ωt)
the 2·cos(kx) factor is the spatial pattern, the cos(ωt) factor the temporal breathing
(nodes where kx = π/2, 3π/2, …)
Interference between two coherent waves with phase difference φ:
intensity ∝ |A₁ + A₂ · e^(iφ)|²
= A₁² + A₂² + 2 A₁ A₂ cos φ
constructive at φ = 2π n destructive at φ = (2n + 1) π
Double-slit path difference d · sin θ = n λ (bright fringes)
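The standing-wave identity at the top of this block is easy to verify numerically on a grid of x and t values:

```python
import numpy as np

k, w = 2.0, 5.0                      # arbitrary wavenumber and angular frequency
x = np.linspace(0.0, 10.0, 201)
t = np.linspace(0.0, 4.0, 101)
X, T = np.meshgrid(x, t)

left = np.cos(k * X - w * T) + np.cos(k * X + w * T)   # two counter-propagating waves
right = 2 * np.cos(k * X) * np.cos(w * T)              # standing-wave product form
print(np.allclose(left, right))                        # → True

# The envelope 2·cos(kx) vanishes at the nodes, kx = π/2, 3π/2, …
print(abs(np.cos(k * (np.pi / (2 * k)))) < 1e-12)      # → True
```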
Standing waves are why instruments make discrete notes. A string fixed at both ends of length L can hold waves where L = n·(λ/2) — half-integer wavelengths fit — so allowed frequencies are f_n = n · v/(2L). The n=1 fundamental plus integer harmonics = the frequency spectrum of a plucked string. Open and closed pipes (flute vs clarinet) select different harmonics by boundary conditions at each end.
- Coherence is the ability of two waves to maintain a fixed phase relationship. A single laser is highly coherent (phase stable over metres); sunlight is incoherent (phase randomises over micrometres, so no visible interference patterns). An LED and a laser of the same colour look similar but differ by ~10⁶ in coherence length.
- Double-slit experiment (Young, 1801 for light; Davisson-Germer 1927 for electrons): two slits produce an interference pattern only if each particle/wave goes through both in some sense. The pattern disappears if you measure which slit — the foundational quantum-mechanical observation. See Physics & Energy Station 7.
- Thin-film interference — the iridescence of oil on water, soap bubbles, and anti-reflection coatings on lenses — comes from light reflecting off the top and bottom surfaces of a thin layer, interfering constructively or destructively by wavelength. A λ/4 MgF₂ coating on glass reduces reflection from ~4% to under 1% for green light by destructive interference.
- Beats: two close frequencies superpose to produce amplitude modulation at |f₁ − f₂|. Piano tuners listen for beats to align strings; RF mixers use the same math to downconvert signals; every heterodyne receiver on Earth is a beat-frequency exploitation.
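The λ/4 coating in the thin-film bullet is a quarter wavelength measured inside the film, where the wavelength is λ/n. An illustrative calculation for MgF₂ (n ≈ 1.38) tuned to 550 nm green light:

```python
def quarter_wave_thickness_nm(vacuum_wavelength_nm, n_film):
    """Physical film thickness giving a quarter-wave optical path inside the film."""
    return vacuum_wavelength_nm / (4 * n_film)

# MgF₂ anti-reflection coating on glass, tuned for green light:
print(round(quarter_wave_thickness_nm(550, 1.38), 1))   # → 99.6 (nm)
```

A film ~100 nm thick makes the two reflections (top and bottom surface) arrive half a wavelength apart, so they cancel — that is the destructive interference doing the anti-reflection work.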
The model you want: superposition turns multiple waves into one, and the result carries fingerprints of the components — fringes for interference, beats for close-frequency mixing, harmonics for bound modes. Reading the fingerprints is most of applied wave science.
CAUTION
Two sound sources radiating the same tone from different positions set up a stationary interference pattern of loud and quiet regions, and a room's own reflections do the same at its resonant frequencies. Audio engineers call these "room modes" and fight them with absorbers and diffusers. If a room sounds unbalanced, it's almost certainly modal, not equipment — the room is doing exactly what superposition says it must.
Go deeper: Hecht, Optics (5th ed) chapters 7–9 on interference; Feynman Vol. 1 chapter 28–30 (interference with path integrals); the double-slit discussion in Nielsen & Chuang chapter 1 for the quantum view; a week building Fabry-Perot cavities or diffraction gratings in a lab to feel the math.
Station 3 — Fourier decomposition
A pure sinusoid is the simplest possible wave — one frequency, one amplitude, one phase. Real signals are almost never pure sinusoids. A spoken vowel, a guitar note, a radio broadcast, a voltage on a wire are all complicated time-varying functions. Fourier's theorem (Joseph Fourier 1807) says any reasonable periodic function can be written as a sum of sinusoids at integer multiples of a fundamental frequency; for non-periodic signals, the Fourier transform does the same thing with a continuous spectrum of frequencies.
This is the same mathematical tool that powers the FFT in the Signals & Embedded handbook. The insight is that two completely different-looking signals (a square wave and the first few terms of its Fourier series) can be the same object viewed in two different domains — the time domain (what voltage at each instant) and the frequency domain (what amplitude at each frequency). The two views are equally valid and completely interchangeable; some problems are trivial in one view and brutal in the other.
Fourier analysis is why equalisers work (boost or cut frequency bands individually), why JPEG compresses images (most of the visual information is in the low-frequency components), why a string's tone depends on which harmonics are excited (the ratio of overtone amplitudes is the timbre), and why wave phenomena that look complicated turn out to be made of simple pieces.
Any reasonable signal can be decomposed into a sum of sines. This is the content of Fourier analysis (Fourier, Théorie analytique de la chaleur, 1822). The theorem transforms wave physics into algebra — any wave equation you can solve for one monochromatic component, you can solve for all of them by superposition.
Fourier series (periodic signal, period T):
f(t) = a_0 + Σₙ [ a_n cos(n ω₀ t) + b_n sin(n ω₀ t) ]
with ω₀ = 2π / T
Fourier transform (aperiodic, continuous spectrum):
F(ω) = ∫ f(t) · e^(−i ω t) dt
f(t) = (1 / 2π) ∫ F(ω) · e^( i ω t) dω
Parseval's theorem (energy conservation):
∫ |f(t)|² dt = (1 / 2π) ∫ |F(ω)|² dω
Key properties:
time shift f(t − τ) ⟷ e^(−i ω τ) · F(ω)
scaling f(a t) ⟷ (1/|a|) · F(ω/a)
convolution (f ∗ g)(t) ⟷ F(ω) · G(ω) ← the big one
differentiation d f / dt ⟷ i ω · F(ω)
The convolution theorem is the operational heart of signal processing: convolution in time (what an LTI system does to a signal) becomes multiplication in frequency. Every filter in the Signals & Embedded handbook exploits this equivalence.
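A minimal numerical check of the convolution theorem, using circular convolution (the discrete-periodic analogue) against NumPy's FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
f = rng.standard_normal(n)
g = rng.standard_normal(n)

# Circular convolution computed directly from the definition (O(N²))...
circ = np.array([sum(f[m] * g[(i - m) % n] for m in range(n)) for i in range(n)])

# ...equals pointwise multiplication of the spectra (O(N log N) via FFT).
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
print(np.allclose(circ, via_fft))   # → True
```

This equivalence is exactly how fast FIR filtering works: transform, multiply, transform back.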
- Discrete Fourier Transform (DFT) and its fast implementation (FFT, Cooley & Tukey 1965) bring this to digital signals. An N-point DFT costs O(N²) naively, O(N log N) with the FFT. N = 1024: ~10⁶ vs ~10⁴ operations, a two-orders-of-magnitude saving that only widens as N grows.
- Spectrogram / Short-Time Fourier Transform slides a window across the signal, computes one FFT per frame, and stacks |F(ω)|² to get a time-frequency picture. This is how bat echolocation, bird-song recognition, speech-to-text, and earthquake analysis all start.
- Uncertainty principle in signal form: Δt · Δω ≥ 1/2. A signal localized in time must spread in frequency; a narrow bandwidth must extend in time. This is the same inequality that gives Heisenberg uncertainty in quantum mechanics — literally the same math, applied to wave-function amplitudes there and signal amplitudes here.
- Wavelets (Morlet, Daubechies, 1980s–) trade Fourier's infinite-duration sines for localized basis functions, giving simultaneous frequency and time locality. JPEG-2000 and many seismic codecs use them because they compress natural signals better than DCT at sharp transitions.
The model you want: every signal is a sum of sines; the Fourier transform tells you which sines and at what weights. Whether you're analyzing music, an EEG, or a Wi-Fi demod, the first tool is usually an FFT.
TIP
When you cannot resolve two close tones in the spectrum, you need more time samples (longer FFT window) — Δf = 1/T_window. When you cannot localize an event in time, you need shorter windows or wavelets. These trade off by the uncertainty principle; choose both deliberately.
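The Δf = 1/T_window rule in concrete form: two tones 2 Hz apart resolve with a 1 s window (Δf = 1 Hz) but merge with a 0.25 s window (Δf = 4 Hz). A sketch:

```python
import numpy as np

fs = 4000.0   # sample rate, Hz

def spectrum(n_samples):
    """Magnitude spectrum of two tones (440 and 442 Hz) over n_samples/fs seconds."""
    t = np.arange(n_samples) / fs
    x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 442 * t)
    return np.fft.rfftfreq(n_samples, 1 / fs), np.abs(np.fft.rfft(x))

f1, m1 = spectrum(4000)   # 1 s window   → Δf = 1 Hz: bins land on 440 and 442 Hz
f2, m2 = spectrum(1000)   # 0.25 s window → Δf = 4 Hz: both tones inside one bin

print(m1[441] < 0.1 * m1[440])   # → True: a clear dip between the peaks — resolved
print(round(f2[1] - f2[0], 3))   # → 4.0: bin spacing too coarse — the tones merge
```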
Go deeper: Oppenheim & Schafer, Discrete-Time Signal Processing (3rd ed) chapters 8–10; Bracewell, The Fourier Transform and Its Applications; Mallat, A Wavelet Tour of Signal Processing; 3Blue1Brown's "But what is the Fourier transform?" video for intuition first.
Station 4 — The electromagnetic spectrum
Station 5 of the Physics & Energy handbook showed that Maxwell's equations predict waves in the electromagnetic field that propagate at the speed of light. Those electromagnetic waves can have any frequency from zero to arbitrarily high, and the continuum of possible frequencies is called the electromagnetic spectrum. Human eyes evolved to see a tiny slice of it — roughly 400 to 700 nm wavelength, the range we call visible light — but the physics is the same everywhere on the spectrum. Radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays are the same phenomenon at different frequencies, differing only in wavelength, photon energy, and how they interact with matter.
The three useful conversions are c = λf (speed = wavelength × frequency), E = hf (photon energy = Planck's constant × frequency), and k = 2π/λ (wavenumber from wavelength). A 100 MHz FM radio wave has a 3 m wavelength and a photon energy of ~4 × 10⁻⁷ eV. A red photon at 430 THz has a 700 nm wavelength and ~1.8 eV. A gamma ray at 10²⁰ Hz has a 3 × 10⁻¹² m wavelength (smaller than an atomic nucleus) and ~400 keV of energy — enough to ionise anything it hits.
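The three conversions as code — a sketch using SI constants, reproducing the example numbers:

```python
import math

C = 299_792_458.0        # speed of light, m/s
H = 6.626e-34            # Planck constant, J·s
EV = 1.602e-19           # joules per electron-volt

def wavelength_m(f_hz):
    return C / f_hz                      # λ = c / f

def photon_energy_ev(f_hz):
    return H * f_hz / EV                 # E = h f

def wavenumber(lambda_m):
    return 2 * math.pi / lambda_m        # k = 2π / λ

print(wavelength_m(100e6))               # ≈ 3.0 m   — FM radio
print(photon_energy_ev(100e6))           # ≈ 4.1e-7 eV
print(photon_energy_ev(C / 700e-9))      # ≈ 1.77 eV — red light, matches 1240/700
```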
Different parts of the spectrum are useful for different things because different materials absorb, reflect, or transmit them differently. Metal foils absorb low-frequency radio; glass transmits visible light but absorbs UV; human skin transmits gigahertz microwaves and absorbs them as heat; bone absorbs X-rays strongly while soft tissue does not (why X-ray images are useful). The EM spectrum is what makes radio astronomy, medical imaging, fiber optics, telecommunications, microwave ovens, and solar cells all possible.
The EM spectrum is a continuous range of frequencies, named by convention into bands each with their own technology, interactions with matter, and safety implications. Each band has the same underlying physics — Maxwell's equations, photons E = h f — but different applications and different instruments because the interaction with matter varies hugely with frequency.
Band frequency wavelength photon energy how we use it
────── ───────── ────────── ───────────── ──────────────
ELF / ULF radio 3 Hz – 3 kHz 100 Mm – 100 km — (negligible) submarine comms
VLF / LF radio 3 kHz – 300 kHz 100 km – 1 km — time, beacons
MF / HF radio 300 kHz – 30 MHz 1 km – 10 m — AM, amateur
VHF / UHF 30 MHz – 3 GHz 10 m – 10 cm µeV FM, TV, mobile
Microwave 3 GHz – 300 GHz 10 cm – 1 mm meV Wi-Fi, radar, 5G
Far / mid infrared 0.3 THz – 100 THz 1 mm – 3 µm meV – eV thermal imaging
Near infrared 100 THz – 400 THz 3 µm – 0.75 µm eV fiber optics (1.55 µm)
Visible 400 THz – 790 THz 750 nm – 380 nm 1.7 – 3.3 eV human vision
Ultraviolet 790 THz – 30 PHz 380 nm – 10 nm 3 – 120 eV sterilization, EUV litho
X-ray 30 PHz – 30 EHz 10 nm – 10 pm 0.1 – 100 keV imaging, diffraction
Gamma ray > 30 EHz < 10 pm > 100 keV nuclear, astrophysics
Conversion identities:
E = h · f = h · c / λ h = 6.626 × 10⁻³⁴ J·s
photon energy in eV ≈ 1240 / λ(nm) (handy mnemonic for visible/UV)
thermal peak λ_peak · T ≈ 2.90 mm·K (Wien, blackbody peak at temp T)
- Visible light spans nearly one octave of frequency — 400–790 THz, 380–750 nm. Red is ~700 nm, green ~530 nm, blue ~470 nm. Human eyes evolved to peak sensitivity around 555 nm (yellow-green) because that's near the Sun's emission peak at our surface; photosynthesis uses mostly the same band with a dip in green (which is why leaves are green — they reflect the middle of our eye's range).
- Blackbody radiation: every object emits EM radiation with a spectrum set by its temperature (Planck 1900). A 300 K object peaks at ~9.7 µm (mid-infrared, invisible but thermal cameras see it); 3000 K incandescent bulb peaks at ~970 nm (near-IR, wasteful); 5800 K Sun peaks at ~500 nm (yellow-green, convenient for life); 20 million K plasma peaks at X-rays.
- Atmospheric windows decide what astronomy we can do from the ground. Visible (0.4–0.7 µm), near-IR (with some gaps), and radio (3 mm – 30 m) pass through; most UV, far-IR, X-ray, and gamma bands are absorbed — which is why Hubble is in orbit, JWST at L2 with an infrared design, Chandra (X-ray) and Fermi (gamma) are both orbital.
- Non-ionizing vs ionizing: the line sits at ~10 eV, roughly the ionization energy of hydrogen. Radio, microwave, IR, visible — non-ionizing, can only heat tissue. UV (above 10 eV), X-ray, gamma — ionizing, can break chemical bonds and damage DNA. This boundary dictates every safety standard around EM exposure.
The model you want: the EM spectrum is one continuum split into named bands by how matter responds. Same math everywhere, wildly different consequences — a 2.4 GHz photon and an X-ray photon are both "light," and differ by a factor of ~10¹² in energy.
WARNING
"5G causes health effects" mixes up non-ionizing microwave photons (μeV energies, can't break molecules) with ionizing radiation. The actual safety concern with any RF is thermal — high-power microwave at close range can cook tissue, which is why ovens run at 700–1000 W. 5G handset power is ~0.2 W spread over space. Physics sets the priors; regulations inform the tail risk.
Go deeper: Jackson, Classical Electrodynamics; Hecht, Optics; ITU-R radio-frequency band nomenclature (spectrum allocations worldwide); NASA's Electromagnetic Spectrum overview page; Reif, Fundamentals of Thermal and Statistical Physics chapter 15 on blackbody radiation.
Station 5 — Resonance, damping, and the quality factor Q
Every system that can support oscillations has one or more frequencies at which it responds most strongly — the natural frequencies or resonant frequencies. A swing pushed at its natural frequency goes higher and higher with small pushes; pushed at any other frequency it barely responds. A tuning fork rings at a specific pitch because that is its resonant frequency. A crystal oscillator in a watch holds a specific frequency because its piezoelectric structure has an extraordinarily sharp resonance at that frequency. A laser cavity resonates at specific wavelengths; so does a microwave cavity, an antenna, a tuned circuit, and a molecular vibration mode.
Resonance is the dramatic amplitude build-up that happens when a periodic driving force matches a system's natural frequency. Without damping (energy loss to friction, air resistance, electrical resistance), the amplitude grows without bound at exact resonance; all real systems have some damping that limits the peak. The quality factor Q measures how sharp the resonance is — Q = 2π × (energy stored) / (energy lost per cycle). A child's swing has Q ≈ 10; a wine glass has Q ≈ 1,000; a quartz crystal oscillator has Q ≈ 10⁵; an atomic clock reference cavity has Q ≈ 10¹⁰.
Resonance is both a powerful tool and a hazard. It is a tool because a high-Q resonator lets a weak signal at the right frequency build up to usable amplitudes — every radio receiver, every frequency-selective filter, every laser is doing this. It is a hazard because structures (bridges, buildings, aircraft wings) have natural frequencies, and matching the driving force to one of them (wind loading, earthquake frequencies, engine vibrations) can accumulate catastrophic amplitude — the 1940 Tacoma Narrows Bridge collapse is the classic demonstration.
A driven harmonic oscillator with natural frequency ω₀ responds most strongly when driven at or near ω₀. This is resonance, and it is the mechanism behind every musical instrument, every radio tuner, every laser cavity, every mechanical failure under repetitive load (Tacoma Narrows Bridge, 1940).
Driven, damped harmonic oscillator:
m · ẍ + b · ẋ + k · x = F₀ · cos(ω t)
ω₀ = √(k / m) natural frequency
γ = b / (2 m) damping rate
Q = ω₀ / (2 γ) quality factor
Amplitude response vs driving frequency ω:
|X(ω)| = (F₀ / m) / √( (ω₀² − ω²)² + (ω ω₀ / Q)² )
Peak amplitude roughly at ω ≈ ω₀ · √(1 − 1/(2Q²))
FWHM (full-width half-max) of |X|² ≈ ω₀ / Q
→ high Q ⇒ narrow, tall resonance peak
low Q ⇒ broad, flat response
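Evaluating the amplitude response on a grid confirms both relations in this block — the FWHM of |X|² is ω₀/Q, and the peak sits at ω₀·√(1 − 1/(2Q²)). A sketch with ω₀ = 1 and Q = 50:

```python
import numpy as np

w0, Q, F0_over_m = 1.0, 50.0, 1.0

def amplitude(w):
    """|X(ω)| for the driven, damped oscillator, with F₀/m = 1."""
    return F0_over_m / np.sqrt((w0**2 - w**2)**2 + (w * w0 / Q)**2)

w = np.linspace(0.8, 1.2, 200001)
power = amplitude(w)**2

# FWHM of |X|² should come out ≈ ω₀ / Q = 0.02
above = w[power >= power.max() / 2]
print(round(above[-1] - above[0], 4))   # → 0.02

# Peak sits just below ω₀, at ω₀·√(1 − 1/(2Q²)) ≈ 0.9999
print(round(w[np.argmax(power)], 4))    # → 0.9999
```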
Q is the single number that summarizes an oscillator's behaviour:
- Q = 1/2 — critically damped: no oscillation, fastest return to equilibrium. Car suspension, door closers, control-loop tuning.
- Q ≈ 10–100 — musical instruments (piano strings ~1000, but coupled to a soundboard that radiates and damps), mechanical springs with air resistance.
- Q ≈ 10⁴ — good LC radio circuits; a tuned quartz crystal oscillator; this is why a $2 watch crystal keeps time to ~10 seconds/month.
- Q ≈ 10⁶–10⁹ — laser cavities, microwave resonators, atomic clocks, LIGO's test-mass suspensions. A strontium optical-lattice atomic clock has an atomic transition Q of ~10¹⁷.
- Energy perspective: Q = 2π · (energy stored) / (energy lost per cycle). High Q means an oscillator "rings" a long time — τ = Q / (π·f) seconds for the amplitude to decay by a factor e. An acoustic guitar's sustain comes from Q ~ a few hundred on the strings plus the body's response.
- Resonance curves and bandwidth: the bandwidth of a resonant filter is Δf = f₀ / Q. A radio receiver's Q determines how sharply it separates adjacent stations; too-high Q cuts into the information bandwidth of the signal and makes it distort.
- Anti-resonance and zeros: a system can have frequencies at which it does not respond (zeros in the transfer function). Noise-cancelling headphones, vibration dampers on skyscrapers, and "tuned mass dampers" all engineer anti-resonance to kill specific modes.
The model you want: Q is the ratio of oscillation rate to damping rate; high Q means narrow band, long ringing, precise tuning; low Q means broad response, fast settling, noise rejection. Pick Q for what you want the system to do.
CAUTION
A bridge or a building designed without considering wind-induced vibration modes is a lesson waiting to be re-taught. Tacoma Narrows (1940) was a Q ~ 100 aeroelastic flutter mode; modern structural engineering requires identifying every Q > 20 mode within the design wind spectrum and damping it. The math has been understood since von Kármán; the humility is what rebuilds bridges.
Go deeper: Morin, Introduction to Classical Mechanics chapter 5; Crawford, Waves chapters 3–4; Siegman, Lasers chapter on cavity Q for resonator engineering; Den Hartog, Mechanical Vibrations (4th ed).
Station 6 — Optics and diffraction
Light has a dual personality. On scales much larger than the wavelength, it behaves like rays travelling in straight lines — geometrical optics. That is the approximation that explains lenses, mirrors, cameras, eyes, and telescope design with high-school trigonometry. Snell's law (n₁ sin θ₁ = n₂ sin θ₂) tells you where a refracted ray goes; the lens equation (1/f = 1/s + 1/s′) tells you where an image forms. This picture works brilliantly until you look at fine details, at which point the wave nature of light reasserts itself.
Diffraction is what happens when a wave meets an obstacle or passes through an aperture comparable in size to its wavelength. Instead of following geometry's straight lines, the wave bends around the edge and produces characteristic interference patterns. A single slit of width a produces a central bright fringe of angular width ~λ/a; smaller apertures produce broader patterns. A diffraction grating (thousands of regular slits) disperses light into its constituent wavelengths, which is how spectrometers work. The diffraction limit — ~λ/D for an aperture of diameter D — sets the fundamental angular resolution of any optical instrument: a telescope cannot resolve detail smaller than this no matter how perfect its optics, which is why astronomers build larger and larger mirrors.
The wave-optics view also explains polarisation (the orientation of the electric field in an EM wave), which gives polarised sunglasses, LCD displays, 3D cinema glasses, and a large fraction of optical sensing. For short-wavelength light in small systems, even geometrical optics breaks down and you need diffraction theory; for wavelengths much smaller than any feature, the ray picture wins.
Optics is wave physics on small length scales — the diffraction and interference of visible light through apertures, slits, and lenses. Two limits: geometric optics (when λ ≪ aperture, light travels in straight rays that refract predictably at interfaces) and physical optics (when λ ~ aperture, diffraction patterns dominate).
Snell's law (refraction at an interface):
n₁ · sin θ₁ = n₂ · sin θ₂
n = c / v (refractive index)
Thin lens equation:
1 / f = 1 / d_o + 1 / d_i
Single-slit diffraction minimum at angle θ:
a · sin θ = m λ (a = slit width, m = ±1, ±2, …)
Grating equation (maxima):
d · sin θ = m λ (d = grating spacing)
Rayleigh resolution criterion (two point sources):
sin θ_min = 1.22 · λ / D (circular aperture of diameter D)
The Rayleigh criterion is why telescopes get bigger. At 550 nm, a 1-m telescope resolves two points separated by ~0.14 arcseconds; an 8-m (Gemini, VLT unit) reaches ~0.017″ in ideal seeing; a 10-m (Keck) reaches ~0.014″; the 39-m ELT (first light expected 2028) will reach ~0.0035″ — enough to resolve features a few metres across on the Moon. The Cosmos & Astrophysics handbook picks up this thread.
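The telescope numbers follow directly from the Rayleigh formula; a sketch:

```python
import math

ARCSEC_PER_RAD = 180 * 3600 / math.pi   # ≈ 206 265

def rayleigh_arcsec(wavelength_m, aperture_m):
    """Rayleigh-criterion angular resolution of a circular aperture, in arcseconds."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

for d_m in (1.0, 8.0, 10.0, 39.0):
    print(f"{d_m:>4} m aperture → {rayleigh_arcsec(550e-9, d_m):.4f} arcsec")
# 1 m → 0.1384, 8 m → 0.0173, 10 m → 0.0138, 39 m → 0.0035
```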
- Chromatic aberration: glass's refractive index depends slightly on wavelength (dispersion), so a simple lens focuses red and blue to different points. Achromatic doublets (crown + flint glass, invented by Hall & Dollond ~1730) cancel this to first order.
- Polarization: EM waves are transverse, so the electric vector can point in any direction perpendicular to propagation. Sunglasses polarize (block horizontal, admit vertical — glare from water is mostly horizontal). LCDs depend entirely on polarization rotation by liquid-crystal layers between crossed polarizers.
- Nonlinear optics: at high intensity, refractive index depends on field strength, enabling frequency doubling (green lasers from IR), four-wave mixing, Kerr effect, and the solitons that power modern long-haul optical fibre. Most of frontier laser physics lives here.
- Adaptive optics: atmospheric turbulence blurs astronomical images to ~1 arcsecond at sea level. Measure the wavefront distortion with a guide star, deform a mirror 1000 times per second to cancel it, and recover near-diffraction-limited imaging. Now standard on 8-m-class ground telescopes; gives them angular resolution that was exclusively space's domain in the 1990s.
The model you want: geometric optics handles "which way does the ray bend" with Snell's law and lens equations; physical optics handles "how sharp can this image be" via diffraction and the Rayleigh criterion. The transition between them is when λ becomes comparable to a feature size.
TIP
"My camera photo is soft" can be focus, motion blur, lens aberrations, sensor pixel pitch, or diffraction. The last one kicks in as you stop down: past about f/8 on full-frame sensors, Airy-disk size exceeds pixel size and image sharpness decreases even if everything else is perfect. Physics puts a hard ceiling on small-aperture sharpness.
Go deeper: Hecht, Optics (5th ed) — the single best optics textbook; Born & Wolf, Principles of Optics (7th ed) for graduate depth; Goodman, Introduction to Fourier Optics for the link between Fourier analysis and imaging; Saleh & Teich, Fundamentals of Photonics.
Station 7 — Acoustics: longitudinal pressure waves
Not all waves move the medium perpendicular to the direction of propagation. In longitudinal waves the medium compresses and rarefies along the direction the wave travels, creating regions of high and low pressure that march outward. Sound is the classic example: a vibrating loudspeaker pushes the air in front of it, compressing it; that compression expands outward, and the wave of alternating compression and rarefaction is what your eardrum ultimately detects.
Acoustic waves in air at room temperature travel at about 343 m/s. In water (~1500 m/s) and in steel (~5000 m/s) they travel faster because the medium is stiffer. Human hearing covers roughly 20 Hz to 20 kHz, corresponding to wavelengths from ~17 m down to ~17 mm — a huge dynamic range that makes a single "acoustic room" contain features on many different scales. Ultrasound is anything above 20 kHz and is used for medical imaging, non-destructive testing, cleaning, and animal navigation (bats at 20–200 kHz, dolphins at 40–150 kHz). Infrasound is below 20 Hz and is generated by volcanoes, earthquakes, and whales; it can travel thousands of kilometres through the atmosphere and ocean.
Acoustics behaves like optics in many ways — reflection, refraction, diffraction, interference all apply — but with one critical difference: sound requires a material medium (air, water, solid), whereas light travels happily through vacuum. This means acoustic waves can be absorbed, scattered, and reshaped by every feature of the medium, which is why room acoustics, concert-hall design, and noise-control engineering are deep practical disciplines.
Sound is a longitudinal pressure wave — air molecules oscillate along the direction of propagation, not across it (unlike light). Compressions and rarefactions travel at the speed of sound (343 m/s in 20 °C air, rising with temperature as √T), and the ear's response to the tiny pressure fluctuations (~20 µPa at threshold, 20 Pa at pain) is what we call hearing.
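The 343 m/s figure and its √T scaling can be checked with the ideal-gas approximation c ≈ 331.3 m/s · √(T / 273.15 K) — a minimal sketch:

```python
import math

def speed_of_sound_air(temp_celsius: float) -> float:
    """Ideal-gas approximation: c = 331.3 m/s · sqrt(T / 273.15 K)."""
    return 331.3 * math.sqrt((temp_celsius + 273.15) / 273.15)

print(f"{speed_of_sound_air(0):.0f} m/s at 0 C")    # ≈ 331 m/s
print(f"{speed_of_sound_air(20):.0f} m/s at 20 C")  # ≈ 343 m/s
```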
Sound pressure level (SPL):
SPL = 20 · log₁₀(p / p_ref) dB, where p_ref = 20 µPa
0 dB SPL — hearing threshold, 20 µPa
40 dB SPL — quiet library
60 dB SPL — conversation
85 dB SPL — OSHA occupational limit (8 h)
120 dB SPL — jet engine at 100 m, pain threshold
194 dB SPL — theoretical max in air (1 atm compression)
Human hearing range: 20 Hz – 20 kHz (declines with age, "presbycusis")
A4 concert pitch: 440 Hz
musical half-step: 2^(1/12) ≈ 1.0595 (equal-temperament ratio)
octave: 2:1 frequency ratio
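The SPL ladder above follows directly from the definition — a minimal sketch:

```python
import math

P_REF = 20e-6  # Pa — hearing-threshold reference pressure

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB re 20 µPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))  # 0 dB  — threshold of hearing
print(spl_db(0.02))   # 60 dB — conversation-level pressure (0.02 Pa)
print(spl_db(20.0))   # 120 dB — pain threshold
```

Each factor of 10 in pressure adds 20 dB, which is why a million-fold pressure range fits in 0–120 dB.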
- Acoustic impedance Z = ρ c controls how sound transmits across interfaces. Air has Z ≈ 400 kg/(m²·s); water ≈ 1.5 × 10⁶; soft tissue ~ water; bone higher still. This huge mismatch is why ultrasound imaging needs a gel between the transducer and skin (to match Z), and why hearing under water is different (impedance mismatch with the eardrum changes).
- Doppler shift: f_observed = f_source · (c + v_observer) / (c − v_source), with each speed counted positive when moving toward the other party. For a car horn passing you, pitch drops by a factor of ~1.2 from approach to recession at highway speed — you've all heard this. In astronomy, stellar Doppler shifts (redshift, blueshift) are how radial velocities and exoplanets are measured — same principle (with the relativistic formula), vastly smaller v/c.
- Reverberation time (RT60): time for a sound to decay by 60 dB. Sabine's formula RT60 = 0.16 · V / A (SI units; V volume m³, A total absorption m² sabins). Concert halls target 1.8–2.2 s (Musikverein is 2.0); cathedrals 6–10 s; recording studios 0.3–0.5 s; a typical living room ~0.5 s.
- Ultrasonic and infrasonic: above 20 kHz, sonar, bat echolocation, ultrasonic cleaners (40 kHz cavitation), medical imaging (2–15 MHz). Below 20 Hz, infrasound propagates for thousands of kilometres with little attenuation — elephants and whales communicate at these frequencies; so do earthquakes and volcanoes.
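A quick numeric check of the Doppler bullet, using the stationary-medium convention (speeds positive toward the other party); the 30 m/s (~108 km/h) highway speed is an assumed figure:

```python
def doppler_observed_hz(f_source: float, v_source: float, v_observer: float,
                        c: float = 343.0) -> float:
    """Observed frequency for sound; speeds in m/s, positive when
    moving toward the other party (stationary-medium convention)."""
    return f_source * (c + v_observer) / (c - v_source)

# 440 Hz horn on a car at 30 m/s, stationary listener
approach = doppler_observed_hz(440, 30, 0)   # source approaching
recede = doppler_observed_hz(440, -30, 0)    # source receding
print(f"{approach:.0f} Hz -> {recede:.0f} Hz, ratio {approach / recede:.2f}")
# → 482 Hz -> 405 Hz, ratio 1.19
```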
The model you want: sound is a pressure wave whose speed depends on the medium's bulk modulus and density, and whose intensity spans roughly 12 orders of magnitude from the threshold of hearing to the threshold of pain. dB units exist because our ears, and everything we build to measure them, span vastly more than a linear scale can hold.
TIP
Every "it sounds bad in this room" problem reduces to mode placement (Station 2), reverberation time, and material absorption. A few hundred dollars of bass traps and acoustic panels in corners do more for listening-room acoustics than thousands of dollars of new speakers. The physics of the room dominates the physics of the transducer.
Go deeper: Kinsler, Frey, Coppens & Sanders, Fundamentals of Acoustics (4th ed); Rossing, The Science of Sound; Beranek, Concert Halls and Opera Houses; the recordings-engineering perspective in Toole, Sound Reproduction.
Station 8 — Dispersion: when c depends on frequency
In the idealised wave equation from Station 1, the speed c is a constant — every frequency component of a wave travels at the same speed, and a complicated pulse keeps its shape as it moves. In many real media, that idealisation breaks. The speed actually depends on frequency, so different Fourier components of a wave travel at different speeds, and an initially sharp pulse spreads out over distance. This is dispersion.
Dispersion is why a prism splits white light into a rainbow — the glass's index of refraction is slightly different for blue light than for red, so the two refract by different angles. It is part of why lightning's thunder rumbles for several seconds instead of a sharp crack — though there, path-length differences along the kilometres-long channel and frequency-dependent atmospheric absorption do most of the stretching, since air is nearly non-dispersive at audio frequencies. It is why a long fibre-optic cable cannot carry arbitrarily fast bit rates — pulse-broadening from chromatic dispersion limits how close together bits can be before they smear into each other. It is why ocean swell (which is dispersive) arrives at the shore sorted by wavelength, long wavelengths first, while the original storm produced them all at once.
The precise form of the dispersion relation ω(k) characterises the medium and determines everything interesting. For electromagnetic waves in vacuum ω = ck (no dispersion), so a laser pulse travels through space without distortion. For water waves, ω² = gk (for deep water) — strongly dispersive. For optical fibres, the dispersion is small but finite and has to be actively compensated. Knowing the dispersion relation is knowing how the wave will behave.
Most simple wave examples assume the wave speed c is a constant. In dispersive media, c depends on frequency — different frequency components travel at different speeds. This is why a prism spreads white light into a rainbow, why rapid drum beats lose crispness over distance under water, and why high-bandwidth fibre-optic links are engineering achievements.
Dispersion relation ω(k) captures the speed-vs-frequency story:
non-dispersive: ω = c · k (phase speed v_p = group speed v_g)
dispersive: ω(k) is nonlinear
v_p = ω / k
v_g = d ω / d k (the pulse / envelope speed)
A pulse with bandwidth Δω travels a distance L and spreads by
Δt ≈ L · Δω · | d²k / dω² | (second-order group-velocity dispersion, β₂ = d²k/dω²)
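Phase and group speed can be read straight off ω(k); here is a minimal sketch for the deep-water relation ω = √(g·k), where the group speed comes out at exactly half the phase speed:

```python
import math

g = 9.81  # m/s², gravitational acceleration

def omega_deep_water(k: float) -> float:
    """Deep-water gravity-wave dispersion: ω = √(g·k)."""
    return math.sqrt(g * k)

def speeds(k: float, dk: float = 1e-8):
    """Phase speed ω/k and group speed dω/dk (central difference)."""
    v_p = omega_deep_water(k) / k
    v_g = (omega_deep_water(k + dk) - omega_deep_water(k - dk)) / (2 * dk)
    return v_p, v_g

k = 2 * math.pi / 100  # 100 m wavelength swell
v_p, v_g = speeds(k)
print(f"v_p = {v_p:.2f} m/s, v_g = {v_g:.2f} m/s")  # v_g ≈ v_p / 2
```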
- Optical glass: refractive index depends on wavelength (Sellmeier equation). For BK7 glass, n ≈ 1.517 at 587 nm (Fraunhofer d line), n ≈ 1.522 at 486 nm (blue F line), n ≈ 1.514 at 656 nm (red C line) — a ~0.5% spread that makes prisms work and forces lens designers to use combinations.
- Single-mode fibre at 1550 nm has chromatic dispersion ~17 ps/(nm·km). A 10 Gbps signal has ~0.1 nm spectral width — so 100 km of fibre smears the pulse by 17 ps/(nm·km) × 0.1 nm × 100 km = 170 ps, well over the 100 ps bit time. Dispersion-compensating fibre or digital equalization in coherent receivers is how 100 Gbps+ per wavelength reaches thousands of km.
- Ocean waves: deep-water gravity waves have ω² = g · k, so v_p = √(g/k) and v_g = ½ v_p. Long-wavelength (low-frequency) swells travel fast; short waves travel slow. A storm at sea sends low-frequency swell arriving hours before the higher-frequency waves — a predictable, observable dispersion signature.
- Plasma dispersion: radio waves travel through the ionosphere with ω² = ω_p² + c²k², where the plasma frequency ω_p/2π is up to ~9 MHz at typical ionospheric densities. Below ω_p, waves reflect off the layer (HF shortwave "skip" propagation); above ω_p, they transmit. This is why distant AM stations come in at night (the ionosphere reflects them once daytime D-layer absorption fades) and FM doesn't skip (its ~100 MHz carriers are far above ω_p).
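The ocean-swell bullet makes a quantitative prediction: deep-water group speed grows with period (v_g = g·T / 4π, half the phase speed g·T / 2π), so long-period swell from a distant storm arrives first. A minimal sketch with an assumed 2000 km storm distance:

```python
import math

g = 9.81  # m/s²

def swell_travel_hours(distance_km: float, period_s: float) -> float:
    """Travel time of deep-water swell: group speed v_g = g·T / (4π)."""
    v_g = g * period_s / (4 * math.pi)
    return distance_km * 1000 / v_g / 3600

# Storm 2000 km offshore: long-period swell outruns short-period chop
for T in (15, 10, 8):
    print(f"T = {T:2d} s: arrives after {swell_travel_hours(2000, T):.0f} h")
# long periods first: ≈ 47 h, 71 h, 89 h — a spread of nearly two days
```

Surfers and forecasters exploit exactly this: the rate at which arriving swell period decreases over time encodes the distance to the storm.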
The model you want: dispersion is a frequency-dependent wave speed, diagnosed from ω(k), quantified by d²ω/dk². It turns a sharp pulse into a smear over distance, and every communications engineer and seismologist fights or exploits it.
WARNING
Saying "light travels at c in all media" is wrong — the phase speed of light in glass (n ≈ 1.5) is only about 2c/3. What is invariant is c in vacuum and the Lorentz-transformation structure. In media, dispersion is a real and useful fact, not a violation of relativity.
Go deeper: Jackson, Classical Electrodynamics §7.5 on dispersion; Hecht, Optics chapter 3 on propagation; Agrawal, Fiber-Optic Communication Systems chapters 2–3 for the engineering view; Lighthill, Waves in Fluids for the geophysical perspective on dispersion.
How the stations connect
The wave equation is the trunk; every station is a specialization. Media set the dispersion relation; boundary conditions set the allowed modes; driving inputs set the resonance; superposition decomposes arbitrary waves into the monochromatic solutions.
The Physics & Energy handbook supplies the Maxwell-equation and Schrödinger-equation origins of wave phenomena; the Signals & Embedded handbook operationalises the Fourier and modulation machinery for engineering; the Cosmos & Astrophysics handbook applies diffraction-limited imaging and Doppler to astronomical scales.
Standards & Specs
- ITU-R Radio Regulations — international band allocations across the EM spectrum.
- IEEE Std 686 (radar definitions), IEEE Std 211 (definitions of radio-wave-propagation terms) — the vocabulary for microwave and radar engineering.
- ISO 226:2023 (equal-loudness contours) — how human hearing weights frequencies.
- IEC 60651 / 61672 (sound level meters) — how SPL is actually measured.
- ANSI Z136.1 (and IEC 60825 internationally) — the laser safety standards.
- CIE 1931 chromaticity — the colour-space that tied human colour perception to physical spectrum.
- Canonical papers — Fourier, Théorie analytique de la chaleur (1822); Maxwell, "A Dynamical Theory of the Electromagnetic Field" (1865); Rayleigh, The Theory of Sound (2 vols, 1877); Planck, "Zur Theorie des Gesetzes der Energieverteilung im Normalspektrum" (1900, blackbody); Young, "The Bakerian Lecture: On the Theory of Light and Colours" (1802); Cooley & Tukey, "An Algorithm for the Machine Calculation of Complex Fourier Series" (1965); Sabine's early reverberation papers (1900s).
- Books — Crawford, Waves. Hecht, Optics. Born & Wolf, Principles of Optics. Kinsler et al., Fundamentals of Acoustics. Jackson, Classical Electrodynamics. Siegman, Lasers. Agrawal, Fiber-Optic Communication Systems. Bracewell, The Fourier Transform and Its Applications.
Test yourself
A student claims a 1 cm aperture can produce sharper images than an 8-cm aperture "because the light is more focused." Use the Rayleigh criterion to argue otherwise, and compute the resolution difference at 550 nm.
Rayleigh resolution is sin θ_min ≈ 1.22 · λ/D — smaller aperture D gives a larger θ_min and thus worse resolution. For 550 nm light: 1-cm aperture resolves angles of 1.22 × 550 × 10⁻⁹ / 0.01 ≈ 6.7 × 10⁻⁵ rad ≈ 14″. 8-cm aperture: 8.4 × 10⁻⁶ rad ≈ 1.7″ — 8× better. The "focused" intuition confuses ray-optics focus (a small aperture can have an accurate focal point) with diffraction-limited resolution (a larger aperture collects more spatial frequencies and produces a tighter Airy disk). See Station 6.
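The arithmetic in that answer, as a minimal sketch:

```python
import math

def rayleigh_limit_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution θ ≈ 1.22 · λ / D, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

for d_cm in (1, 8):
    theta = rayleigh_limit_arcsec(550e-9, d_cm / 100)
    print(f"D = {d_cm} cm: {theta:.1f} arcsec")
# → 13.8 arcsec vs 1.7 arcsec — the 8 cm aperture resolves 8× finer
```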
A concert-hall acoustician measures RT60 = 3.5 s in a 10 000 m³ hall. Classical music director wants RT60 ≈ 2.0 s. Use Sabine's formula to estimate how much additional absorption (in m² sabins) is needed.
Sabine: RT60 = 0.16 · V / A. Currently A₁ = 0.16 · 10 000 / 3.5 ≈ 457 m² sabins. Target: A₂ = 0.16 · 10 000 / 2.0 = 800 m² sabins. Add about 343 m² sabins. Practical translation: ~340 m² of a material with absorption coefficient α ≈ 1 (like thick open-cell foam at mid frequencies), or ~1000 m² at α ≈ 0.3 (heavy curtains, carpet). Notice the formula has no frequency in it — real halls need different RT60 at different frequencies, which complicates this estimate. See Station 7.
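The Sabine arithmetic, as a minimal sketch:

```python
def sabine_absorption(volume_m3: float, rt60_s: float) -> float:
    """Total absorption A (m² sabins) implied by Sabine: RT60 = 0.16 · V / A."""
    return 0.16 * volume_m3 / rt60_s

a_now = sabine_absorption(10_000, 3.5)     # current hall: ≈ 457 m² sabins
a_target = sabine_absorption(10_000, 2.0)  # target: 800 m² sabins
print(f"add {a_target - a_now:.0f} m2 sabins")  # ≈ 343
```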
A fibre-optic link carries 100 Gbps at 1550 nm over 80 km of standard SMF with chromatic dispersion 17 ps/(nm·km). The transmitter has a spectral width of 0.02 nm. Estimate the pulse spreading at the receiver.
Chromatic dispersion spread: Δt ≈ D · L · Δλ = 17 ps/(nm·km) × 80 km × 0.02 nm = 27.2 ps. At 100 Gbps the bit time is 10 ps, so spreading is ~2.7 bit-times — significant inter-symbol interference. Remediation: dispersion-compensating fibre in series, a fibre Bragg grating equaliser, or (modern coherent systems) DSP at the receiver. 100 Gbps coherent transceivers handle this routinely because they know D and L and invert the dispersion in software. See Station 8.
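The same estimate in code, as a minimal sketch:

```python
def dispersion_spread_ps(D_ps_nm_km: float, length_km: float,
                         width_nm: float) -> float:
    """Chromatic-dispersion pulse spreading: Δt = D · L · Δλ, in ps."""
    return D_ps_nm_km * length_km * width_nm

spread = dispersion_spread_ps(17, 80, 0.02)  # 17 ps/(nm·km), 80 km, 0.02 nm
bit_time_ps = 1e12 / 100e9                   # 10 ps per bit at 100 Gbps
print(f"spread {spread:.1f} ps = {spread / bit_time_ps:.1f} bit times")
# → spread 27.2 ps = 2.7 bit times
```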