Handbook · Natural
Physics & Energy
TL;DR
Physics is the attempt to describe how the universe works with as few rules as possible. The astonishing outcome of four centuries of experiment is that we can. A small collection of laws — each written as one or a handful of equations — accurately predicts almost everything we see, from a falling apple to the lifetime of a neutron to the orbit of a satellite to the glow of a filament to the merging of two black holes. Learning physics well enough to use it means learning these laws, the regime where each applies, and the regime where it breaks.
The laws come in five families. Classical mechanics (Newton 1687, reformulated by Lagrange and Hamilton in the 1700s–1800s) describes how objects move under forces; it covers almost everything from the scale of bacteria to the scale of planets, at speeds slow compared to light. Thermodynamics and its microscopic cousin statistical mechanics describe heat, work, entropy, and the statistical behaviour of systems with many particles — why ice melts, why engines have an efficiency ceiling, why the air in a room mixes. Electromagnetism (Maxwell 1865) unifies electricity, magnetism, and light into a single set of four equations; it explains atomic structure, chemistry, and every photon you have ever seen. Relativity (Einstein 1905, 1915) fixes Newton where his description fails — at high speeds (special relativity) and in strong gravity (general relativity). Quantum mechanics fixes Newton at short distances, where the particle picture is wrong and probability becomes fundamental.
Above all of these is one meta-principle: symmetry. Emmy Noether proved in 1918 that every continuous symmetry of the laws of physics implies a corresponding conserved quantity — time-translation symmetry gives conservation of energy, spatial-translation symmetry gives momentum, rotational symmetry gives angular momentum, and so on. This is why the conservation laws that beginners memorise as rules are actually deep features of the structure of the theory.
This handbook walks the five families plus symmetry, in the order students historically meet them: units and dimensional analysis (the language), classical mechanics (Newton → Lagrangian → Hamiltonian), thermodynamics, statistical mechanics, electromagnetism, relativity, quantum mechanics, and symmetry. Everything is built from first principles with the minimum vocabulary needed.
You will be able to
- Write down the five laws that matter most — Newton's second, the four thermodynamic laws, Maxwell's equations, Schrödinger's equation, Einstein's field equations — and say where each applies.
- Reason with dimensions and order-of-magnitude estimates to get within a factor of 3 of any physical quantity without a calculator.
- Place energy, momentum, charge, and angular momentum on Noether's map and identify which symmetry each reflects.
The Map
- You will be able to
- The Map
- Station 1 — Units, dimensional analysis, and the SI
- Station 2 — Classical mechanics: from Newton to Lagrangian
- Station 3 — Thermodynamics: four laws, one arrow of time
- Station 4 — Statistical mechanics and the partition function
- Station 5 — Electromagnetism: Maxwell's equations in full
- Station 6 — Relativity: special and general
- Station 7 — Quantum mechanics: Schrödinger, observables, superposition
- Station 8 — Symmetry and Noether's theorem
- How the stations connect
- Standards & Specs
- Test yourself
Read the map along two axes. Horizontally: domains of applicability — mechanics at human scale, thermodynamics in aggregate, EM everywhere charged, relativity near c or massive gravity, quantum at atomic scale. Vertically: each station has a concise mathematical spine (Newton's 2nd law, the Second Law, Maxwell's four, Schrödinger, Einstein's ten), a domain of validity, and a bridge to the next station where it breaks.
Station 1 — Units, dimensional analysis, and the SI
Physics is quantitative, which means every number comes with a unit — metres, seconds, kilograms, coulombs, kelvins. A number without a unit is meaningless; "the car is going 60" is not an answer unless you know "60 what." Units enforce that equations relate the same kinds of thing: you can add two lengths, but you cannot add a length to a time. This sounds pedantic until it prevents a real error — dimensional analysis is the practice of checking that the dimensions on both sides of an equation match, and it is the most reliable quick sanity-check physics has.
The SI (Système International d'Unités) is the international agreement on which units are fundamental. Seven base units — second (time), metre (length), kilogram (mass), ampere (current), kelvin (temperature), mole (amount of substance), candela (luminous intensity) — from which every other unit is derived (newton = kg·m/s², joule = kg·m²/s², watt = J/s). Since 2019 the SI units are defined not by physical artefacts but by fixed values of fundamental constants — the speed of light c, the Planck constant h, the elementary charge e, the Boltzmann constant k_B, and the Avogadro constant N_A — ensuring the same metre and the same second are available anywhere in the universe without calibrating against a prototype.
Physics is quantitative; quantities carry units. Keep the dimensions in every formula and most mistakes surface before the arithmetic. The SI (Système International) is the agreed set of seven base units; every other physical unit is a product of powers of these.
SI base units (as redefined in 2019 — now all tied to fundamental constants):
s (second) — 9 192 631 770 periods of Cs-133 hyperfine transition
m (metre) — distance light travels in 1/299 792 458 s (c exact)
kg (kilogram) — from Planck constant h = 6.626 070 15 × 10⁻³⁴ J·s (exact)
A (ampere) — from elementary charge e = 1.602 176 634 × 10⁻¹⁹ C (exact)
K (kelvin) — from Boltzmann constant k = 1.380 649 × 10⁻²³ J/K (exact)
mol (mole) — from Avogadro N_A = 6.022 140 76 × 10²³ mol⁻¹ (exact)
cd (candela) — from luminous efficacy at 555 nm (exact)
The platinum-iridium prototype of the kilogram was retired in 2019.
Every base unit is now defined by fixing the exact numerical value of a fundamental constant.
Every physical equation must be dimensionally homogeneous — both sides carry the same unit powers. This is the single most effective error-check in physics; if you differentiate position by time and your answer has units of metres rather than m/s, stop.
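This bookkeeping is mechanical enough to automate. A minimal sketch, tracking only length, mass, and time exponents (the `Dim` class is illustrative, not a standard library — real projects use a units package):

```python
# Minimal dimensional-analysis sketch: a dimension is a vector of
# exponents over SI base units (here just m, kg, s for brevity).
from dataclasses import dataclass

@dataclass(frozen=True)
class Dim:
    m: int = 0   # length exponent
    kg: int = 0  # mass exponent
    s: int = 0   # time exponent

    def __mul__(self, other):
        return Dim(self.m + other.m, self.kg + other.kg, self.s + other.s)

    def __truediv__(self, other):
        return Dim(self.m - other.m, self.kg - other.kg, self.s - other.s)

LENGTH, MASS, TIME = Dim(m=1), Dim(kg=1), Dim(s=1)
VELOCITY = LENGTH / TIME
ACCEL    = VELOCITY / TIME
FORCE    = MASS * ACCEL        # kg·m/s² — the newton
ENERGY   = FORCE * LENGTH      # kg·m²/s² — the joule

# Dimensional homogeneity: both sides of E = ½ m v² must match.
assert ENERGY == MASS * VELOCITY * VELOCITY
```

Dimensionless prefactors like ½ are invisible to this check — which is exactly why dimensional analysis gets you the form of a law but not its constants.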
- Order-of-magnitude estimation ("Fermi problems") is a core skill. How many piano tuners in Chicago? How much energy is in a lightning bolt? How long until an ice cube melts in a cup of water? Factor each into a product of quantities you can estimate, round each factor to the nearest decade, and multiply. Expect the answer to be within a factor of 3 of reality.
- CODATA (the Committee on Data for Science and Technology) publishes, roughly every four years, the agreed best numerical values for every fundamental constant — c, h, e, k_B, N_A, G, m_e, m_p. Cite CODATA, not a random textbook; the 2022 adjustment is the current one.
- Planck units (ℏ, c, G, k_B set to 1) define natural scales: Planck length ~1.6 × 10⁻³⁵ m, Planck time ~5.4 × 10⁻⁴⁴ s, Planck mass ~2.2 × 10⁻⁸ kg. They mark the scale at which physicists suspect quantum gravity takes over.
- Geometrized units (c = G = 1, in GR) and Gaussian units (different factor of 4π in EM) and atomic units (ℏ = e = m_e = 1, in QM) each erase constants that obscure structure in a specific domain. Switch between them by dimensional analysis on the constants.
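The Planck scales above fall out of ℏ, c, and G by dimensional analysis alone — a quick numerical check (constant values rounded to CODATA-style precision):

```python
# Planck scales from ℏ, c, G. The combinations below are the unique
# products of these constants with dimensions of length, time, and mass.
hbar = 1.054_571_817e-34   # J·s (reduced Planck constant)
c    = 2.997_924_58e8      # m/s (exact by SI definition)
G    = 6.674_30e-11        # N·m²/kg²

l_P = (hbar * G / c**3) ** 0.5   # Planck length
t_P = l_P / c                    # Planck time
m_P = (hbar * c / G) ** 0.5      # Planck mass

print(f"{l_P:.2e} m, {t_P:.2e} s, {m_P:.2e} kg")
# ≈ 1.6e-35 m, 5.4e-44 s, 2.2e-8 kg — matching the values above
```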
The model you want: every quantity is a number and a unit; equations must balance on both; constants have fixed values you can look up. The math is easier once the book-keeping is automatic.
TIP
Carry units symbolically through every derivation and cancel them to check. A force that ends in kg·m²/s is wrong — the right dimensions of force are kg·m/s² (newton). This single habit kills 80% of algebra bugs before they make it to a calculator.
Go deeper: BIPM's "SI Brochure" (9th ed, 2019) — the official source of the seven base units; CODATA 2022 recommended values; Mahajan, Street-Fighting Mathematics chapters 1–2; Taylor, An Introduction to Error Analysis (2nd ed) for propagation of uncertainty.
Station 2 — Classical mechanics: from Newton to Lagrangian
Classical mechanics is the physics of how objects move under forces — projectiles, planets, pendulums, cars, bridges, machines. It is the oldest and most intuitive branch of modern physics, and for most everyday scales (not too fast, not too small, not too strong-gravity) it is essentially correct. Isaac Newton formulated it in 1687 in the Principia, with three famous laws: (1) an object at rest stays at rest and an object in motion stays in motion unless acted upon by a force; (2) F = ma — force equals mass times acceleration; (3) every action has an equal and opposite reaction.
Newton's laws are perfectly adequate for solving any classical mechanics problem in principle, but for complex systems (many bodies, constrained motion, rotating reference frames) they become algebraically miserable. Over the following two centuries, Joseph-Louis Lagrange and William Rowan Hamilton reformulated the same physics in more powerful frameworks. Lagrangian mechanics replaces forces with a single scalar function L = T − V (kinetic minus potential energy) and derives motion from the principle of least action — the actual path taken by a system is the one that makes a certain integral (the action) stationary. Hamiltonian mechanics reformulates again in terms of position + momentum pairs and a total-energy function; it is the bridge to quantum mechanics (Station 7) and statistical mechanics (Station 4).
The frameworks describe the same physics; the reason to know all three is that certain problems are much cleaner in one formulation than in another, and the Lagrangian and Hamiltonian forms extend naturally into relativity and quantum theory while Newton's original does not.
Newton's three laws describe a particle's motion in a given reference frame. F = ma — a force produces proportional acceleration of a mass — is the equation that built the Enlightenment's picture of the universe. For two millennia before Newton, nobody had written down the link between cause (force) and effect (change of velocity) this cleanly.
Newton's laws (Principia, 1687):
1st: a body continues in uniform motion unless a net force acts
2nd: F = d p / d t with p = m v (→ F = m a for fixed mass)
3rd: every action has an equal and opposite reaction
Conservation consequences (for isolated systems):
momentum p = Σ m_i v_i constant
angular mom. L = Σ r_i × p_i constant
mechanical E E = T + U constant if U is conservative
T = ½ m v² (kinetic)
U = U(r) (potential)
Gravity (Newton 1687):
F = G · m1 · m2 / r² G ≈ 6.674 × 10⁻¹¹ N·m²/kg²
for a central force, orbits are conic sections (Kepler)
The Lagrangian reformulation (Lagrange 1788, Hamilton 1834) generalizes Newton beyond Cartesian coordinates. Define L = T − U; the Euler-Lagrange equation d/dt (∂L/∂q̇) − ∂L/∂q = 0 yields the same physics in any generalized coordinate q. This matters because hard problems (double pendulums, constrained systems, rigid bodies in 3D) are trivial to set up in q and hellish in x, y, z.
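As a concrete instance: for the pendulum, L = ½ m ℓ² θ̇² + m g ℓ cos θ, and the Euler-Lagrange equation yields θ̈ = −(g/ℓ) sin θ. A minimal numerical sketch (step size and parameters are arbitrary choices), using a symplectic leapfrog step so that energy stays conserved:

```python
import math

# Pendulum: L = ½ m ℓ² θ̇² + m g ℓ cos θ  →  θ̈ = −(g/ℓ) sin θ
g, ell, m = 9.81, 1.0, 1.0
theta, omega = 1.0, 0.0          # start at 1 rad, at rest
dt = 1e-4

def energy(th, om):
    # E = T + U, which the Lagrangian machinery says is conserved here
    return 0.5 * m * ell**2 * om**2 - m * g * ell * math.cos(th)

E0 = energy(theta, omega)
for _ in range(100_000):         # 10 s of motion, kick-drift-kick leapfrog
    omega += 0.5 * dt * (-(g / ell) * math.sin(theta))
    theta += dt * omega
    omega += 0.5 * dt * (-(g / ell) * math.sin(theta))

# Symplectic integration keeps the energy error bounded and tiny:
assert abs(energy(theta, omega) - E0) < 1e-6 * abs(E0)
```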
- Kepler's three laws (1609–1619) follow from Newton's law of gravity applied to a 1/r² central force: orbits are ellipses with the Sun at one focus; equal areas are swept in equal times (angular momentum conservation); T² = a³ for orbits around the same primary with a in astronomical units and T in years. This third law is the single equation that takes you from "how far is Mars?" to "how long is its year?" See the Cosmos & Astrophysics handbook for the modern form.
- Rotation brings new quantities: moment of inertia I = Σ m_i r_i² (the rotational analogue of mass), angular velocity ω, torque τ = r × F. The rotational version of Newton's 2nd law is τ = I α, and the rotational kinetic energy is ½ I ω². Gyroscopic precession is what you get when torque acts perpendicular to angular momentum.
- Hamiltonian mechanics rewrites Lagrangian theory in phase space (q, p): H = T + U and Hamilton's equations q̇ = ∂H/∂p, ṗ = −∂H/∂q. This formulation becomes the bridge to quantum mechanics — the Hamiltonian operator is the quantum analogue.
- Chaos shows up even in simple classical systems (the double pendulum, three-body gravitational problem). A system is chaotic when nearby initial conditions diverge exponentially (positive Lyapunov exponent) — deterministic but unpredictable in practice.
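Kepler's third law in the bulleted form above is easy to verify against standard orbital elements (semi-major axes and periods below are textbook values):

```python
# Kepler's third law: T² = a³ with T in years and a in AU,
# for anything orbiting the Sun.
planets = {           # name: (a in AU, T in years)
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.862),
}
for name, (a, T) in planets.items():
    print(name, T**2 / a**3)    # ratio ≈ 1.0 for every planet
```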
The model you want: Newton gives you F = m a, Lagrangian gives you d/dt (∂L/∂q̇) = ∂L/∂q, Hamiltonian gives you q̇ = ∂H/∂p and ṗ = −∂H/∂q. Same physics, different coordinates, different tool for different problems.
CAUTION
Newtonian mechanics is an excellent approximation at low v/c and weak gravity. At speeds above ~10% of c or in fields strong enough to bend light visibly, Newton breaks down; for GPS satellites (orbiting fast, in Earth's weak but non-negligible gravity), the net relativistic correction amounts to ~38 µs/day — an uncorrected GPS drifts ~11 km/day. See Station 6.
Go deeper: Goldstein, Classical Mechanics (3rd ed); Landau & Lifshitz, Mechanics (Course of Theoretical Physics Vol. 1 — compact and brutal); Taylor, Classical Mechanics; Feynman's Lectures on Physics Vol. 1 chapters 7–10 on Newton's laws.
Station 3 — Thermodynamics: four laws, one arrow of time
Thermodynamics is the physics of heat, work, temperature, and the way energy spreads. It was developed in the 19th century to understand steam engines, and it remains the governing framework for any process where many particles exchange energy — combustion engines, refrigerators, weather, chemical reactions, metabolism, the entire economics of power generation. Thermodynamics is a macroscopic theory — it does not ask what individual molecules are doing; it asks what measurable bulk quantities (pressure, volume, temperature, heat) do when you push on a system.
Four laws compress everything. The zeroth law defines temperature: if A and B are each in equilibrium with C, they are in equilibrium with each other. The first law is conservation of energy for heat and work: ΔU = Q − W — the internal energy of a system changes by heat added minus work done by the system. The second law introduces entropy, a measure of how spread-out or disordered a system is: in an isolated system, entropy never decreases. This is the only law of physics that distinguishes past from future — the "arrow of time" — and it is what forbids perpetual-motion machines, limits the efficiency of engines (Carnot's theorem), and explains why heat flows from hot to cold. The third law says that as temperature approaches absolute zero, entropy approaches a constant minimum (often zero).
The second law is the deepest. It is a statistical statement — there is nothing in the microscopic laws that prevents a cup of coffee from spontaneously heating up by cooling the air around it, it is just overwhelmingly unlikely. That is what Station 4 explains.
Thermodynamics is what the world looks like when you count atoms in the billions rather than one at a time. Four laws — Zeroth through Third — describe every bulk process, from a steam turbine to a life-form. The Second Law alone introduces something Newton's laws never carry: a preferred direction of time.
The Four Laws (cleanly stated):
0th. If A is in thermal equilibrium with C, and B is with C,
then A is with B. (Temperature is a well-defined quantity.)
1st. ΔU = Q − W (conservation of energy)
internal-energy change = heat added − work done by system
2nd. dS_universe ≥ 0 (entropy never decreases)
Equivalently: heat flows spontaneously from hot to cold;
no engine converts heat to work with 100% efficiency.
3rd. S → S₀ (a constant; zero for a perfect crystal) as T → 0 K
Carnot efficiency (ideal heat engine between hot T_H and cold T_C):
η = 1 − T_C / T_H
This is the upper bound; any real engine does worse.
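The Carnot bound is a one-liner to compute; a sketch, with illustrative temperatures:

```python
# Carnot bound η = 1 − T_C/T_H for an ideal engine, and the matching
# ideal coefficient of performance (COP) for a heat pump.
def carnot_efficiency(t_hot, t_cold):
    return 1.0 - t_cold / t_hot          # temperatures in kelvin

def heat_pump_cop_limit(t_hot, t_cold):
    return t_hot / (t_hot - t_cold)      # heat delivered per unit work

print(carnot_efficiency(1500, 300))      # 0.8 — gas-turbine-like temperatures
print(heat_pump_cop_limit(293, 273))     # 14.65 — mild winter day
```

Note the heat-pump limit blows up as T_hot → T_cold: the smaller the temperature lift, the more heat a joule of work can move — which is why real pumps exceed COP 4 without violating anything.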
- Entropy (Clausius 1865) is the state function whose change is dS = dQ_rev / T. Boltzmann (1877) gave it a microscopic meaning: S = k_B · ln Ω, where Ω is the number of microstates consistent with the macrostate. The famous inscription on Boltzmann's tombstone is exactly this equation — entropy counts possibilities; "disorder" is our ignorance of which microstate the system actually occupies.
- Enthalpy H = U + pV, Helmholtz free energy F = U − TS, Gibbs free energy G = H − TS. Each is useful in a different ensemble: G is what decides chemical-reaction spontaneity at constant p, T; F at constant V, T; minimizing the appropriate free energy is the shortest path to "which direction does this process run?"
- Heat engines, refrigerators, heat pumps all obey the Carnot bound. A modern combined-cycle gas turbine runs at ~60% thermal efficiency (against a Carnot limit of 80% at 1500 K / 300 K). A residential heat pump's coefficient of performance (heat delivered per unit electric work) can exceed 4 — not because it beats thermodynamics, but because it is moving heat, not creating it.
- The Second Law in practice: information processing has an entropy cost. Landauer's principle (1961) says erasing one bit of information dissipates at least k_B · T · ln 2 ≈ 3 × 10⁻²¹ J at room temperature. Modern CPUs dissipate ~10⁹ × that per bit — but the floor is real and has been measured. See also the Foundations handbook on information and entropy.
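The Landauer floor is one multiplication away from the constants (room temperature chosen as an illustration):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B T ln 2.
kB = 1.380_649e-23      # J/K (exact since 2019)
T  = 300.0              # K, room temperature

landauer_limit = kB * T * math.log(2)
print(f"{landauer_limit:.2e} J per bit")   # ≈ 2.87e-21 J
```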
The model you want: the Second Law is not pessimism about engines; it is a statement that microstate counts grow under isolated evolution, and temperature is the derivative of energy with respect to entropy. Thermodynamics is what happens when you stop caring which atom is which.
WARNING
"Perpetual motion" inventions come in two flavours: first-kind (violates 1st law, creates energy) and second-kind (violates 2nd law, converts heat to work with 100% efficiency). Patent offices refuse the first by reflex and the second by law. If a design claims either, the design is wrong; you will eventually find the bookkeeping error.
Go deeper: Fermi, Thermodynamics (1937, short, still excellent); Reif, Fundamentals of Statistical and Thermal Physics; Callen, Thermodynamics and an Introduction to Thermostatistics (2nd ed); Schroeder, An Introduction to Thermal Physics.
Station 4 — Statistical mechanics and the partition function
Thermodynamics (Station 3) describes bulk behaviour without reference to what individual particles are doing. Statistical mechanics is the bridge: start from the mechanics of individual particles (Newtonian or quantum), apply probability theory across the enormous number of microscopic states consistent with a given macroscopic measurement, and derive the thermodynamic laws as the overwhelmingly likely behaviour of the ensemble. Ludwig Boltzmann (1870s–80s) and Josiah Willard Gibbs (1900s) built this bridge, and the result is one of the most elegant constructions in physics.
The central object is the partition function Z = Σ exp(−E_i / k_B T), a sum over all microscopic states weighted by their energy — a single number from which you can derive every macroscopic thermodynamic quantity (free energy, internal energy, entropy, heat capacity, pressure, chemical potential). Boltzmann's formula S = k_B ln W — entropy is k_B times the logarithm of the number of microscopic states corresponding to a given macroscopic one — is inscribed on his tombstone and explains why entropy always tends to increase: there are simply vastly more "disordered" microscopic configurations than "ordered" ones, so a random system drifts toward disorder.
Statistical mechanics is also what lets us see temperature as a population statistic — at temperature T, particles have average kinetic energy ~ (3/2)·k_B·T, and their energies follow a Boltzmann distribution. This is the foundation for chemistry (reaction rates via Arrhenius), materials science (phase transitions), astrophysics (stellar populations), and — a century later — machine learning (the partition function reappears in Boltzmann machines and energy-based models).
Statistical mechanics bridges microscopic dynamics to thermodynamic macrostates. Given a system's Hamiltonian H and a heat bath at temperature T, the probability of finding the system in a microstate with energy E_i is the Boltzmann distribution:
P(state i) = e^(−E_i / k_B T) / Z
Z = Σ_i e^(−E_i / k_B T) ← the partition function
Everything thermodynamic follows from Z:
U = − ∂ ln Z / ∂β β = 1 / k_B T
F = − k_B T · ln Z
S = − ∂F / ∂T
p = − ∂F / ∂V
Z is the master of thermodynamic bookkeeping. Compute Z for a model, differentiate appropriately, and the energy, entropy, pressure, and all response functions fall out. The art is computing Z; for non-interacting systems it factorizes nicely, for interacting systems it becomes a research program.
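As a worked example, the two-level system (energies 0 and ε) has Z = 1 + e^(−βε), and differentiating ln Z recovers the internal energy — here checked with a small finite difference (the parameter values are arbitrary):

```python
import math

# Canonical ensemble, two-level system: Z = 1 + e^(−βε).
# U = −∂ lnZ/∂β, computed numerically and checked against the closed form.
kB, eps, T = 1.380_649e-23, 1e-21, 300.0

def lnZ(beta):
    return math.log(1.0 + math.exp(-beta * eps))

beta = 1.0 / (kB * T)
db = beta * 1e-6                                     # finite-difference step
U = -(lnZ(beta + db) - lnZ(beta - db)) / (2 * db)    # internal energy

# Closed form: U = ε / (e^(βε) + 1) — mean energy of the upper level.
assert abs(U - eps / (math.exp(beta * eps) + 1)) < 1e-6 * eps
```

Entropy, heat capacity, and pressure come from further derivatives of the same ln Z, exactly as the table above lists.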
- Ideal gas with N particles in volume V gives Z = V^N / N! · (2πm k_B T / h²)^(3N/2), leading to PV = N k_B T and U = 3/2 N k_B T — the equipartition theorem (½ k_B T per quadratic degree of freedom).
- Quantum statistics changes the counting. Bosons (integer spin: photons, the Higgs, phonons) obey Bose-Einstein statistics, with mean occupation ⟨n⟩ = 1/(e^(E/k_B T) − 1); bosons can pile into a single state (lasers, Bose-Einstein condensates). Fermions (half-integer spin: electrons, quarks, neutrinos) obey Fermi-Dirac, ⟨n⟩ = 1/(e^(E/k_B T) + 1); they fill states from the bottom up (Pauli exclusion), which is why atoms have shells and why metals conduct.
- Phase transitions happen when Z develops non-analyticities as parameters cross critical values. The Ising model of magnetism (spins on a lattice, nearest-neighbour interaction) is the canonical study: analytic solution in 1D (Ising 1925), no phase transition at T > 0; analytic solution in 2D (Onsager 1944), a continuous transition at T_c = 2 J / (k_B ln(1+√2)).
- Fluctuation-dissipation theorem (Kubo 1966): equilibrium fluctuations of a quantity determine how the system responds to perturbations of its conjugate. Brownian motion, Johnson-Nyquist noise in resistors, and the variance of number of photons in a laser mode are all instances.
The model you want: pick an ensemble (canonical, grand canonical, microcanonical), write Z, take derivatives. Statistical mechanics is a machine; inputs are Hamiltonians and constraints, outputs are thermodynamic quantities.
TIP
For simulations, Monte Carlo sampling (Metropolis-Hastings, 1953) produces states weighted by e^(−E/kT) without computing Z explicitly — which is why MCMC dominates modern stat-mech and why Monte Carlo integration dominates Bayesian inference. Same math, different labels.
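A minimal Metropolis sketch for the simplest possible case, a two-level system (parameters are illustrative; a real application would target an Ising chain or a Bayesian posterior):

```python
import math, random

# Metropolis sampling of a two-level system with gap ΔE = k_B T,
# i.e. βΔE = 1, never computing Z explicitly.
random.seed(0)
beta_dE = 1.0
state, n_excited, N = 0, 0, 200_000
for _ in range(N):
    proposal = 1 - state                      # propose flipping the level
    dE = beta_dE if proposal == 1 else -beta_dE
    if dE <= 0 or random.random() < math.exp(-dE):
        state = proposal                      # Metropolis acceptance rule
    n_excited += state

# Stationary distribution is Boltzmann: P(excited) = 1/(1+e) ≈ 0.269.
print(n_excited / N)
```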
Go deeper: Reif, Fundamentals of Statistical and Thermal Physics; Pathria & Beale, Statistical Mechanics (3rd ed); Kardar, Statistical Physics of Particles and Statistical Physics of Fields; Feynman, Statistical Mechanics: A Set of Lectures.
Station 5 — Electromagnetism: Maxwell's equations in full
Every atom is held together by electricity, every chemical bond is electromagnetic, every photon in a laser or a radio wave or a beam of sunlight is an oscillation of the electromagnetic field, every electronic device you have ever used works because moving charges create magnetic fields and changing magnetic fields create electric fields. Electromagnetism is the single most used branch of classical physics, and its unification into four equations is one of the greatest intellectual achievements of the 19th century.
Until James Clerk Maxwell's 1865 paper, electricity and magnetism had been studied as two related but distinct phenomena — Coulomb's law for static charges, Ampère's law for steady currents, Faraday's law for induced electric fields from changing magnetic fields. Maxwell noticed a missing term (now called displacement current), added it, and the resulting set of four equations not only unified the two fields but predicted that oscillating electric and magnetic fields could propagate as waves through empty space at a speed numerically equal to the measured speed of light. Light, it turned out, is an electromagnetic wave — the radio spectrum, infrared, visible light, ultraviolet, X-rays, and gamma rays are all the same phenomenon at different frequencies. This is covered in depth in the Waves & Frequencies handbook.
Maxwell's equations in vacuum are the full theory: Gauss's law for electricity (charges make electric fields), Gauss's law for magnetism (no magnetic monopoles), Faraday's law (changing magnetic fields make circulating electric fields), and the Ampère–Maxwell law (currents and changing electric fields make circulating magnetic fields). In matter, the equations are slightly modified to account for polarisation and magnetisation. Together with the Lorentz force law F = q(E + v × B), they describe every classical electromagnetic phenomenon.
Two fields — electric E and magnetic B — explain every non-gravitational, non-nuclear force you will ever encounter: why a magnet sticks, why a wire pushes back on a compass, why light exists and travels at c. Maxwell (1861–1865) unified them in four equations; Heaviside and Gibbs pruned the twenty-odd components Maxwell originally wrote down into the modern vector-calculus form.
Maxwell's equations (differential form, SI units):
∇ · E = ρ / ε₀ (Gauss, electric)
∇ · B = 0 (Gauss, magnetic — no monopoles)
∇ × E = − ∂ B / ∂ t (Faraday)
∇ × B = μ₀ J + μ₀ ε₀ ∂ E / ∂ t (Ampère-Maxwell)
Lorentz force on a charge q moving with velocity v:
F = q (E + v × B)
Speed of light from constants:
c = 1 / √(μ₀ ε₀) = 299 792 458 m/s (exact, by SI definition)
In vacuum, J = ρ = 0 and the equations become the wave equation:
( ∇² − 1/c² · ∂² / ∂ t² ) E = 0
→ electromagnetic waves travel at c, transverse, coupled E and B.
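The numerical coincidence Maxwell noticed is easy to reproduce (constant values rounded; since the 2019 redefinition μ₀ is a measured quantity, no longer exactly 4π × 10⁻⁷):

```python
# Speed of light from the vacuum constants in the Ampère–Maxwell term.
mu0  = 1.256_637_062e-6    # N/A², vacuum permeability
eps0 = 8.854_187_812_8e-12 # F/m, vacuum permittivity

c = (mu0 * eps0) ** -0.5   # c = 1 / √(μ₀ ε₀)
print(f"{c:.0f} m/s")      # ≈ 299 792 458
```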
That Maxwell's equations predict waves travelling at exactly the measured speed of light is one of the greatest unifications in the history of physics — before Maxwell, "electricity," "magnetism," and "optics" were three distinct sciences.
- Gauss's law has a practical consequence: the electric field inside a closed conductor is zero (the charges rearrange to cancel any internal field). This is why a car in a lightning strike is safe (Faraday cage), and why sensitive electronics live in shielded enclosures.
- Faraday's law — a changing B induces an E (a circulating one, in fact). This is the generator and the transformer: move magnets past coils and current flows. Every power station on Earth that isn't a battery is an exploitation of this law.
- Ampère-Maxwell: currents and changing E fields produce B. Maxwell's addition of the ∂E/∂t "displacement current" term made the equations self-consistent and predicted the existence of radio waves, which Hertz confirmed in 1887. Every radio, Wi-Fi router, and phone on the planet rides this.
- Fields carry energy and momentum. The Poynting vector S = (1/μ₀) E × B gives the energy flux density of an EM field; integrate over a surface and you get the power flowing through. Laser cooling, solar sails, and radiation pressure are all applications. The Waves & Frequencies handbook covers the EM spectrum and what lives where.
The model you want: two fields, four equations, one Lorentz force. Every piece of classical electromagnetism is somewhere in this set; every engineering discipline that uses electricity is applying a special case. Solve Maxwell in a given geometry and the physics is done.
TIP
Noether's theorem (Station 8) says conservation of electric charge corresponds to gauge invariance — the freedom to shift the potentials (for instance, adding a constant to the electric potential) without changing the fields. This is why only voltage differences matter, not absolute voltages; and why "ground" is a convention, not a thing.
Go deeper: Griffiths, Introduction to Electrodynamics (4th ed) — the canonical undergraduate text; Jackson, Classical Electrodynamics (3rd ed) for graduate level; Feynman Vol. 2 (especially chapters on Maxwell's equations and the vector potential); Maxwell, A Treatise on Electricity and Magnetism (1873) for historical perspective.
Station 6 — Relativity: special and general
Classical mechanics (Station 2) predicts the motion of a train or a planet beautifully, but as you approach the speed of light it stops making sense — and Maxwell's equations, which predict the speed of electromagnetic waves from first principles, do not depend on the observer's motion. Something had to give. Albert Einstein's 1905 special relativity resolved the crisis with two postulates: (1) the laws of physics are the same in every inertial frame, and (2) the speed of light is the same for every observer regardless of their motion. Accepting both required giving up something else — and what Einstein gave up was the absoluteness of time and space.
The consequences are famous. Time runs more slowly for a moving observer (time dilation); a moving rod appears shorter in the direction of motion (length contraction); simultaneous events in one frame are not simultaneous in another (relativity of simultaneity); and mass and energy are interchangeable (E = mc²). These effects are vanishingly small at everyday speeds but become dominant near c — GPS satellites have to correct for them to work, particle colliders measure them directly, and nuclear reactions turn small amounts of mass into vast amounts of energy.
Ten years later, Einstein generalised further. General relativity (1915) extends the equivalence of inertial frames to include gravity itself: a person in a falling elevator cannot tell whether they are in free fall or in empty space. Gravity is not a force but a curvature of spacetime caused by mass and energy, and objects follow the straightest possible paths (geodesics) through that curved geometry. The field equations G_μν + Λg_μν = 8πG/c⁴ · T_μν relate the curvature of spacetime to the distribution of energy and momentum. GR predicts black holes, gravitational lensing, gravitational waves (detected in 2015 by LIGO), and the expansion of the universe (cosmology is one of its applications — see the Cosmos & Astrophysics handbook).
Maxwell's equations demanded a new mechanics. They are invariant under Lorentz transformations, not under the Galilean transformations Newton assumed. Einstein, in 1905, chose to trust Maxwell and rebuilt mechanics around the two postulates that followed: (1) physical laws are the same in every inertial frame; (2) the speed of light is the same in every inertial frame.
Special relativity (Einstein 1905): geometry of spacetime is Minkowski.
Lorentz boost along x, between frames S and S' with relative velocity v:
t' = γ · ( t − v · x / c² ) γ = 1 / √(1 − v²/c²)
x' = γ · ( x − v t )
y' = y z' = z
Consequences (non-exhaustive):
time dilation moving clocks tick slow by factor γ
length contraction moving rods shorten along motion by factor 1/γ
simultaneity "at the same time" is frame-dependent
invariant mass m is a scalar; "relativistic mass" is a pedagogical relic
E² = (pc)² + (mc²)² (energy-momentum 4-vector norm)
E = m c² (rest energy, for p = 0)
General relativity (Einstein 1915): gravity IS the curvature of spacetime.
G_μν + Λ g_μν = 8 π G / c⁴ · T_μν
G_μν — Einstein tensor (curvature of spacetime)
g_μν — metric tensor (geometry)
T_μν — stress-energy tensor (matter/energy/momentum)
Λ — cosmological constant (dark energy)
G — Newton's constant
Ten coupled nonlinear PDEs in a 4-dimensional manifold.
Matter tells spacetime how to curve; spacetime tells matter how to move.
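The boost formulas above can be checked numerically: the Minkowski interval s² = −c²t² + x² comes out the same in both frames (the event coordinates below are arbitrary):

```python
# Lorentz boost along x at v = 0.8c; verify interval invariance,
# the defining property of Minkowski geometry.
c = 299_792_458.0
v = 0.8 * c
gamma = (1 - v**2 / c**2) ** -0.5      # = 5/3 at v = 0.8c

t, x = 2.0, 1.0e9                      # an event: 2 s, 10⁹ m
t_p = gamma * (t - v * x / c**2)       # boosted time
x_p = gamma * (x - v * t)              # boosted position

s2  = -(c * t)**2 + x**2               # interval in frame S
s2p = -(c * t_p)**2 + x_p**2           # interval in frame S'
assert abs(s2 - s2p) < 1e-6 * abs(s2)  # same in both frames
print(gamma)
```

Time dilation and length contraction are this same γ read off in special cases: a clock at rest in S' (x' fixed) ticks slow by γ in S; a rod at rest in S' measures short by 1/γ in S.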
- Twin paradox: one twin stays home, the other flies near c and returns. The traveller has aged less (time dilation). The paradox — "why isn't it symmetric?" — resolves because the traveller accelerated (changed frames); the stay-at-home did not. Asymmetric acceleration, asymmetric aging.
- E = m c² is not a special relation; it's the p = 0 case of the full energy-momentum relation E² = (pc)² + (mc²)². For photons (m = 0), E = pc; for non-relativistic particles, E ≈ mc² + p²/2m (rest energy plus kinetic).
- Schwarzschild metric (1916) gives spacetime around a spherical non-rotating mass: ds² = −(1 − 2GM/rc²) c² dt² + (1 − 2GM/rc²)⁻¹ dr² + r²(dθ² + sin²θ dφ²). At r = 2GM/c² (the Schwarzschild radius) the metric has a coordinate singularity — this is the event horizon of a non-rotating black hole. For the Sun, that radius is ~3 km; for Earth, ~9 mm.
- Tests and consequences: gravitational time dilation (GPS, Pound-Rebka 1959), bending of starlight (Eddington 1919), perihelion precession of Mercury (~43 arcsec/century beyond Newton), gravitational waves (LIGO, 2015). Every test to date has agreed with GR to the precision achievable.
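The horizon radii quoted for the Sun and Earth are a one-line computation from r_s = 2GM/c²; a quick check:

```python
G = 6.674e-11        # Newton's constant, m^3 kg^-1 s^-2
C = 299_792_458.0    # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Event-horizon radius r_s = 2GM/c^2 of a non-rotating mass."""
    return 2.0 * G * mass_kg / C**2

print(schwarzschild_radius(1.989e30))   # Sun:   ~2.95e3 m  (~3 km)
print(schwarzschild_radius(5.972e24))   # Earth: ~8.9e-3 m  (~9 mm)
```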
The model you want: special relativity replaces Galilean transformations with Lorentz; general relativity replaces Newton's universal gravitation with a tensor equation where mass-energy curves spacetime and free-falling objects follow geodesics. For weak gravity and low speed, both reduce to what Newton wrote.
CAUTION
"Nothing can travel faster than light" is a statement about massive objects through local spacetime. Spacetime itself can expand at any rate (inflation, dark energy). Quantum entanglement correlations look faster-than-light until you try to send a signal through them — and fail, because no usable information transfers. The rule is exact but narrower than "nothing faster than c."
Go deeper: Einstein, "Zur Elektrodynamik bewegter Körper" (1905, short and revolutionary); Hartle, Gravity: An Introduction to Einstein's General Relativity; Schutz, A First Course in General Relativity (2nd ed); Misner, Thorne & Wheeler, Gravitation (the massive graduate text, aka "MTW" or "the phone book"); Carroll, Spacetime and Geometry.
Station 7 — Quantum mechanics: Schrödinger, observables, superposition
Classical physics treats particles as tiny billiard balls with definite positions, definite momenta, and definite paths. That picture is spectacularly wrong at atomic and subatomic scales. Electrons do not orbit nuclei like planets; they exist in fuzzy "clouds" described by a wave function. Light is neither strictly a wave nor a stream of particles — it is both, in a specific quantum sense. These strange facts are not philosophical oddities; they are what makes chemistry possible, what makes semiconductors behave, and what makes every modern quantum technology work.
Quantum mechanics replaces the classical concept of a definite state with a wave function ψ, whose value at each point in space gives the probability amplitude for the particle being found there. The wave function evolves according to the Schrödinger equation iℏ ∂ψ/∂t = Ĥψ, where Ĥ is the Hamiltonian operator (total energy). Physical quantities that we can measure (position, momentum, energy, angular momentum, spin) are represented by operators, and the possible outcomes of a measurement are the eigenvalues of the corresponding operator. A particle can exist in a superposition of states, and the act of measurement forces it into one of the eigenstates with a probability given by the Born rule. Heisenberg's uncertainty principle Δx · Δp ≥ ℏ/2 is a direct mathematical consequence — you cannot simultaneously know position and momentum to arbitrary precision.
Quantum mechanics is counter-intuitive but mathematically precise and experimentally verified to roughly one part in 10¹². Its applications are everywhere: semiconductors and the entire computer industry, lasers, MRI, nuclear power, solar cells, chemical bonding, fluorescence. Quantum field theory — the extension that combines quantum mechanics with special relativity — produces the Standard Model of particle physics, arguably the most precisely tested theory in the history of science.
Below roughly the atomic scale, Newtonian mechanics fails. A particle's state is not a point in phase space; it is a wavefunction ψ(x, t) whose evolution is governed by the Schrödinger equation. Physical quantities (position, momentum, energy) are observables represented by linear operators on the space of wavefunctions; measurement yields an eigenvalue of the operator and collapses the state to the corresponding eigenstate.
Time-dependent Schrödinger equation (1926):
i ℏ · ∂ ψ / ∂ t = Ĥ ψ
Ĥ = − ℏ² / (2m) · ∇² + V(r, t)
ℏ = h / (2π) ≈ 1.0546 × 10⁻³⁴ J·s
Time-independent form:
Ĥ ψ = E ψ (energy eigenstates)
Heisenberg uncertainty:
Δx · Δp ≥ ℏ / 2
ΔE · Δt ≥ ℏ / 2
Born rule:
probability of finding particle in [x, x+dx] = |ψ(x)|² dx
Observables and commutators:
[x̂, p̂] = i ℏ (canonical commutation relation)
[L̂_x, L̂_y] = i ℏ L̂_z (angular momentum)
[σ̂_i, σ̂_j] = 2 i ε_ijk σ̂_k (Pauli spin-½)
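The spin-½ commutation relation above is easy to verify directly with the Pauli matrices; a quick numerical check:

```python
import numpy as np

# Pauli matrices; the spin-1/2 operators are S_i = (hbar/2) * sigma_i
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    """Commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# [sigma_i, sigma_j] = 2i eps_ijk sigma_k, checked for the cyclic triples
print(np.allclose(comm(sx, sy), 2j * sz))   # True
print(np.allclose(comm(sy, sz), 2j * sx))   # True
print(np.allclose(comm(sz, sx), 2j * sy))   # True
```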
The Schrödinger equation is non-relativistic; Dirac extended it in 1928 to a relativistic wave equation for spin-½ particles, predicting antimatter as a consequence of negative-energy solutions. Quantum Field Theory (QFT) replaces particles with quantum fields; the Standard Model of particle physics is a specific QFT.
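The time-independent equation Ĥψ = Eψ from this station can also be solved numerically. A minimal finite-difference sketch for the harmonic oscillator V = ½x² in natural units (ℏ = m = ω = 1; grid size and extent are arbitrary choices), whose exact spectrum is E_n = n + ½:

```python
import numpy as np

# Discretise H = -(1/2) d^2/dx^2 + (1/2) x^2 on a grid (natural units)
N, L = 1500, 10.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# Central-difference Laplacian gives a tridiagonal Hamiltonian matrix
main = 1.0 / dx**2 + 0.5 * x**2            # diagonal: kinetic + potential
off = -0.5 / dx**2 * np.ones(N - 1)        # off-diagonal: kinetic coupling
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:4]              # lowest four energy eigenvalues
print(np.round(E, 2))                      # ≈ [0.5 1.5 2.5 3.5]
```

The eigenvalues converge to n + ½ as the grid is refined, which is the discrete version of the statement that measurement outcomes are eigenvalues of Ĥ.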
- Spin is intrinsic angular momentum with no classical analogue. Electrons, protons, neutrons, quarks, neutrinos are spin-½; photons spin-1; gravitons would be spin-2; the Higgs is spin-0. Fermions (half-integer spin) obey Pauli exclusion; bosons (integer spin) do not. This one fact drives the structure of the periodic table, the behaviour of lasers and superconductors, and the reason your kitchen table is solid.
- Entanglement is the feature Einstein called "spooky action at a distance." Two particles prepared in an entangled state have correlated outcomes that no local hidden-variable theory can reproduce; Bell's theorem (1964) makes this a testable inequality, and every Bell experiment since Aspect (1982, Nobel 2022) has confirmed quantum mechanics over local realism.
- Decoherence explains why we don't see macroscopic superpositions. A system coupled to an environment rapidly loses phase information into the environment's degrees of freedom; for macroscopic objects the timescale is absurdly short, so the system's reduced density matrix looks classical long before any measurement could catch the superposition. This is why Schrödinger's cat doesn't survive contact with a single air molecule — quantum superposition doesn't vanish, it diffuses into the environment.
- Computation exploits superposition and entanglement. A qubit is a two-state quantum system α|0⟩ + β|1⟩ with |α|² + |β|² = 1; n qubits span a 2^n-dimensional Hilbert space. Shor's algorithm (1994) factors integers in polynomial time on a quantum computer, breaking RSA and ECC — which is why the Security & Cryptography handbook talks about post-quantum KEMs.
The model you want: ψ evolves unitarily under Schrödinger; measurement projects ψ onto an eigenstate of the measured observable with Born-rule probability; entanglement is the generic state of interacting systems. Classical physics is what this machinery reduces to when the phase information is averaged away.
WARNING
"Quantum" in popular discourse often means "mysterious" — useful for selling books, corrosive for understanding. In physics it means "the math of state vectors in complex Hilbert spaces and Hermitian operators with real eigenvalues." The predictions are shockingly precise (the electron's magnetic moment matches experiment to 12 decimal places); the interpretation debates are about what the math means, not whether it works.
Go deeper: Griffiths, Introduction to Quantum Mechanics (3rd ed) — the standard undergraduate text; Shankar, Principles of Quantum Mechanics; Sakurai & Napolitano, Modern Quantum Mechanics; Feynman Vol. 3; Nielsen & Chuang, Quantum Computation and Quantum Information for the computing angle.
Station 8 — Symmetry and Noether's theorem
Running through every law of physics is a meta-principle that deserves its own station: symmetry. A symmetry of a physical system is a transformation — a rotation, a translation in space, a shift in time, a reflection, a more abstract internal rotation of field components — that leaves the laws of physics unchanged. Different symmetries correspond to different conserved quantities, and the connection between the two was made precise by Emmy Noether in 1918.
Noether's theorem says that for every continuous symmetry of the action (the integral of the Lagrangian over time), there is a corresponding conserved quantity. Time-translation symmetry — the laws of physics were the same yesterday as they are today — implies conservation of energy. Spatial-translation symmetry — the laws are the same here as over there — implies conservation of momentum. Rotational symmetry — the laws do not depend on orientation — implies conservation of angular momentum. Gauge symmetries in field theories (internal rotations of the electromagnetic or Yang-Mills fields) imply conservation of electric charge and other quantum numbers.
This is one of the deepest results in physics. The conservation laws that beginners memorise as empirical rules turn out to be consequences of symmetries of the fundamental theory. Modern physics explicitly builds new theories by postulating symmetry groups first and deriving dynamics from them — the Standard Model is constructed by insisting on SU(3) × SU(2) × U(1) gauge symmetry, which forces exactly the particle content and interactions we observe. Symmetry is the closest physics gets to an organising principle that sits above every other theory.
The deepest organizing principle in physics is not any specific law but a relationship: every continuous symmetry of an action corresponds to a conserved quantity. This is Noether's theorem (Emmy Noether, 1918) and it is the closest thing physics has to a grand unified principle — it applies in classical mechanics, field theory, general relativity, and quantum mechanics, every time in the same form.
Symmetry of the action           Conserved quantity
────────────────────────         ─────────────────────────
time translation                 energy E
space translation (x, y, z)      linear momentum p
rotation (θ, φ)                  angular momentum L
gauge (phase of ψ)               electric charge Q
SU(2) weak isospin               weak isospin charge
SU(3) colour                     colour charge (QCD)
Lorentz boosts                   centre-of-mass motion
Energy is conserved because the laws of physics are the same today as yesterday. Momentum is conserved because the laws are the same here as there. Angular momentum is conserved because the laws are the same under rotation. Charge is conserved because the laws are invariant under a U(1) phase rotation of the charged fields — gauge symmetry. These aren't just heuristics — they're theorems.
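The rotation case is directly checkable in simulation: a symplectic integrator for a central force inherits the rotational symmetry and conserves L = r × p to machine precision. A minimal sketch (unit mass, units with GM = 1 assumed):

```python
import numpy as np

def accel(r):
    """Central acceleration for V(r) = -1/r: always parallel to r."""
    return -r / np.linalg.norm(r) ** 3

r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.1, 0.0])          # slightly non-circular orbit
dt = 1e-3
L0 = np.cross(r, v)                    # angular momentum at t = 0

# Velocity-Verlet steps; because accel(r) is parallel to r, the discrete
# map preserves r x v exactly (up to floating-point roundoff)
for _ in range(20_000):
    v += 0.5 * dt * accel(r)
    r += dt * v
    v += 0.5 * dt * accel(r)

drift = np.linalg.norm(np.cross(r, v) - L0)
print(drift < 1e-10)                   # True: L is conserved over ~2 orbits
```

Replacing the potential with anything that depends on direction (breaking the rotational symmetry) would make `drift` grow, which is Noether's theorem run in reverse.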
- Internal symmetries (gauge) give the Standard Model its structure. U(1) gauge invariance gives QED (photons, electric charge). SU(2) × U(1) gives the electroweak sector (W, Z bosons, Higgs mechanism breaking this symmetry below ~100 GeV). SU(3) gives QCD (gluons, strong interaction, quark confinement).
- Broken symmetries matter as much as unbroken ones. Spontaneous symmetry breaking gives the Higgs mechanism (the vacuum has less symmetry than the Lagrangian; W and Z bosons eat Goldstone modes and acquire mass). The crystal structure of a solid is a broken translational symmetry (only discrete translations remain). Superconductivity is a broken U(1) symmetry of the electron phase.
- CPT theorem: the combined operation of Charge conjugation, Parity inversion, and Time reversal is an exact symmetry of any local, Lorentz-invariant QFT. The individual symmetries and their pairwise combinations can be violated — CP violation in kaon decay (1964, Nobel 1980) is precisely the kind of effect needed to explain the matter-antimatter asymmetry of the universe, though all known CP violation falls short by a factor of ~10⁹.
- Dimensional analysis is secretly about scale symmetry. Natural units set ℏ = c = 1 and measure everything in energy; the only free parameter left for a phenomenon is its characteristic energy. This is why dimensional analysis works so well — physical laws respect the underlying symmetry, and demanding dimensional consistency exploits that.
The model you want: conservation laws are not coincidences; they are shadows of symmetries. When a new theory preserves a symmetry, it automatically conserves the corresponding quantity. When a symmetry is broken, something is released (Goldstone bosons, phase transitions, new physics).
TIP
When you suspect a derivation is wrong, ask "which symmetry am I secretly breaking?" If Newton's laws seem to predict energy loss in free fall, you probably dropped the work-energy book-keeping; the time-translation symmetry of the problem demands conservation. Noether's theorem is a debugging tool.
Go deeper: Noether, "Invariante Variationsprobleme" (1918) — remarkably readable in translation; Peskin & Schroeder, An Introduction to Quantum Field Theory chapter 2; Zee, Group Theory in a Nutshell for Physicists; Feynman, The Character of Physical Law (lectures from 1964 — the clearest popular account of symmetry).
How the stations connect
Each station has a specific mathematical spine and domain of validity; the map of when to use which is itself part of physics.
The Waves & Frequencies handbook picks up at the EM spectrum and Fourier decomposition; the Cosmos & Astrophysics handbook applies GR and statistical mechanics at galactic and cosmological scales; the Foundations handbook discusses Shannon entropy, which shares a formal structure with thermodynamic entropy.
Standards & Specs
- BIPM SI Brochure (9th ed, 2019) — the authoritative definition of SI base units after the 2019 redefinition.
- CODATA Recommended Values (2022) — fundamental-constant values, updated every 4 years.
- IUPAP Symbols, Units, Nomenclature — physics notation standards.
- NIST Special Publication 330 — The International System of Units (SI), 2019 edition.
- IAU / IUGS — astronomical and geological unit standards (the astronomical unit, parsec, etc.).
- Canonical papers — Newton, Philosophiæ Naturalis Principia Mathematica (1687); Maxwell, "A Dynamical Theory of the Electromagnetic Field" (1865); Clausius, "Über verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie" (1865, the paper that named entropy); Boltzmann's H-theorem papers (1872–1877); Einstein, "Zur Elektrodynamik bewegter Körper" (Annalen 1905) and "Die Feldgleichungen der Gravitation" (1915); Schrödinger, "Quantisierung als Eigenwertproblem" (1926); Dirac, "The Quantum Theory of the Electron" (1928); Noether, "Invariante Variationsprobleme" (1918); Bell, "On the Einstein-Podolsky-Rosen Paradox" (1964); Higgs, "Broken Symmetries and the Masses of Gauge Bosons" (1964); Weinberg, "A Model of Leptons" (1967).
- Books — Feynman, The Feynman Lectures on Physics (Vols. 1–3, free online). Landau & Lifshitz, Course of Theoretical Physics (10 volumes, the standard deep treatment). Griffiths, Introduction to Electrodynamics and Introduction to Quantum Mechanics. Goldstein, Classical Mechanics. Hartle, Gravity. Peskin & Schroeder, An Introduction to Quantum Field Theory. Weinberg, The Quantum Theory of Fields (Vols. I–III).
Test yourself
A GPS satellite orbits Earth at roughly 20,200 km altitude. GR predicts its onboard atomic clock runs ~45 µs/day faster than a ground clock (weaker gravity); SR predicts ~7 µs/day slower (orbital speed). What does the satellite clock actually show, and which of the two effects dominates?
The satellite clock runs ~38 µs/day faster than the ground clock — GR (gravitational time dilation, clock runs faster in weaker gravity) dominates over SR (velocity time dilation, clock runs slower when moving). The GPS system corrects for this difference by design: without it, GPS positions would drift by ~11 km/day. The numbers work out because orbital radius and velocity balance such that GR's contribution (depends on r) outweighs SR's (depends on v²). See Station 6.
A block of aluminum (specific heat 900 J/(kg·K)) at 100 °C is dropped into 1 kg of water at 20 °C. The system reaches thermal equilibrium at 25 °C. Use the first law of thermodynamics to find the aluminum's mass, and state the second-law condition that made this process irreversible.
First law (no work done, conservation of energy): Q_Al = −Q_water. Q_water = m · c · ΔT = 1 × 4186 × (25−20) = +20 930 J absorbed. Q_Al = m_Al · 900 · (25−100) = −67 500 · m_Al. Setting magnitudes equal: m_Al = 20 930 / 67 500 ≈ 0.31 kg (310 g). Second-law irreversibility: heat flowed spontaneously from hot (Al) to cold (water); reversing it would decrease entropy of the universe, which is forbidden. ΔS = m_Al·c_Al·ln(T_f/T_Al) + m_water·c_water·ln(T_f/T_water) > 0. See Stations 3 and 4.
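As a sanity check, the arithmetic in this answer can be scripted:

```python
import math

# Calorimetry: at equilibrium Q_Al + Q_water = 0 (first law, no work done)
c_al, c_w = 900.0, 4186.0          # specific heats, J/(kg K)
m_w = 1.0                          # water mass, kg
T_al, T_w, T_f = 100.0, 20.0, 25.0 # temperatures, Celsius

q_water = m_w * c_w * (T_f - T_w)          # heat absorbed by water: +20 930 J
m_al = q_water / (c_al * (T_al - T_f))     # from |Q_Al| = |Q_water|
print(round(m_al, 2))                      # ≈ 0.31 kg

# Second law: total entropy change of the universe must be positive.
# Entropy integrals need absolute temperature, so convert to kelvin.
K = lambda t_celsius: t_celsius + 273.15
dS = (m_al * c_al * math.log(K(T_f) / K(T_al))
      + m_w * c_w * math.log(K(T_f) / K(T_w)))
print(dS > 0)                              # True: the process is irreversible
```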
A system has Lagrangian L = T(q̇) − V(q) where V depends only on the radial distance r = √(x² + y² + z²). Use Noether's theorem to name one conserved quantity without computing anything.
V depends only on r, so the Lagrangian is invariant under rotations (continuous SO(3) symmetry). By Noether's theorem, the corresponding conserved quantity is angular momentum L = r × p (all three components). This is why planetary orbits lie in a plane (angular momentum is a fixed vector), why Kepler's equal-area law holds, and why central-force problems in QM conserve ℓ and m_ℓ. See Stations 2 and 8.