Handbook · Digital · Software

Languages


TL;DR

A language is the sum of four choices: type system, runtime model, memory model, concurrency model. Mastering a language is mastering its position on each axis. Learning the next language is reading the delta. Syntax is the last thing that transfers; the axes are the first. This roadmap names the axes, places real languages on them, and suggests the order to visit.

You will be able to

  • Place any language on the four axes in under a minute.
  • Predict the debugging flavor of a language before you write a line of it.
  • Choose your next language by the axis you haven't yet stretched.

The Map


Four axes, four or five options each. A few hundred combinations; perhaps 30 that anyone uses. A language is a point in this space, and its personality — how it debugs, how it scales, how it crashes — is the product of its coordinates.

Station 1 — Type systems

The type system answers two questions: when are types checked (static vs dynamic), and how strictly (strong vs weak). Two more questions refine the picture: by name (nominal) or by shape (structural)? And how much does the compiler figure out on its own (inference)?

                           ┌────────────────────────────────────┐
                           │              STATIC                │
                           │  ┌──────────┐      ┌──────────┐    │
                  STRONG   │  │  Java    │      │  Rust    │    │
                           │  │  nominal │      │ nominal+ │    │
                           │  └──────────┘      └──────────┘    │
                           │        ┌──────────────┐            │
                           │        │  TypeScript  │            │
                           │        │  structural  │            │
                           │        └──────────────┘            │
                           │  ┌──────────┐                      │
                   WEAK    │  │    C     │                      │
                           │  └──────────┘                      │
                           ├────────────────────────────────────┤
                           │             DYNAMIC                │
                           │  ┌──────────┐      ┌──────────┐    │
                  STRONG   │  │  Python  │      │  Ruby    │    │
                           │  └──────────┘      └──────────┘    │
                           │  ┌──────────┐                      │
                   WEAK    │  │JavaScript│                      │
                           │  └──────────┘                      │
                           └────────────────────────────────────┘

The model you want: static types move errors from runtime to compile time, at the cost of expressiveness. Strong types refuse to coerce; weak types coerce and hope. Inference decides how much of the type system you have to type yourself. Structural typing is duck typing that the compiler can check.
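That last sentence, made concrete in Python: typing.Protocol is structural typing bolted onto a dynamic language, and @runtime_checkable lets even isinstance check by shape. A minimal sketch — Quacks, Duck, and Robot are made-up names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Robot:  # never declares or inherits Quacks
    def quack(self) -> str:
        return "beep"

# Structural: membership is decided by shape, not by declared ancestry.
# (runtime_checkable only checks that the method exists, not its signature;
# a static checker like mypy verifies the full shape.)
assert isinstance(Duck(), Quacks)
assert isinstance(Robot(), Quacks)
assert not isinstance(42, Quacks)
```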

WARNING

"Static" and "strong" are independent axes, not synonyms. C is static and weak — it will cheerfully let you treat an int* as a char* and hand you the results. Python is dynamic and strong — it refuses to add "3" + 3 and tells you to pick one.

Go deeper: Benjamin Pierce's Types and Programming Languages chapters 1–9 (skim, then return when you meet Rust or Haskell); write the same non-trivial program in one static and one dynamic language and count the bugs each caught.

Station 2 — Runtime models

How does source text become running instructions? Four routes:

  • AOT compiled — everything resolved before the program runs. Fast start, predictable performance, hard to hot-patch.
  • Interpreted — source walked at run time. Slow, but trivial to ship and modify.
  • Bytecode + VM — source compiled to an intermediate form; VM executes it. Portable, moderate speed.
  • JIT — VM watches hot paths and compiles them to machine code at run time. Slow to warm up, near-AOT speed at peak. Crucially, can deoptimize if assumptions break.

The model you want: startup vs peak performance is a choice. AOT wins the start, JIT wins the peak, interpretation wins time-to-ship. Serverless loves AOT; long-running servers love JIT.
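CPython itself takes the bytecode-plus-VM route: source is compiled to an intermediate form that the VM then executes, and the standard-library dis module makes that form visible:

```python
import dis

def f(x):
    return x * 2 + 1

# The compiled intermediate form lives on the function object...
print(len(f.__code__.co_code), "bytes of bytecode")

# ...and dis renders it as the VM's instructions (LOAD_FAST, BINARY_OP, ...).
dis.dis(f)
```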

TIP

"Cold start" problems are almost always a runtime-model problem. Switching from a JIT runtime (Java, C#) to an AOT runtime (Go, GraalVM native-image, Rust) is the common, effective fix.

Go deeper: Mike Pall's LuaJIT writeups; Cliff Click's "A crash course in modern hardware" for why JITs exist; read the Go compiler's README — the whole toolchain fits in a morning.

Station 3 — Memory models

Who frees the memory?

  Manual         Refcount          Tracing GC          Ownership
  ───────        ─────────         ───────────         ──────────
  You do.        Every ref         Runs periodically   Compiler tracks
  You leak.      bumps a count.    over the heap.      who owns what.
  You UAF.       Cycles leak       Stops the world     Enforces at
                 without help.     (sometimes).        compile time.

  C, C++ raw     Swift, CPython    Java, Go, C#,       Rust
                                   JS, Ruby

Four strategies. Each trades correctness, performance, and cognitive load differently:

  • Manual — you call free. Fastest, most unforgiving. Use-after-free and double-free are your pets.
  • Reference counting — every reference is a counter increment; drop to zero frees. Predictable, but leaks cycles unless you add weak references or a cycle collector.
  • Tracing GC — periodically walk live references from roots; collect the unreachable. Pauses are the cost. Generational GCs make the common case (young objects dying fast) cheap.
  • Ownership / borrowing — compiler tracks a single owner per value and which references borrow for how long. No leaks, no use-after-free, no runtime cost. Steep learning curve.

The model you want: memory safety is unavoidable engineering work; the only question is where the work happens. Manual puts it in your head. Refcount puts it in the runtime and leaks cycles. GC puts it in the runtime and pauses. Ownership puts it in the compiler and frustrates you until you think in lifetimes.

CAUTION

A tracing GC buys memory safety, not speed or predictability. Tail-latency pain in high-throughput JVM services is almost always a GC-tuning problem.

Go deeper: Chapter 4 of The Rust Book (ownership) even if you never ship Rust; Richard Jones's The Garbage Collection Handbook for the serious version; instrument one JVM service with -XX:+PrintGCDetails for a week.

Station 4 — Concurrency primitives

Four dominant models:

  • OS threads + locks — kernel schedules; you coordinate with mutexes, semaphores, condition variables. The classic, most general, easiest to get wrong.
  • Green threads — many user-space threads multiplexed onto a few OS threads. Cheap to create (millions). Go's goroutines, Java 21's virtual threads.
  • Async / await — single-threaded cooperative multitasking on an event loop. Non-blocking I/O is free; CPU work blocks everyone.
  • Actors / CSP — no shared state; message passing between isolated units. Scales horizontally without locking.

The model you want: the concurrency primitive is the shape of every bug you'll write in the language. OS threads give you races. Green threads give you cheap concurrency but still shared-state bugs. async/await gives you event-loop-hogging bugs. Actors give you mailbox-overflow and ordering bugs.

WARNING

Mixing concurrency models inside one program is where production incidents are born. A sync-blocking call inside an async function stalls the whole event loop (hello, p99 cliff). A green thread that calls into a blocking C library pins an OS thread and you wonder why scaling stopped. Pick a model; stay inside it; cross the boundary only with an explicit thread-pool adapter.
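The warning in runnable form — the same blocking work, with and without asyncio's explicit thread-pool adapter (run_in_executor). The timings in the comments are illustrative, not guaranteed:

```python
import asyncio
import time

async def heartbeat():
    # Ticks every 0.1 s — if the event loop is healthy.
    for _ in range(5):
        await asyncio.sleep(0.1)

async def blocking():
    time.sleep(0.6)  # sync-blocking call inside async: stalls the whole loop

async def adapted():
    # Same work, pushed onto the default thread pool via the adapter.
    await asyncio.get_running_loop().run_in_executor(None, time.sleep, 0.6)

async def main():
    t0 = time.monotonic()
    await asyncio.gather(heartbeat(), adapted())
    good_t = time.monotonic() - t0   # ~0.6 s: heartbeat ran concurrently

    t0 = time.monotonic()
    await asyncio.gather(heartbeat(), blocking())
    bad_t = time.monotonic() - t0    # ~1.0 s: ticks bunched up behind the stall

    return good_t, bad_t

good_t, bad_t = asyncio.run(main())
print(f"adapter: {good_t:.1f}s   blocked loop: {bad_t:.1f}s")
```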

Go deeper: Rob Pike's "Concurrency is not Parallelism" talk; Java Virtual Threads (JEP 444) even if you don't use Java; Elixir in Action Chapter 5 for actors done clearly.

Station 5 — Evaluation and dispatch

Small axis, large consequences. When expressions evaluate, and how calls dispatch, change what a language feels like day to day.

  Evaluation:
    strict  → arguments evaluated before the call                   (most languages)
    lazy    → arguments evaluated only when needed                  (Haskell, some Scala)

  Dispatch:
    static  → compile-time, by declared type                        (C function calls)
    dynamic → runtime, by object type                               (Java virtual methods)
    multi   → by types of all arguments                             (Julia, CLOS, Dylan)

  Polymorphism flavors:
    parametric → generics ("works for any T")                       (Java <T>, Rust <T>)
    ad-hoc     → overloading ("works for these T")                  (Rust traits, Haskell type classes)
    subtype    → inheritance ("works for any subclass")             (Java, C#, C++)

The model you want: every language picks a default and charges extra for the others. Java's default dispatch is virtual (dynamic); final opts out. Haskell's default evaluation is lazy; seq opts into strictness. Knowing the defaults tells you what's cheap and what's expensive to express.
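Python runs the opposite arrangement: strict by default, with generators as the opt-in lazy corner — the mirror image of Haskell's defaults, on the same axis:

```python
import itertools

def naturals():
    # An infinite structure — legal because nothing runs until demanded.
    n = 0
    while True:
        yield n
        n += 1

# Strict code would loop forever building this; lazy demand takes five values.
first_five = list(itertools.islice(naturals(), 5))
print(first_five)  # [0, 1, 2, 3, 4]
```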

TIP

Multi-dispatch looks exotic until you maintain a huge switch (type) in a single-dispatch language. Julia and CLOS are worth reading even if you never use them, for the vocabulary.
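Python's standard library ships the halfway house: functools.singledispatch dispatches on the runtime type of the first argument, replacing the switch (type). Full multi-dispatch, on all arguments, is the part Julia and CLOS add on top. describe here is a made-up example:

```python
from functools import singledispatch

@singledispatch
def describe(x):
    return f"something: {x!r}"   # the default arm of the old switch

@describe.register
def _(x: int):
    return f"int: {x}"

@describe.register
def _(x: list):
    return f"list of {len(x)}"

assert describe(3) == "int: 3"
assert describe([1, 2]) == "list of 2"
assert describe("hi") == "something: 'hi'"
```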

Go deeper: Structure and Interpretation of Computer Programs ch. 3 (lazy evaluation); On Lisp ch. 25 (multi-dispatch); one afternoon writing the same code with and without generics in a language you know.

Station 6 — The polyglot's ordering

You get diminishing returns from your fourth Java-family language. You get compounding returns from one language in each major family. A recommended order, if starting fresh:

  Python → Go → Java → Rust → Haskell

The logic: each step stretches an axis you haven't touched. Python → Go adds static types and green threads. Go → Java adds JIT and OS-thread-and-lock concurrency. Java → Rust adds ownership and replaces GC. Rust → Haskell stretches evaluation and purity.

The model you want: the next language that teaches you the most is the one furthest on the axis you've never moved. Learning "one more mainstream language" is a rounding error.

CAUTION

"I should learn X because my team uses it" is necessary but not the same as "I should learn X to grow." Keep both lists, and don't pretend one is the other.

Go deeper: Seven Languages in Seven Weeks (dated but the premise stands); a three-month deliberate polyglot project in the language that covers an axis you don't know.

How the stations connect

The four axes are orthogonal in theory but correlated in practice. Static + tracing GC + OS threads is the JVM corner. Static + ownership + async is the Rust corner. Dynamic + GC + event loop is the JS/Python corner. When a corner gets popular, a cluster of tools grows around it, and the axis choices propagate through the ecosystem.


Read: the personality of a language — how it debugs, how it scales, how it crashes — is predictable from the coordinates, not from the syntax.

Standards & Specs

The authoritative references that govern programming languages. Many languages are defined by an ISO or ECMA specification — C (ISO/IEC 9899), C++ (ISO/IEC 14882), and JavaScript (ECMA-262) among them — and those documents, not folklore, settle arguments about the axes above.

Test yourself

Your team's Node service has tail latency that spikes every few minutes. You switch nothing but the runtime, from Node to Deno (same language). The spikes don't move. What axis is actually causing them?

Switching runtime kept the same memory and concurrency model (tracing GC, event loop). The spikes are almost certainly GC pauses or event-loop stalls, not interpreter/VM differences. To move them, you'd need to change the memory axis (Rust or C++ service) or reduce allocation pressure on the hot path.

Your Python team is asked to port a compute-heavy library to Rust. The PM asks "why not Go?" Give two-axis reasons.

Memory axis: Rust's ownership + no GC delivers predictable tail latency; Go has GC (small but real pauses). Concurrency axis: Rust's async + ownership statically prevents many concurrent-mutation bugs; Go's go keyword + channels is cheaper to write but gives you data races if you share state carelessly. For a compute-heavy library, Rust wins on both axes.

A colleague argues that TypeScript "makes JavaScript a strong-typed language." What's wrong with that sentence?

TypeScript adds static type checking at compile time. At runtime, it is still JavaScript — so it is still weakly typed (coerces on operations). Static and strong are independent axes. TypeScript is static + structural; JavaScript's runtime remains dynamic + weak.