Act III of VII
Instructions
A programming language is a set of constraints you choose, and those constraints shape what's easy and what's hard.
On this page
The working table of contents.
- What a program actually is — a file of text that gets turned into instructions the machine can run.
- The two paths: compilation (translate the whole program ahead of time, run fast) vs interpretation (translate and execute on the fly, more flexible). And the hybrid: bytecode + a virtual machine (Java, Python, JavaScript engines).
- The four big trade-offs every language makes:
- Control vs safety — can you touch memory directly? (C yes, Python no).
- Speed vs expressiveness — close to machine = fast but verbose, far from machine = concise but slower.
- Static vs dynamic — do you check types before running or while running?
- Manual vs managed memory — do you free memory yourself, does a garbage collector do it, or does the compiler figure it out (ownership/Rust)?
- Paradigms as thinking styles — imperative (do this then that), object-oriented (model the world as things), functional (model the world as transformations), and why most modern languages mix them.
- Algorithms as reusable ideas — sorting, searching, graph traversal. The idea of Big-O (not formal analysis of every algorithm, just the core question: "how does the work grow as the input grows?").
- Data structures as the shape of your data — arrays, hash maps, trees, queues. The right shape makes the algorithm obvious.
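Python itself is a handy glimpse of the hybrid path sketched above: CPython compiles source to bytecode once, then a virtual machine interprets those instructions. A minimal sketch using the standard `dis` module (the exact opcode names vary by CPython version):

```python
import dis

def add(a, b):
    return a + b

# CPython compiled this function to bytecode at definition time;
# the VM interprets those instructions on each call.
opnames = [instr.opname for instr in dis.Bytecode(add)]
print(opnames)  # opcode names differ across CPython versions
```

Running this prints the instruction names the VM will execute; you will see loads, an add, and a return, whatever your interpreter version calls them.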
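The "paradigms as thinking styles" point is easiest to see with one job written three ways. A sketch in Python (class and variable names here are illustrative, not from any library):

```python
nums = [1, 2, 3, 4]

# Imperative: do this, then that, mutating state as you go.
total = 0
for n in nums:
    total += n * n

# Functional: describe the result as a transformation of the input.
functional_total = sum(map(lambda n: n * n, nums))

# Object-oriented: model the data as a thing with behavior.
class Squarer:
    def __init__(self, values):
        self.values = values

    def sum_of_squares(self):
        return sum(v * v for v in self.values)

assert total == functional_total == Squarer(nums).sum_of_squares() == 30
```

Same answer every time; what differs is where the reader's attention goes — the steps, the transformation, or the thing. Most modern languages let you mix all three, as the outline notes.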
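The Big-O question — "how does the work grow as the input grows?" — can be made concrete by counting steps for two ways of searching a sorted list. A sketch with hypothetical helper names:

```python
def linear_search_steps(items, target):
    """Count comparisons when scanning front to back."""
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps  # grows in proportion to len(items): O(n)

def binary_search_steps(items, target):
    """Count comparisons when halving a sorted list each time."""
    steps, lo, hi = 0, 0, len(items) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        elif items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps  # grows like log2(len(items)): O(log n)

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # a million steps
print(binary_search_steps(data, 999_999))  # about twenty steps
```

Double the input and the linear scan doubles its work, while the binary search adds roughly one step. That growth curve, not the raw timings, is what Big-O names.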
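"The right shape makes the algorithm obvious" in one small case: counting word frequencies. With a plain list you would rescan it for every word; hand the same data to a hash map (a Python dict) and the loop writes itself:

```python
words = "the quick fox jumps over the lazy dog the end".split()

counts = {}
for word in words:
    # One O(1) hash-map lookup and update per word.
    counts[word] = counts.get(word, 0) + 1

print(counts["the"])  # 3
```

Nothing about the algorithm changed; choosing a keyed structure made the obvious loop also the efficient one.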
Going deeper
Branches that earn their own article.
- Compiler internals (lexing, parsing, IRs, optimization passes, code generation).
- Interpreter and VM design.
- JIT compilation.
- Garbage collection strategies (mark-sweep, generational, concurrent).
- Type theory and type systems.
- Specific language deep dives.
- Formal algorithm analysis.
- Individual data structure internals (B-trees, red-black trees, skip lists, bloom filters).
- Concurrency primitives (threads, async/await, actors, CSP).
- Formal methods and verification.