Standards & Compatibility
When more than one implementation has to interoperate, somebody has to write down what "correct" means. That document is a standard. The math underneath cryptography in Act VIIa is true regardless of who reads it; the rules for a TLS handshake, an HTTP status code, or a WCAG conformance level are only true because every implementer agreed to read them the same way. This chapter walks through the bodies that publish those rules, the processes they use, the language inside the documents, the versioning regimes that keep implementations compatible over decades, the conformance suites that turn an agreement into a test, and the compliance frameworks that turn a test into a contract.
Why standards exist
Pick any moment where bits cross a boundary between two organisations — a browser requesting a page from a server they have never met, a payment terminal talking to a bank, a Bluetooth headset pairing with a phone built four years earlier. Each side was written by people who never spoke. The only thing making the encounter work is that both sides read the same document and implemented it the same way. Without the document, every cross-vendor connection would be a bespoke negotiation; with the document, an engineer ships one implementation and inherits compatibility with every other implementation that read the same page.
The cost of disagreement is interop failure: a page that renders in one browser and not another, a JWT that one library signs and another refuses to verify, a USB-C cable that delivers 100 W to one laptop and 5 W to a different one. The cost of agreement is a slow committee: years between the first Internet-Draft and a final RFC, decades between an ISO working group convening and the standard reaching national-body approval. The trade is almost always worth it. The web platform exists because HTTP, HTML, CSS, JavaScript, URL, and TLS are agreed-upon documents that any vendor can implement without asking permission; the absence of standards in adjacent spaces (proprietary chat protocols, vendor-locked smart-home hubs) is what those spaces look like when permission is required.
Standards are economic infrastructure as much as technical infrastructure. Permissionless interoperability is what an open standard buys: a new web browser can ship next week without negotiating bilateral agreements with every server on the internet. The internet protocols are open in this sense; the cellular protocols are partially open (the air interface is standardised by 3GPP, but device certification is gated); the proprietary alternatives are closed (Apple's iMessage protocol cannot be implemented by a third party without reverse engineering and the risk of being broken in the next iOS update). The pattern repeats across every standardised layer: open standards turn into a substrate that compounds, closed protocols turn into a moat the owner has to defend forever.
The bodies that write the rules
The Internet Engineering Task Force (IETF) writes the protocols that move bits between organisations on the internet. IP, TCP, UDP, TLS, HTTP, DNS, BGP, SMTP, IMAP, MIME, JSON, OAuth — every protocol that crosses an autonomous-system boundary is an RFC. The IETF is unusual among standards bodies in three ways: anyone can participate (no membership fee, no national affiliation, just a working-group mailing list), decisions are reached by rough consensus rather than voting, and the rule is that running code beats abstract argument. The IETF is operated under the legal umbrella of the Internet Society but in practice is governed by the IESG (Internet Engineering Steering Group) and the IAB (Internet Architecture Board).
The World Wide Web Consortium (W3C) and the Web Hypertext Application Technology Working Group (WHATWG) split web-platform standardisation between them. The W3C was founded by Tim Berners-Lee in 1994 and writes XML, SVG, WCAG, ARIA, and most of the cross-platform web APIs. WHATWG was founded in 2004 by browser vendors who felt the W3C was moving too slowly on HTML; today it owns HTML, DOM, URL, Fetch, and other core web-platform specs as living standards, documents that are updated continuously rather than versioned. The two bodies signed a memorandum in 2019 that gave WHATWG authoritative ownership of HTML and DOM while the W3C continues to publish snapshots.
ISO (the International Organization for Standardization) and IEEE (the Institute of Electrical and Electronics Engineers) handle industry-wide standardisation. ISO is a federation of national standards bodies — ANSI represents the US, BSI represents the UK, DIN represents Germany, JISC represents Japan. ISO publishes everything from screw threads to information-security management systems (ISO/IEC 27001) to programming-language definitions (ISO/IEC 9899 is C, ISO/IEC 14882 is C++). IEEE publishes the protocols at the lower layers — Ethernet (802.3), Wi-Fi (802.11), and the floating-point format every CPU implements (IEEE 754).
NIST (the US National Institute of Standards and Technology) publishes the FIPS (Federal Information Processing Standards) and the SP 800 series, which are the de facto baseline for cryptographic algorithms and security controls in any US-facing system. FIPS 197 is AES, FIPS 180-4 is the SHA-2 family, FIPS 186-5 is the digital-signature standard. NIST also runs the post-quantum cryptography selection process whose outputs are described in Act X. The SP 800-53 control catalogue is what FedRAMP audits against; NIST CSF 2.0 is the cybersecurity framework most US enterprises adopt.
ECMA International owns ECMA-262, the JavaScript language specification, through its TC39 technical committee. TC39 is where every new JavaScript feature is proposed, debated through five stages (Stage 0 strawperson, Stage 1 proposal, Stage 2 draft, Stage 3 candidate, Stage 4 finished), and either lands in the next yearly snapshot or stays in stage purgatory. ECMA-402 is Intl, the JavaScript internationalisation API. The Unicode Consortium publishes the Unicode Standard, the data files that every text-rendering library on earth depends on, and the CLDR locale database.
The Linux Foundation is a meta-body that houses many open-source projects that have become de facto standards in their own right: the CNCF (Kubernetes, Prometheus, Envoy, OpenTelemetry), the OpenSSF (sigstore, SLSA), the OpenJS Foundation, Hyperledger. Outside that umbrella, domain-specific bodies handle their territories: OASIS publishes standards such as SAML, OData, and MQTT; OMG owns UML and BPMN; Khronos publishes Vulkan, OpenGL, OpenCL, glTF, WebGPU.
The RFC process
A new IETF document starts as an Internet-Draft, a working document that expires automatically six months after publication unless re-submitted. Individual submissions are named draft-<author>-<topic>-<NN>; once a working group adopts a draft it is renamed draft-ietf-<wg>-<topic>-<NN>. Drafts are discussed on the working-group mailing list and at the three IETF meetings each year (roughly March, July, and November). Working groups have a chartered scope, two co-chairs, and a mandate to produce a specific set of documents; new working groups are formed when the IESG approves a charter, and old working groups close when their charters are complete.
Once a draft reaches consensus inside a working group, it goes through a Working Group Last Call — a final review window, typically two weeks, when WG participants can raise unresolved objections. Survivors of that step go to IETF Last Call, a broader review opened to anyone on the IETF-announce list: at least two weeks for working-group documents, four for individual submissions. The document then reaches the IESG, which conducts a discuss-and-approve cycle where each Area Director either approves, blocks (DISCUSS), or abstains. A document must have no outstanding DISCUSS positions to be approved; resolving them is where most editing happens in the final weeks before publication.
Approved documents are assigned an RFC number by the RFC Editor and become immutable. Errata can be filed against published RFCs, but the document text itself is never changed; a substantive revision results in a new RFC that obsoletes the old one (the HTTP revision published as RFCs 9110 through 9112 obsoletes RFC 7230 through 7235; RFC 8446, TLS 1.3, obsoletes RFC 5246, TLS 1.2). The RFC numbering is monotonic and global — RFC 1 was published in 1969, RFC 9000 (QUIC) in 2021.
Not every RFC is a standard. The IETF status field distinguishes:
- Standards Track — Proposed Standard or Internet Standard. The vast majority of well-known protocols (TCP, HTTP, TLS) live here.
- Best Current Practice (BCP) — operational guidance with an RFC number, treated as binding by the community. RFC 2119 (key-word language) is BCP 14; the IETF process itself is documented in BCP 9 / RFC 2026.
- Informational — documents that describe a system without prescribing it. RFC 1149 (IP over avian carriers) is famously informational.
- Experimental — protocols that are deployable but not yet standardised. Becomes a Proposed Standard if it succeeds.
- Historic — explicitly retired or superseded protocols. RFC 6101, the document describing SSL 3.0, was published directly with Historic status.
A Proposed Standard can be promoted to Internet Standard when the protocol has been widely deployed and shown to interoperate; this is a separate, higher bar. As of 2024 only around 100 documents have reached Internet Standard status out of more than 9,500 published RFCs.
The IETF's working ethos was captured by David Clark in a 1992 talk: "We reject kings, presidents, and voting. We believe in rough consensus and running code." Rough consensus means the chair judges the sense of the room — not unanimity, not majority, but the absence of unresolved technical objections. Running code means that a draft with two interoperating implementations carries more weight than a draft with elegant prose and no users. Both halves of the slogan are still load-bearing thirty years later.
Reading a specification
The most important sentence in any IETF document is this one, usually quoted from RFC 2119 in the Terminology section: "The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this document are to be interpreted as described in RFC 2119." Those uppercase words are not stylistic emphasis. MUST is a non-negotiable requirement; an implementation that does not satisfy it is non-conforming. SHOULD is a strong recommendation with implicit allowance for valid reasons to deviate; MAY is a permission, not an obligation. RFC 8174 clarified an ambiguity from the original: lowercase "must" and "should" in prose are not key words and carry no special meaning; only the uppercase versions count.
The grammar of a specification is usually given in ABNF (Augmented Backus-Naur Form), defined in RFC 5234. ABNF reads almost like English once you know the syntax: rule = element1 / element2 is alternation, *element is zero-or-more repetition, 1*element is one-or-more, [element] is optional, and quoted strings are literal characters. The HTTP grammar spread across RFCs 9110 through 9112 runs to roughly 150 ABNF productions; the JSON grammar in RFC 8259 fits on half a page. Older RFCs sometimes use BNF or pseudo-grammar; the ABNF discipline became the IETF default in the mid-1990s.
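As a sketch of how an ABNF production turns into code, the number rule from RFC 8259 can be transcribed into a regular expression; the Python below is illustrative, not a full JSON parser:

```python
import re

# RFC 8259's "number" production, transcribed from its ABNF:
#   number = [ minus ] int [ frac ] [ exp ]
#   int    = zero / ( digit1-9 *DIGIT )
JSON_NUMBER = re.compile(r"""
    -?                  # [ minus ]
    (0|[1-9][0-9]*)     # int: "0" alone, or a non-zero leading digit
    (\.[0-9]+)?         # [ frac ]: decimal point plus 1*DIGIT
    ([eE][+-]?[0-9]+)?  # [ exp ]
    $""", re.VERBOSE)

def is_json_number(text: str) -> bool:
    return JSON_NUMBER.match(text) is not None

# The grammar forbids leading zeros, a bare leading "+", and a bare dot:
assert is_json_number("3.14") and is_json_number("-0.5e10")
assert not is_json_number("007")   # leading zeros are not in the grammar
assert not is_json_number("+1")    # plus is only allowed inside the exponent
assert not is_json_number("1.")    # frac requires at least one digit
```

The point of the exercise is that the ABNF is mechanical enough to translate line by line, which is exactly what makes two independent implementations converge.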
A well-formed RFC contains a fixed set of sections. Abstract and Status of This Memo at the top give the executive summary and the standards-track classification. Terminology defines the key-word convention and any new terms. Conformance specifies what an implementation must do to be considered conforming — typically referenced by the test-suite authors who will later write the conformance harness. IANA Considerations instructs the Internet Assigned Numbers Authority to add or modify entries in registries. Every HTTP status code, every TLS cipher suite, every well-known URI, every MIME type lives in a registry that IANA maintains; new values get assigned through Standards Action, Specification Required, or First-Come First-Served depending on the registry's policy. Security Considerations is mandatory and substantive — the IESG will block any document that fails to discuss the threat model and known weaknesses. Privacy Considerations is increasingly required for any document that handles user data.
Worked example: reading RFC 8259 (JSON) in ten minutes versus a week
RFC 8259 is the canonical specification for JSON. It is nine pages of body text plus a few pages of references and authors — short for an RFC, which is part of why it serves as a teaching example. Here is what an engineer extracts at three depths.
Ten minutes (the consumer). Open the document, skim the Abstract: JSON is a text format for serialising structured data. Read section 2 (JSON Grammar) which defines six value types — object, array, number, string, boolean, null — and the ABNF for each. Note that strings are sequences of Unicode code points and the document explicitly forbids unescaped control characters. Skim the Security Considerations: the relevant warning is that JSON parsers in different languages disagree on edge cases (duplicate keys, very large numbers, surrogate-pair handling). This is enough to write a serialiser correctly for almost any use case.
One hour (the careful implementer). Read the entire document, pen in hand. Discover that the grammar permits no trailing commas anywhere and no leading zeros in numbers; that the document does not specify how to handle duplicate keys (names SHOULD be unique, and parsers in the wild variously keep the last value, keep the first, or raise an error); that the number grammar permits arbitrary precision but says nothing about how a parser maps a number to a platform-native double-precision float. Notice section 9 (Parsers): "A JSON parser transforms a JSON text into another representation." There is no requirement that the round trip preserve precision. This explains why JavaScript's JSON.parse("3.141592653589793238") returns 3.141592653589793.
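The duplicate-key and precision behaviours are easy to observe from Python's standard-library parser, which happens to make the same choices as JavaScript's JSON.parse on these inputs:

```python
import json

# Duplicate keys: RFC 8259 says names SHOULD be unique and leaves a
# parser's behaviour on duplicates unspecified. CPython keeps the last
# value it sees for a repeated name.
assert json.loads('{"a": 1, "a": 2}') == {"a": 2}

# Precision: the grammar permits arbitrary-precision numbers, but
# nothing obliges a parser to preserve them. CPython maps numbers to
# IEEE 754 doubles, so the extra digits of pi are rounded away.
assert json.loads("3.141592653589793238") == 3.141592653589793

# A strict parser rejects what the grammar cannot produce:
try:
    json.loads("[1,]")        # trailing comma is not in the array ABNF
except json.JSONDecodeError:
    pass
else:
    raise AssertionError("a conforming parser must reject the trailing comma")
```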
One week (the spec implementer). Cross-reference RFC 7159 (the predecessor) to find every changed sentence; the diff is small but meaningful. Read Nicolas Seriot's "Parsing JSON is a Minefield" article and walk every edge case it enumerates against your implementation. Build a conformance corpus from the accompanying JSON Test Suite and run it. Discover that your implementation accepts [1,] (trailing comma) because the parser was generated from a lenient grammar; fix it. Add fuzzers. After a week you have a parser that disagrees with JSON.parse in three documented places and you can defend each disagreement by citation.
The point of the example is that the spec is the same document for all three readers. What changes is how much of the surrounding ecosystem — errata, test suites, predecessor versions, security analyses — the reader has time for. A spec rewards re-reading; almost every senior implementer has a story about finding a clause on the fifth reading that they had missed on the first four.
Versioning and backwards compatibility
Semantic Versioning (SemVer 2.0.0) is the most widely-adopted version-numbering scheme for libraries and APIs. The format is MAJOR.MINOR.PATCH. MAJOR is incremented for backwards-incompatible changes; MINOR is incremented for backwards-compatible additions; PATCH is incremented for backwards-compatible bug fixes. Pre-release tags (1.0.0-rc.1) and build metadata (1.0.0+sha256.abc123) attach via hyphens and plus signs respectively. The contract is simple: an engineer who upgrades from 2.3.5 to 2.4.0 should not see anything break; an engineer who upgrades from 2.3.5 to 3.0.0 is on notice that breakage is possible and the changelog should explain it.
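The SemVer contract can be sketched in a few lines. The parser below is looser than the spec's full grammar for pre-release and build identifiers, so treat it as illustrative:

```python
import re

# A minimal SemVer 2.0.0 parser: core version, optional pre-release
# after a hyphen, optional build metadata after a plus. The spec's full
# grammar is stricter about identifier characters than this sketch.
SEMVER = re.compile(
    r"^(\d+)\.(\d+)\.(\d+)"        # MAJOR.MINOR.PATCH
    r"(?:-([0-9A-Za-z.-]+))?"      # optional pre-release
    r"(?:\+([0-9A-Za-z.-]+))?$"    # optional build metadata
)

def parse(version):
    major, minor, patch, pre, build = SEMVER.match(version).groups()
    return (int(major), int(minor), int(patch), pre, build)

# The upgrade contract in one comparison: the same MAJOR promises no
# breakage; a MAJOR bump puts the caller on notice.
def is_safe_upgrade(old, new):
    return parse(new)[0] == parse(old)[0] and parse(new)[:3] >= parse(old)[:3]

assert is_safe_upgrade("2.3.5", "2.4.0")       # MINOR bump: additive only
assert not is_safe_upgrade("2.3.5", "3.0.0")   # MAJOR bump: may break
assert parse("1.0.0-rc.1+sha.abc123")[3:] == ("rc.1", "sha.abc123")
```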
The alternative is Calendar Versioning (CalVer), where the version number encodes the release date: 2024.11.0, 25.04, 2024.11.12-1430. Ubuntu, JetBrains products, and pip use CalVer. The trade is information: a SemVer number tells you whether the change is breaking; a CalVer number tells you when it was made. Many projects blur the line — pip's YY.N numbers read like SemVer but encode the year, and the Linux kernel uses a MAJOR.MINOR.PATCH shape whose component meanings have drifted over time.
The hard part is defining "breaking change". The strict SemVer reading is any observable change in behaviour from the public API. The practical reading is any change that breaks code that was relying on the documented contract. The gap between those two readings is Hyrum's Law: "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviours of your system will be depended on by somebody." Hyrum Wright observed this at Google when refactors that should have been safe under the documented contract broke tests across the monorepo because callers had taken dependencies on undocumented behaviours — the order of items in a hash map, the exact text of an error message, the precision of a floating-point calculation.
Mitigating Hyrum's Law in practice has three moves. First, randomise observable behaviours that are not part of the contract — Go intentionally randomises map iteration order so that no caller can write code that relies on it. Second, provide explicit deprecation periods before removing anything — typical practice is one minor version with a deprecation warning, one major version with a soft removal that still works but logs, and a major version after that where the call fails. Third, test the contract, not the implementation — a public API should have contract tests that assert what was promised, and internal tests that may assert more, so that a refactor that breaks an internal test but not a contract test is known to be safe.
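The first move — randomising what the contract does not promise — can be sketched in a few lines; tags_for and its inline data are hypothetical names for illustration:

```python
import random

# Sketch of the Go-map trick: the contract promises *which* tags come
# back, never their order, so the implementation shuffles deliberately
# to stop callers from depending on order.
def tags_for(item_id, _db={"42": ["new", "sale", "featured"]}):
    tags = list(_db[item_id])
    random.shuffle(tags)          # order is explicitly not part of the contract
    return tags

# A contract test asserts exactly what was promised: membership, not order.
assert sorted(tags_for("42")) == ["featured", "new", "sale"]

# An order-dependent caller, by contrast, fails intermittently in CI —
# long before a refactor would have broken it in production.
```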
Worked example: a SemVer bump decision for a query library
The library's public API exposes a connect(options) function. The team wants to change the default value of options.cacheSize from 100 to 1000. The question: MAJOR, MINOR, or PATCH?
The clean-room reading of SemVer. The default value is part of the function's behaviour. Any code that called connect({}) and depended on cacheSize being 100 will now see different behaviour. That is, by definition, a breaking change. MAJOR bump.
The Hyrum reading. Most callers do not care about the exact cache size; they care that caching exists. The change increases cache memory by 10× per connection. Some callers run hundreds of connections per process and will hit OOM in production. That is observable, that is broken, and the documentation never promised a specific cache size — so the docs say MINOR, but the caller's monitoring will say MAJOR.
The practical decision. Treat anything that could plausibly break production for any caller as a breaking change. Bump MAJOR. Add a deprecation note in the changelog: "the default cacheSize was raised from 100 to 1000. To restore the previous behaviour, pass cacheSize: 100 explicitly." Provide a migration script that grep-finds calls to connect({}) and rewrites them. Ship 2.0.0 next quarter, then 2.1.0 a month later for callers who had a smooth migration. The cost of a misjudged MAJOR is one extra changelog entry; the cost of a misjudged MINOR is a wave of incident reports from callers whose CI was set to auto-upgrade minor versions.
A useful heuristic: if you can list three plausible production failures from the change, treat it as MAJOR. If you cannot list any, treat it as MINOR or PATCH. The asymmetry is intentional — surprising users with breakage is far more expensive than reading "2.0.0" in the upgrade log.
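One way to soften the bump in practice is to keep the old default through the current MAJOR and warn callers who rely on it; connect and cache_size below are the hypothetical names from the worked example:

```python
import warnings

_UNSET = object()

# Migration-friendly alternative to silently changing a default: the old
# default survives until the MAJOR bump, and implicit callers get a
# deprecation warning telling them what will change.
def connect(cache_size=_UNSET):
    if cache_size is _UNSET:
        warnings.warn(
            "the default cache_size changes from 100 to 1000 in 3.0.0; "
            "pass cache_size explicitly to pin the current behaviour",
            DeprecationWarning,
            stacklevel=2,
        )
        cache_size = 100            # old default preserved until 3.0.0
    return {"cache_size": cache_size}

assert connect(cache_size=1000)["cache_size"] == 1000   # explicit: no warning
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert connect()["cache_size"] == 100               # implicit: old default
    assert caught and caught[0].category is DeprecationWarning
```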
The browser-platform world adds two extra tools to the compatibility kit. Feature detection — checking whether window.CSS.supports('display: grid') returns true before using grid — lets a single codebase target many browsers at once. Polyfills — a script that implements a missing API on top of older primitives — let an engineer write to the modern API and ship working code on older runtimes. Both compensate for the fact that browser vendors update their implementations on different schedules; the long tail of mobile devices on Android 7 still receiving traffic in 2025 makes "all browsers support feature X" a probabilistic statement rather than a binary one. The caniuse.com and MDN browser-compat tables are the working engineer's reference for what is shipped where.
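The polyfill pattern is not browser-specific. The same detect-then-shim move, sketched here in Python against math.isqrt (added to the standard library in Python 3.8), shows the shape:

```python
import math

# Detect the API before using it; if the runtime lacks it, install a
# fallback built from older primitives under the same name, so every
# call site can be written against the modern API.
if not hasattr(math, "isqrt"):
    def _isqrt(n):                      # fallback: integer Newton's method
        x = n
        y = (x + 1) // 2
        while y < x:
            x, y = y, (y + n // y) // 2
        return x
    math.isqrt = _isqrt                 # shim under the standard name

assert math.isqrt(17) == 4              # callers never see the difference
```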
Interoperability testing
A fixture of every IETF meeting is the Hackathon: a weekend where implementers sit in the same room and connect their implementations to each other. A document that survives a hackathon is one where the spec was clear enough that two independent implementers, working from text alone, ended up with interoperable code. Disagreements found at hackathons usually surface as spec errata or as draft revisions before the document reaches Last Call.
The web platform's equivalent is the Web Platform Tests project at web-platform-tests.org — a single repository of cross-browser conformance tests for HTML, CSS, DOM, Fetch, Service Workers, and roughly every other web-platform feature. Each test lives in the spec area it tests; each test runs in every major browser; the dashboard at wpt.fyi reports per-test pass/fail for Chrome, Firefox, Safari, Edge, and Servo. WPT has over 2 million subtest assertions as of 2025 and is the de facto truth table for "does my browser conform to the web platform".
Built on top of WPT is Interop, an annual cross-vendor commitment where Chrome, Firefox, Safari, and Edge engineers pick a small set of focus areas (around 20 each year) and commit to passing every WPT test in those areas by year-end. Recent cycles have targeted areas such as container queries, the :has() selector, CSS Nesting, CSS subgrid, and the Popover API. The Interop dashboard shows progress quarter by quarter; passing the Interop bar is what turns "Chrome ships this feature" into "the web platform supports this feature".
Outside the browser world, every major standards body maintains a conformance test suite:
- POSIX has the Open POSIX Test Suite and the Linux Test Project.
- OpenGL / Vulkan have the Khronos Conformance Test Suite (CTS); a GPU cannot legally claim "Vulkan 1.3 conformant" without passing it.
- C and C++ have the SuperTest and the GCC / Clang testsuites.
- QUIC and HTTP/3 have per-implementation interop matrices generated continuously by the QUIC Interop Runner.
- OpenID Connect has the OpenID Foundation certification program.
The honest limit of conformance testing is that "passes the suite" is a necessary, not sufficient, condition for real-world interop. Test suites cover the spec; production exposes integrations the spec never anticipated — a load-balancer that times out at 60 seconds while the conformant TLS handshake takes 65; a middlebox that strips an HTTP header the spec considered optional; a TLS implementation that conforms to RFC 8446 but uses cipher-suite preferences that interact badly with a conformant peer. Every standards-track protocol eventually accumulates a list of these "interop folklore" cases that no test suite captures. The mitigation is shadow-deployment: ship the new implementation to a small fraction of traffic, monitor errors, increase exposure if nothing breaks.
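The shadow-deployment mitigation can be sketched in a few lines; every name here (handle, old_impl, new_impl) is hypothetical:

```python
import random

# Shadow deployment: production is always served by the old
# implementation; a fraction of requests are mirrored to the new one,
# and divergences are recorded rather than served.
def handle(request, old_impl, new_impl, mismatches, fraction=0.01, rng=random):
    response = old_impl(request)           # users only ever see this
    if rng.random() < fraction:            # mirror a sample of traffic
        shadow = new_impl(request)
        if shadow != response:
            mismatches.append((request, response, shadow))
    return response

old = lambda r: r.upper()
new = lambda r: r.upper().strip()          # candidate differs on padded input
seen = []
rng = random.Random(0)                     # seeded for a deterministic sketch
for req in ["ok", " padded "] * 200:
    handle(req, old, new, seen, fraction=0.5, rng=rng)

# Only real divergences were logged, and no user ever saw the new output.
assert seen and all(req == " padded " for req, _, _ in seen)
```

Raising `fraction` as the mismatch log stays empty is the "increase exposure if nothing breaks" step from the text.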
Reference implementations as a standardization vector
Three of the most influential standards in computing are not, in their canonical form, documents. The Linux kernel is the de facto Unix-like operating system specification — POSIX exists as an ISO document, but in practice "what Linux does" is what server software is written against, and the POSIX document is updated to reflect Linux behaviour as often as Linux is updated to match POSIX. The Chromium browser engine is the de facto web platform; the web standards documents at WHATWG and W3C are authoritative, but a feature that lands in Chromium and is shipped to a billion users becomes part of the web whether or not the document caught up. OpenSSL (and its fork BoringSSL inside Google) is the reference implementation of TLS — most TLS bugs found in the wild are found in OpenSSL, most TLS implementations cross-check against OpenSSL's wire format, and a protocol extension that OpenSSL refuses to implement effectively dies on the vine.
This is not necessarily a bad thing. Reference implementations resolve ambiguities the spec left open — if a spec says SHOULD and both Linux and FreeBSD do X, then "X" becomes the working answer for everyone. The trade-off is that a reference implementation forecloses choices the spec deliberately left open. POSIX leaves file-name encoding undefined; Linux treats file names as opaque byte strings; this works on Linux but breaks every cross-platform tool that assumed file names were UTF-8. The spec was permissive; the dominant implementation made an interpretation; that interpretation became binding for everyone connected to that implementation.
The pattern repeats across many domains. glibc is the reference C library on Linux, and several glibc-specific extensions (getline, vasprintf, strdup) have made it into POSIX after the fact. systemd is the reference init system on most Linux distributions, and "what systemd does" has effectively replaced earlier sysv-init behaviour as the operating standard. CPython is the reference Python implementation, and the language reference often defers to "what CPython does" for edge cases like the order of dict iteration (which became part of the language guarantee in Python 3.7 after CPython 3.6 happened to implement it that way).
The healthiest configuration is a primary specification with two or more independent implementations actively maintained against it. The web platform has three (Chromium, Gecko, WebKit). The C++ language has three (GCC, Clang, MSVC). TLS has many (OpenSSL, BoringSSL, NSS, GnuTLS, mbedTLS, rustls). The unhealthy configuration is a single implementation that becomes the canonical reference — at which point the document becomes documentation of the code, the code can change without going through the spec process, and any second implementer is effectively in the position of reverse-engineering a moving target.
Compliance frameworks
A compliance framework is a structured set of controls — administrative, technical, and physical — that an organisation implements and then evidences to a third party. The third party is sometimes a private auditor (SOC 2, ISO 27001), sometimes a payment-network gatekeeper (PCI-DSS), sometimes a government regulator (HIPAA, GDPR, EU AI Act, FedRAMP). The framework specifies the controls; the audit verifies that the controls exist and operate; the certification or attestation is the artefact the organisation hands to a customer who needs assurance.
- SOC 2 (American Institute of Certified Public Accountants) is the dominant US-market trust framework for SaaS vendors. It attests against the five Trust Services Criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy. SOC 2 Type I is a point-in-time attestation; SOC 2 Type II is an attestation across an observation window of typically 3 to 12 months, where the auditor samples evidence over that window to confirm controls operated. Most enterprise customers will not buy from a vendor that does not have a SOC 2 Type II report.
- ISO/IEC 27001:2022 is the international information-security management standard. The 2022 revision restructured the Annex A controls into four themes (Organizational, People, Physical, Technological) with 93 controls total. Certification is awarded by an accredited registrar after a Stage 1 documentation review and a Stage 2 implementation audit, then re-audited annually with full recertification every three years.
- PCI-DSS v4.0 (the Payment Card Industry Data Security Standard) is the card networks' mandatory framework for any organisation that stores, processes, or transmits cardholder data. Twelve high-level requirements decompose into hundreds of sub-controls. PCI-DSS is contractually mandatory through the merchant's acquiring bank, not a law — but failing a PCI audit can mean losing the ability to take card payments.
- HIPAA (Health Insurance Portability and Accountability Act, 1996) is the US federal regulation for protected health information. The Security Rule (45 CFR Part 164, Subpart C) specifies administrative, physical, and technical safeguards. Violations can incur civil penalties up to several million dollars per year per category of violation.
- GDPR (Regulation EU 2016/679) is the EU general data protection regulation. Article 32 requires "appropriate technical and organisational measures" for data security; Article 25 requires "data protection by design and by default". Breach notification is 72 hours; maximum fines are 4 % of global annual turnover or €20 million, whichever is higher.
- EU AI Act (Regulation EU 2024/1689) is the EU artificial-intelligence regulation in force from August 2024 with a phased application schedule through 2026. It classifies AI systems into four risk tiers: unacceptable (banned), high (heavily regulated, conformity assessment required), limited (transparency obligations), minimal (unrestricted). General-purpose AI models above a compute threshold (10²⁵ FLOPs of training) face additional obligations including red-teaming and post-market monitoring.
- FedRAMP (Federal Risk and Authorization Management Program) is the US federal cloud-services authorisation programme. Authorisations are granted at Low, Moderate, or High impact levels and are mapped to NIST SP 800-53 control baselines. The process from initial assessment to authorisation typically takes 12 to 18 months.
The vocabulary distinction matters. An attestation is a third-party opinion that controls are present and operating (SOC 2 is an attestation, not a certification — auditors are bound by professional standards but no central body issues a stamp). A certification is an issued credential against a published standard (ISO 27001 certifications are issued by accredited registrars). An authorisation is a regulator's permission to operate (FedRAMP authorisations are granted by a sponsoring federal agency or, historically, by the programme's Joint Authorization Board). All three are signed pieces of paper; what differs is the legal weight behind them.
Worked example: a SOC 2 Type II audit cycle
A startup at around 60 engineers and 200 customers decides to pursue SOC 2 Type II. The customer pressure is concrete: three deals stalled because the buyer's procurement team would not move forward without a SOC 2 Type II report. The work splits into three phases.
Phase 1 · Readiness (months 1–3). The team picks an auditor (typically a small-to-mid CPA firm specialising in tech), engages a compliance-platform vendor (Vanta, Drata, Secureframe) that automates evidence collection, and selects which Trust Service Criteria to include. Security is mandatory; Availability is almost always included; Confidentiality is included if the product handles customer data the customer considers sensitive; Processing Integrity and Privacy are usually skipped unless they apply directly. The team writes policies (Information Security Policy, Access Control Policy, Vendor Management Policy, Incident Response Policy, Change Management Policy, Business Continuity Policy) and reviews them with the auditor in a "Stage 0" gap analysis.
Phase 2 · Observation window (months 4–9, or up to 15). The Type II window is the auditor's observation period during which controls must operate. Every control needs evidence: that production access requires MFA (screenshot of the IdP config plus a CSV of recent logins), that code changes go through pull-request review (an export of merged PRs over the window showing approvals), that backups are tested (a runbook plus the most recent restore-test report), that vendors are reviewed annually (the vendor inventory with review dates). The platform vendor pulls most of this evidence automatically by connecting to GitHub, the cloud provider, the IdP, and the ticketing system. Where automation fails — a quarterly access review, a board meeting that approved the security policy — humans upload artefacts.
Phase 3 · Audit and report (months 10–12). The auditor samples evidence across the window. For a control like "all production access requires MFA", the auditor will pick 25 random logins and verify each shows an MFA event; for "code changes go through review", the auditor will pick 25 PRs and verify each has an approver. Exceptions are flagged; the team responds; the auditor either accepts the response or notes the exception in the final report. The published artefact is a SOC 2 Type II report — typically around 70 to 120 pages — that the company hands to prospective customers under NDA. The first cycle costs around USD 30,000 to 60,000 in auditor fees plus platform and engineering time; subsequent annual cycles cost less because policies, procedures, and integrations are already in place.
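The auditor's sampling step reduces to a few lines; the PR records below are hypothetical stand-ins for a GitHub export:

```python
import random

# Sampling for the "code changes go through review" control: take the
# population of merged PRs over the window, draw 25, and flag any
# without an approver as exceptions for the report.
prs = [{"id": n, "approved_by": "reviewer" if n % 40 else None}
       for n in range(1, 401)]            # PRs 40, 80, ... slipped through

sample = random.Random(2024).sample(prs, 25)
exceptions = [pr["id"] for pr in sample if not pr["approved_by"]]

# With 10 unreviewed PRs out of 400, each 25-PR sample has a real
# chance of surfacing an exception — which is exactly the point of
# sampling across the window rather than inspecting a single day.
assert len(sample) == 25
assert all(pr["approved_by"] for pr in sample if pr["id"] % 40)
```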
The honest pattern: the audit is the cheap part. The hard part is operating the controls consistently for nine months — every code change reviewed, every access reviewed quarterly, every vendor risk-assessed before onboarding. SOC 2 does not change what good engineering teams already do; it formalises the evidence trail so a third party can confirm it.
Accessibility as a standard
The Web Content Accessibility Guidelines (WCAG) are published by the W3C's Web Accessibility Initiative. The current version is WCAG 2.2, published in October 2023; WCAG 3.0 is in active development and not yet a Recommendation. WCAG organises around four POUR principles: content must be Perceivable (text alternatives for non-text content, captions for audio, sufficient contrast), Operable (keyboard accessible, sufficient time, no seizure triggers), Understandable (readable text, predictable interaction, error prevention), and Robust (parseable by assistive technology, compatible with current and future user agents).
WCAG defines three cumulative conformance levels. Level A is the minimum — about 30 success criteria covering things like text alternatives and keyboard accessibility. Level AA is the practical target — adds another 20 or so criteria covering colour contrast (4.5:1 for normal text, 3:1 for large text), focus visibility, and resize-to-200% support. Level AAA is aspirational — adds another 28 criteria including 7:1 contrast and sign-language alternatives — and the spec itself notes that AAA conformance "is not recommended as a general policy for entire sites" because some content cannot satisfy it. Almost all legal regimes and procurement requirements land at AA.
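The contrast thresholds above are defined arithmetically, not judged by eye. This is the relative-luminance and contrast-ratio calculation from the WCAG 2.x definitions, sketched in Python:

```python
# The contrast-ratio arithmetic behind the AA thresholds, following the
# sRGB relative-luminance formula in the WCAG 2.x definitions.

def relative_luminance(rgb):
    """rgb: 8-bit (r, g, b) tuple. Returns relative luminance per WCAG 2.x."""
    def channel(c8):
        c = c8 / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Ratio of the lighter luminance to the darker, offset by 0.05 each."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum, 21:1; AA requires 4.5:1 for normal text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))   # 21.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True: #767676 just clears AA
```

The grey #767676 is the classic boundary case: it is roughly the lightest grey that still passes AA on a white background.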
Adjacent to WCAG are several other standards. WAI-ARIA (Accessible Rich Internet Applications) is the W3C specification for the attributes (aria-label, aria-describedby, role="dialog") that expose semantics of custom widgets to screen readers — ARIA 1.3 is the current draft. ATAG (Authoring Tool Accessibility Guidelines) covers tools that produce web content (CMSes, IDEs, design tools) and requires that they both produce accessible output and be accessible to authors with disabilities. EN 301 549 is the European harmonised standard for ICT accessibility — it references WCAG for web content and adds requirements for non-web software, hardware, and documentation.
The legal layer turns the standard into a requirement. In the US, the Americans with Disabilities Act (ADA) Titles II and III have been interpreted by courts and the Department of Justice to require accessibility of websites and apps; Section 508 of the Rehabilitation Act requires accessibility of federal-government electronic content and procurement, with technical standards aligned to WCAG 2.0 Level AA. In the EU, the Web Accessibility Directive (2016/2102) covers public-sector websites; the European Accessibility Act (Directive EU 2019/882) extends accessibility requirements to private-sector e-commerce, banking, and consumer electronics, with the compliance deadline in June 2025. In the UK, equivalent requirements come from the Public Sector Bodies Accessibility Regulations 2018; in Australia, the Disability Discrimination Act 1992. The set of jurisdictions with binding accessibility law is now large enough that "WCAG 2.2 AA" is the de facto global baseline for any consumer-facing digital product.
The Act IXa interface chapter walks the implementation side — the ARIA roles, the keyboard handlers, the focus management — that produce a WCAG-conformant page; this section is about what the document says you have to produce, not how to produce it.
Standards drift in practice
The simplest kind of drift is the living standard. WHATWG HTML, the URL Standard, Fetch, and the DOM Standard have no version numbers and no snapshots; they are updated whenever the editors fix an inconsistency, a major browser ships a new feature, or a long-standing bug is resolved. A reader citing "HTML5" is citing a moment in time; the canonical document is whatever html.spec.whatwg.org says today. The advantage is that the document never goes stale; the disadvantage is that "what version of HTML do you support" cannot be answered with a number.
The dated-version model is the ISO and IEEE convention. ISO/IEC 27001:2013 is a different document from ISO/IEC 27001:2022; the 2013 edition is superseded but still recognised in some procurement contexts during transition windows. Anything that depends on an ISO standard has to name a year; anything that depends on a WHATWG standard inherits "as of today" semantics. Both work; the trade is between change velocity and reference stability.
Whichever model a body chooses, three drift patterns recur. Spec lags reality. A browser ships a feature that turns out to be popular; the spec catches up years later; in the interim, every other browser reverse-engineers the leading browser's behaviour rather than implementing from a document. This is how XMLHttpRequest and innerHTML reached the web platform. Reality lags spec. A document is finalised, or nearly so, but implementers decline to ship it widely. HTTP/2 Server Push was standardised and then removed from major browsers; ECMAScript 4 and XHTML 2.0 were abandoned when no implementer followed; the IPv6 transition (still ongoing in 2026) is the long-running case. Reality forks the spec. Two vendors ship divergent extensions to a shared protocol. The infamous case is Internet Explorer's CSS box model in the late 1990s, where two incompatible interpretations of element width were both deployed and the standard had to retroactively choose one. The browser-vendor coordination story since 2019 — Interop, joint WPT contributions, the Compat Standard work — exists in part because the industry decided unilateral forking is too expensive.
The historical recurring failure mode is embrace, extend, extinguish — a vendor implements the open standard, adds incompatible extensions, makes the extensions valuable enough that users depend on them, and then drops support for the unextended core. Microsoft was famously documented practising this pattern in the 1990s with Kerberos and SMB; the pattern is still observable today with proprietary extensions to open protocols. The defence is process: a standard with multiple active implementations and a public conformance suite is harder to extinguish than one with a single dominant vendor. The web platform escaped the late-1990s browser wars largely because Mozilla survived as an independent implementation; an alternate history where Mozilla shut down in 2002 has a much darker web.
The optimistic reading of the past decade is that standards drift has narrowed. The Interop 2022–2025 sequence has measurably closed cross-browser gaps; the LLM provider community is converging on OpenAI's Chat Completions API as a de facto standard with Anthropic, Google, and others publishing compatible endpoints; post-quantum cryptography is being standardised by NIST on a coordinated multi-vendor timeline rather than a unilateral one. The pessimistic reading is that consolidation in some layers (three browser engines, two mobile OSes, one search engine) makes the cost of drift higher and the cost of fork prohibitive — and that the open-standards advantage is only one strategic decision away from being undone in any given layer.
Standards
The references in this section are the standards documents and process texts this chapter described. Many readers will arrive here with a specific question — "what does ISO 27001 actually require", "where do I find the SemVer spec", "what is the conformance bar for WCAG 2.2 AA" — and the entries are organised for lookup rather than sequential reading.
Standards bodies
- IETF · ietf.org · publishes RFCs at www.rfc-editor.org. Working groups at datatracker.ietf.org.
- W3C · w3.org · publishes Recommendations, Working Drafts, and Notes. WAI at w3.org/WAI.
- WHATWG · whatwg.org · publishes living standards (HTML, DOM, URL, Fetch, Streams, Console, Compatibility).
- ISO · iso.org · publishes ISO and joint ISO/IEC standards. National-body federation.
- IEEE · ieee.org · publishes IEEE standards including 802 (LAN/MAN), 754 (floating point), 1003 (POSIX).
- NIST · nist.gov · publishes FIPS, the SP 800 series, and the NIST CSF. Post-quantum algorithms standardised as FIPS 203, 204, and 205 in 2024, with further selections ongoing.
- ECMA International · ecma-international.org · TC39 publishes ECMA-262 (JavaScript) annually; ECMA-402 specifies the Intl internationalisation API.
- Unicode Consortium · unicode.org · publishes the Unicode Standard and CLDR (Common Locale Data Repository).
- Linux Foundation · linuxfoundation.org · houses CNCF, OpenSSF, OpenJS Foundation, Hyperledger, LF AI & Data.
- OASIS · oasis-open.org · publishes open standards including SAML, MQTT, KMIP, and OData.
- OMG · omg.org · owns UML, BPMN, SysML, MDA.
- Khronos Group · khronos.org · publishes Vulkan, OpenGL, OpenCL, OpenXR, glTF, WebGL.
IETF process documents
- RFC 2026 · The Internet Standards Process — Revision 3. The procedural foundation for everything else.
- RFC 2119 · Key words for use in RFCs to Indicate Requirement Levels. BCP 14. The MUST/SHOULD/MAY incantation.
- RFC 8174 · Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words. Clarifies that only uppercase counts.
- RFC 5234 · Augmented BNF for Syntax Specifications (ABNF). The grammar notation used in most modern RFCs.
- RFC 7322 · RFC Style Guide.
- BCP 9 · the Internet Standards Process. BCP 14 · the key-word language above.
- The Tao of the IETF · the informal newcomer's guide to IETF culture and process; last published as RFC 4677, now maintained as a web document per RFC 6722.
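The RFC 8174 rule, that only the uppercase forms of the key words carry normative force, is mechanical enough to lint for. A toy scanner (the keyword list comes from RFC 2119 and RFC 8174; everything else is illustrative):

```python
# A toy illustration of the RFC 8174 rule: only uppercase RFC 2119 key
# words are normative. Longer phrases precede their prefixes ("MUST NOT"
# before "MUST") so the alternation matches the full phrase first.
import re

KEYWORDS = [
    "MUST NOT", "MUST", "REQUIRED", "SHALL NOT", "SHALL",
    "SHOULD NOT", "SHOULD", "NOT RECOMMENDED", "RECOMMENDED",
    "MAY", "OPTIONAL",
]
PATTERN = re.compile(r"\b(" + "|".join(re.escape(k) for k in KEYWORDS) + r")\b")

def normative_keywords(text):
    """Return the uppercase RFC 2119 key words found in spec text, in order."""
    return PATTERN.findall(text)

text = "The client MUST send a token; it should retry, and MAY log the error."
print(normative_keywords(text))  # ['MUST', 'MAY'] — lowercase "should" is not normative
```

A real spec linter would also check that each occurrence has a testable subject, but the uppercase/lowercase split alone already catches the most common authoring mistake.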
Versioning and compatibility
- semver.org · Semantic Versioning 2.0.0.
- calver.org · Calendar Versioning conventions.
- hyrumslaw.com · Hyrum's Law canonical statement.
- caniuse.com · web-platform feature support matrix.
- developer.mozilla.org/en-US/docs/Web · MDN Web Docs; its compatibility tables are generated from the mdn/browser-compat-data project.
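The SemVer 2.0.0 precedence rules listed at semver.org are subtle enough to be worth sketching: pre-releases sort before their release, numeric identifiers compare numerically and sort below alphanumeric ones, and build metadata never affects ordering. A minimal, illustrative sort key (assuming well-formed version strings, not a validating parser):

```python
# A sketch of SemVer 2.0.0 precedence as a Python sort key. Each pre-release
# identifier maps to a tuple so numeric identifiers (0, n, "") always sort
# below alphanumeric ones (1, 0, s), per the spec's precedence rules.

def semver_key(version):
    version = version.split("+", 1)[0]            # build metadata is ignored
    core, _, prerelease = version.partition("-")
    nums = tuple(int(x) for x in core.split("."))
    if not prerelease:
        return (nums, 1, ())                      # a release outranks its pre-releases
    ids = tuple(
        (0, int(part), "") if part.isdigit() else (1, 0, part)
        for part in prerelease.split(".")
    )
    return (nums, 0, ids)

versions = ["1.0.0-alpha", "1.0.0-alpha.1", "1.0.0-beta", "1.0.0", "1.0.0-rc.1"]
print(sorted(versions, key=semver_key))  # the bare release 1.0.0 sorts last
```

This reproduces the worked ordering in the SemVer spec itself (alpha < alpha.1 < beta < rc.1 < release), which is a convenient sanity check for any implementation.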
Interop testing
- web-platform-tests.org · the cross-browser conformance suite.
- wpt.fyi · per-browser dashboard.
- web.dev/interop-2025 · the current annual Interop cohort.
- IETF Hackathon · ietf.org/how/runningcode/hackathons.
- Khronos Conformance Test Suite (CTS) · github.com/KhronosGroup.
- OpenID Foundation certification · openid.net/certification.
Compliance frameworks
- AICPA SOC 2 Trust Services Criteria · aicpa-cima.com.
- ISO/IEC 27001:2022 · the information-security management standard.
- ISO/IEC 27002:2022 · the controls catalogue referenced by 27001 Annex A.
- PCI-DSS v4.0 · pcisecuritystandards.org.
- HIPAA Security Rule · 45 CFR Part 164, Subpart C.
- GDPR · Regulation (EU) 2016/679. Article 32 for security measures; Article 25 for data protection by design.
- EU AI Act · Regulation (EU) 2024/1689.
- FedRAMP · fedramp.gov. Authorisations at Low, Moderate, High impact levels.
- NIST SP 800-53 Rev. 5 · the control catalogue underneath FedRAMP.
- NIST Cybersecurity Framework 2.0 · the high-level framework most US enterprises adopt.
Accessibility standards
- WCAG 2.2 · w3.org/TR/WCAG22. The current recommendation.
- ARIA 1.3 · w3.org/TR/wai-aria-1.3. The accessibility attribute set for custom widgets.
- ATAG 2.0 · w3.org/TR/ATAG20. For authoring tools.
- EN 301 549 v3.2.1 · the European harmonised ICT-accessibility standard.
- EAA · Directive EU 2019/882. Compliance from 28 June 2025 for in-scope products.
- ADA Title II / III · adata.org. Section 508 · section508.gov.
Foundational texts
- David Clark (1992) · "We reject kings, presidents, and voting. We believe in rough consensus and running code." The IETF working ethos.
- Lawrence Lessig (1999, rev. 2006) · Code and Other Laws of Cyberspace — code as a regulatory force comparable to law.
- Yochai Benkler (2006) · The Wealth of Networks — open standards as the substrate of commons-based peer production.
- Carl Cargill (1989) · Information Technology Standardization: Theory, Process, and Organizations — the still-standard textbook on standards-process design.
- Andrew Updegrove · consortiuminfo.org — long-running blog covering standards-body politics.
Cross-act references
- Act VIIa — the cryptographic primitives the security-standards section sits on.
- Act I — Unicode, IEEE 754, the foundational data agreements.
- Act Va — the IETF protocols (IP, TCP, TLS, HTTP) this page described the process behind.
- Act IXa — WCAG and ARIA as implemented in the web platform.
- Act IXb — the engineering discipline of writing ADRs and internal RFCs inside a team.
- Act X — post-quantum cryptography as a current example of a coordinated multi-vendor standardisation.
Branches that earn their own article
- Reading and writing RFCs.
- The IETF working-group process in depth.
- W3C and WHATWG governance.
- Web Platform Tests and the Interop project.
- SOC 2 / ISO 27001 audit playbooks.
- GDPR and EU AI Act technical implications.
- Patent policy and SDOs (standards-development organizations).
- OASIS, OMG, Khronos and other domain-specific bodies.
- The history of standards wars (Betamax, Blu-ray, OOXML vs ODF).
- Reference implementations as a standardization vector.