Your Monolith Is a Distributed System (And Your Type System Can’t Save You)

Here’s a truth that functional programmers don’t want to hear: the very tools that make you excellent at reasoning about programs can make you dangerously overconfident about systems.
The Core Insight

Ian K. Duncan’s essay on what functional programmers get wrong about systems hits on something that most programming communities refuse to acknowledge: the unit of correctness in production is not the program—it’s the set of deployments.
When Haskell’s type checker tells you your program is well-typed, it has verified properties of a single artifact. One binary, one version, one coherent snapshot. But in production, that artifact is just one member of an ensemble that includes:
- The current deploy serving new requests
- The previous deploy draining connections
- Background workers running 2-3 deploys behind
- A database whose schema has already been migrated ahead of some of the code reading it
- Serialized data on Kafka written by code that no longer exists
- Third-party webhooks conforming to their schema, not yours
The type checker verified one element. It told you nothing about interactions between elements. And that’s exactly where production bugs live.
Why This Matters

Consider a simple sum type in Haskell:
data PaymentStatus
  = Pending
  | Completed
  | Failed
You add a constructor:
data PaymentStatus
  = Pending
  | Completed
  | Failed
  | Refunded -- new!
For the next several minutes of a rolling deployment, old workers will receive messages containing Refunded and crash: their pattern matches were exhaustive against the old type, so they have no case for the new constructor. The exhaustive match that felt like an ironclad guarantee guaranteed nothing about a world where multiple code versions coexist.
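One standard engineering response is the tolerant reader: decode tags you don’t recognize into a catch-all instead of failing. Below is a minimal sketch using aeson; the textual wire encoding, the tag spellings, and the Unknown constructor are illustrative assumptions, not something from the original essay.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON (..), withText)
import Data.Text (Text)

-- The old worker's view of the type, plus a catch-all for tags
-- it has never seen (hypothetical design, for illustration).
data PaymentStatus
  = Pending
  | Completed
  | Failed
  | Unknown Text -- any tag this build doesn't recognize

instance FromJSON PaymentStatus where
  parseJSON = withText "PaymentStatus" $ \tag ->
    pure $ case tag of
      "pending"   -> Pending
      "completed" -> Completed
      "failed"    -> Failed
      other       -> Unknown other -- "refunded" lands here, no crash
```

The exhaustiveness checker now flags every consumer that hasn’t decided what to do with an Unknown status, which is exactly the conversation the type system should be forcing.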
This is why Protocol Buffers uses numeric field tags, why Avro requires both writer’s and reader’s schemas at deserialization, and why Erlang’s BEAM VM supports exactly two module versions during hot upgrades. These aren’t quirks—they’re engineering responses to the reality that producers and consumers will be at different versions.
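The numeric-tag idea generalizes beyond protobuf. Here is a hedged Haskell sketch of the same escape hatch, with tag numbers that are purely illustrative:

```haskell
-- Numeric-tag decoding in the spirit of Protocol Buffers' field
-- tags (the tag assignments here are illustrative). An unknown tag
-- is carried through and re-encoded untouched, so an old worker can
-- forward a status it cannot interpret instead of crashing.
data WireStatus
  = WirePending
  | WireCompleted
  | WireFailed
  | WireUnknown Int -- tags added by newer deploys

decodeStatus :: Int -> WireStatus
decodeStatus 0 = WirePending
decodeStatus 1 = WireCompleted
decodeStatus 2 = WireFailed
decodeStatus n = WireUnknown n -- e.g. 3 = Refunded, added later

encodeStatus :: WireStatus -> Int
encodeStatus WirePending     = 0
encodeStatus WireCompleted   = 1
encodeStatus WireFailed      = 2
encodeStatus (WireUnknown n) = n -- round-trips without loss
```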
Key Takeaways
- **Every production system is distributed:** If you have multiple servers, background workers, cron jobs, or calls to any external service, you’re operating a distributed system. The word “monolith” describes your deployment artifact, not your runtime topology.
- **The two-version rule emerges everywhere:** Google’s F1 database, Erlang’s BEAM, rolling deploys: all converge on constraining systems to at most two versions running simultaneously. This is what makes the compatibility problem tractable.
- **Rollbacks are more dangerous than they appear:** When you roll back code but can’t roll back schema changes, you create a combination (old code, new schema) that nobody ever tested. Forward-only deployment is often safer.
- **Message queues are version time capsules:** RabbitMQ with short retention gives you the two-version problem. Kafka with 30-day retention means every serialization format you shipped in the past 30 days still coexists on the topic. Infinite retention means every format you ever used, forever.
- **Expand-and-contract isn’t a recipe; it’s a discipline:** Add a nullable column, deploy code that writes to both old and new, backfill, deploy code that reads from the new column, drop the old one. Four deploys for what feels like one change (see the sketch after this list). This is the cost of correctness across versions.
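To make the expand phase concrete, here is a minimal sketch of the dual-write deploy, assuming postgresql-simple and a hypothetical payments table with old column status and new column status_v2:

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Database.PostgreSQL.Simple (Connection, execute)

-- Deploy 2 of expand-and-contract: write the status to both the
-- old column and the new one, so code at either version reads
-- consistent data during the rollout. (Table and column names are
-- hypothetical; deploy 1 added status_v2 as a nullable column.)
recordStatus :: Connection -> Int -> String -> IO ()
recordStatus conn paymentId status = do
  _ <- execute conn
         "UPDATE payments SET status = ?, status_v2 = ? WHERE id = ?"
         (status, status, paymentId)
  pure ()
```

Only after the backfill finishes and the read path has moved to status_v2 is it safe to run the contracting migration that drops the old column.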
Looking Ahead
The functional programming community has built extraordinary tools for reasoning about programs. The next frontier is reasoning about systems—about version compatibility, about schema evolution, about the interactions between artifacts running at different points in your codebase’s history.
The research exists. Dynamic Software Updating (DSU) theory proved back in 1996 that the validity of a general online update is undecidable. Bidirectional schema transformation theory was published in 2017. Yet most practitioners reinvented expand-and-contract through trial, error, and 3 a.m. incident retrospectives.
Maybe it’s time to close that gap between theory and practice. The type system can’t help you here. But clear thinking about version boundaries can.
Based on: “What Functional Programmers Get Wrong About Systems” by Ian K. Duncan