data Blog = Blog { me :: Programmer, posts :: [Opinion] }

The Danger of “Simplicity”

There are a few tendencies among programmers that involve the totem of “simplicity.” There’s the ancient concept of KISS, of course, but there’s also the much-abused YAGNI, the insistence that we “Choose Boring Technology,” entire languages that base their elevator pitch around the idea that they’re “simple,” and the concept in object-oriented design that “every class should have a single responsibility.”

The problem with these approaches is that “simplicity” never gets a rigorous definition. The rhetoric around them is tautological: “Do the simple thing.” “Which one is that?” “The simple one, isn’t it obvious?” If you disagree with the assessment of what constitutes simplicity, you’re met with incredulity. “What do you mean, you think Go’s error handling is bad? It’s simple!” How can we make this less confrontational? How can we articulate a position more powerful than “syntax highlighting is juvenile?”

The Purpose of Programming

It’s easy to get lost in minutiae. Who hasn’t shaved a yak? Still, the ultimate purpose of programming as a profession or hobby is clear: we want to apply the raw power of computation to a problem, and know that the “recipe” we’ve written can be trusted to solve the problem we think that it does. Code, in itself, is not an asset. A solved problem is an asset.

The types of programs that we need to deliver, their performance models, their failure modes, and the methods we can use to understand them are all different depending on the environment in which we work, but the need for trust in our programs is universal. I think the worst fear for a programmer is a program that not only misbehaves, but also misbehaves in ways we don’t understand. We want tools that do what they’re supposed to do, or, if they can’t, that fail predictably in ways that we can quickly understand and fix.

When someone argues for “simplicity,” they are almost always arguing implicitly for properties that they believe make a program more trustworthy. The problem emerges when programmers forget that their preferred model of success does not apply to every kind of program, and that each choice imposes costs.

What are we optimizing for?

The creators of Go have said, sometimes in pejorative terms,1 that “average programmers” have difficulty understanding complex languages, and have set themselves against that complexity by designing one with a relatively small syntactic grammar, error handling based on return codes, and no generics. They looked at the landscape of “complicated” languages and decided that the form of simplicity that they’d optimize for would be the initial learning curve.

On the other hand, Rust’s designers aimed at a different concept of simplicity. They wanted to protect users from the kinds of memory errors that have led to countless CVEs and bugs in C and C++ programs. They did so by embracing the type-theoretic concept of “affine types,” which surfaces in Rust as ownership, borrowing, and “lifetimes.”2 They also eliminated the traditional idea of null3 by making Option and Result types, in the spirit of Haskell’s Maybe and Either, core to the language.
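
To make that concrete, here is a minimal sketch of what Option and Result look like in place of null and unchecked failure; the names (User, find_user, parse_port) are invented for the example, not taken from any real API.

// A minimal sketch of Option and Result standing in for null and unchecked
// failure. The names (User, find_user, parse_port) are hypothetical.

struct User {
    name: String,
}

// Instead of returning a possibly-null reference, the signature says "maybe absent."
fn find_user(id: u64) -> Option<User> {
    if id == 42 {
        Some(User { name: String::from("Ada") })
    } else {
        None // the caller must handle absence; there is nothing to dereference by mistake
    }
}

// Instead of failing silently or throwing, the signature says "this can fail, and how."
fn parse_port(raw: &str) -> Result<u16, std::num::ParseIntError> {
    raw.parse::<u16>()
}

fn main() {
    // The compiler will not let either value be used until both cases are handled.
    match find_user(7) {
        Some(user) => println!("found {}", user.name),
        None => println!("no such user"),
    }
    match parse_port("8080") {
        Ok(port) => println!("listening on {}", port),
        Err(err) => println!("bad port: {}", err),
    }
}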

Both of these decisions, in major modern languages, solve certain classes of problems, and both pay a price for their choices. Go’s error handling has become a meme, for example. Articles to help with understanding lifetimes in Rust are nearly as ubiquitous as monad tutorials for the aspiring Haskeller, and it turns out that a lot of important data structures (doubly linked lists among them) are pretty hard to implement when you have to satisfy the borrow checker. Still, you can’t say that either language has failed in its goals; it’s just that each has chosen different tradeoffs.

In other words: every decision made in order to simplify a program will cost something.

Down to Earth

Most programmers aren’t writing programming languages, so the choice of “what to simplify” may seem esoteric, but it’s not. Consider data serialization. When you first implement a web service, it can seem “simple” to model both deserialization and serialization using the same objects. Then, since you’re implementing something new, you naturally want to keep it DRY, so you hook up your serialization library to your ORM models.

Then it turns out that you need to add a field to a table for batch processing and omit it from the emitted JSON. Then it turns out that the enormous client that is going to pay your salaries for the next three years really needs XML output. Then it turns out that the other client, the one owned by your main investor’s brother, is intrinsically incapable of sending in any input other than DOS-formatted CSVs, for some reason.
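
To see what that coupling looks like in code, here is a minimal sketch in Rust using the serde and serde_json crates; the Widget type, its fields, and the batch_token field are hypothetical stand-ins for the scenario above.

// A minimal sketch of the "one type for everything" shortcut, using the serde
// and serde_json crates. Widget and its fields are hypothetical stand-ins.
use serde::{Deserialize, Serialize};

// In the scenario above, this one definition is the ORM model, the request
// body, and the response body. Convenient on day one; later, every change to
// any one of those three shapes touches all of them.
#[derive(Serialize, Deserialize)]
struct Widget {
    id: u64,
    name: String,
    // Added for batch processing. It is now part of the emitted JSON unless
    // you remember to annotate it away in the shared definition.
    batch_token: Option<String>,
}

fn main() -> Result<(), serde_json::Error> {
    let widget = Widget {
        id: 1,
        name: String::from("sprocket"),
        batch_token: Some(String::from("internal-only")),
    };
    // The internal field goes out over the wire along with everything else.
    println!("{}", serde_json::to_string(&widget)?);
    Ok(())
}

Keeping the internal field off the wire now means either annotating the shared definition or splitting the wire format from the storage model.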

Suddenly, it looks like unifying serialization, deserialization, and persistence involved a few more tradeoffs than it seemed when you were writing class Widget: in that fresh new git repository, right?

The Elusive Universal

If everything is a tradeoff, how do we make any decision about the design of our software at all? It’s tempting to make hard-and-fast rules. There are plenty of those: “Methods should never have more than five lines of code,” “no line of code should be written without a unit test,” or “all production code is pair programmed.” What rules like these do, though, is hide the choice from you (or lie about it). It’s not that the choice wasn’t made; it’s that someone else made it, and now wants to convince you that their opinion has no cost.

Instead of accepting this at face value, think about how to make those choices for yourself. Think about the different ways that code can be simple, and about what each particular kind of simplicity trades away.

For example, dynamically typed languages like Python (or Ruby, Lisp, etc.) are easy and pleasant to write. They avoid the difficulty of convincing a compiler that you’ve written your code correctly, but they’re harder for a new team member to comprehend: you’re trading fluent writing for more laborious reading.

Short functions, methods, and classes are quick to read, but they must either be built from many lower-level functions or themselves form parts of larger compositions in order to accomplish anything. You’re trading away locality in favor of concision.

Error codes are easier to understand than exceptions or Result types, but they don’t carry much information. You’re trading ease of comprehension for difficulty of debugging. Exceptions carry a great deal of information but break the sequentiality of the code. Result types can carry information and preserve sequentiality, but can require a lot of “plumbing” in order to compose and handle different types of errors.
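
As a rough illustration of that plumbing, here is a small Rust sketch that routes two unrelated error types through one Result; the ConfigError type and read_port function are invented for the example.

// A minimal sketch of the Result "plumbing" mentioned above: composing two
// different error types behind one function. The names are illustrative.
use std::fmt;
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {
    Io(std::io::Error),
    BadPort(ParseIntError),
}

// Each underlying error needs a conversion before `?` can thread it through.
impl From<std::io::Error> for ConfigError {
    fn from(e: std::io::Error) -> Self {
        ConfigError::Io(e)
    }
}

impl From<ParseIntError> for ConfigError {
    fn from(e: ParseIntError) -> Self {
        ConfigError::BadPort(e)
    }
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::Io(e) => write!(f, "could not read config: {}", e),
            ConfigError::BadPort(e) => write!(f, "invalid port: {}", e),
        }
    }
}

// Errors stay in the return type and control flow stays sequential,
// but only because of the From impls above.
fn read_port(path: &str) -> Result<u16, ConfigError> {
    let raw = std::fs::read_to_string(path)?; // io::Error -> ConfigError
    let port = raw.trim().parse::<u16>()?;    // ParseIntError -> ConfigError
    Ok(port)
}

fn main() {
    match read_port("port.txt") {
        Ok(port) => println!("port {}", port),
        Err(e) => println!("{}", e),
    }
}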

Each of these examples illustrates choices you need to make. You can reduce the number of things you have to care about: limiting your choice of programming languages to a particular set, committing to automatic formatters, or standardizing on a DevOps toolchain all cut down the decisions that need to be made. In the end, though, it will come down to “how should I implement this feature?” and you’ll make better decisions about that if you understand that simplicity is not simple.

And that you should not trust anyone who claims it is.


  1. It’s inflammatory, but the article “Why Go’s design is a disservice to intelligent programmers” is worth reading just for the quotes from Rob Pike.

  2. Consider reading the three chapters “Ownership,” “References and Borrowing,” and “Lifetimes” from the online Rust Book, if you want to understand this concept.

  3. Tony Hoare’s presentation on the subject, “The Billion Dollar Mistake,” is a great talk and explains why they’d want to do something so weird.