The M-Word: The Culture of Programming
In addition to it being useful, it is also cursed and the curse of the monad is that once you get the epiphany, once you understand—“Oh, that’s what it is!”—you lose the ability to explain it to anybody. — Doug Crockford
I had a problem, once. I was working on a Rails application which generated integration files to be consumed by external partners. They were generated every night in a dozen formats and shipped around the Internet to populate sites you’ve probably used. Since it was a typical Rails application, the code used the Rails ORM directly:
builder = Nokogiri::XML::Builder.new do |xml|
  xml.Widget do |xw|
    # Direct ORM access: any nil column blows up unless guarded by hand.
    xw.Title db_widget.title.titlecase
    xw.Description db_widget.description.truncate(120)
    xw.Color db_widget.color.name unless db_widget.color.nil?
  end
end
All of the builders were in different formats and made different assumptions about the data, and almost all of the fields were nullable in the database. Any generator that omitted a .nil? check anywhere could suddenly break.
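To make the failure mode concrete, here is a contrived repro (the field is illustrative; the error is the one Ruby raises when a nil slips past a guard, since truncate is a String method and nil has no such method):

db_widget.description
# => nil, for any row where the column was never filled in
db_widget.description.truncate(120)
# NoMethodError: undefined method `truncate' for nil:NilClass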
My solution was relatively straightforward. Since all of these integrations were built on the same data structure, I decided to add an additional shim to encode the true state of the world, eliminating direct accesses to the database. This would allow me to control those pesky NoMethodErrors that were causing our integrations to fail every few days, and as a side effect reduce queries to the database. After the change, the code looked something like this:
builder = Nokogiri::XML::Builder.new do |xml|
  xml.Widget do |xw|
    # Every value must be unwrapped explicitly; defaults are stated inline.
    xw.Title shim.widget.title.titlecase.or_else "Untitled"
    xw.Description shim.widget.description.truncate(120).or_else ""
    # map runs the block only when a color is present, replacing the nil check.
    shim.widget.color.map { |color| xw.Color color.name }
  end
end
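For reference, the core of such a wrapper is small. This is my own minimal sketch, not the library’s actual code, and it omits the method forwarding (titlecase evidently passing through to the wrapped string) that the real library provided:

# A minimal Maybe-style wrapper, for illustration only.
class Maybe
  def initialize(value)
    @value = value
  end

  # Unwrap: hand back the value, or the default when nothing is there.
  def or_else(default)
    @value.nil? ? default : @value
  end

  # Transform: run the block only when a value is present.
  def map
    @value.nil? ? self : Maybe.new(yield(@value))
  end
end

Maybe.new(nil).or_else("Untitled")                  # => "Untitled"
Maybe.new("red").map { |c| c.upcase }.or_else("-")  # => "RED"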
Two problems solved, right? Performance improved because the shim could query once and cache the result for many different exports. Data-related failures were all but eliminated, because a failure to explicitly unwrap the shim value showed up during testing instead of in production. Easy-peasy.
Unfortunately, my technical solution was a social debacle. I’m not sure of a better way to phrase it: many of my coworkers found the code baffling after I changed it, and I made the mistake of using the term “monad” to refer to the pattern. Overnight, I became “the monad guy.” During scrums, people would ask me questions like “Are you putting monads anywhere else?” and “How are the monads going?” I even tried to give a lunch and learn on how to use the library, and people showed up just to make sarcastic comments. The situation contributed to my departure from the company later that year.
Where did I go wrong?
I knew about monads from previously learning Haskell. The company, however, wasn’t a Haskell shop. We wrote Ruby and Java, and most of the developers there had no interest whatsoever in Haskell. Furthermore, I approached the problem from an enthusiast’s standpoint. I explained what I thought the advantages were, what I would use them for, and the theory behind them. Worse, I introduced a library which was explicit about being a monadic Maybe type. No beating around the bush: its tagline was “a Maybe monad for Ruby.”
If I could, I would go back in time and avoid all of that. Instead, I would write a small class within the codebase, less general and powerful than the library, and never bring the issue up in scrum (raising it there had only been necessary because of the company’s policy on adding new libraries). In this hypothetical timeline, I would never have uttered “the M-word” at that company at all.
When In Rome
Every language has its culture. In my previous article, “The Danger of ‘Simplicity,’” I contrasted the different choices that Go and Rust have made in terms of what to simplify. Those choices pervade the communities: the Go community debates whether new language features will have too large of an impact on compilation time. The Rust community debates the compiled size of the format! macro in embedded contexts. Both are equally valid concerns, and the significance of each issue arises from the choices made in the language design.
Furthermore, although you can disagree with aspects of a language’s culture, it makes little sense to fight it. If your company writes Ruby, write Ruby. Don’t try to write Haskell (or Go, or Rust). You’ll start at a disadvantage. For example, function calls are very expensive in Ruby and Python, to the degree that inlining simple methods can be an early step in performance optimization.
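As a rough illustration (absolute numbers vary widely by interpreter and version, so treat this as a sketch), you can see the overhead of a trivial method call with the standard library’s Benchmark module:

require "benchmark"

def double(n)
  n * 2
end

nums = (1..1_000_000).to_a

Benchmark.bm(8) do |bm|
  # Pays the cost of one method dispatch per element.
  bm.report("call")   { nums.each { |n| double(n) } }
  # Same arithmetic with the method body inlined by hand.
  bm.report("inline") { nums.each { |n| n * 2 } }
end

On most Ruby versions the inlined loop runs measurably faster, which is exactly the kind of micro-optimization that rarely matters in a compiled language.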
If you need to “import” a concept—and I stand by the structure of my solution to the integration problem above—do so in an idiomatic way. If I had referred to the shim as a DatabaseCache and used methods like fetch and has? instead of the library’s or_else and map, I think that the familiar language would’ve solved the social problem, despite an uglier API.
builder = Nokogiri::XML::Builder.new do |xml|
  xml.Widget do |xw|
    # Hash-like fetch with a default reads like everyday Ruby.
    xw.Title db_cache.widget.fetch(:title, "Untitled").titlecase
    xw.Description db_cache.widget.fetch(:description, "").truncate(120)
    xw.Color db_cache.widget.fetch(:color).fetch(:name) if db_cache.widget.has?(:color)
  end
end
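To be clear, DatabaseCache is a name I am inventing for this retelling; nothing like it ever shipped. A sketch of what it might look like, with db_cache.widget handing back one of these wrappers built from a single eager-loaded query:

# Hypothetical sketch of the idiomatic shim; illustrative, not production code.
# It holds a row's attributes as a nested Hash, loaded once up front.
class DatabaseCache
  def initialize(attributes)
    @attributes = attributes
  end

  # Like Hash#fetch with a default: nil columns fall back to the default,
  # and associated rows come back wrapped so fetch chains keep working.
  def fetch(key, default = nil)
    value = @attributes[key.to_s]
    return default if value.nil?
    value.is_a?(Hash) ? DatabaseCache.new(value) : value
  end

  def has?(key)
    !@attributes[key.to_s].nil?
  end
end

widget = DatabaseCache.new(
  "title" => nil,
  "color" => { "name" => "red" }
)
widget.fetch(:title, "Untitled")   # => "Untitled"
widget.fetch(:color).fetch(:name)  # => "red"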
Such compromises eliminate the need for theoretical explanations of what’s happening. They reduce the problem of unfamiliarity when new team members need to read or review your code. Most importantly, if you design your software with a concept in mind, you can achieve many of its benefits even without explaining to your colleagues the reasoning behind the design.
…but why is it hard, anyway?
I opened this post with a Douglas Crockford quote. I think it’s more generally applicable: you can learn about a concept and have that concept change your way of thinking, only to find that afterwards you can’t explain why. It’s very difficult to shift your thinking back to how it was before you learned something, especially if it or its prerequisites were significant. Monads are an example, but not the only one: object-oriented programming had a reputation as “academic nonsense” during its rise, and Lisp has more “learn Lisp to free your mind!” memes than any other language.
Furthermore, a challenge to a programmer’s methods or knowledge can seem like a personal attack. Programming is pure thought, and the results of a new technique are not immediately obvious. Contrast something like building furniture. If you see someone demonstrate a superior technique, the performance of that technique and its results are obvious. It’s easy to be egoless in such immediate crafts, because you want to produce the thing you’ve just seen. In programming, understanding the product of a new technique often requires you to understand the technique in question and, worse, the problems with your own approach that motivate the change.
Complicating the problem is the issue that I mentioned in my last post: since all programmers are self-taught, it’s easy to stumble into someone’s knowledge gaps when trying to explain something. Defensiveness is a common response to this inadvertent “insult.” If you are suffering from imposter syndrome and someone shows you a technique motivated by a problem with your current practices, it feels like a validation of that imposter syndrome. The only way to combat this is to be as compassionate as possible in teaching, and reinforce for your students, repeatedly, that failure to understand a new concept is not a personal indictment. For any difficult idea, every student will have a different moment of epiphany. Tutorials written from different perspectives will reach different people.