In part 1 of this series I looked at the promise of literate programming (LP), created by Donald Knuth. It seems to be a fairly simple idea to promote good design and reduce bugs. But it never really took off after its inception in the 1980s. In this article I’ll look at some possible reasons.
The problems and challenges here are from discussions with individuals, and from two sources on the web: a Hacker News discussion, and a great post by Bob Myers. The Bob Myers piece is particularly thoughtful, and is the second in a three-part series on LP. (The third is about how we might make it a success in today’s world.)
This article is not a repeat of discussions within those links; it’s just a summary of problems and challenges brought against LP. I don’t agree that all of these are wholly justified or insurmountable, but they exist in some people’s heads, so they are genuinely felt, and few of them are trivial. That means they do need to be addressed if LP is to achieve some success.
The problems, then, can be grouped into categories…
The general theme here is that times are different, and so are our tools. Knuth was programming in Pascal and, it is argued, he was addressing limitations we don’t have today. His Pascal didn’t have macros or pre-processor directives (which LP supplies in its own way), and he didn’t have source control.
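For context, the “macros” in question are LP’s named chunks: the tangle step expands chunk references into compilable source, much as a macro processor would. Here is a minimal sketch in noweb-style syntax (the chunk names and the trivial code inside them are my own, purely illustrative):

```
@ The outline is simple: read the input, then report on it.

<<*>>=
<<read the input>>
<<report on it>>

@ We read the whole file at once, since inputs here are small.

<<read the input>>=
with open("input.txt") as f:
    lines = f.readlines()
```

Running `notangle` over a file like this stitches the chunks into ordinary source code; `noweave` typesets the prose and code together as a document.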
Today we have source control, which lets us go back and see why a line of code is the way it is (perhaps why we made a design decision). We have IDEs now, too, which help us find our way around complex code in ways we couldn’t in the ’80s, and give us capabilities such as easy refactoring. Since IDEs are very often our code editors of choice, if they offer no literate programming editing environment then there’s less inclination to use LP. Some IDEs also make it particularly easy to view source history, which amplifies its value.
There’s a general point about modern tooling, too. Since LP introduces another tool, we have to integrate it into our tool chain, including our IDE, compiler, source maps, and automated tests. It’s another dependency.
There are a few challenges which centre around individuals—our mindset, psychology, personal needs, biases, and so on. These are the kinds of things that apply to all of us; we’re all just human.
Many of us believe (from experience, of course) that our code will be obsolete soon anyway, so why take the extra time to write a beautiful design document along with it? Especially as most developers are not writers, so we’re asking them to do something they’re not attuned to (Knuth is a writer, but he’s a rarity). And of course there’s the “what’s in it for me?” question: a design document is for future developers, not the present one, so there’s no personal incentive to explain the design well.
There are also developers with a strong antipathy towards code commentary. Some are against comments, let alone higher-level design documents; some claim “good code is sort of literate” anyway; some think literate programming is just over-commenting. You may disagree with those views (I do), but they are real objections.
The personal challenges also feed into the…
Just like test-driven development, literate programming takes more time, so there’s pressure from management (not just oneself) to cut down on the design-writing. We rarely get the chance, as one Hacker News contributor put it, to do “tranquil and rigorous” work in this way.
It clearly also demands discipline. Do we have that? Do we have the time for that? It’s arguably also harder to revise a design story that’s already been written. And it’s difficult to tell the story of evolving code, as we would in a tutorial, where we want to say “let’s write it like this for the moment…” and then later on say, “But knowing what we know now we’ll replace it with this…”
Boilerplate code can be a challenge, too—surely that doesn’t deserve a design explanation, does it? But LP requires explanation, so we can end up writing “boilerplate narrative” for it, wasting both the writer’s and the reader’s time.
Finally in this area, it’s not obvious how to write large design documents about large modular systems; the original TeX design document is a single 535-page PDF, not broken into chapters, and it generates one monolithic program. How does this work in a world of objects, libraries, serverless and microservices?
Poor industry culture
There are many wonderful things that make the software industry so enviable, but sadly professionalism isn’t one of them. Our incredible pace of change comes at the cost of, say, the code of ethics and the evidence-based decision-making that are so deeply embedded in medicine.
And so as an industry we forget our history quickly (almost willingly); insights from the 1980s seem barely relevant to developers who were born in the 1990s. The decline of design is probably also a factor; it’s not regarded as highly as it was a couple of decades ago or more (when compilers were slower, or you had to wait for your place in the mainframe’s execution queue). The rise of agile, which places working software above comprehensive documentation, must also take some of the blame, as must the lean startup; both have (happily) raised expectations of very rapid turnaround, from weeks and months down to hours and days.
And finally in this section, I can’t help thinking that the development community does have a tendency to follow what’s cool, and who’s cool, rather than what’s been shown to work. If today’s developer rock stars aren’t advocating LP then it’s not likely to get much traction. (Fun quiz: if Donald Knuth were a developer rock star of the past, which rock star would we compare him to?)
And that leads us into…
Maybe this isn’t the fault of us today; maybe some of the blame can be placed on the advocates and originators of LP. Perhaps literate programming just wasn’t communicated well enough. To take just one example, the third article in Bob Myers’s series mentioned at the start of this post, on how we might be successful with LP today, seems never to have been published. So we have to look mainly back to LP’s original examples, and Kartik Agaram argues that “the early examples are just not that good”.
So the powerful case studies are missing. All the quotes I used in the introductory article are from individuals, making it seem like LP is a single-person activity; the great case study, TeX, was largely just that. There don’t seem to be leaders of companies, or even technical leaders within companies, stepping forward and saying, “We use this technique, it’s what’s made us the success we are.”
On the other hand…
I don’t want to make a point-by-point rebuttal of all the above challenges, but I will say briefly that I don’t believe they are insurmountable, even though most of them are reasonable. I’ve been in many organisations where significant changes have happened successfully: changes in culture, changes in process, changes in tooling. It’s never easy, but it can happen.
We can ask whether these problems and challenges are problems intrinsic to LP, or whether we can raise an interest in literate programming by some judicious adjustments to our communication, tool chains, processes and culture. But I dislike binary thinking, so I’m inclined to say that it must be a bit of both. I’m sure it’s not appropriate to pick up the processes of the 1980s and try to apply them today as they were intended to be used then. Equally, those advocating literate programming and those who might use it could both do well to adjust the way they talk, think and work.
In the next article in the series I’ll look at some current approaches to literate programming—some which stick faithfully to the original idea, and some which move further away in an attempt to fit better with existing tools and techniques.