Category: Software design

QCon London 2009: A few lessons learned

Last week I attended QCon London 2009, which was characteristically excellent — and I know my colleagues who attended thought the same. Most excitingly this year two of the Guardian development team were invited to give presentations on our most recent work building a large content management system for a very large website. I don’t mind admitting that I learnt one or two things from attending those talks; I also found the presentation on the BBC’s architecture fascinating. Here are some of the things I took away…

Mat Wall on The evolving architecture

Mat’s presentation was structured around rebuilding our site over many months and dealing with the scalability issues that presented themselves at various stages. Some of the many insights were presented in a deliberately provocative way, such as this one:

Developers try to complicate things; architects simplify them.

I’m not entirely sure about the first part, but it’s certainly true that over-engineering is a much more common trait than under-engineering. The second part, though, is very important. It’s something I’ve always known, but never quite in that pithy way. It’s a useful phrase to tuck away for future reference.

And if you think Mat wasn’t feeling too good towards developers, don’t worry, because he also put up a slide saying

Developers are better than architects.

The message behind this is that it’s the developers who are working on the code each day — they know what’s possible and what’s not, and what’s reasonable and what’s not. So it’s important to trust them when they say they can or can’t do something.

Dirk-Willem van Gulik on Forging ahead – Scaling the BBC into Web/2.0

Dirk-Willem talked about the programme to re-engineer the BBC websites to enable greater scalability and dynamic content. In an organisation of the size and complexity of the BBC you need to spend more time thinking about people than about the technology. He underlined this in one of his first slides: he presented the seven layers of the OSI model (physical up to application) and then said

But most people forget there are two layers on top of that: level 8, the organisation, and level 9, its goals and objectives.

From that point on he kept referring to something being “an L8 problem” or “a level 9 issue”. It was a powerful reminder that technology work is about much, much more than technology.

Another great insight was how much they have chosen to use the simplest of standard internet protocols to join various layers and services within their network — even when those layers and services are organisation- and application-specific. This ties back to Mat’s point about an architect’s job being one of simplification.

Phil Wills on Rebuilding with DDD

Phil talked about the role domain driven design has played for us. He also pointed out various lessons that were learnt along the way, such as the importance of value objects, and the fact that “database tables are not the model — I don’t know how many times I can say this.”

But it was only a day or two after his presentation that I was struck by a remarkable thing: Phil referred so many times to specific members of the team, and talked about contributions they had made to the software design process, even though the people he was talking about were generally not software developers. He put their pictures up on the screen, he named them, he told stories about how they had shaped key ideas. This was an important reminder to me that being close to our users and stakeholders on a day-to-day basis is incredibly important to the health of our software.

Walking away

I didn’t get to see as much at QCon as I’d like to have done, although I dare say most people will say that, even if they went to as many sessions as was physically possible. But what I did see was fascinating and thought-provoking, even when it came from people I work with every day.

An ABC of R2: D is for domain driven design

…which Mat Wall and I have written about extensively before. However, for this piece let me say this…

When you have a huge number of people for whom you are building software (1500 staff, 20 million unique users, and an entire wired economy influencing which way you should go next) then simply following instructions is insufficient, because your users’ demands will change and evolve over time — even if not during the current project then certainly before Version Two. So to minimise the cost of those changes you need to understand the way your users are thinking.

That’s where domain driven design (DDD) comes from. It’s about taking the concepts in your users’ heads and embedding them straight into the software you’re writing. And then when those concepts evolve and change the cost of changing your software is directly proportional to the mental shift that your users are making. When your users say “I want to make a small change” then usually it’s a small cost; if they propose a big change then they should understand when you walk them through all the implications.

For the R2 project we used DDD from the start, and it was key to many of our successes: when we discovered new opportunities, which arose only from direct use of what we had implemented, DDD allowed us to realise them.

Take, for example, the idea of “tone” — the principle that we should be able to categorise content by its “voice” or “style” (well, its tone). Its tone might be obituary, blog post, match report, and so on. The vague notion of this had been around since the start of the project, but we hadn’t settled on many details, let alone how to implement them. But after a few releases, and when the software started getting real use, it became apparent that applying a tone should be very like applying a keyword. Suddenly things fell into place. The functional requirements were clear, as were the implementation details. Tone has become a very powerful feature (here are all our obituaries, all our blog posts, all our match reports,…) yet it was a relatively straightforward piece of work because it was, in the end, a relatively intuitive idea.
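To sketch that intuition in code (this is purely illustrative, not the real R2 domain model), applying a tone can reuse exactly the same machinery as applying a keyword:

```javascript
// Illustrative sketch only: names and structure are invented. The point
// is that once a tone behaves like a keyword, one tagging mechanism
// serves both.
function Article(headline) {
    this.headline = headline;
    this.tags = [];
}
Article.prototype.tagWith = function (tag) {
    this.tags.push(tag);
};

var article = new Article("City 2, United 1");
article.tagWith({ type: "keyword", name: "Football" });
article.tagWith({ type: "tone", name: "Match report" });

// "All our match reports" is then just a filter on tone tags.
function articlesWithTone(articles, toneName) {
    return articles.filter(function (a) {
        return a.tags.some(function (t) {
            return t.type === "tone" && t.name === toneName;
        });
    });
}
```

Because the user-facing concept maps onto an existing concept in the model, the implementation cost stays proportional to the size of the mental shift.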

Of course, domain driven design has a lot more to it than I’ve described here, and our use of it has been much deeper than feature implementation. If you’re lucky enough to be going to QCon San Francisco then today you can see our software architect Phil Wills presenting a lot more detail there.

Lightweight versus heavyweight: The cost is in the management

A recent conversation with a colleague got me thinking about so-called “lightweight” systems, and when they become more trouble than they’re worth. He was frustrated by some problems he was having; even more so, he explained, because he thought he was dealing with something that was “lightweight”. It’s a seductive word, and sometimes — as with other forms of seduction — when you get more involved than you should things can get a bit sticky.

This article is an attempt to explain what lightweight really means, both in terms of benefits and drawbacks. There are also a couple of comparative examples from my own experience.

Lightweight doesn’t mean simple

People often mistake “lightweight” to mean simple or quick. But this can’t be right, because everyone wants simple and quick, and if it really meant this no-one would use anything else. Every website would be rewritten with the lightweight Ruby on Rails and every application would be sitting on top of the lightweight SQLite database. Who wouldn’t? Who doesn’t want simple? Who doesn’t want quick?

Lightweight is often good, but it must have its tradeoffs, otherwise other technologies wouldn’t exist.

From the examples below I see lightweight as offering low cost in return for low demands, but high cost for high demands. Heavyweight is disproportionately high cost for low demands, but low cost for high demands.

Lightweight carries low inherent management costs. But some situations require a high degree of management control whether you like it or not. That means that if a lightweight system needs to scale up you have to wrest management from it and maintain it externally. If you can do that then the lightweight system continues to work, but if the lightweight system will not relinquish management control, or if you don’t have the discipline to keep the management going, then it won’t be effective in the long run. By contrast heavyweight systems impose management and structure of their own. This is good if you’re going to need it, as it takes the pressure of discipline off you, but it’s not effective if you didn’t need that management structure in the first place.

To illustrate this, here are a couple of lightweight/heavyweight comparison case studies…

Language example: TCL and Java

TCL is a lightweight language. You get to write Hello World in one line, it doesn’t force much structure on you, and it’s pretty relaxed about how it’s written.

TCL is so good, in fact, that it was the basis of the original Guardian Unlimited website. We built Ajax-style tools with it before Ajax was known as a concept, we generated our front page from it, and we used it to integrate with our ad server.

But as our site grew the language didn’t scale with it. Clever shortcuts implemented by earlier developers confused newer developers because they obscured the purpose of the code. The lack of an imposed structure meant every foray into older code involved learning its idiosyncrasies from scratch. Development slowed down as we worked around older code. And when we wanted to redesign the website we found that through years of lightweight flexibility we had allowed ourselves to be tied into knots: it would be more effective to start again than to work with what we had.

In fact, for the most part we’re now using Java…

In contrast to TCL, Java is pretty heavyweight. Not only does Hello World require three lines (excluding any lines with just braces), but its philosophy of structure and layering percolates through from the core language to most of its add-ons. For example, to parse an XML document you have to drill through two abstraction layers before you can find the parser.

One Java framework that maintains this ethos is Hibernate, used for database access. Its architecture is complicated, and as usual this is to offer flexibility without relinquishing manageability. Recently a forthcoming release of the Guardian Unlimited website was failing its pre-production performance tests. Our developers tracked down a major cause of the problem to an inefficient query within Hibernate. They extracted some of the query’s logic up into the application layer and simplified what remained, rebalancing the work between the application and the database. Problem solved, performance restored. What’s relevant to our story is that the developers did this entirely within the architecture of Hibernate, so they didn’t compromise the design of the application and therefore didn’t add complexity.

CMS example: WordPress and the GU CMS

Over on ZDNet Larry Dignan extols the virtues of WordPress and says, effectively, “What have big content management systems ever done for us?”

WordPress is the lightweight CMS I’ve chosen for this blog, and I’m very happy with it. It’s easy to install, requires almost zero maintenance, and lets me focus on the writing. And yet I’m a strong advocate of the home-grown CMS we have for our journalists and subs on the Guardian Unlimited site. Is lightweight not good enough for our journalists? What has a big CMS ever done for us?

Well, I just looked at a current article on “Ministers ordered to assess climate cost of all decisions”. It was created with our big CMS. What’s there that WordPress couldn’t deliver?

For a start it’s got a list of linked subjects down the side, which aren’t the same as WordPress’s tags because they’re tightly managed to ensure consistency and reliability. These subjects are also categorised, so Pollution and Climate Change are subjects under Environment, while Green politics is a subject under Politics. As I write this, I note also that the pages for Pollution and Climate Change are designed differently, with Climate Change being more pictorial and feature-led. Subject categorisations and subject-specific designs are beyond what WordPress’s tags do.
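The difference can be sketched in a few lines of purely illustrative code (the real system is far richer): tags are free-form strings, whereas subjects live in a managed, categorised structure:

```javascript
// Illustrative sketch: a small, curated subject hierarchy. The point is
// only that subjects are managed and categorised, where tags are
// free-form.
var sections = {
    "Environment": ["Pollution", "Climate Change"],
    "Politics": ["Green politics"]
};

// Look up the section a subject belongs to; unknown subjects are
// rejected rather than silently created, unlike a free-form tag.
function sectionOf(subject) {
    for (var section in sections) {
        if (sections[section].indexOf(subject) !== -1) {
            return section;
        }
    }
    return null;
}
```

That curation is what makes subject pages reliable enough to hang designs and navigation off.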

Okay, so apart from the linked subjects, the categorisations, and the subject-specific designs, what has a big CMS ever done for us? I suppose it’s worth mentioning the related advertising, which as I write includes a large ad for environmentally-friendly washing liquid. There are other contextual commercial elements, too, such as the sponsored features, links to green products and books, and offers to reduce energy bills and offset carbon emissions. And there are related articles and related galleries. And details of the article history, listing when and where it was first published, on what page and in what newspaper section.

Okay, so apart from the linked subjects, the categorisations, the subject-specific designs, the related advertising, the contextual sponsored features, the links to relevant products and books, the complementary offers, the related articles, the related galleries, and the article history, what has a big CMS ever done for us?

Well, I suppose it is serving over 17 million unique users a month…

I’ll stop now. The point is that a lightweight CMS such as WordPress could probably do any one of these things, with a bit of work. But it isn’t designed to do anywhere near all of them. And each time it’s changed to do one more of them it moves further from its core architecture and closer to a point of paralysis, where nothing functions well any more because no part of it is doing what it was designed to do. A bit like the TCL example.

Looking back

Reviewing these two examples, it’s clear that the lightweight systems became, or would become, very costly when they were pushed beyond their initial expectations. In both cases the corresponding heavyweight systems came with their own (heavy) management structure, but that management structure ensures lower running costs.

In the Hibernate example our software maintained its architecture after we’d made our performance change; anyone looking at this new code would be able to rely on previous knowledge to understand what was going on. By contrast, anyone coming fresh to a snippet of old TCL code would be starting from scratch, regardless of how much of the other TCL code they’d seen.

Similarly, the large-scale content management system at GU is internally consistent, despite its vast range of features and functionality. Once someone has learnt the principles (which, admittedly, are non-trivial) they can get to work on pretty much any part of it. Pushing WordPress to do that would have created a monster.

Lightweight systems take the management away from you. And that’s ideal, as long as you don’t need that manageability.


Anti-features

Sometimes you can trust too much. Wherever I’ve worked I’ve been involved in a few examples where we’ve listened to the customer, trusted them to know what they wanted, given it to them, and they’ve regretted it. We have delivered anti-features.

Most recently at Guardian Unlimited our (previous) homepage had clever layout rules whose logic sometimes overrode the content that editors entered. The result was that editors were often confused that the page didn’t render according to what they had put in the system. The clever layout rules had been devised by close collaboration with the original editors and graphic designers — the expert users. They reasoned that they didn’t want anyone to unbalance the layout by entering inappropriate combinations of text and images.

But these layout rules had been forgotten over time, hence the confusion years later. Consequently the tech team was often called up regarding a supposed bug (“I’ve put this image in but it doesn’t appear”), and effort was expended only to discover that it was in fact a feature — much to the caller’s amazement. Our micro-lesson there was to give the end users the freedom they naturally expected, including the freedom to decide for themselves what was a balanced layout. If what they produced was unbalanced then the designers would steer them back in the right direction — a much more human corrective.

Those clever layout rules were an anti-feature. They were additional functionality that actually made users’ lives worse. Eventually we removed them.

Anti-features happen in highly experienced mega-corporations, too. In November 2006 Joel O’Software started what became known as “the Windows shutdown crapfest”. He compared the three shutdown options of Apple’s OS X with the astonishing nine or more shutdown options of Microsoft’s Windows Vista. Not only is that confusing for the user, but it was also incredibly painful to develop — Moishe Lettvin, a former Microsoft engineer, tells the sorry story.

But at least the Vista shutdown anti-feature made the problem visible. In the layout logic example, and others I know of, there is silent intelligence at work that leads to confusion and frustration without giving the user any visibility at all of what’s going on.

Anti-features are time-consuming to build in, because any feature is time-consuming to build in. But anti-features also consume additional time in their removal.

One way to prevent anti-features is to help the end user determine the long-term consequences of what they want. Of course, you’d hope they’d think of that themselves, but you can’t avoid your responsibilities to the project as a whole if you see something that others have missed.

Another way is to adhere even more fervently to the Agile mantras of delivering early (before every last sub-feature is there), keeping it simple, and focusing ruthlessly on only the highest value work. This way we deliver first a front page without clever corrective layout logic, or a shutdown menu with only one or two options, and consider further enhancements later if we find we need them. Suggesting and doing this is easier said than done, of course, but if everyone trusts each other to listen honestly to what they have to say then it’s more likely the decisions made will be the best ones.

Meanwhile we can at least ask ourselves each time: Is this a feature or an anti-feature?

There’s nothing so permanent as temporary

An aphorism I heard recently has proved particularly memorable: “there’s nothing so permanent as temporary”. However, it wasn’t originally referring to software — it comes from a builder who is rebuilding the kitchen of some friends of mine. He’s from Azerbaijan, and my friends are fond of quoting him in full, to give the words maximum colour: “As we say in Azerbaijan, there’s nothing so permanent as temporary”.

It is, however, as relevant to software as it is to building (and probably to many other areas of life). The constant pressure to deliver means there is very rarely the opportunity to go back and improve a nasty historical workaround which is causing problems today. However, there are some strategies we might consider…

1. Automated testing

If the nasty programmatic stickytape is relatively small then a comprehensive suite of automated tests should enable you to make the change relatively safely and painlessly. Of course, you need to have built up that suite of tests in the first place. And the “relatively small” caveat is important, because only then can the software people fit the replacement activity into their daily work without disrupting any schedules. If there’s no external change then this is an example of refactoring.

If the nasty workaround isn’t so small, but really does need replacing, then a much more concerted effort is needed. A plan needs to be carefully devised which allows slow piecemeal replacement. The point of the plan should be to take baby steps towards the end goal; each step should also leave the system in a stable state, because you never know when you might have to delay implementing any subsequent step. However, if everyone in the team knows the plan then the end goal can be achieved with minimum disruption to the business’s schedules.

2. Act like a trusted professional

I find trust and transparency are an increasing part of the software projects I’m involved with. Part of this is that the technical people want to give their customers options, and are keen to explain the pros and cons, so that an informed decision can be made.

However, while this is usually excellent, there can be times when it is too much or undesirable. Stakeholders don’t necessarily want to be given options for every single thing, and sometimes it’s right for a technologist to make an executive decision without referring back — because they’re a professional, and because they are trusted (and employed) to be professional. In this regard it’s sometimes the responsibility of a technologist not even to entertain the possibility of a temporary workaround knowing it will become permanent.

The decisions I’ve been close to in this regard tend to be architecture or technology decisions. For example, in my current work we have, roughly speaking, legacy technologies and modern technologies. Sometimes a feature requirement would arise which was quicker to implement in the legacy technology — but of course we knew it would be more costly in the long run. If the timescales were not wildly divergent then I’d encourage my team to choose the modern technology and not to offer up a choice. These days I need to do that much less, because by building up that base of modern technology it’s now easier to expand and extend it for future features. By making hard decisions in our professional capacity earlier on we’ve made it easier to make the right decisions in the future.

3. Bite the bullet

A third way to remove the temporary workaround is to just be honest about the work involved and the cost to the business. Easier said than done, but an open and frank discussion followed by adjustment and agreement will ensure the issue is examined properly and supported more widely.

One example I remember is a particularly troublesome database table. Originally conceived as an optimisation with a few hundred rows, it grew over time to be a real albatross, slowing both development and production work, and running to over 19 million rows. However, we couldn’t replace it without a lot of effort, because its tentacles were everywhere; we needed to schedule real time for it, and that would mean less time for other, more visible, work.

But when we presented the plan, two things sugared the pill. First, we had long cited the table as the reason for previous work taking longer than anyone would like, so the cost of its presence was already felt tangibly. Second, we ensured our plan was broken down using the “baby steps” approach outlined above — it would take a long time, but it would take out no more than 10% of our resources at any moment, and we could always suspend work for a period if absolutely necessary. The plan won support, was executed, and after several months our DBA typed the SQL “drop” command and we popped a bottle of champagne.

Meanwhile, back in the kitchen…

Of course, all that is assuming the temporary workaround really does need to be replaced. In my friends’ kitchen they are actually quite happy with the drawer dividers they’ve requisitioned as a spice rack. Aside from anything else, it provokes conversation and allows them to talk about their Azerbaijani builder’s many philosophies. He seems to have so many pithy sayings — all of which begin “As we say in Azerbaijan…” — that they’re beginning to suspect he may be making them up as he goes along. Still, if he does have to make a hasty exit from the building trade there may be an opening for him in the software industry.

Series: One nice thing about three new sites

Today we’ve launched three new sites on Guardian Unlimited. Well, more correctly, we’ve redesigned and rebuilt our Science, Technology and Environment sites. Working for a media organisation, I can rely on people much more articulate than me to tell the stories better — see Alok Jha on Science, Alison Benjamin on Environment and (with more technical info than you can shake a memory stick at) Bobbie Johnson on Technology.

So I won’t repeat what they’ve said. Instead, I want to point to one tiny new feature which I’m unfeasibly excited about. It’s series. You know series. They’re those things which appear regularly in a paper or on a website.

In our redesigned sites we’ve introduced series as a specific and distinct concept. For example, Bad Science is a weekly series by Ben Goldacre on pseudo-science. Ask Jack is a series in which Jack Schofield helps people with their computer problems. In the old world these articles would be gathered into their own sections and left at that. But a series is a little bit more than this. For one thing when you land on an article in a series it’s nice to know that it is one of a family. We now flag that at the side of the piece. For another thing, you should be able to navigate to the next and previous articles in the series. Again, this series-specific navigation appears with each piece.
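The idea can be sketched in a few lines (illustrative names only, not our actual implementation): a series knows the order of its articles, so next/previous navigation falls out naturally:

```javascript
// Illustrative sketch: a series is an ordered family of articles.
function Series(name) {
    this.name = name;
    this.articles = [];
}
Series.prototype.add = function (article) {
    this.articles.push(article);
};
// Next and previous are defined by position within the series, with
// null at either end.
Series.prototype.next = function (article) {
    var i = this.articles.indexOf(article);
    return (i >= 0 && i < this.articles.length - 1) ? this.articles[i + 1] : null;
};
Series.prototype.previous = function (article) {
    var i = this.articles.indexOf(article);
    return (i > 0) ? this.articles[i - 1] : null;
};

var badScience = new Series("Bad Science");
badScience.add("Week 1");
badScience.add("Week 2");
badScience.add("Week 3");
```

Because the concept exists explicitly in the model, flagging membership and offering navigation are trivial to build on top of it.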

It’s a very small thing, but to me it shows something fundamental and important: the software we’re producing really does match the way our writers and editors think. It is, for those in the know, a result of domain driven design. For everyone else, it’s simply software that does what you think it should do.

The truth about quick and dirty

“Quick and dirty” is a weapon that’s often deployed by people for getting themselves out of a tight spot. But like any weapon, in the wrong hands it can backfire badly. Technical people use techie quick and dirty solutions in their work. Those people we like to work so closely with — our clients — use their own quick and dirty solutions in their work. But what happens when these people come together? What happens when it’s not the developers or sysadmins who suggest their own quick and dirty solution, but when it’s suggested by our customers on our behalf?

When it’s the client who suggests we deploy a quick and dirty solution, there are a few things that we need to consider…

1. What does “dirty” mean?

There’s no doubting what “quick” means. But what does “dirty” mean? It’s something that makes the technicians itchy, so it’s worth understanding why.

A dirty solution might suggest one that’s inelegant, one that doesn’t conform to a techie’s idea of a good design. That might be true, but it’s probably not the core of the matter. Any decent techie who works in the real world doesn’t advocate elegant design for its own sake. They know that most of their designs are far from perfect, and in fact they probably pride themselves on maintaining a pragmatic balance between the ideal and the expedient — this, then, won’t necessarily make a techie uncomfortable.

So “dirty” doesn’t mean “inelegant”. I think in this context it means something which incurs technical debt. That is, it’s a technical shortcut for which we are going to pay a rising cost as time goes on. That’s what causes the discomfort — the knowledge that implementing this will make the techie’s life, and the life of their client, difficult in ways that cannot really be understood until it’s too late. It might also happen to be an inelegant solution, but that’s by the by. The important thing is that “dirty” implies something which carries a cost.

Of course, the cost of dirty has the benefit of quick. And as we go on, seeing this in cost/benefit terms is actually very informative…

2. Who gets what?

If you’re my customer and I’m your techie, then there’s a significant and unfortunate thing about quick and dirty that I need to tell you: You get the quick, but I get stuck with the dirty. Or to put it another way: the cost and the benefit are not felt by the same person.

There are times and circumstances when this is not the case. If an individual sysadmin is making a judgement call about their own work, say, then they may choose a quick and dirty solution. And if we are considering an in-house software team with in-house customers, then the company as a whole will both benefit from the quick and pay the cost of the dirty.

But these examples don’t tell the whole tale. If our individual sysadmin is actually one member of a systems team, and has responsibilities to the team, then it’s their colleagues who are going to be sharing the cost — and in fact it’s not entirely correct to have called them an “individual sysadmin” in the first place. And while it’s true to say our example company reaps the benefit and pays the cost, it’s probably not the case that “the company” has made an informed and conscious decision about the cost and benefit. That’s because it’s very unlikely that such a decision will have been escalated to the single body responsible for both the cost and the benefit — the board, or the owner, or the managing director.

In the end, we must recognise that the one who reaps the benefit is probably not the one who’ll be paying the cost.

3. Who decides what?

The “who gets what?” question leads in to a more general point: experts must be trusted to make judgements in their own areas.

So just as a techie should not tell a customer how to do their job, so the customer should not tell the techie how to do theirs. And in particular, it’s not up to the customer to decide whether the technical solution should be a dirty one.

Of course, we should always feel free to make suggestions; no-one is so big that they shouldn’t be open to ideas. It’s fair and reasonable for the customer to ask the techie if they can do it quicker… and for that the techie would need to demonstrate some creative thinking. The customer may ask if there are better solutions… and the techie would need to do some research. And the customer might even suggest a specific technical solution they’d heard of… and the techie would have a responsibility to give an honest and informed opinion. But if someone is supposed to be the expert on technical issues then they do need to be free to make judgement calls on technical issues, including which solutions are and are not appropriate. That shouldn’t be a worry to the customer; as mentioned above, good technical people tend to take pride in their pragmatism.

And in fact, this should be good for everyone. By trusting everyone to use their expertise we should get the best solutions.

4. How long does it last?

One final point on the cost/benefit view of quick and dirty: Quick lasts only a short time, but dirty lasts forever.

More precisely, the benefit of quick lasts from the time the quick thing is released to the time the “proper” thing would have been released; and the cost of dirty also lasts from the time the quick thing is released, but it goes on until the time the software is rewritten or deleted.
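To put toy numbers on that (every figure here is invented for illustration), the asymmetry of the two windows is easy to see:

```javascript
// All numbers invented for illustration. The benefit of quick accrues
// only until the "proper" version would have shipped; the cost of dirty
// accrues until the code is rewritten or deleted.
function netValue(benefitPerMonth, monthsEarly, costPerMonth, monthsUntilRewrite) {
    return (benefitPerMonth * monthsEarly) - (costPerMonth * monthsUntilRewrite);
}

var shortLived = netValue(10, 3, 2, 3);  // 30 - 6  = 24: quick pays off
var longLived = netValue(10, 3, 2, 36);  // 30 - 72 = -42: dirty wins out
```

The same shortcut is a net gain if the code is replaced promptly and a net loss if, as usually happens, it lives on for years.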

I’d say that unless software is a total failure it tends to last a very long time, and usually much longer than anyone ever would have wished. This is why the term “legacy software” is heard so often — if software was generally replaced or removed in good time then we’d never have needed to invent that phrase. And the fact that the dirty bits outstay their welcome means that the technical debt they incur is paid, and paid, and paid again… certainly long after the shine of being early to market has worn off.

But it’s not all bad

I don’t want to give the impression that quick and dirty is inherently bad. It’s not. Nor is everything quick necessarily dirty. Far from it; some things are just plain quick.

But if we’re talking about quick and dirty then it helps to think in cost/benefit terms. We need to be very clear about who’s paying the cost and who’s reaping the benefit, and exactly what that cost and benefit are. If we can understand that, and if we can respect each other’s expertise in making these decisions, then we can be more certain that we’re making the best decisions more of the time, for the long-term benefit of everyone.

Javascript puzzler: throw null

I know this isn’t really the place to post code, and I’m sorry. But this is why I don’t like loosely typed languages. I was caught by a Javascript nasty earlier this week (no, I don’t code professionally, this was for fun) which boils down to the code below. What do you think could happen here…?

var problem_found = null;
var listener = function(event){ /* Do something with event */ };

try {
    document.firstChild.addEventListener("click", listener, false);
}
catch (ex) {
    alert("Problem: " + ex);
    problem_found = ex;
}

if (problem_found) {
    alert("There was a problem: " + problem_found);
}

You might have thought that either everything passes within the try block smoothly, in which case no alerts appear, or that there’s an exception in the try block and you get two alerts — one from the catch block and one from the if block.

And you’d be wrong.

Javascript actually allows you to throw anything. You should really throw an Error. But you could throw a String, or even a null if you liked. And if your code could throw a null then testing for null does not tell you if you had a problem. In the above example you’d get only one alert, from the catch block. If your problem-resolution code is in the if block then you’ll miss it.

Advocates of loosely typed languages will say that type problems can be avoided if you write good unit tests. But the problem occurred for me because this kind of mistake was in the JsUnit test framework, and it incorrectly showed a test passing. It took me two days to find the source of the problem. It turns out that the addEventListener() method above in Firefox 1.5 does indeed manage to throw a null if the listener parameter is not a function. Try changing it to a string and you’ll see this. My mistake was to pass in a non-function. Firefox’s mistake was to throw a null. JsUnit’s mistake was to assume a null exception means no exception.

Strict typing helps avoid problems that come about because we’re human and make mistakes. A wise man in ancient China once said “the compiler is your friend”. Javascript doesn’t have a compiler, and I can do with all the friends I can get.

By the way, you may be interested to know what the statically-typed Java language does. It doesn’t allow you to throw any object, but it will allow you to throw null. And if you do that it automatically converts it to NullPointerException. An actual, proper, concrete object. I know who my friends are.
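To see that in action, here’s a minimal sketch (the class and method names are invented for illustration). Java happily compiles `throw null;`, but at runtime the JVM substitutes a genuine NullPointerException:

```java
public class ThrowNullDemo {
    // Throwing null is legal Java at compile time; the JVM converts it
    // into a real NullPointerException when the statement executes.
    static String classify() {
        try {
            throw null;
        } catch (NullPointerException ex) {
            // We catch an actual object, never a bare null.
            return ex.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println("Caught: " + classify());
    }
}
```

So any `catch` block or null test downstream is dealing with a proper, concrete object.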

The benefits of OO aren’t obvious: Part V of V, evolving the product

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding, separation of concerns, better testability and refactoring. Now read on for the final part…

Evolving the product

The final advantage to OO: it’s easier to evolve — hell, just to change — what you’re producing. Now ideally this shouldn’t matter. Ideally you’d get your requirements, your client would say “I want this, this and this”, off you’d go, and you’d produce something the client is delighted with first time. That works in a waterfall environment, where things are predictable and don’t change, but not in most cases. In most cases the client changes their mind, or adapts their ideas, and that’s just reality.

Note that I’m not talking about the so-called advantage of OO that I’ve often heard cited, and never really believed: that it’s easier to capture requirements with OO because it better models the world. To me the advantage is in tracking changing requirements.

I think OO deals with this better because people’s thinking evolves around things more than processes, so the amount you have to change your code is roughly proportional to the amount of rethinking your client has done, and their idea of a “small change” will at least be closer to your idea of a small change.

Again, this doesn’t come for free. You need to keep the structure of your application running in parallel with the structure of the client’s thinking. This takes experience and some work. But at least you’ve got more of a fighting chance of success than with a procedural language that’s already one step removed from people’s thinking. In particular domain driven design is all about ensuring the structure of software keeps up with how clients understand their domain, with the express aim of making change much easier. And of course being able to refactor with confidence is an important part of that.

Why wasn’t all that so obvious?

If you’d asked me five or ten years ago about the benefits of object oriented programming I could have cited the theory but wouldn’t have really been able to justify it. Data hiding is helpful, but probably doesn’t warrant changing your language — and your thought process — alone. But ideas such as dependency injection, refactoring and domain driven design are things that have come to prominence only relatively recently. They’re where I see the real benefits, and none are obvious from the atomic concepts of OO.

None of this means OO is always the right answer. If your requirements are solid and your domain is well understood then lots of the above advantages are irrelevant. But most of the time that isn’t the case, and it’s then that object orientation can show its value.

It’s odd to me that OO’s benefits are really only tangible years after it’s come to prominence. Perhaps it takes years for us as an industry to really understand our tools. Perhaps it’s only after the masses have taken up a technology — and produced an awful lot of badly written software — that the experts will come forward and really explain how things should be done. Or perhaps I’m just a very slow learner.

Whatever the reason, I can at last explain why I think OO is a step forward from procedural languages. Even though it’s not obvious.

The benefits of OO aren’t obvious: Part IV of V, refactoring

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding, separation of concerns and better testability. Now read on…

Refactoring
This is something that builds on all the items above. If you’ve worked on any piece of code for a length of time you’ll always have come to the point where you think “Damn, I really should have done that bit differently.” Refactoring is what you want to do here: “restructuring an existing body of code, altering its internal structure without changing its external behavior”.

Refactoring is helped considerably by OO. “I already refactor,” says my non-OO friend, “I always make sure common code is put into functions.” Which is good, but there’s a whole lot more to refactoring than that. Martin Fowler lists a vast number of options. But before I list my favourite refactorings (how sad is that? I have favourite refactorings) let me explain why this is so important, why it probably isn’t all entirely obvious, and how it relates to so much else here…

First, be aware that two people could both write outwardly identical applications but with their source totally different internally. At its most extreme, refactoring would enable you to evolve from one codebase to the other in lots of very small steps, at each step leaving you with the same working, stable, application. That’s much more than just putting common code into functions. If you think you already have the skill and technology to entirely change your codebase without changing anything outwardly then you’re a refactoring expert. If not, then you’ve got a lot to learn (and a lot of fun yet to have).

Second, refactoring is so important because it gives you a chance to improve your code. Unit testing (and indeed all testing) makes this possible. Test driven development isn’t just about test-first development, it’s also about refactoring. The last step of a TDD sequence is to refactor. You ensure your tests pass, you refactor, and you run your tests again, to make sure they still pass — to ensure you’ve not broken anything while refactoring. A suite of unit tests acts as scaffolding around your code, allowing you to rebuild parts of it without fear of the edifice collapsing. If you’ve made a mistake in your refactoring then the unit tests should show exactly whereabouts the problem is.
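The scaffolding idea can be shown in miniature. In this sketch (all names invented for illustration) a method’s internals are restructured while a check pins its external behaviour; if the refactoring slipped, the assertion would fail:

```java
import java.util.stream.IntStream;

public class RefactorSafetyDemo {
    // Before the refactoring: an explicit loop.
    static int sumBefore(int[] xs) {
        int total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    // After the refactoring: different internals, same external behaviour.
    static int sumAfter(int[] xs) {
        return IntStream.of(xs).sum();
    }

    public static void main(String[] args) {
        int[] sample = {1, 2, 3, 4};
        // The "unit test": old and new versions must agree before we
        // trust the restructured code.
        if (sumBefore(sample) != sumAfter(sample)) {
            throw new AssertionError("Refactoring changed behaviour!");
        }
        System.out.println("Refactoring preserved behaviour: sum = " + sumAfter(sample));
    }
}
```

A real project would use a unit test framework rather than a hand-rolled check, but the principle is identical: the test stays fixed while the code underneath it moves.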

Third, taking the previous two together, you can make vast, sweeping changes to your codebase in tiny, safe steps. Simply break down the process into lots of small refactorings, and test after each one. This is how the apparently-impossible task of changing your entire codebase reliably is possible. In reality you’ll probably never want to change your entire codebase but you will want to change large chunks of it, and this is how you’d do it.

Fourth, you often find yourself needing to add a feature which seems reasonable to a client, but difficult to someone who understands the structure of the application. An excellent way to add the feature is to utilise refactoring as follows: refactor the software so that the addition of the feature does become a natural extension, and then add it. Martin Fowler gives an excellent worked example of this in the first couple of chapters of his book, which is (almost) worth the price of admission alone.

Now you can browse the refactoring catalogue linked to above (and here again), but here are some of my favourite refactorings…

  1. Rename variable. This is very trivial, and very easy, but all the better for it. Forgotten what a variable does? Give it a better name. It’s so easy to do, but so useful.
  2. Extract method. Never mind extracting common code into functions, I often find myself extracting one-off code into a single function (or “method”, as it’s called in OO) with a long and useful name. This is to make the original method much shorter, so I can see it all on one screen, and also so that it reads better. I no longer need to look at ten lines and translate them in my head into “…and then sort all the even numbered vertices into alphabetical order”. Instead I have a single method called sortEvenNumberedVerticesAlphabetically(). You think that’s silly? Maybe Java or VB.NET is your first language. Personally, English is my first language.
  3. Introduce null object [1, 2]. I don’t think I’ve ever used this, but I like it a lot because it’s just very interesting. The problem is that you’ve got special logic whenever a key object is null. For example, you need a basic billing plan by default, if the customer object isn’t set up and doesn’t have its own billing plan:

    if (customer == null)
        billing_plan = Plans.basic();
    else
        billing_plan = customer.getPlan();

    This is a small example, but it’s important to realise why this is undesirable: partly because each branch point is another potential source of error, and partly because your “null customer” logic isn’t just here, but is probably scattered all over your code. It would be better to have all the logic in one place, where it can be seen and managed and tested more easily. The solution is to introduce a null object — in this case a NullCustomer object such as this:

    public class NullCustomer {
        // ...probably lots missed out here.

        // Meanwhile the getPlan() method of NullCustomer
        // just returns the default basic plan...
        public Plan getPlan() {
            return Plans.basic();
        }
    }

    Every time we set up a customer we should now use this NullCustomer as a default rather than the non-specific null object. And our original if/then/else code can be reduced to this:

    billing_plan = customer.getPlan();

    That’s simpler and clearer, it doesn’t have any branches, and the logic for a null customer lives in one place.
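The extract method refactoring from item 2 above can be sketched as follows. Vertex and the method name are invented purely for illustration, echoing the example in the text:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ExtractMethodDemo {
    // A minimal vertex type, invented for this sketch.
    static final class Vertex {
        final int number;
        final String label;
        Vertex(int number, String label) {
            this.number = number;
            this.label = label;
        }
    }

    // The extracted method: several lines of filtering and sorting become
    // one call site that reads like English.
    static List<Vertex> sortEvenNumberedVerticesAlphabetically(List<Vertex> vertices) {
        List<Vertex> even = new ArrayList<>();
        for (Vertex v : vertices) {
            if (v.number % 2 == 0) {
                even.add(v);
            }
        }
        even.sort(Comparator.comparing(v -> v.label));
        return even;
    }
}
```

The caller no longer needs to decode the loop; the method name carries the meaning.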

Refactoring is a wonderful thing. And it’s even more wonderful if your IDE helps you. For .NET languages you can get the Refactor! plugin, and MSDN has a video demo, too. All that code manipulation at the touch of a button!
