Category: Software design

A software proposal to improve estimation

Previously I’ve talked about estimation, using an example of a project that we were very confident would make a certain amount of revenue. The slight flaw in this approach is that “very confident” varies depending on who’s making the estimate. So here is a proposal for an application to train the user in consistent estimation. I’d love to see this running somewhere and looking pretty. “Improve your estimation” would be a fun game to play (if you’re that way inclined). So let me set out…

  1. The point of it
  2. The no-tech solution (and credit)
  3. The software proposal

The point of it

[Image: Sepia Eiffel Tower, by Joanna Kiyoné]

Here are example situations we want to improve in project proposals:

  • “I’m very confident this will bring in $0.5m to $2.5m of new business”
  • “I’m very confident this change will see no less than a 2% drop-off rate in conversions”
  • “I’m very confident the addressable market is between 5m and 9m users globally”

We want to turn those into:

  • “I’m 90% certain this will bring in $0.5m to $2.5m of new business — and you can be sure that when I say 90%, I’m right 90% of the time”
  • “I’m 95% confident this change will see no less than a 2% drop-off rate in conversions — and you can be sure that when I say I’m 95% confident of something I really am right 95% of the time”
  • “I’m 90% sure the addressable market is between 5m and 9m users globally — and when I say I’m 90% sure of something I really am right 90% of the time”

The purpose of the application is to help people really be right 90% of the time when they say they’re 90% certain — or any percentage they want.

Estimating well is important for accuracy. This is particularly true when we consider that targets like revenue and reach are really built from many other measures (market size, consumer price sensitivity, economic conditions) which are themselves estimated. With estimation piled on estimation it becomes even more important to move from a nebulous “very confident” to a concrete “90% confident” and so be sure of our bounds of certainty.
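
To make that concrete, here is a quick back-of-the-envelope sketch of my own. The market-size range is the one from the bullet list above; the conversion and revenue-per-user figures are entirely invented. A crude Monte Carlo run shows how the component ranges combine into a much less certain revenue figure.

// Illustration only: all the specific numbers below are invented, except the
// 5m-9m market-size range quoted earlier in the post.
function sampleBetween(low, high) {
  // Crude sampling: assumes every value in the range is equally likely
  return low + Math.random() * (high - low);
}

function simulateRevenue(runs) {
  var results = [];
  for (var i = 0; i < runs; i++) {
    var marketSize = sampleBetween(5000000, 9000000); // users (range from the post)
    var conversionRate = sampleBetween(0.001, 0.004); // invented assumption
    var revenuePerUser = sampleBetween(50, 150);      // invented assumption
    results.push(marketSize * conversionRate * revenuePerUser);
  }
  results.sort(function (a, b) { return a - b; });
  // Read a central 90% range off the sorted samples
  return {
    low: results[Math.floor(runs * 0.05)],
    high: results[Math.floor(runs * 0.95)]
  };
}

// e.g. simulateRevenue(10000) returns the 5th and 95th percentiles of the
// simulated revenue: a range built entirely from other estimates.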

The no-tech solution (and credit)

Credit for switching me on to the problem, and for the no-tech solution that follows, goes to Doug Hubbard; it forms part of the narrative of his book How to Measure Anything. What is his approach to getting good at estimating percentage certainty? Have a look at this question:

What is the height of the Eiffel Tower in feet?

I don’t suppose you know… exactly. I certainly don’t. But we can have a good go at estimating a range with 90% certainty. I am absolutely certain that it’s taller than me, so it’s definitely over 6 ft. And I’m virtually certain it’s nowhere near a mile high (a mile being five-thousand-and-something feet), so let’s say it’s less than 2,000 ft to be safe.

So I’m 99.9% certain it’s between 6 ft and 2,000 ft tall. But that’s too conservative, because we’re aiming for 90% certainty, not 99.9% certainty. So I’ve got to start using some other tricks. I’m 6ft 3in, and a house is about six of me tall… call it 40ft. How many houses tall is the Eiffel Tower? Between six and twenty, I reckon. So between 240 ft and 800 ft. That seems better — I can sign up to that range with 90% confidence.

[Image: Spinner, by Darrell Hamilton]

Now let’s keep going. Answer the following questions, each with a range you’re 90% confident of:

  1. What is the height of the Eiffel Tower in feet?
  2. In what year was Anne Boleyn born?
  3. What is the population of Tokyo?
  4. How many square miles is Peru?
  5. How many saints did John Paul II canonise?
  6. How many books are there in the Mister Men series?
  7. What is the weight of the Liberty Bell, in kilograms?
  8. What is the circumference of the moon, in kilometres?
  9. How many square centimetres is the Mona Lisa?
  10. How many patents did IBM lodge in 1999?

The aim, of course, is to have 90% of your ranges right. This is what Doug Hubbard calls being “calibrated” — i.e. when we say we’re x% sure of something we are indeed correct x% of the time.

Unfortunately it’s not quite as easy as saying we must score 9/10 in the above questions — we’re expected to get 9 out of 10 on average, not necessarily 9 of these particular 10 right. So we refer to statistics, and the binomial distribution says if we want to be 95% sure that we are calibrated then we can expect to get 8, 9 or 10 out of 10. Similarly…

  • 20 questions implies 17-19 should be right at the 90% level.
  • 25 questions implies 21-24 should be right at the 90% level.
  • 30 questions implies 25-29 should be right at the 90% level.
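
For anyone who wants to check figures like these, here is a minimal JavaScript sketch of my own (not Hubbard’s method). It treats a calibrated estimator’s score as a binomial variable with p = 0.9 and looks for the narrowest band of scores covering a chosen amount of the probability; exactly where you draw the cut-offs is a judgement call, so the bands it reports may differ slightly from the figures above.

// Probability of getting exactly k questions right out of n, when each
// answer is right with probability p (the binomial distribution).
function binomialPmf(n, k, p) {
  var coeff = 1;
  for (var i = 0; i < k; i++) {
    coeff *= (n - i) / (i + 1); // build the binomial coefficient incrementally
  }
  return coeff * Math.pow(p, k) * Math.pow(1 - p, n - k);
}

// Narrowest contiguous band of scores [lo, hi] that a calibrated estimator
// lands in with probability of at least `cover` (e.g. 0.95).
function calibrationBand(n, p, cover) {
  var best = null;
  for (var lo = 0; lo <= n; lo++) {
    var mass = 0;
    for (var hi = lo; hi <= n; hi++) {
      mass += binomialPmf(n, hi, p);
      if (mass >= cover) {
        if (best === null || hi - lo < best.hi - best.lo) {
          best = { lo: lo, hi: hi };
        }
        break;
      }
    }
  }
  return best;
}

// Usage: calibrationBand(20, 0.9, 0.95), calibrationBand(30, 0.9, 0.95), etc.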

We can calibrate ourselves with true/false questions, too. This time we have to add how certain we are of our choice:

The first Cannes Film Festival was held in 1961. True or false? And how sure are you of your answer: 50%? 60%? 70%? 80%? 90%? Or 100%?

Saying we are 50% sure means we have no idea — equivalent to the flip of a coin. 100% means we’re absolutely certain. If we answer five questions with certainties of 60%, 60%, 50%, 90% and 80% then we expect on average we’ll get 0.6 + 0.6 + 0.5 + 0.9 + 0.8 = 3.4 out of 5. If we want to be 95% sure that we are indeed calibrated then (if my maths is right) we should expect to get 2, 3, or 4 out of 5. This is clearly too wide a margin to be informative so in reality we should again be answering 20 or 30 questions at a time.
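
The expected score is just the sum of the stated confidences, and the full distribution of scores can be built up one answer at a time. Here is a small sketch of that calculation (mine, not from the book); with the confidences in the example above it confirms the expected score of 3.4, and a sensible range of scores can be read straight off the resulting distribution.

// Distribution of the number of correct answers when answer i is right
// with its own probability ps[i], and the answers are independent.
function scoreDistribution(ps) {
  var dist = [1]; // before any answers: certainly 0 correct
  for (var i = 0; i < ps.length; i++) {
    var p = ps[i];
    var next = [];
    for (var k = 0; k <= dist.length; k++) {
      next[k] = 0;
    }
    for (var k = 0; k < dist.length; k++) {
      next[k] += dist[k] * (1 - p); // this answer turns out wrong
      next[k + 1] += dist[k] * p;   // this answer turns out right
    }
    dist = next;
  }
  return dist; // dist[k] is the probability of exactly k correct answers
}

function expectedScore(ps) {
  var sum = 0;
  for (var i = 0; i < ps.length; i++) {
    sum += ps[i];
  }
  return sum;
}

var confidences = [0.6, 0.6, 0.5, 0.9, 0.8];
// expectedScore(confidences) -> 3.4, as in the example above
// scoreDistribution(confidences) -> probabilities of scoring 0, 1, ... 5 correct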

By putting in a feedback loop we can improve. We answer 30 questions, find out if we’re over-cautious or under-cautious in our estimating, then consciously compensate while we try answering another 30 questions. And so on until we start hitting our targets.

[Image: Chippewa County info box]

There are also tricks to help us judge our understanding of 90% confidence. Again, I’m taking my cue from Doug Hubbard. Take a look at a wheel with a one-tenth slice marked out and a spinnable needle in the centre. If we are 90% confident of something then we should be equally confident that we can risk £5 that a spin of the needle will end up in the larger zone (if it ends up in the smaller slice we lose our money). If we would rather spin the needle then we’re less than 90% confident of our assertion and should widen our range; if we’d prefer to stick with our assertion then we’re more than 90% confident and should narrow our range; it’s only when we think we’d be equally happy with one or the other that we’ve got to 90% confidence.

This seems to be worth trying with each question whenever in doubt.

I think being good at estimating like this would be a cool party trick. Especially if we go to the parties held by our local maths department. Now the only question is: where do we find an endless source of semi-obscure questions? This brings us to…

The software proposal

The proposal, then, is for an application which automates this estimation training:

  1. How many questions would you like to answer? 10, 20, 30…? And would you like questions about ranges or true/false questions?
  2. Here are the questions, enter your answers. (If you want help with an answer you can pop up a wheel with a suitably-proportioned slice marked out — do you prefer your answer or prefer to spin the wheel? Now you can rethink your answer in the light of that.)
  3. Feedback: You got this many right, which was/wasn’t what we expected for someone who is a good estimator. You’re over/under/correctly confident so you need to adjust accordingly/congratulate yourself. Now let’s go again…

I imagine this would look nice with now-de-rigueur CSS3, pastel colours and big shiny buttons with rounded corners. And a good source of questions is Wikipedia.

In particular the info box on the right-hand side of many articles is a good source of stark facts — for example, the one for Chippewa County, Minnesota. There are some good range-type questions to be extracted from this: total area in square kilometres, population as of 2000, year of founding. There are also some good true/false questions (largest city, date founded) but only if we can find a reasonable-sounding false answer.

That last problem can be solved like this: First, at the bottom of the page find a list in which this subject appears. In this case Chippewa appears in a list of counties. Then find another item in the same list — say, Murray County — and go to that page and find its info box. Now we can compare the same aspects of Chippewa and Murray (largest city, date founded, timezone) with different values (largest city and date founded are different; but avoid timezone as it’s the same for both) and swap them randomly.
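
As a rough sketch, the swap trick might look something like the JavaScript below. The functions fetchInfoBox and fetchListSiblings are hypothetical placeholders (in reality they would have to scrape or query Wikipedia for an article’s info box and for the lists it appears in), but the aspect-matching and swapping logic is the part described above.

// Hypothetical sketch of the swap trick. fetchInfoBox and fetchListSiblings
// are placeholders for real Wikipedia scraping/querying code; fetchInfoBox is
// assumed to return a plain object mapping aspect names to values.
function makeTrueFalseQuestion(subject) {
  var infoBox = fetchInfoBox(subject);                  // e.g. Chippewa County
  var sibling = pickRandom(fetchListSiblings(subject)); // e.g. Murray County
  var siblingBox = fetchInfoBox(sibling);

  // Keep only aspects both articles share but where the values differ,
  // so "Largest city" survives and "Timezone" is rejected.
  var aspects = [];
  for (var aspect in infoBox) {
    if (siblingBox[aspect] !== undefined && siblingBox[aspect] !== infoBox[aspect]) {
      aspects.push(aspect);
    }
  }
  if (aspects.length === 0) {
    return null; // no usable aspect: the caller should try another sibling
  }

  var chosen = pickRandom(aspects);
  var swapIn = Math.random() < 0.5; // use the sibling's (false) value half the time

  return {
    question: subject + " > " + chosen + " > " +
              (swapIn ? siblingBox[chosen] : infoBox[chosen]) + ". True or false?",
    answer: !swapIn
  };
}

function pickRandom(items) {
  return items[Math.floor(Math.random() * items.length)];
}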

[Image: Murray County info box]

The list these two subjects were drawn from (Counties), the heading of that list (State of Minnesota) and the aspect selected from the info box (largest city, etc.) provide a reasonable way to present the question, since English-language question generation looks a bit tricky. We end up with something like:

  • Answer these questions with 90% confidence:
    • State of Minnesota > Counties > Chippewa County, Minnesota > Area – Total > ……. to …… km2
  • Answer these questions as true or false, and state your confidence:
    • State of Minnesota > Counties > Murray County, Minnesota > Founded > February 20, 1862. True or false? 50%? 60%? 70%? 80%? 90%? 100%?

There are still some flaws with this. You have to go through quite a lot of random Wikipedia pages before you find one with a decent info box. And sometimes it’s too obvious whether the info in the info box is true or false:

  • State of Minnesota > Counties > Chippewa County, Minnesota > Named for > Chippewa Indians, Chippewa River. True or false? 50%? 60%? 70%? 80%? 90%? 100%?
  • State of Minnesota > Counties > Murray County, Minnesota > Website > http://www.co.chippewa.mn.us. True or false? 50%? 60%? 70%? 80%? 90%? 100%?

Maybe we should restrict the subjects to some known areas (books, US states, composers,…) and not look in the entire pool of Wikipedia articles. And maybe some kind of text matching might help avoid the embarrassingly obvious true/false questions.

Or maybe you can think of a better source of questions.

If so, there’s a very useful service just waiting to be created…

Footnote

Where to find answers to the above questions…

  1. http://www.discoverfrance.net/France/Paris/Monuments-Paris/Eiffel.shtml
  2. http://www.bbc.co.uk/history/historic_figures/boleyn_anne.shtml
  3. http://www.metro.tokyo.jp/ENGLISH/PROFILE/overview03.htm
  4. http://www.wolframalpha.com/input/?i=how+many+square+miles+is+peru%3F
  5. http://en.wikipedia.org/wiki/Pope_John_Paul_II
  6. http://www.themistermen.co.uk/mrmen.html
  7. http://history.howstuffworks.com/revolutionary-war/liberty-bell.htm
  8. http://solarsystem.nasa.gov/planets/profile.cfm?Display=Facts&Object=Moon
  9. http://en.wikipedia.org/wiki/Mona_Lisa
  10. http://www-03.ibm.com/press/us/en/pressrelease/1920.wss

QCon London 2009: A few lessons learned

Last week I attended QCon London 2009, which was characteristically excellent — and I know my colleagues who attended thought the same. Most excitingly this year two of the Guardian development team were invited to give presentations on our most recent work building a large content management system for a very large website. I don’t mind admitting that I learnt one or two things from attending those talks; I also found the presentation on the BBC’s architecture fascinating. Here are some of the things I took away…

Mat Wall on The evolving Guardian.co.uk architecture

Mat’s presentation was structured around rebuilding guardian.co.uk over many months and dealing with the scalability issues that were presented at various stages. Some of the many insights were presented in a deliberately provocative way, such as this one:

Developers try to complicate things; architects simplify them.

I’m not entirely sure about the first part, but it’s certainly true that over-engineering is a much more common trait than under-engineering. The second part, though, is very important. It’s something I’ve always known, but never quite in that pithy way. It’s a useful phrase to tuck away for future reference.

And if you think Mat wasn’t feeling too good towards developers, don’t worry, because he also put up a slide saying

Developers are better than architects.

The message behind this is that it’s the developers who are working on the code each day — they know what’s possible and what’s not, and what’s reasonable and what’s not. So it’s important to trust them when they say they can or can’t do something.

Dirk-Willem van Gulik on Forging ahead – Scaling the BBC into Web/2.0

Dirk-Willem talked about the programme to re-engineer the BBC websites to enable greater scalability and dynamic content. In an organisation the size and complexity of the BBC you need to spend more time thinking about people than about the technology. One of his first slides underlined this by presenting the seven layers of the OSI model (physical up to application), and then he said

But most people forget there are two layers on top of that: level 8, the organisation, and level 9, its goals and objectives.

From that point on he kept referring to something being “an L8 problem” or “a level 9 issue”. It was a powerful reminder that technology work is about much, much more than technology.

Another great insight was how much they have chosen to use the simplest of standard internet protocols to join various layers and services within their network — even when those layers and services are organisation- and application-specific. This ties back to Mat’s point about an architect’s job being one of simplification.

Phil Wills on Rebuilding guardian.co.uk with DDD

Phil talked about the role domain driven design has played for us. He also pointed out various lessons that were learnt along the way, such as the importance of value objects, and the fact that “database tables are not the model — I don’t know how many times I can say this.”

But it was only a day or two after his presentation that I was struck by a remarkable thing: Phil referred so many times to specific members of the team, and talked about contributions they had made to the software design process, even though the people he was talking about were generally not software developers. He put their pictures up on the screen, he named them, he told stories about how they had shaped key ideas. This was an important reminder to me that being close to our users and stakeholders on a day-to-day basis is incredibly important to the health of our software.

Walking away

I didn’t get to see as much at QCon as I’d like to have done, although I dare say most people will say that, even if they went to as many sessions as was physically possible. But what I did see was fascinating and thought-provoking, even when it came from people I work with every day.

An ABC of R2: D is for domain driven design

…which Mat Wall and I have written about extensively before. However, for this piece let me say this…

When you have a huge number of people for whom you are building software (1500 staff, 20 million unique users, and an entire wired economy influencing which way you should go next) then simply following instructions is insufficient, because your users’ demands will change and evolve over time — even if not during the current project then certainly before Version Two. So to minimise the cost of those changes you need to understand the way your users are thinking.

That’s where domain driven design (DDD) comes from. It’s about taking the concepts in your users’ heads and embedding them straight into the software you’re writing. And then when those concepts evolve and change the cost of changing your software is directly proportional to the mental shift that your users are making. When your users say “I want to make a small change” then usually it’s a small cost; if they propose a big change then they should understand when you walk them through all the implications.

For the R2 project we used DDD from the start, and it was key to many of our successes: when we discovered new opportunities which arose only from direct use of what we had implemented, then DDD allowed us to realise them.

Take, for example, the idea of “tone” — the principle that we should be able to categorise content by its “voice” or “style” (well, its tone). Its tone might be obituary, blog post, match report, and so on. The vague notion of this had been around since the start of the project, but we hadn’t settled on many details, let alone how to implement them. But after a few releases, and when the software started getting real use, it became apparent that applying a tone should be very like applying a keyword. Suddenly things fell into place. The functional requirements were clear, as were the implementation details. Tone has become a very powerful feature (here are all our obituaries, all our blog posts, all our match reports,…) yet it was a relatively straightforward piece of work because it was, in the end, a relatively intuitive idea.
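
To give a flavour of what “very like applying a keyword” means in code, here is a toy sketch (emphatically not the real R2 model) in which a tone is just another kind of tag, so applying one reuses the machinery that already exists for keywords.

// Toy illustration only, not the real R2 domain model.
function Tag(type, name) {
  this.type = type; // "keyword" or "tone"
  this.name = name;
}

function Article(headline) {
  this.headline = headline;
  this.tags = [];
}

Article.prototype.addTag = function (tag) {
  this.tags.push(tag);
};

Article.prototype.tagsOfType = function (type) {
  return this.tags.filter(function (t) { return t.type === type; });
};

// An obituary picks up keywords and a tone in exactly the same way,
// and "all our obituaries" becomes a simple query over tone tags.
var article = new Article("An example obituary");
article.addTag(new Tag("keyword", "Science"));
article.addTag(new Tag("tone", "Obituary"));
// article.tagsOfType("tone") -> [ the "Obituary" tag ]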

Of course, domain driven design has a lot more to it than I’ve described here, and our use of it has been much deeper than feature implementation. If you’re lucky enough to be going to QCon San Francisco then today you can see our software architect Phil Wills presenting a lot more detail there.

Lightweight versus heavyweight: The cost is in the management

A recent conversation with a colleague got me thinking about so-called “lightweight” systems, and when they become more trouble than they’re worth. He was frustrated by some problems he was having; even more so, he explained, because he thought he was dealing with something that was “lightweight”. It’s a seductive word, and sometimes — as with other forms of seduction — when you get more involved than you should things can get a bit sticky.

This article is an attempt to explain what lightweight really means, both in terms of benefits and drawbacks. There are also a couple of comparative examples from my own experience.

[Image: A lightweight system (plus management support)]

Lightweight doesn’t mean simple

People often mistake “lightweight” to mean simple or quick. But this can’t be right, because everyone wants simple and quick, and if it really meant this no-one would use anything else. Every website would be rewritten with the lightweight Ruby on Rails and every application would be sitting on top of the lightweight SQLite database. Who wouldn’t? Who doesn’t want simple? Who doesn’t want quick?

Lightweight is often good, but it must have its tradeoffs, otherwise other technologies wouldn’t exist.

From the examples below I see lightweight as offering low cost in return for low demands, high cost for high demands. Heavyweight is disproportionately high cost for low demands, but low cost for high demands.

Lightweight carries low inherent management costs. But some situations require a high degree of management control whether you like it or not. That means that if a lightweight system needs to scale up you have to wrest management from it and maintain it externally. If you can do that then the lightweight system continues to work, but if the lightweight system will not relinquish management control, or if you don’t have the discipline to keep the management going, then it won’t be effective in the long run. By contrast heavyweight systems impose management and structure of their own. This is good if you’re going to need it, as it takes the pressure of discipline off you, but it’s not effective if you didn’t need that management structure in the first place.

To illustrate this, here are a couple of lightweight/heavyweight comparison case studies…

Language example: TCL and Java

TCL is a lightweight language. You get to write Hello World in one line, it doesn’t force much structure on you, and it’s pretty relaxed about how it’s written.

TCL is so good, in fact, that it was the basis of the original Guardian Unlimited website. We built Ajax-style tools with it before Ajax was known as a concept, we generated our front page from it, and we used it to integrate with our ad server.

But as our site grew the language didn’t scale with it. Clever shortcuts implemented by earlier developers confused newer developers because they obscured the purpose of the code. The lack of an imposed structure meant every foray into older code involved learning its idiosyncrasies from scratch. Development slowed down as we worked around older code. And when we wanted to redesign the website we found that through years of lightweight flexibility we had allowed ourselves to be tied into knots: it would be more effective to start again than to work with what we had.

In fact, for the most part we’re now using Java…

In contrast to TCL, Java is pretty heavyweight. Not only does Hello World require three lines (excluding any lines with just braces), but its philosophy of structure and layering percolates through from the core language to most of its add-ons. For example, to parse an XML document you have to drill through two abstraction layers before you can find the parser.

One Java framework that maintains this ethos is Hibernate, used for database access. Its architecture is complicated, and as usual this is to offer flexibility without relinquishing manageability. Recently a forthcoming release of the Guardian Unlimited website was failing its pre-production performance tests. Our developers tracked down a major cause of the problem to an inefficient query within Hibernate. They extracted some of the query’s logic up into the application layer and simplified what remained, rebalancing the work between the application and the database. Problem solved, performance restored. What’s relevant to our story is that the developers did this entirely within the architecture of Hibernate, so they didn’t compromise the design of the application and therefore didn’t add complexity.

CMS example: WordPress and the GU CMS

Over on ZDNet Larry Dignan extols the virtues of WordPress and says, effectively, “What have big content management systems ever done for us?”

WordPress is the lightweight CMS I’ve chosen for this blog, and I’m very happy with it. It’s easy to install, requires almost zero maintenance, and lets me focus on the writing. And yet I’m a strong advocate of the home-grown CMS we have for our journalists and subs on the Guardian Unlimited site. Is lightweight not good enough for our journalists? What has a big CMS ever done for us?

Well, I just looked at a current article on guardian.co.uk: “Ministers ordered to assess climate cost of all decisions”. It was created with our big CMS. What’s there that WordPress couldn’t deliver?

For a start it’s got a list of linked subjects down the side, which aren’t the same as WordPress’s tags because they’re tightly managed to ensure consistency and reliability. These subjects are also categorised, so Pollution and Climate Change are subjects under Environment, while Green politics is a subject under Politics. As I write this, I note also that the pages for Pollution and Climate Change are designed differently, with Climate Change being more pictorial and feature-led. Subject categorisations and subject-specific designs are beyond what WordPress’s tags do.

Okay, so apart from the linked subjects, the categorisations, and the subject-specific designs, what has a big CMS ever done for us? I suppose it’s worth mentioning the related advertising, which as I write includes a large ad for environmentally-friendly washing liquid. There are other contextual commercial elements, too, such as the sponsored features, links to green products and books, and offers of reducing energy bills and offsetting carbon emissions. And there are related articles and related galleries. And details of the article history, listing when and where it was first published, on what page and in what newspaper section.

Okay, so apart from the linked subjects, the categorisations, the subject-specific designs, the related advertising, the contextual sponsored features, the links to relevant products and books, the complementary offers, the related articles, the related galleries, and the article history, what has a big CMS ever done for us?

Well, I suppose it is serving over 17 million unique users a month…

I’ll stop now. The point is that a lightweight CMS such as WordPress could probably do any one of these things, with a bit of work. But it isn’t designed to do anywhere near all of them. And each time it’s changed to do one more of these things it moves further from its core architecture and closer to a point of paralysis, where nothing functions well any more because no part of it is doing what it was designed to do. A bit like the TCL example.

Looking back

Reviewing these two examples, it’s clear that the lightweight systems became, or would become, very costly when they were pushed beyond their initial expectations. In both cases the corresponding heavyweight systems came with their own (heavy) management structure, but that management structure ensures lower running costs.

In the Hibernate example our software maintained its architecture after we’d made our performance change; anyone looking at this new code would be able to rely on previous knowledge to understand what was going on. By contrast, anyone coming fresh to a snippet of old TCL code would be starting from scratch, regardless of how much of the other TCL code they’d seen.

Similarly, the large-scale content management system at GU is internally consistent, despite its vast range of features and functionality. Once someone has learnt the principles (which, admittedly, are non-trivial) they can get to work on pretty much any part of it. Pushing WordPress to do that would have created a monster.

Lightweight systems take the management away from you. And that’s ideal, as long as you don’t need that manageability.

Anti-features

Sometimes you can trust too much. Wherever I’ve worked I’ve been involved in a few examples where we listened to the customer, trusted them to know what they wanted, gave it to them, and they’ve regretted it. We have delivered anti-features.

[Image: A lamp with slightly too many features]

Most recently at Guardian Unlimited our (previous) homepage had clever layout rules whose logic sometimes overrode the content that editors entered. The result was that editors were often confused that the page didn’t render according to what they had put in the system. The clever layout rules had been devised in close collaboration with the original editors and graphic designers — the expert users. They reasoned that they didn’t want anyone to unbalance the layout by entering inappropriate combinations of text and images.

But these layout rules had been forgotten over time, hence the confusion years later. Consequently the tech team was often called up regarding a supposed bug (“I’ve put this image in but it doesn’t appear”), and effort was expended only to discover that it was in fact a feature — much to the caller’s amazement. Our micro-lesson there was to give the end users the freedom they naturally expected, including the freedom to decide for themselves what was a balanced layout. If what they produced was unbalanced then the designers would steer them back in the right direction — a much more human corrective.

Those clever layout rules were an anti-feature. They were additional functionality that actually made users’ lives worse. Eventually we removed them.

Anti-features happen in highly experienced mega-corporations, too. In November 2006 Joel O’Software started what became known as “the Windows shutdown crapfest”. He compared the three shutdown options of Apple’s OS X with the astonishing nine or more shutdown options of Microsoft’s Windows Vista. Not only is that confusing for the user, but it was also incredibly painful to develop — Moishe Lettvin, a former Microsoft engineer, tells the sorry story.

But at least the Vista shutdown anti-feature made the problem visible. In the layout logic example, and others I know of, there is silent intelligence at work that leads to confusion and frustration without giving the user any visibility at all of what’s going on.

Anti-features are time-consuming to build in, because any feature is time-consuming to build in. But anti-features also consume additional time in their removal.

One way to prevent anti-features is to help the end user determine the long-term consequences of what they want. Of course, you’d hope they’d think of that themselves, but you can’t avoid your responsibilities to the project as a whole if you see something that others have missed.

Another way is to adhere even more fervently to the Agile mantras of delivering early (before every last sub-feature is there), keeping it simple, and focusing ruthlessly on only the highest value work. This way we deliver first a front page without clever corrective layout logic, or one or two shutdown options only, and consider further enhancements later if we find we need them. Suggesting and doing this is easier said than done, of course, but if everyone trusts each other to listen honestly to what they have to say then it’s more likely the decisions made will be the best ones.

Meanwhile we can at least ask ourselves each time: Is this a feature or an anti-feature?

There’s nothing so permanent as temporary

[Image: Temporary workaround]

An aphorism I heard recently has proved particularly memorable: “there’s nothing so permanent as temporary”. It wasn’t originally referring to software — it comes from a builder who is rebuilding the kitchen of some friends. He’s from Azerbaijan, and my friends are fond of quoting him in full, to give the words maximum colour: “As we say in Azerbaijan, there’s nothing so permanent as temporary”.

It is, however, as relevant to software as it is to building (and probably to many other areas of life). The constant pressure to deliver means there is very rarely the opportunity to go back and improve a nasty historical workaround which is causing problems today. However, there are some strategies we might consider…

1. Automated testing

If the nasty programmatic stickytape is relatively small then a comprehensive suite of automated tests should enable you to make the change relatively safely and painlessly. Of course, you need to have built up that suite of tests in the first place. And the “relatively small” caveat is important, because only then can the software people fit the replacement activity into their daily work without disrupting any schedules. If there’s no external change then this is an example of refactoring.

If the nasty workaround isn’t so small, but really does need replacing, then a much more concerted effort is needed. A plan needs to be carefully devised which allows slow piecemeal replacement. The point of the plan should be to take baby steps towards the end goal; each step should also leave the system in a stable state, because you never know when you might have to delay implementing any subsequent step. However, if everyone in the team knows the plan then the end goal can be achieved with minimum disruption to the business’s schedules.

2. Act like a trusted professional

I find trust and transparency are an increasing part of the software projects I’m involved with. Part of this is that the technical people want to give their customers options, and are keen to explain the pros and cons, so an informed decision can be made.

However, while this is usually excellent, there can be times when it is too much or undesirable. Stakeholders don’t necessarily want to be given options for every single thing, and sometimes it’s right for a technologist to make an executive decision without referring back — because they’re a professional, and because they are trusted (and employed) to be professional. In this regard it’s sometimes the responsibility of a technologist not even to entertain the possibility of a temporary workaround, knowing it will become permanent.

The decisions I’ve been close to in this regard tend to be architecture or technology decisions. For example, in my current work we have, roughly speaking, legacy technologies and modern technologies. Sometimes a feature requirement would arise which was quicker to implement in the legacy technology — but of course we knew that would be more costly in the long run. If the timescales were not wildly divergent then I’d encourage my team to choose the modern technology and not to offer up a choice. These days I need to do that much less, because by building up that base of modern technology it’s now easier to expand and extend it for future features. By making hard decisions in our professional capacity earlier on we’ve made it easier to make the right decisions in the future.

3. Bite the bullet

A third way to remove the temporary workaround is to just be honest about the work involved and the cost to the business. Easier said than done, but an open and frank discussion followed by adjustment and agreement will ensure the issue is examined properly and supported more widely.

One example I remember is a particularly troublesome database table. Originally conceived as an optimisation with a few hundred rows, it grew over time to be a real albatross, slowing both development and production work, and running to over 19 million rows. However, we couldn’t replace it without a lot of effort, because its tentacles were everywhere; we needed to schedule real time for it, and that would mean less time for other, more visible, work.

But when we presented the plan, two things sugared the pill. First, we had long cited the table as the reason for previous work taking longer than anyone would like, so the cost of its presence was already felt tangibly. Second, we ensured our plan was broken down using the “baby steps” approach outlined above — it would take a long time, but it would take out no more than 10% of our resources at any moment, and we could always suspend work for a period if absolutely necessary. The plan won support, was executed, and after several months our DBA typed the SQL “drop” command and we popped a bottle of champagne.

Meanwhile, back in the kitchen…

Of course, all that is assuming the temporary workaround really does need to be replaced. In my friends’ kitchen they are actually quite happy with the drawer dividers they’ve requisitioned as a spicerack. Aside from anything else, it provokes conversation and allows them to talk about their Azerbaijani builder’s many philosophies. He seems to have so many pithy sayings — all of which begin “As we say in Azerbaijan…” — that they’re beginning to suspect he may be making them up as he goes along. Still, if he does have to make a hasty exit from the building trade there may be an opening for him in the software industry.

Series: One nice thing about three new sites

Today we’ve launched three new sites on Guardian Unlimited. Well, more correctly, we’ve redesigned and rebuilt our Science, Technology and Environment sites. Working for a media organisation, I can rely on people much more articulate than me to tell the stories better — see Alok Jha on Science, Alison Benjamin on Environment and (with more technical info than you can shake a memory stick at) Bobbie Johnson on Technology.

So I won’t repeat what they’ve said. Instead, I want to point to one tiny new feature which I’m unfeasibly excited about. It’s series. You know series. They’re those things which appear regularly in a paper or on a website.

[Image: Flagging that it's a series]
[Image: Navigating through a series]

In our redesigned sites we’ve introduced series as a specific and distinct concept. For example, Bad Science is a weekly series by Ben Goldacre on pseudo-science. Ask Jack is a series in which Jack Schofield helps people with their computer problems. In the old world these articles would be gathered into their own sections and left at that. But a series is a little bit more than this. For one thing when you land on an article in a series it’s nice to know that it is one of a family. We now flag that at the side of the piece. For another thing, you should be able to navigate to the next and previous articles in the series. Again, this series-specific navigation appears with each piece.

It’s a very small thing, but to me it shows something fundamental and important: the software we’re producing really does match the way our writers and editors think. It is, for those in the know, a result of domain driven design. For everyone else, it’s simply software that does what you think it should do.

The truth about quick and dirty

“Quick and dirty” is a weapon that’s often deployed by people for getting themselves out of a tight spot. But like any weapon, in the wrong hands it can backfire badly. Technical people use techie quick and dirty solutions in their work. Those people we like to work so closely with — our clients — use their own quick and dirty solutions in their work. But what happens when these people come together? What happens when it’s not the developers or sysadmins who suggest their own quick and dirty solution, but when it’s suggested by our customers on our behalf?

When it’s the client who suggests we deploy a quick and dirty solution, there are a few things we need to consider…

1. What does “dirty” mean?

There’s no doubting what “quick” means. But what does “dirty” mean? It’s something that makes the technicians itchy, so it’s worth understanding why.

A dirty solution might suggest one that’s inelegant, that doesn’t conform to a techie’s idea of a good design. That might be true, but it’s probably not the core of the matter. Any decent techie who works in the real world doesn’t advocate elegant design for its own sake. They know that most of their designs are far from perfect, and in fact they probably pride themselves on maintaining a pragmatic balance between the ideal and the expedient — this, then, won’t necessarily make a techie uncomfortable.

So “dirty” doesn’t mean “inelegant”. I think in this context it means something which incurs technical debt. That is, it’s a technical shortcut for which we are going to pay a rising cost as time goes on. That’s what causes the discomfort — the knowledge that implementing this will make the techie’s life, and the life of their client, difficult in ways that cannot really be understood until it’s too late. It might also happen to be an inelegant solution, but that’s by-the-by. The important thing is that “dirty” implies something which carries a cost.

Of course, the cost of dirty has the benefit of quick. And as we go on, seeing this in cost/benefit terms is actually very informative…

2. Who gets what?

If you’re my customer and I’m your techie, then there’s a significant and unfortunate thing about quick and dirty that I need to tell you: You get the quick, but I get stuck with the dirty. Or to put it another way: the cost and the benefit are not felt by the same person.

There are times and circumstances when this is not the case. If an individual sysadmin is making a judgement call about their own work, say, then they may choose a quick and dirty solution. And if we are considering an in-house software team with in-house customers, then the company as a whole will both benefit from the quick and pay the cost of the dirty.

But these examples don’t tell the whole tale. If our individual sysadmin is actually one member of a systems team, and has responsibilities to the team, then it’s their colleagues who are going to be sharing the cost — and in fact it’s not entirely correct to have called them an “individual sysadmin” in the first place. And while it’s true to say our example company reaps the benefit and pays the cost, it’s probably not the case that “the company” has made an informed and conscious decision about the cost and benefit. That’s because it’s very unlikely that such a decision will have been escalated to the single body responsible for both the cost and the benefit — the board, or the owner, or the managing director.

In the end, we must recognise that the one who reaps the benefit is probably not the one who’ll be paying the cost.

3. Who decides what?

The “who gets what?” question leads in to a more general point: experts must be trusted to make judgements in their own areas.

So just as a techie should not tell a customer how to do their job, so the customer should not tell the techie how to do theirs. And in particular, it’s not up to the customer to decide whether the technical solution should be a dirty one.

Of course, we should always feel free to make suggestions; no-one is so big that they shouldn’t be open to ideas. It’s fair and reasonable for the customer to ask the techie if they can do it quicker… and for that the techie would need to demonstrate some creative thinking. The customer may ask if there are better solutions… and the techie would need to do some research. And the customer might even suggest a specific technical solution they’d heard of… and the techie would have a responsibility to give an honest and informed opinion. But if someone is supposed to be the expert on technical issues then they do need to be free to make judgement calls on technical issues, including which solutions are and are not appropriate. That shouldn’t be a worry to the customer; as mentioned above, good technical people tend to take pride in their pragmatism.

And in fact, this should be good for everyone. By trusting everyone to use their expertise we should get the best solutions.

[Image: Quick and dirty: cost and benefit]

4. How long does it last?

One final point on the cost/benefit view of quick and dirty: Quick lasts only a short time, but dirty lasts forever.

More precisely, the benefit of quick lasts from the time the quick thing is released to the time the “proper” thing would have been released; and the cost of dirty also lasts from the time the quick thing is released, but it goes on until the time the software is rewritten or deleted.

I’d say that unless software is a total failure it tends to last a very long time, and usually much longer than anyone ever would have wished. This is why the term “legacy software” is heard so often — if software was generally replaced or removed in good time then we’d never have needed to invent that phrase. And the fact that the dirty bits outstay their welcome means that the technical debt they incur is paid, and paid, and paid again… certainly long after the shine of being early to market has worn off.
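
As a toy illustration of that asymmetry (the model and all the numbers are invented, purely to show the shape of the trade-off), compare a benefit that stops accruing when the proper version would have shipped with a cost that keeps accruing until the code is rewritten:

// Illustrative only: invented numbers, deliberately crude model.
function quickAndDirtyValue(weeksEarly, weeksUntilRewrite, benefitPerWeek, costPerWeek) {
  var benefitOfQuick = weeksEarly * benefitPerWeek;   // stops when the "proper" thing would have shipped
  var costOfDirty = weeksUntilRewrite * costPerWeek;  // keeps going until the code is rewritten or deleted
  return benefitOfQuick - costOfDirty;
}

// e.g. shipping 4 weeks early but living with the shortcut for two years:
// quickAndDirtyValue(4, 104, 5000, 300) -> 20000 - 31200 = -11200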

But it’s not all bad

I don’t want to give the impression that quick and dirty is inherently bad. It’s not. Nor is everything quick necessarily dirty. Far from it; some things are just plain quick.

But if we’re talking about quick and dirty then it helps to think in cost/benefit terms. We need to be very clear about who’s paying the cost and who’s reaping the benefit, and exactly what that cost and its benefit is. If we can understand that, and if we can respect each other’s expertise in making these decisions, then we can be more certain that we’re making the best decisions a better proportion of the time for the long term benefit of everyone.

Javascript puzzler: throw null

I know this isn’t really the place to post code, and I’m sorry. But this is why I don’t like loosely typed languages. I was caught by a Javascript nasty earlier this week (no, I don’t code professionally; this was for fun) which boils down to the code below. What do you think could happen here…?

var problem_found = null;
var listener = function(event){ /* Do something with event */ };

try
{
    document.firstChild.addEventListener("click", listener, false);
}
catch (ex)
{
    alert("Problem: "+ex);
    problem_found = ex;
}

if (problem_found)
{
    alert("There was a problem: " + problem_found);
}

You might have thought that either everything passes within the try block smoothly, in which case no alerts appear, or that there’s an exception in the try block and you get two alerts — one from the catch block and one from the if block.

And you’d be wrong.

Javascript actually allows you to throw anything. You should really throw an Error. But you could throw a String, or even a null if you liked. And if your code could throw a null then testing for null does not tell you if you had a problem. In the above example you’d get only one alert, from the catch block. If your problem-resolution code is in the if block then you’ll miss it.
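
For what it’s worth, one defensive rewrite of the snippet above (my own suggestion, not a fix taken from JsUnit) is to record that the catch block ran at all, separately from whatever was thrown:

var problem_found = null;
var had_problem = false;
var listener = function(event){ /* Do something with event */ };

try
{
    document.firstChild.addEventListener("click", listener, false);
}
catch (ex)
{
    problem_found = ex;   // may legitimately be null
    had_problem = true;   // but this flag is unambiguous
}

if (had_problem)
{
    alert("There was a problem: " + problem_found);
}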

Advocates of loosely typed languages will say that type problems can be avoided if you write good unit tests. But the problem occurred for me because this kind of mistake was in the JsUnit test framework, and it incorrectly showed a test passing. It took me two days to find the source of the problem. It turns out that the addEventListener() method above in Firefox 1.5 does indeed manage to throw a null if the listener parameter is not a function. Try changing it to a string and you’ll see this. My mistake was to pass in a non-function. Firefox’s mistake was to throw a null. JsUnit’s mistake was to assume a null exception means no exception.

Strict typing helps avoid problems that come about because we’re human and make mistakes. A wise man in ancient China once said “the compiler is your friend”. Javascript doesn’t have a compiler, and I can do with all the friends I can get.

By the way, you may be interested to know what the statically-typed Java language does. It doesn’t allow you to throw any object, but it will allow you to throw null. And if you do that it automatically converts it to NullPointerException. An actual, proper, concrete object. I know who my friends are.

The benefits of OO aren’t obvious: Part V of V, evolving the product

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding, separation of concerns, better testability and refactoring. Now read on for the final part…

Evolving the product

The final advantage to OO: it’s easier to evolve — hell, just to change — what you’re producing. Now ideally this shouldn’t matter. Ideally you’d get your requirements, your client would say “I want this, this and this” and off you’d go, and then you’d produce something which the client is delighted with first time. That works in a waterfall environment, an environment where things are predictable and don’t change, but not in most cases. In most cases the client changes their mind, or adapts their ideas, and that’s just reality.

Note that I’m not talking about the so-called advantage of OO that I’ve often heard cited, and never really believed: that it’s easier to capture requirements with OO because it better models the world. To me the advantage is in tracking changing requirements.

I think OO deals with this better because people’s thinking evolves around things more than processes, so the amount you have to change your code is roughly proportional to the amount of rethinking your client has done, and their idea of a “small change” will at least be closer to your idea of a small change.

Again, this doesn’t come for free. You need to keep the structure of your application running in parallel with the structure of the client’s thinking. This takes experience and some work. But at least you’ve got more of a fighting chance of success than with a procedural language that’s already one step removed from people’s thinking. In particular domain driven design is all about ensuring the structure of software keeps up with how clients understand their domain, with the express aim of making change much easier. And of course being able to refactor with confidence is an important part of that.

Why wasn’t all that so obvious?

If you’d asked me five or ten years ago about the benefits of object oriented programming I could have cited the theory but wouldn’t really have been able to justify it. Data hiding is helpful, but probably doesn’t warrant changing your language — and your thought process — alone. But ideas such as dependency injection, refactoring and domain driven design are things that have come to prominence only relatively recently. They’re where I see the real benefits, and none are obvious from the atomic concepts of OO.

None of this means OO is always the right answer. If your requirements are solid and your domain is well understood then lots of the above advantages are irrelevant. But most of the time that isn’t the case, and it’s then that object orientation can show its value.

It’s odd to me that OO’s benefits are really only tangible years after it’s come to prominence. Perhaps it takes years for us as an industry to really understand our tools. Perhaps it’s only after the masses have taken up a technology — and produced an awful lot of badly written software — that the experts will come forward and really explain how things should be done. Or perhaps I’m just a very slow learner.

Whatever the reason, I can at last explain why I think OO is a step forward from procedural languages. Even though it’s not obvious.

All installments: