A new Travel site, and four uses for tags

Here at Guardian Unlimited we’ve just launched our new Travel site. You’ll be able to read a lot about it in the industry press, but I just wanted to write a little about what we’ve done from a technical-managerial point of view.

The project motivation is ideal

The main strands of our business are editorial, commercial and technical. But this is not an editorial project, it’s not a commercial project, and it’s not a technical project. This is something that’s been driven by all of us. Editorially we wanted to improve navigation and bring in more user-generated content. Commercially we wanted to offer more flexible advertising opportunities. And technically we wanted to exploit our content much more.

The new Travel site brings all of us together. One technical driver was to exploit keywords, or tags, a bit like those on Del.icio.us and Flickr. This is a very flexible way of categorising content, and hence of making better use of it. Tags, in turn, allow us to generate much more relevant navigation and bring in related readers’ tips from our travel tips site, Been There. For example, if you’re reading about Croatia then you can also see a column headed “Readers’ top tips” which shows readers’ tips specifically about Croatia.

This is also the perfect opportunity to redesign, to better present that information. And just as we can present more relevant tips, so we can present more relevant advertising to our readers. And the redesign also allows us to shape the pages to better fit the ads — the page can close up around a small ad, and expand to comfortably fit a larger one.

Of course, there’s a lot of technical stuff involved, but a technical project is no good for the sake of it. Here, the three strands of our business were driving each other. An ideal scenario.

Aside from much better navigation and contextual advertising, and since this is a technical blog, here are four of the other (smaller) things we’ve used tags for…

There’s a page for every tag

Each tag has its own page! Here’s the page for San Francisco. And here’s the page for skiing. And here’s one for food and drink trips.

In fact, we’ve got hundreds of tags in the system already — you can see our list of places and activities, for example. And now we’ve automatically got a page for each of them.

There’s an RSS feed for every tag

Okay, not a big leap technically, but terrific for our readers. Top right on each subject page is the RSS link for that subject. If you’re interested in an area of Travel then it’s odds on we write about it. And if we write about it then we’ve got an RSS feed for it. You can get your own personal skiing feed from Guardian Unlimited.

We can combine tags

Tags also allow us to pull together even more specific and relevant content. If you’re on the San Francisco page you can find out about food and drink in San Francisco. Or you can investigate short breaks in Portugal. Or water sports in the UK. Well, you get the idea.

Friendly URLs

Our URLs are usually of the form

http://travel.guardian.co.uk/budget/story/0,,991699,00.html

which is short but a bit obscure. So we’ve taken the opportunity to improve that for the new Travel site. The same article above now has the URL

http://travel.guardian.co.uk/article/2003/jul/05/watersportsholidays.budgettravel.boatingholidays

It’s longer, but we hope it’s also more intuitive. You can clearly see what it is (an article), the date (5 July 2003), and what it’s about (water sports, budget travel, and boating holidays).

And it’s not just articles which have friendlier URLs. Here’s the URL for the Hamburg tag page:

http://travel.guardian.co.uk/tag/hamburg

and here’s the RSS feed for our Hamburg content:

http://travel.guardian.co.uk/tag/hamburg/rss

All much easier to deal with.

And one other thing…

Lots of this is automatic. But there’s one important part of tagging that’s not: the actual act of adding the tags. In theory we could use software to look at each of our articles and determine what it’s about, and so apply the tags. But that’s not going to give the best results for our readers. So we’ve made sure that all our articles have had their tags considered and chosen individually by someone who has actually read the article. That means the navigation should be spot on, and the advertising links are ideally targeted. There’s an awful lot that machines can do, but sometimes it needs a human being to add the magic.

Anyway, hope you like the site.

Javascript puzzler: throw null

I know this isn’t really the place to post code, and I’m sorry. But this is why I don’t like loosely typed languages. I was caught by a Javascript nasty earlier this week (no, I don’t code professionally; this was for fun) which boils down to the code below. What do you think could happen here…?

var problem_found = null;
var listener = function(event){ /* Do something with event */ };

try
{
    document.firstChild.addEventListener("click", listener, false);
}
catch (ex)
{
    alert("Problem: "+ex);
    problem_found = ex;
}

if (problem_found)
{
    alert("There was a problem: " + problem_found);
}

You might have thought that either everything passes within the try block smoothly, in which case no alerts appear, or that there’s an exception in the try block and you get two alerts — one from the catch block and one from the if block.

And you’d be wrong.

Javascript actually allows you to throw anything. You should really throw an Error. But you could throw a String, or even a null if you liked. And if your code could throw a null then testing for null does not tell you if you had a problem. In the above example you’d get only one alert, from the catch block. If your problem-resolution code is in the if block then you’ll miss it.

Advocates of loosely typed languages will say that type problems can be avoided if you write good unit tests. But the problem occurred for me because this kind of mistake was in the JsUnit test framework, and it incorrectly showed a test passing. It took me two days to find the source of the problem. It turns out that the addEventListener() method above in Firefox 1.5 does indeed manage to throw a null if the listener parameter is not a function. Try changing it to a string and you’ll see this. My mistake was to pass in a non-function. Firefox’s mistake was to throw a null. JsUnit’s mistake was to assume a null exception means no exception.

Strict typing helps avoid problems that come about because we’re human and make mistakes. A wise man in ancient China once said “the compiler is your friend”. Javascript doesn’t have a compiler, and I can do with all the friends I can get.

By the way, you may be interested to know what the statically-typed Java language does. It doesn’t allow you to throw just any object (only Throwables), but it will allow you to throw null. And if you do that, the runtime automatically converts it into a NullPointerException. An actual, proper, concrete object. I know who my friends are.
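
For the curious, here’s a tiny standalone sketch of that behaviour. It’s my own illustration, not code from any library:

public class ThrowNullDemo
{
    public static void main(String[] args)
    {
        try
        {
            throw null;   // this compiles: the compiler accepts null here
        }
        catch (NullPointerException ex)
        {
            // At runtime the JVM substitutes a real NullPointerException,
            // so the catch block always receives an actual object.
            System.out.println("Caught: " + ex);
        }
    }
}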

Measuring development is useful, up to a point

There’s a post from Joel O’Software regarding measuring performance and productivity. He’s saying some good stuff about how these metrics don’t work, but I’d like to balance it with a few further words in favour of metrics generally. Individual productivity metrics don’t work, but some metrics are still useful, including team metrics which you might class as productivity-related.

  • Individual productivity metrics don’t work.
  • Productivity-like metrics are still useful…
  • …but they don’t tell the whole story

Individual productivity metrics don’t work

Joel O’S states that if you incentivise people by any system, there are always opportunities to game that system. My own experience here is in a previous company where we incentivised developers by how much client-billable time they clocked up. Unfortunately it meant that the developers flatly refused to do any work on our internal systems. We developed internal account codes to deal with that, but it just meant that our incentivisation scheme was broken as a result. Joel has other examples, and Martin Fowler discusses the topic similarly.

Productivity-like metrics are still useful…

Agile development people measure something called “velocity”. It measures the amount of work delivered in an iteration, and as such might be called a measurement of productivity. But there are a couple of crucial differences to measuring things such as lines of code, or function points:

  • Velocity is a measurement of the team, not an individual.
  • It’s used for future capacity planning, not rewarding past progress.

Velocity can also be used in combination with looking at future deadlines to produce burndown charts and so allow you to make tactical adjustments accordingly. Furthermore, a dip in any of these numbers can highlight that something’s going a bit wrong and some action needs to be taken. But that tells you something about the process, not the individuals.

The kick-off point for Joel’s most recent essay on the subject is a buzzword-ridden (and just clunkily-worded) cold-call e-mail from a consultant:

Our team is conducting a benchmarking effort to gather an outside-in view on development performance metrics and best practice approaches to issues of process and organization from companies involved in a variety of software development (and systems integration).

It’s a trifle unfair to criticise the consultant for looking at performance metrics, but one has to be careful about what they’re going to be used for.

…but they don’t tell the whole story

A confession. We track one particular metric here in the development team at Guardian Unlimited. And a few days ago we recorded our worst ever figure for this metric since we started measuring it. You could say we had our least productive month ever. You could. But were my management peers in GU unhappy? Was there retribution? No, there was not. In fact there was much popping of champagne corks here, because we all understand that progress isn’t measured by single numbers alone. The celebrations were due to the fact that, with great effort from writers, subs, commercial staff, sponsors, strategists and other technologists, we had just launched the new Travel site.

A bad month then? Not by a long shot. The numbers do tell us something. They tell me there was a lot of unexpected last-minute running around, and I’ve no doubt we can do things better the next time. It’s something I’ve got to address. But let’s not flog ourselves too much over it — success is about more than just numbers.

Quality is a bogus variable

Quality seems to be widely misunderstood. People often talk about quality being a key variable that a project team can choose to alter to reshape their project — along with other variables such as cost, scope and time. I think that view is flawed. Meanwhile Mishkin Berteig is not unusual when he says “I have come to believe that quality is not negotiable” — meaning that although it’s possible to negotiate over it, it’s fundamentally wrong to do so. I think that view is flawed, too. From my position quality can rarely vary in the same way as other key project variables, is rarely changed directly, and often it is right to allow it to drop below 100%. I want to spend some time here explaining why.

Before I launch into this, allow me to say that although this is inspired by an issue on which I differ with Mishkin, I do have great respect for him. In fact, it’s arguable that I’m only writing this because I have such respect for him — because it’s so surprising to see him write something that seems to be so mistaken.

The sections ahead are:

  • Quality is about much more than bugs
  • Quality is rarely addressed at the outset
  • Quality is less flexible than scope, time or money
  • Quality is usually the driven, not the driver
  • Quality can be lowered — and sometimes that’s okay
  • Non-executive summary for non-executives

Now, let’s get on with things…

Quality is about much more than bugs

First, let’s make one thing clear: quality is not just bugs. It includes look and feel, ease of use, ease of installation, performance, and so on. Looking at Mishkin’s post it’s clear to me that when he says “quality” he really means “bugs”: he is making a strong case for test-driven development. But test-driven development doesn’t help with an application that requires six clicks rather than one to enter the default data. It doesn’t help when one of your modules needs administrator access to save its state. It doesn’t help when you can’t add any more servers at the point you need to triple its capacity. These are situations where quality could be improved, but which aren’t solved by test-driven development. Quality is wider than that.

Quality is rarely addressed at the outset

There’s a classic project management mantra in many variations: “cost, quality, time: pick any two” is one such [1, 2 (PDF), 3]. At one level it implies (quite fairly) that the major project variables at the outset are interdependent, and you can’t choose settings for them all. You can’t say, for instance “I want it to do all this, I want it to be flawless, and I want it in a week” — making just two of those demands determines what you get for the third.

But at another level it suggests that people are making these kinds of decisions, and that quality is often determined at the outset. I find this implausible. I rarely hear clear requirements at the start of a project about the quality of the result. I’ve heard vague pronouncements, but rarely anything that could be factored into a project plan. I’ve heard “we need to do better than last time”, and “let’s not over-do it the way we did before”. But that’s usually part of an agenda-setting speech, not the project plan. It’s not enough to help make a calculable impact on scope, cost or time.

At the start of a project you will hear a precise number for a budget, a firm date for a deadline, and (we hope) a clear list of requirements. But it’s very rare that people give a precise level of quality — it doesn’t even have a common unit of measurement (remember: quality is not just a bug count). Quality is rarely negotiated at the outset. The client expects the software to do the job specified, and that’s that.

Quality is less flexible than scope, time or money

Another misconception is that quality can be varied to the same degree as other major (initial, or input) variables such as scope, time and money. Scope can be cut right back, or added to indefinitely. Similarly for money and time. But quality doesn’t vary in the same way. Quality can’t be increased arbitrarily — after a point there’s nothing left to do, assuming the scope isn’t changing. Similarly it can’t be reduced to zero, because below a certain point — well before you get to zero — the product isn’t doing the thing it was supposed to do and therefore you’re reducing scope.

However, while I believe that quality varies less than other project variables, I do believe the consequences of improving quality — even a small amount — are significant. Unit tests reduce firefighting and ensure the project is always moving forward. Usability testing ensures people are much more comfortable and confident with your software, paying dividends throughout the business. Increasing quality by a small amount can have a big, positive knock-on effect.

Quality is usually the driven, not the driver

I’ve talked about scope, time and money being “input” or “initial” variables. They’re things that are usually explicitly set at the start of the project. Quality, however, might be set early on, but it’s usually determined by other things.

Sam Newman writes, correctly in my opinion, that quality is driven by process. Here, the way you do things determines how good the result is. Quality is an “output” variable more often than not. Introducing test-first development is a process change; allowing post-development testing time to be reduced because the development phase has overrun is a consequence of an unfortunate project management process; introducing or eliminating performance testing is a process change. In all these cases quality is not an independent variable; it is driven by other things.

Quality can be lowered — and sometimes that’s okay

Quality is negotiable, and it would be foolish if it wasn’t. It would be foolish if budget, scope and time weren’t ever negotiable, but they are, and in that respect quality stands next to them on a level.

Real situation: The website is being built; the investors are making bullish noises in the City; the launch — a release and public demo — was scheduled months ago and is due next week with 200 big media attendees, and testing is still ongoing. If a real, but infrequent, bug is found that can’t be fixed by next week, does the company cancel the launch?

Real situation: You’ve been working for months on your release. It’s almost done, but there’s another project looming, which the sales team are desperate to get started. The time remaining for your current work will leave some untidy edges, but there’s a pressing commercial need to release it and move on to the new project before the company loses more competitive edge than it already has. Do you push for a project extension?

If a developer tells me their professional needs are bigger than those of the organisation in which they work, then they shouldn’t be working there. A developer works for their organisation — just as everyone in that organisation does — not the other way round. And making the right decisions for the business must have positive consequences for the development team.

Mishkin says

Every time the developers sacrifice quality for schedule/cost/features, they incur a debt. Sometimes this is called “technical debt”. I like to call it lying.

Powerful prose, but I’m afraid that particular extract strikes me as uncharacteristically arrogant. “Technical debt” is an honest term. By using this term the developer says to their organisation that the decision taken now will incur an increased cost later. The organisation as a whole can then weigh up the options and make an informed decision. And call me crazy, but it might just be that, after being apprised of the situation, the project board, or managers, or directors are in a better position to make a strategic decision than the developer, who’s paid for their expertise in one very particular part of the organisation.

Of course, that’s not to say that every decision on quality needs to be raised to board level. Just as a developer wouldn’t dream of bothering their project board as to whether they should use descriptive variable names (“Well, short names are quicker to type and might shave off 2% of our delivery time, but longer names will save us time if we need to revisit that part of the code…”) so there will always be ways to improve quality in refactoring, or configuration, or packaging, or architecture which the developer should just do without question because they’re a professional. But the developer who never informs their project board of major quality forks in the road, and who never allows their client to be involved in such decisions, is failing to fulfill their responsibility as a member of the whole project team.

At Guardian Unlimited we’re always looking for ways to improve the quality of what we do. But I’ve also heard people say “let’s not gold-plate it”, and “look, we’re not producing art”. These are senior developers who are arguing against particular quality improvements because they are able to weigh up not just the technical decisions but also the wider business decisions. They are being pragmatic. I’d say they are much better developers for it.

Non-executive summary for non-executives

Let’s recap. Quality is really important. It’s about much more than unit testing. And making small increments in quality can result in disproportionately large increments in the results. But let’s not kid ourselves that quality is a variable that can be tweaked arbitrarily and independently this way and that. It’s not of the same standing as budget or scope or time. And let’s not kid ourselves that developers of principle have to stand firm against some kind of management beast constantly trying to undermine them. Developers are an integral part of the organisation, and should behave (and be treated) as such. The quality of developers’ output needs to be balanced just as every other organisational demand does.

The better we can see and understand software quality for what it really is, the better we are able to improve it.

The benefits of OO aren’t obvious: Part V of V, evolving the product

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding, separation of concerns, better testability and refactoring. Now read on for the final part…

Evolving the product

The final advantage to OO: it’s easier to evolve — hell, just to change — what you’re producing. Now ideally this shouldn’t matter. Ideally you’d get your requirements, your client would say “I want this, this and this” and off you’d go, and then you’d produce something which the client is delighted with first time. That works in a waterfall environment, an environment where things are predictable and don’t change, but not in most cases. In most cases the client changes their mind, or adapts their ideas, and that’s just reality.

Note that I’m not talking about the so-called advantage of OO that I’ve often heard cited, and never really believed: that it’s easier to capture requirements with OO because it better models the world. To me the advantage is in tracking changing requirements.

I think OO deals with this better because people’s thinking evolves around things more than processes, so the amount you have to change your code is roughly proportional to the amount of rethinking your client has done, and their idea of a “small change” will at least be closer to your idea of a small change.

Again, this doesn’t come for free. You need to keep the structure of your application running in parallel with the structure of the client’s thinking. This takes experience and some work. But at least you’ve got more of a fighting chance of success than with a procedural language that’s already one step removed from people’s thinking. In particular domain driven design is all about ensuring the structure of software keeps up with how clients understand their domain, with the express aim of making change much easier. And of course being able to refactor with confidence is an important part of that.

Why wasn’t all that so obvious?

If you’d asked me five or ten years ago about the benefits of object oriented programming I could have cited the theory but wouldn’t have really been able to justify it. Data hiding is helpful, but probably doesn’t warrant changing your language — and your thought process — alone. But ideas such as dependency injection, refactoring and domain driven design are things that have come to prominence only relatively recently. They’re where I see the real benefits, and none are obvious from the atomic concepts of OO.

None of this means OO is always the right answer. If your requirements are solid and your domain is well understood then lots of the above advantages are irrelevant. But most of the time that isn’t the case, and it’s then that object orientation can show its value.

It’s odd to me that OO’s benefits are really only tangible years after it’s come to prominence. Perhaps it takes years for us as an industry to really understand our tools. Perhaps it’s only after the masses have taken up a technology — and produced an awful lot of badly written software — that the experts will come forward and really explain how things should be done. Or perhaps I’m just a very slow learner.

Whatever the reason, I can at last explain why I think OO is a step forward from procedural languages. Even though it’s not obvious.


The benefits of OO aren’t obvious: Part IV of V, refactoring

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding, separation of concerns and better testability. Now read on…

Refactoring

This is something that builds on all the items above. If you’ve worked on any piece of code for a length of time you’ll always have come to the point where you think “Damn, I really should have done that bit differently.” Refactoring is what you want to do here: “restructuring an existing body of code, altering its internal structure without changing its external behavior”.

Refactoring is helped considerably by OO. “I already refactor,” says my non-OO friend, “I always make sure common code is put into functions.” Which is good, but there’s a whole lot more to refactoring than that. Martin Fowler lists a vast number of options. But before I list my favourite refactorings (how sad is that? I have favourite refactorings) let me explain why this is so important, why it probably isn’t all entirely obvious, and how it relates to so much else here…

First, be aware that two people could both write outwardly identical applications but with their source totally different internally. At its most extreme, refactoring would enable you to evolve from one codebase to the other in lots of very small steps, at each step leaving you with the same working, stable, application. That’s much more than just putting common code into functions. If you think you already have the skill and technology to entirely change your codebase without changing anything outwardly then you’re a refactoring expert. If not, then you’ve got a lot to learn (and a lot of fun yet to have).

Second, refactoring is so important because it gives you a chance to improve your code. Unit testing (and indeed all testing) makes this possible. Test driven development isn’t just about test-first development, it’s also about refactoring. The last step of a TDD sequence is to refactor. You ensure your tests pass, you refactor, and you run your tests again, to make sure they still pass — to ensure you’ve not broken anything while refactoring. A suite of unit tests acts as scaffolding around your code, allowing you to rebuild parts of it without fear of the edifice collapsing. If you’ve made a mistake in your refactoring then the unit tests should show exactly whereabouts the problem is.

Third, taking the previous two together, you can make vast, sweeping changes to your codebase in tiny, safe steps. Simply break down the process into lots of small refactorings, and test after each one. This is how the apparently-impossible task of changing your entire codebase reliably is possible. In reality you’ll probably never want to change your entire codebase but you will want to change large chunks of it, and this is how you’d do it.

Fourth, you often find yourself needing to add a feature which seems reasonable to a client, but difficult to someone who understands the structure of the application. An excellent way to add the feature is to utilise refactoring as follows: refactor the software so that the addition of the feature does become a natural extension, and then add it. Martin Fowler gives an excellent worked example of this in the first couple of chapters of his book, which is (almost) worth the price of admission alone.

Now you can browse the refactoring catalogue linked to above (and here again), but here are some of my favourite refactorings…

  1. Rename variable. This is very trivial, and very easy, but all the better for it. Forgotten what a variable does? Give it a better name. It’s so easy to do, but so useful.
  2. Extract method. Never mind extracting common code into functions, I often find myself extracting one-off code into a single function (or “method”, as they’re called in OO) with a long and useful name — there’s a rough sketch of this just after the list. This is to make the original method much shorter, so I can see it all on one screen, and also so that it reads better. I no longer need to look at ten lines and translate them in my head into “…and then sort all the even numbered vertices into alphabetical order”. Instead I have a single method called sortEvenNumberedVerticesAlphabetically(). You think that’s silly? Maybe Java or VB.NET is your first language. Personally, English is my first language.
  3. Introduce null object [1, 2]. I don’t think I’ve ever used this, but I like it a lot because it’s just very interesting. The problem is that you’ve got special logic whenever a key object is null. For example, you need to fall back to a basic billing plan if the customer object isn’t set up and so doesn’t have its own billing plan:

    if (customer == null)
        billing_plan = Plans.basic();
    else
        billing_plan = customer.getPlan();

    This is a small example, but it’s important to realise why this is undesirable: partly because each branch point is another potential source of error, and partly because your “null customer” logic isn’t just here, but is probably scattered all over your code. It would be better to have all the logic in one place, where it can be seen and managed and tested more easily. The solution is to introduce a null object — in this case a NullCustomer object such as this:

    public class NullCustomer
    {
        // ...probably lots missed out here.
        // Meanwhile the getPlan() method of NullCustomer
        // just returns the default basic plan...

        public Plan getPlan()
        {
            return Plans.basic();
        }
    }

    Every time we set up a customer we should now use this NullCustomer as a default rather than a plain null. And our original if/then/else code can be reduced to this:

    billing_plan = customer.getPlan();

    That’s simpler and clearer, it doesn’t have any branches, and the logic for a null customer lives in one place.

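As promised in the extract method item above, here’s a rough sketch of that refactoring in Java. The Graph class and its contents are invented purely for illustration:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Graph
{
    private final List<String> vertexLabels = new ArrayList<String>();

    public void redraw()
    {
        // Ten lines of sorting logic used to sit here; now the method name
        // says what they do, and redraw() fits on one screen.
        List<String> ordered = sortEvenNumberedVerticesAlphabetically();
        // ...carry on drawing using the ordered vertices...
    }

    private List<String> sortEvenNumberedVerticesAlphabetically()
    {
        // Take the vertices at even positions and sort them alphabetically.
        List<String> evens = new ArrayList<String>();
        for (int i = 0; i < vertexLabels.size(); i += 2)
        {
            evens.add(vertexLabels.get(i));
        }
        Collections.sort(evens);
        return evens;
    }
}
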
Refactoring is a wonderful thing. And it’s even more wonderful if your IDE helps you. For .NET languages you can get the Refactor! plugin, and MSDN has a video demo, too. All that code manipulation at the touch of a button!


The benefits of OO aren’t obvious: Part III of V, better testability

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding and separation of concerns. Now read on…

Better testability

Another great benefit of OO is better testability. The simplest kind of testing you can do is unit testing — taking a single class and testing its methods in isolation. By being able to instantiate individual objects and ensuring there are minimal dependencies it becomes fairly easy to test almost every branch of your code. Fairly easy, but not trivial — there’s a difference between writing merely working code and writing testable code. Writing testable code is a very good skill to acquire, and test-driven development is a further discipline that’s certainly improved my code (and my peace of mind) no end.

There are further aids to testing that OO gives you, most notably stubs and mocks. Stubs are dummy implementations used when a test needs a non-critical dependency satisfied (such as the Repository for an Article, when you’re not actually testing the Repository). Mocks are dummy implementations which simulate a given interaction and allow you to check that interaction afterwards. Both of these are easier with dependency injection.

Personally I think the greatest step forward in testing is unit testing, and automatic unit testing — have a look at NUnit for .NET languages and JUnit for Java. Even coding for fun, I’ve found a great sense of relief in not having to think about the correctness of my code, but leaving that to my unit testing framework. It means I can leave my code at any point and not wonder about whether or not it really works — because I know it does — and not have to hold anything in my head for when I return, because everything will be verified by the tests. Pedants will say there’s a lot more to testing than unit testing, and they’d be right, but to go from next-to-no testing, or infrequent testing, to automated unit tests is a huge step forward, and I cannot recommend it highly enough.
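
To make that a little more concrete, here’s a minimal sketch of a JUnit-style test using a hand-rolled stub. The Article and Repository (and their method names) are the invented examples from the separation of concerns instalment, not our real code:

import junit.framework.TestCase;

public class ArticleTest extends TestCase
{
    // A stub Repository: it does nothing except remember what was saved.
    private static class StubRepository implements Repository
    {
        Article lastSaved;

        public void store(Article article)
        {
            lastSaved = article;
        }
    }

    public void testSavePassesArticleToRepository()
    {
        // Inject the stub rather than a real database-backed Repository.
        Article article = new Article();
        StubRepository repository = new StubRepository();
        article.setRepository(repository);

        article.save();

        assertSame("save() should hand the article to its repository",
                article, repository.lastSaved);
    }
}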


The benefits of OO aren’t obvious: Part II of V, separation of concerns

Previously I wrote:

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. […] What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

I’ve already written about data hiding. Now read on…

Separation of concerns

This is one of those things that is a consequence of OO fundamentals, but by no means obvious. The root of so many problems in software is dependencies. A depends on B, B depends on C, and before you know it everything depends on everything else. It makes things very difficult to manage: changes become slower and slower, and riskier and riskier. Separation of concerns is about moving away from this: by forcing things into classes it encourages each class to do its own specific job. This breaks the dependencies and makes the management of your code much easier.

But it’s not that easy or obvious, and requires a fair bit of imagination to see it working really well. Let’s take a very simple example.

Suppose Guardian Unlimited has an Article class, and articles can be saved into a Repository (probably a database). Then we have two classes, each with a separate concern; the Article class has a save() method which uses the Repository to save to the backing store. But hold on there — we’ve still got the chance for it to go wrong.

An easy mistake would be to have the Article manage the setup of the Repository: when you create an Article it sets up the Repository all by itself. In some ways this is nice (it’s a form of encapsulation) because you can create and save an article without worrying about how the save is done, or even that there is a Repository at all. But there is a nasty dependency here: the Article depends on the Repository: not only are we unable to set up an Article without having a Repository, but it’s the Article which determines what the Repository is — if your application requires a working SQL Server database then it will always require a working SQL Server database, even if you just want to do a little test.

Enter dependency injection [1, 2, 3]. This is a technique that is by no means an obvious consequence of object orientation, but is facilitated by it.

In our example the Article depends on the Repository. It also specifies exactly what the Repository is, because it sets it up internally. Dependency injection says we shouldn’t allow the Article to set up the Repository; rather, we should set up the Repository externally and inject it into the Article (for example by calling a setRepository() method). The dependency isn’t broken, but the dangerous element is: the particular kind of Repository is no longer a concern of the Article.

What this means in practice is that you can feed any kind of Repository into the Article and it will still work. You can even inject a “do nothing” Repository just in case you want to do a quick test which doesn’t require saving it into the real database. Or you could inject a disposable in-memory Repository which only lasts for a limited time.

All of this is possible with another feature of OO: abstraction. If we ensure Repository is only an abstraction (rather than a specific concrete class) then we can have all kinds of Repository implementations: a SQL Server one, an in-memory one, one which always throws an error (so you can check how your application handles errors), and so on. In VB.NET you’d use an interface for this particular example.

Dependency injection forces concerns to stay separate and makes your code much more flexible and easy to manage. But that wouldn’t be obvious to you if you’re just starting out in OO development, and if you’re just looking at the concept of objects, inheritance, and so on.
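
To make the idea concrete, here’s a rough sketch — in Java rather than VB.NET, and with invented method names — so treat it as an illustration rather than our actual code:

// An abstraction: the Article only ever sees this interface.
// (Each class below would normally live in its own file.)
public interface Repository
{
    void store(Article article);
}

// One implementation talks to the real database...
public class SqlServerRepository implements Repository
{
    public void store(Article article)
    {
        // ...write the article to SQL Server...
    }
}

// ...and another does nothing at all, which is handy for a quick test.
public class DoNothingRepository implements Repository
{
    public void store(Article article)
    {
        // deliberately empty
    }
}

public class Article
{
    private Repository repository;

    // The Repository is injected from outside: the Article no longer
    // decides (or cares) which concrete implementation it is given.
    public void setRepository(Repository repository)
    {
        this.repository = repository;
    }

    public void save()
    {
        repository.store(this);
    }
}

A quick test can now call article.setRepository(new DoNothingRepository()) while production code injects the SQL Server version, and the Article itself never changes.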


The benefits of OO aren’t obvious: Part I of V, data hiding

Sometimes you work with something for so long that you forget that other people aren’t familiar with it. This happened recently talking to a friend of mine — “I’ve got no need for object orientation”, he said, and I was shocked. Yet I couldn’t easily explain why I thought OO was far superior to procedural languages. And thinking about it afterwards I realised that its advantages for me weren’t obvious — almost none of them are clear from OO’s fundamentals (inheritance, abstraction, etc), and almost all of the advantages are things that have been learnt by most of us recently, years after OO first appeared.

My friend spends most of his time writing embedded software, but occasionally is called upon to write a front-end interface. His front-ends are Windows-based and for his last GUI project he chose to try out VB.NET rather than his usual VB6. He was baffled by the verbosity and he felt it didn’t add anything. It all seemed rather unnecessary. And yet to me it’s long been the only way.

What, I wondered to myself, were the benefits of OO that he was missing? How could I persuade him that it does have a point? So here is my attempt to trawl my memory and see how I’ve changed in response to OO, and why I think it serves me (and my projects) better. In the end I’ve found that the benefits are real, but nothing I would have guessed at when I first started in this field.

Not the point of OO

DNJ Online has an article on migrating from VB6 to VB.NET, and it says that from this point of view the OO language is “unlike anything you’ve used before”. This is helpful in its honesty, but that particular article explains the low level differences rather than the high level aims. Try/catch blocks and namespaces are definite steps forward, but they hardly justify learning an entirely new paradigm, let alone phasing out a well-established one.

Similarly, a common justification of OO is that objects promote code re-use and allow better modelling of user requirements, but it’s hard to see how either is obvious. As regards the former, procedural languages allow code reuse via libraries. As regards the latter, it’s just as easy to see user requirements as procedural things as object-based things — easier, arguably.

For me, the advantages of OO are not obvious. But they are tangible once experienced, given the right environment. For me, those advantages are…

Data hiding

Even I’m horrified by how little fits into my tiny brain, and the more I have to keep in my head at any one time the more likely I am to make mistakes. In terms of development, the more attention I have to pay to the intricacies of my software the less attention I can pay to solving whatever the immediate problem happens to be.

Fortunately OO’s concept of data hiding (or encapsulation, if you like long words) is for people like me. By making explicit what I need to know (public methods/functions) and hiding what I don’t (private methods/functions and fields/variables) I am freed to focus on the task in hand. Private fields and methods allow my classes to manage themselves without having to burden the user with the details. You can look at the class (or its auto-generated documentation [1, 2, 3, 4]) and focus on what you need to know, without getting distracted by the details.

Anything which makes me think less is a huge benefit.
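
For example, here’s an invented class (sketched in Java rather than VB.NET): the public methods are all a caller needs to keep in their head, and the private parts remain the class’s own business.

public class Article
{
    // Private details: callers never see these, so they never have to think about them.
    private String headline;
    private boolean published;

    public Article(String headline)
    {
        this.headline = headline;
    }

    // The public surface is deliberately small: this is all you need to know.
    public String getHeadline()
    {
        return headline;
    }

    public void publish()
    {
        published = true;
    }

    public boolean isPublished()
    {
        return published;
    }
}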

Of course, data hiding by itself isn’t enough to switch programming paradigm, and if you’re writing in VB6 you’ll already have the “private” keyword. But this is only the start…

I’ll be posting the remaining installments over the next few days.

Conversation versus concentration

Compare and contrast two blog entries that popped up in my RSS reader on the same day last week. In the blue corner Joel O’Software, fighting for private offices. And in the red corner, Martin Fowler, battling it out for continuous collaboration between developers and their customers.

Naturally, they’re not really at opposite ends of a spectrum — in fact, their posts are about different things — and they’d find plenty to agree on with each other. But look at a selected part of each of their posts and see the contrast. Here’s Joel:

Not every programmer in the world wants to work in a private office. In fact quite a few would tell you unequivocally that they prefer the camaraderie and easy information sharing of an open space. Don’t fall for it. They also want M&Ms for breakfast and a pony.

And here’s Martin:

One of Kent’s suggested names for ‘Agile’ was conversational software development – the point being that it’s a two way communication. This isn’t something like a telecoms protocol that you can define, but the back and forth discussions about how software can enhance the business are where the real value lives. Much of this conversation is of half-baked ideas, some of which grow into valuable features – often ones that aren’t things that the customer originally thought of.

It’s notable how two people renowned for being leaders in software can diverge on what should be a fundamental issue: how should people interact? Joel is for concentration, Martin is for conversation.

It’s notable also that they do different things in the software world. Joel produces shrink-wrapped products on behalf of his own company. So does Microsoft, the company Joel used to work for and which he praises for “putting literally everyone in individual, private offices”. Martin is a gun for hire (via his employer), called in to consult on a variety of projects for different companies, no doubt 90% of the time producing in-house software for each client.

I can’t escape the feeling that their respective backgrounds inform their respective views, though I wouldn’t for a second suggest that one approach always suits one kind of output.

So, should one prefer concentration or conversation? Obviously[*] it depends on several factors, and here’s the way I see it…

First, it must come from whoever you start with. If Joel O’Software starts a one-man band and likes to work in silence, then takes on his first employee, he’s not going to want his hire to keep piping up with questions every two minutes. Similarly, if you’ve created your business by extolling the virtues of pair programming then you’ll be looking exclusively for developers who will continue that good work. If you start with one introvert or extrovert, you’ll grow from there.

Second, it depends on where you think your strengths are as an organisation. Joel talks a lot about productivity and algorithms, while Martin tends to talk of people and methodologies. Each also has an interest in the other’s topics, but their chosen hot subjects are where they see the biggest gaps, and where they think they can most make a difference.

Third, it’s about how you see your team. I suspect Joel has very low staff turnover, hires developers very infrequently, and there’s no doubt he puts a lot of effort into picking the cream of the crop: he’s in a buyer’s market, and his developers will all be very smart. Martin will inevitably work with a much broader range of companies. While they will of course have made a very smart decision to hire him and his colleagues (ahem), they will tend more towards the market norm, and will also tend to be fairly large development teams — even if individual project teams are smaller. Thus Martin is going to be much more concerned about sharing information between developers, evolving designs collaboratively, establishing standards and keeping those standards refreshed.

Finally, it’s about how you see the long term. Again, low staff turnover and a tight-knit team means Joel is less concerned about knowledge silos, but an average corporate team will have average turnover and will have its average share of crises. Knowledge sharing and reducing single points of failure is essential in these cases.

All of that is why I favour conversational development. Knowledge sharing and evolving ideas is key to me as a general principle, all the more so because Guardian Unlimited is such a diverse site that there’s just so much to know. That doesn’t mean it’s easy for everyone, but for me a typical team will be stronger if it keeps every last bit of information flowing round, ideas constantly exchanged and checked, and experience continuously refreshed and revised. Concentration is often needed, but too often the price paid is too high, and is only discovered when one person is seconded to another project, has left, or is on holiday.

[*] I think there’s a progression whenever you ask “Is A or B better?” Naive or inexperienced people will always pick one or the other. Eventually they’ll come to hedge their bets because they realise things are more subtle than they previously thought, or else it just makes them sound wiser. Finally they may reach a point when they are (or regard themselves as) leaders in their field and act as evangelists or iconoclasts pushing one or other opinion heavily. You’ll see a lot of hedging on this blog.