What management buy-in really means

We work using agile development processes, which obviously need the buy-in of internal users and project sponsors. But this jumped out at me when I read it. It’s from Carolyn McCall, Chief Executive of Guardian Media Group, which owns the company I work for. She was announcing a £15m investment in our digital business, and it was reported thus:

“What we’ve done so far is our own version of web 1.0, but we want to continue to web 2.0 and what comes after that,” Ms McCall added. “We need to be agile and ready to change.”

I suspect that phrase, “We need to be agile and ready to change”, is not a coincidence. You can trace a strong lineage between the agile development work going on in Guardian Unlimited and that little sentence in a speech to the OPA. Let me tell you how I think it got there.

A story of a sentence

A long time ago we began using agile processes in the GU tech team. It was a new way of working, but it seemed to address a lot of our problems — actually, the usual ones you get in most software organisations, such as knowledge silos, lack of flexibility, impossible deadlines, and so on. Changing what you know is always a tough choice, but the GU management team were very supportive of the move to agile.

In particular, our boss at GU at the time, Simon Waldman, was very interested in its implementation and evolution. (Simon’s since moved to GMG.) Over the months and years he enquired, queried, provoked, but was never less than encouraging and constructive. (At one memorable annual review to the GU staff Simon devoted an entire slide to explaining how we had stealthily removed a problematic database table. Our DBA was thrilled; I have no idea what the attendant sales people and journalists made of it.)

And agile clearly worked for GU. One notable change was that planning meetings took on much more of a business focus. Far less about technical dependencies, much more on what functionality we wanted to release when.

And then we began work on our new Travel site (released in November). It featured a new design and new commercial opportunities. More people around the company needed to scrutinise this as it started up, because it was a pretty big project and we had a much greater responsibility to be rigorous and transparent in our delivery. We saw agile methods as being critical to the work’s success: they would allow us to delay decisions to the latest possible moment and therefore produce something that was much more relevant — both editorially and commercially — to our audience and clients.

I therefore spent some time explaining agile to various company directors… but not as much as you might expect, because clearly the principles and practices had been discussed and understood well before. I ended up having the kind of conversations about agile development with non-technical senior managers that I wish I could have had with more technical job candidates. Word was getting around. The business people scrutinising the work on the Travel site understood the process’s business benefits, they understood how the low-level practices would deliver those benefits, and they knew those practices were working successfully within GU at the time. And indeed, the project was blessed by the company directors, including Simon and our MD at the time… Carolyn McCall.

You can see how that word “agile” has been thrown around a lot inside the company. It was — at least then — strongly associated with Guardian Unlimited and therefore with its innovation (which I’d like to think is almost synonymous with “daily work” — ahem), and it’s stuck.

Now I can’t claim that Carolyn knows what, say, test-driven development really is. I suspect she has higher level things to think about. But if it came up, I think she’d understand our practices such as refactoring (making lots of tiny internal restructurings to produce a smoother, more manageable system), continuous integration (ensuring our changes are constantly integrated into the day-to-day work, not siloed) and retrospectives (frequent reviews and actions to improve). And, come to think of it, TDD, which is really about supporting and enabling change.

It seems Carolyn understands “agile” in a way that I wish more technical people would. It’s about change, and about supporting change (“We need to be agile and ready to change”).

The hard work starts now

And what of the future? Well, agile development helps us deliver responsibly to the business — and if someone invests £15m in you, then you really need to deliver responsibly. Again, that kind of investment only happens if there’s confidence in your ability to deliver. Can we offer that confidence? Well, just this week I was in a meeting with Tom Turcan, who is our Head of Digital Media Development. The conversation went something like this. Note that development for this particular piece of work is already about 30% of the way through:

Expert user: “…So in summary we’ve decided that features A, B and C aren’t that important after all, and we’ve replaced them with features J and K.”

Tom: “It looks like A, B, C total 7 ideal days, and J and K total 8.5 ideal days. So that’s a net increase of 1.5 ideal days. Is that right?”

Expert user: “Er, yes.”

Tom: “So we can’t really allow that to happen. We can make changes to make the work less, or keep it at the same level, but we mustn’t start increasing it. Can you find 1.5 days’ worth of features to remove?”

Expert user: “Yes, I should think so.”

Tom: “Okay, we’ll make a note to see what you’ve decided next week.”

We couldn’t have had that conversation in a more waterfall environment. By the time we’d have reached this point the groundwork for A, B and C — the database tables, the DAO layer, etc — would have been done, and it would have been wasted effort. Instead our expert user is able to make decisions later, and Tom is able to keep the work under control, giving the rest of the business the confidence it needs. Our agile processes enable that to happen. And all of that is known by our developers and by our managers, and that same confidence is shared by our group Chief Executive.

Postscript

By the way, there are other things going on this week in and around Guardian Unlimited. You might want to take a look at Jeff Jarvis’s commentary on Alan Rusbridger’s commitment to digital and his interest in being flexible. Also, my esteemed colleague Neil McIntosh tells how GU saved a cat, and offers a word of advice to a researcher at News International.

As a friend said to me the other day, “I know when you’re busy at work — your blog goes quiet.” So back to work now.

The Times Online’s redesign, and a word about performance testing

Late last night (UK time) the Times Online launched their new design, and jolly nice it is, too. It’s clean and spacious, and there’s an interview with the designers who are very excited about the introduction of lime green into the logo. Personally, I like the columnists’ pictures beside their pull-quotes. That’s a nice touch. You can also read about it from the Editor in Chief, Anne Spackman.

However, not everyone at the website will have been downing champagne at the moment the new site went live, because in the first few hours it was clearly having some performance issues. We’ve all had those problems at some time in our careers (it’s what employers call “experience”), and it’s never pleasant. As I write it looks as though the Times Online might be getting back to normal, and no doubt by the time you read this the problems will all be ancient history. So while we give their techies some breathing space to get on with their jobs, here are three reasons why making performant software is not as easy as non-techies might think…

1. Software design

A lot of scalability solutions just have to be built in from the start. But you can’t do that unless you know what the bottlenecks are going to be. And you won’t know what the bottlenecks are going to be until you’ve built the software and tested it. So the best you can do from the start is make some good judgements and decide how you’d like to scale up the system if needed.

Broadly speaking you can talk of “horizontal scaling” and “vertical scaling”. Vertical scaling is when you keep the same boxes but beef them up — give them more memory, CPU and so on. Horizontal scaling is when you keep each box as it is, but add more boxes alongside it. Applications are usually designed for one or the other (deliberately or not) and it’s good to know which before you start.

Vertical scaling seems like the obvious solution generally, but once you’re at the limit of what your hardware will allow then you’re at the limit of your scalability. Meanwhile a lot has been made of Google’s MapReduce algorithm, which was explicitly designed to parallelise the work it was applied to — it allowed horizontal scaling, adding more machines. That’s very smart, but they’ll have needed to apply that up-front — retrofitting it would be very difficult.
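
To make the idea concrete, here’s a toy sketch (nothing to do with Google’s actual implementation) of the map/reduce shape: each chunk of input is counted independently, and the partial results are then merged. Because the map tasks share nothing, you can run them on more threads (or, with more plumbing, on more machines), which is exactly the kind of horizontal scaling you have to design in from the start.

import java.util.*;
import java.util.concurrent.*;

// A toy sketch of the map/reduce shape, not Google's implementation.
// Each chunk is counted independently ("map"), then the partial counts
// are merged ("reduce"). Nothing is shared between the map tasks, which
// is what makes it possible simply to add more workers.
public class WordCountSketch {

    static Map<String, Integer> countWords(List<String> lines) {
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String line : lines) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (word.length() == 0) continue;
                Integer n = counts.get(word);
                counts.put(word, n == null ? 1 : n + 1);
            }
        }
        return counts;
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> chunks = Arrays.asList(
            Arrays.asList("we need to be agile", "and ready to change"),
            Arrays.asList("agile and ready to change", "and change again"));

        // "Map": one independent counting task per chunk.
        ExecutorService pool = Executors.newFixedThreadPool(chunks.size());
        List<Future<Map<String, Integer>>> partials = new ArrayList<Future<Map<String, Integer>>>();
        for (final List<String> chunk : chunks) {
            partials.add(pool.submit(new Callable<Map<String, Integer>>() {
                public Map<String, Integer> call() { return countWords(chunk); }
            }));
        }

        // "Reduce": merge the partial counts into one total.
        Map<String, Integer> total = new HashMap<String, Integer>();
        for (Future<Map<String, Integer>> partial : partials) {
            for (Map.Entry<String, Integer> e : partial.get().entrySet()) {
                Integer n = total.get(e.getKey());
                total.put(e.getKey(), n == null ? e.getValue() : n + e.getValue());
            }
        }
        pool.shutdown();
        System.out.println(total);
    }
}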

You can also talk about scaling on a more logical level. For example, sometimes an application would do well to split into two distinct parts (keeping its data store separate from its logic, say), but if that split was never considered in the design then it will be too late once the thing has been built — there will be too many inter-dependencies to untangle.

That can even happen on a smaller scale. It’s a cliche that every programming problem can be solved with one more level of indirection, but you can’t build in arbitrary levels of indirection at every available point “just in case”. At Guardian Unlimited we make good use of the Spring framework and its system of Inversion of Control. It gives us more flexibility over our application layering, and one of our developers recently demonstrated to me a very elegant solution to one particular scaling problem using minimally-invasive code precisely because we’d adopted that IoC strategy — effectively another level of indirection. Unfortunately we can’t expect such opportunities every time.
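
I can’t reproduce that solution here, but here’s a minimal sketch of the kind of thing I mean (all the class names are invented for the example): because callers only ever see an interface, and the container decides which implementation they get, a caching layer can be slotted in front of the real implementation without touching any of those callers.

import java.util.HashMap;
import java.util.Map;

// A minimal sketch (not our actual code; all names invented) of the kind of
// indirection IoC gives you. Callers depend only on the ArticleSource
// interface, so a caching decorator can be slotted in front of the
// database-backed implementation without touching those callers.
interface ArticleSource {
    String fetchArticle(long id);
}

class DatabaseArticleSource implements ArticleSource {
    public String fetchArticle(long id) {
        // Imagine an expensive database call here.
        return "Article body for " + id;
    }
}

class CachingArticleSource implements ArticleSource {
    private final ArticleSource delegate;
    private final Map<Long, String> cache = new HashMap<Long, String>();

    public CachingArticleSource(ArticleSource delegate) {
        this.delegate = delegate;
    }

    public synchronized String fetchArticle(long id) {
        String body = cache.get(id);
        if (body == null) {
            body = delegate.fetchArticle(id);
            cache.put(id, body);
        }
        return body;
    }
}

public class ArticleSourceSketch {
    public static void main(String[] args) {
        // In practice this wiring would live in the Spring configuration rather
        // than in code, so swapping the caching layer in or out is just a
        // configuration change.
        ArticleSource source = new CachingArticleSource(new DatabaseArticleSource());
        System.out.println(source.fetchArticle(12345L));
        System.out.println(source.fetchArticle(12345L)); // served from the cache
    }
}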

2. Devising the tests

Before performance testing, you’ve got to know what you’re actually testing. Saying “the performance of the site” is too broad. There’s likely to be a world of difference between the following (one of which is sketched in code after the list):

  • Testing the code that pulls an article out of the database;
  • Testing the same code for 10,000 different articles in two minutes;
  • Testing 10,000 requests to the application server;
  • Testing 10,000 requests to the application server via the web server;
  • Testing the delivery of a page which includes many inter-linked components.
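
To give a flavour of just one of those, here’s a crude sketch of the third item. The URL, the request count and the thread pool size are all made up for illustration, and a real load-testing tool would measure response times far more carefully.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// A crude sketch of "10,000 requests to the application server".
// The URL and the numbers are invented for illustration.
public class CrudeLoadTest {

    public static void main(String[] args) throws Exception {
        final String url = "http://appserver.example.internal/article/12345";
        final int requests = 10000;
        final AtomicInteger failures = new AtomicInteger();

        ExecutorService pool = Executors.newFixedThreadPool(50);
        long start = System.currentTimeMillis();

        for (int i = 0; i < requests; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection conn =
                            (HttpURLConnection) new URL(url).openConnection();
                        if (conn.getResponseCode() != 200) {
                            failures.incrementAndGet();
                        }
                        conn.disconnect();
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                }
            });
        }

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long elapsed = System.currentTimeMillis() - start;
        System.out.println(requests + " requests in " + elapsed + "ms, "
            + failures.get() + " failures");
    }
}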

Even testing one thing is not enough. It’s no good testing the front page of the website and then testing an article page, because in reality requests come simultaneously to the front page and many article pages. It’s all very well testing whether they can work smoothly alone — it’s whether they work together in practice that counts. This is integration testing. And in practice many, many combinations of things happen together in an integrated system. You’ve got to make a call on what will give the best indicators in the time available.

Let me give two examples of integration testing from Guardian Unlimited. Kevin Gould’s article on eating in Barcelona is very easy to extract from the database — ask for the ID and get the content. But have a look down the side of the page and you’ll see a little slot that shows the tags associated with the article. In this case it’s about budget travel, Barcelona, and so on. That information is relatively expensive to generate. It involves cross referencing data about the article with data about our tags. So testing the article is fine, but only if we test it with the tags (and all the other things on the page) will we get half an idea about the performance in practice.

A second example takes us further in this direction. Sometimes you’ve got to test different operations together. When we were testing one version of the page generation sub-system internally we discovered that it slowed down considerably when journalists tried to launch their content. There was an interaction between reading from the database, updating the cache, and changing the content within the database. This problem was caught and resolved before anything went live, but we wouldn’t have found that if we hadn’t spent time dry-running the system with real people doing real work, and allowing time for corrections.
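
Our real test harness looks nothing like this, but the shape of that kind of test is roughly as follows: several reader threads hammer the read path while a “journalist” thread keeps publishing, and any suspiciously slow reads get flagged. Here the page system itself is a trivial in-memory stand-in; the interesting part is running the two operations together.

import java.util.Map;
import java.util.concurrent.*;

// A sketch of testing two operations together: readers fetch a page
// while a "journalist" keeps publishing changes to it. The PageSystem
// here is a toy stand-in for the real page-generation sub-system.
public class MixedOperationsTest {

    static class PageSystem {
        private final Map<Long, String> store = new ConcurrentHashMap<Long, String>();

        String renderPage(long id) {
            String body = store.get(id);
            return body == null ? "" : "<html>" + body + "</html>";
        }

        void publish(long id, String body) {
            store.put(id, body);
        }
    }

    public static void main(String[] args) throws Exception {
        final PageSystem system = new PageSystem();
        system.publish(42L, "first version");

        final CountDownLatch stop = new CountDownLatch(1);
        ExecutorService readers = Executors.newFixedThreadPool(10);

        // Readers hammer the read path until told to stop, reporting
        // any render that takes longer than 100ms.
        for (int i = 0; i < 10; i++) {
            readers.submit(new Runnable() {
                public void run() {
                    while (stop.getCount() > 0) {
                        long start = System.nanoTime();
                        system.renderPage(42L);
                        long millis = (System.nanoTime() - start) / 1000000;
                        if (millis > 100) {
                            System.out.println("Slow render: " + millis + "ms");
                        }
                    }
                }
            });
        }

        // Meanwhile the "journalist" publishes repeatedly.
        for (int i = 0; i < 1000; i++) {
            system.publish(42L, "revision " + i);
        }

        stop.countDown();
        readers.shutdown();
        readers.awaitTermination(1, TimeUnit.MINUTES);
    }
}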

3. Scaling down the production environment

Once you’ve devised the tests, you’ll want to run them. Since performance testing is all about having adequate resources (CPU, memory, bandwidth, etc), you really should run your tests in an exact replica of the production environment, because that’s the only environment which can show you how those resources work together. However, this is obviously going to be very expensive, and for all but the most cash-rich of organisations prohibitively so.

So you’ll want to make a scaled-down version of the production environment. But that has its problems. Suppose your production environment has four web servers with two CPUs each, and six application servers with 2GB of RAM each. What’s the best way to cut that down? Cutting it in half might be okay, but if that’s still too costly then cutting it further is tricky. One and a half application servers? Two application servers with different amounts of RAM or CPU?

None of these options will be a real “system in miniature”, so you’re going to have to compromise somewhere. It’s a very difficult game to play, and a lot of the time you’ve got to play to probabilities and judgement calls.

And that’s only part of it

So next time you fume at one of your favourite websites going slow, by all means delve deep into your dictionary of expletives, but do also bear in mind that producing a performant system is not as easy as you might think. Not only does it encompass all the judgement calls and hard thinking above (and usually judgement calls under pressure), but it also includes a whole lot of really low-level knowledge both from software developers and systems administrators. And then, finally, be thankful you’re not the one who has to fix it. To those people we are truly grateful.

Interview tips for that tricky developer-pharmacist role

We’ve launched a big campaign today to expand the Guardian Unlimited technical team even further — software, systems, project management, and more. You can see the online ad at http://www.guardian.co.uk/joinus. I won’t tell you that the reason you should seriously think about joining us is that GU is packed full of friendly, intelligent and dedicated people. But I will tell you one small thing about how I’ll be conducting the interviews I’m involved in.

The GU software team is a little different to many others. Like most teams our developers spend a lot of time understanding requirements, sketching out solutions, implementing, bug fixing, and so on. But unlike many teams our developers don’t spend any time at all surveying petrol stations, weighing pills, or being held in weird prisons run by pathological prison governors.

I’m sure that news will surprise a lot of people, because I know a vast number of developers are very focused daily on the old petrol station/pharmacy/solitary confinement side of software. And I know this because these are the questions they get asked in interviews.

I once attended an interview for a role where I was asked the pill-weighing question, as set out on techinterview.org:

[…] the queen gives the assassin 12 pills which are all completely identical in shape, smell, texture, size, except 1 pill has a different weight. the queen gives the man a balance and tells him that all the pills are deadly poison except for the pill of a different weight. the assassin can make three weighings and then must swallow the pill of his choice. if he lives, he will be sent back to the bad king’s kingdom. if he dies, well, thats what you get for being an assassin.

only one pill is not poison and it is the pill which has a different weight. the assassin does not know if it weighs more or less than the other pills. how can he save his skin?

I was able to answer the question only because I’d read it the previous month. Fortunately they decided to change the job spec and withdrew the role, and I was spared a working life of having to deal with royal assassins. Or brainteasers. I hope no-one’s going to tell me this wasn’t what the job was about, though. After all, what else could the interviewer have been trying to learn about me?

I’ve also met someone who likes to ask potential candidates “how many petrol stations are there in London?” I was keen to know why he asked this, since I was pretty sure he didn’t work in a company that audited petrol stations. “I want to know if they’re a good problem-solver,” he replied.

Here’s my tip for testing problem-solving skills: make the problem as realistic as you can.

Interviewing is a difficult and artificial thing. I’m sure I’m not the world’s best interviewer, but I do know that honesty and transparency are important, because without that you and the candidate can’t make the right assessment of each other. So when it comes to creating scenarios you’ll find out an awful lot more if you make the scenario as close as possible to what really happens in the job.

If you think your team members spend important time solving problems it’s a good idea to take a moment to consider some situations where that skill was important. And if you can’t think of any, then maybe problem-solving isn’t such an important skill after all.

Here are some problems I can think of right now that we’ve faced recently:

  • A really important person in the business wants something done really quickly, but they’re so important they don’t have time to explain it properly to you.
  • We need to get a particular data feed into a particular system. What are our options? And which should we choose?
  • A particular piece of code keeps throwing up far more bugs than is reasonable. Why might that be? And what should we do about it?
  • We need to change a bit of code, but it was built 5 years ago by someone who’s no longer here. Where do you start?
  • We want to add a new feature somewhere, and although it seems like a naturally intuitive extension it’s actually disproportionately difficult to do — what’s gone wrong there?

No pills, no prisoners, no petrol stations, no foxes crossing rivers with geese. There are issues with using these situations, of course. We’d need to make a bit of an effort to work them into something that’s solvable in the confines of an interview situation. And I don’t get to show off some kind of pseudo-cleverness in front of the candidate. But I’d say both the interviewer and the candidate get much more out of the ensuing discussion. Because it does involve a discussion, not a simple one-way answer; we don’t force people to solve problems in isolation.

And one more thing about life here at GU. We do an awful lot of Java programming, so it’s not usually terribly useful for us to test someone’s use of pointers, either.

But if you do want to use pointers — or weigh pills — then it seems there are plenty of other opportunities for you.

Guardian Unlimited is hiring a Development Manager

If you’re interested in the kind of work we’re doing here at Guardian Unlimited, then you might be interested in joining us. We’re looking for a Development Manager. As an experiment I put an ad on Google Base (now expired). Here’s the text in full…

Guardian Unlimited is embarking on a concentrated effort to expand its Development Team over the next 6-12 months, and is seeking an experienced Development Manager to drive that process.

The ideal candidate will have excellent knowledge and experience of website operations, of managing a team of developers, and of hiring excellent new developers. Additionally, you will have experience of working in a fast-changing and pressured environment, with demonstrable expertise in managing change in development teams. Excellent planning and organising skills are also essential.

Technically, the team practises agile development with Java, so you will need to demonstrate a deep understanding and affinity for both these areas. The underlying operating systems are Linux and Solaris, so expertise there would be an advantage.

The Guardian Unlimited workplace is very diverse with many different stakeholders, so it’s essential to be able to work with many different kinds of people and have excellent interpersonal skills.

This position will remain open until a suitable candidate is found, and ideally the successful candidate will start as soon as possible. This is an excellent opportunity to be part of an award-winning team on a product recognised for its innovation and quality.

To apply for this role you must be available to attend an interview in London, and to work permanently in the UK. If you are, and if you are interested in this role, please e-mail gujobs@guardianunlimited.co.uk and include (a) your current and desired salary, and (b) a copy of your CV.

We welcome applications from any individual regardless of ethnic origin, gender, disability, religious belief, sexual orientation or age. All applications will be considered on merit.

This role has come up because my job has expanded to encompass the whole software side, including project management and QA, so we’ve got a gap to fill.

A new Travel site, and four uses for tags

Here at Guardian Unlimited we’ve just launched our new Travel site. You’ll be able to read a lot about it in the industry press, but I just wanted to write a little about what we’ve done from a technical-managerial point of view.

The project motivation is ideal

The main strands of our business are editorial, commercial and technical. But this is not an editorial project, it’s not a commercial project, and it’s not a technical project. This is something that’s been driven by all of us. Editorially we wanted to improve navigation and bring in more user-generated content. Commercially we wanted to offer more flexible advertising opportunities. And technically we wanted to exploit our content much more.

The new Travel site brings all of us together. One technical driver was to exploit keywords, or tags, a bit like those on Del.icio.us and Flickr. This is a very flexible way of categorising content and hence allowing better use of it. Tags, in turn, allow us to generate much more relevant navigation and bring in related readers’ tips from our travel tips site, Been There. For example, if you’re reading about Croatia then you can also see a column headed “Readers’ top tips” which shows readers’ tips specifically about Croatia.

This is also the perfect opportunity to redesign, to better present that information. And just as we can present more relevant tips, so we can present more relevant advertising to our readers. And the redesign also allows us to shape the pages to better fit the ads — the page can close up around a small ad, and expand to comfortably fit a larger one.

Of course, there’s a lot of technical stuff involved, but a technical project is no good for the sake of it. Here, the three strands of our business were driving each other. An ideal scenario.

Aside from much better navigation and contextual advertising, and since this is a technical blog, here are four of the other (smaller) things we’ve used tags for…

There’s a page for every tag

Each tag has its own page! Here’s the page for San Francisco. And here’s the page for skiing. And here’s one for food and drink trips.

In fact, we’ve got hundreds of tags in the system already — you can see our list of places and activities, for example. And now we’ve automatically got a page for each of them.

There’s an RSS feed for every tag

Okay, not a big leap technically, but terrific for our readers. Top right on each subject page is the RSS link for that subject. If you’re interested in an area of Travel then it’s odds on we write about it. And if we write about it then we’ve got an RSS feed for it. You can get your own personal skiing feed from Guardian Unlimited.

We can combine tags

Tags also allow us to pull together even more really specific and relevant content. If you’re on the San Francisco page you can find out about food and drink in San Francisco. Or you can investigate short breaks in Portugal. Or water sports in the UK. Well, you get the idea.
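
Behind the scenes the real query is more involved than this, but what “combining tags” amounts to is a set operation: find the articles whose tag sets contain every tag you asked for. Here’s a minimal sketch, with invented article data:

import java.util.*;

// A sketch (not the real Guardian Unlimited query) of combining tags:
// find the articles whose tag sets contain every requested tag.
public class TagCombination {

    static class Article {
        final String title;
        final Set<String> tags;
        Article(String title, String... tags) {
            this.title = title;
            this.tags = new HashSet<String>(Arrays.asList(tags));
        }
    }

    static List<Article> withAllTags(List<Article> articles, String... wanted) {
        List<Article> result = new ArrayList<Article>();
        for (Article a : articles) {
            if (a.tags.containsAll(Arrays.asList(wanted))) {
                result.add(a);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Article> articles = Arrays.asList(
            new Article("Tapas crawl", "sanfrancisco", "foodanddrink"),
            new Article("Bay Area on a budget", "sanfrancisco", "budgettravel"),
            new Article("Surfing in Cornwall", "uk", "watersports"));

        for (Article a : withAllTags(articles, "sanfrancisco", "foodanddrink")) {
            System.out.println(a.title); // prints "Tapas crawl"
        }
    }
}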

Friendly URLs

Our URLs are usually of the form

http://travel.guardian.co.uk/budget/story/0,,991699,00.html

which is short but a bit obscure. So we’ve taken the opportunity to improve that for the new Travel site. The same article above now has the URL

http://travel.guardian.co.uk/article/2003/jul/05/watersportsholidays.budgettravel.boatingholidays

It’s longer, but we hope it’s also more intuitive. You can clearly see what it is (an article), the date (5 July 2003), and what it’s about (water sports, budget travel, and boating holidays).

And it’s not just articles which have friendlier URLs. Here’s the URL for the Hamburg tag page:

http://travel.guardian.co.uk/tag/hamburg

and here’s the RSS feed for our Hamburg content:

http://travel.guardian.co.uk/tag/hamburg/rss

All much easier to deal with.
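
For the curious, here’s a rough illustration (not our actual routing code) of how much meaning the new-style article path carries: it splits cleanly into a type, a date and the tags.

// A rough illustration, not our routing code: the new-style article path
// breaks into a type, a date, and the dot-separated tags it's filed under.
public class FriendlyUrlSketch {

    public static void main(String[] args) {
        String path = "/article/2003/jul/05/watersportsholidays.budgettravel.boatingholidays";
        String[] parts = path.substring(1).split("/");

        String type = parts[0];                                   // "article"
        String date = parts[1] + " " + parts[2] + " " + parts[3]; // "2003 jul 05"
        String[] tags = parts[4].split("\\.");                    // the subjects

        System.out.println("Type: " + type);
        System.out.println("Date: " + date);
        System.out.println("Tags: " + java.util.Arrays.toString(tags));
    }
}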

And one other thing…

Lots of this is automatic. But there’s one important part of tagging that’s not: the actual act of adding the tags. In theory we could use software to look at each of our articles, determine what it’s about, and so apply the tags. But that’s not going to give the best results for our readers. So we’ve made sure that all our articles have had their tags considered and chosen individually by someone who has actually read the article. That means the navigation should be spot on, and the advertising links are ideally targeted. There’s an awful lot that machines can do, but sometimes it needs a human being to add the magic.

Anyway, hope you like the site.

Javascript puzzler: throw null

I know this isn’t really the place to post code, and I’m sorry. But this is why I don’t like loosely typed languages. I was caught by a Javascript nasty earlier this week (no, I don’t code professionally; this was for fun) which boils down to the code below. What do you think could happen here…?

var problem_found = null;
var listener = function(event){ /* Do something with event */ };

try
{
    document.firstChild.addEventListener("click", listener, false);
}
catch (ex)
{
    alert("Problem: "+ex);
    problem_found = ex;
}

if (problem_found)
{
    alert("There was a problem: " + problem_found);
}

You might have thought that either everything passes within the try block smoothly, in which case no alerts appear, or that there’s an exception in the try block and you get two alerts — one from the catch block and one from the if block.

And you’d be wrong.

Javascript actually allows you to throw anything. You should really throw an Error. But you could throw a String, or even a null if you liked. And if your code could throw a null then testing for null does not tell you if you had a problem. In the above example you’d get only one alert, from the catch block. If your problem-resolution code is in the if block then you’ll miss it.

Advocates of loosely typed languages will say that type problems can be avoided if you write good unit tests. But the problem occurred for me because this kind of mistake was in the JsUnit test framework, and it incorrectly showed a test passing. It took me two days to find the source of the problem. It turns out that the addEventListener() method above in Firefox 1.5 does indeed manage to throw a null if the listener parameter is not a function. Try changing it to a string and you’ll see this. My mistake was to pass in a non-function. Firefox’s mistake was to throw a null. JsUnit’s mistake was to assume a null exception means no exception.

Strict typing helps avoid problems that come about because we’re human and make mistakes. A wise man in ancient China once said “the compiler is your friend”. Javascript doesn’t have a compiler, and I can do with all the friends I can get.

By the way, you may be interested to know what the statically-typed Java language does. It doesn’t allow you to throw any object, but it will allow you to throw null. And if you do that it automatically converts it to NullPointerException. An actual, proper, concrete object. I know who my friends are.
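
If you want to see that for yourself, this little class compiles and prints the caught NullPointerException:

public class ThrowNullDemo {
    public static void main(String[] args) {
        try {
            throw null; // legal Java: null is assignable to Throwable
        } catch (NullPointerException e) {
            // What arrives here is a real, concrete NullPointerException,
            // not a null reference pretending nothing went wrong.
            System.out.println("Caught: " + e);
        }
    }
}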

Measuring development is useful, up to a point

There’s a post from Joel O’Software regarding measuring performance and productivity. He’s saying some good stuff about how these metrics don’t work, but I’d like to balance it with a few further words in favour of metrics generally. Individual productivity metrics don’t work, but some metrics are still useful, including team metrics which you might class as productivity-related.

  • Individual productivity metrics don’t work.
  • Productivity-like metrics are still useful…
  • …but they don’t tell the whole story

Individual productivity metrics don’t work

Joel O’S states that if you incentivise people by any system, there are always opportunities to game that system. My own experience here is in a previous company where we incentivised developers by how much client-billable time they clocked up. Unfortunately it meant that the developers flatly refused to do any work on our internal systems. We developed internal account codes to deal with that, but it just meant that our incentivisation scheme was broken as a result. Joel has other examples, and Martin Fowler discusses the topic similarly.

Productivity-like metrics are still useful…

Agile development people measure something called “velocity”. It measures the amount of work delivered in an iteration, and as such might be called a measurement of productivity. But there are a couple of crucial differences to measuring things such as lines of code, or function points:

  • Velocity is a measurement of the team, not an individual.
  • It’s used for future capacity planning, not rewarding past progress.

Velocity can also be combined with a look at future deadlines to produce burndown charts, and so allow you to make tactical adjustments accordingly. Furthermore, a dip in any of these numbers can highlight that something’s going a bit wrong and some action needs to be taken. But that tells you something about the process, not the individuals.
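
As a back-of-an-envelope illustration (the figures here are invented), the arithmetic behind that capacity planning is nothing more than this:

// A small sketch of how velocity feeds capacity planning. The figures are
// made up; the point is that the numbers describe the team and the process,
// not any individual.
public class VelocitySketch {

    public static void main(String[] args) {
        // Ideal days delivered in each of the last few iterations.
        double[] recentVelocities = { 11, 13, 9, 12 };

        double total = 0;
        for (double v : recentVelocities) {
            total += v;
        }
        double averageVelocity = total / recentVelocities.length; // 11.25

        double remainingWork = 85; // ideal days of work still on the backlog

        // How many iterations the remaining work will take at the current rate,
        // i.e. the projection a burndown chart would show.
        double iterationsLeft = Math.ceil(remainingWork / averageVelocity);

        System.out.println("Average velocity: " + averageVelocity + " ideal days");
        System.out.println("Iterations to burn down the backlog: " + (int) iterationsLeft);
    }
}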

The kick-off point for Joel’s most recent essay on the subject is a buzzword-ridden (and just clunkily-worded) cold-call e-mail from a consultant:

Our team is conducting a benchmarking effort to gather an outside-in view on development performance metrics and best practice approaches to issues of process and organization from companies involved in a variety of software development (and systems integration).

It’s a trifle unfair to criticise the consultant for looking at performance metrics, but one has to be careful about what they’re going to be used for.

…but they don’t tell the whole story

A confession. We track one particular metric here in the development team at Guardian Unlimited. And a few days ago we recorded our worst ever figure for this metric since we started measuring it. You could say we had our least productive month ever. You could. But were my management peers in GU unhappy? Was there retribution? No, there was not. In fact there was much popping of champagne corks here, because we all understand that progress isn’t measured by single numbers alone. The celebrations were due to the fact that, with great effort from writers, subs, commercial staff, sponsors, strategists and other technologists, we had just launched

A bad month then? Not by a long shot. The numbers do tell us something. They tell me there was a lot of unexpected last-minute running around, and I’ve no doubt we can do things better the next time. It’s something I’ve got to address. But let’s not flog ourselves too much over it — success is about more than just numbers.