Category: Project management

How innovation depends on trust

Innovation requires trust and freedom. But freedom only comes from trust, so the primary requirement for innovation is trust. And broadly speaking, the more trust you extend to a development team the more innovative they’re going to be. At the very least, innovation will not extend beyond the trust they’re given. Here are four levels of trust, and the innovation each can lead to.

Trust = 0

Let’s start at the very beginning, where trust = 0 and hence freedom = 0. “I don’t trust you to work by yourself; everything you do must be done in conjunction with me.” This is micro-management and generates nothing unexpected whatsoever. The only opportunity any team member has for creativity is during their bathroom break. We can do better.

Trust with tasks

Let’s extend trust a little bit: we allow our team members to complete individual tasks or deliverables that we specify — but let them determine how each task should be completed. This is work to a tightly-defined plan, and so any innovation will be under the hood — the internal architecture, the software design, and so on. But what you’ll see is what you expect. The tangible product, and every visible aspect of the product, is exactly what you imagined.

Trust with milestones

We can extend trust further still: we allow our team to deliver to clear product goals and trust them to work out the details of those product goals. This is where we start seeing innovation — as opposed to having innovation under the hood. Innovation of this kind is driven by project milestones or product principles, like “We want some personalisation features” or “The user must be able to share their work”. We want the development team — the programmers, user experience people, designers and so on — to be free to determine how to achieve this because it’s they who have the most intimate understanding of the product, how it fits together, what it’s capable of, and what it might be capable of.

In this scenario those project milestones or product principles come from outside the development team. It’s what they rely on product managers or other senior stakeholders for. Amazon does this with a press release: a description of the product in terms of end-user benefits, written to inspire and guide the development team.

Interlude: Where we don’t want to innovate

If you’re thinking that this amount of freedom, from this amount of trust, is just hippy nonsense that leads to escalating budgets and timescales, then let me bring us down to earth.

One area where we almost certainly don’t want to be innovating is project management. There are plenty of good project management methodologies out there, one or two of which will be good enough for us, and we don’t need to be inventing any more. So we may not go in with a detailed task list and instead trust the team to define the tasks themselves; and we may not go in with any predefined product features and instead trust the team to work out for themselves what features will achieve the stated aims of the product; but to not go in with any idea of how we might responsibly spend our budget, manage risks, showcase progress, communicate with stakeholders, meet timescales, and adapt to feedback is downright irresponsible. And to generally deprive the team of project management expertise is foolish. Unless we explicitly want to invent a new project management methodology we really should ensure the team picks one of these tools and has the wherewithal to use it well.

Trust leads to innovation, but let’s be clear about where we want to innovate.

Now back to our regular programming.

Trust to define projects

We can extend our trust even further if we allow our team to define not just product principles or milestones, but entire products or projects. We can set an organisation-level goal such as “we want to double our revenues” or “we need to make our service a social experience” and then see how our team can achieve it.

A goal like this is important for two reasons.

First, it eliminates irrelevant ideas. Anyone in an organisation can have ideas, but we really want ideas that push our business forward. If my media company specifies that our goal is to make what we do more social then an interesting idea like making an iPhone game will be considered irrelevant because it doesn’t meet the goal of making our existing work more social. In other circumstances it might be promising, but our current circumstances are about being more social and that’s what we need to focus on.

Second, it allows us to compare alternative ideas. I might propose that people could be given virtual biscuits every time they share an article. Already this is more relevant than the iPhone game idea, thanks to being focused on our organisation-level goal. But if someone else has a proposal to create common interest groups around our content then I’d say my virtual biscuits are looking pretty poor, relatively speaking. Our organisation-level goal has provided a scale of comparison.

And round again…

We’ve just extended trust to enable people to envisage an entirely new project. This is the very first stage of development — the concept. We also have to implement it. And this takes us right back to the beginning: how much trust do we want to give to the team? None? Just enough to complete a series of predefined tasks? Or do we trust them to devise the most appropriate means to meet the principles of the product? This is our chance to inject innovation again.

Should we track effort or duration?

I’ve been thinking recently about the different approaches to measuring in development projects, and in particular whether to measure effort or duration.

The usual approach is to track time spent on tasks or on a project. This is equivalent to filling in a timesheet. Let’s say that on Monday, Tuesday and Wednesday we work on Task A for 2 hours and Task B for 3 hours. That completes Task A, and 3 more hours on Task B on Thursday finishes that. We record

Task A: 6 hours

Task B: 12 hours

Or we might be recording effort at the project level (Project X: 6+12 = 18 hours’ effort).

An increasingly common alternative is to measure duration, also known as cycle time. In the example above, and assuming we are only working at the granularity of days, we would record

Task A: 3 days

Task B: 4 days

Cycle time generally isn’t captured at the project level, but at the task or feature level.
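The two measures can be derived from the same work log. Here’s a minimal sketch, using the hypothetical log from the example above (the dates, task names and structure are my own illustration, not a real tracking tool):

```python
from datetime import date

# Hypothetical work log: (day worked, task, hours of effort that day)
log = [
    (date(2024, 1, 1), "A", 2), (date(2024, 1, 1), "B", 3),
    (date(2024, 1, 2), "A", 2), (date(2024, 1, 2), "B", 3),
    (date(2024, 1, 3), "A", 2), (date(2024, 1, 3), "B", 3),
    (date(2024, 1, 4), "B", 3),
]

def effort(log, task):
    """Total hours booked against a task -- the timesheet view."""
    return sum(hours for _, t, hours in log if t == task)

def duration_days(log, task):
    """Cycle time at day granularity: first day touched to last day touched."""
    days = [d for d, t, _ in log if t == task]
    return (max(days) - min(days)).days + 1

print(effort(log, "A"), effort(log, "B"))                # 6 12
print(duration_days(log, "A"), duration_days(log, "B"))  # 3 4
```

Same raw data, two quite different records — which is exactly why the choice of measure matters.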

If you come from the effort-measuring school then measuring duration looks peculiar. By contrast, if you’re measuring duration because you’re using lean processes then the effort-measuring school looks positively archaic. Either way, the difference between the two records above is striking.

In the end, neither one is right or wrong per se. What’s important is: What do we want to achieve?

Capturing measures or metrics is often for a number of purposes: financial matters, tracking progress, and improving efficiency. Let’s look at each one in turn…

Financial

There are a couple of things here: capitalisation and billing.

Capitalisation is the process of representing newly-written software as increasing the assets (capital value) of our company: by having this software we are now worth more. The accepted method is to take the asset value to be the people-cost of building the software, i.e. the salaries or fees of those involved divided across the various projects. There are clearly shortcomings with this approach, but that’s a discussion for another day.

By measuring effort we discover how we divide our time across our various projects and therefore how to split our people-costs proportionally, which becomes our capital value.
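The proportional split is simple arithmetic. A sketch with made-up figures (the project names, hours and cost are all hypothetical):

```python
def capitalise_by_effort(effort_by_project, total_people_cost):
    """Split total people-cost across projects in proportion to recorded effort.

    effort_by_project maps project name -> hours; the result is each
    project's share of the people-cost, which becomes its capital value.
    """
    total_effort = sum(effort_by_project.values())
    return {
        project: total_people_cost * hours / total_effort
        for project, hours in effort_by_project.items()
    }

# E.g. 18 hours split 6/12 across projects X and Y, with £900 of salary cost:
print(capitalise_by_effort({"X": 6, "Y": 12}, 900))  # {'X': 300.0, 'Y': 600.0}
```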

Measuring duration is not so obviously appropriate, but it can work. To see it working, it helps first to see where it doesn’t work, in a deliberately unrealistic example. Suppose over the course of a year we run 250 tasks of five days’ duration each, plus one task on which we spend a bit of time on the first day of the year, but which then gets kicked into the long grass until the last day of the year when it’s finished off. If we capitalise by duration that single long-running task would bias the capital costs inappropriately for whichever project it was part of.

So that looks unhelpful. However it doesn’t have to work like that. In particular, if we forcibly limit the number of tasks in play (aka work in progress, aka WIP) at any time then long-running tasks will be exposed, and people’s natural desire to get things done should ensure they are dealt with and not forgotten. Also, a long-running task can simply be taken out of the system, regarded as waste, and started afresh at another time if necessary. More generally, tasks won’t drag on as long as they might otherwise, and over the course of several weeks the measurement of duration will be just as good (and just as imperfect) as measurement of effort.

So for capitalisation both effort and duration are plausible measures.

For the purposes of billing clients, measurement of effort is ideal when charging time and materials. Lawyers, consultants and contractors all do this. Charging clients by task duration isn’t likely to go down well, and I can’t imagine how to make an itemised breakdown look palatable on the invoice.

If, however, we’re the ones being billed then we need to track effort (=spend) and match that against progress.

Tracking progress

Tracking progress (and therefore assessing completion date) is not about tracking effort, it’s about tracking when features are completed. If we’re 25% of the way through the project we want to know that we’ve completed 25% of the features; how much effort we’ve expended doesn’t tell us that. Tracking duration also doesn’t help us, but it is usually a by-product of tracking the start date and end date of each feature, and it’s the end date we’re interested in here. If we’re tracking effort then the end date is just one more data point we need to collect for each item in play.
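The feature-based view of progress is trivial to compute, which is part of its appeal. A sketch with hypothetical feature names:

```python
def percent_complete(features):
    """Progress as the share of features completed, not effort expended.

    features maps feature name -> True if done (hypothetical structure).
    """
    done = sum(1 for complete in features.values() if complete)
    return 100 * done / len(features)

# One of four features finished: we are 25% through, regardless of
# how many hours have been burned getting there.
features = {"search": True, "galleries": False, "polls": False, "podcasts": False}
print(percent_complete(features))  # 25.0
```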


Improving efficiency

If we want to improve efficiency then we have to track metrics (so that we can tell when efficiency has improved). But again, what we want to make efficient will determine whether we track effort or duration.

If we want to improve the single activity of writing software then tracking effort is the thing to do. However, that’s not usually that useful: single activities can almost always be made more efficient, but the most significant gains tend to be found elsewhere. Improvements in writing software will produce very small gains compared to everything else that makes up the process of delivering software: analysis, design, development, third party integration, functional testing, user testing, deployment.

If we want to improve software delivery then tracking duration is what we want. Duration of that sequence includes, and therefore exposes, all the delays. If we aim to improve our tracked metric then one major gain is to eliminate those delays; eliminating the delays then exposes the inefficiencies in hand-offs between teams, which we can then work on. At the end we have improved our overall process, not just a single activity.
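One common way to quantify how much of a duration is delay is flow efficiency: the share of the cycle time actually spent working on the item. This is a standard lean measure rather than something from the text above, and the figures below are invented for illustration:

```python
def flow_efficiency(touch_days, cycle_days):
    """Percentage of elapsed cycle time spent actually working on an item.

    A low figure means most of the duration is delay: queues, hand-offs,
    waiting for other teams. Tracking effort alone would never reveal this.
    """
    return 100 * touch_days / cycle_days

# Hypothetical feature: 4 days of actual work spread over a 20-day cycle.
print(flow_efficiency(4, 20))  # 20.0 -- 80% of the elapsed time was waiting
```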

There are more…

There will be more reasons to track data, but I hope it’s clear that what we should track depends on what we want to achieve. Tracking cycle time is increasingly sexy (it’s lean, it’s kanban, it’s systems thinking… we’ll forget that it’s Six Sigma, too), and there are real advantages there, particularly when it comes to improving efficiency. But plain old tracking of effort isn’t entirely redundant yet.

How to fight the laws of (not just CMS) projects

Freelance Unbound has some great laws, clearly learned the hard way, about… well, supposedly about website launches with new content management systems. But actually they can be generalised to be laws of almost any kind of big project with a strong technology element. Laws of physics? Laws of the land? Either way, you don’t need to take them lying down. Here are some suggestions about how to fight back…

1. It only exists when it’s live

The point here is that you can’t get people’s full attention unless it’s actually live. That’s virtually a law of physics. The way to fight this is to deliver early and often: start making it real for people as soon as possible, get something out there quickly so they can use it and feel what it’s like, and then build up.

2. It’s nobody’s fault but yours

This is a law of the land, and you can get a change in legislation. If people work in silos then it’ll be easy to finger-point when things go wrong. If people are brought together from across the organisation and are jointly responsible for the project being a success, then there’s a much better chance of success happening. This is because there is much greater cohesion between people with different expertise, which means more fluid communication and much less loss of understanding. So problems are less likely. And when they do happen then you can expect a much more productive reaction from all involved.

3. No-one has thought about [some critical function]

Again, this is a law that can be changed: find out early who all the stakeholders are, and make sure they’re involved. This is a pain, because it’s much easier to get a project going when there are fewer dependencies or teams involved. But inconveniently projects are judged a success at the end, not at the start. So it’s worth accepting a bit of pain at the start — socialising the ideas, seeing who needs to be involved, involving them, listening to them, changing your thinking accordingly — to avoid a lot of pain at the end.

4. [Team which does unglamorous but critical work] is always the last to know

Yes, it may be easier to involve some people only at arm’s length, but the inconvenience you’re avoiding early on will have to be repaid with interest. As before, discovering and involving all the stakeholders early is key.

5. Things you want to change are always hard-coded — and vice versa

You can’t please all of the people all of the time. But you can choose a small number of people and make the system flexible for them. And then you can extend your system a bit so that it pleases a few more people. And then you can continue to extend and expand, including more and more people, and adding more and more flexibility in just the right places to please them, until… well, hopefully everyone agrees it’s okay to stop.

For this approach to work you do have to make sure you have the capability to adapt your system. If you’ve chosen to buy then you need to know it’s sufficiently flexible for your needs, and where it doesn’t flex (or you’ve chosen not to flex it) then you need to make sure you communicate the limitations to all the relevant people. If you’ve chosen to build then you need to make sure you have the skills to build effectively.

6. Your system will always be out of date

Every project is a response to the wider business environment. And the business environment will never stop changing. So the question to ask is: If this project is a response to business change, and the business environment is changing constantly, what resources do we need to continue to put into this system after the project is complete to ensure it continues to keep pace with the moving target it was aimed at in the first place?

I’d say almost any answer is acceptable, including “We’re happy for it to start falling behind the moment it goes live, because that bit of our business isn’t the most important part”. That particular answer will probably upset some staff, but the main thing is that it’s a conscious decision. What will lead to problems is not answering the question at all.

Mind you, it’s also possible to get that answer wrong. Or, more likely, to incorrectly predict the pace of business change. If that’s a possibility then acknowledging the risk should help secure additional resources in the future.

A health warning to followers of the Knight News Challenge

I was initially horrified, and then puzzled, when I read Ryan Sholin’s piece on “How to manage technology decisions in 5 easy steps”. Horrified because his tips seemed at odds with my own experiences of what works and what doesn’t; puzzled because Ryan is someone with a great deal of experience in digital media, so I couldn’t understand why he was writing these things. Eventually they made sense, but only when I realised his tips were for entrants to the Knight News Challenge (KNC), which is a very specific competition with very specific demands and an audience mostly of a particular type.

It became apparent to me that much digital media advice for non-technical people really needs to come with a very strong health warning. Here’s one taken from a prescription medicine I’ve just retrieved from my bathroom. It needed only very light editing and seems quite appropriate:

REMEMBER this advice was prescribed only for you. Only a digital professional can prescribe it. NEVER give it to someone else to use even if you think their project is similar.

Looking at Ryan’s blog, the advice is for “dealing with developers and choosing a platform for your [Knight News Challenge] project”. As a previous winner of the KNC he should know what he’s talking about here. Unfortunately, in the process of being prepared for the KNC blog its intent seems to have been overstated. It is not, unfortunately, advice on “how to manage technology decisions in 5 easy steps” — which is a shame, because I and a lot of people I know could do with easy answers to hard technology decisions. Nor does it provide guidance on “How to hire developers”, as the intro suggests, unless you’re hiring one or two developers specifically for the KNC.

This may sound flippant (perhaps, admittedly, because it is) but there is a serious point here. I suspect there are going to be many more people following the KNC than there are participating, and I think a lot of those followers will be people in the process of embracing digital media projects for the very first time, and will be looking for guidance. Perhaps they are just starting on digital projects within their current traditional media companies. They will need good guidance so they can make a reasonable success of their early projects and so feel confident about getting more involved. The media industry needs those people to be successful.

But taking the right advice for the wrong project will lead to problems. Here are some instances of Ryan’s advice and where your (or a colleague’s) situation might require something different…

1. Learn a little bit about any one Web framework, standard, or programming language

If you’re leading a 2 or 3 person team then it’s a good idea to understand the technologies your team members are using. But you may find yourself in a slightly different position in a larger team. In that case the level of conversation you need to have with people will be different. If the technology being used is fairly complex then deep-diving into a particular web framework or programming language is going to be far less useful than having, say, a good overview of the technologies involved and why they’re important — if only so that you can ask appropriate questions and deal with the answers appropriately.

2. Choose the people you want to work with and spend an ample amount of time telling them what you want.

I’d never disagree with the “ample amount of time” part — if you don’t dedicate a lot of time to your project, don’t expect it to go the way you want. But the “telling” part will only get you so far. It will probably work very well if you’ve become an expert in a particular web framework which is the basis of your project, and if you’ve hired someone with less experience there. But if your situation is different your approach should be different. For anything ambitious or complex or imaginative the development will be an evolving two-way conversation, not a one-way monologue in which you’re telling someone what to do.

So, lots of good advice on the KNC blog for participants of the Knight News Challenge. But if you’re sitting on the sidelines you should not mistake the demands of the Challenge as being the same as the demands of whichever project you happen to be working on next. Get advice that’s been prescribed specifically for you.

This information has been provided free of charge under the National Digital Health Service.

An ABC of R2: N is for News section

…which was one of the two highest-priority launches of the project. Yet it happened around 12 months after we planned it, and between the planning and the launch we also launched the home page, video integration, and sections for Media, Technology, Business, Science, Society, Money and Environment. If it was so important, why did we take that seemingly roundabout route?

Actually, it wasn’t that indirect. In January and February 2007 we planned all the work that was to follow the launch of the Travel section, which had gone out in November 2006. From our senior stakeholders we sought the business priorities, and there were two major milestones: changing the home page would send the clearest public signal of intent (even though it was only one page), and launching our news content in the new design would demonstrate the depth, extent and utility of the transformation. So those were our two major targets.

But there was another major requirement running through all our launches, and that is that they should be sufficiently comprehensive and largely complete at the moment of launch — there shouldn’t be any obviously missing features or tools. Since the news agenda is both urgent and highly volatile the news desk needed a comprehensive set of tools with strong integration to be able to deal with the daily demand. That included polls, improved galleries, an audio player with better podcast integration, a wider range of layout templates, many more navigational components, more streamlined tools, cartoon pages, very flexible keyword management, and much more.

It was, in total, around a year’s worth of work, so getting there directly would have meant no major launches — no tangible benefits — for a year, and that just wasn’t on. Agile development is about providing value early. To deal with this problem we exploited the fact that many sections, such as Science and Media, would benefit greatly from early launches and wouldn’t need such a comprehensive featureset as News from the word go. And developing those other sections would help build up towards News. We then calculated that getting to the News launch via a detour through those other launches made a difference of only 6%.

Clearly there were big wins all round. Many sections were launched early so those desks got the benefits early; the commercial team was able to make use of the flexible advertising early; the technology going to the news desk was tested and refined well ahead of launch; and the risks normally inherent in a big bang launch were drastically reduced.

Speaking personally, I found the launch of the News section surprisingly muted. Lots of the public excitement and discussion around the reworking of our site had occurred when we launched the front page many months before, and continued in varying forms with the release of each subsequent section. By the time we got to the News section it was much less startling to those outside looking in. But in terms of internal change, so much traffic goes through the News section, and it involves so many people working with it each day, that it was very significant. Frankly, big launches that happen with so little fuss are something I’m very happy to live with.

An ABC of R2: G is for Guardian America

…which was not part of the project scope when we started R2. It’s fair to say that when we began implementing in February 2006, the idea of a Guardian America launch was not on the radar. Yet by the middle of 2007 it was being talked about very seriously, and increasingly so. How did we fit in an additional sub-project?

As much as technologists might sometimes think they hold the key to success, when it comes to media it’s still true that content is king. Guardian America’s success is driven from its editor, Michael Tomasky, and his team. But technology did provide the vehicle for that.

The most valuable thing we technologists did for Guardian America was to not reinvent the wheel. We recognised that the core elements had already been built — most notably, a front page designed to showcase a variety of content was already in use as the main site front. Making use of that also ensured design consistency. But that’s not to say there was no work to do. The content management system was not originally designed to support two major fronts, particularly with a variation in branding. The core work, then, was to extend something we had already delivered and make it more useful.

Using this approach everyone won. The Guardian America team got all the functionality and flexibility that we had delivered for the team running the original site front, and they got it relatively quickly. From an operational point of view, by managing Guardian America in the same way the front was managed, the GA team was able to share skills, people, advice and creative ideas. The tech team got to show they could deliver technology for a high-profile project on very short timescales. The business as a whole won because the overall R2 budget remained constant. We managed that by deprioritising a small amount of less important work in the usual Agile manner. I don’t think there were any arguments about what, exactly, was deprioritised, since Guardian America was so clearly more significant than many other features on our list.

We (heart) UAT

When do we need user acceptance testing? And when can we get away without it?

User acceptance testing (UAT) is when your software goes in front of the user to get final sign-off — and when they ask for changes if not. In theory you shouldn’t need UAT at all (didn’t they tell you what they wanted? Weren’t you listening?), and indeed perhaps you can sometimes get away without it.

This argument is stronger if you’re considering UAT throughout the project, not just at the end. This would be the case if you’re UATing individual features, or individual deliveries in a culture of more and smaller iterations. The arguments are: more deliverables mean more UAT means more expense for the customer; smaller iterations mean a reduced chance of getting it wrong; frequent deliverables give the customer more opportunities to change anything they don’t like anyway.

However, even with small, frequent deliveries there are circumstances when it’s more important to do UAT. Looking back on some projects I’ve worked on, here are times when it was, or would have been, a good idea…

1. When the output of the software is subjective

I once worked on a project that asked what bands you liked and then based on that recommended albums by unsigned bands. This was a big leap away from the “people who liked that also liked this…” kind of recommendations, because the unsigned bands won’t have had a significant userbase, so we couldn’t rely on a self-generating wisdom-of-crowds mechanism. Plus, of course, music recommendation is highly subjective, and it was our clients who were the music experts — we implementers were mere software developers.

In this case you can see it was key for our clients to play with the system and be sure they were comfortable with it. If they hadn’t been, it would have meant tweaking the algorithm, not a complete rewrite of the software. In the event something rather unexpected happened. While our client’s representative was very happy with the output, he also found it rather disconcerting. He felt somewhat disenfranchised from the system because it seemed so mysterious. He realised that he couldn’t demonstrate it in front of his colleagues and their investors without having an answer for the suddenly inevitable question: how does it work? And once this was explained to him he became much more comfortable again.

If the output of the system had been much more objective and predictable this instance of UAT wouldn’t have been so important. In the end it threw up an important new acceptance criterion for the client, which we were quickly able to address.

2. When the software has a costly userbase

One of my past projects involved creating a user interface for a property data entry team. Individuals on the team were, as you’d expect, low-paid and largely interchangeable — after an hour of training you would know everything there was to know about the system. Your sole function was then to spend hour after unsociable hour entering information about properties you could never afford to live in for people you would never want to live next to.

But although the daily cost of individuals on the team was considered low, the cost of the team as a whole — and the value they were bringing to the business — was very high.

Problems with the user interface in this case would have had a knock-on effect across the whole team of 20 or 30 individuals, and cost their employer dear. It was key for someone to check not only that the system was responsive, but also that there were intuitive keystrokes across the data fields, data entry was reasonably forgiving, tabbing between fields happened in the order that data was presented in, and so on. In this case, the time and early feedback from one user at the vanguard saved time and cost 20 or 30 times over.

3. When you don’t have a QA department

A pretty obvious one, but if you don’t have people dedicated to quality assurance then those internal people who take on that function (probably the lucky developers or an irritated account manager) will be too close to the software and too distant from the client to pick up on all the issues that are important to them.

A good percentage of the projects I’ve worked on have involved integrating search engines. And in almost every instance when testing has been left to the development team they’ve alighted on one or two key phrases to use as test cases and considered that sufficient. By contrast, the end user to whom this is important will have half a dozen phrases that are relevant each day — and they’ll involve accented characters, non-ASCII characters, mismatched quote marks, and so on. And that’s before we get to the results page. Without effort the expert user has given the search engine a good workout — often raising difficult issues for the developers.
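The gap between the development team’s one or two test phrases and an expert user’s daily reality can be sketched as a simple test harness. Everything here is hypothetical — the queries are invented examples of the awkward input described above, and `search` is a stand-in for whatever engine is being integrated:

```python
# Hypothetical awkward queries of the kind an expert user supplies daily,
# and a development team's "happy path" test phrases tend to miss.
awkward_queries = [
    "café society",       # accented characters
    "naïve approach",     # non-ASCII letters
    '"unmatched quote',   # mismatched quote marks
    "AT&T results",       # characters with special meaning to many engines
    "",                   # the empty query
]

def search(query):
    """Stand-in for the real search engine; at minimum it must not blow up."""
    return [] if not query.strip() else [f"result for {query!r}"]

for q in awkward_queries:
    results = search(q)            # a crash here is a failed test
    assert isinstance(results, list)
```

Even a skeleton like this encodes the point: the expert user’s inputs become repeatable test cases rather than one-off surprises at launch.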

Actually, even if you do have a QA department then there’s a case for UAT, because more often than not the QA team will test against written criteria. There are very often criteria which, with the best effort in the world, just don’t get realised or written down at the requirements stage. And in these cases it’s only the end user who can really tell if it’s right.

4. When you lack detailed requirements

Another fairly obvious situation, but if you’ve not had a good bit of customer input at the start of the project, then UAT is going to be even more important later on.

A sales person I worked with once had a cunning plan. We’d already produced an e-commerce site for one of our clients, and he decided to sell a duplicate system to one of their rivals. It was a good plan on paper. There was no contractual restriction for us; the new client would get their site at a relatively low cost; they’d get it quicker than usual; and we’d only have to reskin it. The deal was done over a nice lunch.

You can guess what happened in reality. When they saw what we imagined was a near-complete version they wanted some changes. The checkout system wasn’t quite to their liking; their product catalogue didn’t have quite the fields we were expecting; the order placement system didn’t align with what they had internally; and so on. It cost everyone more than they had anticipated and it was delivered later than anyone would have liked.

Certainly this is a strong case for robust specifications — or at least understanding what you’re getting into before you get into it. But it also demonstrates that lack of early information cost us dearly when the information did come along later. We hadn’t made time for UAT, which is why we delivered the final product later than planned. Fortunately for everyone there was no externally-driven deadline. If there had been then we’d have been obliged to deliver something which the client wasn’t happy with.

5. When the inputs to the software are difficult to specify

…an example that brings us right to the present.

The current work to refresh the Guardian Unlimited website is much more than applying a lick of paint. We’re also producing new tools for our reporters and new ways they can tell their stories. But telling a news story is not simply a matter of filling in a few boxes and clicking Save. To make a story relevant requires a lot of hard training. To edit an entire news section additionally needs an awful lot of experience and good intuition…

As I write, one section is leading with three big stories and a podcast, plus some smaller items headed “More media news”. The stories have between three and six sublinks, which indicates the depth of related news that the editor has available. Yet at the same time the Technology section is showcasing six stories, all with a bit of blurb, but only one of which has sublinks (to a video and a gallery). In both cases the editors are using the same system, but the page layout has to balance itself out regardless of the highly variable input.

But though the input is highly variable it is not without bounds. A section would never be showing just one story, and an editor would almost certainly not try to highlight twenty stories and claim they had largely equal weight. The only way to really know the bounds in which the system is supposed to reasonably work is to sit an editor down with the tools and let them lay out pages in response to a real day’s stories. No amount of requirements analysis and QA experience can substitute for a real journalist responding to a real day’s events.

And that’s the beginning…

No mechanism should be applied blindly; there are times when user acceptance testing is more appropriate, and times when it is less appropriate. The important thing is to be aware first that UAT exists, and second that there are criteria which you can apply to assess how valuable it can be.