Quality seems to be widely misunderstood. People often talk about quality being a key variable that a project team can choose to alter to reshape their project — along with other variables such as cost, scope and time. I think that view is flawed. Meanwhile Mishkin Berteig is not unusual when he says “I have come to believe that quality is not negotiable” — meaning that although it’s possible to negotiate over it, it’s fundamentally wrong to do so. I think that view is flawed, too. As I see it, quality rarely varies in the same way as other key project variables, is rarely changed directly, and it is often right to allow it to drop below 100%. I want to spend some time here explaining why.
Before I launch into this, allow me to say that although this is inspired by an issue on which I differ with Mishkin, I do have great respect for him. In fact, it’s arguable that I’m only writing this because I have such respect for him — because it’s so surprising to see him write something that seems to be so mistaken.
The sections ahead are:
- Quality is about much more than bugs
- Quality is rarely addressed at the outset
- Quality is less flexible than scope, time or money
- Quality is usually the driven, not the driver
- Quality can be lowered — and sometimes that’s okay
- Non-executive summary for non-executives
Now, let’s get on with things…
Quality is about much more than bugs
First, let’s make one thing clear: quality is not just bugs. It includes look and feel, ease of use, ease of installation, performance, and so on. Looking at Mishkin’s post it’s clear to me that when he says “quality” he really means “bugs”: he is making a strong case for test-driven development. But test-driven development doesn’t help with an application that requires six clicks rather than one to enter the default data. It doesn’t help when one of your modules needs administrator access to save its state. It doesn’t help when you can’t add any more servers just as you need to triple your capacity. These are all situations where quality could be improved, but which aren’t solved by test-driven development. Quality is wider than that.
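To make the point concrete, here’s a deliberately contrived sketch (all the names are invented for this example): a six-screen “wizard” for entering default data. Every step is individually correct, so a test-driven suite passes happily — yet the user still makes six clicks where one pre-filled form would do. The passing tests say nothing about that kind of quality.

```python
# Hypothetical sketch, invented for illustration: each call to enter()
# represents a separate screen the user must click through.

class DefaultDataWizard:
    STEPS = 6  # one screen per field, instead of one pre-filled form

    def __init__(self):
        self.data = {}

    def enter(self, field, value):
        """Record one field; the UI shows a separate screen per call."""
        self.data[field] = value

    def is_finished(self):
        return len(self.data) == self.STEPS


# The kind of check TDD gives us: it verifies correctness, and it passes.
wizard = DefaultDataWizard()
for i in range(DefaultDataWizard.STEPS):
    wizard.enter(f"field_{i}", "default")
assert wizard.is_finished()  # functionally flawless; still six clicks
```

The suite is green, the behaviour is correct, and the usability problem is completely invisible to it.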
Quality is rarely addressed at the outset
There’s a classic project management mantra in many variations: “cost, quality, time: pick any two” is one such [1, 2 (PDF), 3]. At one level it implies (quite fairly) that the major project variables at the outset are interdependent, and you can’t choose settings for them all. You can’t say, for instance “I want it to do all this, I want it to be flawless, and I want it in a week” — making just two of those demands determines what you get for the third.
But at another level it suggests that people are making these kinds of decisions, and that quality is often determined at the outset. I find this implausible. I rarely hear clear requirements at the start of a project about the quality of the result. I’ve heard vague pronouncements, but rarely anything that could be factored into a project plan. I’ve heard “we need to do better than last time”, and “let’s not over-do it the way we did before”. But that’s usually part of an agenda-setting speech, not the project plan. It’s not precise enough to make a calculable impact on scope, cost or time.
At the start of a project you will hear a precise number for a budget, a firm date for a deadline, and (we hope) a clear list of requirements. But it’s very rare that people give a precise level of quality — it doesn’t even have a common unit of measurement (remember: quality is not just a bug count). Quality is rarely negotiated at the outset. The client expects the software to do the job specified, and that’s that.
Quality is less flexible than scope, time or money
Another misconception is that quality can be varied to the same degree as other major (initial, or input) variables such as scope, time and money. Scope can be cut right back, or added to indefinitely. Similarly for money and time. But quality doesn’t vary in the same way. Quality can’t be increased arbitrarily — after a point there’s nothing left to do, assuming the scope isn’t changing. Similarly it can’t be reduced to zero, because below a certain point — well before you get to zero — the product isn’t doing the thing it was supposed to do and therefore you’re reducing scope.
However, while I believe that quality varies less than other project variables, I do believe the consequences of improving quality — even a small amount — are significant. Unit tests reduce firefighting and keep the project moving forward. Usability testing makes people much more comfortable and confident with your software, paying dividends throughout the business. Increasing quality by a small amount can have a big, positive knock-on effect.
Quality is usually the driven, not the driver
I’ve talked about scope, time and money being “input” or “initial” variables. They’re things that are usually explicitly set at the start of the project. Quality, however, might be set early on, but it’s usually determined by other things.
Sam Newman writes, correctly in my opinion, that quality is driven by process. Here, the way you do things determines how good the result is. Quality is an “output” variable more often than not. Introducing test-first development is a process change; allowing post-development testing time to be reduced because the development phase has overrun is a consequence of an unfortunate project management process; introducing or eliminating performance testing is a process change. In all these cases quality is not an independent variable; it is driven by other things.
Quality can be lowered — and sometimes that’s okay
Quality is negotiable, and it would be foolish if it weren’t. It would be foolish if budget, scope and time weren’t ever negotiable, but they are, and in that respect quality stands next to them on a level.
Real situation: The website is being built; the investors are making bullish noises in the City; the launch — a release and public demo — was scheduled months ago and is due next week with 200 big media attendees, and testing is still ongoing. If a real, but infrequent, bug is found that can’t be fixed by next week does the company cancel the launch?
Real situation: You’ve been working for months on your release. It’s almost done, but there’s another project looming, which the sales team are desperate to get started. The time remaining for your current work will leave some untidy edges, but there’s a pressing commercial need to release it and move on to the new project before the company loses even more of its competitive edge. Do you push for a project extension?
If a developer tells me their professional needs are bigger than those of the organisation in which they work, then they shouldn’t be working there. A developer works for their organisation — just as everyone in that organisation does — not the other way round. And making the right decisions for the business must have positive consequences for the development team.
Every time the developers sacrifice quality for schedule/cost/features, they incur a debt. Sometimes this is called “technical debt”. I like to call it lying.
Powerful prose, but I’m afraid that particular extract strikes me as uncharacteristically arrogant. “Technical debt” is an honest term. By using this term the developer says to their organisation that the decision taken now will incur an increased cost later. The organisation as a whole can then weigh up the options and make an informed decision. And call me crazy, but it might just be that after being apprised of the situation, the project board, or managers, or directors might be in a better position to make a strategic decision than the developer who’s paid for their expertise in one very particular part of the organisation.
Of course, that’s not to say that every decision on quality needs to be raised to board level. Just as a developer wouldn’t dream of bothering their project board about whether they should use descriptive variable names (“Well, short names are quicker to type and might shave off 2% of our delivery time, but longer names will save us time if we need to revisit that part of the code…”) so there will always be ways to improve quality in refactoring, or configuration, or packaging, or architecture which the developer should just do without question because they’re a professional. But the developer who never informs their project board of major quality forks in the road, and who never allows their client to be involved in such decisions, is failing to fulfil their responsibility as a member of the whole project team.
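The variable-naming aside can be shown in a few lines (a deliberately trivial, invented example). The two functions below behave identically; choosing between them is exactly the kind of quality decision a professional makes on their own, without escalation.

```python
# Two equivalent implementations of a compound-interest calculation
# (a made-up example). The first is quicker to type; the second is
# clearer on a later revisit. Same behaviour either way.

def t(p, r, n):
    # terse names: fast now, opaque in six months
    return p * (1 + r) ** n

def total_with_interest(principal, annual_rate, years):
    # descriptive names: a little slower to type, much easier to revisit
    return principal * (1 + annual_rate) ** years

# Identical results, so the trade-off is purely about maintainability.
assert t(100, 0.05, 2) == total_with_interest(100, 0.05, 2)
```

Nobody convenes a project board for this; it is quality improvement that simply happens, or doesn’t, inside the development process.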
At Guardian Unlimited we’re always looking for ways to improve the quality of what we do. But I’ve also heard people say “let’s not gold-plate it”, and “look, we’re not producing art”. These are senior developers who are arguing against particular quality improvements because they are able to weigh up not just the technical decisions but also the wider business decisions. They are being pragmatic. I’d say they are much better developers for it.
Non-executive summary for non-executives
Let’s recap. Quality is really important. It’s about much more than unit testing. And small improvements in quality can produce disproportionately large improvements in the results. But let’s not kid ourselves that quality is a variable that can be tweaked arbitrarily and independently this way and that. It’s not of the same standing as budget or scope or time. And let’s not kid ourselves that developers of principle have to stand firm against some kind of management beast constantly trying to undermine them. Developers are an integral part of the organisation, and should behave (and be treated) as such. The quality of developers’ output needs to be balanced just as every other organisational demand does.
The more clearly we see and understand software quality for what it really is, the better we will be able to improve it.
3 thoughts on “Quality is a bogus variable”
Thanks for writing this! I agree with many of the things you are saying. I recently taught a course where there were several support folks in the audience. Their problem was complex in its details, but fundamentally it came down to them not being empowered to _really_ fix their users’ problems. Everything they did was surface patches, bandaids, appeasement, etc.

In a lean environment (and I admit to not being an expert here), my understanding is that any time a quality problem is found (in the large sense, not just in the “bug” sense), everyone stops what they are doing and fixes the problem… but not just the surface problem, but the root cause of the problem so that hopefully, the problem never re-surfaces again. For a support organization, that would mean that instead of providing one-off workarounds to users, the support people would solve the problem so that it never happened again. In fact, a support organization (e.g. a help desk) could be thought of as the sensory apparatus for detecting quality problems. A user calls and says “how do I do X”? and the support organization will normally just tell the user how to do X. But wouldn’t it be better if the support organization also figured out how to make it so that no user would ever have to call and ask that question again? This gets to your comments about quality including usability, performance, etc.

The trouble with this encompassing definition of quality is that it is in fact never perfectly attainable. That is also the good thing about this definition: it provides a very high standard to which we can aspire, and always find ways to move towards. This gets to my belief that with software we are ultimately looking to create the perfect servant: an uber-system which can do what we want and need before we even know it… and that’s an intro to a huge topic in and of itself!!!
Also, thanks for calling me on the arrogant comment. You are right: that is arrogant. I guess that lately I’ve been very focused on the idea of truthfulness and how incredibly foundational it is to being able to do valuable work. Again, a long discussion, but I think that we have a long way to go in developing our ability to be truthful. This isn’t something that can be enforced with a process, and yet it is at the very core of what I think of as “agile”.

Now to get back to my comment… The problem with technical debt is that most organizations aren’t really aware of the consequences of allowing it to build up. We have a good understanding of financial debt: debt ratios, cash flow, assets vs. liabilities, etc. But when it comes to technical debt, it is a lot harder to measure, it is often obscure or hidden, and it is often incurred without the _informed_ consent of those who are taking the risks, namely the owners of the business funding the software development. This is where we get into a bit of trouble. Software development is a black art, an esoteric discipline. It is difficult to truly communicate the consequences of technical debt to the people who should be making the decisions about that debt. In fact, I think that software developers often don’t know the consequences.
Mishkin, you touch on two very important areas which really need their own separate exploration, but are worth at least identifying in brief.
I really feel for the support team you describe, and to me this is all about “whole teams”. Whole teams must work together and be empowered to create the system. But once it goes into production most of that team often disappears and it’s just the support people who are left. The team is no longer whole. The support people cannot possibly have the power to make improvements when those improvements really are the domain of the long-departed developers.
The other area relates to common understanding. You’re hinting at the point (which I didn’t consider) that while it’s one thing for the developers to report technical debt, it’s quite another for the rest of the business to understand what they’re being told. It does relate to truthfulness, your hot topic, but it’s also apparent that truthfulness is going to be very hard to attain if your two parties don’t understand each other in the first place. And it’s not helped that quality is rather intangible in the way that time and budget are not.
That issue of common understanding also takes us right back to the beginning: is quality about a bug count (nice and tangible) or about the bigger issues including usability, architecture, scalability, etc? I suspect that because developers see a different side of the software from the client there will often be a mismatch on this issue. Maybe the most we can hope for is that whole teams in constant communication with each other will be able to agree among themselves on a per-project basis, even if there isn’t ever a single standard understanding throughout everyone in the industry.