The other week I was reading about the quality problems with major Windows 10 releases. To recap, Microsoft started rolling out its second Windows 10 release of the year (called 1809) to the public, but had to halt it quickly when the update was discovered to be deleting some people’s files.
Before Windows 10, Microsoft would release a new version of its operating system approximately every three years, with “service packs” roughly annually. Those were mostly the days of buying the software on CD-ROM or floppy disk. A different era. Now there are patches every month, two major updates a year, and it’s all delivered electronically.
Clearly the release of 1809 is a black mark for Microsoft, and a (small) public embarrassment. So I was very interested to read Peter Bright’s detailed analysis of the problem, which he traces to Microsoft’s overall development process:
The latest problem has brought this to a head, with commentators saying that two feature updates a year is too many and Redmond should cut back to one, and that Microsoft needs to stop developing new features and just fix bugs. […] But saying Microsoft should only produce one update a year instead of two, or criticising the very idea of Windows as a Service, is missing the point. The problem here isn’t the release frequency. It’s Microsoft’s development process.
In much smaller organisations than Microsoft I’ve worked with people who say, “We have quality problems with our releases; we must do fewer releases, not more.” The logic is reasonable, but I disagree with the treatment.
To take Microsoft’s situation, I’d argue that slowing releases to once a year won’t improve things. The first thing that will happen is that there will be greater ambition. Executives (and their customers) will demand greater changes if they are to receive updates less often.
The second thing that will happen—or rather, will remain the same—is the deadlines. They will still be there and will still be tight. There will inevitably be the same level of long hours, feature cuts and other kinds of compromise to make those deadlines.
The third thing that will happen is that any process improvements will take a full year to come into effect. So the feedback loop to see improvements will lengthen, and hence improvement will be slower.
The real solution to improving delivery quality, while also remaining competitive, is to release more often, not less often. But there is an important caveat to that. As Peter Bright says, you can’t just do the same things faster and expect quality to improve. Rather, you need to use the goal of very frequent releases to force out process changes that will improve quality. Organisations that do this ask themselves, “so if we were to release much more frequently, what would our tools and processes look like…?”
This leads to practices that can be entirely new for an organisation: more independent teams, different relationships between functional areas, a very high degree of automation, and so on. And then the feedback loop becomes much shorter, and some further improvements can be realised more quickly.
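The feedback-loop argument can be made concrete with a toy model (my own illustration, not from the article, and the 5% figure is an arbitrary assumption): suppose each completed release cycle gives the organisation one opportunity to observe problems and bank a fixed fractional process improvement, regardless of how long the cycle is. Then improvements compound per cycle, and more cycles per year compound dramatically faster.

```python
# Toy model: each completed release cycle yields one process
# improvement. Assumption (mine, for illustration): every cycle
# improves quality by a fixed 5%, independent of cycle length.

def quality_after(years, releases_per_year, gain_per_cycle=0.05):
    """Relative quality after compounding one improvement per release cycle."""
    cycles = years * releases_per_year
    return (1 + gain_per_cycle) ** cycles

annual = quality_after(3, 1)    # one release a year
monthly = quality_after(3, 12)  # monthly releases

print(f"annual:  {annual:.2f}x")   # ~1.16x after 3 years
print(f"monthly: {monthly:.2f}x")  # ~5.79x after 3 years
```

The model is deliberately crude (real improvements per cycle would surely shrink as cycles get shorter), but it shows why lengthening the cycle to “buy quality” works against you: the slower loop means fewer chances to learn and improve at all.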
The surprising result is that an organisation achieves a seemingly impossible pair of outcomes: greater speed and higher quality.
I find it fascinating that solutions which seem very intuitive—like releasing less frequently—are very often the opposite of the ideal solution. It’s a reminder to us all to be open-minded when taking on new ideas.