The other day I was discussing prioritisation with a colleague, and in particular talking about cost of delay and weighted shortest job first (WSJF).
When deciding how to prioritise it’s very useful to calculate cost of delay divided by duration (CD3). By sequencing our work by that metric we get the greatest economic benefit. WSJF is a more generalised form of CD3 in which the cost of delay is not a concrete economic measure such as money, but is some arbitrary unit such as points in particular categories.
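To make the sequencing rule concrete, here is a minimal sketch of CD3 in Python. The work items, cost-of-delay figures, and durations are invented purely for illustration:

```python
# A sketch of CD3 sequencing. Figures are made up for illustration:
# cost_of_delay is, say, pounds lost per week while the item is undelivered,
# and duration is the estimated weeks to deliver it.

items = [
    {"name": "A", "cost_of_delay": 6000, "duration": 4},
    {"name": "B", "cost_of_delay": 3000, "duration": 1},
    {"name": "C", "cost_of_delay": 9000, "duration": 5},
]

for item in items:
    # CD3 = cost of delay divided by duration
    item["cd3"] = item["cost_of_delay"] / item["duration"]

# Doing the highest-CD3 items first gives the greatest economic benefit.
schedule = sorted(items, key=lambda i: i["cd3"], reverse=True)
print([i["name"] for i in schedule])  # → ['B', 'A', 'C'] ... or similar
```

Note that item B wins here despite having the lowest cost of delay: it is so quick to deliver that doing it first loses us almost nothing on the other items.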
There is a challenge in deciding between these two approaches. CD3 gives us a more realistic and generally more useful measure of value. Financial return is measured in money, of course, but most other kinds of value can be reduced to money too, and therefore compared. If our measure of value is “security” then that might sound vague, but it can be expressed in financial terms: how much a potential breach might cost us in staff time, lost business, and so on. However, it’s hard to get people to attempt those estimates, so winning support for this approach is likely to be more time consuming.
By contrast a point system is less realistic but easier to engage with, at least for prioritisation. It’s not too tough to ask (and answer) “If this security measure is worth five points, what do you think that one is worth?” You might even have a few of these categories (security, usability, efficiency, etc) and add the points together. It’s not scientific, and its utility is more limited, but it is at least easier to get people involved.
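For comparison, here is the same idea sketched with points rather than money. The categories, scores, and durations are all invented for illustration; the point is only that arbitrary units still yield a ratio we can sort by:

```python
# A sketch of point-based WSJF. Category scores and durations are
# made up for illustration; the units are arbitrary points, not money.

scores = {
    "single sign-on": {"security": 5, "usability": 3, "efficiency": 1},
    "faster search":  {"security": 0, "usability": 4, "efficiency": 5},
}
durations = {"single sign-on": 3, "faster search": 2}  # e.g. sprints

def wsjf(name):
    # Add the category points together, then divide by duration,
    # mirroring the CD3 calculation but in arbitrary units.
    return sum(scores[name].values()) / durations[name]

ranked = sorted(scores, key=wsjf, reverse=True)
print(ranked)  # → ['faster search', 'single sign-on']
```

Both items score nine points here, so the shorter one wins; just as with CD3, duration does much of the work.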
So if we’re trying to introduce a useful prioritisation approach into an organisation, which should we push for first? Should we spend more time getting support for the more useful CD3 or go for a quicker win with WSJF, and then ease people into CD3 after that?
Happily, there are a couple of techniques to help us answer this question. One of them is called weighted shortest job first; the other is cost of delay divided by duration…