Faster builds make better software

The other day I watched Daniel Worthington-Bodart’s presentation on 10 second build times, and was most inspired by the idea that software which gets through the build process more quickly is actually better-designed software.

Daniel’s premise is that he hates software that takes more than 10 seconds to build, and that includes running the tests. He’s therefore been on a (sometimes lonely) crusade to restructure systems to improve their build times. Mostly this involves rethinking the tests and restructuring the software to allow faster testing.

One of his most compelling ideas is that software that can be tested faster turns out to be better designed, more modular software. He cites as a typical example the number of times a database instance is set up and torn down on either side of end-to-end integration tests, of which there are typically dozens, if not hundreds, in a reasonably sized system. His antidote is to insist on rigorous testing of the interfaces, but not of the end-to-end integration. In other words, test the links in the chain, but not the chain itself. For example, by all means test that the right data goes into and comes out of the database, but you can do that without involving the layer above it; and you can test the layer above without using the database.
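To make that concrete, here’s a minimal sketch of what testing the link rather than the chain might look like. It’s not code from Daniel’s talk; the UserStore interface, the in-memory fake and the WelcomeService are hypothetical names, purely to illustrate how a layer above the database can be tested without ever starting one:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical interface between the service layer and the database.
interface UserStore {
    Optional<String> emailFor(String userId);
}

// In-memory fake: lets the layer above be tested without a database.
// The real database-backed implementation gets its own focused persistence tests.
class InMemoryUserStore implements UserStore {
    private final Map<String, String> emails = new HashMap<>();

    InMemoryUserStore with(String userId, String email) {
        emails.put(userId, email);
        return this;
    }

    @Override
    public Optional<String> emailFor(String userId) {
        return Optional.ofNullable(emails.get(userId));
    }
}

// The service layer depends only on the interface, never on the database itself.
class WelcomeService {
    private final UserStore store;

    WelcomeService(UserStore store) {
        this.store = store;
    }

    String welcomeMessage(String userId) {
        return store.emailFor(userId)
                .map(email -> "Welcome back, " + email)
                .orElse("Welcome, guest");
    }
}

// A tiny self-contained test; in practice this would be a JUnit test, but the
// point stands: it runs in milliseconds with no database setup or teardown.
public class WelcomeServiceTest {
    public static void main(String[] args) {
        WelcomeService service =
                new WelcomeService(new InMemoryUserStore().with("42", "ada@example.com"));

        check(service.welcomeMessage("42").equals("Welcome back, ada@example.com"));
        check(service.welcomeMessage("99").equals("Welcome, guest"));
        System.out.println("WelcomeService tests passed, no database required.");
    }

    private static void check(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }
}
```

The database-backed UserStore would get its own small set of persistence tests, so both links are covered without ever exercising the whole chain in one go.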

If you think your testing cannot be sufficiently thorough without those end-to-end tests, then Daniel will argue that it may well be your software design that is lacking, and that’s what prevents more modular testing. Thus forcing yourself towards faster build times forces you to improve your software design.