Within the agile world, it’s a given that continuous integration should be part of the process. But an efficient way of getting the software into users’ hands seems to sit outside of agile, and I believe that’s a mistake. This is one of the things I’ll be talking about later this month in my workshop at How to Web in Bucharest (drop by if you’re in the area, etc, etc).
This is not to say seamless delivery into production itself is a neglected subject — it certainly is not — but it’s often regarded as a topic distinct from the agile software development process, and often not implemented. It’s easy to see why this is: in part it’s because it’s quite possible to demonstrate working software without it being in production. In part I suspect it’s also because the last mile is typically the realm of the systems integrators and administrators, who typically sit (or are sat) outside of the software process. And, let’s be honest: getting something into production is difficult, getting it into production regularly is very difficult, and so it’s good to be able to claim the agile badge without having to sort that out.
But getting something into production again and again and again is a huge benefit to the core goals of agile: until something is actually being used for real by its end users it’s very difficult to make further calls about how it should evolve. By doing this we really get to understand what counts as valuable. And those end users are rarely the business analysts or internal customers that a lot of teams deal with.
An uncomfortable consequence of pushing for a seamless last mile (or continuous delivery, or however you would label your target) is that it forces you to be clear about what success really is. For many software teams a demo in front of the internal customer counts as success, even though that’s often a cop-out. Getting the thing in front of your end users is what should count as success, but that’s often difficult to deal with, because it means having to respond to potentially unwanted feedback in the middle of your project. And yet if agile software development is about delivering value — regularly — then that value is only really measurable when it’s in your real end users’ hands.
Hi Nik,
This is an interesting topic. Agile is always criticized (by waterfallists) for not being able to get projects done. I hope that you will publish this discussion as a post…
Thanks, PM Hut, that’s a good idea.
I totally agree, and it’s something I have worked on over a number of projects. Whilst big companies often resist handing over operational responsibility for the whole platform to the agile team, we have found a successful middle ground using a devops approach. We form an environments and infrastructure team to support a few common agile software teams – preferably under the same overall management, to ease priority calls – and we take full responsibility for all the dev/test platforms and production, co-operating closely with the devs: a working production system is a shared responsibility and goal.

Often we are only hands-on at the OS level and above, but this is where the biggest gains are. We delegate hardware, storage and networks to the level of a service with a clear interface, similar to a cloud model (although I wish I could get an automated interface for defining firewall rules – but that’s a separate battle). This helps justify our use of corporate IT resources and reduces the control arguments. It also allows us to leverage corporate-wide efficiencies in these areas. And, to be honest, I find hardware rather boring anyway, so why would I want to manage it? Once we have a stack of capacity provided, it rarely gets touched, other than to add more of the same as capacity requirements grow.
The key factor for success is that the infra team needs to be formed at the same time as the first software development takes place, so they can start to create an automated build for the run-time environment in parallel. If you try to catch up later, there won’t be time to build sufficient automation, and the dev team will already be managing their test platform – something that will invariably result in divergence in approach.
First the run-time is built in test, and the weekly showcase takes place on that. Eventually we have sufficient automation for the QAs to self-deploy the whole stack after every build if they wish – from db creation/upgrades to OS rebuilds to app deployment and load balancer config – thus the whole deployment is continually tested as the app evolves. We set the rule that the same artifacts must be capable of being deployed to every environment just by running the deployment scripts with a different environment parameter. All environment differences are handled by the deployment tool from a versioned repository.
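The "same artifacts, different environment parameter" rule above can be sketched roughly as follows. This is a hypothetical illustration, not the commenter's actual tooling: the environment names, JSON config layout, and step names are all assumptions.

```python
# Sketch: one deploy routine, parameterised only by environment name.
# All environment differences live in a versioned config repository,
# never in the artifacts themselves. (Illustrative assumptions throughout.)
import json
from pathlib import Path

ENVIRONMENTS = {"test", "stage", "prod"}  # assumed environment names


def load_env_config(env: str, config_repo: Path) -> dict:
    """Read this environment's settings from the versioned config repo."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    return json.loads((config_repo / f"{env}.json").read_text())


def deploy(artifact: Path, env: str, config_repo: Path) -> None:
    """Run the identical deployment steps in any environment; only the
    loaded config varies between test, stage and prod."""
    config = load_env_config(env, config_repo)
    for step in ("upgrade_database", "deploy_app", "configure_load_balancer"):
        print(f"[{env}] {step}: {artifact.name} -> {config.get('host')}")
```

The point of the sketch is the shape, not the steps: because `deploy()` takes the environment purely as a parameter, every test-environment deployment is also a rehearsal of the production one.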
It is then a simple matter to take the same deployment mechanism and target production. We have even used a common agile tool to front this with a workflow, ensuring that any given build can only reach production by following a set process with approvals – i.e. QA deploys to test, the Scrum Master approves deployment to Stage, the Infra team deploys to Stage, QA approves, the Scrum Master approves deployment to prod, and the Infra team deploys to prod. We thus have a full record of the production change, including a log of the deployment activities (we attach the log to the ticket afterwards).
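That approval chain is essentially an ordered state machine with a role check at each step. Here is a minimal sketch of the idea, assuming the step names and roles from the comment above; the class and its API are invented for illustration, not the agile tool actually used.

```python
# Hypothetical sketch: a build may only reach production by walking the
# ordered pipeline, each step performed by the expected role, and every
# step is recorded -- giving the audit log attached to the change ticket.

PIPELINE = [
    ("deploy-test",   "QA"),
    ("approve-stage", "Scrum Master"),
    ("deploy-stage",  "Infra"),
    ("approve-qa",    "QA"),
    ("approve-prod",  "Scrum Master"),
    ("deploy-prod",   "Infra"),
]


class Promotion:
    """Tracks one build through the pipeline and records who did what."""

    def __init__(self, build_id: str):
        self.build_id = build_id
        self.log: list = []  # ordered (step, actor) audit trail

    def advance(self, step: str, actor: str) -> None:
        expected_step, expected_role = PIPELINE[len(self.log)]
        if step != expected_step or actor != expected_role:
            raise PermissionError(
                f"expected {expected_role} to perform {expected_step!r}")
        self.log.append((step, actor))

    @property
    def in_production(self) -> bool:
        return len(self.log) == len(PIPELINE)
```

Because `advance()` refuses any out-of-order or wrong-role step, skipping straight to `deploy-prod` is impossible, and `log` doubles as the record attached to the ticket afterwards.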
This approach also works well for demonstrating confidence to a Release Management team that only a light touch through the change board is required.
Our goal is to deploy weekly, although it could be more frequent if required, as the actual deploy has a very low resource overhead and a predictably high success rate.
Good points here. Keep up the good thinking!