Welcome to the third part of this series of articles exploring the foundational principles of DevOps.
How much, and what should I automate?
Automation is somewhat of a hot topic at the moment. Wherever you look, companies are attempting to automate their processes, from simple data collection to fully automated systems management. So, from a DevOps perspective, the burning question is “How much, and what, should I automate?”
The answer, of course, is “As much as you possibly can.”
Whilst the above may seem flippant, I have seen organisations that can completely wipe and rebuild their production application stacks overnight. Using the code checked into their source control repository by close of business that day, they rebuild the entire stack, execute all their regression tests and, if the tests pass, switch the customer-facing application instance over to the freshly built infrastructure. This way the customer, blithely unaware of the switch, is always using the very latest daily build of the application.
Of course, you don’t have to automate to that degree, but the maxim stands: the more you automate, the greater the benefits you will realise.
This level of automation (and confidence) doesn’t come about overnight. There are likely to be sizeable shifts needed in your existing organisation and processes to accommodate these changes.
As ever, start small and pick off the items you can tackle easily:
- Look to see where you can automate tiny changes:
- Can your code check-in, compile and link steps be automated to a single button-push?
- Can you go one step further and have this happen automatically overnight?
- Can you automate the compilation of documentation from code?
- What about release-notes?
These are all laborious tasks that developers usually skimp on, rush or miss entirely; if they can be derived directly from the code, then BANG: an instant win.
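As a concrete illustration, release notes are a good candidate for derivation: if commit messages follow even a loose convention, a small script can assemble a draft automatically. The sketch below is purely illustrative; the `feat:`/`fix:` prefixes and the function name are my own assumptions, not a standard your tooling will impose.

```python
import re

def build_release_notes(commit_messages, version):
    """Group conventionally prefixed commit messages into draft release notes.

    Commits prefixed 'feat:' are listed under Features, 'fix:' under Fixes,
    and anything else under Other.
    """
    sections = {"Features": [], "Fixes": [], "Other": []}
    for msg in commit_messages:
        m = re.match(r"^(feat|fix):\s*(.+)", msg)
        if m:
            key = "Features" if m.group(1) == "feat" else "Fixes"
            sections[key].append(m.group(2))
        else:
            sections["Other"].append(msg)
    lines = [f"Release notes for {version}", ""]
    for title, items in sections.items():
        if items:
            lines.append(f"{title}:")
            lines.extend(f"  - {item}" for item in items)
            lines.append("")
    return "\n".join(lines).rstrip() + "\n"
```

In a real pipeline the commit messages would come from your source control tool’s history query rather than a hand-built list.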
- Look to automated testing tools and frameworks suitable for your system, and develop and maintain an automated test repository. This not only takes much of the labour out of conducting system and integration tests but, if properly maintained, also provides a full suite of regression tests. If you can tie these in with your source-code management repository, you can integrate them with your build cycle so that code, once checked in, is automatically regression tested and developers are notified of any failures.
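To make the check-in hook idea concrete, here is a minimal sketch of the run-and-notify step. The test command and the notify callback are placeholders for whatever your SCM’s post-commit hooks and messaging channels actually provide; this is not a specific tool’s API.

```python
import subprocess

def regression_check(test_command, notify):
    """Run the regression suite after a check-in.

    Invokes the suite as a subprocess, calls notify() with the captured
    output if it fails, and returns True/False so callers can gate the
    build on the result.
    """
    result = subprocess.run(test_command, capture_output=True, text=True)
    if result.returncode != 0:
        notify("Regression failure:\n" + result.stdout + result.stderr)
        return False
    return True
```

In practice this would be wired into the hook your repository fires on check-in, with `notify` pointing at e-mail or chat rather than a local callback.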
- The above point also leads into adopting a doctrine of test-driven development. By this, we mean that the tests for a unit of code are written directly from the specification before the code itself is ever written. Thus the code never gets anywhere near a deployment branch until it satisfies its unit tests. Furthermore, these unit tests can then be added to your regression test suite as older, obsolete tests are retired, so your test suite always reflects the current specification.
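In practice the flow looks like this: the test is written from the specification first, and the implementation follows only once there is a failing test to satisfy. A toy sketch, with a made-up discount rule standing in for the specification:

```python
# Step 1: written straight from a (made-up) specification --
# "orders over 100 receive a 10% discount" -- before any code exists.
def test_apply_discount():
    assert apply_discount(100.0) == 100.0   # at the threshold: no discount
    assert apply_discount(200.0) == 180.0   # above it: 10% off

# Step 2: the implementation is written only to satisfy the test,
# and does not reach a deployment branch until the test passes.
def apply_discount(total):
    return total * 0.9 if total > 100.0 else total
```

Once merged, `test_apply_discount` joins the regression suite, so the specification it encodes is re-checked on every build.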
Once you have fully integrated and automated the build-compile-test cycle, the final step is to automate the deployment. This step is entirely optional and depends on both your organisation’s appetite for risk and just how rapidly you need the latest code to be available in the production environment. Either way, whether or not you choose to adopt a fully automated deployment regime, you should ensure that your change management processes do not get in the way. So, if you want to fully automate the deployment, then you also need to automate your change scheduling and integrate the change management system with your build and deployment systems. You should also consider making these standard changes, so they aren’t held up by approval steps.
One viable alternative to a fully automated approach I have seen work is to have the change process work as a gateway to deployment. Once the change record has been scheduled and approved, the steps required for deployment are fully automated by the opening of the change deployment window. This allows some measure of control as to which builds will hit your Production systems, and when they will be applied.
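A gate like that can be expressed as a simple predicate in the deployment pipeline: the deploy job runs only when the change record is approved and the current time falls inside its window. The record fields below are illustrative; a real change management system would expose its own schema or API.

```python
from datetime import datetime, timezone

def deployment_allowed(change_record, now=None):
    """Return True only when the change is approved and its window is open.

    change_record is an illustrative dict; a real change management
    system would supply the equivalent fields via its own API.
    """
    now = now or datetime.now(timezone.utc)
    return (
        change_record["status"] == "approved"
        and change_record["window_start"] <= now < change_record["window_end"]
    )
```

The deployment tooling would poll (or be triggered by) this gate, so approving and scheduling the change record is what actually releases the automated steps.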
What about headcount?
Whilst all this automation may sound like a valid reason to review either headcount or the balance of roles within your DevOps teams with a view to reduction, don’t fall into this trap. It takes more resources to manage an automated test suite than you would normally require for manual testing. Testers will also require the authority to direct your development and operations personnel to update their tests where required, which may not fit in with those teams’ own priorities.
This can cause friction. You may also require a slightly different spread of skills to manage the automation tools you adopt, necessitating extra headcount – especially in the early period of adoption. Do not forget however, that as more and more teams adopt a DevOps approach, management of these communal tools can themselves be pushed out into their own teams for management, and costs spread across multiple product teams accordingly.
The observant amongst you will have noticed that I have not actually mentioned any specific tools in this piece. This is deliberate. There is currently a vast array of automation tools on the market – both COTS and OSS varieties – and there is no one-size-fits-all approach. Some tools may be incompatible with your specific application and build regime; others may not fit comfortably within your corporate infrastructure for licensing reasons, and so on. As always, let your product teams be the guiding light on these tools – especially in the early period of adoption. Once you have a suite of workable automation tools in your environment, adopt these as a corporate standard and mandate their use within other teams unless there is a valid business reason for choosing otherwise.
So, in summary:
Automate as much as you possibly can. The more of your application development pipeline you can automate, the greater your coding velocity will be, and ultimately this should start to show a more positive ROI.
That about wraps it up for this time. In the next gripping instalment, we’ll explore the ‘L’ in CALMS – Lean.
I started my career in IT Operations, working in the machine room of a small in-house IT organisation with old VAXen and DEC Alphas. Since then, I’ve worked on busy service desks, developed enterprise applications and spent the larger part of the last 20 years evangelising in the IT Service and Asset Management space.
I like to think that over the years I’ve managed to see everything that the industry can throw at me, but every single day seems to surprise me with yet another interpretation of some industry (or in-house) standard, so it keeps me on my toes.
I’ve always believed that the various frameworks out in the wild are there to serve as guidelines (except for the bits about interoperability standards) so like to approach projects with a pragmatic view as to how the frameworks can bend and adapt (within reasonable bounds) to the ways that an individual client would like them to work within their business.
Outside of work, I enjoy fast cars, fast bikes and international travel, often combining the bikes and travel into off-road adventures in striking parts of the world.