Quick Read For:
Where Are You In The Chain?
It has long been a dream of both lean manufacturing and agile software development to shorten the feedback cycle to the point that when a customer dreams of a feature, they can soon have it in their hands. In software, that dream has been hindered, at least in part, by the time it takes to deploy complex software applications. Docker shortens this deployment time to milliseconds, and sometimes eliminates it altogether.
Docker is an open source tool that allows developers to program their infrastructure. When it later comes to deploying the application, Docker solves the it-was-working-on-my-machine problem. This is because, as well as packaging the application, the developer also packages the machine on which it will run. This packaging is done in a simple text file, the Dockerfile, which is stored in version control.
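As a sketch of what that text file might look like, here is a minimal Dockerfile for a hypothetical Java web application (the base image, artefact name, and port are illustrative assumptions, not taken from a real project):

```dockerfile
# Hypothetical Dockerfile: packages the application AND the machine it runs on.
FROM eclipse-temurin:17-jre   # assumed base image providing a Java 17 runtime
WORKDIR /app
COPY target/app.jar app.jar   # assumed build artefact produced by the build tool
EXPOSE 8080                   # assumed port the application listens on
CMD ["java", "-jar", "app.jar"]
```

Checked into version control alongside the source code, a file like this becomes the shared, executable description of "my machine", so the same environment can be rebuilt anywhere.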
There are other benefits, though. Using Docker in conjunction with Ansible allows a Walking Skeleton not only to be defined but also to be repeatedly set up and torn down. This has implications for test and deploy time, and therefore Docker helps reduce the feedback cycle, which of course has implications for a much more effective software delivery cycle.
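A minimal sketch of that setup-and-teardown idea, using Ansible's `community.docker.docker_container` module (the host, container name, and image here are assumptions for illustration):

```yaml
# Hypothetical playbook: stand the walking skeleton up, or tear it down
# again by changing state to "absent".
- hosts: localhost
  tasks:
    - name: Run the walking-skeleton container
      community.docker.docker_container:
        name: skeleton-web        # assumed container name
        image: nginx:alpine       # assumed placeholder image for the skeleton
        state: started
        published_ports:
          - "8080:80"
```

Because the playbook is idempotent, it can run on every build, so the skeleton is continuously recreated rather than maintained by hand.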
With Docker, you no longer have to fight with your teammates in operations about what worked on your machine and what didn’t; instead you coordinate using the Dockerfile, and then use any time you save to do much more interesting things.
Docker allows testers to create clean environments in a matter of milliseconds. Analyse the typical day of a tester and you’ll see that most of the day is spent setting up tests and tearing them down. Great effort goes into creating scripts that will make sure the system is in a state of readiness. Because this is expensive, and error prone, testers often re-use systems and thus create dependencies across tests. This means that when one test fails, all the subsequent tests may fail too.
As a tester, you no longer need to spend days working on scripts to ‘manipulate the environment’. You simply create the environment, save it as a Dockerfile, and rebuild the environment (in milliseconds) every time you need it.
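The tester's loop might look like the following sketch, assuming a test database as the environment (the image name, seed file, and container name are all illustrative assumptions):

```shell
# Sketch of a tester's clean-environment loop.
# 1. Capture the environment as a Dockerfile and keep it in version control:
cat > Dockerfile <<'EOF'
FROM postgres:15
COPY seed.sql /docker-entrypoint-initdb.d/
EOF

# 2. Rebuild a fresh environment for every test run (requires Docker;
#    shown as comments here):
#      docker build -t test-db .
#      docker run -d --name test-db test-db
#      ...run the tests against a pristine database...
#      docker rm -f test-db
echo "Dockerfile written"
```

Because every run starts from the same Dockerfile, no test can leave state behind that breaks the next one.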
The DevOps movement has tried to pull down the walls that exist between development and operations teams. Container solutions, such as Docker, enhance communication between the teams by providing a clear interface. This means that operations teams don’t have to focus on what they are deploying, only on the fact that they are deploying something.
In the old days of cargo shipping, those who worked on the docks and at the freight yards had to know about the contents of their shipments. They knew, for example, that spices couldn’t be shipped with coffee. In much the same way, operations teams today know that you can’t use certain Java libraries with a certain Java runtime; our inability to package work up means that operations people have to know what it is that they are deploying.
The general problem all leads face is that there are an infinite number of problems but only a finite amount of time to solve them. All leads know, too, that one of the best ways to save time is to optimise the feedback cycle.
In the last fifteen years we’ve seen time-saving tools, like Ant and Maven, combined with other solutions, such as the continuous integration tool Jenkins. Each tool was combined and re-combined with one thing in mind: to save time by shortening the feedback cycle. Most recently, orchestration tools like Ansible and configuration tools like Puppet have helped shorten the cycle even further. Docker is the next (big) step in the evolution of software configuration management.
If you are a lead, and you want to increase motivation by decreasing frustration, reduce time to market, and shorten the feedback loop, then Docker embedded within a well-designed Continuous Delivery pipeline is the solution you’ve been looking for.