The last few years have seen a great change in people's mentality about software and the way they use it. This change has been brought about by the omnipresence of smartphones, the advent of cloud computing and, lately, by the Internet of Things (IoT). Combine this with the ever-increasing competition among software companies in a relatively slow-growing market, and you get to the point where we are now. Nowadays, people expect ever-improving software on their devices, with the newest features in the shortest time possible. There is an upside to all of this: people are willing to cut you some slack when it comes to small glitches and problems in your software, knowing that you will fix them in the next release. So, now comes the point we want to discuss: how do you live up to these expectations? In our opinion, the answer is implementing Continuous Delivery for your projects.
So what is Continuous Delivery? Depending on whom you ask, you might get different answers, but for us Continuous Delivery is the ability to go from a single commit in your software repository to a new software release deployed to your customer automatically, at the press of a "button". Testing and validation remain as important as before, maybe even more so, but their scope and execution change. By integrating them into the Continuous Delivery process, manual testing can focus more on the user experience, on exploratory tests and on how easy it is to find and use the new features added to the final product.
So how do you get there? There are many ways, and recipes may vary based on the flavor of your project, but there are quite a few elements that remain constant throughout the industry.
First, you need to implement Continuous Integration for your project. You can say you have achieved that when every single commit to your source repository triggers a build; a successful build triggers the execution of your tests, and their success triggers the packaging of your project deliverables. In parallel with the build step, more often than not, a review process is integrated. The review process allows other developers to check what has changed in the code and to spot potential errors.
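To make this concrete, here is a minimal sketch of such a chain in Jenkins' scripted pipeline Groovy DSL (the Gradle commands are placeholders for whatever build tooling your project actually uses):

// Minimal Continuous Integration chain: build, test, package.
// The Gradle tasks below are hypothetical placeholders.
node {
    stage('Build') {
        checkout scm                 // the commit that triggered this run
        sh './gradlew assemble'
    }
    stage('Test') {
        sh './gradlew test'          // a failed build never reaches this stage
    }
    stage('Package') {
        sh './gradlew distZip'       // package the deliverables
        archiveArtifacts artifacts: 'build/distributions/*.zip'
    }
}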
The next step is the automatic rollout of this new software onto the test environment, and the execution of your system and user acceptance tests. Ideally your tests would be automated, but is it still called Continuous Delivery if your tests are manual? Well, yes, as long as, once those tests pass, the process moves on automatically to releasing the software into the production environment.
The setup for such an environment is called a deployment pipeline.
Fig. 1 Typical workflow in a continuous delivery environment
In order to enable Continuous Delivery, we first need to set up a deployment pipeline. A deployment pipeline is the automated representation of the processes involved: Development, Testing and Release. The deployment pipeline can be visualized as the implementation of the project's value stream map.
The deployment pipeline is composed of stages, each stage accomplishing a particular activity towards the release and delivery of the software.
The stages that are shared by almost all software projects, sketched in the skeleton after this list, are:
Commit stage
Integration stage
Acceptance stage
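A bare skeleton of these shared stages in Jenkins' scripted pipeline DSL might look as follows; each placeholder body stands in for the project-specific steps described in the sections below:

// Skeleton of the three shared deployment pipeline stages.
// The echo steps are placeholders for the real work.
node {
    stage('Commit')      { echo 'compile, static analysis, unit tests, review' }
    stage('Integration') { echo 'assemble modules, run integration tests' }
    stage('Acceptance')  { echo 'deploy to a test environment, run acceptance tests' }
}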
The stages should always be tailored to meet the project's needs. That means that some stages might be omitted, and others might be optional. For example, it makes sense to make user acceptance tests optional when their effort, in terms of magnitude, is greater than that of all the other stages; this allows certain critical bug fixes to be deployed before the whole user acceptance cycle has run.
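One way to sketch such an optional stage is with a build parameter; SKIP_UAT below is a hypothetical name, and the test runner script is a placeholder:

// Making the user acceptance stage skippable for critical bug fixes.
properties([
    parameters([
        booleanParam(name: 'SKIP_UAT', defaultValue: false,
                     description: 'Skip user acceptance tests for critical fixes')
    ])
])
node {
    stage('User acceptance tests') {
        if (params.SKIP_UAT) {
            echo 'Skipping user acceptance tests for this release'
        } else {
            sh './run-uat.sh'    // placeholder for the real UAT runner
        }
    }
}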
The first stage in the deployment pipeline is the commit stage. The cycle is started by a developer submitting a change to the software repository, which triggers a build on our CI server that executes a series of checks to assess the new commit. These checks usually include compilation, running a static analysis tool against the source code and executing the unit tests. In parallel with these checks, every commit is reviewed by peer developers. The review process is integrated into the commit stage, and the process cannot proceed until all checks have passed. The resulting artifacts are saved to the artifact repository, to be used in the next stages.
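A sketch of this commit stage with the checks running as parallel branches (the Gradle tasks are again placeholders, and real setups often give each branch its own node and workspace):

// Commit stage: parallel checks, then archiving of the artifacts.
node {
    stage('Commit stage') {
        checkout scm    // fetch the change that triggered the build
        parallel(
            'compile and unit tests': { sh './gradlew test' },
            'static analysis':        { sh './gradlew checkstyleMain' }
        )
        // keep the binaries so later stages reuse them instead of rebuilding
        archiveArtifacts artifacts: 'build/libs/*.jar'
    }
}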
The next stage is the integration of the module generated above into the application. It is important not to rebuild the module, but to use the artifact that was already generated. At this step, we run the integration tests. If all checks pass, the potentially deliverable product is generated here and stored in the artifact repository. It is also important that, at this step, all the latest commits are collected and stored in the release notes. A healthy commit-message policy will allow you to create a very well-explained list of features, bug fixes and other improvements brought to your software product.
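A sketch of such an integration stage, assuming the Copy Artifact plugin is installed and that the commit stage ran as a hypothetical job named 'commit-stage':

// Integration stage: reuse the commit-stage artifact, test, and
// draft release notes from the commit log.
node {
    checkout scm    // needed for the git history used below
    stage('Integration') {
        // fetch the already-built module instead of rebuilding it
        copyArtifacts projectName: 'commit-stage', selector: lastSuccessful()
        sh './run-integration-tests.sh'    // placeholder test runner
        // with disciplined commit messages, the log since the last release
        // tag is already a readable release-notes draft
        sh 'git log --oneline "$(git describe --tags --abbrev=0)"..HEAD > RELEASE_NOTES.txt'
        archiveArtifacts artifacts: 'RELEASE_NOTES.txt'
    }
}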
The next stage is the acceptance stage. This stage is, in our opinion, crucial for a successful software project. At this stage of the deployment pipeline, the product is tested for the first time in the form in which it will be delivered to the customer. The suite of automated acceptance tests needs to be good enough to give developers the confidence to deploy minor software releases directly into production. In parallel with the automated user acceptance tests, automated performance and load tests run on dedicated test environments. Manual user acceptance tests are still performed by the QA team on separate test environments, with the added benefit of the knowledge about software quality already gained from the automated tests.
The second major role of the acceptance stage is to test the deployment of your software to a production-like environment. Deployment is automated: the deployment scripts fetch the latest software version from the artifact repository and deploy it to the test environment, and this deployment is triggered for every build that passes the integration stage. Here, the automated system and acceptance tests are executed, and, based on the results, a decision can be made whether to continue with manual tests, deploy to pre-production and/or production, or drop the release. The decision to deploy to pre-production or production is taken by a person in your organization, but the deployment itself is automated. The same deployment scripts used for the test environment are also used for the pre-production and production environments, with different configurations. This ensures that the likelihood of failure during deployment stays low.
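The essence of this stage, one deployment script reused with different configurations, can be sketched as follows (the script and configuration file names are hypothetical):

// Acceptance stage: deploy with the shared script, then run the
// automated test suites in parallel.
node {
    stage('Acceptance') {
        // the same deploy.sh is later reused for pre-production and
        // production, only the configuration file changes
        sh './deploy.sh --config environments/test.yaml'
        parallel(
            'system tests':      { sh './run-system-tests.sh' },
            'acceptance tests':  { sh './run-acceptance-tests.sh' },
            'performance tests': { sh './run-performance-tests.sh' }
        )
    }
}

Because the very same script is exercised on every build, deploying to production becomes a well-rehearsed, low-risk operation rather than a one-off event.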
So which tools should you use? We would start with the fact that tools evolve too, and tying your deployment pipeline too intimately to specific tools might lead to lock-in that denies you the opportunity to evolve fast enough. Still, there are a few candidates which, due to their popularity, tend to stand out as good choices for implementing a deployment pipeline. Starting with Jenkins as your Continuous Integration server, the first step towards implementing the deployment pipeline is using the plugins provided for Continuous Delivery.
Jenkins Pipeline already has the concept of stages, which helps define the infrastructure through code. Jenkins uses a Groovy DSL to define and implement the deployment pipeline.
Fig. 2 Jenkins pipeline stage definition
The details of each stage need to be programmed in, and these depend on the configuration of your project. For example, in the commit stage you might want to integrate Gerrit via the Gerrit Trigger plugin (e.g. setGerritReview unsuccessfulMessage: 'Build failed'). The acceptance stage will contain multiple parallel nodes for the different kinds of testing performed. Between the acceptance and deployment stages there will usually be an intermediate stage that requires user input to trigger the deployment to the production environment.
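Such a gate can be sketched with Jenkins' built-in input step; 'release-managers' is a hypothetical approver group, and deploy.sh is the same placeholder script used above:

// Manual gate between acceptance and production deployment.
stage('Promote to production') {
    // pause outside any node so no executor is held while waiting
    input message: 'Deploy this build to production?', submitter: 'release-managers'
    node {
        sh './deploy.sh --config environments/production.yaml'
    }
}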
The versatility offered by Jenkins through its vast number of plug-ins allows us to easily integrate virtualization, test frameworks, automated software distribution tools and so on into our deployment pipeline.
Wow, all that seems like a lot of work. It involves coordination across disciplines, and there will definitely be a learning curve. Tools will only get you so far; the mentality and the way of implementing things in your company are the only things capable of taking you over the finish line.
How can you make it easier? In our opinion, the easiest way is to have dedicated people taking care of the deployment pipeline for you. Your deployment pipeline is living software that evolves over time and changes the way it accomplishes things. For example, when I first designed a deployment pipeline, during a team-building event, Jenkins did not yet support a build pipeline out of the box. With the advent of cloud-based solutions, you might choose to externalize your deployment pipeline completely, but the pros and cons of such a decision are beyond the scope of this article.
In conclusion, the world we are living in today is accelerating. The need to change your software product according to market trends and customer needs, in a reasonable amount of time, forces anyone who wants to stay relevant on the market to change the way they implement, test and deliver software. In that context, Continuous Delivery becomes a necessary process in your software project.
by Ovidiu Mățan
by Mihai Varga