Whether you're new to agile development, completely at home in a DevOps environment, or dipping your toe into CI/CD, sometimes it's worth taking a moment to consider how application development got to where it is today.
In the beginning (the 1980s and 90s), the most widely used development methodology (the method by which software gets ready for release) was the Waterfall model. It represents a strictly sequential structure in which teams moved on to the next development phase only after the previous one had been completed.
And so a movement towards a more "lightweight" approach began to gather strength, culminating in what we now see as the start of agile. In 2001, 17 pioneering software developers met at the now-famous conference in Snowbird, Utah, giving birth to the Agile Manifesto. Agile took hold, introducing the idea that the team should prepare their software for release throughout the development process. Around the time that agile practices really caught on in business, a new kid appeared on the block. Born out of agile, but distinct from it too, DevOps was a new interpretation of what collaborative software development should look like.
While agile brought development and testing together, DevOps argued that other areas of the business needed to be part of the process, and saw the creation of teams involving non-IT staff, such as product managers. This was quite a cultural shift compared to the traditional development lifecycle of that time. DevOps aimed higher still, creating an environment where new releases could be delivered faster and more frequently.
While DevOps (a combination of "Development" and "Operations") promotes a culture and philosophy of team collaboration and work transparency, CI/CD provides the main building blocks of any DevOps architecture.
DevOps, being an art form in its own right, unfolds following the seven key principles of art:
Balance
Movement
Rhythm
Pattern
Contrast
Emphasis
Unity
Using fully automated procedures, DevOps provides faster and cleaner deployment processes during the application's entire lifecycle. It all starts with a simple commit, where the developer wants to test or roll out new changes in the source code. A DevOps engineer chooses a tool from the DevOps tool stack and configures it in such a way that the developer's commit triggers the build and deployment stage of any given application.
By adopting this approach, we eliminate the need for manual intervention in two of the most important parts of an application's lifecycle, building and deployment, leading to faster and cleaner releases.
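As a rough sketch of this commit-triggered flow, a minimal pipeline configuration (shown here in GitHub Actions syntax; the workflow name, branch, and script paths are hypothetical placeholders, not a prescribed setup) might look like:

```yaml
# Hypothetical workflow: every push to the main branch triggers it.
name: build-and-deploy
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4         # fetch the committed source code
      - run: ./scripts/build.sh           # placeholder build command
      - run: ./scripts/run_tests.sh       # placeholder automated tests
      - run: ./scripts/deploy.sh staging  # placeholder deployment step
```

Other CI tools (Jenkins, GitLab CI, and so on) express the same idea in their own syntax: the commit is the trigger, and everything after it is automated.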
CI/CD focuses on the automation of building, testing, and deploying by properly setting up a CI/CD pipeline with the appropriate tools. When we talk about continuous integration & delivery (CI/CD), it's really just another extension of agile that focuses mainly on the tools and processes needed to integrate smaller chunks of code into the core code quickly, using automated testing, and to deliver continuous updates that in turn enable faster application development.
Benefits of CI/CD:
Deliver software with less risk;
Release new features more frequently;
Deliver the product that users need.
Imagine you're building a web application which is going to be deployed on live web servers. You have a team of developers responsible for writing code and pushing it into a shared version control system, where every change is automatically built and tested. This practice of frequently merging and verifying small changes constitutes Continuous Integration.
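The automated checks that gate each integration are often plain unit tests. A minimal sketch in Python (the `add_tax` function and its 10% default rate are invented purely for illustration) could be:

```python
import unittest


def add_tax(price: float, rate: float = 0.10) -> float:
    """Return the price with a flat tax rate applied (illustrative only)."""
    return round(price * (1 + rate), 2)


class AddTaxTest(unittest.TestCase):
    """Tests a CI server would run automatically on every push."""

    def test_default_rate(self):
        self.assertEqual(add_tax(100.0), 110.0)

    def test_custom_rate(self):
        self.assertEqual(add_tax(50.0, rate=0.20), 60.0)
```

A CI server would run these with something like `python -m unittest` after every push, and refuse to merge the change if any test fails.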
Now that we've covered Continuous Integration, let's dip our toes into the meaning behind Continuous Delivery/Deployment (CD). CD came as a natural extension to CI: every integration completed by the development team automatically triggers a deployment. This approach guarantees that exactly the same steps are executed during every run of the deployment, minimizing the need for human intervention and consequently drastically reducing deployment time.
CI & CD are fundamental concepts in today's fast-moving world, where software is developed and improved at a never-before-seen pace; without concepts like agile or CI & CD, this would be nearly impossible to achieve.
The processing time of an application's deployment depends on a handful of factors which can include: size of the application, extent of automatic test executions, hardening & patching, report generation, available hardware resources, etc.
Each of the above-mentioned processes (and more) can represent a step in an application's deployment process and thus extend its execution time. It all depends on how the deployment pipeline is set up and on how many steps it needs to execute.
If we compare the time it takes for a deployment pipeline to execute automatically with how long it would take an IT professional to run the same steps manually, we would undoubtedly conclude that the automatic execution is faster and more reliable. A pipeline can execute the same steps hundreds of times with the same outcome. When we add manual intervention to that process, sooner or later the IT professional will miss some steps, assign wrong values to key variables, or perhaps forget some vital test executions. All of this can add up to long debugging sessions involving different people trying to find a solution, which in turn delays the deployment and requires more resources.
Therefore, CD is a vital part of every application's lifecycle, greatly reducing the time and costs it takes for an application to reach its final state. There is nothing wrong with grabbing a cup of coffee while waiting for the deployment to finish.
Deployment pipelines can greatly vary in their structure, depending on the tools used for the process, on the application demands and the infrastructure. However, there are a few key concepts which can generally be found in any Deployment Pipeline:
Version Control Checkout - This step is responsible for checking out the latest version of the application source code.
Build Phase - This step takes the source code from the previous step as input and is responsible for generating an executable artifact.
Unit Test - This is a vital step in every pipeline, running the automated tests to make sure that any changes made to the application's source code produce the desired output and didn't break existing parts of it.
Deploy - This step is responsible for rolling out the application into the target environment. It can have multiple outcomes based on where the application was deployed.
If the deployment was done into a lower environment (such as test or staging), additional testing procedures can be started to determine that all parts of the application behave as desired.
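The four stages above can be sketched as a tiny pipeline runner. This is a toy illustration under obvious assumptions (the lambda stages are stand-ins for real checkout, build, test, and deploy tools), not a production pipeline:

```python
def run_pipeline(stages):
    """Run (name, action) stages in order, stopping at the first failure.

    Each action is a zero-argument callable returning True on success;
    the function returns the names of the stages that completed.
    """
    completed = []
    for name, action in stages:
        print(f"Running stage: {name}")
        if not action():  # a failing stage aborts the rest of the pipeline
            print(f"Stage failed: {name}; aborting")
            break
        completed.append(name)
    return completed


# Stand-in stages; real ones would invoke git, a build tool, a test
# runner, and a deployment tool respectively.
stages = [
    ("checkout",  lambda: True),  # check out the latest source revision
    ("build",     lambda: True),  # produce an executable artifact
    ("unit-test", lambda: True),  # run the automated test suite
    ("deploy",    lambda: True),  # roll out to the target environment
]
```

Swapping any lambda for a function that shells out to the real tool and returns its success status turns the same loop into a crude but working pipeline, which is essentially what dedicated CI/CD tools do at scale.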
The above-mentioned concepts should be taken with a grain of salt, as they depend on the underlying architecture and the way the pipeline was designed. Some applications might have different dependencies than others, leading to different implementations of deployment pipelines.
In the not-too-distant past, development and operations worked in absolute isolation, interacting only during releases. Developers, knowing the release date, would rush to land new features by then. The operations team, on the other hand, would know in advance whether the current release contained any new features. Once they received the released version of the application, and before deploying it at the customer's site, Ops would do rigorous testing to verify the stability of the release. Only after they were satisfied would the build be deployed at the customer site.
A conflict frequently emerges between these two camps and their seemingly incongruent goals. Whereas development teams are motivated and measured by their high change frequency and scale (deploying features, fixes, and improvements), operations teams are judged by reliability and consistency, qualities often seen as an outcome of low change frequency and scale. This often results in an antagonistic relationship between the two teams, characterized by an ongoing tussle over fixing flaws, a time-intensive process that risks the final product not always fitting the original business need.
The obvious solution to the communication gap between Development and Operations is synchronization between the two teams, which in time came to be known as DevOps. It is the most prominent method by which an organization can establish checks and balances between development and quality. Both Dev and Ops need to embrace the DevOps methodology by changing their outlook as well as their way of working.
DevOps benefits include the following:
fewer silos and increased communications between IT groups;
faster time to market for the software;
rapid improvement based on feedback;
less downtime;
improvement to the entire software delivery pipeline through builds, validations, and deployments.
However, the DevOps methodology also involves a lot of challenges:
organizational and IT departmental changes, including new skills and job roles;
expensive tools and platforms, including training and support for their efficient use.
As with every art form, beauty is in the eye of the beholder: some might prefer one approach, while others have a completely different take on the process and the meaning of DevOps - we think that we are all right in our own way.
For us, DevOps is less of a job title and more of a mindset in which developers and operations work closely together in achieving one common goal - to roll out reliable software using the latest technology trends that are at our disposal, learn, make the world a better place and have fun while we're at it.