When you are starting out on your DevOps journey, you will hear about building a pipeline. But what is a pipeline, and why is it important? First, a little bit of context.
Pipeline. Noun: a linear sequence of specialized modules used for pipelining. Verb: design or execute (a computer or instruction) using the technique of pipelining.
It appears that every development house and IT department purports to be doing or being DevOps. It has become the default answer to both software development processes and infrastructure deployment.
If you are not doing Kubernetes and Serverless yet you are so far behind the curve you might as well just shut down your company and spend the time on your garden. Or so the cool kids say. All the best companies are doing several hundred deployments a day, tearing down containers and serverless processes with abandon, right? This is not reality.
The reality is that the large majority of environments are still only starting their journey. Much of the noise is hype, nothing more than a sales playground for software vendors.
Cloud is not proving to have the same potential for cost-saving that, say, virtualization had. At least not without major software re-engineering. So how can companies drive value from Cloud migrations?
This is where the concept of the Pipeline enters stage left. My colleague Joep Piscaer discusses the why of the Pipeline in his post; this post discusses how to align business processes to the concept of the pipeline.
A traditional pipeline is a linear process model. It goes from inception to completion.
It is a linear process, and one defined for the delineated hierarchy of the siloed IT department. It is commonly termed waterfall because, once the development cycle reached the end of testing, the code fell off a cliff: the developers washed their hands of the process, throwing it over the edge into the hands of the operations team to deploy. This was often the first time the operations and security teams had even seen the program, which led to very long delivery times, often fraught with pain and weekend-long deployments.
In the world of DevOps, the pipeline evolved into what we know and love as the CI/CD paradigm, or infinity loop. In this model, feedback is gathered at every stage of the cycle, enhancing the processes, improving the life-cycle of the product from inception to continuous delivery, and driving continuous improvement. This leads to a reduced delivery timeline. A well-running DevOps pipeline can introduce code into production, and roll it back out again, seamlessly.
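The loop can be sketched loosely in code. A minimal illustration, not tied to any specific CI/CD tool; the stage names, the Feedback record, and the toy checks are all invented for this sketch:

```python
# Minimal sketch of the CI/CD feedback loop: each stage either passes
# or fails, and every stage contributes feedback that informs the next
# iteration of the cycle. All names here are illustrative.

from dataclasses import dataclass, field


@dataclass
class Feedback:
    stage: str
    messages: list = field(default_factory=list)


def run_cycle(stages, change):
    """Run one pass of the loop; collect feedback from every stage."""
    feedback = []
    for name, stage in stages:
        ok, notes = stage(change)
        feedback.append(Feedback(name, notes))
        if not ok:                    # a failing stage stops the cycle early
            return False, feedback
    return True, feedback


# Toy stages: a real pipeline would call build/test/deploy tooling here.
stages = [
    ("plan",    lambda c: (True, ["scope agreed"])),
    ("build",   lambda c: (True, [])),
    ("test",    lambda c: ("bug" not in c, ["unit suite run"])),
    ("deploy",  lambda c: (True, ["rolled out"])),
    ("monitor", lambda c: (True, ["error rate nominal"])),
]

ok, notes = run_cycle(stages, change="feature-x")
```

The point of the sketch is the shape, not the checks: feedback accumulates on every pass, whether the change ships or is stopped early.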
But how do you implement this?
It has often been said that DevOps is not a tool but a state of mind. At its crux, DevOps is a melding of the development process and the operational mindset: a fusing of a creative mindset with a conservative one.
The creation of the pipeline needs to take these often-conflicting mindsets and merge them into something new. This is what this particular article is about.
Conflicts and consensus
For a developer, life is change: new products, new features, and new releases are their reason to be. Change scares operational staff, who are charged with the upkeep and stability of the corporate platform; change is what breaks their operation.
They are not averse to change per se, but they like it to be evolutionary, not revolutionary: planned and organized. This is the reason for Change Advisory Boards (CABs) in large enterprises: to allow oversight of what is changing, to confirm that all checks and balances have been investigated and verified, that all risks are noted and mitigated, and that a recovery plan is available. For a CI/CD-focused environment, the CAB is anathema: it is the perfect blocker to seamless deployments.
This is where the pipeline comes in: once implemented correctly, it gives the operational side of the equation the resilience and confidence that the CAB provides, without the delays inherent in a change board.
Trust is Key
Traditional IT departments are siloed, and silos are tribal. The network team always blames applications, applications always blames virtualization/cloud, and everybody blames the security team. It has long been accepted that project and support teams should be cross-functional, but the fact is the corporate world likes its silos, and management enjoys its ivory towers; they make managers appear important.
For pipelines, and by extension DevOps, to work correctly, teams need to be truly cross-functional, with all members having input at all stages of the life-cycle of the project pipeline.
One thing that must be said is that it is difficult to correctly organize and implement cross-functional teams; there will be pushback from managers, as focus moves from direction of the team to direction from the project. Another point, and this is important: these are not just traditional projects. They should also cover day-one and day-two operational support functions and business-as-usual tasks, and all teams should be cross-functional. This can cause a problem for teams without a large staff count, as their people will be spread across multiple teams, creating potential blockers. Also, every IT department has a person who just knows how to fix an issue. I call this person Brent, after a character in the seminal book “The Phoenix Project”. Brents need to be protected and moved out of the direct workflow; Brents are potential blockers.
The cross-functional team is the building block of a DevOps culture. It is about collaboration, communication, and building trust more than about workflow and coding. By breaking down the traditional team structure you remove a communication blocker: what I call box communication paths. Teams are empowered to make decisions about all tiers, not just their traditional silo focus. In traditional teams, a decision has to travel up the hill and back down the other side, lengthening the decision path and loosening cohesion. This box communication is inherently slow, often reactive, and riven with the potential for blame apportioning.
While this information may be considered tangential to the main focus of this article, it is pertinent, because to build a pipeline you need the cross-functional nature of the DevOps team. Managing the movement of a piece of code from inception to deployment, and absorbing the feedback to enhance the next iteration, requires the flexibility that such a team brings. The agile nature of the decision-making process builds trust between the parties and enhances the collaborative nature needed to build a pipeline.
But wait, isn’t a pipeline just a workflow?
Yes, a pipeline is just a workflow, but it is a workflow that moves through various teams. The developers write the code; the security team makes sure the code is secure and the deployment is hardened; the virtualization/cloud team makes sure the correct VMs, containers, or serverless objects are available; storage teams make sure the correct volumes, file shares, or S3 buckets are available; and the DBAs deploy the various databases. There are many moving parts that could get missed if each is worked on in isolation.
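That multi-team hand-off can be modeled as a set of gates, one per team, which a change must pass before deployment. A hypothetical sketch, where the team names mirror the text but the checks and the change dictionary are invented for illustration:

```python
# Sketch: the pipeline as a workflow where each team contributes a gate.
# A change only proceeds when every team's prerequisite is satisfied,
# so nothing is missed by working in isolation. Checks are stand-ins.

def security_check(change):
    return change.get("hardened", False)        # code secure, deploy hardened

def infra_check(change):
    return change.get("vms_ready", False)       # VMs/containers available

def storage_check(change):
    return change.get("volumes_ready", False)   # volumes/shares/buckets ready

def dba_check(change):
    return change.get("db_deployed", False)     # databases deployed

GATES = [
    ("security",       security_check),
    ("virtualization", infra_check),
    ("storage",        storage_check),
    ("dba",            dba_check),
]

def ready_to_deploy(change):
    """Return (ok, missing): which teams' prerequisites are not yet met."""
    missing = [team for team, check in GATES if not check(change)]
    return (not missing, missing)
```

Run against a change that skipped the storage team, `ready_to_deploy` names the gap instead of letting it surface as a failed weekend deployment.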
So how do we stop this from happening with our pipeline?
A common language and lexicon are important in communication and trust-building; so is a solid process. Traditionally this has been the Change Advisory Board (CAB), typically the only cross-functional procedural entity in many businesses and usually the first time the whole business would get to test the assumptions of a change or deployment. In a DevOps environment, the pipeline is the honored replacement. However, moving from a CAB to a fully automated pipeline is a long journey, and automation is the key: removing human interaction from as many processes as possible allows the development of repeatable processes.
Before those pipelines are written, the manual processes need to be documented and fully understood. This stage is often overlooked. Testing plans only cover a small subsection of what is required; little to no regression testing is done on deployments, as it is not understood by the developers and testers. Testers do not fully test complete systems, as they do not have access to, and are unlikely to understand, all the systems and processes that a change will touch. This is where the cross-functional team comes in: its members are each experts in their own particular fields.
The first few pipelines will very likely still be manual processes. As the team becomes more familiar with the concept of the pipeline, sections will be automated, and the need for those sections to be managed by the CAB will disappear. As the pipeline matures, the CAB becomes obsolete, as trust in the processes to capture issues before they reach production is proven.
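One way to picture this gradual hand-over, purely as a sketch (the Stage class and the approve callback are invented here): every stage starts as a manual, CAB-style gate, and stages are flipped to automated as trust in them grows.

```python
# Sketch of a maturing pipeline: stages begin as manual, CAB-style
# gates and are flipped to automated once trust in them is proven.
# The Stage class and `approve` callback are illustrative only.

class Stage:
    def __init__(self, name, automated=False):
        self.name = name
        self.automated = automated

    def run(self, approve):
        # An automated stage runs its checks without human intervention;
        # a manual stage still waits on a person's sign-off.
        return True if self.automated else approve(self.name)


def run_pipeline(stages, approve):
    """Execute stages in order; stop at the first failure."""
    return all(stage.run(approve) for stage in stages)


# Build and test have earned automation; deploy still needs sign-off.
pipeline = [
    Stage("build", automated=True),
    Stage("test", automated=True),
    Stage("deploy", automated=False),
]
```

As confidence grows, flipping `automated=True` on the deploy stage removes the last human gate; the CAB's oversight has been absorbed into the pipeline itself.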
DevOps is not a product or a set of products; it is a mindset, driven by a melding of development and operations through processes and procedures. Yes, tools are used to drive work through the pipeline: Kanban for monitoring work in progress, version-controlled code repositories to hold and control the development streams, and a CI/CD tool to manage the automated movement of a deployment through its lifecycle.
However, at its crux, without a proper and in-depth understanding of the full Development-QA-UAT-Deploy-Feedback lifecycle across all areas and functions, any DevOps initiative will fail. At the pinnacle of this is trust: trust in your colleagues to do their part, trust in your processes to capture and drive the desired result, and finally, and least importantly, trust in your tools to help you in that task. The problem is that the vast majority of DevOps initiatives drive the wrong way, thinking that the tools are, in and of themselves, DevOps.
Unfortunately, organizations often swing and miss at their DevOps implementations by failing to create a safe, trusted environment of collaboration across teams. A DevOps pipeline is only a small, but often crucial, element in building the lexicon, trust, and shared understanding needed for the different blood groups to work together on a shared outcome.