CI/CD is a common practice to bring applications from code to production. Unfortunately, the tools needed to create these pipelines vary wildly in functionality and quality, and you need to integrate dozens of tools into a single toolchain to create a complete CI/CD pipeline.
|This is a post in the Amazic World series, sponsored by GitLab partner CINQ|
Organizations often choose to put a centralized CI/CD team in place that owns, manages and improves the CI/CD tooling, but this often leads to a one-size-fits-all toolchain that caters to the lowest common denominator and doesn’t properly fit any single team. Due to the complexity of the many tools and their integrations with one another, tailoring the toolchain to each team’s specific needs is hard.
So organizations face a dilemma: choose a patchwork of tools that is nearly impossible to manage and breaks often due to fragile integrations, or choose a one-size-fits-all solution that doesn’t actually fit anyone.
Is CI/CD complexity defeating its purpose?
CI/CD’s purpose is to lower time to market (by removing toil, manual work and the human element), increase code quality (and by extension, security), and increase consistency of the workflows DevOps engineers use to bring code to production.
These pipelines operate under a few clear principles: automated workflows for integration, testing and deployment; shifting tests as early in the process as possible, to get fast feedback and reduce cost; and standardized, documented and codified processes that remove the dependency on specific individuals or teams to get code to production consistently.
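As a sketch, these principles map directly onto a pipeline definition. The following minimal GitLab CI configuration is a hypothetical example (the image, commands and branch name are placeholders, not taken from any real project): it codifies integration, testing and deployment in one automated, version-controlled flow.

```yaml
# .gitlab-ci.yml — a minimal, hypothetical example of a codified pipeline
stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: node:18        # placeholder; use your project's toolchain image
  script:
    - npm ci            # reproducible install of pinned dependencies
    - npm run build

unit-tests:
  stage: test           # tests run on every commit, shifting feedback left
  image: node:18
  script:
    - npm test

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh       # placeholder for your deployment step (e.g. applying infrastructure-as-code)
  rules:
    - if: $CI_COMMIT_BRANCH == "main"   # deploy only from the main branch
```

Because the whole workflow lives in version control next to the code, the process is documented by definition and no longer depends on any single individual.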
But given the complexity of the patchwork of tools to make it work, are we really keeping true to these goals, or is the net result of using these tools a negative outcome?
CINQ’s Eric Cornet, a Continuous Integration and Delivery Engineer, has created many pipelines, and has seen the issues with complexity first-hand. In many cases, it took a lot of work to remove this complexity.
By looking at the patchwork tool by tool and critically evaluating each tool’s part in the larger toolchain, Eric can simplify the toolchain to a point where the patchwork becomes usable and manageable again.
His work has helped organizations whose CI/CD toolchains comprised, in some cases, as many as 14 tools to simplify and standardise their pipelines while still covering the same work across source control, static code analysis, security scanning, integration and deployment (with infrastructure-as-code).
CI/CD is evolving
In the early days of CI/CD, the focus was on creating a single, consistent flow to move code to production, automating build and testing, as well as automating deployments.
More recently, Eric’s seen the focus shift towards hands-off infrastructure, where infrastructure is no longer manually accessible, but solely via CI/CD and other automated systems, and the move from static infrastructure to dynamic infrastructures using containers, Kubernetes and cloud-native services for databases and the like.
Another shift in the CI/CD space is the ‘shift-left’ of security, integrating security testing and scanning into the pipelines earlier, so that potential issues are identified sooner. This increases code quality, decreases the attack surface in production and significantly decreases cost of security measures.
Finally, CI/CD tooling is maturing and consolidating. The wild-west days of tooling are over, and standardization and consolidation are significantly reducing patchwork complexity, with single vendors integrating many functionalities into a single CI/CD product.
Maturity is preventing common pitfalls
And with maturity comes improved quality. In the early days, DevOps engineers had to do all the integration themselves, and many tools had weak spots that needed workarounds just to function.
Choosing the right tool was increasingly important in these scenarios, because a wrong decision increased the fragility of integrations, increased the number of issues and outages, and increased the amount of work needed to keep things running. But choosing the right tool was no easy task, as Joep Piscaer discussed earlier in this blogpost about DevOps Tool Sprawl.
The opportunity cost of introducing a new tool is significant. Each new tool requires time to implement successfully and requires new skills, knowledge and experience, plus time to be successfully adopted across many teams. In the long run, replacing a tool may be the wiser choice, but in the short term, the quick fix of keeping the old tool running often wins.
Getting the outside perspective
Striking a balance between short-term quick fixes that are required to keep things running versus the long-term tool replacements and consolidation is incredibly hard, and there is no silver bullet.
Organizations often need a little help from outside to make the right long-term decisions. People like Eric have dedicated their careers to these complex problems, and can help you decide which tools to consolidate on, build up relevant knowledge in your teams, guide the first experiments with new tooling, and support implementation and adoption.
Or, in Eric’s words:
To me, that is one of the strengths of a DevOps consultancy firm like CINQ. Customers expect consultants to introduce new ideas and developments. In the DevOps unit, we all have our own expertise and interests. We regularly chat on Slack about new tools, features or nice articles. We have knowledge sharing sessions where we do workshops and show each other new developments and we attend tech conferences. That is how we keep up to date with the DevOps industry and that is how we can introduce new ideas and developments and add value to our customers.
He does the hard work, so you don’t have to. Because remember, CI/CD is not your core business, but it is CINQ’s core business.
Innersourcing is the way to go
That doesn’t mean you need to outsource the CI/CD pipelines to CINQ, though. Innersourcing, or the process of making each team responsible for their own CI/CD pipelines with help from a centralized CI/CD team (with team members consisting of experts like Eric), makes sure that each team can reuse knowledge and built-up experience quickly, without the need for re-inventing the wheel.
This gives teams the confidence to tailor their pipelines to their specific needs, while re-using standardized building blocks as much as possible. This strikes the right balance between tailoring tooling to each team’s specific requirements, re-using boilerplate code where possible and leveraging expert knowledge. This increases the quality of the pipelines, without adding complexity or creating impossible-to-manage snowflake configurations.
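One concrete way to re-use standardized building blocks is GitLab’s `include` keyword, which lets each team pull in pipeline templates maintained by a central CI/CD team and extend them locally. The sketch below is a hypothetical example: the project path, tag and file names are placeholders, not an actual shared-template repository.

```yaml
# A team's .gitlab-ci.yml re-using centrally maintained templates (paths are hypothetical)
include:
  - project: platform/ci-templates        # repo owned by the central CI/CD team
    ref: v2.1.0                           # pin a released version for reproducibility
    file: /templates/build-and-test.yml   # shared build and test job definitions

# Local, team-specific customization layered on top of the shared template
integration-tests:
  stage: test
  script:
    - ./run-integration-tests.sh          # placeholder for this team's own tests
```

Pinning the shared templates to a version keeps teams in control of when they adopt changes, while the central team keeps improving the building blocks for everyone.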
Simplify and consolidate
In Eric’s direct experience, the single-most important requirement to be able to innersource and make each team responsible for their own pipelines (helped out by a centralized CI/CD team) is simplifying and consolidating the tooling patchwork.
In the upcoming webinar ‘The benefits of CI/CD’, Eric will walk us through why knowledge, standardisation and manpower are key to a successful central CI/CD team, using ABN AMRO as a practical example. And if you do not have such resources available, you can choose a tool like GitLab to do much of this work for you.
GitLab offers standard pipelines with the most common set of pipeline features out of the box, like code quality checks, container security vulnerability checking, dependency scanning, open source license scanning, static and dynamic application security testing, creating performance testing environments and much more.
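Many of these features can be switched on by including GitLab’s maintained pipeline templates. The template names below reflect GitLab’s documented templates at the time of writing; verify them against your GitLab version before relying on them.

```yaml
# Enabling GitLab's built-in scanning jobs by including its maintained templates
include:
  - template: Jobs/Code-Quality.gitlab-ci.yml            # code quality checks
  - template: Security/SAST.gitlab-ci.yml                # static application security testing
  - template: Security/Dependency-Scanning.gitlab-ci.yml # known-vulnerable dependencies
  - template: Security/Container-Scanning.gitlab-ci.yml  # container image vulnerabilities
```

Each included template adds its jobs to the pipeline with sensible defaults, which teams can then override per project.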
In Eric’s experience, these standard pipelines offer a good starting point for DevOps teams to implement their CI/CD, with enough customizability to tailor pipelines to each team for a successful adoption across the organization.
If you want to learn from Eric’s experience and mistakes (so you don’t have to make them), please register for the ‘The benefits of CI/CD’ webinar, which will take place on November 26th.