CI/CD is not new anymore. The hype has cooled down, and the practice has entered a mature state. Software development companies have incorporated CI/CD processes to fully support their DevOps teams. There is no single way to implement CI/CD, and the same is true for pipelines. In this article we’ll explore best practices for CI/CD pipelines that are worth considering. Since there is a lot of debate about what counts as a “best practice” for nearly every technology, I don’t want to push my opinion. Just pick whatever helps you on your CI/CD and DevOps journey.
Split CI and CD pipelines
Some companies organize so-called “contests” between teams to “build the best pipeline”. Teams might want to implement nearly every feature and component they can think of: multiple types of tests, security scanning, smoke tests, manual approvals, quality gates, and so on. Even teams that do not participate in such contests might build pipelines that are a bit “over the top”. Often, these kinds of pipelines combine the typical steps you would expect in the CI and the CD phase: one pipeline for both phases.
When those pipelines run for a long time, they can get stuck in the middle. If this happens in your organization, it’s hard to figure out what went wrong for a particular build. Other builds pile up, and team members need to keep an eye on those as well. Combined pipelines become slow, cluttered, and painful to maintain, and CI/CD processes come to a standstill.
One way to avoid this is to create small, independent CI and CD pipelines. Let each pipeline do one job quickly and do it well. This makes them easier to manage and to adjust quickly when needed. Admittedly, this breaks the concept of a fully integrated CI/CD pipeline. However, no one said things must be tightly connected. It’s also about ease of use and flexibility, which helps speed up delivery.
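As a sketch, assuming GitHub Actions (any CI/CD tool with cross-pipeline triggers works similarly), the split could look like two separate workflow files; all job and artifact names are illustrative:

```yaml
# .github/workflows/ci.yml — small, fast CI pipeline
name: ci
on: push
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test
      - run: make package
      - uses: actions/upload-artifact@v4
        with:
          name: app-package
          path: dist/

---
# .github/workflows/cd.yml — separate CD pipeline, triggered
# only after the CI workflow has completed
name: cd
on:
  workflow_run:
    workflows: ["ci"]
    types: [completed]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy the published app-package"
```

Because the two pipelines are decoupled, a long-running deployment can never block the next CI build, and each file stays short enough to reason about.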
Popularity of low-code
Low-code is a rather new phenomenon that abstracts away and automates (nearly) every step of the application lifecycle. I won’t call it magic, but code happens ‘almost automatically’.
Within the context of CI/CD, this really speeds things up since you don’t have to script your CI/CD pipelines anymore: less coding and a more intuitive way of working.
Business representatives become much better informed about the possibilities of modern IT, and techies no longer need to laboriously explain technical solutions from an end-user’s perspective. Communication overhead and misalignment between these departments will (hopefully) be reduced since both parties speak the same language.
No big bang changes
Big bang changes are not the best approach to pushing out new features as quickly and reliably as possible. Large features encompass a lot of source code changes, which increases the risk that something goes wrong. And if it does, it becomes extremely difficult to troubleshoot the issue. It also puts pressure on the developer who works on the feature, since they are (probably) the most knowledgeable about it. No one knows more about the context and reasoning behind the feature than that one developer. Other team members have to guess if there is only a small amount of documentation. Remember: working code over extensive documentation, so this is a realistic scenario in a DevOps world.
Break it down
Break down that feature into smaller sub-features and use feature flags to switch those sub-features on and off. This helps to isolate problems and increases the ways you can track and trace the steps in the pipeline itself. Furthermore, it reduces the probability of integration problems since each sub-feature can be rolled out on its own.
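The idea of switching sub-features on and off can be sketched in a few lines. Here the flags live in a plain dict; in practice they would come from a config service or environment variables, and all names are illustrative:

```python
# Minimal feature-flag sketch (illustrative names; real flags would come
# from a config service or environment variables, not a hard-coded dict).
FLAGS = {
    "new_checkout": True,    # sub-feature ready to roll out
    "new_invoicing": False,  # still in development, switched off
}

def is_enabled(flag: str) -> bool:
    # Unknown flags default to off, so half-finished code stays dark.
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    if is_enabled("new_checkout"):
        return f"new checkout for {len(cart)} items"
    return f"legacy checkout for {len(cart)} items"

print(checkout(["book", "pen"]))
```

Flipping a single boolean rolls a sub-feature back without a redeploy, which is exactly what makes problems easy to isolate.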
Governance and ownership
If governance of the application features behind feature flags is an issue, it’s crucial to align with the teams involved so they can take proper ownership. Transferring ownership is also easier, since not all functionality needs to be shifted to the other team in one go. It’s easier to fit into their sprints, so there is no extra delay for them. Customers get their features faster and the right body of knowledge ends up in the right place. In the end, this also helps the tech support teams that cover the application in production: they can become the new owners of a feature, which takes pressure off the DevOps team. Keep in mind, though, that the DevOps team remains responsible for its product.
It is pretty common to work on multiple (sub)features that depend on each other: feature A needs feature B to work correctly and vice versa. Being able to release a sub-feature independently of another feature helps to speed up delivery. There is no need to wait for both of them to be finished.
Use stubs and mocks to act as temporary placeholders. When the actual feature is ready, swap out the placeholder to enable the deployment of the entire set of features. Another advantage is that this removes the deadlock that can occur when multiple independent teams, or multiple members of the same team, are waiting for each other.
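A minimal sketch of this, using Python’s `unittest.mock` with illustrative names: feature A can be tested and shipped while the real feature B, owned by another team, is still in progress:

```python
from unittest.mock import Mock

# Feature B (a discount service, hypothetical) is not finished yet, so a
# Mock acts as the temporary placeholder. When the real service lands,
# only this placeholder is swapped out.
feature_b = Mock()
feature_b.compute_discount.return_value = 0.1  # stubbed response

def feature_a_total(price: float, discount_service) -> float:
    # Feature A only depends on the *interface* of feature B,
    # so it can be developed and released independently.
    return price * (1 - discount_service.compute_discount())

print(feature_a_total(100.0, feature_b))
```

Because feature A is written against an interface rather than the concrete implementation, neither team has to wait for the other.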
Do you remember the article about testing in the cloud? CI/CD pipelines have multiple types of tests, and some of them take quite long to complete. A best practice is to run multiple tests at the same time: when you push many small incremental changes, builds pile up, and running tests in parallel avoids this.
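As a toy illustration of the gain, here three dummy “suites” (sleeps standing in for real test work, suite names made up) run concurrently instead of one after another:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Dummy test suites; the sleep stands in for real unit/integration/smoke runs.
def run_suite(name: str) -> str:
    time.sleep(0.1)  # placeholder for actual test work
    return f"{name}: passed"

suites = ["unit", "integration", "smoke"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # pool.map preserves input order, so results line up with `suites`.
    results = list(pool.map(run_suite, suites))
elapsed = time.perf_counter() - start

print(results)
print(f"wall time: {elapsed:.2f}s")  # roughly one suite's duration, not three
```

In a real pipeline the same effect comes from parallel jobs or test sharding (for example `pytest -n auto` with pytest-xdist), but the principle is identical: total wall time approaches the slowest suite instead of the sum of all suites.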
Avoid a single Virtual Machine or piece of infrastructure that has to run all of your parallel tests. Its resources can be exhausted, and the overloaded component can become unstable or even crash. Valuable time is lost, since you need to set it up again and also re-run all of the pipelines that got terminated. Avoid this at all costs; the negative impact on everyone in the organization is too big. Invest time and money in a proper solution that scales as the load increases.
For this to become reality, your infrastructure layer needs to be set up in a scalable way. If you run your CI/CD tool on Kubernetes, you need Pod auto-scaling or auto-scaling at the Virtual Machine layer. A big benefit of purely cloud-native pipelines is that you don’t need this: AWS CodePipeline as well as Azure DevOps removes this task from your backlog.
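For the Kubernetes case, Pod auto-scaling for self-hosted CI runners could be sketched with a HorizontalPodAutoscaler like the one below; the deployment name, replica counts, and CPU threshold are all illustrative assumptions:

```yaml
# Sketch: scale a pool of CI runner Pods between 2 and 20 replicas,
# based on average CPU utilization (illustrative values throughout).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ci-runner
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ci-runner
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The point is that the runner pool grows with the build load and shrinks again afterwards, so no single machine has to absorb all parallel test runs.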
Application code that is shipped as a “ready to deploy” package needs to be promoted to multiple environments as easily as possible. In the “old days” of CI/CD (which is not so long ago :)), you would recompile your application, push it (again) through the pipeline, and deploy it to a different (higher) environment. Valuable time is lost this way. It can also lead to (small) differences if a developer merges to an “old branch” that was never deleted. And if the pipeline breaks in the middle of its execution, you face a hard time restoring it: a new tag is created and everything has to start from the beginning.
Aim for a “single click approach” to promote your releases. Be sure to make this feature available to other teams since this is a very generic way of working for all. In the end, it pays off once all teams share the same approach.
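The “build once, promote many times” idea behind this can be sketched as follows; the registry is a hypothetical in-memory dict and the artifact names are made up:

```python
# Sketch of "build once, deploy many": one immutable artifact, identified by
# its digest, is promoted through environments by updating a pointer, never
# by rebuilding. A real setup would retag an image in a container registry.
artifact = {"image": "myapp", "digest": "sha256:abc123"}  # produced once by CI

environments = {"dev": None, "staging": None, "prod": None}

def promote(env: str) -> str:
    """Point the target environment at the existing artifact; no recompile."""
    environments[env] = artifact
    return f"promoted {artifact['image']}@{artifact['digest']} to {env}"

print(promote("dev"))
print(promote("staging"))
# Every environment runs the exact same bits:
assert environments["dev"]["digest"] == environments["staging"]["digest"]
```

Since the digest never changes between environments, the “single click” is nothing more than moving this pointer, which is fast, repeatable, and trivially shared across teams.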
Organize pipeline review meetings
Since the CI/CD pipeline is at the heart of every organization, it is crucial that everyone is aligned with the main concepts and importance. If needed, organize specific pipeline review meetings every X weeks with multiple teams. Share best practices to keep everyone on the same page and learn from other teams.
If needed, discuss problems and issues which teams encounter to be able to remove them as quickly as possible. Since the pipeline might encompass multiple processes that were previously owned by multiple teams or even multiple departments, it’s also wise to involve those parties. They might still have their set of responsibilities from the perspective of a specific component in the pipeline.
Suppose you conduct a security scan by a tool that is maintained by team X, they need to know how this component works in practice to be able to improve it or optimize the tool to be able to best support the consumer teams. This also brings the discussion: who owns which component of the CI/CD pipeline, but that is a completely different topic for another article.
One pipeline to rule them all?
A pipeline needs to be reliable across all environments and applications. In almost every company there is a lot of debate about whether or not to use standard pipelines with a fixed set of components and required steps. The main reasons to embrace standard pipelines are:
- Every team uses the same, consistent way to deliver software (packages).
- It’s easier to troubleshoot by another team (if the pipeline itself is not too complicated).
- Less knowledge in the DevOps application teams themselves is required.
- Easier to migrate to another platform.
The main drawbacks are:
- Everyone depends on a central team that builds and maintains the standards for the pipelines.
- It is more difficult to support multiple teams that have different needs.
- Slower adoption of new features or bugfixes, since every change requires a tremendous stakeholder-management effort.
- Granting exceptions to teams becomes a problem. You need to answer questions like: based on what criteria and meta-data do you allow exceptions? This can differ per team and application. Who is responsible for the exceptions?
It’s impossible to give universal advice here; the topic is very context specific and differs per organization, so there is no best practice. Hopefully, this list helps you decide which situation suits you best.
Pipelines are mature now. No one delivers software without a proper pipeline; pipelines are “proven technology”, and we don’t need to discuss that. However, there are modern best practices that help you get the most out of them. There is no right or wrong here: pick whatever helps you. Good luck.