Big enterprises that employ a large number of Agile teams to produce software applications face numerous challenges to keep their business healthy. Not only do the technical aspects of different applications play a role here, but also many organizational aspects. Think of attracting new talent, rolling out training sessions, dealing with security issues, etc. Another challenge that pops up now and then has to do with centralized versus federated control for various initiatives. The global DevOps movement pushes team autonomy, since one of its core values is to strive for speed and agility. Teams need to be flexible when (external) factors demand it. However, sometimes this conflicts with the overall strategy and governance of the enterprise. Today we will explore key topics that help you decide between centralized and federated control.
First of all, let’s find out what the main differences between centralized control and federated control are. Seen from the perspective of the tools and platforms needed to support the entire IT transformation from Agile/Scrum towards DevSecOps, centralized control means that a dedicated IT team decides about them. By contrast, with the federated approach the developer teams enjoy much more freedom of choice, choosing their own solutions and direction for their projects.
Both types of control offer benefits and drawbacks. Important considerations are summarized below.
- Project management is more efficient using the centralized approach, since there is a central way to track progress.
- A centralized approach mostly introduces fewer risks compared to federated control.
- Using the centralized method, you need to employ a dedicated group of professionals with the right level of mandate and the trust of the other departments. Otherwise the initiative stalls. At the same time, the other departments might feel they are losing control.
- When applying the federated approach, individual teams can go faster since they do not need to wait for the centralized team. It can also improve collaboration with other teams, since they might need those teams during their day-to-day work.
- The decision-making process moves closer to where the expertise lies; this is especially true if there are major differences between the developer teams.
- From an organizational point of view, it’s much more difficult to measure progress and track changes if all teams have the freedom to do things their own way. On top of that, it’s also difficult to communicate a change management strategy to all the teams, since it has a different impact on each of them.
This list is by no means complete, but it acts as a frame of reference to explore the following topics in more detail.
Pipelines are perhaps one of the hottest topics of the last couple of years. In many companies there is a “battle” between centralized, and thus standardized, pipelines and custom pipelines (per team / application).
Centralized pipelines offer a great opportunity for developer teams that share the same tech stack, application structure and functional requirements. In essence, they can literally copy and paste a simple pipeline configuration file, such as a Jenkinsfile, an azure-pipelines.yml file or a .gitlab-ci.yml file, and adjust it to fit their situation. Since the standard pipeline consists of multiple “building blocks” (think of a lint step, a SAST analysis or an artifact upload step), developers can just fill in the variables to pass to the pipeline templates.
Rather inexperienced teams benefit greatly from this approach, since they don’t need to understand the ins and outs of these (complex) pipelines. You do require a dedicated CI/CD pipeline team to build the standardized pipeline. Given the number of stakeholders (every application development team requires it), it can be a painful process to support all of the applications. It’s vital to choose which tech stack you start with, since you can’t do it all at the same time. Troubleshooting and supporting these standard pipelines is rather easy, since error messages are also generic and the CI/CD pipeline team knows everything under the hood.
A major drawback is the difficulty of deviating from the standard pipeline. What if a specific team requires a different tool for their SAST process? That means you need to create a switch to turn the default SAST tool off. Or what if a large application needs to be refactored? If this is the case for a number of steps, your pipeline source code will be very hard (if not impossible) to maintain.
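To make the trade-off concrete, here is a minimal sketch of a standardized pipeline assembled from reusable building blocks, including the kind of per-team switch described above. The step names, variable names and the opt-out mechanism are illustrative assumptions, not tied to any specific CI/CD product; it only shows how exceptions accumulate in a central template.

```python
# A hypothetical standardized pipeline built from "building blocks".
# Teams supply their own variables; the SAST step can be swapped or
# disabled, which is exactly the kind of escape hatch that makes the
# central template harder to maintain over time.

DEFAULT_STEPS = ["lint", "build", "sast", "test", "publish"]

def assemble_pipeline(team_vars):
    """Return the ordered list of steps for one team's pipeline."""
    steps = []
    for step in DEFAULT_STEPS:
        if step == "sast":
            if team_vars.get("sast_enabled", True) is False:
                # Team opted out of the default SAST tool entirely.
                continue
            tool = team_vars.get("sast_tool", "default-sast")
            steps.append(f"sast:{tool}")
        else:
            steps.append(step)
    return steps
```

For example, a team that brings its own scanner would call `assemble_pipeline({"sast_tool": "team-scanner"})`; every such switch is one more branch the central CI/CD team has to support.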
Naming conventions on an enterprise level are a great way to gain control of and visibility into nearly everything that you want to share or use in your organization. Think of the following aspects that require a proper naming convention: the names of application (components), teams, projects, roles and permissions. And think of (cloud) infrastructure-related aspects such as resource groups, artifact repositories, Git repositories, etc.
Consider the following benefits of centralized and standardized naming conventions:
- Tracking and tracing of items that require reporting and auditing on behalf of internal and external stakeholders.
- Sorting and counting of these aspects since there might be duplicates from both a functional as well as a technical viewpoint.
- Aggregation: without standardized names it’s more difficult to group multiple entities and tie them to a single unit such as a developer team, application or environment. The same is true if you want to aggregate different applications of a single tech stack.
- Easier to troubleshoot: application names are consistent in the entire Software Development Life Cycle: everyone in the organization shares the same understanding and context.
- Finding an item is much easier when using standardized names and titles since you can follow a pattern (like a regular expression) to construct your search phrase.
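The last point can be sketched in a few lines. This is a minimal example, assuming a hypothetical convention of the form `<env>-<team>-<app>-<resource>`; the segments and allowed values are made up for illustration, not an industry standard.

```python
import re

# Hypothetical convention: <env>-<team>-<app>-<resource>,
# e.g. "prd-payments-checkout-rg" for a production resource group.
NAME_PATTERN = re.compile(
    r"^(?P<env>dev|tst|acc|prd)-"      # environment
    r"(?P<team>[a-z0-9]+)-"            # owning team
    r"(?P<app>[a-z0-9]+)-"             # application
    r"(?P<resource>rg|repo|pipe)$"     # resource type
)

def parse_resource_name(name):
    """Return the name's components, or None if it violates the convention."""
    match = NAME_PATTERN.match(name)
    return match.groupdict() if match else None
```

Because every valid name follows the same pattern, the same expression works for validating new names, searching existing ones, and aggregating resources per team or environment.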
Furthermore, standardized naming conventions might deviate from “industry standard” conventions since your organization has specific needs. If individual teams won’t see the value of it, constant discussions might pop up and will slow you down.
In contrast to the list above, individual teams might benefit from choosing their own naming conventions. They do not have to follow the enterprise-level approach, so they can speed up and use whatever naming convention suits them best. There is no need to rename and/or refactor their applications, infrastructure components and connection strings.
Security controls help to make sure your applications stay secure in all stages of the Software Development Life Cycle. Often, security is a trade-off between speed (for the business to deliver new features) and reduced risk impact (for departments such as the CISO & SOC).
Data classification frameworks
It’s impossible to benefit from security controls if you don’t follow a single data classification framework for all of your applications. A data classification framework helps to establish a baseline for how critical your application is. This should be seen from multiple perspectives, such as confidentiality, integrity and availability. If multiple data classification frameworks are adopted throughout your organization, you can’t compare applications against each other and thus cannot set the right security controls to minimize business risks. Every team would then choose what they think is appropriate for the overall business continuity of the organization as a whole.
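A single shared framework can be as simple as scoring each application on the three perspectives mentioned above and deriving one overall class. The sketch below is an illustrative assumption: the three levels and the “highest score wins” rule are made up for the example, not taken from any particular standard.

```python
# A minimal, shared data classification scheme: each application gets
# a confidentiality, integrity and availability score (1 = low,
# 3 = high) and the overall class follows the highest of the three.

LEVELS = {1: "public", 2: "internal", 3: "confidential"}

def classify(confidentiality, integrity, availability):
    """Derive one overall classification from the three CIA scores."""
    for score in (confidentiality, integrity, availability):
        if score not in LEVELS:
            raise ValueError(f"score must be one of {sorted(LEVELS)}")
    return LEVELS[max(confidentiality, integrity, availability)]
```

Because every team uses the same function, two applications with the same scores always end up in the same class, which makes them comparable across the organization.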
Nearly every security tool has so-called quality profiles. Think of the quality profiles of SonarQube or Checkmarx, which act as a way to capture various security-related issues in your source code. If every developer team can choose their own profile, you will end up with different lists of findings given the same pieces of source code across different teams. With this approach, some developer teams feel they are doing a great job, while others lack visibility into very critical issues in their internet-facing application that serves millions of end-users.
One step further in the development process are the quality gates. This concept determines whether your source code is of sufficient quality to pass on to the next stage or not. Often a CI/CD pipeline breaks if it does not pass, which helps the team to improve or mitigate (security) issues. Once this concept is left to individual teams, you can’t strive for a consistent quality of your applications. Thus you do not have a uniform and controlled way of managing your business risks. You might end up with a lot of (unknown or unnoticed) vulnerabilities and security issues that you only discover once an application is already in production. Fixing issues early on saves time and money and also reduces the pressure on operational SOC teams to monitor applications at runtime.
Yet, on the other hand, if quality gates driven from a centralized perspective are too “strict”, your developer teams might spend a lot of time fixing issues for rather simple applications that are not mission-critical, such as PoCs and/or internal applications that are completely isolated and do not process any significant information.
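One way to reconcile the two viewpoints is a centralized gate whose strictness follows the criticality of the application. The sketch below is an assumption for illustration: the metric names, criticality tiers and threshold values are made up, but the mechanism (central rules, tiered strictness) is the point.

```python
# A hedged sketch of a centralized quality gate: mission-critical
# applications face tight limits, while isolated PoCs get more slack.
# Tiers, metrics and thresholds are illustrative assumptions.

THRESHOLDS = {
    # tier: (max critical vulns, max high vulns, min test coverage %)
    "mission-critical": (0, 0, 80),
    "internal":         (0, 5, 60),
    "poc":              (2, 10, 0),
}

def gate_passes(tier, critical_vulns, high_vulns, coverage):
    """Return True if the scan results may pass to the next stage."""
    max_crit, max_high, min_cov = THRESHOLDS[tier]
    return (critical_vulns <= max_crit
            and high_vulns <= max_high
            and coverage >= min_cov)
```

The rules stay centralized (and thus auditable), while a PoC team is not blocked by the same bar as a team running an internet-facing, mission-critical application.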
Another interesting topic is that of the deployment platform. Highly autonomous DevOps teams have a lot of freedom to choose their desired target architecture and thus also the deployment platform of their choice. What does this mean in reality?
To answer this question, one should look at the cloud (migration) strategy of the organization. If there is a strong strategy that pushes teams in the same direction, the number of deployment platforms might be limited. Suppose the deployment strategy is SaaS over PaaS over IaaS; this offers a clear choice of how teams can deploy their applications.
Zooming in on the different PaaS services of the AWS cloud provider as an example, there are still plenty of options to choose from.
Suppose the teams are pushed towards containers: they can run their applications on Elastic Container Service (with Fargate), Kubernetes, custom container solutions and many other flavors. This all has an effect on how to actually release and deploy software. For some simple applications built by inexperienced teams, Kubernetes might be overkill, since it has a pretty steep learning curve, a lot of “moving parts”, and is time-consuming to maintain.
Yet other teams benefit more, since they already have the required experience. They have a bunch of microservices that need to talk to each other, and thus Kubernetes is a logical choice for them.
Many companies decide to build or acquire a deployment platform to facilitate and standardize how applications are run. It helps to streamline processes like the ones above, but it also creates a dependency on the team that maintains it. Every company has to figure out for itself whether this is a proper solution or not.
Based on the key topics presented in this article, the following links help to view this problem space from different angles and make decisions a bit easier.
- An article from Gartner that highlights why leaders should balance federated (freedom of) control and centralized control when deciding on organizational models.
- The Harvard Business Review offers an article that shares thoughts on the right amount of team autonomy.
- When your company wants or needs to acquire another company, consider reading the article by Quotidian that covers decentralized and centralized solutions from a strategic and tactical point of view.
Every large organization that pushes DevOps initiatives needs to find the balance between centralized and federated control. This article covered a number of key topics that need to be taken into account, and provided a concrete overview of the pros and cons of various situations. I hope it was beneficial for you and steers the discussion in your company in a meaningful direction.
If you have questions related to this topic, feel free to book a meeting with one of our solution experts, or send a mail to firstname.lastname@example.org.