Companies increasingly execute cloud-first strategies for their core business applications. Some organizations even set a deadline for closing down their data centers. DevOps teams that create microservices demand speed and flexibility at every stage of the Software Development Lifecycle. Security and regulatory requirements quickly reveal friction between the DevOps teams and the (network security) infrastructure department. Most likely, you don’t know how many IP addresses, firewall rules, and application endpoints you have, let alone how they all interact with each other. In the cloud, things become even more problematic: with many more moving parts, traditional procedures and techniques for network infrastructure management can’t keep up. Identity-based micro-segmentation provides an answer to this.
Prior to the DevOps movement, applications were hosted on Virtual Machines with fixed IP addresses. Virtual Machines were created and kept running as long as they were needed. From a traditional network security perspective, you dealt with firewalls, whitelists of allowed IP addresses, protocols, and a limited set of open ports. Management of those components relied heavily on formal change requests and manual intervention. Sometimes these processes took weeks; exceptions to the rules took even longer.
In today’s world, application workloads use disposable infrastructure components: Virtual Machines that only exist while they are needed, containers, dynamic load balancers, and so on. Pipelines create, maintain, and destroy them. Even IP addresses for applications come and go. DevOps favors more and more microservices talking to each other instead of one big monolith. Besides this, there is less human intervention, since nearly everything becomes automated. Zero trust systems entered the stage.
The number of internal connections between applications, the corporate network, and other internet-based resources quickly becomes very difficult to manage. When you also take into account a large number of IoT devices, the problem becomes even more pressing. Traditional network security was all about controlling points of entry (internal traffic) and points of exit (external traffic).
From a technical point of view, network traffic was not context-specific. This, too, has changed due to the least privilege principle for each and every component. Components that require access get an “identity”, since they should be treated individually. Instead of raw network traffic that should be passed or blocked between different networks, the focus shifts towards “which component needs to talk to which other component, and nothing more”.
The primary focus of security shifts from the network infrastructure layer to the application. The key difference: “which” component is allowed to communicate with another component, instead of “how” that specific component communicates with it.
In short, identity-based micro-segmentation is based on four concepts:
- It treats every component (application, network configuration, etc.) as a uniquely identifiable artifact. Identification is therefore decoupled from network-based properties such as IP address, network subnet, and port number. Security constraints are derived from every (application) component and not from the network (layer).
- It learns how applications communicate with each other inside and outside the cloud. By doing so, it captures communication paths and stores them, all based on the dynamic infrastructure and application components.
- Policies such as network security policies are not created and implemented for the entire network. Instead, they have the context of the application and are created for small segments only. Furthermore, they are written as code in a declarative way. Forget manual procedures.
- True zero trust, since humans play a very small role in this story. On top of that, every request to and from an application triggers authentication and authorization of that request.
With these core concepts in mind, let’s make things more practical. Kubernetes is popular for its ability to deploy microservices at scale, and the challenges mentioned above become concrete when you take a look at it.
The context of Kubernetes
Running Kubernetes at scale requires you to take into account network segmentation. Workloads should be isolated based on various criteria: confidentiality, availability, secrets exposure, etc. Kubernetes clusters deal with multiple networks:
- The outside network from which you can access and maintain the cluster itself.
- Internal networks to let one cluster communicate to another cluster.
- Container to container network (the overlay network) for applications (Pods) to communicate to each other.
- Endpoints / URLs like websites you should be able to access from the network itself.
- Integrations with your on-prem network will make things even more complicated.
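Segmenting the container-to-container (overlay) network from this list is typically done with Kubernetes NetworkPolicy objects. A minimal sketch below builds such a manifest as a plain Python dict, structurally identical to the YAML you would apply with kubectl; the namespace and label values (`payments`, `app=checkout`, `app=cart`) are illustrative assumptions, not taken from the article.

```python
def allow_only(namespace: str, app_label: str, allowed_app: str) -> dict:
    """Return a NetworkPolicy manifest that only lets pods labeled
    app=<allowed_app> reach pods labeled app=<app_label> in <namespace>.
    All other ingress on the overlay network is dropped."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {
            "name": f"allow-{allowed_app}-to-{app_label}",
            "namespace": namespace,
        },
        "spec": {
            # Select the pods this policy protects.
            "podSelector": {"matchLabels": {"app": app_label}},
            "policyTypes": ["Ingress"],
            # Only traffic from the allowed app is admitted.
            "ingress": [
                {"from": [{"podSelector": {"matchLabels": {"app": allowed_app}}}]}
            ],
        },
    }

policy = allow_only("payments", "checkout", "cart")
```

Because the policy selects pods by label rather than by IP address, it keeps working as Pods are created and destroyed, which is exactly the property the challenges below are about.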
Seen from this list, some of the key challenges here are:
- IP addresses are mixed up and don’t carry enough context: you don’t know which IP address belongs to which of the networks in the list above.
- The number of communication paths inside and outside the clusters is huge. They are dynamically created and destroyed based on how and when applications are deployed and destroyed. Furthermore, think about auto-scaling of both Virtual Machines and containers.
- Security policies should support on-prem-related infrastructure and applications as well as applications in the cloud.
What is needed here is an automated way to discover context-specific communication paths between applications and all of their network connections.
Various tools help to solve the above-mentioned challenges. Simply said, the following need to happen:
First of all, workload segmentation should be done automatically and continuously; it is impossible to let humans do this. Machine Learning helps to complete this task quickly. All unneeded communication paths should be filtered out, which already reduces the attack surface significantly.
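The filtering step can be pictured with a toy example: collapse raw network events into unique communication paths, then drop paths that don’t match the learned baseline. The event fields, service names, and the hand-written baseline are illustrative assumptions; a real product derives the baseline with Machine Learning.

```python
# Raw network events as observed on the wire (illustrative data).
events = [
    {"src": "cart", "dst": "checkout", "port": 8443},
    {"src": "cart", "dst": "checkout", "port": 8443},   # duplicate event
    {"src": "cart", "dst": "payments", "port": 8443},
    {"src": "scanner", "dst": "checkout", "port": 22},  # unneeded path
]

# Baseline of legitimate (src, dst) pairs; in practice this is learned
# from normal operation, here it is hard-coded for the sketch.
baseline = {("cart", "checkout"), ("cart", "payments")}

# Step 1: deduplicate events into unique communication paths.
paths = {(e["src"], e["dst"], e["port"]) for e in events}

# Step 2: keep only paths that match the baseline; the rest is filtered
# out, shrinking the attack surface.
needed = {p for p in paths if (p[0], p[1]) in baseline}
```

Four events collapse to three unique paths, of which only two survive the baseline filter.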
From here, every component gets a unique and immutable identity. This identity is created using the metadata of the component itself: think of the UUID (Universally Unique Identifier) of the component or a SHA-256 hash of a binary. Many individual components together make up an application: third-party libraries, custom-developed scripts such as CI/CD pipeline scripts, and runtime configuration settings. Even those components should get an identity.
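A minimal sketch of such an identity, assuming the component’s metadata is available as a dict (the field names and values here are made up for illustration): hash the canonicalized metadata with SHA-256, so the same component always yields the same identity and any change in metadata yields a new one.

```python
import hashlib
import json

def component_identity(metadata: dict) -> str:
    """Derive a stable, immutable identity from a component's own
    metadata (image digest, build pipeline, config hash) rather than
    from network attributes like IP address or port."""
    # Canonicalize so key order never changes the hash.
    canonical = json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

identity = component_identity({
    "image_digest": "sha256:1a2b3c",   # illustrative values
    "pipeline": "ci/build-42",
    "config_hash": "9f1c77",
})
```

The identity is deterministic, so two deployments of the exact same artifact get the same identity, while a rebuilt binary or changed configuration immediately gets a different one.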
Those identities should be verified in real-time, at the moment an application requests a network connection to another application over a certain network. No more firewall rules based on Word documents or other configuration files that require manual processing. Granting or denying access is based on the identity of the component, decided by segmented security policies.
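At its core, such a check reduces to looking up the pair of identities in a declarative policy set, sketched below with hypothetical identity strings; real products layer cryptographic attestation and real-time enforcement on top of this.

```python
# Declarative policy: the exact (source identity, destination identity)
# pairs that may communicate, and nothing more. Identity strings are
# illustrative placeholders.
policies = {
    ("id-cart", "id-checkout"),
    ("id-checkout", "id-payments"),
}

def authorize(src_id: str, dst_id: str) -> bool:
    """Grant the connection only if this exact identity pair is allowed.
    Note: no IP address, subnet, or port appears in the decision."""
    return (src_id, dst_id) in policies
```

A cart-to-checkout request is granted, while a direct cart-to-payments request is denied, even if both services happen to sit on the same subnet.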
Back to the previous example of Kubernetes. Identity-based micro-segmentation further helps with the following topics:
- Application components are discovered on the fly while they are deployed. No need to manually add those components to whitelists and/or routing tables.
- Micro-segmentation-oriented tools can help to visualize traffic to and from the Kubernetes clusters and inside these clusters.
- Generate reports to review the application context in terms of compliance, for example for audit trails.
- Network-based attributes such as IP addresses and network subnet do not play a role anymore.
- Even more advanced tools can let policies change on the fly when your Kubernetes environments change.
- Create generic policies and rules based on a hierarchical model. Think of inheritance in Java applications. This approach scales as more and more DevOps teams progress towards this new way of working.
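The hierarchical model from the last point can be sketched with plain class inheritance (mirroring the Java analogy): a base policy carries organization-wide rules, and team policies inherit and extend them. The rule names are illustrative assumptions.

```python
class BasePolicy:
    """Organization-wide defaults every workload inherits."""
    # ("*", dst) means any workload may reach this shared service.
    allowed = {("*", "logging"), ("*", "metrics")}

class PaymentsTeamPolicy(BasePolicy):
    """Team policy: inherits the defaults and adds its own paths."""
    allowed = BasePolicy.allowed | {("cart", "payments")}
```

New teams only declare their own additions, so the rule set scales with the number of DevOps teams instead of being rewritten per team.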
With these benefits in mind, it’s very obvious that identity-based micro-segmentation boosts your cloud security. Staying in control is king here.
Meet the tools
Tools are a necessity when it comes to DevOps. Everything that can be automated should be supported by tools and not human processes. A number of tools dedicated to micro-segmentation:
Zscaler is famous for its enterprise-grade proxy solutions. Besides those, they also offer a tool called Zscaler Workload Segmentation. This tool analyzes the total number of network events. These events act as the source from which to capture the most important policies: all duplicate events are filtered out, and similar communication paths of different applications are also left out. Over a series of iterations, it narrows the total number of events down to fewer than 100 identity-based policies. That number is manageable. You’re good to go now.
Palo Alto offers Prisma Cloud, which scans your public cloud infrastructure (all three major cloud providers are supported). It captures your resources independent of the cloud-native service being used, and it can visualize ingress, egress, and other internal network communications very well. Its special attention to network segmentation in Kubernetes makes it a good candidate for micro-segmentation.
Another powerful tool to help you here is Edgewise. This tool holds four patents, one of which focuses on the mapping of application communication paths. Those paths are constructed by looking at load balancers, layer-7 proxies, and Network Address Translation (NAT). Interestingly, there is no requirement to install a client (agent) on the components that need to be analyzed: no extra task lands on the backlog of each application team, and the configuration of the component can remain intact. High-quality data is essential to get the correct communication paths and to define the right policies. Data matters, again!
So far so good, what are the business justifications? The true power of micro-segmentation is based on the following aspects:
- Mapping of the application communication paths. In the past, this was a manual effort; now it can all be automated. This literally saves weeks or even months of tedious and error-prone work. The valuable time saved can be used to bring business features to production. Less human involvement means fewer manual errors and misconceptions.
- Even more powerful than the previous phase is the maintenance of the application communication paths. Maintaining these manually is not an option (anymore). Companies that process critical (privacy-related) information should always be in control: auditors require insight into the processes, and when it is missing, those companies are in trouble.
- Humans don’t touch running systems anymore; zero trust systems prevail. Less human interaction means fewer mistakes that need to be corrected, and the risk of data leakage and accidental errors is reduced. Besides this, everything is much more consistent, so it takes less time to find exceptions to the rule.
- The entire attack surface of your network and application (stacks) is greatly reduced. All possible connections are downsized to what is really needed. Less time is spent hunting vulnerabilities, which makes your organization much more secure.
Besides the business benefits, there are two important aspects to take into account when implementing micro-segmentation.
- Expect resistance from your network (security) operators and engineers. Their work will change a lot, if not completely vanish. They might do whatever they can to delay these kinds of initiatives.
- Legacy equipment such as old applications, network devices, and mainframes may not support micro-segmentation. You need to decide whether to drop support for these pieces of hardware and software or leave them out of scope. If you want to move forward, this might be a very good reason to refactor and modernize them now.
Micro-segmentation is a great step forward for the concepts of zero-trust systems. It enables the mapping and maintenance of all application communication paths, especially in cloud-native environments. The context of the application or component becomes the key to grant or deny access, not the IP address, port, or protocol. When this concept is implemented, you can save a tremendous amount of time setting up and maintaining the communication needs of applications and other components. DevOps teams will really like it, since they can focus on their features and move even faster than was possible before. Be sure to check out the tools to find out how to make this a reality in your organization.