
How to unlock the potential of your applications with Kubernetes at the edge


In previous articles in this series, we discussed why multi-cloud and Kubernetes-as-a-Service are important factors to consider when choosing a Kubernetes solution. In this post, we’ll dive into the third important factor: edge computing.

What is edge computing?

While edge computing is a slightly vague concept, it’s best summarized as ‘compute and data storage close to the source of data,’ usually to reduce costs, bandwidth usage, and response times.

Recent developments such as the increased bandwidth of 5G networks have removed bottlenecks for endpoint devices and end-users. This increase in connectivity allows an exponential growth of data — and unlocks many different use cases, from image and video processing and voice recognition to running factories and retail locations over 5G instead of Wi-Fi or even the factory floor copper networks of yore.

In the world of cloud-native applications, edge enables innovative applications that were previously not feasible due to cost or deployment complexity. Edge computing (and IoT) is a major driver of digital transformation in retail and manufacturing, as well as in bandwidth-intensive or latency-sensitive end-user applications. These applications require a new architectural model, one flexible enough to manage the deployment of Kubernetes clusters across many locations and at scale.

Flexibility needed

And while we see a rise of serverless platforms at the edge, the flexibility of Kubernetes as a substrate for edge and IoT applications is unrivaled: autonomous operations, automatic scaling, self-healing infrastructure, and the ability to run anything in a container. Serverless platforms are often severely limited, supporting only a select subset of programming languages, and they frequently lack the databases, message queues, and networking infrastructure needed to deliver a mature, cloud-native development environment. Similarly, containers alone are sometimes not enough: the ability to manage containers and VMs at the edge transparently and consistently reduces complexity and cost.

Running Kubernetes at the edge also has the advantage of having a single, consistent developer experience across on-prem, public cloud and edge computing use cases, making it quick and easy for developers to embrace edge-enabled applications. Simply put: they don’t have to learn about a completely new platform just to build and run applications at the edge, saving on re-training and reducing their cognitive load.

Invisible Operations

For DevOps and SRE teams, a single consistent experience also enables policy-based operations and lifecycle management, so many edge clusters can be managed as one, at scale. Admins can manage and maintain clusters from a single policy, regardless of how many clusters are actually deployed. And as you can imagine, edge computing suffers from cluster sprawl: many relatively small clusters spread across many edge locations.

Unlike on-prem or public cloud clusters, edge clusters can pop in and out of existence quickly and often, because applications follow users (and not vice versa). Not all Kubernetes deployment platforms support that paradigm.

Profile-based cluster management makes it easier to deploy identical remote clusters and configurations from a single profile, instead of managing each remote cluster separately. That minimizes configuration drift, while still being able to apply unique configurations where needed.

This feature helps standardize application and container deployment across clusters, making the onboarding of new edge regions easy and consistent. It lets day-to-day operations work scale sublinearly with the number of edge locations, making the most of each engineer's time: administrators can take on a large number of new sites without a matching increase in effort.

Managing the complexity of scale

This is especially true for managing the distributed architecture of edge computing. Clusters across many edge locations need to communicate with each other, but also with the central cloud locations. Creating and managing this mesh of ephemeral clusters and networks is not trivial, especially when you remember that we're dealing with systems other than containers, including VMs, running across edge locations, public cloud, and on-prem.

These kinds of modern, distributed applications require a scalable solution that runs many clusters across heterogeneous locations in a distributed architecture with single-pane-of-glass management.

Mirantis’ solutions support this federated distributed architecture, providing a consistent experience across regions, while being centrally managed and resilient against connectivity and bandwidth issues. The architecture supports multiple regions in a hub-and-spoke model. Mirantis bridges the gap when running containers, VMs, and serverless across on-prem, public cloud and edge with an infrastructure-agnostic platform. This built-in flexibility supports specific integrations such as GPUs, storage, networking, real-time operating systems, kernel modules to support dedicated hardware, and much more.

Solve Your Kubernetes-at-the-Edge Challenges

Mirantis Container Cloud is suitable for any kind of application at the edge across retail, manufacturing, enterprise, ISV, and SaaS use cases. Its ability to manage deployments on any infrastructure based on centralized policies is a huge time saver. It lowers time-to-fix when outages occur, lowers support costs, and improves customer satisfaction.

Make sure to look at Mirantis and its distributed architecture to solve your Kubernetes-at-the-edge challenges. To learn more about how Mirantis supports edge use cases, read their edge solution brief and see how their infrastructure solutions can support enterprises at the edge.

