Kubernetes is great for developers creating containerized workloads and microservices. But its open, pluggable architecture and design principles can be scary at first sight: it doesn’t hide any of its complexity, and managing a cluster’s lifecycle is a demanding task requiring expert knowledge and experience. Managed services like Spectro Cloud fill this gap, removing complexity and preventing cloud lock-in.
That means that to run Kubernetes yourself, you have to be quite the expert: you have to make all the decisions yourself. Any mistake in the architecture design, configuration or ongoing operations can impact the availability, resilience, security and cost of your DIY Kubernetes platform. And no one but you is accountable for those mistakes, with no one else to call for support (although something like IBM’s Technology Support Services or another third-party support service may help you out in a pinch).
And DIY-ing doesn’t scale well from a team perspective for all but the largest environments. Smaller outfits simply can’t justify the cost of a dedicated Kubernetes operations team, which requires expensive expertise, nor run the risk of depending on one or a handful of employees for that specific knowledge.
That explains why many organizations turn to hosted or managed services for their Kubernetes needs.
Managed services are invaluable for organizations faced with the complexity or cost and scaling issues of DIY Kubernetes. These services, like Amazon EKS, Azure AKS, Google GKE and many others, remove the complexity of designing, implementing and operating Kubernetes, allowing organizations to focus their employees’ time on things that more directly impact their own bottom line.
And that’s great, a win-win for all involved parties. Or is it? Some wins may be bigger than others, in this case.
That’s because the big public cloud vendors exert a magnetic pull: start using one of their services and you’ll be seduced into using another of their cloud services. And another. And another. You catch my drift.
The issue here is one of (unconscious) lock-in. Many managed service providers, including the big cloud vendors, do everything in their power to pull you in and sell you more of their services. Their entire service portfolio is designed to lower the barriers for their users, mostly developers, to start using additional services, often initially at no cost. It’s a beautifully integrated portfolio of services that your developers love. Your wallet may not.
The problem of lock-in
And this is where lock-in often stings. Because lock-in happens so unconsciously and gradually, by the time it finally becomes visible it’s already too late: the organization is intertwined with a vendor’s cloud services without the right checks and balances, and with no easy way out.
And while this is not exclusive to managed Kubernetes services (it’s happening across the board, from infrastructure to databases and from message queues to mobile back-ends), the cost of Kubernetes clusters can be prohibitive. Let’s not forget that we’re not dealing with just the Kubernetes control plane, but with worker nodes and all the containers that run the entire application and services landscape. It can get really expensive, really fast. And without an easy way out, it can take months to reduce that cost, if it’s possible at all.
Smart people in the infrastructure space saw this co-dependence between cloud infrastructure and running Kubernetes coming, and created independent Kubernetes services. These agnostic services don’t care where you run your Kubernetes: on-prem, in a private cloud, or in any of the public clouds. Two notable examples are Platform9 and Spectro Cloud, which both offer a fully managed SaaS service for running the Kubernetes control plane.
Both of these companies offer a vanilla Kubernetes experience to developers, allowing them to use the platform without re-training, and both optimize for ease of use and quick onboarding. But honestly, that’s table-stakes functionality. Offering a vanilla Kubernetes experience is just a checkbox on the enterprise’s list while shopping around for a managed service.
So what are the differentiators for managed services like this? Let’s take a look at Spectro Cloud, one of the more recent offerings in this space.
Spectro Cloud’s obvious difference from any of the public cloud offerings is its multi-cloud capability. Spectro’s service runs across public clouds, private clouds and bare-metal environments, offering a cloud-agnostic yet consistent experience across environments.
Kubernetes never comes alone. Many other products are needed to create a complete, secure solution, including products for enterprise storage, security, networking and monitoring. Each requires additional knowledge and adds to the complexity of running your own Kubernetes environment.
Spectro Cloud simplifies implementation and ongoing operations with profiles that combine different products and configurations for different environments, without compromising flexibility. By modeling the infrastructure stack layer by layer in code, admins can define the desired state of each cluster in a few lines of declarative code describing which products and configurations to use. Profiles define the operating system flavor and version, the Kubernetes version, and the storage, networking, security and monitoring products and their configurations.
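To make the layer-by-layer idea concrete, here is a minimal sketch of what such a profile could look like, expressed as a Python structure. This is not Spectro Cloud’s actual schema; the layer names and products are illustrative assumptions.

```python
# Hypothetical sketch of a layered cluster profile. This is NOT Spectro
# Cloud's actual schema; it only illustrates modeling the stack in code.
production_profile = {
    "os":         {"flavor": "ubuntu", "version": "20.04"},
    "kubernetes": {"version": "1.21.3"},
    "storage":    {"product": "portworx"},
    "networking": {"product": "calico"},
    "security":   {"product": "sysdig"},
    "monitoring": {"product": "prometheus"},
}

def layers(profile):
    """Return the stack's layers in the order they are declared."""
    return list(profile)
```

The point is that the whole stack, from OS to monitoring, is captured as a handful of declarative lines rather than as manual installation steps.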
Spectro Cloud provides out-of-the-box configurations for many popular ecosystem integrations, like Sysdig, Weaveworks, Portworx, Prometheus and many others, which lets admins start defining clusters more easily.
These profiles combine the different products and their configurations that make up the layer cake of a fully fledged Kubernetes environment. Each profile can be reused for different clusters across dev, test and production environments, and profiles reuse components and configuration where possible.
This way, different stacks can be built with minimal changes between profiles, even for vastly different use cases. For instance, production and dev/test profiles share many components and configurations, but the dev/test environment uses a community-supported Linux base instead of a commercial version and lacks the expensive security and monitoring tools.
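That reuse pattern can be sketched as deriving a dev/test profile from a production one, overriding only the layers that differ. Again, this is a hypothetical illustration in Python, not a real Spectro Cloud API; the `derive` helper and the layer names are assumptions for the example.

```python
import copy

# Hypothetical sketch: derive a dev/test profile from production by
# overriding only the layers that differ. Not a real Spectro Cloud API.
def derive(base, **overrides):
    """Copy a base profile, replacing overridden layers; None drops a layer."""
    profile = copy.deepcopy(base)
    for layer, spec in overrides.items():
        if spec is None:
            profile.pop(layer, None)  # e.g. drop paid tooling in dev/test
        else:
            profile[layer] = spec
    return profile

production = {
    "os":         {"flavor": "rhel", "version": "8.4"},  # commercial Linux
    "kubernetes": {"version": "1.21.3"},
    "storage":    {"product": "portworx"},
    "security":   {"product": "sysdig"},                 # expensive tooling
}

devtest = derive(
    production,
    os={"flavor": "ubuntu", "version": "20.04"},  # community-supported base
    security=None,                                # no pricey security tools
)
```

The shared layers (Kubernetes version, storage) stay identical across both profiles, so only the genuine differences have to be stated.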
This has the benefit of making Kubernetes cluster deployment a trivial task, which should shorten a deployed cluster’s lifespan; why reuse a cluster when deploying a new one is easy and cheap? An ‘old’ cluster runs the risk of configuration drift and hidden issues; deploying a new cluster improves security and reduces configuration drift and technical debt. This helps improve resilience, performance, availability and other qualitative aspects of the cluster.
Spectro’s Pallet Orchestrator is the engine that deploys new clusters and monitors them for configuration changes, correcting any drift in a deployed cluster and upgrading clusters when their profiles change.
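The general pattern here is a reconciliation loop: compare a cluster’s actual state with the profile’s desired state and correct whatever differs. A minimal sketch of that idea (hypothetical, not Spectro’s implementation):

```python
# Minimal reconciliation-loop sketch (hypothetical, not Spectro's code):
# diff a cluster's actual state against the profile's desired state and
# return the layers that need correcting.
def reconcile(desired, actual):
    """Return the layer-level changes needed to restore the desired state."""
    changes = {}
    for layer, spec in desired.items():
        if actual.get(layer) != spec:  # layer drifted or is missing
            changes[layer] = spec
    return changes

profile = {
    "kubernetes": {"version": "1.21.3"},
    "monitoring": {"product": "prometheus"},
}
cluster = {
    "kubernetes": {"version": "1.21.3"},
    "monitoring": {"product": "prometheus", "debug": True},  # manual tweak
}

# Only the drifted monitoring layer needs to be corrected.
drift = reconcile(profile, cluster)
```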
And while this is very similar to how Kubernetes itself manages the lifecycle of pods and containers under its management, Spectro Cloud offers an easy-to-use web UI to build profiles, deploy clusters and handle lifecycle operations.
Additionally, consistent deployment and a shorter cluster lifespan remove the ‘snowflake effect‘ of unique configurations that are brittle, break easily, and that no one dares to change, update or otherwise ‘touch’ because of configuration drift over time. Being able to consistently deploy and manage infrastructure, without relying on tribal knowledge or the single point of failure of knowledge concentrated in one person, is key to business continuity.
And any system admin (or SRE, or DevOps engineer) will tell you this: consistency of management is a major aspect of running a smooth IT operation. Being able to apply the same methodology, the same tooling and the same processes, without exceptions, across different parts of a multi-cloud estate matters: it allows IT to spend time on improving and servicing the business, instead of being bogged down repeating the same toil with small variations due to each snowflake’s little differences.
Spectro Cloud solves this by managing the lifecycle of a cluster, including version management, from the profiles the cluster was deployed with. Profiles pin specific versions or tags (like latest) to manage the lifecycle. That means any version change in a profile propagates to all deployed clusters automatically, with the right approvals and confirmations. This way, deployed clusters are always in sync with their profiles, removing the need to manually take care of each individual cluster; administrators only need to manage the versions in the profiles.
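The fan-out effect of that model can be sketched in a few lines: change a version once in the profile, and every cluster deployed from it follows. Again an illustrative assumption, not Spectro Cloud’s real data model (and the approval step is omitted for brevity):

```python
# Hypothetical sketch of version propagation: bumping a version in one
# profile fans out to every cluster deployed from it. Illustrative only;
# real systems would gate this behind approvals and confirmations.
profiles = {"prod": {"kubernetes": "1.21.3"}}
clusters = {
    "us-east": {"profile": "prod", "kubernetes": "1.21.3"},
    "eu-west": {"profile": "prod", "kubernetes": "1.21.3"},
}

def bump(profile_name, new_version):
    """Change the profile once; all clusters deployed from it follow."""
    profiles[profile_name]["kubernetes"] = new_version
    for cluster in clusters.values():
        if cluster["profile"] == profile_name:
            cluster["kubernetes"] = new_version

bump("prod", "1.22.0")
```

The administrator touches one profile, not N clusters, which is exactly what keeps the operational workload from scaling linearly with cluster count.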
This is why repeatability and consistency are important. The ability to do the same task over and over without linearly scaling the work involved solves the scaling issue, and policy-based management is the driving enabler that makes this happen.
Spectro Cloud is positioned right where enterprises need it most: an easy-to-use service that doesn’t compromise on flexibility or control. What makes it unique is how it combines profiles that capture the desired state of clusters, the reusability of those profiles, and keeping deployed clusters in sync with their profiles.
Enterprises no longer need to hire expensive (and rare) Kubernetes experts to deploy and operate their clusters; Spectro Cloud does the heavy lifting for them.