Kubernetes is changing the divide between operations and development. Traditionally, provisioning and managing infrastructure was the exclusive domain of IT Operations; Kubernetes is shifting it towards developers. But even though public cloud, Kubernetes, and infrastructure-as-code tooling put infrastructure into their hands, developers are not operationally focused. That leaves operational aspects in the dark, which hurts many non-functional and qualitative aspects of the software being built.
So how do operations and development engineers come together to conquer this divide?
Developers love public cloud. But why?
Putting infrastructure, and the flexibility to change its composition and configuration, into developers' hands is a good thing. Developers use that flexibility to scale their app up or down, spin up a fresh instance of the application's infrastructure for a new version, or change the composition of the infrastructure as the application's requirements change.
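To make that concrete: in Kubernetes, scaling an app up or down is a one-line change to a declarative manifest, no ticket to Operations required. A minimal sketch (the app name, image, and sizes are illustrative placeholders):

```yaml
# Illustrative Deployment manifest; names and sizes are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5          # scale up or down by editing this one line
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.4.2
          resources:
            requests:
              cpu: "250m"    # declared resource needs let the platform
              memory: 256Mi  # place and bill the workload appropriately
```

The same self-service loop applies to a whole environment: commit a manifest change, and the cluster reconciles towards it.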
Without the self-service, on-demand aspect of provisioning infrastructure, software developers are dependent on the processes for requesting and changing infrastructure that the Operations team has put in place, often bringing them to a grinding halt. No wonder developers love Kubernetes, and often choose a public cloud's managed Kubernetes service, because it lets them provision and change infrastructure without depending on the Operations team.
The public cloud’s building-block approach to delivering infrastructure components in a self-service, on-demand way is a natural fit for the way developers work and is crucial to their agility. But outsourcing infrastructure to a cloud provider doesn’t free you from operational responsibility, as developers often find out at the first roadblock after swiping a credit card to spin up a cluster.
Infrastructure is not Operations, Operations is not infrastructure
Organizations still need people focused on the qualitative aspects of infrastructure. Without this, public cloud consumption quickly becomes expensive, slow, insecure, and unreliable.
Operational engineers drive consistency, promote re-use of standards, create boilerplate infrastructure-as-code templates, balance resource usage against cost, and much more. Engineers who are exceptionally successful in these areas partner closely with software development teams, letting software engineers learn about operational aspects and blurring the lines between the two roles.
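One hedged example of such a guardrail, which an operations team might ship as part of a boilerplate template: a per-namespace quota that caps aggregate resource usage (and therefore cost) without blocking developer self-service inside the namespace. The namespace name and limits below are placeholders:

```yaml
# Illustrative guardrail: caps what one team's namespace can request
# in total, while leaving developers free to deploy whatever fits.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a        # placeholder team namespace
spec:
  hard:
    requests.cpu: "8"      # total CPU the team can request
    requests.memory: 16Gi
    limits.cpu: "16"       # total CPU the team can burst to
    limits.memory: 32Gi
```

Because the quota is declarative, it can live in the same template repository as the rest of the boilerplate and be reviewed and versioned like any other code.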
This doesn’t mean software developers should suddenly be responsible for operational aspects, though. And while educating developers is useful, they are not, and should not be, the experts. Their core skill is developing software, and minimizing the toil that takes time away from that work makes them more effective.
That’s where the operationally-focused engineers come in. These engineers take a non-functional approach, focusing on qualitative aspects like security, cost-effectiveness, performance, availability, and stability.
These are crucial aspects of any system’s success, but easily overlooked. The engineering work required to make a system cost-effective, secure, compliant, performant, resilient, and stable is non-trivial and requires expertise. For instance: it’s easy to say ‘no’ to a developer’s requirements because the security guidelines say so, but hard to work through the nuances and give the developer what they need without compromising security. It’s easy to spin up a bunch of cloud infrastructure resources for a new application, but incredibly hard to spin up just the right infrastructure at minimal cost, and even harder to keep costs down later in the application’s lifecycle.
Creating Simple Platforms is hard
Creating simple platforms that cater to different requirements without adding complexity is hard. And given Kubernetes’ complexity in particular, simplicity is very hard to achieve.
Many organizations need help to evolve beyond the most common level of maturity in using Kubernetes, which often means forcing all developers onto a single Kubernetes cluster: catering only to the lowest common denominator and never fully fitting any single use case. Operational engineers are crucial to maximizing the potential of Kubernetes, tailoring cluster configurations to specific requirements across environments and teams, and allowing single-purpose clusters without compromising on cost or security.
For Kubernetes and the ecosystem of products that inseparably come along for the ride (as I discussed earlier in this post about infrastructure composability), operational skills are crucial. Even standing up a single production-ready Kubernetes cluster, with sometimes dozens of additional products and complex configurations to integrate, is hard. No wonder running Kubernetes yourself is considered doing it the hard way.
Do we still need ops when we have managed Kubernetes?
Offering a tailored, purpose-built cluster to each development team’s specific requirements is an operational nightmare if you don’t get some extra help.
Yet this is the pitfall many organizations walk into, eyes wide open: assuming that because Kubernetes shifts infrastructure capabilities to developers, they no longer need ops.
The reality is that, regardless of where you run Kubernetes, you need a little help to navigate the operational challenges of maximizing Kubernetes’ value across many specific use cases, while keeping the system secure, compliant, performant, cost-effective, resilient, and stable.
The complexity of delivering tailored Kubernetes clusters to multiple teams takes significant engineering effort. Building a cloud-like portal that balances pre-vetted, known-good cluster configurations with self-service capabilities, so that developers can create new clusters on-demand and tailor each to their ever-changing and specific requirements, is not something your Operations team should be building themselves.
No wonder managed services for Kubernetes make sense. By outsourcing Kubernetes, you don’t have to build a bespoke and unsustainable platform yourself. Instead, you can focus on driving adoption with development teams: helping them use self-service capabilities to create cluster configurations they know are secure, cost-effective, and tailored to their needs, whether that means adding GPUs or particular kernel modules, running a specific Kubernetes version, or swapping open-source products for security, storage, or networking with commercially supported options in production environments.
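What such a tailored configuration looks like depends on the platform, but as one hedged sketch using eksctl (a real tool for Amazon EKS; the cluster name, region, version, and instance sizes below are illustrative), a team could declare a specific Kubernetes version and a GPU node group in a few lines:

```yaml
# Illustrative eksctl ClusterConfig: a purpose-built cluster for an
# ML team, pinned to a Kubernetes version and given GPU workers.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ml-team            # placeholder cluster name
  region: us-east-1        # placeholder region
  version: "1.27"          # the specific Kubernetes version the team needs
nodeGroups:
  - name: gpu-workers
    instanceType: p3.2xlarge   # GPU-equipped instance type
    desiredCapacity: 2
```

The point is less the specific tool than the shape of the workflow: the tailored configuration is a small, reviewable document, which is what makes pre-vetting and self-service possible at the same time.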
How Spectro Cloud fits
Spectro Cloud offers a managed Kubernetes solution built to balance the needs of operations and development.
Its cluster profiles let developers choose from vetted, pre-configured options to tailor a cluster’s configuration to their needs: where to run the cluster (cloud or on-prem), the operating system, Kubernetes version and configuration, networking and security solutions, storage infrastructure, load balancing, and observability and service mesh options.
These features are key to shifting infrastructure capabilities to developers. Paradoxically, moving Kubernetes closer to the developer, by letting them create a specific, tailored cluster configuration on-demand and self-service, removes the friction of Kubernetes, so developers can focus on the software itself, not the infrastructure underneath.
Spectro Cloud’s templating capabilities for customizing cluster configuration mean that operational engineers can drive consistency across all development teams without compromising developers’ freedom of choice. Promoting re-use of standard configurations reduces variation, and with it infrastructure-related issues and outages, freeing operational engineers to focus on the qualitative aspects of infrastructure, like cost, security, and performance, and on improving the configuration options in the templates.