Getting started with Kubernetes is a daunting endeavor, but a necessary one. For most organizations, it’s about digital transformation in a marketplace where digital products and services are becoming the norm. Getting and maintaining an edge depends on your ability to deliver more features, faster, and at an ever-increasing level of quality to stay ahead of your competition. Kubernetes is often touted as the way to deliver great software quickly. The Otomi take on doing Kubernetes ‘right’ is an interesting one, and RedKubes, the company behind this solution, has some refreshing opinions on cloud-native complexity.
Kubernetes never comes alone
Agile and lean software architecture, with many small but easy-to-change microservices, lies at the heart of cloud-native software development. Kubernetes is the core technology enabling microservices architectures, as it can manage containers at scale across many teams, many services, and multiple cloud platforms.
Unfortunately, things aren’t as simple when starting with a container platform. Kubernetes is only one part of a much more complicated infrastructure puzzle, integrating additional software like storage management, networking, security, monitoring, identity, code and artifact repositories, dashboards, and more to create an enterprise-grade platform. Just looking at that list tells you: building a Kubernetes-based container platform is hard.
And while some of the complexity, that of Kubernetes itself, is easy to outsource by embracing a managed service like Google GKE, Amazon EKS, or Azure AKS, that only solves a small part of the equation. In all cases, you’re left with the puzzle of choosing, integrating, operating, and updating all of the other pieces of the container platform.
Choosing the more valuable engineering work creates technical debt
All this work on the container platform itself has zero value to an organization: it is simply something that must be done before more valuable work, like developing software, can begin. From a company perspective, it makes sense to try to minimize the amount of work spent on operating the platform, and this is often what happens under pressure from a product owner or manager who doesn’t have the budget for it, or doesn’t understand or appreciate the work that goes into it.
This tension between spending time on toil and developing features builds up over time and leads to technical debt, making the platform harder to operate, upgrade, or change, and less resilient and stable. This slowly locks teams into the platform, increasing friction with every change and making the platform more brittle. The resulting inertia forces teams to spend more time on fixing issues and outages, configuration changes, and regular maintenance. All this takes away from the time developers spend actually writing code.
This technical debt is most visible in the ‘glue’ between the products and services that make up the platform: the custom integrations between, for instance, the container platform and the identity provider, or between the storage management solution and the container platform. Cracks form in this glue at each missed opportunity for maintenance and widen each time a shortcut or easy way out is taken, increasing entropy over time and forcing additional rework every time a change is needed as part of feature development work.
This accumulating ‘interest’ on the unpaid debt (deferred maintenance) makes changes harder when they’re needed, increasing friction and inertia. So how do we maintain this balance without spending too much time on the platform?
Gluing the glue?
As mentioned, the technical debt in the integrations between the different products in the container platform tends to have the biggest impact on resilience and inertia, causing outages or forcing rework to fix issues.
So, it stands to reason that in order to remove this technical debt, we need a solution for keeping cracks from forming in the glue. We need to glue the glue.
In other words: standardize and automate not just Kubernetes, but all of the pieces in the container platform. Much like how organizations use a SaaS or managed service for Kubernetes to solve the complexity in Kubernetes, standardizing and automating the glue and integration between all products in the container platform solves the technical debt and complexity of the entire container platform.
The Otomi Container Platform does just this. It’s a suite of everything needed to build an enterprise-grade container platform, and all of the individual software products come pre-integrated, and they stay integrated, as the entire platform is updated as a whole as part of the managed service.
Getting started with Otomi is taking the easy way
This means getting started with Otomi is as easy as deploying a single container. The entire container platform is deployed on-prem or in your cloud account automatically, taking care of configuration and integration so you don’t have to choose, plan, design, or integrate any of the individual components.
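To make the single-container claim concrete, a bootstrap deployment of this kind typically looks something like the sketch below. Note that the image name, command, and flags here are illustrative assumptions, not Otomi’s actual artifacts; consult the official Otomi documentation for the real installation procedure.

```shell
# Hypothetical bootstrap flow; names below are placeholders, not real Otomi artifacts.

# 1. Point kubectl at an existing cluster (GKE, EKS, AKS, or on-prem).
kubectl config use-context my-cluster    # 'my-cluster' is a placeholder context

# 2. Run a single bootstrap container, giving it cluster credentials.
#    It then deploys and wires up the rest of the platform stack for you.
docker run --rm \
  -v ~/.kube/config:/root/.kube/config \
  example/otomi-bootstrap:latest deploy  # hypothetical image and command
```

The point of the pattern is that the only moving part you manage by hand is the bootstrap container itself; everything else is deployed and integrated from inside the cluster.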
This out-of-the-box experience, focusing on the entire container platform instead of just Kubernetes, is what makes Otomi unique. It contains everything organizations need for cloud-native software development and for running an enterprise-grade container platform in production. The technology stack is built on commonly used open source components, implemented and integrated using industry best practices.
Otomi is cloud and vendor agnostic: it works with all existing Kubernetes solutions, like Google GKE, Amazon EKS, and Azure AKS, as well as on-prem solutions like Red Hat OpenShift and VMware Tanzu.
Integrated lifecycle management prevents technical debt, because the entire software stack, including all of the integrations and software versions, is under single version control. It is managed as a whole in a single software repository to minimize complexity, even after the initial deployment. Software updates to the stack and its components, improvements to the integrations between components, and additions to the stack are done automatically as part of the managed service. You don’t have to spend engineering time on the container platform, so you can dedicate 100% of your development capacity to software development, not toil.
Finally, Otomi is a true turn-key solution, requiring zero initial investment. Pricing is pay-as-you-go and flexible, based on the number of container clusters under management.
After talking to Otomi’s founders and testing out their software, I think their take on ‘the easy way’ of Kubernetes is an interesting one, and different enough from solutions like Spectro Cloud’s configurability-focused approach to warrant its own place in the competitive landscape.
They focus on the suite of products that Kubernetes needs to become enterprise-ready and production-grade, and specifically, Otomi takes care of the integration of those products. It takes open source products like Keycloak, Prometheus, Jaeger, Harbor, Grafana, and Istio (full list here), and packages not just the products but the integrations between them into a commercial offering, so you don’t have to manually install, configure, and maintain these products.
That saves teams from massive amounts of complexity, and from a lot of headaches that come from broken upgrades, which I think is a major pitfall in open source software today.
What makes Otomi unique is that its pricing model is subscription-based, but not as-a-Service. The Otomi stack is installed into your VPC or datacenter using a simple bootstrap installation process. This means the stack is ‘yours’ and continues to work, unlike SaaS solutions that stop working if you stop paying. The installable approach does mean a little more friction during onboarding, but given the amount of friction and frustration this all-in-one solution prevents, that trade-off seems well worth it.