Companies are adopting Kubernetes at a rapid pace, and there is no doubt that it serves as the foundation of almost every container platform of choice. Kubernetes is a ‘batteries included but replaceable’ deal: it integrates with technologies for CI/CD, monitoring, data storage, security and much more, further speeding up the development and deployment of applications. The Helm package manager fits into this picture very well and has quickly become the package manager of choice for application deployment on Kubernetes.
What is Helm?
Helm lets you easily deploy your applications to any Kubernetes cluster, regardless of where you’re deploying: in the public cloud or in your local datacenter. Helm’s value is in the metadata description of the applications; deploying an application on Kubernetes requires ‘manifests’ written in YAML, and these differ (vastly) between applications. With Helm, the application vendor writes these files so you don’t have to.
Helm packages it all together and manages the deployment in a consistent way across Pods, container images, DaemonSets, load balancers, service accounts, secrets and more.
Helm does many other things, too:
- Install and upgrade software and individual components
- Configure and deploy software deployments including a release history
- Fetch software packages called Helm charts from repositories
- Search for Helm charts in Helm chart repositories
- Lint Helm charts to catch errors before deployment
- And a lot more
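Several of these capabilities can be tried out directly from the command line. The commands below are a minimal sketch, assuming Helm 3 is installed and using the Bitnami chart repository as an example:

```shell
# Register a chart repository and refresh its index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search the configured repositories for a chart
helm search repo nginx

# Fetch a chart locally without installing it
helm pull bitnami/nginx
```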
Backed by the CNCF
Helm was incorporated into and adopted by the CNCF a while ago. However, quite recently, it moved from the Incubating phase to the Graduated phase. This means Helm is now a mature software product, developed and supported by a large community.
At the end of 2019, Helm 3 was released. The most important improvements over version 2 are:
- No Tiller. Previous versions of Helm required Tiller, a server-side component that listened to client commands to render and deploy the actual Helm charts to Kubernetes clusters. Helm 2 was developed before Kubernetes supported Role-Based Access Control (RBAC), so Helm (including Tiller) had to take care of access control itself. Removing Tiller is a big improvement in terms of security and maintainability, since access control is now handled by Kubernetes itself instead of by Helm.
- Helm 3 stores release information in Secrets, unique per Kubernetes namespace. Helm 2 stored release configuration in ConfigMaps (a generic way to store data), which required a lot of operations to read. Helm 3 reads the Secrets directly, which simplifies the deployment process.
- Changes to namespaces. In Helm 3 you need to provide a (valid) namespace; if you do not, the namespace is no longer created automatically. This reduces confusion for developers, since namespaces are no longer created on the fly without being specified explicitly.
- Chart validation. It is now possible to validate the syntax of a Helm chart. Shift left and avoid broken deployments by including this validation in your CI/CD pipeline.
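The namespace and validation changes can be seen in practice. A minimal sketch, assuming Helm 3 and a local chart directory named hello-world:

```shell
# Helm 3 no longer creates namespaces implicitly;
# either create one first or request it explicitly:
helm install hello-world ./hello-world \
  --namespace demo --create-namespace

# Validate the chart syntax, e.g. as a step in a CI/CD pipeline:
helm lint ./hello-world
```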
A simple chart
Given these great improvements, let’s take a look at a simple chart for the hello-world application. To bootstrap a chart, use the following command:
helm create hello-world
For the hello-world chart, a structure similar to the following is created (the exact files vary slightly between Helm versions):

    hello-world/
    ├── Chart.yaml
    ├── charts/
    ├── templates/
    │   ├── NOTES.txt
    │   ├── _helpers.tpl
    │   ├── deployment.yaml
    │   ├── ingress.yaml
    │   ├── service.yaml
    │   ├── serviceaccount.yaml
    │   └── tests/
    │       └── test-connection.yaml
    └── values.yaml
As you can see, it contains a lot of YAML files similar to plain Kubernetes resource files. The main difference is the use of reusable templates: every YAML file inside the templates directory is processed by the template engine. You can concentrate on the contents of:
- Chart.yaml with all of the metadata of the chart (name, version, application version, author, compatible Kubernetes version, etc).
- values.yaml with all of the configuration items for the chart.
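As an illustration, a minimal Chart.yaml and values.yaml for the hello-world chart could look like this (the names and values are example assumptions):

```yaml
# Chart.yaml: the chart's metadata
apiVersion: v2          # v2 marks this as a Helm 3 chart
name: hello-world
description: A simple hello-world application
version: 0.1.0          # version of the chart itself
appVersion: "1.0.0"     # version of the packaged application

---
# values.yaml: the chart's configuration items
replicaCount: 1
image:
  repository: nginx
  tag: stable
service:
  type: ClusterIP
  port: 80
```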
Charts, especially for larger applications, can become large and complex quickly. Helm charts should be treated as artifacts, with proper versioning, testing and naming conventions for the different components.
Helm has been widely adopted by the open-source community and many open-source applications and components are available as public Helm charts. This makes deploying applications that were traditionally very hard to deploy a breeze.
For many of them, you don’t need to tweak the details to get started. You can pass parameters to a Helm chart to configure it for your own needs, or you can change the default values in the chart’s main values file.
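Overriding a chart’s defaults can be done per setting or with a custom values file. A sketch, assuming a hello-world chart that exposes a replicaCount value:

```shell
# Override a single value on the command line
helm install hello-world ./hello-world --set replicaCount=3

# Or keep your overrides in a separate values file
helm install hello-world ./hello-world -f my-values.yaml
```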
Helm charts fall into one of these categories/stages:
- Incubator: Helm charts in a development stage
- Stable: stable charts for general consumption
It’s advised to only use stable Helm charts for production systems, as incubating charts can change frequently and can include breaking changes.
One of the biggest benefits of Helm charts is the ability to pack and ship them like regular software artifacts. Similar to other pieces of infrastructure, like Docker images, you can process them using pipelines to validate and test changes to charts before pushing them to ‘production’: a Helm repository.
With ChartMuseum you can host your own Helm chart repository, similar to an artifact repository. It’s also possible to push Helm charts to Azure Container Registry or to S3 in AWS, and more options will surely follow. If you don’t need all of this, you can just create a local repository, somewhat similar to your own local Docker registry. For detailed instructions, be sure to check out the website of Andrew Lock.
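Packaging a chart and pushing it to a self-hosted ChartMuseum instance could look like this (the localhost URL is an assumption; ChartMuseum accepts uploads on its /api/charts endpoint):

```shell
# Package the chart directory into a versioned .tgz artifact
helm package ./hello-world    # produces e.g. hello-world-0.1.0.tgz

# Upload the packaged chart to the ChartMuseum repository
curl --data-binary "@hello-world-0.1.0.tgz" http://localhost:8080/api/charts
```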
At the heart of every DevOps organization are automated pipelines for software delivery and creating software artifacts. For Helm charts, these integrations into pipelines exist:
- GitLab CI. It is possible to deploy Helm charts from within your GitLab CI pipelines. GitLab CI can connect to your Kubernetes cluster directly. Note that the open-source version of GitLab CI can only connect to a single cluster.
- Azure DevOps. With Azure DevOps Pipelines, you can also deploy Helm charts. This requires a so-called “Service connection” between Azure DevOps and your Kubernetes cluster in Azure (AKS).
- AWS CodeBuild and CodePipeline. As described before, AWS S3 can be turned into a Helm chart repository. Uploading a Helm chart to S3 can then trigger a CodePipeline run.
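As an illustration of such a pipeline, here is a minimal .gitlab-ci.yml sketch that lints and packages a chart (the image tag and stage names are assumptions; alpine/helm is a community-maintained Helm image):

```yaml
stages:
  - test
  - package

lint-chart:
  stage: test
  image: alpine/helm:3.2.4
  script:
    - helm lint ./hello-world

package-chart:
  stage: package
  image: alpine/helm:3.2.4
  script:
    - helm package ./hello-world
  artifacts:
    paths:
      - hello-world-*.tgz
```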
Helm charts are deployed using the “helm install” command. Once a chart is deployed, the most important operations are the following:
- View existing and previous deployments (the release history) of Helm charts with “helm history”.
- Gradually upgrade an application with “helm upgrade”.
- Roll back to a previous version in case of an error with “helm rollback”.
- Get details of the entire deployment definition with “helm get”.
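Put together, a typical release lifecycle looks like this (release and chart names are assumptions):

```shell
helm install hello-world ./hello-world   # initial deployment
helm upgrade hello-world ./hello-world   # roll out a new chart/app version
helm history hello-world                 # view the release history
helm rollback hello-world 1              # roll back to revision 1 on error
helm get all hello-world                 # inspect the entire deployment definition
```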
It’s also possible to deploy Helm charts using Terraform. If you prefer this, you need to install the Helm provider for Terraform. This way, you only need one tool for your infrastructure components and Helm charts.
The drawback is that you don’t get all of the features of native Helm deployments: it’s not possible to list your deployments and/or application versions in your cluster through Helm itself, since Terraform deploys the charts in the background and only tracks them in its own state. It’s also very difficult to get feedback in case of a deployment failure.
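A minimal sketch of the Terraform approach, using the Helm provider’s helm_release resource (paths and values are assumptions):

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "hello_world" {
  name      = "hello-world"
  chart     = "./hello-world"
  namespace = "demo"

  # Override chart defaults, comparable to --set on the CLI
  set {
    name  = "replicaCount"
    value = "2"
  }
}
```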
Every software component should be tested early in the development lifecycle, and Helm charts are no exception. The helm-unittest plugin and Terratest are well-known tools for testing Helm charts. Terratest in particular is very promising, as it can test a lot of other infrastructure-related tools as well: Packer, Terraform, Docker, AWS and more.
You can use Terratest to run static tests for your Helm charts to validate the syntax of your chart. You can also run actual integration tests to verify if your deployments are working correctly.
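Terratest tests are written in Go. A sketch of such a static test, which renders a chart template without needing a live cluster (the chart path, value name and expected replica count are assumptions):

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/helm"
	"github.com/stretchr/testify/require"
	appsv1 "k8s.io/api/apps/v1"
)

// Render the deployment template of the chart with an override,
// then assert on the resulting Kubernetes object.
func TestHelloWorldDeployment(t *testing.T) {
	options := &helm.Options{
		SetValues: map[string]string{"replicaCount": "2"},
	}

	// Render only templates/deployment.yaml of the local chart.
	output := helm.RenderTemplate(t, options, "../hello-world", "hello-world",
		[]string{"templates/deployment.yaml"})

	var deployment appsv1.Deployment
	helm.UnmarshalK8SYaml(t, output, &deployment)

	require.Equal(t, int32(2), *deployment.Spec.Replicas)
}
```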
If your test fails, you can roll back to a previous version of the Helm chart. No more need to manually deploy Helm charts and verify if they work correctly. Once these tests are executed, a test report is generated and the infrastructure resources are cleaned up.
This article provided an overview of the main features of the Helm package manager for Kubernetes applications. It touched on various important aspects, like the recent improvements, support for chart repositories and options to integrate Helm with your existing tools. I hope it has inspired you to give Helm a try.