Installing Helm

    IMPORTANT: If you are responsible for ensuring your cluster is a controlled environment, especially when resources are shared, it is strongly recommended that you install Tiller using a secured configuration. For guidance, see Securing your Helm Installation.

    The Helm client can be installed either from source, or from pre-built binary releases.

    From the Binary Releases

    Every release of Helm provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.

    1. Download your desired version
    2. Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
    3. Find the helm binary in the unpacked directory, and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)

    From there, you should be able to run the client: helm help.

    From Snap (Linux)

    The Snap package for Helm is maintained by Snapcrafters.
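
    A typical installation looks like the following (the snap is generally published with classic confinement, so the --classic flag is usually required):

        sudo snap install helm --classic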

    From Homebrew (macOS)

    Members of the Kubernetes community have contributed a Helm formula build to Homebrew. This formula is generally up to date.

        brew install kubernetes-helm

    (Note: There is also a formula for emacs-helm, which is a different project.)

    From Chocolatey or scoop (Windows)

    Members of the Kubernetes community have contributed a build to Chocolatey. This package is generally up to date.

        choco install kubernetes-helm

    The binary can also be installed via the scoop command-line installer.

        scoop install helm

    From Script

    Helm now has an installer script that will automatically grab the latest version of the Helm client and install it locally.

    You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.

        $ curl -LO https://git.io/get_helm.sh
        $ chmod 700 get_helm.sh
        $ ./get_helm.sh

    Yes, you can curl -L https://git.io/get_helm.sh | bash that if you want to live on the edge.

    From Canary Builds

    “Canary” builds are versions of the Helm software that are built from the latest master branch. They are not official releases, and may not be stable. However, they offer the opportunity to test cutting-edge features.

    Canary Helm binaries are stored in the Kubernetes Helm GCS bucket, where the latest build for each supported platform can be downloaded.

    From Source (Linux, macOS)

    Building Helm from source is slightly more work, but is the best way to go if you want to test the latest (pre-release) Helm version.

    You must have a working Go environment.

        $ cd $GOPATH
        $ mkdir -p src/k8s.io
        $ cd src/k8s.io
        $ git clone https://github.com/helm/helm.git
        $ cd helm
        $ make bootstrap build

    The bootstrap target will attempt to install dependencies, rebuild the vendor/ tree, and validate configuration.

    The build target will compile helm and place it in bin/helm. Tiller is also compiled, and is placed in bin/tiller.

    Installing Tiller

    Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster. But for development, it can also be run locally, and configured to talk to a remote Kubernetes cluster.

    Special Note for RBAC Users

    Most cloud providers enable a feature called Role-Based Access Control - RBAC for short. If your cloud provider enables this feature, you will need to create a service account for Tiller with the right roles and permissions to access resources.
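
    As a minimal sketch of that setup (the service account and binding names below are illustrative, and cluster-admin is the broadest possible role; a tighter role is preferable on shared clusters):

        kubectl create serviceaccount tiller --namespace kube-system
        kubectl create clusterrolebinding tiller-cluster-rule \
          --clusterrole=cluster-admin \
          --serviceaccount=kube-system:tiller
        helm init --service-account tiller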

    Easy In-Cluster Installation

    The easiest way to install tiller into the cluster is simply to run helm init. This will validate that helm’s local environment is set up correctly (and set it up if necessary). Then it will connect to whatever cluster kubectl connects to by default (kubectl config view). Once it connects, it will install tiller into the kube-system namespace.

    After helm init, you should be able to run kubectl get pods --namespace kube-system and see Tiller running.
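
    For example (the label selector matches the labels shown in the Tiller deployment manifest later in this document):

        helm init
        kubectl get pods --namespace kube-system -l app=helm,name=tiller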

    You can explicitly tell helm init to do any of the following (a combined example follows the list):

    • Install the canary build with the --canary-image flag
    • Install a particular image (version) with --tiller-image
    • Install to a particular cluster with --kube-context
    • Install into a particular namespace with --tiller-namespace
    • Install Tiller with a Service Account with --service-account (for RBAC enabled clusters)
    • Install Tiller without mounting a service account with --automount-service-account false
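
    For instance, a sketch combining several of these flags (my-cluster and tiller-system are placeholder names for your own kube context and namespace):

        helm init --kube-context my-cluster \
          --tiller-namespace tiller-system \
          --service-account tiller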

    Once Tiller is installed, running helm version should show you both the client and server version. (If it shows only the client version, helm cannot yet connect to the server. Use kubectl to see if any pods are running.)

    Helm will look for Tiller in the kube-system namespace unless --tiller-namespace or TILLER_NAMESPACE is set.
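
    For example (tiller-system is again a placeholder namespace):

        export TILLER_NAMESPACE=tiller-system
        helm version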

    Installing Tiller Canary Builds

    Canary images are built from the master branch. They may not be stable, but they offer you the chance to test out the latest features.

    The easiest way to install a canary image is to use helm init with the --canary-image flag:

        $ helm init --canary-image

    This will use the most recently built container image. You can always uninstall Tiller by deleting the Tiller deployment from the kube-system namespace using kubectl.
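
    That cleanup step, should you need it, is the same one described under “Deleting or Reinstalling Tiller” below:

        kubectl delete deployment tiller-deploy --namespace kube-system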

    Running Tiller Locally

    For development, it is sometimes easier to work on Tiller locally, and configure it to connect to a remote Kubernetes cluster.

    The process of building Tiller is explained above.

    Once tiller has been built, simply start it; by default it listens on port 44134:
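
        $ bin/tiller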

    When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)

    You must tell helm to connect to this new local Tiller host instead of connecting to the one in-cluster. There are two ways to do this. The first is to specify the --host option on the command line. The second is to set the $HELM_HOST environment variable.

        $ export HELM_HOST=localhost:44134
        $ helm version # Should connect to localhost.
        Client: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"db...", GitTreeState:"dirty"}
        Server: &version.Version{SemVer:"v2.0.0-alpha.4", GitCommit:"a5...", GitTreeState:"dirty"}

    Importantly, even when running locally, Tiller will store release configuration in ConfigMaps inside of Kubernetes.

    Upgrading Tiller

    As of Helm 2.2.0, Tiller can be upgraded using helm init --upgrade.
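
    For example:

        $ helm init --upgrade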

    For older versions of Helm, or for manual upgrades, you can use kubectl to modify the Tiller image:

        $ export TILLER_TAG=v2.0.0-beta.1 # Or whatever version you want
        $ kubectl --namespace=kube-system set image deployments/tiller-deploy tiller=gcr.io/kubernetes-helm/tiller:$TILLER_TAG
        deployment "tiller-deploy" image updated

    Setting TILLER_TAG=canary will get the latest snapshot of master.

    Deleting or Reinstalling Tiller

    Because Tiller stores its data in Kubernetes ConfigMaps, you can safely delete and re-install Tiller without worrying about losing any data. The recommended way of deleting Tiller is with kubectl delete deployment tiller-deploy --namespace kube-system, or more concisely helm reset.

    Tiller can then be re-installed from the client with:

        $ helm init

    Advanced Usage

    helm init provides additional flags for modifying Tiller’s deployment manifest before it is installed.

    Using --node-selectors

    The example below will create the specified label under the nodeSelector property.

        helm init --node-selectors "beta.kubernetes.io/os"="linux"

    The installed deployment manifest will contain our node selector label.

        ...
        spec:
          template:
            spec:
              nodeSelector:
                beta.kubernetes.io/os: linux

    Using --override

    --override allows you to specify properties of Tiller’s deployment manifest. Unlike the --set command used elsewhere in Helm, helm init --override manipulates the specified properties of the final manifest (there is no “values” file). Therefore you may specify any valid value for any valid property in the deployment manifest.

    Override annotation

    In the example below we use --override to add the revision property and set its value to 1.

        helm init --override metadata.annotations."deployment\.kubernetes\.io/revision"="1"

    Output:
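
    (Only a trimmed sketch of the relevant fragment is shown; the overridden annotation lands under metadata.)

        apiVersion: extensions/v1beta1
        kind: Deployment
        metadata:
          annotations:
            deployment.kubernetes.io/revision: "1"
        ...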

    Override affinity

    In the example below we set properties for node affinity. Multiple commands may be combined to modify different properties of the same list item.

    1. helm init --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].weight"="1" --override "spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[0].preference.matchExpressions[0].key"="e2e-az-name"

    The specified properties are combined into the “preferredDuringSchedulingIgnoredDuringExecution” property’s first list item.

        ...
        spec:
          strategy: {}
          template:
            ...
            spec:
              affinity:
                nodeAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                  - preference:
                      matchExpressions:
                      - key: e2e-az-name
                        operator: ""
                    weight: 1
        ...

    Using --output

    The --output flag allows us to skip the installation of Tiller’s deployment manifest and simply output the deployment manifest to stdout in either JSON or YAML format. The output may then be modified with tools like jq and installed manually with kubectl.

    In the example below we execute helm init with the --output json flag.

        helm init --output json

    The Tiller installation is skipped and the manifest is output to stdout in JSON format.

    1. "apiVersion": "extensions/v1beta1",
    2. "kind": "Deployment",
    3. "metadata": {
    4. "creationTimestamp": null,
    5. "labels": {
    6. "app": "helm",
    7. "name": "tiller"
    8. },
    9. "name": "tiller-deploy",
    10. "namespace": "kube-system"
    11. },
    12. ...
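
    One way to act on this output (a minimal sketch; the file name is arbitrary) is to dump the manifest, edit it by hand, and apply it with kubectl:

        helm init --output yaml > tiller.yaml
        # edit tiller.yaml as needed, then:
        kubectl apply -f tiller.yaml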

    Storage backends

    By default, tiller stores release information in ConfigMaps in the namespace where it is running.
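
    If you want to inspect those records (assuming the default kube-system namespace; Tiller marks its release ConfigMaps with an OWNER=TILLER label):

        kubectl get configmaps --namespace kube-system -l OWNER=TILLER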

    Secret storage backend

    As of Helm 2.7.0, there is now a beta storage backend that uses Secrets for storing release information. This was added for additional security in protecting charts in conjunction with the release of Secret encryption in Kubernetes.

    To enable the secrets backend, you’ll need to init Tiller with the following options:

        helm init --override 'spec.template.spec.containers[0].command'='{/tiller,--storage=secret}'

    Currently, if you want to switch from the default backend to the secrets backend, you’ll have to do the migration for this on your own. When this backend graduates from beta, there will be a more official migration path.

    SQL storage backend

    As of Helm 2.14.0 there is now a beta SQL storage backend that stores release information in an SQL database (only postgres has been tested so far).

    Using such a storage backend is particularly useful if your release information weighs more than 1MB (in which case, it can’t be stored in ConfigMaps/Secrets because of internal limits in Kubernetes’ underlying etcd key-value store).

    To enable the SQL backend, you’ll need to deploy a SQL database and init Tiller with the following options:

        helm init \
          --override \
            'spec.template.spec.containers[0].args'='{--storage=sql,--sql-dialect=postgres,--sql-connection-string=postgresql://tiller-postgres:5432/helm?user=helm&password=changeme}'

    PRODUCTION NOTES: it’s recommended to change the username and password of the SQL database in production deployments. Enabling SSL is also a good idea. Last, but not least, perform regular backups/snapshots of your SQL database.
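
    As an illustrative sketch of the backup step (host, user, and database names follow the connection string above; adjust them to your deployment):

        pg_dump --host tiller-postgres --username helm helm > helm-releases-backup.sql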

    Currently, if you want to switch from the default backend to the SQL backend, you’ll have to do the migration for this on your own. When this backend graduates from beta, there will be a more official migration path.

    Conclusion

    In most cases, installation is as simple as getting a pre-built helm binary and running helm init. This document covers additional cases for those who want to do more sophisticated things with Helm.