Migrating Helm v2 to v3

    • Removal of Tiller:
      • Replaces the client/server architecture with a client/library architecture (helm binary only)
      • Security is now on a per-user basis (delegated to Kubernetes user cluster security)
      • Releases are now stored as in-cluster Secrets and the release object metadata has changed
      • Releases are persisted on a release namespace basis and not in the Tiller namespace anymore
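      • For illustration, a minimal sketch of listing where v3 keeps release data (assumes kubectl access and a release installed into the "default" namespace; the namespace is only an example):
          # Each revision appears as a Secret named sh.helm.release.v1.<release>.v<revision>
          kubectl get secrets --namespace default -l "owner=helm"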
    • Chart repository updated:
      • helm search now supports both local repository searches and making search queries against Helm Hub
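      • For example, the two search modes look as follows (the "nginx" keyword is only an illustration):
          helm search repo nginx   # searches chart repositories added locally via 'helm repo add'
          helm search hub nginx    # queries charts indexed on the Helm Hub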
    • Chart apiVersion bumped to “v2” for the following specification changes:
      • Dynamically linked chart dependencies moved to Chart.yaml (requirements.yaml removed and requirements --> dependencies)
      • Library charts (helper/common charts) can now be added as dynamically linked chart dependencies
      • Charts have a type metadata field to define the chart as either an application or a library chart. It is application by default, which means it is renderable and installable
      • Helm 2 charts (apiVersion=v1) are still installable
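      • A minimal sketch of an apiVersion “v2” Chart.yaml (the chart name, version and the "common" dependency below are made-up values):
          # Chart.yaml
          apiVersion: v2                  # chart specification used by Helm v3
          name: my-app                    # hypothetical application chart
          version: 0.1.0
          type: application               # default; set to "library" for helper/common charts
          dependencies:                   # moved here from the removed requirements.yaml
            - name: common                # hypothetical dynamically linked library chart
              version: 1.x.x
              repository: https://example.com/charts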
    • XDG directory specification added:
      • Helm home removed and replaced with the XDG directory specification for storing configuration files
      • No longer need to initialize Helm
      • helm init and home removed
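      • A quick way to see the v3 locations is helm env; the paths below are the typical XDG defaults on Linux and may differ per operating system:
          helm env
          # HELM_CACHE_HOME="$HOME/.cache/helm"
          # HELM_CONFIG_HOME="$HOME/.config/helm"
          # HELM_DATA_HOME="$HOME/.local/share/helm"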
    • Helm v2 and v3 managing the same cluster:
      • Helm v2 and v3 can quite happily manage the same cluster. The Helm versions can be installed on the same or separate systems
      • If installing Helm v3 on the same system, you need to perform an additional step to ensure that both client versions can co-exist until you are ready to remove the Helm v2 client. Rename or put the Helm v3 binary in a different folder to avoid conflict (see the sketch after this list)
      • Otherwise there are no conflicts between both versions because of the following distinctions:
        • v2 and v3 release (history) storage are independent of each other. The changes include the Kubernetes resource used for storage and the release object metadata contained in the resource. Releases will also be on a per-user namespace basis instead of using the Tiller namespace (for example, the v2 default Tiller namespace kube-system). v2 uses “ConfigMaps” or “Secrets” under the Tiller namespace with TILLER ownership. v3 uses “Secrets” in the user namespace with helm ownership. Releases are incremental in both v2 and v3
        • The only issue could be if Kubernetes cluster-scoped resources (e.g. clusterroles.rbac) are defined in a chart. The v3 deployment would then fail even if unique in the namespace, as the resources would clash
        • v3 configuration no longer uses HELM_HOME and uses the XDG directory specification instead. It is also created on the fly as needed. It is therefore independent of the v2 configuration. This is applicable only when both versions are installed on the same system
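      • A minimal sketch of keeping both clients side by side (the /usr/local/bin path and the helm3 name are arbitrary choices; assumes the v3 binary was downloaded to the current directory):
          chmod +x ./helm
          sudo mv ./helm /usr/local/bin/helm3       # keep the existing v2 client as "helm"
          helm version --short                      # v2 client (talks to Tiller)
          helm3 version --short                     # v3 client (no Tiller)
          # The two release stores do not overlap and can be checked independently:
          kubectl get configmaps -n kube-system -l "OWNER=TILLER"   # v2 releases
          kubectl get secrets --all-namespaces -l "owner=helm"      # v3 releases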
    • Migrating Helm v2 releases to Helm v3:
      • This use case applies when you want Helm v3 to manage existing Helm v2 releases
      • It should be noted that a Helm v2 client:
        • can manage 1 to many Kubernetes clusters
        • can connect to 1 to many Tiller instances for a cluster
      • This means that you have to be cognisant of this when migrating, as releases are deployed into clusters by Tiller and its namespace. You therefore have to be aware of migrating for each cluster and each Tiller instance that is managed by the Helm v2 client instance
      • The recommended data migration path is as follows:
        • Backup v2 data
        • Migrate Helm v2 configuration
        • Migrate Helm v2 releases
        • When happy that Helm v3 is managing all Helm v2 data (for all clusters and Tiller instances of the Helm v2 client instance) as expected, then clean up Helm v2 data
      • The migration process is automated by the Helm v3 2to3 plugin (see the sketch below)
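      • A minimal sketch of that path using the 2to3 plugin (the helm3 binary name, the my-release release and the backup file name are illustrative; --dry-run can be used to preview the convert and cleanup steps first):
          # 1. Backup v2 release data (v2 default storage: ConfigMaps in kube-system)
          kubectl get configmaps -n kube-system -l "OWNER=TILLER" -o yaml > helm2-releases-backup.yaml
          # 2. Install the plugin and migrate v2 configuration (repos, plugins, etc.)
          helm3 plugin install https://github.com/helm/helm-2to3
          helm3 2to3 move config
          # 3. Convert each v2 release, per cluster and per Tiller instance
          helm3 2to3 convert my-release
          # 4. Once Helm v3 manages everything as expected, remove v2 config, release data and Tiller
          helm3 2to3 cleanup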
    • Helm v3 plugin