Upgrading Linkerd

Before starting, read through the version-specific upgrade notices further below; they may contain important information you need to be aware of before beginning the upgrade process.

There are three components that need to be upgraded, in turn:

• the CLI
• the control plane
• the data plane

    Upgrade the CLI

This will upgrade your local CLI to the latest version. Follow these instructions anywhere the Linkerd CLI is used. Helm users can skip ahead to the Helm instructions below.

To upgrade the CLI locally, run:

curl -sL https://run.linkerd.io/install | sh

    Alternatively, you can download the CLI directly via the Linkerd releases page.

Verify the CLI is installed and running correctly with:

linkerd version --client

    Which should display:

Client version: stable-2.10.2

    Note

    Until you upgrade the control plane, some new CLI commands may not work.

You are now ready to upgrade the control plane.

    Upgrade the Control Plane

    Now that you have upgraded the CLI, it is time to upgrade the Linkerd control plane on your Kubernetes cluster. Don’t worry, the existing data plane will continue to operate with a newer version of the control plane and your meshed services will not go down.

    Note

You will lose the historical data from Prometheus. If you would like to have that data persisted through an upgrade, take a look at the persistence documentation.

    Use the linkerd upgrade command to upgrade the control plane. This command ensures that all of the control plane’s existing configuration and mTLS secrets are retained. Notice that we use the --prune flag to remove any Linkerd resources from the previous version which no longer exist in the new version.

linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

    Next, run this command again with some --prune-whitelist flags added. This is necessary to make sure that certain cluster-scoped resources are correctly pruned.

linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd \
  --prune-whitelist=rbac.authorization.k8s.io/v1/clusterrole \
  --prune-whitelist=rbac.authorization.k8s.io/v1/clusterrolebinding \
  --prune-whitelist=apiregistration.k8s.io/v1/apiservice -f -

    For upgrading a multi-stage installation setup, follow the instructions at Upgrading a multi-stage install.

Users who have previously saved the Linkerd control plane's configuration to files can follow the instructions at Upgrading via manifests to ensure those configurations are retained by the linkerd upgrade command.

    With Helm

For a Helm workflow, check out the instructions for upgrading with Helm.

    Verify the control plane upgrade

    Once the upgrade process completes, check to make sure everything is healthy by running:

linkerd check

    This will run through a set of checks against your control plane and make sure that it is operating correctly.

    To verify the Linkerd control plane version, run:

linkerd version

    Which should display:

Client version: stable-2.10.2
Server version: stable-2.10.2

Next, we will upgrade the data plane.

    Upgrade the Data Plane

    With a fully up-to-date CLI running locally and Linkerd control plane running on your Kubernetes cluster, it is time to upgrade the data plane. The easiest way to do this is to run a rolling deploy on your services, allowing the proxy-injector to inject the latest version of the proxy as they come up.

    With kubectl 1.15+, this can be as simple as using the kubectl rollout restart command to restart all your meshed services. For example,

kubectl -n <namespace> rollout restart deploy

    Note

    Unless otherwise documented in the release notes, stable release control planes should be compatible with the data plane from the previous stable release. Thus, data plane upgrades can be done at any point after the control plane has been upgraded, including as part of the application’s natural deploy cycle. A gap of more than one stable version between control plane and data plane is not recommended.

    Workloads that were previously injected using the linkerd inject --manual command can be upgraded by re-injecting the applications in-place. For example,

kubectl -n emojivoto get deploy -l linkerd.io/control-plane-ns=linkerd -oyaml \
  | linkerd inject --manual - \
  | kubectl apply -f -

    Verify the data plane upgrade

Check to make sure everything is healthy by running:

linkerd check --proxy

This will run through a set of checks to verify that the data plane is operating correctly, and will list any pods that are still running older versions of the proxy.

Congratulations! You have successfully upgraded Linkerd to the newer version. If you have any questions, feel free to raise them in the #linkerd2 channel in the Linkerd Slack.

      Upgrade notice: stable-2.10.0

      If you are currently running Linkerd 2.9.0, 2.9.1, 2.9.2, or 2.9.3 (but not 2.9.4), and you upgraded to that release using the --prune flag (as opposed to installing it fresh), you will need to use the linkerd repair command as outlined in the Linkerd 2.9.3 upgrade notes before you can upgrade to Linkerd 2.10.

Additionally, there are two changes in the 2.10.0 release that may affect you. First, the handling of certain ports and protocols has changed. Please read through the notes on these changes for the repercussions.

      Second, we’ve introduced extensions and moved the default visualization components into a Linkerd-Viz extension. Read on for what this means for you.

      Visualization components moved to Linkerd-Viz extension

      With the introduction of extensions, all of the Linkerd control plane components related to visibility (including Prometheus, Grafana, Web, and Tap) have been removed from the main Linkerd control plane and moved into the Linkerd-Viz extension. This means that when you upgrade to stable-2.10.0, these components will be removed from your cluster and you will not be able to run commands such as linkerd stat or linkerd dashboard. To restore this functionality, you must install the Linkerd-Viz extension by running linkerd viz install | kubectl apply -f - and then invoke those commands through linkerd viz stat, linkerd viz dashboard, etc.

# Upgrade the control plane (this will remove viz components).
linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

# Prune cluster-scoped resources
linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd \
  --prune-whitelist=rbac.authorization.k8s.io/v1/clusterrole \
  --prune-whitelist=rbac.authorization.k8s.io/v1/clusterrolebinding \
  --prune-whitelist=apiregistration.k8s.io/v1/apiservice -f -

# Install the Linkerd-Viz extension to restore viz functionality.
linkerd viz install | kubectl apply -f -

      Helm users should note that configuration values related to these visibility components have moved to the Linkerd-Viz chart. Please update any values overrides you have and use these updated overrides when upgrading the Linkerd chart or installing the Linkerd-Viz chart. See below for a complete list of values which have moved.

helm repo update

# Upgrade the control plane (this will remove viz components).
helm upgrade linkerd2 linkerd/linkerd2 --reset-values -f values.yaml --atomic

# Install the Linkerd-Viz extension to restore viz functionality.
helm install linkerd2-viz linkerd/linkerd2-viz -f viz-values.yaml

      The following values were removed from the Linkerd2 chart. Most of the removed values have been moved to the Linkerd-Viz chart or the Linkerd-Jaeger chart.

      • dashboard.replicas moved to Linkerd-Viz as dashboard.replicas
      • tap moved to Linkerd-Viz as tap
      • tapResources moved to Linkerd-Viz as tap.resources
      • tapProxyResources moved to Linkerd-Viz as tap.proxy.resources
      • webImage moved to Linkerd-Viz as dashboard.image
      • webResources moved to Linkerd-Viz as dashboard.resources
      • webProxyResources moved to Linkerd-Viz as dashboard.proxy.resources
      • grafana moved to Linkerd-Viz as grafana
      • grafana.proxy moved to Linkerd-Viz as grafana.proxy
      • prometheus moved to Linkerd-Viz as prometheus
      • prometheus.proxy moved to Linkerd-Viz as prometheus.proxy
      • global.proxy.trace.collectorSvcAddr moved to Linkerd-Jaeger as webhook.collectorSvcAddr
      • global.proxy.trace.collectorSvcAccount moved to Linkerd-Jaeger as webhook.collectorSvcAccount
      • tracing.enabled removed
      • tracing.collector moved to Linkerd-Jaeger as collector
      • tracing.jaeger moved to Linkerd-Jaeger as jaeger

Also note that the global scope has been dropped from the Linkerd2 chart values, moving the config values underneath it into the root scope. Any values you had customized there will need to be migrated; in particular identityTrustAnchorsPEM, in order to preserve the value you set during install.
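
For example, a minimal sketch of carrying the trust anchor forward when upgrading the chart (the ca.crt file name is an assumption; point --set-file at wherever you keep your trust anchor PEM):

helm upgrade linkerd2 linkerd/linkerd2 --reset-values -f values.yaml \
  --set-file identityTrustAnchorsPEM=ca.crt --atomic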

      See upgrade notes for 2.9.3 below.

      Upgrade notice: stable-2.9.3

      Linkerd Repair

Due to a known issue in versions stable-2.9.0, stable-2.9.1, and stable-2.9.2, users who upgraded to one of those versions with the --prune flag (as described above) will have deleted the secret/linkerd-config-overrides resource which is necessary for performing any subsequent upgrades. Linkerd stable-2.9.3 includes a new linkerd repair command which restores this deleted resource. If you see unexpected error messages during upgrade such as "failed to read CA: not PEM-encoded", please upgrade your CLI to stable-2.9.3 and run:

linkerd repair | kubectl apply -f -

      This will restore the secret/linkerd-config-overrides resource and allow you to proceed with upgrading your control plane.

      Upgrade notice: stable-2.9.0

      Images are now hosted on ghcr.io

      As of this version images are now hosted under ghcr.io instead of gcr.io. If you’re pulling images into a private repo please make the necessary changes.

      Upgrading multicluster environments

      Linkerd 2.9 changes the way that some of the multicluster components work and are installed compared to Linkerd 2.8.x. Users installing the multicluster components for the first time with Linkerd 2.9 can ignore these instructions and instead refer directly to the installing multicluster instructions.

Users who installed the multicluster component in Linkerd 2.8.x and wish to upgrade to Linkerd 2.9 should follow the multicluster upgrade instructions.

In previous versions, when you injected your ingress controller (Nginx, Traefik, Ambassador, etc.), the ingress's balancing/routing choices would be overridden with Linkerd's (using service profiles, traffic splits, etc.).

As of 2.9, the ingress's choices are honored instead, which allows preserving things like session stickiness. Note, however, that this means per-route metrics are not collected, traffic splits will not be honored, and retries/timeouts are not applied.

If you want to revert to the previous behavior, inject the proxy into the ingress controller using the annotation linkerd.io/inject: ingress, as explained in the using ingress documentation.
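
For example, a sketch of adding that annotation to an ingress controller's pod template (the namespace and deployment name are assumptions for an Nginx setup; adjust for your environment):

kubectl -n ingress-nginx patch deploy ingress-nginx-controller --type merge \
  -p '{"spec":{"template":{"metadata":{"annotations":{"linkerd.io/inject":"ingress"}}}}}'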

      Breaking changes in Helm charts

      Post-upgrade cleanup

      In order to better support cert-manager, the secrets linkerd-proxy-injector-tls, linkerd-sp-validator-tls and linkerd-tap-tls have been replaced by the secrets linkerd-proxy-injector-k8s-tls, linkerd-sp-validator-k8s-tls and linkerd-tap-k8s-tls respectively. If you upgraded through the CLI, please delete the old ones (if you upgraded through Helm the cleanup was automated).
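
For CLI-managed installations, the old secrets can be removed with something along these lines:

kubectl -n linkerd delete secret linkerd-proxy-injector-tls linkerd-sp-validator-tls linkerd-tap-tls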

      Upgrade notice: stable-2.8.0

There are no version-specific notes for upgrading to this release. The upgrade process detailed above (upgrade the CLI, upgrade the control plane, then upgrade the data plane) should work.

      Upgrade notice: stable-2.7.0

      Checking whether any of your TLS certificates are approaching expiry

This version introduces a set of CLI flags and checks that help you rotate your TLS certificates. The new CLI checks will warn you if any of your certificates are expiring in the next 60 days. If, however, you want to check the expiration date of your certificates and determine for yourself whether you should be rotating them, you can execute the following commands. Note that this will require the step CLI and jq 1.6.

      Check your trust roots:

kubectl -n linkerd get cm linkerd-config -o=jsonpath="{.data}" | \
  jq -r .identityContext.trustAnchorsPem | \
  step certificate inspect --short -

X.509v3 Root CA Certificate (ECDSA P-256) [Serial: 1]
  Subject:  identity.linkerd.cluster.local
  Issuer:   identity.linkerd.cluster.local
  Valid to: 2021-01-13T13:23:52Z

      Check your issuer certificate:
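
A sketch of one way to do this, assuming the issuer certificate lives in the linkerd-identity-issuer secret under the crt.pem key:

kubectl -n linkerd get secret linkerd-identity-issuer -o=jsonpath="{.data['crt\.pem']}" | \
  base64 -d | \
  step certificate inspect --short -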

If you determine that you wish to rotate your certificates, you can follow the process outlined in the certificate rotation documentation. Note that this process relies on functionality available in stable-2.7.0, so make sure to upgrade before you start your cert rotation.

      When ready, you can begin the upgrade process by installing the new CLI.

      Breaking changes in Helm charts

As part of an effort to follow Helm's best practices, the Linkerd Helm chart has been restructured. As a result, most of the keys have been changed. In order to ensure a trouble-free upgrade of your Helm installation, please take a look at the Helm upgrade procedure. To get a precise view of what has changed, you can compare the previous and the stable-2.7.0 values.yaml files.

      Note

      Upgrading to this release from edge-19.9.3, edge-19.9.4, edge-19.9.5 and edge-19.10.1 will incur data plane downtime, due to a recent change introduced to ensure zero downtime upgrade for previous stable releases.

The destination container is now deployed as its own Deployment workload. When you are planning the upgrade from one of the edge versions listed above, be sure to allocate time to restart the data plane once the control plane is successfully upgraded. This restart can be done at your convenience and rolled out over whatever period is appropriate for your application.

      If you are upgrading from a previous stable version, restarting the data-plane is recommended as a best practice, although not necessary.

      If you have previously labelled any of your namespaces with the linkerd.io/is-control-plane label so that their pod creation events are ignored by the HA proxy injector, you will need to update these namespaces to use the new config.linkerd.io/admission-webhooks: disabled label.
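
For example, a sketch for a single namespace (the namespace name is illustrative):

# add the new label
kubectl label namespace my-apps config.linkerd.io/admission-webhooks=disabled
# remove the old label (a trailing dash removes a label)
kubectl label namespace my-apps linkerd.io/is-control-plane-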

When ready, you can begin the upgrade process by installing the new CLI.

      Upgrade notice: stable-2.5.0

      This release supports Kubernetes 1.12+.

      Note

Linkerd 2.5.0 introduced Helm support. If Linkerd was installed via linkerd install, it must be upgraded via linkerd upgrade. If Linkerd was installed via Helm, it must be upgraded via Helm. Mixing these two installation procedures is not supported.

      Upgrading from stable-2.4.x

      Note

      These instructions also apply to upgrading from edge-19.7.4, edge-19.7.5, edge-19.8.1, edge-19.8.2, edge-19.8.3, edge-19.8.4, and edge-19.8.5.

      Use the linkerd upgrade command to upgrade the control plane. This command ensures that all of the control plane’s existing configuration and mTLS secrets are retained.

# get the latest stable CLI
curl -sL https://run.linkerd.io/install | sh

      Note

      The linkerd cli installer installs the CLI binary into a versioned file (e.g. linkerd-stable-2.5.0) under the $INSTALLROOT (default: $HOME/.linkerd) directory and provides a convenience symlink at $INSTALLROOT/bin/linkerd.

      If you need to have multiple versions of the linkerd cli installed alongside each other (for example if you are running an edge release on your test cluster but a stable release on your production cluster) you can refer to them by their full paths, e.g. $INSTALLROOT/bin/linkerd-stable-2.5.0 and $INSTALLROOT/bin/linkerd-edge-19.8.8.
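
For example, a sketch using shell aliases to keep two versions side by side (the paths follow the layout described above):

alias linkerd-prod="$HOME/.linkerd/bin/linkerd-stable-2.5.0"
alias linkerd-test="$HOME/.linkerd/bin/linkerd-edge-19.8.8"
linkerd-prod version --client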

linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

The options --prune -l linkerd.io/control-plane-ns=linkerd above make sure that any resources that have been removed from the linkerd upgrade output are also removed from the system.

For upgrading a multi-stage installation setup, follow the instructions at Upgrading a multi-stage install.

Users who have previously saved the Linkerd control plane's configuration to files can follow the instructions at Upgrading via manifests to ensure those configurations are retained by the linkerd upgrade command.

      Once the upgrade command completes, use the linkerd check command to confirm the control plane is ready.

      Note

      The stable-2.5 linkerd check command will return an error when run against an older control plane. This error is benign and will resolve itself once the control plane is upgraded to stable-2.5:

linkerd-config
--------------
control plane Namespace exists
control plane ClusterRoles exist
control plane ClusterRoleBindings exist
× control plane ServiceAccounts exist
    missing ServiceAccounts: linkerd-heartbeat
    see https://linkerd.io/checks/#l5d-existence-sa for hints

When ready, proceed to upgrading the data plane by following the instructions at Upgrade the data plane.

      Upgrade notice: stable-2.4.0

      This release supports Kubernetes 1.12+.

      Upgrading from stable-2.3.x, edge-19.4.5, edge-19.5.x, edge-19.6.x, edge-19.7.x

      Use the linkerd upgrade command to upgrade the control plane. This command ensures that all of the control plane’s existing configuration and mTLS secrets are retained.

# get the latest stable CLI
curl -sL https://run.linkerd.io/install | sh

      For Kubernetes 1.12+:

linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

      For Kubernetes pre-1.12 where the mutating and validating webhook configurations’ sideEffects fields aren’t supported:

linkerd upgrade --omit-webhook-side-effects | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

      The sideEffects field is added to the Linkerd webhook configurations to indicate that the webhooks have no side effects on other resources.

      For HA setup, the linkerd upgrade command will also retain all previous HA configuration. Note that the mutating and validating webhook configurations are updated to set their failurePolicy fields to fail to ensure that un-injected workloads (as a result of unexpected errors) are rejected during the admission process. The HA mode has also been updated to schedule multiple replicas of the linkerd-proxy-injector and linkerd-sp-validator deployments.
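
To confirm the additional replicas after an HA upgrade, a quick check (a sketch) is:

kubectl -n linkerd get deploy linkerd-proxy-injector linkerd-sp-validator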

      For users upgrading from the edge-19.5.3 release, note that the upgrade process will fail with the following error message, due to a naming bug:

The ClusterRoleBinding "linkerd-linkerd-tap" is invalid: roleRef: Invalid value:
rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole",
Name:"linkerd-linkerd-tap"}: cannot change roleRef

      This can be resolved by simply deleting the linkerd-linkerd-tap cluster role binding resource, and re-running the linkerd upgrade command:

kubectl delete clusterrolebinding/linkerd-linkerd-tap

      For upgrading a multi-stage installation setup, follow the instructions at Upgrading a multi-stage install.

Users who have previously saved the Linkerd control plane's configuration to files can follow the instructions at Upgrading via manifests to ensure those configurations are retained by the linkerd upgrade command.

      Once the upgrade command completes, use the linkerd check command to confirm the control plane is ready.

      Note

      The stable-2.4 linkerd check command will return an error when run against an older control plane. This error is benign and will resolve itself once the control plane is upgraded to stable-2.4:

linkerd-config
--------------
control plane Namespace exists
× control plane ClusterRoles exist
    missing ClusterRoles: linkerd-linkerd-controller, linkerd-linkerd-identity, linkerd-linkerd-prometheus, linkerd-linkerd-proxy-injector, linkerd-linkerd-sp-validator, linkerd-linkerd-tap
    see https://linkerd.io/checks/#l5d-existence-cr for hints

      When ready, proceed to upgrading the data plane by following the instructions at Upgrade the data plane.

      Upgrading from stable-2.2.x

Follow the stable-2.3.0 upgrade instructions to upgrade the control plane to the stable-2.3.2 release first. Then follow the Upgrading from stable-2.3.x instructions above to upgrade the stable-2.3.2 control plane to stable-2.4.0.

      Upgrade notice: stable-2.3.0

      stable-2.3.0 introduces a new upgrade command. This command only works for the edge-19.4.x and newer releases. When using the upgrade command from edge-19.2.x or edge-19.3.x, all the installation flags previously provided to the install command must also be added.

To upgrade from the stable-2.2.x release, follow the stable-2.2.x upgrade instructions below.

Once the control plane upgrade has completed successfully, the old linkerd-ca deployment can be removed:

kubectl -n linkerd delete deploy/linkerd-ca

      Upgrading from edge-19.4.x

# get the latest stable
curl -sL https://run.linkerd.io/install | sh

# upgrade the control plane
linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

Follow the instructions for upgrading the data plane.

      Upgrading a multi-stage install

edge-19.4.5 introduced a multi-stage install feature. If you previously installed Linkerd via a multi-stage install process, you can upgrade each stage, analogous to the original multi-stage installation process.

      Stage 1, for the cluster owner:

linkerd upgrade config | kubectl apply -f -

      Stage 2, for the service owner:

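The stage-2 command mirrors the corresponding stage of the multi-stage install; a minimal sketch, assuming the control-plane stage subcommand:

linkerd upgrade control-plane | kubectl apply -f -
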
        Note

        Passing the --prune flag to kubectl does not work well with multi-stage upgrades. It is recommended to manually prune old resources after completing the above steps.

        Upgrading via manifests

By default, the linkerd upgrade command reuses the existing linkerd-config config map and the linkerd-identity-issuer secret, by fetching them via the Kubernetes API. edge-19.4.5 introduced a new --from-manifests flag to allow the upgrade command to read the linkerd-config config map and the linkerd-identity-issuer secret from a static YAML file. This option is relevant to CI/CD workflows where the Linkerd configuration is managed by a configuration repository.

For releases after edge-20.10.1/stable-2.9.0, you need to include secret/linkerd-config-overrides in the manifest by running this command:

kubectl -n linkerd get \
  secret/linkerd-identity-issuer \
  configmap/linkerd-config \
  secret/linkerd-config-overrides \
  -oyaml > linkerd-manifests.yaml

linkerd upgrade --from-manifests linkerd-manifests.yaml | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

For releases after stable-2.6.0 and prior to edge-20.10.1/stable-2.9.0, you can use this command:
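
A sketch of that command, assuming the same pattern as above minus the secret/linkerd-config-overrides resource (which does not exist before 2.9):

kubectl -n linkerd get \
  secret/linkerd-identity-issuer \
  configmap/linkerd-config \
  -oyaml > linkerd-manifests.yaml

linkerd upgrade --from-manifests linkerd-manifests.yaml | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -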

        For releases prior to edge-19.8.1/stable-2.5.0, and after stable-2.6.0, you may pipe a full linkerd install manifest into the upgrade command:

linkerd install > linkerd-install.yaml

# deploy Linkerd
cat linkerd-install.yaml | kubectl apply -f -

# upgrade Linkerd via manifests
cat linkerd-install.yaml | linkerd upgrade --from-manifests -

        Note

secret/linkerd-identity-issuer contains the trust root of Linkerd's Identity system, in the form of a private key. Care should be taken if storing this information on disk; consider protecting it with a secrets management tool.

        Upgrading from edge-19.2.x or edge-19.3.x

# get the latest stable
curl -sL https://run.linkerd.io/install | sh

# Install stable control plane, using flags previously supplied during
# installation.
# For example, if the previous installation was:
#   linkerd install --proxy-log-level=warn --proxy-auto-inject | kubectl apply -f -
# The upgrade command would be:
linkerd upgrade --proxy-log-level=warn --proxy-auto-inject | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -

Follow the instructions for upgrading the data plane.

        Upgrade notice: stable-2.2.0

There are two breaking changes in stable-2.2.0. One relates to Service Profiles, the other relates to Automatic Proxy Injection. If you are not using either of these features, you may skip ahead to the full upgrade instructions.

        Service Profile namespace location

Service Profiles, previously defined in the control plane namespace in stable-2.1.0, are now defined in their respective client and server namespaces. Service Profiles defined in the client namespace take priority over ones defined in the server namespace.

        Automatic Proxy Injection opt-in

        The linkerd.io/inject annotation, previously opt-out in stable-2.1.0, is now opt-in.

To enable automatic proxy injection for a namespace, you must add the linkerd.io/inject annotation to either the namespace or the pod spec. For more details, see the automatic proxy injection documentation.
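
For example, a sketch of opting a namespace in (the namespace name is illustrative):

kubectl annotate namespace emojivoto linkerd.io/inject=enabled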

        A note about application updates

        Also note that auto-injection only works during resource creation, not update. To update the data plane proxies of a deployment that was auto-injected, do one of the following:

• Manually re-inject the application via linkerd inject (more info below under Upgrade the 2.2.x data plane)
        • Delete and redeploy the application

Auto-inject support for application updates is tracked on GitHub.

        Upgrade the 2.2.x CLI

        This will upgrade your local CLI to the latest version. You will want to follow these instructions for anywhere that uses the linkerd CLI.

        To upgrade the CLI locally, run:

curl -sL https://run.linkerd.io/install | sh

        Alternatively, you can download the CLI directly via the Linkerd releases page.

        Verify the CLI is installed and running correctly with:

linkerd version

        Which should display:

Client version: stable-2.10.2
Server version: stable-2.1.0

        It is expected that the Client and Server versions won’t match at this point in the process. Nothing has been changed on the cluster, only the local CLI has been updated.

        Note

        Until you upgrade the control plane, some new CLI commands may not work.

        Upgrade the 2.2.x control plane

        Now that you have upgraded the CLI running locally, it is time to upgrade the Linkerd control plane on your Kubernetes cluster. Don’t worry, the existing data plane will continue to operate with a newer version of the control plane and your meshed services will not go down.

        To upgrade the control plane in your environment, run the following command. This will cause a rolling deploy of the control plane components that have changed.

linkerd install | kubectl apply -f -

        The output will be:

namespace/linkerd configured
configmap/linkerd-config created
serviceaccount/linkerd-identity created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-identity configured
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-identity configured
service/linkerd-identity created
secret/linkerd-identity-issuer created
deployment.extensions/linkerd-identity created
serviceaccount/linkerd-controller unchanged
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-controller configured
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-controller configured
service/linkerd-controller-api configured
service/linkerd-destination created
deployment.extensions/linkerd-controller configured
customresourcedefinition.apiextensions.k8s.io/serviceprofiles.linkerd.io configured
serviceaccount/linkerd-web unchanged
service/linkerd-web configured
deployment.extensions/linkerd-web configured
serviceaccount/linkerd-prometheus unchanged
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-prometheus configured
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-prometheus configured
service/linkerd-prometheus configured
deployment.extensions/linkerd-prometheus configured
configmap/linkerd-prometheus-config configured
serviceaccount/linkerd-grafana unchanged
service/linkerd-grafana configured
deployment.extensions/linkerd-grafana configured
configmap/linkerd-grafana-config configured
serviceaccount/linkerd-sp-validator created
clusterrole.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator configured
clusterrolebinding.rbac.authorization.k8s.io/linkerd-linkerd-sp-validator configured
service/linkerd-sp-validator created
deployment.extensions/linkerd-sp-validator created

        Check to make sure everything is healthy by running:

linkerd check

        This will run through a set of checks against your control plane and make sure that it is operating correctly.

        To verify the Linkerd control plane version, run:

linkerd version

        Which should display:

Client version: stable-2.10.2
Server version: stable-2.10.2

        Note

You will lose the historical data from Prometheus. If you would like to have that data persisted through an upgrade, take a look at the persistence documentation.

        Upgrade the 2.2.x data plane

        With a fully up-to-date CLI running locally and Linkerd control plane running on your Kubernetes cluster, it is time to upgrade the data plane. This will change the version of the linkerd-proxy sidecar container and run a rolling deploy on your service.

For stable-2.3.0+, if your workloads are annotated with the auto-inject linkerd.io/inject: enabled annotation, then you can just restart your pods using your Kubernetes cluster management tools (Helm, kubectl, etc.).

        Note

        With kubectl 1.15+, you can use the kubectl rollout restart command to restart all your meshed services. For example,

kubectl -n <namespace> rollout restart deploy

        As the pods are being re-created, the proxy injector will auto-inject the new version of the proxy into the pods.

        If auto-injection is not part of your workflow, you can still manually upgrade your meshed services by re-injecting your applications in-place.

        Begin by retrieving your YAML resources via kubectl, and pass them through the linkerd inject command. This will update the pod spec with the linkerd.io/inject: enabled annotation. This annotation will be picked up by Linkerd’s proxy injector during the admission phase where the Linkerd proxy will be injected into the workload. By using kubectl apply, Kubernetes will do a rolling deploy of your service and update the running pods to the latest version.

        Example command to upgrade an application in the emojivoto namespace, composed of deployments:

kubectl -n emojivoto get deploy -l linkerd.io/control-plane-ns=linkerd -oyaml \
  | linkerd inject - \
  | kubectl apply -f -

        Check to make sure everything is healthy by running:

linkerd check --proxy

        This will run through a set of checks against both your control plane and data plane to verify that it is operating correctly.

        You can make sure that you’ve fully upgraded all the data plane by running:
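
One way to do this, assuming the injector exposes the proxy version via a linkerd.io/proxy-version label on each injected pod, is:

kubectl get pods --all-namespaces -o yaml | grep linkerd.io/proxy-version | sort | uniq -c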

        If there are any older versions listed, you will want to upgrade them as well.