Shared control plane (single-network)

    In this configuration, multiple Kubernetes clusters running a remote configuration connect to a shared Istio control plane. Once one or more remote Kubernetes clusters are connected to the Istio control plane, Envoy can then form a mesh network across multiple clusters.

    Istio mesh spanning multiple Kubernetes clusters with direct network access to remote pods over VPN

    • Two or more clusters running a supported Kubernetes version (1.13, 1.14, 1.15).

    • The ability to deploy the Istio control plane on one of the clusters.

    • An RFC 1918 network, VPN, or an alternative more advanced network technique meeting the following requirements:

      • Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the multicluster environment and may not overlap.

      • All pod CIDRs in every cluster must be routable to each other.

      • All Kubernetes control plane API servers must be routable to each other.

    This guide describes how to install a multicluster Istio topology using the remote configuration profile provided by Istio.

    Deploy the local control plane

    Install the Istio control plane on one Kubernetes cluster.

    Wait for the Istio control plane to finish initializing before following the steps in this section.

    You must run these operations on the Istio control plane cluster to capture the Istio control plane service endpoints, for example, the Pilot and Policy pod IP endpoints.

    Set the environment variables with the following commands:
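    As a sketch, the Pilot, Policy, and Telemetry pod IPs can be captured as shown below. The label selectors used here are assumptions based on the default istio-system deployments and may differ in your installation; run these against the control plane cluster's kubeconfig context.

```shell
# Capture the control plane pod IPs for later use by the remote profile.
# Label selectors are assumed defaults; verify them with:
#   kubectl -n istio-system get pods --show-labels
export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot \
  -o jsonpath='{.items[0].status.podIP}')
export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=policy \
  -o jsonpath='{.items[0].status.podIP}')
export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio-mixer-type=telemetry \
  -o jsonpath='{.items[0].status.podIP}')
```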

    Normally, automatic sidecar injection on the remote clusters is enabled. To perform a manual sidecar injection instead, refer to the manual sidecar injection section below.

    Install the Istio remote component

    You must deploy the component to each remote Kubernetes cluster. You can install the component in one of two ways:

    • Use the following command on the remote cluster to install the Istio control plane service endpoints:

    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false \
      --set autoInjection.enabled=true

    All clusters must have the same namespace for the Istio components. It is possible to override the istio-system name on the main cluster as long as the namespace is the same for all Istio components in all clusters.

    • The following example command labels the default namespace. Use similar commands to label all the remote cluster’s namespaces requiring automatic sidecar injection.

    $ kubectl label namespace default istio-injection=enabled

    Repeat for all Kubernetes namespaces that need to set up automatic sidecar injection.
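    For example, the repetition can be scripted; the namespace names below are hypothetical:

```shell
# Label each namespace that should receive automatic sidecar injection.
# Replace the list with your remote cluster's application namespaces.
for NS in default frontend backend; do
  kubectl label namespace ${NS} istio-injection=enabled --overwrite
done
```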

    Installation configuration parameters

    You must configure the remote cluster’s sidecars’ interaction with the Istio control plane, including the following endpoints in the istio-remote profile: pilot, policy, telemetry, and tracing service. The profile enables automatic sidecar injection in the remote cluster by default. You can disable the automatic sidecar injection via a separate setting.

    The following table shows the istioctl configuration values for remote clusters:

    The Istio control plane requires access to all clusters in the mesh to discover services, endpoints, and pod attributes. The following steps describe how to generate a kubeconfig configuration file for the Istio control plane to use a remote cluster.

    Perform this procedure on each remote cluster to add the cluster to the service mesh. This procedure requires the cluster-admin user access permission to the remote cluster.

    • Set the environment variables needed to build the kubeconfig file for the istio-reader-service-account service account with the following commands:

    $ export WORK_DIR=$(pwd)
    $ CLUSTER_NAME=$(kubectl config view --minify=true -o jsonpath='{.clusters[].name}')
    $ export KUBECFG_FILE=${WORK_DIR}/${CLUSTER_NAME}
    $ SERVER=$(kubectl config view --minify=true -o jsonpath='{.clusters[].cluster.server}')
    $ NAMESPACE=istio-system
    $ SERVICE_ACCOUNT=istio-reader-service-account
    $ SECRET_NAME=$(kubectl get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o jsonpath='{.secrets[].name}')
    $ CA_DATA=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['ca\.crt']}")
    $ TOKEN=$(kubectl get secret ${SECRET_NAME} -n ${NAMESPACE} -o jsonpath="{.data['token']}" | base64 --decode)

    An alternative to base64 --decode is openssl enc -d -base64 -A on many systems.
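    A quick way to confirm the two forms are equivalent on your system; the sample string is arbitrary:

```shell
# Round-trip an arbitrary string through both decoders; both pipelines
# print the original value ("istio-reader").
ENCODED=$(printf 'istio-reader' | base64)
printf '%s' "${ENCODED}" | base64 --decode
printf '%s' "${ENCODED}" | openssl enc -d -base64 -A
```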

    • Create a kubeconfig file in the working directory for the istio-reader-service-account service account with the following command:

    $ cat <<EOF > ${KUBECFG_FILE}
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: ${CA_DATA}
        server: ${SERVER}
      name: ${CLUSTER_NAME}
    contexts:
    - context:
        cluster: ${CLUSTER_NAME}
        user: ${CLUSTER_NAME}
      name: ${CLUSTER_NAME}
    current-context: ${CLUSTER_NAME}
    kind: Config
    preferences: {}
    users:
    - name: ${CLUSTER_NAME}
      user:
        token: ${TOKEN}
    EOF
    • (Optional) Create a file with environment variables to create the remote cluster’s secret:
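    A minimal sketch of such a file, written with a heredoc. This assumes the CLUSTER_NAME, KUBECFG_FILE, and NAMESPACE variables from the previous step; the fallback values below are placeholders:

```shell
# Persist the values needed later to create and label the remote secret.
# The :- fallbacks are placeholder examples, not required names.
CLUSTER_NAME=${CLUSTER_NAME:-remote-cluster-1}
KUBECFG_FILE=${KUBECFG_FILE:-$(pwd)/${CLUSTER_NAME}}
NAMESPACE=${NAMESPACE:-istio-system}
cat <<EOF > remote_cluster_env_vars
export CLUSTER_NAME=${CLUSTER_NAME}
export KUBECFG_FILE=${KUBECFG_FILE}
export NAMESPACE=${NAMESPACE}
EOF
```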

    At this point, you created the remote clusters’ kubeconfig files in the current directory. The filename of the kubeconfig file is the same as the original cluster name.

    Instantiate the credentials

    Perform this procedure on the cluster running the Istio control plane. This procedure uses the WORK_DIR, CLUSTER_NAME, and NAMESPACE environment values set and the file created for the remote cluster’s secret in the previous section.

    If you created the environment variables file for the remote cluster’s secret, source the file with the following command:

    $ source remote_cluster_env_vars

    You can install Istio in a different namespace. This procedure uses the istio-system namespace.

    Do not store and label the secrets for the local cluster running the Istio control plane. Istio is always aware of the local cluster’s Kubernetes credentials.

    Create a secret and label it properly for each remote cluster:

    $ kubectl create secret generic ${CLUSTER_NAME} --from-file ${KUBECFG_FILE} -n ${NAMESPACE}
    $ kubectl label secret ${CLUSTER_NAME} istio/multiCluster=true -n ${NAMESPACE}

    Uninstalling the remote cluster

    To uninstall the remote cluster, run the following command:

    $ istioctl manifest generate \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false \
      --set autoInjection.enabled=true | kubectl delete -f -

    The following example shows how to use the istioctl manifest command to generate the manifest for a remote cluster with the automatic sidecar injection disabled. Additionally, the example shows how to use the configmaps of the remote cluster with the istioctl kube-inject command to generate any application manifests for the remote cluster.

    Perform the following procedure against the remote cluster.

    Before you begin, set the endpoint IP environment variables as described in the set the environment variables section.

    • Install the Istio remote profile:
    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.controlPlaneSecurityEnabled=false \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set autoInjection.enabled=false

    Manually inject the sidecars into the application manifests

    The following example istioctl command injects the sidecars into the application manifests. Run the following commands in a shell with the kubeconfig context set up for the remote cluster.
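    A sketch of such an injection, assuming ${ORIGINAL_SVC_MANIFEST} points at your application manifest (a hypothetical variable) and that the remote profile created its injector configmaps under the default names:

```shell
# Inject the sidecar into the manifest using the remote cluster's
# injector and mesh configmaps, then apply the result.
# ORIGINAL_SVC_MANIFEST is a placeholder for your application manifest.
istioctl kube-inject \
  --injectConfigMapName istio-sidecar-injector \
  --meshConfigMapName istio \
  -f ${ORIGINAL_SVC_MANIFEST} \
  | kubectl apply -f -
```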

    Access services from different clusters

    Kubernetes resolves DNS on a cluster basis. Because the DNS resolution is tied to the cluster, you must define the service object in every cluster where a client runs, regardless of the location of the service’s endpoints. To ensure this is the case, duplicate the service object to every cluster using kubectl. Duplication ensures Kubernetes can resolve the service name in any cluster. Since the service objects are defined in a namespace, you must define the namespace if it doesn’t exist, and include it in the service definitions in all clusters.
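    As a sketch, duplicating a namespace and service into every cluster might look like this; the context names, namespace, and manifest file are hypothetical:

```shell
# Create the namespace and an identical Service object in each cluster so
# Kubernetes DNS can resolve the service name everywhere a client runs.
# cluster-1/cluster-2, bar, and httpbin-service.yaml are placeholders.
for CTX in cluster-1 cluster-2; do
  kubectl --context=${CTX} create namespace bar
  kubectl --context=${CTX} -n bar apply -f httpbin-service.yaml
done
```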

    Deployment considerations

    The previous procedures provide a simple and step-by-step guide to deploy a multicluster environment. A production environment might require additional steps or more complex deployment options. The procedures gather the endpoint IPs of the Istio services and use them to invoke istioctl. This process creates Istio services on the remote clusters. As part of creating those services and endpoints in the remote cluster, Kubernetes adds DNS entries to the kube-dns configuration object.

    This allows the kube-dns configuration object in the remote clusters to resolve the Istio service names for all Envoy sidecars in those remote clusters. Since Kubernetes pods don’t have stable IPs, a restart of any Istio service pod in the control plane cluster causes its endpoint to change. Therefore, any connections made from remote clusters to that endpoint break. This behavior is documented in Istio issue #4822.

    To either avoid or resolve this scenario, several options are available. This section provides a high-level overview of these options:

    • Update the DNS entries
    • Expose the Istio services via a gateway

    Upon any failure or restart of the local Istio control plane, kube-dns on the remote clusters must be updated with the correct endpoint mappings for the Istio services. There are a number of ways this can be done. The most obvious is to rerun the istioctl command in the remote cluster after the Istio services on the control plane cluster have restarted.

    Use the load balancer service type

    In Kubernetes, you can declare a service with a service type of LoadBalancer. See the Kubernetes documentation on service types for more information.

    A simple solution to the pod restart issue is to use load balancers for the Istio services. Then, you can use the load balancers’ IPs as the Istio services’ endpoint IPs to configure the remote clusters. You may need load balancer IPs for these Istio services:

    • istio-pilot
    • istio-telemetry
    • istio-policy

    Currently, the Istio installation doesn’t provide an option to specify service types for the Istio services. You can manually specify the service types in the Istio manifests.
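    One way to do this manually is to patch the services after installation. This is a sketch to run against the control plane cluster, assuming the default service names listed above:

```shell
# Switch each control plane service to type LoadBalancer so its address
# survives pod restarts; the cloud provider then allocates a stable IP.
for SVC in istio-pilot istio-policy istio-telemetry; do
  kubectl -n istio-system patch svc ${SVC} \
    -p '{"spec": {"type": "LoadBalancer"}}'
done
```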

    Expose the Istio services via a gateway

    This method uses the Istio ingress gateway functionality. The remote clusters have the istio-pilot, istio-telemetry, and istio-policy services pointing to the load balanced IP of the Istio ingress gateway. Then, all the services point to the same IP. You must then create destination rules in the ingress gateway to reach the proper Istio service in the main cluster.

    This method provides two alternatives:

    • Re-use the default Istio ingress gateway installed with the provided manifests. You only need to add the correct destination rules.

    • Create another Istio ingress gateway specifically for the multicluster.

    Istio supports deployment of mutual TLS between the control plane components as well as between sidecar-injected application pods.

    To enable control plane security follow these general steps:

    • Deploy the Istio control plane cluster with:

      • The control plane security enabled.

      • The Citadel certificate self-signing disabled.

      • A secret named cacerts in the Istio control plane namespace with the CA certificates.

    • Deploy the Istio remote clusters with:

      • The control plane security enabled.

      • The Citadel certificate self-signing disabled.

      • A secret named cacerts in the Istio control plane namespace with the CA certificates. The Certificate Authority (CA) of the main cluster or a root CA must sign the CA certificate for the remote clusters too.

      • Set control plane IPs or resolvable host names.

    Mutual TLS between application pods

    To enable mutual TLS for all application pods, follow these general steps:

    • Deploy the Istio control plane cluster with:

      • Mutual TLS globally enabled.

      • The Citadel certificate self-signing disabled.

      • A secret named cacerts in the Istio control plane namespace with the CA certificates.

    • Deploy the Istio remote clusters with:

      • Mutual TLS globally enabled.

      • The Citadel certificate self-signing disabled.

      • A secret named cacerts in the Istio control plane namespace with the CA certificates. The CA of the main cluster or a root CA must sign the CA certificate for the remote clusters too.

    The CA certificate steps are identical for both control plane security and application pod security.

    Example deployment

    This example procedure installs Istio with both the control plane mutual TLS and the application pod mutual TLS enabled. The procedure sets up a remote cluster with a selector-less service and endpoint. Istio Pilot uses the service and endpoint to allow the remote sidecars to resolve the istio-pilot.istio-system hostname via Istio’s local Kubernetes DNS.

    Primary cluster: deploy the control plane cluster

    • Create the cacerts secret using the Istio certificate samples in the istio-system namespace:

    $ kubectl create ns istio-system
    $ kubectl create secret generic cacerts -n istio-system \
      --from-file=samples/certs/ca-cert.pem \
      --from-file=samples/certs/ca-key.pem \
      --from-file=samples/certs/root-cert.pem \
      --from-file=samples/certs/cert-chain.pem
    • Deploy the Istio control plane with security enabled for the control plane and the application pod:

    $ istioctl manifest apply \
      --set values.global.mtls.enabled=true \
      --set values.security.selfSigned=false \
      --set values.global.controlPlaneSecurityEnabled=true

    Remote cluster: deploy Istio components

    • Create the cacerts secret using the Istio certificate samples in the istio-system namespace:

    $ kubectl create ns istio-system
    $ kubectl create secret generic cacerts -n istio-system \
      --from-file=samples/certs/ca-cert.pem \
      --from-file=samples/certs/ca-key.pem \
      --from-file=samples/certs/root-cert.pem \
      --from-file=samples/certs/cert-chain.pem
    • Set the environment variables for the IP addresses of the pods as described in the setting environment variables section.

    • The following command deploys the remote cluster’s components with security enabled for the control plane and the application pod, and enables the creation of an Istio Pilot selector-less service and endpoint to get a DNS entry in the remote cluster.

    $ istioctl manifest apply \
      --set profile=remote \
      --set values.global.mtls.enabled=true \
      --set values.security.selfSigned=false \
      --set values.global.controlPlaneSecurityEnabled=true \
      --set values.global.createRemoteSvcEndpoints=true \
      --set values.global.remotePilotCreateSvcEndpoint=true \
      --set values.global.remotePilotAddress=${PILOT_POD_IP} \
      --set values.global.remotePolicyAddress=${POLICY_POD_IP} \
      --set values.global.remoteTelemetryAddress=${TELEMETRY_POD_IP} \
      --set gateways.enabled=false \
      --set autoInjection.enabled=true
    • To generate the kubeconfig configuration file for the remote cluster, follow the steps in the kubeconfig generation section above.

    You must instantiate credentials for each remote cluster. Follow the instantiate credentials procedure to complete the deployment.

    Congratulations!

    You have configured all the Istio components in both clusters to use mutual TLS between application sidecars, the control plane components, and other application sidecars.

    See also

    Google Kubernetes Engine

    Set up a multicluster mesh over two GKE clusters.

    IBM Cloud Private

    Example multicluster mesh over two IBM Cloud Private clusters.

    Replicated control planes

    Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.

    Shared control plane (multi-network)

    Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.

    Simplified Multicluster Install [Experimental]

    Configure an Istio mesh spanning multiple Kubernetes clusters.

    Provision and manage DNS certificates in Istio.