Troubleshooting Multicluster

    The most common, but also broadest, problem with multi-network installations is that cross-cluster load balancing doesn’t work. Usually this manifests itself as only seeing responses from the cluster-local instance of a Service:
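    For example, calling helloworld repeatedly from the sleep pod might return only v1 responses. This is an illustrative sketch, assuming the sample namespace and the app=sleep label from the verification setup:

    $ for i in $(seq 5); do \
        kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
          "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
          -- curl -sS helloworld.sample:5000/hello; \
      done
    Hello version: v1, instance: helloworld-v1-...
    Hello version: v1, instance: helloworld-v1-...
    ...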

    When following the verification guide, we would expect both v1 and v2 responses, indicating traffic is going to both clusters.

    There are many possible causes of the problem:

    Locality load balancing can be used to make clients prefer that traffic go to the nearest destination. If the clusters are in different localities (region/zone), locality load balancing will prefer the local cluster, and the behavior is working as intended. If locality load balancing is disabled, or the clusters are in the same locality, there may be another issue.
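    To rule locality load balancing out, one option is to disable it explicitly for the service under test. This is a minimal sketch, assuming the helloworld service in the sample namespace; the DestinationRule name is illustrative:

    # Apply in cluster 1, e.g.: kubectl --context="${CTX_CLUSTER1}" -n sample apply -f disable-locality-lb.yaml
    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: helloworld-disable-locality-lb   # illustrative name
    spec:
      host: helloworld.sample.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: ROUND_ROBIN
          localityLbSetting:
            enabled: false                   # rules out locality-based preference as the cause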

    Cross-cluster traffic, as with intra-cluster traffic, relies on a common root of trust between the proxies. By default, each Istio installation uses its own individually generated root certificate authority. For multi-cluster, we must manually configure a shared root of trust. Follow the Plug-in Certs steps below to learn more.

    Plug-in Certs:

    To verify certs are configured correctly, you can compare the root-cert in each cluster:

    $ diff \
        <(kubectl --context="${CTX_CLUSTER1}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}') \
        <(kubectl --context="${CTX_CLUSTER2}" -n istio-system get secret cacerts -ojsonpath='{.data.root-cert\.pem}')

    You can follow the Plugin CA Certs guide, making sure to run the steps for every cluster.

    The following steps assume you’re following the multicluster verification guide. Before continuing, make sure both helloworld and sleep are deployed in each cluster.
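    For example, a quick check that the sample workloads exist, assuming they were deployed to the sample namespace as in the verification setup:

    $ kubectl get pods --context="${CTX_CLUSTER1}" -n sample
    $ kubectl get pods --context="${CTX_CLUSTER2}" -n sample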

    From each cluster, find the endpoints the sleep service has for helloworld:

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld

    Troubleshooting information differs based on the cluster that is the source of traffic.

    From the primary cluster:

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
    10.0.0.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local

    Only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster. Verify that remote secrets are configured properly:

    $ kubectl get secrets --context=$CTX_CLUSTER1 -n istio-system -l "istio/multiCluster=true"

    • If the secret is missing, create it.
    • If the secret is present:
      • Look at the config in the secret. Make sure the cluster name is used as the data key for the remote kubeconfig.
      • If the secret looks correct, check the logs of istiod for connectivity or permissions issues reaching the remote Kubernetes API server. Log messages may include Failed to add remote cluster from secret along with an error reason.

    From the remote cluster:

    $ istioctl --context $CTX_CLUSTER2 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
    10.0.1.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local

    Again, only one endpoint is shown, indicating the control plane cannot read endpoints from the remote cluster. Check the same remote secret as above:

    • If the secret is missing, create it (see the sketch below).
    • If the secret is present and the endpoint is a Pod in the primary cluster:
      • Look at the config in the secret. Make sure the cluster name is used as the data key for the remote kubeconfig.
      • If the secret looks correct, check the logs of istiod for connectivity or permissions issues reaching the remote Kubernetes API server. Log messages may include Failed to add remote cluster from secret along with an error reason.
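    A minimal sketch of creating the remote secret and inspecting istiod’s logs for the message above, assuming the remote cluster is registered under the illustrative name cluster2:

    $ istioctl create-remote-secret --context="${CTX_CLUSTER2}" --name=cluster2 | \
        kubectl apply -f - --context="${CTX_CLUSTER1}"
    $ kubectl logs --context="${CTX_CLUSTER1}" -n istio-system deployment/istiod | grep -i "remote cluster"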

    The steps for Primary and Remote clusters still apply for multi-network, although multi-network has an additional case:

    $ istioctl --context $CTX_CLUSTER1 proxy-config endpoint sleep-dd98b5f48-djwdw.sample | grep helloworld
    10.0.5.11:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local

    In multi-network, we expect one of the endpoint IPs to match the remote cluster’s east-west gateway public IP. Seeing Pod IPs for the remote cluster instead, as above, indicates one of two things:

    • The address of the gateway for the remote network cannot be determined.
    • The network of either the client or server pod cannot be determined.

    The address of the gateway for the remote network cannot be determined:
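    A good first check is the east-west gateway Service in the remote cluster and its EXTERNAL-IP column. This is a sketch; the istio=eastwestgateway label assumes the gateway was created with the standard east-west gateway generation script:

    $ kubectl --context="${CTX_CLUSTER2}" -n istio-system get svc -l istio=eastwestgateway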

    If the EXTERNAL-IP is stuck in <PENDING>, the environment may not support LoadBalancer services. In this case, it may be necessary to customize the spec.externalIPs section of the Service to manually give the Gateway an IP reachable from outside the cluster.
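    For example, assigning an IP manually might look like the following sketch, where istio-eastwestgateway is the default Service name from the generation script and 203.0.113.10 is a placeholder address:

    $ kubectl --context="${CTX_CLUSTER2}" -n istio-system patch svc istio-eastwestgateway \
        --type=merge -p '{"spec":{"externalIPs":["203.0.113.10"]}}'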

    If the external IP is present, check that the Service includes a topology.istio.io/network label with the correct value. If that is incorrect, reinstall the gateway and make sure to set the --network flag on the generation script.
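    To check the label, again assuming the default Service name from the generation script:

    $ kubectl --context="${CTX_CLUSTER2}" -n istio-system get svc istio-eastwestgateway \
        -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'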

    The network of either the client or server cannot be determined:

    On the source pod, check the proxy metadata:

    $ kubectl get pod $SLEEP_POD_NAME \
        -o jsonpath="{.spec.containers[*].env[?(@.name=='ISTIO_META_NETWORK')].value}"

    On the destination pod, check the network label:

    $ kubectl get pod $HELLOWORLD_POD_NAME \
        -o jsonpath="{.metadata.labels.topology\.istio\.io/network}"

    If either of these values isn’t set, or has the wrong value, istiod may treat the client and server proxies as being on the same network and send network-local endpoints. When these aren’t set, check that values.global.network was set properly during install, or that the injection webhook is configured correctly.
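    One way to check which network value the injector will stamp onto new proxies is to look at the installed values. This is a sketch that assumes the default istio-sidecar-injector ConfigMap; the exact data layout may differ across Istio versions:

    $ kubectl --context="${CTX_CLUSTER1}" -n istio-system get configmap istio-sidecar-injector \
        -o jsonpath='{.data.values}' | grep network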

    Istio determines the network of a Pod using the topology.istio.io/network label which is set during injection. For non-injected Pods, Istio relies on the topology.istio.io/network label set on the system namespace in the cluster.

    In each cluster, check the network:
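    For example, assuming istio-system is the system namespace:

    $ kubectl --context="${CTX_CLUSTER1}" get namespace istio-system \
        -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'
    $ kubectl --context="${CTX_CLUSTER2}" get namespace istio-system \
        -o jsonpath='{.metadata.labels.topology\.istio\.io/network}'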

      If the above command doesn’t output the expected network name, set the label:
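    For example, assuming the expected network name for cluster 1 is network1 (illustrative):

    $ kubectl --context="${CTX_CLUSTER1}" label namespace istio-system topology.istio.io/network=network1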