IBM Cloud Private

  • Install two IBM Cloud Private clusters.

Make sure individual cluster Pod CIDR ranges and service CIDR ranges are unique and do not overlap across the multicluster environment. These ranges are configured with network_cidr and service_cluster_ip_range in cluster/config.yaml.

# Default IPv4 CIDR is 10.1.0.0/16
# Default IPv6 CIDR is fd03::0/112
network_cidr: 10.1.0.0/16

## Kubernetes Settings
# Default IPv4 Service Cluster Range is 10.0.0.0/16
# Default IPv6 Service Cluster Range is fd02::0/112
service_cluster_ip_range: 10.0.0.0/16
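The snippet above shows the defaults used for cluster-1 in this example. cluster-2 must use different, non-overlapping ranges. As a sketch, with the exact values being an assumption (though they are consistent with the 20.1.x.x pod IPs that appear later in this guide), cluster/config.yaml on cluster-2 could look like:

# cluster-2: pod and service CIDRs that do not overlap with cluster-1
network_cidr: 20.1.0.0/16
service_cluster_ip_range: 20.0.0.0/16
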
  • After the IBM Cloud Private cluster install finishes, validate kubectl access to each cluster. In this example, consider two clusters, cluster-1 and cluster-2.

$ kubectl get nodes
  • Repeat the above steps to validate cluster-2.
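
If both clusters are accessed from one workstation through a single kubeconfig, switching between them is a matter of selecting the right context before running the command above. A minimal sketch, assuming the contexts are named cluster-1 and cluster-2 (the actual names depend on how kubectl was configured during the install):

# List the available contexts, switch to cluster-2, then validate it
$ kubectl config get-contexts
$ kubectl config use-context cluster-2
$ kubectl get nodes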

Configure Pod Communication Across IBM Cloud Private Clusters

IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client on each node distributes IP routing information to all nodes.

To ensure pods can communicate across different clusters, you need to configure IP routes on all nodes across the two clusters. In summary, you need the following two steps to configure pod communication across the two IBM Cloud Private clusters:

  • Add IP routes from cluster-1 to cluster-2.

  • Add IP routes from cluster-2 to cluster-1.

This approach works if all the nodes within the multiple IBM Cloud Private clusters are located in the same subnet. You cannot add BGP routes directly for nodes located in different subnets, because the next-hop IP addresses must be reachable with a single hop. Alternatively, you can use a VPN for pod communication across clusters. Refer to this article for more details.

The steps below show how to add IP routes from cluster-1 to cluster-2 and validate pod-to-pod communication across clusters. With Node-to-Node Mesh mode, each node has IP routes that connect it to its peer nodes in the cluster. In this example, both clusters have three nodes.
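
Before changing any routes, it can be helpful to confirm that the Node-to-Node Mesh is active and the BGP sessions are established. One way to do this, assuming the calicoctl binary is available on the node (it is not necessarily installed everywhere), is:

# Show this node's BGP peers; each peer's state should be "Established"
$ sudo calicoctl node status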

The hosts file for cluster-1:

172.16.160.23 micpnode1
172.16.160.27 micpnode2
172.16.160.29 micpnode3

The hosts file for cluster-2:

172.16.187.16 nicpnode2
172.16.187.18 nicpnode3
  • Obtain routing information on all nodes in cluster-1 with the command ip route | grep bird.
On micpnode2 (172.16.160.27):

$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
blackhole 10.1.192.0/26 proto bird

On micpnode3 (172.16.160.29):

$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
blackhole 10.1.176.64/26 proto bird
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
  • In total, there are three IP routes for the three nodes in cluster-1:

10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink

  • Add those three routes on every node in cluster-2:

$ ip route add 10.1.176.64/26 via 172.16.160.29
$ ip route add 10.1.103.128/26 via 172.16.160.23
$ ip route add 10.1.192.0/26 via 172.16.160.27
  • You can use the same steps to add all IP routes from cluster-2 to cluster-1. After the configuration is complete, all the pods in those two different clusters can communicate with each other.
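
Adding the routes by hand on every node quickly becomes tedious. As a rough sketch, assuming passwordless SSH as root to the cluster-2 nodes (extend the node list to cover every node in cluster-2), the cluster-1 routes from the previous step could be pushed in a loop:

# Push the cluster-1 pod routes to every node in cluster-2
NODES="172.16.187.16 172.16.187.18"
for node in $NODES; do
  ssh root@"$node" "
    ip route add 10.1.176.64/26 via 172.16.160.29
    ip route add 10.1.103.128/26 via 172.16.160.23
    ip route add 10.1.192.0/26 via 172.16.160.27
  "
done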

  • Verify cross-cluster pod communication by pinging a pod IP in cluster-2 from cluster-1. In this example, a pod in cluster-2 has the pod IP 20.1.58.247 (one way to look up a pod's IP is sketched after this list).

  • From a node in cluster-1, ping the pod IP; the ping should succeed.

$ ping 20.1.58.247
PING 20.1.58.247 (20.1.58.247) 56(84) bytes of data.
64 bytes from 20.1.58.247: icmp_seq=1 ttl=63 time=1.73 ms
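
The pod IP used above came from cluster-2. Assuming kubectl is currently pointed at cluster-2, pod IPs can be read from a wide listing of pods:

# Pod IPs appear in the IP column
$ kubectl get pods --all-namespaces -o wide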

The steps in this section enable pod communication across the two clusters by configuring a full IP routing mesh across all nodes in the two IBM Cloud Private clusters.

Follow the shared control plane (single-network) installation instructions to install and configure the local Istio control plane and the Istio remote on cluster-1 and cluster-2.

In this guide, it is assumed that the local Istio control plane is deployed in cluster-1, while the Istio remote is deployed in cluster-2.
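
Before deploying the application, it is worth a quick sanity check that the control plane components are running on cluster-1 and that the remote components are running on cluster-2. Assuming the default istio-system namespace was used on both clusters:

# Run against each cluster's kubectl context; all pods should be Running
$ kubectl get pods -n istio-system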

Deploy the Bookinfo example across clusters

The following example deploys the Bookinfo application across the two clusters, with the reviews-v3 service running on the remote cluster.

  • Install Bookinfo on the first cluster, cluster-1. Remove the reviews-v3 deployment, which will be deployed on cluster-2 in the following step:

$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
$ kubectl delete deployment reviews-v3
  • Deploy the reviews-v3 deployment along with its corresponding services on the remote cluster, cluster-2:
$ kubectl apply -f - <<EOF
---
#
# Ratings service
#
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
---
#
# Reviews service
#
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.12.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
EOF

Access http://<INGRESS_HOST>:<INGRESS_PORT>/productpage repeatedly and each version of reviews should be equally load balanced, including reviews-v3 in the remote cluster (red stars). It may take several dozen accesses to demonstrate the equal load balancing between the reviews versions.
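
INGRESS_HOST and INGRESS_PORT here refer to the Bookinfo ingress gateway on cluster-1. One way to determine them, assuming the istio-ingressgateway service is exposed through a NodePort (a common setup on IBM Cloud Private), is:

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ echo "http://$INGRESS_HOST:$INGRESS_PORT/productpage"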

See also

Google Kubernetes Engine

Set up a multicluster mesh over two GKE clusters.

Shared control plane (multi-network)

Install an Istio mesh across multiple Kubernetes clusters using a shared control plane for disconnected cluster networks.

Shared control plane (single-network)

Install an Istio mesh across multiple Kubernetes clusters with a shared control plane and VPN connectivity between clusters.

Replicated control planes

Install an Istio mesh across multiple Kubernetes clusters with replicated control plane instances.

Multi-mesh deployments for isolation and boundary protection

Deploy environments that require isolation into separate meshes and enable inter-mesh communication by mesh federation.

Multicluster version routing

Configuring Istio route rules in a multicluster service mesh.