Service Topology

    Service topology enables a Service to route traffic based upon the node topology of the cluster. For example, a Service can specify that traffic be preferentially routed to endpoints that are on the same node as the client, or in the same nodepool. The following figure shows how service topology works in general.

    To use service topology, the EndpointSliceProxying feature gate must be enabled, and kube-proxy needs to be configured to connect to Yurthub instead of the API server.

    Prerequisites

    1. Kubernetes v1.18 or above, since the EndpointSlice resource is required.
    2. Yurt-app-manager is deployed in the cluster.

    How to use

    Ensure that the Kubernetes version is v1.18+.
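
    You can check with, for example:

    $ kubectl get node   # the VERSION column should show v1.18 or later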

    Ensure that yurt-app-manager is deployed in the cluster.

    $ kubectl get pod -n kube-system
    NAME                                         READY   STATUS    RESTARTS   AGE
    coredns-66bff467f8-jxvnw                     1/1     Running   0          7m28s
    coredns-66bff467f8-lk8v5                     1/1     Running   0          7m28s
    etcd-kind-control-plane                      1/1     Running   0          7m39s
    kindnet-5dpxt                                1/1     Running   0          7m28s
    kindnet-ckz88                                1/1     Running   0          7m10s
    kindnet-sqxs7                                1/1     Running   0          7m10s
    kube-apiserver-kind-control-plane            1/1     Running   0          7m39s
    kube-controller-manager-kind-control-plane   1/1     Running   0          5m38s
    kube-proxy-ddgjt                             1/1     Running   0          7m28s
    kube-proxy-j25kr                             1/1     Running   0          7m10s
    kube-proxy-jt9cw                             1/1     Running   0          7m10s
    kube-scheduler-kind-control-plane            1/1     Running   0          7m39s
    yurt-app-manager-699ffdcb78-8m9sf            1/1     Running   0          37s
    yurt-app-manager-699ffdcb78-fdqmq            1/1     Running   0          37s
    yurt-controller-manager-6c95788bf-jrqts      1/1     Running   0          6m17s
    yurt-hub-kind-control-plane                  1/1     Running   0          3m36s
    yurt-hub-kind-worker                         1/1     Running   0          4m50s
    yurt-hub-kind-worker2                        1/1     Running   0          4m50s

    To use service topology, the EndpointSliceProxying feature gate must be enabled, and kube-proxy needs to be configured to connect to Yurthub instead of the API server.

    $ kubectl edit cm -n kube-system kube-proxy
    apiVersion: v1
    data:
      config.conf: |-
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        bindAddress: 0.0.0.0
        featureGates: # 1. enable EndpointSliceProxying feature gate.
          EndpointSliceProxying: true
        clientConnection:
          acceptContentTypes: ""
          burst: 0
          contentType: ""
          #kubeconfig: /var/lib/kube-proxy/kubeconfig.conf # 2. comment this line.
          qps: 0
        clusterCIDR: 10.244.0.0/16
        configSyncPeriod: 0s

    Then restart the kube-proxy pods so that the new configuration takes effect:

    $ kubectl delete pod --selector k8s-app=kube-proxy -n kube-system
    pod "kube-proxy-cbsmj" deleted
    pod "kube-proxy-cqwcs" deleted
    pod "kube-proxy-m9dgk" deleted
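
    The DaemonSet controller recreates the pods automatically; you can confirm the replacements are running with, for example:

    $ kubectl get pod -n kube-system --selector k8s-app=kube-proxy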

    Create NodePools

    • Create test nodepools.
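
    A minimal sketch of the NodePool manifests, with names and types matching the kubectl get np output below:

    $ cat << EOF | kubectl apply -f -
    apiVersion: apps.openyurt.io/v1alpha1
    kind: NodePool
    metadata:
      name: beijing
    spec:
      type: Cloud
    ---
    apiVersion: apps.openyurt.io/v1alpha1
    kind: NodePool
    metadata:
      name: hangzhou
    spec:
      type: Edge
    ---
    apiVersion: apps.openyurt.io/v1alpha1
    kind: NodePool
    metadata:
      name: shanghai
    spec:
      type: Edge
    EOF
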
    • Add nodes to the nodepools.

    $ kubectl label node kind-control-plane apps.openyurt.io/desired-nodepool=beijing
    node/kind-control-plane labeled

    $ kubectl label node kind-worker apps.openyurt.io/desired-nodepool=hangzhou
    node/kind-worker labeled

    $ kubectl label node kind-worker2 apps.openyurt.io/desired-nodepool=shanghai
    node/kind-worker2 labeled

    • Get the NodePools.

    $ kubectl get np
    NAME       TYPE    READYNODES   NOTREADYNODES   AGE
    beijing    Cloud   1            0               63s
    hangzhou   Edge    1            0               63s
    shanghai   Edge    1            0               63s

    • Create test united-deployment1. To facilitate testing, we use the serve_hostname image: each time port 9376 is accessed, the hostname container returns its own hostname.

    $ cat << EOF | kubectl apply -f -
    apiVersion: apps.openyurt.io/v1alpha1
    kind: UnitedDeployment
    metadata:
      labels:
        controller-tools.k8s.io: "1.0"
      name: united-deployment1
    spec:
      selector:
        matchLabels:
          app: united-deployment1
      workloadTemplate:
        deploymentTemplate:
          metadata:
            labels:
              app: united-deployment1
          spec:
            template:
              metadata:
                labels:
                  app: united-deployment1
              spec:
                containers:
                - name: hostname
                  image: mirrorgooglecontainers/serve_hostname
                  ports:
                  - containerPort: 9376
                    protocol: TCP
      topology:
        pools:
        - name: hangzhou
          nodeSelectorTerm:
            matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
              - hangzhou
          replicas: 2
        - name: shanghai
          nodeSelectorTerm:
            matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
              - shanghai
          replicas: 2
      revisionHistoryLimit: 5
    EOF

    • Create test united-deployment2, as sketched below. Here we use the nginx image, in order to access the hostname pods created by united-deployment1 above.
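
    A sketch of the united-deployment2 manifest, assuming it mirrors united-deployment1 but runs an nginx container (the exact image tag is an assumption):

    $ cat << EOF | kubectl apply -f -
    apiVersion: apps.openyurt.io/v1alpha1
    kind: UnitedDeployment
    metadata:
      labels:
        controller-tools.k8s.io: "1.0"
      name: united-deployment2
    spec:
      selector:
        matchLabels:
          app: united-deployment2
      workloadTemplate:
        deploymentTemplate:
          metadata:
            labels:
              app: united-deployment2
          spec:
            template:
              metadata:
                labels:
                  app: united-deployment2
              spec:
                containers:
                - name: nginx
                  image: nginx:1.19.3 # any recent nginx image works; this tag is an assumption
                  ports:
                  - containerPort: 80
                    protocol: TCP
      topology:
        pools:
        - name: hangzhou
          nodeSelectorTerm:
            matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
              - hangzhou
          replicas: 2
        - name: shanghai
          nodeSelectorTerm:
            matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
              - shanghai
          replicas: 2
      revisionHistoryLimit: 5
    EOF
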
    • Get the pods created by the UnitedDeployments.

    $ kubectl get pod -l "app in (united-deployment1,united-deployment2)" -owide
    NAME                                                 READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
    united-deployment1-hangzhou-fv6th-66ff6fd958-f2694   1/1     Running   0          18m   10.244.2.3   kind-worker    <none>           <none>
    united-deployment1-hangzhou-fv6th-66ff6fd958-twf95   1/1     Running   0          18m   10.244.2.2   kind-worker    <none>           <none>
    united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt   1/1     Running   0          18m   10.244.1.3   kind-worker2   <none>           <none>
    united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2   1/1     Running   0          18m   10.244.1.2   kind-worker2   <none>           <none>
    united-deployment2-hangzhou-lpkzg-6d958b67b6-gf847   1/1     Running   0          15m   10.244.2.4   kind-worker    <none>           <none>
    united-deployment2-hangzhou-lpkzg-6d958b67b6-lbnwl   1/1     Running   0          15m   10.244.2.5   kind-worker    <none>           <none>
    united-deployment2-shanghai-tqgd4-57f7555494-9jvjb   1/1     Running   0          15m   10.244.1.5   kind-worker2   <none>           <none>
    united-deployment2-shanghai-tqgd4-57f7555494-rn8n8   1/1     Running   0          15m   10.244.1.4   kind-worker2   <none>           <none>

    Create Service with TopologyKeys

    $ cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: svc-ud1
      annotations:
        openyurt.io/topologyKeys: openyurt.io/nodepool
    spec:
      selector:
        app: united-deployment1
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 9376
    EOF

    Create Service without TopologyKeys

    $ cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: svc-ud1-without-topology
    spec:
      selector:
        app: united-deployment1
      type: ClusterIP
      ports:
      - port: 80
        targetPort: 9376
    EOF

    Test Service Topology

    We use the nginx pod in the shanghai nodepool to test service topology. When it accesses a service that carries the openyurt.io/topologyKeys: openyurt.io/nodepool annotation, its traffic can only be routed to nodes in the shanghai nodepool.

    For comparison, we first test the service without the serviceTopology annotation. As we can see, its traffic can be routed to any node.
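
    For instance, exec into the nginx pod (name taken from the pod listing above) and query the comparison service repeatedly:

    $ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
    root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1-without-topology:80
    # repeated curls return hostnames of united-deployment1 pods from both nodepools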

    Then we test the service with the serviceTopology annotation. As expected, its traffic can only be routed to nodes in the shanghai nodepool.

    $ kubectl exec -it united-deployment2-shanghai-tqgd4-57f7555494-9jvjb bash
    root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
    united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
    root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
    united-deployment1-shanghai-5p8zk-84bdd476b6-hr6xt
    root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80
    united-deployment1-shanghai-5p8zk-84bdd476b6-wjck2
    root@united-deployment2-shanghai-tqgd4-57f7555494-9jvjb:/# curl svc-ud1:80