Multi Cluster Application

    There are many scenarios in which developers or system operators need to deploy and manage applications across multiple clusters.

    • For scalability, a single Kubernetes cluster has a practical limit of around 5,000 nodes and cannot handle large-scale application loads on its own.
    • For stability/availability, a single application can be deployed in multiple clusters for backup, which provides better stability and availability.
    • For security, applications might need to be deployed in different zones/areas as government policies require.

    The following guide introduces how to manage applications across clusters with KubeVela.

    Preparation

    Please make sure you have clusters registered in your control plane. In general, this work is done by operations engineers. If you are a DevOps engineer or just trying out KubeVela, you can refer to the manage cluster docs to learn how to join clusters.
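    For instance, a managed cluster can be joined through its kubeconfig and then verified. This is a minimal sketch: the kubeconfig path is a placeholder and the cluster name simply matches the examples in this doc:

    # Join a managed cluster by its kubeconfig (path and name are placeholders).
    $ vela cluster join /path/to/kubeconfig-hangzhou-1 --name cluster-hangzhou-1
    # List the clusters registered in the control plane.
    $ vela cluster list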

    In the rest of this doc, we assume you have joined two clusters named cluster-hangzhou-1 and cluster-hangzhou-2, both labeled with region: hangzhou.

    note

    By default, the hub cluster where KubeVela resides is registered as the local cluster. You can use it like a managed cluster, except that you cannot detach or modify it.

    To deliver your application to multiple clusters, you simply need to declare the target clusters in the topology policy. For example, you can deploy an nginx webservice to the hangzhou clusters by running the following command:

    $ cat <<EOF | vela up -f -
    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: basic-topology
      namespace: examples
    spec:
      components:
        - name: nginx-basic
          type: webservice
          properties:
            image: nginx
          traits:
            - type: expose
              properties:
                port: [80]
      policies:
        - name: topology-hangzhou-clusters
          type: topology
          properties:
            clusters: ["cluster-hangzhou-1", "cluster-hangzhou-2"]
    EOF

    You can check the deployment result by running vela status:

    vela status basic-topology -n examples

    expected output

    About:
      Name:        basic-topology
      Namespace:   examples
      Created at:  2022-04-08 14:37:54 +0800 CST
      Status:      workflowFinished

    Workflow:
      mode: DAG
      finished: true
      Suspend: false
      Terminated: false
      Steps
      - id:3mvz5i8elj
        name:deploy-topology-hangzhou-clusters
        type:deploy
        phase:succeeded
        message:

    Services:
      - Name: nginx-basic
        Cluster: cluster-hangzhou-1  Namespace: examples
        Type: webservice
        Healthy Ready:1/1
        Traits:
          expose
      - Name: nginx-basic
        Cluster: cluster-hangzhou-2  Namespace: examples
        Type: webservice
        Healthy Ready:1/1
        Traits:
          expose

    You can debug the deployed nginx webservice with the following vela CLI commands. You can operate on pods in managed clusters directly from the hub cluster, without switching the KubeConfig context. If one application spans multiple clusters, the CLI commands will ask you to choose a target interactively.

    • vela status gives you an overview of your deployed multi-cluster application; example usage is shown above.
    • vela status --pod lists the pod status of your application. For example (output omitted):
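    $ vela status basic-topology -n examples --pod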
    • vela logs shows pod logs in managed clusters.
    $ vela logs basic-topology -n examples
    ? You have 2 deployed resources in your app. Please choose one: Cluster: cluster-hangzhou-1 | Namespace: examples | Kind: Deployment | Name: nginx-basic
    + nginx-basic-dfb6dcf8d-km5vk nginx-basic
    nginx-basic-dfb6dcf8d-km5vk nginx-basic 2022-04-08T06:38:10.540430392Z /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    nginx-basic-dfb6dcf8d-km5vk nginx-basic 2022-04-08T06:38:10.540742240Z /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    • vela exec helps you execute commands in pods in managed clusters.
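    For example, the following opens a shell inside one of the application's pods (a minimal sketch; /bin/sh is assumed to be available in the image):

    $ vela exec basic-topology -n examples -- /bin/sh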
    • vela port-forward can discover and forward ports of pods or services in managed clusters to your local endpoint.
    $ vela port-forward basic-topology -n examples 8080:80
    ? You have 4 deployed resources in your app. Please choose one: Cluster: cluster-hangzhou-1 | Namespace: examples | Kind: Deployment | Name: nginx-basic
    Forwarding from 127.0.0.1:8080 -> 80
    Forwarding from [::1]:8080 -> 80
    Forward successfully! Opening browser ...
    Handling connection for 8080

    Advanced Usage

    Understanding the Multi-cluster Application

    The following figure displays the architecture of a multi-cluster application. All the configurations (including Application, Policy and Workflow) live in the hub cluster. Only the resources (like Deployments or Services) are dispatched into managed clusters.

    The policies are mainly in charge of describing the destinations of the resources and how they should be overridden. The real executor of the resource dispatch is the workflow: each deploy workflow step refers to some policies, overrides the default configuration, and dispatches the resources.

    The most straightforward way to configure the deploy destination is to write cluster names inside the topology policy. Sometimes it is easier to select clusters by labels, for example to pick all clusters in hangzhou:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: label-selector-topology
      namespace: examples
    spec:
      components:
        - name: nginx-label-selector
          type: webservice
          properties:
            image: nginx
      policies:
        - name: topology-hangzhou-clusters
          type: topology
          properties:
            clusterLabelSelector:
              region: hangzhou

    If you want to deploy application components into the control plane cluster, you can use the local cluster. Besides, you can deploy your application components in a namespace other than the application's original namespace:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: local-ns-topology
      namespace: examples
    spec:
      components:
        - name: nginx-local-ns
          type: webservice
          properties:
            image: nginx
      policies:
        - name: topology-local
          type: topology
          properties:
            clusters: ["local"]
            namespace: examples-alternative

    tip

    Sometimes, for security reasons, you might want to disable this behavior and restrict resources to be deployed within the same namespace as the application. This can be done by setting --allow-cross-namespace-resource=false in the bootstrap parameters of the KubeVela controller.
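    For example, assuming KubeVela was installed with Helm into the vela-system namespace under the default release name (an assumption; adjust the deployment name to your installation), you could append the flag to the controller's arguments:

    # Edit the controller deployment; the name depends on your Helm release.
    $ kubectl -n vela-system edit deployment kubevela-vela-core
    # Then add the flag to the controller container's args:
    #   args:
    #     - "--allow-cross-namespace-resource=false"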

    Control the deploy workflow

    By default, if you declare multiple topology policies in the application, the application components will be deployed to all the destinations, following the order of the policies.

    If you want to control the deploy process, for example to change the order or add manual approval, you can use the deploy workflow step explicitly in the workflow:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: deploy-workflowstep
      namespace: examples
    spec:
      components:
        - name: nginx-deploy-workflowstep
          type: webservice
          properties:
            image: nginx
      policies:
        - name: topology-hangzhou-clusters
          type: topology
          properties:
            clusterLabelSelector:
              region: hangzhou
        - name: topology-local
          type: topology
          properties:
            clusters: ["local"]
            namespace: examples-alternative
      workflow:
        steps:
          - type: deploy
            name: deploy-local
            properties:
              policies: ["topology-local"]
          - type: deploy
            name: deploy-hangzhou
            properties:
              # require manual approval before running this step
              auto: false
              policies: ["topology-hangzhou-clusters"]

    You can also deploy application components with different topology policies concurrently, by putting these topology policies into one deploy step, as sketched below.
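    A minimal sketch of such a workflow section, reusing the two topology policies defined above:

    workflow:
      steps:
        - type: deploy
          name: deploy-all
          properties:
            # Both destinations are handled concurrently in a single step.
            policies: ["topology-local", "topology-hangzhou-clusters"]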

    There are times when you want to change the configuration in some clusters, rather than use the default configuration declared in the application's components field, for example using a different container image or changing the default number of replicas.

    The override policy helps you make such customizations in different clusters. You can use it together with the topology policy in the deploy workflow step.

    In the following example, the application first deploys a default nginx webservice in the local cluster. Then it deploys a highly available nginx webservice with the legacy image nginx:1.20 and 3 replicas in the hangzhou clusters.

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: deploy-with-override
      namespace: examples
    spec:
      components:
        - name: nginx-with-override
          type: webservice
          properties:
            image: nginx
      policies:
        - name: topology-hangzhou-clusters
          type: topology
          properties:
            clusterLabelSelector:
              region: hangzhou
        - name: topology-local
          type: topology
          properties:
            clusters: ["local"]
            namespace: examples-alternative
        - name: override-nginx-legacy-image
          type: override
          properties:
            components:
              - name: nginx-with-override
                properties:
                  image: nginx:1.20
        - name: override-high-availability
          type: override
          properties:
            components:
              - type: webservice
                traits:
                  - type: scaler
                    properties:
                      replicas: 3
      workflow:
        steps:
          - type: deploy
            name: deploy-local
            properties:
              policies: ["topology-local"]
          - type: deploy
            name: deploy-hangzhou
            properties:
              policies: ["topology-hangzhou-clusters", "override-nginx-legacy-image", "override-high-availability"]

    note

    The override policy is used to modify the basic configuration, so it is designed to be used together with the topology policy. If you do not want to use the topology policy, you can write the configuration directly in the component part instead of using the override policy. If you misuse the override policy in a deploy workflow step without a topology policy, no error will be reported but you will find that nothing is deployed.

    The override policy also supports selecting only some of the components to deploy and adding brand-new components, as the following example shows:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: advance-override
      namespace: examples
    spec:
      components:
        - name: nginx-advance-override-legacy
          type: webservice
          properties:
            image: nginx:1.20
        - name: nginx-advance-override-latest
          type: webservice
          properties:
            image: nginx
      policies:
        - name: topology-hangzhou-clusters
          type: topology
          properties:
            clusterLabelSelector:
              region: hangzhou
        - name: topology-local
          type: topology
          properties:
            clusters: ["local"]
            namespace: examples-alternative
        - name: override-nginx-legacy
          type: override
          properties:
            selector: ["nginx-advance-override-legacy"]
        - name: override-nginx-latest
          type: override
          properties:
            selector: ["nginx-advance-override-latest", "nginx-advance-override-stable"]
            components:
              - name: nginx-advance-override-stable
                type: webservice
                properties:
                  image: nginx:stable
      workflow:
        steps:
          - type: deploy
            name: deploy-local
            properties:
              policies: ["topology-local", "override-nginx-legacy"]
          - type: deploy
            name: deploy-hangzhou
            properties:
              policies: ["topology-hangzhou-clusters", "override-nginx-latest"]

    Use policies and workflow outside the application

    Sometimes, you may want to share the same policy across multiple applications or reuse a previous workflow to deploy different resources. To reduce repeated code, you can leverage external policies and workflows and refer to them in your applications.

    caution

    You can only refer to Policy and Workflow objects within your application's namespace.

    apiVersion: core.oam.dev/v1alpha1
    kind: Policy
    metadata:
      name: topology-hangzhou-clusters
      namespace: examples
    type: topology
    properties:
      clusterLabelSelector:
        region: hangzhou
    ---
    apiVersion: core.oam.dev/v1alpha1
    kind: Policy
    metadata:
      name: override-high-availability-webservice
      namespace: examples
    type: override
    properties:
      components:
        - type: webservice
          traits:
            - type: scaler
              properties:
                replicas: 3
    ---
    apiVersion: core.oam.dev/v1alpha1
    kind: Workflow
    metadata:
      name: make-release-in-hangzhou
      namespace: examples
    steps:
      - type: deploy
        name: deploy-hangzhou
        properties:
          auto: false
          policies: ["override-high-availability-webservice", "topology-hangzhou-clusters"]
    Then you can reference the external workflow (which in turn references the external policies) in your application:

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: external-policies-and-workflow
      namespace: examples
    spec:
      components:
        - name: nginx-external-policies-and-workflow
          type: webservice
          properties:
            image: nginx
      workflow:
        ref: make-release-in-hangzhou

    note

    Internal policies are loaded first. External policies are only used when there is no policy with the same name inside the application.

    In the following example, we reuse the topology-hangzhou-clusters policy and the make-release-in-hangzhou workflow, but modify the override-high-availability-webservice policy by declaring a policy with the same name inside the new application.
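    A minimal sketch of such an application (the application name, component name, and replica count are illustrative, not from the original doc):

    apiVersion: core.oam.dev/v1beta1
    kind: Application
    metadata:
      name: override-external-workflow   # illustrative name
      namespace: examples
    spec:
      components:
        - name: nginx-with-internal-override
          type: webservice
          properties:
            image: nginx
      policies:
        # This internal policy shares its name with the external one, so it
        # takes precedence when make-release-in-hangzhou refers to it.
        - name: override-high-availability-webservice
          type: override
          properties:
            components:
              - type: webservice
                traits:
                  - type: scaler
                    properties:
                      replicas: 5   # illustrative: differs from the external policy's 3
      workflow:
        ref: make-release-in-hangzhou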

    KubeVela Application v1.3 uses the new policies and workflow steps described above to configure and manage multi-cluster applications.

    The outdated env-binding policy and workflow step from older versions are kept for now but might be deprecated in the future.

    The new policies and workflow steps cover all the use cases of the old versions, so it is possible to upgrade all your applications while maintaining the same capabilities. Upgrade tools are not available yet but will come out before any deprecation happens.

    If you already have applications running in a production environment and do not want to change them, KubeVela v1.3 remains compatible with them. It is NOT necessary to migrate old multi-cluster applications to the new style.

    Conclusion

    In this section, we introduced how KubeVela delivers microservices across multiple clusters. The whole process can be easily modeled as a declarative deployment plan.

    No more ad-hoc scripts or glue code: KubeVela gets the application delivery workflow done with full automation and determinism. Most importantly, KubeVela expects you to keep using the CI solutions you are already familiar with; it is fully complementary to them as the CD control plane.