Distribute Reference Objects

This section requires you to know the basics of how to deploy multi-cluster applications with policies and workflows.

    You can reference and distribute existing Kubernetes objects with KubeVela in the following scenarios:

• Copying secrets from the hub cluster into managed clusters.
• Promoting deployments from canary clusters into production clusters.
• Using the Kubernetes apiserver as the control plane, storing all Kubernetes object data in an external database, and then dispatching that data into the actual managed Kubernetes clusters.

You can also refer to Kubernetes objects from remote URLs.

To use existing Kubernetes objects in a component, use the ref-objects typed component and declare which resources you want to refer to. For example, the secret image-credential-to-copy in the examples namespace can be taken as the source object for a component; you can then use the topology policy to dispatch it into hangzhou clusters.
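Such an application might be sketched as follows (the application and component names and the hangzhou cluster label are illustrative assumptions; the secret name and namespace come from the text above):

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-example   # illustrative name
  namespace: examples
spec:
  components:
    - name: image-pull-secrets
      type: ref-objects
      properties:
        objects:
          # take the existing secret as the source object of this component
          - resource: secret
            name: image-credential-to-copy
  policies:
    # dispatch the referenced secret into clusters labeled as hangzhou
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou
```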

    Refer to objects from URL

If your source Kubernetes objects come from remote URLs, you can refer to them in the component properties as follows. A remote URL file can contain multiple resources as well.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app
  namespace: default
spec:
  components:
    - name: busybox
      type: ref-objects
      properties:
        urls: ["https://gist.githubusercontent.com/Somefive/b189219a9222eaa70b8908cf4379402b/raw/e603987b3e0989e01e50f69ebb1e8bb436461326/example-busybox-deployment.yaml"]

The simplest way to specify resources is to use resource: secret or resource: deployment directly to describe the kind of resource. If no name or labelSelector is set, the application will try to find a resource with the same name as the component name in the application's namespace. You can also explicitly specify name and namespace for the target resource.
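For example, a sketch of component properties that pin the source object explicitly (reusing the secret name and namespace from the earlier example):

```yaml
properties:
  objects:
    - resource: secret
      # explicitly specify the name and namespace of the source object
      name: image-credential-to-copy
      namespace: examples
```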

In addition to name and namespace, you can specify the cluster field to let the application component refer to resources in managed clusters. You can also use labelSelector to select resources instead of finding them by name.

In the following example, the application selects all deployments in the hangzhou-1 cluster inside the examples namespace that match the desired labels, then copies these deployments into the hangzhou-2 cluster.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-duplicate-deployments
  namespace: examples
spec:
  components:
    - name: duplicate-deployment
      type: ref-objects
      properties:
        objects:
          - resource: deployment
            cluster: hangzhou-1
            # select all deployments in the `examples` namespace in cluster `hangzhou-1` that match the labelSelector
            labelSelector:
              need-duplicate: "true"
  policies:
    - name: topology-hangzhou-2
      type: topology
      properties:
        clusters: ["hangzhou-2"]

Note that the override policy overrides properties defined in components and traits, while the referenced objects themselves do not carry such properties.

If you want to override the configuration of a ref-objects typed component, you can use traits. The implicit main workload is the first referenced object, and trait patches are applied to it. The following example demonstrates how to set the replica number for the referenced deployment while deploying it into hangzhou clusters.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: ref-objects-multiple-resources
  namespace: examples
spec:
  components:
    - name: nginx-ref-multiple-resources
      type: ref-objects
      properties:
        objects:
          - resource: deployment
          - resource: service
      traits:
        - type: scaler
          properties:
            replicas: 3
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou

    The container-image trait can be used to change the default image settings declared in the original deployment.

By default, container-image replaces the original image in the main container (the container that uses the same name as the component).

traits:
  - type: container-image
    properties:
      image: busybox-1.34.0

    You can modify other containers by setting the containerName field.

traits:
  - type: container-image
    properties:
      image: busybox-1.34.0
      containerName: sidecar-nginx

You can also modify the imagePullPolicy.

traits:
  - type: container-image
    properties:
      image: busybox-1.34.0
      containerName: sidecar-nginx
      imagePullPolicy: IfNotPresent

You can also patch multiple containers at once.

traits:
  - type: container-image
    properties:
      containers:
        - containerName: busybox
          image: busybox-1.34.0
          imagePullPolicy: IfNotPresent
        - containerName: sidecar-nginx
          image: nginx-1.20

    Override Container Command

The command trait can be used to modify the original command run in the deployment's pods.

traits:
  - type: command
    properties:
      command: ["sleep", "8640000"]

The above configuration patches the main container (the container that uses the same name as the component). If you want to modify another container, use the containerName field.
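For instance, a sketch that patches a sidecar container instead of the main one (the container name and command here are illustrative):

```yaml
traits:
  - type: command
    properties:
      # target the sidecar container rather than the main container
      containerName: sidecar-nginx
      command: ["nginx", "-g", "daemon off;"]
```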

If you want to replace the existing args in the container, instead of the command, use the args parameter.

traits:
  - type: command
    properties:
      args: ["86400"]

If you want to append args to or delete args from the existing args, use the addArgs or delArgs parameter. This can be useful if you have many args to manage.

traits:
  - type: command
    properties:
      addArgs: ["86400"]

traits:
  - type: command
    properties:
      delArgs: ["86400"]

    You can also configure commands in multiple containers.

traits:
  - type: command
    properties:
      containers:
        - containerName: busybox
          command: ["sleep", "8640000"]
        - containerName: sidecar-nginx
          args: ["-q"]

With the env trait, you can easily manipulate the declared environment variables.

traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second

    You can remove existing environment variables by setting the unset field.

traits:
  - type: env
    properties:
      unset: ["key_existing_first", "key_existing_second"]

    If you would like to clear all the existing environment variables first, and then add new variables, use replace: true.

traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second
      replace: true

    If you want to modify the environment variable in other containers, use the containerName field.

traits:
  - type: env
    properties:
      env:
        key_first: value_first
        key_second: value_second
      containerName: sidecar-nginx

    You can set environment variables in multiple containers as well.
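A sketch of per-container environment variables, following the same containers pattern as the command and container-image traits (the keys and values here are illustrative):

```yaml
traits:
  - type: env
    properties:
      containers:
        - containerName: busybox
          env:
            key_for_busybox_first: value_first
        - containerName: sidecar-nginx
          env:
            key_for_nginx_first: value_first
```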

    Override Labels & Annotations

    To add/update/remove labels or annotations for the workload (like Kubernetes Deployment), use the labels or annotations trait.

traits:
  # the `labels` trait will add/delete label key/value pairs on the
  # labels of the workload and the template inside the spec of the workload (if it exists)
  # 1. if the original labels contain the key, the value will be overridden
  # 2. if the original labels do not contain the key, the value will be added
  # 3. if the original labels contain the key and the value is null, the key will be removed
  - type: labels
    properties:
      added-label-key: added-label-value
      label-key: modified-label-value
      to-delete-label-key: null

traits:
  # the `annotations` trait will add/delete annotation key/value pairs on the
  # annotations of the workload and the template inside the spec of the workload (if it exists)
  # 1. if the original annotations contain the key, the value will be overridden
  # 2. if the original annotations do not contain the key, the value will be added
  # 3. if the original annotations contain the key and the value is null, the key will be removed
  - type: annotations
    properties:
      added-annotation-key: added-annotation-value
      annotation-key: modified-annotation-value
      to-delete-annotation-key: null

In addition to the traits above, a more powerful but more complex way to modify the original resources is to use the json-patch or json-merge-patch trait. They follow RFC 6902 and RFC 7386 respectively. Usage examples are shown below.

traits:
  # the json patch can be used to add, replace and delete fields
  # the following patch will
  # 1. set deployment replicas to 3
  # 2. set `pod-label-key` to `pod-label-modified-value` in pod labels
  # 3. delete `to-delete-label-key` in pod labels
  # 4. add a sidecar container to the pod
  - type: json-patch
    properties:
      operations:
        - op: add
          path: "/spec/replicas"
          value: 3
        - op: replace
          path: "/spec/template/metadata/labels/pod-label-key"
          value: pod-label-modified-value
        - op: remove
          path: "/spec/template/metadata/labels/to-delete-label-key"
        - op: add
          path: "/spec/template/spec/containers/1"
          value:
            name: busybox-sidecar
            image: busybox:1.34
            command: ["sleep", "864000"]

traits:
  # the json merge patch can be used to add, replace and delete fields
  # the following patch will
  # 1. add `deploy-label-key` to deployment labels
  # 2. set deployment replicas to 3
  # 3. set `pod-label-key` to `pod-label-modified-value` in pod labels
  # 4. delete `to-delete-label-key` in pod labels
  # 5. reset `containers` for the pod
  - type: json-merge-patch
    properties:
      metadata:
        labels:
          deploy-label-key: deploy-label-added-value
      spec:
        replicas: 3
        template:
          metadata:
            labels:
              pod-label-key: pod-label-modified-value
              to-delete-label-key: null
          spec:
            containers:
              - name: busybox-new
                image: busybox:1.34
                command: ["sleep", "864000"]

The general idea is to use override policies to override traits, so that you can distribute the referenced objects with different traits to different clusters.

Assume we're distributing the following Deployment YAML to multiple clusters:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demo
  name: demo
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - image: oamdev/testapp:v1
          name: demo

    We can specify the following topology policies.

apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster-beijing
  namespace: demo
type: topology
properties:
  clusters: ["<clusterid1>"]
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: cluster-hangzhou
  namespace: demo
type: topology
properties:
  clusters: ["<clusterid2>"]

Then we can use override policies to apply different traits to the referenced objects.

apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-replic-beijing
  namespace: demo
type: override
properties:
  components:
    - name: "demo"
      traits:
        - type: scaler
          properties:
            replicas: 3
---
apiVersion: core.oam.dev/v1alpha1
kind: Policy
metadata:
  name: override-replic-hangzhou
  namespace: demo
type: override
properties:
  components:
    - name: "demo"
      traits:
        - type: scaler
          properties:
            replicas: 5

The workflow can be defined as follows:

apiVersion: core.oam.dev/v1alpha1
kind: Workflow
metadata:
  name: deploy-demo
  namespace: demo
steps:
  - type: deploy
    name: deploy-beijing
    properties:
      policies: ["override-replic-beijing", "cluster-beijing"]
  - type: deploy
    name: deploy-hangzhou
    properties:
      policies: ["override-replic-hangzhou", "cluster-hangzhou"]

With the help of KubeVela, you can reference and distribute any Kubernetes resource to multiple clusters.