# Built-in Policies

This document lists the parameters of all built-in policies in alphabetical order.
## Apply-Once

Allow configuration drift: only deliver the resources when deploying, without guaranteeing eventual-state consistency. Suitable for lightweight delivery scenarios that cooperate with other controllers. It is generally used for one-time delivery, without continuous management.
### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| selector | Specify the rules for selecting target resources. | selector | false | |
| strategy | Specify the strategy for configuring the resource-level configuration-drift behaviour. | strategy | true | |
#### selector

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| componentNames | Select target resources by component names. | []string | false | |
| componentTypes | Select target resources by component types. | []string | false | |
| oamTypes | Select target resources by OAM concept, component (COMPONENT) or trait (TRAIT). | []string | false | |
| traitTypes | Select target resources by trait types. | []string | false | |
| resourceTypes | Select target resources by resource types. | []string | false | |
| resourceNames | Select target resources by resource names. | []string | false | |
#### strategy

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| affect | When the strategy takes effect, e.g. onUpdate, onStateKeep. | string | false | |
| path | Specify the paths of the resource. | []string | true | |
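For illustration, a minimal sketch of attaching an apply-once policy to an application. The application and component names here are hypothetical, the rule fields follow the tables above, and the `enable` switch is assumed from the upstream apply-once policy:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apply-once-app        # hypothetical application name
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
  policies:
    - name: apply-once
      type: apply-once
      properties:
        enable: true
        rules:
          # allow drift on the replica count of Deployments when they are updated
          - selector:
              resourceTypes: ["Deployment"]
            strategy:
              affect: onUpdate
              path: ["spec.replicas"]
```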
## Garbage-Collect

Configure the resource recycling behavior of the application, e.g. keeping legacy resources from being recycled:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: first-vela-app
spec:
  components:
    - name: express-server
      type: webservice
      properties:
        image: oamdev/hello-world
        port: 8000
      traits:
        - type: ingress-1-20
          properties:
            domain: testsvc.example.com
            http:
              "/": 8000
  policies:
    - name: keep-legacy-resource
      type: garbage-collect
      properties:
        keepLegacyResource: true
```
You can also control recycling at the resource level with rules. In the example below, the resources created by the `expose` trait are recycled only when the application is deleted:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: garbage-collect-app
spec:
  components:
    - name: hello-world-new
      type: webservice
      properties:
        image: oamdev/hello-world
      traits:
        - type: expose
          properties:
            port: [8000]
  policies:
    - name: garbage-collect
      type: garbage-collect
      properties:
        rules:
          - selector:
              traitTypes:
                - expose
            strategy: onAppDelete
```
### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| keepLegacyResource | If true, outdated versioned resource trackers will not be recycled automatically; outdated resources are kept until the resource tracker is deleted manually. | bool | false | false |
| rules | A list of rules that control the garbage-collect strategy at the resource level. If a resource is matched by multiple rules, the first rule is used. | []rules | false | |
#### selector

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| componentNames | Select target resources by component names. | []string | false | |
| componentTypes | Select target resources by component types. | []string | false | |
| oamTypes | Select target resources by OAM concept, component (COMPONENT) or trait (TRAIT). | []string | false | |
| traitTypes | Select target resources by trait types. | []string | false | |
| resourceTypes | Select target resources by resource types. | []string | false | |
| resourceNames | Select target resources by resource names. | []string | false | |
## Health

Apply periodical health checking to the application.

### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| probeTimeout | Specify the health-checking timeout in seconds. | int | false | 10 |
| probeInterval | Specify the health-checking interval in seconds. | int | false | 30 |
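For illustration, a minimal sketch of attaching the health policy. The application and component names are hypothetical; the parameters follow the table above:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-with-health       # hypothetical application name
spec:
  components:
    - name: hello-world
      type: webservice
      properties:
        image: oamdev/hello-world
  policies:
    - name: health-check
      type: health
      properties:
        probeInterval: 15   # check every 15 seconds instead of the default 30
        probeTimeout: 5     # time out each probe after 5 seconds instead of the default 10
```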
## Override

Describe the configuration to override when deploying resources. It only takes effect when used together with the `deploy` workflow step.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: deploy-with-override
  namespace: examples
spec:
  components:
    - name: nginx-with-override
      type: webservice
      properties:
        image: nginx
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusterLabelSelector:
          region: hangzhou
    - name: topology-local
      type: topology
      properties:
        clusters: ["local"]
        namespace: examples-alternative
    - name: override-nginx-legacy-image
      type: override
      properties:
        components:
          - name: nginx-with-override
            properties:
              image: nginx:1.20
    - name: override-high-availability
      type: override
      properties:
        components:
          - type: webservice
            traits:
              - type: scaler
                properties:
                  replicas: 3
  workflow:
    steps:
      - type: deploy
        name: deploy-local
        properties:
          policies: ["topology-local"]
      - type: deploy
        name: deploy-hangzhou
        properties:
          policies: ["topology-hangzhou-clusters", "override-nginx-legacy-image", "override-high-availability"]
```
### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| components | The list of component configurations to override. | []components | true | |
| selector | The list of component names to use. If not set, all components are used. | []string | false | |
#### components

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| name | The name of the component to override. If not set, it matches all components of the specified type. Can be combined with the wildcard `*` for fuzzy matching. | string | false | |
| type | The type of the component to override. If not set, it matches all component types. | string | false | |
| properties | The configuration properties to override; fields not specified here are merged with the original configuration. | map[string]:_ | false | |
| traits | The list of trait configurations to override. | []traits | false | |
## Replication

Describe the configuration to replicate components when deploying resources. It only works with the specified `deploy` step in a workflow.

In KubeVela, we can dispatch resources across clusters. But projects like OpenYurt have finer-grained divisions, such as node pools, which require dispatching similar resources to the same cluster. These resources are called replications. Back to the OpenYurt example: it can integrate with KubeVela, replicate the resources, and then dispatch them to different node pools.

Replication is an internal policy and can only be used with the `deploy` workflow step. When the replication policy is used, a new field `replicaKey` is added to the context, and users can use definitions that make use of `context.replicaKey`. For example, apply a `replica-webservice` ComponentDefinition:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  annotations:
    definition.oam.dev/description: Webservice, but can be replicated
  name: replica-webservice
  namespace: vela-system
spec:
  schematic:
    cue:
      template: |
        // (the parameter and exposePorts definitions of the full template are omitted in this excerpt)
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          metadata: {
            if context.replicaKey != _|_ {
              name: context.name + "-" + context.replicaKey
            }
            if context.replicaKey == _|_ {
              name: context.name
            }
          }
          spec: {
            selector: matchLabels: {
              "app.oam.dev/component": context.name
              if context.replicaKey != _|_ {
                "app.oam.dev/replicaKey": context.replicaKey
              }
            }
            template: {
              metadata: {
                labels: {
                  if parameter.labels != _|_ {
                    parameter.labels
                  }
                  if parameter.addRevisionLabel {
                    "app.oam.dev/revision": context.revision
                  }
                  "app.oam.dev/name":      context.appName
                  "app.oam.dev/component": context.name
                  if context.replicaKey != _|_ {
                    "app.oam.dev/replicaKey": context.replicaKey
                  }
                }
                if parameter.annotations != _|_ {
                  annotations: parameter.annotations
                }
              }
            }
          }
        }
        outputs: {
          if len(exposePorts) != 0 {
            webserviceExpose: {
              apiVersion: "v1"
              kind:       "Service"
              metadata: {
                if context.replicaKey != _|_ {
                  name: context.name + "-" + context.replicaKey
                }
                if context.replicaKey == _|_ {
                  name: context.name
                }
              }
              spec: {
                selector: {
                  "app.oam.dev/component": context.name
                  if context.replicaKey != _|_ {
                    "app.oam.dev/replicaKey": context.replicaKey
                  }
                }
                ports: exposePorts
                type:  parameter.exposeType
              }
            }
          }
        }
```
Then the user can apply the application below. The replication policy is declared in `application.spec.policies`, and these policies are referenced by the `deploy-with-rep` workflow step. They work together to influence the `deploy` step:

- override: select the `hello-rep` component to deploy.
- topology: select the cluster `local` to deploy to.
- replication: select the `hello-rep` component to replicate.
The full application is shown below. As a result, there will be two Deployments and two Services:
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app-replication-policy
spec:
  components:
    - name: hello-rep
      type: replica-webservice
      properties:
        image: crccheck/hello-world
        ports:
          - port: 80
  policies:
    - name: comp-to-replicate
      type: override
      properties:
        selector: [ "hello-rep" ]
    - name: target-default
      type: topology
      properties:
        clusters: [ "local" ]
    - name: replication-default
      type: replication
      properties:
        keys: ["beijing", "hangzhou"]
        selector: ["hello-rep"]
  workflow:
    steps:
      - name: deploy-with-rep
        type: deploy
        properties:
          policies: ["comp-to-replicate", "target-default", "replication-default"]
```
```shell
$ kubectl get deploy -n default
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
hello-rep-beijing    1/1     1            1           5s
hello-rep-hangzhou   1/1     1            1           5s

$ kubectl get service -n default
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
hello-rep-hangzhou   ClusterIP   10.43.23.200   <none>        80/TCP    41s
hello-rep-beijing    ClusterIP   10.43.24.116   <none>        80/TCP    12s
```
### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| keys | Specify the keys of replication. Each key corresponds to a replicated component. | []string | true | |
| selector | Specify the components to be replicated. | []string | false | |
## Shared-Resource

Configure the resources to be sharable across applications. It is used to configure which resources can be shared between applications: the target resource allows multiple applications to read it, but only the first one is able to write it.
```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: app2
spec:
  components:
    - name: ns2
      type: k8s-objects
      properties:
        objects:
          - apiVersion: v1
            kind: Namespace
            metadata:
              name: example
    - name: cm2
      type: k8s-objects
      properties:
        objects:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: cm2
              namespace: example
            data:
              key: value2
  policies:
    - name: shared-resource
      type: shared-resource
      properties:
        rules:
          - selector:
              resourceTypes: ["Namespace"]
```
### Parameters

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| rules | Specify the list of rules that control the shared-resource strategy at the resource level. | []rules | false | |
#### rules

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| selector | Specify the rules for selecting target resources. | selector | true | |
#### selector

| Name | Description | Type | Required | Default |
| --- | --- | --- | --- | --- |
| componentNames | Select target resources by component names. | []string | false | |
| componentTypes | Select target resources by component types. | []string | false | |
| oamTypes | Select target resources by OAM concept, component (COMPONENT) or trait (TRAIT). | []string | false | |
| traitTypes | Select target resources by trait types. | []string | false | |
| resourceTypes | Select target resources by resource types. | []string | false | |
| resourceNames | Select target resources by resource names. | []string | false | |
## Topology

Describe the destination where components should be deployed. You can directly specify the names of the target clusters:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: basic-topology
  namespace: examples
spec:
  components:
    - name: nginx-basic
      type: webservice
      properties:
        image: nginx
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties:
        clusters: ["hangzhou-1", "hangzhou-2"]
```
You can also select target clusters by labels:
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: label-selector-topology
  namespace: examples
spec:
  components:
    - name: nginx-label-selector
      type: webservice
      properties:
        image: nginx
  policies:
    - name: topology-hangzhou-clusters
      type: topology
      properties: