Destination Rule

    For example, the following rule uses the least connection load balancing policy for all traffic to the ratings service.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: LEAST_CONN

    Version-specific policies can be specified by defining a named subset and overriding the settings specified at the service level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version:v3).

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: LEAST_CONN
      subsets:
      - name: testversion
        labels:
          version: v3
        trafficPolicy:
          loadBalancer:
            simple: ROUND_ROBIN

    Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
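
    For instance, a VirtualService route rule along the following lines would send traffic to the testversion subset and thereby activate its policy (a minimal sketch; the VirtualService name and the absence of match conditions are illustrative, not part of the rule above):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: bookinfo-ratings-route   # illustrative name
    spec:
      hosts:
      - ratings.prod.svc.cluster.local
      http:
      - route:
        - destination:
            host: ratings.prod.svc.cluster.local
            subset: testversion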

    Traffic policies can be customized for specific ports as well. The following rule uses the least connection load balancing policy for all traffic to port 80, while using a round robin load balancing setting for traffic to port 9080.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings-port
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy: # Apply to all ports
        portLevelSettings:
        - port:
            number: 80
          loadBalancer:
            simple: LEAST_CONN
        - port:
            number: 9080
          loadBalancer:
            simple: ROUND_ROBIN

    DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.

    TrafficPolicy

    Traffic policies to apply for a specific destination, across all destination ports. See DestinationRule for examples.

    loadBalancer (LoadBalancerSettings, optional)
        Settings controlling the load balancer algorithms.

    connectionPool (ConnectionPoolSettings, optional)
        Settings controlling the volume of connections to an upstream service.

    outlierDetection (OutlierDetection, optional)
        Settings controlling eviction of unhealthy hosts from the load balancing pool.

    tls (ClientTLSSettings, optional)
        TLS related settings for connections to the upstream service.

    portLevelSettings (PortTrafficPolicy[], optional)
        Traffic policies specific to individual ports. Note that port-level settings will override the destination-level settings. Traffic settings specified at the destination level will not be inherited when overridden by port-level settings, i.e. default values will be applied to fields omitted in port-level traffic policies.
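
    As a sketch of the inheritance rule above (the rule name and values are illustrative), the connection pool limit below applies to ports without a port-level policy; because the port-level policy for port 9080 only sets a load balancer, connection pool settings for that port fall back to their defaults rather than inheriting maxConnections: 100.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: ratings-port-override    # illustrative name
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 100      # applies where no port-level policy is defined
        portLevelSettings:
        - port:
            number: 9080
          loadBalancer:
            simple: ROUND_ROBIN      # port 9080 gets default connection pool settings, not maxConnections: 100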

    Subset

    A subset of endpoints of a service. Subsets can be used for scenarios like A/B testing, or routing to a specific version of a service. Refer to VirtualService documentation for examples of using subsets in these scenarios. In addition, traffic policies defined at the service-level can be overridden at a subset-level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version:v3).

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: LEAST_CONN
      subsets:
      - name: testversion
        labels:
          version: v3
        trafficPolicy:
          loadBalancer:
            simple: ROUND_ROBIN

    Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.

    One or more labels are typically required to identify the subset destination. However, when the corresponding DestinationRule represents a host that supports multiple SNI hosts (e.g., an egress gateway), a subset without labels may be meaningful. In this case a traffic policy with ClientTLSSettings can be used to identify a specific SNI host corresponding to the named subset.
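
    For example, a rule along these lines could define label-free subsets on an egress gateway that are distinguished only by the SNI host in their TLS settings (a sketch; the rule name and SNI hosts are illustrative):

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: egressgateway-sni-subsets            # illustrative name
    spec:
      host: istio-egressgateway.istio-system.svc.cluster.local
      subsets:
      - name: foo-example                        # no labels; identified via SNI instead
        trafficPolicy:
          tls:
            mode: ISTIO_MUTUAL
            sni: foo.example.com                 # illustrative SNI host
      - name: bar-example
        trafficPolicy:
          tls:
            mode: ISTIO_MUTUAL
            sni: bar.example.com                 # illustrative SNI host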

    name (string, required)
        Name of the subset. The service name and the subset name can be used for traffic splitting in a route rule.

    labels (map<string, string>, optional)
        Labels apply a filter over the endpoints of a service in the service registry. See route rules for examples of usage.

    trafficPolicy (TrafficPolicy, optional)
        Traffic policies that apply to this subset. Subsets inherit the traffic policies specified at the DestinationRule level. Settings specified at the subset level will override the corresponding settings specified at the DestinationRule level.

    LoadBalancerSettings

    Load balancing policies to apply for a specific destination. See Envoy’s load balancing for more details.

    For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: ROUND_ROBIN

    The following example sets up sticky sessions for the ratings service, using a consistent hash-based load balancer with the user cookie as the hash key.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          consistentHash:
            httpCookie:
              name: user
              ttl: 0s
    simple (SimpleLB, oneof, optional)
        Standard load balancing algorithm that requires no tuning. See SimpleLB below.

    consistentHash (ConsistentHashLB, oneof, optional)
        Consistent hash-based load balancing. See ConsistentHashLB below.

    localityLbSetting (LocalityLoadBalancerSetting, optional)
        Locality load balancer settings. This will override the mesh-wide settings in their entirety, meaning no merging is performed between this object and the one in MeshConfig.

    ConnectionPoolSettings

    Connection pool settings for an upstream host. The settings apply to each individual host in the upstream service. See Envoy's circuit breaker for more details. Connection pool settings can be applied at the TCP level as well as at the HTTP level.

    For example, the following rule sets a limit of 100 connections to a redis service called myredissrv, with a connect timeout of 30ms and TCP keepalive settings.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-redis
    spec:
      host: myredissrv.prod.svc.cluster.local
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 100
            connectTimeout: 30ms
            tcpKeepalive:
              time: 7200s
              interval: 75s
    tcp (TCPSettings, optional)
        Settings common to both HTTP and TCP upstream connections.

    http (HTTPSettings, optional)
        HTTP connection pool settings.

    OutlierDetection

    A Circuit breaker implementation that tracks the status of each individual host in the upstream service. Applicable to both HTTP and TCP services. For HTTP services, hosts that continually return 5xx errors for API calls are ejected from the pool for a pre-defined period of time. For TCP services, connection timeouts or connection failures to a given host counts as an error when measuring the consecutive errors metric. See Envoy’s outlier detection for more details.

    The following rule sets a connection pool size of 100 HTTP1 connections with no more than 10 req/connection to the “reviews” service. In addition, it sets a limit of 1000 concurrent HTTP2 requests and configures upstream hosts to be scanned every 5 mins so that any host that fails 7 consecutive times with a 502, 503, or 504 error code will be ejected for 15 minutes.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: reviews-cb-policy
    spec:
      host: reviews.prod.svc.cluster.local
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 100
          http:
            http2MaxRequests: 1000
            maxRequestsPerConnection: 10
        outlierDetection:
          consecutive5xxErrors: 7
          interval: 5m
          baseEjectionTime: 15m
    consecutiveGatewayErrors (UInt32Value, optional)
        Number of gateway errors before a host is ejected from the connection pool. When the upstream host is accessed over HTTP, a 502, 503, or 504 return code qualifies as a gateway error. When the upstream host is accessed over an opaque TCP connection, connect timeouts and connection error/failure events qualify as a gateway error. This feature is disabled by default or when set to the value 0.

        Note that consecutiveGatewayErrors and consecutive5xxErrors can be used separately or together. Because the errors counted by consecutiveGatewayErrors are also included in consecutive5xxErrors, if the value of consecutiveGatewayErrors is greater than or equal to the value of consecutive5xxErrors, consecutiveGatewayErrors will have no effect.

    consecutive5xxErrors (UInt32Value, optional)
        Number of 5xx errors before a host is ejected from the connection pool. When the upstream host is accessed over an opaque TCP connection, connect timeouts, connection error/failure and request failure events qualify as a 5xx error. This feature defaults to 5 but can be disabled by setting the value to 0.

        Note that consecutiveGatewayErrors and consecutive5xxErrors can be used separately or together. Because the errors counted by consecutiveGatewayErrors are also included in consecutive5xxErrors, if the value of consecutiveGatewayErrors is greater than or equal to the value of consecutive5xxErrors, consecutiveGatewayErrors will have no effect.

    interval (Duration, optional)
        Time interval between ejection sweep analysis. Format: 1h/1m/1s/1ms. MUST BE >=1ms. Default is 10s.

    baseEjectionTime (Duration, optional)

    maxEjectionPercent (int32, optional)
        Maximum % of hosts in the load balancing pool for the upstream service that can be ejected. Defaults to 10%.

    minHealthPercent (int32, optional)
        Outlier detection will be enabled as long as the associated load balancing pool has at least minHealthPercent hosts in healthy mode. When the percentage of healthy hosts in the load balancing pool drops below this threshold, outlier detection will be disabled and the proxy will load balance across all hosts in the pool (healthy and unhealthy). The threshold can be disabled by setting it to 0%. The default is 0% as it is not typically applicable in k8s environments with few pods per service.
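
    As a sketch of how the two counters interact (the rule name and thresholds are illustrative), the rule below ejects on gateway errors only; the broader 5xx counter is explicitly disabled, since leaving it at its default of 5 alongside consecutiveGatewayErrors: 7 would render the gateway-error setting ineffective, per the note above.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: reviews-gateway-errors      # illustrative name
    spec:
      host: reviews.prod.svc.cluster.local
      trafficPolicy:
        outlierDetection:
          consecutiveGatewayErrors: 7   # count only 502/503/504 (or TCP connect failures)
          consecutive5xxErrors: 0       # disable the broader 5xx counter
          interval: 5m
          baseEjectionTime: 15m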

    ClientTLSSettings

    SSL/TLS related settings for upstream connections. See Envoy's TLS context for more details. These settings are common to both HTTP and TCP upstreams.

    For example, the following rule configures a client to use mutual TLS for connections to an upstream database cluster.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: db-mtls
    spec:
      host: mydbserver.prod.svc.cluster.local
      trafficPolicy:
        tls:
          mode: MUTUAL
          clientCertificate: /etc/certs/myclientcert.pem
          privateKey: /etc/certs/client_private_key.pem
          caCertificates: /etc/certs/rootcacerts.pem

    The following rule configures a client to use TLS when talking to a foreign service whose domain matches *.foo.com.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: tls-foo
    spec:
      host: "*.foo.com"
      trafficPolicy:
        tls:
          mode: SIMPLE

    The following rule configures a client to use Istio mutual TLS when talking to the ratings service.

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: ratings-istio-mtls
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        tls:
          mode: ISTIO_MUTUAL

    LocalityLoadBalancerSetting

    Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form. For additional detail, refer to Locality Weight. The following example shows how to set up locality weights mesh-wide.

    Given a mesh with workloads and their service deployed to "us-west/zone1/" and "us-west/zone2/", this example specifies that when traffic accessing a service originates from workloads in "us-west/zone1/", 80% of the traffic will be sent to endpoints in "us-west/zone1/" (i.e., the same zone), and the remaining 20% will go to endpoints in "us-west/zone2/". This setup is intended to favor routing traffic to endpoints in the same locality. A similar setting is specified for traffic originating in "us-west/zone2/".

    distribute:
    - from: us-west/zone1/*
      to:
        "us-west/zone1/*": 80
        "us-west/zone2/*": 20
    - from: us-west/zone2/*
      to:
        "us-west/zone1/*": 20
        "us-west/zone2/*": 80

    If the goal of the operator is not to distribute load across zones and regions, but rather to restrict the regionality of failover to meet other operational requirements, an operator can set a 'failover' policy instead of a 'distribute' policy.

    The following example sets up a locality failover policy for regions. Assuming a service resides in zones within us-east, us-west, and eu-west, this example specifies that when endpoints within us-east become unhealthy, traffic should fail over to endpoints in any zone or sub-zone within eu-west; similarly, us-west should fail over to us-east.

    failover:
    - from: us-east
      to: eu-west
    - from: us-west
      to: us-east

    Locality load balancing settings.

    distribute (Distribute[], optional)
        Optional: only one of distribute or failover can be set. Explicitly specify load balancing weights across different zones and geographical locations. Refer to Locality weighted load balancing. If empty, the locality weight is set according to the number of endpoints within the locality.

    failover (Failover[], optional)
        Optional: only one of failover or distribute can be set. Explicitly specify the region traffic will land on when endpoints in the local region become unhealthy. Should be used together with OutlierDetection to detect unhealthy endpoints. Note: if no OutlierDetection is specified, this will not take effect.

    enabled (BoolValue, optional)
        Enable locality load balancing. This is DestinationRule-level and will override mesh-wide settings in their entirety; e.g., true turns on locality load balancing for this DestinationRule regardless of the mesh-wide setting.
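
    Since a failover policy only takes effect when OutlierDetection is configured, a combined rule might look roughly like the following (a sketch; the host, thresholds, and regions are illustrative):

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: reviews-locality-failover   # illustrative name
    spec:
      host: reviews.prod.svc.cluster.local
      trafficPolicy:
        outlierDetection:               # required for failover to take effect
          consecutive5xxErrors: 7
          interval: 5m
          baseEjectionTime: 15m
        loadBalancer:
          localityLbSetting:
            failover:
            - from: us-east
              to: eu-west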

    TrafficPolicy.PortTrafficPolicy

    Traffic policies that apply to specific ports of the service

    port (PortSelector, optional)
        Specifies the number of a port on the destination service on which this policy is being applied.

    loadBalancer (LoadBalancerSettings, optional)
        Settings controlling the load balancer algorithms.

    connectionPool (ConnectionPoolSettings, optional)
        Settings controlling the volume of connections to an upstream service.

    outlierDetection (OutlierDetection, optional)
        Settings controlling eviction of unhealthy hosts from the load balancing pool.

    tls (ClientTLSSettings, optional)
        TLS related settings for connections to the upstream service.

    LoadBalancerSettings.ConsistentHashLB

    Consistent Hash-based load balancing can be used to provide soft session affinity based on HTTP headers, cookies or other properties. This load balancing policy is applicable only for HTTP connections. The affinity to a particular destination host will be lost when one or more hosts are added/removed from the destination service.

    httpHeaderName (string, oneof, optional)
        Hash based on a specific HTTP header.

    httpCookie (HTTPCookie, oneof, optional)
        Hash based on HTTP cookie.

    useSourceIp (bool, oneof, optional)
        Hash based on the source IP address.

    httpQueryParameterName (string, oneof, optional)
        Hash based on a specific HTTP query parameter.

    minimumRingSize (uint64, optional)
        The minimum number of virtual nodes to use for the hash ring. Defaults to 1024. Larger ring sizes result in more granular load distributions. If the number of hosts in the load balancing pool is larger than the ring size, each host will be assigned a single virtual node.
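
    For instance, hashing on a request header rather than a cookie might look like this (a sketch; the rule name and the x-user-id header are hypothetical):

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: bookinfo-ratings-header-hash   # illustrative name
    spec:
      host: ratings.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          consistentHash:
            httpHeaderName: x-user-id      # hypothetical header carrying a stable user id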

    LoadBalancerSettings.ConsistentHashLB.HTTPCookie

    Describes an HTTP cookie that will be used as the hash key for the Consistent Hash load balancer. If the cookie is not present, it will be generated.

    name (string, required)
        Name of the cookie.

    path (string, optional)
        Path to set for the cookie.

    ttl (Duration, required)
        Lifetime of the cookie.

    ConnectionPoolSettings.TCPSettings

    Settings common to both HTTP and TCP upstream connections.

    maxConnections (int32, optional)
        Maximum number of HTTP1/TCP connections to a destination host.

    connectTimeout (Duration, optional)
        TCP connection timeout. Format: 1h/1m/1s/1ms. MUST BE >=1ms. Default is 10s.

    tcpKeepalive (TcpKeepalive, optional)
        If set, then set SO_KEEPALIVE on the socket to enable TCP keepalives.

    ConnectionPoolSettings.HTTPSettings

    Settings applicable to HTTP1.1/HTTP2/GRPC connections.

    ConnectionPoolSettings.TCPSettings.TcpKeepalive

    TCP keepalive.

    probes (uint32, optional)
        Maximum number of keepalive probes to send without response before deciding the connection is dead. Default is to use the OS-level configuration (unless overridden, Linux defaults to 9).

    time (Duration, optional)
        The time duration a connection needs to be idle before keep-alive probes start being sent. Default is to use the OS-level configuration (unless overridden, Linux defaults to 7200s, i.e. 2 hours).

    interval (Duration, optional)
        The time duration between keep-alive probes. Default is to use the OS-level configuration (unless overridden, Linux defaults to 75s).

    LocalityLoadBalancerSetting.Distribute

    Describes how traffic originating in the ‘from’ zone or sub-zone is distributed over a set of ‘to’ zones. Syntax for specifying a zone is {region}/{zone}/{sub-zone} and terminal wildcards are allowed on any segment of the specification. Examples:

    * - matches all localities

    us-west/* - all zones and sub-zones within the us-west region

    us-west/zone-1/* - all sub-zones within us-west/zone-1

    from (string, optional)
        Originating locality, '/' separated, e.g. 'region/zone/sub_zone'.

    to (map<string, uint32>, optional)
        Map of upstream localities to traffic distribution weights. The sum of all weights should be 100. Any locality not present will receive no traffic.

    LocalityLoadBalancerSetting.Failover

    Specify the traffic failover policy across regions. Since zone and sub-zone failover is supported by default this only needs to be specified for regions when the operator needs to constrain traffic failover so that the default behavior of failing over to any endpoint globally does not apply. This is useful when failing over traffic across regions would not improve service health or may need to be restricted for other reasons like regulatory controls.

    from (string, optional)
        Originating region.

    to (string, optional)
        Destination region the traffic will fail over to when endpoints in the 'from' region become unhealthy.

    google.protobuf.UInt32Value

    Wrapper message for uint32.

    The JSON representation for UInt32Value is JSON number.

    value (uint32, optional)
        The uint32 value.

    LoadBalancerSettings.SimpleLB

    Standard load balancing algorithms that require no tuning.

    ROUND_ROBIN
        Round robin policy. Default.

    LEAST_CONN
        The least request load balancer uses an O(1) algorithm which selects two random healthy hosts and picks the host which has fewer active requests.

    RANDOM
        The random load balancer selects a random healthy host. The random load balancer generally performs better than round robin if no health checking policy is configured.

    PASSTHROUGH
        This option will forward the connection to the original IP address requested by the caller without doing any form of load balancing. This option must be used with care. It is meant for advanced use cases. Refer to Original Destination load balancer in Envoy for further details.
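
    A PASSTHROUGH policy might be configured along these lines (a sketch; the rule name and host are illustrative), so that connections are sent to whatever IP address the caller originally requested:

    apiVersion: networking.istio.io/v1beta1
    kind: DestinationRule
    metadata:
      name: legacy-passthrough   # illustrative name
    spec:
      host: legacy.prod.svc.cluster.local
      trafficPolicy:
        loadBalancer:
          simple: PASSTHROUGH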

    ConnectionPoolSettings.HTTPSettings.H2UpgradePolicy

    Policy for upgrading http1.1 connections to http2.

    ClientTLSSettings.TLSmode

    TLS connection mode

    DISABLE
        Do not setup a TLS connection to the upstream endpoint.

    SIMPLE
        Originate a TLS connection to the upstream endpoint.

    MUTUAL
        Secure connections to the upstream using mutual TLS by presenting client certificates for authentication.

    ISTIO_MUTUAL