Installing Gateways

    Some of Istio’s built-in configuration profiles deploy gateways during installation. For example, a call to istioctl install with default settings will deploy an ingress gateway along with the control plane. Although fine for evaluation and simple use cases, this couples the gateway to the control plane, making management and upgrades more complicated. For production Istio deployments, it is highly recommended to decouple these to allow independent operation.

    Follow this guide to separately deploy and manage one or more gateways in a production installation of Istio.

    This guide requires the Istio control plane to be installed before proceeding.

    You can use the minimal profile, for example istioctl install --set profile=minimal, to prevent any gateways from being deployed during installation.

    Using the same mechanisms as Istio sidecar injection, the Envoy proxy configuration for gateways can similarly be auto-injected.

    Using auto-injection for gateway deployments is recommended as it gives developers full control over the gateway deployment, while also simplifying operations. When a new upgrade is available, or a configuration has changed, gateway pods can be updated by simply restarting them. This makes the experience of operating a gateway deployment the same as operating sidecars.
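
    For example, assuming the istio-ingressgateway deployment in the istio-ingress namespace used throughout this guide, a restart can be triggered with:

    $ kubectl rollout restart deployment istio-ingressgateway -n istio-ingress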

    To support users with existing deployment tools, Istio provides a few different ways to deploy a gateway. Each method will produce the same result. Choose the method you are most familiar with.

    As a security best practice, it is recommended to deploy the gateway in a different namespace from the control plane.

    First, set up an IstioOperator configuration file, called ingress.yaml here:
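
    A sketch of such a file is shown below; it assumes the empty profile so that only the listed gateway component is installed, and mirrors the label and injection template settings used by the other methods in this guide (verify the field names against your Istio version):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: ingress
    spec:
      # The empty profile installs nothing by default; only the components listed below are deployed
      profile: empty
      components:
        ingressGateways:
        - name: istio-ingressgateway
          namespace: istio-ingress
          enabled: true
          label:
            # Set a unique label for the gateway. This is required to ensure Gateways
            # can select this workload
            istio: ingressgateway
      values:
        gateways:
          istio-ingressgateway:
            # Enable gateway injection
            injectionTemplate: gateway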

    Then install using standard istioctl commands:

    $ kubectl create namespace istio-ingress
    $ istioctl install -f ingress.yaml

    First, set up a values configuration file, called values.yaml here:

    gateways:
      istio-ingressgateway:
        # Enable gateway injection
        injectionTemplate: gateway
        # Set a name for the gateway
        name: ingressgateway
        labels:
          # Set a unique label for the gateway. This is required to ensure Gateways
          # can select this workload
          istio: ingressgateway

    Then install using standard helm commands:
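
    The commands below are a sketch; the chart path assumes the istio-ingress gateway chart shipped under manifests/charts/gateways/ in the Istio release archive, so substitute whatever chart source your Istio version provides:

    $ kubectl create namespace istio-ingress
    $ helm install --namespace istio-ingress istio-ingressgateway manifests/charts/gateways/istio-ingress -f values.yaml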

    First, set up the Kubernetes configuration, called ingress.yaml here:

    apiVersion: v1
    kind: Service
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      type: LoadBalancer
      selector:
        istio: ingressgateway
      ports:
      - port: 80
        name: http
      - port: 443
        name: https
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      template:
        metadata:
          annotations:
            # Select the gateway injection template (rather than the default sidecar template)
            inject.istio.io/templates: gateway
          labels:
            # Set a unique label for the gateway. This is required to ensure Gateways can select this workload
            istio: ingressgateway
            # Enable gateway injection. If connecting to a revisioned control plane, replace with "istio.io/rev: revision-name"
            sidecar.istio.io/inject: "true"
        spec:
          containers:
          - name: istio-proxy
            image: auto # The image will automatically update each time the pod starts.
    ---
    # Set up roles to allow reading credentials for TLS
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: istio-ingressgateway-sds
      namespace: istio-ingress
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: istio-ingressgateway-sds
      namespace: istio-ingress
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: istio-ingressgateway-sds
    subjects:
    - kind: ServiceAccount
      name: default

    This example shows the bare minimum needed to get a gateway running. For production usage, additional configuration such as HorizontalPodAutoscaler, PodDisruptionBudget, and resource requests/limits are recommended. These are automatically included when using the other gateway installation methods.
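
    If you manage the gateway with plain Kubernetes YAML, sketches of such additions might look like the following (the replica counts and CPU target are illustrative only):

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: istio-ingressgateway
      minReplicas: 2
      maxReplicas: 5
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80
    ---
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: istio-ingressgateway
      namespace: istio-ingress
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          istio: ingressgateway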

    Next, apply it to the cluster:

    $ kubectl create namespace istio-ingress
    $ kubectl apply -f ingress.yaml

    The following describes how to manage gateways after installation. For more information on their usage, follow the Ingress and Egress tasks.

    The labels on a gateway deployment’s pods are used by Gateway configuration resources, so it’s important that your Gateway selector matches these labels.

    For example, in the above deployments, the istio=ingressgateway label is set on the gateway pods. To apply a Gateway to these deployments, you need to select the same label:
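
    For instance, a Gateway resource selecting the pods deployed above might look like the following sketch (the port and hosts values are illustrative):

    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: gateway
      namespace: istio-ingress
    spec:
      # Match the labels set on the gateway pods above
      selector:
        istio: ingressgateway
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"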

    Depending on your mesh configuration and use cases, you may wish to deploy gateways in different ways. A few different gateway deployment patterns are shown below. Note that more than one of these patterns can be used within the same cluster.

    Shared gateway

    In this model, a single centralized gateway is used by many applications, possibly across many namespaces. Gateway(s) in the ingress namespace delegate ownership of routes to application namespaces, but retain control over TLS configuration.
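
    For example, an application team could attach its routes to the shared gateway by referencing it from a VirtualService in the application namespace; the names myapp, myapp-ns, and the host below are hypothetical:

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: myapp
      namespace: myapp-ns
    spec:
      hosts:
      - "myapp.example.com"
      gateways:
      # Reference the shared Gateway as <gateway namespace>/<gateway name>
      - istio-ingress/gateway
      http:
      - route:
        - destination:
            host: myapp
            port:
              number: 8080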

    Shared gateway

    This model works well when you have many applications you want to expose externally, as they are able to use shared infrastructure. It also works well in use cases that have the same domain or TLS certificates shared by many applications.

    Dedicated application gateway

    In this model, an application namespace has its own dedicated gateway installation. This allows giving full control and ownership to a single namespace. This level of isolation can be helpful for critical applications that have strict performance or security requirements.

    Dedicated application gateway

    Unless there is another load balancer in front of Istio, this typically means that each application will have its own IP address, which may complicate DNS configurations.

    Because gateways utilize pod injection, new gateway pods that are created will automatically be injected with the latest configuration, which includes the control plane version.

    If you would like to change the control plane revision in use by the gateway, you can set the istio.io/rev label on the gateway Deployment, which will also trigger a rolling restart.
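
    For example, to move the gateway defined earlier to a hypothetical revision named canary, the pod template labels in ingress.yaml could be changed as follows before re-applying the manifest (excerpt of the Deployment only):

    template:
      metadata:
        labels:
          istio: ingressgateway
          # The revision label replaces sidecar.istio.io/inject and selects the canary revision
          istio.io/rev: canary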

    In place upgrade in progress

    This upgrade method depends on control plane revisions, and therefore can only be used in conjunction with revision-based upgrades.

    If you would like to control the rollout of a new control plane revision more gradually, you can run multiple versions of a gateway deployment. For example, if you want to roll out a new revision, canary, create a copy of your gateway deployment with the istio.io/rev=canary label set:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: istio-ingressgateway-canary
      namespace: istio-ingress
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            istio: ingressgateway
            istio.io/rev: canary # Set to the control plane revision you want to deploy
        spec:
          containers:
          - name: istio-proxy
            image: auto

    When this deployment is created, you will then have two versions of the gateway, both selected by the same Service:

    $ kubectl get endpoints -o "custom-columns=NAME:.metadata.name,PODS:.subsets[*].addresses[*].targetRef.name"
    NAME                   PODS
    istio-ingressgateway   istio-ingressgateway-788854c955-8gv96,istio-ingressgateway-canary-b78944cbd-mq2qf

    Canary upgrade in progress

    Unlike application services deployed inside the mesh, you cannot use Istio traffic shifting to distribute the traffic between the gateway versions, because their traffic comes directly from external clients that Istio does not control. Instead, you can control the distribution of traffic with the number of replicas of each deployment. If you use another load balancer in front of Istio, you may also use that to control the traffic distribution.
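
    For example, assuming the two deployments above, sending roughly a quarter of connections to the canary can be approximated by scaling the replica counts accordingly:

    $ kubectl -n istio-ingress scale deployment istio-ingressgateway --replicas=3
    $ kubectl -n istio-ingress scale deployment istio-ingressgateway-canary --replicas=1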

    Because the other installation methods bundle the gateway Service, which controls its external IP address, with the gateway Deployment, only the Kubernetes YAML method is supported for this upgrade approach.

    A variant of the canary approach is to shift traffic between the versions using a high-level construct outside Istio, such as an external load balancer or DNS.

    Canary upgrade in progress with external traffic shifting

    This offers fine-grained control, but may be unsuitable or overly complicated to set up in some environments.