Egress Gateways

    The Accessing External Services task shows how to configure Istio to allow access to external HTTP and HTTPS services from applications inside the mesh. There, the external services are called directly from the client sidecar. This example also shows how to configure Istio to call external services, although this time indirectly via a dedicated egress gateway service.

    Istio uses ingress and egress gateways to configure load balancers executing at the edge of a service mesh. An ingress gateway allows you to define entry points into the mesh that all incoming traffic flows through. An egress gateway is a symmetrical concept; it defines exit points from the mesh. Egress gateways allow you to apply Istio features, for example, monitoring and route rules, to traffic exiting the mesh.

    Consider an organization that has a strict security requirement that all traffic leaving the service mesh must flow through a set of dedicated nodes. These nodes will run on dedicated machines, separated from the rest of the nodes running applications in the cluster. These special nodes will serve for policy enforcement on the egress traffic and will be monitored more thoroughly than other nodes.

    Another use case is a cluster where the application nodes don’t have public IPs, so the in-mesh services that run on them cannot access the Internet. Defining an egress gateway, directing all the egress traffic through it, and allocating public IPs to the egress gateway nodes allows the application nodes to access external services in a controlled way.

    Before you begin

    • Set up Istio by following the instructions in the installation guide.

      The egress gateway and access logging will be enabled if you install the demo configuration profile.
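
      For example, assuming you install with istioctl (a sketch only; adapt it to whatever installation method you actually use), the demo profile can be selected as follows:

      $ istioctl install --set profile=demo -y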

    • Deploy the sleep sample app to use as a test source for sending requests. If you have automatic sidecar injection enabled, run the following command to deploy the sample app:

      $ kubectl apply -f @samples/sleep/sleep.yaml@

      Otherwise, manually inject the sidecar before deploying the sleep application with the following command:


      $ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@)

      You can use any pod with curl installed as a test source.

    • Set the SOURCE_POD environment variable to the name of your source pod:

      $ export SOURCE_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})

    The instructions in this task create a destination rule for the egress gateway in the default namespace and assume that the client, SOURCE_POD, is also running in the default namespace. If not, the destination rule will not be found on the destination rule lookup path and the client requests will fail.
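
    If your client does run in a different namespace, a minimal sketch of the workaround (the namespace name below is a placeholder) is to create the same destination rule, defined later in this task, in that namespace instead:

    $ kubectl apply -n <client-namespace> -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: egressgateway-for-cnn
    spec:
      host: istio-egressgateway.istio-system.svc.cluster.local
      subsets:
      - name: cnn
    EOF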

    Deploy Istio egress gateway

    1. Check if the Istio egress gateway is deployed:

      $ kubectl get pod -l istio=egressgateway -n istio-system

      If no pods are returned, deploy the Istio egress gateway by performing the following step.

    2. If you used an IstioOperator CR to install Istio, add the following fields to your configuration:

      spec:
        components:
          egressGateways:
          - name: istio-egressgateway
            enabled: true

      Otherwise, add the equivalent settings to your original istioctl install command, for example:

      $ istioctl install <flags-you-used-to-install-Istio> \
          --set components.egressGateways[0].name=istio-egressgateway \
          --set components.egressGateways[0].enabled=true
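
      Equivalently, if you prefer to keep the setting in a file, a minimal overlay sketch (the file name is illustrative) can be passed to istioctl install:

      $ cat <<EOF > ./egressgateway-overlay.yaml   # illustrative file name
      apiVersion: install.istio.io/v1alpha1
      kind: IstioOperator
      spec:
        components:
          egressGateways:
          - name: istio-egressgateway
            enabled: true
      EOF
      $ istioctl install <flags-you-used-to-install-Istio> -f ./egressgateway-overlay.yaml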

    Egress gateway for HTTP traffic

    First create a ServiceEntry to allow direct traffic to an external service.

    1. Define a ServiceEntry for edition.cnn.com.

      DNS resolution must be used in the service entry below. If the resolution is NONE, the gateway will direct the traffic to itself in an infinite loop. This is because the gateway receives a request with the original destination IP address which is equal to the service IP of the gateway (since the request is directed by sidecar proxies to the gateway).

      With DNS resolution, the gateway performs a DNS query to get an IP address of the external service and directs the traffic to that IP address.

      $ kubectl apply -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: ServiceEntry
      metadata:
        name: cnn
      spec:
        hosts:
        - edition.cnn.com
        ports:
        - number: 80
          name: http-port
          protocol: HTTP
        - number: 443
          name: https
          protocol: HTTPS
        resolution: DNS
      EOF
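
      Optionally, before sending a test request, you can confirm that the sidecar has learned about the external service by inspecting its clusters with istioctl (a sketch; the output format varies between Istio versions):

      $ istioctl proxy-config cluster "$SOURCE_POD" | grep edition.cnn.com
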
    2. Verify that your ServiceEntry was applied correctly by sending an HTTP request to http://edition.cnn.com/politics.

      $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - http://edition.cnn.com/politics
      ...
      HTTP/1.1 301 Moved Permanently
      ...
      location: https://edition.cnn.com/politics
      ...
      HTTP/2 200
      Content-Type: text/html; charset=utf-8
      ...
    3. Create an egress Gateway for edition.cnn.com, port 80, and a destination rule for traffic directed to the egress gateway.

      To direct multiple hosts through an egress gateway, you can include a list of hosts, or use * to match all, in the Gateway. The subset field in the DestinationRule should be reused for the additional hosts.

      $ kubectl apply -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: istio-egressgateway
      spec:
        selector:
          istio: egressgateway
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - edition.cnn.com
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: egressgateway-for-cnn
      spec:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subsets:
        - name: cnn
      EOF
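
      As noted above, a single egress Gateway can front more than one host. For illustration only (not needed for this task, and *.wikipedia.org is just an example host), the Gateway could list additional hosts, or * to match all; extra hosts would typically also need their own ServiceEntry and VirtualService routes:

      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: istio-egressgateway
      spec:
        selector:
          istio: egressgateway
        servers:
        - port:
            number: 80
            name: http
            protocol: HTTP
          hosts:
          - edition.cnn.com
          - "*.wikipedia.org"
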
    4. Define a VirtualService to direct traffic from the sidecars to the egress gateway and from the egress gateway to the external service:

      $ kubectl apply -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: direct-cnn-through-egress-gateway
      spec:
        hosts:
        - edition.cnn.com
        gateways:
        - istio-egressgateway
        - mesh
        http:
        - match:
          - gateways:
            - mesh
            port: 80
          route:
          - destination:
              host: istio-egressgateway.istio-system.svc.cluster.local
              subset: cnn
              port:
                number: 80
            weight: 100
        - match:
          - gateways:
            - istio-egressgateway
            port: 80
          route:
          - destination:
              host: edition.cnn.com
              port:
                number: 80
      EOF
    5. Resend the HTTP request to http://edition.cnn.com/politics.

      $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - http://edition.cnn.com/politics
      ...
      HTTP/1.1 301 Moved Permanently
      ...
      location: https://edition.cnn.com/politics
      ...
      HTTP/2 200
      Content-Type: text/html; charset=utf-8
      ...

      The output should be the same as in step 2.

    6. Check the log of the istio-egressgateway pod for a line corresponding to our request. If Istio is deployed in the istio-system namespace, the command to print the log is:

      $ kubectl logs -l istio=egressgateway -c istio-proxy -n istio-system | tail

      You should see a line similar to the following:

      [2019-09-03T20:57:49.103Z] "GET /politics HTTP/2" 301 - "-" "-" 0 0 90 89 "10.244.2.10" "curl/7.64.0" "ea379962-9b5c-4431-ab66-f01994f5a5a5" "edition.cnn.com" "151.101.65.67:80" outbound|80||edition.cnn.com - 10.244.1.5:80 10.244.2.10:50482 edition.cnn.com -

      Note that you only redirected the traffic from port 80 to the egress gateway. The HTTPS traffic to port 443 went directly to edition.cnn.com.

    Remove the previous definitions before proceeding to the next step:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gateway istio-egressgateway
    $ kubectl delete virtualservice direct-cnn-through-egress-gateway
    $ kubectl delete destinationrule egressgateway-for-cnn

    Egress gateway for HTTPS traffic

    In this section you direct HTTPS traffic (TLS originated by the application) through an egress gateway. You need to specify port 443 with protocol TLS in a corresponding ServiceEntry, an egress Gateway and a VirtualService.

    1. Define a ServiceEntry for edition.cnn.com:

      $ kubectl apply -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: ServiceEntry
      metadata:
        name: cnn
      spec:
        hosts:
        - edition.cnn.com
        ports:
        - number: 443
          name: tls
          protocol: TLS
        resolution: DNS
      EOF
    2. Verify that your ServiceEntry was applied correctly by sending an HTTPS request to https://edition.cnn.com/politics.

      $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - https://edition.cnn.com/politics
      ...
      HTTP/2 200
      Content-Type: text/html; charset=utf-8
      ...
    3. Create an egress Gateway for edition.cnn.com, a destination rule and a virtual service to direct the traffic through the egress gateway and from the egress gateway to the external service.

      To direct multiple hosts through an egress gateway, you can include a list of hosts, or use * to match all, in the Gateway. The subset field in the DestinationRule should be reused for the additional hosts.

      $ kubectl apply -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: Gateway
      metadata:
        name: istio-egressgateway
      spec:
        selector:
          istio: egressgateway
        servers:
        - port:
            number: 443
            name: tls
            protocol: TLS
          hosts:
          - edition.cnn.com
          tls:
            mode: PASSTHROUGH
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: egressgateway-for-cnn
      spec:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subsets:
        - name: cnn
      ---
      apiVersion: networking.istio.io/v1alpha3
      kind: VirtualService
      metadata:
        name: direct-cnn-through-egress-gateway
      spec:
        hosts:
        - edition.cnn.com
        gateways:
        - mesh
        - istio-egressgateway
        tls:
        - match:
          - gateways:
            - mesh
            port: 443
            sniHosts:
            - edition.cnn.com
          route:
          - destination:
              host: istio-egressgateway.istio-system.svc.cluster.local
              subset: cnn
              port:
                number: 443
        - match:
          - gateways:
            - istio-egressgateway
            port: 443
            sniHosts:
            - edition.cnn.com
          route:
          - destination:
              host: edition.cnn.com
              port:
                number: 443
            weight: 100
      EOF
    4. Send an HTTPS request to https://edition.cnn.com/politics. The output should be the same as before.

      $ kubectl exec "$SOURCE_POD" -c sleep -- curl -sSL -o /dev/null -D - https://edition.cnn.com/politics
      ...
      Content-Type: text/html; charset=utf-8
      ...
    5. Check the log of the egress gateway’s proxy. If Istio is deployed in the istio-system namespace, the command to print the log is:

      $ kubectl logs -l istio=egressgateway -n istio-system

      You should see a line in the egress gateway's access log corresponding to your request to edition.cnn.com.

    Cleanup the HTTPS egress gateway example

    Remove the Istio configuration items you created:

    $ kubectl delete serviceentry cnn
    $ kubectl delete gateway istio-egressgateway
    $ kubectl delete virtualservice direct-cnn-through-egress-gateway
    $ kubectl delete destinationrule egressgateway-for-cnn

    Additional security considerations

    Note that defining an egress Gateway in Istio does not in itself provide any special treatment for the nodes on which the egress gateway service runs. It is up to the cluster administrator or the cloud provider to deploy the egress gateways on dedicated nodes and to introduce additional security measures to make these nodes more secure than the rest of the mesh.

    Istio cannot securely enforce that all egress traffic actually flows through the egress gateways. Istio only enables such flow through its sidecar proxies. If attackers bypass the sidecar proxy, they can directly access external services without traversing the egress gateway, escaping Istio's control and monitoring. The cluster administrator or the cloud provider must ensure that no traffic leaves the mesh bypassing the egress gateway; mechanisms external to Istio must enforce this requirement. For example, the cluster administrator can configure a firewall to deny all traffic not coming from the egress gateway. Kubernetes network policies can also forbid all the egress traffic not originating from the egress gateway (see the next section for an example). Additionally, the cluster administrator or the cloud provider can configure the network so that application nodes can only access the Internet via a gateway, for example by preventing the allocation of public IPs to pods other than gateways and by configuring NAT devices to drop packets not originating at the egress gateways.

    Apply Kubernetes network policies

    This section shows you how to create a Kubernetes network policy to prevent bypassing of the egress gateway. To test the network policy, you create a namespace, test-egress, deploy the sleep sample to it, and then attempt to send requests to a gateway-secured external service.

    1. Follow the steps in the Egress gateway for HTTPS traffic section.

    2. Create the test-egress namespace:

      $ kubectl create namespace test-egress
    3. Deploy the sleep sample to the test-egress namespace.

      $ kubectl apply -n test-egress -f @samples/sleep/sleep.yaml@
    4. Check that the deployed pod has a single container with no Istio sidecar attached:

      $ kubectl get pod "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress
      NAME                     READY   STATUS    RESTARTS   AGE
      sleep-776b7bcdcd-z7mc4   1/1     Running   0          18m
    5. Send an HTTPS request to https://edition.cnn.com/politics from the sleep pod in the test-egress namespace. The request will succeed since you did not define any restrictive policies yet.

      $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -s -o /dev/null -w "%{http_code}\n" https://edition.cnn.com/politics
      200
    6. Label the namespaces where the Istio components (the control plane and the gateways) run. If you deployed the Istio components to istio-system, the command is:

      $ kubectl label namespace istio-system istio=system
    7. Label the kube-system namespace:

      $ kubectl label namespace kube-system kube-system=true

    8. Define a NetworkPolicy to limit the egress traffic from the test-egress namespace to traffic destined to istio-system, and to the kube-system DNS service (port 53):

      $ cat <<EOF | kubectl apply -n test-egress -f -
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-egress-to-istio-system-and-kube-dns
      spec:
        podSelector: {}
        policyTypes:
        - Egress
        egress:
        - to:
          - namespaceSelector:
              matchLabels:
                kube-system: "true"
          ports:
          - protocol: UDP
            port: 53
        - to:
          - namespaceSelector:
              matchLabels:
                istio: system
      EOF

      Network policies are implemented by the network plugin in your Kubernetes cluster. Depending on your test cluster, the traffic may not be blocked in the following step.
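
      As an optional sanity check (this only confirms the policy object exists; it does not prove your network plugin enforces it), you can list and describe the policy:

      $ kubectl get networkpolicy -n test-egress
      $ kubectl describe networkpolicy allow-egress-to-istio-system-and-kube-dns -n test-egress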

    9. Resend the previous HTTPS request to https://edition.cnn.com/politics. Now it should fail since the traffic is blocked by the network policy. Note that the sleep pod cannot bypass istio-egressgateway. The only way it can access edition.cnn.com is by using an Istio sidecar proxy and by directing the traffic to istio-egressgateway. This setting demonstrates that even if some malicious pod manages to bypass its sidecar proxy, it will not be able to access external sites and will be blocked by the network policy.

      $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -v -sS https://edition.cnn.com/politics
      Hostname was NOT found in DNS cache
        Trying 151.101.65.67...
        Trying 2a04:4e42:200::323...
      Immediate connect fail for 2a04:4e42:200::323: Cannot assign requested address
        Trying 2a04:4e42:400::323...
      Immediate connect fail for 2a04:4e42:400::323: Cannot assign requested address
        Trying 2a04:4e42:600::323...
      Immediate connect fail for 2a04:4e42:600::323: Cannot assign requested address
        Trying 2a04:4e42::323...
      Immediate connect fail for 2a04:4e42::323: Cannot assign requested address
      connect to 151.101.65.67 port 443 failed: Connection timed out
    10. Now inject an Istio sidecar proxy into the sleep pod in the test-egress namespace by first enabling automatic sidecar proxy injection in the test-egress namespace:

      $ kubectl label namespace test-egress istio-injection=enabled
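
      Optionally, confirm that the label was applied by listing the namespace with its injection label column:

      $ kubectl get namespace test-egress -L istio-injection
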
    11. Then redeploy the sleep deployment:


      $ kubectl delete deployment sleep -n test-egress
      $ kubectl apply -f @samples/sleep/sleep.yaml@ -n test-egress
    12. Check that the deployed pod has two containers, including the Istio sidecar proxy (istio-proxy):

      $ kubectl get pod "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -o jsonpath='{.spec.containers[*].name}'
      sleep istio-proxy
    13. Create the same destination rule as for the sleep pod in the default namespace to direct the traffic through the egress gateway:

      $ kubectl apply -n test-egress -f - <<EOF
      apiVersion: networking.istio.io/v1alpha3
      kind: DestinationRule
      metadata:
        name: egressgateway-for-cnn
      spec:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subsets:
        - name: cnn
      EOF
    14. Send an HTTPS request to https://edition.cnn.com/politics. Now it should succeed since the traffic flows to istio-egressgateway in the istio-system namespace, which is allowed by the Network Policy you defined. istio-egressgateway forwards the traffic to edition.cnn.com.

      $ kubectl exec "$(kubectl get pod -n test-egress -l app=sleep -o jsonpath={.items..metadata.name})" -n test-egress -c sleep -- curl -sS -o /dev/null -w "%{http_code}\n" https://edition.cnn.com/politics
      200
    15. Check the log of the egress gateway’s proxy. If Istio is deployed in the istio-system namespace, the command to print the log is:

      $ kubectl logs -l istio=egressgateway -n istio-system

      You should see a line similar to the following:

      [2020-03-06T18:12:33.101Z] "- - -" 0 - "-" "-" 906 1352475 35 - "-" "-" "-" "-" "151.101.193.67:443" outbound|443||edition.cnn.com 172.30.223.53:39460 172.30.223.53:443 172.30.223.58:38138 edition.cnn.com -
    Cleanup network policies

    1. Delete the resources created in this section:


      $ kubectl delete -f @samples/sleep/sleep.yaml@ -n test-egress
      $ kubectl delete destinationrule egressgateway-for-cnn -n test-egress
      $ kubectl delete networkpolicy allow-egress-to-istio-system-and-kube-dns -n test-egress
      $ kubectl label namespace kube-system kube-system-
      $ kubectl label namespace istio-system istio-
      $ kubectl delete namespace test-egress
    2. Follow the steps in the Cleanup the HTTPS egress gateway example section.

    Troubleshooting

    1. If mutual TLS Authentication is enabled, verify the correct certificate of the egress gateway:

      $ kubectl exec -i -n istio-system "$(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}')" -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout | grep 'Subject Alternative Name' -A 1
      X509v3 Subject Alternative Name:
          URI:spiffe://cluster.local/ns/istio-system/sa/istio-egressgateway-service-account
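
      Recent Istio versions provision workload certificates through SDS rather than files mounted under /etc/certs, so the file above may not exist. In that case, an alternative sketch is to inspect the gateway's certificates with istioctl:

      $ istioctl proxy-config secret "$(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}').istio-system"
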
    2. For HTTPS traffic (TLS originated by the application), test the traffic flow by using the openssl command. openssl has an explicit option for setting the SNI, namely -servername.

      $ kubectl exec "$SOURCE_POD" -c sleep -- openssl s_client -connect edition.cnn.com:443 -servername edition.cnn.com
      CONNECTED(00000003)
      ...
      Certificate chain
       0 s:/C=US/ST=California/L=San Francisco/O=Fastly, Inc./CN=turner-tls.map.fastly.net
         i:/C=BE/O=GlobalSign nv-sa/CN=GlobalSign CloudSSL CA - SHA256 - G3
       1 s:/C=BE/O=GlobalSign nv-sa/CN=GlobalSign CloudSSL CA - SHA256 - G3
         i:/C=BE/O=GlobalSign nv-sa/OU=Root CA/CN=GlobalSign Root CA
      ---
      Server certificate
      -----BEGIN CERTIFICATE-----
      ...
      If you get the certificate as in the output above, your traffic is routed correctly. Check the statistics of the egress gateway’s proxy and see a counter that corresponds to your requests (sent by openssl and curl) to edition.cnn.com.
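
      One way to look at those statistics (a sketch; the exact stat names vary across Istio and Envoy versions) is to query the Envoy admin stats through pilot-agent and filter for the external host:

      $ kubectl exec "$(kubectl get pod -l istio=egressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}')" -n istio-system -c istio-proxy -- pilot-agent request GET stats | grep edition.cnn.com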

    Cleanup

    Shut down the sleep service:

    $ kubectl delete -f @samples/sleep/sleep.yaml@