Ingress Gateway

    Before you begin this task, do the following:

    • Read the Istio authorization concepts.

    • Install Istio using the Istio installation guide.

    • Deploy a workload, for example httpbin, in a namespace such as foo, and expose it through the Istio ingress gateway, as sketched below.

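      A minimal sketch, assuming the httpbin sample and gateway manifests that ship with the Istio release (samples/httpbin/httpbin.yaml and samples/httpbin/httpbin-gateway.yaml):

      # create the namespace, then deploy httpbin with an injected sidecar and expose it through the ingress gateway
      $ kubectl create ns foo
      $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin.yaml) -n foo
      $ kubectl apply -f <(istioctl kube-inject -f samples/httpbin/httpbin-gateway.yaml) -n foo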

    • Turn on RBAC debugging in Envoy for the ingress gateway:

      $ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do istioctl proxy-config log "$pod" -n istio-system --level rbac:debug; done
    • Follow the instructions in Determining the ingress IP and ports to define the INGRESS_HOST and INGRESS_PORT environment variables.
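      For example, on a cluster whose ingress gateway service is exposed through an external load balancer, something like the following sketch should work (adjust the port name if your gateway uses a different one):

      $ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')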

    • Verify that the httpbin workload and ingress gateway are working as expected using this command:

      $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
      200

    If you don’t see the expected output, retry after a few seconds. Caching and propagation overhead can cause a delay.

    All methods of getting traffic into Kubernetes involve opening a port on all worker nodes. The main features that accomplish this are the NodePort service and the LoadBalancer service. Even the Kubernetes Ingress resource must be backed by an Ingress controller that will create either a NodePort or a LoadBalancer service.

    • A NodePort just opens up a port in the range 30000-32767 on each worker node and uses a label selector to identify which Pods to send the traffic to (see the minimal example after this list). You have to manually create some kind of load balancer in front of your worker nodes or use Round-Robin DNS.

    • A LoadBalancer is like a NodePort, except it also asks the environment to create a load balancer (for example, an AWS Classic or Network Load Balancer) in front of the worker nodes to route traffic to them.
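
    For illustration, a minimal NodePort Service might look like this (the name, selector, and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-nodeport-svc        # hypothetical name
    spec:
      type: NodePort
      selector:
        app: my-app                # label selector that picks the backing Pods
      ports:
      - port: 80                   # cluster-internal port
        targetPort: 8080           # container port on the Pods
        nodePort: 30080            # opened on every worker node; must be in 30000-32767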

    What if the Pod that is handling traffic from the NodePort or LoadBalancer isn’t running on the worker node that received the traffic? Kubernetes has its own internal proxy called kube-proxy that receives the packets and forwards them to the correct node.

    If a packet goes through an external proxy load balancer and/or kube-proxy, then the original source IP address of the client is lost. Below are some strategies for preserving the original client IP for logging or security purposes.

    A critical bug has been identified in Envoy in which the proxy protocol downstream address is restored incorrectly for non-HTTP connections.

    Please DO NOT USE the remoteIpBlocks field and remote_ip attribute with the proxy protocol on non-HTTP connections until a newer version of Istio is released with a proper fix.

    Note that Istio does not officially support the proxy protocol: it can be enabled only with the EnvoyFilter API and is used at your own risk.

    If you are using a TCP/UDP proxy external load balancer (AWS Classic ELB), it can use the proxy protocol to embed the original client IP address in the packet data. Both the external load balancer and the Istio ingress gateway must support the proxy protocol for it to work. In Istio, you can enable it with an EnvoyFilter like the one below:

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: proxy-protocol
      namespace: istio-system
    spec:
      configPatches:
      - applyTo: LISTENER
        patch:
          operation: MERGE
          value:
            listener_filters:
            - name: envoy.listener.proxy_protocol
            - name: envoy.listener.tls_inspector
      workloadSelector:
        labels:
          istio: ingressgateway

    Here is a sample IstioOperator configuration that shows how to configure the Istio ingress gateway on AWS EKS to support the proxy protocol:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        accessLogEncoding: JSON
        accessLogFile: /dev/stdout
      components:
        ingressGateways:
        - enabled: true
          k8s:
            hpaSpec:
              maxReplicas: 10
              minReplicas: 5
            serviceAnnotations:
              service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
              service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
              service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: elb-logs
              service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: k8sELBIngressGW
              service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
            affinity:
              podAntiAffinity:
                preferredDuringSchedulingIgnoredDuringExecution:
                - podAffinityTerm:
                    labelSelector:
                      matchLabels:
                        istio: ingressgateway
                    topologyKey: failure-domain.beta.kubernetes.io/zone
                  weight: 1
          name: istio-ingressgateway

    If you are using a TCP/UDP network load balancer that preserves the client IP address (AWS Network Load Balancer, GCP External Network Load Balancer, Azure Load Balancer) or you are using Round-Robin DNS, then you can also preserve the client IP inside Kubernetes by bypassing kube-proxy and preventing it from sending traffic to other nodes. However, you must run an ingress gateway pod on every node. If you don’t, then any node that receives traffic and doesn’t have an ingress gateway will drop the traffic. See Source IP for Services with Type=NodePort for more information.

    Update the ingress gateway to set externalTrafficPolicy: Local to preserve the original client source IP on the ingress gateway using the following command:

    $ kubectl patch svc istio-ingressgateway -n istio-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'

    If you are using an HTTP/HTTPS external load balancer (for example, an AWS ALB or a GCP HTTP(S) load balancer), it can put the original client IP address in the X-Forwarded-For header. Istio can extract the client IP address from this header with some configuration; see Configuring Gateway Network Topology. Here is a quick example if you are using a single load balancer in front of Kubernetes:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      meshConfig:
        accessLogEncoding: JSON
        accessLogFile: /dev/stdout
        defaultConfig:
          gatewayTopology:
            numTrustedProxies: 1

    For reference, here are the types of load balancers you will encounter on popular managed Kubernetes environments, the source of the client IP each provides, and whether to use ipBlocks or remoteIpBlocks with it:

    Load Balancer Type    Source of Client IP        ipBlocks vs. remoteIpBlocks
    TCP Proxy             Proxy Protocol             remoteIpBlocks
    Network               packet source address      ipBlocks
    HTTP/HTTPS            X-Forwarded-For            remoteIpBlocks

    You can instruct AWS EKS to create a Network Load Balancer when you install Istio by using a serviceAnnotation like the one below:
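
    A minimal IstioOperator sketch, assuming the standard service.beta.kubernetes.io/aws-load-balancer-type annotation on the ingress gateway service:

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        ingressGateways:
        - enabled: true
          name: istio-ingressgateway
          k8s:
            serviceAnnotations:
              # ask AWS for a Network Load Balancer instead of a Classic ELB
              service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
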
    • The following command creates the authorization policy ingress-policy for the Istio ingress gateway. The policy sets the action field to ALLOW so that only the IP addresses specified in ipBlocks can access the ingress gateway; IP addresses not in the list will be denied. ipBlocks supports both single IP addresses and CIDR notation.

    Create the AuthorizationPolicy:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: ALLOW
      rules:
      - from:
        - source:
            ipBlocks: ["1.2.3.4", "5.6.7.0/24"]
    EOF

    If your external load balancer delivers the client IP through the proxy protocol or X-Forwarded-For (see the table above), use remoteIpBlocks instead:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: ALLOW
      rules:
      - from:
        - source:
            remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24"]
    EOF
    • Verify that a request to the ingress gateway is denied:

      $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
      403
    • Update the ingress-policy to include your client IP address:

    Find your original client IP address if you don’t know it and assign it to a variable:

    $ CLIENT_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $3}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
    192.168.10.15

    Then update ingress-policy to add your client IP address to ipBlocks:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: ALLOW
      rules:
      - from:
        - source:
            ipBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
    EOF

    If you are using remoteIpBlocks, find your original client IP address with this variant of the command instead and assign it to a variable:

    $ CLIENT_IP=$(kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system | grep remoteIP; done | tail -1 | awk -F, '{print $4}' | awk -F: '{print $2}' | sed 's/ //') && echo "$CLIENT_IP"
    192.168.10.15

    Then create the AuthorizationPolicy, adding your client IP address to remoteIpBlocks rather than ipBlocks, as sketched below.
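
    A minimal sketch of that apply command, assuming the same ingress-policy name and selector as above:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: ALLOW
      rules:
      - from:
        - source:
            remoteIpBlocks: ["1.2.3.4", "5.6.7.0/24", "$CLIENT_IP"]
    EOF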

    • Verify that a request to the ingress gateway is allowed:

      $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
      200
    • Update the ingress-policy authorization policy to set the action key to DENY so that the IP addresses specified in the ipBlocks are not allowed to access the ingress gateway:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: DENY
      rules:
      - from:
        - source:
            ipBlocks: ["$CLIENT_IP"]
    EOF

    Or, if you are using remoteIpBlocks:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: ingress-policy
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          app: istio-ingressgateway
      action: DENY
      rules:
      - from:
        - source:
            remoteIpBlocks: ["$CLIENT_IP"]
    EOF

    • Verify that a request to the ingress gateway is denied:

      $ curl "$INGRESS_HOST:$INGRESS_PORT"/headers -s -o /dev/null -w "%{http_code}\n"
      403
    • You could use an online proxy service to access the ingress gateway using a different client IP to verify the request is allowed.
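
      For example, a sketch using curl’s proxy support, where PROXY_HOST and PROXY_PORT are placeholders for a proxy you control:

      $ curl -x "http://$PROXY_HOST:$PROXY_PORT" "http://$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"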

    • If you are not getting the responses you expect, view the ingress gateway logs, which should show RBAC debugging information:

      $ kubectl get pods -n istio-system -o name -l istio=ingressgateway | sed 's|pod/||' | while read -r pod; do kubectl logs "$pod" -n istio-system; done