Ingress traffic

    Combining Linkerd and your ingress solution requires two things:

    1. Configuring your ingress to support Linkerd.
    2. Meshing your ingress pods so that they have the Linkerd proxy installed.

    Meshing your ingress pods will allow Linkerd to provide features like L7 metrics and mTLS the moment the traffic is inside the cluster. (See Adding your service for instructions on how to mesh your ingress.)

    Note that some ingress options need to be meshed in “ingress” mode. See details below.
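    For example, ingress mode is enabled with a pod annotation. A minimal fragment (the surrounding Deployment spec is elided):

    spec:
      template:
        metadata:
          annotations:
            linkerd.io/inject: ingress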

    Common ingress options that Linkerd has been used with include:

    • Ambassador (aka Emissary)
    • Nginx
    • Traefik
    • GCE
    • Gloo
    • Contour
    • Kong
    • HAProxy

    For a quick start guide to using a particular ingress, please visit the section for that ingress. If your ingress is not on that list, never fear: it likely works anyway. See below.

    Note

    If your ingress terminates TLS, this TLS traffic (e.g. HTTPS calls from outside the cluster) will pass through Linkerd as an opaque TCP stream and Linkerd will only be able to provide byte-level metrics for this side of the connection. The resulting HTTP or gRPC traffic to internal services, of course, will have the full set of metrics and mTLS support.

    Ambassador (aka Emissary)

    Ambassador can be meshed normally. An example manifest for configuring Ambassador / Emissary is as follows:
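    The sketch below assumes Emissary's getambassador.io/v3alpha1 Mapping CRD and the emojivoto demo's web-svc; adjust the names for your environment.

    apiVersion: getambassador.io/v3alpha1
    kind: Mapping
    metadata:
      name: web-ambassador-mapping
      namespace: emojivoto
    spec:
      hostname: "*"
      prefix: /
      service: http://web-svc.emojivoto.svc.cluster.local:80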

    For a more detailed guide, we recommend reading Installing the Emissary ingress with the Linkerd service mesh.

    Nginx

    Nginx can be meshed normally, but the nginx.ingress.kubernetes.io/service-upstream annotation should be set to "true". No further configuration is required.

    # apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: emojivoto-web-ingress
      namespace: emojivoto
      annotations:
        nginx.ingress.kubernetes.io/service-upstream: "true"
    spec:
      ingressClassName: nginx
      defaultBackend:
        service:
          name: web-svc
          port:
            number: 80

    Traefik

    Traefik should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    Instructions differ for 1.x and 2.x versions of Traefik.

    The simplest way to use Traefik 1.x as an ingress for Linkerd is to configure a Kubernetes Ingress resource with the ingress.kubernetes.io/custom-request-headers annotation like this:

    # apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      namespace: emojivoto
      annotations:
        ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80
    spec:
      ingressClassName: traefik
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80

    The important annotation here is:

    ingress.kubernetes.io/custom-request-headers: l5d-dst-override:web-svc.emojivoto.svc.cluster.local:80

    Traefik will add an l5d-dst-override header to instruct Linkerd what service the request is destined for. You’ll want to include both the Kubernetes service FQDN (web-svc.emojivoto.svc.cluster.local) and the destination servicePort.

    To test this, you’ll want to get the external IP address for your controller. If you installed Traefik via Helm, you can get that IP address by running:

    kubectl get svc --all-namespaces \
      -l app=traefik \
      -o='custom-columns=EXTERNAL-IP:.status.loadBalancer.ingress[0].ip'

    You can then use this IP with curl:
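    For example, assuming the example.com host rule above (substitute the external IP you just obtained):

    curl -H "Host: example.com" http://<EXTERNAL-IP>/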

    Note

    This solution won’t work if you’re using Traefik’s service weights as Linkerd will always send requests to the service name in l5d-dst-override. A workaround is to use traefik.frontend.passHostHeader: "false" instead.

    Traefik 2.x adds support for path-based request routing with a Custom Resource Definition (CRD) called IngressRoute.

    If you choose to use IngressRoute instead of the default Kubernetes Ingress resource, then you’ll also need to use Traefik’s Middleware Custom Resource Definition to add the l5d-dst-override header.

    apiVersion: traefik.containo.us/v1alpha1
    kind: Middleware
    metadata:
      name: l5d-header-middleware
      namespace: traefik
    spec:
      headers:
        customRequestHeaders:
          l5d-dst-override: "web-svc.emojivoto.svc.cluster.local:80"
    ---
    apiVersion: traefik.containo.us/v1alpha1
    kind: IngressRoute
    metadata:
      annotations:
        kubernetes.io/ingress.class: traefik
      creationTimestamp: null
      name: emojivoto-web-ingress-route
      namespace: emojivoto
    spec:
      entryPoints: []
      routes:
      - kind: Rule
        match: PathPrefix(`/`)
        priority: 0
        middlewares:
        - name: l5d-header-middleware
        services:
        - kind: Service
          name: web-svc
          port: 80

    GCE

    The GCE ingress should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    This example shows how to use a Google Cloud Static External IP Address and TLS with a Google-managed certificate.

    # apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      namespace: emojivoto
      annotations:
        ingress.kubernetes.io/custom-request-headers: "l5d-dst-override: web-svc.emojivoto.svc.cluster.local:80"
        ingress.gcp.kubernetes.io/pre-shared-cert: "managed-cert-name"
        kubernetes.io/ingress.global-static-ip-name: "static-ip-name"
    spec:
      ingressClassName: gce
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80

    To use this example definition, substitute managed-cert-name and static-ip-name with the short names defined in your project (n.b. use the name for the IP address, not the address itself).
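    If you haven't created those resources yet, a sketch of the gcloud commands (the names and domain are illustrative):

    gcloud compute addresses create static-ip-name --global
    gcloud compute ssl-certificates create managed-cert-name --domains=example.com --global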

    The managed certificate will take about 30-60 minutes to provision, but the status of the ingress should be healthy within a few minutes. Once the managed certificate is provisioned, the ingress should be visible to the Internet.

    Gloo

    Gloo should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    As of Gloo v0.13.20, Gloo has native integration with Linkerd, so that the required Linkerd headers are added automatically. Assuming you installed Gloo to the default location, you can enable the native integration by running:

    kubectl patch settings -n gloo-system default \
      -p '{"spec":{"linkerd":true}}' --type=merge

    Gloo will now automatically add the l5d-dst-override header to every Kubernetes upstream.

    Now simply add a route to the upstream, e.g.:

    glooctl add route --path-prefix=/ --dest-name booksapp-webapp-7000

    Contour

    Contour should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    The following example uses the Contour getting started documentation to demonstrate how to set the required header manually.

    Contour’s Envoy DaemonSet doesn’t auto-mount the service account token, which is required for the Linkerd proxy to do mTLS between pods. So first we need to install Contour uninjected, patch the DaemonSet with automountServiceAccountToken: true, and then inject it. Optionally you can create a dedicated service account to avoid using the default one.
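    A sketch of those steps, assuming Contour's defaults (an Envoy DaemonSet named envoy in the projectcontour namespace):

    # Allow the service account token to be mounted into the Envoy pods
    kubectl patch daemonset envoy -n projectcontour \
      -p '{"spec":{"template":{"spec":{"automountServiceAccountToken":true}}}}'

    # Then inject the Linkerd proxy in ingress mode
    kubectl get daemonset envoy -n projectcontour -o yaml \
      | linkerd inject --ingress - \
      | kubectl apply -f -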

    Verify your Contour and Envoy installation has a running Linkerd sidecar.
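    One way to verify (a sketch) is to list each pod's containers and look for linkerd-proxy:

    kubectl get pods -n projectcontour \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'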

    Next we’ll deploy a demo service:

    linkerd inject https://projectcontour.io/examples/kuard.yaml | kubectl apply -f -

    To route external traffic to your service you’ll need to provide an HTTPProxy:

    apiVersion: projectcontour.io/v1
    kind: HTTPProxy
    metadata:
      name: kuard
      namespace: default
    spec:
      routes:
      - requestHeadersPolicy:
          set:
          - name: l5d-dst-override
            value: kuard.default.svc.cluster.local:80
        services:
        - name: kuard
          port: 80
      virtualhost:
        fqdn: 127.0.0.1.nip.io

    Notice the l5d-dst-override header is explicitly set to the target service.

    Finally, you can test your working service mesh:

    kubectl port-forward svc/envoy -n projectcontour 3200:80
    http://127.0.0.1.nip.io:3200

    Note

    You should annotate the pod spec with config.linkerd.io/skip-outbound-ports: 8001. The Envoy pod will try to connect to the Contour pod at port 8001 through TLS, which is not supported under this ingress mode, so you need to have the proxy skip that outbound port.
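    A minimal fragment showing that annotation on the Envoy pod template:

    spec:
      template:
        metadata:
          annotations:
            config.linkerd.io/skip-outbound-ports: "8001"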

    Note

    If you are using Contour with Flagger, the l5d-dst-override headers will be set automatically.

    Kong

    Kong should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    This example uses the following elements:

    • The Kong chart
    • The emojivoto example application (see Getting Started)

    Before installing emojivoto, install Linkerd and Kong on your cluster. When injecting the Kong deployment, use the --ingress flag (or annotation).
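    A sketch of that injection, assuming Kong was installed as a Deployment named kong in the kong namespace:

    kubectl get deployment kong -n kong -o yaml \
      | linkerd inject --ingress - \
      | kubectl apply -f -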

    We need to declare these objects as well:

    • KongPlugin, a CRD provided by Kong
    • Ingress

    apiVersion: configuration.konghq.com/v1
    kind: KongPlugin
    metadata:
      name: set-l5d-header
    plugin: request-transformer
    config:
      add:
        headers:
        - l5d-dst-override:$(headers.host).svc.cluster.local
    ---
    # apiVersion: networking.k8s.io/v1beta1 # for k8s < v1.19
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      namespace: emojivoto
      annotations:
        konghq.com/plugins: set-l5d-header
    spec:
      ingressClassName: kong
      rules:
      - http:
          paths:
          - path: /api/vote
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  name: http
          - path: /api/list
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  name: http

    Here we are explicitly setting the l5d-dst-override in the KongPlugin. Using templates as values, we can take the host header from the request and set the l5d-dst-override value based on it.

    Finally, install emojivoto so that its deploy/vote-bot targets the ingress and includes a host header value for the web-svc.emojivoto service.

    Before applying the injected emojivoto application, make the following changes to the vote-bot Deployment:
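    A sketch of those changes, assuming Kong's proxy is exposed as the kong-proxy service in the kong namespace:

    env:
    # Target the Kong ingress instead of the emojivoto web service directly
    - name: WEB_HOST
      value: kong-proxy.kong:80
    # Set the host header so the request can be routed to the web service
    - name: HOST_OVERRIDE
      value: web-svc.emojivoto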

    HAProxy

    Note

    There are two different HAProxy-based ingress controllers. This example is for the kubernetes-ingress controller by HAProxy Technologies, not the haproxy-ingress controller.

    HAProxy should be meshed with ingress mode enabled, i.e. with the linkerd.io/inject: ingress annotation rather than the default enabled.

    The simplest way to use HAProxy as an ingress for Linkerd is to configure a Kubernetes Ingress resource with the haproxy.org/request-set-header annotation like this:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      namespace: emojivoto
      annotations:
        kubernetes.io/ingress.class: haproxy
        haproxy.org/request-set-header: |
          l5d-dst-override web-svc.emojivoto.svc.cluster.local:80
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-svc
                port:
                  number: 80

    Unfortunately, there is currently no way to set this header dynamically in a global config map using the service name, namespace, and port as variables. This also means that you can’t combine more than one service ingress rule in an Ingress manifest, as each one needs its own haproxy.org/request-set-header annotation with a hard-coded value.

    Ingress details

    In this section we cover how Linkerd interacts with ingress controllers in general.

    In general, Linkerd can be used with any ingress controller. In order for Linkerd to properly apply features such as route-based metrics and traffic splitting, Linkerd needs the IP/port of the Kubernetes Service. However, by default, many ingresses do their own endpoint selection and pass the IP/port of the destination Pod, rather than the Service as a whole.

    Thus, combining an ingress with Linkerd takes one of two forms:

    1. Configure the ingress to pass the IP and port of the Service as the destination, i.e. to skip its own endpoint selection. (E.g. see Nginx above.)

    2. If that is not possible, configure the ingress to pass the Service's address in a header such as l5d-dst-override, and mesh the ingress in "ingress" mode so that Linkerd reads the destination from that header.

    The most common approach in form #2 is to use the explicit l5d-dst-override header.

    Note

    If requests experience a 2-3 second delay after injecting your ingress controller, it is likely because the service of type: LoadBalancer is obscuring the client source IP. You can fix this by setting externalTrafficPolicy: Local in the ingress’ service definition.
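    For example (a fragment; the service name is illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-ingress-controller
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local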

    Note

    While the Kubernetes Ingress API definition allows a backend’s servicePort to be a string value, only numeric servicePort values can be used with Linkerd. If a string value is encountered, Linkerd will default to using port 80.