Authentication Policy

    • Understand Istio authentication policy and related mutual TLS authentication concepts.

    • Install Istio on a Kubernetes cluster with the default configuration profile, as described in the installation steps.

    Our examples use two namespaces foo and bar, with two services, httpbin and sleep, both running with an Envoy proxy. We also use second instances of httpbin and sleep running without the sidecar in the legacy namespace. If you’d like to use the same examples when trying the tasks, run the following:

    $ kubectl create ns foo
    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n foo
    $ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n foo
    $ kubectl create ns bar
    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@) -n bar
    $ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@) -n bar
    $ kubectl create ns legacy
    $ kubectl apply -f @samples/httpbin/httpbin.yaml@ -n legacy
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n legacy

    You can verify setup by sending an HTTP request with curl from any sleep pod in the namespace foo, bar or legacy to either httpbin.foo, httpbin.bar or httpbin.legacy. All requests should succeed with HTTP code 200.

    For example, here is a command to check sleep.bar to httpbin.foo reachability:

    $ kubectl exec "$(kubectl get pod -l app=sleep -n bar -o jsonpath={.items..metadata.name})" -c sleep -n bar -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
    200

    This one-liner command conveniently iterates through all reachability combinations:

    $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
    sleep.foo to httpbin.foo: 200
    sleep.foo to httpbin.bar: 200
    sleep.foo to httpbin.legacy: 200
    sleep.bar to httpbin.foo: 200
    sleep.bar to httpbin.bar: 200
    sleep.bar to httpbin.legacy: 200
    sleep.legacy to httpbin.foo: 200
    sleep.legacy to httpbin.bar: 200
    sleep.legacy to httpbin.legacy: 200

    Verify there is no peer authentication policy in the system with the following command:

    $ kubectl get peerauthentication --all-namespaces
    No resources found

    Last but not least, verify that there are no destination rules that apply to the example services. You can do this by checking the host: value of existing destination rules and making sure they do not match. For example:

    $ kubectl get destinationrules.networking.istio.io --all-namespaces -o yaml | grep "host:"

    Depending on the version of Istio, you may see destination rules for hosts other than those shown. However, there should be none with hosts in the foo, bar, or legacy namespaces, nor the match-all wildcard host *.

    Auto mutual TLS

    By default, Istio tracks the server workloads migrated to Istio proxies, and configures client proxies to send mutual TLS traffic to those workloads automatically, and to send plain text traffic to workloads without sidecars.
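
    This default behaves roughly like a mesh-wide policy in PERMISSIVE mode. Spelled out explicitly, such a policy would look like the sketch below (assuming istio-system is the root namespace); you do not need to apply it for this task, since it is already the effective default:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "default"
      namespace: "istio-system"
    spec:
      mtls:
        # PERMISSIVE accepts both mutual TLS and plain text traffic,
        # which is what the mesh does when no policy is present.
        mode: PERMISSIVE
    EOF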

    Thus, all traffic between workloads with proxies uses mutual TLS, without you doing anything. For example, take the response from a request to httpbin/headers. When using mutual TLS, the proxy injects the X-Forwarded-Client-Cert header into the upstream request to the backend. That header’s presence is evidence that mutual TLS is used. For example:

    $ kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" -c sleep -n foo -- curl http://httpbin.foo:8000/headers -s | grep X-Forwarded-Client-Cert | sed 's/Hash=[a-z0-9]*;/Hash=<redacted>;/'
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=<redacted>;Subject=\"\";URI=spiffe://cluster.local/ns/foo/sa/sleep"

    When the server doesn’t have a sidecar, the X-Forwarded-Client-Cert header is absent, which implies requests are sent in plain text.

    $ kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" -c sleep -n foo -- curl http://httpbin.legacy:8000/headers -s | grep X-Forwarded-Client-Cert

    Globally enabling Istio mutual TLS in STRICT mode

    While Istio automatically upgrades all traffic between the proxies and the workloads to mutual TLS, workloads can still receive plain text traffic. To prevent non-mutual TLS traffic for the whole mesh, set a mesh-wide peer authentication policy with the mutual TLS mode set to STRICT. The mesh-wide peer authentication policy should not have a selector and must be applied in the root namespace, for example:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "default"
      namespace: "istio-system"
    spec:
      mtls:
        mode: STRICT
    EOF

    The example assumes istio-system is the root namespace. If you used a different value during installation, replace istio-system with the value you used.

    This peer authentication policy configures workloads to only accept requests encrypted with TLS. Since it doesn’t specify a value for the selector field, the policy applies to all workloads in the mesh.

    Run the test command again:

    $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
    sleep.foo to httpbin.foo: 200
    sleep.foo to httpbin.bar: 200
    sleep.foo to httpbin.legacy: 200
    sleep.bar to httpbin.foo: 200
    sleep.bar to httpbin.bar: 200
    sleep.bar to httpbin.legacy: 200
    sleep.legacy to httpbin.foo: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.bar: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.legacy: 200

    Requests still succeed, except for those from the client without a proxy, sleep.legacy, to a server with a proxy, httpbin.foo or httpbin.bar. This is expected because mutual TLS is now strictly required, but the workload without a sidecar cannot comply.

    Cleanup part 1

    Remove the global authentication policy added in this session:

    $ kubectl delete peerauthentication -n istio-system default

    Namespace-wide policy

    To change mutual TLS for all workloads within a particular namespace, use a namespace-wide policy. The specification of the policy is the same as for a mesh-wide policy, but you specify the namespace it applies to under metadata. For example, the following peer authentication policy enables strict mutual TLS for the foo namespace:

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "default"
      namespace: "foo"
    spec:
      mtls:
        mode: STRICT
    EOF

    As this policy is applied only to workloads in the foo namespace, you should see only requests from the client without a sidecar (sleep.legacy) to httpbin.foo start to fail.

    $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
    sleep.foo to httpbin.foo: 200
    sleep.foo to httpbin.bar: 200
    sleep.foo to httpbin.legacy: 200
    sleep.bar to httpbin.foo: 200
    sleep.bar to httpbin.bar: 200
    sleep.bar to httpbin.legacy: 200
    sleep.legacy to httpbin.foo: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.bar: 200
    sleep.legacy to httpbin.legacy: 200

    Enable mutual TLS per workload

    To set a peer authentication policy for a specific workload, you must configure the selector section and specify the labels that match the desired workload. However, Istio cannot aggregate workload-level policies for outbound mutual TLS traffic to a service. Configure a destination rule to manage that behavior.

    For example, the following peer authentication policy and destination rule enable strict mutual TLS for the httpbin.bar workload:

    $ cat <<EOF | kubectl apply -n bar -f -
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "httpbin"
      namespace: "bar"
    spec:
      selector:
        matchLabels:
          app: httpbin
      mtls:
        mode: STRICT
    EOF

    And a matching destination rule, so that clients send mutual TLS to this workload. A minimal sketch, assuming the service host httpbin.bar.svc.cluster.local (compare the port-level example below):
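
    $ cat <<EOF | kubectl apply -n bar -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: "httpbin"
    spec:
      host: httpbin.bar.svc.cluster.local
      trafficPolicy:
        tls:
          # ISTIO_MUTUAL tells client sidecars to originate mutual TLS
          # using Istio-provisioned certificates.
          mode: ISTIO_MUTUAL
    EOF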

    Again, run the probing command. As expected, requests from sleep.legacy to httpbin.bar start failing for the same reason.

    $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
    sleep.foo to httpbin.foo: 200
    sleep.foo to httpbin.bar: 200
    sleep.foo to httpbin.legacy: 200
    sleep.bar to httpbin.foo: 200
    sleep.bar to httpbin.bar: 200
    sleep.bar to httpbin.legacy: 200
    sleep.legacy to httpbin.foo: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.bar: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.legacy: 200

    To refine the mutual TLS settings per port, you must configure the portLevelMtls section. For example, the following peer authentication policy requires mutual TLS on all ports, except port 80:

    $ cat <<EOF | kubectl apply -n bar -f -
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "httpbin"
      namespace: "bar"
    spec:
      selector:
        matchLabels:
          app: httpbin
      mtls:
        mode: STRICT
      portLevelMtls:
        80:
          mode: DISABLE
    EOF

    As before, you also need a destination rule:

    $ cat <<EOF | kubectl apply -n bar -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: "httpbin"
    spec:
      host: httpbin.bar.svc.cluster.local
      trafficPolicy:
        tls:
          mode: ISTIO_MUTUAL
        portLevelSettings:
        - port:
            number: 8000
          tls:
            mode: DISABLE
    EOF
    Note that the port value in the peer authentication policy is the container’s port (80), while the value in the destination rule is the service’s port (8000). Also, you can only use portLevelMtls if the port is bound to a service; Istio ignores it otherwise.

    Run the probing command again:
    $ for from in "foo" "bar" "legacy"; do for to in "foo" "bar" "legacy"; do kubectl exec "$(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name})" -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
    sleep.foo to httpbin.foo: 200
    sleep.foo to httpbin.bar: 200
    sleep.foo to httpbin.legacy: 200
    sleep.bar to httpbin.foo: 200
    sleep.bar to httpbin.bar: 200
    sleep.bar to httpbin.legacy: 200
    sleep.legacy to httpbin.foo: 000
    command terminated with exit code 56
    sleep.legacy to httpbin.bar: 200
    sleep.legacy to httpbin.legacy: 200

    Policy precedence

    A workload-specific peer authentication policy takes precedence over a namespace-wide policy. You can test this behavior by adding a policy to disable mutual TLS for the httpbin.foo workload. Note that you’ve already created a namespace-wide policy that enables mutual TLS for all services in namespace foo, and observed that requests from sleep.legacy to httpbin.foo are failing (see above).

    $ cat <<EOF | kubectl apply -n foo -f -
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: "overwrite-example"
      namespace: "foo"
    spec:
      selector:
        matchLabels:
          app: httpbin
      mtls:
        mode: DISABLE
    EOF

    and a destination rule:

    $ cat <<EOF | kubectl apply -n foo -f -
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: "overwrite-example"
    spec:
      host: httpbin.foo.svc.cluster.local
      trafficPolicy:
        tls:
          mode: DISABLE
    EOF

    Re-running the request from sleep.legacy, you should see a success return code again (200), confirming that the workload-specific policy overrides the namespace-wide policy.

    $ kubectl exec "$(kubectl get pod -l app=sleep -n legacy -o jsonpath={.items..metadata.name})" -c sleep -n legacy -- curl http://httpbin.foo:8000/ip -s -o /dev/null -w "%{http_code}\n"
    200

    Cleanup part 2

    Remove policies and destination rules created in the above steps:

    $ kubectl delete peerauthentication default overwrite-example -n foo
    $ kubectl delete peerauthentication httpbin -n bar
    $ kubectl delete destinationrules overwrite-example -n foo
    $ kubectl delete destinationrules httpbin -n bar

    End-user authentication

    To experiment with this feature, you need a valid JWT. The JWT must correspond to the JWKS endpoint you want to use for the demo. This tutorial uses the test token JWT test and the JWKS endpoint from the Istio code base.
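
    If you want to inspect the claims in the demo token (issuer, subject, expiry), you can decode its payload locally. A quick sketch, assuming base64 and cut are available (the payload is base64url-encoded, so the decoder may warn about padding but still prints the claims):

    $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.10/security/tools/jwt/samples/demo.jwt -s)
    $ echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode -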

    Also, for convenience, expose httpbin.foo via ingressgateway (for more details, see the ingress task).

    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: Gateway
    metadata:
      name: httpbin-gateway
      namespace: foo
    spec:
      selector:
        istio: ingressgateway # use Istio default gateway implementation
      servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
        - "*"
    EOF
    $ kubectl apply -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin
      namespace: foo
    spec:
      hosts:
      - "*"
      gateways:
      - httpbin-gateway
      http:
      - route:
        - destination:
            port:
              number: 8000
            host: httpbin.foo.svc.cluster.local
    EOF
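
    The test queries below assume the INGRESS_HOST and INGRESS_PORT environment variables are set as in the ingress task. On a cluster with an external load balancer, that setup looks roughly like the following; other environments (for example NodePort) need different commands, so see the ingress task for details:

    $ export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')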

    And run a test query:

    $ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    200

    Now, add a request authentication policy that requires an end-user JWT for the ingress gateway.

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: RequestAuthentication
    metadata:
      name: "jwt-example"
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      jwtRules:
      - issuer: "testing@secure.istio.io"
        jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.10/security/tools/jwt/samples/jwks.json"
    EOF

    Apply the policy in the namespace of the workload it selects, the ingress gateway in this case. The namespace you need to specify is therefore istio-system.
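
    Optionally, you can confirm the policy landed in the root namespace by listing request authentication policies there:

    $ kubectl get requestauthentication -n istio-system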

    If you provide a token in the authorization header, which is its implicit default location, Istio validates the token using the public key set configured in jwksUri, and rejects requests if the bearer token is invalid. However, requests without tokens are accepted. To observe this behavior, retry the request without a token, with a bad token, and with a valid token:

    $ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    200

    $ curl --header "Authorization: Bearer deadbeef" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    401

    $ TOKEN=$(curl https://raw.githubusercontent.com/istio/istio/release-1.10/security/tools/jwt/samples/demo.jwt -s)
    $ curl --header "Authorization: Bearer $TOKEN" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    200

    To observe other aspects of JWT validation, use the script gen-jwt.py to generate new tokens to test with different issuers, audiences, expiry dates, and so on. The script can be downloaded from the Istio repository:

    $ wget --no-verbose https://raw.githubusercontent.com/istio/istio/release-1.10/security/tools/jwt/samples/gen-jwt.py

    You also need the key.pem file:

    $ wget --no-verbose https://raw.githubusercontent.com/istio/istio/release-1.10/security/tools/jwt/samples/key.pem

    Download the jwcrypto library, if you haven’t already installed it on your system.
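
    For example, assuming pip3 is available on your system, installing it might look like:

    $ pip3 install jwcrypto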

    JWT authentication allows for 60 seconds of clock skew, meaning a JWT becomes valid 60 seconds earlier than its configured nbf (not before) claim and remains valid 60 seconds after its configured exp (expiry) claim.

    For example, the command below creates a token that expires in 5 seconds. As you see, Istio authenticates requests using that token successfully at first but rejects them after 65 seconds:

    $ TOKEN=$(python3 ./gen-jwt.py ./key.pem --expire 5)
    $ for i in $(seq 1 10); do curl --header "Authorization: Bearer $TOKEN" "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"; sleep 10; done
    200
    200
    200
    200
    200
    200
    200
    401
    401
    401

    You can also add a JWT policy to an ingress gateway (e.g., service istio-ingressgateway.istio-system.svc.cluster.local). This is often used to define a JWT policy for all services bound to the gateway, instead of for individual services.

    Require a valid token

    To reject requests without valid tokens, add an authorization policy with a rule specifying a DENY action for requests without request principals, shown as notRequestPrincipals: ["*"] in the following example. Request principals are available only when valid JWT tokens are provided. The rule therefore denies requests without valid tokens.

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: "frontend-ingress"
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      action: DENY
      rules:
      - from:
        - source:
            notRequestPrincipals: ["*"]
    EOF

    Retry the request without a token. The request now fails with error code 403:

    $ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    403

    Require valid tokens per-path

    To refine authorization with a token requirement per host, path, or method, change the authorization policy to only require JWT on /headers. When this authorization rule takes effect, requests to $INGRESS_HOST:$INGRESS_PORT/headers fail with the error code 403. Requests to all other paths succeed, for example $INGRESS_HOST:$INGRESS_PORT/ip.

    $ kubectl apply -f - <<EOF
    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: "frontend-ingress"
      namespace: istio-system
    spec:
      selector:
        matchLabels:
          istio: ingressgateway
      action: DENY
      rules:
      - from:
        - source:
            notRequestPrincipals: ["*"]
        to:
        - operation:
            paths: ["/headers"]
    EOF

    $ curl "$INGRESS_HOST:$INGRESS_PORT/headers" -s -o /dev/null -w "%{http_code}\n"
    403

    $ curl "$INGRESS_HOST:$INGRESS_PORT/ip" -s -o /dev/null -w "%{http_code}\n"
    200

    Cleanup part 3

    1. Remove the request authentication policy:

       $ kubectl -n istio-system delete requestauthentication jwt-example

    2. Remove the authorization policy:

       $ kubectl -n istio-system delete authorizationpolicy frontend-ingress

    3. Remove the token generator script and key file:

       $ rm -f ./gen-jwt.py ./key.pem