Security Problems

    1. If jwksUri isn’t set, make sure the JWT issuer is in URL format and that url + /.well-known/openid-configuration can be opened in a browser; for example, if the JWT issuer is https://accounts.google.com, make sure https://accounts.google.com/.well-known/openid-configuration is a valid URL that can be opened in a browser (a curl sketch appears after this list).

    2. If the JWT token is placed in the Authorization header of HTTP requests, make sure the JWT token is valid (not expired, etc.). The fields in a JWT token can be inspected with an online JWT parsing tool.

    3. Verify the Envoy proxy configuration of the target workload using the istioctl proxy-config command.

      With the example request authentication policy applied, use the following command to check the listener configuration on the inbound port 80. You should see the envoy.filters.http.jwt_authn filter with settings matching the issuer and JWKS as specified in the policy.

      $ POD=$(kubectl get pod -l app=httpbin -n foo -o jsonpath={.items..metadata.name})
      $ istioctl proxy-config listener ${POD} -n foo --port 80 --type HTTP -o json
      <redacted>
      {
          "name": "envoy.filters.http.jwt_authn",
          "typedConfig": {
              "@type": "type.googleapis.com/envoy.config.filter.http.jwt_authn.v2alpha.JwtAuthentication",
              "providers": {
                  "origins-0": {
                      "issuer": "testing@secure.istio.io",
                      "localJwks": {
                          "inlineString": "*redacted*"
                      },
                      "payloadInMetadata": "testing@secure.istio.io"
                  }
              },
              "rules": [
                  {
                      "match": {
                          "prefix": "/"
                      },
                      "requires": {
                          "requiresAny": {
                              "requirements": [
                                  {
                                      "providerName": "origins-0"
                                  },
                                  {
                                      "allowMissing": {}
                                  }
                              ]
                          }
                      }
                  }
              ]
          }
      },
      <redacted>
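
    As a quick check for item 1 above, you can fetch the OpenID discovery document directly. The command below is a minimal sketch using the example issuer from that item; a valid issuer returns a JSON document that includes a jwks_uri field:

      $ curl https://accounts.google.com/.well-known/openid-configuration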

    Authorization is too restrictive or permissive

    One common mistake is specifying multiple items unintentionally in the YAML. Take the following policy as an example:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: example
      namespace: foo
    spec:
      action: ALLOW
      rules:
      - to:
        - operation:
            paths:
            - /foo
      - from:
        - source:
            namespaces:
            - foo

    You may expect the policy to allow requests if the path is /foo and the source namespace is foo. However, the policy actually allows requests if the path is /foo or the source namespace is foo, which is more permissive.

    In the YAML syntax, the - in front of the from: means it’s a new element in the list. This creates 2 rules in the policy instead of 1. In authorization policy, multiple rules have the semantics of OR.

    To fix the problem, just remove the extra - to make the policy have only 1 rule that allows requests if the path is /foo and the source namespace is foo, which is more restrictive.
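
    For reference, the corrected policy would look like the following; only the extra - is removed, so that the to and from clauses belong to a single rule:

    apiVersion: security.istio.io/v1beta1
    kind: AuthorizationPolicy
    metadata:
      name: example
      namespace: foo
    spec:
      action: ALLOW
      rules:
      - to:
        - operation:
            paths:
            - /foo
        from:
        - source:
            namespaces:
            - foo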

    Using HTTP-only fields in a policy applied on a TCP port also makes the authorization policy more restrictive, because HTTP-only fields (e.g. host, path, headers, JWT, etc.) do not exist in raw TCP connections.

    In the case of an ALLOW policy, these fields are never matched. In the case of the DENY and CUSTOM actions, these fields are considered always matched. The final effect is a more restrictive policy that could cause unexpected denials.

    Check the Kubernetes service definition to verify that the port is named with the correct protocol. If you are using HTTP-only fields on the port, make sure the port name has the http- prefix.
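
    For example, the following is a minimal sketch of a Service with a properly named HTTP port; the service name, port name, and port numbers are illustrative:

      apiVersion: v1
      kind: Service
      metadata:
        name: httpbin
        namespace: foo
      spec:
        selector:
          app: httpbin
        ports:
        - name: http-web    # the http- prefix tells Istio to treat this port as HTTP
          port: 8000
          targetPort: 80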

    Check the workload selector and namespace to confirm that the policy is applied to the correct targets. You can determine the authorization policies in effect by running istioctl x authz check POD-NAME.POD-NAMESPACE.
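
    For example, using the httpbin pod name that appears later on this page (substitute your own pod name and namespace):

      $ istioctl x authz check httpbin-74fb669cc6-lpscm.foo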

    • If no action is specified, the policy defaults to the ALLOW action.

    • When a workload has multiple actions (CUSTOM, ALLOW and DENY) applied at the same time, all actions must be satisfied to allow a request. In other words, a request is denied if any of the actions denies it, and it is allowed only if all of the actions allow it.

    • The AUDIT action does not enforce access control and will not deny the request in any case (see the sketch below).
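
    The following is a minimal sketch of an AUDIT policy; the policy name and path are illustrative. It only marks matching requests for auditing (the actual audit behavior depends on a separately configured audit provider) and never denies them:

      apiVersion: security.istio.io/v1beta1
      kind: AuthorizationPolicy
      metadata:
        name: audit-example
        namespace: foo
      spec:
        action: AUDIT
        rules:
        - to:
          - operation:
              paths:
              - /headers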

    Ensure Istiod accepts the policies

    Istiod converts and distributes your authorization policies to the proxies. The following steps help you ensure Istiod is working as expected:

    1. Run the following command to enable the debug logging in istiod:

      $ istioctl admin log --level authorization:debug
    2. Get the Istiod log with the following command:

      You probably need to first delete and then re-apply your authorization policies so that the debug output is generated for these policies.

      $ kubectl logs $(kubectl -n istio-system get pods -l app=istiod -o jsonpath='{.items[0].metadata.name}') -c discovery -n istio-system
    3. Check the output and verify there are no errors. For the deny-path-headers policy used in this example, the debug output should show that Istiod generated:

      • An HTTP filter config with policy ns[foo]-policy[deny-path-headers]-rule[0] for workload httpbin-74fb669cc6-lpscm.foo.

      • A TCP filter config with policy ns[foo]-policy[deny-path-headers]-rule[0] for workload httpbin-74fb669cc6-lpscm.foo.

    Ensure Istiod distributes policies to proxies correctly

    Istiod distributes the authorization policies to proxies. The following steps help you ensure Istiod is working as expected:

    The commands below assume you have deployed httpbin; replace the httpbin references with your actual workload if you are not using httpbin.

    1. Get the proxy configuration dump for the httpbin workload (a hedged command sketch appears after this list).

    2. Check the configuration dump and verify:

      • The dump includes an envoy.filters.http.rbac filter to enforce the authorization policy on each incoming request.
      • Istio updates the filter accordingly after you update your authorization policy.
    3. The following output means the proxy of httpbin has enabled the envoy.filters.http.rbac filter, with rules that reject any request to the path /headers.

      {
          "name": "envoy.filters.http.rbac",
          "typed_config": {
              "rules": {
                  "action": "DENY",
                  "policies": {
                      "ns[foo]-policy[deny-path-headers]-rule[0]": {
                          "permissions": [
                              {
                                  "and_rules": {
                                      "rules": [
                                          {
                                              "or_rules": {
                                                  "rules": [
                                                      {
                                                          "url_path": {
                                                              "path": {
                                                                  "exact": "/headers"
                                                              }
                                                          }
                                                      }
                                                  ]
                                              }
                                          }
                                      ]
                                  }
                              }
                          ],
                          "principals": [
                              {
                                  "and_ids": {
                                      "ids": [
                                          {
                                              "any": true
                                          }
                                      ]
                                  }
                              }
                          ]
                      }
                  }
              },
              "shadow_rules_stat_prefix": "istio_dry_run_allow_"
          }
      },
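
    For step 1 above, one possible way to inspect the relevant proxy configuration is to reuse the istioctl proxy-config listener command pattern shown earlier on this page; this is a sketch, and the port and output options may differ in your setup:

      $ istioctl proxy-config listener deploy/httpbin -n foo --port 80 -o json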

    Ensure proxies enforce policies correctly

    Proxies eventually enforce the authorization policies. The following steps help you ensure the proxy is working as expected:

    The commands below assume you have deployed httpbin; replace deploy/httpbin with your actual workload if you are not using httpbin.

    First, turn on the RBAC debug logging in the Envoy proxy with the following command:

      $ istioctl proxy-config log deploy/httpbin --level "rbac:debug"
    1. Verify you see the following output:

      active loggers:
        ... ...
        rbac: debug
        ... ...
    2. Send some requests to the httpbin workload to generate some logs.

    3. Print the proxy logs of the istio-proxy container (a hedged kubectl logs sketch appears after this list).

    4. Check the output and verify:

      • The output log shows either enforced allowed or enforced denied depending on whether the request was allowed or denied respectively.

      • The data extracted from the request matches what your authorization policy expects.

    5. The following is an example output for a request at path /headers:

      ...
      2021-04-23T20:43:18.552857Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:46180, directRemoteIP: 10.44.3.13:46180, remoteIP: 10.44.3.13:46180,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/sleep, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
      ':path', '/headers'
      ':method', 'GET'
      ':scheme', 'http'
      'user-agent', 'curl/7.76.1-DEV'
      'accept', '*/*'
      'x-forwarded-proto', 'http'
      'x-request-id', '672c9166-738c-4865-b541-128259cc65e5'
      'x-envoy-attempt-count', '1'
      'x-b3-traceid', '8a124905edf4291a21df326729b264e9'
      'x-b3-spanid', '21df326729b264e9'
      'x-b3-sampled', '0'
      'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/sleep'
      , dynamicMetadata: filter_metadata {
        key: "istio_authn"
        value {
          fields {
            key: "request.auth.principal"
            value {
              string_value: "cluster.local/ns/foo/sa/sleep"
            }
          }
          fields {
            key: "source.namespace"
            value {
              string_value: "foo"
            }
          }
          fields {
            key: "source.principal"
            value {
            }
          }
          fields {
            key: "source.user"
            value {
              string_value: "cluster.local/ns/foo/sa/sleep"
            }
          }
        }
      }
      2021-04-23T20:43:18.552910Z debug envoy rbac enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
      ...

      The log enforced denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0] means the request is rejected by the policy ns[foo]-policy[deny-path-headers]-rule[0].

    6. The following is an example output for the authorization policy in dry-run mode:

      ...
      2021-04-23T20:59:11.838468Z debug envoy rbac checking request: requestedServerName: outbound_.8000_._.httpbin.foo.svc.cluster.local, sourceIP: 10.44.3.13:49826, directRemoteIP: 10.44.3.13:49826, remoteIP: 10.44.3.13:49826,localAddress: 10.44.1.18:80, ssl: uriSanPeerCertificate: spiffe://cluster.local/ns/foo/sa/sleep, dnsSanPeerCertificate: , subjectPeerCertificate: , headers: ':authority', 'httpbin:8000'
      ':path', '/headers'
      ':method', 'GET'
      ':scheme', 'http'
      'user-agent', 'curl/7.76.1-DEV'
      'accept', '*/*'
      'x-forwarded-proto', 'http'
      'x-request-id', 'e7b2fdb0-d2ea-4782-987c-7845939e6313'
      'x-envoy-attempt-count', '1'
      'x-b3-traceid', '696607fc4382b50017c1f7017054c751'
      'x-b3-spanid', '17c1f7017054c751'
      'x-b3-sampled', '0'
      'x-forwarded-client-cert', 'By=spiffe://cluster.local/ns/foo/sa/httpbin;Hash=d64cd6750a3af8685defbbe4dd8c467ebe80f6be4bfe9ca718e81cd94129fc1d;Subject="";URI=spiffe://cluster.local/ns/foo/sa/sleep'
      , dynamicMetadata: filter_metadata {
        key: "istio_authn"
        value {
          fields {
            key: "request.auth.principal"
            value {
              string_value: "cluster.local/ns/foo/sa/sleep"
            }
          }
          fields {
            key: "source.namespace"
            value {
              string_value: "foo"
            }
          }
          fields {
            key: "source.principal"
            value {
              string_value: "cluster.local/ns/foo/sa/sleep"
            }
          }
          fields {
            key: "source.user"
            value {
              string_value: "cluster.local/ns/foo/sa/sleep"
            }
          }
        }
      }
      2021-04-23T20:59:11.838529Z debug envoy rbac shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0]
      2021-04-23T20:59:11.838538Z debug envoy rbac no engine, allowed by default
      ...

      The log shadow denied, matched policy ns[foo]-policy[deny-path-headers]-rule[0] means the request would be rejected by the dry-run policy ns[foo]-policy[deny-path-headers]-rule[0].

      The log no engine, allowed by default means the request is actually allowed because the dry-run policy is the only policy on the workload.
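
    For step 3 above, a minimal sketch of printing the sidecar proxy logs, assuming the ${POD} variable set earlier on this page for the httpbin pod in namespace foo:

      $ kubectl logs ${POD} -c istio-proxy -n foo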

    Keys and certificates errors

    If you suspect that some of the keys and/or certificates used by Istio aren’t correct, you can inspect the contents from any pod:

    $ istioctl proxy-config secret sleep-8f795f47d-4s4t7
    RESOURCE NAME     TYPE           STATUS     VALID CERT     SERIAL NUMBER                               NOT AFTER                NOT BEFORE
    default           Cert Chain     ACTIVE     true           138092480869518152837211547060273851586    2020-11-11T16:39:48Z     2020-11-10T16:39:48Z
    ROOTCA            CA             ACTIVE     true           288553090258624301170355571152070165215    2030-11-08T16:34:52Z     2020-11-10T16:34:52Z

    By passing the -o json flag, you can extract the full certificate content and feed it to openssl to analyze its contents:

    $ istioctl proxy-config secret sleep-8f795f47d-4s4t7 -o json | jq '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' -r | base64 -d | openssl x509 -noout -text
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number:
                99:59:6b:a2:5a:f4:20:f4:03:d7:f0:bc:59:f5:d8:40
        Signature Algorithm: sha256WithRSAEncryption
            Issuer: O = k8s.cluster.local
            Validity
                Not Before: Jun  4 20:38:20 2018 GMT
                Not After : Sep  2 20:38:20 2018 GMT
        ...
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Alternative Name:
                URI:spiffe://cluster.local/ns/my-ns/sa/my-sa
        ...

    Make sure the displayed certificate contains valid information. In particular, the Subject Alternative Name field should be the SPIFFE identity of the workload, for example URI:spiffe://cluster.local/ns/my-ns/sa/my-sa as shown above.

    If you suspect problems with mutual TLS, first ensure that Citadel is healthy, and second ensure that keys and certificates are being delivered to sidecars properly.
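
    A minimal sketch for the first check, assuming Citadel functionality runs inside the istiod deployment in the istio-system namespace (as in the commands earlier on this page); the pods should be Running and ready:

      $ kubectl get pods -n istio-system -l app=istiod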