Locking down external access with DNS-based policies

    If you haven’t read the Introduction to Cilium & Hubble yet, we’d encourage you to do that first.

    The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

    If you have not set up Cilium yet, pick any installation method as described in the Installation section to set up Cilium for your Kubernetes environment. If in doubt, the quick installation guide is the simplest way to set up a Kubernetes cluster with Cilium.

    DNS-based policies are very useful for controlling access to services running outside the Kubernetes cluster. DNS acts as a persistent service identifier for both external services provided by AWS, Google, Twilio, Stripe, etc., and internal services such as database clusters running in private subnets outside Kubernetes. CIDR- or IP-based policies are cumbersome and hard to maintain, as the IPs associated with external services can change frequently. Cilium's DNS-based policies provide an easy mechanism to specify access control, while Cilium manages the harder aspects of tracking the DNS-to-IP mapping.

    In this guide, we will learn about:

    • Controlling egress access to services outside the cluster using DNS-based policies
    • Using patterns (or wildcards) to whitelist a subset of DNS domains
    • Combining DNS, port and L7 rules for restricting access to external services

    Deploy the Demo Application

    In line with our Star Wars theme examples, we will use a simple scenario where the empire’s pods need access to Twitter for managing the empire’s tweets. The pods shouldn’t have access to any other external service.

    $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes-dns/dns-sw-app.yaml
    $ kubectl get po
    NAME       READY   STATUS    RESTARTS   AGE
    mediabot   1/1     Running   0          14s

    The following Cilium network policy allows mediabot pods to only access api.twitter.com.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "fqdn"
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: mediabot
      egress:
      - toFQDNs:
        - matchName: "api.twitter.com"
      - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
        toPorts:
        - ports:
          - port: "53"
            protocol: ANY
          rules:
            dns:
            - matchPattern: "*"

    Let’s take a closer look at the policy:

    • The first egress section uses the toFQDNs: matchName specification to allow egress to api.twitter.com. The destination DNS name must exactly match the name specified in the rule. The endpointSelector ensures that only pods with the labels org: empire and class: mediabot get this egress access.
    • The second egress section allows mediabot pods to access the kube-dns service. Note that rules: dns instructs Cilium to inspect and allow DNS lookups matching the specified patterns. In this case, all DNS queries are inspected and allowed.

    Note that with this policy mediabot doesn’t have access to any internal cluster service other than kube-dns. Refer to the Network Policy documentation to learn more about policies for controlling access to internal cluster services.

    $ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes-dns/dns-matchname.yaml

    Testing the policy, we see that mediabot has access to api.twitter.com but doesn’t have access to any other external service, e.g., help.twitter.com.

    DNS Policies Using Patterns

    The above policy controlled DNS access based on an exact match of the DNS domain name. Often, it is necessary to allow access to a subset of domains. Let’s say, in the above example, mediabot pods need access to any Twitter sub-domain, e.g., anything matching the pattern *.twitter.com. We can achieve this easily by changing the toFQDNs rule to use matchPattern instead of matchName.

    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "fqdn"
    spec:
      endpointSelector:
        matchLabels:
          org: empire
          class: mediabot
      egress:
      - toFQDNs:
        - matchPattern: "*.twitter.com"
      - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s:k8s-app": kube-dns
        toPorts:
        - ports:
          - port: "53"
            protocol: ANY
          rules:
            dns:
            - matchPattern: "*"

    $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes-dns/dns-pattern.yaml

    Test that mediabot has access to multiple Twitter services whose DNS names match the pattern *.twitter.com. It is important to note and test that this doesn’t allow access to twitter.com itself, because the *. in the pattern requires at least one subdomain to be present in the DNS name. You can simply add more matchName and matchPattern clauses to extend the access. (See the Network Policy documentation to learn more about specifying DNS rules using patterns and names.)

    $ kubectl exec -it mediabot -- curl -sL https://help.twitter.com
    ...
    $ kubectl exec -it mediabot -- curl -sL https://about.twitter.com
    ...
    $ kubectl exec -it mediabot -- curl -sL https://twitter.com
    ^C
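The subdomain requirement can be illustrated with an ordinary shell glob, which treats the literal dot the same way: `*.twitter.com` only matches names that have at least one label before `.twitter.com`. This is an illustration of the matching rule only, not Cilium's actual implementation; the `matches` helper below is hypothetical.

```shell
#!/bin/sh
# Illustration only: approximating the toFQDNs matchPattern rule with a
# shell case glob. "*.twitter.com" requires something before the literal
# ".twitter.com", so the bare apex domain does not match.
matches() {
  case "$1" in
    *.twitter.com) echo "$1: allowed" ;;
    *)             echo "$1: denied" ;;
  esac
}

matches help.twitter.com    # prints "help.twitter.com: allowed"
matches about.twitter.com   # prints "about.twitter.com: allowed"
matches twitter.com         # prints "twitter.com: denied"
```

Note that the glob `*` spans dots, so deeper names such as a.b.twitter.com are also allowed, mirroring the pattern behavior described above.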

    DNS-based policies can be combined with port (L4) and API (L7) rules to further restrict access. In our example, we will restrict mediabot pods to access Twitter services only on port 443. The toPorts section in the policy applied below achieves the port-based restriction along with the DNS-based policies.
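The referenced dns-port.yaml is not reproduced here; a policy combining the FQDN pattern with an L4 port restriction would look along these lines (a sketch assembled from the rules discussed above, not necessarily identical to the file):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "fqdn"
spec:
  endpointSelector:
    matchLabels:
      org: empire
      class: mediabot
  egress:
  # Twitter egress is now limited to HTTPS (port 443).
  - toFQDNs:
    - matchPattern: "*.twitter.com"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP
  # DNS lookups via kube-dns remain allowed and inspected.
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        "k8s:k8s-app": kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
```

With such a policy, HTTPS requests to matching domains succeed while plain HTTP on port 80 is dropped, which is what the curl test that follows demonstrates.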

    $ kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/v1.8/examples/kubernetes-dns/dns-port.yaml

    $ kubectl exec -it mediabot -- curl https://help.twitter.com
    ...
    $ kubectl exec -it mediabot -- curl http://help.twitter.com
    ^C

    Refer to Layer 4 Examples and Layer 7 Examples to learn more about Cilium L4 and L7 network policies.

    Clean-up

    $ kubectl delete cnp fqdn