Because there’s no load balancer available with most cloud providers, we have to make sure the NGINX server always runs on the same host, reachable via an IP address that doesn’t change. As our master node is pretty much idle at this point, and no ordinary pods will get scheduled on it, we make kube1 our dedicated host for routing public traffic.

    We already opened ports 80 and 443 during the initial firewall configuration; now all we have to do is write a couple of manifests to deploy the NGINX ingress controller on kube1:

    One part requires special attention. In order to make sure NGINX runs on kube1, which is a tainted master node where no pods are normally scheduled, we need to specify a toleration:
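A sketch of such a toleration, assuming the cluster was bootstrapped with kubeadm and therefore carries its default master taint (node-role.kubernetes.io/master); adjust the key if your cluster uses a different taint:

```yaml
# Pod spec excerpt: tolerate the master taint so the pod may run on kube1.
# Assumes kubeadm's default taint key.
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
```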

    Specifying a toleration doesn’t guarantee that a pod gets scheduled on a specific node. For this we need to add a node affinity rule. As we have just a single master node, the following specification is enough to schedule the pod on kube1:
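The rule might look like this sketch, assuming the node’s built-in kubernetes.io/hostname label matches the node name kube1:

```yaml
# Pod spec excerpt: require scheduling on the node whose hostname is kube1.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - kube1
```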

    Running kubectl apply -f . will apply all manifests in this folder. First, a namespace called ingress is created, followed by the NGINX deployment, plus a default backend to serve 404 pages for undefined domains and routes, including the necessary service object. There’s no need to define a service object for NGINX itself, because we configure it to use the host network (hostNetwork: true), which means the container is bound to the actual ports on the host, not to a virtual interface within the pod overlay network.
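The host-network setting sits in the deployment’s pod template; a minimal excerpt (not the full manifest) might look like this:

```yaml
# Deployment excerpt: bind the NGINX container directly to the host's
# ports 80 and 443 instead of the pod overlay network.
spec:
  template:
    spec:
      hostNetwork: true
```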

    Services are now able to make use of the ingress controller and receive public traffic with a simple manifest:
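Such a manifest might look like the following sketch, assuming a hypothetical service named example-service listening on port 80 and the extensions/v1beta1 ingress API that was current at the time (newer clusters use networking.k8s.io/v1):

```yaml
# Hypothetical ingress routing service.example.com to an existing service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 80
```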


    At this point we could use a domain name and put some DNS entries into place. To serve web traffic it’s enough to create an A record pointing to the public IP address of kube1 plus a wildcard entry to be able to use subdomains:
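In zone-file notation the two records might look like this, with 203.0.113.10 standing in for kube1’s actual public IP address (a documentation-range placeholder):

```
example.com.    300  IN  A  203.0.113.10
*.example.com.  300  IN  A  203.0.113.10
```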

    Once the DNS entries have propagated, our example service will be accessible at http://service.example.com. If you don’t have a domain name at hand, you can always add an entry to your hosts file instead.
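A hosts-file entry is a single line, again assuming 203.0.113.10 as kube1’s public IP:

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows)
203.0.113.10  service.example.com
```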

    Additionally, it might be a good idea to assign a subdomain to each host, e.g. kube1.example.com. It’s much more convenient to SSH into a host using a domain name instead of an IP address.

    Thanks to Let’s Encrypt and a project called kube-lego it’s incredibly easy to obtain free certificates for any domain name pointing at our Kubernetes cluster. Setting this service up takes no time, and it plays well with the NGINX ingress controller we deployed earlier. These are the related manifests:

    Before deploying kube-lego using the manifests above, make sure to replace the email address with your own.
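An ingress that requests a certificate from kube-lego could look like this sketch (kube-lego watches for the kubernetes.io/tls-acme annotation; host, service, and secret names are placeholders):

```yaml
# Hypothetical ingress: the annotation tells kube-lego to obtain a
# certificate for the listed hosts and store it in the named secret.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  tls:
  - hosts:
    - service.example.com
    secretName: service-example-com-tls
  rules:
  - host: service.example.com
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 80
```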

    After applying this manifest, kube-lego will try to obtain a certificate for service.example.com and reload the NGINX configuration to enable TLS. Make sure to check the logs of the kube-lego pod if something goes wrong.

    NGINX will automatically redirect clients to HTTPS whenever TLS is enabled. In case you still want to serve traffic on HTTP, add ingress.kubernetes.io/ssl-redirect: "false" to the list of annotations.

    Now that everything is in place, we are able to expose services on specific domains and automatically obtain certificates for them. Let’s try this out by deploying the Kubernetes Dashboard with the following manifests:

    Optionally, the following manifests can be used to get resource utilization graphs within the dashboard:

    What’s new here is that we enable basic authentication to restrict access to the dashboard. The following annotations are supported by the NGINX ingress controller, and may or may not work with other solutions:
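A sketch of those annotations (the secret name and realm text are placeholders; newer releases of the controller use the nginx.ingress.kubernetes.io/ prefix instead):

```yaml
# Ingress metadata excerpt enabling basic authentication.
metadata:
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
```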

    This example will prompt visitors to enter their credentials (user: admin / password: test) when accessing the dashboard. Secrets for basic authentication can be created using htpasswd, and need to be added to the manifest as a base64-encoded string.
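The workflow might look like this sketch, assuming htpasswd from the Apache utilities is installed locally and the secret is named basic-auth (a placeholder):

```yaml
# Generate the credentials file locally (user: admin / password: test):
#   htpasswd -bc auth admin test
# Then base64-encode it for the manifest:
#   base64 -w0 auth
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
type: Opaque
data:
  auth: <output of the base64 command>   # placeholder, paste your own value
```

Alternatively, kubectl create secret generic basic-auth --from-file=auth handles the encoding for you.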