DNS
In this mode, all name lookups are handled locally by the data plane proxy. This approach allows for more robust handling of name resolution.
On Kubernetes, this is the default. You must enable it manually on universal deployments.
In Universal mode, the transparent proxy and the kuma-dp process together enable DNS resolution of .mesh addresses.
Prerequisites:
- kuma-dp, envoy, and coredns must run on the worker node (the node that runs your service mesh workload).
- coredns must be in the PATH so that kuma-dp can access it. You can also set the location with the --dns-coredns-path flag.
Specify the --skip-resolv-conf and --redirect-dns flags when you set up the transparent proxy iptables rules.
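For example, a minimal sketch of installing the transparent proxy with these flags (the exact set of available flags depends on your Kuma version, and the kuma-dp user name here is only an assumption):
$ kumactl install transparent-proxy \
--kuma-dp-user kuma-dp \
--skip-resolv-conf \
--redirect-dns
Then start kuma-dp: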
$ kuma-dp run \
--cp-address=https://127.0.0.1:5678 \
--dataplane-file=dp.yaml \
--dataplane-token-file=/tmp/kuma-dp-redis-1-token
The kuma-dp process also starts CoreDNS and allows resolution of .mesh addresses.
Special considerations
This mode implements advanced networking techniques, so take special care for the following cases:
- The mode can safely be used with the .
- In mixed IPv4 and IPv6 environments, it’s recommended that you specify an IPv6 virtual IP CIDR.
The data plane proxy DNS consists of:
- an Envoy DNS filter that provides responses from the mesh for DNS records
- a CoreDNS instance, launched by kuma-dp, that proxies requests between the Envoy DNS filter and the host DNS
- iptables rules that redirect the original DNS traffic to the local CoreDNS instance
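As an illustration of the last component, the redirect has roughly this shape (illustrative only; the actual chains, exclusions, and target port are generated by the transparent proxy setup, and the port shown here is just an example):
# Redirect outbound UDP DNS traffic to the local CoreDNS port (example port only).
$ iptables -t nat -A OUTPUT -p udp --dport 53 -j REDIRECT --to-ports 15053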
Overriding the CoreDNS configuration
In some cases it might be useful to adjust the default CoreDNS configuration. To do so, you can pass a custom configuration file as an argument to kuma-dp. This file is a CoreDNS configuration that is processed as a Go template. If you edit this configuration, base it on the existing default configuration.
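For orientation, a minimal sketch of such a template, assuming hypothetical template variables .CoreDNSPort and .EnvoyDNSPort (start from the default configuration shipped with your Kuma version rather than from this sketch):
# Hypothetical Corefile template sketch; the template variable names are assumptions.
.:{{ .CoreDNSPort }} {
    # Forward queries to the local Envoy DNS filter.
    forward . 127.0.0.1:{{ .EnvoyDNSPort }}
    errors
}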
The Kuma control plane deploys its DNS resolver on UDP port 5653. It decouples service name resolution from the underlying infrastructure and thus makes Kuma more flexible. When Kuma is deployed as a distributed control plane, Kuma DNS enables cross-cluster service discovery.
Kubernetes
When you install the control plane, set the following environment variable to disable the data plane proxy DNS:
KUMA_RUNTIME_KUBERNETES_INJECTOR_BUILTIN_DNS_ENABLED=false
Pass the environment variable to the --env-var argument when you install:
kumactl install control-plane \
--env-var KUMA_RUNTIME_KUBERNETES_INJECTOR_BUILTIN_DNS_ENABLED=false
Universal
Start kuma-dp with the --dns-enabled flag set to false:
$ kuma-dp run \
--cp-address=https://127.0.0.1:5678 \
--dataplane-file=dp.yaml \
--dataplane-token-file=/tmp/<KUMA_DP_REDIS_1_TOKEN> \
--dns-enabled=false
You can configure Kuma DNS with the config file, or with environment variables:
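For example, a sketch of the DNS settings in the control plane configuration file and the equivalent environment variables (the dnsServer section name and the variable names are assumptions based on Kuma's usual configuration layout; check the reference configuration for your version):
# Control plane configuration file (sketch)
dnsServer:
  port: 5653
  CIDR: 240.0.0.0/4
# Equivalent environment variables (sketch)
KUMA_DNS_SERVER_PORT=5653
KUMA_DNS_SERVER_CIDR=240.0.0.0/4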
The port field specifies the port where Kuma DNS accepts requests. Make sure this value matches the port setting for the kuma-control-plane service.
The CIDR field sets the IP range of virtual IPs. The default 240.0.0.0/4 is reserved for future IPv4 use and is guaranteed to be non-routable. We strongly recommend not changing this value unless you have a specific need for a different IP range.
Kuma DNS includes these components:
- The DNS server
- The VIPs allocator
- Cross-replica persistence
The DNS server listens on port 5653, responds to type A and AAAA DNS requests, and answers with A or AAAA records, for example <service>.mesh. 60 IN A 240.0.0.100 or <service>.mesh. 60 IN AAAA fd00:fd00::100. The default TTL is 60 seconds, to ensure the client synchronizes with Kuma DNS and to account for any intervening changes.
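As a quick check, you can query the DNS server directly, assuming dig is installed and using an example service name in the format described below:
$ dig @<control-plane-address> -p 5653 echo-server_echo-example_svc_1010.mesh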
The virtual IPs are allocated from the configured CIDR by constantly scanning the services available in all Kuma meshes. When a service is removed, its VIP is freed, and Kuma DNS no longer responds with an A record for it.
Kuma DNS is not a service discovery mechanism. Instead, it returns a single VIP that is assigned to the relevant service in the mesh. This makes for a unified view of all services within a single zone or across multiple zones.
Consuming a service handled by Kuma DNS from inside a Kubernetes container is based on the automatically generated kuma.io/service tag. The resulting domain name has the format {service tag}.mesh. For example:
<kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh:80
<kuma-enabled-pod>$ curl http://echo-server_echo-example_svc_1010.mesh
A DNS standards-compliant name is also available, in which the underscores in the service name are replaced with dots. For example:
<kuma-enabled-pod>$ curl http://echo-server.echo-example.svc.1010.mesh
The default listeners created on the VIP default to port 80, so the port can be omitted with a standard HTTP client.