SDNS is a high-performance, recursive DNS resolver with DNSSEC support, focused on preserving privacy. The project is maintained by semihalev.
Kubernetes DNS middleware for SDNS. Resolves cluster-domain names
(services, pods, SRV, PTR) directly from a sharded in-memory registry
populated by Kubernetes informers. Each affected name’s dns.RR
slices are pre-built on every mutation, so ResolveQuery is a single
sharded map lookup with zero allocations.
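The "pre-build on mutation, single lookup at query time" idea can be sketched as follows. This is a minimal illustration, not the project's code: `Record` stands in for miekg/dns `dns.RR`, and `upsertService` is a hypothetical mutation hook.

```go
// Sketch: answers are materialised when the registry mutates, so query
// time is one map read. Record is a stand-in for dns.RR.
package main

import (
	"fmt"
	"sync"
)

type Record struct {
	Name string
	Type string
	TTL  uint32
	Data string
}

type registry struct {
	mu      sync.RWMutex
	answers map[string][]Record // keyed by lowercase qname
}

// upsertService rebuilds the answer slice for a service at mutation time,
// so the query path never allocates record data.
func (r *registry) upsertService(ns, name, clusterIP string) {
	qname := fmt.Sprintf("%s.%s.svc.cluster.local.", name, ns)
	rr := []Record{{Name: qname, Type: "A", TTL: 30, Data: clusterIP}}
	r.mu.Lock()
	r.answers[qname] = rr
	r.mu.Unlock()
}

// ResolveQuery is a single map lookup under a read lock.
func (r *registry) ResolveQuery(qname string) []Record {
	r.mu.RLock()
	defer r.mu.RUnlock()
	return r.answers[qname]
}

func main() {
	r := &registry{answers: make(map[string][]Record)}
	r.upsertService("default", "web", "10.96.0.10")
	fmt.Println(r.ResolveQuery("web.default.svc.cluster.local.")[0].Data) // 10.96.0.10
}
```

The trade-off is classic read-optimisation: every Kubernetes mutation pays the cost of rebuilding the affected slices so that the far more frequent queries stay allocation-free.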
This middleware does not cache DNS responses, and the chain order
in gen.go places kubernetes before the cache middleware so the
cache layer doesn’t see these answers either. That is by design:
registry lookups are already O(1), and only the dns.Msg setup and
wire packing in ServeDNS incur allocations on the hot path. If you
are debugging stale answers, the source of truth is the registry —
the upstream cache is not involved.
- `service.namespace.svc.cluster.local` → ClusterIP
- `pod-ip.namespace.pod.cluster.local` → Pod IP
  - `10-244-1-1.namespace.pod.cluster.local` (IPv4)
  - `2001-db8--1.namespace.pod.cluster.local` (IPv6)
- `pod-name.service.namespace.svc.cluster.local`
- `_port._protocol.service.namespace.svc.cluster.local`
- `1.0.96.10.in-addr.arpa` → service / pod
- `…ip6.arpa` → service / pod

The registry is 256-way sharded:
- `serviceShards` keyed by `namespace/name`
- `podShards` keyed by IP
- `endpointShards` keyed by `namespace/service`
- `podByName` keyed by `namespace/name` (StatefulSet lookups, public accessor)
- `serviceByIP` keyed by ClusterIP string (PTR fast path)

Reads and writes against different shards never contend. Per-shard RWMutexes serialise reads against any concurrent write to the same shard.
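The sharding scheme above can be sketched as a 256-way hashed map. The `shardFor` helper and its FNV-1a hash are assumptions for illustration, not the project's exact code:

```go
// Sketch of a 256-way sharded map: keys that hash to different shards
// never contend on the same lock.
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const shardCount = 256 // power of two, so masking replaces modulo

type shard struct {
	mu sync.RWMutex
	m  map[string]string // e.g. namespace/name -> ClusterIP
}

type sharded struct {
	shards [shardCount]*shard
}

func newSharded() *sharded {
	s := &sharded{}
	for i := range s.shards {
		s.shards[i] = &shard{m: make(map[string]string)}
	}
	return s
}

// shardFor picks a shard by hashing the key.
func (s *sharded) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return s.shards[h.Sum32()&(shardCount-1)]
}

func (s *sharded) Put(key, val string) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	sh.m[key] = val
	sh.mu.Unlock()
}

func (s *sharded) Get(key string) (string, bool) {
	sh := s.shardFor(key)
	sh.mu.RLock()
	defer sh.mu.RUnlock()
	v, ok := sh.m[key]
	return v, ok
}

func main() {
	s := newSharded()
	s.Put("default/web", "10.96.0.10")
	fmt.Println(s.Get("default/web")) // 10.96.0.10 true
}
```

With 256 shards, lock contention on a given shard requires two concurrent operations whose keys collide in the low byte of the hash, which is rare for realistic cluster workloads.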
- `kubernetes.go` — middleware entry: New, ServeDNS, Stats, demo seed
- `registry.go` — sharded Registry: query resolution + accessors
- `client.go` — Kubernetes API client (informers for Services, EndpointSlices, Pods)
- `types.go` — Service, Pod, Endpoint, Port
- `ipv6_utils.go` — IPv6 parsing helpers
- `constants.go` — TTLs, network octets, etc.
- `test_helpers.go` — mock ResponseWriter for tests

```toml
[kubernetes]
enabled = true
cluster_domain = "cluster.local" # default
# kubeconfig = "/path/to/kubeconfig" # optional, falls back to in-cluster
# demo = true # populate synthetic data for local testing

[kubernetes.ttl]
service = 30
pod = 30
srv = 30
ptr = 30
```
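For orientation, the config block above maps naturally onto a Go struct like the following. This is a hypothetical mirror of the TOML keys, not SDNS's actual config types; the field names and `applyDefaults` helper are illustrative:

```go
// Hypothetical Go mirror of the [kubernetes] config block.
package main

import "fmt"

type TTLConfig struct {
	Service uint32 `toml:"service"`
	Pod     uint32 `toml:"pod"`
	SRV     uint32 `toml:"srv"`
	PTR     uint32 `toml:"ptr"`
}

type KubernetesConfig struct {
	Enabled       bool      `toml:"enabled"`
	ClusterDomain string    `toml:"cluster_domain"`
	Kubeconfig    string    `toml:"kubeconfig"`
	Demo          bool      `toml:"demo"`
	TTL           TTLConfig `toml:"ttl"`
}

// applyDefaults fills the documented defaults: cluster.local and 30s TTLs.
func applyDefaults(c *KubernetesConfig) {
	if c.ClusterDomain == "" {
		c.ClusterDomain = "cluster.local"
	}
	for _, p := range []*uint32{&c.TTL.Service, &c.TTL.Pod, &c.TTL.SRV, &c.TTL.PTR} {
		if *p == 0 {
			*p = 30
		}
	}
}

func main() {
	c := KubernetesConfig{Enabled: true}
	applyDefaults(&c)
	fmt.Println(c.ClusterDomain, c.TTL.Service) // cluster.local 30
}
```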
The legacy `killer_mode` flag is accepted for backward compatibility but has no effect — the middleware always uses the sharded registry. Remove it from your config; SDNS logs a deprecation warning if it is set to `true`.
```sh
# Service
dig @localhost service-name.namespace.svc.cluster.local

# Pod by IP
dig @localhost 10-244-1-1.namespace.pod.cluster.local

# SRV
dig @localhost _http._tcp.service-name.namespace.svc.cluster.local SRV

# Reverse
dig @localhost -x 10.96.0.1

# IPv6 service
dig @localhost service-name.namespace.svc.cluster.local AAAA
```
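The dashed pod labels in the queries above encode IPs with `-` in place of `.` or `:` (so `--` stands for `::`). A sketch of decoding them back to an IP, using only the standard library — `parseDashedIP` is an assumption about how `ipv6_utils.go`-style parsing could work, not the project's exact function:

```go
// Sketch: decode dashed pod-name labels like "10-244-1-1" and
// "2001-db8--1" back into net.IP values.
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseDashedIP converts "10-244-1-1" to 10.244.1.1 and "2001-db8--1"
// to 2001:db8::1. It returns nil for labels that are not dashed IPs.
func parseDashedIP(label string) net.IP {
	// Try IPv4 first: every dash becomes a dot.
	if ip := net.ParseIP(strings.ReplaceAll(label, "-", ".")); ip != nil && ip.To4() != nil {
		return ip
	}
	// Fall back to IPv6: every dash becomes a colon ("--" yields "::").
	return net.ParseIP(strings.ReplaceAll(label, "-", ":"))
}

func main() {
	fmt.Println(parseDashedIP("10-244-1-1"))  // 10.244.1.1
	fmt.Println(parseDashedIP("2001-db8--1")) // 2001:db8::1
}
```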
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: sdns-kubernetes-dns
rules:
  - apiGroups: [""]
    resources: ["services", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: sdns-kubernetes-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: sdns-kubernetes-dns
subjects:
  - kind: ServiceAccount
    name: sdns
    namespace: sdns-system
```
Kubernetes.Stats() returns:
- `queries`, `answered`, `errors`, `write_errors`
- `registry`: per-registry counters (`services`, `pods`, `endpoints`, `endpoint_sets`, `queries`, `hits`, `hit_rate_pct`, `shards`)

No Kubernetes connection. Verify kubeconfig path, in-cluster pod identity, and RBAC permissions for Services / Pods / EndpointSlices.
Queries not resolving. Ensure cluster_domain matches the
cluster’s actual domain (kubectl get cm -n kube-system coredns -o yaml
shows the answer if you’re migrating from CoreDNS). Check that
informers have synced — the middleware passes through to the next
handler until at least one informer has populated the registry.
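The "pass through until synced" behaviour can be sketched with an atomic flag standing in for client-go's informer `HasSynced` checks; the names here are illustrative, not the middleware's actual API:

```go
// Sketch: answer from the registry only once informers have populated it;
// before that, defer to the next handler in the chain.
package main

import (
	"fmt"
	"sync/atomic"
)

type middleware struct {
	synced atomic.Bool
}

// handle returns a registry answer once synced; otherwise it falls
// through to next, because an empty registry would mean false NXDOMAINs.
func (m *middleware) handle(qname string, next func(string) string) string {
	if !m.synced.Load() {
		return next(qname)
	}
	return "answer-from-registry:" + qname
}

func main() {
	m := &middleware{}
	next := func(q string) string { return "forwarded:" + q }
	fmt.Println(m.handle("web.default.svc.cluster.local.", next)) // forwarded
	m.synced.Store(true)                                          // informer cache has synced
	fmt.Println(m.handle("web.default.svc.cluster.local.", next)) // registry answer
}
```

Falling through rather than returning NXDOMAIN during startup matters: a negative answer would be cached by clients, while forwarding merely adds latency until the informers catch up.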
Cache behaviour. This middleware does not cache responses, and the
cache middleware sits below it in the chain (see gen.go) so it
never sees Kubernetes answers either. There is no DNS-message cache in
this path. Stale answers can therefore only come from stale informer
state — check Stats()["registry"] and the Kubernetes API directly,
not the cache middleware.