@jpmens The cargo cult around lowering TTLs is what kills me. Kills me dead.

@jpmens Nice, I'll take a look at my records tomorrow.


Just curious, what does your DNS setup look like? 3 servers distributed across the world? Or is it more like a hidden master setup with an Anycast frontend that you rent somewhere?

Or just a single box sitting in a data center doing its duty?


My DNS setup is 2 Hetzner Cloud instances in a master-slave setup (ns1.infra.bn4t.me and ns2.infra.bn4t.me).
The cloud instances are located in Nürnberg and Helsinki. On the software side I use CoreDNS with Prometheus and Grafana for monitoring.
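For anyone curious what such a two-node CoreDNS setup looks like, here is a minimal Corefile sketch. The zone name, file path, IP addresses, and metrics port are placeholders I made up, not bn4t's actual configuration:

```
# --- ns1 (primary) Corefile ---
infra.example.org {
    file /etc/coredns/db.infra.example.org
    transfer {
        to 192.0.2.2        # allow AXFR to the secondary
    }
    prometheus :9153        # expose metrics for Prometheus/Grafana
}

# --- ns2 (secondary) Corefile, on the other host ---
infra.example.org {
    secondary {
        transfer from 192.0.2.1   # pull the zone from ns1
    }
    prometheus :9153
}
```

The `prometheus` plugin is what makes the Prometheus/Grafana monitoring side straightforward: each instance exports query counts, response codes, and latencies without extra tooling.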

Anycast would be cool. At the same time, though, it would probably be a bit of overkill 😉

What's your setup?


DNS is still one of the few services I host externally, currently on Cloudflare, because of the rather high cost of keeping DNS alive by running it in an anycast setup myself. To me, anycast seems almost essential for DNS, but in practice the same resilience problem can be addressed with longer TTLs.

TTLs for most services are set to 1 day to improve privacy.

My idea was to run a hidden master setup as a replacement, but hosted DNS zone transfers are still quite expensive :/
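For readers unfamiliar with the hidden-master idea, here is an illustrative zone-file snippet (names and the master's address are placeholders). The point is that the delegation lists only the public secondaries, so resolvers never query the master directly:

```
; Sketch of a hidden-master delegation — example names only.
; Only the public secondaries appear in the NS set:
example.org.    86400  IN  NS  ns1.example.org.
example.org.    86400  IN  NS  ns2.example.org.
; The hidden master (say, at 203.0.113.1) pushes the zone to ns1/ns2
; via NOTIFY + AXFR/IXFR, but is deliberately absent from the NS set,
; so it takes no query load and is harder to attack.
```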


A 1 second TTL!!! A minimum of 1 or 5 min for migration or disaster recovery is enough...

@jpmens Great post. I'd love to hear whether the DNS operators can list any actually valid reasons for the low TTLs.

@jpmens If you use a CDN like CloudFront with Route53 for DNS, it doesn't look like you can control the TTL. The Route53 DNS zones use a 60 second :blobfearful: TTL, and there's no way to change it that I can find. :(


Long response from @bjarni :

> This is very interesting. Good research. Bad advice.
> [ . . . ]
> His advice that people wilfully ignore what the upstream says, will certainly make some things faster, but will also directly cause outages and unavailability. I consider that a poor trade-off. Reliability should trump performance.


@jpmens For ftp.acc.umu.se we use 600s, because that's how long we can reasonably wait until we take a frontend down for maintenance. More than twice that isn't really feasible; at that point clients would just hit errors more often.

@jpmens I don't know why people complain about getting their local DNS cache/resolver/whatever to enforce a cache time of more than 1 or even 5 seconds. The colossal waste of bandwidth, and therefore energy and CO2, simply isn't worth it.
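To put a rough number on that waste, here is a back-of-envelope sketch (the client query rate and ~100-byte response size are my own assumptions, not figures from the thread): a caching resolver only needs to go upstream once per TTL window, no matter how often clients ask.

```python
def upstream_queries_per_day(client_qps: float, ttl_seconds: int) -> int:
    """Upstream (authoritative) queries a caching resolver makes per day.

    With a warm cache, the resolver refreshes the record at most once
    per TTL window; it also can't send more upstream queries than it
    receives from clients.
    """
    seconds_per_day = 86_400
    refreshes = seconds_per_day / max(ttl_seconds, 1)
    return int(min(refreshes, client_qps * seconds_per_day))

# Assume clients ask for one name once per second, ~100 B per response:
for ttl in (1, 5, 300):
    q = upstream_queries_per_day(client_qps=1, ttl_seconds=ttl)
    print(f"TTL {ttl:>3}s -> {q:>6} upstream queries/day (~{q * 100 / 1e6:.1f} MB)")
```

Going from a 1 s to a 300 s TTL cuts upstream traffic for that one name by a factor of 300 (86,400 queries/day down to 288) — multiply that across millions of resolvers and names and the energy argument writes itself.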

I'd hope that CDNs would use anycast & BGP so stupidly short TTLs wouldn't actually be required? Or is that naive?

@jpmens Also note: High TTLs can mitigate some of the harm from takeovers at the DNS or registrar level.

Especially for MX records, this would buy you time to regain control of the domain before resets can be sent.
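The grace window this buys is easy to quantify. A small sketch (the timestamps are made up for illustration): a resolver that cached the legitimate MX record before the takeover keeps serving it until the entry's TTL expires, regardless of what the now-hostile authoritative servers publish.

```python
from datetime import datetime, timedelta

def stale_until(cached_at: datetime, ttl_seconds: int) -> datetime:
    """A cache entry stays valid until cached_at + TTL; nothing the
    (now hostile) authoritative servers publish can shorten that."""
    return cached_at + timedelta(seconds=ttl_seconds)

cached = datetime(2020, 1, 1, 12, 0)    # resolver cached the real MX
takeover = datetime(2020, 1, 1, 13, 0)  # attacker gains control
grace = stale_until(cached, ttl_seconds=86_400) - takeover
print(f"grace window for this resolver: {grace}")
```

With a 1-day TTL, this resolver keeps delivering mail to the legitimate server for another 23 hours; with a 60 s TTL, the attacker receives password-reset mail almost immediately.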
