Less snarky: because you didn't get a dedicated prefix assigned to your office, and you don't want to renumber every time your ISP moves your assignment.
Static addresses don't exist in the real world, thanks $sales!
So NAT works around your ISP's dumb-fuckery. The IETF answer is "get/become a better ISP", which is clearly bullshit.
Sucks? Yes. Works around the problem? Yes.
@phessler @ckeen @mwlucas @kurtm @kellerfuchs @Polishdub The other option would be using static ULA addresses for your internal communication, and variable provider-assigned addresses for external connections.
But yeah, just using NAT is probably easier, and also leaks slightly less information about the internal network.
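For the ULA option mentioned above: RFC 4193 carves fd00::/8 out for locally assigned prefixes, with a pseudo-randomly chosen 40-bit Global ID. A rough stdlib-only sketch (the RFC's actual recipe hashes a timestamp and an EUI-64 with SHA-1; this version just takes random bits, which gets you the same "pseudo-random Global ID" effect):

```python
import secrets
import ipaddress

def random_ula_prefix() -> ipaddress.IPv6Network:
    """Pick a /48 inside fd00::/8: 8 fixed bits + 40 random Global ID bits."""
    global_id = secrets.randbits(40)
    prefix_int = (0xFD << 120) | (global_id << 80)
    return ipaddress.IPv6Network((prefix_int, 48))

prefix = random_ula_prefix()
print(prefix)  # e.g. fd3c:1a2b:3c4d::/48
```

Internal hosts then keep stable addresses out of that /48 regardless of what the provider-assigned prefix does.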
So, you either have to allow any random client to update entries on your DNS server, or you must do static addressing with static DNS entries. Which you then have to update every time your ISP moves you around.
I really think most of the IPv6 zealots have never run a sizable network of machines.
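The renumbering chore described above is mechanical: every static entry keeps its interface identifier and swaps the prefix. A sketch of that translation with the stdlib `ipaddress` module (prefixes and the `::53` host here are made-up documentation values, not anything from the thread):

```python
import ipaddress

def renumber(addr: str, old_prefix: str, new_prefix: str) -> str:
    """Re-prefix a static address: keep host bits, swap the network part."""
    a = ipaddress.IPv6Address(addr)
    old = ipaddress.IPv6Network(old_prefix)
    new = ipaddress.IPv6Network(new_prefix)
    assert a in old and old.prefixlen == new.prefixlen
    host_bits = int(a) & ((1 << (128 - old.prefixlen)) - 1)
    return str(ipaddress.IPv6Address(int(new.network_address) | host_bits))

# the ISP moves us from 2001:db8:aaaa::/48 to 2001:db8:bbbb::/48
print(renumber("2001:db8:aaaa:1::53", "2001:db8:aaaa::/48", "2001:db8:bbbb::/48"))
# -> 2001:db8:bbbb:1::53
```

Mechanical or not, someone still has to run it against every zone file and config, which is the complaint.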
@mwlucas @phessler @Polishdub @kellerfuchs @kurtm Well, NAT certainly has its downsides too; like, I'm not sure having additional protocol variants just to pass NAT gateways is such a great thing (IPsec NAT traversal, anyone?)...
But on the other hand, most of the time I really don't want end-to-end connectivity with anyone from the internet to my desktop, even though that would make life much easier for a lot of applications...
It's when they refuse to admit any advantages to certain things that they lose me. Is NAT ideal? No. Does it provide benefits in certain scenarios? Yes.
@sysadmin1138 Maybe things like RDMA could make use of additional addresses? Or the host uses them to create more / dedicated endpoints for specific services?
Tbh I don't see a real problem with that proposal, even though I don't really like the "just always use a /64 for everything, we'll surely find an appropriate use" meme.
If I may jump in...
The /64-per-host thing came way before docker. And it really should be /64-per-vaguely-defined-subnet. Which is probably not a great answer but the general goal seems to be "everyone should have as many IP addresses as they ever need".
Which seems reasonable to me. There's no reason for IP addresses to be scarce, and NAT breaks lots of assumptions.
@icefox @sysadmin1138 @phessler @kurtm @kellerfuchs @Polishdub @mwlucas Well yes, but the jump from "a /64 for every subnet" to "a /64 for every host" involves additional complexity, like a routing protocol, maybe?
For hosts that run things like containers, that's probably not going to be a problem, and it really is preferable to just using heaps of addresses from the directly connected subnet, looking at neighbor table sizes of common network equipment. But for simple end hosts, it's overkill.
Oh yeah, it's totally overkill. They really could have gotten away with the smallest dividing unit being, like, a /96 or maybe a /112 or something, and life would be fine.
I think they just said "well there's no kill like overkill" and ran with it.
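Just to put numbers on the "no kill like overkill" point, here's what each candidate smallest-allocation size leaves per subnet (stdlib `ipaddress`, using the 2001:db8:: documentation prefix):

```python
import ipaddress

# Addresses available in a subnet at each candidate minimum allocation.
for plen in (64, 96, 112):
    net = ipaddress.IPv6Network(f"2001:db8::/{plen}")
    print(f"/{plen}: 2^{128 - plen} = {net.num_addresses} addresses")
```

Even a /112 leaves 65,536 addresses per subnet; a /64 leaves 2^64, more than the entire IPv4 internet squared-ish, per subnet.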
I confess we are bumping against the limits of my intimate understanding here, so I may be wrong...
...but consider that it's only been in the last 5 years or so, with UPnP (which is a security dumpster fire already), that NAT has NOT broken everything that allows other people to connect to a system. Life gets way simpler when you have direct end-to-end public connections. And no less secure when you firewall properly anyway.
@dvl Breaks end-to-end connectivity, and shitty protocols (SIP).