What is hard is dual stack networking. Dual stack networking absolutely sucks: it's twice the stuff you have to deal with, and the edge cases are exponentially more complex, because if your setup isn't identical in both stacks, you're going to hit weird issues (like the article suggests) with one stack behaving one way and the other behaving another way.
None of this is the fault of IPv6, it's a natural consequence of the strategy of having two internets during the transition period. A transition period which has now been going on longer than the internet existed when it started.
Maybe 100 years from now we'll finally be able to shut off the last IPv4 system and the "dream" of IPv6 will become a reality, but until then, life is strictly more difficult for every network administrator because of dual stack.
(Note: no, we could not have done it any other way. Any upgrade strategy to IPv4 that extended the address size would have necessitated having two internets. It just sucks that it's been this long and we're still not even at 50% IPv6 usage.)
The addressing is intimidating, getting your head away from NAT, thinking about being globally routable, the lack of (or variances in) DHCP, ND, "static" addressing via suffixes, etc.
None of this is technically "hard", but it's a significant barrier to entry because it's quite different to what most normal technical users are used to.
Dual stack may create some new problems, but it fixes a lot of others. All of the major cloud providers have awful IPv6 only support, and many prominent web resources that usually invisibly "just work" have little to no IPv6 support.
I suspect that if the situation were reversed and we always had IPv6, and someone proposed a new IP suite where you had to use NAT for everything, and all hosts needed to have their addresses hand-picked (or served via a DHCP server that used an ugly hack to communicate with hosts that don't have an IP yet), you'd find that to be the "hard" one.
I can remember two numbers from 0 to 255, but four? Not in a million years.
Isn't there an idea that our working memory holds seven pieces of information? Which would be two of those numbers and change, if you consider each digit as a piece of information.
I also could have mentioned mDNS as a distinct subclass of DNS. That's most of what I use in IPv6 LAN stuff, personally, but I don't have a lot of subnets to worry about. I also could have mentioned tricks like writing down a subnet prefix somewhere, such as a .env script, and using assigned addresses full of zeroes in that prefix, like $PREFIX::1, $PREFIX::16, $PREFIX::cafe, or, if you really want and just miss dotted quads that much, $PREFIX::136.224.154.123 (that's been a supported address format for a few years now, as long as it is on the right side of a :: shortcut).
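For anyone who wants to play with those suffix tricks, here's a minimal sketch using Python's standard ipaddress module (the prefix is a made-up placeholder; substitute your own):

```python
import ipaddress

# Hypothetical /64 prefix you might stash in a .env-style file
PREFIX = "fd12:3456:789a:1"  # placeholder: substitute your real prefix

# Hand-picked, memorable suffixes inside that prefix
addrs = [
    ipaddress.IPv6Address(f"{PREFIX}::1"),
    ipaddress.IPv6Address(f"{PREFIX}::16"),
    ipaddress.IPv6Address(f"{PREFIX}::cafe"),
    # A dotted quad is accepted on the right side of the "::" shortcut
    ipaddress.IPv6Address(f"{PREFIX}::136.224.154.123"),
]

for a in addrs:
    print(a)  # the dotted-quad form normalizes to two hex groups
```

Note that the dotted-quad form is purely input sugar: the library stores and prints it as ordinary hex groups.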
I've been sending my mail via IPv6 for ages without issues… did you mean reputation or reputation?
In most cases it does work and there are no problems. But medium-sized businesses in particular, who run their own email through some crappy mail-filtering middlebox, will file anything that comes from or through an IPv6 endpoint as spam.
> did you mean reputation or reputation?
I don't understand your question. I meant IP address reputation, which is what many mail filtering systems use to estimate spam probability or outright reject email. See for example https://www.spamhaus.org/ip-reputation/
Just don't listen on v6; that would achieve your goal, would prevent reputable dual-stack senders (like Google!) from sending messages over v6 that you'll drop, and would simplify your configuration.
There's reputation of the IP address(es) among spam filters, and there's reputation among human operators configuring mail servers. I was trying to riff on ambiguity with ambiguity, that wasn't particularly helpful, sorry.
IPv6 nodes aren't individual random addresses in a 128-bit address space. They are going to be grouped in subnets, so it makes sense to explore /64 ranges where you know there's already at least 1 address active. There's a pretty decent chance at least some addresses are going to be sequential - either due to manual configuration or DHCPv6 - so you can make a decent start by scanning those. For non-client devices, SLAAC usually generates a fixed address determined by the NIC's MAC address, which in turn has a large fixed component identifying the NIC's vendor. This leaves you with a 24-bit space to scan in order to find other devices using NICs made by that vendor - not exactly an unfair assumption in larger deployments. Much faster scanning can of course be done if you can use something like DNS records as source for potential targets, and it's game over once an attacker has compromised the first device and can do link-local discovery.
It's not going to be extremely fast or efficient, but IPv6 scanning isn't exactly impossible either. It's already happening in practice[0], and it's only going to get worse.
[0]: https://www.akamai.com/blog/security-research/vulnerability-...
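To make the EUI-64 point concrete, here's a rough sketch in Python of how a scanner can derive the SLAAC address a device would take, leaving only the vendor's 24-bit device space to enumerate (the subnet and MAC are made-up examples):

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive the classic (non-privacy) SLAAC address for a MAC in a /64."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    # EUI-64: flip the universal/local bit of the first octet, insert ff:fe
    iid = bytes([b[0] ^ 0x02, b[1], b[2], 0xFF, 0xFE, b[3], b[4], b[5]])
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(iid, "big"))

# A /64 known to be live, and a MAC whose first three octets are the vendor OUI
subnet = "2001:db8:1234:5678::/64"                      # made-up example subnet
candidate = eui64_address(subnet, "00:1a:2b:00:00:01")  # made-up OUI 00:1a:2b
print(candidate)
```

Fixing the subnet and the OUI leaves only the last three MAC octets unknown, so the scan space drops from 2^64 to 2^24 per (subnet, vendor) pair.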
See what address gets used, and blam.
Then there is this one customer: they are colocated in other ISP-owned datacenters, and some of their servers may be fully IPv6, some may be IPv4-only, and some may be a mixture of both. Supporting them is a never-ending nightmare.
The really sad thing about IPv6 is that it's a relic of a bygone era. The internet was envisioned as a thing where every device on every network could, in principle, initiate a connection to any other device on any other network. But then we started running out of addresses, ISP's wouldn't give you more than one for a household, so we started doing NAT, and that kind of sealed the internet's fate.
Nowadays, we wouldn't want a world where everything was connectable, even if we had enough addresses. Everyone's network in essence has a firewall. If you run IPv4 and use NAT, you're going to be dropping unsolicited incoming packets no matter what (let alone that you wouldn't know where to route them even if you wanted to let them through), and with IPv6 you'd be insane to allow them.
Devices have "grown up" in a world where we expect the router we're connecting to to shield us from incoming unsolicited traffic. I certainly don't have the firewall enabled on my linux desktop. Windows I _think_ has it enabled by default, but I often turn it off because I want LAN connections to work. MacOS is probably similar. Suffice it to say, if the IPv6 dream happened overnight and everyone's devices were instantly connectable, all hell would break loose. We need our routers to disallow traffic. (Edit: This wasn't all that clear, so let me restate: In both the IPv4 and IPv6 world, you'd be insane to allow incoming unsolicited traffic, which is why basically everyone blocks it in IPv6 as well as IPv4. I'm not trying to say IPv6 can't do this... quite the contrary: IPv6 routers absolutely do, and should, block unsolicited incoming traffic. But my point is that this is what prevents the original vision of IPv6 from becoming a reality: we can't design software around the idea of direct peer-to-peer communication without stuff like UPnP. Oh, and UPnP works with NAT and IPv4 anyway, so there's the rub.)
So, even in a pure-IPv6 world, that instantly prevents any startup with an idea to allow true peer-to-peer communication using end-to-end routability. What are you going to do, train your users to log into their router and enable traffic? Approximately 0% of your customers know how to do that, even in a counterfactual universe where IPv6 happened and we all have routable IP's on our devices. Maybe in such a universe, people would be trained to use local firewalls on their device, and say "ok" to popups asking if you want to let software through. But I'd wager that a lot of people would prefer the simplicity of just having their gateway drop the traffic for them.
No, the "all devices are routable" idea came from a naive world where there wasn't a financial incentive for malicious behavior at every turn. Where there aren't millions of hackers waiting for an open port to an exploitable service they can use to encrypt all your data and ransom it for bitcoin. The internet is a dark forest now. We don't _want_ end-to-end connectivity.
IPv6 makes P2P routing substantially easier, even in a world of default firewalls that drop unsolicited packets. You can still apply standard NAT busting techniques like STUN with IPv6 devices behind firewalls, and you get much better results because IPv6 removes the need to track and predict the port mapping a standard NAT does. Two P2P systems can both send unsolicited packets to each others IPv6 addresses, with a specific port, and know that port number isn’t going to be remapped, so their outbound packets are guaranteed to have the right address and port data in them to get their respective firewalls to treat inbound traffic on those addresses and ports as solicited.
This is particularly useful when dealing with CGNATs, where your residential devices end up behind two NAT layers, both of them messing with your outbound traffic, which creates an absolute nightmare when trying to NAT bust. IPv6 means that you're no longer dealing with three or more stateful NATs/firewalls between two peers, and instead only have to deal with at most two (in the general case) stateful firewalls, whose behaviour is inherently simpler and more predictable than a NAT's ever is.
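A toy sketch of that simultaneous-open trick, run here over loopback (in practice the two peers exchange their (address, port) tuples through a rendezvous service first):

```python
import socket

# Each peer binds a UDP socket; with IPv6 the bound port is the port the
# peer advertises, since no NAT will remap it in flight.
a = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
a.bind(("::1", 0))
b.bind(("::1", 0))
a.settimeout(2)
b.settimeout(2)

addr_a = a.getsockname()[:2]  # normally exchanged via a rendezvous server
addr_b = b.getsockname()[:2]

# Simultaneous open: both sides send before either has received anything,
# which marks the reverse flow as "solicited" in each side's stateful firewall.
a.sendto(b"punch", addr_b)
b.sendto(b"punch", addr_a)

msg_at_b = b.recvfrom(64)[0]
msg_at_a = a.recvfrom(64)[0]
print(msg_at_a, msg_at_b)
```

With IPv4 NAT the same dance needs STUN to discover the remapped external port first; with IPv6 the bound port is the advertised port, full stop.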
(Of course, the other huge benefit of IPv6 is "more addresses", so we need it just for that. But my point is that "global routability" isn't really the dream people think it is. In practice, the only differences between modern GUA-but-deny-by-default IPv6 setups and NAT'ed IPv4 setups are the simplicity of the former for the network administrators.)
You should tell that to my ISP, they’ve managed to deploy a CGNAT that’s proven to be completely STUN proof. The only way I can achieve any kind of P2P comms is using IPv6. IPv4 is useless for anything except strictly outbound connections.
> and you're going to need something like that even if there were no NAT, so to me it sounds like a bit of a wash, at least to the end-user.
Not in my case. As above, I simply can't bust my ISP's CGNAT, so IPv6 is invaluable to me. Makes a huge difference to me, the end-user.
It's not the only benefit, as anyone who's tried to build a large network or merge two networks with overlapping address space will tell you.
Except with 10/8 (or 172/12) everyone is using the same address space. How many networks have a 10.0.0.0/24? What are the odds of a conflict for that?
But if you have a ULA fdxx:xxxx:xxxx::/48 address space, what are the odds that all those x bits will be the same for any two sites? That's 40 bits of 'entropy'. Much, much lower (notwithstanding folks doing DEADBEEF, BADCAFE, etc).
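The arithmetic behind those odds follows directly from RFC 4193's 40-bit random Global ID; a quick back-of-envelope in Python:

```python
import math
import secrets

def random_ula_prefix() -> str:
    """Generate an RFC 4193-style ULA /48: fd00::/8 plus 40 random bits."""
    gid = secrets.randbits(40)
    return "fd{:02x}:{:04x}:{:04x}::/48".format(
        gid >> 32, (gid >> 16) & 0xFFFF, gid & 0xFFFF)

def collision_probability(n: int) -> float:
    """Birthday-style odds that any two of n sites picked the same Global ID."""
    return 1.0 - math.exp(-n * (n - 1) / (2 * 2**40))

print(random_ula_prefix())          # e.g. fd3c:91a7:0b2e::/48 (random each run)
print(collision_probability(2))     # roughly 1 in 2^40 for two given sites
print(collision_probability(1000))  # still tiny even when merging 1000 sites
```

Even merging a thousand independently numbered sites, the chance of any prefix clash stays under one in a million, versus the near-certainty of two 10.0.0.0/24s colliding.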
Sure, some network engineers will try and design v6 networks as though they're v4, but everyone else will just go and get a squillion addresses from the RIR that's unique to their org and then just use/announce that.
Nothing stops you getting a subnet allocated from your local RIR and announcing that on the internet, if you care about that sort of thing for your home network though. You just need a decent ISP.
I’ll just leave that there.
No, I can’t just choose to have a decent ISP. I live in the real world where there’s only one choice where I live, same as a rather large majority of people in the world.
Sometimes. I've experienced a few networks where even with STUN I'm still not able to get a workable session in IPv4.
2) In fact, you can at least ask them to change their CGNAT setup, since they could, if they chose, fix it unilaterally. Whereas even if you also felt it possible to ask them to upgrade to IPv6, upgrading to IPv6 as the solution requires the rest of the world to upgrade as well.
The real question is then: which solution to your problem is easier and cheaper to obtain? And the answer is clearly not "upgrade the world to IPv6". "I couldn't convince my ISP to fix their CGNAT setup, so I'd rather convince them--and everyone else--to upgrade to IPv6" makes no sense as a coping mechanism.
There is also a bit of conflated use of terms. A firewall on Windows is just a fancy name for permission management: a program wants to open a port, and the user gets a prompt asking whether to allow it (assuming the user has such privileges). It is indistinguishable from the similar permission systems found on phones that ask if an app is allowed to access contacts. The only distinguishing "feature" is that programs might not be aware that the permission was denied, thus opening a port that is not actually open. Windows programmers may know more of the specifics of this.
... which is ... virtually all of them ...
get p0wned on a massive scale? You know, now with AI powered assaults for extra special vulnerability levels...
And then a magic wand happens, and the IT orgs that couldn't even install patches will be able to back-discover all the compromised hard drive firmwares, rootkits, and nth-level security holes after the fact?
I get a NAT wall isn't perfect security. But pretending it is NO security is disingenuous.
IMO this is the only way forward.
You have a firewall at the edge of the network. It blocks incoming connections by default, but supports Port Control Protocol. The cacophony of unpatched legacy IoS devices stay firewalled because your ten year old network printer was never expected to have global connectivity and doesn't request it. End-to-end apps that actually want it do make the requests, and then the firewall opens the port for them without unsophisticated users having to manually configure anything.
The protocol is an evolution of NAT-PMP (RFC6886) for creating IPv4 NAT mappings, but RFC6887 supports IPv6 and "mappings" that just open incoming ports without NAT.
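As a rough illustration of how small that protocol is, here's the wire format of a PCP MAP request per RFC 6887 packed in Python (a real client would send this over UDP to the gateway on port 5351 and parse the response; the addresses here are documentation placeholders):

```python
import secrets
import socket
import struct

def pcp_map_request(client_ip: str, protocol: int, internal_port: int,
                    lifetime: int = 3600) -> bytes:
    """Build a PCP (RFC 6887) MAP request asking the gateway to open a port.

    Sketch only; field layout follows the common request header plus the
    MAP opcode data from the RFC.
    """
    header = struct.pack("!BBHI", 2, 1, 0, lifetime)        # version=2, opcode=MAP
    header += socket.inet_pton(socket.AF_INET6, client_ip)  # PCP client address
    opcode = secrets.token_bytes(12)                        # mapping nonce
    opcode += struct.pack("!B3xHH", protocol, internal_port, internal_port)
    opcode += socket.inet_pton(socket.AF_INET6, "::")       # no suggested ext. addr
    return header + opcode

req = pcp_map_request("2001:db8::42", protocol=6, internal_port=8080)
print(len(req))  # 60 bytes: 24-byte common header + 36-byte MAP opcode data
```

The whole request is a single 60-byte datagram, which is part of why PCP is so much less hairy than UPnP's XML-over-HTTP machinery.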
There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?
In my opinion the current kind of hole punching is a far better option: instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-band protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet. It's well-researched when it comes to the interplay between firewalls, NAT, and UDP[0].
And that pretty much solves the problem, to be honest. These days you only really need incoming traffic to support things like peer-to-peer video calls. Hosting game servers on your local machine is a thing of the past, everything has moved to cloud services. What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.
It's not about new or old. The devices that don't have any reason to be globally reachable never request it. New ones would do likewise.
Devices that are expected to connect to random peers on the internet are going to do that anyway. It's not any more secure for them to do it via hole punching rather than port mapping; causing it to be nominally outgoing doesn't change that it's a connection to a peer on the internet initiated by request of the remote device.
> There's no way in hell enterprise admins are going to give that kind of control to random devices, and home users aren't going to have the skills to cull vulnerable devices from their networks. So who's going to use it?
Enterprise admins can easily use it because they have control over the gateway and how it answers requests, so they can enable it for approved applications (in high security networks) or all but blocked applications (in normal networks), and log and investigate unexpected requests. They should strongly prefer this to the alternative where possibly-vulnerable applications make random outgoing HTTPS connections that can't easily be differentiated from one another.
Whether they will or not is a different question (there are a lot of cargo cult admins), but if they don't they can expect their security posture to get worse instead of better as the apps that make outgoing HTTPS connections to avoid rigid firewalls become the vulnerable legacy apps that need to be restricted.
Home users are already using NAT-PMP or UPnP and this has only advantages over those older solutions.
> instead of creating a mapping allowing essentially unrestricted access from everyone, have the app use an out-of-band protocol to allow a single incoming connection from exactly one vetted source. You get all of the benefits of a direct peer-to-peer connection with none of the risks of exposing yourself to the internet.
There are significant problems with this.
The first is that it's a privacy fail. The central server is at a minimum in a position to capture all the metadata that shows who everyone is communicating with. It's made much worse if the payload data being relayed isn't end-to-end encrypted as it ought to be.
But if it is E2EE, or the server is only facilitating NAT traversal, the server isn't really vetting anything. The attacker sends a connection request or encrypted payload to the target through the relay and the target is compromised. Still all the risks of exposing yourself to the internet, only now harder to spot because it looks like an outgoing connection to a random server.
Worse, the server becomes an additional attack vector. The server gets compromised, or the company goes out of business and the expired domain gets registered by an attacker, and then their vulnerable legacy products are presenting themselves for compromise by making outgoing connections directly to the attacker.
Doing it that way also requires you to have a central server, and therefore a funding source. This is an impediment for open source apps and community projects and encourages apps to become for-profit services instead. The central server then puts the entity maintaining it in control of the network effect and becomes a choke point for enshitification.
Meanwhile the NAT traversal methods are ugly hacks with significant trade-offs. Keeping a NAT mapping active requires keep-alive packets. For UDP the timeout on many gateways is as short as 30 seconds, which prevents radio sleep and eats battery life on mobile devices. Requiring a negotiated connection is a significant latency hit. For real peer-to-peer applications where nodes are communicating with large numbers of other nodes, keeping that many connections active can exceed the maximum number of NAT table entries on cheap routers, whereas an open port requires no state table entries.
> What's left is a handful of nerds running servers for obscure hobby projects at home, but they are more than capable of setting up a static firewall rule.
The point is to allow interesting projects to benefit more than a handful of nerds and allow them to become popular without expecting ordinary users to manually configure a firewall.
Doesn't this assume all devices are well behaved, trusted and correctly implemented? You don't think the crap Samsung Smart TV will consider itself to have a reason to be globally reachable?
Some of them could do it for no reason, but they could also have the devices make outgoing connections to the company's servers and then blindly trust any data the servers send back, even if the servers are compromised or the domain expires and falls into the hands of someone else, or the company's own servers are simple relays that forward arbitrary internet traffic back to their devices.
If a device can make outgoing connections then it can emulate incoming connections. Blocking incoming connections to devices that explicitly request them is therefore not a security improvement unless you're also prohibiting them from making any outgoing connections, because the result is only that they do it in a less efficient way with more complexity and attack surface and less visibility to the user/administrator.
But it doesn't seem to be in widespread use, does it? Like, would a random internet gateway from podunk ISP support this? I kinda doubt it, right? Pretty sure the default Comcast modem/router setup my mom uses doesn't support this.
But I guess my point was about the counterfactual universe where IPv6 was actually used everywhere, and in that universe I suppose RFC6887 might have been commonplace.
Then it starts to make it into popular internet gateways. For example, the Comcast consumer gateways generally do support UPnP, which is a miserable protocol with an unnecessary amount of attack surface that should be transitioned to Port Control Protocol as soon as possible; but it demonstrates that you can get them to implement things of this nature.
You do what has already been done for decades. You ship a client premises router that does deny by default inbound and allow all outbound and things behave pretty much the same as they do today with these exact same rules in IPv4.
Network firewalls still exist in a publicly routable network. If I want my game console to allow incoming traffic for game matchmaking, I can then do that. Or have systems that auto configure that. But then I don't have to have multiple devices fighting for a limited port range, each device has more IPs and ports than they could know what to do with.
To use your example:
> If I want my game console to allow incoming traffic for game matchmaking, I can then do that
My point is that because consumer firewalls will block traffic by default even with IPv6, the company that makes the game would not have designed it to require your console to have special configuration on your firewall, because this is not something a typical gamer knows how to do.
Instead, companies use UPnP for matchmaking, and UPnP works in both NAT and GUA environments, so what exactly does GUA give you?
You'll have even more problems if you're on CGNAT networks. You're not going to be able to get any of that traffic.
None of this is a problem if each device has its own IP address and its own range of ports to deal with. Every device can have its own :5000, it can know its public IP address without needing something outside to observe it, and with how big assignments usually are, dozens of things can all listen on a public :5000 at the same time.
In fact, in IPv6, privacy addresses mean that the address the gaming service observes from my device shouldn't be assumed to work for other peers, because my device may only be using that address for communication with the gaming service itself. Instead, authors of this software ought to understand that the console itself needs to tell the service "this is the address peers can use to communicate with me", and thus you may as well just include the port in that call, and then you don't need to assume port 5000 will work (because if I have two Xboxes, they could decide on different ports when one is already in use).
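A sketch of what that self-reported registration might look like (the field names and address are invented for illustration):

```python
import json

def make_peer_announcement(address: str, port: int) -> str:
    """Sketch: a console explicitly advertising the (address, port) peers
    should use, instead of the service inferring one from the packet source."""
    return json.dumps({"peer_address": address, "peer_port": port})

# A stable (non-privacy) address the console chose for inbound peer traffic
msg = make_peer_announcement("2001:db8:a:b::5000", 5001)  # made-up values
print(msg)
```

The point is simply that once the client is naming its own address, naming its own port costs nothing, and all assumptions about a fixed :5000 evaporate.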
It’s just I’m disillusioned with the idea of GUA’s for every device actually solving anything. It solves like 10% of the difficulty of writing good p2p software. All the other problems are still there: firewall config, dynamic IP’s, mobile IP’s, security best practices, shitty middleware boxes not doing the right thing, etc etc etc.
Does IPv6 solve it all? No. Does it solve some of it better than IPv4? Yep. Does it just completely eliminate the need for CGNAT? Yep. It is not a silver bullet to solve all problems, but it does solve some, and for that I'd much rather use it. Because I'd rather just be able to host whatever at home and not need to remember what port is what or rely on proxies looking at other info in the request.
> shitty middleware boxes not doing the right thing
You reduce the need for shitty middleware boxes. I don't need a reverse proxy. I don't need to have a STUN/TURN server. I don't need to randomize ports or worry about running out of ports.
Why solve a problem once properly in the network stack when you can add an ad-hoc workaround to each individual protocol instead?
> the console itself needs to tell the service “this is the address peers can use to communicate with me”, and thus you may as well just include the port in that call, and then you don’t need to assume port 5000 will work
And if you're behind CGNAT so your home router doesn't know what the address and port are, what then?
Not to mention the case of a multiplayer game between two people who live in the same house and a third person who doesn't. That's one that's trivial with IPv6 but difficult with every IPv4-based system I've seen.
> It solves like 10% of the difficulty of writing good p2p software. All the other problems are still there
It all helps. I mean, people mostly manage to play games with each other today, they just get disconnects or random lag every so often. Cutting down on that, even if it was only 10%, would make a lot of lives better.
> Within the port range, enter the starting port and the ending port to forward. For the Nintendo Switch console, this is port 1024 through 65535.
https://en-americas-support.nintendo.com/app/answers/detail/...
Nintendo tells me to forward all high numbered ports to my console. Because obviously we're only going to have a single one in the house.
Having globally allocated address space doesn't actually imply openness of connectability.
Not just difficult, but impossible, even in principle: because more than one device shares the same IP, at most one host could be left vulnerable. Not the same as with IPv6, where screwing up the defaults leaves your entire network vulnerable.
Tons of firewalls ship with this as a default logic, it doesn't require NAT in the slightest.
https://www.anvilsecure.com/blog/dhcp-games-with-smart-route...
They are of course commonly deployed together with a firewall that does deny that traffic, but claiming that NAT blocks connections because it's usually deployed together with a different technology that handles all of the blocking would also be lying.
That doesn’t make sense.
If I have a single routable IPv4 address and 100 machines behind it with RFC1918 addresses, how can any possible router "allow by default" say, port 22? Which machine would it route it to? Would it pick the first one? Randomly select one?
Of course NAT has to drop incoming unsolicited packets. Unless you tell it which machine to route them to, it couldn't possibly know how to "allow" them in the first place.
The only thing NAT does is rewrite the dst or src headers of packets. If there's no rule or state entry that applies to a packet, it doesn't drop the packet. It just leaves the original headers on it.
Stateful firewalls are the security tool. NAT being a mediocre-to-somewhat-alright stateful firewall "out of the box", before you add a real firewall, is an accident (and sometimes a bug). Something doing security by accident (or as a bug) isn't a security tool, just like security through obscurity isn't. You can have stateful firewalls without NAT. Everyone saying that you "need" NAT to have a stateful firewall doesn't understand firewalls, or why "firewall" is and has always been a different word from "NAT". NAT's relationship to security is that it is generally paired with a good firewall, not that it is a mediocre firewall mostly by accident.
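A toy model of the distinction, showing that the "block unsolicited inbound" behaviour needs only connection state, not any address rewriting:

```python
# Toy illustration: a stateful firewall needs no NAT at all. Outbound traffic
# creates a state entry; inbound traffic is allowed only if it matches the
# reverse of an existing entry. Addresses pass through untouched.

class StatefulFirewall:
    def __init__(self):
        self.states = set()  # (local_addr, local_port, remote_addr, remote_port)

    def outbound(self, src, sport, dst, dport):
        self.states.add((src, sport, dst, dport))
        return True  # allow all outbound, remembering the flow

    def inbound(self, src, sport, dst, dport):
        # Allowed only if it's the reverse of a flow we've seen leave
        return (dst, dport, src, sport) in self.states

fw = StatefulFirewall()
fw.outbound("2001:db8::10", 51000, "2001:db8:ffff::1", 443)
print(fw.inbound("2001:db8:ffff::1", 443, "2001:db8::10", 51000))  # solicited
print(fw.inbound("2001:db8:abcd::9", 22, "2001:db8::10", 22))      # unsolicited
```

NAT needs the same state table to do its rewriting, which is exactly why it accidentally behaves like this; but the security property lives in the state check, not the rewrite.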
Are you not doing that already? If you trust whoever else happens to be on the same wifi in the cafe you're a braver man than I.
Of course not, that's not my point. My point is that because your home router still firewalls with both IPv6 and IPv4, any software which relies on being able to "just" connect to a peer over the internet is doomed. Our networks don't work that way any more (they probably did in the early 90's, though.)
My point is that even if we had global routability, we still wouldn't have open connectability, because open connectability is a stupid idea. Which means any software ideas people might have that rely on connectability, are already a non-starter. So why do we need open routability in the first place? (Honest question. This is the crux of the issue. Yes, open routability means you can have a host listen on the open internet, but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.)
With IPv6 the only thing you need is PCP (or equivalent).
With IPv4 you need PCP/whatever plus a whole bunch of STUN/TURN/ICE infrastructure.
Just hole punching is a lot easier to support than more-than-just hole punching.
I'd say the biggest practical objection (not just "NAT is ugly" or "DHCP is ugly" or "NAT is evil since it delayed IPv6") is CGNAT, which really does put a lot of restrictions on end-users that they can't circumvent. The more active hosts stuffed behind a single NAT, the more they have to compete for connections.
> but fewer than 1% of people know how to configure their home firewalls to do this, so it's effectively not possible to rely on this being something your users can do.
And a chunk of that 1% are on WANs that they aren't authorized to configure even if they wanted to.
| | IPv4 | IPv6 |
|--------------+-------------------------------------+-------------------------------------|
| With UPnP | Unsolicited connection goes through | Unsolicited connection goes through |
| Without UPnP | Unsolicited connection blocked | Unsolicited connection blocked |
It's exactly the same in IPv4 and IPv6, just like FeistySkink was suggesting.

I thought the problem was lack of upload bandwidth for almost all households without fiber to the home. Surely, if people had symmetric broadband internet at home, then there would have been commercial solutions to allow devices to connect to each other, thereby decreasing the need for giant cloud providers.
"All devices are routeable" is a good idea, because it means when we want devices to be routeable there's a simple and obvious way that should be done.
Where we've ended up with NAT, though, is worse: we still need a lot of devices to be routeable, and so we enable all sorts of workarounds and non-obvious ways to make that happen, giving both an unreasonable illusion of security and making network configuration sufficiently unintuitive that we're increasing the risk of failures.
Something like UPnP, for example, shouldn't exist (and in fact I have it turned off these days because UDP hole-punching works just fine, but all of this is insane).
Just look at the distribution of successful attacks in 2000 and in 2025.
Yes, and not only for the NAT reason you say; see "The World in Which IPv6 Was a Good Design" by Apenwarr, from 2017: https://apenwarr.ca/log/20170810
> We don't _want_ end-to-end connectivity.
YOU don't want that.
The wish to not have all devices connected to the internet does not defeat the need for a protocol that allows it. NAT is merely a workaround that introduced a lot of technical debt because we don't have enough IPv4 addresses. And having IPv6 does not mean that you must connect everything to the internet.
IPv6 is good and absolutely necessary. NAT is really expensive and creates a lot of problems in large networks. Having the possibility to just hand out routable addresses solves a lot of problems.
There's a kind of holy war, where people pick sides, and if you're "team IPv6", you look for any post that kinda vaguely smells like it defends IPv4, call them ignorant, and respond with the typical slew of reasons why IPv6 is better, NAT is a hack, you can still firewall with IPv6, etc. If you're "team IPv4" you look for the IPv6 posts and talk about how NAT is useful, IPv6 should have been less complicated, etc etc.
It's really tiresome.
Personally, I'm not on either "team". These things I believe:
- We need more addresses
- IPv6 is as good a solution as any, as no solution was going to be backwards compatible.
- We should get on with it and migrate to IPv6 as soon as possible. What's the hold up? I'm sick of running dual stack.
- But don't throw away NAT, please. I want my internal addresses to be stable and if possible, static and memorizable (An fd00::1 ULA [0] is even easier to remember than 192.168.1.1! And it won't change when my ISP suddenly hands me a new prefix.)
- But CGNAT is The Devil. Don't conflate my desire for using NAT at my house for some desire to be behind CGNAT. Again, we should get on IPv6 ASAP to banish CGNAT back to hell.
- Oh and don't pretend that IPv6 would have led to a more decentralized internet. A client/server model is inevitable for so many reasons, and true peer-to-peer where customers just communicate directly with one another is not a realistic goal, even with IPv6. Even with PMP/PCP. Even with DDNS. You just can't design software around the idea that "everyone's router is perfectly configured."
[0] No, I don't actually use fd00::/64 as my ULA prefix, I actually put my cellphone number in there :-P. The point being it's as memorizable as you want it to be.
True.
>IPv6 is as good a solution as any...
There aren't any other practical solutions. Debatable if it was the best solution but it is what it is.
>We should get on with it and migrate to IPv6 as soon as possible
Most LANs will run IPv4 forever. IPv6 offers nothing to LANs larger than home labs (where SLAAC is not enough) but smaller than huge companies (where there are more than enough addresses even in IPv4; 240/4 is for every practical purpose private). The community's hostility to NAT/NPT means LAN admins will get it de facto by running another stack internally.
>I want my internal addresses to be stable and if possible, static and memorizable.
Right. But ULAs have ridiculous routing rules. It would take at least a decade for fixes to be deployed everywhere.
>Don't conflate my desire for using NAT at my house for some desire to be behind CGNAT.
Right. But most users don't care.
> A client/server model is inevitable for so many reasons.
Right.
Could you elaborate? I’m able to run ULA-only just fine, with one line in my pf.conf to route 1:1 to my GUA prefix. The NAT is stateless.
Maybe you’re referring to the fact that typical OS’s will prefer to use IPv4 over a ULA if both are available? I’ve noticed that, and indeed it’s unfortunate.
Yep. There's an RFC for this. I guess it would take a decade to be everywhere.
The Global > Local rule is also unfortunate in some configurations*, but that's more of a result of the end-to-end idea and software not realizing a device can have more than one address...
* One example from this thread: https://news.ycombinator.com/item?id=43070290
But for certain, today if we had IPv6 everywhere, it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device. That software would only be usable by people who know how to configure their router, and that doesn’t make for very good market penetration. You still need something like UPnP, so you’re basically right back to where we are today. At least the connection tracking would be stateless, I guess?
I have two SIP phones both wanting to register port 5061 to forward to their address. How does this work with IPv4/UPnP?
- UPnP or something like it
- A place to register your address, since they change all the time even in IPv6
And if you need these two things anyway, you can do this with IPv4, with the added change that you also include the port when registering your address, which makes the “multiple phones in a network” thing work.
Like, say, a SIP client? Wanting to listen on the standard SIP port?
Or say two web servers that both want to listen on :443 and you don't want to have to reverse proxy them from another box.
> UPnP or something like it
> https://datatracker.ietf.org/doc/html/rfc6887
Port Control Protocol seems to be pretty well supported on most of the devices I own even in IPv6.
And in the end I'm pretty boned if I'm on CGNAT usually. I can pretty much never get my Switch to do matchmaking with peers on IPv4 when I'm on a CGNAT network. If we were all on IPv6, it wouldn't be a problem.
No, not like, say, a SIP client.
You seem pretty intent on not reading my whole comments or something. Mind responding to what I’m saying and not just making up your own arguments?
I’m saying “ipv6 alone won’t solve X”, and you’re saying “but what about Y?”
So once again, how do two devices both share port 5061 on a UPnP NAT with a single public IP address? Or even worse, if they're CGNAT'd? It's a simple answer with IPv6...
IPv6 was designed in a world where we thought we’d have a truly peer-to-peer internet where anyone could just talk to someone else’s device. This obviously isn’t what happened. An ipv6 proponent may say “this is because NAT breaks the necessary assumptions”, but that’s extremely oversimplified and wrong.
The reason that, when I use my iPhone to FaceTime my mom’s iPhone, it doesn’t just use SIP to contact her phone, isn’t because IPv4 and NAT. It’s because the very idea of that is nearly impossible to implement at scale even if everyone had a unique address.
I’m aware you can’t have multiple devices behind a NAT address listen on the same port. Thank you for pointing that out, you’re very smart. But it really doesn’t address my point at all, does it?
My point being that the reason we don’t use SIP and related technologies for billions of end user devices today, isn’t because of NAT. It’s because the myriad of other problems that would need to be solved for it to work reliably, and because of NAT. Eliminating NAT wouldn’t really meaningfully change the landscape of how software works. FaceTime calls would still go through Apple servers. People would still use centralized services run in real data centers, etc.
Ah, finally, you do acknowledge that there are issues that aren't actually solved with IPv4 and NAT. Thanks.
> But it really doesn’t address my point at all, does it?
Let's go back to the first thing I was responding to.
> it still wouldn’t let you design something like a VoIP phone (or a pure peer-to-peer version of FaceTime) that just listened for connections and let other people “call” you by connecting to your device.
With having a public IP address, PMP enabled on my router, and DNS registration I have this today. My SIP phone here is sitting by, waiting for incoming calls. I don't need Apple's servers for this. Anyone with a SIP client and who knows the name can get to it (or trawls IPv6 space). This becomes a headache with IPv4 with multiple devices all wanting that 5061 port. Sure, one could just also tell people the port and have lots of random ports assigned, but that's just yet another bit of information to get lost, one more mapping to maintain, etc. Imagine if Amazon ran their ecommerce site off a random high number port instead of :443, think they'd get as much traffic?
That’s cool for you, congratulations.
I’m talking about the broader internet at large, and the billions of users on it, and the software that is written for these billions of users. This software does not behave the way your cool phone does. Because people’s IP’s change. They’re behind firewalls they are unable or don’t understand how to configure. We don’t write software this way for a lot of good reasons.
Now, I keep talking about this, then you bring up “but I have this use case, what about that?” Demanding an answer as if I give a shit that you want to run a SIP phone at your house, and you’re willing to configure your firewall, etc.
My whole point (!!!) is that what you’re doing is not what the billions of people using the internet are doing. When I bought my mom an iPhone, I didn’t have to tell her “ok so, FaceTime only works at your house, you have to reconfigure this DDNS service when your IP changes, you have to configure your router this way, and don’t ever leave the WiFi network or it won’t work, but thank god for ipv6, because it means dad can do all this too for his phone!” No. Because FaceTime doesn’t work that way, nor could it ever, because doing true peer to peer calling was never a thing that would have ever worked at scale. It’s not being held back by NAT. It’s being held back because it was never gonna work in the first place.*
I didn't have to configure the firewall, that happened automatically with PMP. That thing you didn't even know existed until a few hours ago.
But somehow you know all that is and is not possible. All that could have been if things were different.
I mean if you’re just going to sling ad-hominems around and invent your own argument so you have a chance at winning it, sure. I’m not aware of the specific RFC’s du jour that are used for auto-configuring routers. But after spending some time looking it up, it seems relatively niche (not as widespread as UPnP), and it appears the last 3 routers I’ve owned don’t support it. (I run an openbsd box with pf currently, but my unifi gateway before that didn’t support it, and the shitbox gateway Comcast gave me before that certainly didn’t support it. I could run a service on my openbsd box to support it but it wouldn’t make a difference because I’m perfectly capable of editing my pf.conf as it is.)
But if you’re gonna dig up other posts of mine to try and get a dig in at me, maybe at least read the rest of the post? You certainly haven’t done a good job of that so far.
Because I indicated that of course a protocol like this ought to exist, but what percentage of users of the broader internet are actually running a router that supports this? If you wanted to come out with a video calling app that only worked if your users had PMP working, what market share would you be losing out on? That’s the topic of discussion here after all (not that that’ll stop you from inventing your own discussion as you’ve continued to do.)
Funny way to say you don't know what you're talking about. If you don't know what technologies are actually out there how can you say what is and is not possible?
> but what percentage of users of the broader internet are actually running a router that supports this?
Most who aren't running their own home rolled pf setup that can't be bothered to read about "the latest" (over a decade old) RFCs.
And yeah, last I ran a UniFi gateway it supported PMP. That was several years ago. If it's a recent model device with halfway decent IPv6 support it's practically a sure thing it's got PMP support. Maybe disabled, but it supports it. Same with UPnP, might be disabled, but probably supported.
(I’ll leave you to ponder whether this changes my point at all, but I don’t think it really matters. You seem to be pretty fixated on getting your digs in, so maybe we’ll just end the discussion here. Good day.)
> Sick burn bro
> My post was basically the biggest pile of sarcasm I could conjure and you still took it seriously, congratulations!
> That’s cool for you, congratulations
> not that that’ll stop you from inventing your own discussion
> I don’t think I’ve met someone that truly thinks technological progress stopped in the 1990s and that URL’s and DNS are all we actually need.
And yet you accuse me of ad hominems for pointing out your acknowledgement of not knowing of the last decade+ of networking tech, and that I have a need to feel superior. Pot, meet kettle, buddy.
You've spent so much effort telling me what's possible or not, berating me multiple times, while acknowledging you haven't kept up with decade+ old tech.
The topic of discussion right now is peer-to-peer software, which is the quintessential thing that is always lauded by ipv6 proponents as the killer app of ipv6, because each device has its own globally-routable IP.
But “what address do I send these packets to?” is like, 10% of the issue with designing p2p software. Users ip’s are always changing. They move around. They go behind firewalls that will block their traffic. They go behind firewalls they don’t have permission to reconfigure. This is the case in IPv4, and it will always continue to be the case even if we were 100% IPv6.
IPv4 can work today for p2p use cases if you don’t make the assumption that the port is static. But that’s like 1 of N assumptions you have to check if you’re writing peer to peer software. The other N of them are all still issues, even in IPv6.
So wouldn't it be nice to go ahead and solve that 10% of the problem instead of just limping along with it?
If you can avoid an explicit port, that can be nice. But anywhere you're using an IP, v6 without a port is longer than v4 with a port. An implicit port is only useful when you're using DNS. (And if you're designing a service, you can choose to use DNS records that contain the port.)
It gives you a public dynamic IP + port combination if your network is NAT1 all the way to the internet. Both the IP and port being dynamic kinda complicates things: they ended up inventing a new DNS record type, "IP4P", where the IPv4 address and port are encoded as an IPv6 address, and modified WireGuard/OpenVPN clients are required.
We are supposed to solve this using SRV records, but I don't think many consumer-facing apps can do this.
Instead, we're locked into proprietary platforms or relying on old phone systems.
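As a sketch of the SRV approach mentioned above (all names here are placeholders), each record carries priority, weight, port, and target, so the port no longer has to be implied by the service:

```
; hypothetical zone fragment: a client looks up _sip._tcp.example.com
; and learns both the target host and its port, so two phones behind
; the same name can advertise different ports
_sip._tcp.example.com.  3600 IN SRV 10 60 5061 phone1.example.com.
_sip._tcp.example.com.  3600 IN SRV 10 40 5062 phone2.example.com.
```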
No, there’s a reason software like FaceTime, and calling in general, are mediated by centralized services and implemented as client-server. Because relying on your users to have stable IP’s, and to always be behind a properly configured firewall, would be an utterly insane way to design something.
Hence “ipv6 is a relic of a bygone era”, because in the era we live in now, IP addresses change and users move between networks constantly, and NAT is very much the least of our problems.
You're pretty much never getting a new IP at every cell tower.
> Then I get to my friends house and his router is gonna block unestablished connections so you better call me soon.
PMP can solve that.
> Because relying on your users to have stable IP’s
Not a stable IP, but some kind of stable identity. Like a DNS name. That can be provided by someone like Apple, or Google, or whoever. It doesn't need to be ultra-centralized by only a single thing controlling it.
I never said this. You're putting some rather strong words in my mouth. Just giving an example of something that could have been if it wasn't for NAT and an acceptance that you need a third party platform to allow you talk. Feel free to change out the pieces to whatever.
Personally I'd like to be able to just dial a FaceTime-like client from any device instead of just what Apple blesses with accounts Apple allows. And have that stack just be the norm. I understand NAT isn't the only headache that prevents this, but it is one of the several that does.
I don't think it's crazy to think easy dynamic DNS services could have become common if advertised right. People don't know how phone numbers work but in the end they know how to use them. The tech for routers to auto configure firewall rules based on client requests exists, and potentially if everyone-has-a-public-ip was common we'd have seen less reliance on brittle edge firewalls and NAT to provide so much of our security.
Killing NAT doesn't solve all the problems, but it does solve some of them. And I'd prefer solving the ones that help give users more freedom instead of accepting things like CGNAT.
Routers have a stateful firewall meaning by default they only allow incoming packets that belong to a connection that was initiated from the inside. By default you also have this kind of firewall on every operating system. NAT adds 0 additional security.
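For illustration, the default stance described above might look like this as a hypothetical minimal nftables ruleset (not a complete config):

```
# drop inbound traffic unless it belongs to a connection that was
# initiated from the inside -- no NAT involved anywhere
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept
        ct state established,related accept
    }
}
```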
What I’m against is the oversimplification that if we only had ipv6 from the start, that the internet would have turned out very different and we’d be directly connecting to each other through direct unsolicited connections or something.
No, there are a lot of reasons direct unsolicited peer to peer communication are not a good expectation to have on the modern internet. NAT makes it tough, yes, but so does security (you’re gonna want to firewall your network at the gateway anyway, even in v6), changing addresses, mobility, etc.
For instance, even in my network at home, I don’t think using my real GUA prefix on each device is a great idea. Because the prefix changes, I have to use ULA for any static configs (dns, etc), which means the GUA address is this extra address I don’t use, and it only complicates things. So I’m moving toward ULA-only with NAT66, which I think is a more sane choice. I get the benefits of lots of addresses, which is great, but instead of my firewall simply allowing traffic to my public stuff, I just NAT a public IP to each public thing I want.
I don't think any reasonable engineer expects that that end would ever be the case, even if it was one of the stated original "dreams" of IPv6.
Then imagine my frustration when that's the exact argument people are giving: https://news.ycombinator.com/item?id=43072844
Like, I come up with these scenarios as a strawman to illustrate that direct client-to-client connections aren't going to work, and people come and say "actually it totally could work! We have PCP, and DDNS! Cell towers have mobile IP's too so you're only going to change IP addresses 3 times on your drive to your friend's house! And his router will also support PCP so it could have totally worked this way!"
I literally came up with the example to sound as crazy as possible, and people still say "Yup, that's exactly how things could have worked if we had IPv6. Now look at how stupid ninkendo is for not knowing about PCP, the moron~"
Like imagine if steve jobs came on stage in 2011 to introduce FaceTime, and said "You can make video calls to other people on iPhones, it's great! But you have to know their IP address. Or subscribe to a DDNS service. Make sure to use really low TTL's on your DNS records in case you roam to a different network. Oh and you have to have a router which supports PCP. Every router you connect to must support it." Even in a world where IPv6 was everywhere, that would be insane. (Well, other commenters seem to think that's exactly how phones ought to work? Maybe I'm the crazy one?)
(It does rely on state tracking, which it shares in common with stateful firewalls, but you could run a LAN with either a stateless firewall or no firewall if you really wanted to meet the "nothing in common with a firewall" requirement.)
No, this isn't right. Instead of having IPv6 we ended up extending IPv4 by using port numbers as extra address bits and using NAT. This transition was painless (for the most part), and didn't require building a second internet.
In a better and saner world we could have got roughly the same thing - except conceptually simpler and standard - instead of IPv6.
In reality we just really do need more IP’s. 32-bit addresses with 16 bits of ports isn’t enough.
Sadly, that's a feature, not a bug.
Two layers of NAT sucks, but I guess it could have been avoided if a designed (instead of ad-hoc) solution to the problem was devised.
> And there are more internet subscribers in the world than there are IP’s for them
Is this really true, did somebody actually run the business math? "Internet subscriber" roughly corresponds to "household", not to "person".
Obviously you can use (just) IPv6, e.g. GitHub.
https://labs.apnic.net/index.php/2024/10/19/the-ipv6-transit...
Adoption will slow down and my guess is that we'll be stuck with a long tail forever, and only time will tell if that happens at 50% adoption or 90% adoption.
But that long tail will probably look like figure 4 of that link, only reversed: IPv4 running mostly as tunnels over a core IPv6 network. Some large cellphone networks are already at that stage, using things like 464XLAT to run IPv4 with NAT over an IPv6 network.
BTW: Contact info is in my profile if anyone wants the inside scoop. The project just isn't fully ready for public launch yet, but we're well into the cooking.
Depending on what that dream is, it may already be a reality in the upper layers (webrtc, websockets, quic/http3, etc). This is why not many really bother if IPv4 will go away?
That said, a lot of posts here don't seem to reckon with the fact that a slim majority of www.google.com connections in the United States are via IPv6, and a super-majority from India, Germany, and France. Comcast, T-Mobile, Verizon, as far as I have experienced, these all default to IPv6. While dropping IPv4 support is both a worthy, distant goal and sometimes used in goal-post moving rhetoric, it's not like nobody uses IPv6...rather, mobile broadband networks have depended on it for over a decade (see T-Mobile's deployment of 464XLAT)
There are multiple v6 prefixes already allocated for containing the v4 space. It would help if all ISPs ran NAT64 routers on the standard NAT64 prefix, but e.g. how would that function with software that only works with v4 addresses?
On an ipv6 only device if I type "ping 1.1.1.1", I would expect that to be translated by the OS (not the program) to 64:ff9b::1.1.1.1, rather than relying on DNS64 to do it.
Then at some point upstream it would go via a nat64 gateway to reach that legacy IP.
This should have been in OS stacks 20 years ago.
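The translation being described is just arithmetic on the NAT64 well-known prefix; a minimal sketch using Python's stdlib (the function name is made up for illustration):

```python
import ipaddress

# Synthesize a NAT64 address by embedding an IPv4 address in the
# well-known prefix 64:ff9b::/96 (RFC 6052): the v4 address simply
# occupies the low 32 bits of the v6 address.
NAT64_PREFIX = ipaddress.IPv6Address("64:ff9b::")

def nat64_synthesize(v4: str) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address(int(NAT64_PREFIX) | int(ipaddress.IPv4Address(v4)))

print(nat64_synthesize("1.1.1.1"))  # 64:ff9b::101:101
```

A DNS64 resolver does the same computation when it fabricates AAAA records for v4-only names; doing it in the OS stack (as suggested above) would cover literal addresses too.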
You can try this on a vanilla macOS install if the router advertisement hints at a NAT64 prefix (RFC8781) and you either don’t run DHCP4, or your DHCP4 server lists the IPv6-preferred option (RFC8925). I think they’ve also added a check to query a well-known ipv4-only hostname on DNS, and if it gets a AAAA back and the connection works, it knows it must be behind a working nat64 setup, which obviates the need for the router advertisement to have the PREF64 option.
I tried this a year ago and it worked great, macOS skips DHCPv4 and gives itself a single 192.0.0.1/32 address so that legacy apps can “see” an IPv4 interface, and any packets sent from it are translated to IPv6 and sent to the nat64 prefix. So “ping 1.1.1.1” works fine. I stopped using this though, because I have a garage door opener (ratGDO) that only works with ipv4, and my nat64 prefix can’t “hair-pin” the packet back to the LAN address… it basically only works if everything on your LAN is IPv6, which my garage door is not.
I think android supports this as well, but I’m not sure about vanilla Linux, I think you need to install a clat daemon to do the translation.
It did take way too long for this to work though, as you point out.
That's the point. It should have been in from the start, making it normal way to do anything.
> I think android supports this as well, but I’m not sure about vanilla Linux, I think you need to install a clat daemon to do the translation.
So still not the norm. Will take another 20 years to get rid of equipment being installed now which rely on an ipv4 network.
Basically every server program ever supports the notion of "which IP address am I supposed to bind to?" Normally the answer is either "localhost" or "whatever my LAN or WAN address is, for a particular interface". And specifying this (both for servers and for clients) is much less effort than replacing all the standard-ish ports with ad-hoc ones.
Just assign multiple IPv6 addresses to your loopback interface and everything will work as expected. You can use either private IPv6 or even the link-local IPv6 range.
There’s no need to waste thousands of addresses on this use case.
As opposed to IPv4, where everything Just Works™.
Because parent is right, having multiple loopback addresses is just so nice for a lot of things.
Yeah would hate to run out
IPv6 is still the only game in town as a successor technology to IPv4, and it makes no sense to invent some other new thing.
The real question at hand, is do we keep trying to make IPv6 happen, or do we give up and just deal with IPv4 until the end of time. (It’s not an obvious answer. L7 load balancing and IP anycast makes it so you can basically run an entire tech company on a single IPv4 address. CGNAT is becoming more and more commonplace. The pressure on IPv4 availability is alleviating all the time. Maybe 4 billion addresses will ultimately be enough for humanity, who knows.)
But I mean it when I say that if we continue as is and just sorta gradually, grudgingly add IPv6 on a best-effort basis, it’ll likely be 100 years or so before we can actually turn off IPv4.
Clearly you have gotten some unfortunate hands-on experience with those edge cases. In my uses the dual-stacking of IPv4 & IPv6 has been a big benefit, in that I have to worry a lot less about locking myself out of systems: I can always reconnect and correct configuration mistakes on the second stack.
Comparing the IPv4+IPv6 dual-stack story to the one from the 90s of IPv4 with IPX/SPX (Novell Netware) and/or NetBIOS (Microsoft Windows), the current state is a lot more smooth and robust.
I've so far only run into three issues over the years (2013 till now): a local train operator running IPv6 on routers that didn't properly support non-default MTU sizes (fragmented packets getting dropped), GitHub still being IPv4-only, and Microsoft Teams recently developing an IPv6-only failure on a single API endpoint.
Should we ever turn off IPv4 during my working life-time, I hope we have at that point introduced a successor to IPv6, so I can keep using a dual- or triple-stack solution, using different network protocols for robustness.
Partially due to companies like Metronet whose focus is on growing rather than doing what it should be doing.
See apenwarr's by now nearly a decade old blog post "The world in which IPv6 was a good design": https://apenwarr.ca/log/20170810, previous discussions of it here: https://hn.algolia.com/?query=The%20world%20in%20which%20IPv..., as well as the follow up blog post here: https://apenwarr.ca/log/20200708, previous discussions here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
And the issues with IP (and by extension TCP; ignoring the fundamental results from the Delta-T research at Lawrence Livermore keeps biting us all in the ass), whether IPv4 or IPv6, go even deeper, far deeper, than what that blog post already tells us. So here, have this polemic for dessert (flawed in some minor aspects, which makes CCIEs bury their heads in the sand of denial about its deeper point): https://web.archive.org/web/20210415054027if_/http://rina.ts...
Where did I give that impression? I tried my hardest in that post to not make a judgement call one way or the other as to whether it was a good design, only that dual stack fucking sucks.
My followup post in fact, totally agrees with you? https://news.ycombinator.com/item?id=43070286
No, it's not. SLAAC and NA make it a totally different beast.
If IPv6 only had DHCPv6-PD, it would have been "just like IPv4 with longer addresses".
How many ways to be assigned an ipv6 address are there anyway? Two or three too many?
Why should the ISP know what devices I have behind their router?
Considering the amount of enterprise-ish thought that went into ipv6, they thought preciously little of privacy, for example.
The existence of privacy addresses suggests that some thought was put into this.
The prefix is also not delegated to that router; the router does ND proxying, and the ISP only routes the IPs whose corresponding NAs are recorded in its database.
If your ISP runs the default router for your own LAN, they'll have full visibility into it on both v4 and v6. That's just how IP works.
Most ISPs I have seen implement it like this. A large chunk of those "most" also require you to bind a phone number to each separate hwaddr appearing in the network via SMS. (Not all, though.)
Those few that implement it differently, do the following:
They serve ULAs to the customers over SLAAC, and NAT all the ULAs to a single IPv6 address assigned to the router (actually a wifi hotspot).
I totally believe that where you live things are done differently, but this is exactly why ipv6 critics call it defective. It allows too large a variety in implementations.
But shitty or not, you have to somehow convince them, incentivise them to deploy ipv6.
Broadcast was renamed to "all nodes multicast". ARP was renamed to Neighbor Discovery.
Slight improvements: ND isn't broadcast, but multicast based on several bits of the IP address. This allows NICs to filter most of the irrelevant ones based on multicast MAC address. And subnet broadcast addresses were removed. There's only local broadcast to your own subnet and not to someone else's subnet, since IPv4 routers found that to be a bad idea and mostly started blocking it anyway.
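The "several bits of the IP address" above is the low 24 bits, which get folded into the solicited-node multicast group; a small sketch with Python's stdlib (function name is made up):

```python
import ipaddress

# Solicited-node multicast (RFC 4291): ff02::1:ff00:0/104 plus the low
# 24 bits of the unicast address. ND queries then only reach NICs that
# subscribed to that multicast group, instead of every host on the link.
def solicited_node(addr: str) -> ipaddress.IPv6Address:
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(
        int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

print(solicited_node("fe80::1234:56ff:fe78:9abc"))  # ff02::1:ff78:9abc
```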
Your cellphone company uses it - or would like to.
It's like SCTP: just because you don't use it doesn't mean there isn't a big group of people who do.
there is nothing really wrong with the design of ipv6 relative to ipv4
Having two internets suggests that it's totally different, in the sense that there is no address space overlap between IPv4 and IPv6, an inherent design flaw of the IPv6 protocol. Just to provide the archetypal links:
This was documented about 25 years ago by DJB:
https://cr.yp.to/djbdns/ipv6mess.html
And has been repeatedly discussed on HN:
https://news.ycombinator.com/item?id=10854570
As you mention, IPv6 has existed for the majority of the commercial internet's history; now, 25 years later, it's still not the default transport protocol.
It _was_ possible to create an address space extension that was IPv4 backwards compatible, this option was just not chosen, and now we're still dealing with the dual stack mess.
It is not a design flaw of IPv6, it is a limitation of the laws of physics: how can you fit >32 bits of data (IPv6) into 32-bit data structures (IPv4's struct in_addr)?
Even if the IPng protocol could recognize IPv4 addresses and send out data, how could a host which only understands 32-bit addresses send a reply back to the IPng host? If a packet with a >32b address comes in, how would a 32b-only host be able to handle it?
From Bernstein:
> In other words: The current IPv6 specifications don't allow public IPv6 addresses to send packets to public IPv4 addresses. They also don't allow public IPv4 addresses to send packets to public IPv6 addresses.
How would a non-updated IPv4 system know about the IPng protocol? If a non-updated router had an IPng packet arrive on an interface, how would it know how to handle the packet and routing?
All the folks criticizing IPv6 never describe how this mythical system would work: how do you get a 32-bit-only system to understand more than 32 bits of addressing? Translation boxes? Congratulations, you've just invented a 6rd BR or NAT64, making it no different than IPv6.
The primary purpose of IPng was a bigger address space, but how would that work without updating every single IPv4 device? You know, like had to be done for the eventual IPng, SIPP (aka IPv6).
Maybe you don't. I imagine it would work like this (where XXXX::<IPv4> is the transition block mapping the current Internet v4 into IPv6):
- 32 bit host could only talk to other 32-bit hosts, and other XXXX::<IPv4 Adress> hosts. We have this now, just without the XXXX::<IPv4> part.
- Hosts with v6 addresses outside of XXXX::<IPv4> could not talk to IPv4 addresses without having a 6to4 proxy. We also have this now.
- Hosts with v4 addresses could switch to IPv6, KEEPING their current v4 address by setting their IPv6 address to XXXX::<IPv4>. They can now talk to all the hosts they used to be able to, AND they can start talking to IPv6 hosts, without having to have two IPs, a dual-stack config, etc.
So we end up with a significant benefit of allowing people with IPv4 addresses and IPv6 connectivity to do an IPv6-only setup.
In my case, I simply don't see us transitioning to IPv6. Our main service has an IPv4-only address and we receive 0 complaints about it. We've literally never had anyone say they couldn't connect to our services because of it. Our users are geographically located in the central US, and everybody has IPv4s. Maybe they have an IPv6 as well, but if we went v6-only I can basically guarantee that we'd have users screaming at us. We'd probably have lawsuits over it. But going v4-only, not a peep.
Oh, you mean like IPv4-mapped addresses (::ffff:0:0/96) as defined in RFC 4291 § 2.5.5.2, or the NAT64 well-known prefix (64:ff9b::/96) as per RFC 6052. See also 6to4 in RFC 3056, dating back to 2001.
Your idea is not new.
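Both of those existing mappings are directly visible from Python's `ipaddress` module, for example:

```python
import ipaddress

# RFC 4291 IPv4-mapped addresses live in ::ffff:0:0/96; the embedded
# IPv4 address is exposed directly.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1

# The NAT64 well-known prefix (64:ff9b::/96, RFC 6052) embeds IPv4 the
# same way, in the low 32 bits.
nat64 = ipaddress.IPv6Address("64:ff9b::192.0.2.1")
print(nat64 in ipaddress.IPv6Network("64:ff9b::/96"))  # True
```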
Every time IPv6 comes up, some number of people talk about a protocol that 'just' added more bits to IPv4 and called it a day. But when you go over the details of how that would work, exactly, you end up in the same place as IPv6 (and its various 'transition mechanisms').
But there is no 'just' adding more addresses: you need to update (at least) end-hosts, update DNS, have relays/proxies if the middle of your network does not support IPng, have tunnelling to/from those special translation systems. It's basically the same thing as IPv6.
Certainly you can argue about keeping ARP versus going with NDP. Or not having SLAAC—but then you need infrastructure like DHCP (which is optional with IPv6).
By using nat46 on ISP's routers.
> By using nat46 on ISP's routers.
I do not understand.
In Ye Olde Times a system would call gethostbyname() and this sent out a DNS packet asking for the IP address. Of course the DNS record returned would be an A record, which is where your first problem is: A records are fixed at 32 bits (RFC 1035 § 3.4.1). So your first task is to create a new record type and update every DNS server and client to support the longer addresses.
Of course the longer address has to fit into the data structures of the OS you called gethostbyname() from, but gethostbyname() is set to use particular data structures. So now you have to update all the client DNS code to support the new, longer addresses.
Further, gethostbyname() had no way to specify an address type (like if you wanted the IPv4 address(es) or the IPng one(s)):
* https://man.freebsd.org/cgi/man.cgi?query=gethostbyname&manp...
The returned data structures only allowed for one address type, so you either got IPv4 or IPng. So if you wanted to be able to specify the record/address type to query, a new API had to be created.
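And that is exactly what happened in our timeline: getaddrinfo() replaced gethostbyname() and takes an address family. A minimal illustration (numeric hosts are used here so no actual DNS lookup is needed):

```python
import socket

# getaddrinfo() lets the caller ask for IPv4, IPv6, or both, unlike
# gethostbyname(), which was hardwired to one address structure.
v4 = socket.getaddrinfo("127.0.0.1", 80, socket.AF_INET, socket.SOCK_STREAM)
v6 = socket.getaddrinfo("::1", 80, socket.AF_INET6, socket.SOCK_STREAM)

print(v4[0][4])  # ('127.0.0.1', 80)
print(v6[0][0] == socket.AF_INET6)  # True
```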
Luckily the next step, setting up a socket() and connect()ing is less of a burden, because that API already supported different protocols:
* https://man.freebsd.org/cgi/man.cgi?query=socket&manpath=Fre...
But if you have a host that supports IPng while its upstream router does not, how does a packet with a longer source address and a shorter destination address get through?
Do you send IPv4 packets with embedded IPng packets within them to translation relays, perhaps? How is this any different from IPv6 transition mechanisms like Teredo?
If software has hardcoded ipv6 addresses, this is much harder to solve, but I don't think this is really an issue, because hardcoded ipv6 addresses appear very seldom.
But that means it has to intercept all DNS requests and doesn't work with anything it didn't intercept, which seems far from ideal.
But those don't have to be globally routable.
>it needs a big pile of ipv4 addresses to dynamically allocate whenever a new AAAA record comes through.
This wasn't really a problem until doh/dot became a thing.
But even nowadays doh and dot are provided by whom? By google and cloudflare who are tls-mitm'ing half of the internet anyway.
Moreover most ISPs intercept dns to display those "please enter your phone number to verify your identity" screens anyway.
>far from ideal.
No more imperfect than NAT. Idealism is what is killing ipv6.
All this cgnat46 setup is not in the name of end users, because they will always adapt somehow, as reels have a huge force of attraction.
It is to make sure that service providers can deploy ipv6-only _servers_ and not bother that 60 percent of their target audience will be unavailable.
The issue with the ipv6 internet is not the end users, they are paying, so the ISPs will always adapt to their needs, or the users themselves will buy new equipment in the worst case, it's the service providers who stop being profitable by going ipv6-only.
Worse is better.
Moreover, we really mostly need to make it work on http, that's enough to say "you can watch YouTube", but also deficient enough to stimulate ipv6 transition.
>given how many people here for example don't run their ISPs DNS..
Those people usually know how to deal with ipv6.
>Just seems massively complex and very resource intensive process.
No more complex than hanging in limbo for 30 years.
You mean like:
* https://en.wikipedia.org/wiki/IPv6_transition_mechanism#464X...
If an isp already has full ipv6 deployment, accommodating for ipv4 is easy.
The biggest issue is, surprisingly, not ISPs, but service providers.
How? People keep claiming this, but I've yet to see a coherent design (either back then or today). Do you think they broke backwards-compatibility _on purpose_? What's the motivation here?
This is not an inherent design flaw; it comes up in nearly every IPv6 thread, commonly referred to as the "just add more octets, bro" argument. This comment[0] sums it up well, but I'll leave it here for convenience:
> Fact is you'd run into exactly the same problems as with IPv6. Sure, network-enabled software might be easier to rewrite to support 40-bit IPv4+, but any hardware-accelerated products (routers, switches, network cards, etc.) would still need replacement (just as with IPv6), and you'd still need everyone to be assigned unique IPv4+ addresses in order to communicate with each other (just as with IPv6).
That's not at all what I'm suggesting, I'm saying "two internets" because there are two internets: IPv4 and IPv6. You need two internet stacks on every host to deal with this, hence two internets. If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.
Saying "two internets" is not a judgement call on whether IPv6 changed too much or is too different. It's just the literal truth. There are two internets, because one can't communicate with the other.
$ ip -4 addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
$ ping -6 ipv4.google.com
PING ipv4.google.com(sof04s06-in-f14.1e100.net (64:ff9b::142.251.140.78)) 56 data bytes
64 bytes from 64:ff9b::8efb:8c4e: icmp_seq=1 ttl=55 time=27.9 ms
This is a v6-only machine (no v4 address other than lo), talking to a v4-only hostname. If it was truly not possible to communicate, this wouldn't work.
> If everything in the IPv6 protocol suite was literally identical to IPv4, but just with some extra bytes in every address, there would still be two internets, because they are not mutually compatible.
This part is true though. v6 is backwards compatible in rather a lot of ways, but v4 isn't forwards compatible, and it's important to note that this comes entirely from the larger address size -- no amount of making v6 more identical to v4 than it already is would make v4 any more compatible with it, unless you undermined the entire point and made the addresses 32 bits.
(Yes, I understand there is no other way for a larger address space to exist that doesn’t have this problem. We are in violent agreement here. But that doesn’t mean there aren’t 2 internets. Let’s call a spade a spade.)
This situation was inevitable, but it's not like they just gave up and said "welp, guess they're separate then". We've developed basically every single method of communicating between v4 and v6 that are possible to develop. There's two address spaces, but they're linked into one Internet.
(Or really, with NAT in the picture, it's millions of overlapping address spaces, but we still consider that to be one Internet.)
Yes there is: https://www.rfc-editor.org/rfc/rfc4291.html#section-2.5.5.2
>It _was_ possible to create an address space extension that was IPv4 backwards compatible, this option was just not chosen, and now we're still dealing with the dual stack mess.
This was never possible.
What's hard is getting people who should know better to do what they're supposed to do, without playing games. The very fact that there are people who will argue why IPv6 is unnecessary tells us everything we need to know, because occasionally those people are in the path of deployment of something public. They don't write software which works agnostically with IPv6, they don't implement addressing that's stack agnostic, they don't bother taking the extra five minutes to test with IPv6.
Then you have a corporation that requires manager types to have enough "justification" to actually overrule Mr.-IPv4-is-good-enough, and because they don't understand that their own phone uses IPv6 all the time, they don't see a "business case" for it.
The title would be longer, but perhaps more accurate, if it were "Getting know-it-alls to stop stubbornly clinging to IPv4-is-good-enough and getting managers to realize that IPv6 is not only inevitable, but ubiquitous, are hard".
This article's conclusion isn't totally off, but in this case it's something that could easily be fixed by having a junior network admin just simply set things up. It's a trivial problem that's probably there because someone thinks they know better.
* We can't not support IPv4.
* IPv6 clients can connect to v4 only hosts.
* Supporting IPv6 is nonzero additional work.
* The cost of v4 addresses is on the order of a deli sandwich.
* (If not greenfield) We have a working v4 system right now and adding v6 support is entirely risk with no benefit.
* (Meant negatively) v6 is substantially different. So we will forever be maintaining two sets of networking stacks that play by different rules.
May I introduce you to my ipv6-only vps
How do I do a guest wifi network with isolation? On ipv4 I just flip a switch and thanks to NATs it just works, always.
With ipv6? First I have to figure out what prefix delegation my ISP gives me. How do I do that? Manually, somehow, and hope it never changes. Already not "ridiculously easy" at the first step.
Oops, turns out I was given a /64. So now what? Please explain this "ridiculously easy" thing to me, thanks.
If ALL customers would get a static /48 and the router provided by ISP wouldn't be industrial waste, you could easily use a different /64 for guest WiFi. (Or even a /56, if for some reason your friend wants to delegate some /64s to VMs running on their notebook.)
But in that case these ISPs wouldn't be able to ask more money for "business" internet services.
I think this is just the result of negligence from IANA or RIRs, these "suggestions" or "best practices" should be mandatory for ISPs and enforced by RIRs.
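Carving a delegated /48 into per-network /64s really is mechanical. A sketch with Python's `ipaddress` module, using the RFC 3849 documentation prefix as a stand-in for a real ISP delegation:

```python
import ipaddress

# Hypothetical delegated prefix (documentation range from RFC 3849).
delegated = ipaddress.IPv6Network("2001:db8:1234::/48")

# A /48 contains 65536 /64s -- one each for LAN, guest WiFi, VMs, ...
subnets = delegated.subnets(new_prefix=64)
lan = next(subnets)
guest = next(subnets)

print(lan)    # 2001:db8:1234::/64
print(guest)  # 2001:db8:1234:1::/64
print(delegated.num_addresses // lan.num_addresses)  # 65536
```

The pain point isn't the math, it's that none of this works when the ISP hands out a single /64 or changes the prefix at will.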
So it's not easy.
Setting up a VLAN for the first time isn't ridiculously easy, but it's the same whether you are using IPv4 or IPv6.
Is that really the case though? I very much doubt it.
I get a /56. Dynamic configuration mechanisms exist. I literally do not have to do anything except flip a switch. My router even supports Prefix Delegation, so a downstream router/access point can do its thing.
It's what AT&T fiber does. Well, they give a /60 to their shitbox, but if you want your own router with a public IP then you're stuck with a single /64 for it at least when doing the "easy" path.
You can get some routers to request multiple IPv6 blocks and then you get the freedom of a whopping 7 subnets but you've also left "ridiculously easy" way, waaaay in the rear view mirror at this point anyway
But I’m with you on prefix delegation sucking. Prefixes change, and that makes all your devices’ addresses change. ULA’s solve this. But then you start asking hard questions like “if I’m going to use a ULA anyway, why even use the GUA addresses?” And the answer is shrug.
I mean, it’s great that you can give real addresses to your devices when you want to host a service on them, but you can always just NAT your ISP-provided prefix to them anyway. You’ll probably want better addresses for them anyway, as those randomly generated host addresses aren’t easy to remember (may as well just start your public addresses at ::1 and increment from there, routing each one to the underlying ULA.)
If you have a static NAT you don't need connection tracking on the router.
OP was saying ipv6 makes it hard to do a guest network if all you get is a /64 from your ISP, but stateful NAT can fix that.
I've been writing software for pay for several decades and do not want IPv6.
Phones use IPv4 and IPv6 and you can switch your phone to IPv6 only in the networking settings, run like that for a month and observe multiple weird and bad behaviors.
I don't want IPv6 for several reasons: it looks like it was written by an alien, and it is inherently privacy busting. I know many whose job it is to track people as closely as possible will object, but it's true: IPv6 is mainly wanted by people whose job it is to track people.
It's just an IP. Worry about browser cookies and phone apps sending tracking IDs, not the randomized and non-persistent IPs they're sending them from.
Yeah, it tells us that the IPv6 standard was invented by retarded monkeys who don't know what they're doing.
(Yeah, yeah, I know, I've heard it before - akshually it's the rest of humanity who is stupid for not switching to IPv6. If only they were enlightened enough to know they need to throw away all their hardware and switch to an untested network stack for no practical benefit!)
My IPv6 connectivity tester at https://ready.chair6.net is still doing its thing, twelve years since https://news.ycombinator.com/item?id=2154124.
Looks like Hacker News has added support in the five years since https://news.ycombinator.com/item?id=21382275.
Will add a link to your site on the failed domains page.
With IPv4 it's easy to set up a static configuration. With IPv6, the Linux kernel is eager to listen to all sorts of advertisements from the network, and it's hard to say "no, stop, don't ever ever listen to anything except my static configuration".
I can bring up IPv4 networking in a tiny fraction of a second. IPv6 networking sometimes takes seconds to settle down and be reliable.
(I'd be interested in any suggestions for how to improve this.)
net.ipv6.conf.default.accept_dad=0
net.ipv6.conf.default.accept_ra=0
net.ipv6.conf.default.autoconf=0
Or, in your network startup script (like Debian’s /etc/network/interfaces): sysctl -w net.ipv6.conf.default.accept_dad=0
sysctl -w net.ipv6.conf.default.accept_ra=0
sysctl -w net.ipv6.conf.default.autoconf=0
Or, in your /etc/systemd/network/en0whatever.network: [Network]
IPv6AcceptRA=false
IPv6DuplicateAddressDetection=0
Although I'd also argue that you should prefer automatic config over static in general. RAs can set your default route correctly and you can use `ip token` if you really need SLAAC to give you a specific address. I think it's better to be able to just connect machines and have them get the right config rather than need to hand-craft it on each one.
In this case I'm bringing up virtual machines, as fast as possible, and static is inherently faster and more predictable. Automatic configuration isn't compatible with literally counting milliseconds.
LinkLocalAddressing=no
and/or sysctl -w net.ipv6.conf.default.addr_gen_mode=1
What if you have multiple internet connections and you want to choose which one you go through? With ipv4 and nat you can just change the default route. And you don't even need to write down the router's address since it's <your prefix for the whole network> dot one number.
(I appreciate the confirmation of those settings, though, and those are all an important part of the setup.)
If anything, IPv4 is much harder. The Internet is in an abusive relationship with NAT, thinking it's normal.
But we're in a world where if a service breaks IPv4, they lose a very large set of their customers and it's the service's fault. If they break IPv6, it's is perceived that if any customer fails to connect, then it's the customer's fault (or their ISPs fault).
I'm also convinced that the length of IPv6 addresses and the difficulty in typing them is a major reason for lack of adoption. Very seemingly small UX annoyances can have a large effect.
I've been convinced of this since around 2000. Every network utility dealing with IPv4 had... dotted quad addresses. You could remember them, you could read them, you could speak them to someone else. Brainwise, we can remember 3-5 things - segments, digits, etc. Remembering more is harder for the average person.
Having the 6 be literally dotted sextant(?) - 5.73.192.168.0.4 for example - would have been much much much easier to transition in to, and would have given us 64k x 4 billion addresses. Yes, it's not the near infinity we have with IPv6, where apparently every molecule in the universe can have multiple IP addresses or whatever the max number is. But it would have been much much easier to transition. 27 years later we're not transitioned from ipv4, and we've got another 27 years of dotted quad baked in to daily usage everywhere.
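To make the hypothetical concrete, here is a sketch of that "dotted sextet" format — six octets instead of four. This is not a real protocol, just an illustration of the address space the commenter is describing:

```python
# Parse/format a hypothetical 48-bit "dotted sextet" address like
# 5.73.192.168.0.4 (six octets, big-endian).

def parse_sextet(addr: str) -> int:
    octets = [int(o) for o in addr.split(".")]
    assert len(octets) == 6 and all(0 <= o <= 255 for o in octets)
    n = 0
    for o in octets:
        n = (n << 8) | o
    return n

def format_sextet(n: int) -> str:
    return ".".join(str((n >> s) & 0xFF) for s in (40, 32, 24, 16, 8, 0))

n = parse_sextet("5.73.192.168.0.4")
print(format_sextet(n))        # 5.73.192.168.0.4
print(2**48 == 65536 * 2**32)  # True: 64k times the IPv4 space
```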
Part of this would just be transitory, but... would take up far less space. Even as late as last year, I'm reading stories about BGP issues and slowdowns because routing tables are too large/many for the hardware. In 2024. How on earth was this expected to work on hardware from 1999?
https://blog.apnic.net/wp-content/uploads/2023/01/Figure-1-%...
https://blog.apnic.net/wp-content/uploads/2023/01/Figure-14-...
But even if they had, the counter-argument to "128-bit addresses are too long, 48-bit would have been better" is obviously not "that's the same thing, just with 48-bit addresses!"
No, it wouldn't have taken 'exactly as much work to transition'. See another reply. In 2024, I'm still reading about BGP/routing table issues, and hardware having problems keeping up with IPv6 networking - it's just too large. How was this supposed to work in 1999?
A smaller stepped transition from 4 billion addresses to... 64k of those would have been an easier transition step, and the ease - mental, UI, training, testing, security, etc would have given everyone involved the confidence and positive feedback to tackle a next stage to IPv6 (eventually).
Actually mDNS does exactly what they said just fine. You're confusing it with the ability of a DHCP server to make updates in DNS on behalf of clients. There is more than one way to skin the cat. Both achieve the same effect, just differently.
> nor do they necessarily ever consider your local DNS server.
Hardcoded DNS servers are just as equally a thing with IPv4.
Of course, but they aren't the baseline expectation like it is with ipv6 since ipv6 baseline assumption is SLAAC. And SLAAC is terrible. If you control all the devices on the network, like a cloud install, you can just pretend SLAAC doesn't exist and live in a much better world.
If it's a home network then you're stuck expecting at least some usage of SLAAC. And you're better off just not supporting ipv6 at all at that point
Why not? Are you saying your computer doesn't need to know the IP address of any dns server in order to be useful?
What do you mean when you say that SLAAC is terrible? Especially when you control all the devices on the network, you are in full control of what SLAAC manages for you. Also, SLAAC is the only way to use the IPv6 Privacy Extensions.
Rather, most routers add DHCP hostname requests to the local DNS routes. That doesn't require mDNS, nor would you want to replace it with mDNS as mDNS is more limited & flakier.
Ping foo on my machine results in foo.local being successfully pinged. Your mileage may vary.
> would potentially use mDNS if you have a ping that does such a thing
It's not a feature of ping, it's a feature of the name resolution setup on your machine (nsswitch and/or resolved on Linux).
> Rather, most routers add DHCP hostname requests to the local DNS routes
What do you mean?
Things like RA and NDP would clue hosts into the local DNS server as well. You don't need DHCPv6 to advertise a local DNS server.
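For example, radvd can advertise a recursive DNS server in Router Advertisements via the RDNSS option (RFC 8106). A minimal sketch of a radvd.conf, where the interface name and the RFC 3849 documentation addresses are placeholders:

```
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 { };
    RDNSS 2001:db8:1::53 { };
};
```

Any host that processes RDNSS (most modern OSes do) learns its DNS server with no DHCPv6 in sight.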
Indeed. I meant that if evaluating the two de novo, IPv4+NAT is harder.
Then there are crimes against network protocols like NAT66 that are borderline necessary in some situations because terrible ISPs have decided to dynamically alter IPv6 prefixes so they can charge extra for a static prefix.
I still think NAT makes everything harder than it should be, but I get why some people struggle with IPv6 if their ISP is particularly terrible.
Everyone should bomb Android bug reports with complaints about lack of DHCPv6 until they fix it. Then people can de-facto deprecate SLAAC. AFAIK Android is the only major offender here.
Some corporate networks already don't do SLAAC if they don't need Android to work on them.
Possibly, but I think it's also that some aspects of IPv6 are just badly designed, and subsequent updates only fixed some of the issues.
SLAAC, for example, is godawful stupid. And while DHCPv6 exists to fix some of that, critical aspects of the protocol, like prefix delegation, are manually configured. This is beyond idiotic and makes things like subnets massively more difficult than they have any right to be.
SLAAC and PD combine to create the worst sin of all - flakiness. It's hard to figure out why IPv6 isn't working because everything is telling you it is (it "auto configured" a valid IP despite not having any actual route to anywhere else, so helpful! /s). If IPv6 was literally just "IPv4 but 128-bit addresses instead of 32-bit ones" it almost certainly would have been adopted en masse by now. But it isn't, it made things more fragile and harder to configure for no goddamn reason.
And of course you then have ISPs doing silly stuff, like my ISP only gives me a /64 so I literally can't use IPv6 on my network since I want VLAN isolation. 18446744073709551615 possible addresses are given to me yet dividing that up into 3-5 subnets is completely impossible. Fucking stupid protocol.
The core of V6 -- longer IPs -- is great. SLAAC and /64 being the smallest need to be deprecated. It's asinine.
Really absolutely everything but longer IPs should be deprecated and was a mistake.
are you sure? what about homepods, Chromecasts, Hue bridges, kasa smart plugs, esphome, etc etc etc...?
How much are you willing to bet your home network on Android being the final user of SLAAC? And how much time do you want to invest in figuring that out in the first place?
Although https://www.rfc-editor.org/rfc/rfc7217 doesn't require a /64 or shorter prefix for its variant of address auto-configuration, for example Linux doesn't allow otherwise.
Heck, I would have probably simply added an Options flag for "long address" to support longer addresses. With the first part of the address being the IPv4 address of some ISP backbone router and the next 3 words being a device address on the network (with the all-0 device address reserved for the router itself). Then you don't even need to upgrade most of the routers beyond configuring them to accept packets with that option set. It would have been so simple and elegant and we would have probably upgraded most of the world by 2001.
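As an illustration of that idea, the "long address" extension could ride in an ordinary IPv4 option carrying the extra bits. The option type number below is invented; this is only a sketch of the commenter's hypothetical, not any real protocol:

```python
import struct

# Hypothetical "long address" IPv4 option: the header keeps a 32-bit
# router address, and a type-length-value option carries the remaining
# 96 bits. Option type 0x9e is made up for illustration.
OPT_LONG_ADDR = 0x9E

def long_addr_option(extra_bits: bytes) -> bytes:
    """Encode the extra address bits as a TLV IPv4 option."""
    assert len(extra_bits) == 12  # 96 bits: a 128-bit address minus IPv4's 32
    return struct.pack("!BB", OPT_LONG_ADDR, 2 + len(extra_bits)) + extra_bits

opt = long_addr_option(bytes(12))
print(len(opt))       # 14 bytes: type + length + 12 bytes of address
print(opt[:2].hex())  # 9e0e
```

Whether legacy routers would actually forward packets carrying an unknown option unmolested is, of course, the part history suggests would not have gone smoothly.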
The quantity of existing deployments that need to change is much larger for v6 than it was for v4. The problem with deploying v6 isn't that it's too different from v4, the problem is that it's different at all. Your v7 will have the same problems.
Last time I checked, some telecoms such as Vodafone didn't even support IPv6 yet.
b2b telecoms as a consequence also have at best zero incentive since if the consumer doesn't have ipv6, businesses don't need or want ipv6.
the solution here, which will inevitably strike a nerve with some, is to have a regulatory mandate for ipv6. a soft version would start with 'government networks must run exclusively on ipv6, this applies to any and all contractors, including govclouds' and work upwards from there. I understand such policies were in place or at least planned; OTOH in the current political climate it'll be sort of a miracle if the existing ipv4 networks keep working.
I spent a decade writing element management systems. An element is a network device - switch, router, ROADM, etc. Often you give the customer kit & software and they spend a couple of years assessing it. This software would be installed on secure out of band networks with no route to the internet. 10/8 would have been adequate for the lifetime of the hardware currently in service - 200G stuff. However, that's not how things played out.
When I started working on EMS software IPv6 support was a requirement but it was just a box ticking exercise. No one was actually going to use it. A decade later IPv6 HAD to work for northbound and southbound traffic, on an internal network. Otherwise, no sale. IPv6 is being used in anger in telecoms. I have no idea how long that will take to filter to in band networks.
Not entirely true. 10/8 is only so big, so the management of addresses very much has a nonzero opex cost.
And on top of that there's capex on "carrier grade NAT".
For the biggest providers 10/8 is not even big enough for their production environment. And then they acquire some company that (obviously) also uses 10/8.
No, there's definitely incentive there. Just… not enough.
It was never a viable option for the Internet (millions or billions of devices would never support it, and anyway it'd only buy us like 3 months), but in theory it should work better in the (more, but not entirely) controlled environment of a large prod environment.
I know of several massive setups that when running out of 10/8 evaluated 240/4 but deemed it less feasible, riskier, and definitely more work, than to "just" move to IPv6. Sorry, can't namedrop them, since I don't want to doxx myself. I've never seen 240/4 actually chosen.
Works fine in Linux, doesn't really work on Windows and some BSDs.
https://blog.benjojo.co.uk/post/class-e-addresses-in-the-rea...
I don't think this is true. The IPv4 address exhaustion is still a problem and it's getting worse, and telecoms which also provide internet services need to allocate addresses to their clients. With the 5G rollout these telecoms need to make room for new devices, and that's not feasible with IPv4.
This is dependent on the firewall features of the NAT router, but at least it's something, and the router provides one centralized point of protection from internet traffic.
In IPv6 typically every machine on a LAN is directly connected to the internet with a public IP address. Every one of those machines now needs a full strength internet firewall.
This complicates some intra-LAN communications, and removes the feature of having the LAN be a walled garden, mostly isolated from the internet.
Which is, coincidentally, exactly how it works if your LAN is made up of devices with publicly-routable IPv4 addresses as well, which happens in business/academic/military networks all the time.
They just want to watch some reels.
No they don't.
Most ISP boxes only implement the bare minimum of functions to make sure that youtube is available to the users. Which includes NAT, because otherwise youtube does not work, and does not include anything else.
Anyway, NAT is costlier than a firewall. It uses more memory, it requires rewriting packets on-the-fly, and typically if you're using embedded Linux (as I'll assume the vast majority of consumer devices are) then you're already using `iptables` or `nftables` to get NAT functionality. It is comparatively simple to set default inbound/forward drop policies.
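A minimal nftables sketch of both halves, where "wan0" is a placeholder interface name: the masquerade rule does conntrack plus packet rewriting on every flow, while the drop policy does the same conntrack lookup with no rewriting at all.

```
# NAT: connection tracking plus address rewriting on every flow.
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat;
        oifname "wan0" masquerade
    }
}

# Stateful firewall alone: same conntrack state, no rewriting.
table inet filter {
    chain forward {
        type filter hook forward priority filter; policy drop;
        ct state established,related accept
    }
}
```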
But yes, I should have said "in my experience," since it's true that I only know the networking equipment of a few people in a small country with limited IPv6 rollout (my ISP does not provide it).
Is that a real, full firewall? Heck no. I'm equally worried about what my systems leak as I am about outside threats, and it does nothing for that.
but aren't we nitpicking a little here? veering on the "technically correct, worst kind of correct".
Yes, but that's because it's co-located with a stateful firewall, sharing the same connection tracking state. Without that firewall, if a device on the WAN side sends to your router a packet with destination address on the LAN side, your router will route that packet to the LAN, even though it's not a reply to anything sent from the LAN to the WAN. That is: it's a misconception that port mapping NAT blocks incoming connections; port mapping NAT only affects outgoing connections and replies to them.
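In iptables terms: the MASQUERADE rule rewrites addresses but never drops anything; it's the FORWARD-chain conntrack rules that actually block unsolicited inbound packets. A typical sketch, with "wan0"/"lan0" as placeholder interface names:

```
# NAT: rewrites source addresses on the way out. Filters nothing.
iptables -t nat -A POSTROUTING -o wan0 -j MASQUERADE

# Firewall: this is what actually stops unsolicited WAN->LAN traffic.
iptables -A FORWARD -i wan0 -o lan0 -m conntrack \
    --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i wan0 -o lan0 -j DROP
```

Delete the two FORWARD rules and the MASQUERADE rule still happily lets routed inbound packets through, which is the point being made here.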
This can happen only if the device is directly attached to the NAT server. Basically, only your ISP can do that.
Most NAT servers by default also prohibit this kind of routing (because why even allow it?!?), but even the most misconfigured bad NAT server is still vulnerable only if you control the ISP, in practice.
But it is. Even a "full" firewall can't do anything against something on one of the inner host leaking into the outside. Most of the traffic is encrypted, even DNS. So the firewall can't operate on much more than the destination address to decide if a connection should be allowed.
If you want more security, you need to secure individual endpoints.
I suspect when people talk about "NAT" in the context of a residential connection, what they actually mean is a "default-deny firewall with connection tracking".
(Are you going to point out that statelessly blocking a fixed IP range used to be called "firewalling" so a device without connection tracking can still technically be a "firewall" next?)
A system that can't track connection state, or doesn't have enough capacity to do so at line rate, can't meaningfully firewall. When the norm for home equipment is ISPs sending it out for "free", they absolutely send out the cheapest stuff they can get away with, and in the pre-smartphone era that meant "modems" that would just provide a straight PPPoA connection with no firewalling.
Pretty much everything else in a firewall in the modern world is just useless feature creep.
Take two networks that are identical, except one is doing NAT and one isn't, and test them. Every connection that works on the without-NAT network will also work on the with-NAT network. The only difference you'll find is that some outbound connections which didn't work on the former will work on the latter.
Some would argue that "every connection that used to work still works, and some connections that didn't work before do now work" is kind of the opposite of a firewall.
It absolutely does. There is no way with NAT to connect from the outside network to the inside ("but what if the attacker is directly attached to the next hop blahblahblah"). That's all that firewalls need to do, everything else is useless noise in the current world.
> Take two networks that are identical, except one is doing NAT and one isn't, and test them.
Here's my IP: 192.168.80.36. Feel free to attack me. Go on. The public IPv4 of my router is 76.191.126.81. I even opened a port 8871 for you with a static page with a Bitcoin wallet worth $1000, you just need to get to it.
You can't. There's no way to get to my computer without hacking the NAT server first.
That's why NAT _is_ a firewall, and a foolproof one at that. It's secure by default, unlike IPv6 firewalls that can fail open.
https://sh.itjust.works/pictrs/image/421c9549-83ee-487a-9fa0...
When I set one up for you (back here: https://news.ycombinator.com/item?id=39173556) I gave you the access needed to actually do the test (...but you either missed that message or opted to ignore it, so you never tried it). You need to either move your network to routed IPs or give me a tunnel to the upstream network of your router first.
Not doing either of those doesn't make you right. NAT is still not acting as a firewall here. I'm just not in a position to take advantage of that, due to your use of RFC1918 addresses on the LAN side.
Nope. Even with a misconfigured box that forwards traffic from WAN to LAN, you need to be in direct contact with the NAT server (an OpenWRT box). And that's the point: you are _not_, as is pretty much everybody else in the world, outside of people who have keys to my house and my ISP.
In practice, this provides all the practical security you need from a firewall.
> I'm just not in a position to take advantage of that, due to your use of RFC1918 addresses on the LAN side.
Well, duh. That's the whole point of NAT (when used typically). You don't get to access my internal network, you simply can't do that physically.
All you need to be able to do is send a packet that ends up at your router with the dest IP set to one of your LAN machines. This only requires being directly attached to your router if you're using RFC1918 on the LAN; if you're using a properly routed prefix then it can be done from anywhere.
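The RFC1918 point can be made concrete with Python's stdlib `ipaddress` module (the addresses are the ones quoted upthread):

```python
import ipaddress

# RFC 1918 space is excluded from the global routing table, so no
# upstream router will carry a packet destined for the LAN address.
lan = ipaddress.ip_address("192.168.80.36")   # the LAN host above
wan = ipaddress.ip_address("76.191.126.81")   # the router's public IP

print(lan.is_private, lan.is_global)  # True False
print(wan.is_private, wan.is_global)  # False True
```

Which is exactly the distinction being argued: reachability here is a property of the addressing, not of the NAT translation itself.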
> Even with a misconfigured box that forwards the traffic from WAN to LAN
Forwarding traffic from WAN to LAN isn't a misconfiguration. Your router needs to do that for TCP to work, or get replies to outbound UDP.
Or control the internal network of my ISP.
> All you need to be able to do is send a packet that ends up at your router with the dest IP set to one of your LAN machines.
Which you can't do.
> if you're using a properly routed prefix then it can be done from anywhere.
If I had a publicly routed /24, then I wouldn't be using NAT in the first place. Which is the case in point: NAT _is_ a firewall.
> Forwarding traffic from WAN to LAN isn't a misconfiguration
Sigh. I mean allowing the SYN packets to be forwarded from WAN to LAN.
Whether or not someone can get that packet to your router in the first place is a separate question. If you're using RFC1918 then yeah, they'd need access to your direct upstream network. But the point I've been trying to make is that NAT doesn't influence the answer to this question: whether or not you do NAT has no influence on the behavior of any of the routers upstream of you.
NAT has no influence on whether these packets can reach your router or not, or on whether your router will forward them if they do arrive. That's why it's not a firewall: because it's not actually doing any firewalling.
> If I had a publically routed /24, then I wouldn't be using NAT in the first place. Which is the case in point: NAT _is_ a firewall.
It's not possible to go from "I wouldn't be using NAT if I had a routed /24" to "NAT's a firewall".
If you had a routed /24 then you might not use NAT, but that says nothing about whether NAT works as a firewall or not. The relevant part isn't "would I still be using NAT if the situation was different?", it's "does NAT block inbound packets?". Since it doesn't, it's not.
> Sigh. I mean allowing the SYN packets to be forwarded from WAN to LAN.
Well, yeah, allowing those is generally a misconfiguration... in the firewall. The routing part of the router doesn't pay any attention to TCP flags, just IP addresses.
You quite regularly see people pop up in these threads arguing something like, oh, I dunno, "In IPv6 typically every machine on a LAN is directly connected to the internet with a public IP address. Every one of those machines now needs a full strength internet firewall.", as if this is somehow a problem with IPv6 rather than a problem with their own understanding. It's hard enough to get people to deploy v6 as it is without them mistakenly believing that a lack of NAT means their router can't provide firewalling either.
NAT wasn't invented for the home-broadband user, you know.
But I'm at least glad we've cleared up the myth that NAT provides any security benefit when, in fact, it's the often deployed accompanying stateful firewall that's actually providing this.
Note that curl itself also implements Happy Eyeballs if you don't force a particular protocol via `-4` or `-6`. You can see it in action with `curl -vvvv https://opentalk.mailbox.org`. So you shouldn't look at the author's example and extrapolate that `curl github.com` or `curl opentalk.mailbox.org` will fail in an IPv6 environment.
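For the curious, the first step of Happy Eyeballs (RFC 8305) is just reordering the resolver's candidate list so that address families alternate. A minimal sketch of that ordering step (the helper name is mine; real implementations also stagger the actual connection attempts by ~250 ms):

```python
import socket

def interleave_families(addrinfos):
    """Alternate IPv6/IPv4 candidates (RFC 8305 section 4),
    keeping the family of the first resolver result first."""
    v6 = [a for a in addrinfos if a[0] == socket.AF_INET6]
    v4 = [a for a in addrinfos if a[0] == socket.AF_INET]
    prefer_v6 = bool(addrinfos) and addrinfos[0][0] == socket.AF_INET6
    first, second = (v6, v4) if prefer_v6 else (v4, v6)
    out = []
    for i in range(max(len(first), len(second))):
        out += first[i:i + 1]    # slice, so a missing entry is skipped
        out += second[i:i + 1]
    return out
```

The effect is that a broken IPv6 path only costs you one short stagger delay instead of a full connect timeout before IPv4 is tried.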
It's entirely a different thing for home users which get handed some shitty router from the only ISP around.
My options are cable or long-distance ADSL. The cable ISP has dynamic prefix, so I get a new prefix every week or so. If you contact them regarding IPv6 there's zero support, I had to find the prefix length by trial and error.
So much software relies on the prefix being fixed, so this can lead to a lot of frustration. On the bright side I actually get a prefix, some just get a /64.
IPv6 might not be hard, but using IPv6 might indeed be hard.
what's the difference?
I've posted here before about my home network with two internet connections. Not going to whine about it again, except to restate that I basically have no idea what to do about it with IPv6.
https://github.com/systemd/systemd/pull/34165
So you can do:

    RefuseRecordTypes=AAAA

So there, your DNS won't ever query for IPv6 AAAA records.

(In fact you can refuse other DNS question types like MX, SRV, and TXT as well with:

    RefuseRecordTypes=MX SRV TXT)
Note - I started adding that feature because some applications were "misbehaving" and querying AAAA records when IPv6 stack was disabled.
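If you go this route, a drop-in file is the usual place for it (the path is illustrative; `RefuseRecordTypes=` only landed in systemd-resolved via the PR above, so it needs a recent systemd):

```ini
# /etc/systemd/resolved.conf.d/no-aaaa.conf
[Resolve]
RefuseRecordTypes=AAAA
```

followed by a `systemctl restart systemd-resolved`.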
For example, if you want to do multi-homing with IPv6 for an SMB, you need to either do NAT (not great), or have your router advertise/withdraw prefixes dynamically. If you do prefix withdrawal, you can end up in a situation where your devices have _no_ prefixes. So even internal functionality like printing or VoIP starts breaking.
Great.
Then there's ULA that is mis-prioritized. You'd think that simply adding a stable ULA prefix to all devices would solve the issue above, but that's not the case. The devices will prioritize the global IPv6 source address, so stuff can _still_ break.
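The mis-prioritization comes from the RFC 6724 default policy table, which gives ULA (fc00::/7) precedence 3, below IPv4-mapped addresses at 35. On glibc hosts you can override destination-address ordering per machine in `/etc/gai.conf`; a sketch (values are illustrative, and note that adding any `precedence` line makes glibc discard its built-in table, so in practice you'd restate the full table):

```
# /etc/gai.conf -- raise ULA above IPv4 so internal traffic
# prefers the stable ULA prefix
label      fc00::/7   13
precedence fc00::/7   45
```

This only fixes destination selection on that one host, though; the source-address preference for GUA over ULA is baked into the hosts' address-selection rules.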
I have two primary connections that I load balance between, each providing a /56. If either fails, all traffic goes to the other. I have a third connection that is used only as a backup in the event that both the other connections go down, providing a /64.
So simple case, I assign an address from each connection out to each device.
Except, whoops--the backup ISP only provides a /64 and despite that containing something like 18.5 quintillion addresses (a number so large my spell check is telling me it's not a real word), I can't actually split those up in any meaningful way so I can only allow _one_ subnet to access the internet via that connection.
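The quintillion figure checks out, for what it's worth; the catch is that SLAAC (RFC 4862) only runs with 64-bit interface identifiers, so none of those host bits can be borrowed for subnetting:

```python
# A /64 leaves 64 bits of host space...
hosts_in_a_64 = 2 ** 64
print(f"{hosts_in_a_64:,}")  # 18,446,744,073,709,551,616 (~18.4 quintillion)

# ...but SLAAC requires exactly a 64-bit interface identifier, so a
# single delegated /64 cannot be split into multiple SLAAC subnets.
```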
And then I'm pushing responsibility for load balancing between the two primary connections out to each client. You know, to the place with the _least_ useful information for making decisions in this situation. And then I need to figure out how to convince each device to make these decisions individually. (As far as I can tell, with an equal cost, Linux will simply use the most recently added address. So in the event that one of the primary connections is having issues and bouncing up and down, that's going to be the one that all the Linux devices choose to use preferentially. Absolutely what I'd want!)
So I give everything a ULA (which I'd want to do anyway so my internet going down doesn't mean my TV can no longer stream media from the server across the house, since that's a reasonable thing to expect to happen) and then look at doing some sort of prefix translation. Except since nobody's using this stuff because it's so very not hard, I hit a bunch of weird edge cases in my network gear trying to do that. Not the least of which is that I need to statically configure the prefix I'm translating _to_, except that's not provided to me statically, so now I need to set up some sort of script to monitor the prefix on the external interface and dynamically update my firewall rules?
Oh, except apparently the RFC was only updated like a year and a half ago to prefer IPv6 ULA addresses over IPv4 so most devices would end up just using IPv4 anyway and the IPv6 deployment would be little more than a box checking exercise until everything's updated. (Though it looks like that change may never have been adopted anyway?)
Last time I was digging in to this, I ended up down a path of starting to file forms with IANA to get a IPv6 block directly allocated so I could use a GUA internally (avoiding the preference issue), do prefix translation for load balancing, and announce the IP range from AWS and tunnel it to my router to work around the /64 issue on the backup connection.
Or, or, or... hear me out... I can just... _not_. Nothing is broken right now. I derive no personal value or enjoyment from stacking up a house of cards I'll have to keep standing back up so my toaster can have a globally reachable internet address.
People can tell me it's "not hard" all they want, but after... oh hell, 30 years of bending computers and networks to my will, I feel pretty confident in saying either:
1. It is hard; or
2. IPv6 is fundamentally unfit for purpose.
I'd be happy for someone to come along and tell me I'm an idiot and _precisely why_ though. Really. Somebody. Please do. Tell me how I am supposed to make this work. I know it's a thing I _should_ be doing but I really can't figure out how.

Sometime around 2035.
NAT sounds like a bad idea at first glance, but it doesn’t exist just to alleviate address shortages (although it does.) It doesn’t just exist to provide security by default either (although it does that too.)
NAT is also the only sane choice in residential setups, because your public IP address prefix isn't static. That was never going to happen. And just saying “never put an IP address in any configuration file” isn't a satisfactory answer. When a reprefix happens, and your clients are all only speaking on GUA addresses, your network is going to be fucked until they expire the old addresses. Not in theory, in practice. None of the devices in my household did any sort of reasonable thing when Comcast changed my prefix. I had to just go around and reboot everything.
NAT means I am in 100% control over what IP addresses I configure in my own network. It means I can actually run a DNS server and it works. It means I can configure devices with static addresses and it actually works.
ULA isn’t some edge case, it’s the only sane option in residential setups. (You know, that thing you need to maximally support in order for ipv6 to have a chance in hell at succeeding.)
The IETF finally releasing guidance that ULA’s should be on equal footing to GUA’s feels like them finally admitting that there might be something to this whole NAT thing. It happened entirely too late IMO.
With IPv4 and NAT, I'm setting up a network. I control it from end-to-end. I can define whatever subnets I want, connect them how I want, put a static IP on my IoT device so Home Assistant can find them reliably, etc. Then I have _a_ point (well, three) where my network touches another network and all configuration and setup around how the two interface and interact is defined and controlled there.
With IPv6 the expectation is I'm building out another little piece of someone else's network. Now all the communication within my network is closely intertwined with theirs. There's really no good way for me to handle that today.
Using SLAAC so devices update when prefixes change leaves me unable to find/connect to anything. Short of installing a DDNS client on my air quality monitor somehow, how do I manage to connect to it?
I can use DHCPv6 (not on Android though!) and let the server handle DNS registrations. This would also regain the ability to push an NTP server to clients, which would be a nice thing to have. Except now when the prefix changes the devices have no way to know, and (like you say) I have to run around and reboot everything.
And this still doesn't give me a path to load balancing across multiple connections!
I guess really the bigger issue here is the dynamic nature of the prefixes. If I could go log into IANA and click a button and get assigned a /48 then log in to my ISPs' sites and attach it to my connection and be on my way... I'd deploy it today, no problem.
I can't see any _technical_ reason that wouldn't be possible. But there isn't a snowball's chance in hell any ISP starts offering that ability on consumer connections when they could be gating it behind having a $5k/mo business connection.
Yes, this right here. It should work like a phone number that you can take from provider to provider. You set up your gateway with your IPv6 prefix and go.
Problem is all the technical and non-technical stuff to actually make that work.
- Verifying ownership of the prefix (rPKI or IRRs) and which ASNs are allowed to advertise it.
- Limiting allocations per person? I guess phones solve this with SIMs
- Getting your ISP to route the traffic to your device
- Getting the device to announce it with RAs
- Dealing with asymmetric return routing over a slow "path" since it could be multi-homed
- Dealing with routing aggregation since IPv6 routing tables would explode with all the /56s in there
Maybe bring back Mobile IPv6? https://en.wikipedia.org/wiki/Mobile_IP
Because the other issue if you’re ULA-less is that if your router loses its connection and rescinds its RA (or you just stop the RA daemon, or shut down the router, etc), suddenly you don’t even have LAN connectivity any more because everything will have deleted its routes.
Every additional route and its ASN takes up RAM in nearly _every_ router that has a DFZ (default-free zone) view. This RAM is not cheap, so regional registries are incentivized to keep the number of new registrations down.
For IPv4 there are about 1 million routing entries right now, and for IPv6 about 200k. The IPv6 routing space can reasonably scale by about 5-10x before requiring serious changes in the core routers, but that's just not a lot of entries.
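As rough arithmetic on the numbers above (the route counts are the comment's; the scale of an unaggregated consumer-prefix world is just counting /56s in the global unicast block):

```python
v4_routes = 1_000_000  # current full IPv4 DFZ table, per the comment
v6_routes = 200_000    # current IPv6 DFZ table

# 5-10x growth before core routers need serious FIB/TCAM upgrades:
print(v6_routes * 5, v6_routes * 10)  # 1000000 2000000

# For contrast: 2000::/3 (global unicast) alone contains 2**53 /56s,
# which is why portable per-household prefixes can't go in the DFZ
# without aggregation.
print(2 ** (56 - 3))  # 9007199254740992
```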
IPv6 just yelled at people and said it wasn’t a problem.
It took me a very long time to figure out how to configure my docker containers on my home server to have global IPv6 addresses. But I did it, and it taught me a lot about IPv6. I would personally prefer for the internet to be IPv6 only. It would make certain things much easier like firewall rules, IP whitelisting, and no more stupid CGNAT.
Would there be much of a security risk if you had a dedicated device on its own network that doesn't actually directly communicate with anything else in your home?
I’d prefer to keep it on but every time I try it I run into a bunch of odd bugs. Silly stuff like GitHub container registry pulls being dialup speeds but docker.io being fine. Disable ipv6 and both at gigabit speeds. Bizarre stuff like that which I just don’t have the time and energy to go spelunking into the mysteries of dual stack to untangle.
(Out of the box opnsense)
afaik ghcr uses fastly and docker uses cloudflare.
That doesn't make any sense. 0-9 is base ten while 2^64 is base two.
docker pull ghcr.proxy.orca.toys/kpcyrd/apt-swarm:edge
My services are often run behind cloudflare, who does ipv4 for me and proxies to my ipv6-only vps.

Should you use this? Well, docker's security model is pretty much only https, so it's like using a proxy server from some random hackernews comment for your `curl|sh` needs, without end-to-end encryption.
I wish github would just step up their game, at least for ghcr.io.
Also, my key point is "hard at scale", in contrast to your "smaller ISPs".
I've lived in Canada for > 20 years in several cities and provinces. I'm telling you that CGNAT is extremely common.
I don't think you have any idea what you're talking about.
For mobile Bell uses 464XLAT. Bell mobility is IPv6 only network, also MPLS free, they are SRv6 early adopters, impressive!
Rogers and Videotron mobile use NAT44 for IPv4. IPv6 is native on Rogers.
The fact is ipv4 is scarce, why wouldn't ISPs NAT their customers? Hardly anyone cares, and that's why they get away with it.
I'm now on Bell Fibe and it's a relief to be reachable again, but who knows how long this lasts. But yes at least Rogers is on IPv6...
Happy that Bell Fibe is good for you!
Bell Fibe even has IPv6 in some markets. (EBOX has had IPv6 everywhere on fiber for years.)
For the 90%+ of network clients on-prem that play by the rules, you are often supporting both DHCPv6 and SLAAC on IPv6 networks (unless Android has figured out DHCPv6).
Dual-WAN redundancy on IPv6 in SMB settings is actually worse relative to NAT and IPv4.
The subnet size /64 is stupidly large, and the number of subnets you can easily get from most upstream providers is annoyingly small.
Firewall filtering has to be modified or you get weird errors. People don't always like ICMP coming through the firewall.
It does lots of things differently, but for what? We do prefix delegation with DHCP but can't use DHCP to assign addresses (we are supposed to use SLAAC), yet still need DHCP for lots of other stuff. It's nonsensical - you end up with just way too much garbage.
The privacy extension stuff hits IPv6 hard. You have lots of auto-rotating addresses.
The number of times turning off IPv6 fixes weird glitches / timeouts / stalls etc. is still crazy to me.
ATT (massive corp) requires end user devices to request /64 subnets one by one - which most end user gear does not support.
Getting static IPv6 addresses (despite claims there are lots of them) is seriously hard from upstream providers in many cases - but IPv4 is trivial by comparison.
The list goes on.
I would have made the subnet size 32 bits and expanded the network part. Maybe even reduced the overall size - 128 bits feels dumb with /64 as the smallest subnet. Then go super high on interop / overlay with IPv4, with great defaults so folks could ship IPv6-only stuff (even with an on-device bridge to IPv4 so the outbound interface is IPv6), and no new concepts unless clearly justified.
One thing I hate about it is that they used 64 bits for host which is just absurd. 32 bits or even 16 bits would have been plenty for a single broadcast domain.
I admit it does make the address space look a bit lop-sided, but taking bits away from the host side (where they're being used) and giving them to the network side (where we should already have enough space anyway) wouldn't be a sensible trade.
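For background on where the 64-bit host half came from: SLAAC originally embedded a modified EUI-64 derived from the interface MAC (RFC 4291 appendix A; privacy extensions later replaced this with random IIDs). A sketch of the construction:

```python
def modified_eui64(mac: str) -> str:
    """48-bit MAC -> 64-bit SLAAC interface identifier:
    insert ff:fe in the middle, flip the universal/local bit."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                  # flip the U/L bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])
    return ":".join(f"{iid[i]:02x}{iid[i+1]:02x}" for i in range(0, 8, 2))

print(modified_eui64("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```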
Setting edge routers to simply drop all ipv6 packets and running only ipv4 works, is simpler, is faster (than dual stack) and easier to troubleshoot.
I don’t have any inherent issues with IPv6 other than it has failed to make a case for why people like me should bother rolling it out.
To avoid ip-range clashes between customers, traffic inside the hub was IPv6-only, with each customer given their own prefix. Routing permissions were based on group membership, so we had a very easy and secure way to make sure each of our users was only able to reach their own customers, and personnel changes did not require any customer-side changes.
Wouldn't this be true with IPv6 as well, assuming that both companies happened to have chosen some overlapping ranges?
By comparison my company uses the entirety of the private IPv4 space. `docker compose up` often fails because there is no private IP range left that is not directed through the VPN.
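Tangentially, Docker's default address pools can be narrowed so `docker compose` only allocates from a range the VPN doesn't claim; a sketch of `/etc/docker/daemon.json` (ranges illustrative, needs a daemon restart):

```json
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

This hands out /24 networks from a single /16 instead of Docker picking from all over RFC1918 space.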
    curl -6 https://github.com -o /dev/null
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  273k    0  273k    0     0   957k      0 --:--:-- --:--:-- --:--:--  956k
Working with IPv6 if you are not entitled to switch out every vendor, every piece of hardware, every piece of software that is lacking.. that's the hard part.
In short: reality is hard.
It's a meaningless way of saying "look at me I'm smart." Everything is or is not hard depending on the audience and what it's being compared against. IPv6 is hard compared to Ethernet, but easy compared to managing a mess of NATs to handle V4 networks with conflicting addresses at different locations. All of this is a lot easier than understanding the leading proposed theories of quantum gravity.
I mean even if there is no "Happy Eyeballs", it would still work for end users, no? Unlike `curl`, you don't specify -4 or -6 with browsers. Without HE, wouldn't (shouldn't?) it just try both in sequence?
> HE can have some funny side effects. In a project a connection to a development web server sometimes worked and sometimes didn’t. The solution was quite simple. The customer used a split VPN tunnel. IPv4 was routed via the VPN tunnel and those IPv4 addresses were allowed in the web servers access list. IPv6 was routed via the normal Internet connection and those addresses weren’t allowed.
The following sentence makes no sense, when the author literally said HE helps things work when one IP family is broken and the other is not. So why would it work only sometimes if IPv6 is consistently broken and IPv4 is not? (I'm not saying it won't happen; just that it shouldn't be "caused by HE" -- HE is literally the solution to this issue in general, as the author said themselves!)
No, browsers and other clients prefer IPv6, and there would be a rather long timeout till the client falls back and tries IPv4. If it falls back at all.
> So why would it work sometimes if IPv6 is consistently broken and IPv4 is not? (I'm not saying it won't happen; just that it shouldn't be "caused by HE" -- HE literally is the solution to this issue in general as the author said themselves!)
IPv6 wasn't broken. For HE, an HTTP 403 is a valid connection. TCP handshake works, IPv6 works.
I had another case where the TCP handshake worked, and the browser decided IPv6 was good, but then came the TLS handshake, and it failed because PMTUD (Path MTU Discovery) was broken.
Which is why I deactivated IPv6 on almost every device in my network. My main issue is that the router/gateway provided by my ISP does not support DHCPv6 prefix delegation, which causes a couple of issues in my setup.
Sometimes you can try really hard to achieve a goal, but when the workarounds to simple problems become new problems, it's time to say "no".
Amen - replace IPv6 in the above aphorism with $anything, and it's a great rule for running projects.
Asking for a friend, like, what all the ipv6 use in the world is.
I'm curious. On the other side of CGNAT, you know, the mobiles, can they individually address each other, or can they only talk to each other through the CGNAT? I honestly don't know.