It's the same problem. Search for "fauux" (one of our more popular web sites) and you'll see other sites talking about it, but you won't see a link to the site itself.
I would love to see some evidence for the huge increase in protein on this new pyramid. I'm not challenging it, I'm genuinely curious if there's substantial evidence that a lot of it is actually good for most people.
I don't like to admit this, but at this point I honestly think ipv6 is largely a failure, and I say this as someone who wrote a blog post for APNIC on how to turn on ipv6.
I'll get endless pushback for this, but the reality is that adoption isn't at 100% and it needs to be very close to that, and there are still entire ISPs that only assign ipv4, to say nothing of routers people are buying and installing that don't have ipv6 enabled out of the box.
A much better solution here would have been an incredibly conservative "written on a napkin" change to ipv4 to expand the available address space. It still would have been difficult to adopt, but it would have had the benefit of being a simple change to a system everyone already understands, on top of a stack that largely already exists.
I'm not proposing to abandon ipv6, but at this point I'm really not sure how we proceed here. The status quo is maintaining two separate competing protocols forever, which was not the ultimate intention.
> A much better solution here would have been an incredibly conservative change to ipv4 to expand the available address space
"And what do you base this belief on?
Fact is you'd run into exactly the same problems as with IPv6. Sure, network-enabled software might be easier to rewrite to support 40-bit IPv4+, but any hardware-accelerated products (routers, switches, network cards, etc.) would still need replacement (just as with IPv6), and you'd still need everyone to be assigned unique IPv4+ addresses in order to communicate with each other (just as with IPv6)."[0]
> Fact is you'd run into exactly the same problems as with IPv6.
If you treat IPv4 addresses as a routable prefix (same as today), then the internet core routers don't change at all.
Only the edge equipment would need to be IPv4+ aware. And even that awareness could be quite gradual, since you would have NAT to fall back on when receiving a classic IPv4 packet at the network edge. It can even be customer deployed. Add an IPv4+ box on the network, assign it the DMZ address, and have it hand out public IPv4+ addresses and NAT them to the local IPv4 private subnet.
IPv6 seems to be a standard that suffered from re-design by committee. Lots of good ideas were incorporated, but the result was a stack that made backwards compatibility needlessly complicated. It has taken the scale of mobile carriers to finally make IPv6 more appealing in some cases than IPv4+NAT, but I think we are still a long way from any ISP being able to disable IPv4 support.
> Only the edge equipment would need to be IPv4+ aware.
"Only"? That's still the networking stack of every desktop, laptop, phone, printer, room presentation device, IoT thing-y. Also every firewall device. Then recompile every application to use the new data structures with more bits for addresses.
And let's not forget you have to update all the DNS code because A records are hardcoded to 32 bits, so you need a new record type, and a mechanism to deal with getting both long and short addresses in the reply (e.g., Happy Eyeballs). Then how do you deal with a service that only has an "IPv4+" address but application code that is only IPv4-plain?
Basically all the code and infrastructure that needed to be updated and deployed for IPv6 would have to be done for IPv4+.
But the desktop/laptop/phone/printer was the EASIEST thing to change in that 30 year history. And it would have been the easiest thing to demand a change req from a company for.
Yes: but the process would have been exactly the same whether for a hypothetical IPv4+ or the IPng/IPv6 that was decided on; pushing new code to every last corner of the IP universe.
How could it have been otherwise, given that the original network structures all used fixed-length 32-bit addresses?
If we have IPv4 address 1.2.3.4, and the hypothetical IPv4+ adds 1.2.3.4.1.2.3.4 (or longer), how would an IPv4-only router handle 1.2.3.4.1.2.3.4? If an IPv4-only host or application gets a DNS response with 1.2.3.4.1.2.3.4, how is it supposed to use it?
As I see it, the transition mechanism for some IPv4+ that 'only' has longer addresses is exactly the same as for IPv6: new code paths that use new data structures, with a gradual rollout with tech refreshes and code updates where hosts slowly go from IPv4-only to IPv4-and-IPv4+ at different rates in different organizations.
If you think it's somehow different, can you explain how it is so? What proposal available (especially when IPng was being decided on in the 1990s) would have allowed for a transition that is different than the one described above (gradual, uncoördinated rollout)?
The proposal is that IPv4+ would be interpretable as an IPv4 packet. Either the IP header is extended, or we add another protocol layer for the IPv4+ bits (IPv4+ is another envelope for the user payload).
DNS is like today: A and AAAA records for IPv4 and IPv4+ respectively.
Core routers do not need to know about IPv4+, and might never know.
The transition is similar to 6to4. The edge router does translation to allow IPv4+ hosts to connect to IPv4 hosts. IPv4 hosts are unable to connect to IPv4+ directly (only via NAT). So it has a similar problem to IPv6, in that you ideally want all servers to have a full IPv4 address.
What you don't have is a completely parallel addressing system, requirements to upgrade all routers (only edge routers for 4+ networks), requirements to have your ISP cooperate (they can just give you an IPv4 and you handle IPv4+ with your own router), or any need for clients to run two stacks at once.
It's essentially a better NAT, one where the clients behind other NATs can directly connect, and where the NAT gradually disappears completely.
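To make that "another envelope" idea concrete, here's a rough sketch in C of what such a shim header might look like. Everything here is invented for illustration (the field names, the widths, the idea of a shim protocol number); nothing like this was ever specified:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical sketch of the "envelope" variant described above: an
     * ordinary IPv4 packet whose payload begins with an IPv4+ shim header
     * carrying the extra address bits. Legacy core routers forward on the
     * outer 32-bit addresses and never look inside; only IPv4+-aware edge
     * routers parse the shim. All names and field sizes are invented. */
    struct ipv4plus_shim {
        uint8_t  version;       /* shim version, say 1 */
        uint8_t  next_proto;    /* what follows the shim: TCP, UDP, ... */
        uint16_t reserved;
        uint8_t  src_extra[4];  /* extra low-order source address bytes */
        uint8_t  dst_extra[4];  /* extra low-order destination bytes */
    };

    int main(void) {
        printf("shim adds %zu bytes per packet\n",
               sizeof(struct ipv4plus_shim));
        return 0;
    }

The point of the sketch is only that the outer header stays a plain IPv4 header, which is what would keep the core untouched.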
If you hand UTF-8 text that actually uses anything beyond ASCII to something that can only render ASCII, the text will be garbled. People can read garbled text ok if it’s a few missing accented characters in a western language, but it’s no good for Japanese or Arabic.
In networking terms, this is like a protocol which can reach ipv4 hosts only but loses packets to the ipv4+ hosts randomly depending on what it passes through. Who would adopt a networking technology that fails randomly?
v6 has nearly 3 billion users. How is that abysmal?
We've never done something like the v4->v6 migration before, on this sort of scale. It's not clear what the par time for something like this is. Maybe 30 years is a normal amount of time for it to take?
HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
3 billion people sorta use ipv6, but not really, cause almost all of those also rely on ipv4 and no host can really go ipv6-only. Meanwhile, many sites are HTTPS-only.
And because it's a layer 7 thing, it only required updating the server and client software, not the OS... and only the client and server endpoints and not the routers in between... and because we only have two browser vendors who between them can push the ecosystem around, and maybe half a dozen relevant web server daemons.
Layer 3 of the Internet is the one that requires support in all software and on all routers in the network path, and those are run by millions of people in hundreds of countries with no central entity that can force them to do anything.
HTTP->HTTPS is only similar in terms of number of users, not in terms of the deployment itself. The network effects for IP are much stronger than for HTTP.
They don't "sorta" use v6, they're properly using it, and you can certainly go v6-only. I'm posting from a machine with no v4. Also, if you want to go there: HTTPS was released before IPv6, and yet still no browser is HTTPS only, despite how much easier it is to deploy it.
I know they aren't very comparable in a technical way, but look at the mindset. IPv6 included decisions that knowingly made it more different from v4 than strictly needed, cause they wanted it to be perfect day 1. If they did HTTPS like this, it'd be tied to HTTP/2.
Most browsers now discourage plain HTTP with a warning. Any customer-facing server basically needs to use HTTPS now. And you're rare if you actually have no ipv4, not even via a tunnel.
The compromised "ipv4+" idea a bunch of people keep asking for wouldn't require changing the spec down the road. ISPs would just need to clean up their routes later, and SLAAC could still exist as an optional (rather than default) feature for anyone inclined to enable later. Btw, IPv6 spec was only finalized in 2017, wasn't exactly one-shot.
I don't know if HTTP's job is easier. Maybe on the client side, since there were never that many browsers, but you have load-balancers, CDNs, servers, etc. HTTP/2 adoption is still dragging out because of how many random things don't support it. Might be a big reason why gRPC isn't so popular too.
> HTTP->HTTPS was this kind of scale, and it was smooth because they changed as little as possible while also being very careful about default behaviors.
HTTP->HTTPS is not equivalent in any way. The payload in HTTP and HTTPS is exactly the same; HTTPS simply adds a wrapper (e.g., stunnel can be used with an HTTP-only web server). Further, HTTP(S) lives only on the end points, and specifically in the application layer: your OS, switch, firewall, CPE, ISP router(s), etc, can all be left alone.
If you're not running a web browser or web server (i.e., FTP, SMTP, DNS, database) then there are zero changes that need to be made to any code on a system. This is not true for changing the number of bits the addressing space: every piece of code that calls socket(), bind(), connect(), etc, has to be touched.
Whereas the primary purpose of IPng was to expand the address space, which means your OS, switch, firewall, CPE, ISP router(s), etc, all have to be modified to handle more address bits in the Layer 3 protocol data unit.
Plus stuff at the application layer like DNS (since A records are 32-bit only, you need an entirely new record type): entire new library functions had to be created (e.g., gethostbyname() replaced by getaddrinfo()).
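For what it's worth, the replacement API does hide the record type behind a single loop. A minimal sketch of the getaddrinfo() pattern (example.com is just a stand-in host):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>
    #include <arpa/inet.h>

    /* Resolve a name and print every address returned, A (IPv4) and
     * AAAA (IPv6) alike -- the family-agnostic replacement for the
     * IPv4-only gethostbyname(). */
    int main(void) {
        struct addrinfo hints, *res, *p;
        char buf[INET6_ADDRSTRLEN];

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;       /* ask for both families */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("example.com", "80", &hints, &res) != 0)
            return 1;

        for (p = res; p != NULL; p = p->ai_next) {
            void *addr = (p->ai_family == AF_INET)
                ? (void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, addr, buf, sizeof buf);
            printf("%s\n", buf);
        }
        freeaddrinfo(res);
        return 0;
    }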
I hear people say the IETF/IP Wizards of the 1990s should have "just" picked an IPng that was a larger address space, but they don't explain how IPv4 and a hypothetical IPv4+ would actually work together. Instead of 1.1.1.1, a packet comes in with 1.1.1.1.1.1.1.1: how would a non-IPv4+ router know what to do with that? How would non-updated routers and firewalls be able to handle longer addresses? How would non-updated DNS code be able to handle new record types with >32 bits?
HTTP->HTTPS looks easy in hindsight, but there were plenty of ways it could have gone wrong. They took the path of least resistance, unlike ipv6. I know they're different layers ofc.
To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened. The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
> To answer the last question, routers would need IPv4+ support, just like ipv6 which already happened.
So exactly like IPv6: you need to roll out new code everywhere.
> The key is it's much easier for users to switch after. No dual stack, you get the same address, routes, DNS, and middleboxes like NAT initially. ISPs can't hand out longer addrs like /40 until things like DNS are upgraded in-place to support that, but again those are pretty invisible changes throughout the stack.
So exactly like IPv6: you need to roll out new code everywhere.
Would organizations have rolled out IPv4+ any differently than IPv6? Some early, some later, some questioning the need at all. It's the exact same coördination / herding-cats problem.
It's a simple toggle on vs asking orgs to redo their entire network. In both cases you need routers and network stacks to support the new packet format, but that isn't the hard part of ipv6, we already got there and people still aren't switching.
Sorry, I'm still not seeing how an IPv4+ would be any less complicated than (or even as simple as) IPv6. In either case you would still have to:
* roll out new code everywhere
* enable the protocol on your routers
* get address block(s) assigned to you
* put those blocks into BGP
* enable the protocol on middleware boxes
* have translation boxes so new-protocol hosts can talk to old-protocol-only hosts
* enable the protocol on end hosts
And just because you do it does not mean anyone else would do it in the same timeframe (or ever). You're back in the chicken-and-egg question of whether servers/services go first ("where are the clients?") or end-devices ("where are the services?").
Redo all your addresses and routes, reconfigure or replace NAT and DHCP, reconfigure firewall, change your DNS entries at minimum. If it's a home or small business and you don't want to fight the defaults, you go from NAT to NATless.
No, routers would have to be fixed anyway, because even if you put the extra bits into an extension header, we have 30 years of experience showing that routers and ISPs will regularly fuck around with those extra bits - it's related to why we have the TLS GREASE option.
Application rework would be exactly the same as with v6, because the issue was not with v6 but with BSD Sockets API exposing low-level details to userland.
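To illustrate that last point: the address width is baked into the userland structures themselves, so any longer address means new types and new code paths in every application. A small demo (the sizes printed are the usual ones on Linux/BSD, not guaranteed everywhere):

    #include <stdio.h>
    #include <netinet/in.h>

    /* struct in_addr is exactly 32 bits by definition, so every program
     * that touches it has to be rewritten for any longer address --
     * whether that's IPv6 or a hypothetical IPv4+. */
    int main(void) {
        printf("in_addr:      %zu bytes\n", sizeof(struct in_addr));   /* 4 */
        printf("in6_addr:     %zu bytes\n", sizeof(struct in6_addr));  /* 16 */
        printf("sockaddr_in:  %zu bytes\n", sizeof(struct sockaddr_in));
        printf("sockaddr_in6: %zu bytes\n", sizeof(struct sockaddr_in6));
        return 0;
    }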
> Only the edge equipment would need to be IPv4+ aware. And even that awareness could be quite gradual, since you would have NAT to fall back on when receiving a classic IPv4 packet at the network edge. It can even be customer deployed. Add an IPv4+ box on the network, assign it the DMZ address, and have it hand out public IPv4+ addresses and NAT them to the local IPv4 private subnet.
Congratulations, you’ve re-invented CGNAT, with none of the benefits, and the additional hassle of it being an entirely new protocol!
No. No “extra bits” on an IPv4 address would have ever worked. NAT itself is a bug. Suggesting that as an intentional design is disingenuous.
I have not "reinvented CGNAT". It is hierarchal public addressing similar to IPv4 and IPv6.
The edge router has an IPv4+ subnet (either a classic v4 address, or part of a v4+ address). It maintains an L2 routing table with ARP+, and routes IPv4+ packets to the endpoint without translation. Private subnetting and NAT is only needed to support legacy IPv4 clients.
CGNAT pools IPv4 public addresses and has an expanded key for each connection, and translates either 4 to 6 or into a private IPv4 subnet. My proposal needs no pooling and only requires translation if the remote host is IPv4 classic and the edge router is not assigned a full IPv4+/24.
Not just the edge router. Every router between the ISP edge and the destination edge.
And since the goal is “backwards-compatibility”, you’d always need to pool, because a “legacy” IPv4 client would also be unable to send packets to the IPv4+ destination. Or receive packets with an IPv4+ source address.
And it would be an absolute nightmare to maintain: CGNAT plus a quasi backwards-compatible protocol where the backwards-compatibility wouldn’t work in practice.
So you would have exactly the same problem as IPv6. I can say the same about v4 and v6 today. You could just turn off IPv4 on the internet, and we’d only need to do translation on the edge for the legacy clients that would still use IPv4. You can even put IPv4 addresses in IPv6 packets!
I think you've actually reinvented 6to4, or something morally very close to it.
Each v4 address has a corresponding /48 of IPv6 tunnelled to it. The router with that IP receives the tunnelled v6 packets, extracts them and routes them on natively to the end host. This is something that v6 already does, so you don't need to make posts complaining about how dumb they were for not doing it.
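The 6to4 mapping is purely mechanical: the 32 bits of the v4 address become the second and third groups of the v6 prefix. A quick sketch:

    #include <stdio.h>
    #include <arpa/inet.h>

    /* 6to4 (RFC 3056): IPv4 address a.b.c.d owns the IPv6 prefix
     * 2002:aabb:ccdd::/48, with the v4 address spliced in as hex. */
    int main(void) {
        struct in_addr v4;
        inet_pton(AF_INET, "1.2.3.4", &v4);
        unsigned char *b = (unsigned char *)&v4.s_addr; /* network order */
        printf("2002:%02x%02x:%02x%02x::/48\n", b[0], b[1], b[2], b[3]);
        /* prints 2002:0102:0304::/48 */
        return 0;
    }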
That's quite true, but in this counterfactual, IPv4+ doesn't pretend that 6to4 is just a transition mechanism to an all-IPv6 future. That is, IPv4+ is as-if 6to4 was the default, preferred, or only mechanism, and core routers were never demanded to upgrade.
It's an edge based solution similar to NAT, but directly addressable. And given that it extends IPv4, I think it would have been much more "marketable" than IPv6 was.
But again, this is all counterfactual. The IETF standardized IPv6, and 30 years on it's still unclear that we will deprecate IPv4 anytime soon.
I agree with that belief, and I've been saying it for over 20 years.
I base it on comparing how the IPv2 to IPv4 rollout went, versus the IPv4 to IPv6 rollout. The fact that it was incredibly obvious how to route IPv2 over IPv4 made it a no-brainer for the core Internet to be upgraded to IPv4.
By contrast it took over a decade for IPv6 folks to accept that IPv6 was never going to rule the world unless you can route IPv4 over it. Then we got DS-Lite. Which, because IPv6 wasn't designed to do that, adds a tremendous amount of complexity.
Will we eventually get to an IPv6 only future? We have to. There is no alternative. But the route is going to be far more painful than it would have been if backwards compatibility was part of the original design.
Of course the flip side is that some day we won't need IPv4 backwards compatibility. But that's still decades from now. How many of the original IPv6 designers will even still be alive to see it?
The IPv2 to IPv4 migration involved sysadmins at fewer than 50 institutions (primarily universities and research labs), updating things they considered to be a research project, that didn’t have specialised network hardware that knew anything about IP, and any networked software was primarily written either by the sysadmins themselves or by people whose office one of them could walk down the corridor to. Oh, and several months of downtime if someone was too busy to update right now was culturally acceptable. It’s not remotely the same environment as existed at the time IPv6 was being designed.
Hardware would catch up. And IPv4 would never go away. If you connect to 1.1.1.1 it would still be good ole IPv4. You would only have in addition the option to connect to 1.1.1.1.1.1.1.2 if the entire chain supports it. And if not, it could still be worked around through software with proxies and NAT.
So... just a less ambitious IPv6 that would still require dual-stack networking setups? The current adoption woes would've happened regardless, unless someone comes up with a genius idea that doesn't require any configuration/code changes.
I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply. A less ambitious upgrade to IPv4 is exactly what we need in order to make any progress.
It’s not _that_ different. Larger address space, more emphasis on multicast for some basic functions. If you understand those functions in IPv4, learning IPv6 is very straightforward. There’s some footguns once you get to enterprise scale deployments but that’s just as true of IPv4.
Lol! IPv4 uses zero multicast (I know, I know, technically there's multicast, but we all just understand broadcast). The parts of an IPv4 address and their meaning have almost no correlation to the parts of an IPv6 address and their meaning. Those are pretty fundamental differences.
IP addresses in both protocols are just a sequence of bits. Combined with a subnet mask (or prefix length, the more modern term for the same concept) they divide into a network portion and a host portion. The former tells you what network the host is on, the latter uniquely identifies the host on that network. This is exactly the same for both protocols.
Or what do you mean by “parts of an IPv4 address and their meaning”?
That multicast on IPv4 isn’t used as much is irrelevant. It functions the same way in both protocols.
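To underline the point: the same prefix comparison works for both protocols, and only the address length differs. A small sketch:

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    /* Same-network test usable for either protocol: compare the first
     * `prefix` bits of two addresses given as raw bytes. */
    static int same_network(const unsigned char *a, const unsigned char *b,
                            int prefix) {
        int full = prefix / 8, rem = prefix % 8;
        if (memcmp(a, b, full) != 0)
            return 0;
        if (rem == 0)
            return 1;
        unsigned char mask = (unsigned char)(0xFF << (8 - rem));
        return (a[full] & mask) == (b[full] & mask);
    }

    int main(void) {
        struct in_addr v4a, v4b;
        struct in6_addr v6a, v6b;
        inet_pton(AF_INET, "192.168.1.10", &v4a);
        inet_pton(AF_INET, "192.168.1.200", &v4b);
        inet_pton(AF_INET6, "2001:db8:0:1::1", &v6a);
        inet_pton(AF_INET6, "2001:db8:0:1::2", &v6b);
        printf("v4, /24: %d\n", same_network((void *)&v4a, (void *)&v4b, 24));
        printf("v6, /64: %d\n", same_network((void *)&v6a, (void *)&v6b, 64));
        return 0;
    }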
The biggest difference is often overlooked because it's not part of the packet format or anything: IPv4 /32s were not carried over to IPv6. If you owned 1.1.1.1 on ipv4, and you switch to ipv6, you get an entirely different address instead of 1.1.1.1::. Maaybe you get an ipv6-mapped-ipv4 ::ffff:1.1.1.1, but that's temporary and isn't divisible into like 1.1.1.1.2.
And then all the defaults about how basically everything works are different. Home router in v6 mode means no DHCP, no NAT, and hopefully yes firewall. In theory you can make it work a lot like v4, but by default it's not.
> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.
In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.
Part of the ipv6 ambition was fixing all the suboptimally allocated ipv4 routes. They considered your idea and decided against it for that reason. But had they done it, we would've already been on v6 for years and had plenty of time to build some cleaner routes too.
I think they also wanted to kill NAT and DHCP everywhere, so there's SLAAC by default. But turns out NAT is rather user-friendly in many cases! They even had to bolt on that v6 privacy extension.
> I said it's different from IPv4. At the IP layer.
In what way? Longer addresses? In what way is it "so different" that people are unable to handle whatever differences you are referring to?
We used to have IPv4, NetBEUI, AppleTalk, IPX all in regular use in the past: and that's just on Ethernet (of various flavours), never mind different Layer 2s. Have network folks become so dim over the last few years that they can't handle a different protocol now?
Current statistics are that a bit over 70% of websites are IPv4 only. A bit under 30% allow IPv6. IPv6 only websites are a rounding error.
Therefore if I'm on an IPv6 phone, odds are very good that my traffic winds up going over IPv4 internet at some point.
We're 30 years into the transition. We are still decades away from it being viable for servers to run IPv6 first. You pretty much have to do IPv4 on a server. IPv6 is an afterthought.
> We are still decades away from it being viable for servers to run IPv6 first.
Just put Cloudflare in front of it. You don’t need to use IPv4 on servers AT ALL. Only on the edge. You can easily run IPv6-only internally. It’s definitely not an afterthought for any new deployments. In fact there’s even a US gov’t mandate to go IPv6-first.
It’s the eyeballs that need IPv4. It’s a complete non-issue for servers.
Listen, you can be assured that the geek in me wants to master IPv6 and run it on my home network and feel clever because I figured it out, but there's another side of me that wants my networking stuff to just work!
If you don’t want to put Cloudflare in front of it, you can dual-stack the edge and run your own NAT46 gateway, while still keeping the internal network v6 only.
You have a point. But you still need DNS to an IPv4 address. And the fact that about 70% of websites are IPv4 only means that if you're setting up a new website, odds are good that you won't do IPv6 in the first pass.
Cloudflare proxy automatically creates A and AAAA records. And you can’t even disable AAAA ones, except in the Enterprise plan. So if you use Cloudflare, your website simply is going to be accessible over both protocols, irrespective of the one you actually choose. Unless you’re on Enterprise and go out of your way to disable it.
Actually, my bad. NAT was NEVER standardized. Not only was NAT never standardized, it’s never even been on the standards track. RFC 3022 is just “Informational”.
Plus, RFC 1918 doesn’t even mention NAT
So yes, NAT is a bug in history that has no right to exist. The people who invented it clearly never stopped to think about whether they should, so here we are 30 years later.
That doesn't really mean much. Basic NAT wasn't eligible to be on the standards track as it isn't a protocol. Same reason firewall RFCs are informational or BCP.
The protocols involving NAT are what end up on the standards track like FTP extensions for NAT (RFC 2428), STUN (RFC 3489), etc.
It didn’t speed up adoption, and people back then tried most of the other solutions people are going to suggest for IPv4+. Want the IPv4 address as the network address instead? That’s 2002:a.b.c.d/48 - and many ISPs didn’t deploy that either.
Think of it like phone numbers. For decades people have accepted gradual phone number prefix additions. I remember in rural Ireland my parents got an extra digit in the late 70s, two more in the 90s, and it was conceptually easy. It didn't change how phones work, turn your phone into a party line or introduce letters or special characters into the rotary dial, or allow you to skip consecutive similar digits.
For people who deal with IP addresses, the switch from ipv4 to ipv6 means moving from four short numbers (1.2.3.4) to this:

    2001:0db8:0000:0000:0000:0000:0000:0001
    2001:db8:0:0:0:0:0:1
    2001:db8::1
Yes, the ipv6 examples are all the same address. This is horrible. Worse than MAC addresses because it doesn't even follow a standard length and has fancy (read: complex) rules for shortening.
Plus switching completely to ipv6 overnight means throwing away all your current knowledge of how to secure your home network. For lazy people, ipv4 NAT "accidentally" provides firewall-like features because none of your home ipv4 addresses are public. People are immediately afraid of ipv6 in the home and now they need to know about firewalls. With ipv4, firewalls were simple enough: "My network starts with 192.168, the Internet doesn't". You need to unlearn NAT and port forwarding and realise that with already-routable ipv6 addresses you just need a firewall with default deny, and then add rules that "unlock" traffic on specific ports to specific addresses. Of course more complexity gets in the way... devices use "Privacy Extensions" and change their addresses, so to make firewall rules work long-term you have to key them on the device's MAC address. Christ on a bike.
I totally see why people open this bag of crazy shit and say to themselves "maybe next time I buy a new router I'll do this, but right now I have a home with 4 phones, 3 TVs, 2 consoles, security cameras, and some god damn kitchen appliances that want to talk to home connect or something". Personally, I try to avoid fucking with the network as much as possible to avoid the wrath of my wife (her voice "Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?").
What is confusing about that? That's like complaining that you can write an IPv4 address as 001.002.003.004 or 1.2.3.4. Even the :: isn't much different from being able to write 127.0.0.1 as 127.1 (except it now becomes explicit that you've elided the zeroes).
While it's possible to write an ipv4 address in a bunch of different ways (it's just a number, right?), nobody does it, because ipv4 standard notation is easy to remember. ipv6 is not, and none of these attempts to simplify it really work because they change the "format". I understand it and you understand it, but the point here is that it's unfriendly to anyone who isn't familiar with it.
These are all the same address too:
1.2.3.4, 16909060, 0x1020304, 0100401404, 1.131844, 1.0x20304, 1.0401404, 1.2.772, 1.2.0x304, 1.2.01404, 1.2.3.0x4, 1.2.0x3.4, 1.2.0x3.0x4, 1.0x2.772, 1.0x2.0x304, 1.0x2.01404, 1.0x2.3.4, 1.0x2.3.0x4, 1.0x2.0x3.4, 1.0x2.0x3.0x4, 0x1.131844, 0x1.0x20304, 0x1.0401404, 0x1.2.772, 0x1.2.0x304, 0x1.2.01404, 0x1.2.3.4, 0x1.2.3.0x4, 0x1.2.0x3.4, 0x1.2.0x3.0x4, 0x1.0x2.772, 0x1.0x2.0x304, 0x1.0x2.01404, 0x1.0x2.3.4, 0x1.0x2.3.0x4, 0x1.0x2.0x3.4, 0x1.0x2.0x3.0x4
v6 has optional leading zeros and ":: splits the address in two where it appears". v4 has field merging, three different number bases, and it has optional leading zeros too but they turn the field into octal!
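You can check those against the system parser; inet_aton() (at least the glibc/BSD implementations) still accepts the whole museum of classic forms:

    #include <stdio.h>
    #include <arpa/inet.h>

    /* inet_aton() accepts dotted quads, fewer parts (the last part fills
     * the remaining bytes), and decimal/octal/hex in each part. All of
     * these parse to the same address. */
    int main(void) {
        const char *forms[] = {
            "1.2.3.4", "16909060", "0x1020304", "0100401404",
            "1.2.772", "1.2.0x304", "1.0x2.0x3.0x4",
        };
        struct in_addr a;
        for (size_t i = 0; i < sizeof forms / sizeof *forms; i++)
            if (inet_aton(forms[i], &a))
                printf("%-12s -> %s\n", forms[i], inet_ntoa(a));
        return 0;
    }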
"Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?"
LOL. Yup. What can I do after this? The answer is basically "nothing really" or "maybe go find some other internet connection that also has IPv6 and directly connect to one of my computers inside the network" (which would have been firewalled, I'd hope, so I'd, what, have to punch open a hole in the firewall so my random internet connection's IPv6 can have access to the box? How does that work? I could have just VPN'd in in the IPv4 world).
Seriously though, how do I "cherry pick hole punch" random hotel internet connections? It's moot anyway because no hotel on earth is dishing out publicly accessible IPv6 addresses to guests....
Hardware support for ipv6 hasn't been the limiting factor in a long time. Users higher on the stack don't want to adopt something that makes so many unnecessary changes.
You’re focusing on the technical difficulty of implementing it in software. This is not the problem. IPv6 support is now present in almost every product, but people still refuse to set it up because it’s so different to what they’re used to (I’m not arguing whether the changes are good - they’re just changes). IPv4+ would’ve solved this social problem.
There’s absolutely, utterly zero chance IPv4+ would be adopted. CGNAT is the solution to the social problem.
I don’t even buy your way of thinking - unlike an “engineering” solution or an “incentives” solution, the problem with “social solutions I speculate about” is: they offer nothing until implemented. They are literally all the same, no difference between the whole world of social solutions, until they are adopted. They are meaningless. They’re the opposite of plans.
Like what’s the difference between IPv4+, which doesn’t exist, and “lets pass a law that mandates ipv6 support”? Nothing. This is what the mockery of “just pass a law” is about. I don’t like those guys, but they are right: it’s meaningless.
This is a good point. The same “why should I migrate” would affect it. However, it being so close to the predecessor, it would be much easier to do so, catching way more people who are on the fence.
The IPv4+ could pass through a router that doesn't know about it - the cloud host that receives that packet could interpret it in a special way, in fact you could stuff additional data into the next layer of the stack for routing - it's not like many services beyond TCP would need to support the scheme.
> The IPv4+ could pass through a router that doesn't know about it
It couldn't do that reliably. We don't have any flags left for that. Options are not safe. We've got one reserved flag, which is anyway set to 0, so that's not safe either.
There's the reserved bit (aka the evil bit[1]). Are you saying gear out there drops packets with reserved bit set to 1? Wouldn't surprise me, just curious.
Seems like IPv4+ would have been a good time to use that bit. Any IPv4+ packets could have more flags in the + portion of the header, if needed.
That bit is currently defined as "Bit 0: reserved, must be zero", so there will be network gear out there, that either drops the packet otherwise or resets the bit to 0 when forwarding.
That makes it effectively impossible to ever use, then, so a waste of a bit. Too bad they made that mistake when writing the spec. It would have been better if they had specified it like most APIs do: ignore it if you receive it, carry it if you forward, and set it to zero if you originate.
It depends what you want to achieve. If we had some feature which is actually incompatible and needed everything else to set it to 0, then it would be perfect. It's not a mistake when you don't predict the future.
This whole discussion reminds me of the beautiful design of UTF-8. They used the lower bits to be ASCII, which made backwards compatibility so much easier. It also reminds me of the failure of Intel's Itanium and the success of AMD's x86-64. Engineers often want to abandon backwards compatibility to make a new "beautiful" design, but it's the design with full backwards compatibility that's actually impressive.
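The trick is visible in the encoder itself: code points below 0x80 come out as the identical single byte, which is exactly why every ASCII file was already valid UTF-8 on day one. A minimal sketch:

    #include <stdio.h>
    #include <stdint.h>

    /* Minimal UTF-8 encoder (no validation): ASCII passes through as-is. */
    static int utf8_encode(uint32_t cp, unsigned char out[4]) {
        if (cp < 0x80) {                 /* 0xxxxxxx: identical to ASCII */
            out[0] = (unsigned char)cp;
            return 1;
        } else if (cp < 0x800) {         /* 110xxxxx 10xxxxxx */
            out[0] = 0xC0 | (cp >> 6);
            out[1] = 0x80 | (cp & 0x3F);
            return 2;
        } else if (cp < 0x10000) {       /* 1110xxxx 10xxxxxx 10xxxxxx */
            out[0] = 0xE0 | (cp >> 12);
            out[1] = 0x80 | ((cp >> 6) & 0x3F);
            out[2] = 0x80 | (cp & 0x3F);
            return 3;
        } else {                         /* 11110xxx then three 10xxxxxx */
            out[0] = 0xF0 | (cp >> 18);
            out[1] = 0x80 | ((cp >> 12) & 0x3F);
            out[2] = 0x80 | ((cp >> 6) & 0x3F);
            out[3] = 0x80 | (cp & 0x3F);
            return 4;
        }
    }

    int main(void) {
        unsigned char buf[4];
        int n = utf8_encode('A', buf);   /* 1 byte: 0x41, plain ASCII */
        printf("'A'    -> %d byte(s), first 0x%02X\n", n, buf[0]);
        n = utf8_encode(0xE9, buf);      /* U+00E9 'é': 0xC3 0xA9 */
        printf("U+00E9 -> %d byte(s), first 0x%02X\n", n, buf[0]);
        return 0;
    }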
It reminds me of python 3. Basically, a huge chunk of people (in my case, scientific programming) get an enormous mess and nothing at all of value until... 3.6 maybe (the infix matrix mult operator). Stunningly, people weren't enthused about this deal.
It would maybe be okay to break some things at the router, but ffs even in software I have to choose? Why do I need both ping and ping6? This is stupid!! They really screwed up by making it a breaking change to the OS and not just to internet routing.
They didn't screw up. They made it a breaking change to OSs because it had to be a breaking change to OSs. If anyone screwed up here, it was the people who made v4, not the ones that made v6.
For ping, I think it originally had different binaries because ICMPv4 and ICMPv6 are different protocols, but Linux has had a dual-stack `ping` binary for a very long time now. You can just use `ping` for either address family.
The only solution is a gov't mandate. China went from almost no adoption to leading the world in adoption (77% of all Chinese internet users) in a few years because they explicitly prioritized it in their last 5-year-plan.
The US government has finally learnt from how vendors break mandates: there's now an IPv6 mandate if you want to sell to the federal government, and waivers are only available for buyers, not vendors, and individually every time.
I wouldn't say "failure". There are many, many IPv6 client devices out there, mostly on mobile networks. And it works great and they do well and the tools all support it very well.
But IPv4 will never, ever die. The rise of NAT as a pervasive security paradigm[1] basically neuters the one true advantage IPv6 brought to the table by hiding every client environment behind a single address, and the rise of "cloud everything" means that no one cares enough about reaching peer devices anyway. Just this morning my son asked me to share a playlist, so of course I just send him a link to a YouTube Music URL. Want to work on a spreadsheet for family finances with your spouse in the next room? It lives in a datacenter in The Dalles.
[1] And yes, we absolutely rely as a collective society on all our local devices being hidden. Yes, I understand how it works, and how firewalls could do this with globally writable addresses too, yada yada. But in practice NAT is best. It just is.
> I wouldn't say "failure". There are many, many IPv6 client devices out there, mostly on mobile networks.
Honestly it's a huge success due to this fact alone.
IPv6 is failure only if you measure success by replacing IPv4 or if you called "time" on it before the big mobile providers rolled it out. The fact that all mobile phones support it and many mobile networks exclusively deploy it tells you what you really need to know.
IPv6 is a backbone of the modern Internet for clients, even if your servers don't have to care about it due to nat64.
I toyed with using ipv6 in my local network just to learn it and what a headache that was. Ultimately not worth the hassle. I can remember most of the important device ipv4 addresses on my network; I can't say the same for v6.
This is the first time I've heard this critique. I think most people don't care if their IP address is easily human readable/memorizable. In my experience when people do deal with ipv4/v6 addresses directly, they just copy-paste.
Man, readability of IP numbers is an important thing. You are not always in a situation where you can simply copy the address.
I can tell you what is what simply from the IPv4 address, but when it's IPv6, my dyslexia is going to kick my behind.
Readability reduces errors, and IPv6 is extremely unreadable. And we have not even talked yet about prefixes, postfixes, that :: range indicator, ... Reading an IPv6 network stack is just headache inducing, whereas IPv4 is not always fun but way more readable.
They could have just extended IPv4 with an extra range, like 1.192.120.121.122, 2..., and you'd have another 255 IPv4 spaces... They did the same thing for Belgian number plates (1-abc-001) and those will run out somewhere in the year 11990 lol...
The problem is that IPv6 is over-engineered and had no proper transition from IPv4 to IPv6 built in, and that is why, 30 years later, we are still dealing with the fallout.
Genuinely speaking, that sounds like a process issue if you really can't copy/paste. Perhaps you don't have control over whichever scenario you're talking about but not describing, but data entry is famously error prone regardless of it being 12 characters or 32, and if you're trying to focus on reliability, avoiding errors, you should be avoiding it at all costs.
Circa 1999 I was working for Cisco as a sysadmin. I got my CCNP through internal training and considered making a career of network administration, but ipv6 changed my mind. It seemed so much more difficult and unpleasant to deal with. I didn't want that to be my day to day work.
I think the same thing happens on a different scale with ISPs. They don't want to deal with it until they have to for largely the same reason.
> It seemed so much more difficult and unpleasant to deal with.
In my experience it’s much easier and much more pleasant to deal with. Every VLAN is a /64 exactly. Subnetting? Just increment on a nibble boundary. Every character can be split 16 ways. It’s trivial.
You don’t even need to use a subnet calculator for v6, because you can literally do that in your head.
Network of 2a06:a003:1234:5678::555a:bcd7/64? Easy - the first four groups.
Network of 10.254.158.58/27? Your cheapest shotgun and one shell please.
If you have a /48 assigned, you’ll burn the prefix into your brain. That leaves 16 bits for the network id.
e.g. you’ll get 2a06:a003:1234::/48 from the ISP - what you’ll really need to remember is the 2a06:a003:1234:xxxx::/64 part. And I use the VLAN id for the xxxx part. Trivial.
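As a toy illustration of that scheme (using the example /48 from above; the VLAN ids are arbitrary):

    #include <stdio.h>

    /* The fourth group of the /64 is just the VLAN id in hex, appended
     * to the ISP-assigned /48. No subnet calculator required. */
    int main(void) {
        unsigned vlans[] = { 1, 10, 42, 4094 };
        for (size_t i = 0; i < sizeof vlans / sizeof *vlans; i++)
            printf("VLAN %4u -> 2a06:a003:1234:%x::/64\n",
                   vlans[i], vlans[i]);
        return 0;
    }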
At first I thought so too, but IPv6 is actually easier. Instead of CIDR you always have 64 bits for the network and 64 for the host. You get a public /48 IPv6 prefix that allows for 16 bits of subnets, and then the host addresses can just start at 1 if you really want. So addresses can be prefix_1_1 if you want. And the prefix is easy to memorize since it never changes.
I DO think using 64 bits for hosts was stupid but oh well.
That seems oddly rigid though. I need to know in advance which networks will definitely never need subnetting so I can assign them a /64.
Why have so, so many address bits and then give us so few for subnetting? People shame ISPs endlessly for only giving out /56s instead of /48s, pointing at the RFCs and such. But we still have 64 entire bits left over there on the right! For what? SLAAC? Was DHCP being stateful really such a huge problem that it deserves sacrificing half of our address bits?
"The actual intention has always been that there be no hard-coded boundaries within addresses, and that Classless Inter-Domain Routing (CIDR) continues to apply to all bits of the routing prefixes."
I actually think it would have had a better chance of success if ipv6 had embraced the breaking changes to add some killer feature that would have made it worthwhile to upgrade even for entities who didn't need to worry about running out of ipv4 addresses.
IPv6's failure was mostly caused by the IETF's ivory tower dwellers, who seem to generally have no practical experience or understanding whatsoever of how networks are actually built and run today, especially at the small to mid scale.
Small site multihoming, for example, is an absolute disaster. Good luck if you're trying to add a cellular backup to your residential DSL connection.
IETF says you should either have multiple routers advertising multiple provider-assigned prefixes (a manageability nightmare), or that you should run BGP with provider independent address space; have fun getting your residential ISP or cellular carrier onboard with this idea.
IETF has a history of being hostile to network operators. I mean actual network operators - not the people who show up at conferences or work the mailing list who just happen to get a paycheck from a company that runs a network (and have zero production access / not on call / not directly involved in running shit). It's gotten better in the last few years in certain areas (and credit to the people who have been willing to fight the good fight). But it's very much a painful experience where you see good ideas shot down and tons of people who want to put their fingerprint on drafts/proposals - it's still a very vendor heavy environment.
Even the vendor representatives are mostly getting paid to post on mailing lists and show up at conferences.
They're not building products, and they're not supporting, visiting or even talking to their customers. Design-by-committee is a full time job that people actually building things for a living tend to not have time for.
The fact is that already in 1993 routing tables were just too big, and the fact is that having a "flat" address space was always going to mean huge routing tables, and the fact is that because IPv6 is still "flat" routing tables only got larger.
The fix would have been to have a subset of the address space that is routed as usual for bootstrapping ex-router address->AS number mapping, and then do all other routing on the basis of AS numbers _only_. This would have allowed us to move prefix->AS number mappings into.. well, DNS or something like it (DNS sucks for prefix mapping, but it could have been extended to not suck for prefix mapping), and all routing would be done based on AS numbers, making routing tables in routers _very small_ by comparison to now. Border routers could then have had tiny amounts of RAM and worked just fine. The IP packets could have borne AS numbers in addition to IP addresses, and all the routers in the middle would use only the AS numbers, and all the routers at the destination AS would know the routes to the destination IPs.
But, no. Great missed chance.
Well, we still could do this with IPv6, but it would be a lot of heavy lifting now.
EDIT: Ah, I see draft-savola-multi6-asn-pi existed.
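For flavor, a hypothetical sketch of what a core router's table could collapse to under that scheme. The names and layout are invented, the ASNs are from the documentation range, and the real proposal would need much more (e.g., the bootstrap mapping):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical: core routers forward on the destination AS number
     * carried in the packet, so the "routing table" is just a next hop
     * per ASN -- exact-match lookups over roughly 100k ASNs instead of
     * longest-prefix-match over ~1M BGP prefixes. Entirely invented. */
    struct asn_route {
        uint32_t dst_asn;   /* destination AS number from the packet */
        uint32_t next_hop;  /* neighbor / interface index */
    };

    int main(void) {
        struct asn_route table[] = {
            { 64496, 1 },   /* documentation ASNs, arbitrary next hops */
            { 64497, 2 },
        };
        for (size_t i = 0; i < sizeof table / sizeof *table; i++)
            printf("AS%u via neighbor %u\n",
                   table[i].dst_asn, table[i].next_hop);
        return 0;
    }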
> a cellular backup to your residential DSL connection
Hmm, what's the problem? I suppose your home devices should never be exposed to the public internet, and should only be accessible via a VPN like Wireguard. NAT64 is a thing if your home network is IPv4.
BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Yes, an interface can have two separate IPv6 addresses, but that doesn't make it easy.
If you do the easy and obvious thing of setting up two routers to advertise their prefix with your preferred priority when they're available (and advertise it as unavailable when they're not), your devices are likely to configure themselves for addresses on both prefixes, which is great.
Then when you open a new tcp connection (for example), they'll pick a source address more or less randomly... There's an RFC suggestion to select the source address with the longest matching prefix against the destination address, which is useful if the prefix is pretty long, but not so useful when the prefix is 2001:: vs 2602::
Anyway, once the source address is selected, the machine will send the packet to whichever router most recently sent an announcement. Priorities only count among prefixes in the same announcement. If you manage to get a connection established, future packets will use the same source address, but will be sent as appropriate for the most recently received advertisement.
This is pretty much useless; if you want it to work well, you're better off with NAT66 and a smart NAT box.
This is so, and it's the same if you use IPv4. IPv6 does not bring any regression here; sadly, no progress either. If you have a server that listens to requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
I would experiment with advertising two default routes, one with a significantly higher metric than the other. Most / all outgoing traffic would go through one link then. If you want to optimally load both uplinks, you likely need a more intelligent (reverse) load balancer.
> If you have a server that listens to requests though, such as an HTTP server, I don't see how this setup would be grossly inadequate for the purpose.
That's the problem. It sounds like it would work if you do this. The documentation suggests multi homing like this would work. When your server gets a request, it sends back the response from the address it received on... but the problem is what router it sends to; when it sends to the correct router, everything is good, when it sends to the wrong router, that router's ISP should drop the packets, because they come from a prefix they don't know about.
> I would experiment with advertising two default routes, one with a significantly higher metric than the other.
Sounds like it would work, but as far as I've found, the priority metric only works if the prefixes are in the same advertisement. If each router advertises its own prefix, the actual metric used is most recent advertisement wins as default route.
As I recall, I tried Windows, Linux, and FreeBSD and it was circa 2020. 25 years in, bad OS support for a supposed feature means the feature doesn't work.
> BTW what's the trouble with multi-homing? Can't an interface have two separate IPv6 addresses configured on it, the same way as IPv4 addresses?
Because it breaks your network when that router goes away. Your switch ACLs, firewall rules, and DNS records all become invalid because they contain addresses that no longer exist, that your devices continue trying to reach anyway.
Ah, I understand what you likely mean saying "small site multihoming": not a Web site (where it would be trivial), but e.g. a small office.
But with multi-homing you would need to actively test which of your uplinks has Internet access anyway, won't you? And you would have to react somehow when one of your uplinks goes down.
It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
I still assume that you don't want the internals of your office network directly accessible via the public Internet, even when you easily can; VPNs exist for a reason.
In the IPv4 world, it's easy. Just use NAT, and forward everything over your preferred bearer. Have your router ping 8.8.8.8 or something periodically from that WAN interface to verify reachability. If your preferred link goes down, make your backup link the primary route, clear your NAT translation table, and your local devices remain mostly oblivious that anything happened.
> It's easiest to do by abstracting your site away. Make it use a LAN, and do port-forwarding and proxying through a box that knows about the multiple uplinks, and handles the switch-over when one of them goes down. I don't see how it might be easier with IPv4 than with IPv6.
In the IPv6 world, this is pretty much what you have to do. A whole lot of extra complexity and expense that you didn't have previously.
Extra complexity and expense? You're describing basically the same thing they are. A router that does NAT and decides which link to send the packets over based on connection testing.
And IPv6 has the benefit of a significantly simpler 1:1 NAT.
NPTv6 is rarely used, and so its real world implementations tend to be poorly tested and buggy.
The answer in this case ends up being solutions like explicit web proxies, or alternatively a VPN concentrator or the like from which you can receive a routable prefix delegation, and then run multiple tunnels to satisfy your own availability or policy routing needs. Either way, you’re building some complex infrastructure to overcome regressions imposed upon you at layer 3.
You should be using dynamic DNS, and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
That doesn't solve the problem. DNS remains broken until each and every device, assuming VERY generously that it is capable of dynamic DNS at all, realises that one of its prefixes has disappeared and it updates its DNS records. With DNS TTL and common default timeouts for prefix lifetime and router lifetime, that can take anywhere from 30 minutes to 30 days.
> and firewall rules should be on the subnet boundary in this scenario; any decent firewall (including the free pfSense/OPNsense) supports ACLs that follow IPv6 address changes.
This requires you to assign one VLAN per device, unless perhaps you've got lots of money, space, and power to buy high end switches that can do EVPN-VXLAN so that you can map MAC addresses to SGTs and filter on those instead.
> each and every device ... updates its DNS records.
What device on your office LAN should maintain its own DNS records? Advertise your own caching DNS server over DHCP(6), give its responses a short TTL (10 sec), make it expire the relevant entries, or the whole cache, when one of your links goes down. I suppose dnsmasq should handle this easily.
It seems that the discussion turned away from a multi-homed setup (pooling the bandwidths of two normally reliable links) to an HA/failover setup (with two unreliable links, each regularly down).
It either needs to be able to update DNS by itself (a la Active Directory), or it needs to be able to give the DHCP server a sensible hostname in order for DHCP to make this update on its behalf, which most IoT devices cannot.
The amount of ignorance in these ipv6 posts is astounding (there seems to be one every two months). It isn't hard at all: I'm just a homelabber and I have a dual-stack setup for WAN access (an HE tunnel is set up on the router since Bell [my isp] still doesn't give ipv6 addresses/prefixes to non-mobile users), but my OpenStack and ceph clusters are all ipv6 only, and it's easy peasy. Plus subnetting is a heck of a lot less annoying than with ipv4, not that that was difficult either.
“it’s easy peasy” says guy who demonstrably already knows and has time to learn a bunch of shit 99.9% of people don’t have the background or inclination to.
People like you talking about IPv6 have the same vibe as someone bewildered by the fact that 99.9% of people can’t explain even the most basic equation of differential or integral calculus. That bewilderment is ignorance.
"The shit about IPv4" was easy to learn and well documented and supported.
"The shit about IPv6" is a mess of approaches that even the biggest fanboys can't agree on and are even less available on equipment used by people in prod.
IPv6 has failed wide adoption in three decades; calling it "easy" is outright denying the reality and shows the utter dumb obliviousness of people trying to push it while failing to realize where the issues are.
Could you share a list of IPv6 issues that IPv4 does not exhibit? Something that becomes materially harder with IPv6? E.g., "IPv6 addresses are long and unwieldy, hard to write down or remember". What else?
Traffic shaping in v6 is harder than in v4. At least it was for me, because NDP messages were going into the shaping queue, but then getting lost, since the queue only had a 128-bit address field and 128 bits isn't actually enough for local addresses. When the traffic shaping allowed traffic immediately, the NDP traffic would be sent, but if it needed to be queued, the adapter index would get lost (or something) and the packets disappeared. So I'd get little bursts of v6 until NDP entries timed out, and small queues meant a long time before it would work again.
Not an issue in ipv4 because ARP isn't IPv4 so IP traffic shaping ignores it automatically.
Software support is a big one. I ran pfSense. It did not support changing IPv6 prefixes. It still barely does. So something as simple as having reliable IPv6 connectivity and firewall rules with pfSense was impossible just a few years ago for me.
Android doesn't support DHCPv6, so I can't tell it my preferred NTP server, and Android silently ignores your local DNS server if it is advertised with an IPv4 address and the Android device got an IPv6 address.
Without DHCPv6 then dynamic DNS is required for all servers. Even a 56 bit prefix is too much to remember, especially when it changes every week. So then you need to install and configure a dynamic DNS client on all servers in your network.
"I already know enough to be productive, can the rest of the world please freeze and stop changing?"
This is not even that unreasonable. Sadly, the number of IP devices in the world by now far exceeds the IPv4 address space, and other folks want to do something about that. They hope the world won't freeze but would sort of progress.
Network engineering is a profession requiring specific education. At a high level it’s not different from calculus. You learn certain things and then you learn how to apply them in the real life situations.
It’s not hard for people who get an appropriate education and put some effort into it. Your lack of education is not my ignorance.
company where i work has deployments across the world with few hundreds of thousands of hardware hosts (in datacenters), vms and containers + deployments in a few clouds. also a bunch of random hardware from multitude of vendors. multiple lines for linking datacenters and clouds. also some lines to more specific service providers that we are using.
all of it ipv4 based. ipv6 maybe in the distant future, somewhere on the edge, in case our clients demand it.
I find this completely fine. I don't see much (if any) upside in migrating a large existing network to anything new at all, as long as the currently deployed IPv4 is an adequate solution inside it (and it obviously is).
Public-interfacing parts can (and should) support IPv6, but I don't see much trouble exposing your public HTTP servers (and maybe mail servers) using IPv6, because most likely your hosting / cloud providers do 99.9% of it already, out of the box (unless it's AWS, haha), and the rare remaining cases, like, I don't know, a custom VPN gateway, are not such a big deal to handle.
I ran the network team at an organization with hundreds of thousands of hardware hosts in tens-of-megawatts data centers, millions of VMs and containers, links between data centers, links to ISPs and IXes. We ran out of RFC1918 addresses around 2011-2012 and went IPv6-only. IPv4 is delivered as a service to nodes requiring it via an overlay network. We intentionally simplified the network design by doing so.
I should have been gentler and less arrogant, yes. Sincerely though, please explain how ipv6 is in anyway more difficult than a properly set up ipv4 enterprise. What tools are not available?
I left my job as an NE/architect over 15 years ago, but the show stopper back then revolved around how to handle routing with firewalling, firewalling being the biggest roadblock due to needing traffic symmetry. I'm doing my best to remember why we stopped at just providing v6 at the edge for site-specific Internet hosted services and never pushed it further.
Mind you, our team discussed this numerous times over a few years and never came up with a solution that didn't look like it would require us to completely fork-lift what we were doing. The whole team was FOR getting us to v6, so there was no dogmatic opposition.
Consider this:
25k employee company. Four main datacenter hubs spread out across the USA with 200 remote offices evenly dual-homed into any two of the four.
All four of the DCs had multi-ISP Internet access advertising their separate v4 blocks and hosting Internet services. The default-route was redistributed into the IGP from only two locations, sites A and B, i.e., two of the four DCs were egress for Internet traffic from the population of users and all non-internet-facing servers. IGP metrics were gently massaged so as to use both sites fairly equally.
All outbound traffic flowed naturally out of the eastern or western sites based on IGP metrics. This afforded us a tertiary failover for outbound traffic in the event that both of the Internet links into one of the two egress sites was down. e.g., if both of site A's links (say, level-3 and att) were down, the route through site A was lost, and all the egress traffic was then routed out site B (and vice-versa). This worked well with ipv4 because we used NAT to masquerade all the internal v4 space as site X's public egress block. Therefore all the return traffic was routed appropriately.
BGP advertisements were either as-path prepended or supernetted (don't remember which) such that if site A went down, site B, C, or D would get its traffic, and tunnel it via GRE to the appropriate DC hub's external segment.
The difficulty was that traffic absolutely had to flow symmetrically because of the firewalls in place, and easily could for v4 because NAT was happening at every edge.
With v6 it just didn't seem like there was any way to achieve the same routing architecture / flexibility, particularly with multi-homing into geographically disparate sites.
I'm not sure anymore where we landed, but I remember it being effectively insurmountable. I don't think it was difficult for Internet-hosted services, but the effort seemed absolutely not worth it for everything on the inside of the network.
I want to send my ssh via my low-latency, reliable connection and route my streaming via another connection. That's just a routing rule and srcnat in IPv4.
That’s before you go on to using PBR. I want to route traffic with different dscp via different routes.
Ultimately I want the routing to be handled by the network, not by the client.
Without NAT, my understanding is that the right way in v6 is to issue addresses from every network, then send a message to each end device asking it to use a specific IP address to route traffic, and hope every client implements RFC 4191 in the right way.
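To make the kind of policy I mean concrete, here's a toy sketch (the addresses and classification rules are all invented; a real router would express this as routing rules plus source NAT, not application code):

    # Toy model of the policy I want the network to apply: classify a flow,
    # pick an egress link, and SNAT to that link's address so replies come
    # back symmetrically. All addresses and thresholds here are made up.
    from dataclasses import dataclass

    @dataclass
    class Egress:
        gateway: str
        snat_to: str  # the link's public address; return traffic follows it

    LINKS = {
        "low_latency": Egress("203.0.113.1", "203.0.113.10"),   # fibre
        "bulk":        Egress("198.51.100.1", "198.51.100.10"), # cable
    }

    def pick_link(dst_port: int, dscp: int) -> str:
        # ssh and EF-marked traffic take the reliable low-latency path
        return "low_latency" if dst_port == 22 or dscp >= 46 else "bulk"

    link = LINKS[pick_link(dst_port=22, dscp=0)]
    print(f"route via {link.gateway}, SNAT to {link.snat_to}")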
The "proper" way would be to get your own ASN and use BGP to route the traffic.
If you're wanting to use a secondary WAN link as a backup for when the other goes down you could have the backup link's LAN have a lower priority. (So I guess hope everything implements RFC 4191 like you said).
You can use NAT66/NPTv6 if you want (though it's icky I guess).
With NAT on v4, it's trivial. But IPv6 advocates tell me how terrible NAT is, despite it being the only solution in both the v6 and v4 worlds.
Sadly my 4G provider will not peer via BGP with me, even if I could provide an AS and a sufficiently large IP range.
I think my home ISP will actually peer with me, but I’d have to tunnel to them over my non-fibre connection, and there’s reduced resilience in that case.
At work that wouldn’t help at all, there are very few providers for many of our branch offices.
So once again IPv6 only works with "icky" NAT, or on simple 1990s-style connections, and not in the real world of multiple providers. Now sure, I can do NPT, which means I don't need to keep track of state, but if I don't keep track of state I lose the benefits of a stateful firewall.
As such, the only benefit of NAT on v6 is that source ports never need to change, even if client 1 and client 2 both send to server 1 port 1234 from source port 5555. This helps with a handful of crappy protocols which embed layer-4 data (the port number) in a layer-6 or -7 protocol.
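To be fair to NPT, the statelessness is real: it's just a prefix swap. A minimal sketch (prefixes invented; real NPTv6 per RFC 6296 also adjusts a 16-bit word so transport checksums stay valid, which this toy skips):

    # Minimal sketch of stateless prefix translation (NPT-style): swap the
    # routing prefix, keep the host bits, no per-flow state anywhere.
    import ipaddress

    INTERNAL = ipaddress.ip_network("fd00:1234::/64")   # made-up ULA prefix
    EXTERNAL = ipaddress.ip_network("2001:db8:1::/64")  # made-up GUA prefix

    def translate(addr, src, dst):
        host_bits = int(ipaddress.IPv6Address(addr)) & ((1 << (128 - src.prefixlen)) - 1)
        return str(ipaddress.IPv6Address(int(dst.network_address) | host_bits))

    # Outbound: internal host appears under the external prefix.
    print(translate("fd00:1234::42", INTERNAL, EXTERNAL))  # 2001:db8:1::42
    # Inbound: same function, reversed.
    print(translate("2001:db8:1::42", EXTERNAL, INTERNAL))  # fd00:1234::42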
I've been thinking we could simply extend the ipv4 address to be 11 bytes by (ab)using the options field. That is, add an option that holds more bytes for the source and destination address, which are to be appended to the address already present in the header.
I am thinking that since an option starts with 2 bytes and everything must be padded to a multiple of 4 bytes, we can add 16 bytes to the packet, which would hold 7 extra address bytes per source and destination, giving us 11 byte addresses. ISPs would be given a bunch of 4-byte toplevel addresses and can generate 7-byte suffixes dynamically for their subscribers, in a way that is almost the same as CGNAT used today but without all the problems that has.
Most routers will only need to be updated to pass along the option and otherwise route as normal, because the top level address is already enough to route the packet to the ISP's routers. Then only at the edge will you need to do extra work to route the packet to the host. Not setting the option would be equivalent to setting it to all 0s, so all existing public hosts will be automatically addressable with the new scheme.
There will of course need to be a lot more work done for DNS, DHCP, syntax in programs, etc, but it would be a much easier and more gradual transition than IPv6 is demanding.
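To show the framing actually fits, here's a toy Python encoding of such an option (the option number is hypothetical; the real blocker would be middleboxes that drop packets carrying options they don't recognize):

    # Hypothetical "extended address" IPv4 option as described above:
    # 1 byte type + 1 byte length + 7 extra source bytes + 7 extra
    # destination bytes = 16 bytes, a multiple of 4, so it fits the IHL
    # granularity with no padding. The option number is made up.
    import struct

    OPT_EXT_ADDR = 0x9E  # copy flag set, option number 30 (experimental range)

    def build_ext_addr_option(src_hi, dst_hi):
        assert len(src_hi) == 7 and len(dst_hi) == 7
        return struct.pack("!BB7s7s", OPT_EXT_ADDR, 16, src_hi, dst_hi)

    opt = build_ext_addr_option(b"\x00" * 7, b"\x01\x02\x03\x04\x05\x06\x07")
    assert len(opt) == 16  # all zeros in src = a plain legacy IPv4 host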
I don't think so. It would create more confusion, because no one would know whether a network is ipv4 or ipv4+, leading to edge-case bugs, and people would similarly be lazy and choose to implement only ipv4, knowing it will always be backward compatible and the cost is transferred to the consumer.
Plus, it's only 2048x the address space. It's within the realm of possibility that we will need to upgrade again once this place is swarming with robots.
x2048 is a lot though! Maybe we should let the robots figure out their own solution, rather than trying to make every atom on Earth individually addressable :)
ipv6 adoption is still steadily rising. Not as fast as anyone hoped, but at least steadily. There is no way it can be abandoned at this point even if we wanted to.
I wonder if it could still be usurped by another standard that is somehow more popular. If adoption of that leapfrogs over IPv6, then maybe it will have just been a waypoint along the way.
What would a new standard do that would make it more popular? IPv6, for all its faults, is designed to be the last Internet Protocol we will ever need.
In the new standard every publicly routable packet will include a cryptographically signed passport number of the responsible person.
Then the government could, for example, limit criminals' access to the internet by mandating that their packets be dropped on most major ISPs, or at least deprioritised.
Funny enough I actually looked at a scheme for corporate networks where your personal corporate ID is encoded as part of the host bits of the IPv6 packet and policy could be applied based on who you are instead of what machine it is (or both). It was kind of neat but the complexity was too high for it to gain traction, and also it turns out that most corporate networks are allergic to IPv6 and government networks doubly so.
Stripped of all the other baggage that came with it (e.g. SLAAC, IPsec, etc) IPv6 is an incredibly conservative addressing extension. The only thing even more conservative than v6 would have been to drop the lower 64 bits of the address and the associated EUI-64 local addressing scheme. Which... to be fair, that turned out to be a very bad idea, but the length of the field isn't what was holding up v6 adoption.
I suspect by "incredibly conservative" you mean "backwards compatible", which... no. You can't make an addressing extension backwards compatible with hardware that doesn't read all of the address. Of course, we did that anyway with CGNAT, and predictably it causes huge problems with end-to-end connectivity, which is the whole point of IPv6. You're probably thinking more along the lines of an explicit "extension addressing header" for v4. Problem is, that'd mean a more awkward version of IPv6's /64 address split[0], combined with all sorts of annoying connectivity problems. The same corporate middleboxes that refuse to upgrade to IPv6 also choke on anything that isn't TCP traffic to ports 80 and 443. So you'd need Happy Eyeballs style racing between CGNAT IPv4 and "extended IPv4".
Also, that would just be a worse version of 6in4. Because they also thought of just tunneling IPv6 traffic in IPv4 links. I don't think you understand how incredibly conservative IPv6 actually is.
The problem with "incredibly conservative" IP extensions is that nothing beats the conservatism of doing literally nothing. IT infrastructure is never ripped out and replaced unless there is a business case for doing so. The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic", they've only said "let's get more dual-stack hosts online", which is a process that only asymptotes to 100% IPv6, and never reaches it.
IPv4 was not the first version of the Internet protocol. That honor goes to Network Control Protocol (NCP). The reason why we don't have an asymptotic long tail of Internet hosts still demanding NCP connectivity is because this was back when "having a connection to the Internet" meant "having a connection to ARPANET". The US military could just refuse to process NCP packets and actively did this to force people onto IPv4. Now imagine if someone big like Google said "we're going to stop accepting IPv4 connections" - people would jump onto v6 immediately.
[0] Let's say we add a 32-bit extension header onto IPv4
"Stripped of all the other baggage that came with it..."
But that baggage is a huge part of the problem. Almost nothing you know about IPv4 applies when you switch to IPv6, and most of us found that out the hard way when we tried to make the switch. Leaves a pretty bad taste in your mouth.
I mean this is just wrong. Routing and switching behave exactly the same in V6 vs V4. Details on how you get an IP and what it looks like changed but there's TONS of knowledge shared between the two.
When I configure a new router at my home, routing is barely a blip on the radar. I mean, everything that's not local goes upstream. Switches just switch; I plug in cables and they work.
The things I need to think about are precisely the things that changed radically. Firewall rules aren't the same due to prefix changes and no NAT. DHCP isn't the same, DNS isn't quite the same, distributing NTP servers isn't the same.
Almost nothing of what I knew about configuring my home router for IPv4 has transferred to IPv6 configuration.
"Details on how you get an IP and what it looks like changed but..."
This is exactly what I'm talking about. When you have problems with your IP network, that's the first thing you try and figure out, "what's my address? Why is that my address? Did it change? If so, why? Are other devices able to get packets? What are their addresses? Why can those addresses get packets but this address can't?"
> The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic"
Mobile carriers have done that between consumer devices and network towers. That forced a lot of innovation (including tools like better DNS64 and "happy eyeballs" protocols) and network stack hardening.
The rollout of CGNAT is in some cases "let's drop IPv4 traffic randomly", and "happy eyeballs" in consumer devices is transparently driving a lot of consumer traffic to IPv6.
This is why mobile and consumer devices are leading the pack on IPv6 adoption.
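The consumer-device half of "happy eyeballs" is small enough to sketch. Something like this simplified take on the RFC 8305 idea - race v6 against v4 with a head start for v6 - though real implementations also interleave multiple addresses per family and handle a few races this toy glosses over:

    # Simplified happy-eyeballs: try IPv6 first, start IPv4 only after a
    # short grace period, keep whichever socket connects first.
    import socket
    import threading

    def happy_eyeballs(host, port, grace=0.25):
        winner, done = [], threading.Event()

        def attempt(family, delay):
            if delay and done.wait(delay):
                return  # v6 already won during the grace period
            try:
                addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
                s = socket.socket(family, socket.SOCK_STREAM)
                s.settimeout(5)
                s.connect(addr)
            except OSError:
                return
            if done.is_set():
                s.close()  # lost the race; discard the extra socket
            else:
                winner.append(s)
                done.set()

        threading.Thread(target=attempt, args=(socket.AF_INET6, 0.0), daemon=True).start()
        threading.Thread(target=attempt, args=(socket.AF_INET, grace), daemon=True).start()
        if not done.wait(10):
            raise OSError("neither address family connected")
        return winner[0]

    conn = happy_eyeballs("example.com", 80)  # example host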
It's maybe not all of Google that next needs to say "we're going to stop accepting IPv4 traffic", it's maybe more specifically GCP (and AWS and Azure) that need to do that to drive the non-consumer IPv6 push we need. The next best thing would be for all the cloud providers to at least start raising IPv4 address prices until their clients start to feel them.
> The current problem with IPv6 adoption is that nobody has yet said "let's stop processing IPv4 traffic"…
One of the giant CDNs translates all IPv4 traffic to IPv6 at the edge (stateless NAT46) and is IPv6-only in its core network (for one of its primary product networks; like everybody they have multiple networks.)
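For anyone curious what "stateless NAT46" means in practice: the entire v4 address fits in the low 32 bits of a v6 /96, so the mapping needs no per-flow state. A sketch of the RFC 6052-style embedding (using the well-known NAT64 prefix for illustration; a CDN would presumably use its own):

    # Stateless 4-to-6 mapping: embed the 32-bit v4 address in the low
    # bits of a /96 prefix, and mask it back out for the reverse direction.
    import ipaddress

    PREFIX = ipaddress.ip_network("64:ff9b::/96")  # well-known NAT64 prefix

    def v4_to_v6(v4):
        return ipaddress.IPv6Address(int(PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

    def v6_to_v4(v6):
        return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

    assert str(v4_to_v6("192.0.2.33")) == "64:ff9b::c000:221"
    assert str(v6_to_v4("64:ff9b::c000:221")) == "192.0.2.33"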
Multiple networks do the same - both T-Mobile (at least in the EU) and Orange no longer actually support v4 other than through funky 464XLAT, and by funky I mean really funky at times.
Truth is, there are too many devices that only speak IPv4 or have an untested IPv6 stack. People still can't even agree on how an IPv6 address should be represented.
I think this is defeatist talk where it's not warranted. I remember IPX networks in the 90s were still a thing because people believed they could eke out a little more performance for their games. It's taking a long time to move to IPv6 in some parts of the world, e.g. anywhere that doesn't feel the pain of the IPv4 address crunch, likely due to having had a large chunk to begin with. Many influential organizations in North America definitely fall into that category.
IPv6 is a success IMHO because it is used in so many places. Google’s IPv6 traffic graph shows close to 50% adoption and still trending up. We can’t possibly expect the world to be near 100% overnight… the internet is a big place with the whole spectrum of humans influencing IT; There will always be someone who will cling to IPv4 for dear life.
> I'm not proposing to abandon ipv6, but at this point I'm really not sure how we proceed here. The status quo is maintaining two separate competing protocols forever, which was not the ultimate intention.
The end game will be a cryptographically large address space allocated based on some cryptographic operation, rather than a committee carving up the space arbitrarily.
Tor already does this; address allocation is not a problem.
I think they used to use hashes, but now use Ed25519 public keys.
Obviously, Tor is not suitable for most tasks.
No one should have to pay for the extra latency if they don't need the anonymity.
The real problem is routing in these address spaces, and there have been a few projects like CJDNS which try to solve it.
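For reference, the v3 derivation is tiny, which is part of why allocation isn't the problem. A sketch, if memory of the spec (rend-spec-v3) serves:

    # Tor v3 onion address derivation: base32(pubkey || checksum || version),
    # where checksum = SHA3-256(".onion checksum" || pubkey || version)[:2].
    import base64
    import hashlib

    def onion_v3(pubkey):
        assert len(pubkey) == 32          # raw Ed25519 public key
        version = b"\x03"
        checksum = hashlib.sha3_256(b".onion checksum" + pubkey + version).digest()[:2]
        return base64.b32encode(pubkey + checksum + version).decode().lower() + ".onion"

    print(onion_v3(bytes(32)))  # 56-char address for an (invalid) all-zero key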
Imagine every address along a major road is 3 digits, and some shortsighted post office code assumes 3 digits. Your business is 845 Oak St. One day they say: hey, this road is getting too long, let's update that code to support 10 digits and we never worry about this again.
Oh and btw, your address is now 9245593924 Oak St.
Tiktok is not heroin. Tiktok does not make you vomit if you quit using it. Tiktok does not give you a 5 year life expectancy. You can't overdose on it and die. Tiktok does not make you rob a grandma to get your next fix of it.
I hate social media more than most people do, and I don't use tiktok and don't think anyone else should, but can we all please stop comparing a mobile phone app to using heroin? It's misinformed and dangerous to make rhetorical comparisons like that.
It's like how the Sacklers did everything they could to make opioids more addictive and increase profit margins; there is virtually no difference between that and Zuckerberg hiring psychologists to make his apps more addictive.
You're right, heroin is merely physical addiction. TikTok is psychological, emotional and social crack for young malleable minds. Produced by the subjects of, and in cahoots with, an aggressive totalitarian regime that has about as much respect for human life as Oracle has for its customers.
Also, assuming your heroin isn't tainted, it isn't toxic and you can have a normal life expectancy.
I empathize w/ your take. I've occasionally responded similarly to the thoughtless use of "cancer" in shallow analogies, as a survivor who's also watched it kill several people I dearly love. I don't have direct experience w/ heroin, but the film "Requiem For A Dream" was unforgettable and helps me better understand its evil.
Unfortunately, whether it's a deadly drug or a deadly disease, these casual references are unlikely to drop from public discourse anytime soon. And I personally would rather live in a world where insensitive or potentially-triggering language is gently discouraged, than one where the pendulum swings too far the other way towards censorship or radical left woke cancel culture. Words can be unintentionally callous without being "micro-aggressions". (And I say that as a liberal progressive.)
Thanks for posting in a personal and persuasive manner, instead of anger. Yours is the more effective approach anyway.
Honestly, why? You see the exact same nonsense over and over. I've seen nothing to indicate that the community is any better than most other moderated forums. Worse than some.
>Tiktok does not give you a 5 year life expectancy
A 12-year-old's life expectancy, then?
> The lawsuit, filed in the US, claims that Isaac Kenevan, 13, Archie Battersbee, 12, Julian "Jools" Sweeney, 14, and Maia Walsh, 13, died while attempting the so-called "blackout challenge".
Four children died because of it. Compared to how many who inject?
Heroin invokes addiction; TikTok does too. Heroin can cause physical dependency; TikTok brews this. Heroin is highly addictive; isn't TikTok, to the young viewer?
I still hold my point that TikTok can be distilled and viewed as a form of digital heroin. The evidence shows it.
Depends, how rich are they in this hypothetical scenario? It's being poor that's the problem, and we know that's true because of the many rock stars who've lived with a heroin addiction for many, many years.
If it's the first thing you think about when you wake up, and it kills you to sleep at night, and you think about it all day, sure, one's a highly addictive habit that destroys lives, and the other is heroin. Which is also a highly addictive habit that destroys lives. Funnily enough, one destroys lives because it's legal, and the other destroys lives because it's illegal. But if you're taking your phone to bed with you at night, and it's the first thing you check in the morning, before you even have a thought to yourself, okay, you're not injecting it with a needle under a freeway underpass, but after you get fired for watching TikTok on the clock and can't pay your rent, is your landlord gonna care whether you got fired for drugs or for a smartphone addiction?
Because "digital heroin" is a nonsense phrase used as a thought-terminating cliché.
> when the side-effects are the same of?
Assuming that this is intended to be something like "when the side effects are the same as those of heroin?" then the premise is false; the effects (side or otherwise) of TikTok are not meaningfully similar to those of heroin.
Some people feel like they are addicted to short-form content, but it's really nothing like a drug addiction, much less an addiction to something as devastating as heroin.
TikTok (along with the other platforms) is more like cigarettes, or sugar.
It's highly addictive. The negative effects are somewhat diffuse and may take a while to really impact your life, but they're very real.
And, rather importantly, it's legal and widely available, and the industry behind them is suppressing evidence of their harms and making tons of money off of addiction.
I guess so. I'm 36 and have seen the internet go from IRC to what it is now: arguing over some internet forum because views are different. How old are you?
TikTok causes a chemical release in the brain, which can cause other psychological damage. Heroin causes a chemical release in the brain, and can cause other psychological damage.
Both are addictions, both are hard to fight. Some find it easier some find it hard.
The effects of one are more devastating sure, Alcohol is more damaging than Caffeine; I'm not ruling that out.
However, the effects of heroin, which come with addiction and cravings, are somewhat mimicked within the realms of TikTok.
To op below:
I'm now rate limited, so I can't reply directly.
A drug is a real-life substance that is designed to alter human chemistry. Cannabis, caffeine, MDMA, DMT all alter your brain chemistry organically.
You cannot compare one to something that is man-made and digital. You can, however, compare the effects of an organic substance to those of something digital. The effects of TikTok are very similar to those of heroin.
Social media is being designed as a digital service to alter human chemistry. It works; why do you think the world is in utter shit? Why do you think social enterprises pay big bucks to exploit the human psyche by hiring sociologists and psychologists?
The TikTok icon on mobile devices is strategically designed to manipulate and trigger a response.
Facebook is a grand example, with the A/B emotional testing they did with Cambridge Analytica, which is far worse than heroin IMO. At least with heroin you need to inject.
> TikTok causes a chemical release in the brain and can cause other psychological damage.
You're literally describing any activity that someone enjoys generating natural dopamine, and then comparing it to a drug that crosses your blood-brain barrier and mimics your brain's chemistry to give you a super-charged chemical version of that. The difference in dopamine levels is orders of magnitude. Your brain re-wires itself to handle the level of dopamine produced, and you start only feeling normal if you're constantly using the drug. I would be surprised if TikTok generated even 1/10th the dopamine of using methamphetamine. It all honestly sounds quite fun, but my awareness of the consequences will prevent me from ever trying them.
Eating a good meal, having sex, finishing writing your first novel, winning a race, doing breath work, doing yoga, rock climbing, and an unlimited supply of examples generate dopamine in our brains the same way that Tiktok does. They can all ruin your life just as much, if you allow them to.
A much better comparison would be to describe Tiktok as a "digital slot machine", and indeed slot machine mechanics have been heavily studied by social media platforms to make usage more habitual. Nir Eyal's Hooked was an interesting and informative read on this topic. If he describes social media as heroin in the book I'll happily take the self-own.
I think age is a lame argument here, but FWIW I also grew up on IRC and the 90s internet - I just have a less rosy view of that time.
> TikTok causes chemical release in the brain
Basically everything causes a chemical release in the brain. For example HN does as well, would you compare posting on HN to heroin?
> both are hard to fight
I know and knew people addicted both to heroin and to TikTok. Let me assure you that ditching a short-form content addiction is VASTLY easier than ditching heroin.
> the effects of heroin, which come with addiction and cravings, are somewhat mimicked within the realms of TikTok
This is true for everything that humans enjoy. Next you're gonna say that taking a walk in nature or working out is like heroin, because I enjoy it and I'm addicted to it (if I don't do it every day I feel bad and I have a compulsion to do it every day).
> why do you think the world is in utter shit
I disagree with that assessment, the “world” as a whole is actually much better than it used to be 30 years ago. Of course that might not be the case for you individually but then this thread is more about your feelings than an objective observation of the world.
> At least with Heroin you need to inject.
Most heroin users don't inject, which once again shows you don't know anything about it outside of tropes and clichés.
“At least with TikTok you need a smartphone and internet and swipe to unlock” - see how dumb that makes me sound?
Don’t get me wrong I dislike the tech hegemony and social media as much as you - I just think your way of arguing damages your position more than it helps it.
Part of the definition of addiction is that it has a negative impact on your life, so your nature walks and exercise aren't comparable without describing that harm.
I state my age so at least I can represent myself as someone with experience in this world, who has seen the internet deteriorate from something fundamentally awesome to a wash-rag floating in a swamp.
I see it as Digital Heroin. If others, you don't, fine.
Social media is addicting. I use none, and explaining it as "digital heroin" may be an extreme way to present the thought, but at least it bluntly represents the curse of it. Finally, it's not the teenagers at fault; it's the governments in the first place for allowing this. I saw the writing on the wall when Facebook came to be back in 2007.
> Finally to perhaps lighten the mood, rather than the usual book recommendations, the three of them discuss their favourite cars.
I'm sure they're talking about the 8 million dollar Porto Embargo or whatever, but I would love one day to hear a podcast discussing the genius of the Toyota Corolla. Safe, affordable, available, reliable. I've had friends with Toyotas that were about to fall off their base from road salt rust but the engine and transmission still worked perfectly.
Pretty amazing that you can be of modest or lavish means and still own a really solid car that anyone can fix. A lot more fun than hanging out in the repair shop waiting for a specialist to fix a blown turbo head gasket.
Why does Apple need a different AI strategy? They make good laptops and people like them (I say this as someone that runs Linux on a framework laptop). Anyone using AI is using their MacBook laptops to use AI already. They're perfectly capable of running LLMs locally if anyone actually wants to do that. I'm not sure shoving a bunch of weird AI crap into the UI is actually what end users want.
I’d guess a matter of control - Apple wants full control of the products and services that it owns and maintains - buying WB/HBO comes with a bunch of baggage around IP stewardship, international contracts around IP that restrict Apple in ways they’d never do with their own content, a ton of employees outside of the Apple corporate structure, etc.
I inherited a Samsung fridge when I moved to a new place, it was a terrible fridge with serious mechanical flaws. The deicer broke, causing a constant stream of leaking water in the fridge. The French door middle component hinge was cheap plastic that broke and I had to replace it, then it broke again and I had to replace it again, then it broke again. I finally gave up and replaced the fridge.
Recommendation threads on Reddit usually begin with "anything but Samsung". They seem designed to be made cheap and hit the lowest price point with consumers that don't want to spend a lot more on something they don't really care about, so I'm not surprised to learn that ads are a part of their strategy.
But also, why do fridges need to connect to the internet?
I'm sure this is for good reasons, but as someone that maintains a lot of ssl certificates, I'm not in love with this change. Sometimes things break with cert renewal, and it sometimes takes a chunk of time to detect and then sit down to properly fix those issues. This shortens the amount of time I will have to deal with that if it ever comes up (which is more often than you would expect), and increases the chances of running into rate limit issues too.
Without getting into specific stuff I've run into, automated stuff just, breaks.
This is a living organism with moving parts and a time limit - you update nginx with a change that breaks .well-known by accident, or upgrade to a new version of Ubuntu and suddenly some dependency isn't loading correctly, or that UUID generator you depended on to generate the name for the challenge doesn't get loaded, or certbot becomes obsolete because of some API change and you can't upgrade to the latest because the OS is older and you installed it from the package manager.
You eventually see it in your exception monitoring or when an ssl monitor detects the cert is about to expire. Then you have to drop that other urgent thing you needed to get done, come in and debug it, fix it, and re-issue all the certs at the rate limit allowed. That's assuming you have that monitoring - most sites probably don't.
If you detect that issue with 1/3 of the cert left, you will now have 15 days to figure that out instead of 30. If you can't finish it in time, or you don't learn about it in time, the site(s) hard fail on every web browser that visits and you've effectively got a full site outage until you repair it.
So you discover it's because of certbot not working with a new API change, and you can't upgrade with the package manager. Now you need to figure out how to compile it from source, but it doesn't like the python that is currently installed and now you need to install that from source, but that version of python breaks your python web app so you have to figure out how to migrate your app to that version of python before you can do that, and the programmer that can do that is on a week long whitewater rafting trip in Idaho.
Aside from all that, what happens if a hacker manages to wreck the let's encrypt infra so badly they need 2 weeks to get it back online? The internet archive was offline for weeks after a ddos attack. The cloudflare outage took one site of mine down for less than 10 minutes, it's not hard to imagine a much worse outage for the web here.
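For what it's worth, the detection half is cheap if you do it deliberately. A minimal sketch of an expiry monitor (the host and threshold are placeholders; 15 days mirrors the one-third-of-lifetime arithmetic above):

    # Minimal expiry monitor: connect, read the peer cert, alert when less
    # than a third of the lifetime remains so a human still has time to act.
    import socket
    import ssl
    import time

    def days_until_expiry(host, port=443):
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        return (ssl.cert_time_to_seconds(cert["notAfter"]) - time.time()) / 86400

    if days_until_expiry("example.com") < 15:  # 1/3 of a 45-day cert
        print("cert renewal likely broken; page someone while there's time")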
AKA the real world, a place where you have older appliances, legacy servers, contractual constraints and better things to do than watch a nasty yearly ritual become a nasty monthly ritual.
I need to make sure SSL is working across a bunch of very heterogeneous stuff, but I'm not in a position to replace it and/or pick an authority with better automation. I just suck it up and dread when a "cert day" looms closer.
Sometimes these kind of decisions seem to come from bodies that think the Internet exists solely for doing the thing they do.
Happens to me with the QA people at our org. They behave as if anything happens just for the purpose of having them measure it, creating a Heisenberg situation where their incessant narrow-minded meddling makes actually doing anything nearly impossible.
The same happens with manual processes done once a year - you just aren't aware of it until renewal.
Consider the inevitable need for immediate renewal due to an incident. Would you rather have this renewal happen via a fast, automated and well-tested process, or a silently broken slow and manual one?
The manual process was annoying but it wasn't complicated.
You knew exactly when it was going to fail and you could put it on your calendar to schedule the work, which consisted of an email validation process and running a command to issue the certificate request from your generated key.
The only moving part was the issued certificate, which you copied and pasted over and then reloaded the server. There are a lot fewer things to go wrong in that process, which at one point I could do once every two years, than in a really complicated automated background task that has to happen within 15 days.
I love short duration automated free certs, but I think we really need to have a conversation about how short we can make them before we make it so humans no longer have the time required to fix problems anymore.
There are also alternatives to Cloudflare and AWS, that didn't stop their outages from taking down pretty much the entire internet. I'm not sure what your point is, pretty much everybody is using let's encrypt and it will very much be a huge outage event for the web if something were to go seriously wrong with it.
One key difference: a cert is a "pickled" thing; it's stored and kept until it is successfully renewed. So if you attempt to renew at day 30 and LE is down, you still have more than two weeks to retrieve a new cert. Hopefully LE will get back on their feet within that time. Otherwise you have Google, ZeroSSL, etc., where you can fetch a replacement cert.