> NAT registers in the microseconds for packet-processing time; that isn't even comparable to Internet path jitter.
NAT, at scale, can get expensive:
> Our [American Indian] tribal network started out IPv6, but soon learned we had to somehow support IPv4-only traffic. It took almost 11 months to get a small allocation of IPv4 addresses for this use. In fact there were only enough addresses to cover maybe 1% of the population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from Roku devices, 9% came from Dish Network & DirecTV satellite tuners, 11% from home-security cameras and systems, and for the remaining 9% we replaced extremely outdated point-of-sale (POS) equipment. So we cut Roku some slack three years ago by spending a little over $300k just to support their devices.
No, I mean that the IPv4 packets go through a stateful NAT engine on the provider side, instead of going through potentially multi-path L2/L3 switching fabrics that are stateless w.r.t. the content of the packets (especially if they're actually routed L3 fabrics, not auto-discovering L2 fabrics).
Thus the packets have to take a detour through the NAT engine instead of taking the shortest physical path to the destination as they can with a plain stateless L3 switched fabric.
E.g. in an AON (active optical network) with L2/L3 switches on the provider side of the last-mile link, you can easily bend the packets right around in that switch if you're WebRTC calling your neighbor. No need to even go beyond the curbside switch.
Beyond cases like that, it's still typically going to be a physical detour to the NAT engine instead of going straight to the destination, as the NAT engine isn't fused into the main fabric's data plane.
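To make the stateful-vs-stateless point concrete, here is a minimal Python sketch (purely illustrative, not how any real NAT engine is implemented) of why every packet of a translated flow has to visit the one box that holds the mapping, rather than whichever stateless switch happens to be closest:

```python
# Illustrative only: a toy NAT translation table keyed by the 5-tuple.
# Real implementations also handle timeouts, port reuse, checksums, etc.
from typing import Dict, Tuple

# (proto, src_ip, src_port, dst_ip, dst_port) -> (public_ip, public_port)
FiveTuple = Tuple[str, str, int, str, int]
nat_table: Dict[FiveTuple, Tuple[str, int]] = {}
next_port = 1024  # naive port allocator, just for the sketch

def translate(pkt: FiveTuple, public_ip: str = "203.0.113.1") -> Tuple[str, int]:
    """Look up (or create) the public mapping for this flow."""
    global next_port
    if pkt not in nat_table:
        nat_table[pkt] = (public_ip, next_port)  # new per-flow state to remember
        next_port += 1
    return nat_table[pkt]

# Every packet of the flow repeats this table lookup on the NAT engine,
# which is why the traffic has to be steered through that one box.
print(translate(("udp", "192.168.1.10", 40000, "198.51.100.7", 3478)))
```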
Does NAT port exhaustion not count as "materially affecting bandwidth"? At the very least, it caps the maximum achievable bandwidth. Furthermore, the keepalive packets (required to maintain a port mapping) must reduce the maximum useful bandwidth, must they not?
Either way, expecting no effect on latency or bandwidth when additional processing is involved is a rather insane take. If anything, you should be the one presenting evidence to prove your position. Ideally, the cost-effectiveness of whatever you present should be included too.
Anecdotally, I have experienced IPv4 slowdown due to CGNAT overload. Make of it what you will.
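To put rough numbers on the port-exhaustion point (all figures below are assumptions for illustration, not from any real deployment):

```python
# Back-of-envelope: how many concurrent mappings each subscriber gets
# when a CGNAT address pool is shared. All numbers are assumed, not measured.
public_ips = 256              # assumed size of the CGNAT address pool
ports_per_ip = 65535 - 1024   # usable ports per public address
subscribers = 100_000         # assumed subscribers behind the pool

mappings_per_subscriber = public_ips * ports_per_ip // subscribers
print(mappings_per_subscriber)  # ~165 concurrent mappings each
```

A few browser tabs plus some chatty devices can hit that kind of cap, at which point new connections fail outright rather than merely slowing down.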
> Does NAT port exhaustion not count as "materially affecting bandwidth"?
In a home network this is a negligible concern. In a CGNAT setup it is a limitation of the hardware or the ISP's provisioning, not of NAT itself. Furthermore, it impedes establishing connections, not bandwidth.
> Furthermore, the keepalive packets
First, what keepalive packets? NAT does not introduce packets into an existing TCP or UDP stream; for TCP it inspects control packets to track state and detect teardown. For UDP, keepalives depend on the protocol running over it; WireGuard, for example, supports a persistent keepalive. But is this material? Will it even materially affect a dialup connection?
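For what it's worth, a quick back-of-envelope on the keepalive cost, assuming WireGuard-style keepalives of roughly 60 bytes on the wire (payload plus UDP/IPv4 headers) sent every 25 seconds; both figures are assumptions:

```python
# Rough keepalive overhead per tunnel; numbers assumed, not measured.
keepalive_bytes = 32 + 8 + 20   # assumed payload + UDP header + IPv4 header
interval_s = 25                 # WireGuard's commonly recommended PersistentKeepalive

bits_per_second = keepalive_bytes * 8 / interval_s
print(f"{bits_per_second:.1f} bps per tunnel")  # ~19 bps
```

Even a 56 kbit/s dialup link gives up well under 0.1% of its capacity to that.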
> Either way, expecting no effects on latency/bandwidth when additional processing is involved is a rather insane take.
You're putting words in my mouth. I asked whether it materially affects bandwidth or latency, or bottlenecks it in some way. My argument is that NAT introduces packet-processing delay on the order of microseconds per packet, which is negligible when most applications on the Internet deal with latency in milliseconds. Lastly, if the point is to compare, it should be NAT against no NAT, not NAT against IPv6. I find your position weak insofar as it claims port exhaustion or keepalive packets limit bandwidth; if they do, quantify it.
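To spell out what microseconds-vs-milliseconds means in relative terms (assumed figures, not benchmarks):

```python
# Illustrative ratio of per-packet NAT processing to a typical round trip.
nat_processing_us = 5     # assumed per-packet NAT cost in microseconds
internet_rtt_ms = 20      # assumed round-trip time to a nearby server

overhead_fraction = nat_processing_us / (internet_rtt_ms * 1000)
print(f"{overhead_fraction:.4%}")  # 0.0250% of the RTT
```

Path jitter alone is usually whole milliseconds, i.e. orders of magnitude larger than the translation step in question.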
> Anecdotally, I have experienced IPv4 slowdown due to CGNAT overload. Make of it what you will.
That's on your ISP, not NAT. It's no different from your ISP delivering gigabit to the last mile while using dialup links for the connectivity behind it. They need to adequately resource the network.
> Furthermore it impedes establishing connections, not bandwidth.
Can't have bandwidth if you are TCP RST'd.
> First, what keepalive packets?
Any device or protocol sending otherwise-useless keepalive packets (e.g. WireGuard has this) just so the NAT won't take away the precious 5-tuple mapping and give it to others.
> NAT against no NAT, not really NAT against IPv6
I am sorry, but I cannot imagine a "No NAT" solution that doesn't involve IPv6. So yes, NAT vs No NAT is still NAT vs IPv6.
Unless you still have a /16 IPv4 block in your hands, in which case, go on. Just remember that in this case you are the exception and not the rule.
> That's on your ISP, not NAT. It's no different to your ISP delivering gigabit to last mile and using dialup links for connectivity. They need to adequately resource the network.
NAT being stateful is ultimately the cause of this. No statefulness, no need for (additional!) expensive equipment, no resource constraints.