The article only mentions NAT as a showcase of what IP packet parsing is used for. QUIC does not encrypt the UDP header.
TLS 1.3 actually ran into a lot of issues due to network ossification. QUIC was developed to bypass TCP's ossification problem and enable quicker iteration and deployment, to improve latency, congestion control, etc.
You could argue that if QUIC's principles were in place for TCP/IP 20 years ago, middleboxes would not have made protocol revamps as difficult as they are today. Perhaps NAT would have never been developed and we'd all be on IPv6 already.
> If it does not tamper with UDP I fail to recognise the issue though to be honest.
The problem is that if you expose any information beyond the IP and UDP headers, then future middleboxes will start to use that information and will start dropping packets they can't parse.
If a QUIC packet is just random noise (aside from the UDP and IP headers), then a middlebox can't make assumptions about the inner workings of QUIC. It's a very common (but unfortunate) practice for "security" appliances to drop everything they don't understand, because they treat it as potentially malicious.
Let me give a (hypothetical) example: let's say QUIC packets had some publicly visible "version" field, and current implementations set it to 1.
Now middleboxes start "gaining" support for QUIC, and since "everybody" knows the only widely deployed version is 1, these middleboxes start treating every other value of that field as a possible attack and drop such packets.
Now, years later, we want a new version of QUIC, but unfortunately, the most widely deployed (and never updated) middleboxes out there assume any version but 1 to be malicious.
Which leaves us with a "version" field that practically has to be set to 1, so now we need another way to flag the new packets. Maybe a "real_version" field? Who knows? We'll have to try various things until the majority of the currently deployed middleboxes are fooled.
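To make the scenario concrete, here's a sketch of the kind of filter such a middlebox might ship. Everything here is made up to match the hypothetical above (there is no cleartext version byte like this in real QUIC packets); the point is how one hardcoded check quietly freezes the protocol:

```python
# Hypothetical middlebox filter for the made-up "version" field above.
# The field name, offset, and values are all invented for illustration.

def middlebox_should_forward(packet: bytes) -> bool:
    """Forward only packets that look like 'QUIC as we know it today'."""
    if len(packet) < 1:
        return False
    version = packet[0]  # imagined cleartext version field
    # The only version this box's developer has ever seen is 1, so every
    # other value is treated as "potentially malicious" and dropped.
    return version == 1

# Today: everything works, nobody notices the check exists.
assert middlebox_should_forward(bytes([1]) + b"payload")
# Years later, "QUIC 2" ships with version = 2 -- and this box,
# long deployed and never updated, silently eats it.
assert not middlebox_should_forward(bytes([2]) + b"payload")
```

Once thousands of boxes like this are deployed, version 2 packets simply stop arriving for some users, and there's nobody to update the boxes.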
Also, it will likely be impossible to fool them all, so even once we get around the majority of the boxes, we'll still exclude some people from being able to reach QUIC 2 servers. Sure, it will be a very small number, but it won't be zero.
This isn't just theoretical. We had exactly this problem with TLS 1.3. From the beginning, SSL and then TLS carried version fields so that clients and servers could negotiate which version to use.
Unfortunately, because of precisely this problem, that field stopped being usable years ago: even the 1.2 negotiation had to happen through a workaround, which then promptly also stopped working for 1.3.
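The workaround TLS 1.3 eventually settled on (per RFC 8446) is telling: the on-wire version field is frozen at the TLS 1.2 value, and the real version negotiation moved into a new `supported_versions` extension that old middleboxes don't inspect. A rough sketch of that logic, with dicts standing in for real handshake structures:

```python
# Illustrative sketch of RFC 8446 version negotiation -- the mechanics
# are real, but this is not a TLS implementation.

TLS12 = 0x0303
TLS13 = 0x0304

def negotiable_versions(legacy_version: int, extensions: dict) -> list:
    """What a TLS-1.3-aware server actually negotiates on."""
    # If supported_versions is present, it overrides the legacy field
    # entirely; the frozen legacy_version exists only to keep ossified
    # middleboxes from dropping the handshake.
    if "supported_versions" in extensions:
        return extensions["supported_versions"]
    return [legacy_version]

# A TLS 1.3 ClientHello still *claims* TLS 1.2 in the old field...
hello = {"legacy_version": TLS12,
         "extensions": {"supported_versions": [TLS13, TLS12]}}
# ...while the extension carries the truth.
assert TLS13 in negotiable_versions(hello["legacy_version"],
                                    hello["extensions"])
```

In other words: the version field ossified so badly that the fix was to lie in it forever and smuggle the real information somewhere middleboxes weren't looking yet.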
By not exposing anything but random noise as part of a QUIC packet, the protocol designers aim to prevent this from happening. If all a "transparent" proxy sees is random noise, they can decide to not support QUIC at all or to support all of it. They can't decide on a "safe" subset and burn that into the internet for all eternity.
The discussion about exposing the bit (the "spin bit") was about allowing network administrators to detect retransmissions. The bit is supposed to flip constantly; if it doesn't, retransmissions are happening, and thus something might be wrong with the network.
Because it's just one bit, both of its values are valid, and both are seen with about the same frequency, a middlebox can't simply drop a packet because the value is 0 or 1.
This is why there is even a discussion happening: it's felt to be reasonably safe to include while providing some actual value to tools. The debate, though, is whether it's really safe and/or whether it provides enough value to justify the trouble.
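As a sketch of the measurement this would enable, here's a toy passive observer that flags streaks where the bit stops flipping, following the description above. The flip-per-packet model and the threshold are simplifications for illustration, not how a real monitoring tool would be calibrated:

```python
# Toy passive observer for the spin bit, per the description above:
# a healthy flow flips the bit regularly; a long run of identical
# values suggests something is wrong on the path.

def count_stuck_streaks(spin_bits, threshold=8):
    """Count runs where the bit failed to flip for `threshold` packets."""
    streaks, run = 0, 1
    for prev, cur in zip(spin_bits, spin_bits[1:]):
        run = run + 1 if cur == prev else 1
        if run == threshold:  # count each long streak exactly once
            streaks += 1
    return streaks

healthy = [0, 1] * 20                  # flipping as expected
stuck = [0, 1] * 5 + [1] * 12 + [0]    # bit stops flipping mid-stream
assert count_stuck_streaks(healthy) == 0
assert count_stuck_streaks(stuck) == 1
```

Note that a tool like this needs nothing else from the packet: one bit, both values valid, both roughly equally frequent, which is what makes it (arguably) hard to ossify.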
If you ask me personally: in my professional life, I have been bitten by protocol ossification far more often than by being unable to make sense of a packet stream, so I personally would absolutely not expose anything.
But then again, I'm an application level developer and not a network administrator.
You do know that someone somewhere is going to build a middlebox that drops packets unless that bit flips in the precise sequence the middlebox developer believed was correct, right?
Then someone proposes an enhancement to QUIC which happens to change the sequence of the flips (perhaps some multipath thing, or an enhancement in the way it treats reordered packets), and it breaks...