The big services (Google, Cloudflare, etc.) provide DDoS attack mitigation (and seem to succeed at it), but details on their tactics are scarce (at least I did not find in-depth information on them).
I guess that to make this work well you have to classify traffic (regular request vs. malicious) on several protocol layers and then reroute or drop packets accordingly. But how does that prevent severe service degradation? You still have to do some kind of work (in computation and energy) on the listening side, or can fat edge servers just eat that up?
You can break down DDoS into roughly three categories:
1. Volumetric (brute force)
2. Application (targeting specific app endpoints)
3. Protocol (exploiting protocol vulnerabilities)
DDoS mitigation providers concentrate on 1 & 3.
The basic idea is: attempt to characterize the malicious traffic if you can, and/or divert all traffic for the target. Send the diverted traffic to a regional "scrubbing center": dirty traffic in, clean traffic out.
The scrubbing centers buy or build mitigation boxes that take large volumes of traffic in and then run heuristic checks (liveness of the sender, protocol anomalies, special queueing) before passing it to the target. There's some in-line layer 7 filtering happening, and there's continuous source characterization that pushes basic network-layer filters back towards ingress.
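To make "heuristic checks" concrete, here's a toy sketch of the kind of per-packet logic a mitigation box might run. Everything here is illustrative (the field names, flag values, and the `seen_handshakes` tracking are my assumptions, not any vendor's actual pipeline):

```python
# Hypothetical per-packet scrubbing heuristics; packets are plain dicts here.
# Real boxes do this in hardware/DPDK at line rate, not in Python.

INVALID_TCP_FLAGS = {
    0x00,  # null scan: no flags set
    0x03,  # SYN+FIN together: never legitimate
    0x29,  # FIN+PSH+URG ("xmas" scan)
}

def scrub(pkt, seen_handshakes):
    """Return True if the packet should be forwarded to the target."""
    # Protocol anomaly check: drop TCP packets with impossible flag combos.
    if pkt["proto"] == "tcp" and pkt["tcp_flags"] in INVALID_TCP_FLAGS:
        return False
    # Liveness check: only forward non-SYN TCP from sources we have seen
    # complete a handshake (spoofed floods never finish one).
    if pkt["proto"] == "tcp" and pkt["tcp_flags"] != 0x02:  # not a bare SYN
        if pkt["src"] not in seen_handshakes:
            return False
    return True
```

The liveness idea is the same one behind SYN cookies: force the sender to prove it can actually receive replies before you spend real resources on it.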
You can get pretty far with fairly simple statistical anomaly models, both for classifying attacker sources and for tracking targets so you can be selective about what actually needs to be diverted.
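As a sketch of how simple such a model can be: an exponentially weighted moving average of each source's packet rate, flagging any source whose current burst dwarfs its own history. The smoothing factor and threshold below are made-up values, not tuned parameters from any real deployment:

```python
class SourceAnomalyModel:
    """Toy per-source rate model: flag senders whose packets-per-interval
    jump far above their own running average (EWMA)."""

    def __init__(self, alpha=0.1, threshold=5.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.threshold = threshold  # multiples of baseline considered anomalous
        self.rates = {}             # src -> smoothed packets per interval

    def observe(self, src, pkts):
        """Record one interval's packet count; return True if anomalous."""
        if src not in self.rates:
            # First sighting: seed the baseline, don't flag yet.
            self.rates[src] = float(pkts)
            return False
        baseline = self.rates[src]
        self.rates[src] = (1 - self.alpha) * baseline + self.alpha * pkts
        # Flag once a source's burst dwarfs its own history.
        return pkts > self.threshold * max(baseline, 1.0)
```

Production systems track far more dimensions (protocols, packet sizes, geographic spread), but the principle is the same: cheap running statistics, per source and per target.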
A lot of major volumetric attacks are, at the network layer, pretty unsophisticated; they're things like memcached or NTP floods. When you're special-casing traffic to a particular target through a scrubbing center, it's pretty easy to strip that kind of stuff off.
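Stripping that traffic off really can be as crude as it sounds, because amplification floods arrive as unsolicited UDP "replies" from well-known service ports. A minimal sketch (the port-to-service map is real; the `open_requests` tracking of what the target actually queried is my assumption about how the edge would know a reply was solicited):

```python
# UDP source ports commonly abused for reflection/amplification attacks.
AMPLIFIER_SRC_PORTS = {11211: "memcached", 123: "ntp", 53: "dns", 1900: "ssdp"}

def is_reflection(pkt, open_requests):
    """Return True if this looks like an unsolicited amplification 'reply'.
    open_requests: set of (ip, port) pairs the target actually queried."""
    if pkt["proto"] != "udp":
        return False
    if pkt["src_port"] not in AMPLIFIER_SRC_PORTS:
        return False
    # A reply the target never asked for is reflection traffic; drop it.
    return (pkt["src_ip"], pkt["src_port"]) not in open_requests
```

That's the advantage of special-casing a target through a scrubbing center: filters this blunt would be unacceptable applied to everyone, but they're fine for a host that is actively under attack.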