Belgian ISP under 250 Gbps DDoS for days on end (issues.edpnet.be)
497 points by laurensr on Sept 18, 2021 | 300 comments



I got hit with a ~40Gbps DDoS last week. These attacks are on the rise. Some responses to folks above: success working with upstreams is quite varied. Some care, some don't, and it can be difficult to get to folks who can help, even if their networks are impacted as well. Some carriers immediately turn this into a sales opportunity: buy more bandwidth, buy more services.

In our case it was based on DNS reflection from a large number of hosts. I've contacted the top sources (ISPs hosting the largest number of attackers) and provided IPs and timestamps. I've received zero responses.

Geo-based approaches yielded no helpful reduction in source traffic.

Also, during this event we discovered an upstream of ours had misconfigured our realtime blackhole capability. As a result, I'm going to add recurring testing for this capability and burn a couple IPs to make sure upstreams are listening to our rtbh announcements.

Very concerned about the recent MikroTik CVE, as that is going to make for some very large botnets.

Personally this all is very disappointing because it creates an incentive to centralize / de-distribute applications to a few extremely large infrastructure providers who can survive an attack of these magnitudes.


> Very concerned about the recent microtik CVE, as that is going to make for some very large botnets.

To be pedantic, there is technically no recent MikroTik CVE WRT Meris. It was patched in 2018(?) shortly after discovery.

From their response to the Meris botnet[1]:

> As far as we have seen, these attacks use the same routers that were compromised in 2018, when MikroTik RouterOS had a vulnerability, that was quickly patched.

> Unfortunately, closing the vulnerability does not immediately protect these routers. If somebody got your password in 2018, just an upgrade will not help. You must also change password, re-check your firewall if it does not allow remote access to unknown parties, and look for scripts that you did not create.

It goes into more detail to further check/harden the device in the blog post. A lot of issues stem from having Winbox or other admin access not properly firewalled off and open to the world. Blessing and a curse of the power you have with these devices I guess.

[1] https://blog.mikrotik.com/security/meris-botnet.html


I work for a DDoS prevention provider, and business is booming at the moment.

I'm not a salesman nor do I care that much about the company, but seriously, let someone handle this stuff for you. There are at least a handful of us left, and we are good at what we do, generally.


I don't think it's a question of service quality so much as a question of technical independence and sovereignty.


I get that, but it's one of those things a lot of people go through themselves, blowing a ton of effort on solved problems, because it's not one of those things that's obvious and commoditized...yet.

Peace of mind is worth a lot. If this person were with my company for example, this wouldn't even be a comment.

I say my company meaning my employer. I have exactly zero stake in mentioning them by name or advertising them.


The best name in the DDoS protection business is Cloudflare. I’m sure they’re going to acquire or snuff out all competition within a few years.

It’s probably a liability to go with anyone else, due to the lack of network resources.


It's not a bad response, but it's not quite correct.

TBF, I agree with the sentiment... Cloudflare does seem poised to own the world.

But they don't do everything well. Not everyone wants a stupid landing page or captcha.

And Cloudflare will say... wait, that's just free-tier stuff, our enterprise stuff is X. Which is the whole problem: people associate the free-tier stuff with them, even though they do offer better things. And those things compete directly with others who offer... those same things.

If Cloudflare is the endgame, why are companies like mine still acquiring customers?

Others have plenty of network capacity... that's not even a thing.


> The best name in the DDoS protection business is Cloudflare

They definitely have brand recognition and are tip-of-the-tongue for people who have no prior experience.

"Best" in terms of actual product and protection is going to be F5, Akamai, Imperva, Voxility, etc.


Voxility has a very aggressive cold-sales department. They've approached me many times to sell their product.

I've ignored the first few mails. Still they kept coming.

I've told them curtly but politely we weren't interested. They tried to engage a sales conversation from that. Ignored them. Still they kept coming. Future outreach mails not even acknowledging the earlier conversation.

I told them they needed to learn when to let go and, literally, to "FUCK OFF". Still they kept coming.

I stopped their spamming with a custom SpamAssassin rule. That's what it took to get rid of them.

Don't support those spammy sales techniques. Don't do business with them.


Imperva are awful. They've sometimes caused us a DDoS with their own mitigation techniques.


Don't Google, Amazon and Microsoft have their own CDN products with DDoS protections? I know they probably aren't at Cloudflare's level of sophistication, but in a few years' time don't you think it will be a native offering from the big three, as well as Cloudflare, duking it out in this space?


They do, but think about it. These companies charge money for traffic, so there's always a bit of a conflict of interest. My company has tons of customers from companies using cloud solutions who don't want the hit in pricing.

I do think one day they'll squash everyone out, but not anytime immediately.


What is your company? Or, alternately, what DDoS prevention company(s) would you most recommend?


Any of the players in, say, the typical Forrester Wave report is fine. However, don't necessarily trust the report's findings... just use it for the list of names.

One big thing, to me: call each as you would as a customer. Apparently some are easier to get ahold of than others.

Pick one where you can get a competent human on the phone quickly. Because when you're getting DDoS'ed and threatened by your bosses at 1AM, that's going to buy a lot of peace of mind.


DNS-based amplification has been very popular for many years now. By this point, if a DNS resolver is still usable for amplification, no email or call to the hosting ISP will do anything, as they have already received many other similar reports.


DNS is so broken in many different ways. It's crazy that it's still the backbone of the internet when it's insecure, awful nonsense. It's just ingrained.


Modern UDP-based protocols handle this in two ways. First, prefer to make responses no larger than the request, so there is no amplification.

Second, if the response has to be larger than the request, send the requestor an address-specific value in a small initial response, e.g. HMAC of a secret and the sender IP. Then any request that incurs a large response has to contain that value. If the sender is spoofing the IP address and can't receive the small response sent to that address, they can't cause a large response to be sent there.
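Concretely, the address-specific cookie could look something like this minimal Python sketch (the 5-minute rotation window and 16-byte truncation are arbitrary illustrative choices, not from any particular protocol):

  import hmac, hashlib, os, time

  SECRET = os.urandom(32)            # server-side secret, rotated periodically

  def make_cookie(client_ip: str) -> bytes:
      # Small, address-specific value that fits in a reply no larger
      # than the request that asked for it.
      epoch = int(time.time()) // 300   # 5-minute rotation window (arbitrary)
      msg = f"{client_ip}|{epoch}".encode()
      return hmac.new(SECRET, msg, hashlib.sha256).digest()[:16]

  def verify_cookie(client_ip: str, cookie: bytes) -> bool:
      # Accept the current or previous window so clients near a boundary
      # are not rejected.
      now = int(time.time()) // 300
      for epoch in (now, now - 1):
          msg = f"{client_ip}|{epoch}".encode()
          expected = hmac.new(SECRET, msg, hashlib.sha256).digest()[:16]
          if hmac.compare_digest(expected, cookie):
              return True
      return False

A request arriving without a valid cookie only gets make_cookie(ip) back; only a client that can actually receive packets at that address ever learns the cookie, so a spoofed source can't trigger the large response.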

This can't be done with DNS because of "security" middleboxes. They ossify the protocol because they reject anything they don't understand, and they don't understand new versions of the protocol even if both of the endpoints do. So the protocol gets frozen in time and no security improvements can be made because of the things that claim to be there to improve security.


That sounds like it's time to push standards forward, announce deprecations in advance, and have as many end services as possible adopt erroring out if what they are receiving isn't standards-compliant.

There is little actual reason for security middleware to not keep up.


Everything is working as intended though: we're talking about *security* middleware, not security *middleware*.

This stuff is built on the foundation of puffing out EnTeRpRiSe ScAlE egos with "look at all this vast complexity that I made, I am a god". It's not built on a technical foundation of always moving the needle forward just because you can and because it's cool and the right thing to do.

Sooo, all the $$$ get spent on dashboards and analytics screens and front panel designs and logos and stuff. The actual DNS bits? Probably /r/programminghorror material.


Except that in practice, deprecations don't work, no matter how much warning you give.


The point of deprecations is to eventually force a bad experience for those who are not keeping up. They definitely do work, but the time periods to effect change can be long. In the tech sphere many seem to interpret a long transition period as not working, given the usual pace of change.


Is IPv4 deprecated already?


That's different. There are ways in which ipv4 is subjectively better than ipv6, and "the catastrophe of needing more addresses" has not really panned out yet.


> There are ways in which ipv4 is subjectively better than ipv6

Apart from human readability, how so?


Some people like that addresses must be masqueraded behind NAT. We've solved how to do p2p through that system.


More "solved" than solved IMO. You still need some 3rd party to make the initial connection, for example.


The only way in which IPv4 is better than IPv6 is that many people understand one and not the other.


Three years ago Comcast told me they would be rolling out IPv6 to my business account... I'm still waiting.


Is Flash deprecated? Tried browsing without ECDSA support lately? It can be done.


Resolver software is massively distributed; you can't force anything. The only place that could force anything from the top is maybe the root servers, but even then, many resolver operators are probably just downloading the root zone in bulk via HTTPS from somewhere to precache it and don't contact the root servers at all.


It is not the backbone of the Internet. It is the backbone of most naming services.

The internet is IP. It is very robust. DNS is an application over the top of the Internet.


The public won’t be able to use the internet without it.


This is an interesting proposal. After some thought I must say I'm also in favour of it.


The public would still use internet applications, like Facebook. They just wouldn't be able to use the web any more.


I don't think Facebook's apps work when there's no access to DNS. At least it didn't seem like it when I was working to keep that capability for WhatsApp as it moved into FB datacenters.

I don't think very many other applications will work without DNS either, although I never did much competitive testing.


Facebook could (relatively) easily modify their system to do its own IP handling; they probably have enough money to rebuild the entire stack.


Sure, they could, but at least while I was there, there was no interest in doing it, and amazement that anyone else would want to (and pushback on declaring at least a handful of IPs as stably allocated enough to be included in app downloads).


Telegram does AFAIK.


Apps use hostnames 99% of the time.


When was the last time you read a URL aloud or typed it in?


The last 100 times were in the past few hours, roughly. Are you assuming nobody uses FQDNs or URLs anymore? Better yet, are you assuming only humans use those?


You were both savage and self-deprecating, well done!


I dictated the domain for our home automation system to our early-20s cleaner ("dictionaryword dot dictionaryword"), and after a few minutes she asked me "what do you usually Google to get there?".


a minute ago? what an outrageous question


I think many people Google “gmail” and click the link instead of memorizing URLs


Even then you're going to be hitting up DNS to turn "www.gmail.com" into an IP address.


Then an HTTP redirect to mail.google.com


And maybe to accounts.google.com too , if you’re not signed in


Today


> As a result, I'm going to add recurring testing for this capability and burn a couple IPs to make sure upstreams are listening to our rtbh announcements.

Could you or someone else expand on this? How do you coordinate rtbh with your ISP? And how do you check for whether it's working? I'd love to learn more on this topic. Thanks!


Many ISPs (better called transit providers in this context) offer a service whereby you announce to them a route (over BGP) with a specific BGP community, sometimes over a special session, sometimes in-band with your normal transit sessions, and they will blackhole (route to discard, null0, /dev/null) all traffic to that IP. Unlike normal internet announcements, these are generally (exclusively, with the providers I've worked with) accepted down to the smallest IP unit (v4 /32, v6 /64), so you can blackhole an IP which is being attacked without impacting other IPs inside the same subnet.

How do you test it? Very simple. Announce an IP (or a few) as blackhole and test to make sure things don't work (from that IP).

Very simply, you could set up something to ping something on that provider's infrastructure from that IP and... if it starts to work, alert!
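For illustration, a rough Python sketch of that check (the addresses and the alert hook are placeholders; the real thing would plug into whatever monitoring you already run):

  #!/usr/bin/env python3
  # Ping an upstream target from a source IP we permanently announce as
  # blackholed. If the ping ever succeeds, the upstream is ignoring our
  # RTBH announcement and someone should be alerted.
  import subprocess, time

  BLACKHOLED_SRC = "203.0.113.10"   # placeholder: IP announced as blackhole
  UPSTREAM_TARGET = "192.0.2.1"     # placeholder: host on the provider's network

  def ping_from(src: str, dst: str) -> bool:
      # -I binds the source address (Linux ping); returncode 0 means replies came back
      result = subprocess.run(
          ["ping", "-I", src, "-c", "3", "-W", "2", dst],
          stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
      return result.returncode == 0

  def alert(msg: str) -> None:
      print(f"ALERT: {msg}")          # placeholder: swap in pager/email/webhook

  while True:
      if ping_from(BLACKHOLED_SRC, UPSTREAM_TARGET):
          alert(f"{BLACKHOLED_SRC} is reachable; upstream is not honoring RTBH")
      time.sleep(300)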


To my knowledge there are also ways to integrate network observability tools (like Kentik) to automate this to a degree, for those that are big enough, or for whom DDoS events are common enough, that it's useful to do so.

I imagine getting paged because of a DDoS is slightly easier when the page tells you it has already null-routed a few IPs so the rest of your network isn't screwed, and you just have to work out how problematic it is for those specific IPs to be out of service and whether you need to take action or wait for the attackers to get bored.


That sort of encourages the attackers, because from their point of view, the attack is succeeding while the IP is being blackholed.


I think it’s more about mitigating collateral damage for cases where only one customer of the ISP is being targeted.


Can you give more detail on what you mean by the "recent Mikrotik CVE?" As far as I understood, the recent botnet was utilizing devices still unpatched since the original issue (2018?), as well as credentials gathered then even for patched devices that were affected but passwords weren't rotated.


You are correct. It's not really "new" just that it's still being used.

https://blog.mikrotik.com/security/meris-botnet.html


According to MikroTik, the recent botnet of hacked routers is only used for proxying traffic, not generating it. If this is true then they're only useful for hiding source addresses; an attacker actually loses power by using them for a volumetric attack.


If that were true, it would be trivial to trace back the traffic to the origin of the attack since you would see 40Gbps incoming at the target, one set of intermediaries, and 40Gbps to a single source (which is also unlikely given 40Gbps uplinks are quite a big monthly expense). They might be using the Tor network (or creating a makeshift proxy network), but it would seemingly be a waste of bandwidth on the target routers. The regular C&C approach seems more practical since it can make use of available bandwidth and leaves less of a trail.


It's quite unlikely there's a single link sending the traffic; that would be super easy to block. Most likely this is used to either hide the command server or the actually compromised servers. While symmetrical load can be quite obvious, it's less so if we're talking about 2000 links sending at 20 mb/s each. Especially when those links also have legitimate traffic.


I fail to understand how DNS reflection attacks are possible. Isn't it in the interest of any ISP to block outgoing spoofed IP packets, so as not to be accused of letting those attacks originate from them?


If you're thinking of residential ISPs which own the IPs that their customers use, it's mostly pretty simple to prevent spoofing (or at least, prevent spoofing outside their address range), but once you get customers that can bring their own IPs, it can be more cumbersome and the default is to not care.

There are, of course, proposals and protocols to make things better, but not caring is easier, and strict enforcement is hard, especially as the bandwidth goes up.

There are also enough ISPs that don't do egress filtering that naming and shaming isn't effective, and anyway, what am I going to do if I'm connected to an ISP that doesn't filter? I have exactly one meaningful offer of connectivity, so that's the one I've accepted, regardless of its merits. Many ISPs have a similar hold on customers.


Thanks.

> but once you get customers that can bring their own IPs (...)

You know these IPs though, since you route to your customer. So you could drop the offending packets. (I may be getting over my head here).


The routing need not be symmetric. If I've got two /24s and two ISPs, maybe I advertise each /24 on only one ISP, for traffic engineering purposes (I want to steer some of my clients to send through one ISP and some clients to send through another), but that doesn't mean I want or need to send the return traffic back through the same way it came in.

Also, the routing/BGP configuration is often separate from the filtering config, so making sure things are synchronized can cause problems.

It's far less time intensive as an ISP to just not filter once it starts getting complex. After all, you don't get a lot of customer service calls when you let ill-advised packets through, but you do get calls when you break things by dropping packets your clients were authorized to send.


> (I may be getting over my head here)

It may be so, however my own need for clarification was the same as yours. If ISP xyz operates a /16 IP block and only forwards packets with source addresses from its /16... then, I guess, my question is: how do spoofed packets usually travel past their first hops on their route to the target?

[edit] after-thought: I imagine attackers might easily spoof their source as any address in their ISP's /16 or /8. Feels like that's not the entire story here though.


I think you're still missing the point. Let's say I run John Doe's chop shop. I have purchased my own /24 (192.168.1.0/24). I want redundant connectivity, so I have service from Comcast and Qwest. I use BGP to decide whether I'm routing my IP space through Comcast or Qwest at any given moment, so they both need to allow traffic from that IP space even though it may not be routed through them at that moment. Multiply that by 100,000 customers, and it's easier to just not filter at all.


"I have 192.168.1.0/24 to sell you..."


"Ignore that guy. I'll offer you 172.16.0.0/12 for the same price."


Blocking outgoing spoofed packets requires time and money.

No carrier will shut down an ISP paying them 100k/month just because of some spoofed traffic.


The ISP originating the spoofed packets isn’t apparent to the person receiving the attack. The source is spoofed to the victim’s address so neither the DNS operator nor the victim can see where the spoofed packets originated.


I know. But one layer below, there's still a chain linking back to the origin ISP. The DNS operator can ask his own ISP about it.

(Also, to people down-voting a genuine question ? wth..)


Anecdotally I've noticed bot-like behavior on HN down-voting posts almost immediately, before a human could have read them. This is followed, typically, by a correction over a few minutes and then regular up- (or down-) voting resumes. I can imagine that HN is subjected to quite a lot of novel attacks of this nature, and I don't think it's in anyone's interest that they broadcast the details of it. My advice: be patient and take downvotes with a big grain of salt. They may not mean what you think they mean.


I’ve noticed it too. I think there are features of some sort already on the backend, though obviously not fool-proof.


Who the fuck cares about downvotes as long as you're sincere about your opinions?


Hard to be the smuggest dude in the valley at 1% opacity. Gotta keep the points up


The mac addresses on the ethernet layer are rewritten at each hop. You would only see the mac address of the router your router is connected to, not a chain linking back to the origin.


> The DNS operator can ask his own ISP about it.

Who would then have to determine where it's coming into their network from, and go ask that ISP, who would have to do the same, ad nauseam. And all parties would have to be paying enough staff to handle that load in addition to, you know, making sure their services work.


Two problems:

- the DNS operator doesn’t care. These look like normal requests.

- if they did care, asking an ISP to packet trace ingress traffic is not trivial. At any large scale ISP there are hundreds to thousands of direct peers that could have originated that traffic.


When you get to the level of traffic that it matters to check this, you are at the point that you cannot actually check this. Just think of the amount of traffic per second, the number of source addresses, and how many people it will take for how long to research that minute worth of traffic from last month.


I'm not sure how things are now, but 15 years ago (back when I was doing network admin type stuff), most routers did destination routing in hardware; source routing hit the CPU and was thus very slow. I think that was the primary reason not to do that back then---perhaps it's still the case now?


> recent microtik CVE

I bought a mikrotik router recently, for my home network. How can I tell if I am impacted by this, and are there any mitigation strategies?


https://blog.mikrotik.com/security/meris-botnet.html

It's not really recent (2018) just that it's still being used after all this time. If you're using a default config on the home router you're basically already fine (save for changing the default login).


MikroTik DDoS attacks are using CVE-2018-14847. [1]

If your router is up-to-date and not previously compromised, you should be fine.

[1] https://blog.mikrotik.com/security/meris-botnet.html


Or decentralize towards content-centric services.


Stupid question, but can't you just proxy your servers behind Cloudflare (while the attack lasts)?


That works for a website. Not so much if it's the core router of your business campus.


Disclaimer: I work at Cloudflare on Workers.

I asked around internally & Magic Transit is a solution the ISP should be able to leverage to protect themselves.


Or if you want your site to be accessible for everyone, value privacy, or remember this guy from yesterday: https://news.ycombinator.com/item?id=28552948


That's not really a solution. If you're getting hit with a 40Gbps attack and are running a website and you can simply use Cloudflare, that's a perfectly valid solution. Sure it means that some folks can't crawl your sites and a small number of people might get hit with captchas, but it's better than having no site.

I think people who make this argument tend to forget that Cloudflare is an infrastructure provider that the website owner employs. It's not really MITM if the site owner explicitly asked them to terminate TLS so that CF can provide load balancing, tunneling, and a number of other services. It's the exact same as using an AWS ELB. Yeah it terminates TLS, but you can't really say its doing MITM since the site owner specifically configured it for that purpose.


> Sure it means that some folks can't crawl your sites and a small number of people might get hit with captchas, but it's better than having no site.

Yeah, I just feel like people overreact to news.

Moms of 2021: this person in the news had a very bad case of X (covid, covid vaccine, hazelnuts, idk), I should avoid X preemptively

Nerds of 2021: this website in the news had a very bad DDoS attack, I should avoid DDoS attacks preemptively

The vast majority of people do not need to break the internet[1] for DDoS protection. It is really not that common. I know exactly nobody whose personal website got DDoS'ed. I do know people whose personal website is behind Cloudflare to preemptively avoid this problem.

I run a website myself where people can host all sorts of contents, I can totally imagine not everyone is happy with that. Never been on the receiving end of any kind of abuse though (people even ask me if I'm not afraid of that!). And if I were, I'd talk to my ISP -- they were previously involved in lawsuits for internet freedoms (i.e. on the good side), perhaps they are also happy to help me keep a site hosted with them before I need to consider moving to big brother corp for protection.

> It's not really MITM if the site owner explicitly asked them to terminate TLS

Nobody means MITM in the attacker sense when the service being MITM'd literally asked the proxy to proxy their traffic. Obviously. Saying that Cloudflare MITMs connections is a way to carry both meaning and judgement, similar to how I will talk about middleboxes on corporate networks that block evil haxxor tools that I need for my daily work (y'know, wireshark and such). I call those MITM boxes because that's what they do but also because I think they're more evil than good and the term reflects that (even if there are obviously pros and cons, same with Cloudflare).

> It's the exact same as using an AWS ELB

Hmm, if I understand what AWS does correctly, their load balancing service just routes traffic internally. It's not a transparent proxy where you think you're talking to one company but really you're talking to another. The manager at BigBank also understands intuitively that if they host their data at ExampleCorp, then ExampleCorp needs to not have data breaches. But if Cloudflare is just removing malicious traffic, it's not immediately obvious that they are in just as sensitive a position. The privacy policy rarely if ever mentions such proxying services.

I take your point though that it's not that different. This is also why I'd never host with Amazon or configure my email servers to be Google's, but yeah Cloudflare proxying gets more comments than hosting the whole thing at what some people perceive as an evilcorp. Not sure if that's for the aforementioned reasons or not.

[1] https://en.wikipedia.org/wiki/End-to-end_principle


What is the lowest level hardware that can help defend against DDoS on the backbone level? Or is that not a thing?


Since this is HN, it’s 2021 and DDoS’es are still a thing: why are they still a thing? Is there some fundamental “anonymity” to the Internet that makes it impossible to structurally prevent DDoS attacks? Apart from CloudFlare-like approaches, are there any R&D in the pipeline that may kill this type of attack once and for all?

To me it’s incredibly infuriating to see the damage that still happens with these extremely simple techniques. Will it ever end?

Edit: to elaborate, I know that there are tons of insecure Internet devices and whatnot. I’m more interested in standards, and core protocol improvements that can fundamentally rid the world of these types of attacks.


DDoSes come from asymmetries in the difficulty of making a request and providing a response.

For example, a classic is the SYN flood. In this, the attacker makes many requests to open TCP connections (with a SYN packet) but does none of the rest of the handshaking. The server does the initial bookkeeping for setting up TCP connections, before eventually timing out and freeing the resources. The SYN packets are very cheap and easy to form (and the attacker can immediately free all the resources after firing them away), but the connection state information is a little bigger. This asymmetry builds up, making the server overloaded.

I believe that asymmetries in resource consumption are, at some point, unavoidable.

You also asked:

> are there any R&D in the pipeline that may kill this type of attack once and for all?

Note that DDoS attacks are a class of attacks. They're a description for a broad genre. It's kind of like how "bronchitis" actually just means "inflamed bronchial tubes" - it's a symptom, not a cause. Even if we permanently defeat SYN floods, there are a gazillion other DDoS-type attacks (UDP flooding, messing with ARP tables - it's an almost endless list).


Why is this resource asymmetry unavoidable? It seems reasonable that a new protocol could solve it.

Consider augmenting TCP with a "pre-SYN" and "pre-SYNACK". Suppose you have to send a pre-SYN which declares your identity ip X. Then the server sends to X a "pre-SYNACK" encrypted message containing (X, t = timestamp()) based on a server-only key.

Afterwards, X may send the server a SYN for real. But then the server only allows a timeout of length, say, timestamp() - t, as of receiving the SYN back to maintain bookkeeping.

As benevolent cooperating clients, we may want to ourselves wait an exponentially-backed-off amount between receiving the pre-SYNACK from the server and sending our SYN back (such that in log-many rounds we establish our conn).

Then any malevolent actor must keep a resource on X alive for at least half as long as the server does (since the server requires zero bookkeeping after firing off the pre-SYNACK).

Edit: this is simplified, you could just own a machine at X which is far away from the ddos'd server and then have your botnet spoof requests from X; by augmenting this procedure with even more preamble rounds you could verify how long communication takes between the server and client X and subtract that out.


The server has to do an encryption and a decryption. A legit client has to do the pre-SYN dance. In your protocol, I don’t spam pre-SYNs; I spam SYNs with fake tokens. Thus I send only the same number of packets as before, but the server has to do these decryptions! Or I can mix and match spoofed SYNs with pre-SYNs and legitimate SYNs in whatever way finds the best leverage.

Your pre-SYN design is very close to the design of TLS 1.2, by the way. That’s how I knew how to attack it. TLS is attackable in just this way: open a real connection, then lots of fake handshake requests. No crypto required to generate them, but the server has to do lots.

Now that said, BCP 38 would go a LONG way towards addressing this. There will come a day when the well-behaved networks decide they’re not accepting traffic from non-compliant peers.


You do bring a good point. Maybe outlandish, but see my sibling comment (https://news.ycombinator.com/item?id=28578832) where the server overhead for rejecting invalid packets which the attacker made for free goes down to performing a hash.


Mind you, BCP 38 will do nothing against legitimate IPs used in DDoS attacks (which are more and more common because of large botnets, thanks to IoT devices and their abysmal security).


Are there legitimate reasons to spoof an IP?


This method is basically already in use for DoS mitigation, using a technique called SYN cookies. But as the parent said, there is a huge spectrum of different ways to produce a DoS, and so there is no single method of addressing them. The problem tends to be much harder as you get to higher levels, for example requesting things from an HTTP server - there are mitigations available there as well, such as Cloudflare's approach of capturing DoS-like requests and subjecting them to CAPTCHA, but once again no one size fits all.
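For anyone unfamiliar, a simplified Python sketch of the SYN cookie idea (real implementations, e.g. the Linux kernel's, also pack an MSS encoding into the sequence number; this only shows the stateless-handshake trick):

  import hmac, hashlib, os, time

  SECRET = os.urandom(16)

  def syn_cookie(src_ip, src_port, dst_ip, dst_port) -> int:
      # Derive the initial sequence number from the connection 4-tuple,
      # a coarse timestamp and a secret, so no per-connection state is
      # kept while the handshake is in flight.
      t = int(time.time()) >> 6                       # ~64-second slots
      msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
      return int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")

  def ack_is_valid(src_ip, src_port, dst_ip, dst_port, ack_num) -> bool:
      # Only when a valid ACK (cookie + 1) comes back does the server
      # allocate real connection state.
      now = int(time.time()) >> 6
      for t in (now, now - 1):                        # tolerate slot rollover
          msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{t}".encode()
          cookie = int.from_bytes(hmac.new(SECRET, msg, hashlib.sha256).digest()[:4], "big")
          if (cookie + 1) & 0xFFFFFFFF == ack_num:
              return True
      return False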


I guess the problem is "simply" the use of these resource-asymmetric protocols. As a server-owner, if there was a robust alternative protocol which ensured resource symmetry, wouldn't you be naturally incentivized to adopt it and reject all asymmetric ones?

It still feels like there's a way of specifying a protocol (which of course would need to be adopted) where (1) authentication is a first-class entity (2) all non-authenticated requests enforce resource symmetry at the protocol level, while still allowing asymmetric work to be done (after a resource-symmetric auth).

Edit: augmenting the rough outline of what I originally proposed you can essentially require an arbitrary amount of proof-of-work, however useless, on the requester side (e.g., literally request a nonce such that the hash of the packet is below some value). Since only auth needs to be symmetric the overhead doesn't seem too bad.


The problem with the proof-of-work concept, which is raised frequently, is that the vast majority of DoS traffic today comes from attackers that don't pay for their work. They're using botnets of compromised devices. So all a PoW requirement tends to do, in practice, is make their botnet run a little bit warmer.

Sure, you could crank up the difficulty until it makes the cost of the attack too high, but then all your users will get mad - that's basically what CloudFlare does and it attracts non-stop criticism.

Another way we could put this is that asymmetric work requirements are basically the definition of a network service. If we require the client to do enough computation that it expends as many resources as the backend, in a lot of ways there's no reason for the backend to even exist! People will be prone to migrate to a peer-to-peer or desktop solution instead of using your very very slow website.

Or a little more of a hot-take: the idea of proof-of-work requirements to prevent abuse of systems with asymmetric workload is almost as old as network systems. A prominent example is "hashcash" for email, introduced 1997. These have so routinely failed to gain any traction that we need to consider that the proof-of-work idea tends to come from a fundamental misunderstanding of the problem. Increasing the cost to the attacker sounds great until you consider that in most real-world situations, the attacker's budget is effectively infinite... because for a couple decades now these types of abuses have originated mostly from compromised systems, not from systems the attacker controls. Proof-of-work requirements still reduce the volume the attacker is capable of, but trends like IoT and the general proliferation of computing mean that the attacker's potential resources continually expand. More difficult PoW, in most cases, will only cause the attacker to make a relatively one-time investment of obtaining a new set of compromised resources (new botnet, new exploit to distribute existing botnet, etc). The increase in "per-item" cost to the attacker tends to never materialize.


> Sure, you could crank up the difficulty until it makes the cost of the attack too high, but then all your users will get mad - that's basically what CloudFlare does and it attracts non-stop criticism.

This is interesting, where can I read more about both sides?

I do think you hit the nail on the head here: resource symmetry is the center of the give-and-get, and making the botnet run warmer is the win (because it requires larger botnets, because it's more detectable, etc.). I think that this might make me upset as a client, but if it's economically advantageous for the server, then I'd still tolerate it to get the server's content.

I think that's why you need auth to be first-class in such a protocol. Yes, auth might be painful and annoying and require that your browser essentially mines hashes for a while on connection setup, but then after that presumably your website would be as fast as usual, and perhaps there might be a way to safely cache such authentication with a lease for valid users. Perhaps this kind of approach is too anti-internet, though, since it basically says that to do any useful backend work you need to be a registered user for some backend service.


Yes, I think there is a lot of room for better protocols that reduce the asymmetry. A lot of our internet is built on protocols that assumed good behavior, since back in the 70s and 80s and even 90s that was plausible.

But I don’t think we will ever get to a point where we bring that asymmetry to zero. We often really do want complex tasks to be done by a server in response to a simple client request.

Software people rarely put this asymmetry in the front of their minds during design, so I think it's something we will continue to see in new systems, even if we (magically) universally adopted better versions of the old, too-trusting protocols.


If we do some form of authentication it seems likely that the receiving side needs to do some kind of computation. What if I just sent garbage? Garbage is easy to create but the receiver needs to do some work to figure this out.


Hrm, I don't think you're engaging with the spirit of my proposal, which does require resource-symmetry to establish auth in the protocol.

I admit I haven't specified a full counterfactual protocol here, but see my edit for a rough outline of how you wouldn't just be able to generate garbage.


You said "I feel that..." so I suppose you're an mbti 'F' - excellent choice! But I feel you'll work on this idea for a bit and then throw your hands up.

The symmetry is irrelevant when the attacker has stolen someone else's resources. Granny may notice her computer is even slower than usual, what then?

If we stipulate a future where attackers cannot steal other people's resources, then work backwards, I can't see any internet with any degree of freedom.

It would be necessary for every system to be 100% watertight. Mathematically-provably so. Spectre/Meltdown demonstrates that this isn't just formal proof about software. Eradicating the pathways to ddos is an intractable problem. You'd need to seriously consider preventing all network access except through certified and regulated kiosks to which users do not have direct physical access. Like, hand a librarian a piece of paper with a URL on it and get a print out.

I'm no expert; you shouldn't have confidence that I'm correct.


On your somewhat off-topic meta-comment about my lack of due diligence: lucky for me I don't need to break out Coq to validate an RFC draft to post a comment on HN, which despite my mushy-gushy feelings has resulted in a productive, curious, and educational discussion, at least for me!

On the point of stolen resources, true, the attacker doesn't care, but I think if we get to the level of resource symmetry in a protocol we've effectively throttled a class of attacks. There are only so many grannies whom a given attacker can pwn. Symmetry is relevant because it makes it that much harder and that much more demanding of your botnet. Besides, like you mention at the end of your comment, demanding some kind of additional Byzantine DoS-tolerance is likely too hard of an ask.


> There are only so many grannies whom a given attacker can pwn

And at how many millions does that number start to taper down? Plus, zombie devices of all sorts are being used of course, so while it certainly does feel like some sort of resource-symmetry scheme would be, if nothing else, a satisfying solution, it's hard to see logically how that would really help all that much for these sorts of attacks.


Not to mention, implementation of new protocols is one thing; adoption of such protocols is another - I would dare say the harder of the two. If you are required to serve a population that won't keep up, it's an uphill battle.


You're too laser-focused on SYN attacks; there are countless others. What it boils down to is the sheer size of the botnets. You stop SYN floods? Great, I'll make it UDP with a max payload and send it from 500k hosts. You will eventually fall over if my botnet is big enough.


> Why is this resource asymmetry unavoidable? It seems reasonable that a new protocol could solve it.

Sending an HTTP GET is easy, but the server has to process the request and send the whole page... think a few tens of bytes of a request, and a whole webpage in a reply.


The fundamental asymmetry is bandwidth. If you can overwhelm my incoming connection, I can't do much useful work.

I've got to pay for capacity, but a bot net controller doesn't need a whole lot of bandwidth to trigger a lot. If you control 10,000 hosts with 10Mbps upload, that's 100Gbps; and botnets are growing as are typical upload bandwidths.

And that's without spoofing and amplified reflection attacks. Some of the reflection attacks have high amplification, so if you've got access to 100Gbps of spoofable traffic, you can direct Tbps of traffic.
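To put rough numbers on that (the ~50x amplification factor is just a commonly cited ballpark for DNS reflection, not a measurement):

  # Back-of-the-envelope math from the figures above.
  bots = 10_000
  upload_mbps = 10
  direct_gbps = bots * upload_mbps / 1000
  print(direct_gbps)                  # 100.0 Gbps of direct flood traffic

  # Reflection multiplies whatever spoofable traffic you control.
  amplification = 50                  # assumed ballpark for DNS amplification
  print(direct_gbps * amplification)  # 5000.0 Gbps, i.e. multiple Tbps at the target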

If my server has a 10G connection and you're sending me 1Tbps, there's just no way I can work.

SYN cookies work pretty well for TCP. I had run into some effective SYN floods against hosts I managed in 2017, but upgrading to FreeBSD 11 got me to handling line-rate SYNs on a 2x 1Gbps box, and I didn't take the time to test 2x10G as I wasn't really seeing that many SYN floods. I don't recall any more effective SYN floods after that point. We didn't tend to have dedicated attackers though; people seemed to be testing DDoS-as-a-service against us, and we'd tend to see exactly 90 or 300 seconds of abuse at random intervals.

Our hosting provider would null route IPs that were attacked if the volume was high enough for long enough. Null routing is kind of the best you can hope for your upstreams to do, but it's also a denial of service, so there you go.


> Why is this resource asymmetry unavoidable? It seems reasonable that a new protocol could solve it.

Protocols are trying to avoid this problem. For TCP there exist SYN cookies to reduce the amount of work a host has to do. For QUIC, Retry packets exist, as well as anti-amplification rules. However, applying those mechanisms also has a cost. One is that you still don't get the computational cost to zero, since you still have to receive and reject packets. The other is that your legitimate users now experience a latency penalty due to potentially requiring another round trip.

The latter means you don't want to make those mechanisms the default, but rather use them selectively in certain situations.

The former means even if you apply countermeasures you are not fully protected. If you get flooded purely by the bare amount of packets and the system stalls just from handling them, all the higher level mitigations won't work very well anymore. That's the difference between a DDoS and a DoS - the DDoS might not even need an asymmetry in cost, it just brute-forces the system down.


> Why is this resource asymmetry unavoidable? It seems reasonable that a new protocol could solve it.

There's some sense in which a little asymmetry is unavoidable -- a malicious attacker can always just send whatever bytes they want over the wire, and a server on the other end with any practical purpose must at a minimum receive each of those bytes AND perform some operation equivalent to determining if any more work needs to be done.

To the extent that any real work or state management needs to be done by the server, the only way to avoid much asymmetry is to use a protocol forcing the client calls to be artificially more expensive before the server will agree to respond.


Yes and... a botnet controller simply doesn't care anyway. She or he isn't spending their own resources.


The server would still need to track some state.

Worse, the cost is another round trip delay. That's a problem for anyone with a high latency connection.


Right, under the previously unspecified constraint that you need a response in one round trip I agree there's a natural resource asymmetry.


So a bit like a proof-of-work concept for packets? Doesn't sound like such a crazy idea to me.


It seems like requiring the client to provide proof of work (e.g. Hashcash[1]) could be an easy way to help reduce the asymmetry.

[1] https://en.wikipedia.org/wiki/Hashcash
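For a sense of the mechanism, a minimal Hashcash-style sketch in Python (the 20-bit difficulty is an arbitrary illustrative choice):

  import hashlib, itertools

  DIFFICULTY_BITS = 20                     # arbitrary; solving cost grows ~2^bits

  def solve(challenge: bytes) -> int:
      # Client: brute-force a nonce so the hash falls under the target.
      target = 1 << (256 - DIFFICULTY_BITS)
      for nonce in itertools.count():
          digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce

  def verify(challenge: bytes, nonce: int) -> bool:
      # Server: a single hash to check, however long the client worked.
      digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
      return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY_BITS))

The asymmetry flips: producing a valid request costs the client on the order of a million hashes, while rejecting an invalid one costs the server exactly one.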


> Note that DDoS attacks are a class of attacks. They're a description for a broad genre.

While they may be a class of attacks, I would describe all of them as undesired requests. Sometimes, with some clever protocol design, you can, in fact, mitigate a whole class of problems.


But why can't you just block the offending IPs in iptables?

My router has some SYN flood protection; is that useful at all?

At a high level these kinds of attacks must be easily fixable: just remove the offending IPs from the country's internet for a set amount of time... easy peasy.


The first 'D' of DDoS means distributed, which is why it is so broad. If you unplug yourself from the internet, yes, you can block them... Also, blackholing them like this won't be useful. Since the attack is coming from multiple zombie clients (hundreds of thousands) and the current internet structure uses IPv4 with CGNAT, you would block tens of millions of legitimate users for the sake of a small percentage of attackers.

Edit: Also, even though you blackhole them within your iptables, packets are still coming to you (routed) and filling up the bandwidth whether you like it or not.

And you pay your peers according to the bandwidth you use (e.g. bytes in and out per month).

So a proper solution is to use a globally distributed system to broadcast specific routes so the attack gets stopped even before it reaches you (e.g. through their own ISP's route table blackholing your routes). But that means legit users from that ISP won't be able to access your network at all...


Seems like that's another benefit of IPv6.

Without needing NAT, everyone can have their own IP address, and businesses will be able to ban individual machines instead of some giant NAT network because one host using that IP address is compromised.


Doesn't it also mean addresses serving as the source of an attack become more disposable?


The hackers still have to compromise the host right?


If you are sending packets but don't care about the reply, you can spoof the source address.

The other form of DDoS is UDP amplification: you send a packet to a legit host and spoof the target's IP as the source IP, the reply goes to your target and bam, DDoS.


Yep, but that never opens a full connection. You can filter on hosts who don't respond to an ACK after a certain number of failures.

Most machines don't have UDP services open to accept packets.


All machines need UDP open to get responses from NTP and DNS.

Moving these to TCP would actually seriously help with filtering DDoS, but good luck.


Yeah but as I understand it, Port 22 doesn't just sit open on a machine waiting for any connection.

Running a quick netstat -p udp doesn't show any UDP entries with LISTENING status.


This could be solved in IPv6 by implementing IPsec in AH mode.


Most devices are configured to randomize part of the v6 address for privacy so you can no longer see more info than ipv4 would show.


Ban the prefix instead.


A trillion hosts.deny entries?


How many hackers would it take to compromise a trillion hosts?

the hackers are better than I thought!


Yes, but these attacks are actually easy to detect if they all come from the same IP. Sending packets from a lot of different IPs is actually expensive.


It's not expensive if you have a botnet, reflective attacks or L7 attacks. It's also not just one type of attack (overloading the target), the only thing in common is that it's a denial of service, and if you distribute it amongst many sources it's a distributed denial of service.

Distributed can mean different things, especially if you can spoof addresses, in which case you can fake distribution (causing src-based filtering to be useless). If you know it's just a single bad AS that is doing it, then you can block the AS, but if it's distributed amongst multiple ASes that's harder. Same with reflective attacks: the source of the traffic might not be the source of the attack and might be a legitimate source which you cannot block because your internet wouldn't work anymore (i.e. when it's DNS servers or NTP servers doing the reflection, or a CGN address which would take an entire section of an ISP offline).

It's not as simple as sending a large volumetric load from A to B. It used to be that way a decade ago and it can be effective today, but it's not the only way it's happening.


That's the first 'D' in DDoS - distributed.

Most robots participating in this are unwitting. It's incredibly unlikely someone is paying... even with stolen funds.

Often a sort of malware that more or less sits dormant until command/control sends a target to attack

edit: To the individual machine, not a lot. The sum total is where the denial of service comes in - everything has a tipping point.


These days even people's IoT washing machines are being found participating in DDoS attacks.


The attacks come from (up to) millions of different ip addresses, each one a different machine and often on a residential network.

This is the problem with botnets - all the connections look like real traffic at first glance.


The src ip can be spoofed and randomized if you don't care about getting the response (and to some degree, you care more about not getting it)


That’s.. not what a Distributed Denial of Service does. They come from thousands of IPs.


It's not that simple. Collecting 250Gbps of uplink is not that simple. The attack vectors are simple in principle, but carrying it out...

Here's next year's fine vector: You write a nice iphone or android app and persuade fifty million people to download it. But unknown to the owners of those mobile phones, it also has a hidden evil feature: If the phone has the screen off, is charging and is on a WLAN, then it can participate in a DDoS if you direct them to. The phone's owner won't notice, but if enough phones are charging and each can contribute 2Mbps of upload, you could collect 250Gbps at the target.

And tracing it back to that app will be very difficult.


Google/Apple will catch you in hours/days and all your work goes down the drain. Also it's hard to get 50M downloads; not worth the time.

There is already an established economy of botnets. They work like this:

1. There is a hacker group (usually ex-USSR or China) that is actively monitoring 0-day vulnerabilities. They usually target internet-connected devices from companies that don't give a damn about updating them after they are released. This is the case for most budget routers and "smart" things (lamp, fridge, vacuum, doorbell, ...).

2. If they find a nice one (e.g. one that affects millions of shit-tier $40 routers), they set up port scanners and infect as many as they can.

3. Rent the botnet on darknet sites, there is already an established pricing for them, for example $1000 for 1 hour of 1000Gbps DDOS. You just pay them in crypto and tell what to attack.


> Google/Apple will catch you in hours/days

Tell that to the Hola and Honey extensions. They've been known to be a botnet and to enable this kind of abuse for years, yet you can still find them everywhere in the App/Play Store... and _so_ many tech-related YouTube channels keep advertising them blindly; it's ridiculous.


I had heard that Hola was possibly some kind of malware, but this is the first I'm hearing that Honey (I'm assuming we're both talking about the coupon-code extension) is like that too.

I just figured Honey was kind of like spyware, collecting information about what people bought online.


Honey redirects their users' traffic to their own ad identifiers and replaces them in the codes/script tags. That's how they make money. Personally, this kind of classifies as a CnC to me, because they can redirect traffic on demand for websites remotely.

If you look at the extracted codebase, it reads like a malware implant that's designed to give some blackhat traffic for their own army of clickbots.

Hola on the other hand is basically an origin obfuscator, and they route traffic through other browser extension instances.

That's how they "unlock" websites and what their proxy feature is about. Note that this is an http proxy only, and hola sniffs all local passwords and cookies, too.

So they can abuse your user account for lots of stuff without your consent, and they did that in the past, too. (A search for DDoS and hola reveals lots of incidents and news reports)


While I agree that Honey is borderline spyware and extremely privacy invasive, that's a little different from claiming it is malware facilitating DDoS attacks.


Echoing the sibling comment, I knew Honey was WAY-too-good-to-be-true from the seemingly infinite marketing resources being poured into it - but I had no idea it was a botnet (?!). Further insight highly appreciated!


(2015) Imgur was being used to create a botnet and DDOS 8Chan : https://news.ycombinator.com/item?id=10256942


It's not that easy to get 50 million people to download your app and if you pulled that off there are more profitable things you could do.


Toolbars did just that and have been used for many "evil" things...


My point exactly.

These things are easy to do in principle (most readers here can write that evil payload, or learn how to do it quickly) but there are real practical challenges to getting the code to run on millions of strangers' computers/phones/dsl routers, so most of the code that's actually deployed that widely is beneficial. Happily.


Why limit it to DDoSing? It could be just one feature, but an important one.


On iOS the OS does not allow apps to do this, you basically can only use the OS APIs to do networking in the background so clever tricks are right out, and it is throttled.


What's the solution to this? If millions of devices are each sending 2Mbps to some target, what can the target do? It doesn't sound like there's a program that would be able to identify something so diffuse.


It is intrinsic to the design.

The open nature of the protocol carries with it an implicit trust. Any client may join and speak to any other client and be heard. It is a social contract, just as we have "in real life."

Imagine someone knocks on your door. You can choose to not answer, but even considering the choice costs you cognitive bandwidth. As long as you're within earshot of your door, a knock will cost you attention.

We have solutions all across the spectrum, from putting up a "no soliciting" sign, to arresting bad actors. But what if you have a riot outside your front door (a DDoS)?


The rioters you mention are remote teleconference zombies. There are those that can stop them with only relatively small effort - ISPs and less directly transit providers, but it is against their interests.

Perhaps a better analogy may be robocalls with spoofed numbers.


I think you understand why edpnet.be has problems now.

What they usually do is try to write packet filter rules and drop packets as close to the millions of origins as possible, plus call the networks where the sending devices are and say things like "could you please call your customers and tell them to unfuck their {phones,dsl routers,…}"


Remote-triggered blackholing of the networks from which the DDoS comes is very effective for small DDoSes, but it's a very crude method that drops entire /24s from being able to reach you.

This doesn't work for massive DDoSes.


The abuse system would need to detect that these IP addresses are sending an abnormally sustained amount of traffic and deny them.


I think part of the difficulty is that just receiving or denying the traffic can consume all of your resources. It really has to be denied at the ISP levels before it reaches your router.


Why would it be so hard to trace back?


It wouldn't be hard.

If you can examine traffic on a network from which attacking traffic originates, you can see that it's coming from a phone.

Then you could see which app by either limiting or deleting apps one at a time, or you'd need two sources, then just see which apps the two sources have in common.

When you have 250 Gbps, it'd be simple to capture a fraction of a second's worth of traffic, then write a script to fire off emails to the network admins of every IP involved and ask them to look into it. Out of hundreds or thousands of messages, you'll get a few humans who'd look into it and would be helpful.

I should know, because I've done this very thing.


If your nontechnical neighbour's ISP calls and says "you're participating in a DDoS, will you please find out which device behind your NAT is sending the traffic and fix whatever the problem is", do you think your neighbour can fix it in five minutes?


They could remote stop the router?


A responsible ISP will block the user but this means the user is going to call the helpdesk which costs money.

An irresponsible ISP will stall the process to see if you give up, which is free.


They could, which mitigates the attack without locating the actual source (that is, your app) and leaves you free to use the same app again for another attack next week. Maybe even taking care to use a different subset of the devices where your app is installed.


Because nothing says you have to include your real IP in the source field of the IP packet if you don't care about the reply.


rp_filter (reverse path) does. Problem is it's a leaf(ish) router solution: if the src ip is not in the locally attached subnet, drop it. Not every isp sets this up though, for whatever reason.


Limiting your spoofability to IPs from the same ISP doesn't help much, especially since many ISPs have multiple subnets, so it won't improve your target's ability to filter the packets out.


Unpopular opinion that will get downvoted:

This is why the app stores do reviews etc. These kinds of things don't happen because of that.


Getting this kind of behaviour past review is simple, and malevolent apps do happen.

Google has a clever technique for detecting them (after review) and it's good, but slow. It has a decent chance of detecting the app after a few attacks, and works by analysing which apps were installed on phones just before a factory reinstall.
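
A toy sketch of the kind of correlation described above (nothing to do with Google's actual pipeline; the data shapes and thresholds are invented):

    # flag apps whose owners factory-reset far more often than the baseline
    from collections import Counter

    def suspicious_apps(devices, min_installs=50):
        """devices: list of dicts like {"apps": {"com.example.x", ...}, "reset": bool}."""
        installs, resets = Counter(), Counter()
        for d in devices:
            for app in d["apps"]:
                installs[app] += 1
                resets[app] += d["reset"]
        base_rate = sum(d["reset"] for d in devices) / len(devices) or 0.01
        flagged = []
        for app, n in installs.items():
            if n >= min_installs and resets[app] / n > 3 * base_rate:
                flagged.append((resets[app] / n / base_rate, app))
        return sorted(flagged, reverse=True)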


Sure, the official app store can do that, but that doesn't mean the user shouldn't have the option to sideload apps.


... except that they most definitely do still happen in spite of that.

But yes, they do tend to mean less obviously-just-malware apps.


Simple - egress filtering:

https://en.wikipedia.org/wiki/Egress_filtering

If every large network operator did this, then shutting down abusive hosts used to generate the kind of traffic needed for attacks like these would be possible.

You'd make a list of all the IPs for each large provider from which excess traffic is coming, and you'd send them to each large provider, and they'd either block the IPs completely (requiring customers to fix their compromised machines before they could get back on the Internet), or at very least they would block all traffic to the destination of the attack.
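
To make the check itself concrete, a minimal sketch of the decision a border router would make (the prefixes are made up for illustration):

    # "is this source address actually ours?" - the core of an egress filter
    from ipaddress import ip_address, ip_network

    OUR_PREFIXES = [ip_network("198.51.100.0/24"), ip_network("203.0.113.0/24")]

    def allow_egress(src_ip: str) -> bool:
        src = ip_address(src_ip)
        return any(src in net for net in OUR_PREFIXES)

    # packets failing this check are spoofed by definition; they can be dropped
    # and the internal port/customer they came from reported for cleanup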

Some silly people think this isn't possible, but if large providers are forced to do it, then they can easily tell smaller ones they'll be blocked if they don't also do this, and so on, and we'd have a slightly more responsible Internet again.

Too bad we have shitty, huge companies that don't care about doing the right thing :(


> Too bad we have shitty, huge companies that don't care about doing the right thing :(

The situation is not really different from dumping waste in the nearest stream. It's naive to think that companies won't just act in their own best interest unless they are forced to clean up their act.


So what you’re saying is that legislation would help here?


> Some silly people think this isn't possible, but if large providers are forced to do it, then they can easily tell smaller ones they'll be blocked if they don't also do this, and so on, and we'd have a slightly more responsible Internet again.

You go ahead and force a large provider to do it, and let us know how. From what I can tell from reading NANOG mailing list posts off and on for decades, most of the larger providers really can't be bothered to do this, and there isn't any effective leverage to force them.


Telcos also tried claiming it's impossible to secure SS7, and here we are in 2021 finally getting STIR/SHAKEN implemented https://www.fcc.gov/spoofed-robocalls


No, the solution is ingress filtering and a lot of ISPs do it these days, see BCP 38.


Huh?

Ingress filtering is filtering traffic coming in to your network. How are you supposed to know whether traffic coming from a certain peer actually legitimately originates from that peer?

Egress filtering is filtering traffic that exits your network. It's your network, so you should know with absolute certainty whether it's real - either the source is one of your networks, or it isn't, and if it's not, you drop it.

Please explain how ingress filtering is supposed to be the better solution.


We are almost talking about the same thing. But it's better to accomplish this on all your customer links, otherwise they can still spoof within your network.

Read up on BCP 38? This is from the first hit on Google: Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing


> Some silly people think this isn't possible, but if large providers are forced to do it, then they can easily tell smaller ones they'll be blocked if they don't also do this, and so on, and we'd have a slightly more responsible Internet again.

> Too bad we have shitty, huge companies that don't care about doing the right thing :(

There's no call to be dismissive of anyone who disagrees with you; maybe you're "silly" for believing that companies will threaten each other for no reason (that they care about).


Nope, they are basically right, and it's a source of endless frustration to security researchers that these very simple problems are still things that have to be dealt with in 2021.


Usually I would agree, but not in this instance. It really is silly that some ISPs are not doing this by default; just fixing this would reduce so many types of DDoS attacks. The current size of DDoS attacks is just insane, and thanks to these attacks the decentralized nature of the internet is eroding every day.


As far as I know, a lot of DDoS attacks use UDP amplification, which can be prevented if every ISP implements BCP 38; i.e. drop UDP traffic at the edge of their network that has a source that cannot have come from within their network.

EDIT: To clarify, this won't stop layer-7 based DDoS attacks, or anything that uses TCP (like SYN flooding). Just UDP amplification.


>or anything that uses TCP (like SYN flooding)

Plenty of SYN floods spoof IP as well. If you don't need to get the response, and you're behind an ISP that doesn't bother blocking IP spoofing, why would you use your actual IP? It'll make it much harder to actually trace an attack to the actual device doing it. It won't work on devices behind NAT but neither will reflected UDP attacks.


BCP 38 would stop SYN flooding if it is using source address spoofing. It won't stop any attack not based on IP spoofing though.


Most/all ddos attacks come from unwitting people whose computers or iot devices have been taken over on the order of tens or hundreds of thousands of devices. So there isn't much that can be done beyond filtering or having a large enough pipeline to absorb the traffic.

Even if you had a central authority cutting off access from ip addresses, do you cut off whole university campuses because someone's "smart" coffee machine is participating in a ddos or entire businesses because there is a computer in a closet somewhere that is infected?


We talked about this problem when I worked at a telco. The problem isn't necessarily cutting off the devices that are part of the attack, it's dealing with the aftermath.

As sad as it is, most people won't understand that their device has been infected, and the telcos won't take financial responsibility for doing the cleanup or verification.


> Even if you had a central authority cutting off access from ip addresses, do you cut off whole university campuses because someone's "smart" coffee machine is participating in a ddos or entire businesses because there is a computer in a closet somewhere that is infected?

I think the solution is probably similar to the sorts of laws that we have for dealing with pollution and similar bad behaviour. Start with warnings and education, escalate to fines and other penalties, and only escalate to outright bans in the worst and most recalcitrant cases.


> do you cut off whole university campuses because someone's "smart" coffee machine is participating in a ddos

How about: use IPv6, and cut off just the coffee machine.


I feel like home router makers could intervene here and let their owners know when they’ve likely been hacked. Maybe even a regulation type deal?


That's never going to work but it very well could lead to ISPs very strongly herding customers to use their own managed routers. They already push for that for customer support purposes, using this to start automatically snooping on "malicious" traffic within a customer's network would be a big step in the wrong direction.

You could make a compromise here and require ISPs and network vendors to support a common notification protocol to identify devices sourcing malicious traffic. Most ISPs already have systems in place to send notification messages to customers that are sending botnet traffic. You could mandate it for all of them and let the customer decide if they want their router configured to automatically block a device they were notified about or just record the MAC address and send the ISP a URL the user can visit to view the device sending the traffic. You could make it friendly to the unwashed masses and still put detection on the ISPs and not give them privileged access inside every customer's network.


> home router makers could intervene here and let their owners know when they’ve likely been hacked

How?

A couple of years ago I had an email from our ISP telling me that "common port X is open on your router" (forwarded to a box exactly as I set it up) and asking if I could "fix the problem".

Except it wasn't a mistake or a configuration error, I deliberately set it up that way.


I don't see how this would be feasible without giving them a backdoor to snoop around in my home network. No thanks.


> do you cut off whole university campuses because someone's "smart" coffee machine is participating in a ddos

Yes?

That’s the only reasonable way to go about it. If you block only single clients you’ll just paint a large bullseye on universities.


With DDoS the problem is that botnets come from a bunch of random devices that were compromised (for example, your un-updated smart TV or old router could be contributing to the DDoS without your knowledge.)

With spam calls, ransomware, and most other forms of cybercrime, the problem is that Russia/China and many 3rd world countries don't extradite/cooperate or have the resources to work with international authorities.

I sometimes wonder how much safer the internet would be if China and Russia were cut off. Wouldn't be so great for the already poor state of liberty in either country, but the rest of the world would certainly benefit.


It's true that they often use botnets, but amplification attacks are more and more common, and an important component here.

In these, the attacker sends crafted requests to an unwitting third party, and that third party then floods the victim with traffic. DNS amplification is probably the best known, here, since DNS is (usually) over UDP which has no persistent connection for replies:

- Send a DNS query to a DNS server, using your victim's IP address as the "source IP" for the query. Make the query something with a huge response, like using "ANY".

- DNS server responds to the victim's IP address, sending many big UDP packets with the response. After all, it can't know any better - it must* just trust the claimed source IP!

- Victim gets a huge flood of UDP traffic, overwhelming IP-level infrastructure like routers and switches.

This lets a relatively small attacker multiply ("amplify") their impact - and they do it while traversing a third party, making them more hidden.

---

* Of course, there actually are mitigations for this; it's possible to detect this spoofing. But this is how DNS operators have worked in the past.
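
To give a sense of the multiplier involved, a back-of-the-envelope sketch (the packet sizes are illustrative, not measured):

    # rough amplification factor for a DNS "ANY" reflection
    query_bytes = 60       # small spoofed query
    response_bytes = 3000  # large response with many records
    amplification = response_bytes / query_bytes
    print(f"amplification ~{amplification:.0f}x")
    # so each Gbps of spoofed queries lands roughly 50 Gbps of responses on the victim
    print(f"victim sees ~{amplification:.0f} Gbps per Gbps of queries")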


"without your knowledge"... is there a way to check if my IP was participating in the DDoS? If they are filtering the traffic, they might be able to create a list of IPs that I can check. Or perhaps, an IP can be associated with a contact info so the if any of my devices would be infected and participate in DDoS, I would get notified by the victim and could take action.


>Wouldn't be so great for the already poor state of liberty in either country, but the rest of the world would certainly benefit.

That's a pretty closed minded way of looking at things.


They are already doing it themselves...


How so? To me, closed-mindedness is shooting down an idea without a reason or alternative.


That's the reason why my NextDNS blocks all .ru and .ch domains.

I don't trust them + I don't speak the language, so entirely blocking them is an easy decision.

NextDNS on my router also blocks generated (DGA) domains, which helps against C&C servers.


.ch is Swiss TLD ;)


Oh yeah, indeed. Verified it, it was .cn that is blocked ;)

Thanks for reminding me


Cutting off countries will do very little. Compromised devices are spread evenly across the globe, and even if their puppeteers sit in China or Russia, you wouldn't be able to block them from issuing orders to botnets.


> the rest of the world would certainly benefit

OTOH most pirate library sites are hosted in those countries for the same reason.


Imagine the real-world equivalent: someone spreads a rumor that the Fifth Avenue Apple store is giving away free iPhones at 6pm. Ten thousand people turn up and block the store, the sidewalk, the roads…

How should a business defend against that without losing legitimate customers?

The answer is probably along the lines of “educate people not to believe rumors” (:: “educate people not to run insecure software”).


> (:: “educate people not to run insecure software”).

Some of these exploits use zero-day attacks; the users cannot do anything.


How does secure software protect you from acknowledging perfectly innocuous requests?


It’s the zombie-botnet-machine owners who need to be running more secure software, not the botnet victims.


The issue is: botnets. The attacks come from multiple sources worldwide. No anonymity, rather a swarm of legitimate hosts sending masses of requests. Research and development? Network connections come in, and one way or another some devices have to filter them in or out. AFAIK it's an arms race: large botnets on one side, and hardware optimised to filter out certain requests on the other.

It's a simple technique, but it requires compromising enough networked resources to make the attack effective.

Cloudflare and other providers with large capacity are well armed to deflect a DDoS that can't overwhelm them, but even then they aren't safe from a botnet larger than what their network resources can handle.

If someone knows more than I do, maybe they could elaborate on technologies I'm not aware of.


A DDoS attack is indistinguishable from a huge legitimate success. See Slashdot effect [1].

The very problem of a DDoS attack stems from that (1) each request is a small, fully legitimate request and (2) there are too many of them, coming from too many sources for filtering to be efficient. It is also mixed with indistinguishable non-attack requests by your normal customers.

So no, even the best authentication would help little.

[1]: https://en.wikipedia.org/wiki/Slashdot_effect


And which sometimes happens to sites with limited b/w when they trend on HN.


DDoS events typically use distributed botnets. Unless you analyze the C&C, you're probably asking for wide deployment of ACLs (flowspec or otherwise); such agreements do happen occasionally, but they are operationally brutal and slow (otherwise they would be subject to abuse).

You, right now, may be participating in a DDOS attack without knowing it. Should your ISP cut you off based on information from some Belgian ISP?


There are many parts to the answer to your question, but one part yet to be implemented in a meaningful manner is BCP 38 [1]. The RFCs around BCP 38 keep changing, and that may be part of the issue. It is still too easy to spoof the source across ISP boundaries. Then of course there is also the issue of getting all the ISPs to work together to detect and mitigate these attacks, and that involves costs that the tier 1 providers barely spend towards. There are so many more pieces to this problem and I am barely touching on it.

[1] - https://www.rfc-editor.org/info/rfc8704


It isn't anonymity. It's insecurity. Nothing changes until you fix the human factor: bad code, weak passwords, shitty programming practices, and outsourcing to the cheapest, worst vendors.


There are very few consequences.

Until states start doing s/sea/internet/g on their old-school maritime piracy laws[0] I don't think anything is going to change, except attacks are gonna get bigger and bigger.

Under current law, that's a lifetime sentence in many countries (the US[1] and Canada[2] at least). It should do the trick if most states get on board.

> Acts of piracy threaten internet security by endangering, in particular, the welfare of internetfarers and the security of navigation and commerce. These criminal acts may result in the loss of life, physical harm or hostage-taking of internetfarers, significant disruptions to commerce and navigation, financial losses to server owners, increased insurance premiums and security costs, increased costs to consumers and producers, and damage to the internet environment. Pirate attacks can have widespread ramifications, including preventing humanitarian assistance and increasing the costs of future connectivity to the affected areas.

[0] https://www.un.org/depts/los/piracy/piracy.htm [1] https://www.law.cornell.edu/uscode/text/18/1651 [2] https://laws-lois.justice.gc.ca/eng/acts/c-46/page-11.html#s...


>> Since this is HN, it’s 2021 and DDoS’es are still a thing: why are they still a thing? Is there some fundamental “anonymity” to the Internet that makes it impossible to structurally prevent DDoS attacks?

IMHO that is exactly what the problem is. Whether it's malware, phishing scams, or extortion schemes, the fundamental problem is knowing where stuff comes from.

I propose, as a starting point, an IPv6 subnet that routes by using a portion of the IP address as lat/long coordinates, because routing tables are literally a form of indirection.


> an IPv6 subnet that routes by using a portion of the IP address as lat/long coordinates

That seems like a rather large privacy problem.


You could for example protect all networked computers with single packet authorization. They will ignore all network traffic unless a signed packet is sent first. To the unauthorized it's like the computer is not even there.
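
A minimal sketch of what that could look like, assuming a pre-shared key and a UDP "knock" packet of the form timestamp || HMAC(key, timestamp || client_ip):

    import hmac, hashlib, struct, time

    KEY = b"pre-shared-secret"  # assumption: distributed out of band
    WINDOW = 30                 # seconds of allowed clock skew

    def verify_knock(payload: bytes, client_ip: str) -> bool:
        if len(payload) != 8 + 32:
            return False
        ts = struct.unpack("!Q", payload[:8])[0]
        if abs(time.time() - ts) > WINDOW:
            return False  # stale; a real implementation also tracks seen knocks to stop replays
        expected = hmac.new(KEY, payload[:8] + client_ip.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, payload[8:])

    # until verify_knock() succeeds for a source, the host sends nothing back,
    # so to an unauthorized scanner it looks like nothing is listening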


This is basically how a VPN tunnel works, in a way (WireGuard, IPsec if properly configured, etc.).

It has one enormous downside: it destroys reachability to your systems from the population at large, because they need to hold some key, which is not universally distributed.


> it destroys reachability to your systems from the population at large

Indeed.

Sometimes I think it was never meant to be. We're allowing our computers to talk to untrustworthy strangers. It was always a matter of time before one of them sent a crafted payload...

Perhaps computers shouldn't be so reachable. Can't exploit a computer that's not even there, zero days or not.


ZERO monetary incentives to deploy technical solutions preventing DDOSes.

It would cost additional hardware, manpower, time and PR dealing with disconnected customers. Ignoring the problem lets you offer paid extra protection services.


I'm very much not a network engineer, but I'd like to understand the magnitude of this issue because my intuition is wrong:

250 Gbps seems like it would definitely be a lot for a server or website but it also seem like a drop in the bucket for an ISP providing broadband for many customers.

Clearly I'm wrong because it is an issue here. I'd like to understand why I'm wrong, and I hope that here, on HN, that's taken in the spirit of curiosity intended and not negativism.

So, what am I missing? Maybe Belgian broadband is lower capacity than what I'm used to in a US metropolitan area? Maybe this particular ISP served a population too small to have a... um... "fat pipe"? I'd like to understand.


An ISP has two types of edge connections where DDoS traffic comes into their network. The first is private peering, where they are directly connected to a neighbor in one or more locations; typically neither party pays for traffic, and it can be anything from a single 1G interface to multiple 100G interfaces. The second type is transit, which smaller and mid-sized ISPs buy from larger ones and pay for by traffic. Everything that does not go directly to one of the peers mentioned before goes over this connection. It can also vary a lot in size.

You can just add up all the edge links and know their theoretical limit, but most of the DDoS traffic likely comes from Asia, and it's unlikely that they peer directly with those ISPs. So most of the traffic will come in over transit, but the larger ISP likely has an automated system that handles volumetric attacks. So if there's any impact it will be on transit, mostly affecting traffic to/from outside of Europe. 250 might be a lot for this ISP, but they might also have a lot more transit headroom, or it might be handled by their transit provider. Who knows.


As an outsider to network topology that's fascinating to understand. Thanks for explaining.

If you don't mind going further, how does an ISP (or anti-DDoS services) mitigate something like this without incurring even more overhead to examine the traffic & not route it onwards? Just a flat block against the sources, or something more sophisticated?


ISP mitigation is usually simple. Figure out what the destination of the attack traffic is and null route[1] that, pushing those null routes upstream where possible. If the destination IPs for the abuse are too diffuse and/or the upstream won't do it on their routers where they presumably have more capacity to drop than to forward to you, you're still limited by your connection bandwidth on the connections the abuse is flowing. Depending on the use case, sometimes you can do pretty well by cycling your legitimate traffic across a large block of IPs faster than the abusers can cycle their abuse traffic, but it depends on relationships and APIs to control null routing of your upstreams. More sophisticated things like only block traffic to/from port X, or throttling rather than blocking aren't generally available (but would often be super helpful if they were).

Anti-DDoS services are quite a bit different, they generally have obscene bandwidth, and BGP advertise your attacked IPs, so they can get the traffic, then they "scrub" the traffic and provide it back to you over a tunnel (or something). Scrubbing can be relatively simple stuff like rate limiting UDP and especially fragmented UDP (which is usually stuff coming from reflection attacks) or more complex stuff like looking at TCP options to determine good vs bad traffic and only letting good traffic through, or tracking TCP SYNs and only letting retries through, plenty of other stuff with TCP state machines (or half state machines, if they're only seeing the incoming traffic and not the outgoing traffic). Oh, and they usually charge you gobs of money.

[1] if destination == IP, drop packet


It's only simple if you don't care that the service behind that IP is down. I would only use it as a last resort if there are no other tools available, or if I were a consumer ISP. Any competent ISP with business customers, hosting or otherwise, wouldn't be doing this.


Null routing is often (but not always) used in the scenario that the service behind the IP is already down, but null routing will improve service to everyone else, so it's an acceptable casualty.

I was (working for) a major customer of a competent hosting service and they'd do null routes (automated) on our service IPs when we got attacked. Of course we complained, but it was that or have automation put us behind a traffic 'scrubber' that caused more problems than null routing. In the end, what made it right for us was when they got the traffic levels and sampling intervals set properly; only null route if incoming traffic is above line rate for the server, and initially null route for a short time (5 minutes, I think) before resampling and escalating the time. Most attacks were short, so the default of 6 hours was excessive.
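
A toy sketch of that kind of escalation policy (the thresholds and hold times are made up; announce, withdraw and schedule stand in for whatever pushes/pulls a blackhole route upstream and delays the withdrawal):

    LINE_RATE_BPS = 1e9                # what the server's link can actually take
    HOLD_TIMES = [300, 1800, 21600]    # 5 min, 30 min, 6 h

    def evaluate(ip, sampled_bps, strikes, announce, withdraw, schedule):
        if sampled_bps <= LINE_RATE_BPS:
            return 0                   # traffic fits the pipe; reset the strike count
        hold = HOLD_TIMES[min(strikes, len(HOLD_TIMES) - 1)]
        announce(ip)                   # e.g. a /32 blackhole announced via BGP
        schedule(hold, withdraw, ip)   # pull the route after the hold time, then resample
        return strikes + 1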


And if you handled the DDoS by dropping the right traffic, the service would be unaffected.


I mean, yes, but a) they didn't have the equipment that could do that at the rate required, b) they certainly couldn't ask their upstreams to do that and at least for some of the attacks, the attack traffic added enough traffic to go over capacity of some of their links; having the upstream null route it was possible and solved the congestion.


Haven't done much traditional networking for a few years so my knowledge is not that current.

They use bgp flowspec to send out filter lists using the bgp protocol to routers, essentially a long list of ACL rules with srcip dstip dstport. These routers can filter any amount of traffic it gets in hardware, so as long as your upstream (edge transit or peering links) are not full you won't even notice the ddos.

But you need something to analyse traffic that can understand which traffic is ddos and what is normal. We used devices from Arbor which are basically regular x86 servers. These devices also received netflow so whenever a ddos target received a lot of weird traffic (lots of DNS for example) it was configured to redirect all traffic for that IP to itself where it could gain a better understanding than what you can from netflow and either filtered it locally or sent out flowspec updates to filter it on routers. You could also enable this manually for non-volumetric ddos and do some filtering, but it's hard to do if it's encrypted and if they are abusing some http call that is just expensive to handle for the servers.
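
A rough sketch of turning flow records into flowspec-style filter rules, with invented field names and thresholds (real deployments obviously do far more than this):

    from collections import defaultdict

    DNS_BYTES_THRESHOLD = 10e9  # port-53 UDP bytes toward one target in a sample window

    def build_rules(flows):
        """flows: iterable of dicts like
        {"src": ..., "dst": ..., "proto": "udp", "src_port": 53, "bytes": 1200}"""
        dns_bytes = defaultdict(int)
        for f in flows:
            if f["proto"] == "udp" and f["src_port"] == 53:
                dns_bytes[f["dst"]] += f["bytes"]
        # one rule per victim IP: drop (or rate-limit) UDP sourced from port 53
        return [{"match": {"dst": victim, "proto": "udp", "src_port": 53}, "action": "discard"}
                for victim, total in dns_bytes.items() if total > DNS_BYTES_THRESHOLD]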


ISPs have a very decentralized network, that is, they have routers in a lot of places.

If you sum up the available traffic capacity, you get a number that's far bigger than 250 Gbit/s.

But that's not the right comparison to make. Much of that DDoS traffic goes through multiple of the ISP's routers, and if you're doing just a numbers comparison, you'd have to multiply the attack rate by the (average) number of hops.

But things are even more complicated in reality: many smaller links will have only 10Gbit/s or 20Gbit/s or 40Gbit/s capacity (or maybe even 1G), and saturating them not only causes increased latency, but also routing the traffic around causes more overall load on the network.

Then there are management systems behind firewalls, and if you DDoS them, it's typically the firewalls that give out first.

Finally, customers don't get the full throughput internally available to an ISP; if you DDoS a customer's IP, their link will be saturated, even if the ISP itself has plenty to spare.

(Not a network engineer myself, but working closely with some).


So in the case where the attackers are aiming their traffic at one of the ISP's routers to saturate a link, how do they get the IP address/domain name of this router?


Traceroutes. Peering information is also often public.

More sinister options like insider information or leaked/exfiltrated network plans are also an option, but usually you get pretty far with standard network diagnostic tools.


Wouldn't that all be true of normal web traffic too, presumably already capable of rates beyond 250Gbps? Though they may not have that much free headroom on top of normal traffic.


250Gbps is still a lot of bandwidth for a network even today. An enterprise ISP like Verizon or Spectrum may have less of a capacity issue with it but for many other networks this is a lot.

Quick example, though not an ISP: the entire Internet Archive operates on 60 Gbps of bandwidth. So an attack of 250 would be quite a problem.

http://blog.archive.org/2020/05/11/thank-you-for-helping-us-...


the entire Internet archive operates on 60Gbps of bandwidth

Wow, yes, that definitely puts the magnitude of this into perspective. Thank you.


Though that's a substantial amount of traffic: isn't this currently going through one ISP, which (I would hope) has many more customers than just archive.org? "top 160 site" is significant, but still minuscule compared to all internet traffic.


I think I’m thinking of it reverse from you. A top 160 site could have more concurrent users than a small regional ISP.


It might! I suspect my intuition fails at this scale. I have absolutely no data, so I'd definitely be interested in any that can be found.

In my defense though! 60Gbps is literally less than one billionth of the estimated internet bandwidth these days: https://en.m.wikipedia.org/wiki/Internet_traffic . I doubt there are even one million ISPs, so it still seems relatively "small" to me. It could definitely be bigger than [random company X] can handle, but it seems like it stands a good chance of being smaller than most in aggregate, so even if it's hard for X to support it's not particularly "interesting" in a state-of-the-art way.

And I can completely believe I'm wildly wrong in this. 999/1000+ low-technical-effort ISPs doesn't seem unlikely to me, so 60Gbps could very well be a major achievement as far as I know. I'd be surprised, but... I've been surprised enough that I'm no longer surprised at being surprised. Humans are weird at scale.


The Wikipedia entry you linked to estimates 2021 internet traffic at 276 EB/month. That’s an average 839,625 gigabits per second. So 60 Gbps is one 14,000th of that.

Of course, one figure was about capacity and the other about actual traffic, but still.


Ah, per month. Yep, I missed that, I thought it was per second.


I dunno, my home operates on 2Gbps of bandwidth. 250Gbps is half of our neighborhood (not sure what the shared uplink size is though).


It's not just 250Gbps, its 250 on top of normal usage. Most ISPs run a fairly efficient setup where they are only paying for a little bit over the average usage.


For comparison, the entire internet averages on the order of 375 Tbps*, so this attack is well under 0.1% of that. Residential internet might have a max rate of 1Gbps in the best conditions, but if you use even 1% of that consistently you'll get overage fees. Steady bandwidth numbers are typically much lower than max bandwidth numbers.

* Based on 122,000 PB/month in 2017 from https://en.wikipedia.org/wiki/Internet_traffic, if my calcs are right - see the rough conversion below.
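
A quick sanity check on that conversion, assuming a 30-day month:

    petabytes_per_month = 122_000          # Wikipedia's 2017 figure
    bits_per_month = petabytes_per_month * 1e15 * 8
    seconds_per_month = 30 * 24 * 3600
    avg_bps = bits_per_month / seconds_per_month
    print(f"{avg_bps / 1e12:.0f} Tbps")    # ~377 Tbps average
    print(f"{250e9 / avg_bps:.2%}")        # a 250 Gbps attack is ~0.07% of that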


I don’t know anything about this particular isp or attack, but if that 250 gbps is aimed at specific parts of their infrastructure that are not designed for this much load, it’s likely to saturate individual links, servers, and/or network gear.

Their announcement said “all of our services”, so this could be anything from their authoritative/recursive dns to monitoring or billing servers to specific routing nodes.

So even if they had nice fat 100G pipes coming in, and those pipes weren’t saturated, a ddos of this size with a mix of vectors can be debilitating on enough discrete pieces of infrastructure that the isp is effectively “down” for many end users.

(Again I am only speaking generally with guesses about what might be going on at this isp. I’m also not a network engineer but I do write code for network gear that does anti ddos stuff.)


I'm an Edpnet subscriber currently suffering the effects of the DDoS. It's annoying to say the least. Last year's DDoS attack was relatively simple, pointed at the DNS servers; simply using other DNS servers was good enough to get back online. Edpnet also joined NaWas [0] at that time, a non-profit for ISPs to be able to redirect all traffic through big pipes when needed to deal with large attacks. Because the current attacks are rapidly shifting targets, it's a game of cat and mouse to properly filter the DDoS.

In practice, this means that some sites such as Google and YouTube keep working, but other services might not be available. It is extremely annoying when all of a sudden AWS API calls time out, or a Teams or Slack call suddenly drops to very low bandwidth and then drops entirely. I've had to resort to my phone's hotspot multiple times in the last few days. Yes, I pay for an SLA, but what's the point? I've got priority in case of a cable break, and failover to a 4G connection, but that's no use if the upstream is congested.

The sad part is that the attack works because it is a small ISP, 45,000 customers. [1] That is the main reason I'm a customer: they offer good service for great prices. Kudos for not paying the ransom. If the attacks continue for much longer, I will probably switch to a bigger, more expensive, less customer-friendly ISP. I'm happy to support a local company instead of a big multinational corporation, but if my clients can't depend on me when working from home, I've got no choice but to pick the ISP with the bigger and more expensive pipes.

[0] https://www.nbip.nl/en/nawas/

[1] https://datanews.knack.be/ict/nieuws/edpnet-al-dagen-getroff...


If you give them kudos for not paying the ransom, it's also worth considering not switching away from the small ISP you chose because they were doing things right. Otherwise they might as well have not done the morally right thing and given y'all better service.

Though, of course, I do understand your conundrum. Perhaps there's some middle ground, where for anything that really needs the uptime you have a (mobile) fallback? Still annoying if there are outages on non-essential lines but perhaps better than forcing them into paying the criminals.


I agree, maybe my wording about the situation could have been better. I fully support them in their handling of the attacks. Paying the ransom would in no way guarantee the attacks would stop, now or in the future.

My phone's 4G hotspot works (via another provider), but it does have limits of course. I don't mind paying a bit for extra data, but there is a limit to the amount of extra time and effort I can spend on this problem. I can't keep apologising indefinitely to my clients about bad audio/video calls if none of my coworkers have the same problems with their ISP.

Sibling comment did have a good idea: to tunnel my traffic through an OVH instance. A great opportunity to try out Wireguard.


For now I'm just redirecting my traffic through a VPN that exits at OVH, and everything's fine although I'm now geolocated in France, so a few things are blocked or don't work as they should.

But the edpnet/OVH link is unaffected by the ddos, so that's a solution.


Is your phone 4G connection provided by the same ISP?


That's unlikely, edpnet doesn't really offer plans for high data usage.


> EDIT 16/09/2021 10:12*: During the night we had two more attacks. We are working with the authorities, who have confirmed they are looking into it and are doing everything in their power to find the responsible individuals. We were contacted by an individual who verified he was behind the attacks, asking for a ransom.

Roughly how many criminal groups are active in DDoS ransoms (as opposed to data crimes, like cryptolockers and exfiltration)? How common is this nowadays? Clearly it happens, but I've no idea the general scale of the problem in 2021.


> Roughly how many criminal groups are active in DDoS ransoms

These days there are groups providing ddos as a service. They control huge iot botnets and for a sum they’ll point whatever fraction you pay for to whatever target you want.


Put more concretely, how many can front 100+ Gbps? E.g. 2-3, or dozens, hundreds, any teenager, etc.


Depends what kind of attack they use. A few years back, reflection attacks were all the rage, and it was cheap as fuck. But I can't say how many Gbps they delivered.

They'd usually market this as "stresstest" service, and you could pay by Paypal etc. Curiously they were all behind a Cloudflare-anti-DDoS wall. I still wonder if Cloudflare knew/knows about them, probably good for their business.


I wonder if this is related to the attack on voip.ms (ongoing for multiple days)

https://twitter.com/voipms


Mods, can the URL please be corrected to https://issues.edpnet.be/?p=3507 ? The current link to https://issues.edpnet.be/ is likely to rot badly over the years.


How can I check my home network for signs I have any devices participating in a botnet?


Generally, large spikes in upload data usage are the best indicator. Many consumer-grade routers are starting to build in per-device bandwidth usage, which helps narrow down the culprit.
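
A quick sketch of how you might automate that, assuming your router can export per-device byte counters (the data shape here is invented):

    from collections import defaultdict
    from statistics import median

    def flag_upload_spikes(samples, factor=10):
        """samples: iterable of (device, day, upload_bytes), oldest first."""
        history = defaultdict(list)
        for device, _, upload_bytes in samples:
            history[device].append(upload_bytes)
        # flag devices whose latest upload dwarfs their own typical usage
        return [dev for dev, h in history.items()
                if len(h) > 3 and h[-1] > factor * median(h[:-1])]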


Invest in something like a cheap SonicWall. Without a bunch of licenses, it can still show you all the traffic information and act as a very powerful firewall with all types of blocking. You can search for "sonicwall soho".

If you don't want to invest, you can also use pfsense. You can use an old computer to turn it into a perfect firewall.


I've used both pfSense and OPNsense, and neither had good facilities to keep track of where your packets are going.


Then you might want to check sonicwall DPI.


Spamhaus bad ISPs. Give them low QoS until they fix their problem.

If all major exchanges and service providers agree to punish IP blocks, ISPs will have no choice but to better police their networks.


Why are the people doing those attacks doing them? Ransom? Or are they just doing that to prove and advertise/demo their capacity to harm?


someone here said the attacker asked for a ransom


What is the motivation?

Extortion? Retribution? Censorship? Marketing? QA?

Are they asking for money to stop?

Are they being paid by some actor who wishes to protest against / punish the company?

Are they demonstrating what they can do, in order to drum up business?

Are they testing what their system can accomplish, ironing out bugs, and learning how best to utilize it? Seeing what mitigations they face and how to overcome them?


The attackers are asking for ransom. So as usual, money.


Don't peering agreements involve paying for bandwidth? Isn't there a bigger cost for those flooding this one? Could they charge back and encourage peers to throttle? Can peers fix this with pricing model changes?


How often do we ever find the actual individual humans responsible for these bad actions?

I would imagine that if the individuals were identified, some harmed parties would spend money eliminating them to dissuade others from doing the same thing.


They are all too often in another country, so they can sometimes be identified but seldom held accountable.


I worked at an ISP AAA/RADIUS provider, and since we started offering a cloud product to our customers we have been DDoSed twice.

The source IPs were from a Turkish university; I wrote to them a few times but they seemed clueless. Cloudflare's protection suite was beyond our budget, so our engineers ended up moving the system between 5 different IP blocks, with the help of our customers.

We still don't know who's responsible, but we suspect it could be done by our competitor...

(PS - We're based in India)


Is there no way to just completely block every IP sending you so much traffic?

You might end up blocking a few million, but your network remains uncongested.


The only way to block this traffic effectively is by asking your upstream peers to blackhole it; dropping it with a firewall or other device is nearly impossible because of the high load.

Remote-triggered blackholing is done via BGP, which between peers only allows a /24 as the smallest announceable prefix.

Blocking every /24 from which an IP in a DDoS originates kills most, if not all, of your reachability.

Thanks for that explanation.

Wouldn’t some device still need to blackhole the stuff though?

I’d still argue that blackholing /24 blocks and retaining some reachability is preferable to losing all of it.

Now I’m kind of curious how fast you can make hardware drop stuff, but even inside my local PC I probably cannot reach 250Gbps of data flow (Hmm, 20 Gbps for DDR4, guess not).



I've tried getting into gnunet on and off for a couple years now (granted the last time was 2+ years now) and it was always inscrutable to me. Nowhere near as easy as Tor or even FreeNet (which is a dead Java app used only by the worst people as far as I could tell).


A 250 Gbps attack is bringing an ISP down? How is this possible in the age of 1 Gbps connections in every home?


It's like a large city in a way. Most of your roads are local, and so is most of your traffic. You try to distribute load and make trips as short as possible. For a city, that means grocery stores in neighborhoods.

For an ISP, they do this by peering locally with major providers. Google, Twitter, Facebook, Netflix, etc. all have very large networks and will openly connect at Meet Me points in open and private internet exchange sites as will CDN providers. https://en.wikipedia.org/wiki/Internet_exchange_point. Some providers such as Google and Netflix will even install local caching servers that you can reach from your network if you have enough traffic. As a small ISP, you try to take advantage of all of that.

Your option of last resort is the Tier 1 upstream provider. That's for anything you can't reach via local peering. This should be a smaller percentage of your traffic, and it costs more than peering arrangements. The nature of a DDoS is that it fills up that smallest and most expensive connection - the one that provides access to the rest of the world but doesn't carry the majority of your traffic.

Even as a hosting provider, it doesn't have to be overselling. If you have 250 1Gbps customers and 250 Gbps of bandwidth, you'll still be useless in the face of 250 Gbps of junk traffic. Whether any given packet should be dropped is something that has to be decided upstream of the saturated link.

EDPnet looks pretty well-connected too: https://bgp.he.net/AS9031#_ix


I know how these things work, and I'm sorry, but 250 Gbps in this day and age is just a joke, even when you are talking about bandwidth outside your network. There's more to this story or this ISP is incompetent or severely underfunded.


Using PeeringDB usually gives a better view into the IX connections and their bandwidth -> https://www.peeringdb.com/asn/9031


50 1Gbps connections usually peak at less than 50Gbps. Depending on your demographic and area, it can be a lot less.

250 Gbps will service much more than 250 1Gbps connections.

Also, not nearly every ISP offers 1 Gbps to all their customers. Some ISPs don't offer it at all.


There's also things like local Netflix/Google/etc caching boxes where you can serve downsteam traffic without touching your upstream as much.



They are mainly offering VDSL2 connections over phone lines. They also offer Fiber over a monopolist's (called Proximus) GPON infrastructure, but the rollout of that GPON network is still in early stages. However, that monopolist has pledged to accelerate that rollout in light of the increasing work-from-home statistics following the pandemic.

https://www.proximus.com/news/2020/20201208-proximus-brings-...


Just because everyone has 1Gbps doesn't mean that if everyone tried downloading something at once they'd all get it. The internet is funnelled through shared links, and the more people are on them, the more likely the bandwidth gets entirely used up. It's literally like the phone systems back in the day, where if everyone picked up the phone, not everyone would get a dial tone.


I am an edpnet customer in a rural area and my peak DL speed is 18 Mbps (long-distance VDSL) in an era of Gbps, but as a consolation I do not pay much. I could get 400 Mbps with another ISP but the price increase is significant. I enjoy good DL speeds at my workplace.


overselling


Unfortunately this works. It kills small ISPs on all fronts: customer perception, technical and financial.


I'm assuming that eBPF would be a potent tool for dealing with this, and it looks like CloudFlare agrees: https://blog.cloudflare.com/l4drop-xdp-ebpf-based-ddos-mitig...


Software solutions at the receiving end are meaningless for stateless DDoS. It doesn't matter how quickly you drop/filter packets if your physical connection cannot receive any more of them. You need to ensure the traffic never saturates your connection in the first place to avoid issues - this can only be done on the sending end or somewhere on the path to you.


Good to know


eBPF only matters if you have the bandwidth to receive the traffic in the first place.


Why not just block the offending IPs in iptables?

Edit: as a Swede I find it very funny that "customer" is "klant" in Dutch!


and no one accountable :-)


Yeah this is the part I don't get. Ok the source IP is spoofed, so you need to use another technique. The packets come in on some physical line, so you can ask the person on the other end which line they're getting it from and so forth. Relatively labor-intensive (I imagine a few hours' worth of time from each network hop), but you get there. Next, you find that it's coming from across some border, so you hand it off to the local authority.

As I understand it, this is where people drop the ball. The local authorities can't be arsed, so the criminals there just keep doing it. Why then not block that connection? If the provider in that country wants to be able to continue operating their services, they'll need to fix this problem, or customers around the world just won't be able to receive responses from their network.

Same with the DNS resolvers: if someone has an open DNS resolver that is constantly being abused, as all of them are, just cut them off right? You don't need that on your IP ranges. Or blackhole that source IP on the edge of your network if you see abuse coming from that resolver. It's not rocket science to figure out which part of DNS traffic is abuse if you see a constant volume of responses to queries that the client in your network never actually asked for.
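
A sketch of that last check (the tuple format is just for illustration): remember the DNS queries your clients actually sent, and count "responses" that match none of them.

    from collections import Counter

    def unsolicited_dns(outbound_queries, inbound_responses):
        """both: iterables of (client_ip, resolver_ip, txid) seen at the network edge."""
        asked = set(outbound_queries)
        unsolicited = Counter()
        for client, resolver, txid in inbound_responses:
            if (client, resolver, txid) not in asked:
                unsolicited[resolver] += 1
        # resolvers with a constant stream of unmatched responses are reflection sources
        return unsolicited.most_common()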

Neither of these things seem to happen. Why can you be an ass on the internet and never be held accountable?


Possible for DoS attacks. Practically impossible for DDoS attacks.


Huh? I wrote this:

> Same with the DNS resolvers: if someone has an open DNS resolver that is constantly being abused, as all of them are, just cut them off right? [etc.]

That's what I'm not understanding: why is it not being applied to easily shut down those reflective DDoS attacks?



