Sourcehut and Codeberg are both currently experiencing a DDoS attack (fosstodon.org)
233 points by smlavine | 141 comments



There have been a few of those for Sourcehut ... makes you wonder wth is going on ...

In one of the previous ones, they said there was high suspicion it related to 'hatred against a transgender developer in the company' or something like that. Like, who the hell targets a "company" because you don't like an employee who works there ...


Considering Drew’s history with being an unrepentant antagonist, I feel it’s far more likely that he’s the target rather than some random employee.


I wondered about Drew's "antagonism" last year when he claimed that Google was DDoS'ing Sourcehut, which is why sr.ht blacklisted the Go module mirror; see https://news.ycombinator.com/item?id=34310674 .

I concluded he was right after I read about ongoing problems with the GMP project, at https://gmplib.org/ : "The GMP servers has been under serious load due to a barrage of clone requests from Microsoft/Github. ... In effect, Microsoft's computer cloud performed a DDoS attack on the GMP servers. ... Our servers are fully available again, but that's the result of us adding all participating Microsoft network ranges to our firewall. We understand that we are far from the first project to take such measures against Github."

Also, you can't build projects like Sourcehut by being only an antagonist, and I can think of some pretty antagonistic CEOs of billion dollar companies.


Yeah, especially considering that it seems like his personal website is also being DDOS'd. (or maybe it just runs on the same infra)


Sourcehut has a static site hosting service, similar to GH pages. I would be surprised if his blog weren't hosted on it.


Pretty sure the blog existed long before SourceHut.


And the NS records for his blog are ns1.sr.ht. and ns2.sr.ht. so he probably moved his blog to SourceHut Pages since.


> Like, who the hell targets a "company" because you don't like an employee who works there ...

Too many. The internet hivemob is dangerous. Hell, here in Germany we have a pretty nasty cyberbullying case around former YouTuber "Drachenlord": his hater mob goes as far as terrorizing hotels where he's stayed (he lost his home as a consequence; it's utterly, utterly insane).


I have a paid account on Sourcehut and I like that it is independent. But...yeah there's a kind of "wife’s sister’s puppy fell into the strawberry patch" vibe to a lot of their official communications.


Can you explain what you mean by that?


I think what they are saying is that they're being too unreliable/unprofessional. Like "whoops, my grandpa spilled his coffee on the server so we're down again." This is not the vibe I'm getting from their messaging, and this is the first time in my 2ish years of paying that I've ever felt impacted.


How is it independent? It’s just another company.


I think of it as similar to independent movies or independent bookstores (back in the day). Yes, it’s still a company/organization, but one that isn’t controlled by one of the big studios.

So in this case, that really means that it’s not controlled directly (or indirectly) by a FAANG.


It isn't funded by another company (ehm, Mozilla), and it doesn't live off advertisements (almost everybody else). You pay for its services the old-fashioned way, with your money.


Calling Sourcehut a company is like calling a farmer's market stand a corporation.


It's still a company. Just because it's a company that behaves ethically doesn't make it less of a company. It's just a very odd one.


That's missing the point. The farmer at the farmers market also has a tax code and pays sales taxes. By your logic you'd call a farmer at a farmers market a corporation too, and you're making yourself look just as foolish now by insisting that a tiny, ethical, open-source-focused one-person company is a corporation.


I don't follow. A "corporation" is just an entity that's filed articles of incorporation with the appropriate government office. You can have a single person that owns a corporation or LLC. (Source: I own an LLC.)


At the end of the day, everything reduces to human beings interacting with each other.

A corporation is just one method by which people coordinate their activities, not some separate entity that has its own independent existence or intentions.

You can certainly draw distinctions between the consistency, reliability, and scalability of some people's way of organizing their activities vs. other people's, but that doesn't distinguish the underlying nature of what they are, just how good they are at organizing.


Looks like a duck, swims like a duck…


What's your point? It's mainly Drew working on a fully OSS project that he happens to host and accept money for doing.

Is your beef with people making money for their work? Or with megacorps? Because Sourcehut isn't one, by a long shot.


They have a history of doing things with negative financial consequences just to make a point on some sociopolitical issue. I can't remember a single "just another company" off the top of my head that conducts business in this way.


Bingo. While I might agree with some of their points, it seems a poor way to convey them.


Their status page was a lot more professional. But the third party hosting provider got DDOS'd as well.


bro, can you speak more clearly? especially this one -> "wife’s sister’s puppy fell into the strawberry patch".


That's the point; the parent is talking about complex and long-winded connections.


People get weirdly upset about that kind of stuff so it would not surprise me tbh. Too many bored people with too much time on their hands and deeply invested in some kind of “culture war”


> Like, who the hell targets a "company" because you don't like an employee who works there

I've seen plenty of witch hunts of that sort in tech over the past few years, including attacks on or boycotts of a company because of an individual (usually for having conservative views).

That said, I'm inclined to think this is a form of "pay us or we keep on ddos-ing".


Culture war stuff is a pretty common cause of low-to-mid-level technical griefing. Nobody's going to take down AWS over it but smaller websites and services? Absolutely.


Unfortunately, too many. If you turn on dead comments on HN, you'll find that sentiment pretty quickly on many posts from transgender people. And unfortunately some of those commenters are problematic enough that they organise their own social circles of hate.

We can't tell one way or another from outside in this case, but don't discount this as too unlikely.


> In one of the previous ones, they said there was high suspicion it related to 'hatred against a transgender developer in the company' or something like that.

It's a lot of resources to expend for a personal vendetta. Transgender people work in a lot of places; my next question would be "why this one?".

My guess would be that there is something that pays for this financially. It could be that it's a specific repository on Sourcehut that hosts a competitor, for example.

Of course it's all speculation.


IIRC $100 can buy you 24h of DDoS service. Some people have spent way more than that on a personal vendetta.


It’s not personal in the same way Nazis organizing hate against Jews is not personal.

Trans people have become the new chosen out-group of the culture war, and we've had to deal with this fascist ideology shit for a while now.

Here’s a great real would descent into the insanity of these people, fueled by the pandemic.

After reading it, there’s little doubt that this bullshit is infectious https://www.reddit.com/r/singapore/comments/p6ovxr/an_indept...


> It’s not personal in the same way Nazis organizing hate against Jews is not personal.

I don't think such a comparison is useful here. That said, there is a clear difference between group and individually focussed hate. For some reason Sourcehut is clearly being singled out.

> Trans people have become the new chosen out-group of the culture war, and we've had to deal with this fascist ideology shit for a while now.

I don't think it is a strictly fascist ideology at all. I see people on both the left and right with a range of opinions on this subject. You'll find the majority of people are resistant to any form of change, whether rightly or wrongly.

> After reading it, there’s little doubt that this bullshit is infectious https://www.reddit.com/r/singapore/comments/p6ovxr/an_indept...

It's difficult to know who is the insane one there, the person who is committed or the person who dives so deeply into another person's life.

I think it would generally be a mistake to assume that this attack is based on the gender of a single employee at Sourcehut. To me it would be just as likely to be something they host, or something they refuse to host.


>Like, who the hell targets a "company" because of an employee who works there you don't like

https://en.wikipedia.org/wiki/Donglegate


Gamers.


A ton of source code that is stored on Sourcehut and Codeberg is also likely replicated over dozens or hundreds of other machines.

I wonder if something like the BitTorrent protocol could be used to keep a repo accessible without directing all the traffic to one site. Take a magnet link to a particular repo and commit, receive the complete branch eventually. Now serve these commits to other peers. That would be rather harder to DDoS or censor.

(This, of course, ignores other aspects of forges: CI/CD, releases, packages, etc. Issues and even code reviews can be branches, too; such tools exist, and e.g. Fossil has them built in.)


You could also use the git protocol and a simple load-balanced pool of replicas that each pull from the main repo constantly. All push traffic would be routed to the central repo and all fetches would be routed to a random replica.

To do it in a decentralized fashion you could simply have a DNS pool with all of the known replicas and rely on round robin DNS (not great but better than a single point of failure)
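
A minimal sketch of that read/write split, with hypothetical hostnames (READ_POOL would have one A record per replica; PRIMARY is the single write target):

    import subprocess

    READ_POOL = "git-ro.example.org"  # hypothetical pool name, one A record per replica
    PRIMARY = "git.example.org"       # hypothetical primary, the only write target

    def clone(repo: str) -> None:
        # Round-robin DNS hands out the pool's A records in rotating order,
        # so successive clones land on different replicas.
        subprocess.run(["git", "clone", f"https://{READ_POOL}/{repo}"], check=True)

    def route_pushes(workdir: str, repo: str) -> None:
        # Fetches keep using the pool; pushes are pinned to the primary.
        subprocess.run(
            ["git", "-C", workdir, "remote", "set-url", "--push",
             "origin", f"https://{PRIMARY}/{repo}"],
            check=True,
        )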

Phabricator implements an interesting multi-master replication scheme with git that essentially keeps track of which node has the newest commit for a given HEAD, then edge nodes can automatically proxy incoming requests to the node with the latest change, or a random node if they are all in sync already.


I think a Nordic broadcaster used "cascaded git" as a CDN. They pushed the content to a central repo, and as a hook, it pushed to four other repos.

The repos had the same hook and it just fanned out, unattended. Pretty neat, if you ask me.


> To do it in a decentralized fashion you could simply have a DNS pool with all of the known replicas and rely on round robin DNS

Using the new HTTPS DNS record¹ you could simply list all the servers in the DNS.

1. RFC 9460; discussion here a few months ago: <https://news.ycombinator.com/item?id=38420555>
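
For illustration, a sketch of reading such a record with the dnspython library (the domain is a placeholder); each answer advertises a target server plus a priority:

    # Assumes dnspython >= 2.1 (pip install dnspython); type 65 is HTTPS/SVCB.
    import dns.resolver

    for rr in dns.resolver.resolve("example.com", "HTTPS"):
        print(rr.priority, rr.target)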


BitTorrent relies heavily on hashes, you can't really update a torrent in place. Distributing a static piece of content peer-to-peer is a (somewhat) solved problem by now. Achieving consensus on which hash best represents a given repo, what its latest version is and who can update it under what circumstances is much harder. It's relatively easy if you have a blockchain handy, but there's serious opposition to such solutions now.


Git commits also rely on hashes, so there isn't a huge mismatch.

The main issue is that each commit would become a separate torrent, so you'd need some abstraction on top to clone a whole repository (you'd need to download all the torrent-commits).

OTOH, when updating a repository you'd only pull new commits (just like other transports). I've toyed with the idea in the past, but it's a substantial amount of work.


Fetching the commits isn't the problem, knowing which commits to fetch is. You need some way to assign a stable identifier to a repository and to query for the current tip of the master branch from that identifier. This can be done naively by using public-key cryptography and DHT nodes storing signed {timestamp, commit_hash} messages, but this is wildly insecure against key compromise, which (as Bitcoin and others have shown) is pretty much inevitable. You'd probably need m-of-n signatures and some way to do key rotation, and that's not easy to do if you can't just run arbitrary programs, like you can on the EVM for example.
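
A Python sketch of that naive single-key scheme, using the 'cryptography' package; key rotation and the m-of-n signing you'd really need are deliberately omitted:

    # The repo is identified by an Ed25519 public key; the owner publishes
    # signed {timestamp, commit_hash} records that DHT nodes could store.
    import json
    import time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    owner = Ed25519PrivateKey.generate()
    repo_id = owner.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex()

    def announce(commit_hash: str) -> dict:
        # Signed claim: "as of now, this commit is the tip of master".
        record = {"timestamp": int(time.time()), "commit_hash": commit_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        return {"repo": repo_id, "record": record, "sig": owner.sign(payload).hex()}

    def verify(ann: dict) -> bool:
        # Anyone holding the repo ID (the public key) can check a claim.
        pub = Ed25519PublicKey.from_public_bytes(bytes.fromhex(ann["repo"]))
        payload = json.dumps(ann["record"], sort_keys=True).encode()
        try:
            pub.verify(bytes.fromhex(ann["sig"]), payload)
            return True
        except InvalidSignature:
            return False

    assert verify(announce("0123abcd" * 5))  # 40-hex-char commit hash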


Also, I'm guessing by the time you go down that road, you wind up re-implementing Radicle.


I don't think you need radicle TBH, there's nothing in this use case that requires a custom blockchain or even a token (if you're willing to accept UUIDs instead of human-friendly repo names). A simple smart contract running on EVM / Solidity with no financial incentives (except the gas fees of the underlying chain of course) would do nicely. That means you get nothing out of it as a developer, though, and there are very few people willing to do crypto stuff and not interested in $$$.

You still have the problem of incentivizing hackers. In the Github world, hacking the author of a major project is bad but not disastrous, somebody is probably going to notice, ping the Github security team, and the changes will get reversed in a matter of hours. Hacking a major project on radicle or similar means that the project is now out of its author's hands, potentially forever, and there's nothing you can do, short of informing every single consumer and asking them to change the project ID they depend on. You can give a trusted team a way to override ownership in such cases, but then these people are also subject to being hacked or compelled by a court. Requiring multiple signatures and best practices around hardware wallets is probably the best you can do realistically, and that dramatically increases complexity and expense on the maintainers' side compared to traditional Git workflows.


Last time I'd looked, Radicle had done a better job of decoupling the Git stuff from the token stuff. But yes, adoption is always the issue with these sorts of things, unfortunately.


It smells crypto-ey but there's Radicle (https://app.radicle.xyz/)


Only losing access to https://todo.sr.ht made me fully realize that I don't have any copy of it at all. https://github.com/MichaelMure/git-bug suddenly looks much more interesting.




The problem I see with the torrent approach being that you need to reach a consensus eventually, right? If I push a commit on the main branch, share that state, and you do the same with a different commit, then we have a conflict that we need to solve...


There would have to be a canonical source of truth (with private key). Then one could be reasonably sure an update is blessed by the owner. Makes me think of nostr + git + torrent/ipfs.


Oh right. Not everyone modifies the torrent, then, just the maintainer. So as a user I get the magnet link, download (clone) the repo over torrent, then I can send a patch to the maintainer over email, discuss it there (maybe on a mailing list), and when the maintainer merges it, they can update the torrent and propagate the new state of the history.

Is that about right? It actually sounds cool :-). I guess the next issue I see is that torrents are not mutable (I think there are some specs maybe, but it's not common/implemented? Not sure).


iirc keybase (pre-acquisition) had this


Genuinely interested: who the hell pays to DDoS services like Sourcehut and Codeberg? :/


Gitea! /s

In all seriousness though, the reason why services end up under DDoS attacks are wide and sometimes complex.

There is a likelihood that someone is simply upset at these services, or that there was a coordinated effort against source control services more generally, but it's only succeeding against these two (that we're aware of) and not, say, GitLab / GitHub.

Another example: they may be targeted simply due to traffic and volume, and this is an attempt to "dry run" DDoS techniques that may be used against bigger sites down the road too.

There's a lot of room for why these things happen. I've learned that motivations and execution vary widely.


Another reason is hosted content that someone really wants gone from the internet.

Of course the efficacy of attacking a service with lots of content is questionable, but the plan is to put on so much pressure that the service removes the content.


I know of some projects that were on Github and received cease and desist letters. So they moved their project to one of these other repos. It is entirely possible but of course only speculation.


The Venn diagram of “entities that pay lawyers to send cease and desist letters” and “entities that launch DDoS attacks” is pretty non-overlapping.


You would think so, but then I just read about eBay executives sending cockroaches & spiders, funeral/grieving-related items and a bloody pig mask to a couple that wrote some not-so-positive reviews about that company…

https://www.theguardian.com/technology/2024/jan/11/ebay-fine...


You should be using a phase diagram rather than a Venn diagram here. It depends on the persistence or tenacity of the adversary. Legal and administrative measures are the first phase. If those fail to deter the adversary, non-traditional and perhaps extrajudicial measures can be employed.


Github has already been DDoSed by state actors not willing to see something being distributed, such as anti-censorship tools: https://en.wikipedia.org/wiki/Censorship_of_GitHub#DDoS_atta...

Same state actors might do the same here


HN's provider was DDoSed yesterday [0], I wonder if the popular website being targeted was HN itself. Either way, I do wonder if there is any relation.

[0]: https://news.ycombinator.com/item?id=38939559


Maybe someone who doesn't like Sourcehut or Drew for some reason.


But why also Codeberg?


status.sr.ht (sourcehut's status page) is hosted on codeberg.


More likely Drew. Lol


The Christian calculators are finally biting back.


This is some reference I don't get.


Drew is an antagonistic atheist, which is part of a good handful of reasons he was banned from Omnimaga, a calculator forum.


It would be epic if the DDoS was from a swarm of KnightOS infected calculators. Even more if they were able to do it because Drew made a workable network stack. lol


Subversion holdouts


That is just what RCS-ians want you to think maaaaaan!

tightens tinfoil hat


sourcehut supports svn repositories too doesn't it?


No. Mercurial.


Probably someone seeking a ransom.


Both sites run on donations. That would be trying to squeeze water from a stone.


I think it's true that they are not rich, but I pay Sourcehut for a commercial service, not charity.


What if some OSS hosted on one of these sites was compromised, and now that they are offline it will be harder to discover. Perhaps builds or binaries have already been distributed.


Like the infamous Linux kernel backdoor attempt. If BitKeeper had been DDoS'd, it would have been harder to spot the difference between the real tree and the version with the backdoor that "appeared" in CVS.


Cloudflare :^)


You joke, but in the DDoS mitigation service industry it’s not unheard of as a high pressure sales tactic. Packeteering, as a friend called it.


I do wish folks would implement PoW ratelimiting. You can host the req'd JS on a CDN.

I implemented this as a .NET lib (https://bvulpes.net/tarpit-a-proof-of-work-http-ratelimiting...) for obscure reasons, but an NGINX integration would really be ideal.
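
For reference, hashcash-style PoW rate limiting fits in a few lines. This is a minimal Python sketch (not the linked .NET lib): the server issues a random challenge and only accepts the request once the client finds a nonce whose hash has enough leading zero bits.

    import hashlib
    import os

    def leading_zero_bits(digest: bytes) -> int:
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                bits += 8 - byte.bit_length()
                break
        return bits

    def solve(challenge: bytes, bits: int) -> int:
        # Client side: brute-force a nonce; expected cost ~2**bits hashes.
        nonce = 0
        while True:
            h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(h) >= bits:
                return nonce
            nonce += 1

    def verify(challenge: bytes, nonce: int, bits: int) -> bool:
        # Server side: one hash to check, regardless of difficulty.
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return leading_zero_bits(h) >= bits

    challenge = os.urandom(16)    # issued per client/session by the server
    nonce = solve(challenge, 20)  # ~2^20 hashes on average; raise under attack
    assert verify(challenge, nonce, 20)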


According to https://social.anoxinon.de/@Codeberg/111739257904872454 their network link is saturated with incoming data. No amount of rate-limiting, captcha, or PoW can address that.


Considering how much Drew DeVault hates cryptocurrency and sites that require JavaScript, I don’t think he’s going to like this idea.


PoW rate limiting has been around for way longer than mainstream cryptocurrencies:

https://en.wikipedia.org/wiki/Hashcash

Uh. Ignore the "Cash" in the name. That's mostly a coincidence.


Or if you need another antique example: OpenTNL, the network library used in Tribes 2 (released in 2001), had "client puzzles" (aka PoW): https://github.com/kocubinski/opentnl/blob/master/tnl/client...


The problem of energy waste applies to proof-of-work in general. It could be argued that mitigating DDoS attacks might make it a net negative, but someone would have to do the calculations.


Eh. Nah, probably not. Proof-of-work energy waste with cryptocurrencies is a problem specifically because of its scale, and the relative worthlessness of the activity.

Human basal metabolic rate is around 100W. Actual energy use per capita averages up to 10,000W, depending on how wealthy and boreal you are. If you want to count "energy waste", a successful DDOS costs each blocked user multiple hours, at multiple kilowatts.

It's no more a waste of energy than using advanced video encodings, or running `bcrypt` on passwords. The energy cost is comparatively minuscule, and it solves or mitigates other problems.


It could be a FOSS portable binary that runs outside of a web browser.


You could still do mostly non-harmful no-JS PoW captchas by making some sort of FOSS external generator executable. Of course, this would be annoying to an extent, requiring multi-tasking from the user every time, or custom keybinds, etc.

You could make it less annoying by borrowing from TeamSpeak and/or hCaptcha. On TS3, as a server admin, you could set a minimum [PoW-style] security-level to prevent spam/flooding. If the server had a higher than default number, clients were prompted to improve their security-level or disconnect. And alternatively, as a client, you could pre-emptively generate a higher/highest security-level (up to "30"?) overnight or whenever you had spare cycles.

You could borrow that concept to allow non-JS clients to generate extra-strong tokens that let them bypass the system entirely. Or combine that concept with what hCaptcha does for their disabled users -- once you solve the special disability captcha [or in this case, generate a stronger token], rather than inflict extra pain on the users, you get granted ~20 tokens or so to allow you to avoid the normal captcha system for a decent period of time.

If you want to go the extra mile and account for no-cookie use in addition to no-JS, allow it to be sent in the auth header or something.

Anyway, I don't know how many things I've seen use the concept, but IMO, it's under-utilized.

As a kid, I was super into reverse engineering RuneScape and writing our own MMORPG servers based on the protocol. Most of us had no idea what we were doing or what the games industry was doing, and we were all mostly kids, so we all had to be our own developers, db admins, sys admins, etc, and DDoS mitigation was effectively non-existent until OVH democratized it (BlackLotus would charge like $550 a month for a mediocre dedicated server, Staminus was similar, AwkNet was cheaper but only did TCP so TeamSpeak etc was still vulnerable, etc - and god knows how much Akamai and similar would've wanted).

One of the billion issues people ran into often was just login flooding; if it was a new username, that'd mean forcing the server to go through the new account and serialization process, and if it were an existing account, it A. opened up bruteforcing but B. forced the server to, at the very least, do password hashing, possibly database queries, or commonly reading a character file from disk, etc.

Rate-limiting by IP isn't super effective, because proxies. Rate-limiting / straight-up banning certain ASNs worked better to deal with VPNs, but you can't exactly grind all of Comcast or Verizon to a halt without disrupting users. And rate-limiting the particular account sucks, because you run into situations where individual players get targeted and effectively locked out of their account because someone is griefing them.

Far more effective than anything else was the idea of just generating some sort of PoW token on login that took a bit of time for the client to generate but was quick for the server to verify. Significantly better was the idea of being able to scale the difficulty globally, per IP/block, and importantly per account -- nearly no difficulty if no flooding is occurring, ~1-3 second logins if a massive flood is ongoing, or a per-account ~1-10 second difficulty if someone is trying to bruteforce a particular account. Even if the latter is mildly annoying, it's better than being totally blocked from playing.
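
A sketch of that scaling idea (all thresholds invented): difficulty stays near zero normally and ramps up per source and per account under load.

    def login_difficulty(global_rps: float, ip_failures: int, acct_failures: int) -> int:
        # Difficulty is in leading-zero bits of a hashcash-style puzzle.
        bits = 8                       # baseline: effectively free
        if global_rps > 1000:          # site-wide login flood in progress
            bits += 8                  # roughly 1-3 s of client work
        bits += min(ip_failures, 6)    # suspicious source address
        bits += min(acct_failures, 6)  # someone hammering this one account
        return bits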

RuneScape itself implemented this a couple of years ago too, I believe. I haven't seen them actually do much with the difficulty factor yet but even their super naive simple version seems to have helped a ton.

[of course this doesn't fix the issue of giant IoT botnets and stuff, but most people didn't (don't?) actually bother -- it was almost always DDoS amplification for volume and one or two servers with proxies doing the layer-7 floods]


PoW rate limiting is such a godsend. I know a few websites that receive regular DDoS attacks, and their uptime increased dramatically as a result. Large-scale DDoS is still unfortunately an issue, but that happens once or twice a year and can only last for so long.


Isn't it a bit more complicated than that if you, like Sourcehut, also host Git repos over SSH?


Outage statement from Drew: https://outage.sr.ht/


Wait. I thought the problem of being DDoS-ed was widely solved, i.e., just pick someone (Cloudflare, Akamai, Fastly, etc.) to be the "proxy" of your site, and then let them defend against the attack for you. No?


> just pick someone else ... to be the "proxy" of your site,

If you have deep enough pockets, sure, just pay anyone ASAP. But as far as I can tell, Codeberg is a non-profit entirely dependent on donations and, according to Drew DeVault, paying Cloudflare to protect SourceHut is prohibitively expensive even at his scale, so I doubt Codeberg is in any better position to finance this.

See https://news.ycombinator.com/item?id=38960189


They aren't. Even though many people use Codeberg, only a few become members. This is a problem. Join + Donate pages below, via archive.org. According to their November newsletter:

> More than 100.000 projects as of late on Codeberg. Codeberg e.V. has 401 members in total, these are 281 members with active voting rights, 113 supporting members and 7 honorary members.

[0] Joining: https://web.archive.org/web/20240106140505/https://join.code...

[1] Donating: https://web.archive.org/web/20240106140512/https://docs.code...


The fact that they only take SEPA for payments is a big part of the problem. Screams "if you aren't European we don't want you here".


That's what credit-card-required donations feel like to me. I didn't need a credit card except for giving Americans money; then I applied for one and it was denied, seemingly based on not having a credit score (not good, not bad, just none: privacy is suspicious!).

Afaik you can also do wire transfers in other countries but there are some fees involved, so isn't IBAN available to anyone with any bank account if they're okay seeing fees which the payment platform would otherwise hide?


No, SEPA/IBAN is Europe-only:

https://en.wikipedia.org/wiki/International_Bank_Account_Num...

https://en.wikipedia.org/wiki/Single_Euro_Payments_Area

Only European bank accounts can do SEPA transfers.



Notably, the US is missing though.


Classic USA move


What's your point?

Europe is many nations, not one.


I think OP's point was that SEPA and IBAN are completely different things.

SEPA is the Eurozone area that treats all transfers as domestic - usually free or near-free, and fast (max 1 day normally, max 10 seconds for instant transfer).

IBAN is just a numbering scheme to uniquely identify bank accounts worldwide. It has nothing to do with transfers. The closest thing would be SWIFT, though that's just an information protocol and says nothing about fees or times.


For example Emirates and Egypt are not in Europe.


SEPA is EU only.

IBAN is also used by some countries outside of EU. It's basically just a way to assign an alphanumeric code to an account so that it also includes a country code, bank code, and checksum.
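
The checksum part is a simple mod-97 check (ISO 13616); a Python sketch:

    def iban_is_valid(iban: str) -> bool:
        s = iban.replace(" ", "").upper()
        # Move the first four characters (country code + check digits) to the
        # end, map letters A-Z to 10-35, and test the result modulo 97.
        rearranged = s[4:] + s[:4]
        digits = "".join(str(int(ch, 36)) for ch in rearranged)
        return int(digits) % 97 == 1

    print(iban_is_valid("GB82 WEST 1234 5698 7654 32"))  # the ISO example IBAN -> True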


How do bank transfers work outside of the IBAN area then? I seem to remember that, even before IBAN, you'd put in the bank identifier (BIC?) and account number and the bank would figure it out from there. (I've heard stuff about bank "routing" but assume that's a thing the bank does for you.)

Also, when getting money from Japan (mtgox), I had to put in the bank's branch address and some funny details one doesn't usually need, but it was possible to receive it on my IBAN account.


Looking at the XML schema of pacs.008.001.07 that I found here: https://github.com/yudhik/example-iso-20022/blob/master/src/...

.

According to SWIFT, the account is defined by a combination of bank identifier and account identifier. That is, account identifiers are supposed to be unique only within the bank. (Not sure about this part.)

There are many ways to identify an account, IBAN is one of them. Other methods require you to specify a "schema" (one of predefined codes 1-4 characters long), an "issuer" (string up to 35 chars), and "id" (string up to 34 chars).

I suppose banks know the values that make sense for their customers, and otherwise just forward the information to other banks.

IBAN is an alphanumeric string, where the first two characters are country code, and the meaning of the rest is country-specific. BIC is encoded in IBAN, so the information provided is redundant (in the payment you specify both the bank and the account, but the account also specifies the bank).

.

SEPA is a subset of SWIFT, with extra rules imposed. The currency is EUR, and the only allowed account identifier is IBAN.

That means, SEPA requires IBAN, but you can also use IBAN outside of SEPA as one of many possible ways of identifying the account. For example, if I pay from EU to USA, I am using my IBAN account in a non-SEPA payment.


They do take donations through Liberapay, where you can use credit cards and PayPal. As for membership, they are a German e.V. (Registered club/society? Not sure how this translates in terms of both culture and language.) According to a short internet search, you _can_ collect membership dues via other means as an e.V., but SEPA certainly is the standard. Since you need a bank account for an e.V. anyways and every alternative to simple wire transfers would involve fees and just the hassle of setting them up, integrating them into your existing bookkeeping and maintaining all that, I can understand why they decided not to do that (yet) or maybe didn't even think about it.

(I can certainly see where you're coming from, I just wanted to provide some additional insight.)


I don't think so. It looks like a regular bank payment, complete with an IBAN and everything. So you should be able to send from any bank (as long as you're not under US or EU sanctions).

They are just using sloppy language. SEPA is just a payment system for cheaper and faster payments. It still uses regular bank accounts to send from and to; it can be regarded as backwards compatible with normal bank transfers.


Sure, but the problem with that solution is that it de facto centralises the Internet as everything then has to pass through a handful of large corporations. I do not claim to have a better solution though, but would be happy to hear other ways forward as DDoS appears to be a threat that is here to stay.


As was mentioned and shown off earlier in the thread, I feel more websites would benefit from proof-of-work rate limiting. It doesn't solve the problem entirely, but I've heard it can handle around tens of thousands of connections.


This isn't solved. This is "we've got workarounds provided by people who have absolutely no incentive to solve the problem they protect you from for $$$". Solving this issue would be having actually working abuse contacts that can boot the customer off their network long-term. That's extremely unlikely to happen.


Also, the incentives set by the tier ones are completely wrong for the DDoS problem to be solved.

Instead of finding ways to isolate and blackhole the offending networks - granted, refining this data into small enough ranges would be quite difficult - they punish the targets, who are already suffering to begin with. This is mobster mentality.

You can bet that if random ISPs' ranges got blackholed when they had misbehaving clients, they would instantly come up with ways to detect and mitigate excessive traffic, net neutrality be damned.


The problem for ISPs is that they can't really tell who's misbehaving. Does the user make lots of requests to a host? Maybe they're part of a DDoS. Maybe they're uploading a file in many fragments. Maybe they have automation set up for that endpoint and the service being down causes repeated requests. Maybe ... If the ISP listens to every abuse complaint, we could easily take down most residential ranges by claiming they're part of a DDoS on their local Netflix PoP IP.


The ISPs can tell that someone is misbehaving when their upstream informs them their IPs are taking part in a DDoS.

This information can trickle down from the tier one that presently just blackholes the victim.

Nobody says that infrastructure that was designed in the late eighties can't be improved upon with additional functionality, namely passing top-down information about misbehaving network ranges and/or IPs.

As an ISP you already see devices on your network sending traffic in unusual patterns, and when you receive the warning that they are part of a DDoS, you shut them down. If not, your whole range goes down, etc.

Let's add a fourth to the three-napkin protocol.


Given that right now we can't even verify BGP announcements which are a more critical target, I don't think we can tackle blocking. We'd need a realtime notification system which is not spoofable and has the ability to almost instantly boot anyone off the internet... and it needs to be reachable by nation-controlled ISPs. That's really a massive problem to solve and ensure it's not used for any other purpose. We haven't even established the trust/signing at that level yet. (see the nation-controlled CAs getting booted for spoofing domains)


All good points, but it doesn't sound impossible. :)


kind of goes against the principles for existing.


tinfoil hats on.

mystery solved: in the coming days github will announce obligatory 2fa linked to a phone number.

some Microsoft exec is betting his job that people will not just abandon github if they turn the heat on too hard on the slow boiled frog that is open source still hosted there.

to offset that, the theoretical exec who fears that open source projects will jump ship spent the equivalent of one expensive dinner on ddos-for-hire to take the top competitors offline.

this is not some coordinated evil plan at Microsoft, just something that someone with even as little as M3@microsoft money and very little in cryptocoins can easily do in under an hour.




> "we called [Cloudflare] and they quoted a very big number" [0]

This is what I've never understood about Cloudflare.

You're supposed to be able to purchase a business account for $200/mo [1]

But it seems like at some point, Cloudflare says you need to upgrade to a custom pricing plan.

What triggers Cloudflare to state you can't use the $200 Business Plan account?

[0] https://fosstodon.org/@arch@floofy.tech/111739294821803544

[1] https://www.cloudflare.com/plans/


At a previous company we ended up choosing against Cloudflare because their sales team were so terrible.

> It's $200/mo!

>> Ok, but the other quotes we're receiving for $requirements are 10x that, are you sure?

> Yep, sure, it's $200/mo* (until it isn't)

>> Right, so we might pay more than that, what would the pricing be for $requirements?

> $200/mo*

>> Ok, cool, just to confirm, if we put $requirements of traffic through this service, we will only be charged $200/mo?

> Wellllllll, that might work out to more like $4000/mo.

Compared to all the competitors who would happily take our $requirements, stick them in some internal calculator, and give us a real figure.


Our experience:

1) Sales didn't know WTF they were doing. Incorrect info we had to waste time sorting out, confused when we asked them about a use case for which they had an entire-ass prominent landing page with a top level link on the home page.

2) They gave no fucks about our mostly-CDN needs. All their enterprise plans were oriented around selling us a bucket of other crap that we didn't need. We weren't able to get them under $4kish no matter what, and even at $5k there wasn't enough pre-allocated transfer to keep us from hitting metered billing. They were not interested in talking about adjusting the price down by removing all that stuff, or almost any other customization. We just needed to sling bits, and a few features directly related to that. The result was that they were not at all price competitive with our other options.

3) They were the least-hungry sales folks I've ever interacted with. It was a weird experience. They didn't act like we were annoying them or anything, they just acted like they were totally unmotivated to make a sale. Every question, no matter how simple, seemed to give them pause, and set them to musing as if they'd never even considered it. Never seen anything like it. Maybe mid-five-figures a year is too small-fish for them and we were talking to the not-even-actually-sales-folks screeners who keep the actual salespeople from wasting their time?

My guess (based solely on that experience) was that their sales organization had crazy turnover (even worse than a typical sales org), awful training/mentoring, and possibly not-great hiring. But IDK.

We did end up using the $200 "self-serve" plan because it's 1) the only self-serve plan with any kind of SLA at all, and 2) actually a pretty fuckin' good deal for the things it provides. They weren't even in our top-3 for our heavier CDN needs, though.


> What triggers Cloudflare to state you can't use the $200 Business Plan account?

iirc one reason is needing proxying of non-HTTP protocols.


ahem if I were into tinfoil theories, I'd wonder if Cloudflare knew more about the attack =P

Oh? You want that attack to stop? It's going to be a big number.


HN just yesterday had a DDOS, and switched over to using Cloudflare.

https://news.ycombinator.com/item?id=38939559


it would be a shame if we had to keep letting all that traffic through...

...we'd have to charge you for causing too much traffic to your server


If you need (or more likely, think you need) custom stuff that doesn't fit into the one-size-fits-all tiers Cloudflare has available on their signup page.

Like... if you can just go there and self-signup and move your DNS, etc... then you're going to be just fine in their standard plans.

There's a lot of big-orgs that think they need special stuff when they in fact do not.

> "we called [Cloudflare] and they quoted a very big number"

Why did they think they needed to even call Cloudflare or speak with a sales person at all? That was the first mis-step... their Business Plan ($200 monthly) says "unmetered DDoS Protection"...


git over ssh or an email interface are both core sourcehut features that seem like they wouldn't fit into cloudflare's standard https only model


That's a fair point.

I think specifically for Sourcehut, to survive the DDoS they likely don't need 100% feature parity. Right now you can't even load their webpage - the service is effectively entirely down and users are left guessing what's going on. Getting web services back online would be significant.

The point for them would be to get breathing room to figure out a better long term plan. When the immediacy is gone, they'll be in a much better negotiating position with Cloudflare (or alternatives) for non-standard features.


Expanding on GP a bit, ssh is not just a core feature for SourceHut; it's the _only_ allowed method of write access to a repository[0].

sr.ht does not support pushing to git repositories over HTTPS with a username+password — SSH keys are mandatory.

HTTP access is restricted to read-only operations (clone, pull, etc).

[0] https://web.archive.org/web/20231225152845/https://man.sr.ht...


That was understood. The point I was attempting to make was that a partial outage is better than a complete outage. Restoring web services would allow them to better communicate to their customers what was happening... instead of everything just vanishing from the internet.

It appears they restored HTTP services now, so perhaps they ended up taking that route. Their homepage lists details of the ongoing attack/outage and their current efforts. This is what should have been done immediately, I'm afraid to say.

Once the attack passes, they will then be in a better position to consider their options and negotiate with providers for long term solutions.


Cloudflare Spectrum (https://www.cloudflare.com/application-services/products/clo... - available pay-as-you-go by the GB for SSH and Minecraft) could handle git over SSH without an enterprise account. Email would probably require either Email Workers or an enterprise Spectrum plan though.


> "we called [Cloudflare] and they quoted a very big number" [0]

To be honest such approaches are not ideal anyways. It would most likely require a GRE tunnel with the DDoS washing service, increasing latency amongst other potential issues.

It's probably way easier (and possibly cheaper) to buy DDoS protection from your ISP/transit provider, assuming they're not so small that they don't have it themselves.


Simple solution, Rent a botnet and DDOS the DDOS. /s


[flagged]


When has he cheered the DDoSing of websites he doesn't like?

Last year he pointed out how Google was effectively DDoSing Sourcehut[1], so it's not like this is something completely new.

[1] https://news.ycombinator.com/item?id=34310674 .

Also, what example of "disconnecting of websites" were you thinking of? I mean, I can think of entirely reasonable situations: if an employee created a website on a corporate subdomain solely to harass others, then asking the employer to disconnect that site is fine, right?


It’s interesting to me that both threads on HN discussing this DDoS incident have new accounts claiming that Drew supports DDoS, without sources:

https://news.ycombinator.com/item?id=38966629


And they are both new accounts.


some internet loser probably



