Hacker News
Let's Encrypt's performance is currently degraded due to a DDoS attack (status.io)
268 points by sorcix on March 7, 2021 | 115 comments



We used to run a game server for a small community of around 400-500 people, and DDoS attacks were something we had to face almost every week. Whenever someone got upset with the admin team, the go-to solution was to DDoS. You got scammed by another player? DDoS. Got banned for saying racist things in-game? DDoS. You figured out a new way to cheat in the game and the admins fixed it? DDoS.

We were kids back then, and those attacking us were kids too, with just a $5-10 budget. Yes, they were relatively small attacks (ranging from 10-60 Gbps) compared to the Tbps attacks that hit some companies, but good god it was so annoying that all it took was $5 from some idiot to take down your server.

We moved to GCP and got null routed (or had the network bandwidth to the node under attack reduced) every time there was an attack. We bought Azure's $3,000-a-month anti-DDoS protection, which was worthless for a TCP/UDP service. We tried a network load balancer in the cloud that auto-scaled, but some players still got affected when an attack came in.

Finally we moved over to OVH, placed a few really powerful servers in front of the game server, and applied some ipfilter rules to reduce common attacks. That ended up being the cheapest of all the options. When you have a very small community, it's not like you have the biggest budget to work with. But it was really fun and taught all of us a lot. Looking back it's kinda sad we had to end things, but it was a lot of fun.
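For context, "some ipfilter rules" in setups like this usually means rate-limiting and dropping junk traffic before it reaches the game process. A minimal sketch with Linux iptables (the port, thresholds, and reflected-source list are hypothetical, not what OVH or this server actually used):

```shell
# Assumes the game server listens on UDP 27015 (hypothetical port).

# Drop packets that conntrack considers part of no valid flow
iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

# Per-source rate limit on the game port: allow bursts, then drop
iptables -A INPUT -p udp --dport 27015 \
  -m hashlimit --hashlimit-name game --hashlimit-mode srcip \
  --hashlimit-above 200/second --hashlimit-burst 400 -j DROP

# Drop traffic sourced from common amplification ports (NTP, DNS, SSDP)
iptables -A INPUT -p udp -m multiport --sports 123,53,1900 -j DROP
```

Rules like these cut the cheap reflection and flood attacks, but as other comments note, they do nothing once the raw packet volume exceeds your pipe.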

DDoS attacks are one of those things that really make me worried about the future of the internet. The only way to win is to throw money at the problem and cross your fingers that the attacker runs out of resources before you do.

Companies like Cloudflare definitely do an incredibly good job of stopping some insanely big attacks when it comes to HTTP/HTTPS (I recently saw they now support UDP- and TCP-based services too, though I've never tried it).

But one thing that's weird is having to rely on some 3rd-party company. Yes, Cloudflare has so far been a company I can trust, but I once loved and trusted a company that said "Don't be evil".

If you are a developer for some IoT device manufacturer, please do your best to make sure someone won't turn your light bulb into part of a botnet. When you guys fuck up, the rest of us have to suffer.


The significant thing about "script kiddie DDoS" level attacks is that they significantly raise the effort and expense for the smallest projects. This is exactly where the most important innovations happen:

http://www.paulgraham.com/marginal.html

> Finally we moved over to OVH and placed a few really powerful servers in-front of the game server and applied some ipfilter rules to reduce common attacks. That ended up being the cheapest option out of all the options

The cheaper attacks seem to be at a level where machine learning could counter them. Raising the bar for inexpensive attacks would be a huge boon to the internet and human progress. It wouldn't be that expensive to fund, either.

> We used to run a game server for a small community of around 400-500 people and DDos attacks were something we had to face almost every week, whenever someone got upset with the admin team, the go to solution was to DDos, you get scammed by another player? DDos. Got banned for saying racist things ingame? DDos. You figured out a new way to cheat in game and the admins fixed it? DDos.

I wonder if this sort of thing could be honeypotted? Give perpetrators a way to figure out and target a fake "edge server" for a particular user (which only affects, say, 5% of your user base). That "edge server" is actually a honeypot that gathers data on the attack and correlates it with support emails to the admin team, or flame wars on the game's forums.

This is the kind of suckage that holds back the entire network, but which can ultimately be defeated:

http://www.paulgraham.com/spam.html


"Learning" has nothing to do with any of this. Deciding which packets are part of the attack is not hard at all.

What's hard is paying for 100s of gigabits of bandwidth, 24x7, so the incoming packet flood doesn't crowd out the good traffic before it gets to your filtering box.

Basically the only solution there is centralization. Cloudflare can afford to buy 1000s of times more bandwidth than any one of its customers needs, because it has (much more than) 1000s of customers.


As far as auto-learning to counter such things, https://linuxsecurity.com/features/features/introducing-crow... did show up recently: an attempt at a crowd-data-enhanced next-gen-fail2ban-alike. (Not an endorsement, never tried it.)

I don't think it uses any of the techniques currently considered central to machine learning, but if it works well / catches on to start with then it could be a good place to see how useful those would be.


I don't see how that project helps solve the underlying problem: denial of service.

If the idea takes off, then instead of spamming packets directly at their targets, kiddies will switch to feeding cloud-fail2ban their targets' IP addresses.

And there will be paid services to do this for you.

Same effect.


> if the idea kicks off, instead of spamming packets directly at their targets: kiddies will switch to feeding cloud-fail2ban with their target's IP addresses

As far back as the 2000s, kids knew to keep their IP addresses secret. There are plenty of real-time game server architectures where no game client knows the IP address of another game client. This might not be feasible for very fast-paced FPS games, for example, but that's only one particular use case.

I suspect we could significantly raise the bar to DDoS something like 80% of all websites/apps/servers -- at least to the level where random kids, or even random middle-class adults, wouldn't attempt it just because they had a bad day.


> I wonder if this sort of thing could be honeypotted?

One method could be to anycast the domain to a bunch of edge servers which all relay traffic to the actual server.

DNS queries for the domain return the closest edge, which gets attacked; the other edge servers can still route.


Slightly OT but the significant thing about "Don't be Evil" is that Google had already taken the fundamental choices that were evil. The slogan itself is blatantly self-conscious - an acknowledgement of the insane power that would inevitably accumulate as a result of the business model they were pioneering.


> The slogan itself is blatantly self-conscious - an acknowledgement of the insane power that would inevitably accumulate as a result of the business model they were pioneering.

I think that's historically false. At the time, "Don't be evil" seemed, more than anything, like an acknowledgement that Google wanted a corporate culture different from Microsoft's, which at the time was the 800-pound gorilla in tech and was widely seen as "evil" (I may be dating myself, but does anyone else remember the Bill Gates/Borg avatar that was the standard for Microsoft stories on Slashdot back in the day?). Google was founded in 1998, right when the US v. Microsoft antitrust suit was filed.

One could certainly argue Google now engages in some of the monopolistic tactics that originally got Microsoft in hot water (with MS it was "everything is part of the OS"; with Google it's "everything can be part of the search results page"), but I think you're reading too much into what was originally behind the "Don't be evil" slogan.


I would have certainly felt the same way until recently but the timeline laid out in Shoshana Zuboff's book on Surveillance Capitalism made me re-evaluate that.


I think that’s overly pessimistic. I think it may have helped delay the inevitable as it was in the back of their minds. I didn’t really completely give up on them until they renounced the slogan. It was a sad day and made it dead obvious they had gone full corporate.


> an acknowledgement of the insane power that would inevitably accumulate

That's overthinking it. "Don't be evil" is just the kind of slogan that could emerge in the '90s, when it became clear that good and bad were not linked to a specific organizational form or trait - you could have bad capitalism and good collectivism as well as the other way around. There was a feeling that "big business" was bad but "medium business" could be a force for good, you only had to stay decent and things would work out. And of course the 'net would have rejected any clipper-chip and not replicate the historical corruption of the real world.

Those were very naive times, in retrospect, but I don't blame the original googlers for believing in a simplistic view of the world. I blame Eric Schmidt and his sponsors for hiding their evil behind that line. Modern Google is basically all Schmidt.


I see. Now that makes it worse.


I've long thought of it as a big-brothery admonishment: "Don't be evil... because we'll know about it." Sort of a Santa Claus Is Coming To Town style cutesy celebration of tyranny.


> Definitely companies like cloudflare does an incredibly good job of stopping some insanely big attacks when it comes to http/https (I recently saw they were supporting udp and tcp based services now, never tried it).

CF still requires an Enterprise contract for proxying arbitrary traffic via Spectrum, likely because of the abuse-prevention aspect. Otherwise, SSH and Minecraft are offered at pay-as-you-go rates, but many have complained about how expensive it is:

https://community.cloudflare.com/t/what-do-you-think-about-t...


Incidentally, I find it deeply weird that the only protocols supported by Cloudflare Spectrum below the enterprise level are SSH, RDP, and... Minecraft?!

I mean, I guess it's a compelling use case for some customers. Still, it's a weird outlier.


Minecraft is incredibly popular, and it's normal to run your own server. (It's the 5th most-watched game on Twitch, at 81,000,000 hours per month!) I don't play, so I can't authoritatively say there is no company that provides "default" servers that new clients log into, but I've never heard of such a thing.

(If you look at the other popular games on Twitch, they all provide servers and can't be self-hosted: GTA V, Fortnite, LoL, CoD, Valorant, etc. So there is no market for anything but Minecraft-related services.)


And perhaps more significantly, DDoS attacks against Minecraft servers are extremely common. There's a massive market dedicated purely to DDoSing Minecraft servers.

In addition to its popularity, I would guess this is related to the fact that the average age of Minecraft players is probably lower than for most other popular online games. A disgruntled person between the ages of 12 and 18 who knows they can completely shut down the fun-having ability of everyone they're pissed off at, for a few dollars per hour, will often feel pretty tempted.


If you do the development work to support hot-protocol-of-the-moment X, chances are in 18 months nobody cares, because either (a) X is now old news and nobody uses it any more, or (b) X+1 came out, it's incompatible, and you'd have to do the work again for it to be useful.

If an enterprise customer will pay $$$ to support X this can still make financial sense, but Cloudflare's non-enterprise customers aren't paying $$$.

Minecraft is apparently not going anywhere, it's still very popular a decade after release. And my understanding is that the protocol is fundamentally the same as ever. So, you do that work once, and then you've got a free proof of concept apparently forever.


There's a reasonable number of servers with enough traffic to need to worry about DDoS attacks, and with revenue models built on selling in-game perks (often in ways forbidden by the Minecraft ToS, but that's relatively toothless for these servers), that would allow them to pay for this service.


Was this a RuneScape private server (RSPS)? I remember back in the day you could find a free RSPS DDoS tool with a quick Google search; all you had to do was enter the public IP address to start attacking. The culture of the RSPS scene was exactly what you're describing.

There was also another kind of attack where you would start thousands of bot clients at once that would spam messages. The hope was that you would (a) shut down the server, or (b) attract the players to the server your bots were advertising.


Not RuneScape; it was an old MMO that got abandoned. We modded the game and kept it alive for almost 8 years before most of the admins quit and we started our own company. Last I heard, the game is still running.

As for the DDoS tools: after writing the parent comment I did a quick DDG search, and you can still find several websites advertising DDoS services. Some I recognize from back then.

On a side note, doing an nslookup shows some of these sites are behind Cloudflare, haha.

> Also there was another kind of attack where you would start thousands of bot clients at once that would spam messages. The hopes would be that you would (a) shut down the server, (b) attract the players to the server your bots were advertising

Oh man.. some people..


I wondered when I'd see a mention of RSPSs on here. God what an excellent time of my life that was. Running a server with my brother got me into programming and tech and was such an absolute pleasure. We had the good fortune of never being DDoSed but I suppose our scale was small enough to avoid it (20 players max at any given time). I had never heard about the tool you're mentioning.


Haha I'm in the exact same boat - creating my own server got me into programming and tech.

A common tool was called Syipkpker[0]. It was really annoying dealing with these bots. All you could do was IP-ban them and pray the attacker didn't change their IP.

[0] https://www.dailymotion.com/video/x417fz2


> When you guys fuck-up the rest of us have to suffer.

Is there a lawyer here who can comment on whether the manufacturers of these horrid devices have any civil liability -- either currently or possibly in the future?

My gut tells me the only way this will get better is for there to be rules of negligence applied to the realm of computer security.


I don't know what it is about gaming that attracts DDoS attacks more than practically anything else, but there are a lot of server hosts that will not even rent you a server if they know a game will be hosted on it, due to this.

I have used Cloudflare Spectrum to prevent attacks. It does work incredibly well but the cost is significant.

As for 3rd-party companies, I do hate relying on Cloudflare for this. It is by far the worst business relationship I have ever been in, yet we found no good alternatives.


> I don't know what it is about gaming

Gaming tends to attract a population that is tech-savvy (means), competitive (motive), and has copious leisure time (opportunity). Combine those three things and you have the kindling.

The spark, I think, is due to the fact that the crowd was historically quite young. That means three things. First, impulsive. Second, nothing/less to lose (someone with real assets they Worked Hard For wouldn't Risk It All over an in-game spat). Third, might not've learned how to handle competition in a healthy way.

A DDoS attack is a crime, but the sort that most law enforcement don't really care about at least in the context of a small-time game server. It's kind of the modern equivalent of knocking down mailboxes or shooting out traffic signs with a shotgun. Both things that cause actual damage that costs actual money, but which teenage males have been doing probably since the advent of mailboxes and shotguns.


> [...] but there are a lot of server hosts that will not even rent you a server if they know that there will be a game hosted on it due to this.

Huh, I've only seen that with VPS hosts and thought it was related to game servers causing high CPU load on shared resources.


> I don't know what it is about gaming that attracts DDoS events more than practically anything else, but there are a lot of server hosts that will not even rent you a server if they know that there will be a game hosted on it due to this.

Like IRC back in the day :P


Trying to DDoS someone because you got banned for saying racist things is the virtual predecessor to the Jan 6th insurrection.


No, it is not. The kids that used to do this kind of thing are rarely part of the Q cult. Just because you don't like two different kinds of people, it doesn't mean they are the same. People mad about rigged elections are not the same as techy kids having fun, so get a grip.


I'm good friends with a whole bunch of small-time GOP operatives. No one serious, but people employed full-time in the broader GOP world. Think stuff like "staff for state senator" or "event organizer at regional chamber of commerce type orgs".

I asked all of them in December if they were worried about violence given all the stolen election claims. I had TDS, etc. etc.

I asked after Jan 6 for a post-mortem on their dismissals and every single one said they thought all of the people online were just trolling. Half of them work for employers who ended up sort of half-severing historically very deep ties with state GOP parties.

So, I think there's probably a lot more truth in the GP than you give credit for.


It's a fair comparison in that they are both sets of garbage people overreacting when things don't go their way. I find it a very apt comparison.


It's just a comparison of two groups' similar reactions.


The magic of 3rd-party anti-DDoS providers is rarely the software/methods: it's just about having bigger pipes. Anyone can figure out how to block volumetric attacks with iptables or whatever; the problem is that if you have a 1 Gbps pipe from your transit provider, it's going to get saturated before you can do any blocking. The 3rd parties can afford to have multiple 100 Gbps pipes with 10 Gbps commits in multiple DCs -- you share this cost with other customers for when you get attacked. That's pretty much the entire point of 3rd-party anti-DDoS providers, and not much else.


Cloudflare does any TCP now, maybe UDP?

https://www.cloudflare.com/products/cloudflare-spectrum/


I'd like to learn more about using ipfilter rules on bare-metal machines to mitigate DDoS attacks. Do you have any recommendations?


IoT device makers aren't going to be better just out of the goodness of their hearts -- regulation / litigation is needed.


DDoS, 400-500 people, OVH -- let me guess the game, Tibia?

edit: nvm.


And this is why we need MaidSAFE instead of the Web. It doesn't have DDoS attacks; instead you make money every time someone accesses a chunk of a resource, and the Kademlia tree hides hosts' IPs after one hop, so the network and hosts can't be taken down easily. Very different from Tor.

https://maidsafe.net is the best project to come out of the "Web3" space. If you've heard of Freenet, this is like Freenet 2.0.

PS: Why the massive silent downvotes? This platform actually solves this problem and many others HN constantly (and correctly) complains about. But when it's posted, you prefer to ignore it. (Disclaimer: I am not affiliated with them in any way. In some ways they are a competitor to Qbix and Intercoin, but I give credit where it is due.)


I think a lot of us are tired of cryptocurrency snake oil. It is an uphill battle to build things on a cryptocurrency and try to convince the wise that it isn't an elaborate vehicle for get-rich-quick speculation.


MaidSAFE is not a cryptocurrency. It’s a network closer to Tor and Freenet but architected way better.

This is like downvoting all mentions of IPFS because FileCoin is a currency that is used to pay others for storing files. Or like downvoting all mentions of Freenet.


> MaidSAFE is not a cryptocurrency

Mmhm. Well, they raised money using an ICO, so apparently they felt their coin was worth people speculating on. Market cap is above $200 million.

Maybe the technology is great, but it's built and operates like any other get-rich-quick crypto scheme. Their mission, if genuine, would be better served by a different approach. Avoid the appearance of impropriety and all that.

I don't have much of an opinion and don't want to argue, but you did ask why people were downvoting. That's why.


I am LOLing at the "get-rich-quick scheme" criticism of yours. This project has been in development for 14 years. It is incredible that people don't even care to properly research it; they blindly choose their favourite bias just to say something. On one side we have those who criticize it for not shipping soon enough, and on the other, for being a 14-year get-rich-quick scheme. Fabulous.

I would invite you to check out what the project is really about, check out: https://primer.safenetwork.org/

Secondly, if you want proof that this project actually predates Bitcoin, here is a talk the founder gave at Google Tech Talks in 2008: https://www.youtube.com/watch?v=fLA77zxk-vA Maybe it was because of this that they found a way to solve the Byzantine Generals Problem independently, without relying on blockchains. I hope you get the significance of this.


I get that this is a popular view, and justifiably so. But the project is not a get-rich-quick scheme for the founder; you will see that if you look at how it was created and the structures that support it (which include a Scottish charity that owns the company MaidSafe, which is building the first version but then aims to remove itself from control in favour of decentralised development funded by the network itself).

And there is good reason why they chose an ICO over alternatives. You don't say what you favour, but what this project has achieved by this route is complete independence to deliver according to fundamental principles.

They have no VC investment, nobody has control who is not aligned with the goal to create a fully autonomous, decentralised network and the "fundamentals" of Safe Network which you can read up on if you want. So the project continues to aim at a hard target regardless of many opportunities to get rich along the way.

It was I think the first ICO, so if getting rich was the aim it could have happened many times by now.

As things stand it is a shame that people are turned off from even looking into this project because of that association. Once it emerges I think the value it provides rather than captures will change minds.

If you want to know what the target is you can read about the Fundamentals of Safe Network here: https://safenetforum.org/t/safe-network-fundamentals-context...


Funny definition of "quick". They started this project in 2006 and have not raised any money since back then, as one of the FIRST ICOs (before Ethereum existed; they used Mastercoin).

And frankly, raising money with a token is far better than raising money selling equity. When you sell equity to a guy like Peter Thiel, who thinks that "competition is for losers" and "you should build a monopoly", you build exactly that. You kill off Wirehog and lock everyone into your platform, instead of making an open source reference implementation of a protocol like email (SMTP) or the Web (HTTP). All because you have to generate profits perpetually, as that is what equity investors expect.

With a token, you raise money from future participants in the network and you don’t need to have a profit motive causing you to build a monopoly. Show me open source projects that were funded by seed equity funding.

The socialist in me wants to see more funding of projects by developers, infrastructure providers and others being paid in tokens, rather than equity in a monopoly that will extract rents and be afraid to give out the code to anybody — or even become interoperable!


Apparently this project has been around since 2006 and still hasn't gotten any traction. Why should I move my resources to this system?


The sister comment called it a “get rich quick scheme” due to selling tokens once. Which is it?


This had me scratching my head earlier today when I was debugging why renewal was taking so long. I've taken Let's Encrypt's reliability for granted. Didn't even cross my mind that it might be a service issue.


Here is something to think about. If you get DDoS'd in the middle of a trial run by an enterprise customer, it's the end of your startup. Azure, AWS, OVH -- almost all hosting providers will start dropping your connections. And DDoS protection services are expensive. Pay-as-you-go models are awful for this: when you do get DDoS'd, your bill could be quite high.


No, OVH has DDoS protection built into their service, and it's free, and your connections will not get dropped. I moved to OVH after getting a few DDoS attacks, and since then there have been no problems. I've had a few emails from OVH notifying me about attacks in progress and that they are automatically mitigating it. When it happens, it has zero effect on our service.


Thank you, I did not know this. I would love to hear more about it -- is this protection available for their VPS or public cloud? I already have stringent firewall rules for all VMs in my Ansible roles (I can port my stuff from one cloud to the next, as my deployment is based on Ansible). What else do I need to do to protect my servers?


All their services come with DDoS protection.

https://www.ovh.com/ca/en/anti-ddos/faq.xml?lsdDoc=faq.xml


OVH actually does a really good job when it comes to DDoS protection. They can take some really big attacks and slow them down just enough so your firewall rules can take care of the rest.

Of all the hosting providers I have used, they are the only ones who don't null route you as soon as you get attacked, and considering the cost of their service, OVH is a real lifesaver when you really need the help.

----

Just realized this sounds like an advert for OVH, lol. I have no affiliation with them whatsoever, just a really happy old customer.


I wonder how well common ACME tools implement exponential backoff on retries. With a ton of clients, you can inadvertently make a DDoS longer/worse.

Let's Encrypt asks for max 1 request per day per certificate: https://letsencrypt.org/docs/integration-guide/
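For illustration, the usual fix is exponential backoff with jitter, so retries from many clients spread out over time instead of hammering the CA in sync. A sketch of the idea (illustrative Python; not taken from any real ACME client):

```python
import random

def backoff_schedule(base=60.0, cap=86400.0, attempts=8):
    """Exponential backoff with full jitter: attempt n waits a random
    duration in [0, min(cap, base * 2**n)] seconds, capped at one day."""
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * 2 ** attempt)
        delays.append(random.uniform(0, ceiling))
    return delays

# A failed renewal would sleep for these durations between retries
for i, delay in enumerate(backoff_schedule(), start=1):
    print(f"attempt {i}: wait up to {delay:.0f}s")
```

The jitter is the important part: without it, every client that failed at the same moment retries at the same moment, which looks exactly like another attack wave.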


Caddy does a lot of it: https://caddyserver.com/docs/automatic-https#errors, and Caddy also falls back to ZeroSSL if it couldn't issue with Let's Encrypt. Caddy is without a doubt the most robust ACME client implementation to date.


I completely dropped nginx for Caddy some months ago. The only thing missing is a Kubernetes ingress implementation for Caddy.


I wonder what their motivation is?


"No one is buying our expensive SSL certs. If we show businesses how unreliable a free service is, CTOs will make their admins buy from us."

or

"Domain xyz's certificate is expiring. If we pay for a DDoS, their site won't be able to renew, and (customers won't go to the site due to the expired cert / APIs people use won't work / we can take advantage of a compromised cert longer)."

Just some possible but implausible scenarios.


> domain xyz's certificate is expiring

That's why it's so important not to wait until the end of the 90-day expiration period, but to renew every other week or so.


is that Let's Encrypt's default behavior?


The default is to renew when less than 30 days remain, and to check that every day or every week.


And you get the extra benefit of being emailed[0] a few times beforehand if the certificate fails to renew as long you register your account with an email address.

[0]: https://letsencrypt.org/docs/expiration-emails/


Seems like a sane default to me.


Very much so. In my experience with nightly jobs in a corporate setting, the more often something happens, the more likely you are to catch an upstream dependency that breaks it.

The sooner you catch that breakage, the easier it is to get the resources (either from that team, or from your own team) to fix it. It’s a matter of “Oh we changed that API 2 months ago, everything is fine for us, all of our people have moved on to other tasks” versus “Oh our change broke you? We can revert it until we have a workaround”.

2 months, in most orgs, is enough time to figure something out before your entire business goes offline.


It seems unworkable for the majority of smaller sites, who are increasingly forced to use Let's Encrypt.

Unless you want that automatic update tool on your server, which I find a bit sketchy.


If by "automatic update tool" you're referring to Certbot (the EFF's reference ACME client implementation), you don't have to use it. There are several dozen ACME clients, including some that are entirely shell scripts (such as Dehydrated).


> Unless you want that automatic update tool on your server, which I find a bit sketchy.

Where else would you put it? You could put an ACME client somewhere else but it still needs to connect to place the updated certs.


Not recommending this, but it is technically possible to proceed as follows:

* Server mints an arbitrary key pair, it produces a CSR with the public key in it, signed using the private key, and the administrator manually copies this to an external ACME client.

* ACME client uses the CSR and control over DNS to get Let's Encrypt (or any ACME CA) to issue periodically, say, once every 45 days or as soon as possible if that's exceeded.

* Server periodically fetches any new certificates for its name from any public source of them. You can build your own server, you can use crt.sh, you can use Google, it doesn't matter because the certificates necessarily contain baked in proof that they're genuine.

Notice that in this design the server and ACME client don't talk to each other at all. They both have to be (at least sometimes) connected somehow to the public Internet, but they never need to connect to each other after that one time setup, they don't even need a human to manually transfer stuff, they just both do their thing and it works out.

This does mean the key remains the same for the lifetime of the server. But if you have a sane policy for servers getting replaced (e.g. they're depreciated over 36 months and replaced before 48 months) there's no reason that should pose a problem in itself.
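The one-time setup step above can be sketched with stock OpenSSL (filenames and the CN are placeholders; a real deployment would use whatever names its ACME client expects):

```shell
# On the server, once: mint a key pair and produce a CSR signed with it.
# Only server.csr is copied to the external ACME client; the key never leaves.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key

openssl req -new -key server.key -subj "/CN=example.com" -out server.csr

# Sanity check: the CSR's self-signature verifies against its embedded key
openssl req -in server.csr -verify -noout
```

Because a CSR is signed with the private key it names, the ACME client can reuse the same CSR for every issuance without ever holding the key itself.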


Why are they forced to use Let's Encrypt? Cloudflare gives me free SSL, and I imagine AWS or GCE would too.


> It seems unworkable for the majority of smaller sites who are increasingly forced to use letsencrypt.

It's one line in a crontab, hardly an enterprise-level endeavour involving IBM consultants and a triple-redundant K8s cluster.

> Unless you want that automatic update tool on your server, which I find a bit sketchy.

It's no sketchier than using binaries from your distro's repo, or even the Linux kernel. I doubt you'd read either line by line to check for nefarious wrongdoing.
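For what it's worth, the "one line in a crontab" really is about this much (a typical example, not the only way; certbot exits quickly when nothing is within its renewal window):

```shell
# Run twice daily; `certbot renew` only acts on certs nearing expiry.
0 3,15 * * *  certbot renew --quiet
```

Running it more often than strictly needed is harmless and means a transient outage (like this DDoS) just gets retried at the next scheduled run.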


Let's Encrypt is a Certificate Authority, so, it doesn't have any "default behaviour" in respect of renewals, from the CA's point of view "renewing" is just a new issuance that happens to be for an identical subject. Let's Encrypt's rate limiting policies do actually care about that ("Duplicate Certificate Limit" five per week for each subject) but it can't put in place any particular policy about when you must or will renew.

CAs which charge for issuance often have a policy which implies an earliest sensible renewal date, because they will "carry over" remaining time on the previous certificate. There is a practical limit to that, (for example today your "one year" certificate from such a CA can only have up to 398 days until it expires, so renewing two months early won't make sense) because of the Baseline Requirements and/or trust store policies.

But if you're doing client development for ACME, the protocol Let's Encrypt implements for issuance, then yes, they'd tell you they advise you to begin trying to renew with 30 days left. The EFF's Certbot tool, which a long time ago was just named "letsencrypt", implements this policy, as do many other standalone ACME clients.


> CTOs will make their admins buy from us

Or, those admins can switch to zerossl.com until the DDoS ends (you basically just need to point certbot at a different ACME server).

So the DDoS yields... wasted money.
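A sketch of that switch (certbot's real `--server` flag selects the ACME directory; ZeroSSL additionally requires EAB credentials from a free account, and the placeholder values here are illustrative):

```shell
# Issue via ZeroSSL's ACME endpoint instead of Let's Encrypt.
certbot certonly --server https://acme.zerossl.com/v2/DV90 \
  --eab-kid YOUR_EAB_KID --eab-hmac-key YOUR_EAB_HMAC_KEY \
  -d example.com
```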


Aren't OCSP servers affected too? That would cause issues for page visitors too.


Possibly testing or demonstrating a botnet. "For bragging rights" is a thing, as is advertising ("my botnet took down critical infrastructure, wanna buy my DDoS service?").


Not sure. Unless this is sustained for a long time it shouldn't affect autorenewals which are done well in advance of expiry. So it _shouldn't_ affect cert expiry unless people are still manually renewing and leaving it to last minute.

[edit] Unless the attackers identified a bug in certbot (a commonly used autorenewal client), e.g. what happens when LE is unavailable when autorenew is triggered - you'd hope it would retry periodically until LE is restored, but perhaps not. If not, you could time the DDoS just right to ensure a specific cert does not get renewed even after the DDoS stops; then maybe a couple of weeks later it would expire... But that's relying on such a bug existing and the site owners not noticing it (LE will also email the registered email address eventually, regardless of autorenewal scripts), so maybe this is too much of a stretch.
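The timing argument can be sketched with certbot's defaults (90-day certs, renewal attempted once fewer than 30 days of validity remain; the issuance date is illustrative):

```python
from datetime import date, timedelta

issued = date(2021, 1, 1)                    # illustrative issuance date
expires = issued + timedelta(days=90)        # Let's Encrypt certs last 90 days
window_opens = expires - timedelta(days=30)  # autorenewal starts retrying here

# An outage only causes an actual expiry if it spans the whole renewal
# window -- i.e. the DDoS would have to be sustained for up to 30 days.
print((expires - window_opens).days)  # → 30
```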


Fun?

Just last week our shared hosting provider was attacked and the attacker tried to brute-force its way into a management API. I cannot imagine another reason than "fun" and "just because we can", because there's nothing to get [besides money after encrypting all data].

So I think the attacker just attacks LE because it's on the internet and he can.


They could be aiming for credentials to use in credential stuffing attack, a place to put malware, a place to distribute malware, servers to add to their botnet, a proxy to use for shady stuff, the list goes on. I see plenty of reasons to attack a hosting provider and its infrastructure?

Or am I missing something?


It's a practice run before they try it on a bigger entity


Often a DDoS is used as a smokescreen/cover for an actual compromise. I guess the hope is that it goes unnoticed in all the noise, while all hands are busy at the pumps. Hope not!

I see in their status page that OCSP endpoints are also impacted. There could be any number of motivations including interfering with someone's ability to check if a certificate has been revoked.


Some people just want to watch the world burn.


Given the complete absence of information could it just be an accident? I think Wikipedia recently had performance issues where it turned out that a popular app was just pulling an image in the background and the app developers fixed it once they were notified.


Pointing out the issues of a single point of failure for the internet?


Caddy mitigates this by falling back to ZeroSSL if it couldn't issue a cert from LE: https://caddyserver.com/docs/automatic-https#errors


The keys to the kingdom are ever more being placed in the hands of relatively few internet custodians. Figuratively here of course, since the private keys are generated locally and never transmitted to LE.


Is it really a single point of failure though? Certificates are renewed well in advance, and there are several free alternatives to LetsEncrypt with ACME support today.

Switching to a new provider in case LetsEncrypt goes down is as simple as updating your scripts.


A large number of sites use LE, and only LE.

Perhaps this move will mean people actually update their scripts and get it working on another system


Why? If you only renew at the last day you will run into troubles independently of Lets Encrypt.


Buypass, ZeroSSL also provide free certificates with ACME.


Could be a "test fire" of a paid service - showing it works and will do what the customer wants.


A distraction from the real intrusion?


My thoughts exactly. Hacking letsencrypt would be a massive deal.


It's better to assume that people will simply do everything that is possible. You can't open an umbrella in your butt, so that can be assumed not to be done; but everything that is possible should be assumed done, or to be done sooner or later. A motivation of "because I can" is simply enough.


About how expensive is it to rent a botnet and pull off an attack like this?


Asking for a friend?


Just a reminder that users of Caddy (v2.3.0 or higher) are not at risk when LE gets hit like this, because it will fall back to having a certificate issued by ZeroSSL. Both issuers would need to be down for the whole last 30 days of the certificate's 90-day lifetime before Caddy would be stuck with expired certificates.

https://caddyserver.com/docs/automatic-https#errors

https://github.com/caddyserver/caddy/releases/tag/v2.3.0


Why the duck would anyone DDoS Let’s Encrypt?


This is a very morbid thought, but I wonder if the people who run LE ever travel via the same means. If somebody took them out all at once, would the web's security essentially crumble? This is the danger of centralized services, but more so the crap design of web PKI.

All "usable" HTTPS depends on certs, right? And "usable" certs require a domain, right? And that cert for that domain needs to have been generated by a CA, right? But it's tied to a domain, and IP space. You have to prove to a CA that you both control a domain record and some IP space it points to. Nobody has designed anything to straightforwardly prove that in an unhackable way. We have shitty hacks, like "serve this unique file on this web server that this domain record is pointing to", or "answer an e-mail on one of 20 addresses at this domain", etc.

But none of those address what we actually want to do, which is just to prove that we own/control a domain record. That's the only meaningful thing in having a cert: proving that you actually own the domain record this cert is assigned to. And we have no actual way to do this. Literally the only way to prove definitively that you own a domain is to talk to the registrar, and the only way to prove that you control a domain record is to talk to the nameserver that the registrar is pointing to. The former we don't handle at all, and the latter is highly susceptible to various attacks.

You could remove the reliance on CAs entirely with a different model. You tie a private key to domain ownership, and a private key to a domain record. Then you only have to trust registrars' keys/certs, and you can walk backward along a cryptographically-signed web of trust. Your browser trusts the registrar's key X. The registrar signs your domain key Y. The domain key Y signs a domain record key Z. Your web server generates a cert using domain key Z.

For a client to verify the web server cert, they verify it was created by key Z, and verify that key Z was signed by key Y, and that key Y was signed by key X. Then any webserver can generate its own cert for any domain record, we don't need CAs to generate certs, and we have a solid web of trust that goes back to the actual owner of the domain, but also allows split trust via the domain owner assigning keys to domain records.
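A toy sketch of that chain walk (no real cryptography here; the `chain` mapping stands in for signature verification, and the X/Y/Z names follow the comment above):

```python
# Toy model: each key records which key signed it; verification walks
# from the server's cert key back toward a trusted registrar root.
chain = {
    "Z": "Y",   # domain-record key, used by the web server for its cert
    "Y": "X",   # domain-owner key, signed by the registrar
    "X": None,  # registrar key, shipped with the browser
}

def verify(key, trusted_roots):
    """Succeed only if walking the signatures reaches a trusted root."""
    while key is not None:
        if key in trusted_roots:
            return True
        key = chain.get(key)
    return False

print(verify("Z", {"X"}))  # → True
print(verify("Z", {"W"}))  # → False
```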


> This is a very morbid thought, but

This is such a well-understood problem, in fact, that it has a name and a Wikipedia entry: "bus factor". According to Wikipedia:

> The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel

As for proving that you own a domain, I think the DNS-01 challenge that is used to grant wildcard certificates does a pretty good approximation: if you can create and update TXT records in the root zone, you have at least functionally "owned" a domain even if you don't legally own it.
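Concretely, a DNS-01 challenge amounts to publishing a CA-supplied token in a TXT record on a well-known name, roughly like this (the token value is made up):

```
_acme-challenge.example.com.  300  IN  TXT  "gfj9XqRg85nMtoken"
```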


But this is vulnerable to a number of attacks that my solution isn't. Right now it's pretty damn easy to create valid certs for arbitrary domains. I'm constantly surprised that nobody wants to fix this.


> I wonder if the people who run LE ever travel via the same means

Afaik, the LE team is distributed across the globe.

> If somebody took them out all at once, would the web's security essentially crumble?

No, there are other both free and paid CAs

> We have shitty hacks, like "serve this unique file on this web server that this domain record is pointing to", or "answer an e-mail on one of 20 addresses at this domain", etc.

Yes, but we also have certificate transparency. You can monitor all certificates issued to your domains and revoke them if needed. Not perfect but imo still reasonably safe considering you know that all the issued certs are on your servers.

> You tie a private key to domain ownership, and a private key to a domain record. Then you only have to trust registrars' keys/certs, and you can walk backward along a cryptographically-signed web of trust.

That exists and is called DNSSEC. If you haven't heard of it, you already understand: it isn't widely used. Also, it would require major rethinking of how we use the internet. Most clients do not validate DNSSEC; only public and maybe ISP resolvers do, but they can (and probably will) tamper with the DNSSEC answers so they can better spy on and MITM you.

> Your browser trusts the registrar's key X

Sure, we could do it in browsers, but the internet is wider than the web, and we would need to rewrite a great part of what we use every day (not saying that we can't or should not).

In the meantime, if you use a DNSSEC-compatible TLD and registrar, you can already sign your zones. That way, the current CAs will be able to cryptographically verify that the server asking for a cert also owns the domain/subdomain.


> Yes, but we also have certificate transparency.

Right. Because of the hundreds of millions of domains out there, every one of them is monitoring the CT logs for their domains....? And once someone does create a false cert, by the time you find out about it, the cyber criminals have already hauled away a bank transfer or personal data, etc.

CT isn't security, it's a broken window.

> That exists and is called DNSSEC.

Every time I propose this, somebody equates it to something else (DNSSEC, DANE, etc), but what I'm proposing intentionally avoids those designs' pitfalls. I'm saying we need a brand new design that does not piggy-back on existing solutions.

> Also, it would require major rethinking of how we use the internet.

It would require rethinking of the workflows between registrars, domain owners, nameservers, and webservers. But in theory, browsers would work exactly the same; they'd just trade their ca-certificates for registrar-certificates. Validating the full chain of certs that they already do should be the same.


Heroes really, https and certificate centralization should end as soon as possible. Maybe along with DNS.


TLS cert authorities shouldn't end, but more importantly, HTTP shouldn't end. HTTP+HTTPS together are great. HTTPS-only, as being pushed in modern times, is quite bad.

LetsEncrypt is great and I am really glad someone stepped up to create a mostly-not-evil non-profit cert authority. But everyone using LE is very bad for the health of the internet. It provides nearly a single point of failure for government/political interference, technical failure, and failure due to corruption from money and scale internally.


The Web's PKI already has multiple single points of failure because any trusted root CA can issue certificates for any domain. Any problem you list for LE is compounded by every additional trusted root CA. While transparency logs can identify bad CAs it can't prevent them.

Putting certs in DNS with DNSSEC authenticating them might be a more robust design overall, and would eliminate a lot of what is bad about HTTPS-everywhere (namely that LE trusts DNS to begin with, so doesn't add much to the web of trust, and that certificate issuance would be much more straightforward and automated from your TLD).

Unfortunately I have to disagree with you about the end of HTTP. ISPs have historically proven that they can't be trusted (NXDOMAIN interception, ad replacement/injection, DPI), and so for a non-negligible fraction of the world HTTPS (and DNSSEC or similar, although not enough people realize it yet) is a necessity.

I don't see alternative options except perhaps onion routing everywhere, but that only moves the goalposts to exit nodes without HTTPS and a PKI.

Another possibility for securing the existing PKI is to extend support for Name Constraints, so that root CAs are only given authority to issue for subsets of domains, and to finally make TLS trust only the most specific root CA for a given domain. E.g. if a TLS implementation has a trusted root CA with a Name Constraint of .example.com, then it should not accept a certificate chain for anything under example.com from another root CA; and vice versa, that root CA could not sign certificates for domains not under example.com. This would allow sites with high security needs to get their own CAs accepted by browsers, and allow breaking root CAs up by TLD, which would match DNSSEC.
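Name Constraints are already expressible in certificates today; an OpenSSL extension section for minting such a constrained CA might look roughly like this (the section name is made up; `nameConstraints`, `basicConstraints`, and `keyUsage` are real X.509v3 extension keywords):

```ini
# openssl.cnf fragment: a CA only allowed to issue under example.com
[ constrained_ca_ext ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:example.com
```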



From the RFC: Relying Parties MUST NOT use CAA records as part of certificate validation.

A normal user is in roughly the same situation with and without CAA; is a particular certificate trustworthy? Only trusted root CAs and CRLs can answer the question. CAA is only cryptographically secure with DNSSEC, and transparency reports give at least as much auditability.


> HTTPS only, as being pushed in modern times is quite bad.

You can say this when ISPs stop MITMing ads into documents served over HTTP.


Parent was downvoted, but it happens. People think a site with only public content should be served over HTTP, what's the harm. Here's my anecdote:

A site I developed was being critiqued by a fellow director. They looked at the HTML and didn't like the poorly written advertising and analytics Javascript near the start of it.

But wait! What advertising and analytics? I didn't add that sort of junk.

It took us a few rounds of me defending my design decisions and not understanding what their problem with it was, and them becoming suspicious of me, before we figured out they were looking at Javascript inserted by their ISP in real-time into the site's HTML. Not something I wrote. We were viewing different HTML because of that.

That was 6 years ago. One more reason to switch to HTTPS, even for public, static content.


ISPs should be charged like the criminals they are, but they are abusing a unique position not shared by a random attacker. My own ISP, Comcast, has injected content into my HTTP connections and broken things like the Steam client browser. For almost a decade now I've tunneled to various VPSes for web surfing.

The problem here is not in HTTP. HTTP allows anyone and everyone to easily host and view each other's websites. Yes, ISP can interfere but that's not something anyone else can do in a targeted way.

The benefits far outweigh the downsides in most cases. You might have a business/profit motive to disable HTTP and that's fine. But most cases are not profit motivated.


> You might have a business/profit motive to disable HTTP and that's fine. But most cases are not profit motivated

No, it was a community group non-profit (non-profits have directors too!) and the site was a static site with public information and no tracking. Exactly the sort of friendly hobbyist site you are probably thinking should use HTTP. I was an unpaid volunteer, and the group did not pay for hosting.

> The benefits far outweigh the downsides in most cases

There were no identifiable benefits to HTTP or downsides to HTTPS for us. The switch was almost trivial. The ISP issue hurried the conversion though.

> I've tunneled to various VPSes for web surfing.

If you have to use a VPS to use HTTP safely, with its extra cost and latency, why are you down on HTTPS? Having to use a VPS with your HTTP is basically the same thing as HTTPS but with higher cost, higher latency and more security centralisation.

That's not a positive advert for HTTP, if you feel you have to use a VPS to use it safely.


Still, there should be a law (wire fraud?) that is applied to the ISPs who engage in forgery.

If applied, they'd stop and HTTP would be safe for static sites.


> HTTPS only, as being pushed in modern times is quite bad.

I'd be interested to hear more about this, care to elaborate?


Web devs have been cargo-culting really hard lately, adopting practices like completely disabling HTTP and only doing 301 redirects to HTTPS on the HTTP interface. They say they need to protect their users from MITM and downgrade attacks, if they say anything at all, but realistically this isn't even in the threat model for 99% of sites.
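The redirect pattern being described looks roughly like this in nginx (the server name is illustrative):

```nginx
# Answer on port 80 only to send everything to HTTPS.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```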

So now we have sites abandoning HTTP entirely and only having HTTPS. This encourages browsers like Firefox to start enabling things like HTTPS-only mode by default. It encourages putting up scaremongering warnings of danger on HTTP sites, like HTTPS self-signed certs get (which killed off self-signed sites).

So now browsers are beginning to refuse to show HTTP, and web admins are putting up servers that refuse to serve HTTP. That means in the near future, unless you can get cert authority approval (forever), you'll be unable to host a visitable website (i.e. get a TLS cert from an authority), and unable to visit most websites that don't play the cert game unless you modify your browser.

Human people cannot be cert authorities. Only corporations can. These two trends towards HTTPS only, on client and server, lead inevitably towards a situation where everywhere is in a handful of cert authority chains and things become easily controlled, or accidentally broken, due to that centralization.


Eliminating central authorities of trust is a difficult problem. At least with LE around we can have encrypted communications for free. That's a huge net good for society.

Cloudflare though, we should all talk about more...



