Ouch! The percentage of internet-of-things devices that don't ship libcurl is a rounding error. The percentage of internet-of-things devices that patch libcurl is also a rounding error.
Silver lining: perhaps this will be exploitable for the purpose of jailbreaking and de-cloudifying various mobile hardware and IoT devices, which would otherwise become (or already are) expensive paperweights.
But what does a “typical” attack on a libcurl vuln look like? Unlike a server process attack, wouldn’t curl be required to be directed to the attacker’s malicious content?
So the vulnerable systems are those where an attacker can craft an endpoint where curl downloads data?
Isn't the lucky circumstance here that most systems with libcurl don't actually use it, and among those that do, an even tinier subset will allow an attacker to point it anywhere (e.g., downloads from a URL the attacker decides)?
So a victim behind a hostile AP might be redirected to a malicious site masquerading as a known legit site, and when the bad site presents a maliciously crafted bogus certificate, curl doesn't notice.
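(For reference, libcurl's certificate checking is governed by two options that are on by default; the scenario above presumes a flaw that fires before or despite these checks. A minimal sketch, with a placeholder URL:)

```c
/* Sketch: the libcurl options that normally catch a bogus certificate.
 * Both are already the defaults; set explicitly here for illustration. */
#include <curl/curl.h>

int main(void) {
    CURL *h = curl_easy_init();
    if (!h)
        return 1;
    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
    /* verify the server's certificate chain against the CA bundle */
    curl_easy_setopt(h, CURLOPT_SSL_VERIFYPEER, 1L);
    /* verify the certificate actually names the host we asked for */
    curl_easy_setopt(h, CURLOPT_SSL_VERIFYHOST, 2L);
    CURLcode rc = curl_easy_perform(h);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : 1;
}
```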
True, there are probably ways that could make this more severe if it's related to that kind of thing. And it would need to be on that level to come close to an attack of the kind that the log4j debacle was.
For the most part, it's not common to be able to make a server invoke curl against an arbitrary attacker-controlled server, which is usually required to exploit this sort of thing. There will be some vulnerable apps, but the vast majority of servers with this vulnerability present won't be exploitable in any practical sense.
You have to consider that the author of curl has recently been vocal against inflated CVE scores for vulnerabilities that require very specific conditions or user stupidity to trigger. For him to come out with "the one rated HIGH is probably the worst curl security flaw in a long time" most likely means it's bad.
Hmm. The worst case I can think of is if the vulnerability is exploitable before (or during) verification of TLS certificates for http(s).
That would mean that someone in a MITM position would be able to inject the payload whenever libcurl makes requests.
But even that seems less messy than log4j? It can't possibly be as common that libcurl makes connections to arbitrary user entered urls, compared to log4j logging user entered text.
I thought so at first but then I remembered server side request forgery, SSRF
That's a bug class that is quite common but rarely leads to code exec or other issues (except in some cloud environments). If this is something that gives code exec after pointing curl at a malicious server it's going to be bad.
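To make the SSRF shape concrete: the vulnerable pattern is a server-side handler that hands a user-supplied URL straight to libcurl. A hedged sketch; the function name and the missing allowlist check are illustrative, not from any real codebase:

```c
/* SSRF-prone pattern (illustrative): the attacker chooses where curl
 * connects, so a malicious server gets to feed libcurl arbitrary
 * responses -- exactly the precondition for a client-side curl bug. */
#include <curl/curl.h>

int fetch_for_user(const char *user_supplied_url) {
    CURL *h = curl_easy_init();
    if (!h)
        return -1;
    /* bug class: no allowlist/validation of the destination */
    curl_easy_setopt(h, CURLOPT_URL, user_supplied_url);
    /* following redirects widens the exposure further */
    curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
    CURLcode rc = curl_easy_perform(h);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : -1;
}
```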
wget is fine for downloading stuff, but I'm not sure how common its use for IoT is. curl has two distinct advantages:
- a library which can be linked into the application, i.e. one doesn't need to do the expensive fork()/exec[v[p[e]]]() dance (see the sketch below), and
- documentation: most cloud services offer examples using curl for their API. It takes some additional mental work to translate those into wget syntax.
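A sketch of the first point, with a placeholder URL: shelling out pays for a whole fork()/exec() per download, while linking libcurl keeps the transfer in-process.

```c
#include <stdlib.h>
#include <curl/curl.h>

/* the expensive route: a new process per download */
void via_shell(void) {
    system("curl -sO https://example.com/firmware.bin");
}

/* the library route: stays in-process, and the handle is reusable */
int via_libcurl(void) {
    CURL *h = curl_easy_init();
    if (!h)
        return -1;
    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/firmware.bin");
    CURLcode rc = curl_easy_perform(h);
    curl_easy_cleanup(h);
    return rc == CURLE_OK ? 0 : -1;
}
```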
> Updating the shared libcurl library should be enough to fix this issue on all operating systems.
> Then again there will also be countless docker (and similar) images that feature their own copies, so there will still be quite a large number of rebuilds necessary I bet.
Including mine once the security team sees the CVE warning, even though our image literally never uses curl or libcurl and only ever communicates with other internal systems, within our private network.
Not that we shouldn't patch it! But unless the nasal demons are going to start a process and make unwanted HTTP connections, I'm not worried.
Wouldn't it be better not to come out with a somewhat alarmist take ("hey, we're going to disclose a high-risk vulnerability in a week... and the fixes for it..."),
but instead just release the new version and the CVE at the same time? Now everyone is trying to get ready to exploit this on the 11th, or is already getting the most out of it if they already know. And does this information really make anyone hover their finger over the button, ready to push new versions and so on, on the 11th?
At the moment, there is (most likely) no exploit available in the wild. A fix for the vulnerability is basically going to be the blueprint for an exploit. This means an exploit is pretty much guaranteed to start circulating within hours of the vulnerability & fix being released.
A fix cannot immediately be applied to billions of machines. It takes time for distros to port the fixes and backport it to all the versions they still support, it takes time for admins to notice the vulnerability at all, and it takes time to schedule a support window and apply the fix to all your machines. From initial disclosure until significant numbers have been patched can easily take days - or even weeks. During that time, people will be actively exploiting the vulnerability.
On the other hand, by giving a pre-warning to the general public and coordinating the fix with distro maintainers in a closed mailing list, anyone who even remotely cares will be scheduling maintenance windows right when the deadline expires - and patches will be ready for immediate use. This significantly reduces the amount of time the vulnerability will be public without a patch being available for the general population.
It's of course a different story when it is a zero-day actively exploited in the wild already, but that doesn't seem to be the case here.
> On the other hand, by giving a pre-warning to the general public and coordinating the fix with distro maintainers in a closed mailing list, anyone who even remotely cares will be scheduling maintenance windows right when the deadline expires - and patches will be ready for immediate use.
It seems that one of the most productive positions for an intelligence agency to infiltrate is distro maintainer. They don't ever have to do anything suspicious: just do a great job maintaining the distro and quietly give the agency access to all these vulnerabilities under embargo.
(The details of the following depend on the nature of the flaw/exploit.)
I think a pre-announcement gives much more advantage to the population of defenders than to the population of attackers.
Attackers can move faster than most defenders, and they only need to find one weak link. There are also a lot of defenders in various states of readiness, while it only takes one attacker with the resources to spray the internet with the exploit for there to be a big problem.
How much faster will attackers be able to do anything because they know it's coming? Mostly only as long as it would have taken them to hear about it.
How much faster will defenders be able to do anything because they know it's coming? They can spend the next week making a list of things that need to be done and places that they'll need to deploy updates, so that when it's available they can act immediately and efficiently.
The risk that attackers will suddenly find the flaw after years because they were told "there's a flaw in cURL" seems low.
There is a risk that the details leak to attackers in advance of the release.
> The risk that attackers will suddenly find the flaw after years because they were told "there's a flaw in cURL" seems low.
I'm not so sure about that. I still understand why they're handling it this way, but this is bait: a big red bullseye, or a rainbow with a pot of gold at the bottom...
And now the race has started, with admins not being able to do anything. Anyone who knows of this vulnerability has enough time for a last hurrah to exploit it as much as possible.
I suppose it's theoretically possible someone was hoarding this as a zero-day and may decide to more actively exploit it before it gets patched. Except of course that they don't know which precise vulnerability it is.
Also, what I keep considering is who has insider access and how that information might leak... The fix must be known to at least some curl developers. Will they leak it or not? Or will anyone who receives it early...
It most definitely does. You know it's going to be patched. You no longer have to tiptoe around to conceal the problem. This can be the difference between snooping a bit of data here and there and just straight up dumping the contents of entire servers.
Of course this depends on the vulnerability itself. But knowing that a vulnerability will be patched can be hugely interesting and worthwhile information.
I don't agree. As an admin I can cordon off systems which might be exploited until the fix is released. If there's nothing to exploit, how can you exploit it?
Sure, you can. But do you think Slack can? Can Google just take down their entire fleet? Servers are an essential part of the world functioning, and curl is such a foundational library that it's almost certain to be used in a large part of existing servers.
I mean, I did just put it on the calendar with a note to update and deploy... so yeah, kind of a digital finger ready to push the button as a result of this post...
This only works for curl itself. But how many programs use curl or libcurl, and how many of those won't get an update?
It's good to know beforehand so you can check which software in your stack will be affected and take precautions if parts of it don't get an update fast enough.
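One cheap way to do that check (a sketch using libcurl's curl_version_info() API): ask the libcurl your binary actually links, since bundled copies can differ from the system one.

```c
/* Print the version of the libcurl this process really linked against. */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_version_info_data *d = curl_version_info(CURLVERSION_NOW);
    /* version_num is 0xXXYYZZ: major, minor, patch */
    printf("runtime libcurl: %s (0x%06x)\n", d->version, d->version_num);
    return 0;
}
```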
It really is insane how much you have to tiptoe around tech circles just to say anything that isn't part of the colloquial circlejerk.
What you're saying is the approach any competent software company takes to managing vulnerabilities. There's zero reason to publish prior notice that there's a flaw, because it would cause panic and create opportunities to exploit the flaw before there's a fix. This is the whole premise of 'responsible disclosure', and why every company wants security researchers to abide by it.
The only logical conclusion I can draw here is that curl's notice is not responsible.
Security engineer here; for context, I manage a very small number of servers that don't really matter too much. Having the notice means that I see it on HN before I need to patch; that's massively handy.
I don't want to run updates on cron because I feel the risks may outweigh the benefits in some cases, and if this extends to other implementations (php curl, etc.) then I doubt vuln scanners would pick it up.
Not every company has infinite resources, and security notices are a firehose.
Sure, this gives bad actors more of a chance to tee up staff to hit this thing, but it gives the competent but under-resourced blue teamers a chance too.
Edit: I upvoted you btw and would encourage others to consider this also. I think your opinion is a valid perspective and conversation provoking which iirc is the point of votes - I’d rather not see HN fall into an echo chamber hive-mind, if it’s not already too late.
I don't think the comment above you is a valid perspective considering that it does not appear to be a 0-day vulnerability and there is no evidence of it being used in the wild. The information he provided is IMO not enough to craft an exploit out of. Yes, now there is a giant bullseye on cURL and maybe the bad guys will start looking hard at it, but cURL has always been a widely distributed software that needs to interact with the unsafe world (the internet), so I would imagine attackers have been looking at it for a while already. So far he hasn't revealed critical information such as when the vulnerability was introduced and exactly what area it affects, which would have helped a potential attacker narrow down the scope.
So I think it's just fear mongering to say suddenly people will craft exploits because of this notice. Like, if they are so good at finding the exploit then they probably would have found it a long time ago already given the lack of useful intel here.
Actually, the approach taken by curl is the best of all worlds: they give minimal information to attackers ("there is a bug"), and they maximize the amount of people that know a critical fix will be required, with a specific date for when the fix will be there.
The more traditional way of releasing the fix and the detailed description of the vulnerability at the same time is strictly worse. It's a very slight improvement for people who monitor these news (attackers don't get to find out there is some issue they could look for), but at a massive cost to those who don't monitor these news as often (attackers know exactly how and what to exploit before they find out).
Both of those things are true. Many CVEs are unimportant, but serious security vulnerabilities do exist, and are very hard to avoid entirely when writing C.
The CVE severity scores kind of make sense in the abstract (and Daniel's post clearly shows that even this isn't always the case), but many companies have an inelastic, inflexible, unthinking approach to them, which really frustrates our effort to prioritize the actually relevant stuff.
A CVSS 10 on a log4j library sitting unused in a folder, shipped with an app that isn't even running, should not have priority over an unauthenticated RCE on an internet-facing service without even a WAF in front of it. But hey, that's only a 9.2. Try having this discussion with an auditor. (I don't want to lump all auditors together: in ~12 years of collaboration I've met some excellent ones, typically the ones we lose after a short time because they're wasted on us. And then there are those who just want to see a documented risk acceptance and will happily tolerate some criminally insecure or stupid shit.)
> And then there are those who just want to see a documented risk acceptance and will happily tolerate some criminally insecure or stupid shit
The job of an auditor isn't to make you secure; it's to make sure you aren't lying about implementing your security policies. If your policies are stupid, all they are going to do is ensure you follow your stupid policies.
Even memory-safe languages do not technically eliminate all memory issues, but they significantly reduce the amount of code you need to audit for them.
There is no magical way of removing all memory security issues, but you can definitely reduce the chance of one occurring by picking one of those memory-safe languages.
The less time you have to spend checking memory allocations in every piece of code, the more time you can spend checking the business logic for logical errors.
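For illustration, this is the shape of "C mistake" in question. A made-up example, not the actual curl bug; a bounds-checked language would refuse to compile this or would trap the out-of-range write at runtime:

```c
#include <stdlib.h>
#include <string.h>

/* illustrative heap overflow: attacker-controlled `len` vs a fixed buffer */
void copy_name(const char *untrusted, size_t len) {
    char *buf = malloc(16);
    if (!buf)
        return;
    /* missing check: len may exceed the 16-byte allocation */
    memcpy(buf, untrusted, len);
    buf[len] = '\0'; /* and even len == 16 writes one byte past the end */
    free(buf);
}
```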
That blog post is talking about something else, though. He's saying that the CVE system does not do a good job and lets people drum up severity for drama. That's a generic issue he has with the process. You can have severe vulnerabilities in code written in C, in JavaScript, even in Rust, or arising from simple configuration mistakes; that's not what he's complaining about.
In fact, him calling this vulnerability HIGH is part of the point. If every single bug or vulnerability is "high" severity, then nothing really is. It's only when you use the rating where it makes sense that it carries the proper impact.
In this networked world, it really is a terrible language; there is no excuse for it.
The only authority this program should have is network access, some compute time and permission to create and write to one or more files. Nothing more.
Though this is where almost all of our currently popular programming languages and operating systems are failing. They are fundamentally broken. Just on account of security, monolithic kernels are a terrible idea. And sandboxing hasn't even been an afterthought in most languages and virtual machines. Even on the hardware level, secure compartmentalization and access mechanisms are a joke.
That would be a good start, because it would allow one to "hollow out the attack surface" - a great concept I've encountered in the erights community. Primitives that allow one to gradually secure a system in the future when the need arises.
Though unfortunately it doesn't provide the level of expressivity and flexibility that a full capability security architecture would.
The world has a great deal to learn from the likes of KeyKOS, seL4 and Genode; I don't see any of these issues going away without their adoption (or at least the adoption of their ideas in other systems).
Those are two separate things, though. C is more vulnerable than Rust, but either way we should properly sandbox our applications even if they are written in a memory-safe language like Rust (which is not infallible, just safer).
One issue: if cURL is allowed to write to "one or more files", how do you prevent it from writing to a key configuration file (or some other sensitive file with lots of downstream effects), or from writing a Bash script that could launch further attacks?
Edit: Just because it seems pertinent, I noticed the line "50% of past curl vulnerabilities are 'C mistakes'" in the slides linked by the post above.
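On the "one or more files" question: this is exactly what capability-style primitives pin down. A minimal, OpenBSD-specific sketch using unveil(2) and pledge(2), with a placeholder download directory; not something curl necessarily does today:

```c
#include <err.h>
#include <unistd.h>

int main(void) {
    /* only this directory can be read/written/created; config files,
     * shells, and the rest of the filesystem simply stop existing */
    if (unveil("/var/downloads", "rwc") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1) /* lock the unveil list */
        err(1, "unveil lock");
    /* keep stdio, DNS, outbound sockets, and file access under the
     * unveiled paths; any other syscall now kills the process */
    if (pledge("stdio inet dns rpath wpath cpath", NULL) == -1)
        err(1, "pledge");
    /* ... perform the curl-like work here ... */
    return 0;
}
```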
Why hasn't Apple rewritten libtiff, libpng, libjpeg, libwebp, etc. in Swift?
Their flagship moneymaker keeps getting popped via these, and they have thousands of engineers and a memory-safe first-party language. The zero-click from a few weeks ago relied on a chain, the second most important link of which (CVE-2023-41064) was in libwebp. (The most important was a kernel privilege escalation. XNU is C and C++, of course.)
I really can't imagine that writing performant replacements for these libraries would be that daunting a task for them, and it would permanently shut down an entire class of repeated, ongoing vulnerabilities. I really don't understand why Apple relies on 3p code for format parsing/decoding when it has proven over and over again to be a source of brand damage.
I'm gonna guess Apple has considered the ROI on this; the negative publicity can't be good for them, but in the context of Apple I'd somewhat agree. I consider iOS probably more secure than desktop operating systems, and it holds the crown jewels of my life.
Curl is maintained by a much smaller set of people, and is delivered for free.
I'm talking about use in Apple's first-party apps, like PassKit/Wallet, which is how the zero-click happened recently. Apple gets to choose what codecs PassKit uses.
They also use them in Safari, AFAIK.
I'm also pretty sure most consumers of them are using them via ImageIO, which is under Apple's exclusive control.
Swift is probably not quite good enough for that type of library, actually. There was a lot of hype about it being an everything-language, but I'm not sure it actually works for a low-level graphics decoding library. For example, a core internal library of SwiftUI, ActionGraph, is written in C++, and SwiftUI is a pretty new thing!
Also, Apple can be slow to move to new languages internally; there is a lot of stuff still in Objective-C, and it will stay that way for many, many years.
Let’s wait and see what the vulnerability is. Maybe it doesn’t have anything to do with the pitfalls of C. But if it does, expect to read a lot more comments like this.
I wonder whether we'll ever get to a point where the kernel, the drivers, and the userland software are all written in memory-safe languages, possibly with other safe mechanisms and abstractions thrown in, and yet have it become mainstream and as popular as Linux is now.
Might take decades of work, though, and probably nobody cares enough for something like that.
Android has been replacing a lot of core components with Rust and other memory safe languages. The Asahi team built a GPU driver in Rust recently. Seems like things are moving in the right direction.
Android makes frequent use of unsafe, but in their blog post they claim an unsafe Rust line has never caused a memory issue. Because unsafe blocks are such a small and focused portion of the code, every unsafe line can receive full scrutiny.
Google has actually audited, using cargo-vet, every crate that ChromiumOS and Fuchsia depend on that contains unsafe. They also have some additional rules around cryptographic algorithms. I'm pretty surprised they haven't done the same for the Rust usage in Android. https://github.com/google/rust-crate-audits
Kernels and device drivers have to read and write from hardware registers. Doing so is fundamentally "unsafe", by Rust's definition. Hardware is a big bag of external state which can often be mutated external to any software running on the CPU. It's a device driver's job to abstract away this unsafe interface and (ideally) try to present a safe one.
That's not to say there aren't benefits to using languages other than C for this stuff. But a Rust kernel will necessarily rely on `unsafe` blocks to do its job.
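For concreteness, this is the kind of access meant. A sketch with a made-up register address and bit layout; in Rust, the equivalent dereference would have to live in an `unsafe` block:

```c
#include <stdint.h>

/* hypothetical memory-mapped UART status register */
#define UART_STATUS ((volatile uint32_t *)0x4000F000u)
#define TX_READY    (1u << 5)

void wait_for_tx(void) {
    /* the hardware flips this bit behind the compiler's back; `volatile`
     * forces a fresh load each pass -- nothing here is provably safe */
    while ((*UART_STATUS & TX_READY) == 0)
        ;
}
```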
All software handling untrusted input should really be sandboxed. Even if curl were written in a language that prioritises memory safety, there would still be plenty of opportunity for harmful, exploitable bugs to be introduced.
This is correct (assuming the same facetiousness as the GP comment), and it even extends to algorithmic oversights and hardware errors. Your threads not stepping on each other's toes and your memory boundaries being automatically checked doesn't make you invulnerable.