A case against security nihilism (cryptographyengineering.com)
468 points by feross on July 20, 2021 | 332 comments



Just the other day I suggested using a yubikey, and someone linked me to the Titan side channel where researchers demonstrated that, with persistent physical access and a dozen hours of work, they could break the guarantees of a Titan chip[0]. They said "an attacker will just steal it". The researchers, on the other hand, stressed how fundamentally difficult this was to pull off due to the very limited attack surface.

This is the sort of absolutism that is so pointless.

At the same time, what's equally frustrating to me is defense without a threat model. "We'll randomize this value so it's harder to guess" without asking who's guessing, how often they can guess, how you'll randomize it, how you'll keep it a secret, etc. "Defense in depth" has become a nonsense term.

The use of memory unsafe languages for parsing untrusted input is just wild. I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'll also link this talk[1], for the millionth time. It's Rob Joyce, chief of the NSA's TAO, talking about how to make NSA's TAO's job harder.

[0] https://arstechnica.com/information-technology/2021/01/hacke...

[1] https://www.youtube.com/watch?v=bDJb8WOJYdA


I'll conclude with a philosophical note about software design: Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well.

-cperciva, http://www.daemonology.net/blog/2009-06-24-encrypt-then-mac....
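
To make the encrypt-then-MAC point concrete, here is a minimal Rust sketch of the receive path. The "cipher" and "MAC" below are deliberately toy stand-ins (not real cryptography); the load-bearing part is the ordering: the tag is verified, in constant time, before the decryption code ever touches attacker-controlled bytes.

    // Toy stand-ins: toy_mac and toy_cipher are NOT real cryptography.
    fn toy_mac(key: &[u8], data: &[u8]) -> [u8; 8] {
        let mut out = [0u8; 8];
        for (i, b) in key.iter().chain(data).enumerate() {
            out[i % 8] = out[i % 8].wrapping_add(*b).rotate_left(3);
        }
        out
    }

    // Compare without early return, so timing doesn't leak how many bytes matched.
    fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
        a.len() == b.len() && a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
    }

    // Toy "cipher": XOR with a repeating key (again, illustration only).
    fn toy_cipher(key: &[u8], data: &[u8]) -> Vec<u8> {
        data.iter().zip(key.iter().cycle()).map(|(d, k)| d ^ k).collect()
    }

    // Encrypt-then-MAC receive path: the (possibly buggy) decryption code never
    // sees attacker-controlled input unless the MAC already checked out.
    fn open_message(enc_key: &[u8], mac_key: &[u8], ct: &[u8], tag: &[u8]) -> Option<Vec<u8>> {
        if !constant_time_eq(&toy_mac(mac_key, ct), tag) {
            return None; // reject before decrypting anything
        }
        Some(toy_cipher(enc_key, ct))
    }

    fn main() {
        let (ek, mk) = (b"enc-key".as_slice(), b"mac-key".as_slice());
        let ct = toy_cipher(ek, b"hello"); // XOR is its own inverse
        let tag = toy_mac(mk, &ct);
        assert_eq!(open_message(ek, mk, &ct, &tag).as_deref(), Some(b"hello".as_slice()));
        assert!(open_message(ek, mk, &ct, &[0u8; 8]).is_none()); // bad tag: rejected unopened
    }

In a real system you would use an AEAD or a vetted HMAC implementation instead of these placeholders; the verify-before-decrypt ordering and the constant-time comparison are the details the quote is about.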


I've worked as a structural engineer (EIT) on bridges and buildings in Canada before getting bored and moving back into software.

There are major differences in designing bridges and in crafting code. So many, in fact, that it is difficult to even know where to start. But with that proviso, I think the concept of safety versus the concept of security is one that so many people conflate. We design bridges to be safe against the elements. Sure, there are 1000 year storms but we know what we're designing for and it is fundamentally an economic activity. We design these things to fail at some regularity because to do otherwise would require an over-investment of resources.

Security isn't like safety because the attack scales up with the value of compromising the target. For example, when someone starts a new social network and hashes passwords, the strength of their algorithm may be just fine, but once they have millions of users it may become worthwhile for attackers to invest in dedicated cracking hardware or other means of attacking those salted hashes.

Security is an arms race. That's why we're having so much trouble securing these systems. A flood doesn't care how strong your bridge is, or where it is most vulnerable.


Aside: The distinction between safety and security I know:

- safety is "the system cannot harm the environment"

- security is the inverse: "the environment cannot harm the system"

To me, your distinction has to do with the particular attacker model - both sides are security (under these definitions).


That's an interesting distinction, but I think GP meant something else - and I'm willing to agree with their view:

- Safety is a PvE game[0] - your system gets "attacked" by non-sentient factors, like weather, animals, or people having an accident. The strength of an attack can be estimated as a distribution, and that estimate remains fixed (or at least changes predictably) over time. Floods don't get monotonically stronger over the years[1], animals don't grow razor-sharp titanium teeth, accidents don't become more peculiar over time.

- Security is a PvP game - your system is being attacked by other sentient beings, capable of both carefully planning and making decisions on the fly. The strength of the attack is unbounded, and roughly proportional to how much the attacker could gain from breaching your system. The set of attackers, the revenue[2] from an attack, the cost of performing it - all change over time, and you don't control it.

These two types of threats call for a completely different approach.

Most physical engineering systems are predominantly concerned with safety - with PvE scenarios. Most software systems connected to the Internet are primarily concerned with security - PvP. A PvE scenario in software engineering is ensuring your intern can't accidentally delete the production database, or that you don't get state-changing API requests indexed by web crawlers, or that an operator clicking the mouse wrong won't irradiate their patient.

--

[0] - PvE = "Player vs Environment"; PvP = "Player vs Player".

[1] - Climate change notwithstanding; see: estimate changing predictably.

[2] - Broadly understood. It may not be about the money, but it can still be easily approximated in dollars.


I wonder how this distinction plays out in languages that use the same word for safety and security, e.g. German and Portuguese.


You would use "protection" (Schutz) to make this distinction. Also German verbs can have many suffixes, which often help with the direction of an action and thereby changing the meaning (e.g. sichern, absichern, besichern, versichern).


suffixes -> prefixes


So it's like building a bridge... that needs to constantly withstand thousands of anonymous, usually untraceable, and always evolving terrorist attacks.


...in which the attackers have free access to copies of the bridge where they can silently test attack strategies millions of times per second for months or years on end.

The safety vs security distinction made above is fundamental. Developers are faced with solving an entire class of problems that is barely addressed by the rest of the engineering disciplines.


> where they can silently test attack strategies millions of times per second for months or years on end

Remotely, anonymously, at virtually no risk to themselves.


And then, when they finally perfect their technique, they can just sell or give away the plan to other people in an instant, who can then put it into practice almost for free, against any compatible bridge they like.


But it's also a case where "perfect" exists. A case where you can, in principle, have perfect information about the internals of your bridge at any point. A case where you can, in theory, design the bridge to handle an infinite load from above.

In software, you can spec the behavior of your program. And then it is possible to code to that exact spec. It is also possible, with encryption and stuff, to write specs that are safe even when malicious parties have control over certain parts.

This is not to say that writing such specs is easy, nor that coding to an exact spec is easy. Heck, I would even doubt that it is possible to do either thing consistently. My point is, the challenge is a lot harder. But the tools available are a lot stronger.

It's not a lost cause just because the challenge is so much harder.


That kind of perfect is possible in math but not in software, which runs on physical machines and was written and verified by humans. It's like building your bridge inside a vacuum chamber with no entrances or exits—possible but not practical.


Or climate change.


I agree that safety & security are frequently conflated, but I don't think the important aspect is that there's no analogy between IT & construction.

IT safety = construction safety. What kind of cracks/bumps does your bridge/building have, can it handle increase in car volume over time, lots of new appliances put extra load on the foundation etc. IT safety is very similar in that way.

IT security = physical infrastructure security. Is your construction safe from active malicious attacks/vandalism? Generally we give up on vandalism from a physical security perspective in cities - spray paint tagging is pretty much everywhere. Similarly, crime is generally a problem that's not solvable & we try to manage. There's also large scale terrorist attacks that can & do happen from time to time.

There are of course many nuanced differences because no analogy is perfect, but I think the main tangible difference is that one is in the physical space while the other is in the virtual space. Virtual space doesn't operate the same way because the limits are different. Attackers can easily maintain anonymity, attackers can replicate an attack easily without additional effort/cost on their part, attackers can purchase "blueprints" for an attack that are basically the same thing as the attack itself, attacks can be carried out at a distance, & there are many strong financial motives for carrying out the attack. The financial motive is particularly important because it funds the ever-growing arms race between offence & defense. In the physical space this kind of race is only visible in nation states whereas in the virtual space both nation states & private actors participate in this race.

Similarly, that's why IT development is a bit different from construction. Changing a blueprint in virtual space is nearly identical to changing the actual "building" itself & the cost is several orders of magnitude lower than it would be in physical space. Larger software projects are cheaper because we can build reusable components that have tests that ensure certain behaviors of the code & then we rerun them in various environments to make sure our assumptions still hold. We can also more easily simulate behavior in the real world before we actually ship to production. In the physical space you have to do that testing upfront to qualify a part. Then if you need a new part, you're sharing less of the design whereas in virtual space you can share largely the same design (or even the exact same design) across very different environments. & there's no simulation - you build & patch, but you generally don't change your foundation once you've built half the building.


Also, pointing your (or anyone's) finger at the already overworked and exploited engineers in many countries is just abysmal in my opinion. It's not an engineer's decision what the deadlines for finishing a piece of software are. Countless companies are controlled by business people. So point your finger at them, because they are the ones who don't give a flying f*&% whether the software is secure or not. We engineers are very well aware of both the need for and the implications of security. So this kind of name-shaming must be stopped by the security community, now and forever, in my opinion.


I’ve found that software, among other engineering disciplines, is uniquely managed as a manufacturing line rather than a creative art. In the other disciplines, the difference between these phases of the project is much more explicit.


I think this quote is fundamentally wrong and intentionally misleading. The equivalent question would be "can we find any cracks on it?" Which makes complete sense. And in fact it is frequently asked during inspections. Just like the security flaw question should be asked in the same vein.


This is one of the best examples I’ve ever seen supporting the claim that analogies aren’t reasoning.

Edit: apparently elaboration is in order. In mechanical engineering one deals with smooth functions. A small error results in a small propensity for failure. Software meanwhile is discrete, so a small error can result in a disproportionately large failure. Indeed getting a thousandth of a percent of a program wrong could cause total failure. No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong. In software the margin of error is literally undefined behavior.
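
To make the "small error, total failure" point concrete, here is a sketch with an invented one-character bug. In C the same slip reads past the buffer and is undefined behavior; in a memory-safe language it is still a bug, but it fails as a deterministic panic rather than silently corrupting memory:

    // BUG: `<=` should be `<`. One character separates correct code from reading
    // one element past the end of the buffer.
    fn sum(xs: &[i32]) -> i32 {
        let mut total = 0;
        let mut i = 0;
        while i <= xs.len() {
            total += xs[i]; // panics with index-out-of-bounds on the final iteration
            i += 1;
        }
        total
    }

    fn main() {
        println!("{}", sum(&[1, 2, 3])); // aborts loudly instead of reading stray memory
    }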


>No bridge ever collapsed because the engineer got a thousandth of a percent of the building material’s properties wrong.

Perhaps not with building properties, but very small errors can cause catastrophic failure.

One of the most famous ones would be the Hyatt Regency collapse, where a contractor accidentally doubled the load on a walkway connection because he used two shorter rods attached to the top and bottom of the supporting beam, rather than a single longer rod that passed through it.

https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...

In electrical engineering, it's very common to have ICs that function as a microcontroller at 5.5V, and an egg cooker at 5.6V.

Microsoft lost hundreds of millions of dollars repairing the original Xbox 360 because the solder on the GPU cracked under thermal stress.

It's definitely not to the same extreme as software, but tiny errors do have catastrophic consequences in physical systems too.


From the GP comment:

> Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera)

I am not a mechanical engineer, but none of these examples look like smooth functions to me. I would expect that an unexpectedly high wind can cause your structure to move in way that is not covered by your model at all, at which point it could just show a sudden non-linear response to the event.


They are smooth in that they are continuously differentiable.


> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust and just think way, way less about this.

I'm beginning to worry that every time Rust is mentioned as a solution for every memory-unsafe operation we're moving towards an irrational exuberance about how much value that safety really has over time. Maybe let's not jump too enthusiastically onto that bandwagon.


Not just memory safety. Rust also prevents data races in concurrent programs. And there are a few more things too.
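
As a minimal sketch of what "prevents data races" means in practice: the shared counter below has to go through something like Arc<Mutex<_>> (or an atomic); handing the threads a plain mutable reference to it instead is a compile error, not a bug you find in production.

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared mutable state must go through a type that makes synchronization explicit.
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..10)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        *counter.lock().unwrap() += 1;
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }

        // Deterministic: no lost updates, no torn reads.
        assert_eq!(*counter.lock().unwrap(), 10_000);
    }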

But these tricks have the same root: What if we used all this research academics have been writing about for decades, improvements to the State of the Art, ideas which exist in toy languages nobody uses -- but we actually industrialise them so we can use the resulting language for Firefox and Linux not just get a paper into a prestigious journal or conference?

If ten years from now everybody is writing their low-level code in a memory safe new C++ epoch, or in Zig, that wouldn't astonish me at all. Rust is nice, I like Rust, lots of people like Rust, but there are other people who noticed this was a good idea and are doing it. The idea is much better than Rust is. If you can't do Rust but you can do this idea, you should.

If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

Imagine it's 1995, you have just seen an Internet streaming radio station demonstrated, using RealAudio.

Is RealAudio the future? In 25 years will everybody be using RealAudio? No, it turns out they will not. But, is this all just stupid hype for nothing? Er no. In 25 years everybody will understand what an "Internet streaming radio station" would be, they just aren't using RealAudio, the actual technology they use might be MPEG audio layer III aka MP3 (which exists in 1995 but is little known) or it might be something else, they do not care.


> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

It's 26 years after Java was released. Java has largely been the main competitor to C++. I don't see C++ going away nor do I see C going away. And it's almost always a mistake to lump C and C++ developers together. There is rarely an intersection between the two.

I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.


> I think you do not understand how short 10 years is. There are tons of people still running computers on Sandy Bridge.

Ten years is about the time since C++ 11. I may be wrong, but I do not regret my estimate.


Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain. But I agree with your overall point, and would add that you could replace Zig with any one of the dozens of popular memory safe languages, even old standbys like Java. The point is not to migrate to one language in particular, but rather to migrate to languages in which memory errors are compiler bugs instead of application bugs.


Zig isn't memory safe but it's still leaps and bounds above C.

I have to admire the practicality of the approach they've been taking.


I could have sworn I'd read that Zig's ambition was to be memory safe. Given ten years I don't find that impossible. Indeed I gave C++ the same benefit of the doubt on that timeline. But, when I just searched I couldn't find whatever I'd seen before on that topic.


The Zig approach is "memory safe in practice" vs "memory safe in theory". They don't have any aspirations to total memory safety like Rust, but they want to get most of the way there with a lot less overhead.

Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.

Additionally the approach they've taken to allocators means that you can use special allocators for testing that can perform even more checks, including leak detection.

I think it's a great idea and a really interesting approach but it's definitely not as rigorous as what Rust provides.


I don't think Zig is going to be memory safe in practice, unless they add a GC or introduce a Rust-like system. All of the mitigations I've seen come from that language--for example, quarantine--are things that we've had for years in hardened memory allocators for C++ like Chromium PartitionAlloc [1] and GrapheneOS hardened_malloc [2]. These have been great mitigations, but have not been effective in achieving memory safety.

Put another way: Anything you could do in the malloc/free model that Zig uses right now is something you could do in C++, or C for that matter. Maybe there's some super-hardened malloc design yet to be found that achieves memory safety in practice for C++. But we've been looking for decades and haven't found such a thing--except for one family of techniques broadly known as garbage collection (which, IMO, should be on the table for systems programming; Chromium did it as part of the Oilpan project and it works well there).

There is always a temptation to think "mitigations will eliminate bugs this time around"! But, frankly, at this point I feel that pushing mitigations as a viable alternative to memory safety for new code is dangerous (as opposed to pushing mitigations for existing code, which is very valuable work). We've been developing mitigations for 40 years and they have not eliminated the vulnerabilities. There's little reason to think that if we just try harder we will succeed.

[1]: https://chromium.googlesource.com/chromium/src/+/HEAD/base/a...

[2]: https://github.com/GrapheneOS/hardened_malloc


You understand "memory safe in practice" as soundly eliminating all memory safety issues. This is not how I understand it. Zig can exceed Rust's memory safety in practice without soundly eliminating all issues. The reason is that many codebases rely on unsafe code, and finding problems in Zig can be cheaper than finding problems in Rust w/ unsafe. This is even more pronounced when we look at security overall because while many security issues are memory safety issues, many aren't (and most aren't use-after-free bugs); in other words, it's certainly possible that paying to eliminate all use-after-free harms security more than just catching much of it more cheaply. So there is no doubt that Rust programs that don't use unsafe will have fewer use-after-free bugs than Zig programs, but it is very doubtful that they will, on average, be more secure as a result of this tradeoff.


> Basically they have a lot of runtime checks enabled in debug mode, where you do the majority of your testing, that are then disabled in the release binary.

But there's the problem: Testing can't and won't cover all inputs that a malicious attacker will try [1]. Now you've tested all inputs you can think of with runtime checks enabled, you release your software without runtime checks, and you can be sure that some hacker will find a way to exploit a memory bug in your code.

[1] Except for very thorough fuzzing. Maybe. If you're lucky. But probably not.
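
For what it's worth, standing up that kind of fuzzing is cheap these days. A minimal cargo-fuzz target looks roughly like this (my_parser::parse is a placeholder for whatever function handles the untrusted input):

    // fuzz/fuzz_targets/parse.rs
    #![no_main]
    use libfuzzer_sys::fuzz_target;

    fuzz_target!(|data: &[u8]| {
        // The fuzzer mutates `data` and flags panics, crashes, and (with
        // sanitizers enabled) memory errors in whatever the parser touches.
        let _ = my_parser::parse(data);
    });

Run it with "cargo fuzz run parse" and let it soak; it still won't prove the absence of bugs, but it exercises far more hostile inputs than hand-written tests ever will.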


It's not “memory safe in practice”. It's “we provide tools so that our memory-unsafe language has as few memory issues as possible”. Is it better than what C or C++ offer out of the box: yes. It's totally reasonable to think that it may be as good as C or C++ with state-of-the-art tooling that most programmers aren't using today because they don't want to invest the effort, so this is a big improvement over C.

But this shouldn't be called “memory safety”.


> Well, Zig isn't memory safe (as implemented today; they could add a GC), so it's not a good example of a Rust alternative in this domain.

While the first part of the sentence is mostly true (although the intention is to make safe Zig memory safe, and unsafe Rust isn't safe either), the second isn't. The goal isn't to use a safe language, but to use a language that best reduces certain problems. The claim that the best way to reduce memory safety problems is by completely eliminating all of them regardless of type and regardless of cost is neither established nor sensible. Zig completely eliminates overflows, and, instead of paying the cost of eliminating use-after-free, makes detecting and correcting it, and other problems, easier.


I remain unconvinced that race proof programs are nearly as big a deal as memory safety. Many classes of applications can tolerate panics and it's not a safety or security issue. I don't worry about a parser or server in go like I would in C.

(I realize that racing threads can cause logic based security issues. I've never seen a traditional memory exploit from on racing goroutines though.)


A Race in Go is Undefined Behaviour. All bets are off, whatever happens, no matter how strange, is OK.

If you have a race which definitely only touches some simple value like an int and nothing more complicated then Go may be able to promise your problem isn't more widespread - that value is ruined, you can't trust that it makes any sense (now, in the future, or previously), but everything else remains on the up-and-up. However, the moment something complicated is touched by a race, you lose, your program has no defined meaning whatsoever.


Of course, but when talking about security, a race in Go would be very hard to exploit.

It is a different story in languages meant to run untrusted code of course.


I've seen Qt Creator segfault due to the CMake plugin doing some strange QStringList operations on an inconsistent "implicitly shared" collection, that I guess broke due to multithreading (though I'm not sure exactly what happened). In RSS Guard, performing two different "background sync" operations causes two different threads to touch the same list collections, producing a segfault. (These are due to multiple threads touching the same collection/pointers; racing on primitive values is probably less directly going to lead to memory unsafety.)

Apparently in Golang, you can achieve memory unsafety through data races: https://blog.stalkr.net/2015/04/golang-data-races-to-break-m... (though I'm not sure if a workaround has been added to prevent memory unsafety).


I take that bet about C being pretty prominent in 10 years from now.

A language and a memory access model are no panacea. 10 years is like the day after tomorrow in many industries.


> also prevents data races in concurrent programs.

I have another neat trick to avoid races. Just write single threaded programs. Whenever you think you need another thread, you either don't need it, or you need another program.


You do realize that data races can happen between multiple programs as well, when shared resources are used? Which is pretty much a requirement for many things.


Yes, and rust can't prevent those.


It can prevent data races in memory shared between processes in the same way it can prevent them in memory shared between threads. Data race prevention isn't built into the Rust language, it is constructed using a combination of the borrow checker and the type system.


As I understand the borrow checker, it wouldn't detect races that result from the interaction of separate processes, since that would be out of the bounds of the compilation unit. But my knowledge is limited in this, so I maybe wrong.


Rust has no concept of a process, same as it has no concept of a thread. So you'd build a safe abstraction for sharing memory across processes the same way you do today with threads.


> If ten years from now people are writing unsafe C and C++ like it's still somehow OK, that would be crazy.

I mean to be clear, modern C++ can be effectively as safe as rust is. It requires some discipline and code review, but I can construct a tool-chain and libraries that will tell me about memory violations just as well as rust will. Better even in some ways.

I think people don't realize just how much modern C++ has changed.


It must have changed a shitload in the last 2-3 years if that's the case. What tools are you referring to? I'm pretty familiar with C++ tooling but I haven't paid attention for a little while.


The modern standard library, plus some helpers, is the big part of it. Compiler warnings as errors are very good at capturing bad situations if you follow the rules (e.g. don't allow people to just make raw pointers, follow the rule of five). I never said it was as easy to do as in rust.

As for tooling, things like valgrind provide an excellent mechanism for ensuring that the program was memory safe, even in its "unsafe" areas or when calling into external libraries (something that rust can't provide without similar tools anyway).

My broader point is that safety is more than just a compiler saying "ok you did it", though that certainly helps. I would trust well written safety focused C++ over Rust. On the other hand, I would trust randomly written Rust over C++. Rust is good for raising the lower end of the bar, but not really the top of it unless paired with a culture and ecosystem of safety focus around the language.


Address sanitizers and valgrind tell you whether your program did something unsafe while you were analysing, but they can't tell you whether the program can do something unsafe.

Since we're in a thread about security this is a crucial difference. I'm sure Amy, Bob, Charlie and Deb were able to use the new version of Puppy Simulator successfully for hours without any sign of unsafety in the sanitizer. Good to go. Unfortunately, the program was unsafe and Evil Emma had no problem finding a way to attack it. Amy, Bob, Charlie and Deb had no reason to try naming a Puppy in Puppy Simulator with 256 NUL characters, so, they didn't, but Emma did and now she's got Administrator rights on your server. Oops.

In contrast safe Rust is actually safe. Not just "It was safe in my tests" but it's just safe.

Even though it might seem like this doesn't buy you anything when of course fundamental stuff must use unsafe somewhere, the safe/unsafe boundary does end up buying you something by clearly delineating responsibility.

For example, sometimes in the Rust community you will see developers saying they had to use unsafe because, alas, the stupid compiler won't optimise the safe version of their code properly. For example it has a stupid bounds check they don't need, so they used "unsafe" to avoid that. But surprisingly often, another programmer looks at their "clever" use of unsafe, and it turns out they did need that bounds check they got rid of: their code is unsafe for some parameters.
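
A contrived sketch of that pattern (the names are invented, this isn't any particular crate's code):

    /// Safe version: a bad `n` is a clean panic.
    fn sum_prefix(xs: &[u32], n: usize) -> u32 {
        xs[..n].iter().sum()
    }

    /// "Optimized" version: the author promises n <= xs.len(), the compiler can't
    /// check it, and a caller passing n = xs.len() + 1 is undefined behavior.
    fn sum_prefix_unchecked(xs: &[u32], n: usize) -> u32 {
        let mut total = 0;
        for i in 0..n {
            // SAFETY: requires i < xs.len(); nothing here actually enforces that.
            total += unsafe { *xs.get_unchecked(i) };
        }
        total
    }

    fn main() {
        let data = [1u32, 2, 3];
        assert_eq!(sum_prefix(&data, 3), 6);
        assert_eq!(sum_prefix_unchecked(&data, 3), 6);
        // sum_prefix(&data, 4) panics; sum_prefix_unchecked(&data, 4) is UB.
    }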

For example just like the C++ standard vector, Rust's Vec is a wasteful solution for a dozen integers or whatever, it does a heap allocation, it has all this logic for growing and shrinking - I don't need that for a dozen integers. There are at least two Rust "small vector" replacements. One of them makes liberal use of "unsafe" arguing that it is needed to go a little faster. The other is entirely safe. Guess which one has had numerous safety bugs.... right.

Over in the C++ world, if you do this sort of thing, the developer comes back saying duh, of course my function will cause mayhem if you give it unreasonable parameters, that was your fault - and maybe they update the documentation or maybe they don't bother. But in Rust we've got this nice clean line in the sand, that function is unsafe, if you can't do better label it "unsafe" so that it can't be called from safe code.

This discipline doesn't exist in C++. The spirit is willing but the flesh (well, the language syntax in this case) is too weak. Everything is always potentially unsafe and you are never more than one mistake from disaster.


> This discipline doesn't exist in C++.

And this is where the argument breaks down for me. The C++ vector class can be just as safe if people are disciplined. And as you even described, people in rust can write "unsafe" and do whatever they want anyway to introduce bugs.

The language doesn't really seem to matter at the end of the day from what you are telling me (and that's my main argument).

With the right template libraries (including many parts of the modern C++ STL) you can get the same warnings you can from Rust. One just makes you chant "unsafe" to get around it. But a code review should tell off any developer doing something unsafe in either language. C++ with only "safe" templates is just as "actually safe" as rust is (except with a better recovery solution than panics!).


What’s with the backlash against Rust? It literally is “just another language”. It’s not the best tool for every job, but it happens to be exceptionally good at this kind of problem. Don’t you think it’s a good thing to use the right tool for the job?


It is good to keep in mind that the Rust language still has lots of trade-offs. Security is only one aspect addressed by Rust (another is speed), and hence it is not the most "secure" language.

For example, in garbage collected languages the programmer does not need to think about memory management all the time, and therefore they can think more about security issues. Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand. And this can be problematic even if Rust solves every security bug in the class of (for instance) buffer overflows.

If you want secure, better use a suitable GC'ed language. If you want fast and reasonably secure, then you could use Rust.


A thing to remember about GC is that it solves only one very important resource. Memory.

If your program loses track of which file handles are open, which database transactions are committed, which network sockets are connected, GC does not help you at all for those resources, when you are low on heap the system automatically looks for some garbage to get rid of, but when you are low on network sockets, the best it could try is hope that cleaning up garbage disconnects some of them for you.

Rust's lifetime tracking doesn't care why we are tracking the lifetime of each object. Maybe it just uses heap memory, but maybe it's a database transaction or a network socket. Either way though, at lifetime expiry it gets dropped, and that's where the resource gets cleaned up.

There are objects where that isn't good enough, but the vast majority of cases, and far more than under a GC, are solved by Rust's Drop trait.
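
A small sketch of the point, with an invented stand-in resource (the same shape applies to a file handle, a socket, or a transaction guard):

    // The cleanup logic lives with the type and runs when the value goes out of scope.
    struct Connection {
        id: u32,
    }

    impl Connection {
        fn open(id: u32) -> Connection {
            println!("connection {id} opened");
            Connection { id }
        }

        fn send(&self, msg: &str) {
            println!("connection {} sent: {msg}", self.id);
        }
    }

    impl Drop for Connection {
        fn drop(&mut self) {
            // Runs on every exit path -- normal return, early return, or panic unwind --
            // so the resource can't be forgotten the way an optional close() call can.
            println!("connection {} closed", self.id);
        }
    }

    fn main() {
        let conn = Connection::open(1);
        conn.send("hello");
    } // `conn` goes out of scope here and Drop::drop closes it.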


High-level languages can provide abstractions, though, that manage object life cycles for you to a degree, for example dependency injection frameworks like Spring.

Not disagreeing, just mentioning.


And many languages also provide convenient syntax for acquiring and releasing a resource for a dynamic extent (Java try-with-resources, C# `using`, Python `with`, etc.), which cover the majority of use cases.


Yes, but these features are usually optional. Library users can easily forget to use them and neither library authors nor the compiler can do anything to enforce it.

The brilliant thing about RAII-style resource management is that library authors can define what happens at the end of an object's lifetime and the Rust compiler enforces the use of lifetimes.


I agree that RAII is superior, but it’s not true that compilers and library authors can’t do anything to enforce proper usage of Drop-able types in GC’d languages. C# has compiler extensions that verify IDisposables are used with the using statement, for example. Granted, this becomes a problem once you start writing functions that pass around disposable types.


It's not like Rust were the only or even the best language in solving the problems you mentioned. It might be the best performance focused / low-level language though.


It's not the best language for solving this type of problems? What (kind of) language would you say is even better for that?


Actually safe languages, for example. Pony guarantees all three safeties (memory, type and concurrency), whilst in Rust it's only a plan, just not implemented. Stack overflows, type unsafety, deadlocks. POSIX-compatible stdlib.

Concurrent Pascal or Singularity also fit the bill, with actual operating systems being written in it.


Any language that offers some kind of effect system that has support for brackets and cancelation, for example Haskell or Scala.

There isn't even specific language support necessary, it's on the library level.


> Rust's typesystem, on the other hand, can really get in the way and make code more opaque and more difficult to understand.

I don't disagree with the premise of your post, which is that time spent on X takes away from time spent on security. I'll just say that I have not had the experience, as a professional rust engineer for a few years now, that Rust slows me down at all compared to GC'd languages. Not even a little.

In fact, I regret not choosing Rust for more of our product, because the productivity benefits are massive. Our rust code is radically more stable, better instrumented, better tested, easier to work with, etc.


I don't think this is a good take. Go, Java, Rust, Python, Swift; they all basically eliminate the bug class we're talking about. The rest is ergonomics, which are subjective.

"Don't use Rust because it is GC'd" is a take that I think basically nobody working on memory safety (either as a platform concern or as a general software engineering concern) would agree with.


It's unusually or suspiciously "hyped". Not to the extent as the other sibling exaggerated to, but enough for it to be noticeable and to rub people the wrong way, myself included. It rubs me the wrong way because something feels off about the way it's hyped/pushed/promoted. It's like the new javascript in the programming world. And if we allow it (like we did with JS), it'll overtake way too much mindshare with the unfortunate detriment and neglect of all others.


> What’s with the backlash against Rust?

What's with the hyping of Rust as the Holy Grail as the solution to everything not including P=NP and The Halting Problem?


No serious and good programmer is hyping Rust as the "Holy Grail". You are seeing things due to an obvious negative bias. Link me 100x HN comments proving your point if you like but they still mean nothing. I've worked with Rust devs for a few years and all were extremely grounded and practical people who arrived at working with it after a thorough analysis of the merits of a number of technologies. No evangelizing to be found.

Most security bugs/holes have been related to buffer [over|under]flows. Statistically speaking, it makes sense to use a language that eliminates those bugs by the mere virtue of the program compiling. Do you disagree with that?


Nobody seriously thinks it's "Rust" that's the silver bullet either; they just believe memory-safe languages are. There are a bunch of them to choose from. We hear about Rust because it works in a bunch of high-profile cases that other languages have problems with, but there's no reason the entire iMessage stack couldn't have been written in Swift.


Fair. Two further thoughts:

1. Rust also has other safety features that may be relevant to your interests. It is Data Race Free. If your existing safe-but-slow language offers concurrency (and it might not) it almost certainly just tells you that all bets are off if you have a Data Race, which means complicated concurrent programs exhibit mysterious hard-to-debug issues -- and that puts you off choosing concurrency unless it's a need-to-have for a project. But with Data Race Freedom this doesn't happen. Your concurrent Rust programs just have normal bugs that don't hurt your brain when you think about them, so you feel free to pick "concurrency" as a feature any time it helps.

2. The big surface area of iMessage is partly driven by Parsing Untrusted File Formats. You could decide to rewrite everything in Rust, or, more plausibly, Swift. But this is the exact problem WUFFS is intended to solve.

WUFFS is narrowly targeted at explaining safely how to parse Untrusted File Formats. It makes Rust look positively carefree. You say this byte from the format is an 8-bit unsigned integer? OK. And you want to add it to this other byte that's an 8-bit unsigned integer? You need to sit down and patiently explain to WUFFS whether you understand the result should be a 16-bit unsigned integer, or whether you mean for this to wrap around modulo 256, or if you actually are promising that the sum is never greater than 255.
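
For comparison, the same three answers spelled out with Rust's integer APIs (this is only an analogy, not WUFFS syntax):

    fn add_bytes(a: u8, b: u8) {
        // 1. "The result should be a 16-bit unsigned integer": widen first, cannot overflow.
        let widened: u16 = u16::from(a) + u16::from(b);

        // 2. "I mean for this to wrap around modulo 256": say so explicitly.
        let wrapped: u8 = a.wrapping_add(b);

        // 3. "I promise the sum is never greater than 255": then handle the case
        //    where that promise is broken, instead of silently getting it wrong.
        let checked: Option<u8> = a.checked_add(b);

        println!("{widened} {wrapped} {checked:?}");
    }

    fn main() {
        add_bytes(200, 100); // prints: 300 44 None
    }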

WUFFS isn't in the same "market" as Rust, its "Hello, world." program doesn't even print Hello, World. Because it can't. Why would parsing an Untrusted File Format ever do that? It shouldn't, so WUFFS can't. That's the philosophy iMessage or similar apps need for this problem. NSO up against WUFFS instead of whatever an intern cooked up in C last week to parse the latest "must have" format would be a very different story.


Totally. I said Rust because I write Rust. Like, that's (part of) my job. Rust is no more memory safe (to my knowledge) than Swift, Java, C#, etc.

I also said "way, way less" not "not at all". I still think about memory safety in our Rust programs, I just don't allocate time to address it (today) specifically.


If we include data race safety in the definition of memory safety (which it ultimately is), then Rust is safer than any commonly used garbage collected language with access to multithreading, including Swift, Java and C#.


This is a RESF trope. We do not include Rust's notion of data race safety in the definition of memory safety as it is used in security. Not all bugs are created equal.


If you would have mentioned those other languages in your original post, it might have amplified your valuable and important point even better, rather than triggering some readers effectively accusing you of shilling.

I don’t mean this in a very critical spirit, though.

Communication is really hard - especially in a large setting where not everyone reads you in the same context, and not everyone means well.

On balance, your post was valuable to me!


I mentioned Rust because I write Rust professionally. If I wrote Java professionally, as I used to, I would have said "java". So you're probably correct that I could preempt stupid people's posts, but I don't care about the dregs of HN reading into my very clear, simple statement, just because they're upset about rust or whatever. It's just not worth it to me.

I'm glad the post was of value to you. The talk is really good and I think more people should read it.


I hear you, and it’s your prerogative to choose how much to invest in reducing the attack surface for your communication.

On the other hand, you could choose to think about communications in an analogous way to your code, both being subject to attack by bad actors trying to subvert your good intentions.

So, the argument could be made, that removing attack surface from communication is analogous to hardening your code.

I also come from a coding background (albeit a long time ago) and, with the help of some well-meaning bosses, over time eventually came to realize that my messages could gain more influence by reducing unnecessary attack surface. - Doesn’t mean I always get it right, even now - but I am aware and generally try hard to do just that.


Yep, I definitely get what you're saying and strategic communication is totally worthwhile (I'm a CEO, the value is absolutely not lost on me). It's just not something I prioritize on HN, that's just the personal call I make.


fair enough! :-)


> So, the argument could be made, that removing attack surface from communication is analogous to hardening your code.

That's true, but this is one of the cases where obtaining the last 5-10% of clarity might require 90% of the total effort.

Now whether one actually already has plucked all the low-hanging fruit in their own communication and if it's already good -- that's a separate discussion.


Agreed. I was simply mostly addressing this person's obvious beef with Rust.


It also doesn't help that Rust has this addictive nature and once you tasted your first major Rust program and tamed the borrow checker, you will want to keep using it everywhere. And that's the reason why people keep looking around to rewrite something in Rust. It's in the same category as any other banned drug :)


That has not been my experience.


I like what tptacek wrote in the sibling comment. IIUC Rust keeps getting mentioned as "the" memory-safe language because it's generally as fast as C. And it's mainly C and C++ that are memory-unsafe. So Rust is a good language to combat the argument of speed (that's often interchangeable with profits in the business world, especially if security issues have a flat rate of cyber insurance).


I'm a security professional so it's based on being an experienced expert, not some sort of hype or misplaced enthusiasm.


The article we are commenting on is about targeted no-interaction exploitation of tens of thousands of high profile devices. I think this is one of the areas where there is a very clear safety value (not just theoretical).


Whole classes of bugs -- the most common class of security-related bugs in C-family languages -- just go away in safe Rust with few to no drawbacks. What's irrational about the exuberance here? Rust is a massive improvement over the status quo we can't afford not to take advantage of.


> how much value that safety really has over time

Billions and billions of dollars. Large organizations like Microsoft and Google have published numbers on the proportion of vulns in their software that are caused by memory errors. As you can imagine, a lot of effort is spent within these institutions to try to mitigate this risk (world class fuzzing, static analysis, and pentesting) yet vulns continue to persist.

Rust is not the solution. Memory-safe languages are. It is just that there aren't many such languages that can compete with C++ when it comes to speed (Rust and Swift are the big ones) so Rust gets mentioned a lot to preempt the "but I gotta go fast" concerns.


… it is a solution for every memory-unsafe operation, though?


No. Rust cannot magically avoid memory-unsafe operations when you have to deal with, well, memory. If I throw a byte stream at you and tell you it is formatted like so and so, you have to work with memory and you will create memory bugs.

It can however make it extremely difficult to exploit and it can make such use cases very esoteric (and easier to implement correctly).


That's totally untrue, unless you are using a really weird definition of "memory safety". A rust program that doesn't make use of the unsafe keyword will not have memory safety bugs. We've had programming languages for decades that are able to happily process arbitrary bytestreams with incredibly buggy code without ever actually writing to a memory region not reachable through pointers allocated by the ordinary program execution.

A Java program can't write over the return address on the stack.
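
To illustrate, a hedged sketch of what handling an untrusted byte stream looks like in safe Rust: a length-prefixed record where a lying length becomes an error, not an out-of-bounds read (the wire format and error type are invented for the example):

    #[derive(Debug)]
    struct ParseError;

    /// Parse `[1-byte length][payload...]` and return (payload, rest of input).
    fn parse_record(input: &[u8]) -> Result<(&[u8], &[u8]), ParseError> {
        let (&len, rest) = input.split_first().ok_or(ParseError)?;
        let len = usize::from(len);
        if rest.len() < len {
            // The sender claimed more bytes than exist; in safe code the worst
            // outcome is this error path, not a read past the end of the buffer.
            return Err(ParseError);
        }
        Ok(rest.split_at(len))
    }

    fn main() {
        let wire = [3u8, b'a', b'b', b'c', 0xFF];
        let (payload, rest) = parse_record(&wire).unwrap();
        assert_eq!(payload, b"abc");
        assert_eq!(rest, &[0xFFu8]);
        assert!(parse_record(&[250, 1, 2]).is_err()); // lying length is rejected
        assert!(parse_record(&[]).is_err());          // empty input is rejected
    }

You can still write logic bugs here (wrong field, wrong length semantics), which is the parent comment's fair point; what you can't write without unsafe is the read-past-the-buffer kind.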


> A rust program that doesn't make use of the unsafe keyword will not have memory safety bugs

https://www.cvedetails.com/vulnerability-list/vendor_id-1902...

What if the bug is in std?

What if I use a bugged Vec::from_iter?

What if I use the bugged zip implementation from std?

You'll probably blame unsafe functions, but those unsafe functions were in std, written by the people who know Rust better than anyone.

Imagine what you and I could do writing unsafe.

Imagine trusting a 3rd party library...


Sure, and the JVM can contain an exploitable buffer overrun.

We are on a thread about "a case against security nihilism".

1. Not all vulnerabilities are memory safety vulnerabilities. The idea that adopting memory safe languages will prevent all vulns is not only a strawman, but empirically incorrect since we've had memory safe languages for many decades.

2. It is the case that a tremendously large number of vulns are caused by memory safety errors and that transitioning away from memory-unsafe languages will be a large win for industry safety. 'unsafe' is a limitation of Rust, but compared to the monstrous gaping maw of eldritch horror that is C and C++, it is small potatoes.

3. You are going to struggle to write real programs without ever using third party code.


Then the same logic applies to Python and Java too? What if there is a bug in their internal implementations?

Rust strives for safety; safety is its number one priority. Regarding the unsafe in std, please read the source code just to see how careful they are with the implementation. They only use unsafe for performance, and even unsafe Rust doesn't give you that much freedom tbh.

The third-party thing you are referring to sounds childish. Those bugs are not the Rust language's fault tbh. If you don't trust a library, don't use it. It is as simple as that.

So I think it is fine to tell people that a Rust program without unsafe will not have memory safety bugs. Exceptions to this statement do occur, but they are rare.


>"A Java program can't write over the return address on the stack."

Could you say why Java is not susceptible to ROP?


ROP isn't the vulnerability, but instead the exploitation technique. "Memory safety errors" were around for decades before ROP was widely understood.

A Java program, by construction, cannot write to memory regions not allocated on the stack or pointed to by a field of an object constructed with "new". Runtime checks prevent ordinary sorts of problems and a careful memory model prevents fun with concurrency errors. There are interesting attacks against the Java Security Manager - but this is independent of memory safety.


Yes I'm well aware of buffer overflows/stack smashing. I was asking why Java wasn't susceptible to something like ROP.


All memory access in Java goes through fields or array offsets.

There are runtime checks around class structure that ensure that a field load cannot actually read some unexpected portion of memory.

There are runtime checks that ensure that you cannot read through a field on a deallocated object, even when using weakreference and therefore triggering a GC even while the program has access to that field.

There are runtime checks around array reads that ensure that you cannot access memory outside of the allocated bounds of the array.

I have no idea why "susceptible to something like ROP" is especially relevant here. ROP is not the same as "writing over the return address". ROP is a technique you use to get around non-executable data sections, and it happens after you abuse some memory safety error to write over the return address (or otherwise control a jump). It means "constructing an exploit via repeated jumps to already existing code rather than jumping into code written by the attacker".

But just for the record, Java does have security monitoring of the call stack that can ensure that you cannot return to a function that isn't on the call stack so even if you could change the return target the runtime can still detect this.


That's not memory-unsafety. Memory-safety means avoiding bugs like buffer overflow, ROP, etc.


Language absolutism.


There's literally zero evidence that a program written in Rust is actually practically safer than one written in C at the same scale. And there won't be any evidence of this for some time because no Rust program is as widely deployed as an equivalent highly used C program.


That's not true, actually. There is more than "literally zero" evidence. I don't feel like finding it for you, but at minimum Mozilla has published a case study showing that moving to Rust considerably reduced the memory safety issues they discovered. That's just one example, I believe there are others.

There are likely many other examples of, say, Java not having memory safety issues. Java makes very similar guarantees to Rust, so we can extrapolate, using common sense, that the findings roughly translate.

Common sense is a really powerful tool for these sorts of conversations. "Proof" and "evidence" are complex things, and yet the world goes on with assumptions that turn out to hold quite well.


Not sure what your last sentence means - without evidence, there are cases when we guess right, and those when we guess wrong. Are you just choosing to ignore the latter?

The Mozilla case study is not a real world study. It simply looks at the types of bugs that existed and says "I promise these wouldn't have existed if we had used Rust". Would Rust have introduced new bugs? Would there be an additional cost to using Rust? We don't know and probably never will. What we care about is preventing real world damage. Does Rust prevent real world damage? We have no idea.


> Not sure what your last sentence means - without evidence, there are cases when we guess right, and those when we guess wrong. Are you just choosing to ignore the latter?

What I'm saying is that truth is a matter of debate. We believe lots of things based on evidence much less rigorous than a formal proof in many cases - like most modern legal systems, which rely on various types of evidence, and then a jury that must form a consensus.

So saying "there is no evidence" is sort of missing the point. Safe Rust does not have memory safety issues, barring compiler bugs, therefor common sense as well as experience with other languages (Java, C#, etc), would show that that memory safety issues are likely to be far less common. Maybe that isn't the evidence that you're after, but I find that compelling.

To me, the question of "does rust improve upon memory safety relative to C/C++" is obvious to the point that it really doesn't require justification, but that's just me.

I could try to find more evidence, but I'm not sure what would convince you. There's people fuzzing rust code and finding far fewer relevant vulns - but you could find that that's not compelling, or whatever.


I’d wager Dropbox’s Magic Pocket is up there with equivalent C/C++ based I/O / SAN stacks:

https://dropbox.tech/infrastructure/extending-magic-pocket-i...


There's still a lot of macho resistance to using safe languages, because "I can write secure code in C!"

"You" probably can. I can too. That's not the point.

What happens when the code has been worked on by other people? What happens after a few dozen pull requests are merged? What happens when it's ported to other platforms with different endian-ness or pointer sizes or hacked in a late night death march session to fix some bug or add some feature that has to ship tomorrow? What happens when someone accidentally deletes some braces with an editor's refactor feature, turning a "for { foo(); bar(); baz(); }" into a "for foo(); bar(); baz();"?

That's how bugs creep in, and the nice thing about safe languages is that the bugs that creep in are either caught by the compiler or result in a clean failure at runtime instead of exploitable undefined behavior.

Speed is no longer a good argument. Rust is within a few percent of C performance if you code with an eye to efficiency, and if you really need something to be as high-performance as possible, code just that one thing in C (or ASM) and code the rest in Rust. You can also use unsafe to squeeze out performance if you must, sparingly.

Oh and "but it has unsafe!" is also a non-argument. The point of unsafe is that you can trivially search a code base and audit every use of it. Of course it's easy to search for unsafe code in C and C++ too... because all of it is!

If we wrote most things and especially things like parsers and network protocols in Rust, Go, Swift, or some other safe language we'd get rid of a ton of low-hanging fruit in the form of memory and logic error attack vectors.


> "You" probably can. I can too. That's not the point.

I'm not even sure that's true. I do agree with you that the argument that you need to hire other people is more convincing, but I'd wager that no single human on the planet can actually write a vuln-free parser of any complexity in C on their first attempt - even if handed the best tools that the model checking community has to offer.

Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.


>Macho is the best word to describe it. It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++.

It reminds me a little of some of the free-wheeling nuclear physicists in the Manhattan Project - probably some of the smartest people on the planet - being hubristically lax with safety: https://en.wikipedia.org/wiki/Demon_core#Second_incident

>[...] The experimenter needed to maintain a slight separation between the reflector halves in order to stay below criticality. The standard protocol was to use shims between the halves, as allowing them to close completely could result in the instantaneous formation of a critical mass and a lethal power excursion.

>Under Slotin's own unapproved protocol, the shims were not used and the only thing preventing the closure was the blade of a standard flat-tipped screwdriver manipulated in Slotin's other hand. Slotin, who was given to bravado, became the local expert, performing the test on almost a dozen occasions, often in his trademark blue jeans and cowboy boots, in front of a roomful of observers. Enrico Fermi reportedly told Slotin and others they would be "dead within a year" if they continued performing the test in that manner. Scientists referred to this flirting with the possibility of a nuclear chain reaction as "tickling the dragon's tail", based on a remark by physicist Richard Feynman, who compared the experiments to "tickling the tail of a sleeping dragon".

>On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while he was lowering the top reflector, allowing the reflector to fall into place around the core. Instantly, there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing an intense burst of neutron radiation estimated to have lasted about a half second. Slotin quickly twisted his wrist, flipping the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation, while Slotin's reaction prevented a recurrence and ended the accident. The position of Slotin's body over the apparatus also shielded the others from much of the neutron radiation, but he received a lethal dose of 1,000 rad (10 Gy) neutron and 114 rad (1.14 Gy) gamma radiation in under a second and died nine days later from acute radiation poisoning.


Beat me to it. The macho effect is there for sure, but on what grounds do you claim you can write secure C? As far as I know, you can't really prove anything about C unless you severely restrict the language, and those restrictions include pointer usage. So at best, you can do a hand-wavy read through code and have some vague notion of its behaviour.


It depends on the size of the parser. As they get big and complex I would start to agree with you.



It doesn't do enough. It's so low level that you have to run another OS on top of it. So all it does is provide a virtual machine. Typically people load Linux on top, which means you have all the security holes of Linux. You just get to run a few copies of Linux, possibly at different security levels.

I would have liked to see a secure QNX as a mainstream OS. The microkernel is about 60Kb, and it offers a POSIX API. All drivers, file systems, networking, etc. are in user space. You pay about 10%-20% overhead for message passing. You get some of that back because you have good message passing available, instead of using HTTP for interprocess communication.


i was responding to the claim "It is sheer ego that would cause anybody to say that they can feasibly write a safe program in C or C++". of course, the feasibility part is questionable.


It was written by top experts of the field through multiple years and is formally verified. It could have been written in brainfuck as well, since at that point the language is not important.


"on their first attempt" is part of that sentence.


I have used a Yubikey for years. Nothing is perfect, but as you mentioned, the only hacks of them have been with persistent physical access, or somehow getting the end user to hit the activate button tens of thousands of times.

On any system, if you give an attacker physical access to the device, you are done. Just assume that. If your Yubikey lives in your wallet, or on your key chain, and you only activate it when you need it, it is highly unlikely that anyone is going to crack it.

As far as physical device access goes, my last employer maintained a 'garage' of laptops and phones for employees traveling to about a half dozen countries. If you were going there, you left your corporate laptop and phone in the US and took one of these 'travel' devices with you for your trip. Back home, those devices were never allowed to connect to the corporate network. When you handed them in, they were wiped and inspected, but IT assumed that they were still compromised.

Lastly, a Yubikey, as a second factor, is supposed to be part of a layered defense, basically forcing the attacker to hack both your password and your Yubikey.

It bugs me that people don't understand how important two factor auth is, and also how crazy weak SMS access codes are.


This depends...

I've had an argument here about SMS for 2FA... Someone said that SMS for 2FA is broken because some companies misuse it for 1FA (e.g. for password resets)... but in essence, simple SMS verification solves 99.9% of the issues with e.g. password leaks and password reuse.

No security solution is perfect, but using a solution that works 99% of the time is still better than no security at all (or just one factor).


I'm pretty sure I've written on HN before that SMS 2FA doesn't do much against phishing, which we know is a big problem, but worse it creates a false reassurance.

The user doesn't correctly reason that the bank sent them this legitimate SMS 2FA message because a scammer is now logging into their account; they assume it's because this is the real bank site they've reached via the phishing email, and therefore their concern that it seemed maybe fake was unfounded.


But the scammer needs the username, the password, and to phish the user... that is still more than just username+password (which could be reused from e.g. LinkedIn, Adobe, or any of the other hacked sites). And if the scammers run a phishing attack, they can get the OTP from the user's app in the same way they would get the code from an SMS.


The phisher needs to know your phone number though to do that.


Why would the phisher need to know your phone number? Once you've clicked the link in the email and are on the phisher's website, they can just trigger the 2FA SMS through the bank's own login flow, display a 2fa prompt on the phishing site, then relay the credential on their end.

This isn't unique to SMS, obviously, since the same attack scenario works against e.g. a TOTP from a phone app.


Of course. I was thinking man in the middle, but it is not needed here.

Edit: thinking about it, without a man in the middle the phisher can log in, but cannot make transfers (assuming the SMS shows what transfer is being authorized). Still bad enough.


Crooks also thrive on confusion†. We can and should make software more robust against getting confused by bad guys, but Grannie we can't do much about.

So alas, even if on every previous transaction, Grannie was told, "Please read the SMS carefully and only fill out the code if the transfer is correctly described", she may not be suspicious when this time the bank (actually a phishing site) explains, "Due to a technical fault, the SMS may indicate that you are authorising a transfer. Please disregard that". Oops.

† e.g. some modern "refund" scams involve a step where the poor user believes they "slipped" and entered a larger number than they meant to, but actually the bad guys made the number bigger, the user is less suspicious of the rest of the transaction because they believe their agency set the wheels in motion.


For anyone who's fresh to cyber security, the fundamental axiom is that anything can be cracked; it's only a matter of computation (time times resources). Just as the dose makes the poison (sola dosis facit venenum).

Suppose you have a secret that is RSA-encrypted: with the kind of computers we have now, cracking it might take three hundred trillion years according to Wikipedia. Obviously the secret would have lost its value long before then, and the resources required to crack it would be worth more than the secret itself. Even with quantum computing we are still looking at 20+ years, which is long enough for most secrets; you have plenty of time to change it, or it will have lost its value by then. So we say that's secure enough.
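The arithmetic behind that kind of claim is easy to sanity-check yourself. A back-of-the-envelope sketch (not the RSA figure above - just the simpler case of brute-forcing a 128-bit symmetric key, at an assumed 10^12 guesses per second):

    fn main() {
        let guesses = 2f64.powi(128);      // worst case: try every 128-bit key
        let rate = 1e12;                   // assumed guesses per second
        let seconds_per_year = 3.156e7;
        let years = guesses / rate / seconds_per_year;
        println!("{years:.2e} years");     // ~1.1e19 years, far past "who cares"
    }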


From the video: "Cloud computing is really a fancy name for someone else's computer."

He goes on to discuss the expansion of "trust boundaries".

Big Tech: Use our computers, please!


> The use of memory unsafe languages for parsing untrusted input is just wild.

I think some of the vulnerabilities have been found in image file format or PDF parsing libraries. These are huge codebases that you can't just rewrite in another language.

At the same time, Apple is investing huge amounts of resources into making their (and everyone elses) code more secure. Xcode/clang includes a static analyzer that catches a lot of errors in unsafe languages, and they include a lot of "sanitizers" that try to catch problems like data races etc.

And finally, they introduced a new, much safer programming language that prevents a lot of common errors, and as far as I can tell they are taking a lot of inspiration from Rust.

So it's not like Apple isn't trying to improve things.


These are stopgaps, not long term solutions.

MSan has a nontrivial performance hit and is a problem to deploy on all the code behind a performance-critical service. Static analysis can find some issues, but any sound static analysis of a C++ program will quickly report false positives out the wazoo. Whole-program static analysis (which you need to avoid those false positives) is also a nightmare for C++ due to the single-translation-unit compilation model.

All of the big companies are spending a lot of time and money trying to make systems better with the existing legacy languages and this is necessary today because they have so much code and you can't just YOLO and run a converter tool to convert millions and millions of lines of code to Rust. But it is very clear that this does not just straight up prevent the issue completely like using a safe language.


I was with you until the parsing with memory unsafe languages. Isn’t that exactly the kind of “random security not based on a threat model” type comment you so rightly criticised in the first half of your comment?


I think you must have misunderstood the point the parent comment was trying to make. Memory-safety issues are responsible for a majority of real-world vulnerabilities. They are probably the most prevalent extant threat in the entire software ecosystem.


Buffer overflows are common in CVEs because it's the kind of thing programmers are very familiar with. But I'm pretty sure that in terms of real-world exploits things like SQL injection, cross-site scripting, authentication logic bugs, etc, are still far more common. Almost all of those are in bespoke, proprietary software. A Facebook XSS exploit doesn't get a CVE.


First Microsoft, then two different teams at Google, and then Mozilla, and then someone else, all found that roughly 70% of security vulnerabilities reported in their products are due to memory unsafety issues. That roughly the same number keeps coming up across the biggest companies in our industry lends it some weight.

Here's the first Microsoft one: https://www.zdnet.com/article/microsoft-70-percent-of-all-se...

And Chrome: https://www.zdnet.com/article/chrome-70-of-all-security-bugs...


Yes, I'm well aware of what the data says, as well as what the data is measuring--CVEs and bug reports in well-known C/C++/Java projects.

But not too long ago, before SaaS, social media, etc, displaced phpBB, WordPress, and other open source platforms, things like SQL injection reigned supreme even in the reported data. Back then CVEs more closely represented the state of deployed, forward-facing software. But now the bulk of this software is proprietary, bespoke, and opaque--literally and to vulnerability data collection and analysis.

How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows? Some, like Stuxnet, but they're considered exceptionally complex; and even in Stuxnet buffer overflows were just one of several different classes of exploits chained together.

Bad attackers are usually pursuing sensitive, confidential data. Access to most data is protected by often poorly written logic in otherwise memory-safe languages.


> How many of the large state-sponsored penetrations (i.e. the ones we're most likely to hear about) used buffer overflows?

It really depends on the target. If you’re attacking a website, then sure, you’re more likely to find vulnerability classes like XSS that can exist in memory-safe code. When you’re talking about client-side exploits like the ones used by NSO Group, though, almost all of them use memory corruption vulnerabilities of some sort. (That doesn’t only include buffer overflows; use-after-free vulnerabilities seem to be the most common ones these days.)


SQL Injection is a good lesson here. How is it mitigated effectively? By telling devs to write code carefully? No. It is mitigated by prepared statement libraries that are structurally resistant to SQL Injection. Similarly, "here are some static analysis tools - try your best to write safe C" is not a winning move.
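For anyone who hasn't seen it, the structural fix looks roughly like this (a sketch assuming the rusqlite crate; any prepared-statement API has the same shape). The untrusted value is bound as data and never concatenated into the SQL text, so there's nothing for an injection payload to escape:

    use rusqlite::{params, Connection, Result};

    // Look a user up by a name that came from untrusted input.
    fn find_user(conn: &Connection, name: &str) -> Result<Option<i64>> {
        // `?1` is a bind parameter; `name` is passed separately as data.
        let mut stmt = conn.prepare("SELECT id FROM users WHERE name = ?1")?;
        let mut rows = stmt.query(params![name])?;
        match rows.next()? {
            Some(row) => row.get(0).map(Some),
            None => Ok(None),
        }
    }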


The thing is, SQL injection and cross-site scripting are both trivial to defend against — at least compared to memory safety. Each has a small surface area, and most frameworks do help with them, or at least it is within their realm of possibility.

Preventing buffer overruns requires language-level support.


My understanding was that while some of these numbers are about CVEs and such, not all are. The Microsoft numbers, for instance, are from across all of their products, proprietary and open source.


It may sound like I’m being snarky, but I’m not:

Don’t users / social engineering make up the actual majority of real-world vulnerabilities, and pose the most prevalent extant threat in the entire software ecosystem?


Yes, but I think that within the context of discussing a memory safety vulnerability in a text messaging app it's reasonable to talk about memory safe parsers, no?

Beyond that, I've already addressed phishing at our company, it just didn't seem worth pointing out.


A fair point, but that's not really a problem with the technology. (And I did hedge with "probably" :-)


Based on the hundreds, perhaps thousands of critical vulnerabilities that are due directly to parsing user input in memory-unsafe languages, usually resulting in remote code execution, how's this for a threat model: attacker can send crafted input that contains machine code that subsequently runs with the privileges of the process parsing the input. That's bad.


The attack surface is the parser. The ability to access it is arbitrary. I can't build a threat model beyond that for any specific case, but in the case of a text messaging app I absolutely expect "attacker can text you" to be in your threat model.


There are very few threat models that a memory unsafe parser does not break.

Even the "unskilled attacker trying other people's vulns" threat basically depends on the existence of memory-safety related vulnerabilities.


Then we’re right back in the checklist mentality of “500 things secure apps never do”. I could talk to somebody else and they’d tell me the real threat to worry about is phishing or poor CI/CD or insecure passwords or whatever.


There is no "real threat". Definitely phishing is one of the top threats to an organization, left unmitigated. Thankfully, we now have unphishable 2FA, so you can mitigate it. When you choose to prioritize a threat is going to be a call you have to make as the owner of your company's security posture - maybe phishing is above memory safety for you, I can't say.

What I can say is that parsing untrusted data in C is very risky. I can't say it is more risky than phishing for you, or more risky than anything else. I lack the context to do so.

That said, a really easy solution might be to just not do that. Just like... don't parse untrusted input in C. If that's hard for you, so be it, again I lack context. But that's my general advice - don't do it.


Inarguable these days.


I mean, the threat model is that:
1. Memory leaks/errors are bad
2. Programmers make those mistakes all the time
3. Using memory safe languages is cheap
Therefore,
4. We should use memory safe languages more often


The threat-model there is that the attacker controls the text that is parsed.


Add this language absolutism to the list of things we need to avoid.


“The use of memory unsafe languages for parsing untrusted input is just wild.” Indeed! The casualness of attitudes towards input validation continues to floor me. “computer science” is anything but :p


> I'm glad that I'm working in a time where I can build all of my parsers and attack surface in Rust

Can you though? Where/how are you deploying your Rust executables that isn't relying deeply on OS code written in "wild" "memory unsafe languages"?

I mean, I _guess_ it'd be possible to write everything from the NIC firmware all the way up through your network drivers and OS to ensure no untrusted input gets parsed before it hits your Rust code, but I doubt anyone except possibly niche academic projects or NSA/MOSSAD devs has ever done that...


Yeah I mean, 100%, I hate that I run my code on Linux, which I don't consider to be a well secured kernel. It's an unfortunate thing, but such is life.

But attackers have significantly less control over that layer. This is quite on topic with regards to security nihilism - my parser code being memory safe means that the code that's directly interfacing with attacker input is memory safe. Is the allocator under the hood memory safe? Nope, same with various other components - like my TCP stack. But again, attackers have a lot less control over that part of the stack, so while unfortunate, it's not my main concern.

I do hope to, in the future, leverage a much much more security optimized stack. I'd dive into details on how I intend to do that, but I think it's out of scope for this conversation.


> Just the other day I suggested using a yubikey

The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

Everybody defaults to a small number of security/identity providers because running the system is so stupidly painful. Hand a YubiKey to your CEO and their secretary. Make all access to corporate information require a YubiKey. They won't last a week.

We don't need better crypto. Crypto is good enough. What we need is better integration of crypto.


> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.

But what does this have to do with the FIDO authenticator?

At first I thought you said $100 per user, and I figured, wow, you are buying them all two Yubikeys, that's very generous. And then I realised you wrote "per month".

None of this costs anything "per month per user". You're buying some third party service, they charge whatever they like, this is the same as the argument when people said we can't have HTTPS Everywhere because my SSL certificate cost $100. No, you paid $100 for it, but it costs almost nothing.

I built WebAuthn enrollment and authentication for a vanity site to learn how it works. No problem, no $100 per month per user fees, just phishing proof authentication in one step, nice.

The integration doesn't get any better than this. Having watched a video today of people literally wrapping up stacks of cash to FedEx their money to scammers, I guess I shouldn't underestimate how dumb people can be, but really, even if you struggle with TOTP, do not worry: WebAuthn is easier than that as a user.


And how do I use my YubiKey to access mail if its not Gmail/Office365?

And how do I enroll all my employees into GitHub/GitLab?

And how do I recover when a YubiKey gets lost?

And how do I ...

Sure, I can do YubiKeys for myself with some amount of pain and a reasonable amount of money.

Once I start rolling secure access out to everybody in the company, suddenly it sucks. And someone spends all their time doing internal customer support for all the edge cases that nobody ever thinks about. This is fine if I have 10,000 employees and a huge IT staff--this is not so fine if I've got a couple dozen employees and no real IT staff.

That's what people like okta and auth0 (now bought by okta) charge so bloody much for. And why everybody basically defaults to Microsoft as an Identity Provider. etc.

Side note: Yes, I do hand YubiKeys out as trios--main use, backup use (you lost or destroyed your main one), and emergency use (oops--something is really wrong and the other two aren't working). And a non-trivial amount of services won't allow you to enroll multiple Yubikeys on the same account.


> And a non-trivial amount of services won't allow you to enroll multiple Yubikeys on the same account.

For WebAuthn (and its predecessor U2F) that "non-trivial" amount seems to be precisely AWS. The specification tells them to allow multiple devices to be enrolled but they don't do it.


> Hand a YubiKey to your CEO and their secretary.

Well, I'm the CEO lol so we have an advantage there.

> The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security.

Totally, this is a huge issue to me. I strongly believe that we need to start getting TPMs and hardware tokens into everyone's hands, for free - public schools should be required to give it to students when they tell them to turn in homework via some website, government organizations/ anyone who's FEDRAMP should have it mandated, etc. It's far too expensive today, totally agreed.

edit: Wait, per month? No no.

> We don't need better crypto.

FWIW the kicker with yubikeys isn't really anything with regards to cryptography, it's the fact that you can't extract the seed and that the FIDO2 protocols are highly resistant to phishing.


I am scared to death of rust.

It appears that if one uses it, one becomes evangelized to it, spreads the word "Praise Rust!", and so forth.

Anything so evangelized is met with strong skepticism here.


What scares me about Rust is that people put so much trust in it. And part of that is because of what you mention, the hype in other words.

I don't follow this carefully but even I have heard of at least one Rust project that when audited failed miserably. Not because of memory safety but because the programmer had made a bunch of rookie mistakes that senior programmers might be better at.

So in other words, Rust's hype is going to lead to a lot of rewrites and a lot of new software being written in Rust. And much of that software will have simple programming errors of the kind you can make in any language. So we're going to need a whole new wave of audits.


I recall a time at the grocery store, years ago. I wanted some sliced meat, but when I approached the counter a young woman was sweeping the floor.

Naturally, she was wearing gloves.

Seeing me, she grabbed the dustpan, threw away her sweepings, put the broom away, and was prepared to now serve me...

Still wearing the same gloves. Apparently magic gloves, for she was confused when I asked her to change them. She'd touched the broom, the dustpan, the floor, stuff in the dustpan, and the garbage. All within 20 seconds of me seeing her.

Proper procedure and an understanding of processes are far more effective than a misused tool.

Is rust mildly better than some languages? Maybe.

But it is not a balm for all issues, and as you say, replacing very well maintained codebases might result in suboptimal outcomes.


From the Ars reference: "There are some steep hurdles to clear for an attack to be successful. A hacker would first have to steal a target's account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning-were it ever to happen in the wild-would likely be done only by a nation-state pursuing its highest-value targets."

"only by a nation-state"

This ignores the possibility that the company selling the solution could itself easily defeat the solution.

Google, or another similarly-capitalised company that focuses on computers, could easily succeed in attacking these "user protections".

Further, anyone could potentially hire them to assist. What is to stop this if secrecy is preserved?

We know, for example, that Big Tech companies are motivated by money above all else, and, by-and-large, their revenue does not come from users. It comes from the ability to see into users' lives. Payments made by users for security keys are all but irrelevant when juxtaposed against advertising services revenue derived from personal data mining.

Google has an interest in putting users' minds at ease about the incredible security issues with computers connected to the internet 24/7. The last thing Google wants is for users to be more skeptical of using computers for personal matters that give insight to advertisers.

The comment on that Ars page is more realistic than the article.

Few people have a "nation-state" threat model, but many, many people have the "paying client of Big Tech" threat model.


Yes, if you don't trust Google don't use a key from Google. Is that what you're trying to say? If your threat model is Google don't buy your key from Google. Do I think that's probably a stupid waste of thought? Yes, I do. But it's totally legitimate if that's your threat model.


"But it's totally legitimate if that's your threat model."

Not mine. I have no plans to purchase a security key from Google. I have no threat model.

Nothing in the comment you replied to mentioned "trust" but since you raised the issue I did a search. It seems there are actually people commenting online who claim they do not trust Google; this has been going on for years. Can you believe it. Their CEO has called it out multiple times.^1 "[S]tupid waste of thought", as you call it. (That's not what I would call it.) It's everywhere.^2 The message to support.google and the response are quite entertaining.

1. For example, https://web.archive.org/web/20160601234401/http://allthingsd...

2.

https://support.google.com/googlenest/thread/14123369/what-i...

https://www.inc.com/jason-aten/google-is-absolutely-listenin...

https://www.consumerwatchdog.org/blog/people-dont-trust-goog...

https://www.wnd.com/2015/03/i-dont-trust-google-nor-should-y...

https://www.theguardian.com/technology/2020/jan/03/google-ex...

https://www.forbes.com/sites/kateoflahertyuk/2018/10/10/this...


> This ignores the possibility that the company selling the solution could itself easily defeat the solution.

How do you imagine this would work?

The "solution" here is just a cheap device that does mathematics. It's very clever mathematics but it's just mathematics.

I think you're imagining a lot of moving parts to the "solution" that don't exist.


All I am suggesting is that "hacker" as used by the Ars author could be a company, or backed by a company, and not necessarily a "nation-state". That is not far-fetched at all, IMO. The article makes it sound like "nation-states" are the only folks who could defeat the protection or would even have an interest in doing so. As the comment on the Ars page points out, that is ridiculous.

Assuming "hacker" could be a company what company would have such a motivation and resources to spy on people. The NSO's of the world, sure. Anyone else. Companies have better things to do than spy on people, right. Not anymore.

What about a company whose business is personal data mining, who goes so far as to sniff people's residential wifi (they lied about it at first when they got caught), collect audio via a "smart" thermostat (Nest), collect data from an "activity tracker" (FitBit), a "smartphone OS", a search engine, e-mail service, web analytics, etc., etc. Need I go on. I could fill up an entire page with all the different Google acquisitions and ways they are mining people's data.

Why are security keys any different. 9 out of 10 things Google sells or gives away are designed to facilitate data collection, but I guess this is the 1 in 10. "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising, but I suppose Google is different.

These companies want personal data. With the exception of Apple, they do not stay in business by selling physical products. Collecting data is what they do and they spend enormous amounts of time and effort doing it.

"That's all I know."


> That is not far-fetched at all, IMO.

The problem with your neat little model of the world is that it doesn't provide you with actionable predictions. Everything is a massive global conspiracy against you, nothing can be trusted, everybody is in on it, and so you can dismiss everything as just part of the charade, which feels good for a few moments, but still doesn't actually help you make any decisions at all.

> "Two-factor authentication" has already been abused by Facebook and Twitter where they were caught using the data for advertising

Right, I mean, if somebody really wanted to help provide working two factor authentication, they'd have to invent a device that offered phishing-proof authentication, didn't rely on sharing "secrets" that might be stolen by hackers, and all while not giving up any personal information and ensuring the user's identity can't be linked from one site to another. That device would look exactly like the FIDO Security Keys we're talking about... huh.

Actually no, if they weren't really part of a massive conspiracy against o8r3oFTZPE there would be one further thing, instead of only being from Google you could just buy these Security Keys from anybody and they'd work. Oh right.


They want more data/information. Today it is two factors. Tomorrow it will be three. You love your Big Tech. I get it.

But personal attacks are not cool. Keep it civil, please.


In what sense is it "more data" ? Did you know you can hook up a CRNG and just get endless streams of such "data" for almost nothing? If "they" just want "more data" they could do that all they like.

Earlier you gave the example of Facebook harvesting people's phone numbers. That's not just data that's information. But a Yubikey doesn't know your phone number, how much you weigh, where you live, what type of beer you drink... no information at all.

The genius thing about the FIDO Security Key design is figuring out how to make "Are you still you?" a question we can answer. Notice that it can't answer a question like "Who is this?". Your Yubikey has no idea that you're o8r3oFTZPE. But it does know it is still itself and it can prove that when prompted to do so.

And you might think, "Aha, but it can track me". Nope. It's a passive object unless activated, and it also doesn't have any coherent identity of its own, so sites can't even compare notes on who enrolled to discover that the same Yubikey was used. Your Yubikey can tell when it's being asked if it is still itself, but it needs a secret to do that and nobody else has the secret. All they can do is ask that narrow question, "Are you still you?".

Which of course is very narrowly the exact authentication problem we wanted to solve.
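If it helps, the core shape of that is plain challenge-response over a key pair. Here's a minimal sketch (assuming the ed25519-dalek and rand crates with key generation enabled; this is not the real CTAP2/WebAuthn wire format): the site keeps only a public key from enrollment, sends a fresh random challenge, and the token answers by signing it with a private key that never leaves the device.

    use ed25519_dalek::{Signature, Signer, SigningKey, Verifier, VerifyingKey};
    use rand::rngs::OsRng;
    use rand::RngCore;

    fn main() {
        // Enrollment: the token creates a key pair and hands over only the
        // public half. No phone number, no name, no shared secret.
        let token_key = SigningKey::generate(&mut OsRng);
        let site_copy: VerifyingKey = token_key.verifying_key();

        // Login: the site asks "are you still you?" with a fresh challenge...
        let mut challenge = [0u8; 32];
        OsRng.fill_bytes(&mut challenge);

        // ...and the token answers by signing it.
        let answer: Signature = token_key.sign(&challenge);

        // The site can verify the answer, but learns nothing else.
        assert!(site_copy.verify(&challenge, &answer).is_ok());
    }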


Who created that "problem we are trying to solve". It wasn't the user.

If the solution to the "problem" is giving increasingly more personal information to a tech company, that's not a great solution, IMO. Arguably, from the user's perspective, it's creating a new problem.

Most users are not going to purchase YubiKeys. It's not a matter of whether I use one, what I am concerned about is what other users are being coaxed into doing.

There are many problems with "authentication methods" but the one I'm referring to is giving escalating amounts of personal information to tech companies, even if it's under the guise "for the purpose of authentication" or argued to be a fair exchange for "free services". Obviously tech companies love "authenticating" users as it signals "real" ad targets.

The "tech" industry is riddled with conflicts of interest. That is a problem they are not even attempting to solve. Perhaps regulation is going to solve it for them.


> Who created that "problem we are trying to solve". It wasn't the user.

Sure it was, if you didn't want this problem you'd be fine with remaining anonymous and receiving only services that can be granted anonymously. I understand reading Hacker News doesn't require an account, and yet you've got one and are writing replies. So yes, you created the problem.

Now, Hacker News went with 1970s "password" authentication. Maybe you're good at memorising a separate long random password for each site, and so this doesn't really leak any information; it's just data. Lots of users seem to provide the names of pets, favourite sports teams, cultural icons, it's a bit of a mish-mash but certainly information of a sort.

In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?".


I think you misunderstood. I am not insisting anything about security keys (physical tokens) requiring escalating amounts of personal information. I am referring to "two-factor authentication" as it is promoted by "tech" companies (give us your mobile number so you can use our website or "increase your security"). Call me a tinfoil hat if you like, but I am skeptical^1 when the "solution" to "the problem of authentication" is giving ever-increasing amounts of information to Big Tech.

Regardless of intent, it seems very much in the spirit of trying to solve a complex problem by adding more complexity, a common theme I see in "tech".

There is nothing inherently wrong with the idea of "multi-factor authentication" (as I recall some customer-facing organisations were using physical tokens long before "Web 2.0") however in practice this concept is being (ab)used by web-based "tech" companies whose businesses rely on mining personal data. The fortuitous result for them being intake of more data/information relating to the lives of users, the obvious examples being email addresses and mobile phone numbers.

1. This is not an issue I came up with in a vacuum. It is shared by others. I once heard an "expert" interviewed on the subject of privacy describe exactly this issue.


> I think you misunderstood. I am not insisting anything about security keys

And yet here's a thread in which you did exactly that.


"In contrast, even though you keep insisting otherwise, Security Keys don't give "escalating amounts of personal information to tech companies" but instead no information at all, just that useful answer to the question, "Are you still you?"."

No, I am responding to the above assertion that I have insisted security keys give escalating amounts of personal information to "tech" companies.

This is incorrect. Most users do not have physical security tokens. But "tech" companies promote authentication without using physical tokens: 2FA using a mobile number.

What I am "insisting" is that "two-factor authentication" as promoted by tech campanies ("give us your mobile number because ...") has resulted in giving increasing amounts of personal information to tech companies. It has been misused; Facebook and Twitter were both caught using phone numbers for advertising purposes. There was recently a massive leak of something like 550 million Facebook accounts, many including telephone numbers. How many of those numbers were submitted to Facebook under the belief they were needed for "authentication" and "security". I am also suggesting that this "multi-factor authentication" could potentially increase to more than two factors. Thus, users would be giving increasing amounts of personal information to "tech" companies "for the purposes of authentication". That creates additional risk and, as we have seen, the information has in fact been misused. This is not an idea I came up with; others have stated it publicly.


Whilst you're clearly much more comfortable with your "Facebook are bad" line, the problem is that this isn't the thread about how Facebook are good actually, this thread was about your completely bogus claim about Security Keys:

> This ignores the possibility that the company selling the solution could itself easily defeat the solution.

I'm sure you really are worried about how "Facebook are bad", and you feel like you need to insert that into many conversations about other things, but "Facebook are bad" is irrelevant here.

You made a bogus claim about Security Keys. These bogus claims help to validate people's feeling that they're helpless and, eh, they might as well put up with "Facebook are bad" because evidently there isn't anything they can really do about it.

So your problem is, which is more important, to take every opportunity to surface the message you care about "Facebook are bad" in contexts where it wasn't actually relevant, or to accept that hey, actually you're wrong about a lot of things, and some of those things actually reduce the threat from Facebook ? I can't help you make that choice.


It's just tinfoil hat nonsense, it's not worth responding to.


But you just did. :)


A key part of various such tamper-resistant devices is an embedded secret that's very difficult/expensive to extract. However, the manufacturer (i.e. "the company selling the solution") may know the embedded secret without extracting it. Because of that, trust in the solution provider is essential even if it's just simple math.

For a practical illustration, see the 2011 attack on RSA (the company) that allowed attackers access to secret values used in generating RSA's SecurID tokens (essentially, cheap devices that do mathematics) allowing them to potentially clone previously issued tokens. Here's one article about the case - https://www.wired.com/story/the-full-story-of-the-stunning-r...


That's true. Yubico provide a way to just pick a new random number. Because these are typically just AES keys, just "picking a random number" is good enough, it's not going to "pick wrong".

If you worry about this attack you definitely should perform a reset after purchasing the device. This is labelled "reset" because it invalidates all your credentials, the credentials you enrolled depend on that secret, and so if you pick a random new secret obviously those credentials stop working. So, it won't make sense to do this randomly while owning it, but doing it once when you buy the device can't hurt anything.

However, although I agree it would be possible for an adversary who makes keys to just remember all the factory set secrets inside them, I will note an important practical difference from RSA SecurID:

For SecurID those are actually shared secrets. It's morally equivalent to TOTP. To authenticate you, the other party needs to know the secret which is baked inside your SecurID. So RSA's rationale was that if they remember the secret they can help their customers (the corporation that ordered 5000 SecurID dongles, I still have some laying around) when they invariably manage to lose their copy of that secret.

Whereas for a FIDO token, that secret is not shared. Each key needs a secret, but nobody else has a legitimate purpose for knowing it. So whereas RSA were arguably just foolish for keeping those keys, they at least had a reason; if you found out that, say, Yubico kept the secrets, that's a red flag, since they have no reason to do that except malevolence.


The article says that although "you can't have perfect security," you can make it uneconomical to hack you. It's a good point, but it's not the whole story.

The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

I would never think to spend a million dollars on securing my home network (including other non-dollar costs like inconveniencing myself). Let's suppose that spending $1M would force the US NSA to spend $10M to hack into my home network. The people making that decision aren't spending $10M of their own money; they're spending $10M of the government's money. The NSA doesn't care about $10M in the same way that I care about $1M.

As a result, securing yourself even against a dedicated attacker like Israel's NSO Group could cost way, way more than a simple budget analysis would imply. I'd have to make the costs of hacking me so high that someone at NSO would say "wait a minute, even we can't afford that!"

So, sure, "good enough" security is possible in principle, I think it's fair to say "You probably can't afford good-enough security against state-level actors."


Whether $10M is a lot of money to the NSA or not is also only part of the story. The remaining part is how much they value the outcome they will achieve from the attack.

That reminds me somehow of an old expression: If you like apples, you might pay a dollar for one, and if you really like apples you might pay $10 for one, but there's one price you'll never pay, no matter how much you like them, and that's two apples.


You're right. It's only part of the story. Another part of the story is that the cost of these attacks is so far below the noise floor of any state-level actor that raising their costs will probably have perverse outcomes. For the same reason you don't routinely take half a course of antibiotics, there are reasons not to want to deliberately drive up the cost of exploits as an end in itself. When you do that, you're not hurting NSO; you're helping them, since their business essentially boils down to taking a cut.

We should do things that have the side effect of making exploits more expensive, by making them more intrinsically scarce. The scarcer novel exploits are, the safer we all are. But we should be careful about doing things that simply make them cost more. My working theory is that the more important driver at NSA isn't the mission as stated; like most big organizations, the real driver is probably just "increasing NSA's budget".


> there are reasons not to want to deliberately drive up the cost of exploits as an end in itself. When you do that, you're not hurting NSO; you're helping them, since their business essentially boils down to taking a cut.

In essence, NSO's income is (price of exploits) * (number of exploit customers).

If the price of exploits goes up, that doesn't mean their income does. That depends on how the price affects the number of customers. Governments have lots of money to spend, but generally they still have some price sensitivity. Especially the more fringe governments.

I am not sure what the effect on NSO's income would be.


My contention, which is counterintuitive and very possibly wrong, but I feel strongly enough about it to defend it on a message board, is that exploits are so cheap that state-level actors are in fact not meaningfully price-sensitive to them.

It's true that you can't charge $2MM for a Firefox exploit right now. But that's because someone else is selling that exploit for an (orders of magnitude) lower price. So NSO can't just jack up exploit prices to soak the IC.

But if all exploit prices for a target are driven up, everywhere, my contention is that the IC will shrug and pay. That's because the value per dollar for exploits is extremely high compared to the other sources of intelligence the IC has, and will remain extremely high almost no matter how high you can realistically drive their prices. The fact is that for practically every government on the planet, the dollar figures we're talking about are not meaningful.


Ah, that is a novel and surprising take. Thanks!

Essentially exploits are sold massively under their "true value" and NSO doesn't get to capture this value because there are so many others giving them away for free.

It seems to me that a lot of exploits / PoCs are developed by security researchers doing it for the sport and making a name for themselves. This is probably part of the reason why exploits are so cheap. So then the question is, how much less productive will these researchers be if building exploits gets harder.

My feeling is that they will put in roughly the same amount of time. And hence their exploit production will probably drop proportionally to how much harder exploits are to find.


> The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

They also have something else most people don't have: time. Nation-states and actors at that level of sophistication can devote years to their goals. This is reflected in the acronym APT, or Advanced Persistent Threat. It's not just that once they have hacked you they'll stick around until they are detected or have everything they need; it's also that they'll keep trying, playing the long game, waiting for their target to get tired, make a mistake, or fail to keep up with advancing sophistication.

In your example, you spend $1M on your home network, but do you keep spending the money, month after month, year after year, to prevent bitrot? Equifax failed to update Struts to address a known vulnerability, not just because of cost but also time. It has cost around $2 billion so far, and the final cost might never really be known.


Most organizations should not really be factoring state level actors into their risk assessment. It just doesn't make sense. If you are an actual target for state level actors you likely will know about it. You will also likely have the funding to protect yourself against them. And if you can't, that isn't a failing of your risk assessment decision making.


An illustrative counterexample of "if you are an actual target for state level actors you likely will know about it" is the case of Intellect Services, a small company (essentially, father and daughter) developing a custom accounting product (M.E.Doc) that assists preparation of Ukrainian tax documents.

It turned out that they were a target for state level actors, as their software update distribution mechanism was used in a supply chain attack to infect many companies worldwide (major examples are Maersk and Merck) in the NotPetya data destruction (not ransomware, as it's often wrongly described) attack, causing billions of dollars in damage. Here's an article about them https://www.bleepingcomputer.com/news/security/m-e-doc-softw...

In essence, you may be an actual target for state level actors not because they care about you personally, but because you just supply some service to someone whom they're targeting.


I did say “likely know”. The point was not so much who the targets of state level actors are, but that if you are a target there is not much you can do about it. The resources they can invest, especially against a smaller but more critical company, are orders of magnitude more than that organization can afford to defend against. There just isn't a lot you can practically do to defend yourself from those kinds of threat actors at a smaller business. I think medium to large businesses have way more tools at their disposal.


Are you a semi-large American company?

Then you are an actual target for state level actors.


as a security engineer at a semi large American company, we factor in state actors. we do tool for, and routinely hunt for nation state actors.

most people I know, even those in mid size businesses, tool for and hunt for nation state TAs as well. it's just something you have to do. the line between ecrime and nation state is sooooo thin, you might as well. especially when you're talking about NK, where you have nation state level ecrime.


Said state level actors probably have agents in your company anyways.


Heck, are you in the supply chain of a semi-large American company?


What counts as a state-level actor? The NSA, obviously. But a lot of other groups seem to be in more of a grey area.


The corresponding agencies in China and Russia, obviously. But usually a state-level actor wants deniability, which is where the "grey area" hacker teams come in (groups that appear to be state actors but this can be difficult to prove).


It’s really just about the resources an organization has. Can be lots of different government and NGO type entities.


Meanwhile, the biggest state-level actors are developing offensive capabilities at the scale of "we can wipe out everything on the enemy's entire domestic network" which includes thousands of businesses of unknown value. The same way strategic nuclear weapons atomize plenty of civilian infrastructure.

Sure, in that kind of event, an org might be more concerned with flat out survival. But you never know if you'll be roadkill. And once that capability is developed, there is no telling how some state-level actors are connected to black markets and hackers who are happy to have more ransomware targets. Some states are hurting for cash.


"So, sure, "good enough" security is possible in principle, I think it's fair to say "You probably can't afford good-enough security against state-level actors.""

I don't think so. State level actors also have limited resources (and small states have very limited resources), and every time they deploy their tools they risk getting discovered, analyzed, and added to antivirus heuristics, and with that rendered almost worthless. Or they risk the attention of the intelligence agencies of your state. So when that happens heads might roll, and the heads want to avoid that.

So if there is a state level group looking for easy targets for industrial espionage - and they find tough security, where it looks like people care - I would say chances are that they go look for easier targets (of which there are plenty).

Unless of course there is a secret they absolutely want to have. Then yes, they will likely get in after a while, if the state backing them is big enough.

But most hacking is done on easy targets, so "good enough" security means not being an easy target, which also means not getting hacked in most cases. That is the whole point of "good enough".


This reminds me of the US's program against the Soviet Union in Afghanistan (or at least one fictionalised version of it). Supposedly the pitch for funding involved the cost of a US stinger missile being much less than the cost of a Soviet helicopter. If it's an effective means to force a rivalrous actor to waste money, the fact the decision makers don't care about the money they spend could be a counterattack vector.


> The problem is that state-level actors don't just have a lot of money; they (and their decision makers) also put a much much lower value on their money than you do.

I think you have a false perception of the budgetary constraints mid-level state actors are dealing with. Most security agencies have set budgets and a large number of objectives to achieve, so they'll prioritize cost-effective solutions/cheap problems (whereby the cost is both financial and political, but finances act as a hard constraint). Germany actually didn't buy Pegasus largely because it was too expensive.

Without Pegasus, Morocco's security apparatus probably wouldn't have the resources otherwise to target such a wide variety of people, ranging from Macron to their own king.


Sure, there might be other theoretical concerns beyond just getting to “uneconomical”, but they are all basically irrelevant compared to the fundamental economical problem that you do not spend $1M to force the attacker to spend $10M, you spend $100M to make the attacker spend $1M. We need to start by improving systems by 10,000% to fix that problem before worrying about minutiae like relative willingness to pay.


For the likes of NSO there is no “we can’t afford that,” there is only “your Highness, this will cost $MUCH” and for, say, Saudi Arabia the boss might not even blink.


"Secure" and "uneconomical" are generally equivalent. A door lock is an _economic_ instrument, that just happens to leverage the laws of physics in its operation. If the NSOs of the world are your enemy, and they are by definition of having you on their list, then you must wisely expend your energy on making their attack more costly or else get eaten.


> I'd have to make the costs of hacking me so high that someone at NSO would say "wait a minute, even we can't afford that!"

No, really. You just need what just happened to happen a couple more times and they are finished. If they can't protect their data they have no business; their reputation is destroyed, and there's no point in hiring them if a week later the list of the people you are spying on leaks. Turn the game around: info security is asymmetric by definition, and it's a lot easier to attack than to defend. As a defender you need to plug all possible holes, but as the attacker you just need to find one.


"the fact that iMessage will gleefully parse all sorts of complex data received from random strangers, and will do that parsing using crappy libraries written in memory unsafe languages."

C. 30 years of buffer overflows.


Yep, making companies liable for damages would incentivize them to stop relying on C for a lot of things. Apple knows full well iMessage has security relevant bugs they just haven't found yet. Hence their attempts to shield it and mitigate those issues with layers of security. However, the appropriate action would be to reimplement it in something less likely to get exploited. That's expensive. Liability would justify this cost. Companies like Google, MS, Apple, etc. rely on large amounts of legacy C code. There are quite a few repeat offenders in terms of having security vulnerabilities exploited in the wild.

Basically, my reasoning here is that Apple knows it is exposing users to hacks because of quality issues with this and other components. The fact that they try to fix them as fast as they find them is nice but not good enough: people still get hacked. When the damage is mostly PR, it's manageable (to a point). But when users sue and start claiming damages, it becomes a different matter: that gets costly and annoying real quick.

Recently we have seen several companies embrace Rust for OS development. Including Apple even. Both Apple and Google have also introduced languages like Swift and Go that likewise are less likely to have issues with buffer overflows. Switching languages won't solve all the problems but buffer overflows should largely be a thing of the past. So, we should encourage them to speed that process up.


50 years!


49

C is a programming language born at AT&T's Bell Laboratories in the USA in 1972. It was created by Dennis Ritchie.


Buffer overflows are older than C.

One of the reasons for the decline of the British computer industry was that Tony Hoare, at one of the big companies (Elliott Brothers, later part of ICL), implemented Fortran by compiling it to Algol, and compiled the Algol with bounds checks. This would have been around 01965, according to his Turing Award lecture. They failed to win customers away from the IBM 7090 (according to https://www.infoq.com/presentations/Null-References-The-Bill...) because the customers' Fortran programs were all full of buffer overflows ("subscript errors", in Hoare's terminology), and so the pesky Algol runtime system was causing them to abort!
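The same trade-off is easy to demonstrate today in any bounds-checked language. A tiny sketch in Rust, where a bad subscript is either a checked None or a noisy abort, but never a silent read past the end of the array:

    fn main() {
        let table = [10, 20, 30];
        let subscript = 7; // the kind of "subscript error" Hoare's runtime caught
        match table.get(subscript) {
            Some(value) => println!("table[{subscript}] = {value}"),
            None => println!("subscript {subscript} is out of bounds"),
        }
        // Writing `table[subscript]` instead would abort at runtime, like the
        // pesky Algol checks -- annoying, but never a silent buffer overflow.
    }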


"What can we do to make NSO’s life harder?" That seems pretty simple to me: We ask Western democratic governments (which include Israel) to properly regulate the cybersecurity industry.

This is the purpose of governments; it is why we keep them around. There is no really defensible reason why the chemical, biological, radiological and nuclear industries are heavily regulated, but "cyber" isn't.


Nobody has any credible story for how regulations would prevent stuff like this from happening. The problem is simple economics: with the current state of the art in software engineering, there is no way to push the cost of exploits (let alone supporting implant tech) high enough to exceed the petty cash budget of state-level actors.

I think we all understand that the medium-term answer to this is replacing C with memory-safe languages; it turns out, this was the real Y2K problem. But there's no clear way for regulations to address that effectively; assure yourself, the major vendors are all pushing forward with memory safe software.


Well, first of all the NSO Group in its current form wouldn't exist if Israel regulated them; at the very least it wouldn't exist as a state-level equivalent actor.

Second of all if you can't push the costs high enough then it becomes time to limit the cash budget of state level actors. Which is hardly without precedent.

For some reason you seem to only be looking at this as a technology problem, while at the core it is far more political. Sure technology might help, but that's the raison d'etre of technology.


Sure, you can outlaw NSO itself. I won't complain! But all you're doing is smearing the problem over the globe. You can push this kind of work all the way to "universally acknowledged as organized crime", and it'll still happen, exactly the same way, with basically the same actors. You might even increase the incentives by doing it. Policy is complicated.


I really don't get this line of argument that regulation is useless. For example, if you made it illegal for ex US gov workers to work at companies like these, I would expect the vast majority to comply, so at the very minimum you would be limiting the available talent pool. The post several parents up talked about regulation for the biological, nuclear, etc. industries being effective, and although 'cyber' would never be treated in the same way, they're right: after all, you don't see organized criminals running around with biological or radiological weapons, now do you?


I don't know if it's useless. I just know it isn't going to stop NSO-type attacks by state-level actors. People on message boards have very strange ideas about what the available talent pool is; for starters, they seem strangely convinced that it's all people who are choosing between writing exploits and working at a Google office.


Of course you will never stop all attacks; however, you can try to limit their number by making them more expensive to carry out, whether by limiting where attackers can hire from, the political consequences they will incur, etc.


On this thread, we're talking about state-level attackers targeting iMessage.


Removing NSO won't limit access to the talent pool in practice, because the key assets of NSO - the vulnerabilities - do not rely on people they employ directly but rather on the global market for exploits.

Currently, some blackhat somewhere finds a vulnerability and sells it to NSO, and then NSO sells it to various countries. If Israel forbids such deals, then the same someones (regardless of where they're located - those deals are essentially unregulatable; you might anonymously trade knowledge/PoC for crypto) will sell the vulnerability to an NSOv2 headquartered in Panama or Mozambique, and NSOv2 will sell it to the same customers.


Well, you can hardly complain that it's impossible to make the cost of exploits high enough if you do nothing to restrict their funding. If a country lets them openly conduct business, then it's no surprise they're well funded. That wouldn't be a problem if the country kept an eye on them to ensure they weren't doing anything harmful, but predictably that didn't work out.


NSO is just the exploit vendor you hear about. There are lots more.


Isn’t this the security nihilism the article is addressing?


Israel does regulate them; you may think not well enough, but there likely isn't a single sale that wasn't approved at a pretty high level: under their export license, every sale requires an authorization.

I doubt they made a deal that didn't directly serve either Israeli or US foreign policy and security interests.

I don't know about NSO, but another player in mobile tracking (Verint), though much more LEO-oriented (SS7 tracking), had about a million failsafes to ensure that their software cannot be used to track or intercept US or Israeli numbers.


You're extremely correct, of course, but what I'm really proposing here is something much more boring than actually solving the technical problem(s). How about a dose of good old-fashioned bureaucracy? If you want to sell exploits in a Western country, then yeah, sure, you can, but first you should have to go through an approval process, fill in a form for every customer, and have them vetted, yada yada.

This wouldn't do anything to stop companies who base themselves in places like Russia. It wouldn't even really do anything to stop those who base themselves in the Seychelles. But, you want to base yourself in a real bona-fide country, like the USA or France or Israel or Singapore? Then you should have to play by some rules.


If you make people fill out paperwork to sell exploits in Israel, Germany, and the United States, they will sell exploits in Kuala Lumpur, Manila, and Kigali. I'm not saying you're expressing it at all, but there is a lot of chauvinism built into the most popular ideas for regulating exploits.


Yes, they certainly will. I'm not naive, or colonial, about that. But what more can we do than live out the standards that we want to see upheld in the world?


I'd be surprised if Israel didn't already regulate who NSO does business with.


Nor does anyone need one, yet. Again, the point of government -- force the dang discussion; that's what investigations, committees, et al are for.

It's fun to make fun of old people in ties asking (to us) stupid questions about technology in front of cameras, but at the end of the day, it's a crucial step in actually getting something done about all this.


>Nobody has any credible story for how regulations would prevent stuff like this from happening.

We do have some of those already.

https://www.faa.gov/space/streamlined_licensing_process/medi...


If governments can't ban exploits, perhaps they can ban writing commercial programs in memory-unsafe languages? Countries could agree on setting a goal, e.g. that by 2040 all OSs etc. need to use a memory-safe language.


The whole approach of regulating on the level of "please don't exploit vulnerable systems" seems reactive to me. If the cat's out of the bag on a vulnerability and it's just data to copy and proliferate, there's not much a government can do other than threaten repercussions, which only apply if you get caught.

The only tractable way to deal with cyber security is to implement systems that are secure by default. That means working on hard problems in cryptography, hardware, and operating systems.


By the exact same logic, implementing physical security on the level of "please don't kill vulnerable people" would also be reactive. If the cat's out of the bag on a way to kill people, well, don't we need to implement humans that are unkillable in that way? That's going to mean working on some hard problems...

No. We don't operate that way, and we don't want to.

But for us to not operate that way in cyberspace, we need crackers (to use the officially approved term) to be at least as likely to be caught (and prosecuted) as murderers are. That's a hard problem that we should be working on.

(And, yes, we need to work on the other problems as well.)


Despite the enforcement mechanisms against murder (which work less than two-thirds of the time), you see many places implement preventive security measures to make killing people more difficult.

I think it is wholly reasonable to work on both preventive and punitive approaches. For online crimes, jurisdictional issues are major hurdles for the punitive approach.


> For online crimes, jurisdictional issues are major hurdles for the punitive approach.

Yeah. If you can catch people in your jurisdiction (without the problems of spoofing and false flags), then people are just going to attack you from outside your jurisdiction. You'd have to firewall your jurisdiction against outside attacks. (You might even be able to do that, by controlling every cable into the country. But then there's satellites...)


The original tale of international cyber espionage was carried out via a satellite link:

https://en.wikipedia.org/wiki/The_Cuckoo%27s_Egg_(book)


> We ask Western democratic governments (which include Israel) to properly regulate the cybersecurity industry.

That's a bit naive. Governments want surveillance technology, and will pay for it. The tools will exist, and like backdoors and keys in escrow, they will leak, or be leaked.

The reason why all those other industries are regulated as much as they are is that governments don't need those types of weapons the way they need information. It's messy and somewhat distasteful to overthrow an enemy in war, but undermining a government through surveillance, disinformation, and propaganda, until it collapses and is replaced by a more compliant one, is the bread-and-butter of world affairs.


They want nukes too, which also exist. It doesn't mean they'll get them.

Non-proliferation treaties are effective against nuclear weapons; they'd be effective against "cyber" weapons.


> They want nukes too

No, they want weapons that can project and multiply threat. Nukes are just one way of doing that.


The thing is, countries with a vast intellectual-property base have more to lose in this game, so they should favor defense over offense. As Schneier says, we must choose between security for everyone and security for no one.


Except that the big-money IP owners consider piracy and loss of revenue far more important than merely securing their assets. The kinds of software they buy - DRM, copy protection, automatic DMCA takedowns of automatically detected infringing works - don't have any applicability to cybersecurity.


Yeah, it seems kind of silly to start with the fact that something has caused "the bad thing everyone said would happen" to happen and somehow not see that thing as a blatant security hole in and of itself.

I mean, sure, technical solutions are available and do help, but looking only at the technical side while ignoring the original issue seems like a mistake.


> a blatant security hole in and of itself

That means our society, our governments, our economic systems are security holes. Everyone saying the Bad Thing would happen did so by looking, not at technology, but at how our world is organized and run. The Bad Thing happened because all those actors behaved exactly as they are designed to behave.


This still leaves us with threats from state actors and cybersecurity firms answering only to Eastern, undemocratic governments.


“Cyber” is pretty “well” regulated; NSO exports under what is essentially an arms export license.


> to properly regulate the cybersecurity industry

Regulated Cybersecurity: Must include all mandatory government backdoors.


The 'economic' argument simply doesn't work. Does the author think that every "tin-pot authoritarian" owns a poor country scrabbling in the unproductive desert for scraps? Of course not!

Literally one of the best customers of NSO tools is Saudi Arabia (SA), where money literally bursts out of the ground in the form of crude oil. The market cap of Saudi Aramco is 3x that of Apple. Good luck making it "uneconomical" for SA to exploit iPhones.

I'll even posit that there is literally no reasonable price at which the government of SA cannot afford an exploitation tool. The governments that purchase these tools aren't doing it for shits and giggles. They're doing it because they believe that their targets represent threats to their continued existence.

Think of it this way, if it costs you a trillion dollars to preserve your access to six trillion dollars worth of wealth, would you spend that? I would, in a heartbeat.


I respectfully disagree.

If we can raise the cost from $100k per target to $10m per target, even SA will reduce the number and breadth of targets.

They do have limited funds, and they want to see an ROI. At a lower cost, perhaps they’ll just monitor every single journalist who has ever said a bad thing about the king. As that price increases, they’ll be more selective.

Like Matt said, that’s not ideal. But forcing a more highly targeted approach rather than the current fishing-trawler approach is an incremental improvement.
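A quick back-of-envelope sketch of that economics argument (in Python; the budget and per-target prices below are invented placeholders, not real NSO or customer figures):

    # Toy model: a fixed surveillance budget buys fewer targets as the
    # per-target cost of exploitation rises. All numbers are invented.
    annual_budget = 50_000_000  # hypothetical customer budget, USD

    for cost_per_target in (100_000, 1_000_000, 10_000_000):
        targets = annual_budget // cost_per_target
        print(f"at ${cost_per_target:,}/target: {targets:,} targets/year")

    # at $100,000/target: 500 targets/year
    # at $1,000,000/target: 50 targets/year
    # at $10,000,000/target: 5 targets/year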


The NSO target list has like 15,000 Mexican phone numbers on it. You don't think making exploits more expensive would force attackers to prioritize only the very highest value targets?

In the limit, a trillion dollar exploit that will be worthless once discovered will only be used with the utmost possible care, on a very tiny number of people. That's way better than something that you can play around with and target thousands.

https://www.theguardian.com/news/2021/jul/19/fifty-people-cl...


100%, the argument is perfect in its circularity: we should make it uneconomical for there to be iMessage exploits by fixing iMessage exploits.


Computer security isn't a board game where my unit can Damage your unit if my unit has more Combat than your unit has Defense, and once your unit is Damaged enough you lose it, and you can buy a card with 5 Combat for 5 Gold, and so on. It's not a contest of strength. It's not about who has the most gold. It's about who fucks up.

If you follow the guidelines in http://canonical.org/~kragen/cryptsetup to encrypt the disk on a new laptop, it will take you an hour (US$100), plus ten practice reboots over the next day (US$100), plus 5 seconds every time you boot forever after (say, another US$100), for a total of about US$300. A brute-force attack by an attacker who has killed you or stolen your laptop while it was off is still possible. My estimate in that page is that it will cost US$1.9 trillion. That's the nature of modern cryptography. (The estimate is probably a bit out of date: it might cost less than US$1 trillion now, due to improved hardware.)
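For a rough idea of how that kind of brute-force estimate is put together (this is only the shape of the calculation; every input below is an assumption for illustration, not the figure from the linked page):

    import math

    # Back-of-envelope cost to brute-force a diceware-style passphrase.
    words = 6                  # assumed passphrase length in words
    wordlist_size = 2048       # assumed size of the word list
    entropy_bits = words * math.log2(wordlist_size)   # 66 bits

    expected_guesses = 2 ** (entropy_bits - 1)         # on average, half the keyspace
    guesses_per_dollar = 1e10  # assumed attacker efficiency (hardware, power, key stretching)

    cost_usd = expected_guesses / guesses_per_dollar
    print(f"{entropy_bits:.0f} bits of entropy, ~${cost_usd:,.0f} expected attack cost")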

Other forms of software security are considerably more absolute. Regardless of what you see in the movies, if your RAM is functioning properly and if there isn't a cryptographic route, there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T. It's like trying to find a number that when multiplied by 0 gives you 5. The money you spend on attacking the problem is simply irrelevant.

Usually, though, the situation is considerably more absolute in the other direction: there are always abundant holes in the protections, and it's just a matter of finding one of them.

Now, of course there are other ways someone might be able to decrypt your laptop disk, other than stealing it and applying brute force. They might trick you into typing the passphrase in a public place where they can see the surveillance camera. They might use a security hole in your browser to gain RCE on your laptop and then a local privilege escalation hole to gain root and read the LUKS encryption key from RAM. They might trick you into typing the passphrase on the wrong computer at a conference by handing you the wrong laptop. They might pay you to do a job where you ssh several times a day into a server that only allows password authentication, assigning you a correct horse battery staple passphrase you can't change, until one day you slip up and you type your LUKS passphrase instead. They might steal your laptop while it's on, freeze the RAM with freeze spray, and pop the frozen RAM out of your motherboard and into their own before the bits of your key schedule decay. They might break into your house and implant a hardware keylogger in your keyboard. They might do a Zoom call with you and get you to boot up the laptop so they can listen to the sound of you typing the passphrase on the keyboard. (The correct horse battery staple passphrases I favor are especially vulnerable to that.) They might remotely turn on the microphone in your cellphone, if they have a way into your cellphone, and do the same. They might use phased-array passive radar across the street to measure the movements of your fingers from the variations in the reflection of Wi-Fi signals. They might go home with you from a bar, slip you a little scopolamine, and suggest that you show them something on your (turned-off) laptop while they secretly film your typing.

The key thing about these attacks is that they are all cheap. Well, the last one might cost a few thousand dollars of equipment and tens of thousands of dollars in rent. None of them requires a lot of money. They just require knowledge, planning, and follow-through.

And the same thing is true about defenses against this kind of thing. Don't run a browser on your secure laptop. Don't keep it in your bedroom. Keep your Bitcoin in a Trezor, not your laptop (and obviously not Coinbase), so that when your laptop does get popped you don't lose it all.

You could argue that, with dollars, you can hire people who have knowledge, do planning, and follow through. But that's difficult. It's much easier to spend a million (or a billion, or a trillion) dollars hiring people who don't. In fact, large amounts of money are better at attracting con men, like antivirus vendors, than they are at attracting people like the seL4 team.

Here in Argentina we had a megalomaniacal dictator in the 01940s and 01950s who was determined to develop a domestic nuclear power industry, above all to gain access to atomic bombs. Werner Heisenberg was invited to visit in 01947; hundreds of German physicists were spirited out of the ruined, occupied postwar Germany. National laboratories were built, laboratory-scale nuclear fusion was announced to have been successful, promises to only seek peaceful energy were published, plans for a nationwide network of fusion energy plants were announced, hundreds of millions of dollars were spent (in today's money), presidential gold medals were awarded...

...and finally in 01952 it turned out to be a fraud, or at best the kind of wishful-thinking-fueled bad labwork we routinely see from the free-energy crowd: https://en.wikipedia.org/wiki/Huemul_Project

Meanwhile, a different megalomaniacal dictator who'd made somewhat better choices about which physicists to trust detonated his first H-bomb in 01953.


> there's no attack that will allow one seL4 process to write to the memory of another seL4 process it hasn't been granted write access to. Not for US$1B, not for US$1T, not for US$1000T.

Nitpick: for only about US$1M (give or take an order of magnitude or two depending on location), the process (assuming network access) can hire an assassin to kill you, pull up a shell on your computer, and give the process whatever privileges it wants.


Normally it wouldn't have network access, but that's an excellent point—generalized, once programs can start having physical-world effects that loop around to affect the computer they're running on, you can no longer make such adamantium-clad guarantees. And, as Rowhammer and various passive-emission attacks show, it's not uncommon in practice for the ordinary execution of the program to have such effects.

Still, this kind of thing isn't always applicable. If the seL4 kernel in question is on orbit, or running on a computer at an unknown location, or in a submarine, or in a drone in flight, the assassin can't in practice sit down at the console. And if it's running on something like the Secure Enclave chip in an iPhone, or a permissive action link, physical access may be impractically difficult regardless of who you kill.


One possible technical improvement is high quality honeypots. If Apple tried hard, they could arrange for certain iPhones to have instrumentation intended to detect and characterize these sorts of attacks. If every targeted user has a 0.1% chance of leaking the exploit vector to Apple, then mass exploitation becomes much more complex and expensive.

Doing this well would be hard, but even an imperfect implementation would have some value.
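The arithmetic behind that intuition is easy to check (a minimal sketch; the 0.1% per-device leak probability is the figure from the comment above, and the target counts are made up):

    # Chance that at least one instrumented, targeted device leaks the
    # exploit vector back to the vendor, assuming independent 0.1% odds each.
    p_leak = 0.001
    for targets in (100, 1_000, 10_000, 50_000):
        p_caught = 1 - (1 - p_leak) ** targets
        print(f"{targets:>6,} targets -> {p_caught:.1%} chance of detection")

    #    100 targets -> 9.5% chance of detection
    #  1,000 targets -> 63.2% chance of detection
    # 10,000 targets -> 100.0% chance of detection   (99.995%, rounded)
    # 50,000 targets -> 100.0% chance of detection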


It might be hard to convince the privacy engineers to allow us access to a random sample of message attachments. What if we asked for a temporary 'root' access credential that is only valid for 3 minutes per day?


Get users to opt in, both to participate and to analyze any given payload?


NSO == arms dealers, by their own admission. They did not create the market for digital arms, but they successfully cater to it. No HN comment will change their business model.

They benefit from the easy distribution of software twice. Once as an exploit developer, because all target systems look alike (recall that hardware and software vendors also want to write hardware and software once and then distribute it widely), and therefore an exploit must only be developed once to apply broadly. Then, a second time as a software developer, because they can sell the same software to multiple clients. Having worked on Pegasus-like systems, I can say the thing that is dreaded the most, and is very costly, is a rewrite. Those get financially prohibitive.

If the world were serious about stopping the NSOs of the world, it would work toward efficiently (read: inexpensively) making individual systems wildly different yet interoperable (because interoperability is where the network effect comes in, providing value in communication systems and leading to their wider adoption). The conflict to solve is how to make systems interoperable and non-interoperable at the same time. While it is easy to imagine randomized instruction sets, Morpheus-like blindly-randomize-everything chips, and bytecode VMs that use binary translation to modify the phenotype of each individual system, it is not so easy to envision how systems could be written once to interoperate yet prevent BORE-type (break once, run everywhere) attacks, whereby the one-time exploit development cost can easily be offset by repeat exploitation. The only way forward is to find the lever that gives defenders a cheap button to push that forces an expensive Pegasus rewrite.


There's plenty of evidence that this type of attack surface (parsers operating on untrusted data received over the Internet) is fixable, even at Big Tech scale. The most obvious example is Microsoft Office in the early 2000s and the switch to the XML-based format with newer, easier-to-implement, and ideally memory-safer parsers. That's not to say there are no bugs in Office anymore, but it's certainly much, much better than it was.

Microsoft figured it out. Apple can do it, too.


The point the author makes is very valuable: it is important not to throw our hands up in the air. If you are not moving forward, you are falling back.

Though one (perhaps nit-picky) point I'd like to make is that these dictators are not dumb. They are incredibly intelligent. They themselves are probably not hackers, but they understand people and power. They are going to do what they can to get what they want. We can't ignore the role they play in creating these problems, and we need to take it just as seriously as we would a technical security exploit.


You don't have to be "incredibly intelligent" to pay some company to hack a list of your enemies. That just takes money and a list of people you hate, not insight.


Not to split hairs here, but let's separate this into a few different traits: smart, intelligent, and cunning.

Most dictators are not very intelligent. Just like Donald Trump is not very intelligent.

Cunning and with social smarts would be apt. These guys really know how to play people off each other, and manipulate, like, really well.


How does a bug in iMessage lead to my iPhone being completely taken over by Pegasus? I thought apps were sandboxed on iOS.

Or can they only monitor SMS/iMessages with this entry point?


I imagine they use one exploit to get code execution in iMessage, then another exploit to escape the sandbox and execute code in the kernel.


Yeah. Here’s a 2016 write up when Pegasus (presumably a different deployment) was leaked and reversed: https://citizenlab.ca/2016/08/million-dollar-dissident-iphon...


The immediate takeaway from this article is that if any of these threat actors are on your radar and you have an iPhone, then you should delete iMessage.


I just disabled iMessage and FaceTime... I don't actively use them anyway, and I have a feeling Apple applies more scrutiny to third-party apps (WhatsApp, Signal, ...) than to its own.

I don't know if disabling iMessage is enough in this case, though.


The security attack that scares me more than any other is rough men with guns kidnapping me in the middle of the night and then torturing me until I reveal my security material. While normally torture just results in the victim saying anything to make it stop, in the specific case where the attacker has the encrypted material and can test extracted keys in real time, torture is highly effective.


There’s a canonical term for this: rubber-hose cryptanalysis. That’s when you beat someone with a rubber hose until they give you the key. It’s effective against a wide range of algorithms and constructions.


The technical solution is having very available, very believable lies. Something where you can "reveal" false secrets that decrypt to data your attacker will find believable.

This is generally hard. Because you gotta know, at the time of being tortured, which fake secret will give believable results.



The only practical security is security through isolation, like what Qubes OS provides. Security through correctness is impossible.


With Qubes you are depending on the correctness of the VM and whatever hardware it is running on. Modern chips are really complex.

The only perfectly secure computer is one that is off. Security is always about probabilities and trade-offs. As you approach perfection, cost approaches infinity. It’s similar to adding “nines” to your uptime.

A good security policy balances cost with security and also has plans in place for what to do if security is compromised.


More security nihilism.


I'm not saying security is impossible, just that there are trade offs especially as you try to approach some mythical "perfection."


Stupid question: how do you know your isolation is correct?


Not a stupid question at all. Nothing is 100% correct. Instead, you look at the attack surface, which for Qubes is extremely small: no network in the AdminVM, only ~100k lines of code in the Xen hypervisor, hardware virtualization with an extremely low number of discovered escapes, and so on.


Xen is bloated and has a history of security holes. This also ignores the size of the Linux system acting as dom0.

The only correct answer is formal reasoning, as successfully executed by seL4.


> Xen is bloated and has a security hole history.

This is useless security nihilism. Xen is much more secure than anything else in terms of hole history. And Qubes relies on hardware virtualization, not software. The most famous escape from it was discovered by the Qubes founder ("Blue Pill").

The size of Linux in dom0 does not matter, because it has no network, does not run any apps and is only used to manage VMs. There is just no way for an attacker to exploit a bug there.

>formal reasoning

I hope this is the future, but unfortunately it's not the present yet.


>I hope this is the future, but unfortunately it's not the present yet.

Qubes devs are welcome to adopt seL4's VMM virtualization solution.

In seL4's virtualization design, the VMM handles VM exceptions and yet has no more privileges (capabilities, enforced by seL4, which is thoroughly formally verified) than the VM itself, so an escape from the VM to the VMM would yield no fruit.


You test for it with rigor and incorporate new learning, just like every other engineering discipline.


There have been Qubes-breaking bugs in Xen before, and it wouldn't be surprising to see more.


You seem to have missed the point of the article completely.

We can’t achieve perfect security (there’s no such thing). What we can achieve is raising the bar for attackers. Simple things like using memory-safe languages for handling untrusted inputs, least-privilege design, defense in depth, etc.


Memory-safe languages are good, but decreasing the attack surface through compartmentalization is much more reliable I think.


Related: Fuck privacy nihilism

https://twitter.com/evacide/status/1416968243642724353?s=21

The same logic applies. You will not achieve perfect privacy online but there is plenty you can do to make tracking you so much harder.


> In a perfect world, US and European governments would wake up and realize that arming authoritarianism is really is bad for democracy

Well, they haven't done that since what, the early 20th century, what with "he may be a son of a bitch, but he's our son of a bitch"? If the US really cared that much about "democracy", perhaps they'd have sorted it out by now in, say, Mexico: after all, that's their most closely connected country after Canada, right? Yet the last time I checked the news about the upcoming Mexican elections, there were quite a lot of dead or missing opposition candidates reported (just as there were during the previous elections, and the ones before that, and...). Interestingly enough, the US press doesn't report much on that: apparently things in Afghanistan, across half the globe, are much more important for homeland security.


Can someone explain why Blastdoor has been unsuccessful? Is it too hard a problem to restrict what iMessage can do?


> Notably, these targets include journalists and members of various nations’ political opposition parties

For all we know it also included cryptographers and security researchers. Unfortunately, the list hasn't been published-- so we only know what the journalists who had access to it cared to look up.


I don't care in the slightest about computer security as long as governments won't even try to ask experts to write laws and security standards. The same goes for insurance companies, who should try to sell cyber insurance and lobby the government for security standards.

There are plenty of security standards for many things that are not computers, yet cyber is a weird exception.

My understanding is that it's a combination of a liberal Silicon Valley state of mind and the NSA benefiting from low security standards and having a monopoly on tech companies.

In my view, the computer security industry is to blame here, because they benefit from chaos and a lack of government intervention.


> The problem that companies like Apple need to solve is not preventing exploits forever, but a much simpler one: they need to screw up the economics of NSO-style mass exploitation.

On the one hand, sure, make it too expensive to do this. On the other hand, how much more expensive is too expensive? When the first SHA1 collision attack was found, it was considered a problem, and SHA1 was declared unsuitable for security purposes, but now it's cheap.


The article states:

>"A more worrying set of attacks appear to use Apple’s iMessage to perform “0-click” exploitation of iOS devices. Using this vector, NSO simply “throws” a targeted exploit payload at some Apple ID such as your phone number, and then sits back and waits for your zombie phone to contact its infrastructure."

Does anyone have a link or any resources that describe how this “0-click” exploitation works?


... against nihilism? They're just sort of handwaving and saying, "Well, uh... we should do better, somehow... and expect Apple to do better, and... uh..." How's that any different from saying "The problem is basically impossible"?

The core of the problem is complexity. Our modern computing stack can be broadly described as:

- Complexity to add features.
- Complexity to add performance.
- Complexity to solve problems with the features.
- Complexity to solve problems created from the performance complexity.
- Complexity added to solve the issues the previous complexity created.

And this has been iterating over, and over, and over... and over. The code gets more complex, so the processors have to be faster, which adds side channel issues, so the processors get more complex to solve that, as does the software, hurting performance, and around you go again.

At no point does anyone in the tech industry seem to step back and say, "Wait. What if we simplify instead?" Delete code. Delete features. I would rather have an iPhone without iMessage zero click remote exploits than one with animated cartoons based on me sticking my tongue out and waggling my eyebrows, to pick on a particularly complex feature.

I've made a habit of trying to run as much as I can on low power computers, simply to see how it works, and ideally help figure out the choke points. Chat has gotten comically absurd over the years, so I'll pick on it as an example of what seems, to me, to be needless complexity.

Decades ago, I could chat with other people via AIM, Yahoo, MSN, IRC, etc. Those clients were thin, light, and ran on a single core 486 without anything that I recall as being performance issues.

Today, Google Chat (having replaced Hangouts, which was its own bloated pig in some ways) struggles to keep up with typing on a quad core, 1.5GHz ARM system (Pi 4). It pulls down nearly 15MB of resources - or roughly 30% of a Windows 95 install. To chat with someone person to person, in the same way AIM did decades ago. I'm more used to lagged typing in 2021 than I was in 1998.

Yes, it's got some new features, and... I'm sure someone could tell me what they are, but in terms of sending text back and forth to people across the internet, along with images, it's fundamentally doing the exact same thing that I did 20 years ago, just using massively more resources, which means there are massively more places for vulnerabilities, exploits, bugs, etc, to hide. Does it have to be that huge? No idea, I didn't write it. But it's larger and slower than Hangouts, to accomplish, as far as I'm concerned, the same things.

We can't just keep piling complexity on top of complexity forever and expect things to work out.

Now, if I wanted to do something like IRC, which is substantially unchanged from the 90s, I can use a lightweight native client that uses basically no CPU and almost no memory to accomplish this, on an old Pi3 that has an in-order CPU with no speculation, and can run a rather stripped down kernel, no browser, etc. That's going to be a lot harder to find bugs in than the modern bloated code that is most of modern computing.

But nobody gets promoted for stripping out code and making things smaller these days, it seems.

As long as the focus is on adding features that require more performance, we're simply not going to get ahead of the security bugs. And, if everyone writing the code has decided that memojis are more important than securing iMessage against remote zero-click exploits, well... OK. But the lives of journalists are the collateral damage of those decisions.

These days, I regularly find myself wondering why I bother with computers at all outside work. I'd free up a ton of "overhead maintenance time" I spend maintaining computers, and that's before I get into the fact that even with aggressive attempts to tamp down privacy invasions, I'm sure lots of my data is happily being aggregated for... whatever it is people do with that, send ads I block, I suppose.


The bugs we're talking about have almost nothing to do with the underlying message transport, but rather the features built on top of it. Replacing iMessage with IRC wouldn't solve anything.


No, but my point is about complexity.

If all iMessage allowed were ASCII text strings, do you think it would have nearly the same attack surface as it does now, allowing all the various things it supports (including, if I recall properly, some tap based patterns that end up on the watch)?

In a very real sense, complexity (which is what features are) is at odds with security. You increase the attack surface, and you increase the number of pieces you can put together into weird ways that were never intended, but still work and get the attacker something they want.

If there were some toggle to disable parsing everything but ASCII text and images in iMessage, I'd turn it on in a heartbeat.
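As a sketch of what such a toggle could look like conceptually (hypothetical Python, not anything iMessage actually exposes; it is even stricter than "ASCII text and images", since image decoders are themselves a large attack surface):

    # Hypothetical "lockdown" pre-parse filter: only plain printable ASCII text
    # ever reaches the rich parsers (previews, stickers, attachment decoders...).
    ALLOWED_TYPES = {"text/plain"}

    def accept_message(content_type: str, payload: bytes) -> bool:
        if content_type not in ALLOWED_TYPES:
            return False                    # images, GIFs, link previews: rejected
        try:
            text = payload.decode("ascii")  # strict: any non-ASCII byte fails
        except UnicodeDecodeError:
            return False
        # Printable ASCII plus newlines/tabs only; nothing that needs a rich parser.
        return all(ch.isprintable() or ch in "\n\t" for ch in text)

    assert accept_message("text/plain", b"see you at 6pm")
    assert not accept_message("image/gif", b"GIF89a...")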


Virtually no one wants to use a messaging platform that just sends ASCII strings.

It's true that if you constrain the problems enough, ratcheting them down to approximately what we were doing with the Internet in 1994 when we were getting access to it from X.25 gateways, you can plausibly ship secure software --- with the engineering budgets of 2021 (we sure as shit couldn't do it in 1994). The problem is that there is no market to support those engineering budgets for the feature set we had in 1994.


> Virtually no one wants to use a messaging platform that just sends ASCII strings.

That's just about all I use for messages. Some images, but it's not critical. And if I had the option to turn off "all advanced gizamawhatchit parsing" in iMessage to reduce the attack surface, I absolutely would - and you can bet any journalist in a hostile country would like the option as well.

The whole "zero click" thing is the concerning bit - if I can remotely compromise someone's phone with just their phone # or email address, well... that's kind of a big deal, and this is hardly the first time it's been the case for iMessage.

If software complexity is at a point that it's considered unreasonable to have a secure device, then it's long past time to put an icepick through the phones and simply stop using them. Though, as I noted above, I feel this way about most of modern computing these days.


I 100% believe that this is all you do with messages. In the 1990s, my cool friends did lots of their work on Wyse dumb terminals hooked up to FreeBSD boxes. Everything they did worked fine on dumb terminals! They were neat, you could have a bunch of them hooked up to one box! But nobody else in the whole world worked that way; even the bank data entry people who were the original market for those stupid terminals had moved on from them.

The issue here is that we aren't saying anything about the real problem. You can radically scope software down. That will indeed make it more secure. But you will stop making money. When you stop making money, you will stop being able to afford the developers who can write secure software (the track record on messaging software written by amateurs for love is not great). Now we're back where we're started, just with shittier software.

It's a hard problem. You aren't wrong to observe it; it's just that you haven't gotten us an inch closer to a solution.


So you speak English? And the rest of the world should do what?


I suppose I should have gone with "Unicode without emoji" instead of ASCII. I don't mind unicode, but I question the emoji parsing engines as they're doing all sorts of crazy stuff with modifiers, and even unicode rendering is oddly complex and likely has bugs in some corner case or another.

From a "I would like it as simple and secure as possible," ASCII does tick quite a few boxes.


I think it's been single-digit months since the last UTF-8 parsing vulnerability.


Syonyk still has a point even though this thread has gone sideways.

Plot twist: extended ASCII?


The "and images" part has historically been a rich source of software exploits. I would guess that chat with full Unicode support but no images would be easier to implement to a high degree of security than ASCII text plus images.


First of all, getting rid of Unicode is not going to happen. Don’t ask.

Getting rid of images might be doable, but still difficult. Taking features away from people is politically difficult.


You know what else is "politically difficult"? Getting journalists and such killed because they're in a hostile nation, and your phone is vulnerable to remote zero-click exploits with full pwnage.

Give users the option. If you're not 100% confident in your parsing (and nobody should be), allow users the option to restrict parsing to something that's limited, tested, fuzzed, and generally trusted. People who care can turn it on. People who want touch memojis on their watch can leave it off.


Well put. The market values features. With present system-engineering approaches, the path of least resistance is to add complexity to enable said features and reap the financial rewards. It takes more effort to build smaller attack surfaces, so nature tends to avoid that path. Regulation helps little. Security is not additive, it is subtractive. Less is more. There is very little incentive to simplify, except in niche segments. So it is zero surprise that commodity systems fail so horrendously.


This is a really good point. Most corporate development that I have experienced is centered around "features" and speed: "I'm working on a new feature", "there has been a feature request", "the feature has a bug." The only time the complexity of the project is considered is when it fails and the team is canned.


An industry exists when there's a market, in other words when there's supply and demand. Making hacking more difficult increases demand, and more players will join the game.

Until something changes how the Internet works, the very moment we send something across the Internet through a service, we no longer have control over the data. Pegasus is just the tip of the iceberg, and with technology getting ever closer to us (HomePods, smart electronic appliances, etc.), it is just a matter of time until all the big brands are hiring hackers to spy on their customers so that they can produce better products (make more money).

Going forward, privacy is not supposed to be a personal preference, the way platforms nowadays make us click through different settings to opt out; it is supposed to be something we all collectively have out of the box, and we need to work together towards that goal.


The problem is that the recent security company purchases suggest that it costs roughly $100 per month per user to have just basic security. Cost goes up from that exponentially.


Treating this like a technology issue is a mistake. Tech fixes are band-aids. This is a social and governance issue that needs social and governance solutions.


Should iMessage do what Facebook messenger does and request receiver permission before letting a new contact message them?


Dumb article. Basically amounts to "Apple should continue to do what they're doing".


Apple seems incapable of successfully sandboxing iMessage after years of exploits. At this point, I think we have to assume they just don't care.


This doesn’t seem like a stable fact about Apple. Their priorities can change. I expect that these recent revelations have gotten the attention of top management and they are likely to mount a strong organizational response.

(I’m reminded of Google’s responses to the Snowden leak.)


You're not entirely wrong. For a proprietary, closed-source, limited-access system that Apple has complete control of, it's surprisingly vulnerable and slow to be patched.


It's pretty much completely wrong. Apple invests more on this problem than almost any vendor in the world, and no vendor with a comparable footprint fares meaningfully better than they do --- Google surpasses them at some problems, but vice versa. The problems we're dealing with here are basically at the frontier of software engineering and implicate not so much Apple as the entire enterprise of commercial consumer software development, no matter where it's practiced.

It's fair to criticize Apple. But you can't reasonably argue that they DGAF.


Also not partially wrong either.


Not owning or relying on a phone is the first step of security. We can't trust phones, any of them.


The article correctly refutes the silly binary argument that many people fall back on that since perfection is impossible, we must accept an imperfect solution. And since the current solutions are clearly imperfect, the status quo must be acceptable since imperfect solutions are acceptable.

However, the article falls right into the next failed model: considering everything in terms of relative security. We should make things “better”, we should make things “harder”, but those terms mean very little. 1% better is “better”. Making a broken hashing function take 2x as long to break makes things “harder”, but it does not make things more secure, since it is already hopelessly inadequate. The problem with considering things only in terms relative to existing solutions is that it ignores defining the problem and, more importantly, it does not tell you if you have solved your problem.

The correct model is the one used by engineering disciplines, specifying objective, quantifiable standards for what is adequate and then verifying the solution passes those standards. Because if you do not define what is adequate, how do you know if you have even achieved the bare minimum of what you need and how far your solution may be from that.

For instance, consider the same NSO case as the article. Did Apple do an adequate job, what is an adequate job, and how far away are they?

Well, let us assume that the average duration of surveillance for the 50,000 phones was 1 year per phone. Now, what is a good level of protection against that kind of surveillance? I think a reasonable standard is making it so the phone is not the easiest way to surveil a person for that length of time - that it is cheaper to do it the old-fashioned way - so the phone does not make you more vulnerable on average. So, how much does it cost to surveil a person and listen in on their conversations for a year the old-fashioned way? $1k, $10k, $100k? If we assume $10k, then the level of security needed to protect against NSO-type surveillance across those 50,000 phones is 50,000 × $10k = $500M.

So, how far away is Apple from that? Well, Zerodium pays $1.5M per iMessage zero-click [1]. If we assume NSO burned 10 of them, infecting a mere 5k phones each with a trivially wormable complete compromise, that would amount to ~$15M at market price. Adding in the rest of the work, it would maybe cost $20M altogether, worst case. So, if you agree with this analysis (if you do not, feel free to plug in your own estimates), then Apple has achieved ~4% of the necessary level and would need to improve by 2,500% (25x) to achieve adequate security against this type of attack. I think that should make it clear why things are so bad. “Best in class” security needs to improve by over 10x to become adequate. It should be no wonder these systems are so defenseless.

[1] http://zerodium.com/program.html
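Spelling out the arithmetic in the estimate above (every input is an assumption stated in the comment, taken at face value, not a measured number):

    # Reproducing the back-of-envelope adequacy estimate above.
    phones = 50_000
    conventional_cost = 10_000                 # assumed cost of a year of old-fashioned surveillance
    adequate_bar = phones * conventional_cost  # $500,000,000

    zero_click_price = 1_500_000               # Zerodium payout for an iMessage zero-click
    exploits_burned = 10
    other_work = 5_000_000                     # rough allowance for the rest of the operation
    attack_cost = exploits_burned * zero_click_price + other_work   # ~$20,000,000

    print(f"adequate bar:    ${adequate_bar:,}")
    print(f"estimated cost:  ${attack_cost:,}")
    print(f"achieved level:  {attack_cost / adequate_bar:.0%}")     # 4%
    print(f"needed factor:   {adequate_bar / attack_cost:.0f}x")    # 25x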


I am not convinced Apple wasn’t aware of what NSO was doing.

Governments want access to spy on people. Apple wants to market and sell a “secure” mobile device.

In a way, NSO provides Apple with a perfect out. They can legally claim they are a secure platform and do not work with bad actors or foreign governments to “spy”.

Hear no evil, see no evil. NSO’s ability to penetrate iOS gives powerful governments what they want and, in a way, may keep “pressure” off Apple to provide official backdoor access.


That would be a surprise to literally every person who works in Ivan's organization at Apple. This is message-board-think, not analysis.


How would Apple benefit from this conspiracy? Apple these days is throwing out the "secure" and "private" PR to differentiate themselves from their surveillance-capitalist neighbors. They would never do anything that hurts their PR, and they will always do the bare minimum of security and privacy needed to support it. If they knew of these vulnerabilities ahead of time, I am positive they would have patched them. This is not to say that they are doing everything they can, but I don't see your proposed conspiracy following profit-maximizing corporate logic. Apple isn't there to screw you; they're just there for the big bucks.


The benefit would be that they could still market the device as the most “secure” while at the same time ducking pressure from governments to allow backdoor access.

If Apple truly secured the OS to the point where even state-level actors could not get access, they would attract unwanted attention, regulation, etc. from powerful governmental agencies.

Apple has also been interestingly silent/vague in its response to this story.


This thread devolved into a debate over C versus Rust or other memory-safe languages.

1. Blaming the language instead of the programmer will not lead to improved program quality.

2. C will always be available as a user's language. Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages.

3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code, of which a ton is written in C. Programmers in the future who are schooled only in memory-safe languages may not be able to approach C as a learning resource, and may in fact be taught to fear it.

There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors. It is amazing how easily that work is ignored in these debates. Find me a buffer overflow or use-after-free in one of djb's programs.


> 1. Blaming the language instead of the programmer will not lead to improved program quality.

I disagree. Blaming the language is critically important. Tony Hoare (holds a Turing Award, is a genius) puts it well.

> a programming language designer should be responsible for the mistakes that are made by the programmers using the language. [...]

> It's very easy to persuade the customers of your language that everything that goes wrong is their fault and not yours. I rejected that...

[0]

> Users will still write programs in C for their own personal use that are smaller and faster than ones written in memory-safe languages

Users will always write C. No, their programs won't always be smaller and faster.

> 3. Those in the future who are practiced in C will have a significant advantage in being able to leverage an enormous body of legacy code

Much to society's loss, I'm sure.

> and may in fact be taught to fear it

Cool. Same way we teach people to not roll their own crypto. This is a good thing. Please be more afraid.

> There is a tremendous amount of C code that DOES NOT contain buffer overflows or use-after-free errors.

No one cares. Not only is that not provably the case, nor is it likely the case, but it's also irrelevant when I'm typing on a computer with a C kernel, numerous C libraries, in a C++ browser, or texting someone via a C++ app that has to parse arbitrary text, emojis, videos, etc.

> Find me a buffer overflow or use-after-free in one of djb's programs.

No, that's a stupid waste of my time. Thankfully, others seem more willing to do so[1] - I hate to even entertain such an arbitrary, fallacious benchmark, but it's funny so I'll do it just this once.

[0] http://blog.mattcallanan.net/2010/09/tony-hoare-billion-doll...

[1] http://www.guninski.com/where_do_you_want_billg_to_go_today_...


Guess who is "rolling your crypto" for you. The same guy who is the subject of your "fallacious benchmark". He writes in C. The problem with software is not the languages (seems a new one is created every month), it is a lack of competence in using them. (Over)Confidence is very commmon, competence not so much. I am thankful that Dennis Ritchie created C; I appreciated his humility and I do not blame him for others' mistakes.


It's not a lack of competence, because evidence shows that even the most competent C programmers can't write C without security issues - literally every nontrivial security-relevant product written by great C programmers has had them; their competence apparently was not sufficient. I'd argue that nobody, including Dennis Ritchie, is "competent enough" to write secure C - even the best people slip sometimes, in ways that (in C!) cause exploitable holes.

"The same guy who is the subject of your fallacious benchmark. He writes in C" and that crypto code, which we used was more secure than rolling our own, but it still is riddled with security bugs because he writes in C (e.g. Heartbleed) - and despite the fact that those particular bugs have been fixed, that code still isn't trustworthy enough just because it's written in C, likely has more issues undetected and needs to be rewritten and replaced eventually with some not-C solution that can remove a whole class of bugs accidentally causing arbitrary code execution. Sure, you'll still have logic bugs - but a logic bug in iMessage image parsing has much lower consequences than a memory safety issue in that same image parsing.


I think you are misattributing the source of Heartbleed. Can you point out the security issues in NaCl? It is written in C. According to this "no one can write C" logic, there must be bugs, because "no one can write C". https://nacl.cace-project.eu/

The other bizarre aspect of this logic is that not only is the author of the code irrelevant, but apparently the task is, too. It would appear to apply to even the simplest programs. The only factor that matters is "written in C". I use sed every day. It's written in C. Show me the bugs. I will probably be dead before someone finds them. Will I be using a "memory-safe" sed before then?


Saying "show me the vulns in this codebase" over and over is not a good argument.


Whereas saying "no one can write C without bugs" over and over is a good argument.

It's hyperbole. If the argument were "few people can write C without bugs", that would be much easier to digest.


OK, but I didn't say that no one can write C without bugs. I said that blaming languages is good, that we'll all lose due to people continuing to use C, that C programs aren't inherently smaller and faster than other languages, and that people should treat writing C as they would treat writing crypto.


To clarify, I know you may not have said "no one can" but plenty of other HN commenters are saying exactly that on a regular basis. Thank you for refraining from repeating this absurd hyperbole.

C programs are not inherently smaller and faster, but in practice this is usually the case. Can you guide me to some Rust programs that are smaller than their C counterparts? The thing that holds me back from experimenting more with Rust is the (apparently) enormous size of the development environment relative to a GCC toolchain.

The number of downloads from crates.io is questionably large, and some of the binaries I have produced were absolutely gigantic. The largest executables I have ever compiled. Crazy.

We do not "lose" if people keep writing in C as long as its the right people. The right programmer for the job. All programmers are not created equal no matter what languages they use. Absent professional certifications and enforceable quality standards, perhaps the world of writing software for use by others needs an ethos something along the lines of "code within your means". Memory-safe languages are great but it seems like they just enable people to become far too ambitious in what they think they can take on. This is no problem at all unless and until they start marketing their grand creation to undiscerning users who are none the wiser. (This is of course the general idea behind the "dont roll your own" meme. However, I do not think it should be limited to cryptography.)


> Thank you for refraining from repeating this absurd hyperbole.

To be fair, I wouldn't quite label it as "absurd", though it is hyperbole. With near-extreme levels of discipline you can write very solid C code - this involves having ~100% MCDC coverage, using sanitizers, static analysis tools, and likely outright banning a few functions. It's doable, especially if your project doesn't have to move fast or has extreme requirements (spaceships).

> Can you guide me to some Rust programs that are smaller than their C counterparts.

Rust has a big standard library compared to C++, so by default you end up with a lot of "extra" stuff that helps deal with things like Unicode, etc. If you drop std and dynamically link your libraries, you can drop a lot of space and get down to C levels.

There are a number of examples like this: https://cliffle.com/blog/bare-metal-wasm/

> is the (apparently) enormous size of the development environment relative to a GCC toolchain.

I can't really relate to this. I have a 1TB SSD, 32GB of RAM, and an 8-core CPU. My Rust build tools are a single command to install, and I don't know or care how much space they take up. If you do, I don't really know why, but sure, that's a difference maybe.

> All programmers are not created equal no matter what languages they use

While this is true, it doesn't matter practically.

1. We can't restrict who writes C, so even if programmer skill was the kicker, it isn't enforceable.

2. There are lots of projects that invest absolutely incredible amounts of time and money, cutting edge research, into writing safe low level code. Billions are spent on this. And the problem persists. Very very few projects seem to be able to achieve "actually looks safe" C code.

> Memory-safe languages are great but it seems like they just enable people to become far too ambitious in what they think they can take on.

I don't really see how Rust is any different from Python, Java, or anything else in that regard.


If you cannot write safe C and you need memory safety, why not just use Ada?

Restricting who can write C is another "extreme" idea in line with "no one can write secure C". I will not call it hyperbole, but I think it's absurd.

What we can do is be more cognizant of who is writing the software we use. (For example, I use software written in C by Robert Dewar, co-founder of AdaCore, called spitbol. A big part of why I use it is because of who wrote it, the code itself and its history.)

Not caring how much space something occupies is not something to which I can relate. I always care. I do not have unconstrained computers. Each has a finite amount of resources and I try to use them in a controlled and efficient manner. That means avoiding lots of large, amorphous software programmers use without question. For me, this works quite well.

Intentionally ignoring who writes the software I use does not make sense to me either. I think in a previous comment you mentioned Heartbleed. It seems that countless people using OpenSSL were relying on it heavily without ever bothering to investigate anything about its source. That to me was strange. We read comments from people who were "shocked" to find out who was managing the project. Total lack of curiosity. They never bothered to look. Not a great recipe for learning.


"An entirely separate area is surveillance and detection: Apple already performs some remote telemetry to detect processes doing weird things. This kind of telemetry could be expanded as much as possible while not destroying user privacy. While this wouldn't necessarily stop NSO, it would make the cost of throwing these exploits quite a bit higher - and make them think twice before pushing them out to every random authoritarian government."

Apple could do more spying (excuse me, "telemetry") "as much as possible", in addition to NSO's... because it would make the competitor's spying more expensive.

This could be a unilateral decision to be made by Apple without input from users, as usual.

Any commercial benefits to Apple due to the increased data collection would be purely incidental, of course.

Apple and NSO may have different ways of making money, but they both use (silent) data collection from computer users to help them.


This article doesn't seem to have a direction; it reads as a lump of refutations about how hard it is to maintain a secure system and how understanding we need to be throughout that process. What it doesn't actually address is security nihilism, so let's expand on the seed he plants in the final section:

> It’s the scale, stupid

This should 100% be the focus, not how admirable Apple's efforts to improve security are. Security nihilism is entirely about scale and about understanding your place in the digital pecking order. The only way to be 'secure' in that sense is to directly limit the amount of personal information the surrounding world has on you, and in most first-world countries it's impossible to escape this. Insurance companies know your medical history before you even apply for their plan, your employer will eventually learn about 80% of your lifestyle, and the internet will slowly sap the rest of the details. In a world where copying is free, it's undeniable that digital security is a losing game.

Here's a thought: instead of addressing security nihilism in the consumer, why not highlight this issue in companies? There's currently no incentive to hack your phone unless it holds valuable information that can't be found anywhere else, in which case you have more of a logistics issue than a security one. Meanwhile, ransomware and social-engineering attacks are at an all-time high, yet our security researchers are taking their time to hash out exactly how mad we deserve to be at Apple for their exploit-of-the-week. If this is the kind of attitude the best of the best have, it's no wonder we're the largest target for cyberattacks in the world.


> The only way to be 'secure' in that sense is to directly limit the amount of personal information that the surrounding world has on you

I may be misunderstanding you, but this is privacy, not security. The two are not completely separate, but that's another issue.


"Security nihilism" sounds like a dig, but actually, it's the same thing as "security realism". All security eventually fails, thus all security is eventually meaningless. Not only can you not have perfect security, but the entirety of security is just a hedge against the will of your opponent. Any kind of security - from a file cabinet lock to a nuclear weapon - only stops people less motivated than you. I don't care what thing you think you've created, it can be defeated, one way or another.

You know what you could do to make NSO's life harder, other than develop more security? Fight them. Have your politicians attack Israel for allowing the group to operate. Use sanctions, take a proportional response, condemn their actions at the UN, stop protecting them. Block Israel from the Apple Store. It's a more direct route, and more likely to succeed than making your goal "perfect security". (Would there be huge political challenges? Of course; but those are more approachable than "perfect security")


>You know what you could do to make NSO's life harder, other than develop more security? Fight them. Have your politicians attack Israel for allowing the group to operate. Use sanctions, take a proportional response, condemn their actions at the UN, stop protecting them. Block Israel from the Apple Store. It's a more direct route, and more likely to succeed than making your goal "perfect security". (Would there be huge political challenges? Of course; but those are more approachable than "perfect security")

But it's not just NSO; every reasonable country probably has people like them.



