Class Breaks (schneier.com)
189 points by thegeomaster on Jan 3, 2017 | 55 comments



This is exactly why I hate electronic voting. When I argue about it, people always tell me that it's easier to commit vote fraud with paper ballots.

Sure, it's easier for a single person to tamper with a single paper ballot or even a ballot box, but it is much harder on a large scale. Tampering with every physical box and tally in a country requires much more effort and organisation than exploiting a single vulnerability that applies to an endless series of identical machines.



The biggest actual risk to voting remains the party in power. Why go through the effort of fraud, when you can just throw out the results?

Similarly, propaganda is also more effective.


Why hide that you're changing the election result? Well, I can imagine cases where it's not worth the bother, but they hardly seem like the default.


My point is: look at pretty much every election where democracy actually left. There was nothing subtle about the fraud going on.

(Though I will confess I have not tabulated this, so I would be very curious to see full numbers.)


This is a tautology. You're saying that in every election where fraud was obvious ("democracy actually left"), fraud was obvious.

There is probably a lot more subtle election fraud that is never detected.


Hmm... I'm trying to say that in every election that was manipulated, it was not subtle fraud that was the problem. I have little doubt that some fraud happens. I just don't think it typically matters. In most cases where it was not detected, it had no bearing on the outcome.

Now, overt manipulation of the voting populace through propaganda or other means is very common and has definitely happened. Ballot fraud, though? Are there any documented cases of that being a thing that mattered?



Advances in mathematically verified software could help here - although mechanisms for verifying the manufacturing of chips all the way down the supply chain might be farther out.


I can't help but think this sort of issue far outweighs whatever concern we may have about any sort of bad AI actor. Every time I read about the OTA updates on Tesla cars, my spine tingles a bit at the possibility (which I can only assume is greater than zero) of some bad actor sending out code which turns those cars into ungoverned highway missiles.

If there's one lesson we can take from all the data breaches at top companies and of high-profile government officials, it's that we are terrible at securing software. The 21st-century Ted Kaczynski isn't going to need the post office; he's got an MBP and SSH.


That's why I love "Black Mirror". They tackled the risk of class breaks in the episode called "Hated in the Nation" (season 3, episode 6) and they did it in such a way that everyone can understand the problem, not just us computer geeks.


They've tackled a number of different social and technological issues very well. Big big fan of their Christmas episode!


> of some bad actor sending out code which turns those cars into ungoverned highway missiles

Yes, but think of all the conditions that have to hold:

a) Knowledgeable person

b) Motivated person

c) Evil or mentally unstable person

d) Determination and ability to overcome adversity.

And that person or persons then have to decide that Tesla is what they want to exploit, instead of another car manufacturer or some other target, and so on.

Compare this with someone who simply wants to take their car and drive it into a crowd.


If we're assuming independent hackers that simply have nothing better to do... Simply look at how many hackers do it already.

If we're assuming hackers that realize they can lock down a person's car in exchange for ransom... Simply look at how many hackers do it already.

If we're assuming a broad attack by a foreign entity... Simply look at how many hackers do it already.


> Compare this with someone who simply wants to take their car and drive it into a crowd.

You have to multiply the probability by the harm. Yes, most people will just drive their car into a crowd, killing - let's say - 10 people.

But the man who perseveres and drives EVERY controllable car into a crowd can kill thousands, if not millions.
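A toy back-of-the-envelope sketch of that "probability times harm" point, with made-up numbers purely for illustration:

    # Hypothetical numbers, purely to illustrate "probability x harm"; none of this is data.
    lone_attacker = {"p_per_year": 1e-2, "victims": 10}       # many capable people, small damage each
    class_break   = {"p_per_year": 1e-4, "victims": 100_000}  # far rarer capable attacker, fleet-wide damage

    for name, attack in [("lone attacker", lone_attacker), ("class break", class_break)]:
        expected = attack["p_per_year"] * attack["victims"]
        print(f"{name}: expected victims per year = {expected}")

    # Even at 100x lower probability, the class break dominates the expected harm.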


Yes, but it's infinitely easier (which is my point) for multiple people to each drive a car into a crowd, killing 10 people, at different times. The bar for deploying an attack on a large scale (by a single actor, not North Korea) is much higher, and the group of people who could potentially pull it off is infinitely smaller, for at least the reasons I mentioned.

Any one of us could drive into a crowd and kill 10 or many more people. Yet it happens extremely infrequently, compared to how many times it could.


How many times can one individual crash their car into a crowd vs force a hacked car into a crowd though?


Yes, there are many people in the world fitting all of a, b, c, and d.


There actually aren't. Problem is, it only needs one.


9/11


I call this the cyber terror paradox. Consider:

1. Computer systems that run important things (power, water distribution, logistics, financial networks, communication networks, military systems, air traffic control, individual plane/ship control, etc) are inter-networked and vulnerable.

2. The world is full of evil people that hate the West and want to see it suffer.

3. Nothing has happened.

How can all of these three things simultaneously be true?


You can apply similar logic to terrorism in general.

1. Western countries are full of soft, populated targets with minimal protection.

2. The world is full of evil people that hate the West and want to see it suffer.

3. Very little has happened.

It's not quite as strong, since 3 is a weaker claim, but the attacks we do see are pretty rare relative to what you'd expect.

In both cases, I think the answer is that #2 is wrong. A lot fewer people are out to get us than we're led to believe.


It's weaker overall since "cyber terror" could be continuous. If you wanted to carry out continuous terror attacks you'd need a shitload of dudes to run around with guns and bomb trucks, and you'd need to train, feed, and move them. With computers, none of that is a problem any more. So why don't you see strictly more cyber terror than regular terror, always?


You could just send dudes to the US with a small amount of cash and instructions to buy a gun and shoot up a public place. Terror doesn't have to be sophisticated. We're seeing how bad an attack can be with nothing more than a truck. That it's not happening constantly suggests a marked shortage of dudes willing to do these things.


Before 9/11 I wondered about this: clearly someone smart could do a lot more damage than we'd ever seen, and clearly part of the reason it hadn't happened was that the few, most disaffected people were not the people who had their shit together. But this reason didn't quite reassure me, and, well, now we know that it shouldn't have.


Very interesting read. My personal highlight (about the type of security we are moving into) is:

> It's a world where driverless cars are much safer than people-driven cars, until suddenly they're not.


It also means any political figure who "dies in a self driving car accident" will instantly become part of a conspiracy theory.


The case of Michael Hastings comes to mind.

http://nymag.com/news/features/michael-hastings-2013-11/


If we take a cue from nature, the answer is diversified genetics. This means different IoT manufacturers becoming as snowflaked as possible, to avoid diseases that affect other systems. When one system gets too large, it only takes a single virus to bring it down.

But this runs counter to the "don't roll your own crypto" philosophy. Crypto in this context can be any sort of important aspect of your device that you want to secure from hackers.

Right now it's a bad idea to do things on your own, but as systems get even larger and interconnected it may be your only defense.


I think you're missing the fact that a line of devices from a manufacturer is still a "class." If you have thousands or millions of devices out there with untested, DIY crypto, then that's still thousands or millions of devices rendered malicious when a vulnerability is found and exploited. At least vulnerabilities like Heartbleed were quickly patched and widely publicized. IoT vendors have been historically horrible when it comes to providing updates.


Maybe a better metaphor would be bodily responses to ailments. We usually think of immune systems as white-cell warriors, but the body has several ways to fight off disease.

Fevers are often created by the body to "burn out" the offending pathogen, keeping it from functioning properly and infecting further. Diarrhea is an attempt to purge the disease from the areas of the body where it thrives most.

So in this case: roll your own crypto and use known strong crypto. Don't rely on either one on its own.

Example: use end-to-end encryption apps like Signal, but if necessary, communicate in coded language that only the person on the other end would understand.
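A minimal sketch of that layering idea, assuming Python's third-party cryptography package; the codebook layer is a toy stand-in for "coded language", not a real security measure, and the vetted library underneath still does the actual protecting:

    # Toy defense-in-depth sketch: a pre-shared codebook on top of vetted crypto.
    from cryptography.fernet import Fernet

    # Agreed out of band; hypothetical phrases, purely for illustration.
    CODEBOOK = {"meet at the office": "the geese fly south at noon"}

    def encode_then_encrypt(message: str, key: bytes) -> bytes:
        coded = CODEBOOK.get(message, message)        # "coded language" layer
        return Fernet(key).encrypt(coded.encode())    # known-strong layer does the real work

    key = Fernet.generate_key()
    token = encode_then_encrypt("meet at the office", key)
    print(Fernet(key).decrypt(token).decode())        # -> "the geese fly south at noon"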


This seems related to the concept of Software Monoculture that Bruce wrote about here: https://www.schneier.com/blog/archives/2010/12/software_mono...

It's a clear case for technological diversity. The hard problem is that economies of scale work against technological diversity. It's similar to what happens with monoculture in biology.


It's a catch-22, because diversity reduces the potential for class breaks but also increases the probability that any given system is insecure in the first place. (Fewer resources can be devoted to securing each unique system.)


I don't think that necessarily follows. People deciding to make a new system with different security would factor that into the cost of the system. It's just as likely to result in fewer systems due to increased cost.


That's basically my point. The level of tech diversity is set by some kind of equilibrium between the cost of writing new code and the value of doing so.


I'm surprised this isn't getting much attention here.

There really isn't a technological solution for class breaks. Go big and you replace one kind of class break with another (a top-tier security system for hotel card readers just focuses the attacks on that system). Go small and it isn't cost-effective or plausible at scale (everybody install Gentoo! Compile your own kernels!).

Honestly, I'm stumped. We rely on these systems more and more to squeeze out every bit of productivity we can get, but that only leaves us at the mercy of our systems.

Is it even possible to fix this kind of situation before it really turns on us, or are we doomed to suffer a cataclysm we cannot yet imagine?


I think we can lower the risk of these kinds of attacks a lot by reducing the amount of "dangerous" code that runs on devices. There are several ways to do this: run stripped-down embedded OSes and software with only the bare minimum functionality they need to do what they do; transition as much C/C++ code as possible to Rust, Haskell, Agda, or whatever other higher-level language suits the requirements of the application; and use containers/capabilities/sandboxes to keep potentially unsafe code from interacting with the rest of the system in ways that aren't absolutely necessary to the application.

So, we do have some technical solutions that could greatly reduce the risk. I think the main barriers preventing these techniques from being used as widely as they should right now are that 1) secure and insecure products are basically indistinguishable to consumers, so there is little economic incentive on the part of product manufacturers to ensure their products are secure, 2) we have an entrenched programmer culture (especially in systems programming) that thinks the solution is to "be more careful" and "don't hire dumb programmers", and 3) we have a whole lot of really complicated C and C++ programs that work really well and people are happy with them whereas if you want to ship a product with, say, all of the code from the OS up written in Rust, you're sort of blazing your own trail and you'll have to write a lot of it yourself.
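A rough sketch of the capability/sandbox idea in Python (illustrative only; the names are made up, and a real system would enforce this at the OS or language level rather than by convention):

    # Capability-style sketch: untrusted code receives a narrow handle, not ambient authority.
    class ThermostatHandle:
        """The only capability handed to plugin code: read the temperature, nudge the setpoint."""
        def __init__(self, device):
            self._device = device

        def read_temperature(self):
            return self._device["temp"]

        def adjust_setpoint(self, delta):
            # Clamp so a buggy or malicious plugin can't pick an absurd value.
            self._device["setpoint"] = max(10, min(30, self._device["setpoint"] + delta))

    def untrusted_plugin(handle):
        # The plugin never sees the network stack, filesystem, or firmware updater.
        if handle.read_temperature() > 25:
            handle.adjust_setpoint(-1)

    device = {"temp": 27, "setpoint": 22}
    untrusted_plugin(ThermostatHandle(device))
    print(device)  # {'temp': 27, 'setpoint': 21}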


The cure is worse than the disease. But it's coming. It will come as a devastating one-two punch: the end of general purpose computing devices and the rise of a publicly-identified Internet.

The former will be phased in as countries legislate it. Legislators won't be concerned with the slippery slope between abacus and iPhone; the courts will likely draw a distinction somewhere around anything north of "silicon-based electronics with the right resources for an IP stack". A few countries will be bastions of Free Computing, but many/most will draw up similar laws.

The latter will be held back by Metcalfe's Law until some near-global momentum can be built. It would likely take a simultaneous attack on many global corporations/political parties/beloved individuals, but it seems like that's just a matter of time. Faced with repeated attacks, the public won't accept "but we designed it that way intentionally" as valid.


Abandoning general-purpose computing is too crazy, as it amounts to throwing away software. The anonymous internet is a dead man walking, but Silicon Valley is as complicit in this as the government will be.

However, software engineering will change into a much more regulated environment. Writing software for the web will eventually become like writing software for the space shuttle: slow, rigorous and very expensive.

It will also kill the hacker culture, as hacking together a piece of software will become like hacking together a bridge: illegal.


> However, software engineering will change into a much more regulated environment. Writing software for the web will eventually become like writing software for the space shuttle: slow, rigorous and very expensive.

Or, it could bifurcate. Look at aviation. If you want to fly an airplane in the US, it needs to be something that has been approved by the government down to each nut, bolt and rivet. All onboard systems (mechanical, electrical, software) and every vendor providing them have undergone expensive and rigorous government certification, and you have documents showing it. You can't so much as change out a switch in the cockpit without going through an official approved service provider! But there's a totally separate "experimental" category carved out for people who want to build their own airplanes, where the rules are much more lax and there's far less oversight. As long as you keep good records and abide by some basic rules, you can pretty much build and fly whatever the hell you want. The major operating limitation is that you can't fly one for hire or carry people for compensation. By some great miracle, these separate, parallel tracks, certificated and experimental, co-exist successfully and planes aren't falling from the sky.

It may end up this way for computing: A "serious software" regime for most consumer- and business-purchased products, subject to slow, rigorous, expensive regulation, and a parallel "hobbyist" track for at-home, non-commercial hacking and making where anything goes.


IANAL but I assume that if I purchased a tract of land and built a bridge on it, I could create it as I please, given that I would be liable for damages should the bridge break and harm somebody on my property, not unlike a slip-and-fall.

I for one would prefer most vital software to be developed slowly and methodically. We insist on strict standards for our bridges and roads - why don't we do the same for our information infrastructure?

Move fast and break things can break people too.


I would prefer software infrastructure be developed rigorously as well, as I think would most people. But that will lead to regulations or, at best, expensive insurance-company-mandated standards.

But before we celebrate its demise: the side effect will also be the same ones seen in the civil engineering world: creative ideas take decades to come into existence. As you mentioned in your other comment, there was a time when railroad building was exciting, and in that time, people died doing exciting engineering. Civil engineering tools are now orders of magnitude better than they were years ago, but there hasn't been an explosion in creative structures, even at the lowest level. I also think many of the perks (ie: salary) of the software engineering industry are intimately related to the 'move fast and break things' culture. When that leaves the industry, it may be less of a good riddance than you think.

This may be my perspective as an outsider, though; I don't work in software. But I have considered making the jump to software engineering, because 'moving fast and breaking things' sounds fun.

Fundamentally we agree, but I'm a pessimist.


Also agreed, but I don't think the "boringification" of civil engineering is all that bad either. I met a civil engineer recently. He is a mundane but serious professional. He has no patience for the shiny objects that the real estate developers try to distract other people with. Codes are upheld and enforced.

It's easy to grumble about regulations until their reasons are forgotten. Then they get repealed and the problems come back. The mortgage crisis is a prime example.

I don't think this is such a bad future for software either. Much of my job writing software for higher education involves adhering to complex policies. These policies are necessary to remain FERPA/HIPAA compliant, which are necessary for their own reasons. Playing fast and loose is taking out a debt for an uncertain future.


The "boringification" of computers will bring us back to the 70s.

If we have to prove _everything_ on your computer, from your calculator to Pokemon, how much do you think a license of Windows will cost? $250,000?

Linux/FreeBSD/OpenBSD/Minix will be dead (no one to sponsor certification and then release the result to the public).

Crazy "best-practices" (change passwords every couple days, password must include symbols, letters, numbers, upper-case and lower-case in a random order, but be no shorter or longer than eight characters)

Must have a full team of lawyers to prove that everything done fits the letter of the law, and that any hacks are not your responsibility.

"Shinyness" is what allows you to have free VSCode and Atom.

Sure, I miss the 70s, when you could get a text editor measured in bytes (but, btw, Electron is probably more secure than 70s Unix), yet I definitely like the cost/convenience of modern software.


> IANAL but I assume that if I purchased a tract of land and built a bridge on it, I could create it as I please, given that I would be liable for damages should the bridge break and harm somebody on my property, not unlike a slip-and-fall.

Depends on where you buy your land, most likely. If you buy in an unincorporated territory, you might get away with whatever if there are no county or state regulations. If you're buying in a city/town/village, you can expect that basically any construction project will be subject to code requirements and permitting. To get permitted in such a situation would typically require an appropriately engineered design with signoff from a civil PE.


I wonder what the programming equivalent of an interior designer[1] or a nail technician[2] is, and how much time until one will be required to spend a few thousand bucks and a couple of years in some bootcamp to be allowed to publish them.

[1] http://www.dopl.utah.gov/licensing/commercial_interior_desig...

[2] http://www.dopl.utah.gov/licensing/cosmetology_barbering.htm...


> how much time until one will be required to spend a few thousand bucks and a couple of years in some bootcamp to be allowed to publish them.

... while risking ending up in the "B Ark" anyway ...

http://www.geoffwilkins.net/fragments/Adams.htm

(obligatory reference to Douglas Adams; nail technicians and interior designers are not so different from telephone sanitizers and hairdressers)


> However, software engineering will change into a much more regulated environment. Writing software for the web will eventually become like writing software for the space shuttle: slow, rigorous and very expensive.

I wouldn't be as pessimistic as that. Increases in regulation and quality standards will happen along with increases in programmer productivity as tools further improve. We're still living in the stone age of programming languages and system infrastructure.


Agreed. People usually aren't aware that the 19th century was a sprawling graveyard of industrial accidents and train derailments, often because the regulations introduced as a result of these tragedies have turned them into a distant memory for an older generation and a "grandpa story" as far as young-uns are concerned.


Most technological advancements go through a pioneer era, then a haphazard status quo, then government regulation.

See: aviation (ICAO, FAA), automobiles (WP.29 [1], NHTSA), wireless transmissions (ITU, FRC/FCC)

[1] https://en.wikipedia.org/wiki/World_Forum_for_Harmonization_...


As he said, we do have some experience with class breaks. Off the top of my head: Palm Pilots had a little button on the back to reset them. Chromebooks have a read-only OS/boot loader. Old software licensing dongles covered the chips with epoxy. We use VLANs to separate classes of traffic. x86 uses rings to separate classes of code. Firewalls and QoS systems separate classes of traffic.


A better analogy would be crops and the dangers of a monoculture. A class break is akin to a disease that an entire crop is susceptible to. See, e.g., the boll weevil, or the reason the currently popular banana is at risk.


For what it's worth, the more idiomatic term in computer security is "bug class". I know "class break" is potentially the better, more general term, but nobody I talk to ever really uses it.


In the insurance industry it is called Category Risk. As in, if you insure houses instead of cars, you get lots of exposure to a hurricane.


Very similar argument to this lovely CGP Grey video: https://www.youtube.com/watch?v=VPBH1eW28mo




