How well do cars do in crash tests they're not optimized for? (danluu.com)
295 points by weinzierl on June 30, 2020 | 343 comments



The article reminds me of a story from a friend who was an engineer at a major water heater company. Modern electric water heaters are already fairly efficient, but every model gets tested by the government to be given an efficiency rating.

The test consists of replacing the anode with a rod that has a series of temperature probes to measure the water at different levels in the water heater. Since the water forms layers of different temperatures, you need to measure at multiple locations to know the overall energy in the tank. Then a series of water draws are performed with pauses between them, and the total energy used is compared to the energy delivered.

Our engineer realized that if he could put the temperature layers at just the right locations, the calculations with the limited number of probes could be skewed. And this can be done by tweaking the height of the two heating elements, just moving them a few inches.

The first time he tried this, an internal test came back at 101% efficiency. A little more tweaking and the test returned a consistent 99.x%, which was respectable.
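As a rough sketch of why sparse probes are gameable (all numbers and probe positions invented, not the actual government procedure): two tanks can hold the same total energy, but if the hot layers sit exactly at the probe heights, the estimate reads high.

```python
# Toy model: the test assumes each probe's reading represents its whole
# slice of the tank. Numbers are invented for illustration only.

def energy_estimate(temps_by_height, probe_heights, slice_volume):
    # Each sparse probe "stands in" for slice_volume layers of water.
    return sum(temps_by_height[h] * slice_volume for h in probe_heights)

# Tank discretized into 6 layers (bottom to top), arbitrary units.
uniform = {0: 50, 1: 50, 2: 50, 3: 50, 4: 50, 5: 50}
# Same total heat as above, but hot layers placed exactly at the probes.
gamed = {0: 40, 1: 60, 2: 40, 3: 60, 4: 40, 5: 60}

probes = [1, 3, 5]   # the test's sparse probe heights
slice_vol = 2        # each probe represents 2 layers

true_energy = sum(gamed.values())                    # 300, same as uniform
measured = energy_estimate(gamed, probes, slice_vol)  # 360: reads 20% high
print(true_energy, measured)
```

Moving the heating elements a few inches is, in this toy model, just shifting which dictionary keys get the 60-degree layers.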


Isn't this a symptom of the exact testing procedure being a white box that rarely changes? Many students would score 100% on the ACT if the entire question and answer set were widely published.

Also, that seems like a silly test. Couldn't you just measure the differential after mixing? TFA mentions it too: noise is a factor. For the water-heater example, the limited number of probes introduces a known amount of potential variance.

Rather than a straight % efficiency, a more useful number to report would include a confidence interval: it's 99% ± 3%. Consumers can't do stats, but at least it's more honest. TFA touches on this but doesn't draw any conclusions other than "we need to look at the source data and do our own analysis".
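A toy way to get that ± figure (invented temperature profile and probe layout, purely illustrative): jitter the probe heights and see how much the efficiency estimate moves. Note it can come out above 100%, just like the 101% in the story above.

```python
# Monte Carlo sketch of the error bars a sparse-probe test introduces.
# The profile and probe positions are invented for illustration.
import random
import statistics

def profile(h):
    # Hypothetical stratified tank: 45 below mid-height, 60 above.
    return 45 + 15 * (h > 0.5)

def estimated_efficiency(probe_offset):
    probes = [0.2 + probe_offset, 0.5 + probe_offset, 0.8 + probe_offset]
    measured = sum(profile(h) for h in probes) / len(probes)
    true_mean = 52.5  # exact average of this step profile over [0, 1]
    return measured / true_mean

random.seed(0)
samples = [estimated_efficiency(random.uniform(-0.05, 0.05))
           for _ in range(1000)]
mean = statistics.mean(samples)
spread = 2 * statistics.stdev(samples)  # report ~2 sigma as the ± term
print(f"efficiency ~ {mean:.0%} +/- {spread:.0%}")
```

The point isn't the specific numbers, just that the interval width is a property of the test design, so the testing body could publish it alongside the rating.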


This phenomenon also appears in the testing of crash helmets. The DOT (US government) helmet standard is easy to game since it's very strict on how and where the helmets are tested. The Snell Foundation standard is a bit better since the humans testing the helmets are allowed to look for weak points to target for anvil drops. The new FIM standard adds more variability to the tests to better simulate the varying angles of crashes.

https://youtu.be/uLj9WfoWPSQ?t=413


Crash test dummies have basically this problem also. They're designed for realism in certain very narrow ways, and then the very small number of approved dummies are used for testing car safety.

The industry has made a bit of progress, surprisingly unprompted by regulations - female and child dummies came into circulation before they were required in tests. But overall, testing is still run against a tiny handful of body types which move 'realistically' in only a few regulation-guided respects.


I think some of this falls into the simulation paradox: the more accurate the simulation, the closer its cost gets to the cost of the thing being modelled. And it's a quadratic relationship in most cases, so at some point meaningful increases in simulation accuracy cease to be economically viable.


Yeah, but in the words of RyanF9, "The US government can afford a BB gun", so there's no reason that DOT can't test helmet visors.

The main reason the DOT standard is so bad is that it's mired in bureaucracy and managed by a severely underfunded organization.



How did I know this linked to some kind of general principle with a cool short name even before reading or opening the link?


Well, there are several such:

https://en.wikipedia.org/wiki/List_of_eponymous_laws

Though I'm not aware of a law stating that for any given principle there is an existing eponymous law.


Haven’t you heard of DredMorbius’s Law of Eponymity?


But bananamerica really discovered it.

... oh, wait: https://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy


The patent system in a nutshell, ladies and gentlemen.


Something tells me there’s a XKCD for that, or at least it should be.


It means the test procedure has vulnerabilities which need to be patched.


Serious question, how is this different from what VW did? There was a lot of talk about complicit engineers vs. shady management in that discussion. Did this ever make it to production?


VW actively detected if the car was on a treadmill testing setup and electronically changed the engine parameters to make it run cleaner.

The water heater was designed with the test in mind but functions the same regardless.

VW was clearly cheating whereas the water heater is taking advantage of known deficiencies in the test (is that cheating?).


I'm not sure that's an accurate description of VW's cheat.

VW knew the parameters of the test (stationary car, no steering input, prescribed throttle inputs, etc). And they configured the car to pass the test.

The water-heater engineer knew the parameters of the test (number and location of probes). And he configured the water heater to pass the test.

Same-same. In both cases, an engineering team willfully committed fraud to improve their sales figures.

The only differences are a bit pedantic. VW's "configuration" was more elaborate. But, both groups gamed the test.


You might misunderstand what VW did.

Apparently it is common for cars to have a "test mode" bit, because they run on a dyno (only one set of wheels spin), and the car may disable certain traction control systems, etc.

VW changed the way other systems (the engine itself) function in this mode. So even if everything is exactly the same on the road (e.g. 30mph in a straight line for hours), the car will perform differently and have different emissions. You don't drive around on a dyno.

Optimizing for the test may go up to the line, but VW crossed it.


The differences are not pedantic at all. The differences are extremely significant.

The VW engineers designed the car to do different things while being tested vs. used by consumers. While being tested, the emissions system was on. While being used by consumers, the emissions system was entirely disabled.

The water heater engineers designed the water heater to do the same thing while being tested vs used by consumers.

If the water heater engineers designed the water heater to never turn on and keep the water at room temperature while being tested, ("works good boss, water heater efficiency is infinity percent") then yes, it would be reasonable to claim they did essentially the same thing.


> If the water heater engineers designed the water heater to never turn on and keep the water at room temperature while being tested

I think the difference between what they did and this is only a difference in degree and not kind though, right?


It is a difference in kind: VW dynamically detected and adapted behavior to the test. It would never operate in that way under normal conditions. The water heater example was completely static: it always behaved the same way, under test or not.


People obviously disagree since I was downvoted but I don't really see the black-and-white difference between the two.

In both cases there was an intentional design change to deliberately mislead a measurement, and the final product does not match the measured thing. Both adversarially cause the consumer's purchase to not be the measured item; no one gets a car with the measured emissions and no one gets a heater with the measured efficiency.


One added difference is that with the water heater, the intent may be deceptive, but they provided what was asked for and what they claimed. It is almost malicious compliance. Dickish gaming, and probably worth suing over, but I'm not sure it is criminal fraud per se.

It is like the joke about Soviet factory metrics for some item like nails. They set the quota on numbers and got useless ones better described as needles. They wised up a bit and set the quota by weight the next month, and got one useless massive nail instead. It isn't even too far from the truth, given that actual management involved things like a train line circularly shipping coal between depots, instead of retrieving it from the source or distributing it to end users, to boost their "ton-kilometers transported" metric.

VW meanwhile had a covert illegal configuration while claiming good mileage and freedom from urea maintenance. The claims are outright false - the car provides two of the three benefits, not all three.


That kind of mentality was the joke in many eastern eu movies in the 80s. Pretty much the standard comedy in Polish movies from that time.


It's not the same at all, and I don't think the differences are pedantic.

The water heater runs the same regardless of whether it's being tested or is running normally in someone's home. Really, this test "cheat" exposed that the test itself was not measuring what it thought it was measuring; and hell, a manufacturer could accidentally cheat with their design.

The car would run differently if it detected it was in a test situation. In the real world, with a real driver, it would run in a way that would give different test results (if you were in a position to run the test while it's being driven).


What was more heinous about the VW case IIRC is that the cleanliness gains the cars made on the treadmill setup were enough to push them into compliance with regulations, which the engines normally didn't meet.

"Cheating" a better efficiency rating is significantly less reprehensible than cheating a legal obligation which was legislated for public health and environmental reasons.


But the efficiency ratings also exist for public health and environmental reasons. Why does it matter to you that one is a mandate? They're both intentionally deceiving regulators and consumers (vs accidentally acing the test without cheating). Both are fraud.


I do think that manipulating a purely instructive measure is less extreme than manipulating a compliance test; consumers can seek alternate tests and reviews, but the state emissions test has special status even if a dozen other tests give a different result. That said, I believe Energy Star ratings affect tax rebates and electric bills, and they're required to be printed on products - so that's not really an arbitrary test.

There are other differences here too, I think. The water heater trick is passive manipulation that stays in place at all times, which limits how far from "real" performance it can get. And per the story, it seems more like "teaching to the test" than "cheating". That is, Volkswagen consciously moved away from the mandate outside of testing. The water heater was (potentially) as energy-efficient as they could design, with the test score manipulated on top of that.

None of that makes it harmless - if "as good as you can make" doesn't hit standards without manipulating them, that's still a problem. But I do find it less galling than "intentionally worsens emissions outside the test bench".


The flip side of the water heater test is, you could game the test the other way too. Making your water heater look worse than it is. Would you do that? No.

The difference between the water heater and VW is that the water heater manufacturer is providing a representative sample, and VW was not. It'd also be dubious to say that the water heater company is acting in bad faith, whereas VW's bad faith rose to the level of criminal. On the other hand, Volvo appears to be acting in strictly good faith.

Bad faith for a crash test would be crafting a silver plate model for testing. Reminds me that's what my uncle said the power supply manufacturer he worked for did.


The difference is that the water heater test itself was flawed in that it depended on arbitrary design decisions that have nothing to do with efficiency. Two completely innocent manufacturers could build water heaters with the exact same real-world efficiency, but score fairly differently on the efficiency ratings just due to how they're designed.

While I agree that this particular water heater manufacturer was doing something shady in order to get the best score, at least they weren't selling a product that did something differently while under test conditions vs. in real-world usage. They merely realized that the test itself had wide error bars, and designed their heater to "err" in the positive side of those.

VW, in contrast, sold a product that lied to the testers about its emissions in order to pass certifications, while in real-world driving would behave in a way that would not pass muster.

And to me I think that's the key: VW's cars intentionally behaved differently depending on if they were being tested or if they were being driven in normal real-world usage. This water heater behaved the same regardless of whether it was being tested or was heating water in someone's home.

In a way I think of this in academic terms. The water heater manufacturer studied the SAT to learn what kind of questions were going to be asked. VW stole the answer key to the test and memorized it.


One is polluting at the tailpipe, the other is only polluting at a power plant.


Many important differences.

1. Actively detecting test and behaving differently. It's like stealing a test vs teaching to the test.

2. Lower stakes. Health issues are much more serious than inefficiency.

3. It affects the buyer. It's more acceptable for the buyer to be cheated than everyone around them.

4. People could have created these layers by accident. Favouring those who got lucky is unfair.

Honestly I think basically all my gadgets exaggerate how energy efficient they are, by tuning parameters for tests that don't correspond to the real world. My dishwasher has an energy efficient mode, the manual literally says it's just for compliance and recommends other modes. It's just a fact of life.


This is omnipresent even where regulators aren't involved: every graphics card benchmark out there is 'manipulated' relative to real world performance. At this point it's so universal that I don't think anyone is even fighting it - as long as everyone games benchmarks roughly the same amount, the relative scores stay usable.

Your point about fairness and passive design is the one that makes me view these cases differently also. In the anecdote, the product being tested was the same one being sold, and there's no sign the heater was worsened to improve test performance. The designers just picked the best-scoring option among some reasonable configurations. (Frankly, once they noticed that issue, what were they supposed to do? Pick the worst-scoring, or pick the spec out of a hat?)

In the VW story, the test-bench vehicle was fundamentally different from the market vehicle, and the road version was designed to behave worse on the metrics to get other gains. I happen to know someone who bought a diesel Jetta specifically because it was more eco-friendly than other options, and I think he'd draw a clear line between tuning for test metrics and VW consciously lying to their buyers.


It's interesting you mention graphics cards, because that very behavior has led to the gaming community favoring benchmarks derived from a handful of current-gen games' min/max/avg FPS over so-called synthetic benchmarks. It only took a handful of instances of companies baking in "benchmark" modes that get triggered when certain benchmarks are detected for people to start discounting those benchmarks in favor of more organic measurements.


> Honestly I think basically all my gadgets exaggerate how energy efficient they are, by tuning parameters for tests that don't correspond to the real world.

Having measured all my gadgets with a Kill a Watt meter, that's not my experience. It seems that many gadget-makers realize that people don't really care about power draw, so they just slap the maximum draw onto the specs.


The big difference with VW is that they put in a mechanism that detects the test and completely changed the behavior of the car for the test.


So they _intentionally_ set themselves up to lie to regulatory agencies and consumers about real world efficiency. That honestly sounds basically the same to me. In both cases the tests are poor approximations, and in both cases someone could accidentally optimize the test, and in both cases someone did it intentionally to deceive people.


Imagine the water heater company didn't know they were gaming the test. They first design a water heater and it gets a B- on the test. Being overachievers, they work hard and submit a second design that gets an A+. They might not realize that both heaters are basically the same with the only difference being some heating element spacing that works better for the test. Both times they submitted a legit design that was the same they'd provide to consumers. Sure, we know the engineers knew what was happening, but we can see how one might innocently arrive in the same scenario. I think it's safe to say the test is flawed.

The VW test is not like that. There's no way to innocently arrive in the scenario they did. They did not game a bad test, they literally lied to the test administrators. The car ran in "clean mode" only if it was in the test environment. If the car ran like it did on the road, they'd have failed (which is how they were caught, with a mobile testing setup).

One of the points in the article is that regulating for safety based on known testing conditions is going to result in over-fitting for the test. The water heater company is guilty of intentionally over-fitting. VW just straight up lied. I don't think those 2 actions are equal, VW is worse, but I agree that both are dishonest to a degree.


> Imagine the water heater company didn't know they were gaming the test.

We don't have to. In this case we have clear admission of intent. The intent to deceive is what makes it fraud and not just being wrong.


My point isn't that what the water heater designer did was perfectly ethical, just that it's clearly distinct from what VW did.


It's distinct in technical detail, but not in the broken ethics rule against intent to deceive.


I think the ethics are still different. In the case of the water heater, they deceived the test to make their heater seem more efficient than it really was. However, the heater still does its job, it just costs a bit more to run (and emits some extra CO2).

What VW did was to take a clean-burning car and disable the pollution controls under normal driving conditions, for performance reasons. So while to the consumer it seems deceptive that VWs get better performance than they should, given how clean they’re supposed to be, in reality the cars are illegal and spewing toxins they were supposed to be removing from the exhaust. This makes the cars not only a pollution source but a health hazard to people living nearby.

To get on the same level of VW, the water heater would have to be doing something like emitting low levels of carbon monoxide into the home while having a feature that avoids doing that in the laboratory. In other words, reckless and willful disregard for human health and life.


There's no intent to deceive necessary in the water heater example. The water heater company could have sent it in with a note to the regulator saying "we moved the second element up, because we believe it will perform better on your test" and the regulator would likely just accept it instead of redesigning the test.

Also, for the water heating one, there's a plausible reason for the regulator to care about the discrete measurements rather than the total amount of thermal energy in the water. Hot water at the top of the tank is more valuable, because it's used first and less likely to be wasted, so you could weight it more heavily in a test. There's no parallel for the VW test cheating. No indication that's what happened here, of course.


There is no deception and no ethical issue. The nature of the test is known.

What placement of components would be ethical? Should engineers be required to be separated from the test parameters by a Chinese wall? Do they need to build the system for the worst result? Some middle ground? If the engineers are unethical, where is the line?

The obvious answer to optimizations like this are for the testing body to tweak the test procedure based on what manufacturers do over time. That provides an incentive to be more conservative or accurate.


The key legal difference is that VW literally behaved differently if a test was running. If they had simply designed a system that tested better than it actually performed (by optimizing the factors tested for), they would not have gotten in trouble.

If the water heater manufacturer had special heating elements that only ran during the test, it would be equivalent.


> in both cases someone could accidentally optimize the test

I think this is what I disagree with.

The water heater story is about a viable-for-market design which also optimized for the test. The equivalent for a car emissions test might be optimizing the transmission to reduce emissions at the specific speeds which will be tested. Those speeds could be sweet spots of the engine curve by accident, or they could be planned that way. I don't think that's necessarily right, but it's within the bounds of "natural" design for the product.

Instead of doing that, VW submitted something for testing which was fundamentally different from what went to market. Rather than being misleading, the test results were fundamentally irrelevant. Creating two completely different modes of behavior isn't something you could do by chance, and it means there's no real limit on how badly they could cheat.


There is a difference between memorizing enough information to ace a test and sneaking in notes that aren't allowed to ace a test. And people would also say it is wrong to steal a copy of a test and then memorize the answers to it to ace a test. But what if a professor uses the same test every year (maybe changing a few numbers but in a way that only impacts the calculations, not the way to solve it) and people study just the information needed to answer the test. Is that cheating?


If you cheat in most such tests it just means you miss out on actually learning what you were supposed to. If it wasn't your intention to learn anyway I guess that's fine.

Rarely the purpose of tests is to assure the public of your fitness (e.g. a driving test) and cheating those might be a problem, but if you cheat my CS 101 course and then struggle because you needed remedial classes but the cheated test means you don't get them that's your problem.


Another aspect is the incentives. Most discussion here is about the cheating itself, and not the reasons for it. I may not learn much from just writing about a degree I don't really have on my resume, or roles I never worked at, and experience I don't have. But I can get paid a lot more by doing so.


There are a few other issues with cheating, such as devaluing a degree for all others who didn't cheat to earn it.


Morally, they both seem to fall in the same category. Legally, it might be a complicated question.


That sounds exactly like the third sentence of the article.

> Sun managed to increase its score on 179.art (a sub-benchmark of specfp) by 12x with a compiler tweak that essentially re-wrote the benchmark kernel.


Yes, but you're talking about what VW did vs what Sun did, but the person you're replying to is talking about what VW vs what a company that makes a water heater does.

I agree that what Sun did is very similar to what VW did, with the exception that VW's increased emissions (statistically speaking) killed people, and what Sun did likely had no health impact on anybody except a few hurt paychecks.


Sort of, except that emissions testing is a regulatory requirement.


VW's vehicles did not meet the standards required when used every day. They only met the standard during testing.

The water heaters don't sound like they'd fail any given test.


With the following question, I'm not absolving VW from criticism. With that in mind:

Why are we not holding those doing the measurement accountable as well?

If you produce a test that can be gamed and your job is to test things to meet consumer expectations, you've failed at your job.

After all is said and done, what is a better outcome: a) VW is punished for gaming the test b) the test is significantly harder to game

With (a), we have only one fewer manufacturer gaming the tests, VW. With (b), we have tests that none of the manufacturers can game any longer, or that will at least take time to game. The testers should be expected to always be two steps ahead.

This is not unlike whitehat/blackhat security engineering. We should pay bug bounties to teams that successfully exploit the tests and we should be actively running red team drills.

https://en.wikipedia.org/wiki/Red_team


What VW did is similar to Uber's Greyball system, in that they give a different experience to the regulator rather than giving everyone an experience that is tuned to what a regulator might hope to see.


Is that not outright fraud? Just because you can hack something doesn't make it right or legal.


The system incentives are based on outcomes, not inputs. People are not benefiting from pointing out flaws in the system. They are benefiting from exploiting the flaws.


How far out of optimal (for testing) position would you require their water temperature layers to be?


Reminds me of the Ferrari fuel injection controversy in F1 last season. The gist of it is the theory that Ferrari were cheating the maximum fuel flow limits by tuning their injection system to inject more fuel when the fuel-flow sensor wasn't taking a reading.

https://www.motorsport.com/f1/news/analysis-fia-settlement-f...


There was a scandal a few years back when a Kaggle team did something similar with ML. They were treating the competition as a black box and optimising for the unknown dataset instead of actually building better AI algorithms.



It goes back way further than that; the incidents I am recalling involved academic teams and, I think, ImageNet. It happened quite a few times.


Out of curiosity, why would the water form layers of different temperatures?


I think it's because with a modern water heater, especially electric models, the heat loss is so low that convection often stops completely, so the water in the tank doesn't mix. I think you can also get situations where the water at the level of the heating elements gets hotter and hotter while the water away from them stays cold. My water heater's manual strongly suggests adding a mixing valve for that reason.


> "water at the level of the heating elements gets hotter and hotter and the water away from it gets cold."

Eh? How come that temperature differential doesn't cause convection? The colder water being more dense and sinking down, the warmer water being less dense and rising up.

If the heating elements aren't at the bottom, specifically for this reason, how come they aren't?


> If the heating elements aren't at the bottom, specifically for this reason, how come they aren't?

There are typically two heating elements, one near the bottom and one further up. You can use the higher one if you only need a half-tank of hot water.

I'm not sure if this is the complete answer to your other question, or if sometimes conditions are such that convection occurs extremely slowly despite a substantial temperature difference.


I think, perversely, it's not the density profile in the vertical dimension that drives convection but the horizontal dimension. A hotter, thus lower-density, but perfectly flat homogeneous layer can't displace the colder, denser layer above it, because the pressure between the two layers is uniform.


How can it become a perfectly flat homogeneous layer, from a heating element which looks like this[1]?

It's your claim "the water at the level of the heating elements gets hotter and hotter and the water away from it gets cold" which is surprising me; water being quite a good conductor of heat, and "efficient boiler" suggesting that the tank will be insulated so not much heat is lost to the outside world, so the water away from the element should warm up by conduction from the other water faster than it gets cold from losing heat through the insulation (in my head). OK maybe I can imagine that in a fixed pressure with no room for expansion, the warm water cannot be less dense, so there can't be much convection - but then won't the heat radiate and conduct outwards in a "sphere" from the heating element including up and down, and not make any layers?

And here[2] is a video of Thunderf00t putting an infra-red thermal camera on a closed water bottle with a Peltier cooler attached to the side of it, showing that the cold water does sink; there is some convection happening with a cold "heating element" halfway up the "tank" on one side. (But there is compressible air at the top.)

Maybe it's that you don't want to run a boiler long enough for all the water to get to a uniform temperature before you can get warm water out of it, and if you don't then you risk getting super hot and cold water unmixed from "impatience" more than anything else?

[1] https://ae01.alicdn.com/kf/HTB1_U_RSpXXXXbYapXXq6xXFXXXy/DC-...

[2] https://youtu.be/p9i1mhNsYXQ?t=903


> but then won't the heat radiate and conduct outwards in a "sphere" from the heating element including up and down, and not make any layers?

My suspicion is that part of the answer is that the viscosity of water drops with temperature. Rising hot water in a warm layer hits the cold layer above it, spreads out, then turns down. Also, water's heat conduction is middle-of-the-road compared to non-metallic solids, but it has high heat capacity and is a liquid.

Complicating things, stratification (at least in a well-designed water heater) happens rarely, so it's easy enough to set up conditions where it doesn't happen. But of course it's a known problem, and not a totally solved one either. I suspect that simplified models probably don't display the behavior.


I've maintained that the water heater is the only piece of equipment that is 100 percent efficient. Glad to know that is not the case.


The whole car safety topic is very confusing. Car accidents are the first or second leading cause of death in the US for people in the 20-40 age group [1]. Probably lower in Europe, but still one of the leading ones. I was thinking about how I can make myself safer.

Which car brand/model is better? Which seat is better for passenger? How much riskier it is to drive during the night? In the rain? How dangerous is it to overtake on road with 1 lane in each direction? What is most dangerous: crossroad with no traffic lights? Turns? City? Highways or smaller roads?

I was not able to find any meaningful research. There are ratings by different government agencies (US, UK, Australia, etc.), but basically all cars have the highest rating, which I find hard to believe.


You're right that there aren't meaningfully significant differences between car brands. Driving itself is just very dangerous. Other forms of transportation like buses, trains, and airplanes are orders of magnitude safer.[0] If you want to make yourself safer, avoid driving as much as possible.

[0] https://injuryfacts.nsc.org/home-and-community/safety-topics...


For almost everyone 18-40 year old, driving less is one of the healthiest decisions you can make.

A little nitpick: Motorcycling, cycling, and walking are all significantly worse than traveling by car.

If this interests you, it's possible to find a lot more interesting research by searching for "microlife" and "micromort".


Motorcycles are 4% less likely to be involved in an accident, however motorcyclists are 27 TIMES more likely to die if they are in an accident.

https://www.iii.org/fact-statistic/facts-statistics-motorcyc...


   Motorcycling, cycling, and walking are all significantly worse than traveling by car. 
Is this because of the chance of getting hit by a car?


That and because they're more dangerous per kilometer. I can go 240km in two hours in a car, but only 16 on foot, and I'm going to be crossing many, many streets in that distance.


Yes, but morticians don't care whose fault it is.


    Here lies the body of William Jay,
    Who died maintaining his right of way,
    He was right, so right, as he sped along,
    But he's just as dead as if he were wrong.
Moral of the limerick? Optimize for your own safety, not mere adherence to the law, because lawsuits cannot bring you back from the dead. The other person being wrong for killing you won't make you any less dead. This is something I try to keep in mind whenever I'm driving, crossing a street, etc. The crosswalk may say I can walk, but for my own sake I better look both ways anyway.


Can you explain more about the walking part? Is it because pedestrians get hit by cars a lot? Otherwise it seems walking should be incredibly safe.


Transportation safety is generally measured on a distance basis. If you compare one hour of driving vs one hour of walking, then walking is safer, but if you compare the traffic risks of driving a hundred miles and walking a hundred miles, then that's a different story.


This is also kind of silly; the average person doesn't walk 100 miles (say, a week). Extrapolating relative walking safety from 100ft of darting from your office building, across the street, to your lunch spot, out to 100 miles is not actually the same thing as (say) commuting by foot 5 miles each way for 10 days (where the per-step risk profile is probably a lot lower than the numbers we're extrapolating from).


Yeah, agreed. I think you just need to be making apples-to-apples comparisons based on the type of trip.

If you're talking about trips of no more than a few miles (that don't include hauling too much cargo to carry), you can compare driving with walking. If you're talking about trips of tens or hundreds or thousands of miles, you have to compare driving with flying or taking a train.

I would be interested to see crash/injury/fatality stats for car trips under certain distance thresholds. I wonder if those are tracked anywhere.


This is also why the safety of commercial air travel is (sometimes) overstated. Since most crashes happen on takeoff or landing, the risk profile of a 100-mile flight is about the same as for a 1000-mile flight, but on a per-mile basis the latter flight appears safer.


Is this still true for commercial flights? The most prominent airliner losses from the last 20 years were not close to takeoff nor to (scheduled) landing. I'm thinking of Air France off Brazil, the two Malaysian planes, all four 9/11 attacks, and the Germanwings crash. The exception is the Concorde crash on takeoff which just sneaks into the last 20 years (July 2000).

Of course, part of the reason these crashes were newsworthy is that they were unusual. Are there enough takeoff- and landing- related losses to make these statistically irrelevant?


My information may very well be out of date! This is one of those lines I've been repeating for 20 years and it might be time to update it.


Looking at Wikipedia [0], there are a lot of incidents I never heard about, and a disproportionate number are "shortly after takeoff".

I didn't count them all, but from eyeballing it a 1000-mile journey is 3x as likely to kill you as a 100-mile one, not 1x or 10x.

[0] https://en.wikipedia.org/wiki/List_of_accidents_and_incident...


On a per journey basis, in Europe/US/Canada, considering _only_ commercial aviation (i.e., excluding general aviation), the fatality/journey rate is comparable to both cars and walking.


Also, when people are contemplating whether to fly, the alternative often isn't another form of transportation but not travelling at all (e.g. video conferencing), or travelling a shorter distance (e.g. going on holiday more locally).


Like some of the other comments already mentioned, it's indeed on a per-distance basis. Also, traffic accidents are far more deadly if you're walking, since you're basically naked from a safety perspective versus being protected by a well-designed mass of metal.

When a car and a human collide, it's not the driver that's likely to die.

I got my stats from https://en.wikipedia.org/wiki/Micromort which shows walking approx 12 times as dangerous per mile.

On a time basis walking would be less dangerous than driving I reckon. Walking also has other health benefits.

I mentioned it specifically, because walking to the store instead of driving for safety reasons would not be a smart choice. Doing some daily walking in a park likely would be a smart choice though.


That depends on many factors. Living in a crowded European city I have plenty of shops in walking distance, protected sidewalks and a traffic nightmare. It would take me longer and I'd be much more likely to get into an accident when driving to any of the shops nearby; walking is easier and safer. If you have a city designed for cars like most US cities seem to be the story is very different.

Per se, walking is definitely safer, given decent walking conditions. If all you've got are sidewalk-less streets and only big shops a decent distance away, that's not the fault of walking but of bad city design. Similarly in winter: if there's snow and your city clears the street but not the sidewalk, then of course the sidewalk is less safe.


FYI, here's where they get the 12x more dangerous data from:

https://web.archive.org/web/20091122180337/http://www.statis...


Might have something to do with the size of your signature (I think that's what they call it in military terms).

Consider a spherical traveler following a certain path in a field of randomly moving projectiles. The chances of being hit by one depend on path length, your own size, and how long you stay on the field.

Seriousness of the hit depends on your durability and speeds involved.

Maybe pedestrians just spend a lot of time exposed (assuming they have to cover similar distance) and bikers are extra fragile.


My guess is that on a per distance basis, walking is more dangerous because of pedestrian-car collisions (aka getting run over).

I think part of the dissonance is that intuitively I tend to compare on a time basis. I guess it depends on whether one is traveling to a destination (constant distance) or traveling for recreation (constant time).


Is that a typo, or do you really mean cycling and walking are significantly worse than traveling by car? Are both of those activities actually more dangerous? I would have thought cycling and walking - even in urban settings! - are safer activities than driving, statistically.


I doubt it was a typo. As a ballpark, cycling has 10x the deaths per mile as driving.

Here in the Bay nobody stops at stop signs (3-10mph is typical), and they don't even really slow down to look both ways until after the white line and after the sign -- plowing straight into sidewalks and bike lanes. I'm honestly shocked it doesn't cause more deaths.

Cyclists themselves usually aren't much better -- not even bothering to slow down for a 4-way stop, failing to signal before turning, etc....

As a personal anecdote, I was biking home down a pretty steep hill (I had lights, a helmet, etc) last fall on a road without any shoulder or bike lanes whatsoever. It was a short stretch, there were plenty of signs warning cars of cyclists, and I thought it would be fine, but shortly after I started the descent I had an SUV tailgate me the entire way down the mountain (again, nowhere for me to pull over, no way to get out of the way). If I'd hit a rock or anything and lost control I would've been toast.

When I checked my GPS after the fact I had been going a minimum of 10 over the limit, but that wasn't enough for somebody in a hurry and apparently not aware they were risking my life.


Anything involving cars is probably the biggest ongoing nightmare in our society. People have no appreciation for the danger they're constantly putting themselves and everyone around them in, just to save a few minutes a month in commute time.

The news, social media, etc should be plastered with the 100+ people who needlessly die every day in car accidents. ALERT: 10 YEAR OLD CHILD SLAIN SO DRIVER COULD GET TO WORK 30 SECONDS EARLIER

(Side note: This is why I'm going long on the stock market again.. it looks like the public and the media have gotten over being scared of the virus. Even if we have another 100-200k deaths in the USA this year, my guess is it'll get swept under the rug just like 50k car deaths, regular flu deaths, heart attacks, etc)


There was a study on UK cycle commute accidents recently.

Over several years, commuters who cycled were 44% more likely to be hospitalised than other commuters.

However...! Cycling was associated with some quite dramatic health benefits. If 1000 people switched to cycling, the benefit would be 15 fewer cancers, four fewer heart attacks or strokes, and three fewer deaths.

https://www.bmj.com/content/368/bmj.m336


I would imagine that it’s not a typo. In a car you have a big shell around you to protect against the environment. While walking or cycling you’re far more prone to be injured by the world - a branch could fall on you, a car could hit you, another person could hit you, you could trip, slip or lose balance, etc.

Being inside a big metal shell that can’t fall over and with active safety measures is a great place to be, risk-wise.


Exactly. Just like air conditioners cool a container while increasing the overall heat of the system, cars make the car occupants safer while increasing the overall danger of the system.


On the other hand, when I'm cycling, I'm typically not moving at >15 mph or so. When I'm walking, reduce that to 3mph. That seems like it would do a lot to counteract the car's crumple zones and airbags in terms of severity of injury.

In fact, once you take into account severity, the GP's claim seems extremely suspect: maybe the chance of any injury on a bike or by foot is higher, but I'd be skeptical of claims that the same applies to death, or even chance of serious injury (since that's what I, personally, care about), without serious backing evidence.


I've known a lot more people who died cycling than people who died in car accidents. And all of the cyclists were hit by vehicles.


I'm in the opposite situation, personally. Regardless, the point I meant to convey is that I'm able to talk myself into both the original claim and its converse, so it's probably better to actually point to a statistical study than to offer a causal explanation, no matter how compelling.


I think it very much depends on where you live. Here cycling is super popular, so you can expect more deaths in absolute terms even though the deaths-per-distance number is likely quite low, even compared to places where cycling is less popular, due to infrastructure differences.


You can cycle at 15mph and have a car crash into you at a much higher speed


Yes, exactly: there would seem to be factors that both increase and decrease the risk of both activities, so the relative danger of each one is an empirical question, not something which can be deduced from first principles.


I would be curious to know what the numbers look like if you broke the driving up by city/highway or something like that. I'm guessing the deaths per passenger-mile are very different between the two.

They're also somewhat difficult to compare at a nationwide level because driving in the suburbs is not really comparable to biking in the city, in part because the miles per trip in suburban and rural areas are so great that biking and walking aren't really feasible, anyway. This poses a big selection bias problem: What if the real underlying effect is that suburban transportation is generally safer per mile, perhaps even independently (as much as they can be separated) of transportation mode? For that matter, are we even measuring the right thing? What if the metric that really matters is deaths per passenger-hour, or deaths per passenger-thing-you-need-to-get-done?

City driving and city biking or city walking, on the other hand, can be much more comparable. For example, I own a car, but typically use a bike to buy groceries. In part because, where I live, it's the quicker and easier option. Similar story for driving vs. walking to go to my favorite restaurant, or to go to the hardware store.

Finally, I think we'd be remiss not to think about how it's being framed: The original data focus on likelihood of being the victim of an accident. It's maybe more properly called "deaths experienced per passenger mile." The numbers would look entirely different if they were framed as "deaths caused per passenger-mile." That's kind of a big deal, because one framing suggests, on an instinctual level, that cars generally increase public safety, while the other, I'm guessing, would suggest that cars generally reduce public safety. It's the difference between saying that Elaine Herzberg died because she was walking across the street, and saying that she died because an Uber self-driving car hit her.


This is probably an area where I wouldn't just trust statistics. During the pandemic I've tried out many new running routes, and on one particular stretch of road I've had 4 incidents of having to move to avoid being hit by a car that didn't see me, whereas on other routes it's happened 5 times in 6 years. (The cause is people coming out of a neighborhood with lots of stop signs; when they get to the busy main road, they're impatient and looking for a hole to slip into.) It really makes each person's situation different, as traffic patterns also change throughout the day.


It's a skew of metrics. Cycling has more fatalities per mile than driving. A higher percentage of cyclists die per year than motorists. In an accident with a car, a cyclist has a much, much higher probability of death than the motorist. But are those really fair comparisons?


Distance isn’t the best measure. It should be time of travel or trips. If we had a teleportation device that could teleport us to another solar system but had a 50% chance of killing the person, by using distance it would be the safest method of travel.


Why are buses safer? They drive on the same roads as cars.


>Why are buses safer?

When was the last time you saw a bus get north of 60mph? Exactly.

And for crashes with other vehicles, their mass makes them pretty safe for occupants (in the places buses tend to roam, it's hard to find enough space to get a small car up to the speeds necessary to hurt the bus's occupants).


They don't stop so quickly in a crash, and it's how fast you stop that makes a crash dangerous.


This reminds me of the fact that my grandmother who recently passed away at 96 refused to make left turns. She stopped driving years ago, but before she did, my father would always help her find a way to get to the pharmacy, the grocery store, church, and family and friends houses making only right turns. Sometimes she went miles out of her way.


At what point does the risk of the extra miles on the road outweigh the risk mitigated by avoiding left turns?


Left turns are super dangerous. I would imagine that it would be difficult to find a realistic distance tradeoff that would be more dangerous (assuming it doesn't mean something like "spending a lot more time on the highway")


Your grandmother was onto something. UPS does the same thing:

https://www.businessinsider.com/ups-efficiency-secret-our-tr...


Not at all. UPS does it to become more efficient, and that works because UPS has a lot of stops to make. But GP's grandmother did it to be safer even though it was usually less efficient.


They make lefts, but they do routing in advance and heavily prioritize routing using right turns. There is no hard and fast "zero left turns" routing policy or driver policy.


Not really the same thing. UPS is optimizing the efficiency of a multi-stop route given that making a lot of unprotected left hand turns (especially with a truck) can add a fair bit of time to a route.


Transgender icon J Edgar Hoover did the same.

Whether it was for safety or because he hated lefties is not clear.

https://www.nytimes.com/1975/05/27/archives/exagent-says-hoo...


>Transgender icon J Edgar Hoover

Citation please. This is the first time I've ever heard him called that. There was some speculation he was gay but nothing conclusive AFAIK. In any case he's one of the last people anyone should pick as a positive representation of their group.



The cross dressing claims are pervasive but not very substantiated. So we can all believe what we want.

I was just trying to be witty when telling a story. Not make any grand statement about anything.


Regardless, crossdressing != transgender.


Indeed. When I'm wearing heels and a floor length flower pattern frock I would say I'm just a bloke wearing a dress, but if you insist that's cross-dressing I shan't disagree.

However my gender remains resolutely unaffected and I have no time for people who expect otherwise. Judge a book by a cover you might, but slipping the dust jacket from Moby Dick onto a copy of The Princess Bride certainly won't make it into a story about whaling.

Also, this word always reminds me of Eddie Izzard: "They're not women's clothes, they're my clothes, I bought them".


Driving is already incredibly safe. Yes, it's the leading cause of death in those age categories, but there are 3-4 trillion miles driven each year in the USA. That comes out to 1 death every 100 million miles.
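As a sanity check, that figure is consistent with roughly 36,000 US traffic deaths per year (my assumption, close to recent annual totals) spread over those miles:

```python
# Back-of-the-envelope check of "1 death every 100 million miles".
deaths_per_year = 36_000        # assumed: approximate annual US traffic deaths
miles_per_year = 3.5e12         # midpoint of the 3-4 trillion miles cited above
miles_per_death = miles_per_year / deaths_per_year
print(round(miles_per_death / 1e6))  # millions of miles driven per death
```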


I'm not sure that this is a helpful way to look at things. Cars are much safer than they could be and the risk per mile is low, but driving is still one of our big sources of risk, precisely because we do it so much.

Similarly, I estimate ~2 deaths per million cigarettes smoked (~1000 cigarettes/year/person >= age 15, 480k deaths/year), and while driving a hundred miles in a day is unremarkable, smoking 100 cigarettes in a day is pretty astonishing, but I wouldn't say that cigarettes are not that risky. The total deaths per capita are high.

(To be clear: cigarettes are more dangerous than driving any way you measure it. I'm just showing that you get to weird places if you look at risk/unit, without looking at total volume).
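For the curious, the ~2 deaths per million cigarettes estimate reproduces like this (the 265 million figure for the US population aged 15+ is my assumption):

```python
# Rough reconstruction of the cigarette risk estimate above.
people_15_plus = 265e6          # assumed: approximate US population aged 15+
cigs_per_person_year = 1000     # from the parenthetical above
deaths_per_year = 480e3         # smoking-attributable deaths per year
total_cigs = people_15_plus * cigs_per_person_year
deaths_per_million_cigs = deaths_per_year / total_cigs * 1e6
print(round(deaths_per_million_cigs, 1))
```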


Driving is still the biggest source of risk because the risk from everything else is now basically zero.


This. If driving was the only cause of death, and we did nothing but drive from birth we would expect to live to ~250 years old on average.


That number makes driving feel less safe to me.


(Holding the risk per mile driven constant, from current driving patterns. Risk might increase with more drivers on the road, especially given they're all driving ~16 hour shifts.)
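The ~250-year figure falls out of 1 death per 100 million miles once you pick a speed; ~70 mph average is my assumption, not the parent's:

```python
# How long you'd last driving 16 hours a day, if crashes were the only risk.
miles_per_death = 100e6         # from the per-mile fatality rate above
hours_per_day = 16
avg_speed_mph = 70              # assumed average speed
miles_per_year = hours_per_day * avg_speed_mph * 365
years_to_expected_death = miles_per_death / miles_per_year
print(round(years_to_expected_death))  # roughly 250
```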


So if we find the cure for all diseases, we still need to figure out how not to drive to become immortals.


I enjoyed your message, and it reminded me of a completely unrelated fallacy.

If 1 woman can produce a baby in 9 months, then 9 women can produce a baby in 1 month.


I've always heard that one as "A Project Manager is someone who thinks 9 women working together can produce a baby in 1 month."


And 28% of those deaths were alcohol related.

And a further 16% of those deaths were related to the use of other drugs such as marijuana.

https://www.cdc.gov/motorvehiclesafety/impaired_driving/impa...

And those are just the drugs/alcohol related deaths related to driving.


I wonder what happens if you compare driving to other activities by scaling them up to match the 3-4 trillion miles, but in time rather than distance, and then compare death rates. E.g. swimming 2 hrs per day (maybe equivalent to the 2-3 hrs spent in cars daily). Higher or lower death rates?


This might have some useful info for you... https://www.cyclehelmets.org/1026.html

death rates per million hours of activity


Do you know what 'residential fire' means? I'm imagining a scenario where my residence is on fire, which would seem to be high-risk (but is somehow the lowest risk).


It's a time-based rate based on (presumably) lifetime hours. Obviously if a residence is on fire, you're at high risk. But the rate of that happening is quite low.


That would make sense, except the unit is 'Fatalities per million hours'. Maybe they mean "fatalities from residential fire per million hours of living in a residence"?


I assume it means "per million hours of living." (Though I guess it could be "per million hours of living in a residence")


The problem with the unqualified "per million hours of living" is that sky diving doesn't kill more people than passenger cars.

It must be "fatalities from X during Y", and that makes sense when X and Y are the same thing (e.g. sky diving), but it seems like X and Y diverge in some of these entries.


Though even skydiving has a very different time/risk profile than, say, mountain climbing (however you care to define it). If skydiving time is measured only from leaving the plane to hitting the ground, you're measuring a fatality rate against a pretty short interval of time. (Same with scuba diving.)

Whereas other activities that may have similar risk profiles for typical "active participants" may have much different rates relative to the time spent actively participating in the activity.

There's no single metric that really represents a correct view.


Thanks! That’s really a great source. I am so glad I asked. Highlights:

- Fishing seems to be quite dangerous!!??

- Swimming is less dangerous than living (all other causes)!


"Fishing" might include "working on an industrial-scale fishing boat", which is a notoriously dangerous line of work.


For some, "fishing" means "getting drunk on a boat, with no real intent of seeing a fish." Alcohol is a very common factor in boating accidents, somewhere between 15-50% depending on which source you look at. (I think the higher end of that range is likely closest to the truth.)


This is true for commercial ships too. Drug use (both alcohol and various illegal drugs) is something humans turn to when they are bored, and life aboard a ship can be very boring. If it gets exciting again while you're impaired that is likely to go very badly wrong very quickly.

Airline pilots have sometimes been caught, most flights just aren't long enough for you to be sober at both ends yet drunk during the flight - but for the crew of a tramp freighter so long as they sleep it off before they come into port nobody will know - unless there's an accident and suddenly everything is happening with unseemly haste.


Swimming vs 'living' seems intuitively weird to me too. I guess people who are otherwise in poor health tend to avoid swimming?


I think it shows the limit of time-based normalization. E.g. hunting and snowmobiling come out very safe per hour, but a typical participant might spend many more hours in those activities.


That the rate (not time based) for fishing is so high does seem a bit odd. I wonder a bit what all is being lumped in there. I assume it's because it's an activity that can involve water, boats, and sometimes alcohol.


Even if you're on the shore it's easy to slip and hit your head on a rock. Also, many other sports that are listed self-select for athleticism or physical ability.


Any time you are operating in water, your mode of failure goes from falling to drowning.

Rocks are slippery.


Searching for the keyword "micromort" may also help. It's a unit of risk: a one-in-a-million chance of death from a given activity.


Interesting indeed! I’ve heard the same concept called “nano deaths”


My rule is that the phrase "X is the Nth leading cause of death among group Y" is only ever used to make X seem a lot worse than it is.

Note that this says nothing about how many actually die from X. And what is the Nth leading cause, depends entirely on how you classify causes.

The leading cause of death could be 80% of deaths or 4%. But most people will probably read the phrase as "more than half of deaths".


It might be safe personally, but not necessarily for others. I'm guessing the statistics don't count deaths caused by cars, such as running over pedestrians or cyclists, or deaths from air pollution, as driving deaths?


But cities designed for cars encourage people to go farther. The distances that people go are not a fixed quantity.


Riding motorcycles means I'm often playing the percentages game. When we turn we're sacrificing our finite amount of traction for either more speed or a more aggressive turn. If I lean off the bike, I get some traction back. So how much am I willing to sacrifice in the name of fun, while still leaving wiggle room for suddenly encountering a tar snake or something in the middle of my turn?

Little things. Motorcyclists in white helmets get hit slightly less, so I wear a white helmet. 50% of motorcycle fatalities involve DWI-level BACs, so I obviously don't drink and ride; but then another ~40% of the remainder involve any amount of alcohol at all, so I simply don't get on the bike if I've had even one drink in the past several hours.

Rain adds risk, night adds risk. Choose one and I'll ride. Both, no.

The majority of accidents happen at intersections, so every single green light has me slowing to look both ways. I watch behind me at red lights. I don't pull into the middle of the intersection to make a left turn; I wait in the turning lane.

The first person I bought a bike from gave me fantastic advice: if you have to think about it for a second, just don't do it. Thinking about that narrow pass on the freeway? Let it go, you can pass in a couple seconds after the gap widens. Thinking about gunning through the yellow? Relax. Just stop. Etc.

I can't speak for cars but so many things are measured for motorcycles. Where our heads hit the ground, what our gear does for us, where we crash. There's also tons of videos from GoPro mounts of many kinds of crashes we get into. Over time you can build a sort of risk management software inside your brain I think.


"if you have to think about it for a second, just don't do it. Thinking about that narrow pass on the freeway? Let it go, you can pass in a couple seconds after the gap widens. Thinking about gunning through the yellow? Relax. Just stop. Etc."

That totally applies to driving a car as well!


It's not thinking that gets me into accidents - it's not thinking that gets me into accidents.

That also applies to driving.

edit: Unintentional. I meant, it's not those situations where I think for a second and make a bad call - it's those situations where I act without thinking at all.


I don't ride, but I've always heard to also ride as if every car driver was out to kill you.


Even if unintentionally, yes. Because they can. When exiting a freeway, never let there be a car on your left - cars LOVE to cut across a lane and shoulder, barely missing the barrels, to save having to waste 5 or 10 minutes taking the wrong exit.

What you've said basically is to convince motorcyclists that they're 200% responsible for their own safety. Unlike cars, you just can't trust in the horde.

Which makes riding in places like Vietnam all the more interesting - the horde of 2 wheeled vehicles flows like water. The more you think about it, the more you fail to mesh with it. Better to flow with the rest of them. Very interesting place.


The way to make yourself safer is to move to one of the few American cities and towns with walking and transit-oriented development.

Seriously!

You'll spend fewer hours in the car risking death in a car accident. A family can easily downsize to one car or less.

You'll also spend more time getting light exercise by walking to the bus/train, walking to the store, walking for enjoyment.

You'll also save from no longer buying an expensive, depreciating asset. Sure, maybe your housing cost is higher in a city like this - but your house doesn't lose value like a car.

It's also more pleasant to live in. I love the rural areas, and I love the cities, but the suburbs are just the worst of both worlds.


I think it's hard to generalize that living in Manhattan without a car is safer and healthier than living in, say, Vermont with one--especially if you rent cars for weekend and other vacations. Depends of course on location, lifestyle, commute, etc.


Yes you’re right, and a big factor I didn’t think of is pollution.

The car accident death rate may be low enough that it’s really not worth mitigating against.

But also, I don’t mean Manhattan specifically. When I say “city” maybe you are thinking high rises. A neighborhood with predominately 2-4 story buildings can be incredibly walkable. You just have to stop designing everything to accommodate cars ahead of all else.


Just have to.

So long as that is the requirement, it won't be done; nobody will do it. You need to find a different starting place.

Start by not requiring buildings to be far apart and to have parking. Right now, the neighborhood you envision isn't legal to build even if you want to. Make it legal.

Make tax codes encourage building high. Charge higher taxes for the ground floor.

Allow everyone to build by right (meaning no public hearings; permits must be approved by bureaucrats) up to the road/sidewalk or property line.

Even if the building is set back from the property line, require the outside walls to be built to lot-line standards on all sides. That means fireproof if the building next door burns. That means the building can be demolished without anything falling to the outside, or even a need to do anything outside to demolish it.

For bonus points, define places where a door to the next building must be located. (You might never visit your neighbors that way, but the door still exists and can be repurposed into a public walkway when you leave and the new owner has other ideas.)


Manhattan is certainly an extreme case. Though I'd argue that, in the US, there are a small number of cities where you can live carless without any real compromises. Of course, my definition of compromise may be different from yours; if I lived in the Bay Area I'd be regularly going to mountains/ocean/etc. on weekends. Even a couple I know who live in SF without a car are regularly doing short-term and long-term car rentals in addition to walking, Muni, and Uber. I'm not sure they really are in a car less than they would be if they owned one.


Especially with pollution. However, there are also social implications that can change your mortality rates. For example, a low-stress job in NYC could have great social payoff, as you are surrounded by a world with anything and everything. Want to play ping pong with random people? Do it! Vermont is extremely isolating unless you are in Burlington or somewhere similar.


Unfortunately, we will also have to start taking into account pathogen transmission risk on public transport.

If you have a larger family and a limited budget, suburbs are the best of both worlds.

Also cars are quite reasonable unless you want new ones before their useful life is exhausted. A modern car will usually last 25+ years, which is more than many homes before they need major renovation (new roof, windows, etc. in the US).


Although it's too early to see how everything plays out, there's a fair bit of anecdotal data suggesting that people who can, especially renters, are leaving urban areas for more suburban or rural locations. I know a number personally and there have been various stories about many broken leases in places like the Bay Area.


Easy to say. I live a long walk from work (normally I ride the bus), but now they are talking about moving to a different building that isn't served by transit (a 20-minute walk to the nearest bus stop). Since it isn't that far by car, they aren't even thinking about offering relocation expenses, so it would cost me more than a new car to move (after realtor fees and other moving expenses). And there is no reason to believe this is the last time they'll move us.


The most relevant research on car brands and models is probably the IIHS driver death rate data. It's not perfect since the populations are self-selecting (cars aren't assigned to drivers at random), but within a particular vehicle class, if you see a huge difference in death rates then that's probably meaningful.

https://www.iihs.org/ratings/driver-death-rates-by-make-and-...


An interesting instance of the self-selection problem seems to show up if you select small 4-door 2017 cars. The Nissan Juke 2WD has a death rate of 12; the 4WD version of the same car has a death rate of 65. I have trouble believing that there's a significant difference in the cars themselves, so it seems likely that people who would buy a 2WD small SUV are a significantly different demographic than those who would buy a 4WD small SUV.


>so it seems likely that people who would buy a 2WD small SUV are a significantly different demographic than those who would buy a 4WD small SUV.

Exactly. Nissan and FCA will finance anyone with a pulse, and they make cars at low price points. The Juke is just about the cheapest AWD crossover, so you get a statistically unflattering clientele driving it. You can see the reverse situation with the Dodge Journey. It is known for being the cheapest people hauler on a per-seat basis and also has a very unflattering set of customers. The 2wd is statistically more dangerous than the 4wd, presumably because the riskiest buyers don't care how many driven wheels it has; they just need something cheap with a lot of seats to haul around their massive family.


I think it's more relevant how much each group drives. Everyone I know who drives a lot espouses how important 4WD/AWD is, whereas the people who barely drive at all aren't willing to pay so much more for it.


I've always found it a bit amusing how common this belief is, especially in snowier areas. Most people would be safer without 4WD/AWD -- less likely to oversteer around a curve, and less likely to drive aggressively because of a false belief that AWD will help you stop faster or have better control of your vehicle in low-traction conditions.

It's still a really nice feature that has its uses, but most people I talk to about it (admittedly a small subset of people who use AWD) greatly overestimate its safety benefits.


Comments like this remind me of the Principal Skinner quote "surely I am right and it's everyone else who's wrong" or something like that.

Everyone(TM) prefers AWD for its superior go-ing abilities even if the turning and stopping abilities are unchanged. An obnoxious vocal minority tells them they're wrong. Since it's a matter of preference and consensus, the overwhelming majority can't be wrong though.

Anecdote time:

I have a fleet of several identical station wagons. There is one of them that's FWD. They all use the same brand of tires. It's as close to a scientific experiment as you can get. The FWD one sucks in the wet/snow/anything but dry. Forcing all the horsepower through just two tires that also have to do double duty and steer the vehicle makes it peel-out city. The AWD ones are much more tame. But the FWD one gets a couple more mpg so we keep it around. Given the choice between snow tires on the FWD wagon and driving one of the AWD wagons on all-season tires, I will pick the AWD one every time. Having a marginally shorter minimum stopping distance and being able to corner better is less of an improvement than having a substantially reduced likelihood of peeling out on an uphill.

The above opinion is shared by the other four people who have to drive the vehicles in question which is why I'm the one who gets stuck driving the FWD one when it snows.


The comment was actually making a point more insulting than you assumed: they're saying that AWD may or may not make cars safer, but any effect it has is dwarfed by making the owners of those cars into overconfident morons. It's a statement about drivers, not about cars.

Note that your statement could be taken as either confirmation or refutation of the OP's point, entirely depending upon whether the reader believes you're an unreliable narrator. Not knowing you nor having driven an AWD car, I'm not qualified to guess which is true. But as someone who's super interested in the distinction between reality and perceptions of reality, I thought the juxtaposition was pretty intriguing.


edit: "a matter of preference and consensus"


The likelihood to oversteer depends on whether the 2WD alternative is front or rear wheel drive. RWD is more likely to oversteer on acceleration in slippery, turning conditions.


No argument here. That was my regional bias slipping in -- RWD is incredibly uncommon in the cold places I've lived except for pickups, and most people are aware of the potential dangers of RWD.


I daily-drove a rear-drive Alfa Romeo Spider for 5 Boston winters. Then drove a rear-drive Mercedes E300TD for another 4 Boston winters (though that one I at least put snow tires on). You learn pretty quickly what the car can and can't do; even on summer tires (and a limited-slip differential), the Alfa would go until it got high-centered on the snowpack. Either would handily out-perform my current Nissan LEAF (FWD) in the snow. LEAF factory tires are designed for economy, not winter traction.


The same discrepancy appears in the 2014 data as well so it's probably not a statistical fluke. Maybe there's a real design flaw in the Juke 4WD model?


The person is not suggesting it's a statistical fluke -- but that it's a self-selection bias.

Hypothesis: The people who opt to spend more for a 4wd version are people who know they will use it because they operate the vehicle in inclement conditions, and/or they are uncomfortable with their ability to operate a 2wd vehicle in those conditions and/or they do not have the option to stay home when the weather is bad.


Living in Vermont for five years taught me that 4wd is a better choice for bad conditions because it's less likely to get stuck with spinning wheels, but it doesn't stop any faster than a 2wd.

I would see way more SUVs in ditches than sedans. People tend to overestimate what their 4wd is going to do for them and aren't sufficiently cautious. The first snow/ice of the year was particularly bad.


> People tend to overestimate what their 4wd is going to do for them and aren't sufficiently cautious.

Or because when it snows the ones who can afford to do so take the "winter beater" 2002 Trailblazer instead of the car they usually drive.


Good point, can't crash a car if you can't get it to move.


It can go the other way too, they don't buy it to drive in inclement conditions, but once they have it they increase their driving in inclement conditions because "4x4s can go in any conditions".

4x4 drivers might do offroading: so the figures could be much lower than if those people had 4x2 (2wd) vehicles.


I think recreational 4x4 is niche enough that it is probably not significantly represented in the data (and those accidents are mostly single-vehicle and probably less likely to be reported)


I have been watching this chart for as long as it has been on the internet -- it's great for finding vehicles which are cheap on insurance.

That being said, I see obvious self-selection issues show up in that chart all the time. Looking at the extremes usually points it out -- 'old man' sports cars usually have very low accident rates. Vehicles sold with deep discounts or aggressive financing deals usually have high accident rates. These are likely bought by very different buyers with very different life circumstances who use their cars very differently.


Speaking purely from personal experience.

Driving while impaired in some way certainly is more dangerous. (Drugs and alcohol but also just fatigue.) Driving too fast for conditions. Bad weather, especially snow. I'd say in general that highway driving was probably safer than secondary roads--again assuming not excess speed.

One of the things I like about greater work flexibility these days is that I rarely have to drive in snow. I just stay home.


NHTSA has a fairly good repository of research, although it doesn't cover everything you're looking for [1]. It's been a while since I reviewed the statistics, but I recall that highways are significantly safer than high-speed undivided "surface streets". Intersections, and specifically left turns at them are also more dangerous than other road areas. There is also some scattered research like [2] on related topics, for example showing that overtaking accounts for relatively few total accidents, but has an outsized percentage of fatalities (as you would expect due to the collision dynamics).

[1] https://www.nhtsa.gov/research

[2] https://www.sciencedirect.com/science/article/pii/S235214651...


> Car accidents is top1-top2 death reason in US for people in 20-40 age group [1]

Car crashes were the top 1-2. I am willing to bet most of those involved DUI, and therefore negligence, and shouldn't be classed as "accidents".

#CrashNotAccident.


I don't have statistics to back it up, but I would be willing to bet money that at least 95% of crashes are from negligence, with most of the remainder being from sudden unexpected mechanical failure (such as a tire blowing out), and less than 1% being something you could truly call an "accident".

The usual counter-argument is that the use of the word "accident" is a statement of intent, and outside of extreme road rage scenarios, nobody intends to cause a crash. However, the creation of an unsafe situation, such as texting while driving, is often absolutely deliberate.

Didn't check your blind spot before making a lane change? That's not an accident, that's negligence.

TBH, I'm not even sure if any crashes could really qualify as an "accident".


People don't have perfect reflexes, physical coordination, and situational awareness. Someone could be checking their blind spot at the exact moment the person in front of them slams on their brakes. Someone could move their foot over from the gas and for no particular reason completely miss the brake pedal, or sneeze at the exact wrong time, or be stung by a hornet they didn't know was in the car, or simply not react fast enough.

Humans are fallible and make unintentional mistakes. It's surprising that humans are as good at driving as we are and that most of us aren't making catastrophic mistakes on a regular basis.


>less than 5% being from a sudden unexpected mechanical failure

Less than that. There have been many studies done about the efficacy of vehicle "safety" inspections on making roads safer and they range from "no effect" to "positive effect about as big as the noise threshold". There was some Danish meta-study I thought I had bookmarked but can't seem to find that spells it all out.

That said, I 200% disagree with the overall opinion of your comment. Typical human behavior is not negligence. Systems should take into account normal, error-making humans, and not expect perfect "behave like a computer" humans. Behaving normally, even if that means not checking your blind spot as well as you should 1 time in 100, is not negligence. Furthermore, there's only so much situational awareness a person with two eyes facing the same way can have. As another commenter points out, you're gonna be in for a near miss at best if the time you're checking your blind spot is the same time traffic in front of you starts coming to an emergency stop.


> TBH, I'm not even sure if any crashes could really qualify as an "accident".

There are some. One that comes to mind is a freak rock falling off a truck in the opposing lane coming through your windshield. Possible negligence on the part of whoever loaded the truck, but for the victim it may as well be a lightning strike on a clear sunny day. Falling trees or things like that could do it too.

However, I think "car crashes aren't accidents" is probably close enough to the truth that it makes sense to use as a mantra in drivers ed (as my drivers ed teacher did.)


Regardless of whether you call them crashes or accidents, even if you don't do anything stupid you can still be killed by someone else's poor driving or driving under the influence. Also, no one starts out as a perfect driver.


Also a lot of suicides there.


You are spot on. What's really interesting is when companies publish death rate numbers. In everything I've seen, most low-cost car models suffer on safety, generally due to lower-cost construction, materials, or cheaper safety/avoidance systems. Volvo has historically been the best at reducing mortality; Tesla seems to be beating them, but I don't know how many of those numbers actually match reality, since that's not clearly published. To better test this, it's about time we added a sixth star. Left turns across traffic are generally the highest danger factor, notably for motorcyclists. Many states in the US are replacing traffic lights with roundabouts and removing two-lane passing zones.

Oh and SUVs are markedly LESS safe. They are more prone to rollover, less likely to avoid an accident, stop slower, and generally have less safe body structure.


> Oh and SUVs are markedly LESS safe. They are more prone to rollover, less likely to avoid an accident, stop slower, and generally have less safe body structure.

But they're also heavier, and there exist cheap, heavy SUVs, while cheap cars are generally lighter.


Weight increases mortality in all accidents, either by increasing the likelihood of the other driver being injured or by increasing the amount of force imparted on the crumple zones in single-car accidents. All modern safety studies have shown this, and only recently has any SUV gotten a pure 5-star award. SUVs are less safe than equally priced cars. Also, most SUVs cost much more than cars with similar wheelbases.


Yes, but empirically, lots of people don't care about increasing the likelihood of the other driver being injured.



You are so right. Too much confusing and scattered information in the US. Do some reading on UPS's training for its drivers: no left turns, route optimization, ...


Also, when people think about the risk of driving they forget the additional risk to their health from in-car air pollution


Some tips:

Dress like a motor-cyclist. Wear a helmet and airbag suit.

This is probably also useful when traveling by airplane.

I have no data on the effectiveness, however.


People who feel safer can take greater risks, thus safety equipment can create negative returns.


This theory (known in literature as ‘risk compensation theory’) has been debunked. See some references here: https://twitter.com/whitequark/status/1264486505168482304


tl;dr I would not use that link as evidence to say risk compensation is debunked.

Interesting, I'd certainly say it applies to me - I dive more in volleyball if I have my kneepads; I tackle harder in football when wearing shin guards, the more so with ankle protection; drive far more cautiously on a motorbike than in the metal, crumple-zoned protection of a car; etc. ...

That tweet leads to https://www.sciencedirect.com/science/article/pii/S136984781... "Bicycle helmets and risky behaviour: A systematic review" which concludes:

>"Although two out of the 23 studies were supportive of risk compensation, ten other studies found helmet wearing was associated with safer cycling behaviour." //

So, risk averse people wear helmets.

Some of the support in the review comes from results like helmet wearers obeying traffic laws: the issue seems much more nuanced and complicated than that.

FWIW, risk compensation doesn't require that the entire risk is negated (or even over-compensated), so statistics on accident occurrence and severity alone - for example - would not be sufficient to prove whether the effect is real. Take my car driving: no matter how much riskier I drive in a car - short of being a lunatic - it's not going to become more dangerous than motorbike riding, because of all the other people on the road.

[This post relies on part on past activities and experiences!]


To be fair, it seems the debunking applied to people who were obliged to wear safety gear, not to people who were more cautious than average people.


If you are more cautious to begin with, you do not need to compensate for anything, so the theory does not apply.


But what if you are overly cautious and this bothers you and wearing safety gear makes you feel more normal?


Drive less.


Drive the largest vehicle you can. In a collision, the kinetic energy goes into crushing the vehicles; you want to be in the less-crushable one.

Hey this may be an unpopular choice, but it is the one with the biggest safety impact! And that was the question.


Rather immoral. Not just offloading the burden onto others, but exacerbating it.

Vehicles and Crashes: Why is this Moral Issue Overlooked? by Douglas Husak

https://www.jstor.org/stable/23562447?seq=1


It's the arms race itself (and the fact that we allow it to continue) that's immoral. Acting rationally within the arms race is not immoral. Our laws should address the core causes of this arms race.

People are starting to wake up to the notion of "systemic issues," but still too slow to apply what they've learned about that topic to other matters. It's a mistake to focus on the behavior of individuals in a system that's sanctioning and encouraging bad behavior.


The free preview of your citation seems to propose legal liability for drivers of vehicles that cause outsize risk to others.

Safety equipment inside cars does the same -- people who feel more safe in their car drive more dangerously and take more risks. Maybe that's addressed in the full article.


My buddies and I all subscribed to this theory as young kids in Texas. Then one time some of us did a trip to the junkyard to get some parts, and it was a graveyard of absolutely demolished pickup trucks. Shook us pretty good. The junkyard guy knew what happened to some of them, too, like one where the passenger compartment was just a block of near-solid steel; apparently it got in a crash with a Camry.

So I'm not sure "big car safe" is necessarily true.


Correction: you want to be in the vehicle with the most crushable distance of material. That way the force is spread out more over time and there's less going through your body at any moment.
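
A rough back-of-the-envelope sketch of that point, assuming a constant-deceleration model with made-up speeds and crush distances (illustrative only, not real crash data):

```python
# Average deceleration coming to rest from speed v over crush distance d,
# from v^2 = 2*a*d  =>  a = v^2 / (2*d). Constant-deceleration idealization.

G = 9.81  # standard gravity, m/s^2

def avg_decel_g(speed_kmh: float, crush_m: float) -> float:
    """Average deceleration (in g) when stopping from speed_kmh over crush_m metres."""
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * crush_m) / G

# Hypothetical: a 50 km/h stop over 0.5 m of crumple zone vs 0.1 m of rigid structure.
print(avg_decel_g(50, 0.5))  # ~20 g
print(avg_decel_g(50, 0.1))  # ~98 g
```

Five times the crush distance means one fifth the average force on your body, which is the whole point of spreading the stop out over more material.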


That’s not the entire story. If you value your health over that of all others, you want the other car to crush over its full length before the parts of your own car where you are sitting start to crush.

That’s where having the sturdier car can help.

You also will want to be in the car that decelerates less briskly. That’s where, in collisions with movable objects, having the heavier car helps.

A heavy car without crumple zones from the 1950s that could kill you when hitting a wall at 20 km/h might still be the safer car in a head-on collision with a much smaller car with a few meters of crumple zone (cars from the 1950s were incredibly unsafe, so I think this would require extraordinary circumstances).


I'm trying to be careful how I type this, but the point of crumple zones isn't just to make the car stop slower, but to keep parts of the car you really don't want to crush from crushing.

If you crash into a wall at 20km/hr, it's a lot better if the engine bay gets smashed up, than if the compartment you're in collapses around you.


Given enough mass, you won't experience any noticeable deceleration during a collision! https://www.youtube.com/watch?v=hWToJxKYRSc


My Kenworth will crush your F350, X5, Tahoe, or whatever other big ass truck/SUV you can get.

So if you really want to reign supreme on the road you ought to get a semi truck; with a sleeper cab you can convert it to seating for the kids too.

Is this the end game of suburban families race to owning the biggest hunks of steel on the road?


Reminds me of this:

https://www.autoexpress.co.uk/volvo/xc90/103233/no-fatalitie... (2018)

"No fatalities ever recorded in a Volvo XC90 in the UK - Analysis of official figures finds no driver or passenger fatalities have been recorded in Volvo's XC90 since records began in 2004"

Volvo XC90 is a relatively popular SUV there, and for the UK, quite large.

"Around 70,000 XC90s have been sold in the UK since the original model launched in 2002"

I would guess the magic here is:

a) the car is designed with safety as the number 1 design criterion (it's a Volvo)

b) it's quite large (lots of crumple zone mass)

c) it's probably not driven at insanely high speeds by young male drivers with poor impulse control very often.


I propose an alternative.

Start including the effect on other cars in crash tests.

In other words, yeah, if I drove a frickin' monster truck, yes, it's going to be extremely safe for me, but it's absolutely deadly to anybody in a sedan. So that monster truck should get low crash test scores.

Your answer merely creates an arms race of larger and heavier cars.


Only if everyone can afford them, which isn't true. It's more like it creates a class system where the rich are safer. Which is already the case, to be honest.


A sad truth.


There's little to no real evidence for this specifically regarding SUVs. Plus it increases the risk to others, which is something one should realize.

Pluuus, larger vehicles suffer from worse handling, worse braking distances, and higher accident rates. It's all just dumb.


if you defect you get a bigger payoff than if you cooperate!


It's definitely not an unpopular choice, since SUV sales are skyrocketing


Presumably downvoted because people don't like it. But it's the truth.


> A 2700lb subcompact vehicle that scores Good may fare worse than a 5000lb SUV that scores Acceptable. This is because the small overlap tests involve driving the vehicle into a fixed obstacle, as opposed to a reference vehicle or vehicle-like obstacle of a specific weight. This is, in some sense, equivalent to crashing the vehicle into a vehicle of the same weight, so it's as if the 2700lb subcompact was tested by running it into a 2700lb subcompact and the 5000lb SUV was tested by running it into another 5000 lb SUV.

It's the "truth" until you run into something that's bigger than you.


No.. that still shows it's the truth.


This isn't correcting for whether people can safely drive the biggest vehicles on the road, semi trucks.


It's the "let's defect in a prisoner's dilemma" strategy that works great until others use it too.


I've been riding out the quarantine in a less populated area, and I am surrounded by trucks so tall that the driver can't even see the roof of my car if I am stopped next to them at a light. It's already too late to avoid defection -- if I stay here for good, I will have to get a taller car just so other drivers can see me and so I can see through the windows of parked cars when turning.


I know; I've had the same experience. :(

It's a shame. The pedestrians will have to find a way to get taller too.


Pedestrians have no place on the road, just like cyclists. They aren't people, they are obstacles. It's going to take a huge reform to change that, especially in the US.


It is not the prisoner's dilemma at all. If everyone gets big cars then everyone is equally as safe. They don't all die.

For crying out loud, people, I know it's stupid, I know it's wrong, but it's the truth. It's an uncomfortable truth that your downvoting will not change.


It is a prisoner's dilemma.

When you all have big cars, you're back in an equilibrium where everyone seems as safe in head-on collisions as the equilibrium where everyone drove small cars. But each car now has way more kinetic energy and momentum, and needs bigger crumple zones to absorb it (oops! You won't find room to expand those in side collisions!), is more prone to rollover, has less visibility of pedestrians, bicycles, and children playing in your driveway, worse gas mileage, more pollution, less parking space, etc.

And then someone decides to get an edge over the crowd by driving a monster truck and you're back to scrambling into a new, even worse equilibrium.


Is that really true? If two big cars hit each other there’s a lot more energy in the collision than two small cars.


> It is not the prisoner's dilemma at all. If everyone gets big cars then everyone is equally as safe.

All the pedestrians and cyclists they mow down are less safe though.

And everything becomes more polluted, which is presumably also not equally safe.


I think you're on to something.

First: there's vehicle-to-barrier crashes (think trees, etc):

Having a vehicle with a large well-designed crumple zone seems obviously good from a survivability POV.

Second: there's vehicle-to-vehicle crashes:

Is it really a zero-sum game? If a small car crashes head on with a large car, will the people in the small car suffer worse than if they crashed into another small car? To what degree?


If a heavy car collides with a light car head on at, say, 60mph, the heavy car might slow to 20mph whereas the light car might be pushed backwards at 20mph. So one car experiences a 40mph velocity change in a short time interval, whereas the other experiences an 80mph velocity change.

This is hypothetical; I don't know what real numbers from real collisions are like, but light cars are definitely at a disadvantage in that they can get knocked around a lot more. There's also the possibility of a light car getting pinned between a heavy car and an immovable object or another heavy car.

I don't think this is zero-sum: ideally, we'd all be driving around in large bubbles that weigh 200 pounds or less and have huge crush zones filled with styrofoam or something.
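
For what it's worth, the hypothetical numbers above fall out of a perfectly inelastic (vehicles lock together) collision model with a 2:1 mass ratio - a simplification, since real crashes aren't perfectly inelastic, but it shows the momentum asymmetry:

```python
def head_on(m1: float, v1: float, m2: float, v2: float):
    """Perfectly inelastic head-on collision: both vehicles end at the common
    velocity given by conservation of momentum. Returns the velocity change
    (delta-v) experienced by each vehicle."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
    return abs(v_final - v1), abs(v_final - v2)

# Hypothetical: a 2-ton car at 60 mph meets a 1-ton car doing 60 mph the other way.
dv_heavy, dv_light = head_on(2000, 60, 1000, -60)
print(dv_heavy, dv_light)  # 40.0 80.0 -- the light car takes twice the delta-v
```

And since injury risk grows faster than linearly with delta-v, doubling the delta-v is a lot worse than twice as bad.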


> Is it really a zero-sum game? If a small car crashes head on with a large car, will the people in the small car suffer worse than if they crashed into another small car? To what degree?

Well, it's zero-sum in the linear sense, since the total relative velocity change is the same, but crash survivability isn't linear in velocity change. Kinetic energy is proportional to the square of velocity, so a head-on crash between a semi and a compact car subjects the occupants of the compact car to nearly 4x the crash energy they'd be subjected to if they crashed into another compact car.


It's hard to disagree with physics.

However, please read what I wrote here:

https://news.ycombinator.com/item?id=23694479


To an amazing degree. In the limit, the large vehicle proceeds nearly unimpeded and the small one is a ball of tinfoil.


I mean, that makes sense if we assume that cars are sphere-shaped with consistent density. Of course, they aren't.

I guess what I was looking for was actual data, not assumptions.


Data? My son in his Ford F350 had a collision with a compact car that pulled out directly in front of him across the highway at speed. Small car: totalled. Truck: some scratches on bumper, loud noise.


One of the more morbid yet profound aphorisms that I know is:

Safety regulations are written in blood.

FAA regulations are often based on finding a new way to break an airplane that they hadn't diagnosed before. Safety ratchets up as new failure modes are discovered. If cars keep showing up pancaked in a particular way, then guidelines change.

Until those failure modes are regulatory capture, at least.

In the 90's the collective wisdom said you could shock people into doing the right thing, and on several occasions they preserved a badly t-boned vehicle to illustrate why drunk driving was such a bad thing.

These days the pillars are huge and well-connected. I'm not sure you can crush a car that badly without a commercial vehicle. They test for that stuff.


The "pancaking" is by design; modern cars are made to "crush" and absorb the energy of an impact.


You might want to talk to someone about your pancake making skills.

A pancake is flat. There is no room in a pancake for a viable human. The car is all but obliterated.


The front pancakes. The cabin does not.


If I were talking about crumple zones I would have said crumple zones. I'm talking about the car as a unit. Is-a, not has-a.


Does anyone know of a way to objectively compare safety ratings between different eras of cars?

The standards for safety ratings change periodically, and great strides have been made in safety. I'm curious how much risk I'd remove by upgrading from a 2010 to a 2015 or 2018.

Driver death data [1] mentioned below may be the best source, just curious if there are any others.

Also, [2] is a really interesting crash test between a 1998 Corolla and a 2015 Auris (rebranded Corolla).

[1] https://www.iihs.org/ratings/driver-death-rates-by-make-and-...

[2] https://youtu.be/IVEjsvip8kc


Also worth considering is that anything after 2018 likely has intelligent crash prevention features. In addition to modern cars being much safer in a crash, the benefits of avoiding collisions altogether have to be worth it.

I’ve been driving a car with the pre collision warnings, lane departure warnings, etc since 2017 and it has saved me at least once.

There’s a concern about moral hazard with the new technologies, but my rule is to drive as if the car has no safety features, and if they activate, then that is a bonus.

It’s hard to compare the collision prevention systems over time since they’re so new, but just going off the narrative reporting from IIHS and comparing the test standard in 2017 vs 2020 leads me to believe they’ve made great strides in the last few years.


> Also worth considering is that anything after 2018 likely has intelligent crash prevention features. In addition to modern cars being much safer in a crash, the benefits of avoiding collisions altogether have to be worth it.

I expect they're a net gain for the public, but that for an individual who's conscientious and never drives under the influence, tired, or distracted, that they're only marginally useful at best.


>I expect they're a net gain for the public, but that for an individual who's conscientious and never drives under the influence, tired, or distracted, that they're only marginally useful at best.

Possibly even negatively useful if you consider that the car they come in has significantly reduced visibility compared to its '00s counterpart.


Some distractions are inevitable and unavoidable while driving, even if you know not to get distracted.

Lane departure avoidance systems that beep when you start to drift sideways are really annoying, but the preemptive braking stuff seems like a really good thing to have.


> https://youtu.be/IVEjsvip8kc

Starting at 1:30, there's a 1998 vs 2015 Corolla. Manufacturers only started taking the small overlap frontal test seriously in 2015 (it was introduced in 2012). If you can afford not to, you really don't want to be driving a car older than that. The safety improvements that have been made in the last 10 years alone make a new car worth it.


It seemed weird the airbag didn't deploy on the 1998 Corolla. Googling "1998 corolla airbag" brought me to an article about the same video. Apparently, the Australian-model 1998 Corolla they used in the test wasn't equipped with an airbag.

https://jalopnik.com/this-crash-between-a-2015-and-a-1998-to...


As much as I like driving a car into the ground, I think getting a new car every 10 years is the sweet spot between safety upgrades, frugality and shiny new toy.

Here's some highlights; keep in mind there's a lag between the first car with a feature and it becoming a widely adopted standard, and of course each new feature is always being improved.

1959 - First car with 3-point seat belt as standard
1971 - First car with anti-skid brakes (ABS precursor)
1978 - First car with electronic 4-wheel ABS
1983 - First car with anti-skid control (ESC precursor)
1995 - First car with ESC
1997 - Curtain air bags
2002 - Knee air bags
2004 - Lane keeping assist
2009 - First car with emergency pedestrian braking
2009 - Rear-seat center airbags
2010 - Seat belt airbags
2016 - Tesla Autopilot
2018 - Reverse cameras become mandatory


>As much as I like driving a car into the ground, I think getting a new car every 10 years is the sweet spot between safety upgrades, frugality and shiny new toy.

The balance between frugality and safety upgrades is my internal struggle. Growing up, I remember it being a big deal when cars hit 100,000 or 120,000 miles. That's pretty common now, which makes the safety aspect more important.

I average ~12,000 miles/year, which is pretty close to the US average of 13,500 miles/year. With cars lasting so much longer than they used to, buying an older low-mileage car means you'll be driving a pretty old car by the time it becomes high mileage.

I'm going to run my 2010 Civic for awhile longer, then probably look at a ~3 year old car with ~80,000 miles. I can run it for ~6 years, end up with ~150,000 miles, then sell it to a high school kid who is only going to put 5,000 miles/year on it.


Cars from 2010-2018 would have been subject to many of the same tests anyway that could be compared directly. Just don't compare aggregates based on tests that only the later models were subject to.


In my university physics course, we did a project where we made our own physics models of car crashes and compared them with crash tests and statistical data about real-life crashes.

Our conclusion at the time was that crash test results matched our models of collisions between identical vehicles with opposite velocities (i.e. a head-on collision, the most complex case we could solve analytically), which, if you work it out, should come out the same as the normal force when hitting something stationary like a concrete pillar. In this case crumple zones are the most important factor, as they allow you to decelerate over a longer distance.

The real-life crash reports, though, matched the result we got from simulations (which is what we resorted to when the models got too complex to solve analytically), in which case by far the most important factor was the weight of the cars involved in a given crash. Whoever is in the heaviest car will tend to win the game of conservation of momentum. Kind of a sobering conclusion tbh.
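Both regimes the parent describes fall out of a perfectly inelastic collision model in a few lines. This is a deliberate simplification (real crashes are only partly inelastic, and the masses and speeds here are made up for illustration):

```python
# Perfectly inelastic head-on collision: both cars share one final
# velocity, fixed by conservation of momentum. The occupant injury
# proxy is delta-V, the change in each car's speed.
def delta_v(m1, v1, m2, v2):
    """Return (delta-V of car 1, delta-V of car 2), masses in kg, speeds in m/s."""
    v_final = (m1 * v1 + m2 * v2) / (m1 + m2)
    return abs(v_final - v1), abs(v_final - v2)

# Identical cars at +/-30 m/s: each experiences a 30 m/s delta-V,
# the same as hitting a rigid stationary barrier at 30 m/s.
print(delta_v(1500, 30, 1500, -30))   # (30.0, 30.0)

# A 2000 kg SUV vs. a 1000 kg compact, both at 30 m/s: the heavier
# car "wins" conservation of momentum with half the delta-V.
print(delta_v(2000, 30, 1000, -30))   # (20.0, 40.0)
```

The asymmetric case makes the sobering part concrete: the lighter car's occupants absorb twice the velocity change of the heavier car's.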


This is the most confusing article I've seen from danluu. What data was used?

The glaring incongruity for me arose from the details on Tesla.

"For example, on the driver-side small overlap test, Tesla had one model with a relevant score and it scored Acceptable (below Good, but above Poor and Marginal) "

Note the words "above poor." But in the bullet points danluu uses a different scale that doesn't match this prose, and slots Tesla into "poor."

I'm sensing an agenda. But I'd rather keep an open mind, and look at the raw data myself.

The other odd thing is that the article mentions both 2012 and 2018 data, but doesn't provide a specific link to the data that was used. What raw data was used?

A clue lies in the other prose about Tesla in the article, which refers to a 2017 article about Tesla's pushing back. (The points about this being predictable as a PR move seem valid. However what's more interesting is the timeline.) 2017 is prior to 2018. Which points to the data being relied on here being the 2012 data, not the 2018 data.

The problem with using very old data is that the cars have evolved significantly since 2012. I mean, 2012 was the very first production year for Model 2. We're talking almost a hand crafted (though semi robotic) assembly line back at that point. And yet the article talks about this as though the data represents Tesla in general. We are eight years along into the future now — almost a decade — with that many years of improvements and two (correction, three) new models, if you don't even count the vast updates to the Model S. This should be disclosed in a way that is glaringly clear, not glossed over or minimized.

In short, I would take everything here with a grain of salt. The research question, though, seems like a very good one. I would look forward to hearing an update where the data is provided, the sources are examined, the funding for the lab tests is disclosed, and current car models are covered.


Most of this is based on the IIHS data; 2012 is the last time a new type of test was added (small overlap), so it's what's being used to make this comparison.

Almost all the cars mentioned in the article have likely improved since then, but the issue isn't so much how safe specific models are, but rather what approaches the manufacturers are using to improve safety. What the test results indicate is that rather than actually designing for safety, manufacturers are designing to meet a very specific set of benchmarks; there isn't much reason to believe that has changed since then.

The most obvious example of this (which the article briefly mentions) is the difference between driver-side and passenger-side performance: https://www.iihs.org/news/detail/small-overlap-gap-vehicles-...


>the issue isn't so much how safe specific models are, but rather what approaches the manufacturers are using to improve safety.

How would anybody possibly be able to characterize Tesla’s approach to improve safety today based on looking only at a test of one car eight years ago? You said it yourself: “improve” — the word implies a measurable delta in the data. But if the data is a static single value, there is no delta to work with.

And there’s no rule stating that this is the issue that a blog post must cover.

It’s a blog. The author is free to choose and cover any issue they please.

It’s just not a worthy issue to make claims about, if your data is so limited that you are (he is) trying to derive manufacturer behavior by looking only at a single car model year as a snapshot in time. It’s simply ridiculous to try to conclude anything from such data. And then to cover that up by presenting the results on twitter as a table that highlights the false conclusion... just dishonest imho.


Selective reading?

He acknowledges Tesla to be difficult to fit, right under the "Tier Table": "[...] Tesla not really properly fitting into any category (with their category being the closest fit)".

And in the "Bonus: Reputation" section, he supplies an argument/reasoning what he doesn't like about Tesla:

[...]I find the Tesla thing interesting since their responses are basically the opposite of what you'd expect from a company that was serious about safety. [...], they often have a very quick response that's basically "everything is fine". I would expect an organization that's serious about safety or improvement to respond with "we're investigating", [...]

For example, on the driver-side small overlap test, Tesla had one model with a relevant score and it scored Acceptable (below Good, but above Poor and Marginal) even after modifications were made to improve the score. Tesla disputed the results, saying they make "the safest cars in history" and implying that IIHS should be ignored in favor of NHSTA test scores[...]

(end quote).

In that context I think it's important not to skip on Tesla and mention the above. And by giving his thoughts this allows readers to make up their own opinion (and you're free to come to your own conclusion).

Full disclosure: Safety was one of the main reasons to get a Volvo. However, I am not someone to hate on Tesla.


>very first production year for Model 2.

Too late to edit but that should have said "Model S."


Volvo was always marketed as a safety product years ago. Seatbelts, seatbelt warning lights, airbags, and daytime lights — a requirement in Sweden — all spring to mind.

I had assumed that Volvo lost that edge this century so it’s good to see them at the top of the list.


I just took a quick look at Consumer Reports for Volvo's reliability and they're not great. A couple were even bad. If my memory is right, this has been the case for a while. It looks like maybe Volvos are safe, but not super reliable I guess?


On the other hand, all car makes have become much more reliable than they were in the past.

Mostly, what you're looking out for is cost of parts and difficulty of labor. For example, a lot of Audi/Volkswagen products suffer from simple repairs being made more complicated by the need to pull things apart for the mechanic to reach them.


In some cases, absolutely. Having to completely disassemble the door to repair a failing lock module is seriously annoying, as is making it impossible to remove the engine mount while the engine is in the car (seriously complicates timing belt changes on VW mk4 cars, because the belt loops around the engine mount).

But on the GTI (2009 in my case), they did something rather ingenious: instead of welding the outer door skin on, they bolted it on with about 25 short bolts. Taking those off takes about one minute with an impact driver, and you have full access to the door lock module and window regulator, as well as making rust or dent repairs almost trivial. Why can't everything be like that?


Speaking of timing... you got that timing chain tensioner replaced, right? Mine blew on my 2009 Eos and lunched the motor.


Five years ago! The GTI was my late brother's (now my parents'), and he was on top of that kind of stuff. Of course, we got a recall notice last year for it, way too late. And since we did it ourselves, I don't see that VW would reimburse us for the work.


Look at the replacement costs of the mirrors. Anything over $300 is excessive.


Very nice engines if you put oil in them but supported with expensive, fragile electronics.


When I designed car seats we optimized them for the federal tests. The biggest load on a seat back was represented by a large 95% man leaning back to get at his wallet. We were ready for production when the car had its first sled test, which basically mimics what happens to the occupants when hitting a solid barrier at 30 MPH. The front seat occupants flew forward, bounced back, and completely broke the seat backs, crushing the legs of the rear occupants.


That sounds like it was straight up underdesigned. The amount of force in an accident is way higher than a human can normally exert, so designing only for what a human can exert means you were doomed to fail in the accident scenario. The designer forgot to consider the edge cases.


What's a 95% man?


95% man, 5% Coors.


95th percentile in size


And more importantly: mass.


>For a long time, adult NHSTA and IIHS tests used a 1970s 50%-ile male dummy, which is 5'9" and 171lbs. ... >For reference, in 2019, the average weight of a U.S. adult male was 198 lbs and the average weight of a U.S. adult female was 171 lbs.

So the average mass of the American male has increased about 16% in 50 years! No data on whether he is taller, but I'd bet the BMI has gone up.


Referencing the same line, I remember seeing another article on HN somewhat recently about the downside of designing specs around an "average" human, since the "average" (mathematical mean of measurements) human never existed in reality. Something about fitting pilots to a cockpit?


These "average" cars have pretty terrible seat backs if you happen to be taller than the average. Outright destructive to the spine.



Over 70% of adult Americans are overweight or obese, according to BMI. Of course, BMI is an imperfect measure, but you're quite right to bet on the average American male having gotten fatter, not taller.


At 5'9" and 171 lb, the old 1970s dummy has a BMI of 25.2 and is already "overweight" (probably because of the heavy steel they have to build it around).

My main takeaway from Dan's article is that we have a lot of blind spots in our view of car crash safety, mostly for lack of samples, and also because the American consumer just doesn't care enough about safety during the buying process to demand better information.


This was an excellent article, car safety is something I think most people don't take very seriously when buying a new car.

However, the author mentions the 1959 Bel-Air vs 2009 Chevy Malibu crash test as a dramatic example of the progress of car safety. It is indeed a spectacular video, but there's a good case to be made that the 1959 Bel-Air was likely specifically chosen to exaggerate the progression of car safety, as it had a particularly terrible X-frame design that crumples like tissue paper in a crash. Other, more traditionally designed cars from that era would almost certainly fare significantly better in that test.

Take note of the comment below a Jalopnik article on this crash test:

>"Fast_Nel in the last story about this crash-off posted a video showing the deficiency of the X-Frame design of this car and it was an excellent video. If the IIHS would have chosen a car with a ladder frame, like say a 65 Impala or another car of the same year with a standard ladder frame, the results would probably been much better for the older car. I am sure IIHS purposefully used this particular style vehicle with the knowledge that it would fail spectacularly in order to toot its own horn. So yes there was something seriously wrong with the video.

One might ask, "Why GM would use a frame like this?" My guess is that GM attempted to be innovative with the X-Frame concept and they failed, because all the ramifications had not been considered."

Source: https://jalopnik.com/yes-the-iihs-crashed-59-chevy-had-an-en...

That's not to say that newer cars aren't safer in general, they very likely are. But that test in particular is not a good example.


Is passenger side small overlap really a common form of collision? I would imagine since everyone drives on the right side of the road, most higher speed small overlap collisions would be on the driver side. Perhaps car manufacturers were right not to optimize for passenger side small overlap collisions?

Frankly though, this is an area where I don’t think it is all that harmful for the metric to become the target, since the metric itself can be modified to reflect common crash conditions, and signal to car manufacturers what kinds of collisions should be optimized for to improve real world safety.

One big feature missing from the current suite of crash testing though is small vehicles colliding with larger vehicles. Walls only simulate colliding with the same vehicle, but small vehicles are especially vulnerable in collisions with larger vehicles.


Passenger side small overlap seems like something that would happen when someone tries to pull into your lane. As you say, the average delta-V for that collision is going to be much lower than for the driver's side collisions (passing on the right, or crossing the center line).


Overlap crashes tend to happen when the driver attempts to avoid an obstacle but there is not sufficient space to fully do so.


One thing I appreciate in supervised machine learning is that it's widely acknowledged that one should not test on training/tuning data. For both computer benchmarks and car crash tests, this is not the case. Both fields routinely work to optimize for a test, which leads to dramatic overfitting. This would be like passing out a standardized test years before students take it.

This seems solvable. For benchmarks, there should be held out tests that don't get distributed to (say) compiler authors until after the results are published. For crash tests, parameters like overlap and speed should be randomized each year so that manufacturers are forced to optimize for a range of possible crash scenarios.
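Randomizing the parameters each year is straightforward to sketch. The parameter ranges below are invented for illustration (the real IIHS small overlap test uses 25% overlap at 40 mph):

```python
import random

def yearly_crash_suite(year, n_tests=5):
    """Draw a fresh set of crash-test parameters each year so
    manufacturers can't overfit to one fixed configuration."""
    rng = random.Random(year)  # seeded per year: reproducible, but unknown in advance
    return [
        {
            "overlap_fraction": round(rng.uniform(0.15, 0.50), 2),
            "speed_kmh": round(rng.uniform(40, 64), 1),
            "side": rng.choice(["driver", "passenger"]),
        }
        for _ in range(n_tests)
    ]

for test in yearly_crash_suite(2020):
    print(test)
```

The point of the seed is that the agency can publish it after results come out, so anyone can verify the parameters weren't cherry-picked, while manufacturers still had to design for the whole range.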


>For benchmarks, there should be held out tests that don't get distributed to (say) compiler authors until after the results are published.

The way you really "solve" the problem is to have a third-party test organization run the benchmark and only provide the final result--and you only get one shot for a given generation of hardware/software. (This is sort of how standardized tests work.)

The problem is that you're now benchmarking with a totally opaque test that you have to trust is reasonably representative of the type of workload it claims to represent as well as trust that the benchmark isn't favoring specific vendors or design decisions that are irrelevant to real world workloads.

I assume there have been benchmarks at various times that were similar to this. Certainly, I'm pretty sure source code for benchmark suites hasn't always been available.


Somewhat related anecdote, related to me by an acquaintance who works at a major US crash testing facility:

I don't know about now, but, once upon a time, not too long ago, US safety standards included crash tests where the dummies were not wearing seat belts. This naturally leads to cars sold in the US having air bags that are tuned slightly differently, so that they are less likely to cause injuries to people not wearing seat belts. Unfortunately, doing this decreases their ability to protect the vast majority (nowadays) of people who do use their seat belts.


OEMs in the US market still absolutely have to meet unbelted crash performance requirements.

Fortunately modern airbags can be a bit more advanced than just having one "tune" for all situations. Depending on whether an occupant is buckled or not, the airbag control unit can elect to provide different amounts of restraint by controlling the timing of dual-staged inflators, and also cushion vents can be opened or closed either passively or electronically based on what cushion stiffness characteristic is needed.

I've heard that people buy seat-belt extenders or plug in a seat-belt behind their back in order to make the seat-belt reminder chime deactivate (and these chimes are only getting more aggressive) but it's worth noting that in any of the vehicles with this kind of conditional deployment logic they're exposing themselves to an even higher likelihood of being injured. Not to mention the people who straight just cut the wires to their seat-belt sensor.

I'm sure many people would love if the unbelted requirements could just go away, but until unbelted occupants stop accounting for such a large share of injuries/fatalities I don't think they will.


As far as I know, this is still the case. A significant minority of people in the US do not wear seat belts; thus the timing on airbags has to accommodate this, or the airbags themselves would cause injury.


I feel naked without my seat belt.

Occasionally, I'll get into my wife's car to simply move it into the street so I have room in my driveway to wash mine, and even though I'm driving less than 50 feet, I still feel awkward not putting on the seat belt.


The "Using real life crash data" section is just wrong. The IIHS confidence interval describes an empirical death rate, not an "expected fatality rate." Therefore, an interval including 0 deaths per million registered vehicle years arises due to very few deaths over a very large dataset, not from some "informative prior" bounding the interval at 0. As an empirical description, the IIHS figures should have high statistical power, after adjusting for confounding variables. Time and location could be interesting factors to consider.

For Dan's suggestions, the IIHS already breaks down death rates per 10 billion miles by vehicle class and individual vehicles, and the latter supports the original ordering by registered vehicle years. Also, incorporating data from models under the same make doesn't make sense. Since the IIHS already accounts for different vehicle platforms, any believed correlation in death rates between a manufacturer's models must derive from the original dataset, creating a meaningless statistic.

Updated IIHS doc here: https://www.iihs.org/api/datastoredocument/status-report/pdf...
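For intuition on how an empirical interval can touch zero with no prior involved: the exact (Garwood) confidence interval for a Poisson rate has a lower bound of exactly 0 whenever 0 deaths are observed, however large the exposure. A sketch using scipy (the exposure figure is made up):

```python
from scipy.stats import chi2

def poisson_rate_ci(deaths, exposure, conf=0.95):
    """Exact (Garwood) CI for a Poisson rate, in deaths per unit of exposure."""
    alpha = 1 - conf
    # Chi-squared quantiles give the exact bounds on the Poisson mean.
    lo = 0.0 if deaths == 0 else chi2.ppf(alpha / 2, 2 * deaths) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (deaths + 1)) / 2
    return lo / exposure, hi / exposure

# 0 observed deaths over 2 million registered vehicle years:
# the lower bound is exactly 0 deaths per million vehicle years.
print(poisson_rate_ci(0, 2.0))  # exposure in millions of vehicle years
```

The upper bound still shrinks as exposure grows, which is why a large dataset with zero deaths is informative even though the interval includes 0.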


On page 3 of [1] from the article, which shows the models with the most deaths, it seems like there's an almost perfect correlation between 'size' and 'deaths'.

I remember thinking about getting a mini Cooper, literally while driving in my family's quite large SUV, and my father indicating how they are unsafe due to size. I couldn't help thinking of the odd kind of 'race to size-safety' factor severely limiting people's ability to drive smaller, more efficient cars.

I want to drive a small car for a variety of reasons. But the moment I have children ... 'safety' might be my #1 concern (along with the convenience of a large utility vehicle). It could very well be that this 'safety arms race' is harmful.

[1] https://www.iihs.org/api/datastoredocument/status-report/pdf...


"I think it would be surprising if other car makers had a large suite of crash tests they ran that aren't being run by testing agencies, but it's theoretically possible that they do and just didn't include a passenger side small overlap test)."

The fact that Mercedes and BMW are in the category "poor without modifications; good with modifications" (i.e. with some modifications, the cars go from "bad" to "good") suggests to me that these two car manufacturers do have such test facilities, where they run such tests to be prepared for when new tests become included.


Reminds me of the modern web that is more optimized for SEO than for the users.


Whenever metrics are instrumented that measure productivity, safety, quality, or efficiency, the metrics and metrics alone are what matter, not the overarching goal.

Whether it's car crashes, fuel efficiency, lines of code, completed tickets, OKRs, or anything else, players in the system adjust to optimize output for the evaluated metric.

A tale as old as time.


Unfortunately, there isn't really a solution to this problem. Metrics are the only way to measure some things reliably and consistently across users, testers and subjects.

The goal is to have performance on the metric be as close as possible to the desired characteristic. This isn't always possible (and gets really hard when people are involved), but there isn't really any other solution.


Hidden metrics help, but it is pretty frustrating as an individual to be evaluated based on hidden criteria.


Yea, and no metric stays hidden forever if there is enough money in it getting unhidden.

Statistical metrics have some of the benefits of hidden metrics while staying "hidden" forever, but they also can be much more time consuming to do right and are even more frustrating to be evaluated by.


The more direct control people have of a metric, the more likely it is to be gamed. Some metrics are almost guaranteed to be meaningless and unproductive: bug counts (open/closed), lines of code, closed tickets.

Other metrics are informative, but game-able: fuel efficiency, crash tests.

Other metrics are expensive and potentially confusing: customer satisfaction, number of automotive deaths, CO2 levels, pollution levels.

If you are measuring a game-able metric, you should probably also measure a higher level metric and make sure that they are both heading in the same direction.

Obviously this is expensive and complicated to figure out.


For those who find this website's edge to edge text maddening to read on large displays, I highly recommend Sakura (https://github.com/oxalorg/sakura) for a one-click fix for badly formatted websites.


Reader mode is another option on most browsers.

However, it can also be said that a full-screen browser on a landscape widescreen monitor is "doing it wrong."


This was a very interesting article. I’m curious if auto makers knew about the new tests being added, and if so, how far in advance? It certainly seems that if they did know about the new tests, it was far too long into the vehicle development to make changes.


"When a measure becomes a target, it ceases to be a good measure."


Also, don't forget: crash tests (in the US at least) don't measure safety for non-occupants, i.e. pedestrians and other road users.


There must be stats for injuries or deaths by car. Is it public?



Thanks, that is a good resource.


Those are just driver deaths.


The request was vague, but insurance industry figures are going to be the best breakdown by vehicle you can get in the US. NHTSA stats are usually broken down by category of vehicle.


Must there be? One would imagine that there should be, but comparing over time and from different jurisdictions is quite difficult. What is the baseline? Many car accidents never get reported to any official body (single car accidents, fender benders). How do you classify an injury? I was involved in an incident that looked quite horrifying if you looked at the cars, but no injuries were reported at the time.


Yeah, it's a problem too that not all cars are driven the same. Sports cars will have more accidents than family cars because they're driven more aggressively. Cars with 4WD in the snow probably crash more often than 2WD ones in the Sun Belt, but you can't say they're more dangerous.


> Sports cars will have more accidents than family cars as they're driven more agressively.

Surprisingly, that's not always the case. Particularly with higher dollar sports cars, which might commonly be driven only for pleasure and with limited mileage.


How about F1 cars?


Back in the 1960's and 1970's F1 was pure carnage. Things have gotten a lot better.

Here is a summary of improvements: https://www.autosport.com/f1/news/149628/history-of-safety-d...

Here is a quite scary crash that Fernando Alonso walked away from: https://www.youtube.com/watch?v=x45fLUTHCuk

The basic idea is the car is designed to be completely destroyed except for the survival cell for the driver.


The only safe way to drive an F1 car is with remote control.


No, you could also safely drive an F1 car from the driver’s seat by limiting your speed to 40 km/h.


Maybe. Last I checked, the tires on F1 cars are optimized for traction at actual racing speeds (when they get hot and stickier), which means that at low speeds F1 cars actually have much poorer traction than one might think. Only matters on turns, presumably, and 40km/h might be slow enough that it does not matter, but I'd really want to see some experimental data before deciding on anything that happens so far out of a vehicle's intended-use envelope.


Could you? Here[1] is Richard Hammond driving an F1 car on Top Gear, and he says it can't go slowly, safely - no downforce, no heat in the tyres, no heat in the brakes. If you're coming off a lot of fast laps into the pit lane they will still be warm, if you try to drive it at 40km/h all the way they won't be.

[1] https://www.youtube.com/watch?v=EGUZJVY-sHo


Hammond is correct, however the term "slowly" in an F1 car would refer to speeds like 60–100 km/h. Note that the pit lane speed limit is generally 100 km/h during a race. I picked 40 km/h as it's a speed which even the hardest rubber tyre would not present difficulty retaining traction even without the aid of any downforce.

And indeed, if my memory of Hammond's experience is anything to go by, the F1 engine may not even be able to handle being driven exclusively at such slow speeds without stalling out.


I doubt a normal driver would be able to drive a F1 at any speed. Just keeping it below 40 km/h would be impossible.


The car itself can limit speed like they already do in pit lanes


Has there ever been a full or partial head-on collision with an f1 car?


Yes, in Abu Dhabi in 2010, Michael Schumacher spun 180 and Vitantonio Liuzzi then hit him head on.

It was at relatively low speed (for F1) but Schumacher's head was very nearly hit as the other car rode over his.


They've been T-boned in modern times, and although that's not exactly a head-on collision with another vehicle, they do crash straight into barriers at 100+ MPH with very limited injuries.


Colliding with a rigid stationary object is generally just as bad as a head-on collision with a similar vehicle. The shortest shorthand for why this is so is that each car brings its own momentum and its own crumple zone.


My guess would be that one car spinning 180 degrees and being hit by another must have happened, even with the spun car at rest and the other one at high speed, but two cars moving at speed in opposite directions? Unlikely.


Optimizing for a test is an issue if the test is not well correlated with the desired outcome. By and large that doesn’t seem to be a big issue for crash tests.


In this case, it's an argument that crash testing should be more expansive and thorough.

For example, if engineers are going to position the driver and passenger airbags for optimal crash test performance, we probably shouldn't just be testing with a male dummy in the driver's seat and a female dummy in the passenger's seat, as in the modern age women often drive.

If engineers are strictly designing to the test, there should be a suite of repeats, putting dummies that are male, female, fat, thin, tall and short in every seat.


>By and large that doesn’t seem to be a big issue for crash tests.

Source? The article provides data as a counterpoint:

>Check to see if crash tests are being gamed with hyper-specific optimization looking at what happens when new tests are added since that lets us see a crash test result that manufacturers weren't optimizing for just to get a good score


The problem is that they're optimizing for very specific crash tests. Optimizing for those alone may involve trade-offs for other tests. Doing super well on an offset head-on collision at 30 mph may involve adding weight and supports that make things worse for a collision at an angle.



