Hacker News
San Francisco cops pull over a Cruise driverless car for no lights on (theverge.com)
311 points by bmitc on April 11, 2022 | 486 comments



I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.

Tradition has it that when you load-test a new bridge, you put the architect underneath. I feel like that architect, except I didn't design these driverless cars; somebody else did. As an experienced software engineer, my trust in the software in these cars is pretty low. And yet they are being tested on me, because I could be the one who gets killed.

I think we should set a much higher bar for allowing those cars on the streets, rather than "it kinda works, so let's roll with it".


My wife's 96-year-old grandfather still has a driver's license because he passes a ridiculous vision test with one of his eyes. He can't read a newspaper or see a television well enough to enjoy sports (previously his favorite thing). He says he's memorized the vision test at his doctor's office, and when asked, the doctor refuses to file a report (or whatever procedural step is required to remove his driving privileges). He has totaled a car roughly every six months for the past five years. Luckily for other drivers, it's always something like hitting the concrete base of a light pole in a large parking lot. Even so, he and my wife's grandmother have been hospitalized each time, because 1) of their age and 2) it sometimes happens at over 35 mph.

All to say, the core assumption that the roads are safe is completely false. I'd gladly share the road with beta AI versus other drivers like this. Being an occupant in a beta AI car is completely optional.


A huge percentage of elderly drivers shouldn't be on the road. We (the US) seem, as a society, to have decided to just live with that rather than trying to make it easier to live here without a car.


It's more that they vote at higher rates than young people, and no politician is going to tighten the driving standards in their state only to be antagonized by the AARP.


If you’re in California, you can file a report with the DMV since you presumably know both his license plate and his name.


If there's one thing that seems to be much more forgiving in the US, it's car insurance. How have his premium and excess not become unmanageably expensive? Or how has he not been denied insurance outright?

UK car insurance feels more like US health insurance. High premiums, and heaven forbid if you get into an accident.


I think it has become expensive by most measurements. Except, for them at 96, the cost and vehicle buys them independence. The alternative is to move into a senior living situation, which is quite expensive as well. I’m pretty sure they only cover him for liability and he keeps just buying new cars every time. He has some minor wealth and mailbox money type arrangements and at this age he’s in a “can’t take it with me” phase of his financial justifications. So he pays the premiums.

Why they keep insuring him with no regard for the danger he poses to others is an interesting question, and it's because their financial exposure is all that matters. They are betting that he won't kill anyone, and if he causes some property damage, they feel protected from a risk-adjusted perspective. But no larger sense of corporate responsibility or citizenship is in place.


We are all unwilling testers of every new teenage driver who hits the road.


That teenage driver goes through a training and certification process, and we have ways of stopping them from driving if they fuck up too badly. They also have liability for their actions.

Most of those are lacking or at least inconsistent, for AI driving.


That teenage driver goes through a training and certification process, and we have ways of stopping them from driving if they fuck up too badly.

This is a complete aside, but not having a license doesn't really prevent you from getting in a car and driving it. I was just reminded of this when a buddy's parked car was struck by a drunk driver who was not legally licensed because their license had been revoked after many prior DUIs. Luckily they didn't kill anyone.

Hopefully it will be easier to enforce shutting down unsafe driverless cars.

I've also known teens without licenses who had to drive their irresponsible buddies' cars home after the licensed driver drove somewhere and got drunk. It's not like there's a biometric scanner in each car that verifies the person behind the wheel is a licensed driver...


Kind of a moot point though. You can't really stop anyone from doing anything, the best we can do in a free society is create laws that deter undesirable behavior.


In general, most of our cities (in the US) are very car-centric. Better than passing a law and hoping for the best while waiting to punish people, we can give people realistic alternatives to drunk driving to prevent it in the first place. Likely, these would have a second order effect of giving everyone realistic alternatives to driving too. If driving yourself isn't the norm, then drunk driving becomes more rare.


> we can give people realistic alternatives to drunk driving to prevent it in the first place

Such as?


In Japan, there are taxi-like services that will drive your own car. Two people arrive in a taxi: one drives you in your car, and the other follows in the taxi (and picks the first driver back up once you reach your destination).


Off the top of my head, public transportation, ride sharing, making streets more pedestrian friendly, denser zoning.

I had a great bar that was less than a mile from where I used to live. I rarely went because there was no way to walk there. No sidewalks in the neighborhood, no traffic lights, no bus that stopped near it. It's a shame. It closed shortly after I moved away. In the paper they said they just didn't get a lot of people coming in.


> public transportation

A great idea that I am in support of, but it's not a "realistic alternative", public transportation is politically fraught, expensive, and slow to build in the U.S. I'm not saying we shouldn't continue to strive towards it, but it's not happening any time soon in the U.S.

> ride sharing

This is the status quo.

> making streets more pedestrian friendly

Same issue as public transportation - slow, expensive, politically fraught, but also of dubious ROI in terms of preventing DUIs; nobody is going to walk 5 miles home from the bar because of streets that are friendlier to pedestrians. Of course, there are other worthy reasons to create pedestrian friendly streets, but they don't really represent a realistic alternative to someone prone to intoxicated driving.

> denser zoning

Again, not really a realistic alternative. Zoning issues are politically and economically contentious at a level that transcends concerns about drunk driving, and yet the needle on that issue barely moves due to NIMBYism and entrenched special interests.


Or maybe we can hurry up with driverless cars so that we can take the wheel away from people with DUIs as soon as possible.


Driverless cars aren't anywhere near being able to prevent DUIs on a statistically meaningful scale. If lives are what you care about you'd be better off having the government subsidize uber rides.


Two human drivers passed me today in a street where passing me was unconditionally forbidden. In that street no car may pass any bicycle, ever, and there's giant signage. Are you suggesting that we have consistent, reliable ways to prevent those two from driving again?


That is certainly the idea behind traffic policing, license points, etc. All you need is police who enforce the laws that already exist.


If I were to send the police a 30-second film showing someone overtaking me illegally, it's near certain that nothing would happen. Unless someone were harmed.

If, however, I were to send the certification authority a film of a driverless car doing the same, my guess is that the vendor would be required to investigate and fix it, even if the film showed just a single incident and no one was harmed. I base that guess on the authority's past behaviour.

Speaking as someone who's often in harm's way, I find the possibility of getting rid of conscious rule violations very attractive, even if that doesn't affect the number or effect of software bugs at all.


Here in San Francisco you don't actually need to go through any training to receive a drivers license. You just need to pass a computer test (of mostly easily memorizable material) and a road test that consists mostly of turning left and right a few times. They don't even test highway driving, three point turns or parallel parking like they do in some other countries.

Also, didn't the entire Uber fleet get grounded after the Tempe death, with a huge months-long NHTSA investigation?


I don't know where you live, but my "training and certification process" was mom driving with me for a few weeks, then I went to the driver testing office, drove around a closed course, proved I could stay on my side of the road, stop at a stop sign, and parallel park. Then I got my license without ever being tested at over 25mph. Even with the state of today's technology, I'd trust a driverless car more than I'd trust a new teen driver.


> Even with the state of today's technology, I'd trust a driverless car more than I'd trust a new teen driver.

No way. Yes, driver education in the USA is appalling. It should be much more thorough and cover a lot more ground, particularly emergency maneuvers and car control.

Nonetheless, a human driver has self-preservation instincts which are very deeply hardwired into the brain. Software has nothing like that: if it has a bug, it will crash and take out whatever and whoever is in the way, since it doesn't have, and can't have, any emotion or self-preservation.


That's a pretty generous interpretation of the oversight for the average teenage driver. Most controls are lacking for ALL drivers, human or not.

The first footage that comes out of a felony traffic stop on an AI driver will be amusing!


"That teenage driver goes through a training and certification process"

Did anyone else spit out their morning coffee at this hilarious one-liner?


Feel free to argue that licensing isn't strict enough. I'd agree. But AI driver certification is effectively nonexistent, judging by Tesla's bullshit behavior.


It's not that it's not strict enough, it's that driver certification is so outrageously lenient that "existing" is about all it accomplishes. I would trust any AI on the road today over a "certified" 16-year old or 85-year-old.


The vehicle manufacturer is licensed here, all safety operators involved are licensed, and the DMV will pull the operations permit or vehicle registrations if they fuck up too badly. This has already happened to pony.ai and Uber.

More consistency would be better, but it's a California bureaucracy. You get what you get.


In fact, liability has been limited in this case by California laws...


And every drunk/elderly driver -- the car's decision to move after a traffic stop felt very much like the behavior of someone with impaired cognition.


Anecdote: There was an old lady living nearby who, despite being 80+ years old, was still driving around in her 20-year-old car. She literally ran over traffic cones and ... a child on a tricycle [1] and never got prosecuted.

She recently passed away ... sad, but the streets around here are definitely safer now.

(This was in Germany just for reference)

[1] The child was fine; it was a low-speed collision ... but still ...


I caution against age as a sole proxy for driving ability. As a contradictory datum, my 80+ year old grandmother (in the USA) maintains a 100+ acre farm. She daily drives an (impeccably maintained) 2006 3/4-ton pickup, including multiple yearly 10+ hour trips hauling a 10k+ lb trailer. She almost always brings a passenger on these trips for companionship and safety. I've never heard any complaints, and never personally observed her driving unsafely.

It won't be this way forever, and I'm under no illusion that she is the norm for her age, but I'd hazard she's a better driver than most people on the road.

I would like to see a move toward more frequent driver testing for all drivers in the USA, decreasing in frequency after the first few years of driving and increasing in frequency for older drivers. Testing after initial licensing is not the norm in any state I've inhabited, so this would probably require a significant uphill battle on both the individual and government fronts.


Triggering testing/education after traffic violations might be a middle-ground.


This seems too reactive. I agree it is better than nothing, but if someone is going to have the wherewithal to put a law in place around this, I think the goal should be more about preventing violations than punishing them. Maybe you should have to re-take the test when your license expires, instead of the license being a lifetime certification. That would be every four years in my state. Not sure if it is different state to state.


States, even very Red or Blue ones, already introduce mandatory vision tests and inquire about medical history as drivers age.

https://www.dps.texas.gov/section/driver-license/drivers-age...

https://www.dmv.com/md/maryland/senior-drivers?tg1=DVA&utm_c...

Florida has vision requirements, and accepts reports of medical issues making driving unsafe: https://www.flhsmv.gov/driver-licenses-id-cards/medical-revi...

In that context Florida’s standards are lax, but still exist.


The DMV worker who administered my mom's last test definitely fudged things so she could renew her license.

Luckily she basically never drives anyway [edit: because my dad who's still a decent driver drivers her around when she needs it because they both know she's not safe on the road, not because she can get by without a car], but I doubt that was an isolated incident. Because of how we've designed our cities, not being able to drive can represent a huge loss of freedom. In the moment, it seems some (and I'm guessing many) DMV workers take pity on the vulnerable elderly person in front of them rather than some hypothetical future victim of that driver.


I remember doing traffic school about ~15 years ago, and realizing that there were several points of the law which I had not adequately learned when I first got my license. Over the years of driving, I've gotten better at some parts of driving (being safe, etc), but I'm _sure_ there are parts of the law that I have forgotten. (e.g., is it legal in my state to turn through a crosswalk if the pedestrian is past the halfway point?)

I feel like if driver licensing required re-certification, everyone would hate that, but it would have the added value of making sure to reinforce that people knew what the law _was_. Maybe it'd be less painful if our insurance already subsidized the cost. I feel bad even advocating for such a thing.


I'm _sure_ there are parts of the law that I have forgotten.

Or perhaps laws have changed, or new laws were made that you were unaware of (through no fault of your own; these things are hard to keep track of!). I agree people would hate it, but I think there's value in, like you say, giving people a refresher on the law.

I also think there's value in reminding people that driving is not an inalienable right and also inviting a person to re-evaluate their situation every so often to think if they still really need a car.


There was recently an accident in Miami Beach: an elderly woman driving a Bentley reversed into a terrace restaurant, killing one and injuring six, including a 3-year-old [0]. No charges were filed, because being a senile driver (and voter) is not against the law.

[0] https://www.local10.com/news/local/2022/02/25/recalling-trag...


In the UK there’s a catch all for this called “driving without due care and attention”.


A teenage driver can lose their licence. What happens with an AI in this case? Can Cruise lose its licence?


Who gets a license, though? Cruise, the company? The currently deployed algorithm? Each car w/ the software?


Yes.


> Can Cruise lose its licence ?

Of course


I think elderly drivers are probably more dangerous than teenagers. Lately, whenever I've seen a really reckless driver, it's been someone very elderly who looked totally overwhelmed by the experience of driving. I've seen old people struggling to keep up with traffic, driving in bike lanes, or driving on the side of the road thinking it was a second lane... There should be more frequent driver exams (and vehicle exams) in the US.


One of the more surreal things I've seen is in Naples, FL, where you have the elderly behind the wheel of Lamborghinis, Aston Martins, Ferraris, and lots of other very, very powerful sports cars. The valet needs to give them extra help getting in and out, but then off they go. A simple muscle twitch blipping the gas pedal on those cars would put them so far out of control that it would be funny if it weren't life-threatening.


Aside: I'm happy to see that teenage drivers today have a lot more required training than I did when I got my license. At least in this US state, and likely most of them.

I'm also really happy about the continued advances in auto safety technology.


Has that training had any noticeable impact on accident statistics? Seems like just more bureaucracy to me.


My recollection of learning to drive many years ago is that I took driver's ed and drove my parents around a bit, but then didn't drive much or own a car until after grad school. And therefore, I didn't really have much driving experience until later. (And didn't really learn to drive a stick until many years later, when I bought a car with one.)

I guess I'm skeptical that having some more classroom time and more rigorous testing improves young people's driving a lot, much less makes much of a difference to how people with a few years of driving under their belt drive.


I think the tests are just the colors under which bureaucratic red tape happens to sail. Anything that creates barriers to entry will result in people being older before they get their licenses, and that is where the bulk of the benefit comes from.


Is there any evidence that young drivers are dangerous due to age rather than inexperience?


Teenagers aren't known for not putting themselves in dangerous situations.

I don't think teenagers are any less safe than a person with equivalently little experience in normal traffic situations, but they are definitely more likely to be less conservative when they do decide to push the limits. There's a difference between navigating a complex intersection and knowing how fast you can/should go on an empty highway.


US driving test standards are still a joke compared to overseas.


There is no such thing as "US" driving test standards. Each state has its own requirements. Some are quite weak, and some are very rigorous: for instance, Maryland driver's licenses are accepted 1:1 as equivalent to the German driving license; you can exchange them without an exam or test. NY and CA licenses do not simply transfer.


The interesting thing is that Maryland drivers are among the worst I've seen across a wide spectrum of factors. Sober California drivers seem pretty safe. Virginia drivers are miserable on interstates because they pass one another at 1-3 mph differentials (I assume this is a response to the reputedly harsh speeding enforcement there) and are victims of often-wrong signage, but are otherwise uninteresting. South Carolina drivers cause dangerous situations by trying to give away their right of way. Maryland drivers aggressively fight back against merges, change lanes erratically, and crash in a quarter inch of snow at shocking rates, despite the fact that their salt trucks have higher mass flow rates than their snowstorms. The only drivers who stand out vs. Maryland are in big military areas, where there's a significant minority of people driving 20 mph faster than traffic and merging erratically while doing so.


I think it's more a Baltimore / DC area issue than a Maryland one. Baltimore consistently ranks dead last in best driver reports with DC right behind: https://www.allstate.com/americas-best-drivers/index.htm

The commuting structure of those cities is just awful and begging for road rage.


Having just left VA after a year in residence, VA drivers pass at 1-3mph deltas because they're already doing 15mph+ over the speed limit. Perhaps enforcement is different in southern VA, but I'd estimate only seeing speed traps/vehicles pulled over once every 3-4 weeks.

Maryland (and, to a lesser degree, New Jersey) drivers are legitimately terrible whenever you find them.


Maybe the stricter tests are a result of the bad driving, and not vice versa.


I mean, have you seen German drivers?


I don't think any US state requires you to take several full-day courses after getting your license, for snow and ice handling, as is required in Switzerland for example. Also the road tests I've seen are a joke compared to both the Swiss and South African tests, both of which I've gone through. Requirements to check your mirrors every 10 seconds. Check your blind spot before indicating, then again, before merging. Reverse alley docking and parallel parking in tight spaces. Emergency braking from a fixed speed within a certain distance.

Of course, after years of practice this is how you should be driving. But almost nobody who doesn't explicitly practice for the test can actually pass.

Whether a license is accepted reciprocally is more a political decision than based on test stringency.


That doesn't mean much because most states, including Texas, also have full reciprocity with Germany. There doesn't really seem to be any rhyme or reason to what states they deem "equivalent".


When I got my license, I was advised to do it in nearby Kentucky, since they didn't have a driving portion of the test at all; all you had to do was pass a written quiz.


In Texas, I got my license through a self-teaching avenue where my parents just had to sign off on a form that they taught me. IIRC the form had fields where they log the times they took me driving.

All of my friends were taking these driver's ed classes after school and were a bit jealous that I got my license without a single test.


When was that? KY requires a road test now. Also, it requires you to prove KY residency. I'm surprised people were able to get KY licenses without being residents before.


About 9 years ago, and the people advising me were working off information older than that I'm sure.


Yeah, after asking my earlier question, a quick google found the 2006 driver's manual which indicates you must be a KY resident, and discussed the road test.

https://transportation.ky.gov/HighwaySafety/Documents/2006_k...


The US has lower per-kilometer vehicle fatalities than South Korea, Mexico, the Czech Republic, and Belgium, and is similar to Japan and New Zealand.


It's difficult to make an apples-to-apples comparison with this statistic because the US has more sprawl.

Like sure, per-kilometre fatalities may be down, but that says nothing of:

1. The number of non-fatal crashes

2. How much of this incorporates driving on large highways

3. When not on a highway, the degree of traffic in which fatal crashes can't occur (can't get in a high-speed accident in high traffic in New York, for example)

Needless to say, the roads of Japan and New Zealand are so different from those of the US that it isn't clear this statistic is really informing us of much.


A teenager is worried about: getting killed, killing someone else, hurting someone, wrecking their parents' car, losing their permit. I don't trust engineers working on the models that drive these cars to teach a computer the difference between a human and an open road. A teenager can discern these with no effort. I trust a teenager to care about not hitting me and try their best, and to choose a ditch over hitting someone when the decision time comes.


*A teenager is worried only about themselves and their immediate happiness.


I think you'd be immediately unhappy if you crashed your parents' car.


In the typical teenager's mind, that would be after, not if


Was it an "if" in yours when you were a teenager? There are lots of strong statistical differences that are visible in the teenage population and trickle their way into insurance tables.

Basically, we could do a better job engendering concern in teenagers; it's not some kind of inherent mental incapability.


My point was that worrying about hypothetical outcomes is not usually at the forefront of a teenager's mind. Sure, they will regret wrecking their parents' car, but that doesn't mean that they will avoid actions that might wreck said car.


Yup, got that.

Female teenagers are way less likely to have wrecks [1], and if we assume that's driven not by one's genitalia but rather by their conditioning (an assumption I hold), it's something that can be taught.

"In 2019, the motor vehicle death rate for male drivers aged 16–19 was over two times higher than the death rate for female drivers of the same age."

1. https://www.cdc.gov/transportationsafety/teen_drivers/teendr...


We are discussing a hypothetical teenager who doesn't exist, who knows?

Besides, I had thousands of HN karma (the most important metric of maturity of course) when I was a teenager not very long ago.


They might be worried but suck at risk evaluation so bad that the worry doesn’t really help.


To add to your point: it doesn’t bother me so much that the AI system doesn’t share these priorities. What I worry about is that the companies designing them are not incentivized to.


Surely a company is absolutely incentivized to share these priorities?

Every accident is a hit to their profits and PR.


No, because they can benefit more from recklessness than the expected profit/PR damage costs them. Someone already mentioned Tesla, but there is also the Volkswagen emissions scandal, Boeing's safety failures, and many more examples.

At the extreme end, picture a company that manages to get an okay from the city of SF for public testing, but whose internal tests show there is a small but significant risk of serious accidents and the product needs another two years of development to be safe. Their funding runs out in one year.

What do you think they would do?


I think Tesla has shown fairly clearly that this is untrue.

It only takes a hit if there isn't very solid marketing/PR or a charismatic leader to distract.


A human being is not a software or engineering project


Depends on the country, but here teenagers are drilled so much during their L phase, and on cars with manual transmissions, that they are probably the safest drivers on the road.


Not true at all in the US. (And no, kids don't learn to drive a manual, because no one uses manuals in the US.)

https://aaafoundation.org/wp-content/uploads/2017/06/aaa_fig...

Teens are roughly 3x worse than a 20 year old, and 10x worse than a 50 year old.


A teenager's ability to drive safely isn't just the mechanical ability to operate a car or awareness of the laws/rules. Safe driving comes from years of experience: being able to judge how long it will take to stop at certain speeds in certain conditions, being able to recognize that their car's brakes are a little soft and might not stop on a dime any more, being able to recognize that the driver in front of them is potentially impaired, so best give them some space.

There are plenty of things that come with experience vs. book learning that make a safer driver. A freshly licensed teen comes with none of that. They also have a natural tendency toward high risk tolerance, which makes their decision making different from an older, more experienced driver who is more risk-averse.


Hearsay has it that you have to drive more than 100,000 km in diverse conditions, with the right mindset, in order to become a fluent and experienced driver. At average mileage, that will take most people around a decade after they begin driving. Most people never seem to become fluent, experienced drivers, because they lack the practice, or the mindset, or both.


Exactly, except I'm not sure where the 100,000 km figure was established.

I'm in North Texas, so I have a personal rule that if the temperature is forecast to be below freezing with any form of precipitation, I do not leave the house. People make moronic decisions about how to drive in those conditions because it doesn't happen often enough, and as a collective, the proper ability to drive in them is forgotten every single time.


That is my experience from one visit to the Arlington area years ago. One morning it was below freezing and there were some icy patches especially on bridges. Drivers were either creeping at 2mph or spinning out. There didn't seem to be anyone on the road who knew how to handle it.


How do you drive on icy roads though? Creeping at 2mph sounds like a reasonable choice, at least you won't make a lot of damage when you slowly slide into an obstacle.


>How do you drive on icy roads though?

It's a trick question. "You don't" should be the response.

For those that do, my first question is what is so damn important that you have to be driving in icy conditions in the first place? Schools are closed. Most businesses are closed. If my business isn't closed, I use a personal day. This isn't Chicago/Minneapolis where the snow lasts for 3 months. This is N. Texas. It's typically only cold enough for these conditions for a couple of days and typically only at night. Which actually makes it worse as it melts in the day, refreezes overnight, and people forget about the refreeze and drive as if things are normal.


experience vs book learning

Why would you assume that a driver's education is a matter of book learning? Is that how you were taught?


Yes, it was. I am of a certain age: we had a certain number of classroom hours, a certain number of hours behind the wheel with an instructor, and a certain number of hours in a car as an observer with an instructor.

We had book(lets) that we had to read from to learn the rules/laws of the road. There were quizzes/tests to be taken during those classes in order to pass the class. Once the class was passed, I then had to take a written exam issued by the state. That was followed by an actual driving test with a state official in the car.

So I'm assuming based on my personal experience. I know times have changed, and you no longer have to do silly things like learn how to drive, or be required to wear shoes while driving, in today's world.


To a first approximation, I'm skeptical that anywhere in the world a supervised learner driver gains sufficient on-road experience to count as a skilled, experienced driver. Clearly, different places make somewhat different tradeoffs with respect to licensing. However, I'll go out on a limb and say the majority of drivers with 5 to 10 years of experience are better drivers than newly licensed ones.


Yeah definitely not the case in the US. The New Jersey driver’s test consisted of a single stop sign and a parallel parking test.


Lol, in Florida I looped around a parking lot and parked head-in. From there you're allowed to get on I-95... in south Florida. I guess if you don't die the first time you get on it, it's like passing a test anyway.

Luckily for me, I had been driving for a while and was initially tested in a more rigorous country; I was just getting my license after I moved to the US. But it explains a lot of what I see on the streets.


You didn't design the aircraft that you fly on, which are incredibly computerized. Also the drugs that your doctor prescribes. Etc.

You put your faith in myriad bureaucracies and trust networks that you have no hope of understanding. Driverless cars are just one more added to the list.


The difference is that airplane software standards are by far the most rigorous in the world, so rigorous that most software developers are shocked at the level of rigor employed, whereas driverless cars have exactly zero software standards with respect to their safety or fitness for purpose. Comparing the processes in play is an outrageously fundamental category error; the systems have standards that are something like six orders of magnitude apart.

As an example, just look at the 737 MAX, an airplane viewed as a literal deathtrap and a complete indictment of airplane software development, which had 2 crashes in 400,000 flights. At an average flight distance of ~500 miles, the 737 MAX, an airplane literally multiple orders of magnitude worse than any other and a complete failure of the approval process, only resulted in 1 fatality per 100 million person-miles. That is safer than the driving average of 1.1 fatalities per 100 million person-miles. What is viewed as by far the most atrocious outcome in the airplane sector, a system literally hundreds or thousands of times more dangerous than the average, is above average in the car sector; that is how far apart the gap is.
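The arithmetic behind that comparison can be sketched in a few lines. The passenger count and death toll below are my own rough assumptions (about 170 passengers per flight, ~346 deaths across the two crashes), not figures from the comment itself, so treat this as a plausibility check rather than an exact calculation:

```python
# Rough fatality-rate comparison: 737 MAX vs. the US driving average.
# Assumed inputs, not official figures.

flights = 400_000          # approximate 737 MAX flights before grounding
miles_per_flight = 500     # average flight distance in miles
passengers = 170           # assumed average passengers per flight
deaths = 346               # assumed combined toll of the two crashes

person_miles = flights * miles_per_flight * passengers
max_rate = deaths / person_miles * 100_000_000   # per 100M person-miles

driving_rate = 1.1  # US average fatalities per 100M person-miles

print(round(max_rate, 2))       # ~1.02, i.e. about 1 per 100M person-miles
print(max_rate < driving_rate)  # True: even the MAX beats the driving average
```

Under these assumed numbers, the MAX comes out at roughly 1 fatality per 100 million person-miles, which is where the comment's comparison comes from.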

And this is ignoring that driverless car software is being developed to even lower standards than regular car software and has no actual independent quality requirements. Meanwhile, far more lives are at risk with driverless cars, so they should be developed to standards probably around 1000x better than airplane software, which puts them around 0.0001% of the way to an acceptable level of quality even if the software were already as reliable as regular car software.


> The difference is that airplane software standards are by far the most rigorous in the world and so rigorous that most software developers are shocked at the level of rigor employed...

You mean the standards set by the FAA, which regularly defers to the private industry to regulate itself, due to complexity that it can't afford to keep up with? Those standards?


Yes, those standards that produced a safety regime of 20 years of flights without software even being implicated in a single fatality. Literally trillions if not quadrillions of passenger-miles per software-related fatality. Those same standards which result in a system like the 737 MAX rightfully being excoriated for only being 10% safer than driving, and which resulted in successful demands for grounding of the entire fleet.

As you note, unfortunately the standards as employed in practice have recently regressed dramatically to allow a system as unsafe as the 737 MAX to fly. So it seems only prudent to go back to the historical, far more rigorous process which did produce the amazing historical safety regime. And, since self-driving cars are an even harder and more dangerous technology with far more downside risk, they should use a standard even more rigorous than the historical airplane standards, which are still, from a safety perspective, multiple orders of magnitude better than any other consistently applied software process, including the one for cars.


> Literally trillions if not quadrillions of passenger-miles per software-related fatality.

So these were all miles where people were unwilling testers as well. The difference being that airplane autopilot is primarily focused on keeping the plane in the air, not dodging pedestrians. I think it also issues a lot of warnings for the pilots, but I'm not in the industry.

It's really comparing apples to oranges, isn't it?

> since self driving cars are a even harder and more dangerous technology with far more downside risk, they should use a standard even more rigorous than the historical airplane standards

So this is on The United States Department of Transportation to be far more involved in regulating the software that governs self driving. But the question is, was the FAA as relaxed in governing the aerospace industry 60 some odd years ago as USDT is now, and did they just get lucky that flying planes is actually easier than driving cars? Or is there actually some negligence involved?


Yeah, this is how most standards go. I used to work on medical devices, and the safety requirements for them were developed by my company and the other big players in the industry, and those became the FDA standards.

The FDA just wants there to be standards, and the companies have a vested interest in making sure those standards ensure safe medical devices.


Flying is still the safest way to travel. So I'm not sure what you're complaining about with the FAA.


The crashes revealed the FAA wasn't doing the job people thought they were. We achieved safety in this case by grounding all of the planes after crashes, which is hardly the ideal method.


I'm lost by your comment.

It seems that you're trying to suggest either 1) that the standards as mentioned necessarily aren't True Standards - as a blanket statement - because they're set by the industry or 2) that these standards aren't True Standards - specific to this particular discussion - because there is no reason to think they're necessarily more rigorous than the standards for driverless car software. If you're trying to suggest something else, please understand that it is unclear.

I'm all for a dissenting opinion, but when it is not presented in an understandable way the discussion becomes hard to follow.

(I don't mean to suggest that the argument is wrong when I call out the True Scotsman vibes; just that it is the argument I see.)


What is the legal/regulatory difference between the aerospace industry regulating itself vs the auto industry regulating itself in regards to self driving? Besides the obvious that the aerospace industry has a head start.


I believe I understand your point better.

From the person you're responding to:

> Yes, those standards that produced a safety regime of 20 years of flights without software being even implicated in a single fatality. ...

> As you note, unfortunately the standards as employed in practice have recently regressed dramatically to allow a system as unsafe as the 737 MAX to fly. ...

Your point, if I'm not mistaken, being that the FAA has clearly allowed for less stringent regulations in recent years. (Please correct me if I'm wrong. You can ask questions but it does not seem that they lead so immediately to the answers you expect.)

That being said, please consider that I am genuinely trying to understand you:

> I'm all for a dissenting opinion, but when it is not presented in an understandable way the discussion becomes hard to follow.


> The difference is that airplane software standards are by far the most rigorous in the world and so rigorous that most software developers are shocked at the level of rigor employed

Now they are. These regulations are the result of several tragedies, culminating in a big disaster:

https://en.wikipedia.org/wiki/American_Airlines_Flight_191


Neither aircraft nor drug development norms look anything like the norms of software development. I actually find a substantial difference in my trust level between a traditional car company that's building self-driving capabilities (e.g. Mercedes) than a company that fashions itself as a software company that happens to build cars (e.g. Tesla) for exactly the reason of the norms and behaviors that each org would encourage.


Didn't aircraft development norms lead to several planes plunging out of the sky with zero mechanical issues, due to software failure?


It's actually _failure to follow aircraft development norms_ that led to the crashes of multiple 737 MAX aircraft.


Generally agree, but I also knew people who worked in Boeing QA/SWE, and the internal picture sounds dicey. I'm a bit disconcerted flying after learning about it.


Boeing's issues were written on the wall about two decades ago when it moved its hierarchy from an engineer-centric one to a Chicago accountant-centric one. That alone wouldn't have been an issue, but they had an immediate brain drain, as half their HQ staff didn't follow the relocation.

Imagine a thousand employees, half your HQ staff, that leave. There are several possible reasons:

- they're older -> more corporate knowledge and knew the "magic recipes"

- skill based mobility -> they're so good they can go anywhere, and Chicago isn't Seattle

- etc

https://hbr.org/2001/10/inside-boeings-big-move

This was widely discussed on ... what was I browsing back then? Reddit? ... as a forewarning of bad stuff to come.

The fore-warning came to fruition when the issues at various Boeing plants started developing over the next ten years (insider videos talking with an employee about being on drugs during crucial assembly work and shift mates being similarly high, "get it through your station" attitudes) and the Dreamliner issues that were hand-waved away (no training, 1-hour tablet self directed training, certification delays, and eventually crashes).


> what was I browsing back then? Reddit?

I remember it being discussed on SlashDot back then.


Generally, you don't want to know how the sausage is being made... it's pretty much all industries, some worse than others.


Furthermore, at this stage the driverless cars are unlikely to be creating demand. These are replacing human drivers. Most of us have no hand in verifying the safety of other drivers on the road. I happen to believe it is bonkers that we generally test drivers once at the age of 16 and somehow that earns a person driving privileges for at least 50 years before being retested if not their entire life.


Imagine if cars were invented today. The bar would be much higher, probably halfway to how pilot licensing works.

(Of course this would have had some substantial downsides.)


If cars were invented today, they would be too dangerous to be allowed to exist. Almost nothing has been invented since 1970 that can cause any harm. Safety first and all that.


They'd have to be built with cow-catcher like devices to push all the horseshit out of their way while traveling down the road. It'd be a different kind of awful.


Personal computers cause all kinds of harm (in addition to the benefits).


Not much physical harm, and very little liability. I think computers and software were the one place people could make what they wanted and sell it without much worry of getting sued out of existence, and that is why there was so much advancement. Software licenses are pretty amazing with the "as is" clause that seems to hold up.


Presumably, you choose whether to ride that airplane or take those drugs. You probably don't get to choose when someone else's driverless car runs you over.


Nor do you get to choose when someone's airplane possibly crashes into your street from above.


There were 9 airplane crashes, worldwide, in 2021[1].

It's estimated 31,720 people died in car crashes, in the US, in 2021[2].

[1] https://en.wikipedia.org/wiki/Category:Aviation_accidents_an...

[2] https://www.transportation.gov/briefing-room/nhtsa-estimates...


True, but you miss my point in referring to this. You still don't get to choose when someone else's mistake might harm you. Also, we as the public in general routinely trust planes with millions of risky flights despite not knowing anything about how well or safely they work, or how they work at all in detail. We accept that accidents sometimes happen nonetheless because that's how it goes with human technology that we've come to depend on. The important thing is that precautions are taken as much as reasonably possible. The same standard can be applied to driverless cars without getting fearful of them simply because of their newness.


But driverless cars do not go through the same requirements as aviation.

Before an airplane goes into circulation, it has gone through extremely rigorous testing. An airplane also always has a pilot.

The same can't be said for driverless cars. They don't go through extreme rigorous testing, they don't have a driver, and they're controlled by software that is most likely a far cry from aviation standards.


Aircraft are a vastly different problem at a much smaller scale. Where did our ground traffic controllers go?


Except that safety for both aircraft and drugs has only ever improved through people dying from exactly this kind of failure.

Apparently there is ZERO learning or forethought about risks that can be anticipated or suggested from other technologies!


> Driverless cars are just one more added to the list.

As others have pointed out, aircraft certification (while it has its problems) is very rigorous. Driverless car certification, up to date, is basically nothing.

But the other aspect is that aircraft autopilots are solving a far easier problem. Developing a car autopilot for busy streets is many orders of magnitude more difficult than an airplane (or boat) autopilot.


> Driverless cars are just one more added to the list

It's a qualitatively different change than any of the above (driven by a messianic view that AI is here and now and about to change everything), plus an instance of the

https://en.wikipedia.org/wiki/Boiling_frog

apologue.


This is one of the problems with technology that Ted Kaczynski wrote about. You can't opt out.

> Even the walker’s freedom is now greatly restricted. In the city he continually has to stop to wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (Note this important point that we have just illustrated with the case of motorized transport: When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.)

Your problem is the same as the walker's. The walker has opted out of the automobile, but still has to breathe its pollution and risk being hit by it.


We're quoting the Unabomber now? Certainly you can choose good quotes from non-serial killers.


The fact that someone did something terrible does not imply that everything they ever said or did was wrong. That's dangerous thinking. The Unabomber's short manifesto makes a number of salient points about the negative interplay between technology and the human condition, and offers a neat potential explanation for why so many people are unhappy despite being surrounded by modern conveniences.


This is an example of "ad hominem", a logical fallacy where you attack a person's character or attributes rather than the point being made.


Fwiw, at the time and after, there were quite a few conferences and analysis of his manifesto.


Why not? It is a good and relevant point.


As someone who likes to walk in an urban environment, I'd say it's not good and relevant. It's rather detached from the actual experience of the urban walker.

There's simple road design concepts that mitigate danger, as well.


The point is simply that one person's choice can have an impact on others. This is pretty basic and true whether you like urban walking or not.


No, I think this was just part of the author's delusional narrative that technological adoption is a form of violence.

If it is not actually that, and if there are safety measures that can be taken, he doesn't care.

I'm not a pro car person actually. And that's the vibe I'm getting from Kaczynski in this instance. Words of an out of touch person.


Because it justifies their actions. Would we be quoting him if he didn't murder innocent people?


I don't think his murders invalidate his manifesto. Luddism has been very popular since the industrial revolution; I think it's reasonably likely we would be talking about Industrial Society and Its Future or something similar even if he hadn't killed people.


> I don't think his murders invalidate his manifesto

That's not my point though, my point is that quoting his manifesto validates his violence. If his ideas are valid, there should be other sources to quote/promote.


>If his ideas are valid, there should be other sources to quote/promote.

Why is that so sure? Do you have an equivalent quote to substitute?


Absolutely his murders invalidate his manifesto. The ideas that he expressed in the manifesto are the same ideas that moved him to murder people. His opposition to technology is why he was bombing computer stores and the University of Utah Engineering Building.

This is why saying "but he's a murderer" isn't an ad hominem. His murders are the working-out of his ideas; his ideas are worth rejecting just on that basis.


That doesn't mean they are wrong in part or in whole.

>His murders are the working-out of his ideas; his ideas are worth rejecting just on that basis.

This is simply saying "if he is right, I would rather be wrong". It also completely ignores that arguments can be valid in part, but not in combination.


Most of us consider murder to be self-evidently wrong. The intersection of "right, and causing murder" is very close to empty for most definitions of "right".

Your question makes me wonder what your definition of "right" is.


Murder is typically defined as unjustifiable or wrong, so you would expect the set to be empty.

Replace "murder" with "killing" and most people will have the opposite reaction.


>>>> I don't think his murders invalidate his manifesto.

- colinmhayes (the original post I responded to).

And, for your point to be relevant to this discussion, you would have to claim that what the Unabomber did was "killing" rather than "murder". If you claim that, I'd like to see your definitions and where you think the line between killing and murder is.

And if that isn't your claim, it looks like you're just arguing in order to argue.


I have no interest in defending the actions of the Unabomber.

I did feel that you were using the tautology of murder to ignore my actual point.

>You: His murders are the working-out of his ideas; his ideas are worth rejecting just on that basis.

>Me: That doesn't mean they are wrong in part or in whole.

If the unabomber said as part of the manifesto that the sky is blue, would you now reject that the sky is blue?

There is nothing about murder that invalidates the logic, truth, or accuracy of an argument, and especially not the individual components of an argument.

It is fine and fair if you don't personally want to learn from or engage with an argument that has led to violence. But that does not mean it is internally inconsistent, or devoid of accurate observations about the world (e.g. new technologies can impose externalities on the unwilling).


Whether the sky is blue has no moral component. The Unabomber was saying 1) what he thought was wrong with society, and 2) how we should make a better society.

These judgments ("wrong" and "better") presuppose some definition of "good". But the guy was deliberately killing people. Am I supposed to believe his definition of what "good" is? No, I'm supposed to suspect that it's seriously skewed. (In fact, he's giving me evidence that it's seriously skewed.)

There's something about murder that invalidates the moral truth of an argument.

Could his argument still be internally consistent? Sure. Could it have accurate observations? Also sure. But should his manifesto be regarded as having any overall validity? No. (Recall that the original point I was replying to was whether his murders invalidated his manifesto.)


You need to learn to separate the ideas from the author. A work either has ideological merit or it does not. If he wrote the paper, it had merit, and then he murdered people, that does not change the writing at all. It has the same merit that it had before. Your learning information about the man who wrote the ideas should not have any effect on your assessment of its validity.

Nearly every great thinker and writer you can name from before a certain year owned slaves or wanted to.


I think we might just have to agree to disagree.

I feel that you view his acts of murder as inseparably linked to the manifesto, and that people cannot accept the manifesto as true while condemning the murders.

This ignores the possibility of a completely pacifist implementation of the manifesto's ideals.

My understanding is that a significant portion of it is not necessarily a call to violence so much as a call to action. It states that technology and collectivism should be rejected, and this could be either a violent or non-violent process.


I think he ultimately suffered mental illness, and it is reflected in both the fact that he murdered and the ideas in his manifesto. It's hard to separate them.

There may be similar ideas expressed by those who don't share his pathology, but his version of it is extreme and fringe.


Maybe now is the time to start raising the bar on human and upcoming automated drivers.

Both could have to pass a driving test that includes challenging, difficult situations. Day, night, rain, snow, fog, heavy traffic, missing lane lines, simulated pedestrians (including small children) leaping out between parked cars, and more.

For further difficulty, both kinds of drivers could have to pass this within a x% margin of the fastest safe speed at which the maneuvers can be completed; no cheating by slowing down to traffic-disrupting speeds.


There's not a lot of evidence to suggest this would have a major impact.

The majority of fatal crashes involve drugs and alcohol and not people with "untested skills in challenging situations."

Pedestrians and motorcyclists account for a little less than 1/3 of all fatalities.

Young men are 8x more likely to die in a car accident than their female counterparts.

This "user-hostile" mode of thinking about driving serves no one.


The reason to hold humans and self-driving cars to different standards is that humans have already "class qualified" under ... well ... every condition that we drive in, with on the order of a billion driver-years of total experience with the limitations of human driving, far in excess of 10 million driver-years added each year, and a considerable amount of study across many decades.

Though there are systematic flaws in human drivers they're reasonably well known given the level of experience we have, there are many mitigations in place in both our cars and road designs, and many of these flaws are intuitively modeled by the other humans on the road as well as pedestrians allowing for real-time compensation.

Driverless cars, on the other hand, we have radically less experience with as a class, and there are many reasons to believe that vendor-to-vendor (e.g. due to sensor modalities) or even version-to-version behavior will be less self-consistent than that of human drivers as a class. We know of some systematic faults in their operation that cause them to behave in ways highly surprising to humans, and we should expect to find many more as we gain experience.

To the extent that the properties of self-driving cars have been formally studied at all, it's been primarily by their creators, who are obviously self-interested. Some of the practices used in early supervised self-driving also actively undermined understanding their safety (e.g. executing a handover to the human once the car was already in a nonrecoverable unsafe state and then attributing the accident to human driving, since at the time of the collision the human was back in charge, but maybe only by a hundred milliseconds; or companies committing perjury to use the DMCA to force down unfavorable videos created by others).

This isn't an argument that we might not benefit from more rigorous driver testing or even from testing ideas that originated from self-driving car testing... but parity between driverless car tests and human testing shouldn't be a goal, particularly not when so many orders of magnitude separate our human driving experience from driverless experience.


I suspect that a challenging driver assessment applied to both AVs and humans would be the most rapid driver of AV adoption possible... Just remember: most drivers think they're above average.

Curiously, autonomous car technology also makes it feasible for us to rate drivers -- it's like having a professional watching over your shoulder all the time. IIRC, Tesla used something similar to determine rollout of new FSD capabilities.


Personally, once they're reliably any percentage above "average human driving skill", we should start to see more uptake of this new technology. Driverless cars have killed and will kill people. Sometimes it will be due to programming errors. But on the whole, they are, in many cases, on par with or better than human drivers. I think it's good to be cautious, and for the industry to be cautious. But I don't want to see people freaking out over a single accident because it involved an AI when there are thousands of non-AI accidents every day. Judge them consistently, not against some mythical 100% success rate but against the criterion: is it better/safer than most drivers?


They've been driving for a while now. Can't you go with data from their existing rides to form an opinion? All you're going with is fear right now. Those cars are already better than the majority of drivers.


I agree with your basic idea, but I don't think there's really any way around this - at some point these systems will have to be tested in a real world environment.

I guess the question is: how much higher should the bar be set? And if we set it substantially higher then how much longer will it take to improve the performance and safety of these systems?


This is the most challenging tech transition in terms of both the outcomes and the difficulty of defining just what this trend is.

Cybernetics is* ~insertion of a governance layer into our social contract that we didn't agree to relinquish our agency to... It just sort of happened, and we deal with the outcomes. The more disconcerting aspect to me is the latter. Suddenly, QR codes everywhere, good luck getting around without a smart phone, if that driverless car sideswipes you and doesn't stop, call someone I guess?

If this area/trend interests you, there is a lot of reading available from the weird (CCRU, early Nick Land) to the fairly mechanical/academic (MIT, Norbert Wiener), to the openly behavioral governance-themed (Stanford Persuasive Technology Lab, B.J. Fogg), to adversarial-to-tech (The Cybernetic Hypothesis).

* so-so definition of the space, but its the best I can sum it up.


> I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.

You’re basically a non-consenting tester for every piece of innovation that society ever has or ever will come up with, in one way or another. It’s an unavoidable feature of living on the same planet as 8 billion other people. Perhaps the safety of this technology does need to improve, or perhaps not. But there’s no level of improvement that would ever be able to satisfy this outrage you’ve expressed at something new existing in the same place as you.


> You’re basically a non-consenting tester for every piece of innovation that society ever has or ever will come up with, in one way or another. It’s an unavoidable feature of living on the same planet as 8 billion other people. Perhaps the safety of this technology does need to improve, or perhaps not. But there’s no level of improvement that would ever be able to satisfy this outrage you’ve expressed at something new existing in the same place as you.

Why do you say it's unavoidable? Regulations could certainly keep these tests off of public roads.


Let’s say autonomous cars, once fully implemented, are even 10% safer than existing drivers, and testing in real-world conditions speeds up the implementation by even just a year. We would need about 3k deaths at the hands of the autonomous drivers before the trade-off was no longer worth it. This is only considering the US, and is based on very conservative estimates of a 10% improvement and a savings of only a year in time to deploy. The more likely scenario puts that number in the tens of thousands, in my opinion.

Personally, I would much rather we slowly develop and deploy these cars in the wild rather than continue testing in some controlled environments. At least this way we can monitor progress and judge their readiness as a society. Plus, the sooner they happen, the better for everyone.
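The break-even arithmetic in the first paragraph can be made explicit. The annual-deaths figure below is my own assumption (roughly in line with recent NHTSA estimates cited elsewhere in this thread), so this is a sketch of the reasoning, not a forecast:

```python
# Break-even sketch: testing deaths vs. lives saved by earlier deployment.
# All inputs are assumptions for illustration.

annual_us_road_deaths = 31_720   # assumed; roughly the 2021 NHTSA estimate
safety_improvement = 0.10        # assume AVs are 10% safer than human drivers
years_accelerated = 1            # assume road testing speeds deployment by a year

lives_saved = annual_us_road_deaths * safety_improvement * years_accelerated
print(round(lives_saved))  # ~3172: roughly the "about 3k deaths" break-even
```

Under these assumptions, testing would have to cause roughly 3,000 extra deaths before it stopped being a net win, which is where the "about 3k" figure comes from.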


You're trying to do math in a scenario where you have no real numbers. Of course you can pull some stuff out of your butt (+10%, 1 year) to say it's going to save lives but unless you can justify those numbers it's meaningless.


No fucking way? I thought my math was rock-solid!

The point is not the exact numbers. It's to demonstrate that even if we do suffer accidents and deaths from testing self-driving cars on the road, the end result will be many more lives saved (just based on how many people die in traffic accidents today). Of course this hinges on many other factors being true, but that's the case with all kinds of technological advances.


And they did, until they didn't. This is not a brand new, unproven technology. It's been developing for over a decade. The car didn't hurt anyone or crash into anything - it simply did not have its lights on. The behavior to drive to a safer place for the traffic stop was intentional, per Cruise. This is something human drivers do as well. When being pulled over, it's common to pull into a parking lot or other nearby place that isn't in the driving path, for the officer's safety and convenience, in addition to allowing traffic to continue to flow without obstruction.


They are speaking more broadly. Every day you walk past buildings with novel construction features, planes fly overhead without your permission, the ingredients to the food you eat change without your knowledge or consent.

It's a simple fact of life that you test things without explicitly consenting. The driverless cars are no different.


Food ingredients are posted on the package. If the contents differ the manufacturer is liable.


What about those pesky horseless carriages? Definitely should have had regulations to keep them off the roads. /sarcasm


You could say the same for regular cars when they first arrived on the road. Saying there should be a higher bar is useless unless you are able to say exactly what that bar should be.


People did say exactly the same for regular cars. The result was a massive advertising campaign to establish the concept of jaywalking, and to make the street the sole domain of the car.

I can easily imagine a similar campaign to label pedestrians as irresponsible for walking outside without an ultrasonic emitter, or a lidar reflector, or whatever integrates with the self-driving system. Whatever shifts the responsibility away from the car and onto the deceased.


You say "advertising campaign" as if it were some conspiracy by the world's big auto manufacturers.

1. Do you have any sources?

2. You don't think that roads should be a car only domain? What do you think roads are for?


1. https://www.vox.com/2015/1/15/7551873/jaywalking-history The forgotten history of how automakers invented the crime of "jaywalking" (books and more sources are quoted in that article)

2. Roads existed before cars. This is an extremely American view to think that roads should be exclusively for cars. Roads should be for PEOPLE, regardless of which mode of transport they are using.


As for point 2.

Okay, but now cars do exist. Do you think it's reasonable for cars capable of traveling 40mph to be stuck behind pedestrians at walking speed? Our modern transportation infrastructure exists for a reason, and that reason is speed/efficiency. Most areas with heavy pedestrian traffic have sidewalks. If I'm going 60mph down the highway, the presumption that I should be subjected to pedestrians casually strolling across is absurd.


Already today, cars do not travel at 40mph in those areas with heavy pedestrian traffic--the typical speed limit on a city street is 25mph, and even that is probably above typical speeds. If you want cars to travel more quickly, you need to resort to highways that completely cut off pedestrian access, which has seriously deleterious effects on the neighborhoods it travels through.

If you try to compromise, you end up with stroads... which turn out to be just as bad as highways for pedestrians in practice, thanks in no small part to people like you who believe that all of the compromises of street design and urban planning should be borne not by vehicular drivers but by pedestrians.

(I'll add a side note here: a lot of efficiency depends on what you're trying to make efficient. Many people--including you, apparently--define efficiency as moving cars with as high throughput as possible. If you instead want to move people with high throughput, usually step one is to ditch cars, since single-occupant vehicles occupy vastly more space than any other form of transportation.)
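The throughput disagreement above can be sanity-checked with a rough sketch. All figures below (a saturated urban lane moving ~1,800 vehicles/hour, ~1.5 occupants per car, ~1 pedestrian per second per metre of walkway width) are ballpark planning estimates I'm assuming for illustration, not numbers from this thread:

```python
# Back-of-envelope people-throughput per 3.5 m of right-of-way width.
# All inputs are rough, commonly cited planning figures (assumed).

LANE_WIDTH_M = 3.5

# Cars: a saturated urban lane moves roughly 1,800 vehicles/hour;
# average commuting occupancy is around 1.5 people per car.
cars_per_hour = 1800
people_per_car = 1.5
car_throughput = cars_per_hour * people_per_car  # people/hour per lane

# Pedestrians: dense-but-comfortable flow is roughly
# 1 person per second per metre of walkway width.
ped_per_sec_per_m = 1.0
ped_throughput = ped_per_sec_per_m * 3600 * LANE_WIDTH_M  # people/hour

print(f"car lane (3.5 m):        ~{car_throughput:,.0f} people/hour")
print(f"walkway of same width:   ~{ped_throughput:,.0f} people/hour")
```

Under these assumptions the walkway carries several times more people per hour than the car lane of the same width, which is the point about single-occupant vehicles and space.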


Alright, but 25mph is still significantly faster than walking speed. I do think that mass transportation would be nice and we could use better public transit infra in the US, but that doesn't change the fact that walking is still much less efficient than driving. Even with cars taking up much more space, a bunch of cars traveling at 25mph can carry many more people through a distance X in a period of time T than the equivalent walkers could. The alternative that everyone seems to be forgetting is having roads for cars and sidewalks for people, which already exists in the vast majority of areas with high potential for pedestrian traffic.


LOL, cars don't go anywhere near 25mph in Boston. They'll average 15mph, if they're lucky. Even with ~70% of the right of way allocated to cars, there's still too many of them for the road to handle. I don't know what "efficiency" you think cars have over other modes but I'm not seeing it on my commute. More often than not, my bike is stuck behind cars (because the bike lane disappeared) rather than they're stuck behind me.


15mph is still faster than walking speed. I'm not saying cars are the most efficient mode of transportation, just that they are much more efficient than walking.


That's a completely useless comparison that applies almost nowhere in the world. You need to compare driving vs pervasive transit and bike networks. I doubt New Yorkers or Amsterdammers would accept your efficient car hypothesis.

In the few places where nearly 100% of commutes are on foot, I don't see cars being more efficient either. Do you want every college student to drive from their dorm to class? How about poor countries without the money to maintain good road networks?


Are you joking? That applies in the vast majority of the world, the exceptions you named are just that, exceptions that are exceptionally rare.

This entire discussion started as whether pedestrians should/shouldn't be on roads. No, they shouldn't. They should be on a sidewalk, best of both worlds.


The alternative to car-centric streets is mixed use emphasising bikes, pedestrians, and public transport.


Speed, yes, but efficiency is questionable. Efficiency depends both on how fast you're traveling, but also on how far you need to travel in order to reach a destination. With minimum parking requirements, the space required for a shop increases drastically. If there's one shop every 50 feet, you can efficiently visit them at low speeds. If there's one shop every quarter mile, walking past a row of shops is impossible. The car traveling 40 mph can go at a faster speed than a pedestrian, but it also requires the shops to be spaced out such that the car can park. Optimizing for speed reduces efficiency.
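The spacing argument above can be made concrete with a toy calculation. The shop densities, the 3 mph / 40 mph speeds, and the 90-second parking overhead per stop are all assumptions for illustration, not sourced figures:

```python
# Toy model: travel time to visit 5 shops in a row, walking on a dense
# street vs driving along a car-oriented strip. All inputs are assumed.
shops = 5
legs = shops - 1  # trips between consecutive shops

# Walkable street: one shop every 50 ft, walking at ~3 mph (4.4 ft/s).
walk_time = legs * 50 / 4.4  # seconds

# Car-oriented strip: one shop every quarter mile (1320 ft), driving at
# ~40 mph (~58.7 ft/s), plus ~90 s to park and unpark at each stop.
drive_time = legs * (1320 / 58.7 + 90)  # seconds

print(f"walking: ~{walk_time / 60:.1f} min of travel")
print(f"driving: ~{drive_time / 60:.1f} min of travel + parking")
```

Under these assumptions the faster vehicle loses: the driver spends several times longer between shops than the pedestrian, because the car's speed forces the destinations apart and adds parking overhead at every stop.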


Not necessarily. This assessment forgets things like sidewalks. I don't think anyone is going to visit shops on main street and, every time they leave a shop, drive 5 feet further down the road only to park and visit another one. In tightly clustered areas it's most common to park and then walk via a sidewalk to other destinations in the area.


For 1, this is well established history that I didn't feel was necessary to provide sources for. That said, the vox article linked to by jeromegv does a good job of describing the transition.

For 2, I'd make a distinction between paths built for long-distance travel (roads) and paths built for connecting to destinations (streets). The two have fundamentally different goals that cannot be safely achieved at the same time. To allow for high-speed travel, a road should be as predictable as possible, with limited entrances and exits and barriers to prevent people and animals from wandering onto the road, and should be restricted to vehicles that can maintain a high speed. To allow streets to connect to destinations; such as homes, shops, and workplaces; there should be many entrances and exits to the street. The destinations should be packed as tightly as possible, so that they can be reached with a minimal distance traveled. It's not just that these goals require tradeoffs, but that they are fundamentally antithetical to each other.

The best way to satisfy these two goals is by having separate designs for streets and roads. The roads are pathed around pedestrian areas, and allow branching off into them at limited locations. The streets are mixed use, with rougher surfaces (e.g. brick/cobblestone) to prevent high-speed travel. This doesn't exclude motor vehicles from using roads altogether, allowing for use cases such as delivering food to cornershops or dropping off somebody with limited mobility, but does steer motorized traffic off of the streets and onto the roads wherever possible.


1. Fair enough, but sources should almost always be provided. I hadn't heard that particular piece of historical trivia and I don't think it's necessarily common knowledge.

As for the latter, why is that any better than just sidewalks and crosswalks, which the majority of areas with heavy pedestrian traffic already have? It'd be incredibly eco-unfriendly, expensive, and impractical to have two different types of roads for cars and pedestrians when we already have a solution that works just fine - sidewalks.


I do provide sources when something is difficult to find, uncommon, or uses numeric values beyond ballpark estimations. Demanding sources for every statement contributes to Brandolini's Law [0], which states "The amount of energy needed to refute bullshit is an order of magnitude larger than is needed to produce it." In the spirit of restoring symmetry, I invite you to provide sources for the following statements made in this thread.

* "Most areas with heavy pedestrian traffic have sidewalks." I am uncertain on this one, as heavy pedestrian traffic is more frequent on mixed-use streets. Sidewalks are only used in places with car-centric design, which discourages walking.

* "Even with cars taking up much more space, a bunch of cars traveling at 25 mph are able to carry a many more people through a distance X in a period of time T than the equivalent walking would be able to." I am uncertain on this, because cars require very large amounts of space both for the physical size of the car, the spacing between cars, and the spacing between groups of cars at an intersection.

* "It'd be incredibly eco-unfriendly, expensive, and impractical to have two different types of roads" I'd be interested in a source for this, as I've generally heard that separating out usage improves the efficiency of each. Cars go faster because there are fewer stops required. Pedestrians die less often because there are fewer cars nearby.

* "Bike lanes are less common than sidewalks" This is correct by my experience, but I've only lived in places with car-centric city planning. Can you provide a source?

I'm interested in continuing this conversation, but not if it is a conversation between sourced comments and unsourced assertions.

[0] https://en.wikipedia.org/wiki/Brandolini%27s_law


1: e.g. https://en.wikipedia.org/wiki/Jaywalking#Origin_of_the_term

2: Roads are also for bicyclists, motorcyclists, and even pedestrians. It's true though that car drivers often think that roads are a car only domain and behave aggressively towards the other mentioned groups


2. No, sidewalks and bike lanes are for pedestrians and bicyclists. Bike lanes are less common than sidewalks so I think it's reasonable to include them on low-speed/highly congested stretches of road, but to assume they should have full access to, e.g., an interstate or highway is ridiculous and would hinder literally thousands of people.


OK, let's all take a deep breath here - we might just have a different understanding of the word "road". My definition of "road" is not restricted to interstates and highways, but it includes any larger road, err, street. Of course, nobody wants pedestrians or cyclists on highways. It looks like we agree there.

But I reject the popular notion that roads (as in larger streets, or any street between villages/towns/cities) should be for cars only. The usual examples of Copenhagen, Amsterdam or recently Paris show that shared roads lead to a higher quality of living. I personally appreciate this, and I'd prefer a world where this sharing would be more common.


But why should shared roads be preferred over sidewalks? I see quite a few potential disadvantages but no real advantages whatsoever.


There’s an entirely different worldview that you are missing here. Check out this great YouTube channel which goes into a lot of detail about how our cities and towns could be better designed. https://youtube.com/c/NotJustBikes


I'll check it out, thanks


Sidewalks and bike lanes are not contiguous. Pedestrians and cyclists are not only permitted, but have the right of way, on nearly all roads other than controlled access highways and tunnels, even where sidewalks and bike lanes exist.


Yes, but this conversation was about jaywalking and what roads are intended for. No one (I think) is arguing that a bike crossing the road to go from one bike lane to another is wrong, just that it isn't the intended purpose of roads. Pedestrians do have the right of way, but that doesn't make jaywalking legal, it just makes it so drivers are (rightfully) held accountable for running over pedestrians trying to cross the street. It doesn't give a pedestrian the right to walk down the middle of the street at a leisurely pace.


> 2. You don't think that roads should be a car only domain? What do you think roads are for?

You've clearly never had a UPS driver scream "Use the bike lane!" at you after parking his truck in that same bike lane. Drivers have bizarre reflexes for how to use roads.


Well, no I must admit I never have experienced that. Not really sure I understand how it pertains to my question you responded to.


> 2. You don't think that roads should be a car only domain? What do you think roads are for?

For example, for my messenger on his horse, so he doesn't get stuck in the mud on his voyage to the next city. ... or maybe my centuria to support my legion?

What makes you think (or suggest) that roads where invented for cars?


I never said they were invented for cars. My question was why they think roads, as they are in 2022, shouldn't be a car-only domain. This argument of "well, their original purpose was for..." is logically incoherent in a multitude of ways. There are thousands of things that we use a bit differently than originally intended due to the advancement of modern tech.


Let's start with setting the bar at "a driverless car that only operates at night should have its lights on". And since Cruise so effortlessly limboed right under that one, we should probably ask for a root-cause analysis here and stop their operation immediately until then. God forbid they start disabling the brakes like Uber "because they cause disengagements".


1) lights on when required. 2) ...


> I am really worried by the fact that I am the unwilling tester..

stay inside in front of FAANG. You'll be safe there ( yet still an unwilling tester ).


I didn't know they load test bridges. Wouldn't that put unnecessary wear on the structure and age it prematurely?


What? No, not at all. The bridge has to be able to handle the max load, otherwise it's not safe. If the load test causes any issues with the bridge, that is a massive red flag.


The max load test verifies that the bridge can support the max rated load, which is already below the design limit by a safety margin, so they aren't testing it to the point right before it fails.


>I think we should set a much higher bar for allowing those cars on the streets, rather than "it kinda works, so let's roll with it".

Indeed. The minimum before any self driving car can be put on the road is that the developers, managers, executives, and major investors and their immediate families have to spend 8 hours running back and forth across a track on which 40 self driving cars per mile of track are running laps at a minimum of 50 MPH. Additionally there need to be things like rains of nails on the road, suddenly floods of water, packs of dogs, tumbling tumbleweeds, and other hazards. If at any point the average speed drops below 50 MPH or if the cars hit anyone or hit each other, the car can't go on public roads.


> Tradition has it that when you load-test a new bridge, you put the architect underneath.

Do you have any more info on this? It's a fascinating fact.


Previous articles mention that the cars provided rides to Cruise employees before the general public.


This.

These robots are massive and fast, they are killers. Yet as long as they look like cars we let them run loose on public streets. It's kinda nuts. (Arguably car traffic as it's set up now is kinda nuts too, but that's a different argument.)

The obvious thing to do is start with something like a golf cart that is limited to, say, 5mph top speed, and work your way up.


Great observation.

When the device is a giant robot arm or a laser cutter, there are huge amounts of regulation-- you may have to get permission from your municipality to operate it even within the confines of your own facility.

Make it look like a car, however, ...

Cars and roads are kind of nuts, but they're established lunacy at least.


It's not really my observation. Bicyclists have a (sick) joke: "If you want to get away with murder just be sure you're driving a car and your victim is riding a bike." The way I see it, the same blind spot (no pun intended) applies to car-shaped killer robots.

R.I.P. Elaine Herzberg.


Lol.

Drunk drivers are killers. People that text and drive are killers. People that get kicked by their kid and turn around for a second to yell at them are killers.

I'll take an unfatigable, indistractable, computer that literally has eyes in the back of its head, and on the side of its head, and the front, and can bounce radar under cars, etc over the pitiful example of a "safe (human) driver" we have now.


> I'll take an unfatigable, indistractable, computer that literally has eyes in the back of its head, and on the side of its head, and the front, and can bounce radar under cars, etc over the pitiful example of a "safe (human) driver" we have now.

Yes, friend, and when we have those so will I.

Those (as yet still fictional) robot cars (and can we PLEASE call them "auto-autos"!?) are not the problem.

- - - -

> Drunk drivers are killers. People that text and drive are killers. ...

Yes, and it's arguably fucking crazy that we let them do that. The whole reason is a deliberate campaign to normalize the carnage: "speed demons" were replaced by "jay walkers" and now more people (in the USA) have been killed by cars than by all the wars we've fought.

"The Real Reason Jaywalking Is A Crime (Adam Ruins Everything)" https://www.youtube.com/watch?v=vxopfjXkArM

Anyhow, "people kill people so my robots should be allowed to kill people too." Is not a valid argument IMO.


>unfatigable, indistractable

memory leaks, priority inversions, dropped packets / data. Lots of ways systems can act strangely.


plus the "beta" testers treat it like some kind of fun game. they're like "whoops, almost swerved into traffic. whoops, almost ran into a pedestrian crossing the street."


> I am really worried by the fact that I am the unwilling tester in the Great Driverless Car Experiment.

Welcome to America, where you are an unwitting tester of many private ventures that have been signed off on by the American government on your behalf.

Remember that complaining about these tests and demanding more control means that you are communist. Unless of course you don't mind waiting around to be harmed by such tests, and are either able to sue, or told that you had no reasonable expectation of safety.


The driverless car is already safer than the other people you share the road with everyday.


We can't make safe driverless cars if people frequently make false claims about their current safety.


We will never have driverless cars if we hold them to a standard that is unobtainable while continuing to let regular drivers cause accidents and deaths on a daily basis.


You're not wrong.


Autonomous cars are already safer than human drivers. Not as featureful or accurate, but safer.


This is almost certainly a data artifact, despite how often it gets repeated by optimists


This is not exactly irrelevant, but there is some indeterminate weighting factor that you would have to apply for the bad taste of algorithms killing people vs people killing people.

Not that I'm an impartial judge... At the end of the autonomous car rainbow waits more parking, more driving, and fences to keep people out of streets.


Being used to riding motorcycles in the US, why are headlights not always on for all vehicles at this point? LED bulbs last a very long time and there is little reason to not have them on during the day. I know, it’s much more fun to anthropomorphize the car in this case, but the simple solution is to just hard wire them.


It's been required to do so in some northern European countries. The practical reason there is that dawn and dusk can take many hours during much of the year at higher latitudes thus creating a weird situation where it's not really clear when exactly you should have the lights on or off in exactly the kind of situation where a lot of accidents can happen if you don't have them on.

The simple rule of "always have your lights on" fixes that and it slightly improves visibility even in broad daylight. The only downside would be the increased power usage and the light bulb wearing out sooner. A minor issue a few decades ago when this became normal and a very minor one now that we have LEDs that last a long time without using a lot of power.

My rental bike (a Swapfiets) has the LEDs permanently on; there is no off switch. If you ride it, the lights are on. That's indeed how it should be. I can't think of any good reason to voluntarily reduce my visibility while cycling through potentially lethal traffic.


I've gotten so used to lights being on always that I classify cars with lights off as parked. I won't focus on them. I'm very surprised when they move.

It should really be hardwired, because people forget.


> I can't think of any good reason to voluntarily reduce my visibility while cycling through potentially lethal traffic.

Watch more horror movies and you'll think of some.


Rephrase to "any good non-fiction reason".


Probably some special forces/police operations in countries with a reasonable road system. Vans and SUVs are much more convenient than MRAPs or humvees.


Headlights always being on was the biggest thing I noticed when I visited Iceland. It is so much easier to see vehicles even during a sunny day, they simply stand out much more. As a result, I always turn my lights on right after I get into a vehicle.


Mechanics need to be able to turn them off when they work on them or they’ll get blinded. Every car I’ve driven in the last 10 years seems to have an auto mode though.

I’d be surprised if the Chevys these are based on don’t have an auto mode. It could be that a fuse blew here.


I bet it is something boring, like a fuse blew, or maybe they still have the auto/manual lights selector even though the car is autonomous, and somebody misconfigured it.

You'd think 'are the lights on' would be a self-check it would perform, though. GM has to know that headlights go out occasionally, right?


> You'd think 'are the lights on' would be a self-check it would perform, though.

If it wasn't before I would bet it will be soon.


It could also be that some bumbling meat-man clumsily knocked the switch out of the "Auto" position. The Bolt has a truly idiotic headlight switch which always indicates "Auto" even when it's been disabled by turning the switch counterclockwise.

In Canada it's programmed to reset to Auto every time the car is started, but not the U.S. model.

https://www.youtube.com/watch?v=yZlYCSmtD_4


Somehow I thought they would have hooked the selector up to the AI but you’re right. This probably just got missed on the checklist when the car went out. That selector also makes it seem highly likely to be missed.

Detailers always turn headlights off. (At least they do for my car.) They probably always turn them off when they come in to be cleaned then need to be turned back on when they go out.


> In Canada it's programmed to reset to Auto every time the car is started, but not the U.S. model.

That's a Volt. My Bolt & Camaro (in the US...) also have momentary switches that default to auto, and they both actually default to auto. You can only turn off the headlights until the next time you turn off the car.


Since September 2021, the same cars sold in Canada have had the tail and head lights come on when it turns dark. If the mechanic is scared for their eyes, they can just have a well-lit area to work in and the lights will not turn on.


Let’s not be so dismissive of issues for the plebs. You wouldn’t want bright lights in your face while trying to work either.

Artificial lights are not very bright in terms of lux, even very bright ones. Headlights need to come on at dusk which is still far brighter than a shop.

It seems Canada thought of this by allowing lights to be disabled when the car is in park.


I have been working on my vehicles for a while. Removing the fuse or unplugging the lights is stupid easy.


I don't know how you've come to the conclusion that's a reasonable solution. Wiring harnesses on cars are a pain in the ass and definitely are not heavy duty enough for being removed many times. With a fuse you have to find the fuse box, remove the cover, find which fuse to remove, then pull it out with pliers. It's far more work than you're implying. Also, this is done every time I take my car in for service whether it's a tire change, oil change, or engine change.

More importantly you need to remember to put it all back together when you return the vehicle—not necessary with a switch.

It seems Canadian vehicles handle both use cases by allowing the vehicle to have the headlights off but only when parked.


What are you talking about? Remove plastic cover from fuse box. Remove fuse. Done.

Or open hood. Unplug bulb. Done.

Also most times when you work on a vehicle it is off.

Have you actually had this problem yourself? Or are you doing this on behalf of imaginary other people?


> Mechanics need to be able to turn them off when they work on them

Couldn't you just turn off the car?


Mechanics need to turn the car on for various reasons like diagnosing problems and verifying things are seated.


It's required in Canada, but in the US, we tend to prefer our freedoms. There are also credible scenarios, say when someone is being stalked, when you would want to keep them off for security reasons; or, if you are parked in front of a house, or pulling into your driveway, perhaps you just don't want them to rake across the front windows and wake your sleeping children up. Many people won't buy a GM vehicle for precisely this reason.


When I had a GM car with daytime lights, they would switch off if you applied the parking brake. So a trick is to apply the parking brake enough that the lights go out (happens at the same time the parking brake indicator illuminates on the dash), but not enough to really grab the wheels. Then you can pull into a driveway with the lights off.


I'm also not sure why this is a thing. With all of the various sensors, etc. in cars, there are safeguards to prevent me from typing an address into the GPS while moving, so why can't there be something that turns your headlights on when driving at night above a certain speed?

On Saturday, I drove from NJ to MA and saw 3 vehicles on the highway at night with only daytime running lamps, and one with no lights on whatsoever.


I haven't driven a modern car in a decade or more that didn't have light sensors in the dashboard to know when to automatically turn on the lights. They even turn them on when the wipers are engaged. Every time you go through a tunnel you'll see headlights pop on automatically. My 2009 BMW is the oldest car I have that has auto lights, but it doesn't have wiper sensors.

Older cars? Of course not.


The problem with auto lights is that they can be turned off or set into other modes that prevent the auto part from working. Throw in drivers who don't seem to notice that their headlights are not or barely illuminating the road and we get what I experienced over the weekend.

Maybe there's hope: https://www.edmunds.com/car-news/make-sure-your-headlights-a...


My current car (2020 model year) defaults to Auto every time the car is started. I haven't had a car without auto headlights in a very long time, but this is the first one that actually defaults back to Auto.


The Bolt has auto headlights, something malfunctioned.


Many EU countries (especially those in the North) mandate them to be ON at all times when the car is moving. Cars sold in those regions have the light switch on Auto by default, which you can manually override in both directions.


I believe that (having headlights on) was locally required before Daytime Running Lights (DRL) became mandatory in the EU some twelve years ago: https://en.wikipedia.org/wiki/Daytime_running_lamp


What I find annoying is the people whose cars DO have headlights that are automatically "on", yet all the OTHER lights on the car are off.

It's common to come across one of these cars at night from behind - no visible tail lights, but when you pull up next to them, their headlights are on, yet dim (daytime running lights?).


> why are headlights not always on for all vehicles at this point

It's been that way for over 30 years in Canada (since 1989 I think)...

I'd be curious to know how many more replacement headlights they see per car per year


Headlights do burn out. I'd rather not pay $1,000-$2,000 to replace my headlights, since they are LEDs built into the headlight assembly. I'd much rather have auto headlights be mandated. Even that can be a pain in the ass if you have HID bulbs and travel in areas where they turn on and off often, reducing their life.

https://www.hondapartsnow.com/parts-list/2021-honda-accord--...


Automotive LEDs made by an OEM should not burn out for the life of the vehicle.


Should not, sure, but I have seen people on the Accord forums complaining about headlight LEDs being dead, and being shocked at the replacement cost.


I mean, they can be pretty annoying on some cars whose manufacturers have decided it's OK to have the beams right at eye level.


> LED bulbs last a very long time

LEDs are a high-end feature even in new cars, never mind all the existing cars on the road.


$20 bulbs are widely available on Amazon.


They're so poorly designed that they are completely useless. Also 100% illegal to use on road.


You may be approaching a check point and be asked to turn off the outside lights.


I remain confused on how these cars are allowed on the road. Did they pass a driver's license check? Who's responsible? Who gets the ticket? In this instance, you can see that the car performed a dangerous maneuver by running from the cops, with its lights still off, and crossing an intersection, which is then spun by the company as finding a "safer" spot to pull over at. Even though, it found a spot essentially identical to where it was. It also took off pretty fast.


> Did they pass a driver's license check?

Yes. That would be the AV permit that the cars operate under. Also as they are programmed to obey all road laws, it's exceedingly unlikely they will cause any moving infractions.

> Who's responsible? Who gets the ticket?

Cruise. Their name is on the AV permit, it's their insurance.

> spot essentially identical to where it was.

Before it was in a lane of traffic next to a parklet, after it was pulled over partially into what looks to be a parking spot. The SFPD officer also was no longer next to the vehicle.

> how these cars are allowed on the road

At the end of the day the argument is that these cars are safer than the average driver. Having spent a lot of time in SF, I've seen a lot of these cruise vehicles moving about. I remember watching them in the early days in the Dogpatch, when an errant cone would end up with them being stuck until manual action was taken. Slowly they got better and smarter.

Yes, it's a hard transitional time. Mistakes may happen, but the same is true for humans. And not all AV technology is the same. Cruise has limited itself a great deal so it doesn't try to take on the entire world at the same time. Just the streets of SF, which are hard enough.

How many Cruise traffic accidents have you read about? How many people injured or killed?


Isn't OP about a moving infraction it caused, despite its programming?

Arguing from the probabilities of outcomes is an engineering-centric approach to selling these cars to the public, and it doesn't land well, IMO or judging by others in this thread.

It makes a false equivalence b/t how we value human agency's role in outcomes and something other-than-human. Most people seem to prefer the risk outcomes of something they decided themselves over those of a faceless control system.


Where do I apply for a permit to reassign culpability for my driving to a corporation?

> At the end of the day the argument is that these cars are safer than the average driver.

The last data I saw said otherwise, and that data was primarily collected in ideal conditions. That has been a few years, but these companies are not forthcoming with their data.


> Where do I apply for a permit to reassign culpability for my driving to a corporation?

I think this is mandatory in most states and countries, at least for civil liability up to some limit. You can purchase more, of course (and I recommend to do so, given that the minimal liabilities in each state haven't really kept pace with needs).


> Where do I apply for a permit to reassign culpability for my driving to a corporation?

Search a jobs aggregator for "Taxi Driver"?


> Did they pass a driver's license check?

Thought experiment: imagine a particular self-driving algo were to do something that would warrant a human driver getting their driving licence suspended. Would that mean all the cars with that same algo would be simultaneously suspended, or just that one car?


It does sound silly at first, but I think suspending the one car is actually a rather elegant solution on several counts.

1. Easy to implement

2. Doesn't punish fleet scale (compare: suspending the whole fleet means a fleet that's twice as large would be suspended twice as often)

3. Punishes the fleet owner in proportion to the frequency of the error

4. Is exactly correct when the cause is really a hardware problem specific to the car


> suspending the one car is actually a rather elegant solution

If the software in a commercial aircraft were to be found defective, would we really be OK if only the individual aircraft in which the fault manifested itself gets grounded (assuming that aircraft isn't itself already on the ground in thousands of pieces spread all over a hillside...)

Surely the FAA (or international equivalents) would issue an airworthiness directive which would apply to every single example of that specific type of aircraft?


It depends. If it's an issue with just that vehicle then yes, you don't ground the whole fleet.

If it's non-safety-critical, then it needs to be fixed asap but nothing gets grounded.

If it's safety critical, then yes, all planes are grounded.

But, I'd argue FAA levels of safety aren't suitable for cars. If a car has an engine failure or even a structural failure, they can usually just stop. Planes don't have that option.


> I'd argue FAA levels of safety aren't suitable for cars. If a car has an engine failure or even a structural failure, they can usually just stop. Planes don't have that option.

I had supposed we were thinking primarily about software faults and specifically those relating to the self-driving bit of a self-driving car.

Best case: self-driving car gets lost and is found with flat battery miles from "home" (?)

Worst-case: self-driving car kills cyclist/pedestrian and doesn't even notice or stop.

We should spend some thought on how to handle the latter because it will happen sooner or later. To be blunt: if the algo kills, why wouldn't you ground the algo (all instances of it)?


> To be blunt: if the algo kills, why wouldn't you ground the algo (all instances of it)?

It depends what the alternative is. If it crashes less than humans, then you keep running it despite the flaws.


> if the algo kills, why wouldn't you ground the algo (all instances of it)?

Well 40,000 people in the U.S. die in car crashes every year currently, why do we let people drive?


I wonder whether that would just lead to people using slightly tweaked variants of the same algorithm branded to be different?

(Of course, you don't want to fork too much, because there's probably a certification per algorithm needed. But it might make sense for Google to officially field a thousand 'different' algorithms, so that a single incident doesn't ground the entire fleet?)


Engineers often fail to realize you can't cheese a judge and if you try to, the law will come down much, much harder on you.

Either one of two things will be the case with this approach:

1. The algorithm will be too imperceptibly different to be treated as a different program. Adding some tiny decimals somewhere won't make it unlikely to make the same mistakes, and the entire class of algorithms will be treated as one.

2. The algorithm will be different enough to make different decisions, which means one program is probably "better" than the other(s) and they'll be taking a bigger risk by deploying cars to the road that are more dangerous than others.


Judges are human indeed. But things like tax dodges and other loopholes still exist. You just need good enough lawyers to structure things for you.

Taken seriously, wouldn't your second point lead to a ban on all but one self-driving system?


I don't know about that. I do think that if, say, Google operated a spectrum of 1,000 self-driving algorithms, and knew that some were safer than others, a judge would take a very dim view to knowingly deploying less safe cars than others. And even from a business standpoint, it wouldn't be good business to have less safe (aka, higher liability) cars on the road.


Why would Google know that some of them are less safe than others?

Just have variations that don't dominate each other. Variant A could be better at some things, and variant B could be better at other things, etc.

(Otherwise, we should use your argument to only allow exactly one company with the 'best' system on the road. Instead of allowing competition.)

Google could even officially outsource the different variants to different nominally independent companies, if that's what's legally required to not get the whole fleet grounded.


It's unreasonable to expect airliner reliability from self-driving cars when cars driven by humans are accepted despite being 100 times more dangerous.


The whole fleet should be grounded until the issue is debugged. Without debugging the issue:

* It might be a software issue (maybe the cars self-destruct when they encounter a solar eclipse -- silly example, but if the cars malfunction on events which are rare but wide-spread and correlated this could cause an emergency).

* If it is a hardware issue, it must be understood how a hardware failure can put the device in a dangerous, still semi-functional state. If one car was able to get itself pulled over for driving dangerously, then there'd be an unknown number of cars driving themselves dangerously.

* It is good that the fleet is punished disproportionately to the frequency of errors. We shouldn't accept a linear disincentive for putting dangerous vehicles on the road.


> * It is good that the fleet is punished disproportionately to the frequency of errors. We shouldn't accept a linear disincentive for putting dangerous vehicles on the road.

We already do.

Holding driverless cars to a much higher standard makes the perfect the enemy of the good. Human drivers are not all that great.


Yet we sort of do accept a linear disincentive for putting dangerous drivers on the road: worst case, we take that driver off the road, and we very rarely do anything to raise the standard all drivers need to meet such that it would weed out the drivers who are as bad as or worse than the ones taken off the road.

I think you're letting perfect be the enemy of good. In an ideal world we'd stop everything for debugging. Realistically all we need to have a net benefit right now is for the cars to be safer drivers than human drivers on average. As long as the issue isn't suspected of being intentionally triggered or correlated with some sort of large scale event, the world is still paying less of a cost than now even when not grounding the fleet.

A reasonable compromise would be to require autonomous vehicles moving in especially risky areas like highways to have a human inside with at least a "stop or slow down safely" override. In places where the vehicle would be moving too slowly to cause serious injury we can allow them to be human-less. By defining clear and objective criteria, the decision to ground a fleet becomes much easier to make (particularly for the machine, since you don't need advanced AI to check GPS against a database and apply appropriate speed limiters). If the incident was a serious malfunction of a basic function like ignoring the criteria for human-less operation, the fleet should be grounded. In other cases it wouldn't justify preemptively grounding the entire fleet (but may justify it after the severity of the issue has been assessed, probably by a group independent of the company responsible for the vehicles).


But the software "that did the crime" is all the same..


Yes, and that's why suspending the one car might seem counterintuitive. But that doesn't mean it's a bad solution.


Can we have lawn darts back if we suspend the darts that maim or kill?


Just sell guns without any controls whatsoever, and if a gun is used to shoot someone, impound that gun. Problem solved. Right?


You can't even have Kinder Surprise..


Sometimes the Indian supermarket has contraband imported Kinder Surprise, not the shitty domestic versions.


Why would you treat a software problem that's specific to the car (and, by extension, specific to all cars running the same software version) differently from a hardware problem that's specific to the car?


Waymo's marketing would imply they think the whole fleet should be suspended in this case. They call their system the driver, and seem to anthropomorphise it.

But more practically: what could a self-driving car do to get its licence suspended? AFAIK that's a pretty rare punishment outside of two cases: drunk driving or street racing.


It's rarely the punishment for any single crime. But it's a pretty common escalation for repeated failure to cooperate with the system. A ticket for speeding or running a stop sign might be just a fine. But if you start no-showing your court dates, not making the payments, and keep racking up more moving violations without properly resolving any of them, then you might find your license getting suspended.


Many (most?) states have a point based system for suspending licenses based on moving violations. You tend to collect points for things like speeding, failure to follow traffic signals, or illegal turns. Get enough points and your license is suspended for some period. Other states give you a number of points, and you lose points for infractions, get to zero and suspension.
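The accumulating variant of the point system described above could be sketched as a simple counter. The point values, infraction names, and suspension threshold here are all invented for illustration; real values vary by state:

```python
# Hypothetical sketch of an accumulating point system: infractions add
# points, and crossing a threshold within the lookback window suspends
# the license. All point values and the threshold are made up.

POINTS = {"speeding": 2, "ran_signal": 3, "illegal_turn": 2}
SUSPEND_AT = 8  # points within the lookback window

def license_status(violations):
    """violations: list of infraction names within the lookback period."""
    total = sum(POINTS.get(v, 0) for v in violations)
    return "suspended" if total >= SUSPEND_AT else "valid"

print(license_status(["speeding", "ran_signal"]))  # valid (5 points)
print(license_status(["speeding", "speeding",
                      "ran_signal", "illegal_turn"]))  # suspended (9 points)
```

The "lose points from an allowance" variant some states use is the same idea with the comparison inverted.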


That is a very intriguing scenario indeed. Due to the non-deterministic nature of many self-driving systems, I'd reckon a broader investigation would have to follow.

Should that investigation reveal a reproducible behaviour, a recall or general suspension of vehicles running that particular software version would seem like a reasonable response. Otherwise suspending just that one car could be an option.

It's uncharted territory both in terms of the legal system and w.r.t accountability so I guess no one has an answer to that yet.


How is it non-deterministic? If you run the same trained model on the same input data, you should arrive at the exactly identical output.


I think 'chaotic' is the technical term. It is technically deterministic, but the system is incomprehensibly complex and sensitive, such that it may as well be non-deterministic / pseudorandom.

You could drive the same car down the same road at the same time tomorrow, and it may behave differently. That makes bugs / edge cases nearly impossible to reproduce and fix.

With a human you can just say "why did you do that? Don't do that." Making little fine-tuned tweaks to neural nets is not so easy.


> How is it non-deterministic?

It's non-deterministic in a practical sense; you cannot reproduce the exact circumstances that caused a certain behaviour. Maybe a bug flew by one of the cameras at a certain moment, or a pedestrian with certain type of clothing passed by. You get the idea.


Depends. In principle you can log exactly what data the camera sends to the decision making unit. And you could replay that data.

The camera itself might not be deterministic enough to reproduce the same data with the same beetle, but replaying the log should work.

(Keep the log in a 'black box' on the car, and perhaps only keep the last x hours of data, if necessary.)

I think Waymo already does something like these kinds of replays.
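A rolling "black box" log of the kind described above could be sketched like this. The retention window, record shape, and class name are all assumptions for illustration, not anyone's actual implementation:

```python
from collections import deque
import time

class BlackBoxLog:
    """Keep only the most recent sensor frames, bounded by age.

    Logged frames can later be replayed through the same decision-making
    code, so the (deterministic) computation can be reproduced offline.
    """
    def __init__(self, max_age_s=2 * 3600):  # e.g. keep the last 2 hours
        self.max_age_s = max_age_s
        self.frames = deque()  # (timestamp, frame) pairs, oldest first

    def record(self, frame, now=None):
        now = time.time() if now is None else now
        self.frames.append((now, frame))
        # Drop frames older than the retention window.
        while self.frames and now - self.frames[0][0] > self.max_age_s:
            self.frames.popleft()

    def replay(self, decision_fn):
        """Feed the logged frames back through the decision function."""
        return [decision_fn(frame) for _, frame in self.frames]
```

For example, with a 10-second window, a frame recorded at t=0 is pruned once a frame arrives at t=20, and `replay` then only re-runs the surviving frames.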


What you're referring to is just simulation, at least after the point where your modified algorithm does something slightly different in the simulator than the real car did in real life.


I wondered about the last point, too. But Waymo apparently worked out a satisfactory solution to that.


That's easy and obvious: The whole fleet is "grounded" (or at least the self-driving feature disabled) until the bug has been identified and fixed. If you ask me, only a single car misbehaving is actually much more alarming than if the bug manifests in all cars of that type.


>If you ask me, only a single car misbehaving is actually much more alarming than if the bug manifests in all cars of that type.

It's not if it's hardware failure.


A hardware failure that puts the vehicle in a semi-functioning, but non-safe condition needs to be investigated, it could be silently lurking in any number of cars.


Investigated, yes. Grounding the whole fleet? Not necessarily.

It's a cost-benefit trade-off. We don't ground the whole fleet of human drivers, when one of them crashes their car, because the benefit doesn't outweigh the costs. It's not a given that the answer will change that much for self-driving cars. Especially for extremely rare failure modes.


Humans all have different versions of the 'driving' software installed, there's no way of identifying which ones will misbehave. If Teslas are malfunctioning, we shouldn't ground Chevy Cruise.

We exist on a spectrum of competence. If a type of car has a widespread issue that unexpectedly gets it into a state where a minimally competent non-impaired person can't operate it safely, that type of car should be pulled from the road. However, because humans are so sloppy (0 human drivers should be the ultimate goal), it is generally assumed that most accidents are caused by us, and so we require a significant accumulation of evidence to take a type of car off the road. Without a human in the loop, what's the unreliable part to point the finger at?


Well, the obvious incentive with such a rule would then be to install slightly different versions of the software in every vehicle.

(Or perhaps to have separate versions on file, and just load that different version when the first version is grounded.)

> Without a human in the loop, what's the unreliable part to point the finger at?

It's a cost benefit analysis. If the problem is sufficiently rare, it might make more sense to leave the cars running while you investigate. Even human lives aren't infinitely valuable.

Typical traffic engineering in the US assumes a cost of human life of about 6 million USD or so.
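The cost-benefit trade-off above can be made concrete with a toy expected-value comparison. Only the ~$6M value of a statistical life comes from the comment; every other number is invented for illustration, and a real analysis would include many more factors (injuries, reputational damage, regulatory risk, etc.):

```python
# Back-of-the-envelope: ground the fleet during an investigation, or not?
# The ~$6M value of a statistical life is the figure cited above; every
# other number here is a made-up illustration.

VALUE_OF_LIFE = 6_000_000  # USD, typical US traffic-engineering figure

def expected_cost_of_running(fatality_prob_per_day, days):
    """Expected fatality cost of keeping the fleet on the road."""
    return fatality_prob_per_day * days * VALUE_OF_LIFE

def cost_of_grounding(revenue_per_day, days):
    """Direct cost of taking the fleet off the road."""
    return revenue_per_day * days

# A sufficiently rare failure mode (say, 1-in-a-million per day) may not
# justify grounding a fleet earning $500k/day for a 30-day investigation:
risk_cost = expected_cost_of_running(1e-6, 30)  # ~$180
ground_cost = cost_of_grounding(500_000, 30)    # $15,000,000
print(risk_cost < ground_cost)                  # True
```

Flip the failure probability up a few orders of magnitude and the inequality reverses, which is the whole argument for treating rare and common failure modes differently.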


Or a cosmic ray..


Easy. All of the cars with that algo would be simultaneously suspended.


There's laws and regulations for this kind of stuff, and you need to get a permit.

Details depend on location / jurisdiction. Some Googling brought up eg https://www.dmv.ca.gov/portal/vehicle-industry-services/auto...


In case no one is specifically identified for the infraction, doesn't it come back to the car owner?

Not in the US, but that's how it would work here for illegal parking, or if the driver fled from the cops and the car is never found, for instance. Even if the owner doesn't hold a driving license, that's just where the buck stops. The owner is of course free to try to shove it somewhere else, but that's on them to make it work.


Parking citations attach to the car (as does civil collision liability generally), but moving violations can only be levied against the driver. Which is how some red light camera fines are avoided, if the driver isn’t clearly identifiable by the camera.


Here in the UK it's always the owner of the car who is responsible, unless they can say who was driving the car at the time. If they can't, then the fine defaults to them, the assumption being that they are responsible for knowing who is using their vehicle.

In other countries with similar laws (Poland), it has been argued in front of a court that it cannot be the owner's responsibility to know who is driving their car at any given point, especially if multiple people usually do; it should be the police's job to establish who they are fining.


It's quite logical, and an easy rule, to hold the driver responsible no matter whether they have a driver-assist feature on or not.

Then the driver could sue the manufacturer for any defect or if the manufacturer misled the user into thinking the system was safe and didn't need human attention.

This way, if a system is unreliable, users will start blaming the system and will stop using the feature or buying the car. It also allows manufacturers to roll out features faster and learn faster from the real world.

If the manufacturer is directly responsible when driver-assist features are engaged, but the driver is not, then the manufacturer will be scared to roll anything out, progress will be much slower, and there will be more victims, because drivers will think "I don't care, I'm not responsible; it's even better if the auto mode is on, even if it's less safe, because I won't be responsible".


> It's quite logical, and an easy rule, to hold the driver responsible no matter whether they have a driver-assist feature on or not.

That doesn't work in general. We'd want kids to be able to use self-driving cars. And really old people, and people without a licence, or who are mentally challenged enough that they don't have legal responsibility.

> If the manufacturer is directly responsible when driver assist features are engaged, but not the driver, then the manufacturer will get scared to roll anything, [...]

Google's Waymo specifically lobbies for getting the legal liability on themselves, not on the driver or owner.


>Also, it allows manufacturers to roll-out features faster and learn faster from the real world.

I don't think that "move fast and break things" is the approach we want for self-driving cars.


In the US, fines for moving violations identified by camera are the responsibility of the car owner, regardless of who was driving at the time of the infraction. The inability to identify the driver just means that the owner is not convicted of a moving violation, but he is still financially responsible for the penalty.


This is specific to jurisdiction. Having received a ticket for speeding from an automated camera, they just sent me a picture and told me to pay up or deny that it was me. In Oregon, moving violations attach to drivers only, so if they cannot prove you were driving the car at the time they drop the ticket.


Is that a federal rule, or decided by the states?


States.


What worries me a little bit is how this could shift blame further onto other (more vulnerable) road users once such vehicles become more common, solidifying even further the notion that roads are for motor vehicles and motor vehicles alone.


I'd say actually less so. As a pedestrian I'd be much more confident stepping in front of a self-driving car, than stepping in front of a human-driven car.

So much so, that I've read about people's fears that self-driving cars will be subject to bullying by pedestrians (and other drivers).


> As a pedestrian I'd be much more confident stepping in front of a self-driving car, than stepping in front of a human-driven car.

Why?


Because self-driving cars don't get distracted or tired, and they don't get angry either.


> Did they pass a driver's license check?

No

> Who's responsible?

Cruise and their insurance


Insurance doesn't pay for crimes. You cannot buy "speeding insurance" to pay your traffic tickets any more than you can hire someone else to go to jail for you.


The company probably has various kinds of liability insurance that would cover things like lawyers' fees.


>"...anymore than you can hire someone else to go to jail for you." *

* unless you're rich


I find the explanation offered by Cruise amusing: the car supposedly "figured out" that it is in a dangerous spot to be pulled over and "tried to move to a better one" (extremely complex behavior), while the actual reason for the car being pulled over was not having its headlights on (seemingly a relatively simple bug).


Why amusing? Sounds about right from what I know about Cruise cars. (Source/disclaimer: I worked at Cruise many moons ago.)


Amusement coming purely from the disproportionality of complexity of what worked correctly and what failed.


If a person did this they'd probably be in some serious trouble


Not necessarily, if you make your intentions clear, e.g. by slowing down and putting your hazards on.


Well, most of the time. If the cop feels like it, they can make you have a very bad time.

https://amp.usatoday.com/amp/7645474002

“An Arkansas woman says she had no safe place to pull over in July 2020 when a state trooper tried to stop her for speeding, so she turned on her hazard lights and slowed down. Moments later, the officer rammed her vehicle, causing it to flip over and injuring the woman”


As a teenager, I was pulled over for speeding. I slowed down, used the indicators to signal before changing lanes, then nearly came to a complete stop before realizing I was on the shoulder of a bridge. I suddenly remembered that somewhere in driver's ed or a defensive driving course, stopping on a bridge was strongly discouraged. I decided to continue pulling forward until I reached a "safe" spot off the bridge.

When the officer arrived at the window, he did question why I didn't stop at the original spot. After explaining, he said he appreciated the thought. I still received the ticket.

Point of the story? Not every LEO is an asshole.


Assuming you aren't driving far, the police will generally understand, and appreciate your regard for their safety. I have seen people choose to pull over in the worst of spots, where I am almost certain the police would have appreciated them moving to a safer spot.

As for the Arkansas lady, did you watch the video? IIRC, she went quite a ways before he PIT'd her. I would agree he shouldn't have done the PIT on her so soon, but it wasn't the case of her going a few hundred feet more.


Ah yes, "one time in Arkansas" is now applicable to the situation at large. You'd think the HN crowd would be intelligent enough to not do this. This is akin to saying "this website doesn't load on the Nintendo DS web browser therefore..."


In my country this would be an illegal disproportionate action. Did the state trooper get away with this? If not, wouldn't it be time to change the law?


> Did the state trooper get away with this?

Yes, he did [1].

> If not, wouldn't it be time to change the law?

'The [Arkansas State Police], as part of the settlement agreement, has agreed to change its Use of Force policy as it relates to PIT maneuvers and institute an “objective standard” required to justify the maneuver’s use versus the previous “subjective standard.”'[1].

I would argue that a more across the board set of changes is required.

[1] https://www.kark.com/news/working4you/arkansas-state-police-...


Haha of course he got away with it


de-amped URL: https://www.usatoday.com/story/news/nation/2021/06/10/arkans...

I thought that Google was supposed to stop the AMP project? Can't they automatically redirect old AMP URLs to the correct real URL?


clearly, it's not an ingrained G service that lots and lots of people depend on, as that's the typical condition Googs waits for before pulling the plug


There are LOTS of AMP links floating around... most people don't even know that they depend on it...


sorry, i guess i should have ended my original post with /s


that's ok... I'm just glad that (as far as I know) all AMP URLs are easy to identify visually... they will probably change that in version 2


I think I saw that video. Don't get me wrong, the officer was definitely an asshole; but I can see why he was annoyed: there was in fact plenty of space and time for her to pull over.


Remind us how what the police officer did wasn't vehicular assault and potentially homicide.

Remind us how physical injury is a reasonable response to a traffic violation.


I don't want to defend him, what he did was *wrong*. He should absolutely face consequences for gross over-reaction. I never said his response was reasonable given the circumstances.

But the way the story was presented was extremely biased. I believe the US cops are trained to actually use the "pit" maneuver to stop offenders who otherwise refuse to stop. In this case, he should've known better - but one can totally see how an asshole cop would interpret the situation as "lady is refusing to stop, I need to compel her to do so".

I guess what I'm saying is that it wasn't a totally random act of unprovoked cop violence.


I think that it doesn't matter whether it was 'provoked' or not: the police should be professionals who prioritize public safety above all else, and this is a total failure.


Ok, let me retry:

> An Arkansas woman says she had no safe place to pull over in July 2020

The Arkansas woman was lying. That's it. I was not defending the cop, and never did I claim he was behaving professionally.

Still. The Arkansas woman was lying. She had plenty of safe space to pull over. Seemed relevant to me.


Remove "most of the time". This narrative that police are boogeymen who will subject you to their emotional whims is so overplayed it's not even funny any more. There may be 1 in 100 truly awful police men and women out there, just as there are truly awful people in every known profession. People share a viral, highly edited video on TikTok to make themselves look like the victim, only for the entire world to pile on and go "wow yep police bad".

There's a lot that could be improved in our judicial system, but you're not going to identify the areas that need improvement by sharing anecdotes of "Well this one time in Arkansas..."


I grew up in the late 90s in NYC, and spent a lot of time hanging out "on the streets".

I think on average, I saw an act of gross police misbehavior about once a month. Sometimes it was quite savage. Always racist. I'm white, but many of my friends weren't. 20 years later I still have intrusive memories of standing by helplessly while terrible things were said and done to my not-white friends, and I still have violent fantasies about those cops getting theirs in kind.

Sure, this was often aimed at young people who may or may not be smoking pot or engaged in other mildly truant behavior. The police were not reacting to the truant behavior. They were targeting the minorly truant young people because they wanted to get their rocks off, and these victims were easily available and had no recourse.


I don't doubt your experience but let's not pretend that there haven't been any changes in the past 2 decades.


What makes you so sure? Even if there's data showing improvement, this would be data from a supremely untrustworthy source with a long history of manipulation and deception.


I mean you're making an argument that does nothing to facilitate discussion on the issue. It's basically saying "No it hasn't changed. And if you have evidence or data showing otherwise, it can't be trusted." At the point you refute any provided data or evidence the conversation just devolves into an argument instead of a debate or conversation.


Discussion isn't what's needed here. It's reconciliation. This is what it looks like when abuse has built up and trust destroyed. The only way to claw back from that is patient humble hat-in-hands dialogue and persistent incremental trust-building measures. I haven't seen anything that looks like that from any major US law enforcement entity.


I generally am in agreement. However there's a complication in my mind.

My dad was a police officer and I grew up around police. If there's one thing to know about police, it is that they are human, and subject to all the same emotions, inconsistencies, and foibles as other humans.

However, they are given an extraordinary amount of power, with very little accountability. Think about what it would take for a regular person (maybe even you) to lose their job. Yet here is an example of an officer, behaving very aggressively, against department policy, causing a major accident and injuring a pregnant woman.

As far as I can tell, he still has his job (although he was "disciplined", and the lawsuit was picked up by the taxpayers).

https://apnews.com/article/lawsuits-arkansas-legal-settlemen...


Alright but again, the anecdote of "This one time in Arkansas" can't reasonably be used as a justification for stating "All police are bad and out of control all over the country"


I think it is entirely reasonable to push for a (much) higher standard of police accountability. The problem is not that all police are bad and out of control, it's that whenever these incidents occur the leadership and police unions institutionally push back forcefully on accountability. This institutional pushback is what makes this into a systemic problem that needs to be addressed.


Sure I agree, that is reasonable. Many people try to make the argument that all/most police are bad and out of control, which is what I refute.


Organizational leadership really matters. Imagine being a law-abiding person who joins an organization that's run by organized crime. Most of your colleagues are decent law-abiding people, but a few are thugs. And when those thugs hurt people, leadership pours all its resources into supporting and shielding them from consequences. When you (a law-abiding employee) try to report a thug for misbehavior, your leadership supports the thug over you or even drums you out of the organization.

Do you think this would be a sustainable organization? Should the general population look at a given representative and think "90% probability this is a good person" or should they think "10% probability this person is a violent thug who can hurt or kill me with impunity, and in any group of ten reps one might try to kill me and the others will look the other way?" What is the right way to engage with an organization like that? And over time is this organization going to reform itself, or is it increasingly going to attract bad people and drive away the decent ones?

There's a lot at stake here. Recent anti-accountability political activism by police leadership and union bosses is really scary and counterproductive stuff even if most cops are fundamentally decent.


> Alright but again, the anecdote of "This one time in Arkansas" can't reasonably be used as a justification for stating "All police are bad and out of control all over the country"

In this post https://news.ycombinator.com/item?id=30987681 you wrote:

> ...There may be 1 in 100 truly awful police men and women out there, just as their are truly awful people in every known profession.

Your complaint about others' anecdotes would hold more weight if you weren't just making up statistics.


I think it's extremely clear that I'm not trying to portray that exact number as a fact, but more as a general portrayal of "most police officers are good, just as most people are good"


> I think it's extremely clear that I'm not trying to portray that exact number as a fact, but more as a general portrayal of "most police officers are good, just as most people are good"

That sounds about as useful as the anecdotes in this thread. Police aren't a random sample of the population after all. Even if it's true that most people are good, that doesn't imply most police are.


Alright sure, I guess so. Neither of us have any hard data to back up our opinion so I guess it turns into a debate of feelings, which will lead neither of us anywhere.

What I do know is that as of May 2021 there were 665,380 police officers employed in the US, and trying to generalize about all of them in any way based on a few highly edited videos is absolutely not a valid approach given standard practices in data science and statistics. Given that the US generalized crime rate is 47.70 per 100,000, it's reasonable to argue that unless there is evidence showing otherwise, police officers fit roughly into that same distribution and are therefore "mostly good" (as defined by law), as most people are.

https://www.bls.gov/oes/current/oes333051.htm


Look up the per-capita number of deaths caused by U.S law enforcement officers vs other countries.

Then look up the per-capita number of people put in hospitals by U.S. law enforcement officers vs other countries.

Then look up the consequences U.S. law enforcement officers face when they harm or kill. Also look up their relative training.

Finally look up most dangerous professions and see where law enforcement ranks to get an idea of whether their lives are truly in such danger as to justify their actions.

After looking at this data (and I apologize for not providing citations directly but I’m on mobile), it’s hard to avoid forming the opinion that U.S. law enforcement is undertrained, unjustifiably aggressive, protective of bad officers, and unaccountable to the communities it serves.

Anecdotally, I’ve had more than my fair share of interactions with officers and about 20% of them have been hotheaded jerks.

Oh, I forgot one: look at the domestic abuse numbers for law enforcement officers too.

Edit: here’s at least one citation:

https://www.cfr.org/backgrounder/how-police-compare-differen...


That is not a valid citation; it immediately starts off by illustrating its own political prejudice. The fact that they don't link directly to their sources is also questionable. They say they use Source: https://www.prisonpolicy.org/data/ (among others) but I don't see any data for some of the numbers they're projecting.

It's also not quite fair to analyze this data without taking into account the prevalence of violent crime in general in each country in question. Of course a country with more violent crime will see more violent policing.

I'm not saying that it doesn't make any valid points, but you can literally make any profession or population of people look bad if you cherry pick data-points and provide politically biased commentary while also not explaining co-factors or context.


The statement there isn't "All police are bad and out of control all over the country" but rather whether you should perceive that possibility as threatening enough to seriously affect your actions, i.e. whether stopping in a non-dangerous place (as opposed to stopping immediately and staying put) is permissible. And if there's a 1-in-100 chance of the police being unreasonable, then that is definitely too high a risk.


That's fair, I just pulled that number from thin air to try to portray my point. The actual crime rate in the US is 47.7 per 100,000. Unless there's hard evidence to the contrary, it can be assumed that those numbers also apply roughly to police.


I suggest watching some videos of how police behaved during the protests a couple years ago. The Verge had an excellent compilation. It’s hardly one or two bad actors. My personal favorite was the police pulling parents out of a vehicle and beating them while their children watched — the car was just trying to turn around and get out of the protest area safely.

Your argument might be more valid if these “bad apples” faced appropriate punishment, but they never do.


I've seen plenty of those videos, and 9 times out of 10 if you're able to find the whole, unedited version of the video the officers actions suddenly seems much more reasonable. Not always, admittedly, but very often.


With that many examples, it should be simple to post just one example to make your point. But unsubstantiated contrarianism typically does not contribute to a conversation.


No one else provided examples nor asked for any. Perhaps you should ask for examples instead of assuming I'm acting out of malice. Here's a link to the full video for probably the most widely recognized case in the history of police brutality, the killing of George Floyd.

Does the context this video provides make the officers' actions okay? No, I don't think so. Does it make those actions appear much, much more reasonable than they were made out to be on social media? Absolutely.

https://www.youtube.com/watch?v=NjKjaCvXdf4


> Does the context this video provides make the officers' actions okay? No, I don't think so.

I understand and appreciate that social media can (and has) exaggerated things, which you've demonstrated, but I don't see how this demonstrates that police brutality is not a problem, or is an exaggerated problem. Ultimately you agree the police officer's actions were not okay.

The Verge article which I referred to originally: https://www.theverge.com/2020/5/31/21276044/police-violence-...

For example, there were multiple videos showing police cars literally running into protesters, ramming into groups of them. What additional context could ever make that okay? Under what circumstances would that be a reasonable response?

Or as another example, police use tear gas on protesters, even though it's banned in war.


The difference is that on one hand you have people sharing videos that make police look like they're suffocating people and hitting them with cars for the fun of it, while on the other you see that they're often in a situation where they've already tried a tactful approach, where they are being or have been met with violence, and where the expediency of the moment required a difficult decision that may or may not have been correct.

I've seen videos of protesters surrounding police vehicles trying to destroy them and harm the officer inside. That's a situation where running people over to get away would absolutely be acceptable.

Tear gas is a great way to safely control crowds without firing gunshots. I've been subjected to it myself and while it's not pleasant, no lasting damage results from it.

Stop trying to act like context doesn't matter, it always does and here even more so.


> Tear gas is a great way to safely control crowds without firing gunshots. I've been subjected to it myself and while it's not pleasant, no lasting damage results from it.

It’s literally banned in warfare. Sorry but this is ridiculous.

No, running people over is never a proper response to violence. They're policemen; they should disengage, not plow into citizens of the country they're supposed to be protecting. (Anyway, the videos did not show anyone trying to get inside the car; the protesters were behind barricades. Try watching them.)

Again, you haven’t given any actual evidence this is happening. The single piece of evidence you gave is still a case of clear cut brutality, you admitted it yourself.

Finally, again, even if they are "a few bad apples", they do not get punished. And their supposedly valiant comrades don't stop them from doing these things. The system is corrupt.


Did you read anything or are you just going to project your own bias on the discussion?

"It's banned in warfare" doesn't make it some egregious WMD. It's banned because all chemical weapons are banned per the Geneva Convention, and adding particular exceptions is a dangerous path to go down. Every single basic trainee in the US Armed Forces is subjected to it and they all turn out just fine (or, if they don't, it's not because of the tear gas exposure). You're using the term as if it's some catch-all that ends any and all conversation. Would you like to suggest an alternative for safely controlling dangerous crowds?

> They should disengage

The exact situation I laid out was protestors surrounding an officer's vehicle and beating it, trying to gain access to do him bodily harm. Would you like to suggest how he can disengage? I'd love to hear any ideas you have, because driving away is quite literally the safest option there. The alternative is to shoot the people trying to attack him. One most likely results in broken bones, the other most likely results in death. Here's an example in which people swarmed the car that showed up on screen and the officer had to run someone over to escape personal harm. Frankly, if protestors surround a vehicle and don't move or clear the way when asked multiple times and given multiple warnings, then they 100% deserve to be run over. That's their fault. Don't surround an occupied vehicle in a violent environment and expect nothing to happen.

https://www.youtube.com/watch?v=bou2MzSGfvs

I didn't say it was "clear cut brutality". I said it probably wasn't okay, but that it's more nuanced than people such as yourself would like to make it appear.

Are you going and holding every single software engineer that writes malware or privacy invading adware responsible? Oh no? Guess that makes you just as bad as them.

Edit: And I should add, nor have you provided evidence to the contrary. How many videos are there of police intentionally ramming into completely peaceful protestors? I'd wager there aren't very many. You're the one claiming that any potentially adverse action taken by a police officer is wrong despite any co-factors or context. Show me a video where the police are running someone over and, with all the context and surrounding information, it still looks absolutely unreasonable.


Thanks for the link, it indeed provides context to your comment.


> There may be 1 in 100 truly awful police men and women out there, just as their are truly awful people in every known profession.

This feels a bit like Father Ted's defense of the Church's child protection record: "Say if there are 200 million priests in the world and five percent are paedophiles. That's still only 10 million."

1 in 100 would be _far_ too high. Maybe 1 in 100 people are awful in the general population, but ideally you'd hold police to a much higher standard.


Yeah, fair enough, I just pulled that number from thin air to try to portray my point. The actual crime rate in the US is 47.7 per 100,000. Unless there's hard evidence to the contrary, it can be assumed that those numbers also apply roughly to police.


Depends on your skin color


Is it a euphemism for dead? /s


no but you'd probably be arrested


[flagged]


A comment like this has no place on HN.


Gallows humor like this always has a place.


It was a low effort comment. If you want low effort you go to Reddit. HN has made a name for itself as being a place for rational thought with good, honest, and respectable discussion. Just my opinion of the site though.


I apologize for creating offense.


Perhaps the lights are not even inside the loop of the Cruise hardware and software. They may rely on the automatic lights that GM provides from the factory. Do they even have visible light optical cameras in their system?


Sir, step out of the car please.

I am sorry, I can't do that Dave.


'Afraid Mode' coming soon!



"Cruise vehicles are only authorized to drive from 10PM to 6AM, which obviously makes headlights pretty important."

So why rely on the automatic headlights at all? Why not just set the headlights on manually?


In the EU there is a law that you should drive with headlights on at all times.

Old cars drive with low beams; new cars come with daytime running lights (DRLs).

https://en.wikipedia.org/wiki/Daytime_running_lamp#European_...


EU has a regulation requiring new cars to be equipped with DRL capability, that's all. Laws requiring the use of lights in daytime that exist in some countries are completely unrelated to EU.


Actually this is interesting, because manufacturers have to build the same car but equip it with different parts for different markets.

Another example is turn signals. In the EU they're orange, and when you drive behind a US-imported car here, drivers are confused because its turn signals are red. The driver behind can't tell whether the car in front is about to stop.

DRLs cost a few bucks, but they're a great improvement in road safety.


> DRLs cost a few bucks, but they're a great improvement in road safety.

Only if you ignore all the road users who don't already burn through so much energy that a few extra watts are a non-issue. When drivers get everything they feel they need to see conveniently highlighted, attention will inevitably drop to compensate. They certainly know they need to see more than just cars, but the subconscious only really cares about threats.


No, as you have cited, there is a law saying vehicles must be equipped with these lights, but it's up to national regulations on when they must be used.

See https://trip.studentnews.eu/s/4086/77033-Car-travel-in-Europ... for a list (incomplete, Denmark requires daytime lights.)


Nordic countries are different because in winter, even during the "day", the sun can be seen for only 2-4 hours.


During winter at mid-day my lights automatically turned on, just in simple overcast weather. So it does get a bit gloomy at times.


Unless it's summer, in which case it can be seen for about 19 hours.


But even on a 19-hour day, daytime lights are good.

It's easier to see whether a vehicle is running or not.


The US has this law for motorcycles. Maybe we should apply it to driverless cars as well


Yes, every self-driving car should have a row of LED lights at the front. Also, they should indicate that the AI is actively scanning its environment by switching the LEDs on and off in a regular left-to-right pattern.


And those lights should be a horizontal row of 8 red lights, sweeping left to right.

And when the car speaks to the driver and its surroundings, there should be three vertical bar graphs made of red LEDs to indicate the speech.

There is a good controller for those kinds of lights, named "Knight Industries Two Thousand", but I think there is an upgrade to "Three Thousand" available too.


And a guy with a red flag running in front of the vehicle.


I feel the need to point out that whilst permanent headlights make vehicles more visible most of the time, in the most dangerous scenario, they actually make you less visible.

Low sun behind the vehicle, especially if it's a motorbike, makes it very hard to see. If the bike isn't lit up at the front you stand a fighting chance. If it's got the front headlight on then you don't even see the silhouette, you just see bright light surrounding and bright light from the middle.

Motorcycle Action Group in the UK has had to run a handful of "repeal this safety law because it actually makes things more dangerous" campaigns in the last 20 years.


That's a good idea.

Actually, a few LED lights don't require much power or maintenance.


If a law says you should do something, it’s not a law it’s a suggestion.


Just my anecdote, saw a cruise car trying to turn left from the middle lane on Franklin.

Came to a complete stop, bringing traffic to a halt, and waited for the cars on its left to clear out, including anyone who went around, then cautiously moved into the left lane and turned.

Some kind of rude, selfish programming I’d be yelling at a person for.


I wonder if Cruise or other AVs have a programmed threshold to "give up"? A human driver in the middle lane might decide to keep driving on and take a different route rather than hold those people up.


I have lots of anecdotes of the opposite. Many, many times I've been driving around SF and noted "Hey! That driver had an opportunity to be an ass and instead chose to be a polite, reasonable driver. Ohhhh, it's a Cruise car. That explains it." 9/10 times it's a robot car.


The intention is for the car to be a very boring, unremarkable driver.

This sort of behavior has been noted before and most incidents were thought to have been fixed by a merge ~6 months ago, but there can always be missed scenarios/interactions. Was this more recent?


Yes, this was March 29th, around 12am, roughly 15 blocks south of Lombard.


Weird. I'd look into it if I were still at cruise, but alas.


> Some kind of rude, selfish programming I’d be yelling at a person for.

Sounds like an inexperienced or incompetent driver who lacks confidence. Basically, the kind of driver that shouldn't be on the road.


Perhaps all human drivers are incompetent? In the USA there are 5 million crashes involving insurance every year. Insurance companies say that the average driver will get in a crash (however minor) about once every ~18 years of driving.
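
Those two figures can be sanity-checked against each other (a rough sketch; both numbers are the ones quoted above, treated as independent estimates):

```python
# Rough consistency check of the two quoted figures.
crashes_per_year = 5_000_000      # insurance-involved crashes per year (quoted)
years_between_crashes = 18        # average years between crashes per driver (quoted)

# If every driver crashed once per 18 years, producing 5M crashes/year
# would require this many drivers:
implied_drivers = crashes_per_year * years_between_crashes
print(f"{implied_drivers:,}")     # prints 90,000,000; the US actually has
                                  # roughly 230M licensed drivers, so the
                                  # quoted figures are loose estimates at best
```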


Rereading my comment, yeah, that's harsh. Not trying to promote road rage, though it is kind of fun to be loud from the safety of your car when no one can hear you.


How do you pull over an autonomous car? Like, how does it know that it needs to pull over?


"We work closely with the SFPD on how to interact with our vehicles and have a dedicated phone number for them to call in situations like this.”

And yes, that seems thoroughly impractical at scale.


Ok. So I guess the SFPD has been tailing the Cruise car, called the number, and then Cruise personnel made it pull over remotely?


If that's the case then it's pretty impractical and a double standard versus a human driver, who needs to pull over immediately when instructed by the police or face potentially grave consequences if he doesn't comply.

When I'm at the wheel, I can't just tell the cop tailing me "yeah officer, I didn't pull over when you told me to because you should have called me on my cell to tell me that" lol.


> double standard vs a human driver who needs to pull over immediately when instructed by the police or face potentially grave consequences

Especially where police are so hasty to use their cruisers as weapons[1]

[1] https://eu.usatoday.com/story/news/nation/2021/06/10/arkansa...


On an unrelated note: why are the US police cars called "cruisers"? Are they a different category so to say?


The best description is on the first result from search:

> Police cars are called cruisers because that is what they do. They cruise about town, looking for trouble, and responding to those requiring assistance.

Doubt the original poster meant any irony there, but...

Still, Webster:

> cruiser, noun:

> 1 : a vehicle that _cruises_: such as

> a : squad car

> cruise, verb:

> 1 : to sail about touching at a series of ports

> 4 a : to go about the streets at random but on the lookout for possible developments

So probably it is the origin.

Wonder if there are similar designations in other languages. In my native one the term 'cruise' is restricted to sailing (as in v., definition 1 in Webster), so it could never be applied to a police car roaming the streets.


  > In my native one the term 'cruise' is restricted only for sail
Do you not use a cognate word for "cruise control" in your language?

In Hebrew "Shait" is used for sail, but also for the idea of cruise control and a cruise missile. I cannot think of any other uses, not even for patrol vehicles.


In Swedish, at least, the 'cruise' in ships and missiles is the same word, but "cruise control" is a completely different word (literally "speed holder"). You can't "cruise" in your car in Swedish.


In Italy our police cars are much faster. A police car is usually called "volante" (literally "flying"); the more correct name is "autopattuglia", which is more like "patrol car".

Then we have the two different animal derived names, police cars are "pantere" (panthers) and the Carabinieri ones are "gazzelle" (gazelles).

Likewise, you can't cruise with a car; the cruise control (often left untranslated) is a "regolatore di velocità" (speed regulator), but you can actually maintain a "velocità di crociera" (cruise speed) in a vehicle.


  > a police car is usually called "volante" (literally "flying")
Don't all cars driven by Italians do that? I think that I've even seen a Punto with a single poor tire on the pavement once. :)


No Pinto ever set foot (I mean tire) in Italy AFAICT; I think they were never imported into Europe. But, seriously, I once happened to be overtaken by one of the few Lamborghinis they have, on a highway. I was going (cruising) at a "normal" speed (some 120 km/h) and had this strange "something blue" feeling in the rear mirror, and by the time I realized it was the police flashing lights I heard "whoosh" and they were gone. (In some rare cases they use these Lambos to transport transplant organs between cities when a helicopter is not available or can't fly):

https://it.motor1.com/news/457740/lamborghini-huracan-polizi...


Oops, I corrected "Pinto" to "Punto". For some reason, typing that just feels like it's a dirty word.


I see, that makes sense now.


> Do you not used a cognate word for "Cruise control" in your language?

It is a literal copy from English here. ESC/ESP and ABS are used either as abbreviations or written out in full, but they map 1:1 to their English counterparts.

For what it's worth, I checked how the article "cruise missile" is titled in other languages on Wikipedia (well, at least the ones I could read).

Most European languages use some variation of "cruise", while some other European and Slavic languages use "winged" or even "maneuverable" (Polish). There is a slight correlation: seafaring countries tend toward "cruise", former Warsaw Pact ones toward "winged".


History.

Transporting the verb "cruise" makes more sense circa 1915 when foot and mounted patrols are proportionally more common and the steering wheel interface had just recently been ported over from watercraft.


The scalable solution would be that law enforcement will be given a direct override/pull-over button for any driverless car. I expect this to happen in the future if driverless cars reach large volumes. Maybe I haven't thought this through enough, but I can't imagine a circumstance where you would want a driverless car to be able to ignore stopping directions from law enforcement.


> I can't imagine a circumstance where you would want a driverless car to be able to ignore stopping directions from law enforcement.

Funny, I can't imagine a circumstance where I would want "law enforcement" to be able to override my directions and take control of my car.


Do you want number plates?

Mine have never helped me get anything except for parking fines, but lawmakers decided being able to visually and uniquely identify a car for the purposes of law enforcement was more important than my privacy because it enabled widespread use of this dangerous and strictly regulated machinery.

You can own a car without number plates, you just can't drive it on a public road. Presumably similar with this hypothetical kill switch. Requiring compliance with rules for access to a shared resource doesn't seem that ethically challenging when you think about it.


Ok, I'm guessing you're American. I've read that the relationship between the police and the public can be a bit .. antagonistic. And I know that fugitives are disproportionately romanticized in fiction.

But police officers giving you stopping directions are already overriding your directions and taking control of your car, fully backed by law and force if necessary. Only with a very inefficient implementation.


>Ok, I'm guessing you're American. I've read that the relationship between the police and the public can be a bit .. antagonistic. And I know that fugitives are disproportionately romanticized in fiction.

Two can play at this dumb game.

I'm guessing you're European. I've read that the relationship between the police and the public can be a bit .. blindly trusting. And I know that fugitives are disproportionately demonized in fiction.

I'd say you should concern yourself with problems on your side of the ocean and I'll do likewise. But looking at history says that's wishful thinking.


And, like guns, only the police will ever use this weapon and only for the purposes of public good.


Also, we're talking about driverless cars. The scenario is not the police pressing a remote stop button to override your directions, it's to override Cruise/Waymo/Uber's AI or remote control.

This is a serious question: is there some real scenario that hasn't occurred to me where it's desirable for either Cruise/Waymo/Uber's remote control or the onboard AI to ignore stopping directives from the police?


No. I agree with you. I was thinking further down the line, when everyone is in a driverless (optional?) car.


It's all fun and games until someone else figures out a remote attack on that system.


Wouldn’t it be great to give the government the power to cause a traffic accident for anyone they didn’t like?


They already can. Police will simply crash your car if they feel like it by using their car.


That only happens when you're running from them. Imagine if we gave the DNC a kill switch to every car in America. Frame it as a tax penalty, and you'd have to wait until their audit finished in your favor for them to consider turning your car back on.


It's supposed to only happen when you're running from them (though I think it shouldn't happen then either), but the reality is they will do it if they feel like it. The only thing stopping them is physics. Police are generally not held accountable.

Either that or I could argue we already have a killswitch like that - suspending your license - if I also go along with the idea that the cop won't flip your car when you're not running from them because that's also illegal.

So either we have a world where the system works or where it doesn't, but unfortunately we have both.


> That only happens when your running from them.

Except for all the times it's happened when people weren't running from them?


Some of us feel that the death of Michael Hastings was a demonstration of exactly that.


Actually, I'd expect driverless cars to rarely get pulled over by police, so in fact this scales fine.


I'm guessing the plan is that once you reach that scale there will be people in the cars?


How can a passenger force a fully autonomous taxi to pull over? And what happens when the passenger is in the driver's seat and the police think they are resisting orders?


I would be certainly terrified if the car didn't pull over and I would see cops chasing me. And all I could do was watch.


Next thing you hear is shots and glass breaking sounds. Not sure you hear much more though.


Pretty sure you'd have some sort of "gracefully pull over" and "hard emergency stop" options in those once they are actually used for passengers? There are other reasons than police stops why you would want the option.


I would assume they have to have it programmed in them to pull over.

They already need to in case of ambulances or fire trucks behind them. That's a day-one feature: you see emergency flashing lights, you pull over until they pass you.

If they pull in behind you, then that's just the lights-behind-you case: pull over, and the car remains stopped.

Flashing lights to signal to pull over is one of the most unbreakable of all driving rules, you can't release a self driving car that can't handle that case.


Same way humans do? It needs to recognize flashing lights and emergency vehicles to obey the law to get out of their way, and from there add code to fully pull over and remain stopped as long as there’s a cop car with flashing lights staying behind you.
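
The rule described above (yield to emergency lights, and stay stopped while a cruiser holds position behind you) can be sketched as a toy decision function. This is entirely hypothetical; no real AV stack is anywhere near this simple:

```python
# Toy sketch (hypothetical, not any vendor's actual logic) of the
# pull-over behavior described above.

def next_action(emergency_lights_behind: bool, safe_shoulder: bool,
                currently_stopped: bool) -> str:
    if not emergency_lights_behind:
        # No emergency vehicle behind us: drive, or resume if we had stopped.
        return "resume" if currently_stopped else "drive"
    if not safe_shoulder:
        # No safe place to stop: continue to the next pull-out,
        # just as a human driver is legally required to.
        return "continue_to_safe_spot"
    # Lights still behind us: pull over and remain stopped.
    return "pull_over_and_hold"

# The same rule covers both cases: if the emergency vehicle passes, the
# lights are no longer behind us and the car resumes; if the cruiser stays,
# the car holds, which is exactly what a traffic stop requires.
print(next_action(True, True, False))   # pull_over_and_hold
print(next_action(False, False, True))  # resume
```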


I assume it's more complex than that, but I can't see how a driverless car can reliably distinguish between being pulled over and an emergency vehicle trying to get past on a narrow road.


It might not, but the behaviour would be the same, so the effect is achieved.


No, because on a narrow road, coming to a stop blocks the vehicle behind you.


If you're on a narrow road where you can't safely pull over, you don't stop. You continue until you can pull over safely.

Stopping would cause more harm than good if they're actually trying to get around you.


Sure, I know that, but does a driverless car? All it can see is blue lights right behind it, which could mean "stop right now or we'll shoot you" or "get a move on to the next passing point you can find" depending on context (probably conveyed by the gesticulating of the driver and maybe a loudspeaker).


Part of being pulled over is not blocking traffic while stopped, so if there’s literally no shoulder/dirt to pull over onto then obviously you continue until there is.

Just like a human driver is legally required to.


Remote override for authorities to stop the car, instruct it to pull over etc. Alternatively, passengers always having access to a pull over when safe button.


The real surprise here is that SF police pulled over a car at all. Have been here for more than a decade and I can count the times I have seen a car pulled over in SF on one finger - meanwhile, I have seen numerous cars blow through red lights, speeding, U-turns in the middle of the street etc. in front of police with no response.


Yeah, this is the real surprise for me as well. So many cars drive around SF with no license plates, and the SFPD doesn't do anything about it.


Yeah, this was my impression as well. In Chicago you have police actively pulling people over all the time. In SF I have literally not seen a single person pulled over in the year I've been here. For context, in Chicago I would see about one per day, and double that in Wisconsin.


In Chicago? Like, in the city proper and not the suburbs? I've lived most of my life in Chicago, and it's basically unheard of to get pulled over for a traffic violation.

Lately, blatant disregard for traffic laws in Chicago has gotten so bad that it's a little terrifying. It's become a common thread on the r/chicago subreddit with regular posts pointing out how bad it's gotten and how police never stop anybody.

Here are two posts from yesterday alone:

https://www.reddit.com/r/chicago/comments/tzyrmn/red_lights_...

https://www.reddit.com/r/chicago/comments/u0x3sd/im_getting_...

EDIT: Wanted to point this out b/c I think it's a larger problem in general of people not following rules of the road, not isolated to specific cities.


> In Chicago you have police actively pulling people over all the time

Lived in Chicago for 25 years, this is pretty much the opposite of my experience. Seems like 90% of the time someone is pulled over here they're being arrested so I don't think it's because of a traffic violation.


Cruise Automation has a terrible video for first responders.[1] The dialog sounds like it was written by lawyers.

Summary:

- Vehicles recognize lights and sirens and will pull over.

- Call Cruise's control center at 888-662-7103 in case of trouble with a vehicle. The control center needs the name of the vehicle, which is painted on the car in 3 places. Or at least the location.

- Chocking and blocking the wheels, as firefighters routinely do, is safe.

- Other emergency procedures are the same as for a regular Chevy Bolt.

[1] https://www.youtube.com/watch?v=ZM3kfauMgZY


There are a lot of things concerning about these cars driving themselves around the city -- most of which are typically covered in these comment threads -- but here are a few questions that seem relevant to this particular incident:

How are police officers trained to deal with these machines? Hopefully it's more than "call this number"; are Cruise support lines required by law to be staffed whenever their cars are on the streets? What mechanisms do police or city officials have to control these machines? Do they have some interface to put the vehicle in neutral and roll it away? Assuming not, why not? What's the right protocol if a car gets a flat tire or is driving around with a door open? Would a driverless car from a different manufacturer follow all the same protocols?

I'm sure there are clear answers to similar questions for, say, BART or Muni. It is preposterous that something this significant and potentially dangerous can be deployed on the streets of a city with tens of thousands of pedestrians, with no messaging to the public about these details.


They do outreach programs for local police and fire departments on first responder training and have published relevant materials.

Waymo: https://waymo.com/firstresponders/

Cruise: https://www.getcruise.com/firstresponders


I think they all still have test drivers in them right?


Imagine if every failure in a traditional car became news.


This is relevant, though. There are traffic laws and regulations in place for human drivers and this incident poses a conundrum: how are minor transgressions handled with autonomous vehicles?

Who gets fined? The owner? The manufacturer? Do the rules not apply to machines, or apply to a lesser degree, and if so, why?

Lots of interesting legal questions that are much more relevant in practice than hypothetical (and IMHO completely misguided) trolley-problem ethical discussions.


I think it is reasonable that the owner or operating entity gets the fine. Then they can sue the manufacturer or service provider for any issues.


I would venture that manufacturers should prevent drivers from breaking the law when possible. For example, rolling stops and speeding by autonomous cars should not be possible.


It’s the first real autonomous testbed, and the same software presumably drives all their cars (up to A/B tests). Unlike in traditional hardware, the cause of these errors is very opaque and difficult to understand (at least for us and possibly the company as well). That makes it plausible that equally dangerous and simple errors still exist.

Imagine if a traditional company had just released a new car model and someone was driving it and the front right wheel just fell off the car. It would be news because it indicates that something very basic was misunderstood by the designers and engineers. You probably wouldn’t want to get in such a car. Now imagine the general reaction is “cars are complex systems, the other three wheels actually did quite well, we fully expect to update the design to keep the front right wheel on in the future.”


I feel this will happen at the end of the adoption curve, should we reach it. Media will begin to shame legacy drivers and accidents will become newsworthy curiosities. Sort of like the last known cases of a pathogen that becomes eradicated.


The news here isn't the headlights failure.


The police should have shot the tires out and boxed in the vehicle to stop it from escaping.


Don't forget pistol whipping the application just because it was written in php.


A human gets pulled over and gets a ticket.

A bot gets pulled over and the cop acquiesces to the company.

Of course, monied interests don't get tickets. Only us plebes do.


So what would happen if the passenger is arrested by the police for being in a car that refused to stop? I wonder what legal principle could be used to justify this. Involuntary failure to stop? Are there any laws about driverless vehicles and who is responsible for problems?


> So what would happen if the passenger is arrested by the police for being in a car that refused to stop?

Just for being in the car? Presumably they would be questioned and released.

> I wonder what legal principle could be used to justify this.

"Probable cause" of having committed a crime is typically the principle I believe.

> Involuntary failure to stop? Are there any laws about driverless vehicles and who is responsible for problems?

I don't know and can't imagine what involuntary failure to stop might be. I'm sure they could fit an existing law to the situation if they suspected the passenger was instructing the car to not stop.


Why would they arrest the passenger?


How about putting a piece of cloth over a sensor.



