Firefighters forced to smash window of driverless Cruise taxi to stop it (businessinsider.com)
266 points by cma on Jan 30, 2023 | 320 comments



Lest anybody think "this is no big deal", let me assure you as a former firefighter, cars driving over fire hoses is a MAJOR deal. Whether the car is driven by a human or AI. And sadly, humans do this all too often anyway.

FWIW, the reason(s) this is bad - if not obvious - include:

- The hose can (and does) get snagged on the car's under-carriage, which can rip it away from a hydrant, or engine, or yank an attack line right out of the hands of the crew manning it.

- Hoses snagged by cars can, as they're being unexpectedly pulled somewhere they were never expected to be, catch firefighters in the legs and cause them to fall. I'm aware of at least one case where a firefighter suffered a pretty severe head injury in one of these scenarios.

- Even if the hose isn't snagged, damage from the vehicle driving over the hose can cause the hose to rupture. Hopefully I don't have to say any more about why that would be a Bad Thing.

- Even if the hose isn't snagged, or doesn't burst at the moment the vehicle runs over it, the damage from this kind of thing is cumulative with all the other damage fire hoses suffer, which can lead to the need to replace it prematurely. And let me assure you, this stuff is not cheap.

- Probably more that I'm forgetting right now.


I'm slightly alarmed by the general scenario of a driverless vehicle approaching an emergency situation. A human-driven vehicle will respond to gestures from firefighters to go around, etc. And even most dull humans will have more situational awareness and flexibility than an AI is likely to, particularly as these sorts of scenarios are edge cases by definition. E.g. depending on your jurisdiction and the situation, it may be proper to run a red light to make way for an emergency vehicle. The insistence of the driver waving frantically at you is part of that judgement call. Software is not ready for that yet, I think.


Driverless cars will have endless problems.

When I was on a sales trip to KL in Malaysia there were times when the tropical rain could half flood a street in minutes. Our driver was able to creep around the shallow side of the road (alternating with cars in the other direction).

How would a driverless car cope? Would it even detect that there was a problem? And if it detected it and refused to drive on, that might be the worst thing to do if the flooding was going to get worse.


There's a one-lane bridge near where I live that would pose a similar challenge. Drivers coming from the east always have right-of-way, but the only way to know this is by reading the bespoke traffic sign posted next to the bridge. To make matters worse, during times of heavy traffic westbound drivers will often stop and let through a stream of eastbound traffic. How many cars are let through is negotiated with hand signals or flashing lights and smiles.


We will get the worst of both worlds: the car will tell you to take over in dangerous or confusing conditions, but the person taking over will have minimal driving experience, on account of the car having done all the driving until then.


Driverless is a huge waste. Most of Tesla's bad press (the company/product, not Elon) is due to Autopilot and/or FSD.


> Software is not ready for that yet, I think.

It will never happen. Robots will never have the intelligence to manage dangerous situations in the open world, only in controlled environments. After a series of accidents passes the unforgivable threshold, certain levels of autonomous driving will be banned completely.


Maybe I'm too optimistic about self driving but it seems that a fail-safe default of "upon encountering an active emergency situation, stop at a reasonable distance away" is good enough and better than what a significant minority of human drivers do.
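
A minimal sketch of what that default might look like, assuming the perception stack can already label a scene as an active emergency (which is itself the hard part; the names and distances below are made up):

  SAFE_STANDOFF_M = 30.0   # assumed "reasonable distance" from the scene

  def plan_speed(emergency_ahead: bool, distance_to_scene_m: float,
                 current_speed_mps: float) -> float:
      # Fail-safe default: on detecting an active emergency scene ahead,
      # creep, then come to a stop well short of it.
      if not emergency_ahead:
          return current_speed_mps
      if distance_to_scene_m <= SAFE_STANDOFF_M:
          return 0.0                          # stop and hold position
      return min(current_speed_mps, 5.0)      # otherwise approach slowly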


If the dangerous situation is a flash flood or landslide, stopping may be entirely the wrong response. The entire problem with these rigid failsafes and the whole idea of AI-driven cars is this idea that a programmer can predict every situation the car will encounter by extrapolating from the climate and road conditions in places like San Francisco.

One project I worked on was a safety-critical system that almost got sunk by an overly simplistic fault response. The device would reboot if it detected any sensor readings out of range. One day someone put a bag over it while it was delivering concentrated oxygen from the air, and it rebooted because the ambient O2 sensor was reading well below what anyone could reasonably expect were the machine not enclosed. The patient had to be bagged while the machine came back on.
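
As a rough sketch of that failure mode (all names and thresholds here are hypothetical, not the actual device's code):

  AMBIENT_O2_RANGE = (19.5, 23.5)   # percent; normal outdoor air

  def naive_fault_check(ambient_o2_pct, reboot):
      # Naive policy: any out-of-range reading triggers a reboot, even when
      # the reading is explainable by context (e.g. a bag over the intake).
      lo, hi = AMBIENT_O2_RANGE
      if not (lo <= ambient_o2_pct <= hi):
          reboot()   # drops oxygen delivery mid-treatment

  def safer_fault_check(ambient_o2_pct, delivery_active, alarm, reboot):
      # Less brittle: alarm on a suspect reading, but only reboot when not
      # actively delivering, because stopping delivery is itself a hazard.
      lo, hi = AMBIENT_O2_RANGE
      if not (lo <= ambient_o2_pct <= hi):
          alarm("ambient O2 reading out of range")
          if not delivery_active:
              reboot()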

You simply can’t predict every emergency situation and you most certainly shouldn’t assume a simplistic single-response model is enough to get the passenger to safety. Humans are really good at inferring correct behavior socially and it would be much better for an AV to watch what other drivers are doing to figure out how to handle unexpected situations. To the degree an AI-driven system is unable to do this it is completely unsuited for use on the road.


I think many of us aren’t used to working on anything actually critical to human lives.

And I wouldn’t be surprised if quite a few of those who are have never actually fully understood what that means.

I certainly don’t.

But then, there’s a reason I stay away from life-critical systems.


Who are the significant minority?

This is the same claim always given by the AI boosters. Who sees firefighters fighting a fire and decides to just plow through and make matters worse?

Because by "significant minority" what you are saying is that 2 or 3 out of 10 human drivers do this.


Exactly. The fact that some humans are drunks or psychopaths does not mean that AI mistakes are going to just be accepted. We send people to prison when they make those kinds of mistakes, what do we do when machine learning models kill while functioning entirely normally?


Send a low level engineer to prison, of course.


That's actually not as wild as you think, because that's what happens when a building or bridge falls. The contractor that built the bridge, just like the company that designed it, the agency responsible for inspections and maintenance, etc, is also liable for damages, including prison if it's deemed so by the courts.

Of course, software engineers get treated with kid gloves, otherwise, everyone just freaks out and cries and cries, because their robotic utopia dreams are meaningless nonsense.

Apparently, smart enough to code, but not smart enough to take responsibility for their actions.


> We send people to prison when they make those kinds of mistakes, what do we do when machine learning models kill while functioning entirely normally?

Assess a massive fine. Money is the lifeblood of a corpo.


How do we know we can make the machines smart enough to detect the dangerous situations it is incapable of handling on its own?

It's just a bit too perfect, no?


How do we know we can make humans do it?


What about a dangerous situation like a criminal trying to enter the vehicle where the right response is to avoid them and drive away?


I think never is a strong word. You genuinely believe that we'll never create an equivalent or even comparable intelligence to our own artificially?


Even if one didn't accept that AGI will eventually be developed, I agree with you that "never" is too strong. Computers perform many tasks now that were once unthinkable, and I'm pretty confident they'll eventually drive cars well. Whether that time is remotely close however is another question.


I think there's no stage at which you trust such an entity with something like driving until it wants a robot to drive its car so it can do other things than focus on driving. In the same way you don't trust a 5 year old with a loaded gun, you don't trust real synthetic intelligence with something like driving you safely. Not until it reaches the level where it wants equal rights, suffrage, and generally to be treated as alive. That's way off into science fiction land.


I don't get why people think intelligence requires things like a desire for suffrage...

There is no particular reason why advanced planning, modelling and pattern matching ability should need human-like goals.

Heck, even counting votes would be quite impossible (how much divergence does a copy need before it gets its own vote? What if not all modules/nodes are copied?).

Also, the anti-suffrage women seem to have been erased from history as far as popular discourse goes... guess the Women's National Anti-Suffrage League isn't taught in schools (support amongst women for extending the vote was very lukewarm, becoming a majority only after the law changed).


> There is no particular reason why advanced planning, modelling and pattern matching ability should need human-like goals.

Because driving requires the ability to understand what other drivers are doing. It requires a theory of mind, which means understanding human goals.


Moving from point A to point B is the goal of humans while they drive. You don't need "human like" understanding of human goals. You do need to account for the stupidity of people though.


Not true, at least for any skilled driver. N=1, and my prior racing training and experience is an advantageous influence, but I am constantly assessing the intent, attitude, skills, etc. of other drivers to adjust my actions.

Of course it is trivially true that we all intend to go from point A to B. But that is almost irrelevant. It is the million micro-decisions on the trip that count.

Without seeing the human, I assess in seconds whether the other driver is unusually high or low-skills, focused or distracted or impaired, polite or rude... and this constant assessment, with human understanding, of (others') human goals and attitudes is a key element of driving skills, and yes this is basically all covered in your exception of needing to account for stupidity of people. That exception is just bigger than you think.


Human drivers communicate with each other through subtle cues, body language, etc. Moreover, pedestrians, children playing in the street, pets running onto the road chasing a ball, construction sites, parades, protests, are all things involving intelligence and theory of mind that you can't reduce to "the goal is to point A to point B."


We can try to reduce these things. Most things can be reduced to "obstacles". If obstacles are behaving the way we predict, then they are logical; otherwise they are flagged as illogical. When I engage in "subtle cues" or "body language" while in traffic I never trust it. If someone starts to drive illogically I never try to predict what they will do next, I just flag them as illogical and prepare for the worst.
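
A toy sketch of that flag-and-assume-the-worst idea (structure and thresholds are hypothetical, not from any real AV stack):

  from dataclasses import dataclass

  @dataclass
  class Agent:
      predicted_pos: tuple   # (x, y) where we expected them to be
      observed_pos: tuple    # (x, y) where they actually are
      flagged: bool = False

  DIVERGENCE_LIMIT_M = 1.5   # hypothetical tolerance in metres

  def safety_margin(agent: Agent) -> float:
      # Flag agents whose observed motion diverges from our prediction
      # ("illogical"), and widen the buffer we keep around them.
      dx = agent.observed_pos[0] - agent.predicted_pos[0]
      dy = agent.observed_pos[1] - agent.predicted_pos[1]
      divergence = (dx * dx + dy * dy) ** 0.5
      if divergence > DIVERGENCE_LIMIT_M:
          agent.flagged = True               # stop trusting predictions
      return 6.0 if agent.flagged else 2.0   # metres; prepare for the worst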


There is a way to solve all these issues: keep humans (and other animals) away. This is how it's done with railroads. Unfortunately, there's not a whole lot of advantage in eliminating the engineer of a train since the ratio of engineers to passengers is so small to begin with.

I think instead of flirting with this absurd fantasy of self-driving cars we should be doing more to invest in railroads and other forms of mass transit so that people don't need to drive at all. Europe and Asia are far ahead of North America in this regard.


Mass transit doesn't get you exactly where you need to go. In areas with lower population density, it's especially unprofitable and difficult to implement -- and even if communities can afford it, it will deliver people even FARTHER from where they need to go.

I'm a transit advocate, but there are some fundamental issues that need to be resolved first, especially in North America.


Even in Europe, there are a lot of places you can't get to or get to easily without a car. I've done long distance walks in England, for example, and once you get beyond cities and larger towns, it's often the case that there may be a once or twice a day bus.


You can understand something/someone without agreeing with them and having the same goals.


> It requires a theory of mind, which means understanding human goals.

Like chess?


Chess doesn't require understanding human goals if you're an AI, just running through the space of winning moves that your training encountered. An AI doesn't think "ah, rook to E4, knowing Kasparov, he will then castle and I can do X". It'll just go "lol I encountered 3 486 864 games with this particular board pattern and 49 652 led to a victory lemme pick one for the lulz".

Chess is really just about searching through as much of the space as possible, and you can take a few seconds. And believe me, there's a whole lot more stupid behaviours you can encounter on the road while you have to react in less than a second.


> It'll just go "lol I encountered 3 486 864 games with this particular board pattern and 49 652 led to a victory lemme pick one for the lulz".

That isn't really an accurate description of how the modern DL bots work. They don't need to reference any database of past games while they play.


Chess is a perfect information game. Both opponents can see the entire board at all times. Neither reality nor even driving is anywhere close to that. Weather, time of day, climate, seasonality, and other drivers throw constant and voluminous amounts of unpredictability into the mix.


> Neither reality nor even driving is anywhere close to that.

This reminds me of the kinds of intuitions people had about Chess and Go. In retrospect they seem silly, but it made plenty of sense to them at the time. The fact was that there was a solution that machines could use that humans couldn't use. Naturally, a solution that humans couldn't use was hard for them to anticipate being effective.


Chess AIs don't need to develop an understanding of the goal, they're hard-wired with the rules of the game and its goal-states.

As an experiment, try making a chess AI without explicitly giving it the rules of the game and see how it performs.


I think that's already been done: https://en.wikipedia.org/wiki/MuZero


Almost, but not quite, exactly unlike chess.


So not like chess, but like how chess was supposed to be?


Well, "never" is a strong enough word to extend even into science fiction land (or 100, 1000, 10000, 100000 years into the future and beyond). Hence the reaction you got.


Or until we are able to change the nature of driving to make it require less creativity. As a simple example, I suspect a road network where all cars were self-driving would probably be an easier software problem.


If Only.

Single-track railroad like in airport terminal-to-terminal transport? Fine.

Out on the open rail network? Not even close. And that is a LOT more constrained & predictable than an open road network.

Getting to all cars being software-driven seems as much in the near-mid future as was the idea of having full vehicle-to-vehicle telemetry & communications - i.e., sounded like a great idea until it crashed on the rocks of practicality (and inter-corporate politics?); not likely happening in a relevant time frame.


I personally don’t. We are not guaranteed endless progress or a good future.


> endless progress or a good future

The optimist believes progress will halt at a convenient moment. The pessimist fears otherwise.


People can't even manage dangerous situations all the time. They panic, they freeze, and they make mistakes.

Driverless cars don't need to work perfectly all the time. They just need to injure fewer people than human drivers.


It will definitely happen, and likely (>50%) within the next 10-15 years.

We need true GAI to do this, of course, and the question is: will a "real" GAI be willing to drive a car? How much will we have to pay it?


I think your "of course" is glossing over too much. It is very, very far from a given to me that GAI is required in order to solve self-driving cars well. I think it's reasonably possible we'll have good self-driving cars within 15 years (50% sounds about right to me for the longer end of that time horizon), but I highly, highly doubt we'll have GAI in anywhere close to that timeframe.


GAI is a pipe dream. There is no technology that we can even conceive of—let alone build in the next 10-15 years—that passes as GAI. Our current AI is based on statistical inference; GAI will need to be able to do far more than infer from data and interpolate actions.

No, we will never have GAI.

---

I know never is a long time, but I can comfortably predict this because there are not only technological problems with GAI, but also philosophical problems. The way we define intelligence is such that if we find or invent something that rivals human intelligence we immediately redefine it to exclude that aspect.


I'm trying to understand the worldview that leads to what you believe. As far as I can tell, you're assuming there is a supernatural aspect to the human mind.

According to my worldview, humans are just robots built out of DNA, proteins, etc. The idea that machines couldn't do everything a human can do (and more) doesn't make any sense to me.


I’m sorry but I don’t have that world view. I must not have been clear enough (English is not my first language; and I’m not even super clear in my native language).

I actually don’t believe in the superiority of the human mind; I think the word intelligence is actually a scientific distraction. Humans are remarkably—but not uniquely—adept at communicating with each other and manipulating our environment for our benefit (most of the time; but sometimes to our detriment, e.g. by causing a climate disaster). My statement about intelligence was not to share my belief about humanity, but a statement about human beliefs about our humanity (a theory of mind of sorts). I believe that humans are so human-centric that we will always evaluate intelligence in human terms. As such nothing will ever supersede it, as we will simply alter our former beliefs in order to keep our mind at the center of the universe.

Now as for machines that can do everything that humans can do (and more), I still consider this a pipe dream, even if we disregard the word intelligence. This is not because I believe the human mind is superior to machines; it is more because we haven’t conceived of how we would do this. Statistical inference has its limits: even with all the computing power in the world, it will never do what specialized machines like our brains have evolved over millions of years to do (well, maybe if we allow our machines to freely evolve and procreate and set them loose in a dynamic self-sustaining environment; but that won’t provide any benefits for us).

I also have my doubts about the utility of AGI. People state that full self-driving cars are easier with AGI, but I haven’t seen any proof of this statement (nor for the utility of AGI-driven cars for that matter; we’ve had self-driving trains for decades without AGI). With traditional machine learning models, there is tons of utility in the data analysis in all sorts of fields (just like there was with factor analysis and linear regression before it), but with AGI I fail to see what we will get from a vague notion of “can do everything humans can do”, which doesn’t apply to traditional machines that are specifically programmed for that task. In short, I don’t even think there is a market for AGI.


I may have read too much into your original post. I think I get what you're saying now.

We apply the concept of intelligence to other species, so I don't see why we couldn't apply it to machines. If people want to redefine intelligence so that humans stay on top, then they are wrong to do so. There is undeniably an objective component to it, even if there are many cultural factors at play as well. If you don't like the word "intelligence", replace it with "competence", "productivity", "performance", etc. and the same argument will apply.

You raised two other issues: the possibility of AGI and the utility of AGI.

I think we have good reasons to believe that it's possible. Three possible future scenarios: (1) we get to the point that we can simulate brains to a high degree of accuracy in software, (2) we get to the point that we can build artificial brains out of neurons made of silicon, (3) we get to the point that synthetic biology is advanced enough that we can engineer new kinds of intelligent organisms. These might be science fiction by today's standards, but they show that there is no issue with conceivability. As an aside, the human brain runs on relatively low power consumption and operations per second, so computational power isn't a bottleneck. We have enough compute, we just don't have the right algorithms yet.

As for the utility of AGI, I strongly disagree that there is no market for it. By definition, an AGI can do anything that a human can do. There is already a job market for humans. As soon as AGI can compete with humans in domains like management, engineering, scientific research, education, etc. I don't see why there won't be a huge economic incentive to adopt it.


> As an aside, the human brain runs on relatively low power consumption and operations per second, so computational power isn't a bottleneck.

You hit the nail on the head here (pun intended). Brains (and other adaptive biochemical processes which reinforce behavior) are way more ingenious than to be described as a simple computer. It is not just neurons firing in patterns with synapses which can reinforce certain pathways. No, there is a whole biome in our bodies, from our guts, our muscles, through hormonal systems and yes, the nervous system. Our behavior is also influenced by bacteria and other microorganisms that don’t even share our DNA (but sometimes actually interact with our DNA). This complex machinery has evolved over millions of years, and I don’t think there is any way for us to simulate it. Even if we look outwards, our behavior is just as much influenced by each other as it is by our brains. So for a successful simulation you’d need to “raise” your machine in a similar manner, allowing interactions with a whole community (or more) that treats it as equal. Our brains are merely a tool which encodes this interaction as learned behavior. Just building a brain will miss all of that and give us a lousy model.

In statistics we learn that “no models are correct, but some models are useful”. I’ve always taken this to mean that there is not always utility in a 100% accurate model. I think this applies to AGI. If we create a machine that can do everything that a human can, and better, what can we use it for? You claim increased competence, productivity, and performance, but we already have machines that do that. And we humans have found a way to work pretty well with those. We even have fully automatic vehicles that transport millions of people every day (e.g. in Vancouver B.C. and Copenhagen). It doesn’t need to be better than humans in everything, just accelerate and decelerate in the right places; it doesn’t need to know the airspeed velocity of an unladen swallow, it will only need to know if something is blocking the tracks. Traditional statistical inference will make these systems better and cheaper, and they will benefit humanity; there is no need for them to have generalized knowledge beyond their immediate utility, so they will never be given it.


I agree with you in the sense that I think many problems don't require general intelligence once you have a solution. However, to come to those solutions it took decades or centuries of intelligent humans working on those problems. The potential benefit of AGI isn't in automating the problems we already know how to solve (although it might help refine our solutions). It's in accelerating progress in domains where there are unsolved problems.

Sometimes we get hung up on comparing AI to humans. I wouldn't want an AI that does math and physics in the same way that humans do. Our brains aren't very good at those things. I'm more interested in the potential for AI to find new, more effective ways of working in these areas.


> In short, I don’t even think there is a market for AGI.

AGIs by their own general purpose nature can improve other AGIs, possibly including themselves. AGIs can improve other less general purpose AIs.

Either you could ask for the AGIs to improve what you need for you, or the AGIs might do it by themselves.


The simple difference is humans have the biological goal to replicate their genes. That is the root source of intelligence. Software does not because it has not evolved. If you want real AI then start with simulating life in a simulated environment as complex as the real world for many thousands or millions of generations.

Humans may be like robots but their programming is not statistical pattern matching or Boolean logic. Rather it is the intelligence selected for by evolution to achieve the goals of an individual organism.

How does this not make sense?

Intelligence does not just happen absent goal directed evolution.


> How does this not make sense?

As far as I can tell, your points don't apply to what I said. I'll quote myself:

>The idea that machines couldn't do everything a human can do (and more) doesn't make any sense to me.

In other words, it doesn't make sense to me when I hear impossibility arguments about AGI that appeal to something special about humans. I'm not making a claim about software and hardware in 2023. I'm saying in principle, there is nothing stopping us from building artificial systems that are just as complex and capable as evolved biological organisms, other than our lack of knowledge of how to build such systems.

>Software does not because it has not evolved

Software does evolve. To get evolution you need three things: replication, mutation, and selection pressure. We replicate, mutate, and select software all the time. I don't think you need a large scale simulation to get evolutionary dynamics.
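
A toy illustration of those three ingredients in a few lines (nothing domain-specific assumed, just bit-string genomes):

  import random

  def evolve(fitness, genome_len=8, pop_size=30, generations=100):
      # replication: a population of random bit-string genomes
      pop = [[random.randint(0, 1) for _ in range(genome_len)]
             for _ in range(pop_size)]
      for _ in range(generations):
          # selection pressure: keep the fitter half
          pop.sort(key=fitness, reverse=True)
          survivors = pop[:pop_size // 2]
          # replication + mutation: children are copies with a random bit flip
          children = []
          for parent in survivors:
              child = parent[:]
              child[random.randrange(genome_len)] ^= 1
              children.append(child)
          pop = survivors + children
      return max(pop, key=fitness)

  best = evolve(fitness=sum)   # e.g. maximise the number of 1-bits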

>Humans may be like robots but their programing is not statistical pattern matching or Boolean logic

That might be true, but I'm also skeptical. Part of the problem is we don't really understand intelligence very well. If we really understood what makes humans capable of intelligent behavior, we would be able to replicate it in artificial systems. I don't see a fundamental barrier other than having the right algorithms.


You might be thinking about evolutionary learning (or more broadly evolutionary algorithms), which is a machine learning method different from—the more common and more successful—artificial neural networks. Several implementations actually exist, and I think some are even quite successful, but none that I’m aware of have been shown to have AGI capabilities.


The question of if an AI can be regarded as a sentient being has already been debated ad nauseam in science fiction (e.g. Data in Star Trek).

All that's needed is someone to achieve GAI at a supercomputer level. From then on, it's just a matter of scaling it down. Your iPhone has more compute power than a supercomputer 30 years ago, after all.

And what's also a way to "cheat" is fusing computers and organic matter. Brain-in-a-vat style, or, for yet another Star Trek reference, Voyager's organic compute network (which IIRC even suffered a pathogen infection at one point). And that is definitely possible in the next 10-15 years. Give Musk's Neuralink enough apes to euthanize and it will get done.


GAI is not limited by computing power, but by theory. Yes, we will be able to make better inferences by feeding more data into our models, supervised learning will be able to work with thousands of parameters, etc. This will lead to all sorts of new discoveries in many fields, including biochemistry, astrophysics, neurology, etc. It will also bring us a bunch of new fun toys to play with, including better and more capable robots, and it will also lead to better engineered social systems (including traffic engineering; but also public transit, and—yes—self driving vehicles—but we already have self driving trains, so it is not as big of a deal as you think it is).

However this misses my point about AGI. It is not about computing capabilities but theory. AGI will have to evaluate situations that have never arisen before without having data that directly directs it. I don’t see how statistical inference can do that, and as such we will need entirely new methods of training it. Like I said, we haven’t even conceived of how an AGI technology would work; there are no algorithms which will work in theory even with unlimited computing power. Contrast this, for example, with quantum computing, which has a bunch of algorithms demonstrating its capabilities. With AGI all we have is “perhaps our cars become better at driving themselves”. I’m not buying it.


> Your iPhone has more compute power than a supercomputer 30 years ago, after all.

In 30 years we will not have an iPhone-sized device with the power of today's supercomputers. Transistor scaling is in its final stages. We'll be lucky to get 2-3 more process nodes and that'll be it.


This assumes we don't use anything but transistor scaling for the next 30 years in mobile computing.


Even if we find a novel semiconductor that can scale 1000x better than silicon transistors (say DNA semiconductors), then we are still limited by classical computing. Meaning our most intelligent systems are still just good old statistical inference (I say still, but statistical inference is really powerful, and getting orders of magnitude better at statistical inference means a world of amazing discoveries).

Perhaps quantum computing will enable new kinds of inference; then we will need to scale superconductors like we did transistors in the past 75 years. That is possible (not in the next 10-15 years though; 30-40 years is more realistic). However I don’t think this will lead to anything more than amazing statistical inference. People theorized artificial neural networks way back in the 1940s and we already had algorithms for it in the 1970s. I’m not aware of any algorithms, not even theoretical ones, that give us AGI, on any computing device (not even quantum computers).


> the way we define intelligence is such that if we find or invent something that rivals human intelligence we immediately redefine it to exclude that aspect.

And what makes you so sure the opposite won't happen? E.g. GAI will be achieved earlier by excluding some aspect of human intelligence that some people deem essential. The whole "problem" is that we can't agree on what constitutes intelligence. I predict that it won't be long before AI systems can score 200 on standard IQ tests.


>I predict that it won't be long before AI systems can score 200 on standard IQ tests.

I don't know of a "standard" IQ test that goes past 160 or so.


Have you tried any of the state-of-the-art out there?

ChatGPT is easily circumvented to violate all its own guardrails with a few instructions.

Knowledge without understanding.

It clearly doesn't understand its own rules well enough to realize that, when it is asked to violate the rule "don't do C", being asked to do A->B->C is the same as being asked to do C.


I dunno, the robots in Terminator seemed to have some self-awareness. Why can’t we apply the same to cars?


Your comment, I am not sure if intentionally, encapsulates what is really going on with self driving cars. I.e. people assume fantasy or fiction can become reality with enough effort. This is not always true...


> I.e. people assume fantasy or fiction can become reality with enough effort. This is not always true...

More or less hit the nail on the head :)


Sounds like having support for interpreting human gestures will solve some of the problems. But how do we distinguish real emergency scenario from fake one?

Such problems will only scale up as L5 cars take on more public roads, but I wonder how long it will take, or what changes will have to be made to current road signs and critical city infrastructure to support them.


> But how do we distinguish real emergency scenario from fake one?

A car shouldn't make that call. It should err on the side of safety and assuming the emergency is real.

When you see flashing lights on the highway coming up behind you, do you think "I bet that ambulance is faking it so they can get home faster", or do you just move over?


Agreed - faking an emergency is highly frowned upon and I'm betting it's a crime in most places. There's even a similarity with the following aspect of the Geneva Convention, which is that pretending to surrender is so harmful to norms that it is literally a war crime. https://en.wikipedia.org/wiki/Perfidy

"In the context of war, perfidy is a form of deception in which one side promises to act in good faith (such as by raising a flag of truce) with the intention of breaking that promise ... Perfidy constitutes a breach of the laws of war and so is a war crime, as it degrades the protections and mutual restraints developed in the interest of all parties, combatants and civilians. "

Pretending to surrender is something that's shown a lot in Star Trek and similar shows, and Pirates of the Caribbean, but in our real-life world of 2023 it's a war crime to pretend to surrender, and it's really not a nice thing to do because if some people sometimes pretend to surrender, then pretty soon no one will trust surrender attempts and people who wanted to stop fighting will needlessly be lost.

The idea of criminals faking a medical emergency is similar imho. If I were in a position of determining how much time robbers should serve in prison, I would personally recommend a much much longer sentence if they faked a medical emergency, for exactly this reason.


I saw a video showing perfidy committed by a Russian soldier in Ukraine. Several Russians were coming out of a building surrendering to some Ukrainian soldiers, when one of them came out with a gun and started shooting. All the Russians were killed in the subsequent firefight, because one of them ruined it for the rest. It may seem like a clever ruse in movies, but in real life it gets people killed.


It'd be funny if it weren't so terrible. At this point it would be easier to list the war crimes not committed by the Russian army; they seem to be intent on speedrunning the Geneva Conventions. One can only hope one day there are trials in e.g. The Hague to bring all war criminals to justice.


Russians just have to start working on advanced weapons and then they'll be excused from the tribunals like Unit 731.


> One can only hope one day there's trials in e.g. The Hague to bring all war criminals to justice.

I hope that as well, but I think neither the US, Israel nor China will stand idle next to a nuclear power getting dragged in front of the ICC - not with the numerous counts of crimes each of these countries could reasonably be charged with. The Israelis may be the only ones being able to get away with a lot of their stuff given that their opponents continuously enact crimes as well (e.g. hiding military command structures under hospitals and schools), but the US and China? They can't pull that card.


> It may seem like a clever ruse in movies, but in real life it gets people killed.

To be fair, the point of the ruse (in real wars; in heroic fiction this sometimes varies) is usually to get people killed, too; the question is just which people get killed.


It gets more people to die on ALL sides which is why it's considered a shitty thing to do, even if it's technically a "valid" tactic in some anarchic philosophies. The use of this ruse will cause people to stop accepting surrender which is BAD because it means that the people who die will be the average Joe and average Ivan in different countries who really don't want to kill anyone and just want to go back to their family and their life. These goodhearted people just want to go home and would gladly surrender. Imagine this thought experiment. I'll use your pen name dragonwriter as the name of a hypothetical super-human bandit / assassin / mercenary / gun for hire who sometimes does jobs like assassinations and Mission Impossible style burglaries of royal treasure. Imagine the DragonWriter bandit has no qualms about killing and so anyone who stands in his way gets shot with the intent to kill, AND now there are some rumors that a group of cops had DragonWriter surrounded and so he surrendered to them but then he pulled out a hidden sword and spun in a circle chopping them all in half. Now, any time DragonWriter tries to surrender in the future, his surrender won't be accepted, but what's even worse is: his action that bought him maybe a 5 second advantage (vs a million different other ways he could have won or fled the fight), and this tiny temporary advantage by the ruse has imposed a cost in exchange:

This one infamous sword fight has now poisoned the well with respect to the global norm. Next time there's some other totally unaffiliated hat-wearing super-powered bandit who's stealing the Ark of the Covenant or the royal diamond or something, if THAT bandit tries to surrender, there's a high chance that person might be killed, because their surrender is distrusted since DragonWriter changed the norm and now every fight between supers and cops is now on "shoot to kill, no prisoners, take no quarter" stakes. Imagine how many times a famous super like Batman would have been killed if the authorities never accept a surrender because the surrender norm is ruined. It can be repaired but that might take a decade or more to repair the trust that was broken.

With strong norms about surrender, red cross medics being protected, etc., we can have things like the miracle of the WW1 Christmas Truce which help the war to be a little bit little less of pure hell, for all the pure-hearted or indifferent people dragged into being a soldier, perhaps even drafted or press-ganged, to be the disposable cannon fodder for a bloody and disease and gas drenched war for no good reason more than rich politicians' egos.

https://en.wikipedia.org/wiki/Christmas_truce

These people just want to go back home to their wife or their mum and would gladly gladly surrender if given the chance, but that's not possible if someone ruins the norm! It's like, for example even if someone is a super-villain for example, they should really not take human lives of innocent people, or destroy irreplaceable cultural heritage sites, or damage/burn an ecological area like a rain forest, because those, once lost, are forever gone, and leave our world impoverished.


The number of criminals faking medical emergencies to avoid having to appear in court is pretty high. And of course three days later they're right as rain. It's hard to act against because of the risk of punishing someone that really needs it.


Faking your own illness is very different to faking an ambulance with lights and sirens racing through town.


I'm pretty sure they meant faking being an ambulance or a firefighter or a cop to rob someone.


Ah sorry, misinterpretation on my end. You are probably right.

But: fake cops robbing people is pretty common too. Unfortunately.

Even more unfortunate, real cops doing it also happens.


Yeah, and I bet that it would become even more common if cars fell for it every time. A predictable victim that always falls for the same trick in the same way is much more attractive than a human that may respond in many different ways, that reads the news and may become more cautious if there is a crime wave.


But then you have essentially enabled everyone to wave your car to a stop, which could be very convenient for certain types of people.

I think the simple truth is that we are still a long long long time away from fully autonomous cars to be safely deployed anywhere except on very specific paths that are mostly closed off to everyone else.

But they could be tremendously useful there, for instance driving buses or airport shuttles and that kind of thing.


So this is possible today: I can walk out on any street and redirect traffic. With or without my reflective vest, drivers will generally listen to me. And self-driving vehicles must as well.

Note that when I am doing traffic direction with appropriate authority, most drivers will never have heard of the organization I'm a part of (a local CERT team), nor have any real way to vet the authority I have. The authority they can determine I have is mostly that I'm doing it and no police officer has come by to tell me to stop doing it.

(Note that CERT members do not and should not self-deploy, mind you, and if you choose not to heed our directions, we almost certainly can find a cop car to come let you know you screwed up.)

Also bear in mind, many people directing traffic aren't public safety personnel at all. Construction workers are private employees, events have their own staff which direct traffic in various contexts, including public roads.

There's no definitive way to determine that someone is authorized to direct traffic, cars just need to obey them.


Human drivers are smart, and will generally judge your appearance, confidence, and context to determine whether or not your actions are legitimate. And you likely always pass these tests.

Imagine a teenager, dressed in street wear, who jumps into the middle of a street to stop traffic in a way that has no explanation given the scene, while their friends laugh on the sidewalk as they observe.

Or a group of men in masks stopping a car in an alley as their accomplices surround the vehicle.

Any human driver will recognize these scenarios as illegitimate. It isn't just the act of directing traffic that is judged in isolation.


In basically every example you gave, you still have to stop until people move, though. The result is the same, despite everything else. The only one that's a bit iffy is if you're being carjacked; that's an exception.

Legitimacy isn't really the problem; it's the circumstance of a human being in front of your car that prevents you from moving forward, and beyond getting out of your car and talking to them, or just waiting, you don't have many options.

I agree with the intent of your point, but what's the alternative? You can't just keep driving if someone is standing in the middle of the road.


Those were two examples, but they're not exhaustive. Traffic direction can (and often does) happen without someone blocking the vehicle.

In the case of an illegitimate request, a human driver might question the request and take a different action instead.

Are you being directed to run a red light? If you judge that someone is maliciously asking you to do so, you probably will choose to remain still instead.

Are you being directed to stop? If you judge that someone is maliciously asking you to do so, you may choose to turn around, reverse, turn into a parking lot or otherwise remove your vehicle from the situation. You don't have to run people over to refuse a command to stop.

And if you were completely blocked from moving, you'd probably call the police.

My point is that "always follow hand signals" is trivially problematic. I would expect that if self-driving cars followed any hand signal they saw, you'd see kids doing it for the lulz on the sidewalk in front of their school. ..."traffic jam challenge". After all, someone is less likely to act maliciously towards a human than they are a machine.


Okay, so your point was more about the specific directions being given, than the presence of a person in the road. Overall I agree.


One reason carjacking is rare is because people don't foolishly stop for obvious carjackings. If an autonomous car has trouble telling the difference between a carjacker and a construction worker, then it will be much more likely to get carjacked.


My brother and his teenage friends redirected traffic into their school grounds to prove that they could do it. They also passed all 'sure, looks legit' tests.


Humans aren’t perfect, but their BS detectors are better than nothing.

The more important test scenarios are life threatening or unreasonably disruptive situations rather than harmless pranks.


PKE transmitters given to firefighters with some mechanism to cycle private keys on autodriving taxis weekly.

Nowhere near perfect, but that makes it a lot harder than just getting a spike strip or a hand gun or a demolition derby car. We’re talking about people willing to risk a decade in prison to “do criminal things to a car and/or passengers”. How much work do you think they want to put into disabling a car with a remote? Why not just wave a gun in the driver’s face at a stop light?
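
One way to read that proposal, as a sketch only (a real deployment would need proper key distribution, replay protection, and revocation; everything below is made up):

  import hashlib, hmac, time

  WEEKLY_KEY = b"rotated-out-of-band-every-week"   # placeholder shared secret
  MAX_AGE_S = 5

  def make_beacon(command: str) -> bytes:
      # Firefighter's transmitter: sign a timestamped command.
      msg = f"{command}|{int(time.time())}".encode()
      tag = hmac.new(WEEKLY_KEY, msg, hashlib.sha256).hexdigest().encode()
      return msg + b"|" + tag

  def taxi_accepts(beacon: bytes) -> bool:
      # Taxi side: reject forged, corrupted, or stale beacons.
      msg, _, tag = beacon.rpartition(b"|")
      expected = hmac.new(WEEKLY_KEY, msg, hashlib.sha256).hexdigest().encode()
      if not hmac.compare_digest(tag, expected):
          return False
      command, _, ts = msg.decode().partition("|")
      return command == "STOP" and time.time() - int(ts) <= MAX_AGE_S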


> Note that when I am doing traffic direction with appropriate authority, most drivers will never have heard of the organization I'm a part of

Reading between the lines it doesn’t seem like there’s a lot of communication to emergency services etc on how to deal with autonomous vehicles.

Even the police seem to be operating under misconceptions about what the vehicles can and can’t do.


So there's no monolithic entity that handles public safety in any given area. Overlapping jurisdictions of police, fire, public works, utility companies, construction companies, volunteer orgs, etc. all will end up on the streets for different reasons.

If self-driving cars cannot cope with this, they should not be allowed on public roads. Very little has convinced me they should currently be street legal.


Isn't it the vehicle operators who keep operating the vehicles while ignoring that there are things the vehicles must be able to do in certain scenarios to be suitable for operation?


>There's no definitive way to determine that someone is authorized to direct traffic, cars just need to obey them.

Exactly! But there still is a way. You don't obey anyone in any circumstances because that would be dangerous. People have a feeling for it, computers don't yet.


I'd obey such traffic commands from just about anybody in plain street clothes, unless they looked obviously crazy or if I was in part of the world where car jacking is common. At the very least I'd stop and ask them what was up. Maybe there is some dangerous condition ahead and they're trying to warn me.

Does the computerized car understand the meaning of the words "Turn back because the bridge is out"?


I miswrote, my meaning wasn’t that I would stop for nobody, but that I wouldn’t stop for just anybody in just any situation.

Essentially the same that you said.

This is something that people are very good at judging but software isn’t.

I’m trying to say that until we have figured out AGI, autonomous cars driving among people are very dangerous.


I bet you’d eyeball it though, right? Like if someone was telling cars to stop, looked nervous, and had a hand in a large enough pocket to hold a gun, I might head in the direction they were sending people (depending on visibility I guess) but I’d probably not slow down.


>most drivers will never have heard of the organization I'm a part of (a local CERT team)

Will you have to file a CVE after the incident?


As a sysadmin and a volunteer responder, I do find the fact CISA and FEMA each have a CERT program that are completely different things pretty funny.

Community Emergency Response Team: https://community.fema.gov/PreparednessCommunity/s/welcome-t...


I think the point is that humans do make this judgement call. We know what real emergency vehicles and real emergency personnel look like, and we know what a crazy person waving their arms looks like.

When some idiot in the opposing lanes is waving for me to take a left across multiple lanes of moving traffic, I know that I should ignore their idiotic traffic direction. When there's an emergency situation with a firefighter telling me to continue through a red light, I know that I should follow the directions.


I hope you're not suggesting like a hater that a Tesla with FSD (available next week, I promise) isn't capable of performing a snap psych eval of any random person standing at the side of the road.


I am suggesting that a car which follows all human gestures is not better than a car that follows no human gestures.


I think the potential issue is that the driver of a non-emergency vehicle could start gesturing in particular ways that could trigger the autonomous car to think it was an emergency vehicle.

So then you have to teach the car what all the various kinds of emergency vehicles look like, in every jurisdiction. That's hard enough, but then what happens if it's an unmarked police car, for example?


> When you see flashing lights on the highway coming up behind you, do you think “I bet that ambulance is faking it so they can get home faster”, or do you just move over?

“You” in the specific or general sense? Because, my observations of highway behavior suggest that actual humans are…far from ideal at this.


I agree with treating it as real. Then, if the car has an adult passenger, you can ask them for their judgement (and responsibility for the outcome!)

If the car has no passenger, then stop and move slowly as required/indicated. Sure, it may be a car thief, but without a passenger, it is simply another car theft.


To spell it out, if an SDC Taxi stops for anyone waving frantically, how do you prevent robbers and rapists from routinely harvesting their passengers?


Exactly how you stop people from flagging a human-driven taxi with pleas for help and then robbing or raping the passenger. This doesn't seem like a problem specific to self driving.

Relevant xkcd: https://xkcd.com/1958/


A Taxi has a sober human driver with professional experience to handle such situations.

A piece of software with some cameras and actuators is really not the same.


Self-driving car AI challenge: run into the crosswalk and start waving every car through.


>> but I wonder how long or what changes will have to be done in current road signs, critical city infra to support them.

Unfortunately that attitude is common in the industry. The notion that the rest of the world needs to provide extra, up-to-date information for the shitty AI so it can function.

I have news for ya. Emergency road repairs, fires, etc are not going to make it into your world database in a timely manner so a dumb vehicle can navigate the situation. It will all be over before anyone even thinks about fixing your system for you.


There's a connected vehicle protocol (DSRC) which broadcasts all kinds of metadata about your vehicle. There was stuff like "transaxle temperature" and "steering angle" in there, if I recall, so I'd be surprised if there wasn't also "emergency flashers engaged".

Perhaps autonomous vehicles should be sensitive to that field, signaling for manual control when they're too near a vehicle which has it enabled. That way any vehicle--not just the fire department--can create a "human control only" field by activating their emergency flashers. Perhaps emergency vehicles could be recognized such that they create a much larger field.
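
A rough sketch of how an AV policy layer might consume such a field (the message layout here is illustrative, not the actual DSRC / SAE J2735 structure):

  from dataclasses import dataclass

  @dataclass
  class NearbyVehicleMsg:
      distance_m: float
      hazard_flashers_on: bool
      is_emergency_vehicle: bool

  FLASHER_RADIUS_M = 50.0      # assumed "human control only" field
  EMERGENCY_RADIUS_M = 200.0   # assumed larger field for emergency vehicles

  def should_request_manual_control(msgs: list) -> bool:
      # Signal for a human takeover when we're inside the field created by
      # any vehicle running its flashers (larger for emergency vehicles).
      for m in msgs:
          radius = EMERGENCY_RADIUS_M if m.is_emergency_vehicle else FLASHER_RADIUS_M
          if m.hazard_flashers_on and m.distance_m <= radius:
              return True
      return False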


No need for human gestures: just put up a stop sign or a road closed sign. These were likely taught to the model as its ABCs.


It kinda sucks, though, to put the responsibility for this on the first responders, who already have enough to do dealing with the emergency.


As a general principle, one can imagine special signals relevant to autonomous vehicles. But there's probably something of a chicken and egg sort of thing and I have problems with a lot of processes being required to deal with early adopters of this sort of technology.


There are going to be times when someone is legitimately responding to an emergency and doesn't have a stop sign handy.

And there are going to be a lot of pranksters who do, and will use them to mess with so-called self-driving cars.


> Sounds like having support for interpreting human gestures will solve some of the problems.

Add that to the _LONG_ list of issues that will have to be solved until we can even think about releasing autonomous cars in the streets.


Was on backroads and ran into a cop directing traffic. Apparently there was a small fair nearby.

Dude was totally pissed and gesturing every which way wildly. When I couldn’t immediately decode it, he just started swinging even more wildly, turning redder and redder.

Was fairly sure I was going to get shot or arrested, so I just did a U-turn. As I was leaving, the next guy started yelling at the cop that he was blocking all possible routes to his town.


How do people distinguish that? A person is likely to fall for a fake police officer and a fake police car.


Have you seen any of the adversarial neural net stuff? I wouldn't be surprised if it was possible to trick a car into thinking someone is a police officer without formally impersonating a cop


One thing that is often overlooked is that software can only understand what it was trained to understand.


> But how do we distinguish real emergency scenario from fake one?

You don’t. Bad actors can already pretend to be firemen doing fireman things and direct traffic. Human drivers will also be misled by that.

What is more, if you have a high-viz vest, a hard hat, and more confidence than a garden slug you can also direct traffic. Not legally, mind you, but more often than not people will follow your directions. If there is a whole gaggle of fake construction people and they have cones too, they can probably re-route a highway.

Whatever is stopping people from causing mischief (too often) by these “powers” doesn’t seem to be different in the case of self-driving cars than with human-driven cars. In fact the self-driving car is more likely to retain perfect photographic evidence of the mischief maker’s face than a regular human driver.


Teleoperations is a thing.


Simple, just transfer control over to a remote operator. We already do this for delivery rovers (see e.g. LA's coco rovers). Not an impossible problem unless all communication infrastructure is down (solar storm, hurricane, that sort of thing).


Just saying, as another firefighter, this post above is fantastic.

To add: a vehicle driving over a hose also means a vehicle is likely to be “in” the scene operating area. Random cars where we don’t expect them can lead to all sorts of injury.

Granted, as a truck comp guy, I often had to deal with engine guys not leaving room - but first due is first due. ;)


"Pull past and leave room for the ladder"

Words not heard often enough! :-)


Wouldn't even the sudden drop in pressure be dangerous to the crew manning the hose?


Under the right circumstances, yes. Basically there are many, many bad things that can happen if a car drives over a hose, and not very many good things.


Interesting. A couple months ago, there were firefighters at my condo complex running a hose across the driveway into the building. I got past just before they ran the hose, but I stopped and asked them if they would be gone by the time I returned in about 30m, and they said "probably not, but if we're here you can just drive over the hose." That surprised me.


Our brigade has hose ramps so we can let people drive over... But they're pretty far down the list of priorities to pull off the truck unless we're just training.


Yeah, that would surprise me as well. I can think of a couple of possible explanations:

- You were talking to somebody really young and inexperienced, who did not themselves realize the risk of letting somebody drive over hose.

- You were talking to a prototypical "world-weary, crusty old greybeard" who was so jaded and cynical that they were thinking to themselves: "It doesn't matter what I say, this person is going to drive over my hose anyway."

- They did in fact have the ramps that @fukawi2 mentioned deployed. Hose ramps do make it somewhat reasonable to drive over hose, but they are rarely deployed in my experience.


Or you can just put up signs that the road is closed so nobody has to second-guess it. It seems like a cognitive bias to assume people have the same knowledge as you and will make an educated guess in the few seconds after they notice the hoses.


If you've got a driver's license you should be able to not drive over unexpected objects in the road.


> And sadly, humans do this all too often anyway.

This is actually probably one of the best arguments for self-driving cars.

The self-driving cars will eventually stop doing this. Humans, never.


Agreed, in the general sense. I do also think it's important that we not get caught up in holding self-driving cars to an overly high standard, especially in comparison to error prone humans. Speaking only for myself, once self-driving cars are consistently as good as or slightly better than humans, I would find them acceptable for broad use. I don't think the standard needs to be "zero error, ever" although that would be a nice aspirational goal for sure.


Question, wouldn't it suffice to just put a couple traffic cones in front of it? AVs should be already trained to recognize and stop for those.


Traffic management is pretty low on the list of priorities for us at a going job.

Also, at least in my jurisdiction, traffic control is a police responsibility, and we're technically not allowed to do it anyway.

The preference is anyone who's not a fire-fighter just stays away, for their safety and ours.


As always, be wary of the word "just". Things that seem trivial can turn out to be surprisingly difficult. :-)

In this context, the issue is actually pretty simple: it's a factor of available personnel and time. At least in the US, most municipal fire departments don't have the budget for enough staffing to run with all of their companies fully staffed (by NFPA standards). I'd posit that 3-person companies are the most common configuration you find these days, with 4-person being rare, and 5-person (or more) being all but unheard of. Ironically some volunteer departments actually put apparatus out the door with 4, 5, or 6 personnel more often than most paid departments. The tradeoff is the staffing with volunteer departments tends to be highly variable, so that same department, on a different day, may roll a truck with 1 or 2 personnel, or not roll at all.

Anyway, back to the point. Let's assume a typical "working fire": the first due company arrives, and there is a LOT of stuff that needs to get done, and in a hurry. Size up needs to be done, ideally with somebody doing a 360° walkaround. One or more attack lines need to be stretched and put into operation. A primary search needs to be conducted unless somebody on scene can absolutely confirm that everybody is out of the building. Vertical ventilation may be required. Additional lines may need to be pulled to protect exposures. A water supply needs to be established. A RIT (rapid intervention team) should ideally be in place to rescue any firefighters who become trapped.

Did I mention that you're starting out with 3 people on that first arriving apparatus (and the operator needs to stay with the apparatus as a general rule)? And then as the subsequent arriving apparatus come in, their crews usually slot in behind the first company, doing all of the things on the list above, which are of critical priority.

What isn't on the list of critical priorities (in relative terms) is "place cones around the supply line, or other hose that happened to fall in the street".

Now by the time the 7th or 8th due company arrives, you probably have staffing for things like putting out cones, doing traffic control, etc. But they're probably coming from the other side of town, or might be a mutual aid company from another department. So you could easily be looking at 10, 12, 15, or more minutes before they roll up. So it's during that first, let's say 15 minutes, of an incident that you usually don't have cones put out, traffic control established (PD might be available to help with that, but they're dealing with their own staffing issues), and so on. And during this time is when things like "car running over the supply line" tend to happen.

It's unfortunate, but largely it's just the reality of the way things work today, based on the staffing that's available. If every company was showing up with 5 or 6 personnel, this stuff would happen a lot sooner.


"Cone of Safety" is already a standard for anyone who drives a vehicle bigger than a golf cart professionally


Obviously autonomous vehicles need to be able to respond to officials directing traffic for any reason -- making way for ambulances, stopping for firefighters, being pulled over for a faulty headlight.

Surely Cruise and Waymo have thought about this. Curious how it's supposed to work normally, and whether something about it failed in this case?

How is an official supposed to get a moving autonomous vehicle to stop or obey non-standard directions (back up to make room for a fire truck)? Is it supposed to require any special training on the part of officials, or are remote operators supposed to always be watching for it? And what happens if the regular communications channel to remote operators fails?


> Surely Cruise and Waymo have thought about this. Curious how it's supposed to work normally

I was in a Waymo that encountered a fire engine parked in the center of an intersection. Traffic was stopped already when we arrived. We were maybe three cars back. Our light was red and no one was directing traffic. I recall the car acting a little hesitant as we came to a stop. It pulled towards the curb a bit and might have stuttered rather than coming to a completely smooth stop. Either way, the first car at the light went right on red. And we pulled up smoothly to fill the gap. When our light turned green, the Waymo waited a bit for the first car to go. Slowly crept past the fire engine in the open lane, and accelerated somewhat aggressively away. I literally whooped.

My assumption is that the software flagged the situation to an operator and they were able to instruct it in the time we were stopped. I heard from a friend that the car basically prompts the operator with a few pre-planned choices to select among — so the car is following a route it planned, but it provides the operator with options to keep the human in the loop for the tricky decision while still allowing autonomous driving execution (no remote direct driving is allowed iiuc).
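
Purely as illustration (I have no knowledge of Waymo's actual tooling, so every name here is made up), an option-prompt flow like the one my friend described might look roughly like this: the onboard planner proposes a few validated maneuvers, the remote operator picks one, and the car still does the driving itself.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Maneuver:
        label: str      # human-readable description shown to the remote operator
        path_id: str    # id of a trajectory the onboard planner has already validated

    def request_operator_choice(options: List[Maneuver],
                                timeout_s: float = 30.0) -> Optional[Maneuver]:
        # Hypothetical stand-in for whatever RPC the fleet actually uses;
        # here we just pretend the operator picked the second option.
        return options[1] if len(options) > 1 else None

    def handle_blocked_intersection() -> str:
        options = [
            Maneuver("Hold position and wait", "traj-hold-001"),
            Maneuver("Creep past the obstruction in the open lane", "traj-creep-002"),
            Maneuver("Reroute via side street", "traj-reroute-003"),
        ]
        choice = request_operator_choice(options)
        if choice is None:
            return options[0].path_id   # fail safe: stay put if no guidance arrives in time
        return choice.path_id           # execution itself remains fully autonomous

    print(handle_blocked_intersection())  # -> traj-creep-002

The key property is that the operator never steers; they only choose among trajectories the car has already deemed drivable.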


There should be a protocol where emergency workers can set up a geofence where cars are not allowed. Driverless cars can automatically avoid the geofence. Human-driven cars can pop up an alert.
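
For the vehicle side, the check itself is trivial if the exclusion zone is published as a polygon of lat/lon points (my assumption, not any existing standard); a rough sketch:

    # Minimal point-in-polygon check (ray casting) against a published exclusion zone.
    # Coordinates are (lat, lon) pairs; treating them as planar is fine at city scale.

    def inside_geofence(point, polygon):
        lat, lon = point
        inside = False
        j = len(polygon) - 1
        for i in range(len(polygon)):
            lat_i, lon_i = polygon[i]
            lat_j, lon_j = polygon[j]
            crosses = (lon_i > lon) != (lon_j > lon)
            if crosses and lat < (lat_j - lat_i) * (lon - lon_i) / (lon_j - lon_i) + lat_i:
                inside = not inside
            j = i
        return inside

    # Hypothetical exclusion zone around an incident:
    fence = [(37.7753, -122.4382), (37.7753, -122.4372),
             (37.7747, -122.4372), (37.7747, -122.4382)]
    print(inside_geofence((37.7750, -122.4377), fence))  # True -> route around it

The hard part isn't the geometry; it's who gets to publish fences, how fast they propagate, and what happens when connectivity fails.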


Of course, because emergency workers would love nothing more than to arrive at a location, pull up their phone, open the Block-a-Car app, manually draw a fence with touch controls, then do it ten more times because no self-driving manufacturer wanted to standardise on a single app. While, of course, a house is burning, a person is dying in the streets, etc.

No. The alert is the lights and sirens, and any driverless car unable to respond to that signal should be banned from roads. Emergency services have other things to do than draw a geofence so that Waymo can keep making money off of their collected data.


With that approach, let's hope we never run out of band-aids, and doubly so that it's not a band-aid with a dino pattern that the car doesn't recognize.


I agree. It could be as simple as a stop button on the outside, or just any kind of contact on the front hood. Press it and the car stops until a remote operator resets it.


> Surely Cruise and Waymo have thought about this

No worries, I bet they've factored the cost of the resulting fines neatly into their pricing model!


> Obviously autonomous vehicles need to be able to respond to officials directing traffic for any reason.

In that case it would also need to tell the difference between a cop and someone impersonating a cop, or it's going to be really easy to mess with self-driving cars.


Doesn't seem fair to expect self driving vehicles to perform a task that humans can't. It's easy to impersonate a cop and interfere with traffic, and human drivers would be tricked too.


> Doesn't seem fair to expect self driving vehicles to perform a task that humans can't.

That is literally the sales pitch. They're supposed to be better at driving than humans. And that actually means all of it, not just lane-keeping on well marked brightly lit highways.


> They're supposed to be better at driving than humans. And that actually means all of it

Why? Couldn't it be "better at driving than humans" by reducing the most common errors (lapses in attention), while performing about the same as humans in less common situations (cop impersonators)?


No, they're supposed to be approximately as good at driving as humans.

The whole "sales pitch" is that it doesn't take any human effort to do.


> supposed to be approximately as good at driving as humans

They're supposed to be cheaper. Being as good is the limiting factor. Being cheaper is the pitch.


There's some truth to this. See "US Marines defeat DARPA robot by hiding under a cardboard box"[1] which is another case of fooling AI.

Whether we want it to be or not, vision AI at the moment is somewhat an idiot-savant when it comes to interacting with the real world.

[1] https://news.ycombinator.com/item?id=34518299


I mean you could just stand in front of a self-driving car and it'll probably stop rather than swerve around you, I imagine. There was an episode of revisionist history that discussed this a bit https://www.pushkin.fm/podcasts/revisionist-history/i-love-y...


In the submitted article it says firefighters stood in front of the car to stop it and it just kept creeping forward anyway.


I wouldn't be willing to bet my life on any self-driving car doing either of those things.


You first!


> someone impersonating a cop

An RFID tag would do. If you prefer going dystopian, a chip implant.


AI being able to tell whether an RFID chip is real or fake is not much different from telling whether someone in a fake cop outfit is real or not.

And given that RFID is only detectable from a range of inches to feet, what's the point? And isn't an under-skin chip just RFID under the skin?


That would go great in the US. Try to tell a cop that you're going to come close and grab their arm to check their chip after they stopped you. I'm sure they'll take it super nicely.


That sounds like a them problem—society won’t change any practices to accommodate these toys that are demonstrating how little value they bring every day.


You mean the same society that built thoroughfares, roads, stop lights, and laws for the benefit of transporting people and goods? Why wouldn't they modify the infrastructure to save a substantial fraction of the costs?

Transportation is literally the 2nd greatest household expense [1], and the cost is dominated by labor. I'm not sure how you could conclude that the potential for massive savings is a "toy".

[1] https://20somethingfinance.com/transportation-costs/


Because if transportation decisions were driven by total cost to society, we would already have trains and transit. Instead, the costs are split among multiple entities, and weighed by planners and politicians against cost of new things, community preferences and personal preferences and commercial desire for profit.


Ableism at its finest. If these actually deliver on driverless cars it will be a boon to people who can’t drive. And that completely ignores the value in letting people work while they commute and the reduced loss of life from wrecks.


The letter from various SF agencies to the state CPUC has more info.[1] Cruise vehicles stopping for some reason and tying up traffic is a headache. This happens reasonably often. Sometimes on streetcar tracks and in bus lanes, which has the MUNI people annoyed.

Apparently Waymo doesn't do this as much.

[1] https://www.sfmta.com/sites/default/files/reports-and-docume...


I can see how this kind of thing would not be in the self-driving training sets.

But a person standing in the way should cause the vehicle to stop, right? Pedestrian of any kind, including a firefighter in full gear.

It's hard to speculate what happened without knowing the details, but I'm imagining a scenario where someone stood there to block the car's progress, and once they realized it's a dumb robot that will proceed as soon as the path is "clear", they did the next logical thing (which was to smash the robot).


There are a lot of these kind of edge cases that are just not going to end up in training data sets, at least not in sufficient quantities in sufficient time. That makes it hard for 100 percent data-driven approaches to adapt to all circumstances, since so much data is needed. A human can reason and understand odd circumstances without prior "training data". That's an advantage in my view. Current AI systems like these just respond to the data given to them --- garbage in, garbage out, as the saying goes.

In my view, we should be worried not just about what data is missing but also what data is either unintentionally or even intentionally incorrect. One day a hacker will find a way to insert malicious data (in contrast to malicious code) and try to exploit these systems. That's just inevitable at this point. I hope that these autonomous systems end up with a lot more fail safes, since episodes like this one are only going to become more common, in my view.


You might like the scifi story "Car Wars", which does some exploring of the subject: https://doctorow.medium.com/car-wars-a01718a27e9e


> Pedestrian of any kind, including a firefighter in full gear.

You assume that a firefighter will be recognised as a person, even if they're wearing breathing gear or carrying a sack, etc., and that the car would respond appropriately to hand gestures, driving away from the fire rather than queuing neatly behind a fire appliance or something.

For all the people concerned that a driverless car might stop somewhere it shouldn't, I'm way more concerned that the car will drive on when it really should stop. Especially given some of the insanity we've seen in Auckland with the current floods, the idea that a driverless car might not recognise someone standing waist deep in floodwaters or whatever, then just drive on through, doesn't excite me.

There's too many cases where actual human drivers fail to respond appropriately even though we have 100-odd years of selection pressure encouraging them to.


This is going to slowly evolve into "law enforcement needs a killswitch" which is a scary thought


The really scary part is when the kill switch is in reference to your life rather than the car's movement. Like the theories about the death of Michael Hastings in 2013.

Hacks of cars were demonstrated in 2015, and documents from the CIA released by Wikileaks showed they had explored the idea as early as 2014[1], but at the time it was still presumed that killing someone by hacking their vehicle would be quite resource-intensive.

---

[1] https://www.washingtonpost.com/news/innovations/wp/2017/03/0...


A probably better idea would be giving law enforcement/emergency services the capability to enforce a no-autonomous-driving zone. Autonomous cars should either find another route or give up control to the driver while in the zone.


This is factory equipment on many cars for over a decade. (GM, Toyota, Mercedes, BMW)

e.g.: https://www.onstar.com/public-safety/emergency-situations


I'm not sure why this is that scary of a thought.

Right now law enforcement already has a "killswitch": a traffic stop. If you don't stop at the traffic stop, the "killswitch" is a chase & spikes on the road.

A "killswitch" in which someone can just have the car pull-over seems like objectively safer.


Once there is a killswitch, it can be accessed by people and purposes it wasn't intended for.


Think this through. Like, really, think this through.

What’s the goal? You want to steal a car? Rob the passengers? Kidnap the passengers? Kill the passengers?

So what are your options in this world where there is the equivalent of a firefighter's key in an elevator, but probably using PKE and periodic private key cycling?
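
As a toy sketch of that mechanism (signing rather than encryption, but the same public-key idea; none of this reflects any real deployment, and it needs the third-party 'cryptography' package):

    # A short-lived, signed "pull over" command the vehicle verifies before obeying.
    import json, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    agency_key = Ed25519PrivateKey.generate()          # held by the agency, rotated periodically
    vehicle_trusted_pubkey = agency_key.public_key()   # provisioned into the car

    def issue_stop_command(unit_id, valid_for_s=300):
        payload = json.dumps({"cmd": "SAFE_STOP",
                              "unit": unit_id,
                              "expires": time.time() + valid_for_s}).encode()
        return payload, agency_key.sign(payload)

    def vehicle_should_stop(payload, signature):
        try:
            vehicle_trusted_pubkey.verify(signature, payload)
        except InvalidSignature:
            return False                       # forged or corrupted command: ignore it
        msg = json.loads(payload)
        return msg["cmd"] == "SAFE_STOP" and time.time() < msg["expires"]

    payload, sig = issue_stop_command("Engine 5")
    print(vehicle_should_stop(payload, sig))            # True
    print(vehicle_should_stop(payload, b"\x00" * 64))   # False: bad signature

So an attacker can't replay an old command indefinitely, and a stolen key ages out at the next rotation.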

Steal a key? From where? Buy one? From who and for how much? Ok, so now you’re needing to commit some rather risky crimes before you even move onto the next step.

And then let’s say you get the key. So you disable a taxi. And then what? I assume the passengers can lock the doors. So you smash the window open and, well, maybe it’s best to have a gun…

But why not just get a gun and approach someone stuck at a red light?

What kind of risk/reward calculus is going on inside the mind of the criminal deciding to forcibly stop a moving vehicle and rob or murder the passengers?

So yes, what you said is true, but the likelihood is very low and the benefit to emergency responders is high.

I guess it makes for a good plot device for a spy movie.


I was thinking more that I pissed off a cop, and now my car randomly won't start.


Well if I was working on such a system I would definitely want to log every shutdown event on storage in the vehicle itself. It could then be seen what key was used and compared to which officer had been assigned that key for the day at the beginning of their shift.

So yes, a pissed off (and stupid) cop could do this, but then you can turn around and go to the press with the information, call up a lawyer, and proceed in any other manner where you're being harassed by a police officer.
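
Sketching that out (field names and the per-shift assignment table are entirely made up), the on-vehicle record might be as simple as an append-only log keyed by whichever credential was presented:

    # Append-only audit record written to vehicle storage on every remote shutdown.
    import json, time

    SHIFT_KEY_ASSIGNMENTS = {"key-4821": "Officer J. Doe, badge 1127, day shift"}

    def record_shutdown(key_id, location, log_path="shutdowns.jsonl"):
        entry = {
            "timestamp": time.time(),
            "key_id": key_id,                                         # which credential was presented
            "assigned_to": SHIFT_KEY_ASSIGNMENTS.get(key_id, "UNKNOWN KEY"),
            "location": location,                                     # (lat, lon) where the stop occurred
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry

    print(record_shutdown("key-4821", (37.775, -122.438)))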

Cue some unrelated critique on policing in general that has nothing to do with what we're talking about...


I guess we have different beliefs around how easy it is to fight back against government abuse.

I don't think you just contact a journalist and a lawyer, and things usually work out for the best.


As predicted, some unrelated critique on policing in general that has nothing to do with what we're talking about...

The problem with your line of reasoning is that you really have a problem with government abuse and that you're looking for anything that could possibly add to government abuse even in the smallest of manners. There's no scalar to your concept of abusive systems, just the fact that any system that could be used for government abuse is bad.

It leaves out the intent and capabilities of the government to engage in any kind of abuse.

If the government wanted to silence you then it would silence you like they would in any authoritarian nation: showing up to your front door, putting a bag over your head, and taking you away. They don't need any advanced technology to do so. Technologies are not the main cause of authoritarianism. People are the main cause. Technologies can facilitate authoritarianism or they can facilitate liberal democratic republics.

Can a system of warranted entry on private property be abused so that police can enter your house in an unjust manner? Of course! But the presence of a system of warranted entry on private property does not correlate with authoritarian government.


This is mostly a critique of an imagined version of my state of mind.

Such discussions are pointless to me.


Eh, not really... this is the only paragraph where imagining your "state of mind" might come into play:

> The problem with your line of reasoning is that you really have a problem with government abuse and that you're looking for anything that could possibly add to government abuse even in the smallest of manners. There's no scalar to your concept of abusive systems, just the fact that any system that could be used for government abuse is bad.

But I'm not imagining anything because you've made it clear that your problem all along was with government abuse. It's also clear that you're looking for anything that could possibly add to government abuse even in the smallest manners because you don't talk about scale or likelihood, just a simple binary of "could be used for government abuse" or not.

I'm just logically filling in the blanks based on our interaction to come up with how you reasoned your way through all of this.

And I just went back and realized you started talking about personal cars and not taxis... what kind of cop is going to get mad at you and then... wait for you to get a cab, follow you, and then make your car pull over... and then what? Why are they doing this? Why would anyone do this?

I mean, what system is more likely to be abused, one where cops can get a warrant to kick your door down or one where cops can make autonomous taxis pull safely over to the side?

Stop being silly!


Eh, of all the threats I can think of, law enforcement being able to killswitch autopilot on my car is not even gonna make the top 100. Whether on the run for crimes or being pursued by sinister oppressors the automotive autopilot is not gonna be a key margin of liberty. And being able to HERF any vehicle dead is probably coming no matter what:

https://boingboing.net/2018/04/30/zapguns-r-us.html

On the other hand, I can actually see a lot of merit in autopilot killswitches for anybody, including law enforcement.


> On the other hand, I can actually see a lot of merit in autopilot killswitches for anybody, including law enforcement.

Including thieves?


The human driver could still take command (I presume), so I don't see that as a relevant attack vector.


They already have a killswitch for humans and they use it often.


Don't they already have a kill switch in the fact that they have the authority to stop vehicles?

Or is the scary thought that it would be software/hardware and therefore insecure?


The insecurity is an issue, sure, but also just the fact that you are turning over a crucial human activity to far stricter regulation. A "no fly list" could be extended to a "no drive list", for instance; turning off the cars that don't pay vehicle registration, etc.


"No drive list" is the default.

Without a current license you should not be driving.


IoT killswitch, outsourced to a private outfit, is scary.


Parallel case with a standardized solution: override controls in elevators for emergency responders.


> Cruise did not immediately respond to Insider's request for comment made outside typical working hours.

As an aside, this seems like somewhat of an issue with modern journalism. I guess you have to get the article out asap to avoid being scooped, but then it ends up missing key information, like a statement from Cruise that might have been received if they'd been given a business day to respond. If you wait though, it's old news and no one cares about the added details.


This is one of those futuristic sounding headlines I would've found so cool to dream about 10+ years ago as a not-too-far-off sci-fi hypothetical.

Weird how mundane and unsurprising this feels after many years of headlines about self driving. It's hard to have a lot of faith in a concept that seems so doomed for failure before it's even hit a critical market size.


There was something similar recently, when a Waymo car drove into a construction zone: https://www.teslarati.com/waymo-almost-drives-trench/


I am so confused. Smashing the windshield of a Cruise taxi is able to stop it from moving forward?


Driving with a broken windshield is a safety hazard so it’s not really surprising that it’s programmed not to.


Johnny Cab gets closer to reality with every passing day. ( https://www.youtube.com/watch?v=eWgrvNHjKkY )


Watch them implement some kind of "fix" which then causes erroneous hard stops from full speed because of a stripe or something, resulting in pileups.


Are driverless cars a solution in search of a problem, or a problem in search of a solution? I think the latter. The problem is psychological and the solution is for people to grow up. Technology is nowhere near the state necessary to navigate the real world, because it has no model of the real world comparable to the human brain. Competent adults who have looked into this problem understand this. The "driverless" phenomenon is a fascinating case study in how dumb investors can be when cheap money is coupled with baseless mass delusion.

Cars are not smart in 2023. They can't be, because the technology does not exist, for example, for a robot to reliably recognize and flexibly respond to an open-ended emergency situation in the real world the way a human can.

Maybe some situations can be handled some of the time, but nowhere near the rate at which a competent driver can.

The principals of so-called self-driving companies should be held personally liable for accidents if the technology is so great. Because that is what happens to human drivers who commit infractions.


> The problem is psychological and the solution is for people to grow up.

To quote a popular Tumblr post, "If your solution to some problem relies on “If everyone would just…” then you do not have a solution. Everyone is not going to just. At no time in the history of the universe has everyone just, and they're not going to start now."

Human flaws are not going to go away any time soon, and you probably can't improve the overall quality of human driving on any scale less than decades. Whether or not they're useful right now, driverless cars are promising because the technology improves over time.


Clarification, as there are comments saying otherwise:

Article says Jan 2023 incident was active fire fighting and June 2022 was an active fire scene. Additionally that running over hoses is a violation of CA vehicle codes and "can possibly injure firefighters".


Aren't they supposed to respond to gestures? What do they do if someone is directing traffic? And I guess the question if they do respond to traffic directing gestures: how do they avoid obvious pranks from drunk people?


Throw on a reflective vest & you'll have no problem pranking human drivers with your traffic signals


A surprising number of humans have no problem with doing things to faceless computers and public or corporate infrastructure that they would never do to another human being with whom they were making eye contact.

Few are willing to don the reflective vest and ruin someone's day by lying to their face and directing them down a bad detour, but I have no problem believing that a lot of people would laugh at a stupid self-driving car wasting its time and energy following their hand signals.


But a self driving car SHOULD follow anyone directing traffic.

There are valid reasons for <random citizen> to be directing traffic, like losing their load on the highway or helping traffic flow around a stopped vehicle.


Would you step in front of a self-driving car to make it emergency brake? I think a lot of people would if they could. It's another instance of what you mentioned - teenage gangs bullying and playing with lonely empty self-driving cars.


Isn't the idea that there would be humans in the self-driving car? At least 90% of the time, I would hope.


I'd be more concerned about mistaking regular gestures for instruction gestures. I might be waving at someone, either to say "hello" or to signal "you go ahead". All of this is very context-sensitive and subtle.


I doubt it. That's a really hard problem. Ignoring the machine vision component, if someone on the side of the road waves at their friend, should cars going by stop?

Since there's no way to take control of these vehicles, I imagine they fall back to a service center with remote drivers when they come across the unexpected. Anything involving flashy lights should fall into this category.



Can't the driverless car, when it sees a fire truck, automatically stop at a safe location? Then if the fire truck has not moved after 3 minutes, re-route to get away from it?


Next headlines:

Driverless cars cause traffic jam at accident scene.

Driverless cars cause traffic jam while fire tenders wait at traffic lights.


There were numerous instances of the cars blocking traffic mentioned in the report. Someone decided that they weren’t headline material.


In the long-term self-driving vehicles will just become trains.


So what is the solution? Let emergency services have a remote-control STOP button? But it would have many implications for passengers too.


The solution is recognizing that we are nowhere near ready for driverless cars. With "ordinary" software, you can look at all of the use cases, and if you solve 5% of them in a really good way, you can be successful: users will self-select whether those 5% of use cases are valuable for them, and simply not use your software for anything else.

With driverless self-driving vehicles, you have no such comfort. Even if your software correctly handles 99.9% of the use cases, the remaining 0.1% will get someone injured or killed, and people climbing into taxis cannot self-select whether their route will pass by a fire and decide, in that case, to "take a cab with a human driver."

Driverless cars have a huge moral hazard here. Their incentives are all around serving their passengers, but the downsides are offloaded onto pedestrians, cyclists, other automobile users, and now firefighters and the citizens they are protecting.

No "solution" should "beg the question" by taking it as axiomatic that driverless cars at the current level of capability are necessary. They are not.

We are not curing cancer or feeding the world here, we are trying to make people get rich by eliminating the cost of labour for a human driver. No way this calculus should prioritize investors over human lives in any sane society.


I really believe that with that attitude, we'll never be "ready" for driverless cars.

> We are not curing cancer or feeding the world here, we are trying to make people get rich by eliminating the cost of labour for a human driver. No way this calculus should prioritize investors over human lives in any sane society.

How about, we're trying to make transportation cheaper for everyone by eliminating the most expensive part of taxi cabs (human labor)? Transportation being cheaper means people can get better jobs further away from where they live, see family more often, etc.

Before you mention public transportation, realize that it would benefit from driverless technologies as well (aka, not zero sum).


It's not an attitude. It's reality. Self-driving cars, if even attainable (which is questionable), don't solve any real problems besides bolstering the want for cars and thus more roads and highways, which are primary contributors to greenhouse gas emissions. That is, other than creating "cool" jobs and supporting get rich schemes and startups.

If we cared about transportation and climate change, we'd be investing into buses, trams, trains, biking and walking paths, and other such solutions.

The attitude designation belongs with self-driving car enthusiasts, who are perfectly happy throwing out decades of safety research and progress.


> Self-driving cars, if even attainable (which is questionable), don't solve any real problems besides bolstering the want for cars and thus more roads and highways, which are primary contributors to greenhouse gas emissions.

It may be the opposite, as driverless cars can potentially be utilized more efficiently. If I drive to a destination, my car will sit in a parking space doing nothing and wasting space. A driverless car could drop me off, then drive away to taxi other people around. There's much less need for parking lots, freeing up valuable urban land.

It may reduce car ownership overall. I have a car because it's more convenient than public transport, and far cheaper than a taxi. If I could have a self-driving car at my door in, say, 15 minutes, and it was competitively priced, then I probably wouldn't bother owning a car. Another advantage would be that if I needed more seats, or more space, I could order a vehicle to my specifications.

It may increase utilization of roads, as self-driving cars that can communicate with each other can make more efficient use of them. Less need for stopping distances if all cars can brake simultaneously, and being able to slow down in advance reduces the impact of traffic jams.


I would argue that all of that is somewhat wishful thinking. I highly doubt it comes to be. And even if it would, it would still be less efficient than public transportation systems (possibly operated partially privately).


It's certainly the case that societal changes are hard to predict. It might be the case that self-driving vehicles will encourage longer drives, for example.

That said, I think my own personal carbon footprint would likely fall, because most of the time I could use a much smaller vehicle. Unless I doubled the amount of time I spent on the road, I'd probably use less energy overall, as I could use a 2 seater car with half the mass.

Public transport is, of course, much more efficient; but it's also much less convenient, and at least in my country, more expensive. It costs less to run my car for a year than it would be to pay the equivalent bus fare for a year. If buses were self-driving, maybe the opposite would be true.


Ever consider that if the effective duty cycle of your car is increased, the vehicle wears out quicker?

The whole idea of autonomous robo taxis is bad on its face for many, many reasons.

When will we grow up and admit it?


> Ever consider if the effective duty cycle of your car is increased the vehicle wears out quicker?

Sure, but by how much? What's the average extra overhead per person? By that I mean, what's the average distance that a taxi would need to drive between dropping one person off and picking the next person up?

The more taxis there are, the lower the overhead, as the more likely it is a taxi will happen to be nearby when someone wants one. We're used to taxis being rare, especially outside of major cities, but if taxis were common, the overhead might be very low.

I can look outside right now, and see maybe 20 vehicles parked within 50m of my home. If that were 5 self-driving taxis instead, that would easily be enough capacity for my immediate neighborhood, at least most of the time.

So you could likely reduce the amount of cars locally by a factor of 4 or 5, and I'd still have a vehicle less than a minute away at most locations. Yes, if you had a 5th of the cars they'd have to do 5 times the work; but likely not much more than that. In other words, the maintenance cost wouldn't significantly worsen, and may even improve; cars aren't immune to entropy while stationary.

The other advantage of having self-driving taxis is that you can more easily specialize. I have a pretty typical 5 seater car because I sometimes need that space. But most of the time, I could make do with a 2 seater with half the mass. My choice of car is determined by the edge cases, but my choice of taxi would be determined by what I needed at the time.


Overhead is the wrong metric. Depreciation over fixed costs, marginal variable cost (e.g. fuel, oil, etc), and opportunity cost are what you want to look at. If you use a car more it wears out faster and needs to be replaced sooner. It needs more fuel and more frequent preventative maintenance.

If the average trip is short, the robotaxi does more trips and the effect is the same.

Who cleans up the car when a drunk stranger pukes in it while you are sleeping?

Who fuels it or charges it?

There are many reasons people don't share cars.

Lacking proper control software is not one.


> Overhead is the wrong metric. Depreciation over fixed costs, marginal variable cost (e.g. fuel, oil, etc), and opportunity cost are what you want to look at. If you use a car more it wears out faster and needs to be replaced sooner. It needs more fuel and more frequent preventative maintenance.

Sure, but you also need fewer cars, and the cars can be more specialized (i.e., smaller on average). Two 1500kg cars travelling 10km each might be replaced by one 750kg robotaxi travelling 20km. Overall a significant reduction in fuel for the same two journeys.

And then there's the question of whether it's cheaper to have one car traveling 100,000km over 5 years, or two cars travelling 50,000km over 5 years. My guess is that fewer cars travelling further would be less expensive in most cases.
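
Back-of-envelope, under the crude assumption that per-km energy scales roughly with vehicle mass (it doesn't exactly, but it's in the ballpark for city driving):

    # Energy proxy: mass * distance. Ignores deadhead miles, drag, drivetrain differences.
    two_private_cars = 2 * 1500 * 10    # two 1500 kg cars, 10 km each -> 30,000 kg*km
    one_robotaxi     = 1 * 750 * 20     # one 750 kg taxi covering both trips -> 15,000 kg*km
    print(one_robotaxi / two_private_cars)   # 0.5 -> roughly half, before deadhead miles

Deadhead mileage between fares eats into that, which is really the same "overhead" question as above.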

> Who cleans up the car when a drunk stranger pukes in it while you are sleeping?
>
> Who fuels it or charges it?

You'd hire someone to do it? The same way it works with rental cars or taxis today. That's part of the overhead I mean: is it more efficient for companies to maintain a specialized fleet of robotaxis, or for individuals to maintain a far larger fleet of generalized vehicles?


> cars and thus more roads and highways, which are primary contributors to greenhouse gas emissions

No… they are _not_ the primary contributors to greenhouse gas emissions. Road transport accounts for less than 12% of emissions[0].

What does climate change have to do with driverless cars anyway? It seems like you're basically saying we should keep driving miserable under some false pretense that it's responsible for the death of the planet.

[0]: https://ourworldindata.org/emissions-by-sector


I didn't say the primary. They are indeed major contributors. I probably should have spelled out the point more, but building roads is the real problem. And so anything that supports building more, and thus having to maintain more, roads is a problem. If I remember the stat off the top of my head, building just one mile of road (I forget the definition of "road") produces about as much emissions as a single EV.

> It seems like you're basically saying we should keep driving miserable under some false pretense that it's responsible for the death of the planet.

And no, that's not what I'm saying. Self-driving cars are just trying to patch the problem of terrible traffic and terrible public transportation. By doing so, they aren't solving any real problem, and they will increase problems of congestion and also climate change, since they will continue to bolster cars and thus roads needed to support cars.


> they aren't solving any real problem, and they will increase problems of congestion and also climate change, since they will continue to bolster cars and thus roads needed to support cars.

Again, this is zero sum thinking. We can have both efficient and safe driverless EVs as well as trains/buses run by the government. Personally, I'd prefer to take the former given how little interest the government has in enforcing laws on public transportation, and this trend seems unlikely to change.


> Personally, I'd prefer to take the former given how little interest the government has in enforcing laws on public transportation, and this trend seems unlikely to change.

I'm not sure what you mean by this. Are you talking about HOV lanes and bus lanes not being enforced, fares not being enforced, or literal lawlessness on public transit?


Literal lawlessness on public transit. I gave up on Muni in SF and started driving my car a lot more after the n-th encounter with drug addicts ruining public transportation for everyone.


I'm of this attitude as well, that Jevons paradox would kick in for cars again and lead to more use, like how ride-hailing apps worsened congestion.


Agreed. And thank you for the reference to Jevons paradox. I hadn't heard of it before, but I've been looking for things like this.


I've recently been listening to some talks from Matthew Crawford and his arguments for protecting manual, human tasks from automation and safetyism. I find his stance to be a good middle ground between technocratic progress and luddite wariness.

https://www.youtube.com/watch?v=XxONOUwOX80


>>How about, we're trying to make transportation cheaper for everyone by eliminating the most expensive part of taxi cabs (human labor)?

Does anyone really believe that? Right now I could commute to work by taking an Uber - it costs about £10 for a one way trip.

Now you're telling me that a company somewhere would send a state-of-the-art car equipped with incredibly advanced computers and sensors, a car which they need to insure, which needs constant communication and backup emergency controls ready on standby, for which they need to cover maintenance, depreciation and other costs, and would charge me less than £10 for the privilege? How little exactly?

It's nonsense, that's what it is. Human taxis have pretty much reached the bottom of pricing through the gig economy - you are hardly paying for maintenance and fuel at this point with the way rides are structured. What sort of possible margin is there for self-driving taxis, which will inevitably end up being incredibly expensive assets until the technology becomes commonplace and cheaper?


That £10 trip is "artificially" cheap. Uber/Lyft are effectively tricking drivers into thinking it's a good deal to use their own cars (and thus their own fuel and pay for their own maintenance), and are partially subsidized by venture capital.

This, in my opinion, is unsustainable. Soon there won't be any new "suckers" in the pipeline, and it will suddenly get a lot more expensive as these rideshare companies need to reckon with the actual cost of human labor and leasing their own cars.

Investing in driverless technology, which can be replicated and will eventually become cheaper as it's mass-produced, is a very smart move. These costs will eventually get passed down to consumers as well once there's enough competition, and then everyone wins.


> I really believe that with that attitude, we'll never be "ready" for driverless cars.

Is that a problem?

And if it is, are you implying that we cannot one day have driverless cars without imposing life-threatening externalities on citizens who did not give informed consent to participate in the testing of driverless cars?


> And if it is, are you implying that we cannot one day have driverless cars without imposing life-threatening externalities on citizens who did not give informed consent to participate in the testing of driverless cars?

I'd much rather be around a driverless car today, which drives overly cautious and comes to a complete stop at every stop sign, than a car driven by a human driver. So that's my "informed consent", and I suspect many others agree. Is that how democracy works?


> Before you mention public transportation, realize that it would benefit from driverless technologies as well

Which has been in place for years in some places (ie Docklands Light Railway).

> we're trying to make transportation cheaper for everyone

Call me an old cynic but I'll believe it when I see it - I'm old enough to have seen many, many events where the supply cost of something has gone down but the consumer cost has not (or even gone up in some cases.) Why will driverless taxis be any different?


Not just years—decades. The Victoria Line in London has had automatic train operation since it opened in 1968 (unlike the DLR, there's still someone in the driver's seat, but they do little more than shut the doors).


> Before you mention public transportation, realize that it would benefit from driverless technologies as well (aka, not zero sum).

The problems from autonomously operated trains are limited owing to them being a separate, nearly independent system. Anyway, some parts of the world have fully autonomous trains with centrally located operators on standby to override if anything unexpected happens (usually someone jumping onto the tracks). Airplanes have significant automation built in (and in theory can even land without input), but at the end of the day we have pilots who can override just in case an engine comes unexpectedly loose or someone on board suffers a heart attack.

The problem with autonomous cars is that a) they operate specifically in mixed-use locations and b) often most of the developers want to fully abdicate any control, unlike other systems where operators are on standby in case something unexpected happens. If both Waymo and Cruise are willing to have an operator-standby system, a) might be mitigated. Until I hear that these vehicles are supervised just in case they encounter something unexpected, you will see a lot of similar nuisance issues.


> How about, we're trying to make transportation cheaper for everyone by eliminating the most expensive part of taxi cabs (human labor)? Transportation being cheaper means people can get better jobs further away from where they live, see family more often, etc.

Is this going to make people's lives better (shorter work week and/or better pay) or worse (longer commute and expectation that you work in your car)?


> I really believe that with that attitude, we'll never be "ready" for driverless cars.

no great loss there.


> Before you mention public transportation, realize that it would benefit from driverless technologies as well (aka, not zero sum).

No, public transportation has completely different technological needs.


Just a quick note, as a blind person waiting impatiently for the driverless future, there are other uses for driverless vehicles than just making some random person money.


Absolutely. Also aged people who no longer qualify to drive their own vehicles, people with other disabilities that prevent them from driving under any circumstance, people who have disabilities that permit them to drive with specially modified vehicles but would like to be driven or cannot afford the extra expense...

Yes, there are a lot of very good use cases. And honestly, I have a far less important use case of my own: If I take a taxi (usually when travelling) I just want transportation, I do not want someone chatting me up in an effort to earn a good review/tip because they don't earn enough from driving the route silently to get by.

I just don't want progress lubricated by the blood of people who did not give informed consent to assume the risks.


As a pedestrian and cyclist in San Francisco, I find driverless cars to be much safer than the average car driver. They follow the rules and are always paying attention.


That’s great, but it apparently wasn’t the case here and that’s the whole point of the article


We are nowhere near ready for cars with drivers either. Accidents happen every day, drunk driving is common, quite a few people get their license when they shouldn't (and some drive without a license anyway).

Software handling 99.9% of the cases would be a significant advantage over cars with drivers if it were true.

I think it is just a numbers game. We may not be there today, but I believe we will have a future with safer roads thanks to driverless cars. I am longing for that.

So we should push for it instead of trying to shut it down. The solution to the problem in the article sounds easy: allow law officers to mark roads as blocked so the car does not even need to see the fire truck. Or just put a road signal or a person on the road so the cars (driverless or with drivers) don't drive over your hose.


>We are not curing cancer or feeding the world here, we are trying to make people get rich by eliminating the cost of labour for a human driver. No way this calculus should prioritize investors over human lives in any sane society.

Other than working and sleeping, driving is the main thing adults do with their lives. Not having to do it anymore would be transformational. I'd prefer to see that accomplished through urbanism and public transit, but that's about as much of a moonshot politically as self-driving is technically.


> The solution is recognizing that we are nowhere near ready for driverless cars.

I agree. If we both happen to be correct, I wonder in what other areas of automation we may be greatly over the tips of our skis.


"ordinary software" also gets people killed and injured


Escalators have big red “Emergency Stop” buttons anyone can press when things go wrong. Why not one for autonomous vehicles, too?


Passenger safety concerns? If anyone were able to stop an autonomous vehicle, it may not always be in the passenger's best interest. Could be OK if the passenger has an override, though - if they decide to tell the car not to comply with a legitimate stop request, it should be treated as if they were driving themselves and refused to stop.

And, obviously, an emergency stop should initiate stopping, but the vehicle should do so only when it's safe. It's not an escalator that's safe to halt immediately and at any moment.

For a completely unmanned vehicle (driving to pick up or to recharge/park for the night) - probably no big deal.


Even while unmanned it could be exploited. A robo-taxi's competitor could hit the emergency stop on any vehicles they see driving empty.


The point is that it would be illegal and prosecuted. A competitor could also shoot them with guns, but we don't worry about that, do we?


> it may not always be in the passenger's best interest

What is a realistic scenario here? Unless you are in a tank in a warzone...

How is a traditional, non-self driving car protecting you from someone pointing a gun at you? Do you have bullet-proof glass and bodywork?


Here[0] is a realistic scenario from last week near me. Just checked. No tanks involved. You can decide for yourself if it is a warzone.

Note: OP edited the post after my response. Would be good if HN indicated this.

[0]: Cherry Creek carjacking prompts other victims to come forward https://www.cbsnews.com/colorado/news/cherry-creek-carjackin...


> "When all the four doors popped open and they came out with assault rifles, I was honestly thinking, I'm going to be dead," he said, remembering the moment the suspects got out of that car.

That sounds warzone tbh.


Carjackings and other crimes where people could press the button in a bad part of town and pull the riders out.


Or just jump in front of the car which is supposed to stop and then your companions handle carjacking the immobilized vehicle. There are many ways to normally stop or trick an autonomous vehicle without needing a big red button to stop it. Spike strips are used by police to stop human driven vehicles all the time.


What about if someone triggers it when you're traveling 75 mph on the highway?


I.e. what if someone pulls in front of you on the highway and then stands on the brakes?


Do you have a bulletproof car that can effectively protect the driver? Or are you just imagining Mad Max?


Trains too. There are abusers but not many.


Because it would take about 5 minutes for people to figure out how to use that for nefarious purposes.


Maybe emergency services should have the ability to block autonomous vehicles from particular areas in real time. Certainly the companies should collaborate with them to produce a lot more training data for this kind of scenario.


Seems like this would be super useful for allowing emergency services access in the kind of scenarios where human drivers are expected to pull over and let them through. I'm kinda surprised this kind of integration wasn't a condition on the permit to run them.


Sounds like a good question to ask Cruise and Waymo--it's their responsibility to do something about the issue.


The solution is to have a person inside the car controlling it. A revolutionary idea indeed.


Put a human being inside the car


Perhaps some sort of beacon emergency services can deploy which would tell self-driving vehicles to avoid an area.


The solution is to not allow these cars on the road until/unless they can behave properly.


If you must, you can do this when reading: “So what is the solution?” → “So what is the proper behavior?”


The proper behavior is to not run over fire hoses, and more broadly not to violate traffic laws.


This is not a serious solution but it would probably work: place a fake traffic light in the road in front of the hose.


I didn't see it mentioned in the article, but given they had time to break the window, my guess is that's more or less what they did -- without the light. Might take a little courage, but standing in front of the car should make it stop, yes? You'd have to stick around until another solution is found, of course.


Shadows would also work in that case


The problem of chaotic vehicle interactions exists irrespective of self-driving cars. What stops progress is the glacial pace of legacy car companies and the libertarian nature of car culture.


Aren't there "road blocked" sign in their tools? Don't such cars recognise it?


[flagged]


>other things that can cause bodily harm.

So everything? I, for instance, recall a certain style of Skechers that had to be recalled because they caused bodily injury. Do you likewise demand (e.g.) independent and peer-reviewed studies before new footwear can be put on the market?


Don't be a fool. You know what I mean. Medicines and self-driving cars can cause irreparable damage on a larger scale than a shoe does.

You sound like you would be first in line to give a pregnant woman Thalidomide too and that was considered "safe and effective" as well until it wasn't.


>Medicines and self-driving cars can cause irreparable damage on a larger scale than a shoe does.

Don't be a fool - irreparable harm is occurring and you and people like you are standing in the way of fixing it.

The current state of the art kills 40,000 people a year.[0] You've got to consider the bigger picture around 'dangerous'. Skepticism here isn't prudent, isn't reasonable, and frankly will be responsible for the deaths of tens of thousands. The analogy is inane because your premise is.

[0] Say it with me: "A number of people equivalent to everyone I've ever met or will ever meet will die this year by automobile in America." And maybe one of them will be from a self-driving car.


I bet you like to run untested alpha builds in production environments too.

Be my guest and risk killing yourself by beta testing dangerous self driving vehicles and medical procedures that haven't gone through their trials.

I risk nothing by not participating.

Just do us all a favor and try not to take out others For Science™. Enjoy doing someone else's QC for free.


>I risk nothing by not participating.

But you do, not participating has opportunity risk. It's an elementary CBA - give or take 40 thousand people will die, which could include either or both of us, for every year of delay in self-driving. Be it for 'prudent', 'common-sense' trials or otherwise.

It's scary as hell[0] that there are otherwise smart people who don't see the car-shaped gun already pointed at them just because it has grown familiar.

[0]Because that line of thinking will cost tens of thousands of lives.


> But you do, not participating has opportunity risk.

You're right it does, but I know what the risk is. I do not know what the risk is for said untested beta product.

Please valiantly be the better man by being society's guinea pig. I have a family and have reasons for living.

I would however recommend getting a life insurance policy so when the obvious happens you won't leave your loved ones high and dry.


Did the city pay for the damages?


Why should it? I don't see how the firefighters did anything wrong.


Firefighters practically salivate at the possibility of smashing the windows of a car parked in front of a hydrant they need to access for supply. This isn't much different.


So? In both cases, the cars are impeding their urgent work. It's not like firefighters are just going around smashing windows on cars that aren't doing something very, very wrong.


I agree that it's 100% justified in both cases and that Cruise definitely needs to fix their shit.


They have to do that to avoid a kink in the hose I think.


Correct, the supply hoses don't like bending much.


'Need to access'.....yes.....and your point? they salivate over saving lives! Stfu


I've talked to some that said it was one of the job perks. I don't blame them at all, it seems necessary and fun. It's one of those "instant justice" kind of things.


From what I can tell here there wasn't even a fire.


I was thinking more along the lines of 'who at Cruise got the ticket for running over a hose?' We need good answers for this question, because these days going after a big corporation is easier said than done. The laws need to be clear, the penalties appropriate, the recourse straightforward.


I sure hope so! If firefighters keep smashing them, these cars might get deployed to states where regulators are more accommodating about enacting some protection. Would be a shame if San Francisco didn't have these manslaughter machines around anymore.


Was there actually a fire?

> fire hoses on the ground in the area of active firefighting

It sounds like there wasn't.


I don't know what meaning of "active firefighting" you're thinking of that doesn't involve fire, but you can open the PDF linked in the article and see "On January 21, 2023, a similar incident occurred. San Francisco Fire Department Staff were on duty at the scene of a fire on Hayes Street near the intersection with Divisadero Street."


I don't know either but would not be surprised if for some non-fire reason they were plugging in a hose.

I don't know why firetrucks show up on medical 911 calls or why fire departments run the lifeguards. But they do all those non-fire things and even more.


As a volunteer firefighter I'd be surprised if they pulled a line when there's no fire. Cleaning up the line is hard work and not something we do for fun.


Is that a relevant question? Seems to me that the issue is that the cars ran over fire hoses, and ignored firefighters. Whether or not there was actually fire anywhere nearby seems completely irrelevant.


It's hard to understand how the regulatory fervor which extends towards most of "big tech" seemingly has so little effect on the autonomous vehicles industry.


should firefighters wear cameras so we can see wtf is going on out there?

i guess it would cost a lot of money, and prob be very wasteful, wreck privacy even more, etc.

but... i wouldn't mind knowing a bit more of what's going on out there, now that our streets are even more deadly beta robot playgrounds.

and, with all the new tech, maybe we can learn some things.


This is victim blaming. Why should firefighters wear cameras so an autonomous taxi company can avoid accountability?


victim blaming? who's the victim?


The fact that the NHTSA allows these "beta" self-driving vehicles is an affront to their mandate. Sorry, but I don't beta test self-driving cars, medicines, and other things that can cause bodily harm on a whim.

Thalidomide was considered safe and effective for pregnant women until it wasn't. You want to be a guinea pig, be my guest and while you're at it, go ahead and install alpha builds for your BE infrastructure.

Edit: the fact that this was downvoted speaks volumes to me about how people view safety. Sad!


To be honest, as a human driver I'd not give a second thought to driving over a hose.

Not saying that is right or wrong, just surprised this is apparently an issue worth smashing a car up for. I have no recollection of this being in the highway code, for instance.


Have you ever seen a firehose in action? It is commonly 2.5 inches in diameter, and my first reaction on seeing one is "that does not look like something I want to drive over." Completely separate from the fact that it is a firehose, if you put down a 2.5 inch diameter pipe on the road, I'd feel the same way about not driving over it.


They're 45mm. So like a coke can in diameter.

Speed bumps - which are everywhere - are allowed to be up to 100mm: https://www.legislation.gov.uk/uksi/1999/1025/regulation/4/m....

I drive over speed bumps all the time and they're more than twice the height.


Seriously? You'd just drive over a firehose because it was in your way?


Why shouldn't I, if it's not illegal to do so? If it's a bad thing to do, it would say so in my state's driver's handbook, wouldn't it?

In reality, yes, I'd back up and find another way, if the hose appeared to be in use fighting a fire. I wouldn't want to cut off the water even momentarily. But that falls into the category of "Not being a dick behind the wheel," rather than the category of "Obeying the law." If we don't want this to happen with self-driving cars, then we need to change the law.


I've ridden over cleaning hoses (and one time a boat's hose that I assume was supplying water or fuel or something) without a second thought. Obviously I'd be careful around an emergency vehicle setup with lights and sirens and all that, but just a random hose lying across the road? Yeah I'd probably drive over it without thinking.


Yeah I would.


You will at a minimum get a ticket, and very likely you will get your car smashed by some very angry firefighters / police. If you rupture the hose, you will also get an extremely expensive bill to replace it. If the hose catches or the water pressure drop screws up something on the firefighting side or results in injury or other damage, you'll be held liable for that too.

California Vehicle Code § 21708: "No person shall drive or propel any vehicle or conveyance upon, over, or across, or in any manner damage any fire hose or chemical hose used by or under the supervision and control of any organized fire department."

I dunno man, that's an enormous amount of downside to take in exchange for avoiding the minor inconvenience of just finding a detour so as to not drive over the hose.


That is unintentionally the best argument for autonomous vehicles I've seen in a while.


Firefighters don't care about your car windows if you get in their way. Do an image search for "fire hose through car" if you want a good laugh. Park in front of a hydrant and firefighters will smash your windows and put the hose straight through it.


The article makes it clear driving over a fire hose violates California’s vehicle code.



