I've lost count of the number of people I've talked to who think that neural nets means we've created brains that will just magically learn how to do new tasks. So we just need more training and then automated <whatever> is just around the corner.
Tesla et al., however, don't have the luxury of ignorance to explain that away: they know what the technology is and isn't currently capable of, but they don't want to admit it.
It's all artificial and no intelligence. Statistical pattern matching, no matter how sophisticated, does not understand "why" by conceptually reasoning about a space/time model of the world the way we humans do.
If the explicit goal is to create a human intellect, then sure, there's a really interesting conversation there—one that is happening constantly in the DL/AI research community, in which virtually no one believes that we're close to AGI or that current deep learning is going to achieve it.
But that's explicitly not the goal that 99.9% of neural networks are designed with. Their traditional use case is where they excel: programmatically approximating functions that are exceedingly hard to approximate manually.
This includes but is not limited to image recognition, speech synthesis, recommendation (including search), fraud detection, ETA prediction, even medicinal chemistry.
While I agree that AI in current forms can be very useful, I believe that the problem of e.g. driverless taxis requires understanding of other humans and empathizing with their intentions to be truly viable. Driving is a social activity, and the current self-driving tech is about as convincing as trying to carry on a conversation with Alexa at a party. I do believe that we need AGI before self-driving will be more than a better cruise control.
I think the better cruise control is very useful and I love to see it, but Tesla's marketing of it as "full self-driving" is disingenuous at best, and industry-chilling + deadly (as we've seen) at worst.
Because people are using deep learning on single-lens cameras to replace depth perception... and then wondering why the cars that do this run into stationary objects with flashing lights. https://static.nhtsa.gov/odi/inv/2021/INOA-PE21020-1893.PDF
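To make concrete what "depth from a single camera" actually is, here's a minimal sketch using the publicly available MiDaS monocular-depth model via torch.hub (model and transform names as in the intel-isl/MiDaS hub repo; the image file name is a placeholder). The point is that the "depth map" is a learned statistical estimate, not a measurement the way lidar or radar gives you one.

```python
# Minimal sketch: monocular depth estimation with a pretrained model (MiDaS).
# Assumes torch, opencv-python, and network access for torch.hub.
import cv2
import torch

# Load a small pretrained monocular-depth network and its matching transforms.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform
midas.eval()

# "road_scene.jpg" is a placeholder file name.
img = cv2.cvtColor(cv2.imread("road_scene.jpg"), cv2.COLOR_BGR2RGB)

with torch.no_grad():
    # The output is a *relative* inverse-depth map inferred from texture and
    # context -- a statistical guess, not a physical range measurement.
    pred = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()

print(depth.shape)  # one relative-depth estimate per pixel
```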
No one really cares about where deep learning works. People are complaining about all the areas where deep learning is failing, with dramatic and deadly results.
The semantic understanding problem, more generally, is under-acknowledged in autonomous driving.
A human can tell the difference between a child standing by the side of the road, about to throw a ball into it, and a child standing at the side of the road, waiting for a bus. A human will slow down in anticipation of the likely outcome. A robot without state awareness will be extremely limited in its available responses.
Without a useful state model of the universe (i.e. concept awareness), you're limited to purely reactive behaviors.
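A toy sketch of that difference (all thresholds and numbers invented purely for illustration): a reactive controller only looks at the current observation, while even a crude state model extrapolates the child's motion and slows down earlier.

```python
# Toy illustration (hypothetical numbers): reactive vs. state-aware braking.
from dataclasses import dataclass

@dataclass
class Pedestrian:
    lateral_m: float      # distance from the road edge (m); <= 0 means in the road
    lateral_speed: float  # m/s moving toward the road

def reactive_brake(ped: Pedestrian) -> bool:
    # Reacts only to the current observation: brake once the child is
    # already at (or basically in) the roadway.
    return ped.lateral_m <= 0.2

def predictive_brake(ped: Pedestrian, horizon_s: float = 2.0) -> bool:
    # Crude state model: extrapolate the child's position a couple of
    # seconds ahead and brake if they are *likely* to enter the road.
    predicted = ped.lateral_m - ped.lateral_speed * horizon_s
    return predicted <= 0.2

child_chasing_ball = Pedestrian(lateral_m=1.5, lateral_speed=1.2)
print(reactive_brake(child_chasing_ball))    # False -- waits until it's too late
print(predictive_brake(child_chasing_ball))  # True  -- slows in anticipation
```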
That's still ignoring the problem. "Self-driving" tech is nowhere near that. You gotta set your expectations correctly.
We're at the "Firetruck with flashing lights was hit at full speed on FSD mode" stage of the problem. This means that the depth-field mapping broke. The car was unable to tell how far away the firetruck was, and plowed full speed into the firetruck.
It's very telling that the other self-driving companies are using LIDAR to build the depth map, instead of trying to create depth maps through deep learning.
Entirely agree. The problem is that most people don't understand that and easily fall prey to thinking that AI is a magical black box that can solve any problem you throw at it. In no small part because of all the hype by salespeople and the media. The reality is that NNs are great but only for some types of problems AND where there is a high tolerance to false positives and negatives. Clearly, this does not include problems where safety is critical (unless you can really demonstrate that AI does better at safeguarding than humans AND you can convince the public to not be scared).
All true. But due to the hype we often see people trying to use overly complex ML models in problem domains where simpler deterministic statistical techniques such as linear regression analysis would be just as accurate with lower costs and better testability.
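Concretely, the "simpler deterministic technique" is often just an ordinary least-squares fit: a few lines, closed-form, and trivially testable. A sketch with made-up data:

```python
# Ordinary least squares in plain numpy: deterministic, cheap, easy to test.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=200)  # made-up linear data

# Design matrix with an intercept column; lstsq gives the closed-form fit.
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"y ~ {slope:.2f} * x + {intercept:.2f}")  # recovers roughly 3.0 and 2.0
```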
The issue is that people think neural net technology is AI when, in fact, there is zero intelligence there. Which is fine, except when you have people like Elon Musk, Google, and others thinking the intelligence is there, and that if you just throw more processors and data at it, it will magically emerge. We now face a growing number of non-intelligent cars on the road starting to kill people, so it's a real and urgent problem: everyone who uses neural nets needs to, at minimum, understand that the tech will never give intelligence, and communicate that fact, and their goals in using it, to their customers.
The problem is the unboundedness of the failure modes of (the current generation of) AI, not how well it can approximate hard functions within regions of interest (which are essentially always a small subset of the domain of said hard functions).
The main problem is human overconfidence in the resulting functions, which are often measuring things humans didn't realize were embedded in the data (e.g. with tumor recognition), or are utterly hopeless at representing the infinite edge cases that the real world can present (e.g. self-driving cars).
It's one thing to have hoped that these methods could solve these problems when the improvements were coming rapidly, but there will always be a limit to how well these systems can perform. And the problem is that they fail in entirely non-intuitive ways, making human oversight to correct for errors very difficult or impossible as well.
That doesn't seem true. In the short term, AI algorithms clearly understand why they do things. Yes, the time horizon is obviously shorter (not in games, but in the real world, sure), but the same can often be said of humans.
If you watch humans responding to animals that don't sense the world the way our eyes do (e.g. bats, insects), fuckups are a constant. We are very bad at interacting with anything that doesn't have something similar to our eyes to observe the world.
And third, the world has almost entirely been rebuilt to compensate for human observation flaws. It's not just staircases having a step height that works well with humans; highway intersections, for example, have been changed 100 times until we found designs that humans respond to in a manner different from slamming into the split. The same is true for many intersections. (I first started realizing this when reading an article about an intersection with a bridge that was modified because 5 people died when a car crushed them against the side of the bridge. It was redesigned.) Now we find that an algorithm with an entirely different set of observations makes different mistakes... not really that strange. Perhaps we should start modifying the streets that algorithms misjudge.
For example, the warning cones used when there's an accident or road works or the like have also been adapted many times because version X was "causing too many accidents".
So in a bunch of cases it's neither that humans don't have big observational flaws nor that algorithms have many more. It's just that we largely eliminated the human ones. Not by eliminating them from humans, but by eliminating them from the world.
Hey man, if we’re rebuilding roads for the sake of self driving cars, let’s just go back to ubiquitous light rail instead, like we had before cars got popular. This whole self-driving industry is so ridiculous when you consider that this has been a solved problem for over a hundred years..
The problem is that rail is horrible at traffic handling and uses time multiplexing. One Gilded Age unethical activity was deliberately creating "traffic jams" on competitors' lines. While rail is good and could use expansion, it makes a dubious complete replacement. It works wonderfully with shared routes but fails at handling the "amalgamation" part.
Even Japan still uses trucks for the last mile and they have embraced it enough to have "bullet train suburbs" around stations.
> And third, the world has almost entirely been rebuilt to compensate for human observation flaws.
I don't entirely agree with what I think your point is. Fundamentally, humans are pretty great at using context to work their way through a variety of unfamiliar situations. The work we do on intersections is about tuning. Even in a bad intersection with horrible flaws, 99.9% or more of all humans navigating it will be successful. The reason we keep tuning them is because our tolerance for death is zero. 1 death for every 100M miles driven is pretty good, but many people still find it completely intolerable. We're going to keep tuning.
But I don't think that means that making roads safely navigable by algorithms is going to be a simple matter of tuning them.
> Even in a bad intersection with horrible flaws, 99.9% or more of all humans navigating it will be successful.
I think you will almost universally see that everything in a human slows down a lot when dealing with unfamiliar and/or difficult situations. In driving, this easily causes damage.
When difficult enough we start relying on social behavior ("you go first and tell me how it went") to find something vaguely resembling acceptable performance, then go away and never touch it again.
Proposition to change the term "artificial intelligence" to "learned habits".
There are a lot of tasks that we humans do the same way a machine does: repeating a set of mental and/or physical patterns until they become second nature to us. Those are called "habits", and they are precisely what machines are good at doing.
True. I find it helpful to think of current DL models as a form of the reptilian brain — information processing and, at best, instinctual pattern recognition.
Intelligence is a different kind of processing. It resides in the particular form of processing most often found in the mammalian brain — a processing we know intimately as conscious experience. Every human thought, word, and innovation formed within human consciousness. There's no difference between consciousness and intelligence — they are the same.
It’s here at the “hard problem” that most (but not all) ML research turns aside to follow the “bitter lesson”, hoping that the difference between instinct and intelligence is merely one of scale.
But as OP points out, the difference is one of kind.
Even if our ML systems were meaningfully intelligent, there's still the issue of proper training. You can't teach humans to be safe drivers by showing them a huge slideshow of dash-cam images. Why do we expect ML to do any better?
> Even if our ML systems were meaningfully intelligent, there's still the issue of proper training. You can't teach humans to be safe drivers by showing them a huge slideshow of dash-cam images.
That's an interesting concept to explore:
I would guess that videos could improve people's driving. Imagine new drivers: showing them videos of different situations, actions, and their outcomes may help. The same videos might not help an experienced driver, but they might be helped by videos of more complex situations or by videos tailored to a specific driving skill.
But I'd be interested in research: When does such training help people and when does it not? What aspects of the training are effective or not?
And can that be applied to ML? It may be the old fallacy of conceiving of computers as 'thinking' like people, which Dijkstra compared to conceiving of submarines swimming like us.
I could see videos being effective. When I was younger I watched a lot of those "near miss" / "driving fails" youtube compilations, and I think they gave me a better intuition for how things tend to go wrong on the road. It would be interesting to see a drivers ed program that included material like that.
On the other hand, when I watch a dash-cam video, I already have an understanding of how drivers think, how pedestrians behave, how weather conditions affect driving, etc. I could watch a video and tell you "the driver ran the stop sign because it was hidden behind the tree branch, and hit the other car because the road was wet and they couldn't brake effectively". I don't know if I could learn to recognize those subtleties from watching video alone, which is what it seems like we're trying to achieve with ML.
Does Magnus Carlsen understand why chess moves are good? Does high depth stockfish or AlphaZero understand why moves are good? Why is Carlsen so relatively bad at chess with his understanding of the why?
> Why is Carlsen so relatively bad at chess with his understanding of the why?
I think you have it backwards. Stockfish probably could tell you which specific 30-depth line changed its evaluation, while a human player is much more likely to play based on feel and intuition.
Your certainty that it doesn't exist, to the level of alarm you express, is just as ridiculous as your argument that "Tesla, et al" are lying about the capabilities of AI.
Yes, most people do not know the difference between ML, AI, neural nets, and computation. Nonetheless, we've reached the point in humanity where there is no question a Pandora's box has been opened. There is a very real reason why there would even be gag orders on public information if an entity achieved some level of strong AI.
And to your point about it just requiring more training: yeah, it kinda is that simple for the majority of tasks, which is also enough to warrant serious contemplation. A wide breadth of weak-AI solutions that fake "strong AI" will probably be more dangerous long-term than a true "strong AI" solution, due to the fine-tuning problems it would naturally have.
Big discussion. Overall we need to be less certain about the state of things, because there is very good reason why such an event would _not even be obvious when it happened_. A stretch of uncanny valley at most, and then you realize: oh shit, AI has been running the world since... APT and DDoS patterns.
I’ve been following FSD from v8 all the way to the current v10.1. It’s nearly magical and it’s only improving. I can see it being pretty rock solid within the next 3-5 years.
To add context: in aviation, if they find a bug in the software running an airplane's engine, they do ground planes until they know more about the impact and mitigation.
Not saying cars should do the same, just that it's not absurd to consider it.
Well, to be honest, something like Tesla's Autopilot would never have been certified in aerospace anyway. And yes, the question of what authorities would do in such a case is a valid one. It seems like authorities are struggling with functions like that; were it simply hardware, e.g. faulty airbag sensors, recalls would already have been issued.
It's funny how super pro-regulation people never consider the lives lost because of slow regulation. If Tesla solves driving it will lead to tens of thousands of lives saved a year. Since 2015 there have been 20 deaths in 16 accidents related to Autopilot, globally. Doesn't sound like a lot in comparison to highway death numbers over 6 years.
On a per-mile-driven basis, Tesla's self-driving accident rate is somewhere in the region of 5-10× that of human drivers, IIRC. And keep in mind that the self-driving here is already in a state of selection bias where it's more likely to be used in safer conditions (i.e., fully-grade-separated highway driving in clear conditions, rather than dense urban environments in inclement weather).
Maybe you were thinking the autopilot accident rate being 5-10x _less_ than human drivers?
> In the 2nd quarter, we recorded one crash for every 4.41 million miles driven in which drivers were using Autopilot technology (Autosteer and active safety features). For drivers who were not using Autopilot technology (no Autosteer and active safety features), we recorded one crash for every 1.2 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 484,000 miles.
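Putting those three quoted figures on the same per-million-miles footing makes the comparison (and the selection-bias objections below) easier to see; a quick back-of-the-envelope calc, taking Tesla's numbers at face value:

```python
# Normalize the quoted crash figures to crashes per million miles driven.
miles_per_crash = {
    "Tesla, Autopilot engaged": 4.41e6,
    "Tesla, no Autopilot":      1.20e6,
    "US fleet (NHTSA)":         0.484e6,
}
for label, miles in miles_per_crash.items():
    print(f"{label:26s} {1e6 / miles:.2f} crashes per million miles")
# Roughly 0.23 vs 0.83 vs 2.07 -- but Autopilot miles are mostly easy highway
# miles and Teslas are newer cars, so the populations aren't directly comparable.
```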
I know there are several wildly different statistics for Tesla's crash rate. Some of these statistics suffer from poor data quality, and the first big study saying "Tesla is safer than average drivers!" was really affected by this (I don't recall the exact details, but the consensus of pretty much everyone not Tesla was the statistics were complete garbage).
One immediately obvious issue is comparing Tesla vehicles, which are all relatively new vehicles typically owned by relatively older drivers, to all other passenger vehicles.
The other immediately obvious issue is that the self driving features of a Tesla probably work best in ideal conditions, where human drivers also do.
You really can’t compare the average mile with the average autopilot mile. The best analogy I’ve heard is comparing human aircraft pilots (takeoffs, landings, weird old planes, all kinds of pilots) against aircraft autopilots only cruising at 30,000ft.
And how many disconnects do Teslas log per million miles on Autopilot? Because every disconnect is the car saying "I'm not good enough, humans are better than me".
More than that, if it's only doing the driving in ideal conditions, then it's unfair to compare the two. You need to compare against the human drivers in ideal conditions.
It's funny how people ignoring regulation, that developed during decades for very valid reasons, usually do so to further businesses and projects that benefit greatly by doing so. Tesla is one example when it comes to self-driving, Uber another one when it come to the Taxi business. All for the greater good of course.
Perhaps it can be simultaneously true that Uber is providing a good service to customers against an outdated and badly-run industry, while they are doing many bad things.
> If Tesla solves driving it will lead to tens of thousands of lives saved a year.
That’s a big if. If we solved aging, we’d save nearly all lives, so please don’t regulate my medical practice. I expect to have aging solved and millions of treatments rolled out by 2020.
Certainly not, but if we invent a life-saving medicine that 1 in 1 million people have a fatal reaction to, we'll still consider it enough of an overall positive for humanity to celebrate it as a technological win for society.
The ratio isn't 1:1 or even 1:10, but there is a line somewhere where X deaths <caused by new tech> is acceptable because of the X^Y lives saved <by new tech>.
See also: Most modes of transport, most medicines and surgical procedures.
I’ll be happy if “AI” cars are safer than human drivers on average. Zero accidents/deaths can come later.
If everyone was flying their personal planes around and constantly banging into each other a minor software bug would not ground any planes. It would be more like a malfunctioning airbag recall.
This has already been happening but self driving enthusiasts just keep thinking it is great. And regulators have done very little and otherwise smart people like Sandy Munro get extremely mad at what little regulators have hinted at. It is wild times.
Munro sat down with Musk and enthusiastically praised FSD but complained about how roads were often improperly painted. Musk--to his credit--said the car needed to work safely even in the presence of paint errors. (Video is on Munro's YT channel.) Munro is very critical of many aspects of car design; I cannot understand why he's such a fan of FSD.
I tried FSD on the $200/month plan and dropped it: It makes the car unsafe. To command a lane change you hold down the turn signal stalk. If you fail to hold it down long enough the car suddenly swerves back to the lane it was in. This is (to say the least) disconcerting at 80 mph.
FSD can also suddenly decide to do weird things that are difficult to correct even when you're paying close attention. It's unnerving. Ordinary autosteer (which is included with every Tesla at no extra charge) works well enough for me and it fails in more predictable ways; it's easy for me to build a mental model of its limitations. I'll stick with that.
There is a drastic change in his opinions of Tesla before he owned Tesla stock vs. after.
I'm confident that this would apply to just about anyone that has ever bought, and then sold, Tesla stock. Because, and I don't mean to overstate the obvious, if one's opinion didn't change then why sell the stock?
>I tried FSD on the $200/month plan and dropped it: It makes the car unsafe. To command a lane change you hold down the turn signal stalk. If you fail to hold it down long enough the car suddenly swerves back to the lane it was in. This is (to say the least) disconcerting at 80 mph.
This isn't true. You just tap the stalk down until it clicks. Sometimes if it detects issues in the other lane it will not complete the lane change and go back into your existing lane. But you do not have to hold it down the whole time.
> I tried FSD on the $200/month plan and dropped it: It makes the car unsafe. To command a lane change you hold down the turn signal stalk. If you fail to hold it down long enough the car suddenly swerves back to the lane it was in. This is (to say the least) disconcerting at 80 mph.
That's not quite accurate. Unrelated to autopilot / FSD, you can do a small press on the turn signal and it will signal three times and then stop signaling. You can also push all the way down and it will signal until you turn it off. You don't have to hold it down.
FSD will only continue switching lanes while the turn signal is on, so if you do a small press down, you may see the behavior you described.
To clarify I'm talking about a lane change initiated by me rather than merely allowing one recommended by the car. The latter only requires a brief flick of the turn signal stalk.
But the former required me to actively hold it down for several seconds--much longer than a full turn signal would require. I had to hold it down until the car was completely within the stripes of the adjacent lane or the car would immediately swerve back into the original lane. I tested it many times on a traffic-free road. Might have been a setting; I don't know.
Had the opposite experience on a test drive: I purposely (within reason) tried to get it to merge into a lane where a subsequent onramp had a car merging in from what was likely its blind spot, which would force it to move back over.
Was super impressed: it seemed to correct early and be thoughtful about consequences a few seconds down the road.
It is great. I own a Model S with Autopilot, and have since early 2018. I have driven it across some dozen states and two Canadian provinces. It is far from perfect, but it is very, very good, and makes long drives much less stressful and tiring.
And there have been at least two incidents that I can recall where Autopilot saved me from a wreck.
I would not choose to go back, and would buy it again without hesitation.
We didn't get the FAA regulations we have now until after a ton of people died in aviation accidents and the aviation industry figured they need to get their image together.
So... yeah it's gonna take a lot of deaths for this to get regulated. It's a shame cuz we already basically know how to regulate this.
The same thing that happens to a human that is at risk of misclassifying a fire truck for a bridge: Once the odds of this happening are low enough (say 0.00001%) they can pass a driver's test and we give them a drivers license.
The problem is that's 1 person. For Tesla's autopilot, that's the hundreds of thousands of cars running that broken model. The scale of the problem is much larger.
I think that should very much depend on the severity and the statistical impact of the bug. As long as the bug doesn't raise the deaths-per-mile-driven number above what humans would be doing, I would say it would be ok to just leave it running and update once it is fixed.
The goal is to get cars driving better than humans (which, btw, they already do by a huge margin, even with the FSD beta). So disabling the autopilot because of a potential bug would be condemning a lot more people to death than leaving it in. Interesting variation of the trolley problem, I guess.
The idea is to have self driving safer than our average driver who’s always texting and driving, sometimes driving under the influence and sometimes just too stressed to pay attention to the road.
Self driving cars will make fatal mistakes but I have no doubt that tesla very soon will be able to be safer than the average driver.
Plus, the more autonomous vehicles on the road, the safer it is.
Finally, traffic accidents are currently the leading cause of death for 30-year-olds, so we aren't exactly replacing a perfect system.
Ok. But this idealism doesn’t work when Tesla has clear failure modes that simply shouldn’t happen during normal driving.
Humans generally do not give their cars haircuts by slamming them under a stopped cargo truck. They do not generally smash straight into stationary emergency vehicles.
We can’t handwave basic safety issues away by saying “in aggregate, they perform better than humans in most conditions.” The basic safety issues get people killed. Leaning on some “average driver” fallacy is a way to ignore issues core to the tech stack.
Slight quibble: I very much agree with your overall point but human-driven cars slam straight into rescue vehicles all the time. They're often intoxicated but not always. There's just something about flashing red lights that attracts driver attention and all too often the car follows.
This is why we put the BRT (big red truck) behind the rescue scene and park it at a 45 degree angle. It weighs 15x as much as a car so it won't move much when a car hits it, and we want the car to bounce off sideways and away from the rescuers (us) when it happens.
Assuming that in aggregate they do kill fewer humans, then this is a rare instance of an actual trolley problem. Do you want to kill fewer people by changing the norms and reasons?
Why doesn't it work? Isn't all of society always abstracting away the individual stories behind statistics? (Crime is down by 10%! GDP is up by 0.2%! Birthrate is down by xx.)
We're used to it, we're happy about it, and we use it to make decisions (buy a house now, move to a new city...).
Plus, in those rare cases that you mentioned, at least the same system will feed the incident back to make sure it won't happen again in the future... so in a morbid way there is a way to learn from such fatal incidents. You can't really say that about such incidents when the driver is a human.
This seems to be a reasonable take to me. Systems like Autopilot make driving safer. I'm worried that the conventional wisdom on HN is (a) that systems like Autopilot must be perfect, which will never be attained, and (b) extremely loath to recognize the times Autopilot has saved lives.
Even when a critical problem is found, I can't think of any example of cars being disabled remotely - either through technical means or laws saying 'you may not drive this car'.
By the time people are allowed to let their cars drive unsupervised, the crash rate of AI versus human will probably be 1:10 or so.
So even when those cars ram into fire trucks from time to time, it would be better to let them do their thing. Otherwise people will grab the steering wheel, drive drunk, sleepy, angry etc and ram into all kinds of things again.
Currently, there are 6 million car accidents per year in the USA. Almost 100 people die in car accidents every day. So there is a ton of data to make the decision.
The entire way we approach driver liability in general is insane.
The Attorney General of South Dakota was looking at his phone, swerved into the shoulder, killed man and then left the scene. He claimed he thought he hit a deer, even though the victim's head went through the windshield and the victims glasses were later found inside the car.
What consequences did the Attorney General face? Was his licence revoked or suspended? Did he serve any jail time? Did he resign? The answer to all of these is "No." The only result was two misdemeanors and a 500 dollar fine.
So yes, accepting occasional inhuman errors from a system that is 10x safer than human drivers (hypothetical; no current system has this record) may also be insane, but it would still be far more sane than the current approach to human drivers.
I'm using a high profile example to point out clear issues with how we handle human driver responsibility. If your negligence directly results in you killing someone, you should lose your license to drive.
If we someday reach the point where autonomous driving systems are actually 10x safer than human drivers and those systems still have issues with hitting emergency vehicles, we should absolutely hold those companies responsible for those accidents.
My point is that not holding those companies responsible would be less insane than our current practice of letting clearly negligent drivers continue to drive with minimal consequences after they kill someone.
So instead of being outraged about this hypothetical future, why not be outraged about the insane lack of consequences that drivers face currently?
That seems more like plain old political corruption than a problem with the way driver liability is handled. If you or I did that we’d almost certainly be facing prison time.
In aggregate, humans have a lot of failure modes when driving, but it's also difficult to compare aggregate data with specific AI failure modes.
I have been driving for almost two decades with 0 accidents. I'm not saying I can't have a lapse of judgement or do something stupid going forward, but I certainly won't misclassify an object, nor kill myself over it.
I hypothetically want bad drivers to be replaced by AI because it's likely already better. But replacing everyone with AI (at the current generation of AI, which isn't the first, nor the last) will undoubtedly lead to tons of avoidable deaths, and I'm not keen on drawing a lottery ticket for it.
I'm more interested in replacing _other_ drivers, more than myself. Really if we could replace the bottom 10% of drivers with AI, even at the level we have today, I imagine that would be a net improvement. But that isn't really a feasible program. As for future, improved AI, I would trade my own driving for the more efficient and safer system.
That's really the move: having another class of driver's license that lets you manually drive. Anyone can get in an AI self-driving car, but if you want to manually drive you have to pass a difficult skills test to prove you could outperform the AI.
People already let their Teslas drive unsupervised, they're just "not supposed to". That will be increasingly permitted over time, either implicitly or explicitly. It's not a switch that will be flipped nationwide once the data hits a threshold.
I am starting to wonder if we’re pretending to live a little further into the future than we realistically can right now. Modern Silicon Valley is built around businesses with software margins. If you can scale via software, it’s huge. If we actually saddled these companies with their externalities or with doing what they say they do, would they still have been able to have software margins, or would they have had to wait until the tech/science was better?
It gives me a wary feeling when people talk about tech regulation and warn that it would change the internet as we know it. Like, if putting the externalities on the company means the company can’t exist as it does today, is that really so bad?
Point is, we/they all knew about AI's unavoidable flaws way before the s*it hit the fan, and about the sci-fi assumptions undermining the science. The problem is some states' wild deregulation allowing deployment in public, communal spaces (real and virtual), managed like a gigantic theme park, with Joe McJoes becoming the guinea pigs for such "well-advised" experiments.
Question for people here: Is it justifiable to accept these flaws in the short-term if it results in a car that has a lower-error rate than human drivers in the medium/long-term?
I considered this before posting, but I think that human disengagements are an important supervisory signal for the model. You can’t hire enough safety drivers to scale this process and releasing it is the fastest way to get there. Simulations are not going to get you all the way because you can’t simulate the real world perfectly. Do you have any thoughts on how to develop this without releasing it?
> You can’t hire enough safety drivers to scale this process and releasing it is the fastest way to get there.
I am sorry but I sincerely hope you are never made responsible for the release of anything remotely safety-critical.
Please re-read what you wrote. You are saying that because a business does not have enough money/resources to scale a process in real-world conditions, the solution is to release it and verify it in the real world, on public roads, risking real human lives?
If you don't have enough money to test it then you don't have an actual product/business.
Had to google aggro; no aggression intended at all, but I frankly found the severity of the GP's question too scary to let it stand unchallenged.
> I considered this before posting ...
To me this is not someone asking a hypothetical but someone who is aware that what they are about to write is controversial yet are still considering it to be the best way forward. I happened to strongly disagree with that.
If there is clear evidence that these cars would have lower fatality rates than human drivers (I don't think this evidence currently exists), then holding off release to continue development is potentially a moral bad, for the same reason that we end clinical trials early when they demonstrate clear life-saving potential that would save lives in the control group.
I don't think it is morally reprehensible to ask what other people's intuition around this problem is.
> Is it worth killing n random people, for a uncalculated and unknowable chance of saving n+m lives in the future. The pool of people randomly selected to die have not volunteered or consented to be part of this project. There are ways to achieve the same lifesaving endgoal without the upfront sacrifice of lives.
The part which really strikes me as morally reprehensible is where the companies are saving money on test drivers and controlled test environments and externalizing those costs onto every other driver sharing the road with their training data collectors
> Is it worth killing n random people, for a uncalculated and unknowable chance of saving n+m lives in the future. The pool of people randomly selected to die have not volunteered or consented to be part of this project.
This reasoning renders any governmental policy change of any sort impermissible.
It isn't coherent to apply these forms of deontological ethics to state action - a random set of people will die with both state action and omission of action, I see no reason why not to pick the option with the smaller expected number of deaths.
But this is all besides my original point: this is a legitimate moral debate to have and the rhetoric used by the above commentator was entirely uncalled for.
Hey man, I understand this is a sensitive topic that people have strong opinions on so proposing something that sounds controversial can trigger a lot of assumed conclusions. Can we take a step back for a sec?
First, before we assume that it is inherently risky to release this tech in the public, let's consider where the risk comes from.
There is some risk caused by the inadequacy of the technology to handle certain edge cases. This can result in the vehicle making dangerous maneuvers. However this risk is mitigated by allowing the driver to control the car as soon as the car does something wrong. I'd imagine that the vast majority of errors like this can be handled safely by a human driver who's vigilant and has control of the car.
Some unknown proportion of these might be unavoidable accidents caused by the self-driving software that a human would have probably made too (e.g. a deer running onto the road).
Another possibility is that the self-driving car causes an error that a human might otherwise not have caused and could not have been avoided by a human taking control. If you group the latter two cases together, and the accident rate is lower than that of the driver without self-driving enabled, you could argue that it has a positive impact.
The other source of error is human error. Some argue that self-driving makes people complacent and that they might not be as vigilant as if they were operating the vehicle themselves. I think companies are trying to address this by implementing driver monitoring systems; however, this is completely avoidable by the person behind the wheel, and it's a stretch to say that self-driving cars are risking human lives because of this.
Hopefully I have conveyed the reasons why I don't think public testing is necessarily inherently risking human lives (of course this is dependent on the state of the tech being released). I'm sure you understand that every company has a limited runway and a window of opportunity to scale their technology. I am 100% with you on making sure that the product doesn't risk people's lives recklessly. However, I think the optics of this make it seem like far more lives are at risk than reality. I'm open to changing my mind as more information about the safety of the tech comes out but I am not de facto against it for the reasons I mentioned above.
Waymo appears to disagree with you: they ran tests with safety drivers and in controlled conditions for a very long time, and they eventually got to the point where there are customers riding in unmanned taxis. Tesla is just applying a "move fast and break things" approach to pedestrians' lives in order to satisfy shareholder interests and executive egos. They're not the only ones, as we saw from Uber's absolutely shameful performance that eventually led to a death (safety driver involved, but not solely responsible, of course).
I was under the impression that while Waymo did some tests unsupervised, most of the rides that non-employees can hail still have a supervisor. Is that not true?
No, absolutely not. I'd say that Tesla flirts with the opposite outcome, that we saw with nuclear power. Early catastrophic failures resulting from premature deployment can produce a very reasonable revulsion away from the technology, such that the technology will have a PR problem well past the "break even" point. That folks continue to be bullish on self-driving tech despite cars repeatedly hitting stopped emergency vehicles at speed tells me that tech enthusiasts haven't learned from history.
The problem nuclear power has had is nuclear reactors are a great tool to build materials for a nuclear bomb. If everyone accepted them then access to nuclear weapons would have been a lot easier.
It's about risk trade off. If it's 2x better than humans is it worth it? 10x? 100x?
36,096 deaths in 2019 in U.S. ~1.3 million worldwide (I couldn't find injury statistics this morning)
If the flaw is found before someone dies from it I'm not concerned. If 1 person dies instead of 10 I'm all for it. (I'd take 2x better than humans any day)
The problem is, it's not a level playing field. 100 incidents of a person ploughing into a bus queue and killing a child, each is news for a day, everyone accepts the tragedy and moves on. A self-driving car does it once though, and the mob will be at the factory gates with torches and pitchforks.
That's an interesting problem. I wonder what social discussions need to happen to reach acceptance for "AI-assisted driving". Mandatory AI braking to prevent driving into a bus queue?
I'm also curious how to find the people who do object vs theory-crafting all possible concerns people could have.
We can’t ignore core usability and basic safety issues by saying “on average, this is better.” End users can’t be expected to know that they’ll probably be fine, but an edge case they don’t understand will kill them one evening when they drive past a stopped ambulance.
Why not? If it saves 30,000 lives per year is that not a meaningful improvement? I suspect my understanding/acceptance of "unknown risk while driving" is flawed in some way, or at least very different from the general populace.
The same argument would justify human medical experiments, and we know those actually work and result in faster access to improved drugs and vaccines.
Unlike autopilot which could still be crappy after 10 years
Yet we have made them illegal after some rather nasty precedents. Seems like history repeats itself.
Hm. I find this very analogous to the mRNA vaccine debate: how high an error rate of a new technology do we accept, and to what degree does that choice have to be made at a community rather than an individual level?
I'd feel best if that decision was made at the smallest community level possible, so ideally county by county rather than federally. That lightens the burden of politicians making the wrong choice or being a citizen who disagrees with the right choice.
The error rate would certainly go up at first due to the additional complexity, but the cost of each error goes down. Besides, I have to switch the speed at which I'm driving far more granularly than at county lines, and no one's worried about a car's ability to handle that.
I don't disagree that sometimes central decision making is good, but in a complicated situation where any decision will have some negative consequences depending on the specific context seems like a textbook case of not being one of them.
This comparison gets me the most. People bringing up human error rate vs machine error rate. They’re not comparable at all. And it will be evident when you have majority autonomous cars on the road. Human errors are more or less RANDOM. Machine errors are NOT.
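A toy simulation of that difference (purely illustrative numbers): with a similar overall error rate, independent human mistakes stay scattered across scenarios, while a shared model's mistakes are correlated, so every car running the same weights fails on the same scenario.

```python
# Illustrative toy: independent (human-like) vs. correlated (shared-model) errors.
import numpy as np

rng = np.random.default_rng(1)
n_cars, n_scenarios, p_err = 100_000, 10_000, 1e-3  # invented numbers

# Humans: each driver fails independently on each scenario.
human_failures = rng.binomial(n_cars, p_err, size=n_scenarios)

# Shared model: the fleet's errors concentrate on the small set of scenarios
# the model systematically gets wrong -- every car fails on those.
model_bad = rng.random(n_scenarios) < p_err
model_failures = np.where(model_bad, n_cars, 0)

print("mean failures per scenario:", human_failures.mean(), model_failures.mean())
print("worst single scenario:     ", human_failures.max(), model_failures.max())
# Comparable averages in expectation, but the shared model turns rare, scattered
# mistakes into "every car hits the same firetruck" events.
```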
1. I’d challenge the premise that human errors are random. There are a ton of patterns that cause accidents including intoxication, low visibility conditions and tiredness. I haven’t done a statistical analysis but I’d hazard a guess that only a minority of human accidents are truly random.
2. Why does the randomness matter if the error rate is lower? Certainly if the errors are predictable, they can be discovered and fixed or avoided?
The only part that is random is whether it gets you this time. People follow too close almost constantly; if that caused a crash every time, only in the most rural areas would people be able to drive even one mile without a crash. Note that the cause is pervasive: following too close.
Others are bringing up tired, drunk, texting... All real problems, but following too close is universal to nearly all drivers.
Also, human failure modes are better understood and can be better anticipated: e.g. you can still have a somewhat predictive mental model of how a swerving drunk driver might behave.
We don't have a good frame of reference for how machines might behave with their failures, which means that accidents could be worse than they would be otherwise.
The severity of the accident is an interesting point. Though it intuitively feels like software has more ways to mitigate the severity of a crash once it realizes one is imminent than a human who might be asleep, intoxicated, or otherwise have a slower reaction time.
The machine's ability to recognize that it's about to crash may actually be one of the issues here, since often the self-driving/driving-assist car crashes are cases where the AI just completely misinterpreted the environment and made bad choices.
A human driver is somewhat likely to eventually realize what situation they've gotten themselves into (oh no, i can't stop in time) because of the multiple different feedback loops and information sources they're working with combined with their experience as a driver. For example, a drunk or very tired driver is operating with impaired decision making and response time, but they may eventually notice and respond - while an AI misclassifying a fire truck as a stop sign may very well continue misclassifying it until impact.
One way to mitigate this would be via sensor fusion - even if your vision or radar sensing fail, you can rely on data from other sensors to do things like apply emergency braking.
Unfortunately at least one vendor has decided to ditch radar, lidar, etc and just go with vision!
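For what a conservative fusion fallback could look like in principle, here's a hypothetical sketch (thresholds and interfaces invented for illustration, not any vendor's actual stack): if any available, independent range estimate puts the obstacle inside the stopping envelope, brake.

```python
# Hypothetical sketch of a conservative fusion rule for emergency braking.
from typing import Optional

def should_emergency_brake(
    camera_range_m: Optional[float],  # learned estimate; may be wrong or missing
    radar_range_m: Optional[float],   # direct measurement; may be missing
    speed_mps: float,
    min_gap_s: float = 1.5,           # assumed minimum safe time gap
) -> bool:
    """Brake if *any* available sensor reports an obstacle closer than the
    stopping envelope, rather than trusting a single modality."""
    threshold_m = speed_mps * min_gap_s
    readings = [r for r in (camera_range_m, radar_range_m) if r is not None]
    if not readings:
        return True  # no usable perception at speed: fail safe, not silent
    return min(readings) < threshold_m

# Camera misjudges a stopped firetruck as far away; radar sees 20 m at 30 m/s.
print(should_emergency_brake(camera_range_m=80.0, radar_range_m=20.0, speed_mps=30.0))  # True
```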
> Businesses are also shifting their focus away from “AI-as-a-service” vendors who promise to carry out tasks straight out of the box, like magic. Instead, they are spending more money on data-preparation software, according to Brendan Burke, a senior analyst at PitchBook. He says that pure-play AI companies like Palantir Technologies Inc. and C3.ai Inc. “have achieved less-than-outstanding outcomes,” while data science companies like Databricks Inc. “are achieving higher valuations and superior outcomes.”
Palantir is now a "pure-play AI company"? (And, for that matter, a market cap of $50b is 'less than outstanding'?)
Burke specifies "valuations" as well as outcomes, so the observation stands. (Not that Palantir is any kind of monopoly... If you will recall, the usual criticism is that it's just a consulting shop with zero moat or monopoly.)
It seems the flaws are too binary. 90% of the results are great, the other 10% absolute trash.
You can see this everywhere. For example, those apps that generate a non-existent person: a lot of the time the results are great except for that one spot, which makes the overall result useless.
Another example is the OptiX denoiser (NVidia). You can get very nice renders in a few seconds, which speeds up the workflow. But every time, it has areas with a lot of flaws. This doesn't matter when you are still working on something, but for production it is useless.
ML has its use in a lot of areas where the outcome doesn't have to be perfect. But I am still not convinced it is 'production ready'.
There's the notion that over time the flaws of the previous generation of technology are held up as traditional artifacts.
Things like:
- the "warm" sound of vinyl records.
- nostalgia for early myspace, tumblr, geocities, web design
- faux edison lightbulbs
- low vs high frame rate movies
I wonder if the flaws of all the current ML techniques will eventually be thought of similarly.
And at the same time, the UK government is looking at reversing the GDPR review right for automated decisions:
Article 22 guarantees that people can seek a human review of an algorithmic decision, such as an online decision to award a loan, or a recruitment aptitude test that uses algorithms to automatically filter candidates.
In May, a government task force set up to look for deregulatory dividends from Brexit, led by the leading Brexiter Iain Duncan Smith, argued that Article 22 should be removed because it made it “burdensome, costly and impractical” for organisations to use AI to automate routine processes.
The idea is part of broad-based plans for a big overhaul of the UK data regime after Brexit which ministers say will boost innovation, and deliver what Oliver Dowden, the culture secretary, has called a “data dividend” for the UK economy.
I advise companies on GDPR compliance, and art 22 is usually the least of their concerns. Why not offer a right to human review, if your algorithm is producing "legal or similarly significant effects"? If you're not 100% convinced in the accuracy and fairness of your automated system (two entirely separate GDPR rules), you can avoid issues down the line by offering individuals the ability to flag a dodgy decision and have a human look into it.
Is this another one of those GDPR articles that has no teeth? I cannot imagine how Google can keep running its spam filters, Facebook its automated bans, and Twitter its algorithmic feed while abiding by this.
Presumably it means a spammer can send a spam email, and then when it goes into the spam folder, has the right to ask google to review that decision.
Sounds fair enough, especially if the spammer either has to pay the costs of the review (ie. a few dollars), or is limited to being only allowed one review per month/year to prevent abuse.
Would I be happy for it to be driving around on the road? Probably.
Would I be happy for it to drive me, and it's 'my fault' if I don't notice it's gone wrong and kill someone? No.
So far Tesla (for example) seems nowhere near the point where they would accept responsibility for crashes -- they still always blame the driver for not paying attention.
I'll take 0.1% chance of death which I (arguably) can control, over a 0.001% chance of death which I absolutely cannot control at all.
It's so clear to me. Yes, taking a plane is similar, but I only take that risk a couple of times a year.
The 0.1% (to me) is a probability number as I have some control over each event. The probability is so low that it likely never will happen.
The 0.001% number is an eventual outcome number. "Run the experiment X amount of times, and death will happen 0.001% of the time". And it's more relevant than flying as we drive so much more.
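For what it's worth, how those two per-trip numbers compound over a driving lifetime matters as much as how they feel; a quick calc under the (made-up) assumption that each figure is a per-trip probability and trips are independent:

```python
# Cumulative risk over n independent exposures: 1 - (1 - p)**n.
def cumulative_risk(p_per_trip: float, n_trips: int) -> float:
    return 1 - (1 - p_per_trip) ** n_trips

trips = 2 * 250 * 40  # assumed: two trips a day, 250 days a year, 40 years
for p in (0.001, 0.00001):  # the 0.1% and 0.001% figures from the comment
    print(f"p={p:.5f} per trip -> lifetime risk ~ {cumulative_risk(p, trips):.1%}")
# The "controllable" 0.1% compounds to near-certainty over a driving lifetime;
# the 0.001% stays under ~20%.
```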
Anytime you get on the road you are exposing yourself to the possibility that a suicidal or drunk or elderly person with dementia will kill you.
Personally, I would like to have very rigorous driving tests for old people and remove licenses for people with DUIs. The obvious question then is, how are old people and drunks supposed to get around. Self driving cars seem like an option eventually. It makes you safer compared to the alternative, which is we generally wait until someone is killed before we permanently remove someone's license.
Nothing short of 100% reliability will convince me that handing over control to a closed-box AI is better.
And even then, one small "bug" could change that conclusion. It just takes one weird anomaly in the real world to mess things up. And maybe handling those things will eventually be perfected*. Maybe. But I don't intend to be the beta tester for that.
*I know the default is to hand control back to the driver, but then you may as well be driving (which I enjoy). "Shut up and drive" is far more fun than being your car's KPI manager.
Problem is, you are not alone on the road. And if somebody else does something really stupid, you can die as a consequence, without any chance to avoid it.
We are definitely not there yet, but I think of myself as a decent driver, and some assistance tools on high-end models are way better (just faster, probably) than me at predicting stuff. For the first time, this summer I drove a car that was quicker than me in an emergency braking situation: while I started braking, it depressed the pedal further, and I was quite surprised, 'cause the car in front of me hadn't begun braking yet.
There's also the fact though, that if you're elderly you can't really avoid being a worse driver. I know particularly in rural areas where there's no public transportation, a lot of elderly people would accept an error rate higher than what their own driving used to be. It seems pretty tough for the rest of us to deny them access to a technology like that while it's in the interstitial stage.
If errors in any subsystem surface out to the car, well then okay. But it's not unlikely that the overall system would deal with an error in the image classifier.
Pay wall so I can’t read the article. But can someone comment on what 0.001% error means in real world scenarios?
To keep the maths easy, assume 30 fps: 10 minutes is 18,000 frames. Read as a 0.1% rate, that's 18 wrong frames per 10 minutes, roughly 2/3 of a second; read literally as 0.001%, it's well under one frame.
If those wrong frames are spread out, maybe it's fine. If the model is failing on certain types of images, though, they cluster together, and 2/3 of a second is more than enough time for a serious failure.
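A quick sanity check of that arithmetic under an assumed 30 fps frame rate (the article doesn't state the rate, so this is just for intuition):

```python
# Back-of-the-envelope: misclassified frames per 10 minutes at 30 fps (assumed).
fps = 30
frames_per_10min = fps * 60 * 10              # 18,000 frames
for error_rate in (0.001 / 100, 0.1 / 100):   # 0.001% and 0.1%
    bad = frames_per_10min * error_rate
    print(f"{error_rate:.5%} error -> {bad:.2f} bad frames (~{bad / fps:.2f} s) per 10 min")
# Either way, if the failures cluster on one kind of scene (say, stopped
# vehicles with flashing lights), the clock-time framing understates the risk.
```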
Don't forget: the error might be reproducible. As in, it will error rarely, but at that one obstacle, with that one light config, it will error repeatedly and constantly.
AI-related car accidents and deaths will have special places and dates where they will repeat annually.