Why collision avoidance is harder for an AI-based system (medium.com/rebane)
118 points by IndrekR on March 26, 2018 | 140 comments



Hang on. That argument makes no sense. We've had subsumption architecture for a long time now (https://en.wikipedia.org/wiki/Subsumption_architecture). Subsumption architecture puts some of the intelligence in the lower-level systems. While the higher-level controls can tell the lower-level systems what they want, they can't make them do things that the lower-level system determines are dangerous.

So if a normal Mercedes has a collision avoidance system that automatically brakes, there is no reason why an AI-based system can't be built on top of it, with the collision avoidance system braking automatically without intervention from the higher AI systems. A subsumption system prevents the higher-level controls from doing something catastrophic like hitting a pedestrian, or, in biology, prevents a person from holding their breath until they die.
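
Roughly this shape, as a toy Python sketch (the function, the field names and the 2-second threshold here are all invented for illustration, not taken from any real vehicle stack):

  # Hypothetical sketch of subsumption-style layering: the low-level safety
  # layer can veto whatever the high-level planner asks for.
  def safety_layer(planner_command, range_to_obstacle_m, speed_mps):
      """Override the planner if a collision is imminent; otherwise pass through."""
      time_to_collision_s = range_to_obstacle_m / max(speed_mps, 0.1)
      if time_to_collision_s < 2.0:              # invented safety envelope
          return {"throttle": 0.0, "brake": 1.0}
      return planner_command

  planner_command = {"throttle": 0.4, "brake": 0.0}        # whatever the "AI" wants
  actuator_command = safety_layer(planner_command,
                                  range_to_obstacle_m=8.0, speed_mps=17.0)
  print(actuator_command)                                  # -> full braking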


Very true. Which is why I'm still waiting to hear what happened to all the lower-level safety systems, presumably driven by lighting-invariant LIDAR or RADAR sensors, in the Uber situation.

The article kind of mentions: "cyclist detection from 3D point cloud is much harder task than cyclist detection from an image"

The system didn't need to know that it was a "cyclist". There was something there, in its path. Stop.


In fact, the XC90 is already equipped with a radar-based emergency stop system from the factory; that alone would have been sufficient to prevent the impact, or at least vastly reduce it. But of course it looks like the system was disabled/overridden by Uber's autonomous solution.


Yep! Every single autonomous system I'm aware of (and am allowed to speak about) has such a system designed in from the ground up. The article presupposes we let "intelligence" handle all decision making, which seems ridiculous even in biological systems, let alone those we design for a single purpose.


Yes. We had that in our DARPA Grand Challenge vehicle a decade ago. In addition to the main navigation and control system, there was a separate process taking inputs from the radar and speedometer, with the authority to slam on the brakes if the vehicle was about to hit something. Backing that up was a hardware stall timer reset every 100ms. If it hit 120ms without being toggled by the computer watchdog process, the brakes went on and the engine dropped to idle with no computer involvement. Then there was a DARPA-mandated remote engine kill radio system.
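
In software terms, that stall timer works out to something like this rough Python analogue (the real mechanism was a hardware circuit; apart from the 100/120 ms figures above, everything here is invented):

  # Rough software analogue of the hardware stall timer described above.
  import threading, time

  TIMEOUT_S = 0.120                      # 120 ms without a heartbeat trips the failsafe
  last_heartbeat = time.monotonic()

  def heartbeat():                       # toggled by the watchdog process every 100 ms
      global last_heartbeat
      last_heartbeat = time.monotonic()

  def stall_monitor(apply_brakes, engine_to_idle):
      while True:
          if time.monotonic() - last_heartbeat > TIMEOUT_S:
              apply_brakes()             # no main computer involvement required
              engine_to_idle()
          time.sleep(0.01)

  threading.Thread(target=stall_monitor,
                   args=(lambda: print("brakes on"), lambda: print("engine idle")),
                   daemon=True).start()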


> So if a normal Mercedes has a collision avoidance system that automatically brakes, there is no reason why an AI-based system can't be built on top of it, with the collision avoidance system braking automatically without intervention from the higher AI systems.

In fact this 'bottom-up' approach is exactly what all automated driving efforts in the automotive industry follow. It's the biggest difference between how the auto industry and Silicon Valley are tackling this problem. A lot of digital ink has been spilled over how slow and supposedly prone to disruption by hip and agile software companies this approach is, yet it becomes ever more evident that it was the correct choice.


>A lot of digital ink has been spilled over how slow and supposedly prone to disruption by hip and agile software companies this approach is, yet it becomes ever more evident that it was the correct choice.

The car industry has had 130 years to become the conservative, anti-agile, careful industry it is. Their mistakes have been costing human lives for a lot longer.

The problem is that "move fast and break things" works extraordinarily well for software that exists for entertainment and information purposes. Not for something serious.


Absolutely right.

Doing otherwise shows a worrying lack of systems engineering expertise.

Actually - the industry that really knows how to do this sort of thing is the defense industry.


That sort of thing is common in industrial controls. Start the big ass air compressor and if the oil pressure doesn't hit 100 psig in five seconds the interlock trips. That's handled by a controller on the pump itself.

It's also really common, or outright required, for the interlock to be totally separate from the control system.


Agreed. You use the lowest-tech option which is practical at every stage. So if you have a mechanism which can travel between points A and B and will cause damage if it leaves that range, your best option is a physical stop at each end. Second best option is a hard-wired limit switch at each end which cuts motor power. Third best is a limit switch which is used by software. You wouldn't use machine vision or LIDAR.

And then as you say, safety related functions (anything on which human life depends) need to be separate from general control code and to meet the requirements of whatever safety spec you're working to (e.g. SIL).


Doesn’t the author recommend exactly that in the last sentence of the article?


Yes. For new readers, suggestion for this article: start with the last sentence. Then skim the article re AI. Then respond. So we don't have to keep saying/hearing "Why not both?". The author's take is: both!


But let's make sure that the definition of AI doesn't encompass this because...?


I guess the author makes the distinction between deterministic vs. non-deterministic algorithms. As a layman in these matters I still don't fully understand how some people classify machine learning algorithms as non-deterministic...

...I started wondering about this when I was reading a few articles about the AlphaZero algorithm that learned to play chess entirely from self-play, and wondered whether it would always play the same moves in response to a fixed set of opponent moves (assuming the opponent starts as white). My guess was that it wouldn't always respond in exactly the same way if there's any MCTS-like step somewhere in there blended with the machine learning algorithm.

For a game like chess it would seem to make sense that the overall algorithm would still include a MCTS step (like AlphaGo did) but for an autonomous car it would seem crazy to any human to imagine that there would be any random search for a decision in a tree of possible interpretations of the input for example.

Does anyone have any detailed knowledge about this? Would a non-deterministic algorithm ever be allowed in an autonomous car?


> As a layman in these matters I still don't fully understand how some people classify machine learning algorithms as non-deterministic...

Some algorithms start with random numbers for the model and converge towards a better one. After the model is generated, the input->output mapping is deterministic, but since the model generation is non-deterministic, the algorithm overall is considered non-deterministic.
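
A toy illustration with plain numpy (invented data): two training runs from different random seeds give different weight vectors, but each trained model is perfectly deterministic to use.

  # Toy perceptron: training is seeded randomly (non-deterministic across runs
  # unless you pin the seed), but a trained model is deterministic at prediction time.
  import numpy as np

  def train_perceptron(X, y, seed, epochs=20, lr=0.1):
      rng = np.random.default_rng(seed)
      w = rng.normal(size=X.shape[1])            # random initialisation
      for _ in range(epochs):
          for xi, yi in zip(X, y):
              if yi * (xi @ w) <= 0:             # misclassified: nudge the weights
                  w += lr * yi * xi
      return w

  X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
  y = np.array([1, 1, -1, -1])
  w_a = train_perceptron(X, y, seed=0)
  w_b = train_perceptron(X, y, seed=1)
  # w_a and w_b generally differ, yet np.sign(X @ w_a) is the same on every call.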


Not an expert on self-driving cars, but modern, statistical machine learning systems have to be trained sort of end-to-end. That is, if you want to machine-learn how to perform a task, you have to learn both the low-level and the high-level actions at once. And you can't add to the knowledge of the task, once the system is trained.

That's a limitation of the technology: statistical machine learning models are notoriously non-compositional. That means that basically you can't take one trained model and use it as a feature, to learn a new model for a different task.

Say, if you train a machine learning classifier C1 to recognise class Y1 from features F1,...,Fn, you can't then take the model of Y1 built by C1 and give it to a different classifier, C2, as a feature in a new feature vector Fn+1,...,Fn+k to learn a different class, Y2.

So you have to learn everything you need to know in one go- and that's terribly, awfully difficult.

There is a type of machine learning algorithm whose models are inherently compositional- Inductive Logic Programming (full disclosure: that's my research). These learn logic representations and from logic representations, so once you learn a model you can use it as a new feature and continue learning. Statistical machine learning folks have been trying to do that for a while now, without much success.


Actually they are extremely composable. A feed-forward neural network is often called a multilayer perceptron, because it is multiple perceptron classifiers feeding into each other.

If you want to build a domain-specific image classifier, you take the bottom layers of a generic classifier and train them on your otherwise-too-small dataset.

You can absolutely add predictions from one classifier as features for the next. It is quite common to have a pipeline where you augment text with Part of Speech tags, Named Entity Recognition output, etc., before performing the final task.
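
For example, something like this scikit-learn sketch of the "predictions as features" pattern (toy data; the two tasks here are invented for illustration):

  # Sketch of using one model's predictions as input features for another.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  rng = np.random.default_rng(0)
  X = rng.random((200, 5))                         # base features
  y_task1 = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # labels for the first task
  y_task2 = (X[:, 2] > 0.5).astype(int)            # labels for the second task

  clf1 = LogisticRegression().fit(X, y_task1)
  extra_feature = clf1.predict_proba(X)[:, [1]]    # first model's output...
  X_augmented = np.hstack([X, extra_feature])      # ...appended as a new column

  clf2 = LogisticRegression().fit(X_augmented, y_task2)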


What perceptrons feed into the next layer are not features- they're activations. They can't be used as features because they don't represent attributes of some concept.

CNNs invent new features that can be used further down the track- sure, but the end result is not something you can use any further. You can't, for example, take a learned concept of a dog and then use it as a feature to learn about pets.

With language learning pipelines again, you're not adding a newly learned model to a set of features- instead, you use that model to label parts of your data that were not previously labelled.

What I mean (what my research group does) is more like what is discussed by François Chollet of Keras, here:

https://blog.keras.io/the-future-of-deep-learning.html

The kind of program-like modularity he's describing is missing from modern statistical classifiers, deep nets included.


> What perceptrons feed into the next layer are not features- they're activations. They can't be used as features because they don't represent attributes of some concept.

I think we have very different definitions on what a feature is. You can use these activations as features in a model.

> With language learning pipelines again, you're not adding a newly learned model to a set of features- instead, you use that model to label parts of your data that were not previously labelled.

I'm afraid I don't understand what you are trying to say. In an NLP pipeline you use the predictions of several models as features for your final model. How is that not composable and modular ?


>> I think we have very different definitions on what a feature is. You can use these activations as features in a model.

Let me backtrack a bit, to where I phrased the issue thusly:

Say, if you train a machine learning classifier C1 to recognise class Y1 from features F1,...,Fn, you can't then take the model of Y1 built by C1 and give it to a different classifier, C2, as a feature in a new feature vector Fn+1,...,Fn+k to learn a different class, Y2.

When you train a statistical machine learning classifier - let's take a simple linear model, for simplicity - what you get as output is a vector of numbers: parameters to a function. That's your model.

You can't use this vector of numbers as a feature. It is not the value of any one attribute - it's a set of parameters. So you can't just add it to your existing features, because your features are the values of attributes and the model is a set of parameters meant to be combined with those attributes.

What you can do is take your newly trained model and label the instances you have so far with the class labels the model can assign to them. Now, that's a pipeline alright. For instance, if you had a linear model with features "height" and "weight" and learned to label instances with "1" for male and "-1" for female, you could then go through your data, label every instance with a "1" or "-1" and then train again to learn a model of "age". At that point you have a new feature that is not the concept you learned in a previous session, but only a subset of that concept. Now you can try to learn a new concept from this new set of features, but the original concept ("sex") may or may not be part of it. It may turn out that "sex" is not necessary for learning "age" (it's redundant); or, it may be necessary, but in that case you have to learn the concept of "sex" all over again as part of learning "age".

By contrast, the class of algorithms I study, Inductive Logic Programming algorithms, can add the models they learn to their features (features are called "background knowledge" in ILP) and go on learning. For instance, such an algorithm can learn "parent" from examples of "father" and "mother", then "grandfather" from the original examples of "father" and the learned concept "parent" and "grandmother" from "mother" and "parent", then "grandparent" from "grandfather" and "grandmother" etc. Every time the new concept learned can be added to the learning algorithm's store of background knowledge, as it is- you don't need to go through the data and label it. That's because the representation of "data" and "concept" is the same, so you can interchange them at will.

Say, your background knowledge on "father" and "mother" might look like this:

  father(earendil, tuor).
  mother(earendil, idril).
From that you can learn "parent" that might look something like this:

  parent(A,B) :- father(A,B).
  parent(A,B) :- mother(A,B).
Now, you can add "parent" to your background knowledge, like this:

  father(earendil, tuor).
  mother(earendil, idril).
  parent(A,B) :- father(A,B).
  parent(A,B) :- mother(A,B).
From that you can learn "grandfather" and "grandmother" and add them to your background knowledge:

  father(earendil, tuor).
  mother(earendil, idril).
  parent(A,B) :- father(A,B).
  parent(A,B) :- mother(A,B).
  grandfather(A,B) :- parent(A,C), father(C,B).
  grandmother(A,B) :- parent(A,C), mother(C,B).
And then learn "grandparent" from that:

  father(earendil, tuor).
  mother(earendil, idril).
  parent(A,B) :- father(A,B).
  parent(A,B) :- mother(A,B).
  grandfather(A,B) :- parent(A,C), father(C,B).
  grandmother(A,B) :- parent(A,C), mother(C,B).
  grandparent(A,B) :- grandfather(A,B).
  grandparent(A,B) :- grandmother(A,B).
And so on.

That is what I mean by "composition"- building up knowledge by adding new concepts to your representation of the world.


But braking on a highway might not be the best response in all circumstances.


When the collision detection system kicks in and determines that you're going to hit something in the next couple of seconds, then braking probably is the best option.


What if it's a large piece of cardboard?


I know of an accident where someone drove over a cardboard container used to package washing machines. He thought it was empty. Unfortunately it was not: some truck had lost part of its load.

I think it's safe to always brake when encountering an ambiguous situation.


Another anecdotal tale: My dad once didn't bother slowing down for "cardboard tubes" that had fallen off a lorry. It turned out they were ceramic tubes, and caused quite a bit of damage.


Ok, what if the cardboard is flying over the road?


Probably still safer not to hit it, or at least not to hit it at full speed.


Probably still the best if you are about to kill someone.


Integrating AI with a subsumption architecture sounds interesting, but it could still fail. You would be abstracting a lot of detail (as well as potential actions) away from the training process, so the overall result you get still might not work. Training on everything might actually yield a better result.

Does anyone know how iRobot (Brooks' company) does it?


The AI can still drive the car normally, the only thing that changes is it's prevented from running into another car at full speed. Which is no worse than having drivers override the system forcing a disengagement.


Yes, as long as you add a huge penalty to the training algorithm for triggering any underlying safety system, I see no problem with that. (Otherwise I can see a completely mad car that believes everything is safe, because it never crashes even when it is reckless.)
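
In sketch form, that shaping would look something like this (constants invented):

  # Sketch of the suggested reward shaping: the learner pays a large penalty
  # whenever the underlying safety system has to intervene.
  def shaped_reward(progress_reward, safety_system_triggered, collision):
      reward = progress_reward
      if safety_system_triggered:
          reward -= 10.0          # a near-miss caught by the fallback still hurts
      if collision:
          reward -= 1000.0
      return reward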


I believe it is subsumption-based with a random walk added on top: a bumper hit means turn around, and a dangling wheel means don't go farther. I think the more recent models have more intelligence added on top, but they still have the lower-level systems acting as safeguards. Of course it doesn't run into the problems you mentioned because it doesn't have any learning (at least not in the earlier model I had).


iRobot builds more than just the Roomba; I was wondering about their more advanced military robots.


As I was skimming the article, I was thinking some sort of system like this could be made, and I was pleasantly surprised to see the approach has a long history. This is one reason I like Hacker News.


This is looking at it all wrong. Watch Chris Urmson's video from SXSW on how Waymo does it. I've mentioned that before. You build a map of what's around the vehicle. Then you try to tag some obstacles to help predict their behavior. But the obstacle detection is geometric, and does not depend on the tagging, which uses machine learning. If there's an obstacle, the system doesn't hit it, even if it has no clue how to identify it.

Tesla/Mobileye managed to get that backwards. Their original system was "recognize vehicle visually, compute distance and closing rate to vehicle". If it didn't recognize an obstacle as a vehicle, it ignored it. We know this for sure, because you can buy a Mobileye unit as a dashcam-like warning device and many people have seen how they work. That led to three collisions with vehicles partly blocking the left side of a lane. One death ramming a street sweeper, one collision with a stopped fire truck, one sideswipe. The NTSB is investigating the fire truck collision.

The NTSB is now investigating the Uber collision.[1] As they usually do, the first thing they did was to get control of the wreckage.[2] Uber does not have control of the investigation. The NTSB investigators are working this like an air crash. They are "beginning collection of any and all electronic data stored on the test vehicle or transmitted to Uber". As usual, they haven't announced much, but they have mentioned that the video seen publicly is from a third-party dashcam, not the vehicle sensors.

[1] https://www.ntsb.gov/news/press-releases/Pages/NR20180320.as...

[2] http://wsau.com/news/articles/2018/mar/21/arizona-police-rel...


Devil's advocate here, but what if a supposed “obstacle” is a plastic bag or a jet of steam coming out of a sewer grate?


Then you add special cases for steam, and plastic bags, and small fast moving animals that it's more dangerous to swerve around than hit. And then when some kid in a chicken costume runs out in front of the car, the car stops even though it can't identify exactly what it's looking at.


Then you stop. Type 1 errors (braking for a phantom obstacle) are much less dangerous for cars than Type 2 errors (failing to brake for a real one).


Unless it’s icy out, you have to swerve, or you’re being tailgated...but yes, stopping (or at least attempting to stop) is the best decision from a liability perspective.


Well that's when you're grateful that along with your forward collision prevention, your car's ABS and electronic stability control systems are also still enabled.

As for tailgating, this seems to be a problem with U.S. attitudes, not vehicle mechanics. Stop making it acceptable to tailgate! If you're close enough to the vehicle in front of you that any significant braking on its part will cause you to hit it, you are too close, and it's your fault if you hit it.


Isn't that the case in the US? In the EU, you can be fined for driving too close to the car in front of you.

I think that driving close to the car in front of you is the number 1 cause of accidents. Much more dangerous than driving fast.


Legally, in most (if not all) of the US you /can/ be cited for tailgating (the laws require a minimum following distance).

The reality is that traffic cops seldom cite for tailgating in general. If one were to see such a citation, it would likely be after an accident where the officer could deduce that the cause was "following too closely" and so issued the citation.


In Australia, if you are the rear vehicle in a rear-end accident you are automatically at fault in almost all circumstances. The only exceptions I know of are if the front car pulled out immediately in front of the rear car, or if the front car was reversing.


> I've mentioned that before. You build a map of what's around the vehicle.

If it were that easy, everyone would have solved it already. Detecting distances, objects etc. from several varied sensors is exactly how you build this map. You can't just handwave the map into existence.


> Also, lets back up our AI’s with old school collision avoidance! Intelligence is not the same as perfection, at least for now.

The car was travelling at 38 mph and never braked. Even if the collision-avoidance system only saw her at the last moment, it would still have braked and potentially slowed the car enough that the woman was injured instead of killed.

I'm all for self-driving and fully believe it can improve on humans, but I don't see how it's possible for self-driving cars to be on the road if they can't properly detect the most vulnerable users in all conditions.


This is absolutely a case where an autonomous system should have out-performed a manned system. Lasers see through darkness very well. A camera-only system is insufficient.


Not to mention it wasn't even very dark on the road. If you look at other video and images, that stretch is very well lit. The dash cam video was poorly calibrated / selected (but probably not used in the decision process of the AI).


Humans can't properly detect the most vulnerable users in all conditions. Not an excuse for this system failing to perform, but all self-driving cars have to do is be a little bit better for net safety to improve. People are notoriously unreliable. Computers don't get drunk or sleepy, for instance.


> all self-driving cars have to do is be a little bit better for net safety to improve.

That's true abstractly but ignores several important real-world factors about the adoption of self-driving cars.

On the one hand, autonomous cars have to be a lot better than humans to prevent these sorts of PR trash-can fires, or they won't be given the opportunity to improve net safety.

On the other hand, people are so bad that we're liable to soon live in a world of autonomous cars, regardless of the effect on net safety.

I hope they can be made safe, because it's vital for the future of our car-obsessed culture. But I don't have as much faith as you.


If the technology were as non-deterministic as this post makes it seem, no self-driving car should be allowed on the road, even under the best conditions.

I fully agree that an AI can be fooled, but that is high-level logic (path planning); the system should be designed to have a fallback that does emergency braking if all else fails.

There simply is a point where the high-level AI does not matter any more, and that is when I (the car) am moving at 45 mph towards an obstacle that is in the middle of the road, less than 2 meters from my projected path. This does not mean that a full brake is required, but the speed definitely needs to be reduced to account for the uncertainty, and once it is determined that it is physically impossible to miss the obstacle, the system must do a full stop to reduce the impact velocity as much as possible.

It's fine if the LIDAR data is plugged into a machine learning algorithm (you will probably get less than the 10-20 Hz the scanner can produce), but at the same time it should also feed a much simpler obstacle tracking algorithm that can run at near real-time speeds.
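
The physics for that fallback is basic enough to sketch in a few lines (the 6 m/s^2 deceleration figure and the thresholds are invented, not from any real system):

  # Sketch of a simple tracker-level fallback, independent of the high-level AI.
  def fallback_brake_command(distance_m, closing_speed_mps, max_decel=6.0):
      """Return a brake fraction in [0, 1] based on stopping-distance physics."""
      if closing_speed_mps <= 0:
          return 0.0                                   # not closing on the obstacle
      stopping_distance = closing_speed_mps ** 2 / (2 * max_decel)
      if distance_m <= stopping_distance:
          return 1.0                                   # cannot avoid: full braking
      if distance_m <= 2 * stopping_distance:
          return 0.5                                   # getting close: bleed off speed
      return 0.0

  # 45 mph is roughly 20 m/s; an obstacle 30 m ahead already demands hard braking here.
  print(fallback_brake_command(distance_m=30.0, closing_speed_mps=20.0))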


Right; if we want to compare it to ourselves, maybe the various minds that control our impulses. We can hold a knife over our finger with our high level cognitive mind, but (hopefully) our low level brain will prevent us from bringing it down and chopping our finger off, even if we _really really_ wanted to do it.

I know that's sort of a weird example, but I think it's really illustrative of a dual/multiple mind scenario playing out in our own understanding.


Or reflex.

We act on an input on a lower cognitive level first before a higher level function even has the chance to intervene.

If someone throws a ball at my face, my body hopefully reacts before my higher-level functions have had the chance to evaluate whether the ball will really hit my face and whether I may look silly waving my hands in the air while no actual danger exists. Because it actually IS on a trajectory to my face, the benefits far outweigh the risk of looking silly.


> AI is not preprogrammed to monitor a known input from a sensor to take a predefined action.

I guess one of my outstanding questions, which reading this only confirmed, is why this is the case? I mean, humans are pretty good examples of intelligence. And yet we still have and use these anti-collision systems. Because, in the end, when wrong decisions are made these systems save lives.

Why would AI-driven vehicles not have dedicated, single-purpose subsystems such as anti-collision? I mean, are we going to also remove ABS, because the AI could learn to modulate the brakes itself? How much are we going to push into AI, when the purpose-built systems are both functional and effective?


>I mean, are we going to also remove ABS, because the AI could learn to modulate the brakes itself?

This is the point that I feel SDC enthusiasts forget. Not everything HAS TO BE AI. We could take incremental steps towards SDCs, not fantastic leaps that get people killed and really just get government involved where it doesn't need to be yet.

We could replace the ABS/ESP/traction control systems in current vehicles with a machine learning / deep learning system absolutely - that's not sexy though!

No one wants increments, they want whole self-driving cars right now - and while I have opinions about that, there is no doubt it's driving (pun intended) the industry.

Ideally, at first, we'd see components in consumer vehicles, and completely automated long-haul trucks on predetermined A-to-B routes - but like I said, not sexy.


> Ideally, at first, we'd see components in consumer vehicles

We do, though. Look at the cruise control or auto-park on a vehicle produced in the last few years. The totally autonomous car may make headlines, but these sorts of features will be what really make the technology ubiquitous.


I work on ECUs like this. High end stuff is doing computer vision, but 99% of what is on the road is pre-calibrated motor/solenoid control and that’s all.

Even adaptive cruise and stereo-camera object “detection” are pretty simple systems. Almost nothing is doing even pieces of what the whole-package SDCs are.


This is a design trade-off that comes up all the time in automotive engineering. I own a couple of 90's-era Honda vehicles, and what's interesting is that although the cars are computerized, they also employ a number of semi-independent systems: instrument gauges that connect directly to their own sensors even though a similar sensor wired to the engine control unit sits right next to them; a mechanical choke and coolant-temperature modulation of intake air despite fuel injection; and an "hydraulic computer" controlling part of the automatic transmission even though there is also a dedicated transmission control computer (which is itself redundant to making transmission control an integrated feature of the engine control module).

To me this seems to make no design sense; why not bring all of the inputs into one central box with one computer control, and drive all of the outputs from this central box? But then who am I to argue that the separate / bespoke approach is not better, given Honda's reputation for reliability?


Seems like classic, prudent, avoiding of single points of failure. Sounds reasonable, especially since computerization in cars was still pretty new back then.

For something like ABS, or emergency braking, we have established algorithms for this already, so why be in a hurry to offload that to a machine-learning black box? Is the math behind "we need to stop this car" so complex that AI is needed?


> Why would AI-driven vehicles not have dedicated, single-purpose subsystems such as anti-collision? I mean, are we going to also remove ABS, because the AI could learn to modulate the brakes itself? How much are we going to push into AI, when the purpose-built systems are both functional and effective?

This makes me want to pull my hair out. No reasonable system relies on AI from top to bottom. Take this article with a grain of salt, it is attacking a straw-man.


Author here :) I cannot answer for Uber, but the grand reason why we cannot put deterministic parts into the AI is the same as for humans - we do not control its internals; the complexity is too high. We can, however, build systems that are composed of both AI and deterministic collision avoidance (as seen on new cars today), which is what I also suggested in the article.


Ah, then I should complain to you that "AI" is a broad suite of techniques and goals, rather than a specific black-box technique which has these flaws.


Yes - you really seem to be talking about Machine Learning (a subset of AI). And as another comment says 'This makes me want to pull my hair out. No reasonable system relies on AI [or ML] from top to bottom. Take this article with a grain of salt, it is attacking a straw-man.'


"Why would AI-driven vehicles not have dedicated, single-purpose subsystems such as anti-collision? "

The answer is that they will. And probably already mostly do (at least for Waymo/Cruise/most of them). It's not clear why Uber's vehicle did not brake.


I don't know much about AI or ML and even less about how it relates to automotive engineering but I imagine this is already how it works. I don't think there is one model for everything as much as there are models for individual components of driving. Something for steering, something(s?) for identifying obstructions, etc.

In this case maybe letting a model modulate brake pulses based on conditions like temperature and velocity makes more sense?

Hopefully someone with more experience in this domain can enlighten us.


It is exactly how it works. There are ~40-115 different computers in a modern car, often running completely walled-off modules. Most 'models' you will find in a car's software aren't even generated through machine learning, but through tried-and-true engineering/statistics.

No one in their right mind is proposing to replace those with one big AI monolith. That's marketing BS from the likes of NVidia, who would stand to gain from it.

AI is being used in certain subfunctions like visual object detection. In the future, AI will be used to make higher level decisions like trajectory (lane choice, overtaking etc.) and route planning. But it will only hand those higher level plans down to the mostly 'dumb' computer systems of the car to be carried out just as with a manually driven car.


The false positive rate of this system could be very noisy. At what point should you brake, even if the AI says not to? When you might collide with a lidar point? You want to collide with lidar points all the time (the ground beneath the vehicle, for example, or any reflective noise). The AI is there to tell you what lidar points are actual obstacles that you can't collide with, otherwise it's not really meaningful.


This line of argument doesn't mean anything when we already have cars with intelligent, independent drivers, and these systems. And yes, their purpose would be to override the AI control when they trigger. Exactly the same as they do right now.


" I mean, are we going to also remove ABS, because the AI could learn to modulate the brakes itself?"

I think that's a reasonable definition of ABS - an AI (in most cases for ABS, an expert-system AI) that knows how best to modulate the brakes, with levels of performance that exceed the vast majority of humans.


The conventional ABS system is not AI in any sense. It is a few sensors and maybe some look-up tables of what to do under various speed / brake pressure / wheel rotation conditions.
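
Roughly this kind of logic, sketched in Python (the slip thresholds are invented; real ABS controllers are calibrated per vehicle and wheel):

  # Sketch of the threshold/look-up flavour of control, not machine learning.
  def abs_brake_output(requested_brake, wheel_speed_mps, vehicle_speed_mps):
      """Release brake pressure when the wheel slips too much relative to the car."""
      if vehicle_speed_mps < 1.0:
          return requested_brake                   # too slow for slip to matter
      slip = (vehicle_speed_mps - wheel_speed_mps) / vehicle_speed_mps
      if slip > 0.2:                               # wheel is locking up
          return 0.0                               # release pressure
      if slip > 0.1:
          return 0.5 * requested_brake             # partially release
      return requested_brake                       # hold the driver's request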


Does that not fit the definition of artificial intelligence? The system makes decisions – ones that would be otherwise done by a human – based on its perception of the world. That is what I have always understood AI to mean. To be sure, a conventional ABS system does not use machine learning, but machine learning is only a subset of AI.


By this definition almost all software would be an AI.

Making decisions or reacting to stimuli does not necessarily require any form of intelligence. You probably meant automation instead of "AI".


The core of this debate is the matter of what intelligence actually is. We don't really know, but the most common definition I have heard/read is that intelligence is the ability to generate new solutions when presented with previously unseen input. An ABS would not qualify under this definition, because its output is predefined rigorously for all possible combinations of input values.


On the other hand, artificial intelligence is not actual intelligence. The definition of artificial intelligence is the ability to perform a task that would normally require human intelligence. An ABS system would fit into this definition, as it performs a task that would normally require a human sensing the conditions to know how to apply the brakes.

Although I can see why some point out that AI is a moving target, representing only what still seems 'magical'. If you showed an ABS system to someone in the early 1900s, I truly believe they would see it as some kind of intelligence. Now that we have acclimated to the technology, we don't see it the same way.


>The definition of artificial intelligence is the ability to perform a task that would normally require human intelligence.

Do you have any specific sources that define it as that?

>An ABS system would fit into this definition, as it performs a task that would normally require a human sensing the conditions to know how to apply the brakes.

Responding to stimuli does not require any intelligence, let alone human intelligence.


> Do you have any specific sources that define it as that?

Several definitions as provided by a Google Search.

> Responding to stimuli does not require any intelligence, let alone human intelligence.

Which is why we call it artificial intelligence instead of intelligence. If these systems were actually intelligent, there would be no reason to add the artificial moniker. We specifically call the types of systems artificial intelligence on recognition that it is not actually what we consider real intelligence.

I agree, there is nothing intelligent about an ABS system, and nobody is labeling it as intelligent.


"Artificial" is opposed to "natural", not "genuine". It's a synonym of "machine intelligence". It's a comment on the fact that the intelligence has been artificially created in a machine. It is not in any way about the current capabilities of artificial intelligence.

https://en.wikipedia.org/wiki/Artificial_intelligence

> Artificial intelligence (AI, also machine intelligence, MI) is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals.


>I agree, there is nothing intelligent about an ABS system, and nobody is labeling it as intelligent.

Artificial intelligence is intelligence, just made artificially instead of organically. You clearly have no understanding of the term so you should probably shut up.


You wouldn't consider it AI because it's good at its job. We only consider things to be AI while they suck at making decisions.


That doesn't make it not an AI -- it makes it a very simple, explicable AI. Unless you follow the definition that AI is whatever we don't know how to program yet.


What definition of AI do you use that would qualify ABS as an AI but wouldn't also include the majority of all software ever written?


There is no sharp line, just as there is no sharp line for "intelligence". A lot of software can indeed be classified as (very weak) AI, because it makes decisions in service of a goal.

Because there is no sharp line, where to draw the line depends on context, and the point being made. I have found that "AI" is a good term to apply when it is more useful to take the intentional stance[+] to explain an artificial system's behavior than to trace the actual low-level implementation.

At the very lowest end I could include some closed-loop control as AI (a simple proportional controller wouldn't count, a PID might). The more complex the model it has of the environment, and of its own effect on the environment, the more useful it is to call it an AI.

A slightly more complex example are computer games, whose simple AIs are generally readily explainable from their code. In an RTS game with good AI, it's not useful to look at how computer opponents (or my own units) do pathing, only where they're trying to go, and the obstacles they'll encounter.

ABS is lower than I would normally consider AI to be a useful descriptor because of how little information processing occurs (basically just wheel speed and brake pedal trajectory). In this context it's worth not excluding precisely to emphasize that AI is not just machine learning, but just about anything that autonomously makes intelligent decisions in response to changing environments.

[+] https://en.wikipedia.org/wiki/Intentional_stance


So in your mind the difference between AI (which you define very loosely) and any other program is just the scope and complexity.

>A lot of software can indeed be classified as (very weak) AI, because it makes decisions in service of a goal.

So take GNU make for example. It makes decisions of what to build in service of the goal of building some target you specified. Is GNU make an AI? By this definition it would be.

>At the very lowest end I could include some closed-loop control as AI (a simple proportional controller wouldn't count, a PID might).

Why would artificial intelligence have any requirement of running continuously? Is this a requirement for intelligence, or is it just an arbitrary requirement you came up with to narrow down your very loose definition of AI? Now, is Nginx an AI? It runs continuously and makes decisions about what to serve over HTTP. How about Bayes spam filters? Those run continuously (at least some of them) and make decisions about what to classify as spam.

>The more complex the model it has of the environment, and of its own effect on the environment, the more useful it is to call it an AI.

Making the complexity a requirement for AI is a bit silly as well. What if someone comes up with a very simple way of building an artificial intelligence? Just smells like you want to be able to use "it's not complex enough" as an argument for things that would otherwise fit your definition of AI. The problem with your definition seems to be that it doesn't include intelligence.

>ABS is lower than I would normally consider AI to be a useful descriptor because of how little information processing occurs (basically just wheel speed and brake pedal trajectory).

This is exactly what I mean.

>anything that autonomously makes intelligent decisions in response to changing environments.

So now you're trying to bandaid the definition even more. In the beginning of your comment your AI didn't require any intelligence, but now it has to make intelligent decisions. How about you define intelligence before you start defining artificial intelligence? But this still wouldn't disqualify an ABS from your definition of AI, since it most certainly does make intelligent decisions autonomously in response to changing environments.


I'm not patching up the definition -- I'm refusing to give a definition. As a consolation prize, I'm giving examples and discussing the degree to which they fit the central concept.

My stance is that "AI or not" is not an inherent way of dividing up systems in the universe. The best attempt to answer will provide a degree, rather than a binary yes/no. Further, the degree is based on utility to humans to think about the system that way. Thus no bright line, and a multitude of context-dependent factors that weigh towards or against it. If you must force a cut-off somewhere, then that threshold is context-dependent too.


I am freaking sick of this notion that AI==ML. AI is a much bigger field than neural nets. AI can be programmed with rules, with logic, with symbols, with subsumption, and with a thousand other things that are both deterministic and don't require huge training sets.

If you're making a living doing AI, you damn well should know this.


If you define AI like that, then basically every decision making program, i.e. with branches, is AI.


How else are you going to count Pac-Man's ghosts as using AI?


You shouldn't because those ghosts are stupid.

If intelligence just means "can make decisions based on inputs" then everything is intelligent. That's a useless definition.


I'd like to see a law that says: to obtain certification to operate a self-driving car on a roadway, an organization must agree to:

1) Conform to a standard set of protocols for how sensors provide data to a self-driving software system.

2) Log data in a form that could be submitted to any conforming self-driving software system, to obtain results from that system reporting what the system would do given these inputs.

With this in place, it would be easy to do after-the-fact comparisons of data leading up to incidents, and learn from the differences in results between systems.

It could be taken a step further if the car makers would also share data on near misses, which could uncover cases where other car makers' systems did not handle the situation as well.

Even if the sensors are different, I suspect some good mileage could be gotten out of this. The fact that learning opportunities are not perfect, is not always a good reason to pass them up.
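
As a rough sketch of what such a replay interface could look like (all names hypothetical, not an actual standard):

  # A logged sensor frame plus a harness that asks any conforming system what
  # it would have done with the same inputs.
  from dataclasses import dataclass
  from typing import List

  @dataclass
  class SensorFrame:
      timestamp_s: float
      lidar_points: list        # raw returns, in a standardised frame of reference
      camera_jpeg: bytes
      radar_tracks: list

  def replay(frames: List[SensorFrame], driving_system):
      """driving_system is any object exposing a step(frame) -> command method."""
      return [driving_system.step(frame) for frame in frames]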


Perhaps some sort of logging standard that is enforced across all systems? A black box of some sort?


Yes, exactly, but with the additional factors I mentioned.


> 1) Conform to a standard set of protocols for how sensors provide data to a self-driving software system.

Doesn't exist yet, though.


Oh no, we could never have that. It would squash innovation! We can't possibly have any regulation of self-driving cars or we'll lose to China.


My educated guess: the somewhat unusual shape of the combination of the woman and the bicycle, combined with the uneven lighting, caused the vehicle to misclassify her as lightweight road debris.

To a real degree, this is a downfall of machine learning. Every distribution has tails. If we learn purely from data, rather than from principle, we will necessarily make mistakes on the tails. For problems that can be effectively solved with 99% accuracy, this is fine, and we just deal with a few mistakes. With more data, our accuracy will improve anyhow.

If a datapoint costs a human life though, we can't afford to collect enough data. We must have a more sophisticated model of the world in order to operate on the tails without killing people.

I think that this might actually be a watershed moment for ML. Supervised learning is not adequate to this type of task. Either the computer does low level perception, and a human writes a high level algorithm to manage the risk, or datapoints have to contain a lot more information than just safe/unsafe. When you made a mistake as a child, your parents didn't just punish you, they explained what you did wrong and why, and a rule to follow to do better next time.


> misclassify her as lightweight road debris.

Surely the classification has a confidence level, and a low confidence score should cause the vehicle to slow down if it's not confident in what it's looking at? Also, the size of the "lightweight road debris" should have made the vehicle slow down at least slightly, because hitting a 6 ft pile of paper wouldn't be great even at a low speed.
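
Something like this confidence-gated behaviour, as a hedged sketch (labels and thresholds invented):

  # Low classifier confidence on something in the path should itself trigger caution.
  def target_speed(current_speed_mps, detection):
      label, confidence, in_path = detection        # e.g. ("debris", 0.35, True)
      if not in_path:
          return current_speed_mps
      if confidence < 0.5:                          # we don't know what it is:
          return min(current_speed_mps, 5.0)        # crawl until we do
      if label in ("pedestrian", "cyclist", "vehicle"):
          return 0.0                                # plan to stop
      if label == "debris":
          return min(current_speed_mps, 10.0)       # large debris still costs speed
      return current_speed_mps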


I don't know what would have happened if "this guy programmed it", but the answer should be yes, the car should have seen the cyclist and it should have pressed the brakes.

This was an interesting post, too, by Brad Templeton who worked on Google's self-driving car project for a while:

http://ideas.4brad.com/almost-every-thing-went-wrong-uber-fa...


In response to that piece by Brad: I sincerely hope that the "safety" driver in the Uber accident was fired.


The safety driver is basically irrelevant, beyond being a scapegoat. I've written up some words on this: https://www.brainonfire.net/blog/2018/03/23/safety-driver-sh...


I made a similar comment on the video link last week. The good news is that the NTSB is extremely good at human factors stuff - google Cockpit Resource Management for a fascinating trip down the aviation safety rabbit hole.

I sadly agree with you on the scapegoat point.


Why? If it is not proven the driver was negligent why fire this individual? Presumably this individual has a lot of experience and domain knowledge in testing self driving cars so replacing them with someone else may not be an improvement.

If they did make a mistake that got someone killed that could change things but I would hope we wait to find out if the driver was actually at fault.


I don't think Uber's safety drivers have any domain knowledge. The law says you need bodies in seats so Uber is paying as little as possible to put bodies in seats. They aren't engineers or professional (as in stunt/trick/etc) drivers.


I’m not sure stunt or trick drivers are the best choice anyway but you make a good point. Do we know this individual’s qualifications or are we just guessing at Uber’s standards?


Some sources mention a three-week training course. [0] It certainly costs some money to put an employee through such a course, but Uber have the deep pockets here. I'm not saying Ms. Vasquez should be fired before an investigation is complete, and she may be exonerated entirely, but presumably any investigation will take at least three weeks?

[0] http://www.dailymail.co.uk/news/article-5532129/Uber-pilot-d...


They released inward-facing video of the safety driver shortly after the incident.

The driver was staring at her phone in her lap with only occasional glances at the road every 5 seconds or so. I would call that pretty negligent.


It was more likely the Uber iPad data logging app, which they've allowed drivers to use whilst moving.

I can't think it would help with the driver's night vision, either.


That certainly doesn’t sound good. Is there footage of the driver at the time of the collision?



to me that sounds like a standard Uber driver


>Presumably this individual has a lot of experience and domain knowledge in testing self driving car

How do you arrive at this?


Well they were in a self driving car working for Uber in the capacity of self driving car test driver so they have more experience than me or anyone I know. But that’s only a presumption as I said.


Has anyone been able to even remotely explain why the LIDAR system wasn't going nuts? I saw the "it was dark" nonsense, but I assume this vehicle had laser and IR right?

The camera footage was released, I'd like to see the lidar representation.


Author here :) LIDAR itself is just a sensor, it does not process the data nor does it output a directly usable image like a camera more or less does.

LIDAR in this case is a rotating laser and while it scans, the vehicle moves (imagine moving a paper when a copier scans it). All processing is done later, first to construct an image and then to understand and use it. Part of why I wrote the piece was to explain how things can go wrong even if your LIDAR works fine.


You don't have to "construct an image" to use LIDAR returns. Minimal processing on the point cloud will tell you that there's an obstacle, and you don't need more than that to start trying to avoid hitting it. A simple occupancy grid map, for instance, would suffice.

https://en.wikipedia.org/wiki/Occupancy_grid_mapping
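
A crude numpy version of that idea (cell size, height threshold and corridor width are invented):

  # Bin LIDAR returns into ground-plane cells and flag anything tall enough
  # in the vehicle's corridor as an obstacle.
  import numpy as np

  def occupied_cells(points, cell=0.5, min_height=0.3):
      """points: (N, 3) array of x (forward), y (left), z (up) returns in metres."""
      above_ground = points[points[:, 2] > min_height]          # crude ground removal
      cells = np.floor(above_ground[:, :2] / cell).astype(int)  # 2D grid indices
      return set(map(tuple, cells))

  def obstacle_ahead(points, corridor_halfwidth=1.5, max_range=30.0, cell=0.5):
      for cx, cy in occupied_cells(points, cell):
          x, y = cx * cell, cy * cell
          if 0.0 < x < max_range and abs(y) < corridor_halfwidth:
              return True
      return False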


The problem with naive occupancy grid mapping from sparse LIDAR data is that things like birds, falling leaves, and pieces of paper or plastic bags flying in the wind can mark the grid as occupied.

Emergency braking for all these cases would be very dangerous. The same object must be scanned multiple times to get an idea of whether it is something to be avoided.


Quite true -- but LIDAR has a very high refresh rate, and it's not hard to do better than naive grid mapping. This was a full height pedestrian plus bicycle, which would have multiple nice returns from a Velodyne HDL 64 at these distances.


Birds, plastic bags, cardboard boxes, small pieces of wood, and pieces of tires are perfectly good reasons to apply brakes and drive with caution to avoid.


and yet you wrote "now one of them thinks it’s a dog" as if sensors did image classification.


all kinds of things can learn to think things are dogs. you don't need an image


The closest I can give you is a simulation of a scene modeled after the video using a very simple reconstruction of the site of the accident (based on google maps and street view)

http://www.blensor.org/lidar_accident.html


The camera video seen publicly is from a dashcam, not the vehicle's own sensors. The NTSB has mentioned this.


A couple of possibilities here: the LIDAR was turned off, or the system just blatantly failed to categorize the pedestrian properly. Given that she was walking a bike that had bags on the handlebars, the latter is my guess: the system just choked on that and was too stupid to brake for any object in the road that it might hit.


What makes you think it wasn't? You're making some assumptions about which of the many systems failed, I think.


You know what makes me think the LIDAR wasn't working? The part about the dead woman.


I thought there were suggestions that the LIDAR was turned off for testing.


Either the author works on toy systems, or is being disingenuous.

These are not problems with AI, they are problems with using only AI, and no viable-for-development system that I'm aware of in industry does that.


They should release all of their sensor data and logs from the incident, and then some of us could actually answer the question in the headline.

This should be standard procedure for any future incidents.


The NTSB has control of the investigation and the data for now. That should be standard procedure IMO.


Why isn't car driving AI put to the same rigour of testing and approval as that of airplanes? There are far more people on the road than in the sky, and yet, there's formal verification for airplane autopilot code but an accident like this is supposed to be "you couldn't have prevented it either?"

If no one could prevent such a thing, this AI should never drive a car!


More rigor is definitely required, but it may not be possible to formally verify AI driven code to the same extent. Aviation software is driven from formal requirements, has a strict set of coding rules, and the generated assembly code is compared to the original source code. Within the coding rules, the software isn't even allowed to dynamically allocate memory.

I have no idea how you'd build an AI system with those constraints, given that the computer essentially programmed the model itself by learning.


This guy seems to be talking specifically about a certain style of AI approach.


I admit I stopped reading halfway through, but managed to scroll to the last paragraph.

This article is a dance through many important topics in AVs. Yet it fails to actually answer the "Why collision avoidance is harder for an AI-based system" question. Some of its arguments say that systems with a smaller scope are easier and systems with a larger scope are more difficult; it brings up arguments about determinism in decision making; it brings up sensor sets. None of that is really about AI vs. hand-crafted rules, but about problems inherent to robotics as a whole. Again, it is a fine example of why the term AI is useless and harmful for discussions, as it blurs what is talked about considerably.


Unpopular question but here goes:

What if some crashes are unavoidable? e.g. somebody darts out in front of a mobile vehicle. We accept that trains are not at fault for striking “trespassers” on their railways.

Also, when we all drive cars with collision avoidance systems, who gets sued by whom? If my car e-brakes for no reason and I get rear-ended, is the guy who hits me still at fault like usual?

I believe computer control will be super helpful and is here to stay, but it's interesting to see it implemented in cars as emergency help, versus in modern commercial airliners (where autopilot and landing control systems are ubiquitous), where it is only relied on for the most routine, straight-shot flying.


> If my car e-brakes for no reason and I get rear-ended, is the guy who hits me still at fault like usual?

What’s different about your SDC slamming on its brakes vs you doing the same thing? That’s why the law says how much spacing is to be between cars!

> What if some crashes are unavoidable?

From all evidence which is publicly available at this time, it appears this accident was anything but unavoidable, so the question is a red herring.


> What if some crashes are unavoidable? e.g. somebody darts out in front of a mobile vehicle.

It should brake as much as possible. Even if the collision is unavoidable, reducing the kinetic energy available for the collision is still a good idea.

> We accept that trains are not at fault for striking “trespassers” on their railways.

When a train "detects" an "obstacle" (the train driver sees the "trespasser"), it goes into emergency braking.

> Also, when we all drive cars with collision avoidance systems, who gets sued by whom? If my car e-brakes for no reason and I get rear-ended, is the guy who hits me still at fault like usual?

When we all drive cars with enough automation, the automation should keep enough distance that the following car can brake without colliding.

> but it's interesting to see it implemented in cars as emergency help, versus in modern commercial airliners (where autopilot and landing control systems are ubiquitous), where it is only relied on for the most routine, straight-shot flying.

Airplanes have the unfortunate property that they can't simply stop in an emergency; stopping would be an even bigger emergency. Cars (and trains) can simply stop, they won't fall from the sky in that case.

But even then, airplanes do have emergency help from their automation: the TCAS has a similar purpose to a car's automated emergency braking, that is, to prevent a collision.


All I'm going to say is that I could tell this was a Medium post by the title...


I couldn't, but I could immediately tell once I clicked and the page was half-covered by persistent sharing dickbars. -.-


The author writes a long text about how this is "not as easy for an autonomous system to solve" just to make the obvious point at the end of the article.

It should be obvious to anyone that you need to compose systems of different criticality to build a safe autonomous vehicle.

Of course the "AI" system needs to be complemented with a safety critical auto brake and other fail safes.


I've been really interested in exactly this question - the technical drive to revisit moments when contingency (and tragedy) emerges. I've been working on an artwork around this, "Iterated Accident", which I just put online this morning, http://darkmttr.xyz/16/


This is shoddily written bunk. I would downvote if I could. Object detection from LIDAR/Radar data plus path planning is 'AI-based', and I guarantee you Waymo's systems (which are 'AI-based', though more old school and less ML based) would have done collision avoidance better than Uber.


This is an extremely unconvincing argument


I smell a straw man argument.




