"Facebook chatbot shut down" doesn't really show anything. It was a everyday failure of training that got overblown in the media.
"Google AI looks at rifles and sees helicopters": This wasn't new at all. Black box attacks have been around for a while. The paper purported to show a much more efficient attack, but it's not really a huge advancement.
In general, most of these aren't really what I would call "failures" of AI. Some of them are illuminating parts of ML that are problems (copying society's bias, adversarial attacks), but most of the rest are simply things being overhyped/regular design errors (anything to do with Alexa/google home).
Nonsense. These types of problems have been “illuminating” researchers for the last 5 decades. These are real shortcomings and will not be solved for another 5 decades or more. It’s unbelievable how deep in denial the AI folks are.
Adversarial attacks are a bit more than just that: they reveal how little intuitive knowledge we have about what the algorithm actually understands, in the sense that we cannot build any intuition about what will trigger the 1% of mistakes these cognitive ML-based algorithms make.
In other words: if a gun can be mistaken for a helicopter just by altering a few invisible pixels, then the behavior of that system is by definition not similar at all to “vision” in its common sense. And if it isn’t, then what is it?
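For what it's worth, the mechanics of these perturbations are simple to sketch. Below is a minimal white-box FGSM-style example in PyTorch, assuming a hypothetical pretrained `model`, an input `image` tensor, and its `true_label` (the rifle/helicopter result was a black-box attack, so this is only illustrative of the general idea):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.007):
    """Nudge every pixel by at most `epsilon` in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # The change per pixel is invisible for small epsilon, yet the predicted
    # class often flips -- which is exactly the unsettling part.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```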
They changed a lot of pixels in a way that led a human to classify the image as not notably changed but led one AI to classify it as a different type of object. Of course the same thing is possible with human and AI reversed: there are images that are obvious to some AI but that would be misclassified by humans.
Humans do have the advantage that our vision system can draw on a huge repository of real-world knowledge and has been fed with decades of high resolution training data.
Humans also have the advantage of understanding what world they’re living in, and of being capable of reasoning. The way you describe human cognition using ML terms makes me think you’re confusing the map with the territory.
> Humans do have the advantage that our vision system can draw on a huge repository of real-world knowledge and has been fed with decades of high resolution training data.
My impression was that ML image recognition systems have exactly the same to draw on, no?
Maybe it's not a great list, but as far as I know, adversarial attacks are still a major unsolved research problem. Why pretend otherwise? They belong on the list, don't you think?
I'm not trying to pretend otherwise. Of the problems on this list, I think adversarial examples and bias are real problems.
> Some of them are illuminating parts of ML that are problems (copying society's bias, adversarial attacks), but most of the rest are simply things being overhyped/regular design errors (anything to do with Alexa/google home).
It's a personal opinion that adversarial examples will always exist (they exist for pretty much all kinds of machine learning models), and that they might exist for human vision too.
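To make the "pretty much all kinds of machine learning models" point concrete, here's a toy sketch (all numbers made up) of why even a plain linear classifier is vulnerable: a per-feature change too small to notice still shifts the score by epsilon times the L1 norm of the weights.

```python
# Adversarial examples are not unique to deep nets: even a plain linear
# classifier can be flipped by a small, dense perturbation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # weights of a hypothetical linear classifier
x = rng.normal(size=1000)          # an input the classifier scores correctly
epsilon = 0.01                     # tiny per-feature change

score_before = w @ x
score_after = w @ (x + epsilon * np.sign(w))   # shift each feature by +/- epsilon
# The score moves by epsilon * sum(|w|), which grows with dimensionality, so a
# perturbation invisible per-feature can easily cross the decision boundary.
print(score_before, score_after, epsilon * np.abs(w).sum())
```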
Assuming you're talking about me, what in my comment came off as defensive? I definitely do not consider myself part of the whole "futurology"/Elon Musk/AGI in 15 years group, but I don't think this list exemplifies machine learning's weaknesses at all.
When I read anything about “AI” that’s not just hype or some “futurist” predicting that we’re on the cusp of something huge that’s just around the corner, it makes me think we still have a long way to go before it’s actually “intelligent”, and that the rest is nothing more than PR/marketing.
We do have a long way to go, but AI has made huge strides over the past few years, irrespective of the hype/marketing. Notably this year, AlphaZero, advances in generating images (https://github.com/tkarras/progressive_growing_of_gans), voice synthesis with Tacotron, etc. were all very impressive results. Not to mention all the recent advances in the theory of deep learning.
It's good to be on watch for the bullshit (the news stories about Facebook's chatbots' "language", most futurists' predictions, any hype about general artificial intelligence, replacing x% of jobs in y years), but don't throw the baby out with the bathwater.
IMO, unlike blockchain, AI/ML is undoubtedly here to stay, and it will have a huge effect on everything.
I think blockchains are here to stay, and that's fine; they're extremely interesting. I feel that at the moment they suffer from a problem a few other technologies have suffered from: they're a solution looking for problems to solve. By that I mean that there's currently a trend towards applying blockchains to every problem, regardless of whether the solution requires a blockchain at all. Do you want to make a transfer? Blockchain. Do you want to manage inventory? Blockchain. Do you want to help starving children in Africa? Blockchain. Do you want to transfer files? Blockchain. All of those are problems that don't need a blockchain to be solved, and a blockchain is arguably not the best way of handling them, but because blockchains can be used as part of a solution, and because it's a trendy technology, there's a huge effort towards applying it to them anyway.
I don't think it's a bad thing to try and apply new solutions to old problems but it just feels that we're trying to apply it too much to problems that are otherwise already solved.
> it makes me think we still have a long way to go before it’s actually “intelligent” and nothing more than PR/marketing
That doesn't mean defense contractors, for instance, won't keep winning contracts based on that PR/marketing, while causing who knows how many innocent lives to be taken because they overstate the effectiveness of their AI.
That's just one example where AI is already "real" today, in the sense that many companies and government organizations have started deploying it and causing real harm to people because they think it's much better than it really is.
Other examples include anything from the AI being used in the justice system to AI "simply" being used to censor "porn" but censoring completely unrelated things in the process.
The most interesting thing about this is that it's not interesting. None of these cases are very bad. The self-driving car accident is the most severe, yet it was a minor accident and the car wasn't even at fault.
I mostly agree; however, regarding the bus accident, the problem is that an AI has to be way better than the human out of the box in order to gain widespread acceptance. That means it needs to be able to make tough calls, and do it better than the humans it is meant to replace (or complement). The description of the incident in particular shows why self-driving AIs have such a long way to go.
This may not be so. I expect many people would see the good in technology that prevents accidents and deaths even if it isn't perfect in its first release and that those who don't feel this way at the start might be brought around by good arguments.
Speaking of which, there's a study by some RAND Corporation researchers (described on RAND's blog [0] and here [1]) about how it is likely a good idea to get the technology out even before it is perfected, not only to save lives now, etc. but also to speed up the perfection of the technology -- the rubber needs to hit the road, so to speak.
The Apple Face ID hack could have long-term ramifications, no? Especially with anybody who doesn't read these articles believing the system to be completely secure.
Eh, it requires a lot of effort/technical skill/money to break, as well as good photos of the target. Oh and physical access to the device. It's very, very far from the worst security vulnerability ever. Remember the days when this worked? http://i.imgur.com/rG0p0b2.gif
You can make similar lists about "Airline Industry Failures in 2017" or "Software Failures in 2017" and so on. No one has stopped flying or using software.
One thing you have to realize about the law is that it doesn't translate to a rule set, as understood by computers, at all. If that were the case we could just implement the law in code; there would be no need for judges or lawyers, and every case would be clear. But the law isn't just a bunch of logical statements that you can run any given situation against; instead it constantly leads to edge cases or even contradictions. People practicing the law are interpreters who often have to argue about how a real-world situation even fits into this rule set.
It's weird if you think about it, but from a logical standpoint, whether or not you broke the law is often undefined.
Yes, people in tech often have this misconception about the law, and they think they can find clever loopholes or gotchas where a judge would really say ‘that’s clearly not a reasonable interpretation’.
Well, very extraordinary stuff does happen and sometimes you’ve got to break the law and hope that traffic court will see your case given the circumstances.
For example, what if a light breaks right in front of you and is now stuck on red?
Happened to me once. I was a passenger, so I got out, went to the corresponding light to check it was safe, and confirmed it was broken, not just on a long delay.
Self-driving sensors are better than eyes, but it is possible for there to be a situation where the car can’t know if it is safe. This has probably been thought about more by the designers than by all the armchair critics combined, of course, but even if not, the law is the lowest common denominator of backside-covering.
Yeah, and if IoT really takes off, it would be trivial for a car to use the city’s API to do a health check on the light many times faster than a human could (a sketch of what that dependency might look like follows below). But now we’re introducing dependencies on APIs coded by God knows who into our heavily regulated, self-driving car.
But yes, engineers probably have thought about this; still, there’s an infinite amount of stuff that could happen which is impossible to control for. What about falling trees across the road? Dirt roads that are way behind on maintenance and would require you to drive off-road a bit?
I’m not saying I believe AIs can’t handle this, but they would need room for improvisation to do it. When an entirely new situation occurs humans make decisions, argue in court, and then the ruling becomes precedent.
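A hypothetical sketch of that city-API dependency (the endpoint, the JSON schema, and the timeout are all made up; the only real point is that the car needs a fallback when the API is slow, down, or wrong):

```python
from typing import Optional
import requests

SIGNAL_STATUS_URL = "https://city.example/api/signals/{signal_id}/status"  # made-up endpoint

def signal_is_healthy(signal_id: str) -> Optional[bool]:
    """True/False if the city reports a status, None if the answer can't be trusted."""
    try:
        resp = requests.get(SIGNAL_STATUS_URL.format(signal_id=signal_id), timeout=0.2)
        resp.raise_for_status()
        return bool(resp.json()["healthy"])  # the field name is an assumption
    except (requests.RequestException, KeyError, ValueError):
        # Timeout, network failure, bad JSON, missing field: back to the on-board sensors.
        return None
```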
Ideally the lights would have cameras and report themselves. FWIW, this was a temporary set of lights for roadworks in the middle of the Welsh countryside with little or no mobile signal.
This makes me wonder, as I've seen many ask for an "AI government" and such: will our future societies be ruled by constantly-watching and always-listening AIs that follow the absolute strictest version of the law and automatically punish anyone who breaks it in the slightest?
I remember a few years ago we were arguing about the fact that it's not a good idea to have a society where 100% of the people are punished for breaking the law 100% of the time, because then you can't have any progress. Who's to say the version of American society today is the pinnacle of a modern society and that no law should ever change?
However, for change to occur, some people first need to break the law and begin to make that new thing culturally accepted, before there's a strong enough movement to support a law change.
But if the AI automatically sees your first instance of breaking the law and then automatically punishes you for it, how is that change ever going to come about?
These things won't just "work themselves out", if by that you mean anything but people viciously fighting for their rights against an AI-totalitarian state over the next few decades. Just like AI today can sometimes mistake a rifle for a helicopter, we'll need to keep "updating" the AI along the way to include these rights, and hopefully before too many people have to suffer in the process. Hopefully that process will be much shorter than the time it takes today for a law passed by Congress to be struck down by the Supreme Court when found inadequate.
This is very fascinating. It would also be hard for people to persuade legislators to change the laws, because if they say "this law is intrusive in my life and puts good people in jail" they just sound like complaining criminals. What's so different between them and a guy who argues he was justified in punching the guy who slept with his wife straight in the face?
There is no obvious victim when selling a drug, if the transaction is voluntary. We already accept that by allowing the sale of some drugs that have historically been consumed.
If you want a future-proof law, you should start from first principles and not from path-dependent current laws, where selling alcohol is OK but selling heroin is a crime.
Bouncers are fooled by people wearing paper masks of the people depicted on a photo ID?
When bouncers let people enter a bar with a photo ID that does not match the person in question they are not failing to identify a human. They are failing to give enough of a shit to carefully examine the picture in question.
The Alexa one is confusing. Alexa asks for a PIN when you buy something. Maybe the little girl knew the PIN, but the newsreader's voice would not have accidentally triggered any actual purchases.
Is that a new feature? I’ve definitely purchased things on alexa in the past with no pin. It did ask for a verbal confirmation before pulling the trigger but no pin was required. But I don’t think I’ve purchased anything in a while so I don’t know if it’s changed.
Sure, but a trained human would probably pick up that something is not quite right, especially if the family situation had been revealed earlier.
My neighbor has an identical twin and one day his twin came visiting, but I had no idea he was coming. He was standing in front of our intercom that’s been broken for years looking bewildered when I got home from work, and since I’d heard about my neighbor’s identical twin before it didn’t take me long to piece together what was going on, even though I’d never met this guy before and I had no idea he was going to be there.
(I work in telephony, currently on voice recognition, voice ID, and related tech.) The current state of the voice ID tech we were offered to buy is very dismal: it can identify a person, but only if the pool of people is approximately 5K.
Pretty sad if you ask me, and perhaps it will never get better, as it is tougher to tell a person by their voice than by their looks.
And in any case, remember children: biometric data is a user ID, not a password!
The failure here is not an AI failure, it is a security failure, and specifically a rush to deploy 'smart' technology without a sufficiently thorough consideration of the ways a sufficiently resourceful and determined attacker might exploit it. Unfortunately, I can say with some confidence (based on the repeatedly-demonstrated persistence of complacency) that we will see many more examples.
Apple claims the risk that a random person is recognized by Touch ID is 1 in 50,000. So while in theory every person has a different fingerprint, there are technical limitations.
If the HSBC voice ID has a similar failure rate for random voices, then is it just as secure?
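A back-of-the-envelope way to compare them, assuming (and this is an assumption) that both can be modeled as an independent 1-in-50,000 false-accept chance per attempt:

```python
# Chance of at least one false accept after k independent attempts, assuming a
# per-attempt false-accept rate of 1 in 50,000. Purely illustrative: real systems
# throttle attempts, and the true rates for fingerprints vs. voices surely differ.
far = 1 / 50_000

for attempts in (1, 5, 100, 10_000):
    p = 1 - (1 - far) ** attempts
    print(f"{attempts:>6} attempts -> {p:.4%} chance of at least one false accept")
```

The point being that "as secure" depends as much on how many tries an attacker gets as on the per-attempt rate.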
Democratisation and solving some local problems are the convincing bread and butter of the most recent progress in the AI field. Generalization efforts are nowhere near the hype and the media BS yet, but AI is already dangerous enough if used by malicious parties. All in all, it is a case of media smoke because there is some fire already.
Maybe the biggest failure is to call it 'intelligence'. It seems like just a ball trained to fall into a certain sink. Maybe call it artificial selection? Experiments with 'selection' in human history didn't have the nicest outcomes. Curious what the artificial one will produce :o
"Selection" is responsible for all good outcomes, including the entire course of natural history that led to your comment. Natural selection got us mitochondria, eyes, legs, thumbs. Sexual selection gave us a brain capable of reasoning about itself. Machine learning is just the next step in that progression. It's amazing that the culmination of 3.7 billion years of selection is a mind that can come up with elaborate ways to cast aspersions on the selection mechanism that spawned it.
You know why I love machine learning? It's immune to anti-intellectual fads. It speaks the same truth that children do. While you can make an algorithm spit out the answers you want, you have to explicitly train it to do so, making the selective blindness and hypocrisy we demand crystal clear.
It seems difficult to combine formal verification, which is rooted in symbol manipulation, with deep learning, which is based on high dimensional stacked distributed representations.
While his reply seems like a hammer in search of a nail, his general point has merit, and is something that has been a topic of contention recently in deep learning (with Ali Rahimi's NIPS talk).
What he calls formal verification is what we would call regular math in machine learning. Deep learning is sorely lacking hard bounds for all sorts of things (generalization, etc.). It's something that's gotten substantially better over the past year, but is something that needs a lot of work.
Generalization bounds aren't even hard, though. They're in the form of "probably approximately correct": there's no way to guarantee that any particular example won't be misclassified.
I think I'd still consider those bounds, of the form "x% chance that the result is within epsilon of optimum", as hard. Otherwise, you'd have to discount the entire field of randomized algorithms as not having hard proofs.
With randomized algorithms you can get within an arbitrarily small epsilon chance of getting a wrong answer, and it’s not unusual to get on the order of machine error in practice.
PAC guarantees are very different. They generally:
- Need data proportional to 1/eps^2 (see the sketch after this list)
- Only reduce the “variance” component of the “bias + variance” (you’ll never fit a linear model perfectly to a nonlinear dataset, regardless of PAC)
- Get worse as you decrease bias
- Assume your data is IID and drawn from the same distribution as the test data. In TFA, several of the examples of bad behavior come from data which is distributed differently from the training data (adversarial examples, or automated cars not moving out of the way)
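For the 1/eps^2 point (first bullet), here's the textbook Hoeffding-plus-union-bound result for a finite hypothesis class as a sketch; finite |H| is of course an assumption that doesn't hold for neural networks, which is part of why the classic bounds end up vacuous there:

```python
# With probability >= 1 - delta, every h in a finite hypothesis class H has
# |empirical risk - true risk| <= eps once m >= ln(2|H|/delta) / (2 * eps^2).
import math

def pac_sample_size(hypothesis_count: int, eps: float, delta: float) -> int:
    return math.ceil(math.log(2 * hypothesis_count / delta) / (2 * eps ** 2))

m1 = pac_sample_size(10**6, eps=0.05, delta=0.01)
m2 = pac_sample_size(10**6, eps=0.025, delta=0.01)
print(m1, m2, m2 / m1)  # halving eps roughly quadruples the required data
```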
I don't think we're really disagreeing here. Afaik, hard bounds aren't a well defined mathematical concept, and it's fine to have different ones.
What I initially meant by "hard bounds" was any kind of mathematical proof more rigorous than "well, dropout kinda makes your neural network not rely on one feature so that's why it generalizes".
As for your points, I don't think they're really criticisms of PAC bounds.
I'm not familiar with the first point, but it'd be surprising to me if most PAC bounds had that, considering PAC is a framework and not a specific technique...
Your second point is irrelevant to generalization. You're looking for theory about capacity I think? I think learnability also comes into play.
3rd: that is indeed what the bias variance trade-off would imply. It's also why most classic PAC bounds are vacuous for neural networks.
4th: I think that's a fair assumption to make for any meaningful study of generalization.
There is no formal specification for what these algorithms are supposed to do, so you can't verify anything.
Machine Learning isn't a mathematical discipline.
You could still verify some things, e.g. if you expect the model to have a certain kind of symmetry, or always produce outputs within a certain range, or something like that. Of course it would be best to encode those expectations in the model structure or training algorithm, but it might be useful to know whether you got it right.
For the common case of "The model misclassified a data point, no idea why.", formal verification doesn't help, but that doesn't make it completely useless. Probably not worth the effort, though.
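As a sketch of that kind of check (assuming a hypothetical image classifier `model` that returns class probabilities; the mirror-symmetry check only makes sense if you actually expect that invariance):

```python
import torch

def check_model_properties(model, n_samples=100, tol=1e-4):
    """Spot-check coarse properties: valid probability outputs and mirror symmetry."""
    x = torch.rand(n_samples, 3, 32, 32)            # random images in [0, 1]
    with torch.no_grad():
        p = model(x)
        p_flipped = model(torch.flip(x, dims=[3]))  # horizontally mirrored inputs

    # Range / normalization: outputs should be valid probability distributions.
    assert (p >= 0).all() and (p <= 1).all()
    assert torch.allclose(p.sum(dim=1), torch.ones(n_samples), atol=tol)

    # Expected symmetry: only meaningful if the task should be mirror-invariant.
    assert torch.allclose(p, p_flipped, atol=0.05), "model is not mirror-invariant"
```

It proves nothing about individual misclassifications, as you say, but it can catch a wrongly-wired output layer or a model that silently breaks an invariance you encoded elsewhere.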
"Facebook chatbot shut down" doesn't really show anything. It was a everyday failure of training that got overblown in the media.
"Google AI looks at rifles and sees helicopters": This wasn't new at all. Black box attacks have been around for a while. The paper purported to show a much more efficient attack, but it's not really a huge advancement.
In general, most of these aren't really what I would call "failures" of AI. Some of them are illuminating parts of ML that are problems (copying society's bias, adversarial attacks), but most of the rest are simply things being overhyped/regular design errors (anything to do with Alexa/google home).