Hacker News
Explainable Artificial Intelligence (XAI) Darpa Funding (fbo.gov)
134 points by Dim25 on Sept 8, 2016 | 40 comments



This is important because there exists a trade-off in statistical learning models: in general, the more flexible your models are, the less understandable they become[0]. Modern machine learning techniques are typically very flexible.

Gaining intuition into a model's reasoning is what gives us understanding and trust; in a word, transparency. When you strike a nail with a hammer, it's pretty predictable what might happen: the nail could get hit, the hammer could miss, or, very rarely, the hammer's head may fly off the handle. When you replace the hammer with a black box that works correctly 99.999% of the time but does something completely unpredictable the other 0.001% of the time, you have a volatility problem, because that unpredictable event may have unacceptable consequences. I think explainable AI could help with intuitive and more fine-grained risk analysis, and that's certainly a good thing in high-stakes applications such as defense.

[0] ISLR, Page 25: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20First%20Printing.p...
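The ISLR point can be made concrete in a few lines of code. A minimal sketch below (the dataset and the two particular models are my own illustrative choices, not anything from the thread): a logistic regression exposes one readable coefficient per feature, while a more flexible boosted ensemble of comparable or better accuracy has no similarly compact summary of its decision rule.

```python
# Minimal sketch of the flexibility/interpretability trade-off.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear model: each feature's influence is a single readable coefficient.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("linear accuracy:", linear.score(X_test, y_test))
print("first few coefficients:", linear.coef_[0][:5])

# Flexible ensemble: often more accurate, but its decision logic is spread
# across hundreds of trees with no comparably compact, human-readable summary.
boosted = GradientBoostingClassifier().fit(X_train, y_train)
print("boosted accuracy:", boosted.score(X_test, y_test))
```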


This is interesting because usually "human-level performance" is the benchmark by which AI is judged. If it can do the thing as well as a human can do the thing, then that's good.

Human intelligence isn't explainable, though. We can make up explanations but they're not based on our actual neural function (heck, we don't even know how that works in general, let alone how we generate specific outcomes).

https://en.wikipedia.org/wiki/Introspection_illusion


The introspection illusion and confabulation are fascinating, but they generally have more to do with "what was going on in your head when you took that action?" than with "can you explain why you think that?". XAI is not interested in psychological state, or in an explanation in terms of ion concentrations, voltage changes and synaptic activations. Nor, in the case of AI, in the meaning of the contents of vectors after chains of matrix-vector products.

A large part of human success is that we are able to transmit knowledge to each other and accumulate experience over generations. A human starting from scratch is not much more capable than its ape cousins.

Human beings can explain efficiently because we must learn concepts decomposably and can take deductive shortcuts. In its purest form we have mathematical proofs as explanations. Explanations work with core concepts and generate on that basis. E.g. we want to know A and we know B; B is applicable under condition C; we show that condition C applies, and with B, A can be shown in terms of...
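In its purest form that pattern is just a small proof. A minimal sketch in Lean (A, B and C are the placeholder propositions from the comment above; the hypotheses are assumptions introduced purely for illustration):

```lean
-- We want A. We know that condition C holds, that B is applicable under C,
-- and that B yields A. The explanation is exactly this chain of steps.
example (A B C : Prop) (hC : C) (hCB : C → B) (hBA : B → A) : A :=
  hBA (hCB hC)
```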

When you give big data to machines they work with raw correlations and output predictions. But humans have to be more creative because our scratch space is so small. Newton took Brahe's big data (by way of Kepler's laws) and compressed it into a theory of universal gravitation. Darwin made a relatively small number of observations, all things considered, and came up with On the Origin of Species.


Only a tiny fraction of human knowledge is transmitted through explicit explanation. Humans can't describe the wiring of their visual cortex, or the intuitions they feel. Expert chess players find it impossible to explain why they make the moves they do. Mathematicians can write proofs and explanations, but they still find it very difficult to transmit their underlying mathematical intuition. Even the purest fields of math can only really be learned through experience and practice, not just by following a few proofs mechanically.


Language allows us to communicate abstractions, which are then of course subject to interpretation. This is much better than what other animals can do; they lack language facilities as far as we can tell. The entirety of human knowledge cannot be passed on, but enough information is communicated that we can rebuild much of it in every generation.


That is very true. Transmittable knowledge doesn't capture the wiring that was required to produce it. I was watching the film about the Indian mathematician Ramanujan, who had an immense intuition for mathematics, and really wondering how mathematical intuition, or intuition in general, develops, and what intuition even means.

I am still not sure, but after seeing Schmidhuber's talk last week, I think it is the ability to find regularities/patterns, resulting in a compressed representation that lets you understand and compress some aspect of the world, and then building layers (abstractions) of those compressed representations. This allows you to navigate very large search spaces much, much faster.

It's a fascinating subject, and at the core of what we are. I recommend the film.

https://en.wikipedia.org/wiki/The_Man_Who_Knew_Infinity_(fil...


That doesn't contradict anything I said, see my first paragraph. The core point is that you don't need to have full metacognition in order to build a culture that learns over time by explaining things and transmitting concepts.

You don't need to explain every dendritic cluster in order to extract core, composable and generative abstractions useful enough to shorten time to understanding by orders of magnitude (discovery of zero vs how quickly it can be taught).


It might simply be a matter of comfort. We don't know how human cognition works, but we have a ton of practical experience with it, so we feel like we have a pretty good handle on why people make the decisions that they do, and how to train and shape those decisions. As a specific example, a human doesn't typically trigger their own death/destruction without a significant reason that can be expressed and potentially detected.

We know a lot more about how machine learning works--since we developed it ourselves recently--but because it is so new, we feel the need for deeper understanding to mitigate risks. To go back to the suicide example, a computer-driven system will destroy itself immediately for all sorts of reasons, including incredibly trivial errors.


Maybe human intelligence isn't explainable, but knowledge usually is. I think there's a ton of truth in that quote, "If you can't explain it simply, you don't understand it well enough."


This is fantastic. DARPA gets it. I look forward to whatever fruits come from this labor. Maybe one day I won't have to look at stack traces and reverse engineer 3rd-party dependencies to figure out why things are breaking. Maybe one day error messages will have explanatory power. Maybe one day IDEs will understand abstractions other than ASTs and types and instead will understand things that convey human intent that is not so closely tied to rigid constructs like type systems. What a wonderful world that will be.


> Maybe one day error messages will have explanatory power

I firmly believe that closely integrating the raw ML internals with the user experience will yield tremendous rewards.

The coding experience, with its stack-trace debugging loop (get stack trace -> google -> stackoverflow.com -> try a new thing), could be vastly improved and made to feel like an Akinator session [1].

How about having the IDE's console output be integrated into ML pipelines? You would have boxes with questions and suggestions like:

* Please select the words in the console output that are not supposed to appear.

* Is your current goal related to one of the following tags: a) library_upgrade, b) first_time_library_addition, c) <tag search box>?

* Please describe with tags and words the context you are in.

* Go read this stackoverflow page. Did it help?

* I see you have been doing x and y, and getting these errors. Would you be interested in this tutorial?

* Last week you had this problem. Is it resolved? What things (URLs, boxes I presented you, etc.) resolved it?

* Here is another context: eclipse, java, email, library_upgrade, ConcurrentModificationException. Are you having the same issues?

* Here are statistics about people who have been in the same context as you. Here is also the top remark they have made about it.

* Here is the decision-tree node your context is currently in. Here are all the child nodes (this lets you explore the tree without tainting your current context).

* Would you like to do some semi-supervised clustering for trying to tie your current context to other contexts?

And then have the dataset be openly accessible, with third parties able to provide boxes and publish new features, and with all boxes rateable.

Since it would implement the basic Stack Overflow feedback loop, it would yield a user experience at least as good as Stack Overflow's. (A rough sketch of what such a context record might look like is below.)

[1] akinator.com
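To make the idea a bit more concrete, here is a rough sketch of the kind of record such a feedback loop might pass around. Every name in it (DebugContext, Suggestion, suggest) is a hypothetical illustration, not an existing API.

```python
# Hypothetical data model for the IDE <-> ML feedback loop sketched above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DebugContext:
    tags: List[str]                    # e.g. ["eclipse", "java", "library_upgrade"]
    console_lines: List[str]           # raw console / stack-trace output
    flagged_lines: List[int] = field(default_factory=list)  # "not supposed to appear"

@dataclass
class Suggestion:
    kind: str                          # "question", "tutorial", "stackoverflow_link", ...
    text: str
    rating: int = 0                    # boxes are rateable

def suggest(context: DebugContext) -> List[Suggestion]:
    """Stand-in for the ML pipeline: map a context to ranked suggestion boxes."""
    if any("ConcurrentModificationException" in line for line in context.console_lines):
        return [Suggestion("question",
                           "Is your current goal related to the tag library_upgrade?")]
    return [Suggestion("question", "Please describe your context with tags and words.")]
```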


Are there any good Stack Overflow IDE integrations out there?



Thanks for sharing. Do you know any more about the authors / who might have been consulted to write this?

The contact listed in the document (http://www.darpa.mil/staff/mr-david-gunning) used to work for PARC and ran the PAL (Siri) program.


Great question. I have no idea who exactly is behind this (I just found it online), but according to http://www.darpa.mil/tag-list?tt=73&PP=2 some interesting folks joined them recently (after 2013):

Mr. Wade Shen (Program Manager – interests: machine learning) http://www.darpa.mil/staff/mr-wade-shen

Dr. William Regli (Defense Sciences Office (DSO), Deputy Director – interests: artificial intelligence, robotics) http://www.darpa.mil/staff/dr-william-regli

Dr. Reza Ghanadan (Defense Sciences Office (DSO), Program Manager – interests: data analytics, autonomy, machine learning and artificial intelligence in information and cyber-physical systems) http://www.darpa.mil/staff/dr-reza-ghanadan

Dr. Paul Cohen (Information Innovation Office (I2O), Program Manager - interests: artificial intelligence, machine learning) http://www.darpa.mil/staff/dr-paul-cohen

Most of them can be found on linkedin.


This brings Lime [1] to mind. "Explaining the predictions of any machine learning classifier"

[1] https://github.com/marcotcr/lime
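For the tabular case, using LIME looks roughly like the sketch below. The dataset and classifier are arbitrary choices for illustration, and the argument names reflect my recollection of the library's API, which may have changed since.

```python
# Minimal sketch of explaining one prediction with LIME's tabular explainer.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

# LIME perturbs the instance and fits a simple local surrogate model around it.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())   # (feature condition, weight) pairs for the local explanation
```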


That's what I was thinking of, too. If anyone wants the direct link to the arXiv paper, it's here: https://arxiv.org/abs/1602.04938 . One of the authors is Carlos Guestrin, one of the co-founders of Dato/Turi/GraphLab that was recently acquired by Apple, fwiw.


Right. This paper ([0]) is actually mentioned in the DARPA BAA ([1]) as an example of a possible direction. A somewhat similar scheme is [2]. Both seem to do some kind of sensitivity analysis, so as to show the user which parts of the input were most important for coming up with the decision. For instance, [2] "explains" an ML system (which answers questions about pictures) by telling you which pixels were most important for the decision. It does that by essentially hiding pixels and seeing how that influences the ML system's decisions.
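A toy version of that hide-and-measure idea (occlusion-style sensitivity) might look like the following; model_predict here is a stand-in for whatever classifier is being probed, not anything from the cited papers.

```python
# Rough sketch: slide an occluding patch over the image and record how much
# the target-class score drops, giving a crude "which pixels mattered" map.
import numpy as np

def occlusion_sensitivity(model_predict, image, target_class, patch=8):
    h, w = image.shape[:2]
    base = model_predict(image[None])[0, target_class]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # hide this region
            score = model_predict(occluded[None])[0, target_class]
            heatmap[i // patch, j // patch] = base - score   # big drop = important
    return heatmap
```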

So this produces not so much an explanation as "hints" as to why the system made the decision (still pretty useful). The BAA also mentions another possible direction ([3]), which is actually capable of making full-sentence explanations. For instance, it can explain the decisions of an image-to-wild-bird-name classifier with sentences like "This is a Laysan Albatross because this bird has a large wingspan, hooked yellow beak, and white belly".

This sounds pretty impressive, but seems to depend on vocabulary provided by a user. As a result, in some cases the explanation provided may have nothing to do with how the classifier actually classified - see [4] for my interpretation of these issues and how they might perhaps be solved.

[0] https://arxiv.org/pdf/1602.04938v3.pdf

[1] https://www.fbo.gov/utils/view?id=ae0b129bca1080cc7c517e8dad...

[2] https://computing.ece.vt.edu/~ygoyal/papers/vqa-interpretabi...

[3] http://arxiv.org/pdf/1603.08507.pdf

[4] https://blog.foretellix.com/2016/08/31/machine-learning-veri...


There was a machine learning system designed to produce interpretable results, called Eureqa. Eureqa is a fantastic piece of software that finds simple mathematical equations that fit your data as well as possible. Emphasis on "simple": it searches for the smallest equations it can find that work, and gives you a choice of different equations at different levels of complexity.

But still, the results are very difficult to interpret. Yes you can verify that the equation works, that it predicts the data. But why does it work? Well who knows? No one can answer that. Understanding even simple math expressions can be quite difficult. Imagine trying to learn physics from just reading the math equations involved and nothing else.

One biologist put his data into the program, and found, to his surprise, that it found a simple expression that almost perfectly explained one of the variables he was interested in. But he couldn't publish his result, because he couldn't understand it himself. You can't just publish a random equation with no explanation. What use is that?

I think the best method of understanding our models is not going to come from making simpler models that we can compute by hand. Instead I think we should take advantage of our own neural networks. Try to train humans to predict what inputs, particularly in images, will activate a node in a neural network. We will learn that function ourselves, and then its purpose will make sense to us. Just looking at the gradients of the input conveys a huge amount of information about which input features are the most and least important, and by about how much.
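As a concrete illustration of the input-gradient idea, here is a minimal PyTorch sketch; the model is a throwaway toy chosen for illustration, not any particular system.

```python
# Rank input features by the magnitude of the gradient of the model's output
# with respect to the input (a simple saliency measure).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(1, 10, requires_grad=True)   # one example with 10 features
score = model(x).sum()
score.backward()                             # computes d(score)/d(x)

saliency = x.grad.abs().squeeze()
print("most important feature:", saliency.argmax().item())
print("least important feature:", saliency.argmin().item())
```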

But mostly I think the effort towards explainability is fundamentally misguided. In the domains where it is supposedly most desirable, like medicine, accuracy should matter above all. A less accurate model could cost lives. Accuracy is easy to verify through cross-validation, but explainability is a mysterious, unmeasurable goal.


Eureqa uses (used? I haven't touched it in years) symbolic regression. Symbolic regression can certainly produce weird results, especially with low quality data, but I wouldn't rate it anywhere near classic ANNs in black-boxiness.

Also, science is unfortunately full of magic numbers - but it is pretty amazing when you feed in pendulum motion data and the resulting equation ends up being Newton's second law: http://phys.org/news/2009-12-eureqa-robot-scientist-video.ht...


Yes it's amazing that it can find an equation that fits the pendulum's motion. But if you didn't already know the physics math, it would just seem weird and arbitrary.


No more weird and arbitrary than the motion of the pendulum itself. I think you're underestimating the human intelligence component of the system. Include in the dataset an additional variable for rope length, record the data for multiple lengths - there, you've just isolated a variable in the equation and it is no longer arbitrary. The scientific method still works, but now you're working on a much more tightly bounded problem.


> But mostly I think the effort towards explainability is fundamentally misguided... explainability is a mysterious unmeasurable goal.

Do you think people should stop working on it?

Most interesting things seem difficult to measure, until someone finds a way, and then it seems obvious. An example is search engine quality. At first this might seem too subjective to be measurable. But Google started measuring search quality using a panel of humans, and now everybody does that.

The whole idea of these challenges is to broaden the search, to hope for key insights that by their nature seem elusive at first.


Whether or not we can ever come up with a good definition of "explainability", I still think it's misguided. In almost all applications of ML, accuracy matters far, far more. When predicting whether a patient has cancer, for instance, it matters a great deal that you get the highest accuracy possible. Every point of accuracy you lose to "explainability" means people die.


Until your model starts misdiagnosing people in real life because your training data wasn't 100% perfect (which it never will be for complex real-life problems).

https://vimeo.com/125940125

The problem with unexplainable AI isn't just the lack of explanations. It's the fact that the entire model is a black box, so you cannot somehow tweak a single part of it without retraining.


(Looks over shoulder)

Anyone else thinking this mirrors their own experimental work?

Anyone else thinking of putting in an abstract?

Abstract Due Date: September 1, 2016, 12:00 noon (ET)

Proposal Due Date: November 1, 2016, 12:00 noon (ET)


(I am relatively excited about this research direction. It seems like the sort of thing that might lead to genuinely useful components of a safer AGI system later.)


Yeah, all they want is a simple mechanism for jumping from mere "blind", mechanistic feature extraction to the notion that creatures in nature usually have two eyes, and then turning that into a hard-wired heuristic, a shortcut which improves pattern recognition by orders of magnitude at less computational cost.

Every child will tell you that cars have eyes, and even a crow could track the direction of your gaze.

Well, I would also give away some govt. printed money to know how to make this kind of a jump from raw pixels to high-level shapes.)

The answer, by the way, is that the code (which is data) should be evolved too, not just the weights of a model. This is an old fundamental idea from the glorious times of using Lisp as an AI language - everything in the brain is a structure made out of conses^W neurons.

And feature extraction and heuristics should be "guided". In evolution they are guided by an enormous number of iterations of training and random selection of emerging features. Eventually a shortcut like "creatures have eyes" is found and selected as much more efficient. We just need a few million years or so of brute forcing.
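For what it's worth, a toy sketch of the evolve-the-code-not-just-the-weights idea (plain genetic programming over expression trees; every operator, constant and parameter here is an arbitrary illustration):

```python
# Toy genetic programming: random expression trees over x are mutated and
# selected until one fits a target function. Illustration only.
import random

OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "*": lambda a, b: a * b}

def random_expr(depth=3):
    """Grow a random expression tree over the variable x and small constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, int):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(expr, target=lambda x: x * x + 1):
    return sum((evaluate(expr, x) - target(x)) ** 2 for x in range(-5, 6))

def mutate(expr):
    if random.random() < 0.2:
        return random_expr(depth=2)        # replace this subtree wholesale
    if isinstance(expr, tuple):
        op, left, right = expr
        return (op, mutate(left), mutate(right))
    return expr

population = [random_expr() for _ in range(200)]
for generation in range(50):
    population.sort(key=error)             # selection: keep the fittest programs
    parents = population[:50]
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

best = min(population, key=error)
print("best program:", best, "squared error:", error(best))
```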

Hey, Darpa, do you fund lone gunmen?)


Integration of Neural Networks with Knowledge-Based Systems https://www.uni-marburg.de/fb12/datenbionik/pdf/pubs/1995/ul...


If there is real progress towards Explainable AI, this would also be very useful for _verifying_ machine-learning-based systems (i.e. finding the bugs in them).

I wrote about this in [1], but I am not a machine-learning expert (I am coming from the verification side), so would love to hear comments from other people.

[1] https://blog.foretellix.com/2016/08/31/machine-learning-veri...


This is sorely needed for machine learning if it's to get both more complex and more accurate. Coincidentally Alan Kay brought the "expert systems" idea up in his recent AMA as well. It'd be inconceivable to write code today that couldn't be thoroughly debugged, so we should expect the same of our machine learning systems.


Bringing some form of feature introspection to deep neural networks will probably involve clever ways of visualizing the feature activations of unstructured data https://arxiv.org/abs/1603.02518
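A common first step for that kind of introspection is simply pulling out intermediate activations so they can be rendered as heatmaps. A minimal PyTorch sketch (the toy model and the choice of layer are illustrative, not taken from the linked paper):

```python
# Capture a hidden layer's activations with a forward hook; the per-channel
# feature maps are the raw material for most activation visualizations.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())

captured = {}
def save_activation(module, inputs, output):
    captured["conv2"] = output.detach()

model[2].register_forward_hook(save_activation)   # hook the second conv layer

image = torch.randn(1, 3, 32, 32)                  # stand-in for a real image
model(image)

# Each of the 16 channels is a feature map one could render as a heatmap.
print(captured["conv2"].shape)                     # torch.Size([1, 16, 32, 32])
```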


"If you can't explain it to a six year old, you don't understand it yourself." -Einstein


I like Feynman's version better. If you can't explain it to a college freshman then you probably don't understand it.


The goal is ambitious. I have similar ideas in my mind but do not have a team to complete the details.


My Aunt worked on systems for explaining early generation networks for medical diagnoses: http://link.springer.com/article/10.1007/BF01413743


"I know of an uncouth region whose librarians repudiate the vain and superstitious custom of finding a meaning in books and equate it with that of finding a meaning in dreams or in the chaotic lines of one's palm ... "

JL Borges, The Library of Babel


Off topic: I checked other FedBizOpps listings (the site is now famous thanks to the War Dogs film in theaters). There is a listing for a "Big Ass Fan" 16' long for the Air Force: https://www.fbo.gov/index?s=opportunity&mode=form&id=8de699e...


Apparently they had a preferred vendor in mind: http://www.bigassfans.com/


Well, if you were gonna submit an abstract, the deadline was a week ago. Good luck getting a grant proposal ready if you haven't already!



