Is artificial intelligence permanently inscrutable? (nautil.us)
124 points by peterbonney on Sept 1, 2016 | 64 comments



This is not just an issue for neural nets, but also for brains. Our interpretations of our own actions should always be considered post-hoc rationalizations in the absence of some falsifiable experiment that demonstrates the validity of the interpretation. Human brains are excellent at creating a coherent story about the world they experience based on the data at hand, so we suffer the same kinds of issues, mitigated only by the fact that we have inherited developmental programs that have been subjected to a huge variety of adverse situations that rigorously tested their performance (by killing anything that failed).


You're hand-waving a real problem by asserting rather misanthropic ideas with scant justification.

Show some child a cat and ask why it's not a crocodile. They will be able to explain it through the differences in shape, color, behavior and other features of those animals. Whether you consider it post-hoc is unimportant. The explanation is still real and relevant.

So people can explain their deliberate reasoning, and can reason about their reasoning. If they truly can't, it's called a gut feeling, and no one rational trusts those for solving complex problems with important outcomes. Especially at scale.

Neural networks, at least right now, have no capabilities of this sort. There are some attempts at visualization, but they are inherently limited, because they must make assumptions about the domain of the problem.


Scant justification has certainly been written here. Your example of the child is not a good one in this context, because the child can only do it based on knowledge or based on 'they look different.' Well, it turns out black people and white people 'look different,' but these days we know that we are all members of the same species.

The examples I am more worried about are not knowledge-based (or judgment-based). The ones I am worried about are the stories we tell about the _actions_ we take, why we choose A instead of B. Answering these requires us to introspect on our own behavior, and that is exceptionally difficult. Take for example a question like "how did you know it was safe to cross the street?" Many people will tell you that they 'looked both ways and saw nothing was coming,' but if you have them wear earplugs they will get run over, because it was really the fact that they were listening for the cars. Or say you meet someone and decide you don't like them; you can't really go back and undo that, but it might actually have been the case that you didn't like the perfume they were wearing because it was the same one your ex used to wear, and it had nothing to do with the person themselves. Virtually no one can catch all the cases like this, because we are trapped by our own brains and our own blind spots. Buying your brain's bullshit without attempting some independent verification (e.g. by asking someone for another perspective or doing little experiments on yourself) is an exceptionally common and difficult problem to overcome.


Most actions people take are not the result of deliberate reasoning.

You may be able to explain why a cat is different than a dog, but your brain doesn't go through that categorization process before recognizing a cat.


Eh, I disagree, since a lot of times we rely heavily on visual cues to make assessments and determinations, and are often wrong just because of poor vision.

Take for example persons with androgynous features and the difficulty some people have with identifying them. Children are particularly useful for this insight since they are unabashedly curious and inquisitive, and it's not uncommon to hear a child blurt out "is that a boy or a girl?"; it would seem to me that their categorization process, even if not refined, gets goofed by androgyny; something as simple as hair length or facial hair greatly influences a child's ability to easily discern whether a person is male or female.

At some level there is a categorization process that seems to happen with humans when it comes to recognition; we just assign high confidence to certain factors. In the case of children, it seems they look at a person and use common factors like hair length, facial hair, and body shape to determine gender. The sound of one's voice also helps, but this usually isn't something we can pick up as easily or from afar in public places. As adults, we probably have more refined points, but they are essentially the same as a child's. Instead of "body shape," adults look for specific facial features, size of breasts, walking gait, style of clothing, hair style, etc. These are all just granular refinements of how a child understands it.

This gets into personal experience, but when I was in college, I had long hair and really fine skin/facial features. All the time I would hear little kids ask "why does he look like a girl?" or "why does that girl have a beard?". I had no issue with their confusion, they were just curious kids. Something confused their understanding, and once it was explained "sometimes men have long hair", they just sort of accepted it.


Right.

We've added instrumentation to our recognition capabilities in order to socialize and gossip about derived categories through language. In addition, we (can sometimes) use the formalized expressions of these categories prepared for socialization to ensure internal consistency.

Artificial neural networks that did something similar could have a number of major philosophical and practical advantages:

- enable decentralization of data sets

- grow with humans rather than apart from them

- enable identifying problematic internal inconsistencies


How do you know?


The intermediate steps don't usually bubble up to conscious levels.

Sometimes I "know" a design is bad. Explaining why to others requires stepping through a process that may or may not be the one my brain used to make that approach "feel" wrong.


> Human brains are excellent at creating a coherent story about the world they experience based on the data at hand,

One example of this is Anton-Babinski syndrome [1]. The patient can't see but is completely sure he can.

[1] https://en.wikipedia.org/wiki/Anton%E2%80%93Babinski_syndrom...


I strongly agree but I'm left wondering, what would a falsifiable experiment look like in this case?


If it's an assertion, we can test it. "The sky is blue" is an interpretation that can be re-tested. "My car has a flat" can be tested. If it's something that happened, we can't repeat history, so we're left digging up falsifiable evidence. This is the work of detectives when they solve cases, or historians re-evaluating historic facts. You don't re-test blood you no longer have, you're only able to scrutinize the analysis. You don't test again for genocide, you seek documents, records of communication, money trails, etc.

A super casual modern-day example would be an assertion like "Joe must be gay". You never know until you ask him, and that is if he's honest with you. But everyday life is full of gross approximations based on bias and partial information.

But the key, even in Joe's case, is you have to go back to the real source. If all we can do is talk about Joe, we can never know for sure. Much like celebrities we are so certain are gay, but haven't come out openly yet. So as long as computers can't leave their data centers, there really isn't any amount of data that can prove anything to them. They can't experience the truth, because ultimately the truth rests on experience and experience of evidence.


The pneumonia-asthma example seems to be an instance of Simpson's paradox [1]. The doctors held a strong (accurate) belief about asthma sufferers contracting pneumonia and acted in such a way that the data obscured an actual causal link (asthma as an aggravating factor for pneumonia). This is opposed to the canonical Simpson's paradox, where doctors acted on a strong (inaccurate) belief about severe kidney stones [1a] and again produced lopsided data that hid the best treatment option until the paradox was identified.

Humans have a very hard time uncovering so-called "lurking variables" [2] and identifying such paradoxes. I don't see how a neural network (or other machine learning tool) could do so on its own, but I don't know that much about machine learning. So, I guess I have two questions for the experts out there:

* If all training data is affected by a confounding variable, can a machine learning algorithm identify its existence, or is it limited by only knowing a tainted world?

* Once we have identified such lopsided data and understood its cause, how do you feed that back into your algorithm to correct for it?

---

[1] https://en.wikipedia.org/wiki/Simpson%27s_paradox

[1a] https://en.wikipedia.org/wiki/Simpson%27s_paradox#Kidney_sto...

[2] https://en.wikipedia.org/wiki/Confounding
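
To make the lurking-variable problem concrete, here is a toy Python sketch. The numbers are borrowed from the kidney-stone table in [1a]; the point is just the arithmetic of the reversal, not the code itself.

    # (successes, patients) per treatment, split by the confounder (stone size)
    groups = {
        "small stones": {"A": (81, 87),   "B": (234, 270)},
        "large stones": {"A": (192, 263), "B": (55, 80)},
    }

    # Within each group, treatment A has the higher success rate...
    for size, arms in groups.items():
        for arm, (ok, n) in arms.items():
            print(f"{size}, treatment {arm}: {ok / n:.0%}")

    # ...but pooled over both groups (ignoring the confounder), B looks better,
    # because A was mostly given to the harder, large-stone cases.
    for arm in ("A", "B"):
        ok = sum(groups[g][arm][0] for g in groups)
        n = sum(groups[g][arm][1] for g in groups)
        print(f"pooled, treatment {arm}: {ok / n:.0%}")

A learner fed only the pooled columns has no way to recover the per-group story unless the confounder itself is somewhere in its data, which is exactly what worries me about the first question.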


> Once we have identified such lopsided data and understood its cause, how do you feed that back into your algorithm to correct for it?

This is tackled in the recently popular field of study called 'counterfactual inference'.

http://leon.bottou.org/talks/counterfactuals


One method of fixing this would be to have the neural network make 2 predictions. The first would be to predict what decision the doctor would make. The second would be to predict what decision is actually likely to lead to the best outcome.

In cases where it's very likely the doctor would make a different decision, it should flag it for human review.
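
A rough sketch of that flagging step, assuming two already-trained scikit-learn-style binary classifiers (the model names and the 0.3 threshold are made up for illustration):

    import numpy as np

    def flag_for_review(doctor_model, outcome_model, X, threshold=0.3):
        """Flag cases where the predicted doctor decision and the
        outcome-optimal decision disagree strongly (threshold is arbitrary)."""
        p_doctor = doctor_model.predict_proba(X)[:, 1]    # P(doctor admits the patient)
        p_outcome = outcome_model.predict_proba(X)[:, 1]  # P(admission leads to the better outcome)
        return np.abs(p_doctor - p_outcome) > threshold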


The answer is no. The problem is that we don't trim the neural networks of their spurious connections, and instead we're stuck staring at these fully (visually) connected layered networks.

Once you start to trim out the spurious connections, you see that you are left with a logic design built from integration/threshold circuits instead of the straight binary circuits we're used to seeing. There are even certain universal network patterns that will emerge to perform different functions, just as in binary circuit design.

I wrote a paper about this in 2008 that's now been cited about 150 times. It's using Artificial Gene Regulatory Networks instead of Artificial Neural Networks, but the math is the same and the principle still holds:

http://m.msb.embopress.org/content/4/1/213.abstract
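
(To be clear, the snippet below is not the method from that paper, which uses artificial gene regulatory networks; it's just a generic magnitude-pruning sketch in NumPy, to illustrate what "trimming the spurious connections" of an ordinary feed-forward net can look like.)

    import numpy as np

    def prune_small_weights(weight_matrices, keep_fraction=0.1):
        """Zero out all but the largest-magnitude weights in each layer.
        keep_fraction is arbitrary; the point is that most connections
        contribute little, and what survives reads like a sparse circuit."""
        pruned = []
        for W in weight_matrices:
            cutoff = np.quantile(np.abs(W), 1.0 - keep_fraction)
            pruned.append(np.where(np.abs(W) >= cutoff, W, 0.0))
        return pruned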


Around 2006 - 2008 I participated in a Research Experiences for Undergraduates program at an AI lab. I used to get into arguments with grad students when I asserted it was intuitively obvious that given a neural network which recognizes two features, it should be possible to extract a trimmed network which recognizes one feature but not the other.


The trick to accurate interpretability is to decouple accuracy from explanations.

Just like an International Master commentator can explain most of the moves of a Super GM, so can an interpretable simple model explain the predictions of a very complex black box model.
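
A toy version of the "commentator" idea is a global surrogate: fit a small interpretable model to the black box's own predictions, and read the explanation off the surrogate. A scikit-learn sketch (the dataset is chosen purely for convenience; this is not Caruana's method):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X, y = data.data, data.target

    # The "Super GM": accurate but hard to inspect.
    black_box = GradientBoostingClassifier().fit(X, y)

    # The "commentator": a shallow tree trained to mimic the black box's
    # predictions rather than the original labels.
    surrogate = DecisionTreeClassifier(max_depth=3)
    surrogate.fit(X, black_box.predict(X))

    print(export_text(surrogate, feature_names=list(data.feature_names)))

The surrogate only explains what the black box does on the data you show it, which is both its usefulness and its limitation; LIME (linked below) does a local version of the same trick around individual predictions.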

The work by Caruana referenced in this article actually culminated in a method to get both very accurate models and still retain interpretability.

https://vimeo.com/125940125

http://www.cs.cornell.edu/~yinlou/projects/gam/

More recently there was LIME:

https://homes.cs.washington.edu/~marcotcr/blog/lime/

And there are workshops:

http://www.blackboxworkshop.org/pdf/Turner2015_MES.pdf

We will get there. 'Permanent' is a very long time and in the grand scale of things, deep learning is relatively new.


When I try to explain neural nets (specifically in vision systems) to people I basically explain how you take inputs in the form of images, label pixels/pixel groups in the images with what you want them to output in the future, and then do that thousands of times and continue to test the results.

Critically though, I will say something to the effect of "but if you try to break the net open and see how this specific net came to its result, it will look like spaghetti."

So it's a roundabout way of saying "junk in; junk out." That holds true for any learning system, including human animals. The thought process of humans is inscrutable thus far, and I think that future computing will be similarly inscrutable if we do it correctly.


I think this issue of "Explainable Machine Learning" and interpretability is just going to get more and more important as ML grows. It will also be important for verifying ML-based systems - another problem area.

See [1] for a discussion of both.

[1] https://blog.foretellix.com/2016/08/31/machine-learning-veri...


I really disagree and think the whole "There's no way to gauge results!" meme is low-impact FUD. FUD that isn't particularly dangerous, so people who don't know any better just believe it (like "Python can't be performant because of the GIL" or "Macs are better than PCs" or some other inanity).

> As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.

Nonsense! Cross validation. Develop hypotheses, develop subsets of data to prove or disprove given hypotheses, observe how the network reacts. All of these people complaining about not being able to understand what's going on are either reporters, bloggers, or machine learning dabblers looking to say something seemingly unconventional.
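
For instance, in scikit-learn terms the "develop a hypothesis, build a subset, observe the reaction" loop might look something like this (the digits dataset and the particular hypothesis are placeholders):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)

    # Gauge results: ordinary cross validation.
    print("cv accuracy:", cross_val_score(net, X_train, y_train, cv=5).mean())

    # Hypothesis: "the net leans heavily on the top rows of the image."
    # Build a subset that tests it and observe how the network reacts.
    net.fit(X_train, y_train)
    X_blanked = X_test.copy()
    X_blanked[:, :16] = 0  # digits are flattened 8x8, so this blanks the top two rows
    print("clean accuracy:  ", net.score(X_test, y_test))
    print("blanked accuracy:", net.score(X_blanked, y_test))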

From your linked article, which gives more specifics as to the argument:

> Because ML systems are opaque, you cannot really reason about what they do.

Yes, it is possible to reason about a system even if it is "opaque"; the discipline is called reverse engineering. Or the scientific method.

> Also, you can’t do modular (as in module-by-module) verification.

You can do "modular verification" in a variety of ways. Start with analyzing the behavior of each layer and how that changes as you incorporate more layers. It's beyond the scope of this comment to go into it beyond surface level, but there are a lot of papers written about it, google "analyzing neural network hidden activations" or something.

> And you can never be sure what they’ll do about a situation never encountered before.

Humans can never be sure what they'll do in a situation they haven't encountered. Neither can engineers. They can simulate the events that they can think of, but we can also do that with a neural network.

> Finally, when fixing a bug (e.g. by adding the buggy situation + correct output to the learning set), you can never be sure (without a lot of testing) that the system has fixed “the full bug” and not just some manifestations of it.

Fixing the "full bug" is often not something that can be done in traditional software development "without a lot of testing". Machine learning works the same way.

If you want if/then statements, use a decision tree. If you want strong accuracy on predictions, use a neural network. It helps to know what you're doing when verifying results. You will run into trouble if you don't know what you're doing, as per common sense.


There is no strong theoretical foundation for very complex and/or deep models. There are ways to gauge results, but right now, the media is right in that there are risks.

The pneumonia project referenced in the article had Caruana voting against implementing the most accurate model: A neural net. Instead they went with a way less accurate logistic regression model, one they could safely implement in production, inspect, explain, and defend to the doctors.

Nobody quite knows why neural networks work so well. There is a Nobel Prize waiting for someone or some team to solve this with mathematical (or physics) rigor.

Nodes in neural network layers can represent multiple features, or share feature representations. Do we know whether a neural net (and which part of it) is targeting skin color, or acne? Do we know whether credit risk models are targeting sex, even though we left out this feature (they may infer it from other features)? Depending on the application, this is important to know for certain.

Reverse engineering sure does work, but can we fully recover the source code of a program just by fuzzing inputs and looking at outputs? Or are we only looking at (perhaps a small part of) its behavior?

> You will run into trouble if you don't know what you're doing

Likewise: You will run into trouble, if you don't know for sure what your models are doing.

http://blogs.wsj.com/digits/2015/07/01/google-mistakenly-tag...

> as per common sense

https://en.wikipedia.org/wiki/Commonsense_reasoning#Commonse...


Well yes, you _can_ verify ML-based systems (and I discuss briefly how in the linked article). It is just that this is much harder with such systems than with HW or SW systems for which "you have the source code".

What I meant by "Because ML systems are opaque, you cannot really reason about what they do" (perhaps I was not being clear) was this: You can indeed _observe_ what they do, but being able to actually inspect the source makes verification much easier and more reliable: You know what parts of the logic you have covered, you can think of "danger areas" (and direct testing to them), and you can simply check whether all the cases you can think about have been covered in the source.

With opaque systems, you have no idea whether e.g. your ML-based autonomous vehicle will recognize people-painted-on-a-bus as people-on-the-road, until you actually test for that. And then you have to test people-painted-on-a-truck.

What I meant regarding "modular verification" is that you can check sub-modules according to some spec (or at least according to informal comments). This is quite different from what you can do when analyzing NN layers.

I suspect you are right in claiming (in your last paragraph) that one will always have to choose between "more understandable" and "more accurate". But I think we can do various things (the DARPA suggestion being one of them) to make the "more accurate" solution more verifiable.


If someone wants an analogy:

Nobody understands exactly how painkillers (aspirin, paracetamol etc.) work on the molecular level. Yet they are generally held to be quite useful.

Edit: electroconvulsive therapy is an even more extreme example; we have no clue how it works, but it's very effective on severe depression. We only know it works because some Italians back in the olden days, before ethics committees were invented, decided to electro-shock a bunch of "crazy people" to see what happened. The reason they tried it was that electroshocking of pigs for slaughter had been observed to give a temporary anaesthetic effect.


So how do we get the first sick volunteers for Dr. Neural Network?


AI's been in medical diagnosis for a long time and many people have been treated by it already. There was a news article a month or two ago from IBM for using Watson to do it, and it struck me as obviously a PR piece because "an AI system diagnosed a patient" is breaking news somewhere circa 1980, not now. Its scientific content was zero.


Indeed. But circa 1980 they would have been using expert systems, which typically were able to justify their reasoning.


We are talking about neural networks not general AI.


Or: nobody understands exactly how your doctor's brain reaches its conclusion about what's wrong with you, but you still trust it.


It is not like the doctor can explain in English when you ask him for details. Not at all.


Or, to put it in other words - how your car's AI decides that there's five cars around you probably isn't that interesting. How it decides that based on its input, it should probably slam the brakes, is interesting.


I have a difficult time getting psychiatrists to do exactly this, so no change here in that area.


The human brain also does massive dimensionality reduction on very large amounts of data, and a lot of unconscious processing, with much of it being beyond our capabilities of conscious introspection.

I think eventually, within a couple of decades, we will have AI that correlates well enough with human thought process, and has enough knowledge of the world, to be able to introspect and explain in various levels of detail, in natural language, images and other human readable constructs, why it has reached a certain conclusion. And we will be able to experimentally verify those explanations.


I've been saying that ML is much more like alchemy than science. They've pretty much given up on understanding the underlying mechanism because it's so complex, but that doesn't stop them experimenting, because they still get something that looks like a result. And hey, they can get paid for it.

Eventually it might grow into a full-fledged science, but it will probably take an awful lot of time.


I disagree about the underlying mechanism being complex. Machine learning algorithms are a class of equation-based systems and strategies for configuring these equations for a specific task. All our math and reasoning tools still apply to this class of algorithms, in principle.

Where we pass into alchemy, though, is the interplay of these basic components with each other and the parameters they encounter while running; this is where complexity happens. Part of this lies in the very nature of the tasks we use them for: we basically push a cart of raw data in front of a set of "AI" solvers and expect them to do something with it. When that doesn't work, we start over, tweak parameters, and try again.

I agree that there is no sufficiently useful intellectual framework for creating these artificially intelligent components, and that shows not only in the uneven success rates and performance, but also in the surprising fact that experts in very different AI systems can usually create components with similar performance characteristics for a given problem, despite using very disparate strategies.


I think you mean it's much more like science than math. Not that it's much more like alchemy than science.

Machine learning research is very empirical. With a philosophy of doing lots of experiments and tests, and discovering what methods work best. That's what science is. No scientific field starts off with complete knowledge and understanding, they have to do a lot of experiments to discover the general laws.

Some people dislike empiricism and want pure, provable math. Machine learning isn't a field of mathematics, at least not in practice. But that doesn't mean it's not a science.


They haven't given up at all. And more importantly, they've developed a huge number of "if it looks good it is good" mathematical theorems.

This means that under many circumstances, you can build a model that satisfies certain abstract properties, test it, and have a high probability of generalizability. I.e. we've circumvented the "understanding" stage.

In fact, I'd say that the core of machine learning (as opposed to merely statistics) is exactly this.


Isn't this all simply about correlation vs causation? Machine learning can find strong correlations and we can make predictions based on those correlations, but at the end of the day, the machine knows nothing about what is causing any of it, and hence is "inscrutable".

So it is up to us to fill the gap in our understanding because that is what machine learning ultimately says about the subject. It tells us what we don't know. If we knew all about the subject, our predictions would match the predictions of the machine because there is only one reality we're both observing. But if there is any gap, then the machine is telling us what we don't know, not what it (of all things) knows. It's just crunching numbers. It doesn't "know" anything.


Interesting article. Some things are weird. I don't know why a support vector machine is ranked better than Bayesian nets, or why they are both worse than ensemble methods w.r.t. interpretability.

However, I think the human should not be in the loop. The network should have another semantic layer that serves communication. It can be done from the ground up like Steels or Vogt have been doing.

In other words, yes we need insight, but I prefer it through introspective networks. The network should be able to explain itself.


Some people, for example medics and civil engineers, are held legally liable for the decisions that they make. If they are to use machines that help them make those decisions (and mostly they would like to, given the terrible business of killing people), then they have to be able to understand what they are being told to do, or they have to trust it enough to bet their futures on it. If the machine is literally infallible then you can imagine option b being exercised, but honestly, if you were threatened with five years of jail and you didn't understand why it was telling you to do something, would you sign it off?


Note the same logic applies to a self-driving car.


Is that Luc Steels you are referring to? I took a couple of classes from him at uni a little over ten years ago. What is he doing lately that you refer to?


Yes, definitely.

I don't think there has been much progress in his work though. :-) I asked him a few months ago if he saw some promising inroads to embodied cognition, but it didn't seem like it. Still, it's a part of AI that's neglected at the moment.

PS: Of course there is research, just not mainstream. Der, Parisi, Tani, Bongard, Prokopenko.


This isn't unique to neural networks at all. There was a machine learning system designed to produce interpretable results, called Eureqa. Eureqa is a fantastic piece of software that finds simple mathematical equations that fit your data as well as possible. Emphasis on "simple": it searches for the smallest equations it can find that work, and gives you a choice of different equations at different levels of complexity.

But still, the results are very difficult to interpret. Yes you can verify that the equation works, that it predicts the data. But why does it work? Well who knows? No one can answer that. Understanding even simple math expressions can be quite difficult.

One biologist put his data into the program, and found, to his surprise, that it found a simple expression that almost perfectly explained one of the variables he was interested in. But he couldn't publish his result, because he couldn't understand it himself. You can't just publish a random equation with no explanation. What use is that?

I think the best method of understanding our models is not going to come from making simpler models that we can compute by hand. Instead I think we should take advantage of our own neural networks. Try to train humans to predict what inputs, particularly in images, will activate a node in a neural network. We will learn that function ourselves, and then its purpose will make sense to us.

There is a huge amount of effort put into making more accurate models, but much less into trying to interpret them. I think this is a huge mistake, because understanding a model lets you see its weaknesses: the things that it can't learn, and the mistakes it makes.


I appreciate the sentiment of your comment, but what part of a neural net isn't interpretable? Indeed, they do require more careful examination compared to traditional learning techniques. You can examine the receptive field of each node to infer what it detects.
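
For a first hidden layer fed raw pixels, examining the receptive field can be as simple as reshaping a unit's incoming weights back into image space. A sketch assuming an sklearn MLP on the 8x8 digits (conv nets and deeper layers take more work):

    from sklearn.datasets import load_digits
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

    # Each column of the first weight matrix is one hidden unit's receptive field.
    # Reshape it to 8x8 and print a crude ASCII rendering of what it "looks for".
    field = net.coefs_[0][:, 0].reshape(8, 8)
    for row in field:
        print("".join("#" if w > 0 else "." for w in row))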


No, but Nautil.us, with its mandatory tracking cookies, is.


You should try self-destructing cookies.


We're using these things and we're not even sure how they work. Love it.

At least we should have a standard for characterizing their accuracy or something like that ...


> We're using these things and we're not even sure how they work. Love it.

Similar to the human brain in that respect. Ironic that the human brain used itself without understanding how it works to eventually create something it uses without understanding how it works.


Except human brains are validated by other human brains to be capable of performing a given task. The validations come in many forms with varying degrees of accuracy for the task required. Ex:

-college degree (Doctor)

-employee referral (engineer)

-FizzBuzz (engineer again)

-licensing exam score (driver)

-popular election (supreme commander of the armed forces)


The same validation scheme(s) can be applied to AI as well. That doesn't mean their inner workings are any more or less penetrable, just that they pass some minimal threshold of competence.

E.g., AlphaGo is clearly good at Go. Yet it makes moves that can appear inscrutable to a human expert.

(NB: I'm not arguing that inscrutability is or is not an issue for AI systems, just suggesting that third-party validations don't meaningfully address the issue one way or the other.)


Exactly. I think trying to validate end-to-end is tail-chasing. Reminds me of Adam Curtis's BBC documentary "The Trap," wherein people create a limited theory of human behavior based on game theory, but the attractiveness of numbers results in the reformation of society to better fit those numbers, even if they don't fully represent human behavior and leave us worse off than before.

It's like trying to create an AI to play chess but the creators focus so hard on keeping it from making moves that result in its losing the queen that they forget that the objective of chess is to win.


Maybe the community needs a little simulated annealing. It seems the communal views, approaches, and focus are stuck in a local optimum.

Think Different! Oh well.


To label so-called causative factors or even actual relationships (in a shifting...virtual...hyperspace) among potential relationships is a separate task than to make meaningful predictions or predictable changes. The Universe is inherently a system-less set of potentials. The strongest system is the one that is indeterminate in its methodologies. Systems are survivors of reduction processes.


I can't even understand why deep learning creates better predictions than regular neural nets. How does adding layers change anything?


In principle a shallow NN (1 hidden layer) can approximate any function. But it has a tendency to overfit and just "memorize" the inputs. The basic idea of adding additional layers, is that the early layers can learn very low-level features of the data, and later layers combine the low-level features into higher-level features. This tends to make the models generalize well.

A standard example is for a face detection algorithm. The first layer will do edge detection, the next layer will combine edges into corners and simple shapes, the next layer will maybe use those shapes to look for features like eyes, noses, mouths, etc., and then the next layer will maybe combine those features to look for a whole face.
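
In code, adding layers is literally just composing more feature-building stages before the final classifier. A Keras-style sketch, purely illustrative; the comments give the usual qualitative reading of each stage, not a guarantee of what the layers actually learn:

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(32, (3, 3), activation="relu"),   # low level: edge-like filters
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),   # corners and simple shapes
        layers.MaxPooling2D(),
        layers.Conv2D(128, (3, 3), activation="relu"),  # parts: eyes, noses, mouths...
        layers.Flatten(),
        layers.Dense(64, activation="relu"),            # combinations of parts
        layers.Dense(1, activation="sigmoid"),          # face / not a face
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()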

I wrote a more detailed answer here:

http://stats.stackexchange.com/questions/222883/why-are-neur...


I am no expert, but I think it allows a higher-order function to be arrived at. An example would be the output of a simple net, where the output is a linear combination of features. This would be extremely shallow, and while it will work for some things, there are going to be instances where it doesn't capture nuanced scenarios.

In a shallow net, maybe college student selection based on SAT scores gets a heavy weight/low threshold/whatever. In a shallow linear combination, this will likely always carry a large weight.

In a deeper net, it might be able to learn that SATs are a great predictor except when X, Y, Z, or some combination of those take some particular value, in which case it might be wholly irrelevant. The deeper it is, the longer it will take to train, but the more it can handle exceptional cases/trends and approximate reality.


No one really knows, there was a paper by Max Tegmark on HN yesterday with some new ideas and results, but I haven't had time to read it yet. http://arxiv.org/abs/1608.08225

The other responses to your question in this thread are as good as I could give you, but I would feel like I was recounting ideas that may be true but for which there is little evidence.


In general, it allows a better approximation of the solution function with far fewer hidden neurons. Sure, you could get arbitrarily close using a single hidden layer, but that hidden layer might need to be unfathomably large. Same idea for network topology in multilayer nets: a network could eventually learn to set a lot of the weights to zero, but training is a lot faster and more effective if you know a good problem-specific topology to start with. Deep nets make problems more tractable. Recurrence is the real game-changer, since then you've moved from non-linear function approximators up to Turing completeness (at least over the set of all possible RNNs).


Think of each layer as an opportunity to perform a level of abstraction or categorization. Concepts are built out of smaller concepts which are built out of smaller concepts; lots of layers allows lots of hierarchy in the concepts.

The first layer might recognize and respond to pixels in particular parts of an image, the next layer will group certain of those pixel responses together into an abstraction you might call a "line", the next layer will respond to certain groupings of lines and add a level of wiggle-room regarding where the lines are in the image, and the final layer will judge whether a combination of groupings constitutes the letter "A". Or at least, if you spent a bit of time poking at a deep network, giving it slightly different inputs, you might eventually conclude that this is what the layers were doing.

Without layers, you're basically just approximating a simple function or mapping with one level of abstraction.


Adrian Thompson's 1996 paper was about Genetic Algorithms. A poor overfitting example, considering the whole article is prominently about Artificial Neural Networks. Thompson's FPGA components were trained at room temperature, and the evolved circuits were unable to function well when the temperature deviated too much from 10 deg. C.


Artificial intelligence shows little promise of arriving any time soon, but it still shows promise in the long term.


AI has been ten years away for over 10 years so.. it's just around the corner now!


Yes, but these days, it can actually identify whether or not a photograph contains a bird.


“What machines are picking up on are not facts about the world,” [Dhruv] Batra says. “They’re facts about the dataset.”

This seems analogous to 90% of (random, unreplicable) science these days.



