Probabilistic AI can't be AI (refaktorlabs.blogspot.com)
9 points by middayc on Feb 8, 2014 | 29 comments



> then people would be naturally great at probability

Where do you get that conclusion? Think of a baseball player with exceptional hand-eye coordination who knows absolutely nothing about the math or calculus behind it. There is a vast chasm between conscious and subconscious thought (where the latter might be viewed as the underlying "intelligence model"). I don't agree that a (partially) probabilistic model would necessarily imply proficiency at the conscious, communicable level.

> we wouldn't benefit from learning about and consciously using probability to solve problems, as our brain would already do it on a lower level.

I don't understand this one either. Back to the baseball player - there is a lot of value we gain from our brains "naturally" solving e.g. inverse kinematics problems on the fly. But there is also a lot we gain from understanding these mathematical concepts at the conscious level. Totally different realms, both useful.

> Statistical methods generally need a big learning set to learn anything.

Ok, that's a limitation of current AI models. They're not smart enough to infer the essential characteristics of an elephant from a small training set (it's also a very narrow approach... you don't start a child off by showing them pictures of elephants; first they have to spend a long time acquiring the basic fundamentals of knowledge and perception). I don't see how that implies a more deterministic nature of organic intelligence.

Even the child might think something is an elephant and be wrong. But (IMO) it's the same process of using the bits of knowledge you have to make a guess (and an estimated degree of certainty). "It could be this thing or it could be that thing, and my model says it has the greatest probability of being this one, so that's what I'm going with."


Ok, based on your comment and many similar ones below, I only now see where my argument wasn't understood the way I intended. Not that I think you and others will now agree with me, but just to make this clear.

I do not think that people would "then be consciously good at probability" (I said naturally, and later added instinctively), but that the results of their unconscious processes / intuition would then match the probabilistic view of the problem better. Whereas it seems to me that intuition is exactly where we fail the most at probability (fear of planes, terror attacks, ...).


But the baseball player with exceptional hand-eye coordination catches the ball without ever learning the math or calculus behind it. With probability we don't "catch the ball" naturally / we suck at it naturally.

Otherwise I agree with what you said (and I am limited on time, sorry).


I stopped reading at that first point too.


> ...people would be naturally great at probability, and we know we suck at probability

Human bodies are based on molecules; that doesn't imply that people understand chemistry.

Computer chips are based on transistors; that doesn't imply that computers understand electronics.

Sorry, but your assumption that knowledge is inherited from its foundations would imply that everything understands the Universe.


I see now that I probably didn't express myself well. I am not saying we would understand probability, but that the results of our instincts should be in tune with the probabilistic view of the problem. For example: we would not naturally fear airplanes, terror attacks, etc., or...

If the human body is made of molecules, it does mean that we function as blobs of molecules.

If computer chips are based on transistors, then the output of computer chips is transistor-like.


First off, the title is terribly imprecise: you're not arguing that probabilistic AI can't be AI, you're arguing that animal intelligence is not probabilistic.

Second off, I personally find your theses unconvincing because the simple statements are easily refuted, and there's not enough supporting information to make them into tighter arguments.

For example, for your first bullet point, the underlying mechanism could be highly probabilistic, without our consciousness having access to the probabilities as they are executed. We do not have conscious knowledge of neuron potentials, or of any other internal state of the physical mechanisms of our consciousness, but that does not mean that they don't exist.

Further, people are bad at reasoning, period, and have to be taught how to do it in school. It's only through much training that we learn to reason from certainties, and reasoning with uncertainty will similarly require training.

Finally, I don't think anybody is saying that our brains run SVMs or RBMs or anything like that; those are merely computational mechanisms that replicate the behavior of complex networks of neurons. There may be direct analogs to computation in how neurons integrate chemical, electrical, and epigenetic signals, but it's not clear that they're exactly the same thing; and even if the mechanisms are completely different, that doesn't mean a probabilistic AI can't be an intelligence.

These are heady matters that people put a lot of careful thought into, and while there's definitely room for light discussion among friends, I'm not sure that this is the right material for HN.


you are right about the title, sorry.

Is there a consensus definition of what AI is that doesn't rely on how animal/human intelligence looks or functions?


I offer one crude definition: intelligence is what an agent needs in order to solve a wide range of problems without being specifically designed for those problems.


The argument itself is flawed as pointed out by others.

As for the subject of probabilistic AI, there's quite a lot of evidence that natural language grammar is to a large extent probabilistic. In fact I'd say that most natural language grammar rules are just surface representations of probabilistic models such as Hidden Markov models.
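
To make that concrete, here is a minimal sketch of the idea (not from the article; the tags, vocabulary, and probabilities are toy values I made up): a tiny part-of-speech HMM decoded with the standard Viterbi algorithm, where the "grammar rule" determiner-noun-verb falls out of the transition probabilities rather than being written down anywhere.

    # Toy HMM: the "grammar" emerges from transition/emission probabilities.
    # All numbers and the tiny vocabulary are invented for illustration.
    tags = ["DET", "NOUN", "VERB"]
    start = {"DET": 0.6, "NOUN": 0.3, "VERB": 0.1}
    trans = {"DET":  {"DET": 0.0, "NOUN": 0.9, "VERB": 0.1},
             "NOUN": {"DET": 0.1, "NOUN": 0.2, "VERB": 0.7},
             "VERB": {"DET": 0.5, "NOUN": 0.4, "VERB": 0.1}}
    emit  = {"DET":  {"the": 0.9, "a": 0.1},
             "NOUN": {"dog": 0.7, "barks": 0.2, "a": 0.1},
             "VERB": {"barks": 0.8, "dog": 0.1, "a": 0.1}}

    def viterbi(words):
        # Most probable tag sequence for the observed word sequence.
        best = {t: (start[t] * emit[t].get(words[0], 0.0), [t]) for t in tags}
        for w in words[1:]:
            best = {t: max(((best[s][0] * trans[s][t] * emit[t].get(w, 0.0),
                             best[s][1] + [t]) for s in tags),
                           key=lambda c: c[0])
                    for t in tags}
        return max(best.values(), key=lambda c: c[0])

    print(viterbi(["the", "dog", "barks"]))  # -> (0.19..., ['DET', 'NOUN', 'VERB'])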


This article is deeply flawed. First of all, its premise is flawed; secondly, its supporting evidence is flawed.

Example from supporting evidence:

"then people would be naturally great at probability, and we know we suck at probability"

No, this does not follow at all. There is absolutely no reason to assume that a probabilistic algorithm would somehow lead to better results when dealing with probabilities at a much higher level.

Example from premise:

- No argument is given for why "if real intelligences aren't probabilistic, then probabilistic intelligences can't work".

- Just because your models come from statistical learning (such as SVMs or Bayesian methods, as the author names) does not mean your resulting algorithm is probabilistic.

Flagging.


So you think that we use probability (and are good at it) on a low level, but that doesn't translate into being proficient at probability on a higher level. Where is the border between low and high?

I am not saying that no probabilistic intelligence can work. Please define "intelligence", or link to a definition. I said that I think "we", or organic intelligence, don't work through the use of probability.

I am also not saying probabilistic models are useless / bad / not worth exploring further for making better programs or anything; I just don't think human intelligence uses them as a basis.


- SVMs aren't really probabilistic models. They don't need to be thought of in a way that assumes underlying statistical properties of what they're measuring. Many machine learning methods (decision trees for one) are similarly agnostic (rough sketch below).

- Reality is a probabilistic process. Brains work probabilistically at least in the sense that the underlying biomechanics have some statistical distribution.

I have the feeling the author doesn't really understand what "probability" means, as he more or less admits.
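
On the first bullet, a rough sketch of what I mean (the weights are made-up values, as if already learned by maximizing the margin): a linear SVM's decision is just a signed distance to a separating hyperplane; no probability distribution appears anywhere.

    # A linear SVM decision is geometric, not probabilistic.
    # Hypothetical, already-trained weights for illustration only.
    import numpy as np

    w = np.array([1.5, -2.0])   # learned weight vector (made up for the sketch)
    b = 0.25                    # learned bias

    def svm_predict(x):
        score = np.dot(w, x) + b        # signed distance to the hyperplane
        return 1 if score >= 0 else -1  # class label; no distribution involved

    print(svm_predict(np.array([1.0, 0.2])))   # -> 1
    print(svm_predict(np.array([0.0, 1.0])))   # -> -1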


> Brains work probabilistically at least in the sense that the underlying biomechanics have some statistical distribution.

No, it's us who use these distributions to model them. Reality might be deterministic after all (there is no proof to the contrary), and then brains are not working probabilistically at all.


I would think even entirely deterministic systems need to be treated probabilistically if you don't have perfect information.
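
A toy illustration of that (entirely hypothetical numbers): the system below is perfectly deterministic, a fixed hidden threshold, but an observer with incomplete information can only maintain a probability distribution over what that threshold is.

    # Deterministic system (a fixed hidden threshold), imperfect information:
    # the observer is forced to reason with probabilities over hypotheses.
    import random

    true_threshold = 7                         # fixed and deterministic, but hidden
    belief = {t: 1.0 / 11 for t in range(11)}  # uniform prior over thresholds 0..10

    for _ in range(20):
        x = random.randint(0, 10)              # probe the system
        above = x >= true_threshold            # deterministic response
        # Bayesian update: drop hypotheses inconsistent with the observation
        belief = {t: (p if (x >= t) == above else 0.0) for t, p in belief.items()}
        total = sum(belief.values())
        belief = {t: p / total for t, p in belief.items()}

    print(belief)   # probability mass concentrates around the true threshold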


The argument is fundamentally and fatally flawed by a lack of understanding of how the "mind" works.

The basic flaw in the argument is the assumption that there is only one "mind" and that all its workings are available to us.

But as much recent research shows, especially that of Tversky and Kahneman, we have at least two "minds", the so-called "fast" and "slow" systems, and the "fast" system is not directly accessible to "us" - and "us", what we recognize as "us", is the working of "slow", the deliberate, conscious, effortful problem solver, often called upon to cook up a "good enough story" to explain an exception "fast" couldn't make sense of.

"Fast" might be terrifically good at probability, with some survival heuristics favouring particular conclusions - better to always conclude that those weird pattens in the bush are a tiger than to be wrong once.

Since "fast" is not really accessible to "slow", it matters not how good "fast" is at probability - and "fast" could be 100% probabilistic AI and we'd never know it...

...not without detailed neuropsychological study, at least. It certainly wouldn't "just be evident".


My argument doesn't assume that we don't have unconscious processes. Maybe I didn't express well what I think. I do think we have multiple "minds" (I even wrote a blogpost on this subject on another blog once).

Maybe my "then people would be naturally great at probability" came across as if we would then consciously know about probability, know the formulas, etc., without needing to learn them.

No. I tried to say that if that were so, then the results of our "fast" mind would be well attuned to a probabilistic analysis of the problem. For example, you wouldn't become scared when boarding a plane.

I will try to write another blog post that is more precise. Not that it's worth anything; it's more for myself. I won't post it here, not because of the opposing comments, but because many think this is not HN material and I don't want to spam.

Not that it matters, obviously. But I am not claiming there is no probability processing anywhere in the system (we do conscious probability, if nothing else). My thought was that it can't be the main or most important driver behind it all. I think memory itself (storage and "soft" retrieval of information ... my theory about the models we make) plays a much more important role in how we function than any special or complex algorithm. Again, not that it matters what I think.


I originally intended to make a comment about how wrong this is because almost all approaches to AI and ML are deeply rooted in probability and statistics. The entire problem of intelligence is based on making accurate predictions and then acting on them. But there is some truth in this. Humans are terrible at probability and that's something I wouldn't have expected if I didn't already know it. It is a clue as to what kind of algorithm the brain is using.

I still disagree that the brain doesn't use probability at all or that we shouldn't focus on it in AI research.

>There are more external signs that we don't do probability. Statistical methods generally need a big learning set to learn anything. A small child doesn't have to see a set of 500 cartoon elephants in different poses to recognize elephants from then on in various different cartoons and in real life *)

No, but they do see hundreds of hours of visual feed from their eyes, from which they learn high-level features. Learning "elephant" from one example wouldn't be possible without first learning thousands of other concepts, such as how to detect edges, shapes, 3D objects, the properties of animals, etc.


I am not saying that it doesn't use something that effectively is a probability calculation anywhere in the system, but that that isn't the main or most important "engine".

I agree with your comment on "first learning thousands of other concepts". But that is a ton of (hierarchical) concepts that have to be stored somehow, so isn't the storage/retrieval itself maybe more crucial to the whole function than some special algorithm we just haven't figured out yet? (I mention this in my "wild" speculation on "models") :)


Perhaps, but my understanding is that, at least for vision, the brain learns by trying to predict nearby neurons as well as the future, thus learning an internal model of the world.

Algorithms that do something like this are called Deep Learning and have recently proven extremely effective at machine vision and other tasks. They don't all work by prediction per se, but they do learn to compress the input down to a smaller representation of features that can be used to recreate it, which is closely related to prediction.
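
A bare-bones sketch of that compress-and-reconstruct idea (the sizes, data, and training loop are all invented for illustration; real deep learning stacks many such layers): a tiny autoencoder squeezing 8-dimensional input through a 3-dimensional bottleneck and learning to rebuild it.

    # One-hidden-layer autoencoder: compress, then reconstruct.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((200, 8))                    # pretend sensory input

    n_in, n_hidden, lr = 8, 3, 0.05
    W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights

    def mse():
        return np.mean((X - np.tanh(X @ W1) @ W2) ** 2)

    print("before:", mse())
    for _ in range(2000):
        H = np.tanh(X @ W1)                     # compressed representation (features)
        X_hat = H @ W2                          # reconstruction from the bottleneck
        err = X_hat - X                         # reconstruction error
        # Backpropagate squared-error loss through decoder and encoder
        grad_W2 = H.T @ err / len(X)
        grad_H = err @ W2.T * (1 - H ** 2)
        grad_W1 = X.T @ grad_H / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1
    print("after:", mse())                      # lower: it learned to reconstruct through the bottleneck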


TL;DR: "I'm not an AI/ML anything by any stretch of imagination"


good one, I accept it :)

edit: so you think human intelligence is mainly a probability machine?


Something to read:

Hierarchical Bayesian Modeling of Human Decision-Making Using Wiener Diffusion http://gandalf.psych.umn.edu/users/schrater/schrater_lab/cou...

Decision Theory and Human Behavior http://www.umass.edu/preferen/Class%20Material/Bounds%20of%2...

Beyond Accuracy: How Models of Decision Making Compare to Human Decision Making http://fileadmin.cs.lth.se/cs/Personal/Carl_Christian_Rolf/c...

Forgetful Bayes and myopic planning: Human learning and decision-making in a bandit setting http://papers.nips.cc/paper/5180-forgetful-bayes-and-myopic-...

>Our result shows that subjects’ choices, on a trial-to-trial basis, are best captured by a “forgetful” Bayesian iterative learning model [21] in combination with a partially myopic decision policy known as Knowledge Gradient [7].


I don't really want to discourage teenagers from public writing, but if you're really confused about why this is being negatively received...

The most glaring thing: your three argumentative points could be simultaneously true and the assertion "Probabilistic AI can't be AI" could still be false.


My personal suspicion is that it's an ensemble approach involving both probabilistic and symbolic methods. But I think that human intelligence is largely driven by probabilistic methods.

But I also "am not an AI/ML anything"; I just read a lot on the subject and think about it a lot.


The problem with this assumption is that one doesn't have to be aware of how one's own brain works in order for it to work.

The fact that people seem to be universally bad at assessing probabilities means only that however their brains work, that mechanism doesn't produce intelligences that are finely adapted to assessing probabilities. The machinery that produces that intelligence could still be completely probabilistic itself.

Humans are fantastic at pattern recognition, but that doesn't mean that our intelligence must have been created by a pattern recognition algorithm.


We can't say that humans are fantastic at pattern recognition either. We may only be sure that our built-in pattern recognition is better than our ability to create pattern recognition algorithms.


I disagree with some of the premises.

We are great at probability; we compute it unconsciously. Every person knows the approximate probability of flipping a coin and getting heads, of pulling a particular card from a deck, or of being rejected/successful when asking someone out on a date (we always say our 'chances' are good/bad, depending on factors we think are important).

In our day to day lives, many decisions are driven by what we think the 'odds' are of our endeavor being successful.


I am not sure whether I know it intuitively or whether we have to consciously compute / infer it (the coin/card example).

I guess I have some approximate feeling, but it's highly susceptible to being skewed.



