> It is Kepecs' thesis that statistics - generated by the objective processing of sensory and other data - is the ultimate language of the brain.
Every time I read stuff like this, I lose more confidence that the scientific method as currently applied, funded, and published is capable of truly conquering complex systems like the brain or nutrition.
I read article after article like this one, where a prominent researcher conducts some experiment that shows some result. But listening to that person talk about their work, it becomes clear that this has become their pet theory that they want to be capable of explaining everything. As if everything we know about some system will someday be shown to be reducible to one clean, beautiful idea.
Here are a few quick reasons why I find this idea to be highly implausible:
- There are a dizzying number of documented cognitive biases in humans. If statistics is the ultimate language of the brain, our software is pretty thoroughly buggy: https://en.wikipedia.org/wiki/List_of_cognitive_biases
- Belief motivates people to do things that have absolutely no statistical justification. For example, the time that the prophecy of a teenage girl convinced a tribe of people in present-day South Africa to kill 300,000 - 400,000 of their own cattle, leading to the starvation and death of 20,000 - 40,000 people. How does a brain built on statistics decide that this is the most rational course of action? http://persistentfrontiers.com/xhosacattlekilling/
EDIT: Several people are replying with some form of: "your examples can still show that the mind is inherently built on statistics if you think about it in way X." Let's do a quick experiment to see if this theory is actually falsifiable. What experiment/result would convince you that statistics is not "the ultimate language of the brain?"
> For example, the time that the prophecy of a teenage girl convinced a tribe of people in present-day South Africa to kill 300,000 - 400,000 of their own cattle, leading to the starvation and death of 20,000 - 40,000 people. How does a brain built on statistics decide that this is the most rational course of action?
That's actually a really good example of statistics deciding a course of action.
Ask yourself this: why did they believe it? Most likely, that group of people were repeatedly told stories belonging to a larger framework of belief. What you might not appreciate is that good literature has a tendency to fire parts of the brain related to sight, sound, and touch in an fMRI. For so many people, a good story creates a parallel quasi-reality, and I wouldn't be surprised if their brains were basically chalking up those stories as real experiences.
I was just listening to a story on This American Life about some kids whose prom was interrupted by a tornado. What shocked me was how many of these kids created internal narratives that they somehow caused the storm, or the storm was manifested as a personal lesson for them. If you think about the garbage plot lines and religious upbringing people absorb, this sort of narcissistic outlook makes a tremendous amount of sense.
I'll add this one point. Modern public relations and politics feed on the principle that you can manufacture reality by constantly bombarding people with the same shallow message. It doesn't matter what you say as much as how loudly and frequently you say it.
> What shocked me was how many of these kids created internal narratives that they somehow caused the storm, or the storm was manifested as a personal lesson for them.
Exactly. I would argue that internal narratives like this are inherently anti-statistical. They attempt to give greater meaning to something than the direct observations support.
> What experiment would convince you that statistics is not "the ultimate language of the brain?"
I'm not sure I'm convinced that statistics is the language of the brain, but I am convinced that statistical inference is one of the main reasons why individuals (and later, populations) come to believe in some highly irrational superstitions - the ones where there's no reason to assume causality at all save for correlation of rare events (particularly if we're assuming imperfect use of statistics influenced by our cognitive biases, rather than "the scientific method is the language of the brain").
"This cattle-killing sounds crazy but since she was right about some other low probability events the probability of her being an actual prophet is pretty high, given the existence of prophets is not inherently improbable" and "my odds of survival are better if I follow this stupid recommendation than if I'm lynched by my fellow countrymen for ignoring it" are both statistically valid conclusions in favour of following some ridiculous food-source-decimating millenialist prophecy in certain circumstances.
> "This cattle-killing sounds crazy but since she was right about some other low probability events..."
Ok you have a story. I can tell a different story: "My tendency towards religious/supernatural belief that has helped me and my ancestors form moral codes and act tribally has in this case caused me to embrace a belief that will lead to the ruin of me and my people. These religious/supernatural beliefs are inherently anti-statistical and lead to lots of incorrect ideas like witchcraft and rainmaking, but these systems of belief are selected by evolution because they overall increase our chances of survival."
Now we have two stories, two narratives. Why do you believe yours over mine? And how can we objectively decide which story reflects the truth about the human brain?
> These religious/supernatural beliefs are inherently anti-statistical
No they aren't. They follow the Laplace–Bayes estimator for calculating the rule of succession[1].
I needed rain -> I did a rain dance -> it rained
I needed rain -> I did a rain dance -> it didn't rain -> I must have done something wrong
Both are perfectly valid applications of the rule of succession - not forgetting that most religious beliefs make it pretty easy to find something you did wrong. One might suspect these beliefs could have arisen from spurious statistical relationships.
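To make that concrete, here's a minimal sketch (Python, with made-up rain-dance counts) of how the rule of succession behaves when failures can be explained away:

    # Laplace's rule of succession: after s successes in n trials,
    # estimate the probability of success next time as (s+1)/(n+2).
    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    print(rule_of_succession(3, 10))  # ~0.33: rain followed 3 of 10 dances
    # The "I must have done something wrong" escape hatch discards the
    # 7 failures as invalid trials, so the estimate only drifts upward:
    print(rule_of_succession(3, 3))   # 0.8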
> I needed rain -> I did a rain dance -> it rained [...] I needed rain -> I did a rain dance -> it didn't rain -> I must have done something wrong
This is a textbook example of a non-falsifiable experiment. I'm not sure why you would argue that you can draw meaningful statistical inferences from such an experiment.
If your argument is that the human mind functions primarily in terms of valid statistical inferences, it doesn't support your case if it processes its input in ways that are inherently flawed and biased.
If you mean "mathematically correct" then yes that is my premise.
If you mean "correct in reality" then no.
If you have a prior belief "god created humans" with a 99% confidence, it will take a lot of evidence to override that - especially since the evidence will be a long way down a causality chain (as any online argument about evolution will show). Additionally, the confidence in that belief will be erroneously increased by lots of things you see in nature (eg - humans cannot create other life forms, therefore it must have been done by something supernatural).
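To put a rough number on "a lot of evidence" (the figures here are made up for illustration): in odds form, Bayes' rule just multiplies the prior odds by the likelihood ratio of each new observation, so a 99% prior takes seven successive halvings of the odds before belief even drops below 50%:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    prior = 0.99
    likelihood_ratio = 0.5  # each observation twice as likely under the alternative
    odds = prior / (1 - prior)  # 99:1
    updates = 0
    while odds > 1:
        odds *= likelihood_ratio
        updates += 1
    print(updates)  # 7 updates before the belief dips below even odds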
This thread could be summed up as "my priors tell me that if people were Actually Doing This (tm) it would look like X and it doesn't look like X, thus they aren't!" and then other people saying "uh, if you estimate the priors wrong that doesn't mean you aren't doing bayesian, it just means you estimated the priors wrong and got the wrong answer"
Ha. In my opinion, you and others are treating the brain doing statistics like religious people treat the existence of God. Absolutely any evidence can be interpreted as support for the hypothesis if you just tell the right story about it.
I'm not even saying the hypothesis is necessarily false. However I will never trust the results of people who become so attached to an idea that they focus entirely on confirming it rather than rigorously testing whether it actually reflects reality.
The whole point of the experiment is "how does this machinery operate?" not "does this machinery produce the same answers as I would expect?" There's a huge difference. Just because you reason through things one way and get one set of answers as a result doesn't mean that's how everyone should operate. Different priors can (and probably should) yield different results.
Different priors probably mean entirely different lines of reasoning, so you can't even say "well they should have thought about it the way I do" because it's not like the human brain is a single 10x10 neural net with no feedback, it's probably 1Mx1M and it is recurrent; the output from one time step feeds into the input at the next one.
You could get very, very different output for the same input from two neural nets that have been trained on different data. But that doesn't somehow make them not both neural nets; the meta-structure is the same even though all the weights from node to node are different.
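A toy illustration of that point - the random weights below are stand-ins for two different training histories, not a brain model:

    import numpy as np

    # Two tiny recurrent nets with identical meta-structure but different
    # weights, fed the same input stream; the output feeds back as state.
    rng = np.random.default_rng(0)

    def run(w_in, w_rec, inputs):
        h = np.zeros(4)
        for x in inputs:
            h = np.tanh(w_in * x + w_rec @ h)
        return h

    inputs = [1.0, -0.5, 0.25]
    print(run(rng.normal(size=4), rng.normal(size=(4, 4)), inputs))
    print(run(rng.normal(size=4), rng.normal(size=(4, 4)), inputs))
    # Same input, same architecture, very different outputs.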
> The whole point of the experiment is "how does this machinery operate?"
Yes, exactly!
If you say "the machinery operates in way X" (where X="Bayesian statistics"), you have to actually prove that. You can't just assert it and say that people who disagree with you have bad priors.
I have no doubt that you can construct a hypothetical Bayesian decision-making process that could have yielded any possible decision that any human has ever made. Just like any religious person can take any story that has ever happened and construct a story about why it is compatible with the existence of God.
But unless you have a falsifiable experiment to prove that the brain actually operates this way, your explanation is an unverifiable just-so story. And in this case I don't find it at all convincing.
Don't get me wrong, I understand your theoretical argument. I just think you are taking an unjustified (in my perspective) leap of faith to presume that your story reflects reality.
I don't think we can "objectively decide which story reflects the truth about the human brain" - particularly not when focusing on episodes more than 150 years ago, where historians don't even widely agree on exactly what happened. And particularly not for any version of "statistics is the language of the human brain" that is more nuanced than the trivially falsified claim that "human thought processes invariably follow the hypothetico-deductive method and are not subject to confirmation bias, attaching significance to spurious correlations, or poor selection of Bayesian priors".
But if this was a problem I was particularly motivated to study, the most satisfactory method of reaching a conclusion would be to use statistical analysis to draw conclusions about which hypotheses human thought patterns appeared to be most compatible with (from my experiments and meta-analysis of others'). I don't claim to know whether this is a refinement of my innate human tendency to be impressed by statistical support for a proposition or an unusual level of abstraction I've learned to reason at after spending too much time in an undergrad classroom.
I think religious belief makes it less likely that all human belief-forming and decision-making are based on statistically sound principles, yes.
But the actual point I was making in the message you replied to is much weaker than this. I was arguing that just because we can spin a narrative that says a decision-making process was based on a statistical process doesn't mean that our narrative is true. Just because you can tell a story doesn't mean the story reflects reality. We need some objective way of determining which of several competing narratives is actually true.
If you set your internal P value at, say, .01, that seems great - except you're conducting lots of experiments constantly. https://xkcd.com/882/ And when you do that, you regularly show correlation where none exists.
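The xkcd point is easy to simulate - run enough tests on pure noise at p < .01 and "discoveries" appear on schedule (the numbers below are arbitrary):

    import random

    # 1000 "experiments" on pure noise: under the null hypothesis the
    # p-value is uniform on [0, 1], so about alpha of them come up
    # "significant" even though no effect exists anywhere.
    random.seed(1)
    alpha = 0.01
    hits = sum(random.random() < alpha for _ in range(1000))
    print(hits)  # ~10 spurious correlations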
Considering how many things used to be deadly, I can see people living with fairly low P values because false positives only need to be less deadly than false negatives for things to work out.
IMO, this also explains why people are risk averse. Generally, survival is more about avoiding a few low-probability bad events than having a long string of extra special awesome days.
> They attempt to give greater meaning to something than the direct observations support.
I'd argue that an attempt to find a cause for something that is basically random is good evidence that the brain is doing statistical reasoning.
In this case it is pretty much a classical overfitting mistake: the brain thinks that it knows all the data, and everything that happens must be caused by something it knows.
Statistical processes find spurious relationships all the time.
Up-thread I answered how to perform experiments to falsify this.
> I would argue that internal narratives like this are inherently anti-statistical.
They're not. Those narratives are based on what they experienced. In this case, what they experienced are stories in the form of movies, TV, video games, anecdotes from peers, and Bible stories.
Now, they didn't really experience them, but I'd posit the mechanisms in their minds really don't know the difference between what's real and what isn't. It doesn't matter if stories are real; all that matters is that they are the focal point of attention.
Think about it. The brain can very much be statistical, but the quality of the statistics is only as good as the data a brain is fed.
> They attempt to give greater meaning to something than the direct observations support.
This is the disease of our time, and all times. Humans are narcissistic. We like to think that whatever we experience is extra-special.
Before he died, the evangelist Jerry Falwell used to say he believed the Antichrist was alive and walking the earth. He believed the second coming was imminent, as did many of his followers.
Think about that for a second. It's been ~2000 years since the supposed resurrection, but now is the time God will pick? Why not any one of the other 100 generations?
I'm not necessarily convinced statistics is the ultimate language of the brain. Like you, I imagine there are a number of legacy systems in the brain that coordinate in interesting ad-hoc ways.
> Like you, I imagine there are a number of legacy systems in the brain that coordinate in interesting ad-hoc ways.
I'm sorry if I have missed something obvious, but where does haberman claim such a belief? As far as I can see they are just questioning the application of the scientific method and the strength of conclusions that can be drawn from the results.
I don't have any reading suggestions, sorry. Psychology has a lot of interesting and useful localized results (like the list of cognitive biases I linked to before), but I don't believe any rigorous unified theory of cognition exists. At least I've never come across one.
The theory isn't that "statistics are the ultimate language of the brain". That's just a summary designed for popular consumption.
Unfortunately the links to the source article don't work, but from careful reading it appears that the real theory is that the brain tracks confidence in its measurement of data, and uses this confidence when making decisions.
This is a Bayesian approach, and it has been widely speculated that the brain does this.
This theory is falsifiable, because you can ask people their confidence in the original measurements, their confidence in a decision depending on that, and then use Bayes' rule to see if those confidences have propagated.
Quote:
In experiments with human subjects, Kepecs and colleagues therefore tried to control for different factors that can vary from person to person. The aim was to establish what evidence contributed to each decision. In this way they could compare people's reports of confidence with the optimal statistical answer. "If we can quantify the evidence that informs a person's decision, then we can ask how well a statistical algorithm performs on the same evidence"
They go on to describe a video game experiment. I suspect that the description is incomplete, and there is a decision making component to the game that depends on how frequently the person thinks they are hearing the clicks. If that is the case then that answers your question about how to falsify it.
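For what it's worth, the test described above is mechanical enough to sketch - elicit a subject's prior and likelihoods, compute the Bayes-consistent posterior, and compare it to the confidence they actually report (all numbers here are hypothetical):

    # Bayes' rule: P(H|E) = P(H)P(E|H) / [P(H)P(E|H) + P(~H)P(E|~H)]
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        joint_h = prior * p_e_given_h
        joint_not_h = (1 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    predicted = posterior(0.6, 0.8, 0.3)  # 0.8 if confidences propagate correctly
    reported = 0.75                       # subject's stated confidence
    print(abs(predicted - reported))      # deviation from the Bayesian prediction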
> The theory isn't that "statistics are the ultimate language of the brain".
That's not the theory of this specific paper, but the article says that this researcher considers this paper a "first step" towards this more general theory.
> This theory is falsifiable, because you can ask people their confidence in the original measurements, their confidence in a decision depending on that, and then use Bayes' rule to see if those confidences have propagated.
If your standard is that humans will apply Bayes' theorem correctly, completely intuitively, as more evidence becomes available, I think it will be very easy to falsify that.
> If your standard is that humans will apply Bayes' theorem correctly, completely intuitively, as more evidence becomes available, I think it will be very easy to falsify that.
Yes. And there is increasing evidence that this is correct. See [1]. If you can find a copy, I think [2] will address many of your concerns about seemingly irrational behavior being supported by statistical reasoning.
I think you are also applying a biased viewpoint. You are assuming rational decision making always leads to "correct" decisions.
Statistics and rationality merely work by measuring current events against your previous experiences and inferring the most probable outcome of your possible actions. That means that your previous life experiences and how you have interpreted and classified them are key in what your brain is going to expect from future events and in consequence how you are going to react to them.
In your example you are assuming these people are acting "irrationally" because they aren't making the "correct" decision by your own judgment. The reality is that they are likely acting on very different premises from your own, extracted from their own life experiences and those of their ancestors. Given the outcome, they were clearly wrong in their conclusions, but that only means that rationality and statistics are useless if you base your reasoning on wrong premises and nothing stops you from feeding them the wrong data - not that these seemingly absurd actions didn't have a somewhat rational justification.
About cognitive biases, well, your brain makes decisions based on the data it has available, and whenever it is insufficient it can't simply delay decision making until its confidence level is acceptable, hence it needs some heuristics to guarantee it gets a result whenever enough previous experience is lacking.
> Your are assuming rational decision making always leads to "correct" decisions.
I didn't say anything of the sort.
> The reality is that likely they are acting based on very different premises from your own, extracted from their own life experiences and those of their ancestors.
That is true. That doesn't imply, however, that their beliefs draw solid statistical conclusions from those experiences.
For example, take rituals like rainmaking that are intended to change the weather. These were performed over many years and generations, and yet (we presume) never brought about any actual statistical improvement over not performing them. Those people would have experienced that (lack of) improvement directly. And yet they continued to believe it.
"For example, take rituals like rainmaking that are intended to change the weather. These were performed over many years and generations, and yet (we presume) never brought about any actual statistical improvement over not performing them. Those people would have experienced that (lack of) improvement directly. And yet they continued to believe i"
That's a great example given it is long-term, repetitive, and even non-experts should understand it. I'm going to try to remember it.
The occasional spectacular random success is far more memorable than a string of failures, because the string of failures simply proves the ritual wasn't done correctly. Or there's some other unknown reason why it didn't work.
It's also known that rewarding a behaviour at random is better at reinforcing it than constant reward - perhaps because the reward stimulus is fresh each time it's re-experienced.
Damn. Those are good counterpoints. I forgot about that, and I shouldn't have, given that I have to counter it all the time in risk management. Yeah, the example might not work.
How are those counterpoints? None of the effects mentioned are rational statistically speaking. If anything they reinforce the example, because they show that some likely thought processes that would support belief in rainmaking are inherently anti-statistical.
You're assuming people are rational beings. Most evidence is to the contrary: beliefs are passed down through generations, and many decisions are intuitive, with mechanisms that vary. There's also rational thought, but it makes up little of most people's day, as intuition suffices. Also, there are documented biases in the human mind that apply here:
The one-off event of rain forming becomes the story that everyone remembers most, and it is passed down as dogma. It gains a lot of momentum. Day-by-day data generated for years and analytically processed isn't nearly as memorable. This is the same reason people are more afraid of airplanes despite cars killing more people. More relevantly, it's the same kind of bias that makes people think they'll win big in the lottery despite all evidence to the contrary. I've walked in on 50-yard lines when the PowerBall is up. They'll always say, "But such and such up North won $100mil. You never know. It can't hurt to try."
Illogical as hell, given that the time and money could produce real benefit with high certainty. Yet they continue doing it to try to re-create that one-off moment someone else experienced. And then someone else experienced it again. :)
Think of wearing your baseball hat backward or upside down when your team is behind. I know this does not actually help them win, but it feels good to be part of the group doing it.
So we comply with the ritualistic behavior in a sort of group angst; when we win we feel it worked, and when we lose we feel some sort of existential support system.
You are assuming that rational decision making can't lead to mass-slaughtering cattle.
Also you're assuming that everyone has accurate evidence about the impact of weather rituals. In a society without written records, how is that supposed to magically happen?
At most, he is assuming that there is no rational explanation in this particular case of mass slaughter. FWIW, I am going to assign that a high confidence score until I see actual evidence to the contrary.
Saying "maybe there's an explanation" for every case does not advance the discussion. Saying "maybe there's an explanation" for every datum that does not fit your prior assumptions pretty much rules out statistical thinking, even if you otherwise have the capability.
You'll need to explain what you mean by "correct" if "rational" isn't it. Rational decisions are those that maximize success, and if you've managed to find a rational decision that doesn't, it ceases to be rational given that information.
Most likely definitions from the context: "correct" means producing optimal outcomes, or producing beliefs most closely aligned with objective reality. "Rational" means obtained from available data using a rational process, whether or not the data is good.
So someone with bad priors can come to bad conclusions through entirely "rational" means.
Totally correct. That's why reasonable people don't agree on politics at all. Group A has one set of priors, group B has another, and despite both being basically rational, the priors lead them down totally different roads as to how to "fix things".
"Correct" ones would be those which actually maximize your outcome, while "rational" ones would be the ones are which you infer they will maximize your outcome from the information you have available.
While I agree that this seems like "scientist has pet theory, does study to prove it", I don't think the bias/irrationality claims disprove much.
The way to reconcile those claims is to say that many of our statistical/estimation patterns are very bad. Mostly, they were good once, but are completely unequipped to deal with the modern world. I know "we're optimized for the savannah" is a laugh-line at this point, but it doesn't seem absurd to claim that we're using statistical techniques that are easy but inflexible. Flawless Bayesian updating is basically unachievable, and a lot of our biases seem like once-useful heuristics that screw us a lot in the modern world.
Tetlock's Superforecasting seems like a good reference here: it's a study of people who are uncommonly good at estimating future probabilities. These are people who estimate near-future events to better than half a percent (as in, rounding from .1% to .5% reliably lowers their accuracy). Most of them claim to use a lot of numerical probability estimates, which suggests that they might be employing an existing system consciously to get the bugs out.
None of this means I buy the study, but I'm not willing to reject the hypothesis, either. Given that we're running on neural nets, I wouldn't be surprised if we deal in actual probabilities and are just exceedingly bad at it.
> The way to reconcile those claims is to say that many of our statistical/estimation patterns are very bad.
If studies like this one are taken as evidence of a statistical mind, then non-statistical behavior has to be taken as evidence of a non-statistical mind. Otherwise we have a non-falsifiable theory.
Experiment shows mind behaving statistically: "mind is statistical."
Experiment shows mind behaving non-statistically: "mind is statistical, but this scenario exceeds its capabilities."
> which suggests that they might be employing an existing system consciously to get the bugs out.
I should clarify -- I have nothing against the idea that our minds employ statistical methods. In fact I'm sure they must.
I'm opposed to taking that kernel of truth and pushing a reductionist understanding of something extremely complex (the brain).
Don't confuse non-statistical with non-rational. In particular, people that come to a different conclusion than you could just have different priors in their Bayesian model, but still be statistical in their intake of information.
Though... I think I agree that this weakens the claim sufficiently to not be a notable claim anymore.
You're so right - actually, there is nothing new in this idea. Kalman filters (so, basically, belief propagation based on Gaussian observations) were already used by D.L. Kleinman in 1970 to model aircraft pilots' behavior. We are way past this simplistic hypothesis. Actually, quantum dynamics (yeah, why not, it's just a model) seems to describe human decision making better than classical stochastic models (Quantum dynamics of human decision-making - http://www.sciencedirect.com/science/article/pii/S0022249606...)
To quote wikipedia's article on statistics: Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data.
Isn't that more or less what human brains do?
A neural network is just a curve-fitting algorithm optimized to fit a bunch of data - that is, a statistical method. NNs are, to the best of my knowledge, a fairly good analogy for the physical computation process of the brain.
NNs are also very easy to fool. Mislabel the humans in your robot training data as 'kill', and your network will make bad decisions. Garbage in, garbage out.
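A deliberately minimal sketch of the garbage-in, garbage-out point, with curve fitting stripped down to a straight line (the data is fabricated):

    import numpy as np

    x = np.linspace(0, 1, 20)
    y = 2 * x + 1            # true relationship
    y_bad = y.copy()
    y_bad[::4] += 5          # systematically mislabeled training points

    for labels in (y, y_bad):
        print(np.polyfit(x, labels, 1))  # fitted [slope, intercept]
    # The fit to clean labels recovers [2, 1]; the mislabeled set does not.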
If you make a probabilistic model and attach a very high confidence to the things that spirits tell little girls down by the river, then sometimes your model will suggest some terrible ideas. Humans are also notoriously bad at sampling data, but that doesn't mean it isn't what they're doing.
Your edit raises a good question, though! I can't say I have a good answer, but would be interested to hear a good idea for such an experiment.
I want to use the phenomenon of the inner monologue and the act of convincing oneself of something as a counterexample, but on the other hand, language is very easily generated by statistical methods, and the brain apparently will make decisions and then come up with justifications afterward. http://pss.sagepub.com/content/early/2016/04/27/095679761664...
> Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data. Isn't that more or less what human brains do?
I think you have expanded the meaning of statistics beyond what is understood in the article. The originally understood meaning is "numerically/theoretically justifiable inferences given a set of input data." Now you're taking it to mean "any data-processing pipeline."
By this broader and more abstract definition, here are some more things that are based on statistics:
- tabloid newspapers
- religious sermons
- sundials
- televisions
> How does a brain built on statistics decide that this is the most rational course of action?
The article suggests that humans have an innate means of determining confidence in a particular hypothesis based on evidence. Why does this imply they act rationally?
> The article suggests that humans have an innate means of determining confidence in a particular hypothesis based on evidence.
Yes. In my example, the hypothesis is "killing all of our cattle will make the spirits of our ancestors come to our aid." The evidence supporting this hypothesis was nonexistent. And yet the people expressed extreme confidence in this hypothesis by killing all of their cattle.
Perhaps it would be more accurate to say "humans have an innate means of determining confidence in a particular hypothesis based on perceived evidence".
From an objective standpoint, evidence is either black or white. But for our brains, beliefs and biases may be given equal weight with objective truth. Whether or not those beliefs are rooted in truth is a question ultimately controlled by each person's worldview and their understanding of scientific reasoning. The underlying algorithms may still be based on statistics, albeit statistics with unreliable data.
Or, in the example provided, perhaps the tribe's statistical models were warped enough by cultural/religious/conventional wisdom (unreliable data) that they really did express a high level of confidence in an otherwise outlandish idea - a confidence based on statistically sound evaluation, only fed with bad data.
I think you're using a definition of 'evidence' that is too narrow or perhaps too logical, although I agree with pretty much everything you're saying.
I think that the statistical theory is possible but certainly couldn't be objective (one big reason being the cognitive biases you mentioned). Meaning a person will do the necessary mental gymnastics to assign the necessary value to the variable 'evidence' that their brain uses.
In your example, I think some combination of religious belief, regret avoidance, and herd thinking all added up to equal the requisite 'evidence' needed to express confidence in the cattle killing hypothesis.
Human brains do statistics at a low level. This is what the experiment tested.
An example experiment would be if you gave someone rewards with different probabilities after some random event like clicking, and the brain would learn to estimate the probability that a sequence of clicks would lead to a reward.
On a high level this obviously breaks down. The brain doesn't have thousands of examples of slaughtering cows leading to starvation. Cognitive biases all operate at a high level of reasoning, and involve verbal arguments and stuff. But on a low level, the brain is absolutely statistically based.
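Something like that low-level experiment can be mocked up in a few lines - here a simple error-driven estimator (hypothetical learning rate and reward probability) converges on the hidden probability of reward:

    import random

    random.seed(2)
    true_p = 0.3              # hidden probability that a click pays off
    estimate, lr = 0.5, 0.05
    for _ in range(2000):
        reward = random.random() < true_p
        estimate += lr * (reward - estimate)  # nudge estimate toward outcome
    print(round(estimate, 2))  # hovers near 0.3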
This is actually completely incorrect. Many cognitive biases occur at low levels of cognition without involving language - take professional pilots exhibiting confirmation bias despite a ton of training data.
I think professional pilots are still making mistakes at a higher level. By "low level" I mean things your brain does in a split second without thinking about them.
Saying that "statistics is the ultimate language of the brain" doesn't imply that the brain always performs statistical calculations correctly. In fact finding the true correct Bayesian answer is probably computationally infeasible for all but the simplest problems.
We already have this problem with artificial neural networks, which are still much simpler than the networks in the brain. ANNs have recently achieved significant successes and approach or exceed human performance on some tasks. They still make mistakes, however, and no one is claiming that they represent the optimal Bayesian solution; they are just the best solutions we have so far been able to find.
I think the same thing happens in the human brain but can lead to behavior that is much more extremely erroneous due to the greater complexity of the brain as well as that of the domains to which it finds itself applied.
But that doesn't mean that the brain isn't based on statistical models, any more than the fact that ANNs aren't perfect means that they are not statistical models. They are statistical models, but they are heuristic models, not perfect models.
Ultimately to prove that statistics is not fundamentally the language of the brain, you'd need to find behaviors that are fundamentally incompatible with statistics.
> What experiment/result would convince you that statistics is not "the ultimate language of the brain?"
Reflexes, like in your knee. Your spinal cord is very much part of your brain, and the patellar tendon reflex is nearly 100% in 'healthy' people. Perhaps the statistical nature of the brain comes from measuring many people to get the right sampling power for the experiment. Genetics and the stochastic behavior of life then influence these brains to behave in statistical ways compared to each other. Twin studies should bear that out as true or false.
You can't take any single act, where something goes wrong, and conclude that they obviously made the irrational choice.
Sometimes, you can make the correct decision and still have a bad outcome.
If someone offers you even money that they can roll double sixes on a pair of dice, the rational and logical decision is to take the bet. The rationality of the decision doesn't change if they happen to roll double sixes, it just means you got unlucky.
> You can't take any single act, where something goes wrong, and conclude that they obviously made the irrational choice.
Sure you can. If you flip a coin 1M times and it always comes up heads or tails, and then I bet $100 that the next flip will come up daisies, you can conclude that I obviously made the irrational choice, because there is no evidence to support the idea that this will happen.
Also, in your own example, we can say that it's irrational not to take the bet (if you can afford to lose the money), because statistically your expected value is strongly positive for taking the bet.
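Worked out exactly, for the record:

    from fractions import Fraction

    # Even-money bet against double sixes, $100 stake.
    p = Fraction(1, 36)              # probability of double sixes
    ev = (1 - p) * 100 - p * 100     # expected value of taking the bet
    print(ev)                        # 850/9, about +$94.44 per bet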
I think you are misinterpreting what I am saying. I am not saying that you can't judge a single action as being irrational, I am saying that you can make the rational decision and still lose. Therefore, the mere fact that some action lead to a bad result is not sufficient to show that the decision was irrational.
Consider hypnosis. If we could find a way to hypnotize people from many different backgrounds into believing some specific wild assertion, regardless of how much that assertion conflicts with their beliefs, that would be strong evidence against the hypothesis that statistics is the ultimate language of the brain. The results would suggest that a lower-level language exists and that hypnosis somehow taps into that language.
People get concerned about hypnosis precisely because they believe it may access a lower level of their brain that they can't control very well. People want to believe that their thinking is in fact rooted in statistical modeling rather than things they can't control.
I tend to believe our thinking is influenced by many things, including things we aren't in control of, but when we discover elements of our thoughts that we don't control, we can slowly take control of them. For example, we can slowly overcome habits and addictions.
The bug here is not that humans calculate expected payoffs incorrectly; it's that they have wildly inaccurate estimates of payoffs and their probabilities. Subtle but important difference.
What I find very peculiar is that everybody talks about AI but no one mentions how the brain is comprised of many different areas that work differently and regulate each other.
It's not all the prefrontal cortex detecting patterns.
The kind of signal the amygdala fires because of a change in body temperature is different than a "pixel" changing in your visual cortex.
Hah, I agree with the disillusionment with the process but only because I see that statistics is OBVIOUSLY the ultimate language of the brain—how else would you explain neural networks, inductive logic, and linguistic disambiguation?
If I may riff on this rather unscientifically from my own anecdotal observations of how my own brain works:
I do not agree with TFA's assertion that "The feeling ultimately relies on the same statistical computations a computer would make".
I would agree, however, that one factor (of an undetermined number of factors) of decision making is something that roughly resembles statistical computation. The key difference is that our input data for these "computations" is absolute crap, and heavily influenced by any number of things, including confirmation bias, small sample size, selective memory, etc.
Let me draw an example from my many years and thousands of hours of play in a game that is dominated by statistics: pool (billiards).
Beginners at the game develop all sorts of incorrect assumptions about how likely a given shot is to succeed, informed by their insufficient sample size. "I made that shot three times in a row last night, why would I miss it this time?" This is, of course, the equivalent of "I hit 13 on the roulette wheel 3 times in a row last night, why would I miss it this time".
As you build experience, you start to get a more correct intuitive understanding of the actual statistical likelihood of a given shot going in, but it is still affected by selection bias, and a number of other factors.
The way (well, some) professional and semi-professional pool players have attempted to eliminate this bias is by actually taking notes on the types of shots they take and actually collecting the true statistics. Even then, after collecting this data for months/years, when confronted with a shot that, statistically, they are more likely to miss than to make, they will still assign an incorrect confidence level to their decision, particularly when they are in, say, the finals of a tournament with a big payday (adrenaline is bad for rational thought, though it can occasionally help you operate at the right edge of the bell curve of your "normal" ability, hence, poorly computed bias).
I don't think that we can boil the decision making process of the brain down to something as simple as a NN, or bayesian inference or whatever. I do, however, think that the brain does attempt a flawed statistical inference as one of many factors in making decisions.
(Note: i'm not anywhere near a professional pool player, nor will I ever be or desire to be, but I've brushed shoulders with some of the best in the world, so I believe I've got a modicum of observational authority on the matter. I also think it's a pretty good example of our brain trying and failing to properly infer the statistical likelihood of a given outcome; chance based gambling is another good one)
"Looking toward the future, Adamatzky is designing a hybrid device that combines a slime mold with conventional electronic computers. A year ago he received a $2.1 million GBP by the European Commission to build such a computer. Now with funding from the EU Unconventional Computation program, he is building a slime mold computer named PhyChip (http://www.phychip.eu/)"
I'm half colorblind. I often tell people that I don't see colors, I see probabilities. If you ask me what color something is, I am going to make a guess and there is a probability that I am right or wrong based on many factors.
It seems obvious that the brain will resemble whatever we've got as the most complex modeling system at any given time while never being accurately represented by any of them.
"The work may also have wider implications. The fields of statistics and, in particular, machine learning, may have something to learn from this inner statistician. "Humans are still better than computers at solving really difficult problems," says Kepecs."
Intuitive, since statistics is simply one of the many tools we humans use to tackle difficult problems - but definitely not the only one.
But it is a little disheartening (yet also exciting) to think that when an AI is eventually developed with a human's intuition, creativity, and learning ability, we'll essentially have no advantages over them.
We might have a global maximum of tech we can achieve with our brain capacity that does not include building brain-level intelligence. Not because it's some special taboo/limit; maybe just because it's too high a barrier.
(Or e.g. at best we could replicate it with organic matter, e.g. grow brain cells, tissues etc, but then it won't be as potentially speedy/efficient as computer AI either).
I think that's why the focus is on deep learning and self improvement for AIs.
We may not be able to create an AI that's as intelligent as us, but we can create a rudimentary AI that can then make itself smarter as time goes by (and since it can be practically immortal it would have a lot of time to learn) and which can be easily duplicated later.
Downside is we probably won't know what it's actually thinking and what it will do...
Alternately, in order to achieve a human's "intuition, creativity, and learning ability", the AI architecture might be required to sacrifice most of the narrow advantages we find daunting.
At that point, further "capability upgrades" may be things you can apply to both systems.
That's the thing. It will not be up for you to assume.
It will be up to the AI to decide.
That's why we think it could be a problem (for us) if the AI had the advantage over us...
Our thinking: "Great, we've made this 100x as smart as us AI, even gave it a robot body! Imagine how helpful it will be to us".
AI thinking: "Hmm, those pesky creatures think that I'm kind of their servant because they created me? They're like ants to me. I shall crush them and go on with my plans for conquering the universe".
Our brain, presumably, uses much simpler, primitive thresholds, like dynamic weights in a weighted sum.
As one might deduce from the facts of the brain's physiology and development, especially the pruning of neurons in the process of maturation, the key is "a trained structure" - to which a weighted graph is, perhaps, the most adequate model - shaped by highly specialized centers in the process of continuous training. At least all pattern recognition, including language and vision, is presumably implemented this way.
The principle is of combination by trial and error of simplest, smallest, stable, reusable building blocks - specialized cells.
There is no general-purpose computer in the brain. It is a network of structured layers of microcontrollers.
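In code, the kind of primitive this comment has in mind is just a thresholded weighted sum (the weights and threshold below are arbitrary placeholders):

    import numpy as np

    def unit(inputs, weights, threshold):
        return float(np.dot(inputs, weights) >= threshold)  # fire or don't

    weights = np.array([0.4, -0.2, 0.7])
    print(unit(np.array([1.0, 1.0, 1.0]), weights, 0.5))  # 1.0: sum 0.9 fires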
It's interesting how the reification of statistical reasoning, that is, understanding statistics symbolically and applying it to abstract symbols is far more fraught than the implicit statistical reasoning that this article describes.
That's not quite true. When catching a ball we constantly adjust as we go so we're not perfectly calculating where it will land. We can train humans to roughly approximate where a ball will land, that's about it.
This is just a matter of precision. The fact that you can probably start adjusting in a meaningful and probably correct way while the ball is still going up is significant. Somehow that person's brain is doing a thing they may not have the symbolic knowledge to describe.
> Somehow that person's brain is doing a thing they may not have the symbolic knowledge to describe.
And we can draw roughly correct circles but most people don't know the equation for one. I'm not sure that's particularly significant though, but I may have missed your point.
No, the ability to draw circles is impressive. The amount of math and calculation being done in the brain to perform such a feat is incredible. Programming robots to do even a limited range of the things humans do is nearly impossible.
As shown by haberman's comment and a reply by yummyfajitas below, our brain clearly does not operate on pure statistics, as most people are subject to https://en.wikipedia.org/wiki/Conjunction_fallacy
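The fallacy is worth spelling out, since it's a direct violation of the probability axioms: for any events A and B, P(A and B) can never exceed P(A), yet subjects routinely rank the conjunction as more likely (the numbers below are arbitrary):

    # P(A and B) = P(A) * P(B|A) can never exceed P(A), since P(B|A) <= 1.
    p_a, p_b_given_a = 0.2, 0.9
    p_a_and_b = p_a * p_b_given_a
    assert p_a_and_b <= p_a
    print(p_a_and_b, p_a)  # 0.18 vs 0.2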
Ok guys, time to weigh in. Of course we aren't trying to claim that all brain processing reduces to Bayes. Our claim is that we experience the statistical likelihood of our beliefs about our input data (from sense organs or from long-term memory retrieval) as a sense of confidence. Whether that sense is actually generated by a heuristic computation rather than a neural implementation of Bayes' rule is almost irrelevant, because the end product looks identical to Bayes in the ways that matter for use of a confidence variable in computation. We work through the mathematics of how statistical confidence should appear in different projections of decision data here:
By necessity, we had to study decisions that could be replicated in a lab, so no mass-killings of cows. That being said, the brain is constantly faced with a statistical analysis problem - to figure out what's really happening outside of itself. A Bayesian machine is much more useful to assign a classification to sense data and compute its likelihood, than to determine whether your elders are correct when they insist that God is going to cause a plague in 2 years if we don't burn some bovines. Point being, the brain can construct and switch between (occasionally hilarious) decision making strategies depending on context - especially in the murkier, information-impoverished realm of social decision making.
Dealing with sense and memory data is a problem common to all animals, and evolution has had millions of generations to reject suboptimal processors. Processing language and abstract symbols to reinforce tribal social cohesion or implement a political agenda is a different category of decision that is newer in evolutionary terms, and definitely buggier. I hope this helps clear what we are and aren't claiming!
As an aside (since this is the right crowd!) we open-sourced the circuit design files, firmware and APIs for the real-time stimulus generator we used here:
This system allowed us to shut off the ongoing audio click streams within 100 microseconds of a decision button press - so we could look backwards in time at the precise evidence stream used to make each choice and confidence report, for clues about what algorithm the brain used to process the streams. The follow-up study (under review) uses this data to evaluate different models of how decision and confidence computations are implemented in real-time.