
> why not anthropomorphize?

It would be easy to be flippant about this suggestion and reply something like "Because we're doing science, not making Disney cartoons?" But the issue is not so black and white. I once saw a lecture by Daniel Kahneman (Thinking, Fast and Slow) where he described why he talks about "System 1" and "System 2" as if they were real things (when of course there is no part of the brain you could point to and label as either), and his argument was that he was intentionally making use of a feature of the human mind which allows it to think quickly and reliably about "agents" (to use his exact word). In other words, he is intentionally anthropomorphizing abstract systems to make it easier to attribute certain characteristics to them.

There's some evidence this works in multiple contexts. First, we have the Wason selection task where performance improves remarkably when a problem is translated from an abstract logic question to a question about checking IDs at a bar. Humans are good at thinking about rule enforcement in a social context, and terrible about thinking about abstract logic. Second, we have various mnemonic techniques which associate the things to be memorized (cards, digits of pi, whatever) with people (often celebrities) and then encode information (say, a hundred people each representing two digits) by telling stories about the people. "Einstein and Mr. Rogers baked pies for Lady Gaga" is much easier to remember than "314159".

If the human brain is wired to be good at thinking about other humans, and we can therefore co-opt more of our brain matter to work on a problem, why isn't that a good thing? After all, Popper would say that as long as the theories make falsifiable predictions, who cares where they come from? And shouldn't we be as open-minded and creative as possible during the hypothesis-generation phase of science, as Feyerabend argued in Against Method? If Kepler's mystical beliefs about numbers could lead him to groundbreaking work in astronomy, who are we to deny any scientist any aspect of their full mental powers?

And yet reliance on intuitive models often leads to bias, blind spots, and unwarranted assumptions. We have to remember that these specialized aspects of our brains evolved to solve very specialized problems in a particular environment, and the further we get from those traditional environments, the worse they perform. Geometric intuition, developed by apes for a 3D world on the scale of meters, was an extremely useful aid when bootstrapping mathematics, when it first allowed us to grasp plane geometry intuitively, but it became a hindrance as we investigated very small, very large, very fast, or very massive objects. Anthropomorphic models may allow us to quickly generate new ideas, but we must always be willing to cast aside such intuitions as our theories develop.

So, I would say that if we are currently at a loss for theories that fit the available data on octopuses, we should place no restrictions on what theories we consider; and if anthropomorphic thinking is one way of generating new hypotheses, then so much the better: toss them on the pile with the others and test them all. But if lessons from the history of science hold true, it is likely that we shall soon have to abandon these early theories (despite the siren call of their intuitive appeal) in favor of more objective, more abstract, and less intuitive concepts that nevertheless explain the facts more thoroughly and more deeply.



