
Right, I don't see the point of this article. At first I thought it was the yearly "humans have a soul and are better than AI algorithms" story we get.

Of course, a mere year ago the fact that computers couldn't find good moves in Go was an argument. And so were all the arguments that came before that: voice recognition, reading, chess, games, even counting itself at one point. Of all those things that computers/"information processing" couldn't achieve, we now know: computers have recognized more spoken text, and read more books, letters, envelopes and ... than humans ever will. Computers have played more chess games, and certainly have processed more numbers, than humans ever did or ever will. Humans probably still have a mild advantage in the amount of Go processing, but it's clear that's not going to last much longer either.

This article takes another approach: because the type of processing that happens in our minds differs "so much" from what we do with computers, they must be fundamentally different. The emphasis lies on the human mind being different: humans don't work like computers, not the opposite. The opposite wouldn't work, because computers do work like humans. That's why we build them and use them. Computers analyse stocks, sell apples and cars and home insurance, they mail letters, they work out what a business should schedule tomorrow, they move their arms to glue soles onto shoes, ...

One might also ask why computers work so very differently. Why Von Neumann? Well, because we don't really know ahead of time what we want computers to do, and even when we do, we want the ability to change it later. We want computers, to a large extent, and maybe even to 100%, to work like humans do, but we can't do that yet. As illustrated above, we get closer every year. Computers simulate other machines; that's what they do. So what you should compare is not instruction sets versus neurons, but the neural network models those instruction sets simulate versus biological neurons.
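To make that comparison concrete, here's a minimal, hypothetical sketch: a few sequential Von Neumann-style instructions simulating a single artificial neuron (a plain threshold unit; the weights, inputs and bias are made-up illustrative values, not any particular model):

```python
def neuron(inputs, weights, bias):
    """Weighted sum followed by a threshold: the 'machine' being simulated."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# The same instruction set simulates any such model just by changing
# the data it operates on -- that is the sense in which the right
# comparison is simulated-model vs. biological neuron.
print(neuron([1.0, 0.5], [0.8, -0.2], -0.3))  # → 1 (0.8 - 0.1 - 0.3 = 0.4 > 0)
```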

And then the differences melt away. Not fully. Not yet. We don't know how. Not fully. Not yet.

But more every year.




The point of this article is the argument that the metaphor that human brains work just like computers is leading people astray in figuring out how human brains actually do work.


He really didn't show where people are struggling due to that supposed handicap. He also didn't show why brains aren't literally performing the task of computing.

I mean shit, if I judge the distance between two points, how can it not be said that I have collected sensory data and computed the result?

All of his examples were pedantic or irrelevant.
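The distance judgment above can be written out as literal computation. A hedged sketch, with arbitrary illustrative coordinates standing in for the "sensory data":

```python
import math

def distance(p, q):
    """Euclidean distance between two points: collect the data, compute the result."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

print(distance((0, 0), (3, 4)))  # → 5.0
```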


When somebody says humans don't compute, I like this picture:

http://cdn.theatlantic.com/assets/media/img/posts/computer_w...

Let's call it "ipython before computers", or "excel before computers" if you must.


>leading people astray in figuring out how human brains actually do work.

In order to make any point remotely like this, he'd have to go talk to some actual neuroscientists, which he very plainly didn't.


Maybe we don't know how human brains actually do work (in my opinion we do, but for the sake of the discussion...), but we know what they do: brains compute.

To use computers as a simile is not so strange.


Brains associate. "Compute" is too narrow a definition, in my opinion.

However, what we still lack completely is any kind of model for autonomy, i.e. how the brain decides what to "compute".


> We want computers, to a large extent, and maybe even to 100%, to work like humans do

http://yosefk.com/blog/ai-problems.html

"We don't build machines in order to raise them and love them; we build them to get work done.

If the thing is even remotely close to "intelligent", you can no longer issue commands; you must explain yourself and ask for something and then it will misunderstand you. Normal for a person, pretty shitty for a machine. Humans have the sacred right to make mistakes. Machines should be working as designed."


One look at the age pyramid of the world, or fertility statistics, will immediately drive home the point that we will in fact build machines to raise them and love them. I mean even in the places that for some reason currently feel comfortable with their birth statistics, there's no denying that the current generation is the last one to be larger than the one that came before it. In most countries, even that is not the case. In the US, GenX is the last one that was larger than the one before it, and only by the teensiest of margins. In Europe, that would have happened in the 80s.

A "child" that requires far fewer resources will be an incredibly popular product. Not to mention an economic necessity.

And I would even say: 100 billion "small" AIs (each on a scale vaguely comparable to a human mind) is a far preferable situation to one big AI, both from a survivability standpoint and from an "oh my God, it'll kill us all" standpoint.

> "We don't build machines in order to raise them and love them; we build them to get work done.

I would even say: if it's at all possible, we'll do just that in the next 10-15 years. If we don't get advanced enough AI by then, perhaps 20-25 years.

Not a doubt in my mind.


If one doesn't want a living child, why on earth would they need a mechanical one?

(Except for the Japanese, of course. Excluding them from this question.)

Seriously, fertility will become a non-issue if we prolong the fertile period of our lives. Imagine having one kid at thirty and another one at sixty.


I sort of get you; personally I feel the same. But then I walk around here and see people with dogs in baby carriages. Why? Because they could never have a child: they're too old, or they cannot or don't want to spend the money they think it would require.

The problem does not tend to be that it's physically impossible to have a child.



