
We have good reasons to assume that we ourselves, the sum of a particular person's memories, skills, identity and intelligence, are contained within a particular set of parameters encoded by different biochemical means in our brains, and that the process by which we learn skills, facts and habits is literally a search through that space of parameters.
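(Roughly what "search through that space of parameters" means, as a toy Python sketch; the data and the one-parameter model are made up purely for illustration:)

    # Toy illustration: "learning" as a search through parameter space.
    # Fit y = w * x by nudging the single parameter w to reduce error.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

    w = 0.0    # start somewhere in parameter space
    lr = 0.01  # step size
    for _ in range(1000):
        grad = sum(2 * (w * x - y) * x for x, y in data)
        w -= lr * grad  # move downhill in the error landscape

    print(w)  # ends up near 2.0 -- the "learned" parameter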

Actually, I have better reasons to assume that a parametrized model would poorly describe a human brain. We grow organically, out of cells replicating in an environment to which they have adapted. Those cells make trillions of neural connections, and each cell has its own DNA, and so on.

We have already tried to understand just the DNA with straightforward parametric models, and even that turned out to be too organic to be described that way.

It is far more likely that human brains are specifically adapted to the world they live in, and can operate with abstract concepts encoded in fuzzy ways (like a progressive JPEG, for example) that let us apply concepts to situations and search for concepts that fit situations. The concepts themselves are the hard part. It's not really a parametric model: each concept represents experience stored in the connections between neurons.
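(Something like this toy sketch, where the features are entirely made up, just to gesture at coarse-to-fine matching:)

    # Rough sketch of the "progressive JPEG" idea: a concept stored at
    # increasing levels of detail, matched from fuzzy to sharp.
    cat = [
        {"animal"},                                           # coarsest layer
        {"animal", "four legs", "fur"},                       # refinement
        {"animal", "four legs", "fur", "whiskers", "meows"},  # finest layer
    ]

    def match(observation, concept_layers):
        # Walk from fuzzy to sharp; stop as soon as a layer conflicts.
        score = 0
        for layer in concept_layers:
            if not layer <= observation:  # set-containment check
                break
            score += 1
        return score  # how deeply the situation fits the concept

    print(match({"animal", "four legs", "fur", "whiskers", "meows"}, cat))  # 3
    print(match({"animal", "wings"}, cat))  # 1: only the fuzziest layer fits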

Yes, we can eventually teach these concepts to a computer, but we would have to figure out a language to express this information and data structures to store it. We'd still be designing the computer to mimic what we think we do. Ultimately, for the computer to truly replicate what humans do, it might need to simulate a gut brain, neurotransmitters and so on. And even then it would only be a simulation.

I think computer intelligence is just of a very different sort than human intelligence: less organic, with far less ability to come up with new concepts or to reprogram itself to "understand" them. It is fed parametrized models, does a brute-force search or iterative statistical approximation, and then saves the precomputed results; that's all. That's why humans can recognize a cat with a brain that fits inside a head and runs on very little energy, while computers need a huge data center that consumes a lot of it.
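(A caricature in Python of "iterative statistical approximation, then save the precomputed result": estimate a number by brute sampling, freeze it, and reuse it. The example is mine, not anything a real system does:)

    # Grind through random samples once, store the answer, reuse it forever.
    import random

    hits, n = 0, 1_000_000
    for _ in range(n):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1

    pi_estimate = 4 * hits / n  # the iterative statistical approximation
    print(pi_estimate)          # the "precomputed result" later queries reuse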

We aren't replicating human intelligence. We are building huge number crunchers, and the algorithms are still written by humans.

Even our languages are so tied up in organic experience acquired over the years (references to current events, puns, emotions, fear of some animals versus dominance over others, the inside jokes of each community, etc.) that language recognition is currently quite dumb and struggles with context. Once again we solve this by dumbing down the human input, making people talk to computers differently than they would if the computer "understood" anything the way a human with similar experience would.

When computers write algorithms to solve arbitrary problems the way groups of humans do, then I'll admit we made a huge leap forward. As it is, AlphaGo and self-driving cars are primarily the result of human work and human refinement of the algorithms. They just look amazingly smart because computers crunch numbers fast and consistently, and whatever works gets replicated across all the instances.

Computer AI does raise philosophical questions of identity and uniqueness, but current systems are not capable of true abstract thought.

The closest system I know is Cyc: http://www.wired.com/2016/03/doug-lenat-artificial-intellige...

And once again, the rules "we all know" were fed to it by humans, through a language, data structures and code devised by humans; now we will judge how well it does and replicate the result to millions of machines. We are still doing nearly all of the actual design.



