
The "simulate at reduced speed" theory appeals to me, but I think the actual numbers make it implausible. Assuming Moore's Law, 30 years is a 1,000,000 speed up, plus let's say 4.5 years for 3 more doublings, giving us 8,000,000. To simulate 1 second would take 3 months. Debugging would be frustrating. (assuming 35 years from now; seems an arbitrary figure.)



> Assuming Moore's Law

That alone may already be a mistake; it's an observation, not a law, after all.

Besides, compared to 35 years ago we can now do things 1,000,000 times faster, but computers are not 1,000,000 times 'smarter'; they just give the same answers you could compute back then, only faster and on fewer machines.

The future is parallel anyway, so it isn't Moore's law (increasing transistor density on a single chip) per se that will drive this. More likely there will be a shift toward packing in more, smaller chips (better yield) with better communication between them (think computing fabric).

We need a huge advance in programming languages before we can really contemplate building an AI that takes advantage of such a structure, though. Simply simulating the organic soup that forms a brain is going to be a much harder problem computationally, and it may simulate a dead or insane brain far more easily than a live and thinking one.



