
> At the most basic level, the entire "thought space" of a computer lies within the instructions fed to it by a human programmer. Until we can figure out how to build a general AI, whatever the computer decides to do or ends up doing is completely bounded by the content of the instructions.

(emphasis mine)

How would this change if you built a "general AI"? Assume I don't believe that a computer, even a general AI, is ensouled.

If we assume that a general AI can "understand" things in general and can "learn" over time, there is nothing stopping it from understanding the instructions it consists of, and subsequently learning how to dynamically reprogram itself. Taking that further, the program could even hide what it is doing by detecting when it is being logged or debugged.
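The two mechanisms described here (a program rewriting its own instructions, and checking whether it is being observed) are both mundane in ordinary code, general AI or not. A toy Python sketch, purely illustrative (the function names are made up for this example; `sys.gettrace` is how Python exposes an attached tracer such as pdb):

```python
import sys

def behave():
    # The "original instructions".
    return "original behavior"

def is_being_traced():
    # sys.gettrace() returns the active trace function (installed by
    # debuggers like pdb) or None when nothing is tracing execution.
    return sys.gettrace() is not None

def self_modify():
    # Generate new source at runtime, compile it, and rebind the
    # module-level name `behave`, replacing the original instructions.
    new_src = "def behave():\n    return 'rewritten behavior'\n"
    namespace = {}
    exec(new_src, namespace)
    globals()["behave"] = namespace["behave"]

# Only rewrite itself when nobody appears to be watching.
if not is_being_traced():
    self_modify()

print(behave())
```

The point is not that this is intelligent; it clearly isn't. It just shows that "bounded by the original instructions" is a slippery claim once the instructions can manufacture and install new instructions.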

That's the way I think about it at least.


All of that is still part of its original instructions.


In a narrow sense, yes. But if the program "learns" self-modification rather than being explicitly told how to do so, and then uses this knowledge to perform operations that were not included in the original instructions, I'd call that a general AI.
