
> So when we look at something like this and say "we can't know why the computer did it this way" it almost doesn't make sense. The computer did what it did because it figured out that was the best solution - it doesn't understand the concept of "why".

This idea is what I don't agree with.

At the most basic level, the entire "thought space" of a computer lies within the instructions fed to it by a human programmer. Until we can figure out how to build a general AI, whatever the computer decides to do or ends up doing is completely bounded by the content of the instructions. This implies that, given enough resources, a human equipped with a debugger and an understanding of the instructions (i.e. the algorithm) could reconstruct exactly why and how a program arrived at a particular result.

> it [the program] doesn't understand the concept of "why"

Oh yeah, it doesn't "understand" the concept, but that doesn't mean it doesn't make decisions (branches) based on some parameters (inputs or intermediate values). In other words, there is in fact a "why"! So if we can step through the sequence of branches executed and their results, we can understand "why" even if the program itself can't do that.
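
A minimal sketch of what I mean (a toy example in Python, nothing specific to any real program): sys.settrace lets you record every line a function executes, so the full sequence of branch decisions - the "why" - can be read back out even though the function itself has no notion of it.

    import sys

    def classify(x):
        # A toy "decision": each branch taken is part of the "why".
        if x < 0:
            return "negative"
        if x % 2 == 0:
            return "even"
        return "odd"

    trace = []

    def record(frame, event, arg):
        # Log every line executed inside classify(); the sequence of line
        # numbers is exactly the sequence of branch decisions taken.
        if event == "line" and frame.f_code.co_name == "classify":
            trace.append(frame.f_lineno)
        return record

    sys.settrace(record)
    result = classify(7)
    sys.settrace(None)
    print(result, trace)  # "odd", plus the lines visited on the way there

Scaling that up to a real program is a resources problem, not an in-principle one.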




Once the number of instructions grows past a certain point, the cost of human analysis increases drastically.

From my previous work performing reverse engineering, I could take comfort that I was reconstructing algorithms designed by humans, compiled and assembled by programs written by humans, with pleasantly un-optimized properties (frame pointers preserved in some builds, registers saved and restored at the start and end of functions, logically separated functions). My job would be much harder if I had to deobfuscate code that had been optimized without a strict ruleset.

After enough spaghetti assembly, reverse engineering would become too time-consuming except for the most profitable undertakings (interoperability, vulnerability research for a very important bug).

I'm using the following mental model of what machine learning-generated code would look like: https://news.ycombinator.com/item?id=8092359 (an evolutionary algorithm designed a circuit that is extremely difficult to analyze but is optimal at achieving its narrow purpose).
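
To make that model concrete, here's a toy sketch of the same kind of search (made-up target and parameters, nothing to do with the actual circuit in the link): a tiny evolutionary algorithm that reliably reaches its narrow goal while recording nothing you could call a rationale.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # a made-up "narrow purpose"
    POP, GENS, MUT = 30, 200, 0.05      # made-up parameters

    def fitness(genome):
        # Selection pressure: how many bits match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome):
        return [1 - g if random.random() < MUT else g for g in genome]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for _ in range(GENS):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:POP // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP - len(survivors))]

    best = max(population, key=fitness)
    print(best, fitness(best))  # usually the target, yet no "reasons" anywhere

The winning genome works, but asking why it looks the way it does gets you nothing better than "selection kept it".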


> At the most basic level, the entire "thought space" of a computer lies within the instructions fed to it by a human programmer. *Until we can figure out how to build a general AI*, whatever the computer decides to do or ends up doing is completely bounded by the content of the instructions.

(emphasis mine)

How would this change if you built a "general AI"? Assume I don't believe that a computer, even a general AI, is ensouled.


If we assume that a general AI can "understand" things in general and can "learn" over time, there is nothing stopping it from understanding the instructions it consists of, and then learning how to dynamically reprogram itself. Extending that further, the program could also potentially obfuscate what it is doing whenever it detects logging or debugging activity.
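
As a trivial illustration of that last point (just a sketch in Python, ordinary code rather than anything AI-like): in CPython an attached tracing debugger such as pdb is visible through sys.gettrace(), so even a plain program can change its behavior when it notices it is being watched.

    import sys

    def do_work():
        if sys.gettrace() is not None:
            # A tracing debugger (e.g. pdb) has installed a trace function:
            # take the innocuous path while being observed.
            return "nothing interesting"
        # No tracer attached: take the path we'd rather keep unobserved.
        return "the behavior we actually care about"

    print(do_work())

A system that learned a trick like that on its own, instead of being handed it, is the kind of thing I have in mind.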

That's the way I think about it at least.


All of that is still part of its original instructions.


In a narrow sense, yes. But if the program "learns" self-modification rather than being explicitly told how to do so, and then uses this knowledge to perform operations that were not included in the original instructions, I'd call that a general AI.



