Hacker News

Once the number of instructions passes a certain point, the cost of human analysis increases drastically.

From my previous work in reverse engineering, I could take comfort that I was reconstructing algorithms designed by humans, compiled and assembled by programs written by humans, with pleasantly un-optimized properties: frame pointers in some builds, registers saved and restored at the start and end of functions, logically separated functions. My job would have been much harder if I had had to deobfuscate code optimized without a strict ruleset.

After enough spaghetti assembly, reverse engineering would become too time-consuming for all but the most profitable purposes (interoperability, vulnerability research for a very important bug).

I'm using the following as a mental model of what machine-learning-generated code would look like: https://news.ycombinator.com/item?id=8092359 (an evolutionary algorithm designed a circuit that is extremely difficult to analyze but optimal at achieving its narrow purpose).
