I've been worried about tech for a while now, but mostly on the cybersecurity side, since I figured AI was still too far off. That more or less changed when I read a post by Gwern on GPT-3, where he makes a compelling case that "there is a hardware overhang": in other words, as AI research advances, we're going to find that we don't need substantially more compute to achieve the capabilities of advanced AI. I think this is the post where he talked about it:
"GPT-3 could have been done decades ago with global computing resources & scientific budgets; what could be done with today’s hardware & budgets that we just don’t know or care to do? There is a hardware overhang."
And thinking about it more, this multi-agent method should work in the offensive cybersecurity world if one could figure out how to crack the reward functions the way they did for PCA. I think the core insight was the hierarchy of agents: if you could formulate the reward function for each agent in the hierarchy intelligently enough, it could allow layered privilege escalation all the way to remote code execution (RCE) without random thrashing.
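To make that concrete, here's a minimal sketch of what staged reward shaping for such a hierarchy might look like. Everything in it is hypothetical: the stage list, the `Observation` fields, and the payout values are my own illustration, not taken from any real system.

```python
from dataclasses import dataclass

# Hypothetical attack stages, ordered from initial recon to full RCE.
STAGES = ["recon", "foothold", "privilege_escalation", "rce"]

@dataclass
class Observation:
    """Toy snapshot of episode state (all fields hypothetical)."""
    stage_reached: int   # index into STAGES of furthest completed stage (-1 = none)
    actions_taken: int   # total actions spent so far

def shaped_reward(prev: Observation, curr: Observation) -> float:
    """Pay out for layered progress; penalize thrashing.

    Each newly completed stage pays more than the last, so each
    sub-agent is incentivized to hand off upward rather than keep
    probing at its own layer. The small per-action cost is what
    discourages random thrashing.
    """
    reward = -0.01 * (curr.actions_taken - prev.actions_taken)  # step cost
    for stage_idx in range(prev.stage_reached + 1, curr.stage_reached + 1):
        reward += 10.0 * (stage_idx + 1)  # escalating payout per stage
    if curr.stage_reached == len(STAGES) - 1:
        reward += 100.0  # terminal bonus for reaching RCE
    return reward

# Example: a step that completes privilege escalation after 5 more actions.
before = Observation(stage_reached=1, actions_taken=20)
after = Observation(stage_reached=2, actions_taken=25)
print(shaped_reward(before, after))  # ~29.95
```

The escalating per-stage payouts are the design choice doing the work here: a flat reward would let an agent farm the easy early stages, while a step cost alone would just teach it to give up.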