The idea of using a multi-agent game, rather than minimizing a single loss function, is already used in GANs - and there it seems to be a very powerful generalization of optimization. Personally, I would say the idea is extremely interesting.
GANs have lots of applications, and PCA is useful for various tasks in data-analysis: compression, feature selection, reduced-dimensional modelling. I doubt finding applications will be a problem.
Reliably finding solutions (Nash equilibria) is much harder than optimizing for a minimum loss, however. So I see these being much harder to train than loss-based models.
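To make that contrast concrete, here's a toy sketch (mine, not from the paper): naive simultaneous gradient updates on even the simplest two-player game can spiral away from the Nash equilibrium, while gradient descent on a single convex loss converges straight to its minimum.

```python
# Toy illustration (not from the paper): simultaneous gradient play on the
# bilinear game f(x, y) = x * y, where x minimizes and y maximizes.
# The unique Nash equilibrium is (0, 0), but naive simultaneous gradient
# steps spiral outward instead of converging -- unlike plain gradient
# descent on a single convex loss, which heads straight for its minimum.

def simultaneous_gradient_play(steps=200, lr=0.1):
    x, y = 1.0, 1.0
    for _ in range(steps):
        gx = y                              # d/dx of (x * y)
        gy = x                              # d/dy of (x * y)
        x, y = x - lr * gx, y + lr * gy     # x descends, y ascends
    return x, y

def single_loss_descent(steps=200, lr=0.1):
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x                     # gradient descent on loss(x) = x**2
    return x

print(simultaneous_gradient_play())  # drifts away from the equilibrium (0, 0)
print(single_loss_descent())         # converges to the minimum at 0
```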
I should say this is the first CS paper I've ever read that evoked a mild sense of dread in me, although the positive applications can, and no doubt will, be substantial.
I've been worrying about tech for a while now, but mostly on the cybersecurity side, since I figured AI was still too far away. That more or less changed when I read a post by Gwern on GPT-3, where he makes a compelling case that "there is a hardware overhang": in other words, as AI research advances, we're going to find that we don't need substantially more compute to achieve the capabilities of advanced AI. I think this was the post where he talked about it:
"GPT-3 could have been done decades ago with global computing resources & scientific budgets; what could be done with today’s hardware & budgets that we just don’t know or care to do? There is a hardware overhang."
And thinking about it more, this multi-agent method should work in the offensive cybersecurity world if one could figure out the right reward functions, the way they did for PCA. I think their core insight was a hierarchy of agents. If one could formulate the reward functions for the different agents intelligently enough, it could allow layered privilege escalation to achieve RCE without random thrashing.
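For what it's worth, my rough reading of that hierarchy is something like the sketch below: each agent owns one direction, and agent i's reward is the variance it captures minus a penalty for overlapping with the agents ranked above it, so agent 1 settles on the top principal component, agent 2 on the next, and so on. The exact utility and update rule here are my paraphrase, not the paper's formulas.

```python
# Rough sketch of a hierarchical reward structure for PCA-as-a-game
# (my paraphrase of the idea, not the paper's exact utilities).
# Agent i maximizes the variance its unit vector v_i captures, minus a
# penalty for aligning with higher-ranked agents, so the agents settle
# into the top-k principal components in order.
import numpy as np

def pca_as_a_game(X, k=3, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    M = X.T @ X / len(X)                       # covariance of centered data
    d = M.shape[0]
    V = rng.normal(size=(k, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(steps):
        for i in range(k):                     # each agent plays in turn
            grad = 2 * M @ V[i]                # reward term: v_i^T M v_i
            for j in range(i):                 # penalty for overlapping with
                coef = (V[i] @ M @ V[j]) / (V[j] @ M @ V[j])
                grad -= 2 * coef * (M @ V[j])  #   the agents ranked above
            grad -= (grad @ V[i]) * V[i]       # project to stay on the sphere
            V[i] += lr * grad
            V[i] /= np.linalg.norm(V[i])
    return V                                   # rows approximate top-k PCs

X = np.random.default_rng(1).normal(size=(500, 10))
X -= X.mean(axis=0)
components = pca_as_a_game(X)
```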