
Game Theory has nothing to say about partially observable, stochastic, continuous environments and has no concept of the future.



> Game Theory has nothing to say about partially observable, stochastic, continuous environments and has no concept of the future.

This is not true. What makes you think this way?

Game theory studies games with stochastic outcomes, adversaries, and partial observability. Bayesian Nash equilibrium, trembling hand equilibrium, and correlated equilibrium are generalizations of the Nash equilibrium to these domains. Game theory also extends to continuous environments and to sequential games that have a concept of past and future.
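
These all build on the basic mixed-equilibrium idea. To make that concrete, here is a minimal numpy sketch (the function name is mine) that finds the mixed-strategy Nash equilibrium of a 2x2 zero-sum game by making each player's mix leave the opponent indifferent, applied to matching pennies:

    import numpy as np

    def mixed_nash_2x2_zero_sum(A):
        # A[i, j] is the row player's payoff; the column player gets -A[i, j].
        # Assumes no saddle point in pure strategies, so each player mixes
        # to leave the opponent indifferent between the opponent's actions.
        (a, b), (c, d) = A
        p = (d - c) / (a - b - c + d)   # P(row 0): makes both columns equally good
        q = (d - b) / (a - b - c + d)   # P(col 0): makes both rows equally good
        return np.array([p, 1 - p]), np.array([q, 1 - q])

    # Matching pennies has no pure-strategy equilibrium; the mixed one is 50/50.
    A = np.array([[1.0, -1.0], [-1.0, 1.0]])
    row, col = mixed_nash_2x2_zero_sum(A)
    print(row, col)        # [0.5 0.5] [0.5 0.5]
    print(row @ A @ col)   # value of the game: 0.0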


What is the Nash equilibrium of Pong? Or Go? Or chess?

All the generalizations you cite are fancy economics methods which lose applicability at first contact with complex real-world scenarios.

Game Theory is like asking questions such as "Is chess a win for white?" [1]. Sure, answering this question might be important theoretically, but it's of no use when designing an agent that plays and learns.

[1] https://en.wikipedia.org/wiki/Solving_chess


  lose applicability at first contact
When thinking about complex domains, having a simplistic model is helpful for several reasons: one is to establish a shared vocabulary for communicating with others; another is to study where, why, and how the simplistic model fails to deal with the complex domain. That failure analysis indicates how to go about handling the complex domain, namely by selectively making the simplistic model more complicated until it is sufficiently rich.

One example of the power of simplicity is the habit of theoretical computer scientists of using simplistic devices such as Turing machines as models of computing, and of conflating "efficient computation" with the complexity class P and polynomial-time (or even LOGSPACE) reducibility. We know that this "loses applicability at first contact with reality", but it is currently our only approach to a meaningful theory of computational complexity, and it has already led to deep insights such as interactive proof systems and zero-knowledge proofs.


Hahahaha, reminds me of the saying about spherical cows.

If Game Theory were being used as an "explanation" of why deep learning outperforms other methods, then maybe it would be fine. But we already have Statistical Learning Theory and PAC learning, which try to explain/prove bounds on machine learning models.

There is nothing wrong with having a theory, but these days Deep Learning has become a bandwagon that anything and everything gets related to. And I just don't see how Game Theory, of all mathematical tools, is applicable to Deep Learning.

Finally, a misguided theory-driven approach was exactly the reason why Deep Learning was so controversial initially. It turns out that building a theory like SLT and deriving ML models like SVMs from it is a really bad idea. Deep Learning succeeded because the ML and vision communities adopted empiricism over the fanciest/longest proof with a convex loss. So when someone goes around claiming Game Theory is the savior/future of Deep Learning, I find it perplexing.


Game theory gave us minimax, which is a major part of any top chess engine.

https://en.m.wikipedia.org/wiki/Minimax
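
To illustrate the idea, here is a self-contained toy example of plain minimax search (a take-1-to-3-sticks Nim variant rather than chess, since chess additionally needs an evaluation function and pruning):

    def minimax(sticks, maximizing):
        # Toy Nim: players alternately take 1-3 sticks; whoever takes the
        # last stick wins. Returns +1 if the maximizer wins with best play,
        # -1 otherwise.
        if sticks == 0:
            # The previous player took the last stick and won.
            return -1 if maximizing else 1
        scores = (minimax(sticks - take, not maximizing)
                  for take in (1, 2, 3) if take <= sticks)
        return max(scores) if maximizing else min(scores)

    print(minimax(4, True))   # -1: any take leaves 1-3 for the opponent to finish
    print(minimax(5, True))   # +1: take 1 and hand the opponent the losing 4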


The entire field of game theory grew out of von Neumann's work on analyzing how people play chess [1].

Quote: "What should be noted here is that the format of his intended talks follows exactly the development of game theory up to 1928, beginning with Zermelo on chess, and culminating with von Neumann groping for a theory for three and more players: game theory was still the mathematics of parlor games."

[1] http://elaine.ihs.ac.at/~blume/leonardjel.pdf


Sure, but my point is that the Game Theory approach does not help us build better chess-playing agents. E.g., how color is used in photography might make a very interesting paper, but from the perspective of building an image recognition model it's irrelevant.


Yes, I agree.

Although "distance from Nash equilibrium" can be a useful metric to measure the performance of some (game playing) systems.


I believe an equilibrium would be achieved at a single point of state/time for any of the games you mentioned, the equilibrium being the best next move given what you know about your risk and your opponent.


> Game Theory has nothing to say about partially observable, stochastic, continuous environments and has no concept of the future.

Untrue; game theory has plenty to say about such environments, though that goes well beyond the trivial level of exposure that most people who are aware of game theory have had to it.


This post is in particular about utilizing certain concepts from game theory to build new architectures. For example, it talks about adversarial networks where the problem is reduced to finding Nash equilibria between competing models -- a concept that merges game theory with neural networks.

The post is not talking about making a new paradigm primarily based on game theory concepts.
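
Concretely, that adversarial setup is the GAN minimax game from Goodfellow et al.: the generator G and discriminator D are opposing players, and training looks for a saddle point of V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]. A minimal PyTorch sketch on 1-D toy data (architecture and hyperparameters are illustrative only, not tuned):

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = 0.5 * torch.randn(64, 1) + 3.0    # "data" distribution: N(3, 0.5)
        z = torch.randn(64, 1)                   # generator noise
        ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

        # Discriminator move: classify real vs. generated samples.
        opt_d.zero_grad()
        d_loss = bce(D(real), ones) + bce(D(G(z).detach()), zeros)
        d_loss.backward()
        opt_d.step()

        # Generator move: fool D (non-saturating form of the minimax loss).
        opt_g.zero_grad()
        g_loss = bce(D(G(z)), ones)
        g_loss.backward()
        opt_g.step()

    print(G(torch.randn(1000, 1)).mean().item())  # drifts toward 3.0 as G learns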


Game theory does have things to say about partially observable, stochastic, continuous environments, and it does have a concept of the future, actually.



