Getting the game state data means deciding a priori what features the AI should learn from. The whole point of the deep learning paradigm is to allow a machine to learn such features itself, features that enable good prediction, visualization, generation (a.k.a. hallucination), etc.

Instead, researchers have fed these agents the raw input data, in the hope that the learned features can then be interpreted by humans as game state data.
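To make the contrast concrete, here is a minimal sketch (PyTorch, with made-up input and action counts, so purely illustrative): the first network is handed game-state features chosen a priori, while the second must learn its own features from raw frames, in the style of the DQN Atari setup.

    import torch
    import torch.nn as nn

    # A-priori features: someone decided up front what the agent
    # sees (hypothetical: player x/y, enemy x/y, score -> 5 inputs).
    state_net = nn.Sequential(
        nn.Linear(5, 64), nn.ReLU(),
        nn.Linear(64, 4),  # scores for 4 hypothetical actions
    )

    # Raw feed: the convolutional layers must learn the relevant
    # features themselves from a stack of 4 grayscale 84x84 frames.
    pixel_net = nn.Sequential(
        nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
        nn.Linear(256, 4),
    )

    state_net(torch.zeros(1, 5))          # features handed over
    pixel_net(torch.zeros(1, 4, 84, 84))  # features must emerge

Nothing tells the second network which pixels are the player or the score; any such notion has to emerge in the learned filters, which is exactly what makes interpreting them as game state non-trivial.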

I would say that it is a point of deep learning rather than "the whole point". For an AI to interact with the real world, building models from vision (as we do) makes a lot of sense. In the virtual world, however, it makes no sense: the model data is already available, and the AI has no need for something as inefficient as vision. We humans have to use vision (and sound, etc.) in games because we do not have access to direct data feeds; computers have no such limitation. Why cripple the AI by imposing human limitations on it?

If they want their AGI to be applicable to the real world, or to software with incomplete or insufficient APIs, they have to do it the way they are doing it here.

There isn't an API for me to check whether I'm still on the footpath and not the road as I walk down the street.

I can't use an API to tell me that water is boiling and that I shouldn't stick my hand in it.