In AI there have been experiments where agents need to communicate and cooperate in order to solve tasks. As a result, they developed a kind of "language". That is simply what happens when cooperation confers an evolutionary advantage.
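A toy version of such an experiment can be sketched as a referential game (this is an illustrative sketch under simplified assumptions, not the setup of any specific paper): a speaker observes a target, emits a signal, a listener guesses the target from the signal alone, and both are rewarded when the guess is correct. A shared "code" emerges purely from the reward structure.

```python
import random

random.seed(0)

N = 3     # number of possible targets and of possible signals
EPS = 0.1 # exploration rate

# Tabular "policies": speaker maps target -> signal scores,
# listener maps signal -> guess scores. Both start uninformed.
speaker = [[0.0] * N for _ in range(N)]
listener = [[0.0] * N for _ in range(N)]

def choose(scores):
    # Epsilon-greedy choice over a row of accumulated scores.
    if random.random() < EPS:
        return random.randrange(N)
    return scores.index(max(scores))

for _ in range(5000):
    target = random.randrange(N)
    signal = choose(speaker[target])
    guess = choose(listener[signal])
    # Shared reward: both agents are reinforced only on success,
    # and mildly penalised on failure (drives away from collisions).
    reward = 1.0 if guess == target else -0.1
    speaker[target][signal] += reward
    listener[signal][guess] += reward

# Evaluate the emergent code greedily; accuracy typically ends
# well above the 1/3 chance level once a stable mapping forms.
trials = 1000
correct = 0
for _ in range(trials):
    target = random.randrange(N)
    signal = speaker[target].index(max(speaker[target]))
    guess = listener[signal].index(max(listener[signal]))
    correct += (guess == target)
accuracy = correct / trials
```

Neither agent is told what any signal "means"; the mapping from targets to signals is an arbitrary convention that stabilises because deviating from it costs reward.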

An agent needs to model its environment in order to plan successful strategies. But when the environment contains other agents, it becomes necessary to model them too, i.e. to build representations that can predict those agents' future actions. Applied to the agent itself, such a model creates the "ego": a representation useful for predicting the agent's own future actions. All of this is necessary in order to maximise reward in the game (and by "game" I mean life, for humans, and the task at hand, for artificial agents).
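The key point is that the same predictive machinery works for other agents and for the agent itself. A minimal sketch (the `ActionModel` class and the "intersection" scenario are hypothetical, for illustration only): a frequency-based predictor trained on a partner's observed behaviour, and an identical one trained on the agent's own history, which plays the role of the self-model.

```python
from collections import Counter

class ActionModel:
    """Frequency-based predictor of an agent's next action given a state."""
    def __init__(self):
        self.counts = {}  # state -> Counter of observed actions

    def observe(self, state, action):
        self.counts.setdefault(state, Counter())[action] += 1

    def predict(self, state):
        # Most frequently observed action in this state, or None if unseen.
        if state not in self.counts:
            return None
        return self.counts[state].most_common(1)[0][0]

partner_model = ActionModel()
self_model = ActionModel()

# Observing a partner who almost always yields at an intersection...
for _ in range(10):
    partner_model.observe("intersection", "yield")
partner_model.observe("intersection", "go")

# ...and the agent's own history: it tends to go.
for _ in range(5):
    self_model.observe("intersection", "go")

partner_model.predict("intersection")  # -> "yield"
self_model.predict("intersection")     # -> "go"
```

The two models are structurally identical; only the data source differs. Turning the predictor inward yields the "ego" in the comment's sense: a compressed representation of one's own behavioural tendencies, useful for planning around oneself just as the partner model is useful for planning around others.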
