Hacker News

Since you’re releasing the code to GitHub, do you think you’ll eventually run into issues with the training data including prior versions of the game?

The implied scenario being that training on its own prior output would cause the model to produce degraded future output? Why is that a given?

Read about model collapse. The TL;DR is garbage in, garbage out.

https://en.wikipedia.org/wiki/Model_collapse


Probably for the same reason that close relatives marrying each other for generations produces genetic problems.

Not the same reason at all. In genetics the cause is loss of genetic variety: eventually harmful recessive genes are no longer masked. In the case of LLMs it's just error accumulation.
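The error-accumulation mechanism is easy to demonstrate with a toy simulation (my own sketch, not from the thread, using a Gaussian fit as a stand-in for "training"): each generation fits a model to samples drawn from the previous generation's model, so after generation zero no real data is ever seen again, and finite-sample fitting error compounds across generations.

```python
import random
import statistics

# Toy sketch of generational error accumulation (hypothetical illustration):
# each "generation" fits a Gaussian to the previous generation's synthetic
# output, then emits new samples from that fit. Finite-sample estimation
# error compounds, so the learned distribution drifts away from the
# original data over generations.

random.seed(0)

def train_generation(data, n_samples=500):
    """Fit a Gaussian to `data`, then emit synthetic samples from the fit."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0 trains on "real" data: a standard normal, mean 0, std 1.
data = [random.gauss(0.0, 1.0) for _ in range(500)]
print(f"gen  0: mean = {statistics.fmean(data):+.3f}, "
      f"std = {statistics.stdev(data):.3f}")

for gen in range(1, 31):
    # Each subsequent model sees only the previous model's output.
    data = train_generation(data)

print(f"gen 30: mean = {statistics.fmean(data):+.3f}, "
      f"std = {statistics.stdev(data):.3f}")
```

With ~3% multiplicative noise in the fitted std per generation, the estimated parameters random-walk away from the originals; nothing pulls them back, because the real data never reappears in the training loop. That drift, compounded, is the "garbage in, garbage out" dynamic in miniature.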


