breckenedge | 1 day ago | on: Asking LLMs to create my game Shepard's Dog
Since you’re releasing the code to GitHub, do you think you’ll eventually run into issues with the training data including prior versions of the game?
tdy_err | 1 day ago
The implied scenario being that retraining on its own prior output would result in the model producing degraded future output? Why is that a given?
Chaosvex | 1 day ago
Read about model collapse. The TL;DR is garbage in, garbage out.
https://en.wikipedia.org/wiki/Model_collapse
mythrwy | 1 day ago
Probably the same reason that close relatives marrying each other for generations produces genetic problems.
Etherlord87 | 23 hours ago
Not the same reason at all. In genetics, the problem is that you lose genetic variety, and eventually harmful recessive genes are no longer masked. In the case of LLMs, it's just error accumulation.
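To make that error accumulation concrete, here's a minimal sketch of the usual toy setup (my own illustration, not from the thread; it only assumes NumPy): fit a Gaussian to some data, sample the next generation's "training data" entirely from the fit, and repeat.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Generation 0: the "real" data, drawn from a standard normal.
    data = rng.normal(loc=0.0, scale=1.0, size=100)

    for gen in range(1, 31):
        # "Train" generation N: fit a Gaussian to the previous
        # generation's output (i.e., estimate its mean and std).
        mu, sigma = data.mean(), data.std()
        # Samples from the fitted model become the next
        # generation's entire training set; no real data re-enters.
        data = rng.normal(loc=mu, scale=sigma, size=100)
        if gen % 5 == 0:
            print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

Each generation inherits the previous one's estimation error, so the mean drifts and the variance tends to shrink toward collapse over the generations. The small per-generation sample (100 points) just makes the compounding visible within a few dozen iterations.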