
My takeaway after scanning the paper -

In an ideal setting, a trained model learns the real-world probability distribution exactly and generates data indistinguishable from data sampled from the real world. Training on that generated data would be harmless but pointless, since the model is already a perfect representation of the real world.

Practically, however, a model is only a lossy approximation of the real-world probability distribution. Repeated self-training would simply compound that loss generation after generation, amplifying its distortions of both the probable and the improbable.
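
To make the compounding concrete, here's a toy sketch (mine, not the paper's actual experiment): each "generation" fits a Gaussian by maximum likelihood to a finite sample drawn from the previous generation's fit. Each individual fit is only slightly off, but the next generation inherits that error and adds its own, so the fitted distribution drifts away from the original.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Real world": a standard normal distribution.
    mu, sigma = 0.0, 1.0
    n_samples = 1_000        # finite sample => each fit is slightly lossy
    n_generations = 30

    for gen in range(n_generations):
        # Train only on data sampled from the previous generation's model.
        data = rng.normal(mu, sigma, n_samples)
        # "Training" here is just a maximum-likelihood Gaussian fit.
        mu, sigma = data.mean(), data.std()
        print(f"gen {gen:2d}: mu={mu:+.3f} sigma={sigma:.3f}")

Run it a few times with different seeds: mu and sigma wander, and with smaller n_samples they wander faster. The paper deals with far richer models, but this is the compounding-of-loss intuition in its simplest form.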



