
It might be possible, but nobody knows for sure, because these models are rather more mysterious than their architecture suggests.

> Maybe have the network which generates the answer be moderated by another network that assesses the truthiness of it.

Like a GAN? You can sometimes do that, but apparently not always.
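The generator-plus-moderator idea could be sketched roughly like this (a toy illustration, not real networks: `generate`, `critic`, and the scoring rule are all hypothetical stand-ins for the two models):

```python
import random

# Toy facts the "critic" can check against; a real moderator network would
# learn a truthiness score rather than look answers up.
KNOWN_FACTS = {"capital of France": "Paris", "2+2": "4"}

def generate(question, rng):
    # Stand-in for the answer-generating network: sometimes right, sometimes not.
    if rng.random() < 0.5:
        return KNOWN_FACTS.get(question, "unknown")
    return "made-up answer"

def critic(question, answer):
    # Stand-in for the moderating network: returns a truthiness score in [0, 1].
    return 1.0 if KNOWN_FACTS.get(question) == answer else 0.0

def moderated_answer(question, attempts=20, threshold=0.5, seed=0):
    # Generate candidates until the critic approves one, or give up.
    rng = random.Random(seed)
    for _ in range(attempts):
        candidate = generate(question, rng)
        if critic(question, candidate) >= threshold:
            return candidate
    return None  # critic never approved a candidate
```

The hard part, of course, is that the critic faces the same problem as the generator: nothing guarantees its truthiness scores are themselves accurate.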

If this were simple and obvious, they'd have done it as soon as the first model was interesting-but-wrong.




