Hacker News

The LLMs' training data is already mostly All Rights Reserved content, which is more restrictive than whatever license you could come up with, and if that doesn't stop anyone, then you sure as hell won't stand a chance either.

Your best bet to fight back is either to try to poison your data, or to train your own models on their data.
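One naive form of the poisoning idea mentioned above is sprinkling zero-width characters into published text, so that scraped copies differ from what human readers see. This is a hypothetical sketch for illustration only; the function name and parameters are made up here, and whether tricks like this actually degrade LLM training data is unproven.

```python
import random

ZERO_WIDTH = "\u200b"  # zero-width space, invisible in most renderers

def poison(text: str, rate: float = 0.2, seed: int = 0) -> str:
    """Insert zero-width spaces after a fraction of alphanumeric characters.

    Deterministic for a given seed, so the same page always renders the
    same poisoned output.
    """
    rng = random.Random(seed)
    out = []
    for ch in text:
        out.append(ch)
        if ch.isalnum() and rng.random() < rate:
            out.append(ZERO_WIDTH)
    return "".join(out)

sample = poison("All Rights Reserved")
# The poisoned string looks identical when rendered, but no longer
# byte-matches the original:
print(sample == "All Rights Reserved")  # False
```

Stripping the zero-width characters recovers the original text, which is also why such tricks are easy for a determined scraper to normalize away.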





