
>According to the data provided by OpenAI, that isn't true anymore

OpenAI's main job is to sell the idea that their models are better than humans. I still remember when they were marketing their GPT-2 weights as too dangerous to release.




I remember that too; it's when I started following the space (shout out to Computerphile and Robert Miles). IIRC, the reason they gave was not "it's too dangerous because it's so badass." They were basically correct that it could produce sufficiently "human" output to break typical bot detectors on social media, which is a legitimate problem. Whether the repercussions of that failure to detect botting are meaningful enough to be considered "dangerous" is up to the reader to decide.

Also worth noting: I don't agree with the comment you're replying to, but I did want to add context to the GPT-2 situation.



