
I reckon they did



Quite possible! But given how ChatGPT hallucinates, and my general lack of knowledge about LLMs in general and ChatGPT in particular, I would be hesitant to take what it says at face value. I'm especially hesitant to trust anything it says about itself, since many of its specifics are not publicly documented and are essentially unverifiable.

I wish there were some way for it to communicate that certain responses about itself were more or less hardcoded.



