
It failed at the first step. This is like the worst timeline, where people just cannot think for themselves and assume that because the AI produced an answer, it must be true.



You’re reading way too much into my post.

It’s a lot of words run together for the purpose of being a logic puzzle, and I obviously made a parsing mistake in my brain.

I’m not assuming the AI is right; I’m trying to put a factual stake in the ground, one way or the other, so we have data points rather than speculation.


I dunno. Don't you think this could happen with other replies from ChatGPT? I think this is the "it" about this tech: it really, really does trick us sometimes. It's really good at tricking us, and it seems like it's getting better!



