> Human beings can also not distinguish between fantasy and reality (on average)
this statement is prima facie untrue, and quite ridiculous sounding
it is certainly the case that humans sometimes can't tell without checking, which is why it's important to verify the output is correct rather than relying on our own sometimes-faulty wrongness-detection capabilities
the AI output could have been completely made up, hence it can't be relied upon for factual claims — yet you relied on it for exactly that
> this statement is prima facie untrue, and quite ridiculous sounding
If it sounds ridiculous to you, I invite you to speak to some actual human beings. They believe so many things that are patently ridiculous (and that frequently contradict each other) that it's like stepping into an ocean of lies. I'm honestly surprised that this specific part of my statement is that weird to you.
> it's important to verify the output is correct rather than relying on our own sometimes faulty wrongness detection capabilities
This is a step you must take with both AI and people. Given that this is the case, I do not see a difference in value in terms of the reliability of facts. If anything, you could make a case that human imagination is more valuable (because the AI is just a mathematical reproduction of whatever imagination went into it). That said, I can't think of why it would be the case outside of a human superiority angle, which is tedious.
> the AI output could have been completely made up
Hence why I called it a springboard for further discussion and not an encyclopedia of absolute fact.
> If it sounds ridiculous to you, I invite you to speak to some actual human beings
sure, I will speak to you
below are 10 propositions, each either fantasy or reality. I invite you to try to determine which is which, then we can find out whether the comment about the average is true: