I have yet to jump on the LLM train (did it leave without me?), but I disagree with this sort of "<insert LLM> does/says <something wild or offensive>" outrage. Understand the technology and use it accordingly. It is not a person.
If ChatGPT or Gemini outputs some incorrect statement, guess what? It is a hallucination, an error, or whatever you want to call it. Treat it as such and move on. This pearl-clutching, I am concerned, will only result in the models being so heavily constrained that their usefulness suffers. These tools -- and that's all they are -- are neither infallible nor authoritative; their output must be validated by the human user.
If the output is incorrect, use the feedback mechanism so the prompt engineers can fix it. It shouldn't cause outrage, any more than a Google search leading you to an offensive or misleading site should.
See the bigger picture. The issue isn't so much the potential horrors of an LLM-based chatbot.
Consider: the world's greatest super geniuses have spent years working on IF $OUTPUT = "DIE" GOTO 50, and they still can't guarantee it won't barf.
The issue is what happens when an LLM gets embedded into some medical device, or factory, or financial system. If you haven't noticed, corporate America is spending billions and billions to do this as fast as they can.
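To make that IF $OUTPUT = "DIE" GOTO 50 joke concrete, here is a minimal sketch of what naive output filtering looks like and why it can't be made airtight. The function name and blocklist are made up for illustration; this is not any vendor's actual safety layer.

    # Hypothetical, deliberately naive output filter -- roughly the
    # IF $OUTPUT = "DIE" GOTO 50 approach, in Python.
    BLOCKLIST = {"die", "kill yourself"}

    def naive_filter(text: str) -> bool:
        """Block a response if any blocklisted phrase appears as a substring."""
        lowered = text.lower()
        return any(phrase in lowered for phrase in BLOCKLIST)

    print(naive_filter("please die"))                     # True: the literal phrase is caught
    print(naive_filter("cease all biological function"))  # False: same meaning sails through
    print(naive_filter("d i e"))                          # False: trivial obfuscation sails through
    print(naive_filter("eat a balanced diet"))            # True: false positive, "diet" contains "die"

Substring matching both over-blocks and under-blocks, and smarter variants (regexes, classifiers, another LLM as judge) tend to just move the failure cases around. That's the "can't guarantee it won't barf" part.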
You say that, and yes, I agree with you. But a human saying these words to another person could be charged and go to jail. There is a fine line here that many people just won't understand.
That's the whole point: it's not a human. You're rolling dice and interpreting a specific arrangement. The misleading thing here is the use of the term "AI"; there is no intelligence or intent involved. It isn't some sentient computer writing those words.
> This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe.
Yeah, and it is not a living thing that's saying that. That's the whole point. You found a way to give a computer a specific input that makes it produce that specific output. That's all there is to it; the computer is incapable of intent.
Perhaps users of these tools need training to inform them better and direct them on how to report this stuff.
Yeah, I find the shock and indignant outrage at a computer program's output to be disturbing.
"AI safety" is clever marketing. It implies that these are powerful entities when really they are just upgraded search engines. They don't think, they don't reason. The token generator chose an odd sequence this time.
Ouija-board safety. Sometimes it hallu--er, it channels the wrong spirits from the afterlife. But don't worry, the rest of the time it is definitely connecting to the correct spirits from beyond the veil.