Oh, not that again. Didn't we see this argument about three weeks ago?
A 100% correct LLM may be impossible. An LLM checker that produces a confidence value may be possible. We sure need one, although last week's proposal for one wasn't very good.
When someone says something practical can't be done because of the halting problem, they're probably going in the wrong direction.
The authors are all from something called "UnitedWeCare", which offers "AI-Powered Holistic Mental Health Solutions". Not sure what to make of that.