I don't know how else to get this message across, but it does this all the time in all subjects.
It doesn't just occasionally hallucinate mistakes. The mechanism by which it produces correct output is identical to the one that produces mistakes, and it can't tell the difference between them.
There is no profession where a) you shouldn't prefer an expert over ChatGPT and b) you won't find experts idiotically using ChatGPT to reduce their workloads.
This is why it's a grotesquely mispositioned and mismarketed set of products.
GPT-based legal/medical/critical positioning is doomed to failure. These professions are essentially human-protected monopolies, and AI, even if it becomes extremely intelligent, cannot infiltrate them because the gatekeepers will never accept it.
A lawyer would be taking on massive risk by trusting GPT outputs. If they think a 1-in-6 error rate is something they can mitigate by filtering out the noise, they are mistaken. It's a slot machine in some sense, except it gives you a small payout 5 times out of 6.
For sure, I agree with you 100%. This is basically "Legal Analysis for Dummies" if you choose to rely on a machine to give you help here. Medical is also a bad domain.
What do you call a person who just barely passed the bar exam?
These tools are designed for and marketed to lawyers. They are not generalist LLM products, so your "this is why lawyers exist" argument makes no sense in the context of these products.
One of the products studied markets itself with: "AI-Assisted Research on Westlaw Precision is the first generative AI offering from Thomson Reuters and will help legal professionals find the answers they need faster and with high confidence."
Another says: "Most attorneys know Practical Law as the trusted, up-to-date legal know-how platform that helps them get accurate answers across major practice areas. Its newest enhancement, Ask Practical Law AI, is a generative AI search tool that dramatically improves the way you access the trusted expertise from Practical Law."
A third says: "Transform your legal work with Lexis+ AI."