> And yet the moment they do that some lawyer submits a bunch of hallucinations to a court and they get in the news.
That's the lawyer's problem; it shouldn't become OpenAI's problem, or that of its other users. If we want to pretend that adults can make responsible decisions, then we should treat them as such and accept the non-zero failure rate that comes with that freedom.