GPT-4 can generate extraordinarily high-quality text if you know how to prompt it. But this ain't it - it's one of the most boring ways to prompt. The OP's response is what you get when you feed it an article summary plus "What are some possible ways we could increase the production of nitric oxide in the body?"
And no, that trashy CYA tone is not "precision"; if anything, it's vagueness. It's weasel words.
> no, that trashy CYA tone is not "precision"; if anything, it's vagueness. It's weasel words
That is not what I said. I said that /that/ «trashy CYA tone» is not "precision". But some people use expressions similar to the ones you noted in order to be factual.
Some texts give a strong impression of fakery; other texts can give a wrong impression when raw Bayesian indicators are applied too bluntly. Signs orient; they do not decide. Hints are not proof.
Some patterns in LLM output can be caricatures of proper effort (qualifying claims for factual precision, when relevant, is one of them).
So people may well write similarly to that. (Only, hopefully, as more than veneer.)