How is reading a Wikipedia page or a chemistry textbook any harder than getting step-by-step instructions? Makes you wonder why people use LLMs at all when the info is just sitting there.
If you just ask for JSON, it'll make up a different schema for each new answer, and models get noticeably less capable when you force them to follow a schema, so it's not quite that easy.
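One way to cope with the first half of that, roughly (a minimal sketch, assuming `call_llm` as a hypothetical stand-in for whatever client you use; `jsonschema` is the real validation library of that name): state one fixed schema in the prompt yourself and validate the reply against it.

```python
import json

import jsonschema  # pip install jsonschema

# One fixed schema, stated up front, so the model can't invent a new one per answer.
SCHEMA = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer", "confidence"],
}

def ask_json(question: str) -> dict:
    # call_llm() is a hypothetical stand-in for whatever client you actually use.
    prompt = (
        f"{question}\n\n"
        "Reply with JSON matching exactly this schema and nothing else:\n"
        f"{json.dumps(SCHEMA, indent=2)}"
    )
    reply = call_llm(prompt)
    data = json.loads(reply)           # fails loudly if the reply isn't JSON
    jsonschema.validate(data, SCHEMA)  # fails loudly if the shape drifted
    return data
```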
Chaining prompts can deal with that in many cases: let the model answer at full strength first, then have a second prompt reformat that answer into the fixed schema.
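Something like this, say (same hypothetical `call_llm` helper as above):

```python
import json

def ask_chained(question: str) -> dict:
    # Step 1: no format constraint, so the model answers at full strength.
    free_answer = call_llm(question)  # call_llm() is a hypothetical client stand-in

    # Step 2: a much easier task -- just reshape existing text into JSON.
    reformat = (
        "Rewrite the following answer as JSON with exactly two keys: "
        '"answer" (string) and "confidence" (number between 0 and 1). '
        "Output only the JSON.\n\n" + free_answer
    )
    return json.loads(call_llm(reformat))
```

The point of the split is that the second call is doing a dumb transcription job, so the schema constraint costs you little.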
Also, these models will likely keep getting smarter for some time, going by expert testimony to Congress, which lines up with the evidence so far.
It doesn't solve the second problem (models getting less capable under a schema constraint), though I can't say how much of an issue that is in practice, and chain-of-thought (CoT) would help.
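A sketch of the CoT variant, under the same assumptions as the snippets above: let the model reason in free text and only constrain the final line.

```python
import json

def ask_with_cot(question: str) -> dict:
    # call_llm() is the same hypothetical client stand-in as above.
    prompt = (
        f"{question}\n\n"
        "Think it through step by step in plain prose first. Then, on the "
        "final line and nowhere else, output JSON like "
        '{"answer": "...", "confidence": 0.0}'
    )
    reply = call_llm(prompt)
    # The reasoning stays free-form; only the last line has to parse.
    return json.loads(reply.strip().splitlines()[-1])
```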
JSON also isn't an ideal output format for a transformer, because JSON is recursively nested and transformers aren't recursive, so the model has to spend attention tracking and balancing closing brackets. YAML and other indentation-based formats sidestep that, IIRC. I also don't know how much this matters.
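To make the bracket point concrete, here's the same structure in both formats (PyYAML is the real library; expected output shown in comments):

```python
import json

import yaml  # pip install pyyaml

data = {"user": {"name": "Ada", "roles": ["admin", "dev"]}}

# Every opened brace/bracket below must be closed in the right order:
print(json.dumps(data, indent=2))
# {
#   "user": {
#     "name": "Ada",
#     "roles": [
#       "admin",
#       "dev"
#     ]
#   }
# }

# Same structure, carried by indentation alone -- nothing to balance:
print(yaml.safe_dump(data))
# user:
#   name: Ada
#   roles:
#   - admin
#   - dev
```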