Maybe I am naive about the progress in this space, but we should not use the word "AI" in the first place, because it adds to the confusion many people have about DNN-based programs. So-called AI is not much different from most software we already use, in the sense that you give the program an input and it spits out an output. When I think about AI, I think of animal intelligence (no pun intended), the kind dogs or other mammals have.
A spider has intelligence too. It's far more limited than a mammal's, but it's still on the same spectrum.
And intelligence is not a single linear measure. AIs outperform humans on some tasks and are worse than rats at others. It's more that AIs have this weirdly shaped higher-dimensional capability surface that partially overlaps with what we consider intelligence. Haggling over which exact overlap gets to be called intelligence and which doesn't seems like... unproductive border-drawing on a poorly charted concept map, especially since these systems get more powerful every year and such policies are about future capabilities, not just today's.
This is exactly my point about the word "AI": it shouldn't be used to describe LLMs or any other generative models. Then again, words evolve to mean different things over time, so perhaps AI is a fine term to stick with.