I think by "widespread use" he means the reach of the AI system. A dangerous analogy, but just to get the idea across: in the same way that higher incomes face higher tax rates, regulation should scale with how many people could potentially be affected by the AI system. E.g., a startup with 10 daily users should not be in the same regulation bracket as Google. If Google deploys an AI, it will reach billions of people, not 10. This would require a certain level of transparency from companies to obtain something like an "AI license", which seems pretty reasonable given the dangers of AI (the pragmatic ones, not the doomsday ones).
But the "reach" is _not_ just a function of how many users the company has; it's also what they do with it. If you have only one user who generates convincing misinformation and shares it on social media, the reach may be large even if your user base is tiny. Or your new voice-cloning model is used by a single user to make a large volume of fake hostage proof-of-life recordings. The problem, and the reason for guardrails (whether regulatory or otherwise), is that you don't know what your users will do with your new tech, even if there are only a small number of them.
I think this gets at what I meant by "widespread use": if the results of the AI are being put out into the world (outside of, say, a white paper), that's something that should be subject to scrutiny, even if only one person is using the AI to generate those results.