Well, one thing to keep in mind is what can practically be regulated in a way that actually brings the desired results.
Here are some examples: firearms don't kill people on their own, yet we regulate their production as well as their distribution and use (analogous to release and deployment in the article's terms). We do this because regulating use alone would be impractical to enforce, and in this case we'd rather prevent harm than punish perpetrators after the fact.
Another example: we generally don't seek a perfectly just verdict when suing the insurance company of a driver who rear-ended our car. Maybe the collision was actually the front driver's fault, but the courts don't have time to sort that out, so even if the ruling is "unjust" in many cases, they will still rule in favor of the front driver.
So, is it practical to regulate at the level of deployment? I don't know. It would seem that, to be on the safe side, it'd be better to find a way to regulate earlier. E.g. an autopilot combined with a drone carrying a dangerous payload: certainly, whoever launched the drone bears responsibility, but as with guns, perhaps there should be regulations requiring that such programs be licensed in a way that children or mentally ill people couldn't obtain them?
You're basically making the "guns don't kill people, people kill people" argument, with LLMs instead of guns: "A gun on its own is just a mechanical device. Only by assembling it into a gun/ammunition/shooter system does it gain the potential to do harm, and only by performing the act of shooting an innocent bystander is harm actually done. Therefore, we should only regulate the act of loading a gun and shooting someone instead of mere possession or distribution of a firearm."
With firearms, the argument is usually rejected because such a regulation would obviously be impossible to enforce: If someone already has a gun and ammunition, they will just need a few seconds to load it up and pull the trigger. No cop could force them to only shoot at legitimate targets.
The analogue with LLMs would be: "An LLM on its own is just a collection of numbers. Only by deploying it into a software system does it gain the potential to do harm, and only by executing the system and causing malicious output is the harm actually done. Therefore, we should only regulate deployment of LLMs instead of storage and release."
You could make the same counter-argument here as in the gun case: such a regulation would be impossible to enforce. The interesting thing about open-source LLMs is precisely that you can deploy them on your own hardware without bringing any third party into the loop. Companies can deploy them in their own data centers, hobbyists on their own consumer machines; someone could just run Llama3 on their laptop solely for themselves. There is no way a regulator could even detect all those deployments, let alone validate them.
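To give a sense of how low the barrier is, here's a minimal sketch of such a fully local deployment, assuming the llama-cpp-python package is installed and a copy of the Llama 3 weights already sits on disk (the file name is only an illustration). Nothing in it touches any server a regulator could monitor:

    # Minimal sketch of a purely local deployment (assumes llama-cpp-python
    # is installed and a GGUF copy of the Llama 3 weights is on disk;
    # the file name below is just an example).
    from llama_cpp import Llama

    # Load the open weights straight from the local filesystem.
    llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf")

    # Inference runs entirely on this machine; no account, API key,
    # or third-party service is involved at any point.
    out = llm("Explain why local inference is hard to regulate:", max_tokens=64)
    print(out["choices"][0]["text"])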
That's why I find the argument disingenuous: you could make a case that the harms caused by unregulated, home-deployed LLMs are much smaller than the benefits, but that would be a different argument. You're essentially arguing that the regulator should hamstring itself by leaving unregulated the part where regulation could actually be enforced (model training and release) and only regulating the "deployment" part that can't be enforced.