Yeah. At the risk of digging at a raw wound and trivializing a recent tragedy, this is kind of like saying "We can't expect structural engineers to develop a fundamentally-safe construction plan right from the get-go."
If you're going to do something at all, there are some fundamental standards that you just don't risk by putting them off for later. Not saying you have to start out with all the frills, but there is a minimum acceptable standard of safety and competency that can and should be expected of any new work, and things that don't meet such standards should never exist in a form that could potentially be misconstrued as doing so. Reasonable baseline security practices are certainly part of those inviolable professional standards.
Or see TCSEC, which is how the market produced the first security-focused systems. They were the only ones to pass pentesting at the time, with designs and implementations still stronger than most software today. Although it had issues, its core lifecycle requirements mostly work and are still used for high-assurance security implementations. Alternatively, there's the DO-178B standard (now DO-178C), which got more vendors writing well-documented, well-reviewed code that they run through all kinds of static analyzers and testing tools to avoid costly re-certifications. Two examples of regulations that worked so well they raised the status quo for both security and safety.
People mostly mention bad or questionable regulations when the topic comes up. I figure the good ones deserve mention, too, esp given they worked better than the market. That's probably because the market lacks both liability for software failures and, in most customers, the ability to evaluate security claims.
We absolutely can. Otherwise, expect regulation to do it (see: GDPR).