The problem is always in the tooling. Prompts alone aren't suitable for creative work; you need a large set of non-textual tools that let you guide the model and create exactly what you want. For instance, Stable Diffusion has crude higher-order tools like ControlNet, though they're still poorly suited for real production use because they were built by ML nerds, not creatives.
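To make "non-textual tools" concrete, here's a minimal sketch of one such higher-order tool in the Stable Diffusion ecosystem: ControlNet via Hugging Face's diffusers library, where an edge map, not the prompt, dictates composition. The model IDs and input path are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: guiding Stable Diffusion with a Canny edge map via ControlNet,
# so composition comes from a reference image rather than prompt wording alone.
# Model IDs and the input path are illustrative assumptions.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract edges from a reference image; these pin down the layout.
ref = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
edges = Image.fromarray(np.stack([edges] * 3, axis=-1))  # back to 3-channel

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt only styles the result; the edge map controls the structure.
image = pipe("a watercolor city street at dusk", image=edges).images[0]
image.save("out.png")
```

The point is the division of labor: the edge map carries intent that no amount of prompt wording can express, which is exactly what "guiding the model" means here.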
OpenAI doesn't even have that, because they're an AI company, not a VFX company. Beyond not understanding the needs of their users, they see this model as a neat intermediate result on the path to AGI, and as a progress report to raise more money. They're interested in the advanced emergent behavior it exhibits, not in artistic tools. That's why they never bothered to fix all the artifacts DALL-E 3 produces, let alone add any tooling to it. Sora will be the same, and its quality doesn't come remotely close to what production requires. It's more of an experiment.
What you see in the OP is just marketing material made by OpenAI in an attempt to look less nefarious to creatives by appealing to authority (it took them quite a while to figure that out; usually they're superb at marketing). I can guarantee you won't see any real production use of it, because that's just not what OpenAI is in this for. They probably already have another, better model in the making anyway.
Models made by actual VFX software companies will have a chance of being used, because those companies care about usability. Models made by Stability (SD3 uses the same diffusion-transformer architecture as Sora, though for image generation) also have a chance, because they're open-weights and have a tremendous amount of tooling around them. Models from OpenAI: unlikely.
I always tell people that if I couldn't work with computers in any capacity, what I'd most like to do is direct movies. I've already played with AnimateDiff and I can't wait to play with Sora. These new AI tools (especially the FOSS ones) are an absolute boon for anyone without a major budget.