There's a lot to be said for running qemu from the command line versus the complexity and opaqueness of libvirt. For example, you can put the invocation in a script, check it into version control, and it will work on anyone's Linux box. You can easily debug the config and be sure exactly what options qemu was given.
Yes, there may be many switches in the invocation, but at least they are all in one place, the invocation syntax rarely changes and works across distributions, and the options are applied from scratch on each invocation rather than living as implicit state hidden in daemons and their configurations/databases.
Your example is easier on the eyes written in config-file style, leaving out the unnecessary options:
I can see the value in "putting the command-line in a script and launch it anywhere". Many of us use bespoke QEMU command-line scripts for test and development. And I even know a small hosting provider who manages their production guests entirely via QEMU command-line "config file" syntax (-writeconfig and -readconfig options—they're little-known, not least because they don't cover all the options; upstream has some proposals to address it).
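For readers who haven't seen it, the -readconfig syntax mentioned above looks roughly like the fragment below. This is a hedged sketch: the exact group and key names vary by QEMU version, and (as noted) these files don't cover every command-line option.

```ini
# guest.cfg -- load with: qemu-system-x86_64 -readconfig guest.cfg
# (a minimal illustrative fragment, not a complete guest definition)

[machine]
  type = "q35"
  accel = "kvm"

[drive "disk0"]
  file = "guest.qcow2"
  format = "qcow2"
  if = "virtio"
```

You can generate a starting point from an existing invocation with -writeconfig and then trim it down.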
I appreciate that libvirt may not be a suitable option for some. That said, you might want to consider the below points on where it adds value (for production VMs; not talking about test and development setups):
• It's not about having all the QEMU "switches available in one place". The dizzying array of switches makes it trivial to shoot yourself in the foot (e.g. when getting fiddly details like PCI device addressing right manually).
• Knowing the "best practice" command line will get you optimal performance out of your (production) guest, and libvirt handles that for you. If the best practices change, libvirt keeps up with them for you, tracking how the recommended QEMU command line evolves.
• Launching the command line is only the first step. If it's a production guest, at some point you might need to live-migrate it, take a live disk backup, or, if the guest has a long disk image backing chain, "merge" the disk images into a single, coalesced image—all without the guest going offline. These tasks can be done manually (assuming you launched QEMU the 'right way' upfront) via extremely tedious and error-prone interactions with QEMU's JSON-based run-time interface, QMP (QEMU Machine Protocol). But that's risky and a colossal waste of time.
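To give a flavour of what "manually" means here, this sketch builds the JSON lines you would have to write over QEMU's QMP socket by hand (the socket comes from launching QEMU with something like -qmp unix:/tmp/qmp.sock,server). The device name "drive0" and the target path are illustrative, not from the thread above.

```python
import json

def qmp_command(name, **arguments):
    """Serialize a QMP command as one JSON line, ready to send
    over the QMP unix socket after connecting to it."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# Every QMP session must negotiate capabilities before any other command:
print(qmp_command("qmp_capabilities"))

# Kick off a live full backup of one drive; the fiddly part in practice
# is getting every argument exactly right for your setup:
print(qmp_command("drive-backup",
                  device="drive0",
                  sync="full",
                  target="/var/backup/drive0.img"))
```

And that's just two of the hundreds of commands and arguments the parent comment counts below: libvirt exists largely to drive this interface correctly on your behalf.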
• Speaking of QMP, the run-time interface has an even more dizzying array of commands—to be precise, as of 2015, QMP had about 126 commands plus 33 events, and more than 700 named arguments and results. Try keeping track of all of that manually. It reminds me of how a QEMU sub-maintainer once memorably compared (in his 2015 talk, "QEMU interface introspection: From hacks to solutions") the volume of the QMP schema to old religious books: "the QMP schema is larger than the 'Gospel of Luke', but smaller than 'Genesis'". :-)
IOW, I've yet to see anyone who launches multiple guests (even on the moderate scale of hundreds of VMs) use direct QEMU in production. There are the odd exceptions, as I noted earlier, but it's certainly not the norm.
But NixOS has a neat feature where any (?) NixOS configuration can trivially be built as a "vm" instead, which assembles an initrd for the described system along with a script that runs a tailored qemu invocation to launch it, leaving a qcow2 file in your cwd containing any changes you make to the filesystem. It makes the whole thing extremely usable, and is done in a very extensible way.