I agree. I've seen containerization mentioned by many vendors in the last year. It adds another layer of questions to vet like "do they possibly know what they're doing?"
Neither by default. I look to be convinced that they've implemented containers because it made sense for their technical architecture or strategy.
I have a vendor who pitched their new "cloud-native" re-platforming project and it really spooked me. It's a data management-type tool that they were migrating from a traditional on-prem client/server architecture to an AWS-hosted Angular interface with a MongoDB backend. I got the same pitch a year later and the entire stack had changed. I was spooked the first time; now I'm really spooked and thinking about migrating off the platform.
The key thing is that it's a sign that you're effectively going to be outsourcing a complete userland to them, which means you'll be much more dependent on the vendor for security updates to anything their code depends on, as well as to their code itself.
Whether or not this is a good idea depends on your situation and their level of competence.
The three products I have in mind are all conventional 'lxc/docker' containers. Two are provided with docker-compose scripts. Two are available as either containerized or traditional products. One is container-only.
Normally docker-compose files should be portable. If they ship OCI images, those should also be runnable as microVMs (no LXC-style container at all) on fly.io, Google Cloud Run, Kata Containers (https://katacontainers.io/), and others.
The point I'm trying to make is that if you're aware of the different ways to run an OCI image, you can run things in much the same way as the old approach of packaging everything as a VM image (AMIs on EC2).
It does prevent you from setting up your own OS distribution and integrating the app directly with it, but so do AMIs.
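To make that a bit more concrete, here's a rough sketch of what I mean as a throwaway Python script. The image name (vendor/app:1.2.3), the Kata runtime name, and the Cloud Run region are all placeholders/assumptions, and it presumes Docker, Kata Containers, and the gcloud CLI are already installed and configured:

    # Rough sketch: the same vendor-supplied OCI image run three ways.
    # "vendor/app:1.2.3" is a made-up image name; swap in the real one.
    import subprocess

    IMAGE = "vendor/app:1.2.3"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Plain Docker/LXC-style container on a host you manage.
    run(["docker", "run", "-d", "--name", "vendor-app", IMAGE])

    # 2. Same image in a lightweight VM via Kata Containers
    #    (assumes Kata is installed and registered with Docker as a
    #    runtime named "kata-runtime").
    run(["docker", "run", "-d", "--runtime", "kata-runtime", IMAGE])

    # 3. Same image as a managed Cloud Run service, which runs it in a
    #    microVM for you (the image has to live in a registry Cloud Run
    #    can pull from, e.g. Artifact Registry).
    run(["gcloud", "run", "deploy", "vendor-app",
         "--image", IMAGE, "--region", "us-central1"])

None of those require the vendor to do anything differently; the image is the single deliverable in each case.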
If your company builds - there's no guarantee containers are used (but it's a choice).
If your company buys software - I highly doubt containers are used at all.