Having worked at AWS and then a litany of other companies in Seattle that mostly use AWS or Google Cloud, here's my perspective on some lock-in that you might not have actively on your mind:
* Larger companies generally have contracts with cloud providers to pay lower rates. Sometimes these contracts include obligations to use a technology for a certain period of time to get the reduced rate.
* Any technology that isn't completely lift-and-shift from one cloud provider to another. It used to be that a JAR deployed to a 'real' host (say EC2) that accesses config through environment variables and communicates over HTTP was the gold standard here. Now Docker broadens the possibilities a bit.
* All the cloud providers have annoyingly different queueing/streaming primitives (SQS, Kinesis, Kafka wrappers...), so if you are using those you might find it annoying to switch.
* Even for tried-and-true technologies like compute, MySQL, and K/V stores, cloud providers offer lots of "embrace and extend" features.
* If you are wise, you will have backups of your data in cold storage. Getting these out can be expensive. More generally, getting your data out of one cloud and into another is expensive, depending on your scale.
IMO the only way to truly avoid lock-in is to use bog-standard boring technologies deployed to compute instances, with very few interaction patterns other than TCP/HTTP communication, file storage, and DB access. For all but the largest companies and the most perverse scaling patterns, this will get you where you are going, and is probably cheaper than using all the fancy bells and whistles offered by whichever cloud provider you are using.
One thing that I've seen work: if you absolutely require the ability to deploy on-prem, then using something like OpenShift/Kubernetes as a primitive can work, per the parent.
Even if you rely on streaming like Pub/Sub or Kinesis, one thing teams I've worked on have done is to write interfaces in the application tier that allow for using an on-prem primitive like Kafka, without depending too much on the implementation behind that abstraction.
I've been on a platform team that built these primitives into the application layer, e.g. a blob storage interface to access any blob store, whether it's on-prem NFS, Azure, etc. However, I'm looking at newer projects like dapr [1] and have taken them for a spin in small projects. Such a project seems like a favorable way to add "platform services" to a non-trivial app while still maintaining a pubsub abstraction that allows for swapping out the physical backend.
So I'm agreeing with you, with the caveat that you can rely on platform service interfaces where the service behind the interface could be a cloud vendor product or a boring technology, provided you don't let that abstraction leak and call a very specific Kinesis feature, for example.
Had similar goals. Started by writing Go interfaces for it with Go Micro - https://go-micro.dev then opted for the platform service model as you mentioned with Micro - https://micro.dev
I think whether it's Dapr, Micro or something else, the platform service model with well defined interfaces is the way to go. I don't think a lot of people get this yet, so it's still going to be a few years before it takes off.
That's an exciting project! One thing that would be cool is to build a workflow engine on top of the pubsub + key-value primitives. Too many teams build this internally; it needs to be a platform service instead of hand-rolled over and over.
For instance, there is a project I think is pretty interesting, but it's built on top of Kafka, so you pretty much need to be running Kafka to use it. If I could swap in Redis as the pubsub + KV store, I'd use that project in a heartbeat.
Yea, had similar aspirations. Someone even implemented the "Flow" concept as a Go interface. I felt the first implementation was too complex and we never got back to it, but I definitely agree it's a core primitive that needs to be built: flows, triggers, and actions that can be sequenced into steps with rollbacks.
I prefer to exploit "cloud native" differences, but plan your off-ramps in advance: plan, architect, and write it up, then ensure the engineering of the native approach stays true to that exit plan.
You do not have to write to the lowest common denominator; just have an exit strategy that fits inside your negotiated contract windows or regulatory grace periods for migration.
It's also worth noting that it seems infrequent for companies to move mature products to a different cloud.
My preference is for agnostic tech, but there's a fair chance the flexibility turns out to be YAGNI. If some proprietary tech simplifies your operations, consider whether you'll actually get hurt investing in it.