The two philosophies at megacorp here seem to be "I built it from the ground up to target X service", where X service is usually Amazon's serverless offerings or something similar, and "I built it in Docker containers but I don't know about the cloud".
The former is a conscious decision, and we (Architecture) have a serious, sit-down discussion with them about what it actually means to be fully cloud native for that particular service. This discussion ranges from cost analysis, to things like "is your application actually built correctly to do this" and "you're not going to have access to on-prem resources if you do this", to simply asking them "why".
A lot of the time, when the teams realize they're going to be on the hook for the cost alone, they back out, and a lot of teams try to do it because "we don't understand K8s". Well, it doesn't get much better in Cloud Run either, folks, because you're trading K8s YAML for Terraform or CloudFormation.
Where it has been successful is for teams that own APIs which only get called once a month, or otherwise very low-traffic APIs. I hate to say it boils down to cost, but a lot of the time it really does.
Additionally, we've seen a weird boomerang effect as clouds offer K8s clusters that are priced by the pods' resource requests rather than per worker node (like GKE Autopilot). A lot of teams that straddled the middle of "low traffic, but not low enough to really migrate" have found they're quite happy in GKE Autopilot. They use autoscalers to provide surge protection (see the sketch below), but otherwise run Autopilot with just 1 or 2 pods, which keeps the costs down. That also means we can migrate them to beefier clusters in a heartbeat if they get the Hug of Death or something from HN. ;)
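To make the "surge protection" part concrete: it's typically nothing fancier than a stock HorizontalPodAutoscaler with a small floor. A minimal sketch, with made-up names and illustrative numbers rather than our actual config:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: low-traffic-api          # hypothetical name for illustration
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: low-traffic-api        # the workload being scaled
      minReplicas: 2                 # the "1 or 2 pods" steady state
      maxReplicas: 10                # illustrative ceiling for traffic surges
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70 # scale out when average CPU crosses 70%

Since Autopilot bills on what the pods request, that floor of one or two replicas is effectively the idle bill for the service.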
The second group I mentioned gets railroaded into the K8s clusters we built ourselves, because we can typically get them to use our templates, which provide ingresses and service meshes, so the developers don't have to think about it too much and the DevOps team is comfortable with the technologies. While that means there's a bit of "rubber stamping" and potential waste, it's allowed us to use K8s and the nice features it provides without having to invest too much thought in any individual application.