Funny how things evolve over time. When I was VP/CTO at vCloud Air, the cloud division of VMware (2014-2016), I spearheaded the partnership between VMware and Google/GCP (which of course took a much larger team to bring to life), certain that it was the only viable and meaningful partnership for VMware at the time.
Soon after I left in early 2016, VMware ditched that partnership in favor of... IBM. No comment.
And now it's 2020, IBM is far behind everyone else in the cloud, and VMware and Google are cranking along just fine. This partnership makes a ton of sense for both of them. AWS is eventually going to lose its "monopoly" over cloud (disclaimer: I was at AWS 2008-2014).
It gives me a good feeling. Especially for the very talented engineers that I met on both sides. Sometimes they have to put up with a ton of red tape and politics. At least this will make some of them smile.
I've seen how the VMware and SoftLayer group operated inside IBM. They did not know what they were doing, but were given massive buckets of $ each month to spend. The entire cloud business is a joke.
IBM would spend hundreds of thousands of dollars per quarter buying the keyword "VMware" and send people to a page that did nothing.
One of the most expensive keywords in the company was VPS, which was a broadmatch term. There wasn't even conversion tracking in place and they spent millions per quarter in PPC with terrible results.
I've seen many instances where they spent more than my annual salary on keywords over 1 week with no results to show for it.
This was despite having about 10 people on the IBM side to "manage" stuff (excluding multiple layers of management) and several people on the ad agency side to "execute", with more focused on "strategy" and others focused on "analytics".
A startup I work for spends a lot of $ a month bidding on a common typo which is also the name of a different and unrelated product. We are bidding against their branded ad.
Most teams simply didn't have conversion tracking at all for many, many years, from what I recall... everyone would look at spend and clicks as success KPIs, which made the client-relationship jobs lucrative.
I've seen teams that spend most of their (7 figure per Q) ad budgets on branded keywords (vs unbranded) despite having top organic rankings, because no one knew any better.
The agency makes commissions on ad budget spent, so there was no incentive for efficiency.
The inefficiency was incredibly painful and no one gave a damn on either side... those who knew what they were doing didn't last or were pushed out.
(This is not a unique issue; per a PwC study from last week, 50% of ad spend is taken by middlemen rather than the publisher... the entire ecosystem thrives on inefficiency.)
> This partnership makes a ton of sense for both of them
GCP purchased CloudSimple, which provides managed VMware running on bare metal. This isn't a solution co-developed between VMware and GCP, nor is it a first-party VMware solution. I'm not even sure if this is running on GCE, or if it's just a re-branding of CloudSimple as GCP VMware Engine.
VMware Cloud on AWS was built by VMware to run on AWS EC2 bare-metal instances, and is managed by VMware themselves.
Can you help me understand why this would be useful? What can be achieved with VMware that can't be done more efficiently with GCP (or AWS or Azure) native tools? The list of arcane product names makes me think this is for enterprises that already invested in VMware products and just don't want to own servers anymore.
To an engineer, nothing. These acquisitions are part of a coming war over hybrid/on-prem dominance. Corporate VMware installations usually don't mean just a few instances, but entire buildings. If you think of the mobile phone landscape prior to the iOS/Android duopoly, that's on-prem today.
It might tickle you to learn of Project Pacific, an upcoming rewrite of VMware to run seamlessly on Kubernetes by default, with an upgrade path for existing installations.
I get the idea of GCP playing in the 'VMware <-> cloud' arena, but I do not get Project Pacific.
Is VMware developing it because their installed base shrinks as people move to bare-metal K8s and they simply need to counter that? Or is there some other benefit, such as retiring parts of their codebase, e.g. replacing vSAN with CSI?
Lots of large places are looking at Kubernetes as a way to reduce provider lock-in. Being able to sit down with the CIO and say you have a great migration path for their on-premise setup and it can seamlessly manage cloud workloads is a really nice pitch.
Google theoretically has a similar pitch with Anthos but they’re really not good at sales and GCP has a lot of basic catch-up to do in most areas other than GKE. Say what you will about VMware, they know how to sell effectively and don’t ignore features which aren’t cool CS problems.
The simplified version is that VMware is doing Pacific/etc to stay relevant in a world where virtualization is being commoditized, and where containers will eventually rule most workloads.
Most enterprise customers have a huge VMware investment. It is much easier for these CIOs to move to the cloud with the same look and feel as VMware and a single pane of glass. It saves a ton of time in onboarding and operational overhead.
Basically if your entire infrastructure is VMware based, and you need to do something like fail over your existing on-premise to the cloud, or scale way up past your physical infrastructure in a hurry, it makes GCP one of the providers you could possibly price out using without having to reinvent the wheel.
I'm not sure that AWS/GCP is cheaper. I'm running 20 VMs on my home vCenter/ESXi setup, and it's costing me $60/mo in electricity. If I had the same setup in GCP, it'd be costing me 20 * $56 (20 * n2-standard-2) = ~$1k/mo.
While I agree bare metal is way cheaper than the cloud, they aren't comparable.
You didn't specify what the hardware cost you, nor what it would cost someone else to buy new and how it compares to cloud servers. Add to that lack of redundancy, having to do hardware maintenance yourself, no add-on services unless you run and manage them yourself, no flexibility in scaling up or down, etc.
It is probably perfect for your use case, but not for the use case of the people who choose the cloud.
You're absolutely right, they aren't comparable, and my personal example is misleading.
For clarity's sake, my setup cost ~$10k and includes a 12TB ZFS NAS server, a 10GbE backend, and 2 ESXi hosts totaling 16 cores and 256GB of RAM. It also took many weekends to set up (and labor isn't cheap).
The uplink is mere Comcast, and, at 30Mbps, does not come close to rivaling a cloud offering. Also, I'm unable to scale as fast as the cloud (think weeks instead of minutes).
On the positive side, for those of us who love infrastructure, there's nothing like running your own, very small, cloud.
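For anyone who wants to poke at the numbers in this exchange, here's a rough back-of-the-envelope sketch; the 4-year amortization period is my assumption, the $56/mo n2-standard-2 figure is the one quoted above, and it ignores egress, redundancy, and the value of those weekends.

    # Rough home-lab vs. GCP cost comparison using the figures above.
    # Assumptions: ~$10k hardware amortized over 4 years, $60/mo electricity,
    # and ~$56/mo list price per n2-standard-2 (no sustained-use or committed discounts).

    HARDWARE_COST = 10_000          # NAS + 10GbE + 2 ESXi hosts (one-time)
    AMORTIZATION_MONTHS = 48        # assumed useful life of the gear
    ELECTRICITY_PER_MONTH = 60      # from the comment above
    VM_COUNT = 20
    GCP_VM_PER_MONTH = 56           # assumed n2-standard-2 list price

    home_lab_monthly = HARDWARE_COST / AMORTIZATION_MONTHS + ELECTRICITY_PER_MONTH
    gcp_monthly = VM_COUNT * GCP_VM_PER_MONTH

    print(f"Home lab: ~${home_lab_monthly:,.0f}/mo (hardware amortized + power)")
    print(f"GCP:      ~${gcp_monthly:,.0f}/mo for {VM_COUNT} x n2-standard-2")
    print(f"Delta:    ~${gcp_monthly - home_lab_monthly:,.0f}/mo before labor, egress, etc.")

Even with the hardware amortized in, the raw compute is several hundred dollars a month cheaper at home; the gap is what the cloud charges for elasticity, redundancy, and not having to be your own ops team.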
That's basically what Pivotal Cloud Foundry is right now, isn't it? You can apply Terraform or the like to on-prem VMs and public cloud, or something like that. I have a way better idea for public cloud providers to wean enterprise clients off their data centers: offer to buy their hardware off of them. Rack it in their own data centers for whatever workloads it can handle. Clients get out from under their sunk costs. The cloud provider gets a long-term deal locked in. Everyone is happy.
Ugh. IBM bought SoftLayer and have been slowly destroying it with bureaucracy since then. I had a support agent tell me the other day "all servers become unstable after 90 days." I miss the old support team :(. These days it's a lot of "what the hell did the hypervisor do to my VM this time".
IBM's purchase of SoftLayer has to be one of the biggest contributors to AWS's success.
SoftLayer clearly needed a change of direction / leadership, and I guess we'll never know, but I feel they (via IBM) took a left turn when they should have gone right. Instead of trying to become cloud, they should have doubled down on bare metal. What they _really_ needed was a pricing page that didn't force you to go through sales and reflected the price that you'd eventually get (50% less than what they advertised).
Wouldn’t a low-touch sales process be antithetical to the way IBM operates? They are all about high $, high margin, highly customized systems with big support contracts. I suspect anything self-service would be viewed with suspicion/derision internally
On at least two occasions I was forced to recreate an instance on AWS. They send out an email similar to this:
> We have important news about your account (AWS Account ID: 288053528466). EC2 has detected degradation of the underlying hardware hosting your Amazon EC2 instance (instance-ID: i-0480b1eb4617c84f3) in the us-east-1 region. Due to this degradation, your instance could already be unreachable. After 2016-08-16 01:00 UTC your instance, which has an EBS volume as the root device, will be stopped.
I’ve had that happen a couple of times but it’s definitely uncommon (single digit count for hundreds of instances over a decade). I haven’t seen it in a long enough time that I’m assuming they have a bulletproof live migration feature now.
I was thinking of live migration performed by AWS. Are there situations where running your own VMware could help, beyond what AWS can do themselves? If there's a sudden outage, neither helps, as the instance disappears. If there's a minor issue, either should be possible.
Well, if your instance is important enough, there's vSphere Fault Tolerance [0]:
> You can use vSphere Fault Tolerance (FT) for most mission critical virtual machines. FT provides continuous availability for such a virtual machine by creating and maintaining another VM that is identical and continuously available to replace it in the event of a failover situation.
> The protected virtual machine is called the Primary VM. The duplicate virtual machine, the Secondary VM, is created and runs on another host. The primary VM is continuously replicated to the secondary VM so that the secondary VM can take over at any point, thereby providing Fault Tolerant protection.
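For what it's worth, turning FT on is ultimately just an API call on the VM once the cluster prerequisites (HA, an FT logging network, a supported vCPU count) are met. A minimal, hedged sketch with pyVmomi, where the vCenter address, credentials, and VM name are placeholders and the exact method (CreateSecondaryVM_Task vs. the newer CreateSecondaryVMEx_Task) should be checked against your vSphere version:

    # Hedged sketch: request a Fault Tolerance secondary for a VM via pyVmomi.
    # vcenter.example.com, the credentials, and "important-vm" are placeholders;
    # the cluster must already meet the FT requirements quoted above.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the VM by name with a container view over the whole inventory.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "important-vm")

    # Ask vCenter to create and start the Secondary VM (the "Turn On FT" action).
    task = vm.CreateSecondaryVM_Task()
    print("FT secondary creation task state:", task.info.state)

    Disconnect(si)

The quoted docs cover the rest: once the secondary exists, vSphere keeps it in lockstep and fails over transparently.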
I was under the impression that AWS needs to do mandatory guest power cycles in order to update underlying infrastructure (e.g. to mitigate Meltdown, Spectre, L1TF, MDS, etc.). Is that not the case? Has the instance actually been up since 2016?
I had a Windows Server 2008R2 instance running SQL 2008 for over 10 years from 2008-2018 that I only rebooted myself a handful of times. It had ephemeral disks because that's all there was when it was launched so the hardware never changed. There was an unexpected reboot once in 10 years. In 2008 people told me I was crazy to run SQL let alone Windows in AWS. I never had a complaint.
I eventually switched to RDS to save some money.
I have another 2008R2 server running IIS that was launched at the same time and is still in service on the same hardware.
I don't have inside knowledge of AWS, but I do know OpenStack (the open source cloud infrastructure project), and it's had the ability to live-migrate guest VMs for years, so I'd be shocked if AWS can't move VMs and cycle hosts to swap failing hardware or apply firmware / ring-0 hypervisor updates.
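To make that concrete, evacuating a host before a hardware swap or firmware update looks roughly like this with the OpenStack SDK; the cloud name and host name are placeholders, and the exact keyword arguments are worth checking against your SDK and Nova microversion:

    # Hedged sketch: live-migrate every instance off a compute host with openstacksdk,
    # e.g. before swapping failing hardware or patching the hypervisor.
    # "mycloud" (a clouds.yaml entry) and "compute-07" are placeholder names.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    for server in conn.compute.servers(all_projects=True, host="compute-07"):
        print(f"Live-migrating {server.name} ({server.id}) off compute-07...")
        # host=None lets the scheduler pick a target; block_migration=True avoids
        # requiring shared storage (newer microversions also accept 'auto').
        conn.compute.live_migrate_server(server, host=None, block_migration=True)

The guests keep running during the move; from inside the VM it's at most a brief pause at switchover.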
I met a SoftLayer employee soon after the acquisition at an IBM conference. He complained that previously getting new hardware wasn't a problem. Once IBM took over they had to go through IBM's bureaucratic procurement processes which took much longer. That inevitably led to scaling and stability issues.
I was a VMware VCP5 several years ago and sold/installed many datacenter virtualization solutions and VDI deployments. A solid cloud offering would have been great for the ones that didn't have the dedicated server room and staff to handle the hardware and upkeep, or wanted to be able to scale easier.
I think for similar reasons that backups and regular server maintenance languish: "it's working, I don't need to do anything"... until something goes wrong. Then they're multiple versions out of date, or have no backup, or have half their VMs showing awful performance, and they wonder where things went wrong.
Apathetic engineers are often the problem. Just as often, though, they aren't given the people, tools, or budget.
Non tech companies see tech refresh and upkeep as a pool from which to steal people or dollars for other stuff. Until something breaks in a way that it hurts the business.
I think it’s usually due to those being seen as a cost with low expectations. As long as people can work, they’ll be low-priority and they almost certainly don’t have competitive salaries or career paths to keep top people.
Ditto on the VCP5 - both Datacenter and Cloud. It has been interesting to watch VMware evolve from hypervisor to software-defined and now on to this partnership with Google - really smart move to battle Microsoft.
> AWS is eventually going to lose its "monopoly" over cloud
How? I could see it becoming a duopoly with Azure, but Amazon has a huge lead that I don't see being lost. Customers shackled to AWS infra won't move. Development and egress costs are a huge moat.
It's not about snatching users away from AWS, it's about gaining those first-time cloud users. And in today's world it would be very foolish not to opt for cloud-agnostic offerings and go for hybrid solutions.
I can't find this in my cloud console. The info pages are all pushing the "Contact a sales rep" button. It's not clear whether all the pricing info is included yet.
They list one possible node at $10/hour ($7,200/mo) before contract-length discounts.
I could already import VMDKs to run in GCE. Obviously this is not intended for that, but rather to provide vSphere/vCenter/ESXi in the cloud. Where I've used these in the past (for relatively low-tech IIoT work), I'd often "pull" the VMs off the on-prem server and run 4-12 VMs on my laptop to simulate and configure machines for chemical plants. If I was in the office, I could RDP into the machines running on the on-prem server and enjoy a high-performing desktop environment. Obviously, since I was in the same building as the vSphere/ESXi host, latency was very low.
I've had significant issues with latency in the past trying to use RDP/VNC/etc. from Houston to any of GCP's datacenters, and I wonder if that would affect quality of service for this offering. Will a lot of users be remoting into these VMs to use boxes running Windows/GUI/etc.? Or is this a very different use case?
> Google Cloud VMware Engine is expected to be generally available to customers this quarter in the North Virginia (us-east4) and Los Angeles (us-west2) regions.
> We plan for the service to be available globally in eight additional regions—London, Frankfurt, Tokyo, Sydney, Montréal, São Paulo, Singapore, and Netherlands—in the second half of the calendar year.
With the latest Threadripper, buying the equipment would probably cost about the same as 3 months of service (see the rough break-even sketch below). If you're running more than 3 servers, and most likely you are, you can add five 2TB M.2 SSDs (10TB) per node as a super-fast cache tier, plus cheaper SATA SSDs for regular storage, and cluster the storage using Rook (Ceph). The most expensive thing is RAM.
There are a lot of great colo facilities for this type of hardware. Using secure boot, TPM, LUKS and tamper detection, it sure would be hard to break into your server even if not caged. Most of these facilities have video cams and modern security protocols.
However, if you can migrate your workloads to Kubernetes with KubeVirt and others then I'd advise you avoid VMware like the plague.
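If you want to sanity-check the "roughly three months of service" claim, here's a trivial break-even sketch; the hardware and colo numbers are assumptions for a fully loaded node (lots of RAM plus the NVMe/SATA SSDs above), and the $10/hour figure is the node price mentioned upthread:

    # Break-even estimate: buying and colocating your own node vs. renting a managed one.
    # Assumed: ~$20k for a loaded Threadripper node (RAM is the big line item),
    # ~$300/mo colo fee, vs. the ~$10/hour managed node quoted upthread.

    HARDWARE_COST = 20_000        # assumed one-time purchase per node
    COLO_PER_MONTH = 300          # assumed colo space/power/bandwidth
    MANAGED_NODE_PER_HOUR = 10    # from the pricing discussion above
    HOURS_PER_MONTH = 730

    managed_monthly = MANAGED_NODE_PER_HOUR * HOURS_PER_MONTH          # ~$7,300/mo
    breakeven_months = HARDWARE_COST / (managed_monthly - COLO_PER_MONTH)

    print(f"Managed node: ~${managed_monthly:,.0f}/mo")
    print(f"Break-even after ~{breakeven_months:.1f} months of owning the hardware")

None of that prices in the ops time, of course, which is exactly what the managed offering is selling.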
The public clouds are ridiculously more expensive than on-prem; especially for egress. That’s an open secret; people choose clouds for the convenience.
My Node.js app runs inside a Docker container, which runs inside a Debian Linux OS, which runs as one of the VMs inside a Windows OS installed on a rack-mount server module, housed with other modules in a server cabinet. The "atomic" structure of cloud infra.
I wish in VMware vSphere I could easily spin up and down / offline migrate to cloud providers. I have some temporary heavy workloads that could benefit from this but there is no easy way.
In a Kubernetes context you can do this with Cluster API. I run my mgmt cluster in vSphere, which spins up workload clusters in vSphere and AWS on demand. Works great! https://cluster-api.sigs.k8s.io/user/quick-start.html
VMware's entire strategy at the moment appears to be hybrid/multi-cloud and Kubernetes.
If you can't do that already (and I'm fairly certain you can), you shouldn't have to wait much longer -- as this "partnership" and their others (e.g., with AWS) illustrate.