> But if the strategic logic is to “change everything” thereby “resetting the cloud landscape”, they’re really paying over $34 billion for OpenShift. Which is a vanilla container runtime. That is open source. That IBM already had at least one of (as does everyone else in the industry, including the hyper-scale clouds). And one that isn’t a dominant brand or implementation. IBM could have bought both Docker and Pivotal at a nice premium for a quarter the price of Red Hat and gotten better assets if that is the strategy.
They attempted to buy Moby/Docker? That's pretty sad if your company says, "We're not going to take this buyout, because we'd rather our platform survive than take your money and watch you kill it."
I've been giving this some thought, because, well, who doesn't want to glue together cloud-y things?
The only version I can see is one that runs on generic *nix, i.e. spin up a VM on each cloud (and in-house) and run a stack on top that glues it all together in software. Redundancy of sorts.
...however if that glue actually works I might as well do that with 20 sht-tier providers and let the redundancy gloss over it. No competitive advantage for cloud providers - and each cloud's various competitive advantages are by definition not easily hybrid-able.
Scaling? You're now dealing with multiple clouds' worth of different approaches to scaling. Good luck scaling that hybrid style in a resilient way.
...and all of this is just VMs. Add the 50 other offerings the average cloud has, all with different scaling and quirks, and the hybrid dream is deader than dead.
> Spin up a VM on each cloud (and in-house) and run a stack on top that glues it all together in software. Redundancy of sorts.
The thing is that using a cloud as a glorified hypervisor for home-baked VMs doesn’t work economically. Colo or even running a DC is cheaper. Cloud generally only makes financial sense if you are using the managed services. And of course the instant you do that you sacrifice portability to a greater or lesser extent. This is of course deliberate on the part of the cloud providers.
It makes sense to pick 2 clouds and go all-in on them, using their native and managed services to the maximum. You will need to maintain two parallel skillsets to do this. Completely forget about any layer that promises to abstract it; they are all red herrings, if not outright snake oil, whether that's IBM consultants or Terraform.
I might end up using Azure, of all things, for their hourly-priced InfiniBand clusters. You can get an hour with ~25 TB of RAM for ~$50, and you get something like 5-10 GB per (physical) core.
I don't want to use them, but it looks like there might not be any alternatives for just a few hours on such a cluster.
I was formerly at Docker and this thread of comments is spot on. There are so many headwinds to the "multi-cloud" strategy play. The first is that it's way more expensive. A common theme was: the data science team is using containers on Azure, and we want to do something that ACS makes hard for us, so tell us why Docker Enterprise. Present the Docker Enterprise architecture, workflow and pricing... So, for what you pay $300-500 a month now, we're going to ramp that cost up to a couple thousand a month, because we're going to make you run 24/7/365 VMs to manage the architecture Microsoft has gotten the economy of scale from, and you're now tied to a costly license for Docker Enterprise. Now do this across a few clouds simultaneously.
OpenShift isn't any different. And in fact Docker invested a ton of time into an entire product, built around Terraform, that managed ecosystem deployment to any cloud - something a lot of people wanted but couldn't buy. In fact that was probably a more sellable product than trying to shove Enterprise Engine and Docker Trusted Registry down people's throats.
If you boil it down, the common pattern seems to be: cloud lift-and-shift of legacy VMs from an on-prem DC is more expensive. The lift you get from cloud is PaaS and the SaaS that's included to manage it. But very few are cloud-native ready, and they take the perspective of: we'll lift and shift it today, and then our next project is cloud-native transformation. Yeah... that latter part rarely happens. And AWS, GCP and Azure all profit.
This advice is common (usually it's "pick one cloud"; "pick at least two" is a bit more sensible), but I'm really starting to wonder whether it's marketing propaganda of its own.
In one breath we hear that only three software vendors (Google, Amazon, Microsoft... maybe DO?) have software worth consuming, and that anything from other vendors which "abstracts" them is snake oil... MongoDB Enterprise, Confluent Kafka, Elastic Cloud Enterprise, Pivotal CF, Red Hat OpenShift, HashiCorp Nomad / Terraform / Vault, etc. - all of these provide valuable software that runs on any cloud, and often have a multi-cloud control plane (that's increasingly Kubernetes-based).
So let's never use any of that, and screw the whole software industry in favor of the cloud vendors - as if their stuff isn't just a proprietary veneer of automation around those products, charged by the hour?
In another breath, we are told that the cloud providers charge too much for their VMs. And we think they're not charging too much for their proprietary services?
Firstly, most "managed cloud services" are not "managed" in the traditional sense. They're hosted, no different from Dreamhost or a bazillion other hosted offerings, often with similar tradeoffs. It's a testament to cloud marketing that people believe there is something magical about Amazon RDS for your Postgres instance. It's a nice automated setup of volume-replicated active/passive PostgreSQL.
There are many, many other ways to do this with open source or proprietary software, with varying degrees of automation. Maybe you don't care, and that's fine, but I'm not sure the delta between running in a DC/colo vs. the EC2 costs is worth it for some proprietary software bits from the cloud vendors. It's not magic, it's just software.
Similarly, to say all abstractions are snake oil is fashionable but also hypocritical. Kubernetes is on fire lately because it is an abstraction for your workloads, a universal control plane, and a universal cloud API. Is that snake oil? Serverless (the framework) makes developing on Lambda or other FaaS offerings sane - is it snake oil too? Heroku or Cloud Foundry lets you push your apps and not worry about the plumbing on EC2 or the cloud of your choice (even on a colo/DC!).
But most importantly: You’re not locked into lowest common denominator (what does that even mean?) with any of this - you can use any proprietary cloud service you want...the stuff you don’t care about - the VMs, network and storage - is the stuff abstracted (and usually all the proprietary knobs like Azure advanced networking or Google metadata/DNS are all available).
Where is the problem? Are Elastic Beanstalk or Google App Engine really superior economically and functionality wise?
Terraform is not about the different codebases, it's about glue code in a standard language to assemble all these cloud services. Crossplane.io is trying to do this via Kubernetes CRDs. Would you rather use CloudFormation and JSON, really?? I've seen some monster CF scripts - they're hard to maintain and debug compared to TF.
Let me give you a trivial example: GCP gives you a lot of flexibility with CPU and memory when creating a VM. Let’s say your workload is ideally suited to some weird combo, like 5 cores and 13G or something. On GCP that’s what you provision and that’s what you pay for. AWS and Azure offer fixed sizes, so you have to round up to 8 and 16 (and pay for it). So if you want to be cloud-agnostic there’s one cool but very basic feature you just can’t use.
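To make the rounding-up cost concrete, here's a tiny back-of-the-envelope sketch in Go. The per-vCPU and per-GB hourly rates are made up for illustration; real prices vary by provider and region.

```go
package main

import "fmt"

func main() {
	// Illustrative (made-up) hourly unit prices; real pricing varies by region and cloud.
	const perVCPUHour = 0.033 // $/vCPU-hour
	const perGBHour = 0.0045  // $/GB-hour
	const hoursPerMonth = 730

	monthly := func(vcpus, gb float64) float64 {
		return (vcpus*perVCPUHour + gb*perGBHour) * hoursPerMonth
	}

	custom := monthly(5, 13)  // the shape the workload actually needs
	rounded := monthly(8, 16) // the fixed size you'd have to round up to

	fmt.Printf("custom 5 vCPU / 13 GB: $%.2f/month\n", custom)
	fmt.Printf("fixed  8 vCPU / 16 GB: $%.2f/month\n", rounded)
	fmt.Printf("premium for rounding up: %.0f%%\n", (rounded-custom)/custom*100)
}
```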
Once you start digging into this stuff this keeps coming up: something that's efficient (cheap) in one but not in another means: do I do it the same way and pay more, or do I diverge and accept that I've got 2 configurations for this feature now, and save some money? Multiply this by 1000 special cases and then you find that trying to make one size fit all is a wild goose chase.
All Terraform really offers is doing this in similar syntax for the subset of each cloud’s features that Terraform knows about, and extending Terraform yourself for anything it doesn’t. Yes, I would rather use the native thing for each cloud, even CloudFormation. ARM Templates and DSC are actually quite nice once you get used to them!
The wild goose chase part I see as a major exaggeration.
It's interesting, I use BOSH + Terraform all the time, wherein BOSH exposes GCP's CPU/memory flexibility, the various Azure NIC/LB/availability set options, AWS' different disk types, etc. Those differences can be modularized, so that 95% of your configuration templates are identical across clouds, and the last 5% maps to specifics.
I'm sure it's not all that different from Terraform w/ modules, though my main problem with Terraform is that it doesn't constrain you into a "do the right thing" path, it's too easy to create a mess.
Anyway, IMO these kinds of differences really aren't hard to handle, and it's valid to prioritize cloud-independent configuration if that's what you want/need. It allows the main configuration and installable software to be cloud-independent, dramatically easing testing. We're seeing this drive the flocking to Kubernetes, which enables cloud-independent networking, storage, and compute.
I think differing opinions on this are normal/fine, but I have to wonder why the single-cloud proponents use words like "snake oil" as if to completely discredit a different set of priorities.
Because either a) you are constrained to the lowest common denominators or b) the abstraction is something trivial like syntax and you still need to maintain two codebases (this is the problem with Terraform)
> ...however if that glue actually works I might as well do that with 20 sht-tier providers and let the redundancy gloss over it. No competitive advantage for cloud providers - and each cloud's various competitive advantages are by definition not easily hybrid-able.
YES! I might be the only one that wants this, but this is the future I want[0][1][2]. As technology makes it easier and easier to run a cloud compute/service provider, more competition will improve the offerings.
I don't know what "shit tier providers" you're referring to, but I'm not convinced of the additional value AWS is delivering when running certain services (I'm thinking Postgres) which are already pretty excellent by themselves. In the end you need to hire for AWS expertise (and spend time boning up anyway/reading docs when things go wrong), and for 90% of apps one mid-tier Postgres instance (not even a highly-available one) is all you need until you really get traction and can afford to hire someone to look after it. I think the future is going to see the bifurcation of management (making sure it's up/efficient) and hosting (providing hardware), and plummeting prices for both as more people enter the market.
The super perplexing thing is that cloud providers jumped on the kubernetes bandwagon so quickly and this is what made this commoditization possible. Pessimistically speaking, this means they're either about to butcher the cross-cloud usability of kubernetes (i.e. drift between EKS, AKS, etc.) or kubernetes was a wolf in sheep's clothing from the start.
I've been cautiously watching the Service Catalog[3] feature in Kubernetes because I think it's one of the places where vendors are exerting the most control to see how it goes -- I've brought it up before, but the "Service Catalog" is basically identical to the user Operator (Controller + Custom Resource Definition) pattern, but for some reason it's being treated as different...
> The super perplexing thing is that cloud providers jumped on the kubernetes bandwagon so quickly and this is what made this commoditization possible.
My view remains that this was the purpose of Kubernetes: to scorch the earth ahead of ECS. Google had nothing to lose by doing so.
> I've brought it up before, but the "Service Catalog" is basically identical to the user Operator (Controller + Custom Resource Definition) pattern, but for some reason it's being treated as different...
The Service Catalog is based on the Open Service Broker API, which in turn was extracted from the Cloud Foundry Service Broker API. I hear folks say "it's not declarative", but typically this turns out to mean "it's not a YAML file". The fact is that OSBAPI is literally about telling the platform: hey, I need a service for this app. The brokerage process then works out how that's achieved. One such mechanism is Operators, there are also brokers that can apply Helm charts.
The major difference is really about the CRDs as the point of interface. There's a fork or project somewhere which presents OSBAPI as CRDs (ie, YAML submitted to a remote REST endpoint) instead of as CLI commands (ie, JSON submitted to a remote REST endpoint). I feel like that's the best of both worlds. You keep the internal consistency of service discovery, service binding and service injection, plus the thing that really adds value: checking the YAML into revision control.
Disclosure: I work for Pivotal. We compete with Red Hat / IBM. We also cooperate with them on OSBAPI and a number of other projects.
> My view remains that this was the purpose of Kubernetes: to scorch the earth ahead of ECS. Google had nothing to lose by doing so.
I agree, but avoided saying it to try not to seem like a crackpot. I'm fairly sure I have a comment to the same tune buried somewhere in the back of HN (or maybe somewhere else).
Also in the sounds-like-what-a-crackpot-would-say category of thoughts -- I don't trust the CNCF. Similar to how I don't trust the OIN.
> The Service Catalog is based on the Open Service Broker API, which in turn was extracted from the Cloud Foundry Service Broker API. I hear folks say "it's not declarative", but typically this turns out to mean "it's not a YAML file". The fact is that OSBAPI is literally about telling the platform: hey, I need a service for this app. The brokerage process then works out how that's achieved. One such mechanism is Operators, there are also brokers that can apply Helm charts.
No comment about this; "declarative" is just tacked onto things as marketing fluff these days. OSBAPI is plenty declarative to me; you're telling it what you want instead of how to provide what you want.
> The major difference is really about the CRDs as the point of interface. There's a fork or project somewhere which presents OSBAPI as CRDs (ie, YAML submitted to a remote REST endpoint) instead of as CLI commands (ie, JSON submitted to a remote REST endpoint). I feel like that's the best of both worlds. You keep the internal consistency of service discovery, service binding and service injection, plus the thing that really adds value: checking the YAML into revision control.
Though heavily simplified, do I have the difference (written below) correct?
API version:
0) User wants cloud provided service X
1) User makes service catalog CLI request (a web request is made to the broker)
CRD version:
0) User wants cloud provided service X
1) User creates a CRD
2) Controller sees created CRD and makes web request to the broker
If this is right, it seems like a very small difference.
My understanding is that yes, you have a controller listening for CRDs which essentially stands in for the CLI/API step.
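Roughly, both paths bottom out in the same web request to the broker. Here's a minimal sketch in Go of the CRD-style loop, assuming a local broker URL and a stand-in `desiredInstance` type in place of a real CRD/watch; the provision endpoint shape follows the OSB API's `PUT /v2/service_instances/{id}`, everything else is hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// provision is roughly what both paths boil down to: an HTTP request to the
// broker asking for a service instance (OSB brokers expose a provision
// endpoint shaped like PUT /v2/service_instances/{id}).
func provision(brokerURL, instanceID string, body []byte) error {
	req, err := http.NewRequest(http.MethodPut,
		fmt.Sprintf("%s/v2/service_instances/%s", brokerURL, instanceID),
		bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("broker returned %s", resp.Status)
	}
	return nil
}

// desiredInstance stands in for a CRD the user created ("I want service X").
type desiredInstance struct {
	ID, ServiceID, PlanID string
	Provisioned           bool
}

// reconcile is the CRD path: a controller watches for new resources and makes
// the same web request to the broker on the user's behalf.
func reconcile(brokerURL string, desired []*desiredInstance) {
	for _, inst := range desired {
		if inst.Provisioned {
			continue
		}
		body := []byte(fmt.Sprintf(`{"service_id":%q,"plan_id":%q}`, inst.ServiceID, inst.PlanID))
		if err := provision(brokerURL, inst.ID, body); err != nil {
			fmt.Println("will retry:", err)
			continue
		}
		inst.Provisioned = true
	}
}

func main() {
	// Hypothetical broker endpoint; in the CLI/API version the user calls
	// provision directly, in the CRD version the loop below does it.
	broker := "http://localhost:8080"
	queue := []*desiredInstance{{ID: "db-1", ServiceID: "postgres", PlanID: "small"}}
	for i := 0; i < 3; i++ {
		reconcile(broker, queue)
		time.Sleep(time.Second)
	}
}
```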
To be fair, OSBAPI is a bigger upfront effort for a service author than fiddling with kubebuilder and sorta-kinda doing some basic service-y stuff yourself. Plus CRDs are the cool thing right now and I expect they will continue to build in momentum.
Where it will come back to bite everyone is that every service will have its own Operator and its own CRDs and these will often be different in shape, behaviour and conventions than other Operators and CRDs. It'll be like the bad old days of every piece of software shipping with its own pile of bash scripts for management and twenty different process managers. You know: the sort of thing Red Hat helped to bring an end to.
After we reach the point where you're spinning up whole clusters because the PostgreSQL operator doesn't play nice with the Kafka operator on Tuesdays when the breeze is too strong, someone wise and brave and famous on twitter will yell "this is insanity! Surely we can rationalise this into a single API, a single mechanism!"
And then OSBAPI will be reinvented (but ... declarative!) while I stand at a safe distance, teeth audibly grinding down to bone.
> To be fair, OSBAPI is a bigger upfront effort for a service author than fiddling with kubebuilder and sorta-kinda doing some basic service-y stuff yourself. Plus CRDs are the cool thing right now and I expect they will continue to build in momentum.
kubebuilder and the whole "ecosystem" for building controllers is unnecessarily hard to use and ideally shouldn't have been this way in the first place. Writing an operator is more painful than it should be, and I think many of the engineering decisions early on are what made it this way.
IMO CRDs are actually the defining feature of k8s but no one realized it until later. They set out to build a cathedral when they should have been building a more orderly bazaar.
> Where it will come back to bite everyone is that every service will have its own Operator and its own CRDs and these will often be different in shape, behaviour and conventions than other Operators and CRDs. It'll be like the bad old days of every piece of software shipping with its own pile of bash scripts for management and twenty different process managers. You know: the sort of thing Red Hat helped to bring an end to.
Right, but maybe this time people will solve that problem the right way -- by standardizing on the representation (the CRD, in this case). I mean this in the general sense -- the case I think of is trying to capture all the different ways you can containerize something as they evolved over the years:
- LXC exists (let's call this an `ubuntu.com/Container` entity)
- Docker is introduced, people go wild (let's call this a `docker.com/Container` entity)
- OCI container spec gets written (Let's call this an `opencontainers.org/Container` entity)
- People want to just be able to make a "Container" that runs on whatever operator is available ==> They make a `foo.com/Container` that represents the lowest common denominator
This kind of progression is the only way you can keep your rapid iteration but have consistent behavior -- foo.com takes the bullet to provide and maintain consistent behavior while the other container types can iterate as fast as they desire.
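A minimal sketch of that idea in Go; all names here (`Container`, `Runtime`, the docker/lxc adapters) are hypothetical, not any real API. The common representation stays tiny and stable, and each runtime adapts it to its own richer, faster-moving type.

```go
package main

import "fmt"

// Container is the hypothetical lowest-common-denominator representation
// (the "foo.com/Container" in the progression above): only the fields every
// runtime can honour.
type Container struct {
	Name  string
	Image string
	Cmd   []string
}

// Runtime is whatever concrete implementation is available on the cluster.
// Each one maps the common shape onto its own richer API.
type Runtime interface {
	Run(c Container) error
}

type dockerRuntime struct{}

func (dockerRuntime) Run(c Container) error {
	// In reality this would translate to a docker.com/Container with all of
	// Docker's extra knobs; here we just pretend.
	fmt.Printf("docker: running %s from %s\n", c.Name, c.Image)
	return nil
}

type lxcRuntime struct{}

func (lxcRuntime) Run(c Container) error {
	fmt.Printf("lxc: running %s from %s\n", c.Name, c.Image)
	return nil
}

func main() {
	c := Container{Name: "web", Image: "nginx:1.25", Cmd: []string{"nginx"}}
	// The caller only ever speaks the common representation; whichever
	// runtime happens to be installed takes the bullet of staying consistent.
	var rt Runtime = dockerRuntime{}
	_ = rt.Run(c)
	rt = lxcRuntime{}
	_ = rt.Run(c)
}
```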
> After we reach the point where you're spinning up whole clusters because the PostgreSQL operator doesn't play nice with the Kafka operator on Tuesdays when the breeze is too strong, someone wise and brave and famous on twitter will yell "this is insanity! Surely we can rationalise this into a single API, a single mechanism!"
This, I think, is a symptom of an orchestrator/operators that aren't robust enough -- operators should not be able to take out other operators, except through very well defined integration points. Hard to write the interface in practice, but I think we can do better than what kubernetes currently is.
I've been sitting on the concept for a kubernetes competitor that (I think) is simpler but haven't gotten around to writing it. Here are the broad strokes:
- The only pattern is provider/resource (essentially what k8s knows as CRDs and operators)
- Complexity increases only by composition (i.e. you want to write a `KafkaProvider`? looks like you'll need a `ContainerProvider`, `NetworkProvider`, etc)
- No GRPC (good ol' HTTP 1.1, upgradable to 2/3 in the standard web-compatible ways, with "Content-Type" if you really feel like you need to squash your request/responses, and SSE if you need to stream stuff)
- Single binary deployment (as in get your VM, maybe set some kernel flags, put a single binary on it, run that binary, and your cluster is up).
I've got pages of notes on what this could be but at this point it's vaporware -- Who knows if I'll ever get to work on this idea but I sure wish someone would. I don't think there are any companies willing (essentially, dumb enough) to try to compete with Kubernetes so I don't think anyone is even considering trying to build a new but refined k8s.
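For flavor, a rough sketch of the provider/resource idea in Go; every name here is made up and has nothing to do with any real system. One interface, complexity only by composition, plain HTTP 1.1 + JSON, one binary.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Provider is the single pattern: something that can reconcile a named
// resource into existence (roughly what k8s splits into CRDs + operators).
type Provider interface {
	Kind() string
	Reconcile(spec map[string]any) (status string, err error)
}

type ContainerProvider struct{}

func (ContainerProvider) Kind() string { return "Container" }
func (ContainerProvider) Reconcile(spec map[string]any) (string, error) {
	return fmt.Sprintf("container %v running", spec["image"]), nil
}

// KafkaProvider gains complexity only by composing lower-level providers.
type KafkaProvider struct{ Containers ContainerProvider }

func (KafkaProvider) Kind() string { return "Kafka" }
func (k KafkaProvider) Reconcile(spec map[string]any) (string, error) {
	if _, err := k.Containers.Reconcile(map[string]any{"image": "kafka:3"}); err != nil {
		return "", err
	}
	return "kafka cluster ready", nil
}

func main() {
	providers := map[string]Provider{}
	for _, p := range []Provider{ContainerProvider{}, KafkaProvider{}} {
		providers[p.Kind()] = p
	}

	// Plain HTTP/1.1, JSON in and out: POST /resources/{kind} with a spec.
	http.HandleFunc("/resources/", func(w http.ResponseWriter, r *http.Request) {
		kind := r.URL.Path[len("/resources/"):]
		p, ok := providers[kind]
		if !ok {
			http.NotFound(w, r)
			return
		}
		var spec map[string]any
		_ = json.NewDecoder(r.Body).Decode(&spec)
		status, err := p.Reconcile(spec)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		json.NewEncoder(w).Encode(map[string]string{"status": status})
	})
	http.ListenAndServe(":8080", nil) // the whole "cluster" is one binary
}
```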
Competing head-on with Kubernetes is very difficult in several ways.
I think that your ideas have promise, but I caution that container schedulers are a gigantic suckhole of tedious details. Pivotal and IBM wrote Diego, and it's slightly older than Kubernetes. It's battle-tested, but holy shit, the zany one-in-a-million evil fluke bugs of doom happen every day, simply because there are enough containers being run worldwide to hit them.
Roughly, GRPC ~= Schema + Binpacked messages + extra semantics for HTTP2
The "performance" bit is bandied about a lot, but I'm convinced that you can get far enough with `Content-Type` and bin-packed messages when you need them. You can request/receive binary with HTTP 1.1, that's how images and what not get to you -- if what I'm imagining is even relatively easy to implement, HTTP 1.1 would be strictly better than GRPC because of the flexibility (at the cost of sending a few more headers). Once HTTP 2/3 settle and see more adoption, GRPC's performance benefits would be reduced even further.
I'm not convinced I need to buy into all of GRPC to get efficiency from bin-packed messages across the wire, or bidirectional communication.
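For example, here's a minimal sketch in Go of binary-packed messages over plain HTTP 1.1. The `application/x-binpacked` content type is made up for illustration, and the "schema" is just the same struct on both ends.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// A tiny fixed-layout message: the "schema" is just this struct, and the
// wire format is whatever encoding/binary writes for it.
type heartbeat struct {
	NodeID uint32
	CPU    float32
	MemMB  uint32
}

func main() {
	// Server side: reads the binary body back into the same struct.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("Content-Type") != "application/x-binpacked" {
			http.Error(w, "unexpected content type", http.StatusUnsupportedMediaType)
			return
		}
		var hb heartbeat
		if err := binary.Read(r.Body, binary.BigEndian, &hb); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		fmt.Printf("server got: %+v\n", hb)
		w.WriteHeader(http.StatusNoContent)
	}))
	defer srv.Close()

	// Client side: pack the struct and POST it over ordinary HTTP/1.1.
	var buf bytes.Buffer
	_ = binary.Write(&buf, binary.BigEndian, heartbeat{NodeID: 7, CPU: 0.42, MemMB: 2048})
	resp, err := http.Post(srv.URL+"/heartbeat", "application/x-binpacked", &buf)
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", resp.Status)
}
```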
If I was picking a network setup from scratch I'd be going for RSocket, myself. It has nice backpressure and resumption mechanisms in the app layer protocol.
> The super perplexing thing is that cloud providers jumped on the kubernetes bandwagon so quickly and this is what made this commoditization possible. Pessimistically speaking, this means they're either about to butcher the cross-cloud usability of kubernetes (i.e. drift between EKS, AKS, etc.) or kubernetes was a wolf in sheep's clothing from the start.
Is it perplexing? Didn't Google do the same thing with Chrome and Android? Understandably those aren't enterprise systems, but they successfully entered a market by open sourcing something that became wildly popular. In the case of Android and k8s, user cost is also reduced. Once it was clear that k8s won over mesos and docker's solution, AWS and MSFT didn't have much of a choice but to adopt it.
Yes, but it's weird that all the providers did this. I think in that analogy, what's happening is as if Apple (the competitor to beat at the time) had also started offering Android phones as soon as Android started to get traction.
It makes sense for Microsoft/IBM/Oracle to jump on the k8s bandwagon because they're basically the worst/newest entrants, but why AWS? They already had CloudFormation, along with lots of energy already dumped into SDKs for various tools (ECS, Beanstalk, etc.).
There are a few other technologies that could have done this before k8s really took off, like pre-k8s openstack -- why didn't that get adopted?
My guess is they felt they didn't have a choice. Having worked at a company with a very large AWS bill in the past, they actually listen to what their customers want. I also think it's clear that most cloud-going engineers and decision makers wanted a solution like k8s and "native" cloud support for it.
If they had chosen to use their own proprietary platform, they'd risk Google and MSFT peeling customers away who want to work on a well designed, open and portable orchestration platform. That might start slow, but Amazon didn't get this big by allowing the competition to take customers without throwing their own punches. They could have made their own _open source_ platform to directly compete against k8s, but by the time it was clear that demand was huge (I'm thinking early 2016) k8s already had a ton of momentum. Again, they probably saw that and figured this was the best move with a high probability of success, even if it meant less lock-in.
They also have a whole slew of other products that serve almost exclusively as lock-in mechanisms. One truly portable product isn't going to be a huge deal.
EDIT: Distinguish proprietary and open source options in the second paragraph.
> If they had chosen to use their own proprietary platform
They have chosen that, but for k8s rather than instead of it; they've announced that they are working on a fully managed system for k8s (Fargate for EKS, parallel to their existing ECS Fargate offering).
> It makes sense for Microsoft/IBM/Oracle to jump on the k8s bandwagon because they're basically the worst/newest entrants, but why AWS? They already had CloudFormation, along with lots of energy already dumped into SDKs for various tools (ECS, Beanstalk, etc.).
I think it makes sense for AWS to jump on the bandwagon because they understand that Google is trying to commodify its complements [1]. The thing that AWS is betting on, I think, is not only surviving but thriving in a commodified environment [2]. The bet that AWS is making is that just like Amazon's retail business, commodification will hurt their competitors more than it hurts them, which will then allow them to make the most of the newly cleared competitive environment once the dust settles. Google's play, on the other hand, is to use the fact that Google is massively profitable elsewhere to allow Google cloud to run at negative profitability for a while in order to try to take a chunk out of AWS. The big loser I see from Kubernetes is Azure. Unlike Google, Microsoft does not have massive and growing profits from elsewhere. Enterprise and consumer Windows sales, while profitable, are on a declining profitability trajectory, according to Microsoft's own estimates. At the same time, Azure, in my estimation, doesn't have the ability to survive extremely low-margin environments in the same way that AWS does.
AWS uses proprietary simplifying management layers on top of common technology as a lock-in mechanism, and they have announced their product in this direction (Fargate for EKS) for k8s, though it hasn't been deployed yet.
AWS doesn't act like they can win without obvious lock-in; they act like they can win by extending open technologies with proprietary management layers that customers will depend on and be locked in by, which they do in pretty much every area.
>I don't know what "shit tier providers" you're referring to
The VM providers where you purchase a year's worth of VM time but aren't entirely convinced they'll still exist as a business in 12 months. There is a whole ecosystem of sketchy oversold VMs that you can't push to 100% 24/7 but are dirt cheap.
Not useful for business in itself, but 20 glued together...maybe.
In a truly competitive market, cloud vendors would be trying to underbid each other for people's workloads. Show me the code, show me the data, and let the bidding wars begin.
I keep hearing people say some variation of this without understanding the irony of the statement. A perfectly competitive market means, among other things, that there is zero barrier to market entry. That means you can enter the market without large capital spending. Given the amount of capital plowed into these gigantic platforms, any statement starting with "if tech were truly competitive..." is purely hypothetical and probably always will be.
I think you might be taking it too literally -- I don't think they mean "perfectly competitive" in the economics sense. The commenter actually said "truly competitive", which I take to mean "in a market that was truly competitive".
That said, somewhat orthogonal to your point -- the amount of capital applied to create these gigantic platforms is not indicative of what people can and do create with internalized cost -- the F/OSS movement has proven that. People have created immensely valuable software and internalized/shared the burden of creating it. I also think that it is entities with large capital that strive for (and almost always end up creating) gigantic platforms -- it justifies their heft, consulting teams, etc. Motivated single contributors/small groups never set out to build giant platforms; they usually build things that do one thing well (see Unix) and compose (see Unix) to do greater things.
More towards your point directly, you're right, a lot of things would stop making sense in a perfectly competitive market in the theoretical economics sense. Outside the fact that the model is wrong/incomplete (as all non-perfect knowledge models have to be), I do think this would lead to a certain number of competitors, but likely more than 0. My instinct is that the number of firms selling the product would be equal to the aggregated demand / costs to run a business selling the product -- and that ratio is probably greater than 1. Purely a spherical cow scenario but at least that's where I think it would go.
I would argue that the only barrier right now is information asymmetry. With the right knowledge I should be able to compete with the big boys by just moving compute around to the cheapest provider, getting information using nothing more than a script for scraping pricing pages, and being able to run code on distros derived from either Debian or RHEL. The thing is that I don't have the information the insiders do about the billions of other workloads that are being run. They should be able to use that information to massively reduce costs (i.e. under-utilization of resources) and, more importantly, to _undercut the prices of their competitors_, but they are not. Where I agree with you is that if I tried to do such a thing and they found out, they would just drop their prices and force me out. They can do this because, relatively speaking, they have infinite capital compared to an individual.
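As a toy sketch of that "chase the cheapest provider" idea in Go; the provider names and prices are entirely made up, standing in for whatever a pricing-page scraper would hand back.

```go
package main

import (
	"fmt"
	"sort"
)

// quote is what a pricing-page scraper might return: a provider and its
// hourly price for a roughly comparable VM shape (say 4 vCPU / 16 GB).
type quote struct {
	Provider string
	PerHour  float64
}

func main() {
	// Made-up numbers standing in for scraped pricing pages.
	quotes := []quote{
		{"aws", 0.192},
		{"gcp", 0.176},
		{"azure", 0.184},
		{"sketchy-vps-co", 0.041},
	}
	hoursNeeded := 24.0 * 30

	// Cheapest first.
	sort.Slice(quotes, func(i, j int) bool { return quotes[i].PerHour < quotes[j].PerHour })
	for _, q := range quotes {
		fmt.Printf("%-15s $%.2f/month\n", q.Provider, q.PerHour*hoursNeeded)
	}
	fmt.Println("cheapest today:", quotes[0].Provider)
}
```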
This is total bullshit. Pivotal’s stock is $10 now, how the hell does your “back channel source” claim they have more paid customers than Red Hat? At Summit Red Hat had 1000 paid customer references. This guy must be short IBM to put out this kind of crap.
And Red Hat’s stock tanked due to multiple bad quarters as well. What’s your point? It’s hard to be a mid-sized ISV these days unless you also own the hardware or have a massive franchise. Red Hat’s franchise was fading and OpenShift wasn’t growing quick enough to offset it. Meanwhile IBM needed a major strategic shakeup.
Red Hat didn't report OpenShift revenue; it was conflated with JBoss, Ansible, and all the other middleware products.
I have no doubt there are more paying OpenShift customers in quantity, but I wonder about revenues - Cloud Foundry always was larger than OpenShift, and if OpenShift surpassed it, that would be indeed news. Cloud Foundry is easily a $500m software business at Pivotal alone, not counting services.
It's all peanuts in the grand scheme of the industry so far - which is the point of this post. What does $34 billion actually buy IBM? A bunch of $150k-500k annual revenue customers? Chairs on a bunch of Kubernetes SIGs? VMware got that with Heptio for $500m. Mostly it's the K8s and Linux brain trust, and maybe Whitehurst as a new CEO-in-waiting, but that's a hell of a price for an acquihire.
Small correction: it's ALWAYS been hard to be a mid-sized software company. We were constantly talking about the 5 or so companies that reached $5B with software only. They are rare birds indeed. Red Hat didn't make it without getting bought... (full disclosure, I work at Red Hat)
Most large companies don’t trust IBM anymore. RedHat has joined them in turning license audits into a profit center. Defending unfounded audits is a large cost of business with both vendors. Hard to imagine many folks trusting them on journeys with new technology.
> RedHat has joined them in turning license audits into a profit center.
If a company is using licenses it hasn't paid for, and so isn't entitled to, why is the vendor the bad guy for catching them out?
Maybe I'm the odd one out here as an individual in paying for the movies I watch, and the music I listen to; but I would expect a business to pay for the software it's using, irrespective of your stance on "big media".
Oracle makes their licensing model intentionally impossible to be compliant with. It's not just "you run x number of instances, you owe us y dollars".
It's "you enabled x feature on your database times y users oh and use this handy CPU core count chart to calculate how many cores you're using. Oh and you're running your database in a virtual machine with a clustered hypervisor so you owe us for every cpu core in your cluster".
Then they tell you how much you owe them, but "it will all go away if you migrate some of your stuff over to our 'cloud'", and the process starts all over again in 2 years, or less.
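A toy illustration in Go of how the clustered-hypervisor rule blows the number up; the core factor and per-processor price here are illustrative only, not Oracle's actual price list.

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// All numbers illustrative; real Oracle price lists and core factors differ.
	const coreFactor = 0.5              // per-core multiplier from the "core factor table"
	const pricePerProcLicense = 47500.0 // list price per processor license, illustrative

	coresPerHost := 16
	hostsRunningDB := 1
	hostsInCluster := 10 // clustered hypervisor: the whole cluster may be counted

	licenses := func(hosts int) float64 {
		return math.Ceil(float64(hosts*coresPerHost) * coreFactor)
	}

	fmt.Printf("what you think you owe (1 host):     %.0f licenses = $%.0f\n",
		licenses(hostsRunningDB), licenses(hostsRunningDB)*pricePerProcLicense)
	fmt.Printf("what the audit says (whole cluster): %.0f licenses = $%.0f\n",
		licenses(hostsInCluster), licenses(hostsInCluster)*pricePerProcLicense)
}
```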
The biggest tactic I've seen is their inability to be consistent. In 2 years, I spoke to 3 Oracle reps and received vastly different quotes for the same hardware, features, processors/cores, etc.
Definitely! We actually dropped one vendor because it was all ex-Oracle salespeople, and their negotiation tactics were outrageous. They tried to hold us over a barrel and instead we made a major platform change in six weeks just so we could tell them to f-off.
Microsoft, with SQL Server. But when you deal with their auditors you just settle your bill and you're done. They don't turn it into an extortionate sales pitch.
It hasn’t been that bad for me. I give them an updated user count and tell them what else I’m using once per year and they give me a bill. They’ve never pushed back on anything. We start with a conference call, I send over a spreadsheet and that’s it. They have software I can run on my network, but they have always let me give them the numbers from my asset tracking system. Frankly, they’re one of the easiest software vendors I deal with.
"User count" sounds so simple when you put it that way. That spreadsheet isn't trivial to build, and the situation on what's in it may be foreign to some readers here (it was to me).
A Microsoft shop needs licenses for each laptop/desktop running Windows, but in an office using Windows Server to operate its LAN and the requisite services - DNS, DHCP, SMB file sharing, VPN, email, etc. - basically any device that touches the Windows Server machine needs a Client Access License (CAL), which is available in user-based and device-based flavors.
Let's say the company operates a website and has developers. The development/QA environment requires an (expensive) MSDN account (or whatever it's called now) per developer. In production, unlimited anonymous/unauthenticated users are allowed to hit IIS (the web server). Authenticated access by employees to IIS needs a user CAL; authenticated customer access requires an External Connector (EC) license. But don't worry, the backing MS SQL Server database for the website also needs to be licensed, with per-cpu-core-per-machine licensing available. Except everything's a VM these days, so the servers sit on top of a VM host (Microsoft Hyper-V), so there's some additional licensing intricacy there to deal with.
On top of that, there's the Services Provider License Agreement (SPLA) licensing model available for ISVs, but OEM licenses cannot coexist with SPLA licensing on the same system (VMs + host).
Just to make it more fun, different Microsoft reps will have different answers on how some of the more subtle intricacies even apply!
I don’t know. I don’t think it’s that bad. I don’t put together the spreadsheet from scratch. Been doing it a long time though. We start with what I had last year. I just have to fill in my numbers for each license. Then they come along and tell me I need an external connector because I’m doing this or that. I groan a little bit and pay.
They’re pretty easy because you only have to do it once a year. It drives me nuts when a vendor wants me to manage individual licenses as people are coming on board. I end up having to keep extras on hand. At least let me reconcile quarterly or something. It’s even worse when each seat has its own key.
Microsoft makes the license management and reconciliation so easy. The only negative about their licensing is they double dip with the desktop OS and CAL stuff.
We've been on EA for years and the amount of complexity and shifting rules year by year is absurd. It is nearly impossible to stay in compliance. Even the companies who have "owned" our EA (partner responsible for managing it) are wrong frequently about licensing rules, later contradicted by Microsoft.
If you have a handful of licenses on a Select agreement, or O365 (I don't know, we don't use it), maybe it is simple. But a large enterprise customer? It's a fucking nightmare.
We are on an EA. Annual spend is in the 200-250 range. We have pretty tight asset management so it’s not difficult to get precise numbers. We have grown substantially over the past 7 or 8 years and our license count has gone up accordingly so I’m sure that helps too. Maybe we ran their software once or twice to confirm counts. It’s kind of a non-event. I’ve never experienced any kind of full blown audit where they challenge our numbers and go looking for hidden software. We keep track of what we use, pay for it during true up and renewal, and that’s about it.
>I’ve never experienced any kind of full blown audit
I have been through SAM audits. It is a huge pain in the ass. You will spend hours arguing with them over obscure licensing details that equate to tens of thousands of dollars in licensing costs.
For example, in one audit they charged us for Visio licenses because we paid for Pro versions of licenses but the helpdesk had accidentally installed Standard.
Oracle products are outrageously expensive, but I wouldn't call their licensing complicated. Microsoft's model (with CALs) is much more opaque and they refuse to clarify it. I can't comment on IBM licensing though, unfortunately I know nothing about it.
I don't know about current IBM licensing, but for DB2 UDB server on Windows circa 2001, it was something like $150k per CPU per year (pre-multicore, only SMP at the time).
Being caught out using software you did not correctly license is not the problem. That would be fair enough. It is the burden of proof and the time it consumes when you have done nothing wrong.
It would be like the police turning up to your house and demanding you have a receipt for every item in your house. Any item you do not have a receipt for is assumed to be stolen and you have to pay for it. The burden of proof is placed on you to prove you did not steal it. Normally the burden of proof is on the police to prove you have stolen, suddenly it has been turned upside down.
Can you prove you have purchased every copy of every software instance on every computer in your organization? Maybe you can because you have excellent record keeping but most are not so efficient. Maybe the invoice cannot be found because it was not forwarded to the right person. Or a paper invoice has been filed incorrectly and nobody can find it. You KNOW you paid for it but cannot prove it. Sorry, but you are guilty and have to pay $10,000 for that server license again. Try explaining that to your boss.
> It would be like the police turning up to your house and demanding you have a receipt for every item in your house. Any item you do not have a receipt for is assumed to be stolen and you have to pay for it. The burden of proof is placed on you to prove you did not steal it. Normally the burden of proof is on the police to prove you have stolen, suddenly it has been turned upside down.
So, under what legal authority can Oracle or Microsoft or Red Hat or IBM _force_ you to submit to an audit?
It's in the contract you sign with them to get their software in the first place.
(Of course, if you're smart you don't. I worked for a place that had a sales pitch from Oracle, wanted to use their product, but cut off all contact once our lawyers got a look at the contract they were proposing)
Your police analogy only makes sense if you shop at a warehouse club. You sign a contract (subscription) that agrees to let you take whatever you want (content) from the warehouse store (vendor), but at the end of the year (subscription term), you agree to let the warehouse club work out what you took (true-up) from the store by looking in your garbage (logs, DB, whatever you use to track usage).
If you then are so disorganized at your job that you empty the trash, throw out the receipts, then sit dumbfounded as your contractual obligations come to roost, maybe that conversation with your boss should probably be uncomfortable.
There is a huge gulf between wanting your customers to pay what they owe you and this sort of "audit".
I make a serious point of doing what I can to steer my employers away from any outfit that believes adversarial relations with their own customers are ideal.
Screw that; I want to compete with my real competitors, not waste money on lawyers fighting my own supply chain.
So yeah, I want to see if they go full Larry Ellison here, but I'll likely no longer recommend anything Red Hat and will look at moving away from the things we do use.
I turned down a job offer with such an adversarial company this year, only interviewed so my recruiter would keep sending interviews.
I knew the company by reputation: they had a habit of suing customers for patent violations. The commute would have been easier, though a bit longer (train) than the job I took (driving 15 miles opposite traffic).
The job I took pays $20k less a year, but it's worth it when factoring in less unpaid overtime, a shorter commute and reduced stress. My wife wanted me to take the higher paying job, but after explaining that, per hour, I'd be making less and only be home for 1 waking hour during the week, she agreed. I also have better health insurance at the lower paying job.
It's not about counting licenses. It's the hassle of proving you're not in breach of complex contracts signed over decades, when the vendors change product names, definitions and terms every few years.
It’s gotten so complicated that these vendors hire specialized firms to go after their customers, who then have to buy software to prove they’re not stealing.
This activity fills IBM’s coffers while distracting their customers.
Oracle made a business of squeezing lots of unreasonable corners here, so folks have soured a bit on what would otherwise be a reasonable process. Best would be some kind of neutral third-party CPA firm or something doing a balanced audit against standards, without the screw-you upsell to avoid total destruction on the license fee side.
Well, I believe some of the licensing was at odds with the realities of 21st-century business practices. SAP had several lawsuits against some of its biggest clients, and as a result changed its pricing methodology. In that particular case, I believe SAP was trying to count every automated process or algorithm that needed access to the data for reporting and analytics as a user, versus now, where there's a cost to create a new entry in an ERP system but zero added cost to read records. Some of the hatred that has developed around licensing seems to be because the vendor is working directly against the interests of its customers.
The best explanation I read was that IBM has bet the farm on Linux as a major dependency, Red Hat was a major contributor with successes IBM couldn't achieve, and IBM basically internalized a dependency on Red Hat to be in control, with maybe other benefits down the line. Now the two biggest contributors to Linux are the same company. I could see them doing a huge deal just for that.
I don't disagree that they're a major contributor to Linux itself, however RHEL usage is plummeting over the last 5 years as stated in the article. Being a contributor and owning RHEL is really no advantage at this point.
Red Hat's profits were growing steadily, 15-20% in each of the last four years. I don't see confirmation of Red Hat usage declining; it looks more like a promising and growing company.
Are you saying that from your perspective or what you think IBM's is? It's the latter that determines their actions. I'm guessing at it for sure. I just don't think you see it their way given difference in your statement and what they paid.
“As part of the agreement announced Tuesday, AT&T will use Red Hat’s open-source platform to manage workloads and applications and “better serve” enterprise customers.”
One debt-laden dinosaur that keeps throwing billions of dollars at the wall to see what sticks striking a deal with another dinosaur doing the same thing.
Wall Street has little confidence in both companies' growth stories, considering their P/Es.
AT&T does not see it differently, the deal with IBM is a cover for outsourcing thousands of employees without drawing attention to it.
Internally, AT&T has been using RHEL as the default OS for almost every system build, but has been drifting towards Ubuntu; this may change back to RHEL.
They've built most of AIC on top of Mirantis, and they enhanced Helm with Airship and open sourced it for k8s management, so I don't see them jumping on OpenShift anytime soon either.
The deal with MS is where they'll be shifting most of the actual workloads that go to the cloud; IBM will be running the stuff that's left behind in legacy AT&T data centers. IBM will not be touching any of the actual network/SD-WAN stuff.
Well, the Microsoft deal includes M365, so that's Windows, Exchange Online, SharePoint Online, and such. There's probably Azure credits to move some employee-centered services into the cloud and more easily interface with AD/AAD.
On the IBM side, I can imagine they focused on pushing more of the back end systems into IBM's cloud. Possibly some of the monitoring and management systems for AT&T's hardware, and to make it easier to deploy systems for enterprise customers that don't require dedicated hardware.
Edit: this is obviously speculation. But based on the news articles I've read, that's what makes the most sense to me.
I'm saying the news articles are focusing on the wrong stuff compared to what we're actually doing internally. The news articles are PR pieces that don't outline the actual strategy in play.
M365 was already in place, that's not new at all every employee already had access to those capabilities.
IBM is basically taking over the internal cloud stuff that's not AIC - e.g. vmware and all of the employees that go with it and all the bare metal system support too.
It seems to me technology goes in cycles, and while cloud has had a meteoric rise to dominance, if the pendulum swings back to self-hosted - IBM is now poised to own a substantial part of big enterprise data center technology.
I do think at $34B it's a huge gamble, but as the author said, what other choice did they have? They aren't exactly centers of innovation any more (either of them... and yes, some innovation comes out of Red Hat, but let's be honest, it's been tapering off for years now).
While there certainly could be a shift back, it seems like it will likely occur because of edge computing. The cloud is still relevant here, but the question also becomes what a company could do if it had distributed sensors, etc. everywhere.
I'm not sure that this helps Red Hat or IBM though, as it really breaks how they currently license software. AWS also already has software infrastructure offerings here, so they could already be positioned to benefit if the trend changes.
All AWS, Azure, and Google would have to do to respond is offer cut-rate VMware hosting.
There isn't a ton of moat in selling the lowest common denominator. I don't know that IBM has much lead on the other side either, from Rackspace, for example.
I'm not sure if that would be an actual answer to an nth wave of on-prem workloads.
It's not about owning a platform to run VMs, it's about owning the software stack that enables you to run and manage bare metal, VMs, containers, and high-level services on your customers' own hardware in their own DC or colo.
RHEL will be around for a long time because the government demands something like it. All the other businesses stacked around it are dubious prospects at best.
So basically it's like a beach party where they have a bonfire and burn massive stacks of cash. How do the people organizing this not know what they are doing? Or is it like, any kind of large bonfire is good for their careers?
My guess is that after the GitHub acquisition, IBM was afraid Microsoft would buy RedHat to gain more control of Linux. I'm kinda surprised MS hasn't bought Ubuntu, but there's probably no hurry - we're still in "embrace".
Ignoring the merit of the analysis, this text is very difficult to read. Too many parentheses, onomatopoeias and inside jokes. I know the author is well regarded for many books (I haven't read any), but I think he has to be more objective in blog posts.
Pretty interesting. Probably right. So what's next? After flailing around for a while longer IBM will be up for sale. Who will be the buyer? That's easy: Microsoft!
Oracle, not Microsoft. Microsoft doesn't need anything they have. Oracle is already selling their (Oracle-rebranded) version of RHEL plus the Oracle database business ties in with the IBM mainframe business.
Maybe an LBO? IBM still has some specialty assets like CPLEX for optimization, where I believe they are the go-to solution. I'm guessing if there were a buyout, some of the assets that could be adapted to a SaaS/cloud product could be sold off to the big players.
Full disclosure, I work at Red Hat, but IBM buying Red Hat is a big deal. Just go back 40 years to IBM vs Microsoft. Microsoft bought QDOS (which became MS-DOS, and eventually underpinned Windows). Now IBM has bought Red Hat, which has OpenShift and
Microsoft got the OEMs via anti-competitive actions. To quote myself from a few years ago:
> Probably one of the worst and most egregious is Microsoft's use of a "per processor" fee in the 90's which they only stopped when the government forced them to. If you were an OEM like Dell or HP, and sold Windows on any computers, you had to pay Microsoft for a copy of Windows on all computers you sold, even ones without Windows.
> This anti-competitive move meant alternative operating systems, like BeOS, or OS/2 Warp, or even Linux weren't really an option. BeOS died, not on any technical merits, but because Microsoft forced it out via other means. Linux only survived because its openness made it hard to kill.
Microsoft did get the OEMs, true, I stand corrected. But Windows needed backward compatibility with DOS applications, otherwise Windows would have failed, as there was such a huge installed base of existing DOS business applications.
Another Red Hat associate here... Using OpenShift doesn't necessarily mean managing your own infrastructure. There are a number of cloud-based managed OpenShift offerings out there from multiple vendors, including Microsoft. But you can also run it on-prem where it delivers tremendous value: better infra utilization, agility, security, productivity, portability, etc. Simply decoupling the lifecycle of the app from the lifecycle of the host is a huge win for many enterprises.
The latest incarnation of OpenShift is basically just a bundling of Kubernetes, like GKE or EKS, but on-prem.
The hybrid-cloud dream, therefore, is to target k8s as your runtime, and then spread your workload across N cloud vendors and on-prem resources.
Regardless of whether you think that's a good plan or not, it's the perfect sales pitch to the Fortune 500 CTO who doesn't want to go all-in on "the cloud" but realizes he has to "do something".
So I guess the question is... why didn't they?