There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which in practice means 1080p only...). Very slight. Price/performance tilts toward Ryzen 3000 so hard that buying Intel is foolish for sure.
There's no business reason to pick Intel on desktop or server, for sure, but there's so much inertia here for AMD to counter that it'll take years.
Intel is down on the floor until 2021-2022, when their 7nm (a smaller node than TSMC's 7nm) begins to ship, because a) there's no reason to believe 10nm will actually ship in quantity, and b) there's every reason to believe that even if it does, it won't be great - the first iteration of a process never is, and 14nm is so fine-tuned by now that it beats 10nm in performance-per-watt, which makes Ice Lake look stupid. 7nm is said to be a totally different, independent development, not a fine-tune of the (dead) 10nm.
Intel has $12B cash on hand though, so don't expect them to just go out with a whimper. If their profits go down a little for 2-3 years, they will live. The stock price didn't crash, with good reason. AMD posted a net loss for seven consecutive quarters before turning a profit in Q2 2016. Intel won't even turn unprofitable for a similar period of time, it'll just make a little less profit. And, again, they have a decent-sized war chest to draw on if necessary.
The chip business is a slow business. In 2012, Intel said they would ship 10nm chips in 2015. https://www.crn.com/news/components-peripherals/240007274/in... That's about the same time AMD re-hired Jim Keller. AMD saw their window in 2015 when Intel's 10nm didn't ship, threw K12 aside in a hurry and brought Zen to market -- surely they didn't expect they'd get a five-year run during which Intel couldn't put up a fight.
The fun will start in 2021, when TSMC is expected to have a refined 5nm process (they call it 5nm Plus), which you can bet AMD will use against Intel's 7nm.
A fun detail of this is Apple's involvement.
The 7nm process was originally built to win TSMC's largest customer: Apple's A-series. AMD adopting that same process aligns them with Apple's gains, spending, and chip quality. (And Apple obviously orders a LOT of A-series chips, far more than AMD's entire production.)
The hilarious part is that this will drive down Intel's chip prices as they try to compete, improving Mac margins.
Disagree. Why would it take years to counter this "inertia?" It's not like you're asking someone to give up a religion they've had from birth. They are being asked to make economic purchasing decisions, and the economics are crystal clear.
There is momentum in the computing industry. The laptops and pre-assembled desktops being sold today are based on the OEM decisions that were made 1-2 years ago. Most corporate client computing machines are on a similar timeline. IT departments don't want to support a wide variety of hardware, so they standardize on a single model or variations within that model for years.
Warehouse-scale computing has similar budgets and timeframes. You don't decide how to re-build this month's 10k machines based on this month's benchmarks. You made the decision as far back as the supply chain required you to do it, maybe a year or more.
I'm sure that AMD's sales team has been telling their big customers about this generation's performance improvements for a while. But with their history, decision makers are going to discount the story a bit until they can see it in production silicon.
So the next few months' movements in AWS and the like will all depend on the extent to which their decision makers were convinced many months ago.
Exactly right. AMD has not only been telling their big customers about it -- they've been letting those customers test it. Google has been testing Rome for months and has decided to move forward with a full-scale deployment, not only for its own data centers but for its public cloud. They are using Rome in their production servers today.
Like it or not, Google's stamp of approval carries a tremendous weight in this industry. And if that's not good enough for you, Microsoft and AWS are stepping up their deployments of EPYC as well.
As a result, a lot of smaller companies will now require a much lower standard of due diligence when approving an EPYC Rome deployment.
Not only that, but the Ryzen 3 3300U and friends are Zen+, not Zen 2. It won't be until next year that IT departments can even buy a Zen 2 U-series laptop.
The signs are there: Lenovo called them T480 and A485 last year, T490 and T495 this year, indicating the AMD models are getting very close to parity.
Because there are soft costs to integrating a completely different platform into your environment. What happens when you try to live-migrate a VM from an Intel system over to an EPYC system? Unless the processors expose the same instruction sets you can't move VMs between them - meaning you have to go in, manually find the greatest common denominator, and disable the features that aren't mutually supported. That kind of thing.
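That "greatest common denominator" step can be sketched in a few lines. This is just an illustration with made-up flag lists -- in practice you'd read the flags from each host (e.g. /proc/cpuinfo on Linux) or let the hypervisor compute a baseline CPU model for you:

```python
# Sketch: compute the common CPU feature baseline across mixed hosts.
# Flag sets below are illustrative examples, not exhaustive inventories.

def common_baseline(*host_flags):
    """Intersect per-host feature sets; a migratable VM may only
    expose features present on every host it might land on."""
    baseline = set(host_flags[0])
    for flags in host_flags[1:]:
        baseline &= set(flags)
    return baseline

intel_host = {"sse4_2", "avx", "avx2", "avx512f", "aes"}
epyc_host  = {"sse4_2", "avx", "avx2", "aes", "sha_ni"}

# Features present on both hosts; avx512f and sha_ni get masked off.
print(sorted(common_baseline(intel_host, epyc_host)))
# -> ['aes', 'avx', 'avx2', 'sse4_2']
```

Note what gets lost: the Intel host's AVX-512 and the EPYC host's SHA extensions both disappear from every VM, which is exactly the "extra silicon sitting idle" cost of a mixed cluster.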
Also, software is very often the largest cost for these systems. It's not hard to find yourself paying $100k a month for an Oracle license. A one-time expense of $50k for one piece of hardware vs another is barely a blip.
And in fact that software is often priced by hardware spec. So if you have 4x as many cores on EPYC, you'll pay more in software costs on a monthly basis as well. That, or the software will simply refuse to use the extra cores until you buy an upgrade, meaning they sit there doing nothing.
It's counterintuitive to people whose experience is building a gaming desktop at home, but hardware expenses are not necessarily a big part of total cost of ownership for enterprise operators.
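The back-of-envelope math makes the point. The $100k/month license figure comes from the comment above; the 32-core baseline and the $50k hardware delta are placeholder assumptions:

```python
# TCO sketch: per-core-licensed software vs a one-time hardware delta.
# All numbers are illustrative placeholders.

hw_delta_one_time = 50_000               # one-time hardware price difference
license_per_core_month = 100_000 / 32    # $100k/month spread over 32 cores

def three_year_license(cores):
    """Total license spend over 36 months for a per-core-licensed product."""
    return cores * license_per_core_month * 36

print(f"32 cores over 3 years: ${three_year_license(32):,.0f}")   # $3,600,000
print(f"64 cores over 3 years: ${three_year_license(64):,.0f}")   # $7,200,000
print(f"hardware price delta:  ${hw_delta_one_time:,}")           # $50,000
```

Doubling the core count adds millions to the software bill; against that, the hardware delta really is a blip.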
> Because there are soft costs to integrating a completely different platform into your environment. What happens when you try to live-migrate a VM from an Intel system over to an EPYC system? Unless the processors expose the same instruction sets you can't move VMs between them - meaning you have to go in, manually find the greatest common denominator, and disable the features that aren't mutually supported. That kind of thing.
That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.
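The "opportunistic handling" here is usually a runtime dispatch: check what the CPU supports once, then pick an implementation. A minimal sketch of the pattern (function names are hypothetical; compiled libraries do the same thing with CPUID checks at load time):

```python
# Sketch of runtime CPU-feature dispatch: select the fastest code path
# the current CPU supports, fall back to a portable one otherwise.

def checksum_scalar(data):
    """Portable fallback implementation."""
    return sum(data) & 0xFFFFFFFF

def checksum_avx2(data):
    """Stand-in for a vectorized build; same result, faster in real code."""
    return sum(data) & 0xFFFFFFFF

def select_impl(cpu_flags):
    """Pick the best implementation available on this CPU."""
    if "avx2" in cpu_flags:
        return checksum_avx2
    return checksum_scalar

impl = select_impl({"sse4_2", "avx"})  # older CPU: falls back to scalar
print(impl.__name__)                   # -> checksum_scalar
```

This is why application code generally doesn't care whose amd64 chip it runs on -- but note it only checks features at startup, which is exactly the assumption live migration between different CPUs violates.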
OSs are not most software and are not designed around the instruction set changing underneath them during normal operation. Why would they be? You can't physically swap out a processor while the system is booted and you can't swap a virtual processor either.
It works fine if you shut everything down and reboot the system, but that is often undesirable.
The whole point of the feature is that the VM can be migrated around different physical hardware without having to interrupt service. It just suddenly is running on a different host instance. But it has to be the same type of processor... or at least the same feature set. "Close" is not good enough, it needs to be a 1:1 match.
You can manually disable features until you have found the lowest common denominator between the feature sets of the different processors. But obviously the more types of processors you have in your cluster, the more problematic this is. In very few clusters will you find servers of mixed types, you buy 10,000 of the same server and operate them as a "unit". You don't just add in servers after the fact, sometimes you don't even replace failed servers.
And that hardware decision will have been made years ago, very often. The server market is hugely inertial, it's nothing like you putting together a build one evening and then going out and buying parts and putting it together.
> That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.
A very long time ago, I worked for a then very large company that sold servers. Plain standard 80486 based servers.
My job was to drive around and drop off these servers for evaluation at prospective customers, who would compare them against 80486 offerings from a different vendor.
Your argument about them all being fundamentally the same would be even stronger: it’s the same CPU.
And yet, customers did not take chances and would go through the eval motions. Because their business relied on it.
Now imagine that at a scale of thousands.
Claiming “they are fundamentally the same” is not wrong, but you don’t care about the fundamentals only. You care about the whole picture and you don’t take chances.
Paying more for lower performance and higher TDP isn't a "chance".
Very conservative corporate customers could wait a short time for good BIOS corrections and sufficient supply for all the parts (not only CPUs) they need before shopping for AMD servers, but they would be buying different hardware from the same established suppliers even if they went with Intel.
There are many small and large companies with relatively small compute needs (small meaning they own a small datacenter or two). Lots of the code they run is _extremely_ legacy, and it may or may not be within their risk tolerance to switch vendors to save a hundred grand a year on CPU costs. Especially if they think like OP and believe Intel will catch up again in just a few years. Why rock the boat?
Of course such decisions are always political. But now with Google backing EPYC Rome, there is the political risk of not switching, and finding yourself in the Stone Age 5 years from now.
There's a lot more incentive to explore EPYC than there was a day ago.
People are creatures of habit. There are people still using Yahoo! for no reason other than it's what they are used to. For many people, buying Intel is the same thing. It takes years to win those people over. (and usually the argument that ultimately wins them over is 'everyone else is using it', rather than the economic one) Those of us who are early adopters jump ship as soon as it's obvious there's a better option, the masses move at a much more glacial pace.
Sure, people are creatures of habit. But this isn't Yahoo vs Google we're talking about. People are going to be throwing down hundreds or thousands of dollars per CPU, and the differences are not remotely subjective.
> There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which in practice means 1080p only...). Very slight. Price/performance tilts toward Ryzen 3000 so hard that buying Intel is foolish for sure.
it's not a huge advantage, but I'm not sure I would go so far as to call it foolish to buy intel at this point. if your only serious workload is gaming, intel seems like the obvious choice to me. you can actually get a decent all-core overclock on the intel parts, which leads to a significant performance lead in esports titles.
That's a 10% gain, which might help you stay above the 144Hz or even 200Hz refresh rate of your monitor in some games.
Does not matter for most of us, but some hardcore esports gamers might care. I guess that's a very small minority though.
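Concretely, the 10% only matters when it carries you across the monitor's refresh threshold (the frame rates below are made-up examples):

```python
# A 10% fps gain matters only if it moves average fps past the
# monitor's refresh rate. Example numbers are illustrative.

def crosses_threshold(base_fps, gain, refresh_hz):
    """True if the gain lifts fps from below the refresh rate to at or above it."""
    return base_fps < refresh_hz <= base_fps * (1 + gain)

print(crosses_threshold(135, 0.10, 144))  # 135 -> 148.5: crosses 144Hz -> True
print(crosses_threshold(100, 0.10, 144))  # 100 -> 110: still well below -> False
```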
if you think a 10% fps gain is silly, why buy a high-end cpu for gaming at all?
also the "only at 1080p" meme is not really true for some esports titles. counterstrike is so cpu bound that it really doesn't matter what resolution you play at.