I'll be interested to see how long it will be before this devastates Intel's business. At the moment, there's literally no reason to buy an Intel desktop or server processor, so how many of the Intel purchases coming from OEMs and big cloud companies are just because of contracts that might let up in a few years?
Come on, "literally no reason"? Maybe because AMD can't keep up with demand for their new CPUs. I can't buy a Ryzen 3900X without paying substantially inflated ebay prices, the BIOS issues out of the gate are super annoying, and all of these factors are not necessarily technical or performance factors but the fact is that I can get a 9900K right now with mature UEFI firmware on them from reputable motherboard manufacturers, and then I can actually do something with the hardware.
I've been holding out on building a desktop because I could go for a while without it but my patience is wearing very thin after waiting months and now having to wait even longer just to get CPUs in stock in the first place.
Intel's advantage is OEMs and sheer output volume, but the hyperscale infrastructure folks are going to be shoring up AMD financially while Intel has problems and maybe in two years or so of this nonsense AMD might be much more of a serious decision, but as of this moment it isn't a slam dunk for AMD at all.
>Maybe because AMD can't keep up with demand for their new CPUs.
You're confusing bandwidth and latency. Ryzen 3000 launched a month ago and the 3900X outperformed expectations, so the initial shipments sold out faster than expected. You can't magic up stock out of thin air, so there's an inevitable lag between retailers reporting unusually high demand and AMD being able to deliver sufficient stock.
The bandwidth question is much more important and bodes very poorly for Intel. TSMC's 7nm process is stable, providing excellent yields and has plenty of available capacity; they're already in risk production for 5nm, which is expected to free up substantial capacity at 7nm/7nm+ into 2020. Intel's 10nm has been a complete debacle and (despite the Ice Lake launch) is still blighted with sub-par yields.
> TSMC's 7nm process is stable, providing excellent yields and has plenty of available capacity
Yields in terms of getting lots of operational dies, sure, but one of the underlying problems here is chip quality. They have lots of chiplets but most of those chiplets are not fast enough to hit the clocks advertised for the 3900X.
In fact, even a lot of the chiplets sold as 3900X are not fast enough to hit the clocks advertised for the 3900X, and the same goes elsewhere throughout the range. A lot of people are finding their chips boost to 50-200 MHz below the advertised frequency.
Essentially, the boost algorithm now takes chip quality into account when determining how high it will boost. And most of the chips have silicon quality that is too poor to hit the advertised clocks, even on a single core, even under ideal cooling, etc etc.
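If you want to sanity-check your own sample on Linux, a rough sketch like the following works; it assumes the standard cpufreq sysfs layout and uses 4.6 GHz as a placeholder for the advertised boost (adjust for your SKU), and you'd run it alongside a single-threaded load such as stress-ng --cpu 1:

    import glob
    import time

    ADVERTISED_BOOST_KHZ = 4_600_000  # placeholder; substitute your chip's rated boost

    def max_observed_khz(samples=50, interval=0.1):
        """Poll every core's current frequency and return the highest value seen."""
        paths = glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq")
        best = 0
        for _ in range(samples):
            for p in paths:
                with open(p) as f:
                    best = max(best, int(f.read()))
            time.sleep(interval)
        return best

    peak = max_observed_khz()
    print(f"peak observed: {peak / 1e6:.3f} GHz "
          f"({(ADVERTISED_BOOST_KHZ - peak) / 1000:.0f} MHz below advertised)")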
Thus, AMD has the somewhat dubious honor of being the first company to make the silicon lottery apply not just to overclocking, but to their stock clocks as well. They really wasted no time before shifting to anti-consumer bullshit of their own; all they had to do was advertise the chips as being 200 MHz lower and everyone would have been happy, but they wanted to advertise clocks the chips couldn't hit.
And again, the underlying problem is chip quality - a lot of these chiplets can't boost to 4.3 or 4.4 GHz let alone 4.7. AMD simply can't yield enough 4.7 GHz chiplets to go around, even if the chiplets are nominally functional. The process may be "stable and providing excellent yields" but it's not stable and well yielding enough to meet AMD's expectations.
That's a major reason they're now introducing 3700 and 3900 non-X variations - that will allow them to reduce clocks and satisfy demand a bit better.
Not reaching advertised boost clocks seems to be related to the newer AGESA releases AMD has shipped to motherboard vendors, so I wouldn't blame it on chip quality just yet. These contain bug fixes (RdRand, for example) and other changes, but they have impacted boost clocks. People running AGESA 1.0.0.2 report reaching boost clocks easily (sustained in single-core tests), while I and others running later releases have issues.
New architecture, new chipset: it's bound to have some release issues. Intel is on its second or third refresh of the Skylake architecture from 2015, with everything long since ironed out.
I've seen plenty of people having problems on older AGESA too. I've seen some people actually have higher performance on newer AGESA. It all depends on your particular sample and setup and how it fits into the boost criteria. Silicon quality still plays a massive role.
So to be clear, older AGESA isn't a magic bullet that is letting all chips "easily hit their rated boost clocks".
It could be cleaned up somewhat in future AGESA releases, and silicon quality will definitely go up over time.
A lot of reviewers have noted similar things, but they're often working with single samples and didn't want to make too much of a stink without more data; still, the problem is widespread. Out of all of der8auer's CPUs, only one hit its advertised boost clocks, and it was one of the lower-end CPUs with a less ambitious target to hit.
It may be a problem with early AGESA firmware, and silicon quality will definitely go up over time, but at least at this point in time AMD has certainly falsely advertised the clocks these CPUs are capable of achieving.
Every forum, pretty much. If you have been interested in getting one and following along with the launch, this is not a controversial statement. It's not 100% certain it's the chips' fault, though; BIOS issues are still running rampant nearly 5 weeks later, and each new BIOS changes performance significantly. It will take a while before everyone knows exactly where they stand.
Both my comment and the one I responded to specified "At the moment", which is fairly specific and implies that it's subject to change. Of course things will be different in 3+ months as bugs and supply chain issues get ironed out. The question that matters a lot more is whether Intel will be able to respond adequately to AMD's offerings.
It is unclear whether Intel is truly disadvantaged in throughput for any appreciable length of time. We've seen what happened to Intel after the disastrous Prescott release years ago: they worked on the Core architecture and its follow-up, Core 2, which put AMD in a pretty serious rut for the past decade. Your point about the 10nm offering launching and _still_ being lackluster is the big, big problem for Intel's short-term competitiveness.
I am genuinely intrigued. I've been going to my local Microcenters in the DC/VA area for weeks now, and they said they have gotten ZERO shipments of the 3900X since release day and maybe have a couple of 3700Xs on shelves.
This doesn't necessarily invalidate my point completely, though - AMD's distribution clearly needs some work when one region is drowning in 3900X processors and a very wealthy metro area has none in retail channels.
Or maybe (cough) someone at your local Microcenter hasn't been ordering to replenish that stock.
That strategy is quite widely used in many other industries as well.
Although, given it has been out for less than a month and demand is through the roof (I have never seen reviews this pro-AMD, not even in the Athlon 64 days), I think supply is simply a little tight while TSMC works hard to keep up.
The consumer market doesn't really matter. Pretty much any IT admin I know orders directly through wholesalers. I'm actually surprised people physically go to stores anymore.
Odd. I was in Microcenter on Tuesday and they had a few 3900X's in the case. When I was there closer to launch, there weren't any and they told me that people would come in and ask for them before they even hit the shelf.
Which Microcenter location, though? At the Fairfax one, the employee in that section on Sunday said the two boxes in the floor cases were 3700Xs and not 3900Xs, because the boxes for the 3900X are larger.
Where have you been able to find them? PCPartPicker hasn't exactly shown great availability on the 3900X so far [0], Amazon is full of scalping [1] and Ebay... Well, $700+ is what I'm seeing there. This is the CPU that'll probably be powering my next build, so if you've got a source close to MSRP I'd love to hear it!
I hope this letter finds you in good health. Since I've seen that you are offering the AMD 3900X at a discount, I'd like to inform you that I am not like those pleebs and would therefore like to pay the full MSRP. Please let me know where I can send the goats.
Funny enough, I had two buyers saying pretty much this. "I'll pay 49" on a £35 item. Why? Because apparently that's "what it's worth". I appreciate the thought, but it just looks insanely suspicious...
Same here. Find nearest Micro Center, see wonderful deal, spend more than you came in looking to spend, repeat. Unless you don’t have any kind of physical computer store near you, I don’t see why buying it on eBay would be useful.
Damn, I have been desperately trying to get one since launch. I think the USA is getting them all and the rest of the world is getting screwed. I paid on launch day; my store is getting two 3900Xs this month, and they have 78 people who have paid in full and are waiting.
My original comment was quite ignorant, then, and I apologize for it. They’ve got quite a few at my Micro Center for $449 if you purchase a compatible motherboard, so I was (incorrectly) assuming that this applied to the rest of the world too.
I am in Europe, and a (previously?) reputable online retailer showed me the same two weeks ago, but one week ago (two days after the expected ship date) they pushed the expected time out to a month and just said they didn't get the shipment they expected, deal with it.
Eh, it just came out last month, give them some time.
In fact Intel had supply issues with their 9900K at launch too, for at least a month or two they were often out of stock at the major retailers. If getting ahold of a 3900X is still tough by next month then maybe that's cause for concern.
I would agree that Intel is more mature on the BIOS side of things. AMD usually has launch issues that need to be ironed out with UEFI updates. But, if the past few launches are any indication, they've always got things fixed.
What? Intel are currently in the midst of a massive ongoing supply shortage. Lack of ability to keep up with demand is at least as much an Intel problem as it is an AMD one at this point.
AMD is literally facing unprecedented demand. There's no hype here. This is the real thing. Shortages are to be expected, but you can assume that AMD is smart enough to route their production such that they get the highest return -- e.g. to recurring enterprise customers.
There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which means more or less 1080p only...). Very slight. Price/performance tilts so hard toward Ryzen 3000 that it's foolish to get an Intel, for sure.
There's no business reason to pick Intel on desktop or server, for sure, but there's so much inertia here for AMD to counter that it will take years.
Intel is down on the floor until 2021-2022, when their 7nm (which is a denser node than TSMC's 7nm) begins to ship, because a) there's no reason to believe 10nm will actually ship in quantity, and b) there's every reason to believe that even if it does, it's not going to be great; the first iteration of a process never is, and 14nm is so fine-tuned by now that it's better in performance per watt, which makes Ice Lake look stupid. 7nm is said to be a totally different, independent development and not a fine-tune of the (dead) 10nm.
Intel has $12B of cash on hand, though, so don't expect them to just go out with a whimper. If their profits go down a little for 2-3 years, they will live. The stock price didn't crash, with good reason. AMD had a net loss for seven consecutive quarters before turning a profit in Q2 2016; Intel won't even turn unprofitable for a similar period of time, it'll just have a little less profit. And, again, they have a decent-sized war chest to draw on if necessary.
The chip business is a slow business. In 2012, Intel said they would ship 10nm chips in 2015. https://www.crn.com/news/components-peripherals/240007274/in... This is about the same time AMD re-hired Jim Keller. AMD saw their window in 2015 when Intel's 10nm didn't ship, threw away K12 in a hurry, and brought Zen to market in 2016 -- surely they didn't expect they would get a five-year run with Intel unable to put up any competition.
The fun will start in 2021, when TSMC is expected to have a refined 5nm process (they call it 5nm Plus) which you can bet AMD will use against Intel's 7nm.
A fun detail of this is Apple's involvement.
The 7nm process was originally built to attract TSMC's largest customer: Apple's A series. By adopting the same process, AMD shares in Apple's gains, spend, and chip quality (and obviously Apple makes a LOT of A-series chips, far more than AMD's entire production).
The hilarious upshot is that this will drive down Intel's chip prices as they attempt to compete, improving Mac margins.
Disagree. Why would it take years to counter this "inertia"? It's not like you're asking someone to give up a religion they've had from birth. They are being asked to make economic purchasing decisions, and the economics are crystal clear.
There is momentum in the computing industry. The laptops and pre-assembled desktops being sold today are based on the OEM decisions that were made 1-2 years ago. Most corporate client computing machines are on a similar timeline. IT departments don't want to support a wide variety of hardware, so they standardize on a single model or variations within that model for years.
Warehouse-scale computing has similar budgets and timeframes. You don't decide how to re-build this month's 10k machines based on this month's benchmarks. You made the decision as far back as the supply chain required you to do it, maybe a year or more.
I'm sure that AMD's sales team has been telling their big customers about this generation's performance improvements for a while. But given AMD's history, decision makers are going to discount the story a bit until they can see it in production silicon.
So the next few months' movements in AWS and the like will all depend on the extent to which their decision makers were convinced many months ago.
Exactly right. AMD has not only been telling their big customers about it -- they've been letting those customers test it. Google has been testing Rome for months and has decided to move forward with a full-scale deployment, not only for their own data centers but for their public cloud. They are using Rome in their production servers today.
Like it or not, Google's stamp of approval carries a tremendous weight in this industry. And if that's not good enough for you, Microsoft and AWS are stepping up their deployments of EPYC as well.
As a result, a lot of smaller companies will now require a much lower standard of due diligence when approving an EPYC Rome deployment.
Not only that, but the Ryzen 3 3300U and friends are Zen+, not Zen 2. It won't be until next year that an IT department can even buy a Zen 2 U-series laptop.
The signs are there, though: Lenovo called them the T480 and A480 last year, and the T490 and T495 this year, indicating these are very close.
Because there are soft costs to integrating a completely different platform into your environments. What happens when you try to pass a VM from an Intel system over to an Epyc system? Unless they have the same instruction sets you can't pass them between different processors - meaning, you have to go in and manually find the greatest common denominator and disable the rest of the instructions that aren't mutually supported. That kind of thing.
Also, software is very often the largest cost for these systems. It's not hard to find yourself paying $100k a month for an Oracle license. A one-time expense of $50k for one piece of hardware vs another is barely a blip.
And in fact that software is often priced based on hardware spec. So if you have 4x as many cores on Epyc, you will pay more in software costs on a monthly basis as well. That, or the software will simply refuse to use the extra cores until you buy an upgrade, meaning they sit there doing nothing.
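To put made-up numbers on that (purely a back-of-envelope sketch, not real licensing figures):

    per_core_monthly = 2_000                      # hypothetical $/core/month
    configs = {"2-socket Intel": 2 * 16, "2-socket Epyc": 2 * 64}

    for name, cores in configs.items():
        print(f"{name}: {cores} cores -> ${cores * per_core_monthly:,}/month")
    # The hardware price delta is paid once; the per-core licence delta recurs every month.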
It's counterintuitive to people whose experience is building a gaming desktop at home, but hardware expenses are not necessarily a big part of total cost of ownership for enterprise operators.
> Because there are soft costs to integrating a completely different platform into your environments. What happens when you try to pass a VM from an Intel system over to an Epyc system? Unless they have the same instruction sets you can't pass them between different processors - meaning, you have to go in and manually find the greatest common denominator and disable the rest of the instructions that aren't mutually supported. That kind of thing.
That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.
OSs are not most software and are not designed around the instruction set changing underneath them during normal operation. Why would they be? You can't physically swap out a processor while the system is booted and you can't swap a virtual processor either.
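Even ordinary userspace software typically does its feature check exactly once, at startup, and then dispatches; here's a toy sketch of that pattern in Python (reads Linux's /proc/cpuinfo, purely illustrative):

    def cpu_flags():
        """Return the feature flags the kernel reports for the first CPU."""
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    FLAGS = cpu_flags()  # checked once, when the program starts

    def checksum(data: bytes) -> int:
        # Stand-ins: imagine an AVX2-accelerated path and a portable fallback.
        if "avx2" in FLAGS:
            return sum(data) & 0xFFFFFFFF  # "fast" path
        return sum(data) & 0xFFFFFFFF      # fallback path

    print("avx2 available:", "avx2" in FLAGS)

Nothing re-checks FLAGS after startup, which is exactly why the virtual CPU can't be allowed to change underneath a running guest.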
It works fine if you shut everything down and reboot the system, but that is often undesirable.
The whole point of the feature is that the VM can be migrated around different physical hardware without having to interrupt service. It just suddenly is running on a different host instance. But it has to be the same type of processor... or at least the same feature set. "Close" is not good enough, it needs to be a 1:1 match.
You can manually disable features until you have found the lowest common denominator between the feature sets of the different processors. But obviously, the more types of processors you have in your cluster, the more problematic this is. In very few clusters will you find servers of mixed types; you buy 10,000 of the same server and operate them as a "unit". You don't just add servers in after the fact; sometimes you don't even replace failed servers.
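For a sense of what that lowest common denominator looks like, here's a rough sketch with invented flag sets (in real deployments the hypervisor tooling, e.g. libvirt's CPU baseline support, computes this for you):

    # Invented, abbreviated flag sets -- not real CPUID dumps.
    hosts = {
        "intel-node": {"sse4_2", "avx", "avx2", "avx512f", "aes"},
        "epyc-node":  {"sse4_2", "avx", "avx2", "sha_ni", "aes"},
    }

    baseline = set.intersection(*hosts.values())
    print("features a migratable VM may expose:", sorted(baseline))

    for name, flags in hosts.items():
        print(f"hidden from guests compared to bare {name}:", sorted(flags - baseline))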
And that hardware decision will have been made years ago, very often. The server market is hugely inertial, it's nothing like you putting together a build one evening and then going out and buying parts and putting it together.
> That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.
A very long time ago, I worked for a then very large company that sold servers. Plain standard 80486 based servers.
My job was to drive around and drop off these servers for evaluation at prospective customers, who would compare them against 80486 offerings from a different vendor.
Your argument about them all being fundamentally the same would be even stronger: it’s the same CPU.
And yet, customers did not take chances and would go through the eval motions. Because their business relied on it.
Now imagine that at a scale of thousands.
Claiming “they are fundamentally the same” is not wrong, but you don’t care about the fundamentals only. You care about the whole picture and you don’t take chances.
Paying more for lower performance and higher TDP isn't a "chance".
Very conservative corporate customers could wait a short time for good BIOS corrections and sufficient supply for all the parts (not only CPUs) they need before shopping for AMD servers, but they would be buying different hardware from the same established suppliers even if they went with Intel.
There are many small and large companies with relatively small compute needs (small meaning they own a small datacenter or two). Lots of the code they are running is _extremely_ legacy, and it may or may not be within their risk tolerance to switch vendors to save a hundred grand a year on CPU costs. Especially if they think like OP and believe Intel will match AMD again in just a few more years. Why rock the boat?
Of course such decisions are always political. But now with Google backing EPYC Rome, there is the political risk of not switching, and finding yourself in the Stone Age 5 years from now.
There's a lot more incentive to explore EPYC than there was a day ago.
People are creatures of habit. There are people still using Yahoo! for no reason other than it's what they are used to. For many people, buying Intel is the same thing. It takes years to win those people over. (and usually the argument that ultimately wins them over is 'everyone else is using it', rather than the economic one) Those of us who are early adopters jump ship as soon as it's obvious there's a better option, the masses move at a much more glacial pace.
Sure, people are creatures of habit. But this isn't Yahoo vs Google we're talking about. People are going to be throwing down hundreds or thousands of dollars per CPU, and the differences are not remotely subjective.
> There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which means more or less 1080p only...). Very slight. Price/performance tilts so hard toward Ryzen 3000 that it's foolish to get an Intel, for sure.
it's not a huge advantage, but I'm not sure I would go so far as to call it foolish to buy intel at this point. if your only serious workload is gaming, intel seems like the obvious choice to me. you can actually get a decent all-core overclock on the intel parts, which leads to a significant performance lead in esports titles.
That's a 10% gain, which in some games might help you stay above the 144 Hz or even 200 Hz refresh rate of your monitor.
Does not matter for most of us, but some hardcore esports gamers might care. I guess that's a very small minority though.
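To put rough numbers on it (the baseline fps is invented, just to show the scale):

    amd_fps = 190                 # assumed CPU-bound result in an esports title
    intel_fps = amd_fps * 1.10    # the ~10% advantage being discussed

    for name, fps in [("AMD (assumed)", amd_fps), ("Intel (+10%)", intel_fps)]:
        print(f"{name}: {fps:.0f} fps -> {1000 / fps:.2f} ms per frame")
    # 144 Hz needs <= 6.94 ms per frame and 200 Hz needs <= 5.00 ms, so the delta
    # only matters if it pushes you across one of those thresholds.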
if you think a 10% fps gain is silly, why buy a high-end cpu for gaming at all?
also the "only at 1080p" meme is not really true for some esports titles. counterstrike is so cpu bound that it really doesn't matter what resolution you play at.
I think the one reason is that if you need the best gaming desktop performance, Intel still dominates, but that little extra performance comes at a very steep price.
My next desktop will be AMD+Nvidia. Now if only I could avoid the Nvidia tax for deep learning...
DigitalFoundry had an interesting take on frame rate / frame time performance on Intel vs. AMD. If you look at the graph, you can see Ryzen dip down further a few times when more computation or memory throughput is needed.
unless you are compiling chrome in the background or streaming, you're not going to saturate even eight cores while gaming. in most benchmarks I've seen, the 9900k still performs better while streaming.
okay, what parts are we talking about then? aside from the 3900x, the Intel parts all have the same core count as their amd counterpart at similar price points.
Intel has almost no overclocking headroom. It's only about 5% or so. Both AMD and Intel have squeezed everything they can out at this point, there's not really anything left.
The main difference is that on Intel you get that magical-sounding 5 GHz number by overclocking, but it's not actually much higher than stock (4.7 GHz on the 9900K is already the all-core turbo).
It’s a good set of first steps. AMD needs to keep executing as they have spent decades as second fiddle to Intel. Intel meanwhile has mindshare, enterprise agreements, and other partnerships that make its position as the market leader very sticky.
There's still a slight performance advantage in gaming. On top of the fact that getting any Ryzen 9 is still hit or miss while getting an i9 is easy, anyone who is shelling out for an RTX 2080 GPU will probably go Intel.
On top of which, given the history, those building gaming machines will assume Intel's next 10nm CPUs will still outshine AMD's in gaming for the foreseeable future.
> Intel is a juggernaut because even if it's not shipping the best, it's always shipping something on time, every time.
Intel is in their current situation because their 10 nm process is years late, and is still not able to manufacture high-performance parts like server CPUs in any meaningful quantity for a price that the market would bear. They've also had severe shortages over the past year, which has resulted in orders being delayed for weeks or months.
And of course, there's the matter of their products basically being warmed-over refreshes of a nearly 5-year-old architecture (because their new architectures are dependent on 10 nm), which has resulted in comically lopsided performance in AMD's favor, in basically every objective metric that matters.
What? Intel have been slipping the ship date on their 10 nm process node for years. It's slipped so much that it's likely not even to be fully rolled out before it's discarded for its successor.