AMD Rome Second Generation EPYC Review: 2x 64-core Benchmarked (anandtech.com)
525 points by ksec on Aug 7, 2019 | 204 comments



> For those with little time: at the high end with socketed x86 CPUs, AMD offers you up to 50 to 100% higher performance while offering a 40% lower price. Unless you go for the low-end server CPUs, there is no contest: AMD offers much better performance for a much lower price than Intel, with more memory channels and over 2x the number of PCIe lanes. These are also PCIe 4.0 lanes. What if you want more than 2 TB of RAM in your dual-socket server? The discount in favor of AMD just became 50%.

Well isn't that a kick in the pants.


Xeon has been making a 50-70% profit margin, depending on the model. Intel's profit margins will take a hit when they are forced to cut prices. It's unlikely that AMD has enough manufacturing capacity to fill all demand, so Intel will still make a profit.

History:

AMD had a similar upper hand against Intel with their Athlon processor 20 years ago. Intel's transition to the 180 nm process was delayed, and AMD's K7 Athlon was a superior microarchitecture. Intel's response was to lower profit margins until AMD was on the ropes again.

This time AMD has a better chance, but don't count Intel out. AMD's success depends on their profit margins while competing against Intel. If Intel forces AMD's profits close to zero, it can spoil AMD's technical win.

https://www.macrotrends.net/stocks/charts/INTC/intel/profit-...

https://www.macrotrends.net/stocks/charts/AMD/amd/profit-mar...


Correct me if I'm mixing up the timeline, but that's the moment in history when Intel used illegal market manipulation to keep AMD out of the OEM market, right? So Intel did more than lower their profit margin; they basically bribed their way through that period. That's an option they are unlikely to have again, and one which, as a strategy, shouldn't even work anymore for the type of data centers that exist today.


I am curious why you think it will not work as well as last time. It is not as if law enforcement is more active now. Generally law enforcement at the moment is forbidden to touch the big fish, since the biggest fish sets policy.


While I'm similarly critical of US corruption and too big to fail thinking in general, there was a bit of (EU) progress in terms of monopoly control. Like the huge fines Microsoft and Google had to pay.

That would be retroactive, though, and wouldn't help AMD. What helps AMD in advance is the data center landscape. How would Intel force Google, Microsoft, and Amazon to use Intel server processors in the custom servers for their cloud data centers? They can compete on price (though price dumping could alert regulators again), but Intel has zero leverage over those three. Thus we see https://news.ycombinator.com/item?id=20643604.

And smaller OEMs should be hard to pressure given the history, and hard to mislead given the big companies validating AMD's processors.

Regular vendors for desktops and laptops are already offering AMD, so there seems to be limited danger there.


The first AMD v. Intel suit cost Intel $10 million. The second cost $1.25 billion. A third one would be immense.

Also, Intel is operating under a consent decree and FTC supervision until October 29, 2020. The FTC would come down hard on direct violations of the decree.


Intel still hasn't paid that $1 billion to this day:

https://pcper.com/2016/06/intel-still-hasnt-paid-amd-the-1-2...

and it looks like they finally brib^^convinced the right people and have a chance of getting it cancelled: https://www.nytimes.com/2017/09/06/business/intel-eu-antitru...


According to those articles it is the EU fine that isn't paid. From what I've heard the US fine is basically what paid for the Zen R&D. Karma.


Intel put $5B into Itanic, destroyed by Athlon.

But the consent decree is new to me. I guess the FTC has been slightly more active than I gave them credit for.


They did everything they could, but Intel did not win with just bribery and dirty tricks.


>It's unlikely that AMD has enough manufacturing capacity to fill all demand,

Betting against TSMC, not a very wise move at the moment.

>If Intel force AMD's profits close to zero, it can spoil AMD's technical win.

That assumes Intel will take a loss in order to hurt AMD, and I am not sure predatory pricing is legal in the US. AMD is not pricing their product to undercut Intel; AMD is pricing it this way because their chiplet strategy allows them to do so while still retaining industry-level margins.

I am pretty sure Intel can afford to lose billions, just like with their contra revenue, but this time around the sums will be so great that I am not sure their investors will be happy with it, not to mention the declining margins.


Lowering their profit margin only works if Intel has the same performance as AMD. Seeing as AMD has higher performance, there is no way customers will buy a cheaper offering with less performance. It's like the old Intel vs. AMD, but reversed: for years AMD offered lower-performance, lower-priced chips but couldn't break into the (CPU) market.


> It's unlikely that AMD has enough manufacturing capacity to fill all demand, so Intel will still make profit.

Keep in mind AMD's dies are tiny compared to Intel's. The CPU dies, the 7nm parts, are a mere 74mm2. They are going to be getting fantastic yields on them as a result, and can trivially allocate them to what's selling well. And it's the same die for their entire stack - consumer & enterprise.

Meanwhile the 28 core Xeons are a monstrous ~694mm2. Even though the 14nm process is mature, that's still a hugely expensive chip to make due to yields and capacity. You can only fit so many of those rectangles on the circular wafer.
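To put rough numbers on that, here's a back-of-the-envelope sketch in Python, using the textbook dies-per-wafer approximation and a simple Poisson yield model. The defect density is an assumed, illustrative value, not a published figure:

    import math

    def gross_dies(wafer_diameter_mm, die_area_mm2):
        # Approximate count of whole dies that fit on a circular wafer.
        r = wafer_diameter_mm / 2
        return (math.pi * r**2 / die_area_mm2
                - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

    def poisson_yield(die_area_mm2, defects_per_mm2):
        # P(zero defects land on a given die) -- the simplest yield model.
        return math.exp(-die_area_mm2 * defects_per_mm2)

    D0 = 0.001  # assumed defect density (defects/mm^2), illustrative only

    for name, area in [("Zen 2 chiplet", 74), ("28-core Xeon", 694)]:
        gross = gross_dies(300, area)
        good = gross * poisson_yield(area, D0)
        print(f"{name}: ~{gross:.0f} candidates per wafer, ~{good:.0f} good dies")

With those made-up defect numbers, a 300mm wafer gives you hundreds of good chiplets but only a few dozen good Xeons, and the gap widens fast as defect density rises. That's the economics behind the chiplet strategy.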


https://www.goodreads.com/book/show/16145208-slingshot is a good book that covers Intel/AMD history


Of course, even if they were totally kicking ass, it would take many years for AMD to build up capacity comparable to what Intel has. Fabs take astronomical investment and time to build.

But it seems Intel has been unable to even keep up with the competition for quite some time. It was the leader, and it seems the 10 and 7nm processes completely stopped them in their tracks competitively.

I am kind of interested in what actually happened. There is probably an interesting book to be written on Intel's troubles with getting the 10nm node working.


AMD is fabless and uses TSMC and Global Foundries. I imagine they can ramp faster than you say.


I didn't know that. Still, somebody would have to build capacity to handle demand and it won't be much faster just because it is TSMC/GF.


The capacity already exists, if AMD can outbid the guys trying to use TSMC to produce GPUs and smartphone SoCs.


I'm thrilled to see all the good news coming from AMD; that bodes really well for the 3950X I'm eyeing later this year.

I've had enough of the issues with my 6850k.


They made some process technology choices that didn't work out as they planned.

The delay of the 10nm process, and patching over the delay with 14nm+ and 14nm++, didn't stop anything else they do in process technology. They have accelerated the path to 7nm (similar to TSMC's 5nm).

ps. Intel plans to launch their first discrete GPU in 2020 using 7nm process. That's going to be interesting.


>ps. Intel plans to launch their first discrete GPU in 2020 using 7nm process. That's going to be interesting.

It is 10nm in 2020, definitely not 7nm. That is scheduled for 2021, if they can make it on time.


The issue with Intel's scheduling statements now is they're highly suspect.

I feel like they would have burned a lot less goodwill if they hadn't straight up lied for multiple years about their maturity and timeline.


I'll be interested to see how long it will be before this devastates Intel's business. At the moment, there's literally no reason to buy an Intel desktop or server processor, so how many of the Intel purchases coming from OEMs and big cloud companies are just because of contracts that might let up in a few years?


Come on, "literally no reason"? Maybe because AMD can't keep up with demand for their new CPUs. I can't buy a Ryzen 3900X without paying substantially inflated eBay prices, and the BIOS issues out of the gate are super annoying. These are not necessarily technical or performance factors, but the fact is that I can get a 9900K right now, with mature UEFI firmware from reputable motherboard manufacturers, and actually do something with the hardware.

I've been holding out on building a desktop because I could go for a while without it but my patience is wearing very thin after waiting months and now having to wait even longer just to get CPUs in stock in the first place.

Intel's advantage is OEMs and sheer output volume. The hyperscale infrastructure folks are going to be shoring up AMD financially while Intel has problems, and maybe after two years or so of this nonsense AMD might be much more of a serious option, but as of this moment it isn't a slam dunk for AMD at all.


>Maybe because AMD can't keep up with demand for their new CPUs.

You're confusing bandwidth and latency. Ryzen 3000 launched a month ago, the 3900x outperformed expectations, so the initial shipments sold out faster than expected. You can't magic up stock out of thin air, so there's an inevitable lag between retailers reporting unusually high demand and AMD being able to deliver sufficient stock.

The bandwidth question is much more important and bodes very poorly for Intel. TSMC's 7nm process is stable, providing excellent yields and has plenty of available capacity; they're already in risk production for 5nm, which is expected to free up substantial capacity at 7nm/7nm+ into 2020. Intel's 10nm has been a complete debacle and (despite the Ice Lake launch) is still blighted with sub-par yields.


> TSMC's 7nm process is stable, providing excellent yields and has plenty of available capacity

Yields in terms of getting lots of operational dies, sure, but one of the underlying problems here is chip quality. They have lots of chiplets but most of those chiplets are not fast enough to hit the clocks advertised for the 3900X.

In fact, even a lot of the chiplets sold as the 3900X are not fast enough to hit the clocks advertised for the 3900X, and the same goes elsewhere throughout the range. A lot of people are finding their chips boost to 50-200 MHz below the advertised frequency.

Essentially, the boost algorithm now takes chip quality into account when determining how high it will boost. And most of the chips have silicon quality that is too poor to hit the advertised clocks, even on a single core, even under ideal cooling, etc etc.

Thus, AMD has the somewhat dubious honor of being the first company to make the silicon lottery apply not just to overclocking, but to their stock clocks as well. They really wasted no time before shifting to anti-consumer bullshit of their own; all they had to do was advertise the chips as being 200 MHz lower and everyone would have been happy, but they wanted to advertise clocks the chips couldn't hit.

And again, the underlying problem is chip quality - a lot of these chiplets can't boost to 4.3 or 4.4 GHz let alone 4.7. AMD simply can't yield enough 4.7 GHz chiplets to go around, even if the chiplets are nominally functional. The process may be "stable and providing excellent yields" but it's not stable and well yielding enough to meet AMD's expectations.

That's a major reason they're now introducing 3700 and 3900 non-X variations - that will allow them to reduce clocks and satisfy demand a bit better.


Not reaching advertised boost clocks seems to be related to newer AGESA releases from AMD to motherboard vendors, so I wouldn't blame it on chip quality just yet. These releases contain bugfixes (RdRand, for example) and other changes, but have impacted boost clocks. People running AGESA 1.0.0.2 report reaching boost clocks easily (sustained in single-core tests), while I and others running later releases have issues.

New architecture, new chipset, bound to have some release issues. Intel is on its second or third refresh of their Skylake architecture from 2015, all ironed out.


> Intel is on its second or third refresh of their Skylake architecture from 2015, all ironed out.

More like the fourth, I think.


I've seen plenty of people having problems on older AGESA too. I've seen some people actually have higher performance on newer AGESA. It all depends on your particular sample and setup and how it fits into the boost criteria. Silicon quality still plays a massive role.

So to be clear, older AGESA isn't a magic bullet that is letting all chips "easily hit their rated boost clocks".

It could be cleaned up somewhat in future AGESA releases, and silicon quality will definitely go up over time.


>a lot of the chiplets sold as 3900X are not fast enough to hit the clocks advertised for 3900X

Do you have some sort of source for this claim?


der8auer has done extensive testing on multiple CPUs (he had 12 samples) and has discussed the topic.

https://youtu.be/WXbCdGENp5I?t=119

A lot of reviewers have noted similar things, but since they were often working with single samples they didn't want to make too much of a stink without more data; the problem is widespread, though. Out of all of der8auer's CPUs, only one hit its advertised boost clocks, and it was one of the lower-end CPUs with a less ambitious target to hit.

It may be a problem with early AGESA firmware, and silicon quality will definitely go up over time, but at least at this point in time AMD has certainly falsely advertised the clocks these CPUs are capable of achieving.


Every forum, pretty much. If you have been interested in getting one and following along with the launch, this is not a controversial statement. It's not 100% certain it's the chips' fault, though; BIOS issues are still running rampant nearly 5 weeks later, and each new BIOS changes performance significantly. It will take a while before everyone knows exactly where they stand.


Forums are not reputable sources unless it is insiders leaking info that can be verified through other means.


They might not be definitive sources, but they can definitely be the canary that triggers an investigation by somebody more authoritative.

As it stands, this mine has some dead canaries.


claim:

> ...but one of the underlying problems here is chip quality

supposed justification:

> ...every forum pretty much

That's hearsay, not actual justification. And Intel has a reputation for dirty play.


The justification was for the statement that chips aren't hitting their clocks. That is not a controversial statement.


Both my comment and the one I responded to specified "At the moment" which is fairly specific and implies that it's subject to change. Of course things will be different in 3+ months and bugs and supply chain issues get ironed out. The question that matters a lot more is whether Intel will be able to respond adequately to AMD's offerings.

It is unclear whether Intel is truly disadvantaged in throughput for any appreciable length of time. We've seen what happened to Intel after the disastrous Prescott release years ago - they worked on the Core architecture and its follow-up Core 2 that put AMD in a pretty serious rut for the past decade. Your point of the 10nm offering launching and _still_ being lackluster is the big, big problem for Intel for short-term competitiveness.


> I can't buy a Ryzen 3900X without paying substantially inflated ebay prices.

I'm experiencing the opposite. There have been so many deals on the 3900X that I have to constantly tell myself I don't need an upgrade.


I am genuinely intrigued. I've been going to my local Micro Centers in the DC/VA area for weeks now, and they said they have gotten ZERO shipments of the 3900X since release day, with maybe a couple of 3700Xs on the shelves.

This doesn't necessarily invalidate my point, though - distribution by AMD clearly needs some work when one region is drowning in 3900X processors and a very wealthy metro area has none in retail channels.


Or maybe (cough) someone at your local Micro Center hasn't been ordering to replenish those stocks.

This strategy is quite widely used in many other industries as well.

Although given it has been on sale for less than a month and demand is through the roof (I have never seen reviews so pro-AMD, not even in the Athlon 64 days), I think supply is simply a little tight while TSMC works hard.


Consumer doesn’t really matter. Pretty much any IT admin I know is ordering one directly through wholesalers. I’m actually surprised people physically go to stores anymore.


Odd. I was in Microcenter on Tuesday and they had a few 3900X's in the case. When I was there closer to launch, there weren't any and they told me that people would come in and ask for them before they even hit the shelf.


Which Micro Center location, though? At the Fairfax one, the employee in that section on Sunday said the two boxes in the floor cases were 3700Xs and not 3900Xs, because the boxes for the 3900X are larger.


Tustin, CA


Same. The Nutley St. location sold out on the first day and hasn't had any since.


Where have you been able to find them? PCPartPicker hasn't exactly shown great availability on the 3900X so far [0], Amazon is full of scalping [1] and Ebay... Well, $700+ is what I'm seeing there. This is the CPU that'll probably be powering my next build, so if you've got a source close to MSRP I'd love to hear it!

0: https://pcpartpicker.com/product/tLCD4D/amd-ryzen-9-3900x-36...

1: https://smile.amazon.com/AMD-Ryzen-3900X-24-Thread-Processor...


Check out the buildapcsales subreddit, that's where I consistently find the biggest discounts.


I don't care about getting it at a discount, I care about getting it at MSRP...


Then check out the discounted deal and mail them you'd like to pay extra


Good Morning Sir,

I hope this letter finds you in good health. Since I've seen that you are offering AMD 3900X at a discount, I'd like to inform you that I am not like those pleebs and would therefore like you to pay in full MSRP. Please let me know where I can send the goats.

I have the honor to be your obedient customer.

N. Pleeb


Funny enough, I had two buyers saying pretty much this. "I'll pay 49" on a £35 item. Why? Because apparently that's "what it's worth". I appreciate the thought, but it just looks insanely suspicious...


The point was that there aren't going to be any 'sales' on this for at least a few months...


https://www.bhphotovideo.com/c/product/1485447-REG/amd_100_1...

B&H Photo is a reputable site, and they are advertising the 3900X at $499. Now whether they will ship anytime soon...


Same here. Find nearest Micro Center, see wonderful deal, spend more than you came in looking to spend, repeat. Unless you don’t have any kind of physical computer store near you, I don’t see why buying it on eBay would be useful.


Damn, I have been desperately trying to get one since launch. I think the USA is getting them all and the rest of the world is getting screwed. I paid on launch day; my store is getting two 3900Xs this month, and they have 78 people who have paid in full and are waiting.


My original comment was quite ignorant, then, and I apologize for it. They’ve got quite a few at my Micro Center for $449 if you purchase a compatible motherboard, so I was (incorrectly) assuming that this applied to the rest of the world too.


I am in Europe, and a reputable local online shop shows that I can get it in a week for $690 including VAT ($575 without VAT).


I am in Europe, and a (previously?) reputable online retailer showed me the same two weeks ago, but one week ago (two days after the expected ship date) they updated the expected time to a month and just said they didn't get the shipment they expected: deal with it.


Arbitrage to the rescue?


Eh, it just came out last month, give them some time.

In fact Intel had supply issues with their 9900K at launch too, for at least a month or two they were often out of stock at the major retailers. If getting ahold of a 3900X is still tough by next month then maybe that's cause for concern.

I would agree that Intel is more mature on the BIOS side of things. AMD usually has launch issues that need to be ironed out with UEFI updates. But, if the past few launches are any indication, they've always got things fixed.


>If getting ahold of a 3900X is still tough by next month then maybe that's cause for concern.

Next month, the 3950X will be the one hard to get a hold of.


What? Intel are currently in the midst of a massive ongoing supply shortage. Lack of ability to keep up with demand is at least as much an Intel problem as it is an AMD one at this point.


Intel also has lower yields than AMD, which doesn't exactly help Intel out much.


AMD is literally facing unprecedented demand. There's no hype here. This is the real thing. Shortages are to be expected, but you can assume that AMD is smart enough to route their production such that they get the highest return -- e.g. to recurring enterprise customers.


The way you avoid both demand issues and BIOS issues is to purchase 2-3 months after launch.

These chips launched on schedule on 7/7, almost exactly one month ago. If you've been exasperatedly waiting for months it's not AMD's fault.

The "literally no reason" part is okay, what you need to do is adjust "at the moment" to "once we're out of the launch window".


You don't have a CPU yet, but the BIOS issues are super annoying?

I have one and there are no BIOS issues. I got the cheapest X570 mobo and everything is great.


To add to this, I've got a B350 Asus board (B350 Prime Plus). Updated the BIOS and my 3700X works great!


There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which means more or less 1080p only...). Very slight. Price/performance falls so heavily in Ryzen 3000's favor that it's foolish to get an Intel, for sure.

There's no business reason to pick Intel on desktop or server, for sure, but there's so much inertia here for AMD to counter that it'll take years.

Intel is down on the floor until 2021-2022, when their 7nm (which is a smaller node than TSMC's 7nm) begins to ship, because a) there's no reason to believe 10nm will actually ship in quantity, and b) there's every reason to believe that even if it does, it's not going to be great: the first iteration of a process never is, and 14nm is so fine-tuned by now that it is better in performance/watt, which makes Ice Lake look stupid. 7nm is said to be a totally different, independent development and not a fine-tune of the (dead) 10nm.

Intel has $12B cash on hand, though, so don't expect them to just go out with a whimper. If their profits go down a little for 2-3 years, they will live. The stock price didn't crash, with good reason. AMD had a net loss for seven consecutive quarters before turning a profit in Q2 2016. Intel won't even turn unprofitable for a similar period of time; they'll just have a little less profit. And, again, they have a decent-sized war chest to draw on if necessary.

The chip business is a slow business. In 2012, Intel said they would ship 10nm chips in 2015. https://www.crn.com/news/components-peripherals/240007274/in... This is about the same time AMD re-hired Jim Keller. AMD saw their window in 2015 when Intel's 10nm didn't ship, threw away K12 in a hurry, and brought Zen to market in 2017 -- surely they didn't expect they would get a five-year run during which Intel couldn't put up a fight.

The fun will start in 2021 when TSMC is expected to have a refined 5nm (they call it 5nm Plus) process which you bet AMD will use vs Intel 7nm.


A fun detail of this is Apple's involvement. The 7nm process was originally built to serve TSMC's largest customer, Apple's A series. AMD adopting this same process aligns them with Apple's gains, spend, and chip quality. (And obviously Apple makes a LOT of A-series chips, far more than any of AMD's production volume.)

The hilarious upshot is that this will drive down Intel's chip prices as it attempts to compete, improving Mac margins.


Disagree. Why would it take years to counter this "inertia?" It's not like you're asking someone to give up a religion they've had from birth. They are being asked to make economic purchasing decisions, and the economics are crystal clear.


There is momentum in the computing industry. The laptops and pre-assembled desktops being sold today are based on the OEM decisions that were made 1-2 years ago. Most corporate client computing machines are on a similar timeline. IT departments don't want to support a wide variety of hardware, so they standardize on a single model or variations within that model for years.

Warehouse-scale computing has similar budgets and timeframes. You don't decide how to re-build this month's 10k machines based on this month's benchmarks. You made the decision as far back as the supply chain required you to do it, maybe a year or more.

I'm sure that AMD's sales team has been telling their big customers about this generation's performance improvements for a while. But with their history, decision makers are going to discount the story a bit until they can see it in production silicon.

So the next few month's movements in AWS and the like will all depend on the extent that their decision makers were convinced many months ago.


Exactly right. AMD has not only been telling their big customers about it -- they've been letting their customers test it. Google has been testing Rome for months, and have decided to move forward with a full-scale deployment, not only for their own data centers, but for their public cloud. They are using Rome in their production servers today.

Like it or not, Google's stamp of approval carries a tremendous weight in this industry. And if that's not good enough for you, Microsoft and AWS are stepping up their deployments of EPYC as well.

As a result, a lot of smaller companies will now require a much lower standard of due diligence when approving an EPYC Rome deployment.


Not only that, but the Ryzen 3 3300U and friends are Zen+ and not Zen 2. It won't be until next year that an IT department can even buy a Zen 2 U-series laptop.

The signs are there: Lenovo called them the T480 and A485 last year, and the T490 and T495 this year, indicating these are very close.


Because there are soft costs to integrating a completely different platform into your environments. What happens when you try to pass a VM from an Intel system over to an Epyc system? Unless they have the same instruction sets you can't pass them between different processors - meaning, you have to go in and manually find the greatest common denominator and disable the rest of the instructions that aren't mutually supported. That kind of thing.

Also, software is very often the largest cost for these systems. It's not hard to find yourself paying $100k a month for an Oracle license. A one-time expense of $50k for one piece of hardware vs another is barely a blip.

And in fact that hardware is often charged based on spec. So if you have 4x as many cores on Epyc, you will pay more in software costs on a monthly basis as well. That, or the software will simply refuse to use them until you buy an upgrade, meaning those extra cores are sitting there doing nothing.

It's counterintuitive to people whose experience is building a gaming desktop at home, but hardware expenses are not necessarily a big part of total cost of ownership for enterprise operators.


> Because there are soft costs to integrating a completely different platform into your environments. What happens when you try to pass a VM from an Intel system over to an Epyc system? Unless they have the same instruction sets you can't pass them between different processors - meaning, you have to go in and manually find the greatest common denominator and disable the rest of the instructions that aren't mutually supported. That kind of thing.

That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.


OSs are not most software and are not designed around the instruction set changing underneath them during normal operation. Why would they be? You can't physically swap out a processor while the system is booted and you can't swap a virtual processor either.

It works fine if you shut everything down and reboot the system, but that is often undesirable.

The whole point of the feature is that the VM can be migrated around different physical hardware without having to interrupt service. It just suddenly is running on a different host instance. But it has to be the same type of processor... or at least the same feature set. "Close" is not good enough, it needs to be a 1:1 match.

You can manually disable features until you have found the lowest common denominator between the feature sets of the different processors. But obviously, the more types of processors you have in your cluster, the more problematic this is. In very few clusters will you find servers of mixed types; you buy 10,000 of the same server and operate them as a "unit". You don't just add in servers after the fact; sometimes you don't even replace failed servers.

And that hardware decision will have been made years ago, very often. The server market is hugely inertial, it's nothing like you putting together a build one evening and then going out and buying parts and putting it together.
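To make the lowest-common-denominator point concrete, here's a minimal sketch of the feature intersection a migratable VM would have to be pinned to. The file names are hypothetical, and it assumes you've captured the "flags" line of /proc/cpuinfo from each host:

    from functools import reduce

    def cpu_flags(cpuinfo_text):
        # Extract the feature flags advertised in a /proc/cpuinfo dump.
        for line in cpuinfo_text.splitlines():
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
        return set()

    def migration_baseline(host_dumps):
        # Features every host supports: all a migratable VM may rely on.
        return reduce(set.intersection, [cpu_flags(d) for d in host_dumps])

    # Hypothetical file names; dump /proc/cpuinfo from each host yourself.
    dumps = [open(p).read() for p in ["xeon_host.txt", "epyc_host.txt"]]
    print(sorted(migration_baseline(dumps)))

Everything outside that intersection has to be masked off on every host in the migration pool, which is exactly the manual work (and the lost features) you sign up for when mixing vendors.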


>You can't physically swap out a processor while the system is booted and you can't swap a virtual processor either.

I think you can do that on multisocket systems:

https://www.kernel.org/doc/html/v4.14/core-api/cpu_hotplug.h...


If the hardware supports it, yes, but only with basically the same type of processor.


> That shouldn't be a problem. They are both fundamentally the same architecture (amd64) and any CPU-specific features are already opportunistically handled by the vast majority of software because otherwise you wouldn't be able to run the same code on different versions of Intel's CPUs.

A very long time ago, I worked for a then very large company that sold servers. Plain standard 80486 based servers.

My job was to drive around and drop off these servers for evaluation at prospective customers, who would compare them against 80486 offerings from a different vendor.

Your argument about them all being fundamentally the same would be even stronger: it’s the same CPU.

And yet, customers did not take chances and would go through the eval motions. Because their business relied on it.

Now imagine that at a scale of thousands.

Claiming “they are fundamentally the same” is not wrong, but you don’t care about the fundamentals only. You care about the whole picture and you don’t take chances.


Paying more for lower performance and higher TDP isn't a "chance".

Very conservative corporate customers could wait a short time for good BIOS corrections and sufficient supply for all the parts (not only CPUs) they need before shopping for AMD servers, but they would be buying different hardware from the same established suppliers even if they went with Intel.


> Very conservative corporate customers could wait a short time for good BIOS corrections ...

That sentence above is a contradiction in terms.

A conservative corporate customer spends many months doing evaluations. There is no such thing as "a short time."


No one has ever been fired for buying Intel. Management often doesn't want to take a chance on the underdog when they could keep everything the same.


There are many small and large companies with relatively small compute needs (small meaning they own a small datacenter or two). Lots of the code they are running is _extremely_ legacy, and it may or may not be within their risk tolerance to switch vendors to save a hundred grand a year on CPU costs. Especially if they think like OP and believe Intel will match AMD again in just a few more years. Why rock the boat?


Of course such decisions are always political. But now with Google backing EPYC Rome, there is the political risk of not switching, and finding yourself in the Stone Age 5 years from now.

There's a lot more incentive to explore EPYC than there was a day ago.


What if Google did not include that very old legacy code during their evaluation process?

And exactly what kind of risk are you talking about?

If switching is as easy as you claim it is, you can still do so next year or the year after.


AMD's marketing department has addressed this point. https://www.youtube.com/watch?v=rrJbT7AqcD0


People are creatures of habit. There are people still using Yahoo! for no reason other than it's what they are used to. For many people, buying Intel is the same thing. It takes years to win those people over. (and usually the argument that ultimately wins them over is 'everyone else is using it', rather than the economic one) Those of us who are early adopters jump ship as soon as it's obvious there's a better option, the masses move at a much more glacial pace.


Sure, people are creatures of habit. But this isn't Yahoo vs Google we're talking about. People are going to be throwing down hundreds or thousands of dollars per CPU, and the differences are not remotely subjective.


As others have already noted: those hundreds or thousands per CPU are often just a blip on the radar compared to other costs.

Your comments in this discussion are very small-scale, retail-oriented.


Mobo compatibility and resource-heavy enterprise applications that have been built around Intel's chips are not things you can change overnight.


> There's a very slight advantage for Intel CPUs in non-GPU-constrained games (which means more or less 1080p only...). Very slight. Price/performance falls so heavily in Ryzen 3000's favor that it's foolish to get an Intel, for sure.

It's not a huge advantage, but I'm not sure I would go so far as to call it foolish to buy Intel at this point. If your only serious workload is gaming, Intel seems like the obvious choice to me. You can actually get a decent all-core overclock on the Intel parts, which leads to a significant performance lead in esports titles.


We are talking a few percent in fps, at 1080p, where you are going from 200 fps to 220 fps.

To call that a "significant" performance lead is silly.


That's a 10% gain, which might help you in some games stay above the 144Hz or even 200Hz refresh rate of your monitor. It doesn't matter for most of us, but some hardcore esports gamers might care. I guess that's a very small minority, though.


If you think a 10% fps gain is silly, why buy a high-end CPU for gaming at all?

Also, the "only at 1080p" meme is not really true for some esports titles. Counter-Strike is so CPU-bound that it really doesn't matter what resolution you play at.


At 1080p on 100Hz+ monitors...


I think the one reason is that if you need the best gaming desktop performance, Intel still dominates; but that little extra performance comes at a very steep price.

My next desktop will be AMD+Nvidia. Now if only I could avoid the Nvidia tax for deep learning...


I’ve looked at a lot of the benchmarks and the difference was in the ballpark of 1-5%, which isn’t dominating from my perspective... did I miss a few?


DigitalFoundry had an interesting take on frame rate/frame time performance on Intel/AMD. If you look at the graph, you can see Ryzen dip down more at times when more computation or memory throughput is needed.


That's not true. With more cores, Ryzen can run a game alongside other apps without any problems; it's not the same with Intel, though.


Unless you are compiling Chrome in the background or streaming, you're not going to saturate even eight cores while gaming. In most benchmarks I've seen, the 9900K still performs better while streaming.


Now you are talking about Intel's best desktop CPU. Of course it performs better while streaming; otherwise it would be...


Okay, what parts are we talking about then? Aside from the 3900X, the Intel parts all have the same core count as their AMD counterparts at similar price points.


Yes, if you don't count SMT.


Similar mindset: who needs more than 32 bits for an internet address?


Part of it is that Intel K parts have quite a bit of overclocking headroom, while Zen 2 has none.


Intel has almost no overclocking headroom. It's only about 5% or so. Both AMD and Intel have squeezed everything they can out at this point, there's not really anything left.

The main difference is that on Intel you get that magical-sounding 5GHz number by overclocking, but it's not actually much higher than stock (4.7GHz on the 9900K is the "all-core turbo").


Right. I guess the Ryzen boost clock debacle is also part of it...


It’s a good set of first steps. AMD needs to keep executing as they have spent decades as second fiddle to Intel. Intel meanwhile has mindshare, enterprise agreements, and other partnerships that make its position as the market leader very sticky.


For desktops, gaming still drives a huge market segment, where the 9700K/9900K still have higher performance (especially if you overclock).


Even for gaming, the advantage is marginal though, and it comes with a price increase that is often better put into a better GPU.

Even formerly die-hard Intel gaming shops (Linus Tech Tips and similar) are recommending AMD for most gaming rigs nowadays.


i9 9900K: $484 on Newegg

https://www.newegg.com/core-i9-9th-gen-intel-core-i9-9900k/p...

Ryzen 9 3900x: $499 and sold out

https://www.newegg.com/amd-ryzen-9-3900x/p/N82E16819113103

And here’s gaming performance. Far Cry 5 has a 25fps advantage. And yes, I have an RTX 2080 Ti

https://www.pcworld.com/article/3405567/ryzen-3000-review-am...

So for a large group of people, Intel still makes sense.

And my reply was in response to “literally no reason to buy an Intel“ which is clearly just not true.


Thunderbolt?


There’s still a slight performance advantage with gaming. On top of the fact that it’s still hit or miss getting any Ryzen 9s and getting an i9 is easy, anyone who is shelling out for the RTX 2080 GPUs will probably go Intel.

On top of which, given the history, those building gaming machines will assume Intel's next 10nm CPUs will still outshine AMD's in gaming for the foreseeable future.


Exactly. If you're going to spend $1,200 on a GPU, why would you care about saving $30 on a processor?


Well, other than the unbroken history of AMD managing to fail when ahead.

Intel is a juggernaut because even if it's not shipping the best, it's always shipping something on time, every time.


> Intel is a juggernaut because even if it's not shipping the best, it's always shipping something on time, every time.

Intel is in their current situation because their 10 nm process is years late, and is still not able to manufacture high-performance parts like server CPUs in any meaningful quantity for a price that the market would bear. They've also had severe shortages over the past year, which has resulted in orders being delayed for weeks or months.

And of course, there's the matter of their products basically being warmed-over refreshes of a nearly 5-year-old architecture (because their new architectures are dependent on 10 nm), which has resulted in comically lopsided performance in AMD's favor, in basically every objective metric that matters.


What? Intel has been slipping the ship date on their 10nm process node for years. It's slipped so much that it's likely not even to be fully released before it is discarded for its successor.


FWIW, Intel still has an edge over AMD for specialty use cases. I would like to see some benchmarks that compare Intel's MKL library against OpenBLAS on AMD.
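A minimal sketch of such a comparison, assuming you have two numpy builds, one linked against MKL and one against OpenBLAS (e.g. in separate conda environments), and run the same script under each:

    import time
    import numpy as np

    def time_dgemm(n=4096, repeats=5):
        # Best wall-clock time for an n x n double-precision matmul; this
        # is dominated by whichever BLAS library numpy is linked against.
        rng = np.random.default_rng(0)
        a = rng.standard_normal((n, n))
        b = rng.standard_normal((n, n))
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            a @ b
            best = min(best, time.perf_counter() - t0)
        return best

    t = time_dgemm()
    print(f"best DGEMM: {t:.3f} s, ~{2 * 4096**3 / t / 1e9:.0f} GFLOP/s")

(np.show_config() tells you which backend a given environment actually linked.)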


You are correct about that, and that's because for over 10 years AMD was nonexistent in academia / HPC / scientific computing. If they keep the edge long enough for the HPC community to care, maybe an alternative will emerge. Otherwise, if AMD overpowers Intel in hardware/clocks by enough, it might not be an issue.

In addition, some specific software (e.g. the Android emulator on Windows) doesn't exist for AMD CPUs.


The Android emulator can run on AMD CPUs under Windows 10 as of a year or so ago, actually. It now only requires Intel's HAXM on older versions of Windows and I don't think Zen 2 or recent Intel chips officially support those versions of Windows anymore.


You can still use MKL on AMD and it does yield a perf increase over OpenBLAS


What about the motherboard chipsets? In the past AMD had decent CPUs but their other chips were not that great.


Does price matter though? I would imagine power consumption is a much bigger deal.


EPYC Rome wins on perf/watt as well.


> "We designed this part to compete with Ice Lake, expecting to make some headway on single threaded performance. We did not expect to be facing re-warmed Skylake instead. This is going to be one of the highlights of our careers"

Looks like AMD expected Intel would actually start to fight back a few years ago when AMD started the Zen and Rome cores, and AMD has been running full steam ahead since then. Meanwhile, in reality, Intel dropped the ball and was too slow to react, and now AMD has basically leapfrogged them. What a time to be alive.


Intel's 10nm node has been a disaster for years. This comes off as less "Intel has been doing nothing forever" and more "what they planned to do blew up in their face".

That being said, if AMD had never made Ryzen, you can bet your bottom dollar we would be "enjoying" 6-core hyperthreaded Ice Lake desktop CPUs and 16-24 core server CPUs next year, at the prices AMD is now charging for 12-core and 48-core chips.


From what I heard, Intel 14nm was running a little late and management pushed the engineering team really hard.

As a result a bunch of the greybeards left, so for 10nm a bunch of institutional knowledge was completely missing and they had to learn everything again the hard way.


> "what they planned to do blew up in their face".

Ah the P4 days, what glory that was.


In the “what a time to be alive” category, this is essentially a repeat of Opteron.


And it was a hell of a time to be alive. AMD was first to 1GHz, and the Athlon/Opteron opening up the x64 (amd64) architecture was awesome. I didn't go back to Intel until the Core 2 Duo, which was also an amazing CPU.


Zen was originally led by the same person as Opteron/K8/Hammer, Jim Keller... who now works for Intel.

I think the situation is a bit different this time around, though, as AMD's bet on TSMC is paying off in a major way while Intel continues to flounder in the fab space.


> AMD’s bet on TSMC is paying off in a major way

Going back a bit further, AMD spinning off GlobalFoundries and then shopping around for fabs on the open market is definitely looking like a very good decision with hindsight. GF has also, since then, run into problems rolling out a next-gen node, and eventually cancelled their 7nm. Hard to say how much you can credit AMD for foresight there vs getting lucky, but being manufactured by TSMC vs. in-house has worked out well.

I don't think this was obvious at the time. Some people thought it was a good move (obviously including the decision makers), but a good number of pundits interpreted AMD giving up on a proprietary in-house fab and relying on commercially available facilities as basically AMD throwing in the towel on being able to compete head to head with Intel as an integrated chip designer/manufacturer, relegating them to more the budget space. To be fair, at the time (2009), TSMC processes were behind Intel's, so you would've had to predict TSMC catching up and surpassing Intel.


I don't think the foresight is in a particular manufacturer but in the fact that each successive fab generation was becoming more prohibitively expensive and more dominated by economies of scale.

I kind of want to see an analysis of the minimum viable volume of product to justify a new fab process going back over the years. Today it's just not feasible, compared to Noyce et al who could do it in their lab.


What's fun about this is that now Intel's world is on fire from a single source. The only way AMD could have guessed that the TSMC process would improve so much is if they guessed that Apple would get into their own chip design (which only became clear in 2011-2012 and showed results in ~2014-2015) and would bankroll TSMC's 10 and 7nm.


Opteron also (eventually) killed off all the 64 bit RISC players. Quite a juggernaut.


I am not seeing any similarities to the Opteron era, from Intel's current position, to AMD's roadmap and execution, to TSMC's roadmap.


No, it’s not. AMD didn’t have double the performance for half the price back then. This is a much bigger advantage.


Page 4 (https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/4) lines up AMD's prices, core counts, and frequencies with Intel's. AT says all these have 128 PCIe 4.0 lanes and support up to 4TB of RAM per socket.

Intel's own competitive analysis figured competition in servers "is likely to be the most intense in about a decade" (https://www.techpowerup.com/256842/intel-internal-memo-revea...). Sounds about right.

The AMD presentation had announcements from Azure and GCP but not Amazon.

How soon Intel gets Ice Lake ready for servers seems pretty relevant here.


Two more thoughts:

- A slide in AMD's presentation suggested 64C would cost $7k, but that's for the top two-socket-capable version. If you "just" want 64C, not 2x64C, there is a $4.4k single-socket 64C part. Or if the RAM/PCIe capacity of two sockets is useful, you can get two 32C parts, and those start at $2k each. Makes pricing look better at 64 cores (per-core math sketched below).

- Since they turned on most everything on the I/O die for all parts, seems possible to build boxes with lots of RAM and I/O but as little as 8C or 16C ($500 or $1k) of CPU. Of course, balance tends to be nice, but the ability to make it as lopsided as you want could be relevant for applications that are very RAM/IO heavy (caching), or if you're running a commercial DB where you pay by the core. Neat.
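Putting per-core numbers on the first point above (a trivial sketch; prices are the rounded figures quoted here, not exact list prices):

    options = {
        "top 2P-capable 64C part": (7000, 64),
        "single-socket 64C part": (4400, 64),
        "two 32C parts at $2k each": (2 * 2000, 2 * 32),
    }
    for name, (price, cores) in options.items():
        print(f"{name}: ${price / cores:.2f}/core")

which works out to roughly $109, $69, and $62.50 per core, respectively.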


From the article, mid-2020 for the Ice Lake equivalent:

> Ice lake promises 18% higher IPC, eight instead of six memory channels and should be able to offer 56 or more cores in reasonable power envelope as it will use Intel's most advanced 10 nm process. The big question will be around the implementation of the design, if it uses chiplets, how the memory works, and the frequencies they can reach.


Right, it depends on the things AT mentioned, but also on execution: whether the issues with 10nm are over (now that they've shipped some chips) or whether they'll also slow the scale-up to server-sized dies. Intel moving to chiplets could certainly mitigate that risk some, with tradeoffs similar to the ones in EPYC.


Supposedly, they will only be shipping to select customers initially (and at 28 cores) in 2020.


Amazon already has 1st generation EPYC servers available: https://aws.amazon.com/ec2/amd/ so I assume they will follow up as well.


Yeah but AMD should just beg Amazon to remove that instance type. It's hilariously slow. I don't know if Naples was just the beta of Rome or what but if you use it you won't like it.


Everybody seems to gloss over single-threaded performance. AMD is trading cores for clock rate.

I just put together an EPYC (Naples) 3201 (8 cores, no SMT, 2133MHz DDR4) and my circa 2012 Xeon E3-1230v2 (4 cores/8 threads, 1333MHz DDR3) is still faster because of the higher clock rate. More interestingly, the EPYC peaks at 45W at the outlet but the Xeon only hits ~55W. The EPYC advertises a 30W TDP, but IIRC the Xeon advertised a 65W TDP for the chip, so Intel substantially over performs.

I don't regret building the 3201, and I'm looking forward to the next generation of EPYC embedded.[1] But Intel still has superior design skills when it comes to power consumption and clock rates. I'd expect Intel to keep pressing this advantage, especially because at this point it's all they've got left.

[1] Anybody know when it's coming out? Are they gonna wait for Zen 3?


> Everybody seems to gloss over single-threaded performance. AMD is trading cores for clock rate.

No they aren't and no they didn't.

Obviously this review of the ultra high end is not focused on single thread performance because that'd be insane. But in segments where it does matter, like consumer, it was not glossed over at all.

And if you look at the comparison table, AMD didn't really trade cores for clocks. Both the EPYCs and the Xeons are all in that mid-2GHz base frequency range.

> But Intel still has superior design skills when it comes to power consumption and clock rates.

Except, no, not really. Clock rate, yes, but AMD has an IPC advantage now, so it's not entirely clear-cut. And you only get that clock rate advantage on the consumer chips anyway.

Power consumption is not at all in Intel's favor, either.


> Obviously this review of the ultra high end is not focused on single thread performance because that'd be insane. But in segments where it does matter, like consumer, it was not glossed over at all.

I principally meant glossed over in discussion in threads like this. The review has a whole section on single-threaded performance, and Xeon comes out ahead:

https://www.anandtech.com/show/14694/amd-rome-epyc-2nd-gen/9

It doesn't just matter in consumer land. It matters in some high-end workloads as well. I was replying to someone who lamented the poor performance of EC2 EPYC instances. Depending on your workload, they are poor performers.

I was very careful in my statement when I said that Intel has superior skill in designing for lower power or high clock rate. I did not say that that particular skill results in Xeons having a generally better performance, cost/performance, or power/performance profile. What I'm saying is that they can use that skill to anchor themselves at the very low power and very high clock rate niches to slow their decline.

AMD will continue to close the gap even in those respects. If I had put an EPYC 3251 up against my Xeon E3-1230 it would have smoked it at roughly the same power draw, because doubling the core count absolutely matters. I'm not disputing that. And a Zen 2- or Zen 3-based 3201 would probably outperform the 1230 as well without double the core count, though notably no such CPU exists at the moment. But people are underestimating what Intel still has going for them. Intel's strategy at this point will be to slow their market loss to buy time to retool and counter, just like when AMD had the lead 15 years ago. Intel has some leverage to slow that loss.

And as some others have pointed out, Intel also has a huge product lineup. Do you know how many systems you can find with an EPYC 3000 series embedded chip? Only two: Supermicro in a mini-ITX form factor and Congatec on a COM Express Type 7 module. So, basically one as far as most people are concerned. OTOH, Intel has more SKUs in that market space than I can even be bothered to investigate, some of which are equal to or better than the EPYC in power, performance, and cost.[1] It'll take many more years for AMD to begin to displace Intel in those spaces. Again, that gives Intel time, breathing room, and cash flow.

[1] I chose the EPYC because of Intel's poor security track record, and to support AMD. If the only thing that mattered to me was power, performance, or cost (individually or together) then a Xeon D would have been a smarter choice. EPYC Rome would likely change that, but there is no Zen 2/EPYC Rome embedded yet, nor any hint of one. I'm beginning to think it won't happen until Zen 3/EPYC Milan.


Amazon was there, but not in person: https://www.reddit.com/r/AMD_Stock/comments/coly1a/


I prefer the ServeTheHome review, this is more in their wheelhouse: https://www.servethehome.com/amd-epyc-7002-series-rome-deliv...


Yes, but when I posted the AnandTech review, the ServeTheHome review wasn't up yet. Both are very decent reviews; I can only hope this gets more upvotes so others can read it as well.


Absolutely brutal TCO improvements... I feel like this could start a new race to the bottom with the hyper-scalers. The implications could be profound considering how concentrated a huge portion of our global compute capacity is these days.

Is there a hypothetical point at which it becomes cheaper to spend additional capital in order to remove a perfectly-good Intel server in favor of a new AMD server (considering potential savings on cooling+power+space)?
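One way to frame it is a break-even calculation. Here's a hedged back-of-the-envelope sketch; every input is a made-up illustrative value, not a vendor figure:

    def breakeven_months(new_server_cost, consolidation,
                         old_watts, new_watts,
                         usd_per_kwh=0.10, pue=1.5):
        # Months until power+cooling savings repay the capex of one new
        # server that replaces `consolidation` old ones; PUE folds the
        # cooling overhead into the power bill.
        saved_kw = (consolidation * old_watts - new_watts) / 1000
        usd_per_month = saved_kw * 24 * 30 * pue * usd_per_kwh
        return new_server_cost / usd_per_month

    # e.g. one dual-64C box replacing three dual-Xeon boxes:
    print(f"{breakeven_months(20_000, 3, 400, 500):.0f} months")

With numbers like these, the payback from power and cooling alone is measured in decades, so the answer mostly hinges on the consolidation ratio and on the costs this sketch ignores: rack space, per-core software licensing, and whatever you recover selling the old boxes.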


The best time to buy an EPYC Rome server was 10 years ago. The next best time is today.

Edit: Okay, since some people are downvoting my joke, a more serious answer -- if your server farm is running out of space, and you're currently looking at renting additional space to accommodate your growth, EPYC Rome will take up substantially less space for a given level of performance. This is one case where it may make sense to instantly replace.

Another is to test EPYC Rome on a limited basis, to evaluate potential for a larger scale replacement in the future -- I think many companies in the space are going to fall into this camp.

Hence, in the near future there will be a modest increase in EPYC Rome uptake, followed by a massive increase in the medium term.


Physical space is almost never a constraint in data centers. It's power and cooling that are almost always the limit.

If AMD's chips offer better performance/watt, that would make them attractive. The capital cost of purchasing a server tends to be a smaller number than the cost of the power to run and cool it over its lifetime.

However, what also matters is vendor support. Lots of companies have contracts for pricing and support on servers, and they won't necessarily change to save even a large amount of money on one generation of hardware. So for adoption of Zen2 CPUs in the data center, it will be critical for AMD to get the big names on board.


Dell, HPE, Lenovo, Cisco, Supermicro, and others are on board. It's so weird when people talk about past events like they're hypothetical.


Big names like Cray, Google, AWS, and Microsoft are already on board.

The next generation of the Zen chipset has already been designed.

EPYC Rome is likely being priced such that Xeons would have to be sold at a loss to match its $/perf.


"on board" in this case means producing a full line of servers with the AMD chips, not just being interested in them.

People can't buy interesting ideas, and businesses aren't going to change system vendors overnight.


Wait, what? All of the above companies are basing major products on EPYC Rome. Have you read the press releases?


Less physical space = less cooling requirements


No that doesn't make any sense. If you can fit two 200W systems in the same space as a single big 200W system then using less physical space increases the cooling requirements and the power density might become high enough to warrant exotic cooling solutions like immersion cooling.


Yes, you are right. And manufacturers are packing more and more wattage in the same space over time.

Thanks for mentioning immersion cooling as a potential solution to this; as a shameless plug, we are working very hard to make immersion cooling not exotic anymore at Submer ( https://submer.com ), solving all the problems we saw of the previous state of the art and helping with some of the biggest problems in the data center industry: cooling, power, densities, real estate costs, data center location, DC power distribution, TCO and more.

Exciting times... we are seeing a lot of trends pointing towards our immersion cooling solution and stealth traction from big names.


That's always been the case since the beginning of Moore's law.


This should be interesting to watch. The new Ryzen CPUs are good, but the desktop market is a rather small portion of the overall CPU market, which is dominated by servers and laptops. AMD making inroads in the server space is where they _may_ actually be able to eat up some meaningful market share. That said, I wouldn't count the chickens before they've hatched here. AMD did this once before with Opteron in ~2006. For them to have staying power they need to keep it up for several cycles; only then will the big fish jump ship. I expect Intel to respond with price cuts: they've been monopolizing the server market for a decade or so now without real competition, so they can charge whatever they want. At least in theory, since Intel owns its fabs, they should win a price war.


This is different from the Opteron days. Today, AMD's offering is substantially cheaper than Intel's, on both an absolute and a performance-adjusted basis. AMD is now ascending at a time when Intel is plagued by a never-ending stream of embarrassing security flaws, and the market has grown sick of its monopolistic exploitation of its customers. Furthermore, AMD has already designed the next-generation chip!

AMD could have clearly charged more for these EPYC Rome chips, but they priced low for a reason -- to grab as much market share as humanly possible as quickly as possible, and I believe they will do so.


I think you're making the mistake of assuming a static market. Pricing is often assumed to be based on cost; it's not, or at least not only. Intel's prices are high because they can be, not because Intel's costs are inherently higher. In fact, they should be lower, given that Intel operates its own fabs and thus doesn't pay a built-in margin to an outsourced fab. If the market deems perf/$ important, Intel will just cut its prices. Also keep in mind that no one who matters at market scale -- e.g., Amazon, Google, Facebook -- is paying these list prices. It's entirely possible Intel is still winning the perf/$ fight in those contracts.


I disagree. If you do some research, you'll learn that the chiplet architecture employed by EPYC Rome is inherently and substantially cheaper to manufacture than the monolithic design employed for Xeon.

AMD priced these chips so low because they know Intel is going to fight back with deep discounts. AMD also knows they can manufacture more cheaply for any given level of performance.

It shouldn't surprise anyone if AMD is offering prices that Intel could only counter by selling Xeons at a loss.

Furthermore, some orgs do not put a price on security. With Intel's poor security track record in the recent past, it's no surprise that Google hopped on the EPYC train.

So at least one of the following two things is going to play out over the next year: 1) AMD takes massive server market share. 2) INTC bleeds red to slow the loss of market share.


Any idea when Zen2 laptops will be available?


The APUs have so far lagged behind the CPUs by a year. Based on that, we can expect Zen 2 + Navi APUs some time next year.


I'm hoping those have an 8 core chiplet and a GPU chiplet.


Further benchmarks on Phoronix of the 7502/7742: https://www.phoronix.com/vr.php?view=28142


Wow, I don't think I have ever seen such a dominant benchmark victory across the board... for 40% less money.


At the end of the Phoronix review, you'll find that performance per dollar is 4x - 8x better with AMD. Even assuming Intel slashes prices, I still don't know if they can match it.


It also beats Intel on performance per watt, according to the last page.

Seems like a slam dunk from AMD.


Question: does anyone know whether the current generation of Google TPU uses PCIe 4.0 already? The POWER9 systems support PCIe 4.0, and last year Google confirmed they use them. If that TPU is 4.0 already, then expect Google to buy an ungodly amount of these EPYC servers to match them, as Intel has absolutely nothing to answer with here -- not until 2021; next year's Ice Lake is only rumored to ship with 26 cores, tops.


> AMD offers you up to 50 to 100% higher performance while offering at a 40% lower price

I expect to see many more AMD-based EC2 instance tiers on AWS.


servethehome had an interesting paragraph in their review:

> We are also not allowed to name because Intel put pressure on the OEM who built it to have AMD not disclose this information, despite said OEM having their logo emblazoned all over the system. Yes, Intel is going to that level of competitive pressure on its industry partners ahead of AMD’s launch.

https://www.servethehome.com/amd-epyc-7002-series-rome-deliv...


Can you expand on this a bit? Who is disallowing whom from naming what? From my first reading of your comment, it seems like Intel is pushing partners not to disclose their relationship with AMD? Isn't that, like, almost cartel-like behavior?


It's probably best to ask servethehome. I also understood it that way on first reading, but on a second read I'm not sure. The wording also reads to me as if they forgot a word or two (but maybe that's because I'm not a native speaker).

Here is the entire section:

> We are going to present a few data points in a min/ max. The minimum is system idle. Maximum is maximum observed through testing for the system. AMD specifically asked us not to use power consumption from the 2P test server with pre-production fan control firmware we used for our testing. We are also not allowed to name because Intel put pressure on the OEM who built it to have AMD not disclose this information, despite said OEM having their logo emblazoned all over the system. Yes, Intel is going to that level of competitive pressure on its industry partners ahead of AMD’s launch.


The build team has already started looking for a 1RU, 2-server blade, 2 CPUs per server, to buy and test our workload on. Space and power vs. test-and-build throughput wins all battles. Great job, AMD. Personally, I just want to know if it can handle Civ6 with a huge map and 18 AI players. It's the little things...


IDK about EPYC, but my 3700X was blazing through the early game with those settings on Linux Civ 6.

Whereas a standard map with 8 AIs was unplayable on my FX-8350.


Civ AI is still single-threaded, so the 9900K and 9700K are the reigning champions. Zen 2 brought AMD close to parity, though.


Is there any benefit to multicore on Civ6 at all?


Were Spectre/Meltdown/etc. mitigations active while benchmarking?


I didn't see it called out in the article, but other reviews have mentioned applying them, and I assume that's standard practice at AnandTech.
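
If anyone wants to check a box of their own, here's a minimal sketch that reads the kernel's self-reported mitigation status (Linux 4.15+ exposes this under sysfs; this says nothing about AnandTech's actual configuration):

    # Print the kernel's report of CPU vulnerability mitigations (Linux only).
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:25s} {entry.read_text().strip()}")
    # Typical lines look like:
    #   meltdown                  Not affected
    #   spectre_v2                Mitigation: Retpolines, IBPB: conditional, ...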


> OEMs were also reluctant to partner with the company without a proven product.

I'm a little puzzled by this; we've been able to buy Ryzen and EPYC machines from Dell at our university for a while. The biggest problem has been the NVMe SSDs.


Lenovo has been all over Ryzen since Zen+ too.


As with all the comments: this really is EPYC. And we have a roadmap of Zen 3 and Zen 4 over the next two years. Zen 3 will likely be an enhancement of Zen 2 on 7nm EUV; Zen 4 will bring DDR5, PCIe 5.0, and 5nm, likely with some more IPC improvement and I/O die improvement.

The way I see it, CPU performance for most of my needs has reached a tipping point. Unless something unexpected happens, performance per dollar is only going to increase over the next few years. I would not be surprised to see 128 cores / 256 threads in a single socket by 2021/2022.

The question I have in my head now: when will DRAM prices drop to the point where I can have a 64-core EPYC server with 4TB of memory and call it a day? While some insanely large datasets exist, for possibly 90% of web databases I doubt the database is over 4TB, and it could all be in memory. But even at $10/GB, which is already very low for a 256GB DIMM, 4TB is like $40K.
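
A quick sanity check of that arithmetic in Python, assuming the flat $10/GB figure:

    # Back-of-the-envelope DRAM cost, using the comment's $10/GB estimate.
    capacity_gb = 4 * 1024         # 4TB of memory
    price_per_gb = 10              # USD, already optimistic for 256GB DIMMs
    dimms = capacity_gb // 256     # 16 sticks -- fits a 16-slot EPYC board

    print(f"{dimms} x 256GB DIMMs ~= ${capacity_gb * price_per_gb:,}")
    # -> 16 x 256GB DIMMs ~= $40,960, i.e. "like $40K"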


Can anyone explain this?

> ...first generation of EPYC, ... attaching each one to two memory channels, resulting in a non-uniform memory architecture (NUMA).

OK

> 2nd Gen EPYC, ... solved this. The CPU design implements a central I/O hub through which all communications off-chip occur.

Well, it's solved in that all memory accesses are now uniformly a bit slower, since they all have to go through the new memory-access hub. Is this a correct reading?

also

> The CCDs consist of two four-core Core CompleXes (1 CCD = 2 CCX). ... those CCX can only communicate with each other over the central I/O die. There is no inter-chiplet CCD communication.

What is that communication for? Presumably the MESI (or whatever AMD uses) cache-coherence traffic, and possibly the sync instructions (CAS, atomic increment) too? Anything else I'm missing?

Thanks

(edit: bloody love that you can click the 'Print This Article' button and it becomes a single long web page. Webbyness as god intended).


> Well, it's solved in that all memory accesses are now uniformly a bit slower, since they all have to go through the new memory-access hub. Is this a correct reading?

Yes. But the new EPYC chips have doubled their L3 cache, and that new memory-access hub has stupidly high bandwidth.

The larger L3 cache mitigates the latency problem, while the memory-access hub has more than enough memory bandwidth to feed all the cores.
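
If you want to see what the OS observes, here's a minimal sketch that reads Linux's standard sysfs NUMA layout (Linux only; the expectation in the comments is an assumption based on Rome's default NPS1 mode, where each socket is one NUMA node, versus four nodes per socket on Naples):

    # List NUMA nodes, their CPUs, and the ACPI SLIT distance table.
    from pathlib import Path

    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        distances = (node / "distance").read_text().split()
        print(f"{node.name}: cpus {cpus}, distances {distances}")
    # Uniform off-diagonal distances => memory latency no longer depends
    # on which chiplet your thread happens to be scheduled on.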


I wish I had not built an Epyc based workstation last year. This looks like a spectacular release.


If you had waited until now, then next year you'd be saying exactly the same thing when AMD announces the next iteration.


Sell last year's Epyc workstation on the used market and buy a new one.

(Seriously - I'm about ready to buy a used Zen1 workstation.)


Yeah, I'm interested in those performance penalties as well.


That AMD is pricing this aggressively indicates to me that they expect Intel to slash prices hard too. If the graphics card releases are anything to go by, AMD might even cut further after Intel shows its hand. I know Intel has good margins on server parts, but it's also their bread and butter -- what would it mean to Intel's bottom line if they had to cut prices (not just for select whales, but MSRP) by enough to compete here, say 40%?


I don’t understand the spec2006 results. In the first table, what are the units? In the second table, are positive or negative percentages good?


SPEC has a reference machine that they ran the benchmark on; the numbers given are (reference machine runtime) / (tested machine runtime). So bigger numbers mean it ran faster, and positive percentages are good.


I assume they're throughput numbers[1], going as ~1/(time taken). So higher is better.

The percentages for "A vs B" seem to me to be (score_A / score_B - 1) * 100.

[1]: https://en.wikichip.org/wiki/spec/cpu2006
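
Putting both readings together as code -- a tiny sketch of my interpretation of the tables, not SPEC's official definition:

    # SPEC-style scores and the "A vs B" percentage, per the formulas above.
    def spec_score(ref_seconds: float, measured_seconds: float) -> float:
        return ref_seconds / measured_seconds    # higher = faster machine

    def a_vs_b_percent(score_a: float, score_b: float) -> float:
        return (score_a / score_b - 1) * 100     # positive = A is faster

    # Toy numbers: machine A finishes the workload twice as fast as B.
    a = spec_score(9650, 100)    # 96.50
    b = spec_score(9650, 200)    # 48.25
    print(f"A vs B: {a_vs_b_percent(a, b):+.0f}%")   # -> A vs B: +100%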


> Due to bad luck and timing issues we have not been able to test the latest Intel and AMD servers CPU in our most demanding workloads

AT just can't catch a break. Are there any reviews out there that tested heavy workloads? Those are the interesting ones for a CPU like this.


The thing is, you can't even get Ryzen inside a Dell or Lenovo machine -- so for businesses it's kind of hard to get on board with the hype. Intel is going to keep its market share until AMD can get into some prebuilt systems.


Typing this on a Dell Ryzen system. They don't have Ryzen 3000 systems out yet, so that much is true.


In laptops, you can. Lenovo launched the T495 and T495s a couple of months ago. AMD being part of the venerable T-series is a huge testament to their comeback.

Not sure about desktops.


It's also available at lower price points. Typing from E495 which set me back like 600 EUR.


What? Lenovo is selling Ryzen-based desktops by the truckload. I've bought more than a dozen this year already.

Dell is in that game too.

What AMD needs for consumer computers is to get into laptop OEM builds.


I've bought the Lenovo ThinkCentre AMD products. There are NO Ryzen Zen 2s -- literally NO Zen2 / 7nm products in prebuilts. If you can provide a link, I would be thrilled. I'm talking about the 3600+ products.

Many folks are posting that there is "no reason" to buy Intel. Actually, Intel is going to keep market share if you can't actually buy these great AMD products in a prebuilt configuration.

Will they show up eventually? Sure. But it can take a surprising amount of time for something like this to come to market (I'm hearing murmurs of issues with heat / BIOS, etc.). I hope AMD is working closely with whoever is most ready / has a long track record with them (Lenovo comes to mind) to get these to market in a config that a business can buy.


The shift for this is likely going to take some time. The design cycle for business machines is glacially slow.

Even the first-generation Ryzen CPUs are amazing for multitasking and general computing workloads. Zen 2 goes quite a bit beyond that and is priced very competitively. That can translate into increased productivity along with savings, but it will take some time before OEMs are really ready to take the plunge rather than just dipping their toes in.



