AMD 16-Core ThreadRipper Enthusiast CPUs to Reportedly Utilize 4094 Pin Socket (hothardware.com)
164 points by rbanffy on May 15, 2017 | 94 comments



Finally it looks like the desktop CPU market could become interesting and competitive again. Intel being pushed to release a new top-of-the-line Core i9 processor?! I cannot remember anything like this since the amd64 days more than 10 years ago.

Would love to play with one of these setups..


The i9 isn't new. It's a rebranded Skylake-X/KabyLake-X chip. This has been on their roadmap for over a year [1]; they just changed the name to trick consumers into thinking it's something new.

The hype here and on reddit about the i9 is astounding to me. You've been able to get an i7 in a 10-core variant for a year now [2]. Need 6 cores? Take a time machine back to Q3'11 and pick up an i7-3930K [3]. And if you needed twelve cores, you could buy a Xeon [4]! Heck, go crazy, get 22 [5]!

The only difference is likely price; hopefully the i9 is cheaper.

Fundamentally, here's why you should not be excited about these things: they'll use the enthusiast motherboard socket/chipset, not the consumer socket. And they'll have a 140W TDP, whereas Ryzen has 95W max. These two things are what make Ryzen so amazing. If you ignore these two things, then you might as well buy a Xeon; Intel has led in that category for years and continues to.

[1] http://wccftech.com/intel-skylake-x-kaby-lake-x-q2-2017-road...

[2] http://ark.intel.com/products/94456/Intel-Core-i7-6950X-Proc...

[3] http://ark.intel.com/products/63697/Intel-Core-i7-3930K-Proc...

[4] http://ark.intel.com/products/91767/Intel-Xeon-Processor-E5-...

[5] http://ark.intel.com/products/93805/Intel-Xeon-Processor-E5-...


The problem is of course that the 6+ core i7 chips don't actually fit consumer motherboards; you need to go to the server segment, where motherboards are significantly more expensive and lack a whole slew of additional features that are nice for desktop machines.


Intel has a high-end desktop chipset, related to the server boards, which is used in such cases.

X79 and X99 are both related to equivalent server chipsets, the C602 and C612 respectively. Yes, boards are more expensive, but the HEDT platform has all the features you'd expect of a consumer/enthusiast board. Also, the high-end i7s are all matched by equivalent non-overclockable Xeon E5-1* processors.


>server segment where MBs are significantly more expensive

False-ish

Single-socket server boards are on par with home motherboards. They only become expensive when you start wanting 2+ NICs and a lot of sockets. But then, yes, your point about features is mostly true.

> and lack a whole slew of additional features nice for desktop machines.

False-ish

Intel's server socket (for non-E7s) is LGA 2011-v3, which is also their enthusiast socket. I dropped a Xeon E5 v4 2030 into a 2011-v3 "enthusiast" board and it worked without a problem.

Sure, my enthusiast board doesn't support ECC RAM, and I can't overclock a Xeon. But a 2011-v3 board in and of itself is only ~$50-60 more than an extreme low-end <$100 stock-standard home-user motherboard.

You get what you pay for; if you want a dirt-cheap board to drop a server-class CPU into, you can find it.


You don't need to go to a server board for ECC - a lot of the X99 boards actually do support it. Not all of them (EVGA doesn't, a lot of Gigabyte boards don't) but a lot do (Asrock usually does).

And you can find motherboards in pretty much any price range except the barest of dirt cheap. There are motherboards like the GA-X99-SLI that regularly hit $125 or lower. Cheapest thing on Newegg right now is a $150 Asrock board that supports ECC.

The only price range that you "can't get" for X99 is the under-$100 price range, i.e. the super-shitty low-end chipsets. And you actually can get it if you are willing to take an open-box item. I paid $60 after tax for my last X99 board.

Also realize that most of the people who are bitching up a storm about X99's $150 motherboards likely turned around and paid $250 for an X370 board when Ryzen launched. I mean, you gotta get one of the nice ones with the external clockgen so you can get your $300 Samsung B-die memory kit to run stable, right? (but that's totally different! /s)


The problem is that you cannot have both via the current (from 2011 onwards) Intel route: high clock speed on the CPU (achieved by OC) AND ECC.

You want ECC you have to get Xeons.

You want OC you have to get regular mainstream K chips or enthusiast -E chips without ECC support.

I think the last chipset which let you OC Xeons and still use ECC was X58 on 1366 socket.

Disclaimer: I am not counting tiny BCLK OC available on Xeons.


Because ECC and OC are targeted at different audiences. If you want both... you don't understand one or the other.

ECC is there to guarantee system stability, solving rare events that become regular once you have hundreds of boxes running.

OC INCREASES system instability. You put more power into a chip, increasing the probability of incorrect or malformed answers.

Now, both these scenarios are rare, especially for single-box home enthusiasts. Generally, if you have a stable OC you won't hit garbled instruction results. Likewise, you may almost never have an error that ECC would prevent.

---

Having OC and ECC basically says, "I want to solve one extremely rare error while opening myself up to another!" This is an idiotically unsound decision.


ECC can help you ensure that your OC is still stable. And these chips are optimized for power use in servers. There is often a lot of room for stable overclocking.

You're also not guaranteed perfect operation inside the CPU without an overclock, so if it's idiotic to combine OC and ECC, it's idiotic to use ECC at all.


You can (and should!) decrease error ratios while OCing by providing proper cooling.

Having ECC memory in an OC setup will increase its stability further - and there are perfectly valid reasons for overclocking, for example if you want more computing power than even a 22-core Xeon can provide.


Definitely true.

Tangent: a lot of people wonder who would pay $1700 for a 6950X. The answer I always give is "people who want to do intensive work and gaming on the same system". Otherwise the 4/6C chips are better choices for the money.

The great weakness of that system is - as you note - the lack of ECC.

I strongly, strongly recommend building two different systems for this situation. Depending on your needs it can almost be more economical to do it this way. i3s (and now Pentiums) support ECC and make good ZFS servers. The 4C Xeons aren't too bad either. You can also do some cheap E5 Xeon builds using engineering samples. Many-core Xeons are more than sufficient for the kind of stuff you need ECC for, even at relatively low clock rates. A 10C at 2.4 GHz is roughly comparable to a 6C at 4.13 GHz (10 x 2.4 ≈ 24 core-GHz versus 6 x 4.13 ≈ 24.8; I use mine in Handbrake). And it uses 2/3 of the power to boot.

And then you can focus on gaming performance in your gaming system. Like 3x as much for a given level of performance.

It's a lot like laptops. You're going to spend a lot more if you insist on only owning one system that can do everything.

The segmentation here does suck, I would rather see it supported even on consumer hardware, but practically speaking I don't think Intel is going to change this any time soon. Even if AMD does.

(and note: unless the manufacturer is willing to stand behind ECC, Ryzen's "unofficial" ECC support holds about as much weight as using Intel's engineering samples)


I got to my inexpensive build by buying trailing-edge hardware: still much faster than what I had, but cheaper than current-generation hardware.


Motherboard price scales with the square of the number of sockets. (This is actually accurate, look it up; and there's a somewhat good reason for it.)


Once you move into 2+ sockets you're already paying $5000+ for a CPU so what's an extra $4000 for a main board?


The i9 is most definitely something new.

It is the first desktop CPU to bring AVX512. If the code is designed for it, that by itself doubles the perf compared to AVX2 CPUs.

I look forward to porting my algorithms to 16-wide SIMD.
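
For what it's worth, a minimal sketch of what that kind of port looks like - the same loop written with 8-wide AVX2 and 16-wide AVX-512F intrinsics. This assumes a compiler and CPU with the relevant support (-mavx2 / -mavx512f); the function and buffer names are made up for illustration:

  /* 8 floats per iteration with AVX2 */
  #include <immintrin.h>

  void add_avx2(const float *a, const float *b, float *out, int n) {
      for (int i = 0; i + 8 <= n; i += 8) {
          __m256 va = _mm256_loadu_ps(a + i);
          __m256 vb = _mm256_loadu_ps(b + i);
          _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
      }
  }

  /* 16 floats per iteration with AVX-512F - twice the work per instruction,
     which is where the "doubles the perf" claim comes from, assuming the
     code is actually vector-bound rather than memory-bandwidth-bound */
  void add_avx512(const float *a, const float *b, float *out, int n) {
      for (int i = 0; i + 16 <= n; i += 16) {
          __m512 va = _mm512_loadu_ps(a + i);
          __m512 vb = _mm512_loadu_ps(b + i);
          _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
      }
  }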


Of course it is something new. The supposition is that Intel just rebranded something they already had in the pipeline; something which they release every 18 months or so.

AVX512 was already planned for Cannonlake.


Mostly agree, but AMD's TDP estimates are slightly optimistic, and in reality Ryzen's power usage is much closer to that of its Intel counterparts.


Unfortunately AMD and Intel don't use the same definition for TDP (look up AMD's ACP). AMD's values are more related to average power consumption during whatever they consider to be typical use. Intel's values, on the other hand, are more about the upper limit of power usage, a "how high can it go" sort of thing. Both the average power and the maximum power are useful bits of info, but it's obviously confusing and opaque for the end user. The only reason I can think of for why AMD hasn't switched is that it would highlight how power-inefficient their processors are relative to their competitor's.

http://www.amd.com/Documents/43761A_ACP_WPv7.pdf


Didn't Intel try to do the same thing with SDP?


Do you have a source for that?


paulmd already put up a list, so I won't repeat it. Ars Technica sums it up:

https://arstechnica.com/gadgets/2017/03/amd-ryzen-review/

> Indeed, Ryzen's 95W TDP looks impressive compared to the 140W TDP of Intel's 6900K, a similar 8C/16T processor based on the Broadwell-E architecture. But in practice, Ryzen pulls just as much wattage from the wall as the 6900K, in some cases slightly more. Considering just how power-hungry Bulldozer was, that remains a significant achievement, even if it means that overclocking headroom isn't as high as some might have hoped.


https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-R...

https://www.bit-tech.net/hardware/2017/04/06/amd-ryzen-7-170...

http://www.tweaktown.com/reviews/8072/amd-ryzen-7-1800x-cpu-...

http://hothardware.com/reviews/amd-ryzen-7-1800x-1700x-1700-...

(bear in mind the 5820K/6800K/6850K are 6C, the 5960X/6900K are 8C, and the 6950X is 10C)

Essentially AMD is rating their TDP under a very light load - like a single thread at full boost clocks. Under an all-core load they are running roughly 30% over TDP. Intel's TDP rating is much more generous - they aren't quite rating at full, nonstop AVX load (e.g. Prime95), but typical full-load consumption is 15% under their TDPs, which leaves some headroom for intermittent AVX usage. (By that math, Ryzen's 95W rating works out to roughly 125W in practice, while a 140W-rated Intel chip typically draws around 120W.)

Anecdotal evidence: my 10C20T Haswell-E Xeon pulls ~60% of its rated TDP during a Handbrake x264 encoding run. My 6C12T 5820K Haswell-E gaming chip (overclocked to 4.13 GHz all-core at stock voltages) pulls 75% of its rated TDP during a similar Handbrake x264 run. And supposedly x264 is smart enough to use AVX, so that should be a realistic-but-high figure. My OC'd 5820K pulls 90% of its rated TDP during a Prime95 SmallFFT run.

It's still a huge gain compared to Bulldozer - but not quite as rosy a picture as AMD is trying to paint. They are equal with Haswell-E/Broadwell-E in most respects, not 40% ahead like AMD is rating them.

AMD is playing the same game in the GPU market too. AMD's RX 480/580 are rated for "GPU only" TDP (which they define to exclude memory/etc) but NVIDIA rates the whole card. In practice, AMD is ~30% above their official TDPs while NVIDIA is pretty much spot on.

https://www.techpowerup.com/reviews/Sapphire/RX_580_Nitro_Pl...

(RX 580 is rated at 185W TDP, GTX 1060 is rated at 120W TDP)


TDP is and always was an expressly manufacturer- and product-specific number, though. It mainly exists for OEMs to match parts up with the correctly sized heatsink.


That argument stops holding water when the manufacturer starts marketing around the superior power efficiency of their processors based on that same number.


Thanks!


Same with Atom-based Celerons/Pentiums, and Core M-based Core i3s and i5s, and more recently the Skylake Xeon "Gold" and "Platinum":

https://semiaccurate.com/2017/05/05/intels-new-scalable-xeon...

It looks like, with Moore's Law dying, Intel has just given up and is instead trying to "innovate" by finding ways to trick its customers into paying more for the same or weaker stuff.


No, Intel's i9, if it exists, is a rename of the old HEDT platform.

This is also likely unrelated to AMD. With Coffee Lake coming, which brings 6-core/6-thread and 6-core/12-thread CPUs to the mainstream ($150-300), Intel likely wants a new tier to preserve performance segmentation when even a Core i5 will come with 6 cores from Q3 2017 onwards.


Nope. 6-core/12-thread CPUs came to the mainstream a while ago; AMD brought them. The 6C/12T Ryzen 5 1600 is about $220.

Intel is playing catchup.


Ever heard of the i7 5820k? It was released Q3'14. 6 cores. 12 threads. i7 name. Around $400. [1]

AMD's great revolution was getting the TDP on that core count below 100W, and getting them onto a consumer chipset/socket instead of the enthusiast/server platform. That's it.

We keep saying they forced some revolution in price/performance that Intel has had to respond to. It's 80% hype, 20% truth. Intel was complacent on the desktop, yes. But Ryzen's perf gains over Skylake aren't significantly more than the tick/tock perf gains Intel pushes each year; the main difference is that Ryzen focused on the enthusiast, which is something Intel had de-prioritized.

[1] http://ark.intel.com/products/82932/Intel-Core-i7-5820K-Proc...


Of course there have been 6-core processors before the Ryzen chips; the difference is that Ryzen brought them to the mainstream.

The Ryzen chip is a third of the cost of the Intel chip and doesn't require the expensive enthusiast X99 platform.

AMD's great achievement is bringing 6 and 8 core chips /to the mainstream/, affordable by pretty much everyone. Intel's X99 enthusiast platform has been around longer but is massively more expensive.


It's 100% true that they've forced a revolution in price/performance. You forgot the much cheaper motherboards and RAM on Ryzen. Intel HEDT has been beyond what most people consider reasonable reach. The more attractive of the HEDT chips started at $1050+ (6900K). Add in your quad-channel RAM and LGA 2011-v3 motherboard and you have quite a bundle. Not part of pricing, but the AM4 promise of socket longevity further seals the deal for enthusiasts.

Either way, I'm happily using a newly built R7-1700 in a Node 202 mini-ITX case with a Geforce 1060 Founders Edition. It's very quiet and having 8 cores and 16 threads is far more useful than I assumed it would be. It's one of those things that you didn't know you needed until you have it. I use it for so much stuff, and it handles it all with ease.

If nothing else, the 16 threads get a workout daily as my Steam Link is set to do CPU encoding and that uses 8 threads. Beautiful image quality on the TV as a result when the fam is doing some gaming. I'm in love with this system, very happy with Ryzen. Thanks to AMD! :)


Intel did a really shitty job marketing these (maybe the i9 name will help). They are hardly known even among enthusiasts, but they are really good processors for the exact same reasons that people like Ryzen: a moderate tradeoff in gaming for a big gain in productivity tasks.

They are even priced essentially the same as the 4C8T i7s.

I basically had Ryzen performance at Ryzen prices a year ago when I built my 5820K system. Motherboard, processor, and 32 GB of DDR4-3000 CAS15 cost me $600 all together. I hit 4.13 GHz all-core on stock voltage and still run well under the rated TDP.

Ryzen was an incremental improvement at best. 8 core Ryzen is slightly faster (being that it has 33% more cores) but I'm still winning vs the 6-cores.

The real change Intel needs to make is dropping the cost of the 8-core and higher chips, because they're just past any reasonable budget.


The 5820K was an amazing processor. I have one as well. It's coming up on two years old, yet for $390 (at release) it's only ~10-20% slower than a brand-new $490 Ryzen 1800X, and much of that gap can be attributed to having two fewer cores.

You're right on all counts. Ryzen was vastly overhyped. It's highly likely Cannonlake and KabyLake-X will destroy it when they are released in a couple of months.

The world needed Ryzen to push Intel to lower prices on their high end chips. But, like most of the times when AMD "pushes ahead", Intel will beat them within a year, and hold that position for another 4 years. And then the cycle repeats.

I don't know how much longer AMD can keep this up. It seems like they're being propped up by their dominance in console tech.


Without the so-called incremental improvement offered by Ryzen 7, there is no motivation for Intel to drop its prices on 8-core chips. Intel now has to answer challenges in both the 8C16T category and the 16C32T one as well.


I suspect the answer to the 8C16T category is going to be "Skylake-X".

Intel is still sitting on a ~10% lead in per-core performance in real-world applications, and Broadwell-E is handicapped in a number of ways. Skylake-X is going to be a huge step forward. I think the 6C Skylake-X chips are going to come real close to the 8C Ryzen 7s (let's call it within 5-10%). And that leaves the 8C Intel chips with a pretty solid lead.

(Synthetics like Cinebench definitely tilt toward AMD - but I don't see a corresponding lead in real-world applications, on a core-for-core basis.)

If Intel can maintain premium performance, they can also maintain premium pricing.

No question the race is on, though. Next year is gonna be interesting, when both companies have sized up the other and fully prepared their responses. There are some obvious fixes for AMD (like the interconnect, or just getting memory stability fixed), and Intel could do something like put the L4 eDRAM back on their dies to get minimum framerates up [0].

[0] http://techreport.com/review/28751/intel-core-i7-6700k-skyla...


The Ryzen CPUs just came out, there are no OEM systems yet, and the sales are not a blip on Intel's radar yet. Coffee Lake is honestly a more interesting part: it's a 6-core with an IGP, while AMD's APUs will be 4-core maximum for now.

I know it hurts, but AMD needs to ship an almost unreasonable number of units for several consecutive quarters to be considered mainstream. If Ryzen turns out to be anything like the Athlon 64, AMD will fail to capitalize on it and just fall behind Intel when it comes to iterations.


I bought a 3930K (6C/12T) in 2012 and am still rockin' it. AMD was not even on the radar back then.


My old 1090t from 2010 says hi.


I remember that one, the one that said hi to the already-on-the-market i7-980X! :)


Don't lie, you don't want to play with one of these setups, you want to take it home and make love to it. xD

16 cores is absolutely insane. My first computer was a Commodore 64 VIC-20; it ran at 1 MHz with 32 KB of RAM...

Hopefully this will bring some life back into AMD..


ThreadRipper is the perfect name for a 16-core/32-thread CPU.


I wonder why it needs so many pins. It only has 4 DDR channels, which are going to be a bottleneck keeping those cores fed. 8 channels would be more appropriate for 16 cores, and that would explain why there are so many pins.


It needs a lot of pins because it's an I/O monster. The socket is going to support their high-end server chips with 8 memory channels. Plus, all of their high-end line supports 128 PCIe lanes, and will have on board an unknown number of USB, SATA, and networking lanes. All that adds up to needing a very large socket.


That article points out it's a different version of the socket on the desktop, SP3r2 instead of the SP3 that is landing server-side. SP3r2 is only coming with 4 DDR channels.

It's not clear why they're deliberately using a crippled socket here for the desktop. I don't understand the value proposition, and can only envision it driving up motherboard prices. That CPU combined with a crippled socket is surely likely to end up starved of memory I/O.


SP3 is for 32-core chips with 8-channel memory. SP3r2 is for 16-core chips with 4-channel memory. The reason the SP3r2 platform exists is to make motherboards cheaper -- the lines for 8-channel memory force you to use more layers, increasing cost.

They are using the same large socket to reduce time to market -- they had validated the 8-channel socket, they don't currently have a modern 4-channel socket, and they think there is demand.


The Naples chips also have 4094 pins and do have 8 channels as well as support for 2+ CPUs per board. Maybe these Threadripper chips are Naples chips where memory controllers or interprocessor communication controllers failed QA?


> Maybe these Threadripper chips are Naples chips where memory controllers or interprocessor communication controllers failed QA?

I'd be highly surprised if they were anything else.


Ryzen = 1 piece of silicon, cheap, 4-8 cores, 2 memory channels.
Threadripper = 2 pieces of silicon, midrange, 8-16 cores, 4 memory channels.
Naples = 4 pieces of silicon, high end, 16-32 cores, 8 memory channels.


I would assume that most of the pins are power/ground, like most high-pin-count sockets. They help avoid system noise at really high clock rates.


Intel is moving to a similarly big socket for next-gen Xeons, LGA-3647 [0]. It's already in use for socketed Phi processors.

[0] https://en.wikipedia.org/wiki/LGA_3647


It's probably the same socket as Naples (= a four-die MCM with 128 PCIe lanes and 8 DDR4 channels).

This would mean that this is a two-die (active) MCM with a maximum of 64 PCIe lanes.


Can't wait! I postponed my Ryzen purchase as rumors started to surface about a June launch of the 16C version - this could allow running 4x GTX 1080s for ML and provide a significant boost for movie rendering.


I don't blame you, I was going to hold off, then realized since I'm all mITX from here on out that I'd just build now. None of this stuff is going to be 65W TDP (at least the 8-core or more parts) so the 1700 will continue to make a lot of sense for those moving to mini-ITX.


These are going to be 150W+ for sure, heat be damned when you need performance.


AMD was sooo close to having a perfect 4096-pin socket. SO CLOSE!

But, alas, it was not meant to be.


Missing corner pins means you won't accidentally insert it the wrong way.


This brings me back to LGA775 - or more specifically, when I wanted to insert an LGA771 CPU into that socket. Cutting off the notch in the motherboard's socket was something I never thought I'd do!

Oh and before anyone goes off trying this, you do need to swap two pins - you can buy a conductive sticker online to make it easy. I just upgraded to i7 6 months ago - you'd be surprised how well a Xeon x5470 (4 cores @ 3.33 GHz, 12MB cache) can stave off the need to upgrade. I'm not a gamer, though.


They could've added the extra pins and kept the square pin count, but molded the ceramic so that one corner is clipped, with a matching socket that would only allow full insertion if the chip body were rotated to match.

Of course, doing all of that would have driven the cost up considerably - for not much practical purpose. No company that wants to stay in business is going to do that, and I wouldn't expect them to.


What about a socket that's like USB-C, where there's no wrong way to insert it?

Though I imagine such a socket would be an insane task to design and would have numerous problems with latency and speed.


Engineering a socket like that has to be a low priority. It's not as though people are reseating their CPUs every day, and generally speaking, the people that do it can handle aligning it properly.


I helped a friend build a computer back in school, and when he got the parts in the mail he hurriedly started building (against my advice to wait until I got there). He put the CPU in the wrong way and bent some of the pins. So it does happen more often than you would think.


I'm not even mad. It's hard to jam them in the wrong way nowadays! What socket was this?


Waste of pins. You would need to roughly quadruple the number of pins (with a square shape; it might be worth just going rectangular, which would only double it). It's possible, I'm fairly sure, but not really practical.


Or maybe only double the pins, if you no longer make it square. But you're right, it's a waste of time and money.


Couldn't you hypothetically design the pinout to be suitably symmetrical, avoiding most of the wasted pins? For example, if the chip has 4 memory channels, lay out the memory pins identically but rotated in each quadrant. The same principle should apply to the other major pin-count consumers (PCIe lanes, power/ground pins), which leaves a fairly small area that needs to be duplicated/quadrupled.

Of course, implementing such a design would be ridiculously complicated with very little benefit. But it would be pretty cool :)
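
As a toy sketch of that idea (grid size and names purely hypothetical), the core of it is just the 90-degree rotation map over the pin grid; the pinout works in any orientation if every signal class is assigned so it is invariant under that map (e.g. memory channel A maps onto B, B onto C, and power/ground maps onto power/ground):

  /* Hypothetical: pins sit on an N x N grid; rotating the package 90
     degrees clockwise sends (row, col) to (col, N-1-row). A rotationally
     symmetric pinout puts the "same kind" of signal on a pin and on its
     rotated image. */
  #include <stdio.h>

  enum { N = 64 };  /* pretend 64x64 pin grid */

  static void rotate90(int row, int col, int *r, int *c) {
      *r = col;
      *c = N - 1 - row;
  }

  int main(void) {
      int r, c;
      rotate90(0, 0, &r, &c);                  /* corner pin A1 ... */
      printf("A1 maps to (%d, %d)\n", r, c);   /* ... prints "A1 maps to (0, 63)" */
      return 0;
  }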


Well damn, this looks like it might be in the right spot for what I'm after. It'll depend on the exact price point, but I've got a semi-server, semi-gaming setup that I want to upgrade from an older i5, and I've been waiting to see Naples and the early-adopter stuff get hammered out. 16 cores is a great spot for me: give a few to a gaming VM, while still having plenty left over for running build/smoke VMs and other random projects, like testing syzkaller to check that I'm not doing something inherently stupid with a sandbox I work on. Can't wait to see more about this.


For Intel's Xeon processors, the more cores you have the lower the frequency you get. Quad core might give you an extra gigahertz, which might be of more value to games than 12+ cores.


True for old games, but for them single-core performance is enough (except maybe Supreme Commander with 5,000 units on the field). Looking forward, with DX12 and Vulkan, more cores seem more future-proof.

Plus, shouldn't XFR (or Turbo Boost Max Technology 3.0 for Intel) somewhat alleviate that particular issue?


AMD is a great company and I wish them every bit of success and luck with this but I can't help but thinking that fighting two formidable competitors on their home turf (Intel and NVIDIA) is a bit much. Just one of those would be a battle.


It's like cheapish g34 quad setups all over again. Looking forward to benchmarks.


Who would they be marketing such a thing to? 8-core Ryzen seems like enough for most people and any more cores will probably have to be down-clocked for thermal reasons.

I thought for the high end they'd go to something like the EHP concept from a couple years ago - put a CPU, GPU, and HBM2 all on a single substrate and market that thing. Call it BraZen because it would be. Give me that in the case from their project quantum...


Workstation users. CAD, raytracing, and video editing people need as much CPU horsepower as they can get, and quad-channel RAM is pretty nice too.

("preview" rendering is done with GPU acceleration - but final encodes are still done on CPU for quality reasons)

Small-business servers, databases, that kind of stuff too.

We'll see where they go with this - but I would caution enthusiasts to tamp down their expectations a little bit because this is not necessarily going to be an ideal product for gaming. I would not be surprised to see lower clocks to keep TDP under control, and some fairly high price tags to go along with these chips.

I'm imagining this as being more of a competitor to the high-end Xeons, with 3.5 GHz boost clocks and 3 GHz base clocks and a $2000 price tag. It would still be a screaming great deal at that price.

In comparison Intel wants $1700+ for most of the high-core-count Xeons, and you're getting fewer cores and lower clocks.


I hear you. I'm a raytracing guy, and I look forward to Ryzen and would love to try Naples. But this intermediate thing sounds like a third platform squarely between those. It even has 16 cores rather than 8 or 32, so if the core count doubled or halved it would very much be competing with one of the other market segments' processors.


It seems the 16C part will be clocked at 3.1-3.6 GHz, whereas Naples will be clocked significantly lower (which is OK for servers). I'd go with the 16C for sure for movie rendering (ehm, if only ReelSteady could do multicore image-stabilization rendering...?).


This would be the same platform as their server CPUs, I think, so you're probably right that the desktop market is limited. That being said, there is still a decent professional market that might be interested in something like this, e.g. 3D modeling, video editing, machine learning, etc.


How much of a hit would be expected to single-threaded performance with the new 16-core processors?


Top-end Ryzen chips have a TDP of 95W. (Edit: AM4 socket is rated at >140W TDP.) The new sockets are designed for 180W and up, so they will probably run at the same clock frequencies. That means no hit for single-thread performance.


180W!? That thing is a monster.


Cores come at a power cost. This is why Intel has been hesitant to move to consumer 8- or 16-core CPUs. There's a lot of pushback against using this much power, and serious environmental concerns. Intel's engineering has been focused on power savings, so they've actually been going the opposite way for years now on the consumer end.

This is also why you won't be seeing these 16C chips in non-specialist Dells and HPs. They'll be high-end workstations and servers only. Joe Consumer or Joe Officeworker doesn't need 16 cores.

6-8C comes in at a more modest power profile, so we'll probably see the industry standardize on 6-8C for a while. I think Intel is now forced to move toward 6-8C, which may be a questionable move for most consumer loads, but if everyone stays under 100W for consumer CPUs I think it'll be okay. I don't want the performance race to be driven by AMD's idea that burning 200+W for a CPU is reasonable. Outside of specialist applications, it's not.


The main reason they've been doing that is that it's been much easier to lower power consumption per core than to increase performance per core. This way they could still claim "significant improvements" year over year.

It's also a way better strategy to make "better server chips", because it means you can add more cores.


That's assuming there are no trivial single-core advancements possible at this fab size, which there might not be. The run-up to Ryzen had a lot of AMD types cheering about "lazy" Intel getting beat, but Ryzen's per-core performance is on par with a 2600K from 2011 and is consistently edged out by a 7700K at the same, or lower, pricing.

Also, Intel's gains have been modest but very real outside of power savings. CPU PassMark benchmark scores:

2600k (2011) : 8484

7700k (2016): 12196

Both are 4 cores, so it's a one-to-one comparison. A ~40% increase per core (12196 / 8484 ≈ 1.44) is pretty impressive this late in the Moore's law game. If it wasn't, then we'd see Ryzen doing much better per core, but it's not. My workloads cannot make effective use of 8 cores, so it's academic that I can buy an 8C chip now. It would literally be a downgrade from my Intel for me.


It'll actually probably be more than 180W given that AMD is lowballing their Ryzen 7 TDPs. Those chips are more in the 120-130W range under a typical (non-Prime95) load - so unless they drop the clocks on the 16C chips a bit (which is very plausible), you can expect something closer to 250W TDP.

https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-R...

This is pretty much normal for HEDT chips though. Both Ryzen and Broadwell-E/Haswell-E hit ~230W already when you overclock them - and add another 75-100W for Prime95 loads.

I doubt you'll see too many people overclocking the 16C chips even if they're unlocked. You would race right past 300W, probably closer to 400W. You'd need like a 360mm liquid cooler to even think about it.


Of course it's possible that in a given box the new chips will be limited by thermal headroom. For applications like encoding, it's better to have 16x 2.2 GHz cores than 8x 3.0 GHz cores (16 x 2.2 = 35.2 core-GHz versus 8 x 3.0 = 24). But since they have the same architecture, they should get the same performance clock-for-clock, without a penalty just because they have more cores.


I break out in a cold sweat when I think about routing a 4094-pin socket. How does anyone lay out a motherboard for this in a reasonable time frame?


Many of these will be ground and power, which directly terminate into plane(s) and aren't routed. Things like memory buses are put onto the socket such that they are routable to the memory sockets (the location/orientation of the memory sockets is prescribed by the processor socket!). Other interfaces are arranged to "make sense" as well.

I expect it's pretty much the same problem overall as with the (much smaller) pinouts of SoCs and such.


My guess is that the socket side of this is done during development by AMD and then given to the manufacturers directly, so that they don't have to do that part. The only other way I can think of is if the tools have a way to lay out many of the traces in parallel (i.e. tell it that all of these traces are going to the same place, just offset a little) and then do most of that automatically for, say, the RAM and power. I think most of the length and impedance matching is already automatable in Altium and other high-end software.


There's an app for that :)


First, you need a computer with a lot of cores. :-)

All the silicon vendors give out design/layout guidelines and reference designs. Most motherboard vendors likely copy those with just minor changes.


Presumably there must be a lot of computer assistance?


Not really, autorouting is generally extremely bad. Especially for applications with such high performance criteria. On the plus side the work only has to be done once since later motherboards are mostly just iterations on a previous design.


The latest high-end PCB tools actually provide useful autorouting features, because they have figured out that if you combine human ability to solve the difficult pattern-recognition and planning problems, you can have a computer solve the details.

For example, Xpedition's sketch routing feature: https://www.youtube.com/watch?v=fIi13gI9xEA

Routing 4094 signals sounds daunting, but it is a bit less daunting when you realize that a good fraction of them are power/ground (and route directly to planes), and the rest of them are mostly logically organized into buses/groups that can be routed together.


Yeah, a good portion of those pins will be wired directly to the power regulator components that are next to the socket, and a good chunk more break out neatly into channels for memory and PCI-e.

Seems like it's almost boring these days since everything's encapsulated in a layer of abstraction. A lot of the more complicated wiring is in breaking out PCI-e channels into things like ethernet, audio, and other miscellaneous ports that involve good chunks of analog circuitry.


I would have put two more pins and made it an integer power of two. And it would have looked nice on the bottom! (64 x 64)


They usually take a pin or two out from square-matrix pinouts to create sufficient asymmetry to prevent you from inserting the chip rotated 90 degrees.



