Hacker News
AMD Ryzen price and release date revealed (pcworld.com)
279 points by kungfudoi on Feb 22, 2017 | 134 comments



First (amateur) third-party benchmarks confirm a ~$350 Ryzen beats $1000 Intel processors in CPUMARK, 3DMark Fire Strike Physics, Cinebench: https://www.chiphell.com/thread-1706915-1-1.html

AMD had said for years their Zen goal was a 40% IPC gain over their previous microarchitecture, but they ended up with a 52% gain: http://www.anandtech.com/show/11143/amd-launch-ryzen-52-more...

Today's launch event by AMD's CEO: https://www.youtube.com/watch?v=1v44wWAOHn8


Just be aware that Ryzen is probably operating much closer to its limit than the Intel counterparts, so overclocking could be limited. If that's not interesting to you anyway, then godspeed.

Also in terms of gaming performance, Intel will probably remain king, due to single threaded performance and OC potential.

Personally, I'd always take 8 cores over 4 for the same price, even if they are 10-20% slower.


In my experience, most modern games aren't CPU-bound anymore; more and more work is being pushed onto the GPU with every generation. My brother has a quad-core AMD CPU (don't remember which) with a powerful GPU, and his CPU is very rarely the bottleneck—there are few games he can't max out at 1080. Also, according to AMD's benchmarks, the Ryzen CPUs are very close to Intel's in single-threaded performance.


It is a big deal in VR: at 90Hz with two eyes, you have 3x the draw calls of a game running at 60Hz.

There are some optimizations to reduce the per-eye overhead, but it is still a lot.

Sony's headset can run at native 120Hz, increasing things more, but not too many games are using that mode.


4K at 60fps is 498 million pixels per second; the Rift at 90fps is 233 million pixels per second. GPUs can deal with making 2 images without meaningful CPU overhead. The real issue is stutter and latency, not CPU or GPU GFLOPS.
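
For reference, a quick back-of-the-envelope check of those figures; the panel resolutions are assumptions (3840x2160 for 4K, 2160x1200 combined across both eyes for the Rift CV1):

    # Rough pixel-throughput check (display resolutions are assumptions:
    # 3840x2160 for 4K, 2160x1200 combined across both eyes for the Rift CV1).
    uhd_rate  = 3840 * 2160 * 60   # ~498 million pixels/s at 4K, 60Hz
    rift_rate = 2160 * 1200 * 90   # ~233 million pixels/s at 90Hz

    print(f"4K  @ 60Hz: {uhd_rate / 1e6:.0f} Mpix/s")
    print(f"Rift @ 90Hz: {rift_rate / 1e6:.0f} Mpix/s")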


Pixels/second isn't a very useful metric when speaking about CPU performance. Your comparison seems to show that CPU performance might be much more important than GPU perf for VR.

Also, you're correct that GPUs can play all sorts of tricks to get multiple projected views with low CPU overhead. But that requires significant software support. Otherwise VR has to build twice as many framebuffers per frame as a single-view game.


We're talking about draw calls on the CPU, not pixel fill. Still wrong about Rift vs 4K though: due to the lens unwarping you have to render to a much larger render target to get native screen res. About 1.7x more pixels than the native res.

> GPU's can deal with making 2 images without meaningful CPU overhead.

That's what I meant to convey with "There are some optimizations to reduce the per-eye overhead, but it is still a lot," but I didn't word it well. I didn't mean the per-eye overhead was a lot after the optimizations, but that the increase in CPU usage was still a lot in spite of them, due to the higher update rates.


Physics heavy games at lower resolutions can still get a lot of benefit out of faster cores. Battlefield 1/4 multiplayer as an example. The Division is another game that benefits greatly from CPU power.

Another thing is that many enthusiasts nowadays go for high refresh rate monitors, and at 1080p/1440p you can still be CPU-bottlenecked if you are shooting for 144+ FPS on a 144Hz screen.


This. Gaming hasn't been bottlenecked by the CPU in a long time. Anything from an i5 up performs pretty much the same with the same GPU.


And you'll only be bottlenecked by the GPU if you want to play AAA titles at high settings or at a resolution beyond 1080p. I'm still using an ATI Radeon HD 5770 and I haven't seen a compelling reason to upgrade my card.


Entirely depends on the game.


Some games will still require a beefy CPU.

Total War is one example.


All the info shows otherwise. This shows Ryzen overclocking to 5.1GHz on all cores and beating the Cinebench world record.

http://mirror.ninja/5vixv8

PS: mirror since it was removed from YouTube.


*With liquid nitrogen cooling. We still don't have a real-world test.


Yes, on the show floor. The comment above was that the Broadwell-E processor could go further; there is no support for this claim. The processor in question is unstable above 5GHz, even with liquid nitrogen (note that the frequency record is 5.8GHz, but only in CPU-Z, a far cry from running Cinebench). Ryzen has this video on its side.

My personal opinion is to wait for more data; March 2 is not far away. For sure AnandTech and the other good benchmark sites will have a perf/$ graph.


Do people even overclock anymore? I think the mass overclocking gains of the 2500K days are probably behind us, as even Intel is getting close to its architectures' top end. I thought the whole idea behind turbo is that it's an official overclock: once the chip gets too hot it moves back to 'stock' speeds, then back up, etc. All you need to worry about is whether to keep the stock cooler or not.


> just be aware that Ryzen is probably operating much closer to it's limit than the Intel counterparts, so overclocking could be limited.

Overclocking is also limited on the Intel counterparts, so that's not really a big deal. Most Kaby Lakes won't do 5GHz, and good ones hit the wall at 5GHz. That's not much of an increase over the stock 4.2/4.5GHz of the 7700K.

Also AMD is claiming a lower TDP than Intel, so that would suggest Ryzen has more OC headroom than Intel has. Although of course this is all wait-and-see.


I'm a bit dubious. If they have such an amazing part why is it being sold for so cheap? Why not 80% of the price of the comparable Intel part? Selling it at 50% or even less raises some heavy alarm bells for me.


"Why not 80% of the price of the comparable Intel part?"

1. Because Intel is about to significantly drop their prices in response to Ryzen.

2. Because AMD really wants to immediately hurt Intel's revenues as badly as possible.

3. Because AMD has such an uphill battle to regain mind share that it really needs to release a product that makes people realize that AMD now has the superior perf/$ product, beyond any doubt. If Ryzen was only priced 20% lower than Intel, many people/OEMs/etc would not be motivated enough to go AMD.


3 bis: now Intel looks like a customer-milking company, with mild progress posturing as 'best effort'.


I am pretty sure that after six iterations of "more of the same" most people have realized this by now. We didn't need AMD to know this, but we surely need them to change the status quo in the CPU market...


It's hard to say when CPUs are approaching 10nm... I've seen reports about how crazy hard it is to shrink silicon at that point, so it was plausible.


If history is any reference (see the Hammer architecture), AMD needs to build a large ecosystem of OEMs and partners to be able to cover all the different segments of the market with enough SKUs.

If they price their CPUs anywhere near Intel's, even if they are marginally better, OEMs and partners will be shy and slow in creating new lines of products. They will only modestly dip their toes into AMD waters.

Intel can hold the line for a couple of years by offering OEMs large rebates/discounts for keeping volumes of sales in line with what they bought pre-Zen.

AMD has no choice but to build market share and an ecosystem by pricing way below Intel. Otherwise, history will repeat itself and they won't be able to capitalize long term on the Zen architecture, as happened with the Hammer architecture.


Feature parity and market penetration. FP and AVX performance are middling, there's no iGPU and fewer PCIe lanes, and they've got to keep a firm hold of that budget-build bracket.

In case you were also wondering why Intel's pricing has been so obscene, it's to do with AMD's underwhelming pre-Zen microarchitectures and having had no serious competition in the past decade until now.


Do we think Intel will drastically drop prices on middle and high-end chips?


My guess: not right away. It's going to take a while for AMD to gain trust, and if Intel massively drops prices, that will if anything accelerate that effect. A dramatic price drop would tell your customers that you indeed gouged them and that AMD's chip is a serious competitor.

I'd expect them to make token gestures in MSRP and quietly make larger drops in practice; and to accelerate new models so it appears as if those new models are better price/perf wise due to their great new design.

Oh, and of course, Intel's 7700K is still quite a bit faster per core, and there's AVX, so Intel can play those cards loudly. I'd argue that even today almost all people are better served by a few fast cores than by many slightly slower ones; there's just so much poorly scaling software around. And where multicore works trivially well, the GPU is often just around the corner...

For servers the story may be different.


They might play dirty games instead (again). Intel loves incentivising its distributors with bonuses and deal combos: sell a top Intel CPU + an expensive motherboard with a top Intel chipset = receive a free Intel SSD, etc. Of course distributors are free to do what they like with those rewards, and consumers almost never even know about it = bigger hidden margins for pushing high-end models.


The capitalist playbook suggests that street prices won't see the same price drops as those offered to important OEMs threatening to launch a big AMD line.

That way, Intel sells those OEMs not only the hardware but also the promise that end customers will think the parts were more expensive.


Probably because of their contract with GlobalFoundries which requires them to pay penalties if they don't move a specific volume.


There are a few obvious possibilities:

- AMD have optimized for die size, at 44mm^2 vs Broadwell-E at 246mm^2. This will result in an order of magnitude better yields and margin improvements (a rough yield sketch follows this list).

- Intel prices high in the >= 8-core space because: they can; there is no current competition; and the aforementioned yield/margin issue.

- The Intel parts have more advanced and expensive features aimed at professional markets; e.g. a quad-channel memory controller, more PCIe lanes, and in newer architectures, integrated Thunderbolt controllers, to name a few.
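
As a rough illustration of the yield point, here is a minimal sketch using the classic Poisson defect-yield model. The die areas are the ones quoted above (a reply notes the full Ryzen die is larger than a single CCX) and the defect density is purely illustrative, not a real fab figure:

    import math

    def poisson_yield(area_mm2, defects_per_mm2):
        """Fraction of dies with zero defects under a simple Poisson model."""
        return math.exp(-area_mm2 * defects_per_mm2)

    D = 0.005  # defects per mm^2 -- illustrative assumption only

    for name, area in [("44 mm^2 die", 44), ("246 mm^2 die", 246)]:
        print(f"{name}: yield ~{poisson_yield(area, D) * 100:.0f}%")
    # Smaller dies also give more candidates per wafer, compounding the margin advantage.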


The Ryzen die is more like >100 mm^2; a single CCX is 44 mm^2.


Intel relies more on price discrimination. On the Skylake wikipedia page, I count 30 desktop and 11 server SKUs.

So Intel's "-E" enthusiast line isn't really the optimal price point in terms of generating revenue via volume, it's just the high end of a sliding scale.


> If they have such an amazing part why is it being sold for so cheap?

Because they get blown out in single-threaded performance, so they have to pit their 8-core parts against Intel's 4-core parts.

The 1700X will be competing against the $350 7700K more than the $1100 6900K (which incidentally is a Broadwell-E part, not a Skylake part, despite the 6 prefix).


The linked benchmarks are putting the $350 1700X about on par with the $1100 6900K when it comes to both single-threaded and multi-threaded performance. Both of those processors are 8-core/16-thread chips. Also, the 1700X has a 95W TDP compared to the 6900K's 140W TDP.


Again, the 6900K is a two-year-old µarch, and not one that was considered a great release in the first place. The 1700X will not be competing against the 6900K, because nobody gives a rat's ass about the 6900K; the 1700X will be competing against the 7700K and its ilk.


The IPC difference between Broadwell (6900K) and Kaby Lake (7700K) isn't that big. 5-10%.

The only redeeming quality of the 7700K is that Intel was able to raise the turbo freq to 4.5 GHz, while the 1700X only boosts to 3.8 GHz. So yeah single-threaded perf will be higher with Intel.


Isn't the IPC difference basically 0 and the 7 series only has slightly higher frequencies due to the optimization cycle? So clock for clock 6 and 7 series CPU should yield the same performance.

I should add that the above is the main reason the kaby lake release is so lackluster. There is very little reason to upgrade your desktop to a 7 series (laptops get power optimization and 4k playback support).

It's also the main reason why AMD might look appealing: the lack of a worthwhile upgrade from Intel, while AMD delivers twice the cores/threads for not a great deal more than a 7600K.


> Isn't the IPC difference basically 0 and the 7 series only has slightly higher frequencies due to the optimization cycle? So clock for clock 6 and 7 series CPU should yield the same performance.

You're talking about Skylake 6-series, the 6900K is Broadwell-E not Skylake despite being 6-branded.


Kaby Lake and Skylake are indistinguishable in terms of IPC.

Kaby Lake only brought GPU & video decoding improvements to the table, that's it. CPU performance is otherwise identical to Skylake.


You're right, except 6900K is a Broadwell-E part [0], not Skylake, despite the part number.

[0] http://ark.intel.com/products/94196/Intel-Core-i7-6900K-Proc...


Uh oh, looks like Ryzen even beats Kaby Lake per clock and per core! https://news.ycombinator.com/item?id=13715973


> Because they get exploded in single-threaded performances, so they have to pit their 8-core parts against Intel's 4 cores.

I assume that they are getting blown out of the water on single-core performance, and hence I picked up an i7-7700K. We'll see if I made the right choice on March 2nd when real benchmarks surface.


You would be wrong. They are even or slightly faster than Intel in single threaded performance.


A 4.0 GHz Ryzen is faster than a 4.5 GHz Intel in single-thread? At this point we might as well wait for the embargo to expire.


They are equal, if not a bit faster than Intel in single-thread performance.


Expect Intel to do a huge price drop if the chip is competitive. In the end few people buy $500+ chips, so this is all just marketing.


I just have to say, I really hope that this is real. AMD has burned a lot of trust a few times by overzealously stating performance marks for their prior generations of CPUs. I ran AMD for several years until the Core 2 came out from Intel (currently very happy with an i7-4790K).

If these comparisons are real, my next build may be AMD again.


Seems like you skipped over the FX series (Bulldozer, I believe). I got a FX-8320 a few years back, and my brother got a FX-6300. Both are capable processors and are MUCH cheaper than their counterparts (i5 and i3, respectively).

Heck, if I had built a PC in the past year, I would have gone with an FX-8350 just because it's so damned cheap yet very performant. Once Zen is out, it's really a no-brainer for me.


The biggest problem with the FX series, IMO, was the deceptive marketing. The FX-8000 CPUs were marketed as being 8-core, when in reality every pair of cores shared an FPU and cache, so real-world performance was much closer to that of a quad-core for many workloads. Add on top of that the abysmal IPC (compared to Intel's offerings) and high power consumption, and there was very little reason to get an FX-8350 or similar over an i5.

(Yes, today the FX CPUs are much cheaper, but when they came out they were barely so. I know because I got an FX-8350 as soon as it came out and regretted it.)


I have to disagree with that. Most workloads (H.264 decoding/encoding, gaming, typical desktop software, etc) are ALU-bound not FPU-bound. So the FX series is generally beating Intel on the perf/$ metric, just because it is so darn inexpensive. In my experience a $100 FX CPU is as good as a $200 Intel CPU when doing H.264 encoding:

https://news.ycombinator.com/item?id=13172972

(However the FX series perf/watt is worse than Intel, I give you that.)


Video encoding/decoding is definitely a niche for which the FX series had great value when it came out, so I agree with you on that. However, for gaming and most "typical desktop software", having higher single-core performance is much more important than having more cores, and the FX series was a huge letdown there. Then you factor in the extra energy you're using, and all of a sudden there are very few cases in which it makes sense to buy an AMD CPU over an Intel one.

(They're much cheaper now, so it might make a lot more sense to buy one today, but I'm sticking with my Intel CPU until AMD can prove themselves.)


Yeah, I would understand going for an i5 if the price difference was insignificant.


I had a home server that used an FX-8350, but it used a LOT of power... and the performance wasn't that great. For the price, sure, but in less than the CPU's lifetime you've already paid more in extra electricity than a higher-end i7 would have cost.


I really like my FX-8320E but it's definitely not a speed demon. It struggles in single-threaded performance, in such tasks as web browsing... Even running vanilla Debian 8, browsing Reddit in Firefox is a bad experience. Images load very slowly (I have a 200mbps connection) and seem to make the rest of the processing hang. It's good for compilation though, and I can play recent games with it and an R9 280x.


Unless I'm sorely mistaken, Intel's single-core performance was way ahead of the Bulldozer chips.


True, but the price is a huge differentiator. The primary market for processors is gaming where the bottleneck is generally your GPU and memory. As a software developer, the difference between the two is not noticeable IMO.


I'd argue otherwise for software developers, especially if you're working with a large C++/Java codebase, or compiling large projects such as WebKit/Linux kernel.


To me the delta in price hardly mattered when I looked for how long I kept these systems running.


It matters a lot when you are on a very tight budget. I can either get a Kaby Lake i3 for $119.99, or a FX-8320 for $129.99. It's really a no-brainer.

Another way to look at it: instead of putting an extra ~$100 on an i5, which has performance most people don't really take advantage of anyways, you could get a better graphics card.


True and only you can make the choice on how to balance your system for your workload.


I'm still running an FX-6300 and it can't be beat, the thing only cost me $100. It does everything I need it to. I could imagine myself going for a ~$300 AMD chip.


They're ahead in multi-threaded performance, but as another commenter said, they suffer when it comes to single-threaded tasks, which meant that their gaming performance often suffered (my friend ran into this problem firsthand when he discovered that his FX6300 was bottlenecking his GTX 970).

Hopefully that isn't true anymore, or Intel may still be the superior chip for gaming.


Really looking forward to this, and I'm glad that AMD's management no longer seems to be on the pipe.

Even if the real world perf is close to the $1K Intel chips it will be a win. It's going to force price cuts from Intel and hopefully spark some competition again.


It would be interesting if Apple used these chips in upcoming Mac desktops.

I'm guessing that adopting these lower priced chips without lowering prices would have a negligible impact on sales for Apple if the performance is as good as AMD claims.


I wonder what Intel-specific features (like Quick Sync) Apple is using in macOS.


It would be nice. However, would they be able to still keep Thunderbolt?


Considering that AMD graphics cards can already be used behind the Thunderbolt interface (XConnect), I wouldn't be surprised.

If I remember correctly, you'd have to get permission to add Thunderbolt support into drivers...


They would have to license it from Intel. Which I bet Intel wouldn't really want to do.


The standard chipsets do have support for USB 3.1 Gen 2. But mixing in support for DisplayPort would require either a custom chipset (which Apple may be able to do but might not be as cheap), or wait until a Zen-based APU comes out.


USB 3.1 Gen 2 plus display is not the same as Thunderbolt. TB is basically straight PCIe; you can't just combine data and video and call it Thunderbolt. Plus, 3.1 Gen 2 (10Gbps) is only a quarter the speed of TB3 (40Gbps).


I think the server chip (Naples, up to 32 cores!) based on Zen may have more impact than the desktop chip. The server chips are a smaller market, but it's growing faster than desktop and, I assume, has larger margins.

Depends, I suppose, on how much AMD chooses to undercut on pricing.


The server chip business is essential to Intel's revenues these days. If they do well, it could really hurt Intel.


A big part of it is going to be whether major cloud providers adopt these.


I would guess at least some meaningful purchases, if only to keep Intel honest.


These chips look better suited to servers than clients as is. Most software still scales poorly with additional cores (or rather: most workloads still have a few single-core bottlenecks), e.g. web browsers, compilers, video encoders and decoders, all kinds of stuff. So even with twice the cores, catching up from the handicap of a single-core perf disadvantage is a pretty harsh battle.

E.g. see http://www.anandtech.com/bench/product/1826?vs=1729 (6900K vs. 7700K): there are quite a few relevant workloads where the 8-core loses significantly to the 4-core. Even for stuff where you'd think more cores are really natural (such as POV-Ray), the perf advantage of the 8-core over the 4-core is less than 50% (!). For an example where fewer, faster cores obviously win, consider web browsers: the 4-core is 36% faster in Google Octane v2.
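
A rough Amdahl's-law sketch of why doubling the core count buys so little on such workloads; the parallel fraction is illustrative, not measured from those benchmarks, and this ignores the clock-speed gap:

    def amdahl_speedup(parallel_fraction, cores):
        """Ideal speedup over one core for a given parallelizable fraction."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    p = 0.85  # assume 85% of the work parallelizes perfectly (illustrative)
    s4 = amdahl_speedup(p, 4)
    s8 = amdahl_speedup(p, 8)
    print(f"4 cores: {s4:.2f}x, 8 cores: {s8:.2f}x, "
          f"8-core advantage: {100 * (s8 / s4 - 1):.0f}%")
    # ~41% with these assumptions -- far short of the 2x the extra cores suggest.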

Servers that have lots of independent tasks, on the other hand, will love these cheap many-core chips. And coincidentally, servers often don't use AVX all that much anyhow.


In the comparison you linked, the 4-core CPU is clocked at 4.2GHz and the 8-core at 3.2GHz.

If you pick a comparison where the clock speed isn't so far apart, things look different: http://www.anandtech.com/bench/product/1832?vs=1729

Of course, fewer cores usually means a higher available clock speed. Just pointing out it's not just "code isn't parallel enough" that drives the disparity.


Sure, the 8-core runs at a lower clock speed and an old arch - that's a major (but unavoidable) part of the problem, as I said.

To be specific: you can't get the equivalent of a 7700K with eight cores - the 6900K is (AFAIK) the next best thing, and by the looks of it, Ryzen, for all its revolutionary improvements over Bulldozer, is not all that different from a 6900K. AMD's own numbers place it at identical single-core perf.


I have a 5820K (6-core, 3.3 GHz base), and using HandBrake to encode an HD .mkv file from MakeMKV, the 6-core machine beat the 7700K clocked at 4.2 GHz. Once I overclocked the 5820K to 4 GHz, it was about 10-15% faster than the 7700K, which I had then clocked to 4.6 GHz. So I'm looking forward to the 1700X. Not all programs scale as well as HandBrake does with more cores.


Sure, there are situations in which it's faster. In my (limited) experience with x264 at least the more extreme quality settings didn't scale quite as well, and those are the only ones I use. Regardless: some programs parallelize quite well, others not so much.

If you're lucky enough to primarily be bottlenecked on parallel stuff: go nuts, of course!

In any case, I don't want to suggest I'm not happy with AMD finally being a real competitor; I'm just trying to temper expectations of Intel-beating world domination. Most users aren't so lucky. Many people use laptops, and slow websites are more of a problem than slow HandBrake sessions. For me, it's Visual Studio and various build/run cycles. Sure, some tools use more cores with the right settings, but it typically scales really disappointingly beyond around 2 or so. That overclocked 5820K would probably lose to a stock 7700K in my workload, and that's just annoying.


Agreed... upgraded to an i7-4790K about 2 years ago, and really, other than software (x265) video encoding, I've been extremely happy... getting a big boost in performance for about the same price (for the cpu) as I paid 2 years ago would make me relatively happy.

Now, just need a refresh on the GTX 1080, so that 4K gaming can be consistent at >= 60fps.

ASIDE: I've been using nvenc with vbr2 for h.265 with very good results (but about 10-15% larger than roughly equivalent x265).


Why not just add a second 1080? I've been thinking about doing that for a little while now that I got a good 4k display.


I could do that... it's been a while since I've done SLI. Last time, I was running Linux and dual displays and had a LOT of issues (went back to Windows); it was very bad. Might give it another try though... though I'm not planning an upgrade for another year. I got the 1080 and a 40" 4K late last year, and I try to space out my upgrades: carry over the video card, then upgrade that later.


I can't think of a more spectacular comeback than AMD in the last year or so. They are positioned pretty well for the inevitable convergence of CPU and GPU. That CUDA crosscompiler was such a brilliant move.


It's already in the works. https://www.overclock3d.net/news/cpu_mainboard/amd_reveals_a...

CPU, GPU, HBM all on one package. That would be a game changer for building analytics databases. There are a bunch of GPU-based databases out now, but a lot of the benchmark numbers they give you rely on having the data already in GPU RAM. Real-world benchmarks that require moving data over PCIe are less generous.


I think the low TDPs have to be the scariest thing for Intel. Really interested to see what they can do in the 15-25W range for notebooks and their server stuff.


I have to wonder if AMD's console wins for the current generation gave them the badly needed R&D funding to reach a decent TDP.


How so? The R&D budget is even lower than before AMD got the console deals.

https://ycharts.com/companies/AMD/r_and_d_expense


It's still better than nothing. AMD management is dysfunctional; they treat engineering as a cost and were cutting it badly during the K9-K10 era.


I'm optimistically looking forward to the independent benchmarks on March 2nd.


The NDA on benchmarks is supposedly set to expire on February 28. [0]

Not sure why this PC World article is coming up now. There have already been many stories on Ryzen pricing and the release date of March 2 has been known for several days now... [1]

Product launch video [2]

[0] http://wccftech.com/amd-ryzen-reviews-live-february-28th-tes...

[1] https://www.reddit.com/r/hardware/comments/5urylu/when_does_...

[2] https://www.youtube.com/watch?v=1v44wWAOHn8


It was known, but not official until now. Also, preorders are now live.

[0] http://ir.amd.com/phoenix.zhtml?c=74093&p=irol-newsArticle&I...


Thanks


Just pre-ordered mine...:D (newegg)


Huh, I don't see them on Newegg yet.


Do a search for ryzen, they are there.


Thanks, I was trying to find them browsing by category.


That's funny, Ars Technica says reviews won't be out until launch day. https://arstechnica.com/information-technology/2017/02/amd-r...


Agreed. I really want to know how having two memory channels is going to play out for memory intensive workloads like in-memory databases and caches.

I've been wanting something wider to play with at home for a long time, but the AMD parts are slow, and the Intel parts are expensive and have expensive motherboards. I guess I don't want it that bad.


Intel motherboards are super cheap, the trick is to ignore the sucker options.


Have you priced an LGA-2011v3 board recently? The cheapest board on Newegg is $125 for a refurb. To my knowledge you can't get >4 cores from Intel without stepping up to LGA-2011v3 or similar.

The quad-channel boards genuinely cost more to manufacture. It's not clear how much more because almost all of them are intended as server or enthusiast boards and have other things driving up the cost.

If Intel marketed LGA-2011v3 as a mainstream option then I think it might be reasonable, but I really can't tell.


If AMD makes a comeback, how did they do it? Not enough competition and Intel got complacent, or what? Or did they just reach the limits, so it's no longer possible to keep the lead and prevent AMD from catching up?


AMD's failure was mostly caused by (1) Intel's anticompetitive tactics starving them of revenue, (2) Barcelona being delayed many months by the TLB bug, and (3) the clever yet bad Bulldozer design. All of these are now gone, so AMD is back.



They actually have a history of this. They did it with Athlon too.


AMD has a history of having dysfunctional R&D:

386DX - copied Intel design verbatim (1:1 microcode). Got sued, won only because Intel had multiple source agreements with IBM.

K6 - purchased NexGen. Designed 100% by outsiders.

K7 - once again hired somebody else's engineers. This time almost whole DEC Alpha design team.

K8 - DEC people had some more steam left in them.

K9-Zen period - no more companies to take over and claim credit for their designs; add terrible management bleeding engineering talent while burning money on stupid ideas like SeaMicro = in-house garbage with 50% of Intel's IPC.

Zen - finally managed to rehire competent ex DEC engineer Jim Keller after his successful career at Apple.


I'm far too broke to shell out cash for a new Mobo and CPU right now, but I'm excited to see what this does to Intel's prices for older CPUs. I'd love to upgrade my i5-2400 soon-ish.


Unless you are regularly doing CPU-intensive tasks, you can probably stick with it a few more years. I noticed since I bought an SSD that most of the time the storage is what limits the CPU.

Until my DVD reader burned my desktop motherboard, I was using a Core 2 Quad and it was still fast enough. No lag whatsoever. Now I'm using a laptop with a first-gen i5 and I feel no need to upgrade it (I'm using it as a desktop; the battery life does suck).

I understand why PC sales drop every year. You just don't have to buy a new one unless yours dies.


In terms of general use, yeah definitely no problem with the CPU at all. It's zippy enough. But I also do a lot of CPU-intensive gaming (Cities: Skylines w/ tons of mods, RimWorld, Banished, etc.).


I bought an i7 desktop CPU in 2008 and have absolutely no inclination or reason to upgrade. I never thought I would be using the same CPU for almost 10 years, but here we are.


Jim Keller deserves a frigging statue in front of the AMD HQ, if he doesn't have one there already. I'm not even kidding. His efforts shouldn't be easily forgotten, and it could serve to inspire new generations of AMD engineers.

Beyond the product quality itself, I think AMD has had a pretty smart launch strategy by releasing the CPU chips first to show that it can beat "Intel's best".

But they really need to start focusing on notebooks ASAP. That's where they can steal most of the market from Intel, especially now that Intel is showing signs of (slowly) abandoning the notebook market by prioritizing Xeons over notebook chips for its new node generations.

AMD should prioritize notebook chips either next year or the one after that, at the latest. They should be making the notebook chips first, before the PC ones. They need that market share and awareness in the consumer market.

In terms of how they should compete against Intel in the notebook market, I would do it like this (at the very least - AMD could do it even better, if it can):

vs Celeron: 2 Ryzen cores with SMT disabled

vs Pentium: 2 Ryzen cores with SMT enabled

vs Core i3: 4 Ryzen cores with SMT disabled. Or keep SMT and lower clock speeds, as Intel did it. This may help further push consumers as well as developers towards using "more cores/threads".

vs Core i5 (dual core): 4 cores with SMT enabled

vs Core i5 (quad core/no HT): 4 cores with SMT enabled + higher clocks and better pricing. Maybe even 6-core with HT, if AMD goes the 6-core route. I honestly don't even know why Intel decided to make "Core i5" a quad-core chip as well, and its Core i7 a dual-core chip as well. It's so damn confusing - but maybe that was the goal. For differentiation's sake, it may be better for AMD to have a 6-core at this level or maybe even an 8 core with SMT disabled - so same thing as Intel, but with twice the physical cores. I don't know why but for some reason 6-cores don't attract me much. It feels like an "incomplete" chip.

vs Core i7 (quad core/HT): 8 cores with SMT enabled

The guiding principle for this strategy should be "twice the cores or threads, with competitive or better single-thread performance and competitive or better pricing."

In a way it would be the inverse of the PC strategy where they maintain the number of cores but cut the price in half. This would mainly focus on doubling the number of cores (because notebooks come with so few in the first place), while maintaining similar or better pricing.

The only ones that don't really fit well in this strategy are the Celeron and Pentium competitors, and that's because a dual-core Ryzen, even at low clock speeds, should destroy Intel's Atom-based Celerons and Pentiums. We could be looking at at least a +50% performance difference, and that's what AMD should strive for there as well. AMD should show Intel what a mistake it made when it tried to sell overpriced smartphone chips at laptop chip prices.


From what I recall of their recent marketing material, it seemed like they are looking to corner the market on 64-bit ARM servers, and they also have some hybrid amd64/ARM chip plans to help people who want to take advantage of ARM in datacenters migrate. But I'm no expert and this is based on a recollection of marketing materials from a few months back. 64-bit ARM could be a huge thing in large-scale light-duty computing (e.g. web/cloud datacenters) because of the cooling and horizontal scale-out, and also in mobile/IoT/industrial, so this isn't quite as crazy as it sounds.


AMD K12[1] is their full-custom ARM64 core. Originally scheduled for 2016, it was delayed to focus on Zen, probably the right move.

It's still scheduled for 2017, but there hasn't been much noise about it for a long time so I suspect it's been delayed, as they're still focusing on Zen & Vega.

Zen & K12 probably share a lot of elements, so if Zen is awesome, K12 probably will be too.

1: https://en.wikipedia.org/wiki/AMD_K12


Project Skybridge was cancelled in 2015 (supposedly something to do with GF 20nm). Maybe they'll look at it again once current plans are proven to be executing well.


It'll be interesting to see how Vega and on-die HBM tie into all of this with Infinity Fabric. AMD has been laying this foundation for a while.

A "Zega" APU would be killer in notebooks as well as HPC...


I think Raven Ridge is what you're looking for.


Yep. In the past their APUs have been memory-bandwidth starved; HBM on die changes that. If they can keep TDP in check (Zen is promising so far), then Intel might be in trouble in the prosumer notebook space.


This is really exciting, as I have been waiting to build a new system with the new AMD gear. Lots of people don't hear about it, but I am super excited to see the new server-class CPUs. I built a quad Opteron 6380 system and have been in love with them ever since, but they weren't perfect and had some issues I hope are fixed with this new line.


Interesting to see multi-core wars starting here.

Intel did not do much as the market leader; 8 years back I could already buy a 4-core machine. Waiting to see how AMD does on server parts: 32 cores? 64 cores? POWER9 does 24 cores / 96 threads.


I was highly skeptical when the first info about Ryzen came out. This is looking really promising versus the $300 to $400 price range Kaby Lake i7 (7700, 7700K) CPUs.

And when comparing the 1700/1700X to the $200-250 price range i5-7600 Kaby Lake.


Every time I look at AMD's stock price from last year to today, I really wish I'd been wiser with my money. They've performed excellently and it's showing; this will likely be huge.


I wish people would benchmark GEMM performance for all of us math folk.


Ah, but what should they benchmark GEMM with?

MKL? I don't think so; it takes the worst codepath if the chip isn't GenuineIntel. OpenBLAS? Doesn't have Ryzen support, and doesn't even recognize some AMD cores. ACML? Hasn't been updated since 2013 or so.

This is a serious question. What are you supposed to do if you need a GEMM kernel for Ryzen? I sure hope AMD puts out updated ACML libraries real soon.


I'd like to see benchmarks with OpenBLAS. Unlike ACML it's under active development, unlike MKL it won't deliberately screw AMD performance, and it offers "pretty good" performance across every environment I've tried it in. Good enough that it's not worth paying for MKL, not worth going through ATLAS's self-tuning routine, not worth changing my build scripts to use vecLib under OS X. If OpenBLAS currently runs poorly on Ryzen I hope Ryzen will get some development love, because I kind of hate using ATLAS and at this point it would take a major advantage to tempt me back away from open source components.


Word! I think AMD should focus on OpenBLAS and add support for their new CPUs there. OpenBLAS is part of the FOSS toolchain for the EasyBuild easyconfigs, so for the HPC field adding support to OpenBLAS would be great.

I would also need to update this benchmark (if I can get hold of the new AMD CPUs) ;-) http://stackoverflow.com/questions/5260068/multithreaded-bla...
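
For what it's worth, here's a minimal sketch of the kind of DGEMM timing run people do, assuming NumPy is linked against OpenBLAS (numpy.show_config() reveals which BLAS backend is actually in use):

    import time
    import numpy as np

    np.show_config()  # check which BLAS backend NumPy was built against

    n = 4096
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    a @ b  # warm-up run so thread pools and caches are initialized

    t0 = time.perf_counter()
    c = a @ b
    elapsed = time.perf_counter() - t0

    # A dense n x n matrix multiply costs ~2*n^3 floating point operations.
    gflops = 2 * n**3 / elapsed / 1e9
    print(f"DGEMM {n}x{n}: {elapsed:.2f} s, {gflops:.1f} GFLOP/s")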


OpenBLAS performance is atrocious in 32 bit mode because it doesn't properly support AVX with the halved register file. Not the most common configuration, but MKL handles it fine (on Intel chips, obviously).

That said I agree it makes more sense for AMD to contribute to OpenBLAS than anything else.


Interesting -- what are the use cases for single precision BLAS on CPU? All the scientific software I use requires double precision and for tasks that do well with single precision, I would have thought that GPGPU would now be the go-to solution.


Not if you're shipping software to consumers. (Also, I actually meant 32-bit as in the OS, not the floating point precision)


IIRC GEMM and float performance isn't very good, since AMD made a decision to reduce math performance in order to decrease latency.


It's not so much reduced math performance (it's twice as fast as Bulldozer!); it's that Intel chips as of Haswell have dual 256-bit FPUs per core.

If you use them, the Intel chips downclock (sometimes severely) in order not to violate their TDP, but the dual FPUs are still there, and it's still a win for GEMM. I can see why AMD didn't follow along here, but it could be a factor in some small spaces - when you need GEMM but can't use a GPU.

Note that Ryzen can split its 256-bit FPU into two 128-bit units, so on code that's not using AVX, it's completely on par with Intel.
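
A back-of-the-envelope peak-FLOPs comparison under the scheme described above (a Haswell-class core with two 256-bit FMA units vs. a Zen core issuing 256-bit AVX as two 128-bit halves); the clock speed and core count are illustrative assumptions, not measured figures:

    # Peak DP FLOP/s = cores * clock * (doubles per SIMD op) * (FMA units) * 2
    # (the trailing 2 because an FMA counts as a multiply plus an add).
    def peak_dp_gflops(cores, ghz, simd_doubles, fma_units):
        return cores * ghz * simd_doubles * fma_units * 2

    # Haswell/Broadwell-class core: two 256-bit FMA units (4 doubles each).
    intel_style = peak_dp_gflops(cores=8, ghz=3.5, simd_doubles=4, fma_units=2)
    # Zen core as described above: 256-bit AVX split into two 128-bit halves,
    # effectively one 256-bit FMA per cycle.
    zen_style = peak_dp_gflops(cores=8, ghz=3.5, simd_doubles=4, fma_units=1)

    print(f"Intel-style (illustrative): {intel_style:.0f} GFLOP/s peak")
    print(f"Zen-style   (illustrative): {zen_style:.0f} GFLOP/s peak")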


Wow, who uses 256-bit floats?


It doesn't handle 256-bit floats, in this context 256-bit means it can operate on eight 32-bit floats or four 64-bit floats at the same time.


Ah, that's what I thought; I was so surprised when I thought you said they had a 256-bit FPU XD

I'm dumb.

It's interesting that power consumption increases with AVX instructions. Do you happen to have a link?



Good, now Intel has some competition.



