Intel Announces Skylake-X: Bringing 18-Core HCC Silicon to Consumers (anandtech.com)
261 points by satai on May 30, 2017 | 221 comments



Intel getting kicked by AMD every few years is good for the market and the consumer. I am still planning on getting an AMD system to show my support for their efforts. I have been holding off for one with a *gasp* integrated GPU, as I will be using the system as a media center. Right now I have the high-end Intel Compute Stick. The limited RAM is a huge drawback. Oh, and if it plays Civ6 well, that's a huge bonus.


Compared to the atom on your compute stick anything will be grand!


The high-end Compute Stick has a Core m5, not an Atom. http://ark.intel.com/products/91979/Intel-Compute-Stick-STK2...


The Compute Stick I was referring to is the BOXSTK2m3W64CC, which has an Intel Core m3-6Y30 processor. The RAM is really the issue: 4GB only. I run Kodi on it 99% of the time. If I exit out, it is to run Firefox to watch / listen to something that Kodi does not support (XMRadio). Every once in a while I check and I am hitting swap and Kodi is complaining.


> Intel hasn’t given many details on AVX-512 yet, regarding whether there is one or two units per CPU, or if it is more granular and is per core.

I can't imagine it being more than one per core. For context, Knights Landing has two per core, but that's an HPC-focused product.

> We expect it to be enabled on day one, although I have a suspicion there may be a BIOS flag that needs enabling in order to use it.

This seems odd.

> With the support of AVX-512, Intel is calling the Core i9-7980X ‘the first TeraFLOP CPU’. I’ve asked details as to how this figure is calculated (software, or theoretical)

So let's work backwards here: the Core i9-7980XE has 18 cores, but as of yet the clock speed is not specified.

A couple of assumptions:

- We're talking double precision FLOPs

- We can theoretically do 16 double precision FLOPs per cycle

FLOPs per cycle * cycles per second (frequency) * number of cores ≈ 1 TFLOP

So we can guesstimate the clock frequency at ~3.47 GHz.

Edit: On review, such a clock speed seems rather high for an 18-core part. I'm not sure if consumer parts will do 32 DP FLOPs?
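
(For anyone wanting to reproduce the back-of-envelope number, a minimal sketch in Python; the 16 DP FLOPs/cycle figure is just the assumption above, i.e. one 512-bit FMA unit counted as 8 doubles x 2 ops.)

    # assumption from above: one 512-bit FMA unit = 8 doubles * 2 ops = 16 DP FLOPs/cycle
    target_flops = 1e12          # the "first TeraFLOP CPU" claim
    cores = 18
    flops_per_cycle = 16
    clock_hz = target_flops / (cores * flops_per_cycle)
    print(round(clock_hz / 1e9, 2), "GHz")   # ~3.47 GHz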


32 full-width vector ALUs running at 3.5 GHz is probably not realistic. I think it is running at around 2 GHz at most [1]. The trick is that FMAs are normally counted as two FLOPs.

[1] (* (/ 512 64) 2 2 18 2 1000 1000 1000) = 1152000000000 FLOPS (a 512-bit unit over 64-bit doubles, times 2 for FMA, times two units, over 18 cores at 2 GHz)

edit: the 10-core part has a base clock of 3.3 GHz. The 18-core part will probably be in the 2.5 range at best (the best 18-core Broadwell I can find runs at 2.3, but it is a dual-socket part). Running in full AVX-512 mode will probably downclock the CPU further.


> The 18-core part will probably be in the 2.5 range at best (the best 18-core Broadwell I can find runs at 2.3, but it is a dual-socket part). Running in full AVX-512 mode will probably downclock the CPU further.

Indeed, the 2.2-2.3/2.7-2.8 GHz (base/boost) of the >18C E5-269X v4 CPUs is the non-AVX instruction clock. With AVX, these drop by 300-400 MHz [1], and I expect the Skylake chips to behave very similarly. In fact, I would not be surprised if on average AVX-512 required more throttling than 256-bit AVX2.

[1] https://www.microway.com/knowledge-center-articles/detailed-...


Yes this sounds more probable.


But current Intel CPUs already have dual AVX2 units; if it's just one AVX-512 unit, there aren't really going to be any gains (though some of the new instructions would be useful).

Knights Landing has two per core, but they are pretty weak in IPC, so even if Skylake also had two, they'd still maintain differentiation.


If I remember correctly, the first generation of AVX (Sandy Bridge) did not actually have a full-width AVX unit; it just executed the equivalent SSE instruction twice instead. So it would not be surprising if the dual AVX2 units also double as an AVX-512 unit here.


Although even with one unit, having 2x the registers of 2x the width means you can fit a significantly larger working set without spilling to cache...

So there still would be benefit even if it's just one unit.


you also get generalized predication which is nice for ease of vectorization. Still, it is very likely that these CPUs have two full vector ALUs per core.


>> We expect it to be enabled on day one, although I have a suspicion there may be a BIOS flag that needs enabling in order to use it.

>This seems odd.

When AES support was first introduced, it was incredibly difficult to find a motherboard that would support it out of the box; it almost universally required a BIOS update.


Even later on BIOS got it wrong from time to time.


Sandy Bridge and up have two AVX units per core, and as you mentioned Knights Landing does as well, so I don't think it is a given that this will only have one per core. One AVX-512 unit when other chips already have two 256-bit AVX units doesn't seem like much of a leap.

> A couple of assumptions:

> - We're talking double precision FLOPs

Double precision is not what is typically used to measure FLOPs.

> - We can theoretically do 16 double precision FLOPs per cycle

One AVX-512 lane would be able to do 16 FLOPs per cycle, or 32 if you count FMA as two (which I think is silly, but that's what Intel will do). Two AVX-512 lanes would double that, but one thing to remember is that Intel's chips don't run at their full clock speed when using their SIMD lanes to the fullest.

All of this is to say that I would guess the base clock speed could be lower than estimated, and it is likely to be much lower still while the SIMD units are fully loaded.


RE: double counting FMA. I don't think it's that silly.

If you didn't count that operation as two different floating point operations, you would suddenly lose the ability to compare FMA chips to non-FMA chips. It's much simpler to just count it as two.


512-bit registers fit 8 doubles or 16 floats, and it's not as simple as 16 FLOPs per cycle. They are probably counting fused multiply-adds, which are usually the highest FLOP/cycle instruction.

If an FMA can be done in one cycle, then we have 18 x 32 = 576 FLOPs per cycle, so if it's clocked at e.g. 2.0 GHz, peak performance would be ~1.1 TFLOPS.
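
(A quick sketch of that arithmetic in Python, using the single-precision reading of 32 FLOPs per core per cycle; whether Intel counts single or double precision, and how many FMA units there are per core, is still unconfirmed.)

    # peak throughput estimate: 18 cores * 32 FLOPs/cycle * 2.0 GHz
    cores = 18
    floats_per_reg = 512 // 32             # 16 single-precision floats per 512-bit register
    flops_per_cycle = floats_per_reg * 2   # an FMA counts as two FLOPs
    clock_ghz = 2.0
    print(cores * flops_per_cycle * clock_ghz, "GFLOPS")  # 1152.0, i.e. ~1.1 TFLOPS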

Edit: I see someone wrote exactly this 8 hours before...


Looks like they think they're still winning regardless of the price, and that simply bumping the core count to stay the kings, and bringing the price back to the Haswell-EP-level high (rather than Broadwell-EP crazy), will be enough.

What also shows that they seem confident is that they're further segmenting the market based on PCIe lane count, to push everyone wanting >32 lanes into the >$1k regime.

All in all, the cool thing is not the i9s and high core counts, which you could get even before by plugging a Xeon chip into a consumer X99 mobo (though you'd have to pay some $$$), but the new cache hierarchy, which will give serious improvements in well-implemented, cache-friendly code!


...and of course AVX-512 for the lucky ones that can get significant benefit from such a wide SIMD (also considering the very likely significant clock limit for AVX instruction streams).


Even chips with AVX2 on all cores slow down when it's fully used. The Xeon Phi has a pretty low clock, 1.3 GHz IIRC.

Still, it gets you GPU style performance on vector workloads without needing separate hardware and software stack.


    even chips with AVX2 on all cores slow down
    when it's fully used
Not really. Xeon Phi's clock is low because the die is massive. The downclocking for AVX started with Knights Landing. My Broadwell-EP Xeon stays at 3.0 GHz even when I (ab)use AVX2.


I tried AVX512 on a Xeon (non-Phi) part recently and it was extremely underwhelming. The workload (OpenMP-parallelized n-body) was actually slower with AVX512. Since it was under virtualization and I didn't have access to the bare metal hardware or to performance counters, I have no way of knowing why, but I'm almost certain it was because it lost all-cores Turbo and downclocked aggressively. It had previously scaled almost linearly going from SSE to AVX/AVX2, but it regressed with AVX512.


What scaling are you referring to when talking about "linear"? Did you really get 2x absolute performance going from 128-bit to 256-bit SIMD (regardless of the uarch, e.g. both with SBE and HSW, with the former having relatively poor cache performance)? I'd be surprised, unless your code is in the >>O(10) flops/byte regime [1] and has an especially friendly instruction stream too. Otherwise, I'm skeptical.

Putting aside the "scaled almost linearly" statement, I'm not surprised that AVX-512 did not give the expected benefits out of the box; you suddenly need double the amount of data loaded into the registers for every 512-bit instruction. You'd also quite likely want to make sure masking is used effectively. [2]

[1] http://people.eecs.berkeley.edu/~kubitron/cs258/lectures/lec... [2] https://software.intel.com/en-us/node/523777


That's disappointing. Could it be a bad interaction between OpenMP and AVX512 (cache pressure, etc.)?

I've also seen reliable increases in performance up through AVX2 but when I tried to run same code on a Xeon Phi, it fell short of the plain Xeon.


It might be your processor only supported AVX512 in emulation — the article makes it sound like only the Phi currently supports it natively.


So they implemented AVX512 on the Xeon server parts in microcode? That seems crazy.


It's fairly common practice with bleeding-edge vector instructions. The reasoning (assuming it is the case here) is that a theoretically-minor performance regression (the cost of converting 1x AVX512 to 2x AVX2 in microcode) is usually much preferred over a CPU exception when attempting to run a binary with AVX512 instructions on a server. It also means you don't need a $15000 chip to test your AVX512 code.


> Xeon Phi's clock low because the die is massive.

That's not the main reason. The main reason is perf/W for highly parallel workloads.

> The downclocking for AVX started with Knights Landing.

You're mixing things up here, and that statement is incorrect too. AVX throttling started with Haswell-EP [1,3] (Intel kept it quite hush-hush, avoiding mentioning it in product specs and such). Secondly, Xeon Phi is the HPC product family, and KNL is the codename of the 2nd generation of that architecture [2].

> My Boardwell-EP Xeon stays at 3.0Ghz even when I (ab)use AVX2.

In that case you're most likely either not using more than 1-2 cores or you're overclocking (or perhaps monitoring incorrectly), see [1,3].

[1] http://images.anandtech.com/doci/8423/AVXTurboHaswEP.png [2] https://en.wikipedia.org/wiki/Xeon_Phi [3] https://www.microway.com/knowledge-center-articles/detailed-...


This[1] seems to suggest otherwise? Or am I misinterpreting it?

[1] - https://computing.llnl.gov/tutorials/linux_clusters/intelAVX...


It seems Skylake-X will not be soldered [0], unlike previous HEDT CPUs from Intel. AMD even solders its normal consumer Ryzen CPUs. How much will Intel save with this? 2 to 4 dollars per CPU?

I'm also curious what that means for the thermals. Intel's 4-core parts have much better thermals when delidded to replace the bad TIM.

[0] https://www.overclock3d.net/news/cpu_mainboard/intel_s_skyla...


It's high time Intel started adding more cores to consumer CPUs rather than spending half the silicon area on a crappy integrated GPU. It's only thanks to Ryzen that this is happening.


I like my Intel graphics.

Good battery life on my laptop, good Linux support on my desktop. What's not to like?


I agree. No bullshit support for Linux / *BSD is a huge selling point for Intel video. If Intel made a dedicated video card that packed more performance (and I don't even care about matching NVidia) I'd probably buy it.


I can't dispute that Intel video is simple and that's nice, but that's a failing of NVidia and AMD, who happen to be the primary discrete GPU manufacturers, and not an indicator that integrated graphics is a good idea.


Definitely! If I could get an NVidia or AMD discrete graphics card with the same support for Open Source as Intel, I'd be even happier.

As for integrated graphics being a good or bad idea, I'd guess that power consumption vs. performance is the real differentiator there.


The poor performance (as well as perf/W for all but 2D graphics). Compare it to an NVIDIA chip of similar die size, TDP, and target form-factor. You'll realize that most Intel chips (other than the top of the line like the P580) are not in the same ballpark. As much as I hate saying this, NVIDIA's drivers are stable on Linux -- and I know because I use them every day both for display and compute (in OpenCL/CUDA).


And stable on BSD. That's the crucial part for me.


Hmmm. I must be doing something wrong, because I feel like it's completely unstable for me. NUC6i5 with integrated Iris on Ubuntu 16.04. I get a dozen errors a day with the x server, and I get weird intermittent visual glitches with multiple monitors.

I have no need for high performance and Iris should be good enough, but the stability still leaves a lot to be desired.


Ubuntu 16.04 is pretty old code. I would try a mainline kernel (currently http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.11.3/) and maybe even more up to date userspace libraries (but then you're getting into a dependency mess. replacing just the kernel is easy).


Likewise. I had to go to Ubuntu 14.04 to get reasonable Linux performance out of my NUC5i7.


I love the intel graphics linux driver. It's the only GPU with which I've been able to get HDMI working on a laptop in Linux.

What I don't like is Intel spending so much area on the iGPU [1]. Why should an iGPU consume nearly the same area as 4 cores? An area-optimised iGPU that can only do a few GFLOPS is good enough, and where it isn't (gaming, deep learning), a discrete GPU with wide memory will be needed anyway.

[1] http://www.anandtech.com/show/10968/the-intel-core-i7-7700k-...


Nothing. But nice to have both options now, with GPU or with more cores, customer's choice.


I agree. Laptops have moved to terrible Intel GPUs and are much worse for it, because Intel conned OEMs into thinking they were decent GPUs. They brand them as "multimedia", which just means they can stream video up to the screen's resolution at 110°C.


IMHO Intel GPUs are not that bad. If you're not gaming, they handle most tasks, such as image editing in Photoshop and driving multiple screens at high resolutions. And frankly, that's all a consumer really needs.

Sure, an external dedicated GPU is nice to have, but being able to buy a decent processor (e.g. a Pentium G4560) with an integrated GPU for 50 dollars is really awesome if you're on a tight budget.

PS: I'm writing this on a laptop that has the very first Intel HD Graphics (1st-gen i7). I don't have any problem with it, except gaming and a two-monitor limit.


This is also my opinion. I don't understand the vehemence against the integrated GPUs... I think they perform remarkably well.

I am currently typing this from a bus on a Dell Alienware 13 r3 which has both an integrated Iris and a discrete Nvidia. TBH, I run my Ubuntu environment through the integrated graphics chip for the watt savings and because 90% of the stuff I do under Linux performs flawlessly on the integrated GPU.

Of course I can switch to the Nvidia GPU using `prime-select` but besides running games or playing with CUDA I don't need the GTX 1060 day-to-day.


I never got that weird loop-through Nvidia adapter working satisfactorily on Ubuntu, so I decided on Intel all the way for laptop GPUs - I am not building in 3D or playing games, so nothing is missed. Wouldn't mind knowing how it works on your Dell though: do you have to invoke things differently to use the Nvidia adapter? Bumblebee rings a bell.


So there is now a proprietary graphics ppa that is (as I understand it) a collaboration between Nvidia and Canonical. You install the nvidia drivers from there and can also install a command line tool called "prime-select". You can then select which "profile" you want to use via:

    $ sudo prime-select [nvidia|intel]
Then logging out and back in will swap the adapter in use. From there you will be using either the Intel or the Nvidia chip full time. Bumblebee is an option... but you would have to mark specific commands with "optirun" and would suffer a performance penalty. But it is a good option if you don't want to deal with the logout to swap.

For me, I don't mind the logout for swap so I just use the "prime-select" tool.

Edit: Here is the ppa:

https://launchpad.net/~graphics-drivers/+archive/ubuntu/ppa

It looks like there is a bumblebee something in there.

Here is a news blurb about it:

https://itsfoss.com/ubuntu-official-ppa-graphics/


Thanks for the update. I ran Bumblebee a few times for OpenGL-intensive things, and the heat management and general performance of the Dell XPS (from 5? years ago) were worrying. I also found the loop-through odd: the results would be passed back to the built-in graphics, plus the ports weren't mapped properly, so you only had two screens on built-in graphics - the Nvidia part didn't have any connection to the HDMI in Linux mode. So it was far from the effortless high-speed awesome graphics expected; every 'bumblebee' invocation had all the convenience of a helicopter ride.

Sounds like the prime-select tool is what I needed. Oh well, glad I went from cutting edge SGI Infinite Reality 3D to integrated graphics over the years.


Back when I had an "Optimus" laptop with that kind of setup, I had to use bumblebee/optirun/primus (I forget how the parts fit together) to launch programs that I wanted to use the dedicated card with.

It worked a little like Wine does, you'd just wrap your program invocation with `optirun <game>` and it would set everything up for you. IIRC it was also possible to combine this with wine, `optirun wine <game>`. The first year after I got the laptop (this was maybe 5-6 years back) support was kind of flaky, but things got smoothed out pretty quickly.

Eventually I would just launch Steam with optirun and any programs started by Steam would inherit the dedicated GPU settings.

I haven't used a setup like that for a few years now, I'm sure it's only gotten better since. The Arch wiki has a good rundown.


They work fine when they are just rendering Ubuntu's window manager, yeah, but so would a CPU with AVX2. That is a pretty low bar.

Those of us complaining are using them for gaming, or for serious compute jobs.


>those of us complaining are using them for gaming, or serious compute jobs.

Then... don't do that? Seriously, they're not intended for this. Are you also upset that your $50 Ryobi drill doesn't have enough torque to bolt together a 747?


Like I alluded to... I can swap cards at any time:

    $ sudo prime-select nvidia
    $ sudo logout
But having the intel option to roughly double my battery life is really nice on a laptop that sees a fair bit of use as a portable.


The quality of their opengl drivers has been really lacking.


Except on Linux, where they're excellent.


I still haven't gotten over Intel's Poulsbo fiasco on Linux: http://www.phoronix.com/scan.php?page=news_item&px=MTMyODA


Um... isn't that misstep exactly why they avoided it the next time around, by making the GPUs in-house, and having open specs and drivers from the very start?

Seems like genuine efforts to mitigate.


Using it for anything like browser GPU acceleration causes many oddball problems (actually, Firefox will just blacklist those drivers/hardware). The newer Iris parts are actually acceptable, but we will have to wait and see in a few years' time whether they will continue to provide driver support. The iGPUs from Sandy Bridge to Haswell didn't keep getting supported, and GPU acceleration gets turned off.

And if they don't intend it for gaming or any GPU usage (which is perfectly fine), how about they stop wasting silicon on the GPU? After all, the majority of the cost of developing a GPU isn't even in the hardware, but in the software and driver support.


> PS: I'm writing this on a laptop that has the very first Intel HD Graphics (1st gen i7). I don't have any problem with it, except gaming and a two monitor limit.

Damn, and that's one of the ones I would consider pretty bad. They started getting decent around Sandy Bridge / HD 3000. Before then, most 3d applications would just fail to open from my experience.


I think the worst part about it is not necessarily that some lower-end/slimmer laptops have integrated Intel GPUs. That's fine, there's a market for it.

The worst part is that Intel has coerced OEMs into buying their integrated GPUs even if they buy a dedicated GPU for their laptop.

Imagine if ARM said that if you wanted its Cortex-A CPU, you had to use a Mali GPU, even if you also bought a PowerVR GPU to put in your device...That would be crazy, right? Then so is this.

Intel has done quite a few anti-competitive things since it was last sued (and lost) in the EU for such behavior, almost a decade ago. I wonder why no government body seems willing to take it on anymore. Is everyone tired of doing the whole antitrust dance with Intel again?


Maybe I'm just a sucker for marketing, but the whole reason for the "Optimus Technology" was that you could effectively "turn off" the power hungry nVidia graphics when you weren't playing games. Yes, that means Intel is selling integrated graphics, but that silicon is useful for power savings. It doesn't seem like a bad thing.


No, you are absolutely correct in pointing this distinction out. There is indeed a place in the market for this dual-GPU setup. The Intel GPU provides much better battery performance than the discrete graphics card, and switching between the two can be very useful.


Why is it such a problem to make discrete cards less power hungry instead of powering them down and switching to internal GPU?


A simple metaphor: why not make a Ford F350 get 35 mpg instead of owning a Fusion Hybrid for commuting during the week?

Limits of physics and engineering.

Will a reasonably powerful gaming card eventually draw the equivalent power of an integrated chip (when not gaming)? Probably. It seems like we're not there yet.


I suspect that even if it were an option to get notebook CPUs without integrated graphics, very few models would come with this configuration. Apple would surely keep the dual GPU setup to maximize battery life via GPU switching when the dedicated GPU is not needed (and Apple gives you the option of only using the dedicated GPU if you wish).

Most PC makers would do the same. The integrated GPUs are fine for most notebook tasks, even for power users, and they get much, much better battery life, which is one of the key things a notebook needs to do well. I'm on a rMBP and only my integrated GPU is in use right now because I'm not doing anything that needs the bigger, more power hungry GPU.


Even for laptops with a dedicated GPU, the integrated GPU is still necessary to save battery life when the extra graphics performance is not needed.


But in practice it almost never works outside of a benchmarking environment.


You can set it via nVidia setting (Windows) or prime-select/optirun (Linux).


I fear that, because AMD is Intel's only credible competitor, without AMD directly competing Intel will stagnate like before. Hopefully ARM processors can help fill that void soon and increase competition, especially with the rising costs of staying competitive.


Yes, whether or not you like AMD as a company, purchasing some Ryzen now is good for the industry.


I can't wait for the server and laptop lines to come out. If nothing else, low-end server and laptop markets could do with a shake-up.


They only spend silicon on iGPUs for mainstream parts. Their enthusiast series has no integrated graphics. The currently released Ryzen parts are basically AMD's enthusiast series - the Ryzen APUs come at the end of the year.


> It's only thanks to Ryzen that this is happening

Really? You think a few months ago Intel read some Ryzen reviews and completely threw out their product roadmap and developed new processors overnight?


Not exactly like that, but I've been around long enough to have watched Intel play this whack-a-mole game with their competition before.


They didn't need to develop new processors, these are relabeled Xeon processors as it has always been with the HEDT platform.


This really makes me wonder how many more unreleased products Intel has waiting in some drawer somewhere for that case where they have some serious competition.

It is also strong proof that without competition Intel is not going to release anything to move the market forward.


>This really makes me wonder how many more unreleased products Intel has waiting in some drawer somewhere for that case where they have some serious competition.

Given the churn rate of technology? Probably close to none. It's not like you can wait on CPU technology and have it still be relevant when you finally release it.

Except if you mean "potential projects" that still need years, and tons of work and R&D to be finished.


I doubt they have stuff on the shelf for a rainy day, but it's undeniable that competition encourages them to ramp up R&D, invest in new processes and factories sooner, lower prices, and push the envelope on what's releasable. Sticking to 12 or even 16 core products would be a lot safer for Intel's gross margins.


They would sell this chip as Xeon anyway. It's just another variant.


They probably have a lot up their sleeve: https://en.wikipedia.org/wiki/Teraflops_Research_Chip


This is disheartening.


I can't even read this article properly. The site uses 130% CPU, scrolling hardly works at all, it keeps making network requests like crazy and it even crashed my Chrome tab.

And for what reason? I do understand the dilemma that ad funded sites are in. I'm not using an ad blocker. But I simply don't get what purpose this sort of abusive website design is supposed to have.

I will never visit Anandtech again. I've seen it many times. It's never long after advertising gets irrational that content quality suffers as well and the entire site goes down the drain.


I usually roll my eyes at these complaints, but in this case it's really quite something. I just let the page sit unused for 5min and it downloaded 165MB.

Safari has much better defaults when it comes to such behaviour by ad networks: It blocks 165 requests and shows no further activity after loading 5MB: "Blocked a frame with origin "http://www.anandtech.com" from accessing a frame with origin "http://pixel.mathtag.com". Protocols, domains, and ports must match."


This seems to be caused by a software bug (at least, I'd hope so). The site continues to make requests on a loop, driving up data and resource usage.


You probably need a faster CPU ;-)

No problem with cpu usage here though, but I do have uBlock origin installed. AnandTech can be a nice site to visit every now and then!


Same here. uBlock does the trick.


So what Anandtech managed to do in this case is drive away one reader who doesn't use an ad blocker and attract two users who do.

Where will this end? I believe it will end in all content moving to closed technology platforms that lock out ad blockers, i.e. apps.

The reason why I have posted a meta comment, which is always a questionable thing to do, is that there may be people here on Hacker News who can fix this absurd logic that is killing the open web.


I saw the same thing. I had to kill the entire Chrome process. There's something being loaded, probably via ads, that totally wrecks the entire Chrome process group not just the renderer, which is impressive.


It doesn't appear to be caused purely by advertisements. Ghostery reports 70 trackers loaded on the page, and only by "trusting" the website in Ghostery do I see the loading of more data.


Oh. So now they're making the i9!

So it did take AMD and Ryzen to make Intel up its game after its 5-6 year hiatus with the i7, eh?

Competition is clearly good :)


The i9 is just a new brand name for the top-tier i7.

AMD have been compared favorably to nearly-top-tier i7s. Suddenly, by rebranding the top-tier to i9, Intel put a lot of gap between the i7 and Ryzen in the mind of the punter.


Why would they name it as i9 anyway? I was so sure that it would be i11...


3, 5, 7, 9. Makes sense to me.


But they could have stuck with prime numbers!


Imagine what is going to happen the moment ARM attacks desktop market share. That's what I am waiting for.


ARM isn't close to competitive, unfortunately.


Yes it is. Though the Intel part at $599 for 16 threads will likely be the better choice vs the 1800X.


True, but I'm curious to see what the rest of the Threadripper line shakes out to be. The $599 chip only has 28 PCIe lanes, which isn't enough to run two GPUs at full speed. In comparison, the $300 Ivy Bridge-E CPU has 40 lanes. Especially with their Zeppelin line, AMD's got a chance to shake up Intel's stagnant I/O situation.


Even modern GPUs don't give up a significant number of FPS in games when run at 8x or even in PCIe 2.0 mode. That's been known for a while. But anyone creating a workstation for high I/O around this would be advised to look elsewhere.





The Core i7-7820X announced in the article? :)


That is 8 core, 16 threads.


the 8 core 16 thread one


So, let's give credit when credit is due and call this the Intel Ryzen CPU :D


That's good. Finally we're moving forward with processors - probably thanks to AMD, again. My only hope is for them (both, either) to make Thunderbolt a standard feature on motherboards or ditch it completely.


Well, Intel recently (last week?) announced that Thunderbolt 3 would be licensed for free, and that they intend to integrate it (a controller, I assume) into their future CPUs.

I presume that this free licensing extends to AMD and vendors of the AMD platform, which could entice them to adopt it too.


Intel certainly seem to have come around to opening Thunderbolt up to wider adoption.

https://newsroom.intel.com/editorials/envision-world-thunder...


Good!

Something was obviously very wrong before when Microsoft left out Thunderbolt on high-end machines for "non-technical reasons".


Which I will thank Apple for.

Both actions are simply Intel reacting to AMD and Apple.


So does it support ECC like AMD? Otherwise not interested.


I'll bite: every time I see a CPU-related thread on HN there are a few people clamoring for ECC support. While I get why I'd want ECC on a high-availability server running critical tasks, I don't really feel a massive need for it on a workstation. I mean of course if it's given to me "for free" I'll gladly take it, but otherwise I'll prefer to trade it for more RAM or simply a cheaper build.

Why is ECC that much of a big deal for you? Maybe I'm lucky but I manage quite a few computers (at work and at home) and I haven't had a faulty RAM module in at least a year. And even if I do I run memtest to isolate the issue and then order a new module. An inconvenience of course, but pretty minor one IMO.

Do you also use redundant power supplies? I think in the past years I've had more issues with broken power supplies than RAM modules.


> I haven't had a faulty RAM module in at least a year

ECC isn't for physically broken RAM, it's for the prevention of data corruption caused by environmental bit-errors (e.g. cosmic-ray bitflips).

Memory density increases with RAM capacity - which means a higher potential for noise (and cosmic-rays...) to make one-off changes here-and-there.

I understand this now happens quite regularly, even on today's desktops ( https://stackoverflow.com/questions/2580933/cosmic-rays-what... ) - I guess we just don't observe it much because probably most RAM is occupied by non-executable data or otherwise-free memory - and if it's a desktop or laptop then you're probably rebooting it regularly so any corruption in system memory would be corrected too.


This Stack Overflow link is interesting, but most of the concern is over very theoretical issues. In practice a significant portion of humanity uses multiple non-ECC RAM devices every day, and yet most of us don't seem to experience widespread memory issues. I can't even remember the last time my desktop experienced a hard crash (well, actually I can: it was because of a faulty... graphics card).

I wish my phone fared that well, but I'm not sure RAM would be the first suspect for my general Android stability issues...


> most of the concern is over very theoretical issues

I've seen photo and other binary files become corrupted that were sitting on RAID drives. The RAID swears they're fine, the filesystem swears they're fine, both are checksummed so I believe them. The only possibility that I can see is that they were corrupted while being modified or transferred on non-ECC desktops connected to the RAID.

I'm not afraid of my computer crashing. I'm afraid of data I take great pains to preserve being silently, indeed undetectably, corrupted while in flight or in use. So that's why ECC is worth it to me.


Exactly!

In the past I had a flaky RAM module in a MacBook Pro and it was a real pain. Everything appeared to work just fine, but when stressing the RAM with a lot of virtual machines the host would crash. That was not the main issue, but it took some time to diagnose, as I was also running beta virtualisation software and was tempted to blame the change instead of the hardware.

Copying virtual machines from one disk to another did end up corrupting the data. That was painful to find out.


Almost exactly the same thing happened to me - marginal RAM in an old-style MacBook Pro. Drove me absolutely crazy.

People always talk about cosmic rays as what ECC is guarding against - not at all! It's shitty RAM, especially in laptops, when it's under stress... for example buffering files during a large copy... and you find out months later when it's too late to fix it. Not "theoretical" at all.


I'm curious: if storing lots of photos as .dng, .png or .jpg on ZFS without ECC, one presumably gets bit flips eventually. How does this affect the files? Do you just get artifacts in the photo? Or does the file become unreadable? If so, can you recover the file (with artifacts)?

I guess the answer boils down to how much non-recoverable but essential-for-reconstruction metadata there is in these file formats.


I had bit flips on a few JPGs and it rendered them useless. Luckily I had a backup of a backup that had them uncorrupted. I'm still trying to find a complete solution to this problem. Presumably the TIFF or BMP file formats are more robust against bit flips.

I'd been reading so much about it over the past year or so I got to wondering just how many times cosmic rays affect our brains and what kind of protections we're running up in our skulls.


Our brains evolved through a chaotic, organic process. We're all the time storing new data and even losing data (selective memory). I'm thinking there's no mitigation process. If anything the random environmental noise might play some role in consciousness.


Have you tried (ab)using AFL-fuzz for this? It can create JPGs out of literally thin air, so I wouldn't be surprised.

https://lcamtuf.blogspot.com/2014/11/pulling-jpegs-out-of-th...


Depends on which part of the file gets corrupted, and the issues between PNG and JPG are dramatically different. There are key bytes in the file (like segment size, start-of-segment markers, etc.) that, if corrupted, would dramatically mess up your image. If it's just in the compressed image data you'll just see some artifacts, and a JPEG already has plenty of compression artifacts anyhow...
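
(If you want to see this for yourself, a minimal sketch that flips one random bit in a copy of an image; the filenames are just placeholders.)

    import random
    # flip a single random bit in a copy of an image file (placeholder filenames)
    with open("photo.jpg", "rb") as f:
        data = bytearray(f.read())
    pos = random.randrange(len(data))
    data[pos] ^= 1 << random.randrange(8)        # flip one bit at a random byte offset
    with open("photo_bitflipped.jpg", "wb") as f:
        f.write(data)
    # a flip in the entropy-coded data usually shows up as visible artifacts;
    # a flip in a header or segment marker can make the file unreadable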


People don't realize how frequent it could be because most of the time you don't see any of the consequences.

Here's a "funny" consequence of bit flips: bit squatting.

It's about exploiting a bit flip before a DNS query: you register a domain that is one flipped bit away from the real one, and you wait for machines to wrongly contact you because they got the name wrong.

http://dinaburg.org/bitsquatting.html http://dinaburg.org/data/DC19_Dinaburg_Presentation.pdf http://www.youtube.com/watch?v=lZ8s1JwtNas

Just saw this was already posted below...
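
(A rough sketch of how one might enumerate bitsquat candidate domains; example.com is just an illustration, and this ignores case and internationalized-domain edge cases.)

    import string

    def bitsquat_candidates(domain):
        """Yield domains that differ from `domain` by a single flipped bit."""
        allowed = set(string.ascii_lowercase + string.digits + "-")
        for i, ch in enumerate(domain):
            if ch == ".":
                continue
            for bit in range(8):
                flipped = chr(ord(ch) ^ (1 << bit))
                if flipped != ch and flipped in allowed:
                    yield domain[:i] + flipped + domain[i + 1:]

    print(sorted(set(bitsquat_candidates("example.com"))))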


> experience widespread memory issues.

Because they rarely spread wide.

If you edit images or videos, maybe you detect small corruption in the image. If you use databases or do data analysis, there may be one number that is wrong, or some string has one byte of garbage. Sometimes, application may crash.

All this is very rare. It only matters if you need data integrity and do work where data has value.


It's not just about crashes, but also about (silent) data corruption.


The value of ECC for me isn't to provide redundancy or error correction; it's to let me know that the RAM (or parity bit) is going bad. Otherwise you are flying blind and injecting bit rot without being aware of it.

You write, "I haven't had a faulty RAM module in at least a year" - how do you know this without ECC?

Personally, I mostly value ECC for the sense of superiority it affords.


> Why is ECC that much of a big deal for you?

Here's one example: looking at Firefox crash data, a fairly large percentage of crashes are caused by bit flips that corrupt data structures (e.g. turning a null pointer that is null-checked into a non-null one that points off into memory that can't be read).

So what I really want is for everyone to default to ECC RAM. It would prevent issues like that to a large degree.


I'm sure all of you who love debating ECC will enjoy this defcon video: https://www.youtube.com/watch?v=aT7mnSstKGs "DEFCON 19: Bit-squatting: DNS Hijacking Without Exploitation (w speaker)"


The problem is, without ECC you won't know if you have marginal RAM. Basic memory tests can frequently pass for hours at a time, only to have a couple bits flip a few days later.

Plus, with tens of GB of RAM, you likely won't even notice, as the majority of that RAM is being used as disk cache, or application data. The best case (but likely the lowest probability) is that the bit flip happens in an executable page on an instruction that gets executed (vs the large number which won't get executed) and the application crashes.

If that happens you will never figure out why your application crashes, but if it happens enough, you will start looking for problems.


> If that happens you will never figure out why your application crashes, but if it happens enough, you will start looking for problems.

Ironically, I had this happen. In particular, Overwatch would crash constantly on my gaming rig starting around June. I reinstalled Windows, reverted to Windows 7, tried various video driver revisions, and NOTHING MADE SENSE.

Eventually I started pulling parts out of my system, and lo and behold, a single 4GB stick of RAM was bad and causing all my grief. If I had had ECC memory I would have known what the problem was right away, and replaced it without pulling my hair out first.


If you had had ECC you might not have noticed anything at all, because your system would have worked just fine other than incrementing an error counter somewhere in the kernel.

This depends to some extent on how many parts of that stick were affected. But in the best case the system would have simply recovered.



> I think in the past years I've had more issues with broken power supplies than RAM modules.

That you know about.

See, that's where ECC comes in.


If a bit flips in a lone workstation and no one is around to see it, does it make a bug?

More seriously though, in my experience faulty RAM is generally pretty easy to diagnose and leads to general system instability. I guess the worst-case scenario is generating corrupt data before the issue is diagnosed, but again, while I would be very wary of that on a database server or something similar, I've never found it to be a massive issue on a workstation (at least if you have decent backups, that is).

Maybe I've just been lucky so far. But given that the vast majority of consumer-grade computers don't come with ECC, and yet RAM issues are still relatively rare, I guess I'm not the only lucky one.


> If a bit flips in a lone workstation and noone is around to see it, does it make a bug?

No, bugs are - these days - considered to be mostly software issues; undetected hardware faults are not bugs in that sense, but they could lead to data corruption or, at a much higher level, wrong output.

If you don't care at all about the output of your computer (game playing, other recreational use) then not having ECC is fine, but if you do care about your results and you have multiple tens or even hundreds of GB of RAM in your machine, then having the option of ECC is useful.

Intel is just using the ECC thing as a way to justify the price difference between their Xeon product line and the consumer stuff.

If you ever have to deal with a filesystem that slowly got corrupted because of an undetected memory issue you'll be overnight transformed into an ECC advocate.

Keep in mind that those 'decent backups' were made by the machine you do not trust.

And god help you if your backups were incremental.


Another approach is to verify your output (preferably on another system). Good validation and test suites should be able to catch messed-up "output", along with many other non-ECC-related issues.

I guess it makes sense to have ECC RAM on the machine building your releases (I actually don't even have that at the moment but I wouldn't advocate that...) but for your dev machine does it really matter?

I mean, at this point it's really about a rather subjective perception of risk and particular use cases. In my situation I find that memory issues are very low on my list of "things that can go catastrophically wrong". Really the only thing I can think about is building a corrupt release on my non-ECC build server. But from experience I'm not exactly in the minority to do that either and yet I don't observe many such issues in the wild.


Verifying your output on another system requires another system, the cost of which handily outweighs the cost of having ECC if the CPU/chipset support it.

As for: "I guess it makes sense to have ECC RAM on the machine building your releases"

That's a very narrow use case; there are many more use cases than that one, and for a lot of them it makes good sense to have ECC: inputs to long-running processes, computations that have some kind of real-world value (bookkeeping, your thesis, experimental data subject to later verification, legal documents, and so on).

> but for your dev machine does it really matter?

Maybe not to you.

> I mean, at this point it's really about a rather subjective perception of risk and particular use cases.

No, it's about a thing that if adopted widely would allow us to check off one possible source of errors that would not meaningfully increase the cost of your average machine and would still be an option, nobody would be forced to use anything.

> In my situation I find that memory issues are very low on my list of "things that can go catastrophically wrong".

Good for you.

> Really the only thing I can think about is building a corrupt release on my non-ECC build server.

You are still thinking about just your own use-cases.

> But from experience I'm not exactly in the minority to do that either and yet I don't observe many such issues in the wild.

Likely you also have somewhere between 8 and 32 GB of RAM in your machine.

If I look at my servers which have been operating for years on end they do tend to accumulate corrected ECC errors. The only reason I know about it is because there is ECC in there to begin with. If those machines would be running without ECC I'd likely not even be aware of any issues. But maybe the machines or some application on them would have crashed (best possible option), or maybe some innocent bits of data would have been corrupted (second best). And at the far end of the spectrum, maybe we'd have to re-install a machine from a backup (not so good, downtime, extra work) or maybe it would have led to silent data corruption (worst case).

Now, servers are not workstations, but my workstation has exactly as much RAM as my servers and no ECC, which is highly annoying but single threaded performance of the various Intel CPUs is much better on the consumer systems than it is on the Xeons unless you want to be subject to highway robbery prices.

So for me having the ECC option on consumer hardware would be quite nice, and I suspect anybody else doing real work on their PCs would love that option too.


Yeah I see where you're coming from, I guess I have a different perspective because I never really considered using consumer-grade hardware for "pro" server use. But I suppose it makes sense if you don't want to pay the premium for a Xeon type build. I wouldn't be comfortable hosting a critical production database on non-ECC RAM for instance.

Going off on a tangent, this discussion made me wonder whether ECC memory is common on GPUs (after all, with GPGPU becoming more and more mainstream, what good is having ECC system RAM if your VRAM isn't ECC?).

Unsurprisingly it turns out that consumer-grade GPUs don't have ECC. However I stumbled upon this 2014 paper: "An investigation of the effects of hard and soft errors on graphics processing unit-accelerated molecular dynamics simulations"[0].

Now obviously it's a rather specific use case but I thought their conclusions were interesting:

>The size of the system that may be simulated by GPU-accelerated AMBER is limited by the amount of available GPU memory. As such, enabling ECC reduces the size of systems that may be simulated by approximately 10%. Enabling ECC also reduces simulation speed, resulting in greater opportunity for other sources of error such as disk failures in large filesystems, power glitches, and unexplained node failures to occur during the timeframe of a calculation.

>Finally, ECC events in RAM are exceedingly rare, requiring over 1000 testing hours to observe [7, 8]. The GPU-corrected error rate has not been successfully quantified by any study—previous attempts conducted over 10,000 h of testing without seeing a single ECC error event. Testing of GPUs for any form of soft error found that the error rate was primarily determined by the memory controller in the GPU and that the newer cards based on the GT200 chipset had a mean error rate of zero. However, the baseline value for the rate of ECC events in GPUs is unknown.

[0]http://www.rosswalker.co.uk/papers/2014_03_ECC_AMBER_Paper_1...


HBM GPUs all have ECC by default.


Use ECC for workstations to avoid crashes and to extend the lifespan of the system. You start to get more crashes as the years go by.

Silicon degrades over time under use.


> Why is ECC that much of a big deal for you?

Not the OP, but I twice spent weeks debugging random software crashes under high load that turned out to be triggered by faulty memory. Because it happened while I was working on locking infrastructure, I really had to figure out why it was crashing on my workstation and couldn't just be content that I couldn't reproduce it elsewhere.


It's entirely possible that a broken power supply might cause the ram to silently corrupt data, in which case, ECC would help.


This is Intel. None of the previous iX HEDT platforms supported ECC.

Intel being Intel also means that they remove features for cheap (i.e. <1000 $) parts, such as memory channels or PCIe lanes.


I would really hope that AMD enabling ECC on all parts would cause Intel to stop differentiating their product lines on something that should be available everywhere. ECC should be mandated except for the least critical uses (toys, video players, etc).


I can see no mention of it anywhere and it's a "consumer" chip (for Intel, ECC is a segmentation feature between consumer and server hardware), so probably not.


Meh, I bought a Ryzen 5 1600 for $199 and an ASUS B350M for $29 at Micro Center, and paired that with 16 GB of Crucial ECC DDR4-2400 for $149 (working on Ubuntu 16.04, confirmed and stress-tested)... so for $377 I have 12 threads @ 3.9GHz with ECC, on a platform that can go up to 64GB. Thanks Intel, but no.


That's... completely irrelevant to the sort of people who might be interested in this chip.

That's like saying, "I've got a double cheeseburger with curly fries for $1.99. Thanks Intel, but no."


It is extremely relevant. The feature set and performance target overlap; if you know how to read, you will notice that AnandTech includes the Core i7-7800X (a 6-core, 12-thread CPU) in the new processor table and has a final comparison against the Ryzen 7 1800X, which is the same chip as the Ryzen 5 1600 (with 2 cores disabled).


Can this be attributed to the Ryzen launch?


Probably, but more likely because of AMD's HEDT (high-end desktop) platform called Threadripper, which will have up to 16 cores (32 threads).

Before AMD announced Threadripper, Intel had only a 12-core chip on the roadmap for the X299 platform, and charged around $1700 for their 10-core chip. Now they will be charging $2000 for 18 cores.

Competition is such a nice thing. Glad that AMD is back in the CPU game. It can only be good for us customers.


I'm personally looking forward to the time when this competition drastically lowers the price of mid-range CPUs, for the benefit of the normal people who don't buy $2000 CPUs.

(Yes, I know that "normal people" don't even buy laptops anymore, let alone desktops. Please excuse my fantasy-world in which people buy desktop computers and even upgrade the amplifiers of their at least 7-piece stereo sound system)


The AMD Ryzen R7 1700 is a $320 8 core/16 thread processor. Intel's cheapest 8 core/16 thread processor is their i7-6900K which sells for $1049. Even their 6/12 i7-6850K is over $600.

IMO the Ryzen R7's have been a huge "mid-end" win for anyone doing any sort of multicore/CPU intensive work. Without competition, Intel's been gouging the market for the past few years.


>drastically lowers the price of mid-range CPUs

Well, I think the current mid-range Ryzens offer significant value, and I imagine OEMs will start including them in their popular models sooner rather than later.

6 cores at 3.6ghz with a 4.0ghz boost for $239.

https://www.amazon.com/AMD-Ryzen-1600X-Processor-YD160XBCAEW...

More than likely we won't see any kind of price war. Instead we'll see minor price fighting on a per-category basis: Intel's mid vs AMD's mid, etc. There's no race to the bottom with a duopoly.


Competition breeds excellence. :-)

EDIT: I was getting excited about the i7-7820X until I saw it has only 28 PCIe lanes. Talk about being pushed towards the way more expensive 7900X! My relatively cheaper 6850K has 40 lanes; I wonder what the thinking behind this is?


Someone could explain why is important the number of PCIe lanes ?


It limits what peripherals can be attached. For instance, a single graphics card uses 16 PCIe lanes, so having only 28 lanes supported by the processor makes it impossible to have two GPUs in SLI mode at full speed (OK, not impossible, because the motherboard chipset can add some lanes, but performance will suffer).

NVMe SSD drives also use some lanes (typically 4), and you might want two of them in RAID.
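
(For a sense of the arithmetic, with a hypothetical two-GPU, two-NVMe build:)

    # hypothetical build: desired CPU PCIe lanes vs. the 28 on offer
    gpus, lanes_per_gpu = 2, 16      # two GPUs at full x16
    ssds, lanes_per_ssd = 2, 4       # two NVMe drives at x4
    wanted = gpus * lanes_per_gpu + ssds * lanes_per_ssd
    print(wanted, "lanes wanted vs 28 available")   # 40 vs 28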


Numerous tests show that performance is not meaningfully impacted by running two GPUs on 8x PCIe lanes vs 16x lanes, especially when using PCIe 3.0.


It's a huge difference for minimum frame times at 4K if you intend on getting a smooth 120fps.


with current-gen GPUs

That could also change over the next few years.


What are the chances you will still be running the CPU and motherboard you buy today in a few years?


So, if you do not have plans for an NVMe SSD plus two GPUs in an SLI configuration, 16 PCIe lanes will be enough.


Is there any way to see it as something else?


Apparently, Intel has years of runway vs AMD. The chips are literally designed and sitting there, only to be released if AMD comes up with something. Intel isn't scrambling in any sense, just announcing stuff that is already designed and built. All AMD did was force Intel to release it earlier than their monopoly position was allowing.


This comment went as high as +4 before being downvoted to -1.


The downvote behavior is interesting. I simply mentioned that Intel has done an early release of a previously constructed part. Monopolies behave in predictable ways.


Intel has been selling hexa-channel DDR4 Xeons since 2015 to select customers.

For users like myself constrained by memory bandwidth I would prefer that they publicly started selling their Skylake-SP Purley platform. In some configurations they even include a 100Gbit/s photonic interconnect and an FPGA for Deep Learning acceleration.

I would gladly pay $2500-3500 for an 18-24 core Intel CPU with hexa-channel DDR4 and PCIe 4.0 (or simply more than 44 lanes of 3.0).


Out of curiosity, why "to select customers" only?

I'd suppose the feeling of exclusivity isn't much of a sales point to processor buyers.

If supply is constrained, it seems like demand could be similarly constrained by a price hike.

Do they get better feedback from these select customers? Better acceptance of eventual defects without bad PR?


> Out of curiosity, why "to select customers" only?

To justify extreme price differences so the 'select customers' can credibly claim this expensive stuff gives them an edge their competitors will not be able to easily match.

In an arms race arms that are supply constrained will fetch premium prices.


But to the parent's point, that's all the more reason to open it up and charge an even higher premium when other people come online and start a bidding war.


I wonder if the hardware at that scale isn't exactly stable - so to become a "select customer" comes with signing a gag-agreement not to publicly disclose stability problems - and I guess also requires the customer have suitably trained staff who can deal with any expected or unexpected problems - and software engineers who can write software that will fully utilise the processor's capabilities - much like how Sony took some PR flak for the PS3's Cell processor being difficult to program for given its atypical design.


AMD's Threadripper may be more to your liking then:

http://www.tomshardware.com/news/amd-threadripper-vega-pcie-...


Nice, 64 PCIe 3.0 lanes for all Threadripper SKUs. That would be enough for 7 GPUs at 8x PCIe 3.0 (60 of the 64 feed the PCIe and M.2 slots; 4 go to the chipset). Seems like it will be a nice CPU to use for machine learning.

Other sites also report that it will support 2 TB of RAM like the single-socket Naples/Epyc, but LRDIMM and official/validated ECC support were not mentioned in the stream.


Perfect for running modern JavaScript frameworks! /s


Isn't JS still mostly single threaded?


Hence the 'end sarcasm' tag: /s


It works on multiple levels!


but only one at a time :)


Very glad to see the clock speed didn't take a drop for the extra cores; however, still no ECC, which is disappointing to say the least.


So the ultimate question now is: how much will Threadripper cost?


My next home CPU will be an AMD Ryzen.


I've been running a 1800X for about a month. Great chip, lousy RAM support (hoping that BIOS announcement from last week turns out good).

I guess since I'm used to new high end GPUs being scarce for months after launch, I wasn't expecting availability to be so good. Additionally, I didn't expect the small aftermarket AM4 cooling selection.


Anecdote: I'm running a beta BIOS with the new AGESA version, and the announcement seems to be accurate.

Now able to reach 3333MHz on 2x16GB RAM which is specced to do 3200. Couldn't hit 2900Mhz before, it wouldn't even boot.


That's encouraging, as I also have 2x 16GB 3200MHz sticks and a system that won't boot if I don't run with the defaults.


The brackets are there, you just need to ask.


Before buying one of those fancy Coolermasters, I asked their website/store, and the AM4 bracket was out of stock. I ended up buying a Thermaltake, and if I change it, it will probably be watercooled.


Yeah, I went with the H110i AIO since during pre-order it was the only cooling solution in stock that the manufacturer confirmed as supported. Noctua also has AM4 versions of 3 of their coolers: http://noctua.at/en/noctua-presents-three-special-edition-am... (again, sold out when I pre-ordered; the D15 is otherwise a great cooler). The Crosshair VI Hero also has brackets for AM3 coolers, so with that specific board you have many options for cooling. It also seemed like the board with the most support, so that's what I went with, and it's been OK (mostly issues around RAM speed support, which is actively improving -- currently at 3200MHz for 32GB (4x8)). RAM speed makes a big performance difference for Ryzen especially, but support is decent enough now that I'm comfortable recommending it. Get RAM from the QVL list and you're good to go.


Really, Intel? I don't want 10+ cores just to get reasonable PCIe connectivity. This is just another strike against these parts (after the lack of ECC). I guess Intel is trying really hard to protect their server parts, but they continue to gimp the high-end desktop parts (as if the removal of multi-socket isn't enough).

I would really like to understand why Intel tries so hard not to make a desktop part for people willing to spend a little more to get something that isn't basically an i5 (limited memory channels, limited PCIe, smaller caches, etc.).


Do you mind me asking what you'd be using those PCIe lanes for? Their 8-core part is good for a couple of NVMe drives and a video card, which is quite reasonable. The only use for 44+ lanes I have in mind is a mining rig, but that's probably beyond reasonable and quite niche. No?


I'm sure they'll have a Xeon Silver for you soon.


Please put it in the next-gen MacBook to be announced in June. Jump to the head of the line, Apple. Remember your roots.


I am still getting a Ryzen build


Well, Intel still didn't show anything better than the 8-core Ryzen. Their processors cost more and require fancy motherboards which I'm not even sure I can buy in my city.


Sadly, I actually won't be buying Ryzen because of this announcement. Based on Ryzen/Skylake benchmarks, it looks like the i7-7820X will be a better deal: a 15-20% performance advantage (because of better IPC + faster clock) for only $100 extra. I honestly do not know how to consume more than 28 PCIe lanes...

Also, Ryzen seems to struggle a bit on Linux vs Intel. I have seen people complaining about its unwillingness to use the turbo frequency, and its UnixBench numbers are unimpressive, particularly execl throughput.


90s: CPU hertz. 2000s: RAM sizes. 2010s: CPU cores?


Yes, that's roughly correct. Even so, in the consumer market a CPU with much better single-threaded performance would outsell one with lower single-threaded performance but more cores. In the server market it is the opposite.


Good. Bring on more cores. I could use them.


High-Cost Computing?


But this one goes to....9..


Why not name it i18?


I'm sick of having 0 to 1 choices in so many things. If a monopoly is bad, then what's the next-worst number of companies? Two. Isn't it the government's job to enhance the "free" market by forcing competition through forced open on-boarding, or IP sharing, or breaking up, or really anything effective to lubricate the wheels of capitalism?


Actually, some counterintuitive results from Industrial Organization (the branch of economics that studies supply-side market structure, amongst other things) and Game Theory indicate that competition might be greater between oligopolist firms than between those in situations of perfect competition, mainly because by "knocking out" an oligopolistic competitor you gain a big chunk of market share, and thus sales volume and economies of scale, whereas "knocking out" an anonymous perfect competitor nets you (ideally) an infinitesimal additional market share shared with an infinite number of other competitors.


The first case is good for the company because it ends up in a monopoly while the people suffer. IMO competition and the free market will not converge to the "best for the people/population" optimum in all cases; many cases will converge to an outcome that is good for the companies and very bad for the people.


If my competitor offers a widget on Amazon for $X and I offer it on Amazon for $X-1, I will capture ~100% of that market and the competitor will capture ~0%.

The internet's winner-take-all effect has both benefits and drawbacks for us consumers.


This is the economically expected outcome iff the lower-priced firm has no production capacity constraints and the products are undifferentiated commodities to the point that consumers have no decision to make other than price.


For years now we've had a choice of one. It's moving again. There's also a high bar to entry.


Here are some ways I can think of that would allow competition without splitting the company up: forced IP sharing, incentivised loans to fund new competitors, IP sunsetting, fab sharing, etc. Everyone dislikes government intervention in business, yet something must be done, so I'd like to see the current evil vs. the calculated result of the least evil of the government interventions, and vote for one. If monopolies are illegal, and this is a monopoly...


I don't see how it's a monopoly: AMD's x64 chips are competent, and given that Windows now fully supports ARM it means that for an OEM working on a greenfield project they're free to explore non-Intel options entirely.

Intel has a monopoly in microprocessors today the same way that Microsoft Windows is an OS monopoly today: it sort-of is, but it isn't holding anyone back either.


Monopoly does not mean 100% market share. It means they control the market, and 95% is generally plenty to hit that point, because economies of scale generally prevent anyone else from staying competitive.


But presently Intel and Microsoft are not being anti-competitive, despite their near-monopoly market share - on the contrary, competition is thriving.


The huge Intel price drop due to AMD Ryzen clearly suggests they were not competitive (35% on the Core i7-6700K): https://www.digitaltrends.com/computing/intel-cpu-prices-dro...

Further, Intel's market share directly links to their R&D advantage and process advantage. AMD can keep them honest every few years, but they can't keep up without a lot more market share and Intel has an insane war chest which they use to maintain that market share.


18-core Skylake-X is a luxury good: people will buy it just because it has 2 cores more than the ThreadRipper...


Or, you know, 4x the AVX throughput, stronger single threaded performance, and far fewer performance gotchas.

Interested in using BMI2 for bit twiddling because you'd like to efficiently manipulate bit matrices? PDEP has a reciprocal throughput of 1 on Skylake, or 18 on Ryzen. Guess it's time to make the tough choice between the top end Threadripper and a Core i3 6320.
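
As a minimal (hypothetical) sketch of that kind of BMI2 bit twiddling -- assuming a compiler with -mbmi2 -- here's the classic bit-interleave via PDEP; on a Zen 1 part those _pdep_u64 calls fall back to slow microcode:

    #include <immintrin.h>
    #include <stdint.h>

    /* Interleave the low 32 bits of x and y into a 64-bit Morton code.
       _pdep_u64 scatters the source bits into the positions selected by
       the mask: a single fast uop on Skylake, microcoded on Zen 1. */
    uint64_t morton2d(uint32_t x, uint32_t y) {
        return _pdep_u64(x, 0x5555555555555555ULL)
             | _pdep_u64(y, 0xAAAAAAAAAAAAAAAAULL);
    }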

Intel has positioned these well if the Ryzen price tag rumors are correct. If you're building a workstation with 16 cores, $1k for Ryzen or $1.7k for Skylake is not a straightforward decision.

If Ryzen is more than that, I don't see it taking a big bite out of the market. Which isn't surprising, as Intel did just halve the margins on their enthusiast parts...


"If Ryzen is more than that, I don't see it taking a big bite out of the market. Which isn't surprising, as Intel did just halve the margins on their enthusiast parts..."

Either it is more than that and it will take a huge chunk of the market, or Intel simply reduced margins out of the goodness of their heart.


I don't understand what you're trying to say. My point was that a $1k Threadripper would have absolutely destroyed Intel's 2016 lineup, but it'll merely be competitive with what was announced today.

If the top end SKU is more than $1k then I don't see it taking much of the market, due to the factors in my original post, factoring in the total cost of a machine and inertia greatly favoring Intel.


I was trying to say that AMD did threaten Intel with market share, and they countered that by lowering prices.

As for the rest of your post, it depends. HEDT is diverse. I was looking for a new CPU for my hobby project, 8 - 16 cores, still undecided. I have literally 0 FPU needs, but will take any integer power there is.

I also pay for electricity, so 65W AMD vs 140W (at least for the 6-core) Intel makes my decision very easy.

You also have to consider that AMD HEDT is announced and arriving. Intel's response is all marketing slides right now, full of TBDs. They are also misleading people into thinking that the chunk of cache they moved from L3 to L2 will magically be all IPC gains.

I currently own more Intel than AMD machines, but moving forward my TCO says AMD is the clear winner.

I do hope Intel will come back, but realistically, they are still overclocking Sandy Bridge. It may take them several years to deliver a new architecture.


Well, you can always argue for the other side as well: ECC support, more PCIe lanes, and spending the $700-800 difference on RAM, SSDs, etc.

Personally, I would choose Ryzen.


AMD's Epyc will have 32 cores/64 threads.


Or problems which can actually be scaled to many cores. There is still a wide range of problems which profit from parallelism but aren't suited for GPU computing.


Then why aren't you using Xeon E5 v4 already? Dual-socket boards are pretty cheap...


The secret is in combining the best of both, but that makes for difficult software-writing...


When downvoting: please explain your arguments...



