I'm really curious if AMD can overtake Intel in single-threaded performance. They already have the lead in number of cores (per socket) and in multi-threaded benchmarks like Cinebench (when using the same number of cores and clocks).
At the moment Intel has a slight edge in IPC and quite a bit better clocks.
If they can close the IPC gap, or even overtake Intel, and GlobalFoundries' 7nm process really can clock around 5GHz, this will be an interesting matchup.
Q17: Does the first generation of 7LP target higher frequency clocks than 14LPP?
GP: Definitely. It is a big performance boost - we quoted around 40%. I don't know how that exactly will translate into frequency, but I would guess that it should be able to get up in the 5GHz range, I would expect.
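As a rough sanity check on that (my own back-of-the-envelope, not from the interview; the 14LPP baseline clock is an assumption):

    # Back-of-the-envelope only: the ~40% is a blended performance figure and
    # the 3.5GHz baseline for 14LPP parts is my assumption, not from the interview.
    baseline_ghz = 3.5        # assumed typical 14LPP clock ceiling
    process_gain = 0.40       # "around 40%" quoted for 7LP over 14LPP

    implied_ghz = baseline_ghz * (1 + process_gain)
    print(f"If the full gain went to frequency: {implied_ghz:.1f} GHz")
    # ~4.9 GHz, i.e. "in the 5GHz range"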
I think the IPC gap will become an interesting battle in the near future, especially as gains that came from aggressive speculation have proven to introduce security holes (take your pick of Meltdown, Spectre variant n,...).
Anandtech did a benchmark, but it had a flaw in the way they measured time. Many other benchmarks are run on unpatched OS versions to preserve comparability. It's not easy to judge when they don't explicitly investigate that effect.
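If you want to sanity-check a benchmark setup yourself, reasonably recent Linux kernels expose the mitigation status under sysfs. A minimal sketch (Linux-only, and it assumes the kernel is new enough to have the vulnerabilities directory at all):

    # Print Spectre/Meltdown mitigation status so benchmark results can at
    # least be labelled "patched" vs "unpatched".
    from pathlib import Path

    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    if vuln_dir.is_dir():
        for entry in sorted(vuln_dir.iterdir()):
            print(f"{entry.name}: {entry.read_text().strip()}")
    else:
        print("No vulnerabilities directory; kernel too old to report mitigations.")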
Which is the main OS running on these CPUs once you count servers. Granted, that isn't the consumer CPU segment we care about, but the work on servers actually trickles down. It isn't some minor group, and AMD and Intel are very much concerned about how their CPUs run under Linux.
To be fair, it's only recently that they released the Windows version of the OpenBenchmarking software, so this Linux-only side of things will change for sure. But yes.
The IPC gap is directly related to their smaller cores. If you want to get more work done in a cycle you need to spend transistors to do it. You're asking them to have their cake and eat it too, basically. That's not likely.
Would you have a particular reference for this? I have read that the Apple ARM chips are more power efficient than comparable ARM chips powering Android devices.
The SD835 was 72mm^2[1], the A10 was 125mm^2[2]. The A11 and SD845 are actually very similar in die area[3]. However, Qualcomm integrates their modem on the SoC and Apple does not, and Apple's CPU cores are still significantly larger, as can be seen from die shots.
Phone SoC makers don't really publish TDPs and there is a lack of good data about it, but Apple clearly has high peak power draw (and very good idle efficiency). Battery life and thermal throttling observed under a sustained heavy workload is probably the best proxy for measuring SoC power. For example, see GFXBench battery life/throttling results in Anandtech's review[4] of the iPhone 7/plus. Much higher power consumption (and performance, but this is resolution-dependent because it's a graphics benchmark) than the competition.
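The arithmetic behind that proxy is simple enough; a hedged sketch (the battery capacity and runtime below are placeholder numbers, not Anandtech's measurements):

    # Average platform power ~= battery energy / runtime under a sustained
    # benchmark. The numbers here are illustrative placeholders only.
    battery_wh = 7.5      # assumed battery capacity in watt-hours
    runtime_h = 2.5       # assumed GFXBench rundown time in hours

    avg_power_w = battery_wh / runtime_h
    print(f"Average platform power during the run: {avg_power_w:.1f} W")
    # Screen, radios, etc. are included, so this is an upper bound on SoC power.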
They idle really well. In particular the software layer around suspend is really first rate when compared with the Linux BSPs from vendors, which tend to be somewhere between glitchy and awful (c.f. all those times when your off-brand Android phone turned red hot in your pocket, or the fact that a RPi is running a "mobile" SoC that never manages to draw less than 3W).
The power under load is quite high, anecdotally. No, I don't have numbers.
The Broadcom SoC used in the Raspberry Pi hasn't aged terribly well. There is a pretty good thread comparing the various boards out there on the Armbian forum[1]; in it they discuss how much power boards use with different parts enabled. Some Allwinner boards (like the PC2) can use just 1W at idle without disabling anything. Further savings are possible if you disable gigabit ethernet, the GPU, etc. More info is in the 2nd link!
Oh sure. It was more a point about software polish. All these SoCs have hardware support for clock gating, separable power domains, deep idle states, etc...
But in general, making all that junk work is a huge rat's nest of complexity. And in practice, except for a tiny handful of devices from top-tier manufacturers, it just doesn't. Maybe it works "most" of the time. Maybe there are tuning tricks you can use that get the power down but break two or three devices in crazy ways.
But Apple has the smarts and resources to ship a power-hungry part and guarantee that it sits at a clean idle reliably. And Qualcomm isn't far behind as long as they have an integrator like Google or Samsung breathing down their neck. But all the other junk? Yeah, junk.
All parties that run Linux on these devices should be forced to mainline their kernel patches. The patchwork of shitty BSPs that both Google and Samsung have created was shipped, resulting in hundreds of millions of insecure, poorly supported devices that are actively used.
Look at what Armbian has done with these bespoke chips in spite of limited resources. Had Google and Samsung put in a minor amount of effort, akin to what Sony has done, the Android landscape would have many more users running fully patched, slightly less buggy software today.
But isn't it easier for AMD to close the gap thanks to their multiple-dies-on-a-package design, instead of a large monolithic design like Intel's? If they make the cores larger, the cost of the entire CPU doesn't increase as much as Intel's does.
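The intuition can be made concrete with a toy yield model (this is just the textbook Poisson approximation; the defect density and die areas are made-up numbers, not anything AMD or Intel publish):

    # Toy Poisson yield model: yield = exp(-defect_density * die_area).
    import math

    defect_density = 0.2             # defects per cm^2 (assumed)
    small_die_cm2 = 2.0              # one chiplet (assumed)
    big_die_cm2 = 4 * small_die_cm2  # monolithic die with the same total area

    yield_small = math.exp(-defect_density * small_die_cm2)
    yield_big = math.exp(-defect_density * big_die_cm2)

    # Cost per *good* die scales roughly with area / yield.
    cost_chiplets = 4 * small_die_cm2 / yield_small
    cost_monolith = big_die_cm2 / yield_big
    print(f"Yield per chiplet: {yield_small:.2f}, monolithic yield: {yield_big:.2f}")
    print(f"Relative silicon cost, 4 chiplets vs monolith: {cost_chiplets:.1f} vs {cost_monolith:.1f}")

Growing each chiplet a bit moves the small-die numbers only slightly, while the same growth on one big die compounds, which is the cost asymmetry being described.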
Even though Zen+ (a cache optimization) is probably not as big an improvement over Zen as Zen 2 ("improves Zen in multiple dimensions") will be, the single-threaded gap did shrink quite a bit:
To my knowledge the achievable IPC gains for a single thread are limited. Therefore it can be assumed that the other players will catch up in terms of IPC while the market leader gets stuck with only a tiny advantage.
I keep wondering if there are some gains left by dropping the x86 legacy baggage, e.g. by using RISC-V or ARM64. It seems that there should be some loss due to x86 weirdness and past compromises.
What I read was: the instruction decoder would become simpler, but the instruction decoder is only a small part. Most rare instructions are implemented via microcode anyway, so silicon area is not hit too hard. x86 is somewhat efficient due to its compact encoding compared to IA-64, for example. There is surprisingly little to gain.
Maybe somebody who really knows the details can chime in :)
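In the meantime, one way to see the code-density point (a crude illustration with a handful of hand-picked instruction lengths, not a real measurement):

    # x86-64 uses variable-length encodings (often 1-5 bytes for common ops),
    # while AArch64 is a fixed 4 bytes per instruction. The mix below is
    # hand-picked purely for illustration.
    x86_lengths = {
        "xor eax, eax": 2,
        "mov eax, imm32": 5,
        "add rax, rbx": 3,
        "ret": 1,
    }
    arm64_length = 4  # every AArch64 instruction is 4 bytes

    avg_x86 = sum(x86_lengths.values()) / len(x86_lengths)
    print(f"Average x86-64 length in this toy mix: {avg_x86:.2f} bytes")
    print(f"AArch64: {arm64_length} bytes per instruction")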
The peeps building the Mill CPU are claiming they can do 10x the performance of traditional (x86) designs. IIRC, in an early talk they explained this is mostly by bringing IPC up to the level of high-IPC microcontroller architectures. They do VLIW, which can do a lot of things more efficiently if your compiler can handle it (Rust should be able to pull a lot of performance out of VLIW, IIRC).
Basically, though the Mill is also belt-based (think a stack with a size limit) rather than register-based, and compilers nowadays are a bit better at VLIW than when IA-64 was introduced. They also have a shitload of security add-ons that would mitigate most modern exploits, and other nice-to-haves.
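A rough way to picture the belt (my own toy model of the concept as publicly described, not the actual Mill semantics):

    # Toy "belt": results are dropped onto a fixed-length queue and referenced
    # by position; old values simply fall off the end.
    from collections import deque

    class Belt:
        def __init__(self, length=8):
            self.slots = deque(maxlen=length)  # oldest entries drop automatically

        def drop(self, value):
            # An operation "drops" its result onto the front of the belt.
            self.slots.appendleft(value)

        def read(self, pos):
            # Operands are named by belt position instead of register number.
            return self.slots[pos]

    belt = Belt(length=4)
    belt.drop(10)        # result of some op
    belt.drop(32)        # result of the next op
    print(belt.read(0))  # 32: the most recent result
    print(belt.read(1))  # 10: the one before it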
I don't want AMD to overtake intel on single-core performance. There are only a handful of things that a chip manufacturer can be best at. If AMD is ahead on per-core and total number of cores, then they can and will demand the premium prices intel does today. I like AMD because they are the underdog who must keep prices low to keep my business.
AMD taking the lead could also make them complacent when it comes to the more annoying things like DRM and privacy. I want them to remain the lean company. I don't want them investing in all the chaotic side projects that intel chips participate in. AMD needs to stick to number crunching and not allow their CPUs to become gateways for other tasks. They need to stay as they are now.
Lastly, atm AMD and intel sell very different chips. They do different things well. Those who pick one or the other do so for clear reasons. If they get too close on per-core performance we risk endless brand debates. Two tribes will emerge. Wagons will circle and fanboys appear. Competition betters the breed, but with only two players on the field we all know what will happen.
Your post is strange in how heavily it reflects your values instead of what's best for AMD or consumers. There's nothing so great about the current status quo that we should actively discourage change or real competition. In the unlikely event AMD were ever to supplant Intel (in market share), it would just force Intel into the position AMD is in now. OH NO! Your business isn't so valuable that it's an incentive for AMD to actively try to stay in second place.
We want competition, competition pushes progress. Maintaining the status quo, not so much. And if your biggest argument against AMD upsetting the status quo is you'll have to put a little more effort into shopping, well that's not really a problem for most of us...
> Your post is strange in how heavily it reflects your values instead of what's best for AMD or consumers.
> OH NO! Your business isn't so valuable that it's an incentive for AMD to actively try to stay in second place.
Not every post is about what's strategically best for a company. They very clearly stated it's what they personally wanted, and why.
What's the point of ripping into someone because they've actually expressed desire for a specific scenario? If you think they won't actually get what they want from the scenario or action they've described, just point out how you don't think it will end up that way.
Econ 101: Competition is good.
Econ 102: Unhealthy competition is bad.
Econ 201: The wrong kind of healthy competition is also bad.
Where only two competitors sell too similar a product, two dogs chasing the same rabbit, market forces will only amplify this lack of diversity. I want a range of core/clock options at a variety of price points. If AMD and Intel become too similar at close speeds, I fear the inevitable race will reduce my options to a binary choice between two very similar product lines. This is a recognized real-world phenomenon.
The most likely cause of Intel's lagging is a wall of cost/benefit to being both designer and fab. This hit AMD earlier by being smaller (and arguably, with a different management style, AMD could have held on to their fabs longer), but each node only gets more expensive and harder to keep busy with orders, which in turn means a greater need for external customers to fund R&D.
Intel had a reprieve of almost a decade with a weak AMD that let them coast along with the desktop monopoly and extract a yawn-worthy 5-10% improvement per generation, but now that they have the incentive to compete again they're struggling to get their house in order - the chip design hasn't had a major refresh in a long time and they have been very late with their version of 10nm.
So they're really at a crossroads now where they can't rely on old tactics and their long-term prospects are in serious doubt, since they didn't position themselves to counter the efficient small-core/big-package architecture that AMD is pushing with the Zen chips. For now, they can still put out competitive flagship product launches simply by binning extremely expensive parts and running them hot (e.g. compare i9 and Threadripper), but that advantage might erode within one more generation.
Intel's biggest mistake over the past 10 years was not opening its foundry services sooner; now its custom foundry services are nearly irrelevant outside of a few niche markets.
Mobile SoC design and being answerable to a huge number of very large customers forced fabs to evolve their processes in a way where market forces and customer demand drive requirements rather than office politics.
Intel still has an edge in many parts of the manufacturing process, but as far as lithography goes, they've lost this round.
Luckily for them their 10nm was much more ambitious than any of the 7nm designs and they are facing problems that other fabs will not face until their 5nm and beyond processes.
Intel is also fighting a much tougher fight now. TSMC was always good (better than AMD/GloFo), but now you also have the Alliance (IBM/Samsung/GloFo), which combines resources on process R&D.
GloFo botched their 20/22nm process and had to scrap it completely, and if they couldn't have licensed Samsung's process and called it 14nm, AMD's Zen would likely never have materialized because of the WSA.
And this is essentially the gist of it: GloFo/Samsung/IBM are essentially a united front, TSMC is as capable as ever, and what all four of these (and more) share is that they answer to external customers. So if something doesn't work, there is much bigger pressure to drop it and find something that does, instead of sinking more and more cost into it because of internal politics.
I'm not sure that's true. Intel has managed to have a higher performance process than their competitors in part by accepting more restrictive design rules. Because their architects can work hand-in-glove with their process engineers as a new node comes up they can have two-way feedback on this and to some extent design their process around their architecture's requirements. But it's very hard for third parties to work with their main process. They do have a separate relaxed rules process for use when acting as a foundry.
Pursuing the foundry route seriously would have meant Intel giving up one of its largest advantages.
> Luckily for them their 10nm was much more ambitious than any of the 7nm designs and they are facing problems that other fabs will not face until their 5nm and beyond processes.
I've always wondered about this. What's to keep the hard-won experience and knowledge of how to do cobalt from escaping into the wild and being adopted at a much higher rate by their competitors?
It goes both ways, I guess, and some experience from working at smaller scales can also flow back to Intel, but I would imagine the gains from a leg up are greater with the harder process.
Sure, there are IP laws in place, but just not having to waste months (or manpower on parallel research) on paths where you can get a hint ("path B is a dead end, don't devote much to it") seems like a gain in itself.
My ex is a process improvement engineer at Intel. She told me a story about a considerably higher failure rate for silicon that was manufactured in the northern part of a room vs. the southern part; what's more odd is that it only affected some of the products that shared this room in their pipeline.
The cause was that the water pipes under that section would affect that part of the process because of mechanical vibration. The reason it didn't impact all processes was that the pipes were tied to the cooling system, and it only had an effect when other processes put stress on it.
These vibrations were apparently so small that they couldn't measure the difference; they identified the cause through a process of elimination.
This is a classic story of practical vs. theoretical engineering: the secret sauce isn't the recipe, it's the cook.
Think about it this way: if it were so easy to recreate a process through reverse engineering, a few Hot Chips talks, and poaching, Intel wouldn't be in this mess and GloFo wouldn't have had to scrap their 20nm process.
When you're almost at the level of moving individual atoms around, the wing flaps of a butterfly start to really matter.
We're well past the time when you had Disney animators draw transistors on sheets and copper masks were cut by hand. It wouldn't surprise me if there are no-beans days at foundries these days, since a fart across the building might increase your defect rate by 3%.
> Think about it this way: if it were so easy to recreate a process through reverse engineering, a few Hot Chips talks, and poaching, Intel wouldn't be in this mess and GloFo wouldn't have had to scrap their 20nm process.
That's what I'm talking about. Who's to say there aren't a few gotchas like this in the new cobalt processing, and that's not exactly something that can be protected under IP law (or even necessarily by NDA, at least provably).
How much time does it save to have the second company attempting it get a hint of "hey, the new process is much more susceptible to problems of X,Y and Z than you would think, and you can save yourself months of headaches by being extra vigilant with respect to those from the beginning." ?
The benefit of the lead time you get as the first really needs to pay off, since the followers almost always have it easier. It's not hard to imagine that sometimes the pain and cost of being first is actually higher than the benefit and profit for that time period.
Don't know; from my limited exposure I would be really surprised if Intel and another foundry encountered the same problem at any level more granular than "issues with cobalt", since there are so many factors that impact the process.
Intel and GloFo produce completely different silicon, and it becomes more and more dissimilar with each step of the process. So many factors impact this: from where your raw materials come from, and how long the path is between step 23 and step 27 in the pipeline, to what latitude your fab is located at and which way it faces.
The example I gave in my previous comment was simplified; the actual root cause was much weirder and included a condition where the flow rate had to reach X and the temperature had to hit exactly Y for thermal expansion to compress the insulation enough for it to matter. If the temperature was a tad too low it wouldn't impact anything, and if the flow rate went above the next level it wouldn't have an impact either, because that would drop the temperature.
Intel's whole conflict-free campaign was a marketing spin on a real problem, and a way to benefit from the extra cost of materials; that extra cost wasn't because of children in African mines.
Intel needs so much control over the raw materials that they issue orders on what fuel, hydraulic fluid, and lubricants can be used to mine their dirt, in order to prevent or at least reduce contamination.
Heck, I'm sure that if one were to read whatever T&Cs they attach to their supply contracts, the metal composition of the pickaxes would be specified.
They survey and sample everything and essentially tell their supplier: OK, you can mine this specific area; get rid of everything in the top 1m, and we don't want anything below 10m.
Like honestly, at this point I think the only way for GloFo to hit exactly the same problem as Intel is if we live in a simulation.
They will, however, have their own problems with leakage, bonding, crystallization, and whatever else can go wrong with this cobalt insanity, but they'll likely have completely different root causes and different solutions.
Back in the early days, when Silicon Valley actually made silicon, they realized that batches would be affected by the local farms, or by someone not washing their hands after using the bathroom.
Aren't all manufacturers dependent on ASML's stuff, anyway? As far as I know, none of the big guys are developing their own EUV tech. Where's the edge?
Got told once* that they have specific teams working with Intel, etc., under pretty strict rules: if you ever work for one of those teams you can't ever work for any of the others.
So if I got it right, ASML develops their EUV tech etc., and then those teams build things on top of that with Intel, TSMC and so on.
Lithography is just one part of the manufacturing process, and there are many ways to apply the same tech.
Also the development as mentioned below is completely segregated between teams.
Last time we were vendor shopping we considered Intel and had an in-person meeting. I'm not sure why they bothered turning up, as they were nowhere near competitive. It was pretty clear they were not interested in anything that involved any work from them. If only someone else could offer EMIB technology...
>The most likely cause of Intel's lagging is a wall of cost/benefit to being both designer and fab. This hit AMD earlier by being smaller (and arguably, with a different management style, AMD could have held on to their fabs longer), but each node only gets more expensive and harder to keep busy with orders, which in turn means a greater need for external customers to fund R&D.
I have been struggling to explain this better, and those words describe the exact problem. Scale.
Apple alone ships more CPU units than Intel. And all of that is going to TSMC.
Just a reminder: the "original" original schedule was 10nm in 2016, 7nm in 2018, and 5nm in 2020. After the delay with 14nm, everything was pushed one year later. 2017 came and went and we didn't get 10nm; now in 2018 we still won't have it, and as pointed out in various investor conference calls, 10nm is now scheduled for 2019.
They "were" miles ahead, but lack of competition and incentives meant 10nm didn't work out. They have been milking whatever they have for far too long. And waited a little too long to decide to enter the GPU market.
This misses the point of where Intel is going. Intel does not see the PC market as a growth sector as a whole. It may grow for individual companies, like AMD, as they take a larger part of the share; that doesn't mean the PC market as a whole is growing. It's actually shrinking.
Intel recognizes this and is focusing on data centric use-cases like supercomputers and clouds.
For example, Intel is leading "Pathforward" with the US Government to develop very high-end solutions to large-scale problems; https://www.ecpannualmeeting.com/
In summary, Intel doesn't care that much about the PC sector because it's shrinking as a whole.
Intel has already given a reason for the lagging: trying to improve density by 2.7x instead of 2.4x.
Other foundries were able to improve density by 2.7x for their 7nm nodes compared to their 14/16nm nodes, so I'm not sure what exactly Intel is doing there.
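For context on what those density factors mean in linear dimensions (plain arithmetic, nothing vendor-specific):

    # A density improvement of k means feature pitch shrinks by roughly 1/sqrt(k).
    import math

    for density_gain in (2.4, 2.7):
        linear_shrink = 1 / math.sqrt(density_gain)
        print(f"{density_gain}x density -> ~{linear_shrink:.2f}x linear dimensions "
              f"({(1 - linear_shrink) * 100:.0f}% tighter pitch)")

The gap between 2.4x and 2.7x looks modest in linear terms (roughly 0.65x vs 0.61x), but at these scales even a few percent of extra shrink can force new materials and patterning steps.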
As feature sizes shrink, cobalt starts to look like a better choice than copper for implementing wires, for reasons of electromigration and mean free path. Intel wanted to get a jump on this for their 10nm process and so decided to switch over to cobalt before TSMC, GlobalFoundries, etc. But they've been having huge problems getting it to work, and 10nm has slipped from 2017 to at least 2019, and their process lead is now essentially gone.
They've actually released one chip on 10nm but it's performing much worse than the equivalent 14nm chip so it needs a lot more work.
I'm assuming others are also having difficulty with cobalt, but I'm guessing that once their final new cobalt chip is out, it won't be hard for others to figure out what they did unless Intel can patent it somehow.
To be fair to Intel, ‘nm’ hasn't really meant anything for years now (it's today's ‘MHz’), but their 10nm process has been famously delayed; they were originally planning a launch for 2016, and it's only in the last couple of months that products have started to trickle out.
"All exponentials are s-curves" - basically you can be years ahead until it takes years to make any sort of gain. Intel fab technology has been years ahead of the rest of the industry for a while, but as we reach the flattening off point of power, performance, and cost gains, the difference in capabilities between a chip that is years behind the leader gets to be less and less.
Google in search, Intel in processor fabs, ARM in power efficiency, etc. During the time of fast growth it is a huge differentiator, and once that fast growth has been exhausted you need a different differentiator.
Yes, Intel has fallen behind already. Their 10nm process node, a supposed equivalent to TSMC's, GloFo's, and Samsung's 7nm nodes, won't be ready for mass production until 2019. Meanwhile, all the other foundries will have their 7nm nodes ready for mass production this year.
Also, it's rumored that Intel's 10nm will actually be worse for clock speeds than its 14nm is.
Intel could ultimately try to make their processors at the other foundries until they get their tech sorted out. If I'm not wrong, Samsung sells phones with Qualcomm processors, and consumers don't care much about that.
It's a good question. I've thought the same thing. Historically, Intel's competitive advantage has been in manufacturing. As that slows down due to the general slowdown of Moore's law or just because they don't stay on top of their game, it seems likely that AMD, ARM and Nvidia will eat more and more of Intel's lunch.
So there was a rumor last year that there might be a dual-socket Threadripper/EPYC mobo, and that the CPUs would be capable of it, but now that I search for it, it seems it was only a 'some guy on Reddit says' style rumor. Despite that, does anyone know if there is any chance of that?
That's free lunch thinking. A platform with almost the same specs as Epyc is going to cost almost as much as Epyc and thus there's not really any point.
How is the quality and stability of AMD chips lately? Although I've always purchased AMD chips every time I upgraded my desktop, the segfault problems reported with Ryzen made me cautious about further upgrades (I'm on Piledriver now).
Are the latest Zen/Zen+ CPUs stable enough now, or should I wait for Zen 2?
Well, if Threadripper 2 can do 32 cores at ~4GHz at 250W, is there a good reason we can't have an Epyc SKU that does the same? (So that we can take two of these and put them in a machine.)
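The arithmetic at least looks plausible (simple division on the numbers in the question above):

    # What does 32 cores at 250W imply per core, and what would two sockets draw?
    cores = 32
    package_tdp_w = 250

    per_core_w = package_tdp_w / cores
    dual_socket_w = 2 * package_tdp_w
    print(f"~{per_core_w:.1f} W per core, ~{dual_socket_w} W for two sockets")
    # ~7.8 W/core; 500W of CPU in a 2P box is a lot, but not unheard of in servers.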
https://www.anandtech.com/show/12438/the-future-of-silicon-a...