Hacker News
Intel can’t supply 14nm Xeons, HPE recommends AMD Epyc (semiaccurate.com)
328 points by davidgerard on Sept 7, 2018 | 125 comments



I really think Intel lost its way back in the Barrett era and never managed to find it again (and I think AMD is significantly exceeding its historical trend line right now), but despite all that I am dubious about SemiAccurate.

Looking back at other submissions from that site ( https://news.ycombinator.com/from?site=semiaccurate.com ) it appears many HN readers are too. Based (only) on the submissions, it looks a bit like an AMD fan site.


Regarding the HN comments, it's understandable that everyone is cheering for AMD considering the Intel monopoly since the Athlon XP era.


There's a cheap and timeless emotional thrill in supporting a rising underdog, but if Intel declines too far, that thrill will not be cheap, but very expensive for most of us here. AMD will gladly collect the same quasi-monopoly margins that Intel enjoyed for the past decade, should Intel become enough of a hot mess to permit that. So be a bit careful what you wish for.


I like how Intel has been in exactly the position you describe for the past 15 years, but the moment AMD starts claiming back market-share, let's be worried about them!

No, I am still worried about Intel plotting behind the scenes to undermine AMD. AMD is nowhere near a position to abuse its power, and it's unlikely to be for a long time, so let's not be alarmist here.


We're still nowhere close to Intel "declining too far".

AMD used to be quite popular in the Athlon era, especially when Intel screwed up big time with the Pentium 4 and Itanium. But, as we know, Intel recovered, then used anti-competitive tactics against AMD, and finally AMD screwed up with its next architecture. All of this more than ensured Intel's comeback.

So I'd say wait until AMD has 50% of the market (not just the custom PC market, which AMD will probably have within 2-3 years) to feel sorry for Intel and worry about Intel dying.


Pretty amazing to think back to those days. The Pentium 4 was so bad and the AMD Athlon 64 4400+ reigned. The worst thing you could say about AMD at the time was the time synchronization problems in VMware, which were eventually patched.

Then... Core2Duo happened so fast and AMD disappeared so quickly. Felt like overnight.


And Turions were just so slow and ate so much power. Whoever said otherwise was just rabid.


Intel's recovery a decade ago also came down to superior mfg, and that past strength is their main weakness this time around. I am very concerned about their ability to yield 24+ core monolithic server parts on 10nm at competitive cost/perf/timeframe, given that 10nm seems to be an out-of-control dumpster fire even compared to their 14nm node which was itself around 2 years late.


I'm not worried about Intel or AMD because x86_64 isn't the only ISA around. ARM and RISCV (amongst others) are doing quite well in terms of performance and power efficiency.


I like RISC-V more than is probably healthy, but it is not a player at all, let alone a compelling player, vs x86_64 or aarch64 in the server space, and probably won't be for at least 2-3 years. One obvious missing feature is robust KVM/type-2 hypervisor support.


A good example would be Apple vs the variety of Wintel makers.

However, I don’t think this is really a major worry, because the actual real threat is from ARM and ARM has a lot of manufacturers.


Charlie Demerjian has been doing investigative journalism focusing on the semiconductor industry since at least 2008: https://www.theinquirer.net/inquirer/news/1050052/nvidia-chi...


SemiAccurate most definitely has a big bias against Intel, but they are pretty open about it. They have a ton of insider information that nobody else does; there is a reason they have an insanely expensive paywall, and people gladly pay it.


SemiAccurate features some of the people from The Register back when it was good. They've usually got their knives out for someone; it's not always Intel. The thing to bear in mind is that they're all about the silicon, and the silicon at scale. It's very inside baseball.


> They have a ton of insider information that nobody else does

Are you willing and able to elaborate?


They have a (slightly out of date) page on the very topic

https://www.semiaccurate.com/fullyaccurate/

Some things since then: they were the first media outlet to report that Intel's 10nm was a disaster, and the first to talk about Zen's chiplet technology and predict AMD had a home run on its hands.


They also "predicted" Intel's delays and ultimate failure with Broadwell (basically a 6-month product), too.


Very interesting, thank you.


[flagged]


Or you can read it, subtract out the bias, and still have more information than other sites.


This is how one should handle all news outlets.


Estimating just how much bias to subtract is a real black art.


yeah huh

nuh uh

Neither one of you presented evidence of specific articles and insights that proved true, FWIW. I do think that calling a site with a vendor bias as equal to Infowars in any manner (impact or scale) is a huge accusation.


Honestly, this seems like another instance of Intel dropping the ball, and AMD is more than happy to pick up the slack. AMD is already testing their 7nm Epyc chips with OEMs to be released Q1'19.

My takeaway from this is that server manufacturers are starting to recommend Epyc as a solution, which will increase AMD's market share. This will just create more competition between Red and Blue, which will give consumers faster innovation and better prices.


Can anyone explain the shortage of 14nm chips? This article simply mentions that most readers are aware of this fact and I am unable to find supporting articles.


They have their first taste of a 70M-unit-per-quarter, self-fabbed 14nm modem for the next iPhone cycle. They were supposed to have 10nm online, which would have left room for those 14nm orders. Since 10nm is delayed, Intel has had to move some of their 14nm chipsets back to 22nm, as well as limit supply of some segments of 14nm chips.

And I wouldn't be surprised if more customers are looking at EPYC anyway; every microcode update from Intel has cost a few percent of performance, and at this rate we will be back to Broadwell-era performance soon.


Interesting. I would not have thought that competing contracts would have been an issue (and I guess intel didn't either). Many thanks for the insight.


https://www.tomshardware.com/news/14nm-processor-intel-short...

Because Intel's 10nm process is delayed, their 14nm fab is overbooked leading to a chip shortage.


But isn't the positive spin on that that they are selling every chip they can make?


No.

Two fabs output more chips than one fab. Intel has deals with other companies for both 10nm and 14nm production, and some of those deals are undoubtedly based on Intel moving its own chips to 10nm. Being at capacity and unable to increase output as expected means customers move to other companies that make compatible, competitive products. Once those customers move, they may not come back.

This is quite a bad position to be in.


Are there other companies that are using Intel fabs with real volume?


https://www.theverge.com/2018/7/25/17614930/apple-iphone-201...

https://www.tomshardware.com/news/14nm-processor-intel-short...

> That ramp is occurring as Intel is also bringing production of its 14nm XMM 7560 modems online for Apple during the second half of this year. The new Apple contract, which consists of millions of modems for iPhones, will certainly be a top priority at Intel's fabs.


I had heard that Apple will be dumping Intel modems after that contract is complete, with Mediatek the likely new partner. This might have something to do with the Qualcomm lawsuit over Apple breaking NDAs by sharing sensitive information with Intel.

Assuming this to be so, Apple may not be allocated the highest priority.


Altera pre-acquisition. But it wasn't working too well.


Altera is hilarious, they shit the bed with Stratix 10 - delayed by years and bet the house on Intel. Intel bought them and then totally shit the bed on what was meant to be the process shrunk version of Stratix 10. What a way to go...


Not hilarious if you are stuck with ancient 7-series parts as the only cost-effective option. They just recently graced us with a Spartan-7 that uses the same die as their Artix parts with the gigabit transceivers not bonded out. How generous.

We need competition in the FPGA space again.


Their original roadmap was to provide a low density, low cost Artix as a replacement for some of the Spartan price point and that never materialized. This latest dogbone is just to get around Vivado's lack of support for anything before 7-series and the age of the last ISE.


Ericsson 5g, or so I hear


Wasn't the rumour that this was Nokia 5g (https://www.semiaccurate.com/2018/07/02/intel-custom-foundry...)?


Moving is incredibly hard though; it takes a year plus to adjust to a new fab.


But Intel has been dropping the ball over and over again lately. Some customers might take something like what's reported here as the last straw.


They have a competitor. If OEMs switch to AMD because they can’t get parts from Intel those sales could be gone forever.


Don't you mean TSMC, followed by GlobalFoundries?


I wasn't aware that HP and other companies that manufacture servers and PCs purchased directly from the fabs. You can buy server-class CPUs from AMD; can you get them direct from TSMC?


AMD doesn't fab their own chips anymore.

I understood that TSMC would be manufacturing the next round of AMD processors, since GlobalFoundries didn't have 7nm ready.


We're talking past each other. HP isn't going to switch from Intel to TSMC because HP doesn't design CPUs. They make servers. Their vendor is going to be AMD, not TSMC or GlobalFoundries. You're saying GlobalFoundries has fabs that make AMD's chips, but I'm looking at it from a business/marketing perspective, not at who's making the hardware. The customer never sees a "GlobalFoundries" label on the CPU; it's going to say AMD or Intel on it.


Spin, maybe... but they weren't planning for the 10nm-process CPUs to be delayed, and they weren't prepared for the increase in demand for the 14nm parts. Poor management decisions have collided here, it would seem.

Not the same as when a manufacturer says they’re making as many as possible and they’re selling out.


Well, for the past 40 years Intel managed to deliver node shrinks mostly on time. Management got used to that and thought the 10nm troubles were temporary, not something that required a new design, like the rumored 10/12nm hybrid replacing the original 10nm plans.


Intel might not have been able to make alternative decisions anyways. Cost might have made them choose between 10nm sooner or more 14nm now, and not betting on 10nm could easily have cost them more. Hard to say without insider info.


But not every chip they projected or promised to make, so not so great really.


It is, as a first-layer analysis, but it's a failure for such a large company not to have fallback buffers to avoid a supply shortage.


Interesting snippet in that article - anyone know more about these new power standards?

> Intel's recent 300-series chipset refresh found its new chipsets coming to market with the 14nm process, which is necessary to meet California's new power standards.


Probably https://energycodeace.com/download/21326/file_path/fieldList... which requires workstations and gaming PCs to implement Energy Efficient Ethernet and to use less than 10W in sleep mode. Maybe Intel's old chipset doesn't support EEE (although it seems like nobody uses the Intel NIC anyway).


I can't say anything about Intel and 14nm specifically, but there has been a big shortage of all electronic components for the past 18 months.

Microcontrollers are all backlogged, chip capacitors are also backlogged. PCB Fabs are backlogged, assembly houses at least in the US are at capacity.

What I suspect is that demand for electronic 'things' by poor people in China, India, and Africa is outstripping supply chains across the board.



It is already hitting the news now. The 10nm delays don't help here.


Probably due to poor yield.


What's amazing (to me anyway, as I learn more about the fab game) is how Zen could suffer the exact same defects per wafer as Intel and produce considerably more viable CPUs per wafer. The "Core Complex" or chiplet architecture was such a brilliant longterm play by AMD—it still blows my mind.


The difference is closer than you'd imagine.

Intel's XCC die is 694mm^2 for a 28-core, while Zen / Zeppelin is 213mm^2 for an 8-core. Intel's XCC 28 core is around 25mm^2 per core, while AMD's Zen is 26.6mm^2 per core... suggesting a slight advantage to Intel!

Furthermore: Intel's XCC die has 28-cores. When defects come into XCC, Intel can simply disable those cores and sell them for cheaper. For example, Intel's Xeon Platinum 8160 is a 24-core. Presumably, 4-cores are defective but Intel can still sell it for $5000 or so.

So a defective product would require a manufacturing defect somewhere critical. But individual cores can most certainly be disabled during the manufacturing process and sold as a lower tier product, like the Xeon Platinum 8160

As such, Intel definitely has good yields, at least on 14nm production.

------------

The main advantage is that AMD's strategy works way better when yields are poor due to a new process being not fully understood. But as defect rates drop, Intel's monolithic strategy seems to have slight advantages over AMD's. (Remember: Intel's cores are slightly superior in IPC, have slightly faster caches, and AVX512 enabled).

AMD's strategy is definitely good, but I don't think Intel is really that far behind.
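The die-size comparison above can be made concrete with a standard Poisson zero-defect yield model. The defect density below is a purely illustrative assumption (real D0 values are not public), and this counts only defect-free dice, before either vendor's core-disable salvage SKUs are taken into account:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dice with zero defects under a simple Poisson model."""
    defects_per_mm2 = defects_per_cm2 / 100.0
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.2  # assumed defect density in defects/cm^2, illustrative only

xcc = poisson_yield(694, D0)  # Intel monolithic 28-core XCC die
zep = poisson_yield(213, D0)  # AMD 8-core Zeppelin die

print(f"694mm^2 XCC perfect-die yield:      {xcc:.1%}")
print(f"213mm^2 Zeppelin perfect-die yield: {zep:.1%}")
```

The small die comes off the wafer defect-free far more often; the monolithic die compensates, as noted above, by disabling bad cores and selling the result as a lower-core-count SKU, which is why the overall economics end up closer than the raw numbers suggest.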


In addition to the benefit of getting more working chips per wafer, don't chiplets also provide the benefit of being able to independently bin each chiplet, significantly increasing the effective production yield of high-clocking, high-core-count chips relative to monolithic fabrication?
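A toy Monte Carlo illustrates that binning effect. All the numbers here (the per-core frequency distribution and the bin threshold) are made-up illustrative values, and a part is assumed to ship at the speed of its slowest core:

```python
import random

random.seed(1)
MU, SIGMA = 4.0, 0.15  # assumed per-core fmax distribution in GHz (illustrative)

def die_fmax(cores):
    """A die is limited by its slowest core."""
    return min(random.gauss(MU, SIGMA) for _ in range(cores))

N = 10_000
# Monolithic: each 32-core part is one big die; one slow core drags it down.
mono = [die_fmax(32) for _ in range(N)]
# Chiplet: bin the 8-core chiplets first, then group like with like.
chiplets = sorted((die_fmax(8) for _ in range(4 * N)), reverse=True)
chiplet_parts = [min(chiplets[i:i + 4]) for i in range(0, 4 * N, 4)]

top_bin = 4.0  # hypothetical top speed bin in GHz
print("monolithic parts hitting top bin:", sum(f >= top_bin for f in mono))
print("chiplet parts hitting top bin:   ", sum(f >= top_bin for f in chiplet_parts))
```

Because a monolithic part needs all 32 cores to come out fast simultaneously, almost none reach the top bin, while pre-binned chiplets can be matched into a meaningful number of top-bin parts from the same silicon.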


Intel did not have a lot of competition in recent years so why design for maximum yield when you can just slap a high price tag on the thing? AMD on the other hand needed to assume that they would not be able to compete in raw performance so what's left is being as cost effective as possible.


ARM chips will most likely replace Intel even for servers. I think we may see massive-core-count, lower-clocked chips as die shrinks become too expensive.

“Rock's law or Moore's second law, named for Arthur Rock or Gordon Moore, says that the cost of a semiconductor chip fabrication plant doubles every four years“ https://en.m.wikipedia.org/wiki/Moore%27s_second_law


CPU side channel attacks, speculative execution leaks, throttling and cache effects, might eventually benefit the ARM server camp.

Maybe instead of virtual machines we'll end up running (some) workloads on cheap, low power and isolated ARM CPUs, directly on bare metal without potentially leaky virtualization. Something like 4-16 GB of ECC RAM over 1-2 channels, quad core Cortex A73, A76 or similar.

(Some ARMv8 designs are actually not that far behind of x86 chips in scalar performance anymore. SIMD (vector integer/floating point) is another matter, but I guess it's not impossible to slap a few 256 or 512 bits wide SIMD units in ARM designs.)


I wonder if AMD has the fortitude to try again after the premature ejaculation that was https://en.wikipedia.org/wiki/SeaMicro


SeaMicro wasn’t ARM though- it was X86 (intel Atom to be specific).


That's the idea behind Esperanto's 4096 core RISC-V CPU. It's run by Dave Ditzel and the creator of BOOM CPU.


> I think we may se massive core count with lower clocked frequency chips as die shrink will be come too expensive.

Only for applications whose workload can be parallelised. Many applications remain single-threaded, and until a parallel version is developed for all these applications, single thread speeds will matter.


I see this as unlikely. ARM was very late to the game with a 64-bit ISA.

With a completely free competitor in risc-v, and not much market penetration, what is the motivation for a customer to pay the licensing fees?


Feels a bit like Apple's resurrection from the dead when Jobs came back to take the helm after having been 'fired'... now AMD is eating Intel's pie left right and center.


Are there specific people that are back at AMD that had not been there during the last decade?


Lisa Su is basically a superstar. She is an excellent engineer who also turned out to be great at management.


Jim Keller had a stint there. Just like with AMD K8, he flew in, saved the day with Zen, then moved to greener pastures.

Just that this time, these pastures are at Intel, so who knows what'll happen?


the new CEO, her name eludes me, a Chinese lady, she's very much responsible for AMD's recent success and capitalizing on Intel's mistakes


Wikipedia says Lisa Su is Taiwanese-American.

Rumor is that she's somehow related to NVidia's chief. Like 2nd cousins or 3rd niece / something-something removed or something of that nature. Just for some delicious irony.

EDIT: Found it. Jen-Hsun Huang is apparently her Uncle. https://babeltechreviews.com/nvidias-ceo-is-the-uncle-of-amd...

> Technically, Lisa Su’s grandfather is Jen-Hsun Huang’s uncle. They are not exactly niece and uncle, but close relatives.


Jen is 表舅 (an uncle on the mother's side), so it has to be on Lisa Su's mum's side, not her dad's: the son of a brother or sister of her mum's father or mother (grandpa/grandma).


Lisa Su!


Intel used to keep less important products, like chipsets, on trailing nodes (right now, that's 22nm). Now the company is fabbing the chipsets on 14nm, too. That's mainly because of the late move to 10nm. Intel's processors SHOULD be on 10nm, but they aren't, so chipsets are eating into 14nm production capacity. Intel has to create one chipset for each processor produced (in most cases), so this adds up to a lot of chips.


How many server models have HP and Dell switched to using Epyc?


HPE Solution Architect here -- HPE has 4 lines, 3 public / 1 private. DL385 and DL325 are rack based servers aka traditional pizza box.

DL385 is your workhorse platform. DL325 is a 1P design built for heavy PCIe-connected (read: NVMe) device workloads.

The CL3150 is a cloudline server and will likely be more consumed in the service provider space, in my opinion.

The Apollo 35 is available to top-200 volume accounts (which has been announced publicly, but it is not available to your everyday customer).

This type [of information leak] is my worst nightmare as someone who works with resellers: these types of documents getting into the wrong hands, or access somehow being breached like this.

Edit 2: I am a server SA, MASE. I configure servers all the time. If customer demand shows the swap, you could see the proc move over to other lines, such as blades and the HPC markets.


Super-curious: have you seen much customer-driven demand for AMD EPYC, i.e. for reasons other than Xeon availability? AMD processors have been affected to a lesser degree to speculative execution exploits, they're cheaper and (if I'm not mistaken) offer more PCIe lanes, etc.

Also, do you expect 7nm EPYC to do well in your space? Thanks!


Depends on the market segment. I've seen a lot of demand for Epyc in workloads that are sensitive to memory bandwidth. Just so much to offer when you max out 8 channels.

They are cheaper due to fabrication process. The PCIe lane story is stuff of fanboys. It comes at a cost, power and heat.

Secondly, anyone looking at an NVMe box should be looking at AMD in my opinion. The trick is if you are doing a VM farm, mixing Intel and AMD aint the best idea, as you all know.

I see EPYC ticking up fast.

In terms of exploits like Spectre/Meltdown, I'm pretty sure that even for the exploits AMD claimed they were not vulnerable to, they ended up pushing out microcode anyway. So it's a moot point.

I HAVE come across a lot of customers who have DOUBLED their core count due to Spectre/Meltdown mitigations, and they are attracted to AMD's high-core-count, lower-cost options. But remember, the power draw is different, and always test/PoC!


> The PCIe lane story is stuff of fanboys. It comes at a cost, power and heat.

Could you unpack this a bit? Specifically, I'm curious if the cost is a premium per lane (e.g. W/lane greater on AMD than on Intel) [1]. Also, is that cost at all affected by the I/O volume or merely the CPU being power-hungry overall?

[1] Of course, that assumes everything else being equal, which it can't be, as well as an equal proportion of PCIe utilization, which is unlikely.


I've had a few customers test AMD and find a higher operating temp, which they determined was due to higher power consumption. On paper, you get more lanes at a lower TDP with AMD. In practice, as always, your results may vary. Test!

PCIe lanes and counting them is funny math. Do the homework on system boards, how they communicate, and the tax of moving information between processors.

However, I would say their tests were short, and AMD processors have 3 power operating modes. There was also a neat blog posted somewhere (I think on here...) a little while back suggesting that the AMD proc did not need to run at advertised power on the customer procs. It was about compile times and how much power still resulted in good times. That was consumer-grade Ryzen chips tested though.


Unfortunately, higher temperature says less about power and more about thermal design (often of the overall system and not just the chip).

> On paper, you get more lanes at a lower TDP w/ AMD.

I was hoping you (or anyone) had at least some real-world anecdata.

However, the theoretical power cost being lower suggests that if there is a premium in practice, it's unlikely to be significant.

> PCIe lanes and counting them is funny math. Do the homework on system boards

It's not that funny. Latency "taxes" are certainly a concern for some workloads, but, ultimately, if there's not enough bandwidth to get the data to the CPU, such that it might end up idle, that can trump any tax. The difference between 40 and 128 lanes of PCIe 3.0 in transferring 64MiB is on the order of 1ms.

Finding a mobo that allows access to all the lanes might be more challenging when there are 128 than when there are 40-48, but I expect the popularity of NVMe to reduce that challenge somewhat.

OTOH, it seems Epyc uses half those lanes for communication between CPUs, so the usable lane count doesn't go up for 2S vs 1S; perhaps the comparison is really 128 lanes vs 96 lanes.
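The 64 MiB figure above checks out on the back of an envelope, assuming PCIe 3.0's roughly 985 MB/s of usable per-lane bandwidth after 128b/130b line coding (other protocol overhead is ignored here):

```python
# PCIe 3.0: 8 GT/s with 128b/130b encoding ~= 0.985 GB/s usable per lane.
GBPS_PER_LANE = 8e9 * (128 / 130) / 8 / 1e9

def transfer_ms(mebibytes, lanes):
    """Time in ms to move a payload across the given number of lanes."""
    gigabytes = mebibytes * 2**20 / 1e9
    return gigabytes / (lanes * GBPS_PER_LANE) * 1000

t40 = transfer_ms(64, 40)
t128 = transfer_ms(64, 128)
print(f"64 MiB over  40 lanes: {t40:.2f} ms")
print(f"64 MiB over 128 lanes: {t128:.2f} ms")
print(f"difference:            {t40 - t128:.2f} ms")
```

The gap works out to a bit over a millisecond per 64 MiB, which is why the lane count only matters once sustained I/O bandwidth, not latency, is the bottleneck.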


Yes, latency vs. throughput, also the main idea behind GPU computing. It worked well there, and CPUs are increasingly going to sacrifice latency for throughput as well.


> PCIe lanes and counting them is funny math.

Do you have a "relevant" chunk of customers that are really looking for the high-density PCI-Express connectivity? Are the 128 lanes per system a feature that actually draws in users with real world demands or is this the wrong thing to focus on?

> There was also a neat blog posted somewhere

You must be talking about the DragonflyBSD mailing list: http://lists.dragonflybsd.org/pipermail/users/2018-September... (as linked by others by now).

To me this wasn't very surprising. It's well understood in the more technically inclined enthusiast community that underclocking Ryzen yields tremendous efficiency improvements. Famous overclocker "The Stilt" did a great analysis on Ryzen's launch day in 2017: https://forums.anandtech.com/threads/ryzen-strictly-technica... One of his benchmarks showed an almost 80% efficiency improvement when underclocking an R7 1800X to 3.3GHz, which is just above Epyc's maximum boost frequency. Since Epyc is almost the same silicon as Ryzen 1st Gen (B2 stepping instead of B1), the chips should have almost identical characteristics.

Unfortunately, I'm not aware of any similar detailed analyses of recent Intel Core processors to compare against. Samsung's low-power manufacturing node used by AMD has often been cited as the specific reason for the steep efficiency curve (and the relatively low upper end compared to Intel), but the general trend is the same for almost all chips.

On the other end of the spectrum, overclocker der8auer measured about 500W draw in Cinebench when overclocking the Epyc 7601 to around 4GHz: https://redd.it/92u6db


> Are the 128 lanes per system a feature that actually draws in users with real world demands or is this the wrong thing to focus on?

I'm going to go out on a limb and suggest (based on my own experience[1]) that most users are too ignorant to know that this might be something that they want or would benefit from.

Some of us have always demanded more I/O bandwidth (even if it meant 4S servers), but typically with a price limit.

I do, however, suspect that additional demand could materialize in the form of NVMe slot count.

[1] particularly with so many potential employers being categorically cloud-only, they don't even want to know about the underlying hardware or what it's capable of.



> In terms of exploits like Spectre/Meltdown, I'm pretty sure the exploits AMD claimed were not vulnerable, they ended up pushing out microcode for anyway. So its a moot point.

But even in the scenario where the microcode actually did incorporate some "interesting" changes, it hasn't impacted performance at all. So this is basically the world's biggest-ever design win at this exact moment.


Thought I'd ask a followup question for the sake of others here who might appreciate the answer:

What sorts of evaluation opportunities do you provide for Intel vs AMD processor comparison?

Besides short-term on-site placement, this could even look like remote access to a specifically-provisioned lab environment.

Something like this would be pretty cool to evaluate a wide range of processor/memory/storage/etc combinations.

I'm absolutely sure this almost certainly exists, but I thought it would be interesting to know how it actually works. (I'm not in the server hardware industry, but am very interested in how it works.)


Very nice. I will make sure our server teams know they can get Epyc. Hopefully we are already testing the Spectre impact on Xeon vs. Epyc.


I'd love to know your results privately if you would share, as I have several customers who were impacted by Spectre/Meltdown and are also doing private benchmarks.


I'm not super familiar with HPE's product lines but I'm just kind of surprised they are still using the Apollo name.

Maybe we'll see some Silicon Graphics branded HP gear too?


SGI stuff is already rebranded to the Apollo name and Superdome Flex (UV300, I think it was before).

Depending on your SGI server, the name may have gone into the appropriate HPE family name.

If you need help with our new decoder ring, I'd be happy to point you in the right direction.

I expect HPC workloads to take advantage of AMD chips, but I also expect many of the mathematical improvements the Skylake instruction set provides to sustain demand in computational HPC / numerical, non-GPU work.


Can you comment on the September/October availability claims?


I hate doing this, but because this is sensitive, I have copy pasta-d the HPE Customer statement from our Customer Presentation on the issue:

HPE, along with all other server vendors, is starting to see supply constraints on various Intel Xeon-SP processors—commonly referred to as Skylake—used in our Gen10 servers.

–Intel is supply tight on Skylake SP HCC, Skylake SP LCC, Skylake SP XCC, and Skylake W LCC processor series.

–Customer demand for various Skylake server processors is exceeding short-term supply.

–Intel is working diligently to increase supply but indicates limited ability to materially improve Skylake availability potentially through December 2018.


I'm not sure how many, but Dell has at least three in the PowerEdge line and they've been announced and available for a while. One of them is a 1U single-socket system with 32 cores, 64 threads, and up to 2TB of RAM.

https://www.amd.com/en/campaigns/amd-and-dell

https://www.dell.com/support/article/us/en/04/qna44314/dell-...

https://blog.dellemc.com/en-us/poweredge-servers-amd-epyc-pr...

http://www.itpro.co.uk/server-storage/30799/dell-emc-powered...


Dell seems less than enthusiastic about Epyc. For some reason their Epyc servers are 1/4th the density of their Intel servers. The dual socket Intel is available with 4 systems per 2U, the dual socket AMD is 2U.


They have long been an Intel ally since the days of "Intel Inside." Intel was paying billions to Dell not to have any AMD products in their lineup. Dell even said the missing AMD was because there was no customer demand. Only after the AMD vs Intel case ruling did Dell decide to have one or two AMD machines, just to make things look "better." My guess is that in reality Intel runs deep in their blood.


This may be attributable less to a lack of enthusiasm and more to the physical size and TDP of Epyc CPUs. Additionally, it would be very hard to take advantage of Epyc's increased PCIe lane count at half-U densities (which is, of course, not relevant to all use cases), and it would likely constrain the number of memory slots.

Personally, I've never been convinced that such high server densities [1] provide a net benefit. Ever since "blade" systems first came out, and through the current availability of the half-U models (e.g. 4 per 2U) or even 2S 1U models, they've been plagued with thermal design issues and, sometimes, reliability issues due to non-commodity parts and/or needing to spin tiny-diameter cooling fans so much faster.

Until relatively recently, there wasn't even datacenter space available that could accommodate such systems at full density running at full load. What did become available came at a premium (in addition to the premium for the higher-density hardware in the first place).

[1] more than 1S/U or so


What is wrong with Intel? I haven't seen this level of inaction from a major firm for a while.


As far as I've heard it's not really inaction, but that their 10nm process has been plagued with issues, being "ready next year" for about 3 years (with no terribly convincing signs that 2019 will be the actual year, either). This has presumably played hell with their product development and fab capacity plans.


Semiaccurate had a pretty in-depth article on the recent management a while back but the site is down and I think it was paywalled anyways.


Like what happens at any big monopoly... sales and marketing are prioritised over product development.

https://www.youtube.com/watch?v=_1rXqD6M614


I knew what it was before I clicked, but I still watched it to the end. He was a really smart guy.


They don't even have a CEO now. But the troubles started years before. They got fat, lazy, and greedy, basically. And then they started messing up the technical stuff, too, as we're seeing now with their 10nm node.


This reads like a hit piece. What are the relative volumes of Xeon versus Epyc? That would be kind of important to know. Maybe I just didn't see them mentioned in the article.


Back in 2002 it was a struggle for AMD to sign up any big vendor to offer servers with Opteron chips. This time it looks different.


Opteron servers were the absolute business for a few years in the mid/late 2000s. They were a preferred supplier for a few years while Intel wallowed. 2002 was before my time, but that was the era of SPARC in the business I work in.


Dell's had some PowerEdge models available for a while now.



Very nice of them not to blur out the email recipient's address at the bottom of the email :)


No, I can't imagine that not being a major goof. SA may have just lost a source, or at least some goodwill.

It's entirely possible this was deliberate but I call it unlikely.


Is it me, or did SA leave Aaron Weston's e-mail address in the image?


> The page is marked, “Confidential | HPE Internal & Authorized Partner Use Only” but it is quite open and does not require a login. (Note: We are not linking it because of all the sites that steal our stories, rip us off, and don’t credit)

Oh please.


It does seem petty. But I think I'd also be irritated if I found the story, and did the research, only to be undercut by the copy/paste "journalists" that took my credit/traffic.


I don't think you are aware how much background other writers grab from them, and no one ever credits them. They have a reputation for knowing lots of insider stuff, and there's a reason they can survive with a paywall while hardly anyone else can (I can't think of a single site writing about hardware that is subscription-based).


And what a paywall! I was reading semiaccurate for the entertainment factor when it was free, now you can't even think of subscribing unless their info actually means money to you.


They're lying anyway. It was an email.


[flagged]


That's an interesting assumption you're making. If you're good looking, you can't possibly be working in tech as your full time gig.


Well, I know if I was that good looking and had my same intelligence, I think I could do a lot better...


Maybe they... Like working there?


Ahh of course, because if you're good looking, you can't possibly enjoy working with tech, want a career in that field, or perish the thought, not actually like modelling.


You're making a lot of assumptions.

1. That working in tech is the only way to enjoy the tech community.

2. That the "cage-monkey" job described at the top of thread is as good as it gets, and that aspiring to do something else means something non-tech.

3. That good looks and some other job necessarily means modeling.

Look. I was clear to indicate it was what I would do, and also to indicate that I would have the same intelligence. What I was trying to imply is that based on inherent cultural (and likely neurological) bias towards good looking people and symmetrical faces, I would prefer to leverage that natural benefit into a better position, or something I liked doing better. Because I've been a cage-monkey before. It's better than a lot of things, but it sucks in a lot of ways too. Spending large amounts of time in a very loud and very cold room using crappy crash-kart setups to do emergency maintenance on systems sucks. Racking equipment and running cables is actually the fun part.

So, I understand you were probably conditioned to assume the worst given the thread, hence your initial response, but please, it isn't warranted.


Oh jesus, it was a joke, token-overly-serious-HN-commenter.


Intel's previous node before 14nm was 22nm (2011 [1]).

[1] https://en.wikichip.org/wiki/intel/process



