Qualcomm Plans Exit From Server Chips (bloomberg.com)
126 points by ksec on May 9, 2018 | 70 comments



It's a bit odd that this would happen so soon after Cloudflare started testing & praising ARM-based servers (specifically Qualcomm's Centriq). For reference:

https://blog.cloudflare.com/arm-takes-wing/ where they find Centriq mostly performance-competitive with Skylake at a significantly lower TDP/power point, modulo platform support and optimisation gaps, e.g. Go's ARMv8 backend being immature and lacking optimised assembly routines

https://blog.cloudflare.com/neon-is-the-new-black/ where they demonstrate the latter by SIMD-optimizing jpegtran for ARMv8[0], leading the Centriq 2452 to reach (and even overtake) the Xeon 4116's performance-per-worker and blowing its throughput-per-watt out of the water (25 images/second/watt on the Centriq 2452 versus under 10 on the 4116)

[0] as they'd previously done for AMD64, though starting with equivalent optimisations using SIMD intrinsics, then adding ARM-specific assembly for further gains, as "the compiler in that case produces somewhat suboptimal code" with intrinsics[1]

[1] because, according to a commenter and SO[2], GCC inserts unnecessary register copies when NEON is involved, yielding very poor machine code; Microsoft's ARM compiler is apparently stellar, and Clang used to be meh but has largely caught up since 2012

[2] https://stackoverflow.com/questions/9828567/arm-neon-intrins...
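
For a flavour of what such a NEON port looks like, here's a minimal sketch in C (an illustrative byte-summing kernel of my own, not Cloudflare's actual jpegtran code) using the arm_neon.h intrinsics discussed in [1]:

    #include <arm_neon.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative only: sum a byte buffer 16 lanes at a time.
       Per [1]/[2], older GCC could emit redundant register moves
       around intrinsics like these, hence the hand-written asm. */
    uint32_t sum_bytes(const uint8_t *src, size_t len) {
        uint32x4_t acc = vdupq_n_u32(0);
        size_t i = 0;
        for (; i + 16 <= len; i += 16) {
            uint8x16_t v = vld1q_u8(src + i);            /* load 16 bytes   */
            uint16x8_t lo = vmovl_u8(vget_low_u8(v));    /* widen to 16-bit */
            uint16x8_t hi = vmovl_u8(vget_high_u8(v));
            acc = vaddq_u32(acc, vaddl_u16(vget_low_u16(lo), vget_high_u16(lo)));
            acc = vaddq_u32(acc, vaddl_u16(vget_low_u16(hi), vget_high_u16(hi)));
        }
        uint32_t total = vaddvq_u32(acc);                /* ARMv8 horizontal add */
        for (; i < len; i++) total += src[i];            /* scalar tail */
        return total;
    }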


Isn't it a bit dubious that Cloudflare's articles don't disclose that they have received investment from Qualcomm?

https://www.qualcommventures.com/companies/data-center-enter...


Very much so. It undermines their value argument considerably: what kind of sweetheart deal did they get that a regular customer wouldn't?


The value comes from the wattage and not from the price of the chip. Intel could give us CPUs for free and the Centriq chips would be cheaper overall because of the lower power requirements.
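
To make the wattage argument concrete, here's a back-of-the-envelope sketch; every number in it (TDPs, energy price, PUE) is an assumed round figure for illustration, not a measured one:

    #include <stdio.h>

    /* Hypothetical 3-year power bill per socket; TDPs, energy price
       and PUE are made-up round numbers, not measured figures. */
    int main(void) {
        double hours = 3.0 * 365 * 24;        /* 3-year service life    */
        double usd_per_kwh = 0.15;            /* assumed energy price   */
        double pue = 1.5;                     /* cooling/overhead ratio */
        double xeon_w = 120, centriq_w = 40;  /* illustrative TDPs      */
        printf("Xeon:    $%.0f\n", xeon_w    / 1000 * hours * usd_per_kwh * pue);
        printf("Centriq: $%.0f\n", centriq_w / 1000 * hours * usd_per_kwh * pue);
        return 0;  /* ~$710 vs ~$237: a delta that rivals a mid-range CPU's list price */
    }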


It's not so simple. What kind of engineering support, hardware support, or optimization support did Cloudflare get in exchange for a submarine PR piece? That could easily outweigh the performance-per-watt advantage the blog touted. Qualcomm is not exactly known for being forthcoming with application support for any random company, and that includes things like patent licensing costs.

It's unethical not to disclose that Qualcomm invested in Cloudflare.


They probably should have disclosed the relationship (they did disclose that Intel, Cavium, and Qualcomm sent them engineering samples), but accusing Vlad Krasnov of writing a PR piece for Qualcomm in the form of blog posts describing his work is over the top.


Exactly. And, in the spirit of full disclosure, Vlad used to work for Intel. And, I think, could rightly be described as an Intel fanboy. And, when we began testing Centriq, said something to the effect of: "This is a waste of time. There is no way this will be better than Intel's chips." And then was surprised when they were in many, but not all, cases — which his blog thoroughly spelled out.

One thing that may not be apparent from the outside is that it's the engineers who run the blog at Cloudflare, not our PR or marketing team. Vlad's post wasn't screened by anyone in PR or marketing ahead of time. I didn't read it until it was on the blog. We encourage engineers on our team to talk about interesting work they're doing. We see it as a recruiting tool, not a sales and marketing tool. I think that's why the content of the blog is so genuinely interesting and valued by an engineering audience.

It is correct that Qualcomm has a very small investment in Cloudflare. We've not hidden that; it's on the front page of our website. Other strategic companies like Google and Microsoft have also made investments.

I'm sure that our relationship with Qualcomm got us a bit earlier access to test equipment than we'd have had without it. If we do switch to their platform, it may help us get better pricing though, in the case of the blog, Vlad was comparing list prices with list prices. I don't know, but I'd be surprised if Vlad knew the actual pricing our Infrastructure team has negotiated with either Intel or Qualcomm.

I'm genuinely not sure whether Vlad even knew that Qualcomm was a small investor. It happened before he joined and it's not something that comes up internally. Should Vlad have disclosed the investment? Perhaps, though I doubt it crossed his mind and, again, we didn't run his blog post through a central marketing department whose job would be to think of such things. Frankly, doing so would make our blog the sort of boring, marketing-driven blog that no one reads, which would defeat its purpose.

As for the shut down, the team at Qualcomm gave us an early heads up that the server chip business would likely not stay within the company as Qualcomm goes through restructuring. I don't know what will happen with it, but I suspect it will get spun out into some independent company. If that happens, that independent company won't have any investment in Cloudflare, at which point, if their chips continue to perform well for our application, perhaps we can put this issue to rest.

As for the future, if sufficiently capitalized, I actually think having the server chip business as an independent company will be a terrific outcome — allowing it to innovate quickly without the burdens and distractions of Qualcomm's core business. (And, perhaps, we can get them to rethink the name. "Centriq" has always sounded incredibly effete to me.)

As others have pointed out, even if the Centriq line doesn't survive, there are several other companies working on ARM-based server chips. While, to date, Centriq has performed better than these other ARM servers have in our tests, inevitably that gap will close. We continue to believe that, for our application — where requests are atomic and performance is largely driven by core count, and where we are more sensitive to power costs because we need to operate out of old, inefficient but central and highly connected data centers — an ARM-based solution will win the day.

Does that mean ARM-based servers are better for every task? Of course not. But they have proven to be for ours. To think we'd be biased because of a small investment defies logic. The amount we spend annually on CapEx for CPUs or OpEx to operate those CPUs dwarfs Qualcomm's investment. They don't have a board seat, information rights, or any other way to know what's going on inside Cloudflare or influence us. We're highly rational and data-driven. If Intel makes an affordable, high-core-count processor that operates efficiently, awesome! We'd love to buy it. Until then, I am happy that for the first time in a long time there is starting to be real competition in the server chip market. That's good for everyone, even, probably, over the long term, Intel.


> Exactly. And for full disclosure, Vlad used to work for Intel.

He did disclose that though, in the first article:

> Intel supplied us with an engineering sample of their Skylake based Purley platform back in August 2016, to give us time to evaluate it and optimize our software. *As a former Intel Architect, who did a lot of work on Skylake* (as well as Sandy Bridge, Ivy Bridge and Icelake), I really enjoy that.

(emphasis mine)


It's not a submarine PR piece. It's literally the experience of a single engineer testing a piece of pre-production hardware.


You guys had an undisclosed (in the piece) investment from Qualcomm and then touted their products without clearly stating conflicts.

It's:

* a PR piece -- you guys are making the case for your/qualcomm's tech on a blog, designed for public relations for Cloudflare

* you didn't disclose serious conflicts of interest that taint the conclusion of Qualcomm's tech being better

You might not be the NYT, and you may not have had a PR firm pushing you, but it's not morally different.

For the record, I was very interested in there being a real competitor to intel/amd in the server processing space. Now that the only real win they have is tainted, that's seriously disappointing.


I stand by my objection to you calling this a "submarine PR piece" which implies that somehow Qualcomm got us to write this.


Great, but you are not a disinterested source on this.

You are an insider (C-level exec) who allowed someone to write a potentially conflicted PR piece without disclosing that the beneficiary of that piece has a non-trivial investment in your company.

I don't even see a clear disclaimer that says "Qualcomm is an investor in Cloudflare" in the blog post.

The SEC doesn't usually accept a technical distinction on potentially-material information (a huge win for Centriq!) that could move QCOM, published underhandedly by a company that didn't disclose it had received a lot of investment from QCOM.


Per the comments section, it appears the engineer is open sourcing, or may already have open sourced (the discussion was 6 months ago), their testing setup, so perhaps someone can check his work?

Maybe the missing investment disclosure should have been caught earlier (or the posts amended to include it), but I'd assume an engineer isn't aware of the details of who is providing funding (I don't work for them, though, so maybe they are). The engineer specifies that the metric that matters most to them is long-term cost (performance per watt), so with that, I don't get the feeling their conclusion is flawed when comparing ARM vs Intel for their use case. We host a few pre-Core ASP.NET web applications, so I'm fairly certain we wouldn't come to the same conclusion, due to our use case. I will say, though, that based on the Bloomberg article vs the Cloudflare one, I'm a bit bummed: long term this seemed like a good avenue where applicable, and losing a competitor is unfortunate.


There is being honest with integrity and there is being above reproach. Being above reproach is better.


The only cost-related argument I can see in those posts is about power consumption? The cost of the hardware isn't mentioned (and probably wasn't even available, given they were comparing engineering samples).

I'd still have liked to see the relationship disclosed, but IMHO the validity of the posts is only in question if you assume they'd actually lie about results.


> is only in question if you assume they'd actually lie about results.

Were the results reproduced/confirmed by any other large experiment?


It should also be noted that Cloudflare lies about nearly everything. Bandwidth numbers are always inflated to at least 5x reality.


Looks like Qualcomm is a platinum-level founding member of the RISC-V Foundation. Maybe they've decided to quit moving in this direction with the ARM ISA and will revisit in a couple of years with something they have more control over. Just me speculating, or hoping.


No offense but, hoping... for what? That Qualcomm will produce locked down RISC-V based chips with their own cores, without open documentation or support, backed by their patent portfolio, with difficult to use BSPs and various proprietary extensions requiring SDK licenses (Hexagon)? The same Qualcomm we love? For software programmers, you might as well replace 'aarch64-linux-unknown-gcc' with 'riscv64-linux-unknown-gcc' and they won't notice much else, but that was never the real issue.

Qualcomm already has control over their products. The value offering from them has never been an ARM chip on its own. That is kind of the problem, in a sense. The distinction between them using RISC-V and ARM is one without a difference, more or less, especially in a market like servers (where margins are plentiful as opposed to razor thin, so BOM conservatism and paying for an ARM ISA license isn't as much of a pain point).

If they don't feel they can enter the market at this point -- a point where ARM and Qualcomm definitively have real, competitive silicon offerings, if Centriq is to be believed -- the issue almost certainly has nothing to do with the ISA, and everything to do with business and their market outlook. If they re-enter later, even with a different ISA, the ISA won't be the reason. And even if it came true, it wouldn't change the reasons people hate Qualcomm.


ARM-based servers have never been viable, because you can't buy affordable, standard-sized motherboards from any of the top ten Taiwanese motherboard manufacturers yet. It's a chicken-or-egg problem for whitebox server platform adoption. It's all custom-order stuff. Nothing like being able to buy a $400 Supermicro, Tyan or MSI dual-socket server motherboard, put two $350 Xeons in it, add RAM, etc.


There are several issues with ARM-based servers:

* 99% of developers' laptops are still amd64. Cross-compiling is a pain. Cross-arch VMs are a pain (emulation vs para-virtualization). Unless one big brand of professional laptops makes the switch (Apple? Lenovo? Dell?), it will remain more convenient to have servers with the same architecture as your developers' laptops/workstations.

* Servers cost a ton, and only a fraction of that is the CPU(s). Those sweet ECC RAM modules still cost a ton. The redundant power supply is expensive. Those fast SSD/NVMe disks will put a dent in any bank account.

The gains seem marginal: a bit less expensive to buy and to operate in terms of the electricity bill, but that's all. Also, there is already a huge fleet of amd64 servers deployed, and switching is kind of a pain:

* you land on a less tested architecture, so it's likely that you will encounter issues.

* you could be stuck if you have off-the-shelf proprietary software whose vendor supports nothing but amd64.

* you have to recompile all your binaries, including this old one that was statically compiled in 1999, and for which you have lost the original source code.

Unless there are clear incentives to switch (for example an integrated FPGA in the CPU, the ability to partition the hardware into VMs at the physical level, like LPARs on IBM Power, or a real cost decrease), it's unlikely that people will switch massively. But maybe cloud providers could be a market, for services that hide the underlying hardware: RDS/Aurora/Route53/ELB/Lambda (to a lesser extent for the last) don't need to be amd64.


ARM development is also pretty tough if you want to use real hardware and don't have deep pockets.

On one end there's the low-end gear: the Raspberry Pi and its clones. They'll run up to about $50 USD on average, but they're utter agony to develop on due to resource constraints.

Then there's the other end: gear in a proper ITX or server form factor that typically runs in the thousands of dollars, things from Gigabyte and others.

The least agonizing one to purchase seems to be the https://softiron.com/development-tools/overdrive-1000/

Though that's based on AMD's ARM CPUs, which, as far as I'm aware, are no longer in production? So they may not have much stock left.


I just use the cheapest Scaleway ARM VM: quad-core ARMv8, 2GB RAM, 50GB SSD for EUR 3/month.


> It's a bit odd that this would happen so soon after Cloudflare started testing & praising ARM-based servers

Qualcomm is a "Big American Co." They play big business. If they do invest a gigaton of money into something, they expect returns fast. In other words, those types tend to give up quickly.


Microsoft is a notable exception.


They were indeed doing so well... very sad.


Yeah, who would want to use ARM servers, especially these days? Performance and power efficiency are not the whole story nowadays. Most modern cloud servers rely on virtualization technologies. AFAIK ARM is very weak at running VMs. AMD's AMD-V struggles with that too. Even Docker EE does not support ARM (CE does, but only since recently).


This comment in 1996:

Yeah, who would want to use an Intel server, especially these days? Cost and vendor lock-in are not the whole story nowadays. Most modern data center servers rely on high-availability technologies. AFAIK Intel is very weak at running HA. AMD struggles with that too. Even Linux/FreeBSD does not support HA.


IMHO ARM64 server chips are suited to the cloud and cloud-native applications. Containers and microservices can make better use of the high core counts and greater memory bandwidth than old monolithic apps can. OpenStack, Kubernetes and Docker all work and are available with support. Virtualization with KVM also works fine.
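
As a quick sanity check that KVM really is just as usable on arm64: the classic probe of the (architecture-neutral) KVM device API looks the same on every platform. A minimal sketch:

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Minimal sketch: the KVM device API is architecture-neutral,
       so this compiles and runs unchanged on arm64 and x86_64. */
    int main(void) {
        int kvm = open("/dev/kvm", O_RDONLY);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));
        close(kvm);
        return 0;
    }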


>AMD's AMD-V struggles with that too

Are there any benchmarks to support this? A quick search on Google yields nothing.


Which company could be interested in buying this data-center CPU division from Qualcomm?


My guess would be Samsung; I don't see anyone else being a good fit. Centriq is already being fabbed by Samsung, and a next-generation CPU is in development with an improved ISA on Samsung's 7nm process. They also own Joyent, although I am not sure that's relevant anymore.


My guess is no one. The whole point was to apply the perf/watt gains you see in mobile SoCs to a larger die. Qualcomm wouldn't want to sell it to Samsung, just to see Exynos get better. Better to just destroy the project and write it off as a loss rather than help a competitor in your primary market.


*Server Chips*


I could see Fujitsu. They already have considerable investment in datacenter ARM. The government's post-K supercomputer is supposed to be the first big rollout of what they're doing with ARM.


I doubt it would happen for a number of reasons, but who is a) known for making amazingly powerful and power-sipping ARM chips, and b) rumored to be working on more powerful server/desktop class ARM chips for new products? Apple!

OTOH, Apple's chip design team probably doesn't have much (if anything) to gain from buying out Qualcomm's team.


Wait, since when does Apple sell servers?


They're on the edge of having a large enough datacenter presence that, since they're already manufacturing their own CPUs that are conscious of perf/watt, building a server SKU for their own internal use might make sense.


They use server class Xeons in their iMac Pro and Mac Pro lines - you don't have to build servers to use chips like that.


I don't think these have the single-threaded performance to practically replace a Xeon. They fit the server niche a lot better, where you realistically care more about throughput/watt. Nearly every use case in a Mac Pro that could take advantage of the extreme parallelism of these chips would be better served by the GPU.


I imagine they were reusing a lot of technology from their mobile chips, so it would be difficult to split it off.


AMD did it; it's how Qualcomm ended up with Adreno in the first place.


Intel, AMD, Samsung, Micron, Cavium, Broadcom.


Didn't they just make their entrance into server chips?


Indeed: the Centriq announcement was maybe 6 months ago.

What does this mean in practice?


Probably that they were trying to chase one or two major clients, who turned out to be uninterested.


Or ended up going with Cavium's ThunderX2 instead?


A bit of history here: https://www.theregister.co.uk/2018/05/08/cavium_thunderx2/

First Broadcom, now Qualcomm – if neither of those two can crack the server market, I don't hold out much hope for anyone else.


I'd say that nobody really tried hard enough. Any of them could make a true killer product. And Intel has little defence against defeat in detail, other than "being Intel." Many, many niches are not held tightly by Intel, and Intel can't spare the resources to fight for them all.


Or they are trying to push them toward a commitment.


Signaling that you are about to give up is a rather counterproductive strategy.


Worked for Airbus


"Worked" is a strong word. The A380 is on life support, since it's basically serving only a very few customers. On the whole this is a strategic loss for Airbus.


Disagree about the strategic loss.

It essentially killed passenger versions of the B747. Because there was no competition in that niche of the market before the A380, Boeing was able to earn very nice profits making them. The 747-8I is more than 3 times as expensive as a B737 MAX.


Yeah, but since then the 747 model has fallen out of fashion: the market now links smaller airports to each other rather than relying on a few large hubs, and regulations have also made it possible for twinjets to fly much longer distances than before, accelerating the transformation of the whole route model.


These things are cyclical.


Qualcomm's primary revenue stream is patent trolling. They maintain a veneer of respectability by also selling chips, which they maintain by using anti-competitive patent bundling practices to kill off competitors before they can become competitive. (Intel & nVidia spent a lot of money trying to break into the wireless market but gave up because Qualcomm was selling a chip & patent bundle for the same price as just a patent license, essentially giving away their chips for free.)

So any read of this announcement must be viewed through that lens. How was this server business going to affect their patent license gravy train?


> Qualcomm's primary revenue stream is patent trolling

This is not fair. By describing Qualcomm as a patent troll, you dilute the meaning of this term. CDMA alone is a pretty huge development and has enabled lots of fantastic technology. Why shouldn't they be able to reap the rewards of their patent portfolio?

> How was this server business going to affect their patent license gravy train?

The two are mostly orthogonal. QCOM is facing three major challenges right now: (1) they're still reeling from a narrowly averted LBO; (2) they're forgoing some revenue while "renegotiating" their licensing agreements with heavy hitters like AAPL; (3) their acquisition of NXP is in jeopardy, as it's up to Chinese regulators and current geopolitics seem to be working against them.

Their big bets like server chips were a good hedge against smartphone market saturation, but now QCOM wants to get leaner and more focused.


> By describing Qualcomm as a patent troll, you dilute the meaning of this term.

Limiting the term 'patent troll' to non-practicing entities is a fairly recent development. The earliest cite for the term in the media describes Stac Electronics as a troll in 1993, and Stac was very much a legitimate business, if a failing one.

https://wordspy.com/index.php?word=patent-troll


> CDMA alone is a pretty huge development and has enabled lots of fantastic technology. Why shouldn't they be able to reap the rewards of their patent portfolio?

Because they agreed to license it on fair, reasonable and non-discriminatory terms. If they hadn't agreed to do that, it would never have become an industry standard.

And then they proceeded to break that agreement.


> And then they proceeded to break that agreement.

What makes you think that agreement was broken? The licensing terms haven't changed. Qualcomm's licensees (Foxconn et al.) have been paying these royalties under the current terms since before Apple asked Foxconn to make the first iPhone.

What has changed is that device manufacturers' prices are now very high. In order to respond to licensees' concerns, Qualcomm introduced a cap on the royalty.


> What makes you think that agreement was broken?

Rulings from the South Korean court system for one, and preliminary decisions in its court battle with Apple, for another.


I agree that some aspects of Qualcomm's licensing practices are clearly in violation of that agreement, and US regulators ignored it for too long. But I find it a bit shady that the Korean regulator didn't ask or allow Qualcomm to explain and defend its practices, or even to respond to the accusations made by companies like Apple. In the US lawsuit where Apple and Qualcomm sued and countersued each other, Apple is accused of lying to and misleading regulators about their exclusivity deal, which resulted in Qualcomm holding back rebates. So I think we should wait and see what really happened there.

Further, Apple's claim that Qualcomm violated FRAND with respect to the royalty basis and rates is dubious at best -- it has never been upheld legally -- and does not demonstrate licensing malpractice by Qualcomm. Apple has challenged pretty much every wireless patent holder (e.g. Nokia, Ericsson, Moto, Qualcomm), refusing to pay and claiming the entire industry's licensing practices were in violation of FRAND, but ended up losing or settling every lawsuit, including the one with Samsung a few years back where Obama had to intervene to prevent an import ban on Apple's sales.


Confirming this. People from Qinghua University Group do whisper that Qcomm has their man on Trump's advisory panel and that Trump is effectively lobbying for them.

That "China must buy more American microchips" was pretty much about a single company.


Who else is left making ARM server chips? So far many, if not all, ARM server efforts have fallen apart.

It took a long while for Xeon to get where it is today in the data center; maybe getting ARM into servers is as hard as it would be to get Xeon running in cellphones.


Cavium/Broadcom


Ampere


I think the issue may be that the core war between Intel and AMD has heated up recently, and that will push the performance bar higher fairly quickly.


Not much seems to be happening on the performance front, but Intel can't charge quasi-monopoly prices anymore. They started selling 6-core desktop CPUs at the same price they'd been selling 4-cores at for... I dunno, 10 years? In servers, Epyc is applying the price pressure, and it actually has some performance advantages over Intel's offerings (more cores, more PCIe lanes).


How is a 50% increase in core count "not much on the performance front"? Before AMD released Zen, Intel used to push out yearly 5% IPC increases plus 5% clock rate increases; compounded, that's roughly 10% a year, so a 50% core bump is about four years of that cadence at once.



