But this is very intel: get psyched about a technology, dump billions into it for a few years, flake out. See: cloud services ca. 1997 and IoT ca. every 3 years since 2013.
> But this is very intel: get psyched about a technology, dump billions into it for a few years, flake out
Hit the nail on the head! I worked for Intel nearly a decade ago. I was a full-time employee for a few years. They saw Android and the Apple App Store capturing mobile users and decided to dump $$$ into an app store for the PC (Intel AppUp). The effort lasted a few years before it crashed and burned HARD. During this time they also tried to spin up MeeGo/Moblin for mobile and tablet, then a partnership with Samsung to work on Tizen, and bought up McAfee.
The amount of money Intel flushes down the toilet with fruitless endeavors is sad, especially considering the number of people that end up jobless (like many of my good friends did) because of it.
They started an app store? They were in entirely the wrong market position for such an endeavor: they don't own end-user relationships or have any software platform to leverage, let alone software expertise. In fact, they would likely have suffered a strategic tax from wanting to favor their own processors. What a dead-in-the-water idea. Definitely not the first time Intel has forgotten to stay in their lane.
They have huge contracts with all the OEMs and make the drivers for their chipsets. Planning to add that app store to the bundle as bloatware and quickly reaching an install base in the hundreds of millions wasn't that crazy.
I've noticed this recurring theme. It seems like eventually all companies go down this path if they become mega rich. At that point money alone doesn't matter as much as capturing new markets does, so lots of money goes down the drain. It's a good thing for the companies that get acquired and collect their paycheck.
I see it as like rich people who buy expensive things and let them rot because they forgot they bought them.
Sometimes bets like these turn out to be huge wins, like Microsoft going into the gaming platform space with the Xbox or Amazon building out cloud services. The Intel app store doesn't seem like that kind of bet to me, but I can see how the board could have been pushing for "outside the box" big bets.
I'd argue that Microsoft's and Amazon's ventures were at least tangential to what they were already doing on the back end, and that they already had most of the required talent to do it right.
I have no idea how Intel could work their way into running an app store beyond "we have tech and market dudes, good enough right?"
With their new fabric/modular system, the modules are effectively "apps", possibly sold by the app developers themselves: you could install the Adobe module or the Autodesk module and it would have hardware/RAM/storage tuned to those applications. It will start with DL and graphics accelerators but then morph into application-specific modules.
No no no, sorry, but it's a completely different story. Microsoft is a software company, and with the Xbox they sought to expand from their business software market into entertainment software.
It's still software, and aside from the hardware, which wasn't much different from a standardized gaming PC, they worked on building the software platform and developer tooling. Something they happen to be pretty decent at...
Public markets demand growth. So boards demand managers who do Growthy Things. When it doesn't grow, you toss the old managers lots of money for their service, find new managers who promise you new Growthy Things, and repeat.
My first internship was with Intel in the early 2000's with the group that was designing Intel's Infiniband NICs. Between when I accepted the internship and started they already knew they were going to abandon that product development, and the engineering group was all getting moved to chipset development. I got to catalog discontinued hardware.
I sat through a Xeon Phi presentation at university, about how it would revolutionize the university's "supercomputer". I left shortly after; did Phi come to nothing?
It was too difficult to get decent performance from Xeon Phi for general use cases. A few apps could make it work e.g. PGS bought up all the old stock for a big geophysics system.
Omni-Path went the way InfiniBand is going. Ethernet has caught up and surpassed the speeds, so using proprietary technology with fewer features isn't that attractive anymore.
But not faster than L3 cache bandwidth. Some cards can DMA to L3 cache. Granted, eventually it's flushed to main RAM, so might not help too much in the end.
Interestingly, MeeGo was really great to use. Super fluid and simple while not hiding Linux too far under the covers. If Intel had bought Nokia there could easily be three mobile OSes in current use.
Interestingly, Windows Mobile was really great to use. Super fluid and simple while integrating well into the Windows ecosystem. If Microsoft had bought Nokia there could easily be three mobile OSes in current use.
I used them both and there was no comparison. Being snarky because you think it's clever doesn't change that.
The design of the UI and the design of the apps was like the first time using a brand new system where everything has been thoroughly thought through. It used touch very well to be as basic and straightforward as possible.
I'm not really being snarky - it just seems like it.
Lots of people really thought Windows Mobile was great (including here on HN). Meego was great too.
It doesn't matter.
The point is that it's unclear why Intel (a company that has failed at every consumer-focused business they have ever tried) would have done better with Nokia than Microsoft did, given that MS has had considerable success in consumer markets, already had carrier relationships, had developer relationships etc etc.
>MS has had considerable success in consumer markets, already had carrier relationships, had developer relationships etc etc
By consumer market, you mean the PC market. They were tiny in the mobile market. It's the reason they tried to shove Nokia onto their platform.
Nokia already had the dev relationships, carrier relationships, and retail relationships. They didn't need any of that from Microsoft. The only things they needed were money and strong leadership. Nokia was mismanaged into oblivion.
> It doesn't matter
No, it does matter. They already had one foot in the grave, but jumping ship to Windows Mobile was just the nail in the coffin.
Anyone would have done better if they were left alone. The point is that they already had great technology. I'm not sure how this isn't sinking in, but you seem desperate to point out that Microsoft tanked Nokia, which is actually already common knowledge.
A little less sarcasm and snark will probably leave room for absorbing what people are saying.
I often see the same sentiment here, re: not just MeeGo but also Palm OS and WP7. But the OS being 'well thought out and cool' simply didn't matter by that point.
Apple had revolutionized the category and was making over 90% of the profits, had all the apps and other ecosystem lock-in, the ipad on top of that, the highest user satisfaction, an avalanche of free publicity, almost all of the consumer mindshare, and a fleet of rapidly expanding and phenomenally popular stores for distribution. A brand new "well thought out os" with no apps simply wasn't going to make any kind of dent in Apple's market share.
Meanwhile the anti-Apple enthusiasts had been coalescing strongly around Android for several years. A bunch of big companies were fighting tooth and nail for the scraps and who won? The guys with the "most well thought out Android skin"? No. It was the guys who were by far the most ruthless and shameless competitors. That's what won between all the android handset makers, that's who won between Android makers and other "well thought out OS's" and imho that's who wins all hypothetical matchups as well if all you're bringing to the table is being cool and well thought out.
Now if Intel wanted to lose $5B a year and buy market share then yeah, they could have done that, but they would have never made back that money.
The N9 was released in 2011, only two years into the modern smartphone era. The hardware and software were fantastic and it got a lot of attention but died due to no other devices following it, not because of a lack of demand.
"Despite a limited release,[5] the N9 received widespread critical acclaim, with some describing it as Nokia's finest device to date. It was praised for both its software and hardware, including the MeeGo operating system, buttonless 'swipe' user interface, and its high-end features"
My sentiment exactly. Nokia was in a perfect position to launch a third mobile OS, an even better position than Windows Mobile was in. Nokia was still on top even though it was declining. Many Nokia app devs were waiting to jump into the MeeGo ecosystem if Nokia committed. People were waiting for Nokia's next move. Unfortunately, it was not a great one.
At launch, we know the software was great. The hardware was great. But the management was abysmal. A lot of internal politics, burning money, and ultimately MeeGo was killed before it was born into the world. The irony is that the N9 was still a great, revolutionary, critically acclaimed phone, despite everyone knowing it was going to be short-lived.
I wouldn't have been so bitter about it if the N9/MeeGo had not been great.
Actually it was only Nokia's mobile unit that was bought, and after how everything went, Nokia Siemens Networks got enough money to buy the Siemens part back, became the current owner of Bell Labs, and, via the HMD agreement, one of the best Android OEMs.
As for Windows, although it lost on phones, it is winning in the tablet space via 2-in-1 hybrid/convertible laptops against Android and ChromeOS tablets.
And most PDA-like industrial deployments are still mostly Windows-based.
It was web hosting in 1997 not cloud services - I remember it well, as I was working for a vendor. It seemed like such a stupid grab for incremental income and coolness even then.
Now they're doing drone shows and sports as incremental business attempts. Sometimes I think of Intel as a PR agency with a semiconductor fab in the backyard...
A bit later ('98 or '99, I think) I worked at a content/community site for older folks. Intel gave us this odd speech-synth software/API bolted on to a low poly 3D face that could move its lips in vaguely realistic ways. There was a plugin needed to use it in a browser, and we were given money to figure out something useful enough to do with it that people would bother to install the plugin.
Pretty sure you can guess how well that worked. My suggestion, making the tacky 90's robot recite Cartman lines or otherwise swear in ridiculous ways, was... not exactly in alignment with the rest of our content strategy, you might say. And probably not what Intel had in mind. But I bet adoptions would have broken the two-digit barrier.
I mean, it seems like at some point, someone inside Intel should have been capable of saying, "You built this literal talking head and you don't know why, so you're proposing we pay a media company that specializes in health, retirement investing, and old people sex to come up with a reason for us?"
You are correct. It was basically a giant "iMac": an all-in-one server box the size of a small room, with racks and roll-out keyboards. They were TRYING to write what would become cloud services, but they didn't have the talent: the team was pulled from all over and had no experience in backend/devops, let alone the nascent Linux kernel that was starting to blossom.
A former colleague worked for them in those days and had insane stories. They would fly him first class to Oregon from Virginia or Jersey every two weeks, just for a staff meeting!
Intel flew Fellow Fred Pollock back and forth between Cali and his home in Florida for years in the late 80's and early 90's. The guy was OG brilliant, for REAL, but it was still funny that he had Intel by the shorties.
20 years ago at Intel, the description was: Intel is a fab company that designs microprocessors and motherboard chipsets on the side.
I think they have a tendency to ditch products prematurely. They enter markets that will take 10 years to mature and then cut their losses when the expected level of profit doesn't appear. They even do this to divisions that are actually turning a profit. Which goes over really badly with their customers.
I seem to recall Linley Gwennap or someone at the Microprocessor Report saying that this was because these other industries were too different, in terms of R&D requirements and margins, from Intel’s core business, making PC CPUs; none of these other businesses could ever produce the margins that Intel would need and none of them required the level of R&D that Intel could direct at them. This was when Intel was about to debut Larrabee.
Yes, that is 100% accurate. Their core competency is making wafers. CPUs specifically, high-margin servers, low-margin laptop parts, ... and then chipsets (so many chipsets). Everything else is just a toe in the water. (Their investing arm also does quite well.)
> They enter markets that will take 10 years to mature and then cut their losses when the expected level of profit doesn't appear.
Amazon was far from the first company to do cloud. For instance, Sun Microsystems pushed it in the 90s, and Loudcloud, I mean "Opsware" did it in 2003.
I think the difference was that Amazon committed to it for a loooooong time.
Itanium isn’t a good example because HP heavily subsidized its continuation after a few years. Without that cash, Intel would’ve (justifiably) killed it then.
Word on the street was that Nervana power consumption was much too high and a big problem (likely thermal issues). The software will be repurposed for use on Habana. So no, AFAICT it wasn't the software, it was the actual Nervana chip that was the problem.
Not the case... AFAIK (based on talking to people who have played with Habana test systems), any really performant applications still need to be hand-optimized in their assembly.
This is unbelievably dumb. Folks at NVIDIA HQ are no doubt breaking out their best champagne over this news: their Tensor Cores got a few more years of life thanks to this decision, and they retain the pricing power as the only provider of practical deep learning acceleration that actually works and has good tooling. I was looking forward to their TPU-like "N" offering in particular. Roughly the throughput of Google TPUv3, better silicon expertise, _and_ I'd be able to buy one. As an applied scientist, I'd pay good money for 100TFLOPS of bfloat16 throughput, better yet if I could stick 4 of those things into each machine and train things over lunch that'd normally take a week to train. And it was pretty simple: just a systolic array with some extras to efficiently do convolutions, nothing super complicated, which means that unlike the more DSP-like approaches such as Habana you don't have to spend years writing and debugging a specialized compiler. You get this throughput _now_. Dumb, dumb move on Intel's part.
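For anyone wondering why a systolic array counts as the "simple" option here, a toy simulation of the output-stationary dataflow makes the point (this is purely my own illustrative Python/NumPy sketch, not anything from Nervana's actual design): every processing element just multiplies the two values streaming past it and accumulates locally, with no instruction stream and nothing for a compiler to schedule.

```python
import numpy as np

def systolic_matmul(A, B):
    """Toy output-stationary systolic array computing C = A @ B.

    PE (i, j) owns C[i, j]. Rows of A stream in from the left edge and
    columns of B from the top edge, each skewed by one cycle per row/column,
    so matching operands meet at PE (i, j) on the same cycle.
    """
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    a_reg = np.zeros((M, N))   # A value currently held in each PE
    b_reg = np.zeros((M, N))   # B value currently held in each PE
    for t in range(M + N + K - 2):          # cycles until the array drains
        a_reg = np.roll(a_reg, 1, axis=1)   # A values shift one PE to the right
        b_reg = np.roll(b_reg, 1, axis=0)   # B values shift one PE down
        for i in range(M):                  # inject skewed A inputs at the left edge
            k = t - i
            a_reg[i, 0] = A[i, k] if 0 <= k < K else 0.0
        for j in range(N):                  # inject skewed B inputs at the top edge
            k = t - j
            b_reg[0, j] = B[k, j] if 0 <= k < K else 0.0
        C += a_reg * b_reg                  # every PE does one MAC per cycle
    return C

A, B = np.random.rand(5, 7), np.random.rand(7, 3)
assert np.allclose(systolic_matmul(A, B), A @ B)
```

The hardware version is "just" this grid replicated a few hundred times over, which is why drivers, lithography shrinks, and framework integration are all so much more tractable than for a VLIW/DSP design.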
Just in time for NVIDIA to blow them out of the water completely on both the included RAM (which they're increasing) and performance (+40% if rumors are to be believed) with Ampere. AMD has a looong way to go.
AMD might still be able to offer competitive price/performance. Even if Nvidia is faster, I'm glad there's competition. For a while, Intel was really dominant, and they really stagnated.
1. It's not 100TFLOPs - you need fp32 to accumulate dot product, at which point you're getting much less. But even if you do 16/16, there's no way you'll really get near the roof of that roofline model.
2. Each V100 is $7K
3. 4x V100s not only cost as much as a decent car, but require a specialized chassis and a specialized PSU: they're 300W _sustained_ each (substantially more in momentary power consumption), and require a powerful external fan to cool them properly.
I want 400TFLOPs of bfloat16 dot product / convolution bandwidth under my desk, in a reasonably quiet, sub-1KW power envelope.
1) not true. I have one and hit 100TFLOPS with large enough batches in fp32 accumulate mode. Other benchmarks agree, so I'm not sure what numbers you're suggesting.
https://arxiv.org/pdf/1803.04014.pdf - no matter what they did, they could not go beyond 83TFLOPs in fp16. And that's just matrix multiply. Any kind of deep learning workload is going to be a lot slower than that.
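For context, a quick back-of-envelope check (my own numbers, using the commonly quoted V100 specs rather than anything from that paper) shows why ~83 TFLOPS is already a large fraction of what the silicon can theoretically deliver:

```python
# Commonly quoted V100 figures; treat this as a rough sanity check,
# not a vendor-blessed calculation.
tensor_cores = 640                 # Tensor Cores on a V100
flops_per_core_per_clock = 128     # one 4x4x4 matrix FMA = 64 MACs = 128 FLOPs
boost_clock_hz = 1.53e9            # ~1530 MHz boost clock

peak_tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(f"theoretical peak: {peak_tflops:.0f} TFLOPS")           # ~125 TFLOPS
print(f"83 TFLOPS measured = {83 / peak_tflops:.0%} of peak")  # ~66%
```

Real end-to-end training workloads, with their memory-bound layers and launch overhead, typically land well below even that GEMM-only number, which is the point above.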
I'd bet most successful hardware advances in ML/AI will come from companies that also push the field's edge in software, because all the others would be at a disadvantage: their "better" hardware will mismatch the software, and that mismatch will increase development costs A LOT.
Google's TPUs, Tesla's whatever those are etc. Bet on them!
NOT on whatever Intel or IBM or AMD or Arm are doing in this field. Heck, even Nvidia will probably start losing at this game in the long run. The field is too dynamic and volatile; companies need to "eat their own dogfood" by vertically integrating hardware + software before pushing hardware to sell to others too. This is not the old game of general-purpose hardware anymore...
I think NVIDIA is in a good spot with its strong software ecosystem, and an army of devtechs.
Their architecture is pretty general, so that if the ML algorithms change they adapt better than say TPU.
I can't find the video of the talk anymore, but IIRC late in the TPUv3 process Google had to rebalance their hardware to account for new algorithms/changes.
Second, most companies that want to deploy AI cannot afford to build their own custom hardware. Even most carmakers partner with Mobileye or NVIDIA.
These big companies also have research teams whose job it is to stay on the ball, research and develop AI techniques and influence how the hardware has to change.
As for eating your own dog food, I think NVIDIA does just that with their autonomous driving software stack, robotics kit, devtechs for customers and library optimization/porting.
With sufficient focus, funding and execution I think AMD and Intel can reach a similar spot.
That said, my hunch is that the legacy and compatibility that the general-purpose hardware makers have to carry forward will become a problem. But until that happens (i.e., specialized HW delivers better bang/$) in 2-10 years, they will likely figure out how to alleviate that and/or develop more specialized hardware.
Seems that if Tesla was able to build its own custom hardware and have it working in 2019, other carmakers which are much larger and profitable would easily be able to do it, if they cared to.
Domain expertise is a finite resource. Couple that with the fact that automotive manufacturers are not adept at higher-level software development (as opposed to embedded), and I'm not surprised companies like Ford have continued to push out dates.
Actually Intel is at the forefront of deep learning software as well:
- MKL-DNN (rebranded DNNL) is unique and has no equivalent in the ARM space. It even supports OpenCL
- OpenCV
- MKL
- PlaidML and NGraph follow the current trend of deep learning compilers
- They had Nervana Neon as a deep learning framework, which I'm pretty sure contributed to the Nervana acquisition, as at the time it was even faster at convolutions than Nvidia's cuDNN.
I'm really looking forward to their Xe GPU.
Also I expect them to leverage MLIR or contribute heavily to the linear algebra dialect -> LLVM IR optimization passes.
Countercounterpoint: I was recently hiring a firmware engineer and my recruiter handed me not one, not two, but three resumes from the same SSD engineering group.
Fun fact: Intel uses them in place of bribe^^ uh, I mean as a reward to volume resellers for meeting sales goals, and during specific SKU promotions (buy a CPU + MB, receive a free SSD).
And if you know what you're doing and have a rep at Synnex or Ingram that's willing to play along, you can really get things cranking.
I sold over 2 million dollars worth of Intel SSDs to Amazon customers in the span of a week once. Intel was running an extremely nebulously defined promo for 'new' customers where resellers could get 90% off disty price on enterprise SSD.
I uploaded all the eligible SKUs to Amazon at about 15% off list, wrote up some really gross Python scripts to manage orders between Amazon and Synnex and to submit the deal registrations, and then hit my six-month sales quota in an afternoon.
The double bonus was 6 months later when I then registered for all the backend spiffs. The backend rebate on some of this stuff sometimes exceeded 10% of list price. To date it's the only deal I've ever put together that had more than 100% gross margin.
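To make the "more than 100% gross margin" part concrete, here's roughly how the math can work out with placeholder prices (the prices are hypothetical; only the 15% / 90% / ~10% figures come from the story above):

```python
# Hypothetical prices; only the discount percentages come from the story.
list_price   = 100.0
disty_price  = 70.0                 # assume disty cost is ~70% of list
cost         = disty_price * 0.10   # promo: 90% off disty price
revenue      = list_price * 0.85    # sold on Amazon at ~15% off list
spiff        = list_price * 0.10    # backend rebate of ~10% of list, claimed later

gross_profit = revenue - cost + spiff
print(f"gross margin: {gross_profit / revenue:.0%}")   # ~104%, i.e. over 100%
```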
Well played, but I'm genuinely not understanding why Intel would do this. If there's consumer demand for their SSDs at 15% off list (as you experimentally demonstrated), why would Intel run a distributor promo that offers 90% (what!) off the even lower disty price? Did somebody just have a KPI of units shifted to hit?
Because "The Channel' sales model provides all sorts of strange incentives from vendor reps through to end users. Everyone is playing everyone off everyone.
Let's say there are two solutions that would solve my end user's problem equally well. Both have list prices of 10K, both have a normal disty cost of 7K, but brand X is running a promo where I get a 50% front-end discount from disty, and brand Y is not.
Now I call the customer up and let them know that not only is brand X the best solution, but because of my great relationship with brand x I've secured them a 20% discount off list.
So now the customer is getting what they need, cheaper, and I've got more margin in the deal, which is what my commission is calculated from.
So essentially when vendors do this they're not trying to make things cheaper for the customer to increase demand, they're trying to make sure that their products put the most money in sales people's pockets so that sales people will push their stuff the hardest.
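Plugging in the numbers from this example (list 10K, disty cost 7K, 50% front-end promo, 20% discount passed to the customer), the incentive gap is easy to see; a quick sketch:

```python
list_price = 10_000
disty_cost = 7_000

# Brand Y, no promo: customer pays list, reseller keeps the usual spread.
margin_y = list_price - disty_cost          # 3,000 on a 10,000 sale

# Brand X, 50% front-end promo: reseller buys at half of disty cost and
# passes a 20% discount off list through to the customer.
cost_x   = disty_cost * 0.5                 # 3,500
price_x  = list_price * 0.8                 # 8,000
margin_x = price_x - cost_x                 # 4,500 on an 8,000 sale

print(margin_y, margin_x)  # customer pays less AND the reseller earns more on brand X
```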
Though on this specific occasion I was following the letter (or lack thereof) of the promotion, but certainly not the spirit. Grabbing existing demand vs generating new demand.
The entire purpose is so that they can juice up the numbers for when they announce quarterly numbers.
For the kind of executives who stay at a company like Intel, the only thing they care about is the company's stock price, and they'll do anything to keep it high, even if it comes at the expense of quite literally destroying, or directly hemorrhaging, the company itself in the long term. It's how you end up with crazy situations like this.
I actually got scolded by the Intel rep for this one. Mostly she was mad that there was no way they'd let a loophole like the one I abused go through again next year, so her YoY numbers were gonna be awful.
I like Intel SSDs; they often cost a little more but have better 99th-percentile latency than competitors. If an Intel Inside sticker meant an Intel SSD rather than an Atom or some other hunk of junk, that sticker would mean something.
If you go to Micro Center's website you can search for SSDs based on their controller chip. Specific controllers and large heatsinks seem to be what's next for the PCIe 4 SSDs that claim 5GB/s or more.
They would rather sell them in bulk... to bulk buyers... for scale reasons. They average around 10% of the market. They were number three in volume and number two in revenue for a long time, and still float around that point.
It's rather like proving a negative. If they were really popular, chances are I'd have seen one by now - but I haven't. That doesn't mean they aren't popular, but it reduces the odds.
Otherwise, Intel SSDs have quite a bit of share in the datacenter space. If you haven't seen them before, that's likely because you're not an enterprise customer.
* Intel GPUs. Have worked quite well on Linux for the past 10 years (albeit without ace 3D performance, due to hardware limitations)
* Intel WLAN. Works great on Linux. Their drivers even worked at a time when almost no other FOSS WLAN drivers were around.
* Intel LAN. Works great on Linux. Great performance.
I'm not at all a fan of Intel. Their x86-64 CPUs are overpriced compared to AMD's Ryzen and Threadripper. But the above three? I would not run away from these; on the contrary.
It's not that the products are poor but that Intel has a long and proud tradition of killing any product that is not directly related to x86.
Intel GPU started with i740 (a stand-alone AGP card), which is the last stand-alone GPU they made... and that was 1998 or so.
Their networking chipsets were a result of Intel's own x86 motherboard business, especially in the server arena. Intel also bought out companies in the late 90s/early 2000s to build a stand-alone network appliance business (routers, firewalls, SSL accelerators, etc.), only to promptly kill that off when x86 started stumbling thanks to the P4 and AMD's competition.
Although the CPU hasn't been improved much, Intel has improved its iGPU quite a lot and now outperforms the current Ryzen mobile. [1]
I think it is natural for Intel to start making dGPUs to replace popular mainstream mobile GPUs such as the Nvidia MX250. And that's where Intel's new DG1 starts.
Well, not really the iGPU; those low-res benchmarks rely more on the CPU, and Ice Lake is quite far ahead of 1st-gen Zen. Not to mention the Ryzen is a 14nm product and not even priced in the same segment.
The coming Ryzen Mobile with Zen 2 and Power Optimised Vega will do far better.
"Only the paranoid survive"-era Intel would be treating the receding x86 monopoly and the rise of GPGPU as an existential threat. Xe would be treated as a top priority and not be allowed to fail.
The guys in charge now? Who the fuck knows. They seem good at making immense amounts of cash from yesterday's defacto monopoly, not so much at setting up tomorrow's.
> "Only the paranoid survive"-era Intel would be treating the receding x86 monopoly and the rise of GPGPU as an existential threat.
Not really. GPGPU is not an existential threat to Intel, even now neither Nvidia or AMD are hurting Intel much. But TSMC is an existential threat, even with the recent rumours of splitting up Intel Fabs, it is all too late.
It's the two in combination that I believe to be an existential threat. nVidia is succeeding at entrenching CUDA as a standard, just as Intel's x86 grip is loosening and processors are becoming commodities.
This leaves nVidia in a great position to commoditize their complements, offering high-performance ARM CPUs (like Amazon is offering) for a tiny margin just to fuck with Intel. And it really would. Versions with nVidia GPUs in the cache hierarchy (HBM) are just the icing on the cake.
Intel smartly got out of the memory business because it didn't want to join a race to the bottom fabbing commodity transistors, but there's a good chance that's in their not so distant future.
And yeah, TSMC is a pretty big threat too... I was ignoring 10nm as part of this, because we don't have any data yet as to whether it represents a permanent shift in Intel's ability to compete on fabrication. I personally doubt that. But it is possible.
Cell was a PowerPC variant, which is still doing ok as the POWER series of CPUs. The Summit supercomputer[1] used POWER9 and is the fastest supercomputer in the world.
If you go speak to IBM and you have an interesting app you can get time on a POWER9 server to see how well it runs.
Calling Cell a PowerPC is a bit of a stretch. The only reason to use the Cell was the SPEs, which had their own custom SIMD instruction sets. The PowerPC core was just there for coordination.
Any work you put into optimizing your code for the Cell would be completely thrown out if you brought it to another POWER architecture. At that point you might as well have hopped over to x86, ARM, CUDA, Xeon Phi, the Connection Machine, whatever.
Man, Larrabee was really cool! I wish that something had become of the software-defined pipeline thingy that let them add support for new graphics APIs; IIRC they added DX9 or Vulkan or some other pipeline support to the card after it was made.
Now I want to get one, or a Phi or something from that era.
Intel had the i740 GPU in 1998. Poor performance and cancelled after 2 years. I don't know what you would call its internal instruction set.
Intel was going to release Larrabee, their GPU based on x86, in 2010 but cancelled it because of poor performance.
All of Intel's integrated GPUs are adequate for desktop use and light gaming but they have poor performance compared to any Nvidia or AMD GPU. I don't know what you would call their internal instruction set.
We will see how Intel's new Xe GPU does. I'm predicting that they will cancel it in 2 years because of poor performance.
Back in the early days of consumer 3D graphics accelerators, there was no such thing as a shader program and GPUs were not Turing-complete. You had to pick from among the texture blending modes that your GPU natively supported. Some GPUs exposed an interface that was basically a large portion of the OpenGL 1.x state machine, implemented in hardware.
You had that already in computers like the Amiga, where you would arrange a memory region with the set of operations that you wanted to do, enable DMA, and let the blitter carry on doing the job on its own.
Likewise with 16 bit game consoles with their sprite engines.
Not surprised: 100GbE interconnects vs a proprietary interconnect. A complex architecture for which a hell of a lot of work has to be done in the compiler (anyone remember Itanium?) vs something that's barely more complex than NUMA. Lower TDP per node is relevant for large-scale datacenters.
Yes, Nervana looks much sexier, but I'd expect Habana to be more useful/usable overall.
A hell of a lot has to be done for Habana, too. And a hell of a lot had to be done just for their edge SKU. My understanding is, their "N" SKU was basically TPUv3 for the masses; it's not even a question that there was massive product-market fit: if they had integrated it into TF or PyTorch, it'd have taken off like a rocket. Their "I" SKU was more along the lines of Habana: more Movidius-like, but scaled up. That thing (Movidius) has a specialized VLIW architecture (SHAVE) for which they had to write a compiler and roll it into LLVM, and if you read the architecture docs you'll see why. 12 VLIW cores, a specialized memory arch... it's a bear to program for. Nobody will ever touch this shit with a 10-foot pole if they can avoid it. Systolic TPU-like stuff is more constrained in what it can do, but it's much easier to use and write drivers for (not to mention shrink the lithography with). And it's also nowhere near its peak potential either, from either the throughput or energy efficiency standpoint. _And_ you can scale it down to the edge as well, as Google has shown. It's just a better approach, at least for the foreseeable future.
Yeah "barely more complex than NUMA" was a slight exaggeration and only meant in comparison to the other ;-) Judging by the architecurial differences layed out in the link, Habana is still hard-as-hell to develop a toolchain for; but Nervana is the kind of stuff that can drive people insane.
I thought it was vaporware. I went to their party where they talked about it and it was underwhelming. They did not give any numbers comparing it to Nvidia and such.