Intel Exiting 5G Modems (intel.com)
404 points by ItsTotallyOn on April 17, 2019 | 216 comments



Intel couldn't make WiMax work right (too power hungry and low speed), their attempts to build an LTE chip have fared worse than Samsung's and Qualcomm's, to the point that they are a full 5 years behind their competitors, and their cable modem chipsets (Puma 6/7, used in the Xfinity converged gateway) are fatally flawed and DoSable with a few Kbps of traffic, while having horrible bufferbloat.

At this point I think Intel's entire advantage over the past two decades was being on the bleeding edge of silicon processes, paired with a middling silicon design team and good firmware devs that could patch most flaws in microcode.

The process lead has disappeared, and when working with RF frontends (LTE, WiMax, cable modems, WiFi) their ability to cover up implementation errors in the driver is limited.

Intel's prospects for the next few years look dim in this context, given the tens of billions wasted on the aforementioned forays into making radio chipsets and the declining revenue of their legacy CPU market paired with serious manufacturing constraints that have hamstrung their supply chain.


Intel could not resist bufferbloat.

I have a book about UPnP written by some Intel engineers and it describes a bastardization of http that is so insane that there is no way I would trust the company to embed an http server inside its management engine. It just wouldn't be possible with an attitude like that to get security right.

Maybe INTC went up tonight because investors know now that Intel will stop wasting money on ads proclaiming themselves to be the leader in the 5G "race". (How come nobody cares about the finish line?)


What is bufferbloat?


When your buffers are too big for the connection, it induces latency: your computer sends 100 Mbps of traffic, yet the modem is capped to, say, 5 Mbps. It's better to drop the excess traffic, as TCP and UDP applications will throttle themselves, rather than let 500 ms to 1 second of latency be induced by holding that data in a buffer, resulting in a jerky user experience.
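To put rough numbers on that (a quick back-of-the-envelope sketch in Python; the buffer size here is an assumption for illustration, not a measurement from any particular modem):

  # How much latency an oversized FIFO buffer adds once the link saturates.
  buffer_bytes = 256 * 1024      # assumed device buffer: 256 KB
  link_rate_bps = 5_000_000      # upstream capped at 5 Mbps
  delay_s = (buffer_bytes * 8) / link_rate_bps
  print(f"standing queue delay when full: {delay_s * 1000:.0f} ms")  # ~419 ms

A 1 MB buffer at that same 5 Mbps is already pushing 1.7 seconds.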

Edit: Most Comcast/Xfinity modems and converged gateways are Intel based and have this and other issues, pure garbage devices.


This is the main reference I used when shopping for a cable modem a couple months ago:

https://badmodems.com/Forum/viewtopic.php?t=65000

You'd have to scroll down a bit on the page (I'm copying the factual data in case the remote link goes down at a later date).

The bad news is that a lookup table is literally required to know what chipset is in use inside of the device - just like with all of the WiFi adapters :(

(PS, this table looks like garbage, but many users on mobile will cry if I prefix with spaces to make it look OK... I'd really rather every newline were a <br> element.)

Motorola/Arris Modems:
Motorola SB 6121 4x4 (Intel Puma 5)
Motorola 6180 8x4 (Intel Puma 5)
Arris SB 6183 16x4 and Motorola MB7420 16x4 (Both Broadcom)

NetGear Modems:
NetGear CM1100 32x8 (Broadcom)
NetGear CM1000 32x8 (Broadcom)
NetGear Orbi CBK40 32x8 (Intel Puma 7) - Note: I tested this model and was told that the modem built into the Orbi does not have the same issues as other Puma 6/7 modems. I haven't seen any issues with it since using it. The Orbi modem is based on Netgear's CM700.

TP-Link Modems:
TP-Link TC-7610 8x4 (Broadcom)

Routers that work with zero issues with the above cable modem list, in my current collection:
Asus - RT-AC66U and GT-AC5300 (OEM and Merlin FW)
D-Link - Many model routers tested, including COVR models
Linksys - WRT1900AC v1 and WRT32x v1
NetDuma - R1, current firmware version (1.03.6i)
Netgear - Orbi CBK40, R7800, XR450 and XR500

Forum User Modem and Router Experiences:
Arris - SB 6141 8x4 (Intel Puma 5) and D-Link DIR-890L and ASUS RT-AC5300
Arris - SB 6141 8x4 (Intel Puma 5) and Asus RT-AC66U
Arris - SB 6183 16x4 (Broadcom) and Linksys WRT1900ACM and WRT32x and NetGear XR500
Arris - SB 6183 16x4 (Broadcom) and NetGear XR500
Cisco - DPQ3212 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDuma R1 and NetGear R7000
Motorola - MB 7220 (Broadcom) and Asus RT-AC66r, D-Link DGL-4500, NetDuma R1 and NetGear R7000
TP-Link - TC-7610 8x4 (Broadcom) and NetDuma R1


> (I'm copying the factual data in case the remote link goes down at a later date).

That's what the Internet Archive is for! The URL above has now been archived [0].

[0]: https://web.archive.org/web/20190417095456/https://badmodems...


Now we have a second reference in case, god forbid, something happens to the Internet Archive. Something happening is always possible!


Put your (cable/DSL/fiber) 'modem' [it is a router] in bridge mode and be done with it. Anything would work then: Ubiquiti gear (which I use), but also different (more open source) stuff like Turris Omnia, Turris Mox, or a lovely PC Engines APU2.


The fault isn't just with cheapo Intel edge hardware - a lot of ISP infrastructure is built with the old telco mentality of "we never drop data". Which, as you correctly point out, is precisely the wrong thing to do for an overcongested IP network.

EDIT: And the problem isn't just the resulting end-user latency. TCP's congestion control mechanisms (i.e. the ones that let the endpoints push as much traffic as the network can bear and no more) rely on quick feedback from the network when they push too much traffic. The traditional, quickest, and most widely implemented feedback signal is the packet drop - when drops are replaced with wildly varying latency, it's hard to set a clear time limit for "this packet was dropped", and Long-Fat-Network detection is a lot harder.
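A toy sketch of that loss-driven feedback loop (classic AIMD, heavily simplified; this is not any particular TCP stack's actual code):

  # Grow the congestion window each RTT until a drop is signalled, then halve it.
  def aimd_step(cwnd: float, loss_detected: bool) -> float:
      if loss_detected:
          return max(1.0, cwnd / 2.0)  # multiplicative decrease on loss
      return cwnd + 1.0                # additive increase otherwise

  cwnd = 1.0
  for loss in [False] * 10 + [True] + [False] * 5:
      cwnd = aimd_step(cwnd, loss)
      print(cwnd)
  # With prompt drops this sawtooth tracks the link's capacity; with seconds
  # of buffering, the loss signal arrives far too late to steer anything.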


So, with TCP the speed should depend on the bandwidth-delay product (which depends on the full peer to peer round-trip latency, because it needs ACKs coming in faster than it empties the window, otherwise the sending peer just waits).

Whereas most UDP applications are constant rate, with some kind of control channel.

Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)

However, when congestion occurs, the data you sent that sits in these buffers may already be stale and irrelevant, and the problem is that there's no way to invalidate that cache on the middleboxes. It leads to worse performance because it clogs up pipes with stale data when those pipes get full, so it prevents faster unclogging. This results in a jerk in TCP, because it scales back more than it should have, thanks to the unnecessary wait for the network to transmit the stale data.


> Bufferbloat should not matter for your home connection. (Unless it is constantly in use by more than one client.)

That is wrong. A single client can saturate the connection easily (e.g. while downloading a software update or uploading a photo you just took to the cloud). Once the buffers are full, all other simultaneous connections suffer from a multi-second delay.

The result is that the internet becomes unusably slow as soon as you start uploading a file.


You can see this effect by going to fast.com.

Using my smartphone, it induces and measures > 700ms latency on my cable modem connection. That’s worse than old-fashioned high-orbit satellite internet!


I'd encourage you to get a non-Intel modem


The problem with bufferbloat is not necessarily excess retransmissions or stale data (although that does happen); it is primarily that delay significantly increases in general, and that delay in competing streams or intermittently active streams is highly variable.

Traditional TCP congestion control in an environment where buffers are oversized will keep expanding the congestion window until it covers the whole buffer or the advertised receive window, even if the buffer holds several seconds of packets. There may be some delay-based retransmission, but traditional stacks will also adapt and assume the network changed and the peer is now expected to be 8 seconds away.


I have a 4G modem. Whenever I watch a video and skip forward a bunch of times, the connection hangs and I have to wait for about a minute before it resumes normal operation.

Is this bufferbloat? I guess what happens is that a bunch of packets get queued up and I have to wait until all of them are delivered?


Yes, that sort of jerky behavior is symptomatic of bufferbloat. Multiple 4G and 5G devices have now been measured as having up to 1.6 seconds of buffering in them. They are terribly bloated. It was my hope that the algorithms we used to fix WiFi ( https://www.usenix.org/system/files/conference/atc17/atc17-h... ) - where we cut latency under load by 25x and sped up performance with a slow station present by 2.5x - would begin to be applied against the bufferbloat problem there. Recently Google published how much the fq_codel and ATF algorithms improved their WiFi stack, here:

http://flent-newark.bufferbloat.net/~d/Airtime%20based%20que...

Ericsson, at least, published a paper showing they recognized the problem: https://www.ericsson.com/en/ericsson-technology-review/archi...

and I do hope that shows up in something; however, the chipsets on the handsets themselves also need rational buffer management.


That's probably something else. The server rate limits your client, or the ISP rate limits due to too many bursts, or the client needs to buffer more of the video.

To exclude those cases you'd need to watch the network traffic with something like Wireshark and look at retransmissions. If it suddenly shoots up and then packets start to trickle later but very slowly, then that could be bufferbloat.

But the 1 minute seems too long.


The whole connection hangs - it's not the server or buffering and I doubt it's the ISP.

Reading more about it, you are correct about 1min being too long, therefore it's probably not (just) bufferbloat.


Probably not. It's just crappy software.


Any cable modem brands known to not use bufferbloat-ing NICs?


The DOCSIS 3.1 standard introduced a good but not great Active Queue Management scheme called PIE. But upgrading your modem only helps with traffic you're sending; your ISP needs to upgrade their equipment to manage the buffers at their end of the bottleneck in order to prevent your downloads from causing excessive induced latency.


The bufferbloat project introduced a great (IMHO) fq + AQM scheme called "cake", which smokes the DOCSIS 3.1 pie in every way, especially with its new DOCSIS shaper mode in place. It's readily available in a ton of home routers now, notably OpenWrt, which took it up 3 years ago. It's also now in the Linux mainline as of Linux 4.19. The (first of several) papers on it is here: https://arxiv.org/abs/1804.07617

I hope to have a document comparing it to DOCSIS 3.1 PIE at some point in the next few months; in the meantime, I hope more people (especially ISPs, in their default gear) give cake a try! It's open source, like everything else we do at bufferbloat.net and teklibre.
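If you want to try it on a Linux box that routes your traffic, the setup is tiny. A minimal sketch (it just wraps the tc command line; the interface name and the 18 Mbit upstream figure are placeholders to replace with your own, and it needs root):

  import subprocess

  def enable_cake(iface: str = "eth0", upload_kbit: int = 18000) -> None:
      # Shape slightly below the modem's upstream rate so the queue builds
      # here, where cake manages it, instead of in the modem's dumb buffer.
      subprocess.run(
          ["tc", "qdisc", "replace", "dev", iface, "root", "cake",
           "bandwidth", f"{upload_kbit}kbit", "docsis", "ack-filter"],
          check=True)

  enable_cake()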


Use a decent router on your side and configure it to rate limit slightly below the modem's limits. This avoids ever creating a queue in their boxes. You can run a ping while tweaking your router rate limit settings to find the point where it is just about queuing but not quite, to optimize both throughput and latency.
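If you'd rather measure than eyeball raw ping output, a rough probe along these lines works (a sketch: it times TCP handshakes to a well-known host as a stand-in for ping; the host and port are arbitrary choices):

  import socket, statistics, time

  def handshake_rtt_ms(host="1.1.1.1", port=443, samples=10):
      rtts = []
      for _ in range(samples):
          start = time.monotonic()
          with socket.create_connection((host, port), timeout=2):
              pass                      # connect() completing is roughly one RTT
          rtts.append((time.monotonic() - start) * 1000)
          time.sleep(0.2)
      return statistics.median(rtts)

  # Run once while the line is idle, again while an upload saturates it,
  # and lower the router's rate limit until the two numbers nearly match.
  print(f"median RTT: {handshake_rtt_ms():.1f} ms")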


Depending on your speed, you may need a bit more than just a decent router. Many routers can't hardware-accelerate QoS traffic shaping, which will be needed to limit the speed.

My Netgear R7000 can't handle my 400 Mbps connection using QoS throttling. I will probably need at least a mid-range Ubiquiti router to handle it.


Ubiquiti routers won't help you; they're even more reliant on hardware acceleration than typical consumer brands, and nobody has put the best modern AQM algorithms into silicon yet. What you really need is a CPU fast enough to perform traffic shaping and AQM in software, which ironically means x86 and Intel are the safest choices.


Well, bufferbloat is at its worst on slow connections (<100 Mbit), and 50 dollars' worth of router can fix it there in software.


Only if the firmware implements the algorithms. OpenWRT is your best bet for this: I have it running on a TL-WDR3600 quite well.


> Any cable modem brands known to not use bufferbloat-ing NICs?

Avoid modems with Intel, specifically the various "Puma" chipsets. Best to double-check the spec sheet on whatever you buy.

The main alternative seems to be Broadcom-based modems: TP-Link TC7650 DOCSIS 3.0 modem and Technicolor TC4400 DOCSIS 3.1 modem (of which there are a few revisions now).


A $45 router is enough to de-bufferbloat connections up to several hundred megabits, and past that bufferbloat is less of a concern (in part because it's difficult to saturate).


My $120+ Netgear R7000 can't quite handle my 400 Mbps connection when QoS filtering is turned on, if anyone wants a reference.


There are cheap and good routers.

The $69 MikroTik hAP ac2 will easily push 1 Gbps+ with QoS rules - https://mikrotik.com/product/hap_ac2#fndtn-testresults (it's a bit trickier to set up, and you need to make sure you don't expose the interface to the internet)


It's not the hardware that's at fault, it's the software and/or configuration. See here: https://news.ycombinator.com/item?id=17448022


From https://en.wikipedia.org/wiki/Bufferbloat

_Some communications equipment manufacturers designed unnecessarily large buffers into some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued for long periods in these oversized buffers. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput._

I hope I get this right, please correct if needed: So basically Intel's chipsets were creating what looked like a fat network pipe that accepted packets from the host OS really fast but was in fact just a big buffer with a garden hose connecting it to the network. The result is your applications can write these fast bursts and misjudge transmission timing, causing timing problems in media streams like an IP call, leading to choppy audio and delay. The packets flow in fast, quickly back up, and the IP stack along with your application now have to wait (edit: I believe the proper thing to say is the packets should be dropped, but the big buffer just holds them, keeping them "alive in the mind" of the IP stack. The proper thing to do is to reject them and not hoard them?). The buffer empties erratically as the network bandwidth varies and might not ask for more packets until n packets have been transmitted. Then the process repeats as the IP stack rams a load more into the buffer and again, log jam.

A small buffer fills fast and will allow the software to "track" the sporadic bandwidth availability of crowded wireless networks. At that point the transmission rate becomes more even and predictable leading to accurate timing. That's important for judging the bitrate needed for that particular connection so packets arrive at the destination fast enough.

Bottom line is don't fool upstream connections into thinking that you're able to transmit data faster than you actually can.


It's also a problem because a few protocols you may use from time to time (like, say, TCP) rely on packet drops to detect available network throughput. TCP's basic logic is to push more and more traffic until packets start to drop, and then back off until they stop dropping. And then it keeps on doing this in a continuous cycle so that it can effectively detect changes in available throughput. If the feedback is delayed, then this detection strategy results in wild swings in the amount of traffic the TCP stack tries to push through, usually with little relation to the actual network throughput.

Buffering is layer 4's job. Do it on layer 2[a] and the whole stack gets wonky.

[a] Except on very small scales in certain pathological(ly unreliable) cases like Wi-Fi and cellular.


Is there no way to limit how much of the buffer is used via some config?


Usually not. There may be an undocumented switch somewhere in the firmware that a good driver could tweak, depending on the exact hardware. But end-user termination boxes, whether delivered by the ISP or purchased by the end-user, are built as cheaply as possible and ship with whatever under-the-hood software configuration the manufacturer thought was a good idea. Margins are just too narrow to pay good engineers to do the testing and digging to fix performance issues. (I used to work at a company that sold high-margin enterprise edge equipment, and even there we were hard-pressed to get the buggy drivers and firmware working in even-slightly-non-standard configurations. Though 802.11 was most of the problem there.)

And in the case of telco equipment, that's a tradition-minded and misguided conscious policy decision.


Your analysis is correct.

Smaller buffers are in general better. However advanced AQM algorithms and fair queueing make for an even better network experience. Being one of the authors of fq_codel (RFC8290), it has generally been my hope to see that widely deployed. It is, actually - it's essentially the default qdisc in Linux now, and it is widely used in quite a few QoS (SQM) systems today. The hard part nowadays (since it's now in almost every home router) is convincing people (and ISPs) to do the right measurement and turn it on.

https://www.bufferbloat.net/projects/bloat/wiki/What_can_I_d...
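For anyone curious what the core idea looks like in code, here is a greatly simplified illustration of the CoDel logic that fq_codel builds on (it omits the per-flow queueing and the real control law from the RFCs, so treat it as a sketch rather than the actual algorithm):

  import collections, time

  TARGET = 0.005    # tolerate 5 ms of standing queue delay...
  INTERVAL = 0.100  # ...but only for 100 ms before dropping begins

  class TinyCodel:
      def __init__(self):
          self.q = collections.deque()  # entries of (enqueue_time, packet)
          self.above_since = None       # when delay first exceeded TARGET

      def enqueue(self, packet):
          self.q.append((time.monotonic(), packet))

      def dequeue(self):
          while self.q:
              enqueued_at, packet = self.q.popleft()
              sojourn = time.monotonic() - enqueued_at
              if sojourn <= TARGET:
                  self.above_since = None   # queue is draining fine
                  return packet
              if self.above_since is None:
                  self.above_since = time.monotonic()
              if time.monotonic() - self.above_since < INTERVAL:
                  return packet             # tolerate short bursts
              # standing queue: drop this packet and look at the next one
          return None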



It’s when you’re gaming and your ping jumps to 500 because someone is watching Netflix. It’s one of the main flaws in the currently deployed internet for end users. There are a lot of novel solutions (codel and so on) - but still not widely deployed.

More generally it refers to any hardware/system with large buffers - needed to handle large throughput but can lead to poor latency due to head of queue blocking.


Resisting bufferbloat isn't a tricky problem. A few simple config changes are usually all that's needed to resolve it.

Simply use a different queueing strategy or just smaller buffers.
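For example, on a Linux-based router the whole change can be as small as this (a sketch; eth0 is a placeholder interface name and it needs root):

  import subprocess

  for cmd in [
      # switch the default queueing strategy from a plain FIFO to fq_codel
      ["sysctl", "-w", "net.core.default_qdisc=fq_codel"],
      ["tc", "qdisc", "replace", "dev", "eth0", "root", "fq_codel"],
      # and shrink the driver-level transmit queue
      ["ip", "link", "set", "eth0", "txqueuelen", "128"],
  ]:
      subprocess.run(cmd, check=True)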


This comment made me want to investigate Intel's financials and see whether this qualitative story matches their quantitative data, but it seems to be the opposite:

Revenue grew last quarter almost 20% YoY to ~$20B

Net income has been up 40%-80% each quarter YoY, to ~$5B

While I do believe what was said above, it seems that whatever choices they are making that supposedly are resulting in a failing company from an engineering perspective, are resulting in a successful company from a financial perspective.


Yeah, revenue of $80B/year can cover a lot of failed growth projects, even multibillion dollar ones.

I think the only point that I disagree with in the OP is the idea that there is declining revenue in the "legacy CPU" market. It seems like there is still a long trajectory of slow growth there at worst.


Intel is strip-mining its current customers by jacking up newer server chip prices and discontinuing their older chips (due to 10nm fabs not being ready). Intel is producing few low-end chips (i3/i5), which has in part caused the current RAM glut.

These large customers that have caused a temporary surge in profit are in the process of migrating away from Intel.


> While I do believe what was said above, it seems that whatever choices they are making that supposedly are resulting in a failing company from an engineering perspective, are resulting in a successful company from a financial perspective.

Finances for big brands can be lagging indicators.


Also note that LTE and cable modems both came to Intel as a result of acquisition (LTE from Infineon and cable+DSL from Infineon via their Lantiq spinoff).

Both times they have tried to put an x86 core around the modem part with varying success.


There is a good chance the top talent at both jumped ship either before the acquisition, or immediately after, leaving the dregs and predictable results.


Reducing costs through 'synergy' is easier to quantify and link to bonuses and annual objectives than maintaining technical quality.


They've also picked up the ex-Motorola RFIC team in Phoenix. Integrating two teams nine time zones apart can't have been easy. I worked with both of those teams, and they're both sharp, but I'm sure the integration was tough, and it obviously went poorly.


To me this is kind of strange, I thought Intel had a good reputation as far as Ethernet and Wireless drivers were concerned. Certainly much better than (old) Broadcom, Realtek, Ralink, ST and the like.


On Ethernet, Intel is known for their no-frills, stable line of NICs. That said, they have not been a leader since the jump to 10Gbps. Their 10Gbps line eventually came along and is solid, but it was a few years behind everyone else. That same story seems to be repeating itself at every new Ethernet standard over the past decade.

Come to think of it, the last decade has been really bad for Intel. They no longer have a node advantage. They no longer have a performance advantage in any market space that I can think of outside of frequency hungry low-thread applications (games).

Their WiFi chips are good, but second tier. Their modem chips are third tier. Their node is mostly on par with the competition, for now. Their CPUs are trading blows with AMD. Their Ethernet chips are a generation behind. Optane is a bright spot, but we'll see how they squander that.

The next big diversification play by Intel is GPUs, I have no idea how that will pan out.


Optane will be squandered by limited CPU support and slim software support :c

Optane is a neat idea, but the severe change in software architecture combined with only select CPUs even supporting it will limit uptake outside FAANGs or organizations with really specialized needs.


Databases will love Optane. There have been companies showing how much they are willing to spend on database hardware and software for ages. I'm not sure that will change any time soon.


Not so sure about this. The world is increasingly moving to distributed scale-out databases. Once you go that way, consensus algorithms and RPC costs dwarf disk I/O speed.


Not everyone is building apps which need to be "web scale". Optane has the potential to significantly raise the performance ceiling of a single-master database. I bet there are a lot of companies who will happily drop six figures on Optane systems if it saves them the complexity of managing a distributed database.


The niche where your present requirements are big enough to benefit from Optane and your future requirements are small enough to not need to go distributed is pretty narrow.


Finance, healthcare, and enterprise systems.

I'm not sure it's really a niche.


I've worked for a company that was willing to spend that kind of money on monolithic database servers. They were a top-100 website though, and this was the best part of a decade ago (and thus e.g. in the pre-SSD era).

They were also scrambling to move all their services away from use of that database in favour of a horizontally scaled system that could grow further.

The query rate that can be handled by a single conventional server is pretty monstrous these days. You'd have to be simultaneously a) maybe top-50 website level load (I'm well aware that there's a lot more than websites out there, but at the same time there really aren't that many organizations working at that scale, much as there are many that think they are) and b) confident that you weren't going to grow much.


The real world is much bigger than websites/apps.


It is, and I acknowledged that. But it gives a sense of the scale involved. Just as there are very few websites/apps that need to handle, what, 2000 requests/second (and simultaneously don't intend growing by more than a factor of two or so), systems that need that kind of performance in any other field are similarly rare.


Not really, it’s also not necessarily not needing to grow but not everything can scale in the same manner as Netflix or Facebook.

Financial systems, especially trading platforms, need to ensure market fairness. They also need, in the end, a single database: you can't have any conflicts in your orders, and the orders need to be executed in the order they came in across the entire system, not just a single instance.

This means that even when they do end up with some micro-service-esque architecture for the front end it still talks to a single monolithic database cluster in the end which is used to record and orchestrate everything.


That is indeed one case where a large single-node database makes a certain amount of sense (though it's not the only solution; you need a globally consistent answer for which orders match with which, but that doesn't have to mean a single database node. Looking at the transaction numbers I'd assume that e.g. the busiest books on NYSE must be multi-node systems just because of the transaction rates). But fundamentally there are what, 11 equity exchanges in the US total (and less than half of those are high-volume). And the market fairness requirements are very specific to one particular kind of finance; they're not something that would be needed in healthcare, general enterprise, or most financial applications. Like I said, niche.


That really depends on the workload.


You are correct.


Yep, exactly what I was getting at. If you have a giant database and a big budget, Optane is great. It could be really useful for smaller users too, but it's unlikely to become widely popular as the average developer won't have it available in their laptop or desktop.


Optane will be seen as a really big main memory by the OS and that's it. So, I don't get why there will be severe changes in software.


Optane is not durable enough to survive as main memory, hence the use cases treating it primarily as read memory to avoid wearing out the cells with writes.

Take a look at how cagey Intel is acting about Optane: https://www.semiaccurate.com/2018/05/31/intel-dodges-every-q...

This is the same behaviour as with Intel's LTE chips, where promised features keep slipping (much to Apple's dismay).


The new DIMMs Intel is putting out have a RAM cache in front of the optane memory that will absorb all the churn plus a wear leveling algorithm on the writeback side. It has big enough capacitors to put all its data away in the event of power loss.

https://www.storagereview.com/intel_optane_dc_persistent_mem...

All of which looks to solve the wear problem even if it means higher price and latency.


SemiAccurate's singing a different tune this year, now that Intel has stated that the Optane DC Persistent Memory modules are warrantied for 5 years regardless of workload. Intel's gotten write endurance up to sufficient levels for use as main memory, though if you use those DIMMs as memory rather than storage, then your DRAM will be used as a cache.


Re Ethernet, you're spot on with the 82599 and ixgbe. Cheap, plentiful, and really reasonable platform support. It's like the 2.6.18 kernel in that I expect it to be relevant for decades.

However, my from-the-sidelines impression was that Intel got sidetracked with 40G while everyone else, especially DC network fabric land, went towards 25/100.


As long as Intel keeps fully supporting the open source Linux driver, I suspect there will always be demand for their GPUs (for people that don't need to game or do video production/etc).

Also, IIRC, for low-end devices and battery life, Intel GPUs play the nicest.


xf86-video-intel?? Distros have been defaulting to the modesetting kernel driver for years due to the instability caused by Intel's linux drivers.


The Intel DRM kernel driver, which the modesetting X11 DDX (not a kernel driver) uses.


Could you fill out those tiers? Who has better wifi/ethernet chips than Intel atm?


WiFi: Tier 1 is unquestionably Qualcomm Atheros (surprise!).

Ethernet NIC/CNA rankings(IMHO):

1. Mellanox is the pack leader right now.

2. Chelsio is right up there with them, but not leading.

3. SolarFlare and Intel bring up the middle ground.

4. Everyone else (QLogic, Broadcom, etc)

Aquantia is an unknown for me. As long as they don't suck, they'll probably go in tier 3.


Not sure if I agree. The top-end Intel Wi-Fi cards 826X/926X have excellent performance compared to almost any other card I've used. Throw in Intel's excellent Linux support and it's hard to find a better option in the laptop space.


Interested in learning more. Could you provide some sources on why Qualcomm Atheros is "Tier 1"?


I don't have any, and I'm not going to dig up some to pretend I do.

I am speaking off the cuff as someone heavily involved and interested in RF in general and WiFi/LTE in particular.

From my anecdata (shame this isn't a word), Atheros chips have better SNR and higher symbol discrimination thanks to cleaner amps and better signal discrimination logic, and they tend to be on the forefront of newer RF techniques in the WiFi space. All this culminates in better throughput, latency, and spectrum utilization than anyone else.

It also helps that their support under Linux is far superior to most everything else, which helps in Router/AP/Client integration and testing.

I don't even like Qualcomm, but from my experience, you will almost always regret choosing someone else for anything but the most basic requirements.


In my experience Intel wifi, specifically in laptops, has been far and away the best wireless experience I've had on both Windows and Linux. I do not see how Intel's Linux support is second rate, Intel's Linux wifi team is very active and always has solid support for hardware before it is shipped.

A big frustration with Qualcomm wifi on Windows has been that they do not provide driver downloads to end users. If you are using a laptop that has been abandoned by the OEM and you have a wifi driver problem you have to hunt for the driver on sketchy 3rd party sites or just live with it. I have personally had to help several people find drivers because ancient Qualcomm drivers were causing bug checks on power state transitions.

What real-world experiences have you had with Intel wifi on Linux and Windows that make you believe it is second rate?


Just curious, why couldn’t you use your real account to ask this question?


This is my only account on this site.


Interesting pushback :)


Chelsio has 10GbE (and above) adapters with good reputation.

Mellanox has 40GbE (and above) adapters with good reputation.

Mellanox also have 10GbE stuff, but that's mostly older generation / legacy (low end). Not sure how the 10GbE ones are regarded.


What about Aquantia? Although I'm sad the USB 3.1 5gbit NICs never appeared.


That's not 100% correct. You can get them if you buy 500 or more. Product page: http://www.speeddragon.com/index.php?controller=Default&acti...

Available https://sybatech.en.alibaba.com/product/60793590161-80442320...

I talked to Syba USA two months ago and they said end of Q1, so I didn't pursue the idea of bringing 500 into the US and selling them. I still might. Do you know any good platforms for this sort of thing?

The other reason I didn't pursue this is that the Realtek-based 2.5 Gbps adapters are out https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu... (USB A version: https://www.centralpoint.nl/kabeladapters-verloopstukjes/clu...) and I wasn't sure whether people would care enough to jump to 5 Gbps.



Nice!

Unfortunately, I don't know 500 people who'd want one and I think shipping from the US to anywhere not-US (say, Australia) would be prohibitively expensive.


Sorry, no idea as I've never heard of them. :)


Chelsio and mellanox both have 100G with good reputations. 10G is not really something we should be comparing on anymore since it's been out over a decade.


> On Ethernet, Intel is known for their no-frills, stable line of NICs. That said, they have not been a leader since the jump to 10Gbps. Their 10Gbps line eventually came along and is solid ...

I wouldn't call them solid - at least the X710. The net is full of bad experiences regarding them. They're VMware certified but are apparently really unstable on VMware; I have no personal experience with that platform. On Windows Hyper-V hosts I had the NICs repeatedly go into "disconnected" status and individual ports would straight up suddenly stop working. On Linux KVM hosts that didn't occur, at least for me.

Supposedly upgrading the firmware to a recent-ish release fixes it - I haven't had it occur after. That's understandable. What's not understandable is that the NIC was released in 2014 and the issue was resolved only in like 2018 according to the net.


I specifically called out Intel's software team as competent/good. They are obviously able to take buggy silicon and make it do impressive things, but when it comes to shaping and interpreting analog RF waves it seems this is outside their capability to tune much beyond what they've done.

Broadcom comparatively has crap drivers and decent silicon, meaning your cable modem works fine (with no bufferbloat or jitter issues), but good luck with that random WiFi chipset on Linux :P


My experience is that Intel software is crap.

For instance, my biz dev guy thought that Intel's graph processing toolkit based on Hadoop would be the bee's knees, and I didn't have to look at it to know that it was going to be something a junior dev knocked out in two weeks that moves about 20x more data to get the same result as what I knocked out in two days.

NVIDIA, on the other hand, impresses me with drivers and release engineering. Once I learned how to bypass their GUI installer I came to appreciate what a good job they do.

(They gotta have that GUI installer otherwise some dummy with a Radeon card or Intel Integrated 'Graphics' will post on their forum about how the drivers don't work.)


NVidia still equals NoVideo in my book; for a time it seemed like every other driver release would brick your card, and Linux support is abysmal. Wayland still isn't supported, despite AMD, Intel and even no-name HDMI PHY manufacturers fully supporting it.


> They gotta have that GUI installer otherwise some dummy with a Radeon card or Intel Integrated 'Graphics' will post on their forum about how the drivers don't work.

Maybe they need a GUI, but it doesn’t need to be such a bloated monstrosity.


It's because they include game patches in the drivers.


NVIDIA is the worst video card for Linux, if you don't want to install their blobs.


Intel is faaaaaaar behind on NICs. They're just now releasing an ASIC capable of 100G, while their competitors have had one for many years and are now moving to 200G. I think they were hoping Omni-Path would take off.


RF is difficult.


Yep, mostly done by graybeards, whom Intel probably fired.


Using stack ranking to fire 10-15% of their employees every year selects for people who have political talents. Aren't they now comprehensively failing in every area except microarchitectures, with the next Ice Lake one being held up by the 10 nm node debacle?


If they are using stack ranking they deserve to fail.


I wonder if all Intel's acquisitions have failed while homegrown efforts succeeded.


> Intel couldn't make WiMax work right...

well imho, the _real_ reason why wimax failed had more to do with lack of compatibility with existing (at that time) 3g standards than anything else, and this was despite the fact that lte was delayed ! operators had already spent lots of money in deploying 3g, and were wary (genuinely so) of sinking money in something that was brand new when lte was just around the corner...

also, imho (once again), technological challenges (as you pointed out above) are _rarely_ an issue (if ever) in determining the marketplace success / failure...


wimax is still used in Japan, mainly for "pocket wi-fi" gadgets which you can buy or rent (the latter mainly for visitors or tourists). They compete with LTE but there's still a market because they don't have completely overlapping feature sets w.r.t. coverage and bandwidth.


WiMAX is dead in Japan. You can't get new devices on it, half of the spectrum has been refarmed to LTE (leaving speeds in the single-digit Mbits) and the network is shutting down by next year.

There's a WiMAX brand selling "WiMAX 2+" but that is just LTE.


Thanks for the update. I checked out the situation a few times over the last two or three years (the last time more than a year ago), and things are apparently changing (also re: the next post). At one point there certainly was WiMAX, though; the coverage area was quite different.


They don't make it easy to understand what's going on, since they still have a toggle between "LTE" and "WiMAX" that you can switch. The former is fast metered MVNO data on the main carrier of the telco that owns them, and the latter is "unlimited" data on their own separate radio spectrum with different coverage. But the latter is still actually LTE.

I actually still have a legacy WiMAX device and plan active, since it was the last plan with true unlimited data and zero "fair use". It also has a usage scale, so in months I don't use it it's only 300 yen. So it's a good backup device. I've loaned it out to people who have had to spend a week in hospital (no WiFi) and they've burned through 30 GB streaming video with no issues. They send me letters every couple months urging me to upgrade to new plans that all cost 4000/mo and have fair-use data caps.


What the Japanese carriers sell as WiMAX 2+ is apparently rebranded LTE.


I don't think they'll back out of RF tech entirely. The best WiFi chipsets I've used in recent years have been Intel-vended, and they're broadly used.

On the other hand, I've been bitten by Broadcom WiFi chipsets too much recently -- two different laptops with different Broadcom chips having a variety of different connectivity problems. One of them would spontaneously drop the WiFi connection when doing a TCP streaming workload (downloading a large file via HTTP/HTTPS). Admittedly this was probably some kind of driver issue, but I wasn't excited about the prospect of using an out-of-tree driver on Linux to solve the problem (and that didn't help my Windows install either). I swapped the mini-PCIe boards out with some Intel wireless cards and they've been running perfectly since.

But of course, if the rest of the business suffers, they may have to make cuts all over the place. I just hope the WiFi stuff doesn't end up on the chopping block or the competition improves.


I also chose Intel over other vendors. I don't know why. It is true Intel's stuff is better integrated into the WinTel ecosystem than the others', so it's probably true Intel gear gave less trouble on my laptop.

But then I deployed lots of boxes with WiFi that had to work in remote locations where others depended on it. I'm ashamed to say I just shipped the first couple with the Intel cards provided by the manufacturer.

But after getting a lot of complaints I actually tested it. Which is to say I purchased 10 different makes and models of WiFi mPCIe cards on eBay, collected as many random access points as I could and lots of laptops (over 50), put them all into a room, and tested the network to collapse.

Intel's cards were the most expensive, and also the worst performing. They collapsed at around 13 laptops. The best 802.11ac cards were Atheros (now owned by Qualcomm), and they were also the cheapest. Broadcom was about in the middle, but I had the most driver problems with them.

My pre-conceived but untested notions were shattered.


And in the fab area, they are being beaten by TSMC. It looks like they are falling behind on all fronts.


If they can ship 10nm in reasonable volume soon, they can get their process lead back a bit, assuming their claims that it's better than the 7nm foundry processes are true.


> The process lead has disappeared, and when working with RF frontends (LTE, WiMax, cable modems, WiFi) their ability to cover up implementation errors in the driver is limited.

Why would that be the case? Why would those functions be harder to fix in code/microcode than a CPU's functions?


I don't think you should count Intel down and out just yet; over the past few years they have been one of the few tech companies actually committed to diversity.

Sure they had a couple bad years but now they have a talent pool deeper than any of the competitors so I'm still betting on them to turn this around and bounce back even stronger.


"one of the few tech companies actually committed to diversity."

What does that even mean?


Increasing hiring of non-white non-male individuals to the employee pool. Not sure what it has to do with the technical discussion.


Perhaps the continuous executive push for cheaper international labor is having an impact on their core competence. Who could have known.


Intel's x86 business is not far behind


Another commenter (metildaa) said Intel is 5 years behind the competition on their 4G LTE technology.

I can't reconcile that claim with PC Magazine's tests of the iPhone XS's mobile performance: https://www.pcmag.com/news/364116/iphone-xs-crushes-x-in-lte...

The PC Mag author, Sascha Segan, says repeatedly that the iPhone XS at worst only slightly trails the Galaxy Note 9 and other high-end phones with 2018 Qualcomm SoCs. To quote directly:

> Between the three 4x4 MIMO phones, you can see that in good signal conditions, the Qualcomm-powered Galaxy Note 9 and Google Pixel 2 still do a bit better than the iPhone XS Max. But as signal gets weaker, the XS Max really competes, showing that it's well tuned.

> Let's zoom in on very weak signal results. At signal levels below -120dBm (where all phones flicker between zero and one bar of reception) the XS Max is competitive with the Qualcomm phones and far superior to the iPhone X, although the Qualcomm phones can eke out a little bit more from very weak signals.

We could say this is all Apple tuning the phone's antenna array and materials, but I find it extremely unlikely that would have compensated for "5 years" of Intel lag - back then LTE was several times slower in theory and practice.

We could also say this is just Apple giving Intel all the secrets of modems that Intel couldn't figure out themselves. That could be more plausible, but again I'm doubtful, since Apple would have little incentive to hoard those secrets and then loan them to Intel instead of using them to build their own chips. Unless of course Apple stole those secrets from Qualcomm or someone else...?


> Another commenter (metildaa) said Intel was 5 years behind the competition on their 4G LTE technology.

Why didn't you just reply to their comment?


They made the same claim in 2 separate comment threads and I was unsure which to reply to.

https://news.ycombinator.com/item?id=19678579

https://news.ycombinator.com/item?id=19678578


Ah, the many headed hydra joy of threaded discussions, chop off one head and four more appear!


It is really amazing how much money Intel has thrown away chasing "mobile" dollars. First, they squandered billions trying to make mobile Socs, now the last part of the company that can get inside a cell phone is throwing in the towel as well. While Intel was doing all of this, it managed to squander away its core manufacturing competency as well. I have a feeling Intel will be the next HP, sliced and diced for parts - none of which resemble the company at its best.


I think the problem is that they believe they "squandered" the chance to supply the iPhone CPU. They stated that mobile CPUs would never sell in quantity, so they weren't interested. Having missed that boat, they've felt the need to be in front of mobile at any cost.

I think Intel failed to realize that they had made the right call with the iPhone: their very culture isn't about being innovative, but providing microcode flexibility at high instructions/watt. They had a chance to define servers, and still do. They should have been all over the whole Spectre/Meltdown/timing security issue and owned creating a secure server chip. Instead, they've frittered away so many options that they never had a chance to win.

My two cents.


Intel owned a big slice of the embedded ecosystem well into the 90's. They threw that all away to focus on high margin Pentiums.


it seems like a regular business error ..


How did they fret away options with Meltdown?


The "cloud" as we know it rely heavily on CPU's hardware security and their virtualization technologies to enable untrusted computation from users.

CPU security is actually one of the few topics Intel is extremely good at (don't trust the RISC/CISC flamewar) and could define the way cloud providers are built.


SGX in particular had revolutionary potential.. Galen Hunt's Haven project showed remotely-attested, encrypted Drawbridge containers executing arbitrary Windows programs within secure enclaves... But then Meltdown and Spectre doomed SGX to side-channel attack. I don't think Intel is even making an attempt to salvage the instruction set...


What do you mean by salvaging the instruction set? I wouldn't count SGX as part of the instruction set, and Spectre/Meltdown are not IA-specific.


When Intel was making mobile SoCs, they couldn't get everything onto one chip, nor were they able to make an LTE-capable device. What they were able to produce were $100 tablets and phones for the low end of the market that were subsidized to the tune of ~$30 each, or in the case of Wal-Mart's $50 Intel Atom Android tablet, to the tune of $40 to $50 per device in subsidy.


Intel and Microsoft both missed the mobile boat and suffered the consequences


It wasn’t squandered—I was able to use their investment in my personal growth as a springboard to a FAANG career.


I'm wondering: did Apple decide to settle with Qualcomm, leading Intel to scuttle a presumably low-selling business, or did Intel abruptly close the business, prompting Apple to suddenly settle?

Given that the settlement came in the middle of court arguments unbeknownst to the lawyers, I'm thinking the latter.


Apple was Intel's only LTE modem client, and Apple did everything they could for years to give Intel the secrets that made Qualcomm's modems so performant, yet Intel was still a half decade behind in modem sensitivity, leaving a huge gap between Intel and Qualcomm/Samsung.


Having had a little experience in how those conversations go, my guess would be that Intel were doing everything they could to keep Apple's business, but between failure at the foundry level and failure to compete on the actual design level, Apple slowly made the decision that their next generation needed to be with Qualcomm. So they'll have sorted the settlement and deal with Qualcomm whilst keeping the door open with Intel. Now that the deal with Qualcomm is finalized they can finally confirm to Intel what their draw down will be, and since Intel can't make the volume they need without Apple they've been forced to close down the product line.

Intel has a reputation for completely ripping up departments once they're 'refocusing', but they really only do that after the writing has been on the wall a while. I won't be amazed if it slowly becomes clear their 5G backhaul business is going the same way, though.


Surely in regards to the current top post on HN: https://www.apple.com/newsroom/2019/04/qualcomm-and-apple-ag...


Could you explain how that's related?


Apple currently buys 4G modems from Intel and presumably Apple is shopping for 5G modems. If Intel no longer makes 5G modems then Apple needs to buy them from Qualcomm. Or it could be the reverse: if Apple has decided to buy 5G modems from Qualcomm then Intel has no customers for its 5G modems and thus decided to cancel them.


It is not like Qualcomm settled with Apple to get into Apple's cellphones; it is a concession made to get Apple to drop Intel and keep paying Qualcomm royalties after Apple finishes its own modem.

Apple has been poaching Qualcomm's top radio engineers for years ...for work in their Chinese R&D centre (no non-compete enforcement there).


Non-competes are not enforced in California.


Well, there might be other reasons then. You don't normally haul a poached top-tier employee from a first-world country to China.

My speculation is maybe their talks with Huawei were more fruitful than known to the public.


Wouldn't it be suicide for a pro-privacy company to put Huawei gear in its phones? Normally I would figure consumers just aren't that privacy-conscious, but with the current national security climate around Huawei, I think many more people are aware of the massive risks they pose.


To begin with, the work on the physical layer of 5G NR is to a big extent Huawei's and ZTE's effort as part of 3GPP.

Who will be your first option as a supplier of 5G PHY chips other than the standard's original authors?


Qualcomm, I suppose. I personally think it's more likely that Apple wants to make its own. One thing it does do well is custom fabrication and optimization, which is why iPhones run better with less RAM than Androids and can have good battery life. I think they may be sick of the trouble caused by 3rd-party suppliers in this department.


Maybe soon Apple could drop the "Designed in California", and just go 100% "Made in PRC".

True story: I was born in an oppressive communist dictatorship and escaped with my family to freedom and democracy. It's very hard to watch the vanguard of democratic capitalism become 99% communist, with a couple of guys in California rounding the corners and painting the boxes...


The litigation between Apple and Qualcomm gave Intel a customer for a fledgling product line. Apple is a plum customer, as they drive tons of volume for a single SKU.

That’s a big advantage that Apple has had — their tighter product lines allow them to commit to a large number of parts for long periods of time.

The Intel reps mostly talk about storage these days. Quite a departure from the old days of high margin chips.


I feel like it's the reverse, since the lawsuits really only started on the heels of Apple supposedly sharing secrets with Intel to help develop their baseband tech. The litigation was probably ended because Apple eventually realized that Intel wasn't able to come up with a decent baseband chip that could compete with Qualcomm, much less one that could satisfy quality constraints and volume production across all of Apple's product lines. Apple had to come to a deal with Qualcomm (at the very least, so they can potentially gain access to Qualcomm's IP). Intel either was already planning to throw in the towel or this was the straw that broke the camel's back. With Apple deciding to stick with Qualcomm (or their own homegrown chip), there's no potential market for an Intel 5G modem and no point in throwing good money after bad.


The Verge went into some amount of depth with the relationship between the two events: https://www.theverge.com/2019/4/16/18411332/intel-exiting-5g...


I believe GP may have been referring to this line (emphasis mine) in the article suggesting Apple will now be using Qualcomm chips again.

  Companies have reached a global patent license agreement and *a chipset supply agreement*


Early this morning: Apple and Qualcomm abandon lawsuits. Late this evening: Intel announces leaving modem business.

Coincidence? I think not!



It's a coincidence unless you have any information to prove otherwise.

Edit: apparently the very wise HN users would rather downvote a call for evidence and upvote a feel-good conspiracy theory. Stay classy, HN.


Your post is only two hours old, and there are multiple responses on other comments here explaining the connection: Intel has never caught up to Qualcomm's performance in this field; their only significant customer for their 5G modem chips is Apple; they were Intel's customer primarily because of their litigation with Qualcomm; that litigation has now ended with Apple paying Qualcomm and agreeing to use their chips.

You could have taken a few minutes to scan comments yourself and find this information. Or is not classy of me to point that out?


Fastest way to get downvotes is to complain about them


This is off topic, but how do you downvote a comment? Is it disabled for new users, or does a user need a specific amount of karma?

I have been here for years and I have decent karma I think


Based on this post [1] you need 500 karma

[1] https://news.ycombinator.com/item?id=8381148


you need karma above 500 to downvote


I have several friends that make $10+ per downvote on HN. So my guess is that the fastest way to get downvotes is to write something that someone with a marketing budget wants to remove.


Here you go. Where do I claim my $10?


Are you for real? How are they told what to downvote?


The cellular modem market has absolutely crashed in the last decade. Broadcom, ST-Ericsson, Nvidia and Intel have all left the market. I may be forgetting someone.

That leaves basically just Qualcomm and Mediatek (on the lower end) left. Huawei and Samsung have something too. Apple and Samsung will have to make their own modems now, if they don't want to be dependent on Qualcomm.

This is certainly not good for competition.



Some vindication for the oft-maligned SemiAccurate site.

Article five months ago: https://semiaccurate.com/2018/11/12/intel-tries-to-pretend-t...


More like "veryaccurate" am i right?


Big companies don't fail due to external forces; they fail because of internal dynamics. Everything apart from x86 is a hobby at Intel. I also think they are addicted to high margins and expect that a substantial portion of the cost of a device goes to Intel components.


Cellular technology is hard and complex.

I remember Broadcom thought "no problem, we churned out a WiFi solution with 50 people, so how hard can this cellular thing be for us?"... Well, they crashed and burned.

You need massive teams of people who know what they are doing to build cellular chipsets with the necessary software stack.


Stock is jumping after hours, not surprised as modems really aren't a core competency of Intel.


Good for iPhone users, now that Apple & Qualcomm are in bed again; they will get a decent 5G experience, or at least what everyone else is getting.


It's probably too late for this year's iPhone though, unless engineering teams have been working behind closed doors of both companies expecting this to be the case.


I very much doubt 5G iPhones were ever something we would see in 2019. The network deployments aren’t there yet, and Apple won’t have two SKUs for their flagship 2019 model to hedge their bets with 1st gen modems like other manufacturers.


Next year Intel will be hit harder when Apple announces Macs based on its A series ARM chips and transitions its entire Mac line to it by 2023 or 2024. This move by Apple will also prompt other PC manufacturers (a lot more than before) to move to ARM if Microsoft continues to play along well on ARM.

Seems like Intel’s future is bleak, and the company may have to be broken into pieces.


I'd be curious to see where you've gotten such a bright future for ARM from. It feels very much like the "year of the Linux desktop" in being wishful thinking


While I think it's unrealistic to write Intel's CPUs off any time soon, ARM has been having a very bright present for years, haven't they? There are roughly 5x as many smartphones sold every quarter as there are PCs and the vast majority of those phones are using ARM-based CPUs -- in terms of units in use, there are way, way more of them out there than there are Intel CPUs.

As for ARM CPUs seriously moving into the PC space, who knows. It hasn't happened yet, but there's an ever-increasing amount of smoke around the likelihood of ARM-based Macs. It's silly to think that in five years Intel will be a destitute, hollow shell of its former self -- but it's not silly to think that ARM-based PCs are going to not only exist, but be both common and seriously competitive.


Do you see Apple selling its CPUs to other PC makers? If not, which ARM CPU licensee do you see as being competitive with Intel in the PC space?


Would be hilarious if the answer to your second question was Qualcomm.


Well, it probably is Qualcomm; they're likely ARM's biggest licensee in the mobile CPU space other than Apple -- and may be bigger than Apple by volume, when you consider how many Snapdragons there are out there.


Apple switching to ARM would be near impossible, given that many people run Windows on Apple hardware, plus other non-Apple programs that won't be able to be emulated fast enough on ARM. Steve Jobs might try that leap, but Tim Cook won't want to scare anyone off from the Apple ecosystem.


Apple already did that transition multiple times. And running Windows on Apple hardware is not something that Apple cares about.


I am still disappointed that the menu item for Boot Camp to “log off and boot into Windows” never made it into the OS.


IIRC it was there in betas for a while. Typically, when a feature was pulled from OS X betas, it was either because it was too broken to make the release date or because of patents.


1. The expected plan was to transition the MacBooks across first. Those users were never able to run Windows properly anyway and their apps aren't likely to be CPU bound.

2. The idea isn't to emulate OS X apps. It's to repurpose them to run on ARM. And they've been doing this with all of the App Store apps for years:

https://thenextweb.com/apple/2015/06/17/apples-biggest-devel...


VMware runs fine on MacBooks.


I wonder if the timing of this announcement was determined by the timing of the Apple/Qualcomm settlement; in particular, whether Apple encouraged Intel to delay the announcement until a settlement was reached, in order to get better terms from Qualcomm.


I'm disappointed by this news, because Intel was the only mobile modem manufacturer that was competitive (or nearly competitive) with Qualcomm in North America. Without Samsung, without the Chinese OEMs, Qualcomm is unfettered in its monopoly here.


MediaTek isn't far behind. If they were to put actually decent CPUs in their SoCs (which is largely a matter of licensing; switching out one ARM CPU for another isn't too hard), I think they could take a big share of Qualcomm's market.


I keep hoping Intel will fix bufferbloat across all their LTE/cable modem/DSL/etc. gear. It would improve their end-user experience enormously. I even know a few folks who would help....
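
(A rough back-of-the-envelope sketch of why oversized buffers hurt: the worst-case latency a full buffer adds is roughly the queued bytes divided by the link's drain rate. The buffer size and link rates below are illustrative assumptions, not measurements of any particular device.)

    def buffer_delay_ms(buffer_bytes: int, link_mbps: float) -> float:
        """Time to drain a completely full buffer over the link, in milliseconds."""
        link_bytes_per_sec = link_mbps * 1_000_000 / 8
        return buffer_bytes / link_bytes_per_sec * 1000

    # Hypothetical 256 KB device buffer:
    print(buffer_delay_ms(256 * 1024, 5))    # ~419 ms on a 5 Mbps uplink
    print(buffer_delay_ms(256 * 1024, 100))  # ~21 ms on a 100 Mbps link

The same buffer that is barely noticeable on a fast link adds close to half a second of queueing delay on a slow one.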


So earlier today Apple and Qualcomm reached détente. Then Intel announced its exit from the 5G modem space, meaning Intel wouldn't have been a viable alternative to Qualcomm as a supplier had Apple and Qualcomm not made nice. Interesting timing of statements...

...but Intel mentioned the IP it still retains in the 5G space. What's the over/under on Apple licensing Intel's IP and bringing modem development in house (keeping in mind that would take a couple of years at best to pay off) vs. making the best of a working relationship with Qualcomm and using their tech?


I suspect this is the case.

Apple will have purchased IP and maybe entire teams from Intel.


Surely this is due to the recent top post on HN: https://news.ycombinator.com/item?id=19674218


I thought it was the Apple/Qualcomm settlement? https://news.ycombinator.com/item?id=19676499

Surely it can't be a coincidence. Though the weather thing might not be one either; if so, this was a very well coordinated response.



Surely? How are you sure of that when the post didn't mention any related reasoning? The weather forecasting thing isn't even new; it's just a blog post that happened to go out today.


What a coincidence that Intel released this right after Apple issued the press release about Qualcomm. Seems like Intel had some weight lifted off its shoulders.


I like how the forward-looking statement disclosure is longer than the press release itself.


Well, now I get why Apple suddenly settled its lawsuits with Qualcomm; otherwise Qualcomm might have asked for a lot more money. Interesting how Intel announced this a day after the Apple and Qualcomm settlement.


I'm sure Apple told Intel to hold the press release or they would sue them for breach. I can't really blame Apple here; Qualcomm can be nasty, and Apple wasn't wrong not to give up a huge advantage.


I doubt Qualcomm was unaware. Lots of rumours spread within an industry but never make it out onto the internet.


Does anyone know what they mean by

> "...and Other Data-Centric Opportunities"

What does "data-centric" mean? Specifically, what are the "data-centric platforms" they keep referencing?


https://www.bizjournals.com/portland/news/2017/02/10/intel-c...

Not sure if that is still true now that Krzanich is out.


Intel should buy Nvidia, drop their internal GPU efforts, and focus on regaining their CPU lead. It may be a futile effort, with the end of Moore's law and Dennard scaling making their previous approach less appealing, but they've got to try. Then again, if they bought Nvidia, they'd probably try to put an x86 inside the GPU...


I sat at Intel's presentation about Larrabee during GDCE 2009.

I also enjoyed how their OpenGL drivers used to lie about hardware support, and how Intel used to talk about being open source friendly, yet their GPA analyzer was DirectX-only for several years.

NVidia has nothing to gain by becoming part of Intel.


> NVidia has nothing to gain by becoming part of Intel.

Who said they did? I'm saying Intel could gain by acquiring Nvidia and focusing their engineers on CPUs and chipsets.


> Intel should ... focus on regaining their CPU lead.

I think they are focused on that. They've been trying to bring their 10nm process to market for years; it's now three years past their initial projections. Moore's exponential curve is certainly dead; the big question now is whether performance will improve linearly or worse.


Performance can still increase exponentially on average. There are still undeveloped approaches to hyper-performant computing, such as chemical, biological (bacterial), DNA-based, optical, and maybe quantum computing... and of course we could start replacing silicon with graphene once its production finally takes off, which should lead to 3D chips. Spintronics is another very interesting field.


> There are still undeveloped ways of hyper-performant computing ...

Agreed, but none of the technologies have reached the marketplace, and it's entirely unclear how long it will take for them to develop. If it takes decades, any exponential curve is toast.


That is not true; you can't claim that. I'm saying that on average it's going to be exponential: even if it takes 100 years, the jump will be so huge that it'll be an exponentially large step.


> on average it's going to be exponential - e.g. even if it takes 100 years, the jump will be so huge it'll be an exponentially large step

I would not call that an exponential; it's a step function. It's entirely possible that human innovation over time will look more like a series of step functions than a continuous exponential. The problem with step functions, of course, is that you can't easily predict future growth from past growth the way you can with an exponential.

E.g., when quantum computing comes, it will surely be significant, but if it takes 10, 100, or 1000 years to make a breakthrough, it's not at all clear that the timing will happen to coincide with the exponential that we have been on for the last 50 years.
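
(A toy sketch of that distinction, with purely made-up numbers: a naive forecast that extrapolates the average past growth ratio works on an exponential series but says nothing useful about a step function that has already flattened out.)

    # Toy illustration with made-up numbers: extrapolating past growth works for
    # an exponential series but is meaningless for a step function.
    def naive_forecast(series, horizon=3):
        """Assume the average past growth ratio continues for `horizon` steps."""
        ratio = (series[-1] / series[0]) ** (1 / (len(series) - 1))
        return [series[-1] * ratio ** h for h in range(1, horizon + 1)]

    exponential = [2 ** y for y in range(10)]   # doubles every step
    step = [1] * 5 + [100] * 5                  # one big jump, then flat

    print(naive_forecast(exponential))  # keeps doubling: 1024, 2048, 4096
    print(naive_forecast(step))         # predicts further growth that may never come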


OMG, this is huge. I can't fathom how a company with such a level of engineering and talent can't, or chooses not to, do it. At some point it's bigger than the bottom line or any other consideration; it's about the morale of the whole company.


At some point the size of the business becomes its largest performance factor. It loses the ability to use talent and to understand which risks to take.


An obvious move; Intel has been sucking at modem design and performance compared with its rivals on LTE.


Yet again, INTC + RF == 0.


When Intel repeatedly said they were going to keep making the modems for the iPhone up to 2020 but have now shown themselves to have been blatantly lying, surely this has to be illegal, or at least warrant an investor lawsuit, right?


You don’t believe the 2019 iPhones (which will be the flagships until Sept 2020) will contain Intel LTE modems?


Did you read the linked news piece? It seems like they said directly that:

> "The company will continue to meet current customer commitments for its existing 4G smartphone modem product line, but does not expect to launch 5G modem products in the smartphone space, including those originally planned for launches in 2020."


Unless you’ve contracted with Intel you won’t know deceit.


Did you work as a contractor at Intel?



