Why Networks Need ASICs (eetimes.com)
48 points by zxv on July 9, 2016 | 26 comments



"John Maddison is senior vice president of products and solutions at Fortinet."

It's an ad for a crypto chip.


"It's a crummy commercial!" [1]

https://youtu.be/2-H4LT-WZWE


"Setting aside issues like the shortage of available talent and the average of four years to develop and bring a new ASIC to market, the material costs alone are high. The manufacturing of a typical two-gram chip requires 1.6 kilograms of fossil fuel, 72 grams of chemicals, and 32 kilograms of water. The materials involved in making a 32Mbit RAM chip can add up to as much as 630 times the mass of the final product."

You realize that this is replacing a number of general-purpose chips and reduces the net amount of materials consumed to accomplish a task, right? If you're not spending those 72 grams of 'chemicals' on your ASIC, you're spending 720 grams on general-purpose CPUs.


Actually, large data centers are using FPGAs at the network edges. The FPGA does data compression and/or encryption, which optimizes network bandwidth and doesn't tie up the CPU. I expect to see distributed network routing code so communication can be peer to peer with no routers. I expect to see firewalls specific to a node (e.g. only web traffic from the traffic splitter). I expect to see security code (e.g. no exfiltration from the confidential store-only machine).

All of this without involving the CPU... ASIC is too expensive, but FPGAs are great. Intel bought Altera (about 45% of the FPGA market), which annoys me / excites me because I use Altera. I expect next-gen CPUs to have an embedded FPGA so you can make your own instructions.


ASIC is expensive at low volume, but cheaper than an FPGA at high volume, so it depends on the application.
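The crossover is just where the ASIC's up-front NRE gets amortized below the FPGA's per-unit premium. A rough break-even sketch in Python (all figures are made-up assumptions, not real quotes):

    # ASIC-vs-FPGA break-even: ASICs front-load a large NRE (masks, design,
    # verification) but have a much lower unit cost, so the total-cost curves
    # cross at some volume. Numbers below are illustrative assumptions only.
    ASIC_NRE = 5_000_000   # assumed one-time cost, USD
    ASIC_UNIT = 15         # assumed per-chip cost at volume, USD
    FPGA_UNIT = 300        # assumed per-device cost of a comparable FPGA, USD

    def total_cost(units, nre, unit_cost):
        return nre + units * unit_cost

    # Volume at which the ASIC's NRE is fully amortized by the unit-cost savings.
    break_even = ASIC_NRE / (FPGA_UNIT - ASIC_UNIT)
    print(f"break-even at ~{break_even:,.0f} units")   # ~17,544 with these numbers

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} units: ASIC ${total_cost(n, ASIC_NRE, ASIC_UNIT):,.0f}, "
              f"FPGA ${total_cost(n, 0, FPGA_UNIT):,.0f}")

With those (assumed) numbers the FPGA wins easily at a few thousand units and loses badly at a few hundred thousand, which is the whole "depends on the application" point.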


From what I've heard, "high volume" is now so high that hardly anybody really expects cost savings from "true" (i.e. new full layout, one customer) ASICs anymore. The vast majority of "new", "custom" chips are sold to multiple customers and/or are light customizations of existing designs.


Right, I think this has been a trend for a while -- standard-product ASICs ending up in multiple brands of similar products.

Just the masks for manufacturing these large chips on the newer processes are millions of dollars -- not even factoring in all the design work to get to that point. But they achieve far higher density than you can get with an FPGA.


Who is doing this? What "large data centers"? Who are they doing it for? Most colo facilities just sell transit if anything. What is "distributed network routing code"? Traffic splitter?


What does "peer to peer with no routers" mean? Also, with all the benefits of FPGAs, you seem to gloss over power consumption. Any concern there?


I understand p2p without routers to mean a full-mesh topology where every device is connected directly to every other, without any need for indirection.

FPGAs have historically been a bit piggy on power consumption, since they've mostly been used by smaller shops to prototype and validate HDL designs without going to a fab, not in datacenter use cases. But I'd expect newer FPGAs to be able to power down unused LUTs instead of forcing everything to stay powered on.

Using FPGAs that are massively oversized for the logic they implement makes little sense anyway even if you have a super high pipeline stage count.


Large data centers are doing compression and encryption at the application layer on ordinary Intel CPUs.


Additionally, FPGAs offer better cryptographic algorithm agility than ASICs. With an FPGA you stand a much better chance of being able to upgrade your expensive network hardware with quantum resistant algorithms.


AES is quantum safe already. Do you really need hardware offload for that less-than-once-per-connection key exchange?


Networking seems to be moving towards programmable ASICs, with companies like Barefoot Networks, Netronome, Cavium, and others developing programmable network cards, switches, and routers. So yes, it's application-specific to networking, but what the chip actually does with the packets is up to whoever buys it.

If Xilinx / Altera can get P4 to compile to their FPGAs (Xilinx has a basic version of this), I suspect the use of FPGAs in networking will become more widespread. FPGAs seem to get higher-throughput SerDes before ASICs do. Current FPGAs have up to 144x 30Gbps SerDes.
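For a sense of what "programmable" means here: the model P4 exposes is basically a match-action pipeline, where the buyer supplies the parser, tables, and actions, and the silicon runs them at line rate. A toy Python sketch of the abstraction (field names and actions invented for illustration, nothing to do with any vendor's API):

    # Toy match-action pipeline in the spirit of P4: look up a header field in
    # a table and apply whatever action the "program" bound to it. Real hardware
    # does this at line rate; this only shows the programming model.
    from typing import Callable, Dict

    Packet = Dict[str, int]            # pretend a parsed header is a dict of fields
    Action = Callable[[Packet], Packet]

    def set_egress(port: int) -> Action:
        def action(pkt: Packet) -> Packet:
            pkt["egress_port"] = port
            return pkt
        return action

    def drop(pkt: Packet) -> Packet:
        pkt["egress_port"] = -1        # convention here: -1 means drop
        return pkt

    # The "program" is data: a table keyed on the destination IP, bound to actions.
    ipv4_table = {
        0x0A000001: set_egress(1),     # 10.0.0.1 -> port 1
        0x0A000002: set_egress(2),     # 10.0.0.2 -> port 2
    }

    def pipeline(pkt: Packet) -> Packet:
        action = ipv4_table.get(pkt.get("dst_ip", 0), drop)
        return action(pkt)

    print(pipeline({"dst_ip": 0x0A000002}))   # forwarded out port 2
    print(pipeline({"dst_ip": 0x0A0000FF}))   # no match -> dropped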



Cool project! I read your poster from the P4 summit.


Agree. My co used to sell memory ICs to (primarily) networking folks. We could see the shift from FPGAs to NPUs happen in real time. The holdouts were always well-funded engineering teams building core routers.


I was quite confused by the article. It's old news. As one of the comments there said:

Custom ICs for networking were first developed decades ago. Everything from carrier class routers to home routers contain networking ASICs. Cisco and Juniper consume many ASICs.


The article is a rebuttal to NFV hype that proposes to run all firewalls, NAT, IDS, IPS, DPI, VPN, SD-WAN, etc. in software.


As bandwidth goes up, general-purpose CPUs become inefficient at handling traffic again. Hell, pfSense on my Xeon E5-2403v2 at home eats up 25% of a CPU core WITH TSO and TCO enabled when maxing out my 100Mbps, meanwhile my Ubiquiti EdgeRouter X, running on a wimpy little low-power embedded processor, sits around 2% utilization doing the same task. This is without any crazy firewall rules, IDS, etc., just three port forwards in place. I can't imagine what kind of hardware I'd need to even handle 1Gbps with software routing, let alone the duties of my 24-port TP-Link switch at home that sips ~20W of power with all ports active and can handily route (and yes, I mean route, it is a Layer 3 switch) 1 Mpps.
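For a sense of scale, the packet-rate arithmetic (quick Python sketch, standard Ethernet framing assumed):

    # On-wire size per packet = frame (incl. FCS) + 8 B preamble/SFD + 12 B
    # inter-frame gap. Frame sizes below are the Ethernet minimum (64 B) and
    # the usual untagged maximum (1518 B, i.e. a 1500-byte payload).
    def pps(link_bps, frame_bytes):
        return link_bps / ((frame_bytes + 8 + 12) * 8)

    for link in (100e6, 1e9, 10e9):
        print(f"{link/1e6:>6.0f} Mbps: {pps(link, 1518):>10,.0f} pps (1500 B payload), "
              f"{pps(link, 64):>10,.0f} pps (64 B minimum)")

    # 100 Mbps ->   ~8,127 pps large packets,   ~148,810 pps minimum-size
    #   1 Gbps ->  ~81,274 pps large packets, ~1,488,095 pps minimum-size

So a 1Gbps link full of small packets is already in the neighborhood of that 1 Mpps figure.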


Netmap-fwd on an Atom is faster than your edge router.

And yes, pfSense is getting some netmap/DPDK love.

I am the owner of Netgate (the company behind pfSense).

And your figures are full of shit. I do 1Gbps on a 2.4GHz Atom.


To be fair, this was running on ESXi, so there was a performance penalty to be had there. Got really annoyed when I switched to XenServer and I had to disable all hardware offloading (because XenServer is stupid and FreeBSD doesn't like stupid virtio drivers) and it would regularly peg a core at 100% when maxing out the connection, which was ultimately why I switched to a cheap hardware solution (still eyeing the cheap ~$300 pfSense appliance, because EdgeOS lacks a TON of features I used regularly).

Edit: Netmap would be lovely.


"pfSense on my Xeon E5-2403v2 at home eats up 25% of a CPU core WITH TSO and TCO enabled when maxing out my 100Mbps"

That sounds like thermal throttling, or software failure. There is no way a single 100Mbps link can come close to making a current CPU sweat.


This is a ThinkServer TD340, if it was thermal I would actually hear the fans - it remains whisper quiet. To be fair, this is virtualized, so there's a 3% performance penalty give or take.


TSO doesn't help your router, sorry. What is TCO? 25% of one core at 100Mbps? How many pps? (This doesn't mean that your limit is 400Mbps per core, far from it.)


TCP Checksum Offloading, which matters for NAT since it has to re-checksum each packet it rewrites. This was just running speedtest.net; I can't say I bothered to measure the packet rate, because a normal HTTP download had the same effect. I ended up running into more issues when I switched from ESXi to XenServer, so I don't really have the ability to recreate the results. I'd probably have fewer issues running on bare metal (and I'll probably switch to that eventually).
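For illustration, the re-checksumming amounts to this per rewritten packet: the Internet checksum either gets re-summed over the header or patched incrementally, RFC 1624 style. A minimal Python sketch of the idea (not how pfSense/FreeBSD actually implement it):

    # Ones'-complement Internet checksum plus the RFC 1624 incremental update
    # a NAT can use when it rewrites one 16-bit word of a header.
    def fold(x):
        while x > 0xFFFF:
            x = (x & 0xFFFF) + (x >> 16)
        return x

    def checksum(data: bytes) -> int:
        if len(data) % 2:
            data += b"\x00"
        total = sum(int.from_bytes(data[i:i+2], "big") for i in range(0, len(data), 2))
        return ~fold(total) & 0xFFFF

    def incremental_update(old_csum: int, old_word: int, new_word: int) -> int:
        # RFC 1624 eqn. 3: HC' = ~(~HC + ~m + m')
        return ~fold((~old_csum & 0xFFFF) + (~old_word & 0xFFFF) + new_word) & 0xFFFF

    # Demo: rewrite the last 16 bits of an address and check both methods agree.
    before = bytes.fromhex("c0a80001")                      # 192.168.0.1
    after  = bytes.fromhex("c0a80064")                      # 192.168.0.100
    patched = incremental_update(checksum(before), 0x0001, 0x0064)
    assert patched == checksum(after)

Either way it's work on every rewritten packet, which is why it shows up in CPU utilization when it isn't offloaded.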



