Dual 10 GbE built-in? Sweet. Wonder what the price will be; a dual 10 GbE Intel card goes for $500 alone.

More importantly – will they provide zero-copy I/O like you can get with Intel network cards via their DPDK [1] or PF_RING/DNA [2]?

[1] http://dpdk.org/
[2] http://www.ntop.org/products/pf_ring/dna/




At $400 I assume you're thinking of something like an Intel X520-DA2 plus optics. If you can tolerate the power, you can do an 82599 with dual PHYs for more like $150-200.

Obviously I have no idea about the network controller or its SDK/driver support.


You don't really want to buy a 25W SoC and then blow a load more power on the networking. It might be expected that you use 802.3ap (backplane Ethernet).


I was thinking of solutions for tens of Gb/s on today's x86 boxes. Dollars and power are both budgets, so it's all a tradeoff.

WRT the Opteron A1100, yes, I can see your point. Something like a box of A1100 blades plugging 802.3ap into a common backplane, a Trident chip there, and then a bunch of (Q)SFP+ ports northbound. A couple hundred Gb/s for around 150 watts of networking.

When I see the A1100 I think of an I/O node with tens of SATA disks attached. In that case I'm only getting 10 or 20 per rack. A backplane makes less sense to me there; running two DAC PHYs per box to a ToR switch I could see.


Yes, I was. Didn't know this could be done cheaper, thanks!


DPDK is Intel's turf, and PF_RING only supports the igb/ixgbe/e1000 drivers. For out-of-the-box usage, netmap or Linux PACKET_MMAP (though the latter is not entirely zero-copy) should be possible.
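For anyone curious about the PACKET_MMAP route: it maps an AF_PACKET RX ring into userspace, so the kernel still copies each frame into the ring (hence "not entirely zero-copy"), but you skip the per-packet recvfrom() copy and syscall. A minimal receive loop might look roughly like the sketch below (TPACKET_V2, illustrative ring sizes, root/CAP_NET_RAW required, most error handling omitted):

    /* Sketch: AF_PACKET + PACKET_RX_RING (PACKET_MMAP), TPACKET_V2.
     * Needs root or CAP_NET_RAW; without a bind() it sees all interfaces. */
    #include <stdio.h>
    #include <unistd.h>
    #include <poll.h>
    #include <arpa/inet.h>        /* htons */
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <linux/if_packet.h>  /* tpacket_req, tpacket2_hdr, TP_STATUS_* */
    #include <linux/if_ether.h>   /* ETH_P_ALL */

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }

        int ver = TPACKET_V2;
        setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver));

        /* Ask the kernel for a shared RX ring: 64 blocks of 64 KiB,
         * 2 KiB frames -> 2048 frame slots, 4 MiB total. */
        struct tpacket_req req = {
            .tp_block_size = 1 << 16,
            .tp_block_nr   = 64,
            .tp_frame_size = 1 << 11,
            .tp_frame_nr   = (1 << 16) / (1 << 11) * 64,
        };
        setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

        size_t ring_len = (size_t)req.tp_block_size * req.tp_block_nr;
        unsigned char *ring = mmap(NULL, ring_len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        if (ring == MAP_FAILED) { perror("mmap"); return 1; }

        /* The kernel writes frames into the ring and flips TP_STATUS_USER;
         * we read them in place and hand the slot back. */
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        unsigned int frame = 0;
        for (;;) {
            struct tpacket2_hdr *hdr = (struct tpacket2_hdr *)
                (ring + (size_t)frame * req.tp_frame_size);
            if (!(hdr->tp_status & TP_STATUS_USER)) {
                poll(&pfd, 1, -1);   /* nothing ready, wait */
                continue;
            }
            printf("got %u bytes\n", hdr->tp_len);
            hdr->tp_status = TP_STATUS_KERNEL;
            frame = (frame + 1) % req.tp_frame_nr;
        }

        munmap(ring, ring_len);
        close(fd);
        return 0;
    }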


Nice. How did I not know about these? Time to do some research!


I don't think that's "sweet"; I think that's a bad decision. What if I don't need two of those per ten ARM cores? Now I'm just paying for gates I don't need.


What if you don't need AES encryption? Now you're just paying for gates you don't need. What if you don't need SIMD instructions? Now you're just paying for gates you don't need.

It doesn't matter. Modern processors have the complete opposite problem. It isn't that transistors are expensive, it's that they're so cheap you end up with too many and they generate too much heat. If you can stick a block on there which 60% of your customers can use and the other 40% can shut off to leave more headroom for frequency scaling, it's a win.

Also, the number of gates you need for a network controller is small.


You haven't been paying for the silicon for a while; when you buy a chip you're really paying for the design, and it's cheaper to design one chip with all the features people might need.


It's much cheaper to make one chip that serves both you and the other 99% of people (who need two Ethernet ports per CPU) than to make a specific chip for each market.


Then you buy a different chip if they're not cost effective.

As an entry into the server market, this sounds awesome; for things like storage aggregation and VM migration, gigabit Ethernet is becoming a real bottleneck for many applications as core counts have gone up.


In server farms, who cares whether the processor costs $400 or $500? Energy usage is the biggest cost over time, and this silicon (presumably) isn't powered if it's not used.


We have more gates than we know what to do with. There is little correlation between die area and end-user price.


http://info.iet.unipi.it/~luigi/netmap/ is supposed to be generic to any NIC
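If you haven't seen it before, the userspace side of netmap is pleasantly small. A minimal receive loop using the NETMAP_WITH_LIBS helpers (nm_open/nm_nextpkt) might look roughly like the sketch below; "netmap:eth0" is just a placeholder interface name and the whole thing is an untested sketch:

    /* Sketch: netmap receive loop via the NETMAP_WITH_LIBS helpers.
     * "netmap:eth0" is a placeholder; use your actual interface. */
    #define NETMAP_WITH_LIBS
    #include <net/netmap_user.h>
    #include <poll.h>
    #include <stdio.h>

    int main(void)
    {
        /* Detaches the NIC from the host stack and maps its rings
         * into this process. */
        struct nm_desc *d = nm_open("netmap:eth0", NULL, 0, NULL);
        if (d == NULL) {
            fprintf(stderr, "nm_open failed\n");
            return 1;
        }

        struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
        struct nm_pkthdr h;

        for (;;) {
            poll(&pfd, 1, -1);   /* wait for packets */
            unsigned char *buf;
            while ((buf = nm_nextpkt(d, &h)) != NULL) {
                /* buf points straight into the shared ring buffer: no copy. */
                printf("received %u bytes (first byte 0x%02x)\n", h.len, buf[0]);
            }
        }

        nm_close(d);
        return 0;
    }

You need the netmap headers installed (they're in the FreeBSD base system; on Linux it's an out-of-tree module), and either a NIC driver with netmap support or the emulation mode mentioned downthread.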


It needs some driver modifications, but they are not huge. It still only supports three or so of the most common cards on FreeBSD, though.


There's a new emulation mode in -HEAD http://beta.freshbsd.org/commit/freebsd/r260368



