All of these things are based on the Broadcom Trident II chip. If you want it cheap, don't go to Juniper - get one from Quanta. You can have a 32x40G switch for $6,000.



On a similar note, I just recently became aware of Cumulus[1] Debian-based switches (though the good bits are closed source) from, among others, edge-core[2] (via a presentation by PaaS provider http://zetta.io) -- e.g.:

http://whiteboxswitch.com/products/edge-core-as4600-54t

[1] http://cumulusnetworks.com/support/faq/

[2] http://www.edge-core.com/


To clarify, the only part that is closed is "switchd", which is a userspace program that watches the kernel data structures (route tables, neighbor tables, bridges, ports, vlans, etc.) and programs the hardware to match. It links against proprietary silicon vendor SDKs, and programs registers whose descriptions were given to us under NDA.

Without this part, everything works the same, but is of course not hardware accelerated. So the 100% open source parts of Cumulus Linux would still make a great Network OS for a router/switch VM.
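
To make the pattern concrete, here is a minimal sketch, assuming the pyroute2 library, of what "watch the kernel and program the hardware" looks like; program_hardware() is a hypothetical placeholder for the proprietary SDK glue, not our real code:

    # Minimal sketch of the switchd pattern: subscribe to rtnetlink and
    # react as the kernel's route table changes. program_hardware() is a
    # hypothetical stand-in for the NDA'd silicon SDK calls.
    from pyroute2 import IPRoute

    def program_hardware(action, msg):
        # Hypothetical: push the change into the switch ASIC via the SDK.
        print(action, msg.get_attr('RTA_DST'), msg.get_attr('RTA_GATEWAY'))

    with IPRoute() as ipr:
        ipr.bind()  # subscribe to netlink broadcast groups
        while True:
            for msg in ipr.get():  # blocks until the kernel sends events
                if msg['event'] == 'RTM_NEWROUTE':
                    program_hardware('add', msg)
                elif msg['event'] == 'RTM_DELROUTE':
                    program_hardware('del', msg)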

We don't yet have an official VM version, but that is something we will have in the future.

- nolan co-founder/CTO Cumulus Networks


What is the flexibility with "open" switches? To get line-rate switching, I'm guessing you're still limited by the hardware? Is the benefit that you can more easily set up routing tables (instead of depending on the switch vendor's capabilities), vlans, etc., just by creating them in userspace and then pushing them down to the hardware?

Or can you actually get fairly low level, like implementing your own algorithms for channel bonding? A while back I wanted to do some L7 inspection, but could only get like 10G per server, and we had 40G coming in. EtherChannel didn't acceptably balance out the traffic. Doing so would have required dealing with one of the network processor vendors and all that mess. Would an open switch platform make this a straightforward exercise?


You are limited by the hardware, and what our code supports programming into it.

The big advantages are reusing config management tools like puppet/chef/ansible/etc, and monitoring tools like collectd/graphite/nagios/etc.
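
Because the front-panel ports show up as ordinary Linux interfaces (swp1, swp2, ...), generic scripts work unchanged. A small illustration (swp1 is just an example port name), reading the same kernel counters the collectd/nagios plugins read:

    # Front-panel ports are plain Linux interfaces, so the standard
    # /sys/class/net counters just work. "swp1" is an example port name.
    from pathlib import Path

    def iface_counters(iface):
        stats = Path(f'/sys/class/net/{iface}/statistics')
        return {f.name: int(f.read_text()) for f in stats.iterdir()}

    print(iface_counters('swp1')['rx_bytes'])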

Also, it is super easy to run services on the switches. For example, you can easily run isc-dhcpd on each ToR, instead of DHCP relaying back to one mega DHCP server. Distributing services like this scales better, and reduces the blast radius of service failure.

I've been experimenting with the idea of a transparent caching TFTP proxy server running on the top of rack switch, to make PXE scale better to large clusters.

The important thing is that anyone who has the know-how to write a transparent caching TFTP proxy server for Linux can just go ahead and do that on a Cumulus Linux switch! You don't need to come to us and convince us that it is a good idea and then wait for us to actually implement it. Compare that to asking for features from a traditional switch vendor...
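
As a rough sketch of how little that takes, here is the caching idea using the third-party tftpy library (UPSTREAM is a hypothetical address for the central PXE server, and the sketch skips path sanitization and error handling):

    # Rough sketch of a caching TFTP proxy: fetch each requested file
    # from the upstream server once, then serve it locally to the rack.
    import os
    import tftpy

    UPSTREAM = '10.0.0.1'                # hypothetical central PXE server
    CACHE_DIR = '/var/cache/tftp-proxy'

    def fetch_or_cache(path, raddress=None, rport=None):
        # Called by tftpy on each read request; must return a file object.
        local = os.path.join(CACHE_DIR, path.lstrip('/'))
        if not os.path.exists(local):
            os.makedirs(os.path.dirname(local), exist_ok=True)
            tftpy.TftpClient(UPSTREAM, 69).download(path, local)
        return open(local, 'rb')

    # Run on the ToR switch; PXE clients in the rack point at it.
    tftpy.TftpServer(CACHE_DIR, dyn_file_func=fetch_or_cache).listen('0.0.0.0', 69)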

- nolan co-founder/CTO Cumulus


We've been loving Cumulus + Quanta for 10Gb and 40Gb, in that it's more manageable than Cisco (for our environment) and a fraction of the cost. We end up using it at 1Gb too, but it's just a price match there, instead of a win.


thanks, good to know!



