VMware Buys Nicira for $1.26B (bhorowitz.com)
175 points by daegloe on July 23, 2012 | 53 comments



Vinod Khosla talks about entrepreneurs needing a deep understanding of an area, rather than surface knowledge, to make a difference. I guess Nicira is one such example, where the founders' deep technical know-how helped them disrupt the networking industry with ground-breaking innovation.

Congrats on many levels! Nicira = 1.3 times Instagram in price, but orders of magnitude greater when it came to solving hard technical problems. I know it's a dubious comparison, but I am biased towards entrepreneurs/founders who solve hard technical problems. I'm pretty sure someone will argue that Instagram was pushing the boundaries of sharing, in some fuzzy/meta way improving the human condition and experience in non-tangible ways, and in the process will end up solving hard technical problems (scale, data science, blah blah). Let's just say I disagree. Expecting to be voted down to oblivion.


On the other hand, infrastructure can only be as useful as the applications it supports. Technologists love solving hard technical problems [1], but ultimately, isn't the main goal to improve people's lives? Almost all the engineers I know would prefer to work on sophisticated, interesting problems rather than "another web app," but there isn't much point if it is just for our own edification.

I can understand your disdain for Instagram's frivolity, but people said the same about Twitter, and it ended up playing a major role in the Arab Spring, whereas almost no one outside the tech world knows or really cares who runs their infrastructure. Of course, without this infrastructure none of it would work.

Like you say, comparing the two is like comparing apples to oranges; you can't really argue that one is fundamentally more important than the other.

[1] http://jeffhuang.com/best_paper_awards.html


Twitter is much closer to being classified as infrastructure in my opinion.

Instagram is pretty much a self-contained service. (I'll draw no conclusions about the frivolity it enables; I just wanted to draw a distinction between the two.)


Improving people's lives can take many routes.

I do agree that "infrastructure can only be as useful as the applications it supports," but I disagree strongly that "there isn't much point if it is just for our own edification." Advances are built on top of each other, and some of them can seem obscure to the lay person. If no one were pushing those boundaries, we might not be in a position to discuss whether 'facetagram' is more impactful than 'instabook'. In that way, I'd argue that infrastructure is fundamentally more important due to its multiplier effect (consider what EC2 has enabled).

I believe Marc Andreessen once said that Netscape wouldn't have been possible were it not for all the technology/infrastructure that came before it. He likened it to the icing/frosting on the cake. I can't seem to find the exact quote now, though.


As I see it, there are those who explore combinations of what's possible to find out what's needed and those who make new things possible by solving hard problems.

I agree that we need both. It's just that we seem to get a lot of the first and not enough of the second.


This should have interesting fallout for the EMC/Cisco relationship. Cisco has invested heavily in creating a Nicira competitor internally (http://bits.blogs.nytimes.com/2012/04/19/cisco-announces-its...).

Also, Nicira was contributing a lot to OpenStack networking, at least as of the Diablo release, in the Quantum plugin and, I believe, in nova-network generally. That will have repercussions for HP, Dell, Rackspace, or anyone else heavily invested in OpenStack.

I had the pleasure of working on a small project in which Nicira was involved. I didn't work directly with them, but they seemed really sharp. Many congratulations to them on a great exit.


I think VMware has made it fairly clear for a while that any part of the stack they touch will belong to them if they want it to. Even storage, which must make EMCers nervous.

Several people predicted that Nicira was a feature and that the VMware of networking would be VMware; those predictions turned out to be true, although in a different way than expected.


Why would it make EMC nervous? They own 80% of VMware :)


When EMCers get nervous, they get mean. I'd be more worried about them meddling in VMware's affairs. They've done this before, when they shoved a whole bunch of software products onto VMware that VMware didn't want.

Watch and see what happens. This "revolution" might fizzle pretty quickly if EMC gets involved.


As wmf alludes, you should compare VMware's and EMC's pricing for similar products. I haven't been paying attention for a few years, but back in the day EMC would charge like a bajillion dollars for features they were putting into VMware ESX for free.


The thing about disrupting yourself is that you may end up with a larger share of a much smaller pie.


Nicely done, nicely done. You've got to love it when someone takes a real problem and nails the solution. When I first read about these guys (as an operations person myself) I thought, "Ya know, if they can make this work, they will have a helluva solution."


I'm not very savvy about what goes on with large-scale networks or whatever market Nicira is in, but this blog post tells me extremely little beyond giving everybody high-fives.

What problem is Nicira solving, exactly? (It's certainly possible that the problem space is too advanced to explain in layman's terms, but I can't even tell if that's the case here.)


Why would you expect a blog entry from someone who invested in the company and wants to celebrate its success to explain the technical details of what it does? That's like showing up at a party when the first human walks on Mars and walking away disappointed because everyone's drinking champagne rather than giving a technical presentation on the landing mechanism.

If you want the technical details, try Nicira's website. They have a 3-minute intro video: http://bcove.me/lydym25p (skip to 1:10 for the meat). Or just imagine you had VMs in multiple racks, potentially across multiple data centers, and, on the fly, you could use an API to provision a private 10.0.0.0 subnet so that the machines could talk to each other as if they were directly connected to the same physical network switch. In a nutshell, virtualized TCP/IP tunneled over TCP/IP.
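
To make that concrete, here is a rough sketch of what such a provisioning call might look like. The controller URL, endpoints, and field names are all invented for illustration; this is not Nicira's actual NVP API.

    # Hypothetical network-virtualization API; all names are illustrative.
    import requests

    controller = "https://nvp-controller.example.com/api"

    # Ask the controller for a logical switch spanning racks/data centers.
    lswitch = requests.post(
        controller + "/logical-switches",
        json={"name": "tenant-42-private", "subnet": "10.0.0.0/24"},
    ).json()

    # Attach each VM's virtual interface to the logical switch. To the VMs
    # it looks like one shared L2 segment, wherever they actually run.
    for vif in ["vm1-eth0", "vm2-eth0"]:
        requests.post(
            controller + "/logical-switches/%s/ports" % lswitch["id"],
            json={"attachment": vif},
        )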


It's not clear to me that your example actually motivates a new technology. If these VMs all have IP addresses, then they can already talk to each other; no special subnet is needed. And unless this virtual subnet is performing VPN-style encryption, it does not provide any additional security or data isolation on top of what TCP/IP already provides (i.e., you can only read data addressed to you unless you control the physical hardware and put it in promiscuous mode).

I'm not contradicting the idea that this technology is useful (obviously VMware's acquisition speaks for itself), but I would like to see a motivating example that can't be accommodated with plain TCP/IP.


It looks remarkably similar to VCider, which I personally use and love.

And it DOES provide additional security, because it reduces the number of open ports that you have to firewall between servers. Mistakes do happen when you are managing a large number of servers.


One of the best posts I've read on HN; a great analogy followed by an awesomely succinct explanation.


Because the technical details of what it does are why it's successful.


Exactly! It's the "how".


I thought TCP over TCP was a sin?


It is, but Nicira does IP over IP.


Technically, Nicira does both. One of the tunnelling protocols it uses is the Stateless Transport Tunnelling (STT) protocol, which they effectively created themselves (and which is an IETF draft). It's their mechanism of choice for tunnelling between hypervisors over the physical network.

It's not true TCP, but it looks enough like it to let the network interfaces hardware-offload all the tunnelling, saving a lot of CPU power.

This allows throughput in an STT software tunnel to reach the same maximums as "raw" TCP through a given interface.
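
For a feel of the layering, here's a scapy sketch. This is not the real STT header layout, just the idea of hiding an inner Ethernet frame behind a TCP-looking outer header so that NIC offloads (TSO/LRO) still apply; the addresses are made up, and 7471 is the TCP port assigned to STT.

    # Illustrative only: wrap an inner tenant frame behind a TCP-like
    # outer header, the trick that lets NICs hardware-offload the tunnel.
    from scapy.all import Ether, IP, TCP, Raw

    inner = (Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02")
             / IP(src="10.0.0.1", dst="10.0.0.2")
             / Raw(b"tenant payload"))

    outer = (IP(src="192.168.1.10", dst="192.168.1.20")  # hypervisor IPs
             / TCP(sport=7471, dport=7471)               # STT-like framing
             / Raw(bytes(inner)))                        # inner frame as payload

    outer.show()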


The problem is provisioning and managing networks in a large-scale data center deployment. Networks are reasonably simple in the small (hey, plug in the cable and the light goes green, done!), but in a larger network (800 or so ports, up to thousands and thousands), lots of little things need to be dealt with.

When Google first started building their own switches, I thought it was the stupidest idea ever. But when you look at it objectively, having the connectivity is essential, and putting the 'smarts' in a place where they can be easily updated/modified/copied etc. is vital. The network is just wire, and switch companies put a lot of 'value add' in their switches, which basically means their second-tier network programmers are writing code you can't code-review but that can kill your network at any time. At Blekko, the few outages we've had were all caused by a switch software programmer. How scary is that?

So if you're wondering what this means for the rest of us: it means there is a market for a rock-solid, dumb-as-a-post set of switches that do absolutely nothing to the traffic except forward it. They restart in milliseconds, not seconds. They achieve the lowest cost per port because they have very little firmware, and the firmware they do have can, for all practical purposes, be proven correct. The money is spent on really reliable transceivers, low-noise cross-connects, and just enough SNMP work to give the upper levels of the system a clue as to whether they are overloaded or not. They might do link aggregation. We'll see if they appear or not.
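
That "just enough SNMP" really is small. Polling standard IF-MIB counters is enough to spot an overloaded port; here's a sketch using the net-snmp command-line tools (the host and community string are made up):

    # Estimate inbound load on one port from two ifInOctets samples.
    import subprocess, time

    HOST, COMMUNITY = "switch1.example.com", "public"
    IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10.1"  # IF-MIB ifInOctets, ifIndex 1

    def in_octets():
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, IF_IN_OCTETS])
        return int(out)

    a = in_octets()
    time.sleep(10)
    b = in_octets()
    print("~%.1f Mbit/s inbound on port 1" % ((b - a) * 8 / 10 / 1e6))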


I have an idea for an open platform that just has PCIe slots and FPGAs. Third parties can write the firmware that runs on the FPGA layer. Add in banks of DIMM slots and you can run memcached or redis directly on the minimal box. No operating system. Since network cards boot PXE, the firmware on the device could get loaded over the network. Routing tables could get compiled from an HLL -> Verilog -> FPGA, as in the sketch below.
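
Just to illustrate that last step, a toy "compiler" that turns a static routing table into a Verilog case statement. This is purely illustrative; a real pipeline would use TCAMs and longest-prefix match, not exact matching on /24s.

    # Toy HLL -> Verilog step: exact-match /24 routes only.
    routes = {"10.0.1.0/24": 1, "10.0.2.0/24": 2}  # prefix -> output port

    def to_verilog(routes):
        lines = ["module route(input [23:0] prefix, output reg [3:0] port);",
                 "  always @(*) case (prefix)"]
        for cidr, port in routes.items():
            a, b, c, _ = cidr.split("/")[0].split(".")
            key = (int(a) << 16) | (int(b) << 8) | int(c)
            lines.append("    24'h%06x: port = %d;" % (key, port))
        lines += ["    default: port = 0;", "  endcase", "endmodule"]
        return "\n".join(lines)

    print(to_verilog(routes))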

What do you think?


Check out http://netfpga.org/

Ultimately I think such a design would have trouble competing with a network processor (or a switch chip if you're doing something simple).


That is great for research purposes. The platform I'm describing would use off-the-shelf PCIe NICs and RAM. I think of it as the PostgreSQL of networking gear: cheap and all-purpose. It won't have speed or density, but it would have robustness and flexibility. Services that currently run on general-purpose servers could get targeted towards open network hardware platforms. I don't have any proof, but I don't think there would be a shortage of people doing interesting things, like building hardware in-memory key-value stores.

FPGAs would be cost-competitive in a system like this. Routing multiple 100 MB/s streams shouldn't be a problem.

Convinced? Maybe a little?


> I don't have any proof, but I don't think there would be a shortage of people doing interesting things, like building hardware in-memory key-value stores.

What you are describing is also known as a CAM table. For a preliminary introduction, check this out: http://www.pagiamtzis.com/cam/camintro/


How would you compete on port density with PCIe? 72-port 10GbE in 1U (http://gnodal.com/Products/GS-Series/GS7200/) is already available. Once the "pure play, nothing but a switch" manufacturers come out, this price will hit rock bottom.


You are correct; it couldn't, initially. What the platform I outlined has is flexibility and cost, but mostly flexibility. Probably the high end of port density would be ~40 (using dual-port NICs), and that would be for 2U. But you could start to run stuff on the switch that has historically run on HAProxy, Varnish, nginx, etc. If there are SATA ports, one could code up a Backblaze pod that doesn't suck and include grep directly on the SATA port.


Arista came out last year with something vaguely similar (although lacking the modularity of cards) to what you've got in mind. I think it was originally targeted at high-frequency-trading apps, where proximity to a colocation site such as 350 Cermak in Chicago makes a huge difference in latency. I never heard much more about it, though.

http://www.aristanetworks.com/en/products/7100series/7124fx/


I really like this, thanks for the pointer.


You should build them. Why wait for a SuperMicro to get into the fray when you could do it yourself?


There are already builders in China that you can approach for custom solutions. I also wouldn't expect Marvell or Intel (especially with their acquisition of Fulcrum Microsystems last year) to sit by idly.

Another interesting thing we'll start to see is the use of concurrent programming languages by the established vendors to stay heavily involved and, dare I say, relevant. Juniper announced a few months back that they were going to start using Scala and Akka from Typesafe (http://typesafe.com/company/news/23506). It is not hard to guess that this is for their SDN and OpenFlow plans.


If you want security at the network layer so that only certain users have access to the finance server, or you want to ensure that your web server always has higher priority than your email server in getting out to the internet, or you want to make sure that your password server at Yahoo can only send and receive encrypted traffic: these are the kinds of things that are made easier, cheaper, and more manageable via SDN, and by extension Nicira. All of them are quite difficult to accomplish in large networks.
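
As a toy illustration of what "policy as software" can look like, the rule format below is made up (it is not OpenFlow or Nicira's API); a real controller would compile policies like these down to flow-table entries on the switches:

    # Invented policy format, for illustration only.
    policies = [
        # Only the finance group may reach the finance server.
        {"dst_ip": "10.1.0.5", "unless": {"src_group": "finance"},
         "action": "drop"},
        # The web server outranks the mail server on the uplink.
        {"src_host": "web-server", "action": "enqueue", "queue": "high"},
        {"src_host": "mail-server", "action": "enqueue", "queue": "low"},
        # The password server may only speak TLS.
        {"dst_host": "password-server", "unless": {"dst_port": 443},
         "action": "drop"},
    ]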


No, SDN really isn't about this at all; it's about segregating the control and data planes, and in some respects about bringing a new level of programmability to the network layer. The scenarios you describe are easily achieved without SDN.


I think the key words are "easier, cheaper and more manageable": SDN does that by abstracting these things out of the network hardware and into software, specifically virtualized software. That is why VMware sees a good fit in Nicira.


They are working on Software Defined Networking (SDN); a rather crude definition is the removal of "proprietary" from networking equipment, putting the control into a more unified and programmable interface. In other words, Cisco should be even more concerned about SDN than about the market share they've lost to Juniper.


It's certainly not advanced. In a nutshell, they are moving the networking stack away from proprietary hardware and towards x86, although that of course does not preclude proprietary features from popping up in other places. This (http://highscalability.com/blog/2012/6/4/openflowsdn-is-not-...) is probably the best "layperson" (and I use that term _very_ loosely in this forum) explanation of the problem space.


If it's not that advanced, why aren't you selling your company to VMware today? Clearly this problem is extremely difficult, and, having met some of the Nicira team, I'd say it could not have been tackled by most software engineers. There are some wicked distributed-systems problems, among other things.


You clearly don't comprehend the context in which that statement was made. Specifically: "It's certainly possible that the problem space is too advanced to explain in layman's terms." Conceptually it's not advanced at all; in fact, it's not even new. Nowhere do I allude to the implementation being trivial, which it clearly is not.



> this blog post tells me extremely little beyond giving everybody high-fives.

It also crashed my browser with that mp3 embed, FWIW.


This post links to an earlier one that goes into more detail:

http://bhorowitz.com/2012/02/06/the-future-of-networking/



Nicira lets you have more than 4096 VLANs. (The 802.1Q VLAN tag has a 12-bit ID field, so 2^12 = 4096 segments is the ceiling on a traditional network; tunnel IDs are much wider.)


Can someone explain what problem Nicira solves? I've been to their website, but I really don't understand what they're doing.


Virtual networks and routing. IOW, no need for big-iron routers, e.g. from Juniper and Cisco.

Cost reduction is a major advantage, but network flexibility and ease of setup are also wins.


Isn't performance a problem though?


Apparently not. I read some technical explanation of it, but I can't remember the details.


Is it really possible to stay independent (not bought out) while being successful and in business when your area of operation (virtualization in this case) closely overlaps with that of another, much bigger, wealthier company (VMware in this case)?

I like the Steve Jobs quote where he says he doesn't like companies whose sole goal is to eventually get bought out by a bigger player. I personally would like to build a company, cultivate a culture within that company, and stay independent, even if that means more competition and less cash in hand in the short run.


Arguably the cloud and enterprise virtualization markets are separate and Nicira could have made money from cloud providers.


They say the only requirement of the physical network is IP connectivity. Technically, it's straightforward to build a virtual network on top of IP.

The real insight is seeing that IP virtualization is the future of networking. However, I'm suspicious of the efficiency of that kind of virtualization, as it might consume more CPU and introduce some lag between networks.
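
The encapsulation itself really is simple; here's a scapy sketch of plain IP-in-IP (addresses are examples). It also shows where the CPU cost comes from: every packet gets wrapped and unwrapped in software unless the NIC helps.

    # IP-in-IP: an inner tenant packet carried as the payload of an outer
    # packet between hypervisors. scapy sets the outer proto to 4 itself.
    from scapy.all import IP, UDP, Raw

    inner = IP(src="10.0.0.1", dst="10.0.0.2") / UDP(dport=5353) / Raw(b"hi")
    outer = IP(src="192.168.1.10", dst="192.168.1.20") / inner

    outer.show()  # outer proto = 4 (ipencap), inner packet intact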


Conveniently, they recently posted on that topic: http://networkheresy.com/2012/06/08/the-overhead-of-software...



