Paypal To Drop VMware From 80,000 Servers and Replace It With OpenStack (forbes.com/sites/reuvencohen)
233 points by vegasbrianc on March 27, 2013 | 124 comments



I hate cloud headlines. What's the hypervisor? VMWare is a hypervisor (ESX) and management applications (vCenter/vSphere, vCloud Director, etc). Openstack is only the management applications. And VMWare contributes the most interesting part of OpenStack, anyway, which is the virtual networking based on Nicira's OpenFlow.

My personal experience with OpenStack (admittedly late Diablo release timeframe) was that it was borderline unusable outside of the developer setup of a standalone node on Ubuntu 11.04 backed with KVM (edit: and that was only Nova/Compute, not even including Swift/Storage).

Anyway, very interested to see if this is KVM, Xen, or something else backed. Also very interested to see when they reverse their decision (probably after VMWare comes down on pricing by 10-20 percent).


>What's the hypervisor?

OpenStack supports multiple hypervisors. They are implemented as drivers - and are in this part of the OpenStack codebase:

https://github.com/openstack/nova/tree/master/nova/virt/
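Which driver actually gets used is a per-compute-node config setting. Roughly (a sketch from memory; the exact option names have shifted between releases):

  # /etc/nova/nova.conf
  [DEFAULT]
  compute_driver = libvirt.LibvirtDriver   # KVM/QEMU via libvirt
  libvirt_type = kvm                       # or qemu, xen, lxc
  # other drivers live alongside it, e.g. xenapi.XenAPIDriver, vmwareapi.VMwareESXDriver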

See my slides from a talk I did yesterday.

https://github.com/sc68cal/openstack-theory-and-practice-pre...


I know this, I'm asking which one of the supported hypervisors Paypal planned on using in their deployment.


Then they should say that. Reading their website tells me they are doing that bullshit thing where they divorce the tech from the marketing completely.

> OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface.

Should it (??) really say :

> Openstack is a ton of scripts and tools and things which pull lots of various bits of software together and allow you to have redeployable virtual machines all over the place


Almost definitely KVM. The Ubuntu/KVM/OpenStack combo is the bread-and-butter deployment scenario outside of a small number of folks (notably Rackspace, which is OpenStack on XenServer^TM).

Also, <= Diablo and to a large extent Essex releases were known to be (politely speaking) less-than-usable OOTB. Folsom was really the first release that could properly be called usable for a production IaaS, and Grizzly is even better.

Source: knowing these things is my job.


Many thanks for the info. I had a hell of a time trying^H^H^H^H^H^Hfailing to get Diablo working with XenServer, too. No ill will towards the project, and glad to hear it's maturing. It seemed to be in relatively good hands with Rackspace.


Hmm, eBay has worked extensively with rackspace for cloud-platforming though, so I think it's mostly Xen: http://www.rackspace.com/knowledge_center/case-study/an-open... http://hardware.slashdot.org/story/11/08/03/0219230/ebay-dep...


The Diablo version was the first version that really combined the projects. Before Diablo each system had its own auth/user system, api style, ...

We've come a long way in 2 years :) Redhat (EPEL/Fedora) and Ubuntu now have built-in packages that are much more solid.

That said, the way to think about OpenStack is like you think about the linux kernel. Most people want a distro, not raw source to setup their environments. Over the last couple years we've seen real advances in distros.


I've kept up a little on OpenStack since then and know people that work with the technology. I'm sure the keystone integrated auth system (or whatever it's called now) really improved things, and it seems like people are making real progress with pushbutton deployments using the usual config/orchestration suspects. Congrats!

That said, I know more people who would be content to manage ESX with ovftool or some homebrew solution w/ the soap api than I do who would be willing to use the linux kernel without the usual userspace tools.


  I know more people who would be content to manage ESX with ovftool or some homebrew solution w/ the soap api
I for one am not content relying on someone else to manage all of our management tools. On those grounds, I love hearing interesting news about OpenStack, CloudStack, Docker.io, or any project which releases open code purporting to help us all manage our many nodes, real and virtual.

  without usual userspace tools
As for the unusual "without usual userspace tools" criteria you added, I literally have no idea what you are talking about.

OpenStack is a bunch of code that runs in an OS, there's nothing nearly so magical or different about an OpenStack node as you imply. Do whatever you feel is best for your host nodes.

I install Debian packages on the host and manage it with many of the same tools used to manage its images. It looks very much like the usual "just a boring Linux node" that I log in to. http://wiki.debian.org/OpenStack


What do you mean by "linux kernel without the usual userspace tools"? Openstack runs on a complete system with all the tools you want to install, not some bare kernel machine.


"OpenStack is like you think about the linux kernel. Most people want a distro, not raw source to setup their environments"

He was comparing the relationship between a hypervisor and Openstack to that between the Linux kernel and the userspace tools, and I was just commenting on that comparison. That is, while I know people that use hypervisors without Openstack (or other virtualization management applications like Cloudstack or vCenter), I don't know anyone that uses the kernel on its own. Sorry, I thought that was clear but I guess it was not.



> And VMWare contributes the most interesting part of OpenStack, anyway, which is the virtual networking based on Nicira's OpenFlow.

Agree with you about the headline. However, VMWare is not contributing Nicira for free. There just happens to be a Quantum plugin for Nicira (as there is for pretty much everything else).


Rackspace Cloud Servers runs on OpenStack. It seems to work pretty well for them.


I would expect that it may be heavily custom/staffed-up-to-be-configured-for-their-infra

I.E. - it likely didn't just work pretty well for them out of the box.

There is a difference between using a base technology which requires you have FTEs on staff to maintain it and grabbing an off-the-shelf app to help run your systems.


They are one of the two core companies that founded OpenStack, based on what they needed to run internally. It was Rackspace and NASA.


Isn't that the idea of open source technologies? Customizing software for your particular use?


A few things going on here,

1. According to Business Insider (BI), PayPal is using Fuel from Mirantis to manage its OpenStack deployment.

2. BI also announced that Fuel was released on March 25 under the Apache 2.0 License. However, the Fuel website points to a form that appears to give free access to Fuel under a Creative Commons Attribution-NonCommercial-ShareAlike license, with additional licensing options available from Mirantis.

3. To top it all off, the CEO of Mirantis contacted BI and said the information about PayPal migrating over was incorrect and based on second-hand knowledge (as seen in the update at the bottom).

4. BI also spoke to PayPal, who stated they are diversifying their VM infrastructure to "enable choice and agility", not replacing VMware.

So, based on all this, it sounds like what really happened is PayPal decided to implement OpenStack to allow for a more diverse internal set of tools to develop infrastructure. A miscommunication occurred somewhere, which resulted in the BI and Forbes story.

http://www.businessinsider.com/a-dangerous-sign-for-vmware-p...

https://fuel.mirantis.com/


Slightly relevant:

If you're interested in discovering OpenStack, you should check out devstack[1]. It's a bash script that (git) pulls the different OpenStack projects onto a machine and installs them.

The script itself is very readable, thoroughly commented, on purpose. You're encouraged to read it while the whole thing installs (which can take several minutes).

Feel free to launch it inside of a VM (preferably running Ubuntu or Fedora), in order to avoid polluting your current system.

[1]: http://devstack.org/
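To give an idea of how little ceremony is involved, the whole flow is roughly this (a sketch; the repo location and localrc variable names are from memory and have changed over time):

  git clone git://github.com/openstack-dev/devstack.git
  cd devstack
  cat > localrc <<EOF
  ADMIN_PASSWORD=secret
  MYSQL_PASSWORD=secret
  RABBIT_PASSWORD=secret
  SERVICE_PASSWORD=secret
  SERVICE_TOKEN=token
  EOF
  ./stack.sh   # pulls the services from git, configures them, and leaves them running in a screen session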


> If you're interested in discovering OpenStack, you should check out devstack

You really shouldn't, unless you're interested in doing development work on the bleeding edge code in Git. Devstack is a development tool, not a deployment tool.

If you're interested in discovering OpenStack, you would be much better off installing released, tested packages provided by your Linux distribution. It's true that there is still some setup work required, but there are many, many people working on tools to help with this:

https://wiki.openstack.org/wiki/Get_OpenStack


You can also use Vagrant+Devstack[1] to get started even quicker.

https://github.com/bcwaldon/vagrant_devstack


So, what, now you can virtualize while you virtualize? I thought the point of OpenStack was to be a way to manage VMs; isn't that also much of what Vagrant does?

I'd tend to avoid mixing the streams & go for a single node management system rather than attempt two at once- were it me. But I also am not invested in Vagrant, which people are free to enjoy, so if they can happily run both VM managers at once, so be it. Cool, I guess.

I would like some clarity on what exactly this hybrid achieves though - what is its goal?


Testing new software in a VM is done primarily because (as stated above) you "avoid polluting your current system". When you are done or if anything goes wrong you can just trash the VM. This is standard practice in a variety of situations and has other benefits.

The only twist here is the software you are testing happens to be virtualization management software. Make sense?


One example of where this is very useful is for people not running Linux on their workstation. If I want to play with OpenStack or Docker or whatever, a VM is a nice playground for it, and Vagrant makes convenient VMs.


  >So, what, now you can virtualize while your virtualize?
Yo dawg... yes (in a way). You can have issues with the environment as is the case with most new things, but this way there's nothing you need to keep should you decide to start fresh or decide to do something differently. It simplifies things a whole lot.


I went with a somewhat more ready-made, native way of getting OpenStack- I installed my OS's packages of it. http://wiki.debian.org/OpenStack


It looks to me like both PayPal and HP Cloud are using http://saltstack.com/ to manage their OpenStack clusters.


Not in hpcloud. At least not the public cluster available via hpcloud.com


People can also check out http://trystack.org/ and SAIO (Swift all in One, a single node Swift cluster for development, testing, and learning) http://docs.openstack.org/developer/swift/development_saio.h...


I only have a few servers here and I would love to test open/dev stack here inside a VM, but the script keeps freezing :(

... I am not expecting to deploy or do much on virtual hardware, but, the servers are pretty beefy and I just want to muck around with the management and learn how it works so I can deploy it at a later date.


how does this compare to puppet?


devstack is a shell script.

It sets up openstack from source, tracking master.

It is what developers of openstack use to setup dev environments, run automated testing, ... There are also puppet and chef recipes that use packages (which is recommended for production deploys)


It's an alternative to puppet. Biggest difference is built-in remote execution.
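To make the remote execution point concrete, with Salt you target minions straight from the master; the 'compute*' glob here is just an example target:

  salt '*' test.ping                                      # is every managed node alive?
  salt 'compute*' cmd.run 'service nova-compute status'   # run an arbitrary command on matching nodes
  salt 'compute*' state.highstate                         # apply the configured states (the Puppet-like part)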


Not exactly, it doesn't have the same goal at all.

From the first line of the script: "stack.sh is an opinionated OpenStack developer installation". It doesn't provide nearly the flexibility of Puppet (or Chef).

Devstack does one thing, and one thing (almost) only: get a simple installation of OpenStack up and running as fast as possible (predominantly for developers to test their modifications against).

Puppet can handle much more complex scenarios, and is really meant for deploying to production.


Thanks. Will definitely check it out.


Can we at least link to the original article: http://www.businessinsider.com/a-dangerous-sign-for-vmware-p...

I'm stunned by the extent to which forbes.com rehashes of articles crest the front page compared to the original articles.


"UPDATE: We have heard from Adrian Ionel, the CEO of Renski's company Mirantis, who says that Renski is "exaggerating the use case" of OpenStack at PayPal and that Renski's "knowledge of the project is second hand and therefore limited." Renski's title at Mirantis is executive vice president."

It looks like they are adding OpenStack to the suite of tools they use at PayPal and not replacing anything (at least, not yet). Sounds like Renski is just a little over-excited when commenting about a platform he believes in.


Paypal runs on 80 THOUSAND servers? And it takes hours for me to even get a quarterly transaction log prepared for download?

That just boggles the mind....


Is that 80,000 physical or virtual servers though? That ratio could easily be 10:1 if it is the VM count which is more what I would be expecting.


The article says 10,000 servers, so maybe there's an 8:1 ratio


Unless I read it wrong the article says 10,000 servers going live this summer with the plan being to replace all 80,000 servers in total.

edit: just noticed they sneak "and eBay" into the 80,000 total server count. Makes a little more sense now :)


eBay doesn't have anything close to 80,000 servers.


I doubt they are optimizing for performance of reporting.


I think this only solves the cost problem. VMware is damn expensive for what it is.

I'm not really a fan of virtualization myself. We canned it at our organisation a couple of years ago. We had 30 ESX/vSphere managed hosts across 3 data centers which hosted app servers, web front end servers, virtual load balancers and some other minor infrastructure. The final straw was the upgrade costs. Not only that, we had problems with volume size limits on our SAN which would require more expense and hackery to work around. Also, we couldn't host our database servers on top of it due to performance problems, so we had to have dedicated machines there anyway.

The whole thing at the end of the day just added complexity, expense and didn't improve reliability, security or load distribution as our architecture was sound on that front already.

We've gone back to the original virtualization system: processes and sensible distribution of them across multiple machines. Performance, cost and sanity have improved.

I can understand that virtualization is useful when your resource requirement is less than one machine but above that, I doubt there are any real benefits. It's snake oil.


>It's snake oil.

As opposed to a miracle elixir that cures all ills?

It's a tool.

Like most tools, it's best used when you thoroughly understand the challenge and how/why to apply that tool to it.

The problems you've noted here seem more indicative of poor planning and understanding of the solution than any intrinsic deficiency of virtualization.


It's a shit tool that creates issues and costs fuckwads of cash. That is it.

We understand it. The numbers sold are not the numbers gained.

The entire infrastructure deployment was planned around virtualization, DC failover and resilience using VMware's guidelines and solutions (VMware were even paid to consult on this). It never delivered and doubled our administrative and licensing overhead.


VMWare sales, from what I have heard, has a history of over-selling, over-promising and ultimately getting people to spend a crap-ton of money on their products.

Honestly I have to say, it sounds a bit like you guys got taken for a ride. I'd suggest sitting back, relaxing and taking a look at what other VM options there are out there, some of which don't have license fees attached to them.

When I first deployed virtualization there were really only two 'enterprise' ready solutions out there which were essentially VMWare or XenServer. Knowing about VMWare and their 'history' I chose XenServer. I haven't regretted the choice but they wouldn't be a fit for everyone.

These days there are any number of solutions out there to choose from, of which OpenStack may or may not be the one for you... and that aren't going to whisper into your ear about how great their product is and how much money it will save you.


I see people being WAY-oversold on the SAN more than the software.


I echo this observation.

By now, I'd actually be more surprised not to find a VMAX or Shark in an "Enterprise" datacenter.


>It's a shit tool that creates issues and costs fuckwads of cash. That is it.

Calm down please.

>The entire infrastructure deployment was planned around virtualization, DC failover and resilience using VMware's guidelines and solutions (VMware were even paid to consult on this). It never delivered and doubled our administrative and licensing overhead.

That might say something about VMWare's specific solution and you being taken for a ride, but it's nothing intrinsic to virtualization as a tool.


Exactly. Virtualization/Cloud are only good for scaling down, not scaling up. The 20-30% hit to IO, additional latency of SAN vs local storage, management overhead, and added complexity to infrastructure more than negate any potential benefits of improved load distribution when you're dealing with applications that use anywhere near the capacity of physical servers.


20-30% hit in IO? It hasn't been at that level for a long time. With new KVM versions and good Intel processors there is a performance hit of as low as 5% these days. That's not to necessarily say your particular workload will see that, but 10% is generally an 'at-worst' level at this point.


It's definitely 20-30% for realistic workloads using VM-based tech on top, i.e. CLR/JVM or a database engine. This is on top of VMware. I can't speak for Xen.

The outcome is pretty grim.


Exactly. I often see claims of 5-10%, but I've yet to see any reliable set of benchmarks done with those results. Too often, people are using dd and testing throughput instead of actual IOPS. Even the benchmarks that show 20%+ tend to be skewed in favour of virtualization, as they tend to be run with a single VM instead of multiple VMs.

Even if there was 0% performance penalty from virtualization, you'd still see suboptimal allocation of hardware resources just from trying to take an abstracted view of the hardware. Different applications have different performance profiles. You either end up with overbuilt hardware to support the virtualization environment and the different performance profiles of the different applications, or with multiple VMs for the same application on the same hardware, which is totally unnecessary overhead. Virtualization is just not meant for large scale.
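For what it's worth, the throughput-vs-IOPS distinction is easy to demonstrate yourself; something along these lines, fio flags from memory:

  dd if=/dev/zero of=testfile bs=1M count=1024 oflag=direct        # sequential throughput, usually looks great
  fio --name=randread --filename=testfile --rw=randread --bs=4k \
      --direct=1 --ioengine=libaio --runtime=60 --numjobs=4 --group_reporting   # random IOPS

The first number is the one that ends up in slides; the second is closer to what a loaded database actually sees.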


Here [1] is a great paper about nested virtualization for KVM. This combines hardware capabilities with software tricks to allow running multiple levels of VMMs. It may not have intense IOPS testing but it's got a couple benchmarks that would be representative of real-world workloads. Keep in mind that this paper was published in 2010 and virtualization performance has been on a dramatic rise the last several years.

Jump to the results section. The more relevant bullets here are 'single guest' (either virtio or using direct mapping).

Highlights (or lowlights, depending on your perspective): kernbench - 9.5% overhead; SPECjbb - 7.6% overhead.

I don't agree with your point about suboptimal allocation of hardware resources. Virtualization does not require you to divide a machine in a different way than processes do (you could easily have one VM consume nearly all CPU cycles, one consuming nearly all I/O capacity, etc.). IMO, the key difference is that virtualization lets you more easily establish hard, enforceable limits and concrete policies around resource usage (not to mention the ability to account for usage across all kinds of different applications and users). And, it lets you do that for arbitrary applications on arbitrary operating systems. So users don't have to write to one particular framework/language/runtime/OS whatever. That's all pretty important for large scale.

[1] http://static.usenix.org/event/osdi10/tech/slides/ben-yehuda...
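To make the "hard, enforceable limits" bit concrete, with KVM/libvirt that's basically one command per knob; a sketch, where guest01 and the numbers are made up:

  virsh schedinfo guest01 --set cpu_shares=512   # relative CPU weight for this guest
  virsh memtune guest01 --hard-limit 4194304     # hard memory ceiling for the guest, in KiB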


What distinction does KVM or the kernel make for a single guest?

Is there a system that would allow for mapping part of an IO device (such as a block range or a LUN) to a guest when multiple guests are running with the same level of overhead?


I'm not sure I follow your question 100%, but I'm gonna take a stab...

The distinction being made here isn't for a single guest or multiple guests, it's for a single guest OS or nested guests (i.e. a VM running another VM). To expose the hardware virtualization extensions to the guest VMM, they must be emulated by the privileged domain (host). There are software tricks that allow this emulation to happen pretty efficiently (and map an arbitrary level of guests onto the single level provided by the actual hardware). It's not a common use-case, but for a few very specific things it's very useful.

There are a few different ways to map I/O devices directly into domains. Some definitely allow for part of an I/O device. For example, many new network devices support SR-IOV -- which effectively allows you to poke it and create new virtual devices (which may be constrained in some way) which can be mapped directly into guests.
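On reasonably new kernels the poking looks roughly like this (sysfs knob names from memory; eth0 stands in for an SR-IOV capable NIC):

  cat /sys/class/net/eth0/device/sriov_totalvfs    # how many virtual functions the card supports
  echo 4 > /sys/class/net/eth0/device/sriov_numvfs # carve out 4 VFs
  lspci | grep -i 'virtual function'               # the VFs show up as PCI devices you can hand to guests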


Ah, VMware is the problem, that explains it.

Parties that care muchly about fine performance margins apparently need to be using Xen or KVM or Illumos then.


Can't speak for the parent poster's company but the numbers don't match my experience with VMware many years back. It's possible they've had a sharp regression but we were maxing out gigabit ethernet and local RAID arrays in 2006.


Well, don't confuse I/O with throughput here. You can look at performance numbers for just about anything and tweak one direction or the other.

For instance it's easy to make a benchmark showing huge throughput to any given storage solution (and many NAS providers sell on this basis), but your I/O might be terrible because to get that throughput you're maxing the CPU (etc.). Likewise, you can change your benchmark and show high I/O, but the throughput is 'terrible'.


The parent very clearly specified I/O, which is what I was commenting on.


Virtualization which cares about iops needs SSD. Hard drives can't be sanely virtualized.


> The 20-30% hit to IO

If you're seeing that, something is configured wrong. VMWare was 95+% of native disk and gigabit ethernet 5 years ago.


Unless you are doing really heavy IO or CPU across multiple VMs on the same host which is likely if you have any load worth mentioning. There is a 20% to 30% difference if you run one process per VM or 8 processes on bare metal in an 8 core machine. We benchmarked it on known good configs optimised to bits.

Either the hypervisor scheduler is shit or the abstraction is costly. I reckon its down to the reduction in CPU cache availability and the IO mux.

This was HP kit end to end, VMware certified, debian stable on and off ESX 4.


Honest question here. First: What year exactly was this deployment put in place? At least for Intel their CPU performance for VM has increased SUBSTANTIALLY even in the last couple years. I remember taking some VM hosts from some of their first (or second? Can't remember for sure) gen processors to the 5500 series. It was like night and day. It is even more so for the e5 series.

Second: Why did you implement a large-scale VMWare install without either having a testing period before sinking contract dollars and license costs into it, or at least having contract terms to opt-out if their claims didn't match reality?


2008. TBH not sure what CPUs we had in there. Kit has been scrapped now. I wouldn't bother going through it again. We now have a standard half and full rack which we can purchase and deploy quickly for an installation so there are no plans to piss any more time and cost up the wall.

We did have a testing period which was unfortunately run by a Muppet who decided the loss was acceptable. Muppet now no longer works for organisation.


> Unless you are doing really heavy IO or CPU across multiple VMs on the same host which is likely if you have any load worth mentioning. There is a 20% to 30% difference if you run one process per VM or 8 processes on bare metal in an 8 core machine. We benchmarked it on known good configs optimised to bits.

> Either the hypervisor scheduler is shit or the abstraction is costly. I reckon its down to the reduction in CPU cache availability and the IO mux.

We were running heavy loads and there was nothing like a 20-30% hit. I'm not saying you didn't see one but this isn't magic or a black box: we had a few spots where we needed to tune (e.g. configuring our virtual networking to distribute traffic across all of the host NICs) but it performed very similarly to the bare metal benchmarks in equivalent configs.

What precisely was slower, anyway - network, local disk, SAN? Each of those have significant potential confounds for benchmarking.


> I can understand that virtualization is useful when your resource requirement is less than one machine but above that, I doubt there are any real benefits. It's snake oil.

Tell that to Netflix - or any of AWS' customers, really.


AWS is a prime example. EC2 is unreliable, doesn't perform well and is expensive (and fairly weather dependent by the looks of it too). If you want anything that can actually shift anything, dedicated servers are cheaper (but you might actually have to commit - oh dear, if your margin is that tight, you don't have a product worth it).

The only use case I can see for it really is rapid scaling but that seems only viable for content delivery as your data back end architecture is way harder to scale than click the magic cloud button. Back in the old days we used to do content via CDN (Akamai etc) which actually works out cheaper per GiB.

Then we approach things like S3 which is terribly unreliable (some of our clients use it for storage as it's cheaper than a SAN but they suffer for that). Drop outs, unreliable network requests, latency, rubbish throughput and buggy as fuck SDK.

Its like consuming value line products from a supermarket. Sure there's quantity but its lacking in quality.


If you set up your stack wrong then some of your points are valid. But many Amazon services offer you redundant infrastructure at a reasonable cost.

But the key advantage over a traditional hosted solution is that you can run reproducible test and dev stacks for 8/12 hours a day without having to pay for hardware that sits there doing nothing while you sleep. Sure, cloud computing has its faults, but the ability to run a HA test stack in 3/4 datacenters in minutes, and pay for just the time you are testing it, enables people to move forward and develop more and more impressive technology.

OTOH I'm not sure why you would run a cloud on your own hardware as you're already paying for the HW. I suppose it simplifies the management significantly for PayPal or they wouldn't be doing it.


Take a deep breath, relax, and realise that plenty of people have use cases that are different to yours.


If you have a way to build a disk image containing your application from scratch, virtualization buys you easily rebuilt application stacks and extremely clean systems. That is also a very good first step to gaining the ability to scale out and up (i.e. run your app in 20 places around the world).


Volume size limits on your SAN... Is 16TB too restrictive for your volume size limit?


2TiB per LUN on ESX until recently actually. We have a single filesystem that is 19TiB.


So... 19TiB and you're only using iSCSI instead of NFS and you're complaining about volume size restrictions?


We have to use iSCSI. Not all our kit talks NFS plus we use SAN block replication between DCs which really doesn't play nice with NFS (we did test it and discovered that not all NFS implementations are good).

This whole thing has to play nice with a windows DFS cluster as well.


You said you're using ESXi? The Windows virtual servers won't care whether they are sitting on a LUN or NFS. Why do you have to use iSCSI? Or SAN block replication for that matter? If you think you need to use iSCSI, why don't you use it for some of your storage and NFS for parts that require storage beyond 2TB?


What was your SAN?


Virtualization is a requirement for any kind of serious dynamic infrastructure.


Processes are a form of virtualisation are they not?

That's enough abstraction.


Originally, yes. Unfortunately Unix/Windows has accumulated a ton of global state (libfoo 3.7.1 not parallel installable with libfoo 3.6.4) and enterprise software is now so fragile that any alternative to VM sprawl is unthinkable.


We run our software (and other "enterprise" software) multiple instances per node and they are fully isolated. Most is in JVMs but we have C++ stuff with no problems.


Virtualization is a form of process so yes.


Except that Google, Facebook, etc. etc. successfully use baremetal deployments without a VM in sight...


So they're going to move to OpenStack Compute but... which hypervisor are they going to use? I guess it's not VMware (KVM? Xen?), but I still find the article a little bit confusing.

Besides, the quotes suck ("OpenStack is not really free"; well, if you don't know how things work you'll have to pay someone who does), so I'm not sure if I can trust that information.


"VMware’s RabbitMQ, an open source middleware-type application deployment project, ..."

Perhaps it's mean to poke fun at reporters, but this is pretty brazen.


Not quite sure what you're getting at here.


More than being wrong, the description is almost entirely without content. "Open source": true. "middleware-type": true, but doesn't mean much. "application deployment": not right, but also pretty vague itself.

The text straight from the results page tells an interested party that it's an "enterprise messaging system", which should be good enough for any blog on forbes.com. I really have no idea where they got the gobbledygook that they used.


Unless you're a developer, you'd probably think "enterprise messaging" has something to do with IM or email.

Something like "middleware that connects applications" would perhaps be clearer.


RabbitMQ is a message queue, Erlang based; not sure if it is owned by VMWare. I get the idea that either it is not owned by them, or the description is just plain wrong.

Oddly, quite a few build systems are now turning to MQs - beats make and Jenkins, it seems.


rabbitmq.com says "Copyright (C) 2013 VMware, Inc" and http://www.rabbitmq.com/mpl.html says "The Original Code is RabbitMQ. The Initial Developer of the Original Code is VMware, Ltd."

So yeah, it's owned by VMware.


> quite a few build systems are now turning to MQ's for build systems

References? I'm curious as to who's doing this and why. I'm trying to think of a reason and not coming up with much.


We have started using a MQ to send messages between different Jenkins instances. For example to have long running test executions on a dedicated jenkins-instance so the main build-jenkins can be restarted without affecting the tests.


Curious how you've set this up. What role does the MQ play? Does it ship results to the mother ship or a console? Or are you sending notices of changes to the test machine? etc. TIA


We have a custom plugin to send messages to the MQ. Other Jenkins instances running the same plugin will receive the messages and can act on them according to the configuration.

Currently we mostly use this to send internal release notifications around, e.g. "foobar-1.0.0 released, find it here <path to RPM>". Kind of neat with a n * 1000 developer organization spread over the entire world.

Currently the plugin is not open source but hopefully it will be later on.
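For anyone curious what the transport side looks like, here is a minimal sketch with pika (simplified; the broker host, exchange name and message format here are not what we actually ship):

  import json
  import pika

  # a fanout exchange so every subscribed Jenkins instance gets a copy of each notification
  conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit.example.com"))
  channel = conn.channel()
  channel.exchange_declare(exchange="release-notifications",
                           exchange_type="fanout")  # older pika versions spell this kwarg "type"

  msg = {"artifact": "foobar-1.0.0", "rpm": "http://repo.example.com/foobar-1.0.0.rpm"}
  channel.basic_publish(exchange="release-notifications",
                        routing_key="",
                        body=json.dumps(msg))
  conn.close()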


Very cool. Thanks for the explanation!


pybit, my organisation's very own rbit.

Think replacing Jenkins, not make.


Is anyone in the startup world using OpenStack? If so, how is it working for you?

I'm always skeptical of abstractions but it'd be great to hear about any real world experience.


Yes.

It runs.

It's a VM. I can add and delete an image. Remotely.

Its admin system is all REST(ish) and I did have a fully working auto-build system till they changed it and I have to rebuild (known API breaker, well telegraphed, unlikely to be more, I was just lazy)
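To give an idea of what "REST(ish)" means in practice, the Keystone v2 / Nova flow of that era boils down to roughly this (a sketch; URLs and credentials are placeholders):

  import json
  import requests

  KEYSTONE = "http://openstack.example.com:5000/v2.0"

  # authenticate and get back a token plus the service catalog
  auth = {"auth": {"passwordCredentials": {"username": "me", "password": "secret"},
                   "tenantName": "myproject"}}
  access = requests.post(KEYSTONE + "/tokens", data=json.dumps(auth),
                         headers={"Content-Type": "application/json"}).json()["access"]
  token = access["token"]["id"]

  # find the compute endpoint in the catalog and list servers
  compute = next(e for e in access["serviceCatalog"] if e["type"] == "compute")
  nova_url = compute["endpoints"][0]["publicURL"]
  servers = requests.get(nova_url + "/servers", headers={"X-Auth-Token": token}).json()
  print([s["name"] for s in servers["servers"]])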


Not in the startup world, but I've used it quite a bit.

I'm a big fan.

It's pretty trivial to get a basic system up and running and your basic functions for creating, controlling and destroying instances work flawlessly. In general, core services are quite reliable and reasonably documented.

I've managed to create some sticky problems fooling with Quantum [1] when it was still in incubation, but nothing a little digging couldn't resolve.

1: https://wiki.openstack.org/wiki/Quantum#What_is_Quantum.3F


I think VMWare is going to be in some trouble in the next few years. A lot of open source projects are picking up speed. One of my favorites especially is oVirt (http://www.ovirt.org). Red Hat mainly runs the project, and it is pretty much the open source version of RHEV.


I don't think they are in that much trouble. They are just making a boatload of money while they can get away with it. But they are working on multiple tiers: free, starter, and "OMG, Money"! That last one might not be around for much longer.

If you look at their other offerings some of it is reasonable (VMWare Fusion) and others are set at whatever the market will bear (VMWare Workstation, vsphere Enterprise).


Also the fact that VMware licenses cost about $2000 more for Australian users than US. Not sure about other countries.


Well, it makes sense... OpenStack is getting better and better with each release and whilst VMWare do add some cool new features every now and then, the price simply skyrockets and you don't really need the new features.

For the customers who do not need much more than basics (e.g. just virtualisation, storage server etc.), it is just so hard to justify the price now.

Just the other day I had a client who wanted basic fault tolerance and by changing versions, the quote goes from 6k to 47k! It is crazy pricing... :(


I can confidently say forbes.com is the worst website of all time


If I'm correct, Openstack is different from Openshift and Openshift actually runs on top of Openstack, right? It's more like an appserver that can run Ruby on Rails, Play framework, etc., right?

Because very often I tend to get confused between the two..because of the vague similarity in their names..


OpenShift (https://www.openshift.com/) is a PaaS, made by RedHat. OpenStack (http://www.openstack.org/) is an IaaS cloud platform, maintained as an open source offering by Rackspace, and originally created by a partnership between NASA and Rackspace.

OpenShift could theoretically run in an OpenStack cloud, as just about anything could, but the two products are in no way related to my knowledge.


Thank you!


Yes, they're different.

OpenShift Origin is a PaaS, as you describe. You generally run it on top of an IaaS such as OpenStack, although the hosted OpenShift actually runs on AWS (which is also an IaaS).


In other news, VMware has announced plans to accept only Square payments for all $4 billion of its revenue.


I'm very interested to see how this all plays out. Despite their customer service issues, PayPal has a huge ecosystem. I'm curious to see if this speeds up their infrastructure at all, considering that their API and website have been notoriously slow for years.


What's wrong with VMware? Why would someone not choose VMware over something else?


In my experience, VMWare is great and has a nice GUI, which led to early adoption. BUT: it has licensing fees for each copy. This can be a big bummer for companies like PayPal that have a fleet of servers - much easier to replace all the VMWare installs and get rid of licensing costs by switching to an open-source VM solution.

At the same time, it could be that OpenStack's API is superior to VMWare's. With a fleet of servers, you want to automate as much as possible - meaning that at the end of the day, API trumps GUI.
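For what it's worth, driving OpenStack from a script is about this much work; a sketch using the python-novaclient of the time (the import path moved around between releases, and the image/flavor names here are just examples):

  from novaclient.v1_1 import client

  nova = client.Client("myuser", "mypassword", "myproject",
                       "http://openstack.example.com:5000/v2.0/",
                       service_type="compute")

  # the same operations the dashboard exposes, but automatable
  for server in nova.servers.list():
      print("%s %s" % (server.name, server.status))

  image = nova.images.find(name="ubuntu-12.04")
  flavor = nova.flavors.find(name="m1.small")
  nova.servers.create("test-node", image, flavor)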


> At the same time, it could be that OpenStack's API is superior to VMWare's. With a fleet of servers, you want to automate as much as possible - meaning that at the end of the day, API trumps GUI.

It reflects the bizarre decision VMware made when they wanted to sell only to the top end of the market: the biggest customers are also the most likely to have the resources and incentive to build tools around that API. The small to mid scale shops depend on the GUI tools far more.


Switch to virtualbox ... Nice gui, better features, open source, no licencing per copy.


Not really the same thing.

they would be talking about vmware ESX\vsphere not workstation.


Their pricing has gone up while the rest of the industry is trending down. This wouldn't be so bad except that most of their tools are very enterprise-y – clunky APIs, odd gaps in tooling, etc. Basically the hypervisor is developed by the A-team but the management tools and even components like the HA manager were obviously second-tier – big, convoluted snarls of daemons for something which should have been clean, simple and trustworthy. (i.e. I spent a few weeks on the phone attempting to convince second or third-level support that the HA system should also pay attention to the heartbeat checks rather than ethernet link status when deciding whether to use a network connection. The techs I talked to were under the impression that this was how it worked until I proved otherwise…)

I stopped supporting it in 2008 but at the time it was already apparent that they were trying to milk large companies rather than stay competitive – and that was before the price hikes.


Well nothing specifically is 'wrong' with VMWare, except for their notoriously terrible pricing... as in you might want to do the math and just buy more physical hardware and not virtualize anything unless you can get 10:1 or higher instance scaling out of the VM servers.

At one point VMWare was actually pricing things by memory used, not processor core counts.


Price most often.

Licenses for the top-end ESXi run $1000 - $3500 list per socket [1]. If you have a large deployment you're going to be inclined to go for the top-end license in order to support distributed switching and granular control over storage/network traffic.


Licensing costs.



Yep, that was the original source, it didn't gain any traction or comments.

It's pretty interesting, I hadn't heard of OpenStack before, and the company I work for is migrating TO VmWare.


Internet to drop Paypal and replace with Bitcoin :D



