Before I would deploy this, I'd have to really understand what you're doing with your chef recipes -- I would be concerned that your use of chef might throw my system into an unknown state. Specifically, what are you doing in chef? Are you adding SSH keys in your recipes? (I'm not saying you are -- I'm just saying there's a severe lack of transparency as to what's going on behind the scenes, and that could put me at risk if I were managing that type of thing.)
I definitely understand that there's a level of trust needed before you allow anyone to get access to your system (or be its provider). And it should be that way.
While I could describe what's going on (and I will, right below), there's also the risk that what I'm saying doesn't match up with what's really happening. The best way to personally check is to set up a throwaway system and see if everything checks out. (You'll see that it does.)
So what is the process?
1. Install the OS distribution packages needed to build the dependencies for the rayburst client
2. Download and build Ruby from source
3. Install the rayburst client (which includes chef)
4. Run the install for your server configuration
5. Report back success/failure of the install
You're able to see the install script that's downloaded via the wget command, as well as the recipes that are cached locally on your system (check the install output for the locations).
And the only way to be sure in the future that it remains this way is to have a trusted outside entity perform regular audits. This project is at too early a stage for that, but it's one of the things I've thought about.
If your target audience for this is individuals who don't have the skills or time to really settle down and do the install by hand, do you honestly think they have the time (or skills) to set up a throwaway system and then audit the changes your log files report against what really happened on their system?
I'm not trying to bust your chops -- I think this is a great implementation geared for developers who might not have sysadmin experience. I'm just saying that there's a fair amount of risk in downloading and running remote bash scripts from anywhere on the internet; chef and other system configuration tools require you to implicitly trust your sources, because they can change at any time. What happens if your system gets broken into? What happens when the service no longer exists? As I've mentioned, this would be a great service for developers who don't want to mess with managing their own test and development environments, but I'd never push this to a production server.
There's a fair amount of risk in everything: Do you trust the admins of your hosting provider? Are you sure about the source code or binaries for all the pieces of your runtime stack? Did you audit every piece of your own source code for security issues?
But right now I'm not asking people to drop everything and use rayburst to set up their production servers no questions asked. Just to try it out and see if it does what they need.
If someone then decides to use it for production setup, we can talk about how to guarantee security and stability in the configuration for their setup. BTW this is why you enter into contracts, pay for services, etc.
There's elevated risk because the downloadable scripts use `sudo`. I think the concept is great, but asking users to grant `sudo` permissions to a foreign script sight unseen is naturally bound to raise suspicions. That said, I am thrilled that the site is protected by HTTPS, especially the install scripts themselves. You have obviously put a lot of thought into this product, and I see several ways to monetize it. The only problem for me is that I'm currently using a (cheap) webhost with restricted accounts where I cannot use `sudo`, but in the future I'll come back to see how your site performs.
"sudo" is unavoidable since the installs are designed to go into /usr/local. The other option would be to require you to run as root/privileged which leads to the same issue of running an unseen script and just adds an extra step to the process.
It's possible to design the process to allow installs into other locations, but that would require a fair bit of extra configuration (picking up the libraries, etc.) and testing, and you'd lose the service startups. It's something to think about if there's enough demand (and it's probably required for supporting Macs).
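None of this exists today, but as a rough sketch of what a configurable install location might look like as a Chef node attribute -- the attribute name `rayburst_prefix` is entirely made up for illustration:

```ruby
# Hypothetical sketch only -- 'rayburst_prefix' is not a real rayburst attribute.
# Default to /usr/local; an override could come from the node JSON passed to the run.
prefix = node['rayburst_prefix'] || '/usr/local'

directory "#{prefix}/rayburst" do
  owner     'root'
  group     'root'
  mode      '0755'
  recursive true
end

# Every downstream build would then need a matching --prefix and library search
# paths, which is where the extra configuration and testing effort comes in.
```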
When you run this on a system, like a throwaway VM or EC2 instance, it will download the cookbooks required for this run list, and you can review what they're going to do once Chef's synchronization step is complete. With rayburst, the cookbooks are synchronized into /var/cache/rayburst.
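To give a feel for what that review looks like, here's a minimal, made-up sketch of the kind of recipe you'd find in a synced cookbook -- the package names, URL, and resource names are illustrative, not rayburst's actual code:

```ruby
# Illustrative only: the sort of declarations you'd see when reading a recipe
# synced under /var/cache/rayburst, not actual rayburst recipe code.

# Build prerequisites installed as distribution packages
%w[build-essential libssl-dev zlib1g-dev].each do |pkg|
  package pkg
end

# Fetch a source tarball into Chef's file cache
remote_file "#{Chef::Config[:file_cache_path]}/example-1.0.tar.gz" do
  source 'https://example.org/example-1.0.tar.gz'
  mode   '0644'
end

# Unpack, build, and install; a real recipe would add a not_if/creates guard
# so it doesn't rebuild on every run
bash 'build_example' do
  cwd  Chef::Config[:file_cache_path]
  code <<-EOS
    tar xzf example-1.0.tar.gz
    cd example-1.0
    ./configure --prefix=/usr/local
    make && make install
  EOS
end
```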
I like the concept; I think simple server configuration is still an unsolved problem and it's good to see someone taking a stab at something new. I'll hopefully have a chance to try it out soon.
Quick note: set 'cursor: pointer;' in your CSS for the 3 main blocks. I didn't notice they were clickable until I started randomly clicking :)
Looks nice for testing something, but I see no information about support (especially notifications about security patches).
EDIT: looking at their scripts, it's using a client written in Ruby with its own Ruby installation. Things are installed in {/usr/local/,/var/*/}rayburst.
The client gets installed in /usr/local/rayburst and data for your install goes under /var/lib/rayburst. It's done so that it doesn't get in the way of any of your actual installs (had enough conflicts with system ruby when testing). And you can clear out those directories once everything is installed.
All in all the current design is to get things installed on a new server and then get out of your way.
I found no contact or company information on their website, except 2 email addresses. I can't verify whether this is just a clever way of mass deploying a stealth botnet or something.
It looks like it's a good UI on top of Opscode's Chef platform. You choose your Chef recipes and run a script that downloads the Chef client and starts installing recipes.
So, it doesn't seem to be too good to be true, at least to me. My guess is that this is an MVP for a product designed to put a non-awful UI on top of Chef (Opscode's SaaS product is, imho, borderline unusable).
That said, there is a high bar here for trust, no doubt about it.
I guess letting a third party install your server / compile your server from source requires a bit more than trust. IMHO this will be the key problem for professional/paying users. Those who would just use e.g. an AMI from a repository will certainly have no problem with it.
If you find a way to generate install scripts that can be inspected before they're run, or have the service provide the scripts to be run from a local Chef server, that might be an interesting approach.
FYI the recipes are all custom written. The community ones aren't very standardized and are usually only good for installing from distribution packages.
(And if you look through the install output, you'll also see where they get stored on your server if you want to review anything that's being done.)
"Long Nguyen has nearly 10 years experience as an enterprise build and software configuration management consultant and remembers having to install his first Linux server from floppy images that were downloaded using a 14.4K modem in his Harvard dorm.
Long has a PhD in Physics from The University of Chicago and got a crash course in the world of startups as part of the YC W08 session."
But isn't this kind of running Gentoo on top of Ubuntu?
Having little experience with big deployments: do people really prefer to build from source to have the latest and greatest? I noticed that I stopped wishing for that a long time ago, when managing (and building!) these packages just felt like too much of a burden compared to potentially slightly outdated binary packages.
It is a bit similar in concept to Gentoo but adds in a configuration management (CM) part. The current iteration doesn't take full advantage of the CM capabilities but we're working on it.
As far as building packages as compared to installing the distribution binaries... I've run into issues where the binary package wasn't compatible with libraries and apps that I had to install outside of apt-get or yum. For instance, setting up a reasonably modern Rails configuration with all the gems I needed on an Ubuntu 10.04 server.
I've also done builds and software configuration management for a living so the whole process is more like second nature at this point.
What you do want to avoid is a constant cycle of having to upgrade and rebuild existing, working environments just to have the latest versions. At that point you do need to step back and evaluate whether it's really necessary (security fix, etc.).
Gentoo = Compile a whole OS (hundreds of packages)
This = Compile a package (or 2 or 3) to get the latest
A lot of distros package an older version of the software this thing lets you deploy (node.js on Ubuntu, for example, is at 0.4.9 on 11.x), so I had to compile 0.6 manually (and yes, I know there are PPAs).
I wasn't being totally serious, of course. And I could nitpick that Gentoo has binary packages -- built from ebuilds (which I tried to compare to these chef recipes, with a grin).
I'd rather run tested software that someone else has compiled than build it myself just to get a slightly newer version. Older versions are generally safer to run as well, since by virtue of their age they've been tested more, with more bugs removed and security holes plugged.
Tried the site on Safari and Chrome (Snow Leopard) and it didn't really work, I could only select Rails, couldn't customise any options. Seems like a nice idea, and hopefully it's just some weird issue my end.
I've checked the site on my MBP, which is still running Snow Leopard, and couldn't replicate what you were describing. (Although Safari does seem a bit slower than Chrome...)
The centmin.sh script is designed specifically for that install: CentOS + Nginx, MySQL, PHP. If it works well for you and that's what you need, then definitely keep using it.
rayburst sets up a framework to allow for more customized installs. You can pick the various apps and services, and the modularity of the build scripts/recipes handles the install of that configuration. Part of what takes place is figuring out the build dependencies (e.g. Rails depends on Ruby), which would otherwise have to be hardcoded in a bash script.
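As a hedged illustration of how that kind of dependency can be expressed in Chef rather than hardcoded in a bash script -- the cookbook and recipe names here are hypothetical, not rayburst's:

```ruby
# Hypothetical cookbook/recipe names, just to show the pattern:
# the rails recipe declares what it needs and Chef pulls it in,
# instead of a bash script hardcoding the install order.
include_recipe 'ruby::source'   # make sure Ruby is built from source first

gem_package 'bundler'

gem_package 'rails' do
  version '3.1.3'
end
```

The cookbook's metadata.rb would carry a matching `depends 'ruby'` line so the synchronization step knows to pull that cookbook down as well.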
I use chef as my behind-the-scenes mechanism for managing the installs. The client is a Ruby application, so Ruby gets built as a runtime dependency.
I don't rely on the distribution Ruby because of possible conflicts with gems that would have to get installed. The goal is to be able to get everything installed and then get out of the way.
It's because I didn't want to mix in the Chef gems and dependencies with the system Ruby. I ran into some conflicts when I was initially building and testing so decided to eliminate the issue entirely.
With apt-get and yum you're installing pre-built binaries. This works well if that's all you need to do and are okay with the versions that are installed (typically not the latest unless it's a new distribution).
Once you start having to download and build source for apps/libraries that aren't pre-packaged, you can run into issues with version compatibility, where files get located, whether you need to download additional packages (*-dev), etc. Having a service that takes care of this for you solves a lot of headaches (part of why rayburst got built in the first place).
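Just the *-dev part, for example, differs by platform family, and a recipe can pick the right header packages automatically. A sketch -- the selection logic is illustrative, though the package names are the usual Debian/RHEL ones:

```ruby
# Sketch of handling the *-dev headache: header package names differ
# between platform families, so select them off of ohai's platform_family.
dev_packages =
  case node['platform_family']
  when 'debian' then %w[libxml2-dev libxslt1-dev]
  when 'rhel'   then %w[libxml2-devel libxslt-devel]
  else []
  end

dev_packages.each { |pkg| package pkg }
```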
Then it sounds like Gentoo's emerge for other distros. It's useful for setting up the base software stack, but I still have to log into the server to configure the software and deploy my code.
In my opinion, you either use PaaS or use chef/puppet, depending on your case.
The issue is that unless you have standards for things like directory locations and structure, users and permissions, there isn't a way to automate the process of code deployment, even with chef/puppet.
PaaS works because the provider has a standard and forces you to follow it. I don't mean that in any negative sense. I was in SCM for a number of years and know that kind of enforcement is necessary if you want to have any kind of repeatable, automatable and supportable deployments.
I spent a bit of time at the beginning of the project evaluating the two and ended up choosing chef because it was going to be easier to use this way (i.e. to build something on top of).
I will say that it was easier and far less confusing to get puppet up and running initially. There's definitely a bit that needs to be worked out in the chef documentation. The "Ruby"-ness of chef wasn't really a deciding factor as I can work with just about anything as long as it's reasonable.
As soon as I can get a VM loaded and run through the install catalog. I should be able to get that taken care of by the beginning of next week.
It's mostly to handle build dependencies that have to be installed as distribution packages, since package names change between releases. There were a handful of changes between 10.04 and 11.10.
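For what it's worth, that kind of rename is easy to absorb in a recipe with a small version check -- a sketch with placeholder package names ('libfoo5-dev'/'libfoo6-dev' are made up, not the actual packages that changed):

```ruby
# Placeholder package names -- illustrating the pattern, not the actual renames.
build_dep =
  case node['platform_version']
  when '10.04' then 'libfoo5-dev'   # older name on 10.04
  else              'libfoo6-dev'   # renamed in later releases
  end

package build_dep
```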
Good for "dev ops" people like me who would rather focus on the app than server configuration (but still need to do it).