Otto: Meet the Successor to Vagrant (ottoproject.io)
108 points by clessg on Nov 14, 2015 | hide | past | favorite | 52 comments



What do I do when one of Otto's magic commands breaks my website in production?

Will I ever have to give Otto my root password?

I am deeply afraid of this tool.

The marketing heavily pushes the angle of "Don't worry, it's magic! It knows everything! You won't have to break a sweat!" But statements like this, while appealing to newcomers, sicken the people who understand the stack well enough to also understand why this tool could help them.

If Otto wants to gain mindshare of the devops people who will be forced to support it, I strongly suggest they reconsider their branding.

I really, really hope I never have to use this tool.


> If Otto wants to gain mindshare of the devops people who will be forced to support it, I strongly suggest they reconsider their branding.

It sounds like Otto isn't aimed at devops people at all. It's aimed at the poor (in the material sense) developers who have to churn out 10 Wordpress sites a year to make a living wage and don't want to spend half their time figuring out how the permission system works.


That's... a great point.

As the saying goes, "If you don't like the advertisement, you're not in the target market."


On Cloud Foundry we've found that we get buy-in from both operators and developers.

Operators heave a sigh of relief: "Finally, I don't have to clean up after devs! They can't break the platform for everyone else! I don't get 3AM phonecalls to fix someone else's mess!"

Developers heave a sigh of relief: "Finally, I can just deploy without dealing with suspicious operators! If it breaks I don't get berated by an operator who got a 3AM phonecall!"

I imagine OpenShift and Heroku guys can tell the same stories.

I work for Pivotal Labs, so I get to see projects with PaaSes and projects without PaaSes in a variety of companies, on a variety of platforms, with a variety of goals. But one thing remains constant: PaaS-deployed projects are easier.

In a PaaS-deployment project, you set up your credentials, set up CI/CD, then you just get on with doing the thing that matters: building a product.

In non-PaaS projects, you spend eye-blistering amounts of time and money trying to just deploy the damn software. The whole feedback loop turns into a tangled pile of spaghetti behind somebody's bespoke deployment process. Backpressure causes enormous shear stresses throughout the project because development is iterating quickly but there's no way to, you know, see if the users like it or to know if it actually works in the real environment.

It always boggles my mind that folk are happier to squander amazing amounts of engineering time and salary dollars instead of just renting or installing a PaaS.


Good questions, and I'm happy to answer. I'm going to answer from a philosophical point of view. I'm not trying to dodge the question, but I'm hoping it gives you some idea of how we're approaching the problem. If not, just let me know and I can try to clarify.

The bigger philosophical idea is the centralization of knowledge. How does Otto know how to deploy a PHP application? Because people who can be considered experts in PHP encoded their knowledge of how to do it. You no longer have to be that expert. (Note that 0.1 deploys PHP in an awful way; 0.2 will do a lot better, thanks to the "experts" coming in.) Our goal is to centralize dev/deploy knowledge so that we can focus on higher-level problems and teach Otto how to do the details.

The next question is: how do I know it's safe? Under the covers, Otto uses a lot of production-proven tooling. On top of that, we have a really extensive acceptance test suite to verify behavior against real things prior to release. We're doing the best we can, but Otto will need to earn a lot of trust. We're going to build that trust over time. Note that you can always inspect the ".otto" directory after compiling your Appfile to see the configurations it generates.
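For context, the Appfile being compiled here is quite small. A rough sketch based on the Otto 0.1 getting-started docs (the names and the exact set of blocks shown are illustrative, not authoritative):

```hcl
application {
  name = "my-app"
  type = "ruby"
}

project {
  name           = "my-project"
  infrastructure = "my-infra"
}

infrastructure "my-infra" {
  type   = "aws"
  flavor = "simple"
}
```

Running `otto compile` against this is what produces the generated configurations under `.otto/`, which you can read before trusting any of the magic commands.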

You can choose to not use Otto's deployment features. That is completely fine. Our goal is that in less than a year, it does what you'd want it to do (help us by teaching it!), and you'll gain trust in it. We have extensive acceptance tests to verify certain behaviors. Of course, we can't cover every edge case, but we are going to do our best. This will be a trust-building process.

And I don't blame you for being afraid. It is a very new tool that is trying something that is threatening to a lot of people. No worries. Just wait a while; maybe it earns your trust, maybe it doesn't. If you want absolute control, take a look at our other tools; Otto might not be for you. :)

Finally, and this is not a negative thing: the HN crowd is generally not the crowd Otto is aimed at. HN folks are tinkerers; they want to know how things work. That is what makes HN great. They question things and want to know the full details. Otto scares people like that because it _removes_ power from you (much in the same way compilers removed power from assembly programmers). For these folks, I ask you to take a look at the tools that are under Otto: Consul, Nomad, Packer, Terraform, even Vagrant. They provide absolute control. But Otto is already seeing great adoption among the group of people who think "I don't care how, just deploy this application." Think folks that work with well-known technologies like the LAMP stack, Wordpress, Rails, etc. all day. For them, Otto is fairly revolutionary, because this is a tool that actually makes them more productive and simplifies things, vs. a lot of the other tools, even ones we've built, which appear to add complexity.

And for the folks that want simplicity, what Otto is trying to do is give them industry best practices in addition to simplicity. As an extreme example: we want `otto deploy` to be easier than FTP-ing PHP files to a server, because it isn't a best practice for many reasons. We want you to get an AWS infrastructure designed by experts deployed onto a server that is configured by experts with supporting tools for monitoring, logging, security, and more that only experts would really properly use and configure. These are Otto's aspirations. I believe we can get there, but Otto obviously is new so we haven't proven it yet. But I'm going to try.

I hope that clarifies things!


Sounds great. It's 80-20ing. The majority of sites will no doubt be simple cookie-cutters in terms of tech stacks (LAMP/Rails/Python, etc.). Would I expect to use it to build my custom N-tier architecture handling billions of requests per month? No. That still potentially makes it useful, though. I'm looking forward to trying it out next time I have to chuck together a WordPress site or something.


It's OK to use heuristics and "magic" in many cases - when it's acceptable to have a correctness rate lower than 100%. When an IDE generates some code, it's OK that it's correct in only 90% of cases, because I can review the code and correct it. But for a deployment tool even 99.99% correctness is not acceptable - it must be 100%. That's why I wouldn't use a deployment tool built on heuristics. You can't make it 100% correct even with a huge number of tests. (Unless the tool works only as a magic config generator whose output I can review.)


I'm not sure what real-world deployment tools you're talking about, but 99.99% would be pretty good. Actually, distributed deployment is "a hard problem".


For the same reason that "90%" okay IDE code is acceptable for most people, 90% correct deployment will also be okay for most.

Primarily because less experienced users could be doing a 40 - 60% optimal deployment today.


I've worked on the Cloud Foundry buildpacks team. What mitchellh describes for PHP (taking his example) is something that Cloud Foundry already does: take a codebase, apply heuristics to build and package it, then pass it off to a runtime. Heroku and OpenShift do it too.

The thing is that you're worried about snowflake deployment. You're still in the mindset that your app deserves special treatment; if it doesn't stage and launch using community-accepted best practices, then the platform is to blame.

I disagree. If your app needs zany configurations just to launch, I think that's a smell.

From time to time on buildpacks we would receive bug reports of the form "my app doesn't work". Usually, upon closer inspection, there was an easier fix: follow best practices.

The thing to remember about PaaSes like Cloud Foundry, Heroku, OpenShift and now Otto is that they are opinionated by design. The goal is not to provide an infinite supply of knobs and dials. We tried that with mod_perl, with PHP, with Java. It's a freaking nightmare made of sentient, glutinous mud that wants to eat your soul.

Our job as engineers is not to tinker. It's to create value for users and businesses. Clinging to the illusion of control is wasteful. When I fly I don't care about how the plane works; I just care that it's fast enough, safe enough and cheap enough.


I think you're overstating your case.

Have you ever seen a 100% correct deployment? Even if someone thinks they have, I suspect it would be revealed to have numerous issues.


These are all good points. Thanks for your thoughtful commentary in the face of my hasty negativity. I'll stand back and see how it evolves!


Those were my initial thoughts as well, but after reading mitchellh's and others' responses it made more sense. Developers are rightly wary and suspicious of bad abstractions. And what separates a good abstraction from a bad one is how fast you are going to trip over its scope of applicability and how surprising that will be. Good general-purpose abstractions for distributed services are not there yet, but I think this tool will be successful if it is a good enough abstraction for a large enough cohort of users. And those for whom it is not enough will be able to learn a thing or two by peeking under the hood.

But I agree with you wholeheartedly that reading through the marketing copy to calibrate what my expectations regarding this tool should be is very tiring. I really wish there were a little link titled "explanation for technical-minded people" leading to a clear description of what it really does behind the curtain without any bullshytt words like "boost to productivity", "scalable" or "best-in-class".


I don't know man, isn't that a little knee-jerk?

If Otto can break your website in production, then any deploy script could break your website in production. I mean yea, I have a script right now that can bring my company's website down. Since I happen to be in ancient python land, it's just `fab conf:web,prod stop`. Devops has always had this capability. :P

I think they're working on some security thing called Vault if you want to take care of password management. In the meantime, literally every other shop in SF is storing half their passwords in plaintext, in code (optimistically; things are probably worse). It's really hard to overstate how bad things are.

I'm not buying that these tools are going to make devops go away. I hope that these tools will mean we spend less time taking care of seriously dumb stuff in devops.


> If Otto can break your website in production, then any deploy script could break your website in production.

The concern isn't "could it break my website?" but "what do I do if it does?". The more black-box something is, the harder it is for me to quickly fix something / override whatever weird bug my deployment script has done.


I think this is the beginning of the end of DevOps for small organizations. Release and infrastructure engineers are expensive.


I wouldn't mind if my job went away. I don't see the point in forcing engineering on startups that shouldn't need it. I'd rather be messing around with my image processing code, not fscking with the damn deploy.


amen to that


Would you be equally afraid of Capistrano?


For those interested, The Changelog podcast recently had an interview with the creator of Otto, specifically discussing Otto and Vagrant. I thought it was a great listen. Basically, Otto is looking to do for deployment what Vagrant did for setting up dev environments.

https://changelog.com/180/


I absolutely adore HashiCorp's tools, but every time I read a sentence like this, I'm disappointed:

> We'll deploy the application to AWS for the getting started guide since it is popular and generally well understood, but Otto can deploy to many different infrastructure providers.

Okay, great! I'll just go look at the infra types list and...oh, it's only aws right now [0].

[0] - https://ottoproject.io/docs/infra/index.html

AWS isn't just easy to set up and use, it's also very expensive. On the free tier, things are wonderful and the world is fine, but if you've exhausted the free tier, you're now paying through the nose for little to no benefit. I pay Kimsufi less than $50/mo for more storage, unmetered bandwidth, and more CPU on dedicated hardware, at the cost of giving up the AWS API. This makes this tool, and many other HashiCorp tools, impossible for me to use, because I'm not using AWS infrastructure. I can't afford to shell out crazy high monthly bills for the overhead AWS adds only for the convenience of one tool.

This is disappointing, and quite frankly, a major turnoff to these sorts of tools.


They only just released version 0.1.


Then perhaps it's inappropriate to claim that "Otto can deploy to many different infrastructure providers" until "Otto can deploy to many different infrastructure providers".


Right or wrong, it's pretty standard practice to sell the vision of the product as opposed to the actuality of the product.

"Otto will some day hopefully be able to deploy to many different infrastructure providers" isn't a particularly attractive statement. You could try something like "Otto currently only supports deploying to AWS, but will eventually support many more", but while perhaps technically correct, it'll probably drive more people away than it gets involved. Those same people might just contribute additional deploy targets if only they got nudged enough to get involved in the first place.

You're not wrong, but getting people on board with the vision is much more important for a "hive mind" project such as Otto, than being technically correct as to the current state of things.


No, it's not standard practice. It's false advertising.


I'm not saying it isn't; I'm just saying that people do this all the time. Many times just to fake it till they make it, other times to be willfully deceptive. I doubt it's the latter in this case.


Good point. More like they aim to eventually deploy to many different infrastructure providers.


Otto and Vagrant-like technologies make me nervous, and here is why: I recently took a contract assignment in a dev shop that uses Vagrant. If everything had worked as advertised, I should have been able to just run vagrant up, and my JBoss app server etc. would have been automatically built and deployed for me so that I could focus on code. However, I had to spend a couple of days trying to fix Vagrant Salt issues that I had no clue about. I could have easily set up a JBoss Linux VM without breaking a sweat, but debugging Vagrant and Salt was like solving a mystery wrapped in an enigma. Now that I am up to speed with Vagrant and Salt, it does not seem all that bad, but I do feel as though it has been overhyped.

I shudder to think that dev managers are now adding "must have xx years of solid Vagrant experience" along with "must have 5 years of [some IDE]" to their skills requirements.


Vagrant just boots a VM and runs a provisioner on it (at a basic level). I'm guessing that even if you started a VM and installed Linux on it by hand in VirtualBox, the Salt config would've given you some trouble. A lot of configurations I see in the wild that are used with Vagrant and/or with cloud VMs are extremely brittle and have a lot of unwritten assumptions built in.
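To illustrate "boots a VM and runs a provisioner": a minimal Vagrantfile is only a few lines. A sketch (the box name is a placeholder; the Salt options shown are the common masterless setup):

```ruby
# Minimal Vagrantfile sketch: boot a box, then hand off to a provisioner.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # The provisioner (salt, ansible, shell, ...) is where most real-world
  # breakage lives, not in Vagrant itself.
  config.vm.provision "salt" do |salt|
    salt.masterless    = true
    salt.run_highstate = true
  end
end
```

Everything past the `config.vm.box` line is the configuration management tool's territory; when vagrant up fails mid-provision, that's usually the layer to debug.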


I've been using Vagrant lately to spin up clean VMs to run Ansible against, and that's definitely something that you have to be very very careful about. I'm currently looking at setting up Jenkins to frequently re-provision from a clean VM, just to keep an eye on me to make sure I don't accidentally introduce a change that breaks things.

So far, I think the only real assumption my users have to deal with is that you need a full source tree checkout, not just the smaller part with the Ansible playbooks. The project uses Subversion, so it's quite easy to check out just a subtree of the project instead of the whole thing. Which users did, and they immediately discovered how broken things are in that situation :)



I'm still trying to understand how this fits in with Docker/Rocket, and creating your own Dockerfiles (versus using Appfiles here), and using Swarm to link containers etc.

Does it essentially supplant Docker/Dockerfiles, for all intents and purposes?


It's a PaaS, in the vein of Cloud Foundry, OpenShift and Heroku.

Edit: I'm honestly surprised by the downvote. If I'm wrong, please correct me.


I am always amazed by the HashiCorp tools. What I really like is that they solve problems I did not think about (yet). What I think is their biggest advantage is their ability to integrate into a heterogeneous infrastructure: you do not need to throw away your current infrastructure setup. You can gradually integrate tools like Consul/Consul-template. You do not have to install Docker, for example.

And I guess Otto is a nice tool which solves the problem of deploying AND developing a microservice architecture. So maybe you already got that part right, where developers push a new microservice to production and the production setup runs fine. There might still be the problem of how a developer can create a local development environment with multiple microservice dependencies, which you might want to have locally. The best solution in my opinion is fig.sh (docker-compose) right now. But docker-compose does not help you with deployment (and you have to depend on Docker).


Really? I feel like half of my time working with HashiCorp tools is spent cursing. Vagrant takes more time to parse its Ruby code than it takes to boot a VM and its guest. It breaks my coworkers' routing on a regular basis. Terraform breaks for almost everything I've tried it with - no, deleting my infrastructure and rebuilding it from scratch is not acceptable in production. Consul is a distributed downtime protocol, using a modern, peer-reviewed consensus algorithm and a gossip protocol for auto-partitioning of your network.


I just use consul/consul-template and Vagrant (+ Puppet) right now. Vagrant works just fine, I do not experience routing problems. But I will remember your advice/experience for our future plans.


I do not quite grasp how I would hook up an Nginx/Haproxy in front of my application. Would this be another application to deploy? Where does the nginx config live? Do I use consul template to update it depending on my other deployed applications?

I get how you would combine a database and a web application, e.g. Rails. But what about extra routes/legacy redirects?
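For what it's worth, the consul-template route asked about above looks roughly like this: a template file that rewrites an nginx upstream block from Consul's service catalog (the service name "app" and filenames are assumptions):

```
# nginx-upstream.ctmpl -- consul-template renders this whenever instances
# of the "app" service register or deregister in Consul.
upstream app {
{{ range service "app" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}
```

consul-template can then be pointed at the template with a reload command, along the lines of `consul-template -template "nginx-upstream.ctmpl:/etc/nginx/conf.d/upstream.conf:nginx -s reload"`, so nginx picks up changes as applications come and go. Static routes and legacy redirects would still live in your hand-written nginx config alongside the generated file.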


An Ansible alternative written in Go would be perfect.

I like Docker. Though some initial automation to configure a Linux distro with a Go-based tool would be awesome. (No Python/Ruby stack.)


"An Ansible alternative written in Go would be perfect."

Why?

I understand the allure of Go - I was an embedded systems engineer working predominantly in assembly language and C for many years. Memory management was a pain and concurrency was practically non-existent on many uCs.

But Ansible is pretty mature at this point, with a vibrant community, lots of internal and third-party modules and it's based on a ubiquitous scripting language (Python).

Please help me understand the rationale of a rewrite - what are the goals, what deficit would it fix, and would the time required be justified versus enhancing Ansible as it is? At best I'm a Go newb, but I'm pretty proficient with Python - if it's just that Ansible's "not in your language", I'd argue a rewrite represents a false economy.


Why not?

> ubiquitous scripting language

Third-party modules don't cover everything, and then you have to write Python. Plus Docker's config files are better. So I would welcome a modern Ansible alternative that requires just one executable instead of a full Python stack.


I love how Vagrant enables testing software on lots of different distributions and OSes. The large box catalog sure beats going through RandomLinux's installer. Unfortunately the successor to Vagrant doesn't seem to support this use case.


"Otto detects your application type and builds a development environment tailored specifically for that application, with zero or minimal configuration." - that alone tells me to avoid this like the plague. Too much "magic".


Abstraction != Automation. What does this have as a benefit over something like Heroku?


The ability to install and inspect it yourself. I work for the company that donates the majority of engineering effort to Cloud Foundry. For a lot of customers, that matters a lot.


This is great but can we have it without Vagrant?

We already have fully capable systems to simulate a potential production stack with Docker; no need for another layer of virtualization.

OTOH I really love that there's no need to dig into .otto directory.


Vagrant itself isn't virtualization; it's just a wrapper around VirtualBox, VMware Fusion, etc. It has a Docker provider.


You are asking for trouble using a tool like this if you don't understand every little thing that it is doing. Managing hundreds of instances on AWS is not easy even using Amazon's own tooling.


On the other hand, a Linux system involves hundreds of millions of lines of code, which you couldn't fully grok in your lifetime even if you wanted to. I would argue that nobody truly understands what their application is doing. We're only discussing the degree of magic, not the presence of it. And yes, when things break you have to go peeling back the layers of magic. This just adds one more layer of magic to peel back.


How does Otto address app updates with new and stale assets? Would there be pre-deploy steps allowing you to sync assets to S3 or shared storage?


Also an entertaining children's series starring a funny little robot, which my 3-year-old is really enjoying these days. You get to make up most of the story yourself as you go, which is definitely a feature.

[1] - http://www.amazon.com/s/field-keywords=see+otto


Sounds like Rails for Devops. Works great until it doesn't...


In my experience developing products and services across many stacks, in many fields, and in organizations ranging from the small (5-20 engineers) to the large (1000+ engineers), the same pattern always emerges:

1. Setting up your development environment typically amounts to following some perpetually out-of-date checklist of things you have to do

2. Setting up the development environment specific to some project typically amounts to following some perpetually out-of-date checklist of things you have to do

3. Painfully figuring out why 1) and 2) aren't really working out, and what's missing from those checklists

4. Not updating the checklists through all that ad-hoc stumbling, mostly because your imposter syndrome makes you think the problem is you and not the lists

5. Finally getting to work on things, making small incremental changes to your environment as you go

6. Realizing that the small incremental changes you make actually break things when you try to integrate or deploy your work, because the parity between local dev and CI and live environments just isn't there

7. Having some coffee and lamenting the state of affairs with your co-workers, yet not really doing anything to change things; either because you can't really get anywhere in due time (the 1000+ engineers kind of organization) or because the law of the jungle dictates you have to keep shipping(tm) (the 5-20 engineers kind of organization), so there's never any time to deal with the debt that just, keeps, building

8. Cry yourself to sleep

There are minute differences, but this is the general pattern I've seen through my decade-long experience as a software engineer.

An important aspect of why this is – I think – stems from thinking your project or setup is a unique snowflake. It's really not. What you're doing is probably not earth-shatteringly new and exciting, and even if it is, most of the things you do to get there aren't. We are all standing on the shoulders of giants, but instead of realizing that and codifying all that knowledge into tooling we can just rely on, we seem to keep thinking that if only we have full control over things we'll be fine.

Then there's the other side of that coin, which is to say you don't ever want to have control (for whatever reason), and so you hand everything over to services that'll build and deploy things magically for you, so long as you have the proper configurations. The promise of those offerings is alluring, and when they work it's great, but inevitably you end up with configuration that is no longer best practice, or formats change, or the service pivots; so promises are broken.

My feeling is that the answer as usual probably lies somewhere in the middle of having full control, and relinquishing most or all of it.

If I understand things correctly, straddling this divide is what Otto wants to do: Just Work(tm) for the 80% (perhaps more like 99.99%?) of projects that aren't unique snowflakes. The others aren't the target market, and for them the tooling already exists in various forms anyway.

It's an ambitious goal; sure to be fraught with edge cases and rabbit holes.

I hope it works out.



