Show HN: Hosted Docker-as-a-Service on SSDs for $5 (copper.io)
387 points by edbyrne on Nov 7, 2013 | 132 comments



Hi everyone, Docker maintainer here. Here's my list of docker hosting services. Please correct me if I forgot one! I expect this list to get much, much longer in the next couple months.

* http://baremetal.io

* http://digitalocean.com (not docker-specific but they have a great docker image)

* http://orchardup.com

* http://rackspace.com (not docker-specific but they have a great docker image)

* http://stackdock.com

EDIT: sorted alphabetically to keep everyone happy :)


So I just checked the docker website (http://www.docker.io/learn_more/) and there is still a flag up stating it's not yet ready for production use. Does this mean you guys/gals are gaining enough confidence in it now?


The rise of Docker is fascinating. How did you get people to care about it initially? Did everyone immediately see it as a good idea? Congrats.


We spent 3 months going door to door, giving demos to people I knew were working on similar projects or looking for one. We had a good reputation in the ops and systems engineering community because of our work on dotCloud over the last 6 years. Then we bootstrapped the open-source community with that initial group of 30-50 people willing to federate efforts. By the time the project leaked on Hacker News, the github repo and mailing list were already very active.

Early members of that "seed" community included engineers from Twilio, Heroku, Soundcloud, Koding, Google, Meteor, RethinkDB, Mailgun, as well as the current members of the Flynn project.


So, hi there. I'd like to invest in you guys, where do I send the cheque? :)


No response yet, hmm. Seriously, guys, I have a $100 bill here with your name on it.


I am flattered, but we are already well funded and are not currently looking for new investors. However, if you're feeling generous I can point you to a few people who have been making awesome contributions to Docker in their free time. I'm sure they would appreciate donations, or perhaps contract work :)

There are also several startups currently raising money for a business based on docker. This is bigger than any one company!


I'm sure you can find some of the developers here: http://www.gittip.com/.


When you're in ops and spend time looking for the perfect way to make "redeploying easier than fixing", Docker becomes the answer.

I got to meet the Docker team (a lot of French dudes on the team!). Very passionate, technically super sharp, and really fun! They were interested in my point of view and opinion. Plus, their lead dev knows how to party, from what I saw at a meetup!


The real question is why Sun didn't succeed in leveraging this technology with their implementation of zones.


As a former Sun guy, I can say it's because extracting value wasn't something we were very good at or really gave much weight to. From Grid to Java to Solaris 10 Zones and ZFS, Jini, RFID, we mostly just made cool stuff and then... went and made other cool stuff.

We had very little adult supervision.


To be honest - I think it's a timing thing - virtualization wasn't popular initially, but VMware did a great marketing job. Then any hypervisor became acceptable. Now VPS-style containers are becoming acceptable, i.e. Docker.


Timing is definitely part of it.

Being too early can kill you. If you think your idea is awesome but too early, my advice is to keep trying for as long as it takes. Docker was not my first attempt at solving this particular problem :) [1] [2] [3]

[1] https://bitbucket.org/Foi3GraS/dotcloud-fork/commits/1

[2] https://github.com/dotcloud/cloudlets/commit/0af885a5266fba7...

[3] https://bitbucket.org/dotcloud/vm2vm/commits/2a34438989fbff0...


Because Solaris isn't Linux and nobody got fired for using Linux. Even Ian Murdock couldn't make Solaris into Linux.


I learned about it from creack (lead contributor according to github) @ a startup meetup in SF.

Things that impressed me:

1. super passionate

2. he was very receptive and quick at squashing bugs I reported (real or not)

3. docker was super portable (the same across all linux distros)

4. they (the docker team) had real solutions for the long application deployment times that were plaguing me

Everyone seemed to know it would succeed, which is rare around here.



AppFog's CTL-C: http://ctl-c.io/


I found this one a couple weeks ago! https://stackmachine.com/


This is a different kind of service; they don't do hosting. They sell an alternative to the Docker index: https://index.docker.io


Sounds like you took the best parts of digital ocean and are trying to push it as a platform with docker baked in. I like. It seems like you're also trying to simplify using docker. I like even more.


Hey - thanks a million - that's the plan!


Speaking of digitalocean, are you guys affiliated at all? Because I get a digitalocean vibe from your pricing/terminology/etc. for some reason.


are they?


I love the fact that you keep trying to define your own vocabulary ('Deck', etc.) but always have to explain it. Best to stick with the more easily understood term rather than invent your own, I think.

Unless you're going to try and trademark them all.


Yeah, this is my only problem with the service; I didn't get the metaphors at all, and their descriptions only added to the confusion.

The simplicity of the service is an opportunity to attract people without a lot of webdev chops, so why not make it super simple?


I agree with this. I recently wrote a bunch of new Dockerfiles. The Dockerfile name is sort of meh, but "Deck" really isn't telling me what it's for.


Heroku created almost as many new terms as Tolkien, and they seem to have made out just fine.


Looked at Heroku once, but all the custom lingo really put me off it. But, of course, your point about them being successful stands.


It's worth a shot if they fit your use case. Once you get past the ninja-speak of choosing size and database, it's really simple.

For rake/rails apps at least, you just run `heroku run 'command'` and you're done.


While I agree with your comment, I hope it isn't used as a measure or justification for doing so. I've had the same cognitive problem with Heroku as the parent describes.


God, this drives me absolutely insane. Elvish marketing speak is such a stupid waste of time. Why can't we stick to commonly accepted terms instead of trying to bake up new "cloud"-esque replacement terms?


Thanks - we discussed that a lot - we were trying to make a simple 3-step process. If we get a lot of feedback that it's confusing, we'll ditch it.


I think that even if [deck drop instance] is clearer than [dockerfile image container], it would be better to use [dockerfile image container]: it's the standard set forth by Docker, and sticking to the standard makes interoperating easier for everyone.


I agree with this, though I'm biased because I personally find [dockerfile image container] clearer than [deck drop instance]. Explicit > flashy.


I think you need to define your own language if it's better. 'Deck' is clearly better than 'Dockerfile'.

Feel free to innovate - you're a startup, and it's what we love about you.


Wouldn't "Container" fit the analogy best?


I'd have to agree that the standard Docker terminology would be much preferred. Your business covers what is a pretty cutting edge, advanced concept right now. Your customers are likely to be at least somewhat understanding of the standard terminology. Your custom terminology tripped me up as well, despite having a reasonable grasp of the higher level Docker terminology.

Other than that, this looks great! I'm excited for you guys.


I'm in a similar problem space to you. After a year of defining my own 'simpler' terminology, I decided to abandon it in favour of being consistent with the more popular, albeit more complex, terminology.

I hope that saves you some time.


I like the idea. Really cool. I've been researching Docker a lot lately, and did most of my recent development on CoreOS. I do have a question that wasn't immediately obvious: Docker maintains that one should make a container out of every application, so that instead of having to install apache + mysql + php in one Ubuntu environment, you'd create three docker containers (apache, mysql, memcache), run them together, define the share settings, etc.

Now here's my question: it seems as if on Stackdock, every container would be a separate (at least) $5 instance? So if I want to run apache + mysql + memcached, would I need to cram them all into one docker container in order to have them on one machine? Or is it possible to use a $5 Stackdock instance and run multiple containers on it, like on CoreOS?

Thanks!


There is a new feature of Docker called Links which allows you to organize your stack in multiple containers and "link" them together so they can discover and connect to each other.

There's a great explanation here: http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...
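
For anyone who hasn't tried links yet, a rough sketch of the 0.6.5-era syntax (image and container names are just placeholders):

  # start a named container for the database
  $ docker run -d -name db yourname/postgres
  # start the app container and link it to "db" under the alias "db"
  $ docker run -d -link db:db yourname/webapp

The linked container gets environment variables describing the database container's address and ports, so the app can discover it without hard-coding an IP.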


I tried to deploy a Django application with Docker a few weeks ago (using a single image with supervisord), only to discover that, during "docker build", I needed the database already running (so Django could create its database), which was pretty much impossible using a single Docker image and a Dockerfile.

With the new Links functionality, this is much easier, but are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)? I want to be able to do "docker build ." and have my application up and running when it finishes.


> are you planning to ever have the ability to use a single Dockerfile to deploy an application which may contain multiple images (with links between them)?

Yes, definitely :)


It is common to include dependencies like MySQL and Apache in the container of your application. Usually people use supervisord with a configuration file to start all the different daemons needed.
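
A minimal sketch of that setup, with supervisord's config baked into the image (paths, program names and foreground flags are illustrative and vary by distro, not a recommendation):

  # /etc/supervisor/conf.d/app.conf
  [supervisord]
  nodaemon=true

  [program:mysqld]
  command=/usr/bin/mysqld_safe

  [program:apache2]
  command=/usr/sbin/apachectl -D FOREGROUND

The Dockerfile's CMD then just starts supervisord, which becomes the container's single foreground process and supervises the rest.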


It may be common outside of the Docker world, but it's definitely an anti-pattern within the community.


"Docker-as-a-Service", simple, easy-to-understand pricing. Love it.

This is my favourite Docker offer so far. I've been looking for something to replace dotCloud's deprecated sandbox tier for just playing around, and it looks like this fits the bill.


This is truly awesome, nice work!

I configured and launched a machine with redis and node in less than 5 minutes. Very cool.

How will you isolate instances from each other? My instance appears to have 24 GB of RAM and 12 cores, and it looks like I can use all of it in my instance.


Docker uses LXC, which supports memory and CPU limits.


You can give a container CPU weight shares and a memory limit. File storage limits are due in Docker 0.7; for now you can ulimit them.
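
If I remember the flags right, that looks something like this (values arbitrary; older versions may want the memory limit in plain bytes):

  # 512 CPU shares (relative weight) and a 512 MB memory cap
  $ docker run -c 512 -m 512m yourname/webapp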


I don't see node as an option. What am I missing?


One thing that confuses me with Docker is how you configure your containers to communicate with each other.

So say I have a fancy Django image, and a fancy Postgres image.

How do I then have the Django one learn the Postgres one's IP, auth against it (somehow), and then communicate separately?

Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained), and how does that even work with a DaaS like this? I'm pretty confused. Is there an idiomatic way in which to do this?

Do service registration/discovery things for Docker already exist?


> One thing that confuses me with Docker is how you configure your containers to communicate with each other.

Docker now supports linking containers together:

http://blog.docker.io/2013/10/docker-0-6-5-links-container-n...

> Also, the recommended advice for "production" is to mount host directories for the PostgreSQL data directory. Doesn't this rather defeat the point of a container (in that it's self contained)

The recommended advice for production is to create a persistent volume with 'docker run -v', and to re-use volumes across containers with 'docker run -volumes-from'.

Mounting directories from the host is supported, but it is a workaround for people who already have production data outside of docker and want to use it as-is. It is not recommended if you can avoid it.

Either way, you're right, it is an exception to the self-contained property of containers. But it is limited to certain directories, and docker guarantees that outside of those directories the changes are isolated. This is similar to the "deny by default" pattern in security. It's more reliable to maintain a whitelist than a blacklist.
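
In practice that pattern looks roughly like this (image and container names are placeholders):

  # create a container with a persistent volume for the postgres data directory
  $ docker run -d -v /var/lib/postgresql -name pgdata yourname/postgres
  # a replacement container can re-use the same volume later
  $ docker run -d -volumes-from pgdata yourname/postgres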


We're doing a similar thing called Orchard:

https://orchardup.com

We give you a standard Docker instance in the cloud - all the tools work exactly the same as they do locally. You can even instantly open a remote bash shell, like the now-famous Docker demo!


The big point of Docker for me is that I can build the container on my machine, run automated tests on it, play with it and then ship it to the production machines when I'm confident that it is working.

If you build the container on a service like this, testing it is hard or in some cases even impossible. For example, acceptance tests with Selenium.

Gemfile.lock and similar version-binding tools help, but prebuilt containers bring deployment stability to a whole new level, and that is why I'm excited about Docker and containers in general.

Do they support prebuilt containers?


"You can create a Docker file with some easy steps we’ve created, or you can upload your own Docker file and create an instance from that."

Sounds like a yes.


Well, no. A Dockerfile is the build instructions, not the build artifact.


You can commit at any point and ship that via a private registry.
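
Roughly like this (container ID, registry host and image name are all hypothetical):

  # snapshot the tested container as an image named for your private registry
  $ docker commit 9f2b1acd registry.example.com:5000/myapp
  # push it to the private registry, then pull it on the production hosts
  $ docker push registry.example.com:5000/myapp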


What would be even better is to decouple the idea of a drop from the containers running on it. What I like about container approaches is having "machines" I can run them on. So let's say I make a "www" drop, or several. I should then be able to fire up my containers onto particular types of drops and have them started on those without having to think about the specifics. The benefit of this is that I only care about my container running and having some basic resource requirements, not so much the specific machine instance it is running on. I could even co-mingle different containers on types of "machines". Also, separating out disk resources from CPU and RAM would be good. Maybe you do this already but it wasn't clear to me.


> We’re using dedicated because running virtual containers on virtual instances seems nuts to us.

but a traceroute points to AWS…



building a virtualization infrastructure on top of another, black box virtualization infrastructure…

What could possibly go wrong?


Hey, it worked for Heroku.


It kind of works for Heroku. Every few months I see the Hacker News post "Why/how we moved away from Heroku."


Those are still virtual, just no one else is on your box.


A traceroute of their blog, or a server you just spun up? They don't necessarily have to eat their own dogfood.


If you offer infrastructure services and don't eat your own dogfood you can't be serious.

If you offer infrastructure services and don't tell people where and how you provide it, you can't be serious either.


But if you host your site on your own infrastructure and it goes down, you can't post status updates to tell people what's going on / when you will be back online. It's quite reasonable not to host your own homepage or your mechanism for updating customers, IMO.


I disagree. Your website should run on your own infrastructure, and a separate status page, under a different (sub)domain, should be operated from another AS (autonomous system), e.g. statuspage.io or whatever you like/prefer.


Blog and lots of Copper tools hosted on AWS. Stackdock on dedicated (not AWS or other IaaS) hardware.


Great initiative! One thing to be aware of is that Docker is using LXC for containers, and LXC relies on kernel isolation and cgroup limits. The concern is about vulnerabilities.

It is comforting that Heroku is also using LXC for dynos. It would be interesting to know how many in-house adjustments to the kernel and LXC have been made to ensure hardening.


I work at ActiveState on Stackato, which is a private Platform as a service. Similar to Heroku, only for private hosting (e.g. you host it on your own hardware or hypervisor). We use Docker as of our v3 beta release today (http://beta.stackato.com/). Our use of docker in 3.0+ means that we bring their tuned security along with us (they integrate with apparmor really well, in fact they require it to start up a container). Here's a really good overview of LXC (and docker) security in general: http://blog.docker.io/2013/08/containers-docker-how-secure-a...


Just curious, how are people building Docker images these days? Doesn't it only run on 64-bit Linux? I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.

Are people running Linux VMs on their Macs to build containers?

I like the idea of this service. But both the client side and the server side have to be easy. Unless I'm missing something it seems like they made the server side really easy, but the client side is still annoying.


Yes. The emerging best practice seems to be to use Vagrant to create a great development environment, then use docker containers inside that for isolation. The two work together quite well. There's a comment from the Vagrant creator here about that: https://news.ycombinator.com/item?id=6291549

In short, yes, just run a VM.


So I already use Linux almost exclusively for development, and VMs are not in my workflow at all. It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.


> It seems bizarre to build a VM to build a container! Like too many levels of yak shaving.

Perhaps, but you just said:

> I have a 32 bit Linux desktop and a Mac and haven't gotten around to installing Docker. At work I have a 64 bit Linux desktop and it seemed to be extremely picky about the kernel version so I gave up.

Precisely. Hence VMs, because Vagrant makes it trivial to spin up an instance configured however you like.

You're basically saying "I have a problem installing Docker, but I don't need a VM because I don't have any problems a VM would solve", but this is nonsense because this is the precise problem development VMs are meant to solve.


I can see where you're coming from, but my issue is that Docker itself is LESS portable than the applications it's containerizing! It's creating the very problem it's trying to solve. The task I care about isn't to build and deploy a Docker container. It's to build and deploy my app.

I have a beef with build/deploy systems that have bootstrapping problems. For example I'm hearing from people using Chef that they have to freeze the version of Chef, its dependencies, and the Ruby interpreter (or maybe it was Puppet, I don't use either). To me that is just crazy. My code isn't that picky about the versions it needs, and to introduce a deployment tool like that makes things less stable, not more.

Take Python, for example -- in my experience it's almost entirely portable between Linux and Mac. And I imagine the same goes for node.js, Ruby, PHP, etc. Almost all the C libraries you need are portable too. So in my ideal world you would only use a VM when you actually need it for the OS/CPU architecture. I suspect for a lot of people that would be 50-90% of the time without a VM, depending on how you like to develop.

I'm working on a chroot-based build system, which in theory will work on Mac and Linux (but not Windows). It does need to solve a versioning problem, because stuff isn't as portable between Python 2.6 and Python 2.7 on the same OS as it is between Python 2.7 on two different architectures/OSes.


I think it might depend on what sort of problem you're trying to solve.

If you have, let's say, a django app, and you want to be able to run it in all sorts of places, Docker is very much the wrong tool; it doesn't run at all in most places, and it's finicky to get working. You're better off just getting that one app to run when and where you want. And if you run into any issues, virtualenv will solve it, no big deal.

If you have a bunch of apps you want to get running (or perhaps a bunch of interlocking pieces of a single stack, or the different elements of a SOA), then Docker suddenly starts to look very attractive. And then you might go to the trouble to get a single gold server image with docker installed and working (or an Ansible playbook, or a Chef cookbook, or a Digitalocean snapshot, or an EC2 AMI, or whatever), and you know you can just spin up a server and deploy any app you want to it. And once you start thinking about testing, CI, orchestration, automatic scaling, etc., it all becomes that much more attractive; you've got these generic docker servers, on the one hand, and these generic docker containers on the other, and you can mix and match them however you like. When you start having more than 1 server and 1 app, that's amazing. Very much worth the cost of entry of having to install docker everywhere...if you need that kind of thing.

You're focusing on portability between operating systems, but that's not the point of docker; as you say, docker itself isn't portable at all (which should be a strong hint that that isn't the problem it solves). But docker containers are portable between servers with docker on them, and with some architectures (or at a certain scale), you will suddenly realise just how useful that is.

If it helps, consider Heroku (and the other PaaS outfits like dotCloud, etc.). A lot of startups outsource big chunks of their infrastructure to Heroku, and Heroku uses a very docker-like architecture. If you were to shift that back in house, in many cases that same architecture makes sense (largely depending on just what you were outsourcing to Heroku...), and sometimes it doesn't. But if it does, docker is probably a core part of any attempt at implementing your own in-house PaaS. And if you need that kind of thing, you aren't going to stop because "well, it doesn't run on OSX"; nobody (well, nearly nobody) is using OSX in production. :)


All of my projects now include a Vagrantfile. It makes managing dependencies and creating repeatable environments simple.

The work flow is:

  $ vagrant up
  $ vagrant ssh
Editing and committing changes take place on the host. Running servers, tests, building Docker containers, etc, takes place on the guest.

You can get a new device ready for hacking on a project in minutes. Just git clone, vagrant up, grab a quick coffee.

I would definitely recommend putting VMs in your work flow.


The dominant workflow in docker-land is to ditch the Vagrantfile, use a Dockerfile instead, and sometimes use Vagrant when it helps you get a VM up and running with docker on it (but that Vagrantfile is typically the same across all projects requiring docker).


I don't get the need for Vagrant? Are you suggesting to use Vagrant solely for those not developing on a host capable of running Docker? If my host _can_ run Docker, what value do I get from running it inside Vagrant instead?


We recommend using vagrant as a VM management tool, to quickly get a simple VM on your machine.

If your host can already run docker containers, you don't need vagrant.


Vagrant is a useful way to very quickly get a Docker capable host. You wouldn't use it for production, no.

For development, if you're running on OS X or Windows (in which case, my condolences), you basically have to use a VM. If developing on Linux, it's a tossup; the complexity and overhead of Vagrant versus the pain and annoyance of fooling around with kernels and dependencies.

I use a Mac for day-to-day development, so a simple Vagrant VM is a no-brainer. :)


I'm building docker images on 64-bit linux (ubuntu) and maintaining a repo of Dockerfiles, instead of uploading to the docker repository.

You need a recent version of the linux kernel that supports Linux Containers. It's best if you can run Ubuntu 13 somewhere.
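
The day-to-day loop is basically just (repo URL and image name hypothetical):

  $ git clone https://github.com/you/dockerfiles.git
  $ cd dockerfiles/nginx
  $ docker build -t you/nginx .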

> Are people running Linux VMs on their Macs to build containers?

FreeBSD supports jails, which are similar to Linux containers in a way, but OS X does not. So unfortunately you're going to have to run a VM; check out vagrant and docker, though. [1]

[1]: http://docs.docker.io/en/latest/installation/vagrant/


OS X has seatbelts [1], which are similar to jails. They are just clunky and not documented. Chrome uses them for sandboxing browser processes [2].

[1] https://developer.apple.com/library/mac/documentation/Darwin...

[2] http://www.chromium.org/developers/design-documents/sandbox/...


No amount of jails or seatbelts is going to allow you to run a Linux executable on a Darwin kernel, though.


I love this idea, and want to try it but I have no experience with Docker (on the todo list).

I wanted to spin up an instance of Sphinx Search but had no idea how to go about doing it.

Maybe creating a set of tutorials would help with this. I can think of two advantages. The first is that customers like myself will love it. Second, similar to Linode and their tutorials, it will drive a lot of traffic and establish your reputation as Docker experts. It will probably build a lot of back-links too, as people link to your tutorials.


Absolutely. Along similar lines, DigitalOcean has done a great job of encouraging the community to write tutorials and articles, and as a result there are tons of resources to get you started with all kinds of ways to use a VPS. Doing the same would be tremendously beneficial for Stackdock.


This is pretty awesome. An api to automate deployments/management/monitoring would completely rock too.


How is private networking handled between Docker containers?

UPDATE: I'd also be interested to hear about Digital Ocean-style "shared" (but non-private) networking—basically, any network adaptor with a non-Internet routable IP address. ;)


Not being familiar with the subject basically it seems that:

Docker is a simple description of an internet server including the various services required (mysql, httpd, sshd, etc. - the bundle being called a deck).

It seems then you can create a server elsewhere (eg on your localhost), generate the docker description of that and use that description to fire up a server (either a VM or dedicated) using the service in the OP.

Am I close?

Could I use this to do general web hosting?

Edit: and looking at digitalocean.com it appears I can activate and deactivate the "server" at will, so I can have it online for an hour for testing and pay < 1¢?


This looks awesome! I currently have an AWS box for the same purpose, running a few of my docker containers. Will this support the ADD directive, or the ability to add custom files (config files) into containers?
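
(For anyone unfamiliar with it: ADD copies a file from the build context into the image at build time, e.g. something like the snippet below, where myapp.conf is a made-up local config file.)

  # Dockerfile snippet
  FROM ubuntu
  ADD myapp.conf /etc/myapp/myapp.conf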


Wonder if they have an idle/spin-up time. Their one-instance plan is only $5, but I know on Heroku I have to buy more than one dyno to avoid idle/spin-up time - that, or use hacks like constant pingers, etc. This is important when I'm doing experiments/UI tests/alpha tests/submitting apps for review before they have any consistent traffic, but I don't want them to occasionally get stuck on 15-second spin-up times on requests.


There are some websites that will ping your heroku instance every few minutes for free. Works great for me.
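
If you'd rather not depend on a third party, a cron entry on any always-on box does the same thing (URL hypothetical):

  # hit the app every 10 minutes so the dyno never idles out
  */10 * * * * curl -s https://myapp.example.com/ > /dev/null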


I'm not sure about the pricing yet as I can run like 5 or 10 docker instances in one DigitalOcean VM costing 5 dollars per month.


Probably the difference is that your Docker instances run on a dedicated server instead, and you have all the setup, preparation and maintenance done for you.


Looks cool. Here's what I'd love to see: built-in git deployment (ie. take a Dockerfile, build an image from it, and then after a push add the latest source code to /app and start new instances), and some kind of orchestration so you could run a number of app containers behind a load balancer container.


Hmm StackDock.com is hosted on a server at Hetzner in Germany.

I don't 100% know if the containers themselves are hosted by Hetzner or not, but Hetzner is more of a budget provider than something you host production sites on.

I've heard many mixed reviews about their network and especially their support, which isn't up to scratch. We'll see what happens, but from what I see, if someone decides to abuse the service, Hetzner might just take down the whole server without warning, just like OVH does.

http://www.hetzner.de/en/hosting/produkte_rootserver/px120ss... (I'm guessing they are using something similar to this). It's a pretty powerful and cheap server, but if you search hard enough you can find something equivalent in the States for around the same price.


Hetzner has surely gone downhill over the years (quality- and price-wise), and support was better 8 years ago, but to say you would not host a production site there is a pretty bold statement.

If you need real HA you should perhaps use more than one provider anyway. Or what are your recommendations?


Of course, with the prices these guys are charging, they are certainly going with a budget host.

Since Docker is still in beta, it's not production ready yet anyways. Docker could still go through a lot of changes between now and 1.0.

ETA: Whoops, I got the pricing wrong. It's $5 per instance. I was thinking you would get 1GB of RAM and 20GB of space to run as many containers as you like. That makes it not as cheap as I was originally thinking.


I'd actually prefer Germany, since data privacy is the law there, not spying.


Is the pricing for 1 dockerfile or unlimited dockerfiles?


It's per instance - so you can have unlimited docker files; you only pay when you create an instance from one.


I love the idea! Really. I just don't like all the UX yet. Some things feel... off. It might be something personal, I'm not sure, but I guess it's interesting to discuss. "Drops are distilled Decks" - the words feel semantically mismatched for some reason. If I think "Deck" I don't think "config". If I think "Drop" I don't think "deployable stuff", and I don't see how a "distilled Deck" is a "Drop". Also, it feels odd that I can create a "New deck" in the "Instances" section.

though adding "cards" to a "deck" sounds intuitive.

I'm trying to come up with better terminology. Something with ships and containers...


One thing that surprised me...

When I created a Deck (default Sinatra Hello World) and converted it to a Drop, it did just that: it removed the Deck and created a Drop.

I guess I thought it would keep the Deck so that I could see the configuration that I had chosen to create it. Is this a Docker thing where, once you've created it, you don't see the config any longer? I don't think it is but I've not honestly played with Docker yet. $5 a month is a low ask for me to try it out.

Also, when it comes time to pay for a Deck/Drop and you don't have credit card info saved, it forwards you to that page... but, after entering the info, you're not put back into the process. You're dumped back into the Deck page. That seemed odd to me... wasn't sure if it had been converted or not.

I wish the word 'manifest' wasn't used in so many contexts because, if you're going to stick with the container shipping analogy, it would have made more sense to have Manifests, Containers and Ships. That's just me though... who knows. ;)

All in all, cool service. Look forward to playing around with it this weekend.

EDIT: I see that you can create a copy of the Deck that created a Drop... still seems odd that the default behavior is to blow it away upon creation of a Drop.


Appreciate the feedback - thanks - point taken and we'll fix this.


IMO Labels/tooltips should be added to the icons for the cards. Some of them, including the leaf (nodejs?) and the tree (nfi what that is) aren't especially obvious.

Otherwise, cool!


And when I click on one, the checkmark doesn't disappear until I unhover the mouse.


Very cool, and I was waiting for something like this to be built out. Are you planning on having a command line tool to control your deployments?


I use a similar service called Orchard that has a "heroku-like" command line wrapper around the docker client. It's quite nice https://github.com/orchardup/orchard-client


I started the default instance with sinatra running, but where do you see the IP address to visit it via a web browser?


Just signed up but the site now appears to be down, receiving "We're sorry, but something went wrong."


Hacker News traffic spike! You can sign up and create a Dockerfile - we've just paused instance deployment for a couple of hours while we add more servers. Sorry for the inconvenience.


You should do some A/B tests to confirm, but I bet the pricing table at the bottom was a little confusing because the price was not highlighted in any way, and the call to action was round when it is typically a rectangle.


Looks awesome! Anyone know if there are bandwidth / throughput / transfer charges?

Also, forgive my ignorance, but what would it take to be able to "add containers" in the same way that you can add dynos on Heroku?


The issue with linux containers is (or at least it used to be) that it is possible for a malicious user to 'break out' of the container. Has this problem been solved?


Nice. Looking forward to seeing how this and all the other Docker-based PaaS ecosystems like Flynn, Deis, Tsuru, Shipbuilder, CoreOS, etc. pan out.


I agree this looks very cool. As far as http://deis.io/ is concerned, we're focused more on the "operate your own PaaS" capability, whereas this seems to be a pure hosted service -- which is great for lots of use cases.

Best of luck guys!


Can I use a docker image I have already created?


Is this production-ready and trusted? Who are these guys? I don't want my apps to be hosted on a quick hack.


This looks like an awesome service. And the image on the site reminds me of Season 2 of The Wire - even better!


Q: Do people have root on the containers?


Yes.


Cool service but your branding makes it look like you are affiliated with Canonical/Ubuntu.


Excellent! Will play around with it soon. Thanks for offering this, and best wishes.


I get a 500 error when logging in, am I the only one?


Like the idea, but would like to see hourly billing.


Thanks for the suggestion - we are looking at more usage based billing - including per CPU cycle / RAM usage to be a 'true' utility.


This is fantastic!

Does anyone know where DO servers are located?


> DigitalOcean currently has data centers in San Francisco, New York City, and Amsterdam.

From here: http://www.enterprisenetworkingplanet.com/datacenter/digital...


Thanks! This is just what I was looking for.


Clicking on alpha/deploy leads to 404 :(


where are the servers hosted? AWS? US or EU?


Sounds great!




